Our customers have a wide range of compute needs, from temporary batch processing to high-scale web workloads. Google Cloud Platform provides a resilient compute platform for workloads of all sizes, giving our customers both scale-out and scale-up capabilities.
Today we are making two scaling capabilities available to all customers.
Announcing General Availability of Google Compute Engine Autoscaler
From startups to established enterprises, it’s important to preserve a great user experience when responding to spiky traffic - whether caused by sudden popularity, a flash sale, or a change in user behavior. But too often, scaling your services to handle variable load with spikes of millions of requests per second is a complex process. Autoscaler makes this simpler.
With Google Compute Engine Autoscaler you’re able to dynamically scale the number of instances in response to load conditions. Simply define the ideal utilization of your group of compute instances, and Autoscaler will add instances when needed and remove them when traffic is low. This saves you money and headaches since you don’t have to buy and hold spare capacity. Furthermore, Autoscaler can scale from zero to millions of requests per second in minutes without the need to pre-warm.
Today Autoscaler is generally available, along with the managed infrastructure engine underneath it - Managed Instance Groups.
Autoscaler removes complexity and lets you stop worrying about capacity planning and traffic monitoring, so that you can focus on what’s most important - your business. See our tutorial video to learn more about how to scale on Google Compute Engine. To read more about Autoscaler and to provide feedback about the feature, see the documentation.
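To give you a concrete idea of what this looks like in practice, here is a minimal sketch using the gcloud command line (the template name, group name, and target values below are hypothetical placeholders, and flags can vary slightly between gcloud releases):

  # Create a managed instance group from an existing instance template.
  gcloud compute instance-groups managed create my-frontend-group \
      --zone us-central1-a \
      --template my-frontend-template \
      --size 2

  # Enable autoscaling: keep average CPU utilization around 60%,
  # scaling between 2 and 50 instances as load changes.
  gcloud compute instance-groups managed set-autoscaling my-frontend-group \
      --zone us-central1-a \
      --min-num-replicas 2 \
      --max-num-replicas 50 \
      --target-cpu-utilization 0.60 \
      --cool-down-period 90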

 
Announcing General Availability of 32-core VMs
If you’re doing large-scale compute and storage-intensive work such as graphics rendering, you may benefit from bigger compute instances. During our beta, 32-core VMs have proven very popular with customers running many different workloads, including visual effects rendering, video transcoding, large MySQL and Postgres instances, and more.
Today 32-core VMs are generally available for three machine types:
  • Standard: 32 virtual CPUs and 120 GB of memory
  • High-memory: 32 virtual CPUs and 208 GB of memory
  • High-CPU: 32 virtual CPUs and 28.8 GB of memory
And we are not stopping there! If your application or workload needs even beefier VMs, we'd love to hear more about your requirements.
Google Cloud Platform provides a complete set of compute capabilities, from PaaS (App Engine) to Containers (Container Engine) to Virtual Machines (Compute Engine) at the best price:performance ratio currently available. You can take us for a spin with a Free Trial today!


- Posted by Jerzy Foryciarz and Scott Van Woudenberg, Google Compute Engine Product Managers

The HTTPS protocol helps to protect the privacy and integrity of web interactions, so it’s not surprising that many Google Cloud Platform customers want to use it as extensively as Google does. However, there's a cost to establishing an HTTPS connection: setting up the underlying Transport Layer Security (TLS) session on which HTTPS is layered requires an exchange of X.509 certificates and cryptographic operations, which can be time-consuming.

It's common to use REST to handle communications between client apps and Google App Engine apps. Naturally, you'd want to batch multiple REST requests into a single HTTPS connection to improve overall performance. Establishing the connection only once for many requests would save you the overhead of setting up connections repeatedly.

Unfortunately, you can’t just use HTTP keep-alive headers to batch requests, because App Engine controls several HTTP headers in incoming requests and removes the Keep-Alive header. This means you don't get a persistent connection, so you can't batch multiple requests on it.

However, there is a better solution. Good old HTTP is not the only game in town anymore. The experimental but widely implemented SPDY protocol, in addition to its optimization and server-push features, runs over TLS and keeps connections persistent. In fact, with SPDY, all connections are persistent. Even better, the new HTTP/2 protocol builds on and further improves the lessons learned from SPDY, including the features that make batching secure requests a breeze. App Engine automatically uses HTTP/2 or SPDY for all HTTPS traffic, as long as the client also supports either protocol. Fortunately, the latest versions of most browsers support one or both of them.

As a mobile-app developer, to take advantage of HTTP/2 or SPDY when batching HTTPS REST requests to your App Engine app, you may need to build your app with a library that supports one or both of these protocols. For Android, you can try Square’s OkHttp library.
On iOS, SPDY is enabled by default, so you shouldn't need to make any changes in your app. If you run into trouble, Twitter’s CocoaSPDY library is still an available and popular option.
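Here's a minimal Java sketch of the Android side using OkHttp (the App Engine URL and REST paths are hypothetical placeholders, and the package names assume OkHttp 2.x). The key point is that a single OkHttpClient pools connections, so consecutive HTTPS requests to the same host ride on one HTTP/2 or SPDY connection instead of paying the TLS setup cost each time:

  import java.io.IOException;
  import com.squareup.okhttp.OkHttpClient;
  import com.squareup.okhttp.Request;
  import com.squareup.okhttp.Response;

  public class BatchedRequests {
    // One shared client: OkHttp reuses the underlying HTTP/2 or SPDY
    // connection for requests to the same host.
    private static final OkHttpClient client = new OkHttpClient();

    public static void main(String[] args) throws IOException {
      // Hypothetical REST endpoints on an App Engine app.
      String[] paths = {"/api/items/1", "/api/items/2", "/api/items/3"};
      for (String path : paths) {
        Request request = new Request.Builder()
            .url("https://your-app-id.appspot.com" + path)
            .build();
        Response response = client.newCall(request).execute();
        // response.protocol() reports the negotiated protocol,
        // e.g. HTTP_2 or SPDY_3 when the server supports it.
        System.out.println(response.protocol() + " " + response.code());
        response.body().close();
      }
    }
  }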

SPDY and HTTP/2 are compatible with HTTP(S), so it's easy to use them. In fact, if you have a web application running on App Engine, you're probably already using them. Find out by using SPDY tools, such as the magic Chrome URL about:net-internals/#events.

- Posted by Alex Amies, Technical Account Manager and Alex Martelli, Technical Solutions Engineer

We’ve had a lot of great responses and feedback (keep ‘em coming!) about our cloud pricing posts (Local SSDs, Virtual Machines, Data Warehouses) and today we’re back to talk about running NoSQL databases in the cloud. Specifically, we want to give you the information you need to understand how to estimate the cost of running NoSQL workloads on Google Cloud Platform.

NoSQL Databases
The NoSQL database market has experienced massive growth over the last few years, and NoSQL databases have been instrumental in solving many distributed data and scaling challenges, opening the door for new and innovative applications and solutions. “NoSQL” is an umbrella term for any data store that fits the notion of “not only SQL.” Many products offer a high degree of tunability around the standard relational database guarantees of atomicity, consistency, isolation, and durability (see ACID for more information) and the distributed-systems properties of consistency, availability, and partition tolerance (see the CAP theorem for more information). And every NoSQL database offers something different when it comes to how data is modeled and stored, including, but not limited to, JSON documents, key-value pairs, wide columns, and blobs.

As expected, there are several self-managed options available, such as MongoDB, Apache Cassandra, Riak, Apache CouchDB, Couchbase, and many more. Today we’re going to focus on how to estimate pricing when running MongoDB. MongoDB is a document-based, highly scalable NoSQL database that provides dynamic JSON schemas along with a powerful query language. There are a variety of use cases for MongoDB, such as a 360-degree view of the customer, real-time analytics, Internet of Things applications, and content management, to name a few.

While looking at the pricing data for MongoDB, we noticed something interesting. We had planned a separate blog post about pricing Cassandra on Google Cloud Platform as well, but the hardware requirements (virtual or real) are very similar and neither database requires a license purchase, so the costs come out nearly the same. It didn’t make sense to publish another post saying more or less the same thing with only the database name changed, so we’re covering Cassandra here as well.

Cassandra, unlike MongoDB, is a wide-column store. It was written at Facebook, with much of the data model inspired by Google's Bigtable paper and the availability design inspired by Amazon's Dynamo paper. Cassandra was designed for high availability, performance, and tunable consistency. It has no leader or master node; instead, all the nodes in a cluster form a ring, and data is replicated a configurable number of times. Availability comes from that leaderless cluster storing multiple copies of your data; tunable consistency comes from choosing how many replicas must acknowledge a read or write before it is considered successful. Cassandra and MongoDB are two of the NoSQL databases we most often see our customers running.

Starting Point
So how do you estimate pricing given multiple use cases and different possible query and traffic patterns? To get started with MongoDB, we’re going to narrow the scope a bit and estimate the costs of the resources used in existing benchmarks. Several benchmarks of MongoDB performance have been published, and we’ll focus on two of them: one published by MongoDB and another from United Software Associates. Both benchmarks reach roughly the same throughput and latency conclusions, so this is a reasonable model to build upon.

While the benchmarks from United Software Associates used a single MongoDB node for testing, the benchmarks published by MongoDB used a 3-node replica set. Replica sets are a redundant, highly available deployment of MongoDB, and they are strongly recommended (at a minimum) for all production workloads. The smallest possible replica set consists of three nodes, each configured with matching specifications, so we’ll include that configuration in our pricing breakdown below. The on-premises reference hardware specs used in the benchmarks were as follows (MongoDB, like most databases, tends to favor more RAM and storage IOPS where possible):

Benchmark                             MongoDB                       United Software Associates
CPU                                   Dual 10-core Xeon 3.0 GHz     Dual 6-core Xeon 3.06 GHz
RAM                                   128 GB                        96 GB
Storage                               2 x 960 GB SSD                2 x 960 GB SSD
Monthly Price (single node)           $1,525.00* (estimate)         Unavailable**
Monthly Price (3-node replica set)    $4,575.00* (estimate)         Unavailable**

Now, if we map that back to Google Compute Engine instances and storage offerings, we get the following two closely matching configurations, along with pricing:

Instance Type                               n1-highmem-16           n1-standard-32
CPU                                         16 Xeon vCPU            32 Xeon vCPU
RAM                                         104 GB                  120 GB
Storage                                     4 x 375 GB Local SSD    4 x 375 GB Local SSD
Monthly Price (single node)                 $843.60                 $1,146.10
Monthly Price (3-node replica set)          $2,530.76 (estimate)    $3,438.30 (estimate)
Monthly Price Difference vs. On-Premises    44%                     24%
Annual Savings vs. On-Premises              $24,530.88              $13,640.40

The cost breakdown above shows the pricing for a single node and for a 3-node replica set, which, as stated above, is a typical production deployment of MongoDB. We selected Local SSD for the storage layer in order to support the IOPS required for the throughput achieved in the benchmark reports. As shown in this disk type comparison, Local SSD can support up to 280,000 write IOPS per instance. Keep in mind that Local SSD is ephemeral storage, meaning its lifecycle is tied to the virtual machine to which it is mounted, which is another reason we chose to estimate pricing for the highly available 3-node replica set option. Finally, the prices shown above include Google Cloud Platform sustained use discounts, which amount to roughly a 30% discount over the course of a full month.
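To make the arithmetic behind the savings figures explicit, here's a small Java sketch using the prices from the tables above (just illustrative math, not an official pricing tool; minor differences come from rounding):

  public class NoSqlPricingEstimate {
    public static void main(String[] args) {
      // Monthly prices for 3-node replica sets, taken from the tables above.
      double onPrem = 4575.00;      // on-premises reference configuration
      double highmem16 = 2530.76;   // 3 x n1-highmem-16 with 4 x 375 GB Local SSD
      double standard32 = 3438.30;  // 3 x n1-standard-32 with 4 x 375 GB Local SSD

      // Annual savings versus the on-premises reference configuration.
      System.out.printf("n1-highmem-16:  $%,.2f per year%n", 12 * (onPrem - highmem16));   // $24,530.88
      System.out.printf("n1-standard-32: $%,.2f per year%n", 12 * (onPrem - standard32));  // $13,640.40
    }
  }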

The pricing for Cassandra is very similar to MongoDB's. Both benefit from Local SSD in terms of performance, and the trade-off between more memory (n1-highmem-16) and more compute (n1-standard-32) is the kind of choice DBAs will have to make when designing a typical Cassandra cluster. Of course, this is just guidance to get you started on pricing; you won't know what's best for your application until you actually run tests yourself.

Running Your Own Tests
As with any benchmarks, your mileage may vary when testing your particular workloads. Isolated tests run during benchmarks don’t always equate to real-world performance, so it is important that you run your own tests and assess read-write performance for a workload that closely matches your usage. Take a look at PerfKit and use it to profile your own proposed deployments, including mixing and matching workloads or worker counts.
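For example, here's a hypothetical PerfKit Benchmarker invocation comparing the two instance shapes above (benchmark and flag names may differ depending on the PerfKit Benchmarker release you check out):

  # Run a YCSB-driven MongoDB benchmark on Google Compute Engine,
  # once for each candidate machine type.
  ./pkb.py --cloud=GCP --benchmarks=mongodb_ycsb --machine_type=n1-highmem-16
  ./pkb.py --cloud=GCP --benchmarks=mongodb_ycsb --machine_type=n1-standard-32

  # A similar benchmark exists for Cassandra (cassandra_ycsb).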

Pricing NoSQL workloads can be somewhat challenging, but hopefully we’ve given you a way to get started in estimating your costs. If you’re interested in learning more about compute and storage on Google Cloud Platform, check out Google Compute Engine or take a look at the documentation. Feedback is always welcome, so if you’ve got comments or questions, don’t hesitate to let us know in the comments.

We’ve gotten a lot of great feedback about this post, and we wanted to let you know that we will also be posting about cloud pricing for Google Cloud Platform's managed NoSQL options in the near future. In forthcoming blog posts, we’ll talk about how to understand the pricing around Google Cloud Bigtable and Google Cloud Datastore and compare those to other popular managed offerings. Thanks for the questions and comments, keep ‘em coming!

- Posted by Sandeep Parikh and Peter-Mark Verwoerd, Solutions Architects

* - Price was taken from a configure-to-order bare metal server at SoftLayer
** - Configuration was unavailable to estimate the monthly price

Earlier this year, we teamed up with VMware to offer enterprise-grade Google Cloud Platform services to VMware customers through VMware vCloud Air. Today we are excited to announce that the vCloud Air Object Storage Service, powered by Google Cloud Platform, is generally available to all customers.

With the availability of Google Cloud Storage through vCloud Air, VMware customers have access to a durable and highly available object storage service. Google Cloud Storage enables enterprises to store data on Google's infrastructure with very high reliability, performance, and availability. It provides a simple HTTP-based API accessible from applications written in any modern programming language, and it lets customers take advantage of Google's own reliable and fast networking infrastructure to perform data operations in a cost-effective manner. When you need to expand, you benefit from the scalability provided by Google's infrastructure.

VMware customers will have access to all three classes of object storage offered by Google:
  1. Standard storage offers our highest-performance storage, with very high availability.
  2. Durable Reduced Availability (DRA) storage provides a lower-cost option for data that doesn’t require immediate or uninterrupted access. The cost savings come from keeping fewer replicas, while DRA storage offers the same durability as Standard storage.
  3. Nearline storage, our newest storage service, offers a simple, low-cost, fast-response option for backups and other infrequently accessed data, with storage priced at 1 cent per GB per month.
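As a quick illustration of how these classes are used (a simple sketch assuming you work with the underlying Cloud Storage buckets directly through the gsutil tool; the bucket names are placeholders, and the workflow through vCloud Air itself may differ), the storage class is simply a property of the bucket you create:

  # Create a bucket in each storage class (names are placeholders).
  gsutil mb -c standard gs://my-vmware-standard-bucket
  gsutil mb -c dra gs://my-vmware-dra-bucket
  gsutil mb -c nearline gs://my-vmware-backup-bucket

  # Uploads and downloads look the same regardless of storage class.
  gsutil cp backup-2015-09-01.tar.gz gs://my-vmware-backup-bucket/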

Today’s announcement marks the launch of the first of many Google Cloud Platform services that will be offered to VMware customers through vCloud Air. We’re excited to extend Google Cloud Platform to the VMware vCloud Air customer base.

To learn more, contact your VMware sales team or Google Cloud Platform Sales.

- Posted by Adam Massey - Director, Global Partner Business