Today’s guest blog comes from Dale Thoms, co-founder and CTO of Backflip Studios, the mobile game development studio. The company is behind popular titles such as DragonVale, Paper Toss and Ninjump, and has grown from three to over 100 employees in the past five years.

In the fast-changing mobile games market, speed of development is critical. We need to launch games quickly, but more importantly, we need to release frequent updates to existing games so that players keep coming back. When we started the company in 2009, most mobile games had little or no server infrastructure behind them. But over time, games have grown to include frequent content updates, cross-device play, community events, player communication via ads and push notifications, and sophisticated data analysis. Now it is crucial to have a server infrastructure that can handle all of that and more.

Google Cloud Platform gives us the peace of mind that comes from not having to worry about setting up and managing servers, or having a dedicated server engineer to ensure systems never go down. We wish downtime and latency issues didn't exist, but when they do occur, it's comforting to know Google will take care of them. We started by building our games’ server components on Google App Engine, but now our code uses other elements of Cloud Platform as well, namely Google BigQuery and Google Cloud Storage.

Autoscaling is critical to our business because we can't predict when our games will be featured on an app store or review site, and we wind up with a giant influx of new users. What we can predict is that whenever we push a new update to a game like DragonVale, users come back to the game in droves, doubling or even tripling normal traffic volume over the span of a few minutes. With App Engine in the background, we’ve been able to scale smoothly to meet every spike in demand. Best of all, we only pay for the capacity our application uses.

We’re also very data-driven and frequently do analysis against data we’ve collected. Our games live in App Engine and Datastore, where a player’s game state (player level, dragons owned, placement of items on islands, etc.) is stored in a format optimized for use by our game engine. In order for our marketing and analytics teams to make use of the data, we need to pull it into a system that they can run queries against.

Initially, we pulled data into an in-house SQL database, but when BigQuery became available, we switched because it imported data and ran queries many times faster than the previous SQL database. We now pull data out of Datastore and use MapReduce to transform it from the game-engine-optimized format into a more traditional database form. The analytics team can then run queries in BigQuery to analyze what players are doing. With these insights, we can identify where players are struggling and what needs improvement, such as which new features and content to offer and how to better retain players, to keep them coming back.
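The Datastore-to-BigQuery pipeline described above can be sketched with a toy map step. Everything here is illustrative: the entity fields and output columns are hypothetical stand-ins, not Backflip's actual game-state format.

```python
def flatten_player_entity(entity):
    """Map step: turn one nested, game-engine-optimized entity into
    flat, query-friendly rows (one row per owned dragon)."""
    rows = []
    for dragon in entity.get("dragons", []):
        rows.append({
            "player_id": entity["player_id"],
            "player_level": entity["level"],
            "dragon_type": dragon["type"],
            "dragon_level": dragon["level"],
        })
    return rows

# A sample entity in the (hypothetical) game-engine format.
entity = {
    "player_id": "p42",
    "level": 17,
    "dragons": [
        {"type": "fire", "level": 5},
        {"type": "plant", "level": 3},
    ],
}
rows = flatten_player_entity(entity)
```

In a real MapReduce job this function would run once per Datastore entity, and the flattened rows would be written out for import into BigQuery.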

We evaluated other vendors’ cloud-based solutions, but we would have had to build additional services to get all the functionality we needed. In comparison, Cloud Platform took on much of that burden, freeing our developers to focus on actual game development, and the ramp-up on App Engine was fast thanks to its simple architecture. Our engineering department was pleasantly surprised that all they needed to do was write the application logic; the database, server and scaling components were taken care of for them.

The initial team creating DragonVale was fairly small, and we had only one developer building most of the server backend for the game. Yet it took only six months from start to finish. Without Cloud Platform, we would likely have needed a larger team to work on the database and other components, which would have extended the development cycle.

Each game rollout is easier and faster than the previous one as we get better on Cloud Platform. We have several new games coming out this summer, all of which utilize Cloud Platform. We couldn’t move at anything close to this pace without the services provided by Cloud Platform, and thanks to Google, we can make games faster and sleep better at night knowing our infrastructure is in good hands.

-Contributed by Dale Thoms, co-founder and CTO, Backflip Studios

Today's guest blog post comes from Ryan Coleman, Puppet Forge Product Owner at Puppet Labs. This is the second in a series of guest blog posts following the publication of Compute Engine Management with Puppet, Chef, Salt, and Ansible.

At Puppet Labs, we're all about enabling people to enact change quickly, predictably and consistently. It's at the core of everything we do, and one of the reasons we moved our Puppet Forge service to Google Compute Engine. Google Compute Engine immediately halved our service's response time on comparable instances and offers a lot of flexibility in how we deploy and manage our instances. Much of that flexibility comes from their gcutil command-line utility and their REST API.

As of Puppet Enterprise 3.1, we've used these tools to provide native support for Google Compute Engine. The gce_compute module, available on the Puppet Forge, provides everything you need to manage compute instances, disk storage, networks and load balancers in Google Compute Engine with Puppet's declarative DSL. In this post, we'll run through a few examples of what you can do with it.

Here's a really simple example of what an instance looks like in Puppet's language, along with a local application of Puppet to manage the instance. Puppet is easy to install, so it's easy to follow along and create your own running Compute Engine instance. Simply save each example to a file, prepare gcutil by running `gcloud auth login`, and then run `puppet apply` against the example file.

# Compute Engine.pp
gce_instance { 'ryan-compute':
   ensure       => present,
   machine_type => 'n1-standard-1',
   zone         => 'us-central1-a',
   network      => 'default',
   image        => 'projects/centos-cloud/global/images/centos-6-v20131120',
}

ryan:gce ryan$ puppet apply "Compute Engine.pp"
Notice: /Stage[main]//Gce_instance[ryan-compute]/ensure: created

With this simple example, I have described the instance I want in Compute Engine. I can share it with my co-workers, who can treat it as documentation or use it in their own Google Cloud project to get an instance built just like mine. This concept becomes more useful the more complex your infrastructure is.

Here's an example much closer to the real world. It expresses two instances configured by Puppet to be proof-of-concept HTTP servers, complete with a Compute Engine load balancer and health checks.

gce_instance { ['web1', 'web2']:
   ensure       => present,
   description  => 'web server',
   machine_type => 'n1-standard-1',
   zone         => 'us-central1-a',
   network      => 'default',
   image        => 'projects/centos-cloud/global/images/centos-6-v20131120',
   tags         => ['web'],
   modules      => ['puppetlabs-apache', 'puppetlabs-stdlib',
                    'puppetlabs-concat', 'puppetlabs-firewall'],
   manifest     => 'include apache
   firewall { "100 allow http access on host":
       port   => 80,
       proto  => tcp,
       action => accept,
   }',
}

gce_firewall { 'allow-http':
   ensure      => present,
   network     => 'default',
   description => 'allows incoming HTTP connections',
   allowed     => 'tcp:80',
}

gce_httphealthcheck { 'basic-http':
   ensure      => present,
   require     => Gce_instance['web1', 'web2'],
   description => 'basic http health check',
}

gce_targetpool { 'web-pool':
   ensure        => present,
   require       => Gce_httphealthcheck['basic-http'],
   health_checks => 'basic-http',
   instances     => 'us-central1-a/web1,us-central1-a/web2',
   region        => 'us-central1',
}

gce_forwardingrule { 'web-lb':
   ensure       => present,
   description  => 'Forward HTTP to web instances',
   port_range   => '80',
   region       => 'us-central1',
   target       => 'web-pool',
   require      => Gce_targetpool['web-pool'],
}

With Puppet Enterprise and Google Compute Engine, it becomes fairly simple to build and continuously manage complex services from the storage/network/compute resources in Google Compute Engine through operating system configuration and application management. Another cool feature is the relationship graph that Puppet automatically generates from the requirements you express. You can use this as a tool to communicate with your team on how your compute instances relate to each other or to express the dependencies in your application.
[Image: the relationship graph Puppet generates for these resources]

These examples demonstrate how to apply Puppet configuration directly in the gce_instance resource, but it's more practical in production to manage the configuration of your entire infrastructure through a Puppet Enterprise master and its agents. If you want to run yours in Compute Engine or just try it out, the gce_compute module makes it simple to bring up a fully-functional Puppet Enterprise Master and Console.

gce_instance { 'puppet-enterprise-master':
   ensure        => present,
   description   => 'An evaluation Puppet Enterprise Master and Console',
   machine_type  => 'n1-standard-1',
   zone          => 'us-central1-a',
   network       => 'default',
   image         => 'projects/centos-cloud/global/images/centos-6-v20131120',
   tags          => ['puppet', 'master'],
   startupscript => '',
   metadata      => {
      'pe_role'         => 'master',
      'pe_version'      => '3.2.0',
      'pe_consoleadmin' => '',
      'pe_consolepwd'   => 'puppetize',
   },
   block_for_startup_script => true,
}

gce_instance { 'agent1':
   ensure        => present,
   zone          => 'us-central1-a',
   machine_type  => 'f1-micro',
   network       => 'default',
   image         => 'projects/centos-cloud/global/images/centos-6-v20131120',
   startupscript => '',
   metadata      => {
      'pe_role'    => 'agent',
      'pe_master'  => 'puppet-enterprise-master',
      'pe_version' => '3.2.0',
   },
   tags          => ['puppet', 'agent'],
   require       => Gce_instance['puppet-enterprise-master'],
}

This example will bring up a single master and agent, in sequence. The Puppet Enterprise master installation may take a few minutes. When it's finished, you can browse over HTTPS to its external IP address and log in to the Puppet Enterprise Console. Once you have Puppet Enterprise installed, you also have access to our `node_gce` cloud provisioner, which offers another way to manage Google Compute Engine instances with Puppet.

From base compute, storage and networking all the way up to a consistently managed application serving your customers, Google Compute Engine and Puppet Enterprise offer a readable, reusable and shareable definition of how your cloud infrastructure is built and interrelated.


Contributed by Ryan Coleman, Puppet Forge Product Owner


Last year, we rolled out support for deploying your Google App Engine application using git, giving you an easy-to-use mechanism for deploying your application on every push to your cloud repository’s master branch.

Today, we’re happy to extend support for this feature to repositories hosted on GitHub. By connecting your App Engine project to your GitHub repository, you can trigger a deployment by pushing to the project’s master branch on GitHub.

Let’s walk through an example.

Prerequisites: If you don’t have the git tool installed, get it here.

Connecting the repository

  1. Go to the Google Developers Console and create a project or click on an existing project that you wish to sync with GitHub.
  2. Click Cloud Development and then Releases in the left-hand navigation panel.
  3. The next step is to link your project’s repository to GitHub. On the Configuration tab, click Connect a GitHub repo.
  4. Enter the GitHub repository URL in the dialog box that appears. This is the same URL that you open in your web browser when you are viewing the repository on the GitHub site.
  5. Read and accept the consent option in the dialog box and click Connect.
  6. Authorize access to your repository in the GitHub page that opens.
  7. The GitHub repository now appears on the Releases page and is all set up for Python and PHP development.
  8. If you are setting up this feature for use with a Java application, select the Java: Maven Build, Unit Test, and Deploy option in the Release Type field.
  9. Now, every time you push to your project’s master branch on GitHub using `git push origin master`, the source code will be deployed to App Engine. You can click the Release History tab to see the status of the current deployment.

This feature makes it easier than ever to deploy your App Engine application hosted on GitHub!

- Posted by Weston Hutchins, Product Manager

Our guest post today comes from Olivier Devaux, co-founder of feedly, a reading app founded in 2008 in Palo Alto. feedly offers a free version as well as a Pro version that includes power search and integrations with other popular applications, including Evernote, LinkedIn and Hootsuite.

With over 15 million users, feedly is one of the most popular apps for purposeful reading in the world. People can tailor their feedly accounts to serve up their favorite collection of blogs, web sites, magazines, journals and more. Our goal is to deliver to readers the content that matters to them. Over the past year, we have focused on making feedly the reading app of choice for professionals.

For our first few years, we had around four million users, and we hosted all of the content we aggregated on our own servers. We ran a small instance of Google App Engine to extract picture URLs within articles.

In the middle of last year, our servers were overwhelmed with hundreds of thousands of new signups, and we experienced our first service outage. The first thing we did was move all of our static content to App Engine. Within an hour we were up and running again with 10 times the capacity we had before. This turned out to be a good thing – we added millions more users over the next few months and more than doubled in size.

It’s been almost a year since that day, and we’ve greatly expanded our service with Google Cloud Platform. We now use App Engine as a dynamic content delivery network (CDN) for all static content in feedly, as well as to serve formatted images displayed in the app or on the desktop.

A fast response time is even more important on mobile, and App Engine helps us load images immediately so that there’s no lag when users scroll through their feeds. As a feedly user scrolls through content, the app sends App Engine information in the background about what articles are coming next. App Engine then fetches images from the article page on the Web, determines the best image, stores it in Cloud Storage and receives a serving URL from the Image service. For users, this leads to a seamless scrolling experience.
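The "determines the best image" step above can be sketched with a simple heuristic. This is an illustrative guess at the kind of logic involved, not feedly's actual implementation; the size thresholds and the largest-by-area rule are assumptions.

```python
def pick_best_image(candidates, min_width=200, min_height=200):
    """Pick the 'best' image from an article page: filter out tiny
    images (icons, tracking pixels), then return the largest
    remaining candidate by pixel area. Returns None if nothing
    qualifies."""
    usable = [c for c in candidates
              if c["width"] >= min_width and c["height"] >= min_height]
    if not usable:
        return None
    return max(usable, key=lambda c: c["width"] * c["height"])

images = [
    {"url": "icon.png", "width": 16, "height": 16},
    {"url": "hero.jpg", "width": 1200, "height": 800},
    {"url": "thumb.jpg", "width": 300, "height": 200},
]
best = pick_best_image(images)  # the 1200x800 hero image wins
```

In the pipeline described above, the winning image would then be written to Cloud Storage and served via the Image service's serving URL.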

To optimize the feedly user experience, we make heavy use of the Memcache API, App Engine Modules and the Task Queue API. Combined, these services let us cut the response time for user requests in the app down to milliseconds.
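The caching pattern behind that speedup is the classic cache-aside read path: check the cache first, fall back to the slow backing store on a miss, then populate the cache. A minimal sketch (with a plain dict standing in for memcache, and a hypothetical `backing_fetch` callable standing in for the slow fetch):

```python
class CacheAside:
    """Cache-aside reads: serve from the in-memory cache when
    possible, and only hit the slow backing store on a miss."""

    def __init__(self, backing_fetch):
        self.cache = {}              # stand-in for memcache
        self.backing_fetch = backing_fetch
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1           # fast path: cached value
            return self.cache[key]
        self.misses += 1             # slow path: fetch and populate
        value = self.backing_fetch(key)
        self.cache[key] = value
        return value

fetched = []
def slow_fetch(key):
    fetched.append(key)              # record each expensive fetch
    return key.upper()

cache = CacheAside(slow_fetch)
cache.get("dragonvale")              # miss: hits the backing store
cache.get("dragonvale")              # hit: served from memory
```

A real deployment would also set expirations and handle cache evictions, which this sketch omits.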

As an engineer, one of my favorite things about App Engine is that it generates detailed usage reports so we can see the exact cost of our code, like CPU usage or the amount we’ve spent to date, and continue to optimize our performance.

We learned the hard way what happens when you don’t prepare for the unexpected. But this turned out to be a blessing in disguise, because it prompted us to move to Cloud Platform, and expand and improve our service. App Engine has taken pressure off our small team and allowed us to focus on building the best reading experience for our users. With Google’s infrastructure on the backend, today we only need to worry about pushing code.

- Posted by Olivier Devaux, co-founder of feedly

Today, we are making it easier for you to run Hadoop jobs directly against your data in Google BigQuery and Google Cloud Datastore with the Preview release of Google BigQuery connector and Google Cloud Datastore connector for Hadoop. The Google BigQuery and Google Cloud Datastore connectors implement Hadoop’s InputFormat and OutputFormat interfaces for accessing data. These two connectors complement the existing Google Cloud Storage connector for Hadoop, which implements the Hadoop Distributed File System interface for accessing data in Google Cloud Storage.

The connectors can be automatically installed and configured when deploying your Hadoop cluster using bdutil simply by including the extra “env” files:
  • ./bdutil deploy
  • ./bdutil deploy
  • ./bdutil deploy

Diagram of Hadoop on Google Cloud Platform

These three connectors allow you to directly access data stored in Google Cloud Platform’s storage services from Hadoop and other Big Data open source software that uses Hadoop's IO abstractions. As a result, your valuable data is available simultaneously to multiple Big Data clusters and other services, without duplication. This should dramatically simplify the operational model for your Big Data processing on Google Cloud Platform.

Here are some word-count MapReduce code samples to get you started:
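As a minimal illustration of the word-count pattern (a pure-Python sketch, with Hadoop's shuffle-and-sort phase simulated in memory rather than run on a cluster):

```python
from collections import defaultdict

def mapper(line):
    """Map phase: emit a (word, 1) pair for every word in a line."""
    return [(word.lower(), 1) for word in line.split()]

def reducer(pairs):
    """Reduce phase: sum the counts for each word. In a real Hadoop
    job the framework groups pairs by key between map and reduce;
    here we group them in memory."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog"]
pairs = [pair for line in lines for pair in mapper(line)]
counts = reducer(pairs)  # e.g. counts["the"] == 2
```

With the connectors above, the same mapper/reducer logic could read its input from and write its output to BigQuery, Datastore or Cloud Storage instead of local lists.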

As always, we would love to hear your feedback and ideas on improving these connectors and making Hadoop run better on Google Cloud Platform.

-Posted by Pratul Dublish, Product Manager

Today, we are announcing the release of App Engine 1.9.3.

This release offers stability and scalability improvements, themes we will continue to build on over the next few releases. We know that you rely on App Engine for critical applications, and with the significant growth we’ve experienced over the past couple of years, we wanted to take a step back and spend a few release cycles with a laser focus on the core functionality that affects your service and end users. As a result, new features and functionality may take a back seat to these improvements. That said, we fully expect to continue making progress with existing services, including Dedicated Memcache.

Dedicated Memcache
Today we are pleased to announce the General Availability of our dedicated memcache service in the European Union. Dedicated Memcache lets you provision additional, isolated memcache capacity for your application. For more details about this service, see our recent announcement.

Our goal is to make sure that App Engine is the best place to grow your application and business rapidly. As always, you can find the latest SDK on our release page along with detailed release notes and can share questions/comments with us at Stack Overflow.

When Applibot needed a flexible computing architecture to help them grow in the competitive mobile gaming market in Japan, they turned to Google Cloud Platform. When Tagtoo, an online content-tagging startup, needed to tap into the power of analytics to better serve digital ads to customers in Taiwan, they turned to Google Cloud Platform. In fact, companies all over the world are turning to Cloud Platform to create great apps and build successful businesses.

Now, more developers in Asia Pacific can experience the speed and scale of Google’s infrastructure with the expansion of Cloud Platform support. Today we switched on Compute Engine zones in Asia Pacific and deployed Cloud Storage and Cloud SQL.

This region launches with our latest cloud technology, including Andromeda (the codename for Google’s network virtualization stack) for blazing-fast networking performance, as well as transparent maintenance with live migration and automatic restart for Compute Engine.

In addition to local product availability, the Google Cloud Platform website and the developer console will also be available in Japanese and Traditional Chinese. These websites have updated use cases, documentation and all sorts of goodies and tools to help local developers get started with Google Cloud Platform. Developers interested in learning more about Google Cloud Platform can join one of the Google Cloud Platform Global Roadshow events coming up in Tokyo, Taipei, Seoul or Hong Kong.

The launch of Cloud Platform support in Asia Pacific is in line with our increasing investment in the region and our commitment to developers around the world. To all our customers in the region, we would like to say “THANK YOU / 謝謝 / ありがとう ” for your support of Google Cloud Platform.

-Posted by Howard Wu, Head of Asia Pacific Marketing and Ken Sim, Product Manager