Ruby on Rails is one of the most popular frameworks for developing web applications; some of the largest websites, including GitHub, Basecamp, and Shopify, are built on it. With Rails, developers and startups can rapidly create new features, easily maintain code, and take advantage of contributions from the open source community. Wouldn’t it be great if they could just as easily deploy their favorite framework on high-performance Google Compute Engine VMs and take advantage of sustained-use discounts?

Today, we’re announcing Click to Deploy for the Ruby development stack on Google Compute Engine. The Ruby stack provides you with the best of today’s open source software:
  • Apache Web Server
  • Passenger App Server
  • Ruby on Rails Framework
  • MySQL Database

With a single button click, you can launch a complete Ruby/Rails stack ready for development! Click to Deploy handles all the software installation to get you started. So go ahead and click to deploy your Ruby development stack today!

Learn more about running the Ruby development stack on Google Compute Engine, or try it now.

-Posted by Ravi Madasu, Program Manager

“Rails”, “Ruby on Rails”, and the Rails logo are registered trademarks of David Heinemeier Hansson. All other trademarks cited here are the property of their respective owners.

Ezakus, a leading data management platform, relies on Hadoop to process 600 million digital touch points generated by 40 million web and mobile users.

Fast growth created challenges in managing Ezakus’s existing Hadoop installation, so they tested different alternatives for running Hadoop. Their benchmarks found that Hadoop on Google Compute Engine provided processing speed that was three to four times better than the next-best cloud provider.

“Our benchmark tests used the Cloudera Hadoop distribution,” said Olivier Gardinetti, CTO. “We were careful to use identical infrastructure: the same logical CPU count, the same memory capacity, and so forth. We also ran each test several times to ensure that outliers weren’t skewing the results.”

When using MapReduce for basic statistical processing of 20,469,283 browsing-history entries spanning one month, Compute Engine computed the stats in 1 minute and 3 seconds, four times faster than the alternative tested. In a second test with more complex queries, Compute Engine finished in 7 minutes and 47 seconds, three times faster than the closest alternative, which ran in 23 minutes and 31 seconds.
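The kind of batch stats job described above follows the classic map/reduce shape: a map phase emits key/value pairs from raw records, and a reduce phase aggregates them per key. The sketch below illustrates the pattern in plain Python; the record fields and aggregation are illustrative, not Ezakus's actual pipeline.

```python
from collections import defaultdict

# Toy browsing-history records; a real job would stream these from HDFS.
events = [
    {"user": "u1", "site": "news", "seconds": 120},
    {"user": "u1", "site": "shop", "seconds": 45},
    {"user": "u2", "site": "news", "seconds": 300},
]

def map_phase(records):
    # Emit one (key, value) pair per record.
    for r in records:
        yield r["site"], r["seconds"]

def reduce_phase(pairs):
    # Aggregate values per key: running total and count, then the mean.
    totals = defaultdict(lambda: [0, 0])
    for key, value in pairs:
        totals[key][0] += value
        totals[key][1] += 1
    return {k: {"total": t, "mean": t / n} for k, (t, n) in totals.items()}

stats = reduce_phase(map_phase(events))
print(stats["news"])  # {'total': 420, 'mean': 210.0}
```

In a real Hadoop deployment the map and reduce phases run in parallel across the cluster, which is where the node-level CPU and memory differences measured in the benchmark translate directly into wall-clock time.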

Ezakus can now deliver better performance and more predictions and serve more clients, “because we can more easily deploy all the servers in a very short time,” said Gardinetti. To learn more about their migration to Google Cloud Platform and the subsequent results for their business, read the case study here.

-Posted by Ori Weinroth, Product Marketing Manager

Cross-posted from the Google Enterprise blog

No matter how you slice it, mobile and cloud are essential for future business growth and productivity. This is driving increases in security spending as organizations wrestle with threats and regulatory compliance: according to Gartner, worldwide spending on information security will reach $71 billion this year, a 7.9 percent increase over 2013.

To help organizations spend their money wisely, it’s essential that cloud companies are transparent about their security capabilities. Since we see transparency as a crucial way to earn and maintain our customers’ confidence, we ask independent auditors to examine the controls in our systems and operations on a regular basis. The audits are rigorous, and customers can use these reports to make sure Google meets their compliance and data protection needs.

We’re proud to announce that we have received an updated ISO 27001 certificate and SOC 2 and SOC 3 Type II audit reports, which are the most widely recognized, internationally accepted independent security compliance reports. These audits refresh our coverage for Google Apps for Business and Education, as well as Google Cloud Platform, and we’ve expanded the scope to include Google+ and Hangouts. To make it easier for everyone to verify our security, we’re now publishing our updated ISO 27001 certificate and new SOC 3 audit report for the first time, on our Google Enterprise security page.

Keeping your data safe is at the core of what we do. That’s why we hire the world’s foremost experts in security (the team now comprises more than 450 full-time engineers) to keep customers’ data secure from imminent and evolving threats. These certifications, along with our existing offerings of FISMA for Google Apps for Government, support for FERPA and COPPA compliance in Google Apps for Education, model contract clauses for Google Apps customers operating in Europe, and HIPAA business associate agreements for organizations handling protected health information, help assure our customers and their regulators that we’re committed to keeping their data, and that of their users, secure, private, and compliant.

Every software company today needs a place to store its code and collaborate with teammates. Today we are announcing a solution that can scale with your business. GitLab Community Server is a great way to get the benefits of collaborative development for your team, wherever you want it. While GitLab already provides simple application installers, we wanted to take it one step further.

Today, we’re announcing Click to Deploy for the GitLab Community Server built on the following open source stack:
  • Nginx, a fast, minimal web server
  • Unicorn, an application server for Ruby on Rails
  • Redis, a scalable caching service
  • PostgreSQL, a popular SQL database

Get your own, dedicated code collaboration server today!

Learn more about running the GitLab Community Server on Google Compute Engine.

-Posted by Brian Lynch, Solutions Architect

GitLab is a registered trademark of GitLab B.V. All other trademarks cited here are the property of their respective owners.

Today we are announcing that Zync Render, the visual effects cloud rendering technology behind Star Trek Into Darkness and Looper, is joining the Google Cloud Platform team.

Creating amazing special effects requires a skilled team of visual artists and designers, backed by a highly powerful infrastructure to render scenes. Many studios, however, don’t have the resources or desire to create an in-house rendering farm, or they need to burst past their existing capacity.

Together, Zync and Cloud Platform will offer studios the rendering performance and capacity they need while helping them manage costs. For example, with per-minute billing, studios aren’t trapped into paying for unused capacity when their rendering needs don’t fit into neat hour increments.

We’re excited they're joining us. We’ll have more details to share in the coming months — stay tuned!

-Posted by Belwadi Srikanth, Product Manager


Two months ago, we announced Kubernetes, an open source cluster manager for Docker containers. Since then we’ve seen an impressive community develop around Kubernetes, and today we’re thrilled to welcome VMware to the Kubernetes community.

We’ve spent a lot of time talking about how we’re building Kubernetes to provide a unique infrastructure for easily building scalable, reliable systems like we do at Google. With the addition of VMware in the community, we thought we’d take the time to discuss the infrastructure side of cluster management and how VMware’s deep technical expertise in this area will make Kubernetes a more capable, powerful and secure platform beyond Google Cloud Platform.

One of the fundamental tenets of Kubernetes is the decoupling of application containers from the details of the systems on which they run. Google Cloud Platform provides a homogeneous set of raw resources via virtual machines (VMs) to Kubernetes, and in turn, Kubernetes schedules containers to use those resources. This decoupling simplifies application development, since users only ask for abstract resources like cores and memory, and it also simplifies data center operations, since every machine is identical and isolated from the details of the applications that run on it.
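As a sketch of what asking for abstract resources looks like in practice, a pod specification can declare the CPU and memory a container needs and leave machine placement to the scheduler. The exact fields have evolved since Kubernetes's earliest releases; this fragment follows the later, stabilized `resources` syntax, and the names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"      # half a core
        memory: "256Mi"  # abstract resources; no machine is named anywhere
```

Nothing in the spec identifies a particular VM: the scheduler matches the request against whatever homogeneous machines the cluster offers.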

VMware will provide enhanced capabilities for running a reliable Kubernetes cluster, much like Google Cloud Platform. The core resources here are:

  • Machines: virtual machines on which containers run
  • Network: the physical or virtualized connectivity between containers in the cluster
  • Storage: reliable, cluster level distributed storage outside of a container’s lifecycle

Providing machines for Kubernetes is necessary not only as a pool of raw cycles and bytes; it can also provide a critical extra layer of security. Security is a continuum on which you pick solutions based on threats and risk tolerance. While container security is an evolving area, VMs have a longer track record and a smaller attack surface. Fundamentally, even in Kubernetes, the machine is a strong security domain. Linux containers can provide strong resource isolation, ensuring, for example, that one container has dedicated access to a specific core in the processor. For semi-trusted workloads, containers may be sufficient. However, because containers share the same kernel, there’s an expanded surface area that may make them insufficient as your only line of defense. For untrusted workloads or users, we highly suggest defense in depth with virtual machine technology as a second layer of security. Indeed, this is how two different users’ Kubernetes clusters can safely co-exist on the same physical infrastructure in a Google data center. VMware will help Kubernetes implement this same pattern of using virtualization to secure physical machines when those machines are outside of Google’s data centers.

While running individual containers is sufficient for some use cases, the real power of containers comes from implementing distributed systems, and to do this you need a network. However, you don’t just need any network. Containers provide end users with an abstraction that makes each container a self-contained unit of computation. Traditionally, one place where this has broken down is networking, where containers are exposed on the network via the shared host machine’s address. In Kubernetes, we’ve taken an alternative approach: each group of containers (called a Pod) gets its own, unique IP address that’s reachable from any other Pod in the cluster, whether they’re co-located on the same physical machine or not. To achieve this in the Google data center, we’ve taken advantage of the advanced routing features that are available via Google Compute Engine’s Andromeda network virtualization. VMware, with their deep knowledge in network virtualization, specifically Open Virtual Switch (OVS), will simplify network configuration in Kubernetes clusters running outside of Google’s data centers.

Finally, nearly every application that you run needs some sort of storage, but storing that data on specific machines in your datacenter makes it difficult to schedule containers in the cluster to maximize efficiency and reliability, since pods are forced to co-locate with their data. When Kubernetes runs on Google Cloud Platform, you’ll soon be able to pair your container with a Persistent Disk (PD) volume, so that regardless of where your container is scheduled in the cluster, its storage follows it to the physical machine. VMware will work with Kubernetes to include integration points to distributed storage systems such as their Virtual SAN scalable virtual storage solution, enabling similar capabilities for users not running on Google Cloud Platform, in addition to simpler, less robust shared storage solutions for users who don’t have access to a reliable network storage system.
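The storage-follows-the-container idea can be sketched in a pod spec using the `gcePersistentDisk` volume type. This is a hedged illustration: the disk and pod names are made up, and the disk is assumed to already exist in the same zone as the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql  # the container sees ordinary local storage
  volumes:
  - name: data
    gcePersistentDisk:
      pdName: my-data-disk       # an existing PD; illustrative name
      fsType: ext4
```

Because the volume is defined by reference rather than by machine, the scheduler is free to place the pod anywhere in the cluster and attach the disk there.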

We developed and open sourced Kubernetes to provide application developers and operations teams with the ability to build and scale their applications like Google. The addition of VMware’s technical expertise in cluster infrastructure will enable people to begin computing like Google, regardless of where they physically do that computation.

-Posted by Craig McLuckie, Product Manager

Today’s guest post is by Florian Leibert, Mesosphere Co-Founder & CEO. Prior to Mesosphere, he was an engineering lead at Twitter, where he helped introduce Mesos; it now runs every new service there. He then went on to help build the analytics stack at Airbnb on Mesos. He is the main author of Chronos, an Apache Mesos framework for managing and scheduling ETL systems.

Mesosphere enables users to manage their datacenter or cloud as if it were one large machine. It does this by creating a single, highly elastic pool of resources from which all applications can draw, building sophisticated clusters out of raw compute nodes (whether physical or virtual machines). These Mesosphere clusters are highly available and support scheduling of diverse workloads on the same cluster, such as those from Marathon, Chronos, Hadoop, and Spark. Mesosphere is based on the open source Apache Mesos distributed systems kernel, used by customers like Twitter, Airbnb, and HubSpot to power internet-scale applications. Mesosphere makes it possible to develop and deploy applications faster with less friction, operate them at massive scale with lower overhead, and enjoy higher levels of resiliency and resource efficiency with no code changes.

We’re collaborating with Google to bring together Mesosphere, Kubernetes and Google Cloud Platform to make it even easier for our customers to run applications and containers at scale. Today, we are excited to announce that we’re bringing Mesosphere to the Google Cloud Platform with a web app that enables customers to deploy Mesosphere clusters in minutes. In addition, we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.

With our new web app, developers can spin up a Mesosphere cluster on Cloud Platform in just a few clicks, using either standard or custom configurations. The app automatically installs and configures everything you need to run a Mesosphere cluster, including the Mesos kernel, ZooKeeper, and Marathon, as well as OpenVPN so you can log into your cluster. We’re also excited that this functionality will soon be incorporated into the Google Cloud Platform dashboard via the click-to-deploy feature. There is no cost for using this service beyond the charges for running the configured instances on your Google Cloud Platform account. To get started with our web app, simply log in with your Google credentials and spin up a Mesos cluster.

We are also incorporating Kubernetes into Mesos and our Mesosphere ecosystem to manage the deployment of Docker workloads. Our combined compute fabric can run anywhere, whether on Google Cloud Platform, your own datacenter, or another cloud provider. You can schedule Docker containers on the same Mesosphere cluster alongside other Linux workloads, from data analytics tasks like Spark and Hadoop to more traditional tasks like shell scripts and jar files.
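Scheduling a Docker container on such a cluster through Marathon is done with a small JSON app definition. This sketch follows Marathon's documented app format; the app id, image, and resource numbers are illustrative:

```json
{
  "id": "/hello-docker",
  "cpus": 0.5,
  "mem": 256,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}
```

POSTing a definition like this to Marathon's apps endpoint asks Mesos for the stated CPU and memory, and Marathon then keeps the requested number of instances running, restarting them on other nodes if one fails.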

Whether you are running massive, internet scale workloads like many of our customers, or you are just getting started, we think the combination of Mesos, Kubernetes, and Google Cloud Platform will help you build your apps faster, deploy them more efficiently, and run them with less overhead. We look forward to working with Google to make Cloud Platform the best place to run traditional Mesosphere workloads, such as Marathon, Chronos, Hadoop, or Spark—or newer Kubernetes workloads. And they can all be run together while sharing resources on the same cluster using Mesos. Please take Mesosphere for Google Cloud Platform for a test drive and let us know what you think.

- Contributed by Florian Leibert, Mesosphere Co-Founder & CEO