2015: surrendering to the cloud

I thought I’d label 2015 as the year of surrendering to the cloud. And by this I do not mean the mass adoption that every software vendor was waiting for, but surrendering to (1) the fact that the cloud is now pervasive and no longer up for debate, and (2) the dominance of Amazon Web Services.

A debate had been going on for way too long about the real benefits of the cloud. And I’m not talking about end customers here, I’m talking about IT professionals, for whom new technologies should be bread and butter. Yet around cloud computing they somehow showed the strongest skepticism, a high dose of arrogance (how many times have I heard “we were doing cloud 20 years ago, we just weren’t calling it that”) and reluctance to embrace change. The great majority of them underestimated the phenomenon, to the point of challenging its usefulness or reducing it to virtualisation in some other data centre which is not here.

I asked myself why this happened and came to the conclusion that the cloud has simply been too disruptive, even for IT pros. To understand the benefits of the cloud in full, one had to make a mental leap. People naturally learn by small logical next steps, so the cloud was interpreted as the natural next step after having virtualised their data centres. But as I wrote more than three years ago in the blog post Cloud computing is not the evolution of virtualisation, the cloud came to solve a different problem and used virtualisation merely as a delivery method to accomplish its goal. Finally, in 2015, I personally witnessed that long-overdue increase in maturity with respect to cloud technologies. The conversations I had with service providers and end customers’ IT pros were no longer about whether to cloud or not to cloud, but about what and when instead.

What has helped achieve this maturity? I think it is the fact that nobody could ignore the elephant in the room any longer. The elephant called Amazon Web Services. The cloud pioneer, now a well-consolidated player, that is probably five years ahead of its nearest competitor in terms of innovation and feature richness. And not only is nobody ignoring it anymore, everyone wants a ride on it.

Many of those IT pros I mentioned are actually employed by major software vendors, maybe even leading their cloud strategy. Their initial misunderstanding of the real opportunity behind cloud adoption led to multi-million investments in the wrong products. And in 2015 (here we come to surrender number 2) we saw many of these failures surface and demand real change. Sometimes these changes were addressed with new acquisitions (like the EMC acquisition of Virtustream), sometimes with the decision to co-opt instead of compete.

To pick some examples:

  • On Tuesday [Oct 6th] at AWS re:Invent, Rackspace launched Fanatical Support for AWS, beginning with U.S.-based customers. Non-U.S. customers will have to wait a while, although Rackspace will offer support for them in beta mode. In addition, Rackspace will also resell and offer support services for AWS’s elastic cloud as it’s now officially become an authorized AWS reseller.
  • Hewlett-Packard is dropping the public cloud that it offered as part of its Helion “hybrid” cloud platform, ceding the territory to Amazon Web Services and Microsoft’s Azure. The company will focus on private cloud and traditional IT that its large corporate customers want, while supporting AWS and Azure for public cloud needs.
  • HP Enterprise’s latest strategy, which dovetails with earlier plans to focus on private and managed clouds, is to partner with Microsoft and become an Azure reseller.

What does this tell us? Most software vendors are now late to the game and are trying to enter the market by holding the hand of those who understood (and somewhat contributed to creating) the public cloud market. But don’t we always say the cloud market is heading towards commoditisation? Why does there seem to be no room for a considerable number of players? Certainly HP, VMware or IBM have the same investment capacity as Amazon to grow big and compete head to head.

The reality is that we’re far from this commoditisation. While virtual machines may well be a commodity, they’re no more than a tiny part of the whole set of cloud services offered, for example, by AWS (EC2 was mentioned only once during the two main keynotes at AWS re:Invent this year!). The software that enables the full portfolio of cloud services still makes a whole lot of difference, and delivering it requires vision, leadership, understanding and a ton of talent. Millions in investments without the rest was definitely not the way.

Happy 2016!

The 3 reasons why Docker got it right

Containers have been around for a while. But why did they finally get their well-deserved popularity only with the rise of Docker? Was it just a matter of market maturity, or something else? Having worked at Joyent, I had the luck of being in the container business before Docker was even invented, and I’d like to give you my take on it.

A brief history of containers

We hear this again and again in computer science: what we think was recently invented by some computing visionary typically has its roots decades in the past. It happened with hardware virtualisation (emulation), with the centralisation that the cloud brought back after the client-server era (mainframes) and, yes, also with containers.

If you started hacking with Unix back in the early ’90s, you’ll certainly remember chroot. How many times have I used it to make sure my process wasn’t messing around with the main OS environment. And you’ll probably remember FreeBSD jails, which added the kernel-level isolation required to implement the very first OS-level virtualisation system.
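As a reminder of how simple the original primitive was, here is a minimal sketch of that old trick in Python, assuming root privileges and a populated root filesystem (the paths below are made up for illustration):

```python
import os

# Confine the current process to a directory tree, roughly what chroot(8)
# gave us in the '90s. Requires root and a populated filesystem under NEW_ROOT.
NEW_ROOT = "/srv/jail"                  # hypothetical path

os.chroot(NEW_ROOT)                     # from now on, "/" means NEW_ROOT
os.chdir("/")                           # make sure the cwd is inside the new root

# Exec the confined workload; it can no longer see the host's real filesystem.
os.execv("/bin/sh", ["sh", "-c", "ls /"])
```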

Sun Microsystems also believed strongly in containers and developed what it called “zones“, definitely the most powerful and well-thought-out container system. But although Sun believed in containers more than it did in hardware-level virtualisation, the market moved towards the latter, not because it was the right approach but simply because it allowed the guest OS to stay untouched. Unfortunately, Sun never got to see much of the results of zones, as nobody really knows what happened to them after the acquisition by Oracle. Luckily another company, Joyent, picked up the legacy of OpenSolaris with its SmartOS derivative. SmartOS is now the foundation of the Joyent Cloud, with an improved version of zones at its very core.

At the same time, yet another company, Parallels (now Odin), stewarded OpenVZ, an open-source project for OS-level virtualisation on Linux. Its commercial version was called Virtuozzo, and Parallels sold it as their virtualisation system of choice.

Since the late 2000s, Joyent and Parallels have been pioneering the container revolution, but nobody ever talked about them as much as everyone now talks about Docker. Let’s try to understand why.

Positioning of containers

The easy conclusion would be that the market just wasn’t ready yet. We all know how important timing is when releasing something new, and I’m sure it also played a role with containers. However, in my view, that’s not the main reason.

Let’s look at how these two companies were selling their container technology. Joyent made it all about performance and transparency: if you use a container instead of a virtual machine (i.e. a hardware-level virtualised one), you get an order-of-magnitude performance increase, as well as total transparency and visibility into the underlying hardware. That’s absolutely spot on and relevant. But apparently it wasn’t enough.

Parallels made it all about density. Its target market was hosting companies and VPS providers, those who sell a single server for something like four bucks a month. So, if you sell a container instead of a virtual machine, you can squeeze two or three times as many servers onto the same physical host, keep your prices lower and attract more customers. Given that you’re not reserving resources for a specific container, higher density is a real advantage that can be achieved without affecting performance too much. Absolutely true but, again, it did not resonate loudly enough.

The need to lower the overhead

In the last few years, we also witnessed the desperate need to lower the overhead. Distributed systems caused server sprawl: thousands of under-utilised VMs running what we call microservices, each with heavy baggage to carry, a multi-process, multi-user full OS whose features are almost totally useless to them. Hence the research into lowering the overhead: from ZeroVM (acquired by Rackspace) to Cloudius Systems, which tried to rewrite the Linux kernel, chopping off the features that weren’t really necessary to run single-process instances.

And then came Docker

Docker started as the delivery model for the infrastructure behind the dotCloud PaaS: it was using containers to deliver something else, namely application environments, with the agility and flexibility required to deploy, scale and orchestrate them. When Docker was spun off, it also added the ability to package those environments and ship them to a central repository. Bingo. It turned containers into a simple means to do something else. It wasn’t the container per se, it was what containers unlocked: the ability to package, ship and run isolated application environments in a fraction of a second.
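To make the package-ship-run loop concrete, here is a minimal sketch using the docker-py SDK, assuming a Dockerfile in the current directory (the client calls are the documented ones; the registry and image names are placeholders I made up):

```python
import docker

client = docker.from_env()

# Package: build an application environment from a local Dockerfile.
image, _build_logs = client.images.build(
    path=".", tag="registry.example.com/myapp:1.0"
)

# Ship: push the image to a central repository.
client.images.push("registry.example.com/myapp", tag="1.0")

# Run: start an isolated application environment in a fraction of a second.
container = client.containers.run("registry.example.com/myapp:1.0", detach=True)
print(container.short_id)
```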

And it was running on Linux. The most popular OS of all time.

Why Docker got it right

All of this made me think that there are three main reasons behind the success of Docker.

1. It used containers to unlock a totally new use case

The use cases that containers unlocked for Joyent, Parallels and Docker were all different: the performance of a virtual server in the case of Joyent, the density of virtual servers in the case of Parallels, and application delivery in the case of Docker. They all make a lot of sense, but the first two were focused on delivering a virtual server; Docker moved on and used containers to deliver applications instead.

2. It did not try to compete against virtual machines

Joyent and Parallels tried to position containers against virtual machines: you could do something better by using a container instead of a virtual machine. And that was a tough sale. Trying to address the same use case that everybody already acknowledged as the job of a VM was hard. It was right, but it would have needed a much longer time to establish itself.

Docker did not compete with VMs and, as a demonstration of that, most people actually run Docker inside VMs today… even if Bryan Cantrill (@bcantrill), CTO of Joyent, would have something to say about it! Docker runs either on bare metal or in a VM; it does not matter much when what you want to achieve is to build, package and run lightweight application environments for distributed systems.

3. It did not try to reinvent Unix but used Unix for what it was built for

Docker didn’t try to rewrite the Linux kernel. However, it fully achieved the objective of reducing overhead. Containers can be used to run a single process without the burden of carrying an entire OS. At the same time, the underlying host can make the best use of its multi-process capabilities to effectively manage hundreds of containers.

Don’t get me wrong. I absolutely believe in the superiority of containers over virtual machines. I think both Joyent and Parallels did an amazing job spreading the word about their benefits like no one else. However, I also recognise in Docker the unique ability to have made them shine much brighter than anyone ever did before.

In conclusion, co-opting the established worlds of virtual machines and Linux to gain the largest possible reach, while adding fundamental value to them, was the reason behind Docker’s success. At the same time, looking at containers from an orthogonal perspective, not as the goal but as a means to achieve something other than delivering a virtual server, is what put containers on everyone’s lips.

Virtualization no longer matters

There is no doubt. The product is there. The vision, too. At times they leave some room for arrogance as well but, come on, they are the market leader, aware of being far ahead of anybody else in this field. A field they actually invented themselves. We almost feel like forgiving that arrogance. Don’t we?

The AWS Summit 2013 in London was, one more time, confirmation that the cloud infrastructure market is there, that the potential is higher than ever and that Amazon “gets” it, drives it and dominates it quite undisturbed. All the others struggle to distinguish themselves among a huge number of technology companies, old and new, who are strongly convinced they have jumped into the cloud business but whose executives, I’m pretty sure, mostly think that cloud is just the new name for hosting services.

Before going forward, I want to thank Garret Murphy (@garrettmurphy) for transferring his AWS Summit ticket to me without even knowing who I was, simply and kindly responding to my tweeted inquiry. I wish him and his Dublin-based startup 247tech.ie the amount of luck that, coupled with great talent, leads to success.

Now, I won’t go through the whole event: since this is a roadshow of which London wasn’t the first stop, much has been said about it already here and here. The general perception I had is that AWS is still focused on presenting the advantages of cloud-based as opposed to on-premises IT infrastructure, showing off the rich toolset they have put in place and bringing MANY (I counted nearly 20) customers to testify how they are effectively using the AWS cloud and what advantages they have gained by doing so. Ok, most of them were the usual hyper-scale Internet companies, but I’ve seen the effort to bring enterprise testimonials like ATOC (the Association of Train Operating Companies of the UK). However, they all said they use AWS only for web-facing applications, staging environments or big data analytics. The usual stuff, which we know to be cloud friendly.

What really impressed me was the OpsWorks demo. OpsWorks was released not long ago as the nth complementary Amazon Web Service, this one meant to help operate resilient, self-healing applications in the cloud. Aside from the confusion around what to use when, given the large number of tools available (and without counting the third-party ones, which are growing uncontrolled day by day), there is one evident trend arising from it.

For those who don’t know OpsWorks, it is an API-driven layer built on top of Chef that automates the setup, deployment and un-deployment of application stacks. An attempt at DevOps automation. How it is going to meet customers’ actual requirements while staying simple (i.e. without having to expose too many options) is not clear yet.

During the session demonstrating OpsWorks, the AWS solution architect remarked that no custom AMIs (Amazon Machine Images) are available for selection when creating an application stack. Someone in the audience immediately complained about this on Twitter, probably because he wasn’t happy about having to rebuild all his customizations as Chef recipes on top of lightweight basic OS images, throwing away his custom VM image.

In fact, there are several advantages to moving the actual machine setup to the post-bootstrap automation layer. For example, the ease of upgrading software versions (e.g. Apache, MySQL) simply by changing a line in a configuration file instead of having to rebuild the whole operating system image. But mostly because, by keeping OS images adherent to the clean vendor releases, you will probably find the same images available at other cloud providers, making your application setup completely cross-cloud. Of course there are disadvantages too, including the delay added by operations like software downloads or configuration runs that may be necessary each time you decide to scale up your application.
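As an illustration of this approach, here is a hedged sketch of driving OpsWorks through its API with boto3, keeping the OS image stock and pushing every customization into Chef cookbooks (the client calls are the documented ones; the ARNs, names and cookbook URL are placeholders):

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

# Create a stack that boots clean vendor OS images and layers all
# customization on top through custom Chef cookbooks.
stack = opsworks.create_stack(
    Name="demo-stack",                                        # hypothetical name
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn=(
        "arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role"
    ),
    UseCustomCookbooks=True,
    CustomCookbooksSource={
        "Type": "git",
        "Url": "https://github.com/example/my-cookbooks.git",  # placeholder repo
    },
)

# A custom layer whose setup/deploy recipes install and configure the app,
# so nothing needs to be baked into a custom AMI.
layer = opsworks.create_layer(
    StackId=stack["StackId"],
    Type="custom",
    Name="app-server",
    Shortname="app",
)
print(stack["StackId"], layer["LayerId"])
```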

Cross-cloud application deployment. No vendor lock-in. Cool. There is actually a Spanish startup called Besol that is building its entire (amazing) product “Tapp into the Cloud” around the management of cross-cloud application stacks, leveraging a rich library of Chef cookbook templates. And while I was writing this post on a flight from London, Jason Hoffman (@jasonh) was being interviewed by GigaOM and, while announcing better integration between Joyent and Chef, he mentioned compatibility across cloud environments as a major advantage of using Chef.

What we’re observing is a major shift away from leveraging operating system images towards the adoption of automation layers that can quickly prepare whatever application you want your virtual server to host. That means that one of the major advantages introduced by virtualization technology, the software manipulation of OS images, which was one of the triggers of the rise of cloud computing, no longer matters.

Potentially, with the adoption of automation platforms like Chef, Puppet or CFEngine, service providers could build a complete cloud infrastructure service, without employing any kind of hypervisor. And this trend is further confirmed by facts like:

Of course there are still advantages to using a hypervisor: certain applications require architectures made of many micro-instances to perform parallel computing, so it’s still necessary to slice a server into many small portions. However, with silicon processors growing their core counts and their ability to run many threads, virtualization may not be so important for the cloud anymore.

In the end, I think we can no longer say that virtualization is the foundation of cloud computing. The correct statement is perhaps that virtualization inspired cloud computing. But the future may leave even less space for it.

The truth about enterprise private clouds

Oh yes!

It feels so great when one of the most recognized high-tech analysts out there writes down exactly what you think. It’s an endorsement of your own thinking to read James Staten (@Staten7) of Forrester Research on “Why your enterprise private cloud is failing”, where he describes so clearly what you’ve always been thinking and trying to explain.

His blog post says two important things:

  1. Enterprise private clouds are failing. As I’ve also written in a Quora answer to “What is the future of private cloud?”, no matter what marketing and vendors are saying, efficient, large-scale, production enterprise private clouds don’t exist as of today. In my opinion, cloud is such a new model of delivering IT infrastructure that the culture of using it won’t reach the enterprise through a bottom-up approach (evolving from the current infrastructure), but only through a top-down one (deploying into public clouds first and then migrating back in-house). A revolution as opposed to an evolution.
  2. Enterprise private clouds are failing due to the wrong approach taken by IT departments: treating the cloud just like an infrastructure stack instead of a service, because “you are building the private cloud without engaging the buyers who will consume this cloud”, as Staten says.

And of course, I wasn’t the only one recognizing “the truth” in James Staten’s words. His opinion on failing private clouds echoed throughout the web, generating a large consensus among cloud experts and visionaries such as James Urquhart (@jamesurquhart).

The two cloud models

Much has already been written about different approaches to the cloud, and big brains have concluded that all of them can be summarized in two cloud models. They have been given various names depending on the author, but I shall refer to the nomenclature of the OpenNebula blog post.

  1. Datacenter Virtualization model: cloud as an extension of virtualization in the datacenter, with some more automation, a service catalogue, etc. A VMware vCloud-like approach.
  2. Infrastructure Provision model: a powerful, service-oriented API to provision commodity computing resources effectively and efficiently. An AWS-like approach.

With reference to the above models, James Staten is basically saying that the Datacenter Virtualization cloud model is wrong, that it is not the right approach to implementing a private cloud. Because “a Porsche is [not] just a Volkswagen with better engine, tires, suspension and seats.”

Awesome. I’ve been convinced of that for some time. If you read my very first post, “Cloud Computing is not the evolution of virtualization”, as the title says, I have always considered the Infrastructure Provision model to be the only possible cloud implementation, to the point of excluding the Datacenter Virtualization model from being called cloud at all.

And I don’t think this was an extremist approach. As I have said many times, cloud is a tremendous opportunity for the enterprise to start thinking differently. In my opinion, cloud will reach enterprise IT departments only through a top-down approach: from a public cloud implementation back in-house. Enterprise cloud consumers will try (and love) the public cloud and eventually drive the implementation of something similar within the enterprise itself. But trying to transform the current virtualized infrastructure into a private cloud will simply fail. Fail to deliver a truly elastic and service-oriented cloud infrastructure to the real cloud consumers.

Vendors didn’t get it

So what? Did all enterprise IT departments simply not get it? What’s their problem? It’s a vendor problem. Enterprise software vendors didn’t get it. Every one of them started to think of the cloud as an opportunity (which is fine, as a matter of principle) and they all just tried to profit from the hype. For virtualization technology vendors, that was an easy path: adding a new product to their portfolio to “cloudify” their existing virtualization products, as a natural extension of the implementations already in place within the enterprise. The perfect scenario for IT departments. A pity that it doesn’t deliver what cloud consumers are looking for.

But recently we have heard something new from virtualization vendors. They have actively started perceiving public clouds, and AWS in particular, as a threat to the workloads which are (were?) running on their virtualization technology and which are failing to migrate to private clouds for the reasons above. Despite their very rich cloud product portfolios, workloads keep moving from the enterprise to commodity public clouds. Why?

Hearing VMware CEO Pat Gelsinger say that he finds it hard to believe they cannot beat a company that sells books makes me think they really didn’t get the point at all. Good luck, guys.

Cloud computing is not the evolution of virtualization

Many of you probably think that, after the success of virtualization technology, somebody had to invent something appealing to keep pushing sales and called it Cloud Computing. The same people would think that cloud computing is just an extra layer on top of your virtualization management platform for better, coordinated resource management, providing things like billing, machine catalogues, self-provisioning, etc.

Cloud Computing actually has a much wider meaning (which sometimes makes it look like just a marketing trend), so today I will narrow it down and focus on cloud infrastructures. The questions I will try to answer are: what is a cloud infrastructure, and when can you say you’re really running your business in the cloud?

To provide the right answers, you have to think of the applications that you want to run on your IT infrastructure. Many of you have probably gone through the server consolidation process that made VMware a billion-dollar company: you had lots of unused hardware resources but you still wanted to keep operating environments separate so, no problem, hardware virtualization could solve that for you without the need to change anything in your application code or architecture. The same application you were running before on bare metal would run in exactly the same way inside a virtual machine.

After server consolidation practices became common, the evolution of hardware virtualization somehow went much faster than the evolution of applications. Hypervisor vendors started to provide more and more features to keep the underlying hardware always available for running applications, so that applications could run endlessly without even caring about potential hardware failures.

What people tend to forget when buying powerful hardware platforms is that application failures, far more often than hardware failures, are the primary cause of outages. Sooner or later you realize that, and you have to build application-level redundancy in order to implement a truly highly available system. But with application-level redundancy, do you still need expensive underlying hardware? Why not run your application on commodity servers?
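As a toy illustration of what application-level redundancy means in practice, here is a minimal sketch in Python (the endpoints are made up): the client, not the hypervisor, deals with a failed instance by simply moving on to the next replica.

```python
import urllib.request

# Hypothetical replicas of the same application, running on cheap commodity servers.
REPLICAS = [
    "http://app-1.example.com/health",
    "http://app-2.example.com/health",
    "http://app-3.example.com/health",
]

def first_healthy_replica():
    """Return the first replica that answers its health check."""
    for url in REPLICAS:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # this replica (or the server under it) is down, try the next
    raise RuntimeError("no healthy replica available")

print(first_healthy_replica())
```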

This question leads to the real concept of cloud computing. Let’s now try to give a definition: an infrastructure can be called a cloud if it:

  • is scalable and elastic
  • provides process automation (self-provisioning / self-service / billing)
  • is highly available
  • provides full multi-tenancy

And what is the purpose of all of the above? If you think carefully, you’ll realize that it’s all aimed at commoditizing the infrastructure itself. Companies shouldn’t spend any more time building up their IT foundations; they should concentrate on their actual business workflows, supported by really innovative applications. Infrastructure is something they want to take for granted.

In this scenario, a cloud platform should have another important characteristic: it has to be cheap.

So can you achieve all of that with a traditional hardware virtualization-powered infrastructure? No.

Scalability will be an issue if you rely on centralized resources (which can’t grow forever), and such resources are usually necessary to provide hardware-level HA.

You will feel safe thanks to all those automatic live-migration features, but don’t forget that they protect you only from hardware failures. If the application fails, there is not much they can do for you. You should protect yourself from application failures by building a redundant application architecture but, if you do so, do you still need expensive hardware-level HA? No, you don’t.

And one more thing: cost. Hardware virtualization infrastructures require complex, high-end hardware that will never be cheap enough to turn the IT infrastructure into a commodity.

In the end, do you want to run your old legacy application in the cloud? Forget it. Just keep it on your powerful, expensive virtualization platform. That will work just fine. But if you’re a visionary who believes in a future that requires performant, scalable, elastic and cheap commodity IT infrastructures, then choose your next applications to be cloud-aware. That will take you much further, much faster.