Upcoming Research: Cloud Management Platforms

This post originally appeared on the Gartner Blog Network.

At Gartner, we’re often asked how to select cloud management platforms (CMP). We’ve been asked that question in the past, when a CMP was the software to transform virtualized data centers into API endpoints. We’re being asked the same question today, when a CMP is used to manage public clouds.

At Gartner Catalyst 2017 (one of the largest gatherings of technical professionals) I remarked in one of my presentations how confused the market is. Even vendors don’t know whether they should call their product a CMP or not. In the last few years, the cloud management market has evolved rapidly. Public cloud providers have constantly released more native management tools. Organizations have continued to adopt public cloud services and have gradually abandoned the idea of building a cloud themselves. Public cloud services require the adoption of new processes and new tooling, such as in the areas of self-service enablement, governance and cost management. Finally, public cloud has to co-exist and co-operate with on-premises data centers in hybrid scenarios.

At Gartner, we’re committed to helping our client organizations define their processes, translate them into management requirements and map them to market-leading tools. With the public cloud market maturing and playing a key role in the future of IT, we now see the opportunity to bring clarity and define the functions that a CMP must provide.

My colleague Alan Waite and I are drafting the Evaluation Criteria (EC) for Cloud Management Platforms. An EC is a piece of Gartner for Technical Professionals (GTP) branded research that lists the technical criteria for a specific technology and classifies them as required, preferred or optional. Clients can use an EC to assess a vendor’s technical functionality, to form the basis for an RFP or even simply to define their management requirements. With our upcoming EC for CMPs, client organizations will be able to shed light on the confused cloud management market. They will be able to understand which tools to use for which management tasks and how to compare them against one another.

I’m bullish about the outcome of this research and I’m very much looking forward to its publication. I’m extremely thankful to the extended analyst community at Gartner that is collaborating with Alan and me to increase the quality of this important piece of research. If you’re an existing Gartner client, don’t forget to track the “Cloud Computing” key initiative to be notified about the publication. If you’re willing to contribute to this research, feel free to schedule an inquiry or a vendor briefing with me or Alan. Looking forward to the next update. Stay tuned!

New Research: How To Manage Public Cloud Costs on Amazon Web Services and Microsoft Azure

This post originally appeared on the Gartner Blog Network.

Today, I am proud to announce that I just published new research (available here) on how to manage public IaaS and PaaS cloud costs on AWS and Microsoft Azure. The research illustrates a multicloud governance framework that organizations can use to successfully plan, track and optimize cloud spending on an ongoing basis. The note also provides a comprehensive list of cloud providers’ native tools that can be leveraged to implement each step of the framework.

In the last 12 months of client inquiries, I have sensed remarkable enthusiasm for public cloud services. Every organization I talked to was at some stage of public cloud adoption. Almost nobody was asking me “if” they should adopt cloud services but only “how” and “how fast”. However, these conversations also showed that only a few organizations had realized the cost implications of public cloud.

In the data center, organizations were often over-architecting their deployments in order to maximize the return-on-investment of their hardware platforms. These platforms were refreshed every three-to-five years and sized to serve the maximum expected workload demand over that time frame. The cloud reverses this paradigm and demands that organizations size their deployment much more precisely or they’ll quickly run into overspending.

Furthermore, cloud providers’ price lists, pricing models, discounts and billing mechanisms can be complex to manage even for mature cloud users. Understanding the most cost-effective option to run certain workloads is a management challenge that organizations are often unprepared to address.

Using this framework will help you take control of your public cloud costs. It will help your organization achieve operational excellence in cost management and realize many of the promised cost benefits of public cloud.

Gartner’s framework for cost management comprises five main steps:

  • Plan: Create a forecast to set spending expectations.
  • Track: Observe your actual cloud spending and compare it with your budget to detect anomalies before they become a surprise.
  • Reduce: Quickly eliminate resources that waste cloud spending.
  • Optimize: Leverage the provider’s discount models and optimize your workload for cost.
  • Mature: Improve and expand your cost management processes on a continual basis.
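
To make the Track step above concrete, here is a minimal sketch of what a spend-versus-budget check can look like. It assumes Python with boto3, AWS Cost Explorer enabled on the account, and a hypothetical budget figure and threshold; it is an illustration of the idea, not part of the research note itself.

```python
# Sketch: compare month-to-date AWS spend against a planned budget.
# Assumes boto3 credentials are configured and Cost Explorer is enabled.
# The budget figure and alert threshold are hypothetical placeholders.
from datetime import date

import boto3

MONTHLY_BUDGET_USD = 10_000.0   # hypothetical planned spend for the month
ALERT_THRESHOLD = 0.8           # warn once 80% of the budget is consumed

ce = boto3.client("ce")
today = date.today()
start = today.replace(day=1).isoformat()
end = today.isoformat()          # Cost Explorer treats "End" as exclusive
# (On the first day of the month this window is empty; a real job would handle that.)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

spend = sum(
    float(r["Total"]["UnblendedCost"]["Amount"]) for r in resp["ResultsByTime"]
)
print(f"Month-to-date spend: ${spend:,.2f} of ${MONTHLY_BUDGET_USD:,.2f}")

if spend > MONTHLY_BUDGET_USD * ALERT_THRESHOLD:
    print("Warning: spending is tracking above plan; investigate before month end.")
```

A check like this, scheduled daily, is the simplest way to detect anomalies before they become a surprise; Azure exposes equivalent data through its Cost Management APIs.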

If you recognize yourself in the above challenges, this new research note is a highly recommended read. For a comprehensive description of the framework and the corresponding mapping of AWS and Microsoft Azure cost management tools, see “How To Manage Public Cloud Costs on AWS and Microsoft Azure”.

The era of applications: microservices and containers in the cloud

It’s finally time to turn the page. We’re now right in the era of applications. Applications now dictate how the rest of the IT world should behave. Infrastructure no longer has the spotlight; it’s taking the passenger’s seat and merely delivering what it’s asked for. What I sort of predicted almost 3 years ago in “Why the developer cloud will be the only one” is now happening.

That’s not good news for some people in this industry, I know. It’s much easier to talk about (and sell) dumb CPU, RAM and storage space than it is to talk about continuous integration, delivery, runtimes, inherently resilient services or even things like the CAP theorem. Sorry guys, if you want to remain in this industry, it’s time to step up and start understanding more about what’s happening up there, at the application level. Luckily, we have things that help us do so. Things like thenewstack.io (cheers @alexwilliams and team), which is doing an awesome job introducing and explaining all these new concepts to a wider audience (including me!).

Learning more about this phenomenon is also why I am going to attend KubeCon in London next week. For those who don’t know, KubeCon is the conference of Kubernetes, a Google-stewarded project for container orchestration and scheduling. Some people will say it’s just one out of many right now but for us, at Flexiant, it’s just *the* one, as it possesses the right level of abstraction to deliver container-based distributed applications. I’m going there to listen to industry leaders, innovators and anyone who has fully understood the new needs and is working every day to solve these new problems in new ways.

If you still don’t know what I’m talking about, read on. I’m going to take a step back and tell you a bit more about the drivers that caused applications to take over and why we needed different architectures, such as microservices, to be able to efficiently deliver and maintain that software that’s eating the world. If you’re short of time, you can simply watch the embedded video at the top of the page which should tell roughly the same story.

TL;DR

Software is now pervasive in people’s everyday lives and it’s handling many of the new business transactions. Application architectures had to evolve in order to cope with the increase in demand. Microservices are the software architecture that, combined with cloud infrastructure and containers, can successfully fulfil these new application requirements.

However, their numerous advantages are counterbalanced by increased complexity, which requires new orchestration tools that are able to join the dots and hold a global view of how things are going. To achieve this, orchestration needs to happen at a higher level of the stack than we were used to in the previous, infrastructure-centric era.

The rise of microservices

We can comfortably acknowledge how the way we do business has changed, how more and more transactions happen online and how software is the only means to execute them. Software that, given the global nature of these relationships, needs to cope with a large number of users. It also needs to constantly deliver performance to satisfy its user base, as well as new features to keep up with the competition, serve new needs and unlock new opportunities. You can easily understand how the traditional way of developing applications simply could not power this kind of software. Monolithic applications were just too hard to scale, slow to update and difficult to maintain; on the other hand, the recently hyped PaaS was just too abstracted (compromising developers’ freedom) and too expensive to deliver the required efficiency.

That’s when microservices came in as the new preferred architectural pattern. Of course they had been around for a while before they were given that name but, as Bryan Cantrill (@bcantrill) said, “[…] only now that we gave a name to it, it has been able to spread much beyond that initial use case”. Whenever we manage to label something in IT, the label helps with diffusion and adoption, and it serves as a baseline for further innovation. This happened with cloud computing, and we are seeing it happen again. For once, thank you marketing!

What exactly are microservices? We call microservices a software architecture that breaks applications down into many atomic, interdependent components that talk to each other using language-agnostic APIs. A single piece of software gets broken into many smaller components, each of them publishing an API contract. Any other piece of that application can make use of a microservice just by addressing its API, without knowing anything about what’s behind it, including which language it’s written in or which software libraries it uses. That unlocks a number of benefits: single components can be developed by different people, shared, taken from heterogeneous sources and reused many times. They can be updated independently, rolled back or grown in number whenever the application as a whole needs to handle more workload. All of this without having to tear down the giant, heavy and slow monolith. Sounds great? Thumbs up.
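
To make the idea of an API contract a bit more tangible, here is a tiny sketch of a hypothetical “pricing” microservice (Python with Flask assumed; the service name, route and data are made up). Other components only ever see the HTTP contract, never the implementation.

```python
# Sketch of a hypothetical "pricing" microservice: one small responsibility,
# exposed behind a language-agnostic HTTP/JSON contract. Flask is assumed;
# the data would normally live in a datastore owned by this service alone.
from flask import Flask, jsonify

app = Flask(__name__)

PRICES = {"sku-001": 9.99, "sku-002": 24.50}

@app.route("/v1/prices/<sku>")
def get_price(sku):
    if sku not in PRICES:
        return jsonify(error="unknown sku"), 404
    return jsonify(sku=sku, price=PRICES[sku], currency="EUR")

if __name__ == "__main__":
    app.run(port=8080)
```

Because consumers depend only on the /v1/prices contract, this component can be rewritten in another language, scaled out or rolled back independently of the rest of the application.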

Infrastructure for microservices

Setting up the infrastructure to host such distributed, complex and ever-changing software architectures would be a real challenge if we did not have the cloud. In fact, infrastructure-as-a-service is just a no-brainer for hosting microservices. Why? Because it provides commodity infrastructure, because you pay for what you use, because it’s everywhere near your end users and because it never (well, almost never) runs out of capacity. But what makes it so perfect for microservices is its programmability, required whenever a distributed application needs to deploy and re-deploy again and again, adapting its footprint to its workload requirements at any given moment. We couldn’t have done it with traditional data centre software. Full stop.

When people think of IaaS, they think about virtual machines, virtual disks and networks. Let’s call it traditional IaaS. And if you look at it, it’s not really fit for purpose for what we described above. In fact, virtual machines have zero visibility of what’s going on on top of their OS, let alone the interdependencies with other virtual machines hosting other services! So we’ve seen configuration management systems (Chef, Puppet, Ansible, etc.) take over this part, executing a number of configuration tasks once the OS has booted in order to reach full application deployment. It has sort of worked so far, but at what price? First, virtual machines are slow. They need to be commissioned and then a full OS needs to come up before it can execute anything. We’re talking about 20-30 seconds if it’s your lucky day, up to several hours if it’s not. Second, virtual machines are heavy. The overhead that the hypervisor carries is just nonsense, as is all that multi-process and multi-user functionality that their OS was born to deliver, when in reality they simply need to host a little (micro) service. And configuration management systems? Even slower. Leaving aside their own weight (I’m looking, for example, at the full Ruby stack with Chef), they typically rely on external dependencies that, unfortunately, can change and generate different errors every time they are called in.

There was the need for something better. Oh wait, we already had something better! Containers. They had existed for a while but they were sitting in the previous infrastructure-centric IT world, dominated by expensive, feature-rich virtual machines. Guess what, containers understood (yeah, they apparently have a thinking brain!) that they could make the leap into the new application-centric world and shake hands with software developers. That’s how they recently became popular, and rightly so, as I wrote before in “The three reasons why Docker got it right”. Containers are just right to host microservices because (1) they’re micro as well, and they can start in a fraction of a second, (2) they’re further abstracted from the infrastructure and hence have no dependency on the infrastructure provider, (3) they are self-contained and don’t rely on external dependencies that can change but, most importantly, they are immutable. Any change to the container configuration can trigger the re-deployment of a new version of the container itself, which can then be rolled back if the result is not what we were expecting.
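
As a rough sketch of that immutability, here is what “changing” a container looks like with the Docker SDK for Python (the SDK, a running Docker daemon and the image tags are assumptions for illustration): you never patch the running container, you replace it with a new one built from a new image.

```python
# Sketch: replace a running container with a new image version instead of
# mutating it in place. Assumes the docker Python SDK and a local daemon;
# the image tags and container name are illustrative.
import docker
from docker.errors import NotFound

client = docker.from_env()

def deploy(image_tag, name="web"):
    # Stop and remove the previous container, if any, then start a fresh one.
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except NotFound:
        pass
    return client.containers.run(image_tag, name=name, detach=True,
                                 ports={"80/tcp": 8080})

deploy("nginx:1.25-alpine")   # initial deployment
deploy("nginx:1.26-alpine")   # an "update" is a new immutable image, new container
```

Rolling back is just another call to deploy() with the previous tag.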

New challenges for new opportunities

As happens every decade or so, the tech world solves big problems by tackling them from the side with disruptive solutions that unlock tremendous new opportunities. The veterans of this industry see these solutions and think “wow, that’s the right way of doing it!”, no question. However, new approaches typically also open up a number of unprecedented challenges that could not even exist in legacy environments. These new challenges demand new solutions, and that’s where the hottest (and most volatile) startup scene is currently playing its game.

Kubernetes, Docker, Mesos and all their ecosystems are right there, trying to overcome the challenges that arise from a multitude of microservices that need to operate independently (providing a scalable and always-available service) as well as with each other, cooperating to make up the whole application’s business logic. Networking challenges, because microservices need to communicate in a timely, predictable and secure way over the network. Monitoring challenges, because you need to understand what’s going on when an end user presses a button and whether any of the components is suffering performance problems and needs to be scaled. And let’s not forget the organisational challenges that arise when you have so many teams working together on so many independent components, involving security, adaptability and access control, to name a few.
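
To give a feel for what orchestration at this higher level looks like, here is a sketch using the official Kubernetes Python client (the client library, cluster access, deployment name and namespace are all assumptions): you declare how many replicas of a microservice you want, and the orchestrator worries about where they run and how they stay up.

```python
# Sketch: ask the orchestrator for more replicas of one microservice instead
# of touching individual machines. Assumes the kubernetes Python client and a
# reachable cluster; the deployment name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()   # use load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Scale the (hypothetical) "checkout" deployment to 5 replicas; the scheduler
# decides on which nodes the extra containers run and keeps the count stable.
apps.patch_namespaced_deployment_scale(
    name="checkout",
    namespace="shop",
    body={"spec": {"replicas": 5}},
)
```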

In the end, as we can’t stop software from eating more pieces of the world, we can simply try to improve its digestive system. Making software transparent to end users, to generate positive emotions and ease transactions, while helping businesses not to miss any opportunity that’s out there: those are the ultimate goals. Making better software is just a means to get there, and microservices, cloud and containers are headed in that direction. See you at KubeCon EU!

2015: the surrendering to the cloud

I thought I’d label 2015 as the year of the surrendering to the cloud. And by this I do not mean the mass adoption that every software vendor was waiting for, but surrendering to (1) the fact that cloud is now pervasive and no longer up for debate and (2) the dominance of Amazon Web Services.

A debate had been going on for way too long about the real benefits of the cloud. And I’m not talking about end customers here, I’m talking about IT professionals, for whom new technologies should be bread and butter. Yet around cloud computing they somehow showed the strongest skepticism, a high dose of arrogance (how many times have I heard “we were doing cloud 20 years ago, we just weren’t calling it that”) and reluctance to embrace change. The great majority of them underestimated the phenomenon to the point of challenging its usefulness or reducing it to virtualisation in some other data center which is not here.

I asked myself why this happened and I came to the conclusion that cloud has simply been too disruptive, even for IT pros. To understand the benefits of the cloud in full, one had to make a mental leap. People naturally learn by small logical next steps, so cloud was interpreted as the natural next step after having virtualised their data centres. But as I wrote more than three years ago in the blog post Cloud computing is not the evolution of virtualisation, the cloud came to solve a different problem and used virtualisation merely as a delivery method to accomplish its goal. Finally, in 2015, I personally witnessed that long-overdue increase in maturity with respect to cloud technologies. Conversations I had with service providers and end customers’ IT pros were no longer about “if” to cloud or not to cloud, but about “what” and “when” instead.

What has helped achieve this maturity? I think it is the fact that nobody could ignore the elephant in the room anymore. The elephant called Amazon Web Services. That cloud pioneer and now well-consolidated player that is probably five years ahead of its nearest competitor in terms of innovation and feature richness. And not only is nobody ignoring it anymore, everyone wants a ride on it.

Many of those IT pros I mentioned are actually employed by major software vendors, maybe even leading their cloud strategy. Their initial misunderstanding of the real opportunity behind cloud adoption led to multi-million-dollar investments in the wrong products. And in 2015 (here we come to surrendering number 2) we saw many of these failures surface and demand real change. Sometimes these changes were addressed with new acquisitions (like the EMC acquisition of Virtustream) or with the decision to co-opt instead of compete.

To pick some examples:

On Tuesday [Oct 6th] at AWS re:Invent, Rackspace launched Fanatical Support for AWS, beginning with U.S.-based customers. Non-U.S. customers will have to wait a while, although Rackspace will offer support for them in beta mode. In addition, Rackspace will also resell and offer support services for AWS’s elastic cloud as it’s now officially become an authorized AWS reseller.

Hewlett-Packard is dropping the public cloud that it offered as part of its Helion “hybrid” cloud platform, ceding the territory to Amazon Web Services and Microsoft’s Azure. The company will focus on private cloud and traditional IT that its large corporate customers want, while supporting AWS and Azure for public cloud needs.

HP Enterprise’s latest strategy, which dovetails with earlier plans to focus on private and managed clouds, is to partner with Microsoft and become an Azure reseller.

What does this tell us? Most software vendors are now late to the game and are trying to enter the market by holding the hand of those who understood (and somewhat contributed to creating) the public cloud market. But don’t we always say the cloud market is heading toward commoditisation? Why does there seem to be no room for a considerable number of players? Certainly HP, VMware or IBM have the investment capacity to grow big and compete head to head with Amazon.

The reality is that we’re far from this commoditisation. While virtual machines may well be a commodity, they’re no more than a tiny part of the whole range of cloud services offered, for example, by AWS (EC2 was mentioned only once during the two main keynotes at AWS re:Invent this year!). The software that enables the full portfolio of cloud services still makes a whole lot of difference, and delivering it requires vision, leadership, understanding and a ton of talent. Millions in investment without the rest was definitely not the way.

Happy 2016!

The 3 reasons why Docker got it right

Containers have been around for a while. But why did they finally get their well-deserved popularity only with the rise of Docker? Was it just a matter of market maturity, or something else? Having worked at Joyent, I had the luck of being in the container business before Docker was even invented, and I would like to give you my take on that.

A brief history of containers

We hear this again and again in computer science: what we think has just been invented by some computing visionary actually has its roots decades earlier. It happened with hardware virtualisation (emulation), with cloud client-server de-centralization (mainframes) and, yes, also with containers.

If you, like me, started to hack with Unix back in the early 90s, you’ll certainly remember chroot. How many times have I used that to make sure my process wasn’t messing around with the main OS environment. And you’ll probably remember FreeBSD jails, which added the kernel-level isolation required to implement the very first OS-level virtualisation system.

Sun Microsystems also believed strongly in containers and developed what it called “zones”, definitely the most powerful and well-thought-out container system. But although Sun believed in containers more than it did in hardware-level virtualisation, the market moved towards the latter, not because it was the right approach but simply because it allowed the guest OS to stay untouched. Unfortunately Sun never got to see much of the results of zones, and nobody knows what really happened to them after the acquisition by Oracle. Luckily another company, Joyent, picked up the legacy of OpenSolaris with its SmartOS derivative. SmartOS is now used as the foundation of the Joyent Cloud, with an improved version of zones at the very core of it.

At the same time, yet another company, Parallels (now Odin), stewarded OpenVZ, a Linux open-source project for OS-level virtualisation. The commercial version of it was called Virtuozzo and Parallels sold it as their virtualisation system of choice.

Since the late 2000s, Joyent and Parallels have been pioneering the container revolution, but nobody talked about them as much as everyone now talks about Docker. Let’s try to understand why.

Positioning of containers

The easy conclusion would be that the market wasn’t just ready yet. We all know how timing is important when releasing something new and I’m sure this also played a role with containers. However, in my view, that’s not the main reason.

Let’s look at how these two companies were selling their container technology. Joyent made it all about performance and transparency: if you use a container instead of a virtual machine (i.e. hardware-level virtualised), you can get an order-of-magnitude performance increase, as well as total transparency and visibility of the underlying hardware. That’s absolutely spot on and relevant. But apparently it wasn’t enough.

Parallels made it all about density. Parallels’ target market was hosting companies and VPS providers, those who sell a single server for something like four bucks a month. So, if you’re selling a container instead of a virtual machine, you’ll be able to squeeze two or three times as many servers onto the same physical host. Therefore you can keep your prices lower and attract more customers. Given that you’re not reserving resources for a specific container, higher density is a real advantage that can be achieved without affecting performance too much. Absolutely true but, again, it did not resonate loudly enough.

The need to lower the overhead

In the last few years, we have also witnessed a desperate need to lower the overhead. Distributed systems caused server sprawl: thousands of under-utilised VMs running what we call microservices, each with heavy baggage to carry, namely a multi-process, multi-user full OS whose features are almost totally useless to them. Hence the research into lowering the overhead, from ZeroVM (acquired by Rackspace) to Cloudius Systems, which tried to rewrite the Linux kernel, chopping off those features that weren’t really necessary to run single-process instances.

And then came Docker

Docker started as the delivery model for the infrastructure behind the dotCloud PaaS; it was using containers to deliver something else. It was using containers to deliver application environments with the agility and flexibility required to deploy, scale and orchestrate. When Docker spun off, it also added the ability to package those environments and ship them to a central repository. Bingo. It turned containers into a simple means to do something else. It wasn’t the container per se, it was what containers unlocked: the ability to package, ship and run isolated application environments in a fraction of a second.

And it was running on Linux. The most popular OS of all time.

Why Docker got it right

All of this made me think that there are three main reasons behind the success of Docker.

1. It used containers to unlock a totally new use case

The use cases that containers unlocked according to Joyent, Parallels and Docker were all different: performance of a virtual server in the case of Joyent, density of virtual servers in the case of Parallels and application delivery with Docker. They all make a lot of sense, but the first two were focused on delivering a virtual server; Docker moved on and used containers to deliver applications instead.

2. It did not try to compete against virtual machines

Joyent and Parallels tried to position containers against virtual machines: you could do something better by using a container instead of a virtual machine. And that was a tough sale. Trying to address the same use case as what everybody already acknowledged as the job of a VM was hard. It was right, but it would have required a much longer time to establish itself.

Docker did not compete with VMs and, as a demonstration of that, most people actually run Docker inside VMs today… even if Bryan Cantrill (@bcantrill), CTO of Joyent, would have something to say about it! Docker runs either on bare metal or in a VM; it does not matter much when what you want to achieve is to build, package and run lightweight application environments for distributed systems.

3. It did not try to reinvent Unix but used Unix for what it was built for

Docker didn’t try to rewrite the Linux kernel. Yet it fully achieved the objective of reducing overhead. Containers can be used to run a single process without the burden of carrying an entire OS. At the same time, the underlying host can make the best use of its multi-process capabilities to effectively manage hundreds of containers.

Don’t get me wrong. I absolutely believe in the superiority of containers compared to virtual machines. I think both Joyent and Parallels did an amazing job spreading the word about their benefits like no one else. However, I also recognise in Docker the unique ability to have made them shine much brighter than anyone ever did before.

In conclusion, co-opting the established worlds of virtual machines and Linux to gain the largest reach, while adding fundamental value to them, was the reason behind Docker’s success. At the same time, looking at containers from an orthogonal perspective, not as the goal but as a means to achieve something other than delivering a virtual server, is what put containers on everyone’s lips.

Why developers won’t go straight to the source

I’m so excited. Last Wednesday Flexiant announced the acquisition of the Tapp technology platform and business. I met the guys behind it quite a while ago and I have never refrained from remarking how great their technology is (see here). I recognized a trend in their way of addressing the cloud management problem and I’m so glad to be part of it right now.

Disclaimer. I am currently working for Flexiant as Vice President Products. I have endorsed this acquisition and I am fully behind the reasons for it and convinced of its potential. This is my personal blog and whatever you read here has not been agreed with my employer in advance; it therefore represents my very personal opinion.

Right after the acquisition (read more about it here) we heard tremendous noise on social networks and in the press. David Meyer (@superglaze) of GigaOm in particular wrote up a few interesting comments and picked up the reasoning behind it well, but he also ended his article with an open question:

“This [the Tapp technology platform] would help such players [Service Providers] appeal to certain developers that are currently just heading straight for EC2 or Google.
 
Of course, this is ultimately the challenge for the likes of Flexiant – can anything stop those developers going straight to the source? That question remains unanswered.”

Well, I’d like to answer that question and say why I’m actually convinced there is a lot of value to add for multi-cloud managers.

Much has been written these days about the business side of the acquisition and I don’t have anything meaningful to add. Instead, I would like to raise a few interesting points from a technology point of view (that’s my job, after all) and unveil the value that is maybe not so obvious at first sight.

Multi-cloud management

Multi-cloud management per se covers a very broad spectrum of meanings. There are multi-cloud managers focused on brokerage, and therefore primarily on getting you the best deal out there. Although this is a good example of how to provide “multi-cloud” value, I’m still wondering how they can actually find a way to compare apples with oranges. In fact, cloud infrastructure service offerings are so different and heterogeneous that being simply a cloud broker makes it extremely difficult to succeed, deliver real value and differentiate. So, point number one: Tapp isn’t a cloud brokerage technology platform.

Other multi-cloud managers deliver value by adding a management layer on top of existing cloud infrastructures. This management layer may be focused on specific verticals like scaling Internet applications (e.g. Rightscale) or providing enterprise governance (e.g. Enstratius, now Dell Multicloud Manager). By choosing a vertical, they can address specific requirements, cut off the unnecessary stuff from the general purpose cloud provider and enhance the user experience of very specific use cases. That’s indeed a fair point but not yet what Tapp is all about.

So why, when using Tapp, won’t developers “go straight to the source”? Well, first of all, let’s make clear that developers are already at the source. In fact, to use any multi-cloud manager you need an AWS account or a Rackspace account (or any other supported provider account). You need to configure your API keys in order to enable communication with the cloud provider of your choice. So if someone is using your multi-cloud manager, it means that they prefer it over the management layer provided by “the source”.

The cloud provider lock-in

One of the reasons behind Amazon’s success is the large portfolio of services they have rolled out. They’re all services that end users can put together to build applications, letting developers focus just on their core business logic, without worrying too much about queuing, notifications, load balancing, scaling or monitoring. However, whenever you use one of these tools, like ELB, Route53, CloudWatch or DynamoDB, you’re locking yourself into Amazon. The more you use multi-tenant proprietary services that exist only on a specific provider, the harder it becomes to migrate your application away.
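
To see how quickly that lock-in creeps into application code, here is a small sketch against DynamoDB (Python with boto3 assumed; the table, its key schema and the items are made up). The call is wonderfully convenient, and it exists only on AWS.

```python
# Sketch: application code written directly against DynamoDB, a service that
# exists only on AWS. Assumes a table named "orders" with partition key
# "order_id" already exists; names and data are hypothetical.
import boto3

table = boto3.resource("dynamodb").Table("orders")

table.put_item(Item={"order_id": "o-1001", "customer": "acme", "total": 42})
item = table.get_item(Key={"order_id": "o-1001"}).get("Item")
print(item)
```

Replicating this application on another provider means rewriting the whole data layer, which is exactly the switching cost discussed here.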

You may claim to be “happy” to be locked into a vendor who’s actually solving your problems so well, but there are a lot of good reasons (“Why Cloud Lock-in is a Bad Idea“) to avoid vendor lock-in as a principle. Many times, this is one of the first requirements of those enterprises that everyone is trying to attract to the cloud.

Deploying the complete application toolkit

Imagine there were a way to replicate those services onto another cloud provider by building them from the ground up on top of some virtual servers. Imagine this could be done by a management layer, on demand, on your cloud infrastructure of choice. Imagine you could consume and control those services using the same API every time. That would enable your application to be deployed in a consistent manner across multiple clouds, relying exclusively on the ability to spin up some virtual servers, which you can find in every cloud infrastructure provider.

This is what Tapp is about. And the advantages of doing that are not trivial; they include:

1. Independence, consistency and compatibility

This is the obvious one. For instance, a user can click a button to deploy an application on Rackspace and another button to deploy a DNS manager and a load balancer. These two would provide an API that is directly integrated into the control panel and therefore consumable as-a-service. Now, the exact same thing can also be done on Amazon, Azure, Joyent or any other supported provider, obtaining the exact same result. Cloud providers suddenly become compatible.

2. Extra geographical reach

Let’s say you like Joyent but you want to deploy your application closer to a part of your user base that lives where Joyent doesn’t have a data center. But look, Amazon has one there and, although you don’t like its pricing, you’re ready to make an exception to gain some latency advantage in serving that user base. If your application is using some of Joyent’s proprietary tools, it would be extremely difficult to replicate it on Amazon. Instead, if you could deploy the whole toolkit using just some EC2 instances, it all becomes possible.

3. Software-as-a-(single)-tenant

While multi-tenancy has been considered a key tenet of cloud computing, I have started to believe that, as long as an end user can consume an application as-a-service, who cares whether it’s multi-tenant or single-tenant?

If you can deploy a database in a few clicks and get your connector as a result, does it really matter whether this database is also hosting other customers or not? Actually, single-tenancy would become the preferred option[1], as the user would not have to worry about isolation from other customers, noisy neighbors, et al. Tony Lucas (@tonylucas) wrote about this before on the Flexiant blog and I think he’s spot on: there is a “third” way and that’s what I think is going mainstream.

The Tapp way

The Tapp technology platform was built to provide all of that: a large set of application-centric tools, features and functions[2] that can be deployed across multiple clouds and consumed as-a-service.

Of course it’s not just about tools. It’s also about the application core, whatever it is. The Tapp technology also solves that consistency problem by pushing application deployment and configuration into Chef recipes, as opposed to cloud provider-specific OS images or templates[3]. Every time you run those recipes you get the same result, on any cloud provider. In fact, to deploy your application you’ll just need the availability of vanilla OS images, like Ubuntu 14.04 or Windows 2012 R2, which, honestly, are offered by any cloud provider.

All those end users who want to deploy applications without feeling locked into a specific provider have so far had only one way of doing it: DIY (“do-it-yourself”). They would have to maintain and operate OS images, load balancers, DNS servers, monitors, auto-scalers, etc. That’s a burden that, most of the time, they’re not ready to take on. They don’t want to spend time deploying all those services that end up being all the same, all the time. Tapp takes that burden away from them. It deploys applications and service toolkits in an automated fashion and provides users just with the API to control them. And this API is consistent, independent of the chosen cloud provider. This is the key value that, I believe, will prevent developers from going straight to the source.
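
Tapp’s own API is not shown here, but as a rough analogy of what a provider-independent “start from vanilla images” call looks like, here is a sketch using Apache Libcloud (credentials, regions and the image/size selection are placeholders, and listing images is simplified): the same code path provisions a server on two different providers.

```python
# Sketch: one code path, two providers. Apache Libcloud is used purely as an
# analogy for a provider-independent compute API; it is not Tapp. Credentials,
# regions and the image/size selection are placeholders, and listing every
# image on EC2 is simplified here for brevity.
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

def boot_vanilla_server(driver, name, image_keyword="Ubuntu"):
    image = next(i for i in driver.list_images() if image_keyword in (i.name or ""))
    size = sorted(driver.list_sizes(), key=lambda s: s.ram)[0]   # smallest size
    return driver.create_node(name=name, image=image, size=size)

ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY", region="us-east-1")
rackspace = get_driver(Provider.RACKSPACE)("USERNAME", "API_KEY", region="iad")

for driver in (ec2, rackspace):
    node = boot_vanilla_server(driver, name="app-node")
    print(driver.name, node.id, node.state)
```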

1. Multi-tenancy would be the preferred option for the Service Provider because it translates into economies of scale. However, economies of scale often result in cost optimisation and end-user price reductions and can therefore be considered an indirect advantage for end customers as well.

2. Tapp features include: application blueprinting with Chef, geo-DNS management and load balancing, network load balancing, auto-scaling based on application performance, application monitoring, object storage and FDN (file delivery network).

3. It’s worth mentioning that pushing application deployment into configuration management tools like Chef or Puppet significantly affects deployment time. That’s why it’s strongly advised to find the optimal balance between what is built into the OS image and what is left to the configuration management tool.

Who’s the clever one? The cloud or the application?

During my recent experience at Structure:Europe I engaged in a discussion about whether the “workload” should “care” about the cloud or not. It is a great topic to debate and I decided to write up a few more thoughts below. But let me recall the conversation first.

The trigger was a sentence by @ditlev, CEO of OnApp, that we also heard during his session with @tonylucas during the second day of Structure, and that I commented on Twitter with:

Among others, @khushil, an engineer at Mail Online, did not agree with me, as we can read in his reply: “wrong way around, the cloud needs to understand the workload to scale to support. other way misses the point“.

Ditlev obviously picked this up and explained further that “workloads should be agnostic, your infrastructure (incl cloud)/platform should adapt” and that he would “like to see abstraction layers between workloads and infrastructure“.

What’s the source of the workload?

One may be confused and agree in principle with everyone, as every tweet seems to be reasonable, but I think first we should make some assumptions upfront, starting from the definition of workload:

The amount of work performed by an entity in a given period of time, or the average amount of work handled by an entity at a particular instant of time. The amount of work handled by an entity gives an estimate of the efficiency and performance of that entity. In computer science, this term refers to computer systems’ ability to handle and process work.

In computer science, we indeed have several abstraction layers, and that happens with cloud computing too (more insights in one of my previous posts here). In such a scenario, however, there are also many “entities” performing some work at those different layers. So which entity’s workload were we talking about? Given what our companies do, I bet we were referring to cloud infrastructures handling and processing work that is generated by the applications running on top of them.

Having clarified the context, I shall explain why applications should instead care about the cloud, without expecting any magic to happen down there.

The new era of IT infrastructures

We are witnessing a tremendous change in the core functioning of IT infrastructures. Up to the advent of cloud computing, the general approach taken by IT professionals was to manually provision a specific footprint made of servers, CPUs, memory, storage and network devices. That footprint was probably over-sized in order to accommodate predictable workload growth over time. Applications were designed to abstract away from the infrastructure; they simply demanded more CPU cycles or IOPS whenever they wanted, regardless of actual availability. The result of this approach has been a tremendous increase in the total cost of ownership of IT departments. Hardware was required to be reliable, fast and able to accommodate peaks in workload without any performance loss. This all came at a price.

Two main drivers came to disrupt this and trigger a drastic change. The first one is mobile computing, i.e. the demand for Internet services that suddenly became ubiquitous, generating unpredictable workload demand from anywhere in the world, at any time of day. The second driver is the growing availability of large quantities of data, user- and machine-generated Big Data, that needs to be stored and analyzed.

To accommodate the above scenarios, IT infrastructures had to become completely software-driven, highly elastic and extremely scalable. With cloud infrastructures, it is in fact possible today to provision an infrastructure footprint using a few mouse clicks or a couple of function calls within a few lines of code. The size of the infrastructure can be adapted to the workload required at a specific moment; there is no more need to over-provision, as resources can grow and shrink through a few simple automated operations.

And with the availability of software-consumable infrastructure, applications are also changing their approach, becoming much more infrastructure-aware. In fact, in case of resource shortage, applications can request more via API calls, growing the infrastructure footprint as required. In the same way, they’re now able to handle faults, making expensive highly available infrastructures completely worthless! (I blogged about this before here.)

Think application!

All of this is to explain that no, the cloud (infrastructure) does not have to understand the workload and does not have to adapt to it automatically. Even if that were theoretically possible, the infrastructure lacks the right metrics to recognise a real need for more resources. Instead, it is the application itself that actively adapts its own infrastructure, because only the application understands how the user experience is going, which is the only metric that should be taken into consideration when measuring application performance.
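
A minimal sketch of what that looks like in practice (Python with boto3 against an AWS Auto Scaling group; the group name, the latency measurement and the thresholds are hypothetical): the application watches its own user-experience metric and asks the cloud for more capacity through an API call.

```python
# Sketch: the application reacts to its own user-experience metric and grows
# its infrastructure via an API call. The Auto Scaling group name, latency
# source and thresholds are hypothetical placeholders.
import boto3

GROUP = "web-asg"
LATENCY_TARGET_MS = 500.0

def p95_checkout_latency_ms():
    # Placeholder: in a real application this comes from the app's own
    # instrumentation (request logs, APM), not from infrastructure metrics.
    return 840.0

autoscaling = boto3.client("autoscaling")

if p95_checkout_latency_ms() > LATENCY_TARGET_MS:
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[GROUP]
    )["AutoScalingGroups"][0]
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=GROUP,
        DesiredCapacity=group["DesiredCapacity"] + 2,
        HonorCooldown=True,
    )
```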

Are you currently auto-scaling your infrastructure based on CPU utilisation? When it happens, are you sure it corresponds to a real improvement in the user experience? Or simply to a higher bill from your cloud provider?

And what if your VM goes down? Will you blame your cloud provider and then blog about its ridiculous SLA penalty fees, which never correspond to your real loss of business? Isn’t it more effective to make sure that you understand the VM is down and that you (your application, I mean) take the necessary steps to fail over elsewhere?

With a clever application, infrastructure can be seen just as a toolbox. And you need to know how to use those tools in order to build highly available, auto-scaling applications. Don’t expect your screwdriver to build the room for your newborn baby on your behalf.

Or more simply, as @AmberCoster of AppDynamics said to me in another Twitter conversation: “Think application, not just infrastructure!”.

Why the developer cloud will be the only one

It has been a while since my last blog post. My new engagement with Flexiant is keeping me so busy with customers that I have plenty of reasons to think a lot but not enough time to write down what I’m thinking about. Customers are indeed one of the most interesting sources for people like me, who always try to identify common patterns in behaviour and decisions (the “technology trends” of this blog’s previous title).

Yesterday, James Urquhart (@jamesurquhart) wrote about “traditional IT buyers” versus “developers” consuming cloud services, with reference to the different strategic approach taken by VMware in its vCloud Hybrid Service proposition. Just the fact that we have to use the word “traditional” when speaking about one of the most innovative sectors of our industry says it all about which of the two approaches will win.

Indeed, I concur with James when he says that VMware’s strategy will be successful in the next few years but will likely fail in the long term. Following his brilliant observations, I still feel like adding something more for all those companies who are trying (or want) to capture some cloud computing business by running after enterprise IT departments.

The IT infrastructure demand

First off, IT departments don’t want to get to the cloud. This is known and has been said enough, and it is mainly due to their self-preservation instinct. But they also can’t resist this revolution. So they’re interpreting the cloud opportunity just as a shift in their data centre location, demanding the exact toolset they’ve been using in their on-premises deployments. On the other side, service providers are trying to listen to those customers and give them what they are asking for. Usually, nothing gets real because the offer hardly meets the demand, mainly due to the actual unwillingness of the IT crowd to outsource data centre control. And even if all their requirements seem to reveal the existence of a real market opportunity, I believe they’re actually driving a false demand.

Let’s take a look at the chart below. It’s far from accurate as it’s not based on real numbers. The lines are straight because I simply wanted to help visualise the trend of what I think is happening.

On the overall demand of IT infrastructure, the one represented by the green area is generated by IT professionals, while the blue one is generated by developers or applications (yes, don’t forget that applications are now capable of driving IT infrastructure demand all by themselves, based on workload triggers).

Today we’re at that point in time where those two areas overlap, meaning that while there is still demand for infrastructure from IT departments, it is showing a declining pattern. According to James, VMware is really going to win the biggest slice of that declining green area but will fall short because the green area is going to disappear. And most service providers who are evolving from their colo/hosting business really seem to run after the same green demand, fighting for a shrinking market, while others are just making billions thanks to developers and their applications. And we hear traditional (ah! This word again) service providers calling that type of demand simply “test and dev”?

Test and dev or software-consumable infrastructure?

We really need to move away from thinking the Amazon cloud is good for test and dev and not for production. So many times I have heard the objection from providers that Amazon is just for developers because of the lack of built-in HA in its infrastructure. There is no need to recall how many transaction-sensitive companies are making billions on that cloud.

People who say that are actually missing the point. Developers like and use the Amazon cloud because it is software-consumable. Because their applications can spin up a server to handle extra capacity in the same way they handle their core business logic. Also, they can handle infrastructure failure by replicating data stores, maybe across different geographic sites, and by managing failover in case of an outage of the underlying infrastructure. This is the real potential of IaaS: not just the OPEX vs CAPEX scenario, not only the commoditisation of computing and storage resources, but the ability to fully automate infrastructure provisioning and configuration as part of any application’s business logic.
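
Here is a small sketch of that software-consumable idea taken to its logical conclusion (Python with boto3; the regions, AMI ID and the health check are placeholders): the application itself detects that its primary footprint is unhealthy and re-creates capacity somewhere else.

```python
# Sketch: the application, not an operator, reacts to an infrastructure
# failure by launching replacement capacity in a second region. Regions,
# AMI ID and the health check are hypothetical placeholders.
import boto3

PRIMARY_REGION, FAILOVER_REGION = "eu-west-1", "us-east-1"
FAILOVER_AMI = "ami-0123456789abcdef0"   # placeholder image id

def primary_is_healthy():
    # Placeholder: a real check would probe the application's own endpoints.
    return False

if not primary_is_healthy():
    ec2 = boto3.resource("ec2", region_name=FAILOVER_REGION)
    ec2.create_instances(
        ImageId=FAILOVER_AMI,
        InstanceType="t3.micro",
        MinCount=2,
        MaxCount=2,
    )
    print(f"Primary {PRIMARY_REGION} unhealthy; capacity launched in {FAILOVER_REGION}.")
```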

Build and launch your developer cloud

Now that we know what developers like about the cloud, should we expect enterprise developers to stop deploying applications on-premises and finally do so in the cloud, thus becoming part of the blue demand? That’s what many people may think, but then I came across this from James Urquhart again:

I was asked to speak at the Insight Integrated Systems Real Cloud Summit in Long Beach, Calif. […] The RealCloud audience was primarily medium size businesses (between 500 and 10,000 employees), and I jumped at the chance to meet a segment of the IT industry with which I rarely interact.

About half way through my talk […] I came to a point I thought was very important to most software developers. On a whim, I asked this audience how many of them saw custom software development as a key part of their IT strategy. I expected about half the room of 100 or so to respond positively.

One hand went up at the back of the room. (It turns out that was someone from NASA’s Jet Propulsion Laboratory. Well, duh.)

Boom. Any discussion about why developers were bypassing IT to gain agility in addressing new models was immaterial here. The idea that Infrastructure as a Service and Platform as a Service were going to change the way software was going to be built and delivered just didn’t directly apply to these guys.

Well, that explains the success of packaged software so far. But does that mean that the demand for hosting packaged software will gradually move to the cloud? I don’t believe so. Instead, I believe that packaged software will simply be re-invented via the SaaS model. It has already been demonstrated how extremely successful and broad in adoption that model can be (Salesforce.com, Workday, etc.).

I believe enterprises will eventually move from in-house hosted packaged software directly to consuming SaaS without passing through intermediate steps, like running the same packaged software in some “enterprise” or “IT” IaaS clouds. Everyone’s waiting for enterprises to start moving their workload, but eventually they will make a big jump all at once. Maybe even without letting their IT departments contribute to that choice (as it’s mainly a business decision).

In the end, if you’re a service provider who aims at finally getting enterprise workloads to the cloud, you’d probably better focus on standing up a software-consumable cloud (a.k.a. a “developer cloud”) and go after all those SaaS companies, small and big, to get your slice of the real cloud market. Stop thinking your HA cloud is better than “test and dev” clouds (like Amazon?) and stop excluding web-scale companies (as SaaS companies are sometimes referred to) from your target market; give them the importance they deserve instead. If you don’t do that, one day you may lose the entire enterprise IT business at once.

IaaS eats the biggest slice

When I read market research firms saying that SaaS is the cloud model most adopted by enterprises, I can’t help but concur, given the ease of use and the simplicity of integration with existing IT assets. Actually, the integration ends up being minimal and entirely in developers’ hands, who can make use of the SaaS service’s usually comprehensive API, completely bypassing their internal IT department.

So what about IaaS and PaaS? Should those who invested heavily in those two cloud models start worrying about their choice? No way. As my provocative title says, I am fairly convinced that the lower layers of the cloud stack will eventually share the whole cloud business, with IaaS eating the biggest slice of it, both directly and indirectly.

I am actually writing this post to give further insight and supporting data to a tweet of mine I wrote some time ago:

Now let’s see what I mean by indirectly.

Layers over layers

In computer science we are used to having layers upon layers, called “abstraction layers”, each of them aimed at hiding the complexity of the layer below while providing some kind of added value and an interface for the layer immediately above to access resources. With the rise of cloud services, the approach of the community has been the same again: using abstraction layers to handle the increased complexity of IT infrastructures, which now involve thousands of resources to be managed and orchestrated.

As mentioned above, there are three main cloud layers largely accepted by the community: IaaS, PaaS and SaaS. However, many cloud providers don’t fit exclusively in one of them as they tend to enlarge their offering with different services at multiple layers of the stack. Since this creates a little confusion among cloud consumers, I want to take the opportunity to present them one more time from a different perspective, trying to concentrate on what added value each layer brings to the stack.

Ok, I still have to work a bit on my ability to represent concepts visually, but I hope the above chart helps clarify things. First, we have raw resources at the bottom of the stack, and if we add some elasticity we obtain an IaaS. This is over-simplified, as there is certainly more value brought by any good IaaS layer; however, for the sake of understanding, I’ll limit myself to the most evident one: elasticity, a.k.a. the ability to create, destroy, enlarge and shrink computing resources on demand via an API.

Let’s now move up the stack: we have an IaaS layer and we decide to add some DevOps tools and operations such as middleware, auto-scaling, application deployment and code validation mechanisms. While doing that, if the principle of abstraction layers is respected, we no longer need to care about how to handle raw resources, since the IaaS provides us with tools to automate their management. What we obtain is a Platform-as-a-Service, an environment where multiple users can deploy their applications.

Finally, let’s take some business logic that solves a specific problem (e.g. CRM, ERP, etc.) and, provided of course that we have done all the multi-tenancy work and that we want it to be consumed as-a-service, we are now working at the SaaS layer. At this stage, we can concentrate on making our software more powerful, adding killer features and conquering our market niche. We don’t need (nor do we want, right?) to take care of all the infrastructure that serves our users, nor do we want to know what hardware lies underneath, as those would just be a distraction from our core business focus.

Sounds logical, doesn’t it? All the layers stack up together so nicely and look so complementary. Indeed they are. In fact, cloud companies end up buying services from other cloud companies that operate at a lower level of the stack. For further evidence, I did some small research and found out that most SaaS companies deploy their software on top of a PaaS provider that, in turn, deploys its automation layer on top of one (or more) IaaS providers. What does that mean? That if an enterprise adopts a SaaS cloud service and pays for it, eventually some dollars will end up in some IaaS provider’s pocket. Whether you like it or not.

The infrastructure of PaaS providers

To bring supporting examples, let’s check the infrastructures of the most popular PaaS providers, as they’re most likely obliged to reveal their backends in order to inform their customers about their data center locations.

Most popular PaaS services rely on IaaS providers (supported languages in parentheses; IaaS backends after the colon):

  • Heroku (Ruby, Java, Python, Node.js): AWS
  • Engine Yard (Ruby, PHP, Node.js): AWS, Verizon Terremark
  • AppFog (PHP, Java, Node, .NET, Ruby): AWS, Rackspace, HP Cloud Services
  • OpenShift (PHP, Java, Node.js, Python, Ruby): AWS, Rackspace
  • Nodejitsu (Node.js): Joyent
  • AppHarbor (.NET): AWS
  • CloudBees (Java): AWS, HP Cloud Services

The cloud market is known to be huge and it is mandatory for every player in the IT industry today to take up a position, a vision and a direction within this space. If you’re an investor who wants to participate in the cloud opportunity, it is extremely valuable to understand how the different cloud models currently share the market. On the other hand, if you’re an enterprise evaluating the adoption of any cloud service, you should be concerned about who’s running the game up and down the cloud stack, as this will eventually affect your service level, your security and your data integrity.

POST UPDATE on 4/8/2013

I’ve been asked by Jack Clarke (@mappingbabel) of ZDNet on what basis I singled out the above PaaS providers as “most popular”. The answer I gave him is press coverage as well as experience “in the field”, meaning talking to customers and gathering experiences. It’s a simple personal feeling which is not based on any scientific data. I’m actually a field person and not a researcher. Besides, I don’t think any of those providers is really willing to disclose customer data.

However, it’s worth mentioning that there are other PaaS services offered by large vendors that are difficult to rank in terms of popularity; the press usually refers to the vendor as a whole and, since they’re no longer in the startup phase, you can’t even measure the amount of funding they’ve received from VCs. Despite the difficult measurability, I owe them a mention in this post for being active players in the PaaS landscape, contributing effectively to the cloud awareness battle.

And one can assume the above theory holds for those providers as well, for example with Elastic Beanstalk running on top of EC2 and App Engine running on top of Compute Engine. However, given that those IaaS backends are provided by the same vendor as the PaaS, they don’t trigger any economic transaction and thus no real shift in the measurement of the market size.

The truth on enterprise private clouds

Oh yes!

It feels so great when someone among the most recognized high tech analysts out there writes down exactly what you think. It’s an endorsement of your own thinking to read James Staten (@Staten7) from Forrester Research on “Why your enterprise private cloud is failing”, where he describes so clearly what you’ve always been thinking and trying to explain.

His blog post says two important things:

  1. Enterprise private clouds are failing. As I’ve also written in a Quora answer to “What is the future of private cloud?”, no matter what marketing and vendors are saying, efficient, large-scale production enterprise private clouds don’t exist as of today. In my opinion, cloud is such a new model of delivering IT infrastructure that the culture of using it won’t reach the enterprise through a bottom-up approach (evolving from their current infrastructure) but only in a top-down direction (deploying into public clouds and then migrating back in-house). A revolution as opposed to an evolution.
  2. Enterprise private clouds are failing due to the wrong approach taken by the IT department. They treat the cloud just like an infrastructure stack instead of a service, because, as Staten says, “you are building the private cloud without engaging the buyers who will consume this cloud”.

And of course, I wasn’t the only one recognizing “the truth” in James Staten’s words. His opinion on failing private clouds echoed throughout the web, generating a large consensus among cloud experts and visionaries such as James Urquhart (@jamesurquhart):

The two cloud models

Much has already been written about the different approaches to the cloud, and big brains have concluded that all of them can be summarized in two different cloud models. They have been given various names depending on the author, but I shall refer to the nomenclature of the OpenNebula blog post.

  1. Datacenter Virtualization model: cloud as an extension of virtualization in the datacenter. Some more automation, a service catalogue, etc. A VMware vCloud-like approach.
  2. Infrastructure Provision model: a powerful service-oriented API to provision commodity computing resources effectively and efficiently. An AWS-like approach.

With reference to the above models, James Staten is basically saying that the Datacenter Virtualization cloud model is wrong. That is not the right approach to implementing a private cloud. Because “a Porsche is [not] just a Volkswagen with better engine, tires, suspension and seats.”

Awesome. I’ve been convinced of that for some time. If you read my very first post, “Cloud computing is not the evolution of virtualization”, as the title says, I have always considered the Infrastructure Provision model to be the only possible cloud implementation, to the point of excluding the Datacenter Virtualization model from even being called cloud.

And I don’t think this was an extremist approach. As I have said many times, cloud is a tremendous opportunity for the enterprise to start thinking differently. In my opinion, cloud will be able to reach enterprise IT departments only through a top-down approach: from a public cloud implementation back in-house. Enterprise cloud consumers will try (and love) the public cloud and eventually drive the implementation of something similar within the enterprise itself. But trying to transform the current virtualized infrastructure into a private cloud will simply fail. Fail to deliver a truly elastic and service-oriented cloud infrastructure to the real cloud consumers.

Vendors didn’t get it

So what? Did all enterprise IT departments simply not get it? What’s their problem? It’s a vendor problem. Enterprise software vendors didn’t get it. Every one of them started to think of the cloud as an opportunity (that’s good, as a matter of principle) and they all just tried to profit from the hype. For virtualization technology vendors, that was an easy path: adding a new product to their portfolio to “cloudify” the existing virtualization products, which would be a natural extension to existing implementations within the enterprise. The perfect scenario for IT departments. A pity it doesn’t work to deliver what cloud consumers are looking for.

But recently we heard something new from virtualization vendors. They have actively started perceiving public clouds, and AWS in particular, as a threat to the workload which is (was?) currently running on their virtualization technology and which is failing to migrate to private clouds for the above reasons. Despite their very rich cloud product portfolios, workload is still moving from enterprises to commodity public clouds. Why?

Hearing VMware CEO Pat Gelsinger say that he finds it hard to believe they cannot beat a company that sells books makes me think they really didn’t get the point at all. Good luck, guys.