Abstraction As A Service

December 19, 2017, 7:55 pm

The birth of abstraction layers

The last five decades of computing have seen a gradual progression of architectural abstraction layers. Around 50 years ago, IBM mainframes gained virtualization capabilities. Despite explosive progress in the sophistication of hardware following Moore’s Law, there wasn’t too much further innovation in abstraction layers in server computing until well after the dawn of the microcomputer era, in the early 2000s, when virtualization suddenly became all the rage again. (I heard a rumour that this was due to certain IBM patents expiring, but maybe that’s an urban myth.) Different types of hypervisors emerged, including early forms of containers.

Then we started to realise that a hypervisor wasn’t enough, and we needed a whole management layer to keep control of the new “VM sprawl” problem which had arisen. A whole bunch of solutions appeared, including the concept of “cloud”, but many were proprietary, so after a few years OpenStack came to the rescue!

The cloud era

But then we realised that managing OpenStack itself was a pain, and someone had the idea that rather than building a separate management layer for managing OpenStack, we could just use OpenStack to manage itself! And so OpenStack on OpenStack, or TripleO as it’s now known, was born.

Within and alongside OpenStack, several other new exciting trends emerged: Software-Defined Networking (SDN), Software-Defined Storage (e.g. Ceph), etc. So the umbrella term Software-Defined Infrastructure was coined to refer to this group of abstraction layers.

The container era

Whilst OpenStack was busy growing up and moving past the Peak of Inflated Expectations, all of a sudden Docker and containers burst onto the scene and provided a lot of new buzzwords to get everyone excited again. But after the excitement started to fade, that familiar sinking feeling came back with the realisation that just like VMs, containers need something to manage them.

But then Kubernetes leapt to the rescue! And all the excitement returned. Except that of course then you need something to manage Kubernetes, but fortunately we already had OpenStack, so we could just use that! And so Magnum was born. And since Kubernetes is so awesome, we realised that we could also use it as the basis for deploying OpenStack. At the recent OpenStack Summit in Sydney, we saw the continued rise in popularity of running both Kubernetes on OpenStack, and OpenStack on Kubernetes.

Looking to the future

But that still leaves the pesky job of managing the raw hardware to put all this stuff on top of. Fortunately there are services you can pay for so that other people do that for you, and they even have APIs you can hook into! This is called public cloud. And even better, many companies use OpenStack to drive their public clouds.

So now we’re running Kubernetes on OpenStack on Kubernetes on OpenStack. And now we’re done! Right?

Well … if we’re to learn anything from this history, it should be that we’ll always find more good reasons for new abstraction layers. I mean, there’s already a huge amount of work going into things like Cloud Foundry on the PaaS layer, NFV in the telco space, serverless computing, … And I hear that Kubernetes is a great platform for running Cloud Foundry, just like it is for OpenStack. And wouldn’t OpenStack be a cool platform to provide inside Cloud Foundry, e.g. for people who just want to quickly try it out? So who knows, maybe in the next few years we’ll have OpenStack on Cloud Foundry on Kubernetes on OpenStack on Kubernetes on OpenStack.

A proposal to make things simpler

Of course this starts getting a bit unwieldy. Every time we introduce a new abstraction layer there’s extra complexity to deal with. But that’s OK, because we can always deal with complexity by abstracting it away! It’s a bit like the cyber-equivalent of delegating difficult tasks to someone else. So I’d like to propose a new concept, and corresponding meta-component of the overall architecture:

Whenever we realise we need a new abstraction layer, rather than having to deal with the complexity of deploying and managing this layer, we could just invoke APIs on a central service which takes care of that complexity for us. We could call this new concept (drum roll, please…) Abstraction As A Service, or AaaS. (If you’re British or Australian you may prefer to pronounce this as if there were an “r” in the middle, to distinguish it from similar-sounding existing words such as “as”.) Or maybe we should call it Sweeping Stuff Under The Carpet As A Service? Or SSUtCaaS for short, which can be pronounced “suitcase” (thanks to Florian for pointing this out).

With AaaS, if we wanted say, NFV on serverless on CF on Kubernetes on OpenStack on Kubernetes on OpenStack on public cloud on COBOL, we could simply write some declarative YAML or JSON describing the stack we want, push it to the AaaS REST API endpoint via an HTTP POST, and it would set the whole thing up for us automatically. We could build any number and combination of abstraction layers we would possibly need, so at this point the job could be considered well and truly done, once and for all!
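To give you a flavour, a purely hypothetical AaaS stack definition might look something like this. (Every key, layer name, and API version here is invented for the sake of the joke — AaaS does not, mercifully, exist.)

```yaml
# Declarative stack definition for the (entirely fictional) AaaS API.
# POST this to your AaaS endpoint and the whole tower of abstractions
# gets conjured into existence, bottom layer first.
apiVersion: aaas/v1
kind: Stack
metadata:
  name: turtles-all-the-way-down
spec:
  layers:            # ordered bottom-up
    - cobol          # the bedrock of civilisation
    - public-cloud
    - openstack
    - kubernetes
    - openstack
    - kubernetes
    - cloud-foundry
    - serverless
    - nfv
```

One HTTP POST later, and the turtles stack themselves.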

Except, ya know, we’d need a way to deploy and manage our AaaS service, of course. Maybe we could build an AaaSaaS service for that …

Sheepish postscript / disclaimer

P.S. I know, it’s a terrible joke if you have to explain it, but based on previous experiences of my dry British humour being misunderstood (especially given the inevitably international audience), I feel the need to point out that this blog post was intended as nothing more than poking a bit of gentle fun at the cloud software industry. I’m a huge fan of all the technologies mentioned here, and yes, I’m even in favour of multiple abstraction layers, despite occasionally wondering if we’ve all gone a bit insane 😉 Thanks to Florian, Andrew, and Dirk for reviewing an earlier draft, but I take responsibility for any mistakes or any offence unintentionally caused!
