Abstraction As A Service

December 19, 2017 7:55 pm

The birth of abstraction layers

The last five decades of computing have seen a gradual progression of architectural abstraction layers. Around 50 years ago, IBM mainframes gained virtualization capabilities. Despite explosive progress in the sophistication of hardware following Moore’s Law, there wasn’t too much further innovation in abstraction layers in server computing until well after the dawn of the microcomputer era, in the early 2000s, when virtualization suddenly became all the rage again. (I heard a rumour that this was due to certain IBM patents expiring, but maybe that’s an urban myth.) Different types of hypervisors emerged, including early forms of containers.

Then we started to realise that a hypervisor wasn’t enough, and we needed a whole management layer to keep control of the new “VM sprawl” problem which had arisen. A whole bunch of solutions appeared, including the concept of “cloud”, but many were proprietary, and so after a few years OpenStack came along to the rescue!

The cloud era

But then we realised that managing OpenStack itself was a pain, and someone had the idea that rather than building a separate management layer for managing OpenStack, we could just use OpenStack to manage itself! And so OpenStack on OpenStack, or Triple-O as it’s now known, was born.

Within and alongside OpenStack, several other new exciting trends emerged: Software-Defined Networking (SDN), Software-Defined Storage (e.g. Ceph), etc. So the umbrella term Software-Defined Infrastructure was coined to refer to this group of abstraction layers.



Announcing OpenStack’s Self-healing SIG

November 24, 2017 4:15 pm

One of the biggest promises of the cloud vision was the idea that all infrastructure could be managed in a policy-driven fashion, reacting to failures and other events by automatically healing and optimising services.

In OpenStack, most of the components required to implement such an architecture already exist, and they are nicely scoped, for the most part without too much overlap.

However, there is not yet a clear strategy within the community for how these should all tie together. (The OPNFV community is arguably further ahead in this respect, but hopefully some of their work could be applied outside NFV-specific environments.)

Designing a new SIG

To address this, I organised an unofficial kick-off meeting at the PTG in Denver, at which it became clear that there was sufficient interest in this idea from many of the above projects to justify creating a new “Self-healing” SIG. However, there were still open questions:

  1. What exactly should be the scope of the SIG? Should it be for developers and operators, or also end users?
  2. What should the name be? Is “self-healing” good enough, or should it also include, say, non-failure scenarios like optimisation?



Squash-merging and other problems with GitHub

August 16, 2017 6:45 pm

(Thanks to Ben North, Colleen Murphy, and Nicolas Bock for reviewing earlier drafts of this post.)

In April 2016, GitHub announced a new feature supporting squashing of multiple commits in a PR at merge-time (announced on April 1st, but it was actually bona fide 😉).

I appreciate that there was high demand for this feature (and similarly on GitLab), and that many projects apparently have a “squash before submitting a PR” policy, but I’d like to contend that this is a poor man’s workaround for the lack of a real solution to the underlying problems.

Why squash-merge?

So what are the underlying problems which made this such a frequently requested feature? From reading the various links above, it seems that by far the biggest motivator is that people frequently submit pull requests (or merge requests, in GitLab-speak) which contain multiple commits, and these commits are seen as too “noisy” / fine-grained. In other words, there is a desire not to pollute the target/trunk branch (e.g. master) with these fine-grained commits, and instead to merge only larger, coarser-grained commits.

But where does this desire come from? Well, if the fine-grained commits which accumulate on a PR branch are frequently amendments to earlier commits in the same PR (like “oops, fix typo I just made” or “oops, fix bug I just introduced”), then this desire is entirely understandable, because no one wants to see that kind of mess on master. However, the real problem here is that that kind of mess should never have made it onto GitHub in the first place – not even onto a PR branch! It should instead have been fixed in the developer’s local repository. That is why there is a whole section in the “Pro Git” book dedicated to explaining how to rewrite local history, and why git-commit(1) and git-rebase(1) have native support for creating and squashing “fixup” commits into the commits which they fix.
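
By way of illustration, a minimal sketch of that local-cleanup workflow might look like the following (the file path, the SHA abc1234, and the target branch origin/master are hypothetical placeholders, not taken from any particular project):

    # Stage the correction for the earlier commit in the PR branch.
    git add path/to/file

    # Record it as a fixup of that commit (abc1234 is a placeholder SHA).
    git commit --fixup=abc1234

    # Rewrite the branch, automatically squashing the fixup into abc1234.
    # origin/master stands in for whatever the PR's target branch is.
    git rebase -i --autosquash origin/master

The result is that the typo or bug fix vanishes into the commit it belongs to before anyone else ever sees it.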

Use the force-push, Luke

If an existing PR needs to be amended, make the change and then rewrite local history so that it’s clean. The new version of the branch can then be force-pushed to GitHub via git push -f, which is an operation GitHub understands and in many situations handles reasonably gracefully. I have previously blogged about why this way is better, but one way of quickly summarising it is: don’t wash your dirty linen in public any more than you have to.
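
In practice that might look something like the following sketch (my-pr-branch is a hypothetical branch name; --force-with-lease is a slightly more cautious variant of -f which refuses to overwrite remote commits you haven’t already fetched):

    # Overwrite the remote PR branch with the cleaned-up local history.
    git push -f origin my-pr-branch

    # Or, more cautiously, only overwrite if the remote hasn't moved:
    git push --force-with-lease origin my-pr-branch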


