
managing your github notifications inbox with mutt

By , October 5, 2014 1:59 pm

Like many F/OSS developers, I’m a heavy user of GitHub. This means I interact with other developers via GitHub multiple times a day. GitHub has a very nice notifications system which lets me know when there has been some activity on a project I’m collaborating on.

I’m a fan of David Allen’s GTD (“Getting Things Done”) system, and in my experience I get the best results by minimising the number of inboxes I have to check every day. So I use another great feature of GitHub, which is the ability to have notification emails delivered directly to your email inbox. This means I don’t have to keep checking the notifications page on GitHub in addition to my email inbox.

However, this means that I receive GitHub notifications in two places. Wouldn’t it be nice if, when I read them in my email inbox, GitHub could somehow notice and mark them as read on the website too, so that when I look there, I don’t get reminded about notifications I’ve already seen in my inbox? Happily, the folks at GitHub already thought of this, and came up with a solution:

If you read a notification email, it’ll automatically be marked as read in the Notifications section. An invisible image is embedded in each mail message to enable this, which means that you must allow viewing images from GitHub in order for this feature to work.

But there’s a catch! Like many Linux geeks, I use mutt for reading and writing email. In fact, I’ve been using it since 1997, and I’m still waiting for another MUA to appear which is more powerful and lets me crunch through email faster. However, mutt is primarily text-based, which means that by default it doesn’t download images when displaying HTML-based email. Of course, it can. But do I want it to automatically open a new tab in my browser every time I encounter an HTML attachment? No! That would slow me down horribly. Even launching a terminal-based HTML viewer such as w3m or links or lynx would be too slow.

So I figured out a better solution. mutt has a nice message-hook feature where you can configure it to automatically execute mutt functions for any message matching specific criteria just before it displays the message. So we can use that to pipe the whole email to a script whenever a message is being read for the first time:

message-hook "(~N|~O) ~f notifications@github.com" "push '<pipe-message>read-github-notification\n'"

(~N|~O) matches mails which have the N flag (meaning new unread email) or the O flag (meaning old unread email) set, and ~f matches on the message’s From address.

The read-github-notification script reads the email on STDIN, extracts the URL of the 1-pixel read-notification beacon <img> embedded in the HTML attachment, and sends an HTTP request for that image, so that GitHub knows the notification has been read.
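The script itself isn’t reproduced here, but a minimal sketch of the behaviour just described might look like this (the beacon URL pattern and the helper name are assumptions, not the script’s actual internals):

```shell
#!/bin/sh
# Hypothetical sketch of read-github-notification; the real script is not
# shown in the post, and the beacon URL pattern here is an assumption.

# Pull the first read-notification beacon <img> URL out of the message
# arriving on stdin.
extract_beacon_url() {
  grep -o 'https://github\.com/notifications/beacon/[^"]*' | head -n 1
}

# Read the message mutt pipes in, then silently fetch the 1-pixel image
# so GitHub marks the notification as read. Requires curl.
main() {
  url=$(extract_beacon_url)
  if [ -n "$url" ]; then
    curl --silent --output /dev/null "$url"
  fi
}

# Entry point (enable when installed as a standalone script):
# main "$@"
```

Installed somewhere on your $PATH, mutt’s pipe-message function then feeds each newly-read notification email through it.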

This means an extra delay of 0.5 seconds or so when viewing a notification email, but for me it’s a worthwhile sacrifice.

If you want to try it, simply download the script and stick it somewhere on your $PATH, and then add the above line to your ~/.muttrc file.


more uses for git notes, and hidden treasures in Gerrit

By , October 2, 2013 2:05 pm

I recently blogged about some tools I wrote which harness the notes feature of git to help with the process of porting commits from one branch to another. Since then I’ve discovered a couple more consumers of this functionality which are pretty interesting: palaver and Gerrit.

Continue reading 'more uses for git notes, and hidden treasures in Gerrit'»


Easier upstreaming / back-porting of patch series with git

By , September 19, 2013 9:22 pm

Have you ever needed to port a selection of commits from one git branch to another, but without doing a full merge? This is a common challenge, e.g.

  • forward-porting / upstreaming bugfixes from a stable release branch to a development branch, or
  • back-porting features from a development branch to a stable release branch.

Of course, git already goes quite some way to making this possible:

  • git cherry-pick can port individual commits, or even a range of commits (since git 1.7.2) from anywhere, into the current branch.
  • git cherry can compare a branch with its upstream branch and find which commits have been upstreamed and which haven’t. This command is particularly clever because, thanks to git patch-id, it can correctly spot when a commit has been upstreamed, even when the upstreaming process resulted in changes to the commit message, line numbers, or whitespace.
  • git rebase --onto can transplant a contiguous series of commits onto another branch.
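To make the first two concrete, here is a toy demonstration in a throwaway repository (branch and file names are invented for illustration):

```shell
# Build a tiny repo where 'stable' has a bugfix commit that 'dev' lacks.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email porter@example.com
git config user.name Porter
git checkout -qb stable
echo base > file.txt
git add file.txt
git commit -qm 'base commit'
git branch dev                     # dev forks off before the fix
echo fix > fix.txt
git add fix.txt
git commit -qm 'bugfix on stable'

# Port the fix onto the development branch without merging:
git checkout -q dev
git cherry-pick -x stable          # picks stable's tip commit

# git cherry compares the branches via git patch-id; a '-' prefix in its
# output marks the bugfix as already present on dev despite its new SHA-1:
git cherry dev stable
```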

It’s not always that easy …

However, on the occasions when you need to sift through a larger number of commits on one branch, and port them to another branch, complications can arise:

  • If cherry-picking a commit results in changes to its patch context, git patch-id will return a different SHA-1, and subsequent invocations of git cherry will incorrectly tell you that you haven’t yet ported that commit.
  • If you mess something up in the middle of a git rebase, recovery can be awkward, and git rebase --abort will land you back at square one, undoing a lot of your hard work.
  • If the porting process is big enough, it could take days or even weeks, so you need some way of reliably tracking which commits have already been ported and which still need porting. In this case you may well want to adopt a divide-and-conquer approach by sharing out the porting workload between team-mates.
  • The more the two branches have diverged, the more likely it is that conflicts will be encountered during cherry-picking.
  • There may be commits within the range you are looking at which, after review, you decide should be excluded from the port, or at least postponed until later.
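The tracking problem in particular lends itself to git notes. This sketch is purely illustrative, not necessarily how the published tools do it (the notes ref name and note text are invented):

```shell
# Record porting state in a dedicated notes ref, which can be pushed and
# fetched so team-mates can share it. All names here are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email porter@example.com
git config user.name Porter
echo hello > file.txt
git add file.txt
git commit -qm 'commit that needs porting'

# Mark the commit as ported, recording where it went:
git notes --ref=ported add -m 'ported to stable/1.0' HEAD

# Later, anyone with the notes ref can query the state:
git notes --ref=ported show HEAD
```

Sharing the ref (e.g. git push origin refs/notes/ported) gives a divide-and-conquer team a common ledger of what has already been ported.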

It could be argued that all of these problems can be avoided with the right branch and release management workflows, and I don’t want to debate that in this post. However, this is the real world, and sometimes it just happens that you have to deal with a porting task which is less than trivial. Well, that happened to me and my team not so long ago, so I’m here to tell you that I have written and published some tools to solve these problems. If that’s of interest, then read on!

Continue reading 'Easier upstreaming / back-porting of patch series with git'»


music industry learns nothing from the Avid / Sibelius saga?

By , February 25, 2013 11:46 pm

UPDATE 26/02/2013: Daniel has replied to this post, and I have replied to his reply.

As George Santayana famously said, “those who cannot remember the past are condemned to repeat it”. In light of recent news regarding music notation software, I would add with some disappointment and frustration that those who choose to ignore the past are also condemned to repeat it.

For those of you who don’t already know, Sibelius is a proprietary software product for music notation which has for many years been one of the most popular choices for professional musicians and composers. For many of the more experienced customers in the technology industry who have already been burned in the past, a heavy reliance on a single technology is enough to trigger alarm bells – what if the company providing that technology goes bust, or decides to change direction and cease work on it, or simply does an awful job (*cough* Microsoft *cough*) of maintaining and supporting that technology? Then you’re up a certain creek without the proverbial paddle.

In the IT industry, this is a well-known phenomenon called vendor lock-in. A powerful movement based on Free Software was born in the early eighties to free computer users from this lock-in, and its output is now used on billions of devices world-wide. You may have never heard of Free Software, but if you own an Android phone or a “broadband” router, or have ever used the Firefox browser or Google Chrome, you have already used it. The vast majority of the world’s largest companies run Free Software in their datacentres; for example, every time you access Google or Facebook you are (indirectly) using Free Software.

What does any of this have to do with Sibelius? Continue reading 'music industry learns nothing from the Avid / Sibelius saga?'»


Maven fail

By , October 7, 2010 8:18 pm

In my recent work I have encountered Apache Maven, and I think the following snippet of real-world Maven code nicely sums up why Maven is not the ideal replacement for the horror that is ant:
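(The original snippet is not preserved here. Purely as an illustration of the kind of code being described — not the author’s actual snippet, and with every id and path invented — a chmod from Maven typically meant pulling in the maven-antrun-plugin with something like the following.)

```xml
<!-- Illustrative reconstruction only, abridged. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>make-binary-executable</id>
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <chmod file="${project.build.directory}/bin/launcher" perm="755"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
```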


Dear god. 34 lines and a plug-in, just to change the permissions on a file in a platform-specific way??

I should add that the above was written by an extremely smart guy who is a top-notch programmer; no, I don’t think the author is at fault here. Even if there’s a more concise/portable way of achieving the same result in Maven (and there might well be – I admit I’m still a Maven newbie), there’s still the undeniable fact that XML is horrendously verbose, and any code written in it is by nature unnecessarily difficult to maintain. To this end I applaud the ongoing efforts supporting the use of YAML to implement the Maven POM.

It’s worth seeing what the above would look like if we wrote it in rake:

require 'pathname'

desc "Make binary executable"
task :chmod do
  # illustrative path; the original named a specific binary
  (Pathname.new("target/bin") + "launcher").chmod(0755)
end

I don’t think I need to make a case for which is more legible or maintainable. Oh, and the Ruby version is cross-platform.

To continue an anti-XML rant which has been made countless times already: what the ant and Maven people don’t seem to realise is that XML is not a real programming language and is therefore not expressive enough to deal with many cases that a build system needs. The clue’s in the name, guys! “M” is for “markup”, not “Turing-complete”. That’s why every time you need to do something vaguely unusual for which there isn’t an ant taskdef or Maven plugin, you have to write hundreds of lines more Java/XML just to cope with that case. That’s why Maven needs so many damn plugins.

The accidental silver lining to this is that because it takes so much effort to accomplish simple tasks, Maven developers find themselves compelled to reuse and share plugins, and to be fair, Maven has some good ideas on how to do this, even if the implementation isn’t always the best. For example, the built-in plug-in repository management and plug-in dependency management seem to work nicely, although unfortunately, for some reason, Maven has a propensity to re-download plug-ins on most runs, far more frequently than any sensible caching layer should allow.

DSL issues aside, I’m not convinced by the fixed lifecycle philosophy behind Maven either. I wonder if it was borne out of frustration with the lack of proper dependency checking in ant.

That said, I do like how Maven encourages standardization of the build lifecycle and its phase namespace, since newcomers to a project immediately know some familiar entry points. But the same could be said of the 99% of Make-based projects which stick to standard rule target names such as install and clean. And I suspect that many developers suffer when they try to shoe-horn their own project’s build requirements into Maven’s standard lifecycle.

My concern with this phased approach is that it is too linear.  The expectation is that a build process is a one-dimensional sequence of steps, and you get to choose your starting point but not much else.  This seems fundamentally wrong to me.  A build dependency tree is well understood to be a DAG, and any build system which doesn’t model this properly seems to me to be burying its head in the sand.  On the other hand, if it does model it properly, which includes implementing proper dependency resolution, the required build lifecycle should emerge naturally without having to dictate that generate-sources comes before compile which comes before test and so on.
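As a toy illustration of that argument (all target names invented): the Makefile below never states an ordering anywhere, yet asking for test runs generate-sources, compile, and test in exactly that order, because the dependency edges require it.

```shell
# Write a three-node dependency DAG and let make derive the ordering.
# The one-line "target: prereq ; recipe" syntax avoids tab-indented recipes.
dir=$(mktemp -d)
cd "$dir"
cat > Makefile <<'EOF'
test: compile ; @echo test
compile: generated.src ; @echo compile
generated.src: ; @echo generate-sources && touch generated.src
EOF

# No phase list anywhere, but the build order emerges from the DAG:
output=$(make -s test)
printf '%s\n' "$output"
```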

I’ve had some ideas of what the ideal build system looks like, and how to get there from the conventional Java world. More on that soon.

