Planet NZOSS

25 October 2016

Nat Torkington

Startups and failure

(Wynyard Group, an NZ tech high-growth company [or, perhaps, not-so-high growth] just entered voluntary administration. On Twitter, a friend was adamant bad luck had nothing to do with it. Instead of a tweetstorm, here’s my response in a vintage retro format known as “a blog post”)

You can always look back at every failure and assign one or more causes, because SOMETHING always kills the startup.  And someone is always responsible for the fatal decisions. That’s “pilot error” for startups.

But hindsight is far easier than foresight. Everybody does the best they can with the info and skills they have.  Everyone operates with imperfect knowledge and incomplete control. Everyone. Investors, board, executive, and rank and file all act with imperfect information. They don’t have a choice. The pattern-matching we hate in VCs is just a reaction to the fog of the market. “Maybe past performance will predict future success?”

The hardest part of bringing something into existence is surviving that imperfect information.

In the case of Wynyard, there are almost certainly multiple proximal causes of failure. People at Wynyard undoubtedly said and did things that materially contributed to failure. That doesn’t mean they weren’t unlucky. Even if Wynyard’s executives repeatedly doubled down on unsuccessful strategies, the company and investors were unlucky. Unlucky to have the team they had, unlucky not to notice, unlucky that their faith in people was misplaced.

Successes have recognised and recovered from their fuckups before they went broke. Failures didn’t. Do you have the right info, skills, team, board, partners, customers, market conditions to recover before you go broke? That’s luck.

To suggest there’s no luck in successful or unsuccessful startups is just silly.

24 October 2016

Francois Marier


Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

  • where website traffic comes from, and
  • how users are navigating the site.

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also lead to exposing private personally-identifiable information when they are part of the query string. One of the most high-profile examples is the accidental leakage of user searches by

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to:

  • 0 to never send the header
  • 1 to send the header only when clicking on links and similar elements
  • 2 (default) to send the header on all requests (e.g. images, links, etc.)

It's also possible to put a limit on the maximum amount of information that the header will contain by setting the network.http.referer.trimmingPolicy to:

  • 0 (default) to send the full URL
  • 1 to send the URL without its query string
  • 2 to only send the scheme, host and port

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.
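
As a rough illustration (using a made-up URL, and noting that the exact serialization can vary slightly between Firefox versions), a request placed from https://example.com/account/settings?session=12345 would carry the following Referer values under each trimming level:

  • 0: https://example.com/account/settings?session=12345
  • 1: https://example.com/account/settings
  • 2: https://example.com/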

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to:

  • 0 (default) to send the referrer in all cases
  • 1 to send a referrer only when the base domains are the same
  • 2 to send a referrer only when the full hostnames match


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. the same base domain) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!

23 October 2016


NZOSS Council for 2016-17

Yesterday's AGM included participants from around New Zealand using open standards-based video conferencing software to meet, virtually, and fulfil our Society's obligations for an Annual General Meeting.

The assembled group has provided us with a streamlined Council (as intended) for the coming year:

  1. Rob Elshire (Palmerston North)
  2. Steven Ellis (Auckland)
  3. Danny Adair (Wellington)
  4. Daniel Reurich (Wellington)
  5. Dave Lane (Christchurch) - president
  6. Tim McNamara (Wellington) - vice president

Rob is a new addition to the council (welcome aboard!) and the others have all had at least one term.

Our Vice President is Tim McNamara, and President is Dave Lane. Daniel Reurich accepted another term as our Treasurer, and we are still to appoint a Secretary.

We're very grateful to our previous Councillors for their hard work and, in many cases, very long service! We would like to thank

  • Grant Paton-Simpson (Auckland)
  • Rose Lu (Wellington)
  • birgit bachler (Wellington)
  • Brent Wood (Wellington)
  • Marek Kuziel (Christchurch)
  • Peter Harrison (Auckland)
  • Nicolás Erdödy (Oamaru, North Otago)
  • Jaco van der Mewe (Auckland)
  • Matthew Holloway (Wellington)

for all they have done, and, we're sure, will continue to do to "share the freedom of open source software, open data, and open standards for the benefit of all New Zealanders".

For his great service to the Society, we would especially like to thank Peter Harrison who is stepping down from the Council to focus on another organisation of which he's the President, the New Zealand Association of Rationalists and Humanists. Peter is both a former President of the Society as well as its founder (in 2003).

Our new, smaller Council means that we will have an easier time organising monthly meetings (and achieving quorum necessary for formal decision making!), but we will be looking to these committed members of our community (and others, too) to help us by leading various initiatives we will be pursuing in the coming year.

We also look forward to substantially increasing our financial membership this year. We believe we have a significant role to play in New Zealand's civil society, and that our contribution has value to many. We will be making it easy and, we hope, compelling for our informal community to support the Society both with their good will and participation as well as their $20-$40 for an annual membership!

We're looking forward to working with you all this coming year.

22 October 2016

Tom Ryder

Custom commands

Unix-like operating systems provide a rich set of command-line tools according to various standards that allow solving very many general problems on the command line, particularly well suited to filtering and aggregating text-based information. Combined with shell features like convenient and terse input and output redirection and streaming with pipes, they can be put together in ad-hoc ways to accomplish larger tasks more specific to the system’s application.

As users grow more familiar with the feature set available to them on their computers, and grow more comfortable using the command line, they will find more often that they develop their own routines for solving problems using their preferred tools, often repeatedly solving the same problem in the same way. You can usually tell if you’ve entered this stage if one or more of the below applies:

  • You type a particular longish command so often it’s gone into muscle memory, and you type it without thinking.
  • You have a text file somewhere with a list of useful commands to solve some frequently recurring problem or task, and you realize you’re cutting and pasting from it a lot.
  • You’re keeping large amounts of history so you can search back through commands you ran weeks or months ago with ^R, to find the last time an instance of a problem came up, and getting angry when you realize it’s fallen off the end of your history file.
  • You’ve found that you prefer to run a tool like ls(1) more often with a non-default flag than without it; -l is a common example.

You can definitely accomplish a lot of work quickly by shoving the output of some monolithic program through a terse one-liner to get the information you want, or by developing muscle memory for your chosen toolbox and oft-repeated commands, but if you want to apply more discipline and automation to managing these sorts of tasks, it may be useful for you to explore more rigorously defining your own commands for use during your shell sessions, or for automation purposes.

This is consistent with the original idea of the Unix shell as a programming environment; the tools provided by the base system are intentionally very general, not prescribing how they’re used, an approach which allows the user to build and customize their own command set as appropriate for their system’s needs, even on a per-user basis.

What this all means is that you need not treat the tools available to you as holy writ. To leverage the Unix philosophy’s real power, you should consider customizing and extending the command set in ways that are useful to you, refining them as you go, and sharing those extensions and tweaks if they may be useful to others. We’ll discuss here a few methods for implementing custom commands, and where and how to apply them.


The first step users take toward customizing the behaviour of their shell tools is often to define shell aliases in their shell’s startup file, usually specifically for interactive sessions; for Bash, this is usually ~/.bashrc.

Some aliases are so common that they’re included as commented-out suggestions in the default ~/.bashrc file for new users. For example, on Debian systems, the following alias is defined by default if the dircolors(1) tool is available for coloring ls(1) output by filetype:

alias ls='ls --color=auto'

With this defined at startup, invoking ls, with or without other arguments, will expand to run ls --color=auto, including any given arguments on the end as well.
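
For example (Documents here is just an illustrative directory name), typing:

$ ls -la Documents

actually runs ls --color=auto -la Documents.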

In the same block of that file, but commented out, are suggestions for other aliases to enable coloured output for GNU versions of the dir and grep tools:

#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'

#alias grep='grep --color=auto'
#alias fgrep='fgrep --color=auto'
#alias egrep='egrep --color=auto'

Further down still, there are some suggestions for different methods of invoking ls:

#alias ll='ls -l'
#alias la='ls -A'
#alias l='ls -CF'

Uncommenting these would make ll, la, and l work as commands during an interactive session, with the appropriate options added to the call.

You can check the aliases defined in your current shell session by typing alias with no arguments:

$ alias
alias ls='ls --color=auto'

Aliases are convenient ways to add options to commands, and are very common features of ~/.bashrc files shared on the web. They also work in POSIX-conforming shells besides Bash. However, for general use, they aren’t very sophisticated. For one thing, you can’t process arguments with them:

# An attempt to write an alias that searches for a given pattern in a fixed
# file; doesn't work because aliases don't expand parameters
alias grepvim='grep "$1" ~/.vimrc'

They also don’t work for defining new commands within scripts:

#!/bin/bash
alias ll='ls -l'
ll

When saved in a file as test, made executable, and run, this script fails:

./test: line 3: ll: command not found

So, once you understand how aliases work so you can read them when others define them in startup files, my suggestion is there’s no point writing any. Aside from some very niche evaluation tricks, they have no functional advantages over shell functions and scripts.


A more flexible method for defining custom commands for an interactive shell (or within a script) is to use a shell function. We could declare our ll function in a Bash startup file as a function instead of an alias like so:

# Shortcut to call ls(1) with the -l flag
ll() {
    command ls -l "$@"
}

Note the use of the command builtin here to specify that the ll function should invoke the program named ls, and not any function named ls. This is particularly important when writing a function wrapper around a command, to stop an infinite loop where the function calls itself indefinitely:

# Always add -q to invocations of gdb(1)
gdb() {
    command gdb -q "$@"
}

In both examples, note also the use of the "$@" expansion, to add to the final command line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell metacharacters in the arguments causing problems. This means that the ll command will work correctly if you were to pass it further options and/or one or more directories as arguments:

$ ll -a
$ ll ~/.config

Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should work in your shell of choice, including Bash, dash, Korn shell, and Zsh. They can also be used within scripts, allowing you to abstract away multiple instances of similar commands to improve the clarity of your script, in much the same way the basics of functions work in general-purpose programming languages.

Functions are a good and portable way to approach adding features to your interactive shell; written carefully, they even allow you to port features you might like from other shells into your shell of choice. I’m fond of taking commands I like from Korn shell or Zsh and implementing them in Bash or POSIX shell functions, such as Zsh’s vared or its two-argument cd features.
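
As a rough illustration of that kind of port (a simplified sketch, not a faithful reimplementation of the Zsh feature), a two-argument cd that substitutes the second string for the first in the current directory path could look like this in Bash:

# Simplified take on Zsh's two-argument cd: "cd foo bar" moves to the current
# directory path with the first occurrence of "foo" replaced by "bar"
cd() {
    if [ "$#" -eq 2 ]; then
        builtin cd -- "${PWD/$1/$2}"
    else
        builtin cd "$@"
    fi
}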

If you end up writing a lot of shell functions, you should consider putting them into separate configuration subfiles to keep your shell’s primary startup file from becoming unmanageably large.
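
One common way to do that (a sketch; the ~/.bashrc.d directory is just a convention you create yourself) is to source every file in a dedicated directory from the end of ~/.bashrc:

# Load any extra function definitions kept in ~/.bashrc.d
for config in ~/.bashrc.d/*.bash; do
    [ -r "$config" ] && . "$config"
done
unset config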

Examples from the author

You can take a look at some of the shell functions I have defined here that are useful to me in general shell usage; a lot of these amount to implementing convenience features that I wish my shell had, especially for quick directory navigation, or adding options to commands:

Variables in shell functions

You can manipulate variables within shell functions, too:

# Print the filename of a path, stripping off its leading path and
# extension
fn() {
    name=${1##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

This works fine, but the catch is that after the function is done, the value for name will still be defined in the shell, and will overwrite whatever was in there previously:

$ printf '%s\n' "$name"

$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
Task_List

This may be desirable if you actually want the function to change some aspect of your current shell session, such as managing variables or changing the working directory. If you don’t want that, you will probably want to find some means of avoiding name collisions in your variables.

If your function is only for use with a shell that provides the local (Bash) or typeset (Ksh) features, you can declare the variable as local to the function to remove its global scope, to prevent this happening:

# Bash-like
fn() {
    local name
    name=${1##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

# Ksh-like
fn() {
    typeset name
    name=${1##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

If you’re using a shell that lacks these features, or you want to aim for POSIX compatibility, things are a little trickier, since local function variables aren’t specified by the standard. One option is to use a subshell, so that the variables are only defined for the duration of the function:

# POSIX; note we're using plain parentheses rather than curly brackets, for
# a subshell
fn() (
    name=${1##*/}
    name=${name%.*}
    printf '%s\n' "$name"
)

# POSIX; alternative approach using command substitution:
fn() {
    printf '%s\n' "$(
        name=${1##*/}
        name=${name%.*}
        printf %s "$name"
    )"
}

This subshell method also allows you to change directory with cd within a function without changing the working directory of the user’s interactive shell, or to change shell options with set or Bash options with shopt only temporarily for the purposes of the function.
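
For instance (a small sketch; the function name and behaviour are purely illustrative), a function that reports how many entries a directory contains can change into that directory inside its subshell without affecting the working directory of your interactive shell:

# Count the entries in a directory without changing the caller's $PWD
countfiles() (
    cd -- "${1:-.}" || exit
    ls -A | wc -l
)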

Another method to deal with variables is to manipulate the positional parameters directly ($1, $2 … ) with set, since they are local to the function call too:

# POSIX; using positional parameters
fn() {
    set -- "${1##*/}"
    set -- "${1%.*}"
    printf '%s\n' "$1"
}

These methods work well, and can sometimes even be combined, but they’re awkward to write, and harder to read than the modern shell versions. If you only need your functions to work with your modern shell, I recommend just using local or typeset. The Bash Guide on Greg’s Wiki has a very thorough breakdown of functions in Bash, if you want to read about this and other aspects of functions in more detail.

Keeping functions for later

As you get comfortable with defining and using functions during an interactive session, you might define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance, just to solve a task in that moment.

As an example, I recently made an ad-hoc function called monit to run a set of commands for its hostname argument that together established different types of monitoring system checks, using an existing script called nmfs:

$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; }
$ for host in webhost{1..10} ; do
> monit "$host"
> done

After that task was done, I realized I was likely to use the monit command interactively again, so I decided to keep it. Shell functions only last as long as the current shell, so if you want to make them permanent, you need to store their definitions somewhere in your startup files. If you’re using Bash, and you’re content to just add things to the end of your ~/.bashrc file, you could just do something like this:

$ declare -f monit >> ~/.bashrc

That would append the existing definition of monit in parseable form to your ~/.bashrc file, and the monit function would then be loaded and available to you for future interactive sessions. Later on, I ended up converting monit into a shell script, as its use wasn’t limited to just an interactive shell.

If you want a more robust approach to keeping functions like this for Bash permanently, I wrote a tool called Bashkeep, which allows you to quickly store functions and variables defined in your current shell into separate and appropriately-named files, including viewing and managing the list of names conveniently:

$ keep monit
$ keep
$ ls ~/.bashkeep.d
$ keep -d monit


Shell functions are a great way to portably customize behaviour you want for your interactive shell, but if a task isn’t specific only to an interactive shell context, you should instead consider putting it into its own script whether written in shell or not, to be invoked somewhere from your PATH. This makes the script useable in contexts besides an interactive shell with your personal configuration loaded, for example from within another script, by another user, or by an X11 session called by something like dmenu.

Even if your set of commands is only a few lines long, if you need to call it often, especially with reference to other scripts and in varying contexts, making it into a generally-available shell script has many advantages.
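
For instance, the monit function kept earlier could become a small standalone script on your PATH (a sketch; nmfs is the same pre-existing local script referred to above), callable from a one-off loop, from cron, or from other scripts alike:

#!/bin/sh
# Set up Ping, HTTP, and SNMP monitoring checks for each hostname given
for host in "$@"; do
    nmfs "$host" Ping Y
    nmfs "$host" HTTP Y
    nmfs "$host" SNMP Y
done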


Users making their own scripts often start by putting them in /usr/local/bin and making them executable with sudo chmod +x, since many Unix systems include this directory in the system PATH. If you want a script to be generally available to all users on a system, this is a reasonable approach. However, if the script is just something for your own personal use, or if you don’t have the permissions necessary to write to this system path, it may be preferable to have your own directory for logical binaries, including scripts.

Private bindir

Unix-like users who do this seem to vary in where they choose to put their private logical binaries directory. I’ve seen each of the below used or recommended:

  • ~/bin
  • ~/.bin
  • ~/.local/bin
  • ~/Scripts

I personally favour ~/.local/bin, but you can put your scripts wherever they best fit into your HOME directory layout. You may want to choose something that fits in well with the XDG standard, or whatever existing standard or system your distribution chooses for filesystem layout in $HOME.

In order to make this work, you will want to customize your login shell startup to include the directory in your PATH environment variable. It’s better to put this into ~/.profile or whichever file your shell runs on login, so that it’s only run once. That should be all that’s necessary, as PATH is typically exported as an environment variable for all the shell’s child processes. A line like this at the end of one of those scripts works well to extend the system PATH for our login shell:

PATH=$HOME/.local/bin:$PATH

Note that we specifically put our new path at the front of the PATH variable’s value, so that it’s the first directory searched for programs. This allows you to implement or install your own versions of programs with the same name as those in the system; this is useful, for example, if you like to experiment with building software in $HOME.

If you’re using a systemd-based Linux, and particularly if you’re using a display manager like GDM rather than a TTY login and startx for your X11 environment, you may find it more robust to instead set this variable with the appropriate systemd configuration file. Another option you may prefer on systems using PAM is to set it with pam_env(8).

After logging in, we first verify the directory is in place in the PATH variable:

$ printf '%s\n' "$PATH"

We can test this is working correctly by placing a test script into the directory, including the #!/bin/sh shebang, and making it executable by the current user with chmod(1):

$ cat >~/.local/bin/test-private-bindir
#!/bin/sh
printf 'Working!\n'
$ chmod u+x ~/.local/bin/test-private-bindir
$ test-private-bindir
Working!

Examples from the author

I publish the more generic scripts I keep in ~/.local/bin, which I keep up-to-date on my personal systems in version control using Git, along with my configuration files. Many of the scripts are very short, and are intended mostly as building blocks for other scripts in the same directory. A few examples:

  • gscr(1df): Run a set of commands on a Git repository to minimize its size.
  • fgscr(1df): Find all Git repositories in a directory tree and run gscr(1df) over them.
  • hurl(1df): Extract URLs from links in an HTML document.
  • maybe(1df): Exit with success or failure with a given probability.
  • rfcr(1df): Download and read a given Request for Comments document.
  • sum(1df): Add up a list of numbers.

For such scripts, I try to write them as much as possible to use tools specified by POSIX, so that there’s a decent chance of them working on whatever Unix-like system I need them to.

On systems I use or manage, I might specify commands to do things relevant specifically to that system, such as:

  • Filter out uninteresting lines in an Apache HTTPD logfile with awk.
  • Check whether mail has been delivered to system users in /var/mail.
  • Upgrade the Adobe Flash player in a private Firefox instance.

The tasks you need to solve both generally and specifically will almost certainly be different; this is where you can get creative with your automation and abstraction.

X windows scripts

An additional advantage worth mentioning of using scripts rather than shell functions where possible is that they can be called from environments besides shells, such as in X11 or by other scripts. You can combine this method with X11-based utilities such as dmenu(1), libnotify’s notify-send(1), or ImageMagick’s import(1) to implement custom interactive behaviour for your X windows session, without having to write your own X11-interfacing code.
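
As a small example of that kind of glue (a sketch assuming dmenu and notify-send are installed; the behaviour is purely illustrative), a script bound to a hotkey could prompt for a hostname with dmenu and report its reachability with notify-send:

#!/bin/sh
# Ask for a hostname via dmenu, then show a desktop notification with the result
host=$(dmenu -p 'ping host:' </dev/null) || exit
if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    notify-send 'ping' "$host is up"
else
    notify-send 'ping' "$host is down"
fi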

Other languages

Of course, you’re not limited to just shell scripts with this system; it might suit you to write a script completely in a language like awk(1), or even sed(1). If portability isn’t a concern for the particular script, you should use your favourite scripting language. Notably, don’t fall into the trap of implementing a script in shell for no reason …

awk '/foobar/ && NF>2 {print $1}' "$@"

… when you can instead write the whole script in the main language used, and save a fork(2) syscall:

#!/usr/bin/awk -f
/foobar/ && NF>2 {print $1}

Versioning and sharing

Finally, if you end up writing more than a couple of useful shell functions and scripts, you should consider versioning them with Git or a similar version control system. This also eases implementing your shell setup and scripts on other systems, and sharing them with others via publishing on GitHub. You might even go so far as to write a Makefile to install them, or manual pages for quick reference as documentation … if you’re just a little bit crazy …

04 October 2016

Nat Torkington

Job Titles

Job titles don’t matter, because it’s the work that counts.  But job titles matter because they get you meetings with partners, interviews for next jobs, etc.  They send signals that can be useful so, conversely, life can be harder if you’re sending the wrong signal.

There’s an implicit hierarchy in American job titles.  Kiwis have a different implicit hierarchy (“Managing Director”, etc.) but this post is about the American hierarchy.  For the companies I work with, it is more important to signal to American companies than to signal to a Kiwi establishment.  The hierarchy is a rough norm: there is variation in it between companies, but the basic shape holds.

It is:

  • CEO
  • CxO
  • VP
  • Director
  • Manager
  • an individual contributor who might be an “agent” or “operator” or so on.

(Big companies will have Senior and Junior VP, Director, and Manager gradations as well.)

There’s one big trap in nomenclature here: “Manager” implies “has an area that they oversee”, which may or may not include managing people. “Director” implies having direct reports, aka being a manager (and is not a board-of-directors director). In New Zealand, “manager” carries the connotation of managing people.

So in a software-only company, the CEO does a lot of sales, visionary leadership, PR, and fundraising, and manages a team of CxOs (often including a COO who Gets Shit Done while the CEO is out selling and fundraising). One of those is the CTO, who typically holds the future direction of the technology in her head. Reporting to the CTO is the VP of Engineering, who makes sure things are built and kept running.

The hierarchy above is for Sales, Marketing, Finance, etc., where you end up with Director > Manager > staff. Engineering teams often have a different hierarchy below VP, e.g. “Engineering Manager” being someone who leads a group. But they’ll designate individual engineers as “Lead” for a project or a team. Likewise there are Architects and Designers and whatnot as speciality roles below Manager. Engineering organisations typically define a ladder to bring clarity to that chaos.

The big philosophical question that an org chart tries to answer is “who fits with whom?”:

  • Product figures out what to build,  Engineering make it, Marketing tells people about it, Sales sell it, Operations keep it going, and Support keep the customer happy when Operations fail.  If all goes well, info from Sales, Support, and Operations feed back into Product, informing what to build next.
  • These are separate functions but each depends on the rest to do their job well.
  • Companies generally have to place some functions under others in the org chart, if only to limit the number of people reporting to the CEO.
  • Product sits between sales/marketing and engineering/ops, can live in either team.  Product managers understand the customer and drive the requirements for & oversee software development.  (Product = “build the right thing”, Engineering = “build it right”)
  • Support often starts out in engineering but can end up in the customer-facing sales/mkt group because it’s relationships as much as tech.

Francois Marier


How Safe Browsing works in Firefox

Firefox has had support for Google's Safe Browsing since 2005 when it started as a stand-alone Firefox extension. At first it was only available in the USA, but it was opened up to the rest of the world in 2006 and moved to the Google Toolbar. It then got integrated directly into Firefox 2.0 before the public launch of the service in 2007.

Many people seem confused by this phishing and malware protection system and while there is a pretty good explanation of how it works on our support site, it doesn't go into technical details. This will hopefully be of interest to those who have more questions about it.

Browsing Protection

The main part of the Safe Browsing system is the one that watches for bad URLs as you're browsing. Browsing protection currently protects users from:

  • malware sites
  • sites hosting unwanted software
  • phishing (deceptive) sites

If a Firefox user attempts to visit one of these sites, a warning page will show up instead, which you can see for yourself here:

The first two warnings can be toggled using the browser.safebrowsing.malware.enabled preference (in about:config) whereas the last one is controlled by browser.safebrowsing.phishing.enabled.

List updates

It would be too slow (and privacy-invasive) to contact a trusted server every time the browser wants to establish a connection with a web server. Instead, Firefox downloads a list of bad URLs every 30 minutes from the server and does a lookup against its local database before displaying a page to the user.

Downloading the entire list of sites flagged by Safe Browsing would be impractical due to its size so the following transformations are applied:

  1. each URL on the list is canonicalized,
  2. then hashed,
  3. of which only the first 32 bits of the hash are kept.
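
As a rough sketch of steps 2 and 3 (the real protocol canonicalizes the URL first and hashes several host/path combinations; example.com/ is used purely for illustration), the hashes are SHA-256 digests and only their first four bytes are stored, which you can approximate from a terminal:

$ printf 'example.com/' | sha256sum | cut -c1-8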

The lists that are requested from the Safe Browsing server and used to flag pages as malware/unwanted or phishing can be found in urlclassifier.malwareTable and urlclassifier.phishTable respectively.

If you want to see some debugging information in your terminal while Firefox is downloading updated lists, turn on browser.safebrowsing.debug.

Once downloaded, the lists can be found in the cache directory:

  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/ on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/ on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\ on Windows

Resolving partial hash conflicts

Because the Safe Browsing database only contains partial hashes, it is possible for a safe page to share the same 32-bit hash prefix as a bad page. Therefore when a URL matches the local list, the browser needs to know whether or not the rest of the hash matches the entry on the Safe Browsing list.

In order to resolve such conflicts, Firefox requests from the Safe Browsing server (browser.safebrowsing.provider.mozilla.gethashURL) all of the hashes that start with the affected 32-bit prefix and adds these full-length hashes to its local database. Turn on browser.safebrowsing.debug to see some debugging information on the terminal while these "completion" requests are made.

If the current URL doesn't match any of these full hashes, the load proceeds as normal. If it does match one of them, a warning interstitial page is shown and the load is canceled.

Download Protection

The second part of the Safe Browsing system protects users against malicious downloads. It was launched in 2011, implemented in Firefox 31 on Windows and enabled in Firefox 39 on Mac and Linux.

It roughly works like this:

  1. Download the file.
  2. Check the main URL, referrer and redirect chain against a local blocklist (urlclassifier.downloadBlockTable) and block the download in case of a match.
  3. On Windows, if the binary is signed, check the signature against a local whitelist (urlclassifier.downloadAllowTable) of known good publishers and release the download if a match is found.
  4. If the file is not a binary file then release the download.
  5. Otherwise, send the binary file's metadata to the remote application reputation server (browser.safebrowsing.downloads.remote.url) and block the download if the server indicates that the file isn't safe.

Blocked downloads can be unblocked by right-clicking on them in the download manager and selecting "Unblock".

While the download protection feature is automatically disabled when malware protection (browser.safebrowsing.malware.enabled) is turned off, it can also be disabled independently via the browser.safebrowsing.downloads.enabled preference.

Note that Step 5 is the only point at which any information about the download is shared with Google. That remote lookup can be suppressed via the browser.safebrowsing.downloads.remote.enabled preference for those users concerned about sending that metadata to a third party.

Types of malware

The original application reputation service would protect users against "dangerous" downloads, but it has recently been expanded to also warn users about unwanted software as well as software that's not commonly downloaded.

These various warnings can be turned on and off in Firefox through the following preferences:

  • browser.safebrowsing.downloads.remote.block_dangerous
  • browser.safebrowsing.downloads.remote.block_dangerous_host
  • browser.safebrowsing.downloads.remote.block_potentially_unwanted
  • browser.safebrowsing.downloads.remote.block_uncommon

and tested using Google's test page.

If you want to see how often each "verdict" is returned by the server, you can have a look at the telemetry results for Firefox Beta.


One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe.

While this was an option in version 1 of the Safe Browsing protocol (as disclosed in their privacy policy at the time), support for this "enhanced mode" was removed in Firefox 3 and the version 1 server was decommissioned in late 2011 in favor of version 2 of the Safe Browsing API which doesn't offer this type of real-time lookup.

Google explicitly states that the information collected as part of operating the Safe Browsing service "is only used to flag malicious activity and is never used anywhere else at Google" and that "Safe Browsing requests won't be associated with your Google Account". In addition, Firefox adds a few privacy protections:

  • Query string parameters are stripped from URLs we check as part of the download protection feature.
  • Cookies set by the Safe Browsing servers to protect the service from abuse are stored in a separate cookie jar so that they are not mixed with regular browsing/session cookies.
  • When requesting complete hashes for a 32-bit prefix, Firefox throws in a number of extra "noise" entries to obfuscate the original URL further.

On balance, we believe that most users will want to keep Safe Browsing enabled, but we also make it easy for users with particular needs to turn it off.

Learn More

If you want to learn more about how Safe Browsing works in Firefox, you can find all of the technical details on the Safe Browsing and Application Reputation pages of the Mozilla wiki or you can ask questions on our mailing list.

Google provides some interesting statistics about what their systems detect in their transparency report and offers a tool to find out why a particular page has been blocked. Some information on how phishing sites are detected is also available on the Google Security blog, but for more detailed information about all parts of the Safe Browsing system, see the following papers:

01 October 2016

Simon Lyall

DevOpsDays Wellington 2016 – Day 2, Session 3


Mrinal Mukherjee – How to choose a DevOps tool

Right Tool
– Does the job
– People will accept

Wrong tool
– Never-ending PoC
– Doesn’t do the job

How to pick
– Budget / Licensing
– does it address your pain points
– Learning cliff
– Community support
– Enterprise acceptability
– Config in version control?

Central tooling team
– Pros: standardisation, education
– Cons: constant bottleneck, delays, stifles innovation, not in sync with teams

DevOps != Tool
Tools != DevOps

Tools facilitate it not define it.

Howard Duff – Eric and his blue boxes

Physical example of KanBan in an underwear factory

Lindsey Holmwood – Deepening people to weather the organisation

Note: Lindsey presents really fast so I missed recording a lot from the talk

His Happy, High performing Team -> He left -> 6 months later half of team had left

How do you create a resilient culture?

What is culture?
– Lots of research in organisation psychology
– Edgar Schein – 3 levels of culture
– Artefacts, Values, Assumptions

Artefacts:
– Physical manifestations of our culture
– Standups, Org charts, desk layout, documentation
– actual software written
– Easiest to see and adopt

Values:
– Goals, strategies and philosophies
– “we will dominate the market”
– “Management if available”
– “nobody is going to be fired for making a mistake”
– lived values vs aspiration values (People have good nose for bullshit)
– Example, cores values of Enron vs reality
– Work as imagined vs Work is actually done

Assumptions:
– beliefs, perceptions, thoughts and feelings
– exist on an unconscious level
– hard to discern
– “bad outcomes come from bad people”
– “it is okay to withhold information”
– “we can’t trust that team”
– “profits over people”

If we can change our people, we can change our culture

What makes a good team member?

– Vulnerability
– Assume the best of others
– Aware of their cognitive bias
– Aware of the fundamental attribution error (judge others by actions, judge ourselves by our intentions)
– Aware of hindsight bias. Hindsight bias is your culture killer
– When bad things happen explain in terms of foresight
– Regular 1:1s
– Eliminate performance reviews
– Willing to play devil’s advocate

Commit and acting
– Shared goal settings
– Don’t solutioneer
– Provide context about strategy, about desired outcome

What makes a good team?

Influence of hiring process
– Willingness to adapt and adopt working in new team
– Qualify team fit, tech talent then rubber stamp from team lead
– have a consistent script, but be prepared to improvise
– Everyone has the veto power
– If leadership is vetoing at the last minute, that’s a systemic problem with team alignment, not the system
– Benefit: team talks to candidate (without leadership present)
– Many different perspectives
– unblock management bottlenecks
– Risk: uncovering dysfunctions and misalignment in your teams
– Hire good people, get out of their way

Diversity and inclusion
– includes: race, gender, sexual orientation, location, disability, level of experience, work hours
– Seek out diverse candidates.
– Sponsor events and meetups
– Make job description clear you are looking for diverse background
– Must include and embrace differences once they actually join
– Safe mechanism for people to raise criticisms, and acting on them

Leadership and Absence of leadership
– Having a title isn’t required
– If the leader steps away, things should continue working right
– Team is their own shit umbrella
– empowerment vs authority
– empowerment is giving permission from above (potentially temporary)
– authority is giving power (granting autonomy)

Part of something bigger than the team
– help people build up for the next job
– Guilds in the Spotify model
– Run them like meetups
– Get senior management to come and observe
– What we’re talking about is tech culture

We can change tech culture
– How to make it resist the culture of the rest of the organisation
– Artefacts influence behaviour
– Artifact: fast builds -> Value: make better quality
– Artifact: post incident reviews -> Value: Failure is an opportunity for learning

Q: What is a pre-incident review
A: Brainstorm beforehand (eg before a big rollout) what you think might go wrong if something is coming up
then afterwards do another review of what just went wrong

Q: what replaces performance reviews
A: One on ones

Q: Overcoming Resistance
A: Do it and point back at the evidence. Hard to argue with an artifact

Q: First step?
A: One on 1s

Getting started, reading books by Patrick Lencioni:
– Silos, Politics and Turf Wars
– The Five Dysfunctions of a Team


30 September 2016

Simon Lyall

DevOpsDays Wellington 2016 – Day 2, Session 2

Troy Cornwall & Alex Corkin – Health is hard: A Story about making healthcare less hard, and faster!

Maybe title should be “Culture is Hard”

@devtroy @4lexNZ

Working at HealthLink
– Windows running Java stuff
– Out of date and poorly managed
– Deployments manual, thrown over the wall by devs to ops

Team Death Star
– Destroy bad processes
– Change deployment process

Existing Stack
– VMware
– Windows
– Puppet

CD and CI Requirements
– Goal: Time to regression test under 2 mins, time to deploy under 2 mins (from 2 weeks each)
– Puppet too slow to deploy code in a minute or two. App deploy vs config mgmt
– Can’t use (then) containers on Windows so not an option

New Stack
– VMware
– Ubuntu
– Puppet for Server config
– Docker
– rancher

Smashed the 2 minute target!

– We focused on the tech side and let the people side slip
– Windows shop, hard work even to get a Linux VM at the start
– Devs scared to run on Linux. Some initial deploy problems burnt people
– Lots of different new technologies at once all pushed to devs, no pull from them.

Blackout where we weren’t allowed to talk to them for four weeks
– Should have been a warning sign…

We thought we were ready.
– Ops was not ready

“5 dysfunctions of a team”
– Trust as at the bottom, we didn’t have that

– We were aware of this, but didn’t follow though
– We were used to disruption but other teams were not

Note: I’m not sure how the story ended up, they sort of left it hanging.

Pavel Jelinek – Kubernetes in production

Works at Movio
– Software for Cinema chains (eg Loyalty cards)
– 100 million emails per month, millions of SMS and push notifications (less push cause ppl hate those)

Old Stack
– Started with mysql and php application
– AWS from the beginning
– On the largest AWS instance but still slow.

Decided to go with Microservices
– Put stuff in Docker
– Used Jenkins, Puppet, own Docker registry, Rundeck (see blog post)
– Devs didn’t like writing puppet code and other manual setup

Decided to go to new container management at start of 2016
– Was pushing for Nomad but devs liked Kubernetes

– Built in ports, HA, LB, Health-checks

Concepts in Kub
– Pod – one or more containers
– Deployment, DaemonSet, PetSet – scaling of a Pod
– Service- resolvable name, load balancing
– ConfigMap, Volume, Secret – Extended Docker Volume

Devs look after some kub config files
– Brings them closer to how stuff is really working

– Using kubectl to create pod in his work’s lab env
– Add load balancer in front of it
– Add a configmap to update the container’s nginx config
– Make it public
– LB replicas, Rolling updates
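
The exact commands from the demo aren’t captured above, but with the kubectl of that era the flow looked roughly like this (names and image tags are illustrative, and some of these flags have changed in later Kubernetes releases):

# Create a Deployment (and its Pods) from an image
kubectl run nginx-demo --image=nginx --replicas=2
# Expose it behind a load-balanced Service to make it publicly reachable
kubectl expose deployment nginx-demo --port=80 --type=LoadBalancer
# Supply an nginx configuration as a ConfigMap
kubectl create configmap nginx-conf --from-file=nginx.conf
# Change the image and watch the rolling update proceed
kubectl set image deployment/nginx-demo nginx-demo=nginx:1.11
kubectl rollout status deployment/nginx-demo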

Best Practices
– lots of small containers are better
– log on container stdout, preferable via json
– Test and know your resource requirements (at movio devs teams specify, check and adjust)
– Be aware of the node sizes
– Stateless please
– if not stateless then clustered please
– Must handle unexpected immediate restarts


DevOpsDays Wellington 2016 – Day 2, Session 1

Jethro Carr – Powering with DevOps goodness
– “News” Website
– 5 person DevOps team

– “Something you do because Gartner said it’s cool”
– Sysadmin -> InfraCoder/SRE -> Dev Shepherd -> Dev
– Stuff in the middle somewhere
– DevSecOps

Company Structure drives DevOps structure
– Lots of products – one team != one product
– Dev teams with very specific focus
– Scale – too big, yet too small

About our team
– Mainly Ops focus
– small number compared to developers
– Operate like an agency model for developers
– “If you buy the Dom Post it would help us grow our team”
– Lots of different vendors with different skill levels and technology

Work process
– Use KanBan with Jira
– Works for Ops focussed team
– Not so great for long running projects

War Against OnCall
– Biggest cause of burnout
– focus on minimising callouts
– Zero alarm target
– Love pagerduty

Commonalities across platforms
– Everyone using compute
– Most Java and javascript
– Using Public Cloud
– Using off the shelf version control, deployment solutions
– Don’t get overly creative and make things too complex
– Proven technology that is well tried and tested and skills available in marketplace
– Classic technologies like Nginx, Java, Varnish still have their place. Don’t always need the latest fashion

– Linux, ubuntu
– Adobe AEM Java CMS
– AWS 14x c4.2xlarge
– Varnish in front, used by everybody else. Makes ELB and ALB look like toys

How use Varnish
– Retries against backends if 500 replies, serve old copies
– split routes to various backends
– Control CDN via header
– Dynamic Configuration via puppet

– Akamai
– Keeps online during breaking load
– 90% cache offload
– Management is a bit slow and manual

– Small batch jobs
– Check mail reputation score
– “Download file from a vendor” type stuff
– Purge cache when static file changes
– Lambda webapps – Hopefully soon, a bit immature

Increasing number of microservices

Standards are vital for microservices
– Simple and reasonable
– Shareable vendors and internal
– flexible
– grow organically
– Needs to be detailed
– 12 factor App
– 3 languages Node, Java, Ruby
– Common deps (SQL, varnish, memcache, Redis)
– Build pipeline standardised. Using Codeship
– Standardise server builds
– Everything Automated with puppet
– Puppet building docker containers (w puppet + puppetstry)
– Std Application deployment

Init systems
– Had proliferation
– pm2, god, supervisord, sysvinit are out
– systemd and upstart are in

Always exceptions
– “Enterprise ___” is always bad
– Educating the business is a forever job
– Be reasonable, set boundaries

More Stuff at

Q: Pull request workflow
A: Largely replaced traditional review

Q: DR eg AWS outage
A: Documented process if codeship dies can manually push, Rest in 2*AZs, Snapshots

Q: Dev teams structure
A: Project specific rather than product specific.

Q: Puppet code tested?
A: Not really, Kinda tested via the pre-prod environment, Would prefer result (server spec) testing rather than low level testing of each line
A: Code team have good test coverage though. 80-90% in many cases.

Q: Load testing, APM
A: Use New Relic. Not much luck with external load testing companies

Q: What if somebody wants something non-standard?
A: Case-by-case. Allowed if needed but needs a good reason.

Q: What happens when automation breaks?
A: Documentation is actually pretty good.


25 August 2016

Francois Marier


Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
/usr/bin/gnome-shell: error while loading shared libraries: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

17 August 2016


NZ Open Source Award 2016 Finalists Announced

From the NZOSA press release email:

"The judges have completed their review of the many projects and individuals nominated for awards this year, and have announced the finalists for each category.  

“The quality of all of the nominations was very impressive. We are fortunate to have so many people here using open source technology and philosophy to deliver amazing technical, social and creative projects.

“The judges would like to congratulate all of the nominees, as all of their contributions have created significant value for our communities, research institutions and businesses.” said Jason Ryan, Chair of the judging panel.

We would like to thank the judges for all the time they put into reviewing the nominations.

View a list of the 2016 finalists -

People's Choice award

Nominations for the People's Choice award are listed on the website, ready for voting, which is open until 5 September.
View People's Choice nominations -


We would like to thank all our sponsors for the NZOSA 2016.
Platinum - Catalyst and NZRS
Silver - Redhat, SilverStripe and IITP
Bronze - Silicon Systems, NZRise and NZOSS. 

If you are interested in becoming a sponsor, please contact sponsors [at] nzosa [dot] org [dot] nz

Clinton Bedogni Prize for Open Systems

Once again the University of Auckland Department of Computer Science is presenting the Clinton Bedogni Prize for Open Systems at the NZOSA gala dinner; applications close on 27 August. More details and nomination forms are available at

Gala dinner

The gala dinner is being held on Tuesday 25 October at Te Papa in Wellington. 
Tickets are $45, to reserve your ticket contact awards [at] nzosa [dot] org [dot] nz


07 August 2016


ACT calls for government open source adoption

The ACT Party has called for the NZ Government to adopt open source software, citing, among other things, the UK's transition away from Microsoft products to open source alternatives.

While the NZOSS is gratified to see Free and Open Source Software (FOSS) being advocated by the ACT Party (and the Greens have similarly advocated it for at least the past decade), we think that FOSS sells itself if the playing field is level. At present it is not. The UK's transition has been made possible by its mandatory adoption of open standards. Without that change, adopting FOSS is blocked by the fact that corporate software suppliers control the document formats everyone produces, and they can unilaterally change those proprietary formats to ensure that the FOSS alternatives are never quite fully compatible, as has been the case for the past 20 years.

We would love to see the NZ government make good on the goals of the D5 Charter, to which we have formally been signed up (thanks Minister Dunne), and which, among other admirable things, includes the adoption of open standards and broader use of open source software. We advocate that the NZ government formally mandate the use of open standards: once it does, government software procurement will be on a level playing field, and at that point we are confident that FOSS will increasingly be the best choice on all criteria.

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who hit VoiceMail to be forwarded to the callee's cellphone, where that's allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and the dialplan looks for a rule that matches the extension a. Now, the simple approach looks like this within our macro for handling standard extensions:

exten => a,1,Goto(pstn,027xxx,1)

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. We need to get the dialled extension into a place where we can perform extension matching on it, so instead we'll have this (the extension is passed into macro-stdexten as the first variable, ARG1):

exten => a,1,Goto(vmfwd,${ARG1},1)

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange for a rule for each extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is extensions that aren't allowed to be forwarded to a cellphone. If someone calling such an extension's VoiceMail hits *, their call will be hung up and I get nasty log messages about there being no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this approach won't work verbatim for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

; macro-stdexten: ARG1 is the extension, ARG2 is the device(s) to dial
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)


; vmfwd: anything not matched by a per-extension forwarding rule (included below)
; falls back to VoiceMail
exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.
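Once that generated file is in place, you can check that the rules actually made it into the running dialplan from the Asterisk CLI, for example (these are standard Asterisk CLI commands, run on the PBX itself):

asterisk -rx "dialplan reload"
asterisk -rx "dialplan show vmfwd"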

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

08 May 2016

Tom Ryder

Cron best practices

The time-based job scheduler cron(8) has been around since Version 7 Unix, and its crontab(5) syntax is familiar even for people who don’t do much Unix system administration. It’s standardised, reasonably flexible, simple to configure, and works reliably, and so it’s trusted by both system packages and users to manage many important tasks.

However, like many older Unix tools, cron(8)‘s simplicity has a drawback: it relies upon the user to know some detail of how it works, and to correctly implement any other safety checking behaviour around it. Specifically, all it does is try and run the job at an appropriate time, and email the output. For simple and unimportant per-user jobs, that may be just fine, but for more crucial system tasks it’s worthwhile to wrap a little extra infrastructure around it and the tasks it calls.

There are a few ways to make the way you use cron(8) more robust if you’re in a situation where keeping track of the running job is desirable.

Apply the principle of least privilege

The sixth column of a system crontab(5) file is the username of the user as which the task should run:

0 * * * *  root  cron-task

To the extent that is practical, you should run the task as a user with only the privileges it needs to run, and nothing else. This can sometimes make it worthwhile to create a dedicated system user purely for running scheduled tasks relevant to your application.

0 * * * *  myappcron  cron-task

This is not just for security reasons, although those are good ones; it helps protect you against nasties like scripting errors attempting to remove entire system directories.
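As a sketch, creating such a dedicated user on a typical GNU/Linux system might look like this (the username is illustrative, and your distribution may prefer adduser(8)):

$ sudo useradd -r -m -s /usr/sbin/nologin myappcron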

Similarly, for tasks with database systems such as MySQL, don’t use the administrative root user if you can avoid it; instead, use or even create a dedicated user with a unique random password stored in a locked-down ~/.my.cnf file, with only the needed permissions. For a MySQL backup task, for example, only a few permissions should be required, including SELECT, SHOW VIEW, and LOCK TABLES.
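For example, the locked-down ~/.my.cnf for such a backup user might look something like this (the username and password are placeholders), kept private with chmod 600 ~/.my.cnf:

[client]
user=backupcron
password=use-a-long-random-password-here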

In some cases, of course, you really will need to be root. In particularly sensitive contexts you might even consider using sudo(8) with appropriate NOPASSWD options, to allow the dedicated user to run only the appropriate tasks as root, and nothing else.
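A sudoers(5) entry for that, edited with visudo(8), could look something like the following sketch (the username and script path are hypothetical):

myappcron ALL=(root) NOPASSWD: /usr/local/bin/cron-task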

Test the tasks

Before placing a task in a crontab(5) file, you should test it on the command line, as the user configured to run the task and with the appropriate environment set. If you’re going to run the task as root, use something like su or sudo -i to get a root shell with the user’s expected environment first:

$ sudo -i -u cronuser
$ cron-task

Once the task works on the command line, place it in the crontab(5) file with the timing settings modified to run the task a few minutes later, and then watch /var/log/syslog with tail -f to check that the task actually runs without errors, and that the task itself completes properly:

May  7 13:30:01 yourhost CRON[20249]: (you) CMD (cron-task)

This may seem pedantic at first, but it becomes routine very quickly, and it saves a lot of hassles down the line as it’s very easy to make an assumption about something in your environment that doesn’t actually hold in the one that cron(8) will use. It’s also a necessary acid test to make sure that your crontab(5) file is well-formed, as some implementations of cron(8) will refuse to load the entire file if one of the lines is malformed.

If necessary, you can set arbitrary environment variables for the tasks at the top of the file:


0 * * * *  you  cron-task
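The assignments go on their own lines above the job definitions and apply to the jobs that follow them; a sketch with example values only:

SHELL=/bin/bash
LANG=en_NZ.UTF-8

0 * * * *  you  cron-task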

Don’t throw away errors or useful output

You’ve probably seen tutorials on the web where in order to keep the crontab(5) job from sending standard output and/or standard error emails every five minutes, shell redirection operators are included at the end of the job specification to discard both the standard output and standard error. This kluge is particularly common for running web development tasks by automating a request to a URL with curl(1) or wget(1):

*/5 * * * *  root  curl >/dev/null 2>&1

Ignoring the output completely is generally not a good idea, because unless you have other tasks or monitoring ensuring the job does its work, you won’t notice problems (or know what they are) when the job emits output or errors that you actually care about.

In the case of curl(1), there are just too many things that could go wrong and that you might only notice far too late:

  • The script could get broken and return 500 errors.
  • The URL of the cron.php task could change, and someone could forget to add an HTTP 301 redirect.
  • Even if an HTTP 301 redirect is added, if you don’t use -L or --location for curl(1), it won’t follow it.
  • The client could get blacklisted, firewalled, or otherwise impeded by automatic or manual processes that falsely flag the request as spam.
  • If using HTTPS, connectivity could break due to cipher or protocol mismatch.

The author has seen all of the above happen, in some cases very frequently.

As a general policy, it’s worth taking the time to read the manual page of the task you’re calling, and to look for ways to correctly control its output so that it emits only the output you actually want. In the case of curl(1), for example, I’ve found the following formula works well:

curl -fLsS -o /dev/null
  • -f: If the HTTP response code is an error, emit an error message rather than the 404 page.
  • -L: If there’s an HTTP 301 redirect given, try to follow it.
  • -sS: Don’t show progress meter (-S stops -s from also blocking error messages).
  • -o /dev/null: Send the standard output (the actual page returned) to /dev/null.

This way, the curl(1) request should stay silent if everything is well, per the old Unix philosophy Rule of Silence.
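Put together in a job specification, that might look something like this (the URL is a placeholder):

*/5 * * * *  you  curl -fLsS -o /dev/null https://www.example.com/cron.php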

You may not agree with some of the choices above; you might think it important to e.g. log the complete output of the returned page, or to fail rather than silently accept a 301 redirect, or you might prefer to use wget(1). The point is that you take the time to understand in more depth what the called program will actually emit under what circumstances, and make it match your requirements as closely as possible, rather than blindly discarding all the output and (worse) the errors. Work with Murphy’s law; assume that anything that can go wrong eventually will.

Send the output somewhere useful

Another common mistake is failing to set a useful MAILTO at the top of the crontab(5) file, as the specified destination for any output and errors from the tasks. cron(8) uses the system mail implementation to send its messages, and typically, default configurations for mail agents will simply send the message to an mbox file in /var/mail/$USER, which the user may never read. This defeats much of the point of mailing output and errors.

This is easily dealt with, though; ensure that you can send a message to an address you actually do check from the server, perhaps using mail(1):

$ printf '%s\n' 'Test message' | mail -s 'Test subject'

Once you’ve verified that your mail agent is correctly configured and that the mail arrives in your inbox, set the address in a MAILTO variable at the top of your file:

0 * * * *    you  cron-task-1
*/5 * * * *  you  cron-task-2
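The assignment sits above the job definitions; a sketch with a placeholder address:

MAILTO=you@example.com

0 * * * *    you  cron-task-1
*/5 * * * *  you  cron-task-2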

If you don’t want to use email for routine output, another method that works is sending the output to syslog with a tool like logger(1):

0 * * * *   you  cron-task | logger -it cron-task

Alternatively, you can configure aliases on your system to forward system mail destined for you on to an address you check. For Postfix, you’d use an aliases(5) file.

I sometimes use this setup in cases where the task is expected to emit a few lines of output which might be useful for later review, but send stderr output via MAILTO as normal. If you’d rather not use syslog, perhaps because the output is high in volume and/or frequency, you can always set up a log file /var/log/cron-task.log … but don’t forget to add a logrotate(8) rule for it!
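If you do go the log file route, the matching logrotate(8) rule can be very simple; a sketch (the thresholds are arbitrary):

/var/log/cron-task.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}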

Put the tasks in their own shell script file

Ideally, the commands in your crontab(5) definitions should only be a few words, in one or two commands. If the command is running off the edge of the screen, it’s likely too long to be in the crontab(5) file, and you should instead put it into its own script. This is a particularly good idea if you want to reliably use features of bash or some other shell besides POSIX/Bourne /bin/sh for your commands, or even a scripting language like Awk or Perl; by default, cron(8) uses the system’s /bin/sh implementation for parsing the commands.

Because crontab(5) files don’t allow multi-line commands, and have other gotchas like the need to escape percent signs % with backslashes, keeping as much configuration out of the actual crontab(5) file as you can is generally a good idea.

If you’re running cron(8) tasks as a non-system user, and can’t add scripts into a system bindir like /usr/local/bin, a tidy method is to start a directory of your own and include it in your PATH. I favour ~/.local/bin, and have seen references to ~/bin as well. Save the script in ~/.local/bin/cron-task, make it executable with chmod +x, and include the directory in the PATH environment definition at the top of the file:


0 * * * *  you  cron-task
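The PATH definition itself goes at the top of the crontab(5) file, above the tasks; a sketch (adjust the directories to suit your system, remembering that cron(8) does not expand ~):

PATH=/home/you/.local/bin:/usr/local/bin:/usr/bin:/bin

0 * * * *  you  cron-task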

Having your own directory with custom scripts for your own purposes has a host of other benefits, but that’s another article…

Avoid /etc/crontab

If your implementation of cron(8) supports it, rather than having an /etc/crontab file a mile long, you can put tasks into separate files in /etc/cron.d:

$ ls /etc/cron.d
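Each file in there uses the same format as the system crontab, including the username field; a sketch of what one such file might contain (the names and timings are illustrative):

# /etc/cron.d/myapp
MAILTO=ops@example.com
*/10 * * * *  myappcron  /usr/local/bin/myapp-cleanup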

This approach allows you to group the configuration files meaningfully, so that you and other administrators can find the appropriate tasks more easily; it also allows you to make some files editable by some users and not others, and reduces the chance of edit conflicts. Using sudoedit(8) helps here too. Another advantage is that it works better with version control; if I start collecting more than a few of these task files, or find myself updating them more often than every few months, I start a Git repository to track them:

$ cd /etc/cron.d
$ sudo git init
$ sudo git add --all
$ sudo git commit -m "First commit"

If you’re editing a crontab(5) file for tasks related only to the individual user, use the crontab(1) tool; you can edit your own crontab(5) by typing crontab -e, which will open your $EDITOR to edit a temporary file that will be installed on exit. This will save the files into a dedicated directory, which on my system is /var/spool/cron/crontabs.

On the systems maintained by the author, it’s quite normal for /etc/crontab never to change from its packaged template.

Include a timeout

cron(8) will normally allow a task to run indefinitely, so if this is not desirable, you should consider either using options of the program you’re calling to implement a timeout, or including one in the script. If there’s no option for the command itself, the timeout(1) command wrapper in coreutils is one possible way of implementing this:

0 * * * *  you  timeout 10s cron-task

Greg’s wiki has some further suggestions on ways to implement timeouts.

Include file locking to prevent overruns

cron(8) will start a new process regardless of whether its previous runs have completed, so if you want to prevent overlapping runs of a long-running task, on GNU/Linux you could use the flock(1) wrapper for the flock(2) system call to take an exclusive lock on a lockfile, preventing more than one instance of the task from running in parallel.

0 * * * *  you  flock -nx /var/lock/cron-task cron-task

Greg’s wiki has some more in-depth discussion of the file locking problem for scripts in a general sense, including important information about the caveats of “rolling your own” when flock(1) is not available.

If it’s important that your tasks run in a certain order, consider whether it’s necessary to have them in separate tasks at all; it may be easier to guarantee they’re run sequentially by collecting them in a single shell script.
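A sketch of that approach, with the script stopping at the first failure so later steps never run against a broken state (the task names are placeholders):

#!/bin/sh
# run-nightly -- run the nightly tasks strictly in order
set -e
cron-task-1
cron-task-2
cron-task-3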

Do something useful with exit statuses

If your cron(8) task or commands within its script exit non-zero, it can be useful to run commands that handle the failure appropriately, including cleanup of appropriate resources, and sending information to monitoring tools about the current status of the job. If you’re using Nagios Core or one of its derivatives, you could consider using send_nsca to send passive checks reporting the status of jobs to your monitoring server. I’ve written a simple script called nscaw to do this for me:

0 * * * *  you  nscaw CRON_TASK -- cron-task
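If you are not using Nagios, the same idea can be expressed with a small wrapper around the task; a minimal sketch, where notify-monitoring stands in for whatever your monitoring system accepts:

#!/bin/sh
# cron-task-wrapper -- run cron-task and report any failure
cron-task
status=$?
if [ "$status" -ne 0 ] ; then
    printf 'cron-task failed with status %s\n' "$status" | notify-monitoring
fi
exit "$status"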

Consider alternatives to cron(8)

If your machine isn’t always on and your task doesn’t need to run at a specific time, but rather needs to run once daily or weekly, you can install anacron and drop scripts into the cron.hourly, cron.daily, cron.monthly, and cron.weekly directories in /etc, as appropriate. Note that on Debian and Ubuntu GNU/Linux systems, the default /etc/crontab contains hooks that run these, but they run only if anacron(8) is not installed.

If you’re using cron(8) to poll a directory for changes and run a script if there are such changes, on GNU/Linux you could consider using a daemon based on inotifywait(1) instead.
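A sketch of that pattern with inotifywait(1) from the inotify-tools package (the directory and the processing command are placeholders):

#!/bin/sh
# watch-dir -- run a command whenever a file finishes being written to the directory
inotifywait -m -e close_write --format '%w%f' /path/to/dir |
while read -r path ; do
    process-new-file "$path"
done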

Finally, if you require more advanced control over when and how your task runs than cron(8) can provide, you could perhaps consider writing a daemon to run on the server continuously and fork processes for its tasks. This would allow running a task more often than once a minute, as an example. Don’t get too bogged down in thinking that cron(8) is your only option for any kind of asynchronous task management!

01 May 2016

Nat Torkington

Lesson: Moats and Flywheels

It’s bloody hard to build something new into the world.  Don’t let anyone tell you it’s easy: there’s a lot of unfunded and unrecognised work that you have to do before you can get to the point where fame and/or fortune arrive.  And once you’ve finished the painful birth of your new service or product into the world, maybe even defining an entirely new product category or unserviced market segment, dozens of unimaginative parasites will appear from nowhere and try to eat your lunch.

Hapara had this: we have a track record of beautiful firsts.  First Education solution for Google Apps, first device management for Chromebooks.  And, sure enough, as soon as we’d proven that Google Apps could be improved for educators, Google popped up and entered the space with Classroom.  Dozens of companies sprang up to help educators manage Chromebooks.  I like to think that the Hapara team do it better than those Johnny-Come-Latelys, but regardless of what I think, there’s still competition. (I’ve never been at a place that had so many Firsts in such a short period of time; there’s something incredible about Tony and Jan’s ability to identify great product niches)

I first noticed a similar dynamic at O’Reilly’s Emerging Technology conference.  My buddy Rael had built a conference that gathered incredible cutting edge technologists, the kind of people who follow Alan Kay’s dictum and predict the future by inventing it.  But I soon realised that while the first movers started as Crazy Dreamers On The Edge, they were quickly joined by competitors who were happy to copy their ideas.  Sometimes those first movers would be beaten by the competitors, who had better market access or other products to piggyback on.  Being first to market doesn’t mean you’ll always lead the market.

So what’s an entrepreneur to do?  I look for moats and/or flywheels in the business plans and pitches that come to me.

Moats are the defences around your castle’s treasure: do you have patents, or an exclusive arrangement with suppliers, or an exclusive arrangement with the major distributors, or all the world’s experts in the field working for you?  Those are the sorts of things that thwart would-be competitors, or at least slow them down because now they have to blaze their own trail to the sweet land of Product-Market-Fit that your startup has identified.

Flywheel is Jeff Bezos’s term for a virtuous circle in your business.  As Tim O’Reilly described it, “your app gets better as more people use it.”  More specifically, I’m looking for geometric (exponential) value in the face of arithmetic (linear) customer growth.  If twice as many people use the telephone, the phone network is four times as valuable because the value of the phone network isn’t just for the new user—now all the previous users can call that new person too.  Amazon’s great at building out parts of their empire such that the success of one part makes other parts stronger; check out Jeff Bezos’s letters to shareholders for more on his thinking.

This is strongly related to the idea of virality.  If number of users is important to you then you want your product to attract users by itself.  The most viral products don’t rely on friends telling friends; instead, merely by using them you’re dragging your friends in.  Examples from history are Plaxo (address book in the cloud, where all your friends were told you’re managing contacts through Plaxo and is this info you have for them accurate?) and any sharing service like Dropbox or Google Docs, where inviting people to collaborate/share files means that they’re becoming users of your service.

Flywheels mean that the time advantage you have by being first to discover the new product/market is directly turned into a more durable value advantage—each new user you get makes your system not just one user’s worth of better.

Neither moats nor flywheels guarantee success, but they can keep the wolves from your heels while you chase it!

26 April 2016

Tim Penhey


It has been too long

Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta 6. Hoping that real soon now we'll step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo which I moved under the "" banner, and the errors package.

Recent work has had me look at the database agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

06 October 2015

Mark Foster

CLI tool for 'diffing' in a useful fashion: vimdiff

Had to scratch my head to find the right tool for the job today - something that I used regularly at SMX but haven't had much need to use since.

The tool was 'vimdiff'. I needed to compare two configuration files (retrieved from two different servers) to understand what differences existed. Whilst 'diff' alone would've done it, I find the output hard to follow. vimdiff worked wonders!
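For reference, the invocation is as simple as it gets (the filenames here are just placeholders):

vimdiff config-from-server1.conf config-from-server2.conf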

Google hit I used also has some other useful tools:

For posterity.

Honourable mention for icdiff also.

15 September 2015

Mark Foster

Firefox's 'Tiles' Antifeature - disabling this spam/track tool

I came across a Reddit Thread recently which included this gem:

From the comments that have been posted on this thread and what I found on the Mozilla forums so far:

1- In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button promising to be careful.

2- Set browser.newtab.url to about:blank

3- To disable the callbacks to without enabling the "do not track" feature you need to remove the address from and


So I've now done both of the above and feel much better.

The Reddit page linked to an article talking about Mozilla's new Advertising strategy. I for one don't need a third party tracking what I do when I click on 'new tab' ... ! Interestingly, there's also a move to remove browser.newtab.url due to "abuse", which seems in itself contentious, but it's possible you'll have to use an addon to achieve the above, at least in part, in the near future.

29 January 2015

Tom Ryder

Shell config subfiles

Large shell startup scripts (.bashrc, .profile) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it’s not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh) to load several files at the end of a .bashrc, for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d:

$ ls ~/.bashrc.d

With a slightly more advanced snippet at the end of ~/.bashrc, we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we’re done, otherwise it’ll be in the namespace of our shell where we don’t need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there’s at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.
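For example, one way to fold those checks in (a sketch only; adjust to taste):

# Load any supplementary scripts, if the directory exists
if [ -d "$HOME"/.bashrc.d ] ; then
    for config in "$HOME"/.bashrc.d/*.bash ; do
        [ -r "$config" ] && source "$config"
    done
fi
unset -v config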

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d, if we want to write in POSIX sh, with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.

Here’s my implementation of the above method, for both .bashrc and .profile:

Thanks to commenter oylenshpeegul for correcting the syntax of the loops.

02 December 2014

Andrew Ruthven

LCA2015 - Debian Miniconf & nz2015 Debian mini-DebConf

nz2015 mini-DebConf

Already attending? Come a couple of days earlier and attend the mini-DebConf too! There will be a day of talks with a strong focus on the Debian project and a bug squashing day.

Debian Miniconf

After 5 years, the Debian Miniconf is back! Run as part of 2015, this event will attract speakers talking on topics that suit the broader audience attending LCA. The Debian Miniconf has been one of the largest miniconfs in the history of

For more information about both these events which I'm organising, head over to:!

24 September 2014

Robert Collins



So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that. “

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and allow the competition to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then suggest:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that with large N we have N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.

08 September 2014

Glen Ogilvie

Google authenticator TFA for Android - Backup and OATH

I’ve been a fan of using multi/two factor authentication for anything that matters.

Thankfully, many sites these days are beginning to support MFA, and many of them have standardized on OATH.
Google Authenticator is one such OATH client app, implementing TOTP (time-based one-time password).

OATH is a reasonably good method of providing MFA, because it’s easy for the user to set up, pretty secure, and open, for both the client and the server.
You can read all about how it works in RFC 6238, or on Wikipedia.
So, in a nutshell, we now have a method by which a client can generate a key and a server can authenticate that key.

Google Authenticator, being the client I use, is great, as it supports adding the shared key by simply reading a QR code.
But what if I lose my phone, or want to use a second device, or my computer? MFA, of course, pretty much locks you out if you
lose your way to generate your TOTP codes.

Well, Google provides you with a number of static keys you can use, but that’s not good enough for me.

So, I thought I’d see if I could back up Google Authenticator and read the shared key from it. The answer is yes.

Here are the technical details:
Back up Google Authenticator using Titanium Backup. This will generate 3 files on your SD card.
The file of interest is:

In this tar.gz, you will find:

This is an SQLite3 database that contains each account you have added to Google Authenticator.
So, open it with sqlite3, i.e.:

tar -zxvf sdcard/TitaniumBackup/ data/data/
sqlite3 data/data/
sqlite> select * from accounts;

to get a list of your keys.
Each key is base32 encoded.

So, to decode your key, you use:

$ python
>>> import base64
>>> base64.b16encode(base64.b32decode('THEKEYFROMTHESELECT', True))

Then this will output the key in base16, which is the format that oathtool uses.

You can then generate the token from your Linux computer.
Ensure you have the oath-toolkit package installed.


$ oathtool --totp BASE16KEY

will generate the same code as Google Authenticator, provided the time is correct on your Linux system.
Note: make sure you clear your bash history if you don’t want your MFA key in your history. And of course,
only store it on encrypted storage, and make sure your SD card is secured or erased in some way.
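As an aside, if you would rather skip the base16 conversion step, oathtool can also take the base32 key straight from the database, using its -b / --base32 option (see the oath-toolkit documentation):

$ oathtool --totp -b THEKEYFROMTHESELECT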

29 August 2014

Robert Collins


Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive – particularly in test suites with 10’s of thousands of tests. Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is letting a user debug things through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let’s stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be std-in, a command providing a packet of stdin – used for interacting with debuggers

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. vs – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…

25 August 2014

The Open Source School

Vocal | Open Source Podcast Manager

"Vocal is a podcast manager that aims to do what similar, and well intentioned, apps fail to: do one thing and do it well.

While it’s not the first to try and serve the needs of podcast users, its developer, Nathan Dyer, hopes to avoid the ‘clunky, bloated [and] unnecessarily complicated’ flaws of current options.

13 August 2014

The Open Source School

LibreOffice 4.3 upgrades spreadsheets, brings 3D models to presentations | Ars Technica

"LibreOffice's latest release provides easier ways of working with spreadsheets and the ability to insert 3D models into presentations, along with dozens of other changes.

LibreOffice was created as a fork from OpenOffice in September 2010 because of concerns over Oracle's management of the open source project. LibreOffice has now had eight major releases and is powered by "thousands of volunteers and hundreds of developers," the Document Foundation, which was formed to oversee its development, said in an announcement today. (OpenOffice survived the Oracle turmoil by being transferred to the Apache Software Foundation and continues to be updated.)

In LibreOffice 4.3, spreadsheet program Calc "now allows the performing of several tasks more intuitively, thanks to the smarter highlighting of formulas in cells, the display of the number of selected rows and columns in the status bar, the ability to start editing a cell with the content of the cell above it, and being able to fully select text conversion models by the user," the Document Foundation said.

For LibreOffice Impress, the presentation application, users can now insert 3D models in the glTF format. "To use this feature, go to Insert ▸ Object ▸ 3D Model," the LibreOffice 4.3 release notes say. So far, this feature is available for Windows and Linux but not OS X."

10 July 2014

Andrew Ruthven

Cloud - in New Zealand!

I've spent a reasonable chunk of the past year working on a project we launched last month, Catalyst Cloud! It is using OpenStack with Ceph as the object store. It has taken a lot of work, and it is now very exciting seeing the level of interest we're receiving in this new service!

The great part of this is that we can now offer private cloud services to our customers which provides all the flexibility that we've come to expect with the "cloud", but hosted in New Zealand by a New Zealand owned company so no concerns about jurisdiction of your data! Not only are we able to offer private cloud services on our OpenStack cluster(s), but we can also deploy OpenStack onto our customers own hardware using our ProdStack solution (I get to look directly at the Dashboard shown on that page, which is pretty cool).

Next up is deploying another OpenStack cluster in our new data centre (which is another project I'm working on). In the near future we also hope to start using Open Compute Project hardware for our clusters.

05 May 2014

Robert Collins


Distributed bugtracking – quick thoughts

Just saw and I feel compelled to note that distributed bug trackers are not new – the earliest I personally encountered was Aaron Bentley’s Bugs everywhere – coming up on its 10th birthday. BE meets many of the criteria in the dbts post I read earlier today, but it hasn’t taken over the world – and I think this is in large part due to the propagation nature of bugs being very different to code – different solutions are needed.

XXXX: With distributed code versioning we often see people going to some effort to avoid conflicts – semantic conflicts are common, and representation conflicts extremely common.

Take a bug as an example. Here we can look at the nature of the content:

  1. Concurrent cannot-conflict content – e.g. the discussion about the bug. In general everyone should have this in their local bug database as soon as possible, and anyone can write to it.
  2. Observations of fact – e.g. ‘the code change that should fix the bug has landed in Ubuntu’ or ‘Commit C should fix the bug’.
  3. Reports of symptoms – e.g. ‘Foo does not work for me in Ubuntu with package versions X, Y and Z’.
  4. Collaboratively edited metadata – tags, title, description, and arguably even the fields like package, open/closed, importance.

Note that only one of these things – the commit to fix the bug – happens in the same code tree as the code; and that the commit fixing it may be delayed by many things before the fix is available to users. Also note that conceptually conflicts can happen in any of those fields except 1).

Anyhow – my humble suggestion for tackling the conflicts angle is to treat all changes to a bug as events in a timeline – e.g. adding a tag ‘foo’ is an event to add ‘foo’, rather than an event setting the tags list to ‘bar,foo’ – then multiple editors adding ‘foo’ do not conflict (or need special handling). Collaboratively edited fields would likely be unsatisfying with this approach though – last-writer-wins isn’t a great story. OTOH the number of people that edit the collaborative fields on any given bug tend to be quite low – so one could defer that to manual fixups.

Further, as a developer wanting local access to my bug database, syncing all of these things is appealing – but if I’m dealing with a million-bug bug database, I may actually need the ability to filter what I sync or do not sync with some care. Even if I want everything, query performance on such a database is crucial for usability (something git demonstrated convincingly in the VCS space).

Lastly, I don’t think distributed bug tracking is needed – it doesn’t solve a deeply burning use case – offline access would be a 90% solution for most people. What does need rethinking is the hugely manual process most bug systems use today. Making tools like whoopsie-daisy widely available is much more interesting (and that may require distributed underpinnings to work well and securely). Automatic collation of distinct reports and surfacing the most commonly experienced faults to developers offers a path to evidence based assessment of quality – something I think we badly need.

10 April 2014

Mark Foster

A useful pair of commands

Sorry, couple more glitches with the server today. Really need to get on to migrating onto the new hardware I have running in parallel...

However I did find this useful stuff today :-)

Force Reboot :
#echo 1 > /proc/sys/kernel/sysrq
#echo b > /proc/sysrq-trigger
If you want to force shutdown of the machine, try this.
#echo 1 > /proc/sys/kernel/sysrq
#echo o > /proc/sysrq-trigger

Remove comments as required. :)

04 April 2014

The Open Source School

First Ubuntu Tablets To Launch This Autumn

For about three years, sensing the move from the desktop to the mobile device, Canonical have been making Ubuntu more tablet-friendly. Now we hear that tablets (and smartphones) are due to ship with Ubuntu OEM soon.
I've been a fan of Ubuntu for a long time, and if you'd like to try it out to see how easy-to-use free software operating systems are, you can do it online here: (Tip: full-screen your browser and it's like you're running it natively on your machine.)

21 February 2014

Glen Ogilvie

Fritz!Box Telephony


In New Zealand,  VDSL internet is available from a number of providers.  Snap! is the provider I use, and they offer some very cool Fritz!Box VDSL modems.   I have the Fritz!Box 7390, and the other one on offer is the cheaper Fritz!Box 7360.

These routers are far more than just a basic VDSL router, offering a range of awesome features, including NAS, IPV6 (standard with Snap), good WIFI, DECT, VPNs.   This blog post is about my experiences with setting up the Telephony component.

The first thing to note is that Snap! does not ship the right cable for connecting the Fritz!Box to a standard telephone line.  This is a special Y cable, and they will ship it to you if you ask them.   Note however, you’ll still need to make up an adapter, as this cable is for RJ45 telephone plugs, not NZ BT plugs.  My setup is that I have a monitored alarm, so I need a telephone line; therefore, this is for VDSL + standard phone line, rather than VDSL + VOIP phone.


Step 1:

Get the Y cable from Snap!.    It will have 1 plug at the end that connects to the Fritz!Box and a split end, one for your VDSL plug, and one for the telephone line.  They will charge you $5 postage.
Here is the description of it, it’s the grey cable, first one on the left.

Here is the email I sent to Snap!   It took a while, but Michael Wadman did agree to ship it to me; case number #BOA-920-53402 if you need a reference case.


I purchased a Fritz!Box 7390 from you, along with my internet subscription. It looks to me, from looking at their website, that it should come with some
cables that I don’t have. See:

The Fritz!Box 7390 you supplied came with two Ethernet cables and power, but no other cables.
In the above link, you can see it should come with:

  • 4.25 m-long ADSL/fixed-line network connection cable (Y cable)
  • 1.5 m-long LAN cable
  • RJ45-RJ11 adapter for connection to the ADSL line
  • RJ45-RJ11 adapter for connection to the analog telephone line

I would like to plug my Analog PSTN phone line into the Fritz!Box so I can use its telephony features with my fixed line.     To do this, I need the cable that goes between the ISDN/analog port and the phone jack on the wall.
I am aware that the Y cable does not actually fit NZ phone plugs.
This post discusses the matter on geekzone.


Step 2:

Build an adapter for the telephone end.  This is easier than you might think.  What you need:

  • RJ45 crimping tool
  • RJ45 plug
  • Any old phone cable with a BT11 plug on it
  • A CAT5 RJ45 Network Cable Extender Plug Coupler Joiner, you can get this off ebay for < $2
  • A multi-meter

Then, simply cut off the end of the old phone cable that plugs into the phone.  It will probably be a 4 wire cable.  Now, use your multimeter to identify which two wires are connected to the outer two BT-11 pins.  Then plug it into your phone jack and check you get around 50 volts DC across those two pins.   Once you’ve got the two wires that have power, cut off the other 2 wires and crimp the two powered wires to the two outside pins on the RJ45 plug.   See the above diagram.

Then, plug the RJ45 plug you crimped into the joiner you got off ebay, and label it, as you don’t want to be plugging in normal networking equipment by accident to this plug.

Step 3:

  1. Connect the Y cable to your Fritz!Box and to your VDSL.  Check it works.
  2. Connect the Y cable to your Analog line, using the adapter you made.  Your Fritz!Box is now able to answer and make calls with your landline.

Now you’re ready to configure Telephony on the FRITZ!Box.



Start with Telephone numbers.  Here you should configure your fixed line, plus any SIP providers you want. I have added ippi and comfytel.   Also note that Snap! provides a SIP service, if you want one.

Next, connect some phones to your Fritz!Box.  You can plug your standard analog phones into the FON1 and FON2 plugs.  You can connect your ISDN phones, you can connect any DECT wireless phones to the Fritz!Box as their base station (your luck may vary), and you can connect your mobile phones to it, using the FRITZ!Box Fon app.  These will appear under telephone devices.  You should now be able to make a call.

Each device can have a default outgoing telephone number connected to it, and you can pre-select which phone number to make outgoing calls with, by dialing the ** prefix code.

Things you can do

  1. Use a number of devices as phones in your home, including normal phones and your mobiles
  2. Answer calls from skype, sip, landlines and internal numbers
  3. Make free calls to Skype and global SIP iNums.
  4. Make low cost calls to overseas landlines using a sip provider
  5. Make calls to local numbers via your normal phone line
  6. Answer machine
  7. Click to dial
  8. Telephone book, including calling internal numbers
  9. Wakeup calls
  10. Send and receive faxes
  11. Block calls
  12. Call Diversion / Call through
  13. Automatically select different providers when dialing different numbers
  14. Set a device to use a specific number, or only to ring for calls for a specific number.
  15. Set do not disturb on a device, based on time of day.
  16. Connect your wireless DECT phones directly to the FRITZ!Box as a base station
  17. See a call log of the calls you’ve made

Sip providers

ippi – – They allow free outgoing and incoming skype calls, plus a free global sip number

Country Number
SIP numeric 889507473
iNum +883510012028558

For outgoing skype calls with ippi, if your phone cannot dial email addresses, you need to add the skype contacts to the phone book on the website under your account, and assign a short code, which you can then use from your phone.

Comfytel – – They provide cheap calling, but you have to pay them manually with paypal, and currently their skype gateway does not work.

iNum: 883510001220681
Internal number: 99982009943


Since screen shots are much nicer than words, below is a collection from my config for your reference.

Call Log:

Answer Phone:

Telephone book:



Call Blocking:

Call through

Dialing rules

Telephone Devices

Dect configuration (I don’t have any compatible phones)

Telephone Numbers (fixed and SIP)

ComfyTel configuration:

IPPI configuration:

IPPI phone book, for calling skype numbers:

Fixed line configuration:

Line Settings:

30 January 2014

Colin Jackson

So long and thanks for all the fish

I stopped updating this blog some time ago, mainly because my work started taking me overseas so much I couldn’t keep up with it.

But now I am blogging again, perhaps a bit less frequently than I used to, over at my new website Jackson Strategy. I’ll be covering technology and how it changes our lives, and what we should be doing about that. Please drop by!

28 December 2013

Tim Penhey


2013 in review

2013 started with what felt like a failure, but in the end, I believe that the
best decision was made.  During 2011 and 2012 I worked on and then managed
the Unity desktop team.  This was a C++ project that brought me back to my
hard-core hacker side after four and a half years on Launchpad.  The Unity
desktop was a C++ project using glib, nux, and Compiz. After bringing Unity to
be the default desktop in 12.04 and ushering in the stability and performance
improvements, the decision was made to not use it as the way to bring the
Ubuntu convergence story forward. At the time I was very close to the Unity 7
codebase and I had an enthusiastic capable team working on it. The decision
was to move forwards with a QML based user interface.  I can see now that this
was the correct decision, and in fact I could see it back in January, but that
didn't make it any easier to swallow.

I felt that I was at a juncture and I had to move on.  Either I stayed with
Canonical and took another position or I found something else to do. I do like
the vision that Mark has for Ubuntu and the convergence story and I wanted to
hang around for it even if I wasn't going to actively work on the story itself.  For a while I was interested in learning a new programming language, and Go was considered the new hotness, so I looked for a position working on Juju. I was lucky to be able to join the juju-core team.

After a two week break in January to go to a family wedding, I came back to
work and started reading around Go. I started with the language specification
and then read around and started with the Go playground. Then started with the
Juju source.

Go was a very interesting language to move to from C++ and Python. No
inheritance, no exceptions, no generics. I found this quite a change.  I even
blogged about some of these frustrations.

As much as I love the C++ language, it is a huge and complex language. One
where you are extremely lucky if you are working with other really competent
developers. C++ is the sort of language where you have a huge amount of power and control, but you pay other costs for that power and control. Most C++ code is pretty terrible.

Go, as a contrast, is a much smaller, more compact, language. You can keep the
entire language specification in your head relatively easily. Some of this is
due to specific decisions to keep the language tight and small, and others I'm
sure are due to the language being young and immature. I still hope for
generics of some form to make it into the language because I feel that they
are a core building block that is missing.

I cut my teeth in Juju on small things. Refactoring here, tweaking
there. Moving on to more substantial changes.  The biggest bit that leaps to
mind is working with Ian to bring LXC containers and the local provider to the
Go version of Juju.  Other smaller things were adding much more infrastructure
around the help mechanism, adding plugin support, refactoring the provisioner,
extending the logging, and recently, adding KVM container support.

Now for the obligatory 2014 predictions...

I will continue working on the core Juju product bringing new and wonderful
features that will only be beneficial to that very small percentage of
developers in the world who actually deal with cloud deployments.

Juju will gain more industry support outside just Canonical, and will be seen
as the easiest way to deploy OpenStack clouds.

I will become more proficient in Go, but will most likely still be complaining
about the lack of generics at the end of 2014.

Ubuntu phone will ship.  I'm guessing on more than just one device and with
more than one carrier. Now I do have to say that these are just personal
predictions and I have no more insight into the Ubuntu phone process than
anyone outside Canonical.

The tablet form-factor will become more mature and all the core applications,
both those developed by Canonical and all the community contributed core
applications will support the form-factor switching on the fly.

The Unity 8 desktop that will be based on the same codebase as the phone and
tablet will be available on the desktop, and will become the way that people
work with the new very high resolution laptops.

30 October 2013

Tim Penhey


loggo - hierarchical loggers for Go

Some readers of this blog will just think of me as that guy that complains about the Go language a lot.  I complain because I care.

I am working on the Juju project.  Juju is all about orchestration of cloud services.  Getting workloads running on clouds, and making sure they communicate with other workloads that they need to communicate with. Juju currently works with Amazon EC2, HP Cloud, Microsoft Azure, local LXC containers for testing, and Ubuntu's MAAS. More cloud providers are in development. Juju is also written in Go, so that was my entry point to the language.

My background is from Python and C++.  I have written several logging libraries in the past, but always in C++ and with reasonably specific performance characteristics.  One thing I really felt was missing with the standard library in Go was a good logging library. Features that I felt were pretty necessary were:
  • A hierarchy of loggers
  • Able to specify different logging levels for different loggers
  • Loggers inherited the level of their parent if not explicitly set
  • Multiple writers could be attached
  • Defaults should "just work" for most cases
  • Logging levels should be configurable easily
  • The user shouldn't have to care about synchronization
Initially this project was hosted on Launchpad.  I am trialling moving the trunk of this branch to github.  I have been quite isolated from the git world for some time, and this is my first foray into git, and specifically into using git with Go.  If I have done something wrong, please let me know.


There is an example directory which demonstrates using loggo (albeit relatively trivially).

import ""
logger = loggo.GetLogger("project.area")
logger.Debugf("This is debug output.")
logger.Warningf("Some error: %v", err)

In juju, we normally create one logger for the module, and the dotted name normally reflects the module. This logger is then used by the other files in the module.  Personally I would have preferred file-local variables, but Go doesn't support those (there is no way to make a variable private to a single file), so as a convention we use the variable name "logger".

Specifying logging levels

There are two main ways to set the logging levels. The first is explicitly for a particular logger:


or chained calls:


Alternatively you can use a function to specify levels based on a string.

loggo.ConfigureLoggers("<root>=INFO; project=DEBUG; project.some.area=TRACE")

The ConfigureLoggers function parses the string and sets the logging levels for the loggers specified.  This is an additive function.  To reset logging back to the default (which happens to be "<root>=WARNING"), you call


You can see a summary of the current logging levels with
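
Pulling those pieces together, here is a rough sketch of what the calls above look like; the import path and exact identifiers are the ones loggo exports today, so treat the details as my approximation rather than the original snippets:

package main

import (
    "fmt"

    "github.com/juju/loggo"
)

func main() {
    // Explicitly set the level for a particular logger...
    logger := loggo.GetLogger("project.area")
    logger.SetLogLevel(loggo.TRACE)

    // ...or do the same thing as a chained call.
    loggo.GetLogger("project").SetLogLevel(loggo.DEBUG)

    // Configure several loggers at once from a specification string (additive).
    loggo.ConfigureLoggers("<root>=INFO; project=DEBUG; project.some.area=TRACE")

    // loggo also has a call to reset everything back to "<root>=WARNING";
    // LoggerInfo gives a summary of the current logging levels.
    fmt.Println(loggo.LoggerInfo())
}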


Adding Writers

A writer is defined using an interface. The default configuration is to have a "default" writer that writes to Stderr using the default formatter.  Additional writers can be added using loggo.RegisterWriter and reset using loggo.ResetWriters. Named writers can be removed using loggo.RemoveWriter.  Writers are registered with a severity level. Log messages below that severity level are not written to that writer.

More to do

I want to add a syslog writer, but the default syslog package for Go doesn't give the formatting I want. It has been suggested to me to just take a copy of the library implementation and make it work how I want.

I also want to add some filtering ability to the writers, both inclusive and exclusive, so you could say when registering a writer, "only show me messages from these modules", or "don't show messages from these other modules".

This library has been used in Juju for some time now, and fits most of our needs.  For now at least.

27 June 2013

Malcolm Locke


First Adventures in Spectroscopy

One of my eventual amateur astronomy goals is to venture into the colourful world of spectroscopy. Today I took my first steps on that path.

We have a small cut glass pendant hanging in our window which casts pretty rainbows around the living room in the evening sun. A while ago I took a photo of one of these with the intention of one day seeing what data could be extracted from it.


Not much to look at, but I decided tonight to see how much information can be extracted from this humble image.

The first step was to massage the file into FITS format, the standard for astronomical data, with the hope that I could use some standard tools on the resulting file.

I cropped the rainbow, converted it to grayscale, and saved it as FITS in Gimp. I'd hoped to use my usual swiss army knife for FITS files, SAOImage DS9, to extract a graph from the resulting file, but no dice. Instead I used the following Python script to plot a graph of the average brightness values of the columns of pixels across the spectrum. Averaging the vertical columns of pixels helps cancel out the effects of noise in the source image.

import pyfits
import matplotlib.pyplot as plt
import sys

# Open the FITS file given on the command line
hdulist = pyfits.open(sys.argv[1])
# Grab the mean value of each column in the image
mean_data = hdulist[0].data.mean(axis=0)
# Plot the averaged column brightness across the spectrum
plt.plot(mean_data)
plt.show()


Here's the resulting graph, with the cropped colour and grayscale spectra added for context.

Solar black body curve

If you weren't asleep during high school physics class, you may recognise the tell-tale shape of a black body radiation curve.

From the image above we can see that the Sun's peak intensity is actually in green light, not yellow as we perceive it. But better still we can (roughly!) measure the surface temperature of the Sun using this curve.

Here's a graph of colour vs wavelength.

I estimate the wavelength of the green in the peak of the graph above to be about 510nm. With this figure the Sun's surface temperature can be calculated using Wien's displacement law.

λmax * T = b

This simple equation says that the peak wavelength (λmax) of the black body curve times the temperature (T) is a constant, b, called Wien's displacement constant. We can rearrange the equation ...

T = b / λmax

... and then plug in the value of b to get the surface temperature of the Sun in kelvin.

T = 2,897,768 / 510 = 5681 K

That's close enough to the actual value of 5780 K for me!
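
The same arithmetic as a couple of lines of Python, using the figures above (the variable names are mine):

# Wien's displacement law: T = b / lambda_max
b = 2.897768e6      # Wien's displacement constant in nm.K
peak_nm = 510.0     # estimated peak wavelength from the graph, in nm
print(b / peak_nm)  # => 5681.9, i.e. roughly 5680 K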

I'm pretty encouraged by this result with nothing more than a piece of glass and a basic digital camera. A huge amount of what we know about the cosmos comes from examining spectra, but it's a field that doesn't get much love from amateur astronomers. Stay tuned for some hopefully more refined experiments in future.

08 May 2013

Glen Ogilvie

time to blog again.

I have been a bit slack with my blogging and not posted much for a long time. This has been due to working on lots of things, buying a house and a generally busy lifestyle.

I do however have a few things to blog about. So, in the coming days I will blog about auto_inst OS testing, corporate patching, Android tools, AuckLUG, Raspberry Pi, rdiff-backup, multiseat Linux, the local riverside community centre, getting 10 laptops (which will run Mageia), my cat Gorse, GPS tracking, house automation, Amazon AMIs and maybe some other stuff.

03 March 2013

Nevyn H

Why are the USB Ports Locked?

I was at a school recently where the USB ports of the computers are locked down (on the students' logins). This struck me as counter to learning. Your students' ability to share is greatly diminished. So what's the justification? Security. They might get viruses.

This to me just means that MS Windows is not fit for the purpose of education. If education needs to suffer to keep a computer system secure in a school, then it's a no-brainer. Choose something that doesn't require you to sacrifice education - especially if that's your primary purpose.

I know MS Windows is good for some things. Microsoft Office is a top notch application for example. You can do things in MS Office that you can't in other office suites. But that's only important if those features of MS Office are used.

Is the ability to create a pivot chart all that important in education? I would argue that if it's only in MS Office, then you've probably got more important things to be learning or teaching. Database theory - i.e. relational databases - is probably more important anyway.

I think Linux has something to offer here. A lot of people would probably be surprised by the amazing things that can be done using Open Source software - and for the most part, without software licensing costs. Take GIMP for example. Although not Photoshop, it is incredibly capable. You can do all sorts of things in it that teach a whole range of interesting concepts such as layers, filters, superimposing etc. There's a social lesson in there too - just because you see a photograph, doesn't necessarily mean that something is real. Removing a few pimples, stretching out a person's neck etc. isn't all that hard.

So what's stopping Linux in schools?

Firstly, the perception that Windows is free for schools. Actually, quite a lot of money goes towards the MS schools agreement - money that could be used elsewhere.

Secondly, who offers schools help with Linux? I was replaced in my last job by a Windows person - not a Linux person. There seem to be very few companies offering Linux desktop support. Given that school I.T. support is tied up in just a handful of I.T. companies, who are all willing to perpetuate the "Windows is free for schools" mantra, where do schools go for Linux support?

Thirdly, are there any real efforts to make Linux suitable for schools? Some might argue that edubuntu is going in this direction but... well look at their goals:
Our aim is to put together a system that contains all the best free software available in education and make it easy to install and maintain.
So the first part talks about free educational software. The second part is pretty much what Ubuntu provides anyway. So basically, it's a copy of Ubuntu with a few education applications added. And while I hate to criticise Open Source software (although I do fairly often), a lot of it is made by geeks for geeks. This is incredibly evident in the educational software sector where educational games often lack lasting engagement.

When looking at what schools are already using on their desktops it's not unusual to see a set up identical to what a secretary in a small business might have. The operating system, a browser and MS Office. All of which (LibreOffice rather than MS Office) are installed in most desktop Linux distributions anyway.

What does Windows offer? A whole lot of management. This isn't a road I would like Linux taking as I think it just stifles education anyway. Instead, I think a school set up should be concerned with keeping kids safe - an I.T. system should look after itself without limiting education.

So what does this look like to me?

Kids as the admins. They should be able to install whatever applications they need to accomplish a task.

A fallback position - currently there are PXE boot options. I think it needs to be more local than that - a rescue partition. Perhaps PXE for the initial load OR USB sticks. A complete restore should take less than 10 minutes.

Some small amount of management - applications that can or cannot be installed for example. Internet security done on a network level, not an individual machine level.

If you find yourself justifying something on the desktop for security, then you have to seriously ask yourself: what is it that you're protecting? It should no longer be enough to just play the security card by default. There's a cost to security. This needs to be understood.

Cloud or server based storage. The individual machines should not hold files vital to a child's work. This makes backups a whole lot easier - even better if you can outsource that to someone else. Of course, there's the whole "offshore" issue, i.e. government agencies do not store information offshore.

I guess this post is really just a great big justification for Tartare Source. It's not the only use. I think a similar set up could be incredibly beneficial to a business for example. Less overhead in terms of licence tracking and security concerns. Freedom for people to work in the way that they feel most comfortable etc.

So I guess the question is still, where would you find the support? With the user in mind and "best practise" considered inappropriate in civilised company...

27 February 2013

Robin Paulson

The Digital Commons: Escape From Capital?

So, after a year of reading, study, analysis and writing, my thesis is complete. It’s on the digital commons, of course, this particular piece is an analysis to determine whether or not the digital commons represents an escape from, or a continuation of, capitalism. The full text is behind the link below.

The Digital Commons: Escape From Capital?

In the conclusion I suggested various changes which could be made to avert the encroachment of capitalist modes; as such, I will be releasing various pieces of software and other artefacts over the coming months.

For those who are impatient, here’s the abstract, the conclusion is further down:

In this thesis I examine the suggestion that the digital commons represents a form of social organisation that operates outside any capitalist relationships. I do this by carrying out an analysis of the community and methods of three projects, namely Linux, a piece of software; Wikipedia, an encyclopedia; and Open Street Map, a geographic database.

Each of these projects, similarly to the rest of the digital commons, do not require any money or other commodities in return for accessing them, thus denying exchange as the dominant method of distributing resources, instead offering a more social way of conducting relations. They further allow the participation of anyone who desires to take part, in relatively unhindered ways. This is in contrast to the capitalist model of requiring participants demonstrate their value, and take part in ways demanded by capital.

The digital commons thus appear to resist the capitalist mode of production. My analysis uses concepts from Marx’s Capital Volume 1, and Philosophic and Economic Manuscripts of 1844, with further support from Hardt and Negri’s Empire trilogy. It analyses five concepts, those of class, commodities, alienation, commodity fetishism and surplus-value.

I conclude by demonstrating that the digital commons mostly operates outside capitalist exchange relations, although there are areas where indicators of this have begun to encroach. I offer a series of suggestions to remedy this situation.

Here’s the conclusion:

This thesis has explored the relationship between the digital commons and aspects of the capitalist mode of production, taking three iconic projects: the Linux operating system kernel, the Wikipedia encyclopedia and the Open Street Map geographical database as case studies. As a result of these analyses, it appears digital commons represent a partial escape from the domination of capital.


As the artefacts assembled by our three case studies can be accessed by almost anybody who desires, there appear to be few class barriers in place. At the centre of this is the maxim “information wants to be free” 1 underpinning the digital commons, which results in assistance and education being widely disseminated rather than hoarded. However, there are important resources whose access is determined by a small group in each project, rather than by a wider set of commoners. This prevents all commoners who take part in the projects from attaining their full potential, favouring one group and thus one set of values over others. Despite the highly ideological suggestion that anyone can fork a project at any time and do with it as they wish, which would suggest a lack of class barriers, there is significant inertia which makes this difficult to achieve. It should be stressed however, that the exploitation and domination existing within the three case studies is relatively minor when compared to typical capitalist class relations. Those who contribute are a highly educated elite segment of society, with high levels of self-motivation and confidence, which serves to temper what the project leaders and administrators can do.


The artefacts assembled cannot be exchanged as commodities, due to the license under which they are released, which demands that the underlying information, be it the source code, knowledge or geographical data always be available to anyone who comes into contact with the artefact, that it remain in the commons in perpetuity.


This lack of commoditisation of the artefacts similarly resists the alienation of those who assemble them. The thing made by workers can be freely used by them, they make significant decisions around how it is assembled, and due to the collaborative nature essential to the process of assembly, constructive, positive, valuable relationships are built with collaborators, both within the company and without. This reinforces Stallman’s suggestion that free software, and thus the digital commons is a more social way of being 2.


Further, the method through which the artefacts are assembled reduces the likelihood of fetishisation. The work is necessarily communal, and involves communication and association between those commoners who make and those who use. This assists the collaboration essential for such high quality artefacts, and simultaneously invites a richer relationship between those commoners who take part. However, as has been shown, recent changes have shown there are situations where the social nature of the artefacts is being partially obscured, in favour of speed, convenience and quality, thus demonstrating a possible fetishisation.


The extraction of surplus-value is, however, present. The surplus extracted is not money, but in the form of symbolic capital. This recognition from others can be exchanged for other forms of capital, enabling the leaders of the three projects investigated here to gain high paying, intellectually fulfilling jobs, and to spread their political beliefs. While it appears there is thus exploitation of the commoners who contribute to these projects, it is firstly mild, and secondly does not result in a huge imbalance of wealth and opportunity, although this should not be seen as an apology for the behaviour which goes on. Whether in future this will change, and the wealth extracted will enable the emergence of a super-rich as seen in the likes of Bill Gates, the Koch brothers and Larry Ellison remains to be seen, but it appears unlikely.


There are however ways in which these problems could be overcome. At present, the projects are centred upon one website, and an infrastructure and values, all generally controlled by a small group who are often self-selected, or selected by some external group with their own agenda. This reflects a hierarchical set of relationships, which could possibly be addressed through further decentralisation of key resources. For examples of this, we can look at YaCy 3, a search engine released under a free software license. The software can be used in one of a number of ways, the most interesting of these is network mode, in which several computers federate their results together. Each node searches a different set of web sites, which can be customised, the results from each node are then pooled, thus when a commoner carries out a search, the terms are searched for in the databases of several computers, and the results aggregated. This model of decentralisation prevents one entity taking control over what are a large and significant set of resources, and thus decreases the possibility of exploitation, domination and the other attendant problems of minority control or ownership over the means of production.


Addressing the problem of capitalists continuing to extract surplus, requires a technically simple, but ideologically difficult, solution. There is a general belief within the projects discussed that any use of the artefacts is fine, so long as the license is complied with. Eric Raymond, author of the influential book on digital commons governance and other matters, The Cathedral and The Bazaar, and populariser of the term open source, is perhaps most vocal about this, stating that the copyleft tradition of Stallman’s GNU is overly restrictive of what people, by which he means businesses, can do, and that BSD-style, no copyleft licenses are the way forward 4. The majority of commoners taking part do not follow his explicit preference for no copyleft licenses, but nonetheless have no problem with business use of the artefacts, suggesting that wide spread use makes the tools better, and that sharing is inherently good. It appears they either do not have a problem with this, or perhaps more likely do not understand that this permissiveness allows for uses that they might not approve of. Should this change, a license switch to something preventing commercial use is one possibility.

1. Roger Clarke, ‘Roger Clarke’s “Information Wants to Be Free …”’, Roger Clarke’s Web-Site, 2013.

2. Richard Stallman, Free Software Free Society: Selected Essays of Richard M. Stallman, ed. Joshua Gay, 2nd ed. (Boston, MA: GNU Press, Free Software Foundation, 2010), 8.

3. YaCy, ‘Home’, YaCy – The Peer to Peer Search Engine, 2013.

4. Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, ed. Tim O’Reilly, 2nd ed. (Sebastopol, California: O’Reilly, 2001), 68–69.

09 January 2013

Malcolm Locke


Kill All Tabs in Eclipse

As a long time Vim user, I have the following config in my ~/.vimrc to ensure I never ever enter an evil tab character into my source code.

set   shiftwidth=2   " indent by two spaces
set   tabstop=2      " a tab displays as two spaces wide
set   smarttab       " use shiftwidth when tabbing at the start of a line
set   et             " expandtab: always insert spaces, never a real tab

For my Android projects, I'm starting to use Eclipse, and unfortunately eternally banishing all tabs in Eclipse is not such an easy task. Here's where I'm at so far, YMMV and I'll update this as I find more. It seems the tab boss is difficult to kill in this app.

  • Under Window -> Preferences -> General -> Editors -> Text Editors ensure Insert spaces for tabs is checked.
  • Under Window -> Preferences -> Java -> Code Style -> Formatter create a new profile based off the default, and under the Indentation tab set Tab policy to Spaces only.

18 December 2012

Nevyn H

Open Source Battle Grounds

I often find myself at odds with a lot of FLOSS (Free/Libre/Open-Source Software) people due to my attitude on office suites.

To me, they're not the place to start introducing FLOSS. Those ideals about interoperability are lost when people then find they're having to adjust bullet points and the like when those same files end up on a different office suite. This is fiddly work (which I'm of the opinion should NEVER be a problem).

The matter is even more confused when there's already a de facto standard used in businesses everywhere.

The fact is, trying to replace MS Office with LibreOffice/OpenOffice (and formerly StarOffice) is setting yourself up for failure.

It's my belief that bringing FLOSS to a business should ALWAYS extend their functionality. Introducing GIMP to crop images in place of packages too expensive for most businesses to contemplate delivers an immediate cost benefit. Building a database (a real database - which is NOT a spreadsheet or a multi-sheet spreadsheet, more rubbish that I hear perpetrated) helps ensure data integrity and enables future expansion - a database can later be offered via a web front end or combined with other information to build a more complete information management system. Both of these add value. Asking people to sacrifice something instead - whether it's as simple as user interface elements or something more complex like sharing documents - is a loss, and introducing FLOSS should not cause a loss if you want to promote it.

I'm also not a fan of Google Docs. Sure, they're great for collaborative work - in fact, for this purpose, they just can't be beat. However, instead of the office suite being the limiting factor, the limiting factor has turned into the browser. Page breaks in a word processing document appear in different places depending on what browser you're using. There ALWAYS seem to be annoying nags if you're not using Google Chrome (OS, the Browser or Chromium Browser).

I'm a big fan of getting rid of office suites. I consider them to be HORRIBLY outdated technology.

Spreadsheets are great for small, quick, tabulated jobs where presentation matters more than data integrity, BUT extending the range that spreadsheets can handle (previously you couldn't have more than 65,536 rows) has confused their purpose even further.

They should be replaced by databases. This then introduces the opportunity to make a system work to a business rather than a business trying to work around a software package.
    And no... I don't mean sqlite. Sqlite is probably good for stand-alone programs, but for the most part those databases that are vital to a business's everyday operations need to be shared by multiple people. A more server-centric database:
    1. Has locking features which make them scalable. If one person has a spreadsheet open then, generally, the whole spreadsheet is locked (collaborative features aside - although in one version of Office, this caused all sorts of headaches). Having the capacity for an information system to scale brings with it an incredibly positive message - you're working with the business to help with its growth.
    2. Clears the way for expansion such as having it work with other data in a consistent way.
    3. Helps with data integrity by ensuring everyone is accessing the information in the same way (hopefully via web based interfaces - even if not available on the Internet, designing interfaces for the Internet generally creates an OS/device agnostic interface).
    The word processor could be a whole lot smarter. Rather than presenting you with 20,000 formatting controls (on an individual character basis), I'm of the opinion that you should instead be able to use styles - yes, the same concept as web pages. Lyx, which describes itself as a document processor, is close, except that it doesn't make it easy for you to define your own styles. Other word processors have a cursory nod in this direction but do incredibly badly at enforcing it (a friend of mine wrote up a document, sent it away for review, and then went through it again to fix up its structure - busy work just as pointless as fixing up bullet points). Currently, writing up a document is a mad frenzy of content and formatting. What if you could concentrate on the content, then quickly mark out blocks of text (that's a heading, that's a sub heading, etc.) and let the computer take care of the rest?

    I'm of the opinion that presentations are, in the normal course of things, done INCREDIBLY badly. The few good presentations I've seen have been from people who do presentations for a living - the likes of Lawrence Lessig and Al Gore. Their presentations were used to illustrate something, and I don't believe that a lack of presentation applications would have stopped either of them from presenting brilliantly. They are the exception, and they have gone to exceptional lengths to have good presentations. Otherwise, slides are just bullet points - points to talk to - and they don't engage the audience. It's much better to have vital information - that which you actually need illustrated, such as charts - behind you. (When I was told I needed to have some slides behind me I agonised over it. I didn't want them, I didn't need any charts, and I think slides are more of a distraction than an aid when you don't actually need them. I spent more time agonising over those slides than I did thinking about what I wanted to say - fitting a speech to slides just sucks. The speech was awful as I was feeling horrendously anxious about the slides.) Something that helps to illustrate a point is so much more engaging than putting up bullet points.

    So I think FLOSS has a huge part to play in advancing technology here. Instead of fighting a losing battle by trying to perpetuate existing terrible practises (practises shaped by MS's profits), FLOSS could instead be used to show people better and more efficient ways of doing things.

    This really came home to me when I was trying to compile a report on warranties. Essentially, while trying to break down the types of repairs across each of the schools, I was having a hell of a time trying to get the formatting consistent. The problem would be the same regardless of which office suite I was using. Instead, I really should have been able to create a template and then select which sheets it should use to generate a finished report - charts and all.

    Office Suites create bad (and soul destroying) work practises.

    But back to the original point, when making a proposal, think about what value it's adding. Is it adding value? Is it adding value perceived by the intended audience? The revolution that oh so many Linux people talk about isn't going to happen by replacing like for like. Instead, it needs to be something better. Something that the intended audience sees value in. Something that revolutionises the way people do things. Things that encompass the best in Open Source Software - flexibility, scalability and, horribly important to me, customizability. All derivatives of the core concept of Free (as in Freedom)...

    16 December 2012

    Colin Jackson

    People should be allowed to have red cars!

    Dear Editor

    I am writing to ask why people are not allowed to have red cars. Some of my friends’ favourite colours are red, but they are not able to have their cars painted this way. Why?

    I have seen people writing in your newspaper to say that cars are meant to be black and it is simply wrong to paint them any other colour. They generally don’t explain why they think this, except to point to the manufacturers’ books that say cars must be made black (but don’t justify this). For heaven’s sake! This is the twentieth century and we have moved on so far since cars were invented. Back then, some people said that having any kind of car was wrong – look how they’ve changed once they have got used to the idea.

    Others have said, if my friends have red cars, that their black cars will be worth less somehow. Nonsense! There’s absolutely no reason to assume that. They can keep driving round the black cars they’ve always had. Some of the sillier of these people have even said that, if we let people paint cars red, they will want to go around painting other things red, like horses and dogs. What a crazy thing to say!

    The most honest people who don’t want people to have red cars say it’s because they just don’t like them. Some of the people who use other arguments really think this, as well, but they don’t like to say it in public. But no-one is going to force them to have a red car. They can keep having a black one, or none at all. Other people having red cars won’t affect them at all.

    I’m asking everyone who doesn’t think there should be red cars – think about why you oppose them. Why is it your business to try to stop something that won’t affect you and will make other people happy?


    Red Car Lover

    20 November 2012

    Nevyn H

    Open Source Awards

    So it's a few days after the NZ Open Source Awards. The Manaiakalani Project won! The presenter for the award in education, Paul Seiler, did mention that the award was really about two things: the project itself and its use of Open Source Software, and the contribution by me (sorry - ego does have to enter in at times - I'm awesome!). So my time to shine. I'm thinking about doing a great big post about the evolution of a speech, but for now here are my notes, which I didn't actually take with me:
    • Finalists
    • Community
    • Leadership
    • Synergy
    • Personal
    Halfway through my speech I realised my accent had turned VERY kiwi. Sod it - carry on.

    Anyway, I did say in a previous post that I'd look at the awards and process a little more closely. So let's do that!

    Nominations are gathered by opening the process up to the public, which runs for a few anxious months. I didn't want to nominate myself as I felt this would be egotistical. However, with the discussion going on - the lack of mention of Open Source Software on the Manaiakalani website etc. - I was fairly confident the project had been nominated.

    The judges are all involved with various projects. It's such a small community that it'd be difficult to find people who knew what to look for who weren't involved in the community in some way. They have to do this with full disclosure.

    Amy Adams opened the dinner with a "software development is important to the government" speech, with emphasis on the economic benefits. Of course, what she didn't say was that we spend far more money offshore on software than we do onshore. Take this as a criticism - we have the skills in New Zealand to take our software into our own hands and make it work the way we work. We don't have the commitment from the New Zealand government or businesses to be able to do this.

    The dinner itself is a little strange. Here you are sitting at a table of your peers and you can't help but think that all of the finalists should probably be getting their bit in the limelight. So at our table we had a sort of mock rivalry going on.

    There was:

    • Nathan Parker, principal at Warrington School - the first school in New Zealand to take on a full Open Source and Open Culture philosophy. I've idolised Nathan for a while - the guy's a dude! The school itself runs on Open Source Software - completely. It's small enough that they don't need a Student Management System. As well as that, they have a low power FM station that has been running 24/7 for the last 2 years, run by volunteers and playing Creative Commons content. They also do a sort of "computers at homes" programme done right, i.e. the sense of ownership is accomplished by getting people to build their own computers.
    • Ian Beardslee from Catalyst I.T. Ltd. for the Catalyst Open Source Academy. For 2 summers now, Catalyst I.T. have taken an intake of students - basically dropping the barrier of entry into open source contribution through a combination of classroom-type sessions and mentored sessions for real contributions.
    Personally, I don't think I would have liked to be a judge, given just how close I perceive all of these projects to be. Paul Seiler, who I was sitting next to, did kind of say that you're all accomplishing the same sorts of things in different ways.

    And I saw a comment on a mailing list about those projects that are out there doing their thing but that no one nominated. This puts me in mind of yet another TED talk that I watched the morning after the NZOSA dinner.

    There was lots in that video that resonated with me. Things like me wanting to learn electronics but having a fear around it due to what people kept saying around me - "You have to get it absolutely right or it won't work" - paralysing me with fear of getting it wrong and it not working. The same thing was said to me about programming though I learnt fairly quickly that I could make mistakes and it wasn't the end of the world. In fact, programming is kind of the art of putting bugs in.

    But more importantly, and more on topic, the video kind of defines why there are likely a whole lot of projects that should have been nominated but weren't due to weird hang ups.

    And in a greater sense, shame is probably a huge threat to the Open Source Community. I don't think I've ever contributed more than a few lines of code openly because I'm convinced that whatever code just isn't good enough. That I'll be ridiculed for my coding style or assumptions etc. And I've always said that I'll put out my code after a clean up etc.

    Take the code for the Manaiakalani project. It often felt like I was hacking things (badly in some cases) in order to get the functionality we needed - things like creating a blacklist of applications, for example. I'm sure there's evidence in the code that I'd never coded in Python before the project, as well. And yet it's not the code, but the thoughts behind the code, that's award winning. Even if the code is hideous, it's what the code is accomplishing or attempting to accomplish that's important.

    So for the next NZOSA gala, I would love to see, not just a list of the finalists, but also a short list of nominations (those that are deserving of at least a little recognition even if not quite making the final cut). I'd love to see the organisers and judges complaining about the number of nominations coming through. I would love to see the "Open Source Special Recognition" award become a permanent fixture (won by Nathan Parker this year). And hell, greater media attention probably wouldn't be such a bad thing.

    On a very personal note though - I now have to change from being a "Professional Geek" to being an "Award Winning Geek". Feels good on these ol' shoulders of mine.

    03 November 2012

    Rob Connolly

    Tiny MQTT Broker with OpenWRT

    So yet again I’ve been really lax at posting, but meh. I’ve still been working on various projects aimed at home automation – this post is a taster of where I’m going…

    MQTT (for those that haven’t heard about it) is a real time, lightweight, publish/subscribe protocol for telemetry based applications (i.e. sensors). It’s been described as “RSS for the Internet of Things” (a rather poor description in my opinion).

    The central part of MQTT is the broker: clients connect to brokers in order to publish data and receive data in feeds to which they are subscribed. Multiple brokers can be fused together in a hierarchical structure, much like the mounting of filesystems in a unix-like system.

    I’ve been considering using MQTT for the communication medium in my planned home automation/sensor network projects. I wanted to set up a hierarchical system with different brokers for different areas of the house, but hadn’t settled on a hardware platform. Until now…

    …enter the TP-Link MR3020 ‘travel router’, which is much like the TL-WR703N which I’ve seen used in several hardware hacks recently:

    It’s a Tiny MQTT Broker!

    I had to ask a friend in Hong Kong to send me a couple of these (they aren’t available in NZ) – thanks Tony! Once I received them, installing OpenWRT was easy (basically just upload it through the existing web UI and follow the instructions on the wiki page I linked to above). I then configured the wireless adapter in station mode so that it would connect to my existing wireless network and added a cheap 8GB flash drive to expand the available storage (the device only has 4MB of on-board flash, of which ~900KB is available after installing OpenWRT). I followed the OpenWRT USB storage howto for this and to my relief found that the on-board flash had enough space for the required drivers (phew!).

    Once the hardware-type stuff was sorted, with the USB drive partitioned (1GB swap, 7GB /opt) and mounting on boot, I was able to install Mosquitto, the open source MQTT broker, with the following command:

    $ opkg install mosquitto -d opt

    The -d option allows the package manager to install to a different destination, in this case /opt. Destinations are configured in /etc/opkg.conf.
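
    For reference, opkg destinations are just name/path pairs in /etc/opkg.conf, so the relevant excerpt probably looks something like the following (the exact contents of my file are an assumption here):

    # /etc/opkg.conf (excerpt)
    dest root /
    dest opt /opt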

    It took a little bit of fiddling to get mosquitto to start at boot, mainly because of the custom install location. In the end I just edited the paths in /opt/etc/init.d/mosquitto to point to the correct locations (I changed the APP and CONF settings). I then symlinked the script to /etc/rc.d/S50mosquitto to start it at boot.

    That’s about as far as I’ve got, apart from doing a quick test with mosquitto_pub/mosquitto_sub to check everything works. I haven’t tried mounting the broker under the master broker running on my home server yet.
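
    If you want to repeat that sanity check, it is just a subscribe in one shell and a publish in another; the host and topic here are made up, so substitute your own:

    mosquitto_sub -h 192.168.1.100 -t test/topic &
    mosquitto_pub -h 192.168.1.100 -t test/topic -m "hello from the MR3020"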

    The next job is to investigate the serial port on the device in order to attach an Arduino clone which I soldered up a while ago. That will be a story for another day, hopefully in the not-too-distant future!

    21 August 2012

    Rob Connolly

    Smartclock Prototype

    So as promised here are the details and photos of the Arduino project I’ve been working on – a little late I know, but I’ve actually been concentrating on the project.

    The project I’m working on is a clock, but as I mentioned before it’s not just any old clock. The clock is equipped with sensors for temperature, light level and battery level. It also has a bluetooth module for relaying this data back to my home server. This is the first part of a larger plan to build a home automation and sensor network around the house (and garden). It’s serving as kind of a test bed for some of the components I want to use as well as getting me started with the software.

    Prototype breadboard

    The prototype breadboard showing the Roving Networks RN-41 bluetooth module on the left and the sensors on the right. The temperature sensor (bottom middle) is a TMP36 and the light sensor is a simple voltage divider using a photocell.

    As you can see from the photos this is a very basic prototype at the moment – although as of this weekend all the hardware is working as well as the software drivers. I just have the firmware to finalise before building the final unit.

    Time display

    The (very bright!) display showing the time. I’m using the Sparkfun 7-segment serial display, which I acquired via Nicegear. It’s a lovely display to work with!

    The display is controlled via SPI and the input from the light sensor is used to turn off the display when it is dark in order to save power when there is no-one in the room. The display will also be able to be controlled from the server via a web interface.

    Temperature display

    The display showing the current temperature. The display switches between modes every 20 seconds with its default settings.

    Careful readers will note the absence of a real time clock chip to keep accurate time. The time is kept using one of the timers on the ATmega328p. Yes, before you ask, this isn’t brilliantly accurate (it loses about 30 seconds every hour!), but I am planning to sync the time from the server via the bluetooth connection, so I’m not concerned.

    The final version will use an Arduino Pro Mini 3.3v (which I also got from Nicegear) for the brains, along with the peripherals shown. The Duemilanove shown is just easier for prototyping (although it makes interfacing with the RN-41 a little more difficult).

    I intend to publish all the code (both for the firmware and the server) and schematics under Open Source licences as well as another couple of blog posts on the subject (probably one on the final build with photos and one on the server). However, that’s it for now.

    15 August 2012

    Rob Connolly

    Quick Update…

    Well, I’ve not been doing great with posting more, especially on the quick short posts front. I guess it’s because I’m either too busy all the time or because I just don’t think anyone wants to read every last thought which pops into my head. Probably a bit of both!

    Anyway, here’s a quick run-down of what I’ve been up to over the past couple of weeks:

    • I’ve been working on an Arduino project at home. I’ll post more on this over the weekend (with photos). For now I’ll just say that it’s a clock with some sensors on it – but it does a little more than your average clock. Although I’ve had my Arduino for a couple of years I’ve never really used it in earnest and I’m finding it refreshing to work with. Since I use PICs at work the simpler architecture is nice. Of course I program it in C so I can’t comment on the IDE/language provided by the Arduino tools.
    • The beer which I made recently is now bottled and maturing. It’ll need a couple more weeks to be ready to drink though (actually the longer the better really, but I can never wait!). I’ll report back on what it’s like when I try it.
    • I’ve been thinking about ways to get the ton of data I have spread across my machines in order. Basically I want to get it all onto my mythbox/home server/personal cloud and accessible via ownCloud and NFS. I also have a ton of dead tree (read ‘important’ documents) which needs scanning and a ton of CDRs that need backing up. After that I have to overhaul my backup scheme. It’s a big job – hence why I’ve only been thinking about doing it so far.
    • I’ve also been thinking about upgrading my security with the recent hacks which have occurred. Since I’m not hugely reliant on external services (i.e. Google, Facebook, Apple and Amazon) I’m doing pretty well already. Also, I already encrypt all my computers anyway (which is way more effective than that stupid ‘remote wipe’ misfeature Apple have). I am considering upgrading to two factor authentication using Google Authenticator anywhere I can and I want to switch to using GPG subkeys and storing my master private key somewhere REALLY secure. I’ll be writing about these as I do them so stay tuned.

    Well, hopefully that’s a quick summary of what I’ve been up to (tech-wise) lately as well as what might be to come in these pages. For now, that’s all folks.

    24 July 2012

    Pass the Source

    Google Recruiting

    So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

    Here’s the suggestion I made on how they can really get in front of FOSS developers:

    Hi [name]

    Just a quick note to thank you for getting in touch with so many of our
    Catalyst IT staff, both here and in Australia, with job offers. It comes
    across as a real compliment to our company that the folks that work here
    are considered worthy of Google’s attention.

    One thing about most of our staff is that they *love* open source. Can I
    suggest, therefore, that one of the best ways for Google to demonstrate
    its commitment to FOSS and FOSS developers this year would be to be a
    sponsor of the NZ Open Source Awards. These have been very successful at
    celebrating and recognising the achievements of FOSS developers,
    projects and users. This year there is even an “Open Science” category.

    Google has been a past sponsor of the event and it would be good to see
    you commit to it again.

    For more information see:

    Many thanks

    09 July 2012

    Andrew Caudwell

    Inventing On Principle Applied to Shader Editing

    Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version has syntax and error highlighting, and even bracket matching.

    There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of Demo Scene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live coding videos.

    Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their twitter account name in the source code.

    I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.
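
    To give a flavour of what you actually type into the editor pane, here is the sort of trivial fragment shader you might start from (a minimal sketch, using the time and resolution uniforms the sandbox provides):

    #ifdef GL_ES
    precision mediump float;
    #endif

    uniform float time;        // seconds since the shader started, supplied by the sandbox
    uniform vec2  resolution;  // viewport size in pixels, supplied by the sandbox

    void main( void ) {
        vec2 p = gl_FragCoord.xy / resolution.xy;         // normalised pixel position
        float wave = 0.5 + 0.5 * sin(10.0 * p.x + time);  // scrolling sine bands
        gl_FragColor = vec4(p.x * wave, p.y * wave, wave, 1.0);
    }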

    Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):



    GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavyweight compiled languages, whose long build and linking times make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

    I would really like a save draft button that saves shaders locally so I have some place to save things that are a work in progress, I might have to look at how I can add this.

    Update: Fixed links to point at

    05 June 2012

    Pass the Source

    Wellington City Council Verbal Submission

    I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.


    I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

    I have 3 Points to make in 3 minutes.

    1. The Long Term plan lacks vision and is a plan for stagnation and erosion

    It focuses on selling assets, such as community halls and council operations and postponing investments. On reducing public services such as libraries and museums and increasing user costs. This will not create a city where “talent wants to live”. With this plan who would have thought the citizens of the city had elected a Green Mayor?

    Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

    My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

    2. Show faith in local companies

    The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure the council staff have a strong direction and mandate to procure locally. In particular the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

    A way of achieving local company participation in this is through disaggregation – the breaking up of large-scale initiatives into smaller, more manageable components – for the following reasons:

    • It improves project success rates, which helps the public sector be more effective.
    • It reduces project cost, which benefits the taxpayers.
    • It invites small business, which stimulates the economy.

    3. Smart cities are open source cities

    Use open source software as the default.

    It has been clear for a long time that open source software is the most cost effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost effectiveness and innovation that come with open source.

    It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, and when world beating free alternatives exist.

    This has to change, looking round the globe it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh NC to Paris to San Francisco.

    As well as saving money, open source brings a state of mind. That is:

    • Willingness to share and collaborate
    • Willingness to receive information
    • The right attitude to be innovative, creative, and try new things

    Thank you. There should now be 2 minutes left for questions.

    11 March 2012

    Malcolm Locke


    Secure Password Storage With Vim and GnuPG

    There are a raft of tools out there for secure storage of passwords, but they will all come and go; Vim and GnuPG are forever.

    Here's the config:

    augroup encrypted
        " First make sure nothing is written to ~/.viminfo while editing
        " an encrypted file.
        autocmd BufReadPre,FileReadPre      *.gpg set viminfo=
        " We don't want a swap file, as it writes unencrypted data to disk
        autocmd BufReadPre,FileReadPre      *.gpg set noswapfile
        " Switch to binary mode to read the encrypted file
        autocmd BufReadPre,FileReadPre      *.gpg set bin
        autocmd BufReadPre,FileReadPre      *.gpg let ch_save = &ch|set ch=2
        autocmd BufReadPost,FileReadPost    *.gpg '[,']!gpg --decrypt 2> /dev/null
        " Switch to normal mode for editing
        autocmd BufReadPost,FileReadPost    *.gpg set nobin
        autocmd BufReadPost,FileReadPost    *.gpg let &ch = ch_save|unlet ch_save
        autocmd BufReadPost,FileReadPost    *.gpg execute ":doautocmd BufReadPost " . expand("%:r")
        " Convert all text to encrypted text before writing
        autocmd BufWritePre,FileWritePre    *.gpg   '[,']!gpg --default-recipient-self -ae 2>/dev/null
        " Undo the encryption so we are back in the normal text, directly
        " after the file has been written.
        autocmd BufWritePost,FileWritePost  *.gpg   u
        " Fold entries by default
        autocmd BufReadPre,FileReadPre      *.gpg set foldmethod=expr
        autocmd BufReadPre,FileReadPre      *.gpg set foldexpr=getline(v:lnum)=~'^\\s*$'&&getline(v:lnum+1)=~'\\S'?'<1':1
    augroup END

    Now, open a file, say super_secret_passwords.gpg and enter your passwords with a blank line between each set:

    My Twitter account
    malc : s3cr3t
    My Facebook account
    malc : s3cr3t
    My LinkedIn account
    malc : s3cr3t

    When you write the file out, it will be encrypted with your GPG key. When you next open it, you'll be prompted for your GPG private key passphrase to decrypt the file.

    The line folding config will mean all the passwords will be hidden by default when you open the file, you can reveal the details using zo (or right arrow / l) with the cursor over the password title.

    I like this system because as long as I have gpg and my private key available, I can extract any long lost password from my collection.
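
    As an aside, you don't even need Vim to get at an entry in a pinch. Assuming the same file name as above, something like this works from any shell (it will prompt for your GPG passphrase):

    gpg --decrypt super_secret_passwords.gpg 2>/dev/null | grep -A1 Twitter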

    05 January 2012

    Pass the Source

    The Real Tablet Wars

    tl;dr, formerly known as the Executive Summary: Openness + Good Taste Wins

    Gosh, it’s been a while. But this site is not dead. Just been distracted by and twitter.

    I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

    It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

    Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

    I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

    The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

    We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the tinyMCE plugin, just the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution and that the real issue was a bug in the client browser.

    “No problem”, I thought. “Let’s install Firefox, I know that works”.

    But no, Firefox is not available to iPad users, and Apple likes to “protect” its users by tightly controlling whose applications are allowed to run on the tablet. OK, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser – it’s for your own good.

    Someone suggested that the iPad’s operating system we were using needed upgrading and the new version might have a fixed browser. No, we couldn’t do that, because we didn’t have Apple’s music playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they also downloaded the upgrade for us. Only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

    Er, and the upgrade failed to fix the problem. One day gone.

    So a laptop was press-ganged into action, which, in the end, was a blessing because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools: keep in mind it is good to have our children *write* stuff, often.

    The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

    But the worst aspect of this is that because of Apple’s success in creating well-designed gadgets, other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, the Linux kernel with Android, that allowed them to compete with Apple in the first place, and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

    Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions, such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

    The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

    Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique, and that is what we need to control and manage for ourselves. Allow us to do that, with taste, and you’ll be a winner.

    PS I should also say “thanks” fellow sponsors Chris Devine and Devine Computing for just making stuff work.

    * I know all is not equal. Apple’s competitive advantage is that it “has taste” – but not in its restrictions.

    11 November 2011

    Robin Paulson

    Free software and the extraction of capital

    This essay will assess the relationship between free software and the capitalist mode of accumulation, namely the extraction of various forms of capital to produce profit. I will perform the analysis through the lens of the Marxist concept of extracting surplus from workers, Bourdieu’s theory of capital, and the ideas of Hardt and Negri on the various economic paradigms and the progression through them.

    The free software movement is one which states that computer software should not have owners (Stallman, 2010, chap. 5), and that proprietary software is fundamentally unethical (Stallman, 2010, p. 5). This idea is realised through “the four freedoms” and a range of licenses, which permit anyone to: use for any purpose; modify; examine and redistribute modified copies, of the software so licensed (Free Software Foundation, 2010). These freedoms are posited as a contrast to the traditional model of software development, which rests all ownership and control of the product in its creators. As free software is not under private control, it would appear at first to escape the capitalist mode of production, and the problems which ensue from that, such as alienation, commodity fetishism and the concentration of power and wealth in the hands of a few.

    For a definition of the commons, Bollier states:

    commons comprises a wide range of shared assets and forms of community governance. Some are tangible, while others are more abstract, political, and cultural. The tangible assets of the commons include the vast quantities of oil, minerals, timber, grasslands, and other natural resources on public lands, as well as the broadcast airwaves and such public facilities as parks, stadiums, and civic institutions. … The commons also consists of intangible assets that are not as readily identified as belonging to the public. Such commons include the creative works and public knowledge not privatized under copyright law. … A last category of threatened commons is that of so-called ‘gift economies’. These are communities of shared values in which participants freely contribute time, energy, or property and over time receive benefits from membership in the community. The global corps of GNU/Linux software programmers is a prime example: enthusiasts volunteer their talents and in return receive useful rewards and group esteem. (2002)

    Thus, free software would appear to offer an escape from the system of capitalist dominance based upon private property, as the products of free software contribute to the commons, resist attempts at monopoly control and encourage contributors to act socially.

    Marx described how, through the employment of workers, investors in capitalist businesses were able to amass wealth and thus power. The employer invests an amount of money into a business to employ labour, and the labourer creates some good, be it tangible or intangible. The labourer is then paid for this work, and the company owner takes the good and sells it at some higher price, to cover other costs and to provide a profit. The money the labourer is paid is for the “necessary labour” (Marx, 1976a, p. 325), i.e. the amount the person requires to reproduce labour, that is, the smallest amount possible to ensure the worker can live, eat, house themself, work fruitfully and produce offspring who will do similar. The difference between this amount and the amount the good sells for, minus other costs, which are based upon the labour of other workers, is the “surplus value”, and equals the profit to the employer (Marx, 1976a, p. 325). The good is then sold to a customer, who thus enters into a social relationship with the worker who made it. However, the customer has no knowledge of the worker: they do not know the conditions they work under, their wage, their name or any other information about them; their relationship is mediated entirely through the commodity which passes from producer to consumer. Thus, despite the social relationship between the two, they are alienated from each other, and the relationship is represented through a commodity object, which is thus fetishised over the actual social relationship (Marx, 1976a, chap. 1). The worker is further alienated from the product of their labour, for which they are not fully recompensed, as they are not paid the full exchange amount which the capitalist company obtains, and have no control over any part of the commodity beyond the work they put in.

    If we study the reasons participants have for contributing to free software projects, coders fall into one or more of the following three categories: firstly, coders who contribute to create something of utility to themselves, secondly, those who are paid by a company which employs them to write code in a traditional employment relationship, and finally those who write software without economic compensation, to benefit the commons (Hars & Ou, 2001). The first category does not enter into a relationship with others, so the system of capitalist exchange does not need to be considered. The second category, that of a worker being paid to contribute to a project, might seem unusual, as the company appears to be giving away the result of capital investment, thus benefiting competitors. Although this is indeed the case, the value gained in other contributors viewing, commenting on and fixing the code is perceived to outweigh any disadvantages. In the case of a traditional employee of a capitalist company, the work, be it production of knowledge, carrying out of a service or making a tangible good, will be appropriated by the company the person works for, and credited as its own. The work is then sold at some increased cost, the difference between the cost to make it and the cost it is sold for being surplus labour, which reveals itself as profit.

    The employed software coder working on a free software project performs necessary labour (Marx, 1976a, p. 325), as any other employee does, and this is rewarded with a wage. However, the surplus value, which nominally is used to create profit for the employer by them appropriating the work of the employee, is not solely controlled by the capitalist. Due to the nature of the license, the product of the necessary and surplus labour can be taken, used and modified by any other person, including the worker. Thus, the traditional relationship of the commons to the capitalist is changed. The use of paid workers to create surplus value is an example of the capitalist taking the commons and re-appropriating it for their own gain. However, as the work is given back to the commons, there is an argument that the employer has instead contributed to the wider sphere of human knowledge, without retaining monopoly control as the traditional copyright model does. Further, the worker is not alienated by their employer from the product of their labour; it is available for them to use as they see fit.

    The final category of contributors to a project, volunteers, are generally also highly skilled, well paid, and materially comfortable in life. According to Maslow’s Hierarchy of Needs (Maslow, 1943), as individuals attain the material comforts in life, so they are likely to turn their aspirations towards less tangible but more fulfilling achievements, such as creative pursuits. Some will start free software projects of their own, as some people will start capitalist businesses: the Linux operating system kernel, the GNU operating system and the Diaspora* [sic] distributed social networking software are examples of this situation. If a project then appears successful to others, it will gain new coders, who will lend their assistance and improve the software. The person(s) who started the project are acknowledged as the leader(s), and often jokingly referred to as the “benevolent dictator for life” (Rivlin, 2003), although their power is contingent, because as Raymond put it, “the culture’s ‘big men’ and tribal elders are required to talk softly and humorously deprecate themselves at every turn in order to maintain their status.” (2002). As leaders, they will make the final decision of what code goes into the ‘official’ releases, and be recognised as the leader in the wider free software community.

    Although there may be hundreds of coders working on a project, as there is an easily identifiable leader, he or she will generally receive the majority of the credit for the project. Each coder will carry out enough work to produce the piece of code they wish to work on, thus producing a useful addition to the software. As suggested above by Maslow, the coder will gain symbolic capital, defined by Bourdieu as “the acquisition of a reputation for competence and an image of respectability” (1984, p. 291) and as “predisposition to function as symbolic capital, i.e., to be unrecognized as capital and recognized as legitimate competence, as authority exerting an effect of (mis)recognition … the specifically symbolic logic of distinction” (Bourdieu, 1986). This capital will be attained through working on the project, and being recognised by: other coders involved in the project and elsewhere; the readers of their blog; their friends and colleagues; and they may occasionally be featured in articles on technology web news sites (KernelTrap, 2002; Mills, 2007). Each coder adds their piece of effort to the project, gaining enough small acknowledgements for their work along the way to feel they should continue coding, which could be looked at as necessary labour (Marx, 1976a, p. 325). Contemporaneously, the project leader gains a smaller acknowledgement for the improvements to the project as a whole, which in the case of a large project can be significant over time. In the terms expressed by Marx, although the coder carries out a certain amount of work, it is then handed over to the project, represented in the eyes of the public by the leader, who accrues similar small amounts of capital from all coders on the project. This profit is surplus value (Marx, 1976a, p. 325). Similarly to the employed coder, the economic value of the project does not belong to the leaders; there is no surplus extracted there, as all can use it.

    To take a concrete example, Linus Torvalds, originator and head of the Linux kernel, is known for his work throughout the free software world, and feted as one of its most important contributors (Veltman, 2006, p. 92). The perhaps surprising part of this is that Torvalds does not write code for the project any more; he merely manages others, and makes grand decisions as to which concepts, not actual code, will be allowed into the mainline, or official, release of the project (Stout, 2007). Drawing a parallel with a traditional capitalist company, Linus can be seen as the original investor who started the organisation, who manages the workers, and who takes a dividend each year, despite not carrying out any productive work. Linus’ original investment in 1991 was economic and cultural capital, in the form of time and a part-finished degree in computer science (Calore, 2009). While he was the only contributor, the project progressed slowly, and the originator gained symbolic, social and cultural capital solely through his own efforts, thus resembling a member of the petite bourgeoisie. As others saw the value in the project, they offered small pieces of code to solve small problems and progress the code. These were incorporated, thus rapidly improving the software, and the standing of Torvalds.

    Like consumers of any other product, users of Linux will not be aware of who made a specific change unless they make an effort to read the list of changes for each release, thus resulting in the coder being alienated from the product of their labour and the users of the software (Marx, 1959, p. 29), who fetishise (Marx, 1976a, chap. 1) the software over the social relationship which should be prevalent. For each contribution, which results in a small gain in symbolic capital to the coder, Linus takes a smaller gain in those forms of capital, in a way analogous to a business investor extracting surplus economic capital from her employees, despite not having written the code in question. The capitalist investor possesses no particular values, other than to whom and where she was born, yet due to the capital she is able to invest, she can amass significant economic power from the work of others. Over 18 years, these small gains in capital have also added up for Linus Torvalds, and such is now the symbolic capital expropriated that he is able to continue extracting this capital from Linux, while reinvesting capital in writing code for other projects, in this case ‘Git’ (Torvalds, 2005), which has attracted coders in part due to the fame of its principal architect. The surplus value of the coders on this project is also extracted and transferred to the nominal leader, and so the cycle continues, with the person at the top continuously and increasingly benefiting from the work of others, at their cost.

    The different forms of capital can readily be exchanged for one another. As such, Linus has been offered book contracts (Torvalds, 2001), is regularly interviewed for a range of publications (Calore, 2009; Rivlin, 2003), has gained jobs at high prestige technology companies (Martin Burns, 2002), and been invited to various conferences as guest speaker. The other coders on the Linux project have also gained, through skills learned, social connections and prestige for being part of what is a key project in free software, although none in the same way as Linus.

    Free software is constructed in such a way as to allow a range of choices to address most needs, for instance in the field of desktop operating systems there are hundreds to choose from, with around six distributions, or collections of software, covering the majority of users, through being recognised as well-supported, stable and aimed at the average user (, 2011). In order for the leaders of each of these projects to increase their symbolic capital, they must continuously attract new users, be regularly mentioned in the relevant media outlets and generally be seen as adding to the field of free software, contributing in some meaningful way. Doing so requires a point-of-difference between their software and the other distributions. However, this has become increasingly difficult, as the components used in each project have become increasingly stable and settled, so the current versions of each operating system will contain virtually identical lists of packages. In attempting to gain users, some projects have chosen to make increasingly radical changes, such as including versions of software with new features even though they are untested and unstable (Canonical Ltd., 2008), and changing the entire user experience, often negatively as far as users are concerned (Collins, 2011). Although this keeps the projects in the headlines on technology news sites, and thus attracts new users, it turns off experienced users, who are increasingly moving to more stable systems (Parfeni, 2011).

    This proliferation of systems, declining opportunities to attract new users, and increasingly risky attempts to do so, demonstrates the tendency of the rate of profit to fall, and the efforts capitalist companies go to in seeking new consumers (Marx, 1976b, chap. 3), so they can continue extracting increased surplus value as profit. Each project must put in more and more effort, in increasingly risky areas, thus requiring increased maintenance and bug-fixing, to attract users and be appreciated in the eyes of others.

    According to Hardt and Negri, since the Middle Ages, there have been three economic paradigms, identified by the three forms from which profit is extracted. These are: land, which can be rented out to others or mined for minerals; tangible, movable products, which are manufactured by exploited workers and sold at a profit; and services, which involve the creation and manipulation of knowledge and affect, and the care of other humans, again by exploited workers (2000, p. 280). Looking more closely at these phases, we can see a procession. The first phase relied mainly upon the extraction of profit from raw materials, such as the earth itself, coal and crops, with little if any processing by humans. The second phase still required raw materials, such as iron ore, bauxite, rubber and oil, but also required a significant amount of technical processing by humans to turn these materials into commodities which were then sold, with profit extracted from the surplus labour of workers. Thus the products of the first phase were important in a supporting role to the production of the commodities, in the form of land for the factory, food for workers, fuel for smelters and machinery, and materials to fashion, but the majority of the value of the commodity was generated by activities resting on these resources, the working of those raw materials into useful items by humans. The latter of the phases listed above, the knowledge, affect and care industry, entails workers collecting and manipulating data and information, or performing some sort of service work, which can then be rented to others. Again, this phase relies on the other phases: from the first phase, land for offices, data centres, laboratories, hospitals, financial institutes, and research centres; food for workers, fuel for power; plus from the second phase: commodities including computers, medical equipment, office supplies, and laboratory and testing equipment, to carry out the work. Similarly to the previous phase, these materials and items are not directly the source of the creation of profit, but are required, the generation of profit relies and rests on their existence.

    In the context of IT, this change in the dominant paradigm was most aptly demonstrated by the handover of power from the mighty IBM to new upstart Microsoft in 1980, when the latter retained control over their operating system software MS-DOS, despite the former agreeing to install it on their new desktop computer range. The significance of this apparent triviality was illustrated in the film ‘Pirates of Silicon Valley’, during a scene depicting the negotiations between the two companies, in which everyone but Bill Gates’ character froze as he broke the ‘fourth wall’, turning to the camera and explaining the consequences of the mistake IBM had made (Burke, 1999). IBM, the dominant power in computing at the time, were convinced high profit continued to lie in physical commodities, the computer hardware they manufactured, and were unconcerned by the lack of ownership of the software. Microsoft recognised the value of immaterial labour, and soon eclipsed IBM in value and influence in the industry, a position which they held for around 20 years.

    Microsoft’s method of generating profit was to dominate the field of software, their products enabling users to create, publish and manipulate data, while ignoring the hardware, which was seen as a commodity platform upon which to build (Paulson, 2010). Further, the company wasn’t particularly interested in what its customers were doing with their computers, so long as they were using Windows, Office and other technologies to work with that data, as demonstrated by a lack of effort to control the creation or distribution of information. As Microsoft were increasing their dominance, the free software GNU Project was developing a free alternative, firstly to the Unix operating system (Stallman, 2010, p. 9), and later to Microsoft products. Fuelled by the rise in highly capable, cost-free software which competed with and undercut Microsoft, so commoditising the market, the dominance of that company faded in the early 2000s (Ahmad, 2009), to be replaced by a range of companies which built on the products of the free software movement, relying on the use value, but no longer having any interest in the exchange value, of the software (Marx, 1976a, p. 126). The power Microsoft retains today through its desktop software products is due in significant part to ‘vendor lock-in’ (Duke, n.d.), the process of using closed standards, only allowing their software to interact with data in ways prescribed by the vendor. Google, Apple and Facebook, the dominant powers in computing today, would not have existed in their current form were it not for various pieces of free software (Rooney, 2011). Notably, the prime method of profit making of these companies is through content, rather than via a software or hardware platform. Apple and Google both provide platforms, such as the iPhone and Gmail, although neither company makes large profits directly from these platforms, sometimes to the point of giving them away, subsidised heavily via their profit-making content divisions (Chen, 2008).

    Returning to the economic paradigms discussed by Hardt and Negri, we have a series of sub-phases, each building on the sub-phase before. Within the third, knowledge, phase, the first sub-phase of IT, computer software, such as operating systems, web servers and email servers, was a potential source of high profits through the 1980s and 1990s, but due to high competition, predominantly from the free software movement, the rate of profit has dropped considerably, with for instance the free software ‘Apache’ web server being used to host over 60% of all web sites (Netcraft Ltd., 2011). Conversely, the capitalist companies from the next sub-phase were returning high profits and growth, through extensive use of these free products to sell other services. This sub-phase is noticeable for its reliance on creating and manipulating data, rather than producing the tools to do so, although both still come under the umbrella of knowledge production. This trend was mirrored in the free software world, as the field of software stabilised, thus realising fewer opportunities for increasing one’s capital through the extraction of surplus in this area.

    As the falling rate of profit reduced the potential to gain symbolic capital through free software, so open data projects, which produce large sets of data under open licences, became more prevalent, providing further areas for open content contributors to invest their capital. These initially included Wikipedia, the web-based encyclopedia which anyone can edit, in 2001 (“Wikipedia:About,” n.d.). Growth of this project was high for several years, with a large number of new editors joining, but has since slowed to the point where attracting new editors is very difficult (Chi, 2009; Moeller & Zachte, 2009). Similarly, OpenStreetMap, which aims to map the world, was begun in 2004, and grew at a very high rate once it became known in the mainstream technology press. However, now that the majority of streets and significant geographical data in western countries are mapped, the project is finding it difficult to attract new users, unless they are willing to work on adding increasingly esoteric minutiae, which has little obvious effect on the map, and thus provides a less obvious gain in symbolic capital for the user (Fairhurst, 2011). For the leaders of the project, this represents higher and higher effort to be put in, for comparatively smaller returns; again, the rate of profit is falling. Rather than the previous, relatively passive method of attracting new users and expanding into other areas, the project founders and leading lights are now aggressively pushing the project to map less well-covered areas, such as a recent effort in a slum in Africa (Map Kibera, 2011); starting a sub-group to create maps in areas such as Haiti, to help out after natural disasters (Humanitarian OpenStreetMap Team, 2011); and providing economic grants for those who will map in less-developed countries (Black, 2008). This closely follows the capitalist need to seek out new markets and territories, once all existing ones are saturated, to continuously push for more growth, to arrest the falling rate of profit.

    According to Hardt and Negri,

    You can think and form relationships not only on the job but also in the street, at home, with your neighbors and friends. The capacities of biopolitical labor-power exceed work and spill over into life. We hesitate to use the word “excess” for this capacity because from the perspective of society as a whole it is never too much. It is excess only from the perspective of capital because it does not produce economic value that can be captured by the individual capitalist. (2011)

    The capitalist mode of production brings organisational structure to the production of value, but in doing so fetters the productivity of the commons; that productivity is higher when capital stays external to the production process. This hands-off approach to managing production can be seen extensively in free software, through the self-organising, decentralised model it utilises (Ingo, 2006, p. 38), eschewing traditional management forms with chains of responsibility. Economic forms of capital are prevalent in free software, as when technology companies including advertising provider Google, software support company Red Hat and software and services provider Novell employ coders to commit code to various projects such as the Linux kernel (The Linux Foundation, 2009). However, the final decision of whether the code is accepted is left up to the project itself, which is usually free of corporate management. There are numerous, generally temporary exceptions to this rule, including OpenOffice.org, the free software office suite, which was recently acquired by software developer Oracle. Within a few months of the acquisition, the number of senior developers involved in the project dropped significantly, most of them citing interference from Oracle in the management of the software, and those who left set up their own fork of the project, based on the Oracle version (Clarke, 2010). Correspondingly, a number of software collections also stopped including the Oracle software, and instead used the version released by the new, again community-managed, offshoot (Sneddon, 2010). Due to the license which OpenOffice.org is released under, all of Oracle’s efforts to take direct control of the project were easily sidestepped. Oracle may possess the copyright to all of the original code, through purchasing the project, but this comes to naught once that code is released: it can be taken and modified by anyone who sees fit.

    This increased productivity of the commons can be seen in the response to flaws in the software: as there is no hierarchical structure enforced by, for example, an employment contract, problems reported by users can be, and are, taken on by volunteer coders who will work on the flaw until it is fixed, without needing to consult line managers or align with a corporate strategy. If the most recognised source for the software does not respond quickly, whether for financial or technical reasons, the nature of the licence means other coders are able to fix the problem, including those hired by customers. For those not paid, symbolic capital continues to play a part here: although the coders may appear to be unpaid volunteers, in reality there is kudos to be gained by solving a problem quickly, pushing coders to compete against each other, even while sharing their advances.

    Despite this realisation that capital should not get too close to free software, the products of free software are still utilised by many corporates: free software forms the key infrastructure for a high proportion of web servers (Netcraft Ltd., 2011), and is extensively used in mobile phones (Germain, 2011) and financial trading (Jackson, 2011). The free software model thus forms a highly effective method for producing efficient software useful to capital. The decentralised, hard-to-control model disciplines capital into keeping its distance, forcing corporations to realise that if they get too close, try to control too much, they will lose out by wasting resources and appearing as bad citizens of the free software community, thus losing symbolic capital in the eyes of potential investors and customers.


    The preceding analysis of free software and its relationship to capitalism demonstrates four areas in which the former is relevant to the latter.

    Firstly, free software claims to form a part of the commons, and to a certain extent, this is true: the data and code in the projects are licensed in a way which allows all to take benefit from using them, they cannot be monopolised, owned and locked-down as capitalism has done with the tangible assets of the commons, and many parts of the intangible commons. Further, it appears that not only is free software not enclosable, but whenever any attempt to control it is exerted by an external entity, the project radically changes direction, sheds itself of regulation and begins where it left off, more wary of interference from capital.

    Secondly, however, the paradigm of free software shows that ownership of the thing is not necessarily required to extract profit with it; there are still opportunities for the capitalist mode of accumulation despite this lack of close control. The high quality, efficient tools provided by free software are readily used by capitalist organisations to sell and promote other intangible products, and to manipulate various forms of data, particularly financial instruments, a growth industry in modern knowledge capitalism, at greater margins than had free software not existed. This high quality is due largely to the aforementioned ability of free software to keep capital from taking a part in its development, given capital’s apparent inefficiency at managing the commons.

    Thirdly, although free software cannot be owned and controlled as physical objects can, thus apparently foiling the extraction of surplus value as economic profit from alienated employees, the nominal leaders of each free software project appear to take a significant part of the credit for the project they steer, thus extracting symbolic capital from other, less prominent coders of the project. This is despite not being involved in much, or in some cases any, of the actual code-writing, thus mirroring the extraction of profit through surplus labour adopted by capitalism.

    Finally, the tendency of the rate of profit to fall seems to pervade free software in the same way as it affects capitalism. Certain free software projects have been shown to have difficulty extracting profit, in the form of surplus symbolic capital, and this in turn, has caused a turn to open data, which initially showed itself to be an area with potentiality for growth and profit, although it too has now suffered the same fate as free software.


    Ahmad, A. (2009). Google beating the evil empire | Malay Mail Online. Retrieved November 3, 2011, from

    Black, N. (2008). CloudMade » OpenStreetMap Grants. Retrieved October 29, 2011, from

    Bollier, D. (2002). Reclaiming the Commons. Retrieved November 3, 2011, from

    Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge & Kegan Paul.

    Bourdieu, P. (1986). The Forms of Capital. Retrieved November 5, 2011, from

    Burke, M. (1999). Pirates of Silicon Valley.

    Calore, M. (2009). Aug. 25, 1991: Kid From Helsinki Foments Linux Revolution | This Day In Tech. Retrieved November 5, 2011, from

    Canonical Ltd. (2008). “firefox-3.0” source package : Hardy (8.04) : Ubuntu. Retrieved October 29, 2011, from

    Chen, J. (2008). AT&T’s 3G iPhone Is $199 This Summer | Gizmodo Australia. Retrieved November 3, 2011, from

    Chi, E. H. (2009, July 22). PART 1: The slowing growth of Wikipedia: some data, models, and explanations. Augmented Social Cognition Research Blog from PARC. Retrieved November 3, 2011, from

    Clarke, G. (2010). OpenOffice files Oracle divorce papers • The Register. Retrieved October 30, 2011, from

    Collins, B. (2011). Ubuntu Unity: the great divider | PC Pro blog. Retrieved October 24, 2011, from

    (2011). Put the fun back into computing. Use Linux, BSD. Retrieved October 30, 2011, from

    Duke, O. (n.d.). Open Sesame | Love Learning. Retrieved November 3, 2011, from

    Fairhurst, R. (2011). File:Osmdbstats8.png – OpenStreetMap Wiki. Retrieved October 29, 2011, from

    Free Software Foundation. (2010). The Free Software Definition – GNU Project – Free Software Foundation. Retrieved August 29, 2011, from

    Germain, Ja. M. (2011). Linux News: Android: How Linuxy Is Android? Retrieved October 29, 2011, from

    Hardt, M., & Negri, A. (2000). Empire. Cambridge, Mass: Harvard University Press.

    Hardt, M., & Negri, A. (2011). Commonwealth. Cambridge, Massachusetts: Belknap Press of Harvard University Press.

    Hars, A., & Ou, S. (2001). Working for Free? – Motivations of Participating in Open Source Projects. Hawaii International Conference on System Sciences (Vol. 7, p. 7014). Los Alamitos, CA, USA: IEEE Computer Society. doi:

    Humanitarian OpenStreetMap Team. (2011). Humanitarian OpenStreetMap Team » Using OpenStreetMap for Humanitarian Response & Economic Development. Retrieved November 3, 2011, from

    Ingo, H. (2006). Open Life: The Philosophy of Open Source. (S. Torvalds, Trans.). Retrieved from

    Jackson, J. (2011). How Linux mastered Wall Street | ITworld. Retrieved October 29, 2011, from

    KernelTrap. (2002). Interview: Andrew Morton | KernelTrap. Retrieved October 30, 2011, from

    Map Kibera. (2011). Map Kibera. Retrieved October 29, 2011, from

    Burns, M. (2002). Where all the Work’s Hiding. Retrieved October 30, 2011, from

    Marx, K. (1959). Economic & Philosophic Manuscripts. (M. Mulligan, Trans.). Retrieved November 3, 2011, from

    Marx, K. (1976a). Capital: A Critique of Political Economy (Vol. 1). Harmondsworth: Penguin Books in association with New Left Review.

    Marx, K. (1976b). Capital: A Critique of Political Economy. The Pelican Marx library (Vol. 3). Harmondsworth: Penguin Books in association with New Left Review.

    Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370-396.

    Mills, A. (2007). Why I quit: kernel developer Con Kolivas. Retrieved October 30, 2011, from

    Moeller, E., & Zachte, E. (2009). Wikimedia blog » Blog Archive » Wikipedia’s Volunteer Story. Retrieved November 3, 2011, from

    Netcraft Ltd. (2011). May 2011 Web Server Survey | Netcraft. Retrieved October 29, 2011, from

    Parfeni, L. (2011). Linus Torvalds Drops Gnome 3 for Xfce, Calls It “Crazy” – Softpedia. Retrieved October 29, 2011, from

    Paulson, R. (2010). Application of the theoretical tools of the culture industry to the concept of free culture. Retrieved October 25, 2010, from

    Raymond, E. S. (2002). Homesteading the Noosphere. Retrieved June 3, 2010, from

    Rivlin, G. (2003, November). Wired 11.11: Leader of the Free World. Retrieved from

    Rooney, P. (2011). IT Management: Red Hat CEO: Google, Facebook owe it all to Linux, open source. IT Management. Retrieved October 25, 2011, from

    Sneddon, J. (2010). LibreOffice – Google, Novell sponsored OpenOffice fork launched. Retrieved October 29, 2011, from

    Stallman, R. (2010). Free Software Free Society: Selected Essays of Richard M. Stallman. (J. Gay, Ed.) (2nd ed.). Boston, MA: GNU Press, Free Software Foundation.

    Stout, K. L. (2007). Reclusive Linux founder opens up – May 18, 2006. Retrieved October 30, 2011, from

    The Linux Foundation. (2009). Linux Kernel Development. Retrieved from

    Torvalds, L. (2001). Just For Fun: The Story of an Accidental Revolutionary. London: Texere.

    Torvalds, L. (2005). “Re: Kernel SCM saga..” – MARC. Retrieved from

    Veltman, K. H. (2006). Understanding new media: augmented knowledge & culture. University of Calgary Press.

    Wikipedia:About. (n.d.). Wikipedia. Retrieved October 29, 2011, from

    26 October 2011

    Colin Jackson

    Retaking the Net

    This Saturday (29th October 2011) is the RetakeTheNet Bar Camp in the Wellington Town Hall.

    I’ve talked about RtN before. It’s a group of people who are uncomfortable about the extent of control of the Net being exerted by governments and companies, and who want to do concrete things to improve the situation. This last point is the kicker – anyone can yell a bit, but doing actual projects is a lot harder. We are trying to use the features of the Net that have made it so successful, its openness and its innovation culture, to find ways to do things more freely.

    The bar camp is for people to come and contribute ideas, meet some fantastic people, and just maybe get energized enough to actually do stuff. There will be sessions through the day starting at 10am (best get there a bit early) and going on until an after party, starting around 4:30.

    There are going to be some very cool people there. And, you never know, we just might make a difference! Come if you want to be part of that.

    12 October 2011

    Robin Paulson

    University Without Conditions has launched

    Our Free University, the University Without Conditions had its first meeting on Saturday, October the 8th.

    We talked through various issues, including what our University will be, courses we will hold, and a rough idea of principles.  These principles will be made concrete over the next few weeks.  In the meantime, we have decided on our first event; it will be an Equality Forum, to be held as part of the Occupy Auckland demonstration and occupation on October the 15th at Aotea Square.

    All are welcome to attend the first event on the 15th, suggest courses via the website, or join the discussion list to take part in creating our University.

    If you would like to be involved in the set-up, please ask for an account to create posts.

    For more information, see the website: or

    18 May 2011

    Andrew Caudwell

    Show Your True Colours

    This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

    A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in Bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides 256 different shades of grey to play with.

    When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):


    Those aren’t compression artifacts you’re seeing!

    Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity instead. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
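
    If you want to see what the fuzzy sampling buys you, here is a rough illustration of the idea in Python with numpy. This is not Gource’s actual shader code (that lives in GLSL); it is just a sketch of why jittering a dim gradient before quantising it to 8 bits trades wide visible bands for fine noise, and the gradient values are made up for the example.

    import numpy as np

    # A dim gradient: only ~26 of the 256 available shades are in range,
    # so plain quantisation produces wide, visible bands.
    width = 1024
    gradient = np.linspace(0.0, 0.1, width)

    banded = np.round(gradient * 255) / 255.0

    # "Fuzzy" sampling: jitter each sample by up to half a colour step
    # before quantising. The same 8-bit palette is used, but the hard
    # band edges are replaced by fine noise the eye averages out.
    noise = (np.random.rand(width) - 0.5) / 255.0
    diffused = np.round((gradient + noise) * 255) / 255.0

    print("shades used:", np.unique(banded).size, "vs", np.unique(diffused).size)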


    The other improvement is speed – everything is now drawn with VBOs, large batches of object geometry passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU using the same geometry as used for the lit pass – making them really cheap compared to before, when we effectively wore the cost of having to draw the whole scene twice.

    Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
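
    The trick reads roughly like the sketch below – written here in Python/numpy purely to show the compositing, since the real version is a GLSL fragment shader inside Gource; the function name and the blend factor are illustrative assumptions.

    import numpy as np

    def text_with_shadow(coverage, colour=(1.0, 1.0, 1.0), shadow=0.6):
        """coverage: 2D array in [0, 1], the font-texture alpha for the glyph."""
        # Second "sample" of the same texture, offset by one pixel in x and y.
        offset = np.zeros_like(coverage)
        offset[1:, 1:] = coverage[:-1, :-1]
        # Blend: the offset sample becomes a black drop shadow underneath the
        # normally coloured text, all in a single pass over the pixels.
        out = np.zeros(coverage.shape + (4,))
        out[..., 3] = np.clip(coverage + offset * shadow, 0.0, 1.0)   # alpha
        for c in range(3):
            out[..., c] = coverage * colour[c]                        # premultiplied colour
        return out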

    30 March 2011

    Vik Olliver

    31-Mar-2011 AM clippings


    Chapman Tripp and its allies are casting FUD at New Zealand's upcoming ban on software patents. The MED is not impressed:

    Two more Registration Authorities have had their SSL signing keys compromised. Web of trust fail:

    Acer lines up a dual-screen tablet PC as Microsoft waits for the tablet fad to pass:

    The first commercially viable nanogenerator, a flexible chip turning body movement into power, is shown to the American Chemical Society:

    And finally. Atomic wristwatches go out of kilter all over Japan. Not from radiation; the radio sync transmitter is 16 km from the Daiichi:

    Vik :v) Diamond Age Solutions Ltd.

    28 March 2011

    Guy Burgess

    The software patent affair

    Law firm Chapman Tripp has published an article criticising the Government’s decision to exclude software from patentability. While the article makes some valid points, it does not deal with others fairly.

    The article claims:

    The [software patent] exclusion was the product of intense and successful lobbying by members of the “free and open source” software movement… In its April 2010 report to Parliament on the Patents Bill, the Commerce Select Committee acknowledged that the free software movement had convinced it that computer programs should be excluded from patentability.

    I’m sure this assertion of mighty lobbying power (the ability to sway an all-party, unanimous recommendation, no less) would be flattering to any professional lobbyist, let alone FOSS supporters – if only it were true (it is not evidenced in the Commerce Committee report). A range of entities made submissions against software patents, including the statutorily independent University of Otago, InternetNZ, and a number of small businesses (and my independent self, I modestly add). There were also submissions the other way, though interestingly most of the submissions in favour of retaining software patents were from patent attorney law firms. It is also notable that other organisations, including NZICT, which is a strong supporter of software patents and engaged in heavy after-the-event lobbying, did not make any submissions on the issue.

    The article adds the comment:

    The Committee said that “software patents can stifle innovation and competition, and can be granted for trivial or existing techniques”. The Committee provided no analysis or data to support that proposition.

    The fact that a Committee “provided no analysis or data” to support its recommendations is hardly noteworthy – that is not its job. Submitters provide analysis and data to the Committee, not the other way around. The material in support of the proposition is in the submissions.

    The article sets up an unfair straw-man argument:

    Free software proponents reckon that software should be free and, as a result, they generally oppose intellectual property rights. They say that IP rights lock away creativity and technology behind pay-walls which smother innovation. Most authors, inventors and entrepreneurs take the opposite view.

    I don’t claim to know what “free software proponents'” views on all manner of IP rights are, but when it comes to software patents in New Zealand, the evidence strongly suggests that the “authors, inventors and entrepreneurs” of software (FOSS or not) are opposed to software patents (see my posts here and here). This includes major companies, including NZ’s biggest software exporter Orion Health (see Orion Health backs moves to block patents).

    While the New Zealand Computer Society poll showing 81% member support for the exclusion is not scientific, it is at least indicative. In any case, opponents of the new law (mainly law firms) have consistently asserted a high level of opposition to the exclusion without any evidence to support that view.

    The article leads to the warning:

    If New Zealand enacted an outright ban on computer-implemented inventions we would be breaking international law. … Article 27(1) of TRIPs says that WTO members must make patents available for inventions “without discrimination as to… the field of technology…”.

    The authors rightly point out that breaching TRIPs could result in legal action against the Government by another country. However, that conclusion is premised on the basis that software is an “invention”. A number of processes and outcomes are not recognised as inventions for the purpose of patent law in different countries, including mathematical algorithms and business methods. The question of whether software is (or should be) an invention was commented on by a Comptroller-General of the UK Patent Office:

    Some have argued that the TRIPS agreement requires us to grant patents for software because it says “patents shall be available for any inventions … in all fields of technology, provided they are…..capable of industrial application”. However, it depends on how you interpret these words.

    Is a piece of pure software an invention? European law says it isn’t.

    The New Zealand Bill does not say that a computer program is an invention that is not patentable. It says, quite differently, that a computer program is “not a patentable invention”, along with human beings, surgical methods, etc.

    Article 27 has reportedly rarely been tested (twice in 17 years), and never in relation to software. The risk of possibly receiving a complaint under a provision (untested) of a multilateral agreement is not new. The New Zealand Law Society notes this in its submission on the Patents Bill (which does not address software patents):

    The proposal to exclude plant varieties under [the new Act] is because New Zealand has been in technical breach of the 1978 Union for the Protection of New Varieties of Plants (UPOV) treaty since it acceded to it in November 1981.

    What’s 30 years of technical breach between friends? Therefore, in fairness I would add a “third way” of dealing with the software patent exclusion: leave it as it is, and see how it goes (which is, after all, what the local industry appears to want). As I wrote last year, “Pressure to conform with international norms (if one emerges) and trading partner requirements may force a change down the track, but the New Zealand decision was born of widely supported policy …”

    If the ban on software patents as it currently stands does not make it into law (which is a possibility, despite clear statements from the Minister of Commerce that it will), it won’t be the end of the world. In fact, it will be the status quo. There are pros and cons to software patents, and the authors are quite right that New Zealand will be going out on a limb by excluding them. The law can be changed again if need be. In the meantime, I refer again (unashamed self-cite) to my article covering the other, and much more popular, ways of protecting and commercialising software.

    27 March 2011

    Vik Olliver

    28-Mar-2011 AM clippings


    Microsoft seeks US state laws requiring customers of companies that use pirated software to pay the penalty (proprietary s/w only):

    Rumours abound that Amazon is about to release its own Android tablet, and ebook makers start to turn to Android too:

    An attacker broke into the Comodo Registration Authority (RA) based in Southern Europe and issued fraudulent SSL certificates:

    The first processor is printed on a sheet of plastic. Well, two sheets. One for the CPU, one for the code:

    And finally. What do you do if a grizzly attacks you while you're stoned? Why, you claim ACC of course:

    Vik :v) Diamond Age Solutions Ltd.

    22 March 2011

    Vik Olliver

    23-Mar-2011 AM clippings


    The UK prepares to introduce a national IP blocking system in the name of preventing piracy of movies and music. For now:

    A look at the new features in the recently released Firefox 4 web browser:

    A new 3D nanostructure for lithium and NiMH batteries allows very rapid charging - an electric vehicle in 5 mins if you have the amps:

    Quantum computing should hit the magic 10 qubit level this year. That's where it starts to surpass some standard computing techniques:

    A discussion with a scientist who is actually building things atom by atom and his take on when it will be possible to make machines:

    And finally. A facebook app that peels the clothes off people in your friends' pictures. Works with guys, girls and sad people:

    Vik :v) Diamond Age Solutions Ltd.

    06 October 2010

    Andrew Caudwell

    New Zealand Open Source Awards

    I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

    I’m currently taking a working holiday to make some progress on a short film presentation of Gource for the Onward! conference.

    Update: here’s the video presented at Onward!:

    Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

    16 August 2010

    Glynn Foster

    World’s First Pavlova Western

    Many months ago, I was lucky to be involved in the shooting of a western film – more appropriately, the world’s first Pavlova Western. Most people will be familiar with the concept of a Spaghetti Western, but now Mike Wallis (my brother-in-law) and his fiancée Inge Rademeyer from Mi Films have extended that concept to New Zealand.

    They are currently in post-production mode bringing all the pieces together, including an incredible music score from John Psathas (recently awarded Officer of the New Zealand Order of Merit for his Athens Olympics work). Jamie Selkirk (who received an Academy Award for his work on the Lord of the Rings trilogy) has also come on board to give them financial support to put the film through the final stages at Weta’s Park Road Post Production studios.

    And to top it all off, last week they appeared on TV One’s Close Up. Check out the following video –


    You can check their progress on the Facebook Pavlova Western group and the Pavlova Western blog.

    15 July 2010

    Guy Burgess

    Software patents to remain excluded

    The Government has cleared up the recent uncertainty about software patent reform by confirming that the proposed exclusion of software patents will proceed. A press release from Commerce Minister Simon Power said:

    “My decision follows a meeting with the chair of the Commerce Committee where it was agreed that a further amendment to the bill is neither necessary nor desirable.”

    During its consideration of the bill, the committee received many submissions opposing the granting of patents for computer programs on the grounds it would stifle innovation and restrict competition… The committee and the Minister accept this position.

    Barring any last-minute flip-flop – which is most unlikely given the Minister’s unequivocal statement – s15 of the new Patents Act, once passed, will read:

    15(3A) A computer program is not a patentable invention.


    It is clear that the lobbying by pro-software patent industry group NZICT was unsuccessful, although Computerworld reports that its CEO apparently still holds out hope that “[IPONZ] will clarify the situation and bring this country’s law into line with the position in Europe and the UK, where software patents have been granted”. Hope does indeed spring eternal: the exclusion is clear and leaves no room for IPONZ to “clarify” it to permit software patents (embedded software is quite different – see below).

    As I wrote earlier, it remains a mystery as to why NZICT, a professional and funded body, failed to make a single submission on the Patents Act reform process – they only had 8 years to do so – but instead engaged in private lobbying after the unanimous Select Committee decision had been made. It also did not (and still does not) have a policy paper on the subject, nor did it mention software patents once in its 17 November 2009 submission on “New Zealand’s research, science and technology priorities”. It is not as though the software patent issue had not been signalled – it was raised in the very first document in 2002. Despite this silence, it claims that software patents are actually critical to the IT industry it says it represents.

    The New Zealand Computer Society, on the other hand, did put in a submission and has articulated a clear and balanced view representing the broader ICT community. It said today that “we believe this is great news for software innovation in New Zealand”.

    Left vs right?

    Is there a political angle to this? While some debate has presumed an open-vs-proprietary angle (a false premise), some people I have chatted with have seen it as a left-vs-right issue, something Stephen Bell also alluded to (in a different context) in this interesting article.

    Thankfully, it appears not. The revised Patents Bill was unanimously supported by the Commerce Committee, comprising members of the National, Labour, Act, Green, and Maori parties. It reported to Commerce Minister Simon Power (National) and Associate Minister Rodney Hide (Act). Unlike the previous Government’s Copyright Act reform, post-committee industry lobbying has not turned the Government.

    What about business? NZICT apart, the exclusion of software patents has received the wide support of the New Zealand ICT industry, including (publicly) leading software exporters Orion Health and Jade, which as Paul Matthews notes represent around 50% of New Zealand’s software exports. The overwhelming majority of NZCS members support the change. Internationally, many venture capitalists and other non-bleeding-heart-liberal types have spoken out against software patents, on business grounds.

    Some pro-software patent business owners might be miffed at a perceived lack of support from National or Act, perhaps assuming that software patents are a “right” and are valuable for their businesses. The reality is that only a handful of New Zealand companies have New Zealand software patents (I did see a figure quoted somewhere – will try to find it). Yes, they can be valuable if you have them, but that is a separate issue (and remember, under the new Act no one loses existing patents). A capitalist, free market economy (and the less restrictive the better) abhors monopolies, and this decision benefits the majority of businesses in New Zealand. Strong IP protection – including patents – is essential in modern society (see my article “Protecting IP in a post-software patent environment”), but when the extent of statutory protection is reviewed it will always come down to a perceived balance: not just for the minority who hold a patent (a private monopoly), but for the much larger majority artificially prevented from competing and innovating by that monopoly.

    I have always taken pains to note, like NZCS, that there are pros and cons to software patents. And I am a fan of patents generally. Patents are good! But for software patents, the cons outweigh the pros. There are sound business reasons to exclude them. This specific part of the reform targets one specific area, has unanimous political party support (how rare is that?), and wide local business support. The last thing it can be seen as is an anti-business, left-wing policy (if it were, I’d have to oppose it!).

    Embedded software

    Inventions containing embedded software will remain, rightly, not excluded under the Patents Bill. Minister Power confirmed that IPONZ will develop guidelines for embedded software, which hopefully will set some clear parameters for applicants.

    Software is essential to many inventions, and while that software itself will not be patentable, the invention it is a component of still may be. Some difficult conceptual issues can arise, but in most cases I don’t expect they will. This “exception” (if it can be described as such) will not undermine the general exclusion for software patents.

    11 May 2010

    Guy Burgess

    Open source in government tenders

    Computerworld reports:

    A requirement that a component of a government IT tender be open-source has sparked debate on whether such a specification is appropriate.

    The relevant part of the RFP (for the State Services Commission) puts the requirement as follows:

    We are looking for an Open Source solution. By Open Source we mean:

    • Produce standards-compliant output;
    • Be documented and maintainable into the future by suitable developers;
    • Be vendor-independent, able to be migrated if needed;
    • Contain full source code. The right to review and modify this as needed shall be available to the SSC and its appointed contractors.

    The controversy is whether this is a mandate of open source licensing (which it isn’t). The government should not mandate open source licensing or proprietary licensing on commercial-line tenders. More precisely, it should not rule solutions in or out based on whether they are offered (to others) under an open source licence. The best options should be on the table.

    The four stated requirements are quite sensible. As the SSC spokesman said, there is nothing particularly unusual about them in government procurement. These requirements (or variations on them) are similarly common in private-sector procurement and development contracts. In the public sector in particular though, vendor independence and standards-compliance help avoid farcical situations like the renegotiation of the Ministry of Health’s bulk licensing deal.

    Open standards and interoperability in public sector procurement are gaining traction around the world. Recently, the European Union called for “the introduction of open standards and interoperability in government procurement of IT”. And in the recent UK election, all three of the main parties included open source procurement in their manifestos.

    So why the controversy in this case? Most likely it’s the perhaps inapt use of the term “open source” in the RFP (even though the intended meaning is clarified immediately afterwards). The term “open source” is a hot-button word that means many things to many people, but today it generally means having code licensed under a recognised open source licence, many of which are copyleft. Many vendors simply could not (or would never want to) licence their code under such a licence, and it would be uncommercial and somewhat capricious for a Government tender to rule out some (or even the majority of) candidates based on such criteria.

    However, it is clear that the SSC did not use the term in that context, and does not intend to impose such a requirement. An appropriate source-available licence is as capable of meeting the requirements as an open source licence (see my post on source available vs open source). The requirement for disclosure of code to contractors and future modification can be simply dealt with on standard commercial IP licensing terms.

    A level playing field for open and proprietary solutions is the essential starting point, with evaluation – which in most cases should include open standards and interoperability – proceeding from there.

    25 January 2010

    Glynn Foster

    While I catch a breath and write up some of my experiences of LCA2010 last week, the Australians are in full gear for their Great Australian Internet Blackout Campaign.

    From their website –

    What’s the problem?

    The Federal Government is pushing forward with a plan to force Internet Service Providers to censor the Internet for all Australians. This plan will waste millions of dollars and won’t make anyone safer.

    1. It won’t protect children: The filter isn’t a “cyber safety” measure to stop kids seeing inappropriate content such as R and X rated websites. It is not even designed to prevent the spread of illegal material where it is most often found (chat rooms, peer-to-peer file sharing).
    2. We will all pay for this ineffective solution: Under this policy, ISPs will be forced to charge more for consumer and business broadband. Several hundred thousand dollars has already been spent to test the filter – without considering high-speed services such as the National Broadband Network!
    3. A dangerous precedent: We stand to join a small club of countries which impose centralised Internet censorship such as China, Iran and Saudi Arabia. The secret blacklist may be limited to “Refused Classification” content for now, but what might a future Australian Government choose to block?

    Help turn the lights out on the proposed Internet filter by joining the Great Australian Internet Blackout.

    New Zealand was supported worldwide during its campaign against Section 92A – it's time to support our cousins in the west.

    14 January 2010

    Glynn Foster

    Come to LCA2010 Open Day

    We’ve got a great line up for LCA Open Day! Check out our great posters and pass them around your work, university, community group or government department!

    01 June 2009

    Gavin Treadgold

    Software for Disasters

    This is the original text I submitted to The Box feature on Disaster Tech on Tuesday the 2nd of June, 2009. It is archived here for my records. It also includes some additional content that didn’t make it to the print edition.

    On December 26, 2004, the Boxing Day tsunami killed more than 35,000 people and displaced over half a million people in Sri Lanka alone. A massive humanitarian crisis played out in numerous other countries also affected by the magnitude 9+ Great Sumatra-Andaman earthquake and resulting tsunami. Within days it became apparent that an information system was needed to manage the massive amounts of information being generated about who was doing what, and where – at one point there were approximately 1,100 registered NGOs operating in Sri Lanka.

    A group of Sri Lankan IT professionals decided that a system needed to be built to better manage the information, as they couldn't find any existing free solutions that could be quickly deployed. Free was critical, as they couldn't afford any commercial solutions.

    Sahana was implemented within a week by around four hundred IT volunteers, and it was named after the Sinhalese word for relief. Initially it provided tools for tracking missing persons, organisations involved in response, locations and details of camps set up in response to the tsunami, and a means of accepting requests for resources such as food, water and medicine.

    Following the tsunami, the Swedish International Development Agency provided funding to take the lessons learnt from writing and deploying software during a disaster, rebuild Sahana from the ground up, and release it as free and open source software to the world. After all, Sri Lanka had needed an open and available system to manage disaster information – surely other countries should benefit from their experience?

    Since 2005, Sahana has been officially deployed to earthquakes in Pakistan, Indonesia, China and Peru; a mudslide in the Philippines; and has been deployed in New York City as a preparedness measure to help manage storm evacuations.

    Being free and open source software has been critical to Sahana’s success. The more accessible a system is, the more likely it is to be adopted, used and improved. Even in developed countries, many disaster agencies are poorly funded and often cannot justify significant expenditure on systems – commercial systems are too expensive. With pressure being applied to many public budgets, the significance of this is even greater now. Perhaps the greatest benefit of applying open source approaches is that it encourages a collaborative and communal approach to improving the system. As more countries with experience in disaster management contribute to its development, this will also act as a form of expertise transfer to countries that may not have as much experience with disasters.

    Following Hurricane Katrina, nearly 50 websites were created to track missing and displaced persons – all using different systems, all collecting duplicate information, and few of them sharing. Many of the potential benefits of the technology were lost due to a lack of co-ordination and massive replication of data. Having tools such as Sahana ready in advance is more efficient, as they can be deployed faster than solutions developed after an event occurs.

    Normally, management involves a ‘leisurely’ process of collecting as much information as possible before deciding what actions should be taken. Immediately following a disaster the situation is the complete opposite: decisions have to be made, sometimes with little or no information and no time to gather it.

    A key benefit that IT can provide is in linking silos of information held by different organisations – everyone has a better shared picture of what has happened, what is occurring now, and what is planned.

    Software, however, is just one aspect. There is a need for open data (such as maps and statistics) and standards to ensure that the multitude of systems can connect to each other and share information.

    The most important aspect is having the relationships between organisations set up in advance of a disaster. This results in organisations having the confidence to connect their systems and share information. Without shared information the rest of the system will lose many potential benefits that IT can bring to disaster management.

    Often, little or no information is available to support decision-making – emergency managers are forced to make complex decisions without having the luxury of all the required information.

    A disaster can produce a massive number of tasks requiring hundreds of organisations and thousands of people to co-ordinate activity – meaning that there will always be some prioritisation needed. What should be done first? What can wait until later? How should an impacted community prioritise response and recovery with limited resources?

    The benefits are not just limited to agencies and NGOs. The next evolutionary step will be to adopt an approach called ‘crowd sourcing’, whereby members of the community are provided with tools to interact with each other and with emergency managers.

    This may be achieved with applications that run on mobile phones linking people and even submitting information from the field directly to Sahana servers. Imagine the situation where a passerby can take a georeferenced photo of some disaster damage, and if communications networks are working, send that directly to the system emergency managers are using to manage the event. There are a number of efforts underway looking at how social networks and websites such as Facebook and Twitter can be utilised during a disaster.
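
    To make that concrete, here is a minimal Python sketch of the store-and-forward idea, under assumptions of my own: the endpoint URL, report fields and on-disk queue below are illustrative, not part of any real Sahana API. The field client saves each georeferenced report locally the moment it is captured, and only pushes the backlog to the server when a connection happens to be available.

        # Sketch of a store-and-forward field reporting client.
        # QUEUE_DIR and SERVER_URL are hypothetical; a real deployment would
        # point at whatever collection endpoint the disaster system exposes.
        import json
        import os
        import time
        import urllib.request

        QUEUE_DIR = "outbox"                        # reports wait here until sent
        SERVER_URL = "http://example.org/reports"   # hypothetical endpoint

        def queue_report(lat, lon, description):
            """Save a georeferenced report locally, regardless of connectivity."""
            os.makedirs(QUEUE_DIR, exist_ok=True)
            report = {"lat": lat, "lon": lon,
                      "description": description, "created": time.time()}
            path = os.path.join(QUEUE_DIR, "%d.json" % (time.time() * 1000))
            with open(path, "w") as f:
                json.dump(report, f)

        def flush_queue():
            """Try to send every queued report; keep anything that fails."""
            if not os.path.isdir(QUEUE_DIR):
                return                    # nothing has been queued yet
            for name in sorted(os.listdir(QUEUE_DIR)):
                path = os.path.join(QUEUE_DIR, name)
                with open(path, "rb") as f:
                    body = f.read()
                req = urllib.request.Request(
                    SERVER_URL, data=body,
                    headers={"Content-Type": "application/json"})
                try:
                    urllib.request.urlopen(req, timeout=10)
                    os.remove(path)       # delivered, drop it from the queue
                except OSError:
                    break                 # network still down, try again later

    Because the queue lives on disk it survives reboots and power loss, and flush_queue() can simply be retried whenever the device next sees a working network.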

    Disaster IT is really a force multiplier. It won’t usually save lives, but it will allow a better shared understanding of the problems, and will lead to more effective and co-ordinated response. It allows those responding to an event, whether an organisation or individual, to quickly access information and better inform decision-making. This can lead to less suffering and a quicker recovery for affected communities.

    Design for Disaster
    Computer systems can often be fragile by design – they are especially reliant upon power and communications. If either is lost during a disaster, the value of a system can quickly be lost if it has not been designed to operate in adverse environments. Here are some design decisions that are very important for disaster applications:

    • Low bandwidth – we've all become accustomed to sucking bandwidth through massive broadband pipes, but during a disaster network connectivity for emergency managers may be limited to dialup speeds over satellite or digital radio connections. Disaster software needs to be designed for very efficient transfer of information, and should never assume vast quantities of bandwidth are available. At the extreme, some information may even be transferred by SMS or USB memory stick.
    • Intermittent connectivity – during a disaster communications will likely fail multiple times before they are finally restored. This means that most ‘software as a service’ or web applications on the Internet will be of little use to emergency managers. Disaster software needs to be stored and run locally, and be able to work without a connection to the Internet.
    • Synchronisation – one of the best techniques for designing around low bandwidth and intermittent connectivity is to design a system to synchronise information between two systems whenever communications are available (see the sketch after this list). When communications later fail, both systems will have a copy of the same data, and can access it locally until communications are restored.
    • Low power – power can, and will, fail during a disaster, so disaster software needs to be designed to run on low-power devices. Laptops and notebooks are good targets as they are self-contained, have built-in batteries, and can be charged from solar cells or generators. Large, power-hungry servers can be difficult to move and support in a disaster environment.
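
    As promised above, here is a minimal Python sketch of that synchronisation approach, under simplifying assumptions of my own: each record carries a unique id and a last-updated timestamp, and the newest copy of a record wins. Whenever a link is available the two nodes swap records and merge, so both can keep working from the same local copy when the link drops again.

        # Sketch of record-level synchronisation with a last-write-wins merge.
        # The records and timestamps here are illustrative, not a real schema.
        def merge_records(local, remote):
            """Merge two {record_id: record} dicts, keeping the newest of each.

            Every record is assumed to carry an 'updated' timestamp.
            """
            merged = dict(local)
            for record_id, record in remote.items():
                current = merged.get(record_id)
                if current is None or record["updated"] > current["updated"]:
                    merged[record_id] = record
            return merged

        # When communications come back, each node sends its records to the
        # other and both converge on the same view of the data.
        node_a = {"camp-1": {"name": "Relief camp A", "updated": 100}}
        node_b = {"camp-1": {"name": "Relief camp A (revised)", "updated": 140},
                  "well-7": {"name": "Water point", "updated": 120}}
        node_a = merge_records(node_a, node_b)
        node_b = merge_records(node_b, node_a)
        assert node_a == node_b

    Last-write-wins is the simplest possible merge rule; a real disaster management system would need to handle conflicting edits more carefully, but the offline-first principle is the same.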

    How I became involved

    One might ask how a Kiwi became involved in Sahana. Ever since training as a Civil Defence volunteer in the late 90s, I have had an interest in how information technology could be used to improve disaster management. The tsunami in 2004 acted as the catalyst for Sri Lankan computer programmers to produce Sahana, and I have been volunteering with the project since 2005. In September 2005, I helped facilitate a workshop in Colombo that formed the basis for the current version of Sahana. In March this year I attended a Sahana conference and Board meeting in Sri Lanka. At the Board meeting the existing ‘owner’ of Sahana – the Lanka Software Foundation – agreed to hand the project over to the open source community. I am a member of the transition Board that is in the process of forming an international non-profit foundation that can accept financial donations and act as the ‘custodian’ of Sahana.

    How you can help

    There are numerous ways Sahana is looking for help. Once the foundation is registered, we will be able to accept financial donations that will be used to fund development. In the meantime, we are looking for open source programmers with web development skills (including mapping). If you're not a programmer, we are always looking for translators who can convert the English text and documentation into many different languages. Perhaps most importantly, we are looking for experienced emergency managers to help provide design advice to the Sahana community and guide the developers.

    26 April 2009

    Gavin Treadgold

    Google investing USD$50,000 in Sahana

    Well, it has been a lot of work for the admins, the mentors, and the students, but it has paid off. The Sahana project has been awarded 10 student projects in the 2009 Google Summer of Code. We have some great projects lined up! They include:

    • Person Registry for Sahana
    • Warehouse Management
    • Disaster Victim Identification
    • J2ME clients for form data collection in the field
    • Optical Character Recognition for scanning forms
    • Peer to peer synchronisation of Sahana servers
    • CAP Aggregation and Firefox CAP plugin
    • CAP Editing and Publishing
    • Mashup/Aggregation Dashboard
    • Theme Manager

    Having been neck deep in the process – working with others to set up our assessment process, coming up with ideas (I'm stoked to have two students working on CAP ideas that came out of my earlier suggestion), and reviewing each and every one of the 45 proposals we received – it has been exciting to get so many projects accepted.

    I think that by the end of the year, we are going to have some great new functionality available in Sahana. Even more, I hope we’ll attract more open source developers to our ever growing community!

    25 March 2009

    Gavin Treadgold

    Sahana – a catalyst to widespread EMIS deployment

    I’ve just uploaded the presentation I gave on Sahana at the Sahana 2009 Conference in Colombo, Sri Lanka on the 25th of March, 2009. I’ll put a link up to the associated paper soon as well.