Planet NZOSS
 

17 October 2017

Francois Marier


Checking Your Passwords Against the Have I Been Pwned List

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to see whether any of them are compromised and should be changed immediately. This meant I needed to download the list and do the lookups locally, since it's not a good idea to send your current passwords to this third-party service.
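The core of such a local lookup is simple enough to sketch in the shell. This is only an illustration (not the tool mentioned below), and it assumes you've downloaded and unpacked the hashes into pwned-passwords.txt with one uppercase SHA-1 hash at the start of each line:

# Hash the candidate password and look for it in the local HIBP dump
hash=$(printf '%s' 'correct horse battery staple' | sha1sum | awk '{print toupper($1)}')
grep -q "^$hash" pwned-passwords.txt && echo "compromised: pick a new password" || echo "not in the list"

Anything fancier, like the Postgres-backed tool below, is mostly about making thousands of these lookups fast.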

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.

04 October 2017

Simon Lyall

DevOps Days Auckland 2017 – Wednesday Session 3

Sanjeev Sharma – When DevOps met SRE: From Apollo 13 to Google SRE

  • Author of two DevOps books
  • Apollo 13
    • Who were the real heroes? The guys back at mission control. The astronauts just had to keep breathing and not die
  • Best Practice for Incident management
    • Prioritize
    • Prepare
    • Trust
    • Introspect
    • Consider Alternatives
    • Practice
    • Change it around
  • Big Hurdles to adoption of DevOps in Enterprise
    • Literature is Only looking at one delivery platform at a time
    • Big enterprises have hundreds of platforms with completely different technologies, maturity levels and speeds, all interdependent
    • He divides platforms into:
      • Industrialised Core – Value High, Risk Low, MTBF
      • Agile/Innovation Edge – Value Low, Risk High, Rapid change and delivery, MTTR
      • Need normal distribution curve of platforms across this range
      • Need to be able to maintain products at both ends in one IT organisation
  • 6 capabilities needed in IT Organisation
    • Planning and architecture.
      • Your Delivery pipeline will be as fast as the slowest delivery pipeline it is dependent on
    • APIs
      • Modernizing to Microservices based architecture: Refactoring code and data and defining the APIs
    • Application Deployment Automation and Environment Orchestration
      • Devs are paid to code, not to maintain deployment and config scripts
      • Ops must provide an environment that requires devs to write zero setup scripts
    • Test Service and Environment Virtualisation
      • If you are doing 2-week sprints but it takes 3 weeks to get a test server, how long are your sprints really?
    • Release Management
      • No good if 99% of software works but last 1% is vital for the business function
    • Operational Readiness for SRE
      • Shift between MTBF to MTTR
      • MTTR  = Mean time to detect + Mean time to Triage + Mean time to restore
      • + Mean time to pass blame
    • Antifragile Systems
      • Things that are neither fragile nor robust, but rather thrive on chaos
      • Cattle not pets
      • Servers may go red, but services are always green
    • DevOps: “Everybody is responsible for delivery to production”
    • SRE: “(Everybody) is responsible for delivering Continuous Business Value”


03 October 2017

Simon Lyall

DevOps Days Auckland 2017 – Wednesday Session 2

Marcus Bristol (Pushpay) – Moving fast without crashing

  • Low tolerance for errors in production due to being in finance
  • Deploy twice per day
  • Just Culture – Balance safety and accountability
    • What rule?
    • Who did it?
    • How bad was the breach?
    • Who gets to decide?
  • Example of Retributive Culture
    • KPIs reflect incidents.
    • If more than 10% of deploys are bad, it affects the bonus
    • Reduced number of deploys
  • Restorative Culture
  • Blameless post-mortem
    • Can give a detailed account of what happened without fear of retribution
    • Happens after every incident or near-incident
    • Written Down in Wiki Page
    • So everybody has the chance to have a say
    • Summary, Timeline, impact assessment, discussion, Mitigations
    • Mitigations become highest-priority work items
  • Our Process
    • Feature Flags
    • Science
    • Lots of small PRs
    • Code Review
    • Testers paired to devs so bugs can be fixed as soon as found
    • Automated testing
    • Pollination (reviews of code between teams)
    • Bots
      • Posts to Slack when feature flag has been changed
      • Nags about feature flags that seem to be hanging around in QA
      • Nags about flags that have been good in prod for 30+ days
      • Every merge
      • PRs awaiting reviews for long time (days)
      • Missing post-mortem migrations
      • Status of builds in build farm
      • When deploy has been made
      • Health of API
      • Answer queries on team member list
      • Create ship train of PRs into a build and user can tell bot to deploy to each environment


DevOps Days Auckland 2017 – Wednesday Session 1

Michael Coté – Not actually a DevOps Talk

Digital Transformation

  • Goal: deliver value weekly, reliably, with small patches
  • Management must be the first to fail and transform
  • Standardize on a platform: special snowflakes are slow, expensive and error-prone (see his slide for a good list of stuff that should be standardized)
  • Ramping up: “Pilot low-risk apps, and ramp-up”
  • Pair programming/working
    • Half the advantage is people spend less time on reddit “research”
  • Don’t go to meetings
  • Automate compliance: have what you do automatically get logged and create compliance docs, rather than building them manually.
  • Crafting Your Cloud-Native Strategy

Sajeewa Dayaratne – DevOps in an Embedded World

  • Challenges on Embedded
    • Hardware – resource constrained
    • Debugging – OS bugs, Hardware Bugs, UFO Bugs – Oscilloscopes and JTAG connectors are your friend.
    • Environment – Thermal, Moisture, Power consumption
    • Deploy to product – Multi-month cycle, hard or impossible to send updates to ships at sea.
  • Principles of DevOps apply equally to embedded
    • High Frequency
    • Reduce overheads
    • Improve defect resolution
    • Automate
    • Reduce response times
  • Navico
    • Small sonar, navigation for medium boats, displays for sailing (e.g. America's Cup), navigation displays for large ships
    • Dev around world, factory in Mexico
  • Codebase
    • 5 million lines of code
    • 61 Hardware Products supported – Increasing steadily, very long lifetimes for hardware
    • Complex network of products – lots of products on boat all connected, different versions of software and hardware on the same boat
  • Architecture
    • Old codebase
    • Backward compatible with old hardware
    • Needs to support new hardware
    • Desire new features on all products
  • What does this mean
    • Defects were found too late
    • Very high cost of bugs found late
    • Software stabilization taking longer
    • Manual test couldn’t keep up
    • Cost increasing , including opportunity cost
  • Does CI/CD provide answer?
    • But will it work here?
    • Case Study from HP. Large-Scale Agile Development by Gary Gruver
  • Our Plan
    • Improve tools and architecture
    • Build Speeds
    • Automated testing
    • Code quality control
  • Previous VCS
    • Proprietary tool with limited support and upgrades
    • Limited integration
    • Lack of CI support
    • No code review capacity
  • Move to git
    • Code reviews
    • Integrated CI
    • Supported by tools
  • Architecture
    • Had a configurable codebase already
    • Fairly common hardware platform (only 9 variations)
    • Had runtime feature flags
    • But
      • Cyclic dependencies – 1.5 years to clean these up
      • Singletons – cut down
      • Promote unit testability – worked on
      • Many branches – long lived – mega merges
  • Went to a single Branch model, feature flags, smaller batch sizes, testing focused on single branch
  • Improve build speed
    • Started at 8 hours to build the Linux platform, 2 hours for each app, 14+ hours to build and package a release
    • Options
      • Increase speed
      • Parallel Builds
    • What they did
      • ccache / clcache
      • IncrediBuild
      • distcc
    • 4-5 hours down to 1 hour
  • Test automation
    • Existing tests were mock-ups of the hardware, so not typical
    • Started with micro-test
      • Unit testing (simulator)
      • Unit testing (real hardware)
    • Build Tools
      • Software tools (n2k simulator, remote control)
      • Hardware tools (mimic real-world data, repurpose existing stuff)
    • UI Test Automation
      • Build or Buy
      • Functional testing vs API testing
      • HW Test tools
      • Took 6 hours to do full test on hardware.
  • PipeLine
    • Commit -> pull request
    • Automated Build / Unit Tests
    • Daily QA Build
  • Next?
    • Configuration as code
    • Code Quality tools
    • Simulate more hardware
    • Increase analytics and reporting
    • Fully simulated test env for dev (so the devs don’t need the hardware)
    • Scale – From internal infrastructure to the cloud
    • Grow the team
  • Lessons Learnt
    • Culture!
    • Collect Data
    • Get Executive Buy in
    • Change your tools and processes if needed
    • Test automation is the key
      • Invest in HW
      • Simulate
      • Virtualise
    • Focus on good software design for Everything


17 September 2017

Andrew Ruthven

Missing opkg status file on LEDE...

I tried to install a package on my home router, which is running LEDE, only to be told that libc wasn't installed. Huh? What's going on?! It looked, to all intents and purposes, as though libc wasn't installed. And it looked like nothing was installed.

What to do if opkg list-installed is returning nothing?

I finally tracked down the status file it uses as being /usr/lib/opkg/status. And it was empty. Oh dear.

Fortunately the info directory had content. This means we can rebuild the status file. How? This is what I did:

cd /usr/lib/opkg/info
for x in *.list; do
    pkg=$(basename $x .list)
    echo $pkg
    opkg info $pkg | sed 's/Status: .*$/Status: install ok installed/' >> ../status
done

And then for the special or virtual packages (such as libc and the kernel):

for x in *.control; do
    pkg=$(basename $x .control)
    if ! grep -q "Package: $pkg" ../status; then
        echo $pkg is missing; cat $x >> ../status
    fi
done

I then had to edit the file to tidy up some newlines for the kernel and libc, and set the status lines correctly. I used "install hold installed".
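As a quick sanity check after rebuilding the file, confirm that opkg can see packages again; something like this should now report a sensible number rather than zero:

opkg list-installed | wc -l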

Now that I've shaved that yak, I can install tcpdump to try and work out why a VoIP phone isn't working. Joy.

09 September 2017

Francois Marier


TLS Authentication on Freenode and OFTC

In order to easily authenticate with IRC networks such as OFTC and Freenode, it is possible to use client TLS certificates (also known as SSL certificates). In fact, it turns out that it's very easy to set this up both on irssi and on znc.

Generate your TLS certificate

On a machine with good entropy, run the following command to create a keypair that will last for 10 years:

openssl req -nodes -newkey rsa:2048 -keyout user.pem -x509 -days 3650 -out user.pem -subj "/CN=<your nick>"

Then extract your key fingerprint using this command:

openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g'

Share your fingerprints with NickServ

On each IRC network, do this:

/msg NickServ IDENTIFY Password1!
/msg NickServ CERT ADD <your fingerprint>

in order to add your fingerprint to the access control list.
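Before pointing your client at the network, you can check that a TLS connection presenting your certificate actually works. Something like the following is a quick sanity check (the -brief flag needs OpenSSL 1.1.0 or later, and this only exercises the handshake, not the NickServ identification itself):

openssl s_client -connect irc.oftc.net:6697 -cert user.pem -key user.pem -brief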

Configure ZNC

To configure znc, start by putting the key in the right place:

cp user.pem ~/.znc/users/<your nick>/networks/oftc/moddata/cert/

and then enable the built-in cert plugin for each network in ~/.znc/configs/znc.conf:

<Network oftc>
    ...
    LoadModule = cert
    ...
</Network>
<Network freenode>
    ...
    LoadModule = cert
    ...
</Network>

Configure irssi

For irssi, do the same thing but put the cert in ~/.irssi/user.pem and then change the OFTC entry in ~/.irssi/config to look like this:

{
  address = "irc.oftc.net";
  chatnet = "OFTC";
  port = "6697";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

and the Freenode one to look like this:

{
  address = "chat.freenode.net";
  chatnet = "Freenode";
  port = "7000";
  use_tls = "yes";
  tls_cert = "~/.irssi/user.pem";
  tls_verify = "yes";
  autoconnect = "yes";
}

That's it. That's all you need to replace password authentication with a much stronger alternative.

02 September 2017

Andrew Ruthven

Network boot a Raspberry Pi 3

I found that to make all this work I had to piece together a bunch of information from different locations. This fills in some of the blanks from the official Raspberry Pi documentation. See here, here, and here.

Image

Download the latest raspbian image from https://www.raspberrypi.org/downloads/raspbian/ and unzip it. I used the lite version as I'll install only what I need later.

To extract the files from the image we need to jump through some hoops. Inside the image are two partitions; we need data from each one.

 # Make it easier to re-use these instructions by using a variable
 IMG=2017-04-10-raspbian-jessie-lite.img
 fdisk -l $IMG

You should see some output like:

 Disk 2017-04-10-raspbian-jessie-lite.img: 1.2 GiB, 1297862656 bytes, 2534888 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: dos
 Disk identifier: 0x84fa8189
 
 Device                               Boot Start     End Sectors  Size Id Type
 2017-04-10-raspbian-jessie-lite.img1       8192   92159   83968   41M  c W95 FAT32 (LBA)
 2017-04-10-raspbian-jessie-lite.img2      92160 2534887 2442728  1.2G 83 Linux

You need to be able to mount both the boot and the root partitions. Do this by taking the start offset of each one and multiplying it by the sector size, which is given on the line saying "Sector size" (typically 512 bytes). For example, with the 2017-04-10 image, boot starts at sector 8192, so I mount it like this (it is VFAT):

 mount -v -o offset=$((8192 * 512)) -t vfat $IMG /mnt
 # I then copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-boot/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-boot/
 # unmount the partition now:
 umount /mnt

Then we do the same for the root partition:

 mount -v -o offset=$((92160 * 512)) -t ext4 $IMG /mnt
 # copy the data off:
 mkdir -p /data/diskless/raspbian-lite-base-root/
 rsync -xa /mnt/ /data/diskless/raspbian-lite-base-root/
 # umount the partition now:
 umount /mnt

DHCP

When I first set this up, I used OpenWRT on my router, and I had to patch /etc/init/dnsmasq to support setting DHCP option 43. As of the writing of this article, a similar patch has been merged, but isn't in a release yet, and, well, there may never be another release of OpenWRT. I'm now running LEDE, and the good news is it already has the patch merged (hurrah!). If you're still on OpenWRT, then here's the patch you'll need:

https://git.lede-project.org/?p=source.git;a=commit;h=9412fc294995ae2543fabf84d2ce39a80bfb3bd6

This lets you put the following in /etc/config/dnsmasq. It says that any device that uses DHCP and has a MAC issued by the Raspberry Pi Foundation should have option 66 (boot server) and option 43 set as specified. Set the IP address in option 66 to the device that should be used for TFTP on your network; if it's the same device that provides DHCP then it isn't required. I had to set the boot server, as my other network boot devices are using a different server (with an older tftpd-hpa; I explain the problem further down).

 config mac 'rasperrypi'
         option mac 'b8:27:eb:*:*:*'
         option networkid 'rasperrypi'
         list dhcp_option '66,10.1.0.253'
         list dhcp_option '43,Raspberry Pi Boot'

tftp

Initially I used a version of tftpd that was too old and didn't support how the RPi tries to discover whether it should use the serial-number-based naming scheme. The version of tftpd-hpa in Debian Jessie works just fine. To find out the serial number you'll probably need to increase the logging of tftpd-hpa; do so by editing /etc/default/tftpd-hpa and adding "-v" to the TFTP_OPTIONS option. It can also be useful to watch tcpdump to see the requests and responses, for example (10.1.0.203 is the IP of the RPi I'm working with):

  tcpdump -n -i eth0 host 10.1.0.203 and dst port 69

This was able to tell me the serial number of my RPi, so I made a directory in my tftpboot directory with the same serial number and copied all the boot files into there. I then found that I had to remove the init= portion from the cmdline.txt file I'm using. To ease debugging I also removed quiet. So, my current cmdline.txt contains (newlines entered for clarity, but the file has it all on one line):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/nfs
nfsroot=10.1.0.253:/data/diskless/raspbian-lite-base-root,vers=3,rsize=1462,wsize=1462
ip=dhcp elevator=deadline rootwait hostname=rpi.etc.gen.nz

NFS root

You'll need to export the directories you created via NFS. My exports file has these lines:

/data/diskless/raspbian-lite-base-root	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/data/diskless/raspbian-lite-base-boot	10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)
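After editing the exports file, remember to get the NFS server to re-read it; on a typical Linux NFS server that's:

 exportfs -ra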

And you'll also want to make sure you're mounting those correctly during boot, so I have in /data/diskless/raspbian-lite-base-root/etc/fstab the following lines:

10.1.0.253:/data/diskless/raspbian-lite-base-root   /       nfs   rw,vers=3       0   0
10.1.0.253:/data/diskless/raspbian-lite-base-boot   /boot   nfs   vers=3,nolock   0   2

Network Booting

Now you can hopefully boot. Unless you run into this bug, as I did, where the RPi will sometimes fail to boot. It turns out the fix, which is mentioned on the bug report, is to put bootcode.bin (and only bootcode.bin) onto an SD card. That'll load the fixed bootcode, which will then boot reliably.
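If you need it, preparing such an SD card only takes a minute. This is a rough sketch; /dev/sdX1 is a placeholder, so double-check your card's device name before formatting anything:

 # Put only bootcode.bin on a small FAT-formatted SD card
 mkfs.vfat /dev/sdX1
 mount /dev/sdX1 /mnt
 cp /data/diskless/raspbian-lite-base-boot/bootcode.bin /mnt/
 umount /mnt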

15 August 2017

Mark Foster

Chromebook Hackery

Had a bit of a tinker with my daughter's Chromebook today.
In particular I wanted to see if I could make it boot from USB (to find a way of running an OS on it other than ChromeOS, without actually hosing the ChromeOS instance in the process).

Managed to generally follow the instructions I found at Chromium.org to enable Developer mode.

Once done, you boot it up as usual (Ctrl-D to keep booting) and, once started, connected to WiFi and logged in with your Google account, you can use Ctrl-Alt-T to run a ChromeOS terminal, and from there type 'shell' to get a bash shell.
The trap is that you're then logged in as 'chronos', and to be able to act as root you need to su... which requires knowing the password for chronos. The instructions I then found on Reddit (Ctrl-Alt-RightArrowFunction) let you open a root shell using the previously set root password, and then set a password for the user chronos, which can then be used to fully interact without switching out of the window manager...

Unfortunately the instructions to let the thing boot off USB don't appear to work, and it appears I'm not the only one to confront this.

That's about as far as I've gotten. I don't particularly want to brick it, nor have to factory-default it as such. It would just be nice to have an alternative boot option occasionally. Sigh...

12 August 2017

Francois Marier


pristine-tar and git-buildpackage Work-arounds

I recently ran into problems trying to package the latest version of my planetfilter tool.

This is how I was able to temporarily work-around bugs in my tools and still produce a package that can be built reproducibly from source and that contains a verifiable upstream signature.

pristine-tar is unable to reproduce a tarball

After importing the latest upstream tarball using gbp import-orig, I tried to build the package but ran into this pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

So I decided to throw away what I had, re-import the tarball and try again. This time, I got a different pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

I filed bug 871938 for this.

As a work-around, I simply symlinked the upstream tarball I already had and then built the package using the tarball directly instead of the upstream git branch:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
gbp buildpackage --git-tarball-dir=..

Given that only the upstream and master branches are signed, the .delta file on the pristine-tar branch could be fixed at any time in the future by committing a new .delta file once pristine-tar gets fixed. This therefore seems like a reasonable work-around.

git-buildpackage doesn't import the upstream tarball signature

The second problem I ran into was a missing upstream signature after building the package with git-buildpackage:

$ lintian -i planetfilter_0.7.4-1_amd64.changes
E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
N: 
N:    The packaging includes an upstream signing key but the corresponding
N:    .asc signature for one or more source tarballs are not included in your
N:    .changes file.
N:    
N:    Severity: important, Certainty: certain
N:    
N:    Check: changes-file, Type: changes
N: 

This problem (and, I suspect, the lintian error) is fairly new and hasn't been solved yet.

So until gbp import-orig gets proper support for upstream signatures, my work-around was to copy the upstream signature in the export-dir output directory (which I set in ~/.gbp.conf) so that it can be picked up by the final stages of gbp buildpackage:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc

If there's a better way to do this, please feel free to leave a comment (authentication not required)!

26 July 2017

NZOSS News

GovHack Hamilton networked by open source FAUCET

You are only as free as the tools you use. This notion articulates a key challenge of our times. We are surrounded by tech; it permeates our work, play and homes. This means that the basic freedoms to life, liberty and the pursuit of happiness are somewhat constrained by the digital literacy of individuals.


This is in part why open source is so important. It empowers individuals and communities to become makers and creators of their own destinies. It is a way to support collaborative competition and even the playing field, whilst also creating transparency and openness in the systems that surround us, many of which we need to be able to trust.


GovHack is an awesome opportunity to explore and experiment with government data to improve the lives of New Zealanders. But it is also a chance to collaborate on the sorts of systems we want to build in the future. As such, open source and transparency are the lifeblood of GovHack and our community.


But what about the networks we use? How do they shape the world we live in? Most people blindly trust the network but how can the network support or undermine freedom? We ask all GovHackers to consider the full extent of how the tools they use support freedom, for geeks and our non-geek communities, and how we can build trust into the network through openness and transparency.


GovHack Hamilton is the first GovHack event hosted on a network controlled by open source software, Faucet. Faucet is an open source SDN (Software Defined Network) application that made it easy to spin up a brand new network for general members of the public to join for the duration of GovHack, rather than trying to authenticate these users on the University of Waikato corporate network, which means people can spend more time on their projects.


Faucet is a New Zealand project that began at the University of Waikato and at REANNZ, and has spread internationally. Faucet can control both software and hardware network devices (so it supports high-performance networking, and switches from vendors like Allied Telesis, HP, and NoviFlow) and includes built-in automated test features. Being open source as well as easy to test, Faucet enables operators to quickly and safely introduce new network features - most networks today are not open source and do not have automated testing. Being able to deploy changes rapidly, safely, and securely using SDN software like Faucet is important as network security and transparency concerns are increasing for all Internet users.


Finally, we’d like to thank a few companies who made this project possible by providing their support. Thanks to our hardware partners Allied Telesis and HP Enterprise/Aruba for providing Faucet-compatible network switches to build the physical network. Thanks to REANNZ, who provided a generous amount of Internet bandwidth to ensure attendees could quickly download datasets for their projects. Thanks to the University of Waikato for hosting our Hamilton GovHack site again this year and helping us extend our network between multiple buildings by connecting our Faucet SDN network to their corporate network, demonstrating interoperability of the SDN network with a traditional network.

 

Author credit:

Brad Cowie

WAND Group, University of Waikato, NZ

23 May 2017

NZOSS News

Launch of 2017 Tech Manifesto

Today, 18 national technology sector communities are launching our 2017 Tech Manifesto.

The goal of this collectively composed manifesto is to guide party policy in the technical area going into the 2017 national election. It represents the united front of our many NZ-wide technology-focused communities and is, we trust, an informative and compelling read for any prospective leaders of NZ.

We look forward to seeing all of these well-considered positions implemented in each party's election platform, as all of them are non-partisan and simply good for New Zealand.

Participating organisations include:

  • NZTech
  • InternetNZ
  • IT Professionals NZ (ITP)
  • TUANZ
  • NZRise
  • NZ Software Association
  • Canterbury Tech
  • FinTechNZ
  • HealthIT NZ
  • Health Informatics New Zealand
  • The NZ Open Source Society
  • Project Management Institute of NZ
  • itSMFnz
  • Test Professionals Network
  • Game Developers Association
  • Precision Ag Association
  • AI Forum
  • VR/AR Association
Attachment: 2017 Tech Manifesto.pdf (5.31 MB)

20 May 2017

Mark Foster

Renewing SSL Certificates like a boss (aka validating you didn't screw up (or, you did))

Probably not the first time I've done it - renewing my Let's Encrypt SSL certs without then actually bouncing the daemons to load the new certs.

Some tips for validating that your cert is actually working:

Firstly, show cert details:

blakjak@raven:~$ openssl s_client -connect localhost:25 -starttls smtp

Look for (in my case) something like:

subject=/CN=blakjak.net
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---

And then there's IMAP and POP; the below will quickly show the dates of your cert:

blakjak@raven:~$ openssl s_client -connect localhost:993 2>/dev/null | openssl x509 -noout -dates
notBefore=May 17 07:45:00 2017 GMT
notAfter=Aug 15 07:45:00 2017 GMT
blakjak@raven:~$ openssl s_client -connect localhost:995 2>/dev/null | openssl x509 -noout -dates
notBefore=May 17 07:45:00 2017 GMT
notAfter=Aug 15 07:45:00 2017 GMT
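The same quick date check works for the SMTP listener if you combine the two commands above; piping in QUIT stops s_client from sitting there waiting for input:

echo QUIT | openssl s_client -connect localhost:25 -starttls smtp 2>/dev/null | openssl x509 -noout -dates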

18 February 2017

Tom Ryder

Shell from vi

A good sign of a philosophically sound interactive Unix tool is the facilities it offers for interacting with the filesystem and the shell: specifically, how easily can you run file operations and/or shell commands with reference to data within the tool? The more straightforward this is, the more likely the tool will fit neatly into a terminal-driven Unix workflow.

If all else fails, you could always suspend the task with Ctrl+Z to drop to a shell, but it’s helpful if the tool shows more deference to the shell than that; it means you can use and (even more importantly) write tools to manipulate the data in the program in whatever languages you choose, rather than being forced to use any kind of heretical internal scripting language, or worse, an over-engineered API.

vi is a good example of a tool that interacts openly and easily with the Unix shell, allowing you to pass open buffers as streams of text transparently to classic filter and text processing tools. In the case of Vim, it’s particularly useful to get to know these, because in many cases they allow you to avoid painful Vimscript, and to do things your way, without having to learn an ad-hoc language or to rely on plugins. This was touched on briefly in the “Editing” article of the Unix as IDE series.

Choosing your shell

By default, vi will use the value of your SHELL environment variable as the shell in which your commands will be run. In most cases, this is probably what you want, but it might pay to check before you start:

:set shell?

If you’re using Bash, and this prints /bin/bash, you’re good to go, and you’ll be able to use Bash-specific features or builtins such as [[ comfortably in your command lines if you wish.

Running commands

You can run a shell command from vi with the ! ex command. This is inherited from the same behaviour in ed. A good example would be to read a manual page in the same terminal window without exiting or suspending vi:

:!man grep

Or to build your project:

:!make

You’ll find that exclamation point prefix ! shows up in the context of running external commands pretty consistently in vi.

You will probably need to press Enter afterwards to return to vi. This is to allow you to read any output remaining on your screen.

Of course, that’s not the only way to do it; you may prefer to drop to a forked shell with :sh, or suspend vi with ^Z to get back to the original shell, resuming it later with fg.

You can refer to the current buffer’s filename in the command with %, but be aware that this may cause escaping problems for files with special characters in their names:

:!gcc % -o foo

If you want a literal %, you will need to escape it with a backslash:

:!grep \% .vimrc

The same applies for the # character, for the alternate buffer.

:!gcc # -o bar
:!grep \# .vimrc

And for the ! character, which expands to the previous command:

:!echo !
:!echo \!

You can try to work around special characters for these expansions by single-quoting them:

:!gcc '%' -o foo
:!gcc '#' -o bar

But that’s still imperfect for files with apostrophes in their names. In Vim (but not vi) you can do this:

:exe "!gcc " . shellescape(expand("%")) . " -o foo"

The Vim help for this is at :help :!.

Reading the output of commands into a buffer

Also inherited from ed is reading the output of commands into a buffer, which is done by giving a command starting with ! as the argument to :r:

:r !grep vim .vimrc

This will insert the output of the command after the current line position in the buffer; it works in the same way as reading in a file directly.

You can add a line number prefix to :r to place the output after that line number:

:5r !grep vim .vimrc

To put the output at the very start of the file, a line number of 0 works:

:0r !grep vim .vimrc

And for the very end of the file, you’d use $:

:$r !grep vim .vimrc

Note that redirections work fine, too, if you want to prevent stderr from being written to your buffer in the case of errors:

:$r !grep vim .vimrc 2>>vim_errorlog

Writing buffer text into a command

To run a command with standard input coming from text in your buffer, but without deleting it or writing the output back into your buffer, you can provide a ! command as an argument to :w. Again, this behaviour is inherited from ed.

By default, the whole buffer is written to the command; you might initially expect that only the current line would be written, but this makes sense if you consider the usual behaviour of w when writing directly to a file.

Given a file with a first column full of numbers:

304 Donald Trump
227 Hillary Clinton
3   Colin Powell
1   Spotted Eagle
1   Ron Paul
1   John Kasich
1   Bernie Sanders

We could calculate and view (but not save) the sum of the first column with awk(1), to see the expected value of 538 printed to the terminal:

:w !awk '{sum+=$1}END{print sum}'

We could limit the operation to the faithless electoral votes by specifying a line range:

:3,$w !awk '{sum+=$1}END{print sum}'

You can also give a range of just ., if you only want to write out the current line.

In Vim, if you’re using visual mode, pressing : while you have some text selected will automatically add the '<,'> range marks for you, and you can write out the rest of the command:

:'<,'>w !grep Bernie

Note that this writes every line of your selection to the command, not merely the characters you have selected. It’s more intuitive to use visual line mode (Shift+V) if you take this approach.

Filtering text

If you want to replace text in your buffer by filtering it through a command, you can do this by providing a range to the ! command:

:1,2!tr '[:lower:]' '[:upper:]'

This example would capitalise the letters in the first two lines of the buffer, passing them as input to the command and replacing them with the command’s output.

304 DONALD TRUMP
227 HILLARY CLINTON
3   Colin Powell
1   Spotted Eagle
1   Ron Paul
1   John Kasich
1   Bernie Sanders

Note that the number of lines passed as input need not match the number of lines of output. The length of the buffer can change. Note also that by default any stderr is included; you may want to redirect that away.

You can specify the entire file for such a filter with %:

:%!tr '[:lower:]' '[:upper:]'

As before, the current line must be explicitly specified with . if you want to use only that as input, otherwise you’ll just be running the command with no buffer interaction at all, per the first heading of this article:

:.!tr '[:lower:]' '[:upper:]'

You can also use ! as a motion rather than an ex command on a range of lines, by pressing ! in normal mode and then a motion (w, 3w, }, etc) to select all the lines you want to pass through the filter. Doubling it (!!) filters the current line, in a similar way to the yy and dd shortcuts, and you can provide a numeric prefix (e.g. 3!!) to specify a number of lines from the current line.
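For instance, to give a concrete keystroke sequence: with the cursor on the first line of the electoral votes block from earlier, you could uppercase just that paragraph by pressing ! followed by } and then typing the same tr command:

!}tr '[:lower:]' '[:upper:]'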

This is an example of a general approach that will work with any POSIX-compliant version of vi. In Vim, you have the gU command available to coerce text to uppercase, but this is not available in vanilla vi; the best you have is the tilde command ~ to toggle the case of the character under the cursor. tr(1), however, is specified by POSIX–including the locale-aware transformation–so you are much more likely to find it works on any modern Unix system.

If you end up needing such a command during editing a lot, you could make a generic command for your private bindir, say named upp for uppercase, that forces all of its standard input to uppercase:

#!/bin/sh
tr '[:lower:]' '[:upper:]'

Once saved somewhere in $PATH and made executable, this would allow you simply to write the following to apply the filter to the entire buffer:

:%!upp

The main takeaway from this is that the scripts you use with your editor don’t have to be in shell. You might prefer Awk:

#!/usr/bin/awk -f
{ print toupper($0) }

Or Perl:

#!/usr/bin/env perl
print uc while <>;

Or Python, or Ruby, or Rust, or …

Incidentally, this “filtering” feature is where vi’s heritage from ed ends as far as external commands are concerned. In POSIX ed, there isn’t a way to filter buffer text through a command in one hit. It’s not too hard to emulate it with a temporary file, though, using all the syntax learned above:

*1,2w !upp > tmp
*1,2d
*0r tmp
*!rm tmp

10 February 2017

Tom Ryder

Bash hostname completion

As part of its programmable completion suite, Bash includes hostname completion. This completion mode reads hostnames from a file in hosts(5) format to find possible completions matching the current word. On Unix-like operating systems, it defaults to reading the file in its usual path at /etc/hosts.

For example, given the following hosts(5) file in place at /etc/hosts:

127.0.0.1      localhost
192.0.2.1      web.example.com www
198.51.100.10  mail.example.com mx
203.0.113.52   radius.example.com rad

An appropriate call to compgen would yield this output:

$ compgen -A hostname
localhost
web.example.com
www
mail.example.com
mx
radius.example.com
rad

We could then use this to complete hostnames for network diagnostic tools like ping(8):

$ complete -A hostname ping

Typing ping we and then pressing Tab would then complete to ping web.example.com. If the shopt option hostcomplete is on, which it is by default, Bash will also attempt host completion if completing any word with an @ character in it. This can be useful for email address completion or for SSH username@hostname completion.

We could also trigger hostname completion in any other Bash command line (regardless of complete settings) with the Readline shortcut Alt+@ (i.e. Alt+Shift+2). This works even if hostcomplete is turned off.

However, with DNS so widely deployed, and with system /etc/hosts files normally so brief on internet-connected systems, this may not seem terribly useful; you’d just end up completing localhost, and (somewhat erroneously) a few IPv6 addresses that don’t begin with a digit. It may seem even less useful if you have your own set of hosts in which you’re interested, since they may not correspond to the hosts in the system’s /etc/hosts file, and you probably really do want them looked up via DNS each time, rather than maintaining static addresses for them.

There’s a simple way to make host completion much more useful by defining the HOSTFILE variable in ~/.bashrc to point to any other file containing a list of hostnames. You could, for example, create a simple file ~/.hosts in your home directory, and then include this in your ~/.bashrc:

# Use a private mock hosts(5) file for completion
HOSTFILE=$HOME/.hosts

You could then populate the ~/.hosts file with a list of hostnames in which you’re interested, which will allow you to influence hostname completion usefully without messing with your system’s DNS resolution process at all. Because of the way the Bash HOSTFILE parsing works, you don’t even have to fake an IP address as the first field; it simply scans the file for any word that doesn’t start with a digit:

# Comments with leading hashes will be excluded
external.example.com
router.example.com router
github.com
google.com
...

You can even include other files from it with an $include directive!

$include /home/tom/.hosts.home
$include /home/tom/.hosts.work

Author’s note: This really surprised me when reading the source, because I don’t think /etc/hosts files generally support that for their usual name resolution function. I would love to know if any systems out there actually do support this.

The behaviour of the HOSTFILE variable is a bit weird; all of the hosts from the HOSTFILE are appended to the in-memory list of completion hosts each time the HOSTFILE variable is set (not even just changed), and host completion is attempted, even if the hostnames were already in the list. It’s probably sufficient just to set the file once in ~/.bashrc.

This setup allows you to set hostname completion as the default method for all sorts of network-poking tools, falling back on the usual filename completion if nothing matches with -o default:

$ complete -A hostname -o default curl dig host netcat ping telnet

You could also use hostname completions for ssh(1), but to account for hostname aliases and other ssh_config(5) tricks, I prefer to read Host directive values from ~/.ssh/config for that.
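A minimal sketch of that approach (my own illustration, not necessarily the author's actual setup) might pull the non-wildcard Host values out of ~/.ssh/config like this:

# Complete ssh hostnames from Host directives in ~/.ssh/config
_ssh_config_hosts() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    local hosts
    hosts=$(awk '$1 == "Host" { for (i = 2; i <= NF; i++) if ($i !~ /[*?]/) print $i }' ~/.ssh/config 2>/dev/null)
    COMPREPLY=( $(compgen -W "$hosts" -- "$cur") )
}
complete -F _ssh_config_hosts ssh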

If you have machine-readable access to the complete zone data for your home or work domain, it may even be worth periodically enumerating all of the hostnames into that file, perhaps using rndc dumpdb -zones for a BIND9 setup, or using an AXFR request. If you have a locally caching recursive nameserver, you could even periodically examine the contents of its cache for new and interesting hosts to add to the file.

15 January 2017

NZOSS News

Lenovo laptops available for Linux users

Silicon Systems, an NZ-based Lenovo reseller (among other things), has extended an updated offer to NZOSS members, making 2 Lenovo laptop models available without Windows pre-installed. That means you can buy them without paying for a proprietary operating system you neither need nor want. The models are

  1. the Thinkpad X1 Carbon (gen 4) - this is a high-end 14" ultrabook: $2375 + GST
  2. the L560 Notebook - value-for-money, extensible 15.6" laptop: $1250 + GST

These are excellent machines (I've got a gen 2 X1 Carbon and am almost certainly going to upgrade to a newer one eventually). Full specs attached.

10 November 2016

Nat Torkington

On Moving to New Zealand

Hello, American friends!  President-Elect Trump has given his speech and begun to redact his campaign website of the obviously illegal and impossible campaign promises, and you look up from your keyboard through an election-defeat hangover and want to move to New Zealand.
First of all, consider staying.  America’s problems won’t be solved if all the tolerant and progressive people leave.
But that’s not an easy choice for everyone.  If you don’t think you’ll be safe, or you’re concerned about the effects on your children of growing up in the cloud of President Trump, you might be looking elsewhere.
Allow me to suggest New Zealand.
New Zealand has a fairly straightforward skilled migrant immigration scheme, where you get points for meeting certain criteria and if you clear a particular number of points then you can move here.  Some of those criteria are around education, language, and health, effectively biasing it against people who don’t speak English, those who aren’t highly-educated, as well as non-able-bodied and unwell people.
We maintain a list of jobs that are in demand.  If you can meet the needs of an OMG SO IN DEMAND job then you just need a job offer (as well as the points, as described above).  The government’s website on moving to NZ to work doesn’t suck. It’s harder but not impossible without ticking the ZOMGJOB list (and do look: there are some surprising inclusions).
There are plenty of tech startups looking to hire people.  NZ tends to have a reasonable number of fresh software engineering graduates, but few with the kinds of skills that people acquire in American startups: devops, engineering leadership, web scale distributed systems, big data pipelines, etc.  Which startups are hiring?  Look at PushPay, Raygun, Atomic, Timely, Vend, Xero, TradeMe, etc.
Nerd conferences are good here.  Webstock (design, Feb) and Kiwicon (security, Nov) are anchors of the scene.  Attend those and you’ll meet many of the people with whom you could work, and some good friends.  There are additional web, mobile, etc. conferences.  Be sure to schedule Fieldays in your first year, because the agtech world is weird and wonderful and close to our grass-growing economic roots.
If you’re an investor, you can investor your way to residency.  Similarly entrepreneurs.
The pathway to citizenship is straightforward if you decide you’d like to live here forevs.
The absolute dollar value you’ll earn in NZ will look low if you translate it to American dollars.  Do not think you’ll be able to afford your San Francisco home because you’ve been working in Auckland.  It doesn’t work that way.
Food is expensive. Thanks to globalisation (fist shake! Grr globalisation, you!) the whole world can buy our food.  So we pay a lot to eat it.  We don’t have a Mexico just south of us always producing fruit, so (for examples) we eat strawberries for a month each year when they’re in season … and then not.  Meat’s available all year round, and pretty good in the shops.  And if you live semi-rural you can probably find a farmer who’ll let you buy a bull and have it butchered for you, then you can get a freezer full of export-grade yum.  Vegetarians, make friends with your local Asian grocery, where all sorts of surprising imports and deliciousness is available.
Housing is tight in Auckland, but easier in many other parts of the country.  You can buy a home in Auckland if you have the earning power of two professionals in your family, and then they’d better be successful professionals.  In many other parts of the country, one professional income is enough.
NZ is beautiful and diverse; there are many great places to live.  Think of it as the American West Coast: Auckland is Santa Barbara, Dunedin is Seattle.  (ish) The rule of thumb: warmer is norther, but there are some exceptions (Nelson and Napier are toastier than you’d expect given their locations).  Do you like hiking?  Skiing?  Fishing?  Swimming?  Hunting?  There are great places for these activities around the country, and you could live next to the national park or marina that means you can live your passion every weekend.
Are you more cultural and cerebral?  Wellington and Auckland have thriving arts scenes, with bands, coffee shops, theatre, opera, orchestras, etc.  The cities of Dunedin, Christchurch, Wellington, Palmerston North, Hamilton, and Auckland are university towns.
Caution: our hipsters are not as developed as America’s. So while there’s the occasional extravagant beard and fixie bike, and it seems like every town with more than 50 people has a cafe where you can get an excellent coffee, you’ll struggle to find someone who’ll charge you $27 for an artisanal cruelty-free microbatch locally-produced free-range recycled soy-inked letterpressed 50%-butter-by-volume coffee and there are no emoji-only ride-sharing voice-interface social network startups.  Turn back now if this is a problem for you.
Can Trump happen here?  Never say never, the world is going fucking nuts.  However … New Zealand so far has traded with crazy nations without becoming crazy itself: we have a lot of Brits but most Kiwis think Brexit was nuts; similarly with Americans and DT.  Kiwis have a much warmer relationship to regulation than Americans. There’s been no NZ indigenous genocide (unlike USA and Australia), and the worst social woes in NZ don’t register on the American scale.
Our racists and entitled old people have done little damage to the rest of us; both leading parties are center-right and center-left.  And our definition of “racist” is “I don’t think those blimmin Marries should be given any more money!” and “no more Chinese immigrants, they’re driving up house prices!” rather than KKK robes and skinheads beating the shit out of brown people on a regular basis.  To be clear: no skinheads or KKK robes in Hobbit-size.  We have a sad racist past, sad racists, and ongoing racial tension, but not on the scale of America.
NZ schools are pretty darn good.  We’re no Finland (as politicians constantly remind educators) but state schools are mostly very good.  Schools aren’t driven by yearly tests, and the NZ Curriculum is very flexible with plenty of room for schools to find their own identity (culture, technology, etc.). Schools are funded from central Government, not property taxes, and schools in poorer areas are given more money.  I’ve heard San Francisco residents complain that most state schools in the area are terrible—that is not the case in Auckland. We moved to NZ (wife is American, I’m a Kiwi who’d spent 10 years in Colorado) when our kids were 4 and 6 and the relaxed school environment, no gangs, no shootings … priceless.
We have proportional representation, so power is frequently split between parties.  We get to vote for MPs who represent our area, AND for a party.  The parties get MPs in proportion to the number of votes the party got — it’s not as complex as cricket, much fairer than your system, and you’ll get the hang of it.  The Green party is a contender here.  On the downside, we don’t (yet) have constitutional protections against the elected Parliament, so if NZ did elect a lot of arsehats then they could run amok.
We’re part of the Five Eyes network (with US, UK, Canada, and Australia), so Ed Snowden can’t move here either.  We have legal protections against wholesale surveillance of citizens, and distrust our spooks to play by the rules or politicians to make them watertight.  Like Americans, we all suspect that unless we’re using Tor and Signal our comms are fair game.  On the upside, NZ is small enough that you can easily meet your politicians and bureaucrats and give them a piece of your mind.
What else isn’t great?  We have higher child suicide and abuse against children rates than economically-comparable nations, and the government has done a shit job of taking care of the poor during the last decade’s housing boom in Auckland.  Because of that economic boom, NZers have invested more in property than in all the good stuff.  Our socialist healthcare system takes care of everyday things really well, but if you’re earning middle-class incomes then consider augmenting it with private insurance so you don’t have to join waiting lists should you need surgery (good news: the dominant provider is a coop so NZ health insurance costs are miniscule in comparison to American health insurance costs).  If your kid has very special needs (e.g., autism), the Government doesn’t fund enough assistance for their schooling to be awesome (and, obscenely, may not let you stay).  A surprising number of our rivers are full of animal shit and not swimmable (fancy that, in a dairying nation).  Why the hell in 2016 are we still building subdivisions without bike paths, and building roads without bike lanes?!  These are all issues that the NZ Left is familiar with and grumpy about.
In short, we’ve got our problems but they’re nothing in comparison to your problems!  You’d be welcome and loved here.  Ride out the Trumpocalypse with the sounds of native birds in the trees as you crack a cold craft beer and revel in your new home’s reasonable race relations, functioning political system, and complete absence of orange arsehats.
Of course, this is all just my opinion.  You should come and check it out for yourself!  See you soon!

30 October 2016

Nat Torkington

“Outcome is a function of process”

I was just catching up on Tim Kong’s excellent blog, when I read this great quote from Dan Carter:

“One thing we talk about over and over with this current All Blacks side is about never focusing on the outcome. We view the outcome as a function of following our processes. That might sound a little dry to some, but looking back at every major loss we’ve had over the years, they mostly started with us thinking too far ahead of the game.”

I like that quote a lot.  There’s a lot you can find in it:

  1. You can’t do success.  Instead, you can only run, pass, tackle, communicate … all of which can contribute to success.
  2. Even in a game with as many different plays, player matchups, imbalances, and opportunities as rugby, the winners are winners because they have a system that generates wins.
  3. The team’s playbook must necessarily be flexible, because it will be used in many different circumstances (and there’s an opponent who will exploit predictability).
  4. The team is still important.  You can’t give the All Blacks playbook to the Mahurangi B rugby team and expect them to win against the Lions.
  5. Your team still has to train, to be the best they can be and to lock in the playbook.
  6. So within general principles, you find what works and use that.

So too with engineering management.  Your job is to shape the processes that give you success.  They may be different in some ways from the processes that give others success.  Your processes won’t dictate every solution to your team.  The members of your team are still important, and they should still be learning and running.

But engineering management is not sport.  The tech environment changes constantly, and every day is game day.  Consequently, much more of the playbook related to solving specific problems on the field devolves to the team members themselves, and much more learning happens on the field.  But, as with sport, relentless running will exhaust your team so it’s wise to build rest days for learning and exploration into your team’s schedule if you want them able to play their best game the rest of the time.

Ok, I’m done.  I promise, no more sportsball metaphors.

 

25 October 2016

Nat Torkington

Startups and failure

(Wynyard Group, an NZ tech high-growth company [or, perhaps, not-so-high growth] just entered voluntary administration. On Twitter, a friend was adamant bad luck had nothing to do with it. Instead of a tweetstorm, here’s my response in a vintage retro format known as “a blog post”)

You can always look back at every failure and assign one or more causes, because SOMETHING always kills the startup.  And someone is always responsible for the fatal decisions. That’s “pilot error” for startups.

But hindsight is far easier than foresight. Everybody does the best they can with the info and skills they have.  Everyone operates with imperfect knowledge and incomplete control. Everyone. Investors, board, executive, and rank and file all act with imperfect information. They don’t have a choice. The pattern-matching we hate in VCs is just a reaction to the fog of the market. “Maybe past performance will predict future success?”

The hardest part of bringing something into existence is surviving that imperfect information.

In the case of Wynyard, there are almost certainly multiple proximal causes of failure. People at Wynyard undoubtedly said and did things that materially contributed to failure. That doesn’t mean they weren’t unlucky. Even if Wynyard exec repeatedly doubled-down on unsuccessful strategies, the company and investors were unlucky. Unlucky to have the team they had, unlucky not to notice, unlucky that their faith in people was misplaced.

Successes have recognised and recovered from their fuckups before they went broke. Failures didn’t. Do you have the right info, skills, team, board, partners, customers, market conditions to recover before you go broke? That’s luck.

To suggest there’s no luck in successful or unsuccessful startups is just silly.

22 October 2016

Tom Ryder

Custom commands

As users grow more familiar with the feature set available to them on UNIX-like operating systems, and grow more comfortable using the command line, they will find more often that they develop their own routines for solving problems using their preferred tools, often repeatedly solving the same problem in the same way. You can usually tell if you’ve entered this stage if one or more of the below applies:

  • You repeatedly search the web for the same long commands to copy-paste.
  • You type a particular long command so often it’s gone into muscle memory, and you type it without thinking.
  • You have a text file somewhere with a list of useful commands to solve some frequently recurring problem or task, and you copy-paste from it a lot.
  • You’re keeping large amounts of history so you can search back through commands you ran weeks or months ago with ^R, to find the last time an instance of a problem came up, and getting angry when you realize it’s fallen off the end of your history file.
  • You’ve found that you prefer to run a tool like ls(1) more often with a non-default flag than without it; -l is a common example.

You can definitely accomplish a lot of work quickly by shoving the output of some monolithic program through a terse one-liner to get the information you want, or by developing muscle memory for your chosen toolbox and oft-repeated commands, but if you want to apply more discipline and automation to managing these sorts of tasks, it may be useful for you to explore more rigorously defining your own commands for use during your shell sessions, or for automation purposes.

This is consistent with the original idea of the Unix shell as a programming environment; the tools provided by the base system are intentionally very general, not prescribing how they’re used, an approach which allows the user to build and customize their own command set as appropriate for their system’s needs, even on a per-user basis.

What this all means is that you need not treat the tools available to you as holy writ. To leverage the Unix philosophy’s real power, you should consider customizing and extending the command set in ways that are useful to you, refining them as you go, and sharing those extensions and tweaks if they may be useful to others. We’ll discuss here a few methods for implementing custom commands, and where and how to apply them.

Aliases

The first step users take toward customizing the behaviour of their shell tools is often to define shell aliases in their shell’s startup file, usually specifically for interactive sessions; for Bash, this is usually ~/.bashrc.

Some aliases are so common that they’re included as commented-out suggestions in the default ~/.bashrc file for new users. For example, on Debian systems, the following alias is defined by default if the dircolors(1) tool is available for coloring ls(1) output by filetype:

alias ls='ls --color=auto'

With this defined at startup, invoking ls, with or without other arguments, will expand to run ls --color=auto, including any given arguments on the end as well.

In the same block of that file, but commented out, are suggestions for other aliases to enable coloured output for GNU versions of the dir and grep tools:

#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'

#alias grep='grep --color=auto'
#alias fgrep='fgrep --color=auto'
#alias egrep='egrep --color=auto'

Further down still, there are some suggestions for different methods of invoking ls:

#alias ll='ls -l'
#alias la='ls -A'
#alias l='ls -CF'

Uncommenting these would make ll, la, and l work as commands during an interactive session, with the appropriate options added to the call.

You can check the aliases defined in your current shell session by typing alias with no arguments:

$ alias
alias ls='ls --color=auto'

Aliases are convenient ways to add options to commands, and are very common features of ~/.bashrc files shared on the web. They also work in POSIX-conforming shells besides Bash. However, for general use, they aren’t very sophisticated. For one thing, you can’t process arguments with them:

# An attempt to write an alias that searches for a given pattern in a fixed
# file; doesn't work because aliases don't expand parameters
alias grepvim='grep "$1" ~/.vimrc'

They also don’t work for defining new commands within scripts:

#!/bin/bash
alias ll='ls -l'
ll

When saved in a file as test, made executable, and run, this script fails:

./test: line 3: ll: command not found

So, once you understand how aliases work well enough to read them when other people define them in startup files, my suggestion is that there’s no point writing any yourself. Aside from some very niche evaluation tricks, they have no functional advantages over shell functions and scripts.

Functions

A more flexible method for defining custom commands for an interactive shell (or within a script) is to use a shell function. We could declare our ll function in a Bash startup file as a function instead of an alias like so:

# Shortcut to call ls(1) with the -l flag
ll() {
    command ls -l "$@"
}

Note the use of the command builtin here to specify that the ll function should invoke the program named ls, and not any function named ls. This is particularly important when writing a function wrapper around a command, to stop an infinite loop where the function calls itself indefinitely:

# Always add -q to invocations of gdb(1)
gdb() {
    command gdb -q "$@"
}

In both examples, note also the use of the "$@" expansion, to add to the final command line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell metacharacters in the arguments causing problems. This means that the ll command will work correctly if you were to pass it further options and/or one or more directories as arguments:

$ ll -a
$ ll ~/.config

Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should work in your shell of choice, including Bash, dash, Korn shell, and Zsh. They can also be used within scripts, allowing you to abstract away multiple instances of similar commands to improve the clarity of your script, in much the same way the basics of functions work in general-purpose programming languages.
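
For comparison, the grepvim idea that failed as an alias earlier works fine as a function, since the parameter is expanded when the function runs (a minimal sketch of the same hypothetical command):

# The earlier grepvim example as a function; "$1" is expanded at call
# time, so the argument actually reaches grep
grepvim() {
    grep -e "$1" ~/.vimrc
}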

Functions are a good and portable way to approach adding features to your interactive shell; written carefully, they even allow you to port features you might like from other shells into your shell of choice. I’m fond of taking commands I like from Korn shell or Zsh and implementing them in Bash or POSIX shell functions, such as Zsh’s vared or its two-argument cd features.
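
As a rough illustration of that sort of porting (a simplified sketch under my own assumptions, not the author's actual implementation), a Bash approximation of Zsh's two-argument cd might substitute the second argument for the first within the current directory path:

# Sketch of Zsh-style two-argument cd for Bash: "cd old new" swaps the
# first occurrence of "old" in $PWD for "new"; otherwise behave normally
cd() {
    if [ "$#" -eq 2 ]; then
        builtin cd -- "${PWD/$1/$2}"
    else
        builtin cd "$@"
    fi
}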

If you end up writing a lot of shell functions, you should consider putting them into separate configuration subfiles to keep your shell’s primary startup file from becoming unmanageably large.
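
One way to do that (a sketch; the ~/.bashrc.d directory name is just an arbitrary choice here) is to keep function definitions in files in a dedicated directory and source each one from the end of ~/.bashrc:

# Load any function definitions kept in ~/.bashrc.d/*.bash
if [ -d "$HOME"/.bashrc.d ]; then
    for file in "$HOME"/.bashrc.d/*.bash; do
        [ -e "$file" ] && . "$file"
    done
    unset file
fi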

Examples from the author

You can take a look at some of the shell functions I have defined here that are useful to me in general shell usage; a lot of these amount to implementing convenience features that I wish my shell had, especially for quick directory navigation, or adding options to commands.


Variables in shell functions

You can manipulate variables within shell functions, too:

# Print the filename of a path, stripping off its leading path and
# extension
fn() {
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

This works fine, but the catch is that after the function is done, the value for name will still be defined in the shell, and will overwrite whatever was in there previously:

$ printf '%s\n' "$name"
foobar
$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
Task_List

This may be desirable if you actually want the function to change some aspect of your current shell session, such as managing variables or changing the working directory. If you don’t want that, you will probably want to find some means of avoiding name collisions in your variables.

If your function is only for use with a shell that provides the local (Bash) or typeset (Ksh) features, you can declare the variable as local to the function to remove its global scope, to prevent this happening:

# Bash-like
fn() {
    local name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

# Ksh-like
# Note different syntax for first line
function fn {
    typeset name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

If you’re using a shell that lacks these features, or you want to aim for POSIX compatibility, things are a little trickier, since local function variables aren’t specified by the standard. One option is to use a subshell, so that the variables are only defined for the duration of the function:

# POSIX; note we're using plain parentheses rather than curly brackets, for
# a subshell
fn() (
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
)

# POSIX; alternative approach using command substitution:
fn() {
    printf '%s\n' "$(
        name=$1
        name=${name##*/}
        name=${name%.*}
        printf %s "$name"
    )"
}

This subshell method also allows you to change directory with cd within a function without changing the working directory of the user’s interactive shell, or to change shell options with set or Bash options with shopt only temporarily for the purposes of the function.
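
For example (a contrived sketch, with lsin as a hypothetical name), a function like this can change directory freely inside its subshell body without moving the calling shell:

# POSIX; list the contents of a directory from inside it -- the cd only
# affects the subshell, so the caller's working directory is untouched
lsin() (
    cd -- "$1" || exit
    ls
)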

Another method to deal with variables is to manipulate the positional parameters directly ($1, $2 … ) with set, since they are local to the function call too:

# POSIX; using positional parameters
fn() {
    set -- "${1##*/}"
    set -- "${1%.*}"
    printf '%s\n' "$1"
}

These methods work well, and can sometimes even be combined, but they’re awkward to write, and harder to read than the modern shell versions. If you only need your functions to work with your modern shell, I recommend just using local or typeset. The Bash Guide on Greg’s Wiki has a very thorough breakdown of functions in Bash, if you want to read about this and other aspects of functions in more detail.

Keeping functions for later

As you get comfortable with defining and using functions during an interactive session, you might define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance, just to solve a task in that moment.

As an example, I recently made an ad-hoc function called monit to run a set of commands for its hostname argument that together established different types of monitoring system checks, using an existing script called nmfs:

$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; }
$ for host in webhost{1..10} ; do
> monit "$host"
> done

After that task was done, I realized I was likely to use the monit command interactively again, so I decided to keep it. Shell functions only last as long as the current shell, so if you want to make them permanent, you need to store their definitions somewhere in your startup files. If you’re using Bash, and you’re content to just add things to the end of your ~/.bashrc file, you could just do something like this:

$ declare -f monit >> ~/.bashrc

That would append the existing definition of monit in parseable form to your ~/.bashrc file, and the monit function would then be loaded and available to you for future interactive sessions. Later on, I ended up converting monit into a shell script, as its use wasn’t limited to just an interactive shell.

If you want a more robust approach to keeping functions like this for Bash permanently, I wrote a tool called Bashkeep, which allows you to quickly store functions and variables defined in your current shell into separate and appropriately-named files, including viewing and managing the list of names conveniently:

$ keep monit
$ keep
monit
$ ls ~/.bashkeep.d
monit.bash
$ keep -d monit

Scripts

Shell functions are a great way to portably customize behaviour you want for your interactive shell, but if a task isn’t specific only to an interactive shell context, you should instead consider putting it into its own script whether written in shell or not, to be invoked somewhere from your PATH. This makes the script useable in contexts besides an interactive shell with your personal configuration loaded, for example from within another script, by another user, or by an X11 session called by something like dmenu.

Even if your set of commands is only a few lines long, if you need to call it often – especially with reference to other scripts and in varying contexts – making it into a generally-available shell script has many advantages.
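
For instance, the monit function from the previous section might be promoted to a standalone script along these lines (a sketch of the conversion mentioned earlier; nmfs is the author's existing site-specific script):

#!/bin/sh
# monit: set up Ping, HTTP, and SNMP monitoring checks for each hostname
# given as an argument, using the existing nmfs script
for host in "$@"; do
    nmfs "$host" Ping Y
    nmfs "$host" HTTP Y
    nmfs "$host" SNMP Y
done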

/usr/local/bin

Users making their own scripts often start by putting them in /usr/local/bin and making them executable with sudo chmod +x, since many Unix systems include this directory in the system PATH. If you want a script to be generally available to all users on a system, this is a reasonable approach. However, if the script is just something for your own personal use, or if you don’t have the permissions necessary to write to this system path, it may be preferable to have your own directory for logical binaries, including scripts.
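
That process amounts to a couple of commands (myscript is a placeholder name here):

# Install a script system-wide for all users; requires root privileges
sudo cp myscript /usr/local/bin/myscript
sudo chmod +x /usr/local/bin/myscript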

Private bindir

Unix-like users who do this seem to vary in where they choose to put their private logical binaries directory. I’ve seen each of the below used or recommended:

  • ~/bin
  • ~/.bin
  • ~/.local/bin
  • ~/Scripts

I personally favour ~/.local/bin, but you can put your scripts wherever they best fit into your HOME directory layout. You may want to choose something that fits in well with the XDG standard, or whatever existing standard or system your distribution chooses for filesystem layout in $HOME.

In order to make this work, you will want to customize your login shell startup to include the directory in your PATH environment variable. It’s better to put this into ~/.profile or whichever file your shell runs on login, so that it’s only run once. That should be all that’s necessary, as PATH is typically exported as an environment variable for all the shell’s child processes. A line like this at the end of one of those scripts works well to extend the system PATH for our login shell:

PATH=$HOME/.local/bin:$PATH

Note that we specifically put our new path at the front of the PATH variable’s value, so that it’s the first directory searched for programs. This allows you to implement or install your own versions of programs with the same name as those in the system; this is useful, for example, if you like to experiment with building software in $HOME.

If you’re using a systemd-based GNU/Linux, and particularly if you’re using a display manager like GDM rather than a TTY login and startx for your X11 environment, you may find it more robust to instead set this variable with the appropriate systemd configuration file. Another option you may prefer on systems using PAM is to set it with pam_env(8).

After logging in, we first verify the directory is in place in the PATH variable:

$ printf '%s\n' "$PATH"
/home/tom/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

We can test this is working correctly by placing a test script into the directory, including the #!/bin/sh shebang, and making it executable by the current user with chmod(1):

$ cat >~/.local/bin/test-private-bindir
#!/bin/sh
printf 'Working!\n'
^D
$ chmod u+x ~/.local/bin/test-private-bindir
$ test-private-bindir
Working!

Examples from the author

I publish the more generic scripts I keep in ~/.local/bin, which I keep up-to-date on my personal systems in version control using Git, along with my configuration files. Many of the scripts are very short, and are intended mostly as building blocks for other scripts in the same directory. A few examples:

  • gscr(1df): Run a set of commands on a Git repository to minimize its size.
  • fgscr(1df): Find all Git repositories in a directory tree and run gscr(1df) over them.
  • hurl(1df): Extract URLs from links in an HTML document.
  • maybe(1df): Exit with success or failure with a given probability.
  • rfcr(1df): Download and read a given Request for Comments document.
  • tot(1df): Add up a list of numbers.

For such scripts, I try to write them as much as possible to use tools specified by POSIX, so that there’s a decent chance of them working on whatever Unix-like system I need them to.

On systems I use or manage, I might specify commands to do things relevant specifically to that system, such as:

  • Filter out uninteresting lines in an Apache HTTPD logfile with awk.
  • Check whether mail has been delivered to system users in /var/mail.
  • Upgrade the Adobe Flash player in a private Firefox instance.

The tasks you need to solve both generally and specifically will almost certainly be different; this is where you can get creative with your automation and abstraction.

X windows scripts

An additional advantage worth mentioning of using scripts rather than shell functions where possible is that they can be called from environments besides shells, such as in X11 or by other scripts. You can combine this method with X11-based utilities such as dmenu(1), libnotify’s notify-send(1), or ImageMagick’s import(1) to implement custom interactive behaviour for your X windows session, without having to write your own X11-interfacing code.
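
As a small example of the sort of glue this enables (the bookmarks file path and format are my assumptions, not anything from the tools' documentation), a script like this could present a menu of URLs with dmenu(1) and open the chosen one:

#!/bin/sh
# Show a newline-separated list of URLs from a hypothetical bookmarks
# file in dmenu(1), then open the selection with xdg-open(1)
url=$(dmenu < "$HOME"/.bookmarks) || exit
exec xdg-open "$url"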

Other languages

Of course, you’re not limited to just shell scripts with this system; it might suit you to write a script completely in a language like awk(1), or even sed(1). If portability isn’t a concern for the particular script, you should use your favourite scripting language. Notably, don’t fall into the trap of implementing a script in shell for no reason …

#!/bin/sh
awk 'NF>2 && /foobar/ {print $1}' "$@"

… when you can instead write the whole script in the main language used, and save a fork(2) syscall and a layer of quoting:

#!/usr/bin/awk -f
NF>2 && /foobar/ {print $1}

Versioning and sharing

Finally, if you end up writing more than a couple of useful shell functions and scripts, you should consider versioning them with Git or a similar version control system. This also eases implementing your shell setup and scripts on other systems, and sharing them with others via publishing on GitHub. You might even go so far as to write a Makefile to install them, or manual pages for quick reference as documentation … if you’re just a little bit crazy …

24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who hit VoiceMail to be forwarded to the callee's cellphone, if allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and looks for a rule that matches the a extension. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is arrange for a rule per extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is for extensions that aren't allowed to be forwarded to a cellphone. If someone calling their VoiceMail hits * their call will be hung up and I get nasty log messages about no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this approach won't work for other people trying to do this, but given I couldn't find write-ups on how to do this, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

26 April 2016

Tim Penhey

face

It has been too long

Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta 6. Hoping that real soon now we'll step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo which I moved under the "github.com/juju" banner, and the errors package.

Recent work has had me look at the database agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

06 October 2015

Mark Foster

CLI tool for 'diffing' in a useful fashion: vimdiff

Had to scratch my head to find the right tool for the job today - something that I used regularly at SMX but haven't had much need to use since.

The tool was 'vimdiff'. I needed to compare two configuration files (retrieved from two different servers) to understand what differences existed. Whilst 'diff' alone would've done it, I find the output hard to follow. vimdiff worked wonders!
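
Usage is as simple as passing both files on the command line (the filenames here are just placeholders):

# Open both copies side by side with the differences highlighted
vimdiff server1-app.conf server2-app.conf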

The Google hit I used also has some other useful tools:

https://unix.stackexchange.com/questions/79135/is-there-a-condensed-side-by-side-diff-format.

For posterity.

Honourable mention for icdiff also.

24 September 2014

Robert Collins

face

what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that. “

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that make the guaranteed available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, where we’ll be depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then be suggesting:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly Sean has also pointed out that with large N we have N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.


08 September 2014

Glen Ogilvie

Google authenticator TFA for Android - Backup and OATH

I’ve been a fan of using multi/two factor authentication for anything that matters.

Thankfully, many sites these days are beginning to support using MFA, and many of them have standardized on OATH.
Google Authenticator is one such OATH client app, implementing TOTP (time-based one-time password).

OATH is a reasonably good method of providing MFA, because it’s easy for the user to set up, pretty secure, and open, both for the client and the server.
You can read all about how it works in RFC 6238, or wikipedia.
So, in a nutshell, we now have a method by which a client can generate a key and a server can authenticate that key.

Google Authenticator, being the client I use, is great, as it supports adding the shared key by simply reading a QR code.
But what if I lose my phone, or want to use a second device, or my computer? MFA, of course, pretty much locks you out if you
lose your way to generate your TOTP.

Well, Google provides you with a number of static keys you can use, but that’s not good enough for me.

So, I thought I’d see if I could backup google authenticator, and read the shared key from it. The answer to this is yes.

Here are the technical details:
Back up Google Authenticator using Titanium Backup. This will generate three files on your SD card.
The file of interest is:
sdcard/TitaniumBackup/com.google.android.apps.authenticator2-DATE-TIME.tar.gz

In this tar.gz, you will find:
data/data/com.google.android.apps.authenticator2/./databases/databases

This is an SQLite3 database that contains each account you have added to Google Authenticator.
So, after opening it with sqlite3, i.e.:

tar -zxvf sdcard/TitaniumBackup/com.google.android.apps.authenticator2-DATE-TIME.tar.gz data/data/com.google.android.apps.authenticator2/./databases/databases
sqlite3 data/data/com.google.android.apps.authenticator2/./databases/databases
sqlite> select * from accounts;

to get a list of your keys.
Each key is base32 encoded.

So, to decode your key, you use:

$ python
>>> import base64
>>> base64.b16encode(base64.b32decode('THEKEYFROMTHESELECT', True))

This will output the key in base16, which is the format that oathtool expects.

You can then generate the token from your Linux computer.
Ensure you have the package: oath-toolkit

Then

$ oathtool --totp BASE16KEY

will generate the same one-time code as Google Authenticator, provided the time is correct on your Linux system.
Note: Make sure you clear your bash history if you don’t want your MFA key in your history. And of course,
only store it on encrypted storage, and make sure your SD card is secured or erased in some way.
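
As an aside, if your build of oathtool supports the -b/--base32 option (an assumption worth checking with oathtool --help), you can skip the manual base16 conversion and pass the base32 key straight from the database:

# Generate the TOTP directly from the base32 key taken from the accounts
# table; replace THEKEYFROMTHESELECT with the value from the select above
oathtool --totp --base32 THEKEYFROMTHESELECT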

29 August 2014

Robert Collins

face

Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive – particularly in test suites with tens of thousands of tests. Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is letting a user debug things through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so a simple inotify loop and we could avoid new-process (and more importantly complete-enumeration) *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (lets say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so lets stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be stdin – a command providing a packet of stdin, used for interacting with debuggers

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


25 August 2014

The Open Source School

Vocal | Open Source Podcast Manager

"Vocal is a podcast manager that aims to do what similar, and well intentioned, apps fail to: do one thing and do it well.

While it’s not the first to try and serve the needs of podcast users, its developer, Nathan Dyer, hopes to avoid the ‘clunky, bloated [and] unnecessarily complicated’ flaws of current options."

13 August 2014

The Open Source School

LibreOffice 4.3 upgrades spreadsheets, brings 3D models to presentations | Ars Technica

"LibreOffice's latest release provides easier ways of working with spreadsheets and the ability to insert 3D models into presentations, along with dozens of other changes.

LibreOffice was created as a fork from OpenOffice in September 2010 because of concerns over Oracle's management of the open source project. LibreOffice has now had eight major releases and is powered by "thousands of volunteers and hundreds of developers," the Document Foundation, which was formed to oversee its development, said in an announcement today. (OpenOffice survived the Oracle turmoil by being transferred to the Apache Software Foundation and continues to be updated.)

In LibreOffice 4.3, spreadsheet program Calc "now allows the performing of several tasks more intuitively, thanks to the smarter highlighting of formulas in cells, the display of the number of selected rows and columns in the status bar, the ability to start editing a cell with the content of the cell above it, and being able to fully select text conversion models by the user," the Document Foundation said.

For LibreOffice Impress, the presentation application, users can now insert 3D models in the glTF format. "To use this feature, go to Insert ▸ Object ▸ 3D Model," the LibreOffice 4.3 release notes say. So far, this feature is available for Windows and Linux but not OS X."

05 May 2014

Robert Collins

face

Distributed bugtracking – quick thoughts

Just saw http://sny.no/2014/04/dbts and I feel compelled to note that distributed bug trackers are not new – the earliest I personally encountered was Aaron Bentley’s Bugs Everywhere – coming up on its 10th birthday. BE meets many of the criteria in the dbts post I read earlier today, but it hasn’t taken over the world – and I think this is in large part due to the propagation nature of bugs being very different to code – different solutions are needed.

XXXX: With distributed code versioning we often see people going to some effort to avoid conflicts – semantic conflicts are common, and representation conflicts extremely common.

Take for example https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/805661. Here we can look at the nature of the content:

  1. Concurrent cannot-conflict content – e.g. the discussion about the bug. In general everyone should have this in their local bug database as soon as possible, and anyone can write to it.
  2. Observations of fact – e.g. ‘the code change that should fix the bug has landed in Ubuntu’ or ‘Commit C should fix the bug’.
  3. Reports of symptoms – e.g. ‘Foo does not work for me in Ubuntu with package versions X, Y and Z’.
  4. Collaboratively edited metadata – tags, title, description, and arguably even the fields like package, open/closed, importance.

Note that only one of these things – the commit to fix the bug – happens in the same code tree as the code; and that the commit fixing it may be delayed by many things before the fix is available to users. Also note that conceptually conflicts can happen in any of those fields except 1).

Anyhow – my humble suggestion for tackling the conflicts angle is to treat all changes to a bug as events in a timeline – e.g. adding a tag ‘foo’ is an event to add ‘foo’, rather than an event setting the tags list to ‘bar,foo’ – then multiple editors adding ‘foo’ do not conflict (or need special handling). Collaboratively edited fields would likely be unsatisfying with this approach though – last-writer-wins isn’t a great story. OTOH the number of people that edit the collaborative fields on any given bug tends to be quite low – so one could defer that to manual fixups.

Further, as a developer wanting local access to my bug database, syncing all of these things is appealing – but if I’m dealing with a million-bug bug database, I may actually need the ability to filter what I sync or do not sync with some care. Even if I want everything, query performance on such a database is crucial for usability (something git demonstrated convincingly in the VCS space).

Lastly, I don’t think distributed bug tracking is needed – it doesn’t solve a deeply burning use case – offline access would be a 90% solution for most people. What does need rethinking is the hugely manual process most bug systems use today. Making tools like whoopsie-daisy widely available is much more interesting (and that may require distributed underpinnings to work well and securely). Automatic collation of distinct reports and surfacing the most commonly experienced faults to developers offers a path to evidence based assessment of quality – something I think we badly need.


04 April 2014

The Open Source School

First Ubuntu Tablets To Launch This Autumn

For about three years, sensing the move from the desktop to the mobile device, Canonical have been making Ubuntu more tablet-friendly. Now we hear that tablets (and smartphones) are due to ship with Ubuntu OEM soon.
I've been a fan of Ubuntu for a long time, and if you'd like to try it out to see how easy-to-use free software operating systems are, you can do it online here: http://www.ubuntu.com/tour/en/ (Tip: full-screen your browser and it's like you're running it natively on your machine.)

21 February 2014

Glen Ogilvie

Fritz!Box Telephony

 

In New Zealand,  VDSL internet is available from a number of providers.  Snap! is the provider I use, and they offer some very cool Fritz!Box VDSL modems.   I have the Fritz!Box 7390, and the other one on offer is the cheaper Fritz!Box 7360.

These routers are far more than just a basic VDSL router, offering a range of awesome features, including NAS, IPv6 (standard with Snap), good WiFi, DECT, and VPNs. This blog post is about my experiences with setting up the Telephony component.

The first thing to note is that Snap! does not ship the right cable for connecting the Fritz!Box to a standard telephone line. This is a special Y cable, and they will ship it to you if you ask them. Note, however, that you’ll still need to make up an adapter, as this cable uses RJ45 telephone plugs, not NZ BT plugs. My setup is that I have a monitored alarm, so I need a telephone line; therefore, this is for VDSL + standard phone line, rather than VDSL + VOIP phone.

Hardware:

Step 1:

Get the Y cable from Snap!. It will have one plug at the end that connects to the Fritz!Box and a split end, one for your VDSL plug and one for the telephone line. They will charge you $5 postage.
Here is the description of it; it’s the grey cable, first one on the left.

Here is the email I sent to Snap!. It took a while, but Michael Wadman did agree to ship it to me; case number #BOA-920-53402 if you need a reference case.

Hi,

I purchased a Fritz!Box 7390 from you, along with my internet subscription. It looks to me, from looking at their website, that it should come with some
cables that I don’t have. See:

http://www.fritzbox.eu/en/products/FRITZBox_Fon_WLAN_7390/index.php?tab=5

The Fritz!Box 7390 you supplied came with two Ethernet cables and power, but no other cables.
In the above link, you can see it should come with:

  • 4.25 m-long ADSL/fixed-line network connection cable (Y cable)
  • 1.5 m-long LAN cable
  • RJ45-RJ11 adapter for connection to the ADSL line
  • RJ45-RJ11 adapter for connection to the analog telephone line

I would like to plug my analog PSTN phone line into the Fritz!Box so I can use its telephony features with my fixed line. To do this, I need the cable that goes between the ISDN/analog port and the phone jack on the wall.
I am aware that the Y cable does not actually fit NZ phone plugs.
This post discusses the matter on geekzone.

http://www.geekzone.co.nz/forums.asp?forumid=90&topicid=115999

Ref:

http://service.avm.de/support/en/skb/FRITZ-Box-7390-int/56:Pin-assignment-of-cables-adapters-and-ports-for-telephony-devices

Step 2:

Build an adapter for the telephone end.  This is easier than you might think.  What you need:

  • RJ45 crimping tool
  • RJ45 plug
  • Any old phone cable with a BT11 plug on it
  • A CAT5 RJ45 Network Cable Extender Plug Coupler Joiner, you can get this off ebay for < $2
  • A multi-meter

Then, simply cut off the end of the old phone cable that plugs into the phone. It will probably be a 4-wire cable. Now, use your multimeter to identify which two wires are connected to the outer two BT-11 pins. Then plug it into your phone jack and check you get around 50 volts DC across those two pins. Once you’ve got the two wires that have power, cut off the other two wires and crimp the two powered wires to the two outside pins on the RJ45 plug. See the above diagram.

Then, plug the RJ45 plug you crimped into the joiner you got off eBay, and label it, as you don’t want to accidentally plug normal networking equipment into it.

Step 3:

  1. Connect the Y cable to your Fritz!Box and to your VDSL.  Check it works.
  2. Connect the Y cable to your analog line, using the adapter you made. Your Fritz!Box is now able to answer and make calls with your landline.

Now you’re ready to configure Telephony on the FRITZ!Box.

Manual: http://www.avm.de/en/service/manuals/FRITZBox/Manual_FRITZBox_Fon_WLAN_7390.pdf

Configuration:

Start with Telephone numbers. Here you should configure your fixed line, plus any SIP providers you want; I have added ippi and comfytel. Also note that Snap! provides a SIP service, if you want one.

Next, connect some phones to your Fritz!Box. You can plug your standard analogue phones into the FON1 and FON2 plugs. You can connect your ISDN phones, you can connect any DECT wireless phones to the Fritz!Box as their base station (your luck may vary), and you can connect your mobile phones to it using the FRITZ!Box Fon app. These will appear under telephone devices. You should now be able to make a call.

Each device can have a default outgoing telephone number connected to it, and you can pre-select which phone number to make outgoing calls with by dialing the ** prefix code.

Things you can do

  1. Use a number of devices as phones in your home, including normal phones and your mobiles
  2. Answer calls from skype, sip, landlines and internal numbers
  3. Make free calls to skype and global SIP iNums.
  4. Make low cost calls to overseas landlines using a sip provider
  5. Make calls to local numbers via your normal phone line
  6. Answer machine
  7. Click to dial
  8. Telephone book, including calling internal numbers
  9. Wakeup calls
  10. Send and receive faxes
  11. Block calls
  12. Call Diversion / Call through
  13. Automatically select different providers when dialing different numbers
  14. Set a device to use a specific number, or only to ring for calls for a specific number.
  15. Set do not disturb on a device, based on time of day.
  16. Connect your wireless DECT phones directly to the FRITZ!Box as a base station
  17. See a call log of the calls you’ve made

Sip providers

ippi – http://www.ippi.fr – They allow free outgoing and incoming skype calls, plus a free global sip number

  • SIP: glenogilvie@ippi.fr
  • SIP numeric: 889507473
  • iNum: +883510012028558

For outgoing skype calls with ippi, if your phone cannot dial email addresses, you need to add the skype contacts to the phone book on the ippi.com website under your account, and assign a short code, which you can then use from your phone.

Comfytel – http://www.comfytel.com/ – They provide cheap calling, but you have to pay them manually with paypal, and currently their skype gateway does not work.

iNum: 883510001220681
Internal number: 99982009943

 

Since screen shots are much nicer than words, below is a collection from my config for your reference.

Call Log:

Answer Phone:

Telephone book:

Alarm:

Fax:

Call Blocking:

Call through

Dialing rules

Telephone Devices

Dect configuration (I don’t have any compatible phones)

Telephone Numbers (fixed and SIP)

ComfyTel configuration:

IPPI configuration:

IPPI phone book, for calling skype numbers:

Fixed line configuration:

Line Settings:

30 January 2014

Colin Jackson

So long and thanks for all the fish

I stopped updating this blog some time ago, mainly because my work started taking me overseas so much I couldn’t keep up with it.

But now I am blogging again, perhaps a bit less frequently than I used to, over at my new website Jackson Strategy. I’ll be covering technology and how it changes our lives, and what we should be doing about that. Please drop by!

28 December 2013

Tim Penhey

face

2013 in review

2013 started with what felt like a failure, but in the end, I believe that the
best decision was made.  During 2011 and 2012 I worked on and then managed
the Unity desktop team.  This was a C++ project that brought me back to my
hard-core hacker side after four and a half years on Launchpad.  The Unity
desktop was a C++ project using glib, nux, and Compiz. After bringing Unity to
be the default desktop in 12.04 and ushering in the stability and performance
improvements, the decision was made to not use it as the way to bring the
Ubuntu convergence story forward. At the time I was very close to the Unity 7
codebase and I had an enthusiastic capable team working on it. The decision
was to move forwards with a QML based user interface.  I can see now that this
was the correct decision, and in fact I could see it back in January, but that
didn't make it any easier to swallow.

I felt that I was at a juncture and I had to move on.  Either I stayed with
Canonical and took another position or I found something else to do. I do like
the vision that Mark has for Ubuntu and the convergence story and I wanted to
hang around for it even if I wasn't going to actively work on the story itself.  For a while I was interested in learning a new programming language, and Go was considered the new hotness, so I looked for a position working on Juju. I was lucky to be able to join the juju-core team.

After a two week break in January to go to a family wedding, I came back to
work and started reading around Go. I started with the language specification,
then experimented with the Go playground, and then moved on to the Juju source.

Go was a very interesting language to move to from C++ and Python. No
inheritance, no exceptions, no generics. I found this quite a change.  I even
blogged about some of these frustrations.

As much as I love the C++ language, it is a huge and complex language. One
where you are extremely lucky if you are working with other really competent
developers. C++ is the sort of language where you have a huge amount of power and control, but you pay other costs for that power and control. Most C++ code is pretty terrible.

Go, as a contrast, is a much smaller, more compact, language. You can keep the
entire language specification in your head relatively easily. Some of this is
due to specific decisions to keep the language tight and small, and others I'm
sure are due to the language being young and immature. I still hope for
generics of some form to make it into the language because I feel that they
are a core building block that is missing.

I cut my teeth in Juju on small things. Refactoring here, tweaking
there. Moving on to more substantial changes.  The biggest bit that leaps to
mind is working with Ian to bring LXC containers and the local provider to the
Go version of Juju.  Other smaller things were adding much more infrastructure
around the help mechanism, adding plugin support, refactoring the provisioner,
extending the logging, and recently, adding KVM container support.

Now for the obligatory 2014 predictions...

I will continue working on the core Juju product bringing new and wonderful
features that will only be beneficial to that very small percentage of
developers in the world who actually deal with cloud deployments.

Juju will gain more industry support outside just Canonical, and will be seen
as the easiest way to OpenStack clouds.

I will become more proficient in Go, but will most likely still be complaining
about the lack of generics at the end of 2014.

Ubuntu phone will ship.  I'm guessing on more than just one device and with
more than one carrier. Now I do have to say that these are just personal
predictions and I have no more insight into the Ubuntu phone process than
anyone outside Canonical.

The tablet form-factor will become more mature and all the core applications,
both those developed by Canonical and all the community contributed core
applications will support the form-factor switching on the fly.

The Unity 8 desktop that will be based on the same codebase as the phone and
tablet will be available on the desktop, and will become the way that people
work with the new very high resolution laptops.

30 October 2013

Tim Penhey

face

loggo - hierarchical loggers for Go

Some readers of this blog will just think of me as that guy that complains about the Go language a lot.  I complain because I care.

I am working on the Juju project.  Juju is all about orchestration of cloud services.  Getting workloads running on clouds, and making sure they communicate with other workloads that they need to communicate with. Juju currently works with Amazon EC2, HP Cloud, Microsoft Azure, local LXC containers for testing, and Ubuntu's MAAS. More cloud providers are in development. Juju is also written in Go, so that was my entry point to the language.

My background is from Python and C++.  I have written several logging libraries in the past, but always in C++ and with reasonably specific performance characteristics.  One thing I really felt was missing with the standard library in Go was a good logging library. Features that I felt were pretty necessary were:
  • A hierarchy of loggers
  • Able to specify different logging levels for different loggers
  • Loggers inherited the level of their parent if not explicitly set
  • Multiple writers could be attached
  • Defaults should "just work" for most cases
  • Logging levels should be configurable easily
  • The user shouldn't have to care about synchronization
Initially this project was hosted on Launchpad.  I am trialing moving the trunk of this branch to github.  I have been quite isolated from the git world for some time, and this is my first foray in git, and specifically git and go.  If I have done something wrong, please let me know.

Basics

There is an example directory which demonstrates using loggo (albeit relatively trivially).

import "github.com/howbazaar/loggo"
...
logger = loggo.GetLogger("project.area")
logger.Debugf("This is debug output.")
logger.Warningf("Some error: %v", err)

In juju, we normally create one logger for the module, and the dotted name normally reflects the module. This logger is then used by the other files in the module. Personally I would have preferred file-local variables, but Go doesn't support that (nothing can be private to a single file), so as a convention we use the variable name "logger".

Specifying logging levels

There are two main ways to set the logging levels. The first is explicitly for a particular logger:

logger.SetLogLevel(loggo.DEBUG)

or chained calls:

loggo.GetLogger("some.logger").SetLogLevel(loggo.TRACE)

Alternatively you can use a function to specify levels based on a string.

loggo.ConfigureLoggers("<root>=INFO; project=DEBUG; project.some.area=TRACE")

The ConfigureLoggers function parses the string and sets the logging levels for the loggers specified.  This is an additive function.  To reset logging back to the default (which happens to be "<root>=WARNING"), you call

loggo.ResetLoggers()

You can see a summary of the current logging levels with

loggo.LoggerInfo()

Adding Writers

A writer is defined using an interface. The default configuration is to have a "default" writer that writes to Stderr using the default formatter.  Additional writers can be added using loggo.RegisterWriter and reset using loggo.ResetWriters. Named writers can be removed using loggo.RemoveWriter.  Writers are registered with a severity level. Log messages below that severity level are not written to that writer.

More to do

I want to add a syslog writer, but the default syslog package for Go doesn't give the formatting I want. It has been suggested to me to just take a copy of the library implementation and make it work how I want.

I also want to add some filtering ability to the writers, both inclusive and exclusive, so you could say when registering a writer, "only show me messages from these modules", or "don't show messages from these other modules".

This library has been used in Juju for some time now, and fits with most our needs.  For now at least.


27 June 2013

Malcolm Locke

face

First Adventures in Spectroscopy

One of my eventual amateur astronomy goals is to venture into the colourful world of spectroscopy. Today I took my first steps on that path.

We have a small cut glass pendant hanging in our window which casts pretty rainbows around the living room in the evening sun. A while ago I took a photo of one of these with the intention of one day seeing what data could be extracted from it.

Spectrum

Not much to look at, but I decided tonight to see how much information can be extracted from this humble image.

First step was to massage the file into FITS format, the standard for astronomical data, with the hope that I could use some standard tools on the resulting file.

I cropped the rainbow, converted it to grayscale, and saved it as FITS in Gimp. I'd hoped to use my usual Swiss Army knife for FITS files, SAOImage DS9, to extract a graph from the resulting file, but no dice. Instead I used the following Python script to plot a graph of the average brightness values of the columns of pixels across the spectrum. Averaging the vertical columns of pixels helps cancel out the effects of noise in the source image.

# spectraplot.py
import pyfits
import matplotlib.pyplot as plt
import sys

hdulist = pyfits.open(sys.argv[1])
# Grab the mean value of each column in the image
mean_data = hdulist[0].data.mean(axis=0)

plt.plot(mean_data)
plt.show()

Here's the resulting graph, with the cropped colour and grayscale spectra added for context.

Solar black body curve

If you weren't asleep during high school physics class, you may recognise the tell-tale shape of a black body radiation curve.

From the image above we can see that the Sun's peak intensity is actually in green light, not yellow as we perceive it. But better still we can (roughly!) measure the surface temperature of the Sun using this curve.

Here's a graph of colour vs wavelength.

I estimate the wavelength of the green in the peak of the graph above to be about 510nm. With this figure the Sun's surface temperature can be calculated using Wien's displacement law.

λmax * T = b

This simple equation says that the peak wavelength (λmax) of the black body curve times the temperature (T) is a constant, b, called Wien's displacement constant. We can rearrange the equation ...

T = b / λmax

... and then plug in the value of b to get the surface temperature of the Sun in kelvin.

T = 2,897,768 / 510 = 5681 K

That's close enough to the actual value of 5780 K for me!
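
For anyone who wants to script that last step, the same arithmetic is a few lines of Python (the 510nm peak is still the by-eye estimate from the plot above, and b is Wien's displacement constant expressed in nm·K):

# wien.py
# Estimate the Sun's surface temperature from the peak wavelength
# using Wien's displacement law: T = b / lambda_max
b = 2.897768e6          # Wien's displacement constant in nm·K
peak_wavelength = 510   # nm, estimated by eye from the spectrum plot

temperature = b / peak_wavelength
print("Estimated solar surface temperature: %.0f K" % temperature)   # ~5682 K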

I'm pretty encouraged by this result with nothing more than a piece of glass and a basic digital camera. A huge amount of what we know about the cosmos comes from examining spectra, but it's a field that doesn't get much love from amateur astronomers. Stay tuned for some hopefully more refined experiments in future.

08 May 2013

Glen Ogilvie

time to blog again.

I have been a bit slack with my blogging and not posted much for a long time. This has been due to both working on lots of things, buying a house and a busy lifestyle.

I do however have a few things to blog about. So, in the coming days I will blog about auto_inst OS testing, corporate patching, android tools, AuckLUG, Raspberry Pi, rdiff-backup, multiseat Linux, the local riverside community centre, getting 10 laptops (which will run Mageia), my cat Gorse, GPS tracking, house automation, Amazon AMIs and maybe some other stuff.

03 March 2013

Nevyn H

Why are the USB Ports Locked?

I was at a school recently where the USB ports of the computers are locked down (on the students' log-in). This struck me as counter to learning. Your students' ability to share is greatly diminished. So what's the justification? Security. They might get viruses.

This to me just means that MS Windows is not fit for the purpose of education. If education needs to suffer to keep a computer system secure in a school, then it's a no-brainer. Choose something that doesn't require you to sacrifice education - especially if that's your primary purpose.

I know MS Windows is good for some things. Microsoft Office is a top-notch application, for example. You can do things in MS Office that you can't in other office suites. But that's only important if those features of MS Office are used.

Is the ability to create a pivot chart all that important in education? I would argue that if it's only in MS Office, then you've probably got more important things to be learning or teaching. Database theory i.e. relational databases - is probably more important anyway.

I think Linux has something to offer here. A lot of people would probably be surprised by the amazing things that can be done using Open Source software - and for the most part, without software licensing costs. Take GIMP for example. Although not Photoshop, it is incredibly capable. You can do all sorts of things in it that teach a whole range of interesting concepts such as layers, filters, superimposing etc. There's a social lesson in there too - just because you see a photograph doesn't necessarily mean that something is real. Removing a few pimples, stretching out a person's neck etc. isn't all that hard.

So what's stopping Linux in schools?

Firstly, the perception that Windows is free for schools. Actually, quite a lot of money goes towards the MS schools agreement - money that could be used elsewhere.

Secondly, who offers schools help with Linux? I was replaced in my last job by a Windows person - not a Linux person. There seem to be very few companies offering Linux desktop support. Given that school I.T. support is tied up in just a handful of I.T. companies, who are all willing to perpetuate the "Windows is free for schools" mantra, where do schools go for Linux support?

Thirdly, are there any real efforts to make Linux suitable for schools? Some might argue that edubuntu is going in this direction but... well, look at their goals:
Our aim is to put together a system that contains all the best free software available in education and make it easy to install and maintain.
So the first part talks about free educational software. The second part is pretty much what Ubuntu provides anyway. So basically, it's a copy of Ubuntu with a few education applications added. And while I hate to criticise Open Source software (although I do fairly often), a lot of it is made by geeks for geeks. This is incredibly evident in the educational software sector where educational games often lack lasting engagement.

When looking at what schools are already using on their desktops, it's not unusual to see a set-up identical to what a secretary in a small business might have: the operating system, a browser and MS Office. All of which (with LibreOffice in place of MS Office) are installed in most desktop Linux distributions anyway.

What does Windows offer? A whole lot of management. This isn't a road I would like Linux to go down, as I think it just stifles education anyway. Instead, I think a school set-up should be concerned with keeping kids safe - an I.T. system should look after itself without limiting education.

So what does this look like to me?

Kids as the admins. They should be able to install whatever applications they need to accomplish a task.

A fall-back position - currently there are PXE boot options. I think it needs to be more local than that - a rescue partition. Perhaps PXE for the initial load OR USB sticks. A complete restore should take less than 10 minutes.

Some small amount of management - applications that can or cannot be installed for example. Internet security done on a network level, not an individual machine level.

If you find yourself justifying something on the desktop for security, then you have to seriously ask yourself: what is it that you're protecting? It should no longer be enough to just play the security card by default. There's a cost to security. This needs to be understood.

Cloud or server-based storage. The individual machines should not hold files vital to a child's work. This makes backups a whole lot easier - even better if you can outsource that to someone else. Of course, there's the whole "offshore" issue, i.e. government agencies do not store information offshore.

I guess this post is really just a great big justification for Tartare Source. It's not the only use. I think a similar set-up could be incredibly beneficial to a business, for example: fewer overheads in terms of licence tracking and security concerns, freedom for people to work in the way that they feel most comfortable etc.

So I guess the question is still, where would you find the support? With the user in mind and "best practise" considered inappropriate in civilised company...

27 February 2013

Robin Paulson

The Digital Commons: Escape From Capital?

So, after a year of reading, study, analysis and writing, my thesis is complete. It’s on the digital commons, of course; this particular piece is an analysis to determine whether the digital commons represents an escape from, or a continuation of, capitalism. The full text is behind the link below.

The Digital Commons: Escape From Capital?

In the conclusion I suggested various changes which could be made to avert the encroachment of capitalist modes, as such I will be releasing various pieces of software and other artefacts over the coming months.

For those who are impatient, here’s the abstract, the conclusion is further down:

In this thesis I examine the suggestion that the digital commons represents a form of social organisation that operates outside any capitalist relationships. I do this by carrying out an analysis of the community and methods of three projects, namely Linux, a piece of software; Wikipedia, an encyclopedia; and Open Street Map, a geographic database.

Each of these projects, similarly to the rest of the digital commons, does not require any money or other commodities in return for accessing them, thus denying exchange as the dominant method of distributing resources, instead offering a more social way of conducting relations. They further allow the participation of anyone who desires to take part, in relatively unhindered ways. This is in contrast to the capitalist model of requiring participants demonstrate their value, and take part in ways demanded by capital.

The digital commons thus appear to resist the capitalist mode of production. My analysis uses concepts from Marx’s Capital Volume 1, and the Economic and Philosophic Manuscripts of 1844, with further support from Hardt and Negri’s Empire trilogy. It analyses five concepts: those of class, commodities, alienation, commodity fetishism and surplus-value.

I conclude by demonstrating that the digital commons mostly operates outside capitalist exchange relations, although there are areas where indicators of this have begun to encroach. I offer a series of suggestions to remedy this situation.

Here’s the conclusion:

This thesis has explored the relationship between the digital commons and aspects of the capitalist mode of production, taking three iconic projects: the Linux operating system kernel, the Wikipedia encyclopedia and the Open Street Map geographical database as case studies. As a result of these analyses, it appears digital commons represent a partial escape from the domination of capital.


As the artefacts assembled by our three case studies can be accessed by almost anybody who desires, there appear to be few class barriers in place. At the centre of this is the maxim “information wants to be free” 1 underpinning the digital commons, which results in assistance and education being widely disseminated rather than hoarded. However, there are important resources whose access is determined by a small group in each project, rather than by a wider set of commoners. This prevents all commoners who take part in the projects from attaining their full potential, favouring one group and thus one set of values over others. Despite the highly ideological suggestion that anyone can fork a project at any time and do with it as they wish, which would suggest a lack of class barriers, there is significant inertia which makes this difficult to achieve. It should be stressed however, that the exploitation and domination existing within the three case studies is relatively minor when compared to typical capitalist class relations. Those who contribute are a highly educated elite segment of society, with high levels of self-motivation and confidence, which serves to temper what the project leaders and administrators can do.


The artefacts assembled cannot be exchanged as commodities, due to the license under which they are released, which demands that the underlying information, be it the source code, knowledge or geographical data always be available to anyone who comes into contact with the artefact, that it remain in the commons in perpetuity.


This lack of commoditisation of the artefacts similarly resists the alienation of those who assemble them. The thing made by workers can be freely used by them, they make significant decisions around how it is assembled, and due to the collaborative nature essential to the process of assembly, constructive, positive, valuable relationships are built with collaborators, both within the company and without. This reinforces Stallman’s suggestion that free software, and thus the digital commons is a more social way of being 2.


Further, the method through which the artefacts are assembled reduces the likelihood of fetishisation. The work is necessarily communal, and involves communication and association between those commoners who make and those who use. This assists the collaboration essential for such high quality artefacts, and simultaneously invites a richer relationship between those commoners who take part. However, as has been shown, recent changes have shown there are situations where the social nature of the artefacts is being partially obscured, in favour of speed, convenience and quality, thus demonstrating a possible fetishisation.


The extraction of surplus-value is, however, present. The surplus extracted is not money, but in the form of symbolic capital. This recognition from others can be exchanged for other forms of capital, enabling the leaders of the three projects investigated here to gain high paying, intellectually fulfilling jobs, and to spread their political beliefs. While it appears there is thus exploitation of the commoners who contribute to these projects, it is firstly mild, and secondly does not result in a huge imbalance of wealth and opportunity, although this should not be seen as an apology for the behaviour which goes on. Whether in future this will change, and the wealth extracted will enable the emergence of a super-rich as seen in the likes of Bill Gates, the Koch brothers and Larry Ellison remains to be seen, but it appears unlikely.


There are however ways in which these problems could be overcome. At present, the projects are centred upon one website, and an infrastructure and values, all generally controlled by a small group who are often self-selected, or selected by some external group with their own agenda. This reflects a hierarchical set of relationships, which could possibly be addressed through further decentralisation of key resources. For examples of this, we can look at YaCy 3, a search engine released under a free software license. The software can be used in one of a number of ways, the most interesting of these is network mode, in which several computers federate their results together. Each node searches a different set of web sites, which can be customised, the results from each node are then pooled, thus when a commoner carries out a search, the terms are searched for in the databases of several computers, and the results aggregated. This model of decentralisation prevents one entity taking control over what are a large and significant set of resources, and thus decreases the possibility of exploitation, domination and the other attendant problems of minority control or ownership over the means of production.


Addressing the problem of capitalists continuing to extract surplus, requires a technically simple, but ideologically difficult, solution. There is a general belief within the projects discussed that any use of the artefacts is fine, so long as the license is complied with. Eric Raymond, author of the influential book on digital commons governance and other matters, The Cathedral and The Bazaar, and populariser of the term open source, is perhaps most vocal about this, stating that the copyleft tradition of Stallman’s GNU is overly restrictive of what people, by which he means businesses, can do, and that BSD-style, no copyleft licenses are the way forward 4. The majority of commoners taking part do not follow his explicit preference for no copyleft licenses, but nonetheless have no problem with business use of the artefacts, suggesting that wide spread use makes the tools better, and that sharing is inherently good. It appears they either do not have a problem with this, or perhaps more likely do not understand that this permissiveness allows for uses that they might not approve of. Should this change, a license switch to something preventing commercial use is one possibility.

1Roger Clarke, ‘Roger Clarke’s “Information Wants to Be Free …”’, Roger Clarke’s Web-Site, 2013, http://www.rogerclarke.com/II/IWtbF.html.

2Richard Stallman, Free Software Free Society: Selected Essays of Richard M. Stallman, ed. Joshua Gay, 2nd ed (Boston, MA: GNU Press, Free Software Foundation, 2010), 8.

3YaCy, ‘Home’, YaCy – The Peer to Peer Search Engine, 2013, http://yacy.net/.

4Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, ed. Tim O’Reilly, 2nd ed. (Sebastopol, California: O’Reilly, 2001), 68–69.

09 January 2013

Malcolm Locke


Kill All Tabs in Eclipse

As a long time Vim user, I have the following config in my ~/.vimrc to ensure I never ever enter an evil tab character into my source code.

set   shiftwidth=2
set   tabstop=2
set   smarttab
set   et

For my Android projects, I'm starting to use Eclipse, and unfortunately eternally banishing all tabs in Eclipse is not such an easy task. Here's where I'm at so far, YMMV and I'll update this as I find more. It seems the tab boss is difficult to kill in this app.

  • Under Window -> Preferences -> General -> Editors -> Text Editors ensure Insert spaces for tabs is checked.
  • Under Window -> Preferences -> Java -> Code Style -> Formatter create a new profile based off the default, and under the Indentation tab set Tab policy to Spaces only.

18 December 2012

Nevyn H

Open Source Battle Grounds

I often find myself at odds with a lot of FLOSS (Free/Libre/Open-Source Software) people due to my attitude on office suites.

To me, they're not the place to start introducing FLOSS. Those ideals about interoperability are lost when people then find they're having to adjust bullet points and the like when those same files end up on a different office suite. This is fiddly work (which I'm of the opinion should NEVER be a problem).

The matter is even more confused when there's already a de facto standard used in businesses everywhere.

The fact is, trying to replace MS Office with LibreOffice/OpenOffice (and formerly StarOffice) is setting yourself up for failure.

It's my belief that bringing FLOSS to a business should ALWAYS extend its functionality. Introducing GIMP to crop images, in place of packages too expensive for most businesses to contemplate, shows a cost benefit immediately. Building a database (a real database - which is NOT a spreadsheet or multi-sheet spreadsheet, more rubbish that I hear perpetrated) helps ensure data integrity and enables future expansion - a database can then be offered via a web front end, or used with other information to build a more complete information management system. These things add value, rather than asking people to sacrifice something - and whether it's as simple as user interface elements or more complex like sharing documents, sacrifices are losses. Introducing FLOSS should not cause a loss if you want to promote it.

I'm also not a fan of Google Docs. Sure, they're great for collaborative work - in fact, for this purpose, they just can't be beat. However, instead of the office suite being the limiting factor, the limiting factor has turned into the browser. Page breaks in a word processing document appear in different places depending on what browser you're using. There ALWAYS seem to be annoying nags if you're not using Google Chrome (OS, the Browser or Chromium Browser).

I'm a big fan of getting rid of office suites. I consider them to be HORRIBLY outdated technology.

Spreadsheets are great for small, quick, tabulated jobs where the presentation is more important than data integrity, BUT extending the range that spreadsheets can handle (previously you couldn't have more than 65,536 rows) has confused their purpose even further.

They should be replaced by databases. This then introduces the opportunity to make a system work to a business rather than a business trying to work around a software package.
    And no... I don't mean sqlite. Sqlite is probably good for stand-alone programs, but for the most part those databases that are vital to a business's everyday operations need to be shared by multiple people. A more server-centric database:
    1. Has locking features which make it scalable (whereas if one person has a spreadsheet open, then generally the whole spreadsheet is locked - collaborative features aside, although in one version of Office these caused all sorts of headaches). Having the capacity for an information system to scale brings with it an incredibly positive message - you're working with the business to help with its growth.
    2. Clears the way for expansion such as having it work with other data in a consistent way.
    3. Helps with data integrity by ensuring everyone is accessing the information in the same way (hopefully by web-based interfaces - even if not available on the Internet, designing interfaces for the web generally creates an OS/device-agnostic interface).
    The word processor could be a whole lot smarter. Rather than presenting you with 20,000 formatting controls (on an individual character basis), I'm of the opinion that you should instead be able to use styles - yes, the same concept as web pages. Lyx, which describes itself as a document processor, is close, except that it doesn't make it easy for you to define your own styles. Other word processors have a cursory nod in this direction but do incredibly badly at enforcing it (a friend of mine wrote up a document, sent it away for review, and then went through it again to fix up the structure of it - just as pointless and busy-work-like as fixing up bullet points). Currently, writing up a document is a mad frenzy of content and formatting. What if you could concentrate on the content and then quickly mark out blocks of text (that's a heading, that's a sub-heading etc.) and let the computer take care of the rest?

    I'm of the opinion that presentations are, in the normal course of things, done INCREDIBLY badly. The few good presentations I've seen have been from people who do presentations for a living. The likes of Lawrence Lessig and Al Gore. These presentations were used to illustrate something. I don't believe that the lack of presentation applications would have stopped either of these people from having brilliant presentations. They are the exception and people who have gone to exceptional lengths to have good presentations. Otherwise, they're bullet points - points to talk to. They don't engage the audience. It's much better to have vital information - that which you need illustrated - behind you such as charts. Something that helps to illustrate a point (when I was told I needed to have some slides behind me I agonised over it. I didn't want them. I didn't need any charts and I think they're more of a distraction than an aid when you don't actually need them. I spent more time agonising over those slides than I did actually thinking about what I wanted to say - fitting a speech to slides just sucks. The speech was awful as I was feeling horrendously anxious about the slides.) is so much more engaging than putting up bullet points.

    So I think FLOSS has a huge part to play in advancing technology here. Instead of fighting a losing battle trying to perpetuate existing terrible practices (based on MS's profits), FLOSS could instead be used to show people better and more efficient ways of doing things.

    This really came home to me when I was trying to compile a report on warranties. Essentially, while trying to break down the types of repairs across each of the schools, I was finding I was having a hell of a time trying to get the formatting consistent. The problem would be the same regardless of which office suite I was using. Instead, I really should have been able to create a template and then select which sheets it should use to generate a finished report - charts and all.

    Office Suites create bad (and soul-destroying) work practices.

    But back to the original point, when making a proposal, think about what value it's adding. Is it adding value? Is it adding value perceived by the intended audience? The revolution that oh so many Linux people talk about isn't going to happen by replacing like for like. Instead, it needs to be something better. Something that the intended audience sees value in. Something that revolutionises the way people do things. Things that encompass the best in Open Source Software - flexibility, scalability and, horribly important to me, customizability. All derivatives of the core concept of Free (as in Freedom)...

    16 December 2012

    Colin Jackson

    People should be allowed to have red cars!

    Dear Editor

    I am writing to ask why people are not allowed to have red cars. Some of my friends’ favourite colours are red, but they are not able to have their cars painted this way. Why?

    I have seen people writing in your newspaper to say that cars are meant to be black and it is simply wrong to paint them any other colour. They generally don’t explain why they think this, except to point to the manufacturers’ books that say cars must be made black (but don’t justify this). For heaven’s sake! This is the twentieth century and we have moved on so far since cars were invented. Back then, some people said that having any kind of car was wrong – look how they’ve changed once they have got used to the idea.

    Others have said, if my friends have red cars, that their black cars will be worth less somehow. Nonsense! There’s absolutely no reason to assume that. They can keep driving round the black cars they’ve always had. Some of the sillier of these people have even said that, if we let people paint cars red, they will want to go around painting other things red, like horses and dogs. What a crazy thing to say!

    The most honest people who don’t want people to have red cars say it’s because they just don’t like them. Some of the people who use other arguments really think this, as well, but they don’t like to say it in public. But no-one is going to force them to have a red car. They can keep having a black one, or none at all. Other people having red cars won’t affect them at all.

    I’m asking everyone who doesn’t think there should be red cars – think about why you oppose them. Why is it your business to try to stop something that won’t affect you and will make other people happy?

    Yours

    Red Car Lover

    20 November 2012

    Nevyn H

    Open Source Awards

    So it's a few days after the NZ Open Source Awards. The Manaiakalani Project won! The presenter for the award in education, Paul Seiler, did mention the fact that the award was really about two things: the project itself, and the use of Open Source Software and the contribution by me (sorry - ego does have to enter in at times - I'm awesome!). So, my time to shine. I'm thinking about doing a great big post about the evolution of a speech. I'll post my notes (which I didn't take with me) here though:
    • Finalists
    • Community
    • Leadership
    • Synergy
    • Personal
    Half way through my speech I'd realised my accent had turned VERY kiwi. Sod it - carry on.

    Anyway, I did say in a previous post that I'd look at the awards and process a little more closely. So let's do that!

    Nominations are found by opening the process up to the public. This runs for a few anxious months - anxious because I didn't want to nominate myself, as I felt this would be egotistical. However, with the discussion going on - the lack of mention of Open Source Software on the Manaiakalani website etc. - I was fairly confident the project had been nominated.

    The judges are all involved with various projects. It's such a small community that it'd be difficult to find people who knew what to look for who weren't involved in the community in some way. They have to do this with full disclosure.

    Amy Adams opened up the dinner with a "software development is important to the government" speech, with emphasis on the economic benefits. Of course, what she didn't say was that we spend far more money offshore on software than we do onshore. Take this as a criticism - we have the skills in New Zealand to take our software into our own hands and make it work to how we work. We don't have the commitment from the New Zealand government or businesses to be able to do this.

    The dinner itself is a little strange. Here you are, sitting at a table of your peers, and you can't help but think that all of the finalists should probably be getting their bit in the limelight. So at our table we had a sort of mock rivalry going on.

    There was:

    • Nathan Parker principal at Warrington School - the first school in New Zealand to take on a full Open Source and Open Culture philosophy. I've idolised Nathan for awhile - the guy's a dude! So the school itself runs on Open Source Software - completely. It's small enough that they don't need a Student Management System. As well as that, they have a low power FM station running 24/7 that's been run for the last 2 years, by volunteers. This station plays Creative Commons content. They also do sort of a "computers at homes" programme done right i.e. the sense of ownership is accomplished by getting people to build their own computers.
    • Ian Beardslee from Catalyst I.T. ltd. for the Catalyst Open Source Academy. For 2 summers now, Catalyst I.T. have taken an intake of students - basically dropping that barrier of entry into opensource contribution through a combination of classroom type sessions, and mentored sessions for real contributions.
    Personally, I don't think I'd have liked to be a judge, given just how close I perceive all of these projects to be. Paul Seiler, who I was sitting next to, did kind of say that you're all accomplishing the same sorts of things in different ways.

    And I saw a comment on a mailing list about those projects that are out there doing their thing but that no one nominated. This puts me in mind of yet another TED talk that I watched the morning after the NZOSA dinner.

    There was lots in that video that resonated with me. Things like me wanting to learn electronics but having a fear around it due to what people kept saying around me - "You have to get it absolutely right or it won't work" - paralysing me with fear of getting it wrong and it not working. The same thing was said to me about programming though I learnt fairly quickly that I could make mistakes and it wasn't the end of the world. In fact, programming is kind of the art of putting bugs in.

    But more importantly, and more on topic, the video kind of defines why there are likely a whole lot of projects that should have been nominated but weren't due to weird hang ups.

    And in a greater sense, shame is probably a huge threat to the Open Source community. I don't think I've ever contributed more than a few lines of code openly, because I'm convinced that whatever code I write just isn't good enough. That I'll be ridiculed for my coding style or assumptions etc. And I've always said that I'll put out my code after a clean-up etc.

    Take the code for the Manaiakalani project. It often felt like I was hacking things (badly in some cases) in order to get the functionality we needed - things like creating a blacklist of applications, for example. I'm sure there's evidence in the code of the fact that I'd never coded in Python before the project, as well. And yet it's not the code, but the thoughts behind the code, that's award winning. Even if the code is hideous, it's what the code is accomplishing, or attempting to accomplish, that's important.

    So for the next NZOSA gala, I would love to see, not just a list of the finalists, but also a short list of nominations (those that are deserving of at least a little recognition even if not quite making the final cut). I'd love to see the organisers and judges complaining about the number of nominations coming through. I would love to see the "Open Source Special Recognition" award become a permanent fixture (won by Nathan Parker this year). And hell, greater media attention probably wouldn't be such a bad thing.

    On a very personal note though - I now have to change from being a "Professional Geek" to being an "Award Winning Geek". Feels good on these ol' shoulders of mine.

    03 November 2012

    Rob Connolly

    Tiny MQTT Broker with OpenWRT

    So yet again I’ve been really lax at posting, but meh. I’ve still been working on various projects aimed at home automation – this post is a taster of where I’m going…

    MQTT (for those that haven’t heard about it) is a real time, lightweight, publish/subscribe protocol for telemetry based applications (i.e. sensors). It’s been described as “RSS for the Internet of Things” (a rather poor description in my opinion).

    The central part of MQTT is the broker: clients connect to brokers in order to publish data and receive data in feeds to which they are subscribed. Multiple brokers can be fused together in a hierarchical structure, much like the mounting of filesystems in a unix-like system.

    I’ve been considering using MQTT for the communication medium in my planned home automation/sensor network projects. I wanted to set up a hierarchical system with different brokers for different areas of the house, but hadn’t settled on a hardware platform. Until now…

    …enter the TP-Link MR3020 ‘travel router’, which is much like the TL-WR703N which I’ve seen used in several hardware hacks recently:


    It’s a Tiny MQTT Broker!

    I had to ask a friend in Hong Kong to send me a couple of these (they aren’t available in NZ) – thanks Tony! Once I received them, installing OpenWRT was easy (basically just upload through the existing web UI, following the instructions on the wiki page I linked to above). I then configured the wireless adapter in station mode so that it would connect to my existing wireless network and added a cheap 8GB flash drive to expand the available storage (the device only has 4MB of on-board flash, of which ~900KB is available after installing OpenWRT). I followed the OpenWRT USB storage howto for this and to my relief found that the on-board flash had enough space for the required drivers (phew!).

    Once the hardware-type stuff was sorted, with the USB drive partitioned (1GB swap, 7GB /opt) and mounting on boot, I was able to install Mosquitto, the Open Source MQTT broker, with the following command:

    $ opkg install mosquitto -d opt

    The -d option allows the package manager to install to a different destination, in this case /opt. Destinations are configured in /etc/opkg.conf.

    It took a little bit of fiddling to get mosquitto to start at boot, mainly because of the custom install location. In the end I just edited the paths in /opt/etc/init.d/mosquitto to point to the correct locations (I changed the APP and CONF settings). I then symlinked the script to /etc/rc.d/S50mosquitto to start it at boot.

    That’s about as far as I’ve got, apart from doing a quick test with mosquitto_pub/mosquitto_sub to check everything works. I haven’t tried mounting the broker under the master broker running on my home server yet.
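
    A quick way to script that smoke test (as an alternative to the mosquitto_pub/mosquitto_sub command line tools) is with an MQTT client library; a minimal sketch in Python, assuming the paho-mqtt package and its 1.x API, with a made-up LAN address standing in for the router:

    # mqtt_smoke_test.py - publish one message to the tiny broker and watch it echo back
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.50"   # hypothetical LAN address of the MR3020

    def on_connect(client, userdata, flags, rc):
        client.subscribe("test/#")
        client.publish("test/hello", "hello from the tiny broker")

    def on_message(client, userdata, msg):
        print("%s %s" % (msg.topic, msg.payload))

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, 60)
    client.loop_forever()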

    The next job is to investigate the serial port on the device in order to attach an Arduino clone which I soldered up a while ago. That will be a story for another day, hopefully in the not-too-distant future!

    21 August 2012

    Rob Connolly

    Smartclock Prototype

    So as promised here are the details and photos of the Arduino project I’ve been working on – a little late I know, but I’ve actually been concentrating on the project.

    The project I’m working on is a clock, but as I mentioned before it’s not just any old clock. The clock is equipped with sensors for temperature, light level and battery level. It also has a bluetooth module for relaying this data back to my home server. This is the first part of a larger plan to build a home automation and sensor network around the house (and garden). It’s serving as kind of a test bed for some of the components I want to use as well as getting me started with the software.

    Prototype breadboard

    The prototype breadboard showing the Roving Networks RN-41 bluetooth module on the left and the sensors on the right. The temperature sensor (bottom middle) is a TMP36 and the light sensor is a simple voltage divider using a photocell.

    As you can see from the photos this is a very basic prototype at the moment – although as of this weekend all the hardware is working as well as the software drivers. I just have the firmware to finalise before building the final unit.
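
    As an aside, the TMP36 makes the temperature conversion pleasantly simple: 500mV at 0°C and 10mV per degree. A sketch of what the server side might do with a raw 10-bit ADC reading relayed over the bluetooth link (the reference voltage is an assumption and depends on the board, 5V on the Duemilanove prototype, 3.3V on a 3.3V build):

    # tmp36.py - convert a raw 10-bit ADC reading from the TMP36 to degrees C
    def tmp36_celsius(raw, vref=3.3):
        volts = raw * vref / 1023.0      # ADC counts to volts
        return (volts - 0.5) * 100.0     # 500mV offset, 10mV per degree C

    print(tmp36_celsius(231))            # ~24.5 C with a 3.3V reference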

    Time display

    The (very bright!) display showing the time. I’m using the Sparkfun 7-segment serial display, which I acquired via Nicegear. It’s a lovely display to work with!

    The display is controlled via SPI and the input from the light sensor is used to turn off the display when it is dark in order to save power when there is no-one in the room. The display will also be able to be controlled from the server via a web interface.

    Temperature display

    The display showing the current temperature. The display switches between modes every 20 seconds with its default settings.

    Careful readers will note the absence of a real-time clock chip to keep accurate time. The time is kept using one of the timers on the ATmega328p. Yes, before you ask, this isn’t brilliantly accurate (it loses about 30 seconds every hour!), but I am planning to sync the time from the server via the bluetooth connection, so I’m not concerned.

    The final version will use an Arduino Pro Mini 3.3v (which I also got from Nicegear) for the brains, along with the peripherals shown. The Duemilanove shown is just easier for prototyping (although it makes interfacing with the RN-41 a little more difficult).

    I intend to publish all the code (both for the firmware and the server) and schematics under Open Source licences as well as another couple of blog posts on the subject (probably one on the final build with photos and one on the server). However, that’s it for now.

    15 August 2012

    Rob Connolly

    Quick Update…

    Well, I’ve not been doing great with posting more, especially on the quick short posts front. I guess it’s because I’m either too busy all the time or because I just don’t think anyone wants to read every last thought which pops into my head. Probably a bit of both!

    Anyway, here’s a quick run-down of what I’ve been up to over the past couple of weeks:

    • I’ve been working on an Arduino project at home. I’ll post more on this over the weekend (with photos). For now I’ll just say that it’s a clock with some sensors on it – but it does a little more than your average clock. Although I’ve had my Arduino for a couple of years I’ve never really used it in earnest and I’m finding it refreshing to work with. Since I use PICs at work the simpler architecture is nice. Of course I program it in C so I can’t comment on the IDE/language provided by the Arduino tools.
    • The beer which I made recently is now bottled and maturing. It’ll need a couple more weeks to be ready to drink though (actually the longer the better really, but I can never wait!). I’ll report back on what it’s like when I try it.
    • I’ve been thinking about ways to get the ton of data I have spread across my machines in order. Basically I want to get it all onto my mythbox/home server/personal cloud and accessible via ownCloud and NFS. I also have a ton of dead tree (read ‘important’ documents) which need scanning and a ton of CDRs that need backing up. After that I have to overhaul my backup scheme. It’s a big job – hence why I’ve only been thinking about doing it so far.
    • I’ve also been thinking about upgrading my security with the recent hacks which have occurred. Since I’m not hugely reliant on external services (i.e. Google, Facebook, Apple and Amazon) I’m doing pretty well already. Also, I already encrypt all my computers anyway (which is way more effective than that stupid ‘remote wipe’ misfeature Apple have). I am considering upgrading to two-factor authentication using Google Authenticator anywhere I can and I want to switch to using GPG subkeys and storing my master private key somewhere REALLY secure. I’ll be writing about these as I do them so stay tuned.

    Well, hopefully that’s a quick summary of what I’ve been up to (tech-wise) lately as well as what might be to come in these pages. For now, that’s all folks.

    24 July 2012

    Pass the Source

    Google Recruiting

    So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

    Here’s the suggestion I made on how they can really get in front of FOSS developers:

    Hi [name]

    Just a quick note to thank you for getting in touch with so many of our
    Catalyst IT staff, both here and in Australia, with job offers. It comes
    across as a real compliment to our company that the folks that work here
    are considered worthy of Google’s attention.

    One thing about most of our staff is that they *love* open source. Can I
    suggest, therefore, that one of the best ways for Google to demonstrate
    its commitment to FOSS and FOSS developers this year would be to be a
    sponsor of the NZ Open Source Awards. These have been very successful at
    celebrating and recognising the achievements of FOSS developers,
    projects and users. This year there is even an “Open Science” category.

    Google has been a past sponsor of the event and it would be good to see
    you commit to it again.

    For more information see:

    http://www.nzosa.org.nz/

    Many thanks
    Don

    09 July 2012

    Andrew Caudwell

    Inventing On Principle Applied to Shader Editing

    Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as it changes. The latest version even has syntax and error highlighting, and even bracket matching.

    There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of Demo Scene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live coding videos.

    Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their twitter account name in the source code.

    I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

    Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):

        

        

    GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavy compiled languages with long build and linking times that make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

    I would really like a save-draft button that saves shaders locally so I have some place to keep things that are a work in progress; I might have to look at how I can add this.

    Update: Fixed links to point at glslsandbox.com.

    05 June 2012

    Pass the Source

    Wellington City Council Verbal Submission

    I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.

    Introduction

    I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

    I have 3 Points to make in 3 minutes.

    1. The Long Term plan lacks vision and is a plan for stagnation and erosion

    It focuses on selling assets, such as community halls and council operations, and on postponing investments; on reducing public services such as libraries and museums; and on increasing user costs. This will not create a city where “talent wants to live”. With this plan, who would have thought the citizens of the city had elected a Green Mayor?

    Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

    My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

    2. Show faith in local companies

    The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure the council staff have a strong direction and mandate to procure locally. In particular the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

    A way of achieving local company participation in this is through disaggregation – the breaking up of large-scale initiatives into smaller, more manageable components – for the following reasons:

    • It improves project success rates, which helps the public sector be more effective.
    • It reduces project cost, which benefits the taxpayers.
    • It invites small business, which stimulates the economy.

    3. Smart cities are open source cities

    Use open source software as the default.

    It has been clear for a long time that open source software is the most cost-effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost-effectiveness and innovation that come with open source.

    It pains me to hear about proposals to save money by reducing libraries hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, when world beating free alternatives exist.

    This has to change, looking round the globe it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh NC to Paris to San Francisco.

    As well as saving money, open source brings a state of mind. That is:

    • Willingness to share and collaborate
    • Willingness to receive information
    • The right attitude to be innovative, creative, and try new things

    Thank you. There should now be 2 minutes left for questions.

    11 March 2012

    Malcolm Locke


    Secure Password Storage With Vim and GnuPG

    There are a raft of tools out there for secure storage of passwords, but they will all come and go; Vim and GnuPG are forever.

    Here's the config:

    augroup encrypted
        au!
    
        " First make sure nothing is written to ~/.viminfo while editing
        " an encrypted file.
        autocmd BufReadPre,FileReadPre      *.gpg set viminfo=
        " We don't want a swap file, as it writes unencrypted data to disk
        autocmd BufReadPre,FileReadPre      *.gpg set noswapfile
        " Switch to binary mode to read the encrypted file
        autocmd BufReadPre,FileReadPre      *.gpg set bin
        autocmd BufReadPre,FileReadPre      *.gpg let ch_save = &ch|set ch=2
        autocmd BufReadPost,FileReadPost    *.gpg '[,']!gpg --decrypt 2> /dev/null
        " Switch to normal mode for editing
        autocmd BufReadPost,FileReadPost    *.gpg set nobin
        autocmd BufReadPost,FileReadPost    *.gpg let &ch = ch_save|unlet ch_save
        autocmd BufReadPost,FileReadPost    *.gpg execute ":doautocmd BufReadPost " . expand("%:r")
    
        " Convert all text to encrypted text before writing
        autocmd BufWritePre,FileWritePre    *.gpg   '[,']!gpg --default-recipient-self -ae 2>/dev/null
        " Undo the encryption so we are back in the normal text, directly
        " after the file has been written.
        autocmd BufWritePost,FileWritePost  *.gpg   u
    
        " Fold entries by default
        autocmd BufReadPre,FileReadPre      *.gpg set foldmethod=expr
        autocmd BufReadPre,FileReadPre      *.gpg set foldexpr=getline(v:lnum)=~'^\\s*$'&&getline(v:lnum+1)=~'\\S'?'<1':1
    augroup END
    

    Now, open a file, say super_secret_passwords.gpg and enter your passwords with a blank line between each set:

    My Twitter account
    malc : s3cr3t
    
    My Facebook account
    malc : s3cr3t
    
    My LinkedIn account
    malc : s3cr3t
    

    When you write the file out, it will be encrypted with your GPG key. When you next open it, you'll be prompted for your GPG private key passphrase to decrypt the file.

    The line folding config will mean all the passwords will be hidden by default when you open the file, you can reveal the details using zo (or right arrow / l) with the cursor over the password title.

    I like this system because as long as I have gpg and my private key available, I can extract any long lost password from my collection.

    05 January 2012

    Pass the Source

    The Real Tablet Wars

    tl;dr, formerly known as Executive Summary: Openness + Good Taste Wins

    Gosh, it’s been a while. But this site is not dead. Just been distracted by identi.ca and twitter.

    I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

    It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

    Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

    I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

    The plan was simple, the specialist blogger, himself a world renown sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

    We had a problem in that the web browser on the tablet didn’t work with the web based text editor used in the Joomla CMS. That had me scurrying around for a replacement to the tinyMCE plugin, just the most common browser based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution and that the real issue was a bug with the client browser.

    “No problem”, I thought. “Let’s install Firefox, I know that works”.

    But no, Firefox is not available to iPad users  and Apple likes to “protect” its users by only tightly controlling whose applications are allowed to run on the tablet. Ok, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser, it’s for your own good.

    Someone suggested that the iPad’s operating system we were using needed upgrading and the new version might have a fixed browser. No, we couldn’t do that, because we didn’t have Apple’s music playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they downloaded the upgrade. Only problem: the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

    Er, and the upgrade failed to fix the problem. One day gone.

    So a laptop was press ganged into action, which, in the end was a blessing because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools, keep in mind it is good to have our children *write* stuff, often.

    The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

    But the worst aspect of this is that because of Apple’s success in creating well designed gadgets other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, Linux Kernel with Android, that allowed them to compete with Apple in the first place and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

    Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions, such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

    The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

    Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and they will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique and that we need to control and manage for ourselves. Allow us to do that, with taste, and you’ll be a winner.

    PS I should also say “thanks” fellow sponsors Chris Devine and Devine Computing for just making stuff work.

    * I know all is not equal. Apple’s competitive advantage is that it “has taste”, but not in its restrictions.

    11 November 2011

    Robin Paulson

    Free software and the extraction of capital

    This essay will assess the relationship between free software and the capitalist mode of accumulation, namely that of the extraction of various forms of capital to produce profit. I will perform an analysis through the lens of the Marxist concept of extracting surplus from workers, utilising Bourdieu’s theory of capital and the ideas of Hardt and Negri as they discuss the various economic paradigms and the progression through these.

    The free software movement is one which states that computer software should not have owners (Stallman, 2010, chap. 5), and that proprietary software is fundamentally unethical (Stallman, 2010, p. 5). This idea is realised through “the four freedoms” and a range of licenses, which permit anyone to use for any purpose, modify, examine, and redistribute modified copies of the software so licensed (Free Software Foundation, 2010). These freedoms are posited as a contrast to the traditional model of software development, which vests all ownership and control of the product in its creators. As free software is not under private control, it would appear at first to escape the capitalist mode of production, and the problems which ensue from it, such as alienation, commodity fetishism and the concentration of power and wealth in the hands of a few.

    For a definition of the commons, Bollier states:

    commons comprises a wide range of shared assets and forms of community governance. Some are tangible, while others are more abstract, political, and cultural. The tangible assets of the commons include the vast quantities of oil, minerals, timber, grasslands, and other natural resources on public lands, as well as the broadcast airwaves and such public facilities as parks, stadiums, and civic institutions. … The commons also consists of intangible assets that are not as readily identified as belonging to the public. Such commons include the creative works and public knowledge not privatized under copyright law. … A last category of threatened commons is that of so-called ‘gift economies’. These are communities of shared values in which participants freely contribute time, energy, or property and over time receive benefits from membership in the community. The global corps of GNU/Linux software programmers is a prime example: enthusiasts volunteer their talents and in return receive useful rewards and group esteem. (2002)

    Thus, free software would appear to offer an escape from the system of capitalist dominance based upon private property, as the products of free software contribute to the commons, resist attempts at monopoly control and encourage contributors to act socially.

    Marx described how, through the employment of workers, investors in capitalist businesses were able to amass wealth and thus power. The employer invests an amount of money into a business to employ labour, and the labourer creates some good, be it tangible or intangible. The labourer is then paid for this work, and the company owner takes the good and sells it at some higher price, to cover other costs and to provide a profit. The money the labourer is paid is for the “necessary labour” (Marx, 1976a, p. 325), i.e. the amount the person requires to reproduce their labour – that is, the smallest amount possible to ensure the worker can live, eat, house themself, work fruitfully and produce offspring who will do the same. The difference between this amount and the amount the good sells for, minus other costs (which are based upon the labour of other workers), is the “surplus value”, and equals the profit to the employer (Marx, 1976a, p. 325). The good is then sold to a customer, who thus enters into a social relationship with the worker that made it. However, the customer has no knowledge of the worker, and does not know the conditions they work under, their wage, their name or any other information about them; their relationship is mediated entirely through the commodity which passes from producer to consumer. Thus, despite the social relationship between the two, they are alienated from each other, and the relationship is represented through a commodity object, which is thus fetishised over the actual social relationship (Marx, 1976a, chap. 1). The worker is further alienated from the product of their labour, for which they are not fully recompensed, as they are not paid the full exchange amount which the capitalist company obtains, and they have no control over the commodity beyond the work they put into it.
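    Expressed schematically (a simplified formalisation of the paragraph above, using my own symbols rather than Marx’s notation): if w is the wage paid for the necessary labour, c the other costs of production, and p the price the commodity is sold for, then the surplus value s appropriated by the employer as profit is

        s = p - (c + w)

    so the larger the gap between what the worker is paid and what their product sells for, the larger the surplus extracted.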

    If we study the reasons participants have for contributing to free software projects, coders fall into one or more of the following three categories: firstly, coders who contribute to create something of utility to themselves; secondly, those who are paid by a company which employs them to write code in a traditional employment relationship; and finally, those who write software without economic compensation, to benefit the commons (Hars & Ou, 2001). The first category does not enter into a relationship with others, so the system of capitalist exchange does not need to be considered. The second category, that of a worker being paid to contribute to a project, might seem unusual, as the company appears to be giving away the result of capital investment, thus benefiting competitors. Although this is indeed the case, the value gained in other contributors viewing, commenting on and fixing the code is perceived to outweigh any disadvantages. In the case of a traditional employee of a capitalist company, the work, be it the production of knowledge, the carrying out of a service or the making of a tangible good, will be appropriated by the company the person works for, and credited as its own. The work is then sold at some increased cost, the difference between the cost to make it and the cost it is sold for being surplus labour, which reveals itself as profit.

    The employed software coder working on a free software project performs necessary labour (Marx, 1976a, p. 325), as any other employee does, and this is rewarded with a wage. However, the surplus value, which nominally is used to create profit for the employer through appropriating the work of the employee, is not solely controlled by the capitalist. Due to the nature of the license, the product of the necessary and surplus labour can be taken, used and modified by any other person, including the worker. Thus, the traditional relationship of the commons to the capitalist is changed. The use of paid workers to create surplus value is an example of the capitalist taking the commons and re-appropriating it for their own gain. However, as the work is given back to the commons, there is an argument that the employer has instead contributed to the wider sphere of human knowledge, without retaining monopoly control as the traditional copyright model does. Further, the worker is not alienated by their employer from the product of their labour; it is available for them to use as they see fit.

    The third category of contributors to a project, volunteers, are generally also highly skilled, well paid, and materially comfortable in life. According to Maslow’s Hierarchy of Needs (Maslow, 1943), as individuals attain the material comforts in life, they are likely to turn their aspirations towards less tangible but more fulfilling achievements, such as creative pursuits. Some will start free software projects of their own, as some people will start capitalist businesses: the Linux operating system kernel, the GNU operating system and the Diaspora* [sic] distributed social networking software are examples of this situation. If a project then appears successful to others, it will gain new coders, who will lend their assistance and improve the software. The person(s) who started the project are acknowledged as the leader(s), and often jokingly referred to as the “benevolent dictator for life” (Rivlin, 2003), although their power is contingent, because as Raymond put it, “the culture’s ‘big men’ and tribal elders are required to talk softly and humorously deprecate themselves at every turn in order to maintain their status.” (2002). As leaders, they will make the final decision on what code goes into the ‘official’ releases, and be recognised as the leader in the wider free software community.

    Although there may be hundreds of coders working on a project, as there is an easily identifiable leader, he or she will generally receive the majority of the credit for the project. Each coder will carry out enough work to produce the piece of code they wish to work on, thus producing a useful addition to the software. As suggested above by Maslow, the coder will gain symbolic capital, defined by Bourdieu as “the acquisition of a reputation for competence and an image of respectability” (1984, p. 291) and as “predisposition to function as symbolic capital, i.e., to be unrecognized as capital and recognized as legitimate competence, as authority exerting an effect of (mis)recognition … the specifically symbolic logic of distinction” (Bourdieu, 1986). This capital will be attained through working on the project, and being recognised by: other coders involved in the project and elsewhere; the readers of their blog; and their friends and colleagues, and they may occasionally be featured in articles on technology news web sites (KernelTrap, 2002; Mills, 2007). Each coder adds their piece of effort to the project, gaining enough small acknowledgements for their work along the way to feel they should continue coding, which could be looked at as necessary labour (Marx, 1976a, p. 325). Contemporaneously, the project leader gains a smaller acknowledgement for each improvement to the project as a whole, which in the case of a large project can add up to something significant over time. In the terms expressed by Marx, although the coder carries out a certain amount of work, it is then handed over to the project, represented in the eyes of the public by the leader, who accrues similar small amounts of capital from all coders on the project. This profit is surplus value (Marx, 1976a, p. 325). Similarly to the employed coder, the economic value of the project does not belong to the leaders; there is no economic surplus extracted there, as all can use it.

    To take a concrete example, Linus Torvalds, originator and head of the Linux kernel, is known for his work throughout the free software world, and feted as one of its most important contributors (Veltman, 2006, p. 92). The perhaps surprising part of this is that Torvalds does not write code for the project any more; he merely manages others, and makes grand decisions as to which concepts, not actual code, will be allowed into the mainline, or official, release of the project (Stout, 2007). Drawing a parallel with a traditional capitalist company, Linus can be seen as the original investor who started the organisation, who manages the workers, and who takes a dividend each year, despite not carrying out any productive work. Linus’ original investment in 1991 was economic and cultural capital, in the form of time and a part-finished degree in computer science (Calore, 2009). While he was the only contributor, the project progressed slowly, and the originator gained symbolic, social and cultural capital solely through his own efforts, thus resembling a member of the petite bourgeoisie. As others saw the value in the project, they offered small pieces of code to solve small problems and progress the code. These were incorporated, thus rapidly improving the software, and the standing of Torvalds.

    Like consumers of any other product, users of Linux will not be aware of who made a specific change unless they make an effort to read the list of changes for each release, thus resulting in the coder being alienated from the product of their labour and from the users of the software (Marx, 1959, p. 29), who fetishise (Marx, 1976a, chap. 1) the software over the social relationship which should be prevalent. For each contribution, which results in a small gain in symbolic capital to the coder, Linus takes a smaller gain in those forms of capital, in a way analogous to a business investor extracting surplus economic capital from her employees, despite not having written the code in question. The capitalist investor possesses no particular qualities, other than to whom and where she was born, yet due to the capital she is able to invest, she can amass significant economic power from the work of others. Over 18 years, these small gains in capital have also added up for Linus Torvalds, and such is now the symbolic capital expropriated that he is able to continue extracting this capital from Linux, while reinvesting capital in writing code for other projects, in this case ‘Git’ (Torvalds, 2005), which has attracted coders in part due to the fame of its principal architect. The surplus value of the coders on this project is also extracted and transferred to the nominal leader, and so the cycle continues, with the person at the top continuously and increasingly benefiting from the work of others, at their cost.

    The different forms of capital can readily be exchanged for one another. As such, Linus has been offered book contracts (Torvalds, 2001), is regularly interviewed for a range of publications (Calore, 2009; Rivlin, 2003), has gained jobs at high-prestige technology companies (Martin Burns, 2002), and has been invited to various conferences as a guest speaker. The other coders on the Linux project have also gained, through skills learned, social connections and prestige for being part of what is a key project in free software, although none in the same way as Linus.

    Free software is constructed in such a way as to allow a range of choices to address most needs; for instance, in the field of desktop operating systems there are hundreds to choose from, with around six distributions, or collections of software, covering the majority of users, through being recognised as well-supported, stable and aimed at the average user (Distrowatch.com, 2011). In order for the leaders of each of these projects to increase their symbolic capital, they must continuously attract new users, be regularly mentioned in the relevant media outlets and generally be seen as adding to the field of free software, contributing in some meaningful way. Doing so requires a point of difference between their software and the other distributions. However, this has become increasingly difficult, as the components used in each project have become increasingly stable and settled, so the current versions of each operating system will contain virtually identical lists of packages. In attempting to gain users, some projects have chosen to make increasingly radical changes, such as including versions of software with new features even though they are untested and unstable (Canonical Ltd., 2008), and changing the entire user experience, often negatively as far as users are concerned (Collins, 2011). Although this keeps the projects in the headlines on technology news sites, and thus attracts new users, it turns off experienced users, who are increasingly moving to more stable systems (Parfeni, 2011).

    This proliferation of systems, declining opportunities to attract new users, and increasingly risky attempts to do so, demonstrate the tendency of the rate of profit to fall, and the efforts capitalist companies go to in seeking new consumers (Marx, 1976b, chap. 3), so they can continue extracting increased surplus value as profit. Each project must put in more and more effort, in increasingly risky areas, thus requiring increased maintenance and bug-fixing, to attract users and be appreciated in the eyes of others.

    According to Hardt and Negri, since the Middle Ages there have been three economic paradigms, identified by the three forms from which profit is extracted. These are: land, which can be rented out to others or mined for minerals; tangible, movable products, which are manufactured by exploited workers and sold at a profit; and services, which involve the creation and manipulation of knowledge and affect, and the care of other humans, again by exploited workers (2000, p. 280). Looking more closely at these phases, we can see a progression. The first phase relied mainly upon the extraction of profit from raw materials, such as the earth itself, coal and crops, with little if any processing by humans. The second phase still required raw materials, such as iron ore, bauxite, rubber and oil, but also required a significant amount of technical processing by humans to turn these materials into commodities which were then sold, with profit extracted from the surplus labour of workers. Thus the products of the first phase were important in a supporting role to the production of the commodities, in the form of land for the factory, food for workers, fuel for smelters and machinery, and materials to fashion, but the majority of the value of the commodity was generated by activities resting on these resources: the working of those raw materials into useful items by humans. The last of the phases listed above, the knowledge, affect and care industry, entails workers collecting and manipulating data and information, or performing some sort of service work, which can then be rented to others. Again, this phase relies on the other phases: from the first phase, land for offices, data centres, laboratories, hospitals, financial institutes, and research centres, food for workers, and fuel for power; plus, from the second phase, commodities including computers, medical equipment, office supplies, and laboratory and testing equipment, to carry out the work. Similarly to the previous phase, these materials and items are not directly the source of the creation of profit, but they are required; the generation of profit relies and rests on their existence.

    In the context of IT, this change in the dominant paradigm was most aptly demonstrated by the handover of power from the mighty IBM to new upstart Microsoft in 1980, when the latter retained control over their operating system software MS-DOS, despite the former agreeing to install it on their new desktop computer range. The significance of this apparent triviality was illustrated in the film ‘Pirates of Silicon Valley’, during a scene depicting the negotiations between the two companies, in which everyone but Bill Gates’ character froze as he broke the ‘fourth wall’, turning to the camera and explaining the consequences of the mistake IBM had made (Burke, 1999). IBM, the dominant power in computing of the time, were convinced high profit continued to lie in physical commodities, the computer hardware they manufactured, and were unconcerned by lack of ownership of the software. Microsoft recognised the value of immaterial labour, and soon eclipsed IBM in value and influence in the industry, a position which they held for around 20 years.

    Microsoft’s method of generating profit was to dominate the field of software, their products enabling users to create, publish and manipulate data, while ignoring the hardware, which was seen as a commodity platform upon which to build (Paulson, 2010). Further, the company wasn’t particularly interested in what its customers were doing with their computers, so long as they were using Windows, Office and other technologies to work with that data, as demonstrated by a lack of effort to control the creation or distribution of information. As Microsoft were increasing their dominance, the free software GNU Project was developing a free alternative, firstly to the Unix operating system (Stallman, 2010, p. 9), and later to Microsoft products. Fuelled by the rise of highly capable, cost-free software which competed with and undercut Microsoft, so commoditising the market, the dominance of that company faded in the early 2000s (Ahmad, 2009), to be replaced by a range of companies which built on the products of the free software movement, relying on the use value, but no longer having any interest in the exchange value, of the software (Marx, 1976a, p. 126). The power Microsoft retains today through its desktop software products is due in significant part to ‘vendor lock-in’ (Duke, n.d.), the process of using closed standards and only allowing their software to interact with data in ways prescribed by the vendor. Google, Apple and Facebook, the dominant powers in computing today, would not have existed in their current form were it not for various pieces of free software (Rooney, 2011). Notably, the prime method of profit-making of these companies is through content, rather than via a software or hardware platform. Apple and Google both provide platforms, such as the iPhone and Gmail, although neither company makes large profits directly from these platforms, sometimes to the point of giving them away, subsidised heavily via their profit-making content divisions (Chen, 2008).

    Returning to the economic paradigms discussed by Hardt and Negri, we have a series of sub-phases, each building on the sub-phase before. Within the third (knowledge) phase, the first sub-phase of IT, computer software such as operating systems, web servers and email servers, was a potential source of high profits through the 1980s and 1990s, but due to high competition, predominantly from the free software movement, the rate of profit has dropped considerably, with, for instance, the free software ‘Apache’ web server being used to host over 60% of all web sites (Netcraft Ltd., 2011). Conversely, the capitalist companies from the next sub-phase were returning high profits and growth, through extensive use of these free products to sell other services. This sub-phase is noticeable for its reliance on creating and manipulating data, rather than producing the tools to do so, although both still come under the umbrella of knowledge production. This trend was mirrored in the free software world, as the field of software stabilised, thus realising fewer opportunities for increasing one’s capital through the extraction of surplus in this area.

    As the falling rate of profit reduced the potential to gain symbolic capital through free software, open data projects, which produce large sets of data under open licences, became more prevalent, providing further areas for open content contributors to invest their capital. These initially included Wikipedia, the web-based encyclopedia which anyone can edit, in 2001 (“Wikipedia:About,” n.d.). Growth of this project was high for several years, with a large number of new editors joining, but growth has since slowed to the point where attracting new editors is very difficult (Chi, 2009; Moeller & Zachte, 2009). Similarly, OpenStreetMap, which aims to map the world, was begun in 2004, and grew at a very high rate once it became known in the mainstream technology press. However, now that the majority of streets and significant geographical data in western countries are mapped, the project is finding it difficult to attract new users, unless they are willing to work on adding increasingly esoteric minutiae, which has little obvious effect on the map, and thus provides a less obvious gain in symbolic capital for the user (Fairhurst, 2011). For the leaders of the project, this represents higher and higher effort put in for comparatively smaller returns; again, the rate of profit is falling. Rather than the previous, relatively passive method of attracting new users and expanding into other areas, the project founders and leading lights are now aggressively pushing the project to map less well-covered areas, such as a recent effort in a slum in Africa (Map Kibera, 2011); starting a sub-group to create maps in areas such as Haiti, to help out after natural disasters (Humanitarian OpenStreetMap Team, 2011); and providing economic grants for those who will map in less-developed countries (Black, 2008). This closely follows the capitalist need to seek out new markets and territories, once all existing ones are saturated, and to continuously push for more growth, to arrest the falling rate of profit.

    According to Hardt and Negri,

    You can think and form relationships not only on the job but also in the street, at home, with your neighbors and friends. The capacities of biopolitical labor-power exceed work and spill over into life. We hesitate to use the word “excess” for this capacity because from the perspective of society as a whole it is never too much. It is excess only from the perspective of capital because it does not produce economic value that can be captured by the individual capitalist. (2011)

    The capitalist mode of production brings organisational structure to the production of value, but in doing so fetters the productivity of the commons, which is higher when capital stays external to the production process. This hands-off approach to managing production can be seen extensively in free software, through the self-organising, decentralised model it utilises (Ingo, 2006, p. 38), eschewing traditional management forms with chains of responsibility. Economic forms of capital are prevalent in free software, as when technology companies including advertising provider Google, software support company Red Hat and software and services provider Novell employ coders to commit code to various projects such as the Linux kernel (The Linux Foundation, 2009). However, the final decision on whether the code is accepted is left up to the project itself, which is usually free of corporate management. There are numerous, generally temporary, exceptions to this rule, including OpenOffice.org, the free software office suite, which recently came under the control of software developer Oracle. Within a few months of the acquisition, the number of senior developers involved in the project dropped significantly, most of them citing interference from Oracle in the management of the software, and those who left set up their own fork of the project, based on the Oracle version (Clarke, 2010). Correspondingly, a number of software collections also stopped including the Oracle software, and instead used the version released by the new, again community-managed, offshoot (Sneddon, 2010). Due to the license which OpenOffice.org is released under, all of Oracle’s efforts to take direct control of the project were easily sidestepped. Oracle may possess the copyright to all of the original code, through purchasing the project, but this comes to naught once that code is released: it can be taken and modified by anyone who sees fit.

    This increased productivity of the commons can be seen in the response to flaws in the software: as there is no hierarchical structure enforced by, for example, an employment contract, problems reported by users can be, and are, taken on by volunteer coders who will work on a flaw until it is fixed, without needing to consult line managers or align with a corporate strategy. If the most recognised source for the software does not respond quickly, whether for financial or technical reasons, then because of the nature of the licence other coders are able to fix the problem, including those hired by customers. For those not paid, symbolic capital continues to play a part here: although the coders may appear to be unpaid volunteers, in reality there is kudos to be gained by solving a problem quickly, pushing coders to compete against each other, even while sharing their advances.

    Despite this realisation that capital should not get too close to free software, the products of free software are still utilised by many corporates: free software forms the key infrastructure for a high proportion of web servers (Netcraft Ltd., 2011), and is extensively used in mobile phones (Germain, 2011) and financial trading (Jackson, 2011). The free software model thus forms a highly effective method for producing efficient software useful to capital. The decentralised, hard-to-control model disciplines capital into keeping its distance, forcing corporations to realise that if they get too close and try to control too much, they will lose out by wasting resources and appearing as bad citizens of the free software community, thus losing symbolic capital in the eyes of potential investors and customers.

    Conclusion

    The preceding analysis of free software and its relationship to capitalism demonstrates four areas in which the former is relevant to the latter.

    Firstly, free software claims to form a part of the commons, and to a certain extent, this is true: the data and code in the projects are licensed in a way which allows all to take benefit from using them, they cannot be monopolised, owned and locked-down as capitalism has done with the tangible assets of the commons, and many parts of the intangible commons. Further, it appears that not only is free software not enclosable, but whenever any attempt to control it is exerted by an external entity, the project radically changes direction, sheds itself of regulation and begins where it left off, more wary of interference from capital.

    Secondly, however, the paradigm of free software shows that ownership of a thing is not necessarily required to extract profit from it; there are still opportunities for the capitalist mode of accumulation despite this lack of close control. The high quality, efficient tools provided by free software are readily used by capitalist organisations to sell and promote other intangible products, and to manipulate various forms of data, particularly financial instruments, a growth industry in modern knowledge capitalism, at greater margins than had free software not existed. This high quality is due largely to the aforementioned ability of free software to keep capital from taking a part in its development, given capital’s apparent inefficiency at managing the commons.

    Thirdly, although free software cannot be owned and controlled as physical objects can, thus apparently foiling the extraction of surplus value as economic profit from alienated employees, the nominal leaders of each free software project appear to take a significant part of the credit for the project they steer, thus extracting symbolic capital from other, less prominent coders of the project. This is despite not being involved in much, or in some cases any, of the actual code-writing, thus mirroring the extraction of profit through surplus labour adopted by capitalism.

    Finally, the tendency of the rate of profit to fall seems to pervade free software in the same way as it affects capitalism. Certain free software projects have been shown to have difficulty extracting profit, in the form of surplus symbolic capital, and this has in turn prompted a shift to open data, which initially showed itself to be an area with potential for growth and profit, although it too has now suffered the same fate as free software.

    References

    Ahmad, A. (2009). Google beating the evil empire | Malay Mail Online. Retrieved November 3, 2011, from http://www.mmail.com.my/content/google-beating-evil-empire

    Black, N. (2008). CloudMade » OpenStreetMap Grants. Retrieved October 29, 2011, from http://blog.cloudmade.com/2008/03/17/openstreetmap-grants/

    Bollier, D. (2002). Reclaiming the Commons. Retrieved November 3, 2011, from http://bostonreview.net/BR27.3/bollier.html

    Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge & Kegan Paul.

    Bourdieu, P. (1986). The Forms of Capital. Retrieved November 5, 2011, from http://www.marxists.org/reference/subject/philosophy/works/fr/bourdieu-forms-capital.htm

    Burke, M. (1999). Pirates of Silicon Valley.

    Calore, M. (2009). Aug. 25, 1991: Kid From Helsinki Foments Linux Revolution | This Day In Tech | Wired.com. Retrieved November 5, 2011, from http://www.wired.com/thisdayintech/2009/08/0825-torvalds-starts-linux

    Canonical Ltd. (2008). “firefox-3.0” source package : Hardy (8.04) : Ubuntu. Retrieved October 29, 2011, from https://launchpad.net/ubuntu/hardy/+source/firefox-3.0/3.0~b5+nobinonly-0ubuntu3

    Chen, J. (2008). AT&T’s 3G iPhone Is $199 This Summer | Gizmodo Australia. Retrieved November 3, 2011, from http://www.gizmodo.com.au/2008/04/atts_3g_iphone_is_199_this_summer-2/

    Chi, E. H. (2009, July 22). PART 1: The slowing growth of Wikipedia: some data, models, and explanations. Augmented Social Cognition Research Blog from PARC. Retrieved November 3, 2011, from http://asc-parc.blogspot.com/2009/07/part-1-slowing-growth-of-wikipedia-some.html

    Clarke, G. (2010). OpenOffice files Oracle divorce papers • The Register. Retrieved October 30, 2011, from http://www.theregister.co.uk/2010/09/28/openoffice_independence_from_oracle/

    Collins, B. (2011). Ubuntu Unity: the great divider | PC Pro blog. Retrieved October 24, 2011, from http://www.pcpro.co.uk/blogs/2011/05/03/ubuntu-unity-the-great-divider/

    Distrowatch.com. (2011). DistroWatch.com: Put the fun back into computing. Use Linux, BSD. Retrieved October 30, 2011, from http://distrowatch.com/

    Duke, O. (n.d.). Open Sesame | Love Learning. Retrieved November 3, 2011, from http://www.reedlearning.co.uk/learn-about/1/ll-open-standards

    Fairhurst, R. (2011). File:Osmdbstats8.png – OpenStreetMap Wiki. Retrieved October 29, 2011, from https://wiki.openstreetmap.org/wiki/File:Osmdbstats8.png

    Free Software Foundation. (2010). The Free Software Definition – GNU Project – Free Software Foundation. Retrieved August 29, 2011, from https://www.gnu.org/philosophy/free-sw.html

    Germain, J. M. (2011). Linux News: Android: How Linuxy Is Android? Retrieved October 29, 2011, from http://www.linuxinsider.com/story/How-Linuxy-Is-Android-73523.html

    Hardt, M., & Negri, A. (2000). Empire. Cambridge, Mass: Harvard University Press.

    Hardt, M., & Negri, A. (2011). Commonwealth. Cambridge, Massachusetts: Belknap Press of Harvard University Press.

    Hars, A., & Ou, S. (2001). Working for Free? – Motivations of Participating in Open Source Projects. Hawaii International Conference on System Sciences (Vol. 7, p. 7014). Los Alamitos, CA, USA: IEEE Computer Society. doi:http://doi.ieeecomputersociety.org/10.1109/HICSS.2001.927045

    Humanitarian OpenStreetMap Team. (2011). Humanitarian OpenStreetMap Team » Using OpenStreetMap for Humanitarian Response & Economic Development. Retrieved November 3, 2011, from http://hot.openstreetmap.org/weblog/

    Ingo, H. (2006). Open Life: The Philosophy of Open Source. (S. Torvalds, Trans.). Lulu.com. Retrieved from www.openlife.cc

    Jackson, J. (2011). How Linux mastered Wall Street | ITworld. Retrieved October 29, 2011, from http://www.itworld.com/open-source/193823/how-linux-mastered-wall-street

    KernelTrap. (2002). Interview: Andrew Morton | KernelTrap. Retrieved October 30, 2011, from http://www.kerneltrap.org/node/10

    Map Kibera. (2011). Map Kibera. Retrieved October 29, 2011, from http://mapkibera.org/

    Martin Burns. (2002). Where all the Work’s Hiding | evolt.org. Retrieved October 30, 2011, from http://evolt.org/Where_all_the_Works_Hiding

    Marx, K. (1959). Economic & Philosophic Manuscripts. (M. Mulligan, Trans.). marxists.org. Retrieved November 3, 2011, from http://www.marxists.org/archive/marx/works/download/pdf/Economic-Philosophic-Manuscripts-1844.pdf

    Marx, K. (1976a). Capital: A Critique of Political Economy (Vol. 1). Harmondsworth: Penguin Books in association with New Left Review.

    Marx, K. (1976b). Capital: A Critique of Political Economy. The Pelican Marx library (Vol. 3). Harmondsworth: Penguin Books in association with New Left Review.

    Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370-396.

    Mills, A. (2007). Why I quit: kernel developer Con Kolivas. Retrieved October 30, 2011, from http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm

    Moeller, E., & Zachte, E. (2009). Wikimedia blog » Blog Archive » Wikipedia’s Volunteer Story. Retrieved November 3, 2011, from http://blog.wikimedia.org/2009/11/26/wikipedias-volunteer-story/

    Netcraft Ltd. (2011). May 2011 Web Server Survey | Netcraft. Retrieved October 29, 2011, from http://news.netcraft.com/archives/2011/05/02/may-2011-web-server-survey.html

    Parfeni, L. (2011). Linus Torvalds Drops Gnome 3 for Xfce, Calls It “Crazy” – Softpedia. Retrieved October 29, 2011, from http://news.softpedia.com/news/Linus-Torvalds-Drops-Gnome-3-for-Xfce-Calls-It-Crazy-215074.shtml

    Paulson, R. (2010). Application of the theoretical tools of the culture industry to the concept of free culture. Retrieved October 25, 2010, from http://bumblepuppy.org/blog/?p=4

    Raymond, E. S. (2002). Homesteading the Noosphere. Retrieved June 3, 2010, from http://www.catb.org/~esr/writings/cathedral-bazaar/homesteading/ar01s10.html

    Rivlin, G. (2003, November). Wired 11.11: Leader of the Free World. Retrieved from http://www.wired.com/wired/archive/11.11/linus.html

    Rooney, P. (2011). IT Management: Red Hat CEO: Google, Facebook owe it all to Linux, open source. IT Management. Retrieved October 25, 2011, from http://si-management.blogspot.com/2011/08/red-hat-ceo-google-facebook-owe-it-all.html

    Sneddon, J. (2010). LibreOffice – Google, Novell sponsored OpenOffice fork launched. Retrieved October 29, 2011, from http://www.omgubuntu.co.uk/2010/09/libreoffice-google-novell-sponsored-openoffice-fork-launched/

    Stallman, R. (2010). Free Software Free Society: Selected Essays of Richard M. Stallman. (J. Gay, Ed.) (2nd ed.). Boston, MA: GNU Press, Free Software Foundation.

    Stout, K. L. (2007). CNN.com – Reclusive Linux founder opens up – May 18, 2006. Retrieved October 30, 2011, from http://edition.cnn.com/2006/BUSINESS/05/18/global.office.linustorvalds/

    The Linux Foundation. (2009). Linux Kernel Development. Retrieved from https://www.linuxfoundation.org/sites/main/files/publications/whowriteslinux.pdf

    Torvalds, L. (2001). Just For Fun: The Story of an Accidental Revolutionary. London: Texere.

    Torvalds, L. (2005). “Re: Kernel SCM saga..” – MARC. Retrieved from http://marc.info/?l=linux-kernel&m=111288700902396

    Veltman, K. H. (2006). Understanding new media: augmented knowledge & culture. University of Calgary Press.

    Wikipedia:About. (n.d.). Wikipedia. Retrieved October 29, 2011, from https://secure.wikimedia.org/wikipedia/en/wiki/Wikipedia:About

    26 October 2011

    Colin Jackson

    Retaking the Net

    This Saturday (29th October 2011) is the RetakeTheNet Bar Camp in the Wellington Town Hall.

    I’ve talked about RtN before. It’s a group of people who are uncomfortable about the extent of control over the Net being exerted by governments and companies, and who want to do concrete things to improve the situation. This last point is the kicker – anyone can yell a bit, but doing actual projects is a lot harder. We are trying to use the features of the Net that have made it so successful, its openness and its innovation culture, to find ways to do things more freely.

    The bar camp is for people to come and contribute ideas, meet some fantastic people, and just maybe get energized enough to actually do stuff. There will be sessions through the day starting at 10am (best to get there a bit early) and going on until an after-party, starting around 4:30.

    There are going to be some very cool people there. And, you never know, we just might make a difference! Come if you want to be part of that.

    12 October 2011

    Robin Paulson

    University Without Conditions has launched

    Our Free University, the University Without Conditions had its first meeting on Saturday, October the 8th.

    We talked through various issues, including what our University will be, courses we will hold, and a rough idea of principles.  These principles will be made concrete over the next few weeks.  In the meantime, we have decided on our first event; it will be an Equality Forum, to be held as part of the Occupy Auckland demonstration and occupation on October the 15th at Aotea Square.

    All are welcome to attend the first event on the 15th, suggest courses via the website, or join the discussion list to take part in creating our University.

    If you would like to be involved in the set-up, please ask for an account to create posts.

    For more information, see the website:

    http://universitywithoutconditions.ac.nz or http://fu.ac.nz

    18 May 2011

    Andrew Caudwell

    Show Your True Colours

    This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

    A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in Bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides only 256 different shades of grey to play with.

    When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):

    [screenshot: the bloom gradient, showing visible banding]

    Those aren’t compression artifacts you’re seeing!
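    To put rough numbers on the problem, here is a tiny standalone sketch (not Gource code) of a gradient stretched across more pixels than there are 8-bit grey levels – every shade has to cover a run of adjacent pixels, and those runs are the bands:

        #include <cstdio>

        int main() {
            const int width  = 1024; // pixels the gradient spans
            const int shades = 256;  // grey levels available at 8 bits per channel

            // Quantise a linear 0..1 gradient to 8 bits and count the bands.
            int bands = 0, prev = -1;
            for (int x = 0; x < width; ++x) {
                int shade = (int)((double)x / (width - 1) * (shades - 1) + 0.5);
                if (shade != prev) { ++bands; prev = shade; }
            }

            printf("%d bands, each roughly %d pixels wide\n", bands, width / shades);
            return 0;
        }

    Blend a few of these quantised gradients together, as bloom does, and the runs of identical pixels only get wider.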

    Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
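    The fuzzy-sampling idea can be sketched as a fragment shader along these lines. This is a hypothetical illustration of the technique rather than Gource’s actual shader – the uniform names and the noise function are my own assumptions:

        // GLSL embedded in a C++ raw string, as it might sit in an OpenGL app.
        const char* bloom_fragment_src = R"GLSL(
        #version 120
        uniform vec2  centre;       // directory centre in screen space
        uniform float radius;       // bloom radius in pixels
        uniform vec4  colour;       // bloom tint
        uniform float noise_scale;  // dither strength, e.g. a shade or two

        // Cheap per-pixel pseudo-random number in [0,1).
        float rand(vec2 p) {
            return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
        }

        void main() {
            float dist      = length(gl_FragCoord.xy - centre) / radius;
            float jitter    = (rand(gl_FragCoord.xy) - 0.5) * noise_scale;
            float intensity = clamp(1.0 - dist + jitter, 0.0, 1.0);
            gl_FragColor    = colour * intensity * intensity;
        }
        )GLSL";

    Each pixel samples the gradient at a slightly wrong distance, so neighbouring pixels land in different shades and the hard band edges dissolve into noise too fine to notice at normal viewing distance.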

    [screenshot: the same bloom gradient with colour diffusion applied – slightly noisy up close, but band-free]

    The other improvement is speed – everything is now drawn with VBOs: large batches of object geometry are passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.
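    As a rough illustration of the batching idea (a minimal sketch using the fixed-function vertex-array API of the era, not Gource’s actual code – the Vertex layout and GLEW usage are assumptions):

        #include <GL/glew.h>
        #include <vector>

        struct Vertex { float x, y, u, v; unsigned char r, g, b, a; };

        // Accumulate every quad for the frame into one array, upload it once,
        // and issue a single draw call instead of thousands of tiny ones.
        void drawBatched(const std::vector<Vertex>& verts) {
            static GLuint vbo = 0;
            if (vbo == 0) glGenBuffers(1, &vbo);

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                         verts.data(), GL_STREAM_DRAW); // rewritten each frame

            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);

            glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (void*)0);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*)(2 * sizeof(float)));
            glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), (void*)(4 * sizeof(float)));

            glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());

            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

    A shadow pass can then reuse the same buffer (for example, drawn offset and darkened in a shader) rather than rebuilding the geometry on the CPU.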

    Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by 1-by-1 pixels, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux Kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
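    A single-pass drop shadow along the lines described might look roughly like this in GLSL (again a hypothetical sketch with assumed uniform names, not the shader Gource ships):

        const char* text_fragment_src = R"GLSL(
        #version 120
        uniform sampler2D font_tex;
        uniform vec2 shadow_offset;  // about one texel, i.e. 1.0 / texture size
        uniform vec4 shadow_colour;  // typically translucent black

        void main() {
            // Sample the glyph twice: once for the text, once shifted for the shadow.
            float glyph  = texture2D(font_tex, gl_TexCoord[0].xy).a;
            float shadow = texture2D(font_tex, gl_TexCoord[0].xy - shadow_offset).a;

            // Composite the text over its own shadow in one go.
            vec4 shadow_frag = shadow_colour * shadow;
            vec4 text_frag   = gl_Color * glyph;
            gl_FragColor     = mix(shadow_frag, text_frag, glyph);
        }
        )GLSL";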

    30 March 2011

    Vik Olliver

    31-Mar-2011 AM clippings

    Enjoy:

    Chapman Tripp and its allies are casting FUD at New Zealand's upcoming ban on software patents. The MED is not impressed:
    http://computerworld.co.nz/news.nsf/news/chapman-tripp-urges-re-think

    Two more Registration Authorities have had their SSL signing keys compromised. Web of trust fail:
    https://threatpost.com/en_us/blogs/comodo-says-two-more-registration-authorities-compromised-033011

    Acer lines up a dual-screen tablet PC as Microsoft waits for the tablet fad to pass:
    http://technolog.msnbc.msn.com/_news/2011/03/29/6367167-acers-dual-screen-tablet-behaves-like-a-laptop

    The first commercially viable nanogenerator, a flexible chip turning body movement into power, is shown to the American Chemical Society:
    http://portal.acs.org/portal/PublicWebSite/pressroom/newsreleases/CNBP_026949

    And finally. Atomic wristwatches go out of kilter all over Japan. Not from radiation; the radio sync transmitter is 16 km from the Daiichi:
    http://www.newscientist.com/blogs/onepercent/2011/03/atomic-clocks-go-dark-in-japan.html

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    28 March 2011

    Guy Burgess

    The software patent affair

    Law firm Chapman Tripp has published an article criticising the Government’s decision to exclude software from patentability. While the article makes some valid points, it does not treat others fairly.

    The article claims:

    The [software patent] exclusion was the product of intense and successful lobbying by members of the “free and open source” software movement… In its April 2010 report to Parliament on the Patents Bill, the Commerce Select Committee acknowledged that the free software movement had convinced it that computer programs should be excluded from patentability.

    I’m sure this assertion of mighty lobbying power (the ability to sway an all-party, unanimous recommendation no less) would be flattering to any professional lobbyist, let alone FOSS supporters – if only it were true (it is not evidenced in the Commerce Committee report). A range of entities made submissions against software patents, including the statutorily independent University of Otago, InternetNZ, and a number of small businesses (and my independent self, I modestly add). There were also submissions the other way, though interestingly most of the submissions in favour of retaining software patents were from patent attorney law firms. It is also notable that other organisations, including NZICT, which is a strong supporter of software patents and engaged in heavy after-the-event lobbying, did not make any submissions on the issue.

    The article adds the comment:

    The Committee said that “software patents can stifle innovation and competition, and can be granted for trivial or existing techniques”. The Committee provided no analysis or data to support that proposition.

    The fact that a Committee “provided no analysis or data” to support its recommendations is hardly noteworthy – that is not its job. Submitters provide analysis and data to the Committee, not the other way around. The material in support of the proposition is in the submissions.

    The article sets up an unfair straw-man argument:

    Free software proponents reckon that software should be free and, as a result, they generally oppose intellectual property rights. They say that IP rights lock away creativity and technology behind pay-walls which smother innovation. Most authors, inventors and entrepreneurs take the opposite view.

    I don’t claim to know what “free software proponents'” views on all manner of IP rights are, but when it comes to software patents in New Zealand, the evidence strongly suggests that the “authors, inventors and entrepreneurs” of software (FOSS or not) are opposed to software patents (see my posts here and here). This includes major companies, including NZ’s biggest software exporter Orion Health (see Orion Health backs moves to block patents).

    While the New Zealand Computer Society poll showing 81% member support for the exclusion is not scientific, it is at least indicative. In any case, opponents of the new law (mainly law firms) have consistently asserted a high level of opposition to the exclusion without any evidence to support that view.

    The article leads to the warning:

    If New Zealand enacted an outright ban on computer-implemented inventions we would be breaking international law. … Article 27(1) of TRIPs says that WTO members must make patents available for inventions “without discrimination as to… the field of technology…”.

    The authors rightly point out that breaching TRIPs could result in legal action against the Government by another country. However, that conclusion is premised on the basis that software is an “invention”. A number of processes and outcomes are not recognised as inventions for the purpose of patent law in different countries, including mathematical algorithms and business methods. The question of whether software is (or should be) an invention was commented on by a Comptroller-General of the UK Patent Office:

    Some have argued that the TRIPS agreement requires us to grant patents for software because it says “patents shall be available for any inventions … in all fields of technology, provided they are…..capable of industrial application”. However, it depends on how you interpret these words.

    Is a piece of pure software an invention? European law says it isn’t.

    The New Zealand Bill does not say that a computer program is an invention that is not patentable. It says, quite differently, that a computer program is “not a patentable invention”, along with human beings, surgical methods, etc.

    Article 27 has reportedly rarely been tested (twice in 17 years), and never in relation to software. The risk of receiving a complaint under an untested provision of a multilateral agreement is not new. The New Zealand Law Society notes this in its submission on the Patents Bill (which does not address software patents):

    The proposal to exclude plant varieties under [the new Act] is because New Zealand has been in technical breach of the 1978 Union for the Protection of New Varieties of Plants (UPOV) treaty since it acceded to it in November 1981.

    What’s 30 years of technical breach between friends? Therefore, in fairness I would add a “third way” of dealing with the software patent exclusion: leave it as it is, and see how it goes (which is, after all, what the local industry appears to want). As I wrote last year, “Pressure to conform with international norms (if one emerges) and trading partner requirements may force a change down the track, but the New Zealand decision was born of widely supported policy …”

    If the ban on software patents as it currently stands does not make it into law (which is a possibility, despite clear statements from the Minister of Commerce that it will), it won’t be the end of the world. In fact, it will be the status quo. There are pros and cons to software patents, and the authors are quite right that New Zealand will be going out on a limb by excluding them. The law can be changed again if need be. In the meantime, I refer again (unashamed self-cite) to my article covering the other, and much more popular, ways of protecting and commercialising software.

    27 March 2011

    Vik Olliver

    28-Mar-2011 AM clippings

    Enjoy:

    Microsoft seeks US state laws requiring customers of companies that use pirated software to pay the penalty (proprietary s/w only):
    http://www.groklaw.net/article.php?story=2011032316585825

    Rumours abound that Amazon is about to release its own Android tablet, and ebook makers start to turn to Android too:
    http://gigaom.com/mobile/how-e-books-are-coming-full-circle-thanks-to-tablets/

    An attacker broke into the Comodo Registration Authority (RA) based in Southern Europe and issued fraudulent SSL certificates:
    http://threatpost.com/en_us/blogs/phony-web-certificates-issued-google-yahoo-skype-others-032311

    The first processor is printed on a sheet of plastic. Well, two sheets. One for the CPU, one for the code:
    http://www.technologyreview.com/computing/37126/?p1=A2

    And finally. What do you do if a grizzly attacks you while you're stoned? Why, you claim ACC of course:
    http://www.brobible.com/bronews/montana-man-gets-workers-comp-for-getting-mauled-by-bear-after-smoking-pot

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    22 March 2011

    Vik Olliver

    23-Mar-2011 AM clippings

    Enjoy:

    The UK prepares to introduce a national IP blocking system in the name of preventing piracy of movies and music. For now:
    http://torrentfreak.com/100-domains-on-movie-and-music-industry-website-blocking-wishlist-110322/

    A look at the new features in the recently released Firefox 4 web browser:
    http://www.businessinsider.com/new-features-in-firefox-4-2011-3

    A new 3D nanostructure for lithium and NiMH batteries allows very rapid charging - an electric vehicle in 5 mins if you have the amps:
    http://news.illinois.edu/news/11/0321batteries_PaulBraun.html

    Quantum computing should hit the magic 10 qubit level this year. That's where it starts to surpass some standard computing techniques:
    http://www.bbc.co.uk/news/science-environment-12811199

    A discussion with a scientist who is actually building things atom by atom and his take on when it will be possible to make machines:
    http://nextbigfuture.com/2011/03/philip-moriarty-discusses.html

    And finally. A facebook app that peels the clothes off people in your friends' pictures. Works with guys, girls and sad people:
    http://www.thesmokingjacket.com/humor/falseflesh-facebook-app

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    06 October 2010

    Andrew Caudwell

    New Zealand Open Source Awards

    I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

    I’m currently taking a working holiday to make some progress on a short film presentation of Gource for the Onward! conference.

    Update: here’s the video presented at Onward!:

    Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

    16 August 2010

    Glynn Foster

    World’s First Pavlova Western

    Many months ago, I was lucky to be involved in the shooting of a western film – more appropriately, the world’s first Pavlova Western. Most people will be familiar with the concept of a Spaghetti Western, but now Mike Wallis (my brother-in-law) and his fiancée Inge Rademeyer from Mi Films have extended that concept to New Zealand.

    They are currently in post-production mode bringing all the pieces together, including an incredible music score from John Psathas (recently awarded Officer of the New Zealand Order of Merit for his Athens Olympics work). Jamie Selkirk (who received an Academy Award for his work on the Lord of the Rings trilogy) has also come on board to give them financial support to put the film through the final stages at Weta’s Park Road Post Production studios.

    And to top it all off, last week they appeared on TV One’s Close Up. Check out the following video –

    [http://www.youtube.com/v/Bhx9NiG9uhs]

    You can check their progress on the Facebook Pavlova Western group and the Pavlova Western blog.

    15 July 2010

    Guy Burgess

    Software patents to remain excluded

    The Government has cleared up the recent uncertainty about software patent reform by confirming that the proposed exclusion of software patents will proceed. A press release from Commerce Minister Simon Power said:

    “My decision follows a meeting with the chair of the Commerce Committee where it was agreed that a further amendment to the bill is neither necessary nor desirable.”

    During its consideration of the bill, the committee received many submissions opposing the granting of patents for computer programs on the grounds it would stifle innovation and restrict competition… The committee and the Minister accept this position.

    Barring any last-minute flip-flop – which is most unlikely given the Minister’s unequivocal statement – s15 of the new Patents Act, once passed, will read:

    15(3A) A computer program is not a patentable invention.

    Lobbying

    It is clear that the lobbying by pro-software patent industry group NZICT was unsuccessful, although Computerworld reports that its CEO apparently still holds out hope that “[IPONZ] will clarify the situation and bring this country’s law into line with the position in Europe and the UK, where software patents have been granted”. Hope does indeed spring eternal: the exclusion is clear and leaves no room for IPONZ to “clarify” it to permit software patents (embedded software is quite different – see below).

    As I wrote earlier, it remains a mystery as to why NZICT, a professional and funded body, failed to make a single submission on the Patents Act reform process – they only had 8 years to do so – but instead engaged in private lobbying after the unanimous Select Committee decision had been made. It also did not (and still does not) have a policy paper on the subject, nor did it mention software patents once in its 17 November 2009 submission on “New Zealand’s research, science and technology priorities”. It is not as though the software patent issue had not been signalled – it was raised in the very first document in 2002. Despite this silence, it claims that software patents are actually critical to the IT industry it says it represents.

    The New Zealand Computer Society, on the other hand, did put in a submission and has articulated a clear and balanced view representing the broader ICT community. It said today that “we believe this is great news for software innovation in New Zealand”.

    Left vs right?

    Is there a political angle to this? While some debate has presumed an open-vs-proprietary angle (a false premise), some I have chatted with have seen it as a left-vs-right issue, something Stephen Bell also alluded to (in a different context) in this interesting article.

    Thankfully, it appears not. The revised Patents Bill was unanimously supported by the Commerce Committee, comprising members of the National, Labour, Act, Green and Maori parties. It reported to Commerce Minister Simon Power (National) and Associate Minister Rodney Hide (Act). Unlike the previous Government’s Copyright Act reform, post-committee industry lobbying has not turned the Government.

    What about business? NZICT apart, the exclusion of software patents has received wide support from the New Zealand ICT industry, including (publicly) leading software exporters Orion Health and Jade, which as Paul Matthews notes represent around 50% of New Zealand’s software exports. The overwhelming majority of NZCS members support the change. Internationally, many venture capitalists and other non-bleeding-heart-liberal types have spoken out against software patents, on business grounds.

    Some pro-software patent business owners might be miffed at a perceived lack of support from National or Act, perhaps assuming that software patents are a “right” and are valuable for their businesses. The reality is that only a handful of New Zealand companies have New Zealand software patents (I did see a figure quoted somewhere – will try to find it). Yes, they can be valuable if you have them, but that is a separate issue (and remember, under the new Act no one loses existing patents). A capitalist, free market economy (and the less restrictive the better) abhors monopolies, and this decision benefits the majority of businesses in New Zealand. Strong IP protection – including patents – is essential in modern society (see my article “Protecting IP in a post-software patent environment”), but when statutory protection is reviewed, its extent will always come down to a perceived balance: not just for the minority holders of a patent (a private monopoly), but for the much larger majority artificially prevented from competing and innovating by that monopoly.

    I have always taken pains to note, like NZCS, that there are pros and cons to software patents. And I am a fan of patents generally. Patents are good! But for software patents, the cons outweigh the pros, and there are sound business reasons to exclude them. This part of the reform targets one specific area, has unanimous political party support (how rare is that?), and wide local business support. The last thing it can be seen as is an anti-business, left-wing policy (if it were, I’d have to oppose it!).

    Embedded software

    Inventions containing embedded software will remain, rightly, not excluded under the Patents Bill. Minister Power confirmed that IPONZ will develop guidelines for embedded software, which hopefully will set some clear parameters for applicants.

    Software is essential to many inventions, and while that software itself will not be patentable, the invention it is a component of still may be. Some difficult conceptual issues can arise, but in most cases I don’t expect they will. This “exception” (if it can be described as such) will not undermine the general exclusion for software patents.

    11 May 2010

    Guy Burgess

    Open source in government tenders

    Computerworld reports:

    A requirement that a component of a government IT tender be open-source has sparked debate on whether such a specification is appropriate.

    The relevant part of the RFP (for the State Services Commission) puts the requirement as follows:

    We are looking for an Open Source solution. By Open Source we mean:

    • Produce standards-compliant output;
    • Be documented and maintainable into the future by suitable developers;
    • Be vendor-independent, able to be migrated if needed;
    • Contain full source code. The right to review and modify this as needed shall be available to the SSC and its appointed contractors.

    The controversy is whether this is a mandate of open source licensing (which it isn’t). The government should not mandate open source licensing or proprietary licensing on commercial-line tenders. More precisely, it should not rule solutions in or out based on whether they are offered (to others) under an open source licence. The best options should be on the table.

    The four stated requirements are quite sensible. As the SSC spokesman said, there is nothing particularly unusual about them in government procurement. These requirements (or variations on them) are similarly common in private-sector procurement and development contracts. In the public sector in particular though, vendor independence and standards-compliance help avoid farcical situations like the renegotiation of the Ministry of Health’s bulk licensing deal.

    Open standards and interoperability in public sector procurement are gaining traction around the world. Recently, the European Union called for “the introduction of open standards and interoperability in government procurement of IT”. And in the recent UK election, all three of the main parties included open source procurement in their manifestos.

    So why the controversy in this case? Most likely it’s the perhaps inapt use of the term “open source” in the RFP (even though the intended meaning is clarified immediately afterwards). “Open source” is a hot-button term that means many things to many people, but today it generally means having code licensed under a recognised open source licence, many of which are copyleft. Many vendors simply could not (or would never want to) license their code under such a licence, and it would be uncommercial and somewhat capricious for a Government tender to rule out some (or even the majority of) candidates on such criteria.

    However, it is clear that the SSC did not use the term in that context, and does not intend to impose such a requirement. An appropriate source-available licence is as capable of meeting the requirements as an open source licence (see my post on source available vs open source). The requirement for disclosure of code to contractors and future modification can be simply dealt with on standard commercial IP licensing terms.

    A level playing field for open and proprietary solutions is the essential starting point, with evaluation – which in most cases should include open standards and interoperability – proceeding from there.

    25 January 2010

    Glynn Foster

    http://www.internetblackout.com.au/

    While I catch a breath and write up some of my experiences of LCA2010 last week, the Australians are in full gear for their Great Australian Internet Blackout Campaign.

    From their website –

    What’s the problem?

    The Federal Government is pushing forward with a plan to force Internet Service Providers to censor the Internet for all Australians. This plan will waste millions of dollars and won’t make anyone safer.

    1. It won’t protect children: The filter isn’t a “cyber safety” measure to stop kids seeing inappropriate content such as R and X rated websites. It is not even designed to prevent the spread of illegal material where it is most often found (chat rooms, peer-to-peer file sharing).
    2. We will all pay for this ineffective solution: Under this policy, ISPs will be forced to charge more for consumer and business broadband. Several hundred thousand dollars has already been spent to test the filter – without considering high-speed services such as the National Broadband Network!
    3. A dangerous precedent: We stand to join a small club of countries which impose centralised Internet censorship such as China, Iran and Saudi Arabia. The secret blacklist may be limited to “Refused Classification” content for now, but what might a future Australian Government choose to block?

    Help turn the lights out on the proposed Internet filter by joining the Great Australian Internet Blackout.

    New Zealand was supported worldwide during its campaign against Section 92A – it’s time to support our cousins in the west.

    14 January 2010

    Glynn Foster

    Come to LCA2010 Open Day

    We’ve got a great line up for LCA Open Day! Check out our great posters and pass them around your work, university, community group or government department!

    01 June 2009

    Gavin Treadgold

    Software for Disasters

    This is the original text I submitted to The Box feature on Disaster Tech on Tuesday the 2nd of June, 2009. It is archived here for my records. It also includes some additional content that didn’t make it to the print edition.

    On December 26, 2004, the Boxing Day tsunami killed over 35 thousand people and displaced over half a million in Sri Lanka alone. A massive humanitarian crisis played out in numerous other countries also affected by the magnitude 9+ Great Sumatra-Andaman earthquake and resulting tsunami. Within days it became apparent that an information system was needed to manage the massive amounts of information being generated about who was doing what, and where – at one point there were approximately 1,100 registered NGOs operating in Sri Lanka.

    It was decided by a group of Sri Lankan IT professionals that a system needed to be built to better manage the information, as they couldn’t find any existing free solutions that could be quickly deployed. Free was critical, as they couldn’t afford any commercial solutions.

    Sahana was implemented within a week by around four hundred IT volunteers, and it was named after the Sinhalese word for relief. Initially it provided tools for tracking missing persons, organisations involved in response, locations and details of camps set up in response to the tsunami, and a means of accepting requests for resources such as food, water and medicine.

    Following the tsunami, the Swedish International Development Agency provided funding to take the lessons learnt from writing and deploying software during a disaster, and to rebuild Sahana from the ground up, and release it as free and open source software to the world. After all, Sri Lanka had needed an open and available system to manage disaster information, surely other countries should benefit from their experience?

    Since 2005, Sahana has been officially deployed to earthquakes in Pakistan, Indonesia, China and Peru; a mudslide in the Philippines; and has been deployed in New York City as a preparedness measure to help manage storm evacuations.

    Being free and open source software has been critical to Sahana’s success. The more accessible a system is, the more likely it is to be adopted, used and improved. Even in developed countries, many disaster agencies are poorly funded and often cannot justify significant expenditure on systems – commercial systems are too expensive. With pressure being applied to many public budgets, the significance of this is even greater now. Perhaps the greatest benefit of applying open source approaches is that it encourages a collaborative and communal approach to improving the system. As more countries with experience in disaster management contribute to its development, this will also act as a form of expertise transfer to countries that may not have as much experience with disasters.

    Following Hurricane Katrina, there were nearly 50 websites created to track missing and displaced persons – all using different systems, all collecting duplicate information, and few of them sharing. Many of the potential benefits of the technology were lost due to a lack of co-ordination and massive replication of data. Having access to tools such as Sahana is more efficient, as they can be deployed faster than solutions developed after an event occurs.

    Normally, management involves a ‘leisurely’ process of collecting as much information as possible before deciding what actions should be taken. Immediately following a disaster the situation is completely reversed: decisions have to be made, sometimes with little or no information and no time to gather it.

    A key benefit that IT can provide is in linking silos of information held by different organisations – everyone has a better shared picture of what has happened, what is occurring now, and what is planned.

    Software, however, is just one aspect. There is a need for open data (such as maps and statistics) and standards to ensure that the multitude of systems can connect to each other and share information.

    The most important aspect is having the relationships between organisations set up in advance of a disaster. This results in organisations having the confidence to connect their systems and share information. Without shared information the rest of the system will lose many potential benefits that IT can bring to disaster management.

    Often, little or no information is available to support decision-making – emergency managers are forced to make complex decisions without having the luxury of all the required information.

    A disaster can produce a massive number of tasks requiring hundreds of organisations and thousands of people to co-ordinate activity – meaning that there will always be some prioritisation needed. What should be done first? What can wait until later? How should an impacted community prioritise response and recovery with limited resources?

    The benefits are not just limited to agencies and NGOs. The next evolutionary step will be to adopt an approach called ‘crowd sourcing’ whereby members of the community are provided with tools to interact with each other and with emergency managers.

    This may be achieved with applications that run on mobile phones, linking people and even submitting information from the field directly to Sahana servers. Imagine the situation where a passerby can take a georeferenced photo of some disaster damage and, if communications networks are working, send it directly to the system emergency managers are using to manage the event. There are a number of efforts underway looking at how social networks and websites such as Facebook and Twitter can be utilised during a disaster.

    Disaster IT is really a force multiplier. It won’t usually save lives, but it will allow a better shared understanding of the problems, and will lead to more effective and co-ordinated response. It allows those responding to an event, whether an organisation or individual, to quickly access information and better inform decision-making. This can lead to less suffering and a quicker recovery for affected communities.

    Design for Disaster

    Computer systems can often be fragile by design – they are especially reliant upon power and communications. If either is lost during a disaster, the value of a system can quickly disappear if it has not been designed to operate in adverse environments. Here are some design decisions that are very important for disaster applications:

    • Low bandwidth – we’ve all become accustomed to sucking bandwidth through massive broadband pipes, but during a disaster network connectivity for emergency managers may be limited to dialup speeds over satellite or digital radio connections. Disaster software needs to be designed for very efficient transfer of information, and should never assume vast quantities of bandwidth are available. At the extreme, some information may even be transferred by SMS or USB memory stick.
    • Intermittent connectivity – during a disaster communications will likely fail multiple times before they are finally restored. This means that most ‘software as a service’ or web applications on the Internet will be of little use to emergency managers. Disaster software needs to be stored and run locally, and be able to work without a connection to the Internet.
    • Synchronisation – one of the best techniques for designing around low bandwidth and intermittent connectivity is to design a system to be able to synchronise information between two systems when communications are available. When communications later fail, both systems will have a copy of the same data, and can access it locally until communications are restored (a minimal sketch of this idea follows the list below).
    • Low power – power can, and will, fail during a disaster, so disaster software needs to be designed to run on low-power devices. Laptops and notebooks are good targets as they are self-contained, have built-in batteries, and can be charged from solar cells or generators. Large, power-hungry servers can be difficult to move and support in a disaster environment.
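
    To make the synchronisation point concrete, here is a minimal sketch in Python of the store-and-forward idea: capture records locally first, then push them to a peer whenever a link happens to be up. This is not Sahana code – the endpoint URL, record structure and function names are illustrative assumptions only.

    import json
    import time
    import urllib.error
    import urllib.request

    LOCAL_QUEUE = []                              # records captured while offline
    REMOTE_URL = "http://peer.example.org/sync"   # hypothetical peer endpoint

    def record_locally(record):
        """Store a record locally first, so data survives a network outage."""
        record["captured_at"] = time.time()
        LOCAL_QUEUE.append(record)

    def try_synchronise(timeout=5.0):
        """Push queued records to a peer while the connection holds.

        Returns the number of records transferred; anything that fails
        stays in the queue for the next attempt.
        """
        sent = 0
        while LOCAL_QUEUE:
            payload = json.dumps(LOCAL_QUEUE[0]).encode("utf-8")  # small, text-only payload
            request = urllib.request.Request(
                REMOTE_URL, data=payload,
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(request, timeout=timeout)
            except (urllib.error.URLError, OSError):
                break                  # link is down again; keep the rest queued
            LOCAL_QUEUE.pop(0)         # discard only after the peer has accepted it
            sent += 1
        return sent

    if __name__ == "__main__":
        # Capture data regardless of connectivity, then sync opportunistically.
        record_locally({"type": "missing_person", "name": "example"})
        print("records transferred:", try_synchronise())

    The important property is that nothing is removed from the local queue until the peer has accepted it, so when the link drops again both ends still hold a usable copy of everything that has already been exchanged.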

    How I became involved

    One might ask how a Kiwi became involved in Sahana. Ever since training as a Civil Defence volunteer in the late ’90s, I have had an interest in how information technology could be used to improve disaster management. The tsunami in 2004 acted as the catalyst for Sri Lankan computer programmers to produce Sahana, and I have been volunteering with the project since 2005. In September 2005, I helped facilitate a workshop in Colombo that formed the basis for the current version of Sahana, and in March this year I attended a Sahana conference and Board meeting in Sri Lanka. At the Board meeting the existing ‘owner’ of Sahana – the Lanka Software Foundation – agreed to hand the project over to the open source community. I am a member of the transition Board that is in the process of forming an international non-profit foundation that can accept financial donations and act as the ‘custodian’ of Sahana.

    How you can help

    There are numerous ways you can help Sahana. Once the foundation is registered, we will be able to accept financial donations that will be used to fund development. In the meantime, we are looking for open source programmers with web development skills (including mapping). If you’re not a programmer, we are always looking for translators who can convert the English text and documentation into many different languages. Perhaps most importantly, we are looking for experienced emergency managers to help provide design advice to the Sahana community and guide the developers.

    26 April 2009

    Gavin Treadgold

    Google investing USD$50,000 in Sahana

    Well, it has been a lot of work for the admins, the mentors, and the students, but it has paid off. The Sahana project has been awarded 10 projects in the 2009 Google Summer of Code. We have some great projects lined up! They include:

    • Person Registry for Sahana
    • Warehouse Management
    • Disaster Victim Identification
    • J2ME clients for form data collection in the field
    • Optical Character Recognition for scanning forms
    • Peer to peer synchronisation of Sahana servers
    • CAP Aggregation and Firefox CAP plugin
    • CAP Editing and Publishing
    • Mashup/Aggregation Dashboard
    • Theme Manager

    Having been neck deep in the process – working with others to set up our assessment process, coming up with ideas (I’m stoked to have two students working on CAP ideas that came out of my earlier suggestion), and reviewing each and every one of the 45 proposals we received – it has been exciting to get so many projects accepted.

    I think that by the end of the year, we are going to have some great new functionality available in Sahana. Even more, I hope we’ll attract more open source developers to our ever growing community!

    25 March 2009

    Gavin Treadgold

    Sahana – a catalyst to widespread EMIS deployment

    I’ve just uploaded the presentation I gave on Sahana at the Sahana 2009 Conference in Colombo, Sri Lanka on the 25th of March, 2009. I’ll put a link up to the associated paper soon as well.