NZOSS  
Planet NZOSS
 

25 August 2016

Francois Marier


Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the following packages:

  • libhybris
  • libwayland-egl1-mesa
  • libwayland-egl1-mesa-dbg
  • libwayland-egl1-mesa-lts-utopic
  • libwayland-egl1-mesa-lts-vivid
  • libwayland-egl1-mesa-lts-wily
  • libwayland-egl1-mesa-lts-xenial

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.
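
If you're not sure which Hardware Enablement stack is actually installed (and therefore which of the packages above matches it), something along these lines should show it (a quick sketch; adjust the suffix for other stacks):

dpkg -l | grep lts-xenial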

I filed a bug for this on Launchpad.

20 August 2016

Francois Marier


Replacing a failed RAID drive

Translation of the original English article at https://feeding.cloud.geek.nz/posts/replacing-a-failed-raid-drive/.

Here's the procedure I followed to replace a failed RAID drive on a Debian machine.

Replace the drive

After noticing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

With this information in hand, I shut down the computer, removed the failed drive and put a new blank drive in its place.

Initialize the new drive

After booting with the new blank drive in place, I copied the partition table using parted.

First, I examined the partition table on the good drive:

$ parted /dev/sda
unit s
print

and created a new partition table on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

Then I used the mkpart command for my 4 partitions and gave them all the same size as the matching partitions on /dev/sda.

Finally, I used the toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) commands on all of the RAID partitions, before checking with the print command that the two partition tables were now identical.

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following commands on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of the sync with:

watch -n 2 cat /proc/mdstat

To speed up the process, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition as follows:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (it's not possible to restore a RAID0 array, it has to be recreated from scratch), I had to do two things:

  • replace the UUID for the swap partition in /etc/fstab with the one given by the mkswap command (or by running the blkid command and taking the UUID for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by the mdadm --detail --scan command

Making sure I can boot with the replacement drive

To be certain that the machine can boot from either of the two drives, I reinstalled the grub boot loader onto the new drive:

grub-install /dev/sdb

before rebooting with both drives connected. This confirms that my configuration works properly.

Then I booted without the /dev/sda drive to make sure that everything would still work if that drive decided to die and leave me with only the new drive (/dev/sdb).

This test obviously breaks the sync between the two drives, so I had to reboot with both drives connected and then re-add /dev/sda to all of the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once all of that was done, I rebooted again with both drives connected to confirm that everything works:

cat /proc/mdstat

and then ran a full SMART test on the new drive:

smartctl -t long /dev/sdb

17 August 2016

NZOSS News

NZ Open Source Award 2016 Finalists Announced

From the NZOSA press release email:

"The judges have completed their review of the many projects and individuals nominated for awards this year, and have announced the finalists for each category.  

“The quality of all of the nominations was very impressive. We are fortunate to have so many people here using open source technology and philosophy to deliver amazing technical, social and creative projects.

“The judges would like to congratulate all of the nominees, as all of their contributions have created significant value for our communities, research institutions and businesses.” said Jason Ryan, Chair of the judging panel.

We would like to thank the judges for all the time they put into reviewing the nominations.

View a list of the 2016 finalists - http://nzosa.org.nz/categories/

People's Choice award

Nominations for the People's Choice award are listed on the website, ready for voting, which is open until 5 September.
View People's Choice nominations - http://nzosa.org.nz/peoples-choice/

Sponsors

We would like to thank all our sponsors for the NZOSA 2016.
Platinum - Catalyst and NZRS
Silver - Redhat, SilverStripe and IITP
Bronze - Silicon Systems, NZRise and NZOSS. 

If you are interested in becoming a sponsor, please contact sponsors [at] nzosa [dot] org [dot] nz

Clinton Bedogni Prize for Open Systems

Once again the University of Auckland Department of Computer Science is presenting the Clinton Bedogni Prize for Open Systems at the NZOSA gala dinner. Applications close on 27 August. More details and nomination forms are available at https://www.cs.auckland.ac.nz/our_department/Clinton_Bedogni/

Gala dinner

The gala dinner is being held on Tuesday 25 October at Te Papa in Wellington. 
Tickets are $45, to reserve your ticket contact awards [at] nzosa [dot] org [dot] nz


07 August 2016

NZOSS News

ACT calls for government open source adoption

The ACT Party has called for the NZ Government to adopt open source software, citing, among other things, the UK's transition away from Microsoft products to open source alternatives.

While the NZOSS is gratified to see Free and Open Source Software (FOSS) being advocated by the ACT Party (and the Greens have similarly advocated it for at least the past decade), we think that FOSS sells itself if the playing field is level. At present it is not. The UK's transition has been made possible by its mandatory adoption of open standards. Without that change, adopting FOSS is blocked by the fact that corporate software suppliers control the documents everyone produces, and they can unilaterally change those proprietary formats to ensure that the FOSS alternatives are never quite fully compatible, as has been the case for the past 20 years.

We would love to see the NZ government make good on the goals of the D5 Charter, to which we have formally been signed up (thanks Minister Dunne), and which among other admirable things include the adoption of open standards and broader use of open source software. We advocate that the NZ government formally mandate the use of open standards: once it does so, government software procurement will be on a level playing field, and at that point we are confident that FOSS will increasingly be the best choice on all criteria.

01 August 2016

Simon Lyall

Putting Prometheus node_exporter behind apache proxy

I’ve been playing with Prometheus monitoring lately. It is fairly new software that is getting popular. Prometheus works using a pull architecture. A central server connects to each thing you want to monitor every few seconds and grabs stats from it.

In the simplest case you run the node_exporter on each machine, which gathers about 600-800 (!) metrics such as load, disk space and interface stats. This exporter listens on port 9100 and effectively works as an HTTP server that responds to “GET /metrics HTTP/1.1” and spits out several hundred lines like:

node_forks 7916
node_intr 3.8090539e+07
node_load1 0.47
node_load15 0.21
node_load5 0.31
node_memory_Active 6.23935488e+08

Other exporters listen on different ports and export stats for apache or mysql while more complicated ones will act as proxies for outgoing tests (via snmp, icmp, http). The full list of them is on the Prometheus website.

So my problem was that I wanted to check my virtual machine that is on Linode. The machine only has a public IP and I didn’t want to:

  1. Allow random people to check my server’s stats
  2. Have to set up some sort of VPN.

So I decided that the best way was to just put a user/password on the exporter.

However the node_exporter does not implement authentication itself since the authors wanted to avoid maintaining lots of security code. So I decided to put it behind a reverse proxy using apache mod_proxy.

Step 1 – Install node_exporter

Node_exporter is a single binary that I started via an upstart script. As part of the upstart script I told it to listen on localhost port 19100 instead of port 9100 on all interfaces.

# cat /etc/init/prometheus_node_exporter.conf
description "Prometheus Node Exporter"

start on startup

chdir /home/prometheus/

script
/home/prometheus/node_exporter -web.listen-address 127.0.0.1:19100
end script

Once I start the exporter a simple “curl 127.0.0.1:19100/metrics” makes sure it is working and returning data.

Step 2 – Add Apache proxy entry

First make sure apache is listening on port 9100. On Ubuntu, edit the /etc/apache2/ports.conf file and add the line:

Listen 9100

Next create a simple apache proxy without authentication (don’t forget to enable mod_proxy too):

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
  ServerName prometheus

  CustomLog /var/log/apache2/prometheus_access.log combined
  ErrorLog /var/log/apache2/prometheus_error.log

  ProxyRequests Off
  <Proxy *>
    Allow from all
  </Proxy>

  ProxyErrorOverride On
  ProxyPass / http://127.0.0.1:19100/
  ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

This simply takes requests on port 9100 and forwards them to localhost port 19100. Now reload apache and test via curl to port 9100. You can also use netstat to see what is listening on which ports:

Proto Recv-Q Send-Q Local Address   Foreign Address State  PID/Program name
tcp   0      0      127.0.0.1:19100 0.0.0.0:*       LISTEN 8416/node_exporter
tcp6  0      0      :::9100         :::*            LISTEN 8725/apache2
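
For reference, that netstat output came from something along these lines (treat the exact flags as a sketch):

sudo netstat -ltnp | grep 9100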

 

Step 3 – Get Prometheus working

I’ll assume at this point you have other servers working. What you need to do now is add the following entries for your server in your prometheus.yml file.

First add basic_auth into your scrape config for “node” and then add your servers, eg:

- job_name: 'node'

  scrape_interval: 15s

  basic_auth: 
    username: prom
    password: mypassword

  static_configs:
    - targets: ['myserver.example.com:9100']
      labels: 
         group: 'servers'
         alias: 'myserver'

Now restart Prometheus and make sure it is working. You should see the following lines in your apache logs, and stats for the server should start appearing:

10.212.62.207 - - [31/Jul/2016:11:31:38 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:31:53 +0000] "GET /metrics HTTP/1.1" 200 11398 "-" "Go-http-client/1.1"
10.212.62.207 - - [31/Jul/2016:11:32:08 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"

Notice that connections are 15 seconds apart, get HTTP code 200 and are 11k in size. The Prometheus server is using authentication but apache doesn’t need it yet.

Step 4 – Enable authentication

Now create an apache password file:

htpasswd -cb /home/prometheus/passwd prom mypassword

and update your apache entry to the following to enable authentication:

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
  ServerName prometheus

  CustomLog /var/log/apache2/prometheus_access.log combined
  ErrorLog /var/log/apache2/prometheus_error.log

  ProxyRequests Off
  <Proxy *>
    Order deny,allow
    Allow from all

    AuthType Basic
    AuthName "Password Required"
    AuthBasicProvider file
    AuthUserFile "/home/prometheus/passwd"
    Require valid-user
  </Proxy>

  ProxyErrorOverride On
  ProxyPass / http://127.0.0.1:19100/
  ProxyPassReverse / http://127.0.0.1:19100/
</VirtualHost>

After you reload apache you should see the following:

10.212.56.135 - prom [01/Aug/2016:04:42:08 +0000] "GET /metrics HTTP/1.1" 200 11394 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:23 +0000] "GET /metrics HTTP/1.1" 200 11392 "-" "Go-http-client/1.1"
10.212.56.135 - prom [01/Aug/2016:04:42:38 +0000] "GET /metrics HTTP/1.1" 200 11391 "-" "Go-http-client/1.1"

Note that the “prom” in field 3 indicates that we are logging in for each connection. If you try to connect to the port without authentication you will get:

Unauthorized
This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.

That is pretty much it. Note that you will need to add additional VirtualHost entries for more ports if you run other exporters on the server.
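
For example, if you also ran the mysqld exporter on the same host, a second entry along these lines would do it. This is only a sketch: I'm assuming that exporter has been told to listen on local port 19104 and that you want to publish it on port 9104 (you'd also need a matching "Listen 9104" in ports.conf):

<VirtualHost *:9104>
  ServerName prometheus-mysqld

  ProxyRequests Off
  <Proxy *>
    AuthType Basic
    AuthName "Password Required"
    AuthBasicProvider file
    AuthUserFile "/home/prometheus/passwd"
    Require valid-user
  </Proxy>

  ProxyErrorOverride On
  ProxyPass / http://127.0.0.1:19104/
  ProxyPassReverse / http://127.0.0.1:19104/
</VirtualHost>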

 


24 July 2016

Andrew Ruthven

Allow forwarding from VoiceMail to cellphones

Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers that hit VoiceMail to be forwarded to the callee's cellphone, if allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose.

I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and looks for a rule that matches the a extension. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN).

This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is to arrange for a rule per extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is for extensions that aren't allowed to be forwarded to a cellphone. If someone calling their VoiceMail hits * their call will be hung up and I get nasty log messages about no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously.

Easy! Naturally this approach won't work as-is for other people trying to do this, but given I couldn't find any write-ups on how to do this, I thought it might be useful to others.

Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)
exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup
exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup
exten => _s-.,1,Goto(s-NOANSWER,1)
exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
  same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.
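
For what it's worth, that generated file just ends up holding one Goto rule per extension that's allowed to forward; a hypothetical two-extension version might look like this (the second extension and number are made up):

; extensions-vmfwd-auto.conf (generated)
exten => 7231,1,Goto(pstn,027xxx,1)
exten => 7232,1,Goto(pstn,021xxx,1)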

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.

23 July 2016

Francois Marier


Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s
print

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
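
As a rough sketch of those two steps (the sector values below are made up; use the exact start/end values shown by print on /dev/sda, and repeat for all 4 partitions), the commands inside the same parted session on /dev/sdb look something like this:

mkpart primary 2048s 4095s
toggle 1 bios_grub
mkpart primary 4096s 976771071s
toggle 2 raid
print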

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab with the one returned by mkswap (or by running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan (both lookups are sketched below)
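
Roughly, those two lookups look like this (using the device name from this article; the UUIDs on your system will of course differ):

blkid /dev/md1
mdadm --detail --scan | grep /dev/md1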

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb

Simon Lyall

Gather Conference 2016 – Afternoon

The Gathering

Chloe Swarbrick

  • Whose responsibility is it to disrupt the system?
  • Maybe try and engage with the system we have for a start before writing it off.
  • You disrupt the system yourself or you hold the system accountable

Nick McFarlane

  • He wrote a book
  • Rock Stars are dicks to work with

So you want to Start a Business

  • Hosted by Reuben and Justin (the accountant)
  • Things you need to know in your first year of business
  • How serious is the business, what sort of structure
    • If you are serious, you have to do things properly
    • Have you got paying customers yet
    • Could just be an idea or a hobby
  • Sole Trader vs Incorporated company vs Trust vs Partnership
  • Incorporated
    • Directors and Shareholders needed to be decided on
    • Can take just half an hour
  • when to get a GST number?
    • If over $60k turnover a year
    • If you have lots of stuff you plan to claim back.
  • Have an accounting System from Day 1 – Xero Pretty good
  • Get an advisor or mentor that is not emotionally invested in your company
  • If partnership then split up responsibilities so you can hold each other accountable for specific items
  • If you are using Xero then your accountant should be using Xero directly not copying it into a different system.
  • Remuneration
    • Should have a shareholders agreement
    • PAYE possibility from drawings or put 30% aside
    • Even if only a small hobby company you will need to declare income to IRD, especially at a non-trivial level.
  • What Level to start at Xero?
    • Probably from the start if the business is intended to be serious
    • A bit of pain to switch over later
  • Don’t forget about ACC
  • Remember you are due provisional tax once you get over the $2500 for the previous year.
  • Home Office expense claim – claim percentage of home rent, power etc
  • Get in professionals to help

Diversity in Tech

  • Diversity is important
    • Why is it important?
    • Does it mean the same for everyone
  • If we have people with different “ways of thinking” then we will have diverse views and hence wider and better solutions
  • example “Polish engineer could analyse a Polish-specific character input error”
  • example “Controlling a robot in Samoan”, robots are not just in english
  • Stereotypes for some groups to specific jobs, eg “Indians in tech support”
  • Example: All hires went though University of Auckland so had done the same courses etc
  • How do you fix it when people innocently hire everyone from the same background? How do you break the pattern? Nobody wants to be the first different hire and have to represent everybody in that group.
  • I didn’t want to be a trail-blazer
  • Wow’ed out at “Women in tech” event, first time saw “majority of people are like me” in a bar.
  • “If he is a white male and I’m going to hire him on the team that is already full of white men he better be exceptional”
  • Worried about implication that “diversity” vs “Meritocracy” and that diverse candidates are not as good
  • Usual over-representation of white-males in the discussion even in topics like this.
  • Notion that somebody was only hired to represent diversity is very harmful especially for that person
  • If you are hiring for a tech position then 90% of your candidates will be white males; try to put your diversity effort into getting a more diverse group applying for the jobs, not into tilting the actual hiring.
  • Even in maker spaces where anyone is welcome, there are a lot fewer women. Blames mens mags having things unfinished, women’s mags everything is perfect so women don’t want to show off something that is unfinished.
  • Need to make the workforce diverse now to match the younger people coming into it
  • Need to cover “power income” people who are not exposed to tech
  • Even a small number are role models for the future for the young people today
  • Also need to address the problem of women dropping out of tech in the 30s and 40s. We can’t push girls into an “environment filled with acid”
  • Example taking out “cocky arrogant males” from classes into “advanced stream” and the remaining class saw women graduating and staying in at a much higher rate.

Podcasting

  • Paul Spain from Podcast New Zealand organising
  • Easiest to listen to when doing manual stuff or in car or bus
  • Need to avoid overload of commercials, eg interview people from the company about the topic of interest rather than about their product
  • Big firms putting money into podcasting
  • In the US 21% of the market are listening every single month. In NZ perhaps more like 5% since not a lot of awareness or local content
  • Some radio shows are being re-cut and published as podcasts
  • Not a good directory of NZ podcasts
  • Advise people to use proper equipment if possible if it is more than a one-off. Bad sound quality is very noticeable.
  • One person: 5 part series on immigration and immigrants in NZ
  • Making the charts is a big exposure
  • Apple’s “new and noteworthy” list
  • Domination by traditional personalities and existing broadcasters at present. But that only helps traction within New Zealand

 

 


22 July 2016

Simon Lyall

Gather Conference 2016 – Morning

At the Gather Conference again for about the 6th time. It is a 1-day tech-orientated unconference held in Auckland every year.

The day is split into seven streamed sessions each 40 minutes long (of about 8 parallel rooms of events that are each scheduled and run by attendees) plus an opening and a keynote session.

How to Steer your own career – Shirley Tricker

  • Asked people hands up on their current job situation, FT vs PT, single vs multiple jobs
  • Alternatives to traditional careers of work. possible to craft your career
  • Recommended Blog – Free Range Humans
  • Job vs Career
    • Job – something you do for somebody else
    • Career – Unique to you, your life’s work
    • Career – What you do to make a contribution
  • Predicted that a greater number of people will not stay with one (or even 2 or 3) employers through their career
  • Success – defined by your goals, lifestyle wishes
  • What are your strengths – Know how you are valuable, what you can offer people/employers, ways you can branch out
  • Hard and Soft Skills (soft skills defined broadly, things outside a regular job description)
  • Develop soft skills
    • List skills and review ways to develop and improve them
    • Look at people you admire and copy them
    • Look at job descriptions
  • Skills you might need for a portfolio career
    • Good at organising, marketing, networking
    • flexible, work alone, negotiation
    • Financial literacy (handle your accounts)
  • Getting started
    • Start small ( don’t give up your day job overnight)
    • Get training via work or independently
    • Develop your strengths
    • Fix weaknesses
    • Small experiments
    • cheap and fast (start a blog)
    • Don’t have to start out as an expert, you can learn as you go
  • Just because you are in control doesn’t make it easy
  • Resources
    • Careers.govt.nz
    • Seth Goden
    • Tim Ferris
    • eg outsources her writing.
  • Tools
    • Xero
    • WordPress
    • Canva for images
    • Meetup
    • Odesk and other freelance websites
  • Feedback from Audience
    • Have somebody to report to, eg meet with friend/adviser monthly to chat and bounce stuff off
    • Cultivate Women’s mentoring group
    • This doesn’t seem to filter through to young people, they feel they have to pick a career at 18 and go to university to prep for that.
    • Give advice to people and this helps you define
    • Try and make the world a better place: enjoy the work you are doing, be happy and proud of the outcome of what you are doing and be happy that it is making the world a bit better
    • How do I “motivate myself” without a push from my employer?
      • Do something that you really want to do so you won’t need external motivation
      • Find someone who is doing something right and see what they did
      • Awesome for introverts
    • If you want to start a startup then work for one to see what it is like and learn skills
    • You don’t have to have a startup in your 20s, you can learn your skills first.
    • Sometimes you have to do a crappy job at the start to get onto the cool stuff later. You have to look at the goal or path sometimes

Books and Podcasts – Tanya Johnson

Stuff people recommend

  • Intelligent disobedience – Ira
  • Hamilton the revolution – based on the musical
  • Never Split the difference – Chris Voss (ex hostage negotiator)
  • The Three Body Problem – Lia CiXin – Sci Fi series
  • Lucky Peach – Food and fiction
  • Unlimited Memory
  • The Black Swan and Fooled by Randomness
  • The Setup (usesthis.com) website
  • Tim Ferris Podcast
  • Freakonomics Podcast
  • Moonwalking with Einstein
  • Clothes, Music, Boy – Viv Albertine
  • TIP: Amazon Whispersync for Kindle App (audiobook across various platforms)
  • TIP: Blinkist – 15 minute summaries of books
  • An Intimate History of Humanity – Theodore Zenden
  • How to Live – Sarah Bakewell
  • TIP: Pocketcasts is a good podcast app for Android.
  • Tested Podcast from Mythbusters people
  • Trumpcast podcast from Slate
  • A Fighting Chance – Elizabeth Warren
  • The Choice – Og Mandino
  • The Good life project Podcast
  • The Ted Radio Hour Podcast (on 1.5 speed)
  • This American Life
  • How to be a Woman by Caitlin Moran
  • The Hard thing about Hard things books
  • Flashboys
  • The Changelog Podcast – Interview people doing Open Source software
  • The Art of Oppertunity Roseland Zander
  • Red Rising Trilogy by Piers Brown
  • On the Rag podcast by the Spinoff
  • Hamish and Andy podcast
  • Radiolab podcast
  • Hardcore History podcast
  • Car Talk podcast
  • Ametora – Story of Japanese menswear since WW2
  • .net rocks podcast
  • How not to be wrong
  • Savage Love Podcast
  • Friday Night Comedy from the BBC (especially the News Quiz)
  • Answer me this Podcast
  • Back to work podcast
  • Reply All podcast
  • The Moth
  • Serial
  • American Blood
  • The Productivity podcast
  • Keeping it 1600
  • Ruby Rogues Podcast
  • Game Change – John Heilemann
  • The Road less Travelled – M Scott Peck
  • The Power of Now
  • Snow Crash – Neil Stevensen

My Journey to becoming a Change Agent – Suki Xiao

  • Start of 2015 was a policy adviser at Ministry
  • Didn’t feel connected to job and people making policies for
  • Outside of work was a Youthline counsellor
  • Wanted to make a difference, organised some internal talks
  • Wanted to make changes, got told had to be a manager to make changes (10 years away)
  • Found out about R9 accelerator. Startup accelerator looking at Govt/Business interaction and pain points
  • Got seconded to it
  • First month was very hard.
  • Speed of change was difficult, “Lean into the discomfort” – Team motto
  • Be married to the problem
    • Specific problem was making sure there were enough seasonal workers; came up with a solution but customers didn’t like it. It was not solving the actual problem customers had.
    • Team was married to the problem, not married to the solution
  • When went back to old job, found slower pace hard to adjust back
  • Got offered a job back at the accelerator, coaching up to 7 teams.
    • Very hard work, lots of work, burnt out
    • 50% pay cut
    • Worked out wasn’t “Agile” herself
    • Started doing personal Kanban boards
    • Cut back number of teams coaching, higher quality
  • Spring Board
    • Place can work at sustainable pace
    • Working at Nomad 8 as an independent Agile consultant
    • Work on separate companies but with some support from colleagues
  • Find my place
    • Joined Xero as an Agile Team Facilitator
  • Takeaways
    • Anybody can be a change agent
    • An environment that supports and empowers
    • Look for support
  • Conversation on how you overcome the “Everest” big huge goal
    • Hard to get past the first step for some – the speaker found she tended to do first and think later. Others over-thought beforehand
    • It seems hard but think of the hard things you have done in your life and it is usually not as bad
    • Motivate yourself by having no money and having no choice
    • Point all the bad things out in the open, visualise them all and feel better cause they will rarely happen
    • Learn to recognise your bad patterns of thoughts
    • “The War of Art” by Steven Pressfield (skip the Angels chapter)
  • Are places serious about Agile instead of just paying lip-service?
    • Questioner was older and found places wanted younger Agile coaches
    • Companies had to completely change their organisation, eg replace project managers
    • eg CEO is still waterfall but people lower down are into Agile. Not enough management buy-in.
    • Speaker left one client that wasn’t serious about changing
  • Went through an Agile process, made “Putting Agile into the Org” the product
  • Show customers what the value is
  • Certification advice: all sorts of options. The Nomad8 course is recommended

 


18 May 2016

NZOSS News

Attend ITx - NZOSS stream on Day 3

This year the IITP's annual conference, ITx, is a bit different from years past. It will feature 11 other national technology organisations' "streams" as well, including one for the NZOSS community discussing aspects of open source, open business, open data, and open standards. We've got speakers, a lightning talk session (so put together a 5 minute presentation about your favourite open topic) and a panel discussion.

We're finalising our programme now, but we encourage NZOSS community members to take part: you can sign up for the conference (either the full conference, or just the 3rd day NZOSS stream) and please contact us regarding sponsorship.

08 May 2016

Tom Ryder

Cron best practices

The time-based job scheduler cron(8) has been around since Version 7 Unix, and its crontab(5) syntax is familiar even for people who don’t do much Unix system administration. It’s standardised, reasonably flexible, simple to configure, and works reliably, and so it’s trusted by both system packages and users to manage many important tasks.

However, like many older Unix tools, cron(8)‘s simplicity has a drawback: it relies upon the user to know some detail of how it works, and to correctly implement any other safety checking behaviour around it. Specifically, all it does is try and run the job at an appropriate time, and email the output. For simple and unimportant per-user jobs, that may be just fine, but for more crucial system tasks it’s worthwhile to wrap a little extra infrastructure around it and the tasks it calls.

There are a few ways to make the way you use cron(8) more robust if you’re in a situation where keeping track of the running job is desirable.

Apply the principle of least privilege

The sixth column of a system crontab(5) file is the username of the user as which the task should run:

0 * * * *  root  cron-task

To the extent that is practical, you should run the task as a user with only the privileges it needs to run, and nothing else. This can sometimes make it worthwhile to create a dedicated system user purely for running scheduled tasks relevant to your application.

0 * * * *  myappcron  cron-task

This is not just for security reasons, although those are good ones; it helps protect you against nasties like scripting errors attempting to remove entire system directories.

Similarly, for tasks with database systems such as MySQL, don’t use the administrative root user if you can avoid it; instead, use or even create a dedicated user with a unique random password stored in a locked-down ~/.my.cnf file, with only the needed permissions. For a MySQL backup task, for example, only a few permissions should be required, including SELECT, SHOW VIEW, and LOCK TABLES.
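
As a sketch of that setup (the database name, user name, and password here are all placeholders, and the GRANT needs to be run with an administrative account), creating the restricted user and its locked-down credentials file might look like:

$ mysql -e "GRANT SELECT, SHOW VIEW, LOCK TABLES ON myappdb.* TO 'myappbackup'@'localhost' IDENTIFIED BY 'randompassword';"
$ printf '[client]\nuser=myappbackup\npassword=randompassword\n' > ~/.my.cnf
$ chmod 600 ~/.my.cnf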

In some cases, of course, you really will need to be root. In particularly sensitive contexts you might even consider using sudo(8) with appropriate NOPASSWD options, to allow the dedicated user to run only the appropriate tasks as root, and nothing else.
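
A sketch of that arrangement, with the user and script names as placeholders: a sudoers(5) rule allowing exactly one command, plus the matching crontab(5) line.

# /etc/sudoers.d/myappcron
myappcron ALL=(root) NOPASSWD: /usr/local/bin/cron-task-needing-root

# crontab entry
0 * * * *  myappcron  sudo -n /usr/local/bin/cron-task-needing-root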

Test the tasks

Before placing a task in a crontab(5) file, you should test it on the command line, as the user configured to run the task and with the appropriate environment set. If you’re going to run the task as root, use something like su or sudo -i to get a root shell with the user’s expected environment first:

$ sudo -i -u cronuser
$ cron-task

Once the task works on the command line, place it in the crontab(5) file with the timing settings modified to run the task a few minutes later, and then watch /var/log/syslog with tail -f to check that the task actually runs without errors, and that the task itself completes properly:

May  7 13:30:01 yourhost CRON[20249]: (you) CMD (cron-task)

This may seem pedantic at first, but it becomes routine very quickly, and it saves a lot of hassles down the line as it’s very easy to make an assumption about something in your environment that doesn’t actually hold in the one that cron(8) will use. It’s also a necessary acid test to make sure that your crontab(5) file is well-formed, as some implementations of cron(8) will refuse to load the entire file if one of the lines is malformed.

If necessary, you can set arbitrary environment variables for the tasks at the top of the file:

MYVAR=myvalue

0 * * * *  you  cron-task

Don’t throw away errors or useful output

You’ve probably seen tutorials on the web where in order to keep the crontab(5) job from sending standard output and/or standard error emails every five minutes, shell redirection operators are included at the end of the job specification to discard both the standard output and standard error. This kluge is particularly common for running web development tasks by automating a request to a URL with curl(1) or wget(1):

*/5 * * * *  root  curl https://example.com/cron.php >/dev/null 2>&1

Ignoring the output completely is generally not a good idea, because unless you have other tasks or monitoring ensuring the job does its work, you won’t notice problems (or know what they are), when the job emits output or errors that you actually care about.

In the case of curl(1), there are just way too many things that could go wrong, that you might notice far too late:

  • The script could get broken and return 500 errors.
  • The URL of the cron.php task could change, and someone could forget to add a HTTP 301 redirect.
  • Even if a HTTP 301 redirect is added, if you don’t use -L or --location for curl(1), it won’t follow it.
  • The client could get blacklisted, firewalled, or otherwise impeded by automatic or manual processes that falsely flag the request as spam.
  • If using HTTPS, connectivity could break due to cipher or protocol mismatch.

The author has seen all of the above happen, in some cases very frequently.

As a general policy, it’s worth taking the time to read the manual page of the task you’re calling, and to look for ways to correctly control its output so that it emits only the output you actually want. In the case of curl(1), for example, I’ve found the following formula works well:

curl -fLsS -o /dev/null http://example.com/

  • -f: If the HTTP response code is an error, emit an error message rather than the 404 page.
  • -L: If there’s an HTTP 301 redirect given, try to follow it.
  • -sS: Don’t show progress meter (-S stops -s from also blocking error messages).
  • -o /dev/null: Send the standard output (the actual page returned) to /dev/null.

This way, the curl(1) request should stay silent if everything is well, per the old Unix philosophy Rule of Silence.

You may not agree with some of the choices above; you might think it important to e.g. log the complete output of the returned page, or to fail rather than silently accept a 301 redirect, or you might prefer to use wget(1). The point is that you take the time to understand in more depth what the called program will actually emit under what circumstances, and make it match your requirements as closely as possible, rather than blindly discarding all the output and (worse) the errors. Work with Murphy’s law; assume that anything that can go wrong eventually will.

Send the output somewhere useful

Another common mistake is failing to set a useful MAILTO at the top of the crontab(5) file, as the specified destination for any output and errors from the tasks. cron(8) uses the system mail implementation to send its messages, and typically, default configurations for mail agents will simply send the message to an mbox file in /var/mail/$USER, that they may not ever read. This defeats much of the point of mailing output and errors.

This is easily dealt with, though; ensure that you can send a message to an address you actually do check from the server, perhaps using mail(1):

$ printf '%s\n' 'Test message' | mail -s 'Test subject' you@example.com

Once you’ve verified that your mail agent is correctly configured and that the mail arrives in your inbox, set the address in a MAILTO variable at the top of your file:

MAILTO=you@example.com

0 * * * *    you  cron-task-1
*/5 * * * *  you  cron-task-2

If you don’t want to use email for routine output, another method that works is sending the output to syslog with a tool like logger(1):

0 * * * *   you  cron-task | logger -it cron-task

Alternatively, you can configure aliases on your system to forward system mail destined for you on to an address you check. For Postfix, you’d use an aliases(5) file.
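
With Postfix, for example, that's a one-line entry in /etc/aliases (destination address assumed), followed by a run of newaliases(1) to rebuild the alias database:

# /etc/aliases
you: you@example.com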

I sometimes use this setup in cases where the task is expected to emit a few lines of output which might be useful for later review, but send stderr output via MAILTO as normal. If you’d rather not use syslog, perhaps because the output is high in volume and/or frequency, you can always set up a log file /var/log/cron-task.log … but don’t forget to add a logrotate(8) rule for it!
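
Such a rule, dropped into something like /etc/logrotate.d/cron-task, might look like this (the retention settings are just an example):

/var/log/cron-task.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}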

Put the tasks in their own shell script file

Ideally, the commands in your crontab(5) definitions should only be a few words, in one or two commands. If the command is running off the screen, it’s likely too long to be in the crontab(5) file, and you should instead put it into its own script. This is a particularly good idea if you want to reliably use features of bash or some other shell besides POSIX/Bourne /bin/sh for your commands, or even a scripting language like Awk or Perl; by default, cron(8) uses the system’s /bin/sh implementation for parsing the commands.

Because crontab(5) files don’t allow multi-line commands, and have other gotchas like the need to escape percent signs % with backslashes, keeping as much configuration out of the actual crontab(5) file as you can is generally a good idea.

If you’re running cron(8) tasks as a non-system user, and can’t add scripts into a system bindir like /usr/local/bin, a tidy method is to start your own, and include a reference to it as part of your PATH. I favour ~/.local/bin, and have seen references to ~/bin as well. Save the script in ~/.local/bin/cron-task, make it executable with chmod +x, and include the directory in the PATH environment definition at the top of the file:

PATH=/home/you/.local/bin:/usr/local/bin:/usr/bin:/bin
MAILTO=you@example.com

0 * * * *  you  cron-task

Having your own directory with custom scripts for your own purposes has a host of other benefits, but that’s another article…
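
To make the crontab(5) entry above concrete, a minimal ~/.local/bin/cron-task wrapper might be no more than this (the command it runs is a placeholder):

#!/bin/sh
# Wrapper script so the crontab line itself stays short.
set -eu
cd /home/you/app
exec ./run-the-actual-job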

Avoid /etc/crontab

If your implementation of cron(8) supports it, rather than having an /etc/crontab file a mile long, you can put tasks into separate files in /etc/cron.d:

$ ls /etc/cron.d
system-a
system-b
raid-maint

This approach allows you to group the configuration files meaningfully, so that you and other administrators can find the appropriate tasks more easily; it also allows you to make some files editable by some users and not others, and reduces the chance of edit conflicts. Using sudoedit(8) helps here too. Another advantage is that it works better with version control; if I start collecting more than a few of these task files or to update them more often than every few months, I start a Git repository to track them:

$ cd /etc/cron.d
$ sudo git init
$ sudo git add --all
$ sudo git commit -m "First commit"

If you’re editing a crontab(5) file for tasks related only to the individual user, use the crontab(1) tool; you can edit your own crontab(5) by typing crontab -e, which will open your $EDITOR to edit a temporary file that will be installed on exit. This will save the files into a dedicated directory, which on my system is /var/spool/cron/crontabs.

On the systems maintained by the author, it’s quite normal for /etc/crontab never to change from its packaged template.

Include a timeout

cron(8) will normally allow a task to run indefinitely, so if this is not desirable, you should consider either using options of the program you’re calling to implement a timeout, or including one in the script. If there’s no option for the command itself, the timeout(1) command wrapper in coreutils is one possible way of implementing this:

0 * * * *  you  timeout 10s cron-task

Greg’s wiki has some further suggestions on ways to implement timeouts.

Include file locking to prevent overruns

cron(8) will start a new process regardless of whether its previous runs have completed, so if you wish to avoid this for long-running tasks, on GNU/Linux you could use the flock(1) wrapper for the flock(2) system call to set an exclusive lockfile, in order to prevent the task from running more than one instance in parallel.

0 * * * *  you  flock -nx /var/lock/cron-task cron-task

Greg’s wiki has some more in-depth discussion of the file locking problem for scripts in a general sense, including important information about the caveats of “rolling your own” when flock(1) is not available.

If it’s important that your tasks run in a certain order, consider whether it’s necessary to have them in separate tasks at all; it may be easier to guarantee they’re run sequentially by collecting them in a single shell script.
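
A sketch of that approach: one wrapper script, called from a single crontab(5) line, which runs the steps in order and stops at the first failure (the task names are placeholders).

#!/bin/sh
# Run the related tasks strictly in sequence; abort if any step fails.
set -e
task-one
task-two
task-three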

Do something useful with exit statuses

If your cron(8) task or commands within its script exit non-zero, it can be useful to run commands that handle the failure appropriately, including cleanup of appropriate resources, and sending information to monitoring tools about the current status of the job. If you’re using Nagios Core or one of its derivatives, you could consider using send_nsca to send passive checks reporting the status of jobs to your monitoring server. I’ve written a simple script called nscaw to do this for me:

0 * * * *  you  nscaw CRON_TASK -- cron-task
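
If you're not running Nagios or similar, even a plain fallback on the crontab(5) line itself can flag a failure by mail (the command and address are assumed):

0 * * * *  you  cron-task || echo "cron-task failed with status $?" | mail -s 'cron-task failed' you@example.com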

Consider alternatives to cron(8)

If your machine isn’t always on and your task doesn’t need to run at a specific time, but rather needs to run once daily or weekly, you can install anacron and drop scripts into the cron.hourly, cron.daily, cron.monthly, and cron.weekly directories in /etc, as appropriate. Note that on Debian and Ubuntu GNU/Linux systems, the default /etc/crontab contains hooks that run these, but they run only if anacron(8) is not installed.

If you’re using cron(8) to poll a directory for changes and run a script if there are such changes, on GNU/Linux you could consider using a daemon based on inotifywait(1) instead.
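
A rough sketch of that pattern (the directory and handler script are placeholders), run as a long-lived process rather than from cron(8):

#!/bin/sh
# React to files arriving in /var/spool/incoming as they appear.
while inotifywait -qq -e create -e moved_to /var/spool/incoming; do
    process-incoming-files
done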

Finally, if you require more advanced control over when and how your task runs than cron(8) can provide, you could perhaps consider writing a daemon to run on the server consistently and fork processes for its task. This would allow running a task more often than once a minute, as an example. Don’t get too bogged down into thinking that cron(8) is your only option for any kind of asynchronous task management!

01 May 2016

Nat Torkington

Lesson: Moats and Flywheels

It’s bloody hard to build something new into the world.  Don’t let anyone tell you it’s easy: there’s a lot of unfunded and unrecognised work that you have to do before you can get to the point where fame and/or fortune arrive.  And once you’ve finished the painful birth of your new service or product into the world, maybe even defining an entirely new product category or unserviced market segment, dozens of unimaginative parasites will appear from nowhere and try to eat your lunch.

Hapara had this: we have a track record of beautiful firsts.  First Education solution for Google Apps, first device management for Chromebooks.  And, sure enough, as soon as we’d proven that Google Apps could be improved for educators, Google popped up and entered the space with Classroom.  Dozens of companies sprung up to help educators manage Chromebooks.  I like to think that the Hapara team do it better than those Johnny-Come-Latelys, but regardless of what I think there’s still competition. (I’ve never been at a place that had so many Firsts in such a short period of time; there’s something incredible about Tony and Jan’s ability to identify great product niches)

I first noticed a similar dynamic at O’Reilly’s Emerging Technology conference.  My buddy Rael had built a conference full of incredible cutting edge technologists, the kind of people who follow Alan Kay’s dictum and predict the future by inventing it.  But I soon realised that while the first movers started as Crazy Dreamers On The Edge, they were quickly joined by competitors who were happy to copy their ideas.  Sometimes those first movers would be beaten by the competitors, who had better market access or other products to piggyback on.  Being first to market doesn’t mean you’ll always lead the market.

So what’s an entrepreneur to do?  I look for moats and/or flywheels in the business plans and pitches that come to me.

Moats are the defences around your castle’s treasure: do you have patents, or an exclusive arrangement with suppliers, or an exclusive arrangement with the major distributors, or all the world’s experts in the field working for you?  Those are the sorts of things that thwart would-be competitors, or at least slow them down because now they have to blaze their own trail to the sweet land of Product-Market-Fit that your startup has identified.

Flywheel is Jeff Bezos’s term for a virtuous circle in your business.  As Tim O’Reilly described it, “your app gets better as more people use it.”  More specifically, I’m looking for geometric (exponential) value in the face of arithmetic (linear) customer growth.  If twice as many people use the telephone, the phone network is four times as valuable because the value of the phone network isn’t just for the new user—now all the previous users can call that new person too.  Amazon’s great at building out parts of their empire such that the success of one part makes other parts stronger; check out Jeff Bezos’s letters to shareholders for more on his thinking.

This is strongly related to the idea of virality.  If number of users is important to you then you want your product to attract users by itself.  The most viral products don’t rely on friends telling friends; instead, merely by using them you’re dragging your friends in.  Examples from history are Plaxo (address book in the cloud, where all your friends were told you’re managing contacts through Plaxo and is this info you have for them accurate?) and any sharing service like Dropbox or Google Docs, where inviting people to collaborate/share files means that they’re becoming users of your service.

Flywheels mean that the time advantage you have by being first to discover the new product/market is directly turned into a more durable value advantage—each new user you get makes your system not just one user’s worth of better.

Neither moats nor flywheels guarantee success, but they can keep the wolves from your heels while you chase it!

26 April 2016

Tim Penhey


It has been too long

Well, it has certainly been a lot longer since I wrote a post than I thought.

My work at Canonical still has me on the Juju team. Juju has come a long way in the last few years, and we are on the final push for the 2.0 version. This was initially intended to come out with the Xenial release, but unfortunately was not ready. Xenial has 2.0-beta4 right now, soon to be beta 6. Hoping that real soon now we'll step through the release candidates to a final release. This will be SRU'ed into both Xenial and Trusty.

I plan to do some more detailed posts on some of the Go utility libraries that have come out of the Juju work. In particular, talking again about loggo which I moved under the "github.com/juju" banner, and the errors package.

Recent work has had me look at the database agnostic model representations for migrating models from one controller to another, and also at gomaasapi - the Go library for talking with MAAS. Perhaps more on that later.

13 April 2016

Nat Torkington

NZ Herald’s “News” Is Shit and Lazy

tl;dr: Today’s NZ Herald drove me to this analysis/rant.  I’m giving the damn thing up and will get my news from other sites, Radio New Zealand and NewsHub.  You should too.

I can’t imagine how disheartening it is to work as a journalist in New Zealand.  Almost as disheartening as it is to be a news consumer in New Zealand.

The newspapers are shit.  I include Stuff, owned by Fairfax, in this as Stuff has become indistinguishable from the NZ Herald—they race to cover each other’s stories and make sure nobody sees a different set of “news” when they visit the other’s pages.  There are two rays of hope, though: Radio New Zealand and NewsHub.  I’d never have picked it, but TV3’s NewsHub seems to cover more actual news than newspapers (or, perhaps, features Real News more prominently than Rugby Player In Celebrity Vajazzling Tragedy And What Does It Mean For Your House Prices stories).

What do I look for?

  1. New Zealand focus, as opposed to printing Australian or UK stories. If I’m a Pom living in NZ, I can skim the Guardian or Daily Mail and choose which Brit stories I want to read, instead of having that selection made for me by the type of person who thinks “Woman’s excruciating pain baffled docs” is a story.  Stories about my region are more relevant than stories about other regions.
  2. Beyond a reprinted press release.  A press release tells me what the originator of the press release wants me to think. What context does this sit in—who is behind the press release, is there a real problem, what’s the history behind this solution, what data supports or challenges this, what do neutral experts think?  I’d like to become better informed about the issue, not simply the latest step.
  3. Relevance.  To me that means I can take action on it.  It might be by voting or by spending or by travelling or contributing.

What I don’t look for:

  1. Celebrities, who help me not at all.
  2. House prices, which help me not at all.
  3. Foreign stories, which help me not at all.
  4. Scare stories, which help me not at all.

The last point is particularly relevant. The Herald’s top 15 stories feature: cursed, bitter, murder, sexual, excruciating, illness, attack, murder, breast, punched.  15 stories, 10 of which are scare stories or framed as such.  The framing with sex and violence is like deep-frying and covering the news in sugar: it makes more people want to consume it, but what they’re consuming harms them.  As one researcher wrote: “Fear has become a standard feature of news formats steeped in a problem frame oriented to entertainment. Entertainment abhors ambiguity, while truth and effective intervention efforts to improve social life reside in ambiguity. It is this tension between entertaining and familiar news reports, on the one hand, and civic understanding, on the other hand, that remains unresolved.”

First-pass breakdown of the NZ Herald coverage follows.  I’ve taken the website section by section, and looked at how each story works for me.  I should probably have done this in a spreadsheet, rows for stories and columns for what I look & don’t look for.

The NZ Herald web site prominently features 15 stories under their logo. Each is shown with headline and thumbnail photo and teaser text.  Few qualify as relevant news.

  1. ‘Cursed resort’ of Rarotonga: a Bayleys real estate listing; only a few readers can do anything with this “news”.
  2. Bitter feud behind 60 Minutes Arrest: Australian news, no meaningful action a Kiwi reader can take on it.
  3. Toddler killer walks with empty pram: Australian news, of no value to NZ readers except titillation.
  4. DJ dropped over sexual allegations: arguable news, definitely salacious, readers can act on this if they’re in Auckland. Perhaps I was shown this story because I am near Auckland, and other readers saw other stories upon which they can act.
  5. Is Roger Federer really a nice guy? Unactionable celebrity toss.
  6. Woman’s excruciating pain baffled docs: American story, “the condition is also called mesenteric fibromatosis. About 900 desmoid tumors are diagnosed annually in the United States”, so the odds of it being relevant to us are minute — tell the doctors to keep their eyes open for rare conditions, but there’s bugger all that we patients can do with a regular diet of stories about rare conditions other than worry.
  7. The great octopus escape: NZ story about an octopus escaping from Napier’s National Aquarium.  No action a reader can take, other than to make a mental note to visit the attraction that no longer holds Inky the Octopus.
  8. ‘I was losing all feeling in my body’: NZ consumer affairs story. A change in formula of an “alternative medicine”/health product was driven by an Australian regulatory change, but the known-risky ingredient was left for sale in NZ. MedSafe, the NZ regulator, was unavailable for comment, which is a bit shit.
  9. Dogs attack pregnant woman in Christchurch: useful knowledge if you’re in Christchurch. Useless to the rest of us. So much for the “shown relevant stories” theory.
  10. The Bachelor’s most tense moment yet: more celebrity shit. It’s the rare day, it feels, where there’s NOT a story about some aspect of The Bachelor. Fucks given: 0.
  11. Chaining yourself to tree does work: 52-year old Havelock North woman chains herself to a walnut tree and the council relents on chopping it down. Bigger question, left unanswered, is why a council is chopping down food trees.
  12. Mother’s head dumped in recycling bin: Seattle news. Of absolutely no relevance to pretty much all NZ readers.
  13. ‘Trump makes me want to move to NZ’: Billy Crystal promotes his upcoming tour.
  14. ‘Dude, this room is the breast!’: light fixtures project breasts onto the ceiling. A photo and story taken from Buzzfeed, etc.
  15. Student held down, punched in class: Christchurch student bullied. Relevant to Christchurch readers, and of secondary interest to readers who have kids in school and are worried about bullying.

Then comes the National section:

  1. Facebook blackmailer jailed: Whangarei news, but again relevant to children (victim was 14 year old girl). Thumbnail photo and text on the home page.
  2. Brian Rudman: Sugar obesity link plain for all but Govt to see: opinion piece about obesity and regulation. Teased with text and glimpse of a cartoon that runs with the opinion piece.
  3. David Rutherford: Privacy at risk in child safety push: opinion piece about government depts data-sharing and privacy.
  4. Te Atatu accused pleads not guilty: apparently a story the Herald has been following for its readers, about a murder in a “quiet suburb”. (I’m very glad they didn’t describe it as “leafy”, which is the preferred adjective for Remuera, Epsom, and any other suburb they want to say is rich but not say is rich)
  5. Jail for man who fled in stolen car while on drugs: Whangarei local news.

And World:

  1. Amsterdam Schiphol airport evacuated: terrorism-fear related.
  2. Across the Ditch: 7 things to read now: Australian news teases.
  3. 8 things you need to know right now: teased as “Your catch-up on stories that have broken overnight and this morning”, the stories are: Republican presidential race, Australian TV news in Lebanon (full story covers this above), Democratic presidential race, IMF warning of “severe global damage” from Britain exiting the European Union, Brazilian politics, dead reality TV star, Bono speaks out for refugees in Europe, possible Caravaggio found in France.  That’s what we need to know right now?
  4. Hawking’s interstellar mission to find alien life: science, technology, futuristic.  No action we can take but at least it’s optimistic.

And Business:

  1. Michael Hill confirms primary listing move to ASX: the news is what you might get from a press release. No analysis.
  2. Income tax in New Zealand lowest in OECD: “Single workers in New Zealand face taxes of 17.6 per cent in 2015, compared with the OECD average of 35.9 per cent.”  My first response was to be aghast, as that’s HALF of the OECD average.  Anyone who grouses that their kids’ school is underfunded or has a relative die after months on a surgery waiting list should have the dots connected for them.  The last line of the story is “The report doesn’t take into consideration GST.”  That’s only 24% of the government’s revenue.  So why even bother with this “news”?  The journalist has taken an OECD report and looked for NZ’s position in the graphs and turned numbers into sentences.

… and here my will to live ends.  We’re so far below the fold of the web page that these articles can’t get many reads.  And if I read much further, I’ll encounter the Entertainment and Lifestyle sections which will end me.

Also at the top of the page is a ticker with few-word teases for two “trending” items, and a list of stories showing “Latest”.  I’m back later in the day, and the current Latest listings are:

  1. Prime Minister John Key visits China as defence force simulates threat in disputed territory: two PMs visit China as their defence forces participate in an exercise that simulates a threat against Malaysia and Singapore from “an unnamed force”. Purely coincidence, nothing to see here. Actual news, but short of analysis: does this jeopardise our relationship with China, or is this political business-as-usual?
  2. Brian Rudman: Sugar obesity link plain for all but Govt to see: this article was posted hours ago, is not “latest”.
  3. Paula Bennett to head to New York to sign historic climate agreement: actual news, though short in the analysis area.
  4. Is Lindsay Lohan engaged? Celebrity gossip.
  5. Dog app launched to help keep kids safe: “The new app, called A Dog’s Story, aims to engage young children and teach them safe behaviour around dogs. […] it was commissioned by pet food maker Mars Petcare and developed in Auckland by Colenso BBDO, with input from Auckland Council’s animal management team.” Reworded press release, which appeared two days ago on NewsHub.
  6. Game, Set and Match Podcast Wed Apr 13: Radio Sport press release about a tennis podcast.  Because subscribing to podcasts is best done by reading the NZ Herald’s “Latest Listings” every day and looking for a new edition of the podcasts you like.
  7. College Sport: St Peter’s dominate Maadi Cup: high school rowing news.
  8. Fine filling for wet-weather sandwich: WeatherWatch weather forecast written as a story.
  9. Does sex count as exercise?: I can’t even.
  10. Steve Hansen excited by new All Blacks era: also known as “Steve Hansen puts a happy face on injuries”.

 

11 April 2016

Nat Torkington

On Villains

[I posted this on a Slack recently, and would like to give it a longer life than Slack’s 10,000 line scrollback. –Nat]

Socially constructed roles like “douche-bro” and “rock star teacher” are generally strongly viewpoint dependent. The rhetoric of continuous improvement is part of self-help, get rich quick, professional development, factory management, military training, science, and more.

What separates these is the outcome they’re working towards, and how we judge those outcomes. So sales bros high-fiving each other in startup land are heroes of their own story, which is about self-improvement and making the world better through optimised supply chains of just-in-time whatever; and at the same time their goals of corporate success and self-aggrandisement make them villains of our stories where meaningful work is in service of others, where empathy and humility are treasured, and where personal profit is awkward and not to be pre-eminently sought. So if you want more people to shun values you don’t like, teach the values you do like.

06 October 2015

Mark Foster

CLI tool for 'diffing' in a useful fashion: vimdiff

Had to scratch my head to find the right tool for the job today - something that I used regularly at SMX but haven't had much need to use since.

The tool was 'vimdiff'. I needed to compare two configuration files (retrieved from two different servers) to understand what differences existed. Whilst 'diff' alone would've done it, I find the output hard to follow. vimdiff worked wonders!

The Google hit I used also mentions some other useful tools:

https://unix.stackexchange.com/questions/79135/is-there-a-condensed-side-by-side-diff-format.

For posterity.

Honourable mention for icdiff also.

15 September 2015

Mark Foster

Firefox's 'Tiles' Antifeature - disabling this spam/track tool

I came across a Reddit Thread recently which included this gem:

From the comments that have been posted on this thread and what I found on the Mozilla forums so far:

1- In a new tab, type or paste about:config in the address bar and press Enter/Return. Click the button promising to be careful.

2- Set browser.newtab.url to about:blank

3- To disable the callbacks to tiles.cdn.mozilla.com without enabling the "do not track" feature you need to remove the address from browser.newtabpage.directory.ping and browser.newtabpage.directory.source

Source:

https://support.mozilla.org/en-US/questions/1074600

http://forums.mozillazine.org/viewtopic.php?f=7&t=2888321

So I've now done both of the above and feel much better.

The Reddit page linked to a ZDNet.com article talking about Mozilla's new advertising strategy. I for one don't need a third party tracking what I do when I click on 'new tab'...! Interestingly, there's also a move to remove browser.newtab.url due to "abuse", which seems to be, in itself, contentious, but it's possible you'll have to use an addon to achieve the above, at least in part, in the near future.

29 January 2015

Tom Ryder

Shell config subfiles

Large shell startup scripts (.bashrc, .profile) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it’s not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh) to load several files at the end of a .bashrc, for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d:

$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash

With a slightly more advanced snippet at the end of ~/.bashrc, we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we’re done, otherwise it’ll be in the namespace of our shell where we don’t need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there’s at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d, if we want to write in POSIX sh, with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.

Here’s my implementation of the above method, for both .bashrc and .profile:

Thanks to commenter oylenshpeegul for correcting the syntax of the loops.

02 December 2014

Andrew Ruthven

LCA2015 - Debian Miniconf & nz2015 Debian mini-DebConf

nz2015 mini-DebConf

Already attending linux.conf.au? Come a couple of days earlier and attend the mini-DebConf too! There will be a day of talks with a strong focus on the Debian project and a bug squashing day.

Debian Miniconf

After 5 years, the Debian Miniconf is back! Run as part of linux.conf.au 2015, this event will attract speakers talking on topics that suit the broader audience attending LCA. The Debian Miniconf has been one of the largest miniconfs in the history of linux.conf.au.

For more information about both these events which I'm organising, head over to: nz2015.mini.debconf.org!

07 November 2014

Tom Ryder

Prompt directory shortening

The common default of some variant of \h:\w\$ for a Bash prompt PS1 string includes the \w escape character, so that the user’s current working directory appears in the prompt, but with $HOME shortened to a tilde:

tom@sanctum:~$
tom@sanctum:~/Documents$
tom@sanctum:/usr/local/nagios$

This is normally very helpful, particularly if you leave your shell for a time and forget where you are, though of course you can always call the pwd shell builtin. However it can get annoying for very deep directory hierarchies, particularly if you’re using a smaller terminal window:

tom@sanctum:/chroot/apache/usr/local/perl/app-library/lib/App/Library/Class$

If you’re using Bash version 4.0 or above (bash --version), you can save a bit of terminal space by setting the PROMPT_DIRTRIM variable for the shell. This limits the length of the tail end of the \w and \W expansions to that number of path elements:

tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$ PROMPT_DIRTRIM=3
tom@sanctum:.../App/Library/Class$

This is a good thing to include in your ~/.bashrc file if you often find yourself deep in directory trees where the upper end of the hierarchy isn’t of immediate interest to you. You can remove the effect again by unsetting the variable:

tom@sanctum:.../App/Library/Class$ unset PROMPT_DIRTRIM
tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$

24 September 2014

Robert Collins

face

what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that?”

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available – leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered, etc. This team wouldn’t be the TC: they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then be suggesting:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that we have large-N, N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.


08 September 2014

Glen Ogilvie

Google authenticator TFA for Android - Backup and OATH

I’ve been a fan of using multi/two factor authentication for anything that matters.

Thankfully, many sites these days are beginning to support using MFA, and many of them have standardized on OATH.
Google Authenticator is one such OATH client app, implementing TOTP (time-based one-time password).

OATH is a reasonably good method of providing MFA, because it’s easy for the user to set up, pretty secure, and open, both for the client and the server.
You can read all about how it works in RFC 6238, or on Wikipedia.
So, in a nutshell, we now have a method by which a client can generate a one-time code and a server can authenticate that code.
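For the curious, here’s a rough sketch of what RFC 6238 boils down to, in Python using only the standard library (the secret below is a made-up example, not a real key):

# totp_sketch.py - rough illustration of RFC 6238 (TOTP), standard library only
import base64, hashlib, hmac, struct, time

def totp(base32_secret, digits=6, period=30):
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // period             # 30-second time steps since the epoch
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret only - substitute the base32 key from your own account.
print(totp("JBSWY3DPEHPK3PXP"))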

Google Authenticator, being the client I use (as it supports adding the shared key by simply reading a QR code), is great.
But what if I lose my phone, or want to use a second device, or my computer? MFA, of course, pretty much locks you out if you
lose your way to generate your TOTP codes.

Well, Google provides you with a number of static backup codes you can use, but that’s not good enough for me.

So, I thought I’d see if I could back up Google Authenticator, and read the shared key from it. The answer to this is yes.

Here are the technical details:
Back up Google Authenticator using Titanium Backup. This will generate 3 files on your SD card.
The file of interest is:
sdcard/TitaniumBackup/com.google.android.apps.authenticator2-DATE-TIME.tar.gz

In this tar.gz, you will find:
data/data/com.google.android.apps.authenticator2/./databases/databases

This is an SQLite3 database that contains each account you have added to Google Authenticator.
So, after opening it with sqlite3, i.e.:

tar -zxvf sdcard/TitaniumBackup/com.google.android.apps.authenticator2-DATE-TIME.tar.gz data/data/com.google.android.apps.authenticator2/./databases/databases
sqlite3 data/data/com.google.android.apps.authenticator2/./databases/databases
sqlite> select * from accounts;

to get a list of your keys.
Each key is base32 encoded.

So, to decode your key, you use:

$ python
>>> import base64
>>> base64.b16encode(base64.b32decode('THEKEYFROMTHESELECT', True))

Then this will output the key in base 16, which is the format that oathtool expects.

You can then generate the token from your Linux computer.
Ensure you have the package: oath-toolkit

Then

$ oathtool --totp BASE16KEY

will generate the same code as Google Authenticator, provided the time is correct on your Linux system.
Note: make sure you clear your bash history if you don’t want your MFA key in your history. And of course,
only store it on encrypted storage, and make sure your SD card is secured/erased in some way.

29 August 2014

Robert Collins

face

Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive, particularly in test suites with tens of thousands of tests. Now, for use in the development edit-execute loop this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is letting a user debug through a chain of testr processes to child workers, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (let’s say stdin)
  3. On startup it might eager load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so let’s stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be std-in, a command providing a packet of stdin – used for interacting with debuggers

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.
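To make the shape of that a bit more concrete, here’s a very rough Python sketch of such a stdin-driven server loop. The command names and wire format are my own placeholders, not an actual subunit.run interface:

# testserver_sketch.py - toy stdin-driven test server loop (illustrative only)
import sys
import unittest

def iterate(suite):
    # Flatten a TestSuite into individual test cases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for sub in iterate(item):
                yield sub
        else:
            yield item

def main():
    # Eagerly load the available tests once at startup.
    tests = {t.id(): t for t in iterate(unittest.defaultTestLoader.discover("."))}
    for line in sys.stdin:                      # one command per line
        command, _, argument = line.strip().partition(" ")
        if command == "list-tests":
            for test_id in sorted(tests):
                print(test_id)
        elif command == "run-tests":            # argument: space-separated test ids
            wanted = argument.split()
            suite = unittest.TestSuite(tests[i] for i in wanted if i in tests)
            suite.run(unittest.TestResult())    # a real server would stream subunit here
        elif command == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    main()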

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…


25 August 2014

The Open Source School

Vocal | Open Source Podcast Manager

"Vocal is a podcast manager that aims to do what similar, and well intentioned, apps fail to: do one thing and do it well.

While it’s not the first to try and serve the needs of podcast users, its developer, Nathan Dyer, hopes to avoid the ‘clunky, bloated [and] unnecessarily complicated’ flaws of current options."

13 August 2014

The Open Source School

LibreOffice 4.3 upgrades spreadsheets, brings 3D models to presentations | Ars Technica

"LibreOffice's latest release provides easier ways of working with spreadsheets and the ability to insert 3D models into presentations, along with dozens of other changes.

LibreOffice was created as a fork from OpenOffice in September 2010 because of concerns over Oracle's management of the open source project. LibreOffice has now had eight major releases and is powered by "thousands of volunteers and hundreds of developers," the Document Foundation, which was formed to oversee its development, said in an announcement today. (OpenOffice survived the Oracle turmoil by being transferred to the Apache Software Foundation and continues to be updated.)

In LibreOffice 4.3, spreadsheet program Calc "now allows the performing of several tasks more intuitively, thanks to the smarter highlighting of formulas in cells, the display of the number of selected rows and columns in the status bar, the ability to start editing a cell with the content of the cell above it, and being able to fully select text conversion models by the user," the Document Foundation said.

For LibreOffice Impress, the presentation application, users can now insert 3D models in the glTF format. "To use this feature, go to Insert ▸ Object ▸ 3D Model," the LibreOffice 4.3 release notes say. So far, this feature is available for Windows and Linux but not OS X."

10 July 2014

Andrew Ruthven

Cloud - in New Zealand!

I've spent a reasonable chunk of the past year working on a project we launched last month, Catalyst Cloud! It is using OpenStack with Ceph as the object store. It has taken a lot of work, and it is now very exciting seeing the level of interest we're receiving in this new service!

The great part of this is that we can now offer private cloud services to our customers, which provides all the flexibility that we've come to expect with the "cloud", but hosted in New Zealand by a New Zealand-owned company, so no concerns about the jurisdiction of your data! Not only are we able to offer private cloud services on our OpenStack cluster(s), but we can also deploy OpenStack onto our customers' own hardware using our ProdStack solution (I get to look directly at the Dashboard shown on that page, which is pretty cool).

Next up is deploying another OpenStack cluster in our new data centre (which is another project I'm working on). In the near future we also hope to start using Open Compute Project hardware for our clusters.

05 May 2014

Robert Collins

face

Distributed bugtracking – quick thoughts

Just saw http://sny.no/2014/04/dbts and I feel compelled to note that distributed bug trackers are not new – the earliest I personally encountered was Aaron Bentley’s Bugs everywhere – coming up on its 10th birthday. BE meets many of the criteria in the dbts post I read earlier today, but it hasn’t taken over the world – and I think this is in large part due to the propagation nature of bugs being very different to code – different solutions are needed.

With distributed code versioning we often see people going to some effort to avoid conflicts – semantic conflicts are common, and representation conflicts extremely common.

Take for example https://bugs.launchpad.net/ubuntu/+source/ntp/+bug/805661. Here we can look at the nature of the content:

  1. Concurrent cannot-conflict content – e.g. the discussion about the bug. In general everyone should have this in their local bug database as soon as possible, and anyone can write to it.
  2. Observations of fact – e.g. ‘the code change that should fix the bug has landed in Ubuntu’ or ‘Commit C should fix the bug’.
  3. Reports of symptoms – e.g. ‘Foo does not work for me in Ubuntu with package versions X, Y and Z’.
  4. Collaboratively edited metadata – tags, title, description, and arguably even the fields like package, open/closed, importance.

Note that only one of these things – the commit to fix the bug – happens in the same code tree as the code; and that the commit fixing it may be delayed by many things before the fix is available to users. Also note that conceptually conflicts can happen in any of those fields except 1).

Anyhow – my humble suggestion for tackling the conflicts angle is to treat all changes to a bug as events in a timeline – e.g. adding a tag ‘foo’ is an event to add ‘foo’, rather than an event setting the tags list to ‘bar,foo’ – then multiple editors adding ‘foo’ do not conflict (or need special handling). Collaboratively edited fields would likely be unsatisfying with this approach though – last-writer-wins isn’t a great story. OTOH the number of people that edit the collaborative fields on any given bug tends to be quite low – so one could defer that to manual fixups.
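A toy sketch of that idea in Python (my own illustration, not BE’s actual data model): if tag changes are recorded as add/remove events rather than snapshots of the tag list, merging two divergent histories is just a set union of events.

# bug_events_sketch.py - toy illustration of merging bug changes as timeline events
def apply_events(events):
    # Replay (timestamp, action, tag) events into a final tag set.
    tags = set()
    for _, action, tag in sorted(events):
        if action == "add":
            tags.add(tag)
        elif action == "remove":
            tags.discard(tag)
    return tags

# Two people, offline, both add 'foo'; one also adds 'regression'.
alice = {(1, "add", "foo"), (2, "add", "regression")}
bob = {(1, "add", "foo")}

# Merging is just a union of events - the duplicate 'add foo' is harmless.
print(apply_events(alice | bob))   # {'foo', 'regression'} (a set, so order may vary)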

Further, as a developer wanting local access to my bug database, syncing all of these things is appealing – but if I’m dealing with a million-bug bug database, I may actually need the ability to filter what I sync or do not sync with some care. Even if I want everything, query performance on such a database is crucial for usability (something git demonstrated convincingly in the VCS space).

Lastly, I don’t think distributed bug tracking is needed – it doesn’t solve a deeply burning use case – offline access would be a 90% solution for most people. What does need rethinking is the hugely manual process most bug systems use today. Making tools like whoopsie-daisy widely available is much more interesting (and that may require distributed underpinnings to work well and securely). Automatic collation of distinct reports and surfacing the most commonly experienced faults to developers offers a path to evidence based assessment of quality – something I think we badly need.


10 April 2014

Mark Foster

A useful pair of commands

Sorry, couple more glitches with the server today. Really need to get on to migrating onto the new hardware I have running in parallel...

However I did find this useful stuff today :-)

Force reboot:
# echo 1 > /proc/sys/kernel/sysrq
# echo b > /proc/sysrq-trigger
If you want to force-shutdown the machine, try this:
# echo 1 > /proc/sys/kernel/sysrq
# echo o > /proc/sysrq-trigger

(The leading # is the root prompt, not a comment - remove it as required if you're copying and pasting.) :)

04 April 2014

The Open Source School

First Ubuntu Tablets To Launch This Autumn

For about three years, sensing the move from the desktop to the mobile device, Canonical have been making Ubuntu more tablet-friendly. Now we hear that tablets (and smartphones) are due to ship with Ubuntu from OEMs soon.
I've been a fan of Ubuntu for a long time, and if you'd like to try it out to see how easy-to-use free software operating systems are, you can do it online here: http://www.ubuntu.com/tour/en/ (Tip: full-screen your browser and it's like you're running it natively on your machine.)

21 February 2014

Glen Ogilvie

Fritz!Box Telephony

 

In New Zealand,  VDSL internet is available from a number of providers.  Snap! is the provider I use, and they offer some very cool Fritz!Box VDSL modems.   I have the Fritz!Box 7390, and the other one on offer is the cheaper Fritz!Box 7360.

These routers are far more than just a basic VDSL router, offering a range of awesome features, including NAS, IPv6 (standard with Snap), good WiFi, DECT, and VPNs.   This blog post is about my experiences with setting up the Telephony component.

The first thing to note is that Snap! does not ship the right cable for connecting the Fritz!Box to a standard telephone line.  This is a special Y cable, and they will ship it to you if you ask them.   Note, however, that you’ll still need to make up an adapter, as this cable has RJ45 telephone plugs, not NZ BT plugs.  My setup is that I have a monitored alarm, so I need a telephone line; therefore, this is for VDSL + standard phone line, rather than VDSL + VOIP phone.

Hardware:

Step 1:

Get the Y cable from Snap!.    It will have one plug at the end that connects to the Fritz!Box and a split end, one for your VDSL plug and one for the telephone line.  They will charge you $5 postage.
Here is the description of it, it’s the grey cable, first one on the left.

Here is the email I sent to Snap!   It took a while, but Michael Wadman did agree to ship it to me, case number #BOA-920-53402, if you need a reference case.

Hi,

I purchased a Fritz!Box 7390 from you, along with my internet subscription. It looks to me, from looking at their website, that it should come with some
cables that I don’t have. See:

http://www.fritzbox.eu/en/products/FRITZBox_Fon_WLAN_7390/index.php?tab=5

The Fritz!Box 7390 you supplied came with two Ethernet cables and power, but no other cables.
In the above link, you can see it should come with:

  • 4.25 m-long ADSL/fixed-line network connection cable (Y cable)
  • 1.5 m-long LAN cable
  • RJ45-RJ11 adapter for connection to the ADSL line
  • RJ45-RJ11 adapter for connection to the analog telephone line

I would like to plug my Analog PSTN phone line into the Fritz!Box so I can use its telephony features with my fixed line.     To do this, I need the cable that goes between the ISDN/analog port and the phone jack on the wall.
I am aware that the Y cable does not actually fit NZ phone plugs.
This post discusses the matter on geekzone.

http://www.geekzone.co.nz/forums.asp?forumid=90&topicid=115999

Ref:

http://service.avm.de/support/en/skb/FRITZ-Box-7390-int/56:Pin-assignment-of-cables-adapters-and-ports-for-telephony-devices

Step 2:

Build an adapter for the telephone end.  This is easier than you might think.  What you need:

  • RJ45 crimping tool
  • RJ45 plug
  • Any old phone cable with a BT11 plug on it
  • A CAT5 RJ45 Network Cable Extender Plug Coupler Joiner, you can get this off ebay for < $2
  • A multi-meter

Then, simply cut off the end of the old phone cable that plugs into the phone.  It will probably be a 4-wire cable.  Now, use your multimeter to identify which two wires are connected to the outer two BT-11 pins.  Then plug it into your phone jack and check you get around 50 volts DC from those two pins.   Once you’ve got the two wires that have power, cut off the other two wires and crimp the two powered wires to the two outside pins on the RJ45 plug.   See the above diagram.

Then, plug the RJ45 plug you crimped into the joiner you got off ebay, and label it, as you don’t want to be plugging normal networking equipment into this plug by accident.

Step 3:

  1. Connect the Y cable to your Fritz!Box and to your VDSL.  Check it works.
  2. Connect the Y cable to your Analog line, using the adapter you made.  Your Fritz!Box is now able to answer and make calls with your landline.

Now you’re ready to configure Telephony on the FRITZ!Box.

Manual: http://www.avm.de/en/service/manuals/FRITZBox/Manual_FRITZBox_Fon_WLAN_7390.pdf

Configuration:

Start with Telephone numbers.  Here you should configure your fixed line, plus any SIP providers you want. I have added ippi and comfytel.   Also note that Snap! provides a SIP service, if you want one.

Next, connect some phones to your Fritz!Box.  You can plug your standard analogue phones into the FON1 and FON2 plugs.  You can connect your ISDN phones, you can connect any DECT wireless phones to the Fritz!Box as their base station (your luck may vary), and you can connect your mobile phones to it using the FRITZ!Box Fon app.  These will appear under telephone devices.  You should now be able to make a call.

Each device can have a default outgoing telephone number connected to it, and you can pre-select which phone number to make outgoing calls with, by dialing the ** prefix code.

Things you can do

  1. Use a number of devices as phones in your home, including normal phones and your mobiles
  2. Answer calls from skype, sip, landlines and internal numbers
  3. Make free calls to skype and global SIP iNums.
  4. Make low cost calls to overseas landlines using a sip provider
  5. Make calls to local numbers via your normal phone line
  6. Answer machine
  7. Click to dial
  8. Telephone book, including calling internal numbers
  9. Wakeup calls
  10. Send and receive faxes
  11. Block calls
  12. Call Diversion / Call through
  13. Automatically select different providers when dialing different numbers
  14. Set a device to use a specific number, or only to ring for calls for a specific number.
  15. Set do not disturb on a device, based on time of day.
  16. Connect your wireless DECT phones directly to the FRITZ!Box as a base station
  17. See a call log of the calls you’ve made

Sip providers

ippi – http://www.ippi.fr – They allow free outgoing and incoming skype calls, plus a free global sip number

SIP: glenogilvie@ippi.fr
SIP numeric: 889507473
iNum: +883510012028558

For outgoing skype calls with ippi, if your phone cannot dial email addresses, you need to add the skype contacts to the phone book on the ippi.com website under your account, and assign a short code, which you can then use from your phone.

Comfytel – http://www.comfytel.com/ – They provide cheap calling, but you have to pay them manually with paypal, and currently their skype gateway does not work.

iNum: 883510001220681
Internal number: 99982009943

 

Since screen shots are much nicer than words, below is a collection from my config for your reference.

Call Log:

Answer Phone:

Telephone book:

Alarm:

Fax:

Call Blocking:

Call through

Dialing rules

Telephone Devices

Dect configuration (I don’t have any compatible phones)

Telephone Numbers (fixed and SIP)

ComfyTel configuration:

IPPI configuration:

IPPI phone book, for calling skype numbers:

Fixed line configuration:

Line Settings:

30 January 2014

Colin Jackson

So long and thanks for all the fish

I stopped updating this blog some time ago, mainly because my work started taking me overseas so much I couldn’t keep up with it.

But now I am blogging again, perhaps a bit less frequently than I used to, over at my new website Jackson Strategy. I’ll be covering technology and how it changes our lives, and what we should be doing about that. Please drop by!

28 December 2013

Tim Penhey

face

2013 in review

2013 started with what felt like a failure, but in the end, I believe that the
best decision was made.  During 2011 and 2012 I worked on and then managed
the Unity desktop team.  This was a C++ project that brought me back to my
hard-core hacker side after four and a half years on Launchpad.  The Unity
desktop was a C++ project using glib, nux, and Compiz. After bringing Unity to
be the default desktop in 12.04 and ushering in the stability and performance
improvements, the decision was made to not use it as the way to bring the
Ubuntu convergence story forward. At the time I was very close to the Unity 7
codebase and I had an enthusiastic capable team working on it. The decision
was to move forwards with a QML based user interface.  I can see now that this
was the correct decision, and in fact I could see it back in January, but that
didn't make it any easier to swallow.

I felt that I was at a juncture and I had to move on.  Either I stayed with
Canonical and took another position or I found something else to do. I do like
the vision that Mark has for Ubuntu and the convergence story and I wanted to
hang around for it even if I wasn't going to actively work on the story itself.  For a while I was interested in learning a new programming language, and Go was considered the new hotness, so I looked for a position working on Juju. I was lucky to be able to join the juju-core team.

After a two-week break in January to go to a family wedding, I came back to
work and started reading around Go. I started with the language specification
and then read around and started with the Go playground. Then started with the
Juju source.

Go was a very interesting language to move to from C++ and Python. No
inheritance, no exceptions, no generics. I found this quite a change.  I even
blogged about some of these frustrations.

As much as I love the C++ language, it is a huge and complex language. One
where you are extremely lucky if you are working with other really competent
developers. C++ is the sort of language where you have a huge amount of power and control, but you pay other costs for that power and control. Most C++ code is pretty terrible.

Go, as a contrast, is a much smaller, more compact, language. You can keep the
entire language specification in your head relatively easily. Some of this is
due to specific decisions to keep the language tight and small, and others I'm
sure are due to the language being young and immature. I still hope for
generics of some form to make it into the language because I feel that they
are a core building block that is missing.

I cut my teeth in Juju on small things. Refactoring here, tweaking
there. Moving on to more substantial changes.  The biggest bit that leaps to
mind is working with Ian to bring LXC containers and the local provider to the
Go version of Juju.  Other smaller things were adding much more infrastructure
around the help mechanism, adding plugin support, refactoring the provisioner,
extending the logging, and recently, adding KVM container support.

Now for the obligatory 2014 predictions...

I will continue working on the core Juju product bringing new and wonderful
features that will only be beneficial to that very small percentage of
developers in the world who actually deal with cloud deployments.

Juju will gain more industry support outside just Canonical, and will be seen
as the easiest way to OpenStack clouds.

I will become more proficient in Go, but will most likely still be complaining
about the lack of generics at the end of 2014.

Ubuntu phone will ship.  I'm guessing on more than just one device and with
more than one carrier. Now I do have to say that these are just personal
predictions and I have no more insight into the Ubuntu phone process than
anyone outside Canonical.

The tablet form-factor will become more mature and all the core applications,
both those developed by Canonical and all the community contributed core
applications will support the form-factor switching on the fly.

The Unity 8 desktop that will be based on the same codebase as the phone and
tablet will be available on the desktop, and will become the way that people
work with the new very high resolution laptops.

30 October 2013

Tim Penhey

face

loggo - hierarchical loggers for Go

Some readers of this blog will just think of me as that guy that complains about the Go language a lot.  I complain because I care.

I am working on the Juju project.  Juju is all about orchestration of cloud services.  Getting workloads running on clouds, and making sure they communicate with other workloads that they need to communicate with. Juju currently works with Amazon EC2, HP Cloud, Microsoft Azure, local LXC containers for testing, and Ubuntu's MAAS. More cloud providers are in development. Juju is also written in Go, so that was my entry point to the language.

My background is from Python and C++.  I have written several logging libraries in the past, but always in C++ and with reasonably specific performance characteristics.  One thing I really felt was missing with the standard library in Go was a good logging library. Features that I felt were pretty necessary were:
  • A hierarchy of loggers
  • Able to specify different logging levels for different loggers
  • Loggers inherited the level of their parent if not explicitly set
  • Multiple writers could be attached
  • Defaults should "just work" for most cases
  • Logging levels should be configurable easily
  • The user shouldn't have to care about synchronization
Initially this project was hosted on Launchpad.  I am trialing moving the trunk of this branch to github.  I have been quite isolated from the git world for some time, and this is my first foray into git, and specifically git and go.  If I have done something wrong, please let me know.

Basics

There is an example directory which demonstrates using loggo (albeit relatively trivially).

import "github.com/howbazaar/loggo"
...
logger = loggo.GetLogger("project.area")
logger.Debugf("This is debug output.")
logger.Warningf("Some error: %v", err)

In juju, we normally create one logger for the module, and the dotted name normally reflects the module. This logger is then used by the other files in the module.  Personally I would have preferred file-local variables, but Go doesn't support variables that are private to a file, so as a convention we use the variable name "logger".

Specifying logging levels

There are two main ways to set the logging levels. The first is explicitly for a particular logger:

logger.SetLogLevel(loggo.DEBUG)

or chained calls:

loggo.GetLogger("some.logger").SetLogLevel(loggo.TRACE)

Alternatively you can use a function to specify levels based on a string.

loggo.ConfigureLoggers("<root>=INFO; project=DEBUG; project.some.area=TRACE")

The ConfigureLoggers function parses the string and sets the logging levels for the loggers specified.  This is an additive function.  To reset logging back to the default (which happens to be "<root>=WARNING"), you call

loggo.ResetLoggers()

You can see a summary of the current logging levels with

loggo.LoggerInfo()

Adding Writers

A writer is defined using an interface. The default configuration is to have a "default" writer that writes to Stderr using the default formatter.  Additional writers can be added using loggo.RegisterWriter and reset using loggo.ResetWriters. Named writers can be removed using loggo.RemoveWriter.  Writers are registered with a severity level; log messages below that severity level are not written to that writer.

More to do

I want to add a syslog writer, but the default syslog package for Go doesn't give the formatting I want. It has been suggested to me to just take a copy of the library implementation and make it work how I want.

I also want to add some filter-ability to the writers, both on the inclusive and exclusive, so you could say when registering a writer, "only show me messages from these modules", or "don't show messages from these other modules".

This library has been used in Juju for some time now, and fits most of our needs.  For now at least.


27 June 2013

Malcolm Locke

face

First Adventures in Spectroscopy

One of my eventual amateur astronomy goals is to venture into the colourful world of spectroscopy. Today I took my first steps on that path.

We have a small cut glass pendant hanging in our window which casts pretty rainbows around the living room in the evening sun. A while ago I took a photo of one of these with the intention of one day seeing what data could be extracted from it.

Spectrum

Not much to look at, but I decided tonight to see how much information can be extracted from this humble image.

First step was to massage the file into FITS format, the standard for astronomical data, with the hope that I could use some standard tools on the resulting file.

I cropped the rainbow, converted it to grayscale, and saved it as FITS in Gimp. I'd hoped to use my usual swiss army knife for FITS files, SAOImage DS9, to extract a graph from the resulting file, but no dice. Instead I used the following Python script to plot a graph of the average brightness values of the columns of pixels across the spectrum. Averaging the vertical columns of pixels helps cancel out the effects of noise in the source image.

# spectraplot.py
import pyfits
import matplotlib.pyplot as plt
import sys

hdulist = pyfits.open(sys.argv[1])
# Grab the mean value of each column in the image
mean_data = hdulist[0].data.mean(axis=0)

plt.plot(mean_data)
plt.show()

Here's the resulting graph, with the cropped colour and grayscale spectra added for context.

Solar black body curve

If you weren't asleep during high school physics class, you may recognise the tell tale shape of a black body radiation curve.

From the image above we can see that the Sun's peak intensity is actually in green light, not yellow as we perceive it. But better still we can (roughly!) measure the surface temperature of the Sun using this curve.

Here's a graph of colour vs wavelength.

I estimate the wavelength of the green in the peak of the graph above to be about 510nm. With this figure the Sun's surface temperature can be calculated using Wien's displacement law.

λmax * T = b

This simple equation says that the peak wavelength (λmax) of the black body curve times the temperature (T) is a constant, b, called Wien's displacement constant. We can rearrange the equation ...

T = b / λmax

... and then plug in the value of b to get the surface temperature of the Sun in degrees Kelvin.

T = 2,897,768 / 510 = 5681 K

That's close enough to the actual value of 5780 K for me!
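The same calculation in a few lines of Python, in case you want to play with the numbers (the 510 nm peak is just my eyeball estimate from the graph):

# wien.py - estimate the Sun's surface temperature from the peak wavelength
WIEN_B_NM_K = 2.897771955e6    # Wien's displacement constant, in nm*K

peak_nm = 510                  # eyeballed peak wavelength of the spectrum
temperature = WIEN_B_NM_K / peak_nm
print("Estimated surface temperature: %.0f K" % temperature)   # ~5682 K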

I'm pretty encouraged by this result with nothing more than a piece of glass and a basic digital camera. A huge amount of what we know about the cosmos comes from examining spectra, but it's a field that doesn't get much love from amateur astronomers. Stay tuned for some hopefully more refined experiments in future.

08 May 2013

Glen Ogilvie

time to blog again.

I have been a bit slack with my blogging and not posted much for a long time. This has been due to both working on lots of things, buying a house and a busy lifestyle.

I do however have a few things to blog about. So, in the coming days I will blog about auto_inst OS testing, corporate patching, Android tools, AuckLUG, Raspberry Pi, rdiff-backup, multiseat Linux, the local Riverside community centre, getting 10 laptops (which will run Mageia), my cat Gorse, GPS tracking, house automation, Amazon AMIs and maybe some other stuff.

03 March 2013

Nevyn H

Why are the USB Ports Locked?

I was at a school recently where the USB ports of the computers are locked down (on the students' login). This struck me as counter to learning. Your students' ability to share is greatly diminished. So what's the justification? Security. They might get viruses.

This to me just means that MS Windows is not fit for the purpose of education. If education needs to suffer to keep a computer system secure in a school, then it's a no-brainer. Choose something that doesn't require you to sacrifice education - especially if that's your primary purpose.

I know MS Windows is good for some things. Microsoft Office is a top-notch application, for example. You can do things in MS Office that you can't in other office suites. But that's only important if those features of MS Office are actually used.

Is the ability to create a pivot chart all that important in education? I would argue that if it's only in MS Office, then you've probably got more important things to be learning or teaching. Database theory i.e. relational databases - is probably more important anyway.

I think Linux has something to offer here. A lot of people would probably be surprised by the amazing things that can be done using Open Source software - and for the most part, without software licensing costs. Take GIMP for example. Although not Photoshop, it is incredibly capable. You can do all sorts of things in it that teach a whole range of interesting concepts such as layers, filters, superimposing etc. There's a social lesson in there too - just because you see a photograph doesn't necessarily mean that something is real. Removing a few pimples, stretching out a person's neck etc. isn't all that hard.

So what's stopping Linux in schools?

Firstly, the perception that Windows is free for schools. Actually, quite a lot of money goes towards the MS schools agreement - money that could be used elsewhere.

Secondly, who offers schools help with Linux? I was replaced in my last job by a Windows person - not a Linux person. There seem to be very few companies offering Linux desktop support. Given that school I.T. support is tied up in just a handful of I.T. companies, who are all willing to perpetuate the "Windows is free for schools" mantra, where do schools go for Linux support?

Thirdly, are there any real efforts to make Linux suitable for schools? Some might argue that Edubuntu is going in this direction but... well, look at their goals:
Our aim is to put together a system that contains all the best free software available in education and make it easy to install and maintain.
So the first part talks about free educational software. The second part is pretty much what Ubuntu provides anyway. So basically, it's a copy of Ubuntu with a few education applications added. And while I hate to criticise Open Source software (although I do fairly often), a lot of it is made by geeks for geeks. This is incredibly evident in the educational software sector where educational games often lack lasting engagement.

When looking at what schools are already using on their desktops it's not unusual to see a setup identical to what a secretary in a small business might have: the operating system, a browser and MS Office. All of which (LibreOffice rather than MS Office) are installed in most desktop Linux distributions anyway.

What does Windows offer? A whole lot of management. This isn't a road I would like Linux taking as I think it just stifles education anyway. Instead, I think a school setup should be concerned with keeping kids safe - an I.T. system should look after itself without limiting education.

So what does this look like to me?

Kids as the admins. They should be able to install whatever applications they need to accomplish a task.

A fallback position - currently there are PXE boot options. I think it needs to be more local than that - a rescue partition. Perhaps PXE for the initial load OR USB sticks. A complete restore should take less than 10 minutes.

Some small amount of management - applications that can or cannot be installed for example. Internet security done on a network level, not an individual machine level.

If you find yourself justifying something on the desktop for security, then you have to seriously ask yourself: what is it that you're protecting? It should no longer be enough to just play the security card by default. There's a cost to security. This needs to be understood.

Cloud or server-based storage. The individual machines should not hold files vital to a child's work. This makes backups a whole lot easier - even better if you can outsource that to someone else. Of course, there's the whole "offshore" issue, i.e. government agencies do not store information offshore.

I guess this post is really just a great big justification for Tartare Source. It's not the only use. I think a similar setup could be incredibly beneficial to a business, for example: less overhead in terms of licence tracking and security concerns, and freedom for people to work in the way that they feel most comfortable, etc.

So I guess the question is still, where would you find the support? With the user in mind and "best practise" considered inappropriate in civilised company...

27 February 2013

Robin Paulson

The Digital Commons: Escape From Capital?

So, after a year of reading, study, analysis and writing, my thesis is complete. It’s on the digital commons, of course; this particular piece is an analysis to determine whether or not the digital commons represents an escape from, or a continuation of, capitalism. The full text is behind the link below.

The Digital Commons: Escape From Capital?

In the conclusion I suggested various changes which could be made to avert the encroachment of capitalist modes; as such, I will be releasing various pieces of software and other artefacts over the coming months.

For those who are impatient, here’s the abstract, the conclusion is further down:

In this thesis I examine the suggestion that the digital commons represents a form of social organisation that operates outside any capitalist relationships. I do this by carrying out an analysis of the community and methods of three projects, namely Linux, a piece of software; Wikipedia, an encyclopedia; and Open Street Map, a geographic database.

Each of these projects, similarly to the rest of the digital commons, does not require any money or other commodities in return for access, thus denying exchange as the dominant method of distributing resources and instead offering a more social way of conducting relations. They further allow the participation of anyone who desires to take part, in relatively unhindered ways. This is in contrast to the capitalist model of requiring participants to demonstrate their value, and to take part in ways demanded by capital.

The digital commons thus appear to resist the capitalist mode of production. My analysis uses concepts from Marx’s Capital Volume 1, and Philosophic and Economic Manuscripts of 1844, with further support from Hardt and Negri’s Empire trilogy. It analyses five concepts, those of class, commodities, alienation, commodity fetishism and surplus-value.

I conclude by demonstrating that the digital commons mostly operates outside capitalist exchange relations, although there are areas where indicators of this have begun to encroach. I offer a series of suggestions to remedy this situation.

Here’s the conclusion:

This thesis has explored the relationship between the digital commons and aspects of the capitalist mode of production, taking three iconic projects: the Linux operating system kernel, the Wikipedia encyclopedia and the Open Street Map geographical database as case studies. As a result of these analyses, it appears digital commons represent a partial escape from the domination of capital.

 

As the artefacts assembled by our three case studies can be accessed by almost anybody who desires, there appear to be few class barriers in place. At the centre of this is the maxim “information wants to be free” 1 underpinning the digital commons, which results in assistance and education being widely disseminated rather than hoarded. However, there are important resources whose access is determined by a small group in each project, rather than by a wider set of commoners. This prevents all commoners who take part in the projects from attaining their full potential, favouring one group and thus one set of values over others. Despite the highly ideological suggestion that anyone can fork a project at any time and do with it as they wish, which would suggest a lack of class barriers, there is significant inertia which makes this difficult to achieve. It should be stressed however, that the exploitation and domination existing within the three case studies is relatively minor when compared to typical capitalist class relations. Those who contribute are a highly educated elite segment of society, with high levels of self-motivation and confidence, which serves to temper what the project leaders and administrators can do.

 

The artefacts assembled cannot be exchanged as commodities, due to the license under which they are released, which demands that the underlying information, be it the source code, knowledge or geographical data always be available to anyone who comes into contact with the artefact, that it remain in the commons in perpetuity.

 

This lack of commoditisation of the artefacts similarly resists the alienation of those who assemble them. The thing made by workers can be freely used by them, they make significant decisions around how it is assembled, and due to the collaborative nature essential to the process of assembly, constructive, positive, valuable relationships are built with collaborators, both within the company and without. This reinforces Stallman’s suggestion that free software, and thus the digital commons is a more social way of being 2.

 

Further, the method through which the artefacts are assembled reduces the likelihood of fetishisation. The work is necessarily communal, and involves communication and association between those commoners who make and those who use. This assists the collaboration essential for such high quality artefacts, and simultaneously invites a richer relationship between those commoners who take part. However, as has been shown, recent changes have shown there are situations where the social nature of the artefacts is being partially obscured, in favour of speed, convenience and quality, thus demonstrating a possible fetishisation.

 

The extraction of surplus-value is, however, present. The surplus extracted is not money, but in the form of symbolic capital. This recognition from others can be exchanged for other forms of capital, enabling the leaders of the three projects investigated here to gain high paying, intellectually fulfilling jobs, and to spread their political beliefs. While it appears there is thus exploitation of the commoners who contribute to these projects, it is firstly mild, and secondly does not result in a huge imbalance of wealth and opportunity, although this should not be seen as an apology for the behaviour which goes on. Whether in future this will change, and the wealth extracted will enable the emergence of a super-rich as seen in the likes of Bill Gates, the Koch brothers and Larry Ellison remains to be seen, but it appears unlikely.

 

There are however ways in which these problems could be overcome. At present, the projects are centred upon one website, and an infrastructure and values, all generally controlled by a small group who are often self-selected, or selected by some external group with their own agenda. This reflects a hierarchical set of relationships, which could possibly be addressed through further decentralisation of key resources. For examples of this, we can look at YaCy 3, a search engine released under a free software license. The software can be used in one of a number of ways, the most interesting of these is network mode, in which several computers federate their results together. Each node searches a different set of web sites, which can be customised, the results from each node are then pooled, thus when a commoner carries out a search, the terms are searched for in the databases of several computers, and the results aggregated. This model of decentralisation prevents one entity taking control over what are a large and significant set of resources, and thus decreases the possibility of exploitation, domination and the other attendant problems of minority control or ownership over the means of production.

 

Addressing the problem of capitalists continuing to extract surplus, requires a technically simple, but ideologically difficult, solution. There is a general belief within the projects discussed that any use of the artefacts is fine, so long as the license is complied with. Eric Raymond, author of the influential book on digital commons governance and other matters, The Cathedral and The Bazaar, and populariser of the term open source, is perhaps most vocal about this, stating that the copyleft tradition of Stallman’s GNU is overly restrictive of what people, by which he means businesses, can do, and that BSD-style, no copyleft licenses are the way forward 4. The majority of commoners taking part do not follow his explicit preference for no copyleft licenses, but nonetheless have no problem with business use of the artefacts, suggesting that wide spread use makes the tools better, and that sharing is inherently good. It appears they either do not have a problem with this, or perhaps more likely do not understand that this permissiveness allows for uses that they might not approve of. Should this change, a license switch to something preventing commercial use is one possibility.

1. Roger Clarke, ‘Roger Clarke’s “Information Wants to Be Free …”’, Roger Clarke’s Web-Site, 2013, http://www.rogerclarke.com/II/IWtbF.html.

2. Richard Stallman, Free Software Free Society: Selected Essays of Richard M. Stallman, ed. Joshua Gay, 2nd ed. (Boston, MA: GNU Press, Free Software Foundation, 2010), 8.

3. YaCy, ‘Home’, YaCy – The Peer to Peer Search Engine, 2013, http://yacy.net/.

4. Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, ed. Tim O’Reilly, 2nd ed. (Sebastopol, California: O’Reilly, 2001), 68–69.

09 January 2013

Malcolm Locke

face

Kill All Tabs in Eclipse

As a long-time Vim user, I have the following config in my ~/.vimrc to ensure I never ever enter an evil tab character into my source code.

set shiftwidth=2   " indent in steps of two spaces
set tabstop=2      " display any existing tabs as two spaces wide
set smarttab       " make <Tab> at the start of a line indent by shiftwidth
set et             " expandtab: insert spaces instead of tab characters

For my Android projects, I'm starting to use Eclipse, and unfortunately eternally banishing all tabs in Eclipse is not such an easy task. Here's where I'm at so far; YMMV, and I'll update this as I find more. It seems the tab boss is difficult to kill in this app.

  • Under Window -> Preferences -> General -> Editors -> Text Editors ensure Insert spaces for tabs is checked.
  • Under Window -> Preferences -> Java -> Code Style -> Formatter create a new profile based off the default, and under the Indentation tab set Tab policy to Spaces only.

18 December 2012

Nevyn H

Open Source Battle Grounds

I often find myself at odds with a lot of FLOSS (Free/Libre/Open-Source Software) people due to my attitude on office suites.

To me, they're not the place to start introducing FLOSS. Those ideals about interoperability are lost when people then find they're having to adjust bullet points and the like when those same files end up on a different office suite. This is fiddly work (which I'm of the opinion should NEVER be a problem).

The matter is even more confused when there's already a de facto standard used in businesses everywhere.

The fact is, trying to replace MS Office with LibreOffice/OpenOffice (and formerly StarOffice) is setting up for failure.

It's my belief that bringing FLOSS to a business should ALWAYS extend its functionality. Introducing GIMP to crop images in place of packages too expensive for most businesses to contemplate (where a cost benefit is seen immediately), or building a database (a real database - which is NOT a spreadsheet or a multi-sheet spreadsheet, more rubbish that I hear perpetrated) to help ensure data integrity and enable future expansion (a database can then be offered via a web front end, or used with other information to build a more complete information management system), adds value. Asking people to sacrifice something - whether it's as simple as user interface elements or as complex as sharing documents - means losses. Introducing FLOSS should not cause a loss if you want to promote it.

I'm also not a fan of Google Docs. Sure, they're great for collaborative work - in fact, for this purpose, they just can't be beat. However, instead of the office suite being the limiting factor, the limiting factor has turned into the browser. Page breaks in a word processing document appear in different places depending on what browser you're using. There ALWAYS seem to be annoying nags if you're not using Google Chrome (OS, the Browser or Chromium Browser).

I'm a big fan of getting rid of office suites. I consider them to be HORRIBLY outdated technology.

Spreadsheets are great for small, quick, tabulated jobs where presentation is more important than data integrity, BUT extending the range that spreadsheets can handle (previously you couldn't have more than 65,536 rows) has confused their purpose even further.

They should be replaced by databases. This then introduces the opportunity to make a system work to a business rather than a business trying to work around a software package.
    And no... I don't mean SQLite. SQLite is probably good for stand-alone programs, but for the most part those databases that are vital to a business's everyday operations need to be shared by multiple people. A more server-centric database:
    1. Has locking features which make it scalable (if one person has a spreadsheet open, then generally the whole spreadsheet is locked - collaborative features aside, although in one version of Office those caused all sorts of headaches). Having the capacity for an information system to scale brings with it an incredibly positive message - you're working with the business to help with its growth.
    2. Clears the way for expansion such as having it work with other data in a consistent way.
    3. Helps with data integrity by ensuring everyone is accessing the information in the same way (hopefully via web-based interfaces - even if not available on the Internet, designing interfaces for the web generally creates an OS/device-agnostic interface).
    The word processor could be a whole lot smarter. Rather than presenting you with 20,000 formatting controls (on an individual character basis), I'm of the opinion that you should instead be able to use styles - yes, the same concept as web pages. LyX, which describes itself as a document processor, is close, except that it doesn't make it easy for you to define your own styles. Other word processors have a cursory nod in this direction but do incredibly badly at enforcing it (a friend of mine wrote up a document, sent it away for review, and then went through it again to fix up its structure - just as pointless and just as much busy-work as fixing up bullet points). Currently, writing up a document is a mad frenzy of content and formatting. What if you could concentrate on the content and then quickly mark out blocks of text (that's a heading, that's a subheading, etc.) and let the computer take care of the rest?

    I'm of the opinion that presentations are, in the normal course of things, done INCREDIBLY badly. The few good presentations I've seen have been from people who do presentations for a living. The likes of Lawrence Lessig and Al Gore. These presentations were used to illustrate something. I don't believe that the lack of presentation applications would have stopped either of these people from having brilliant presentations. They are the exception and people who have gone to exceptional lengths to have good presentations. Otherwise, they're bullet points - points to talk to. They don't engage the audience. It's much better to have vital information - that which you need illustrated - behind you such as charts. Something that helps to illustrate a point (when I was told I needed to have some slides behind me I agonised over it. I didn't want them. I didn't need any charts and I think they're more of a distraction than an aid when you don't actually need them. I spent more time agonising over those slides than I did actually thinking about what I wanted to say - fitting a speech to slides just sucks. The speech was awful as I was feeling horrendously anxious about the slides.) is so much more engaging than putting up bullet points.

    So I think FLOSS has a huge part to play in advancing technology here. Instead of fighting a losing battle trying to perpetuate existing terrible practices (based on MS's profits), FLOSS could instead be used to show people better and more efficient ways of doing things.

    This really came home to me when I was trying to compile a report on warranties. Essentially, while trying to break down the types of repairs across each of the schools, I found I was having a hell of a time trying to get the formatting consistent. The problem would be the same regardless of which office suite I was using. Instead, I really should have been able to create a template and then select which sheets it should use to generate a finished report - charts and all.

    Office Suites create bad (and soul destroying) work practises.

    But back to the original point, when making a proposal, think about what value it's adding. Is it adding value? Is it adding value perceived by the intended audience? The revolution that oh so many Linux people talk about isn't going to happen by replacing like for like. Instead, it needs to be something better. Something that the intended audience sees value in. Something that revolutionises the way people do things. Things that encompass the best in Open Source Software - flexibility, scalability and, horribly important to me, customizability. All derivatives of the core concept of Free (as in Freedom)...

    16 December 2012

    Colin Jackson

    People should be allowed to have red cars!

    Dear Editor

    I am writing to ask why people are not allowed to have red cars. Some of my friends’ favourite colours are red, but they are not able to have their cars painted this way. Why?

    I have seen people writing in your newspaper to say that cars are meant to be black and it is simply wrong to paint them any other colour. They generally don’t explain why they think this, except to point to the manufacturers’ books that say cars must be made black (but don’t justify this). For heaven’s sake! This is the twentieth century and we have moved on so far since cars were invented. Back then, some people said that having any kind of car was wrong – look how they’ve changed once they have got used to the idea.

    Others have said, if my friends have red cars, that their black cars will be worth less somehow. Nonsense! There’s absolutely no reason to assume that. They can keep driving round the black cars they’ve always had. Some of the sillier of these people have even said that, if we let people paint cars red, they will want to go around painting other things red, like horses and dogs. What a crazy thing to say!

    The most honest people who don’t want people to have red cars say it’s because they just don’t like them. Some of the people who use other arguments really think this, as well, but they don’t like to say it in public. But no-one is going to force them to have a red car. They can keep having a black one, or none at all. Other people having red cars won’t affect them at all.

    I’m asking everyone who doesn’t think there should be red cars – think about why you oppose them. Why is it your business to try to stop something that won’t affect you and will make other people happy?

    Yours

    Red Car Lover

    20 November 2012

    Nevyn H

    Open Source Awards

    So it's a few days after the NZ Open Source Awards. The Manaiakalani Project won! The presenter for the award in education, Paul Seiler, did mention the fact that the award was really about 2 things: the project itself and its use of Open Source Software, and the contribution by me (sorry - ego does have to enter in at times - I'm awesome!). So, my time to shine. I'm thinking about doing a great big post about the evolution of a speech. I'll post my notes, which I didn't take with me, here though:
    • Finalists
    • Community
    • Leadership
    • Synergy
    • Personal
    Halfway through my speech I realised my accent had turned VERY kiwi. Sod it - carry on.

    Anyway, I did say in a previous post that I'd look at the awards and process a little more closely. So let's do that!

    Nominations are found by opening the process up to the public, and this runs for a few anxious months. I didn't want to nominate myself as I felt this would be egotistical. However, with the discussion going on - the lack of mention of Open Source Software on the Manaiakalani website etc. - I was fairly confident the project had been nominated.

    The judges are all involved with various projects. It's such a small community that it'd be difficult to find people who knew what to look for who weren't involved in the community in some way. They have to do this with full disclosure.

    Amy Adams opened the dinner with a "software development is important to the government" speech, with emphasis on the economic benefits. Of course, what she didn't say was that we spend far more money offshore on software than we do onshore. Take this as a criticism - we have the skills in New Zealand to be able to take our software into our hands and make it work to how we work. We don't have the commitment from the New Zealand government or businesses to be able to do this.

    The dinner itself is a little strange. Here you are sitting at a table of your peers and you can't help but think that all of the finalists should probably be getting their bit in the limelight. So at our table we had a sort of mock rivalry going on.

    There was:

    • Nathan Parker, principal at Warrington School - the first school in New Zealand to take on a full Open Source and Open Culture philosophy. I've idolised Nathan for a while - the guy's a dude! So the school itself runs on Open Source Software - completely. It's small enough that they don't need a Student Management System. As well as that, they have a low-power FM station that has run 24/7 for the last 2 years, staffed by volunteers. This station plays Creative Commons content. They also do sort of a "computers at homes" programme done right, i.e. the sense of ownership is accomplished by getting people to build their own computers.
    • Ian Beardslee from Catalyst I.T. Ltd. for the Catalyst Open Source Academy. For 2 summers now, Catalyst I.T. have taken an intake of students - basically dropping that barrier of entry into open source contribution through a combination of classroom-type sessions and mentored sessions for real contributions.
    Personally, I don't think I'd have liked to be a judge, given just how close I perceive all of these projects to be. Paul Seiler, who I was sitting next to, did kind of say that you're all accomplishing the same sorts of things in different ways.

    And I saw a comment on a mailing list about those projects that are out there doing their thing but that no one nominated. This puts me in mind of yet another TED talk that I watched the morning after the NZOSA dinner.

    There was lots in that video that resonated with me. Things like me wanting to learn electronics but having a fear around it due to what people kept saying around me - "You have to get it absolutely right or it won't work" - paralysing me with fear of getting it wrong and it not working. The same thing was said to me about programming though I learnt fairly quickly that I could make mistakes and it wasn't the end of the world. In fact, programming is kind of the art of putting bugs in.

    But more importantly, and more on topic, the video kind of defines why there are likely a whole lot of projects that should have been nominated but weren't due to weird hang ups.

    And in a greater sense, shame is probably a huge threat to the Open Source Community. I don't think I've ever contributed more than a few lines of code openly because I'm convinced that whatever code just isn't good enough. That I'll be ridiculed for my coding style or assumptions etc. And I've always said that I'll put out my code after a clean up etc.

    Take the code for the Manaiakalani project. It often felt like I was hacking things (badly in some cases) in order to get the functionality we needed - things like creating a blacklist of applications for example. I'm sure that there's evidence of the fact that I'd never coded in python before the project in the code as well. And yet, it's not the code, but the thoughts behind the code, that's award winning. Even if the code is hideous, it's what the code is accomplishing or attempting to accomplish that's important.

    So for the next NZOSA gala, I would love to see, not just a list of the finalists, but also a short list of nominations (those that are deserving of at least a little recognition even if not quite making the final cut). I'd love to see the organisers and judges complaining about the number of nominations coming through. I would love to see the "Open Source Special Recognition" award become a permanent fixture (won by Nathan Parker this year). And hell, greater media attention probably wouldn't be such a bad thing.

    On a very personal note though - I now have to change from being a "Professional Geek" to being an "Award Winning Geek". Feels good on these ol' shoulders of mine.

    03 November 2012

    Rob Connolly

    Tiny MQTT Broker with OpenWRT

    So yet again I’ve been really lax at posting, but meh. I’ve still been working on various projects aimed at home automation – this post is a taster of where I’m going…

    MQTT (for those that haven’t heard about it) is a real-time, lightweight, publish/subscribe protocol for telemetry-based applications (i.e. sensors). It’s been described as “RSS for the Internet of Things” (a rather poor description in my opinion).

    The central part of MQTT is the broker: clients connect to brokers in order to publish data and receive data in feeds to which they are subscribed. Multiple brokers can be fused together in a hierarchical structure, much like the mounting of filesystems in a unix-like system.

    I’ve been considering using MQTT for the communication medium in my planned home automation/sensor network projects. I wanted to set up a hierarchical system with different brokers for different areas of the house, but hadn’t settled on a hardware platform. Until now…

    …enter the TP-Link MR3020 ‘travel router’, which is much like the TL-WR703N which I’ve seen used in several hardware hacks recently:

    It’s a Tiny MQTT Broker!

    I had to ask a friend in Hong Kong to send me a couple of these (they aren’t available in NZ) – thanks Tony! Once I received them installing OpenWRT was easy (basically just upload through the existing web UI, following the instructions on the wiki page I linked to above). I then configured the wireless adapter in station mode so that it would connect to my existing wireless network and added a cheap 8GB flash drive to expand the available storage (the device only has 4MB of on-board flash, of which ~900KB is available after installing OpenWRT). I followed the OpenWRT USB storage howto for this and to my relief found that the on-board flash had enough space for the required drivers (phew!).

    Once the hardware-type stuff was sorted, with the USB drive partitioned (1GB swap, 7GB /opt) and mounting on boot, I was able to install Mosquitto, the Open Source MQTT broker, with the following command:

    $ opkg install mosquitto -d opt

    The -d option allows the package manager to install to a different destination, in this case /opt. Destinations are configured in /etc/opkg.conf.
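
    For reference, a destination in /etc/opkg.conf is just a name-to-path mapping, so the entry matching the -d opt option above should look something like this (sketched from memory, so double check against your own opkg.conf):

    dest opt /opt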

    It took a little bit of fiddling to get mosquitto to start at boot, mainly because of the custom install location. In the end I just edited the paths in /opt/etc/init.d/mosquitto to point to the correct locations (I changed the APP and CONF settings). I then symlinked the script to /etc/rc.d/S50mosquitto to start it at boot.
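
    The symlink itself is just the usual one-liner, using the paths described above:

    $ ln -s /opt/etc/init.d/mosquitto /etc/rc.d/S50mosquitto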

    That’s about as far as I’ve got, apart from doing a quick test with mosquitto_pub/mosquitto_sub to check everything works. I haven’t tried mounting the broker under the master broker running on my home server yet.
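
    For anyone wanting to run the same sanity check, it looks something like this - subscribe in one shell, publish from another, and the message should pop out on the subscriber (the broker address and topic name here are just examples):

    $ mosquitto_sub -h <broker-ip> -t test/topic
    $ mosquitto_pub -h <broker-ip> -t test/topic -m 'hello'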

    The next job is to investigate the serial port on the device in order to attach an Arduino clone which I soldered up a while ago. That will be a story for another day, hopefully in the not-too-distant future!


    21 August 2012

    Rob Connolly

    Smartclock Prototype

    So as promised here are the details and photos of the Arduino project I’ve been working on – a little late I know, but I’ve actually been concentrating on the project.

    The project I’m working on is a clock, but as I mentioned before it’s not just any old clock. The clock is equipped with sensors for temperature, light level and battery level. It also has a bluetooth module for relaying this data back to my home server. This is the first part of a larger plan to build a home automation and sensor network around the house (and garden). It’s serving as kind of a test bed for some of the components I want to use as well as getting me started with the software.

    Prototype breadboard

    The prototype breadboard showing the Roving Networks RN-41 bluetooth module on the left and the sensors on the right. The temperature sensor (bottom middle) is a TMP36 and the light sensor is a simple voltage divider using a photocell.

    As you can see from the photos this is a very basic prototype at the moment – although as of this weekend all the hardware is working as well as the software drivers. I just have the firmware to finalise before building the final unit.

    Time display

    The (very bright!) display showing the time. I’m using the Sparkfun 7-segment serial display, which I acquired via Nicegear. It’s a lovely display to work with!

    The display is controlled via SPI and the input from the light sensor is used to turn off the display when it is dark in order to save power when there is no-one in the room. The display will also be able to be controlled from the server via a web interface.

    Temperature display

    The display showing the current temperature. The display switches between modes every 20 seconds with its default settings.

    Careful readers will note the absence of a real-time clock chip to keep accurate time. The time is kept using one of the timers on the ATmega328p. Yes, before you ask, this isn’t brilliantly accurate (it loses about 30 seconds every hour!), but I am planning to sync the time from the server via the bluetooth connection, so I’m not concerned.

    The final version will use an Arduino Pro Mini 3.3V (which I also got from Nicegear) for the brains, along with the peripherals shown. The Duemilanove shown is just easier for prototyping (although it makes interfacing with the RN-41 a little more difficult).

    I intend to publish all the code (both for the firmware and the server) and schematics under Open Source licences as well as another couple of blog posts on the subject (probably one on the final build with photos and one on the server). However, that’s it for now.


    15 August 2012

    Rob Connolly

    Quick Update…

    Well, I’ve not been doing great with posting more, especially on the quick short posts front. I guess it’s because I’m either too busy all the time or because I just don’t think anyone wants to read every last thought which pops into my head. Probably a bit of both!

    Anyway, here’s a quick run-down of what I’ve been up to over the past couple of weeks:

    • I’ve been working on an Arduino project at home. I’ll post more on this over the weekend (with photos). For now I’ll just say that it’s a clock with some sensors on it – but it does a little more than your average clock. Although I’ve had my Arduino for a couple of years I’ve never really used it in earnest and I’m finding it refreshing to work with. Since I use PICs at work the simpler architecture is nice. Of course I program it in C so I can’t comment on the IDE/language provided by the Arduino tools.
    • The beer which I made recently is now bottled and maturing. It’ll need a couple more weeks to be ready to drink though (actually the longer the better really, but I can never wait!). I’ll report back on what it’s like when I try it.
    • I’ve been thinking about ways to get the ton of data I have spread across my machines in order. Basically I want to get it all onto my mythbox/home server/personal cloud and accessible via ownCloud and NFS. I also have a ton of dead tree (read ‘important’ documents) which needs scanning and a ton of CDRs that need backing up. After that I have to overhaul my backup scheme. It’s a big job – hence why I’ve only been thinking about doing it so far.
    • I’ve also been thinking about upgrading my security after the recent hacks which have occurred. Since I’m not hugely reliant on external services (i.e. Google, Facebook, Apple and Amazon) I’m doing pretty well already. Also, I already encrypt all my computers anyway (which is way more effective than that stupid ‘remote wipe’ misfeature Apple have). I am considering upgrading to two-factor authentication using Google Authenticator anywhere I can, and I want to switch to using GPG subkeys and storing my master private key somewhere REALLY secure. I’ll be writing about these as I do them so stay tuned.

    Well, hopefully that’s a quick summary of what I’ve been up to (tech-wise) lately as well as what might be to come in these pages. For now, that’s all folks.


    24 July 2012

    Pass the Source

    Google Recruiting

    So, Google are recruiting again. From the open source community, obviously. It’s where to find all the good developers.

    Here’s the suggestion I made on how they can really get in front of FOSS developers:

    Hi [name]

    Just a quick note to thank you for getting in touch with so many of our
    Catalyst IT staff, both here and in Australia, with job offers. It comes
    across as a real compliment to our company that the folks that work here
    are considered worthy of Google’s attention.

    One thing about most of our staff is that they *love* open source. Can I
    suggest, therefore, that one of the best ways for Google to demonstrate
    its commitment to FOSS and FOSS developers this year would be to be a
    sponsor of the NZ Open Source Awards. These have been very successful at
    celebrating and recognising the achievements of FOSS developers,
    projects and users. This year there is even an “Open Science” category.

    Google has been a past sponsor of the event and it would be good to see
    you commit to it again.

    For more information see:

    http://www.nzosa.org.nz/

    Many thanks
    Don

    09 July 2012

    Andrew Caudwell

    Inventing On Principle Applied to Shader Editing

    Recently I have been playing around with GLSL Sandbox (github), a what-you-see-is-what-you-get shader editor that runs in any WebGL-capable browser (such as Firefox, Chrome and Safari). It gives you a transparent editor pane in the foreground and the resulting compiled fragment shader rendered behind it. Code is recompiled dynamically as the code changes. The latest version even has syntax and error highlighting, plus bracket matching.

    There have been a few other WebGL-based shader editors like this in the past, such as Shader Toy by Iñigo Quílez (aka IQ of demoscene group RGBA) and his more recent (though I believe unpublished) editor used in his fascinating live coding videos.

    Finished compositions are published to a gallery with the source code attached, and can be ‘forked’ to create additional works. Generally the author will leave their twitter account name in the source code.

    I have been trying to get to grips with some more advanced raycasting concepts, and being able to code something up in sandbox and see the effect of every change is immensely useful.

    Below are a bunch of my GLSL sandbox creations (batman symbol added by @emackey):

        

        

    GLSL Sandbox is just the latest example of the merit of software development tools that provide immediate feedback, and highlights the major advantages scripting languages have over heavy compiled languages with long build and linking times that make experimentation costly and tedious. Inventing on Principle, a presentation by Bret Victor, is a great introduction to this topic.

    I would really like a save draft button that saves shaders locally so I have some place to keep things that are a work in progress; I might have to look at how I can add this.

    Update: Fixed links to point at glslsandbox.com.

    05 June 2012

    Pass the Source

    Wellington City Council Verbal Submission

    I made the following submission on the Council’s Draft Long Term Plan. Some of this related to FLOSS. This was a 3 minute slot with 2 minutes for questions from the councillors.

    Introduction

    I have been a Wellington inhabitant for 22 years and am a business owner. We employ about 140 staff in Wellington, with offices in Christchurch, Sydney, Brisbane and the UK. I am also co-chair of NZRise which represents NZ owned IT businesses.

    I have 3 Points to make in 3 minutes.

    1. The Long Term plan lacks vision and is a plan for stagnation and erosion

    It focuses on selling assets, such as community halls and council operations, and on postponing investments; on reducing public services such as libraries and museums; and on increasing user costs. This will not create a city where “talent wants to live”. With this plan who would have thought the citizens of the city had elected a Green Mayor?

    Money speaks louder than words. Both borrowing levels and proposed rate increases are minimal and show a lack of investment in the city, its inhabitants and our future.

    My company is about to open an office in Auckland. A manager was recently surveying staff about team allocation and noted, as an aside, that between 10 and 20 Wellington staff would move to Auckland given the opportunity. We are not simply competing with Australia for hearts and minds, we are competing with Auckland whose plans for investment are much higher than our own.

    2. Show faith in local companies

    The best way to encourage economic growth is to show faith in the talent that actually lives here and pays your rates. This means making sure the council staff have a strong direction and mandate to procure locally. In particular the procurement process needs to be overhauled to make sure it does not exclude SMEs (our backbone) from bidding for work (see this NZCS story). It needs to be streamlined, transparent and efficient.

    A way of achieving local company participation in this is through disaggregation – the breaking up of large-scale initiatives into smaller, more manageable components – for the following reasons:

    • It improves project success rates, which helps the public sector be more effective.
    • It reduces project cost, which benefits the taxpayers.
    • It invites small business, which stimulates the economy.

    3. Smart cities are open source cities

    Use open source software as the default.

    It has been clear for a long time that open source software is the most cost effective way to deliver IT services. It works for Amazon, Facebook, Red Hat and Google and just about every major Silicon Valley success since the advent of the internet. Open source drives the internet and these companies because it has an infinitely scalable licensing model – free. Studies, such as the one I have here from the London School of Economics, show the cost effectiveness and innovation that comes with open source.

    It pains me to hear about proposals to save money by reducing library hours and increasing fees, when the amount of money being saved is less than the annual software licence fees our libraries pay, when world-beating free alternatives exist.

    This has to change, looking round the globe it is the visionary and successful local councils that are mandating the use of FLOSS, from Munich to Vancouver to Raleigh NC to Paris to San Francisco.

    As well as saving money, open source brings a state of mind. That is:

    • Willingness to share and collaborate
    • Willingness to receive information
    • The right attitude to be innovative, creative, and try new things

    Thank you. There should now be 2 minutes left for questions.

    11 March 2012

    Malcolm Locke

    face

    Secure Password Storage With Vim and GnuPG

    There are a raft of tools out there for secure storage of passwords, but they will all come and go; Vim and GnuPG are forever.

    Here's the config:

    augroup encrypted
        au!
    
        " First make sure nothing is written to ~/.viminfo while editing
        " an encrypted file.
        autocmd BufReadPre,FileReadPre      *.gpg set viminfo=
        " We don't want a swap file, as it writes unencrypted data to disk
        autocmd BufReadPre,FileReadPre      *.gpg set noswapfile
        " Switch to binary mode to read the encrypted file
        autocmd BufReadPre,FileReadPre      *.gpg set bin
        autocmd BufReadPre,FileReadPre      *.gpg let ch_save = &ch|set ch=2
        autocmd BufReadPost,FileReadPost    *.gpg '[,']!gpg --decrypt 2> /dev/null
        " Switch to normal mode for editing
        autocmd BufReadPost,FileReadPost    *.gpg set nobin
        autocmd BufReadPost,FileReadPost    *.gpg let &ch = ch_save|unlet ch_save
        autocmd BufReadPost,FileReadPost    *.gpg execute ":doautocmd BufReadPost " . expand("%:r")
    
        " Convert all text to encrypted text before writing
        autocmd BufWritePre,FileWritePre    *.gpg   '[,']!gpg --default-recipient-self -ae 2>/dev/null
        " Undo the encryption so we are back in the normal text, directly
        " after the file has been written.
        autocmd BufWritePost,FileWritePost  *.gpg   u
    
        " Fold entries by default
        autocmd BufReadPre,FileReadPre      *.gpg set foldmethod=expr
        autocmd BufReadPre,FileReadPre      *.gpg set foldexpr=getline(v:lnum)=~'^\\s*$'&&getline(v:lnum+1)=~'\\S'?'<1':1
    augroup END
    

    Now, open a file, say super_secret_passwords.gpg and enter your passwords with a blank line between each set:

    My Twitter account
    malc : s3cr3t
    
    My Facebook account
    malc : s3cr3t
    
    My LinkedIn account
    malc : s3cr3t
    

    When you write the file out, it will be encrypted with your GPG key. When you next open it, you'll be prompted for your GPG private key passphrase to decrypt the file.
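
    (This all assumes you already have a GPG key pair for --default-recipient-self to pick up; if you don't, generating one is a single, if somewhat interactive, command.)

    gpg --gen-key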

    The line folding config will mean all the passwords are hidden by default when you open the file; you can reveal the details using zo (or right arrow / l) with the cursor over the password title.

    I like this system because as long as I have gpg and my private key available, I can extract any long lost password from my collection.

    05 January 2012

    Pass the Source

    The Real Tablet Wars

    tl;dr (formerly known as Executive Summary): Openness + Good Taste Wins

    Gosh, it’s been a while. But this site is not dead. Just been distracted by identi.ca and twitter.

    I was going to write about Apple, again. A result of unexpected and unwelcome exposure to an iPad over the Christmas Holidays. But then I read Jethro Carr’s excellent post where he describes trying to build the Android OS from Google’s open source code base. He quite mercilessly exposes the lack of “open” in some key areas of that platform.

    It is more useful to look at the topic as an issue of “open” vs “closed” where iPad is one example of the latter. But, increasingly, Android platforms are beginning to display similar inane closed attributes – to the disadvantage of users.

    Part of my summer break was spent helping out at the premier junior sailing regatta in the world, this year held in Napier, NZ. Catalyst, as a sponsor, has built and is hosting the official website.

    I had expected to swan around, sunbathing, drinking cocktails and soaking up some atmosphere. Instead a last minute request for a new “live” blogging section had me blundering around Joomla and all sorts of other technology with which I am happily unfamiliar. Days and nightmares of iPads, Windows, wireless hotspots and offshore GSM coverage.

    The plan was simple: the specialist blogger, himself a world-renowned sailor, would take his tablet device out on the water on the spectator boat. From there he would watch and blog starts, racing, finishes and anguished reactions from parents (if there is one thing that unites races and nationalities, it is parental anguish over sporting achievement).

    We had a problem in that the web browser on the tablet didn’t work with the web-based text editor used in the Joomla CMS. That had me scurrying around for a replacement for the tinyMCE plugin, just the most common browser-based editing tool. But a quick scan around various forums showed me that the alternative editors were not a solution and that the real issue was a bug with the client browser.

    “No problem”, I thought. “Let’s install Firefox, I know that works”.

    But no, Firefox is not available to iPad users  and Apple likes to “protect” its users by only tightly controlling whose applications are allowed to run on the tablet. Ok, what about Chrome? Same deal. You *have* to use Apple’s own buggy browser, it’s for your own good.

    Someone suggested that the iPad’s operating system we were using needed upgrading and the new version might have a fixed browser. No, we couldn’t do that because we didn’t have Apple’s music playing software, iTunes, on a PC. Fortunately Vodafone were also a sponsor, and not only did they have iTunes handy, they also downloaded the upgrade. Only problem, the upgrade wiped all the apps that our blogger and his family had previously bought and installed.

    Er, and the upgrade failed to fix the problem. One day gone.

    So a laptop was press-ganged into action, which, in the end, was a blessing because other trials later showed that typing blogs fast, on an ocean swell, is very hard without a real keyboard. All those people pushing tablets at schools, keep in mind it is good to have our children *write* stuff, often.

    The point of this post is not really to bag Apple, but to bag the mentality that stops people using their own devices in ways that help them through the day. I only wanted to try a different browser to Safari, not an unusual thing to do. Someone else might want to try out a useful little application a friend has written for them, but that wouldn’t be allowed.

    But the worst aspect of this is that because of Apple’s success in creating well designed gadgets other companies have decided that “closed” is also the correct approach to take with their products. This is crazy. It was an open platform, Linux Kernel with Android, that allowed them to compete with Apple in the first place and there is no doubt that when given a choice, choice is what people want – assuming “taste” requirements are met.

    Other things being equal*, who is going to choose a platform where the company that sold you a neat little gadget controls all the things you do on it? But there is a strong trend by manufacturers such as Samsung, and even Linux distributions, such as Ubuntu, to start placing restrictions on their clients and users. To decide for all of us how we should behave and operate *our* equipment.

    The explosive success of the personal computer was that it was *personal*. It was your own productivity, life enhancing device. And the explosive success of DOS and Windows was that, with some notable exceptions, Microsoft didn’t try and stop users installing third party applications. The dance monkey boy video is funny, but the truth is that Microsoft did want “developers, developers, developers, developers” using its platforms because, at the time, it knew it didn’t know everything.

    Apple, Android handset manufacturers and even Canonical (Ubuntu) are falling into the trap of not knowing that there is stuff they don’t know and they will probably never know. Similar charges are now being made about Facebook and Twitter. The really useful devices and software will be coming from companies and individuals who realise that whilst most of what we all do is the same as what everyone else does, it is the stuff that we do differently that makes us unique and that we need to control and manage for ourselves. Allow us do that, with taste, and you’ll be a winner.

    PS I should also say “thanks” fellow sponsors Chris Devine and Devine Computing for just making stuff work.

    * I know all is not equal. Apple’s competitive advantage is that it “has taste” – but not in its restrictions.

    11 November 2011

    Robin Paulson

    Free software and the extraction of capital

    This essay will assess the relationship between free software and the capitalist mode of accumulation, namely that of the extraction of various forms of capital to produce profit. I will perform an analysis through the lens of the Marxist concept of extracting surplus from workers, utilising Bourdieu’s theory of capital and the ideas of Hardt and Negri as they discuss the various economic paradigms and the progression through these.

    The free software movement is one which states that computer software should not have owners (Stallman, 2010, chap. 5), and that proprietary software is fundamentally unethical (Stallman, 2010, p. 5). This idea is realised through “the four freedoms” and a range of licenses, which permit anyone to: use for any purpose; modify; examine; and redistribute modified copies of the software so licensed (Free Software Foundation, 2010). These freedoms are posited as a contrast to the traditional model of software development, which rests all ownership and control of the product in its creators. As free software is not under private control, it would appear at first to escape the capitalist mode of production, and the problems which ensue from that, such as alienation, commodity fetishism and the concentration of power and wealth in the hands of a few.

    For a definition of the commons, Bollier states:

    commons comprises a wide range of shared assets and forms of community governance. Some are tangible, while others are more abstract, political, and cultural. The tangible assets of the commons include the vast quantities of oil, minerals, timber, grasslands, and other natural resources on public lands, as well as the broadcast airwaves and such public facilities as parks, stadiums, and civic institutions. … The commons also consists of intangible assets that are not as readily identified as belonging to the public. Such commons include the creative works and public knowledge not privatized under copyright law. … A last category of threatened commons is that of so-called ‘gift economies’. These are communities of shared values in which participants freely contribute time, energy, or property and over time receive benefits from membership in the community. The global corps of GNU/Linux software programmers is a prime example: enthusiasts volunteer their talents and in return receive useful rewards and group esteem. (2002)

    Thus, free software would appear to offer an escape from the system of capitalist dominance based upon private property, as the products of free software contribute to the commons, resist attempts at monopoly control and encourage contributors to act socially.

    Marx described how through the employment of workers, investors in capitalist businesses were able to amass wealth and thus power. The employer invests an amount of money into a business, to employ labour, and the labourer creates some good, be it tangible or intangible. The labourer is then paid for this work, and the company owner takes the good and sells it at some higher price, to cover other costs and to provide a profit. The money the labourer is paid is for the “necessary labour” (Marx, 1976a, p. 325), i.e. the amount the person requires to reproduce labour, that is the smallest amount possible to ensure the worker can live, eat, house themself, work fruitfully and produce offspring who will do similar. The difference between this amount and the amount the good sells for, minus other costs, which are based upon the labour of other workers, is the “surplus value”, and equals the profit to the employer (Marx, 1976a, p. 325). The good is then sold to a customer, who thus enters into a social relationship with the worker that made it. However, the customer has no knowledge of the worker, does not know the conditions they work under, their wage, their name or any other information about them; their relationship is mediated entirely through the commodity which passes from producer to consumer. Thus, despite the social relationship between the two, they are alienated from each other, and the relationship is represented through a commodity object, which is thus fetishised over the actual social relationship (Marx, 1976a, chap. 1). The worker is further alienated from the product of their labour, for which they are not fully recompensed, as they are not paid the full exchange amount which the capitalist company obtains, and do not have control over any further part in the commodity than the work they employed to put in.

    If we study the reasons participants have for contributing to free software projects, coders fall into one or more of the following three categories: firstly, coders who contribute to create something of utility to themselves; secondly, those who are paid by a company which employs them to write code in a traditional employment relationship; and finally, those who write software without economic compensation, to benefit the commons (Hars & Ou, 2001). The first category does not enter into a relationship with others, so the system of capitalist exchange does not need to be considered. The second category, that of a worker being paid to contribute to a project, might seem unusual, as the company appears to be giving away the result of capital investment, thus benefiting competitors. Although this is indeed the case, the value gained in other contributors viewing, commenting on and fixing the code is perceived to outweigh any disadvantages. In the case of a traditional employee of a capitalist company, the work, be it production of knowledge, carrying out of a service or making a tangible good, will be appropriated by the company the person works for, and credited as its own. The work is then sold at some increased cost, the difference between the cost to make it and the cost it is sold for being surplus value, which reveals itself as profit.

    The employed software coder working on a free software project performs necessary labour (Marx, 1976a, p. 325), as any other employee does, and this is rewarded with a wage. However, the surplus value, which would nominally be used to create profit for the employer by appropriating the work of the employee, is not solely controlled by the capitalist. Due to the nature of the licence, the product of the necessary and surplus labour can be taken, used and modified by any other person, including the worker. Thus, the traditional relationship of the commons to the capitalist is changed. The use of paid workers to create surplus value is an example of the capitalist taking the commons and re-appropriating it for their own gain. However, as the work is given back to the commons, there is an argument that the employer has instead contributed to the wider sphere of human knowledge, without retaining the monopoly control that the traditional copyright model provides. Further, the worker is not alienated by their employer from the product of their labour; it is available for them to use as they see fit.

    The final category of contributors to a project, volunteers, are generally also highly skilled, well paid, and materially comfortable in life. According to Maslow’s Hierarchy of Needs (Maslow, 1943), as individuals attain the material comforts in life, they are likely to turn their aspirations towards less tangible but more fulfilling achievements, such as creative pursuits. Some will start free software projects of their own, as some people will start capitalist businesses: the Linux operating system kernel, the GNU operating system and the Diaspora* [sic] distributed social networking software are examples of this situation. If a project then appears successful to others, it will gain new coders, who will lend their assistance and improve the software. The person(s) who started the project are acknowledged as the leader(s), and often jokingly referred to as the “benevolent dictator for life” (Rivlin, 2003), although their power is contingent, because as Raymond put it, “the culture’s ‘big men’ and tribal elders are required to talk softly and humorously deprecate themselves at every turn in order to maintain their status” (2002). As leaders, they will make the final decision of what code goes into the ‘official’ releases, and be recognised as the leader in the wider free software community.

    Although there may be hundreds of coders working on a project, as there is an easily identifiable leader, he or she will generally receive the majority of the credit for the project. Each coder will carry out enough work to produce the piece of code they wish to work on, thus producing a useful addition to the software. As suggested above by Maslow, the coder will gain symbolic capital, defined by Bourdieu as “the acquisition of a reputation for competence and an image of respectability” (1984, p. 291) and as “predisposition to function as symbolic capital, i.e., to be unrecognized as capital and recognized as legitimate competence, as authority exerting an effect of (mis)recognition … the specifically symbolic logic of distinction” (Bourdieu, 1986). This capital will be attained through working on the project, and being recognised by other coders involved in the project and elsewhere, the readers of their blog, and their friends and colleagues; they may occasionally be featured in articles on technology news web sites (KernelTrap, 2002; Mills, 2007). Each coder adds their piece of effort to the project, gaining enough small acknowledgements for their work along the way to feel they should continue coding, which could be looked at as necessary labour (Marx, 1976a, p. 325). Contemporaneously, the project leader gains a smaller acknowledgement for each improvement to the project as a whole, which in the case of a large project can add up to something significant over time. In the terms expressed by Marx, although the coder carries out a certain amount of work, it is then handed over to the project, represented in the eyes of the public by the leader, who accrues similar small amounts of capital from all coders on the project. This profit is surplus value (Marx, 1976a, p. 325). As with the employed coder, the economic value of the project does not belong to the leaders; there is no economic surplus extracted there, as all can use it.

    To take a concrete example, Linus Torvalds, originator and head of the Linux kernel, is known for his work throughout the free software world, and feted as one of its most important contributors (Veltman, 2006, p. 92). The perhaps surprising part of this is that Torvalds no longer writes code for the project; he manages others, and makes grand decisions as to which concepts, not actual code, will be allowed into the mainline, or official, release of the project (Stout, 2007). Drawing a parallel with a traditional capitalist company, Linus can be seen as the original investor who started the organisation, who manages the workers, and who takes a dividend each year, despite not carrying out any productive work. Linus’ original investment in 1991 was economic and cultural capital, in the form of time and a part-finished degree in computer science (Calore, 2009). While he was the only contributor, the project progressed slowly, and the originator gained symbolic, social and cultural capital solely through his own efforts, thus resembling a member of the petite bourgeoisie. As others saw the value in the project, they offered small pieces of code to solve small problems and progress the code. These were incorporated, thus rapidly improving the software, and the standing of Torvalds.

    Like consumers of any other product, users of Linux will generally not be aware of who made a specific change unless they make an effort to read the list of changes for each release, thus resulting in the coder being alienated from the product of their labour and from the users of the software (Marx, 1959, p. 29), who fetishise (Marx, 1976a, chap. 1) the software over the social relationship which should be prevalent. For each contribution, which results in a small gain in symbolic capital to the coder, Linus takes a smaller gain in those forms of capital, in a way analogous to a business investor extracting surplus economic capital from her employees, despite not having written the code in question. The capitalist investor possesses no particular qualities beyond the circumstances of her birth, yet due to the capital she is able to invest, she can amass significant economic power from the work of others. Over 18 years, these small gains in capital have also added up for Linus Torvalds, and such is now the symbolic capital expropriated that he is able to continue extracting this capital from Linux, while reinvesting capital in writing code for other projects, in this case ‘Git’ (Torvalds, 2005), which has attracted coders in part due to the fame of its principal architect. The surplus value of the coders on this project is also extracted and transferred to the nominal leader, and so the cycle continues, with the person at the top continuously and increasingly benefiting from the work of others, at their cost.

    The different forms of capital can readily be exchanged for one another. As such, Linus has been offered book contracts (Torvalds, 2001), is regularly interviewed for a range of publications (Calore, 2009; Rivlin, 2003), has gained jobs at high-prestige technology companies (Martin Burns, 2002), and has been invited to various conferences as a guest speaker. The other coders on the Linux project have also gained, through skills learned, social connections and the prestige of being part of what is a key project in free software, although none in the same way as Linus.

    Free software is constructed in such a way as to allow a range of choices to address most needs; for instance, in the field of desktop operating systems there are hundreds to choose from, with around six distributions, or collections of software, covering the majority of users, through being recognised as well-supported, stable and aimed at the average user (Distrowatch.com, 2011). In order for the leaders of each of these projects to increase their symbolic capital, they must continuously attract new users, be regularly mentioned in the relevant media outlets and generally be seen as adding to the field of free software, contributing in some meaningful way. Doing so requires a point of difference between their software and the other distributions. However, this has become increasingly difficult: as the components used in each project have become increasingly stable and settled, the current versions of each operating system contain virtually identical lists of packages. In attempting to gain users, some projects have chosen to make increasingly radical changes, such as including versions of software with new features even though they are untested and unstable (Canonical Ltd., 2008), and changing the entire user experience, often negatively as far as users are concerned (Collins, 2011). Although this keeps the projects in the headlines on technology news sites, and thus attracts new users, it turns off experienced users, who are increasingly moving to more stable systems (Parfeni, 2011).

    This proliferation of systems, declining opportunities to attract new users, and increasingly risky attempts to do so, demonstrate the tendency of the rate of profit to fall, and the efforts capitalist companies go to in seeking new consumers (Marx, 1976b, chap. 3), so they can continue extracting increased surplus value as profit. Each project must put in more and more effort, in increasingly risky areas, thus requiring increased maintenance and bug-fixing, to attract users and be appreciated in the eyes of others.

    According to Hardt and Negri, since the Middle Ages there have been three economic paradigms, identified by the three forms from which profit is extracted. These are: land, which can be rented out to others or mined for minerals; tangible, movable products, which are manufactured by exploited workers and sold at a profit; and services, which involve the creation and manipulation of knowledge and affect, and the care of other humans, again by exploited workers (2000, p. 280). Looking more closely at these phases, we can see a progression. The first phase relied mainly upon the extraction of profit from raw materials, such as the earth itself, coal and crops, with little if any processing by humans. The second phase still required raw materials, such as iron ore, bauxite, rubber and oil, but also required a significant amount of technical processing by humans to turn these materials into commodities which were then sold, with profit extracted from the surplus labour of workers. Thus the products of the first phase were important in a supporting role to the production of the commodities, in the form of land for the factory, food for workers, fuel for smelters and machinery, and materials to fashion, but the majority of the value of the commodity was generated by activities resting on these resources: the working of those raw materials into useful items by humans. The last of the phases listed above, the knowledge, affect and care industry, entails workers collecting and manipulating data and information, or performing some sort of service work, which can then be rented to others. Again, this phase relies on the other phases: from the first phase, land for offices, data centres, laboratories, hospitals, financial institutes and research centres, food for workers, and fuel for power; plus, from the second phase, commodities including computers, medical equipment, office supplies, and laboratory and testing equipment, to carry out the work. Similarly to the previous phase, these materials and items are not directly the source of the creation of profit, but they are required; the generation of profit relies and rests on their existence.

    In the context of IT, this change in the dominant paradigm was most aptly demonstrated by the handover of power from the mighty IBM to new upstart Microsoft in 1980, when the latter retained control over their operating system software MS-DOS, despite the former agreeing to install it on their new desktop computer range. The significance of this apparent triviality was illustrated in the film ‘Pirates of Silicon Valley’, during a scene depicting the negotiations between the two companies, in which everyone but Bill Gates’ character froze as he broke the ‘fourth wall’, turning to the camera and explaining the consequences of the mistake IBM had made (Burke, 1999). IBM, the dominant power in computing of the time, were convinced high profit continued to lie in physical commodities, the computer hardware they manufactured, and were unconcerned by their lack of ownership of the software. Microsoft recognised the value of immaterial labour, and soon eclipsed IBM in value and influence in the industry, a position which they held for around 20 years.

    Microsoft’s method of generating profit was to dominate the field of software, their products enabling users to create, publish and manipulate data, while ignoring the hardware, which was seen as a commodity platform upon which to build (Paulson, 2010). Further, the company wasn’t particularly interested in what its customers were doing with their computers, so long as they were using Windows, Office and other technologies to work with that data, as demonstrated by a lack of effort to control the creation or distribution of information. As Microsoft were increasing their dominance, the free software GNU Project was developing a free alternative, firstly to the Unix operating system (Stallman, 2010, p. 9), and later to Microsoft products. Fuelled by the rise of highly capable, cost-free software which competed with and undercut Microsoft, so commoditising the market, the dominance of that company faded in the early 2000s (Ahmad, 2009), to be replaced by a range of companies which built on the products of the free software movement, relying on their use value but no longer having any interest in the exchange value of the software (Marx, 1976a, p. 126). The power Microsoft retains today through its desktop software products is due in significant part to ‘vendor lock-in’ (Duke, n.d.), the practice of using closed standards which only allow their software to interact with data in ways prescribed by the vendor. Google, Apple and Facebook, the dominant powers in computing today, would not have existed in their current form were it not for various pieces of free software (Rooney, 2011). Notably, the prime method of profit-making of these companies is through content, rather than via a software or hardware platform. Apple and Google both provide platforms, such as the iPhone and Gmail, although neither company makes large profits directly from these platforms, sometimes to the point of giving them away, subsidised heavily via their profit-making content divisions (Chen, 2008).

    Returning to the economic paradigms discussed by Hardt and Negri, we have a series of sub-phases, each building on the sub-phase before. Within the third, knowledge, phase, the first sub-phase of IT, computer software such as operating systems, web servers and email servers, was a potential source of high profits through the 1980s and 1990s, but due to high competition, predominantly from the free software movement, the rate of profit has dropped considerably, with, for instance, the free software ‘Apache’ web server being used to host over 60% of all web sites (Netcraft Ltd., 2011). Conversely, the capitalist companies of the next sub-phase were returning high profits and growth, through extensive use of these free products to sell other services. This sub-phase is noticeable for its reliance on creating and manipulating data, rather than producing the tools to do so, although both still come under the umbrella of knowledge production. This trend was mirrored in the free software world: as the field of software stabilised, it offered fewer opportunities for increasing one’s capital through the extraction of surplus in this area.

    As the falling rate of profit reduced the potential to gain symbolic capital through free software, open data projects, which produce large sets of data under open licences, became more prevalent, providing further areas for open content contributors to invest their capital. These initially included Wikipedia, the web-based encyclopedia which anyone can edit, launched in 2001 (“Wikipedia:About,” n.d.). Growth of this project was high for several years, with a large number of new editors joining, but has since slowed to the point where attracting new editors is very difficult (Chi, 2009; Moeller & Zachte, 2009). Similarly, OpenStreetMap, which aims to map the world, was begun in 2004, and grew at a very high rate once it became known in the mainstream technology press. However, now that the majority of streets and significant geographical data in western countries are mapped, the project is finding it difficult to attract new users, unless they are willing to work on adding increasingly esoteric minutiae, which has little obvious effect on the map, and thus provides a less obvious gain in symbolic capital for the contributor (Fairhurst, 2011). For the leaders of the project, this represents more and more effort for comparatively smaller returns; again, the rate of profit is falling. Rather than the previous, relatively passive method of attracting new users and expanding into other areas, the project founders and leading lights are now aggressively pushing the project to map less well-covered areas, such as a recent effort in Kibera, a slum in Nairobi (Map Kibera, 2011); starting a sub-group to create maps in areas such as Haiti, to help out after natural disasters (Humanitarian OpenStreetMap Team, 2011); and providing economic grants for those who will map in less-developed countries (Black, 2008). This closely follows the capitalist need to seek out new markets and territories once all existing ones are saturated, to continuously push for more growth and arrest the falling rate of profit.

    According to Hardt and Negri,

    You can think and form relationships not only on the job but also in the street, at home, with your neighbors and friends. The capacities of biopolitical labor-power exceed work and spill over into life. We hesitate to use the word “excess” for this capacity because from the perspective of society as a whole it is never too much. It is excess only from the perspective of capital because it does not produce economic value that can be captured by the individual capitalist. (2011)

    The capitalist mode of production brings organisational structure to the production of value, but in doing so it fetters the productivity of the commons; that productivity is higher when capital stays external to the production process. This hands-off approach to managing production can be seen extensively in free software, through the self-organising, decentralised model it utilises (Ingo, 2006, p. 38), eschewing traditional management forms with chains of responsibility. Economic forms of capital are prevalent in free software, as when technology companies including advertising provider Google, software support company Red Hat and software and services provider Novell employ coders to commit code to various projects such as the Linux kernel (The Linux Foundation, 2009). However, the final decision of whether the code is accepted is left up to the project itself, which is usually free of corporate management. There are numerous, generally temporary, exceptions to this rule, including OpenOffice.org, the free software office suite, which was recently acquired by software developer Oracle. Within a few months of the acquisition, the number of senior developers involved in the project dropped significantly, most of them citing interference from Oracle in the management of the software, and those who left set up their own fork of the project, based on the Oracle version (Clarke, 2010). Correspondingly, a number of software collections also stopped including the Oracle software, and instead used the version released by the new, again community-managed, offshoot (Sneddon, 2010). Due to the licence which OpenOffice.org is released under, all of Oracle’s efforts to take direct control of the project were easily sidestepped. Oracle may possess the copyright to all of the original code, through purchasing the project, but this comes to naught once that code is released; it can be taken and modified by anyone who sees fit.

    This increased productivity of the commons can be seen in the response to flaws in the software: as there is no hierarchical structure enforced by, for example, an employment contract, problems reported by users can be, and are, taken on by volunteer coders who will work on a flaw until it is fixed, without needing to consult line managers or align with a corporate strategy. If the most recognised source for the software does not respond quickly, whether for financial or technical reasons, the nature of the licence allows other coders, including those hired by customers, to fix the problem. For those not paid, symbolic capital continues to play a part here: although the coders may appear to be unpaid volunteers, in reality there is kudos to be gained by solving a problem quickly, pushing coders to compete against each other, even while sharing their advances.

    Despite this realisation that capital should not get too close to free software, the products of free software are still utilised by many corporates: free software forms the key infrastructure for a high proportion of web servers (Netcraft Ltd., 2011), and is extensively used in mobile phones (Germain, 2011) and financial trading (Jackson, 2011). The free software model thus forms a highly effective method for producing efficient software useful to capital. The decentralised, hard-to-control model disciplines capital into keeping its distance, forcing corporations to realise that if they get too close and try to control too much, they will lose out by wasting resources and appearing as bad citizens of the free software community, thus losing symbolic capital in the eyes of potential investors and customers.

    Conclusion

    The preceding analysis of free software and its relationship to capitalism demonstrates four areas in which the former is relevant to the latter.

    Firstly, free software claims to form a part of the commons, and to a certain extent this is true: the data and code in the projects are licensed in a way which allows all to benefit from using them; they cannot be monopolised, owned and locked down as capitalism has done with the tangible assets of the commons, and with many parts of the intangible commons. Further, it appears that not only is free software not enclosable, but whenever any attempt to control it is exerted by an external entity, the project radically changes direction, sheds itself of the regulation and begins again where it left off, more wary of interference from capital.

    Secondly, however, the paradigm of free software shows that ownership of a thing is not necessarily required to extract profit from it; there are still opportunities for the capitalist mode of accumulation despite this lack of close control. The high-quality, efficient tools provided by free software are readily used by capitalist organisations to sell and promote other intangible products, and to manipulate various forms of data, particularly financial instruments, a growth industry in modern knowledge capitalism, at greater margins than had free software not existed. This high quality is due largely to the aforementioned ability of free software to keep capital from taking a part in its development, given capital’s apparent inefficiency at managing the commons.

    Thirdly, although free software cannot be owned and controlled as physical objects can, thus apparently foiling the extraction of surplus value as economic profit from alienated employees, the nominal leaders of each free software project appear to take a significant part of the credit for the project they steer, thus extracting symbolic capital from other, less prominent coders of the project. This is despite not being involved in much, or in some cases any, of the actual code-writing, thus mirroring the extraction of profit through surplus labour adopted by capitalism.

    Finally, the tendency of the rate of profit to fall seems to pervade free software in the same way as it affects capitalism. Certain free software projects have been shown to have difficulty extracting profit, in the form of surplus symbolic capital, and this, in turn, has prompted a shift to open data, which initially appeared to be an area with potential for growth and profit, although it too has now suffered the same fate as free software.

    References

    Ahmad, A. (2009). Google beating the evil empire | Malay Mail Online. Retrieved November 3, 2011, from http://www.mmail.com.my/content/google-beating-evil-empire

    Black, N. (2008). CloudMade » OpenStreetMap Grants. Retrieved October 29, 2011, from http://blog.cloudmade.com/2008/03/17/openstreetmap-grants/

    Bollier, D. (2002). Reclaiming the Commons. Retrieved November 3, 2011, from http://bostonreview.net/BR27.3/bollier.html

    Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge & Kegan Paul.

    Bourdieu, P. (1986). The Forms of Capital. Retrieved November 5, 2011, from http://www.marxists.org/reference/subject/philosophy/works/fr/bourdieu-forms-capital.htm

    Burke, M. (1999). Pirates of Silicon Valley.

    Calore, M. (2009). Aug. 25, 1991: Kid From Helsinki Foments Linux Revolution | This Day In Tech | Wired.com. Retrieved November 5, 2011, from http://www.wired.com/thisdayintech/2009/08/0825-torvalds-starts-linux

    Canonical Ltd. (2008). “firefox-3.0” source package: Hardy (8.04): Ubuntu. Retrieved October 29, 2011, from https://launchpad.net/ubuntu/hardy/+source/firefox-3.0/3.0~b5+nobinonly-0ubuntu3

    Chen, J. (2008). AT&T’s 3G iPhone Is $199 This Summer | Gizmodo Australia. Retrieved November 3, 2011, from http://www.gizmodo.com.au/2008/04/atts_3g_iphone_is_199_this_summer-2/

    Chi, E. H. (2009, July 22). PART 1: The slowing growth of Wikipedia: some data, models, and explanations. Augmented Social Cognition Research Blog from PARC. Retrieved November 3, 2011, from http://asc-parc.blogspot.com/2009/07/part-1-slowing-growth-of-wikipedia-some.html

    Clarke, G. (2010). OpenOffice files Oracle divorce papers • The Register. Retrieved October 30, 2011, from http://www.theregister.co.uk/2010/09/28/openoffice_independence_from_oracle/

    Collins, B. (2011). Ubuntu Unity: the great divider | PC Pro blog. Retrieved October 24, 2011, from http://www.pcpro.co.uk/blogs/2011/05/03/ubuntu-unity-the-great-divider/

    Distrowatch.com. (2011). DistroWatch.com: Put the fun back into computing. Use Linux, BSD. Retrieved October 30, 2011, from http://distrowatch.com/

    Duke, O. (n.d.). Open Sesame | Love Learning. Retrieved November 3, 2011, from http://www.reedlearning.co.uk/learn-about/1/ll-open-standards

    Fairhurst, R. (2011). File:Osmdbstats8.png – OpenStreetMap Wiki. Retrieved October 29, 2011, from https://wiki.openstreetmap.org/wiki/File:Osmdbstats8.png

    Free Software Foundation. (2010). The Free Software Definition – GNU Project – Free Software Foundation. Retrieved August 29, 2011, from https://www.gnu.org/philosophy/free-sw.html

    Germain, J. M. (2011). Linux News: Android: How Linuxy Is Android? Retrieved October 29, 2011, from http://www.linuxinsider.com/story/How-Linuxy-Is-Android-73523.html

    Hardt, M., & Negri, A. (2000). Empire. Cambridge, Mass: Harvard University Press.

    Hardt, M., & Negri, A. (2011). Commonwealth. Cambridge, Massachusetts: Belknap Press of Harvard University Press.

    Hars, A., & Ou, S. (2001). Working for Free? – Motivations of Participating in Open Source Projects. Hawaii International Conference on System Sciences (Vol. 7, p. 7014). Los Alamitos, CA, USA: IEEE Computer Society. doi:http://doi.ieeecomputersociety.org/10.1109/HICSS.2001.927045

    Humanitarian OpenStreetMap Team. (2011). Humanitarian OpenStreetMap Team » Using OpenStreetMap for Humanitarian Response & Economic Development. Retrieved November 3, 2011, from http://hot.openstreetmap.org/weblog/

    Ingo, H. (2006). Open Life: The Philosophy of Open Source. (S. Torvalds, Trans.). Lulu.com. Retrieved from www.openlife.cc

    Jackson, J. (2011). How Linux mastered Wall Street | ITworld. Retrieved October 29, 2011, from http://www.itworld.com/open-source/193823/how-linux-mastered-wall-street

    KernelTrap. (2002). Interview: Andrew Morton | KernelTrap. Retrieved October 30, 2011, from http://www.kerneltrap.org/node/10

    Map Kibera. (2011). Map Kibera. Retrieved October 29, 2011, from http://mapkibera.org/

    Martin Burns. (2002). Where all the Work’s Hiding | evolt.org. Retrieved October 30, 2011, from http://evolt.org/Where_all_the_Works_Hiding

    Marx, K. (1959). Economic & Philosophic Manuscripts. (M. Mulligan, Trans.). marxists.org. Retrieved November 3, 2011, from http://www.marxists.org/archive/marx/works/download/pdf/Economic-Philosophic-Manuscripts-1844.pdf

    Marx, K. (1976a). Capital: A Critique of Political Economy (Vol. 1). Harmondsworth: Penguin Books in association with New Left Review.

    Marx, K. (1976b). Capital: A Critique of Political Economy. The Pelican Marx library (Vol. 3). Harmondsworth: Penguin Books in association with New Left Review.

    Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370-396.

    Mills, A. (2007). Why I quit: kernel developer Con Kolivas. Retrieved October 30, 2011, from http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm

    Moeller, E., & Zachte, E. (2009). Wikimedia blog » Blog Archive » Wikipedia’s Volunteer Story. Retrieved November 3, 2011, from http://blog.wikimedia.org/2009/11/26/wikipedias-volunteer-story/

    Netcraft Ltd. (2011). May 2011 Web Server Survey | Netcraft. Retrieved October 29, 2011, from http://news.netcraft.com/archives/2011/05/02/may-2011-web-server-survey.html

    Parfeni, L. (2011). Linus Torvalds Drops Gnome 3 for Xfce, Calls It “Crazy” – Softpedia. Retrieved October 29, 2011, from http://news.softpedia.com/news/Linus-Torvalds-Drops-Gnome-3-for-Xfce-Calls-It-Crazy-215074.shtml

    Paulson, R. (2010). Application of the theoretical tools of the culture industry to the concept of free culture. Retrieved October 25, 2010, from http://bumblepuppy.org/blog/?p=4

    Raymond, E. S. (2002). Homesteading the Noosphere. Retrieved June 3, 2010, from http://www.catb.org/~esr/writings/cathedral-bazaar/homesteading/ar01s10.html

    Rivlin, G. (2003, November). Wired 11.11: Leader of the Free World. Retrieved from http://www.wired.com/wired/archive/11.11/linus.html

    Rooney, P. (2011). IT Management: Red Hat CEO: Google, Facebook owe it all to Linux, open source. IT Management. Retrieved October 25, 2011, from http://si-management.blogspot.com/2011/08/red-hat-ceo-google-facebook-owe-it-all.html

    Sneddon, J. (2010). LibreOffice – Google, Novell sponsored OpenOffice fork launched. Retrieved October 29, 2011, from http://www.omgubuntu.co.uk/2010/09/libreoffice-google-novell-sponsored-openoffice-fork-launched/

    Stallman, R. (2010). Free Software Free Society: Selected Essays of Richard M. Stallman. (J. Gay, Ed.) (2nd ed.). Boston, MA: GNU Press, Free Software Foundation.

    Stout, K. L. (2007). CNN.com – Reclusive Linux founder opens up – May 18, 2006. Retrieved October 30, 2011, from http://edition.cnn.com/2006/BUSINESS/05/18/global.office.linustorvalds/

    The Linux Foundation. (2009). Linux Kernel Development. Retrieved from https://www.linuxfoundation.org/sites/main/files/publications/whowriteslinux.pdf

    Torvalds, L. (2001). Just For Fun: The Story of an Accidental Revolutionary. London: Texere.

    Torvalds, L. (2005). “Re: Kernel SCM saga..” – MARC. Retrieved from http://marc.info/?l=linux-kernel&m=111288700902396

    Veltman, K. H. (2006). Understanding new media: augmented knowledge & culture. University of Calgary Press.

    Wikipedia:About. (n.d.). Wikipedia. Retrieved October 29, 2011, from https://secure.wikimedia.org/wikipedia/en/wiki/Wikipedia:About

    26 October 2011

    Colin Jackson

    Retaking the Net

    This Saturday (29th October 2011) is the RetakeTheNet Bar Camp in the Wellington Town Hall.

    I’ve talked about RtN before. It’s a group of people who are uncomfortable about the extent of control of the Net being exerted by governments and companies, and who want to do concrete things to improve the situation. This last point is the kicker – anyone can yell a bit, but doing actual projects is a lot harder. We are trying to use the features of the Net that have made it so successful, its openness and its innovation culture, to find ways to do things more freely.

    The bar camp is for people to come and contribute ideas, meet some fantastic people, and just maybe get energized enough to actually do stuff. There will be sessions through the day starting at 10am (best to get there a bit early) and going on until an after party, starting around 4:30.

    There are going to be some very cool people there. And, you never know, we just might make a difference! Come if you want to be part of that.

    12 October 2011

    Robin Paulson

    University Without Conditions has launched

    Our Free University, the University Without Conditions had its first meeting on Saturday, October the 8th.

    We talked through various issues, including what our University will be, courses we will hold, and a rough idea of principles.  These principles will be made concrete over the next few weeks.  In the meantime, we have decided on our first event; it will be an Equality Forum, to be held as part of the Occupy Auckland demonstration and occupation on October the 15th at Aotea Square.

    All are welcome to attend the first event on the 15th, suggest courses via the website, or join the discussion list to take part in creating our University.

    If you would like to be involved in the set-up, please ask for an account to create posts.

    For more information, see the website:

    http://universitywithoutconditions.ac.nz or http://fu.ac.nz

    18 May 2011

    Andrew Caudwell

    Show Your True Colours

    This last week saw the release of a fairly significant update to Gource – replacing the outdated, 3DFX-era rendering code with something a bit more modern, utilizing more recent OpenGL features like GLSL pixel shaders and VBOs.

    A lot of the improvements are under the hood, but the first thing you’ll probably notice is the elimination of banding artifacts in Bloom, the illuminated fog Gource places around directories. This effect is pretty tough on the ‘colour space’ of so-called Truecolor, the maximum colour depth on consumer monitors and display devices, which provides only 256 different shades of grey to play with.

    When you render a gradient across the screen, there are 3 or 4 times more pixels than there are shades of each colour, producing visible ‘bands’ of the same shade. If multiple gradients like this get blended together, as happens with bloom, you simply run out of ‘in between’ colours and the issue becomes more exaggerated, as seen below (contrast adjusted for emphasis):

            

    Those aren’t compression artifacts you’re seeing!

    Gource now uses colour diffusion to combat this problem. Instead of sampling the exact gradient of bloom for the distance of a pixel from the centre of a directory, we take a fuzzy sample in that vicinity. When zoomed in, you can see the picture is now slightly noisy, but the banding is completely eliminated. Viewed at the intended resolution, you can’t really see the trickery going on – in fact the effect even seems somewhat more natural, a bit closer to how light bouncing off particles of mist would actually behave.
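
    As a rough illustration of the idea, here is a minimal CPU-side sketch in C++ (not Gource’s actual GLSL shader; the gradient width and jitter range are made up) comparing plain quantisation of a gradient with a fuzzily sampled, “diffused” version:

    #include <cstdio>
    #include <cstdlib>

    // Quantise a 0..1 intensity to one of the 256 shades Truecolor offers per channel.
    static unsigned char quantise(float v) {
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        return (unsigned char)(v * 255.0f + 0.5f);
    }

    int main() {
        const int width = 1920;   // a gradient spanning the screen width
        for (int x = 0; x < width; x += 240) {
            float t = (float)x / (float)(width - 1);

            // Exact sample: neighbouring pixels collapse onto the same shade,
            // producing visible bands roughly width/256 pixels wide.
            unsigned char banded = quantise(t);

            // Fuzzy sample: jitter the position by up to ~1.5 shades either way,
            // turning the quantisation error into fine noise instead of bands.
            float jitter = ((float)rand() / (float)RAND_MAX - 0.5f) * (3.0f / 255.0f);
            unsigned char diffused = quantise(t + jitter);

            printf("x=%4d  banded=%3u  diffused=%3u\n", x, banded, diffused);
        }
        return 0;
    }

    At full resolution the jittered samples average out to the same gradient, which is why the result reads as smooth rather than noisy.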

            

    The other improvement is speed – everything is now drawn with VBOs: large batches of object geometry are passed to the GPU in as few shipments as possible, eliminating CPU and IO bottlenecks. Shadows cast by files and users are now done in a second pass on the GPU, using the same geometry as the lit pass – making them really cheap compared to before, when we effectively wore the cost of drawing the whole scene twice.
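
    A hedged sketch of what that batching looks like in legacy OpenGL terms (illustrative C++ only, not Gource’s actual code; the FileNode type is made up, and it assumes an active OpenGL context plus, on most platforms, an extension loader for the buffer functions):

    #include <GL/gl.h>
    #include <vector>

    struct Vertex   { float x, y, u, v; };   // position + texture coordinate
    struct FileNode { float x, y, size; };   // illustrative stand-in for a drawable item

    // Pack every quad for this frame into one buffer and issue a single draw call,
    // instead of one immediate-mode glBegin/glEnd block per item.
    void drawBatch(const std::vector<FileNode>& nodes, GLuint vbo) {
        std::vector<Vertex> verts;
        verts.reserve(nodes.size() * 4);
        for (const FileNode& n : nodes) {
            verts.push_back({n.x,          n.y,          0.0f, 0.0f});
            verts.push_back({n.x + n.size, n.y,          1.0f, 0.0f});
            verts.push_back({n.x + n.size, n.y + n.size, 1.0f, 1.0f});
            verts.push_back({n.x,          n.y + n.size, 0.0f, 1.0f});
        }

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                     verts.data(), GL_STREAM_DRAW);              // one upload...

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (void*)0);
        glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*)(2 * sizeof(float)));

        glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());         // ...and one draw call

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }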

    Text is now drawn in a single pass, including shadows, using some fragment shader magic (take two samples of the font texture, offset by one pixel diagonally, and blend appropriately). Given the ridiculous number of file, user and directory names Gource draws at once with some projects (Linux Kernel Git import commit, I’m looking at you), doing half as much work there makes a big difference.
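
    The two-sample trick can be sketched on the CPU as well (again illustrative C++, not the real GLSL; the toy ‘font texture’ and the half-strength blend factor are made up):

    #include <cstdio>
    #include <algorithm>

    const int W = 8, H = 8;
    float fontTexture[H][W] = {};   // toy single-channel glyph coverage: 1 inside the glyph

    float sampleFont(int x, int y) {
        if (x < 0 || y < 0 || x >= W || y >= H) return 0.0f;
        return fontTexture[y][x];
    }

    int main() {
        for (int y = 1; y < 7; ++y) fontTexture[y][3] = 1.0f;   // a tiny vertical bar 'glyph'

        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                float glyph  = sampleFont(x, y);           // sample at the pixel
                float shadow = sampleFont(x - 1, y - 1);   // second sample, offset by one pixel

                // Show the shadow at half strength wherever the glyph itself is absent.
                float out = std::max(glyph, 0.5f * shadow * (1.0f - glyph));
                printf("%c", out > 0.9f ? '#' : (out > 0.0f ? '+' : '.'));
            }
            printf("\n");
        }
        return 0;
    }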

    30 March 2011

    Vik Olliver

    31-Mar-2011 AM clippings

    Enjoy:

    Chapman Tripp and its allies are casting FUD at New Zealand's upcoming ban on software patents. The MED is not impressed:
    http://computerworld.co.nz/news.nsf/news/chapman-tripp-urges-re-think

    Two more Registration Authorities have had their SSL signing keys compromised. Web of trust fail:
    https://threatpost.com/en_us/blogs/comodo-says-two-more-registration-authorities-compromised-033011

    Acer lines up a dual-screen tablet PC as Microsoft waits for the tablet fad to pass:
    http://technolog.msnbc.msn.com/_news/2011/03/29/6367167-acers-dual-screen-tablet-behaves-like-a-laptop

    The first commercially viable nanogenerator, a flexible chip turning body movement into power, is shown to the American Chemical Society:
    http://portal.acs.org/portal/PublicWebSite/pressroom/newsreleases/CNBP_026949

    And finally. Atomic wristwatches go out of kilter all over Japan. Not from radiation; the radio sync transmitter is 16 km from the Daiichi plant:
    http://www.newscientist.com/blogs/onepercent/2011/03/atomic-clocks-go-dark-in-japan.html

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    28 March 2011

    Guy Burgess

    The software patent affair

    Law firm Chapman Tripp has published an article criticising the Government’s decision to exclude software from patentability. While the article makes some valid points, it does not treat others fairly.

    The article claims:

    The [software patent] exclusion was the product of intense and successful lobbying by members of the “free and open source” software movement… In its April 2010 report to Parliament on the Patents Bill, the Commerce Select Committee acknowledged that the free software movement had convinced it that computer programs should be excluded from patentability.

    I’m sure this assertion of mighty lobbying power (the ability to sway an all-party, unanimous recommendation no less) would be flattering to any professional lobbyist, let alone FOSS supporters – if only it were true (it is not evidenced in the Commerce Committee report). A range of entities made submissions against software patents, including the statutorily independent University of Otago, InternetNZ, and a number of small businesses (and my independent self, I modestly add). There were also submissions the other way, though interestingly most of the submissions in favour of retaining software patents came from patent attorney firms. It is also notable that other organisations, including NZICT, which is a strong supporter of software patents and engaged in heavy after-the-event lobbying, did not make any submissions on the issue.

    The article adds the comment:

    The Committee said that “software patents can stifle innovation and competition, and can be granted for trivial or existing techniques”. The Committee provided no analysis or data to support that proposition.

    The fact that a Committee “provided no analysis or data” to support its recommendations is hardly noteworthy – that is not its job. Submitters provide analysis and data to the Committee, not the other way around. The material in support of the proposition is in the submissions.

    The article sets up an unfair straw-man argument:

    Free software proponents reckon that software should be free and, as a result, they generally oppose intellectual property rights. They say that IP rights lock away creativity and technology behind pay-walls which smother innovation. Most authors, inventors and entrepreneurs take the opposite view.

    I don’t claim to know what “free software proponents” think about all manner of IP rights, but when it comes to software patents in New Zealand, the evidence strongly suggests that the “authors, inventors and entrepreneurs” of software (FOSS or not) are opposed to software patents (see my posts here and here). This includes major companies, among them NZ’s biggest software exporter Orion Health (see Orion Health backs moves to block patents).

    While the New Zealand Computer Society poll showing 81% member support for the exclusion is not scientific, it is at least indicative. In any case, opponents of the new law (mainly law firms) have consistently asserted a high level of opposition to the exclusion without any evidence to support that view.

    The article leads to the warning:

    If New Zealand enacted an outright ban on computer-implemented inventions we would be breaking international law. … Article 27(1) of TRIPs says that WTO members must make patents available for inventions “without discrimination as to… the field of technology…”.

    The authors rightly point out that breaching TRIPs could result in legal action against the Government by another country. However, that conclusion is premised on the basis that software is an “invention”. A number of processes and outcomes are not recognised as inventions for the purpose of patent law in different countries, including mathematical algorithms and business methods. The question of whether software is (or should be) an invention was commented on by a Comptroller-General of the UK Patent Office:

    Some have argued that the TRIPS agreement requires us to grant patents for software because it says “patents shall be available for any inventions … in all fields of technology, provided they are … capable of industrial application”. However, it depends on how you interpret these words.

    Is a piece of pure software an invention? European law says it isn’t.

    The New Zealand Bill does not say that a computer program is an invention that is not patentable. It says, quite differently, that a computer program is “not a patentable invention”, along with human beings, surgical methods, etc.

    Article 27 has reportedly rarely been tested (twice in 17 years), and never in relation to software. The risk of possibly receiving a complaint under a provision (untested) of a multilateral agreement is not new. The New Zealand Law Society notes this in its submission on the Patents Bill (which does not address software patents):

    The proposal to exclude plant varieties under [the new Act] is because New Zealand has been in technical breach of the 1978 Union for the Protection of New Varieties of Plants (UPOV) treaty since it acceded to it in November 1981.

    What’s 30 years of technical breach between friends? Therefore, in fairness I would add a “third way” of dealing with the software patent exclusion: leave it as it is, and see how it goes (which is, after all, what the local industry appears to want). As I wrote last year, “Pressure to conform with international norms (if one emerges) and trading partner requirements may force a change down the track, but the New Zealand decision was born of widely supported policy …”

    If the ban on software patents as it currently stands does not make it into law (which is a possibility, despite clear statements from the Minister of Commerce that it will), it won’t be the end of the world. In fact, it will be the status quo. There are pros and cons to software patents, and the authors are quite right that New Zealand will be going out on a limb by excluding them. The law can be changed again if need be. In the meantime, I refer again (unashamed self-cite) to my article covering the other, and much more popular, ways of protecting and commercialising software.

    27 March 2011

    Vik Olliver

    28-Mar-2011 AM clippings

    Enjoy:

    Microsoft seeks US state laws requiring customers of companies that use pirated software to pay the penalty (proprietary s/w only):
    http://www.groklaw.net/article.php?story=2011032316585825

    Rumours abound that Amazon is about to release its own Android tablet, and ebook makers start to turn to Android too:
    http://gigaom.com/mobile/how-e-books-are-coming-full-circle-thanks-to-tablets/

    An attacker broke into the Comodo Registration Authority (RA) based in Southern Europe and issued fraudulent SSL certificates:
    http://threatpost.com/en_us/blogs/phony-web-certificates-issued-google-yahoo-skype-others-032311

    The first processor is printed on a sheet of plastic. Well, two sheets: one for the CPU, one for the code:
    http://www.technologyreview.com/computing/37126/?p1=A2

    And finally. What do you do if a grizzly attacks you while you're stoned? Why, you claim ACC of course:
    http://www.brobible.com/bronews/montana-man-gets-workers-comp-for-getting-mauled-by-bear-after-smoking-pot

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    22 March 2011

    Vik Olliver

    23-Mar-2011 AM clippings

    Enjoy:

    The UK prepares to introduce a national IP blocking system in the name of preventing piracy of movies and music. For now:
    http://torrentfreak.com/100-domains-on-movie-and-music-industry-website-blocking-wishlist-110322/

    A look at the new features in the recently released Firefox 4 web browser:
    http://www.businessinsider.com/new-features-in-firefox-4-2011-3

    A new 3D nanostructure for lithium and NiMH batteries allows very rapid charging - an electric vehicle in 5 mins if you have the amps:
    http://news.illinois.edu/news/11/0321batteries_PaulBraun.html

    Quantum computing should hit the magic 10 qubit level this year. That's where it starts to surpass some standard computing techniques:
    http://www.bbc.co.uk/news/science-environment-12811199

    A discussion with a scientist who is actually building things atom by atom and his take on when it will be possible to make machines:
    http://nextbigfuture.com/2011/03/philip-moriarty-discusses.html

    And finally. A facebook app that peels the clothes off people in your friends' pictures. Works with guys, girls and sad people:
    http://www.thesmokingjacket.com/humor/falseflesh-facebook-app

    Vik :v) Diamond Age Solutions Ltd. http://diamondage.co.nz

    06 October 2010

    Andrew Caudwell

    New Zealand Open Source Awards

    I discovered today that Gource is a finalist in the Contributor category for the NZOSA awards. Exciting stuff! A full list of nominations is here.

    I’m currently taking a working holiday to make some progress on a short film presentation of Gource for the Onward! conference.

    Update: here’s the video presented at Onward!:

    Craig Anslow presented the video on my behalf (thanks again Craig!), and we did a short Q/A over Skype afterwards. The music in the video is Aksjomat przemijania (Axiom of going by) by Dieter Werner. I suggest checking out his other work!

    16 August 2010

    Glynn Foster

    World’s First Pavlova Western

    Many months ago, I was lucky to be involved in the shooting of a western film – more appropriately, the world’s first Pavlova Western. Most people will be familiar with the concept of a Spaghetti Western, but now Mike Wallis (my brother in law) and his fiancée Inge Rademeyer from Mi Films have extended that concept to New Zealand.

    They are currently in post-production mode bringing all the pieces together, including an incredible music score from John Psathas (recently awarded Officer of the New Zealand Order of Merit for his Athens Olympics work). Jamie Selkirk (who received an Academy Award for his work on the Lord of the Rings trilogy) has also come on board to give them financial support to put the film through the final stages at Weta’s Park Road Post Production studios.

    And to top it all off, last week they appeared on TV One’s Close Up. Check out the following video –

    [http://www.youtube.com/v/Bhx9NiG9uhs]

    You can check their progress on the Facebook Pavlova Western group and the Pavlova Western blog.

    15 July 2010

    Guy Burgess

    Software patents to remain excluded

    The Government has cleared up the recent uncertainty about software patent reform by confirming that the proposed exclusion of software patents will proceed. A press release from Commerce Minister Simon Power said:

    “My decision follows a meeting with the chair of the Commerce Committee where it was agreed that a further amendment to the bill is neither necessary nor desirable.”

    During its consideration of the bill, the committee received many submissions opposing the granting of patents for computer programs on the grounds it would stifle innovation and restrict competition… The committee and the Minister accept this position.

    Barring any last-minute flip-flop – which is most unlikely given the Minister’s unequivocal statement – s15 of the new Patents Act, once passed, will read:

    15(3A) A computer program is not a patentable invention.

    Lobbying

    It is clear that the lobbying by pro-software patent industry group NZICT was unsuccessful, although Computerworld reports that its CEO apparently still holds out hope that “[IPONZ] will clarify the situation and bring this country’s law into line with the position in Europe and the UK, where software patents have been granted”. Hope does indeed spring eternal: the exclusion is clear and leaves no room for IPONZ to “clarify” it to permit software patents (embedded software is quite different – see below).

    As I wrote earlier, it remains a mystery as to why NZICT, a professional and funded body, failed to make a single submission on the Patents Act reform process – they only had 8 years to do so – but instead engaged in private lobbying after the unanimous Select Committee decision had been made. It also did not (and still does not) have a policy paper on the subject, nor did it mention software patents once in its 17 November 2009 submission on “New Zealand’s research, science and technology priorities”. It is not as though the software patent issue had not been signalled – it was raised in the very first document in 2002. Despite this silence, it claims that software patents are actually critical to the IT industry it says it represents.

    The New Zealand Computer Society, on the other hand, did put in a submission and has articulated a clear and balanced view representing the broader ICT community. It said today that “we believe this is great news for software innovation in New Zealand”.

    Left vs right?

    Is there a political angle to this? While some debate has presumed an open-vs-proprietary angle (a false premise) some I have chatted with have seen it as a left-vs-right issue, something Stephen Bell also alluded to (in a different context) in this interesting article.

    Thankfully, it appears not. The revised Patents Bill was unanimously supported by the Commerce Committee, comprising members of the National, Labour, Act, Green and Maori parties. It reported to Commerce Minister Simon Power (National) and Associate Minister Rodney Hide (Act). Unlike the previous Government’s Copyright Act reform, post-committee industry lobbying has not turned the Government.

    What about business? NZICT apart, the exclusion of software patents has received the wide support of the New Zealand ICT industry, including (publicly) leading software exporters Orion Health and Jade, which as Paul Matthews notes represent around 50% of New Zealand’s software exports. The overwhelming majority of NZCS members support the change. Internationally, many venture capitalists and other non-bleeding-heart-liberal types have spoken out against software patents, on business grounds.

    Some pro-software patent business owners might be miffed at a perceived lack of support from National or Act, perhaps assuming that software patents are a “right” and are valuable for their businesses. The reality is that only a handful of New Zealand companies have New Zealand software patents (I did see a figure quoted somewhere – will try to find it). Yes, they can be valuable if you have them, but that is a separate issue (and remember, under the new Act no one loses existing patents). A capitalist, free market economy (and the less restrictive the better) abhors monopolies, and this decision benefits the majority of businesses in New Zealand. Strong IP protection – including patents – is essential in modern society (see my article “Protecting IP in a post-software patent environment”), but the extent of statutory protection, when being reviewed, will always come down to a perceived balance, not just for the minority holders of a patent (a private monopoly) but for the much larger majority artificially prevented from competing and innovating by that monopoly.

    I have always taken pains to note, like NZCS, that there are pros and cons to software patents. And I am a fan of patents generally. Patents are good! But for software patents, the cons outweigh the pros. There are sound business reasons to exclude them. This specific part of the reform targets one specific area, has unanimous political party support (how rare is that?), and wide local business support. The last thing it can be seen as is an anti-business, left-wing policy (if it was, I’d have to oppose it!)

    Embedded software

    Inventions containing embedded software will remain, rightly, not excluded under the Patents Bill. Minister Power confirmed that IPONZ will develop guidelines for embedded software, which hopefully will set some clear parameters for applicants.

    Software is essential to many inventions, and while that software itself will not be patentable, the invention it is a component of still may be. Some difficult conceptual issues can arise, but in most cases I don’t expect they will. This “exception” (if it can be described as such) will not undermine the general exclusion for software patents.

    11 May 2010

    Guy Burgess

    Open source in government tenders

    Computerworld reports:

    A requirement that a component of a government IT tender be open-source has sparked debate on whether such a specification is appropriate.

    The relevant part of the RFP (for the State Services Commission) puts the requirement as follows:

    We are looking for an Open Source solution. By Open Source we mean:

    • Produce standards-compliant output;
    • Be documented and maintainable into the future by suitable developers;
    • Be vendor-independent, able to be migrated if needed;
    • Contain full source code. The right to review and modify this as needed shall be available to the SSC and its appointed contractors.

    The controversy is over whether this is a mandate of open source licensing (it isn’t). The government should not mandate either open source or proprietary licensing in tenders run along commercial lines. More precisely, it should not rule solutions in or out based on whether they are offered (to others) under an open source licence. The best options should be on the table.

    The four stated requirements are quite sensible. As the SSC spokesman said, there is nothing particularly unusual about them in government procurement. These requirements (or variations on them) are similarly common in private-sector procurement and development contracts. In the public sector in particular though, vendor independence and standards-compliance help avoid farcical situations like the renegotiation of the Ministry of Health’s bulk licensing deal.

    Open standards and interoperability in public sector procurement are gaining traction around the world. Recently, the European Union called for “the introduction of open standards and interoperability in government procurement of IT”. And in the recent UK election, all three of the main parties included open source procurement in their manifestos.

    So why the controversy in this case? Most likely it’s the perhaps inapt use of the term “open source” in the RFP (even though the intended meaning is clarified immediately afterwards). “Open source” is a hot-button term that means many things to many people, but today it generally means having code licensed under a recognised open source licence, many of which are copyleft. Many vendors simply could not (or would never want to) license their code under such a licence, and it would be uncommercial and somewhat capricious for a Government tender to rule out some (or even the majority of) candidates on that basis.

    However, it is clear that the SSC did not use the term in that context, and does not intend to impose such a requirement. An appropriate source-available licence is as capable of meeting the requirements as an open source licence (see my post on source available vs open source). The requirement for disclosure of code to contractors and future modification can be simply dealt with on standard commercial IP licensing terms.

    A level playing field for open and proprietary solutions is the essential starting point, with evaluation – which in most cases should include open standards and interoperability – proceeding from there.

    25 January 2010

    Glynn Foster

    http://www.internetblackout.com.au/

    While I catch a breath and write up some of my experiences of LCA2010 last week, the Australians are in full gear for their Great Australian Internet Blackout Campaign.

    From their website –

    What’s the problem?

    The Federal Government is pushing forward with a plan to force Internet Service Providers to censor the Internet for all Australians. This plan will waste millions of dollars and won’t make anyone safer.

    1. It won’t protect children: The filter isn’t a “cyber safety” measure to stop kids seeing inappropriate content such as R and X rated websites. It is not even designed to prevent the spread of illegal material where it is most often found (chat rooms, peer-to-peer file sharing).
    2. We will all pay for this ineffective solution: Under this policy, ISPs will be forced to charge more for consumer and business broadband. Several hundred thousand dollars has already been spent to test the filter – without considering high-speed services such as the National Broadband Network!
    3. A dangerous precedent: We stand to join a small club of countries which impose centralised Internet censorship such as China, Iran and Saudi Arabia. The secret blacklist may be limited to “Refused Classification” content for now, but what might a future Australian Government choose to block?

    Help turn the lights out on the proposed Internet filter by joining the Great Australian Internet Blackout.

    New Zealand was supported worldwide during its campaign against Section 92A – it’s time to support our cousins in the west.

    14 January 2010

    Glynn Foster

    Come to LCA2010 Open Day

    We’ve got a great line up for LCA Open Day! Check out our great posters and pass them around your work, university, community group or government department!

    01 June 2009

    Gavin Treadgold

    Software for Disasters

    This is the original text I submitted to The Box feature on Disaster Tech on Tuesday the 2nd of June, 2009. It is archived here for my records. It also includes some additional content that didn’t make it to the print edition.

    On December 26, 2004, the Boxing Day tsunami killed over 35,000 people and displaced over half a million in Sri Lanka alone. A massive humanitarian crisis played out in numerous other countries also affected by the magnitude 9+ Great Sumatra-Andaman earthquake and resulting tsunami. Within days it became apparent that an information system was needed to manage the massive amounts of information being generated about who was doing what, and where – at one point there were approximately 1,100 registered NGOs operating in Sri Lanka.

    A group of Sri Lankan IT professionals decided that a system needed to be built to better manage the information, as they couldn’t find any existing free solutions that could be quickly deployed. Free was critical, as they couldn’t afford any commercial solutions.

    Sahana was implemented within a week by around four hundred IT volunteers, and it was named after the Sinhalese word for relief. Initially it provided tools for tracking missing persons, organisations involved in response, locations and details of camps set up in response to the tsunami, and a means of accepting requests for resources such as food, water and medicine.

    Following the tsunami, the Swedish International Development Agency provided funding to take the lessons learnt from writing and deploying software during a disaster, rebuild Sahana from the ground up, and release it as free and open source software to the world. After all, Sri Lanka had needed an open and available system to manage disaster information; surely other countries should benefit from that experience?

    Since 2005, Sahana has been officially deployed to earthquakes in Pakistan, Indonesia, China and Peru; a mudslide in the Philippines; and has been deployed in New York City as a preparedness measure to help manage storm evacuations.

    Being free and open source software has been critical to Sahana’s success. The more accessible a system is, the more likely it is to be adopted, used and improved. Even in developed countries, many disaster agencies are poorly funded and often cannot justify significant expenditure on systems – commercial systems are too expensive. With pressure being applied to many public budgets, the significance of this is even greater now. Perhaps the greatest benefit of applying open source approaches is that it encourages a collaborative and communal approach to improving the system. As more countries with experience in disaster management contribute to its development, this will also act as a form of expertise transfer to countries that may not have as much experience with disasters.

    Following Hurricane Katrina, nearly 50 websites were created to track missing and displaced persons – all using different systems, all collecting duplicate information, and few of them sharing. Many of the potential benefits of the technology were lost to a lack of co-ordination and massive replication of data. Tools such as Sahana are more efficient because they can be deployed faster than solutions developed only after an event occurs.

    Normally, management involves a ‘leisurely’ process of collecting as much information as possible before deciding what actions should be taken. Immediately following a disaster the situation is the complete opposite: decisions have to be made, sometimes with little or no information and no time to gather it.

    A key benefit that IT can provide is in linking silos of information held by different organisations – everyone has a better shared picture of what has happened, what is occurring now, and what is planned.

    Software, however, is just one aspect. There is a need for open data (such as maps and statistics) and standards to ensure that the multitude of systems can connect to each other and share information.

    The most important aspect is having the relationships between organisations set up in advance of a disaster. This results in organisations having the confidence to connect their systems and share information. Without shared information the rest of the system will lose many potential benefits that IT can bring to disaster management.

    Often, little or no information is available to support decision-making – emergency managers are forced to make complex decisions without having the luxury of all the required information.

    A disaster can produce a massive number of tasks requiring hundreds of organisations and thousands of people to co-ordinate activity – meaning that there will always be some prioritisation needed. What should be done first? What can wait until later? How should an impacted community prioritise response and recovery with limited resources?

    The benefits are not limited to agencies and NGOs. The next evolutionary step will be to adopt an approach called ‘crowd sourcing’, whereby members of the community are provided with tools to interact with each other and with emergency managers.

    This may be achieved with applications that run on mobile phones, linking people and even submitting information from the field directly to Sahana servers. Imagine a passerby taking a georeferenced photo of disaster damage and, if communications networks are working, sending it directly to the system emergency managers are using to manage the event. There are a number of efforts underway looking at how social networks and websites such as Facebook and Twitter can be utilised during a disaster.
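
    To make the idea concrete, here is a minimal sketch of what such a field submission might look like. The server address, endpoint and field names are purely illustrative assumptions for this article, not Sahana's actual API:

    # Hypothetical sketch: submitting a georeferenced field report, with an
    # attached photo, over HTTP. The endpoint and field names are made up
    # for illustration only.
    import requests

    def submit_field_report(server_url, latitude, longitude, description, photo_path):
        """Send a georeferenced incident report and photo to the server."""
        report = {"lat": latitude, "lon": longitude, "description": description}
        with open(photo_path, "rb") as photo:
            response = requests.post(
                server_url + "/reports",   # illustrative endpoint
                data=report,
                files={"photo": photo},
                timeout=30,                # networks may be slow or flaky after a disaster
            )
        response.raise_for_status()
        return response.status_code

    if __name__ == "__main__":
        submit_field_report(
            "https://example.org/sahana",  # placeholder server address
            latitude=-43.53, longitude=172.63,
            description="Partially collapsed bridge, road impassable",
            photo_path="bridge.jpg",
        )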

    Disaster IT is really a force multiplier. It won’t usually save lives, but it will allow a better shared understanding of the problems, and will lead to more effective and co-ordinated response. It allows those responding to an event, whether an organisation or individual, to quickly access information and better inform decision-making. This can lead to less suffering and a quicker recovery for affected communities.

    Design for Disaster
    Computer systems can be fragile by design – they are especially reliant upon power and communications. If either is lost during a disaster, the value of a system can quickly be lost too, unless it has been designed to operate in adverse environments. Here are some design decisions that are very important for disaster applications:

    • Low bandwidth – we’ve all become accustomed to sucking bandwidth through massive broadband pipes, but during a disaster network connectivity for emergency managers may be limited to dialup speeds over satellite or digital radio connections. Disaster software needs to be designed for very efficient transfer of information, and should never assume vast quantities of bandwidth are available. At the extreme, some information may even be transferred by SMS or USB memory stick.
    • Intermittent connectivity – during a disaster communications will likely fail multiple times before they are finally restored. This means that most ‘software as a service’ or web applications on the Internet will be of little use to emergency managers. Disaster software needs to be stored and run locally, and be able to work without a connection to the Internet.
    • Synchronisation – one of the best techniques for designing around low bandwidth and intermittent connectivity is to have systems synchronise information with each other whenever communications are available. When communications later fail, both systems will have a copy of the same data and can access it locally until communications are restored (a minimal sketch of this idea follows the list).
    • Low power – power can, and will, fail during a disaster, so disaster software needs to be designed to run on low-power devices. Laptops and notebooks are good targets as they are self-contained, have built-in batteries, and can be charged from solar cells or generators. Large, power-hungry servers can be difficult to move and support in a disaster environment.
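
    As promised above, here is a minimal sketch of the synchronisation idea: two nodes each hold a local copy of the data and reconcile whenever a connection is available, keeping the most recently updated version of each record. It assumes a simple last-write-wins strategy and an illustrative data model, not Sahana's actual synchronisation protocol:

    # Minimal sketch of offline-friendly synchronisation. Each store maps
    # record_id -> {"updated_at": <logical clock>, "data": ...}; the data
    # model and conflict rule are illustrative assumptions only.

    def merge(local, remote):
        """Last-write-wins merge: keep the newest version of every record."""
        merged = dict(local)
        for record_id, record in remote.items():
            existing = merged.get(record_id)
            if existing is None or record["updated_at"] > existing["updated_at"]:
                merged[record_id] = record
        return merged

    def synchronise(node_a, node_b):
        """When a connection is available, bring both nodes to the same state."""
        combined = merge(node_a, node_b)
        node_a.clear(); node_a.update(combined)
        node_b.clear(); node_b.update(combined)

    # Example: a field laptop and a headquarters server diverge while offline,
    # then reconcile once communications are restored.
    field = {"camp-1": {"updated_at": 10, "data": "120 people, needs water"}}
    hq    = {"camp-1": {"updated_at": 5,  "data": "100 people"},
             "camp-2": {"updated_at": 7,  "data": "40 people, needs tents"}}
    synchronise(field, hq)
    assert field == hq and field["camp-1"]["data"] == "120 people, needs water"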

    How I became involved

    One might ask how a Kiwi became involved in Sahana. Ever since training as a Civil Defence volunteer in the late 90s, I have had an interest in how information technology could be used to improve disaster management. The tsunami in 2004 acted as the catalyst for Sri Lankan computer programmers to produce Sahana, and I have been volunteering with the project since 2005. In September 2005 I helped facilitate a workshop in Colombo that formed the basis for the current version of Sahana, and in March this year I attended a Sahana conference and Board meeting in Sri Lanka. At the Board meeting the existing ‘owner’ of Sahana – the Lanka Software Foundation – agreed to hand the project over to the open source community. I am a member of the transition Board that is in the process of forming an international non-profit foundation that can accept financial donations and act as the ‘custodian’ of Sahana.

    How you can help

    There are numerous ways you can help Sahana. Once the foundation is registered, we will be able to accept financial donations that will be used to fund development. In the meantime, we are looking for open source programmers with web development skills (including mapping). If you’re not a programmer, we are always looking for translators who can convert the English text and documentation into many different languages. Perhaps most importantly, we are looking for experienced emergency managers to help provide design advice to the Sahana community and guide the developers.

    26 April 2009

    Gavin Treadgold

    Google investing USD$50,000 in Sahana

    Well, it has been a lot of work for the admins, the mentors, and the students, but it has paid off. Sahana has been awarded 10 projects in the 2009 Google Summer of Code. We have some great projects lined up! They include:

    • Person Registry for Sahana
    • Warehouse Management
    • Disaster Victim Identification
    • J2ME clients for form data collection in the field
    • Optical Character Recognition for scanning forms
    • Peer to peer synchronisation of Sahana servers
    • CAP Aggregation and Firefox CAP plugin
    • CAP Editing and Publishing
    • Mashup/Aggregation Dashboard
    • Theme Manager

    Having been neck deep in the process – working with others to set up our assessment process, coming up with ideas (I’m stoked to have two students working on CAP ideas that came out of my earlier suggestion), and reviewing each and every one of the 45 proposals we received – it has been exciting to get so many projects accepted.

    I think that by the end of the year, we are going to have some great new functionality available in Sahana. Even more, I hope we’ll attract more open source developers to our ever growing community!

    25 March 2009

    Gavin Treadgold

    Sahana – a catalyst to widespread EMIS deployment

    I’ve just uploaded the presentation I gave on Sahana at the Sahana 2009 Conference in Colombo, Sri Lanka on the 25th of March, 2009. I’ll put a link up to the associated paper soon as well.