Thursday, December 23, 2010

What I learnt from the Gawker hack

Over the years I've gradually increased my online security: better passwords, using SSH, VPNs and SSL, keeping anti-virus and other software up to date, not using strange computers, typing random junk into 'security questions', and so on. Even with all that I'm paranoid about online security.

But what I've learnt from the Gawker hack and from breaking people's passwords is that lots of people aren't. In fact, even well-known people who should know better pick bad passwords. A lot of the passwords I've seen are so poor that hackers could break the passwords of well-known people just by guessing. It's no wonder that people like Sarah Palin get hacked.

For example, I looked at the passwords of journalists (senior editors or high-profile technology writers). Many of these were single words, all in lowercase; some used the name of the publication they were writing for, others the name of a family member.

This sort of poor security means that hacks like the Gawker one are completely unnecessary. Hackers can just sit back and guess a password based on a little research.

In other cases, the passwords were just a single English word written in lowercase. To defend against people guessing those words many sites limit the number of failed login attempts. But there's a flaw in that: since many people use the same password across multiple sites, a smart hacker can spread guesses over different sites and fly below the radar.

For example, there are users who had the same password on Gawker, Twitter, Facebook, etc. Suppose your target's password is in the top 3,000 words in English (scrubbed of words longer than 8 or shorter than 6 characters). Now suppose you know they have accounts on 6 sites. Picking randomly from the list you'd expect to get their password in 1,500 guesses, or 250 per site.

If you allowed yourself three guesses per site per day it would take around 80 days to crack their password. Of course, the more sites the person uses the same password on, the quicker it's crackable. And any site that allows many guesses would make the process quicker still.
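Here's that arithmetic as a quick sketch (a toy model I'm adding for illustration; it assumes guesses are drawn uniformly from the word list):

words = 3000              # size of the candidate password list
sites = 6                 # accounts known to share one password
per_site_per_day = 3      # guesses that stay under each site's rate limit

expected_guesses = words / 2.0            # uniform pick: ~1,500 on average
print(expected_guesses / sites)           # ~250 guesses per site
print(expected_guesses / (sites * per_site_per_day))  # ~83 days, the rough 80 above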

That's yet another reason to use different passwords, and don't use something that a hacker can find out with a simple Google search.

Wednesday, December 22, 2010

Why do Christmas lights all go out when one bulb blows? (and how to find the broken one)

The answer is rather simple: traditional Christmas lights (I'm ignoring newfangled LED varieties) were typically connected directly to the mains power supply and wired in series like this:


Only if the filaments of all the bulbs are intact will a current flow around the circuit; if one bulb breaks then the circuit is broken and all the lights go out. The bulbs are wired in this inconvenient manner because it's convenient for the manufacturer.

Although the supply voltage is 230v (or 110v) the bulbs are rated for a much lower voltage. At home I have a string of 20 lights like this with 12v bulbs. This works because of the rules of series circuits. In my home lights there are 20 bulbs each with some unknown resistance R. The total resistance of the circuit is 20R and the entire circuit is a sort of voltage divider.

The current flowing through the entire circuit is I = 230/(20R) and the voltage across any individual lamp is V = R * 230/(20R) = 230/20. So my 20 bulbs are each getting 11.5v. That's handy for the manufacturer because they can use cheap, small bulbs that run at a low voltage.
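Here's that arithmetic as a couple of lines of Python (a sketch; note that the unknown bulb resistance R cancels out, so we never need its value):

# n identical bulbs in series: I = V_supply / (n * R), and each bulb
# sees V = I * R = V_supply / n, independent of R.
supply = 230.0   # volts
bulbs = 20
print(supply / bulbs)   # 11.5 volts per bulb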

BTW Some bulbs have a second piece of wire called a shunt that passes current when the filament breaks. With a shunt the manufacturer can still use series wiring and cheap bulbs, but a blown bulb doesn't stop all the lights from working.

Finding the broken bulb

A really fast way to find which bulb is broken is to perform a binary chop. To do that you need a multimeter (or similar meter to test continuity).

0. Unplug the string of lights from the power.

1. Remove the first and last bulbs and check that they are ok.

2. Remove the bulb in the centre of the string of lights. Using the multimeter, check whether there's an electrical connection between the contacts in the centre bulb's socket and each of the end sockets you removed bulbs from (you can look at the wiring to see which way the wires run and which contact corresponds to which end).

3. Pick the half where there's no connection; the broken bulb is there. Remove the bulb in the middle of that half of the string and check it. If it's ok, proceed to checking the electrical connection between the socket of the bulb you just removed and the two nearest empty sockets (which will be the middle of the string and one end).

4. Proceed like that following where there's no electrical connection and dividing in half until you find the broken bulb.

This is binary search, a standard technique from computer science, and it will find the broken bulb much faster (on average) than checking each bulb in turn; see the sketch below.
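Here's the same procedure as code, a sketch assuming exactly one broken bulb, where connected(a, b) stands in for the multimeter's continuity test between sockets a and b:

def find_broken(connected, n):
    """Binary chop for the single broken bulb in sockets 0..n-1.
    connected(a, b) is the multimeter test: True if and only if
    every bulb in sockets a..b (inclusive) is intact."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if connected(lo, mid):   # left half intact: break is to the right
            lo = mid + 1
        else:                    # break is in the left half
            hi = mid
    return lo

# Simulate a string of 20 bulbs with bulb 13 blown:
print(find_broken(lambda a, b: not (a <= 13 <= b), 20))   # -> 13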

My password generator code

Some people have asked me about the code for my password generator. Here it is:

use strict;
use warnings;

use Crypt::Random qw(makerandom_itv);
use HTML::Entities;

print "<pre>\n ";
print join( ' ', ('A'..'Z') );
print "\n +-", '--' x 25, "\n";

foreach my $x ('A'..'Z') {
    print "$x|";
    foreach my $y (0..25) {
        print encode_entities(
            chr( makerandom_itv( Strength => 1,
                                 Uniform  => 1,
                                 Lower    => ord('!'),
                                 Upper    => ord('~') ) ) ), ' ';
    }
    print "\n";
}
print '</pre>';

Monday, December 20, 2010

Royal Festival Hall conundrum

When I went to record Shift Run Stop at the Royal Festival Hall a few weeks ago I noticed that the display on the 5th floor lift was not showing 5 but a bit pattern. I snapped a quick photo and decided to look into it later:


And here's a close up of the top of it.


If you look carefully you'll see that there are 8 columns of on or off squares. I transcribed the squares with on = 1 and off = 0 to get the following list: 11111111 11000100 11011000 11101100 00000000 00010100 00101000 01001110 01110100 10001000 10011100 10110000 11000100 11011000 11101100 00000000 00010100 00101000 00111100 01010000 01100100 01111000 10001100 10100000 10110100 11001000 11011100 11110000 00000100 00011000 00101100 01000000 01010100 01101110 10001110 10110100 11001000 11101110 00000010 00010110 00101010 01010000 01111000 10001100 10110010 11001110 11101100 00000000 00100110 00111010 01100000 10000110 10011010 11000000 11010100 11111010 00100000 01001100 01101100 10000000 10010100 10101000 10111100 11010000 11100100 11111000 00001100 00110110.

Apart from the first item, which is all 1s, all the others have a right-most bit of zero. At first I thought this might be 7-bit ASCII (LSB first), but decoding that just gives a mess. Then I wondered if it was machine code, but I think that's unlikely given that one of the bits is always zero. I don't think this is random data.

Here it is as hex with LSB on the right.

ff c4 d8 ec 00 14 28 4e 74 88 9c b0 c4 d8 ec 00 14
28 3c 50 64 78 8c a0 b4 c8 dc f0 04 18 2c 40 54 6e
8e b4 c8 ee 02 16 2a 50 78 8c b2 ce ec 00 26 3a 60
86 9a c0 d4 fa 20 4c 6c 80 94 a8 bc d0 e4 f8 0c 36

And reversed:

ff 23 1b 37 00 28 14 72 2e 11 39 0d 23 1b 37 00 28
14 3c 0a 26 1e 31 05 2d 13 3b 0f 20 18 34 02 2a 76
71 2d 13 77 40 68 54 0a 1e 31 4d 73 37 00 64 5c 06
61 59 03 2b 5f 04 32 36 01 29 15 3d 0b 27 1f 30 6c
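Here's a sketch of that transcription step in Python, using the first few bytes from the list above (the rest work the same way):

# Convert the transcribed on/off squares to hex: first as transcribed
# (MSB on the left), then with each byte's bits reversed.
rows = "11111111 11000100 11011000 11101100".split()
print(" ".join("%02x" % int(r, 2) for r in rows))        # ff c4 d8 ec
print(" ".join("%02x" % int(r[::-1], 2) for r in rows))  # ff 23 1b 37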

So, what could it be? I'm assuming that the display is showing something from either its internal memory or from the memory of its controller and that we are looking at consecutive memory locations (this could also be incorrect).

Anyone else want to take a stab at this? Anyone know what company made the controller for the display or the lift?

The other thing that's odd is that there are lots of monotonically increasing sequences in the data, e.g. drop the ff and observe:

c4 d8 ec
00 14 28 4e 74 88 9c b0 c4 d8 ec
00 14 28 3c 50 64 78 8c a0 b4 c8 dc f0
04 18 2c 40 54 6e 8e b4 c8 ee
02 16 2a 50 78 8c b2 ce ec
00 26 3a 60 86 9a c0 d4 fa
20 4c
6c 80
94 a8 bc d0 e4 f8
0c 36

Friday, December 17, 2010

Write your passwords down

Here's my advice on password security based on the collected opinions of others:

1. Write them down and keep them in your wallet because you are good at securing your wallet. (ref)

2. Use different passwords on every web site because if you don't one site hacked = all your accounts hacked. (ref)

3. Use passwords of at least 12 characters. (ref)

4. Use mixed-case, numbers and special characters. (ref)

Research says you need 80 bits of entropy in your password, so it needs to be long, chosen from a wide range of characters and chosen randomly. My scheme gives me 104 bits of entropy.
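The 104 bits comes from reading 16 characters, each drawn uniformly from the 94 printable ASCII characters between ! and ~. A quick check of the arithmetic:

import math

chars = ord('~') - ord('!') + 1    # 94 printable ASCII characters
length = 16                        # characters read off the sheet
print(length * math.log2(chars))   # ~104.87 bits of entropy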

My passwords are generated using a little program I wrote that chooses random characters (using a cryptographically secure random number generator) and prints them out as a tabula recta. If you were to steal my wallet you would find a sheet of paper that looks like this in it (I have a second copy of the sheet left with a friend in an envelope):


I use that sheet as follows. If I'm logging into Amazon I'll find the intersection of column M and row A (the second and third letters of Amazon) and then read off 16 characters diagonally. That would be my Amazon password (in this case, TZ'k}T'p39m-Y>4d; when I hit the edge of the paper I just follow the edge).

The security of this system rests on the randomness of the generated characters and the piece of paper.

PS Yes, it's a total pain to use long, random, different passwords.

PPS If it's not obvious to people you can add a second factor to this (something only you know) in the form of the algorithm for picking the password from the sheet. For example, instead of using the second and third characters from the site name you could pick any combination. And you could change the letters as well (e.g. for Amazon you could use the last two letters moved on one place in the alphabet; you'd have PO as the key). Also you don't have to read diagonally but could use any scheme that works for you (e.g. a spiral pattern, read vertically, read characters at offsets from the start based on the Fibonacci sequence, etc.).

Thursday, December 16, 2010

Inside the Gawker hack: the .uk domains

The other day I talked about the Gawker hack and I thought it would be interesting to look a little deeper at the .uk domains that are in the file. There are 7,599 accounts with email addresses that have hash values suitable for attacking with John the Ripper.

I've now let it run for 24 hours and have cracked 2,512 of the accounts (which is 1/3). Here are some fun facts based on the cracked passwords.

1. There are two government accounts with Government Secure Intranet email addresses from the Crown Prosecution Service and The Charity Commission with very simple passwords. Plenty of schools and universities are represented, as are ACAS and Tesco. Plus a smattering of people from the NHS.

2. The top ten passwords are 123456, 12345678, password, liverpoo (note that the Gawker system truncates at 8 characters), letmein, arsenal, chelsea, starwars, daniel and qwerty. Clearly, football clubs (Liverpool, Arsenal and Chelsea) are important when cracking UK-based passwords. Further down the list the football theme continues with manchest, manunite and ronaldo.

3. The top ten domains by cracked password are hotmail.co.uk, yahoo.co.uk, live.co.uk, blueyonder.co.uk, tiscali.co.uk, aol.co.uk, o2.co.uk, homecall.co.uk, yahoo.com.uk and zen.co.uk.

4. Journalists seem to be quite bad at picking passwords. There are easily cracked passwords from senior figures (editors) at The Guardian, The Observer, The Times and The Daily Telegraph. Note to hacks: using the name of your paper as a password is probably a bad idea.

5. More worrying are individuals whose email address includes their full name (or who have a custom domain) and whose password is a word likely significant to them. Since they probably think that password is safe they'll likely use it elsewhere. There's a real risk of being able to attack those individuals.

6. There's a senior figure from the Liberal Democrats (not an MP) whose password is an easily guessed word.

Casting the net outside the .uk domains it's possible to find British companies like BP, British Telecom, HSBC, Shell, Barclays, BHP Billiton, Unilever, ... Many have easily cracked passwords.

System administrators would do well to check their own domains, as I did, to make sure their users are not exposed and do a bit of password security education.
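A sketch of that check in Python (full_db.log is the filename from the circulating dump; I haven't fixed on an exact field layout here, so this just looks for the domain anywhere on the line):

# List accounts at your own domain so you can warn those users.
domain = '@example.co.uk'   # substitute your domain
with open('full_db.log', encoding='latin-1', errors='replace') as db:
    hits = [line.strip() for line in db if domain in line]
print('%d accounts found for %s' % (len(hits), domain))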

PS Just in case you think I'm some kind of l33t h4x0r for this, bear in mind that password cracking tools are widely available on the Internet, the complete database is circulating widely and can be found via Google, and running JtR is not hard at all. No uber-skills required.

Wednesday, December 15, 2010

Plan 28 gets some professional PR

Last week I announced that Doron Swade had joined Plan 28. I'm happy to say this week that we're getting some professional help with our announcements (and more) from global PR firm AxiCom. AxiCom handles clients such as Dell, Panasonic, Ericsson, Fujitsu, Logitech, McAfee, Qualcomm, Salesforce.com and more.

And now, on a pro bono basis, they are handling Plan 28. Here's their official blog announcement of their involvement.

Having professional PR is another big boost for the project because it takes a load off my shoulders and AxiCom can reach people and places I simply can't. I expect that their involvement will help Plan 28 enormously. Expect to see more news stories about the project over the coming months and more announcements about additional support for the project.

As always there's lots more going on; once details are finalized I'll announce them. And please remember that Plan 28 still needs your financial support to make it a reality.

Tuesday, December 14, 2010

Don't write to me asking me to support your crusade against global warming science

I've received yet another email indicating that the author thinks I don't believe man is responsible for global warming. This comes about because of an insidious sort of tribalism that has turned conversations about climate change into a "you're either with us or against us" situation.

For the record, my reading of the scientific literature and my own reproductions of Met Office data convince me that (a) the world is warming and (b) the most likely reason for this is man.

Much of the 'debate' about climate change reminds me of the pro-choice/pro-life non-debates in the US. Once you split down what look suspiciously like faith lines you're no longer doing science at all. Many people seem to mistake my criticism of the quality of source code used by UEA's CRU as indication of some underlying belief on my part.

Poppycock.

To be clear, I think the code I saw from CRU was woeful and had many easily identified bugs. I also think that source code used for scientific papers should routinely be made available. And, yes, I did find errors in Met Office software. People who discuss those errors often seem to omit the fact that correcting them reduces the error range for global temperatures, thus increasing the confidence that the temperature trend is up since the 1970s.

I find it very sad that I can't criticize the one area of climate change science I know something about (software) without suddenly being thought of as 'on the side of the skeptics/deniers'. I'm not on anyone's side. I'll call it like I see it.

Shift Run Stop

Some time ago I recorded a long interview with the fine folks at Shift Run Stop. The interview covered all sorts of topics, but focussed on Plan 28 with detours through Kinect hacking, GAGA-1, Tron and The Geek Atlas.

The podcast comes out this Thursday, but here's a sneak preview.

John Graham-Cumming from shiftrunstop on Vimeo.

Monday, December 13, 2010

Many of the Gawker passwords are easily cracked

This morning the hack of Gawker Media (including sites like LifeHacker and Gizmodo) is big news and I grabbed the torrent to make sure that no one in my office had been compromised. Happily there were no causata.com email addresses in that file.

But there were email addresses of people I know. I did a quick check by downloading all my email contacts as a CSV and then doing a grep.
$ cut -d, -f 15 contacts.csv | \
    xargs -I % grep % real_release/database/full_db.log | wc -l
17

So, 17 people I know were in the list. The algorithm used to store the passwords is a DES hash which is quite readily attackable using John The Ripper. So I set it to work on the people I know. (At the same time I emailed them all to tell them).

Within seconds I had the passwords of 3 of the 17 (including the password of one well-known tech personality and one person who was using the password 'password') and within a few minutes another two. I didn't keep a record of the passwords.

If you use any of the Gawker sites change your password; if you use the same password on a different site: STOP NOW (and change all your passwords to something different).

PS I'd stay away from the Gawker sites for a while. The entire source code was compromised, so I expect hackers will already be reading the code looking for vulnerabilities, and additional hacks may occur in the coming days.

As part of the hack a long list of compromised accounts was distributed. The top 15 cracked passwords are:
3057 123456
1955 password
1119 12345678
661 lifehack
418 qwerty
333 abc123
311 111111
300 monkey
273 consumer
253 12345
247 letmein
241 trustno1
233 dragon
213 baseball
208 superman

Please don't use simple passwords like this! Use a password manager like KeePass and generate random passwords for each site.

Friday, December 10, 2010

Are some Oxford colleges racist?

As a follow-up to yesterday's post about statistics on black students applying to and being accepted by Oxbridge colleges, I thought I'd look into the "Merton problem". In the original article the author writes: "Merton College, Oxford, has not admitted a single black student for five years".

Two questions come out from that. First, how likely is the event "A single Oxford college doesn't admit a single black student for five years in a row" and secondly, "are some Oxford colleges racist?".

In an email from Ben Goldacre the first question is answered. Ben calculates the p-value of the "Merton problem" as (0.29^5) * 38 = 0.0779423662. i.e. given the small number of black students applying to Oxford you can't with statistical confidence say that the "Merton problem" is anything more than a natural consequence of randomness.

So, let's turn to the second question. Happily, the article author has published the documents he received from Oxford and Cambridge and from them we can calculate the rate at which black students are accepted at each of the colleges (for some reason there's no acceptance data for Hertford and Harris-Manchester).

Here's the data:


There I've shown just the total number of black students applying to each college from 1999 to 2009, the total number accepted, and the calculated acceptance rate. You'll notice that some colleges get more applicants than others. Interestingly, Merton received the lowest number of black applicants, which goes a long way to explaining why they didn't accept more students.

Now, let's turn to the acceptance rate. Do the acceptance rates tell us that some colleges are more racist than others? For the statistical test take the null hypothesis as "black students are accepted at the same rate by all Oxford colleges" (i.e. what I'm asking the test is "does this data look like it's not uniformly distributed?").

Using my old friend the chi-square test we get a value of 39.161 for 27 degrees of freedom (there are 28 colleges with data). Looking that up in a chi-square table gives critical values of 40.11 (p=0.05), 46.96 (p=0.01) and 55.48 (p=0.001). Since 39.161 falls below all of these we can't reject the null hypothesis. This data doesn't give us evidence that the acceptance rates at Oxford colleges are anything other than uniform.
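If you don't have a chi-square table handy, the same critical values can be computed (a sketch using SciPy's inverse CDF):

from scipy.stats import chi2

observed, df = 39.161, 27
for p in (0.05, 0.01, 0.001):
    critical = chi2.ppf(1 - p, df)
    print(p, round(critical, 2), observed > critical)
# 40.11, 46.96, 55.48: the observed value is below all three,
# so the null hypothesis of uniform acceptance rates stands.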

Thursday, December 09, 2010

The utter balls people write about Oxbridge

I start by apologizing for the profanity, but when I hear people spouting questions like "What is it about the famed Oxbridge interview system that counts against students who didn't attend a top public school?" it makes me very angry. The implication in the article is that there's a race bias (or is it a school bias, or a north/south divide bias... I actually lost track of the number of biases the article is claiming).

The answer to the question is rather simple (as long as the question is reframed as "What is it about the famed Oxbridge interview system that counts against some people?"). The Oxford interview process is bloody hard. So hard that 25 years on I use questions asked of me at 17 years old to screen candidates for programming jobs. The interview process is not designed to discriminate against people who didn't go to a top public school, it's meant to discriminate against people who aren't up to studying there.

I attended Oxford at the same time as David Cameron and his chums (Michael Gove was in the room next to me for a year). There certainly were lots of people from public schools (perhaps they did get in because their Dad went to Oxford, or perhaps it was because of the level of education they received), but there were also lots of people like me who didn't, in the stereotype, go to Eton.

I went to Oxford from a large comprehensive school. I sat that grueling entrance exam in mathematics, I was invited for interview and stayed days in Oxford being interviewed over and again. I didn't get special tuition to make it into Oxford, I'm not a public school boy and no one in my family has an Oxford connection. Neither of my parents have degrees.

I was asked extremely searching questions about mathematics and computer science that were well outside any A level curriculum and the purpose was to see how I would think. One interviewer pointedly asked me why I hadn't done a particular question on the entrance exam and then made me answer it at a blackboard in front of him. Another made me stand in front of a blackboard and solve a problem in computer science.

While I was at Oxford I was asked to go into comprehensive schools to encourage people to apply. Many people write themselves off and don't even try. This is a problem and the linked article doesn't help the situation by portraying Oxford as racist.

The author should have asked himself why so few black students were applying to Oxford and so few were getting top A level grades. You'd think he might have done that given that he was Minister for Higher Education under the previous government. But it's a lot easier to point the finger at some imagined evil institution than ask the hard questions about the state of education in British schools.

And he really shows his deep knowledge of the subject when he states: "Cambridge doesn't employ a single black academic." Sorry, Dr Okeoghene Odudu, Dr Justice Tankebe (inter alia) I guess you don't count for some reason.

It is tragic that such a small number of black students are getting top grades, but whacking Oxford and Cambridge without attacking the root cause is almost criminal. It betrays the very people the author would have us believe he is trying to help.

He also states: "You will not find these figures on the Oxford or Cambridge websites." Wanna bet? How about Oxford's Undergraduate Admissions Statistics 2009 entry, and let's look at Ethnic Origin.

And we'll just compare "Whites" to "Black African, Caribbean and Other". So 8,378 white applicants; 221 black. Switching to acceptances, we have 2,316 white acceptances; 27 black. So 28% of white applicants got in and 12% of black. Evidence of race bias or something else?

To quote the site: "Oxford’s three most oversubscribed large (over 70 places) courses (Economics & Management, Medicine and Mathematics) account for 44% of all Black applicants – compared to just 17% of all white applicants." and "Subject breakdown: 28.8% of all Black applicants for 2009 entry applied for Medicine, compared to just 7% of all white applicants. 10.4% of all Black applicants for 2009 entry applied for Economics & Management, compared to just 3.6% of all white applicants."

So you've got a small number of candidates applying to the most oversubscribed subject areas. 44% of black applicants are applying for courses with acceptance rates of 7.9%, 12.1% and 19%.

Put those figures together, assume no bias, and for the 44% you've got 12 black students admitted out of 97 who apply to those subjects. That's a success rate of 12%. That agrees with the figure given above: and that's assuming that the acceptance rate for those three subjects has no bias at all.

What about the other 56%? That's 123 students, of which 27 - 12 = 15 got accepted. So that's also 12%. The problem with interpreting that is that those 15 students are a tiny portion of the pool of 11,896 students who applied. And without knowing what subjects they applied for it's hard to dig into the numbers.

But it is possible to work backwards. Suppose that 28% of those 123 black students were accepted (the average rate for white applicants); then there'd be 34 accepted. So the total would be 34 + 12 out of 221, or 20.8%. Comparing that with the actual 12% rate it's clear that the acceptance rate for black students is lower than for white students in the non-oversubscribed subjects. But knowing why is hard.
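To keep the arithmetic of the last few paragraphs straight, here it is in one place (using the rounded figures quoted above):

black_applicants, black_accepted = 221, 27
oversub_accepted = 12   # of the ~97 applicants to the three oversubscribed courses
rest_applicants = 123   # the other 56%

rest_accepted = black_accepted - oversub_accepted        # 15
print(rest_accepted / float(rest_applicants))            # ~0.12

white_rate = 0.28
hypothetical = round(rest_applicants * white_rate) + oversub_accepted
print(hypothetical / float(black_applicants))            # ~0.208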

If they are all applying for earth sciences (acceptance rate 44.9%) then there's a problem; if they are applying for law (acceptance rate 17.7%) then a different picture emerges. And if it's Fine Art (acceptance rate 12.9%) they are close to spot on. The only way to get to the bottom of that puzzle is a breakdown by subject and ethnic origin. But with such a tiny group of applicants even a change in the acceptance of a single student could cause wild swings in percentage acceptance rates.

The other laughable misuse of statistics in the article comes in the form of cherry-picking. "Merton College, Oxford, has not admitted a single black student for five years." Hardly surprising. If 2009 is anything to go by, just 27 black students were admitted to the entire university. There are 38 colleges in Oxford. It's not possible to divide 27 by 38 evenly, and it's no surprise that a specific college would have no black students for a number of years.

The bottom line is that getting into Oxbridge is hard and that the number of black students applying is tiny. Imagine for a moment that black students got in at the same rate as white students. There would still only be 62 black students at the university. Let's attack the real problem and raise up the education level of black students.

Update: Follow up post looking into the Merton Problem.

Update: I emailed Oxford asking if they'd release the breakdown by ethnic origin and subject so that per-subject bias can be examined in the non-over subscribed subjects. Will blog if I get a result.

Update: I saw a comment on Twitter that said that it was "delusional of the author [i.e. me] to doggedly say there is no way oxbridge has any institutional issues at all." Clearly, I haven't said that Oxford has no institutional issues (in fact, it would be utterly amazing if it didn't), and in a comment I stated: "If anyone would like to point me to statistically significant data that shows bias I'd be happy to write about it." I don't see it from the data, but if it's there it should be examined.

Backgrounder document on Plan 28

Doron and I have prepared a short document that describes the background and goals of the project. This is primarily intended for use with third-parties (such as sponsors, institutions and the press), but in the spirit of openness here's a copy that anyone can read.



A brief introduction to the Plan 28 Project by John Graham-Cumming/Doron Swade is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

If you want to understand the Analytical Engine, start with the Difference Engine No. 2

There are large similarities between Charles Babbage's Difference Engine No. 2 and the Analytical Engine. Critically, Babbage designed the Difference Engine No. 2 after the Analytical Engine and it incorporates improvements discovered during the design of the Analytical Engine.

And the printer that's part of the Difference Engine No. 2 is identical to the printer needed for the Analytical Engine. Babbage said that the same printer would be used for both. The memory of the Analytical Engine is very similar to the figure wheels in the Difference Engine No. 2.

Here's Doron Swade demonstrating and explaining the Difference Engine No. 2:



And here's a lovely video of the machine in motion. Now try to imagine the Analytical Engine which will have 8x the number of parts and be vastly bigger.

Babbage books as stocking stuffers

If you're following along with Plan 28 (the project to build Charles Babbage's Analytical Engine) then you might like to do some background reading. Here are four suggestions for stocking stuffers for the coming holiday:

1. Doron Swade's The Difference Engine (also published with the title The Cogwheel Brain).



This is Doron's account of the Difference Engine No. 2 as envisaged by Babbage and as built by the Science Museum in London.

2. William Gibson and Bruce Sterling's The Difference Engine.



A fantasy that imagines what would have happened if the Analytical Engine had been built in Babbage's time.

3. Cultural Babbage



A set of essays inspired by the Difference Engine No. 2 that discuss the cultural significance of Babbage and his life.

4. Charles Babbage's Passages from the Life of a Philosopher



Babbage's autobiography.

More background reading here.

Tuesday, December 07, 2010

A boost for Plan 28

Up until a couple of weeks ago Plan 28 was a one man show. Although Plan 28 has received enormous press coverage and many people have pledged money, services, material and time, the project was still just me.

I'm happy to say that that's no longer the case.

Doron Swade, the pre-eminent Babbage expert who, as curator of computing at the Science Museum, masterminded the project to build Babbage's Difference Engine No. 2, has joined me on the project. Doron and I now share responsibility for finishing Babbage's work.

Doron and I met over coffee a few weeks ago to discuss the Analytical Engine and it was clear that both of us had been dreaming of building the physical engine for public display. Happily, Doron had been doing a lot more than dreaming. His deep knowledge of Babbage's engines and his continuing study of Babbage's plans and notes have placed him in the unique position of being the key figure in any attempt to build the world's first digital, programmable, automatic computer.

Much more has been happening behind the scenes that we cannot yet discuss, and the project's success is by no means guaranteed, but Plan 28 has received a major boost in the form of Doron Swade.

PS You can still pledge to the project; your promise of $, € or £ is much needed!

Monday, December 06, 2010

GAGA-1: The Camera Hole

This weekend's work on GAGA-1 was mostly around mounting the camera inside the capsule. The capsule walls are 95mm thick so a hole had to be cut all the way through for the thinnest part of the lens and then part way through for two other parts. A second trench had to be cut into the polystyrene for the part of the camera where the batteries are held.

The other thing I worked on was the positioning and mounting of the computers and where the batteries will sit. Here's a shot inside the box showing the camera pushed into place and flush against the capsule sides. There's a single battery pack in roughly the spot where it will be fixed and the recovery computer on the wall opposite the camera. The two gold connectors are the GSM and GPS antenna SMA connectors.


And here's a shot showing the hole pierced through the capsule wall to allow the camera to take photos (yes, I have checked that the capsule wall is not visible in the photos). The recovery computer can be clearly seen at the back. I will be painting the hole the same yellow as the rest of the capsule just to make it look nicer.


The hole was cut with a very sharp, thin knife. A bit messy but the end result is certainly good enough. Here's the camera in the hole.


I insulated the trench with space blanket to keep the camera as warm as possible, but left the lens hole untouched because the walls are very thick there. The black circles are velcro pads used to help keep the camera in place during the flight.

Friday, December 03, 2010

Breaking the Reddit code

A few days ago an entry on Reddit asked for help breaking a code. Because I was laid up in bed yesterday with something chesty and nasty I couldn't help but wonder about the decryption of the message (see also the Fermilab code). At the time no one had broken it.

I managed to break it; here's how.

The original message was written on paper like this:


So I did my own transcription of the message and obtained the following four lines:

SSNTTNNDERPEVEEEHNOTONNAAEWMAEEMUDRITRNTNDOAWNETOHTVEEDMRMRTTFOGT
HUUFSHIIEMAHVOIANRTOARRSJRGEHHIEREELSEANMSTEMEWYEOHAMDEOMITTIECI
OLCHHIMDBRPPCAPROMRADIMEOSISLTSTYMEIATYOOEDSTHIEVLVEOBECWGEOORYA
TYERNOAEONLWRSLESKEEHTAEYIODSAAOIHWIUTMNWEONTHATPLVRLAPLIEOAAOUN

There were a couple of things that stood out immediately. Just eyeballing the text it looked like English (lots of E's, T's, etc.), and running it through a letter frequency checker confirmed that the letter distribution is close to that of English text.


So given that, the code was most likely some kind of transposition cipher. I blindly ran through a bunch of classic ciphers using anagramming to try to find likely words. Wasted ages on this and got nowhere. Although I did discover that the last 16 letters can be rearranged to say POPULAR I LOVE ANAL.

Then I went back and looked at the text. There are clues within it. First, it's broken into four separate rows, and that's likely significant. Secondly, the first row is one character longer, which made me think that character must be the last one in the message.

After much messing around with the order of the rows I discovered that reversing the first and third rows resulted in the word THAT appearing in the first column:

TGOFTTRMRMDEEVTHOTENWAODNTNRTIRDUMEEAMWEAANNOTONHEEEVEPREDNNTTNSS
HUUFSHIIEMAHVOIANRTOARRSJRGEHHIEREELSEANMSTEMEWYEOHAMDEOMITTIECI
AYROOEGWCEBOEVLVEIHTSDEOOYTAIEMYTSTLSISOEMIDARMORPACPPRBDMIHHCLO
TYERNOAEONLWRSLESKEEHTAEYIODSAAOIHWIUTMNWEONTHATPLVRLAPLIEOAAOUN

And, in fact, if you read down the columns from left to right (and add some spaces) you get:

THAT GUY YOUR EFFORTS ON THE ORIGAMI WERE COMMENDABLE HOWEVER VOV
STILL HAVE ONE STRIKE THE NOTE WAS HARD TO READ SO ENJOY TRYING TO READ
THIS I HEAR I MADE YOUR TIMESHEET WELL I ASSUME IT WAS ME NO NAME WAS
MENTIONED NO MATTER HOW MANY OTHER PEOPLE HAVE A CRVMPLED PAPER
PROBLEM DID I MENTION THAT I HATE CONCLUSIONS

Notice that I made a few transcription errors. I suspect that VOV is really YOU and CRVMPLED must be CRUMPLED.
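In code the whole decryption is just two string reversals and a column-major read (using my transcription above, warts and all):

rows = [
    "SSNTTNNDERPEVEEEHNOTONNAAEWMAEEMUDRITRNTNDOAWNETOHTVEEDMRMRTTFOGT",
    "HUUFSHIIEMAHVOIANRTOARRSJRGEHHIEREELSEANMSTEMEWYEOHAMDEOMITTIECI",
    "OLCHHIMDBRPPCAPROMRADIMEOSISLTSTYMEIATYOOEDSTHIEVLVEOBECWGEOORYA",
    "TYERNOAEONLWRSLESKEEHTAEYIODSAAOIHWIUTMNWEONTHATPLVRLAPLIEOAAOUN",
]
rows[0] = rows[0][::-1]   # reverse the first and third rows
rows[2] = rows[2][::-1]

# Read down the columns, left to right (row 0 is one character longer,
# so it alone contributes the final character).
plain = "".join(r[i] for i in range(len(rows[0])) for r in rows if i < len(r))
print(plain)   # THATGUYYOUREFFORTSONTHEORIGAMI...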

Guess I'll have to get back to Kryptos now.

Sunday, November 28, 2010

GAGA-1: CoCom limit for GPS

One problem for high-altitude balloon projects is the CoCom limit on how high and how fast a GPS will operate. To prevent GPS modules from being used in very fast moving weapons (such as ballistic missiles), GPS receivers are not allowed to operate:

1. Above 60,000 feet

2. When traveling faster than 1,000 knots

The second restriction doesn't matter for GAGA-1, but the first does. GAGA-1 will have a maximum altitude (balloon dependent) of more like 100,000 feet.

Different manufacturers implement the CoCom limit in different ways: some use an AND rule (>60,000 ft and >1,000 knots) and others use an OR rule (>60,000 ft or >1,000 knots). For high-altitude ballooning it's ideal if the GPS uses AND. Unfortunately, this information is shrouded mostly in mystery and it's only through actual flights and testing that people have managed to determine which GPS receivers are AND and which are OR.

For GAGA-1 I have two GPS units: one in the Recovery Computer and one in the Flight Computer. The Flight Computer is using a Lassen IQ which is known to work correctly on balloon flights.

The Recovery Computer is using the GM862-GPS, which will fail above the altitude limit. This is OK because it is only needed once the balloon has landed, to send the GPS location via SMS messages. But the failure mode is important.

I've been back and forth with Telit technical support and they claim that the module will simply fail to give me a GPS fix above 60,000 ft but that once the balloon is down again it'll restart automatically. Others claim that code should be included to automatically reset the GPS if it hasn't given a fix for some length of time. I plan to update the code to include an auto-reset after 30 minutes if no fix or no satellites during flight and recovery.
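Here's a rough sketch of what that auto-reset might look like, in the style of the recovery computer's code. Treat it as an illustration: at.send() is a stand-in for whatever the at.py wrapper actually exposes, and AT$GPSR=1 (a GPS reset from Telit's AT command set) should be verified against the GM862-GPS documentation.

# Watchdog sketch: reset the GPS if there has been no valid fix for
# 30 minutes. The main loop would call this after each gps2.get(),
# passing a seconds counter and the time of the last valid fix.
NO_FIX_LIMIT = 30 * 60   # seconds

def gps_watchdog(position, now, last_fix):
    if position['valid']:
        return now                 # fix seen: restart the clock
    if now - last_fix > NO_FIX_LIMIT:
        at.send('AT$GPSR=1')       # assumed GPS reset command
        return now                 # give it another 30 minutes
    return last_fix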

Saturday, November 27, 2010

GAGA-1: Capsule insulation and antenna mounting

A bit of physical stuff on GAGA-1 this weekend after the Recovery Computer software last time. I'd previously painted the capsule for high visibility, but hadn't started cutting it or sticking on parts. After the successful test of the Recovery Computer it's time to put some bits on the box!


The three antennae visible on the box (as with the other components) are hot glued in place. I pierced holes in the box using a long metal skewer and a chopstick.

Here's a close up of the top of the capsule.


The top two antennae are for the two GPS modules (one in the Flight Computer and the other in the Recovery Computer). The long thin antenna is for the GSM connection that's part of the Recovery Computer.

The other two parts are a small red straw and a large black straw. The small red straw is simply there to allow the pressure to equalize between the inside and the outside of the capsule. Since the pressure is very low in the stratosphere it would be dangerous to send the box up completely sealed.

The black straw is sealed at the end with hot glue and will be where the external temperature sensor is placed.

I've further insulated the box by lining the interior with sheets of space blanket. This reflects almost all the heat generated inside the box (by the electronics) and should help keep things warm.


This was very fiddly to do as the space blanket material is very thin. I cut sheets out using a stencil and glued them in place. Placing my hand in the box I can feel warmth: the reflected warmth of my own hand.



Finally, here's an interior shot of the lid of the capsule showing where the cables for the antennae and straws poke through.

Friday, November 26, 2010

Notes on Kryptos Part 4

This is a copy of a message I sent to the Kryptos group on Yahoo! for anyone who's working on Kryptos but not in that group.

Given Elonka's notes mentioning that K4 uses a cipher system not known to anyone else, I decided to investigate other possible ways of attacking K4. Specifically, I wondered if the BERLIN crib might not be as simple as NYPVTT turning letter for letter into BERLIN.

First, I assume that this is something that's breakable by hand, as was the rest of Kryptos, and thus would simply be based on MOD 26 arithmetic on letters and might involve transposition of characters.

So I went to see if there's a word that could be permuted to create some permutation of BERLIN from NYPVTT. There is: it is SILENT.

NYPVTT
ENTSIL
------
RLINBE

More strikingly, this works if you slide SILENT along from the start of K4: it falls in just the right position to produce RLINBE, the permuted BERLIN.

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
SILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTS
GJVVHHPWRLHETAZPVYTJHJYKNYBTEGYSDWBMOBBWWJKAPOMSOIENXEMLTEJBFNMRLINBEQMYHSHKQDRFENPWAOVYUNSCPOPTJ

Now leading on from this, I wonder if the cipher used for K4 consists of permutations of both the key and the ciphertext. Note how BERLIN is permuted within itself, and so I returned to the start of the ciphertext to see if there's a permutation of SILENT that results in a word (after permutation) starting at position 0. Once again there is:

OBKRUO
ILENTS
------
WMOENG

i.e. the word is WOMEN, assuming that the G belongs to the word after WOMEN. In this case ILENTS is a simple rotation of the word SILENT (just as ENTSIL was the rotation that gave us BERLIN). There are likely other words as well, but this one is strikingly long.

Running through all six possible rotations of SILENT you get:

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
SILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTS
GJVVHHPWRLHETAZPVYTJHJYKNYBTEGYSDWBMOBBWWJKAPOMSOIENXEMLTEJBFNMRLINBEQMYHSHKQDRFENPWAOVYUNSCPOPTJ

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
ILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSI
WMOENGFZKUNDJDSYBXJMASEJDBUCKFOVWFHLEEUFCIADIXSRELXWDDCOMNPAVQFARHDEXZSXXVATWCHIXWVVQROHAMIFIXVSZ

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
LENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSIL
ZFXKMWISTAMTMWBEANMFJYDZGUDIJVROFLGBHXDLBYDWRDRHHEGCCTFHVTOQYJOGQXGXGFRNAOJZVSKBGCULTKXNZCLYRDUIC

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
ENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILE
SODJCZBBZZCWFFHDQQFOPXTCZDJHZYKXLKWEAGJKRBWFXCHKANMBSWYQBSETRSUFGAZGMEHQTXPYLVDKMBKOMTDMPFEHXCKLV

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
NTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILEN
BUCZFSKHYPFPOLGTTJOUONWVIJIXCRTDKAZXJMIAUUFLWSKDJTLRVPHWAIHMAYTVJTIMLUKJCDOOOOMQLRNHVZCCSYNNWSNEE

OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFBNYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR
TSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENTSILENT
HTSCYBQGOSYYUKWWMSUTEQPEOIYAVAZCADSGPLYDNDLKMVDMPSBUOYNVQLAVGXJYCCOLBXDSICERHXSPBUGQBYSFLHTMMVGNK

If you look you'll see various words popping out (in the ILENTS set, the second block above, there's WOMEN at the beginning and, closer to the end, an anagram of WATCH).

Perhaps there's a method to choosing which shift of SILENT to use followed by some sort of transposition.
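For anyone who wants to reproduce those six rows, here's the computation in Python: straight MOD 26 addition of each rotation of SILENT to the K4 ciphertext.

K4 = ("OBKRUOXOGHULBSOLIFBBWFLRVQQPRNGKSSOTWTQSJQSSEKZZWATJKLUDIAWINFB"
      "NYPVTTMZFPKWGDKZXTJCDIGKUHUAUEKCAR")
KEY = "SILENT"

def add(c, k):
    # MOD 26 addition of two letters (A=0 .. Z=25)
    return chr((ord(c) + ord(k) - 2 * ord('A')) % 26 + ord('A'))

for r in range(len(KEY)):
    key = KEY[r:] + KEY[:r]   # each rotation of SILENT
    print(key, "".join(add(c, key[i % len(key)])
                       for i, c in enumerate(K4)))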

Wednesday, November 24, 2010

A proper Dr.

I wrote in my bio for The Geek Atlas that (speaking about myself in the third person) "Because he has a doctorate in computer security he's deeply suspicious of people who insist on being called Dr.". I am very suspicious of people who shove their PhD in your face, or who insist on being called Dr. In fact, like a British surgeon, I would much prefer to be called Mr. (which I suppose is a form of snobbery) mostly because it's what I've done after my doctorate that I'm most proud of.

Which brings me to the case of "Dr." Gillian McKeith. I only became aware of her because of the wonderful Ben Goldacre who has taken her to task about her qualifications and claims.

In an old article Goldacre talks about McKeith's qualifications and her legal threats against people who criticize her. He makes the very good point that it's easy to validate real credentials (e.g. if you want to check that I've really got a DPhil from Oxford you just need to call them up or, unlike McKeith, you can read my thesis).

Another person with a PhD from an unaccredited institution is John Gray (who wrote Men are from Mars, Women are from Venus). He refers to himself as Dr John Gray or John Gray PhD.

A striking similarity between McKeith's and Gray's web sites is the special section explaining their degrees. John Gray has a page on the subject and McKeith explains her degree in detail:

Gillian then spent several years re-training for a Masters and Doctorate (PhD) in Holistic Nutrition from the American Holistic College of Nutrition (USA).
[...]
To obtain a PhD in Holistic Nutrition from the College, it is a pre-requisite that a student must have a Masters Degree and then undertakes study through a number of preliminary courses and a core curriculum, including general nutrition, immune system health, detoxification, herbology, human anatomy, enzymatic nutritional therapy, vitamin and mineral studies, nutrients, relationship of diet and disease, geriatric nutrition and nutrition and mental health. Doctoral students also have to prepare an original and practical dissertation. Gillian studied and completed the PhD course and dissertation over a period of more than 4 years between 1993 and 1997.

The PhD in Holistic Nutrition is a doctorate programme that is approved by the (American) National Association of Nutrition Professionals (NANP), a non-profit organisation which maintains the integrity of the holistic nutrition profession by establishing educational standards, a rigorous code of ethics and registration of nutrition professionals.

Whenever I see these long explanations I can't help remembering the line: "The lady doth protest too much, methinks". Let me, for the record, do a special section describing my qualifications: "John Graham-Cumming, MA (Oxon), DPhil (Oxon)"

Which brings me to the question of what the proper title is for someone who has a PhD that isn't really a PhD (or at least a PhD from an unaccredited institution). Might I suggest Ph. in front of their name for 'Phony'?

Saturday, November 20, 2010

GAGA-1 Recovery Computer Ground Test

Today was the first live test of the GAGA-1 Recovery Computer and, at least initially, it didn't go well. The result is much improved, fully working code in the repository. This is why I'm obsessed with actual testing of the components of GAGA-1.

First the good news: the module ran on 4 AA batteries for 9 hours without showing any problems caused by power. For over 3 hours the module was getting GPS location information and sending SMS messages. This is very reassuring.

Here's the module sitting on the kitchen table ready to go:


Here's the commit message for the code changes:

Significant changes based on live testing of the code on the Telit GM862-GPS module:

1. The module does not support floating point and so the gps2.get() function has been modified to use only integer arithmetic and thus not do conversion of latitude and longitude. This leaves most of the returned elements as strings, except altitude, which is needed for comparisons.

2. The main loop is wrapped in a try/except that will hide any in-flight errors (although they will be logged to the log file). I hope this will never be needed.

3. The timeout on SMS message sending has been increased from the default to 30 seconds because it can take a while to send the message and this was corrupting the return from the temperature command.

4. Improved handling of timestamps to make it clearer in the SMS messages when a message occurred.

5. Modified the upload script to delete the .pyo files for uploaded .py files so that the module will recompile and not use the previous version.

The floating point was a pain (and yes, it is documented in the Telit documentation). The SMS timeout problem was showing up in the log file as follows:

32945: Temperature read returned

+CMGS: 161



OK



#TEMPMEAS: 0,22



OK

The first part (+CMGS: 161) is left over from sending an SMS with the AT+CMGS command and meant that the read buffer hadn't been flushed. Changing the timeout fixes this. The good news here is that my defensive programming style worked well in keeping the module running under this error condition.

I ran the module for 208 minutes (3 hours, 28 minutes) sitting motionless in the garden reporting position via SMS every two minutes. Here's a graph showing reported altitude, number of satellites, internal temperature and speed. At the beginning I take the module out from the house (room was at 20C) into the garden (temperature was 7C) and at the end I bring it in again.


The gaps in the temperature record are where the problem in number 3 above occurred. The chart starts just as I put the module down in the garden; at the end you can see the number of satellites drop and the apparent speed increase as the module is brought indoors. The module seems to be running about 3C hotter than ambient temperatures.

Things to do on this part:

1. Shorten the leads on the two antennae and install in the capsule.

2. Run a three hour test in a moving car.

3. I am still very worried about the CoCom limit and am waiting for a response from Telit. In the worst case I'm going to add a watchdog to the code so that if there's been no GPS lock for an hour a complete reset of the GPS module is forced.

Thursday, November 18, 2010

I guess Hacker News doesn't do meta very well

(which is ironic for something written in a Lisp variant)

Recently, I've grown tired of stories about the TSA on Hacker News.

So I posted an item saying that I was taking a break (the title was "Au revoir Hacker News") with text saying that I was fed up with the TSA stories (and in particular the Ron Paul story in the top slot) and that I was going to take a temporary break. It ended saying I'd see everyone in the New Year once things had blown over.

It was at this link but it was nuked by someone. Not made [dead], simply expunged by a moderator.

Feels a bit uncalled for to me, I'd have been happy for the community to shoot me down.


(Actually it is [dead] so it was the community that killed it off)

I guess I'll be back in the New Year.

Friday, November 12, 2010

The things make got right (and how to make it better)

make is much maligned because people mistake its terse syntax and pickiness about whitespace for signs of being an anachronism. But make's terseness is what makes make fit for purpose, and people who design 'improvements' rarely seem to understand the fundamental zen nature of make.

Here are some things make does well:

1. make's key use is in the expression of dependencies. make has a compact, syntactic cruft-free way of expressing a dependency between a file and other files.

2. Since make is so dependent on handling lists of dependencies it has built-in list processing functionality.

3. Second to dependency management is the need to execute shell commands. make's syntax for including dependencies in shell commands is minimal, which keeps the eye from being distracted from the commands themselves.

4. make is a macro-language not a programming language. The state of a build is determined by the dependency structure and the 'up to dateness' of files. There's no (or little) need for any other internal state.

To see the ways in which make is superior to other similar, more modern systems, this post will compare GNU Make and Rake. I've chosen Rake because I believe it's illustrative of what happens when people create new make-like systems instead of just fixing the things that are broken about make.

Here's a simple Makefile showing the syntax used for updating a file (called target) from a list of dependent files by running a command called update.

target: prereq1 prereq2 prereq3 prereq4
	update $@ $^

(If you are unfamiliar with make then it's helpful to know that $@ is the name of the file to the left of the :, and $^ is the list of files to the right).

Here's the same thing expressed in Rake. The first thing that's obvious is that there's a lot of syntactic noise around the command and the expression of dependencies. What was clear in make now requires more digging to uncover and things like #{t.prerequisites.join(' ')} are long and unnecessarily ugly.

file 'target' => [ 'prereq1', 'prereq2', 'prereq3', 'prereq4' ] do |t|
    sh "update #{t.name} #{t.prerequisites.join(' ')}"
end

The biggest 'problem' that the Rake syntax fixes in make is that the target and prerequisite names can have spaces in them without difficulty. Because a make list is space-separated and there's no escaping mechanism for spaces it's a royal pain to work with paths with spaces in them.

make's terse syntax $@ is replaced by #{t.name} and $^ is #{t.prerequisites.join(' ')}. The great advantage of the terse syntax is that the actual command being executed can be clearly seen. When the command lines are long (with many options) this makes a real difference in debug-ability.

That this terseness is better can be seen in an example taken from the Rake documentation:
  
task :default => ["hello"]

SRC = FileList['*.c']
OBJ = SRC.ext('o')

rule '.o' => '.c' do |t|
    sh "cc -c -o #{t.name} #{t.source}"
end

file "hello" => OBJ do
    sh "cc -o hello #{OBJ}"
end

# File dependencies go here ...
file 'main.o' => ['main.c', 'greet.h']
file 'greet.o' => ['greet.c']

which rewritten in make syntax is:
  
SRC := $(wildcard *.c)
OBJ := $(SRC:.c=.o)

all: hello

.c.o:
	cc -c -o $@ $<

hello: $(OBJ)
	cc -o hello $(OBJ)

main.o: main.c greet.h
greet.o: greet.c

If you want to fix make then it's worth considering the following make problems that don't require an entirely new language:

1. Fix the 'spaces in filenames' problem. Not hard, just needs consistent escaping or quoting.

2. make has a concept of a PHONY target which is a target that isn't a file (used for things like clean and all). These are in the same namespace as file targets. This should be fixed.

3. make can't detect changes in the commands used to build targets. It would be better if make could do this. You can hack that into make but it's ugly.

4. make relies on timestamps for 'up to date' information. It would be better if make used hashes (in some situations, such as when files are extracted from a source code management system, timestamps can be unreliable). This can also be hacked into make if needed.

5. Ensure that non-recursive make is handled in an efficient manner.

Overall I'd urge make reimplementers to do as Paul Graham has done with LISP: his arc language is very LISP-like rather than something brand new.

And one final note: building and maintaining software build systems is inherently hard. Visualizing and getting right the graph of dependencies and handling cross-platform problems isn't easy. If you do come up with something good, please write good documentation for it.

Avis: how to shaft a long time customer

I'm a really long time customer of Avis and so I was pretty surprised that using my Avis Preferred number gets me a way worse deal on car rental than pretending to be an ordinary unknown customer.

I wanted to rent a car for a couple of days so I went to the web site, found a suitable car and got offered this deal:


So £71 for unlimited mileage for the two days plus the standard insurance cover and tax.

But the site wants me to log in and so I hand over my last name and Avis wizard number and the deal changes to:


So I've lost the chance to pay now for a smaller amount, the rental has gone up to £106 and the unlimited mileage has been replaced with 200 free miles. But at least:


Apparently my discount is a 50% increase in the fee. Boy, thanks Avis. I've been a Preferred member since 1995 and this is my reward.

Tuesday, November 09, 2010

CRUTEM3 Curiosity

Working through some old code tonight I decided to take a look at the errors that Ilya Goz and I discovered in the calculation of CRUTEM3 station errors (here's the confirmation from Met Office of the errors).

My original detailed post on the problem was written on February 7, 2010 and the confirmation from the Met Office came on April 15, 2010.

I downloaded the latest release of CRUTEM3 data from here and reran my little test program on the three example errors I gave in my original post. Inside the file it is reported as having been generated on Thu Oct 14 15:42:43 BST 2010 and the associated text file has date Fri Oct 15 2010 20:07.

Oddly, the errors remain. In the intervening nine months the Met Office appears to have acknowledged the errors but not corrected them.

Friday, November 05, 2010

GAGA-1 Recovery Computer

Finally, I got some time to work on the GAGA-1 Recovery Computer, which uses a combination of a GPS and a GSM module to send position updates via SMS to a cell phone. The complete code is now in the repository in the gaga-1/recovery/ folder.

The recovery computer itself is a Telit GM862-GPS module mounted on a board that supplies power from four AA batteries. It has two external antennas: one for GPS and one for GSM access. Here's a shot of the computer before installation in the capsule (clearly the cables are going to have to be shortened and the power supply cleaned up before the real flight). The GPS antenna is square and the GSM is the long thin bar.


The GM862-GPS has an integrated Python interpreter so the control software is a set of Python modules that handle getting GPS information (and sundry information like temperature and voltage) and sending SMS messages at appropriate times. Here's the key piece of code for the recovery computer:

# The recovery computer runs through a sequence of simple states that
# determine its behaviour. It starts in the Launch state, transitions
# to the Ascent mode once above a preset altitude, then moves to
# Flight mode once too high for safe SMS usage. Once below the safe
# altitude it transitions to Recovery mode.

state = ''
set_state( 'LAUNCH' )
sms.init()
gps2.init()

# The rules for the states are as follows:
#
# Launch: get GPS position every 1 minute and SMS, check for
# transition to Ascent mode
#
# Ascent: get GPS position every 2 minutes and SMS, check for
# transition to Flight mode
#
# Flight: get GPS position every 5 minutes and check for
# transition to Recovery mode
#
# Recovery: get GPS position every 1 minute and SMS

while 1:
    position = gps2.get()

    if state == 'LAUNCH':
        report_position(position)
        if position['valid'] and \
           ( position['altitude'] > ascent_altitude ):
            set_state( 'ASCENT' )
    elif state == 'ASCENT':
        report_position(position)
        if position['valid'] and \
           ( position['altitude'] > flight_altitude ):
            set_state( 'FLIGHT' )
    elif state == 'FLIGHT':
        if position['valid'] and \
           ( position['altitude'] < recovery_altitude ):
            set_state( 'RECOVERY' )
    elif state == 'RECOVERY':
        report_position(position)

    if state == 'LAUNCH' or state == 'RECOVERY':
        delay = 1
    elif state == 'ASCENT':
        delay = 2
    else:
        delay = 5

    # MOD.sleep() counts in tenths of a second, so delay * 600
    # sleeps for 'delay' minutes
    MOD.sleep(delay * 600)

That code can be found in gaga-1.py, which is the main module executed automatically by the GM862-GPS. The other important modules are logger.py (logs to the serial port for debugging and to a file in the NVRAM on the GM862-GPS), at.py (simple wrapper for AT command access on the module), sms.py (module for sending SMS messages) and gps2.py (module to get GPS location).
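To give a flavour of what the AT wrapper does, here's a minimal sketch (an illustration, not the actual at.py from the repository): send a command to the modem via the Telit MDM module and collect the response until OK or ERROR comes back. Telit timeouts are in tenths of a second.

import MDM   # the Telit built-in module that talks to the GSM modem

def cmd(command, timeout=50):
    # Send an AT command and return the accumulated response, or
    # whatever arrived before the timeout (in 0.1s units) expired.
    MDM.send(command + '\r', 0)
    response = ''
    while response.find('OK') == -1 and response.find('ERROR') == -1:
        chunk = MDM.receive(timeout)
        if chunk == '':
            break                  # timed out waiting for the modem
        response = response + chunk
    return response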

There's a small Makefile that controls building and uploading of the code to the module (upload is achieved using the upload.pl helper program). The main commands are make all to build the code into compiled Python files, make upload to upload it to the GM862-GPS and make test to run a flight simulation.

To test the code I've written modules that pretend to be the Telit Python modules (MDM, MOD, GPS, SER, etc.) and respond realistically to API calls and AT commands from my code. Within these modules I've programmed a simulated flight (an ascent, albeit a fast one, followed by descent) and random appearance of errors coming from the module (such as no GPS fix, no GSM access and other errors).
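The real stubs are in the repository, but a minimal sketch of the idea looks something like this (the flight profile and failure rate here are made up for illustration; the function names mirror the Telit calls the flight code uses):

# fake_telit.py: a stand-in for the Telit MOD and GPS modules so
# that the flight code can run on a PC. A sketch only; the real
# test modules simulate more kinds of errors (SMS failures etc.)
import random

_seconds = [0]                     # simulated seconds since power-on

def secCounter():                  # stands in for MOD.secCounter()
    return _seconds[0]

def sleep(tenths):                 # stands in for MOD.sleep(); 0.1s units
    _seconds[0] = _seconds[0] + tenths // 10   # advance fake time instantly

def getActualPosition():           # stands in for GPS.getActualPosition()
    if random.random() < 0.1:      # occasionally simulate loss of GPS fix
        return '184742.000,,,,,0,,,,051110,00'
    # Triangular flight profile: ascend at 6m/s to 30km, then descend
    alt = 30000.0 - abs(30000.0 - 6.0 * _seconds[0])
    if alt < 13.0:
        alt = 13.0
    return '180541.000,5238.7818N,00211.1238W,1.2,%.2f,3,138.00,0.00,0.00,051110,05' % alt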

Here's a log of a simulated flight. You can see times when failures occurred (loss of GPS, can't send SMS: those lines are in red). I've highlighted the altitude in blue for easy reading.

$ make test
342295541: sms.send(+447...,"Transition to state LAUNCH")
342295541: gps2.get() -> 180541.000,5238.7818N,00211.1238W,
1.2,13.00,3,138.00,0.00,0.00,051110,02
342295541: sms.send(+447...,"52.6464N 2.1854W 13.00m 138.00deg
0.00kph 2sats (1711mV, 28C)")
342295601: gps2.get() -> 180641.000,5238.2410N,00211.3171W,
1.2,2053.03,3,114.00,45.00,24.30,051110,05
342295601: sms.send(+447...,"52.6373N 2.1886W 2053.03m 114.00deg
45.00kph 5sats (1919mV, -51C)")
342295601: sms.send(+447...,"Transition to state ASCENT")
342295721: gps2.get() -> 180841.000,5238.1755N,00211.5866W,
1.2,7093.11,3,241.00,38.00,20.52,051110,03
342295721: sms.send(+447...,"52.6363N 2.1931W 7093.11m 241.00deg
38.00kph 3sats (535mV, -11C)")
342295721: Failed to get SMS prompt
342295721: sms.send(+447...,"Transition to state FLIGHT")
342295721: Failed to get SMS prompt
342296021: gps2.get() -> 181341.000,,,,,0,,,,051110,00
342296321: gps2.get() -> 181841.000,5238.5639N,00211.2198W,
1.2,37093.23,3,46.00,33.00,17.82,051110,06
342296621: gps2.get() -> 182341.000,5238.7426N,00211.8475W,
1.2,48193.23,3,94.00,43.00,23.22,051110,01
342296921: gps2.get() -> 182841.000,5238.8810N,00211.9387W,
1.2,34693.21,3,355.00,5.00,2.70,051110,06
342297221: gps2.get() -> 183341.000,5238.1542N,00211.6911W,
1.2,20293.21,3,26.00,28.00,15.12,051110,04
342297522: gps2.get() -> 183842.000,5238.2393N,00211.8530W,
1.2,8262.62,3,54.00,24.00,12.96,051110,02
342297822: gps2.get() -> 184342.000,5238.1079N,00211.0661W,
1.2,12.00,3,3.00,0.00,0.00,051110,08
342297822: sms.send(+447...,"Transition to state RECOVERY")
342297882: gps2.get() -> 184442.000,5238.6368N,00211.9774W,
1.2,12.00,3,77.00,0.00,0.00,051110,06
342297882: sms.send(+447...,"52.6439N 2.1996W 12.00m 77.00deg
0.00kph 6sats (1444mV, -2C)")
342297942: gps2.get() -> 184542.000,5238.6790N,00211.3624W,
1.2,18.00,3,63.00,0.00,0.00,051110,03
342297942: sms.send(+447...,"52.6446N 2.1894W 18.00m 63.00deg
0.00kph 3sats (1246mV, -22C)")
342298002: gps2.get() -> 184642.000,5238.4941N,00211.8801W,
1.2,17.00,3,256.00,0.00,0.00,051110,01
342298002: sms.send(+447...,"52.6416N 2.1980W 17.00m 256.00deg
0.00kph 1sats (3095mV, -51C)")
342298062: gps2.get() -> 184742.000,,,,,2,,,,051110,00
342298062: sms.send(+447...,"No GPS lock (1045mV, 32C)")
342298122: gps2.get() -> 184842.000,5238.9542N,00211.9596W,
1.2,11.00,3,21.00,0.00,0.00,051110,05
342298122: sms.send(+447...,"52.6492N 2.1993W 11.00m 21.00deg
0.00kph 5sats (2742mV, 48C)")
342298182: gps2.get() -> 184942.000,5238.9607N,00211.1014W,
1.2,14.00,3,167.00,0.00,0.00,051110,08
342298182: sms.send(+447...,"52.6493N 2.1850W 14.00m 167.00deg
0.00kph 8sats (819mV, 6C)")


There are a few remaining items:

1. Run a complete, real test of the module using fresh batteries in a moving car and ensure that it correctly logs information and sends SMS. Also, see how long it lasts.

2. Get an answer from Telit on the COCOM limits so that I understand how the GPS fails above the 18km altitude line.

3. Cut down the cables and install in the capsule.

Then it'll be on to the flight computer.

Thursday, November 04, 2010

The most common objection to my 'releasing scientific code' post

Is...

And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone?

Or as expressed at RealClimate:

First, the practical scientific issues. Consider, for example, the production of key observational climate data sets. While replicability is a vital component of the enterprise, this is not the same thing as simply repetition. It is independent replication that counts far more towards acceptance of a result than merely demonstrating that given the same assumptions, the same input, and the same code, somebody can get the same result. It is far better to have two independent ice core isotope records from Summit in Greenland than it is to see the code used in the mass spectrometer in one of them. Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them. Better that an ocean sediment core corroborates a cave record than looking at the code that produced the age model. Our point is not that the code is not useful, but that this level of replication is not particularly relevant to the observational sciences.

This argument strikes me as bogus. It comes down to something like "we should protect other scientists from themselves by not giving them code that they might run; by not releasing code we are ensuring that the scientific method is followed".

Imagine the situation where a scientist runs someone else's code on the data that person released and gets the same result. Clearly, they have done no science. All they have done is the simplest verification that the original scientist didn't screw up in their methods. That person has not used the scientific method, they have not independently verified the results and their work is close to useless.

Is this enough to argue that the code should have been closed in the first place?

I can't see that it is. No one's going to be able to publish a paper saying "I ran X's code and it works"; it would never get through peer review and isn't scientific.

To return to the first quote above, running someone else's buggy code proves nothing. But in hiding the buggy code you've lost the valuable situation where someone can verify that the code was good in the first place. Just look at the effort I went to to discover the code error in CRUTEM (which, ironically, is one of the 'key observational climate data sets', to use RealClimate's words).

The argument from RealClimate can also be stated as 'running someone else's code isn't helpful so there's no point releasing it'. (see comments below to understand why this is struck out) The premise is reasonable, the conclusion not. I say that because there are other reasons to release code:

1. It can be used by others for other work. For example, good code can form part of a library of code that is used to improve or speed up science.

2. The algorithm in a paper can be quickly checked against the implementation to ensure that the results being generated are correct. For example, the CRUTEM error I found could have been quickly eliminated by access to the paper and source code at the same time.

3. Releasing code has a psychological effect which will improve its quality. This will lead to fewer errors on the part of scientists who rely on computer methods.

The real reason (climate) scientists don't want to release their code

Recently there have been three articles that discuss releasing scientific software. Nature had a piece called Computational science: ...Error, the bloggers at RealClimate wrote about Climate code archiving: an open and shut case? and Communications of the ACM has an article entitled Should code be released?.

Nestled in amongst the arguments about the scientific method requiring independent verification is what I believe is the real human motivation. Here's RealClimate:

Very often, novel methodologies applied to one set of data to gain insight can be applied to others as well. And so an individual scientist with such a methodology might understandably feel that providing all the details to make duplication of their type of analysis ‘too simple’ (that is, providing the code rather carefully describing the mathematical algorithm) will undercut their own ability to get future funding to do similar work. There are certainly no shortage of people happy to use someone else’s ideas to analyse data or model output (and in truth, there is no shortage of analyses that need to be done).

And here's Communications of the ACM:

"There are downsides [to releasing code]", says Alan T. DeKok, a former physicist who now serves as CTO of Mancala Networks, a computer security company. "You may look like a fool for publishing something that's blatantly wrong. You may be unable to exploit new 'secret' knowledge and technology if you publish. You may have better-known people market your idea better than you can and be credited with the work. [...]"


To take those downsides in turn:

1. When a scientist's results are later invalidated by others because the code was blatantly wrong (and papers end up being retracted; see the story in the Nature article) they are going to look like much more of a fool. And, frankly, if they think their code is that bad one has to wonder how they can think their paper is worth publishing.

2. The argument about others using your code seems bogus because if everyone released code then there would be (a) an improvement in code quality and (b) an 'all boats rise' situation as others could build on reliable code.

It's tragic that there's a conflict between science and scientific careers. But I think you can put aside the high-minded arguments about the integrity of the scientific method, and see the real reason (climate) scientists don't want to release their code: management of a scientific career and fear of looking foolish.

Monday, November 01, 2010

Top blog content for October 2010

The following blog entries were popular around the web in October:

  1. Babbage's heart-warming message for the middle-aged 12.7% of page views

  2. 1000 (bad) ideas 12.3% of page views

  3. A Plan 28 Completion Date 7.4% of page views

  4. Tweet is cheap 6.1% of page views

  5. x is down 5.7% of page views

Serendipity Cost

In economics the 'opportunity cost' is the cost of a path not taken. If you are presented with a choice, such as entering a PhD program or immediately going to work for a consulting firm at $80,000 a year, then there's a cost to whichever path is not taken.

If you do the PhD then you've turned down $240,000 of earnings (three years at $80,000), perhaps because you think you'll earn more in future from the PhD; if you take the job you turn down the chance to get the qualification and hence a higher salary afterwards (the lost increase in salary is the opportunity cost).

Another sort of cost is the cost of paths not observed. I call this serendipity cost.

While working at a start-up I was inundated with email from a variety of sources and wanted to do something about it. Looking for a solution I stumbled upon a project called ifile and decided to walk down its path. That path took me into machine learning and POPFile, to meeting Paul Graham, and much more. I was looking for a solution to my email overload problem; I found something else.

But there are other times when I paid the serendipity cost by not observing a path. By not observing the lucky randomness in front of me I've cut off various interesting paths that could have been fulfilling both financially and emotionally. And that happened when I got too blinkered about the path I was on. The key to not paying a serendipity cost is to look around the path you are walking for the wonderful things that are adjacent.

Some years ago I was told about some people working on an interesting start-up to keep people in contact with work colleagues. I dismissed the idea as I was somewhat obsessed with the path I was on (which went nowhere). The idea is now called LinkedIn.

If you don't want to pay the serendipity cost then listen when a little voice says "Now, that's interesting" and don't be quick to dismiss the interesting paths leading off around you.

I wonder how things would have been different if Alexander Fleming hadn't looked at those culture dishes and gone "Hmm, that's interesting".

Sunday, October 31, 2010

Sometimes they let me out of the asylum

I was invited to give a talk at Ignite London 3. The video of the talk has now been posted:

The Geek Atlas: Sun, Sea, Sand, Science - by John Graham-Cumming from chichard41 on Vimeo.



(I do use some language in this talk which might be considered NSFW, although it's pretty mild).

The ICCER C++ Code

Early in September I received a message from Sir Muir Russell indicating that he and the Independent Climate Change Email Review folks were going to look into my request to release the C++ code they used to analyze GHCN records as part of the review.

This morning I received another email from Sir Muir indicating that the code had been released and that it was available from the review web site. Sure enough it appears that the code was released last week.

Today I downloaded it and got it running without much difficulty. The code is written in C++ and compiles easily with g++. The review didn't release a Makefile or similar build instructions so I quickly hacked one together to make building easy:

# Trivial Makefile to build a program called 'ghcn' from the code
# released by the ICCER for gridding/trending GHCN data

# The name of the program that will be generated
PROG := ghcn

.PHONY: all clean
all: $(PROG)
clean: ; @rm -f $(PROG)

HDRS := $(wildcard inc/*.h)

$(PROG): Analysis.cxx $(HDRS) ; @g++ -Wall -o $@ $<

Doing a make creates a program called ghcn that reads GHCN files (v2) and calculates anomalies, performs gridding and calculates the global time series.
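For anyone who hasn't seen how these series are built, here's a back-of-envelope Python sketch of the pipeline the C++ implements. This is my own simplification, not the ICCER code: the 5 degree grid and cosine-of-latitude area weighting follow the standard CRU method, and it computes normals from whatever values are available rather than the fifteen-value rule mentioned below.

import math
from collections import defaultdict

def anomalies(station_temps, base=(1961, 1990)):
    # station_temps: {year: [12 monthly temps or None]} for one station.
    # Returns anomalies relative to the station's own monthly means
    # (the 'normals') over the base period.
    normals = []
    for m in range(12):
        vals = [station_temps[y][m] for y in range(base[0], base[1] + 1)
                if y in station_temps and station_temps[y][m] is not None]
        normals.append(sum(vals) / len(vals) if vals else None)
    anom = {}
    for y, months in station_temps.items():
        anom[y] = [t - n if (t is not None and n is not None) else None
                   for t, n in zip(months, normals)]
    return anom

def global_series(stations):
    # stations: list of (lat, lon, {year: [12 anomalies]}).
    # Average station anomalies into 5x5 degree cells, then form an
    # area-weighted (cosine of latitude) global yearly mean.
    yearly = defaultdict(lambda: defaultdict(list))
    for lat, lon, anom in stations:
        cell = (int(lat // 5), int(lon // 5))
        for y, months in anom.items():
            vals = [a for a in months if a is not None]
            if vals:
                yearly[y][cell].append(sum(vals) / len(vals))
    series = {}
    for y, cells in yearly.items():
        num = den = 0.0
        for (clat, clon), vals in cells.items():
            w = math.cos(math.radians(clat * 5 + 2.5))  # cell-centre latitude
            num += w * sum(vals) / len(vals)
            den += w
        series[y] = num / den if den else None
    return series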

There's a helpful README file that comes with the code, which contains an important caveat: "The exploratory nature of this code is such that it does not follow design rules or optimisation as would be appropriate for production code and there is no comprehensive exception handling."

And, sure enough, the code is a bit messy and doesn't follow some good C++ practices. The comments present in the code tend to be poor, there are entire classes implemented inside .h files (which is mentioned with the cryptic comment "The code is purposely not factorised into normal .cxx and .h files"), and there are a few #defines where a const would have been better. And there's an almost random handling of the special -9999 code for missing data, where the code checks for -9990, -9999 or -1000 seemingly on the whim of the coder.

But, having said that, it wouldn't be a big step to go from this to something quite robust. The first step would be to write tests for the code so that its operation could be verified at a functional level. Note that I've compiled with maximum warnings and none are produced.
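Such a test could be as simple as pinning the program's output on a tiny fixture. The file names and the command line below are invented for this sketch (the real program's invocation may differ):

import subprocess

def test_known_input():
    # Run the compiled ghcn on a small hand-made GHCN v2 fixture and
    # compare against output that was captured once and checked by
    # hand. 'fixture.v2' and 'expected.txt' are made-up names.
    out = subprocess.check_output(['./ghcn', 'fixture.v2'])
    expected = open('expected.txt', 'rb').read()
    assert out == expected, 'ghcn output changed'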

The only thing I saw that was obviously anomalous was that the code uses fifteen values in the date range 1961 to 1990 for the normal calculation, whereas the ICCER report said they used ten values from 1955 to 1995. Not that that'd make much difference.

The program fills an array with the yearly temperature trend. Here's what that looks like when charted: