Tuesday, August 25, 2009

How to trick Apple Numbers into colouring individual bars on a bar chart

Suppose you have a bar chart like this:

and it's made from a data table like this:

And you are really proud of the sales of the XP1000 and want to change the colour of its bar to red. In Apple Numbers you can't do that because the bar colour is based on the data series.

But you can fool Apple Numbers by creating two data series like this:

Then choose a Stacking Bar chart after selecting the two series of data in the data table and you'll get a chart like this:

You can change the colour of any of the series by clicking on the Fill button on the toolbar. And you can extend that beyond two series to colour the individual bars as needed.

Letter to Her Majesty The Queen

Her Majesty The Queen
Buckingham Palace
London SW1A 1AA

August 25, 2009


I write to ask Your Majesty to consider awarding a posthumous knighthood to the British mathematician, code-breaker and computer scientist Alan Turing.

Alan Turing was born in 1912 in London and in 1935 became a Fellow of King’s College, Cambridge. One year later he published a mathematical paper that is the foundation of all of computer science. In the paper he proposed a machine, which we now call a Turing Machine, that is the basis for all computers; the machine on which I write this letter to Your Majesty follows Turing’s rules.

Alan Turing went on to work at Bletchley Park during the Second World War, where he was instrumental in breaking Nazi German codes including Enigma, and is credited with shortening the war by a number of years. After the war Turing worked in Manchester on the birth of computers as we know them.

The United States-based Association for Computing Machinery has been giving an award in Turing’s name since 1966, and he was awarded the OBE in 1945 for his secret wartime work.

But Alan Turing’s life ended in tragedy: after being prosecuted for ‘gross indecency’ (Alan Turing was a homosexual) he was forced to have estrogen injections, and he committed suicide. On that day in 1954, at age 41, Great Britain lost one of its greatest minds.

Since then Great Britain has done little to honour him. A section of road in Manchester is named after him, and a blue plaque is fixed to the wall of his former home.

I write today to Your Majesty to ask that Alan Turing be honoured with a posthumous knighthood that recognizes what a great man he was; there is no doubt in my mind that if Turing had lived past age 41 his international impact would have been great and that he likely would have received a knighthood while alive.

I have the honour to be, Madam, Your Majesty’s humble and obedient subject,

Dr John Graham-Cumming, MA (Oxon), DPhil (Oxon)

Alan Turing petition nears 5,000 signatories

My Alan Turing petition has rocketed past the 500 signatures I thought I might get to reach almost 5,000. At the rate people are signing I'd expect 5,000 to be reached either today or tomorrow.

The campaign has got quite a bit of media coverage. Here's a round-up:


Daily Telegraph: Britain should apologize for the shameful way it treated Alan Turing.

The Independent: The Turing Enigma: Campaigners demand pardon for mathematics genius and Dawkins calls for apology for Turing.

Manchester Evening News: Campaign to win an official apology for Alan Turing and Gay backing for Turing apology.

Belfast Telegraph: Prosecuted for being gay: campaigners demand pardon for genius Alan Turing and Dawkins calls for official apology for Alan Turing.


PRI/BBC World Service: Apology campaign for British Nazi code breaker.

BBC Radio Ulster: Sunday Sequence.


Channel 4 News: Pardon for Enigma codebreaker, Alan Turing?

Note to gay readers: I've deliberately excluded the specialist gay news outlets from that list so that people realize that this is not simply a gay issue. Please don't think I'm ignoring you and your great coverage!

Tuesday, August 18, 2009

The Gay Agenda

One of the adverse effects of my Alan Turing Petition is that some commentators see it as part of a 'gay agenda'. Here's a comment from someone:

This is sad. Turing is being used by sectors of the UK gay lobby as a political wedge to bash an already-weak Labour government. Despite the good intentions, we all know that the newsbite would be "Brown apologises to gay war hero". That is wrong on so many levels.

I've seen other similar quotes implying that what's behind the petition is a 'gay agenda' or some local government trying to be PC. All there is behind this 'campaign' (that's the newspapers' word, not mine) is one person: me.

Last night while talking on BBC Radio Manchester the interviewer asked me a question about why Alan Turing isn't better known in the gay community. It was then that I had to admit (since I wasn't planning to talk about sexuality) that I'm not gay.

That probably comes as a shock to some people who don't understand that my petition isn't motivated by a hidden agenda. I think Alan Turing's treatment was appalling. I think we lost a great, great man when he died at 41 who had much more to contribute and I think that Britain has not adequately recognized this great man, or the manner of his decline. If he hadn't died so young we probably would have knighted him and celebrated his genius.

I do not expect that the British Government will apologize. They are damned if they do because they'll really need to apologize to all the other men prosecuted for gross indecency and then there's probably a list of other nasty things in the past that people could ask for an apology for.

But if they do want to honour Alan Turing (and others who were prosecuted), then I suggest that they fund Bletchley Park and The National Museum of Computing in his and their honour.

Monday, August 17, 2009

The Alan M. Turing Endowment

I've done a few newspaper and radio interviews about my Alan Turing petition and I've been asked a couple of times how best to honour Turing's memory. Since I don't really expect the British Government to apologize, I have an alternative suggestion.

Currently, Bletchley Park (where Turing broke the Nazi Enigma code) and The National Museum of Computing (which is inside the Bletchley Park grounds) receive no government money for their upkeep.

Creating an endowment to keep these two organizations going would be an appropriate way to honour Alan Turing. Given his contribution to both, it would be fitting that in one place we could talk about his computing work and his code breaking work.

And it's about time the British Government stumped up to help pay for the upkeep of these two important treasures. Without Bletchley Park and Alan Turing I'd likely be writing this in German :-)

And the computer I write this post on follows the rules that Turing laid down.

Sunday, August 16, 2009

Geek Weekend, Day 2: The Brunel Museum

So after yesterday's trip to Bletchley Park I stayed in London and hopped over to a spot not far from Tower Bridge where Marc Brunel and his son Isambard built the first tunnel under a navigable river: the Thames Tunnel. The tunnel was dug out by hand using a tunnel shield (which is the basis of all tunnel building to the present day). Workers stood inside a metal cage pressed against the undug earth and removed boards, dug in a few inches and replaced the boards. Once the digging was done the entire structure was forced forwards a few centimeters and bricklayers would fill in behind.

The tunnel has a rich and varied history and is still in use today (read the Wikipedia link above to learn more). The entrance to the tunnel was through a massive circular tube (a caisson) which the Brunels built above ground and then sank into place. The entrance has been closed for about 140 years and is being renovated, but I was lucky enough to be taken into it by the curator of the Brunel Museum.

The museum displays works by the Brunels and runs tours through the tunnel itself. The grand entrance hall will be reopened to the public in September. Before that here's a shot of me standing in the interior of the entrance about 15 meters underground.

Image credit: Jonathan Histed

The diagonal line on the wall is the remains of where the grand staircase came down and brought visitors into the tunnel.

Saturday, August 15, 2009

Geek Weekend, Day 1: Bletchley Park

Left to my own devices for the weekend I decided to embark on a Geek Weekend with visits to two places within easy reach of London. Today I visited Bletchley Park, which is simply wonderful for any geek out there.

Bletchley Park is where the cryptanalysts of the Second World War worked in great secrecy (including Alan Turing) to break the Nazi German Enigma and Lorenz ciphers. To break them they used a combination of intimate knowledge of language, mathematics and machines.

Here's a Nazi German Enigma machine:

And here's a look inside one of the rotors inside an Enigma machine to see the wiring:

Two of the code breaking machines have been reconstructed. One is the Turing Bombe, an electromechanical machine made to break the Enigma cipher. Here's a look at the wiring in the back of the Bombe:

The other machine is the Colossus, a binary computer built to decipher Lorenz. Enigma is far more famous than Lorenz, but I have a soft spot for the Lorenz code because of its close relationship to modern cryptography. Here's a Lorenz machine:

While I was there I signed a large stack of copies of my book, The Geek Atlas. If you are at Bletchley Park and pop into the shop you'll be able to buy a signed copy if that's your thing. Of course, Bletchley Park, Enigma, Lorenz and the National Museum of Computing (also on site) are covered.

50p from every copy of The Geek Atlas goes to Bletchley Park (if the book is bought in the UK) and so the folks at Bletchley treated me to a special geek moment: a chance to meet Tony Sale, who worked at MI5 and reconstructed the Lorenz-breaking machine Colossus. He took me round the back of the machine, past the No Admittance sign, to see it in operation. A geek treat if ever there was one.

The Lorenz code is essentially binary. Letters were transmitted using the Baudot Code, which is a five-bit code. To encrypt, the Lorenz machine created a pseudo-random sequence of Baudot codes and XORed them with the message to be transmitted. If both the transmitting and receiving machines generated the same pseudo-random sequence, you could exploit the nice property of XOR that performing the same operation twice gets you back to where you started. XORing once with the pseudo-random sequence gave you the ciphertext to be transmitted; XORing again gave you back the original message.
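That round trip is easy to sketch. Here's a minimal illustration of the XOR property; the 5-bit values and the seeded generator are stand-ins for real Baudot codes and the Lorenz wheels, not the actual cipher:

```python
import random

def lorenz_like(codes, keystream):
    """XOR each 5-bit code with the matching keystream value."""
    return [c ^ k for c, k in zip(codes, keystream)]

message = [0b10101, 0b00111, 0b11000, 0b00001]    # four 5-bit "Baudot" codes
rng = random.Random(42)                            # both ends seed identically
keystream = [rng.randrange(32) for _ in message]   # pseudo-random 5-bit values

ciphertext = lorenz_like(message, keystream)       # XOR once: encrypt
recovered = lorenz_like(ciphertext, keystream)     # XOR again: decrypt

assert recovered == message                        # back where we started
```

The whole scheme stands or falls on both machines producing the identical keystream, which is exactly what the Bletchley Park attacks exploited.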

Breaking this binary code was achieved with a binary computer. After giving me a behind-the-scenes look at Colossus, Tony Sale stood for a portrait in front of the machine:

And behind the machine is where the valve-action is:

Standing and seeing the Turing Bombe, staring into Turing's office in Hut 8, being taken around the back of Colossus by the man who put it back together, and getting to see more Enigma, Lorenz and Typex machines than anyone could ask for made it a real treat.

The National Museum of Computing is Britain's answer to the wonderful Computer History Museum in Mountain View, CA. It contains many machines from the mainframe era through the 8-bit era of British computing. All the machines are working or being restored. If you've never seen core memory, massive hard disk packs the size of washing machines, or just a Commodore PET, it's worth visiting (and it's right next door to Colossus).

Lastly, it's worth knowing that the National Museum of Computing, despite being part of the ticket price to Bletchley Park, actually receives no money from them. Please consider either donating money directly to them (I gladly emptied my pockets of change) or buying something in their shop.

And tomorrow it's a step back into the 19th century with a special visit to a place important in the life of Isambard Kingdom Brunel.

Friday, August 14, 2009

Slowly coming out of stealth mode

For some time I've been working at a new start-up in London. We've been doing a bunch of work to get a product out the door (probably by the end of the year), but one of the things we decided to do was spin out some bits of technology as open source projects. The first thing to come out of the door is a new JavaScript-based web site tagging technology called jsHub.org.

If you look at any web site today you'll see multiple 'tags' on the web site which range from web bugs used for advertising to large pieces of JavaScript used for web site analytics. You can download programs like WASP and Ghostery to see what's on a page. If you use them on my page you'll see that I'm using Google Analytics.

The problem is that as more and more products are added to web sites for analytics, advertising, optimization, A/B testing, etc. the number of different tags on the pages explodes. This leads to long load times and the silly situation where different pieces of tagging code are trying to access the same metadata about the page which leads to inevitable inconsistencies.

jsHub.org is designed to get around these problems by implementing a single tag on the page that is capable of talking to all the different products, distributing the page's metadata to each of the products the web site needs.

Everything about jsHub.org is open source. We've created a separate non-profit company and given all the IP to it. We are releasing it under the BSD license and the standards we are proposing are public domain.

Our goal is to make web site tagging lighter, more consistent and more professional (we have complete automated test suites for our code).

As part of making things open we are also releasing a tool that allows anyone, web site developer or member of the public, to examine the use of jsHub.org on any web page. The tag inspector allows you to see exactly what information is being gathered and where it's going.

The Conservative Party has a prioritization problem

Anyone who's worked in software will be familiar with the ever-changing list of things that are the number one priority on a project. Just today David Cameron, leader of the British Conservative Party, said that the NHS is the party's number one priority.

A quick search of the party web site shows that they have a number of other number one priorities:

1. NHS

2. Education Reform

3. Economic Stability

4. Job Security

I'd hate to see what's on the priority 2 list.

Tuesday, August 11, 2009

Regular expressions are hard, let's go shopping

After looking at a Tweet from Charles Arthur of The Guardian I decided to hunt down his blog. I typed "Charles Arthur" into Google and the first link was to his blog.

But there was something strange about it. All the letter t's following an apostrophe were highlighted. Here's a screen shot:

Yet, if I typed the exact same URL into Firefox the highlighted t's were not there. Odd. Since the URL was the same, this had to be something in the HTTP headers sent when I was clicking through from Google.

I fired up HTTPFox and watched the transaction. Here's a screen shot of the HTTP headers of the GET request for his page. The interesting thing to look at is the Referer header.

It immediately jumped out to me that one of the parameters was aq=t. Looked to me like something on his blog was reading that parameter and using it to highlight. Poking around I discovered that his site is written using WordPress and there's a plugin for WordPress (that hasn't been updated for years) that's intended to highlight search terms when the visitor comes from a search engine.

Looking into the source of his web page it looked from the CSS like he was using that plugin. So I downloaded the source of the plugin and took a look. There's a bug in the way in which it extracts the query parameters from the Referer header for Google.

Here's the code:

$query_terms = preg_replace('/^.*q=([^&]+)&?.*$/i', '$1', $referer); // $referer: the Referer header string

That regular expression is buggy. It's looking for the right-most instance of a string that begins q= followed by anything other than the & symbol or the end of the Referer header. It's getting the right-most because the ^.* at the beginning means skip over anything from the start of the Referer header until you find q= and be greedy about it: skip over as much stuff as possible.

In the Referer string there are two parameters containing q=. The first one (q=) is the correct one; the second is aq=. Since the regular expression isn't written to check that there's a ? or & before the q=, it grabs the wrong one.
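You can watch the greedy match pick the wrong parameter with a quick sketch (shown here in Python, whose regexes behave the same way as PHP's PCRE for this pattern; the example URL is simplified):

```python
import re

# A simplified Google referer with both q= and aq= parameters.
referer = ("http://www.google.co.uk/search?"
           "q=%22charles+arthur%22&ie=utf-8&aq=t")

# The plugin's pattern: the greedy ^.* skips right past the real q=
# and lands on the right-most match, the tail of 'aq=t'.
terms = re.sub(r'^.*q=([^&]+)&?.*$', r'\1', referer)
print(terms)  # 't'
```

That stray 't' is exactly why every t after an apostrophe was being highlighted on the page.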

I did a bunch of tests with wget to confirm that I'm right. It's a bug.

The aq=t parameter was added in 2006, here are the details. It's only present when you use the Firefox Google search box. Unfortunately, the plugin hasn't been updated since 2005.

It can be fixed by changing that line above to:

$query_terms = preg_replace('/^.*[\?&]q=([^&]+)&?.*$/i', '$1', $referer); // $referer: the Referer header string

But the right thing to do here is to rewrite this so that it doesn't use regular expressions at all. After all, PHP has parse_url and parse_str functions that can do all the URL and query string parsing for you.
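The same parse-don't-match idea, sketched in Python with urllib.parse (an illustration of the approach, not the plugin's code):

```python
from urllib.parse import urlparse, parse_qs

referer = "http://www.google.co.uk/search?q=%22charles+arthur%22&ie=utf-8&aq=t"

# A real query-string parser keys parameters by name,
# so q= and aq= can never be confused.
query = parse_qs(urlparse(referer).query)
terms = query.get('q', [''])[0]

print(terms)  # "charles arthur" (decoded, with the quote marks)
```

With a proper parser the percent-encoding and + signs get decoded for free as well, which the regex approach never handled.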

Monday, August 10, 2009

Gifts to your future self

In my last blog post I took a look at the contents of a 13 year old archive of my doctoral work that was stored on a DOS formatted floppy disk.

My thesis was interesting, but more interesting were the READ.ME files that I'd scattered around the disk pointing out important information about what was stored on the disk, and how to interpret it.

These notes were gifts to my future self. In 1996 I left these READ.ME files knowing that sometime in the future I would open this disk and not know what to do with the contents.

A couple of these files were substitutes for symlinks. Since I was taking the files from a SunOS system and writing them to a DOS floppy in a ZIP file, I lost two important things: symlinks and long file names.

In one directory a file called MACROS contains the text: Should point to ..\THESIS\MACROS. Copying to a Mac OS X system and fixing the symlinks made things work swimmingly.

Another READ.ME file explained how I had shortened some file names to fit into the 8.3 naming scheme:

TBD.TEX = thesis.bibliography.definition.tex
TD.TEX = thesis.definition.tex
TDUTOFT = thesis.definition.use.this.one.for.thesis

These short comments were all that I needed to get my thesis up and running in LaTeX again.

This is pretty much the strategy that I apply to code comments: those comments should point out the things you are going to forget and are going to need to know in the future. Don't comment things that can be easily derived from the code. But bear in mind that you are writing comments for someone who is unfamiliar with the code base: you in a few days, weeks, months or years time.

In which I resurrect a 13 year old 3.5" floppy disk and reprint my doctoral thesis

This is a follow up to a post from the weekend about playing with my old Sharp MZ-80K. Someone commented that they'd be more impressed if I resurrected a 15 year old floppy disk than a 30 year old cassette tape.

I don't have a 15 year old floppy disk to hand, but I do have this one that's 13 years old and according to the label contains a copy of my doctoral thesis. The disk was created in 1996 and the files on it date to 1994 for my doctoral thesis which I completed in 1992.

But would it still read?

The first step was finding a drive. I had an old-ish 3.5" USB floppy drive kicking around, so I plugged it into my MacBook Air and fired up Windows XP under VMWare. It happily recognized the drive and magically loaded up the floppy disk:

The disk contains a single ZIP file called oxford.zip. Unzipping it and poking around in the directories reveals that it contains my thesis, all the papers I wrote as a doctoral student, my CV and helpful READ.ME files: a gift to my future self.

That's all well and good, but are any of these files usable? Can I take the LaTeX based source files and produce a copy of my thesis? Or can I take the DVI file that I had saved and make that into a PDF?

A quick copy over to the main Mac machine and a download of LaTeX later I had a working LaTeX system again and all the files.

So to get started I grabbed the DVI file of my thesis and ran it through dvipdf. Apart from complaining about various missing fonts it produced a totally readable PDF file and suddenly I was staring at my thesis. You can download the PDF by clicking on: The Formal Development of Secure Systems. Here's a sample page (the code at the bottom is written in Occam):

But it's not enough to stop at a DVI file, what I wanted was to compile from sources. My first test was to start with something small: my CV. Magically, that worked:

And so on to my thesis. I'm not going to show all that I went through, but it worked after I'd got things in the right directories and tracked down a couple of additional style files.

BTW Does anyone have a Research Machines 380Z with working 8" drives? I have a couple of my really old floppies that it would be fun to read.

Sunday, August 09, 2009

My Alan Turing petition hit the magic 500 signatures mark

My petition just hit 500 signatures which means that I will be getting a response from the government about it.


In which I switch on a 30 year old computer and it just works

Yesterday, I had the pleasure of visiting my parents and getting out an old computer. One of the first computers I used a lot was the Sharp MZ-80K which was sold from 1979 to 1982. I was but a wee bairn, but this is the first machine I really programmed. First using BASIC and then using Z-80 assembler (and sometimes by typing in characters directly on the screen corresponding to Z-80 opcodes and then calling the address of the start of screen memory to have the program on screen executed).

My parents have a Sharp MZ-80K that I purchased as a nostalgia item some years ago. Yesterday I fired it up for the first time and was straight into the boot ROM. Oddly, I could remember everything about the machine's operation and shoved a cassette tape containing SP-BASIC into the tape drive, hit play and typed LOAD.

The machine duly loaded SP-BASIC and gave me the prompt.

Then I did the real test. After poking around and finding a tape of my old BASIC programs I typed LOAD again and explored the tape. 30 years on all the programs loaded from tape just fine and executed. I was able to spend a happy few hours playing character-based games that I wrote.

Here's a screen shot of the listing of one such game: notice the J.G.C. initials at the start. This was one of the few programs I put my name in (I think because it was clearly co-written with A.S.).

I put the survival of that tape down to two things: my parents' careful handling of a box of Sharp MZ-80K and BBC Micro tapes, and my obsession at the time with buying the highest quality CrO2 cassette tapes available: "It is still considered today by many oxide and tape manufacturers to have been the most perfect magnetic recording particulate ever invented."

Friday, August 07, 2009

What's on a baggage tag?

Very recently Alitalia managed to lose all my luggage twice in one week. Of course, when I say lose I simply mean delay. They did in the end (after days of delay) get it all back to me.

But it got me wondering about how baggage is tracked. I mentioned this in the office and a colleague said that he had heard from a guy working on RFID tagging of airline baggage that there was a little secret: the current bag numbers are too short for the volume of bags being handled.

Naturally, I decided to try to find out whether this was true.

Firstly, from a technical standpoint the numbers on baggage tags (called a 'license plate' in industry parlance) are 10 digit base-10 numbers. There are three separate fields: a three digit airline identifier (the numeric version of a code like BA or AA; it does not include the flight number), a one digit field for the airline's own use, and a six digit serial number. All of this is specified in IATA Resolution 740.

Here's an example given by the airline Malev:

Item number 1 there is the baggage tag number (MA759235). It shows the airline identifier as letters (in this case MA rather than the three digit IATA code, which would be 182), followed by the six digit bag serial number.

At the bottom, under the bar code, is the actual 10 digit IATA number: it starts with 0 (the single digit the airline gets to use for its own purposes), then there's the airline code (182) and then the serial number: 0182759235.
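Given that field layout, pulling a tag number apart is a few lines (a sketch; the function and field names are mine, not IATA's):

```python
def parse_license_plate(tag):
    """Split a 10 digit IATA baggage 'license plate' into its three fields."""
    assert len(tag) == 10 and tag.isdigit()
    airline_use = tag[0]       # one digit reserved for the airline's own use
    airline_code = tag[1:4]    # three digit numeric airline identifier
    serial = tag[4:]           # six digit bag serial number
    return airline_use, airline_code, serial

print(parse_license_plate("0182759235"))  # ('0', '182', '759235')
```

Running it on the Malev example above recovers the 182 airline code and the 759235 serial printed on the tag.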

The six digit number identifies the bag. With six digits the airline can have up to 1 million unique bags in their systems at any one time. So I instantly wondered how long it would be before an airline would reuse a bag number.

Wikipedia tells me that the largest number of passengers carried in 2007 was by Southwest Airlines with over 100m passengers. Just behind was American Airlines with 98m.

Making some wild assumptions (each passenger checks a single bag and the passengers are evenly distributed across the year), that says that Southwest Airlines and American Airlines each handle 1m bags roughly every 4 days.
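The back-of-the-envelope arithmetic, under those same wild assumptions:

```python
bags_per_year = 100_000_000      # one bag per passenger, 100m passengers
serial_space = 10 ** 6           # six decimal digits of serial number

bags_per_day = bags_per_year / 365
days_to_wrap = serial_space / bags_per_day

print(round(days_to_wrap, 2))    # 3.65 -- under 4 days before numbers repeat
```

So at that volume the six digit serial space wraps around in well under a week.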

So then I wondered: what happens if my bag gets lost and stays lost for too long? Does my number get reused? Does this have an effect on me getting it back?

For that I decided to go the traditional route and talk to the people who know: IATA, BATA and the airlines. I prepared an email with a sequence of questions around the story, introduced myself and this blog and sent off messages. I also considered buying the complete IATA regulations, but they are rather expensive.

The IATA and American Airlines simply did not reply. When I tried to register for access to the IATA web site my access was denied. British Airways kindly replied and told me that they couldn't comment on an 'industry-wide issue'. The British Air Transport Association replied and said they were too small to answer my questions which really needed to go to IATA.

The Civil Aviation Authority replied and told me that I was asking about a 'security issue' (oddly) and that I needed to ask the UK Department of Transport. They didn't reply.

At this point I realized why people get paid at newspapers: it takes time and money to investigate things. So, to save my time and money I mentioned this to someone I know who works at The Times in the UK. She took the bull by the horns and got the IATA on the phone.

To cut a long story short, the IATA claims that this isn't a problem for two reasons: the single digit field can be used to turn 1m bag numbers into 10m (this is up to the airlines to implement, and it wasn't clear who is implementing it), and reuse isn't a problem because if it does occur the system records the information about both people and flights using the same bag number (it was unclear from IATA exactly how long these records are held onto: they appeared to say six days).

But I'm not really satisfied. It would be nice to really know. Does number reuse occur in practice? And does it have an effect on the handling of delayed baggage?

Anyone out there work for an airline and want to give me the inside scoop?

Wednesday, August 05, 2009

Unmarked surveillance vehicles in Central London

I see these vehicles all the time. Today two were parked near my office (the drivers appeared to be having lunch and a chat):

However, they give me the creeps for two reasons: they are totally unmarked and they are doing automatic number plate recognition.

Since they are unmarked it's impossible to tell who's controlling them. Are they police vehicles looking for evil-doers, or are they from the local council looking for people breaking parking laws? No idea (I think it's the latter).

ANPR creeps me out because it's part of the larger surveillance state that's evident since I returned to the UK.

The four cameras on the vehicles are marked with "PIPS Technology / A Federal Signal Company" and that appears to refer to these guys. As a geek I find the ability to spot and read multiple number plates while traveling at a relative velocity of 155 MPH very neat.

I'd just rather you didn't do that. Or if you do that please tell me why and where the data is going.

The world's simplest log file

Back when I was doing embedded programming we had a debugging feature called 'pokeouts'. The idea was that the program could write a single character to the screen when some important event occurred.

Now writing single characters to the screen might not seem like a good way to do debugging. After all, these days we've got tons of disk space and can spew log files out and our CPUs are not burdened. But in embedded programming you tend to have little space and little time.

This system worked by having code that resembled the following:

pokeout: MOV AH, 0Eh   ; BIOS teletype output; the character to print is in AL
         INT 10h

And to make things even easier we actually implemented this by hooking an interrupt so that programs didn't need to know whether the pokeout facility was available. They could safely do INT xxh for debugging output. This meant that the logging facility could be loaded onto a running system.

It's amazing how much information you can convey with single characters scrolling across the screen. It's easy to log critical area entry and exit (we used lots of combinations of ( ) [ ] { } < > followed by single characters to identify the area). You can build on that to output hexadecimal numbers easily. And individual events can be hooked to individual characters.

I'd spend my days looking at these scrolling screens of characters waiting for a program to crash. Everything was on the screen.

For some high-speed systems the screen pokeout was too slow. We replaced the routine above with code that wrote the pokeouts into a circular buffer in memory. When the program finally crashed the buffer could be examined using a high-powered debugger like SoftICE to give us a trace of the program's final moments. It was the equivalent of an aircraft's 'black box'.
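That circular 'black box' buffer is simple to sketch; here's the idea in Python rather than the original assembler (names and sizes are mine):

```python
class PokeoutBuffer:
    """Keep only the last `size` single-character log events."""
    def __init__(self, size):
        self.buf = [''] * size
        self.pos = 0
        self.wrapped = False

    def poke(self, ch):
        """Record one event, overwriting the oldest once the buffer is full."""
        self.buf[self.pos] = ch
        self.pos = (self.pos + 1) % len(self.buf)
        if self.pos == 0:
            self.wrapped = True

    def trace(self):
        """The surviving events, oldest first: the program's final moments."""
        if not self.wrapped:
            return ''.join(self.buf[:self.pos])
        return ''.join(self.buf[self.pos:]) + ''.join(self.buf[:self.pos])

log = PokeoutBuffer(8)
for ch in "([a]b){c}!":        # ten events into an eight-slot buffer
    log.poke(ch)
print(log.trace())             # 'a]b){c}!' -- the two oldest events are gone
```

After a crash you read the buffer starting just past the write position, exactly as we did with the debugger staring at raw memory.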

Just give me a simple CPU and a few I/O ports

Back when I started programming computers came with circuit diagrams and listings of their firmware. The early machines I used like the Sharp MZ-80K, the BBC Micro Model B, the Apple ][ and so on had limited instruction sets and an 'operating system' that was simple enough to comprehend if you understood assembly language. In fact, you really wanted to understand assembly language to get the most out of these machines.

Later I started doing embedded programming. I wrote a TCP/IP stack that ran on an embedded processor inside a network adapter card. Again it was possible to understand everything that was happening in that piece of hardware.

But along the way Moore's Law overtook me. The unending doubling in speed and capacity of machines means that my ability to understand the operation of the computers around me (including my phone) has long since been surpassed. There is simply too much going on.

And it's a personal tragedy. As computers have increased in complexity my enjoyment of them has plummeted. Since I can no longer understand the computer I am forced to spend my days in the lonely struggle against an implacable and yet deterministic foe: another man's APIs.

The worst thing about APIs is that you know that someone else created them, so your struggle to get the computer to do something is futile. This is made worse by closed source software, where you are forced to rely on documentation.

Of course, back in my rose tinted past someone else had made the machine and the BIOS, but they'd been good enough to tell you exactly how it worked and it was small enough to comprehend.

I was reminded of all this reading the description of the Apollo Guidance Computer. The AGC had the equivalent of just over 67KB of operating system in ROM and just over 4KB of RAM. And that was enough to put 12 men on the moon.

Even more interesting is how individuals were able to write the software for it: "Don was responsible for the LM P60's (Lunar Descent), while I was responsible for the LM P40's (which were) all other LM powered flight". Two men were able to write all that code and understand its operation.

12 men went to the moon using an understandable computer, and I sit before an unfathomable machine.

Luckily, there are fun bits of hardware still around. My next projects are going to use the Arduino.

Tuesday, August 04, 2009

My Alan Turing petition

A while back I wrote about the appalling treatment of Alan Turing and suggested that the UK government should apologize. Someone suggested that I turn this into a petition to the UK government.

That petition has now been approved.

If 500 people sign it there will eventually be a response from the government to the petition. If you are a British citizen and wish to sign the petition you can do so on the Number 10 web site here.

Monday, August 03, 2009

Please don't use pie charts

I don't like pie charts. I don't like them because they fail to convey information. They do that because people have a really hard time judging relative areas instead of lengths. Wikipedia mentions some of the reasons why pie charts are generally poor.

I'd go a little further and say that pie charts are really only useful when a small number of categories of data are far, far greater than others. Like this image from Wikipedia of the English-speaking peoples:

Yep, there are lots of Americans.

Once you get data that isn't widely different, or you have lots of categories, your pie chart would be better as either a bar chart or simply a data table. Here's a particularly bad pie chart from a blog about Microsoft Office. It depicts the number of features added in various releases.

Literally everything is wrong with this pie chart. The data being presented is the number of features added per release. Releases occur chronologically. So an obvious choice would be a bar chart, or a line chart for cumulative information, with time going from left to right. Instead we have to follow the chart around clockwise (finding the right starting point) to follow time.

And since the releases didn't come out at equal intervals it would be really nice to compare the number of features added with the amount of time between releases.

The pie chart has no values on it at all. We don't get the actual number of features, or just the percentage added. So we are left staring at the chart trying to guess the relative sizes of the slices. And that's made extra hard by the chart being in 3D. For example, how do Word 2000 and Word 2003 compare?

But if you still must use pie charts, I beg you not to use 3D pie charts. Please, they are simply an abomination. Making them 3D just makes them even harder to interpret.

Network Solutions renames their services for added obscurity

I logged into my Network Solutions account this morning for a bit of Monday domain name management to be treated to a page which contained the following:

You see, Network Solutions has decided that the service called "Domain" was much too obscure and difficult to understand and so it's much clearer if we now call it "nsWebAddress". Huh?

Also, "Web Site" was so obscure that it was better changed to "nsSpace" or "nsBusinessSpace". And you know those "SSL Certificates"? Well, that was way too confusing, so let's call them "nsProtect".

I'm sure that someone in Network Solutions' marketing department got really excited about all these changes. I'm also betting that they don't actually use their own product.

The best part comes when you actually run the gauntlet of offers and get to your account. Obviously "nsWebAddress" was so crystal clear that they felt the need to put "(Domains)" after it. Pure, pure genius.