Monday, December 17, 2012

Speeding up HTTP with minimal protocol changes

As SPDY works its way through IETF ratification I began wondering whether it was really necessary to add a complex, binary protocol to the HTTP suite to improve HTTP performance. One of the main things that SPDY sets out to fix is defined in the opening paragraph of the SPDY proposal:
One of the bottlenecks of HTTP implementations is that HTTP relies on multiple connections for concurrency.  This causes several problems, including additional round trips for connection setup, slow-start delays, and connection rationing by the client, where it tries to avoid opening too many connections to any single server.  HTTP pipelining helps some, but only achieves partial multiplexing.  In addition, pipelining has proven non-deployable in existing browsers due to intermediary interference.
The solution to this problem (as currently proposed) is SPDY. But I couldn't help thinking that solving the multiplexing problem could be done in a simpler manner within HTTP itself. And so here is a partial proposal that involves adding two new headers to existing HTTP and nothing more.

1.  Overview

   HMURR (pronounced 'hammer') introduces a new pipelining mechanism
   with explicit identifiers used to match requests and responses sent
   on the same TCP connection so that out-of-order responses are
   possible. The current HTTP 1.1 pipelining mechanism requires that
   responses be returned in the same order as requests are made (FIFO)
   which itself introduces a head-of-line blocking problem.

   In addition, HTTP 1.1 pipelining does not allow responses to be
   interleaved. When a response is transmitted the entire response
   must be sent before a later response can be transmitted. HMURR
   introduces a chunking mechanism that allows partial responses to be
   sent. This enables multiple responses to be interleaved on a single
   connection preventing a long response from starving out shorter
   ones.

   HMURR attempts to preserve the existing semantics of HTTP.  All
   features such as cookies, ETags, Vary headers, Content-Encoding
   negotiations, etc. work as they do with HTTP; HMURR simply
   introduces an explicit multiplexing mechanism.

   HMURR introduces two new HTTP headers: one header that is used for
   requests and responses and one that is only present in
   responses. No changes are made to other HTTP headers or HTTP
   responses.

2. HTTP Version

   It is intended that HMURR be a modification to the existing HTTP
   standard RFC 2616 and requires a higher HTTP version number. Either
   HTTP 1.2 or HTTP 2.0 would be suitable.

3. HMURR Operation

3.1. Pipelining

   A client that supports persistent connections MAY "pipeline" its
   requests (i.e., send multiple requests without waiting for each
   response). Each request must contain a Request-ID header specifying a
   unique identifier used by the client to identify the request. When
   responding to a request the server will echo the Request-ID header
   with the same value so that the client can match requests and
   responses. This mechanism allows HTTP responses to be returned in any
   order.

   Clients which assume persistent connections and pipeline immediately
   after connection establishment SHOULD be prepared to retry their
   connection if the first pipelined attempt fails. If a client does
   such a retry, it MUST NOT pipeline before it knows the connection is
   persistent. Clients MUST also be prepared to resend their requests if
   the server closes the connection before sending all of the
   corresponding responses.

   Clients SHOULD NOT pipeline requests using non-idempotent methods or
   non-idempotent sequences of methods (see section 9.1.2 of
   RFC2616). Otherwise, a premature termination of the transport
   connection could lead to indeterminate results. A client wishing to
   send a non-idempotent request SHOULD wait to send that request until
   it has received the response status for all previous outstanding
   requests made in the pipeline.
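   As a concrete illustration, here is a minimal Python sketch of the
   client side of this pipelining scheme. It only builds the request
   bytes; the Request-ID header name comes from section 4.1 and the
   HTTP/1.2 version string is one of the options from section 2.

```python
# Sketch of a client issuing pipelined HMURR requests.
# The Request-ID header is defined in section 4.1; HTTP/1.2 is one
# of the candidate version numbers from section 2.

def build_request(method, path, host, request_id):
    """Serialize a single pipelined request with its Request-ID."""
    return (f"{method} {path} HTTP/1.2\r\n"
            f"Host: {host}\r\n"
            f"Request-ID: {request_id}\r\n"
            "\r\n")

# Pipelining is just concatenation: send all requests without
# waiting for any response.
pipeline = "".join(
    build_request("GET", path, "example.com", rid)
    for path, rid in [("/header.jpg", "a1"), ("/favicon.ico", "b2")])

print(pipeline)
```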

3.2. Multiplexed responses

   A server may choose to break a response into parts so that a large
   response does not consume the entire TCP connection. This allows
   multiple responses to be returned without any one waiting for another.

   When a response is broken into parts each part will consist of a
   normal HTTP header and body. These parts are called slices. The first
   slice sent in response to an HTTP request MUST contain either a
   Content-Length or specify Transfer-Encoding: chunked.

   Each slice MUST start with a valid Status-Line (RFC 2616 section 6.1)
   followed by response headers. The first slice MUST have the HTTP
   headers that would be present were the response transmitted
   unsliced. Subsequent slices MUST have only a Slice-Length (but see
   next paragraph) and Request-ID header. The minimal slice will consist
   of a Status-Line and a single Request-ID header.

   In satisfying an HTTP request the server MAY send multiple slices. All
   slices except the last one MUST contain a Slice-Length header
   specifying the number of bytes of content being transmitted in that
   slice. The final slice MUST NOT contain a Slice-Length header; the
   client MUST either use the Content-Length header sent in the first
   slice (if present) or the chunked transfer encoding to determine how
   much data is to be read.

   The HTTP response code MAY change from slice to slice if server
   conditions change. For example, if a server becomes unavailable while
   sending slices in response to a request the Status-Line on the initial
   slice could have indicated 200 OK but a subsequent slice may indicate
   500 Internal Server Error. If the HTTP response code changes the
   server MUST send a complete set of HTTP headers as if it were the
   first slice.

   Since there is no negotiation between client and server about sliced
   responses, a client sending a Request-ID header MUST be prepared to
   handle a sliced response.
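   To make the client's obligations concrete, here is a minimal Python
   sketch of slice reassembly. It assumes each slice has already been
   parsed into a header dictionary plus its body bytes; real code would
   parse the Status-Line and use Slice-Length (or Content-Length and
   the chunked encoding for the final slice) to frame each body on the
   wire.

```python
# Sketch of client-side slice reassembly (sections 3.1 and 3.2).
# Header parsing is simplified away: each slice arrives here as a
# dict of headers plus its body bytes.

class SliceAssembler:
    def __init__(self):
        self.partial = {}    # Request-ID -> list of body fragments
        self.complete = {}   # Request-ID -> full response body

    def feed(self, headers, body):
        rid = headers["Request-ID"]
        frags = self.partial.setdefault(rid, [])
        frags.append(body)
        if "Slice-Length" not in headers:
            # The absence of Slice-Length marks the final slice.
            self.complete[rid] = b"".join(frags)
            del self.partial[rid]

asm = SliceAssembler()
# First slice of a1: full headers, a Slice-Length and partial body.
asm.feed({"Request-ID": "a1", "Content-Length": "10",
          "Slice-Length": "4"}, b"abcd")
# Interleaved complete response for b2 (no Slice-Length at all).
asm.feed({"Request-ID": "b2"}, b"tiny")
# Final slice of a1: no Slice-Length, so a1 is now complete.
asm.feed({"Request-ID": "a1"}, b"efghij")
print(asm.complete)  # both a1 and b2 assembled
```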

3.3. Long responses

   A server MAY choose to use the slice mechanism in section 3.2 to
   implement a long response to a request. For example, a chat server
   could make a single HTTP request for lines of chat and the server
   could use the slice mechanism with chunked transfer encoding to send
   messages when they arrive.

   The client would simply wait for slices to arrive and decode the
   chunks within them. One simple mechanism would be to send a slice
   containing the same number of bytes as the chunk (the chunked encoding
   header would indicate X bytes and the Slice-Length would be X bytes
   plus the chunk header size). The client would then be able to read a
   complete slice containing a complete chunk and use it for rendering.
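   The "Slice-Length is the chunk plus its header" arithmetic can be
   sketched as follows, assuming standard RFC 2616 chunk framing (a hex
   size line, CRLF, the data, then a trailing CRLF):

```python
# Sketch of the Slice-Length arithmetic in section 3.3, using the
# RFC 2616 chunked transfer coding framing.

def chunk(data: bytes) -> bytes:
    """Frame one chunk of data using chunked transfer encoding."""
    return b"%x\r\n" % len(data) + data + b"\r\n"

message = b"hello, chat"
framed = chunk(message)

# The Slice-Length for a slice carrying exactly this one chunk is
# the framed size: the payload plus the chunk header and trailer.
slice_length = len(framed)
print(slice_length, len(message))
```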

3.4. Example session

   In this example the HTTP version for HMURR is specified as 1.2. It
   shows a client making an initial request for a page without a
   Request-ID, receiving the complete response and then reusing the
   connection to send multiple requests and receive sliced replies in a
   different order on a single TCP connection.

     client                             server

     GET / HTTP/1.2
     Host: example.com
     Connection: keep-alive


                                        HTTP/1.2 200 OK
                                        Content-Length: 1234
                                        Content-Type: text/html
                                        Connection: keep-alive

                                        (1234 bytes of data)


     GET /header.jpg HTTP/1.2
     Host: example.com
     Request-ID: a1

     GET /favicon.ico HTTP/1.2
     Host: example.com
     Request-ID: b2

     GET /hero.jpg HTTP/1.2
     Host: example.com                  HTTP/1.2 200 OK
     Request-ID: c3                     Content-Length: 632
                                        Content-Type: image/jpeg
     GET /iframe.html HTTP/1.2          Request-ID: b2
     Host: example.com
     Request-ID: d4                     (632 bytes of data)


                                        HTTP/1.2 200 OK
                                        Content-Length: 65343
                                        Request-ID: a1
                                        Slice-Length: 1024

                                        (1024 bytes of data)

                                        HTTP/1.2 200 OK
                                        Transfer-Encoding: chunked
                                        Request-ID: c3
                                        Slice-Length: 4957

                                        (4957 bytes of chunked data)

                                        HTTP/1.2 200 OK
                                        Content-Length: 128
                                        Request-ID: d4

                                        (128 bytes of HTML)

                                        HTTP/1.2 200 OK
                                        Request-ID: a1

                                        (64319 bytes of data)

                                        HTTP/1.2 200 OK
                                        Request-ID: c3
                                        Slice-Length: 2354

                                        (2354 bytes of chunked data)

                                        HTTP/1.2 200 OK
                                        Request-ID: c3

                                        (chunked data that includes the
                                        0-length block indicating end)

   In this example, the request for / is satisfied in full without using
   pipelining or slicing. The client then makes requests for four
   resources /header.jpg, /favicon.ico, /hero.jpg and /iframe.html and
   assigns them IDs a1, b2, c3 and d4 respectively.

   Since /favicon.ico (ID b2) is small it is sent in full while the
   client is still generating requests (the Request-ID header is
   present, but Slice-Length is not).

   /header.jpg is sent in two slices. The first has a Slice-Length of
   1024 bytes and specifies the complete Content-Length of the
   resource. The second slice has no Slice-Length header indicating that
   it is the final slice satisfying the request with ID a1.

   /hero.jpg is sent using chunked encoding and in two slices. The first
   slice indicates a Slice-Length (of chunked data) and the second slice
   has no Slice-Length, so the client reads the rest of the chunked data
   (which must include the zero-length final chunk).

   /iframe.html is small and is satisfied with a non-sliced response.
   Responses are delivered in the order that is convenient for the server
   and using slicing to prevent starvation. Since the client needs the /
   resource in its entirety before continuing it does not send a
   Request-ID header and receives the complete response.

4. Header Definitions

This section defines the syntax and semantics of additional HTTP
headers added with HMURR to the standard HTTP/1.1 header fields.

4.1. Request-ID

   The Request-ID is added to the HTTP request headers generated by a
   client to indicate that it intends to use HMURR and to uniquely
   identify the request.

      Request-ID = "Request-ID" ":" unique-request-tag

   When responding to the request the origin-server MUST insert a
   Request-ID header with the corresponding unique-request-tag so that
   the client can match requests and responses.

4.2. Slice-Length

   The Slice-Length response-header is added to a response by the
   origin-server to indicate the length of content that follows the HTTP
   response headers.

      Slice-Length = "Slice-Length" ":" 1*DIGIT

   If this header is missing it indicates that the entire (or remaining
   unsent) response-body is being transmitted with this set of HTTP
   headers. If present it indicates the number of bytes of response that
   are being transmitted. The client MUST use the Content-Length to
   determine the total length expected, or if chunked transfer encoding
   is used the client MUST use the chunked encoding header to determine
   the end of the content.

Obviously, this proposal does not provide all the functionality of SPDY (such as a forced TLS connection, header compression or built-in server push), but it does deal with connection multiplexing in a simple, textual manner.

There are probably reasons (that I've overlooked) why my proposal is a bad idea; what are they?

Saturday, December 08, 2012

Listen on a UDP port and dump received lines of data

I needed to quickly fake up a syslog server for some debugging and wrote a small Perl program to listen for messages (lines of text) on a UDP port and dump them to the console. The program listens on a port specified on the command-line and simply prints out whatever it receives. It is only suitable for line-oriented protocols since it uses the Perl readline (<>) operator to read data.

Here it is:

#!/usr/bin/perl
#
# udp.pl - listen on UDP port and dump out whatever has been received

use strict;
use warnings;
use Socket;

die "Usage: udp.pl <port>" if (!defined($ARGV[0]));

# Create a UDP socket and bind it to the given port on all interfaces
socket(UDP, PF_INET, SOCK_DGRAM, getprotobyname("udp"))
    or die "socket: $!";
bind(UDP, sockaddr_in($ARGV[0], INADDR_ANY))
    or die "bind: $!";

# <UDP> reads one datagram at a time, so this is only suitable for
# line-oriented protocols such as syslog
print $_ while (<UDP>);

PS Some people have asked why I'm not using netcat. The answer is that netcat works fine if only one 'connection' is made to the UDP port. With multiple sources it doesn't work, and I want an arbitrary number of UDP sources to throw data at this program successively. For details, see this StackOverflow question.

Monday, December 03, 2012

How I ended up with so much Hacker News karma

On the Hacker News leaderboard I'm currently in position #12 with 32,360 points. I was curious to find out how I ended up in that position, so I used the HNSearch API to pull down my complete submission and comment history.

Some headline numbers: I've been on Hacker News for well over 5 years (I joined on July 2, 2007) and have made 2,985 comments or submissions. That's 1.5 comments or submissions every day for over 5 years (I've actually commented far more than I've submitted: 670 submissions; 2,315 comments). And for all those items I've received on average 10.8 points.

So, the first conclusion is: consistency over a long period.

Looking at comments and submissions separately, I've received 15,985 points for 2,315 comments and 18,119 points for 670 submissions. That means my average comment received 6.9 points, but my average submission received 27 points.
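Those averages are easy to check with the numbers quoted above:

```python
# Quick check of the per-item averages from the totals given above.
comment_points, comments = 15985, 2315
submission_points, submissions = 18119, 670

print(round(comment_points / comments, 1))     # points per comment
print(round(submission_points / submissions))  # points per submission
```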

Second conclusion: good submissions earn way more points than comments (which was probably obvious).

Looking at the trend in points per submission it looks like things haven't changed much in 5 years. If you look at submissions that received below 200 points things look fairly consistent.

But it looks like I've got better at writing comments people vote up. Or perhaps the community has just grown and the number of upvotes has gone up for a well received comment.

Breaking the points gained for submissions into bands shows that I've earned a great deal of karma for stories that hit 50 to 200 points.

One thing I hoped to gain from this was insight into when my comments and posts did best on Hacker News. Here's the number of points I've gained by UTC hour for submissions:

So, for me at least, the peak times for submitting content to get karma are between 0900 and 1200 UTC (i.e. the morning in the UK). Commenting has a very different pattern. It appears that it's better for me to submit stories in the morning UTC but wait until the afternoon (i.e. morning in California) to write comments.

But digging a little deeper shows that this pattern really happens because I comment more in the afternoons. Looking at the average points gained per comment by time reveals that it doesn't really matter when I comment.

And the pattern seen above for submissions gets similarly blurred, although there are some slightly better times than others.

So, to answer my original question about how I ended up with that much karma: slowly, consistently, over years by submitting things that people like.

Tuesday, November 27, 2012

A home-made Voltaic Pile

Back from my trip to Italy and visit to Tempio Voltiano I decided that I really ought to make my own Voltaic Pile. So, I gathered together the following easy to obtain items: British 2p coins, zinc coated disks of metal, some blotting paper and a bottle of vinegar.

The British 2p coins are copper (at least on the outside) and so, with the zinc disks, will make the two electrodes of each cell in the pile. The vinegar is the electrolyte and the blotting paper is used to separate the metals. I got the zinc disks by buying the cheapest electrical junction box I could find and punching out all the holes:

Each cell consists of a 2p coin, a piece of blotting paper soaked in vinegar and a zinc disk:

Each cell gave roughly 1.02v (apparently zinc-copper should be 1.1v at 25C) and then it's just a question of making a pile of them to make a larger voltage. This pile of 10 cells had a measured voltage of 9.7v (clearly not all the cells are at 1.02v).


And there's sufficient current to light an LED:


To make it easier to connect to the bottom of the pile I sat the whole thing on a small piece of aluminium foil.

The greatest Google Mail feature you may not be using

There's a wonderfully powerful and subtle technique in Google Mail that can be used to enhance every reply you make to a mail: if you select an area of a message and then hit Reply only the selected text will be quoted in the response.

For example, here's an email I need to reply to:

If I simply hit Reply then the entire original message is quoted in the response:

But if I first select the piece that's important to me:

And then hit reply, only that part is in the response:

Use this to respond to just the right parts of a message and cut down those enormous chains of replies where the messages grow and grow and grow.

PS This feature has been disabled by default in Google Mail. You must enable it to use it.

Sunday, November 18, 2012

The Bergamo Analemma

While in Italy I briefly visited Bergamo and stumbled upon a wonderful projection of the Sun's analemma in the old town. A hole in a metal disk causes a dot of sunlight to be projected on the ground and the path of the Sun throughout the year is recorded by keeping track of the position of the Sun at local noon. The path of the Sun is an analemma. In Bergamo the position of the dot has been permanently engraved in the stone work:


In the photograph you can see the curved line that shows the local noon position of the Sun as projected through a metal disk. At the top of the image you can just make out the metal disk that has the hole used to create the track of the Sun on the ground. The two ends of the analemma represent the summer and winter solstices. The central line is a meridian and the photograph above is facing south. Here's the compass rose at the end of the meridian line:


Here's the disk:


And the position of the solstices:



All along the meridian line are the days of the year laid out so that the date can be determined from the angle of the Sun at solar noon.

I didn't manage to get a good shot of the characteristic elongated figure-8 shape of the analemma itself. The ends of the 8 are the solstices. The two outer lines parallel to the meridian show 15 minutes before and after solar noon.

Thursday, November 08, 2012

"That'll Never Work!"

I got invited to speak at the last Hacker News London meet up and gave a talk called That'll Never Work! about crazy ideas and what to do about them. The organizers have now posted the talk video:


If you want the slides themselves then here they are:

Wednesday, November 07, 2012

The Rizzoli Conundrum

I was in the Rizzoli book shop in Milan buying The Economist when I noticed that the wear pattern on the pinpad used to pay using a debit card was anything but uniform.  Unfortunately, I didn't have my phone with me so was unable to snap a picture, but it looked like this:

There was heavy wear on the buttons 1, 4, 7, 8, 9 and the green OK button. The buttons 2, 3, 5 and 6 showed little wear. What could cause this?

At first I assumed that Italian debit cards had four digit PINs and people might be able to choose their PIN and use a birth year. To check that I grabbed the latest statistics on the number of people living in Italy by age (statistics are available from ISTAT in CSV format) and wrote a small program to process them. Based on people aged 18 to 80, assuming 4 digit PINs equal to birth year, the wear pattern would be: 9 (29.08%), 1 (27.46%), 7 (7.39%), 6 (7.34%), 5 (6.39%), 8 (6.19%), 4 (6.13%), 3 (4.84%), 2 (2.70%), 0 (2.47%). That doesn't match the observed wear, which isn't terribly surprising: producing it would have required a sudden drop in the birth rate in 1950s and 1960s Italy.
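The analysis can be sketched as follows; the population table here is a made-up toy stand-in for the real ISTAT CSV:

```python
# Sketch of the birth-year PIN analysis. The population table below is
# a hypothetical stand-in for the ISTAT data (people per birth year).
from collections import Counter

population_by_birth_year = {1950: 900, 1964: 1000, 1975: 950, 1980: 800}

digit_weight = Counter()
for year, people in population_by_birth_year.items():
    for digit in str(year):  # a 4-digit PIN equal to the birth year
        digit_weight[digit] += people

# Every 20th-century birth year starts with 1 and 9, so those two
# digits dominate the predicted wear.
total = sum(digit_weight.values())
for digit, weight in digit_weight.most_common():
    print(f"{digit}: {100 * weight / total:.2f}%")
```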

Then I asked some Italians about their debit card PINs. Italian PINs are 5 digits long (not 4) and are chosen by the bank and cannot be changed.

So, can anyone come up with an explanation of what I observed?

PS If anyone's in Milan and can walk into Rizzoli and snap a picture of the pinpad (at the till straight in the front door) it would be cool.

Monday, November 05, 2012

Tempio Voltiano

At the bottom tip of Lake Como one of the most (if not the most) over-the-top memorials to a scientist sits on the edge of the lake. The Tempio Voltiano is a temple built to commemorate the Italian scientist Alessandro Volta (who, amongst other things, invented the battery). Built in 1927 the temple depicts Volta as a classical figure. In central Como there's a statue of Volta (on the Piazza Alessandro Volta) with the scientist draped in robes as if he were a figure from the Roman era.

The temple itself continues the theme, with statues representing science (on the left of the entrance) and the Roman goddess Fides (Goddess of trust).

And the interior is similarly grand with an inlaid floor of marble, alabaster and other stones. The circular layout follows the progression of Volta's scientific work from left to right, with the dates engraved in the stonework.

The actual exhibition is a little disappointing. In 1899 Como put on an enormous exhibition celebrating the 100 year anniversary of Volta's invention of the battery. A massive fire broke out and many of Volta's original instruments and creations (including his batteries) were destroyed. The temple contains those artifacts that remain, augmented by reconstructions based on parts that were recovered.

Nevertheless it's here that you can see some of the first batteries ever created. Such as this Voltaic Pile:


And there's a good display of other batteries made by Volta using a variety of metals and electrolytes (some of them dry and some of the wet technologies):




And here's some equipment used for electrolysis to see what gases are generated at the anode and cathode.


Volta's invention came about because of Galvani's investigation of 'animal electricity' that appeared to be exhibited when frogs' legs moved when placed in contact with two different metals. Volta didn't believe Galvani's explanation of the presence of electricity in animals, but rather thought that the contact of the metals with the legs was creating electricity. In disproving Galvani he invented the battery.


Also, on display is equipment that Volta used to measure the electromotive force by balancing weights against two charged plates to measure the force required to separate them. And there's a display of capacitors (which he called 'condensors' because the electricity was thought to 'condense' on the plates).


If you visit the museum be sure to ask for the handout in English that describes all of the numbered exhibits and buy the 6 Euro English-language "Guide to the Volta Temple" which is well worth reading as it covers the history of the building and Volta's inventions in detail.


Wednesday, October 24, 2012

"Calculation and Tabulation in the Nineteenth Century: Airy versus Babbage"

Doron Swade's 2003 PhD thesis entitled "Calculation and Tabulation in the Nineteenth Century: Airy versus Babbage" is available for download.

It covers the interaction between Charles Babbage and George Airy and shines light on a relationship that has previously been reduced to caricatures of the two men.


Tuesday, October 16, 2012

A downloadable nanosecond

I came across a wonderful video of Grace Hopper (if you don't know who she is go read that Wikipedia article first and then come back here) explaining what a nanosecond is using a visual aid. The aid is a length of wire equal to the distance light travels in one nanosecond. That's 299.8mm (or as she puts it 11.8").


That's a handy length because it fits neatly on A4 and US Letter paper. So, here are downloadable nanoseconds that can be used to make the same point as Hopper. I've prepared both A4 and US Letter versions as PDFs.

Seeing the distance light travels in a nanosecond is interesting because it makes clear that, at the very high frequencies at which computers operate, the speed of light and the length of cabling become significant. This propagation delay is something that designers of very high speed circuits have to take into account. For example, a machine working at 1 GHz has a clock that's ticking once every nanosecond.
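The length of Hopper's wire follows directly from the speed of light (a quick check; the inch conversion uses 25.4mm per inch):

```python
# Distance light travels in one nanosecond.
c = 299_792_458       # speed of light in metres per second
nanosecond = 1e-9     # seconds

length_m = c * nanosecond
print(f"{length_m * 1000:.1f} mm")    # millimetres
print(f"{length_m / 0.0254:.1f} in")  # inches
```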

Here are the two versions.


Marie Curie Day

Every year when Ada Lovelace Day comes along I find myself disappointed that Lovelace has been chosen as the symbolic 'woman in science' because her contribution is minimal, the claims about her are overblown and there's a much better role model who really contributed a lot: Marie Curie.
The claim that Lovelace was the 'first programmer' rests on very shaky ground and the prophetic contribution she did make is overlooked. But even that is essentially a dead end: Lovelace isn't a figure who inspired; she's been rediscovered and raised up. That's not to say that she isn't cool: she really did grok what Babbage was doing when others didn't and we know a great deal about Babbage's ideas because of her translation of the famous paper by Menabrea with her famously detailed additional notes, but...

In contrast, check out Marie Curie:
She was the first woman to win a Nobel Prize, the only woman to win in two fields, and the only person to win in multiple sciences. She was also the first female professor at the University of Paris (La Sorbonne), and in 1995 became the first woman to be entombed on her own merits in the Panthéon in Paris
Her achievements included a theory of radioactivity (a term that she coined), techniques for isolating radioactive isotopes, and the discovery of two elements, polonium and radium. Under her direction, the world's first studies were conducted into the treatment of neoplasms, using radioactive isotopes. She founded the Curie Institutes in Paris and in Warsaw, which remain major centres of medical research today.
Here's my favorite picture of Curie. It's a gathering of every important physicist/chemist in the world at the time. Curie is the only woman sitting with Einstein, Bragg, Heisenberg, Dirac, Planck, ... at the Solvay Conference in 1927.


And if that wasn't enough her own daughter Irène Joliot-Curie went on to win the Nobel Prize for Chemistry in 1935. There's absolutely no doubt that Marie Curie is a scientist of major importance and an inspirational figure. When alive she was something of a celebrity and in 1921 went on a visit to the US which is nicely covered by NIST here and also here.

If you find yourself in Paris do visit the Curie Museum (which is in her now decontaminated laboratory). Here's a look inside the museum.

Monday, October 08, 2012

Why Plan 28 decided not to use Kickstarter

Firstly, this is not a post about how Kickstarter is evil, bad for you or anything similar, it's about why the UK charity I run, Plan 28 (which is working on building Charles Babbage's Analytical Engine), chose not to use Kickstarter. I get asked this question a lot so here are the answers.

1. Tax efficiency

As a UK charity Plan 28 is eligible for Gift Aid; this means that Plan 28 is able to claim back tax paid by UK taxpayers on donations they make to the charity. That means that UK donations are increased by around 25% because the charity claims back the tax. Doing that means collecting information on UK taxpayers. This would be very complex with Kickstarter and an administrative burden.

(Note that US donors have asked if Plan 28 plans to become a US 501(c)3 so that US taxpayer donations can become deductible: not at this time. The legal and administrative burden of running two non-profits is too high for a small charity like Plan 28.  As we grow this may change.)

2. Kickstarter's fees are quite high

Plan 28 originally planned to use Kickstarter and had a range of incentives already thought up, but the board of trustees objected based on the overall Kickstarter fee structure. To quote from the Kickstarter FAQ:
If a project is successfully funded, Kickstarter applies a 5% fee to the funds collected. Our payments processor, Amazon Payments, will also apply credit card processing fees that work out to roughly 3-5%.
So, worst case, the total cost to the charity would have been 10% of the donation.

Contrast that with JustGiving (which Plan 28 uses). They charge 3% to 5% of the donation amount plus up to 1.3% from the credit card company. The worst case is that Plan 28 pays 6.3% of the donation, but JustGiving also handles the Gift Aid process: they automatically make the 25% claims to the tax authorities for Plan 28.
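Per £100 donated by a UK taxpayer, the worst cases work out roughly as follows (a simplified sketch that assumes fees apply only to the original donation and ignores how JustGiving treats the Gift Aid portion):

```python
# Worst-case fee arithmetic from the paragraphs above, per 100 pounds.
donation = 100.0

# Kickstarter: 5% platform fee plus up to 5% card processing.
kickstarter_net = donation * (1 - 0.05 - 0.05)

# JustGiving: up to 5% plus up to 1.3% card fee, but Gift Aid adds
# a 25% reclaim on the donation from the tax authorities.
justgiving_net = donation * (1 - 0.05 - 0.013) + donation * 0.25

print(round(kickstarter_net, 2), round(justgiving_net, 2))
```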

3. All or nothing funding

The Analytical Engine project is a very long one. It could easily take 10 years. The board of trustees were concerned that this was somewhat at odds with the spirit of Kickstarter which revolves around funding a specific project where somewhat immediate gratification is involved.

Implicit in the Kickstarter process is that the amount of money asked for is enough to make the project happen. In the case of Plan 28 that's likely £5m over ten years. But also Kickstarter is 'all or nothing' so we could have found ourselves falling short of the money we need and receiving nothing.

All these things led us to conclude that working with JustGiving made more sense. We may return to Kickstarter when we start to build the actual machine and the end is in sight, but for the moment we are going the traditional fund-raising route.

If you are interested in donating to the project click here.


Friday, October 05, 2012

Plan 28 now accepting donations

Plan 28's donation system is now online through JustGiving. Donations can be made in British Pounds, US Dollars, Euros and a number of other currencies. As a UK charity Plan 28 is eligible for GiftAid; tax payers in the UK can help the charity by signing the GiftAid form when they make a donation.

As Plan 28 is a very long project it is particularly helpful when we receive regular donations. The donation system is able to set up monthly donations for people who want to contribute a little each month over the long term.

Plan 28 can accept donations on the web or via mobile phone. For example, to donate £10 by phone text BABB37 £10 to 70070.

Thursday, October 04, 2012

Web-scale Arduino

There are a number of projects that allow one to control an Arduino using node.js.  For example, there's Noduino and Johnny-Five. To my mind these things are an abomination(*) because they are teaching people to use Arduino at the wrong level of abstraction. If you are going to learn Arduino, learn some C.

One can always argue that they get more people using Arduino, but all these people would be better off learning some C for the joy of Arduino is not (to my mind) controlling it using a massive desktop or laptop machine. The joy of Arduino is small, embedded projects that do stuff. Such as Simonoids:
And if they learn some C they'll get more out of Arduino because they'll be able to do things that the node.js layers don't allow: cool stuff. And once they've learnt some C and need to control the Arduino from the host they'll be able to do even more cool stuff. If you are going to learn Arduino, learn some C.

To get an idea of what's problematic about 'web-scale Arduino' take a look at the canonical Hello, World! of Arduinoland: flashing the LED attached to pin 13.  Here's the code to do that in Noduino:
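A rough sketch of what that looks like with the duino library (reconstructed from memory, so the exact constructor options may differ; it assumes the board is plugged in over USB and the LED is on pin 13):

```javascript
// Blink the LED on pin 13 via the duino library.
// Note: every line of this JavaScript runs on the host machine,
// not on the Arduino itself.
var duino = require('duino');

var board = new duino.Board();  // auto-detects the serial port

var led = new duino.Led({
  board: board,
  pin: 13
});

led.blink();  // duino does the on/off toggling and the timing
```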

And here's the code to do that in C on the Arduino:
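This is essentially the standard Blink example that ships with the Arduino IDE (it needs the Arduino core to compile, of course):

```c
// Blink the LED attached to pin 13, running entirely on the Arduino.
const int ledPin = 13;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH);  // LED on
  delay(1000);                 // wait one second
  digitalWrite(ledPin, LOW);   // LED off
  delay(1000);
}
```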
The first problem is that the Noduino code is nothing like the code on the Arduino. So, anyone learning to use Noduino is learning almost nothing about programming Arduino. The skill of using the duino library is not transferable to writing code on the device itself. That's a problem because these node.js Arduino wrappers have limitations that can only be overcome by using C.

For example, LED.blink() takes care of all the flipping on and off of the LED and it performs the timing. And it's not just that it's doing the work: all the concepts in node.js (particularly events) just don't exist on the Arduino. What you learn with Noduino is how to use Noduino; similarly for Johnny-Five.

(Aside: for a fun puzzle work out how to stop the LED from blinking in the node.js example.)

The other issue is that the really cool thing in node.js (the same code on the server and browser) just isn't true here. None of that JavaScript is running on the Arduino. You can't take your JavaScript and run it on the Arduino and so you can't, for example, take your LED-flashing Arduino, attach a battery to it and see it run.

Now, that might seem like a minor issue but doing things that are time sensitive (such as bit banging) means your code needs to be on the Arduino. Projects like the Cansole need to have very precise timing to generate the TV signal in software. That needs doing in C.

The way these systems work is by having a serial protocol defined between the Arduino and the host. Noduino uses duino and Johnny-Five uses Firmata. Projects that can live within the restrictions of those protocols will get along fine, but it means giving up access to the vast world of Arduino libraries.

(Aside: if you know C then you can use Firmata as a communication mechanism between your host program and Arduino and get the benefits of both worlds.)
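By way of illustration, the host side of that arrangement might look like this with the firmata.js npm package (the port name is an assumption, and the board must be running the StandardFirmata sketch):

```javascript
// Control an Arduino running StandardFirmata from node.js.
// '/dev/ttyUSB0' is a placeholder; use whatever port your board shows up on.
var firmata = require('firmata');

var board = new firmata.Board('/dev/ttyUSB0', function (err) {
  if (err) { throw err; }
  board.pinMode(13, board.MODES.OUTPUT);
  board.digitalWrite(13, board.HIGH);  // turn the LED on
});
```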

As an example, if you wanted to connect a serial device (say a GPS) to some of the Arduino digital pins using SoftwareSerial you'd be in trouble. Or suppose you want to use TVout to generate a PAL or NTSC signal, you can't. If you know C you can hack that stuff into your project even if you use node.js to control the Arduino. C is the core language used by Arduino and it's worth learning.

Of course, having gone on about this there is a good reason to control an Arduino from a node.js program: if you are writing a web app that needs to control some external device then Arduino is a handy interface box.

And I'm not against doing that. In fact, the standard Arduino examples include many ways of combining a Processing program with an Arduino program. One example allows the user to move the mouse on their computer to control the brightness of an LED. All of these involve communicating using the serial connection (which is what Firmata and duino do under the hood) and require writing a simple Arduino program in C.  Another example uses the Arduino to read analog data and Processing to graph it.  In both cases the programs used are small, simple and native and can take full advantage of the Arduino and the host.
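The Arduino half of the brightness example is only a few lines of C; something along these lines (pin 9 is an assumption, but it needs to be a PWM-capable pin):

```c
// Read bytes from the serial port and use each one to set an LED's
// brightness. The host (e.g. a Processing sketch tracking the mouse)
// sends a byte between 0 and 255.
const int ledPin = 9;  // must be a PWM-capable pin

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int brightness = Serial.read();
    analogWrite(ledPin, brightness);
  }
}
```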

In summary, I think you're missing a lot if you start with node.js as your way of learning Arduino. Flash an LED in C: it's easy.

(*) I am exaggerating for effect.

Tuesday, October 02, 2012

The Great Railway Caper: Big Data in 1955

The story of a Big Data problem in 1955 presented at StrataConf London 2012.


As soon as I have Tim Greening-Jackson's permission to share his research paper on "LEO 1 and the BR Job" I will update this blog post with information on how to get it.

PS Tim has kindly given permission to post his paper here.


Monday, October 01, 2012

Fact checking George Dyson (where he taps me on the shoulder)

It's no secret that I wasn't impressed by George Dyson's book "Turing's Cathedral" because it skewed history in a particular way a bit too much, and I felt that the title exploited the Turing anniversary. But I was struck by something he said in a StrataConf EU keynote.

He said that in 1953 there were only 53 kilobytes of random-access memory in computers in use and showed a picture of a February 1953 report entitled "A Survey of Automatic Digital Computers" published by the US Office of Naval Research. I thought that sounded odd, so I tracked down a copy of the report.


In fact, he makes the same claim in the book, but I'd overlooked it:


So, I started going through the report looking at machines that were operational in March 1953 according to the report. Just concentrating on binary machines I quickly found that random-access memory was well past 53KB.  By the time you reach E (the machines are in alphabetical order) there were 85,856 bytes of memory (in the ACE Pilot, APE(R)C, AVIDAC, BARK, EBIAC, CADAC, CSIRO Mark 1, EDSAC 1,  EDVAC, ELECOM 100 and ERA 1101).

So then I wondered whether, given that Dyson is oddly obsessed with Williams Tube memory storage (making it the basis of his really odd "Turing Machines are one-dimensional, von Neumann machines two-dimensional" metaphor), he was only counting machines that used fast electrostatic memory (he does, after all, say 'high-speed' in the book).

The machines that were operational in March 1953 with that type of memory are (according to the report): AVIDAC (5,120 bytes), ILLIAC (5,120 bytes), IAS Machine (5,120 bytes), Manchester Machine (645 bytes), MANIAC (5,120 bytes), ORDVAC (5,120 bytes), SEAC (2,816 bytes), SWAC (1,152 bytes), UTEC (768 bytes), Whirlwind (4,096 bytes).  That's a total of 35,077 bytes.

Reading the report it's clear that there was more than 53KB of memory in March 1953 and that it was spread across a variety of different memory types (magnetic drums, acoustic/mercury delay lines and electrostatic techniques).

It's a nice soundbite that in "1953 there were 53KB of memory", but like everything else in Dyson's book it's important to read it in the light of his fundamental idea that von Neumann's IAS machine is the Ur Machine.

PS After my talk at StrataConf I attended the FOO Party where all the speakers and others get together at the end of the conference. While chatting with someone, I felt a tap on my shoulder and turned round to hear "I really enjoyed your talk today. Marvellous to hear about LEO". The finger belonged to George Dyson.

So, of course, I had to own my opinions and ask him straight to his face about things I've written. I still don't understand his explanation of the one dimensional/two dimensional thing, but he convinced me that he's right about the 53KB in '53 thing. Apparently, there are two other important machines not mentioned in the US Navy report: one at IBM and another at RAND. He was just counting high-speed electrostatic memory as I'd guessed. And given that he spent 6 years researching the book he's probably got that part right!

Sunday, September 23, 2012

My UKHAS 2012 Conference talk: "HAB Software Woes"

Yesterday I gave a talk at the UKHAS 2012 Conference about software problems in High-Altitude Ballooning. The slides are here:

Friday, September 21, 2012

A water rocket made from household bits and bobs

Last year I made a 'Ponyo' boat out of an aluminium can and a juice carton following the instructions from Science Toy Maker. This summer when the weather was nice enough I followed his instructions for a simple water rocket launcher made from a pen and a piece of PVC tube.

The nice thing about Science Toy Maker is that all the stuff he makes uses the minimum number of tools and a maximum of common items. The water rocket launcher needs the following: a bicycle pump, a "Bic" pen, a length of PVC tube, a candle, and some glue. Add a drinks bottle for the rocket.

Here's the finished launcher.
It's pretty simple. The length of PVC tube is pinched at the bottom end (on the right) to seal it, a hole is made for the bicycle pump connector (fashioned from the pen), and a neck is formed to rest the bottle on. Since PVC melts at a low temperature he uses a candle to do all the work.

The first job is to heat the PVC pipe at the point where the drinks bottle rocket's neck will rest while it is pumped full of air.
All you do is heat up the pipe a bit (rotating it around for even heating) above the candle flame and then gently push to make a little bulge in the pipe. That's where the neck of the upside down drinks bottle sits.

Next you cut up a small pen to make the bicycle pump adapter. I used a simple "Bic" pen like this.
Then you make a hole in the PVC pipe near the bottom and heat the hole up until it's possible to push the pen segment in. I glued it all around with rubber-toughened superglue.
The final stage is to seal the bottom of the tube. That's done with the candle again: soften the PVC, pinch it and let it cool. Any gap can be filled with glue.
After that you just attach the bicycle pump to the connector, partially fill the drinks bottle with water, stick it on the top, pump and let it fly. Check out this video from Science Toy Maker.