Tuesday, October 02, 2012

The Great Railway Caper: Big Data in 1955

The story of a Big Data problem in 1955 presented at StrataConf London 2012.


As soon as I have Tim Greening-Jackson's permission to share his research paper on "LEO 1 and the BR Job" I will update this blog post with information on how to get it.

PS Tim has kindly given permission to post his paper here.


If you enjoyed this blog post, you might enjoy my travel book for people interested in science and technology: The Geek Atlas. Signed copies of The Geek Atlas are available.

3 Comments:

Blogger doranchak said...

Fascinating talk! Great story that touches on many facets of computer science.

5:11 PM  
Blogger Francis Turner said...

There's one difference between today and 60 years ago - today we have a lot of problems that are "fairly big data". That is to say that they can resemble a big data problem, but only due to lack of funding - so you can't afford enough AWS instances - and/or because you failed to choose the proper algorithm/data structure/...

In other words, we have a lot of problems that we know the solution to; it's just that the solution is unaffordable for some reason. And that may well include because you are stupid and don't realize that someone else has (most of) the answer. In the 1950s there was no way to actually buy a second LEO mk 1 (or even another 1KB of memory), and a lot of the algorithms had not yet been thought of by anyone.

1:14 PM  
Blogger Justin Mason said...

The algorithm described in section 3.5.3 of the paper sounds an awful lot like Dijkstra's Algorithm: http://en.wikipedia.org/wiki/Dijkstra's_algorithm

According to Wikipedia, that was invented in 1956 and published in 1959 -- I wonder if Roger Coleman's version predated that?

Thanks for posting this, I love reading 1950s computing history. It's amazing how little has changed in many respects.

12:32 PM  
