Monday, July 30, 2012

The TENMA 72-8730 IR Thermometer

Some time ago the folks at Farnell UK wrote to me offering to send me any piece of equipment they sell (up to a certain £ amount) on the condition that I review it. I racked my brains for something I'd want (excluding a free Raspberry Pi which was an impossible ask) and settled upon an infrared thermometer.

They sent me a TENMA 72-8730.

It's a very simple device to use: point it at the thing you want measured and get a temperature reading. The display shows the current temperature and the maximum temperature read so far (this can be changed to show the minimum).

The thermometer does C and F measurements and has a backlight for the screen and a laser for aiming.

The thermometer has a 10:1 spot ratio, which means that the spot being measured has a diameter of 1/10 of its distance from the thermometer. Thus when measuring at a distance of 1m the spot has a diameter of 10cm. You need to get close to get a precise measurement. Its accuracy is given as 2% over a range of -18C to 280C according to the data sheet.

The other important factor with IR thermometers is that they are measuring the thermal radiation from the thing being measured and the thermal radiation will differ for different types of materials. A black piece of paper and a white piece of metal at the same temperature will have different radiation characteristics and give different measurements.

Some more expensive thermometers allow the surface type to be set so that an automatic adjustment is made. This model does not. According to the data sheet the emissivity is assumed to be 0.95. Looking at a table of emissivity values you'll see that this is good for many dark things, but poor for shiny metal (for example).

Pointing the thermometer into my mouth (laser off!) I get a measured temperature of 36.2C (so it looks like I'm pretty healthy). That's because human skin has very good emissivity, close to the 0.95 this thermometer assumes.

Here's a long blog post that goes into more detail on emissivity and this type of thermometer. You can hack around the emissivity problem by applying black tape to the thing you are measuring and measuring that. (Or you can buy fancy tape.)

Inside the battery compartment there's a 14 pin connector that connects directly to the microcontroller inside the thermometer and looks like it can be used for communication with it. Putting a logic analyzer on it I was unable to get any interesting signals from it (I'd hoped to be able to remotely control it) so I wrote to TENMA who responded: "This is used only at the factory for initial setup." If I get time I'd still love to play with that interface.

The only problem I experienced with it was that it would occasionally lock up requiring the battery to be removed and reinserted.

Friday, July 13, 2012

Some things I've learnt about writing

The big theme of my working life (so far) has been programming, and yesterday I published a short list of things I've learnt about programming. The second (smaller) theme has been writing. Since 1996, when my first piece of writing was published in The Guardian, I've written quite a number of articles for newspapers and magazines (including a recent 3,000 word special on Alan Turing for New Scientist) and a book.

Almost everything I've written is non-fiction (except my parody startup CEO Brad Bradstone), so my thoughts are about that sort of writing. I have nothing useful to say about writing fiction.

Here are a few things I've learnt about writing. I hope you'll find them useful, as I think writing is a vital skill for almost everyone because good writing is simply good communicating and good communicating matters enormously in any job.
0. Practice
Some time ago I wrote a blog post about blogging and in it I said "write, write, write":
When I began this blog I didn't know what to write, and I thought I only had a few ideas. I ended up writing short, boring blog posts and saving my ideas up because I was afraid that I would run out of things to say. It turns out that the opposite is true. The more you write, the better you get at it. And the more you write the more ideas seem to appear from the ether. I don't set myself a goal of a blog post per day, but I do try to prevent my blog from going stale. Some of my posts are winners, some are not. But I would not have written successful posts without having written the duds.
Looking back at that I think the final sentence is very important. You'll write a lot of duds, but it doesn't matter because it's practicing writing that's important.
1. Read
There are three books about writing that I've read and that I think are helpful.
On Writing Well, by William Zinsser. This book is an example of its own title. It's a delightfully well written book about writing that reads with the slippery ease of a Malcolm Gladwell. It's both enjoyable and informative.
The AP Stylebook. Although the AP book is about writing for newspapers it is full of useful advice about clarity. The fight for clarity is at the heart of non-fiction. Your goal is not to delight the reader with the breadth of your vocabulary, but to inform them about a subject in which you are claiming to be knowledgeable. News writing has to aim to be succinct, accessible and accurate; those are all good attributes for any non-fiction writing.
The Elements of Style, by Strunk and White. The surprising thing about Strunk and White is that I can pick it up after years and years of owning it and rediscover its lessons. If you only buy one book from my list buy this one.
And, at one time, I read the entire Chicago Manual of Style. It's incredibly long and detailed, but worth having been through once in your life.
And read other books that are not about writing, but while reading them think about how they are written.
2. Listen to editors
Back in 1996 I was lucky that The Guardian newspaper in the UK asked me to write an article about the monitoring of Internet connections. I duly wrote an article and the then editor told me over the phone that it was 'utter rubbish' that 'read like a press release' and to go back and rewrite it for someone.
I touch on who you are writing for below. But the most important thing is not what he said, but that he said it. Editors often have excellent advice.
After I had submitted the long Alan Turing special to New Scientist, feeling that it was quite readable, the editor came back with a list of questions. Reading his questions I could see areas where I had failed to be clear, or had come up with convoluted sentences. With his questions in mind I was able to rewrite parts to make the entire piece interesting, succinct and accessible.
3. Think about writing
At least for me writing didn't come naturally. I started when my doctoral supervisor told me that writing was an essential skill and that my thesis would be poor if it wasn't both original and well written. He started me thinking about writing.
I think about writing by being aware of each sentence I've written and by being aware of the sentences of others. Just reading a book like "On Writing Well" and examining each sentence and paragraph is enlightening.
The hard part, of course, is emulating the skills of other writers.
When you are writing you need to go back and read what you've written with fresh eyes. Sometimes I walk away from a text so that I can see it anew. And sometimes rereading leads to total rewriting.
4. Think about the reader
I've often heard people say "write for yourself" and it's certainly true that when I wrote The Geek Atlas it was the book I wanted to buy but couldn't. But I didn't write the actual text for me; that would have been a grave error: the sections on mathematics would have assumed a doctorate and the sections on biology would have assumed no knowledge at all.
It's important to keep your actual reader in mind as you write. If you're writing a corporate whitepaper then you will imagine a certain type of reader and their knowledge level. If you're writing for a newspaper then the reader may be more 'average'. And if you're writing a technical blog post you can probably assume that your reader knows something about the subject.
One risk with technical writing is what I call the fog of knowledge. You know so much about a subject that it's hard to write for someone who isn't as knowledgeable. The words you write assume a level of understanding that only you have. Fight that or be left with a reader saying "I didn't understand a word of that".
And don't be clever. Use long, sophisticated words at your peril. If you do you'll end up sounding like a modern French philosopher and the goal of a modern French philosopher is to pass on only one sort of understanding: the understanding that they are smart. Don't be that person; write to be read.
Throughout the writing of a non-fiction piece it's vital to go back and read each paragraph as if you were your reader. Are the sentences themselves clear? Is the level of knowledge assumed correct? Do the paragraphs tell a story that leads the reader through the piece?
I read and reread everything I've written.
5. Plan
Yes, plan. Plan what you want to say; sketch out the major themes and threads through the piece. Make notes so you don't forget a point.
And plan when you are going to write and stick to it. While writing The Geek Atlas I kept very regular hours and it helped enormously. If you know that you are going to write from 0900 to 1200 and then eat lunch it makes you focus on writing.
6. Dream
Walk away from what you are writing and do something else. Let your mind go and then return to what you are writing. But return to it only in your mind. I've often found that ideas, paragraphs, and sentences will come to me when I'm far from the page or screen.
In fact, turning on the screen can be a positive turn off. Once you get to the screen you're forced to do actual writing. Yet some of the time what you need is actual thinking. That's best done away from the demanding computer with its word counts and empty page snarling at you that you haven't written anything yet.

Thursday, July 12, 2012

Tim Robinson joins Plan 28

I'm pleased to announce that Tim Robinson has joined Plan 28 as a trustee. Tim is the man behind these incredible Meccano Babbage engines and is extremely knowledgeable about the details of Babbage's machines.

His bio:

Retired engineer Tim Robinson maintains a strong interest in the early history of computing, particularly mechanical computing devices, and is actively involved in the restoration of these early machines and in the construction of working replicas of Charles Babbage’s conceptual designs. 
Since its arrival in 2008, he has devoted much of his time to Charles Babbage's Difference Engine No.2 at the Computer History Museum in Mountain View, where he is responsible for the presentation, operation, and maintenance of the engine. 
In 2003 Tim demonstrated a working model of the beautiful fragment of Babbage’s Difference Engine No.1, constructed in Meccano. After he publicized the design, the model was replicated several times around the world. In 2006, he demonstrated a working model of the calculating section of Difference Engine No.2, also constructed in Meccano. Capable of tabulating a 4th order polynomial, this model closely follows Babbage’s design and reproduces all its essential features. More recently, Tim has demonstrated working models of significant sections of the Analytical Engine, including the anticipating carriage and the microcoded control mechanisms.
In 2008, Tim was honored as a John Deaver Drinko Distinguished Visiting Professor at Marshall University, WV, for his work with undergraduate students to recreate a mechanical analog computer based on the original Bush differential analyzer, to be used as a calculus teaching aid. 
Tim retired in 2003 from Broadcom Corporation, where he led a group responsible for the development of Broadcom’s range of WiFi wireless networking chipsets. Oxford educated in Physics, he entered the computing field in 1980 in the UK, where, as co-founder of High Level Hardware Ltd., he designed a user-microprogrammable computer system for developing novel programming languages. He moved to the San Francisco Bay Area in 1989, and has held senior engineering positions at a number of Silicon Valley startup companies.

Some things I've learnt about programming

I've been programming for over 30 years from machines that seem puny today (Z80 and 6502 based) to the latest kit using languages that range from BASIC, assembly language, C, C++ through Tcl, Perl, Lisp, ML, occam to arc, Ruby, Go and more.

The following is a list of things I've learnt.

0. Programming is a craft not science or engineering

Programming is much closer to a craft than a science or engineering discipline. It's a combination of skill and experience expressed through tools. The craftsman chooses specific tools (and sometimes makes their own) and learns to use them to create.

To my mind that's a craft. I think the best programmers are closer to watchmakers than bridge builders or physicists. Sure, it looks like it's science or engineering because of the application of logic and mathematics, but at its core it's taking tools in your hands (almost) and crafting something.

Given that it's a craft then it's not hard to see that experience matters, tools matter, intuition matters.

1. Honesty is the best policy

When writing code it's sometimes tempting to try stuff to see what works and get a program working without truly understanding what's happening. The classic example of this is an API call you decide to insert because, magically, it makes a bug go away; or a printf that's inserted that causes a program to stop crashing.

Both are examples of personal dishonesty. You have to ask yourself: "Do I understand why my program is doing X?". If you do not you'll run into trouble later on. It's the programmer's responsibility to know what's going on, because the computer will do precisely what it's told not what you wish it would do.

Honesty requires rigor. You have to be rigorous about ensuring that you know what your program does and why.

2. Simplify, simplify, simplify

Tony Hoare said: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."

Simplify, refactor, delete.

I'd rephrase Hoare's maxim as "Inside every large, complex program is a small, elegant program that does the same thing, correctly".

Related to this is the 'small pieces loosely joined' philosophy. It's better to structure a program in small parts that communicate than to create some gigantic monolith. This is partly what has made UNIX successful.

3. Debuggers are sometimes a crutch, profilers are not

I almost never use a debugger. I make sure my programs produce log output and I make sure to know what my programs do. Most times I can figure out what's wrong with my code from the log file without recourse to a debugger.

The reason I don't use a debugger much is I think it leads to lazy thinking. Many people when faced with a bug reach for the debugger and dive into setting breakpoints and examining memory or variable values. It's easy to become enamored with such a powerful tool, but a little bit of thinking tends to go a long way. And if your program is so complex that you need a debugger you might need to go back to #2.

(Aside: having said all that, one of the programmers I most respect, John Ousterhout, seemed to spend all day in the Windows debugger).

On the other hand, profilers are essential if you need to understand performance. You'll never cease to be amazed what a profiler will tell you.

4. Code duplication will bite you

Don't Repeat Yourself. Do everything just once in your code.

This is related to #2, but is a special case. Even a simple piece of code that's duplicated will lead to trouble later when you 'fix' one version and forget about the other one.

5. Be promiscuous with languages

Some people get obsessed with a specific language and have to do everything in it. This is a mistake. There is no single greatest language for all tasks.

The key thing is to know which language in your toolbox you'll use for which problem. And it's best to have lots of tools. Try out different languages, build things in them.

For example, perhaps you'll not use Python or ML very much but you'll have played with list comprehensions and seen their power. Or you'll dabble in Go and will have seen how it handles concurrency. Or you'll have used Perl and seen the power of really flexible string handling. Or you'll have used PHP to quickly build a dynamic web page.

I hate language wars. They're basically for losers because you're arguing about the wrong thing. For example, in my hands PHP is a disaster; in the hands of others it sings. Similar things can be said about C++.

6. It's easier to grow software than build it

This is related to #2. Start small and grow out. If you are attacking a problem then it's easier to grow from a small part of the problem that you've tackled (perhaps having stubbed out or simulated missing parts) than to design a massive architecture up front.

When you create a massive architecture from the start you (a) get it wrong and (b) have created a Byzantine maze that you'll find hard to change. If, on the other hand, you work from small pieces that communicate with each other, refactoring will be easier when you realize you got it wrong from the start.

The root of this is that you never know what the truly correct architecture will look like. That's because it's very rare to know what the external stimuli of your program will be like. You may think that you know, say, the pattern of arriving TCP traffic that your mail server will handle, or the number of recipients, or you may not have heard of spam. Something will come along from outside to mess up your assumptions and if your assumptions have been cast into a large, interlocked, complex program you are in serious trouble.

7. Learn the layers

I think that having an understanding of what's happening in a program from the CPU up to the language you are using is important. It's important to understand the layers (be it in C understanding the code it's compiled to, or in Java understanding the JVM and how it operates).

It helps enormously when dealing with performance problems and also with debugging. On one memorable occasion I recall a customer sending my company a screenshot of a Windows 2000 crash that showed the state of a small bit of memory and the registers. Knowing the version of the program he had we were able to identify a null pointer problem and its root cause just from that report.

8. I'm not young enough to know everything

I've still got plenty to learn. There are languages I've barely touched and would like to (Erlang, Clojure). There are languages I dabble in but don't know well (JavaScript) and there are ideas that I barely understand (monads).

PS It's been pointed out that I haven't mentioned testing. I should have added that I do think that test suites are important for any code that's likely to be around for a while. Perhaps when I've been programming for another 30 years I'll have an answer to the question "Do unit tests improve software?". I've written code with and without extensive unit tests and I still don't quite know the answer, although I lean towards thinking that unit tests do make a difference.

Friday, July 06, 2012

The Perl script that powered the Alan Turing petition

Back in 2009 I was the person behind the successful petition asking the British government to apologize for the treatment of Alan Turing. As I worked on this completely alone (mostly via email, Twitter and phone while commuting to work) I needed some assistance to muster enough interest. Part of that assistance was the following Perl script, which looked for celebrities who had signed the petition.

(Note that this script probably doesn't work any more because Wikipedia have changed their search functionality and the Number 10 petitions web site has been changed to no longer show the names of signatories).

That script ran hourly via a cron job and its output was emailed to me. It works by searching Wikipedia for the name of each person who has signed the petition (since the last time the script ran) and seeing if a page exists for that person. If it does, it looks to see if that person is British (in any variation) or Irish. If so, it outputs their name and page.

Initially, my parents had been reading the list on the web site and texting me to say that someone well known had signed, but that became impossible as the number of signatories increased. Hence the script.

I wanted to know who had signed for the simple reason that it would give the press something to write about. After the initial rush of publicity I wanted to keep the petition in the news and there were a couple of ways to do that: get lots of people to sign so that the number of signatories was newsworthy and spot when famous people signed.

A famous person signing gives the press something to say. In the end, many famous people did sign and their names became an important part of the media story around the petition.

Wednesday, July 04, 2012

Things I like about programming in Go

I recently wrote a blog post for CloudFlare about our use of Go. I thought it was worth following up with a bit more detail about why I like Go. In no particular order:

1. It's a fun language to write in

That's hard to justify because it's very personal. But to me Go has all the power of C, and all the fun of a scripting language. Go breaks Ousterhout's Dichotomy. Things like slices make doing the sorts of fast pointer-based things I used to do in C safe and fun.

And the fact that it's missing exceptions seems like a win because I much prefer dealing with errors when they occur.

Variable declaration, particularly using :=, means you get on with programming not doing a bunch of typing about types.

And, lastly, gofmt means a complete end to code formatting wars. There's one true way.

2. CSP

I was lucky enough to study at Oxford under Tony Hoare and so Hoare's Communicating Sequential Processes was an important part of the syllabus. And my doctoral thesis is all CSP and occam. CSP is incredibly simple and powerful: processes go ahead and run in parallel until they need to communicate at which point they synchronize on a communication. There's no shared memory.

Go takes the ideas of CSP (and of Dijkstra's Guarded Commands) and replaces Hoare's synchronization with synchronous, channel-based communication (this is actually very, very similar to occam). This does two things: it makes synchronization part of the language (and not something that you do by calling synchronization functions) and it makes sharing by communication (instead of sharing memory) natural.

Coupling that with goroutines (which are very, very lightweight) means that it's trivial to write concurrent programs without worrying about the usual headaches of threading. As long as communication between the routines is solely done through channels it just works.

3. It's ok to block

Recently some languages/frameworks (notably node.js) have tried to solve performance problems by making extensive use of callbacks (in the form of anonymous functions) so that whenever a system call is in progress the code can disappear off and do work. I think this makes code seriously ugly.

The ugliness comes from two things. Firstly, you're forced to write code in what seems like a backwards manner constantly writing callbacks that will do things when some other things are done. And secondly, rather than solving the problem at hand node.js forces the programmer to do extra work all the time. That's a sort of programmer busy work that's essentially useless, since computers should be worrying about those sorts of problems, and programmers should be worried about writing functionality.

Go on the other hand allows you to simply block. Create a goroutine to do the work and just do a system call and when it finishes your goroutine can carry on. No need to worry about any underlying implementation.

4. Static duck typing

Go dispenses with type hierarchies and has a simple notion of an interface, which means you get something close to duck typing while keeping static, compile-time type safety. This one feature alone means that it feels like a scripting language.

5. Rich library

The Go library is incredibly rich. Just go look at it. All those library packages are well thought out, clearly documented and performant. Because the library is open source it's easy to dig in and read the code (which is all formatted with gofmt) and see precisely what it does.

These libraries are in real-world use at companies like Heroku, Google, StatHat, SmugMug and more. They are ready for prime time.

6. Instant builds

I started a company to speed up builds (particularly for C/C++) because build time is an absolute productivity killer for any developer. And slow builds are one of the things that distinguish scripting and system languages. Go completely does away with that by cleverly handling the dependency problem in the compiler, making all builds lightning fast.

They are so fast that the Go Playground compiles and runs code while you wait.

7. No free

The language is garbage-collected meaning there are no specific memory allocation/deallocation primitives. I used to love doing memory management because it felt like I was doing really useful work. Having seen Go in action I now think it was total busy work.


I'm sure some people reading this are going to say "But language FrobBub has had NoduleFoos for years cretin!".

You know what, I don't care.

What Go has done is brought together a set of features that make it compact, fast, readable, expressive and fun to program in.  I've programmed seriously in tons of languages (both functional and not) and Go is the most fun I've had in a long time.

Making an old USB printer support Apple AirPrint using a Raspberry Pi

There are longer tutorials on how to connect a USB printer to a Raspberry Pi and make it accessible via AirPrint but here's the minimal ...