Thursday, October 14, 2010

What Nature didn't say

Nature has a nice article about scientific software which starts by mentioning the hacking of the Climatic Research Unit and the release of software from the hacked files, and then goes on to talk generally about the state of scientific software. My summary would be that it's generally a mess because software engineering has crept up on scientists and now they need to get educated about things that have been common in the commercial software world for years.

Which is pretty much what I said on Newsnight in December 2009.

The article begins:

When hackers leaked thousands of e-mails from the Climatic Research Unit (CRU) at the University of East Anglia in Norwich, UK, last year, global-warming sceptics pored over the documents for signs that researchers had manipulated data. No such evidence emerged, but the e-mails did reveal another problem — one described by a CRU employee named "Harry", who often wrote of his wrestling matches with wonky computer software.

"Yup, my awful programming strikes again," Harry lamented in one of his notes, as he attempted to correct a code analysing weather-station data from Mexico.

Although Harry's frustrations did not ultimately compromise CRU's work, his difficulties will strike a chord with scientists in a wide range of disciplines who do a large amount of coding.

True enough that the messy code from CRU wasn't shown to compromise any of their scientific results. None of the enquiries into "ClimateGate" examined the CRU code. I did show, however, that the code I saw was buggy. (See Whoops, there's a third bug in that code and Bugs in the software flash the message 'Something's out there'.) In fact, the best that can be said is that CRU's code was buggy and we don't know whether those bugs have a material impact on the science.

Another piece of CRU-related code, the program used by the Met Office to produce the CRUTEM3 temperature dataset, was similarly buggy. I first showed that there were errors in the way the station files were generated (see The full response from the Met Office; by the way, I'm still waiting for them to make good on their promise to credit me), and then (with Ilya Goz) showed that the program used to generate the station errors in CRUTEM3 wasn't working (see Something odd in CRUTEM3 station errors and Met Office confirms that CRUTEM3 station errors are incorrectly calculated).

What's interesting about these bugs is that practices like unit testing and build automation (even via make, which has been around since the 1970s) would have helped the Met Office avoid its problems, and likely the bugs in the CRU code as well. It really would be a good idea to introduce commercial best practices to scientists.
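To make the unit-testing point concrete, here's a minimal sketch of what I mean. This is not CRU's or the Met Office's actual code; the function, the -9999 missing-value sentinel, and the test cases are all hypothetical, but they show how a few lines of test code catch exactly the kind of silent data-handling bug that plagues station-processing programs.

```python
import unittest

MISSING = -9999  # hypothetical sentinel for missing monthly readings


def monthly_anomalies(temps, baseline):
    """Return temp minus baseline for each of 12 months.

    Missing values must propagate as MISSING rather than being
    subtracted as if they were real temperatures.
    """
    if len(temps) != 12 or len(baseline) != 12:
        raise ValueError("expected 12 monthly values")
    return [
        MISSING if t == MISSING or b == MISSING else t - b
        for t, b in zip(temps, baseline)
    ]


class TestMonthlyAnomalies(unittest.TestCase):
    def test_simple_difference(self):
        # A flat 1-degree anomaly across the year.
        self.assertEqual(monthly_anomalies([10.0] * 12, [9.0] * 12),
                         [1.0] * 12)

    def test_missing_values_propagate(self):
        # The classic bug: -9999 treated as data gives a huge bogus anomaly.
        temps = [10.0] * 11 + [MISSING]
        result = monthly_anomalies(temps, [9.0] * 12)
        self.assertEqual(result[-1], MISSING)

    def test_wrong_length_rejected(self):
        # Truncated station records should fail loudly, not silently.
        with self.assertRaises(ValueError):
            monthly_anomalies([10.0] * 11, [9.0] * 12)


if __name__ == "__main__":
    unittest.main()
```

Run it with `python -m unittest` and the suite either passes or names the failing case; with make (or any build tool) running the tests automatically, a regression like the missing-value one can't slip into a published dataset unnoticed.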

Nature didn't mention any of that. Pity. Those are real bugs in the real software related to CRU.

I don't know whether any of this leads to problems with actual climate science; that would take a real examination of the source code used to produce published papers. But it does concern me that it was so easy to find so many errors. There really could be a nasty surprise lurking in the code.


Justin Mason said...

This also suggests that academics, while studying CS through crossover courses at universities, are not being trained in the use of unit tests, testing methodologies, etc. The universities need to update their syllabi.

Rui Sousa said...

Totally agreed.

Software development is a fairly mature field of knowledge.

There is a world of methodologies and best practices available (CMMI, PRINCE2, PMI, Agile, SWEBOK, and thousands of ISO standards).

Testing is just one of the points covered by these methodologies, and it is really a shame that code developed by scientists does not draw on the accumulated knowledge of software professionals around the world.