
The most common objection to my 'releasing scientific code' post


And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone?

Or as expressed at RealClimate:

First, the practical scientific issues. Consider, for example, the production of key observational climate data sets. While replicability is a vital component of the enterprise, this is not the same thing as simply repetition. It is independent replication that counts far more towards acceptance of a result than merely demonstrating that given the same assumptions, the same input, and the same code, somebody can get the same result. It is far better to have two independent ice core isotope records from Summit in Greenland than it is to see the code used in the mass spectrometer in one of them. Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them. Better that an ocean sediment core corroborates a cave record than looking at the code that produced the age model. Our point is not that the code is not useful, but that this level of replication is not particularly relevant to the observational sciences.

This argument strikes me as bogus. It comes down to something like "we should protect other scientists from themselves by not giving them code that they might run; by not releasing code we are ensuring that the scientific method is followed".

Imagine the situation where a scientist runs someone else's code on the data that person released and gets the same result. Clearly, they have done no science. All they have done is the simplest verification that the original scientist didn't screw up in their methods. They have not used the scientific method, they have not independently verified the results, and their work is close to useless.

Is this enough to argue that the code should have been closed in the first place?

I can't see that it is. No one is going to be able to publish a paper saying "I ran X's code and it works"; it would never get through peer review and it isn't scientific.

To return to the first quote above, running someone else's buggy code proves nothing. But in hiding the buggy code you've lost the valuable situation where someone can verify that the code was good in the first place. Just look at the effort I went to to discover the code error in CRUTEM (which, ironically, is one of the 'key observational climate data sets', to use RealClimate's words).

The argument from RealClimate can also be stated as 'running someone else's code isn't helpful so there's no point releasing it'. (see the comments below to understand why this sentence is struck out) The premise is reasonable, the conclusion not. I say that because there are other reasons to release code:

1. It can be used by others for other work. For example, good code can form part of a library of code that is used to improve or speed up science.

2. The algorithm in a paper can be quickly checked against the implementation to ensure that the results being generated are correct. For example, the CRUTEM error I found could have been quickly eliminated by access to the paper and source code at the same time.

3. Releasing code has a psychological effect that improves its quality. This will lead to fewer errors on the part of scientists who rely on computational methods.


admin said…
The argument from RealClimate can also be stated as 'running someone else's code isn't helpful so there's no point releasing it'. The premise is reasonable, the conclusion not.

I'm curious about this quote, because nowhere in the highlighted quote above, nor in the original post did we ever draw that conclusion. Indeed, it says specifically that "this does not imply that code should not be released". How can you conclude that we said the complete opposite?

It may well be that we disagree on some aspects of this and that we might have some further discussion, but it is really hard to tell if you aren't actually reading what is written.

Gavin, you are entirely correct. I should not have made that inference and was remiss in posting that sentence.

I don't like to edit blog posts after the fact; I prefer to admit my errors and leave them for others to see along with my retraction.

In this case I will add a note to point to the comments for clarification.
tz said…
You also miss that you can run control datasets through the code. If a flat input or random noise also yields a "hockey stick", there is something biased in the code.

Lab instruments need to be calibrated and compensated, not just read.
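tz's suggestion can be sketched as a simple null-input test. The `linear_trend` function below is a hypothetical stand-in for a published analysis pipeline (the real code would be whatever the authors released); the point is that a flat series and pure random noise should both produce a trend indistinguishable from zero.

```python
import random

def linear_trend(series):
    """Ordinary least-squares slope of a series against its index.

    Hypothetical stand-in for a published analysis pipeline.
    """
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Control datasets: a flat input and random noise.
flat = [10.0] * 500
random.seed(42)
noise = [random.gauss(0, 1) for _ in range(500)]

# Neither control should show a meaningful trend; if one does,
# the bias is in the code, not the climate.
assert abs(linear_trend(flat)) < 1e-9, "flat input produced a trend"
assert abs(linear_trend(noise)) < 0.01, "random noise produced a trend"
```

This kind of calibration is only possible when the code itself is available, not just its published output.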
dljvjbsl said…
From the post quoting RealClimate

Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them

What happens in the case in which there is disagreement between results? How did the discrepancy occur? Running and analyzing the code isn't just repetition; it is an analysis of the reported experiment to see what was responsible for the results it reported. Is the reported trend a property of the physical data or is it a property of the program itself? Is the result a product of a programming error, a misunderstood or mis-implemented mathematical algorithm, a poor selection of data, or a property of the real world? As RealClimate points out, having multiple independent confirmations is an aid to science, but there is also the common case in which independent results do not agree, and the common case in climate science in which the independent results are not truly independent. Analyzing the experiment by analyzing the code that produced it is essential in these common cases.
