Thursday, November 04, 2010

The most common objection to my 'releasing scientific code' post

Is...

And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone?

Or as expressed at RealClimate:

First, the practical scientific issues. Consider, for example, the production of key observational climate data sets. While replicability is a vital component of the enterprise, this is not the same thing as simply repetition. It is independent replication that counts far more towards acceptance of a result than merely demonstrating that given the same assumptions, the same input, and the same code, somebody can get the same result. It is far better to have two independent ice core isotope records from Summit in Greenland than it is to see the code used in the mass spectrometer in one of them. Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them. Better that an ocean sediment core corroborates a cave record than looking at the code that produced the age model. Our point is not that the code is not useful, but that this level of replication is not particularly relevant to the observational sciences.

This argument strikes me as bogus. It comes down to something like "we should protect other scientists from themselves by not giving them code that they might run; by not releasing code we are ensuring that the scientific method is followed".

Imagine the situation where a scientist runs someone else's code on the data that person released and gets the same result. Clearly, they have done no science. All they have done is the simplest verification that the original scientist didn't screw up in their methods. That person has not used the scientific method; they have not independently verified the results, and their work is close to useless.

Is this enough to argue that the code should have been closed in the first place?

I can't see that it is. No one's going to be able to publish a paper saying "I ran X's code and it works"; it would never get through peer review and it isn't scientific.

To return to the first quote above, running someone else's buggy code proves nothing. But by hiding the buggy code you've lost the valuable situation where someone can verify that the code was good in the first place. Just look at the effort I went to to discover the code error in CRUTEM (which, ironically, is one of the 'key observational climate data sets', to use RealClimate's words).

The argument from RealClimate can also be stated as 'running someone else's code isn't helpful so there's no point releasing it'. (This sentence is struck out; see the comments below to understand why.) The premise is reasonable, the conclusion not. I say that because there are other reasons to release code:

1. It can be used by others for other work. For example, good code can form part of a library of code that is used to improve or speed up science.

2. The algorithm in a paper can be quickly checked against the implementation to ensure that the results being generated are correct. For example, the CRUTEM error I found could have been eliminated quickly if the paper and the source code had been available at the same time (a sketch of this kind of check follows this list).

3. Releasing code has a psychological effect which will improve its quality. This will lead to fewer errors on the part of scientists who rely on computer methods.
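
To make the second reason concrete, here is a minimal sketch in Python. The function names and the error formula are illustrative only, not taken from CRUTEM or its paper: the point is simply that, with the paper and the source available together, a reader can re-implement the formula as the paper states it and compare it against the released routine on the same input.

    import math
    import random

    def released_gridbox_error(temps):
        # Stand-in for an error calculation as released alongside a paper.
        n = len(temps)
        mean = sum(temps) / n
        sd = math.sqrt(sum((t - mean) ** 2 for t in temps) / (n - 1))
        return sd / math.sqrt(n)  # standard error of the mean

    def error_from_paper_text(temps):
        # The same quantity, re-implemented independently from the
        # paper's written description.
        n = len(temps)
        mean = sum(temps) / n
        variance = sum((t - mean) ** 2 for t in temps) / (n - 1)
        return math.sqrt(variance / n)

    random.seed(1)
    sample = [random.gauss(0.0, 0.5) for _ in range(30)]
    a = released_gridbox_error(sample)
    b = error_from_paper_text(sample)
    print(a, b, "agree" if math.isclose(a, b) else "DISAGREE: check paper against code")

If the two numbers disagree, either the paper's description or the released code is wrong, and the discrepancy can be tracked down in minutes rather than by reverse engineering.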

4 comments:

admin said...

The argument from RealClimate can also be stated as 'running someone else's code isn't helpful so there's no point releasing it'. The premise is reasonable, the conclusion not.

I'm curious about this quote, because nowhere in the highlighted quote above, nor in the original post did we ever draw that conclusion. Indeed, it says specifically that "this does not imply that code should not be released". How can you conclude that we said the complete opposite?

It may well be that we disagree on some aspects of this and that we might have some further discussion, but it is really hard to tell if you aren't actually reading what is written.

Gavin

John Graham-Cumming said...

Gavin, you are entirely correct. I should not have made that inference and was remiss in posting that sentence.

I don't like to edit blog posts after the fact; I prefer to admit my errors and leave them for others to see along with my retraction.

In this case I will add a note to point to the comments for clarification.

tz said...

You also miss that you can run control datasets through the code. If a flat input or random noise also yields a "hockey stick", there is something biased in the code.

Lab instruments need to be calibrated and compensated, not just read.
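
(A minimal sketch of the control-input test tz describes, with fit_linear_trend as a hypothetical stand-in for whatever trend calculation a paper's released code performs: feed it a flat series and pure noise and confirm that neither produces a meaningful trend.)

    import random

    def fit_linear_trend(series):
        # Ordinary least-squares slope of the series against its index;
        # a stand-in for the released analysis code under test.
        n = len(series)
        mean_x = (n - 1) / 2.0
        mean_y = sum(series) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
        var = sum((x - mean_x) ** 2 for x in range(n))
        return cov / var

    random.seed(0)
    controls = {
        "flat": [0.0] * 1000,
        "noise": [random.gauss(0.0, 1.0) for _ in range(1000)],
    }
    for name, data in controls.items():
        # Both slopes should be close to zero; a large slope from a
        # control dataset points to a bias in the code, not in the data.
        print(name, fit_linear_trend(data))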

dljvjbsl said...

From the post quoting RealClimate

================
Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them
=================

What happens in the case in which there is disagreement between results? How did this discrepancy occur? So running and analyzing the code isn't just repetition; it is an analysis of the reported experiment to see what was responsible for the results that it reported. Is the reported trend a property of the physical data or is it a property of the program itself? Is the result a product of a programming error, a misunderstood or mis-implemented mathematical algorithm, poor selection of data, or a property of the real world?

As RealClimate points out, having multiple independent confirmations is an aid to science, but there is also the common case in which independent results do not agree, and the common case in climate science in which the independent results are not truly independent. Analyzing the experiment by analyzing the code that created the experiment is essential in these common cases.