And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone?
Or as expressed at RealClimate:
First, the practical scientific issues. Consider, for example, the production of key observational climate data sets. While replicability is a vital component of the enterprise, this is not the same thing as simply repetition. It is independent replication that counts far more towards acceptance of a result than merely demonstrating that given the same assumptions, the same input, and the same code, somebody can get the same result. It is far better to have two independent ice core isotope records from Summit in Greenland than it is to see the code used in the mass spectrometer in one of them. Similarly, it is better to have two (or three or four) independent analyses of the surface temperature station data showing essentially the same global trends than it is to see the code for one of them. Better that an ocean sediment core corroborates a cave record than looking at the code that produced the age model. Our point is not that the code is not useful, but that this level of replication is not particularly relevant to the observational sciences.
This argument strikes me as bogus. It comes down to something like "we should protect other scientists from themselves by not giving them code that they might run; by not releasing code we are ensuring that the scientific method is followed".
Imagine the situation where a scientist runs someone else's code on the data that person released and gets the same result. Clearly, they have done no science. All they have done is the simplest verification that the original scientist didn't screw up in their methods. Such a person has not used the scientific method: they have not independently verified the results, and their work is close to useless.
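To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (this is not the CRUTEM code, and the bug is a made-up one): rerunning identical code on identical data is deterministic, so it reproduces the bug right along with the result.

```python
import math
import statistics

# Hypothetical example: a buggy implementation of the standard error of
# a set of station temperature anomalies. The bug (dividing by n instead
# of n - 1 when estimating the variance) is invented for illustration.
def buggy_standard_error(anomalies):
    n = len(anomalies)
    mean = sum(anomalies) / n
    variance = sum((a - mean) ** 2 for a in anomalies) / n  # BUG: should be n - 1
    return math.sqrt(variance / n)

data = [0.12, 0.31, -0.05, 0.22, 0.18]

# "Replication" by rerunning: identical input, identical code, identical
# (wrong) answer. This tells you nothing about correctness.
print(buggy_standard_error(data) == buggy_standard_error(data))  # True

# Independent verification: a second implementation of the same statistic.
independent = statistics.stdev(data) / math.sqrt(len(data))
print(buggy_standard_error(data), independent)  # the two values differ
```

Rerunning the function proves nothing; the independent calculation is what exposes the bug. And once the two disagree, it is the open code that lets you find out why.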
Is this enough to argue that the code should have been closed in the first place?
I can't see that it is. No one is going to be able to publish a paper saying "I ran X's code and it works"; it would never get through peer review, and it isn't science.
To return to the first quote above, running someone else's buggy code proves nothing. But by hiding the buggy code you've lost something valuable: the possibility that someone will verify whether the code was good in the first place. Just look at the effort it took me to discover the code error in CRUTEM (which, ironically, is a 'key observational climate data set', to use RealClimate's words).
There are, in addition, concrete benefits to releasing code:

1. It can be used by others for other work. For example, good code can form part of a library of code that is used to improve or speed up science.
2. The algorithm in a paper can be quickly checked against the implementation to ensure that the results being generated are correct. For example, the CRUTEM error I found could have been eliminated quickly by anyone with access to the paper and the source code at the same time (see the sketch after this list).
3. Releasing code has a psychological effect: code written in the knowledge that it will be publicly scrutinized tends to be written more carefully. That will lead to fewer errors on the part of scientists who rely on computational methods.
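Point 2 deserves a concrete illustration. Here is a minimal sketch where the paper, the baseline period, and the function names are all invented for the purpose: the stated method says one thing, the code does another, and only having both in hand makes the comparison trivial.

```python
# Hypothetical sketch: the (invented) paper says anomalies are computed
# relative to the mean over the 30-year baseline 1961-1990 inclusive.
def baseline_mean(temps_by_year, start=1961, end=1990):
    # BUG: range(start, end) excludes the final year, 1990, so this
    # averages 29 years rather than the 30 the paper describes.
    baseline = [temps_by_year[y] for y in range(start, end)]
    return sum(baseline) / len(baseline)

def anomaly(temps_by_year, year):
    return temps_by_year[year] - baseline_mean(temps_by_year)

# Synthetic data: a gentle warming trend.
temps = {y: 14.0 + 0.01 * (y - 1950) for y in range(1950, 2000)}

# Side by side with the paper, the mismatch between "1961-1990 inclusive"
# and range(1961, 1990) is a one-line check:
print(len(range(1961, 1990)))  # 29, not 30
print(anomaly(temps, 1999))
```

An off-by-one like this is invisible to someone who only reads the paper, and invisible to someone who only reruns the code; it falls out immediately when the two are compared.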