Tuesday, September 23, 2014

Getting more out of science

There has been a lot of activity recently around how to ensure science doesn't get sidetracked by wrong or non-robust results.

A recent New York Times piece by Brendan Nyhan on this is quite good: To Get More Out of Science, Show the Rejected Research

He writes:
"The intense competition for space in top journals creates strong pressures for novel, statistically significant effects. As a result, studies that do not turn out as planned or find no evidence of effects claimed in previous research often go unpublished, even though their findings can be important and informative."

"This pattern of publication bias and failed replications, which is drawing attention in fields from psychology to medicine, has prompted great alarm within the scientific community. "

"Others advocate requiring the registration of trials before data has been collected. For instance, some social scientists have voluntarily begun to preregister analysis plans for experiments to minimize concerns about selective reporting. Unfortunately, the demand for statistically significant results is still likely to create publication bias. For example, federal law and journal policies now require registration of clinical trials, but publishing of trial results has been found to be selective, to frequently deviate from protocols and to emphasize significant results. "

His solution:
"Instead, my colleagues and I propose a radically different publishing model: Ask journal editors and scientific peers to review study designs and analysis plans and commit to publish the results if the study is conducted and reported in a professional manner (which will be ensured by a second round of peer review)."

Funnily enough, this comes up again and again in experimental economics, most often when people think about controversies, or when they get annoyed that their paper, which had such a nice design, didn't get a good publication just because the results were perhaps not as exciting as one could have hoped for...

I actually don't agree. There are many ways in which an interesting design can deliver results that are not interesting. The problem is that it is often easier to detect whether a result was achieved for the wrong reasons (though even that may not always be easy) than to detect why an expected result failed to appear. And there could be many boring reasons why an expected result did not happen. An extreme example: things were so complicated that everything just turned out to be very noisy. Think of running your perfect design, but with subjects who don't actually understand your language. The results of the best design might not be worth publishing, and correctly so.

Likewise, many designs that we thought were uninteresting may turn out to be very interesting and important. Not everyone will have the same intuition.
