There is a new initiative in psychology, as reported by Science Insider:
"A group of psychologists are launching a project this week that they hope will make studies in their field radically more transparent and prompt other fields to open up as well. With a pledge of $5.25 million from private supporters, they have set up an outfit called the Center for Open Science. It is collaborating with an established journal, Perspectives on Psychological Science, to solicit work from authors who are willing to work completely in the open and have their studies replicated. Authors will be asked to first publish an experimental design and then, after a public vetting, collect data. Findings come in a separate publication."
There is also a nice website that organizes replications (and failed replications): http://psychfiledrawer.org/
For more on this, see the following article in the Chronicle of Higher Education: "New Center Hopes to Clean Up Sloppy Science and Bogus Research" by Tom Bartlett, where he writes:
"The center hopes to encourage scientists to "register" their hypotheses before they carry out experiments, a procedure that should help keep them honest. And the center is working with journals, like Perspectives on Psychological Science, to publish the results of experiments even if they don't pan out the way the researchers hoped. Scientists are "reinforced for publishing, not for getting it right in the current incentives," Mr. Nosek said. "We're working to rejigger those incentives." "
Similar initiatives have been discussed for lab and field experiments in general, and specifically for development economics.
My understanding is that this could address two concerns.
The first is to distinguish findings that test the original hypothesis from happenstance findings. This distinction may be hard to make from a final paper alone, since one could write a model ex post whose hypothesis generates exactly the result found in the experiment. Clearly, this becomes a big issue whenever many variables or many hypotheses are checked in a single experiment.
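To see why checking many hypotheses in one experiment is a problem, here is a minimal sketch in Python. The numbers (20 outcome variables, 50 subjects per arm, a 5% significance level) are illustrative assumptions of mine, not anything from the initiative itself: even when the treatment has no effect on any outcome, some comparisons tend to come out "significant" by chance.

```python
# Sketch: one experiment, many outcomes, no true effect anywhere.
# The specific numbers below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_outcomes = 20      # variables checked in the single experiment
n_per_arm = 50       # subjects in treatment and in control
alpha = 0.05

# Both arms are drawn from the same distribution: the null is true for every outcome.
treatment = rng.normal(size=(n_outcomes, n_per_arm))
control = rng.normal(size=(n_outcomes, n_per_arm))

p_values = np.array([stats.ttest_ind(t, c).pvalue
                     for t, c in zip(treatment, control)])

print(f"outcomes significant at {alpha:.0%}: {np.sum(p_values < alpha)} of {n_outcomes}")
print(f"chance of at least one false positive: {1 - (1 - alpha) ** n_outcomes:.0%}")
```

With 20 independent outcomes at the 5% level, the chance of at least one spuriously significant result is roughly 64%, which is why a pre-registered hypothesis is so much more informative than one written down after seeing the data.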
The second is to reduce the number of studies that end up in a file cabinet because they didn't produce the expected result. For example, running 100 basically similar experiments will quite likely result in some being statistically significant, even if the initial hypothesis is wrong.
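A minimal sketch of this file-drawer point, again with illustrative assumptions (100 experiments, 50 subjects per arm, a 5% threshold): when the true effect is zero, roughly 5 of the 100 experiments will still clear the significance bar, and if only those get written up the published record overstates the effect.

```python
# Sketch: 100 similar experiments of an effect that is truly zero.
# Sample sizes and the 5% threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 100
n_per_arm = 50
alpha = 0.05

significant = 0
for _ in range(n_experiments):
    treatment = rng.normal(size=n_per_arm)   # no true treatment effect
    control = rng.normal(size=n_per_arm)
    if stats.ttest_ind(treatment, control).pvalue < alpha:
        significant += 1

# Around alpha * n_experiments of these null experiments look "significant";
# publishing only those is exactly the file-drawer problem.
print(f"{significant} of {n_experiments} null experiments significant at {alpha:.0%}")
```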
While I think this sounds good in theory, I am less convinced that the latter would work in practice, and I suspect it would slow down research, which does not seem like a good thing.