Sunday, October 17, 2010

The Journal of Null Results

Scientists often jokingly tout the need for a "journal of null results", a way for researchers to find out about all the projects that didn't yield sexy results (all right, turns out jellyfish DON'T fluoresce purple when exposed to Jimi Hendrix). Notwithstanding the fact that no one would ever, ever read this journal (like an early version of The Mighty Ducks in which our avian protagonists place fifth in the tournament), they're right. By limiting our ken to "successful" studies, we are committing an egregious sampling error. It's an amateur mistake, but we do it all the time! Give me funding for a hundred experiments with similar parameters, and I will prove to p < .05 that Chuck Norris's tears cure cancer. I'll prove it five times, and I'll publish. WATCH ME.
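The arithmetic behind that boast is easy to check. Here's a toy simulation (not any real study, just hypothetical "experiments"): when the null hypothesis is true, a valid test produces p-values uniformly distributed on [0, 1], so each experiment has a 5% chance of clearing p < .05 by luck alone.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

# Run 100 "experiments" in which Chuck Norris's tears do nothing.
# Under the null, each p-value is uniform on [0, 1], so we expect
# about 5 of the 100 to come out "significant" anyway.
n_experiments = 100
p_values = [random.random() for _ in range(n_experiments)]
significant = sum(p < 0.05 for p in p_values)

print(f"{significant} of {n_experiments} null experiments hit p < .05")
```

Publish only those few and bury the rest, and the literature "proves" the effect; that's the sampling error in miniature.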

My point is that we need more transparency in science:

  1. ALL data should be readily available, at least within the academic community.
  2. Research should be judged based on the quality of the work, not on the direction of the results. Sometimes well-designed experiments fail to reject the null hypothesis. That's why we do science.
It's doable! Here's why:
  1. Data storage is cheap. My entire Master's takes up about $20 worth of hard drive space, including 20 subjects' worth of whole-brain fMRI data.
  2. We have the Internet, via which information can be easily and cheaply shared.
  3. There are far more researchers than funding sources. For example, agencies like the NIH fund an enormous fraction of biomedical research in the U.S. (28% from the NIH as of 2003). Creating a centralized data-sharing community should be easy when the money is all coming from the same place.
Done ranting; time to go back to determining the locus of the soul. Wish me luck.
