Anyway, I read a great section on the possible "traps" of peer review today over in Slothville, and I figured I'd share it here because it pretty tidily sums up something I've noticed but not seen nearly enough attention drawn to. In a recent entry, Shalizi writes (emphasis mine):
...passing peer review is better understood as saying a paper is not obviously wrong, not obviously redundant and not obviously boring, rather than as saying it's correct, innovative and important. Even this misses a deeper problem, a possible failure mode of the scientific community. A journal's peer review is only as good as the peers it uses as reviewers. If everyone, or almost everyone, who referees for some journal is in the grip of the same mistake, then they will not catch it in papers they review, and the journal will propagate it. In fact, since journals usually recruit new referees from their published authors or people recommended by old referees, mistakes and delusions can become endemic and self-confirming in epistemic communities associated with particular journals...
...Put simply, the problem is that any group of quack scholars with a shared delusion can put together a journal, dub each other peer reviewers, and go on their cheerful way by endorsing each others' work for their journal. (One of the ways you can tell that intelligent design creationism is a propaganda front and not a real, if stupid, scholarly movement is that their effort to put together just such a journal was never more than half-assed, and it's been moribund for some time now.) This isn't even always a bad thing, since sometimes people who seem like quacks are in fact right, and doing things like starting their own journals gives them a chance to get their act together and assemble a convincing case. But all of this does mean that the peer-review filter is a very weak and accepting one, especially on controversial topics. It does not seem unreasonable of me to ask that those who set themselves up as science reporters grasp this.
Of course, Shalizi is not saying that peer review is useless, or that it doesn't ever "work" -- rather, he's just pointing out something that those of us whose primary connection to the world of experimental science is through papers we've found online would be smart to take heed of. And that is the fact that it isn't enough for a paper to simply be "peer reviewed" or published in a journal. It also has to reflect good experimental design and analysis that takes bias into account (and endeavors to correct for it), among other factors.
The Internet has made peer-reviewed literature available to a far wider range of people than ever before. Even when I was in college between 1997 and 2002 (which wasn't too long ago), I had to go into an actual library building and access microfilm or microfiche to get at real scientific papers. But over the past few years, I've noticed a rather curious phenomenon becoming more and more common online. Pretty much everyone is familiar with "Argumentum Ad Wikipedium" -- when a person on a forum or mailing list links to or copies whole huge blocks of text from Wikipedia articles in order to support their case. But what's also common (particularly among science and technology bloggers) is to do something similar with peer-reviewed papers and other academic literature.
Overall, I see this as great! The fact that we average Internet-enabled citizens now have so much more information at our fingertips is wonderful. I can't help but be in awe sometimes at how much has apparently long been going on in laboratories and university research centres, but which was until recently filtered through layers and layers of interpretation and mass-media distortion before any folks not directly involved in the research got to take a peek at it. There's definitely a lot of fascinating stuff going on (I'm personally very interested in research areas pertaining to autism, biogerontology, artificial intelligence, general neurology, etc.) and it's quite a breath of fresh air to be able to read about what actually went on in an experiment rather than just reading a brief (and quite probably sensationalized) summary account in the popular news media.
But -- there is a caveat here. Those of us who like to read peer-reviewed papers and who use them to back up our own assertions need to independently develop good critical thinking skills, and we need to maintain an "outside the system" vantage point so that we don't get sucked into unthinking trust of the peer-review process. Science isn't just a set of instructions; it's also something people need to consciously engage with -- that is, we can't just read papers and assume that since they were peer-reviewed they reflect good experimental design and such. We need to know how to tell good experiments from bad experiments in general, and we should also check up on the authors of the papers we read to some extent. As Shalizi notes, sometimes you do end up coming across someone who sounds "quacky" but who is in fact simply ahead of the academic curve, but there are definitely a lot of actual quacks out there. And, frighteningly, some of them have their own journals.
I remember a while back reading through a paper on IQ that someone had linked from a mailing list. At first this paper came across like a fairly standard piece of academic writing, but as I read further, I began to get suspicious. Some of it sounded, well, more than a little bit racist. My first signal was that the word "White" was capitalized when it referred to the color of a subject's skin. While this is by no means proof of bias (capitalization quirks can be just that -- quirks of individual writers -- without indicating anything about those writers' biases), it did prompt me to go and look up the authors to see what else they'd been up to. Sure enough, I found that the main writer had been associated with racist activities and organizations.
I'm not going to reveal any names here because I don't want this to become "about" a specific individual or a particular paper -- the point here is just that as soon as I read Shalizi's comment, I knew I'd seen exactly the sort of thing he was referring to.
This age of unparalleled transparency between the scientific community and the (hopefully growing) scientifically literate public is most certainly a positive thing -- but as we gain access to more and more information, we have to develop the skills to avoid being fooled by dubiously clever cranks who find that new electronic publishing media make it very easy to produce something that "looks official" without having the necessary scientific substance behind it.