Scientific knowledge - getting closer to the right answer

The "Science and the Public" story of the year might just be the arsenic-using bacteria. Certainly, Alex's critique has been the most popular post on this blog since we started, and it has received quite a bit of attention from other bloggers as well as from the conventional media. This might be a teachable moment in science communication, but even though it's clear that this wasn't handled particularly well, it's hard to see how things could be done better in the future. Heather's follow-up post is a great summary of how science works in the real world, but I think it also illustrates a fundamental difference between how science collects knowledge and how most people do in their daily lives, and especially how the media decides what counts as "news."

Consider Heather's first steps in the scientific process:

  1. Scientist thinks they have made a major new discovery.
  2. Scientist writes up said discovery and submits a paper to a peer-reviewed journal (the bigger the claim, the fancier the journal).
  3. If the editors of the journal like the paper, they send it off to three or so experts in the field, who review it and provide the editor with feedback as to whether they feel the work is valid.
  4. Eventually, let's say, the paper makes its way through the reviewers and gets published.

Advancing science is a formal process, and it's collaborative. In principle, it doesn't fall sway to arguments from authority or many of the other fallacies that people often succumb to. Arguments are based on data, and rebuttals use data to refute claims. I think that's been accurately reflected in this debate: Wolfe-Simon and her colleagues published a paper containing a great deal of data, which they interpreted in a particular way. Most of the criticism has focused on the data, the methods used to collect it, or the claims made on the basis of it. As a scientist, attacks on your data can feel like personal attacks, and sometimes debates like this can get quite heated, but the currency is almost always the data itself.

As I said, this is how it works in principle, but scientists are human too. In reality, an established lab with lots of credibility will have an easier time publishing sketchy data than a freshly minted assistant professor, and people will look more critically at a paper published by a fierce competitor. In general, though, over time these biases work themselves out. The critical piece is the process that happens after a paper is published:

  1. Other scientists start to pick at the paper (again, the bigger the claim, the more criticism typically surfaces).
  2. The original authors might perform additional experiments that address these criticisms.
  3. Other scientists might carry out experiments that disprove the claims.
  4. The original claim either does, or does not, stand the test of time.

Unfortunately, the media rarely reports this part of the story. It's no wonder, really; it can take decades for a consensus to be reached. Science advances incrementally, in fits and starts, and the really fundamental breakthroughs often aren't appreciated until years later. But this process is the key to the success of science: it acknowledges the propensity for error. And because of that, we get incrementally closer to the right answer.

There's more to say on this, but I wanted to throw this out and get some feedback. Do you think there's a way for science or science reporting to make this long slog interesting, or to make it seem like a good thing? In most other contexts, constantly getting things wrong would be seen as a bad thing, but I think it's the greatest strength of science.