For many researchers, the scientific method is as close to a religion as they’ll ever get. The appeal is similar: The scientific method, rigorously followed, provides disciples with a hint of the objective truth an all-knowing god might impart. But the path to that truth is rocky. While the rules of the scientific method are wonderful guidelines, they, like religious commandments, can be broken. Scientists can make simple mistakes or be subtly biased by their desire for prestige, interfering with the sanctity of their results.
It doesn’t have to be that way. Right now, science is undergoing a correction of sorts—trying as hard as it can to remove all the little ways scientists get in the way of their own work. Today, in a big step toward that correction, the Open Science Collaboration published replications of 100 psychology studies—studies that had already been done. By replicating those studies and checking whether their results could be reproduced, the project seeks to understand the ways in which science’s current procedures are flawed.
The results, published in Science, may seem discouraging on their face. The collaboration successfully reproduced fewer than half of the results of those 100 studies. Only 36 percent of the replicated studies showed significant results, compared to 97 percent of the originals. But the results themselves are not so much the point here. What’s more important is the framework used to successfully conduct those replications, which points toward a revolution—made possible by the new, Internet-connected world—in how science is conducted, reviewed, and consumed.