The scientific method -- make an observation, form a hypothesis, test your prediction, obtain data -- is the cornerstone of science, right up there with dramatically removing your glasses and exclaiming, "My god ..." But modern science has added a step you didn't learn about in third grade: publish your results. Publication is an important way for scientists to share research and advance their careers. Here's how it's destroying science.
#6. Negative Results Are Ignored
An experiment will give you either a positive result ("Just as I thought, my car has a flat tire") or a negative result ("My tire's fine, but I have bad news for my dog-owning neighbor"). It's estimated that no more than 10 percent of hypotheses should actually pan out. So why do positive results make up 70 to 90 percent of all published papers? Are modern scientists mutants with the power to generate facts?
Sadly, no. The majority of papers contain positive results because that's what publishing companies want. No one wants to read about dozens of weight-loss drugs that made test subjects gain 10 pounds and a third nipple -- they want to read about the one that will get them back in their swimsuits.
This incredibly common practice is called publication bias, and like trained mice, some scientists have adapted to it with "p-hacking." That sounds gross, but it's essentially re-running and re-slicing your tests until one of them happens to come back positive. It's dishonest and irresponsible, but the more papers you publish, the more likely you are to get grants, jobs, tenure, and science groupies.
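To see how little effort this takes, here's a back-of-the-envelope sketch (plain Python with numpy and scipy; the "drug," the 20 outcomes, and the group sizes are all invented for illustration) that tests a completely useless pill against 20 different outcomes and reports whichever comparison luck decided to bless:

```python
# A toy demonstration of p-hacking. The "drug" below does nothing at all --
# both groups are drawn from the same distribution -- but test enough
# outcomes and report only the best one, and "significance" shows up by
# luck alone. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_outcomes = 20   # weight, mood, blood pressure, nipple count ...
n_subjects = 30   # per group

p_values = []
for _ in range(n_outcomes):
    placebo = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
    drug = rng.normal(loc=0.0, scale=1.0, size=n_subjects)  # same distribution: no real effect
    _, p = stats.ttest_ind(drug, placebo)
    p_values.append(p)

# With 20 shots at the 0.05 threshold, the odds of at least one false
# positive are about 1 - 0.95**20, or roughly 64 percent.
print(f"smallest p-value out of {n_outcomes} tries: {min(p_values):.3f}")
```

Run enough comparisons on pure noise and you're practically guaranteed at least one publishable-looking hit.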
How This Affects You
Anyone who uses medication can fall victim to this problem. Only papers stating that a drug works are published, while studies that find a negative result are locked in the attic and neglected like a NordicTrack. This leads to doctors falsely believing a drug is fantastic because all they can read are rave reviews, when in reality it may be no more effective than a handful of crystals and positive thought.
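Here's roughly what that does to the numbers, as a toy simulation (Python with numpy and scipy; the 200 trials, 25 patients per arm, and the tiny 0.1 effect are all made up, not taken from any real drug) in which journals only print the flattering trials:

```python
# Toy model of publication bias: run lots of small trials of a nearly
# useless drug, "publish" only the ones that come out positive and
# significant, and compare the published average to the truth.
# All numbers are invented; requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.1                 # the drug barely works at all
n_trials, n_subjects = 200, 25    # 200 small studies, 25 patients per arm

published, all_estimates = [], []
for _ in range(n_trials):
    placebo = rng.normal(0.0, 1.0, n_subjects)
    drug = rng.normal(true_effect, 1.0, n_subjects)
    estimate = drug.mean() - placebo.mean()
    _, p = stats.ttest_ind(drug, placebo)
    all_estimates.append(estimate)
    if estimate > 0 and p < 0.05:          # only the flattering trials see print
        published.append(estimate)

print(f"true effect:                 {true_effect:.2f}")
print(f"average across all trials:   {np.mean(all_estimates):.2f}")
print(f"average of published trials: {np.mean(published):.2f}")  # several times the truth
```

The published average lands at several times the true effect, and that inflated figure is the only one your doctor ever gets to see.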
While this usually just means it takes three tries to get rid of that unsightly face rash, publication bias can fucking kill you. Lorcainide was supposed to combat heart arrhythmia, but trials found the troubling side effect of death. Because of this buzzkill result, the study was never published. Years later, unaware pharmaceutical companies developed similar anti-arrhythmia drugs, and an estimated 100,000 deaths were accelerated through their use. Even after those drugs had brought more people to their premature graves than the McDonald's dollar menu, the authors of the original study were still rejected by multiple journals before finally getting published. Because we wouldn't want science to be a downer.
#5. Scientists Don't Have To Show Their Work
When your friend tells you they arm-wrestled Bill Murray last night, your first reaction is, "Pics or it never happened." Congratulations! You have higher standards than most scientific journals. When scientists submit a paper, they're rarely required to show raw data. Only about 15 percent of journals even have instructions for sharing it, and enforcement is often more lax than anti-media-piracy laws after the apocalypse. Restricting access to data stalls progress and makes it harder to replicate a study. It's hard to stand on the shoulders of giants when they never lean down to let you climb on.
How This Affects You
When scientists don't publish raw data, it's difficult for their work to be checked. One particularly damaging error occurred in 2010 with the publication of an influential paper that concluded that countries with large debts experience lower economic growth. This study was cited by everyone from Rand Paul to your racist Facebook uncle to justify large cuts in government spending, with economist Paul Krugman saying it "may have had more immediate influence on public debate than any previous paper in the history of economics."
However, the authors made a bit of a data analysis error. And by that we mean they messed up their Excel formula. Fix that, and you'll find that high-debt countries actually experienced modest growth -- the exact opposite of the original conclusion. The error was found only after three other economists personally asked the authors for their data. But at least theirs was a genuine whoopsie -- scientists can just as easily make intentional "errors" or simply invent numbers. Once the results are published, they're accepted by the scientific community, meaning your doctor might be basing treatment decisions solely on the power of one particularly whimsical researcher's imagination.
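For the spreadsheet-inclined, here's a made-up miniature of the same blunder -- none of these growth figures are the real Reinhart-Rogoff data, but watch what happens when the averaging formula stops a few rows short:

```python
# The Reinhart-Rogoff slip boiled down to an averaging formula that didn't
# cover every row of the spreadsheet. These growth figures are invented for
# illustration only; the point is that skipping a few rows can flip the
# sign of an average.
growth_of_high_debt_countries = [      # hypothetical GDP growth, in percent
    -7.9, -2.0, 1.0, 0.7, -1.5, 0.3, 2.2,   # rows the formula covered
    3.8, 3.0, 2.6, 3.1, 2.9,                # rows the formula missed
]

covered = growth_of_high_debt_countries[:7]   # like =AVERAGE(B2:B8) when the data runs to B13
everything = growth_of_high_debt_countries

print(f"buggy average:   {sum(covered) / len(covered):+.2f}%")        # negative: debt looks deadly
print(f"correct average: {sum(everything) / len(everything):+.2f}%")  # modest positive growth
```

Which is more or less what those three economists did once they finally got their hands on the file.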
The research of ex-doctor Don Poldermans showed that beta-blockers given during surgery didn't lead to an increase in deaths. The research of Don Poldermans also apparently consisted of finding a favorable random-number generator.
The case of Joachim Boldt (one-time record holder for number of retracted papers, with 90) is even worse. Boldt faked data claiming that a particular type of IV drip caused no increase in patient deaths, whereas most other researchers found the opposite. Boldt convinced many doctors to continue using the drip for years, which caused an estimated 20,000 deaths.