Where Science is Flawed



I heartily agree with the article below, which is why I reproduce it in full (with links). And its relevance to Warmist "science" needs no spelling out. I saw all of the faults discussed below during my own social-science research career, and to this day I tackle similar problems daily in my FOOD & HEALTH SKEPTIC blog.

I can, however, go beyond the article below and point to a remedy for these now well-documented faults in scientific reporting. The remedy is to encourage similar research by those who have an OPPOSITE agenda to the established writers. Because I am a conservative, I saw the received wisdom in my Left-dominated field of research as quite absurd. And I set out to show that such theories were absurd. And I did. I even got my findings published over and over again in the academic journals. My findings, however, had no impact whatever. Leftists didn't want to believe my findings, so they simply ignored them.

If, however, I had been one of many people with opposing views writing in the field, our work would have been much harder to ignore, and a more balanced view might have emerged as the consensus position.

At the moment, however, being skeptical of any scientific consensus is career death. So the only remedy is for skeptical views to be specifically rewarded: in the marking of student work, in academic hiring, and in career advancement. It is only a faint hope, but perhaps there are enough people of integrity in science to bring that about eventually. Science will be greatly hobbled otherwise -- JR


In its current issue, The New Yorker has an excellent piece on the prevalence of (unconscious) bias in scientific studies that builds on this recent must-read piece in The Atlantic. And to some extent, Jonah Lehrer’s New Yorker article builds on this story he did for Wired in 2009. Anyone interested in the scientific process should read all three, for they are provocative cautionary tales.

Back to Lehrer’s story in The New Yorker. I’m going to quote from it extensively because it’s behind a paywall, but I urge people to buy a copy of the issue off the newsstand, if possible. It’s that good. His piece is an arrow into the heart of the scientific method:
The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn't yet have an official name, but it's occurring across a wide range of fields, from psychology to ecology.

How did this happen? How have “enshrined” findings that were replicated suddenly become undone? The fatal flaw appears to be the selective reporting of results–the data that scientists choose to document in the first place.

This is not the same as scientific fraud, Lehrer writes:
Rather the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.

He then describes “one of the classic examples” of selective reporting:
While acupuncture is widely accepted as a medical treatment in various Asian countries, its use is much more contested in the West. These cultural differences have profoundly influenced the results of clinical trials. Between 1966 and 1995, there were forty-seven studies of acupuncture in China, Taiwan, and Japan, and every single trial concluded that acupuncture was an effective treatment. During the same period, there were ninety-four clinical trials of acupuncture in the United States, Sweden, and the U.K., and only fifty-six percent of these studies found any therapeutic benefits. As [University of Alberta biologist Richard] Palmer notes, this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
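Just how wide is that discrepancy? Here is a minimal sketch in plain Python of a standard two-proportion z-test on the figures quoted above. Note one assumption of mine: the article gives fifty-six percent of ninety-four Western trials, which I round to 53 positive trials.

```python
import math

# Figures from the quoted passage: 47 of 47 Asian trials positive;
# 56% of 94 Western trials positive. The count of 53 is my rounding,
# since the article reports only the percentage.
pos_asia, n_asia = 47, 47
pos_west, n_west = round(0.56 * 94), 94  # ~53

p1 = pos_asia / n_asia
p2 = pos_west / n_west

# Pooled two-proportion z-test: could the two positive rates
# plausibly come from the same underlying rate?
p_pool = (pos_asia + pos_west) / (n_asia + n_west)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_asia + 1 / n_west))
z = (p1 - p2) / se

# One-sided normal tail probability via the complementary error function.
p_value = 0.5 * math.erfc(z / math.sqrt(2))

print(f"Asia: {p1:.0%} positive, West: {p2:.0%} positive")
print(f"z = {z:.1f}, one-sided p = {p_value:.1e}")
```

The gap comes out at more than five standard errors, so chance alone essentially cannot explain it -- which is exactly Palmer's point about scientists confirming their preferred hypotheses.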

Lehrer then introduces Stanford epidemiologist John Ioannidis, the star of The Atlantic story. Lehrer writes:
According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance–the ninety-five-percent boundary invented by Ronald Fisher. “The scientists are so eager to pass this magical test that they start playing around with the numbers, trying to find anything that seems worthy,” Ioannidis says. In recent years, Ioannidis has become increasingly blunt about the pervasiveness of the problem. One of his most cited papers has a deliberately provocative title: “Why Most Published Research Findings Are False.”
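Significance chasing is easy to demonstrate. The sketch below (plain Python; the twenty “cuts” of the data and the sample sizes are illustrative assumptions of mine, not anything from Ioannidis's papers) draws pure noise and then tests twenty different comparisons against the p < 0.05 threshold. At least one comparison comes up “significant” about two-thirds of the time, even though no real effect exists anywhere.

```python
import math
import random
import statistics

random.seed(42)

def two_sample_pvalue(a, b):
    """Two-sided p-value for a difference in means, using a normal
    approximation to the t statistic (adequate for n = 30 per group)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return math.erfc(z / math.sqrt(2))

TRIALS, CUTS, N = 2000, 20, 30
hits = 0
for _ in range(TRIALS):
    # Pure noise: there is no real effect in any of the twenty cuts.
    for _ in range(CUTS):
        a = [random.gauss(0, 1) for _ in range(N)]
        b = [random.gauss(0, 1) for _ in range(N)]
        if two_sample_pvalue(a, b) < 0.05:
            hits += 1
            break

print(f"At least one 'significant' cut: {hits / TRIALS:.0%}")
print(f"Analytic expectation: {1 - 0.95 ** 20:.0%}")  # about 64%
```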

The problem of selective reporting is rooted in a fundamental cognitive flaw, which is that we like proving ourselves right and hate being wrong. “It feels good to validate a hypothesis,” Ioannidis said. “It feels even better when you’ve got a financial interest in the idea or your career depends on it. And that’s why, even after a claim has been systematically disproven”–he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins–“you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it’s true.”

That’s why [UC Santa Barbara cognitive psychologist Jonathan] Schooler argues that scientists need to become more rigorous about data collection before they publish. “We’re wasting our time chasing after bad studies and underpowered experiments,” he says. The current “obsession” with replicability distracts from the real problem, which is faulty design… In a forthcoming paper, Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says.
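Schooler's complaint about underpowered experiments can also be made concrete. In the sketch below (plain Python; the effect size, sample size, and run count are illustrative assumptions of mine, not Schooler's numbers), a small true effect is studied with a small sample. Only about one run in ten reaches significance, and those “winning” runs overstate the true effect roughly threefold -- precisely the sort of strong early finding that later fails to replicate.

```python
import math
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # small real effect, in standard-deviation units
N = 25              # small sample per group, hence low statistical power
RUNS = 5000

significant = []
for _ in range(RUNS):
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / N +
                   statistics.variance(control) / N)
    # Normal approximation to a two-sided test at the 0.05 level.
    if abs(diff) / se > 1.96:
        significant.append(diff)

print(f"Power: {len(significant) / RUNS:.0%}")  # around 11%
print(f"True effect: {TRUE_EFFECT}")
print(f"Mean effect in 'significant' runs: "
      f"{statistics.mean(significant):.2f}")    # around 0.6, inflated ~3x
```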

As I said, you really should read the whole piece if you want to learn more about this widespread but little-discussed problem with a key tenet of the scientific method. Lehrer perceptively concludes:
We like to pretend that our experiments define the truth for us. But that’s often not the case. Just because an idea is true doesn’t mean it can be proved. And just because an idea can be proved doesn’t mean it’s true. When the experiments are done, we still have to choose what to believe.

SOURCE
