Whom to believe?

Some interesting excerpts below from a long article that spends a lot of time looking at old philosophical questions. The author does make the point that decisions on some matters of alleged truth are easier to arrive at than others. And I think global warming is an example of that. Philosophical enquiries about "what is truth?" are largely irrelevant to assessing it. Why? Because it is a prophecy, not a fact. There is no proof available about the future of the climate. All we know is that sometimes it gets hotter and sometimes it gets colder. No other facts are available. So, from a philosophical viewpoint, it should not be seen as any kind of fact. It is outside the purview of science.

There are some areas of science that CAN produce accurate prophecies. The orbits of the inner planets can, for instance, be predicted with great accuracy. But they can be predicted because they show great regularity. The fact involved in the prediction is that great regularity: there are facts there. But there is nothing like that regularity in global climate processes and, largely for that reason, all attempted predictions have so far been well out of synchrony with reality.
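To make the point about regularity concrete, here is a minimal sketch (my own illustration, not the author's) using Kepler's third law: an inner planet's orbital period follows directly from its average distance from the Sun, and that stable relationship is exactly the kind of regularity that makes accurate prediction possible.

```python
# Illustration only (not from the article): Kepler's third law says T^2 = a^3
# when the period T is measured in Earth years and the semi-major axis a in
# astronomical units. The planets' regularity is what makes this predictive.

def orbital_period_years(semi_major_axis_au):
    """Return the orbital period, in years, of a planet orbiting the Sun."""
    return semi_major_axis_au ** 1.5

# Well-established semi-major axes of the inner planets, in AU.
inner_planets = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524}

for name, a in inner_planets.items():
    print(f"{name}: {orbital_period_years(a):.3f} years")
# Prints roughly 0.241, 0.615, 1.000 and 1.881 years, matching observation
# to within a fraction of a percent.
```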


In a March 2015 article in National Geographic, Joel Achenbach lamented the supposed rise of science skepticism in American culture. “Empowered by their own sources of information and their own interpretations of research,” he writes somewhat dramatically, “doubters have declared war on the consensus of experts.”

A few months later, Lee McIntyre of Boston University offered a similar analysis in the Chronicle of Higher Education. Explaining what he sees as a growing disrespect for truth in American culture, McIntyre points to the Internet as a likely culprit. After all, he argues, “outright lies can survive on the Internet. Worse, those who embrace willful ignorance are now much more likely to find an electronic home where their marginal views are embraced.”

Complaints of this kind are not without merit. Consider a recent survey from the Pew Research Center’s Initiative on Science and Society showing a significant gap between the views of laypeople and those of scientists (a sample from the American Association for the Advancement of Science) on a wide range of scientific issues. To take one notable example, 88 percent of the polled AAAS scientists believe genetically modified foods to be safe, compared to only 37 percent of the respondents from the general public.

But as worthwhile as such research may be, it has little to say about a closely related question: What ought we to believe? How should non-experts go about seeking reliable knowledge about complex matters? Absent a granular understanding of the theories underpinning a given area of knowledge, how should laypeople weigh rival claims, choose between conflicting interpretations, and sort the dependable expert positions from the dubious or controversial ones? This is not a new question, of course, but it has become more urgent thanks to our glut of instant information, not to mention the proliferation of expert opinion.

The closest thing to an answer one hears is simply to trust the experts. And, indeed, when it comes to the charge of the electron or the oral-health benefits of fluoride, this response is hard to quarrel with. The wisdom of trusting experts is also a primary assumption behind the work of scholars like Dan Kahan. But once we dispense with the easy cases, a reflexive trust in specialist judgment doesn’t get us very far.

On all manner of consequential questions an average citizen faces — including whether to support a hike in the minimum wage or a new health regulation — expert opinion is often conflicting, speculative, and difficult to decipher. What then? In so many cases, laypeople are left to choose for themselves which views to accept — precisely the kind of haphazard process that the critics of “willful ignorance” condemn and that leaves us subject to our own whims. The concern is that, if we doubt the experts, many people will draw on cherry-picked facts and self-serving anecdotes to furnish their own versions of reality.

This is certainly the case. But, in fixating on this danger, we neglect an important truth: it is simply not feasible to outsource to experts all of our epistemological work — nor would it be desirable. We frequently have no alternative but to choose for ourselves which beliefs to accept. The failure to come to grips with this fact has left us without the kinds of strategies and tools that would enable non-experts to make more effective use of the increasingly opaque theories that explain our world. We need, in other words, something more to appeal to once disagreements reach the “my-source-versus-your-source” phase.

Developing approaches that fit this description will require an examination of our everyday assumptions about knowledge — that is, about which beliefs are worth adopting and why. Not surprisingly, those assumptions have been significantly shaped by our era’s information and communication technologies, and not always for the better.

One consequence of this view of knowledge is that it has become largely unnecessary to consider how a given piece of information was discovered when determining its trustworthiness. The research, experiments, mathematical models, or — in the case of Google — algorithms that went into establishing a given fact are invisible. Ask scientists why their enterprise produces reliable knowledge and you will likely be told “the scientific method.” And this is correct — more or less. But it is rare that one gets anything but a crude schematic of what this process entails. How is it, a reasonable person might ask, that a single method involving hypothesis, prediction, experimentation, and revision is applied to fields as disparate as theoretical physics, geology, and evolutionary biology — or, for that matter, social-scientific disciplines such as economics and sociology?

Even among practitioners this question is rarely asked in earnest. Science writer and former Nature editorial staffer Philip Ball has condemned “the simplistic view of the fictitious ‘scientific method’ that many scientists hold, in which they simply test their theories to destruction against the unrelenting candor of experiment. Needless to say, that’s rarely how it really works.”

Like the algorithms behind Google’s proposed “truth” rankings, the processes that go into establishing a given empirical finding are often out of view. All the lay reader gets is conclusions such as “the universe is fundamentally composed of vibrating strings of energy,” or “eye color is an inherited trait.” By failing to explain — or sometimes even to acknowledge — how, exactly, “the scientific method” generates reliable knowledge about the world in various domains, scientists and science communicators are asking laypeople to accept the supremacy of science on authority.

Far from bolstering the status of experts who engage in rigorous scientific inquiry, this way of thinking actually gives them short shrift. Science, broadly construed, is not a fact-generating machine. It is an activity carried out by people and requiring the very human capacities of reason, intuition, and creativity. Scientific explanations are not the inevitable result of a purely mechanical process called “the scientific method” but the product of imaginative attempts to make empirical data more intelligible and coherent, and to make accurate predictions. Put another way, science doesn’t tell us anything; scientists do.

Failure to recognize the processes involved in adding to our stores of knowledge creates a problem for those of us genuinely interested in getting our beliefs right, as it denies us relevant information for understanding why a given finding deserves our acceptance. If the results of a single, unreplicated neuroscience study are to be considered just as much an instance of good science as the rigorously tested Standard Model of particle physics, then we laypeople have little choice but to give them equal weight. But, as any scientist will tell you, not all findings deserve the same credibility; determining which ones merit attention requires at least a basic grasp of methodology.
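As a back-of-the-envelope illustration of why replication changes how much weight a finding deserves (my own sketch, not from the article): the uncertainty in an estimated effect shrinks roughly with the square root of the total sample size, so a single small study leaves far more room for error than a body of replications.

```python
# Illustration only (not from the article): why an unreplicated study and a
# well-replicated body of work do not deserve equal weight. The standard error
# of an estimated effect shrinks with the square root of the sample size.

import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd is the spread, n the sample size."""
    return sd / math.sqrt(n)

sd = 10.0                            # assumed spread of the measurement
print(standard_error(sd, 20))        # one small study (n=20):   ~2.24
print(standard_error(sd, 200))       # ten pooled replications:  ~0.71
```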

To understand the potential costs of failing to engage at the level of method, consider the Innocence Project’s recent investigation of 268 criminal trials in which evidence from hair analysis had been used to convict defendants. In 257 of those cases, the organization found forensic testimony by FBI scientists to be flawed — a conclusion the FBI does not dispute. What is more, each inaccurate analysis overstated the strength of hair evidence in favor of the prosecution. Thirty-two defendants in those cases were eventually sentenced to death, of whom fourteen have either died in prison or been executed. This is an extreme example of how straightforwardly deferring to expert opinion — without considering how those opinions were arrived at — is not only an inadequate truth-seeking strategy, but a potentially harmful one.

Reacting to the discoveries of forensic malpractice at the FBI, the co-chairman of the President’s Council of Advisors on Science and Technology, biologist Eric S. Lander, suggested a single rule that would make such lapses far less common. As he wrote in the New York Times, “No expert should be permitted to testify without showing three things: a public database of patterns from many representative samples; precise and objective criteria for declaring matches; and peer-reviewed published studies that validate the methods.”

Lander’s suggestion amounts to the demand that forensic experts “show their work,” so to speak, instead of handing down their conclusions from on high. And it is an institutional arrangement that could, with a few adjustments, be applied to other instances where expert analyses carry significant weight. It might be too optimistic to assume that such information will be widely used by the average person on the street. But, at least in theory, efforts to make the method by which certain facts are established more available and better understood will leave each of us more able to decide which claims to believe. And these sorts of procedural norms would help create the expectation that, when choosing what to believe, we laypeople have responsibilities extending beyond just trusting the most credentialed person in the room.

Research from psychologist Philip Tetlock and colleagues lends support to this idea. Tetlock is co-creator of The Good Judgment Project, an initiative that won a multi-year forecasting tournament conducted by the Intelligence Advanced Research Projects Activity, a U.S. government research agency. Beginning in 2011, participants in the competition were asked a range of specific questions regarding future geopolitical events, such as, “Will the United Nations General Assembly recognize a Palestinian state by Sept. 30, 2011?,” or “Before March 1, 2014, will North Korea conduct another successful nuclear detonation?” Tetlock’s forecasters, mind you, were not career analysts, but volunteers from various backgrounds. In fact, a pharmacist and a retired irrigation specialist were among the top performers — so-called “superforecasters.”

In analyzing the results of the tournament, researchers at the Good Judgment Project found a number of characteristics common to the best forecasters. For instance, these individuals “had more open-minded cognitive styles” and “spent more time deliberating and updating their forecasts.” In a January 2015 article in the Washington Post, two of the researchers further explained that the best forecasters showed “the tendency to look for information that goes against one’s favored views,” and they “viewed forecasting not as an innate ability, but rather as a skill that required deliberate practice, sustained effort and constant monitoring of current affairs.”
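For readers curious how "best forecaster" gets decided in a tournament like this, a standard way to score probabilistic forecasts is the Brier score, the mean squared difference between the probability assigned and what actually happened. The article does not spell out the tournament's scoring rule, so the sketch below is an illustrative assumption rather than a description of the Good Judgment Project's own method.

```python
# Illustration only: the Brier score, a common way to score probabilistic
# forecasts. The article does not say which scoring rule the tournament used,
# so treat this as an assumed example, not the Good Judgment Project's method.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes.

    forecasts -- probabilities in [0, 1] that each event will occur
    outcomes  -- 1 if the event occurred, 0 if it did not
    Lower is better; always answering 0.5 scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasters answering the same three yes/no questions.
outcomes  = [1, 0, 0]              # what actually happened
cautious  = [0.6, 0.4, 0.5]        # hedges close to 50 percent
confident = [0.9, 0.1, 0.2]        # commits, and happens to be right

print(brier_score(cautious, outcomes))    # ~0.19
print(brier_score(confident, outcomes))   # ~0.02
```

Under a rule like this, hedging everything at 50 percent earns a mediocre score, confident calls that prove right are rewarded, and confident misses are punished heavily, which is why the habits the researchers describe, such as updating forecasts and seeking out disconfirming evidence, pay off.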

More HERE
