Debating Scientific Epistemology

Dec 19, 2016 | Edward R. Dougherty

In debating climate science, recent NAS articles by Leo Goldstein and Bruce Gilley have taken opposing views regarding epistemology. Their conclusions regarding climate science are corollaries of their differing stands on the necessity for validation. Their argument is germane to many current areas of investigation. In July 2015, I attended the conference "How to Build Trust in Computer Simulations – Towards a General Epistemology of Validation" in Hanover, Germany, at which a small group of researchers in philosophy, biology, physics, economics, social science, engineering, and climate science were invited to discuss the conundrums of trying to validate highly complex models that involve huge numbers of parameters and require massive computation.

Epistemology is not peripheral to scientific investigation; in fact, it is primary. Albert Einstein stated, “Science without epistemology is—insofar as it is thinkable at all—primitive and muddled.”

Validation is central to scientific knowledge. Richard Feynman wrote, “It is whether or not the theory gives predictions that agree with experiment. It is not a question of whether a theory is philosophically delightful, or easy to understand, or perfectly reasonable from the point of view of common sense.” Based on predictions of future observations, a theory is either rejected or (contingently) accepted.

This understanding of scientific knowledge evolved over three centuries, from the early Seventeenth Century with Francis Bacon and Galileo to quantum theory in the first half of the Twentieth Century. The reasoning behind it is sufficiently subtle that it has perplexed the greatest minds, including Newton, Kant, and Einstein.

Checking whether or not observations are concordant with predictions derived from theory is a nontrivial matter, made more so by the prevalence of stochastic systems in the Twentieth Century. Let me reiterate four questions from an article in Public Discourse that one should ask when presented with a purportedly scientific theory:

  1. Does it contain a mathematical model expressing the theory?
  2. If there is a model, does it contain precise relationships between terms in the theory and measurements of corresponding physical events?
  3. Does it contain validating experimental data—that is, a set of future quantitative predictions derived from the theory and the corresponding measurements?
  4. Does it contain a statistical analysis that supports the acceptance of the theory, that is, supports the concordance of the predictions with the physical measurements—including the mathematical theory justifying application of the statistical methods?

Validation requires affirmative answers to all four questions.
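
To make question 4 concrete, the following is a minimal sketch in Python of the kind of concordance check it describes. The numbers are invented, and the one-sample t-test stands in for whatever statistical machinery a particular theory would actually require; it illustrates the form of the check, not a method prescribed in the text.

```python
# Minimal sketch of question 4: given predictions issued *before* the
# measurements were taken, test whether the residuals are statistically
# consistent with zero. All data below are invented for illustration.

import numpy as np
from scipy import stats

predictions  = np.array([1.02, 0.98, 1.10, 1.05, 0.95, 1.00, 1.08, 0.97])
measurements = np.array([1.05, 0.93, 1.12, 1.01, 0.99, 1.04, 1.02, 0.95])

residuals = measurements - predictions

# One-sample t-test of the null hypothesis that the mean residual is zero,
# i.e., that predictions and measurements agree on average.
t_stat, p_value = stats.ttest_1samp(residuals, popmean=0.0)

print(f"mean residual = {residuals.mean():+.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A large p-value fails to reject concordance, so the theory is contingently
# accepted; a small one is evidence against its predictions. Note that the
# t-test itself rests on assumptions (independent, roughly normal residuals)
# whose justification is exactly what question 4 demands.
```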

While Goldstein oversimplifies the issue of validation, the gist of his argument is that climate models have not been validated. In an excellent analysis of statistical issues in climate modeling, Claudia Tebaldi of the National Center for Atmospheric Research (Boulder) and Reto Knutti of the Institute for Atmospheric and Climate Science (Zurich) state, “Climate projections, decades or longer in the future by definition, cannot be validated directly through observed changes.” Not only is Goldstein correct, but according to Tebaldi and Knutti, the models cannot be validated.

Bruce Gilley accepts this impossibility and wants to base human action on unvalidated theories, a hazardous perspective. He might argue that human beings generally act without rigorous knowledge; indeed, common sense often supplies the course of action. But then one cannot claim to be acting on the basis of scientific knowledge.

I am sympathetic to the pragmatic view. In a different Public Discourse article, I stated four options for moving ahead given our desire to model complex phenomena and our inability to validate such systems:  (1) dispense with modeling complex systems; (2) model complex systems and dishonestly claim that the models are scientifically valid; (3) model complex systems, admit that the models and predictions are not scientifically valid, utilize them pragmatically where possible, and be extremely prudent when interpreting them; or (4) put in the effort to develop a new scientific epistemology, recognizing that success within the next half century would be wonderful.

I believe that Gilley concurs with me that option 3 is preferable to options 1 and 2, but the risk is great, especially when political forces with their own agendas get into the game. It is currently not possible to validate gene-protein models that can be used to develop new drugs, but should we ignore their potential for drug development? I’m sure many would answer negatively, but who is to define and judge prudence?

I am also a proponent of option 4. But here too there is great risk. Do we today have the same commitment to knowledge as did our forebears? Let me finish with words from my recent book:

It took three centuries from the birth of modern science until quantum theory to fully clarify the epistemological revolution of Galileo, during which time the greatest minds took up the challenge. Perhaps we have reached our limit and the rules of the game cannot be relaxed without collapsing the entire enterprise into a Tower of Babel. Whatever the case, the issue is too important to ignore and let science aimlessly become “primitive and muddled.”

 

Image: Public Domain, https://commons.wikimedia.org/w/index.php?curid=8170377

Robert W Tucker | December 30, 2016 - 2:34 PM


I agree with Mr. Dougherty in noting that validating hypotheses is a central component in the accretion of scientific knowledge. I note, however, that his list of four criteria for assessing the merit of a scientific theory and the hypotheses it generates is woefully truncated in relation to the most recent century of scientific and philosophical thinking and consensus. Among the dozen or so additional criteria not mentioned are the generation of falsifiable hypotheses, the generation of fruitful empirical propositions, congruence between empirical outcomes and the propositions that generated them, conceptual (logical) congruence among theoretical elements, explanatory scope, and integration with related theoretical/explanatory models.

Whether Mr. Dougherty omitted important criteria deliberately, to make a point, or out of a lack of understanding, he is incidentally correct in noting that climate change theory is, as is the nature of theories, less than fully validated. I think Mr. Dougherty would agree with the idea that each hypothesis generated by climate change theory carries with it a theoretical probability that the hypothesis is valid (correct, true). While we can never know the exact coefficient before the fact, we can estimate it within parameters determined by the context and nature of the data.

Mr. Dougherty wants to stop with the fact that climate change hypotheses are less than contingent truths. In doing so, he ignores at least half of the issue. An example from a less heated topic may help. What if we were to learn from epidemiologists that a strain of influenza was about to jump from butterflies to humans, and that the transmission and fatality rates of the strain were hypothesized (imperfect knowledge) to be above 50%? What level of certainty would we want from our scientists before taking action? The answer to that question would involve analyzing a variety of factors related to consequences, cost, feasibility, and impact. Whatever level of certainty we might decide we want, it would be based on a careful consideration of all four cells of the decision matrix (inaction/action crossed with valid/invalid). In addition, we would quickly see that the two positive and the two negative outcomes are not perfectly symmetrical. Taking action on an invalid hypothesis and failing to take action on a hypothesis that turns out to be valid are not equally bad outcomes. Similarly, even though both outcomes are positive, there is asymmetry between taking action on a valid hypothesis and not taking action on an invalid hypothesis. This simplified analysis sets aside the shades of grey (partial determination) that characterize most important decision matrices.
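
To see the commenter's point numerically, here is a minimal sketch in Python of the 2x2 decision matrix he describes (act or not act, crossed with hypothesis valid or invalid). The loss figures are invented placeholders; the only point is that the asymmetry of the losses, not certainty about the hypothesis by itself, determines which action minimizes expected loss.

```python
# Minimal sketch of the 2x2 decision matrix: act / don't act crossed with
# hypothesis valid / invalid. Loss values are invented for illustration.

loss = {
    "act":      {"valid": 1.0,   "invalid": 10.0},  # acting has a cost even if needless
    "dont_act": {"valid": 100.0, "invalid": 0.0},   # failing to act on a valid hypothesis is costly
}

def expected_loss(decision: str, p_valid: float) -> float:
    """Expected loss of a decision, given the probability that the hypothesis is valid."""
    return p_valid * loss[decision]["valid"] + (1.0 - p_valid) * loss[decision]["invalid"]

for p in (0.05, 0.25, 0.50, 0.90):
    act, dont = expected_loss("act", p), expected_loss("dont_act", p)
    choice = "act" if act < dont else "don't act"
    print(f"P(valid)={p:.2f}: E[loss|act]={act:6.1f}  E[loss|don't act]={dont:6.1f}  -> {choice}")

# With these placeholder losses, acting becomes the lower-expected-loss choice
# well below 50% certainty -- the asymmetry, not the certainty, does the work.
```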

Thus, along with others, Mr. Dougherty fails to follow to their logical and empirical conclusions the up- and downside risks and benefits associated with accepting or rejecting the hypotheses generated by climate change science. Instead, he focuses his attention on one and, at most, two of the four possibilities. Doing so is neither scientific nor logical.
