What Everyone Should Know About Science—but Doesn’t

Henry H. Bauer

Henry H. Bauer is Dean Emeritus of Arts & Sciences, Professor Emeritus of Chemistry & Science Studies, Virginia Polytechnic Institute & State University; [email protected]. He last appeared in AQ with “Fact Checking is Needed in Science Also” (Summer 2021).


Commonly accepted beliefs about science are drastically different from the reality. Science is almost universally regarded—at least in developed industrial countries—as the reliable source of understanding of the material world. In fact, science is as fallible as any other human activity, influenced in similar ways by outside interests and conflicts of interest.

The differences between reality and common beliefs came about because those beliefs describe scientific activity from roughly the fifteenth and sixteenth centuries until about the middle of the twentieth.1 Since then, scientific activity has changed progressively, largely because science has been greatly stimulated by increased governmental funding and by increasing reliance on science for desirable commercial and sociopolitical ends. Science changed from “an ivory-tower activity, a cottage industry of self-driven intellectual entrepreneurs motivated largely by sheer curiosity . . . [into] an academe-industry-government complex . . . pervasively co-opted by outside interests.”2

This change came gradually over the course of about half a century. Society’s conventional wisdom still regards science as the reliable source of understanding of the material world, and it is unrealistic to imagine that common beliefs could suddenly change drastically to conform to present-day realities. Indeed, it is unrealistic to imagine that anyone’s beliefs could change quickly in this way. But perhaps accommodation of the new realities could be facilitated by a chronological narrative detailing the slowly accumulating changes.

I happened to be well placed to note these changes, as a practicing scientist during much of that half century who later changed careers to Science and Technology Studies (STS). Differences between the circumstances of science in mid-twentieth-century Australia and in the United States brought experiences that enabled me to recognize, later and among other things, the unintended, long-term damage to the integrity of science that followed from the enormous infusion of federal funds after World War II.

My cohort of science students in late-1940s Australia had been enthusiastic and idealistic. Science’s atom bomb had ended World War II, relieving us of the fear of Japanese invasion as well as the burden of the more distant European war in which Australia had been significantly involved. Science seemed to us a truly noble pursuit, wonderfully capable of understanding nature and applying that understanding to human benefit. Science, it seemed, equals practical truth.

I did not at the time appreciate the significance of slowly accumulating evidence to the contrary, though I observed it, until circumstances nudged me into a career change, from chemistry researcher to student of STS. Doing chemistry, I had, like other insiders, shrugged off glitches as singular anomalies; only when taking a bird’s-eye view, a kibitzer’s or outsider’s view, could I come to appreciate that little things occasionally glimpsed as wrong in my specialty of electrochemistry were occurring also in the rest of science, harbingers of systemic dysfunction.

During my very first experience of research, the then-Bible of chemical literature, Chemical Abstracts, directed me to an article about NaI when I was looking for information about NOI. Of course I shrugged this off as a typo or a mis-transcription of an oral communication; I did not recognize the signal that science is fallible, since every detail results from some fallible human action, inaction, or judgment.

Again, I did not conclude that scientists in general could behave dishonestly when sometimes tempted, just because some of my fellow students cheated on laboratory exercises, presenting purchased samples of substances as the products of their own organic syntheses.

Having gained my Ph.D., I learned soon enough that dishonesty might reach beyond the student ranks: a quite senior faculty member writing a review article for a foreign-language journal simply translated the introductory section of my Ph.D. thesis, without consulting me, ready to pass it off as his own work had I not learned about it first.

Many outsiders as well as devoted propagandists for science contribute to the conventional wisdom that “peer review” is the gold standard guaranteeing the accuracy and reliability of published scientific conclusions. When a couple of rising young stars visiting Australia from the USA were chatting in my presence about how they coped with all the demands on their time, they revealed how they took shortcuts when reviewing grant proposals or manuscripts: to avoid the labor of reading all the detail, they simply recommended acceptance or rejection according to the prestige of the author or the author’s institution. Once again I drew no general conclusions; I did not recognize that peer review is a very fallible process; I just thought those two Young Turks were a couple of bad apples, like the fellow who had plagiarized my dissertation.

That science is not universally done by “the scientific method” had been evident from the outset of my own experience, though once more I did not then recognize it. I had gained bachelor’s, master’s, and doctoral degrees in chemistry from a long-established and well-respected university without ever having been taught “the scientific method.” Indeed, I had never even heard of that “method” until, when I was a postdoctoral fellow at an American university, a young political scientist remarked to me that science is done by the scientific method. That became an important clue for me later about what differentiates the so-called social and behavioral sciences from the so-called “hard” physical sciences. That such a differentiation is warranted on a number of grounds should in itself raise doubts about the purported objectivity and trustworthiness of “science.”

As a result of Vannevar Bush’s legendary post-WWII report Science, the Endless Frontier,3 the National Science Foundation (NSF) came into being, and the National Institutes of Health (NIH) gained enormous budget increases, the aim being to stimulate not only the expansion of scientific research but also an increase in the numbers of students seeking careers in science.

That certainly worked, but rather like an economic “bubble.” Among the unforeseen consequences was that institutions of “higher education” rushed to benefit from the federal largesse in order to gain individual prestige and status. A couple of decades or so later, demand for federal largesse naturally outstripped supply and the bubble burst, bringing highly dysfunctional hyper-competitiveness among researchers and institutions: corner-cutting; pursuing quantity rather than quality; dependence on patronage, with distortion of research aims and hindrance of the honest, complete publication of results.

Teachers colleges had morphed into liberal arts colleges or universities; four-year colleges became “research universities”; long-established universities already in the research business aimed to increase their prestige rankings, which meant always getting quantitatively more: larger grants, more publications, more patents, more science graduates.

Moving from Australia to the U.S. in the mid-1960s, I landed a job at one of those up-and-coming places. I was dismayed when my first application for an NSF grant was unsuccessful: everyone else was getting grants. It turned out that the problem had not been my proposed research but the budget: without letting me know, the Research Division at my university had expanded it by including reimbursement to the university for some of my academic-year salary and by increasing estimates of other projected expenses, thereby also enlarging the “indirect costs” (an agreed-with-NSF percentage of the overall budget) that went to the Research Division. The Director there later tried to justify what he had done by claiming it was in line with the federal policy of helping universities expand their research endeavors. These “indirect costs” (a popular euphemism for “overhead”) could be (and still can be) as high as fifty percent for private universities.

In the 1940s, there had been 107 doctorate-granting research universities in the USA; that increased to 142 by 1950-54, to 208 by 1960-64, and to 307 by 1970-74.4 In 1955 there had been ninety-eight doctoral programs in chemistry; by 1967 there were 165, and 192 by 1979.5

Faculty were rewarded with salary raises and promotions for getting more grants and mentoring more graduate students. In the early 1980s, the Dean of Engineering at my university revealed that his criterion for promotion from assistant to associate professor was bringing in grant money of at least $100,000 annually, and three times that for promotion to full professor.

I observed first-hand how demand for grants and other resources started to exceed supply. In the mid-to-late 1960s, about half of our Chemistry Department’s proposals to NSF had been funded; by 1978, the success rate had fallen to ten percent.

At NIH, the success rate has continued to fall steadily, from thirty-one percent in 1997 to twenty percent by 2014. Biomedical scientists now begin their independent careers approaching middle age: in 1980, biomedical researchers were on average aged thirty-seven when they received their first award as principal or sole investigator; by 2007, that average age had risen to forty-two.6

In the drive for ever more, “salami-slicing” became routine: publishing many separate articles from any given research result, generating the acronym LPU for “least publishable unit.” New journals were founded. The numbers of submitted manuscripts mushroomed owing to the increasing numbers of researchers whose careers required ever more publication: “publish or perish” is an entirely accurate description of modern academe. During my student days in 1940s Australia, jobs in academe had represented an opportunity for a useful, unhurried, scholarly career. So I was surprised when, in the US in 1958, a fresh Ph.D. told me that he was looking for work in industry in order to avoid the “academic rat-race.”

As publishing increased, so did the costs. Scientific societies, the traditional publishers of scientific periodicals, needed more financial support: so-called “page charges” were levied on the authors of articles. Those without grants to pay such charges were not refused publication, but their articles were labeled “costs borne by [for example] the American Chemical Society,” hardly good for the authors’ career advancement, thus exacerbating the pressure to obtain grants.

Costs to research libraries for journal subscriptions increased enormously. Commercial publishers expanded their output of technical periodicals, and their drive for profit further increased the burden on libraries and researchers.

Journals also accepted advertising. Medical-science journals became effectively subsidized by drug companies, which also bought huge numbers of “reprints” to distribute when articles favored a company’s products; Merck even paid Elsevier to establish the Australasian Journal of Bone and Joint Medicine, which masqueraded as a normal, peer-reviewed medical journal but actually consisted of articles written for or provided by Merck.7

Publishing on the Internet is far cheaper than print publication. The Public Library of Science (PLOS) was founded as a non-profit publisher of scientific research; it now produces half a dozen journals whose costs are covered by authors’ page charges as well as by subsidies from charitable foundations. Some publishers of print journals offer their authors the possibility of having their articles published online, “open access,” as well as in print, at some additional “page-charge” cost. “Open access” enormously increases the size of potential audiences, since the technical print journals are available only in specialist libraries.

Because online publishing is so cheap, even small page charges can yield significant profit. PLOS and the traditional print publishers that also offer open-access publishing sought to maintain traditional standards of peer review. However, as the flood of submitted manuscripts grew endlessly, established journals became increasingly selective. That created an opportunity for profit-making, and a whole host of brand-new self-styled scientific and medical “journals” sprang up, which a librarian appropriately labeled “predatory”; his collection listed hundreds of such journals by 2015.8 I continue to receive a few invitations every week to submit my “precious” or “valuable” article to benefit from its wide potential distribution, copious indexing by established indexing sources, very low page charges, and impressive journal impact, as gauged by, for instance, the African Quality Centre for Journals—whose website raises strong suspicions as to its own authenticity.9 Some solicitations invite me to serve on an editorial board, or to become an editor, or to suggest another journal title. Most of these solicitations come from journals not known to Scopus, Elsevier’s abstract and citation database.

The intensity of competition in research has put an ever greater premium on getting published and getting grants, which means satisfying “peer review,” which means not rocking the boat, not being contrarian. Thus, the majority view, the “scientific consensus,” that is, groupthink, has become increasingly hegemonic, in effect unquestionable dogma.10 Dissenting views became heretical (“denialist”); those who insisted on their validity lost prestige, status, access to grants and other resources, and experienced increasing difficulty in getting their work published.

I had a front-row seat for noticing this latter trend. In the early 1970s a major recession in the aerospace and other technical industries had decimated graduate programs in science. My university urged us to apply for grants for interdisciplinary research projects, the latest intellectual fad. I participated with an historian, a sociologist, a journalist, and a philosopher in a grant proposal to study the attitudes of scientists to unorthodox claims such as the existence of Loch Ness Monsters. Comments by reviewers of our (unsuccessful) grant application led me to learn about the Velikovsky Affair of the 1950s and 1970s, and I became fascinated by the inept manner in which science dealt with such unorthodoxies. This led to my becoming a founding member (around 1980) of the Society for Scientific Exploration.11 That mainstream science dogmatically rejected unorthodox claims by outsiders was no great surprise. But we heard also from highly accomplished scientists whose unorthodox views were bringing them denigration, sometimes even damaging their careers. Thomas Gold, a highly acclaimed astrophysicist, was laughed at for suggesting that the sense of hearing must involve some active process—which was confirmed much later and eventually accepted. Halton Arp, an accomplished research astronomer, had lost telescope access after interpreting certain photographs as disproving the Doppler explanation of cosmological redshifts. Other well-established astrophysicists were treated like outsiders for questioning the Big-Bang theory.

As I began research in STS, focusing on scientific controversy, knowledge of those events made me ready to notice that the increasingly competitive research environment was accompanied by an increasingly dogmatic mainstream scientific “consensus.” The competitive pressure also increased the temptation to cut corners (and worse). General-circulation magazines like The Economist brought attention to an evident “crisis”: increasing proportions of results and conclusions in scientific and medical journals could not be replicated. John Ioannidis ruffled innumerable establishment feathers with his fully documented 2005 article in PLOS Medicine, “Why Most Published Research Findings Are False.” Dogmatic assertions by official sources in medicine and science nowadays need to be fact-checked.

CODA: Viewpoint Shift

We tend to think that we hold our beliefs because they are true. But if human beings all held beliefs consistent with reality, then all of us would share the same beliefs, which quite obviously is not the case. Instead, starting as infants, we first acquire beliefs from our parents, and then from teachers, peers, and personal experience. Human psychology acts to make us try as far as possible to fit new learning and new experiences to what we already believe. Major changes of belief come—if at all—only slowly and progressively, as they did for me in the story told above (sudden changes of belief, like the apocryphal tale of Saul becoming Paul, are typically more drastic and discrete, from one utter certainty to the opposite certainty).

My change from complete faith in science to regretful caution resulted from many experiences whose nature and range could hardly be the same for other people, illustrating why humans come to hold such a variety of different yet not always unreasonable opinions.


1 Henry H. Bauer, Science Is Not What You Think: How It Has Changed, Why We Can’t Trust It, How It Can Be Fixed (McFarland, 2017).

2 Henry H. Bauer, “Fact Checking is Needed in Science Also,” Academic Questions 34 (Summer 2021):18-30.

3 Vannevar Bush, Science, the Endless Frontier: A Report to the President (U.S. Government Printing Office, July 1945).

4 National Academy of Sciences, A Century of Doctorates: Data Analyses of Growth and Change (National Academies Press, 1978). 

5 American Chemical Society, Directory of Graduate Research, 1955; published biennially in print for many years but nowadays online.

6 NIH Data Book: Research Grants, updated June 15, 2015, http://report.nih.gov/NIHDatabook/Charts/Default.aspx?chartId=202&catId=2.

7 Bob Grant, “Merck Published Fake Article,” The Scientist, April 9, 2009.

8 See Beall’s List of Potential Predatory Journals and Publishers, https://beallslist.net/.

9 See “Is the African Quality Centre for Journals Reliable?,” https://predatory-publishing.com/is-the-african-quality-centre-for-journals-reliable.

10 Henry H. Bauer, Dogmatism in Science and Medicine: How Dominant Theories Monopolize Research and Stifle the Search for Truth (McFarland, 2012).

11 See web page https://www.scientificexploration.org.


