In June 2016, Oona Lönnstedt and Peter Eklöv published an article in Science magazine, the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS). Lönnstedt and Eklöv’s article was on how microplastic particles harm fish larvae. As soon as it was published, some of their colleagues at Sweden’s Uppsala University suspected that the research must have been fabricated. They accused the pair of research misconduct, and two investigations followed, the first by Uppsala University and the second by Sweden’s Central Ethical Review Board. In December 2016, Science published a letter of concern about Lönnstedt and Eklöv’s research. In April 2017, the CERB recommended that Lönnstedt and Eklöv retract their article; they did, and Science retracted the article in May 2017. In September 2017, the Swedish Research Council terminated a $355,400 grant it had previously awarded to Eklöv and forbade him from applying for new grants until December 2019. In December 2017, a Board for Investigation of Misconduct in Research at Uppsala University found that Lönnstedt and Eklöv had committed research misconduct.1
You’d think from reading that account that Science just got hoodwinked by some dodgy Swedish scientists. But it’s much worse than that. Science has procedures to prevent dodgy research from getting published, and it didn’t follow them. It has large numbers of reviewers who are supposed to be able to tell when there’s something questionable about the research considered for publication. Science would have had to be remarkably negligent to have missed all the red flags.
But it was worse than simple negligence. I read Lönnstedt and Eklöv’s article a week after it was published and saw at once that it was substandard. I’m not a professional in their field, but I worked my entire career in industrial chemistry and have been an aquarium enthusiast for sixty years, and the entire article raised red flags. I sent a long series of emails to Science’s editors, reviewing editors, and board of directors, giving lengthy reasons why they should re-examine their editorial approval of Lönnstedt and Eklöv’s article. My concerns were dismissed out of hand. Science has never acknowledged publicly that I warned them in June 2016 that they should investigate an article that was gravely and obviously flawed. Nor have they owned up to anything more than trivial negligence. Science was culpably negligent, both in publishing Lönnstedt and Eklöv’s article in the first place and in failing to follow up on my clear, immediate warning about its flaws.
Here’s what Science hasn’t admitted to the public.
In June 2016 I opened my copy of Science magazine and read Lönnstedt and Eklöv’s “Environmentally relevant concentrations of microplastic particles influence larval fish ecology.” It was obvious to me as I read it that this article had major problems—problems which should have jumped out at any expert reviewer who was paying attention, if they were obvious to an interested amateur.
The gist of Lönnstedt and Eklöv’s article was that pollution in the form of “microplastics”—an ill-defined term—is detrimentally affecting fish larvae. Although the article began with a very general discussion of plastic pollution in Earth’s waters, the research concentrated on what appeared to be fresh, rounded particles of a uniform 90-micron size rather than the widely varying (and generally vastly larger) discarded plastic pieces and particles found in nature.
It is troubling that forms of pollution not directly relevant to the study and plastic particles that don’t appear in the same form in nature were the focus of discussion and study.
Lönnstedt and Eklöv stated that they exposed juvenile fish to synthetic, unweathered polystyrene plastic particles. They could have chosen these particles in order to study the effects of similar particles used as mild abrasives in cosmetics, soaps, and other consumer products—but they never offered this justification. Nor did they acknowledge that weathering can fundamentally change the form, surface properties, and/or chemical characteristics of microplastics.
It remains unclear why the authors assumed, without supporting evidence, that the effects of one sort of particle are the same as the vastly different, far more common particles encountered in real-world plastic pollution.
Lönnstedt and Eklöv harvested eggs of the fish Perca fluviatilis, a Eurasian perch. They took these eggs from the wild—without proper authorization, as subsequent investigations determined—and exposed them to the plastic beads. This exposure supposedly correlated with substantial hatch rate reduction when compared with eggs from a control sample of unexposed fish. The authors’ graph vividly demonstrated a clear effect.
Data from studies of living systems are usually messy and complex, and show high levels of variation. Sometimes you do run across a case of “data simplicity.” A pattern of data simplicity raises suspicions that need to be examined to ensure confidence in data outcomes.
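To make the notion of “data simplicity” concrete, here is a minimal simulation—my own illustration, using invented numbers, not the authors’ data—comparing the scatter one expects from a real biological dose-response experiment with the suspiciously clean pattern of a too-simple dataset:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

doses = [0, 1, 2, 3, 4]  # arbitrary exposure levels

# Realistic biological data: an underlying trend plus substantial
# individual variation (standard deviation chosen arbitrarily).
noisy = [100 - 10 * d + random.gauss(0, 12) for d in doses]

# "Data simplicity": the same trend with almost no variation.
clean = [100 - 10 * d + random.gauss(0, 0.5) for d in doses]

def scatter(values, doses):
    """Mean absolute deviation of observations from the underlying trend."""
    return sum(abs(v - (100 - 10 * d)) for v, d in zip(values, doses)) / len(values)

print(f"realistic scatter:  {scatter(noisy, doses):.1f}")
print(f"suspiciously clean: {scatter(clean, doses):.1f}")
```

When several independent measurements in one study all show the second, near-zero level of scatter, that pattern is itself a finding that deserves scrutiny.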
Lönnstedt and Eklöv then allowed the fish to live and grow for two weeks. The size of the fry after this growth period showed a simple downward trend correlating with increasing particle exposure.
Two cases of data simplicity make for a suspicious pattern.
Lönnstedt and Eklöv then put the P. fluviatilis fry in a container with a grid visible through the bottom glass. The reader might assume that these fish were also gathered from the wild, or were hatchlings from the collected eggs, but the article didn’t mention the source of these fish, or even whether they were all from one source.
This basic information should be provided but goes unmentioned.
Lönnstedt and Eklöv placed thirty-six fish in the one-liter test container and measured their activity across the grid during three-minute periods. This produced data they also displayed in a graph that showed a simple reduction in activity as exposure to plastic beads was raised. They did not say how they measured the activity of the thirty-six fish.
Three cases of data simplicity. It remains unclear how the grid-line crossings of thirty-six essentially colorless, 8-millimeter fish could be counted during a three-minute period in a one-liter container. Measuring and recording such numbers is nearly impossible without sophisticated photographic, computerized technology, which was evidently not employed. It seems likely these data were fabricated.
Finally, Lönnstedt and Eklöv measured the ability of P. fluviatilis to avoid predation. They examined containers holding roughly forty-five fry for their responses to chemical “alarm cues,” which are supposedly indicators of predation, although they did not discuss how this was done or what the alarm cues were.
Not describing the techniques suggests the measurements either were not done or were not carried out diligently.
Lönnstedt and Eklöv introduced 31-millimeter specimens of the predatory fish Esox lucius (pike) into the one-liter “aquaria” containing young fish subjected to three plastics exposure levels. They then counted the number of P. fluviatilis eaten by the pike. Raising the exposure to plastic particles reportedly reduced the fry’s presumed predator-avoidance behavior, resulting in more of the fry being eaten.
There is no statement that the pike were selected to be identical in their predation habits, size, and age. It is essential to control for all three factors.
The results, once again, clearly (and, therefore, suspiciously) showed that increasing the plastic concentration resulted in lower survival of the fry.
This constitutes four cases of data simplicity. My own experience with predatory fish suggests that in this confined environment all of the perch fry would have been consumed in less than a day, regardless of plastics exposure.
Lönnstedt and Eklöv report that the fish were confined in one-liter “mesocosms simulating natural conditions.”
Anyone even slightly familiar with tending living fish knows that it is virtually impossible to arrange anything approaching “natural conditions” in a one-liter glass container holding forty-five fish larvae. My estimate is that the minimum size for a “mesocosm” would be at least 10 to 20 liters, with a degree of filtration, temperature control to simulate “natural” conditions, and lighting arranged and timed to match their home waters, among other important simulations. That the test containers were nothing like this (see below) makes any inferences based on these test conditions highly suspect.
Lönnstedt and Eklöv’s tests showed that the baby P. fluviatilis liked to eat the plastic particles: “These results suggest that newly hatched larvae favor microplastic particles [for eating] over the more natural food source of free-swimming zooplankton . . . Here it appears that larvae preferentially feed on plastic particles.”
Young carnivorous fish such as P. fluviatilis usually devour, ravenously, newly hatched brine shrimp (Artemia salina), which Lönnstedt claimed she fed her specimens. While it is possible that these fish would prefer microplastics to natural food sources, it seems highly improbable. Why would well-fed fish gorge themselves on microplastics? No proof, or even description, was provided in the published report. My own view is that, although fish are less food-selective than humans, this would be like taking a child to McDonald’s and watching him eat the plastic utensils rather than the burger and fries.
The article contains pictures showing fry whose guts are crammed with plastic particles. This raises the question of whether these fish were manipulated in some way into eating large amounts of indigestible items.
Another skeptical reader would probably have come up with additional questions or variations of what I describe. The point is that if Lönnstedt and Eklöv’s article raised a long series of red flags to an amateur reader, Science’s professional reviewers and editors were at best grossly negligent in failing to notice the article’s patterns of unlikely data simplicity, vagueness about data gathering, and impractical research design.
How on earth had Science’s peer review process failed so badly?
It is possible that Science editors hadn’t looked too hard at Lönnstedt and Eklöv’s article because it fit their preconceptions—and perhaps because it was politically attractive. Environmental groups in the past decade have made the elimination of microplastics from the world’s waters a primary objective, and Science has been solicitous of fashionable environmentalist opinion.2 As David Randall, the Director of Research for the National Association of Scholars, pointed out in a scathing article this summer (2019), “The ‘gold standard journals’ Nature and Science, above all, promote the artificial ‘consensus’ of nigh-apocalyptic climate change.”3
This could explain why Science published a puff piece for Lönnstedt and Eklöv’s article in the same issue, Chelsea Rochman’s “Ecologically Relevant Data Are Policy Relevant.”4 This item appeared in the magazine’s “Perspectives” section, which Science uses to provide a layman’s explanation of the highly technical research it publishes. Rochman, a professor at the University of Toronto and the University of California, Davis, also failed to notice the study’s obvious flaws:
Lönnstedt and Eklöv exposed fish to concentrations of polystyrene microplastics comparable to those found in nature . . . Most importantly, they asked ecologically relevant questions about survival and recruitment in their laboratory populations . . . Lönnstedt and Eklöv’s study marks an important step toward understanding ecological impacts of microplastics.
Rochman’s concentration on the “policy relevant” aspects of the study suggests that Science may have been anxious to secure a voice for itself on the policy debates surrounding microplastics, which have been subject to an increasing number of bans and proposed bans.5
Political groupthink would provide an explanation for Science’s negligence, if not an excuse. It would be a yet more damning comment on its reviewers’ and editors’ professional competence if only negligence and no political groupthink were involved.
I saw this article shortly after publication, noting all the problems I’ve mentioned above. I wrote a short e-mail to Oona Lönnstedt on June 20, 2016, copying Marcia McNutt, then the editor of Science.6 The response from Dr. Lönnstedt’s server was immediate: she was out of town and not available, but would respond when she returned. I included five questions, which I summarize here:
Why did the study measure particles per cubic meter, when square meters are the more appropriate measure? I attached a recent article from Plastics Engineering, the publication of the Society of Plastics Engineers (SPE), an American trade association, which more rationally measured plastic pollution in particles per square meter.7
Why did the study fail to analyze the particle sizes actually occurring in the oceans, restricting itself, without explanation, to “particles smaller than 5 mm” and microplastics of 90 microns in diameter?
What was the source of, and justification for, the 90-micron particle size? That is a convenient size for ingestion by 8-mm fishes. How much ingestion would have occurred if 1-mm particles had been tested instead?
Were these particles freshly manufactured, or weathered as they would be in nature?
Why were the particles spherical and smooth? Fish are sensitive to the feel of anything they attempt to eat, and prefer to ingest smooth substances.
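On the units question, my reasoning runs as follows (a minimal sketch with invented numbers, not data from the study): buoyant plastics concentrate at or near the surface, so a count per square meter of surface is comparatively stable, whereas a count per cubic meter depends heavily on how deep the sampling net is towed.

```python
# Invented surface density for illustration only.
surface_density = 50.0  # particles per m^2 of sea surface

# The same surface pollution yields very different volumetric
# concentrations depending on the depth over which it is sampled.
for depth_m in (0.2, 1.0, 5.0):
    volumetric = surface_density / depth_m  # particles per m^3
    print(f"sampling depth {depth_m} m -> {volumetric:.0f} particles/m^3")
```

A volumetric figure is thus partly an artifact of sampling method, which is why the per-area measure seemed to me the more rational one.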
When I received the “out-of-town” e-mail from Lönnstedt, I immediately sent a second e-mail directly to Dr. McNutt, wondering how Science could have published such an obviously flawed article. She was non-responsive:
“I am certain that Dr. Lönnstedt will have no problem responding to your queries. These issues are all well addressed in the environmental literature.”8
McNutt was overly optimistic. Lönnstedt never replied. So far as I can tell from extensive follow-up reading, none of the five issues I queried has been “well addressed” in environmental scholarship.
Dr. McNutt left her position at Science soon after our exchange; she was replaced by Dr. Jeremy Berg, who will be editor at Science until June 2019. Rush Holt, CEO and Executive Publisher of Science, who was repeatedly kept apprised of communications between Dr. Berg and me, will also retire in fall 2019.9
Retraction and Silence
Science reviewers and editors may not have noticed anything wrong with Lönnstedt and Eklöv’s article, but their colleagues at Uppsala University in Sweden did.10 They started an investigation, which ended with an official determination that both authors had committed research misconduct. Among other findings, the Board concluded that Eklöv and Lönnstedt had falsely stated that they had received ethical approval for animal experimentation; that they had been grossly negligent in failing to preserve their original data; that the research failed to follow proper control procedures; that Eklöv, the senior scientist, had failed to provide proper oversight of Lönnstedt; and that Lönnstedt was not actually present at the island laboratory for much of the time she claimed to have conducted the research, about which the investigators said:
[T]he main experiments on which the article is based and which presuppose observation for three weeks after exposure to plastic particles were not actually conducted. Consequently, the published results are fabricated, i.e. misconduct has occurred.11
A great many other charges against the two could not be proven, largely because the data had gone missing. Immediately after publication, Lönnstedt claimed that her laptop containing all the data had been stolen and that she had failed to back it up to the Uppsala servers, as required. None of which stopped the Board from stating that “[t]he most parsimonious explanation based on all available evidence is that most of the data in the Science paper are fabricated and the details of the experimental methods given in the paper are false in most instances.”12
Even before the November Uppsala report, the April 2017 report by Sweden’s Central Ethical Review Board noted that Science had been astonishingly lax in following its own data publication requirements and in querying the article’s shoddy methodology:
It may be considered particularly remarkable that the article was sent for publication without the presence of the necessary data. It is worth pointing out here that the journal Science was deficient in its checking in this respect . . . it is remarkable that the article, given these deficiencies, was accepted by the journal Science.13
The Science news department ran an article by Martin Enserink noting the Central Ethical Review Board’s negative judgment on Science’s review practices.14 But, to my knowledge, no one at Science has acknowledged any responsibility—any deficiency, as Sweden’s Central Ethical Review Board put it—for publishing an obviously flawed paper.
The Cover Up
It is now beyond dispute that Eklöv and Lönnstedt committed research misconduct and that their article should not have been published. But exactly how Science failed to detect the problems with the research design and data, and why it failed to uphold its own rules requiring researchers to provide their data, remain unanswered questions.
To try to answer these questions, I sent an e-mail in June 2017 to Jeremy Berg, Science’s editor, with a copy to Rush Holt, Executive Publisher at Science. In my e-mail I asked about the review failures at Science and the curious reluctance to admit Science’s role in publishing a fraudulent piece. Dr. Berg, unfortunately, never made a forthright reply to my queries during several rounds of communications. Dr. Holt made no reply at all.
Berg’s e-mails, however, did reveal fragments of information. On June 12, 2017, he wrote that “this submission was reviewed by members of our Board of Reviewing Editors and then by outside reviewers with more focused expertise in the area. The reviewers were quite supportive of the paper although they did raise concerns, many of which were addressed during the revision of the submission.”15 Berg’s e-mail confirmed that Eklöv and Lönnstedt’s article had gone through Science’s full review process—and that, to the extent that the reviewers’ comments were incorporated in the final piece, they even bore some responsibility for its content.
Berg’s e-mails also confirmed that more than pure science was involved in the decision to publish the article. On August 18 he wrote that “This paper was of interest, in part, because of the fact that it touched on a topic of contemporary interest, but that is true of many papers considered for publication in Science.”16 “Contemporary interest” sounds like a euphemism for politics. While Berg’s statement is not a smoking gun that proves that political groupthink or a political agenda played a role in Science’s slipshod review process, it does provide some evidence to support that theory.
Having gotten nowhere with Berg and Holt, however, I decided to send e-mails to members of Science’s Board of Reviewing Editors. On May 16, 2018, I sent an e-mail with my queries to twenty editors who had relevant expertise in Eklöv and Lönnstedt’s subject matter, copied to Berg and Holt.17 I received no comments. I then sent an e-mail with my query on July 6, 2018 to fifty Review Board members, without regard to expertise, as well as the entire AAAS Board of Directors.18 I still received no comments.
Berg’s response to me stated that he could not answer my questions because of “confidentiality.”19 No one at Science gave me a straight answer.
Science’s normal review process did not work, at least in this case. Further, the responsible officials and editors didn’t respond as scientists should when someone raises legitimate concerns. Instead they continued to shove Science’s culpable negligence under the rug. The Board of Reviewing Editors and the AAAS Board of Directors have now associated themselves with the culpable negligence of the editors and reviewing editors responsible for approving Eklöv and Lönnstedt’s article.
None of this is meant to disparage science in general, of course, which has immeasurable value when it lives up to the ideals of objectivity and reproducible evidence. But fraud can be committed under the rubric of science, and political bias can and does intrude. Scientists and non-scientists alike need to consider pronouncements by scientists, and the peer review process, critically. Just because a study has gained the imprimatur of a reputable scientific journal does not mean it is beyond methodological dispute. Institutions and individuals make mistakes; systems of checks and balances break down or are manipulated; and the scientific method is sometimes compromised. In this age of heightened politicization especially, it is essential that scientists, researchers, writers, and editors remain vigilant in defending the processes and protocols from which science derives its truest value.