In January 2014, Haruko Obokata published two high-profile papers claiming to demonstrate a method for turning ordinary somatic cells into pluripotent stem cells.

At the time, this was a monumental result. It promised to radically simplify what had been an intricate procedure, opening new avenues for medical and biological research, and it sidestepped the thorny ethical debates over harvesting stem cells from human embryos.

Better still, the method was remarkably simple: bathe the cells in a weak acid solution, or subject them to mechanical stress – a process curiously reminiscent of scrubbing rust off a piece of metal.

Within days, scientists began to notice anomalies in some of the figures in the published papers, and doubt spread quickly. Could such a simple method really do what was claimed?

Because the experiments were so simple, and the biological research community is a curious one, attempts to replicate the findings began almost immediately. They failed. By February, Obokata’s institution had opened an internal investigation. By March, some of the papers’ co-authors were distancing themselves from the published methods. By July, both papers had been retracted.

The papers were clearly unreliable, but exactly where the problems lay was harder to pin down. Had the authors mixed up their samples? Had they stumbled on a protocol that worked once but could not be stabilized?

Or had the data simply been fabricated? It took several more years, but the scientific community got something close to an answer when further, related work by Obokata was also retracted over image manipulation, inconsistent data, and other questionable research practices.

The whole episode was a remarkable display of science correcting itself. A major discovery was published, met with skepticism, rigorously tested, thoroughly investigated, found wanting, and withdrawn from the scientific record.

That is exactly how a system built on organized skepticism is supposed to work. It does not always work that way.

In most areas of science, it is rare for other researchers even to notice irregularities, let alone mobilize the world’s empirical resources to deal with them. Academic peer review rests on the assumption that scientific misconduct is rare or unimportant enough not to need a dedicated system for detecting it.

Most scientists assume they will never encounter a single case of fabricated data in their entire careers. As a result, checking the calculations in peer-reviewed papers, re-running the analyses, or verifying that experimental protocols were faithfully followed is usually seen as unnecessary.

Making matters worse, the raw data and analytical code needed to forensically examine a published paper are often not made available. And scrutiny of that kind is frequently seen as a hostile act – laborious work left to people with too much motivation or a taste for confrontation.

Everyone is busy with their own research; who would go to such extreme lengths to pull apart someone else’s work?

Which brings us to ivermectin, an antiparasitic drug that was investigated as a potential treatment for COVID-19 after laboratory studies in early 2020 suggested it might be beneficial.

Its popularity surged after a published-then-withdrawn analysis by the Surgisphere group suggested a large reduction in death rates among people who took the drug, triggering a wave of use across the globe.

Since then, the scientific case for ivermectin’s efficacy has rested heavily on a single study, released as a preprint – that is, published without peer review – in November 2020.

That study, based on a large patient cohort and reporting a pronounced benefit, attracted enormous attention. It was viewed more than 100,000 times, cited in numerous academic papers, and included in at least two meta-analyses which concluded that ivermectin was, as the authors put it, a “wonder drug” for COVID-19.

It is no exaggeration to say that this single paper shaped the decisions of millions of people seeking ivermectin to treat or prevent COVID-19.

Just days ago, the study was retracted amid allegations of fraud and plagiarism. A student assigned to review the paper as part of their degree noticed that the entire introduction appeared to have been plagiarized from earlier scientific papers. Closer inspection then showed that the data set the authors had posted online was full of obvious inconsistencies.

It is hard to overstate the scale of this failure. We, who consider ourselves custodians of knowledge, accepted findings so riddled with basic flaws that a medical student could dismantle them in a matter of hours.

The weight given to the results was wildly out of proportion to the quality of the study. The authors repeatedly used the wrong statistical tests, reported standard deviations that were highly implausible, and described an astonishing degree of benefit – the last time the medical community saw a “90 percent benefit” from a drug against a disease was with the arrival of antiretroviral therapy for people dying of AIDS.

Yet no one noticed. For the better part of a year, serious, respected researchers included the study in their reviews, doctors used it as a basis for treating patients, and governments factored its conclusions into public health policy.

No one spent the five minutes it would have taken to download the data file the authors had posted online and notice that it recorded numerous deaths occurring before the study had even begun. No one pasted phrases from the introduction into a search engine, which would have shown instantly how much of it had been copied verbatim from previously published work.
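For anyone curious what that five-minute check might look like, here is a rough sketch – assuming, purely for illustration, that the data were shared as a CSV file named trial_data.csv with an ISO-formatted death_date column and a June 2020 start date; the real file’s name, columns, and dates will differ.

    # A sketch of the check described above. File name, column name, and
    # study start date are hypothetical; the real data set's layout differs.
    import csv
    from datetime import date

    STUDY_START = date(2020, 6, 1)  # assumed (hypothetical) enrollment start

    with open("trial_data.csv", newline="") as f:
        # Collect every row whose recorded death date falls before the study began
        early_deaths = [
            row for row in csv.DictReader(f)
            if row.get("death_date")
            and date.fromisoformat(row["death_date"]) < STUDY_START
        ]

    print(f"{len(early_deaths)} recorded deaths predate the study start")

A non-zero count is not proof of fraud on its own, but it is exactly the kind of red flag that should prompt much closer scrutiny.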

This inattention and inaction is what keeps the cycle going. Because we choose not to look, we never learn how common scientific fraud is, where it is easiest to find, or how to detect it – and so we cannot make a sensible plan to limit the damage it causes.

A recent commentary in the British Medical Journal suggested that it may be time to change how we view health research altogether: to assume it is potentially fraudulent until proven otherwise.

That does not mean assuming every researcher is dishonest. It means receiving new health research from a position of far greater skepticism, rather than accepting it uncritically by default.

That may sound extreme. But if the alternative is quietly accepting that, every so often, millions of people will take a drug on the strength of research that is later retracted in its entirety, it may be a very small price to pay.

James Heathers is the Chief Scientific Officer at Cipher Skin and researches scientific integrity.

Gideon Meyerowitz-Katz is an epidemiologist working in chronic disease in Sydney, Australia. He writes a regular health blog covering science communication, public health, and how to interpret new studies.