Research scandals seem never-ending these days.
At Harvard, in the last year alone, allegations of plagiarism or research misconduct have been raised against former University President Claudine Gay, Harvard Business School professor Francesca Gino, Harvard Medical School neuroscientist Khalid Shah, and Harvard’s Chief Diversity and Inclusion Officer Sherri A. Charleston.
The problem, though prevalent here, is not isolated.
Last year, the president of Stanford resigned amid scrutiny of his past research. The social sciences have had a replicability crisis for some time now, but recently the natural sciences have seen waves of paper retractions for fraudulent research.
How could this happen? And what can we do about this sorry state of affairs?
There are certainly several causes. A “publish or perish” culture has long driven academics to fraudulent behavior. Holes in the peer-review system have allowed fraudulent work to slip through. And when concerns are raised to journals about work published in their pages, there is evidence that they often fail to respond adequately.
A crucial problem, however, is missing from this list, as well as from many of the ongoing conversations about plagiarism right now: Institutions pay far too little mind to uncovering misconduct that has already happened.
Right now, so-called science sleuths — do-good vigilantes who look into research concerns — are doing the lion’s share of this work. They are on the lookout for everything from duplicated or suspicious images to statistical tinkering to good old-fashioned verbal plagiarism. To investigate these issues, sleuths employ a wide variety of methods that require human attention and can’t easily be automated by journals or institutions.
In recent years, an entire internet subculture has emerged around this activity. There are now dedicated online forums such as PubPeer, where users comment on and discuss potential problems they have found in published research. One enthusiast created a game called Dupesy, which challenges players to spot images that are suspiciously similar to one another.
Such grassroots efforts, while valuable, indicate that there is no formal and effective institutional framework for screening previous research. Unlike the production of new research, uncovering problems with previously published studies carries few rewards.
That is, except for one enticing, but troubling, reward: the political ends you can achieve by motivated review of research.
In recent months, finding research misconduct in the work of ideological opponents has become a weapon in American political disputes. This type of targeted review is present on both sides: However ironic, the plagiarism allegations raised against Neri Oxman, the wife of outspoken Harvard critic Bill A. Ackman ’88, were just as politically motivated as those leveled against Claudine Gay.
While genuine cases of misconduct have been raised as a result of these targeted investigations, this phenomenon leads to a narrow and skewed impression of a problem that is far broader.
This is apparent in the recent targeting of Black female scholars, whose potential misconduct is being used to critique DEI initiatives. Without a benchmark measure of the prevalence of plagiarism in the work of other scholars, we have no way of understanding how unusual these issues really are.
To make matters worse, while incentives to uncover misconduct are scarce or misaligned, obstacles abound.
Those uncovering misconduct often get threatened or sued — in the case of vigilante sleuths, often without an institution to protect them. Moreover, advances in AI are likely to enable completely new ways to commit scientific fraud, further complicating amateur strategies. Generative AI, for example, makes it much easier to create unique images, rendering the sleuthing technique of finding pictures too similar to one another obsolete.
Developments in AI can also be used to fight AI-enabled fraud, of course, but it is unlikely that the hobbyists who do much of the investigation now will have access to the cutting edge of AI detection technology. More generally, amateur detectives can lack the capacity to find problems in complex studies involving sophisticated fraud — often the ones that matter the most.
In light of these issues, one clear way forward is to integrate sleuths more closely into the scientific process. More specifically, governments or large research universities should create internal divisions dedicated to searching for potential fraud.
The review of past research should have its own career path in academia, next to research and teaching. This would introduce new incentives to sleuth while blunting the impact of legal threats.
At the very least, more recognition and even material reward should be provided to anyone who finds legitimate concerns with published research. Across the Atlantic, the University of Bern is already piloting such a program.
Harvard has a chance to redeem itself in the eyes of the research community and take the lead in such efforts. We certainly have the resources both to establish our own division dedicated to uncovering past misconduct and to offer prizes for those who expose serious cases.
The job of spotting scientific fraud should become a specialized career with its own tools, practices, and code of conduct. Not only would this clean up our science, but it would likely deter future misconduct as well.
The sooner we professionalize science sleuthing, the better. Harvard should take the lead.
Ivan Toth-Rohonyi ’25, an Associate Editorial editor, is a joint concentrator in Sociology and Computer Science in Adams House.