The Broad Institute and Boston Children’s Hospital are providing researchers with increased access to online image manipulation and plagiarism detection tools following multiple allegations of research misconduct against Longwood researchers.
Broad Institute Director Todd R. Golub shared two image-checking software tools with affiliates on Monday, while Boston Children’s Hospital Chief Scientific Officer Nancy C. Andrews rolled out a similar program last month.
In the Monday email announcement to affiliates, Golub wrote that researchers can now upload their draft manuscripts to Proofig and Imagetwin — software tools that use artificial intelligence to detect image integrity issues — prior to publication.
The institutional rollout of these tools comes weeks after data sleuths alleged research misconduct by top Dana-Farber Cancer Institute scientists and a Brigham and Women’s Hospital researcher. The majority of the alleged instances of data fabrication involved image manipulation and duplication.
Broad Information Technology Services, the Academic Affairs Officers, and scientific teams chose to implement Proofig and Imagetwin “after several weeks of review,” according to the email.
While Proofig can detect manipulations within a manuscript, Imagetwin can identify duplication across previously published papers.
“Neither tool catches everything it is intended to catch, which reinforces the ongoing importance of a thorough, manual (human) review of all images and underlying data prior to submission,” Golub wrote.
“Proofig and Imagetwin should be viewed as complements to, rather than replacements for, your preexisting quality control procedures,” he added.
According to the email, Broad Institute officials are also currently reviewing additional systems “that can help authors flag instances of inadvertent plagiarism.”
At Boston Children’s Hospital, Andrews recommended the use of similar tools in an emailed statement to affiliates on February 7.
“There is no good excuse for doctored figures or altered data. However, they happen, and some are not caught prior to publication,” Andrews wrote.
She encouraged researchers to use Research Computing, which “offers assistance in detecting AI-generated content and text that appears to be plagiarized,” according to the email.
“We are developing a comprehensive list of research integrity resources and can provide data management best practices to your lab upon request,” Andrews added.
Such tools have not yet been made available to researchers in the broader Harvard community.
Harvard Medical School Dean for Faculty & Research Integrity Kristin Bittinger said in an interview with The Crimson that there has been “no institutional decision on that either at the Medical School or at Harvard University that I’m aware of.”
Though HMS has not recently faced allegations of research misconduct or data manipulation against its own research, some accused scientists at affiliated hospitals have held joint appointments at the Medical School.
“That doesn’t mean it may not happen,” she added. “It just means there’s no decision — nothing to share on it at this time.”
“Certainly, leadership is looking closely at everything that we can do to improve research integrity across all of our community,” Bittinger said.
—Staff writer Veronica H. Paulus can be reached at veronica.paulus@thecrimson.com. Follow her on X @VeronicaHPaulus.