Trusting that what you see, read and hear is real and truthful has never been more difficult than it is today.
While technology plays a big role in creating this state of affairs, the most egregious threat remains the human faker: someone who sets out to deceive and profit using methods that long pre-date today's sophisticated technology, such as AI.
In scientific research, this deception manifests itself in areas like data and image falsification. Uncovering such dishonesty isn't easy, and once you do, challenging it can be tricky as well.
So how do you combat it?
To find out, I had the pleasure of speaking to David A. Sanders, an associate professor at Purdue University in the US. He is the guest in episode 20 of Ideas to Innovation Season 2, a business podcast from Clarivate that I host; the episode was published last week.
While Sanders is indeed a university professor, he has also built a reputation as a “scientific sleuth” with an uncanny ability to identify falsified data and images in published academic research. His findings have led to numerous retractions, highlighting how this type of misconduct directly damages scientific integrity.
The case that launched Sanders-the-sleuth in 2017 was his bringing scientific misconduct at the Croce Laboratory at Ohio State University — including image and data manipulation and plagiarism — into the public spotlight.
He discusses how manipulated images were copied across multiple published papers. When Sanders exposed the issue, many publishers were reluctant to retract the flawed papers even when presented with evidence. Allowing falsified research to stand uncorrected erodes public trust in science.
Advancing technology is making it easier to generate fabricated images and data that appear authentic. AI systems can now produce fake images, text and data that humans struggle to identify as false. As Sanders remarks in our conversation, this could lead to a proliferation of high-quality fraudulent research flooding both predatory journals and reputable publishers.
He notes that the sheer volume of research data, combined with an overwhelmed peer review system, is adding to the pressure on the system, and he argues that post-publication review is just as valid as pre-publication review.
And he warns that AI will become a greater challenge with new tools that will make it increasingly difficult to detect fakes and plagiarism.
From our conversation, I can see three broad ways researchers, publishers and universities can work to protect research and publishing integrity:
- Researchers must thoroughly review data and images for errors and falsification before publication. Curbing misconduct before publication is ideal.
- Publishers need to dedicate more staff to pre- and post-publication review to identify fraud. Swift retractions maintain credibility.
- Universities should expand training on research ethics, data credibility, and fraud detection. Growing a community of “scientific sleuths” is critical.
Maintaining research integrity in the face of increasingly sophisticated falsification will require vigilance across the entire academic community.
Listen to the conversation: