Could the new and wildly popular chatbot ChatGPT convincingly produce fake abstracts that fool scientists into thinking those studies are the real thing?

That was the question worrying Northwestern Medicine physician-scientist Dr. Catherine Gao when she designed a study, in collaboration with University of Chicago scientists, to find out.

Yes, scientists can be fooled, their new study reports. Blinded human reviewers, given a mix of real and AI-generated abstracts, correctly identified the ChatGPT-generated abstracts only 68% of the time. The reviewers also incorrectly flagged 14% of the real abstracts as AI-generated.

"Our reviewers knew that some of the abstracts they were being given were fake, so they were very suspicious," said corresponding author Gao, an instructor in pulmonary and critical care medicine at Northwestern University Feinberg School of Medicine. "This is not someone reading an abstract in the wild. The fact that our reviewers still missed the AI-generated ones 32% of the time means these abstracts are really good. I suspect that if someone just came across one of these generated abstracts, they wouldn't necessarily be able to identify it as being written by AI."
