Human assumptions about language use can lead to flawed judgments of whether language was AI- or human-generated, Cornell Tech and Stanford researchers have found in a series of experiments.

While individuals' ability to detect AI-generated language was generally no better than chance, people were consistently influenced by the same verbal cues, leading to the same flawed judgments.

Participants could not differentiate AI-generated from human-generated language, erroneously assuming that mentions of personal experiences and the use of "I" pronouns indicated human authors. They also took convoluted phrasing as a sign of AI authorship.

"We learned something about humans and what they believe to be either human or AI language," said Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at the Cornell Ann S. Bowers College of Computing and Information Science. "But we also show that AI can take advantage of that, learn from it and then produce texts that can more easily mislead people."
