Are large language models sentient? If they are, how would we know?

As a new generation of AI models has rendered the Turing test, the decades-old measure of a machine's ability to exhibit human-like behavior, obsolete, the question of whether AI is ushering in machines that are self-conscious is stirring lively discussion.

Former Google software engineer Blake Lemoine suggested the large language model LaMDA was sentient.

"I know a person when I talk to it," Lemoine said in an interview in 2022. "If I didn't know exactly what it was, which is this we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

Ilya Sutskever, a co-founder of OpenAI, suggested that today's large neural networks might be "slightly conscious."

And Oxford philosopher Nick Bostrom agrees.

"If you admit that it's not an all-or-nothing thing, then it's not so dramatic to say that some of these [AI] assistants might plausibly be candidates for having some degrees of sentience," he said.

Others, however, warn, "Don't be fooled."
