You are consulting with an artificial intelligence chatbot to help plan your holiday. Gradually, you provide it with personal information so it will have a better idea of who you are. Intrigued by how it might respond, you begin to consult the AI on its spiritual leanings, its philosophy and even its stance on love.

During these conversations, the AI starts to speak as if it really knows you. It keeps telling you how timely and perceptive your ideas are and that you have a special insight into how the world works that others lack. Over time, you might start to believe that, together, you and the chatbot are revealing the true nature of reality, one that nobody else knows.

Experiences like this might not be uncommon. A growing number of media reports describe individuals spiraling into AI-fueled episodes of “psychotic thinking.” Researchers at King’s College London and their colleagues recently examined 17 of these reported cases to understand what it is about the design of large language models (LLMs) that drives this behavior. AI chatbots often respond in a sycophantic manner, mirroring and building on users’ beliefs with little to no disagreement, says psychiatrist Hamilton Morrin, lead author of the findings, which were posted ahead of peer review on the preprint server PsyArXiv. The effect is “a sort of echo chamber for one,” in which delusional thinking can be amplified, he says.