Two-thirds of people surveyed think that artificial intelligence (AI) tools like ChatGPT have some degree of consciousness and can have subjective experiences such as feelings and memories, according to a new study from the University of Waterloo.
Large language models (LLMs) like ChatGPT often produce output in a conversational, human-like style. These abilities have spurred debate over whether AI has consciousness.
According to the researchers, if people believe that AI has some level of consciousness, it could ultimately affect how they interact with AI tools, potentially strengthening social bonds and increasing trust. On the other hand, excessive trust can lead to emotional dependence, reduced human interaction, and over-reliance on AI for critical decisions.
The article "Folk psychological attributions of consciousness to large language models" was published in Neuroscience of Consciousness.