Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”.
Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were?
To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository1, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.