Large language models (LLMs), the most renowned of which is ChatGPT, have become increasingly adept at processing and generating human language over the past few years. The extent to which these models emulate the neural processes supporting language processing in the human brain, however, has yet to be fully elucidated.
Researchers at Columbia University and the Feinstein Institutes for Medical Research at Northwell Health recently carried out a study investigating the similarities between LLM representations and neural responses in the human brain. Their findings, published in Nature Machine Intelligence, suggest that as LLMs become more advanced, they not only perform better but also become more brain-like.
"Our original inspiration for this paper came from the recent explosion in the landscape of LLMs and neuro-AI research," Gavin Mischler, first author of the paper, told Tech Xplore.
"A few papers over the past few years showed that the word embeddings from GPT-2 displayed some similarity with the word responses recorded from the human brain, but in the fast-paced domain of AI, GPT-2 is now considered old and not very powerful.
"Ever since ChatGPT was released, there have been so many other powerful models that have come out, but there hasn't been much research on whether these newer, bigger, better models still display those same brain similarities."