Researchers have found that, despite the progress claimed for large language models (LLMs), generative artificial intelligence (GenAI) still has much to learn and cannot yet be fully trusted.

The study could have serious implications for generative AI models deployed in the real world.

This is especially because an LLM that appears to perform well in one context may break down when the task or environment changes even slightly.


The study was conducted by researchers from Harvard University, the Massachusetts Institute of Technology (MIT), the University of Chicago Booth School of Business, and Cornell University.

A student is only as good as their teachers.
