Artificial intelligence that cannot explain how it makes decisions—often called "black box" AI—could soon be replaced by more transparent systems, research suggests. A study by Loughborough University, published in Physica D: Nonlinear Phenomena, outlines a new mathematical blueprint for building AI that can reveal how it learns, remembers, and makes decisions.
The team have developed a prototype system with both a "brain" and a memory. Unlike conventional AI, it can learn continuously without losing past knowledge, avoids forming false or misleading memories, and mimics aspects of human thinking—such as strengthening or forgetting information over time—in a clear, controllable way.
"Intelligence has long been treated as something that emerges inside a black box," said lead author, Dr. Natalia Janson, of Loughborough University's Department of Mathematical Sciences. "We wanted to rethink AI from the ground up—and we've built a system where the inner workings of cognition are fully transparent."
In early tests, the prototype showed promising results. In simple demonstrations, it learned musical notes and short musical phrases without supervision and identified and stored colors from visual data such as cartoon images. Across all tasks, it behaved in a predictable and traceable way, while avoiding common problems seen in AI, including "catastrophic forgetting" and the formation of false memories.