Every day, our brains recognize and discriminate among thousands of sensory signals. Today's best artificial intelligence models—many of which are inspired by neural circuits in the brain—have similar abilities. For example, the deep convolutional neural networks used for object recognition and classification are inspired by the layered structure of the visual cortex. However, scientists have yet to develop a full mathematical understanding of how biological or artificial intelligence systems achieve this recognition ability. Now SueYeon Chung of the Flatiron Institute in New York and her colleagues have developed a detailed theory connecting the geometry of object representations in biological and artificial neural networks to the networks' performance on classification tasks [1] (Fig. 1). The researchers show that their theory can accurately estimate the classification capacity of an arbitrarily complex neural network, a problem that other methods have struggled to solve.
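The notion of "capacity" in such analyses has a classical antecedent that the manifold theory generalizes: Cover's function-counting theorem, which gives the fraction of labelings of P points in N dimensions that a linear classifier can realize. As a hedged illustration (the function name below is ours, and this is the point-based result, not the authors' full manifold calculation), the sharp transition at P = 2N can be computed directly:

```python
from math import comb

def separable_fraction(P, N):
    """Fraction of the 2**P labelings of P points in general position
    in N dimensions that a linear classifier through the origin can
    realize, per Cover's counting theorem."""
    if P <= N:
        return 1.0  # below N points, every labeling is separable
    return 2 * sum(comb(P - 1, k) for k in range(N)) / 2 ** P

# The fraction falls sharply near the capacity P = 2N
# (here N = 10, so the transition sits at P = 20):
for P in (10, 20, 40):
    print(P, separable_fraction(P, 10))
```

At exactly P = 2N the separable fraction is 1/2, which is why 2N is called the capacity of a linear classifier; the work summarized above extends this kind of counting from isolated points to entire object manifolds.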