AI vision models have improved dramatically over the past decade. Yet these gains have produced neural networks that, though effective, share few characteristics with human vision. For example, convolutional neural networks (CNNs) tend to rely on texture when classifying images, whereas humans respond more strongly to shape.

A paper recently published in Nature Human Behaviour has partially addressed that gap. It describes a novel All-Topographic Neural Network (All-TNN) that, when trained on natural images, developed an organized, specialized structure more like that of human vision. The All-TNN better mimicked human spatial biases, such as expecting an airplane to appear nearer the top of an image than the bottom, and operated on a significantly lower energy budget than other neural networks used for machine vision.
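The paper itself has the full details, but the general idea behind topographic networks can be sketched roughly: replace a convolution (which shares one filter everywhere) with a locally connected layer in which every position on a 2-D sheet of units has its own filter, then add a regularizer nudging neighbouring units toward similar weights, so that smooth, specialized maps can emerge during training. The sketch below is an illustrative assumption, not the authors' implementation: the class and function names are invented here, and the squared-difference penalty stands in for whatever spatial-similarity regularizer the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocallyConnected2d(nn.Module):
    """A convolution-like layer WITHOUT weight sharing: each output
    location has its own filter, so units across the 2-D sheet are
    free to specialize, loosely like a cortical map."""

    def __init__(self, in_ch, out_ch, in_size, kernel, stride=1):
        super().__init__()
        self.kernel, self.stride = kernel, stride
        self.out_size = (in_size - kernel) // stride + 1
        patch = in_ch * kernel * kernel
        # One filter per output location: (out_ch, n_locations, patch_size).
        self.weight = nn.Parameter(
            torch.randn(out_ch, self.out_size ** 2, patch) * 0.01
        )

    def forward(self, x):
        # patches: (batch, patch_size, n_locations)
        patches = F.unfold(x, self.kernel, stride=self.stride)
        # Per-location dot product -> (batch, out_ch, n_locations).
        out = torch.einsum("bpl,olp->bol", patches, self.weight)
        return out.view(x.shape[0], -1, self.out_size, self.out_size)


def smoothness_penalty(layer):
    """Penalize weight differences between neighbouring units so nearby
    units learn similar filters. A simple squared-difference stand-in
    for the paper's spatial regularizer (assumption, not the original)."""
    w = layer.weight.view(layer.weight.shape[0],
                          layer.out_size, layer.out_size, -1)
    dy = (w[:, 1:, :, :] - w[:, :-1, :, :]).pow(2).mean()
    dx = (w[:, :, 1:, :] - w[:, :, :-1, :]).pow(2).mean()
    return dy + dx


if __name__ == "__main__":
    layer = LocallyConnected2d(in_ch=3, out_ch=8, in_size=32, kernel=5)
    x = torch.randn(2, 3, 32, 32)
    y = layer(x)                      # (2, 8, 28, 28)
    reg = smoothness_penalty(layer)   # add lambda * reg to the task loss
    print(y.shape, reg.item())
```

Dropping weight sharing is what allows units at different positions on the sheet to develop different specializations; the smoothness term is what keeps that specialization spatially organized rather than scattered at random.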

“One of the things you notice when you look at the way knowledge is ordered in the brain is that it’s fundamentally different to how it is ordered in deep neural networks, such as convolutional neural nets,” said Tim C. Kietzmann, full professor at the Institute of Cognitive Science in Osnabrück, Germany, and co-supervisor of the paper.
