Advances in artificial intelligence and autonomous systems have triggered growing interest in artificial vision systems (AVSs) in recent years. Artificial vision allows machines to “see”, interpret and react to the world around them, much as we do when we respond to a visible change in our surroundings, such as a car braking in front of us as we drive.

These “machine eyes” capture images from the world around them using cameras and sensors. Complex computing algorithms then process these images, enabling the machines to analyse their surroundings in real time and provide a response to any changes or threats (depending upon their intended application).
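To make that capture-process-respond pipeline concrete, here is a minimal sketch of a change-detection loop, using the OpenCV library purely for illustration. The threshold value and the respond_to_change stub are assumptions for this example, not details from the reported work.

```python
import cv2

CHANGE_THRESHOLD = 25  # assumed pixel-difference threshold; tune per application


def respond_to_change(changed_pixels: int) -> None:
    # Placeholder "response": a real AVS would trigger braking, alert a user, etc.
    print(f"Scene change detected ({changed_pixels} pixels differ)")


def run_vision_loop() -> None:
    camera = cv2.VideoCapture(0)       # the machine's "eye": a camera sensor
    ok, previous = camera.read()
    if not ok:
        raise RuntimeError("Could not read from camera")
    previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = camera.read()      # capture an image of the surroundings
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # "Process" step: compare this frame with the previous one to spot changes
        diff = cv2.absdiff(gray, previous)
        changed = int((diff > CHANGE_THRESHOLD).sum())

        if changed > 0:
            respond_to_change(changed)  # "react" step

        previous = gray

    camera.release()


if __name__ == "__main__":
    run_vision_loop()
```

Real systems replace the simple frame differencing above with far more sophisticated algorithms, but the capture, analyse and respond structure is the same.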

AVSs have been used in many areas, including facial recognition, autonomous vehicles and visual prosthetics (artificial eyes). While AVSs for autonomous vehicles and other high-tech applications are now well established, visual prosthetics remain more challenging: the human body is complex, and state-of-the-art AVSs do not possess the same multifunctionality and self-regulation as the biological eyes they are meant to mimic.

Many AVSs in use today need several components to function, because no single photoreceptive device can yet perform multiple functions. As a result, many designs are more complex than they need to be, making them harder to manufacture and less commercially feasible. Hanlin Wang, Yunqi Liu and colleagues at the Chinese Academy of Sciences are now using nanoclusters to create multifunctional photoreceptors for biological prosthetics, reporting their findings in Nature Communications.
