Artificial neural networks (ANNs) have produced many stunning tools in the past decade, including the Nobel-Prize-winning AlphaFold model for protein-structure prediction [1]. However, this success comes with an ever-increasing economic and environmental cost: training such models on vast amounts of data requires staggering quantities of energy [2]. As their name suggests, ANNs are computational algorithms inspired by their biological counterparts. Yet despite the similarities, biological neural networks operate on an energy budget many orders of magnitude smaller than that of ANNs. Their secret? Neurons relay information via short electrical pulses, so-called spikes, and this sparse, pulse-based mode of information processing is remarkably energy efficient.

Surprisingly, similar features have not yet been incorporated into mainstream ANNs. Although researchers have studied spiking neural networks (SNNs) for decades, the discontinuous nature of spikes complicates the use of the standard gradient-based algorithms employed to train neural networks. In a new study, Christian Klos and Raoul-Martin Memmesheimer of the University of Bonn, Germany, propose a remarkably simple solution to this problem, derived by taking a deeper look at the spike-generation mechanism of biological neurons [3]. Their method could dramatically expand the power of SNNs, enabling myriad applications in physics, neuroscience, and machine learning.
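To see where the training difficulty comes from, consider a minimal sketch of a leaky integrate-and-fire neuron, the simplest textbook spiking model. The parameter names and values below are illustrative assumptions, not details of Klos and Memmesheimer's method: the point is only that the spike is generated by a hard threshold, a step function whose derivative is zero almost everywhere, so gradients cannot flow through it in the usual way.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, for illustration only.
# Parameters (dt, tau, v_thresh, v_reset) are illustrative assumptions,
# not values from the study discussed above.

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Simulate a LIF neuron driven by an input-current trace.

    The membrane potential v decays toward zero with time constant tau while
    integrating the input. Whenever v crosses v_thresh, the neuron emits a
    spike (a discrete 0/1 event) and v is reset -- this hard threshold is the
    discontinuity that breaks ordinary gradient-based training.
    """
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        # Euler step of the leaky membrane dynamics: dv/dt = (-v + I) / tau
        v += dt * (-v + i_t) / tau
        if v >= v_thresh:            # Heaviside step: non-differentiable in v
            spikes[t] = 1.0
            v = v_reset              # instantaneous reset after each spike
    return spikes

# Example: a constant drive produces a sparse, regular spike train.
current = np.full(200, 1.5)
print(int(simulate_lif(current).sum()), "spikes in 200 steps")
```

Because the output is a train of such all-or-nothing events rather than a smooth function of the inputs, standard backpropagation has nothing to differentiate through, which is the obstacle the new study addresses.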