New research offers a refined approach to calculating the energy costs of computational processes, potentially paving the way for more energy-efficient computing and advanced chip designs.
Every computing system, whether biological or synthetic—ranging from cells and brains to laptops—incurs a cost. This cost isn’t merely the monetary price, which is straightforward to determine, but rather an energy cost tied to the effort needed to run a program and the heat that is released during the process.
Researchers at SFI and elsewhere have spent decades developing a thermodynamic theory of computation, but previous work on the energy cost has focused on basic symbolic computations — like the erasure of a single bit — that aren’t readily transferable to less predictable, real-world computing scenarios.
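The single-bit erasure mentioned above has a well-known thermodynamic floor: Landauer's bound, which says erasing one bit must dissipate at least k_B T ln 2 of energy. A minimal sketch of that calculation (the room-temperature value of 300 K is an illustrative assumption, not from the paper):

```python
import math

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2).
k_B = 1.380649e-23  # Boltzmann constant in J/K (exact under the 2019 SI)
T = 300.0           # room temperature in kelvin (illustrative assumption)

landauer_limit = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T:.0f} K: {landauer_limit:.3e} J")
```

At room temperature this floor works out to roughly 3 × 10⁻²¹ joules per erased bit, many orders of magnitude below the per-operation energies of today's hardware — which is part of why extending such bounds to realistic, randomness-dependent computations matters.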
In a paper recently published in the journal Physical Review X, a quartet of physicists and computer scientists expands the modern theory of the thermodynamics of computation. By combining approaches from statistical physics and computer science, the researchers introduce mathematical equations that reveal the minimum and maximum predicted energy costs of computational processes that depend on randomness, a powerful tool in modern computers.