Information processing requires a lot of energy. More energy-efficient computing systems could help, but their efficiency cannot be increased indefinitely, as ETH physicists show.

As steam engines became increasingly widespread in the 19th century, the question soon arose of how to optimise them. Thermodynamics, the physical theory that emerged from the study of these machines, proved to be an extremely fruitful approach; it remains central to optimising the energy use of heat engines today.

Even in today's information age, physicists and engineers hope to make use of this theory; it is becoming ever clearer that a computer's performance is limited not by its clock rate or the number of chips it uses, but by its energy turnover. "The performance of a computing centre depends primarily on how much heat can be dissipated," says Renato Renner, Professor of Theoretical Physics and head of the research group for Quantum Information Theory.

Renner's statement can be illustrated by the Bitcoin boom: it is not computing capacity itself, but the exorbitant energy use – which produces a huge amount of heat – and the associated costs that have become the deciding factors for the future of the cryptocurrency. Computers' energy consumption has also become a significant cost driver in other areas.

For information processing, the question of how to complete computing operations as efficiently as possible in thermodynamic terms is becoming increasingly urgent – or, to put it another way: how can we carry out the greatest number of computing operations with the least amount of energy? As with steam engines, fridges and gas turbines, a fundamental question arises here: can the efficiency be increased indefinitely, or is there a physical limit that fundamentally cannot be exceeded?
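As a point of reference (the article does not quote this figure), the textbook candidate for such a limit is Landauer's principle, which states that irreversibly erasing one bit of information must dissipate at least k_B·T·ln 2 of heat. The short sketch below simply evaluates this bound at room temperature; the example erasure rate is an illustrative assumption, not a number from the article.

```python
import math

# Boltzmann constant in joules per kelvin (CODATA value)
K_B = 1.380649e-23

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum heat dissipated when erasing one bit, per Landauer's principle."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the bound is roughly 2.9e-21 joules per erased bit.
per_bit = landauer_limit(300.0)
print(f"Landauer limit at 300 K: {per_bit:.3e} J per bit")

# Illustrative comparison (assumed figure): a machine erasing 1e18 bits per second
# would have to dissipate at least this much power even at the theoretical bound.
print(f"Minimum power for 1e18 bit erasures per second: {per_bit * 1e18:.3e} W")
```

Real processors dissipate many orders of magnitude more than this bound per logical operation, which is why the question of how closely it can be approached in practice is of more than academic interest.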
