Scientific Education



Posted October 10, 2018

Why Do Computers Use So Much Energy?

Microsoft is currently running an interesting set of hardware experiments. The company is taking a souped-up shipping container stuffed full of computer servers and submerging it in the ocean. The most recent round is taking place near Scotland’s Orkney Islands, and involves a total of 864 standard Microsoft data-center servers. Many people have impugned the rationality of the company that put Seattle on the high-tech map, but seriously—why is Microsoft doing this?
There are several reasons, but one of the most important is that it is far cheaper to keep computer servers cool when they’re on the seafloor. This cooling is not a trivial expense. Precise estimates vary, but currently about 5 percent of all energy consumption in the U.S. goes just to running computers—a huge cost to the economy as a whole. Moreover, all that energy used by those computers ultimately gets converted into heat. This results in a second cost: that of keeping the computers from melting.
These issues don’t only arise in artificial, digital computers. There are many naturally occurring computers, and they, too, require huge amounts of energy. To give a rather pointed example, the human brain is a computer. This particular computer uses some 10–20 percent of all the calories that a human consumes. Think about it: our ancestors on the African savanna had to find 20 percent more food every single day, just to keep that ungrateful blob of pink jelly imperiously perched on their shoulders from having a hissy fit. That need for 20 percent more food is a massive penalty to the reproductive fitness of our ancestors. Is that penalty why intelligence is so rare in the evolutionary record? Nobody knows—and nobody has even had the mathematical tools to ask the question before.

There are other biological computers besides brains, and they too consume large amounts of energy. To give one example, many cellular systems can be viewed as computers. Indeed, the comparison of thermodynamic costs in artificial and cellular computers can be extremely humbling for modern computer engineers. For example, a large fraction of the energy budget of a cell goes to translating RNA into sequences of amino acids (i.e., proteins), in the cell’s ribosome. But the thermodynamic efficiency of this computation—the amount of energy required by a ribosome per elementary operation—is many orders of magnitude superior to the thermodynamic efficiency of our current artificial computers. Are there “tricks” that cells use that we could exploit in our artificial computers? Going back to the previous biological example, are there tricks that human brains use to do their computations that we can exploit in our artificial computers?
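To put rough numbers on that comparison, here is a back-of-the-envelope sketch in Python (my illustration, not from the original analysis). All figures are order-of-magnitude assumptions: roughly four ATP-equivalents are hydrolyzed per amino acid the ribosome appends, each releasing on the order of 20 k_BT of free energy, and a present-day CMOS logic operation is taken to cost about a femtojoule.

```python
K_B_T = 4.14e-21  # thermal energy k_B * T at ~300 K, in joules

# Rough textbook figures (order-of-magnitude assumptions only):
# ~4 ATP-equivalents hydrolyzed per amino acid the ribosome appends,
# each releasing roughly 20 k_B*T of free energy.
atp_per_amino_acid = 4
joules_per_atp = 20 * K_B_T
ribosome_per_op = atp_per_amino_acid * joules_per_atp  # ~3e-19 J

# Assumed ballpark for one logic operation in a current CMOS chip;
# illustrative only, real figures vary widely by technology.
cmos_per_op = 1e-15  # J

print(f"ribosome: ~{ribosome_per_op:.1e} J per amino acid appended")
print(f"assumed CMOS op / ribosome op: ~{cmos_per_op / ribosome_per_op:.0f}x")
```

Even with these crude numbers, the cell comes out thousands of times more efficient per elementary operation, consistent with the humbling comparison above.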
More generally, why do computers use so much energy in the first place? What are the fundamental physical laws governing the relationship between the precise computation a system runs and how much energy it requires? Can we make our computers more energy-efficient by redesigning how they implement their algorithms?
These are some of the issues my collaborators and I are grappling with in an ongoing research project at the Santa Fe Institute. We are not the first to investigate these issues; they have been considered for over a century and a half. But that earlier analysis relied on semi-formal, back-of-the-envelope reasoning rather than rigorous mathematical arguments, since the relevant mathematics wasn’t fully mature at the time.
This earlier work resulted in many important insights, in particular the work in the mid to late 20th century by Rolf Landauer, Charles Bennett and others.
However, this early work was also limited by the fact that it tried to apply equilibrium statistical physics to analyze the thermodynamics of computers. The problem is that, by definition, an equilibrium system is one whose state never changes, while a computer is a system whose entire purpose is to change its state. So whatever else they are, computers are definitely nonequilibrium systems. In fact, they are often very-far-from-equilibrium systems.

Fortunately, completely independent of this early work, there have been some major breakthroughs in the past few decades in the field of nonequilibrium statistical physics (closely related to a field called “stochastic thermodynamics”). These breakthroughs allow us to analyze all kinds of issues concerning how heat, energy, and information get transformed in nonequilibrium systems.
These analyses have provided some astonishing predictions. For example, we can now calculate the (non-zero) probability that a given nanoscale system will violate the second law, reducing its entropy, in a given time interval. (We now understand that the second law does not say that the entropy of a closed system cannot decrease, only that its expected entropy cannot decrease.) There are no controversies here arising from semi-formal reasoning; instead, there are many hundreds of peer-reviewed articles in top journals, a large fraction involving experimental confirmations of theoretical predictions.
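To make that second-law statement concrete, here is a minimal numerical sketch (my illustration, not part of the research described here). Assume the entropy a small system produces in one trial, in units of k_B, is Gaussian; the fluctuation theorems then force its variance to equal twice its mean, which makes the average of exp(−σ) exactly one, even though a nonzero fraction of individual trials produce negative entropy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: entropy production sigma (in units of k_B)
# over one trial is Gaussian. The fluctuation theorems then force
# variance = 2 * mean, which guarantees <exp(-sigma)> = 1.
mean_sigma = 2.0
sigma = rng.normal(loc=mean_sigma,
                   scale=np.sqrt(2 * mean_sigma),
                   size=1_000_000)

# A nonzero fraction of individual trials "violate" the second law
# (roughly 16 percent, for this choice of mean)...
frac_negative = np.mean(sigma < 0)

# ...yet the integral fluctuation theorem holds on average.
ift_average = np.mean(np.exp(-sigma))

print(f"fraction of negative-entropy trials: {frac_negative:.4f}")
print(f"<exp(-sigma)>: {ift_average:.3f}")
```

The expected entropy production is positive, as the second law demands, while single trajectories can still run "backwards."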

Now that we have the right tools for the job, we can revisit the entire topic of the thermodynamics of computation in a fully formal manner. This has already been done for bit erasure, the topic of concern to Landauer and others, and we now have a fully formal understanding of the thermodynamic costs in erasing a bit (which turn out to be surprisingly subtle).
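For reference, the Landauer bound at the heart of that bit-erasure story is simple to evaluate: erasing one bit at temperature T requires dissipating at least k_B T ln 2 of heat. A quick sketch (my illustration; 277 K is just an assumed cold-seawater temperature):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the SI)

def landauer_bound(temp_kelvin: float) -> float:
    """Minimum heat (in joules) dissipated to erase one bit at temperature T."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_bound(300.0)      # ~2.87e-21 J
seafloor = landauer_bound(277.0)  # a few percent lower in cold water

print(f"at 300 K: {room:.3e} J per bit erased")
print(f"at 277 K: {seafloor:.3e} J per bit erased")
```

Lowering the temperature shifts this bound only modestly; the practical savings from submerging servers come from cheaper cooling, as described above, not from a lower fundamental limit.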
However, computer science extends far, far beyond counting the number of bit erasures in a given computation. Thanks to the breakthroughs of nonequilibrium statistical physics, we can now also investigate the rest of computer science from a thermodynamic perspective. For example, moving from bits to circuits, my collaborators and I now have a detailed analysis of the thermodynamic costs of “straight-line circuits.” Surprisingly, this analysis has resulted in novel extensions of information theory. Moreover, in contrast to the kind of analysis pioneered by Landauer, this analysis of the thermodynamic costs of circuits is exact, not just a lower bound.
Conventional computer science is all about trade-offs between the memory resources and the number of timesteps needed to perform a given computation. In light of the foregoing, it seems there may be far more trade-offs in performing a computation than conventional computer science has appreciated: thermodynamic costs in addition to the costs in memory resources and timesteps. Such trade-offs would apply in both artificial and biological computers.

Clearly there is a huge amount to be done to develop this modern “thermodynamics of computation.”
Be on the lookout for a forthcoming book of contributed papers from the SFI Press, touching on many of the issues mentioned above. Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, event pages, and more. We highly encourage people to visit it, sign up, and start improving it; the more scientists who get involved, from the more fields, the better!

By David Wolpert
David Wolpert is resident faculty at the Santa Fe Institute where he uses nonequilibrium statistical physics to analyze the thermodynamics of computing systems.