Effect on efficiency when computing at lower energies

In summary, the claimed relationship is that computation rate in bits/sec scales as power^(3/4). On that view, if you halve the power going into a computer chip (assuming it was designed to deal with that and wouldn't just turn off), the computation rate drops only to about 59% of its original value rather than to half.
  • #1
Ryan_m_b
Recently I've been in a discussion with someone online who claimed that the efficiency of a computer with regard to energy is not a linear relationship. I assumed that if you were to halve the power going into a computer chip (assuming it was designed to deal with that and wouldn't just turn off), it seemed logical that it would only work at half capacity. They claimed that there are some fundamental reasons why this isn't the case and that the relationship is actually computation rate in bits/sec = power^(3/4). In other words, if you drop the power to 50% you only drop the computation rate to 59%.

I can't get my head around this and unfortunately they heard it from someone else (that I'm reasonably certain is a credible source but is difficult for me to contact). I've done a lot of searching around online and can't find any details on this. Is it true? Is there any relationship like this?

Thanks!
 
  • #2
What do you mean by halving the power? To reduce power consumption, one might reduce the clock speed or the voltage, or a combination of the two.
 
  • #3
It was a pretty simplistic conversation (I'm not too clued up on computer science); it started from a discussion about a computer performing at X and requiring Y power to do so. I assumed that if you halved the available energy to the machine it would only be able to perform at X/2 (like I said above, assuming it didn't turn off). This other person claimed otherwise.

If you reduce the clock speed or voltage would that allow a computer to operate at the same level at half power? Wouldn't performance be affected i.e. by running slower?
 
  • #4
If you reduce the clock speed it will run slower, but only up to a point; you can't keep reducing the clock speed indefinitely without it eventually stopping altogether.
 
  • #5
Ryan_m_b said:
Recently I've been in a discussion with someone online who claimed that the efficiency of a computer with regard to energy is not a linear relationship. I assumed that if you were to halve the power going into a computer chip (assuming it was designed to deal with that and wouldn't just turn off), it seemed logical that it would only work at half capacity. They claimed that there are some fundamental reasons why this isn't the case and that the relationship is actually computation rate in bits/sec = power^(3/4). In other words, if you drop the power to 50% you only drop the computation rate to 59%.

I can't get my head around this and unfortunately they heard it from someone else (that I'm reasonably certain is a credible source but is difficult for me to contact). I've done a lot of searching around online and can't find any details on this. Is it true? Is there any relationship like this?

Thanks!
Could be. But since your "someone" had heard it from a friend, he/she could be talking apples to your oranges. It could even be a description of the curve explaining the increased power consumption of CPUs that went from 1 MHz to 4 to 60 to 400 MHz to GHz from the '80s onward, with increased word length thrown in as well. A version of Moore's law. In which case it has little application to one particular machine.

For a gate,
the basic equation is the electrical power law, P = EI, applied dynamically to a reactive circuit. Since at the gate (CMOS) one is just charging and discharging a capacitor to flip the bit between ON and OFF, this becomes P = CV²f, where C is the capacitance, V the applied voltage, and f the frequency. If one relates f to bits/sec, then that is just the basic starting form to work with.

Of course there are other things to consider when manufacturing a chip, such as gate density, which goes as N² and would increase power consumption per chip at the same rate. One way to increase gate density is to make each gate smaller. Smaller gates mean quicker turn ON/OFF by a factor of N, which means the chip could either use a lower supply voltage or an increased frequency for the same power usage.

In case you are wondering how P = EI translates to P = CV²f for a capacitor, recall that the energy stored on a capacitor satisfies
E = 1/2 QV, where E is the energy, Q is the charge on the capacitor, and V the voltage.
With P = E/t, P = 1/2 QV/t = 1/2 QVf (or, equivalently, = IV with I = Q/t).
The other half of the energy drawn from the power supply is dissipated in the resistive elements (in the chip) while the capacitor charges or discharges, so we do get the full P = CV²f once we substitute Q = CV.
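
To put rough numbers on that, here is a minimal sketch of the P = CV²f relationship; the capacitance, voltage, and frequency values below are purely illustrative assumptions, not figures for any real chip:

```python
# Dynamic (switching) power of a CMOS node: P = C * V^2 * f
# All numbers are assumed, illustrative values.

def dynamic_power(c_farads, v_volts, f_hertz):
    """Power dissipated charging and discharging a capacitance C at frequency f."""
    return c_farads * v_volts**2 * f_hertz

C = 1e-15   # 1 fF of switched capacitance (assumed)
V = 1.0     # 1 V supply (assumed)
f = 1e9     # 1 GHz switching rate (assumed)

print(dynamic_power(C, V, f))            # baseline:        ~1e-06 W
print(dynamic_power(C, V, f / 2))        # half frequency:  ~5e-07 W (power halves)
print(dynamic_power(C, V / 2**0.5, f))   # V x 0.707:       ~5e-07 W (power also halves)
```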

I just thought I would write that down for you as an addition to the thread.
 
  • #6
Thanks 256bits, the gate explanation is quite interesting. I'm waiting to hear back from them, but a simple way of rephrasing their claim is that computation rate versus power obeys a power law rather than a linear relationship. They seemed to believe this was a general rule, so any hardware would in theory be more efficient (in computation per watt) at lower power consumption. The implication being that you could get a hell of a lot of computation done at extremely low power levels; you just have to compensate by adding more components to your computer.
 
  • #7
256bits said:
Smaller gates mean quicker turn ON/OFF by a factor of N, which means the chip could either use a lower supply voltage or an increased frequency for the same power usage.

As you decrease the gate geometry, the leakage current goes up. So if you cut the gate capacitance in half and double the speed, you get the same I=CVf current consumption, but you add more to the leakage current component of power consumption.
 
  • #8
In terms of standard CMOS circuits, if you drop the clock speed by half, you drop the dynamic power by half and drop the computation by half. Leakage won't change, of course, and can be significant (and we need to ignore shoot-through). Switching power is proportional to Vdd².

So, thinking only about switching power for the moment, what happens if we lower the voltage to reduce the power?
Most CMOS computational current is consumed by charging capacitors: I = C(dV/dt). So if I drop V by a factor of 0.707 I get half the power, given a constant dt and constant C.

Energy = (CV²)/2, so if I drop V by a factor of 0.707, I use half the energy for each transition. It all seems linear.

So, unless he is messing with leakage (which makes it CMOS process and geometry dependent), I don't quite see it.

In terms of information theory there are theorems regarding the energy of computation, for example Landauer's principle: https://en.wikipedia.org/wiki/Landauer's_principle. Landauer's principle seems to imply a linear relationship, not a 3/4 power.
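
For what it's worth, the linearity of the Landauer bound is easy to see numerically; this sketch just divides an assumed power budget by k_B·T·ln 2 (the temperature and power values are arbitrary examples):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_bit_erasures_per_sec(power_watts, temp_kelvin=300.0):
    """Landauer's principle: each irreversible bit erasure costs at least
    k_B * T * ln(2) joules, so the maximum erasure rate is linear in power."""
    return power_watts / (K_B * temp_kelvin * math.log(2))

print(max_bit_erasures_per_sec(1.0))   # ~3.5e20 erasures/s at 1 W, 300 K
print(max_bit_erasures_per_sec(0.5))   # exactly half that -- linear, not P^(3/4)
```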

Here is a thesis https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-821.pdf but it doesn't really answer the question either (but has nice graphs :) )
Maybe the power equivalent of Moore's law (Koomey's law) expresses it somehow?

Koomey observed that computations per kilowatt-hour have followed a similarly exponential trend, doubling roughly every 1.57 years. But I can't get a 3/4 power out of that.

So we also have:
https://en.wikipedia.org/wiki/Limits_to_computation
https://en.wikipedia.org/wiki/Koomey's_law
 
  • #9
CMOS size reduction is a real issue. Its main component, the MOSFET, has a gate dielectric of silicon dioxide whose designed thickness is now approaching the size of the SiO2 molecule itself. Although I have found no resources that offer a detailed explanation of how the power law works after a technique such as power gating is applied, I personally don't think your chat-mate's claim of the power-law relationship as mentioned still holds true.
Ryan_m_b said:
... The implication being that you could get a hell of a lot of computation done at extremely low power levels, you just have to compensate by adding more components to your computer.
You would then need a really good (or formally specified) assembly process to handle all of the added components. You also have to get real value out of each component: if you pick cheap parts to add raw performance, you will be asking a lot more from the other components, i.e. finding the appropriate or best match for them, and from your own requirements for the specific task in question.
 
  • #10
Ok I have more information, I managed to get in touch with the originator of this comment. They were referring to this equation:

[tex]s \le \frac{\sigma^{1/4}}{k_{B}\log 2}\,A^{1/4}P^{3/4}[/tex]
Which comes from "Fundamental Physical Limits on Computation" by Warren Smith (1995), specifically section 5.1: Limits on the speed of computation. I can't say I'm having an easy time understanding the paper at all, but the gist of it seems to be that if you drop P by a factor of 10,000 you decrease s by a factor of 1,000. You could hypothetically compensate for that by increasing A by a factor of 10^12.
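
Here is the same arithmetic written out, just plugging ratios into the exponents of the inequality above (nothing beyond the quoted equation is assumed):

```python
# Scaling of the bound s <= const * A^(1/4) * P^(3/4):
# the constant cancels when comparing two operating points.

def rate_ratio(power_ratio, area_ratio=1.0):
    """Relative change in the bound on s when P and A are scaled."""
    return area_ratio**0.25 * power_ratio**0.75

print(rate_ratio(0.5))          # ~0.59: halve the power, keep ~59% of the rate
print(rate_ratio(1e-4))         # 1e-3: 10,000x less power -> 1,000x lower rate
print(rate_ratio(1e-4, 1e12))   # 1.0: compensated by 1e12x more A
```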

Also thanks for the replies so far :) they're interesting and I'm aware my lack of context is hindering an explanation.
 
  • #11
That server is down or not accessible.

Turns out it is an information-theory limit imposed by the maximum entropy per unit volume. The Seth Lloyd comments regarding the "ultimate laptop" are in a similar vein (from a different thread regarding the limits of processing).

meBigGuy said:
Here is a unique view, from Seth Lloyd, a quantum mechanic at MIT

https://edge.org/conversation/the-computational-universe

He discusses it in more detail here:
http://arxiv.org/abs/quant-ph/9908043

In summary:
The 'ultimate laptop' is a computer with a mass of 1 kg and a volume of 1 l, operating at the fundamental limits of speed and memory capacity fixed by physics. The ultimate laptop performs 2mc²/πℏ = 5.4258 × 10^50 logical operations per second on ~10^31 bits. Although its computational machinery is in fact in a highly specified physical state with zero entropy, while it performs a computation that uses all its resources of energy and memory space it appears to an outside observer to be in a thermal state at ~10^9 degrees Kelvin. The ultimate laptop looks like a small piece of the Big Bang.

So once you are past all the realities of actually implementing a computational device you eventually run into this limit regarding what can be packed into a volume.
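
As a sanity check on the quoted figure (with m = 1 kg, as in Lloyd's setup), the arithmetic is a one-liner:

```python
import math

m = 1.0                 # mass of the "ultimate laptop" in kg
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Bound quoted above: logical operations per second <= 2*m*c^2 / (pi * hbar)
ops_per_sec = 2 * m * c**2 / (math.pi * hbar)
print(f"{ops_per_sec:.3e}")  # ~5.43e+50 logical operations per second
```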

This paper addresses it also and refers to Lloyd and Smith's results: http://www.cise.ufl.edu/research/revcomp/physlim/PhysLim-CiSE/PhysLim-CiSE-5.pdf
 

