
Effect on efficiency when computing at lower energies

  1. Jul 22, 2015 #1


    Staff Emeritus
    Science Advisor

    Recently I've been in a discussion with someone online who claimed that the relationship between a computer's energy use and its performance is not linear. I assumed that if you were to halve the power going into a computer chip (assuming it was designed to deal with that and wouldn't just turn off), it would logically only work at half the capacity. They claimed that there are some fundamental reasons why this isn't the case, and that the relationship is actually computation rate in bits/sec ∝ power^(3/4). In other words, if you drop the power to 50% you only drop the computation rate to about 59%.
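A quick numerical sanity check of the claimed scaling (the exponent comes from the claim under discussion, not from any established result):

```python
# Claimed scaling law from the discussion: computation rate ∝ power^(3/4).
# This is the claim being examined, not an established result.

def relative_rate(power_fraction):
    """Relative computation rate after scaling power by power_fraction."""
    return power_fraction ** 0.75

print(round(relative_rate(0.5), 3))  # halving the power -> 0.595
```

So halving the power would indeed leave about 59% of the computation rate, matching the figure quoted above.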

    I can't get my head around this, and unfortunately they heard it from someone else (who I'm reasonably certain is a credible source, but is difficult for me to contact). I've done a lot of searching around online and can't find any details on this. Is it true? Is there any relationship like this?

  3. Jul 22, 2015 #2
    What do you mean by halving the power? To reduce power consumption, one might reduce the clock speed or the voltage, or a combination of both.
  4. Jul 22, 2015 #3


    Staff Emeritus
    Science Advisor

    It was a pretty simplistic conversation (I'm not too clued up on computer science); it started from a discussion about a computer performing at X requiring Y power to do so. I assumed that if you halved the available energy to the machine, it would only be able to perform at X/2 (like I said above, assuming it didn't turn off). This other person claimed otherwise.

    If you reduce the clock speed or voltage would that allow a computer to operate at the same level at half power? Wouldn't performance be affected i.e. by running slower?
  5. Jul 22, 2015 #4
    If you reduce the clock speed it will run proportionally slower, but you can't reduce the clock speed indefinitely; at some point it will simply stop working.
  6. Jul 22, 2015 #5


    Gold Member

    Could be. But since your "someone" had heard it from a friend, he/she could be talking apples to your oranges. It could even be a description of the curve explaining the increased power consumption of CPUs as they went from 1 MHz to 4, to 60, to 400 MHz, to GHz from the '80s onward, with increased word length thrown in as well. A version of Moore's law. In which case it has little application to one particular machine.

    For a gate,
    the basic equation is the electrical power law P = EI (from Ohm's law), applied dynamically to a reactive circuit. Since at the gate (CMOS) one is just charging and discharging a capacitor to flip the bit between ON and OFF, this becomes P = CV²f, where C is the capacitance, V the applied voltage, and f the frequency. If one relates f to bits/sec, then that is just the basic starting form to work with.
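To put some numbers on that relation (the capacitance, voltage, and frequency here are illustrative single-gate values, not taken from any particular process):

```python
# Dynamic (switching) power of a single CMOS gate: P = C * V^2 * f.
# The example values below are illustrative, not from a real process.

def dynamic_power(c_farads, v_volts, f_hertz):
    """Switching power drawn by one gate toggling at f_hertz."""
    return c_farads * v_volts ** 2 * f_hertz

# e.g. 1 fF gate capacitance, 1.0 V supply, toggling at 1 GHz
p = dynamic_power(1e-15, 1.0, 1e9)
print(p)  # about 1e-06 W, i.e. ~1 microwatt per gate
```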

    Of course there are other things to consider when manufacturing a chip, such as gate density, which goes as N², and would increase power consumption per chip at the same rate. One way to increase gate density is to make each gate smaller. Smaller gates mean quicker turn-ON/OFF by a factor of N, which means the chip could either use a lower supply voltage or an increased frequency with the same power usage.

    In case you are wondering how P = EI translates to P = CV²f for a capacitor, recall that the energy stored on a capacitor satisfies
    E = ½QV, where E is the energy, Q is the charge on the capacitor, and V the voltage.
    With P = E/t, P = ½QV/t = ½QVf (or, equivalently, = ½IV with I = Q/t).
    Of course, another half of the energy from the power supply is dissipated in the resistive elements (in the chip) while the capacitor is charging or discharging, so we do get the full P = CV²f once we use Q = CV.
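The bookkeeping in that derivation can be checked numerically (same illustrative gate values as before): the stored half and the resistively dissipated half add up to CV² per cycle, giving P = CV²f.

```python
# Energy bookkeeping for one charge/discharge cycle of a gate capacitor,
# following the derivation above. Values are illustrative.
C, V, f = 1e-15, 1.0, 1e9  # 1 fF, 1.0 V, 1 GHz

stored = 0.5 * C * V ** 2       # energy stored on the capacitor, E = QV/2 = CV^2/2
dissipated = 0.5 * C * V ** 2   # equal amount lost in the resistive elements
per_cycle = stored + dissipated  # total energy drawn per cycle = C * V^2

print(per_cycle * f)  # P = C * V^2 * f, matching the formula above
```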

    I just thought I would write that down for you as an addition to the thread.
  7. Jul 23, 2015 #6


    Staff Emeritus
    Science Advisor

    Thanks 256bits, the gate explanation is quite interesting. I'm waiting to hear back from them but a simple way of rephrasing their claim is that computation per watt obeys a power law rather than being linear. They seemed to believe this was a general rule so any hardware would in theory be more efficient at lower power consumption. The implication being that you could get a hell of a lot of computation done at extremely low power levels, you just have to compensate by adding more components to your computer.
  8. Jul 23, 2015 #7



    Staff: Mentor

    As you decrease the gate geometry, the leakage current goes up. So if you cut the gate capacitance in half and double the speed, you get the same I=CVf current consumption, but you add more to the leakage current component of power consumption.
  9. Jul 24, 2015 #8


    Gold Member

    In terms of standard cmos circuits, if you drop the clock speed by half, you drop the dynamic power by half, and drop the computation by half. Leakage won't change of course, and can be significant (and we need to ignore shoot-through). Switching power is proportional to Vdd^2

    So, thinking only about switching power for the moment, what happens if we lower the voltage to reduce the power?
    Most CMOS computational current is consumed by charging capacitors: I = C(dv/dt). So if I drop V to 0.707 of its value, I get half the power, given constant dt and constant C.

    Energy = (CV^2)/2, so if I drop V to 0.707 of its value, I use half the energy for each transition. It all seems linear.
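A numerical sketch of that voltage-scaling argument (C and f are illustrative values): dropping V to 1/√2 of its value halves both the per-transition energy and the switching power, so computation per joule stays linear in this model.

```python
import math

# Voltage scaling at constant C and f (illustrative values):
# E = C*V^2/2 per transition and P = C*V^2*f both halve when V -> V/sqrt(2).
C, f = 1e-15, 1e9  # 1 fF, 1 GHz

for v in (1.0, 1.0 / math.sqrt(2)):
    energy = 0.5 * C * v ** 2
    power = C * v ** 2 * f
    print(f"V={v:.3f} V  E={energy:.2e} J  P={power:.2e} W")
```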

    So, unless he is messing with leakage (which makes it CMOS process and geometry dependent), I don't quite see it.

    In terms of information theory there are theorems regarding the energy of computation, for example https://en.wikipedia.org/wiki/Landauer's_principle. Landauer's principle seems to be linear, not a ^(3/4) power law.

    Here is a thesis https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-821.pdf but it doesn't really answer the question either (but has nice graphs :) )
    Maybe the power-consumption equivalent of Moore's law (Koomey's law) expresses it somehow?

    Koomey observed that computations per kilowatt-hour have followed a similarly exponential trend, doubling every 1.57 years. But I can't get a ^(3/4) power law out of that.
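For scale, the doubling period quoted above compounds quickly (this just evaluates 2^(years/1.57); the 1.57-year figure is the one from the post):

```python
# Koomey's law: computations per kWh doubling roughly every 1.57 years.

def efficiency_gain(years, doubling_period=1.57):
    """Multiplicative gain in computations per kWh after `years`."""
    return 2 ** (years / doubling_period)

print(round(efficiency_gain(10)))  # ~83x improvement over a decade
```

Note this is an exponential trend over time across generations of hardware, which is a different kind of relationship from a power law between P and computation rate on one machine.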

  10. Jul 24, 2015 #9
    CMOS size reduction is a real issue. The gate dielectric in a MOSFET is silicon dioxide, and the designed dielectric thickness is now down to nearly the size of its molecules. I have found no resources offering a detailed explanation of how such a power law behaves after a technique like power gating is applied, but I personally don't think your chat-mate's claim of the relationship mentioned above still holds.
    You would also need a very good, or ideally perfect, assembly line and manufacturing process to handle all of the added components. Each added component has to deliver real value too: picking cheap parts to increase the count would demand a lot from the other components (i.e. finding the appropriate or best match), and from your own requirements for them to handle any specific task in question.
  11. Jul 24, 2015 #10


    Staff Emeritus
    Science Advisor

    Ok, I have more information: I managed to get in touch with the originator of this comment. They were referring to an equation from Fundamental Physical Limits on Computation by Warren Smith (1995), specifically section 5.1: Limits on the speed of computation. I can't say I'm having an easy time understanding the paper at all, but the gist of it seems to be that if you drop P by 10,000x you only decrease S by 1,000x. You could hypothetically compensate for that by increasing A by 1e12x.
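Those two figures are consistent with a rate law of the form S ∝ A^(1/4) · P^(3/4). To be clear, these exponents are my reading of the numbers quoted in the post, not a quotation from Smith's paper:

```python
# Checking the post's figures against an assumed rate law
# S ∝ A^(1/4) * P^(3/4)  (exponents inferred from the quoted numbers,
# not taken directly from the paper).

def relative_speed(area_factor, power_factor):
    return area_factor ** 0.25 * power_factor ** 0.75

print(relative_speed(1.0, 1e-4))   # power down 10,000x -> speed down ~1,000x
print(relative_speed(1e12, 1e-4))  # 1e12x more area restores the original speed
```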

    Also thanks for the replies so far :) they're interesting and I'm aware my lack of context is hindering an explanation.
    Last edited: Jul 24, 2015
  12. Jul 24, 2015 #11


    Gold Member

    That server is down or not accessible.

    Turns out it is an information-theory limit imposed by the maximum entropy per unit volume. Seth Lloyd's comments regarding the "ultimate laptop" are in a similar vein (from a different thread regarding the limits of processing).

    So once you are past all the realities of actually implementing a computational device you eventually run into this limit regarding what can be packed into a volume.

    This paper addresses it also and refers to Lloyd and Smith's results: http://www.cise.ufl.edu/research/revcomp/physlim/PhysLim-CiSE/PhysLim-CiSE-5.pdf