
Why is Division harder than Multiplication?

  1. Sep 21, 2012 #1
    An example:
    Both conceptually and computationally it is easier to see that

    2.5 * 5.2 = 13

    than it is to see

    13 / 5.2 = 2.5

    In programming too: division loops take more CPU resources than multiplication loops.

    Why is this?
     
  2. Sep 21, 2012 #2
    What do you mean by "why"? We have algorithms for both and division algorithms tend to be more complex. I don't imagine that there's any grand philosophical significance to this fact.
     
  3. Sep 21, 2012 #3
    In CPUs, division is implemented as a subtraction loop, while multiplication is accomplished by shift and add; a sketch of both follows.
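
    Not how any particular CPU wires it, but a minimal C sketch of the two loop shapes described above (the function names are my own):

    [code]
    #include <stdint.h>
    #include <stdio.h>

    /* Shift-and-add multiplication: for each set bit of b, add a
       (shifted into position) into the running product. */
    uint32_t mul_shift_add(uint32_t a, uint32_t b)
    {
        uint32_t product = 0;
        while (b) {
            if (b & 1)
                product += a;   /* conditionally add this partial product */
            a <<= 1;            /* next partial product is a * 2^k */
            b >>= 1;
        }
        return product;
    }

    /* Naive division by repeated subtraction: count how many times
       the divisor fits.  Assumes d != 0. */
    uint32_t div_repeated_sub(uint32_t n, uint32_t d, uint32_t *rem)
    {
        uint32_t q = 0;
        while (n >= d) {
            n -= d;
            q++;
        }
        *rem = n;
        return q;
    }

    int main(void)
    {
        uint32_t r;
        printf("%u\n", mul_shift_add(25, 52));                     /* 1300 */
        printf("%u rem %u\n", div_repeated_sub(1300, 52, &r), r);  /* 25 rem 0 */
        return 0;
    }
    [/code]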
     
  4. Sep 21, 2012 #4

    jbriggs444

    Science Advisor

    In the naive implementation, both loops have roughly the same complexity.

    For multiplication you test a bit, conditionally add and then shift.
    For division you subtract, test the result, conditionally set a bit and then shift.

    On average, that means one subtraction per bit position for division and one half of an addition per bit position for multiplication.

    However, there are better algorithms than the naive ones. The guys that do this stuff for a living have very clever tricks at their disposal.
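
    For concreteness, here is that division loop as a minimal C sketch of restoring long division (subtract, test, conditionally set a bit, shift), assuming 32-bit unsigned operands; my illustration, not any particular hardware:

    [code]
    #include <stdint.h>
    #include <stdio.h>

    /* Restoring long division: one quotient bit per iteration.
       Assumes d != 0. */
    uint32_t div_restoring(uint32_t n, uint32_t d, uint32_t *rem)
    {
        uint32_t q = 0;
        uint64_t r = 0;                     /* partial remainder */
        for (int i = 31; i >= 0; i--) {
            r = (r << 1) | ((n >> i) & 1);  /* shift in next dividend bit */
            q <<= 1;
            if (r >= d) {                   /* does the trial subtraction fit? */
                r -= d;
                q |= 1;                     /* conditionally set quotient bit */
            }
        }
        *rem = (uint32_t)r;
        return q;
    }

    int main(void)
    {
        uint32_t r;
        printf("%u rem %u\n", div_restoring(1300, 52, &r), r);  /* 25 rem 0 */
        return 0;
    }
    [/code]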
     
  5. Sep 21, 2012 #5
    I am speaking of the hardware CPU implementation, not a software algorithm. Unfortunately, all we have is adders: they shift and add. If there were a cleverer way, whoever found it could be a billionaire. Subtraction involves complementing and then adding, a two-step process.
     
  6. Sep 21, 2012 #6

    rcgldr

    Homework Helper

    Multiplication doesn't have to be iterative: all the partial products can be generated in parallel, and then summed. Computers can have circuits that implement subtraction directly, without using a complement; and even if a complement is used, it's only done once per division. As far as I know, division is an iterative process, although tables for the most significant bits can be used to speed it up. Taking the reciprocal (1/x) of a number (which can be optimized) and then multiplying can be faster (a toy model follows the links). Wiki articles:

    http://en.wikipedia.org/wiki/Binary_multiplier

    http://en.wikipedia.org/wiki/Binary_numeral_system#Division

    http://en.wikipedia.org/wiki/Division_(digital)
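
    A toy C model of that parallel structure (my sketch, not a circuit description): each partial product depends only on the two inputs, so hardware can form all of them simultaneously and feed them to an adder tree, whereas each long-division step needs the remainder from the step before.

    [code]
    #include <stdint.h>
    #include <stdio.h>

    /* All 32 partial products of a 32x32-bit multiply depend only on
       the inputs, so a multiplier circuit can generate them at once;
       this sequential C loop just models that independence. */
    uint64_t mul_partial_products(uint32_t a, uint32_t b)
    {
        uint64_t pp[32];
        for (int i = 0; i < 32; i++)        /* independent of one another */
            pp[i] = ((b >> i) & 1) ? ((uint64_t)a << i) : 0;

        uint64_t sum = 0;                   /* the adder tree in hardware */
        for (int i = 0; i < 32; i++)
            sum += pp[i];
        return sum;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)mul_partial_products(25, 52)); /* 1300 */
        return 0;
    }
    [/code]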
     
  7. Sep 21, 2012 #7
    Read the article; the first sentence says "it uses binary adders". Binary adders are the building blocks of hardware arithmetic.
     
  8. Sep 21, 2012 #8

    rcgldr

    Homework Helper

  9. Sep 21, 2012 #9
    "Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand."

    I suspect there is no implementation of this in CPUs, as there are no references listed.
     
  10. Sep 21, 2012 #10

    rcgldr

    Homework Helper

    I don't know how add and subtract are implemented in various CPUs, but in the case of an old Intel 486, add and subtract take the same number of cycles (1 clock for 32-bit register ± 32-bit register). This doesn't mean the 486 doesn't do a complement and add for subtract, only that it can be done in the minimum CPU time, which is 1 clock. In the case of a divide, the complement would only need to be done once.

    One way to optimize subtraction would be to do a one's complement (just inverters on the adder inputs) instead of a two's complement, avoiding carry propagation in the complement step, and then use a full adder on the least significant bit with the carry input set to 1, so that the increment part of the two's complement is done as part of the add.
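
    That trick boils down to the identity a - b = a + ~b + 1, with the +1 supplied through the adder's carry-in; a one-line C illustration (mine, not the 486's actual circuit):

    [code]
    #include <stdint.h>
    #include <assert.h>

    /* Two's-complement subtraction as invert + add with carry-in = 1:
       a - b == a + ~b + 1.  In hardware the "+1" rides in on the
       adder's carry input, so no separate increment pass is needed. */
    static uint32_t sub_via_adder(uint32_t a, uint32_t b)
    {
        return a + ~b + 1;  /* one adder pass: operand inverted, carry-in set */
    }

    int main(void)
    {
        assert(sub_via_adder(13, 5) == 8);
        assert(sub_via_adder(5, 13) == (uint32_t)-8);  /* wraps mod 2^32 */
        return 0;
    }
    [/code]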

    The main point is that multiplication can be implemented as a single (if long) propagation through a series of gate stages, while I think division will always require iteration (computing the reciprocal 1/x requires some iteration too, but less).
     
  11. Sep 22, 2012 #11

    mathwonk

    Science Advisor
    Homework Helper

    As someone in the Nixon administration put it, it's harder to put the toothpaste back in the tube than to get it out.
     
  12. Sep 22, 2012 #12
    You are right, division is harder, and it is hard to explain why. This guy does a pretty good job. I think the basic reason is that you can multiply in parallel but you can't divide in parallel: a multiplication can be split into independent steps done simultaneously and then summed, but in division each step depends on the result of the previous one, so that kind of parallelism isn't possible. Or something like that; memory has faded.
     
  13. Sep 22, 2012 #13
    Here is a method I discovered several years ago for doing "parallel" division. It is easily implemented in binary hardware, but I doubt it ever will be:

    www.ulieulieulie.com/longdiv.html [Broken]
     
  14. Sep 22, 2012 #14
    I think (from personal experience) that to do division you need to be familiar with your multiplication.
     
  15. Sep 22, 2012 #15
    Just going back to the original example of everyday multiplication and division.

    Humans are very comfortable with integers and don't deal with decimals nearly as well. Multiplication on the integers is closed; division is not. That means you never have to think about decimals and fractions when multiplying integers, but you almost always have to when dividing.
     
  16. Sep 23, 2012 #16
    The fastest way to calculate x/y when both numbers are large, AFAIK, is to compute 1/y with Newton's method:

    If z = 1/y, then f(z) = 1/z - y = 0, and Newton's iteration gives


    [tex] z_{n+1} = z_n - \frac { f(z_n)} {f'(z_n)} = 2 z_n - y {z_n}^2 [/tex]

    The starting value z_0 can be a single-precision computation of 1/y by ordinary division.

    Each step contains only two multiplications.

    Since the number of correct digits doubles with each step of the iteration, there won't be many steps, and full precision is only needed for the last one; the step before that can be done at 1/2 precision, the one before at 1/4, etc. Altogether this takes about as long as 4 full-precision multiplications.

    The multiplications can be done with fast Fourier transforms, which take only O(N log N log log N) steps for a number with N digits.

    Including the final multiplication by x, division comes out about 5 times as slow as multiplication.
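
    A toy double-precision version of that iteration (my sketch; a real arbitrary-precision implementation would grow the working precision each step, as described above):

    [code]
    #include <stdio.h>

    /* Newton-Raphson reciprocal: z_{n+1} = 2*z_n - y*z_n^2.
       Two multiplications per step, no division in the loop. */
    static double reciprocal(double y)      /* assumes 1 <= y <= 10 */
    {
        double z = 0.1;                     /* crude starting guess */
        for (int i = 0; i < 12; i++)        /* quadratic convergence: plenty */
            z = 2.0 * z - y * z * z;
        return z;
    }

    int main(void)
    {
        double y = 5.2;
        printf("13 / %.1f = %.10f\n", y, 13.0 * reciprocal(y));  /* ~2.5 */
        return 0;
    }
    [/code]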
     
  17. Sep 23, 2012 #17
    It's because the successive subtractions leave a remainder or a residual fraction that must be tested for; this doesn't happen with multiplication.
     