## Why is Division harder than Multiplication?

An example:
Both conceptually and computationally it is easier to see that

2.5 * 5.2 = 13

than it is to see

13 / 5.2 = 2.5

In programming too: division loops take more CPU resources than multiplication loops.

Why is this?
 What do you mean by "why"? We have algorithms for both and division algorithms tend to be more complex. I don't imagine that there's any grand philosophical significance to this fact.
 Implementation of division in CPUs is a subtraction loop, while multiplication is accomplished by shift and add.


 Quote by coolul007 Implementation of division in CPU's is a subtraction loop, while multiplication is accomplished by shift and add.
In the naive implementation, both loops have roughly the same complexity.

For multiplication you test a bit, conditionally add and then shift.
For division you subtract, test the result, conditionally set a bit and then shift.

On average, that means one subtraction per bit position for division and one half of an addition per bit position for multiplication.

However, there are better algorithms than the naive ones. The guys that do this stuff for a living have very clever tricks at their disposal.
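To make the comparison concrete, here is a minimal sketch of the two naive loops described above: shift-and-add multiplication and restoring (shift-and-subtract) division. This is an illustration of the textbook algorithms, not any particular CPU's circuitry.

```python
def naive_mul(a, b, bits=8):
    """Shift-and-add: test each bit of b; conditionally add a shifted copy of a."""
    result = 0
    for i in range(bits):
        if (b >> i) & 1:          # test a bit...
            result += a << i      # ...conditionally add the shifted multiplicand
    return result

def naive_div(n, d, bits=8):
    """Restoring division: subtract, test, conditionally set a quotient bit, shift."""
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((n >> i) & 1)   # bring down the next dividend bit
        if r >= d:                      # did the trial subtraction succeed?
            r -= d
            q |= 1 << i                 # set the quotient bit
    return q, r
```

Note the asymmetry: the multiplier only adds when a bit is set (about half the time on random inputs), while the divider must perform the trial subtraction at every bit position before it knows whether to keep it.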

 Quote by jbriggs444 In the naive implementation, both loops have roughly the same complexity. For multiplication you test a bit, conditionally add and then shift. For division you subtract, test the result, conditionally set a bit and then shift. On average, that means one subtraction per bit position for division and one half of an addition per bit position for multiplication. However, there are better algorithms than the naive ones. The guys that do this stuff for a living have very clever tricks at their disposal.
I am speaking of hardware CPU implementation, not software algorithms. Unfortunately, all we have is adders; they shift and add. If there were a clever trick out there, its inventor could be a billionaire. Subtraction involves complementing and then adding, a two-step process.

 Quote by coolul007 I am speaking of hardware cpu implementation, not algorithm software implementation. Unfortunately all we have is adders. They shift and add. if there is a clever guy out there he could be a billionaire. subtraction involves complementing and then adding, a two step process.
Multiplication doesn't have to be iterative, all the sub-products can be generated in parallel, and then all the sub-products are added. Computers can have circuits that directly implement subtract without using complement. For division, even if there is a complement, it's only done once. As far as I know, division is an iterative process, although tables for the most significant bits can be used to speed up the process. Taking the inverse (1/x) of a number (which can be optimized) and multiplying can be faster. Wiki articles:

http://en.wikipedia.org/wiki/Binary_multiplier

http://en.wikipedia.org/wiki/Binary_...ystem#Division

http://en.wikipedia.org/wiki/Division_(digital)
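The "multiplication doesn't have to be iterative" point can be sketched as follows. Python itself runs sequentially, of course; the point is only that every partial product depends on nothing but the inputs, so in hardware each one can be formed by an independent row of AND gates at the same time, with only the final summation (e.g. an adder tree) left to do.

```python
def combinational_mul(a, b, bits=8):
    """All partial products are independent of one another (parallel in hardware);
    only the final sum chains them together (an adder tree, e.g. a Wallace tree)."""
    partials = [(a << i) if (b >> i) & 1 else 0 for i in range(bits)]  # parallel stage
    return sum(partials)                                               # summation stage
```

No such decomposition exists for the naive division loop: each trial subtraction needs the remainder left over from the previous one.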

 Quote by rcgldr Multiplication doesn't have to be iterative, all the sub-products can be generated in parallel, and then all the sub-products are added. Computers can have circuits that directly implement subtract without using complement. For division, even if there is a complement, it's only done once. As far as I know, division is an iterative process, although tables for the most significant bits can be used to speed up the process. Taking the inverse (1/x) of a number (which can be optimized) and multiplying can be faster. Wiki articles: http://en.wikipedia.org/wiki/Binary_multiplier http://en.wikipedia.org/wiki/Binary_...ystem#Division http://en.wikipedia.org/wiki/Division_(digital)
Read the article: the first sentence says "it uses binary adders". Binary adders are the building blocks of hardware mathematics.
 Wiki article for binary subtractor: http://en.wikipedia.org/wiki/Subtractor

 Quote by rcgldr Wiki article for binary subtractor: http://en.wikipedia.org/wiki/Subtractor
"Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand."

I suspect there is no implementation of this in CPUs, as there are no references listed.

 Quote by coolul007 "Subtractors are usually implemented within a binary adder for only a small cost when using the standard two's complement notation, by providing an addition/subtraction selector to the carry-in and to invert the second operand." I suspect there is no implementation of this in cpu's as there are no references listed.
I don't know how add and subtract are implemented in various CPUs, but in the case of an old Intel 486, add and subtract take the same number of cycles (1 clock for 32-bit register ± 32-bit register). This doesn't mean the 486 doesn't do a complement-and-add for subtract, only that it can be done in the minimum CPU time, which is 1 clock. In the case of a divide, the complement would only need to be done once.

One way to optimize subtract would be to do a ones' complement (just inverters on the adder inputs) instead of a two's complement, avoiding carry propagation in the complement step, and then use a full adder on the least significant bit with its carry input set to 1, so that the increment part of the two's complement is done as part of the add.
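That identity is easy to check in software: with the carry-in set to 1, a fixed-width adder computes a + ~b + 1, which equals a − b modulo the word size. A minimal 8-bit sketch (the width is an arbitrary choice for the demo):

```python
MASK = 0xFF  # 8-bit datapath for the demonstration

def sub_via_complement(a, b):
    """Subtract by inverting b (ones' complement: just inverters on the adder
    inputs) and feeding carry-in = 1, so the +1 of the two's complement costs
    no extra adder pass: a - b == (a + ~b + 1) mod 2**8."""
    return (a + (~b & MASK) + 1) & MASK
```

A negative result wraps around, as it would in a real two's-complement register: 5 − 9 comes out as 252, i.e. −4 modulo 256.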

The main point is that multiplication can be implemented as a single but long propagation cycle through a series of sets of gates, while I think division will always require iteration (inverse (1/x) requires some iteration, but less).
 As someone in the Nixon administration put it, it's harder to put the toothpaste back in the tube than to get it out.

 Quote by rcgldr Multiplication doesn't have to be iterative, all the sub-products can be generated in parallel, and then all the sub-products are added. Computers can have circuits that directly implement subtract without using complement. For division, even if there is a complement, it's only done once. As far as I know, division is an iterative process, although tables for the most significant bits can be used to speed up the process. Taking the inverse (1/x) of a number (which can be optimized) and multiplying can be faster. Wiki articles: http://en.wikipedia.org/wiki/Binary_multiplier http://en.wikipedia.org/wiki/Binary_...ystem#Division http://en.wikipedia.org/wiki/Division_(digital)
You are right, division is harder. It is hard to explain why, but this guy does a pretty good job. I think the basic reason is that you can multiply in parallel but you can't divide in parallel: a multiplication can be split into independent steps done in parallel whose results are then added, but in division each step depends on the previous one, so that kind of parallelism isn't possible. Or something like that; memory has faded.
 Here is a method I discovered several years ago for doing "parallel" division. It is easily implemented in binary hardware, but I doubt it ever will be: www.ulieulieulie.com/longdiv.html

 Quote by devinedj An example: Both conceptually and computationally it is easier to see that 2.5 * 5.2 = 13 than it is to see 13 / 5.2 = 2.5 In programming too: division loops take more CPU resources than multiplication loops. Why is this?
I think (from personal experience) that to do division you need to be familiar with your multiplication.
 Just going back to the original example of everyday multiplication and division: humans are very comfortable with integers and don't deal with decimals nearly as well. Multiplication on the integers is closed; division on the integers is not. That means you never have to think about decimals and fractions when multiplying integers, but you almost always do when dividing.
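The closure point shows up directly in Python, which is a convenient way to see it: multiplying two ints always yields an int, while dividing them pushes you out into floats or fractions.

```python
from fractions import Fraction

product = 7 * 3          # the integers are closed under *: this stays an int
quotient = 7 / 3         # division leaves the integers: Python returns a float
exact = Fraction(7, 3)   # staying exact forces you into the rationals
```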

 Quote by coolul007 Here is a method I discovered several years ago for doing "parallel" division. It is easily implemented in binary hardware, but I doubt it ever will be: www.ulieulieulie.com/longdiv.html
The fastest way to calculate x/y when both numbers are large, AFAIK, is to compute 1/y with Newton's method:

If z = 1/y, then f(z) = 1/z - y = 0, and Newton's iteration gives

$$z_{n+1} = z_n - \frac { f(z_n)} {f'(z_n)} = 2 z_n - y {z_n}^2$$

z_0 can be a single-precision computation of 1/y by ordinary division.

Each step contains only two multiplications.

Since the number of correct digits doubles with each step of the iteration, there won't be many steps, and full precision is only needed for the last step; the step before it can be done at 1/2 precision, the one before that at 1/4, etc. Altogether this takes about as long as 4 full-precision multiplications.

The multiplications can be done with fast Fourier transforms, which take only O(N log N log log N) steps for a number with N digits.

Including the final multiplication, division will be about 5 times as slow as multiplying.
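A quick float-precision sketch of that iteration (the seed here is picked by hand from the integer guess 1/5; in a real implementation it would come from a low-precision division or a lookup table):

```python
def newton_reciprocal(y, z0, steps=5):
    """Newton's iteration z <- 2z - y*z**2 converges quadratically to 1/y.
    Only multiplications appear in the loop; no divide instruction is needed.
    The seed z0 must satisfy |1 - y*z0| < 1 for the iteration to converge."""
    z = z0
    for _ in range(steps):
        z = 2 * z - y * z * z   # two multiplications per step
    return z

# 13 / 5.2 computed as 13 * (1/5.2), with seed 0.2 from the rough guess 1/5
quotient = 13 * newton_reciprocal(5.2, 0.2)   # close to 2.5
```

With the relative error squaring at every step, the 0.04 initial error of the seed is below double-precision rounding after four iterations, which is why so few steps are needed even at high precision.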
 It's because the successive subtractions result in a remainder or a residual fraction which must be tested for. This doesn't happen with multiplication.