Which is more processor-intensive?

AI Thread Summary
The discussion centers on the processor intensity of dividing a large number by a small number versus a large number by another large number. It concludes that both operations are equally intensive in terms of processor cycles, primarily due to the handling of large integers in 64-bit systems. The choice of data types, such as integer versus floating-point, affects precision but not the overall computational cost. Optimization at the algorithmic level is more significant than micro-optimizations in arithmetic operations. Additionally, the efficiency of division can vary based on the processor's architecture and the specific division algorithm used.
iScience
What's more processor-intensive:

5465411784154564 / 3

or

5465411784154564 / 5746845218

i.e., a big number / small number or a big number / big number?

Does it matter if I want to take the modulus instead?

------------------

EDIT:

Accidentally messed up my title, if a moderator could change it that'd be great
Thanks!
 
iScience said:
Accidentally messed up my title
Fixed...
 
iScience said:
What's more processor-intensive:

5465411784154564 / 3

or

5465411784154564 / 5746845218

i.e., a big number / small number or a big number / big number?
My guess is that both operations are equally expensive in terms of processor cycles. With arithmetic operations like this, you need to think about the types that could be used. The dividend (the number being divided) is too large to fit in 32 bits (an int in C, and a long on some platforms), but it fits in 64 bits. In your examples, the dividend and both divisors are integer values, so integer division can be performed, but the result will be an integer: the fractional part is discarded.

If you want a precise answer, floating-point division could be used, but a 32-bit floating-point type (a float in C) can't hold all of the digits, and even a 64-bit type (a double in C) carries only about 15-16 significant decimal digits, so the quotient generally won't be exact.
iScience said:
Does it matter if I want to take the modulus instead?
Same concerns as above.
 
Both calculations will take the exact same amount of time.

Don't worry about optimizing things like arithmetic on plain data types; optimizing algorithms is always a better use of your time, and any savings at the bit-manipulation level are usually negligible.

Second, the compiler is usually much smarter than you when it comes to optimizing this sort of thing. Proper use of registers and the L1 cache makes a difference orders of magnitude larger than anything you'd save by nitpicking POD math.

For example:
Code:
int x = y * 320;
can be converted into assembly in two ways:

Code:
mov ax, [y]
imul ax, 320
mov [x], ax

Code:
mov ax, [y]
mov bx, ax
shl ax, 8    ; y * 256
shl bx, 6    ; y * 64
add ax, bx   ; y * 320
mov [x], ax
Since 320 = 256 + 64 = 2^8 + 2^6, the multiply can be replaced by two left shifts and an add. The compiler knows which form is faster on the target processor and will compile it that way for you.
 
The answers above are true for many 64-bit CPUs and a 64-bit integer data type.
Usually far more critical is whether the values you want to process are already in a fast processor cache or need to be fetched from slower memory.

Otherwise it really depends on the processor used (and its division algorithm), or, if you use a type like BigInteger in Java, on its actual implementation/algorithm too.

For example, if it were implemented using Euclidean division (which it usually isn't, because that would be far too slow), then division by 3 would be slower than division by the bigger number (because more iterations of subtraction are needed).

Depending on the processor/algorithm, both the result of the division and the remainder (the result of modulo) might be available at the same time at no extra cost, which is why C++ offers std::div:
http://www.cplusplus.com/reference/cstdlib/div/

If you are interested in algorithms for computing with big numbers, have a look, for example, at GMP (The GNU Multiple Precision Arithmetic Library):
https://gmplib.org/
 
Dominik Tugend said:
[...]
For example, if it were implemented using Euclidean division (which it usually isn't, because that would be far too slow), then division by 3 would be slower than division by the bigger number (because more iterations of subtraction are needed).

That should have been Euclidean division by repeated subtraction; I can't edit it anymore, sorry for that mistake on my part.
 