How do computers and websites work out huge numbers?


Discussion Overview

The discussion revolves around how computers and websites perform calculations involving very large numbers, specifically factorials like 102!. Participants explore the mechanisms behind these computations, including software libraries, CPU operations, and algorithmic optimizations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants inquire about the methods used by computers to calculate large factorials, such as 102!, and express curiosity about the speed of these calculations.
  • Others mention software libraries like Java's BigInteger that can handle arbitrarily large numbers.
  • It is noted that the CPU processes code as binary, and calculations are performed using arithmetic logic units (ALUs) that can handle multiple bits simultaneously.
  • Some participants argue that computing large factorials does not require an excessive number of operations, suggesting that it may take significantly fewer than a billion operations.
  • There are references to optimized algorithms for big number computations, with links to external resources for further reading.
  • One participant discusses the Trachtenberg method of multiplication as a potential approach for handling large numbers, though they have not pursued it further.
  • Another participant mentions the continuous gamma function as a way to compute factorials for non-integer values.

Areas of Agreement / Disagreement

Participants express varying opinions on the efficiency and methods of computing large factorials. While some agree on the existence of optimized algorithms, others remain skeptical about the speed and feasibility of these calculations, leading to an unresolved discussion on the specifics of the computational processes involved.

Contextual Notes

Some participants highlight the limitations of human understanding when comparing computer processing speeds to manual calculations, emphasizing the complexity of operations involved in large number computations.

iDimension
I was just wondering, when you type some numbers into your computer's calculator or into a website, how does it find the answer? Obviously for very small numbers like 4x12 it could do this just by addition. What about massive calculations? For example, 102!

I typed this into Wolfram Alpha and it gave the answer
961446671503512660926865558697259548455355905059659464369444714048531715130254590603314961882364451384985595980362059157503710042865532928000000000000000000000000

Which is the exact answer... How does it calculate what 88! * 89 is in the sequence? Then 89! * 90, etc.? Each of these alone is an astronomically large number.

Thanks.
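A minimal sketch of the incremental scheme asked about here (88! * 89, then 89! * 90, and so on), using Java's BigInteger, which comes up later in the thread. The method name `factorial` is my own illustration, not Wolfram Alpha's actual code:

```java
import java.math.BigInteger;

// Illustrative only: build n! by repeatedly multiplying the running
// product by the next integer, exactly the "88! * 89, then 89! * 90"
// sequence from the question. BigInteger grows as large as needed.
static BigInteger factorial(int n) {
    BigInteger p = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
        p = p.multiply(BigInteger.valueOf(i));  // p = (i-1)! * i
    }
    return p;
}
```

On ordinary hardware, `factorial(102)` reproduces the 162-digit value quoted above essentially instantly.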
 
But ultimately the code is read by the CPU, which then does the calculation, correct? Which is all 1s and 0s, so that is why I'm not sure how it can find the answer to 102! in literally 1 second.
 
iDimension said:
But ultimately the code is read by the CPU, which then does the calculation, correct? Which is all 1s and 0s, so that is why I'm not sure how it can find the answer to 102! in literally 1 second.
You are thinking in human terms. For doing simple calculations like this, computers are, as rootone pointed out, MANY orders of magnitude faster than humans. So much so that it is really outside our realm of direct comprehension.
 
phinds said:
You are thinking in human terms. For doing simple calculations like this, computers are, as rootone pointed out, MANY orders of magnitude faster than humans. So much so that it is really outside our realm of direct comprehension.

So are you saying that a computer can calculate 102! in 1 second just because it's fast? Even if a computer could count 1 trillion numbers per second, it would still take an absurd amount of time to calculate 102!
 
iDimension said:
So are you saying that a computer can calculate 102! in 1 second just because it's fast? Even if a computer could count 1 trillion numbers per second, it would still take an absurd amount of time to calculate 102!
And yet you yourself have observed it happening. So you don't believe your eyes?

Computing 102! hardly takes anything like a trillion operations. I doubt it would even come close to a billion, much less a trillion.
 
The big number algorithms are highly optimized. This website includes source code and somewhat of an explanation of how this is done.

http://www.apfloat.org/ntt.html
 
phinds said:
Computing 102! hardly takes anything like a trillion operations. I doubt it would even come close to a billion, much less a trillion.
In fact, all it takes is 100 multiplications. Of course some of these multiplications are too large to be done in registers of the CPU, but it takes a lot less than a billion operations to compute the product.
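A rough way to check that claim: count how many machine-word (64-bit) multiplications a simple schoolbook big-integer routine would need for 102!. The sketch below, with my own illustrative method name, piggybacks on BigInteger just to track the size of the partial product:

```java
import java.math.BigInteger;

// Illustrative estimate: step i multiplies a w-word partial product by a
// small factor, costing roughly w word-by-word multiplications.
static long estimateWordOps(int n) {
    BigInteger p = BigInteger.ONE;
    long ops = 0;
    for (int i = 2; i <= n; i++) {
        ops += (p.bitLength() + 63) / 64;  // 64-bit words in partial product
        p = p.multiply(BigInteger.valueOf(i));
    }
    return ops;
}
```

For n = 102 this comes out to a few hundred word multiplications, nowhere near a billion.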
 
  • #10
iDimension said:
But ultimately the code is read by the CPU which then does the calculation correct? Which is all 1's and 0's so that is why I'm not sure how it can find the answer to 102! in literally 1 second.
They probably just look up factorials in a table.
 
  • #11
rcgldr said:
The big number algorithms are highly optimized. This website includes source code and somewhat of an explanation of how this is done.

http://www.apfloat.org/ntt.html
Wow! That is a fascinating article you referenced. The math is much more sophisticated than the "brute-force" approach I had imagined. I wish I could understand more of it.
 
  • #12
I contemplated at one time implementing the Trachtenberg method of multiplication because it could handle arbitrarily large numbers, but never had a real need for it.

I first learned about the Trachtenberg system in grade school as part of my imaginings of becoming a great math prodigy like my hero Albert Einstein. It would have been curious, though, to see how fast the system would fare against the prevailing libraries of today.

https://en.wikipedia.org/wiki/Trachtenberg_system
 
  • #13
phinds said:
And yet you yourself have observed it happening. So you don't believe your eyes?

Computing 102! hardly takes anything like a trillion operations. I doubt it would even come close to a billion, much less a trillion.

I guess I just didn't think a computer's CPU was capable of multiplying gigantic numbers in milliseconds. So a CPU will do something like the following.

1x2 = 2
1x2x3 = 6
1x2x3x4 = 24
1x2x3x4x5 = 120

but between each of these calculations the CPU must calculate with increasingly enormous numbers.
Jarvis323 said:
They probably just look up factorials in a table.

I thought this, but then you can type in an arbitrary number like 87.53! and it will know it, so I doubt it has every factorial listed in a table.
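For the non-integer case, the usual route (the gamma function, also mentioned elsewhere in this thread) is x! = Gamma(x+1), computed from a series approximation rather than a table. Here is a hedged sketch using Stirling's series; Java's standard library has no gamma function, and `lnGamma` is my own illustrative name, not what Wolfram Alpha actually runs:

```java
// Stirling's series for ln Gamma(x): an approximation, accurate for x
// not too small. Then 87.53! = Math.exp(lnGamma(88.53)).
static double lnGamma(double x) {
    return (x - 0.5) * Math.log(x) - x + 0.5 * Math.log(2 * Math.PI)
            + 1.0 / (12 * x) - 1.0 / (360 * x * x * x);
}
```

Real libraries use more careful schemes (e.g. the Lanczos approximation), but the idea is the same: a smooth function that agrees with n! at the integers.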
 
  • #15
102! is nothing. Using Java's BigInteger my computer can calculate the exact solution for 100000! (a number with 456574 digits) in 3 seconds, and that is using only a single CPU core.
 
  • #16
DrZoidberg said:
102! is nothing. Using Java's BigInteger my computer can calculate the exact solution for 100000! (a number with 456574 digits) in 3 seconds, and that is using only a single CPU core.

Sure, but I want to know how they compute those solutions.
 
  • #18
iDimension said:
I was just wondering, when you type some numbers into your computer's calculator or into a website, how does it find the answer? Obviously for very small numbers like 4x12 it could do this just by addition. What about massive calculations? For example, 102!

I typed this into Wolfram Alpha and it gave the answer
961446671503512660926865558697259548455355905059659464369444714048531715130254590603314961882364451384985595980362059157503710042865532928000000000000000000000000

Which is the exact answer... How does it calculate what 88! * 89 is in the sequence? Then 89! * 90, etc.? Each of these alone is an astronomically large number.

Thanks.
For a computer, first the numbers are encoded in binary. So 102 is encoded as 1100110. The computation 101 x 102 becomes 1100101 x 1100110. This will be done in the arithmetic logic unit (ALU) of the computer in a single step. At the basic level, you can multiply two bits with an AND gate: 1x1=1, 1x0=0, 0x1=0, 0x0=0. Let's say that the ALU is wired for 32-bit arithmetic, so it will have a 32x32 grid of AND gates. Conductors along rows will carry one operand and separate conductors along columns will carry the other operand. Where the conductors cross, there will be an AND gate. Extending to 32 bits, our rows will hold 00000000000000000000000001100101 and our columns will hold 00000000000000000000000001100110. Next the outputs of the AND gates are added along the diagonals: 1+1=0, with one to carry. It actually takes about 5 gates to add three 1-bit values and put out a sum and a carry, and you have one of these adders at every AND gate: one of its three inputs is the AND gate result, one is the sum from the previous adder, and one is the carry bit from an adjacent adder.

The result is a 64-bit product, calculated in a matter of nanoseconds. This is repeated for the remaining factors 100, 99, 98, ... When the number no longer fits in 32 bits, the software starts using multiple 32-bit words: multiple 32x32-bit multiplies, with adds and carries handled at the word level in much the same way the hardware did at the bit level. The final answer is then converted to decimal (by repeated division by 10, capturing the remainders) and presented on the display or GUI.
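The word-level trick described above can be shown in miniature: multiplying two 64-bit values into a 128-bit result using only 32x32-bit pieces, with carries propagated between words. `mul128` is an illustrative name; this is a sketch of the idea, not any particular library's code:

```java
// Split each 64-bit operand into 32-bit halves, form the four partial
// products, and recombine them with word-level carries: the software
// analogue of the ALU's bit-level adder grid.
static long[] mul128(long a, long b) {
    long aLo = a & 0xFFFFFFFFL, aHi = a >>> 32;
    long bLo = b & 0xFFFFFFFFL, bHi = b >>> 32;
    long lolo = aLo * bLo;                    // contributes to bits 0..63
    long lohi = aLo * bHi;                    // contributes to bits 32..95
    long hilo = aHi * bLo;                    // contributes to bits 32..95
    long hihi = aHi * bHi;                    // contributes to bits 64..127
    long mid = (lolo >>> 32) + (lohi & 0xFFFFFFFFL) + (hilo & 0xFFFFFFFFL);
    long lo = (mid << 32) | (lolo & 0xFFFFFFFFL);
    long hi = hihi + (lohi >>> 32) + (hilo >>> 32) + (mid >>> 32);
    return new long[] { hi, lo };             // 128-bit product as two words
}
```

A big-number library does exactly this, only with arrays of many words instead of two.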
 
  • #19
.Scott said:
For a computer, first the numbers are encoded in binary. So 102 is encoded as 1100110. The computation 101 x 102 becomes 1100101 x 1100110. This will be done in the arithmetic logic unit (ALU) of the computer in a single step. At the basic level, you can multiply two bits with an AND gate: 1x1=1, 1x0=0, 0x1=0, 0x0=0. Let's say that the ALU is wired for 32-bit arithmetic, so it will have a 32x32 grid of AND gates. Conductors along rows will carry one operand and separate conductors along columns will carry the other operand. Where the conductors cross, there will be an AND gate. Extending to 32 bits, our rows will hold 00000000000000000000000001100101 and our columns will hold 00000000000000000000000001100110. Next the outputs of the AND gates are added along the diagonals: 1+1=0, with one to carry. It actually takes about 5 gates to add three 1-bit values and put out a sum and a carry, and you have one of these adders at every AND gate: one of its three inputs is the AND gate result, one is the sum from the previous adder, and one is the carry bit from an adjacent adder.

The result is a 64-bit product, calculated in a matter of nanoseconds. This is repeated for the remaining factors 100, 99, 98, ... When the number no longer fits in 32 bits, the software starts using multiple 32-bit words: multiple 32x32-bit multiplies, with adds and carries handled at the word level in much the same way the hardware did at the bit level. The final answer is then converted to decimal (by repeated division by 10, capturing the remainders) and presented on the display or GUI.

Now that's the kind of answer I was looking for. Cheers!
 
  • #20
Arbitrarily large numbers are not stored in the CPU's native integer format; they are stored in structures specific to the library you are using. Here is an example:

Code:
#include <gmp.h>    /* GNU MP; mpz_t is not a built-in C[++] type */
int main(void) {
    mpz_t p;
    mpz_init_set_ui(p, 1);                 /* p = 1 */
    for (unsigned long i = 1; i <= 102; ++i)
        mpz_mul_ui(p, p, i);               /* p = p * i */
    gmp_printf("%Zd\n", p);                /* print 102! in decimal */
    mpz_clear(p);
}
 
  • #21
iDimension said:
I was just wondering when you type some numbers into your computers calculator or into a website, how does it find the answer? Obviously for very small numbers like 4x12 it could do this just by addition. What about for massive calculations? For example 102!

I typed this into Wolfram Alpha and it gave the answer
961446671503512660926865558697259548455355905059659464369444714048531715130254590603314961882364451384985595980362059157503710042865532928000000000000000000000000

Which is the exact answer... How does to calculate what 88! * 89 is in the sequence? Then 89! * 90 etc. Each of these alone is an astronomically large number.

Thanks.
Literally? By jumping through a recursive stack of values. Each is probably handled as an object with a large memory space, but the object implementation would depend on the program being used. It could also be a struct.
 
  • #22
People use digit-by-digit algorithms to handle arbitrarily large numbers. One example would be long multiplication.
A computer uses exactly the same techniques, but it usually uses binary words rather than decimal digits.
A computer is much faster and makes fewer mistakes than me, but I write the program that does the arithmetic.
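Pencil-and-paper long multiplication translates directly to code. Here is a sketch on little-endian arrays of decimal digits (a real library would use 32- or 64-bit binary words as the "digits"); `longMultiply` is an illustrative name:

```java
// Schoolbook long multiplication, digit by digit with carries, on
// little-endian base-10 digit arrays (least significant digit first).
static int[] longMultiply(int[] a, int[] b) {
    int[] r = new int[a.length + b.length];
    for (int i = 0; i < a.length; i++) {
        int carry = 0;
        for (int j = 0; j < b.length; j++) {
            int t = r[i + j] + a[i] * b[j] + carry;
            r[i + j] = t % 10;   // keep one digit
            carry = t / 10;      // carry the rest
        }
        r[i + b.length] += carry;
    }
    return r;   // may include leading zeros at the high end
}
```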
 
  • #23
Eric V said:
Literally? By jumping through a recursive stack of values. Each is probably handled as an object with a large memory space, but the object implementation would depend on the program being used. It could also be a struct.
Recursive algorithms are not generally used in the real world as more efficient alternatives exist.
 
  • #24
Baluncore said:
People use digit-by-digit algorithms to handle arbitrarily large numbers. One example would be long multiplication.
A computer uses exactly the same techniques, but it usually uses binary words rather than decimal digits.
A computer is much faster and makes fewer mistakes than me, but I write the program that does the arithmetic.
More efficient algorithms exist for multiplying large integers e.g. Karatsuba. A relevant link was provided in post #8.
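For reference, Karatsuba replaces the four half-size multiplications of the schoolbook method with three, at the cost of a few extra additions. Here is a toy version on BigInteger, for illustration only (BigInteger itself already switches to Karatsuba internally for large operands):

```java
import java.math.BigInteger;

// Toy Karatsuba for non-negative operands: split each number at half
// its bit length and do three recursive multiplies instead of four.
static BigInteger karatsuba(BigInteger x, BigInteger y) {
    int n = Math.max(x.bitLength(), y.bitLength());
    if (n <= 64) return x.multiply(y);         // small enough: direct
    int half = n / 2;
    BigInteger xHi = x.shiftRight(half), xLo = x.subtract(xHi.shiftLeft(half));
    BigInteger yHi = y.shiftRight(half), yLo = y.subtract(yHi.shiftLeft(half));
    BigInteger p1 = karatsuba(xHi, yHi);
    BigInteger p2 = karatsuba(xLo, yLo);
    BigInteger p3 = karatsuba(xHi.add(xLo), yHi.add(yLo));
    // x*y = p1*2^(2*half) + (p3 - p1 - p2)*2^half + p2
    return p1.shiftLeft(2 * half)
            .add(p3.subtract(p1).subtract(p2).shiftLeft(half))
            .add(p2);
}
```

The saving compounds with recursion, which is why it beats schoolbook multiplication for numbers thousands of bits long.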
 
  • #25
Thread closed for Moderation...

EDIT: a heated exchange has been removed and the thread is reopened with encouragement to keep things polite and factual
 
  • #26
iDimension said:
So are you saying that a computer can calculate 102! in 1 second just because it's fast? Even if a computer could count 1 trillion numbers per second, it would still take an absurd amount of time to calculate 102!
Calc 2 power series
 
  • #27
Here is a 2011 Intel patent that might be worth reading if you want to know more details of one implementation that is likely used often in the real world.

BACKGROUND
...
Conceptually, multiplication and modular reduction are straight-forward operations. However, often the sizes of the numbers used in these systems are very large and significantly surpass the native wordsize of a processor. For example, a cryptography protocol may require modular operations on numbers 1024 to 4096 bits in length or greater while many processors have native wordsizes of only 32 or 64 bits. Performing operations on such large numbers may be very expensive in terms of time and in terms of computational resources.
...
DETAILED DESCRIPTION

As described above, a wide variety of cryptographic operations involve multiplication of very large numbers and/or modular reduction. Described herein are a variety of techniques that can reduce the burden of these compute-intensive operations and speed operation of cryptographic systems. These techniques can also be applied in more general purpose, non-cryptographic, computing settings. One such technique involves improving the efficiency of a technique to multiply large numbers known as Karatsuba multiplication. Another technique involves improving the efficiency of modular reduction.
...

http://www.google.com/patents/US7930337
 
