Is Prime Factorization Linear or Exponential?

tiredryan
I'm confused about how difficult it is to factor numbers. I am reading that factoring is used in encryption because it is computationally difficult, but it looks like O(n) from how I see it.

For example to factor 6, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
(3) divide by 4 and check if the remainder is 0
(4) divide by 5 and check if the remainder is 0

For something larger like 30, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
...
(28) divide by 29 and check if the remainder is 0

For n, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
...
(n-2) divide by n-1 and check if the remainder is 0

An algorithm to find the factors seems to be linear.
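The procedure described above can be sketched in Python (the function name is mine); it checks every candidate divisor from 2 up to n-1, so it performs about n-2 division steps:

```python
def trial_factors(n):
    """List every divisor of n between 2 and n-1 by checking remainders,
    exactly as in the steps above -- about n-2 division steps total."""
    factors = []
    for d in range(2, n):
        if n % d == 0:
            factors.append(d)
    return factors

print(trial_factors(30))   # [2, 3, 5, 6, 10, 15]
print(trial_factors(13))   # [] -> 13 is prime
```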

Is there something that I am missing?

Thanks.

-Ryan
 
You're right, it's a linear algorithm in that sense.
And as a shortcut you only need to check up to sqrt(N).

For example, for N=13:
N / 2, remainder = 1
N / 3, remainder = 1
N / 4 - don't bother checking; 4 is greater than sqrt(13).
We're now done: 13 is prime. If there were a factor of 4 or bigger, its cofactor would be 13/4 = 3.25 or smaller, and we would already have encountered it.

But what if you have gigantic numbers with 100 or more digits? How long do you think a computer takes to do 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 operations? It's unimaginably longer than the age of the universe.

Usually people consider how long an algorithm takes in terms of D = log(N), roughly the number of digits. In that sense, your algorithm takes about 10^D steps - exponential in D - which means it quickly becomes useless. My trick of using sqrt() yields about 10^(D/2), which is better, but still wholly useless for cracking numbers that a child can jot down in seconds.
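The sqrt(N) shortcut looks like this as a sketch (function name is mine); it finds the smallest factor or reports the number prime, but the loop length still grows exponentially in the digit count D:

```python
import math

def smallest_factor(n):
    """Trial-divide only up to sqrt(n); if nothing divides n, it is prime.
    This halves the exponent (roughly 10**(D/2) steps instead of 10**D),
    but is still exponential in the digit count D."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return None  # no factor found: n is prime

print(smallest_factor(13))   # None -> 13 is prime
print(smallest_factor(30))   # 2
```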
 
So if I understand it correctly, dividing x by y computationally will take around log(x) time.

For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~2 millisecond to divide 73 by 4
it will take ~4 millisecond to divide 7334 by 4
it will take ~8 millisecond to divide 73344234 by 4

Is this right?
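The pattern in those examples matches schoolbook long division, where each decimal digit of the dividend costs one step - so the step count grows like the digit count, i.e. like log10(x). A hypothetical step-counting sketch (the function is mine, for illustration only):

```python
def long_divide(x, y):
    """Schoolbook long division of x by y, one decimal digit at a time.
    The number of loop iterations equals the digit count of x (~log10(x)),
    which is why dividing a number with twice as many digits takes
    roughly twice as long."""
    quotient, remainder, steps = 0, 0, 0
    for digit in str(x):
        remainder = remainder * 10 + int(digit)   # bring down next digit
        quotient = quotient * 10 + remainder // y
        remainder %= y
        steps += 1
    return quotient, remainder, steps

print(long_divide(73, 4))        # (18, 1, 2)  -> 2 digits, 2 steps
print(long_divide(73344234, 4))  # 8 digits, 8 steps
```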
 
Factorization means decomposing the given number into its prime factors together with their powers (perhaps by recursion - or maybe there's a faster way, I don't know). Since you'd have to keep track of how many times the number can be divided by each prime factor, I'd say it'd take longer than O(n).

What you're doing is simply checking to see what prime factors it has, which is linear.
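That full decomposition - dividing out each prime as many times as it occurs - can be sketched like this (a minimal version, not claimed to be the fastest way):

```python
def prime_factorization(n):
    """Decompose n into prime powers, dividing out each prime factor
    repeatedly and recording how many times it divides n."""
    powers = {}
    d = 2
    while d * d <= n:
        while n % d == 0:          # divide out d as many times as possible
            powers[d] = powers.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                      # whatever remains is itself prime
        powers[n] = powers.get(n, 0) + 1
    return powers

print(prime_factorization(360))    # {2: 3, 3: 2, 5: 1}, i.e. 2^3 * 3^2 * 5
```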
 
I was reading about RSA cryptography.

http://en.wikipedia.org/wiki/RSA#Key_generation

In the cryptography I start with two large prime numbers p and q. Their product p times q yields a large composite number n. I keep p and q secret and share n with the world. It is hard for the public to back-calculate p and q. So I am wondering about algorithms where I start with a known large number composed of only two unknown prime factors.
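To make that setup concrete, here is a toy sketch using the small textbook primes 61 and 53 (nowhere near real RSA key sizes, where p and q each have hundreds of digits and trial division is hopeless):

```python
import math

def recover_primes(n):
    """Given n = p * q for two unknown primes, recover them by trial
    division up to sqrt(n). Feasible only for toy sizes; for the 100+
    digit semiprimes used in RSA this takes astronomically long."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    raise ValueError("n has no nontrivial factors (it is prime)")

p, q = 61, 53                 # toy secret primes
n = p * q                     # 3233, the public modulus
print(recover_primes(n))      # (53, 61)
```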

I am a little confused now. You say that "What you're doing is simply checking to see what prime factors it has, which is linear." This is what I thought in my first post, assuming that the computational time of dividing is independent of the size of the inputs. I am wondering whether my third post makes the more valid assumption, namely that the time to divide depends on the size of the input.
 
I'm confused now. You say that "What you're doing is simply checking to see what prime factors it has, which is linear."
It's very simple. You were looking specifically at factorizing a semiprime (a product of two primes), while MacNCheese thought you were referring to prime factorization in general.
 
Does division take log(x) time?
For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~2 millisecond to divide 73 by 4
it will take ~4 millisecond to divide 7334 by 4
it will take ~8 millisecond to divide 73344234 by 4

Or is it relatively constant ?
For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~1 millisecond to divide 73 by 4
it will take ~1 millisecond to divide 7334 by 4
it will take ~1 millisecond to divide 73344234 by 4

Or does it scale at some other rate?

I think that is my confusion. Also, do other arithmetic operations (addition, subtraction, multiplication) scale at the same or a different speed?
 
