Is Prime Factorization Linear or Exponential?


Discussion Overview

The discussion revolves around the computational complexity of prime factorization, specifically whether it is linear or exponential in nature. Participants explore various algorithms and their efficiencies, particularly in the context of cryptography and large numbers.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • Ryan suggests that the process of factorization appears linear based on his method of dividing by successive integers.
  • Another participant agrees with Ryan's observation but introduces the shortcut of checking divisors only up to the square root of the number, while pointing out that, measured in the number of digits, this method still takes exponential time for large numbers.
  • A participant questions whether the time taken for division scales logarithmically with the size of the numbers involved, providing examples to illustrate their point.
  • Another participant notes that factorization involves recursion and keeping track of prime factors, implying that it may take longer than linear time.
  • Ryan expresses confusion regarding the computational time of division and whether it is dependent on the size of the inputs, referencing RSA cryptography as a context for his inquiry.
  • One participant clarifies that there may have been a misunderstanding regarding the specific type of factorization being discussed, suggesting that the context matters in determining the complexity.
  • Ryan later seeks clarification on whether division time is constant or scales with input size, and whether other arithmetic operations behave similarly.
  • Ryan concludes by stating he found resources on computational complexity that address his questions.

Areas of Agreement / Disagreement

Participants express differing views on the computational complexity of prime factorization, with no consensus reached on whether it is linear or exponential. There is also uncertainty regarding the scaling of division time with input size.

Contextual Notes

Participants reference various assumptions about the efficiency of algorithms and the nature of arithmetic operations, but these assumptions remain unresolved within the discussion.

tiredryan
Messages
50
Reaction score
0
I'm confused about how difficult it is to factor numbers. I am reading that factoring is used in encryption because it is computationally difficult, but it seems to take O(n) from how I see it.

For example to factor 6, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
(3) divide by 4 and check if the remainder is 0
(4) divide by 5 and check if the remainder is 0

For something larger like 30, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
...
(28) divide by 29 and check if the remainder is 0

For n, I would
(1) divide by 2 and check if the remainder is 0
(2) divide by 3 and check if the remainder is 0
...
(n-2) divide by n-1 and check if the remainder is 0

An algorithm to find the factors seems to be linear.

Is there something that I am missing?

Thanks.

-Ryan
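The method described above can be sketched in Python (a minimal, naive version; the function name `trial_factors` is just for illustration):

```python
def trial_factors(n):
    """Try every candidate divisor from 2 to n-1 and collect the ones
    that divide n evenly -- n-2 divisions in the worst case."""
    factors = []
    for d in range(2, n):
        if n % d == 0:
            factors.append(d)
    return factors

print(trial_factors(30))   # [2, 3, 5, 6, 10, 15]
print(len(range(2, 30)))   # 28 candidate divisors tried, linear in n
```

Counting divisions, this loop does n-2 iterations, which is what makes it look linear in n.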
 
You're right, it's a linear algorithm in that sense.
And as a shortcut you only need to check up to sqrt(N).

For example, for N=13:
N / 2, remainder = 1
N / 3, remainder = 1
N / 4 - don't bother checking. 4 is greater than sqrt(13).
We're now done. It's prime. If there were a factor of 4 or bigger, there would also be a complementary factor of 13/4 = 3.25 or smaller, which we would already have encountered.

But what if you have gigantic numbers with 100 or more digits? How long do you think a computer takes to do 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 operations? It's unimaginably longer than the age of the universe.

Usually people consider how long the algorithm takes in terms of D=log(N), roughly the number of digits. In that sense, your algorithm takes exponential time exp(D), which means it quickly becomes useless. My trick of using sqrt() yields exp(D/2), which is better, but still wholly useless for cracking numbers that a child can jot down in seconds.
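A quick sketch of both points (Python; `is_prime_trial` is an illustrative name, not a standard routine): trial division up to sqrt(N), and how the candidate-divisor count blows up with the digit count D:

```python
import math

def is_prime_trial(n):
    """Trial division, stopping at sqrt(n): any factor above sqrt(n)
    pairs with one below it, so checking further is redundant."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime_trial(13))  # True: we stop after d = 3, since 4 > sqrt(13)

# The number of candidate divisors, ~sqrt(N) ~ 10**(D/2), grows
# exponentially in the digit count D:
for digits in (5, 10, 20):
    print(digits, math.isqrt(10 ** digits))
```

So the algorithm is linear in the value N but exponential in the input *size* D, which is the measure that matters for 100-digit numbers.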
 
So if I understand it correctly, dividing x by y computationally will take around log(x) time.

For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~2 millisecond to divide 73 by 4
it will take ~4 millisecond to divide 7334 by 4
it will take ~8 millisecond to divide 73344234 by 4

Is this right?
 
Factorization means using recursion (or maybe there's a faster way? idk) to decompose the given number into its prime factors together with their powers. Since you'd have to keep track of how many times it can be divided by each prime factor, I'd say it'd take longer than O(n).

What you're doing is simply checking to see what prime factors it has, which is linear.
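For concreteness, full prime factorization with powers can be sketched like this (Python; an iterative loop rather than recursion, which does the same bookkeeping):

```python
def prime_factorization(n):
    """Decompose n into primes with exponents, e.g. 360 -> {2: 3, 3: 2, 5: 1}.
    Dividing out each factor as it is found means only primes survive."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:               # track how many times d divides n
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:                           # leftover cofactor is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(360))  # {2: 3, 3: 2, 5: 1}
```

Note that dividing out each factor can only shrink the remaining work, so the exponent bookkeeping doesn't actually push it past the trial-division cost.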
 
I was reading about RSA cryptography.

http://en.wikipedia.org/wiki/RSA#Key_generation

In the cryptography I start with two large prime numbers p and q. Their product p times q yields a large composite number n. I keep p and q secret and share n with the world. It is hard for the public to back-calculate p and q. So I am wondering about algorithms where I start with a known large number that is composed of only two unknown prime factors.

I am a little confused now. You say that "What you're doing is simply checking to see what prime factors it has, which is linear." This is what I thought in the first post, assuming that the computational time of dividing is independent of the size of the inputs. I am wondering if my third post has a more valid assumption, and computational division time is dependent on the size of my input.
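A toy version of the setup described above (Python; p and q here are small made-up primes chosen only for illustration, whereas real RSA moduli are hundreds of digits):

```python
p, q = 10007, 10009          # secret toy primes; real RSA uses ~300-digit primes
n = p * q                    # the public modulus

# An attacker who only knows n must search for a factor, e.g. by
# trial division -- roughly 10,000 divisions even for this toy case:
d = 2
while n % d != 0:
    d += 1
print(d, n // d)             # recovers (10007, 10009)
```

With a toy semiprime the search finishes instantly, but the number of divisions scales like sqrt(n), which is exactly the exponential-in-digits cost discussed earlier.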
 
I'm confused now. You say that "What you're doing is simply checking to see what prime factors it has, which is linear."
It's very simple. You were looking specifically at factorizing a semi-prime (a product of two primes), while MacNCheese thought you were referring to prime factorization in general.
 
Does division take log(x) time?
For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~2 millisecond to divide 73 by 4
it will take ~4 millisecond to divide 7334 by 4
it will take ~8 millisecond to divide 73344234 by 4

Or is it relatively constant ?
For example,
if it takes ~1 millisecond to divide 7 by 4,
it will take ~1 millisecond to divide 73 by 4
it will take ~1 millisecond to divide 7334 by 4
it will take ~1 millisecond to divide 73344234 by 4

Or does it scale at some different rate?

I think that is my confusion. Also, do other arithmetic operations (addition, subtraction, multiplication) scale at the same or a different speed?
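One way to see which of the two models above is right: count the digit-level steps in schoolbook long division. A sketch (Python; `long_divide` is a hypothetical helper written just to count operations, not how real hardware divides):

```python
def long_divide(x, d):
    """Schoolbook long division of x by a small divisor d.
    Returns (quotient, remainder, digit_operations)."""
    digits = [int(c) for c in str(x)]
    quotient_digits = []
    rem = 0
    ops = 0
    for dig in digits:          # exactly one step per digit of x
        rem = rem * 10 + dig
        quotient_digits.append(rem // d)
        rem %= d
        ops += 1
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, rem, ops

print(long_divide(7334, 4))      # (1833, 2, 4)  -- 4 digits, 4 steps
print(long_divide(73344234, 4))  # (18336058, 2, 8)  -- 8 digits, 8 steps
```

The step count equals the digit count of x, i.e. it grows like log10(x) rather than staying constant, which matches the doubling pattern in the first set of examples above (1 digit ~ 1 ms, 2 digits ~ 2 ms, 4 digits ~ 4 ms, 8 digits ~ 8 ms). For numbers that fit in one machine word, hardware makes division effectively constant time, which is why both intuitions feel plausible.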
 