Why is the brute force factoring algorithm exponential time?

AI Thread Summary
The discussion centers on the complexity of brute-force factoring algorithms, specifically why they are considered exponential time. The original brute-force algorithm takes time roughly proportional to the largest factor of n (up to n itself when n is prime), which is exponential in the bit length b ≈ log2(n). A faster trial-division variant is proposed for numbers with small factors, but the real challenge is finding an algorithm polynomial in b. The best-known general-purpose algorithms are subexponential but still superpolynomial, and whether factoring can be done in time polynomial in the number of bits remains an open problem.
michael879
ok so I thought of this factoring algorithm, and I did some quick analysis on it (attached picture; time is in seconds × 5). It appears to be polynomial. However, when I look at the basic brute-force factoring algorithm, that also looks polynomial to me. So before I get excited over nothing: can someone explain why the brute-force factoring algorithm shown below is exponential time? What I get is approximately O(n^m), where n is the number and m is the number of factors.

Code:
brute_force(n)
{
   for(i = 2; i <= n/2; i++)
   {
      if(n%i == 0)
         return CONCAT(brute_force(i), brute_force(n/i));
   }
   return n;
}

basically it searches until it finds a factor, then returns the concatenation of the factor lists of the two factors it found (i and n/i). For prime numbers it just returns the number. The main loop is obviously O(n), and as far as I can tell the function is called recursively on the order of m times, where m is the number of factors. This gives O(n^m).
 
The attachment got messed up; here it is.
 

Attachments

  • factor.JPG (67 KB)
That program has running time roughly proportional to the largest factor of the number, which in the worst case is the whole number if it's prime. (If you check for primality first that reduces it massively to the square root of the number in the worst case.)

However, much faster algorithms are known. The ideal algorithm, which is probably impossible*, would run in polynomial time. But "polynomial time" in this case means polynomial in the size of the input, i.e. in the number of bits.

If you have a b-bit number n, you have 2^(b-1) ≤ n < 2^b. Your algorithm then has run time O(2^b) (or O(√2^b) if you check for primes), while there are algorithms that are O((1+ε)^b) for all ε > 0 -- that is, they are subexponential.

* This has not been proven.
 
CRGreathouse said:
That program has running time roughly proportional to the largest factor of the number

Doesn't that make that program O(n)?
 
While you're on the topic of brute-force factoring, here's a way to make the program much faster for numbers with small factors. (Without checking to see if factors are prime, you'll never factor numbers with large prime factors quickly.) This checks only two numbers every six (speeding it up by a factor of 3) and doesn't start over every time it finds a new factor.

Code:
brute_force2(n)
{
   if (n%2 == 0)
      return CONCAT(2, brute_force2(n/2));
   if (n%3 == 0)
      return CONCAT(3, brute_force2(n/3));

   return brute_force3(n, 5);
}

brute_force3(n, k)
{
   for(i = k; i * i <= n; i += 6)
   {
      if(n%i == 0)
         return CONCAT(i, brute_force3(n/i, i));
      if(n%(i+2) == 0)
         return CONCAT(i+2, brute_force3(n/(i+2), i));
   }
   return n;
}
 
michael879 said:
Doesn't that make that program O(n)?

O(n), yes, but that is O(2^b) in the bit length b. Being polynomial in n is easy; being polynomial in b is hard (probably impossible).
 
Oh, are you saying that factoring run times are measured in terms of b = log2(n) rather than n? Well, no wonder they're all exponential... you're taking a function that's polynomial in n and substituting n = 2^b.
 
ok so yeah, my function is O(e^(1.3102b)) then (the fitted curve was roughly 2×10^-9 · e^(1.3102b) seconds). Thanks for the clarification.
 
So what the problem really is, is to find an algorithm that factors in time polynomial in log(n)? That clears up a lot...
 
michael879 said:
So what the problem really is, is to find an algorithm that factors in time polynomial in log(n)? That clears up a lot...

The best general-purpose factoring algorithms now known are slower than O(b^k) for every k but faster than O(c^b) for every c > 1. They're subexponential but superpolynomial.
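For reference, the fastest known general-purpose method, the general number field sieve, has a heuristic (not rigorously proven) running time usually written in L-notation:

```latex
\[
L_n\!\left[\tfrac{1}{3},\,(64/9)^{1/3}\right]
  = \exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln n)^{1/3}\,(\ln\ln n)^{2/3}\Big)
\]
```

Since ln n ≈ b·ln 2, this grows faster than every polynomial b^k but slower than every exponential c^b with c > 1, matching the claim above.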
 
