# Factorization Algorithm II

1. Oct 15, 2006

### Playdo

[Of course some of you will let me know if we need numerical examples or if something just does not make sense, or if it is very similar to what somebody else has already done, and I appreciate that.]

Suppose we have a reducible polynomial over the natural numbers (zero included) such that

$$\sum^n_{i=0}\alpha_{i}a^{i}=\left(\sum^s_{i=0}\beta_{i}a^{i}\right)\left(\sum^m_{i=0}\gamma_{i}a^{i}\right)$$

By substituting natural number values for $a$ on the left side we obtain a sequence of natural numbers. By substituting the same value on the right we find one factorization of the number formed by substituting on the left.

This equation acts as a map between factor pairs of the generated sequence. Our usual way of writing a number is just its base-ten polynomial evaluated at ten; simplifying the left-hand side at $a=10$ recovers the familiar decimal form, and we can view every element of the sequence the same way. So if we wanted to factor a very large number whose polynomial expansion is given by the left-hand side, we could substitute a small value for $a$, say two, and simplify to obtain a much smaller number. That smaller number has at least one factor pair that we can map back to a factor pair of the desired number. Suppose the new number is still too large; then do the same thing over and over until you arrive at a number that is clearly easy to factor, and work your way back up.
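
As a minimal sketch of that downward map (my own function names and example polynomial, not anything fixed in the thread): evaluate a factored polynomial at a small base, factor the small number, then lift the same polynomial factor pair back up to the large base.

```python
# Sketch of the "substitute a smaller base" idea: a number whose polynomial
# expansion is known to factor is evaluated at a small base, the small
# number is factored, and the factor pair maps back up.
# Names and the example polynomial are illustrative assumptions.

def poly_eval(coeffs, a):
    """Evaluate a polynomial given as [c0, c1, c2, ...] at a."""
    return sum(c * a**i for i, c in enumerate(coeffs))

# p(a) = a^3 + a^2 + a + 1 = (a + 1)(a^2 + 1)
p = [1, 1, 1, 1]
f, g = [1, 1], [1, 0, 1]

big = poly_eval(p, 10)    # 1111, the number we "really" want to factor
small = poly_eval(p, 2)   # 15, a much smaller number in the same sequence

# Factor the small number: 15 = 3 * 5 = f(2) * g(2).
assert poly_eval(f, 2) * poly_eval(g, 2) == small

# The same polynomial factor pair maps back up to base ten:
assert poly_eval(f, 10) * poly_eval(g, 10) == big   # 11 * 101 = 1111
```

The point of the sketch is that the factorization 3 * 5 of the small number corresponds, coefficient-wise, to the factorization 11 * 101 of the large one.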

The problem with daisy-chaining the algorithm is that there is no way of knowing which factor pair corresponds to the upward-mapped pair, so we may have to search through all factor pairs at each rung below it.

A hindrance to even a single move from base $a$ to base two, followed by simplification in a base $b$, is that you need a reducible polynomial that evaluates to $N$ in base $a$, or else the assumptions do not hold.

Notice that the typical base-$a$ expansion algorithm does not always give a reducible polynomial in $a$. That, I think, is the question all of this hinges on: is it easy or hard to work through the linear space of base-$a$ polynomials that evaluate to $N$? Some of this is in the other thread. But if we have a factor pair of $N$ that multiplies without carrying in base ten, then $N$ will have a base-ten expansion that is reducible as a polynomial if we replace ten with a generic natural variable $a$.

Every natural number having a reducible base $a$ expansion can be tackled using the method we are talking about here. A way that I have thought to try to get around the loss of information due to carrying in the multiplication process is to try to understand the density of reducible base $a$ expansions for a given natural number N.

Is it true that there must be a base $a$ less than N such that the base $a$ expansion of N is a reducible polynomial in $a$ as a variable?

Some folks have trouble with the notion of thinking of something as constant one moment and then variable the next, but don't we do something similar with partial differentiation to obtain the derivative?

Note: the polynomial does not need to offer a complete factorization of $N$ to be useful, and in fact a polynomial that did would have to be of degree no less than the number of prime factors of $N$ counted with multiplicity. We can get started very well using polynomials that split into only two irreducible factors in base $a$.

*****************************************************

Update on my own intentions. I am thinking about pursuing a PhD again, in pure math. I would like to study elementary number theory, and also to focus on Chaitin's Omega Number. Apparently he used a Diophantine equation to discover the halting probability of a universal Turing machine. The number itself is amazing, though not really embraced by mainstream computer science or math, I guess. A number so complex you cannot write it down!

It occurred to me that all of the problems I have ever been deeply interested in have been in elementary number theory or its applications. I don't know, though; I'm also preparing for the actuarial exams. But being a professor and working on number theory could work for me. I feel a little like Dorothy. I did not like the culture in my little part of the math world, so I ran away from home and found out that teaching and being around others who are freakishly inquisitive has a lot to say for itself. My apologies to the universe, and to the community if need be.

******************************************************

Last edited: Oct 15, 2006
2. Oct 16, 2006

### shmoe

Is any of this any different from what you were saying before?

It looks to me like you've just observed that given a polynomial you've factored, p(x)=f(x)*g(x), evaluating this at an integer x gives you a partial factorization of p(x) (assuming f(x) and g(x) are not equal to 1; you only seem to consider non-negative coefficients, so that's a given unless f or g is identically 1). What you haven't explained in this "Factorization Algorithm II" is how, given an N, you expect to find a reducible polynomial that takes on the value N, so I'm left wondering where the "Algorithm" part is.

Of course there is. If N=a*b is a non-trivial factorization of N, just consider base a or base b.

If you want to exclude these trivial cases, then the answer is no. Take N=14, base 3 will give the polynomial a^2+a+2 which is irreducible, all higher bases will give a linear polynomial. I haven't bothered to look for other examples.

Can we please stop the crappy generalizations of what "some folks" can and cannot do? Sorry, but there's nothing profoundly mind-boggling about taking the base a expansion of a number and then considering this as a polynomial in "a".

3. Oct 16, 2006

### Playdo

You bet we can.

4. Oct 16, 2006

### Playdo

Hmmm, somehow that is not complete. So you are saying that unless we choose a base that divides N we don't get a reducible polynomial; your counterexample shows it is not true in general, but the first theorem of that old thread shows precisely that there are many numbers for which you do get reducible polynomials in bases other than divisors of N. Fifteen, for instance: (a+1)(a^2+1) = a^3+a^2+a+1 => 15 when a = 2. That was what I was going on about over there.

In your example above, multiplying 2*7 in base 3 gives 2*(2a+1) = 4a+2, which has to be carried to give a^2+a+2. That highlights a point I made in the other thread: the standard base-a expansion algorithm will sometimes give irreducible polynomials, but if N is composite then there must be a related reducible polynomial that represents the result of multiplying the factors in base a without simplifying. Notice the relationship between 4a+2 and a^2+a+2 in base 3.

By registers,

|                      | a^0 | a^1 | a^2 |
|----------------------|-----|-----|-----|
| a^2+a+2 (carried)    | 2   | 1   | 1   |
| 4a+2 (uncarried)     | 2   | 4   | 0   |

If I were going to try undoing the carrying process, I would immediately try spinning the largest power back by one digit, so the result here is easy to obtain.
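
Undoing one carry, in other words, just trades a unit of a higher register for `base` units of the register below it, leaving the value unchanged. A tiny sketch of that move (my own code, not from the thread):

```python
def uncarry_one(coeffs, base, k):
    """Move one unit of the a^k register (k >= 1) down into `base`
    units of a^(k-1). coeffs is [c0, c1, c2, ...]; the value of the
    polynomial at a = base is unchanged."""
    out = list(coeffs)
    out[k] -= 1
    out[k - 1] += base
    return out

# a^2 + a + 2 in base 3 is [2, 1, 1]; spinning the a^2 digit back one
# place gives [2, 4, 0], i.e. 4a + 2 = 2(2a + 1), which is 2 * 7.
print(uncarry_one([2, 1, 1], 3, 2))  # [2, 4, 0]
```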

My notion was that if we found polynomials (quadratics) where the lead coefficient was 1, it would mean either that the factors had lead coefficients of 1 or that one of the factors was less than a. But notice that if we have a quadratic in a where both factors are greater than a, we easily obtain

(a+r)(a+s)=a^2+a(r+s)+rs = N

No carrying; just factor rs, and if that is still too large, apply the algorithm again.

If we have to simplify by carrying, we get

(A) (a+r)(a+s) = a^2 + a(r+s+t) + (rs-at) = N

Now we cannot get at r and s by factoring the constant coefficient, but we can recognize that 0 < t < a. Since a is around root N, that alone makes for a poor sieve, to be sure. However, if I do this,

(B) N-a^2 div a = r+s+t

(C) N-a^2 mod a = rs mod a

So if I can get the set of ordered pairs (r,s) from the reduced residue system such that rs ≡ N-a^2 (mod a) [there will be at most phi(a) such pairs], I can simply plug into (B) and find t. The number of steps then is basically phi(a). If the leading coefficient is prime, we still get a relatively straightforward algorithm.
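
As a sketch, the pair search that (B) and (C) describe can be brute-forced; the function name is mine, and the pair loop below scans all residues rather than just the reduced residue system, which is a simplifying assumption:

```python
def residue_pair_candidates(N, a):
    """Pairs (r, s, t) with 0 < r <= s < a, r*s congruent to
    N - a^2 (mod a), and t from (B): r + s + t = (N - a^2) div a.
    Only pairs that actually satisfy r*s = a*t + c0 survive."""
    c1, c0 = divmod(N - a * a, a)   # (B) and (C): c1 = r+s+t, c0 = rs mod a
    out = []
    for r in range(1, a):
        for s in range(r, a):
            if (r * s) % a == c0:
                t = c1 - r - s
                if t >= 0 and r * s == a * t + c0:
                    out.append((r, s, t))
    return out

# For N = 1561 in base 30 (the example worked below), no pair survives,
# which rules out a factorization with both factors bigger than thirty.
print(residue_pair_candidates(1561, 30))  # []
```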

The reason I like the primorials is that you can find the place in the sequence where your target N is quadratic with a leading coefficient of 1, and then confirm that p# and N are coprime. Then you get the appropriate set of ordered pairs by right-multiplying the kernel (that is where the kernel comes in).

Example: N = 1561; use base thirty: 1561 = 30^2 + 22*30 + 1

30^2+22*30+1 = (30x+r)(30y+s) = 900xy + 30(ry+sx) + rs

Clearly xy = 1

So 1561 div 30 = 52 = 30 + (r+s), giving r+s = 22, or with a carry, r+s+t = 22.

This implies that

rs = 1

Or

rs = 30t+1

We are trying to solve

r+s = 22-t
rs = 30t+1

by substituting positive values for t. So start testing at t=1; t=3 is the smallest t for which 30t+1 is factorable, giving 91 = 7*13, but 7+13 = 20 does not add up to 19.

The other possibility is that a divisor lies within the reduced residue system. If that is true, then we have

1561 = s(30x+r) = 30sx +rs = 30^2+22*30 + 1

That is, 1561 = 30^2*u + 30(sx + t - 30u) + (rs - 30t). Removing the carries,

1561= 30*52 + 1

sx = 52-t
rs = 30t+1

So now t = 3 gives s = 7 and r = 13, or r = 7 and s = 13. It turns out that

7x = 49 gives x = 7

So 1561 = 7*223 and we're done. A little arithmetic, and in this case we factored 1561 faster than if we had used trial division by all the primes less than root 1561 (which is around forty), and even in fewer than phi(30) steps.
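
For what it's worth, the two cases above (both factors bigger than the base, or one factor below it) can be put into code. This is my own rendering as a sketch, not a polished algorithm; the function name is mine and it assumes a^2 <= N.

```python
import math

def factor_quadratic_base(N, a):
    """Try to factor N written as a^2 + c1*a + c0 in base a, following
    the two cases in the worked example. Returns a factor pair or None.
    Sketch only: assumes a*a <= N."""
    c1, c0 = divmod(N - a * a, a)
    # Case 1: N = (a+r)(a+s) with carry t, so r+s = c1-t and rs = a*t+c0.
    # Solve the quadratic x^2 - (c1-t)x + (a*t+c0) for each carry t.
    for t in range(c1 + 1):
        S, P = c1 - t, a * t + c0
        disc = S * S - 4 * P
        if disc >= 0:
            root = math.isqrt(disc)
            if root * root == disc and (S + root) % 2 == 0:
                r, s = (S + root) // 2, (S - root) // 2
                if r > 0 and s > 0:
                    return (a + r, a + s)
    # Case 2: N = s*(a*x + r) with s < a, so s*x = q-t and r*s = a*t+c0,
    # where q = N div a.
    q = N // a
    for t in range(q):
        m = a * t + c0          # candidate value of r*s
        for s in range(2, a):
            if m % s == 0 and m // s < a and (q - t) % s == 0:
                x = (q - t) // s
                cand = a * x + m // s
                if s * cand == N:
                    return (s, cand)
    return None

print(factor_quadratic_base(1561, 30))  # (7, 223)
```

Case 1 fails for 1561 (no carry t gives a perfect-square discriminant), and case 2 finds 7 * 223 at t = 3, exactly as in the hand computation above.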

So that's the game. And if you can figure out similar clever things to do with larger base a polynomials you get even faster algorithms.

Suppose we let N be cubic in a and coprime to it. Are there similar techniques for nth degree base a polynomials? Is the above technique generally useful for anything but small numbers?