Test whether an integer is prime

evinda
Hello! (Wave)

We want to find an efficient algorithm that checks whether an odd number is a prime or not.
In order to obtain such an algorithm, one tests the congruence $(X+a)^n \equiv X^n+a$ not "absolutely" in $\mathbb{Z}_n[X]$, but modulo a polynomial $X^r-1$, where $r$ has to be chosen in a clever way. That is, one compares, in $\mathbb{Z}_n[X]$, the polynomials
$$(X+a)^n \pmod{X^r-1} \text{ and } (X^n+a) \pmod{X^r-1}=X^{n \mod{r}}+a.$$
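For instance, with the prime $n=7$, $r=3$, $a=1$: since $7$ is prime, $(X+1)^7 \equiv X^7+1 \pmod 7$, and reducing modulo $X^3-1$ replaces $X^7$ by $X^{7 \bmod 3}=X$, so
$$(X+1)^7 \equiv X+1 = X^{n \bmod r}+a \pmod{X^3-1,\ 7},$$
exactly as the congruence predicts for a prime.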

In the intermediate results that appear in the course of the computation of the power $(X+a)^n$, all coefficients are reduced modulo $n$, hence they can never exceed $n$. Calculating modulo $X^r-1$ just means that one can replace $X^s$ by $X^{s \bmod r}$, hence the degrees of the polynomials that appear as intermediate results can be kept below $r$. This keeps the computational cost in the polynomial range as long as $r$ is $O((\log{n})^c)$ for some constant $c$.

How do we compute the computational cost for computing $(X+a)^n \pmod{X^r-1}$ ? (Thinking)
 
evinda said:
This keeps the computational cost in the polynomial range as long as $r$ is $O((\log{n})^c)$ for some constant $c$.

Hey evinda!

What does $r$ is $O((\log{n})^c)$ mean? (Wondering)

evinda said:
How do we compute the computational cost for computing $(X+a)^n \pmod{X^r-1}$ ?

That depends on the algorithm we use, doesn't it?

We might for instance set up a recursive algorithm based on squaring.
In that case let $f(n,r)$ be the computational cost of evaluating $(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)^{n} \pmod{X^r-1, n}$.
Then:
$$f(2k,r)=f(k,r)+\text{ cost to evaluate }(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)^2$$
and:
$$f(2k+1,r)=f(2k,r)+\text{ cost to evaluate }(X^{r-1}+c_{r-2}X^{r-2} + ... + c_0)(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)$$

Can we find the worst case computational cost for computing $(X+a)^n \pmod{X^r-1,n}$ from that? (Wondering)
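
For reference, a minimal C sketch of such a recursive scheme (schoolbook multiplication, so each multiply costs $O(r^2)$ where an FFT would give $O(r\log r)$; all names are made up, and $n<2^{32}$ is assumed so coefficient products fit in 64 bits):

Code:
#include <stdint.h>
#include <string.h>

/* a*b mod (X^r - 1, n): index (i+j) wraps around mod r, coefficients are
   reduced mod n.  Schoolbook, O(r^2) coefficient multiplications. */
static void polmul_mod(const uint64_t *a, const uint64_t *b, uint64_t *out,
                       int r, uint64_t n)
{
    memset(out, 0, (size_t)r * sizeof *out);
    for (int i = 0; i < r; i++)
        for (int j = 0; j < r; j++)
            out[(i + j) % r] = (out[(i + j) % r] + a[i] * b[j]) % n;
}

/* p^e mod (X^r - 1, n) by repeated squaring, mirroring f(2k,r) and
   f(2k+1,r) above: e even costs one squaring, e odd one extra multiply.
   Assumes e >= 1. */
static void polpow_mod(const uint64_t *p, uint64_t e, uint64_t *out,
                       int r, uint64_t n)
{
    uint64_t tmp[r];                        /* C99 variable-length array */
    if (e == 1) { memcpy(out, p, (size_t)r * sizeof *p); return; }
    if (e % 2 == 0) {
        polpow_mod(p, e / 2, tmp, r, n);    /* p^{e/2} ...        */
        polmul_mod(tmp, tmp, out, r, n);    /* ... then square it */
    } else {
        polpow_mod(p, e - 1, tmp, r, n);    /* p^{e-1} ...        */
        polmul_mod(tmp, p, out, r, n);      /* ... times p        */
    }
}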
 
I like Serena said:
Hey evinda!

What does $r$ is $O((\log{n})^c)$ mean? (Wondering)

I assume that we will find the computational cost with respect to $r$ and then we will deduce that the computational cost is polynomial only if $r=O((\log{n})^c)$. (Thinking)

I like Serena said:
That depends on the algorithm we use, doesn't it?

We use fast exponentiation in the ring $\mathbb{Z}_n[X]$.

I like Serena said:
We might for instance set up a recursive algorithm based on squaring.
In that case let $f(n,r)$ be the computational cost of evaluating $(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)^{n} \pmod{X^r-1, n}$.

So, $f(n,r)$ will be the computational cost of raising any monic polynomial of degree $r-1$ to the $n$-th power?

I like Serena said:
Then:
$$f(2k,r)=f(k,r)+\text{ cost to evaluate }(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)^2$$
and:
$$f(2k+1,r)=f(2k,r)+\text{ cost to evaluate }(X^{r-1}+c_{r-2}X^{r-2} + ... + c_0)(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)$$

I see...

I like Serena said:
Can we find the worst case computational cost for computing $(X+a)^n \pmod{X^r-1,n}$ from that? (Wondering)

Do we multiply two polynomials of some degree $d$ using the FFT, the time complexity of which is $O(d \log{d})$ ?

If so, will we have a recursive relation of the following form? (Thinking)

$$f(n,n)=\begin{cases}
f\left(\frac{n}{2},n \right )+O\left(\frac{n}{2}\log{\frac{n}{2}} \right ), & n \text{ even} \\
f\left(\frac{n-1}{2},n \right )+O\left(\frac{n-1}{2}\log{\frac{n-1}{2}} \right )+\dots, & n \text{ odd}
\end{cases}$$

For the case when $n$ is odd, how do we multiply a polynomial of degree $n-1$ by a polynomial of degree $1$? Do we use the FFT again?
 
evinda said:
I assume that we will find the computational cost with respect to $r$ and then we will deduce that the computational cost is polynomial only if $r=O((\log{n})^c)$.

Isn't $r$ supposed to be a constant?

evinda said:
So, $f(n,r)$ will be the computational cost of raising any monic polynomial of degree $r-1$ to the $n$-th power?

Yep. (Nod)

evinda said:
Do we multiply two polynomials of some degree $d$ using the FFT, the time complexity of which is $O(d \log{d})$ ?

I guess we can. (Mmm)

evinda said:
If so, will we have a recursive relation of the following form?

$$f(n,n)=\begin{cases}
f\left(\frac{n}{2},n \right )+O\left(\frac{n}{2}\log{\frac{n}{2}} \right ), & n \text{ even} \\
f\left(\frac{n-1}{2},n \right )+O\left(\frac{n-1}{2}\log{\frac{n-1}{2}} \right )+\dots, & n \text{ odd}
\end{cases}$$

Shouldn't that be:
$$f(n,r)=\begin{cases}
f\left(\frac{n}{2},r \right )+O\left(r\log{r} \right ), & n \text{ even} \\
f\left(n-1,r \right )+O\left(r\log{r} \right ), & n \text{ odd}
\end{cases}$$
(Wondering)

evinda said:
For the case when $n$ is odd, how do we multiply a polynomial of degree $n-1$ by a polynomial of degree $1$? Do we use the FFT again?

We don't.
First we evaluate the left hand polynomial completely, yielding a polynomial of degree $r-1$.
And then we multiply by the right hand polynomial of degree $r-1$.
We can indeed use the FFT for that. (Happy)
 
I like Serena said:
Isn't $r$ supposed to be a constant?

Yes, it just says at the beginning that $r$ will have to be chosen in a clever way. So they mean that we have to pick an $r$ in the order $O((\log{n})^c)$, or not? (Thinking)
I like Serena said:
Shouldn't that be:
$$f(n,r)=\begin{cases}
f\left(\frac{n}{2},r \right )+O\left(r\log{r} \right ), & n \text{ even} \\
f\left(n-1,r \right )+O\left(r\log{r} \right ), & n \text{ odd}
\end{cases}$$
(Wondering)

We don't.
First we evaluate the left hand polynomial completely, yielding a polynomial of degree $r-1$.
And then we multiply by the right hand polynomial of degree $r-1$.
We can indeed use the FFT for that. (Happy)

Isn't it as follows?

$(X+a)^{2k+1} \mod{(X^r-1)}=((X+a)^k)^2 \cdot (X+a) \mod{(X^r-1)}$

So don't we multiply a polynomial of maximum degree $r-1$ by a polynomial of degree $1$?
Or am I wrong? (Thinking)
 
evinda said:
Yes, it just says at the beginning that $r$ will have to be chosen in a clever way. So they mean that we have to pick an $r$ in the order $O((\log{n})^c)$, or not?

I don't know what that is supposed to mean. I was hoping you could clarify. (Worried)
evinda said:
Isn't it as follows?

$(X+a)^{2k+1} \mod{(X^r-1)}=((X+a)^k)^2 \cdot (X+a) \mod{(X^r-1)}$

So don't we multiply a polynomial of maximum degree $r-1$ by a polynomial of degree $1$?
Or am I wrong?

Oh yes, the right side polynomial is always $(X+a)$. (Mmm)
That means we can calculate it directly:
$$(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)(X+a) \\
= X^r + (b_{r-2}+a)X^{r-1} + (b_{r-3}+ab_{r-2})X^{r-2} + ... +(b_0+ab_1)X+ab_0 \\
= (b_{r-2}+a)X^{r-1} + (b_{r-3}+ab_{r-2})X^{r-2} + ... +(b_0+ab_1)X+(ab_0 + 1)
$$
It takes about $r$ multiplications and $r$ additions, so it's $O(r)$.

It doesn't really matter though. We might as well use the same FFT for $O(r\log r)$.
That's because $O(r\log r + r)=O(r\log r)$. (Thinking)
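
In code, this one step might look as follows (just a sketch; it assumes the coefficients are already reduced modulo $n$ and that $n<2^{32}$ so the products fit in 64 bits):

Code:
#include <stdint.h>

/* Replace p by p*(X + a) mod (X^r - 1, n):  p*X is a cyclic shift of the
   coefficient array (X * X^{r-1} wraps around to X^0), a*p is a scalar
   multiple, and adding the two is one pass -- O(r) operations in total. */
static void mul_by_x_plus_a(uint64_t *p, int r, uint64_t a, uint64_t n)
{
    uint64_t wrap = p[r - 1];                 /* coefficient that wraps to X^0 */
    for (int i = r - 1; i > 0; i--)
        p[i] = (p[i - 1] + a * p[i]) % n;     /* new coefficient of X^i */
    p[0] = (wrap + a * p[0]) % n;
}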
 
I like Serena said:
I don't know what that is supposed to mean. I was hoping you could clarify. (Worried)

It says the following:

View attachment 7582

(Thinking) But as they write it, they mean that the computational cost is polynomial only if $r$ is $O((\log{n})^c)$. I don't know why they say it like that. (Thinking)

I like Serena said:
Oh yes, the right side polynomial is always $(X+a)$. (Mmm)
That means we can calculate it directly:
$$(X^{r-1}+b_{r-2}X^{r-2} + ... + b_0)(X+a) \\
= X^r + (b_{r-2}+a)X^{r-1} + (b_{r-3}+ab_{r-2})X^{r-2} + ... +(b_0+ab_1)X+ab_0 \\
= (b_{r-2}+a)X^{r-1} + (b_{r-3}+ab_{r-2})X^{r-2} + ... +(b_0+ab_1)X+(ab_0 + 1)
$$
It takes about $r$ multiplications and $r$ additions, so it's $O(r)$.

It doesn't really matter though. We might as well use the same FFT for $O(r\log r)$.
That's because $O(r\log r + r)=O(r\log r)$. (Thinking)

So don't we have the following?

$$f(n,r)=\begin{cases}
f\left(\frac{n}{2},r \right )+O\left(r\log{r} \right ), & n \text{ even} \\
f\left(n-1,r \right )+O(r), & n \text{ odd}
\end{cases}$$

Also, I was wondering: isn't it possible that the polynomial that we get has degree $<r-1$? (Thinking)

In addition, we also need an initial condition in order to be able to solve the recurrence relation, don't we?
Do we check the computational cost when $n=r$, in order to find an initial condition? Or am I wrong? (Thinking)
 

evinda said:
It says the following:

But as they write it, they mean that the computational cost is polynomial only if $r$ is $O((\log{n})^c)$. I don't know why they say it like that.

It actually says 'if $r$ is of size $O((\log{n})^c)$'.
So I believe they're not talking about computational complexity but just about which constant value of $r$ to pick.
In particular, that affects the number of different values of $a$ that we have to check.

Anyway, we're currently only talking about the computational complexity for a single value of $a$. (Thinking)

evinda said:
So don't we have the following?

$$f(n,r)=\begin{cases}
f\left(\frac{n}{2},r \right )+O\left(r\log{r} \right ), & n \text{ even} \\
f\left(n-1,r \right )+O(r), & n \text{ odd}
\end{cases}$$

Also, I was wondering: isn't it possible that the polynomial that we get has degree $<r-1$?

Yep.

A degree $<r-1$ can happen in some of the iterations, but not generally.
And it does not matter for the worst case computational complexity.

evinda said:
In addition, we also need an initial condition in order to be able to solve the recurrence relation, don't we?
Do we check the computational cost when $n=r$, in order to find an initial condition? Or am I wrong? (Thinking)

Indeed, we need an initial condition.
As the algorithm is now, we can only stop when $n=1$, can't we? (Wondering)
Otherwise we don't actually have the polynomial yet.
And then there is nothing to do, so the computational complexity is $0$.
 
I like Serena said:
It actually says 'if $r$ is of size $O((\log{n})^c)$'.
So I believe they're not talking about computational complexity but just about which constant value of $r$ to pick.
In particular, that affects the number of different values of $a$ that we have to check.

So the computational cost will be polynomial, no matter what $r$ we pick, right?

It doesn't say how $r$ affects the number of different values of $a$ that we have to check, does it?
I like Serena said:
A degree $<r-1$ can happen in some of the iterations, but not generally.
And it does not matter for the worst case computational complexity.

Ah, OK.
I like Serena said:
Indeed, we need an initial condition.
As the algorithm is now, we can only stop when $n=1$, can't we? (Wondering)
Otherwise we don't actually have the polynomial yet.
And then there is nothing to do, so the computational complexity is $0$.

Ah yes, I see... So we have the following recurrence relation, right?

$$f(n,r)=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even} \\
f(n-1,r)+O(r), & n \text{ odd}>1
\end{cases}$$

How can we solve the recurrence relation, now that $f$ is a function of two variables? Do we maybe replace $r$ by $O((\log{n})^c)$ ? (Thinking)
 
  • #10
evinda said:
So the computational cost will be polynomial, no matter what $r$ we pick, right?

It doesn't say how $r$ affects the number of different values of $a$ that we have to check, does it?

I still don't quite get what is intended.
And as far as I can tell the cost will be polynomial whatever we do. (Worried)
evinda said:
Ah yes, I see... So we have the following recurrence relation, right?

$$f(n,r)=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even} \\
f(n-1,r)+O(r), & n \text{ odd}>1
\end{cases}$$

How can we solve the recurrence relation, now that $f$ is a function of two variables? Do we maybe replace $r$ by $O((\log{n})^c)$ ? (Thinking)

Yep.

We can simplify it:
$$\begin{aligned}f(n,r)&=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even } \\
f(n-1,r)+O(r), & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even } \\
f(\frac{n-1}2,r)+O(r) + O(r\log r), & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
f\left(\lfloor\frac{n}{2}\rfloor,r \right )+O(r \log{r}), & n >1
\end{cases} \\
&= O(\log n \cdot r \log r)
\end{aligned}$$

Now we can substitute $r=O((\log n)^c)$ if we want. (Thinking)
 
  • #11
I like Serena said:
Yep.

We can simplify it:
$$\begin{aligned}f(n,r)&=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even } \\
f(n-1,r)+O(r), & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
f\left(\frac{n}{2},r \right )+O(r \log{r}), & n \text{ even } \\
f(\frac{n-1}2,r)+O(r) + O(r\log r), & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
f\left(\lfloor\frac{n}{2}\rfloor,r \right )+O(r \log{r}), & n >1
\end{cases} \\
&= O(\log n \cdot r \log r)
\end{aligned}$$

Now we can substitute $r=O((\log n)^c)$ if we want. (Thinking)

We have $f(n,r)=f\left( \lfloor \frac{n}{2}\rfloor,r\right)+O(r \log{r})=f\left( \lfloor\frac{n}{4}\rfloor,r\right)+O(2r \log{r})= \dots= f\left( \lfloor \frac{n}{2^i}\rfloor,r\right)+O( i r \log{r})$.

We stop when $\lfloor\frac{n}{2^i}\rfloor=1$. This holds for $i=O(\log{n})$.

So we deduce that $f(n,r)=O(\log{n} r \log{r})$, right?

So we can get that $f(n,r)=O((\log{n})^{c+1} \log{(\log{n})^c})$, right?
 
  • #12
evinda said:
We have $f(n,r)=f\left( \lfloor \frac{n}{2}\rfloor,r\right)+O(r \log{r})=f\left( \lfloor\frac{n}{4}\rfloor,r\right)+O(2r \log{r})= \dots= f\left( \lfloor \frac{n}{2^i}\rfloor,r\right)+O( i r \log{r})$.

We stop when $\lfloor\frac{n}{2^i}\rfloor=1$. This holds for $i=O(\log{n})$.

So we deduce that $f(n,r)=O(\log{n} r \log{r})$, right?

So we can get that $f(n,r)=O((\log{n})^{c+1} \log{(\log{n})^c})$, right?

Yep.

Furthermore $(\log{n})^{c+1} \log{(\log{n})^c} = O((\log n)^{c+2}) = O((\log n)^{c'})$.
And this is less than linear, which is $O(n)$.

Perhaps we need $r$ of size $O((\log n)^c)$ to achieve less than linear computational complexity instead of polynomial? (Wondering)
 
  • #13
I like Serena said:
Yep.

Furthermore $(\log{n})^{c+1} \log{(\log{n})^c} = O((\log n)^{c+2}) = O((\log n)^{c'})$.
And this is less than linear, which is $O(n)$.

Perhaps we need $r$ of size $O((\log n)^c)$ to achieve less than linear computational complexity instead of polynomial? (Wondering)

If $r$ is not in $O((\log{n})^c)$ for some constant $c$, then the computational cost cannot be polynomial with respect to $n$, right? (Thinking)
 
  • #14
evinda said:
If $r$ is not in $O((\log{n})^c)$ for some constant $c$, then the computational cost cannot be polynomial with respect to $n$, right? (Thinking)

Suppose $r$ is of size $O(n)$, then $f(n,r)=O(n\log^2 n)$, so that $f(n,r)=O(n^2)$, which is polynomial, but not linear. (Nerd)
 
  • #15
I like Serena said:
Suppose $r$ is of size $O(n)$, then $f(n,r)=O(n\log^2 n)$, so that $f(n,r)=O(n^2)$, which is polynomial, but not linear. (Nerd)

Ah right. So the condition $r=O((\log{n})^c)$ is necessary for the computational cost to be polynomial in the length of $n$, which is $O(\log{n})$, or not? (Thinking)
 
  • #16
evinda said:
Ah right. So the condition $r=O((\log{n})^c)$ is necessary for the computational cost to be polynomial in the length of $n$, which is $O(\log{n})$, or not? (Thinking)

What is the length of $n$?

Is it the number of digits or some such?
If so then I think that's it. (Mmm)
 
  • #17
I like Serena said:
What is the length of $n$?

Is it the number of digits or some such?
If so then I think that's it. (Mmm)

Yes, I meant the number of digits of $n$. (Nod)

Using the idea described above, we get the following algorithm:

View attachment 7583

Could you explain to me what the following if-statement checks?

Code:
if (r is a prime number) then
    if (n^i mod r != 1 for all i, 1 <= i <= 4 ceil(logn)^2 ) then
       break;
 

  • #18
evinda said:
Could you explain to me what the following if-statement checks?

Code:
if (r is a prime number) then
    if (n^i mod r != 1 for all i, 1 <= i <= 4 ceil(logn)^2 ) then
       break;

From the context I deduce it's a heuristic criterion that indicates we have found a suitable $r$.
I can't say why it would be suitable though.

Still, we can say a little more about it.
Suppose $n^i \bmod r = 1$, what does that mean?
Isn't there a proposition that looks like that? (Wondering)
 
  • #19
I like Serena said:
From the context I deduce it's a heuristic criterion that indicates we have found a suitable $r$.
I can't say why it would be suitable though.

Still, we can say a little more about it.
Suppose $n^i \bmod r = 1$, what does that mean?
Isn't there a proposition that looks like that? (Wondering)

Ah, then we have that $r \mid n^i-1$ and so $r$ cannot divide $n$. Right?

So if $r$ is a prime number and $r \nmid n^i-1$ then it is possible that $r$ is a prime divisor of $n$, so in this case we make the further test if $(X+a)^n \mod{(X^r-1)} \neq X^{n \mod{r}}+a$ for some $a$, right?
 
  • #20
evinda said:
Ah, then we have that $r \mid n^i-1$ and so $r$ cannot divide $n$. Right?

So if $r$ is a prime number and $r \nmid n^i-1$ then it is possible that $r$ is a prime divisor of $n$, so in this case we make the further test if $(X+a)^n \mod{(X^r-1)} \neq X^{n \mod{r}}+a$ for some $a$, right?

Actually, we already know that $r$ does not divide $n$; if it did, the algorithm would have terminated (line 4).

But don't we have that $n^d \bmod r = 1$ if $d$ is the order of $n$ with respect to $r$?
And also that $n^{\phi(r)} \bmod r = 1$ if $n$ and $r$ are co-prime? (Wondering)

More specifically, $n^i\bmod r=1$ means that $(n,r)=1$ and $\#n \mid i$.
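
As an aside, $\#n$ can be computed directly (a sketch; it assumes $(n,r)=1$ and $r>1$, since otherwise the loop would not terminate):

Code:
#include <stdint.h>

/* The multiplicative order #n of n modulo r: the least d >= 1 with
   n^d = 1 (mod r).  Terminates because (n, r) = 1, so #n divides phi(r). */
static uint64_t order_mod(uint64_t n, uint64_t r)
{
    uint64_t p = n % r, d = 1;
    while (p != 1) {
        p = p * (n % r) % r;   /* p = n^d mod r */
        d++;
    }
    return d;
}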
 
  • #21
I like Serena said:
Actually, we already know that $r$ does not divide $n$; if it did, the algorithm would have terminated (line 4).

Oh yes, right...

I like Serena said:
But don't we have that $n^d \bmod r = 1$ if $d$ is the order of $n$ with respect to $r$?

And also that $n^{\phi(r)} \bmod r = 1$ if $n$ and $r$ are co-prime? (Wondering)

Yes... (Nod)

I like Serena said:
More specifically, $n^i\bmod r=1$ means that $(n,r)=1$ and $\#n \mid i$.

With $\#n$ you mean the order of $n$, right?

So according to the algorithm, if $n$ has an order between $1$ and $4 \lceil \log{n}\rceil^2$ with respect to $r$, we can deduce that $n$ is a prime, otherwise we cannot see whether $n$ is prime or composite. Right? (Thinking)

If so, this fact isn't known in general, is it? (Thinking)
 
  • #22
evinda said:
With $\#n$ you mean the order of $n$, right?

So according to the algorithm, if $n$ has an order between $1$ and $4 \lceil \log{n}\rceil^2$ with respect to $r$, we can deduce that $n$ is a prime, otherwise we cannot see whether $n$ is prime or composite. Right? (Thinking)

If so, this fact isn't known in general, is it? (Thinking)

Not quite. (Shake)

So if $n^i \bmod r \ne 1$, that means that $(n,r)\ne 1$ or $\#n \not\mid i$ with respect to $r$.
And yes, $\#n$ is the order of $n$ with respect to $r$.
Since we have already checked all lower values of $r$, and since $r\not\mid n$, it follows that $(n,r)=1$.
Therefore $\#n \not\mid i$ with respect to $r$.
Since this is the case for all $i$ up to $4 \lceil \log{n}\rceil^2$, the order of $n$ with respect to $r$ must be higher than that upper bound, and that also means that $r$ has to be greater than that upper bound.

Anyway, it doesn't tell us whether $n$ is prime or not.
It's just a property of $r$ that we can observe, which apparently makes it suitable. (Thinking)
 
  • #23
I like Serena said:
So if $n^i \bmod r \ne 1$, that means that $(n,r)\ne 1$ or $\#n \not\mid i$ with respect to $r$.

Why does this hold? $A \Rightarrow B\wedge C $ doesn't imply that $A^c \Rightarrow B^c \lor C^c$, does it? (Thinking)

I like Serena said:
Since we have already checked all lower values of $r$, and since $r\not\mid n$, it follows that $(n,r)=1$.

You mean when we get to the last possible iteration of the while-loop, right? (Thinking)

I like Serena said:
Therefore $\#n \not\mid i$ with respect to $r$.
Since this is the case for all $i$ up to $4 \lceil \log{n}\rceil^2$, the order of $n$ with respect to $r$ must be higher than that upper bound, and that also means that $r$ has to be greater than that upper bound.

Why does $r$ have to be greater than the upper bound?

I like Serena said:
Anyway, it doesn't tell us whether $n$ is prime or not.
It's just a property of $r$ that we can observe, which apparently makes it suitable. (Thinking)

Ah, line 9 is executed only if line 4 is never executed in the while-loop. It doesn't depend on lines 5-7, right? (Thinking)
 
  • #24
evinda said:
Why does this hold? $A \Rightarrow B\wedge C $ doesn't imply that $A^c \Rightarrow B^c \lor C^c$, does it?

Indeed.
But $A \Leftrightarrow B\land C $ does imply $A^c \Leftrightarrow B^c \lor C^c$. (Nerd)

evinda said:
You mean when we get to the last possible iteration of the while-loop, right?

I mean when the condition is satisfied to break out of the loop.
If that condition is never satisfied, we have $r=n$ after the while-loop and the algorithm ends.

evinda said:
Why does $r$ have to be greater than the upper bound?

Let $N = 4 \lceil 4 \log{n}\rceil^2$.
Then the order of $n$ with respect to $r$ satisfies $\#n>N$.
Since $r$ is prime, the greatest order $n$ can have is $r-1$.
Therefore $r>\#n>N$. (Thinking)

evinda said:
Ah, line 9 is executed only if line 4 is never executed in the while-loop. It doesn't depend on lines 5-7, right?

More specifically, the algorithm returns in line 9 if the condition in line 4 was never satisfied, nor the conditions in line 5 and 6. (Nod)
 
  • #25
I like Serena said:
Indeed.
But $A \Leftrightarrow B\land C $ does imply $A^c \Leftrightarrow B^c \lor C^c$. (Nerd)

Ah, OK. And if $(n,r)=1$ and $\#n \mid i$ then we have that $i=k \cdot \#n$ for some $k \in \mathbb{Z}$, and so it follows that $n^{i} \bmod{r}=1$.

I like Serena said:
I mean when the condition is satisfied to break out of the loop.
If that condition is never satisfied, we have $r=n$ after the while-loop and the algorithm ends.

I see...

I like Serena said:
Let $N = 4 \lceil 4 \log{n}\rceil^2$.
Then the order of $n$ with respect to $r$ satisfies $\#n>N$.
Since $r$ is prime, the greatest order $n$ can have is $r-1$.
Therefore $r>\#n>N$. (Thinking)

You mean $N=4 \lceil \log{n}\rceil^2$, right?
Ah, so since the greatest order $n$ can have is $r-1$, and we have checked that the order isn't $\leq N$, it holds that $r-1>N \Rightarrow r>N+1$, right? And since we also have that $r<n$, we get the inequality $N+1<r<n$, right? (Thinking)
I like Serena said:
More specifically, the algorithm returns in line 9 if the condition in line 4 was never satisfied, nor the conditions in line 5 and 6. (Nod)

So, if we find a prime $r<n$ such that $\#n \nmid i, \forall i \in \{1, \dots, 4 \lceil \log{n}\rceil^2 \}$, then we will have found a suitable $r$ to check if $(X+a)^n \mod{(X^r-1)} \neq X^{n \mod{r}}+a$ for some value of $a$. With this value of $r$ and the values of $a$ that we check, we can deduce if $n$ is composite or not. Right?

If we don't find such a prime $r$, then we will have checked that no integer $<n$ divides $n$ and so we will deduce that $n$ is a prime, right?
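
To make that concrete for myself, here is a rough C sketch of the search for $r$ as I read the algorithm (trial division stands in for the primality test on $r$, $n$ is assumed to fit comfortably in 64 bits, and all names are mine):

Code:
#include <math.h>
#include <stdint.h>

static int is_prime_small(uint64_t r)          /* trial division suffices here */
{
    if (r < 2) return 0;
    for (uint64_t d = 2; d * d <= r; d++)
        if (r % d == 0) return 0;
    return 1;
}

/* Search r = 2, 3, ... as in the algorithm.  Returns:
   0 -- some r < n divides n, so n is composite (line 4),
   n -- the loop ran out: no r < n divides n, so n is prime (line 9),
   r -- otherwise: a prime r with n^i mod r != 1 for all
        i = 1 .. 4*ceil(log2 n)^2, i.e. #n exceeds that bound (break). */
static uint64_t find_r(uint64_t n)
{
    uint64_t L = (uint64_t)ceil(log2((double)n));
    uint64_t bound = 4 * L * L;
    uint64_t r;
    for (r = 2; r < n; r++) {
        if (n % r == 0) return 0;              /* r | n and r < n: composite */
        if (!is_prime_small(r)) continue;
        uint64_t p = 1;
        int order_exceeds = 1;
        for (uint64_t i = 1; i <= bound; i++) {
            p = p * (n % r) % r;               /* p = n^i mod r */
            if (p == 1) { order_exceeds = 0; break; }
        }
        if (order_exceeds) return r;           /* suitable r found */
    }
    return r;                                  /* r == n here */
}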
 
  • #26
Yes to all of that. (Nod)
 
  • #27
I like Serena said:
Yes to all of that. (Nod)

Nice. (Smile) I also have some questions about the analysis of the time for the single operations of the algorithm.
About the time for the perfect power test: my notes mention an algorithm that needs no more than $O((\log{n})^2 \log{\log{n}})$ arithmetic operations to carry out the test in line 1 of the algorithm.

And I read at another site the following idea for the perfect power test:

Given a number $n$, if it can be written at all as $a^b$ ($b > 1$), then $b< \log{n}+1$. And for every fixed $b$, checking if there exists an $a$ with $a^b=n$ can be done using binary search.

The logarithm is to base $2$, right? How do we get that $b< \log_2{n}+1$ ? (Thinking)
 
  • #28
We will have the greatest $b$ when $a$ is smallest.
What is the smallest $a$? (Wondering)
 
  • #29
I like Serena said:
We will have the greatest $b$ when $a$ is smallest.
What is the smallest $a$? (Wondering)

$2$? Then don't we have the following?

$$n=a^b \geq 2^b \Rightarrow b \leq \log_2{n}$$

Can we also find a bound for $a$? Or is it between $1$ and $n$? (Thinking)
 
  • #30
evinda said:
$2$? Then don't we have the following?

$$n=a^b \geq 2^b \Rightarrow b \leq \log_2{n}$$

Can we also find a bound for $a$? Or is it between $1$ and $n$?

Yep. (Nod)

We could calculate $a$ directly with $a=\sqrt{n}$.
Or we could calculate the upper bound for $a$ by using the smallest value of $b$ giving us $\sqrt n$.
But those are difficult to calculate with an integer algorithm.
So instead we should probably pick $n$ as upper bound so that we need $\log_2 n$ bisections. (Thinking)
 
  • #31
I like Serena said:
Yep. (Nod)

We could calculate $a$ directly with $a=\sqrt{n}$.
Or we could calculate the upper bound for $a$ by using the smallest value of $b$ giving us $\sqrt n$.
But those are difficult to calculate with an integer algorithm.
So instead we should probably pick $n$ as upper bound so that we need $\log_2 n$ bisections. (Thinking)


I have thought of the following algorithm:

Code:
Algorithm:
k = -1;
b = 2;
while ((b <= log2(n)) && (k == -1)) {
    k = binarysearch(b, n);
    b = b + 1;
}
/* b is incremented once more after a successful search */
if (k != -1) printf("n=%d^%d", k, b - 1);

Code:
int binarysearch(int b, int n){
    int low = 2;
    int high = n;
    while (low <= high){
        int mid = low + (high - low)/2;
        long long p = 1;                    /* compute mid^b; note ^ is XOR in C */
        for (int i = 0; i < b && p <= n; i++)
            p *= mid;
        if (p == n) return mid;
        else if (p < n) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}
Is the algorithm I have written right? But don't we need time $O((\log{n})^3)$ for the exponentiation?
So will the number of arithmetic operations be greater than desired? (Thinking)
 
  • #32
evinda said:
Is the algorithm I have written right? But don't we need time $O((\log{n})^3)$ for the exponentiation?
So will the number of arithmetic operations be greater than desired? (Thinking)

It looks correct.
Can't we improve it by starting the next iteration with a new upper bound from the previous iteration? (Wondering)
 
  • #33
I like Serena said:
Can't we improve it by starting the next iteration with a new upper bound from the previous iteration? (Wondering)

You mean a new upper bound for $b$ ? How can we find it? (Thinking)
 
  • #34
evinda said:
You mean a new upper bound for $b$ ? How can we find it? (Thinking)

No, a new upper bound for $k$ for the next $b$. (Shake)
Suppose we found that for a certain $b$ we have $k^b<n$ and $(k+1)^b>n$.
Let the next $b$ be $b'=b+1$.
Now we want to see if we can find a corresponding $k'$ such that $n=(k')^{b'}$.
Don't we have that $k' \le k$? (Wondering)
 
  • #35
I like Serena said:
No, a new upper bound for $k$ for the next $b$. (Shake)
Suppose we found that for a certain $b$ we have $k^b<n$ and $(k+1)^b>n$.
Let the next $b$ be $b'=b+1$.
Now we want to see if we can find a corresponding $k'$ such that $n=(k')^{b'}$.
Don't we have that $k' \le k$? (Wondering)

Yes, right... (Nod) So does the function binarysearch have to return the value $\text{mid}$ for which $\text{mid}^b<n$ and $(\text{mid}+1)^b>n$, besides the $-1$ that indicates that we haven't found a $k$ such that $k^b=n$? So do we have to use pointers in this function?
Or could we do this in some other way? (Thinking)
 
  • #36
In C we can either pass a pointer to return an additional value (an output parameter), or we can return a structure with the values.
In this case I recommend passing a pointer as an input-output parameter to track the upper bound. (Thinking)
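
For instance, something like this (only a sketch; the capped power loop is my own way to avoid overflow, and $n<2^{63}$ is assumed):

Code:
#include <stdint.h>

/* Binary search for k with k^b == n in [2, *high].  *high is an in/out
   parameter: on return it holds the largest k with k^b <= n, which is a
   valid upper bound for the next exponent b+1 (k'^(b+1) = n forces
   k' <= k).  Returns k on success, -1 if n is not a b-th power. */
static int64_t binarysearch2(int b, uint64_t n, uint64_t *high)
{
    uint64_t low = 2, hi = *high, best = 1;
    while (low <= hi) {
        uint64_t mid = low + (hi - low) / 2;
        uint64_t p = 1;
        int over = 0;                      /* does mid^b exceed n? */
        for (int i = 0; i < b && !over; i++) {
            if (p > n / mid) over = 1;     /* capping avoids overflow */
            else p *= mid;
        }
        if (!over && p == n) { *high = mid; return (int64_t)mid; }
        if (over || p > n) hi = mid - 1;
        else { best = mid; low = mid + 1; }
    }
    *high = best;
    return -1;
}

/* Driver: the upper bound only ever shrinks from one b to the next. */
static int is_perfect_power(uint64_t n)
{
    uint64_t high = n;
    for (int b = 2; (1ULL << b) <= n; b++)     /* b <= log2 n */
        if (binarysearch2(b, n, &high) != -1)
            return 1;                          /* n = k^b for this b */
    return 0;
}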
 
  • #37
I like Serena said:
In C we can either pass a pointer to return an additional value (an output parameter), or we can return a structure with the values.
In this case I recommend passing a pointer as an input-output parameter to track the upper bound. (Thinking)

I also took a look at the algorithm mentioned in my notes. The algorithm is the following:

View attachment 7608

It says the following about it:

View attachment 7609

In the lemma it says that the algorithm makes $O((\log{n})^2 \log{\log{n}})$ multiplications of numbers from $\{1, \dots,n\}$. How do we deduce this? (Thinking)
 

  • #38
evinda said:
In the lemma it says that the algorithm makes $O((\log{n})^2 \log{\log{n}})$ multiplications of numbers from $\{1, \dots,n\}$. How do we deduce this? (Thinking)

How many times does the outer loop run?
The inner loop?
And what is the complexity of fast exponentiation? (Wondering)
 
  • #39
I like Serena said:
How many times does the outer loop run?
The inner loop?

The outer loop runs $O(\log{n})$ times. Since with the inner loop we carry out a binary search, it is executed $O(\log{n})$ times, too. Right?
I like Serena said:
And what is the complexity of fast exponentiation? (Wondering)

For the computation of $m^b$, time $O(\log^3{b})$ is required using fast exponentiation, right? (Thinking)
 
  • #40
evinda said:
The outer loop runs $O(\log{n})$ times. Since with the inner loop we carry out a binary search, it is executed $O(\log{n})$ times, too. Right?

Yep. (Nod)

evinda said:
For the computation of $m^b$, time $O(\log^3{b})$ is required using fast exponentiation, right?

Isn't the fast exponentiation $O(\log{b})$?
And we have $b=O(\log n)$ don't we? (Wondering)
 
  • #41
I like Serena said:
Isn't the fast exponentiation $O(\log{b})$?

How is it justified that the fast exponentiation is $O(\log{b})$?

I like Serena said:
And we have $b=O(\log n)$ don't we? (Wondering)

Yes. (Smile)
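
(If so, the lemma's count would follow directly:
$$\underbrace{O(\log{n})}_{\text{outer loop over } b} \cdot \underbrace{O(\log{n})}_{\text{binary search}} \cdot \underbrace{O(\log{b})}_{\text{fast exponentiation}} = O((\log{n})^2 \log{\log{n}}),$$
using $b=O(\log{n})$.)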
 
  • #42
evinda said:
How is it justified that the fast exponentiation is $O(\log{b})$?

Isn't the computational complexity of calculating $a^n$ with fast exponentiation:
$$\begin{aligned}g(n)&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2}\right )+1, & n \text{ even } \\
g(n-1)+1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2} \right )+1, & n \text{ even } \\
g(\frac{n-1}2)+1 + 1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\lfloor\frac{n}{2}\rfloor \right )+O(1), & n >1
\end{cases} \\
&= O(\log n)
\end{aligned}$$
(Wondering)
 
  • #43
I like Serena said:
Isn't the computational complexity of calculating $a^n$ with fast exponentiation:
$$\begin{aligned}g(n)&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2}\right )+1, & n \text{ even } \\
g(n-1)+1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\frac{n}{2} \right )+1, & n \text{ even } \\
g(\frac{n-1}2)+1 + 1, & n \text{ odd}>1
\end{cases} \\
&=\begin{cases}
0, & n=1\\
g\left(\lfloor\frac{n}{2}\rfloor \right )+O(1), & n >1
\end{cases} \\
&= O(\log n)
\end{aligned}$$
(Wondering)

Why is it plus $1$ in the last two cases?

$$g(n)=\begin{cases}
0, n=1\\
g\left(\frac{n}{2}\right )+1, n \text{ even } \\
g(n-1)+1, n \text{ odd}>1
\end{cases}$$
 
  • #44
evinda said:
Why is it plus $1$ in the last two cases?

$$g(n)=\begin{cases}
0, n=1\\
g\left(\frac{n}{2}\right )+1, n \text{ even } \\
g(n-1)+1, n \text{ odd}>1
\end{cases}$$

We have $a^{2k}=(a^k)^2$ respectively $a^{2k+1}=(a^{2k})\cdot a$, and each takes 1 multiplication. (Thinking)
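
In code the recursion is simply (a sketch with plain integers; in the algorithm each product would additionally be reduced, but the multiplication count is the same):

Code:
#include <stdint.h>

/* Fast exponentiation, mirroring the recurrence for g(n): each case
   spends exactly one multiplication, so g(n) = O(log n) in total.
   Assumes n >= 1. */
static uint64_t fastpow(uint64_t a, uint64_t n)
{
    if (n == 1) return a;                   /* g(1) = 0: nothing left to do  */
    if (n % 2 == 0) {
        uint64_t h = fastpow(a, n / 2);     /* a^{2k} = (a^k)^2 ...          */
        return h * h;                       /* ... at the cost of 1 multiply */
    }
    return fastpow(a, n - 1) * a;           /* a^{2k+1} = a^{2k} * a         */
}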
 
  • #45
I like Serena said:
We have $a^{2k}=(a^k)^2$ respectively $a^{2k+1}=(a^{2k})\cdot a$, and each takes 1 multiplication. (Thinking)

I see... So the computational complexity is the number of multiplications done?
It differs from the time complexity, right? (Thinking)

Furthermore, it says that the number $\lceil \log{n} \rceil$ may be calculated in $O(\log{n})$ arithmetic operations (by repeated halving).
How do we calculate $\lceil \log{n} \rceil$ by repeated halving? (Thinking)
 
  • #46
evinda said:
I see... So the computational complexity is the number of multiplications done?
It differs from the time complexity, right?

It depends on how we measure time complexity.
Typically this is the number of multiplications/divisions and (possibly separately) the number of additions/subtractions.
Either way, we can lump it all into one big number with the $O$ operator.
So instead of adding $1\text{ multiplication}$, we might add $O(1)$ to eliminate the type of operation. (Nerd)

evinda said:
Furthermore, it says that the number $\lceil \log{n} \rceil$ may be calculated in $O(\log{n})$ arithmetic operations (by repeated halving).
How do we calculate $\lceil \log{n} \rceil$ by repeated halving?

If we want to know what $\log_2 n$ is, we can simply count the number of bits, can't we? (Wondering)
 
  • #47
I like Serena said:
It depends on how we measure time complexity.
Typically this is the number of multiplications/divisions and (possibly separately) the number of additions/subtractions.
Either way, we can lump it all into one big number with the $O$ operator.
So instead of adding $1\text{ multiplication}$, we might add $O(1)$ to eliminate the type of operation. (Nerd)

I thought that for the time complexity we usually multiply the number of arithmetic operations by the corresponding time needed... (Thinking)

I like Serena said:
If we want to know what $\log_2 n$ is, we can simply count the number of bits, can't we? (Wondering)

How do we count the number of bits? (Thinking)
 
  • #48
evinda said:
I thought that for the time complexity we usually multiply the number of arithmetic operations by the corresponding time needed...

As far as I know, time complexity is an abstract measure, which can be anything, as long as it's (more or less) related to time.
And saying it's $O(f(n))$ says that it's less than some constant times $f(n)$, which covers any amount of time that a multiplication or addition (that is presumed to be more or less constant) may need. (Nerd)

evinda said:
How do we count the number of bits?

Don't we usually already have a bit representation of $n$?
Either way, even counting bits is effectively the same as repeated halving, which is $O(\log n)$. (Thinking)
 
  • #49
I like Serena said:
As far as I know, time complexity is an abstract measure, which can be anything, as long as it's (more or less) related to time.
And saying it's $O(f(n))$ says that it's less than some constant times $f(n)$, which covers any amount of time that a multiplication or addition (that is presumed to be more or less constant) may need. (Nerd)

So we could consider that one multiplication of some integers $a,b$ takes time $O(1)$ or that it takes time $O(\log^2{n})$, right?

I like Serena said:
Don't we usually already have a bit representation of $n$?
Either way, even counting bits is effectively the same as repeated halving, which is $O(\log n)$. (Thinking)

Ok. But how can we find $\lceil \log{n}\rceil$ by repeated halving? (Thinking)
 
  • #50
evinda said:
So we could consider that one multiplication of some integers $a,b$ takes time $O(1)$ or that it takes time $O(\log^2{n})$, right?

Indeed. (Nod)

evinda said:
Ok. But how can we find $\lceil \log{n}\rceil$ by repeated halving? (Thinking)

How about:
$$
\textbf{Algorithm ILSe.1 (Count Bits)} \\
\textrm{INPUT: Integer }n \ge 0 \\
\begin{array}{rl}
1.& k \leftarrow 0\ ; \\
2.& \textbf{while }n \ne 0 \\
3.& \quad n \leftarrow \lfloor \frac n2 \rfloor\ ; \\
4.& \quad k \leftarrow k+1\ ; \\
5.& \textbf{return } k\ ; \\
\end{array}$$
(Wondering)
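
Or the same thing in plain C (for $n\ge 1$ it returns $\lfloor \log_2{n} \rfloor + 1$, from which $\lceil \log_2{n} \rceil$ can be read off):

Code:
#include <stdint.h>

/* Count the bits of n by repeated halving: O(log n) iterations. */
static int count_bits(uint64_t n)
{
    int k = 0;
    while (n != 0) {
        n /= 2;       /* halve */
        k += 1;
    }
    return k;
}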
 