MHB Test whether the integer is a prime

  • #51
I like Serena said:
How about:
$$
\textbf{Algorithm ILSe.1 (Count Bits)} \\
\textrm{INPUT: Integer }n \ge 0 \\
\begin{array}{rl}
1.& k \leftarrow 0\ ; \\
2.& \textbf{while }n \ne 0 \\
3.& \quad n \leftarrow \lfloor \frac n2 \rfloor\ ; \\
4.& \quad k \leftarrow k+1\ ; \\
5.& \textbf{return } k\ ; \\
\end{array}$$
(Wondering)

Nice... (Smile) The algorithm runs as long as $\frac{n}{2^i}\geq 1 \Rightarrow n \geq 2^i \Rightarrow i \leq \log{n}$.

So the time complexity of the algorithm is $O(i)=O(\log{n})$. Right? (Thinking)
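For reference, the algorithm above can be written almost verbatim in Python (a minimal sketch; the function name is mine):

```python
def count_bits(n):
    """Algorithm ILSe.1: count the bits of n >= 0 by repeated halving."""
    k = 0
    while n != 0:
        n //= 2  # floor division by 2 drops the lowest bit
        k += 1
    return k

count_bits(13)  # 13 = 1101 in binary → 4
```

The loop body runs $\lfloor \log_2 n \rfloor + 1$ times for $n \ge 1$, which is the $O(\log n)$ bound in question.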
 
  • #52
evinda said:
Nice... (Smile) The algorithm runs as long as $\frac{n}{2^i}\geq 1 \Rightarrow n \geq 2^i \Rightarrow i \leq \log{n}$.

Happy that you like it. (Happy)

evinda said:
So the time complexity of the algorithm is $O(i)=O(\log{n})$. Right?

Yup. (Nod)
 
  • #53
I like Serena said:
Happy that you like it. (Happy)
Yup. (Nod)

(Happy)

I have also a question for the time required for testing $r$. According to my notes:

View attachment 7611

First of all, how do we deduce that the total cost for line $4$ is $O(\rho(n))$ ?

The Sieve of Eratosthenes algorithm is the following:

View attachment 7612

Why do we check what happens when $r$ gets the value $2^i+1$ for some $i$ ? Could you explain it to me? (Thinking)
 

  • #54
evinda said:
I have also a question for the time required for testing $r$. According to my notes:

First of all, how do we deduce that the total cost for line $4$ is $O(\rho(n))$ ?

Isn't line 4 executed $\rho(n)$ times, while the cost of one execution is $1\text{ division} = O(1)$? (Wondering)

evinda said:
The Sieve of Eratosthenes algorithm is the following:

Why do we check what happens when $r$ gets the value $2^i+1$ for some $i$ ? Could you explain it to me?

I believe it's to reuse the same table for a number of iterations of $r$.
When $r$ runs out of the table, the table size is doubled and recalculated for the next series of iterations of $r$. (Thinking)
 
  • #55
I like Serena said:
Isn't line 4 executed $\rho(n)$ times, while the cost of one execution is $1\text{ division} = O(1)$? (Wondering)

Oh yes, right... (Nod)
I like Serena said:
I believe it's to reuse the same table for a number of iterations of $r$.
When $r$ runs out of the table, the table size is doubled and recalculated for the next series of iterations of $r$. (Thinking)

But why do we check only what happens when $r$ is of the form $2^{i}+1$ although not every prime can be written in this form?

Also if $r=2^i+1$ for some $i$ don't we get a table of size $2^{\lceil \log{(2^{i}+1)}\rceil}-1$ ? Or am I wrong?
 
  • #56
evinda said:
But why do we check only what happens when $r$ is of the form $2^{i}+1$ although not every prime can be written in this form?

Also if $r=2^i+1$ for some $i$ don't we get a table of size $2^{\lceil \log{(2^{i}+1)}\rceil}-1$ ? Or am I wrong?

The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.
And when $r$ becomes $2^i+1$ it refers to an entry in the table that is not there yet, which is when the table gets resized, after which it will have $2^{i+1}$ entries.
Now the main algorithm can continue running $r$ from $2^i+1$ up to $2^{i+1}$ before we run out of the table again. (Thinking)
 
  • #57
I like Serena said:
The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.

Why do we double the table size each time? I haven't understood it... (Worried)
 
  • #58
evinda said:
Why do we double the table size each time? I haven't understood it... (Worried)

The Sieve of Eratosthenes does not find individual primes, but instead finds all primes in the range that we specify.
Figuring out whether some number $m$ is prime has the same complexity as finding all primes up to $m$.
So instead of rerunning the sieve for every $r$, we run it once for a range, and then iterate $r$ through that range.
And afterwards we rerun the sieve on a larger range.

It's an arbitrary decision to double the table every time.
We could also triple it, or each time add 1000 entries to it, or use yet some other scheme to grow the table.
However, we do not want to run the sieve for the full range up to $n$, since that is too computationally expensive. (Thinking)
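The scheme described here can be sketched in Python. This is only an illustration under my own assumptions (the function names and the starting table size $2^1$ are not from the notes):

```python
def sieve(limit):
    """Sieve of Eratosthenes: is_prime[j] says whether 2 <= j <= limit is prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return is_prime

def primality_table():
    """Answer 'is r prime?' for r = 2, 3, 4, ... in order, re-sieving
    only when r outgrows the current table (whose size doubles each time)."""
    size = 2                      # upper bound 2^1 to start
    table = sieve(size)
    r = 2
    while True:
        if r > size:              # r = 2^i + 1: this entry is not in the table yet
            size *= 2             # resize to 2^(i+1) and rerun the sieve
            table = sieve(size)
        yield r, table[r]
        r += 1
```

Between two resizes, every query is a constant-time table lookup; the sieve is rerun only at $r = 2^i + 1$.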
 
  • #59
I like Serena said:
The Sieve of Eratosthenes does not find individual primes, but instead finds all primes in the range that we specify.
Figuring out whether some number $m$ is prime has the same complexity as finding all primes up to $m$.
So instead of rerunning the sieve for every $r$, we run it once for a range, and then iterate $r$ through that range.
And afterwards we rerun the sieve on a larger range.

It's an arbitrary decision to double the table every time.
We could also triple it, or each time add 1000 entries to it, or use yet some other scheme to grow the table.
However, we do not want to run the sieve for the full range up to $n$, since that is too computationally expensive. (Thinking)

Ok.

I like Serena said:
The size of the table is not related to primes.
Since we double the table size each time, we have $2^i$ entries in there at all times.
And when $r$ becomes $2^i+1$ it refers to an entry in the table that is not there yet, which is when the table gets resized, after which it will have $2^{i+1}$ entries.
Now the main algorithm can continue running $r$ from $2^i+1$ up to $2^{i+1}$ before we run out of the table again. (Thinking)

So at the first execution of the Sieve of Eratosthenes we will get the table $m[2 \dots 2^{i}+1]$ so that it contains $2^{i}$ entries? (Thinking)
But if so then $m[2^{i}+1]$ will have got an entry, or am I wrong? (Thinking)
 
  • #60
evinda said:
So at the first execution of the Sieve of Eratosthenes we will get the table $m[2 \dots 2^{i}+1]$ so that it contains $2^{i}$ entries?
But if so then $m[2^{i}+1]$ will have got an entry, or am I wrong?

I think we're only talking about the upper bound instead of the actual size of the table.
In the first iteration we will have $m[2 \dots 2^{1}]$, then $m[2 \dots 2^{2}]$, and so on.
Generally we have $m[2 \dots 2^{i}]$, meaning that $2^i+1$ is not in the table. (Thinking)
 
  • #61
I like Serena said:
I think we're only talking about the upper bound instead of the actual size of the table.
In the first iteration we will have $m[2 \dots 2^{1}]$, then $m[2 \dots 2^{2}]$, and so on.
Generally we have $m[2 \dots 2^{i}]$, meaning that $2^i+1$ is not in the table. (Thinking)

Ah, ok. So when we have the table $m[2 \dots 2^{i}]$, the table will have $2^i-1$ elements, at the next iteration it will have $2^{i+1}-1$ elements, and so on. Right? (Thinking)
 
  • #62
evinda said:
Ah, ok. So when we have the table $m[2 \dots 2^{i}]$, the table will have $2^i-1$ elements, at the next iteration it will have $2^{i+1}-1$ elements, and so on. Right?

Yep. (Nod)
 
  • #63
I like Serena said:
Yep. (Nod)

So when we execute the algorithm and we have at the beginning $r=2$, we create the table $m[2 \dots 2^1]=m[2]$.
Then we get $r=3$ and we create the table $m[3 \dots 2^2]=m[3 \ 4]$.
Then we get $r=4$ and we just have to look at the previous table we created, and so on. Right?
 
  • #64
evinda said:
So when we execute the algorithm and we have at the beginning $r=2$, we create the table $m[2 \dots 2^1]=m[2]$.
Then we get $r=3$ and we create the table $m[3 \dots 2^2]=m[3 \ 4]$.
Then we get $r=4$ and we just have to look at the previous table we created, and so on, Right?

Wouldn't the table still start at 2 in every iteration? (Wondering)
 
  • #65
I like Serena said:
Wouldn't the table still start at 2 in every iteration? (Wondering)

Oh yes, right.
So when $r=2$ we create the table $m[2]$, when $r=3$ we create the table $m[2 \ 3 \ 4]$, when $r=4$ we verify with 3 steps using the previous table that it is composite. Then when $r=5$ we get the table $m[2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8]$.
We check what holds for the numbers $k=6,7,8$ with the last table that we got with $k-1$ steps. Right?
So the number of arithmetic operations done is $\sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)+O(i)= \sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)$.

Right? (Thinking)
 
  • #66
evinda said:
Oh yes, right.
So when $r=2$ we create the table $m[2]$, when $r=3$ we create the table $m[2 \ 3 \ 4]$, when $r=4$ we verify with 3 steps using the previous table that it is composite. Then when $r=5$ we get the table $m[2 \ 3 \ 4 \ 5 \ 6 \ 7 \ 8]$.
We check what holds for the numbers $k=6,7,8$ with the last table that we got with $k-1$ steps. Right?
So the number of arithmetic operations done is $\sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)+O(i)= \sum_{1 \leq i \leq \lceil \log{(\rho(n))}\rceil} O(i \cdot 2^i)$.

Right?

That looks about right yes. (Nod)
 
  • #67
I like Serena said:
That looks about right yes. (Nod)

Nice. (Happy) Then we calculate $n^i \mod{r}$, for $i=1,2, \dots, 4 \lceil \log{n}\rceil^2$.
Why is the number of multiplications modulo $r$ $O((\log{n})^2)$ for one $r$ and not $O(\log{i})$?
How do we deduce this? (Thinking)
 
  • #68
evinda said:
Nice. (Happy) Then we calculate $n^i \mod{r}$, for $i=1,2, \dots, 4 \lceil \log{n}\rceil^2$.
Why is the number of multiplications modulo $r$ $O((\log{n})^2)$ for one $r$ and not $O(\log{i})$?
How do we deduce this? (Thinking)

$O(\log{i})$ would be for fast exponentiation for a single $i$, wouldn't it?
But don't we need it for all $i$?
And can't we use the previous result $n^{i-1}$ to find $n^i$? (Wondering)
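That incremental scheme might look like this in Python (a sketch; `powers_mod` is a hypothetical helper name, not from the notes):

```python
def powers_mod(n, r, count):
    """Yield n^i mod r for i = 1, ..., count; each step reuses the
    previous power, so it costs a single multiplication modulo r."""
    power = n % r
    for _ in range(count):
        yield power
        power = (power * n) % r

list(powers_mod(3, 7, 6))  # → [3, 2, 6, 4, 5, 1]
```

With $\text{count} = 4\lceil\log n\rceil^2$, this performs $4\lceil\log n\rceil^2 - 1$ multiplications in total, matching the $O(\log^2 n)$ count discussed above.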
 
  • #69
I like Serena said:
$O(\log{i})$ would be for fast exponentiation for a single $i$, wouldn't it?

Yes, right... (Nod)

I like Serena said:
But don't we need it for all $i$?
And can't we use the previous result $n^{i-1}$ to find $n^i$? (Wondering)

Yes, so we make $4 \lceil \log{n}\rceil^2-1$ multiplications, right? (Thinking)

And $4 \lceil \log{n}\rceil^2-1 \leq 4 (\log{n}+1)^2-1=O(\log^2{n})$, right?
 
  • #70
evinda said:
Yes, so we make $4 \lceil \log{n}\rceil^2-1$ multiplications, right? (Thinking)

And $4 \lceil \log{n}\rceil^2-1 \leq 4 (\log{n}+1)^2-1=O(\log^2{n})$, right?

Right. (Nod)
 
  • #71
Then we check how many times the while-loop is executed. There is the following lemma.

View attachment 7628

First of all, how do we get that $\Omega\left(\frac{x}{\log{x}}\right)=\Omega((\log{n})^3 (\log{\log{n}})^2)$ ?

I got that $\frac{x}{\log{x}}=\frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})}}$... But is this in $\Omega((\log{n})^3 (\log{\log{n}})^2)$ ?

Also how do we get that $n^{\frac{x^{\frac{2}{3}}}{3}} <\Pi<n^{x^{\frac{2}{3}}}$ ? (Thinking)
 

Attachments

  • x.JPG
    x.JPG
    52.3 KB · Views: 113
  • #72
evinda said:
Then we check how many times the while-loop is executed. There is the following lemma.

First of all, how do we get that $\Omega\left(\frac{x}{\log{x}}\right)=\Omega((\log{n})^3 (\log{\log{n}})^2)$ ?

I got that $\frac{x}{\log{x}}=\frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})}}$... But is this in $\Omega((\log{n})^3 (\log{\log{n}})^2)$ ?

Don't we have $\frac{x}{\log{x}}=\frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})}}
\ge \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\lceil \log{n}\rceil)}}$ for sufficiently large $n$? (Wondering)

evinda said:
Also how do we get that $n^{\frac{x^{\frac{2}{3}}}{3}} <\Pi<n^{x^{\frac{2}{3}}}$ ? (Thinking)

Let $X=\lfloor x^{1/3}\rfloor$.
Then:
$$\mathit\Pi = \prod_{i=1}^X (n^i-1) < \prod_{i=1}^X n^X = n^{X^2}$$
and:
$$\mathit\Pi = \prod_{i=1}^X (n^i-1) > \prod_{i=1}^X n^{i-1} = n^{\sum (i-1)} = n^{\frac 12 X(X-1)} > n^{\frac 13 X^2}$$
Isn't it? (Wondering)
 
  • #73
I like Serena said:
Don't we have $\frac{x}{\log{x}}=\frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})}}
\ge \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\lceil \log{n}\rceil)}}$ for sufficiently large $n$? (Wondering)

This holds when $\lceil \log{n}\rceil \geq \frac{(\log{\log{n}})^3}{8}$, right? If so, how can we easily prove this? Do we use induction?

Then we have the following.

$$\frac{x}{\log{x}} \geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\lceil \log{n}\rceil)}}\geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\log{n}+1)}} (\star)$$

We have that $4 \log{(\log{n}+1)} \leq 4 \log{(2 \log{n})}=4(\log{2}+\log{(\log{n})})=4(1+\log{(\log{n})}) \leq 4(2 \log{(\log{n})})=8 \log{(\log{n})}$.

So we get that $(\star) \geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3 }{8 \log{(\log{n})}}=\lceil \log{n}\rceil^3 (\log{\log{n}})^2$.

Right? Or have I done something wrong? (Thinking)

I like Serena said:
Let $X=\lfloor x^{1/3}\rfloor$.
Then:
$$\mathit\Pi = \prod_{i=1}^X (n^i-1) < \prod_{i=1}^X n^X = n^{X^2}$$

I see... (Nod)

I like Serena said:
$$\mathit\Pi = \prod_{i=1}^X (n^i-1) > \prod_{i=1}^X n^{i-1} = n^{\sum (i-1)} = n^{\frac 12 X(X-1)} > n^{\frac 13 X^2}$$
Isn't it? (Wondering)

Why does it hold that $\prod_{i=1}^X (n^i-1) > \prod_{i=1}^X n^{i-1}$ ?

The last inequality holds for $X>3$, right?
 
  • #74
evinda said:
This holds when $\lceil \log{n}\rceil \geq \frac{(\log{\log{n}})^3}{8}$, right? If so, how can we easily prove this? Do we use induction?

Can't we use that $\log n > \log\log n > \log\log\log n > 8$ for sufficiently large $n$? (Wondering)

evinda said:
Then we have the following.

$$\frac{x}{\log{x}} \geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\lceil \log{n}\rceil)}}\geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{4 \log{(\log{n}+1)}} (\star)$$

We have that $4 \log{(\log{n}+1)} \leq 4 \log{(2 \log{n})}=4(\log{2}+\log{(\log{n})})=4(1+\log{(\log{n})}) \leq 4(2 \log{(\log{n})})=8 \log{(\log{n})}$.

So we get that $(\star) \geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3 }{8 \log{(\log{n})}}=\lceil \log{n}\rceil^3 (\log{\log{n}})^2$.

Right? Or have I done something wrong?

All correct. (Nod)

evinda said:
Why does it hold that $\prod_{i=1}^X (n^i-1) > \prod_{i=1}^X n^{i-1}$ ?

The last inequality holds for $X>3$, right?

Doesn't it hold for any $X$ as long as $n \ge 2$?
That is:
$$n^i -1 = n^{i-1}(n-n^{-(i-1)})>n^{i-1}$$
(Thinking)
 
  • #75
I like Serena said:
Can't we use that $\log n > \log\log n > \log\log\log n > 8$ for sufficiently large $n$? (Wondering)

So is it as follows?

$$\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})} \leq \log{(\log{n})}+3 \log{(\log{n}+1)}+3 \log{(\log{n})} \leq 4 \log{(\log{n})}+3 \log{(2 \log{n})}=7\log{(\log{n})}+3 \log{2}\leq 7 \log{(\log{n})}+3 \log{(\log{n})}=10 \log{\log{n}}$$

Thus, we have $\frac{x}{\log{x}}\geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{10 \log{\log{n}}}=\frac{8}{10} \lceil \log{n}\rceil^3 (\log{\log{n}})^2= \Omega(\lceil \log{n}\rceil^3 (\log{\log{n}})^2)$.

Right?
I like Serena said:
Doesn't it hold for any $X$ as long as $n \ge 2$?
That is:
$$n^i -1 = n^{i-1}(n-n^{-(i-1)})>n^{i-1}$$
(Thinking)

This holds only when $n-n^{-(i-1)}>1 \Rightarrow n^i-n^{i-1}-1>0$, right? (Thinking)
 
  • #76
evinda said:
So is it as follows?

$$\log{8}+3 \log{(\lceil \log{n}\rceil)}+3 \log{(\log{\log{n}})} \leq \log{(\log{n})}+3 \log{(\log{n}+1)}+3 \log{(\log{n})} \leq 4 \log{(\log{n})}+3 \log{(2 \log{n})}=7\log{(\log{n})}+3 \log{2}\leq 7 \log{(\log{n})}+3 \log{(\log{n})}=10 \log{\log{n}}$$

Thus, we have $\frac{x}{\log{x}}\geq \frac{8 \lceil \log{n}\rceil^3 (\log{\log{n}})^3}{10 \log{\log{n}}}=\frac{8}{10} \lceil \log{n}\rceil^3 (\log{\log{n}})^2= \Omega(\lceil \log{n}\rceil^3 (\log{\log{n}})^2)$.

Right?

Yep. (Nod)

evinda said:
This holds only when $n-n^{-(i-1)}>1 \Rightarrow n^i-n^{i-1}-1>0$, right?

Yes.
And $-(i-1)\le 0$, so with $n\ge 2$ we have that $n^{-(i-1)} \le 1$. (Thinking)
 
  • #77
I like Serena said:
Yes.
And $-(i-1)\le 0$, so with $n\ge 2$ we have that $n^{-(i-1)} \le 1$. (Thinking)

So we have that $n^i-1>n^{i-1}$ only if $n-n^{-(i-1)}>1$.

And taking into consideration that $n \geq 2$ and $-(i-1) \leq 0$, we get that $n^{-(i-1)} \leq 1 \Rightarrow -n^{-(i-1)}\geq -1 \Rightarrow n-n^{-(i-1)} \geq n-1>1$, right?

Do you maybe have an idea how we get that $k=O\left( \frac{\log{(\Pi)}}{\log{\log{(\Pi)}}}\right)$ ?
 
  • #78
evinda said:
So we have that $n^i-1>n^{i-1}$ only if $n-n^{-(i-1)}>1$.

And taking into consideration that $n \geq 2$ and $-(i-1) \leq 0$, we get that $n^{-(i-1)} \leq 1 \Rightarrow -n^{-(i-1)}\geq -1 \Rightarrow n-n^{-(i-1)} \geq n-1>1$, right?

Just nitpicking, but shouldn't that last inequality be $\geq n-1\ge 1$? (Wondering)

evinda said:
Do you maybe have an idea how we get that $k=O\left( \frac{\log{(\Pi)}}{\log{\log{(\Pi)}}}\right)$ ?

What is the argument following lemma 3.6.8? (Wondering)
 
  • #79
I like Serena said:
Just nitpicking, but shouldn't that last inequality be $\geq n-1\ge 1$? (Wondering)

Yes, right...

I like Serena said:
$$\mathit\Pi = \prod_{i=1}^X (n^i-1) > \prod_{i=1}^X n^{i-1} = n^{\sum (i-1)} = n^{\frac 12 X(X-1)} > n^{\frac 13 X^2}$$

Doesn't $n^{\frac 12 X(X-1)} > n^{\frac 13 X^2}$ only hold when $\frac{1}{2}X^2-\frac{1}{2}X>\frac{1}{3}X^2 \Rightarrow X>3$ ?

Also having shown that $\mathit\Pi > n^{\frac{1}{3}X^2}=n^{\frac{1}{3} \lfloor x^{\frac{1}{3}}\rfloor^2}$ we haven't shown yet that $\mathit\Pi > n^{\frac{1}{3}x^{\frac{2}{3}}}$, have we?
I like Serena said:
What is the argument following lemma 3.6.8? (Wondering)

View attachment 7630

For $x>1$, with $\pi(x)$ we denote the number of primes $p \leq x$.
 

Attachments

  • arg.JPG
    arg.JPG
    33.3 KB · Views: 109
  • #80
evinda said:
Doesn't $n^{\frac 12 X(X-1)} > n^{\frac 13 X^2}$ only hold when $\frac{1}{2}X^2-\frac{1}{2}X>\frac{1}{3}X^2 \Rightarrow X>3$ ?

Also having shown that $\mathit\Pi > n^{\frac{1}{3}X^2}=n^{\frac{1}{3} \lfloor x^{\frac{1}{3}}\rfloor^2}$ we haven't shown yet that $\mathit\Pi > n^{\frac{1}{3}x^{\frac{2}{3}}}$, do we?

Indeed. (Thinking)

evinda said:
For $x>1$, with $\pi(x)$ we denote the number of primes $p \leq x$.

That leads up to $\mathit\Pi > (2k/e)^k$.
Taking logs brings us basically to $k\log k <c\cdot N$, where $N=\log\mathit\Pi$ and $c$ is some constant.
From there we are supposed to get to $k<c\cdot \frac{N}{\log N}=c\cdot \frac{\log\mathit\Pi}{\log\log\mathit\Pi}$.
It seems that is explained right after your fragment with 'an indirect argument'.
What comes after? (Thinking)
 
  • #81
I like Serena said:
Indeed. (Thinking)

How else could we get the lower bound? (Thinking)
I like Serena said:
That leads up to $\mathit\Pi > (2k/e)^k$.
Taking logs brings us basically to $k\log k <c\cdot N$, where $N=\log\mathit\Pi$ and $c$ is some constant.
From there we are supposed to get to $k<c\cdot \frac{N}{\log N}=c\cdot \frac{\log\mathit\Pi}{\log\log\mathit\Pi}$.
It seems that is explained right after your fragment with 'an indirect argument'.
What comes after? (Thinking)
View attachment 7631
 

  • #82
evinda said:
Also having shown that $\mathit\Pi > n^{\frac{1}{3}X^2}=n^{\frac{1}{3} \lfloor x^{\frac{1}{3}}\rfloor^2}$ we haven't shown yet that $\mathit\Pi > n^{\frac{1}{3}x^{\frac{2}{3}}}$, do we?

evinda said:
How else could we get the lower bound?

We have to tweak it a little.
Something like:
$$
\mathit\Pi > n^{\frac{1}{2}X(X-1)}>n^{\frac{1}{2}(X-1)^2}
=n^{\frac{1}{2}(\lfloor x^{1/3}\rfloor-1)^2}
>n^{\frac{1}{2}(x^{1/3}-2)^2}
>n^{\frac{1}{3}x^{2/3}}
$$
for sufficiently large $x$. (Thinking)

evinda said:
Do you maybe have an idea how we get that $k=O\left( \frac{\log{(\Pi)}}{\log{\log{(\Pi)}}}\right)$ ?

And indeed, the argument is a long one, but it seems to be all there, doesn't it? (Wondering)
 
  • #83
I like Serena said:
We have to tweak it a little.
Something like:
$$
\mathit\Pi > n^{\frac{1}{2}X(X-1)}>n^{\frac{1}{2}(X-1)^2}
=n^{\frac{1}{2}(\lfloor x^{1/3}\rfloor-1)^2}
>n^{\frac{1}{2}(x^{1/3}-2)^2}
>n^{\frac{1}{3}x^{2/3}}
$$
for sufficiently large $x$. (Thinking)
The last inequality holds when $\frac{1}{2}(x^{\frac{1}{3}}-2)^2>\frac{1}{3}x^{\frac{2}{3}} \Rightarrow x^{\frac{2}{3}}-4x^{\frac{1}{3}}+4>\frac{2}{3}x^{\frac{2}{3}} \Rightarrow x^{\frac{2}{3}}-12x^{\frac{1}{3}}+12 > 0$.

So does $x$ have to be such that $x^{\frac{1}{3}}>\frac{12+ \sqrt{3 \cdot 2^5}}{2}$? (Thinking)
I like Serena said:
And indeed, the argument is a long one, but it seems to be all there, doesn't it? (Wondering)

But in our case we don't have that $k=\pi(\mathit\Pi )$, do we? (Thinking)
 
  • #84
evinda said:
The last inequality holds when $\frac{1}{2}(x^{\frac{1}{3}}-2)^2>\frac{1}{3}x^{\frac{2}{3}} \Rightarrow x^{\frac{2}{3}}-4x^{\frac{1}{3}}+4>\frac{2}{3}x^{\frac{2}{3}} \Rightarrow x^{\frac{2}{3}}-12x^{\frac{1}{3}}+12 > 0$.

So does $x$ have to be such that $x^{\frac{1}{3}}>\frac{12+ \sqrt{3 \cdot 2^5}}{2}$? (Thinking)

Yep.

evinda said:
But in our case we don't have that $k=\pi(\mathit\Pi )$, do we? (Thinking)

Indeed. That's why the argument is similar. (Thinking)
 
  • #85
I like Serena said:
Indeed. That's why the argument is similar. (Thinking)

Don't we have that $k \leq \pi(\mathit \Pi)$ and so $k=O{\left( \frac{\mathit \Pi}{\log{ \mathit \Pi}}\right)}$ ? Or am I wrong? (Thinking)
 
  • #86
I like Serena said:
Taking logs brings us basically to $k\log k <c\cdot N$, where $N=\log\mathit\Pi$ and $c$ is some constant.
From there we are supposed to get to $k<c\cdot \frac{N}{\log N}=c\cdot \frac{\log\mathit\Pi}{\log\log\mathit\Pi}$.
(Thinking)

It seems that $O\left(\frac{\log{N}}{\log{\log{N}}}\right)$ is a known upper bound for the number of distinct prime divisors of $N$, but I haven't understood how we can show this... (Thinking)
 
  • #87
evinda said:
I like Serena said:
Isn't the first step where it says in the proof in http://mathhelpboards.com/number-theory-27/test-whether-integer-prime-22804-post103344.html#post103344 that $\mathit\Pi > 2^k\cdot k! > (2k/e)^k$ by lemma 3.6.8? (Wondering)

Yes. Do we use this somehow to conclude that $k=O{\left( \frac{\log{\mathit\Pi}}{\log{\log{\mathit\Pi}}}\right)}$ ? (Thinking)
I like Serena said:
Yes. Can't the next step be similar to what we have in post http://mathhelpboards.com/number-theory-27/test-whether-integer-prime-22804-post103372.html#post103372?
That is, we're taking logarithms,
$$\ln\mathit\Pi > k\cdot (\ln k + \ln 2 - 1)\tag{3.6.19a}$$
(Thinking)

evinda said:
So then we suppose that $k \geq \frac{\ln{\mathit\Pi}}{\ln{\ln{\mathit\Pi}}}$, in order to get a contradiction.

Using the relation $\ln{\mathit\Pi}>k(\ln{k}+\ln{2}-1)$ we get that

$$\ln{\mathit\Pi}> \frac{\ln{\mathit\Pi}}{\ln{\ln{\mathit\Pi}}}\left( \ln{\ln{\mathit\Pi}}-\ln{\ln{\ln{\Pi}}}+\ln{2}-1\right) \\ \Rightarrow \ln{\mathit\Pi} \ln{\ln{\mathit\Pi}}> \ln{\mathit\Pi} \ln{\ln{\mathit\Pi}}-\ln{\mathit\Pi} \ln{\ln{\ln{\mathit\Pi}}}+ \ln{2} \ln{\mathit\Pi}-\ln{\mathit\Pi} \\ \Rightarrow \ln{\mathit\Pi} (\ln{(\ln{\ln{\mathit\Pi}})}+1-\ln{2})>0$$

Do we now look at the function $f(x)= \ln{x}(\ln{\ln{\ln{x}}}+1-\ln{2})$ ?

If so, according to Wolfram, it has one solution and its derivative has no solution. What can we get from this? (Thinking)

Isn't that expression true for sufficiently large $x$? And the function positive as well?
Then it won't lead to a contradiction. (Worried)In the argument given in http://mathhelpboards.com/number-theory-27/test-whether-integer-prime-22804-post103374.html#post103374, we assumed that $k \ge \frac{2N}{\ln N}$. Note the extra factor $2$.

So let's first define $M=\ln\mathit\Pi$, and let's assume that $k\ge \frac{2M}{\ln M}$ for a contradiction.
Then we get:
$$M>k\cdot(\ln k + \ln 2 - 1) \tag{3.6.19a}$$
and it follows, when $\ln M > 0$, that:
$$M>\frac{2M}{\ln M}\cdot\left(\ln\left(\frac{2M}{\ln M}\right) + \ln 2 - 1\right) \\
\Rightarrow\quad\frac 12\ln M > \ln\left(\frac{2M}{\ln M}\right)+ \ln 2 - 1 \\
\Rightarrow\quad\frac 12\ln M > \ln M - \ln\ln M+ 2\ln 2 - 1 \\
\Rightarrow\quad(1-\frac 12)\ln M < \ln\ln M-2\ln 2 + 1$$
so by obvious transformations,
$$(1-\frac 12)\ln M < \ln\ln M-2\ln 2 + 1\tag{3.6.20a}$$

Now we can look at the function $f: x \mapsto (1-\frac 12)\ln x - \ln\ln x + 2\ln 2 - 1$, which is defined for $x > 1$, can't we? (Wondering)
 
  • #88
I like Serena said:
In the argument given in http://mathhelpboards.com/number-theory-27/test-whether-integer-prime-22804-post103374.html#post103374, we assumed that $k \ge \frac{2N}{\ln N}$. Note the extra factor $2$.

So let's first define $M=\ln\mathit\Pi$, and let's assume that $k\ge \frac{2M}{\ln M}$ for a contradiction.
Then we get:
$$M>k\cdot(\ln k + \ln 2 - 1) \tag{3.6.19a}$$
and it follows, when $\ln M > 0$, that:
$$M>\frac{2M}{\ln M}\cdot\left(\ln\left(\frac{2M}{\ln M}\right) + \ln 2 - 1\right) \\
\Rightarrow\quad\frac 12\ln M > \ln\left(\frac{2M}{\ln M}\right)+ \ln 2 - 1 \\
\Rightarrow\quad\frac 12\ln M > \ln M - \ln\ln M+ 2\ln 2 - 1 \\
\Rightarrow\quad(1-\frac 12)\ln M < \ln\ln M-2\ln 2 + 1$$
so by obvious transformations,
$$(1-\frac 12)\ln M < \ln\ln M-2\ln 2 + 1\tag{3.6.20a}$$

Now we can look at the function $f: x \mapsto (1-\frac 12)\ln x - \ln\ln x + 2\ln 2 - 1$, which is defined for $x > 1$, can't we? (Wondering)

We have that $f(x)>0$ for $x>1$.

Then $f'(x)=\frac{\ln{x}-2}{2 x \ln{x}}$.

$f'(x)=0 \Rightarrow x=e^2$.

For $x> e^2$ we have that $f'(x)>0$ and so $f$ is increasing.

For $1<x<e^2$ we have that $f'(x)<0$ and so $f$ is decreasing.

So we get that for any $x>1$, $f(x)>f(e^2)=\ln{2} \Rightarrow \frac{1}{2} \ln{x}-\ln\ln x+ 2\ln 2-1>\ln{2}>0$, contradicting the relation $(3.6.20a)$.

Is everything right? (Thinking)
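As a numeric spot check of this minimum argument (only a sanity check at a few arbitrary sample points, not a proof):

```python
import math

def f(x):
    """f(x) = (1/2) ln x - ln ln x + 2 ln 2 - 1, defined for x > 1."""
    return 0.5 * math.log(x) - math.log(math.log(x)) + 2 * math.log(2) - 1

# f'(x) = (ln x - 2) / (2 x ln x) vanishes only at x = e^2,
# and f(e^2) = ln 2 > 0, so f should stay positive on (1, oo):
x_min = math.e ** 2
assert abs(f(x_min) - math.log(2)) < 1e-9
assert all(f(x) >= f(x_min) - 1e-9 for x in (1.5, 2.0, 10.0, 1e6))
```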
 
  • #89
evinda said:
We have that $f(x)>0$ for $x>1$.

We don't have this yet. It's what we want to prove don't we? (Wondering)

evinda said:
Then $f'(x)=\frac{\ln{x}-2}{2 x \ln{x}}$.

$f'(x)=0 \Rightarrow x=e^2$.

For $x> e^2$ we have that $f'(x)>0$ and so $f$ is increasing.

For $1<x<e^2$ we have that $f'(x)<0$ and so $f$ is decreasing.

So we get that for any $x>1$, $f(x)>f(e^2)=\ln{2} \Rightarrow \frac{1}{2} \ln{x}-\ln\ln x+ 2\ln 2-1>\ln{2}>0$, contradicting the relation $(3.6.20a)$.

Yep. (Nod)

(Although shouldn't it be $f(x)\ge f(e^2)$?)
 
  • #90
I like Serena said:
We don't have this yet. It's what we want to prove don't we? (Wondering)

Yes, right... (Nod)

I like Serena said:
Yep. (Nod)

(Although shouldn't it be $f(x)\ge f(e^2)$?)

Yes, I see... (Nod)

View attachment 7642

How do we see that $q$ must divide $\operatorname{ord}_r(n)$ ? (Thinking)
 

  • #91
evinda said:
How do we see that $\operatorname{ord}_r(n)$ must be divisible by $q$ ? (Thinking)

From what came before we have that:
$\operatorname{ord}_r n \mid r-1 = qm \tag 1$
$\operatorname{ord}_r n > x^{1/3} \ge m \tag 2$

Suppose that the prime $q\not\mid \operatorname{ord}_r n$.
Then by (1) it follows that $\operatorname{ord}_r n \mid m$, doesn't it?
Thus $\operatorname{ord}_r n \le m$, which is a contradiction with (2) isn't it? (Wondering)
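For concreteness, $\operatorname{ord}_r n$ can be computed by brute force (a sketch under the assumption $\gcd(n, r) = 1$, so the loop terminates; `ord_mod` is my own name):

```python
def ord_mod(n, r):
    """Multiplicative order of n modulo r: the least t >= 1 with
    n^t = 1 (mod r).  Assumes gcd(n, r) = 1 so that it terminates."""
    t, power = 1, n % r
    while power != 1:
        power = (power * n) % r
        t += 1
    return t

ord_mod(3, 7)  # → 6, and indeed 6 | 7 - 1
```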
 
  • #92
I also have another question. It says the following about the algorithm of post #17.

View attachment 7781

Why are the numbers bounded by $n^2$ and not by $n$ ? (Thinking)
 

  • #93
evinda said:
I also have another question. It says the following about the algorithm of post #17.

Why are the numbers bounded by $n^2$ and not by $n$ ? (Thinking)
View attachment 7583

In line 6 we're calculating $n^i\bmod r$ where $r$ can be as high as $n-1$.
With the fast exponentiation algorithm that means we need to evaluate numbers up to $(r-1)^2 = (n-2)^2$ don't we? (Wondering)
 
  • #94
I like Serena said:
In line 6 we're calculating $n^i\bmod r$ where $r$ can be as high as $n-1$.
With the fast exponentiation algorithm that means we need to evaluate numbers up to $(r-1)^2 = (n-2)^2$ don't we? (Wondering)

In line 6, don't we compute at the beginning $n \mod{r}$, then at the second iteration $(n \mod r) \cdot (n \mod{r})$, and so on? (Thinking)
 
  • #95
evinda said:
In line 6, don't we compute at the beginning $n \mod{r}$, then at the second iteration $(n \mod r) \cdot (n \mod{r})$, and so on? (Thinking)

Yes.
So the result of the first iteration is $n^2 \bmod r$.
Then we calculate $(n^2 \bmod r)\cdot (n^2 \bmod r)$ don't we? (Wondering)
And in one of the iterations we could have $r-1$.

From wiki:
The following is an example in pseudocode based on Applied Cryptography by Bruce Schneier.[1] The inputs base, exponent, and modulus correspond to b, e, and m in the equations given above.

Code:
function modular_pow(base, exponent, modulus)
    if modulus = 1 then return 0
    Assert :: (modulus - 1) * (modulus - 1) does not overflow base
    result := 1
    base := base mod modulus
    while exponent > 0
        if (exponent mod 2 == 1):
           result := (result * base) mod modulus
        exponent := exponent >> 1
        base := (base * base) mod modulus
    return result

As you can see, we even have an assert in there that says we need to have support up to $(r-1)^2$. (Worried)
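That pseudocode translates almost line for line into Python (a sketch; only the syntax changes):

```python
def modular_pow(base, exponent, modulus):
    """Right-to-left binary exponentiation: base^exponent mod modulus.
    Every intermediate product is at most (modulus - 1)^2, which is
    exactly why the algorithm needs numbers up to (r - 1)^2 < n^2."""
    if modulus == 1:
        return 0
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent % 2 == 1:
            result = (result * base) % modulus
        exponent >>= 1
        base = (base * base) % modulus
    return result

modular_pow(19, 10, 17)  # → 4, since 19^10 ≡ 2^10 = 1024 ≡ 4 (mod 17)
```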
 
  • #96
I like Serena said:
Yes.
So the result of the first iteration is $n^2 \bmod r$.
Then we calculate $(n^2 \bmod r)\cdot (n^2 \bmod r)$ don't we? (Wondering)
And in one of the iterations we could have $r-1$.

But then we could also need to calculate $n^5 \mod{r}$, which is equal to $n^4 \cdot n \mod{r}$. (Worried)

So why can we pick $n^2$ as an upper bound? (Thinking)
 
  • #97
evinda said:
But then we could also need to calculate $n^5 \mod{r}$, which is equal to $n^4 \cdot n \mod{r}$. (Worried)

So why can we pick $n^2$ as an upper bound? (Thinking)

In the modular_pow algorithm we calculate $\text{base} := (\text{base} \cdot \text{base}) \bmod \text{modulus}$, where $0 \le \text{base} \le \text{modulus} -1$ each time.
And we calculate $\text{result} := (\text{result} \cdot \text{base}) \bmod \text{modulus}$, where $0 \le \text{result}, \text{base} \le \text{modulus} -1$.
So at every step the biggest product we might calculate is $(\text{modulus} -1)(\text{modulus} -1)$, after which we reduce the result to be below $\text{modulus}$.

Suppose we have $r=17, n=19, i=10$.
Then the algorithm is:
\begin{array}{cccl}
base & exponent & result & code\\
19 & 10 & 1 & base := base \bmod 17\\
2 & 10 & 1 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
4 & 5 & 1 & result := (result * base) \bmod 17 ; exponent := exponent - 1 \\
4 & 4 & 4 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
16 & 2 & 4 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
1 & 1 & 4 & result := (result * base) \bmod 17 ; exponent := exponent - 1 \\
1 & 0 & 4
\end{array}

See how both result and base are always below $ r=17 < n=19$ (after the first step)?
And how the base can be 16, which gets multiplied with itself?
So $n^2$ is an upper bound. (Thinking)
 