Test whether the integer is a prime

  • Context: MHB 
  • Thread starter: evinda
  • Tags: Integer Prime Test

Discussion Overview

The discussion revolves around finding an efficient algorithm to determine whether an odd integer is prime, specifically through the use of polynomial congruences and computational cost analysis. Participants explore the implications of choosing a polynomial modulus and the efficiency of various algorithms, including recursive methods and fast exponentiation.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant proposes testing the congruence $(X+a)^n \equiv X^n+a$ modulo a polynomial $X^r-1$ to check for primality.
  • Another participant questions the meaning of $r$ being $O((\log{n})^c)$ and its implications for computational cost.
  • There is a suggestion to use a recursive algorithm based on squaring to evaluate polynomials, with a focus on determining the computational cost $f(n,r)$ (see the sketch after this list).
  • Participants discuss the potential use of the Fast Fourier Transform (FFT) for polynomial multiplication, noting its time complexity of $O(d \log{d})$.
  • One participant proposes a recursive relation for $f(n,n)$ based on whether $n$ is even or odd, while others refine this relation.
  • There is a debate over whether $r$ should be treated as a constant or if it can vary, with some suggesting it must be chosen cleverly to maintain polynomial cost.
  • Participants explore the implications of polynomial degrees and the operations involved in multiplying polynomials, particularly in the context of the FFT.
  • Concerns are raised about the degree of the resulting polynomial and the need for initial conditions in solving recurrence relations.
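
To make the congruence test and the repeated-squaring idea above concrete, here is a minimal sketch, assuming coefficients are reduced modulo $n$ and exponents modulo $r$ (via $X^r \equiv 1$); the names poly_mul, poly_pow, and congruence_holds are illustrative, and the multiplication is the naive $O(r^2)$ one rather than the FFT-based approach discussed in the thread.

Code:
def poly_mul(p, q, n, r):
    """Multiply two polynomials, given as coefficient lists of length r,
    modulo (X^r - 1, n): exponents wrap around mod r, coefficients mod n."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi == 0:
            continue
        for j, qj in enumerate(q):
            out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def poly_pow(p, e, n, r):
    """Raise p to the power e modulo (X^r - 1, n) by repeated squaring."""
    result = [0] * r
    result[0] = 1                       # the constant polynomial 1
    while e > 0:
        if e & 1:
            result = poly_mul(result, p, n, r)
        p = poly_mul(p, p, n, r)
        e >>= 1
    return result

def congruence_holds(n, r, a):
    """Check whether (X + a)^n = X^n + a modulo (X^r - 1, n)."""
    lhs = [0] * r
    lhs[0] = a % n
    lhs[1 % r] = (lhs[1 % r] + 1) % n   # the polynomial X + a
    lhs = poly_pow(lhs, n, n, r)
    rhs = [0] * r
    rhs[0] = a % n
    rhs[n % r] = (rhs[n % r] + 1) % n   # X^n + a, since X^n = X^(n mod r)
    return lhs == rhs

For a prime $n$ the congruence holds for every $a$ (e.g. congruence_holds(7, 5, 2) returns True); the thread's cost analysis is about replacing the naive $O(r^2)$ multiplication with FFT-based multiplication at $O(r \log r)$ per product.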

Areas of Agreement / Disagreement

Participants express differing views on the interpretation of $r$ and its role in the computational cost, leading to unresolved questions about the implications of choosing $r$ as $O((\log{n})^c)$. There is no consensus on the best approach to analyze the computational cost or the exact nature of the polynomial operations involved.

Contextual Notes

Limitations include the dependence on the choice of $r$, the assumptions about polynomial degrees, and the unresolved nature of the computational cost analysis. The discussion also highlights the need for initial conditions in recurrence relations, which remains unaddressed.

  • #91
evinda said:
How do we see that $q$ must divide $ord_r{(n)}$ ? (Thinking)

From what came before we have that:
$\operatorname{ord}_r n \mid r-1 = qm \tag 1$
$\operatorname{ord}_r n > x^{1/3} \ge m \tag 2$

Suppose that the prime $q\not\mid \operatorname{ord}_r n$.
Then by (1) it follows that $\operatorname{ord}_r n \mid m$, doesn't it?
Thus $\operatorname{ord}_r n \le m$, which contradicts (2), doesn't it? (Wondering)
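
Spelled out as a single chain, using only that $q$ is prime:
$$q \nmid \operatorname{ord}_r n \;\Longrightarrow\; \gcd(q, \operatorname{ord}_r n) = 1 \;\overset{(1)}{\Longrightarrow}\; \operatorname{ord}_r n \mid m \;\Longrightarrow\; \operatorname{ord}_r n \le m,$$
which contradicts (2), so $q \mid \operatorname{ord}_r n$ after all.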
 
  • #92
I also have another question. It says the following about the algorithm of post #17.
View attachment 7781
Why are the numbers bounded by $n^2$ and not by $n$? (Thinking)
 

Attachments

  • ao.JPG (15.9 KB)
  • #93
evinda said:
I also have another question. It says the following about the algorithm of post #17.

Why are the numbers bounded by $n^2$ and not by $n$? (Thinking)
View attachment 7583

In line 6 we're calculating $n^i\bmod r$ where $r$ can be as high as $n-1$.
With the fast exponentiation algorithm that means we need to evaluate numbers up to $(r-1)^2 = (n-2)^2$ don't we? (Wondering)
 
  • #94
I like Serena said:
In line 6 we're calculating $n^i\bmod r$ where $r$ can be as high as $n-1$.
With the fast exponentiation algorithm that means we need to evaluate numbers up to $(r-1)^2 = (n-2)^2$ don't we? (Wondering)

In line 6, don't we compute at the beginning $n \mod{r}$, then at the second iteration $(n \mod r) \cdot (n \mod{r})$, and so on? (Thinking)
 
  • #95
evinda said:
In line 6, don't we compute at the beginning $n \mod{r}$, then at the second iteration $(n \mod r) \cdot (n \mod{r})$, and so on? (Thinking)

Yes.
So the result of the first iteration is $n^2 \bmod r$.
Then we calculate $(n^2 \bmod r)\cdot (n^2 \bmod r)$ don't we? (Wondering)
And in one of the iterations the value could be as large as $r-1$.

From Wikipedia:
The following is an example in pseudocode based on Applied Cryptography by Bruce Schneier. The inputs base, exponent, and modulus correspond to $b$, $e$, and $m$ in $b^e \bmod m$.

Code:
function modular_pow(base, exponent, modulus)
    if modulus = 1 then return 0
    Assert :: (modulus - 1) * (modulus - 1) does not overflow base
    result := 1
    base := base mod modulus
    while exponent > 0
        if (exponent mod 2 == 1):
           result := (result * base) mod modulus
        exponent := exponent >> 1
        base := (base * base) mod modulus
    return result

As you can see, we even have an assert in there that says we need to have support up to $(r-1)^2$. (Worried)
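
The same routine transcribed into Python (a direct transcription of the pseudocode above; Python's built-in pow(base, exponent, modulus) computes the same thing):

Code:
def modular_pow(base, exponent, modulus):
    """Right-to-left binary exponentiation: returns base**exponent % modulus."""
    if modulus == 1:
        return 0
    result = 1
    base = base % modulus
    while exponent > 0:
        if exponent % 2 == 1:
            result = (result * base) % modulus   # product at most (modulus-1)^2
        exponent >>= 1
        base = (base * base) % modulus           # product at most (modulus-1)^2
    return result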
 
  • #96
I like Serena said:
Yes.
So the result of the first iteration is $n^2 \bmod r$.
Then we calculate $(n^2 \bmod r)\cdot (n^2 \bmod r)$ don't we? (Wondering)
And in one of the iterations the value could be as large as $r-1$.

But then we could also need to calculate $n^5 \mod{r}$, which is equal to $n^4 \cdot n \mod{r}$. (Worried)

So why can we pick $n^2$ as an upper bound? (Thinking)
 
  • #97
evinda said:
But then we could also need to calculate $n^5 \mod{r}$, which is equal to $n^4 \cdot n \mod{r}$. (Worried)

So why can we pick $n^2$ as an upper bound? (Thinking)

In the modular_pow algorithm we calculate base := (base * base) mod modulus, where $0 \le \text{base} \le \text{modulus} - 1$ each time.
And we calculate result := (result * base) mod modulus, where $0 \le \text{result}, \text{base} \le \text{modulus} - 1$.
So at every step the biggest product we might calculate is $(\text{modulus} -1)(\text{modulus} -1)$, after which we reduce the result to be below $\text{modulus}$.

Suppose we have $r=17, n=19, i=10$.
Then the algorithm runs as follows (with the squaring step and the multiplication step written on separate lines):
\begin{array}{cccl}
base & exponent & result & code\\
19 & 10 & 1 & base := base \bmod 17\\
2 & 10 & 1 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
4 & 5 & 1 & result := (result * base) \bmod 17 ; exponent := exponent - 1 \\
4 & 4 & 4 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
16 & 2 & 4 & base := (base * base) \bmod 17 ; exponent := exponent >> 1 \\
1 & 1 & 4 & result := (result * base) \bmod 17 ; exponent := exponent - 1 \\
1 & 0 & 4
\end{array}

See how both result and base always stay below $r=17 < n=19$ (after the first step)?
And how the base can reach $16$, which then gets multiplied by itself?
So $n^2$ is an upper bound for the numbers involved. (Thinking)
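
As a quick sanity check of the trace: $19^{10} \bmod 17 = 4$, and an instrumented variant of the exponentiation (the name modular_pow_max_product is just for illustration) shows that the largest product ever formed is $16 \cdot 16 = 256 = (r-1)^2 < n^2 = 361$:

Code:
# Sanity check of the worked example: 19^10 mod 17 = 4.
assert pow(19, 10, 17) == 4

def modular_pow_max_product(base, exponent, modulus):
    """The square-and-multiply loop from above, additionally tracking the largest product formed."""
    result, largest = 1, 0
    base = base % modulus
    while exponent > 0:
        if exponent % 2 == 1:
            largest = max(largest, result * base)
            result = (result * base) % modulus
        exponent >>= 1
        largest = max(largest, base * base)
        base = (base * base) % modulus
    return result, largest

print(modular_pow_max_product(19, 10, 17))   # (4, 256)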
 
