Existence of Roots for Quadratic Forms Modulo Prime Numbers

In summary, the conversation discusses polynomials of the form P(x,y,z) = ax^2 + by^2 + cz^2 + 2(exy + fxz + gyz) \mod p, where a,b,c \in \mathbb{Z}_p \setminus \{0\} and e,f,g \in \mathbb{Z}_p. The main question is whether every such polynomial has a non-trivial root. Several approaches are suggested, including rewriting the form with a symmetric matrix and diagonalizing it, and a quadratic-formula argument; the thread concludes with a proof, using Legendre symbols, that every such form has a non-trivial root modulo every prime.
  • #1
Mathmajor2010
I've been doing some work and I keep running into polynomials of the following form:

[tex] P(x,y,z) = ax^2 + by^2 + cz^2 + 2(exy + fxz + gyz) \mod p [/tex]

where [itex] a,b,c \in \mathbb{Z}_p/ \{0\} [/itex] and [itex] e, f, g \in \mathbb{Z}_p [/itex]. It would be great if I knew anything about the existence of roots of [itex] P [/itex], save for the trivial root [itex] x=y=z=0 [/itex]. I've looked through some of my algebra books and number theory books, but didn't find anything that would help me answer this question.

Could anyone point me in the right direction? If anyone knows of a textbook that discusses this or theorem name, that'd be great. Thanks!
 
  • #2
Mathmajor2010 said:
I've been doing some work and I keep running into polynomials of the following form:

[tex] P(x,y,z) = ax^2 + by^2 + cz^2 + 2(exy + fxz + gyz) \mod p [/tex]

where [itex] a,b,c \in \mathbb{Z}_p/ \{0\} [/itex] and [itex] e, f, g \in \mathbb{Z}_p [/itex]. It would be great if I knew anything about the existence of roots of [itex] P [/itex], save for the trivial root [itex] x=y=z=0 [/itex]. I've looked through some of my algebra books and number theory books, but didn't find anything that would help me answer this question.

Could anyone point me in the right direction? If anyone knows of a textbook that discusses this or theorem name, that'd be great. Thanks!

Forgive me, but I'm a little confused. It looks like you are adding stuff in [itex] \mathbb{Z}_p [/itex] to stuff in [itex] \mathbb{Z}_p/ \{0\} [/itex]. Obviously the two are isomorphic but I don't think you can add the two together.
 
  • #3
Off the top of my head...
You can rewrite the quadratic in vector-matrix form:
[itex] P(\vec{x}) = \left(\begin{array}{ccc}x & y & z \end{array}\right)\left(\begin{array}{ccc}a & e & f\\ e & b & g \\ f & g & c\end{array}\right)\left(\begin{array}{c}x\\ y\\ z\end{array}\right)[/itex]

If p is prime then your ring is a field and you should be able to diagonalize the matrix and enumerate solutions in terms of Pythagorean triples. Guessing here but that's how I'd start.
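
For illustration, here is a minimal sketch (Python, with example values for p and the coefficients that I picked arbitrarily) verifying that the matrix expression above agrees with P(x, y, z) mod p for every (x, y, z):

[code]
# Check that P(x,y,z) = a*x^2 + b*y^2 + c*z^2 + 2*(e*x*y + f*x*z + g*y*z)
# equals (x y z) M (x y z)^T mod p, for the symmetric matrix M above.
p = 7
a, b, c, e, f, g = 1, 2, 3, 4, 5, 6   # example coefficients, a, b, c nonzero mod p

def P(x, y, z):
    return (a*x*x + b*y*y + c*z*z + 2*(e*x*y + f*x*z + g*y*z)) % p

M = [[a, e, f],
     [e, b, g],
     [f, g, c]]

def quad_form(x, y, z):
    v = (x, y, z)
    return sum(v[i] * M[i][j] * v[j] for i in range(3) for j in range(3)) % p

assert all(P(x, y, z) == quad_form(x, y, z)
           for x in range(p) for y in range(p) for z in range(p))
[/code]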
 
  • #4
jambaugh said:
Off the top of my head...
You can rewrite the quadratic in vector-matrix form:
[itex] P(\vec{x}) = \left(\begin{array}{ccc}x & y & z \end{array}\right)\left(\begin{array}{ccc}a & e & f\\ e & b & g \\ f & g & c\end{array}\right)\left(\begin{array}{c}x\\ y\\ z\end{array}\right)[/itex]

If p is prime then your ring is a field and you should be able to diagonalize the matrix and enumerate solutions in terms of Pythagorean triples. Guessing here but that's how I'd start.

But how do you add stuff from totally different spaces? That is, how do you add an integer mod p to a set?
 
  • #5
Robert1986 said:
Forgive me, but I'm a little confused. It looks like you are adding stuff in [itex] \mathbb{Z}_p [/itex] to stuff in [itex] \mathbb{Z}_p/ \{0\} [/itex]. Obviously the two are isomorphic but I don't think you can add the two together.

Robert1986 said:
But how do you add stuff from totally different spaces? That is, how do you add an integer mod p to a set?

You don't. It should be [itex] \mathbb{Z}_p \backslash \{0\} [/itex].
That is the set minus the element 0.
I guess the OP found it too much hassle to find out how to do it in LaTeX. :wink:
 
  • #6
Assuming your equation is non-degenerate, there are either no nontrivial solutions, or the set of nontrivial solutions is parameterized by the set [itex]F^* \times \mathbf{P}^1(F)[/itex].

(there is a geometric argument to find the other solutions -- if we take (x:y:z) as projective coordinates on the projective plane, then your equation defines a conic. Every line through your known root will intersect the conic in another point)



To find the first solution, solve the equation as a quadratic in x. The solution will involve the square root of a function of y and z. It's probably worth factoring out the z and pulling it outside of the square root, so you have a function in (y/z) (and treat the z=0 case specially). The existence of a solution now depends on whether that function can produce a square modulo p.

Even if that function in (y/z) is non-constant, I believe it is possible that it never takes on a square value. I don't know of any easy way to determine whether that happens beyond actually computing its values.
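
A rough sketch of this approach (Python; the coefficient values and function names are just examples of mine): for fixed (y, z), the form is a quadratic in x with leading coefficient a, and a root exists iff the discriminant is a square mod p, which Euler's criterion detects.

[code]
# For fixed (y, z) != (0, 0), P = a*x^2 + B*x + C with
#   B = 2*(e*y + f*z),  C = b*y^2 + c*z^2 + 2*g*y*z.
# For an odd prime p, a root x exists iff the discriminant B^2 - 4*a*C is a square mod p.
p = 7
a, b, c, e, f, g = 1, 2, 3, 4, 5, 6   # example coefficients, a, b, c nonzero mod p

def is_square_mod_p(d):
    d %= p
    return d == 0 or pow(d, (p - 1) // 2, p) == 1   # Euler's criterion

def x_root_exists(y, z):
    B = 2 * (e*y + f*z)
    C = b*y*y + c*z*z + 2*g*y*z
    return is_square_mod_p(B*B - 4*a*C)

good = [(y, z) for y in range(p) for z in range(p)
        if (y, z) != (0, 0) and x_root_exists(y, z)]
print(len(good), "choices of (y, z) admit a solution for x")
[/code]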
 
  • #7
Thanks for all of the replies so far, and sorry for the confusion on [itex] \mathbb{Z}_p \backslash \{0\} [/itex]. I didn't know how to make a backslash in TeX. The reason I'm asking is that, in [itex] \mathbb{Z}_3 [/itex] and in [itex] \mathbb{Z}_5 [/itex], I think all polynomials of the above form have a non-trivial root. I'm curious whether this is true in general, but so far I have had no way of attacking the problem besides using a computer to enumerate all polynomials of that form and check each for a non-trivial root.

I hadn't considered diagonalizing the matrix, but how do I know that I'm allowed to diagonalize? I know, at least over [itex] \mathbb{C} [/itex], a matrix is diagonalizable iff it has n linearly independent eigenvectors. I'm also not sure exactly how one could use this form to check, in general, whether all of these polynomials have a solution.

Hurkyl, I'm not familiar with the notation [itex] F^* \times \mathbf{P}^1(F) [/itex]. Is P^1 the projective plane? I'm unfortunately not familiar with any algebraic geometry, so I'll have to read up on this. I hadn't considered solving the problem with a quadratic formula approach, as I wasn't sure whether taking a square root is well defined. For example, in [itex] \mathbb{Z}_3 [/itex], 2 has no square root, as [itex] 1 \cdot 1 \equiv 1 \mod 3 [/itex] and [itex] 2 \cdot 2 \equiv 4 \equiv 1 \mod 3 [/itex]. Regardless, I believe every polynomial of the form above has a non-trivial root; I just have no mathematical reasoning or way of attacking the problem in general so far, save for root searching on the computer.

I'll try to run my root finder over all such polynomials over [itex] \mathbb{Z}_7 [/itex], but that's a rather large number of cases to check. Thanks for all of the suggestions so far!
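
For reference, a brute-force search of the sort described might look roughly like this (a Python sketch; the function names are mine, and the triple loop over coefficient tuples is already slow for p = 7):

[code]
from itertools import product

def has_nontrivial_root(p, a, b, c, e, f, g):
    """Search all (x, y, z) != (0, 0, 0) over Z_p for a zero of the form."""
    for x, y, z in product(range(p), repeat=3):
        if (x, y, z) == (0, 0, 0):
            continue
        if (a*x*x + b*y*y + c*z*z + 2*(e*x*y + f*x*z + g*y*z)) % p == 0:
            return True
    return False

def check_all_forms(p):
    """Return a coefficient tuple with no nontrivial root, or None if every form has one."""
    for a, b, c in product(range(1, p), repeat=3):      # a, b, c nonzero
        for e, f, g in product(range(p), repeat=3):
            if not has_nontrivial_root(p, a, b, c, e, f, g):
                return (a, b, c, e, f, g)
    return None

print(check_all_forms(3))   # prints None: every form mod 3 has a nontrivial root
[/code]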
 
  • #8
Quadratic forms in three variables DO in fact have nontrivial solutions mod every prime p. The proof requires a little knowledge of number theory, but not too much. If you know what a Legendre symbol (http://en.wikipedia.org/wiki/Legendre_symbol) is, you'll have no trouble understanding it.

First, let's dispense with the trivial case. Mod 2, the requirement that a, b, c≠0 implies that the polynomial is x² + y² + z², which has nontrivial solutions (1, 1, 0), (0, 1, 1), and (1, 0, 1). Henceforth, all computations will be performed in the field [itex]\mathbb{Z}/p\mathbb{Z}[/itex] for some odd prime p.

Next, as jambaugh said, you can rewrite the equation P(x, y, z) = 0 as [itex]v^{T}Mv = 0[/itex], where v is the column vector (x, y, z) and M is a symmetric matrix. If the determinant of M is zero, then we can take v to be any nonzero vector in the kernel of M as a solution. Henceforth we may assume that det M is nonzero. Let us define an equivalence relation on symmetric matrices. Call two matrices M and N equivalent if there exists an invertible matrix A such that:

[tex]N = A^{T}MA[/tex]

Note that that is A transpose at the beginning, not A inverse. Now, it is easy to see that equivalence of matrices is in fact an equivalence relation, that any matrix equivalent to a symmetric matrix is symmetric, that any matrix equivalent to an invertible matrix is invertible, and that the form [itex]v^{T}Mv = 0[/itex] has a nontrivial solution iff [itex]v^{T}Nv = 0[/itex] does (proof of the last statement: if v is a solution to [itex]v^{T}Nv = 0[/itex], then Av is a solution to [itex]v^{T}Mv = 0[/itex], and if v is a solution to [itex]v^{T}Mv = 0[/itex], then [itex]A^{-1}v[/itex] is a solution to [itex]v^{T}Nv = 0[/itex]). Hence when deciding whether a quadratic form has a nontrivial solution, we may replace it with any equivalent one. In particular, the following theorem says that we may replace it with a diagonal matrix.

-------------------------------------------

Theorem 1: Over a field of characteristic not equal to 2, every symmetric matrix is equivalent to a diagonal matrix.

Proof: We proceed by induction on the dimension of the matrix. If the matrix is 1×1, then it is already diagonal, and we are done. Now, suppose that the theorem is true for n×n matrices, and let M be an (n+1)×(n+1) symmetric matrix. If M is the zero matrix, then it is already diagonal, so we are done. So suppose that M is not the zero matrix. We consider three cases:

Case 1: [itex]M_{11} \neq 0[/itex]

First we will show that M is equivalent to a matrix whose entries in the first row and first column, except for the upper-left entry itself, are all zero. Let A be the matrix defined by:

[tex]A_{ij}=\begin{cases}1 & \mathrm{if}\ i=j \\ -M_{1j}/M_{11} & \mathrm{if}\ i = 1 \neq j \\ 0 & \mathrm{otherwise} \end{cases}[/tex]

It is elementary to verify that the matrix [itex]A^{T}MA[/itex] has zero entries in the first row and column, except in the upper left corner. In other words it is a block matrix of the form:

[tex]\left( \begin{array}{cc} m & 0 \\ 0 & M' \end{array} \right)[/tex]

Where [itex]m=M_{11}[/itex] and M' is the n×n submatrix obtained by deleting the first row and column. Now, M' is symmetric, and so by the induction hypothesis, is equivalent to a diagonal matrix. Let B be an invertible n×n matrix such that [itex]B^{T}M'B[/itex] is diagonal, and then let B' be the (n+1)×(n+1) block matrix:

[tex]\left( \begin{array}{cc} 1 & 0 \\ 0 & B \end{array} \right)[/tex]

Then [itex](AB')^{T}M(AB')[/itex] will be a diagonal matrix, as required.

Case 2: M has a nonzero diagonal entry, but it is not [itex]M_{11}[/itex]

Suppose that [itex]M_{jj}[/itex] is a nonzero diagonal entry, and let P be the permutation matrix that exchanges the first and jth rows. It is easy to verify that [itex]P^{T}=P[/itex] and that [itex](P^{T}MP)_{11} = M_{jj} \neq 0[/itex], so M is equivalent to a matrix in case 1, which is equivalent to a diagonal matrix.

Case 3: All the diagonal entries of M are zero.

Per our assumption that M is not the zero matrix, there exists some entry [itex]M_{ij}[/itex] such that [itex]M_{ij}=M_{ji} \neq 0[/itex]. Let A be the matrix defined as follows: Let all the diagonal entries of A be 1, [itex]A_{ij}=1[/itex], [itex]A_{ji}=-1[/itex], and all other entries of A be zero. A is invertible since we are not in characteristic 2, so 1≠-1. Let us now compute [itex](A^{T}MA)_{ii}[/itex]:

[tex]\begin{array}{rcl}(A^{T}MA)_{ii} & = & \sum_{k=1}^{n+1} A^{T}_{ik}(MA)_{ki} \\ & = & (MA)_{ii} - (MA)_{ji} \\ & = & \sum_{k=1}^{n+1} M_{ik}A_{ki} - \sum_{k=1}^{n+1} M_{jk}A_{ki} \\ & = & M_{ii} - M_{ij} - M_{ji} + M_{jj} \\ & = & -2M_{ij} \end{array}[/tex]

And again since we are not in characteristic 2, [itex]-2M_{ij} \neq 0[/itex], so the matrix [itex]A^{T}MA[/itex] has a nonzero diagonal entry, so M is equivalent to a matrix in case 2 which is equivalent to a diagonal matrix.

Since the above three cases are collectively exhaustive, we have by induction that all symmetric matrices are equivalent to a diagonal matrix. Q.E.D.

-------------------------------------------
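
In computational terms, the procedure in the proof of Theorem 1 can be sketched as follows (a Python translation of the three cases; the function name and interface are mine, and p must be an odd prime):

[code]
def diagonalize_symmetric(M0, p):
    """Return (A, D) with D = A^T * M0 * A diagonal, for a symmetric matrix M0 over Z_p,
    p an odd prime.  Follows the three cases in the proof of Theorem 1 above."""
    n = len(M0)
    M = [[x % p for x in row] for row in M0]
    A = [[int(i == j) for j in range(n)] for i in range(n)]   # accumulated change of basis

    def identity():
        return [[int(i == j) for j in range(n)] for i in range(n)]

    def apply(T):
        # M <- T^T M T and A <- A T, so the invariant M = A^T M0 A is preserved (all mod p).
        nonlocal M, A
        MT = [[sum(M[i][k] * T[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]
        M = [[sum(T[k][i] * MT[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]
        A = [[sum(A[i][k] * T[k][j] for k in range(n)) % p for j in range(n)] for i in range(n)]

    for s in range(n):
        if M[s][s] == 0:
            # Case 3: every trailing diagonal entry is zero -> a shear creates a nonzero one.
            if all(M[j][j] == 0 for j in range(s, n)):
                pair = next(((i, j) for i in range(s, n) for j in range(i + 1, n)
                             if M[i][j] != 0), None)
                if pair is None:
                    break                       # trailing block is entirely zero: already diagonal
                i, j = pair
                T = identity()
                T[i][j], T[j][i] = 1, p - 1     # new (i, i) entry becomes -2*M[i][j] != 0
                apply(T)
            # Case 2: swap a nonzero diagonal entry into position (s, s).
            j = next(j for j in range(s, n) if M[j][j] != 0)
            T = identity()
            T[s][s] = T[j][j] = 0
            T[s][j] = T[j][s] = 1
            apply(T)
        # Case 1: clear the rest of row and column s using the nonzero pivot M[s][s].
        inv = pow(M[s][s], p - 2, p)            # inverse mod p by Fermat's little theorem
        T = identity()
        for j in range(s + 1, n):
            T[s][j] = (-M[s][j] * inv) % p
        apply(T)
    return A, M

# Example: an arbitrary symmetric matrix over Z_7
p = 7
M0 = [[1, 4, 5], [4, 2, 6], [5, 6, 3]]
A, D = diagonalize_symmetric(M0, p)
assert all(D[i][j] == 0 for i in range(3) for j in range(3) if i != j)
[/code]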

So, in order to show that all quadratic forms in three variables have a nontrivial solution, it suffices to show this for forms given by a diagonal matrix. Thus we can consider only equations of the form [itex]ax^{2} + by^2 + cz^2 = 0[/itex]. Since we assumed that we started with matrices of nonzero determinant, and equivalence preserves invertibility, we can assume that a, b, and c are all nonzero (alternatively, if after diagonalization one of the coefficients, say a, is zero, then the equation trivially has the solution (1, 0, 0)). Then the following theorem guarantees the existence of solutions:

-------------------------------------------

Theorem 2: If a, b, and c are all nonzero elements of [itex]\mathbb{Z}/p\mathbb{Z}[/itex] for an odd prime p, then the equation [itex]ax^{2} + by^2 + cz^2 = 0[/itex] has a nontrivial solution in [itex]\mathbb{Z}/p\mathbb{Z}[/itex].

Proof: Since there are three coefficients and only two possible values for the Legendre symbol, two of the coefficients must have the same quadratic character mod p. After relabeling if necessary, we may assume that [tex]\left(\frac{a}{p}\right) = \left(\frac{b}{p}\right)[/tex]. Now, note that multiplying the equation by any nonzero constant will not change whether it has nontrivial solutions, so if a is a quadratic nonresidue, we may multiply the equation by a nonresidue to obtain an equation where a and b are both residues mod p, hence we may assume WLOG that a and b are both quadratic residues. Thus, they have square roots in [itex]\mathbb{Z}/p\mathbb{Z}[/itex]. Let [itex]\sqrt{a}[/itex] and [itex]\sqrt{b}[/itex] be square roots of a and b respectively, and let [itex]x' = x \sqrt{a}[/itex] and [itex]y' = y \sqrt{b}[/itex]. Then the equation becomes [itex](x')^{2} + (y')^2 = -cz^2[/itex]. Certainly, if -c can be expressed as the sum of two squares mod p, then this equation has a solution where z=1, and we are done. Therefore, it suffices to show that every element of [itex]\mathbb{Z}/p\mathbb{Z}[/itex] can be expressed as the sum of two squares.

First, note that every quadratic residue mod p can be expressed as the sum of two squares by letting the second square be zero. If every element that could be expressed as the sum of two squares were itself a square, then since 1 = 1², we would have by induction that every element of [itex]\mathbb{Z}/p\mathbb{Z}[/itex] is a square (if n is a square, say n = r², then n + 1 = r² + 1² is a sum of two squares, hence by assumption a square), which is plainly false. So let m = x² + y² be a nonresidue which is expressible as the sum of two squares, and let n be any other nonresidue. Then by the multiplicative nature of the Legendre symbol, we have that n/m is a residue, and so can be written as k² for some [itex]k \in \mathbb{Z}/p\mathbb{Z}[/itex]. But then [itex]n = mk^2 = (x^2 + y^2)k^2 = (xk)^2 + (yk)^2[/itex] is the sum of two squares, and so every nonresidue is expressible as the sum of two squares as well. Q.E.D.

-------------------------------------------
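
As a complement to the proof, here is a small Python sketch (the function name is mine) that finds an explicit nontrivial zero of a diagonal form by direct search. A solution with z = 1 in fact always exists, since the value sets {ax² mod p} and {-c - by² mod p} each contain (p+1)/2 elements and must therefore intersect.

[code]
from itertools import product

def nontrivial_zero_diagonal(a, b, c, p):
    """Find (x, y, 1) with a*x^2 + b*y^2 + c*1^2 = 0 mod p, for an odd prime p and
    nonzero a, b, c.  Such a solution always exists: {a*x^2 mod p} and
    {-c - b*y^2 mod p} each contain (p+1)/2 elements, so they overlap."""
    for x, y in product(range(p), repeat=2):
        if (a*x*x + b*y*y + c) % p == 0:
            return (x, y, 1)
    return None   # unreachable for valid input

# quick sanity check over a small prime
p = 11
assert all(nontrivial_zero_diagonal(a, b, c, p) is not None
           for a, b, c in product(range(1, p), repeat=3))
[/code]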

Does this help answer your question?
 
  • #9
Wow! Thanks for the detailed explanation. It is very clever, and I need to go review the use of Legendre symbols (it's been years since I last used them!). I take it this is a known result. Do you happen to know of a textbook/theorem name/paper? I'm very glad to know that it is indeed true that all of these quadratic forms have a non-trivial solution, as that was my guess so far.

Edit: After a bit more looking and proper googling, I found a theorem called the Chevalley-Warning theorem, which implies that a homogeneous quadratic polynomial in at least three variables over a finite field has a non-trivial root. Is this the theorem you used to establish existence, Citan Uzuki? Once again, thanks for the amazingly detailed response!
 
  • #10
Actually, I had forgotten about the Chevalley-Warning theorem at the time I wrote that solution, although it certainly does follow from it. The first theorem, that all quadratic forms over fields of characteristic other than 2 are diagonalizable, is a well-known result in the arithmetic of quadratic forms (I don't think it has a name), but the proof of theorem 2 was just something that I came up with on the spot.
 

1. What are the "Roots of Polynomials modulo p"?

The "Roots of Polynomials modulo p" refer to the solutions or values of a polynomial equation when the coefficients and variables are taken modulo a prime number, denoted as p.

2. Why do we take the coefficients and variables modulo a prime number in this context?

Taking the coefficients and variables modulo a prime number in this context allows us to work with a finite field, which has many useful properties for solving polynomial equations. It also allows us to extend the concept of roots from real or complex numbers to other number systems.

3. How do we find the roots of polynomials modulo p?

To find the roots of polynomials modulo p, we can use various methods such as the quadratic formula, factoring, or trial and error. We can also use more advanced techniques such as Hensel's lemma (for lifting roots modulo p to roots modulo prime powers) or the Chinese remainder theorem (for composite moduli), as sketched below.
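
To illustrate the Hensel's lemma technique mentioned above, here is a rough sketch (Python; the function name and interface are mine) of the standard lifting step for a simple root of a one-variable polynomial:

[code]
def hensel_lift(coeffs, a, p, k):
    """Lift a simple root a of f mod p to a root mod p^k, assuming f'(a) != 0 mod p.
    coeffs = [c0, c1, c2, ...] are the coefficients of f(x) = c0 + c1*x + c2*x^2 + ..."""
    def f(x, m):
        return sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m

    def fprime(x, m):
        return sum(i * c * pow(x, i - 1, m) for i, c in enumerate(coeffs) if i >= 1) % m

    modulus = p
    for _ in range(k - 1):
        modulus *= p
        inv = pow(fprime(a, p), -1, p)            # inverse of f'(a) mod p
        a = (a - f(a, modulus) * inv) % modulus   # Newton-style lifting step
    return a

# example: lift the root x = 3 of x^2 - 2 (mod 7) to a root mod 7^3
root = hensel_lift([-2, 0, 1], 3, 7, 3)
assert (root * root - 2) % 7**3 == 0
[/code]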

4. Can a polynomial have multiple roots modulo p?

Yes, a polynomial can have multiple roots modulo p. Over the field [itex]\mathbb{Z}/p\mathbb{Z}[/itex], a nonzero polynomial in one variable of degree n has at most n roots, but a polynomial in several variables (such as the quadratic forms discussed above) can have many more solutions than its degree.

5. Why is the concept of roots of polynomials modulo p important?

The concept of roots of polynomials modulo p has many applications in various fields such as cryptography, coding theory, and number theory. It also helps us understand the behavior of polynomials over finite fields, which is essential in solving many mathematical problems.
