Quadratic forms in three variables DO in fact have nontrivial solutions mod every prime p. The proof requires a little knowledge of number theory, but not too much. If you know what a Legendre symbol (http://en.wikipedia.org/wiki/Legendre_symbol) is, you'll have no trouble understanding it.
First, let's dispense with the trivial case. Mod 2, the requirement that a, b, c≠0 implies that the polynomial is x² + y² + z², which has nontrivial solutions (1, 1, 0), (0, 1, 1), and (1, 0, 1). Henceforth, all computations will be performed in the field \mathbb{Z}/p\mathbb{Z} for some odd prime p.
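If you like, you can verify the mod 2 case by brute force; this little Python snippet is just a sanity check of the claim above, not part of the proof:

```python
# Enumerate all nonzero (x, y, z) in {0,1}^3 and keep those with
# x^2 + y^2 + z^2 ≡ 0 (mod 2).
from itertools import product

solutions = [v for v in product(range(2), repeat=3)
             if v != (0, 0, 0) and sum(t * t for t in v) % 2 == 0]
print(solutions)   # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
```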
Next, as Robdert1986 said, you can rewrite the equation P(x, y, z) = 0 as v^{T}Mv = 0, where v is the column vector (x, y, z) and M is a symmetric matrix. If the determinant of M is zero, then we can take v to be any nonzero vector in the kernel of M as a solution. Henceforth we may assume that det M is nonzero. Let us define an equivalence relation on symmetric matrices. Call two matrices M and N equivalent if there exists an invertible matrix A such that:
N = A^{T}MA
Note that it is A transpose at the beginning, not A inverse. Now, it is easy to see that equivalence of matrices is in fact an equivalence relation, that any matrix equivalent to a symmetric matrix is symmetric, that any matrix equivalent to an invertible matrix is invertible, and that the form v^{T}Mv = 0 has a nontrivial solution iff v^{T}Nv = 0 does (proof of the last statement: if v is a solution to v^{T}Nv = 0, then Av is a solution to v^{T}Mv = 0, and if v is a solution to v^{T}Mv = 0, then A^{-1}v is a solution to v^{T}Nv = 0). Hence when deciding whether a quadratic form has a nontrivial solution, we may replace it with any equivalent one. In particular, the following theorem says that we may replace it with a diagonal matrix.
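Before the theorem, here is a quick numerical illustration of that solution-transfer fact (the particular M, A, and p below are arbitrary choices of mine, just for demonstration):

```python
# If N = AᵀMA and v is a nontrivial solution of vᵀNv ≡ 0 (mod p),
# then Av is a nontrivial solution of vᵀMv ≡ 0 (mod p).
import numpy as np
from itertools import product

p = 7
M = np.array([[1, 2, 0], [2, 3, 5], [0, 5, 4]])   # an arbitrary symmetric matrix
A = np.array([[1, 1, 0], [0, 1, 0], [2, 0, 1]])   # an arbitrary invertible matrix (det = 1)
N = (A.T @ M @ A) % p                             # N is equivalent to M

def form(Q, v):
    """Evaluate the quadratic form vᵀQv mod p."""
    return int(v @ Q @ v) % p

# Find a nontrivial solution of the N-form by brute force...
for v in product(range(p), repeat=3):
    v = np.array(v)
    if v.any() and form(N, v) == 0:
        w = (A @ v) % p                           # ...then Av solves the M-form
        assert form(M, w) == 0
        print(v, "->", w)
        break
```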
-------------------------------------------
Theorem 1: Over a field of characteristic not equal to 2, every symmetric matrix is equivalent to a diagonal matrix.
Proof: We proceed by induction on the dimension of the matrix. If the matrix is 1×1, it is already diagonal, and we are done. Now, suppose that the theorem is true for n×n matrices, and let M be an (n+1)×(n+1) symmetric matrix. If M is the zero matrix, then it is already diagonal, so we are done. So suppose that M is not the zero matrix. We consider three cases:
Case 1: M_{11} \neq 0
First we will show that M is equivalent to a matrix whose entries in the first row and first column, except for the upper-left entry itself, are all zero. Let A be the matrix defined by:
A_{ij}=\begin{cases}1 & \mathrm{if}\ i=j \\ -M_{1j}/M_{11} & \mathrm{if}\ i = 1 \neq j \\ 0 & \mathrm{otherwise} \end{cases}
It is elementary to verify that the matrix A^{T}MA has zero entries in the first row and column, except in the upper left corner. In other words it is a block matrix of the form:
\left( \begin{array}{cc} m & 0 \\ 0 & M' \end{array} \right)
where m=M_{11} and M' is the n×n submatrix obtained by deleting the first row and column. Now, M' is symmetric, and so by the induction hypothesis it is equivalent to a diagonal matrix. Let B be an invertible n×n matrix such that B^{T}M'B is diagonal, and then let B' be the (n+1)×(n+1) block matrix:
\left( \begin{array}{cc} 1 & 0 \\ 0 & B \end{array} \right)
Then (AB')^{T}M(AB') will be a diagonal matrix, as required.
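For concreteness, here is a sketch checking that this elimination matrix does what's claimed (the matrix M below is made up, and `pow(x, -1, p)` computes a modular inverse, available in Python 3.8+):

```python
# Build A from the formula above and check that AᵀMA has zeros in the
# first row and column, apart from the upper-left corner, mod p.
import numpy as np

p = 11
M = np.array([[3, 1, 4], [1, 5, 9], [4, 9, 2]])       # symmetric, M_11 = 3 ≠ 0

n = M.shape[0]
A = np.eye(n, dtype=int)
inv_m11 = pow(int(M[0, 0]), -1, p)                    # 1/M_11 mod p
A[0, 1:] = (-M[0, 1:] * inv_m11) % p                  # A_1j = -M_1j / M_11 for j ≠ 1

print((A.T @ M @ A) % p)   # [[3 0 0] [0 1 4] [0 4 4]]: first row/column cleared
```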
Case 2: M has a nonzero diagonal entry, but it is not M_{11}
Suppose that M_{jj} is a nonzero diagonal entry, and let P be the permutation matrix that exchanges the first and jth rows. It is easy to verify that P^{T}=P and that (P^{T}MP)_{11} = M_{jj} \neq 0, so M is equivalent to a matrix in Case 1, which is equivalent to a diagonal matrix.
Case 3: All the diagonal entries of M are zero.
Per our assumption that M is not the zero matrix, there exists some entry M_{ij} = M_{ji} \neq 0 with i \neq j. Let A be the matrix defined as follows: let all the diagonal entries of A be 1, A_{ij}=1, A_{ji}=-1, and all other entries of A be zero. A is invertible: its determinant is 2, which is nonzero since we are not in characteristic 2. Let us now compute (A^{T}MA)_{ii}:
\begin{array}{rcl}(A^{T}MA)_{ii} & = & \sum_{k=1}^{n+1} A^{T}_{ik}(MA)_{ki} \\ & = & (MA)_{ii} - (MA)_{ji} \\ & = & \sum_{k=1}^{n+1} M_{ik}A_{ki} - \sum_{k=1}^{n+1} M_{jk}A_{ki} \\ & = & M_{ii} - M_{ij} - M_{ji} + M_{jj} \\ & = & -2M_{ij} \end{array}
And again, since we are not in characteristic 2, -2M_{ij} \neq 0, so the matrix A^{T}MA has a nonzero diagonal entry, and hence M is equivalent to a matrix covered by Case 1 or Case 2, which is equivalent to a diagonal matrix.
Since the above three cases are collectively exhaustive, we have by induction that all symmetric matrices are equivalent to a diagonal matrix. Q.E.D.
-------------------------------------------
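The proof of Theorem 1 is effectively an algorithm, and it may help to see it spelled out. Here is a minimal sketch of it in Python (the function name and overall structure are mine, not from any library); it repeatedly applies the three cases to diagonalize a symmetric matrix mod an odd prime:

```python
import numpy as np

def congruent_diagonal(M, p):
    """Return (A, D) with A invertible mod p and D = AᵀMA mod p diagonal.

    M must be symmetric and p an odd prime; follows the three cases in
    the proof of Theorem 1.
    """
    n = M.shape[0]
    A = np.eye(n, dtype=int)
    for r in range(n):
        S = (A.T @ M @ A) % p
        if not S[r:, r:].any():
            break                           # trailing block is zero: already diagonal
        if all(S[k, k] == 0 for k in range(r, n)):
            # Case 3: all trailing diagonal entries vanish; use some S_ij = S_ji ≠ 0
            # to manufacture a nonzero diagonal entry, since (EᵀSE)_ii = -2·S_ij.
            i, j = next((i, j) for i in range(r, n) for j in range(r, n)
                        if i != j and S[i, j] != 0)
            E = np.eye(n, dtype=int)
            E[i, j], E[j, i] = 1, p - 1     # E_ij = 1, E_ji = -1
            A = (A @ E) % p
            S = (A.T @ M @ A) % p
        if S[r, r] == 0:
            # Case 2: swap variable r with one whose diagonal entry is nonzero.
            j = next(k for k in range(r, n) if S[k, k] != 0)
            P = np.eye(n, dtype=int)
            P[[r, j]] = P[[j, r]]
            A = (A @ P) % p
            S = (A.T @ M @ A) % p
        # Case 1: clear row and column r with the elimination matrix from the proof.
        E = np.eye(n, dtype=int)
        E[r, r + 1:] = (-S[r, r + 1:] * pow(int(S[r, r]), -1, p)) % p
        A = (A @ E) % p
    return A, (A.T @ M @ A) % p

A, D = congruent_diagonal(np.array([[0, 1, 2], [1, 0, 3], [2, 3, 0]]), 7)
print(D)   # diagonal mod 7 (this example has zero diagonal, so it starts in Case 3)
```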
So, in order to show that all quadratic forms in three variables have a nontrivial solution, it suffices to show this for forms given by a diagonal matrix. Thus we need only consider equations of the form ax^{2} + by^{2} + cz^{2} = 0. Since we assumed that we started with matrices of nonzero determinant, and equivalence preserves invertibility, we may assume that a, b, and c are all nonzero (alternatively, if after diagonalization one of the coefficients, say a, is zero, then the equation trivially has the solution (1, 0, 0)). Then the following theorem guarantees the existence of solutions:
-------------------------------------------
Theorem 2: If a, b, and c are all nonzero elements of \mathbb{Z}/p\mathbb{Z} for an odd prime p, then the equation ax^{2} + by^2 + cz^2 = 0 has a nontrivial solution in \mathbb{Z}/p\mathbb{Z}.
Proof: Since there are three coefficients and only two possible values for the Legendre symbol, two of the coefficients must have the same quadratic character mod p. After relabeling if necessary, we may assume that \left(\frac{a}{p}\right) = \left(\frac{b}{p}\right). Now, note that multiplying the equation by any nonzero constant will not change whether it has nontrivial solutions, so if a is a quadratic nonresidue, we may multiply the equation by a nonresidue to obtain an equation in which a and b are both residues mod p; hence we may assume WLOG that a and b are both quadratic residues. Thus, they have square roots in \mathbb{Z}/p\mathbb{Z}. Let \sqrt{a} and \sqrt{b} be square roots of a and b respectively, and let x' = x \sqrt{a} and y' = y \sqrt{b}. Then the equation becomes (x')^{2} + (y')^{2} = -cz^{2}. Certainly, if -c can be expressed as the sum of two squares mod p, then this equation has a solution with z = 1, and we are done. Therefore, it suffices to show that every element of \mathbb{Z}/p\mathbb{Z} can be expressed as the sum of two squares.
First, note that every quadratic residue mod p can be expressed as the sum of two squares by letting the second square be zero. If every element that could be expressed as the sum of two squares were itself a square, then, since 1 = 1² and since k = s² being a square makes k + 1 = s² + 1² a sum of two squares (hence, by assumption, itself a square), we would have by induction that every element of \mathbb{Z}/p\mathbb{Z} is a square, which is plainly false. So let m = x² + y² be a nonresidue which is expressible as the sum of two squares, and let n be any other nonresidue. Then by the multiplicative nature of the Legendre symbol, we have that n/m is a residue, and so can be written as k² for some k \in \mathbb{Z}/p\mathbb{Z}. But then n = mk^2 = (x^2 + y^2)k^2 = (xk)^2 + (yk)^2 is the sum of two squares, and so every nonresidue is expressible as the sum of two squares as well. Q.E.D.
-------------------------------------------
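Theorem 2's proof is constructive as well, so you can turn it directly into a solver. A sketch (all helper names are mine; the square-root and sum-of-two-squares searches are brute force, which is fine for small p):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for a ≢ 0 and odd prime p, via Euler's criterion."""
    s = pow(a % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def sqrt_mod(a, p):
    """Brute-force square root of a quadratic residue a mod p."""
    return next(r for r in range(p) if r * r % p == a % p)

def solve_ternary(a, b, c, p):
    """Return a nontrivial (x, y, z) with a·x² + b·y² + c·z² ≡ 0 (mod p)."""
    coeffs = [a % p, b % p, c % p]
    # Relabel so the first two coefficients have the same quadratic character
    # (always possible: three coefficients, two possible Legendre values).
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        if legendre(coeffs[i], p) == legendre(coeffs[j], p):
            order = [i, j, 3 - i - j]
            break
    a_, b_, c_ = (coeffs[t] for t in order)
    if legendre(a_, p) == -1:
        # Multiply the whole equation by a nonresidue to make a_ and b_ residues.
        m = next(m for m in range(2, p) if legendre(m, p) == -1)
        a_, b_, c_ = a_ * m % p, b_ * m % p, c_ * m % p
    ra, rb = sqrt_mod(a_, p), sqrt_mod(b_, p)
    # Express -c_ as a sum of two squares: s² + t² ≡ -c_ (mod p).
    s, t = next((s, t) for s in range(p) for t in range(p)
                if (s * s + t * t + c_) % p == 0)
    # Undo the substitutions x' = x·√a, y' = y·√b, and take z = 1.
    sol = [0, 0, 0]
    sol[order[0]] = s * pow(ra, -1, p) % p
    sol[order[1]] = t * pow(rb, -1, p) % p
    sol[order[2]] = 1
    return tuple(sol)

print(solve_ternary(1, 2, 3, 7))   # (0, 3, 1): 0 + 2·9 + 3·1 = 21 ≡ 0 (mod 7)
```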
Does this help answer your question?