The basic idea is this: for a real valued function of two variables, f(x,y), the derivative at a point is, in a strict sense, the linear map that sends a displacement (\Delta x, \Delta y) to the real number that best approximates the change in f near that point. Any such linear map can be written as a dot product, (f_x, f_y)\cdot (\Delta x, \Delta y), and so we can represent the derivative by the gradient vector (f_x, f_y).
So the first derivative function maps (x, y) to the vector (f_x, f_y), and its derivative, the second derivative of f, is best represented as a 2 by 2 <b>matrix</b> <br />
\begin{bmatrix}\frac{\partial^2 f}{\partial x^2} &amp; \frac{\partial^2 f}{\partial x\partial y} \\ \frac{\partial^2 f}{\partial x\partial y} &amp; \frac{\partial^2 f}{\partial y^2}\end{bmatrix}<br />
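As a concrete numerical check on that matrix, the second partials can be approximated by central differences. The sketch below is my own illustration (the helper name hessian_2d and the sample function f(x, y) = x^3 + y^2 - 3xy are not from the discussion above; the sample was chosen so the second partials are easy to verify by hand: f_xx = 6x, f_yy = 2, f_xy = -3):

```python
import numpy as np

def hessian_2d(f, x, y, h=1e-5):
    """Approximate the 2x2 matrix of second partials by central differences."""
    fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy],
                     [fxy, fyy]])

# Sample function with hand-checkable second partials.
f = lambda x, y: x**3 + y**2 - 3*x*y
H = hessian_2d(f, 1.0, 1.0)
# Exact second derivative matrix at (1, 1): [[6, -3], [-3, 2]]
```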
<br />
Now, two points from Linear Algebra: <br />
1) Since that is a symmetric matrix, it has two independent (in fact orthogonal) eigenvectors and so can be <b>diagonalized</b>. That is, there exists a coordinate system, say x' and y', in which the second derivative matrix can be written <br />
\begin{bmatrix}\frac{\partial^2f}{\partial x&#039;^2} &amp; 0 \\ 0 &amp; \frac{\partial^2 f}{\partial y&#039;^2}\end{bmatrix}<br />
<br />
2) The determinant is invariant under such a change of coordinates.<br />
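Both points can be checked numerically. In the sketch below (my own illustration, using an arbitrary symmetric matrix rather than an actual second derivative matrix), numpy's eigh returns the eigenvalues and an orthogonal matrix Q of eigenvectors; Q is the rotation to the x', y' coordinates, Q^T H Q is the diagonal form, and the determinant is unchanged:

```python
import numpy as np

# Any symmetric matrix will do; the entries here are purely illustrative.
H = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric matrices: the eigenvalues are real
# and the eigenvector matrix Q is orthogonal (a rotation of coordinates).
vals, Q = np.linalg.eigh(H)

# In the rotated coordinates the matrix is diagonal, with the eigenvalues
# on the diagonal, and the determinant is the same as before.
D = Q.T @ H @ Q
```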
<br />
Near a critical point (x&#039;_0, y&#039;_0), f can be approximated in that x', y' coordinate system by \frac{1}{2}\frac{\partial^2f}{\partial x&#039;^2}(x&#039;- x&#039;_0)^2+ \frac{1}{2}\frac{\partial^2f}{\partial y&#039;^2}(y&#039;- y&#039;_0)^2+ f(x&#039;_0, y&#039;_0).<br />
If the point is a minimum, both of those second partial derivatives are positive; if it is a maximum, both are negative; if it is a saddle point, one is positive and one negative. That is, at either a maximum or a minimum the determinant, \left(\partial^2 f/\partial x&#039;^2\right)\left(\partial^2 f/\partial y&#039;^2\right), is positive, while at a saddle point it is negative. And because the determinant is "invariant", that product equals the f_{xx}f_{yy}- (f_{xy})^2 you refer to.<br />
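That classification rule can be sketched as code. The helper name classify_critical_point_2d is my own, and it assumes the second partials are evaluated at a critical point (f_x = f_y = 0):

```python
def classify_critical_point_2d(fxx, fyy, fxy):
    """Second-derivative test at a critical point of f(x, y)."""
    det = fxx * fyy - fxy**2          # the invariant determinant
    if det > 0:
        # fxx and fyy must have the same sign when det > 0
        return "minimum" if fxx > 0 else "maximum"
    if det < 0:
        return "saddle point"
    return "inconclusive"             # det == 0: the test fails

# f(x, y) = x**2 - y**2 has fxx = 2, fyy = -2, fxy = 0 at the origin.
kind = classify_critical_point_2d(2, -2, 0)
```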
<br />
Now, here is the problem, and the reason why textbooks do not say what to do in three dimensions. If you go to <b>three</b> variables, x, y, and z, you can still find coordinates, x', y', z', such that the second derivative matrix is diagonal:<br />
\begin{bmatrix} \frac{\partial^2 f}{\partial x&#039;^2} &amp; 0 &amp; 0 \\ 0 &amp; \frac{\partial^2 f}{\partial y&#039;^2} &amp; 0 \\ 0 &amp; 0 &amp; \frac{\partial^2 f}{\partial z&#039;^2}\end{bmatrix}.<br />
<br />
But knowing the <b>determinant</b> of that matrix does NOT tell you the individual signs. If the determinant is positive, it might be that all three signs are positive (a minimum) or that two are negative and one positive (a saddle point). If the determinant is negative, it might be that all three are negative (a maximum) or that two are positive and one negative (a saddle point). The only reliable thing to do is find the signs of the individual eigenvalues of the second derivative matrix.
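That last step can be sketched as follows. The helper name classify_by_eigenvalues is my own; the diagonal matrix with eigenvalues 1, -1, -2 is chosen to show a positive determinant (product 2) at what is nevertheless a saddle point:

```python
import numpy as np

def classify_by_eigenvalues(H):
    """Classify a critical point from the eigenvalues of its symmetric
    second derivative matrix, in any number of variables."""
    vals = np.linalg.eigvalsh(H)      # real eigenvalues of a symmetric matrix
    if np.all(vals > 0):
        return "minimum"
    if np.all(vals < 0):
        return "maximum"
    if np.any(vals > 0) and np.any(vals < 0):
        return "saddle point"
    return "inconclusive"             # some eigenvalue is zero

# Positive determinant, yet a saddle point: eigenvalues 1, -1, -2.
H_saddle = np.diag([1.0, -1.0, -2.0])
kind = classify_by_eigenvalues(H_saddle)
```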