Weird theorem on critical points for multivariable functions

SUMMARY

The discussion centers on the second derivative test for functions of two variables, and in particular on why the classification of a critical point \((a,b)\) must involve the mixed partial derivative \(f_{xy}\). The original poster asks whether, when \(f_x = f_y = 0\), the condition \(f_{xx}f_{yy} > 0\) by itself guarantees a local extremum and \(f_{xx}f_{yy} < 0\) a saddle point. A counterexample shows that it does not, and the replies explain that the correct criterion is the sign of the Hessian determinant \(f_{xx}f_{yy} - f_{xy}^2\), which reflects the signs of the Hessian's eigenvalues.

PREREQUISITES
  • Understanding of multivariable calculus concepts, including critical points and partial derivatives.
  • Familiarity with Hessian matrices and their role in classifying critical points.
  • Knowledge of eigenvalues and eigenvectors in the context of linear algebra.
  • Basic proficiency in mathematical notation and functions of two variables.
NEXT STEPS
  • Study the properties of Hessian matrices in multivariable calculus.
  • Learn about the classification of critical points using eigenvalues in higher dimensions.
  • Explore the implications of mixed partial derivatives and their continuity.
  • Investigate the differences in critical point analysis between two-variable and three-variable functions.
USEFUL FOR

Mathematicians, students of multivariable calculus, and anyone interested in the analysis of critical points in functions of multiple variables will benefit from this discussion.

Nikitin
Why does it write all that stuff about \(f_{xy}^2\)? Isn't it unnecessary?

I mean, isn't the following true?

If at a point \((a,b)\): \(f_x = f_y = 0\) and \(f_{xx}f_{yy} > 0\), then the second partial derivatives \(f_{xx}\) and \(f_{yy}\) must be either both negative or both positive, and thus the point \((a,b)\) is a local minimum or maximum. And if \(f_{xx}f_{yy} < 0\), then \((a,b)\) is a saddle point.

Right? So why bring in the \(f_{xy}^2\)?
 

Attachments

  • bilde(2).JPG
Consider \(f(x,y) = x^2 + 100xy + y^2\).

This has a saddle point at the origin, even though \(f_{xx} = f_{yy} = 2\), so \(f_{xx}f_{yy} = 4 > 0\). If you look at \(f(t,t)\) you get \(102t^2\), which is positive. If you look at \(f(t,-t)\) you get \(-98t^2\), which is negative. So \(f\) takes both positive and negative values arbitrarily near the origin, and \((0,0)\) is neither a local minimum nor a local maximum.
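If you want to check this numerically, here is a minimal sketch using numpy (the Hessian of a quadratic is constant, so it can be written down by hand):

    import numpy as np

    # Hessian of f(x, y) = x^2 + 100xy + y^2: f_xx = 2, f_yy = 2, f_xy = 100
    H = np.array([[2.0, 100.0],
                  [100.0, 2.0]])

    print(H[0, 0] * H[1, 1])      # f_xx * f_yy = 4.0 > 0, the proposed criterion
    print(np.linalg.det(H))       # f_xx*f_yy - f_xy^2, about -9996 < 0: a saddle
    print(np.linalg.eigvalsh(H))  # [-98. 102.]: one negative, one positive

The eigenvalues 102 and -98 are exactly the coefficients that appeared along the lines \(y = x\) and \(y = -x\) above; those lines are the eigenvector directions of the Hessian.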
 
Hmm, thanks. But is there an intuitive explanation for the theorem?
 
A more precise way of thinking about it is this: if \(f(x,y)\) is a function of two variables, then its "derivative" is NOT just the pair of partial derivatives \(\partial f/\partial x\) and \(\partial f/\partial y\) (it is possible for those partial derivatives to exist at a point while \(f\) itself is not even continuous there, much less differentiable), but the gradient \(\nabla f = (\partial f/\partial x)\vec{i} + (\partial f/\partial y)\vec{j}\). (Even more precisely, the derivative is the linear transformation, at each point, given by the dot product \(\left((\partial f/\partial x)\vec{i} + (\partial f/\partial y)\vec{j}\right)\cdot\left((x - x_0)\vec{i} + (y - y_0)\vec{j}\right)\), but that transformation is "represented" by the gradient vector.)

In that sense the second derivative is given by the linear transformation "represented", at each point, by the matrix
\[\begin{bmatrix}\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\partial y} \\ \frac{\partial^2 f}{\partial y\partial x} & \frac{\partial^2 f}{\partial y^2}\end{bmatrix}\]

Because, as long as \(f\) has continuous second derivatives, the "mixed" second derivatives are equal, so this is a symmetric matrix and therefore has real eigenvalues and two independent (indeed orthogonal) eigenvectors. If we were to use the directions of those eigenvectors as coordinate axes, \(x'\) and \(y'\), the matrix representing the second derivative would be diagonal:
\[\begin{bmatrix}\frac{\partial^2 f}{\partial x'^2} & 0 \\ 0 & \frac{\partial^2 f}{\partial y'^2}\end{bmatrix}\]
where those two derivatives (evaluated at the given point) are the eigenvalues of the original second derivative matrix.
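As a concrete illustration of this diagonalization, here is a short numpy sketch (np.linalg.eigh is the standard routine for symmetric matrices), using the Hessian of the earlier example \(f(x,y) = x^2 + 100xy + y^2\):

    import numpy as np

    # Symmetric Hessian of f(x, y) = x^2 + 100xy + y^2 at the origin
    H = np.array([[2.0, 100.0],
                  [100.0, 2.0]])

    # eigh: real eigenvalues and orthonormal eigenvectors (columns of Q)
    eigenvalues, Q = np.linalg.eigh(H)

    # In the coordinates x', y' set by the eigenvector directions, the matrix
    # of second derivatives becomes diagonal, with the eigenvalues on the diagonal
    D = Q.T @ H @ Q
    print(eigenvalues)     # [-98. 102.]
    print(np.round(D, 8))  # diag(-98, 102), off-diagonal entries vanish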

Now, at a critical point (where the first derivatives are 0), work in the \(x', y'\) coordinate system, so that the "mixed" second derivative is also 0. Dropping the primes to lighten the notation, the expansion of \(f\) to second order is
\[f(x, y) \approx f(x_0, y_0) + \tfrac{1}{2}f_{xx}(x_0, y_0)(x - x_0)^2 + \tfrac{1}{2}f_{yy}(x_0, y_0)(y - y_0)^2.\]
And it is easy to see from this that:
1) if \(f_{xx}(x_0, y_0) = a\) and \(f_{yy}(x_0, y_0) = b\) are both positive, we have \(f(x, y) \approx f(x_0, y_0) + \tfrac{1}{2}a(x - x_0)^2 + \tfrac{1}{2}b(y - y_0)^2\), so that \((x_0, y_0)\) is a local minimum;
2) if \(f_{xx}(x_0, y_0) = -a\) and \(f_{yy}(x_0, y_0) = -b\) are both negative, we have \(f(x, y) \approx f(x_0, y_0) - \tfrac{1}{2}a(x - x_0)^2 - \tfrac{1}{2}b(y - y_0)^2\), so that \((x_0, y_0)\) is a local maximum;
3) if \(f_{xx}(x_0, y_0) = a\) and \(f_{yy}(x_0, y_0) = -b\) are one positive and the other negative, we have \(f(x, y) \approx f(x_0, y_0) + \tfrac{1}{2}a(x - x_0)^2 - \tfrac{1}{2}b(y - y_0)^2\), so that \((x_0, y_0)\) is a saddle point.
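These three cases, together with the inconclusive zero-eigenvalue case noted below, can be checked mechanically from the eigenvalues; here is a minimal Python sketch (the function name is my own):

    import numpy as np

    def classify_critical_point(H):
        """Classify a critical point from its symmetric Hessian H."""
        eigenvalues = np.linalg.eigvalsh(H)
        if np.all(eigenvalues > 0):
            return "local minimum"
        if np.all(eigenvalues < 0):
            return "local maximum"
        if np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
            return "saddle point"
        return "inconclusive (some eigenvalue is 0)"

    # The example from earlier in the thread: f(x, y) = x^2 + 100xy + y^2
    print(classify_critical_point(np.array([[2.0, 100.0],
                                            [100.0, 2.0]])))  # saddle point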

So the question is about the eigenvalues of that two-by-two matrix. If both are positive, the point is a local minimum; if both are negative, a local maximum; and if they are of different signs, a saddle point (of course, just as in the one-variable situation, if either eigenvalue is 0 the test tells us nothing). Further, the determinant is independent of the coordinate system: the two determinants
\[\left|\begin{array}{cc}\frac{\partial^2 f}{\partial x'^2} & 0 \\ 0 & \frac{\partial^2 f}{\partial y'^2}\end{array}\right| = f_{x'x'}f_{y'y'}\]
\[\left|\begin{array}{cc}\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\partial y} \\ \frac{\partial^2 f}{\partial y\partial x} & \frac{\partial^2 f}{\partial y^2}\end{array}\right| = f_{xx}f_{yy} - (f_{xy})^2\]
are equal. Both eigenvalues, and hence the two second derivatives \(f_{x'x'}\) and \(f_{y'y'}\), have the same sign, giving either a minimum or a maximum, if and only if \(f_{xx}f_{yy} - (f_{xy})^2 > 0\); they have different signs, giving a saddle point, if and only if \(f_{xx}f_{yy} - (f_{xy})^2 < 0\).
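The coordinate independence of the determinant is easy to spot-check numerically (a sketch; any rotation matrix Q plays the role of the change to the \(x', y'\) axes):

    import numpy as np

    H = np.array([[2.0, 100.0],
                  [100.0, 2.0]])

    # An arbitrary rotation of the coordinate axes
    theta = 0.7
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # det(Q^T H Q) = det(H), since det(Q) = 1 for a rotation
    print(np.linalg.det(Q.T @ H @ Q))  # about -9996.0
    print(np.linalg.det(H))            # about -9996.0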

This also shows why we do not have a similarly simple formula for three or more variables: all the analysis down to the diagonal matrix goes through, but the determinant is then a product of three or more numbers, and its sign does not determine the signs of the individual eigenvalues. If, in the three-variable case, the determinant is positive, it might be that all three eigenvalues are positive, or that one is positive and the other two negative.
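For instance (an example of my own, not from the thread), \(f(x,y,z) = \tfrac{1}{2}(x^2 - y^2 - z^2)\) has Hessian \(\mathrm{diag}(1,-1,-1)\) with determinant \(+1\), the same positive sign as for the minimum of \(\tfrac{1}{2}(x^2 + y^2 + z^2)\), yet the origin is a saddle point:

    import numpy as np

    # Two 3x3 Hessians with the same positive determinant (+1)
    H_min    = np.diag([1.0,  1.0,  1.0])   # f = (x^2 + y^2 + z^2)/2: local minimum
    H_saddle = np.diag([1.0, -1.0, -1.0])   # f = (x^2 - y^2 - z^2)/2: saddle point

    print(np.linalg.det(H_min), np.linalg.det(H_saddle))  # 1.0 1.0
    print(np.linalg.eigvalsh(H_saddle))                   # [-1. -1.  1.]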
 
thanks!
 
