Critical point of a function of two variables


Discussion Overview

The discussion revolves around finding critical points of the function \( f(x,y) = 2\cos(2x) + \sin(x^{2}-y^{2}) \). Participants explore the first and second order derivatives, the criteria for identifying critical points, and the reasoning behind using specific derivatives to classify these points. The scope includes mathematical reasoning and conceptual clarification regarding critical points in multivariable calculus.

Discussion Character

  • Mathematical reasoning
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant seeks help in verifying their derivatives and understanding the critical point criteria, expressing uncertainty about the second derivative's role.
  • Another participant questions why the second derivative with respect to \( x \) is emphasized over \( y \) in determining the type of critical point, suggesting both variables should be treated equally.
  • Some participants clarify that either \( f_{xx} \) or \( f_{yy} \) can be used to determine maxima or minima, indicating that the choice of variable does not affect the outcome.
  • A later reply discusses the determinant of the Hessian matrix and its relevance to the classification of critical points, emphasizing the relationship between the signs of \( f_{xx} \) and \( f_{yy} \).
  • One participant elaborates on the mathematical justification for the second derivative test, referencing linear algebra concepts and the behavior of the function near critical points.

Areas of Agreement / Disagreement

Participants express differing views on the relevance of the second derivatives with respect to \( x \) and \( y \) in determining critical point types. While some assert that either can be used, others seek a deeper understanding of why \( f_{xx} \) is often specified; a later reply addresses this by showing that, when the discriminant is positive, \( f_{xx} \) and \( f_{yy} \) must have the same sign.

Contextual Notes

Participants highlight the need for a deeper understanding of the mathematical principles underlying the second derivative test, including the role of the Hessian matrix and the conditions for maxima and minima. There is an acknowledgment of the complexity involved in justifying the formulas used in multivariable calculus.

GreenGoblin
Hello,
Please help me solve this problem and help me find whether I made a mistake, if you will.
Thank you

$f(x,y) = 2\cos(2x) + \sin(x^{2}-y^{2})$

Find all the first and second order derivatives, hence show the origin is a critical point and find which type of critical point it is.
This is my first time attempting a critical point question; I know it has to do with the second derivatives, but I am not sure what the criterion is.

I worked out the derivatives (I don't have any way to verify them other than my own working, and they need to be right before I can check the critical point criteria, so please point out any mistake):

$df/dx = -4\sin(2x) + 2x\sin(x^{2})\cos(y^{2}) - \cos(x^{2})\sin(y^{2})$

$df/dy = -2y(\sin(y^{2})\sin(x^{2}) + \cos(y^{2})\cos(x^{2}))$

$d^{2}f/dx^{2} = -8\cos(2x) + 4x^{2}\cos(x^{2})(\sin(y^{2}) + \cos(y^{2})) + 2\sin(x^{2})(\sin(y^{2}) + \cos(y^{2}))$
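One way to sanity-check partial derivatives like these, assuming SymPy is available, is to recompute them symbolically and evaluate the quantities the critical point test needs at the origin. A minimal sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 2*sp.cos(2*x) + sp.sin(x**2 - y**2)

# First- and second-order partial derivatives
fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(fx, y)

print("f_x  =", sp.simplify(fx))
print("f_y  =", sp.simplify(fy))
print("f_xx =", sp.simplify(fxx))

# The origin is a critical point if both first partials vanish there
origin = {x: 0, y: 0}
print("f_x(0,0) =", fx.subs(origin), "  f_y(0,0) =", fy.subs(origin))

# Quantities used by the second derivative test at the origin
D = fxx*fyy - fxy**2
print("D(0,0) =", D.subs(origin), "  f_xx(0,0) =", fxx.subs(origin))
```

Comparing the printed expressions against the hand-computed ones above is a quick way to spot any slip in the chain rule.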
 
GreenGoblin said:
This is my first time attempting a critical point question; I know it has to do with the second derivatives, but I am not sure what the criterion is.

http://en.wikipedia.org/wiki/Second_partial_derivative_test

check out the example.
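In outline, the test described there is built around the discriminant
$$
D(a,b) = f_{xx}(a,b)\,f_{yy}(a,b) - \bigl(f_{xy}(a,b)\bigr)^{2},
$$
evaluated at a point $(a,b)$ where $f_{x} = f_{y} = 0$: if $D > 0$ and $f_{xx}(a,b) > 0$ the point is a local minimum; if $D > 0$ and $f_{xx}(a,b) < 0$ it is a local maximum; if $D < 0$ it is a saddle point; and if $D = 0$ the test is inconclusive.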
 
dwsmith said:
check out the example.
Hi,
Thanks,

Why is it that the second derivative with respect to x is used to find the critical point type but not y? What property makes x more relevant than y in this case? (Since the function is of both variables, I don't understand why x is more involved in the evaluation than y.)
 
GreenGoblin said:
Why is it that the second derivative with respect to x is used to find the critical point type but not y?

What you find with x, you plug into the y derivative. You can start with y as well.
 
No no, what I mean is: why is it that $d^{2}f/dx^{2}$ (or $f_{xx}$, whatever notation you prefer) is used to find out whether it's a maximum or minimum? I can't see why x or y should be any different, since they're both independent variables. But the test specifies that the second x derivative is to be used. What makes this the case?
 
GreenGoblin said:
No no, what I mean is: why is it that $d^{2}f/dx^{2}$ (or $f_{xx}$, whatever notation you prefer) is used to find out whether it's a maximum or minimum? I can't see why x or y should be any different, since they're both independent variables. But the test specifies that the second x derivative is to be used. What makes this the case?

That is the determinant of the Hessian matrix.
For example, the det of
$$
\begin{vmatrix}a & b \\ c & d \end{vmatrix} = ad-bc
$$

Examining $f_{xx}$ has to do with positive and negative definiteness.
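To spell out the connection in outline: for a function of two variables the Hessian is
$$
H = \begin{bmatrix} f_{xx} & f_{xy} \\ f_{xy} & f_{yy} \end{bmatrix},
$$
and Sylvester's criterion says $H$ is positive definite (local minimum) exactly when $f_{xx} > 0$ and $\det H = f_{xx}f_{yy} - f_{xy}^{2} > 0$, and negative definite (local maximum) exactly when $f_{xx} < 0$ and $\det H > 0$. That is the role $f_{xx}$ plays in the statement of the test.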
 
No, the determinant is used for the first step; I'm asking about the second. Given that it is a maximum or minimum, and not a saddle (from using the determinant), what is the process for determining which of maximum or minimum it is? I'm querying why the second x derivative is specified and not y, since x and y are, in essence, the same for problems like this: you could switch them around with no effect, so why isn't this true for that as well?

Please someone different respond.
 
To answer your question, yes you can use either $f_{xx}$ or $f_{yy}$. It doesn't matter.
 
Jester said:
To answer your question, yes you can use either $f_{xx}$ or $f_{yy}$. It doesn't matter.
Thank you

I like to have a justification for a formula rather than just the formula itself; I couldn't tally this as written, and I saw no mention of it in the source provided.

Thanks,
GreenGoblin
 
The condition for a max or min is that $f_{xx}f_{yy}- f_{xy}^2$ be greater than 0. Since $f_{xy}$ is squared, $f_{xy}^2$ is non-negative, so $-f_{xy}^2$ is non-positive. In order that $f_{xx}f_{yy}- f_{xy}^2$ be greater than 0, then, $f_{xx}f_{yy}$ must be positive, which in turn means that $f_{xx}$ and $f_{yy}$ must have the same sign, so it is sufficient to check either of them to see whether it is a max or min.
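For a quick sanity check of that sign argument, take the standard saddle $f(x,y)=x^{2}-y^{2}$:
$$
f_{xx} = 2,\qquad f_{yy} = -2,\qquad f_{xy} = 0,\qquad f_{xx}f_{yy}-f_{xy}^{2} = -4 < 0,
$$
so here the two pure second partials have opposite signs, the discriminant is negative, and the origin is correctly flagged as a saddle rather than a max or min.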

The problem with a justification for this formula (which is typically just "given" in a calculus text without proof or justification) is that it requires some pretty deep linear algebra. In a strict sense, "the" derivative, at a given point, of a function of two variables (as opposed to the partial derivatives) is a linear transformation from $R^2$ to $R$ which can be represented by the vector $\begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}\end{bmatrix}$, the "gradient" of the function f.

And, then, the second derivative is the linear transformation from $R^2$ to $R^2$ which can be represented by the matrix
$\begin{bmatrix}\frac{\partial^2f}{\partial x^2} & \frac{\partial^2f}{\partial x\partial y} \\ \frac{\partial^2f}{\partial x\partial y} & \frac{\partial^2f}{\partial y^2}\end{bmatrix}$

Now, that is a symmetric matrix, so there exists a basis (coordinate system), say with coordinates x' and y', in which it becomes the diagonal matrix
$\begin{bmatrix}\frac{\partial^2f}{\partial x'^2} & 0 \\ 0 & \frac{\partial^2f}{\partial y'^2}\end{bmatrix}$

Further, the determinants of those two matrices are equal. That means $f_{xx}f_{yy}- f_{xy}^2= f_{x'x'}f_{y'y'}$, which in turn means that $f_{xx}f_{yy}- f_{xy}^2$ will be positive if and only if there exists a coordinate system, x'y', such that $f_{x'x'}$ and $f_{y'y'}$ have the same sign. Of course, you only check this if the first derivatives, $f_x$ and $f_y$, are 0. So, in terms of the Taylor series, $f(x',y')= f(x'_0, y'_0)+ \frac{1}{2}f_{x'x'}(x'- x'_0)^2+ \frac{1}{2}f_{y'y'}(y'- y'_0)^2$ plus higher powers of x' and y'. For x' and y' sufficiently close to $x'_0$ and $y'_0$, those higher power terms are negligible. And, if $f_{x'x'}$ and $f_{y'y'}$ are both positive, $f(x'_0, y'_0)+ \frac{1}{2}f_{x'x'}(x'- x'_0)^2+ \frac{1}{2}f_{y'y'}(y'- y'_0)^2$ is a paraboloid opening upward, so we have a minimum at $(x'_0, y'_0)$, while if they are both negative we have a paraboloid opening downward, so we have a maximum.
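As a worked illustration (with the partial derivatives recomputed from scratch rather than taken from the posts above), apply this to the original function at the origin: $f_x = -4\sin(2x) + 2x\cos(x^{2}-y^{2})$ and $f_y = -2y\cos(x^{2}-y^{2})$ both vanish there, and the Hessian at $(0,0)$ is already diagonal,
$$
\begin{bmatrix} f_{xx}(0,0) & f_{xy}(0,0) \\ f_{xy}(0,0) & f_{yy}(0,0) \end{bmatrix}
= \begin{bmatrix} -6 & 0 \\ 0 & -2 \end{bmatrix},
$$
so $f_{xx}f_{yy}-f_{xy}^{2} = 12 > 0$ with $f_{xx} < 0$, and the origin is a local maximum.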
 
