Fermat's theorem applied to multivariate functions

In summary, a function fails to be differentiable at a point if different paths converging to that point give the differentiation procedure different results.
  • #1
elementbrdr
Fermat's theorem provides that if a function f(x) has a local max or min at a, and if f'(a) exists, then f'(a) = 0. I was wondering whether a similar theorem exists for a function f(x,y) or f(x,y,z), etc.
 
  • #2
Yep. You can even work it out yourself, using the fact that, e.g., if (0,0) is a local minimum of f(x,y), then 0 is a local minimum of g(t), where g(t) is defined by
g(t) = f(at, bt)​
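As a minimal numeric sketch of this reduction (using f(x,y) = x² + y² and the direction (a, b) = (1, 2) as assumed examples, not values from the thread):

```python
# Sketch: restricting f(x, y) = x**2 + y**2 to a line through the origin.
# If (0, 0) is a local minimum of f, then t = 0 must be a local minimum
# of g(t) = f(a*t, b*t) for every direction (a, b).

def f(x, y):
    return x**2 + y**2

def g(t, a=1.0, b=2.0):
    return f(a * t, b * t)

# Central-difference estimate of g'(0): it should vanish at the minimum.
h = 1e-6
g_prime_0 = (g(h) - g(-h)) / (2 * h)
print(abs(g_prime_0) < 1e-9)  # True: g'(0) = 0
```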
 
  • #3
Thanks for the response, Hurkyl. I'm not following you as to how I could use that to solve for a pair of (x,y), though. Do you have an example?
 
  • #4
If f is a differentiable function of two variables, x and y, then at any max or min we must have
[itex]\frac{\partial f}{\partial x}= 0[/itex] and [itex]\frac{\partial f}{\partial y}= 0[/itex]
at that point.
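A quick numerical check of this first-order condition, on the assumed toy function f(x,y) = (x − 1)² + (y + 2)², whose minimum sits at (1, −2):

```python
# Checking that both partial derivatives vanish at a known minimum of
# f(x, y) = (x - 1)**2 + (y + 2)**2, which is at (1, -2).

def f(x, y):
    return (x - 1)**2 + (y + 2)**2

def partials(fn, x, y, h=1e-6):
    # Central-difference estimates of df/dx and df/dy at (x, y).
    fx = (fn(x + h, y) - fn(x - h, y)) / (2 * h)
    fy = (fn(x, y + h) - fn(x, y - h)) / (2 * h)
    return fx, fy

fx, fy = partials(f, 1.0, -2.0)
print(abs(fx) < 1e-9 and abs(fy) < 1e-9)  # True: both partials vanish
```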

By the way, the existence of the partial derivatives at a given point does not always imply that f itself is differentiable there. Better is the statement
[itex]\nabla f= \vec{0}[/itex]
 
  • #5
HallsofIvy, your last statement is interesting. I'm not sure if I'm at the point where I can understand it quite yet, unfortunately.

I have not yet begun to study multivariate calculus. I am currently reviewing single variable calculus in preparation for linear algebra. I reviewed Fermat's Theorem yesterday and recalled that I had encountered a problem where applying it in the multivariate context would have been helpful. So I was primarily interested in whether my intuition that such application was possible was conceptually sound.

I'm curious, though, why the existence of a partial derivative with respect to each variable does not imply that the function is differentiable. I thought a whole derivative was either (a) an ordered set of partial derivative values or (b) the vectors sum of the partial derivatives. So if you can calculate partial derivatives, how could the function not be differentiable? (I could be way off here, but figured it wouldn't hurt to ask)
 
  • #6
elementbrdr said:
HallsofIvy, your last statement is interesting. I'm not sure if I'm at the point where I can understand it quite yet, unfortunately.

I have not yet begun to study multivariate calculus. I am currently reviewing single variable calculus in preparation for linear algebra. I reviewed Fermat's Theorem yesterday and recalled that I had encountered a problem where applying it in the multivariate context would have been helpful. So I was primarily interested in whether my intuition that such application was possible was conceptually sound.

I'm curious, though, why the existence of a partial derivative with respect to each variable does not imply that the function is differentiable. I thought a whole derivative was either (a) an ordered set of partial derivative values or (b) the vectors sum of the partial derivatives. So if you can calculate partial derivatives, how could the function not be differentiable? (I could be way off here, but figured it wouldn't hurt to ask)

Think of the following one-dimensional analogy:

A function f(x) is 1 at all rational points (set Q), but f(x)=x for all irrational points (set I).

Now, if you constrain your limiting procedure onto (set Q), you will find that the "partial" derivative on THAT set equals 0, at all points in Q.

But on (set I), you'll get the "partial" derivative equal to 1, at all points in I. Thus, at every point, you may say that a "partial" derivative exists, but the function is NOT differentiable on the reals as such. What is required of a differentiable function is that along each and every path converging to some point, the differentiation procedure must give the same answer.
That is a much stricter requirement than the definition of the partial derivative imposes, and thus all partial derivatives may exist even though the function isn't differentiable.
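A standard two-variable counterexample making the same point (not from this thread, but commonly cited) is f(x,y) = xy/(x² + y²) with f(0,0) = 0: both partial derivatives exist at the origin, yet f is not even continuous there.

```python
# f(x, y) = x*y / (x**2 + y**2), with f(0, 0) defined as 0.
# Along both axes f is identically 0, so both partials at the origin exist
# (and equal 0), yet along the line y = x the function is constantly 1/2.

def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

h = 1e-6
fx = (f(h, 0) - f(-h, 0)) / (2 * h)   # partial wrt x at the origin: 0
fy = (f(0, h) - f(0, -h)) / (2 * h)   # partial wrt y at the origin: 0
along_diagonal = f(1e-9, 1e-9)        # 1/2, no matter how close to (0, 0)
print(fx, fy, along_diagonal)  # 0.0 0.0 0.5
```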
 
  • #7
That makes sense. But f(x) is not continuous, so it cannot be differentiable, right? If you expand your analogy to two dimensions using a function f(x,y), with x and y taking the place of Q and I, respectively, then wouldn't you have f'(x) = (d/dx, d/dy) = (0,1) at all points? I'm having trouble visualizing how you could obtain a complete set of partial derivatives for all variables of a function but fail to have a differentiable function. Again, I haven't taken multivariate, so I'm just trying to apply single-variable calc concepts here...
 
  • #8
Arildno, I received your response by email. I don't see it on the forum yet, though. What you say makes a lot of sense. If I understand you correctly, you are saying that, for a function f(x,y), the existence of the partial derivatives d/dx and d/dy only represent 4 possible approaches to a given point f(a,b) out of an infinite number of possible approaches (not sure if I'm using proper terminology). But if that is correct, then how can one test whether a function f(x,y) is actually differentiable?
 
  • #9
elementbrdr said:
Arildno, I received your response by email. I don't see it on the forum yet, though. What you say makes a lot of sense. If I understand you correctly, you are saying that, for a function f(x,y), the existence of the partial derivatives d/dx and d/dy only represent 4 possible approaches to a given point f(a,b) out of an infinite number of possible approaches (not sure if I'm using proper terminology). But if that is correct, then how can one test whether a function f(x,y) is actually differentiable?
Strange.
Don't know how the e-mail was activated??
 
  • #10
Did you post and then delete? I have instant email notification activated for responses to my threads. Here's what I received in my inbox:

1. As to continuity:
f is continuous, when restricting our limiting process upon the subset specified.
That is why it can be "partially" differentiable there!

2. However, neither of the two subsets I specify are connected sets (they consist, so to speak, of isolated points), something that is peculiar for the one-variable case, but not for higher dimensional cases (the x-axis is a connected set, and so is the y-axis).


3. Well, [tex] \frac{ \partial f }{ \partial x } [/tex] constrains us to only look at how f behaves along the x-axis (or along an axis parallel to it).
It does NOT take into account how we may approach a point further along the x-axis by LEAVING the x-axis, and then rejoining it at some other point.
And it is precisely this restricted perspective of the partial derivative(s) that makes it possible for all partial derivatives to exist, even though the function remains non-differentiable at some point.
 
  • #11
That's probably it.

Anyhow.

What is the truly essential idea about "continuity"?

It is that if you are "close enough" in your argument space of values, you'll be "close enough" in your space of function values.

But, if your argument space is not a line, but a plane, being "close enough" to some point is to be within some circle of sufficiently tiny radius to that point.
Is that clear?

Take the function [tex]f(x,y)=x^{2}+y^{2}[/tex]
How can we prove that this is continuous everywhere?

Well, if we pick a point [tex](x_{0},y_{0})[/tex], any other point in the plane can be represented as [tex](x_{0}+r\cos\theta,y_{0}+r\sin\theta)[/tex].

Now, we look at the difference between the assumed limit [tex]L=x_{0}^{2}+y_{0}^{2}[/tex] (i.e., what it needs to be if continuous) and the general expression of f:

[tex]|(x_{0}+r\cos\theta)^{2}+(y_{0}+r\sin\theta)^{2}-x_{0}^{2}-y_{0}^{2}|=r|2x_{0}\cos\theta+2y_{0}\sin\theta+r|[/tex], an expression that:

Strictly vanishes when r goes to zero, wholly independent of the angle (which will be different for different paths towards our point)!

Thus, we have proven continuity of the function. Now, tell me if this is OK, and if it is, we can go on with the derivatives for higher-dimensional functions.
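The polar-coordinate bound above can be checked numerically. This sketch uses an assumed sample point (x₀, y₀) = (3, −1) and verifies that, at each radius r, the difference |f − L| stays below r(2|x₀| + 2|y₀| + r) for every angle:

```python
# Numerical check of the angle-independent bound for f(x, y) = x**2 + y**2:
# |f(x0 + r*cos(t), y0 + r*sin(t)) - L| = r*|2*x0*cos(t) + 2*y0*sin(t) + r|
#                                      <= r*(2*|x0| + 2*|y0| + r).
import math

x0, y0 = 3.0, -1.0          # assumed sample point
L = x0**2 + y0**2           # the claimed limit value

results = []
for r in (1e-1, 1e-3, 1e-6):
    worst = max(
        abs((x0 + r * math.cos(t))**2 + (y0 + r * math.sin(t))**2 - L)
        for t in [k * 2 * math.pi / 360 for k in range(360)]
    )
    bound = r * (2 * abs(x0) + 2 * abs(y0) + r)
    results.append(worst <= bound + 1e-12)
print(results)  # [True, True, True]: the gap shrinks with r, whatever the angle
```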
 
  • #12
I think I follow your explanation, though I doubt I could personally prove the continuity of other functions. But the basic idea in this case is that the difference vanishes as r -> 0, leaving x^2 + y^2 as the limit. Conceptually this means that the function's values at nearby points stay within a certain tolerance for error, provided those points lie within a sufficiently small radius of the given point. And you could formalize this in the same way that a limit is precisely defined for single-variable functions, by showing a necessary relationship between the variance of (x,y) and f(x,y).
 

1. What is Fermat's theorem applied to multivariate functions?

Fermat's theorem applied to multivariate functions is a mathematical principle stating that if a function has a local maximum or minimum at a point where it is differentiable, then every partial derivative of the function at that point is equal to zero (i.e., the gradient is zero). This theorem is used to find candidate extreme points in multivariate functions, where there are multiple independent variables.

2. How is Fermat's theorem applied to multivariate functions used in real-world applications?

Fermat's theorem applied to multivariate functions has many practical applications, such as in economics, physics, and engineering. It can be used to optimize processes and find the most efficient solutions to problems with multiple variables.

3. Can Fermat's theorem be applied to functions with more than two variables?

Yes, Fermat's theorem can be applied to functions with any number of variables. The basic principle remains the same: if the function has a local maximum or minimum at a point where it is differentiable, then every partial derivative at that point is equal to zero.

4. Are there any limitations to using Fermat's theorem applied to multivariate functions?

While Fermat's theorem is a powerful tool, it does have some limitations. It only applies at points where the function is differentiable, it identifies candidate points rather than guaranteeing an extremum, and it may not give the global maximum or minimum of a function.

5. How is Fermat's theorem applied to multivariate functions related to optimization?

Fermat's theorem is closely related to optimization, as it is often used to find the maximum or minimum value of a multivariate function. This is useful in optimization problems where the goal is to find the most efficient solution or the highest possible profit.
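As a minimal sketch of this connection, here is plain gradient descent on an assumed toy objective f(x,y) = (x − 2)² + (y + 1)², driving the gradient toward zero exactly as the first-order condition requires (the example function and step size are illustrative, not from the thread):

```python
# Sketch: locating a minimum by driving the gradient to zero.
# f(x, y) = (x - 2)**2 + (y + 1)**2 has its unique critical point at (2, -1).

def grad(x, y):
    # Gradient of f: (df/dx, df/dy).
    return 2 * (x - 2), 2 * (y + 1)

x, y = 0.0, 0.0              # arbitrary starting point
for _ in range(200):
    gx, gy = grad(x, y)
    x -= 0.1 * gx            # step against the gradient
    y -= 0.1 * gy

gx, gy = grad(x, y)
print(abs(gx) < 1e-8 and abs(gy) < 1e-8)  # True: we stopped at a critical point
```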
