Definition of a derivative in several variables

SUMMARY

The discussion centers on the definition of a derivative in several variables, specifically for functions of two variables. A function f(x, y) is differentiable at a point if it can be expressed as f(a+h, b+k) - f(a, b) = A1h + A2k + √(h² + k²)ρ(h, k), where ρ(h, k) approaches zero as (h, k) approaches (0, 0). The participants clarify that A1 and A2 represent the partial derivatives at the point, and they explore the implications of this definition through the example of f(x, y) = sin(x + y) at the point (1, 1). The discussion emphasizes the importance of understanding linear transformations and the best linear approximation in the context of differentiability.

PREREQUISITES
  • Understanding of partial derivatives and their notation.
  • Familiarity with limits and continuity in multivariable calculus.
  • Knowledge of linear transformations and their properties.
  • Basic understanding of trigonometric functions and their derivatives.
NEXT STEPS
  • Study the concept of partial derivatives in depth, focusing on their geometric interpretation.
  • Learn about the application of the limit definition of differentiability in multivariable functions.
  • Explore linear approximations and their significance in calculus, particularly in R².
  • Investigate the role of the epsilon-delta definition in proving differentiability at a point.
USEFUL FOR

Students of multivariable calculus, mathematics educators, and anyone seeking to deepen their understanding of differentiability in functions of several variables.

Gramsci

Homework Statement


Hello, I'm trying to grasp the definition of a derivative in several variables, that is, to say if it's differentiable at a point.
My book tells me that a function of two variables is differentiable if:
[tex]f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
And if [tex]\rho[/tex] goes to zero as (h, k) → (0, 0). First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Apart from that, I'm trying to show that a derivative exists at a point:
[tex]f(x,y) = sin(x+y)[/tex] at (1,1)

Homework Equations


-

The Attempt at a Solution


To show that the derivative exists:
[tex]f(1+h,1+k)-f(h,k) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
and:
[tex]\rho(h,k) = \frac{\sin(2+(h+k))}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq 0 \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = 0[/tex]
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
 
Gramsci said:

Homework Statement


Hello, I'm trying to grasp the definition of a derivative in several variables, that is, to say if it's differentiable at a point.
My book tells me that a function of two variables is differentiable if:
[tex]f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)[/tex]
And if [tex]\rho[/tex] goes to zero as (h, k) → (0, 0). First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Just as the derivative in one variable, of f(x), gives the slope of the tangent line to y= f(x), so the derivative in two variables gives the inclination of a tangent plane to the surface z= f(x,y). Why that exact formula is the right one requires a little linear algebra. In general, if [itex]f(x_1, x_2, ..., x_n)[/itex] is a function of n variables, its derivative at [itex](x_{01}, x_{02}, ..., x_{0n})[/itex] is not a number or a set of numbers but the linear function that best approximates f in some neighborhood of that point. In one variable, any linear function is of the form y= mx+ b. Since b is given by the point, the new information is "m" and we think of that as being the derivative. If [itex]f(x_1, x_2, ..., x_n)[/itex] is a function from [itex]R^n[/itex] to R, then a linear function from [itex]R^n[/itex] to R is of the form [itex]y= a_1x_1+ a_2x_2+ ... + a_nx_n+ b[/itex], which we can think of as [itex]y- b= <a_1, a_2, ..., a_n>\cdot <x_1, x_2, ..., x_n>[/itex], the product being the dot product of two vectors. In that way, we can think of the vector [itex]<a_1, a_2, ..., a_n>[/itex] as being the derivative.
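To make the "best linear approximation" idea concrete, here is a small numerical sketch (Python; my own illustration, not from the thread, using the thread's f(x, y) = sin(x + y) at (1, 1)): the candidate linear map is the dot product with the gradient, and the scaled remainder should shrink as the step shrinks.

```python
import math

# Sketch: treat the derivative of f(x, y) = sin(x + y) at (1, 1) as the
# linear map (h, k) -> <cos 2, cos 2> . <h, k>, and watch the remainder,
# divided by the step length, go to 0.
def f(x, y):
    return math.sin(x + y)

a, b = 1.0, 1.0
A1 = A2 = math.cos(a + b)  # both partial derivatives equal cos(2)

def rho(h, k):
    """Remainder of the linear approximation, scaled by the step length."""
    return (f(a + h, b + k) - f(a, b) - A1 * h - A2 * k) / math.hypot(h, k)

for s in (1e-1, 1e-2, 1e-3):
    print(s, rho(s, s))  # shrinks toward 0, as differentiability requires
```

If the cos(2) coefficients are replaced by anything else, the printed ratio stops shrinking, which is exactly why the A's must be the partial derivatives.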

A more precise definition is this: if [itex]f: R^n\to R^m[/itex], the derivative of f at [itex]x_0= (x_{01}, x_{02}, ..., x_{0n})[/itex] is the linear transformation, L, from [itex]R^n\to R^m[/itex] such that
[tex]f(x)= f(x_0)+ L(x- x_0)+ \epsilon(x)[/tex]
for some function [itex]\epsilon[/itex] from [itex]R^n[/itex] to [itex]R^m[/itex] such that
[tex]\lim_{x\to x_0} \frac{\epsilon(x)}{|x- x_0|}= 0[/tex]
where |x- x_0| is the length of the vector x- x_0.

Of course, that requires that [itex]\epsilon(x)[/itex] go to 0 as x goes to [itex]x_0[/itex], so this is an approximation of f around [itex]x_0[/itex]. The requirement that [itex]\epsilon(x)/|x- x_0|[/itex] also go to 0 is essentially the requirement that this be the best linear approximation.

In the case [itex]R^2\to R[/itex], a "real valued function of two variables", as I said, any linear transformation from [itex]R^2[/itex] to R can be represented as a dot product: [itex]<a, b>\cdot<x, y>= ax+ by[/itex] so the definition above becomes:
[tex]f(x,y)= f(x_0, y_0)+ a(x- x_0)+ b(y- y_0)+ \epsilon(x,y)[/tex]
and
[tex]\lim_{(x,y)\to (x_0,y_0)}\frac{\epsilon(x,y)}{|(x-x_0, y- y_0)|}= 0[/tex]
Of course, in two dimensions that "length" is [itex]\sqrt{(x-x_0)^2+ (y- y_0)^2}[/itex].
If we take [itex]x- x_0= h[/itex] and [itex]y- y_0= k[/itex], that last says that
[tex]\frac{\epsilon}{\sqrt{h^2+ k^2}}[/tex]
goes to 0. If we set that ratio equal to [itex]\rho(h, k)[/itex], then [itex]\epsilon= \rho(h,k)\sqrt{h^2+ k^2}[/itex] and the result is your formula.

Gramsci said:

Apart from that, I'm trying to show that a derivative exists at a point:
[tex]f(x,y) = \sin(x+y)[/tex] at (1,1)
To show that the derivative exists:
[tex]f(1+h,1+k)-f(h,k) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)[/tex]

How did you get "0*h" and "0*k" here? The "A1" and "A2" in your formula are the partial derivatives at the point which, here, are both cos(2), not 0.

Gramsci said:

and:
[tex]\rho(h,k) = \frac{\sin(2+(h+k))}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq 0 \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = 0[/tex]
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
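For reference, one standard way to finish the limit argument, sketched here as a Taylor bound (an editorial addition, not taken from the thread): with the corrected coefficients [itex]A_1= A_2= \cos(2)[/itex], the remainder term is
[tex]\rho(h,k)= \frac{\sin(2+h+k)- \sin(2)- \cos(2)(h+k)}{\sqrt{h^2+k^2}}[/tex]
and Taylor's inequality for sine, [itex]|\sin(2+t)- \sin(2)- \cos(2)t|\le t^2/2[/itex], applied with [itex]t= h+k[/itex] together with [itex](h+k)^2\le 2(h^2+k^2)[/itex], gives
[tex]|\rho(h,k)|\le \frac{(h+k)^2}{2\sqrt{h^2+k^2}}\le \sqrt{h^2+k^2}\to 0.[/tex]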
 
HallsofIvy:
I'll show you where I got the "0*h" and "0*k" from a previous example here. I probably misunderstood something, but just so you can see what I'm thinking.
Let's say we want to show that f(x,y) = xy is differentiable at (1,1).

[tex]f(1+h,1+k)-f(1,1) = (1+h)(1+k)-1 = h+k+hk = 1\cdot h+1\cdot k+\sqrt{h^2+k^2}\cdot\frac{hk}{\sqrt{h^2+k^2}}[/tex]
so [itex]A_1=A_2=1[/itex] and [itex]\rho(h,k) = hk/\sqrt{h^2+k^2}[/itex].
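This ρ for f(x, y) = xy can also be checked numerically; a quick sketch in Python (my addition, not part of the original exchange):

```python
import math

# Sanity check (not a proof): for f(x, y) = x*y at (1, 1),
# rho(h, k) = h*k / sqrt(h^2 + k^2) should shrink as (h, k) -> (0, 0).
def rho(h, k):
    return h * k / math.hypot(h, k)

for s in (1e-1, 1e-2, 1e-3):
    print(s, rho(s, s))  # equals s / sqrt(2), so it goes to 0 linearly
```

The bound [itex]|hk|\le (h^2+k^2)/2[/itex] gives [itex]|\rho(h,k)|\le \sqrt{h^2+k^2}/2[/itex], which is the proof behind what the numbers show.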
 
