Definition of a derivative in several variables

Gramsci

Homework Statement


Hello, I'm trying to grasp the definition of the derivative in several variables, that is, how to tell whether a function is differentiable at a point.
My book tells me that a function of two variables is differentiable at (a,b) if:
f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)
where \rho(h,k) goes to zero as (h,k) \to (0,0). First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Apart from that, I'm trying to show that the derivative exists at a point:
f(x,y) = \sin(x+y) at (1,1)

Homework Equations


-

The Attempt at a Solution


To show that the derivative exists:
f(1+h,1+k)-f(1,1) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)
and:
\rho(h,k) = \frac{\sin(2+(h+k))-\sin(2)}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq (0,0) \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = (0,0)
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
 
Gramsci said:

Homework Statement


Hello, I'm trying to grasp the definition of the derivative in several variables, that is, how to tell whether a function is differentiable at a point.
My book tells me that a function of two variables is differentiable at (a,b) if:
f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)
where \rho(h,k) goes to zero as (h,k) \to (0,0). First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Just as the derivative in one variable, of f(x), gives the slope of the tangent line to y= f(x), so the derivative in two variables gives the inclination of the tangent plane to the surface z= f(x,y). Why it takes that exact form requires a little linear algebra.

In general, if f(x_1, x_2, ..., x_n) is a function of n variables, its derivative at (x_{01}, x_{02}, ..., x_{0n}) is not a number or a set of numbers but the linear function that best approximates f in some neighborhood of that point. In one variable, any linear function is of the form y= mx+ b. Since b is given by the point, the new information is "m", and we think of that as being the derivative.

If f(x_1, x_2, ..., x_n) is a function from R^n to R, then a linear function from R^n to R is of the form y= a_1x_1+ a_2x_2+ ... + a_nx_n+ b, which we can think of as y- b= <a_1, a_2, ..., a_n>\cdot <x_1, x_2, ..., x_n>, the product being the dot product of two vectors. In that way, we can think of the vector <a_1, a_2, ..., a_n> as being the derivative.
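To make the "best linear approximation" idea concrete, here is a small numerical sketch of my own (not from the book; the function f(x) = x^2 and the point x0 = 3 are just hypothetical choices): in one variable, the derivative is the unique slope m for which the error of the line y = f(x0) + m(x - x0) shrinks faster than |x - x0|.

```python
# Sketch, my own illustration: for a hypothetical f(x) = x**2 at x0 = 3,
# f'(3) = 6. Only the slope m = 6 makes error/|h| tend to 0; any other
# slope leaves a ratio near the constant |m - 6|.
def f(x):
    return x**2

x0 = 3.0
for m in (5.0, 6.0, 7.0):
    # error of the linear approximation, divided by |h|
    ratios = [abs(f(x0 + h) - (f(x0) + m * h)) / h for h in (1e-1, 1e-2, 1e-3)]
    print(m, ratios)  # only m = 6 drives the ratio toward 0
```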

A more precise definition is this: if f: R^n\to R^m, the derivative of f at x_0= (x_{01}, x_{02}, ..., x_{0n}) is the linear transformation, L, from R^n to R^m such that
f(x)= f(x_0)+ L(x- x_0)+ \epsilon(x)
for some function \epsilon from R^n to R^m such that
\lim_{x\to x_0} \frac{\epsilon(x)}{|x- x_0|}= 0
where |x- x_0| is the length of the vector x- x_0.

Of course, that requires that \epsilon(x) go to 0 as x goes to x_0, so this is an approximation of f around x_0. The requirement that \epsilon/|x- x_0| also go to 0 is essentially the requirement that this be the best linear approximation.
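Here is a numerical sketch of that general definition (my own example, not from the thread; the map f(x,y) = (xy, x + y^2) and the point (1, 2) are hypothetical): L is the matrix of partial derivatives, and |\epsilon(x)|/|x - x_0| shrinks as x approaches x_0.

```python
import math

# Sketch (my own example): f : R^2 -> R^2, f(x, y) = (x*y, x + y**2).
# At (x0, y0) = (1, 2) the linear transformation L has the matrix of
# partial derivatives [[y0, x0], [1, 2*y0]] = [[2, 1], [1, 4]].
def f(x, y):
    return (x * y, x + y**2)

L = [[2.0, 1.0], [1.0, 4.0]]

for t in (1e-1, 1e-2, 1e-3):
    h, k = t, t
    Lhk = (L[0][0] * h + L[0][1] * k, L[1][0] * h + L[1][1] * k)
    eps = (f(1 + h, 2 + k)[0] - f(1, 2)[0] - Lhk[0],
           f(1 + h, 2 + k)[1] - f(1, 2)[1] - Lhk[1])
    print(t, math.hypot(*eps) / math.hypot(h, k))  # |epsilon(x)|/|x - x0| -> 0
```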

In the case R^2\to R, a "real valued function of two variables", as I said, any linear transformation from R^2 to R can be represented as a dot product: <a, b>\cdot<x, y>= ax+ by, so the definition above becomes:
f(x,y)= f(x_0, y_0)+ a(x- x_0)+ b(y- y_0)+ \epsilon(x,y)
and
\lim_{(x,y)\to (x_0,y_0)}\frac{\epsilon(x,y)}{|(x- x_0, y- y_0)|}= 0
Of course, in two dimensions that "length" is \sqrt{(x-x_0)^2+ (y- y_0)^2}.
If we take x- x_0= h and y- y_0= k, that last says that
\frac{\epsilon}{\sqrt{h^2+ k^2}}
goes to 0. If we set that quotient equal to \rho(h, k), then \epsilon= \rho(h,k)\sqrt{h^2+ k^2} and the result is your formula.

Gramsci said:

Apart from that, I'm trying to show that the derivative exists at a point:
f(x,y) = \sin(x+y) at (1,1)
To show that the derivative exists:
f(1+h,1+k)-f(1,1) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)

How did you get "0*h" and "0*k" here? The A_1 and A_2 in your formula are the partial derivatives at the point which, here, are both \cos(2), not 0.

Gramsci said:

and:
\rho(h,k) = \frac{\sin(2+(h+k))-\sin(2)}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq (0,0) \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = (0,0)
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
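As a quick numerical check (my own sketch, not part of the original exchange) that with A_1 = A_2 = \cos(2) the remainder \rho(h,k) really does go to 0 for f(x,y) = \sin(x+y) at (1,1):

```python
import math

# My own sketch: with A1 = A2 = cos(2), the remainder rho(h, k) for
# f(x, y) = sin(x + y) at (1, 1) shrinks proportionally to |(h, k)|.
def f(x, y):
    return math.sin(x + y)

A1 = A2 = math.cos(2.0)  # partial derivatives of sin(x + y) at (1, 1)

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    h = k = t
    # rho(h, k) = (f(1+h, 1+k) - f(1, 1) - A1*h - A2*k) / sqrt(h^2 + k^2)
    rho = (f(1 + h, 1 + k) - f(1, 1) - A1 * h - A2 * k) / math.hypot(h, k)
    print(f"h = k = {t:.0e}: rho = {rho:.3e}")
```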
 
HallsofIvy:
I'll show you where I got the 0*h from: a previous example here. I probably misunderstood something, but just to show what I'm thinking.
Let's say we want to show that f(x,y) = xy is differentiable at (1,1).

f(1+h,1+k)-f(1,1) = (1+h)(1+k)-1 = h+k+hk = 1\cdot h+1\cdot k+\sqrt{h^2+k^2}\cdot\frac{hk}{\sqrt{h^2+k^2}} \Rightarrow A_1=A_2=1 \text{ and } \rho(h,k) = \frac{hk}{\sqrt{h^2+k^2}}
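For what it's worth, in this xy example the limit \rho(h,k) \to 0 can be proved directly (my own addition, not from the thread) using the elementary bound 2|hk| \leq h^2+k^2:

|\rho(h,k)| = \frac{|hk|}{\sqrt{h^2+k^2}} \leq \frac{(h^2+k^2)/2}{\sqrt{h^2+k^2}} = \frac{\sqrt{h^2+k^2}}{2} \to 0 \text{ as } (h,k) \to (0,0)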
 