Gramsci said:
Homework Statement
Hello, I'm trying to grasp the definition of the derivative in several variables, that is, how to tell whether a function is differentiable at a point.
My book tells me that a function of two variables is differentiable at (a,b) if:
f(a+h,b+k)-f(a,b) = A_1h+A_2k+\sqrt{h^2+k^2}\rho(h,k)
where \rho(h,k) \to 0 as (h,k) \to (0,0). First of all, how did one "come up" with this? It seems a bit arbitrary to me, which I am sure it is not.
Just as the derivative in one variable of f(x) gives the slope of the tangent line to y= f(x), so the derivative in two variables gives the inclination of the tangent plane to the surface z= f(x,y). Why that exact formula? That requires a little linear algebra.

In general, if f(x_1, x_2, ..., x_n) is a function of n variables, its derivative at (x_{01}, x_{02}, ..., x_{0n}) is not a number or a set of numbers but the linear function that best approximates f in some neighborhood of that point. In one variable, any linear function is of the form y= mx+ b. Since b is given by the point, the new information is "m", and we think of that as being the derivative. If f(x_1, x_2, ..., x_n) is a function from R^n to R, then a linear function from R^n to R is of the form y= a_1x_1+ a_2x_2+ ... + a_nx_n+ b, which we can write as y- b= <a_1, a_2, ..., a_n>\cdot <x_1, x_2, ..., x_n>, the product being the dot product of two vectors. In that way, we can think of the vector <a_1, a_2, ..., a_n> as being the derivative.
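To make that concrete, here is a small numerical sketch (my own illustration, not from your textbook) using the hypothetical function f(x,y) = x^2 + 3y. Its coefficient vector <2x, 3> plays the role of the derivative, and the dot product with the displacement <h, k> gives the linear approximation; the error divided by the length of the displacement shrinks as the displacement shrinks.

```python
import math

def f(x, y):
    # A sample function, f(x,y) = x^2 + 3y (my own choice for illustration).
    return x**2 + 3*y

def grad_f(x, y):
    # The coefficient vector <a_1, a_2> = <2x, 3> acts as the derivative.
    return (2*x, 3)

x0, y0 = 1.0, 2.0
a1, a2 = grad_f(x0, y0)
for h, k in [(0.1, 0.1), (0.01, -0.02), (0.001, 0.001)]:
    actual = f(x0 + h, y0 + k) - f(x0, y0)
    linear = a1*h + a2*k                 # dot product <a1,a2> . <h,k>
    ratio = abs(actual - linear) / math.hypot(h, k)
    print(h, k, ratio)                   # the ratio shrinks as (h,k) -> 0
```

Here the error is exactly h^2, so the printed ratio h^2/\sqrt{h^2+k^2} visibly goes to 0, which is the "best linear approximation" condition below.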
A more precise definition is this: if f: R^n\to R^m, the derivative of f at x_0= (x_{01}, x_{02}, ..., x_{0n}) is the linear transformation, L, from R^n\to R^m such that
f(x)= f(x_0)+ L(x- x_0)+ \epsilon(x)
for some function \epsilon from R^n to R^m such that
\lim_{x\to x_0} \frac{\epsilon(x)}{|x- x_0|}= 0
where |x- x_0| is the length of the vector x- x_0.
Of course, that requires that \epsilon(x) go to 0 as x goes to x_0, so this is an approximation of f around x_0. The requirement that \epsilon(x)/|x- x_0| also go to 0 is essentially the requirement that this be the best linear approximation.
In the case R^2\to R, a "real valued function of two variables", as I said, any linear transformation from R^2 to R can be represented as a dot product: <a, b>\cdot<x, y>= ax+ by so the definition above becomes:
f(x, y)= f(x_0, y_0)+ a(x- x_0)+ b(y- y_0)+ \epsilon(x, y)
and
\lim_{(x,y)\to (x_0,y_0)}\frac{\epsilon(x,y)}{|(x-x_0, y- y_0)|}= 0
Of course, in two dimensions that "length" is \sqrt{(x-x_0)^2+ (y- y_0)^2}.
If we take x- x_0= h and y- y_0= k, that last says that
\frac{\epsilon}{\sqrt{h^2+ k^2}}
goes to 0. If we set that quotient equal to \rho(h, k), then \epsilon= \rho(h,k)\sqrt{h^2+ k^2} and the result is your formula.
Gramsci said:
Apart from that, I'm trying to show that the derivative exists at a point:
f(x,y) = \sin(x+y) at (1,1)

Homework Equations
-

The Attempt at a Solution
To show that the derivative exists:
f(1+h,1+k)-f(1,1) = \sin(2+(h+k)) -\sin(2) = 0*h+0*k+\sqrt{h^2+k^2}\rho(h,k)

How did you get "0*h" and "0*k" here? The A_1 and A_2 in your formula are the partial derivatives at the point which, here, are both \cos(2), not 0.
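As a quick numerical sanity check (my own sketch, not part of the assignment), with A_1 = A_2 = \cos 2 the quotient \rho(h,k) really does shrink as (h,k) approaches (0,0):

```python
import math

# Sanity check (a sketch): with A1 = A2 = cos(2), the quotient
#   rho(h,k) = [sin(2+h+k) - sin(2) - A1*h - A2*k] / sqrt(h^2 + k^2)
# should go to 0 as (h,k) -> (0,0).
A1 = A2 = math.cos(2)  # partial derivatives of sin(x+y) at (1,1)

def rho(h, k):
    eps = math.sin(2 + h + k) - math.sin(2) - A1*h - A2*k
    return eps / math.hypot(h, k)

for t in (0.1, 0.01, 0.001):
    print(t, rho(t, t))  # the magnitude decreases roughly linearly in t
```

With the 0 coefficients from your attempt, by contrast, the quotient does not go to 0, which is why the attempt stalls.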
<br />
Gramsci said:
and:
\rho(h,k) = \frac{\sin(2+(h+k)) - \sin(2)}{\sqrt{h^2+k^2}} \text{ if } (h,k) \neq (0,0) \text{ and } \rho(h,k) = 0 \text{ if } (h,k) = (0,0)
Then I guess I'm supposed to prove that the limit goes to zero, but how do I do it in this case?
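Once you use the correct coefficients A_1 = A_2 = \cos 2, here is a sketch of how the limit argument can go (writing u = h+k as an abbreviation of my own):
\epsilon(h,k) = \sin(2+u) - \sin 2 - u\cos 2 = \sin 2\,(\cos u - 1) + \cos 2\,(\sin u - u)
using the angle-addition formula \sin(2+u) = \sin 2\cos u + \cos 2\sin u. Both \cos u - 1 and \sin u - u are of order u^2 for small u, and |u| = |h+k| \le \sqrt{2}\sqrt{h^2+k^2}, so \epsilon(h,k)/\sqrt{h^2+k^2} \to 0 as (h,k) \to (0,0), which is exactly the condition \rho(h,k) \to 0.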