F(x,y) = |x| + |y|

  • Thread starter Buri
  • #1
Buri

Homework Statement



f: R^2 -> R, f(x,y) = |x| + |y|

(a) Find all directional derivatives at (0,0) in the direction of u not equal to zero, if they exist, and evaluate them when they do.

(b) Do the partial derivatives exist at (0,0)?

(c) Is it differentiable at (0,0)?

(d) Is it continuous at (0,0)?

Solutions:

(a) No, they do not exist. I have:

f'(0; u) = lim [t->0] (1/t)[f((0,0) + t(h,k)) - f(0,0)] = lim [t->0] (1/t)(|th| + |tk|),

which equals |h| + |k| for t > 0 and -|h| - |k| for t < 0. So the "left and right" limits aren't equal, and therefore the limit cannot exist.

(b) No, because the directional derivatives don't exist.

(c) No, it is not differentiable. Since the directional derivatives at 0 don't exist, it cannot be differentiable at 0.

(d) Yes, it is continuous at 0.

Note that

lim [ (x,y) -> (0,0) ] |x| = 0 and lim [ (x,y) -> (0,0) ] |y| = 0

Both of these are proved by taking delta = epsilon. Since both limits exist, the limit of their sum exists, so we have:

lim [ (x,y) -> (0,0) ] f(x,y) = lim [ (x,y) -> (0,0) ] (|x| + |y|) = lim [ (x,y) -> (0,0) ] |x| + lim [ (x,y) -> (0,0) ] |y| = 0 + 0 = 0.
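
Written out as a single delta-epsilon estimate (here I'm taking delta = epsilon/2 for the sum instead of delta = epsilon for each piece separately):

[tex]\sqrt{x^2+y^2} < \delta \;\Rightarrow\; |f(x,y) - 0| = |x| + |y| \le 2\sqrt{x^2+y^2} < 2\delta = \epsilon.[/tex]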

Can someone verify whether I'm right or wrong on any of the parts? I'd really appreciate it!!
 

Answers and Replies

  • #2
D H
Directional derivatives for this function do exist in all directions at the origin.
 
  • #3
Buri
Really? So how would I check that it exists? I wasn't sure whether the whole left-and-right-limits argument applies, since it's a multivariate function. Any help on how to go about this?
 
  • #4
D H
The directional derivatives are perhaps a bit easier if you use polar rather than Cartesian coordinates. In any case, the directional derivative in some direction [itex]\hat u[/itex] is given by

[tex]\nabla_{\hat u} f(\vec x) = \lim_{h\to0^+} \frac {f(\vec x + h\hat u)-f(\vec x)}h[/tex]

In other words, the directional derivative is a one-sided derivative. That you might get another value for some other direction or for negative values of h is irrelevant for directional derivatives -- but certainly is relevant for partial derivatives and differentiability.
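
For instance, applying this definition to your function at the origin with a unit vector [itex]\hat u = (u_1, u_2)[/itex] (just a sketch of the computation; the details are for you to check):

[tex]\nabla_{\hat u} f(0,0) = \lim_{h\to0^+} \frac{f(h u_1, h u_2) - f(0,0)}{h} = \lim_{h\to0^+} \frac{h|u_1| + h|u_2|}{h} = |u_1| + |u_2|.[/tex]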
 
  • #5
Buri
I guess this is because it's in the direction of u, so we don't need to consider -u? Why is it relevant for partial derivatives? Aren't partial derivatives simply the directional derivatives in the direction of e_i? So shouldn't it also not matter?

Thanks for your replies btw :)
 
  • #6
D H
I guess this is because it's in the direction of u, so we don't need to consider -u?
Exactly.

Why is it relevant for partial derivatives? Aren't partial derivatives simply the directional derivatives in the direction of e_i? So shouldn't it also not matter?
Because you do need to consider +u and -u in the case of partial derivatives. For example, [itex]\partial f/\partial x[/itex] exists at some point only if the function f has directional derivatives in both the +x and -x directions at the point of interest and these two directional derivatives are negatives of one another (in which case [itex]\partial f/\partial x[/itex] equals the one in the +x direction).
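
For your function at the origin, for instance, both one-sided directional derivatives along the x-axis come out the same (a quick sketch under the definition from post #4, with [itex]\hat e_1[/itex] the standard basis vector):

[tex]\nabla_{\hat e_1} f(0,0) = \lim_{h\to0^+}\frac{f(h,0)-f(0,0)}{h} = \lim_{h\to0^+}\frac{|h|}{h} = 1, \qquad \nabla_{-\hat e_1} f(0,0) = \lim_{h\to0^+}\frac{f(-h,0)-f(0,0)}{h} = \lim_{h\to0^+}\frac{|-h|}{h} = 1.[/tex]

They are not negatives of one another, so [itex]\partial f/\partial x[/itex] does not exist at the origin.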
 
  • #7
Buri
So for this question, my partials wouldn't exist. And it would be differentiable, since the directional derivatives do exist. And it's continuous at 0.

See, I never had directional derivatives defined like yours, with h -> 0+; we simply had h -> 0. And I didn't have the definition of partial derivatives you gave either, just that the directional derivative has to exist in the direction of e_i.

I asked a similar question earlier, for f(x,y) = |xy|^(1/2) for x >= 0 and -|xy|^(1/2) for x < 0. Hurkyl helped me out on it and he did consider h > 0 and h < 0, but I guess I misunderstood. I thought he was considering left and right derivatives, but in reality he was simply handling the two cases since my function is defined piecewise for x >= 0 and x < 0. Would that be right?

EDIT:

Well my u was defined to be u = (h,k) and I was using t rather than h in the definition of the directional derivative.
 
  • #8
D H
Try again. What does it mean for a multivariate function to be differentiable?
 
  • #9
Buri
LOL, no wait! It isn't differentiable! The directional derivative is nonlinear, so it just can't be!
 
  • #10
D H
The directional derivative of the function f(x,y) = x^3 + y in the +x direction is 3x^2 (and in the -x direction it is -3x^2). These are obviously nonlinear functions of x, and yet the function is differentiable.
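
Concretely, the total derivative of that function at (x,y) is the linear map represented by the 1x2 matrix [3x^2 1] (a sketch, with v = (v_1, v_2) an arbitrary direction):

[tex]Df(x,y)(v_1, v_2) = 3x^2\,v_1 + v_2.[/tex]

For each fixed (x,y) this is linear in (v_1, v_2); that the coefficient 3x^2 depends nonlinearly on x is beside the point.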
 
  • #11
Buri
Okay, now I'm really getting confused. We were told that the differential of a multivariate function f: R^n -> R^m is a linear transformation Df = A: R^n -> R^m. We also have that if a function is differentiable then:

D_v f = Df(v)

Here the first is the directional derivative in the direction of v and the second is the differential of f applied to v. For f(x,y) = |x| + |y| the differential would be something like [a b], and at v it would be something like a v_1 + b v_2, which is linear. So I'm confused now. Could you explain?

These are the definitions in Munkres' Analysis on Manifolds, but maybe I've misunderstood it all?
 
  • #12
D H
You appear to be confusing linear transformations with linear operators. Let's just look at one dimension for simplicity. The derivative of some function h(x)=f(x)+g(x) is just the sum of the derivatives: h'(x)=f'(x)+g'(x). Scaling a function by some constant scales the derivative: d/dx(a*f(x))=a*f'(x). That means the one dimensional derivative is a linear operator. That of course does not mean that derivatives are linear functions of x. That would preclude rather useful functions such as exp(x).

The same concept applies beyond one dimension. The gradient remains a linear operator because [itex]\nabla(f+g)=\nabla f + \nabla g[/itex] and [itex]\nabla(af)=a\nabla f[/itex], where [itex]a[/itex] is some constant. Once again, that does not mean that the gradient is necessarily a linear function.
 
  • #13
Buri
I'm not actually sure what you mean by gradient. The differential, I'm guessing? But the differential has to be linear in v, wouldn't it?

See, Munkres seems to do exactly what I did above. It made sense to me before, but now I feel lost.
 
  • #14
Buri
Is there something wrong with what I wrote earlier?

D_v f = Df(v)

Because if it's differentiable I should have the above, right? But clearly it wouldn't be the case that Df(v) is a linear operator? That should be right, though, shouldn't it?
 
  • #15
D H
Let's back up a bit and look at just one dimension. The concept of directional derivatives is closely aligned to the concept of left and right derivatives in one dimension. Consider the function f(x)=|x|. This function has left and right derivatives at all points on the real number line. The left derivative is -1 for all x<=0, +1 for x>0. The right derivative is -1 for x<0, +1 for x>=0. Note that the left derivative is equal to the right derivative except at x=0. At x=0 the left derivative is -1 but the right derivative is +1. This means the function is not differentiable at x=0.

This function also has directional derivatives at all points on the real number line. In this case, the directional derivative in the -x direction is +1 for all x<=0, -1 for x>0. The directional derivative in the +x direction is -1 for all x<0, +1 for all x>=0. Note that, except at x=0, the directional derivatives in the -x and +x directions are negatives of one another. At x=0 the directional derivatives in the -x and +x directions are both +1. This also means the function is not differentiable at x=0.
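
Here is the x = 0 case written out under the one-sided definition from post #4 (the other points work the same way):

[tex]\lim_{h\to0^+}\frac{|0+h|-|0|}{h} = \lim_{h\to0^+}\frac{h}{h} = 1 \quad(+x\text{ direction}),\qquad \lim_{h\to0^+}\frac{|0-h|-|0|}{h} = \lim_{h\to0^+}\frac{h}{h} = 1 \quad(-x\text{ direction}),[/tex]

while the ordinary two-sided limit [itex]\lim_{h\to0}|h|/h[/itex] does not exist, since it is +1 from the right and -1 from the left.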

I need to run, perhaps someone else can help you further.
 
  • #16
Buri
I'm going to sound like a total moron... but could you actually do the calculations for f(x) = |x|? I think it would help me a lot. I'd really appreciate it.
 
  • #17
Yikes! I handed in my homework today saying that the directional derivatives do not exist. But I don't think it would be fair if the prof took off marks, because in Munkres' book he says:

" Definition . Let [tex] A \subset \mathbb{R}^m [/tex]; let [tex] f: A \rightarrow \mathbb{R}^n [/tex]. Suppose [tex] A [/tex] contains a neighborhood of [tex] a [/tex]. Given [tex] u \in \mathbb{R}^m [/tex] with [tex] u \neq 0 [/tex]; define:

[tex] f'(a ; u) = \lim_{t \rightarrow 0} \frac{f(a + tu) - f(a)}{t} [/tex], provided the limit exists."

In this definition, the limit is required to exist as t approaches 0 from both sides, and the two one-sided limits must be equal.
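
With that definition (this is just the computation from post #1 redone in this notation), for f(x,y) = |x| + |y| at a = (0,0) and u = (h,k) not equal to zero the difference quotient is

[tex]\frac{f(tu) - f(0,0)}{t} = \frac{|th| + |tk|}{t} = \frac{|t|}{t}\,\bigl(|h| + |k|\bigr),[/tex]

which tends to |h| + |k| as t -> 0+ and to -(|h| + |k|) as t -> 0-, so the two-sided limit does not exist for any u not equal to zero.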
 
  • #18
Buri
LOL! You're in my class, haha. That's interesting. The question now is, which is correct? I had initially thought it was what you just said, but after I asked on here (to check if I was right or wrong) I got all confused. The same goes for the question with g(x,y) = |xy|^(1/2) for x >= 0 and -|xy|^(1/2) for x < 0. That one gets messy if it's both left- and right-handed limits.
 
  • #19
Do you remember if maybe the professor revised the definition in any of the lectures?
 
  • #20
Okay, I was just thinking about it. Maybe Munkres defined the directional derivatives such that t must go to 0 through both positive and negative values because he was only using the definition of directional derivative to help him define partial derivatives, which does require that t go to 0 through positive and negative values.
 
  • #21
Wikipedia agrees with D H, but http://mathworld.wolfram.com/DirectionalDerivative.html agrees with Munkres' definition. I guess there are just two definitions. The questions you posted were assigned from the book, so it makes sense that we use the book's definition for them.
 
  • #22
D H
I agree. If you have a specific definition, use it.
 
  • #23
Buri
No, I don't recall him ever altering the definition, but I wasn't in class for that lecture. However, the lecture notes I have (which are very thorough) don't indicate that anything was changed.

Yeah, HallsofIvy also says that the directional derivatives of this function don't exist; he uses the same definition. I also checked Spivak, Wolfram and Stewart, and they all agree with Munkres. I happened to check with someone else from class on this and they had D H's result. So I'll probably ask the professor about this one.
 
  • #24
Landau
Okay, now I'm really getting confused. We were told that the differential of a multivariate function f: R^n -> R^m is a linear transformation Df = A: R^n -> R^m. We also have that if a function is differentiable then:

D_v f = Df(v)

Here the first is the directional derivative in the direction of v and the second is the differential of f applied to v. For f(x,y) = |x| + |y| the differential would be something like [a b], and at v it would be something like a v_1 + b v_2, which is linear. So I'm confused now. Could you explain?

That means the one dimensional derivative is a linear operator. That of course does not mean that derivatives are linear functions of x. That would preclude rather useful functions such as exp(x).
This is a common confusion, because the 1-dimensional case from high school calculus is very special.

The (total) derivative of [itex]f:\mathbb{R}^n\to\mathbb{R}^m[/itex] at a is a linear map [itex]Df(a):\mathbb{R}^n\to\mathbb{R}^m[/itex]. Its matrix representation (w.r.t. the standard bases) is just the Jacobi matrix of partial derivatives at a. If f is differentiable at all a in R^n, then we obtain a map
[tex]Df:\mathbb{R}^n\to \text{Lin}(\mathbb{R}^n,\mathbb{R}^m)[/tex]
[tex]a\mapsto Df(a),[/tex]
where [tex]\text{Lin}(\mathbb{R}^n,\mathbb{R}^m)[/tex] is the vector space of all linear maps from R^n to R^m.

Now observe what happens if m=n=1. We have a function [itex]f:\mathbb{R}\to\mathbb{R}[/itex]. Its (total) derivative at a is a linear map [itex]Df(a):\mathbb{R}\to\mathbb{R}[/itex], whose matrix representation is the 1x1 matrix consisting of the value [itex]f'(a)\in\mathbb{R}[/itex]. If f is differentiable at all of R, we get the map
[tex]Df:\mathbb{R}\to \text{Lin}(\mathbb{R},\mathbb{R})[/tex]
[tex]a\mapsto Df(a).[/tex]
But there is a nice linear isomorphism from [tex]\text{Lin}(\mathbb{R},\mathbb{R})[/tex] to good old [itex]\mathbb{R}[/itex] given by [itex]L\mapsto L(1)[/itex], i.e. evaluating at 1! This identification is always implicitly used in high school calculus, so that the above map becomes
[tex]Df:\mathbb{R}\to \text{Lin}(\mathbb{R},\mathbb{R})\cong \mathbb{R}[/tex]
[tex]a\mapsto Df(a)(1).[/tex]

Example:
Take f(x)=exp(5x). Then [itex]f'(a)=5\exp(5a)\in\mathbb{R}[/itex], and Df(a) is the linear map [itex]x\mapsto 5e^{5a}x[/itex], i.e. multiplication by f'(a). Now since f is differentiable at all a in R, we get the map
[tex]Df:\mathbb{R}\to\text{Lin}(\mathbb{R},\mathbb{R})[/tex]
[tex]a\mapsto Df(a).[/tex]
Under the identification just described, we evaluate [itex]Df(a)(1)=f'(a)\cdot 1=f'(a)\in\mathbb{R}[/itex], so that
[tex]Df:\mathbb{R}\to \text{Lin}(\mathbb{R},\mathbb{R})\cong \mathbb{R}[/tex]
[tex]a\mapsto 5e^{5a}.[/tex]



Again, consider [itex]f:\mathbb{R}^n\to\mathbb{R}^m[/itex]. Let [itex]D_vf(a)\in\mathbb{R}^m[/itex] denote the directional derivative of f at a in the direction of v. Then:

If f is differentiable at a (i.e. the linear map Df(a) from above exists), then f has dir.der. at a in all directions [itex]v\in\mathbb{R}^n[/itex], and they are given by
[tex]D_vf(a)=Df(a)v,[/tex]
i.e. D_vf(a) is obtained from Df(a) by evaluating in v, or in matrix notation by multiplying the Jacobi matrix of f at a by the column vector v.
Also, the map
[tex]\mathbb{R}^n\to\mathbb{R}^m[/tex]
[tex]v\mapsto D_vf(a)[/tex]
is linear.
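
Applied to the function in this thread (a brief sketch tying this back to the original question): under the one-sided convention the directional derivatives at the origin are

[tex]D_vf(0,0) = |v_1| + |v_2|, \qquad v = (v_1, v_2) \neq 0,[/tex]

and the map [itex]v\mapsto D_vf(0,0)[/itex] is not linear (for instance [itex]D_{-v}f(0,0) = D_vf(0,0) \neq -D_vf(0,0)[/itex]), so no linear map Df(0,0) satisfying [itex]D_vf(0,0)=Df(0,0)v[/itex] can exist, and f is not differentiable at the origin. Under Munkres' two-sided definition the directional derivatives at the origin simply fail to exist, which rules out differentiability just as well.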
 