# Differentiability in higher dimensions

RiotRick

## Homework Statement

Examine whether the function is differentiable at ##(0,0) \in \mathbb{R}^2##. If yes, calculate the differential Df(0,0).
##f(x,y) = x + y## if ##x > 0## and ##f(x,y) = x + e^{-x^2} \cdot y## if ##x \leq 0## (it's one function, defined piecewise)

## Homework Equations

##\lim_{h \rightarrow 0} \frac{\|f(x_0+h)-f(x_0)-J(h)\|}{\|h\|}=0##

## The Attempt at a Solution

I'm not familiar with functions defined by two subfunctions (a piecewise definition).
So my plan is: I apply the definition to both subfunctions, for x > 0 and for x ≤ 0, and then I show that both are equal at 0.
I calculated the partial derivative for ##x \leq 0##:
##\frac{f(0+h,0)-f(0,0)}{h} = \frac{h+e^{-h^2}*0}{h}= 1##
##\frac{f(0,0+h)-f(0,0)}{h} = \frac{0+e^{0}*h}{h}= 1##
so J = (1,1)
##\frac{f(0+x,0+y)-f(0,0)-(1,1)*(x,y)^T}{\sqrt{x^2+y^2}}=\frac{x+e^{-x^2}*y-0-(x+y)}{\sqrt{x^2+y^2}}##
now if x is less than 0 we can simplify to:
##\frac{y \cdot e^{-x^2}-y}{\sqrt{x^2+y^2}}##
In polar coordinates, ##x = r\cos(\theta)## and ##y = r\sin(\theta)##:
##\frac{r\sin(\theta) \cdot e^{-(r\cos(\theta))^2}-r\sin(\theta)}{r}##
##= \sin(\theta) \cdot e^{-(r\cos(\theta))^2}-\sin(\theta)##
if r approaches 0 then
##\sin(\theta) \cdot e^{0}-\sin(\theta)=\sin(\theta) \cdot 1-\sin(\theta)=0##
Does this work so far? Is my idea okay and can I proceed this way for x>0?
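As a quick sanity check of the attempt above (my own illustration, not part of the original thread): the candidate differential A = (1, 1) can be probed numerically by evaluating the remainder quotient from the definition at points shrinking toward the origin, on both sides of the line x = 0. This is only numerical evidence, not a proof.

```python
import math

def f(x, y):
    """The piecewise function from the problem statement."""
    if x > 0:
        return x + y
    return x + math.exp(-x**2) * y

def remainder(h, k):
    """|f(h,k) - f(0,0) - (1*h + 1*k)| / ||(h,k)|| -- this quotient should
    tend to 0 as (h,k) -> (0,0) if A = (1, 1) really is Df(0,0)."""
    return abs(f(h, k) - f(0.0, 0.0) - (h + k)) / math.hypot(h, k)

# probe shrinking circles around the origin, covering both branches (x > 0 and x <= 0)
for r in (1e-1, 1e-2, 1e-3):
    worst = max(remainder(r * math.cos(t), r * math.sin(t))
                for t in [i * math.pi / 8 for i in range(16)])
    print(f"r = {r:g}: max quotient ~ {worst:.2e}")
```

The printed maxima shrink roughly like ##r^2##, consistent with the polar-coordinate computation above.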

Mentor

## Homework Statement

Examine whether the function is differentiable at ##(0,0) \in \mathbb{R}^2##. If yes, calculate the differential Df(0,0).
##f(x,y) = x + y## if ##x > 0## and ##f(x,y) = x + e^{-x^2} \cdot y## if ##x \leq 0## (it's one function, defined piecewise)

## Homework Equations

##\lim_{h \rightarrow 0} \frac{\|f(x_0+h)-f(x_0)-J(h)\|}{\|h\|}=0##

## The Attempt at a Solution

I'm not familiar with functions defined by two subfunctions (a piecewise definition).
So my plan is: I apply the definition to both subfunctions, for x > 0 and for x ≤ 0, and then I show that both are equal at 0.
I calculated the partial derivative for ##x \leq 0##:
##\frac{f(0+h,0)-f(0,0)}{h} = \frac{h+e^{-h^2}*0}{h}= 1##
##\frac{f(0,0+h)-f(0,0)}{h} = \frac{0+e^{0}*h}{h}= 1##
This is somewhat difficult to follow. The goal here is to find the differential Df at (0, 0). To do that, you need both first partials, and for each partial you need to look at two limits. One limit is as ##h \to 0^+##, and the other is as ##h \to 0^-##.
The work you show above appears to be looking at ##\frac{\partial f}{\partial x}## and ##\frac{\partial f}{\partial y}##, but it doesn't take into account that the function f is defined in a piecewise fashion. For each of these partials to exist, you have to show that the two one-sided limits exist and are equal.
RiotRick said:
so J = (1,1)
##\frac{f(0+x,0+y)-f(0,0)-(1,1)*(x,y)^T}{\sqrt{x^2+y^2}}=\frac{x+e^{-x^2}*y-0-(x+y)}{\sqrt{x^2+y^2}}##
now if x is less than 0 we can simplify to:
##\frac{y \cdot e^{-x^2}-y}{\sqrt{x^2+y^2}}##
In polar coordinates, ##x = r\cos(\theta)## and ##y = r\sin(\theta)##:
##\frac{r\sin(\theta) \cdot e^{-(r\cos(\theta))^2}-r\sin(\theta)}{r}##
##= \sin(\theta) \cdot e^{-(r\cos(\theta))^2}-\sin(\theta)##
if r approaches 0 then
##\sin(\theta) \cdot e^{0}-\sin(\theta)=\sin(\theta) \cdot 1-\sin(\theta)=0##
Does this work so far? Is my idea okay and can I proceed this way for x>0?
See my comment above.

As already noted, the goal here is to find Df(0, 0). Do you know the formula for the differential of a function? For a two-variable function, it should involve dx and dy.
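To make the four one-sided limits concrete, here is a small numerical sketch (my own illustration, not from the thread): the difference quotients for the two partials at the origin, evaluated with h approaching 0 from both sides.

```python
import math

def f(x, y):
    # piecewise definition from the problem statement
    if x > 0:
        return x + y
    return x + math.exp(-x**2) * y

def quotient_x(h):
    # difference quotient for the partial with respect to x at (0, 0)
    return (f(h, 0.0) - f(0.0, 0.0)) / h

def quotient_y(h):
    # difference quotient for the partial with respect to y at (0, 0)
    return (f(0.0, h) - f(0.0, 0.0)) / h

h = 1e-8
# four one-sided limits: h -> 0+ picks up the x > 0 branch for f_x,
# h -> 0- picks up the x <= 0 branch; for f_y we stay on the line x = 0
print(quotient_x(+h), quotient_x(-h))   # both tend to 1
print(quotient_y(+h), quotient_y(-h))   # both tend to 1
```

All four quotients agree in the limit, which is exactly what has to be verified before writing down the partials at (0, 0).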

RiotRick
This is somewhat difficult to follow. The goal here is to find the differential Df at (0, 0). To do that, you need both first partials, and for each partial you need to look at two limits. One limit is as ##h \to 0^+##, and the other is as ##h \to 0^-##.

I'm now quite baffled that my approach with the definition is "hard to follow". First I need to show that the function is differentiable and then calculate Df. I use only the definition of differentiability. With the first steps I calculate my Jacobian matrix to use it for J(h). I don't know how to use your approach, since h disappears anyway.

Mentor
I'm now quite baffled that my approach with the definition is "hard to follow".
First I need to show that the function is differentiable and then calculate Df.
Yes, that's the right idea, but you're not going about it the right way. Below are some of the things you wrote in a previous post, plus my comments.

RiotRick said:
I calculated the partial derivative for ##x \leq 0##:
##\frac{f(0+h,0)-f(0,0)}{h} = \frac{h+e^{-h^2}*0}{h}= 1##
##\frac{f(0,0+h)-f(0,0)}{h} = \frac{0+e^{0}*h}{h}= 1##
1. "The partial derivative" makes no sense. For your function there are two partial derivatives -- one with respect to x and the other with respect to y.
2. Since your function is defined piecewise, each equation above needs to show what the limit is as h approaches zero from below (x <= 0) as well as from above (x > 0). If you did that, you didn't show it.
I use only the definition of differentiability.
The definition you used is for a function that is defined in only one way. Your function has different formulas, depending on whether x <= 0 or x > 0.

I calculate my Jacobian Matrix to use it for the J(h).
so J = (1,1)

This doesn't make sense to me. For one thing, (1, 1) is a vector, not a matrix. The Jacobian matrix applies to vector-valued functions, such as a function that maps ##\mathbb R^2## to ##\mathbb R^2##. Your function is a map from ##\mathbb R^2## to ##\mathbb R##. IOW, it's a real-valued function of two variables.

RiotRick said:
##\frac{f(0+x,0+y)-f(0,0)-(1,1)*(x,y)^T}{\sqrt{x^2+y^2}}=\frac{x+e^{-x^2}*y-0-(x+y)}{\sqrt{x^2+y^2}}##
I don't know where you got this formula, but it definitely doesn't apply to the function at hand in this problem.

Let me ask again, for a function of two variables f(x, y), how is the differential of f (Df) defined? You should be able to find it in your textbook, probably in the same section as the problem you're working on. If you can't find it in your book, look up differential on the internet. The context here is the differential of a function of several variables.

RiotRick
In my notes and in Wikipedia:
A function of more than one variable is differentiable if:

#### Attachments

• physics.JPG
Mentor
In my notes and in Wikipedia:
A function of more than one variable is differentiable if:
View attachment 240633
But this isn't what you have. From the wikipedia article on Jacobian matrix and determinant,
https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant said:
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.
The notation as used in the wikipedia article with f in bold -- f -- means that f is a vector-valued function. That is ***NOT*** what you have in this problem. For a given point ##(x, y) \in \mathbb R^2##, there is a single value of f(x, y).
An example of a vector-valued function of one variable is ##\mathbf r(t) = \langle f_1(t), f_2(t), f_3(t) \rangle##.

RiotRick
okay I see. I got this wrong about the Matrix.
##\lim_{h \rightarrow 0} \frac{\|f(x_0+h)-f(x_0)-A(x-x_0)\|}{\|h\|}=0## where A is my ##m \times n## matrix of the linear mapping of the partial derivatives. So A = (1,1) in my example

Mentor
okay I see. I got this wrong about the Matrix.
##\lim_{h \rightarrow 0} \frac{\|f(x_0+h)-f(x_0)-A(x-x_0)\|}{\|h\|}=0## where A is my ##m \times n## matrix of the linear mapping of the partial derivatives. So A = (1,1) in my example
Again, no. There are several things wrong with your definition above.
1. The function in this problem has two arguments, not one.
2. For your function, you need to examine both one-sided limits of each partial derivative -- four limits in all.
3. Your expression ##A(x - x_0)## makes no sense in this context, since A is a vector and ##f(x_0 + h), f(x_0)## and ##(x - x_0)## are all scalars (numbers). You can't mix them like this.

You're making this problem way more complicated than it needs to be. Since you don't seem to be able to find the definition of the differential of a function, take a look here: https://en.wikipedia.org/wiki/Differential_of_a_function, in the section titled Differentials in several variables.

Eclair_de_XII
The first function should be easy enough to find the differential for, since it's a linear transformation. Once you have the expression for the differential, and if you want to find the matrix for this linear transformation, try applying the linear transformation to a column vector of two variables, and figuring out what two entries should go into the 1 x 2 matrix based on that expression. Meanwhile, for the second problem, I would indeed look at the matrix of partial derivatives: it should be a 1 x 2 matrix consisting of the partial with respect to ##x## and the partial with respect to ##y##.

Mentor
The first function should be easy enough to find the differential for, since it's a linear transformation.
There is only one function -- it's defined in a piecewise fashion.

Once you have the expression for the differential
... the OP can evaluate it at the point (0, 0). That's all the problem asks for.
Eclair_de_XII said:
, and if you want to find the matrix for this linear transformation, try applying the linear transformation to a column vector of two variables, and figuring out what two entries should go into the 1 x 2 matrix based on that expression. Meanwhile, for the second problem, I would indeed look at the matrix of partial derivatives: it should be a 1 x 2 matrix consisting of the partial with respect to x and the partial with respect to y.
There is no second problem. The problem as given is a fairly straightforward one, and doesn't require the use of matrices, square or otherwise, or Jacobians. The definition of the differential of a function of two variables is shown in all calculus textbooks, as well as in the wikipedia page I linked to.

RiotRick
The first function should be easy enough to find the differential for, since it's a linear transformation. Once you have the expression for the differential, and if you want to find the matrix for this linear transformation, try applying the linear transformation to a column vector of two variables, and figuring out what two entries should go into the 1 x 2 matrix based on that expression. Meanwhile, for the second problem, I would indeed look at the matrix of partial derivatives: it should be a 1 x 2 matrix consisting of the partial with respect to ##x## and the partial with respect to ##y##.
But isn't that exactly what I tried with:

I calculated the partial derivative for ##x \leq 0##:
##\frac{f(0+h,0)-f(0,0)}{h} = \frac{h+e^{-h^2}*0}{h}= 1##
##\frac{f(0,0+h)-f(0,0)}{h} = \frac{0+e^{0}*h}{h}= 1##
so J = (1,1)
##\frac{f(0+x,0+y)-f(0,0)-(1,1)*(x,y)^T}{\sqrt{x^2+y^2}}=\frac{x+e^{-x^2}*y-0-(x+y)}{\sqrt{x^2+y^2}}##

Mentor
But isn't that exactly what I tried with:
so J = (1,1)
##\frac{f(0+x,0+y)-f(0,0)-(1,1)*(x,y)^T}{\sqrt{x^2+y^2}}=\frac{x+e^{-x^2}*y-0-(x+y)}{\sqrt{x^2+y^2}}##
This isn't even close to what you're trying to get.

The answer should look like this: ##Df(0, 0) = \text{something} \cdot dx + \text{something else} \cdot dy##
The first something is ##\frac{\partial f}{\partial x}(0, 0)## and the other something is ##\frac{\partial f}{\partial y}(0, 0)##.

You're making this much more complicated than it needs to be with the Jacobian business and conversion to polar form.
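For completeness, a hedged sketch of where this lands, assuming the one-sided limits computed earlier in the thread all equal 1 and that the remainder quotient is checked to vanish on both sides of x = 0:

```latex
% all four one-sided limits of the difference quotients give
\frac{\partial f}{\partial x}(0,0) = 1, \qquad \frac{\partial f}{\partial y}(0,0) = 1,
% so the differential at the origin is
Df(0,0) = \frac{\partial f}{\partial x}(0,0)\,dx + \frac{\partial f}{\partial y}(0,0)\,dy
        = dx + dy.
```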