quasar987

## Main Question or Discussion Point

Recall that for a function [itex]f:A\subset \mathbb{R}^n\rightarrow \mathbb{R}^m[/itex], the derivative of f at x is defined as a linear map [itex]L:\mathbb{R}^n\rightarrow \mathbb{R}^m[/itex] such that [itex]||f(x+h)-f(x)-L(h)||=o(||h||)[/itex],

if such a linear map exists.

We can show that for certain geometries of the set A, when the derivative exists, it is unique. This is the case, for instance, if A is a closed disk or, more simply, if A is any open set. However, for certain sets the derivative may exist without being unique. For instance, if A is a singleton, then any linear map does the trick.
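To spell out the singleton case: if [itex]A=\{x\}[/itex], the only increment h with [itex]x+h\in A[/itex] is [itex]h=0[/itex], and at [itex]h=0[/itex] the condition

[tex]||f(x+h)-f(x)-L(h)||=o(||h||)[/tex]

is vacuously satisfied, since there is no nonzero admissible h to test it on. Hence every linear map L qualifies as a derivative of f at x.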

I arrive at a strange conclusion if I assume that a function [itex]f:A\subset \mathbb{R}^n\rightarrow \mathbb{R}^m[/itex] is differentiable at x with two distinct derivatives [itex]L_1[/itex] and [itex]L_2[/itex], and then follow the proof that the matrix representation of the derivative is the Jacobian matrix.

By definition, we have that [itex]||f(x+h)-f(x)-L_k(h)||=o(||h||)[/itex] (k=1,2), which implies by the sandwich theorem that [itex]|f_j(x+h)-f_j(x)-(L_k)_j(h)|=o(||h||)[/itex] for each component j=1,...,m. In particular,
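(The component estimate is just [itex]|v_j|\le ||v||[/itex] applied to [itex]v=f(x+h)-f(x)-L_k(h)[/itex], which gives the squeeze

[tex]0\le \frac{|f_j(x+h)-f_j(x)-(L_k)_j(h)|}{||h||}\le \frac{||f(x+h)-f(x)-L_k(h)||}{||h||}\rightarrow 0[/tex]

as [itex]h\rightarrow 0[/itex].)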

[tex]\lim_{t\rightarrow 0}\left|\frac{f_j(x_1,...,x_i+t,...,x_n)-f_j(x_1,...,x_n)-(L_k)_j(0,...,t,...,0)}{t} \right|=0 [/tex]

from which it follows by definition of the partial derivatives that

[tex]\frac{\partial f_j}{\partial x_i}(x)=(L_k)_j(e_i) [/tex]
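As a quick numerical sanity check of this identity on an open set, here is a sketch with a made-up function [itex]f(x,y)=(x^2,\,xy)[/itex] on [itex]\mathbb{R}^2[/itex] (my own example, assuming NumPy is available), approximating each column of the Jacobian by the one-sided difference quotient appearing in the limit above:

```python
import numpy as np

# Hypothetical example: f(x, y) = (x^2, x*y), whose exact Jacobian
# at (x, y) is [[2x, 0], [y, x]].
def f(v):
    x, y = v
    return np.array([x**2, x * y])

def jacobian_fd(f, x, t=1e-6):
    """Approximate column i of the Jacobian by (f(x + t*e_i) - f(x)) / t,
    mirroring the partial-derivative limit in the derivation."""
    n = len(x)
    cols = [(f(x + t * np.eye(n)[i]) - f(x)) / t for i in range(n)]
    return np.column_stack(cols)

x0 = np.array([1.0, 2.0])
J_exact = np.array([[2 * x0[0], 0.0],
                    [x0[1],     x0[0]]])
print(np.allclose(jacobian_fd(f, x0), J_exact, atol=1e-4))  # True
```

On an open set every [itex]x+te_i[/itex] lies in the domain for small t, so the difference quotient is well defined and there is only one candidate matrix.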

But this is absurd: since [itex]L_1[/itex] and [itex]L_2[/itex] are assumed distinct, there is at least one pair (i,j) for which [itex](L_1)_j(e_i)\neq (L_2)_j(e_i)[/itex], leading to the contradiction

[tex]\frac{\partial f_j}{\partial x_i}(x)\neq \frac{\partial f_j}{\partial x_i}(x) [/tex]

Does anyone see where I'm mistaken in my reasoning?
