cliowa
Let E, F be Banach spaces, and let L(E;F) denote the space of bounded linear maps from E to F. My goal is to better understand higher-order derivatives.
Let's take E=\mathbb{R}^2, F=\mathbb{R}. Consider a function f:U\subset\mathbb{R}^2\rightarrow\mathbb{R}, where U is an open subset of \mathbb{R}^2. Then D^2 f:U\rightarrow L(\mathbb{R}^2;L(\mathbb{R}^2;\mathbb{R})).
Now, I read that for u\in U and v,w\in\mathbb{R}^2, by definition D^2 f(u)\cdot (v,w):=D\big((Df)(\cdot)\cdot w\big)(u)\cdot v. My question now is: why was this defined precisely this way?
Does it have something to do with "using the product rule", which would amount to D\big((Df)(\cdot)\cdot w\big)=D^2 f(\cdot)\cdot w+Df(\cdot)\cdot Dw=D^2 f(\cdot)\cdot w, since w is a constant vector and hence Dw=0?
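To make the definition concrete, here is a small worked example (the particular f is just my own illustrative choice, not taken from the text): let f(x,y)=x^2 y on U=\mathbb{R}^2, so Df(x,y)=(2xy,\;x^2). For a fixed w=(w_1,w_2), the map u\mapsto Df(u)\cdot w = 2xy\,w_1+x^2 w_2 is again a real-valued function on U, and differentiating it at u=(x,y) in the direction v=(v_1,v_2) gives D\big((Df)(\cdot)\cdot w\big)(u)\cdot v = (2y\,w_1+2x\,w_2)v_1+2x\,w_1 v_2 = 2y\,v_1w_1+2x\,(v_1w_2+v_2w_1). If I computed correctly, this is exactly v^{\mathsf T}\,\mathrm{Hess}f(u)\,w with \mathrm{Hess}f(x,y)=\begin{pmatrix}2y & 2x\\ 2x & 0\end{pmatrix}, i.e. the definition reproduces the familiar bilinear form given by the Hessian, and it is symmetric in v and w.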
Thanks for any help. Best regards...Cliowa