Taylor expansion of a vector field


Discussion Overview

The discussion revolves around the Taylor expansion of vector fields, exploring the possibility and methodology for approximating vector fields using Taylor series. Participants delve into the theoretical aspects, mathematical formulations, and potential challenges associated with higher-order terms in the expansion.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant questions the plausibility of a Taylor expansion for vector fields and proposes a first-order term involving the Jacobian.
  • Another participant draws an analogy to directional derivatives and suggests a more complex second term that requires a linear operator on two arguments.
  • A different participant suggests that a (1,2) tensor could serve as the second derivative for vector fields, expressing difficulty in finding a suitable operator.
  • One participant provides a component-wise approach to the expansion, detailing how to express the terms using partial derivatives.
  • Another participant expresses agreement with the proposed method for the approximation.
  • A later reply introduces a detailed mathematical formulation involving identity matrices and Kronecker deltas, leading to a proposed definition of a second derivative operator, denoted as H, for vector fields.

Areas of Agreement / Disagreement

Participants show some agreement on the approach to the first-order term of the Taylor expansion, but there is no consensus on the definition or formulation of the second-order term. Multiple competing views and methods are presented without resolution.

Contextual Notes

Participants express uncertainty regarding the existence and definition of operators that could represent higher-order derivatives for vector fields. The discussion includes complex mathematical expressions that may depend on specific definitions and assumptions not fully resolved within the thread.

Trifis
I was wondering if such an approximation is possible and plausible...

The first term would have to look something like this: [itex]\vec{f}(\vec{x}_0) + \mathbf{J}_{\vec{f}}(\vec{x}_0)\cdot(\vec{x}-\vec{x}_0)[/itex]

No clue about the second term though...
We would have to calculate the Jacobian of the Jacobian (just as, in the regular scalar-field case f: ℝ[itex]^{n}[/itex]→ℝ, we take the Jacobian of the gradient to get the Hessian for the second term), or something along those lines...
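For concreteness, the first-order term can be checked numerically. Here is a minimal sketch; the vector field `f` and the finite-difference `jacobian` helper are made-up examples, not anything from this thread:

```python
import numpy as np

def f(x):
    # hypothetical vector field f: R^2 -> R^2, chosen only for illustration
    return np.array([x[0]**2 * x[1], np.sin(x[0]) + x[1]**2])

def jacobian(f, x0, h=1e-6):
    # central-difference Jacobian J[i, j] = df_i/dx_j evaluated at x0
    n = len(x0)
    J = np.zeros((len(f(x0)), n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return J

x0 = np.array([1.0, 2.0])
x = x0 + np.array([0.01, -0.02])

# first-order Taylor approximation: f(x0) + J_f(x0) . (x - x0)
approx = f(x0) + jacobian(f, x0) @ (x - x0)
print(np.max(np.abs(approx - f(x))))  # small residual, of order |x - x0|^2
```

The leftover error is quadratic in the step, which is exactly what the missing second-order term would account for.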
 
The analogy for Taylor expansions of vector fields is most easily seen through directional derivatives.

[tex]f(r) = f(r_0) + (r - r_0) \cdot \nabla' f(r') |_{r' = r_0} + \frac{1}{2!} ([r - r_0] \cdot \nabla')^2 f(r') |_{r' = r_0} + \ldots[/tex]

But yes, the first-order term is the Jacobian, can be interpreted as a matrix operation, etc. The second term is more complicated, though, because it is obviously quadratic in [itex]r - r_0[/itex]. So you would need some sort of operator that is linear in each of two arguments (a bilinear map).
 
Yes indeed, a (1,2) tensor would do the job...

The question, of course, is whether such an operator can be defined to play the role of the second derivative for vector fields. I've had no luck finding one so far, either in my calculus books or on the internet...
 
You just do it component by component:
[tex]f^i(x) = f^i(x_0) + \left. \frac{\partial f^i}{\partial x^j}\right|_{x=x_0}(x^j - x_0^j) + \left. \frac{1}{2!} \frac{\partial^2 f^i}{\partial x^j \partial x^k}\right|_{x=x_0}(x^j - x_0^j)(x^k - x_0^k) + \cdots[/tex]

Repeated indices are summed.
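This component-wise formula is easy to verify numerically. A minimal sketch, where the vector field and the finite-difference helper are made-up examples; the second-derivative array `H[i, j, k]` is exactly the (1,2) tensor discussed above:

```python
import numpy as np

def f(x):
    # hypothetical vector field f: R^2 -> R^2
    return np.array([x[0] * x[1]**2, np.exp(x[0]) * x[1]])

def derivatives(f, x0, h=1e-4):
    # finite-difference Jacobian J[i, j] = df_i/dx_j and (1,2) tensor
    # H[i, j, k] = d^2 f_i / dx_j dx_k, both evaluated at x0
    n = len(x0)
    m = len(f(x0))
    J = np.zeros((m, n))
    H = np.zeros((m, n, n))
    I = np.eye(n)
    for j in range(n):
        J[:, j] = (f(x0 + h*I[j]) - f(x0 - h*I[j])) / (2*h)
        for k in range(n):
            H[:, j, k] = (f(x0 + h*I[j] + h*I[k]) - f(x0 + h*I[j] - h*I[k])
                          - f(x0 - h*I[j] + h*I[k]) + f(x0 - h*I[j] - h*I[k])) / (4*h**2)
    return J, H

x0 = np.array([0.5, 1.0])
dx = np.array([0.05, -0.03])
J, H = derivatives(f, x0)

# f^i(x0) + J^i_j dx^j + (1/2) H^i_{jk} dx^j dx^k, repeated indices summed
taylor2 = f(x0) + J @ dx + 0.5 * np.einsum('ijk,j,k->i', H, dx, dx)
print(np.max(np.abs(taylor2 - f(x0 + dx))))  # residual of order |dx|^3
```

The `einsum` call implements the repeated-index summation over j and k directly.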
 
Sorry for the late reply...

Yes this seems to be the right way to do this approximation. Thank you!
 
Well, you could take that term from your first post:

[itex]F = \vec{\eta}\cdot\vec{\nabla}[/itex]

where I let

[itex]\vec{\eta} = \vec{x}-\vec{x}_0[/itex]

and multiply it by the Identity Matrix, [itex]I[/itex],

[itex]I = \sum_k{\left|k\right\rangle\left\langle k\right|}[/itex]

from the left and from the right:

[itex]D \equiv IFI = \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \vec{\eta}\cdot\vec{\nabla} \left|j\right\rangle\left\langle j\right|}[/itex]
[itex]= \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial}{\partial x_j}}[/itex]

Notice that, then, the vector function [itex]\vec{f}\left(\vec{x}\right)[/itex] is written as:

[itex]\vec{f}\left(\vec{x}\right) = \sum_k{\left|k\right\rangle f_k\left(\vec{x}\right)}[/itex]

so that

[itex]D\vec{f} = \sum_{i,j,k}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial f_k}{\partial x_j} \left|k\right\rangle} = \sum_{i,j}{\eta_j\frac{\partial f_i}{\partial x_j} \left|i\right\rangle}[/itex]

Notice also that [itex]D^2[/itex] is, explicitly:

[itex]D^2 = \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right)^2 \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right) \left(\vec{\eta}\cdot\vec{\nabla}\right) \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}[/itex]

Here [itex]\partial _i=\frac{\partial}{\partial x_i}[/itex], so that [itex]\partial _j\eta_i=\delta_{ij}[/itex] (the Kronecker delta); thus:

[itex]D^2 = \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= \sum_{i,j,k}{\eta_j\left(\partial _j\eta_k\partial _k + \eta_k\partial _{jk}\right) \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= \sum_{i,j,k}{\eta_j\partial _j\eta_k\partial _k \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= \sum_{i,j}{\eta_j\partial _j \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= D + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]

And finally:

[itex]\sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}[/itex]
[itex]= D^2-D=D(D-1)\equiv H[/itex]

Then you may define the operator [itex]H[/itex] (so called because of its similarity to the Hessian matrix):

[itex]H \equiv D(D-1) = \sum_i{\left|i\right\rangle \vec{\eta}\cdot\vec{\nabla} \left(\vec{\eta}\cdot\vec{\nabla}-1\right)\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}[/itex]

so that:

[itex]H\vec{f} = \sum_{i,j,k,l}{\eta_j \eta_k \frac{\partial^2 f_l}{\partial x_j \partial x_k} \left| i \right\rangle \left\langle i \right|\left. l \right\rangle} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2 f_i}{\partial x_j \partial x_k} \left| i \right\rangle}[/itex]

because [itex]\left\langle i \right|\left. l \right\rangle=\delta_{il}[/itex].

Now, your expansion may be written as:

[itex]\vec{f}\left(\vec{x}_0+\vec{\eta}\right) = \vec{f}\left(\vec{x}_0\right) + D\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + \frac{1}{2}H\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + O\left(\vec{\eta}^3\right)[/itex]
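This expansion can also be verified symbolically. A small sketch in sympy, using a made-up polynomial vector field; the component forms [itex](Df)_i = \eta_j \partial_j f_i[/itex] and [itex](Hf)_i = \eta_j \eta_k \partial_j \partial_k f_i[/itex] follow the definitions above:

```python
import sympy as sp

# Check f(x0 + eta) ≈ f + D f + (1/2) H f on a made-up polynomial field
x, y, ex, ey = sp.symbols('x y eta_x eta_y')
X, eta = [x, y], [ex, ey]
f = sp.Matrix([x**2 * y, x**3 + y**2])  # hypothetical f: R^2 -> R^2

# (D f)_i = eta_j * d f_i / d x_j   (sum over j)
Df = sp.Matrix([sum(eta[j] * sp.diff(f[i], X[j]) for j in range(2))
                for i in range(2)])

# (H f)_i = eta_j * eta_k * d^2 f_i / (d x_j d x_k)   (sum over j, k)
Hf = sp.Matrix([sum(eta[j] * eta[k] * sp.diff(f[i], X[j], X[k])
                    for j in range(2) for k in range(2))
                for i in range(2)])

exact = f.subs({x: x + ex, y: y + ey}, simultaneous=True)
remainder = (exact - (f + Df + Hf / 2)).expand()
print(remainder)  # only third-order terms in eta survive
```

For this polynomial field, every term up to second order in [itex]\vec{\eta}[/itex] cancels exactly, leaving a purely cubic remainder, as the [itex]O\left(\vec{\eta}^3\right)[/itex] in the expansion claims.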
 
