
Taylor expansion of a vector field

  1. Aug 8, 2012 #1
    I was wondering if such an approximation is possible and plausible...

    The first term would have to look something like this: [itex]\vec{f}(\vec{x}_0) + \mathbf{J}_{\vec{f}}(\vec{x}_0)\cdot(\vec{x}-\vec{x}_0)[/itex]

    No clue about the second term though...
    We would have to calculate the Jacobian of the Jacobian (just as, for scalar fields f: ℝ[itex]^{n}[/itex]→ℝ, we calculate the Jacobian of the gradient to get the Hessian for the second term of the usual expansion), or something along those lines...
    Last edited: Aug 8, 2012
  3. Aug 8, 2012 #2
    The analogy for Taylor expansions of vector fields is most easily seen through directional derivatives.

    [tex]f(r) = f(r_0) + (r - r_0) \cdot \nabla' f(r') |_{r' = r_0} + \frac{1}{2!} ([r - r_0] \cdot \nabla')^2 f(r') |_{r' = r_0} + \ldots[/tex]

    But yes, the first-order term is the Jacobian, which can be interpreted as a matrix operation, etc. The second term is more complicated, though, because it's quadratic in [itex]r - r_0[/itex]. So you would need some sort of operator that is bilinear, i.e. linear in each of two arguments.
  4. Aug 8, 2012 #3
    Yes indeed, a (1,2) tensor would do the job...

    The question, of course, is whether such an operator can be defined to play the role of the second derivative for vector fields. So far I've had no luck finding one, either in my calculus books or on the internet...
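    For concreteness, here is one way such a (1,2) tensor could be written in coordinates (a sketch, with repeated indices summed): its components are the second partials of the components of [itex]\vec{f}[/itex], and it is fed the displacement vector twice:

    [tex]T(\vec{u},\vec{v})^i = \left.\frac{\partial^2 f^i}{\partial x^j \partial x^k}\right|_{\vec{x}_0} u^j v^k, \qquad \text{second-order term} = \frac{1}{2}\,T(\vec{x}-\vec{x}_0,\,\vec{x}-\vec{x}_0)[/tex]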
  5. Aug 8, 2012 #4
    You just do it component by component:
    [tex]f^i(x) = f^i(x_0) + \left. \frac{\partial f^i}{\partial x^j}\right|_{x=x_0}(x^j - x_0^j)
    + \left. \frac{1}{2!} \frac{\partial^2 f^i}{\partial x^j \partial x^k}\right|_{x=x_0}(x^j - x_0^j)(x^k - x_0^k)
    + \cdots [/tex]

    Repeated indices are summed.
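    The component-by-component expansion above is easy to check numerically. Here is a sketch in Python/NumPy for a toy field [itex]\vec{f}:\mathbb{R}^2\to\mathbb{R}^2[/itex] (the field, its Jacobian, and its Hessian below are my own example, not from the thread):

    ```python
    import numpy as np

    # Toy vector field f: R^2 -> R^2, chosen for illustration only
    def f(x):
        return np.array([x[0]**2 * x[1], np.sin(x[0]) + x[1]**2])

    def jacobian(x):
        # J[i, j] = df^i/dx^j, worked out by hand for this f
        return np.array([[2*x[0]*x[1], x[0]**2],
                         [np.cos(x[0]), 2*x[1]]])

    def hessian(x):
        # H[i, j, k] = d^2 f^i / (dx^j dx^k): one symmetric matrix per component
        H = np.zeros((2, 2, 2))
        H[0] = [[2*x[1], 2*x[0]], [2*x[0], 0.0]]
        H[1] = [[-np.sin(x[0]), 0.0], [0.0, 2.0]]
        return H

    def taylor2(x0, x):
        # f^i(x0) + J^i_j eta^j + (1/2) H^i_{jk} eta^j eta^k, repeated indices summed
        eta = x - x0
        return (f(x0) + jacobian(x0) @ eta
                + 0.5 * np.einsum('ijk,j,k->i', hessian(x0), eta, eta))

    x0 = np.array([0.5, 1.0])
    x = x0 + np.array([1e-3, -2e-3])
    err = np.linalg.norm(f(x) - taylor2(x0, x))
    print(err)  # residual is third order in eta, so tiny here
    ```

    The `einsum` call does exactly the double contraction over j and k in the formula; the residual should shrink like the cube of the step.
    
    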
  6. Aug 10, 2012 #5
    Sorry for the late reply...

    Yes, this seems to be the right way to do this approximation. Thank you!
  7. Oct 7, 2012 #6
    Well, you could take that term from your first post:

    [itex]F = \vec{\eta}\cdot\vec{\nabla}[/itex]

    where I let

    [itex]\vec{\eta} = \vec{x}-\vec{x}_0[/itex]

    and multiply it by the Identity Matrix, [itex]I[/itex],

    [itex]I = \sum_k{\left|k\right\rangle\left\langle k\right|}[/itex]

    from the left and from the right:

    [itex]D \equiv IFI = \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \vec{\eta}\cdot\vec{\nabla} \left|j\right\rangle\left\langle j\right|}[/itex]
    [itex]= \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial}{\partial x_j}}[/itex]

    The vector function [itex]\vec{f}\left(\vec{x}\right)[/itex] is then written as:

    [itex]\vec{f}\left(\vec{x}\right) = \sum_k{\left|k\right\rangle f_k\left(\vec{x}\right)}[/itex]

    so that

    [itex]D\vec{f} = \sum_{i,j,k}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial f_k}{\partial x_j} \left|k\right\rangle} = \sum_{i,j}{\eta_j\frac{\partial f_i}{\partial x_j} \left|i\right\rangle}[/itex]

    Notice also that [itex]D^2[/itex] is, explicitly:

    [itex]D^2 = \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right)^2 \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right) \left(\vec{\eta}\cdot\vec{\nabla}\right) \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}[/itex]

    Writing [itex]\partial _i=\frac{\partial}{\partial x_i}[/itex], we have [itex]\partial _j\eta_i=\delta_{ij}[/itex] (the Kronecker delta); thus:

    [itex]D^2 = \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= \sum_{i,j,k}{\eta_j\left(\partial _j\eta_k\partial _k + \eta_k\partial _{jk}\right) \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= \sum_{i,j,k}{\eta_j\partial _j\eta_k\partial _k \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= \sum_{i,j}{\eta_j\partial _j \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= D + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}[/itex]

    And finally:

    [itex]\sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}[/itex]
    [itex]= D^2-D=D(D-1)\equiv H[/itex]

    This motivates defining [itex]H[/itex] (so named because of its similarity to the Hessian matrix):

    [itex]H \equiv D(D-1) = \sum_i{\left|i\right\rangle \vec{\eta}\cdot\vec{\nabla} \left(\vec{\eta}\cdot\vec{\nabla}-1\right)\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}[/itex]

    so that:

    [itex]H\vec{f} = \sum_{i,j,k,l}{\eta_j \eta_k \frac{\partial^2 f_l}{\partial x_j \partial x_k} \left| i \right\rangle \left\langle i \right|\left. l \right\rangle} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2 f_i}{\partial x_j \partial x_k} \left| i \right\rangle}[/itex]

    because [itex]\left\langle i \right|\left. l \right\rangle=\delta_{il}[/itex].

    Now, your expansion may be written as:

    [itex]\vec{f}\left(\vec{x}_0+\vec{\eta}\right) = \vec{f}\left(\vec{x}_0\right) + D\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + \frac{1}{2}H\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + O\left(\left|\vec{\eta}\right|^{3}\right)[/itex]
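    The key operator identity in this derivation, [itex]D^2 = D + H[/itex] (which holds because [itex]\vec{\eta} = \vec{x}-\vec{x}_0[/itex] depends on [itex]\vec{x}[/itex]), can be sanity-checked numerically. Below is a sketch in Python/NumPy where the derivatives are replaced by central finite differences and [itex]\vec{f}[/itex] is a toy field of my choosing, so agreement is only up to finite-difference error:

    ```python
    import numpy as np

    EPS = 1e-5                      # finite-difference step
    X0 = np.array([0.3, -0.7])      # expansion point x_0

    def f(x):
        # toy vector field, chosen for illustration only
        return np.array([np.exp(x[0]) * x[1], x[0] * x[1]**2])

    def D(h, x):
        # D h = sum_j eta_j * dh/dx_j with eta = x - X0, via central differences
        eta = x - X0
        out = np.zeros_like(h(x))
        for j in range(len(x)):
            e = np.zeros_like(x); e[j] = EPS
            out = out + eta[j] * (h(x + e) - h(x - e)) / (2 * EPS)
        return out

    def H(h, x):
        # H h = sum_{j,k} eta_j eta_k * d^2 h/(dx_j dx_k), via central differences
        eta = x - X0
        n = len(x)
        out = np.zeros_like(h(x))
        for j in range(n):
            for k in range(n):
                ej = np.zeros(n); ej[j] = EPS
                ek = np.zeros(n); ek[k] = EPS
                d2 = (h(x+ej+ek) - h(x+ej-ek) - h(x-ej+ek) + h(x-ej-ek)) / (4 * EPS**2)
                out = out + eta[j] * eta[k] * d2
        return out

    x = np.array([0.9, 0.4])
    lhs = D(lambda y: D(f, y), x)   # D^2 f: note the inner eta varies with y
    rhs = D(f, x) + H(f, x)         # (D + H) f
    gap = np.max(np.abs(lhs - rhs))
    print(gap)  # small: limited only by finite-difference accuracy
    ```

    The crucial detail is in `lhs`: the inner `D` uses `eta = y - X0` at the point where the outer derivative is taken, which is exactly why the extra [itex]D[/itex] term appears in [itex]D^2[/itex].
    
    
    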
    Last edited: Oct 8, 2012