Taylor expansion of a vector field

SUMMARY

The discussion centers on the Taylor expansion of vector fields, specifically how to write the first- and second-order terms of the expansion. The first-order approximation is the function value at a point plus the Jacobian matrix applied to the displacement vector. The second-order term is quadratic in the displacement and requires a (1,2) tensor to represent the second derivative of a vector field. The participants confirm that an operator H, analogous to the Hessian matrix, can be used to express this second derivative.

PREREQUISITES
  • Understanding of vector calculus and Taylor series expansions
  • Familiarity with Jacobians and Hessians in multivariable calculus
  • Knowledge of tensor notation and operations
  • Proficiency in mathematical notation and operations involving directional derivatives
NEXT STEPS
  • Study the properties and applications of Jacobian matrices in vector fields
  • Explore the concept of Hessians and their role in multivariable calculus
  • Research tensor calculus and its applications in physics and engineering
  • Learn about directional derivatives and their significance in vector field analysis
USEFUL FOR

Mathematicians, physicists, and engineers interested in advanced calculus, particularly those working with vector fields and their approximations.

Trifis
I was wondering if such an approximation is possible and plausible...

The first term would have to look something like this: \vec{f}(\vec{x}_0) + \textbf{J}_{\vec{f}}(\vec{x}_0)\cdot(\vec{x}-\vec{x}_0)

No clue about the second term though...
We would have to calculate the Jacobian of the Jacobian (just as we calculate the Jacobian of the gradient to get the Hessian for the second term in the regular case of scalar fields f: ℝ^{n}→ℝ) or something like that...
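
For concreteness, here is a minimal numerical sketch of that first-order piece; the field f, the point x0, and the step size are made up purely for illustration:

```python
import numpy as np

# A made-up example field f: R^2 -> R^2, for illustration only
def f(x):
    return np.array([np.sin(x[0]) * x[1], x[0]**2 + x[1]])

def jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian, J[i, j] = df_i/dx_j."""
    n = len(x)
    J = np.empty((len(f(x)), n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        J[:, j] = (f(x + dx) - f(x)) / h
    return J

x0 = np.array([1.0, 2.0])
x = np.array([1.1, 2.1])

# f(x0) + J_f(x0) . (x - x0), exactly as in the formula above
approx = f(x0) + jacobian(f, x0) @ (x - x0)
print(approx, f(x))  # first-order estimate vs. exact value
```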
 
The analogy for Taylor expansions of vector fields is most easily seen through directional derivatives.

f(r) = f(r_0) + (r - r_0) \cdot \nabla' f(r') |_{r' = r_0} + \frac{1}{2!} ([r - r_0] \cdot \nabla')^2 f(r') |_{r' = r_0} + \ldots

But yes, the first-order term is the Jacobian and can be interpreted as a matrix operation, etc. The second term is more complicated, though, because it's obviously quadratic in r - r_0. So you would need some sort of operator that is bilinear, i.e. linear in each of its two arguments.
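
Spelling the quadratic term out in indices (summation over repeated indices implied) makes that bilinear structure explicit; note that r - r_0 is constant with respect to \nabla', so no product-rule terms appear:

\frac{1}{2!}\left([r - r_0] \cdot \nabla'\right)^2 f(r') \Big|_{r' = r_0} = \frac{1}{2!}\,(r - r_0)_j (r - r_0)_k \left.\frac{\partial^2 f(r')}{\partial x'_j \partial x'_k}\right|_{r' = r_0}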
 
Yes indeed, a (1,2) tensor would do the job...

The question, of course, is whether such an operator can be defined to play the role of the second derivative for vector fields. I've had no luck finding one in my calculus books or on the internet so far...
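
One concrete candidate is simply the stack of the component Hessians, which is exactly such a (1,2) tensor. A minimal symbolic sketch, with a made-up example field:

```python
import sympy as sp

# The second derivative of f: R^2 -> R^2 as a (1,2) tensor:
# T[i][j][k] = d^2 f_i / (dx_j dx_k), i.e. a stack of component Hessians.
x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]
f = [sp.sin(x1) * x2**2, x1**2 + x2]  # illustrative vector field

T = [[[sp.diff(f[i], xs[j], xs[k]) for k in range(2)]
      for j in range(2)] for i in range(2)]

sp.pprint(sp.Matrix(T[0]))  # Hessian of the first component f_0
```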
 
You just do it component by component.
f^i(x) = f^i(x_0) + \left. \frac{\partial f^i}{\partial x^j}\right|_{x=x_0}(x^j - x_0^j) + \left. \frac{1}{2!} \frac{\partial^2 f^i}{\partial x^j \partial x^k}\right|_{x=x_0}(x^j - x_0^j)(x^k - x_0^k) + \cdots

Repeated indices are summed.
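
As a sanity check, here is a small self-contained NumPy sketch of exactly this formula, with the derivatives taken by finite differences (the example field, point, and step size are all made up):

```python
import numpy as np

# f^i(x) ~ f^i(x0) + J[i,j] eta[j] + (1/2) T[i,j,k] eta[j] eta[k]
def f(x):
    return np.array([np.sin(x[0]) * x[1], x[0]**2 + x[1]])

x0 = np.array([1.0, 2.0])
x = np.array([1.1, 2.1])
eta = x - x0
n, h = 2, 1e-4
E = h * np.eye(n)  # E[j] is a step of size h along x^j

# Central-difference Jacobian J[i, j] = df^i/dx^j at x0
J = np.stack([(f(x0 + E[j]) - f(x0 - E[j])) / (2 * h) for j in range(n)], axis=1)

# Second-derivative tensor T[i, j, k] = d^2 f^i / (dx^j dx^k) at x0
T = np.empty((n, n, n))
for j in range(n):
    for k in range(n):
        T[:, j, k] = (f(x0 + E[j] + E[k]) - f(x0 + E[j])
                      - f(x0 + E[k]) + f(x0)) / h**2

# The expansion above, with einsum doing the implicit sums over j and k
taylor2 = f(x0) + np.einsum('ij,j->i', J, eta) \
                + 0.5 * np.einsum('ijk,j,k->i', T, eta, eta)
print(taylor2, f(x))  # second-order estimate vs. exact value
```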
 
Sorry for the late reply...

Yes, this seems to be the right way to do this approximation. Thank you!
 
Well, you could take that term from your first post:

F = \vec{\eta}\cdot\vec{\nabla}

where I let

\vec{\eta} = \vec{x}-\vec{x}_0

and multiply it by the Identity Matrix, I,

I = \sum_k{\left|k\right\rangle\left\langle k\right|}

from the left and from the right:

D \equiv IFI = \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \vec{\eta}\cdot\vec{\nabla} \left|j\right\rangle\left\langle j\right|}
= \sum_{i,j}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial}{\partial x_j}}

Notice that, then, the vector function \vec{f}\left(\vec{x}\right) is written as:

\vec{f}\left(\vec{x}\right) = \sum_k{\left|k\right\rangle f_k\left(\vec{x}\right)}

so that

D\vec{f} = \sum_{i,j,k}{\left|i\right\rangle\left\langle i\right| \eta_j\frac{\partial f_k}{\partial x_j} \left|k\right\rangle} = \sum_{i,j}{\eta_j\frac{\partial f_i}{\partial x_j} \left|i\right\rangle}

Notice also that D^2 is, explicitly:

D^2 = \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right)^2 \left|i\right\rangle\left\langle i\right|}
= \sum_i{\left(\vec{\eta}\cdot\vec{\nabla}\right) \left(\vec{\eta}\cdot\vec{\nabla}\right) \left|i\right\rangle\left\langle i\right|}
= \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}

Here \partial _i=\frac{\partial}{\partial x_i} (so that \partial _j\eta_i=\delta_{ij}, the Kronecker delta); thus:

D^2 = \sum_{i,j,k}{\eta_j\partial _j\left(\eta_k\partial _k\right) \left|i\right\rangle\left\langle i\right|}
= \sum_{i,j,k}{\eta_j\left(\partial _j\eta_k\partial _k + \eta_k\partial _{jk}\right) \left|i\right\rangle\left\langle i\right|}
= \sum_{i,j,k}{\eta_j\partial _j\eta_k\partial _k \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}
= \sum_{i,j}{\eta_j\partial _j \left|i\right\rangle\left\langle i\right|} + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}
= D + \sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|}

And finally:

\sum_{i,j,k}{\eta_j\eta_k\partial _{jk} \left|i\right\rangle\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}
= D^2-D=D(D-1)\equiv H

Then you may define the operator H (so called because of its similarity to the Hessian matrix):

H \equiv D(D-1) = \sum_i{\left|i\right\rangle \vec{\eta}\cdot\vec{\nabla} \left(\vec{\eta}\cdot\vec{\nabla}-1\right)\left\langle i\right|} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2}{\partial x_j \partial x_k} \left|i\right\rangle\left\langle i\right|}

so that:

H\vec{f} = \sum_{i,j,k,l}{\eta_j \eta_k \frac{\partial^2 f_l}{\partial x_j \partial x_k} \left| i \right\rangle \left\langle i \right|\left. l \right\rangle} = \sum_{i,j,k}{\eta_j \eta_k \frac{\partial^2 f_i}{\partial x_j \partial x_k} \left| i \right\rangle}

because \left\langle i \right|\left. l \right\rangle=\delta_{il}.

Now, your expansion may be written as:

\vec{f}\left(\vec{x}_0+\vec{\eta}\right) = \vec{f}\left(\vec{x}_0\right) + D\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + \frac{1}{2}H\left.\vec{f}\right|_{\vec{x}=\vec{x}_0} + O\left(\left|\vec{\eta}\right|^3\right)
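
For what it's worth, the operator identity H = D(D-1) is easy to verify symbolically, component by component; here is a small sketch with one arbitrary made-up component f (the printed result is 0, confirming D(Df) - Df = Hf):

```python
import sympy as sp

# eta = x - x0, acting component by component; a1, a2 stand in for x0
x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2')
xs = [x1, x2]
eta = [x1 - a1, x2 - a2]

f = sp.sin(x1) * x2**2  # one made-up component of a vector field

# D g = eta_j dg/dx_j  (summation over j)
D = lambda g: sum(eta[j] * sp.diff(g, xs[j]) for j in range(2))

# H f = eta_j eta_k d^2 f / (dx_j dx_k)
Hf = sum(eta[j] * eta[k] * sp.diff(f, xs[j], xs[k])
         for j in range(2) for k in range(2))

# H = D(D - 1), i.e. D(Df) - Df - Hf should vanish identically
print(sp.simplify(D(D(f)) - D(f) - Hf))  # -> 0
```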
 