# Lie derivation and flows

The Lie derivative of a vector field Y along a vector field X is a third vector field, acting on a function f as
$$\mathcal{L}_X Y(f)(p) = X(Y(f))(p) - Y(X(f))(p) = \lim_{t,s \to 0} \frac{f(\psi_s \circ \phi_t (p)) - f(\phi_t \circ \psi_s (p))}{st}$$

where $$\phi$$ and $$\psi$$ are the flows generated by X and Y respectively.

On the other hand, using an alternative definition of the Lie derivative

$$(\mathcal{L}_X Y)_p=\left.\frac{d}{dt}\right|_{t=0}\left((\phi_t^{-1})_*Y_{\phi_{t}(p)}\right)$$

we get

$$\mathcal{L}_X Y(f)(p) = \lim_{t,s \to 0} \frac{f( \phi_t^{-1} \circ \psi_s \circ \phi_t (p)) - f(\psi_s (p))}{st}$$

Are these equal?
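Before looking for a proof, a numerical sanity check may be worthwhile. Below is a minimal Python sketch for a pair of fields chosen purely for illustration: $$X = \partial/\partial x$$ with flow $$\phi_t(x,y) = (x+t,\, y)$$ and $$Y = x\,\partial/\partial y$$ with flow $$\psi_s(x,y) = (x,\, y+sx)$$, so that $$[X,Y] = \partial/\partial y$$ and $$[X,Y]f(p) = x$$ for $$f(x,y) = xy$$. Both quotients should approach the same value.

```python
def phi(t, p):                 # flow of X = d/dx: translation in x
    x, y = p
    return (x + t, y)

def psi(s, p):                 # flow of Y = x d/dy: shear in y
    x, y = p
    return (x, y + s * x)

def f(p):
    x, y = p
    return x * y

p = (2.0, 3.0)
t = s = 1e-4

# First expression: (f(psi_s phi_t p) - f(phi_t psi_s p)) / (s t)
expr1 = (f(psi(s, phi(t, p))) - f(phi(t, psi(s, p)))) / (s * t)

# Second expression: (f(phi_t^{-1} psi_s phi_t p) - f(psi_s p)) / (s t),
# using phi_t^{-1} = phi_{-t}
expr2 = (f(phi(-t, psi(s, phi(t, p)))) - f(psi(s, p))) / (s * t)

print(expr1, expr2)    # both ≈ 2.0, the x-coordinate of p, i.e. [X, Y]f(p)
```

One example proves nothing, of course, but it supports the expected equality.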

Since there are no replies yet, I will venture an opinion.
Yes, and I would tackle it using the chain rule for a simple proof. Unfortunately I am lacking reference books right now, so I won't try to write it out.
Moving away from that, there are two reasons/models:
1) Geometric: following the drags around the loop, the two constructions almost measure the same gap, and in the limit they measure exactly the gap.
2) Using the Lie derivative apparatus of differential geometry, one has a straightforward equivalence via the Riemann tensor.

Sorry for the lack of detail, but I would like to look a couple of things up before being detailed; and I don't have my books. It has also been several years since I studied Lie derivatives.
I imagine that there are C^2 (smoothness) restrictions.

Ray

Hi Ray,

Meanwhile I found the proof in Spivak's book (vol. 1, page 155). The proof is a little bit tricky, but very short.

Cheers,
mma

Why not post it, or email it to me. I'm interested in how it would be done by Spivak. I don't have his books.

Ray

This is his proof.

Thanks! As usual, in my old age, I have to study Spivak's proofs carefully to see what he has done. In the end it becomes obvious (and "why didn't I do that") though.

Ray

mma:

Spivak question: It seems to me that the notation L_x Y in the second to last line is a typo (or I don't understand); it should probably be the partial derivative. The last line is L_x Y by definition?

Ray

To tell you the truth, this proof doesn't satisfy me completely.

I conjecture that the following expressions are all equal:

$$\lim_{t,s \to 0} \frac{f(\psi_s^{-1} \circ \phi_t^{-1} \circ \psi_s \circ \phi_t (p)) - f(p)}{st}$$

$$\lim_{t,s \to 0} \frac{f( \phi_t^{-1} \circ \psi_s \circ \phi_t (p)) - f(\psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f(\psi_s \circ \phi_t (p)) - f(\phi_t \circ \psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f( \phi_t (p)) - f(\psi_s^{-1} \circ \phi_t \circ \psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f(p) - f( \phi_t^{-1} \circ \psi_s^{-1} \circ \phi_t \circ \psi_s (p))}{st}$$

but Spivak's proof doesn't give me a clue for proving this.
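Not a proof, but the five quotients can at least be spot-checked numerically. A minimal Python sketch, for an illustrative pair of fields with explicit, complete flows: $$X = \partial/\partial x$$ with $$\phi_t(x,y) = (x+t,\,y)$$ and $$Y = x\,\partial/\partial y$$ with $$\psi_s(x,y) = (x,\,y+sx)$$, so that $$[X,Y]f(p) = x$$ for $$f(x,y) = xy$$; inverse flows are obtained by negating the parameter.

```python
def phi(t, p):                 # flow of X = d/dx: translation in x
    x, y = p
    return (x + t, y)

def psi(s, p):                 # flow of Y = x d/dy: shear in y
    x, y = p
    return (x, y + s * x)

def f(p):
    x, y = p
    return x * y

p, t, s = (2.0, 3.0), 1e-4, 1e-4

# the five conjectured quotients, in the order listed above
e1 = (f(psi(-s, phi(-t, psi(s, phi(t, p))))) - f(p)) / (s * t)
e2 = (f(phi(-t, psi(s, phi(t, p)))) - f(psi(s, p))) / (s * t)
e3 = (f(psi(s, phi(t, p))) - f(phi(t, psi(s, p)))) / (s * t)
e4 = (f(phi(t, p)) - f(psi(-s, phi(t, psi(s, p))))) / (s * t)
e5 = (f(p) - f(phi(-t, psi(-s, phi(t, psi(s, p)))))) / (s * t)

print(e1, e2, e3, e4, e5)    # all ≈ 2.0 = [X, Y]f(p)
```

For this particular example all five agree; one example is no substitute for a proof.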

Looks like a perfect setup for commutative diagram(s) to me.

Ray

I'm afraid that I haven't the vaguest idea what you mean.
What are these commutative diagrams, anyway?

That's too bad. I was hoping for some help :)
I use them to keep track of multiple transforms/mappings. Generally they provide a graphical way to illustrate some mappings. I will attempt to rephrase your equations in this form. If I make any progress I will post it. Unfortunately I don't have my reference books, so I will probably just invent some things. I have forgotten (not that I ever knew thoroughly) how to show anticommutation operators, though.
http://en.wikipedia.org/wiki/Commutative_diagram

Ray

I conjecture that the following expressions are all equal:

$$\lim_{t,s \to 0} \frac{f(\psi_s^{-1} \circ \phi_t^{-1} \circ \psi_s \circ \phi_t (p)) - f(p)}{st}$$

$$\lim_{t,s \to 0} \frac{f( \phi_t^{-1} \circ \psi_s \circ \phi_t (p)) - f(\psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f(\psi_s \circ \phi_t (p)) - f(\phi_t \circ \psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f( \phi_t (p)) - f(\psi_s^{-1} \circ \phi_t \circ \psi_s (p))}{st}$$

$$\lim_{t,s \to 0} \frac{f(p) - f( \phi_t^{-1} \circ \psi_s^{-1} \circ \phi_t \circ \psi_s (p))}{st}$$

but Spivak's proof doesn't give me a clue for proving this.
It seems that this conjecture is wrong.
Reading further in Spivak's book, on page 162, we find a proof that

$$\lim_{t \to 0} \frac{f(\psi_t^{-1} \circ \phi_t^{-1} \circ \psi_t \circ \phi_t (p)) - f(p)}{t^2} = 2[X,Y]f$$

This is just double the second and third expressions. This is a little bit surprising to me.

Was the dropping of the subscript "s" intentional?
In any case, I really have to review my books when they arrive!

Ray

Was the dropping of the subscript "s" intentional?

Ray
Not dropped, only taken equal to t, because Spivak did it too. Strictly speaking this is not an exactly equivalent expression, but if the limit in the (s,t) plane exists, then its value equals the limit along the line s = t.

This proof takes more than one page, and the necessary prerequisites are more than two pages.
The chain rule plays the key role!

In any case, I really have to review my books when they arrive!
Until they arrive...
Sorry, it didn't fit into 3 pieces; here is the end of the proof.

Sigh, this is really bad. I have gone from wondering to having doubts about Spivak's reasoning. Do you want to discuss the doubts? In any case, I will look around on the internet for alternate explanations. I really prefer books :(

Ray

Sorry, this post slipped my notice:

mma:

Spivak question: It seems to me that the notation L_x Y in the second to last line is a typo (or I don't understand); it should probably be the partial derivative. The last line is L_x Y by definition?

Ray
No, I think that everything is OK here. $$(L_XYf)(p)$$ means here $$(L_X(Yf))(p)$$. Since $$Yf$$ is a real-valued function, $$L_X(Yf)$$ is simply $$X(Yf)$$.
The last line is the definition of $$[X,Y]f$$.

I have gone from wondering to having doubts about Spivak's reasoning.
Ray
I'm afraid that I don't really know what doubts you mean.

"$$(L_XYf)(p)$$ means here $$(L_X(Yf))(p)$$"
Thanks, just getting old.
My problems with the second proof:
1) First page
"If there happens to be a coordinate system x"
then
"even if [X,Y]$$\neq$$0 "
While you can build such a coordinate system at p, the coordinate flows will lift off of the X, Y flow lines at (h, h, 0, 0). In other words, in general the required coordinate system doesn't exist?
2)
At the bottom of the first page c() is defined as a point, whereas at the top of the second page it is a "constant curve". What is a "constant curve"? Then it's treated as a function. In any case, the definition of c as an entity with p -> 0 ends up defining it as a member of TM, i.e. a path in the total tangent space. In that case c'() would have to be defined in a coordinate-free manner.

It looks to me as if he needed an entity for the calculations and worked backward to it, then mislabeled it. Of course I consistently misinterpret things at times, like the original L_X(Yf) above.

Ray

1) First page
"If there happens to be a coordinate system x"
then
"even if [X,Y]$$\neq$$0 "
While you can build such a coordinate system at p, the coordinate flows will lift off of the X, Y flow lines at (h, h, 0, 0). In other words, in general the required coordinate system doesn't exist?
Exactly.

2)
At the bottom of the first page c() is defined as a point.
More precisely, c(h) is defined as a point on the curve $$c: \mathbb{R} \rightarrow M : h \mapsto c(h)$$

Whereas at the top of the
second page it is a "constant curve". What is a "constant curve"?
More precisely, c is the constant curve p up to first order.

This expression is explained on the previous page: $$c = \gamma$$ up to first order at 0 means that $$c'(0) = \gamma'(0)$$.

And it is also explained in PROPOSITION 15: c'(0) (= a constant's derivative) = 0.

I hope this helps.

Thanks, this makes the process a lot clearer. I can proceed. Some of the calculations are clever. Sorry to be a pest, but these things weren't clear to me.
I am still more comfortable with phrases like $$\mathbb{R} \times M \rightarrow M$$, pushforward for vectors, pullback for covectors. I am interpreting the reasoning from this viewpoint. Tedious, but it works for me.
I have neglected Lie derivatives in my self-education; my bad, because I knew they were essential in GR, but I just took a superficial view.
I am trying to understand how the factor of "2" appeared; I presume that some of the push/pulls count the lack of closure twice.

Ray


Perhaps the clearest understanding of what goes on with the Lie derivative is obtained as follows:
If one is at a point where the vector field X is nonzero, then there is a coordinate system defined in some neighborhood of the point in which X is in fact the first coordinate vector field (X = d/dx1 there). For such a vector field all is clear: the Lie bracket [X,Y] equals the flow-pullback derivative already alluded to, etc. But every point either has X identically 0 in a neighborhood (in which case everything in sight is 0) or is a limit of points where X is not 0. So that way you can understand everything!

You can find some discussion of this here
http://www.math.ucla.edu/~greene/Integral Curves of Vector Fields.pdf
if you do not mind my handwriting!

The proof of the original question can be done by brute force, by expanding the flows in a Taylor series in a local coordinate system.
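For what it's worth, here is a sketch of how that brute-force computation goes in a chart, dropping all terms of third order in (s, t) and writing $$DX$$, $$DY$$ for the Jacobians of the coordinate expressions of the fields. Differentiating the flow equation twice gives

$$\phi_t(p) = p + tX(p) + \tfrac{t^2}{2}(DX\,X)(p) + O(t^3), \qquad \psi_s(p) = p + sY(p) + \tfrac{s^2}{2}(DY\,Y)(p) + O(s^3)$$

Composing in the two orders,

$$\psi_s(\phi_t(p)) = p + tX + sY + st\,(DY\,X)(p) + \tfrac{t^2}{2}DX\,X + \tfrac{s^2}{2}DY\,Y + O(3)$$

$$\phi_t(\psi_s(p)) = p + tX + sY + st\,(DX\,Y)(p) + \tfrac{t^2}{2}DX\,X + \tfrac{s^2}{2}DY\,Y + O(3)$$

so their difference is $$st\,(DY\,X - DX\,Y)(p) + O(3) = st\,[X,Y]_p + O(3)$$, and

$$\frac{f(\psi_s \circ \phi_t (p)) - f(\phi_t \circ \psi_s (p))}{st} = df_p([X,Y]) + O(s) + O(t) \longrightarrow [X,Y]f(p)$$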

Robert G.

Thanks!
If you don't mind me restating the obvious in terms I am comfortable with, the explanation also smooths Spivak's proofs for me.
1) With smooth maps M -> N you can push forward vectors (and pull back covectors) on smooth manifolds: from differential geometry.
2) A C^inf vector field can be integrated to a flow, i.e. a C^inf automorphism.
3) Since the M -> M map is invertible (an automorphism), you can pull back vectors along the flows.
This should have been second nature, but somehow it eluded me (or I forgot it; something that happens).
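The pullback in 3) can be made concrete with a minimal sketch (field, flow, and point are my own illustrative choices): for a linear field the flow and its Jacobian are explicit, so pulling Y back along the flow and differencing in t reproduces $$[X,Y]$$.

```python
import math

def Y(p):                         # sample field Y = y d/dx
    x, y = p
    return (y, 0.0)

def pullback_Y(t, p):
    # X = x d/dx - y d/dy has flow phi_t(x, y) = (x e^t, y e^-t); its
    # inverse phi_{-t} is linear with Jacobian diag(e^-t, e^t), so the
    # pulled-back vector (phi_t^{-1})_* Y_{phi_t(p)} is a matrix-vector product.
    x, y = p
    v = Y((x * math.exp(t), y * math.exp(-t)))
    return (v[0] * math.exp(-t), v[1] * math.exp(t))

def lie_XY(p, t=1e-5):            # (L_X Y)(p) as a central difference in t
    a, b = pullback_Y(t, p), pullback_Y(-t, p)
    return tuple((u - v) / (2 * t) for u, v in zip(a, b))

print(lie_XY((2.0, 3.0)))   # ≈ (-6.0, 0.0), matching [X, Y] = -2y d/dx
```

Here the direct coordinate computation gives $$[X,Y] = -2y\,\partial/\partial x$$, which the flow pullback recovers.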

It seems to me that the Theorem on page 6 is missing the condition "iff [V,W]=0"; or am I missing the point yet again? I haven't examined the proof yet; I will this afternoon.