About Lie algebras, vector fields and derivations

  • #1
aalma
-Verify that the space ##Vect(M)## of vector fields on a manifold ##M## is a Lie algebra with respect to the bracket.
-More generally, verify that the set of derivations of any algebra ##A## is a Lie algebra with respect to the bracket defined as ##[\delta_1,\delta_2]=\delta_1\circ\delta_2-\delta_2\circ\delta_1##.

In the first part of the question am I supposed to show that the bracket operation satisfies the axioms of a Lie algebra: bilinearity, skew-symmetry, and the Jacobi identity? Given ##X, Y \in Vect(M)## we look at ##[X,Y]f=XYf-YXf## for ##f \in C^{\infty}(M)##, and then check the axioms?
But isn't it trivial, knowing that ##Vect(M)## has the structure of a vector space over ##\mathbb{R}## and that if ##f \in C^{\infty}(M)## and ##X \in Vect(M)## then ##fX \in Vect(M)##?

I would be glad if you could tell me how these parts work.

Thank you for the help.
 
  • #2
As a physicist, I'd introduce a holonomic coordinate system and use the definition of a vector (field) as the operator acting on a scalar field as
$$Xf=X^j \partial_j f, \quad \text{where} \quad \partial_j=\frac{\partial}{\partial x^j}.$$
Then the above quoted theorem is easy to prove by applying the usual rules of taking (partial) derivatives.
 
  • Like
Likes aalma
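For reference, a sketch of the coordinate computation this suggests (summation convention assumed). The key point is that the second-order terms cancel, so the commutator is again a first-order operator, i.e. a vector field:
\begin{align*}
[X,Y]f&=X^j\partial_j\left(Y^k\partial_k f\right)-Y^j\partial_j\left(X^k\partial_k f\right)\\
&=\left(X^j\partial_j Y^k-Y^j\partial_j X^k\right)\partial_k f+\left(X^jY^k-Y^jX^k\right)\partial_j\partial_k f\\
&=\left(X^j\partial_j Y^k-Y^j\partial_j X^k\right)\partial_k f,
\end{align*}
where the second-order terms vanish after relabeling the summation indices and using ##\partial_j\partial_k f=\partial_k\partial_j f##.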
  • #3
aalma said:
In the first part of the question am I supposed to show that the bracket operation satisfies the axioms of a Lie algebra: bilinearity, skew-symmetry, and the Jacobi identity? Given ##X, Y \in Vect(M)## we look at ##[X,Y]f=XYf-YXf## for ##f \in C^{\infty}(M)##, and then check the axioms?
Yes.
aalma said:
But isn't it trivial, knowing that ##Vect(M)## has the structure of a vector space over ##\mathbb{R}## and that if ##f \in C^{\infty}(M)## and ##X \in Vect(M)## then ##fX \in Vect(M)##?
No. This doesn't say anything about the bracket, so any properties of the bracket need to be shown. It may be easy, but it doesn't follow from this; it follows from the definition of the bracket.
 
  • Like
Likes aalma and vanhees71
  • #5
vanhees71 said:
As a physicist, I'd introduce a holonomic coordinate system and use the definition of a vector (field) as the operator acting on a scalar field as
$$Xf=X^j \partial_j f, \quad \text{where} \quad \partial_j=\frac{\partial}{\partial x^j}.$$
Then the above quoted theorem is easy to prove by applying the usual rules of taking (partial) derivatives.
Yes, then you take two vector fields ##X=X^j\partial_j##, ##Y=Y^i\partial_i## and check the axioms of a Lie algebra, i.e. ##[X,Y]=-[Y,X]##, bilinearity, and the Jacobi identity?
What about the general case they mention?
 
  • #6
martinbn said:
Yes.

No. This doesn't say anything about the bracket, so any properties of the bracket need to be shown. It may be easy but it doesn't follow from this, it follows from the definition of the bracket.
Yeah. How do I check this exactly? Is it just to show that
##[X,Y](f)=-[Y,X](f)##, and does this hold because the bracket ##[\,\cdot\,,\,\cdot\,]## is skew-symmetric?
Do bilinearity and the Jacobi identity also hold for the same reason?
(Or is it not as straightforward as I think?!)
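For reference: antisymmetry does not need to be assumed; it falls out of the definition of the bracket in one line:
$$[X,Y](f)=X(Y(f))-Y(X(f))=-\big(Y(X(f))-X(Y(f))\big)=-[Y,X](f).$$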
 
  • #7
aalma said:
Yes, then you take two vector fields ##X=X^j\partial_j##, ##Y=Y^i\partial_i## and check the axioms of a Lie algebra, i.e. ##[X,Y]=-[Y,X]##, bilinearity, and the Jacobi identity?
What about the general case they mention?
Bilinearity is inherited from the derivatives, and the Jacobi identity is nothing but the Leibniz (product) rule. Anti-commutativity is inherited from the matrix multiplication of the Jacobian matrices.

The only crucial part is left-invariance (or right-invariance, in case your book defines it with right instead of left).
 
  • Like
Likes aalma and malawi_glenn
  • #8
fresh_42 said:
Bilinearity is inherited from the derivatives, and the Jacobi identity is nothing but the Leibniz (product) rule. Anti-commutativity is inherited from the matrix multiplication of the Jacobian matrices.

The only crucial part is left-invariance (or right-invariance, in case your book defines it with right instead of left).
Thanks. As I understand it, I do not actually need to use local coordinates here, just to check the axioms of a Lie algebra, which are clear here since ##Vect(M)## has a bracket operation that is bilinear over ##\mathbb{R}##, anti-commutative, and satisfies the Jacobi identity?
What do you mean by checking left-invariance here? (Why is that important?)
 
  • #9
aalma said:
Thanks. As I understand it, I do not actually need to use local coordinates here, just to check the axioms of a Lie algebra, which are clear here since ##Vect(M)## has a bracket operation that is bilinear over ##\mathbb{R}##, anti-commutative, and satisfies the Jacobi identity?
What do you mean by checking left-invariance here? (Why is that important?)
It all depends on your toolbox. What are you supposed to use and what not?

How is ##X## defined? Where does the point of evaluation of the tangent vectors come into play? How does the group operate on the tangent field? Have you made a sketch?

And it also depends a bit on what you study. If you study physics or differential geometry, then you should get used to local coordinates and they really make life easier as @vanhees71 pointed out. If you study Lie groups and Lie algebras in mathematics, then you could approach this on a bit more generalized level.

Whenever derivatives are involved, things heavily depend on language and notation. E.g. your ##Vect(M)## almost looks categorical, in which case local coordinates may not be the preferred choice. Another example can be found (at the beginning of)
https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/
where I gathered 10 different perspectives on the same thing: differentiation. And I didn't even use the word slope.

Hence, the correct answer to your question depends on the language you are using. It is a very technical question. It all follows more or less directly from the definitions. So start at this point, with the definitions, write down what needs to be shown in the same language, and perform the few calculations in between.
 
  • Like
Likes aalma
  • #10
fresh_42 said:
It all depends on your toolbox. What are you supposed to use and what not?

How is ##X## defined? Where does the point of evaluation of the tangent vectors come into play? How does the group operate on the tangent field? Have you made a sketch?

And it also depends a bit on what you study. If you study physics or differential geometry, then you should get used to local coordinates and they really make life easier as @vanhees71 pointed out. If you study Lie groups and Lie algebras in mathematics, then you could approach this on a bit more generalized level.

Whenever derivatives are involved, things heavily depend on language and notation. E.g. your ##Vect(M)## almost looks categorical, in which case local coordinates may not be the preferred choice. Another example can be found (at the beginning of)
https://www.physicsforums.com/insights/journey-manifold-su2mathbbc-part/
where I gathered 10 different perspectives on the same thing: differentiation. And I didn't even use the word slope.

Hence, the correct answer to your question depends on the language you are using. It is a very technical question. It all follows more or less directly from the definitions. So start at this point, with the definitions, write down what needs to be shown in the same language, and perform the few calculations in between.
Yes, thanks. (It is possible to use local coordinates.)
If I take vector fields ##X, Y \in Vect(M)## and ##f \in C^\infty(M)##, then I define the operation ##[\,\cdot\,,\,\cdot\,]## on ##Vect(M)##, getting a vector field ##[X,Y]## that acts on ##f##:
##[X,Y](f)=X(Y(f))-Y(X(f))##.
The calculations were a little confusing;
for example, in checking bilinearity:
##[X,Y_1+Y_2](f)=[X,Y_1](f)+[X,Y_2](f)##.
I used the definition of the bracket:
##[X,Y_1+Y_2](f)=X((Y_1+Y_2)(f))-(Y_1+Y_2)(X(f))##. Now which facts can help me get that this equals ##[X,Y_1](f)+[X,Y_2](f)##, or should I just mention that it follows from the bracket being bilinear?

Sorry if it is trivial.
 
  • #11
aalma said:
Yes, thanks. (It is possible to use local coordinates.)
If I take vector fields ##X, Y \in Vect(M)## and ##f \in C^\infty(M)##, then I define the operation ##[\,\cdot\,,\,\cdot\,]## on ##Vect(M)##, getting a vector field ##[X,Y]## that acts on ##f##:
##[X,Y](f)=X(Y(f))-Y(X(f))##.
The calculations were a little confusing;
for example, in checking bilinearity:
##[X,Y_1+Y_2](f)=[X,Y_1](f)+[X,Y_2](f)##.
I used the definition of the bracket:
##[X,Y_1+Y_2](f)=X((Y_1+Y_2)(f))-(Y_1+Y_2)(X(f))##. Now which facts can help me get that this equals ##[X,Y_1](f)+[X,Y_2](f)##, or should I just mention that it follows from the bracket being bilinear?

Sorry if it is trivial.
Well, trivial is relative. I remember that it once took me two days and four variable transformations to understand what an author called "clearly". And if anybody says "obviously" then this will be the first point to doubt!

You only have to go on with what you started:
\begin{align*}
[X\, , \,\alpha_1Y_1+\alpha_2Y_2](f)&=X((\alpha_1Y_1+\alpha_2Y_2)(f))-(\alpha_1Y_1+\alpha_2Y_2)(X(f))\\
&=\alpha_1X(Y_1(f))-\alpha_1Y_1(X(f))+\alpha_2X(Y_2(f))-\alpha_2Y_2(X(f))\\
&=\alpha_1[X\, , \,Y_1](f)+\alpha_2[X\, , \,Y_2](f)
\end{align*}
And then do the same for ##[\alpha_1X_1+\alpha_2X_2\, , \,Y](f)=\ldots=\alpha_1[X_1\, , \,Y](f)+\alpha_2[X_2\, , \,Y](f)## and for ##[X\, , \,Y](\alpha_1f_1+\alpha_2f_2)=\ldots= \alpha_1[X\, , \,Y](f_1)+\alpha_2[X\, , \,Y](f_2),## and then correct your copy and paste errors.

However, the multiplicative part - the Jacobi identity - will need some more consideration: What is ##X(f)?## Or even more importantly, what is ##X(f\cdot g)?##
 
Last edited:
  • Like
Likes aalma
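For completeness, a sketch of the analogous check in the first argument (linearity in ##f## works the same way); the second line uses that every vector field acts ##\mathbb{R}##-linearly on functions:
\begin{align*}
[\alpha_1X_1+\alpha_2X_2\, , \,Y](f)&=(\alpha_1X_1+\alpha_2X_2)(Y(f))-Y((\alpha_1X_1+\alpha_2X_2)(f))\\
&=\alpha_1\big(X_1(Y(f))-Y(X_1(f))\big)+\alpha_2\big(X_2(Y(f))-Y(X_2(f))\big)\\
&=\alpha_1[X_1\, , \,Y](f)+\alpha_2[X_2\, , \,Y](f)
\end{align*}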
  • #12
fresh_42 said:
Well, trivial is relative. I remember that it once took me two days and four variable transformations to understand what an author called "clearly". And if anybody says "obviously" then this will be the first point to doubt!

You only have to go on with what you started:
\begin{align*}
[X\, , \,\alpha_1Y_1+\alpha_2Y_2](f)&=X((\alpha_1Y_1+\alpha_2Y_2)(f))-(\alpha_1Y_1+\alpha_2Y_2)(X(f))\\
&=\alpha_1X(Y_1(f))-\alpha_1Y_1(X(f))+\alpha_2X(Y_2(f))-\alpha_2Y_2(X(f))\\
&=\alpha_1[X\, , \,Y_1](f)+\alpha_2[X\, , \,Y_2](f)
\end{align*}
And then do the same for ##[\alpha_1X_1+\alpha_2X_2\, , \,Y](f)=\ldots=\alpha_1[X_1\, , \,Y](f)+\alpha_2[X_2\, , \,Y](f)## and for ##[X\, , \,Y](\alpha_1f_1+\alpha_2f_2)=\ldots= \alpha_1[X\, , \,Y](f_1)+\alpha_2[X\, , \,Y](f_2),## and then correct your copy and paste errors.

However, the multiplicative part - the Jacobi identity - will need some more consideration: What is ##X(f)?## Or even more importantly, what is ##X(f\cdot g)?##
Antisymmetry is also obvious. So just the Jacobi identity.

##X(f)## is in ##C^{\infty}(M)##.

I need to show: ##([[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y])(f)=0##.
I can look at
##[[X,Y],Z](f)=[X,Y](Z(f))-Z([X,Y](f))=X(Y(Z(f)))-Y(X(Z(f)))-Z(X(Y(f)))+Z(Y(X(f)))##. The same for the other two brackets, then just add them together. Is that fine?
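For reference, a sketch of how the cancellation works once all three double brackets are expanded:
\begin{align*}
[[X,Y],Z](f)&=X(Y(Z(f)))-Y(X(Z(f)))-Z(X(Y(f)))+Z(Y(X(f)))\\
[[Y,Z],X](f)&=Y(Z(X(f)))-Z(Y(X(f)))-X(Y(Z(f)))+X(Z(Y(f)))\\
[[Z,X],Y](f)&=Z(X(Y(f)))-X(Z(Y(f)))-Y(Z(X(f)))+Y(X(Z(f)))
\end{align*}
Each of the six orderings of ##X,Y,Z## applied to ##f## appears exactly once with a plus sign and once with a minus sign, so the sum of the three lines is zero.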
 
  • #13
aalma said:
Antisymmetry is also obvious. So just the Jacobi identity.

##X(f)## is in ##C^{\infty}(M)##.

I need to show: ##([[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y])(f)=0##.
I can look at
##[[X,Y],Z](f)=[X,Y](Z(f))-Z([X,Y](f))=X(Y(Z(f)))-Y(X(Z(f)))-Z(X(Y(f)))+Z(Y(X(f)))##. The same for the other two brackets, then just add them together. Is that fine?
If you can use ##[X\, , \,Y]=XY-YX## then everything follows from that. But that is a bit of cheating. The essential lesson here is to learn what ##X(f)## actually means!

If you add a point, say ##X_p(f),## then we have the tangent of ##f## at the point ##p## in the direction of ##X.## If we drop this evaluation point, then ##X(f)## is the function ##p\longmapsto X_p(f),## i.e. all tangents of ##f## in the direction of ##X##. But how can we calculate with those vector fields? That is what this exercise is all about. Local coordinates ##\dfrac{\partial }{\partial \,x_k}## are one possibility; flows, i.e. paths along these vector fields, are another.

You must use your definition of ##X(f).## It has to be defined somewhere.
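For reference, a minimal concrete version of such a definition, assuming local coordinates ##x^1,\ldots,x^n## in a chart around ##p##:
$$X_p(f)=X^k(p)\,\frac{\partial f}{\partial x^k}\bigg|_p\,,\qquad X(f)\colon p\longmapsto X_p(f),$$
so ##X(f)## is again a smooth function on ##M## and iterated expressions such as ##X(Y(f))## make sense.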
 
  • #14
fresh_42 said:
If you can use ##[X\, , \,Y]=XY-YX## then everything follows from that.
This is exactly what one needs to do as written in the first post. You are trying to do too much.
 
  • Like
Likes aalma
  • #15
I don't understand why you don't use coordinates and the corresponding basis. Then all this is a simple exercise in doing partial derivatives. With the Ricci calculus it's just a bit of work to write out the Lie-algebra properties from the coordinate-free notation and to do the not too complicated algebra. The abstract notation is very useful for general considerations, but if you want to do concrete calculations, the Ricci calculus is much simpler. It's good to be used to both notations and also to know when which of them is most convenient to use!
 
  • Like
Likes strangerep
  • #16
martinbn said:
This is exactly what one needs to do as written in the first post. You are trying to do too much.
In the general case (showing that the set of derivations is a Lie algebra with respect to the bracket ##[\delta_1,\delta_2]=\delta_1\delta_2-\delta_2\delta_1##), the action of ##[\delta_1,\delta_2]## on ##f## is
##[\delta_1,\delta_2](f)=\delta_1(\delta_2(f))-\delta_2(\delta_1(f))##. Here I need to do the same checking, right?
Anti-symmetry is obvious. But bilinearity and the Jacobi identity seem different; what is the point here?

[Mentor Note -- a misquote has been corrected in this post.]
 
Last edited by a moderator:
  • #17
Just use the definition of a vector field as
$$Xf=X^j \partial_j f.$$
I'd start with the derivation property
$$X(fg)=(Xf) g + f(Xg).$$
Then, by defining the bracket as the commutator of the differential operators, you immediately get the Lie-algebra properties of these brackets from the fact that ##X## is a derivation. Linearity in both arguments is obvious. Only the Jacobi identity is a bit of work to write out. As an intermediate step it's helpful to derive
$$[AB,C]=A[B,C]+[A,C]B$$
for three vector fields ##A##, ##B##, and ##C##.
 
  • Like
Likes aalma
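For reference, a sketch of this intermediate step, treating the vector fields as composable operators:
$$A[B,C]+[A,C]B=(ABC-ACB)+(ACB-CAB)=ABC-CAB=[AB,C].$$
Applying it to ##[[A,B],C]=[AB,C]-[BA,C]## and collecting terms then rearranges into the Jacobi identity.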
  • #18
aalma said:
In the general case (showing that the set of derivations is a Lie algebra with respect to the bracket ##[\delta_1,\delta_2]=\delta_1\delta_2-\delta_2\delta_1##), the action of ##[\delta_1,\delta_2]## on ##f## is
##[\delta_1,\delta_2](f)=\delta_1(\delta_2(f))-\delta_2(\delta_1(f))##. Here I need to do the same checking, right?
Yes
aalma said:
Anti-symmetry is obvious. But bilinearity and the Jacobi identity seem different; what is the point here?
Why do they seem different?
 
  • Like
Likes aalma
  • #19
martinbn said:
Yes

Why do they seem different?
Yes, but then if the calculations are exactly the same (as if we take ##\delta_1=X, \delta_2=Y##), then where do we use the fact that we are now working with derivations (not with vector fields)?
 
  • #20
aalma said:
Yes, but then if the calculations are exactly the same (as if we take ##\delta_1=X, \delta_2=Y##), then where do we use the fact that we are now working with derivations (not with vector fields)?
Vector fields are derivations; you never really needed anything more about them.
 
  • Like
Likes aalma
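For reference, a sketch of the one computation that is specific to derivations: the composition ##\delta_1\circ\delta_2## is not a derivation, but the commutator is, because applying the Leibniz rule twice gives
$$\delta_1\delta_2(ab)=\delta_1\delta_2(a)\,b+\delta_2(a)\,\delta_1(b)+\delta_1(a)\,\delta_2(b)+a\,\delta_1\delta_2(b),$$
and the two middle terms are symmetric in ##\delta_1## and ##\delta_2##, so they cancel in the difference:
$$[\delta_1,\delta_2](ab)=[\delta_1,\delta_2](a)\,b+a\,[\delta_1,\delta_2](b).$$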
  • #21
martinbn said:
Vector fields are derivations; you never really needed anything more about them.
Yes, that is true. Given the way the question is posed, I thought that I needed to do the calculations for derivations too (which might differ, since the Leibniz rule is involved).
 

1. What is a Lie algebra?

A Lie algebra is a mathematical structure consisting of a vector space equipped with a bilinear operation called the Lie bracket, which is antisymmetric and satisfies the Jacobi identity. The bracket typically measures the failure of two operations to commute. Lie algebras have applications in many areas of mathematics, including differential geometry, topology, and physics.

2. How are vector fields related to Lie algebras?

The vector fields on a manifold form a Lie algebra. They can be thought of as infinitesimal transformations of the manifold: they assign a direction and magnitude of change to each point. The Lie bracket of two vector fields measures the failure of the corresponding infinitesimal transformations to commute with each other.

3. What is the role of derivations in Lie algebras?

A derivation of an algebra ##A## is a linear map ##\delta: A \to A## that satisfies the Leibniz rule ##\delta(ab)=\delta(a)b+a\,\delta(b)##. Vector fields are exactly the derivations of the algebra ##C^{\infty}(M)## of smooth functions, so the statement about derivations contains the statement about vector fields as a special case. Derivations play a crucial role in the study of Lie algebras because the commutator of two derivations is again a derivation.

4. How are Lie algebras used in physics?

Lie algebras are used in physics to study the symmetries and transformations of physical systems. The concept of a Lie group, which is a group with a smooth manifold structure, is closely related to Lie algebras. In physics, Lie algebras are used to study the fundamental forces of nature, such as electromagnetism and the strong and weak nuclear forces.

5. Can you give an example of a Lie algebra?

One example of a Lie algebra is three-dimensional Euclidean space ##\mathbb{R}^3## with the cross product as the Lie bracket: the cross product is bilinear, antisymmetric, and satisfies the Jacobi identity. This Lie algebra has applications in mechanics, electromagnetism, and other areas of physics.
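As a check of the Jacobi identity in this example (a sketch, using the identity ##a\times(b\times c)=b\,(a\cdot c)-c\,(a\cdot b)##):
$$a\times(b\times c)+b\times(c\times a)+c\times(a\times b)=b\,(a\cdot c)-c\,(a\cdot b)+c\,(b\cdot a)-a\,(b\cdot c)+a\,(c\cdot b)-b\,(c\cdot a)=0.$$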
