Gravitational Lagrangian density

SUMMARY

The discussion centers on the construction and significance of the gravitational Lagrangian density, particularly the integral I = ∫(−g(x))^1/2 L(x) d^4x. The square root of the determinant of the metric, √(-g), is essential for ensuring the invariance of the integral under coordinate transformations. Key references include Poisson's and Carroll's lecture notes, which provide foundational insights into the mathematics of integration on manifolds and the role of differential forms. The conversation also touches on the implications of total derivatives in Lagrangian mechanics and their eventual vanishing in equations of motion.

PREREQUISITES
  • Understanding of Lagrangian mechanics and action principles
  • Familiarity with differential geometry and manifold integration
  • Knowledge of metric tensors and their determinants
  • Basic concepts of tensor calculus and coordinate transformations
NEXT STEPS
  • Study Poisson's lecture notes on gravitational theory for foundational concepts
  • Explore Carroll's lecture notes on general relativity for advanced insights
  • Learn about integration on manifolds and the use of differential forms
  • Investigate the implications of total derivatives in Lagrangian formulations
USEFUL FOR

Physicists, particularly those specializing in general relativity, theoretical physicists working with Lagrangian mechanics, and mathematicians interested in differential geometry and manifold theory.

mertcan
Hi, in gravitational theory the action integral is ##I = \int (-g(x))^{1/2} L(x)\, d^4x##, but I do not know why there is a square root of ##-g##. Could you give me a proof of this integral? I mean, how is this integral constructed? What is the logic behind it? Thanks in advance...
 
The ##\int d^4 x## is not invariant...
The ##\int d^4 x \, \sqrt{-g}## is invariant...
 
See section 1.7 of Poisson's lecture notes,

http://www.physics.uoguelph.ca/poisson/research/agr.pdf

or pages 51 - 53 of Carroll's lecture notes,

http://xxx.lanl.gov/abs/gr-qc/9712019

Both of these sets of lecture notes evolved into books, and the books are better than the notes.
 
If you later study integration on manifolds, you will see that the general integral is defined using differential forms. The volume form must be completely antisymmetric and of the same rank as the dimension of the manifold. The permutation symbol by itself is only a tensor density; to turn it into a genuine n-form it must be multiplied by a scalar density of the correct weight, and the square root of the metric determinant is exactly such a density. Going to an orthonormal frame, you can conclude that the physical volume n-form is just the square root of the metric determinant multiplied by the permutation symbol. On a pseudo-Riemannian manifold, you also need to multiply by the sign of the determinant to make the square root real.
 
It's sort of straightforward to see why ##\sqrt{|\det(g)|}## comes up in 2 dimensions, but I guess the generalization to more dimensions requires the mathematics of forms.

It might be worth looking at how it works in 2 spacelike dimensions. There, you can think of an integral over all space as the limit of a sum, where the sum is over little rectangles with sides ##\vec{\delta x}## and ##\vec{\delta y}##. The issue is how to compute the area of the rectangle (in 3D it would be computing the volume of a parallelepiped; in 4D, the hypervolume of some 4-dimensional cell). In Cartesian coordinates, the displacement vectors ##\vec{\delta x}## and ##\vec{\delta y}## are orthogonal, and their lengths are just ##|\delta x|## and ##|\delta y|##, respectively, so you just use the usual formula for the area of a rectangle: ##A = |\delta x| |\delta y|##. But if you're using curvilinear coordinates, the two displacements are not necessarily orthogonal. In that case, how do you compute the area?

Well, here's a heuristic argument: in good old Euclidean space, the area of a parallelogram with sides ##\vec{U}## and ##\vec{V}## is given by:
##A = |\vec{U} \times \vec{V}| = |U||V| \sin(\theta)## where ##\theta## is the angle between the two sides. We also know that:

##\vec{U} \cdot \vec{V} = |U||V| \cos(\theta)##

So we can relate the cross product to the dot product as follows:

##|\vec{U} \times \vec{V}|^2 = |U|^2 |V|^2 \sin^2(\theta) = |U|^2 |V|^2 - |U|^2 |V|^2 \cos^2(\theta) = (\vec{U} \cdot \vec{U})(\vec{V} \cdot \vec{V}) - (\vec{U} \cdot \vec{V})^2##

The dot product can be written in terms of the metric:

##\vec{U} \cdot \vec{V} = g_{\mu \nu} U^\mu V^\nu##

So we can rewrite the cross product in terms of the metric:

##|\vec{U} \times \vec{V}|^2 = (g_{\mu \nu} g_{\alpha \beta} - g_{\mu \alpha} g_{\nu \beta}) U^\mu U^\nu V^\alpha V^\beta##

Now, let's specialize to the case ##\vec{U} = \vec{\delta x}## and ##\vec{V} = \vec{\delta y}##. In that case, ##U^x = \delta x##, ##U^y = 0##, ##V^x = 0##, ##V^y = \delta y##. So we have:

##|\vec{\delta x} \times \vec{\delta y}|^2 = (g_{xx} g_{yy} - g_{xy} g_{xy})\, \delta x^2\, \delta y^2## (all other terms are zero)

Note that the expression involving the metric is just the determinant of the 2x2 matrix:

##\begin{pmatrix} g_{xx} & g_{xy} \\ g_{yx} & g_{yy} \end{pmatrix}##

So,

##|\vec{\delta x} \times \vec{\delta y}|^2 = \det(g)\, \delta x^2\, \delta y^2##

So,

##|\vec{\delta x} \times \vec{\delta y}| = \sqrt{\det(g)}\, \delta x\, \delta y##

It would take a lot more work to see that this generalizes to more than 2 dimensions and to the case of a pseudo-Euclidean metric, but it gives you a little bit of the flavor for why something like ##\sqrt{\det(g)}## might show up.
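As a sanity check on the algebra above, here is a minimal numeric sketch (in Python, with polar coordinates picked arbitrarily as the curvilinear example): it builds the induced 2D metric from finite-difference coordinate basis vectors and confirms that the cell area ##|\vec{e}_r \times \vec{e}_\theta|## equals ##\sqrt{\det(g)}##:

```python
import numpy as np

# Polar coordinates (r, theta) mapped into Cartesian (x, y).
def cart(r, th):
    return np.array([r * np.cos(th), r * np.sin(th)])

r0, th0 = 2.0, 0.7
eps = 1e-6

# Coordinate basis vectors e_r, e_theta via finite differences.
e_r = (cart(r0 + eps, th0) - cart(r0, th0)) / eps
e_th = (cart(r0, th0 + eps) - cart(r0, th0)) / eps

# Induced metric g_{ij} = e_i . e_j; for polar coordinates it is diag(1, r^2).
g = np.array([[e_r @ e_r, e_r @ e_th],
              [e_th @ e_r, e_th @ e_th]])

# 2D "cross product" of the basis vectors = area of the coordinate cell.
area = abs(e_r[0] * e_th[1] - e_r[1] * e_th[0])

print(np.sqrt(np.linalg.det(g)))  # ~ r0 = 2.0
print(area)                       # same number
```

For polar coordinates ##g = \mathrm{diag}(1, r^2)##, so both numbers come out to ##r##.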
 
stevendaryl said:
It would take a lot more work to see that this generalizes to more than 2 dimensions
Not really; the main change for n dimensions is to use the completely antisymmetric scalar n-tuple product. You can define it using the Levi-Civita symbol in Cartesian coordinates. It essentially boils down to finding the volume spanned by a parallelepiped.
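A minimal sketch of that n-tuple product (in Python; the three vectors are arbitrary examples): summing over permutations with Levi-Civita signs reproduces the determinant, i.e. the parallelepiped volume:

```python
import itertools
import numpy as np

def levi_civita_volume(vectors):
    """Volume of the parallelepiped spanned by n vectors in n dimensions,
    via the completely antisymmetric (Levi-Civita) sum over permutations."""
    n = len(vectors)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Sign of the permutation from its inversion count.
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = float(sign)
        for row, col in enumerate(perm):
            term *= vectors[row][col]
        total += term
    return abs(total)

# Arbitrary example: a lower-triangular set of edge vectors, volume 1*2*3 = 6.
vs = np.array([[1.0, 0.0, 0.0],
               [1.0, 2.0, 0.0],
               [0.5, 0.5, 3.0]])
print(levi_civita_volume(vs))  # 6.0
print(abs(np.linalg.det(vs)))  # 6.0
```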
 
stevendaryl said:
It would take a lot more work to see that this generalizes to more than 2 dimensions and to the case of a pseudo-Euclidean metric, but it gives you a little bit of the flavor for why something like ##\sqrt{\det(g)}## might show up.
stevendaryl, thanks for your explanatory answer, but how do we obtain the square root of ##-g##? I cannot see any ##-g## term in your answer...
 
The minus sign is there because you have one timelike direction in spacetime, which makes the determinant of the metric negative. Basically, one takes the absolute value of the determinant.

You can also consult Zwiebach's string theory book, chapter 6.
 
mertcan said:
stevendaryl, thanks for your explanatory answer, but how do we obtain the square root of ##-g##? I cannot see any ##-g## term in your answer...
##\sqrt{-g}## is shorthand for ##\sqrt{-\text{det}(g)} = \sqrt{|\text{det}(g)|}##.
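A small symbolic sketch of this shorthand (using sympy, with flat spacetime in spherical coordinates as the example): ##\sqrt{-\det(g)}## reproduces the familiar ##r^2 \sin\theta## volume factor:

```python
import sympy as sp

r, th = sp.symbols("r theta", positive=True)

# Minkowski metric in spherical coordinates (t, r, theta, phi),
# signature (-, +, +, +): det(g) = -r^4 sin^2(theta) < 0.
g = sp.diag(-1, 1, r**2, r**2 * sp.sin(th)**2)

sqrt_minus_g = sp.sqrt(-g.det())

# Evaluate at a sample point: r^2 sin(theta) = 4 * sin(pi/3) = 2*sqrt(3).
val = float(sqrt_minus_g.subs({r: 2, th: sp.pi / 3}))
print(val)  # ≈ 3.4641
```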
 
  • #10
ChrisVer said:
The \int d^4 x is not invariant...
The \int d^4 x \sqrt{-g} is invariant...
Hi ChrisVer, you said ##\int d^4 x \, \sqrt{-g}## is invariant, but I see that the integral of the Lagrangian density times the infinitesimal volume element is what is actually invariant, so I wonder if there is a proof or derivation of this. Could you provide its derivation?
 
  • #11
mertcan said:
Hi ChrisVer, you said ##\int d^4 x \, \sqrt{-g}## is invariant, but I see that the integral of the Lagrangian density times the infinitesimal volume element is what is actually invariant, so I wonder if there is a proof or derivation of this. Could you provide its derivation?
This is just a regular change of coordinates for a multivariable integral.
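A minimal numeric sketch of that change of coordinates (in Python; the metric and Jacobian are random examples, not from the thread): since ##g'_{\mu\nu} = J^\alpha{}_\mu J^\beta{}_\nu g_{\alpha\beta}##, we get ##\det g' = (\det J)^2 \det g##, so the Jacobian from ##d^4x## is exactly cancelled by ##\sqrt{-g}##:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic Lorentzian-signature metric at a point: start from
# Minkowski eta and conjugate by a random invertible matrix.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
A = rng.normal(size=(4, 4))
g = A.T @ eta @ A                            # det(g) < 0

# A random coordinate-change Jacobian J = dx/dx' (shifted to keep it invertible).
J = rng.normal(size=(4, 4)) + 4 * np.eye(4)

g_prime = J.T @ g @ J                        # metric in the new coordinates

lhs = np.sqrt(-np.linalg.det(g_prime))
rhs = abs(np.linalg.det(J)) * np.sqrt(-np.linalg.det(g))
print(lhs, rhs)  # equal: sqrt(-g') = |det J| * sqrt(-g)
```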
 
  • #12
Orodruin said:
This is just a regular change of coordinates for a multivariable integral.
Pardon, Orodruin, would you mind spelling it out or explaining in more detail?
 
  • #13
The ##\sqrt{-g}## is doing the same job that ##r^2\sin\theta## does when you replace ##dx\,dy\,dz## with ##r^2\sin\theta\, dr\,d\theta\,d\phi##. It makes the volume of the infinitesimal parallelepiped defined by infinitesimal displacements in each of the coordinate directions invariant under coordinate transformations.
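A quick numeric sketch of that analogy (Python; midpoint rule, with the grid resolution chosen arbitrarily): integrating the Jacobian ##r^2\sin\theta## over the coordinate ranges of the unit ball recovers its volume ##4\pi/3##:

```python
import numpy as np

# Midpoint-rule check that the Jacobian r^2 sin(theta) is right:
# integrating it over 0<r<1, 0<theta<pi, 0<phi<2pi must give the
# unit-ball volume 4*pi/3 (the trivial phi integral is done analytically).
n = 400
r = (np.arange(n) + 0.5) / n             # midpoints in (0, 1)
th = (np.arange(n) + 0.5) / n * np.pi    # midpoints in (0, pi)
dr, dth = 1.0 / n, np.pi / n

R, TH = np.meshgrid(r, th, indexing="ij")
vol = 2 * np.pi * np.sum(R**2 * np.sin(TH)) * dr * dth

print(vol, 4 * np.pi / 3)  # both ≈ 4.18879
```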
 
  • #14
Thanks for your valuable responses...
 
  • #15
One fun way to go through this is to apply Liouville's theorem:
http://www.nyu.edu/classes/tuckerman/stat.mech/lectures/lecture_2/node2.html
I call it fun because it has interesting analogies in other 'fields' (like the conservation of volumes in phase-space diagrams)...
 
  • #16
Hi, I was looking at some equations related to the invariance of the Lagrangian. In my attachment you can see four equations; as we proceed from the third equation to the fourth (green box), the first and third terms of the third equation vanish. I would like to ask how these terms vanish mathematically. Could you explain this using some mathematical demonstrations?
 

Attachments

  • ımage yy.png (19.2 KB)
  • #17
They are total derivatives within the integral; they will vanish eventually...
 
  • #18
Thanks for your reply, but I do not quite get the logic, so I am asking you to help me visualize the concept: could you explain in more detail using some mathematics?
 
  • #19
You use Stokes' theorem to convert these integrals into surface integrals. Upon applying the boundary conditions (variations at the boundary vanish), these terms vanish.
 
  • #20
I cannot prove it to myself by hand using your answers. Would you mind helping me a little bit more?
 
  • #21
By the way, ChrisVer, you said that total derivatives within the integral would vanish eventually. Is there a proof of this?
 
  • #22
I mean, at the end of the day, in every variational formalism you will eventually get some EOM from an action of the form:
##S = \int L(x, \dot{x}; t)\, dt##
Correct?
Then the concept is that the Lagrangian is not uniquely defined; there is a whole set of Lagrangians that in general produce the same equations of motion. In the above case those Lagrangians are:
##L + \frac{d}{dt} K##
with the last term being a total derivative... Total derivatives don't survive into the equations of motion after the variation of the Lagrangian: the ##\frac{d}{dt}K## term identically satisfies the Euler-Lagrange equation.
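That last claim can be checked symbolically; here is a minimal sketch with sympy, using an arbitrarily chosen ##K(x, t) = x^2 \sin t## (any ##K## works the same way):

```python
import sympy as sp

t, X, V = sp.symbols("t X V")  # time, and placeholders for x and xdot

# An arbitrarily chosen K(x, t); its total time derivative, written as a
# function of (x, xdot, t), is  dK/dt = (dK/dx) xdot + dK/dt|_explicit.
K = X**2 * sp.sin(t)
L = sp.diff(K, X) * V + sp.diff(K, t)  # L = 2 x xdot sin(t) + x^2 cos(t)

# Euler-Lagrange expression dL/dx - d/dt(dL/dxdot) along an arbitrary path x(t):
x = sp.Function("x")(t)
xdot = sp.diff(x, t)
dL_dx = sp.diff(L, X).subs({X: x, V: xdot})
dL_dxdot = sp.diff(L, V).subs({X: x, V: xdot})
EL = sp.simplify(dL_dx - sp.diff(dL_dxdot, t))
print(EL)  # 0
```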

Another way to see this is by looking at the integrals you have; they will eventually give you 'surface' terms, which in general can lead to some "constraints" on your fields... where? At infinity...
What is this integral equal to:
##\int_{a}^{b} \frac{dA(x)}{dx}\, dx = ?##
##A(x)## is a function along a parametrized line, but the result of this integral is a single number, fixed entirely by the endpoint values... The generalization to more dimensions is why these kinds of terms are called 'surface' terms...

Another example, from the 'classical case' of the relativistic string, is that those "surface" constraints generally lead to the familiar "boundary" conditions for the oscillating string (like the Neumann or Dirichlet ones).
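The one-dimensional 'surface term' statement can also be checked numerically; a minimal sketch in Python, with ##A(x) = x^3 e^{-x}## chosen arbitrarily:

```python
import numpy as np

# Integrating a total derivative dA/dx over [a, b] depends only on
# the endpoint values: the result is A(b) - A(a), a 'surface' term.
a, b = 0.0, 2.0
x = np.linspace(a, b, 100001)
A = x**3 * np.exp(-x)      # an arbitrarily chosen A(x)
dA = np.gradient(A, x)     # numerical dA/dx

# Trapezoidal integration of dA/dx over [a, b]:
integral = np.sum(0.5 * (dA[1:] + dA[:-1]) * np.diff(x))

print(integral, A[-1] - A[0])  # both ≈ 8 e^{-2} ≈ 1.0827
```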
 
  • #23
Sorry, I have to correct something: the 'classical case' is that of a string in general (even non-relativistic)... I specified "relativistic" because I happened to read the application to relativistic strings in the past...
 
