Calculating area and volume for diagonal metrics

Tony Stark
My first question is, what is a diagonal metric?
Secondly, while calculating the area and volume for a diagonal metric, why do we calculate it as
$$dA = \sqrt{g_{11}g_{22}}\, dx^1 dx^2$$
instead of
$$dA = g_1 g_2\, dx^1 dx^2\,?$$
 
Tony Stark said:
My first question is, what is a diagonal metric?
Secondly, while calculating the area and volume for a diagonal metric, why do we calculate it as
$$dA = \sqrt{g_{11}g_{22}}\, dx^1 dx^2$$
instead of
$$dA = g_1 g_2\, dx^1 dx^2\,?$$
Both equations are the same, aren't they?
 
Tony Stark said:
My first question is, what is a diagonal metric?
In the 2-dimensional case, it's a metric with components
$$\begin{bmatrix} g_{11} & 0 \\ 0 & g_{22} \end{bmatrix}$$
or equivalently with an interval
$$ds^2 = g_{11} (dx^1)^2 + g_{22} (dx^2)^2,$$
as opposed to a non-diagonal metric
$$\begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix}, \qquad ds^2 = g_{11} (dx^1)^2 + 2 g_{12}\, dx^1 dx^2 + g_{22} (dx^2)^2.$$
Tony Stark said:
Secondly, while calculating the area and volume for a diagonal metric, why do we calculate it as
$$dA = \sqrt{g_{11}g_{22}}\, dx^1 dx^2$$
instead of
$$dA = g_1 g_2\, dx^1 dx^2\,?$$
##g_{11}g_{22}## is the determinant of the matrix for the metric. I've no idea what you think ##g_1## and ##g_2## are.
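For a concrete check, here is a minimal SymPy sketch (plane polar coordinates; an illustrative example, assuming SymPy is available) showing that ##\sqrt{g_{11}g_{22}} = \sqrt{\det g}## reproduces the familiar area element ##r\, dr\, d\varphi##:

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
x = r * sp.cos(phi)
y = r * sp.sin(phi)

# Jacobian of (x, y) with respect to (r, phi)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, phi)],
               [sp.diff(y, r), sp.diff(y, phi)]])

g = sp.simplify(J.T * J)            # metric components g_{jk}; here diag(1, r**2)
dA = sp.sqrt(sp.simplify(g.det()))  # sqrt(g11*g22) for a diagonal metric
print(g, dA)                        # Matrix([[1, 0], [0, r**2]])  r

R = sp.symbols('R', positive=True)
print(sp.integrate(dA, (r, 0, R), (phi, 0, 2*sp.pi)))  # pi*R**2, area of a disc
```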
 
DrGreg said:
##g_{11}g_{22}## is the determinant of the matrix for the metric. I've no idea what you think ##g_1## and ##g_2## are.
##g_{11}## is the scalar product of the basis 4-vector ##\textbf{g}_1## with itself, isn't it?
 
Tony Stark said:
##g_{11}## is the scalar product of the basis 4-vector ##\textbf{g}_1## with itself, isn't it?
OK, I see what you mean. I've never seen that denoted by ##\textbf{g}_1##; it's more usual to use ##\textbf{e}_1##. So your formula would be correct if by ##g_1## you mean
$$\sqrt{g(\textbf{e}_1, \textbf{e}_1)} = \sqrt{g_{11}}.$$
Usually things are written just in terms of the metric tensor ##g_{\alpha\beta}## without any mention of basis vectors ##\textbf{e}_1## etc.
 
DrGreg said:
So your formula would be correct if by ##g_1## you mean ##\sqrt{g(\textbf{e}_1, \textbf{e}_1)} = \sqrt{g_{11}}##.
Yes, sir.
Lastly, I just want to know how ##\sqrt{g_{11}}## differs from ##g_1##.
Please explain.
 
Tony Stark said:
Lastly, I just want to know how ##\sqrt{g_{11}}## differs from ##g_1##.
It's confusing to use the same letter ##g## for the metric and for the basis vectors. The usual convention is to denote a basis vector by ##\textbf{e}_1## (or, if you prefer, ##\vec{e}_1##).

The length of the vector can be denoted
$$\sqrt{g(\textbf{e}_1, \textbf{e}_1)} = \sqrt{\textbf{e}_1 \cdot \textbf{e}_1} = \| \textbf{e}_1 \| = e_1 = \sqrt{g_{11}}.$$
The notation ##g_1## isn't used.
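Numerically the same point can be made with a quick NumPy sketch (a toy basis chosen just for illustration): build the metric from two basis vectors and check that ##\sqrt{g_{11}}## is just the length of ##\textbf{e}_1##, while ##\sqrt{\det g}## is the area they span.

```python
import numpy as np

e1 = np.array([2.0, 0.0])            # basis vector of length 2
e2 = np.array([0.0, 3.0])            # basis vector of length 3

g = np.array([[e1 @ e1, e1 @ e2],
              [e2 @ e1, e2 @ e2]])   # g_{ab} = e_a . e_b

print(np.sqrt(g[0, 0]), np.linalg.norm(e1))  # 2.0  2.0
print(np.sqrt(np.linalg.det(g)))             # 6.0 = |e1| * |e2|
```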
 
vanhees71
The confusion arises because one usually first learns to deal with Cartesian tensor components, while in general relativity we usually use, at least in the beginning, holonomic bases for the tangent and cotangent spaces. Of course, both formalisms are equivalent. So let's compare, for convenience, both in flat three-dimensional Euclidean space. I prefer to always work with upper and lower indices to make clear which components/vectors are co- and contravariant, even when I work in a Cartesian basis.

There we can start with a Cartesian basis ##(\vec{e}_1,\vec{e}_2,\vec{e}_3)##, which obeys
$$g_{ab}=\vec{e}_a \cdot \vec{e}_b=\delta_{ab}.$$
For the line element we have
$$\mathrm{d} l^2=\mathrm{d} \vec{x} \cdot \mathrm{d} \vec{x}=\mathrm{d} x^a \mathrm{d} x^b \vec{e}_a \cdot \vec{e}_b = \mathrm{d} x^a \mathrm{d}x^b \delta_{ab}.$$
Now we introduce any curvilinear coordinates ##q^j## (usually covering only a part of the entire Euclidean space, but we tacitly assume that we restrict ourselves to this region, where the coordinates are regular without always mentioning domains and co-domains). Then you can write
$$\mathrm{d} x^a=\frac{\partial x^a}{\partial q^j} \mathrm{d} q^j.$$
Then the length element is
$$\mathrm{d} l^2 = \delta_{ab} \frac{\partial x^a}{\partial q^j} \frac{\partial x^b}{\partial q^k} \mathrm{d} q^j \mathrm{d} q^k=g_{jk} \mathrm{d} q^j \mathrm{d} q^k.$$
The metric coefficients of the holonomic coordinate basis defined this way are in general not diagonal, and even if they are (as for orthogonal curvilinear coordinates such as the well-known spherical or cylindrical coordinates), the diagonal elements are not 1.
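As a concrete illustration (a SymPy sketch using standard spherical coordinates, assuming SymPy is available), one can compute ##g_{jk}## exactly as in the formula above and see that it is diagonal but with non-unit entries:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])
q = sp.Matrix([r, th, ph])

T = x.jacobian(q)          # {T^a}_j = dx^a/dq^j
g = sp.simplify(T.T * T)   # g_{jk} = delta_{ab} (dx^a/dq^j)(dx^b/dq^k)
print(g)                   # diag(1, r**2, r**2*sin(theta)**2)
```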

Now the line integrals are already defined fully covariantly, i.e., covariant under changes of arbitrary curvilinear (generalized) coordinates:
$$\int_{\mathcal{C}} \mathrm{d} \vec{x} \cdot \vec{V}(\vec{x})=\int_{a}^{b} \mathrm{d} \lambda \frac{\mathrm{d} q^{j}}{\mathrm{d} \lambda} V_{j},$$
where ##V_j = g_{jk} V^k## are the covariant components of the vector with respect to the holonomic basis, which is defined by
$$\vec{b}_j=\frac{\partial \vec{x}}{\partial q^j}=\frac{\partial x^a}{\partial q^j} \vec{e}_a=\vec{e}_a {T^a}_{j}.$$

From the usual Cartesian components ##\overline{V}^a## you get the contravariant holonomic components ##V^j## as follows:
$$\vec{V}=V^j \vec{b}_j=V^j {T^a}_{j} \vec{e}_a=\overline{V}^a \vec{e}_a \; \Rightarrow \; \overline{V}^a={T^a}_{j} V^j.$$
To go the other way, we introduce ##{U^j}_a={(T^{-1})^j}_{a}##, leading to
$$V^j={U^j}_a \overline{V}^a.$$
The vector components with upper indices thus transform contravariantly and the objects with lower indices like the basis vectors covariantly.
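A small SymPy check of this transformation behaviour (an illustration, reusing the spherical-coordinate ##{T^a}_j## from above): contravariant components are mapped with ##U=T^{-1}##, covariant ones with ##T##, and the contraction ##V_j W^j## comes out basis independent.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)])
T = x.jacobian(sp.Matrix([r, th, ph]))   # {T^a}_j
U = T.inv()                              # {U^j}_a = {(T^{-1})^j}_a

Wbar = sp.Matrix([1, 2, 3])   # Cartesian contravariant components
Vbar = sp.Matrix([4, 5, 6])   # Cartesian components of a co-vector

W = U * Wbar      # contravariant: W^j = {U^j}_a Wbar^a
V = T.T * Vbar    # covariant:     V_j = {T^a}_j Vbar_a

print(sp.simplify((V.T * W)[0]))   # 32 = 1*4 + 2*5 + 3*6, independent of the basis
```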

The gradient of a scalar field also transforms covariantly:
$$\frac{\partial \phi}{\partial q^j}=\frac{\partial x^a}{\partial q^j} \frac{\partial \phi}{\partial x^a}.$$

Now we define surface elements. In Cartesian coordinates they are given by
$$\mathrm{d}^2 \vec{S}=\frac{\partial \vec{x}}{\partial \lambda_1} \times \frac{\partial \vec{x}}{\partial \lambda_2}\,\mathrm{d}\lambda_1\,\mathrm{d}\lambda_2,$$
where ##\lambda_1## and ##\lambda_2## are any parameters describing the surface. Written in Cartesian components you have
$$\mathrm{d}^2 \overline{S}_a = \epsilon_{abc} \frac{\partial x^b}{\partial \lambda_1} \frac{\partial x^c}{\partial \lambda_2}\,\mathrm{d}\lambda_1\,\mathrm{d}\lambda_2.$$
Of course, it's also simple to write this in terms of the generalized coordinates:
$$\mathrm{d}^2 \overline{S}_a=\epsilon_{abc} \frac{\partial x^b}{\partial q^j} \frac{\partial x^c}{\partial q^k} \frac{\mathrm{d} q^j}{\mathrm{d} \lambda_1} \frac{\mathrm{d} q^k}{\mathrm{d} \lambda_2}\,\mathrm{d}\lambda_1\,\mathrm{d}\lambda_2.$$
Now, since the ##\mathrm{d}^2 \overline{S}_a## are Cartesian covariant (!) components (with lower indices!), the transformation to the holonomic co-vector components is given by
$$\mathrm{d}^2 S_i=\frac{\partial x^a}{\partial q^i} \mathrm{d}^2 \overline{S}_a = \epsilon_{abc} \frac{\partial x^a}{\partial q^i} \frac{\partial x^b}{\partial q^j} \frac{\partial x^c}{\partial q^k} \frac{\mathrm{d} q^j}{\mathrm{d} \lambda_1} \frac{\mathrm{d} q^k}{\mathrm{d} \lambda_2}\,\mathrm{d}\lambda_1\,\mathrm{d}\lambda_2.$$
This shows that the Levi-Civita symbols are not covariant tensor components; what appears in place of the Levi-Civita symbol in the general basis/co-basis formalism is the Levi-Civita tensor
$$\Delta_{jkl}=\epsilon_{abc} \frac{\partial x^a}{\partial q^j} \frac{\partial x^b}{\partial q^k} \frac{\partial x^c}{\partial q^l} =\epsilon_{jkl} \det T.$$
But now we have
$$g_{jk}=\delta_{ab} \frac{\partial x^a}{\partial q^j} \frac{\partial x^b}{\partial q^k}.$$
This implies that
$$g=\mathrm{det} (g_{jk})=\mathrm{det}(T^t T)=(\mathrm{det} T)^2.$$
Now we assume that the order of the ##q^j## is chosen such that ##\mathrm{det} T>0##. Then we can write
$$\Delta_{jkl}=\sqrt{g} \epsilon_{jkl}.$$
And this is a generally covariant tensor, the Levi-Civita tensor.
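A quick SymPy check (again with spherical coordinates as a concrete example): ##\det T = r^2\sin\theta## and ##g=(\det T)^2##, so ##\sqrt{g}=\det T## as claimed.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = sp.Matrix([r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)])
T = x.jacobian(sp.Matrix([r, th, ph]))
g = sp.simplify(T.T * T)

print(sp.simplify(T.det()))               # r**2*sin(theta)
print(sp.simplify(g.det() - T.det()**2))  # 0, i.e. g = (det T)^2
```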

The same holds true for the volume element
$$\mathrm{d} V=\epsilon_{abc} \mathrm{d} x^a \mathrm{d} x^b \mathrm{d} x^c=\Delta_{ijk} \mathrm{d} q^i \mathrm{d} q^j \mathrm{d} q^k.$$

Now for an orthogonal system you have
$$(g_{ij})=\mathrm{diag}(h_1^2,h_2^2,h_3^2), \quad h_j=\sqrt{g_{jj}}.$$
In the last equation one must not sum over ##j##. From now on we no longer use the Einstein summation convention but write summation symbols explicitly. First of all, the determinant of the metric is
$$g=\det (g_{ij})=(h_1 h_2 h_3)^2 \; \Rightarrow \sqrt{g}=h_1 h_2 h_3.$$
The volume element thus is
$$\mathrm{d}V=h_1 h_2 h_3\, \mathrm{d} q^1\, \mathrm{d} q^2\, \mathrm{d} q^3.$$
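For instance, for standard spherical coordinates ##(r,\theta,\varphi)## this gives the familiar
$$h_r=1, \quad h_\theta=r, \quad h_\varphi=r\sin\theta \; \Rightarrow \; \mathrm{d}V = h_r h_\theta h_\varphi \, \mathrm{d}r \, \mathrm{d}\theta \, \mathrm{d}\varphi = r^2 \sin\theta \, \mathrm{d}r \, \mathrm{d}\theta \, \mathrm{d}\varphi.$$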
Also, instead of the holonomic basis vectors one usually works with their normalized versions, so that at each point one again has a Cartesian (orthonormal) basis; note, however, that in general the basis is different at each point of Euclidean space:
$$\vec{n}_j=\frac{1}{h_j} \vec{b}_j.$$
Thus we have for any vector ##\vec{V}##
$$\vec{V}=\sum_j V^j \vec{b}_j=\sum_{j} h_j V^j \vec{n}_j \; \Rightarrow \; \tilde{V}^j=h_j V^j,$$
where ##\tilde{V}^j## are vector components with respect to the orthonormal basis ##\vec{n}_j##. For the covariant vector components we use the corresponding co-basis to the ##(\vec{n}_j)##, i.e., we define co-vector components ##\tilde{V}_j## such that for all contravariant vector components ##\tilde{W}^j## we have
$$\sum_j \tilde{V}_j \tilde{W}^j=\sum_j V_j W^j = \sum_j V_j \frac{\tilde{W}^j}{h_j} \; \Rightarrow\; \tilde{V}_j=\frac{V_j}{h_j}.$$
Thus for the surface-element vectors we have
$$\mathrm{d}^2 \tilde{S}_i=\sum_{j,k} \sqrt{g} \epsilon_{ijk} \frac{1}{h_i} \frac{\mathrm{d} q^j}{\mathrm{d} \lambda_1}\frac{\mathrm{d} q^k}{\mathrm{d} \lambda_2}\mathrm{d} \lambda_1\mathrm{d} \lambda_2.$$
Now, for e.g. ##i=1##, you get the factor
$$\frac{\sqrt{g}}{h_1}=h_2 h_3$$
etc.
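For example, on a surface of constant ##r## in spherical coordinates (parametrized by ##\lambda_1=\theta##, ##\lambda_2=\varphi##) this factor gives
$$\mathrm{d}^2 \tilde{S}_r = \frac{\sqrt{g}}{h_r}\,\mathrm{d}\theta\,\mathrm{d}\varphi = h_\theta h_\varphi\,\mathrm{d}\theta\,\mathrm{d}\varphi = r^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi,$$
the familiar area element on a sphere of radius ##r##.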

I hope it has now become clear why one has the specific factors you mentioned in post #1.

All this is of course also valid in any Riemannian (or pseudo-Riemannian) space, where you don't start from a flat space. In the pseudo-Riemannian case, e.g. the space-time description used in General Relativity, one has to be a bit careful with the signs. Usually one defines the Levi-Civita tensor in 1+3 space-time dimensions by
$$\Delta^{ijkl}=\frac{1}{\sqrt{-g}} \epsilon^{ijkl}$$
and
$$\Delta_{ijkl}=g_{ii'} g_{jj'} g_{kk'} g_{ll'} \Delta^{i'j'k'l'}=g \frac{1}{\sqrt{-g}} \epsilon_{ijkl}=-\sqrt{-g} \epsilon_{ijkl},$$
where ##\epsilon^{ijkl}=\epsilon_{ijkl}## are the usual Levi-Civita symbols with ##\epsilon^{0123}=1## and antisymmetric in all four indices.
 
DrGreg said:
The length of the vector can be denoted ##\sqrt{g(\textbf{e}_1, \textbf{e}_1)} = \sqrt{\textbf{e}_1 \cdot \textbf{e}_1} = \| \textbf{e}_1 \| = e_1 = \sqrt{g_{11}}##. The notation ##g_1## isn't used.
Thank you, sir. Now I get the point correctly. Thank you very much.
I am writing my understanding below. Please clarify if something is faulty:
##\textbf{e}_1## is a basis vector, whereas for calculating the area element we need the scalar measure (length) of that vector. That's why we use ##\sqrt{g_{11}}\sqrt{g_{22}} = \|\textbf{e}_1\|\,\|\textbf{e}_2\|##, as it gives the scalar measure.
 
vanhees71 said:
The confusion arises because one usually first learns to deal with Cartesian tensor components, while in general relativity we usually use, at least in the beginning, holonomic bases for the tangent and cotangent spaces. [...]
Thanks vanhees71, but much of the post was beyond my scope of understanding.
 