# Basic questions about raising and lowering indices

• olgerm
In summary: It was a mistake. My guess was ##T_{\mu \nu}*g^{\mu \nu}##. My question is: can ##T_\mu^\mu## be written in a form where ##T## has only lower indices? No, it cannot be written that way.
olgerm
Gold Member
Is it true that ##\frac{\partial f}{\partial x_\mu}=\frac{\partial f}{\partial x^\mu}*g_{\mu \mu}##?

How do I lower some of the indices if the same index appears many times in the same tensor or product? For example:
##T^{i_1 i_1 i_2}## to ##T_{i_1 i_1 i_2}##
##a^{\alpha}*b^{\alpha}*c^{\alpha}## to ##a_{\alpha}*b_{\alpha}*c_{\alpha}##?

Last edited:
Orodruin said:
No.
Is that the answer to question 1? Can you write an example to make it clear that it is wrong?

Orodruin said:
If the same index appears more than twice, you did something wrong.
Ok, but if I have ##T_{\mu}^\mu##: can I write it in a form which has only covariant indices? Something like ##T_{\mu \nu}*g_{\mu \nu}##?

Last edited:
olgerm said:
Is that the answer to question 1? Can you write an example to make it clear that it is wrong?
It has three indices that are the same and on the right side all indices are covariant while on the left the only index is contravariant. There is nothing about that expression that can be right. Please see the Insight I linked to.

olgerm said:
Is that the answer to question 1? Can you write an example to make it clear that it is wrong?
The same index is repeated three times, so it is not a valid expression.
olgerm said:
Ok, but if I have ##T_{\mu}^\mu##: can I write it in a form which has only covariant indices? Something like ##T_{\mu \nu}*g_{\mu \nu}##?
##T_{\mu \nu}*g_{\mu \nu}## is another invalid expression because you have pairs of lower indices instead of pairs of upper and lower indices that can be summed over.

Whatever is going wrong here is happening further back. Where did these invalid expressions come from? What problem are you working on that leads you to write them down?

Nugatory said:
The same index is repeated three times, so it is not a valid expression.

##T_{\mu \nu}*g_{\mu \nu}## is another invalid expression because you have pairs of lower indices instead of pairs of upper and lower indices that can be summed over.

Whatever is going wrong here is happening further back. Where did these invalid expressions come from? What problem are you working on that leads you to write them down?
Just to stress that all of those errors, along with some discussion regarding them and how to avoid them, are covered in the Insight I linked to.

@olgerm - also, can I ask what you are using the star to mean? In the context of tensor maths I understand the symbol to mean a Hodge dual, but I don't think that's what you are using it for.

Ibix said:
@olgerm - also, can I ask what you are using the star to mean? In the context of tensor maths I understand the symbol to mean a Hodge dual, but I don't think that's what you are using it for.
... especially as the metric is symmetric and not a 2-form.

Orodruin said:
... especially as the metric is symmetric and not a 2-form.
Quite. I think it's just meant to be multiplication, but if so it would be a confusing notation wherever Hodge duality is applicable; and if it is not, it's always possible that whatever is actually meant will make some of the rest make sense.

olgerm
Orodruin said:
There is nothing about that expression that can be right. Please see the Insight I linked to.
It seemed to me like that because ##\frac{\partial f}{\partial x_\mu}(\vec{X})=\lim_{d \to 0}(\frac{f(\vec{X}+d*\vec{e}_\mu)-f(\vec{X})}{d})=\lim_{d \to 0}(\frac{f(\vec{X}+d*\vec{e}^\nu*g_{\nu \mu})-f(\vec{X})}{d})=\lim_{d \to 0}(\frac{f(\vec{X}+d*\sum_{\nu=0}^D(\vec{e}^\nu*g_{\mu \nu}))-f(\vec{X})}{d})\\
\color{red}=\color{black}\lim_{d \to 0}(\frac{f(\vec{X}+d*\sum_{\nu=0}^D(\vec{e}^\mu*g_{\mu \mu}))-f(\vec{X})}{d})=\lim_{d \to 0}(\frac{f(\vec{X}+d*\sum_{\nu=0}^D(\vec{e}^\nu))-f(\vec{X})}{d})*g_{\mu \mu}=\frac{\partial f}{\partial x^\mu}(\vec{X})*g_{\mu \mu}##
but now I see it is wrong.
olgerm said:
Ok, but if I have ##T_{\mu}^\mu##: can I write it in a form which has only covariant indices? Something like ##T_{\mu \nu}*g_{\mu \nu}##?
It was a mistake. My guess was ##T_{\mu \nu}*g^{\mu \nu}##. My question is: can ##T_\mu^\mu## be written in a form where ##T## has only lower indices?

Ibix said:
what you are using the star to mean?
Multiplication.

olgerm said:
Can ##T_\mu^\mu## be written in a form where ##T## has only lower indices?
No. ##{T^\mu}_\mu## is a compact notation for ##\sum_{\mu=0}^3 {T^\mu}_\mu##, which is just a number with no indices at all.

If you have two free indices instead of one summation index then you can use the metric to lower indices: ##T_{\mu\nu}=g_{\mu\epsilon}{T^\epsilon}_\nu##.
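The lowering rule above is easy to check numerically. Here is a minimal sketch with numpy's `einsum`; the Minkowski metric with signature (+,-,-,-) and the random components are my own illustrative choices, not anything from the thread:

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-) -- an assumption for illustration.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)            # components g^{mu nu}

rng = np.random.default_rng(0)
T_mixed = rng.standard_normal((4, 4))   # components T^mu_nu (first index up)

# Lower the first index: T_{mu nu} = g_{mu eps} T^eps_nu
T_lower = np.einsum('me,en->mn', eta, T_mixed)

# The trace T^mu_mu sums over the repeated up/down pair and leaves a plain number.
trace = np.einsum('mm->', T_mixed)
print(np.shape(trace))   # () -- a scalar, no free indices left
```

Raising the index back with `eta_inv` recovers `T_mixed`, which is exactly the statement that the metric and inverse metric undo each other.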

olgerm said:
multipication.
Can I suggest you don't use it to mean that? Multiplication is understood when there's no symbol, and the star operator means something else (edit: although, as Orodruin noted, it is not valid in this context). ##v^iv^jv^k## is perfectly unambiguous.
olgerm said:
My guess was ##T_{\mu \nu}*g^{\mu \nu}##. My question is: can ##T_\mu^\mu## be written in a form where ##T## has only lower indices?
I think the following works:$$\begin{eqnarray*} T^\mu{}_\mu&=&T^\mu{}_\nu\delta^\nu_\mu\\ &=&T_{\rho\nu}g^{\rho\mu}\delta^\nu_\mu\\ &=&T_{\mu\rho}g^{\rho\mu}\end{eqnarray*}$$
...which is what you wrote, give or take choices of dummy indices. Not sure why you'd want to do this - you're just shuffling your metric dependence around.
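The identity ##T^\mu{}_\mu = T_{\mu\rho}g^{\rho\mu}## can also be sanity-checked numerically; a sketch (metric and random components are hypothetical choices for illustration):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # assumed Minkowski metric
eta_inv = np.linalg.inv(eta)             # g^{mu nu}

rng = np.random.default_rng(1)
T_mixed = rng.standard_normal((4, 4))    # T^mu_nu

# Fully lowered components: T_{mu nu} = g_{mu rho} T^rho_nu
T_lower = np.einsum('mr,rn->mn', eta, T_mixed)

# The identity: T^mu_mu = T_{mu rho} g^{rho mu}
lhs = np.einsum('mm->', T_mixed)
rhs = np.einsum('mr,rm->', T_lower, eta_inv)
print(np.isclose(lhs, rhs))   # True
```

This is just the matrix identity ##\mathrm{tr}(T) = \mathrm{tr}(\eta T \eta^{-1})##, which makes visible why the manipulation only shuffles the metric dependence around.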

Last edited:
I see @Nugatory disagrees - possible I'm wrong...

Ibix said:
I see @Nugatory disagrees - possible I'm wrong...
The right-hand side of your last line is just setting up to raise the index again, getting back to ##{T^\mu}_\mu##.

Nugatory said:
The right-hand side of your last line is just setting up to raise the index again, getting back to ##{T^\mu}_\mu##.
I agree I don't really see any point in the manipulation, but I read your #12 as saying it couldn't be done at all, which worried me. Perhaps I misunderstood.

@olgerm : Please read a good book about physics and learn how to write equations in a readable form. It is really impossible to communicate this way when you always post unreadable formulae.

The important point of the original question is that a derivative with respect to ##x^{\mu}## gives a new covariant index ##\mu##. In your case, when ##f## is a scalar field, then
$$V_{\mu}=\frac{\partial f}{\partial x^{\mu}}=\partial_{\mu} f$$
are covariant vector components.

If you work in Minkowski space (special relativity) and in Minkowski coordinates, where ##\eta_{\mu \nu}=\mathrm{diag}(1,-1,-1,-1)## then ##\partial_{\mu}## applied to tensor-field components leads to new tensor-field components with one more covariant index.

To prove this it's sufficient to look at the case of a scalar field. If you have ##\bar{x}^{\mu} = {\Lambda^{\mu}}_{\nu} x^{\nu}## then a scalar field transforms as
$$\bar{f}(\bar{x})=f(x)$$
and thus
$$\bar{\partial}_{\mu} \bar{f}(\bar{x})=\frac{\partial x^{\nu}}{\partial \bar{x}^{\mu}} \partial_{\nu} f(x).$$
This is precisely the transformation law for covariant vector-field components, as claimed above.
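This transformation law can be verified with finite differences. In the sketch below, the boost rapidity and the sample scalar field ##f(x)=\eta_{\mu\nu}x^\mu x^\nu## are hypothetical choices made purely for illustration:

```python
import numpy as np

# A boost along x with rapidity 0.5 -- hypothetical numbers for illustration.
ch, sh = np.cosh(0.5), np.sinh(0.5)
Lam = np.array([[ch, -sh, 0, 0],
                [-sh, ch, 0, 0],
                [0,  0,  1, 0],
                [0,  0,  0, 1.0]])
Lam_inv = np.linalg.inv(Lam)

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def f(x):                      # a sample scalar field f(x) = x . x
    return x @ eta @ x

def fbar(xbar):                # the same field in barred coordinates: fbar(xbar) = f(x)
    return f(Lam_inv @ xbar)

def grad(g, x, h=1e-6):        # numerical partials d g / d x^mu (central differences)
    return np.array([(g(x + h*np.eye(4)[m]) - g(x - h*np.eye(4)[m])) / (2*h)
                     for m in range(4)])

x = np.array([1.0, 2.0, 3.0, 4.0])
xbar = Lam @ x

# Transformation law: dbar_mu fbar = (dx^nu / dxbar^mu) d_nu f, with dx^nu/dxbar^mu = (Lam^{-1})^nu_mu
lhs = grad(fbar, xbar)
rhs = Lam_inv.T @ grad(f, x)
print(np.allclose(lhs, rhs))   # True
```

The gradient components mix with the inverse transformation matrix, which is exactly what "covariant" means here.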

I got that question while thinking about why the index of a derivative is usually in the opposite up/down position compared to what it otherwise would be. For example:
##B^\mu=\frac{\partial f}{\partial x_\mu}##. The left side has an up (contravariant) index, but the right side has a down (covariant) index.

vanhees71 said:
This is precisely the transformation law for covariant vector-field components, as claimed above.
Do you mean the transform between ##\frac{\partial f}{\partial x_\mu}## and ##\frac{\partial f}{\partial x^\mu}##, that I wrongly thought to be ##\frac{\partial f}{\partial x_\mu}=\frac{\partial f}{\partial x^\mu}*g_{\mu \mu}## in 1. post?

##\newcommand{\bvec}[1]{\boldsymbol{#1}}##

No, I mean the Lorentz transformation from one inertial reference frame to another. (Again, please learn correct mathematical notation; you'll not be able to understand anything about science or communicate about it without learning to write down things in a way that makes sense to begin with.)

So let's review the transformation laws. We start with four-vectors, which are most easily represented by their components with respect to a Minkowskian (pseudo-Cartesian) basis ##\bvec{e}_{\mu}## which obeys
$$\bvec{e}_{\mu} \cdot \bvec{e}_{\nu}=\eta_{\mu \nu},$$
where the dot indicates the Minkowski product of two four-vectors and ##\eta_{\mu \nu}## are the components of the corresponding pseudometric wrt. a pseudo-Cartesian basis, ##\eta_{00}=1, \quad \eta_{11}=\eta_{22}=\eta_{33}=-1## and all ##\eta_{\mu \nu}=0## for ##\mu \neq \nu##.

An arbitrary four-vector ##\bvec{x}## is then written as a linear combination of these basis vectors,
$$\bvec{x}=x^{\mu} \bvec{e}_{\mu},$$
where a sum from ##0## to ##3## over equal index pairs (one upper and one lower index; there can never be equal indices both lower or both upper in any correct equation!) is implied.

Then we can also define another basis ##\bvec{e}^{\nu}##, associated with the ##\bvec{e}_{\mu}## by
$$\bvec{e}^{\nu} \cdot \bvec{e}_{\mu}=\delta_{\mu}^{\nu}.$$
With ##\eta^{\mu \nu}## defined such that ##\eta^{\mu \nu} \eta_{\nu \rho}=\delta_{\rho}^{\mu}## we have
$$\bvec{e}^{\nu} = \eta^{\nu \mu} \bvec{e}_{\mu} \; \Leftrightarrow \; \bvec{e}_{\mu} = \eta_{\mu \nu} \bvec{e}^{\nu},$$
and we can write
$$\bvec{x}=x^{\mu} \bvec{e}_{\mu}=x^{\mu} \eta_{\mu \nu} \bvec{e}^{\nu}=x_{\nu} \bvec{e}^{\nu} \quad \text{with} \quad x_{\nu}=\eta_{\mu \nu} x^{\mu}.$$
Now suppose we have another pseudo-Cartesian basis ##\bar{\bvec{e}}_{\rho}##. Now we can write the old basis in terms of the new basis as
$$\bvec{e}_{\mu} = {\Lambda^{\rho}}_{\mu} \bar{\bvec{e}}_{\rho}.$$
Then we have
$$\bvec{x}=x^{\mu} \bvec{e}_{\mu} = x^{\mu} {\Lambda^{\rho}}_{\mu} \bar{\bvec{e}}_{\rho},$$
i.e., the components of the vector wrt. the new basis are given by
$$\bar{x}^{\rho}={\Lambda^{\rho}}_{\mu} x^{\mu}.$$
Since the new basis should also be pseudo-Cartesian, we have
$$\eta_{\rho \sigma} = \bar{\bvec{e}}_{\rho} \cdot \bar{\bvec{e}}_{\sigma} ={\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} \bvec{e}_{\mu} \cdot \bvec{e}_{\nu}=\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}.$$
Contracting this with ##\eta^{\rho \alpha}## we get
$$\delta_{\sigma}^{\alpha} = \eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} \eta^{\rho \alpha} {\Lambda^{\nu}}_{\sigma},$$
$$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} \eta^{\rho \alpha} = {\Lambda_{\nu}}^{\alpha}={(\Lambda^{-1})^{\alpha}}_{\nu}.$$
This implies that
$$\bar{\bvec{e}}_{\rho}={\Lambda_{\rho}}^{\mu} \bvec{e}_{\mu}={(\Lambda^{-1})^{\mu}}_{\rho} \bvec{e}_{\mu}.$$
One says the ##x^{\mu}## transform contragrediently to the basis vectors, or that the basis vectors (lower indices) transform covariantly and the vector components (upper indices) contravariantly under Lorentz transformations.
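The relation ##\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} \eta^{\rho \alpha} = {(\Lambda^{-1})^{\alpha}}_{\nu}## derived above is easy to confirm numerically; a sketch with a boost along ##x## (the rapidity value is a hypothetical choice):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

# A boost along x with rapidity 0.3 -- hypothetical numbers for illustration.
ch, sh = np.cosh(0.3), np.sinh(0.3)
Lam = np.array([[ch, -sh, 0, 0],
                [-sh, ch, 0, 0],
                [0,  0,  1, 0],
                [0,  0,  0, 1.0]])

# Defining property (pseudo-orthogonality): eta_{rho sigma} = eta_{mu nu} Lam^mu_rho Lam^nu_sigma
print(np.allclose(Lam.T @ eta @ Lam, eta))             # True

# Contraction with eta^{rho alpha}: eta_{mu nu} Lam^mu_rho eta^{rho alpha} = (Lam^{-1})^alpha_nu
lowered = np.einsum('mn,mr,ra->an', eta, Lam, eta_inv)
print(np.allclose(lowered, np.linalg.inv(Lam)))        # True
```

In matrix language this is ##\eta \Lambda \eta^{-1} = (\Lambda^{-1})^{T}##, a rearrangement of ##\Lambda^{T}\eta\Lambda = \eta##.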

The cobasis vectors transform also contravariantly (as the upper indices indicate):
$$\bar{\bvec{e}}^{\rho}=\eta^{\rho \sigma} \bar{\bvec{e}}_{\sigma} = \eta^{\rho \sigma} {\Lambda_{\sigma}}^{\nu} \bvec{e}_{\nu} = \eta^{\rho \sigma} {\Lambda_{\sigma}}^{\nu} \eta_{\mu \nu} \bvec{e}^{\mu}={\Lambda^{\rho}}_{\mu} \bvec{e}^{\mu}.$$

A scalar field is a map ##\phi:M \rightarrow \mathbb{R}##. Written as a function of the vector components it doesn't change its value, i.e., its transformation rule under Lorentz transformations is
$$\bar{\phi}(\bar{x}^{\rho})=\phi(x^{\mu}) = \phi[{(\Lambda^{-1})^{\mu}}_{\rho} \bar{x}^{\rho}]=\phi({\Lambda_{\rho}}^{\mu} \bar{x}^{\rho}).$$
From this it's very easy to derive that ##A_{\mu}=\partial_{\mu} \phi=\partial \phi/\partial x^{\mu}## are the covariant components of a vector field, because
$$\bar{A}_{\rho}=\frac{\partial \bar{\phi}}{\partial \bar{x}^{\rho}}=\bar{\partial}_{\rho} \bar{\phi} = \frac{\partial x^{\mu}}{\partial \bar{x}^{\rho}} \partial_{\mu} \phi = {\Lambda_{\rho}}^{\mu} \partial_{\mu} \phi={\Lambda_{\rho}}^{\mu} A_{\mu}.$$
Accordingly the derivative wrt. the covariant vector indices leads to contravariant vector components
$$\partial^{\mu} \phi=\frac{\partial \phi}{\partial x_{\mu}} = \frac{\partial x^{\rho}}{\partial x_{\mu}} \partial_{\rho} \phi=\eta^{\rho \mu} \partial_{\rho} \phi=\eta^{\rho \mu} A_{\rho}=A^{\mu}.$$
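The last relation, raising the derivative index with ##\eta^{\rho\mu}##, can be checked against a concrete field. For ##\phi(x)=\eta_{\mu\nu}x^\mu x^\nu## (a hypothetical example field), ##\partial_\mu \phi = 2x_\mu## and ##\partial^\mu \phi = 2x^\mu##:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

def f(x):                      # sample scalar field f(x) = eta_{mu nu} x^mu x^nu
    return x @ eta @ x

def grad(x, h=1e-6):           # A_mu = d f / d x^mu by central differences
    return np.array([(f(x + h*np.eye(4)[m]) - f(x - h*np.eye(4)[m])) / (2*h)
                     for m in range(4)])

x = np.array([0.5, 1.0, -2.0, 3.0])
A_lower = grad(x)              # covariant components, analytically 2*eta@x
A_upper = eta_inv @ A_lower    # d^mu f = eta^{rho mu} d_rho f, analytically 2*x

print(np.allclose(A_lower, 2*eta@x), np.allclose(A_upper, 2*x))   # True True
```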

Ibix said:
I think the following works:$$\begin{eqnarray*} T^\mu{}_\mu&=&T^\mu{}_\nu\delta^\nu_\mu\\ &=&T_{\rho\nu}g^{\rho\mu}\delta^\nu_\mu\\ &=&T_{\mu\rho}g^{\rho\mu}\end{eqnarray*}$$
vanhees71 said:
Accordingly the derivative wrt. the covariant vector indices leads to contravariant vector components
$$\partial^{\mu} \phi=\frac{\partial \phi}{\partial x_{\mu}} = \frac{\partial x^{\rho}}{\partial x_{\mu}} \partial_{\rho} \phi=\eta^{\rho \mu} \partial_{\rho} \phi=\eta^{\rho \mu} A_{\rho}=A^{\mu}.$$
Do these only work in Minkowski spacetime, or generally?
The last one as: ##\frac{\partial \phi}{\partial x_{\mu}}=g^{\rho \mu} \frac{\partial \phi}{\partial x^{\mu}}##

Last edited:
Lowering the indices of the event vector with components ##x^\mu## makes very little sense outside of Minkowski space described by inertial coordinates.

Furthermore, to pile on to what was discussed above: not only does ##\partial/\partial x^\mu = \partial_\mu## transform covariantly; since those partial derivatives define the holonomic basis of the tangent space, their transformation actually defines what it means to transform covariantly. In other words, something transforms covariantly if it transforms as ##\partial_\mu##.

Orodruin said:
Lowering the indices of the event vector with components ##x^\mu## make very little sense outside of Minkowski space described by inertial coordinates.
But is it theoretically true that, generally, ##\frac{\partial \phi}{\partial x_{\mu}}=g^{\rho \mu} \frac{\partial \phi}{\partial x^{\rho}}##?

Last edited:
You have ##\mu## three times in there; that is not a good sign.

olgerm said:
But is it theoretically true that, generally, ##\frac{\partial \phi}{\partial x_{\mu}}=g^{\rho \mu} \frac{\partial \phi}{\partial x^{\rho}}##?
I do not see which part of "##x_\mu## makes very little sense" was unclear. If ##x_\mu## does not make much sense, clearly ##\partial/\partial x_\mu## does not make much sense either.

Orodruin said:
I do not see which part of "##x_\mu## makes very little sense" was unclear. If ##x_\mu## does not make much sense
The Einstein field equations include both contravariant and covariant indices.
I am trying to write the Einstein field equations in another form that uses simpler symbols, even if this form of the equation is much longer. As a last simplification I tried to lower some indices, based on the formula in post #13. So far I have got:

##R_{\mu \nu}-\frac{1}{2}*R*g_{\mu \nu}+\Lambda*g_{\mu \nu}=\frac{8*\pi*G}{c^4}*T_{\mu \nu}##

Putting in the Ricci curvature tensor and scalar curvature:
##\sum_{j_1=0}^D(\frac{\partial \Gamma^{j_1 \nu \mu}}{\partial x^{j_1}}-\frac{\partial {{\Gamma^{j_1}}_{j_1}}^{\mu}}{\partial x_{\nu}}+\sum_{j_2=0}^D({\Gamma^{j_1}}_{j_1 j_2}\Gamma^{j_2\nu\mu}-{\Gamma^{j_1\nu}}_{j_2 }{{\Gamma^{j_2}}_{j_1}}^{\mu}))+
(\Lambda-\frac{1}{2}*\sum_{i_1=0}^D(\sum_{i_2=0}^D(g_{i_1i_2}*\sum_{j_1=0}^D(\frac{\partial \Gamma^{j_1 i_2 i_1}}{\partial x^{j_1}}-\frac{\partial {{\Gamma^{j_1}}_{j_1}}^{i_1}}{\partial x_{i_2}}+\sum_{j_2=0}^D({\Gamma^{j_1}}_{j_1 j_2}\Gamma^{j_2 i_2 i_1 }-{\Gamma ^{j_1 i_2}}_{j_2}{{\Gamma^{j_2}}_{j_1}}^{i_1 })))))*g^{\mu \nu}=\frac{8*\pi*G}{c^4}*T^{\mu \nu}##

Trying to lower indices:
##\sum_{m_1=0}^D(\sum_{m_2=0}^D(g^{\mu m_1}*g^{\nu m_2}\sum_{j_1=0}^D(\sum_{m_3=0}^D( g^{j_1 m_3}(\frac{\partial \Gamma_{m_3 m_2 m_1}}{\partial x^{j_1}}-\frac{{\partial \Gamma_{m_3 j_1 m_1}}}{\partial x^{m_2}}+\sum_{j_2=0}^D(\sum_{m_4=0}^D(g^{j_2 m_4}({\Gamma_{m_3 j_1 j_2}}{\Gamma_{m_4 m_2 m_1}}-{\Gamma_{m_3 m_2 j_2}}{\Gamma_{m_4 j_1 m_1}}))))))))+
(\Lambda-\frac{1}{2}*\sum_{i_1=0}^D(\sum_{m_1=0}^D(\sum_{i_2=0}^D(\sum_{m_2=0}^D(g_{i_1i_2}*g^{i_1m_1}*g^{i_2m_2}*\sum_{j_1=0}^D(\sum_{m_3=0}^D(g^{m_3 j_1}(\frac{\partial{\Gamma}_{m_3 m_2 m_1}}{\partial x^{j_1}}-\frac{\partial \Gamma_{m_3 j_1 m_1}}{\partial x^{m_2}}+\sum_{j_2=0}^D(\sum_{m_4=0}^D(g^{j_2m_4}(\Gamma_{m_3 j_1 j_2}\Gamma_{m_4 m_2 m_1 }-\Gamma_{m_3 m_2 j_2} \Gamma_{m_4 j_1 m_1})))))))))))*g^{\mu \nu}=\frac{8*\pi*G}{c^4}*T^{\mu \nu}##

Are these simplifications correct?
The last equation is spread over many lines; I do not know why.

Last edited:
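Index gymnastics like the expansion above is easy to get wrong by hand, but the Γ-to-Ricci bookkeeping itself can be checked mechanically with a computer algebra system. Here is a sketch using sympy on a round 2-sphere of radius ##a## (a metric chosen purely for illustration, not related to the field equations above), for which the Ricci scalar is known to be ##2/a^2##:

```python
import sympy as sp

# Round 2-sphere of radius a: a minimal test metric, chosen for illustration.
theta, phi, a = sp.symbols('theta phi a', positive=True)
x = [theta, phi]
g = sp.diag(a**2, a**2*sp.sin(theta)**2)
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^l_{mu nu} = (1/2) g^{l s} (d_mu g_{s nu} + d_nu g_{s mu} - d_s g_{mu nu})
Gamma = [[[sum(ginv[l, s]*(sp.diff(g[s, nu], x[mu]) + sp.diff(g[s, mu], x[nu])
                           - sp.diff(g[mu, nu], x[s])) for s in range(n))/2
           for nu in range(n)] for mu in range(n)] for l in range(n)]

# Ricci tensor: R_{mu nu} = d_l Gamma^l_{nu mu} - d_nu Gamma^l_{l mu}
#                           + Gamma^l_{l s} Gamma^s_{nu mu} - Gamma^l_{nu s} Gamma^s_{l mu}
def ricci(mu, nu):
    return sp.simplify(sum(
        sp.diff(Gamma[l][nu][mu], x[l]) - sp.diff(Gamma[l][l][mu], x[nu])
        + sum(Gamma[l][l][s]*Gamma[s][nu][mu] - Gamma[l][nu][s]*Gamma[s][l][mu]
              for s in range(n))
        for l in range(n)))

# Ricci scalar: R = g^{mu nu} R_{mu nu}
R = sp.simplify(sum(ginv[mu, nu]*ricci(mu, nu) for mu in range(n) for nu in range(n)))
print(R)   # 2/a**2 for the round sphere
```

Validating the index placement on a small metric with a known answer like this is a cheap way to catch sign and ordering mistakes before attacking the full field equations.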
olgerm said:
Einstein field equations include both contravariant and covariant indices.

That depends on which source you are looking at. Plenty of textbooks and papers write them with two lower indices.

Also, the issue with ##x_\mu## is that a coordinate "covector" with a lower index instead of an upper index does not make sense in the general case, as @Orodruin has already pointed out. So you should not expect any equation in which such a thing appears to make sense either.

olgerm said:
I try to write einstein field equations to another form that includes simpler symbols, even if this form of the equation is much longer.

This is a very weird notion of "simpler". The standard Einstein Field Equation is easy to read and understand. The mess you are turning it into is not.

PeterDonis said:
That depends on which source you are looking at. Plenty of textbooks and papers write them with two lower indices.
I meant that I have lower and upper indices after substituting expressions for the Ricci tensor.

PeterDonis said:
Also, the issue with ##x_\mu## is that a coordinate "covector" with a lower index instead of an upper index does not make sense in the general case, as @Orodruin has already pointed out. So you should not expect any equation in which such a thing appears to make sense either.
That is just the equation I got after substituting in the Ricci tensor.

olgerm said:
I meant that I have lower and upper indices after substituting expressions for the Ricci tensor.

You shouldn't, unless the upper indices are part of a summation. If you start with an equation with two lower indices, any valid transformation of that equation will still have two lower indices, once you have taken all summations into account. The usual way of describing this is that the equation has two free lower indices; any valid transformation keeps the number of free indices the same.

olgerm said:
That is just the equation I got after substituting in the Ricci tensor.

No such replacement can put ##x_\mu## in anywhere. So whatever you are doing, it would seem to be wrong.

PeterDonis said:
You shouldn't, unless the upper indices are part of a summation. If you start with an equation with two lower indices, any valid transformation of that equation will still have two lower indices, once you have taken all summations into account. The usual way of describing this is that the equation has two free lower indices; any valid transformation keeps the number of free indices the same.
Yes, the original indices ##\mu## and ##\nu## are still upper indices. I got new summation variables (##i_1##, ##i_2##, ##j_1## and ##j_2##) that appear both upper and lower (see the equation).

PeterDonis said:
No such replacement can put ##x_\mu## in anywhere. So whatever you are doing, it would seem to be wrong.
I mean that I have Christoffel symbols with both lower and upper indices.

I simplified the equation with
Ibix said:
##T^\mu{}_\mu=T_{\mu\rho}g^{\rho\mu}##
and got the last equation in post #26. I am not sure if I did it correctly.

olgerm said:
I simplified the equation with

and got the last equation in post #26. I am not sure if I did it correctly.
I think the thing that's confusing us is that this isn't a simplification. You've replaced one sum with two.

You seem to be fighting very hard against the notation. The whole point of it is to allow you to express gigantic error-prone horrors like your #26 in a clean and manageable way, with algebraic complexity "boxed away" inside simple symbols.

Have you ever programmed a computer? What you are doing is a bit like insisting on trying to write a full-featured word processor directly in machine code.

I like that form. Maybe after simplifying it more it becomes simpler. The form that includes the Ricci tensor and other things that I do not know just has no intuitive meaning to me.
I would like to know whether I used
Ibix said:
##T^\mu{}_\mu=T_{\mu\rho}g^{\rho\mu}##
correctly.

olgerm said:
I like that form.

Ok, but keep in mind that you are alone in that. Don't be surprised by the lack of help, especially since you've been told multiple times in multiple threads of yours that what you write is unreadable. You've been told at least twice that the star is not a symbol of multiplication; use the \cdot command if necessary (and quite often it is not necessary at all). And yet you ignore everything, because you like it that way. That's ok in your private files, but you are not writing this for yourself; you are doing it so other members can help you. Keep that in mind.

Try to understand my notation. Are the equations in post #26 correct? Do you have any recommendations to simplify the Einstein field equations further?

olgerm said:
the original indices ##\mu## and ##\nu## are still upper indices

What original indices?

olgerm said:
the last equation in post26. I am not sure if I did it correctly

That equation is such a mess that I am unable to tell whether it is correct or not.
