Why Can't We Do Algebraic Methods with Tensors?


Discussion Overview

The discussion revolves around the challenges of applying algebraic methods to tensor equations, particularly in the context of the Ricci tensor and the Einstein summation convention. Participants explore why certain algebraic manipulations that seem valid in conventional algebra do not hold in tensor calculus.

Discussion Character

  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions why the equation $$R_{\mu\nu} = 0$$ leads to the conclusion that either $$g^{\sigma\rho} = 0$$ or $$R_{\sigma\mu\rho\nu} = 0$$, suggesting that normal algebraic steps should apply.
  • Another participant explains that the Einstein summation convention complicates algebraic manipulations, indicating that $$A^{\sigma}B_{\sigma\rho\mu}=0$$ does not imply that either $$A$$ or $$B$$ must be zero.
  • A different participant points out that in algebraic structures like matrices, the product being zero does not necessitate that one of the factors is zero, drawing parallels to the tensor case.
  • There is a discussion about the validity of transposing or dividing tensor equations, with one participant arguing that such operations are not straightforward due to the nature of summation in tensor calculus.

Areas of Agreement / Disagreement

Participants express differing views on the validity of algebraic manipulations in tensor calculus, with no consensus reached on the applicability of conventional algebraic methods to tensor equations.

Contextual Notes

Participants highlight limitations in understanding due to the dependence on the Einstein summation convention and the specific properties of tensors and matrices, which may not align with standard algebraic rules.

cr7einstein
Hello everyone!
Even though I have done substantial tensor calculus, I still don't get one thing. Probably I am being naive or even stupid here, but consider

$$R_{\mu\nu} = 0$$.
If I expand the Ricci tensor, I get
$$g^{\sigma\rho} R_{\sigma\mu\rho\nu} = 0$$.
Which, in normal algebra, should imply,
$$ g^{\sigma\rho} = 0$$ (which is meaningless) or $$R_{\sigma\mu\rho\nu} = 0$$ ( which isn't always true).

So, why can't we do normal algebra here? (It is a perfectly valid step in algebra.)
Also, consider a simple case
$$dS^2 = g_{\mu\nu}dx^{\mu}dx^{\nu}$$.
Here, why can't we simply transpose (or divide both sides by) the differentials on the RHS, i.e.,
$$\frac{dS^2}{dx^{\mu}dx^{\nu}} = g_{\mu\nu}$$ ?
Why is this expression not valid? Or, another example, Why can't
$$R_{\mu\nu} = g^{\sigma\rho} R_{\sigma\mu\rho\nu}$$ imply that
$$g^{\sigma\rho} = \frac{R_{\mu\nu}}{R_{\sigma\mu\rho\nu}}$$ ??
Is there a reason why this is wrong? Or is there a different way to transpose tensors from one side of the equation to the other? Can you do this to the vacuum field equations (as an example)?
Thanks in advance!
 
All of these problematic expressions use the Einstein summation convention. Any time that you find yourself wondering about whether an algebraic manipulation is valid in such an expression, you can expand the expression.

For example, you ask why ##A^{\sigma}B_{\sigma\rho\mu}=0## doesn't necessarily imply that either ##A## or ##B## are zero. If you write the summation out (in two dimensions to keep things simple) you get ##A^0B_{0\rho\mu}+A^1B_{1\rho\mu}=0##, which can be true even if none of the components of A or B are zero.
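The expansion above can be checked numerically. Here is a minimal pure-Python sketch (the component values are illustrative, chosen so that ##B_{0\rho\mu} = -B_{1\rho\mu}##) showing that the contraction ##A^{\sigma}B_{\sigma\rho\mu}## can vanish identically even though every component of A and B is nonzero:

```python
# 2-D check that A^sigma B_{sigma rho mu} = 0 can hold even though
# no component of A or B is zero. Plain nested lists, no libraries.

A = [1, 1]  # A^0, A^1 -- both nonzero

# B_{sigma rho mu}, chosen so that B_{0 rho mu} = -B_{1 rho mu} for every rho, mu
B = [
    [[2, 3], [4, 5]],      # B_{0 rho mu}
    [[-2, -3], [-4, -5]],  # B_{1 rho mu}
]

# Contract over sigma: C_{rho mu} = sum_sigma A^sigma B_{sigma rho mu}
C = [[sum(A[s] * B[s][r][m] for s in range(2)) for m in range(2)]
     for r in range(2)]

print(C)  # [[0, 0], [0, 0]] -- the contraction vanishes identically
```

Each entry of C is a sum of two nonzero terms that cancel, which is exactly why the "one factor must be zero" step from ordinary algebra fails here.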
 
Exactly what "algebraic methods" do you mean? For many algebraic structures, such as matrices, "AB = 0" does NOT imply "A = 0 or B = 0".
 
Hi. This adds little beyond #2. I assume your "normal algebra" means products of numbers, like 2 × 3 = 6. Do you know the inner product of vectors, [tex]\mathbf{a}\cdot\mathbf{b}=0[/tex]? This means that vectors a and b are orthogonal; neither a nor b has to be a zero vector. For example, a = (1, 0) and b = (0, 1) satisfy the equation. What you referred to is an inner product of a vector and a tensor, and likewise neither the vector nor the tensor has to be zero.
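The orthogonal-vector example above can be written out directly (a minimal sketch using the same vectors a = (1, 0), b = (0, 1)):

```python
# Inner product of two nonzero vectors can be zero (orthogonality).
a = (1, 0)
b = (0, 1)

dot = sum(x * y for x, y in zip(a, b))
print(dot)  # 0, yet neither a nor b is the zero vector
```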
 
cr7einstein said:
$$g^{\sigma\rho} R_{\sigma\mu\rho\nu} = 0$$.
Which, in normal algebra, should imply,
$$ g^{\sigma\rho} = 0$$ (which is meaningless) or $$R_{\sigma\mu\rho\nu} = 0$$
What you have there is of the form ##\operatorname{Tr}(AB)=0##, where A and B are square matrices. This doesn't imply that one of the matrices must be zero.

Definition of matrix multiplication: ##(AB)_{ij}=A_{ik}B_{kj}##.
Definition of trace: ##\operatorname{Tr}A=A_{ii}##.
$$\operatorname{Tr}(AB)=(AB)_{ii}=A_{ik}B_{ki}.$$
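The trace identity above can be illustrated with a concrete pair of matrices (a sketch; the particular choices of A and B are illustrative, with A the identity and B traceless):

```python
# Two nonzero 2x2 matrices whose product has zero trace:
# Tr(AB) = 0 does not force A = 0 or B = 0.
A = [[1, 0], [0, 1]]   # identity matrix -- certainly nonzero
B = [[1, 0], [0, -1]]  # traceless but nonzero

def matmul(X, Y):
    """(XY)_{ij} = X_{ik} Y_{kj}, summed over k."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    """Tr X = X_{ii}, summed over i."""
    return sum(X[i][i] for i in range(2))

print(trace(matmul(A, B)))  # 0
```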
cr7einstein said:
$$dS^2 = g_{\mu\nu}dx^{\mu}dx^{\nu}$$.
Here, why can't we simply transpose (or divide both sides by) the differentials on the RHS, i.e.,
$$\frac{dS^2}{dx^{\mu}dx^{\nu}} = g_{\mu\nu}$$ ?
Mainly because of the summation. What you're doing there is like dividing both sides of ##z=ax+by## (where all variables represent real numbers) by one of ##xx,xy,yx,yy## and (incorrectly) ending up with either ##a## or ##b## on the right-hand side.
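Plugging in concrete numbers makes the failure obvious (a minimal sketch; the values of a, b, x, y are arbitrary illustrations):

```python
# z = a*x + b*y plays the role of dS^2 = g_mu_nu dx^mu dx^nu.
# Dividing by a product of the "differentials" recovers neither a nor b.
a, b = 2.0, 3.0
x, y = 1.0, 4.0

z = a * x + b * y          # 2*1 + 3*4 = 14.0

print(z / (x * y))         # 3.5 -- neither a (2.0) nor b (3.0)
```

Because z is a sum of terms, dividing the whole sum by one product of variables cannot isolate a single coefficient, which is precisely what the naive "transpose the differentials" step would require.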
 
