Can a Basis Vector be Lightlike?

  • Thread starter: cianfa72
  • Tags: Basis Vector
Summary
The discussion centers on whether a lightlike (null) vector can serve as a basis vector for writing down a metric in a tangent space. It is established that a null vector can be included in a basis, but doing so makes the metric coefficient ##g_{00}## zero, which raised the question of whether such a metric is valid. The conversation explores the implications of using lightlike vectors in a basis, noting that it prevents the metric from being diagonal and leads to non-standard representations in Minkowski space. Additionally, the participants discuss the orthogonality of lightlike vectors and their linear independence, concluding that two null vectors cannot be both linearly independent and orthogonal. The overall consensus is that lightlike vectors can be part of a basis, but their inclusion changes the form of the resulting metric components (in particular, the metric is no longer diagonal).
cianfa72
[Moderator's note: Spin off from another thread due to topic change.]

I was thinking about the following: can we take a null (i.e. lightlike) vector as a basis vector to write down the metric?

Call such a vector ##v## and add to it three linearly independent vectors. We get a basis for the tangent space (the first vector of the basis being ##v## itself).

Then in such a basis the metric coefficient ##g_{00}## should be zero, since the vector ##v## has components ##(1,0,0,0)## and by definition it has zero length. Hence we are not allowed to use it, I believe.
 
cianfa72 said:
Sorry, I was thinking about the following: can we take a null (i.e. lightlike) vector as a basis vector to write down the metric?

Call such a vector ##v## and add to it three linearly independent vectors. We get a basis for the tangent space (the first vector of the basis being ##v## itself).
Yes, you can do that.
cianfa72 said:
Then in such a basis the metric coefficient ##g_{00}## should be zero, since the vector ##v## has components ##(1,0,0,0)## and it has zero length. Hence we are not allowed to use it, I believe.
You are not allowed to use what?
 
martinbn said:
You are not allowed to use what?
If we use the vector ##v## as an element of the basis we get a metric in which ##g_{00}## is actually zero. Does such a metric make sense?
 
cianfa72 said:
If we use the vector ##v## as an element of the basis we get a metric in which ##g_{00}## is actually zero. Does such a metric make sense?
Of course. That is just a component in a given basis; it can be zero or anything else.
 
  • Like
Likes Dale and cianfa72
martinbn said:
Of course. That is just a component in a given basis; it can be zero or anything else.
BTW it does mean the metric in that basis cannot be diagonal, otherwise components in the ##v## direction would not appear in the metric at all!
 
  • Like
Likes isaacdl
Take the simplest (1+1)-dimensional Minkowski-space example. A basis needs just two linearly independent vectors. Let's choose the two light-like vectors ##\boldsymbol{e}_1=(1,1)## and ##\boldsymbol{e}_2=(1,-1)## (written as component vectors wrt. a usual pseudo-Euclidean/Lorentzian basis). Then the Minkowski product of two vectors ##\boldsymbol{x}## and ##\boldsymbol{y}## wrt. this "light-like basis" is
$$\boldsymbol{x} \cdot \boldsymbol{y}=x^1 y^1 \boldsymbol{e}_1 \cdot \boldsymbol{e}_1 + (x^1 y^2 + x^2 y^1) \boldsymbol{e}_1 \cdot \boldsymbol{e}_2 + x^2 y^2 \boldsymbol{e}_2 \cdot \boldsymbol{e}_2 = 2 (x^1 y^2+x^2 y^1).$$
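For concreteness, here is a quick numerical check of these inner products (a minimal sketch using numpy; the two basis vectors and the ##\mathrm{diag}(1,-1)## signature are taken from the example above):
```python
import numpy as np

# 1+1 Minkowski metric with signature (+,-), as in the example above
eta = np.diag([1.0, -1.0])

def mink(a, b):
    """Minkowski product a . b = a^T eta b."""
    return a @ eta @ b

# the two lightlike basis vectors, written in a standard Lorentzian basis
e1 = np.array([1.0, 1.0])
e2 = np.array([1.0, -1.0])

# Gram matrix of the lightlike basis: the diagonal entries vanish, the off-diagonal entry is 2
G = np.array([[mink(e1, e1), mink(e1, e2)],
              [mink(e2, e1), mink(e2, e2)]])
print(G)  # [[0. 2.] [2. 0.]]

# hence for x = x1*e1 + x2*e2 and y = y1*e1 + y2*e2 the product is 2*(x1*y2 + x2*y1)
```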
 
  • Like
Likes Dale
vanhees71 said:
Take the simplest (1+1)-dimensional Minkowski-space example. A basis needs just two linearly independent vectors. Let's choose the two light-like vectors ##\boldsymbol{e}_1=(1,1)## and ##\boldsymbol{e}_2=(1,-1)##.
Yes, indeed the (1+1) Minkowski metric in that basis is not diagonal.
 
Of course not, they cannot. Think about what you can say if you have two light-like vectors that are Minkowski-orthogonal to each other! Can they be (part of) a basis?
 
isaacdl said:
Yes, I can see it graphically, but my problem is to show it in a coordinate-free way.
You can do it in a coordinate-free way.
Given ##\vec A## and ##\vec B##, then ##\vec B= \vec A+ (\vec B-\vec A)##. (No coordinates were used in this calculation.)
 
  • #10
vanhees71 said:
Of course not, they cannot. Think about what you can say if you have two light-like vectors that are Minkowski-orthogonal to each other! Can they be (part of) a basis?
Two linearly independent null (light-like) vectors ##\boldsymbol {u}, \boldsymbol {w}## in the (1+1) Minkowski standard basis have components

##\boldsymbol {u} = (k, k), \quad \boldsymbol {w} = (m , -m), \quad k,m \in \mathbb R.##

Then the further condition ##\boldsymbol{u} \cdot \boldsymbol{w} = 0## implies ##km + km = 2km = 0##. Hence either ##\boldsymbol {u}## or ##\boldsymbol {w}## would be the zero vector.
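The same computation can be done symbolically (a minimal sketch with sympy, assuming the (1+1)-dimensional setting and the (+,-) signature used above):
```python
import sympy as sp

k, m = sp.symbols('k m', real=True)
eta = sp.diag(1, -1)              # 1+1 Minkowski metric, signature (+,-)

u = sp.Matrix([k, k])             # lightlike: u.u = k**2 - k**2 = 0
w = sp.Matrix([m, -m])            # lightlike: w.w = m**2 - m**2 = 0

dot = (u.T * eta * w)[0]          # Minkowski product u . w
print(sp.simplify(dot))           # 2*k*m, which vanishes only if k = 0 or m = 0
```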
 
Last edited:
  • #11
You can rather prove that two Minkowski-orthogonal lightlike vectors are collinear.
 
  • #12
vanhees71 said:
You can rather prove that two Minkowski-orthogonal lightlike vectors are collinear.
Yes, if neither ##\boldsymbol {u}## nor ##\boldsymbol {w}## is the zero vector then it follows that ##\boldsymbol {u} = k \boldsymbol {w}##, hence they are not linearly independent (and this also covers the case where at least one of them is the zero vector).
 
  • Like
Likes vanhees71
  • #13
Also look at the textbook by Stephani et al. for the use of complex null tetrads, formed from two real null vectors ##\mathbf{k}## and ##\mathbf{l}## with ##g_{kl} = \mathbf{k} \cdot \mathbf{l} = -1## and two complex null vectors ##\mathbf{m}## and ##\overline{\mathbf{m}}## with ##g_{m \overline{m}} = \mathbf{m} \cdot \overline{\mathbf{m}} = 1##. (All the other inner products vanish.)
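For illustration, here is a small numerical check of such a tetrad (a sketch only: it assumes the mostly-plus signature ##\mathrm{diag}(-1,1,1,1)##, for which the quoted products come out as stated, and it uses one standard explicit choice of tetrad rather than necessarily the conventions of Stephani et al.):
```python
import numpy as np

# Assumption: mostly-plus signature diag(-1,1,1,1); with this choice the quoted
# products k.l = -1 and m.mbar = +1 come out as stated.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def mink(a, b):
    """Minkowski product, extended bilinearly (no conjugation) to complex vectors."""
    return a @ eta @ b

# one standard choice of complex null tetrad built from an orthonormal basis
e0, e1, e2, e3 = np.eye(4)
k = (e0 + e3) / np.sqrt(2)
l = (e0 - e3) / np.sqrt(2)
m = (e1 + 1j * e2) / np.sqrt(2)
mbar = np.conj(m)

assert np.isclose(mink(k, k), 0) and np.isclose(mink(l, l), 0)        # k, l are null
assert np.isclose(mink(m, m), 0) and np.isclose(mink(mbar, mbar), 0)  # m, mbar are null
assert np.isclose(mink(k, l), -1)                                     # k . l = -1
assert np.isclose(mink(m, mbar), 1)                                   # m . mbar = +1
assert all(np.isclose(mink(a, b), 0)                                  # all other products vanish
           for a in (k, l) for b in (m, mbar))
```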
 
  • Informative
Likes vanhees71
  • #14
Another point related to the topic: I would like to show that any 4-vector in Minkowski spacetime can be written as the sum of a timelike vector and a spacelike one.

Take a timelike vector ##\boldsymbol {u}##. It has components ##(u_0,u_1,u_2,u_3)## in the Minkowski standard basis.

Consider the set of vectors ##\boldsymbol {w}## with components ##(w_0,w_1,w_2,w_3)## orthogonal to it, namely the set of vectors such that ##u_0w_0 - u_1w_1 - u_2w_2 - u_3w_3=0## in the Minkowski standard basis. Such a set is a 3-dimensional linear subspace of the vector space ##V##. Furthermore, each non-zero vector in this set is spacelike.

This way we get a basis made of the timelike vector ##\boldsymbol {u}## and three linearly independent spacelike vectors orthogonal to it. So the subspace spanned by ##\boldsymbol {u}## and the subspace ##\boldsymbol {u}^\perp## are in direct sum, hence any vector in ##V## can be written uniquely as a linear combination of ##\boldsymbol {u}## and a vector in ##\boldsymbol {u}^\perp##.
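As a concrete sanity check of this construction, here is a small numerical sketch (the timelike vector ##\boldsymbol{u}## is chosen arbitrarily, the signature is taken as ##\mathrm{diag}(1,-1,-1,-1)##, and the orthogonal complement is extracted with an SVD):
```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-)

def mink(a, b):
    return a @ eta @ b

u = np.array([2.0, 0.5, -0.3, 1.0])      # an arbitrarily chosen timelike vector
assert mink(u, u) > 0

# Vectors w with u_0 w_0 - u_1 w_1 - u_2 w_2 - u_3 w_3 = 0 form the null space of
# the row vector (u^T eta); it is 3-dimensional.
w1, w2, w3 = np.linalg.svd(np.atleast_2d(u @ eta))[2][1:]

for w in (w1, w2, w3):
    assert abs(mink(u, w)) < 1e-12       # Minkowski-orthogonal to u
    assert mink(w, w) < 0                # and spacelike

# u together with w1, w2, w3 is a basis of V: the matrix of components is invertible,
# so V = span(u) + u-perp as a direct sum.
B = np.vstack([u, w1, w2, w3])
assert abs(np.linalg.det(B)) > 1e-12
```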

Does it make sense? Thanks.
 
Last edited:
  • #15
Take an arbitrary vector ##v^a## and an arbitrary unit timelike vector ##t^a##. Then ##v_at^a## is the component of ##v^a## parallel to ##t^a##, so ##s^a=v^a-(v_bt^b)t^a## is the part of ##v^a## that is orthogonal to ##t^a## (note: flip the sign on the second term if your metric is -+++). It's just algebra to show that ##s^a## is a spacelike vector. Hence we have shown that ##v^a=(v_bt^b) t^a+s^a## as required.
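A quick numerical illustration of this decomposition (a sketch with an arbitrarily chosen ##v^a## and unit timelike ##t^a##, using the +--- signature):
```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-)

def mink(a, b):
    return a @ eta @ b

# an arbitrarily chosen unit timelike t^a (a boosted time direction) and some vector v^a
t = np.array([np.cosh(0.7), np.sinh(0.7), 0.0, 0.0])
v = np.array([3.0, -1.0, 2.0, 0.5])
assert np.isclose(mink(t, t), 1.0)       # t is unit timelike

s = v - mink(v, t) * t                   # s^a = v^a - (v_b t^b) t^a

assert abs(mink(s, t)) < 1e-12           # s is orthogonal to t
assert mink(s, s) < 0                    # and spacelike
np.testing.assert_allclose(v, mink(v, t) * t + s)   # v^a = (v_b t^b) t^a + s^a
```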
 
  • Like
Likes vanhees71 and cianfa72
  • #16
Ibix said:
Take an arbitrary vector ##v^a## and an arbitrary unit timelike vector ##t^a##. Then ##v_at^a## is the component of ##v^a## parallel to ##t^a##, so ##s^a=v^a-(v_bt^b)t^a## is the part of ##v^a## that is orthogonal to ##t^a##.

##s_at^a=v_at^a - (v_bt^b)t_at^a##. Since ##t^a## is unit timelike (##t_at^a=1##), we get ##s_at^a=v_at^a - v_bt^b=0##, hence ##s^a## is orthogonal to ##t^a##.
 
  • Like
Likes Ibix and vanhees71
  • #17
Yes, and a vector Minkowski-orthogonal to a time-like vector is...
 
  • Like
Likes Ibix
  • #18
vanhees71 said:
Yes, and a vector Minkowski-orthogonal to a time-like vector is...
Spacelike (if non-zero vector).
 
Last edited:
  • Like
Likes vanhees71
  • #19
I was assuming you'd calculate ##s_as^a## and see that it's of the opposite sign to ##t_at^a##, but yes.
 
  • Like
Likes vanhees71
  • #20
Ibix said:
I was assuming you'd calculate ##s_as^a## and see that it's of the opposite sign to ##t_at^a##, but yes.
##s_as^a = (v_a - (v_bt^b)t_a)(v^a-(v_bt^b)t^a) = v_av^a - (v_bt^b)v_at^a - (v_bt^b)t_av^a + (v_bt^b)^2t_at^a##
which, using ##t_at^a=1##, gives ##s_as^a = v_av^a - 2(v_bt^b)^2 + (v_bt^b)^2 = v_av^a - (v_bt^b)^2##.
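The same algebra can be checked symbolically (a minimal sketch with sympy; the identity is first verified without assuming anything about ##t^a##, and setting ##t_at^a=1## then gives the result above):
```python
import sympy as sp

eta = sp.diag(1, -1, -1, -1)                     # signature (+,-,-,-)
v = sp.Matrix(sp.symbols('v0 v1 v2 v3', real=True))
t = sp.Matrix(sp.symbols('t0 t1 t2 t3', real=True))

def mink(a, b):
    return (a.T * eta * b)[0]

vt = mink(v, t)
s = v - vt * t                                   # s^a = v^a - (v_b t^b) t^a

# general identity: s.s = v.v - 2 (v.t)^2 + (v.t)^2 (t.t)
lhs = sp.expand(mink(s, s))
rhs = sp.expand(mink(v, v) - 2 * vt**2 + vt**2 * mink(t, t))
print(sp.simplify(lhs - rhs))                    # 0

# with t.t = 1 (unit timelike t) this reduces to s.s = v.v - (v.t)^2, as above
```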
 
  • #21
Yup. From your final expression ##s_as^a## is manifestly negative if ##v_av^a\leq 0##, but in the case that ##v_av^a>0## there's a tiny bit more work to do to show that ##(v_av^a)\leq(v_at^a)^2##.
 
  • #22
Ibix said:
in the case that ##v_av^a>0## there's a tiny bit more work to do to show that ##(v_av^a)\leq(v_at^a)^2##.
Assuming I didn't make any mistake before, I've no idea how to proceed 🤔.
 
Last edited:
  • #23
Ibix said:
Take an arbitrary vector ##v^a## and an arbitrary unit timelike vector ##t^a##.
The vectors can't be completely arbitrary since they must be linearly independent.
 
  • Like
Likes Ibix
  • #24
PeterDonis said:
The vectors can't be completely arbitrary since they must be linearly independent.
Yeah, you're right. If ##v^a=kt^a## for some constant ##k## then ##s^a=0##. So you have to choose a unit timelike vector ##t^a##, arbitrary except that it isn't parallel to ##v^a##.
cianfa72 said:
Assuming I didn't make any mistake before, I've no idea how to proceed 🤔.
What's ##v_at^a## in terms of ##v_av^a## and ##t_at^a##, assuming ##v^a## is timelike?
 
Last edited:
  • #25
Ibix said:
What's ##v_at^a## in terms of ##v_av^a## and ##t_at^a##, assuming ##v^a## is timelike?
Since ##v_av^a > 0## there is a Lorentz-orthonormal basis such that the components of ##v^a## and ##t^a## are ##(v^0,0,0,0)## and ##(t^0,t^1,t^2,t^3)##.

In this basis ##v_at^a=v^0t^0 \Rightarrow (v_at^a)^2=(v^0)^2(t^0)^2##. Since ##t^a## is unit timelike, ##(t^0)^2 = 1 + (t^1)^2+(t^2)^2+(t^3)^2##; as ##t^a## is not collinear with ##v^a## (see below) its spatial part is non-zero, so ##(t^0)^2 > 1## and hence ##(v_at^a)^2 > (v^0)^2 = v_av^a##.

Note that in the above we are assuming ##v^a## and ##t^a## are not collinear, otherwise ##s^a=0##.
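A brute-force numerical check of the inequality (a sketch: random unit timelike ##t^a## and random timelike ##v^a##, +--- signature):
```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-)

def mink(a, b):
    return a @ eta @ b

rng = np.random.default_rng(0)
for _ in range(1000):
    # random unit timelike t^a: boost (cosh chi, sinh chi * n) along a random direction n
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    chi = rng.uniform(0.0, 3.0)
    t = np.concatenate([[np.cosh(chi)], np.sinh(chi) * n])
    assert np.isclose(mink(t, t), 1.0)

    # keep only timelike v^a
    v = rng.normal(size=4)
    if mink(v, v) <= 0:
        continue

    # (v_a t^a)^2 >= v_a v^a, with equality only if v^a and t^a are collinear
    assert mink(v, t)**2 >= mink(v, v) - 1e-9
```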
 
Last edited:
  • Like
Likes Ibix
  • #26
Let ##(a^{\mu})## be a time-like future-pointing vector, i.e., ##a_{\mu} a^{\mu}=(a^0)^2-\vec{a}^2>0##, ##a^0>0##, and ##(b^{\mu})## a non-zero vector with ##a^{\mu} b_{\mu} =a^0 b^0-\vec{a} \cdot \vec{b}=0##. Then ##(b^{\mu})## is spacelike. Indeed, using ##a^0>|\vec{a}|## and the Cauchy-Schwarz inequality, we have
$$b^0=\frac{1}{a^0} \vec{a} \cdot \vec{b} <\frac{1}{|\vec{a}|} \vec{a} \cdot \vec{b} \leq |\vec{b}| \; \Rightarrow \; b^0<|\vec{b}| \; \Rightarrow \; b_{\mu} b^{\mu} =(b^0)^2-\vec{b}^2<0,$$
i.e., ##(b^{\mu})## is spacelike, as claimed.

If ##(a^{\mu})## is past-pointing you can use the same argument with ##(-a^{\mu})##, i.e., if ##(b^{\mu})## is Minkowski-orthogonal to an arbitrary time-like vector, it's necessarily space-like.
 
  • Like
Likes Ibix
  • #27
vanhees71 said:
If ##(a^{\mu})## is past-pointing you can use the same argument with ##(-a^{\mu})##, i.e., if ##(b^{\mu})## is Minkowski-orthogonal to an arbitrary time-like vector, it's necessarily space-like.
Why two cases (future-pointing and past-pointing) for the timelike vector ##a^{\mu}##? Assuming it is timelike we always have ##a_{\mu} a^{\mu}=(a^0)^2-\vec{a}^2>0##, so the above argument does not change, I believe.
 
  • Like
Likes vanhees71
  • #28
Well, yes. You can divide by ##|a^0|## in the first step of my proof, i.e., using
$$|a^0 b^0|=|\vec{a} \cdot \vec{b}| \; \Rightarrow \; |b^0|=\frac{1}{|a^0|} |\vec{a} \cdot \vec{b}|<\frac{1}{|\vec{a}|} |\vec{a} \cdot \vec{b}| \leq |\vec{b}| \; \Rightarrow \; |b^0|^2-|\vec{b}|^2=b_{\mu} b^{\mu}<0.$$
 
  • Like
Likes cianfa72
