Basis Vector Length: Unit Length & Mistakes to Avoid


Discussion Overview

The discussion revolves around the properties of basis vectors and their lengths, particularly in the context of orthonormal bases, dual spaces, and the relationships between covariant and contravariant components. Participants explore the implications of these properties in various mathematical frameworks, including the use of metrics and the definitions of dual bases.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that the length of a basis vector is defined by the inner product, leading to the conclusion that basis vectors are of unit length under certain assumptions.
  • Others argue that the assumption of an orthonormal basis is necessary to claim that basis vectors have unit length, questioning the validity of the initial assertion without this assumption.
  • A participant expresses confusion regarding the relationship between basis vectors and their dual counterparts, particularly in non-orthonormal contexts.
  • There is a discussion about the process of raising and lowering indices, with some participants suggesting that this alters the nature of the vectors involved, while others clarify that it does not change the underlying vector itself.
  • Some participants inquire about the ambiguity in notation when expressing basis vectors and their components, particularly regarding whether they refer to contravariant or covariant components.
  • One participant provides a specific example involving a two-dimensional manifold and a metric, seeking clarification on how to derive covariant basis vectors from given contravariant ones.
  • There is a mention of the dual space and its definition, with some participants noting that the dot product depends on the choice of basis.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of orthonormality for basis vectors to be of unit length, and there is no consensus on the implications of raising and lowering indices on basis vectors. The discussion remains unresolved regarding the clarity of notation and the relationships between different types of components.

Contextual Notes

Limitations include the dependence on the choice of basis and the ambiguity in notation when discussing contravariant and covariant components. The discussion also highlights the potential confusion surrounding the definitions and relationships of basis vectors and their duals.

snoopies622
For an ordinary vector V, the square of its length is [tex]V \cdot V = V^a V_a[/tex].

For basis vectors, [tex]e^a \cdot e_b = \delta ^a _b[/tex] so [tex]e^a \cdot e_a = 1[/tex].

Since [tex]1^2 = 1[/tex], this implies that every basis vector is of unit length.

What is my mistake?
 
it's fine
 
snoopies622 said:
For an ordinary vector V, the square of its length is [tex]V \cdot V = V^a V_a[/tex].

For basis vectors, [tex]e^a \cdot e_b = \delta ^a _b[/tex] so [tex]e^a \cdot e_a = 1[/tex].

Since [tex]1^2 = 1[/tex], this implies that every basis vector is of unit length.

What is my mistake?

Using the formula on top, the squared length of a basis vector [itex]e_a[/itex] is [tex]e_a \cdot e_a[/tex] (no sum over a).

The equation for unit vectors perpendicular to each other is [tex]e_a \cdot e_b = \delta_{ab}[/tex]

The equation for non-unit, non-perpendicular basis vectors is [tex]e_a \cdot e_b = g_{ab}[/tex]

Given a set of basis vectors, the dual basis covectors are defined by [tex]e^a \cdot e_b = \delta ^a _b[/tex]

Or something like that, I can never quite remember which indices go up or down.
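These relations can be checked numerically. Below is a minimal sketch (the basis matrix E is made up for illustration): the dual basis covectors come out as the rows of the inverse of the matrix whose columns are the basis vectors, so [itex]e^a \cdot e_b = \delta^a_b[/itex] holds even when [itex]e_a \cdot e_b = g_{ab}[/itex] is not the identity.

```python
import numpy as np

# A small numerical sketch (this basis is made up for illustration):
# columns of E are two non-orthonormal basis vectors e_1, e_2 in the plane.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Dual basis covectors: rows of the inverse of E, chosen so that
# e^a . e_b = delta^a_b regardless of lengths and angles.
E_dual = np.linalg.inv(E)
print(np.allclose(E_dual @ E, np.eye(2)))  # True: the Kronecker delta relation

# Metric components g_ab = e_a . e_b -- not the identity for this basis,
# so e_a . e_b = delta_ab fails even though e^a . e_b = delta^a_b holds.
g = E.T @ E
print(g)
```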
 
snoopies622 said:
For an ordinary vector V, the square of its length is [tex]V \cdot V = V^a V_a[/tex].

For basis vectors, [tex]e^a \cdot e_b = \delta ^a _b[/tex] so [tex]e^a \cdot e_a = 1[/tex].

Since [tex]1^2 = 1[/tex], this implies that every basis vector is of unit length.

What is my mistake?
In order to say [tex]e^a \cdot e_b = \delta ^a _b[/tex] or [tex]e^a \cdot e_a = 1[/tex] you have to assume an orthonormal basis. Essentially what you are saying is "Assuming basis vectors have unit length, then every basis vector is of unit length"!
 
HallsofIvy said:
In order to say [tex]e^a \cdot e_b = \delta ^a _b[/tex] or [tex]e^a \cdot e_a = 1[/tex] you have to assume an orthonormal basis.

I did not know this. If my basis is not orthonormal and I have chosen my (say) contravariant basis vectors, how do I find their covariant counterparts? Does one raise or lower indices in the same way as with ordinary vectors/tensors, or is it different with basis vectors?

For example, suppose I have a two-dimensional manifold and a coordinate chart (u,v) with metric [tex]g_{uu}=2[/tex] [tex]g_{uv}=g_{vu}=-2[/tex] [tex]g_{vv}=4[/tex] and I choose [tex]e^u =<1,0>[/tex] [tex]e^v =<0,1>[/tex]. How do I find [tex]e_u[/tex] and [tex]e_v[/tex]?
 
I misspoke before because I did not realize that you were talking about a product of "vectors" and "co-vectors", or the dual space.

The dual space of a vector space, V, is the set of linear functions from V to its underlying field.

In fact, given any basis for a vector space, the corresponding basis for the dual space is defined by [itex]e^i(e_j)= \delta^i_j[/itex], and then that is used to define the "dot product". However, that dot product depends upon the choice of basis! In that sense, yes, every basis vector has unit length: the choice of dot product based on that basis guarantees it.

If you have a given dot product (perhaps based on some basis) and use it with a different basis, it does not follow that [itex]e^i\cdot e_j= \delta^i_j[/itex].

[itex]e_i= g_{ij}e^j[/itex]. In the case you give, it is just a matrix multiplication:
[tex]e_u= \left[\begin{array}{cc}2 & -2 \\ -2 & 4\end{array}\right]\left[\begin{array}{c}1 \\ 0\end{array}\right]= <2, -2>[/tex]
Similarly, [itex]e_v[/itex] is <-2, 4>.
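This index-lowering is easy to check numerically, using the metric given earlier in the thread (g_uu = 2, g_uv = g_vu = -2, g_vv = 4):

```python
import numpy as np

# Metric from the example: g_uu = 2, g_uv = g_vu = -2, g_vv = 4.
g = np.array([[2.0, -2.0],
              [-2.0, 4.0]])

# Components given: e^u = <1, 0>, e^v = <0, 1>.
e_u_upper = np.array([1.0, 0.0])
e_v_upper = np.array([0.0, 1.0])

# Lower the index: e_i = g_ij e^j, i.e. a matrix-vector product.
print(g @ e_u_upper)  # [ 2. -2.]
print(g @ e_v_upper)  # [-2.  4.]
```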
 
Is the dual basis the same thing as the "reciprocal basis" described here?

http://home.pacbell.net/bbowen/covariant.htm
 
snoopies622 said:
Is the dual basis the same thing as the "reciprocal basis" described here?

http://home.pacbell.net/bbowen/covariant.htm

Yes, the "reciprocal basis" is the same as a "dual basis". The reciprocal basis only exists when you have already defined one basis.
 
A clarification, please: According to the description here

http://home.pacbell.net/bbowen/covariant.htm

And the drawing here

http://en.wikipedia.org/wiki/Image:Basis.gif

when one raises or lowers the index on a vector/covector, one is still describing the same arrow, only using different ‘building block’ arrows to do so. But when one raises or lowers the index on a basis-vector/basis-covector, one is actually turning it into a different arrow – one that goes from being parallel to a coordinate line to one that is perpendicular to the other coordinate line(s) or vice versa.

Granted that vectors/co-vectors are not really arrows, is this otherwise correct?
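The perpendicularity picture in that drawing can be verified numerically; this is a sketch with a made-up non-orthogonal basis in the plane:

```python
import numpy as np

# Columns of E are the arrows A_1 and A_2 (a made-up non-orthogonal basis).
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Reciprocal (dual) basis arrows: rows of the inverse of E.
E_rec = np.linalg.inv(E)
A1_rec, A2_rec = E_rec[0], E_rec[1]

# A^1 is perpendicular to A_2, while A^1 . A_1 = 1, matching the drawing.
print(np.isclose(A1_rec @ E[:, 1], 0.0))  # True: A^1 is perpendicular to A_2
print(np.isclose(A1_rec @ E[:, 0], 1.0))  # True: A^1 . A_1 = 1
```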
 
  • #10
snoopies622 said:
But when one raises or lowers the index on a basis-vector/basis-covector, one is actually turning it into a different arrow

Using Bowen's notation and formulas, if [itex]V=A_1[/itex], its contravariant components are [itex][V^1=1, V^2=0][/itex]. To figure out its covariant components we lower its index:

[tex]A_1 = V^1 A_1 + V^2 A_2 = V_1 A^1 + V_2 A^2 = V^i G_{i1} A^1 + V^i G_{i2} A^2[/tex]
[tex]= (V^1 G_{11} + V^2 G_{21}) A^1 + (V^1 G_{12} + V^2 G_{22}) A^2[/tex]
[tex]= G_{11} A^1 + G_{12} A^2[/tex]

So if [itex]\{A_1,A_2\}[/itex] are the basis vectors, their contravariant components are {[1,0], [0,1]}, and their covariant components are {[G_{11},G_{12}], [G_{21},G_{22}]}, so that [itex]A_i \cdot A_j = G_{ij}[/itex].

The sets of basis vectors [itex]\{A_1,A_2\}[/itex] and [itex]\{A^1,A^2\}[/itex] are reciprocal, but lowering the index on [itex]A_1[/itex] doesn't change it into [itex]A^1[/itex]; it still remains itself, described in terms of the reciprocal basis vectors, just as with any other vector.

The process of getting a set of reciprocal basis vectors given a set of basis vectors is a different process from raising or lowering an index.

(Actually, I usually think of vectors and covectors as being in different spaces. The vectors are column vectors and the covectors are row vectors. You can multiply a column and row vector to get a number without any metric. Without a metric, the column vectors and row vectors are not related, unless we define a basis for the column vectors and a reciprocal basis for the row vectors, which means the relationship between column vectors and row vectors changes with the basis. Without a metric you cannot multiply two column vectors to get a number. Once you have a metric, you can use that to multiply two column vectors to get a number, by using it to change one column vector into a row vector. You can also use the metric to enforce a fixed relationship between column vectors and row vectors, so you can identify a vector with its covector.)
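A numerical sketch of this distinction (the metric values are made up for illustration): lowering the index of A_1 produces its covariant components, which are not the components of the reciprocal basis vector A^1.

```python
import numpy as np

# G_ij = A_i . A_j for a hypothetical non-orthonormal basis {A_1, A_2}.
G = np.array([[2.0, -2.0],
              [-2.0, 4.0]])

# Contravariant components of A_1 in its own basis: [1, 0].
A1_up = np.array([1.0, 0.0])

# Lowering the index gives the covariant components of A_1, i.e. its
# components with respect to the reciprocal basis {A^1, A^2}.
A1_down = G @ A1_up
print(A1_down)  # [ 2. -2.]

# The reciprocal basis vector A^1 has covariant components [1, 0] by definition.
# Since [2, -2] != [1, 0], lowering the index on A_1 did not turn it into A^1.
print(np.allclose(A1_down, [1.0, 0.0]))  # False
```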
 
  • #11
atyy said:
The sets of basis vectors [itex]\{A_1,A_2\}[/itex] and [itex]\{A^1,A^2\}[/itex] are reciprocal, but lowering the index on [itex]A_1[/itex] doesn't change it into [itex]A^1[/itex]; it still remains itself, described in terms of the reciprocal basis vectors, just as with any other vector.

The process of getting a set of reciprocal basis vectors given a set of basis vectors is a different process from raising or lowering an index.


So an expression like [tex]e^u = <1,0>[/tex] is ambiguous since the raised u could mean either that the components of the u basis vector (or covector) are contravariant, or that the basis vector itself is contravariant and the components are either contravariant or covariant?

(Edit: I guess "expression" should be "equation".)
 
  • #12
Yes, that's true.
 
  • #13
So does [tex]\frac {\partial}{\partial t}[/tex] represent either the contravariant or covariant components of a covariant basis vector, or the covariant components of a basis vector that could be either contravariant or covariant?
 
  • #14
snoopies622 said:
So does [tex]\frac {\partial}{\partial t}[/tex] represent either the contravariant or covariant components of a covariant basis vector, or the covariant components of a basis vector that could be either contravariant or covariant?

Given coordinates [tex]\{t,x\}[/tex] on a manifold, the basis vectors can be chosen to be [tex]\{e_0={\frac {\partial}{\partial t}},e_1={\frac {\partial}{\partial x}}\}[/tex], and they can be used to represent any vector using contravariant components: [tex]v=v^ie_i=v^i\frac {\partial}{\partial x^i}=v^i[/tex], where in the final step I omitted the basis vectors only for notational convenience, as is often done.
 
  • #15
snoopies622 said:
So an expression like [tex]e^u = <1,0>[/tex] is ambiguous since the raised u could mean either that the components of the u basis vector (or covector) are contravariant, or that the basis vector itself is contravariant and the components are either contravariant or covariant?

Wait, it's not ambiguous. If [tex]e^u = <1,0>[/tex] then it is being expressed in terms of the [itex]e^u[/itex] and [itex]e^v[/itex] basis vectors: [itex]e^u=(1)(e^u)+(0)(e^v)[/itex]. I may actually be understanding this now.
 
