Can You Explain the Relationship Between a Basis and Its Dual Basis?

simpleton
Hi, I'm learning about vector spaces and I would like to ask some questions about them.

Suppose I have a vector space V and a basis ##\{v_1, \dots, v_n\}## for V. Then there is a dual space V^* consisting of all linear functions whose domain is V and range is ℝ. The space V^* has a dual basis ##\{x_1, \dots, x_n\}##, and one way of constructing this dual basis is to let ##x_i## be the function that returns the i-th coordinate of a vector when that vector is expressed in the given basis for V. The claim is that the span of these functions is equal to V^*, and therefore the space and its dual space have the same dimension.
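To make this concrete for myself, here is a small numerical sketch of that construction (the basis is just something I made up, and I'm representing coordinate functionals as rows of a matrix):

```python
import numpy as np

# An arbitrary example basis for V = R^2 (my own made-up choice).
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
B = np.column_stack([v1, v2])   # columns of B are the basis vectors

# The coordinate functionals x_1, x_2 are the rows of B^{-1}:
# row i applied to a vector returns the vector's i-th coordinate.
X = np.linalg.inv(B)

w = 3.0 * v1 + 5.0 * v2         # coordinates of w in this basis are (3, 5)
print(X @ w)                    # -> [3. 5.]
```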

I have a few questions. My textbook states that any linear function on V can be expressed as a linear combination of the coordinate functions, but it does not explain why. May I know why this is the case? I am assuming it is because a linear function of a vector must be a linear combination of the vector's coordinates, but I'm not sure this must always be the case, since a linear function is merely defined to be one that respects addition and scalar multiplication.

The textbook also says that the coordinate functions are linearly independent. By this, I think they mean that if some linear combination of the coordinate functions sends every input vector to 0, then the coefficients of all the coordinate functions must be 0. I think this makes sense, but may I know if I have interpreted it correctly?

Finally, it is mentioned that given any basis ##\{x_1, \dots, x_n\}## of V^*, you can construct a basis for V such that the given functionals act as coordinate functions for that basis. By this, I think they mean that I can find a set ##\{v_1, \dots, v_n\}## such that ##x_i(v_j) = 1## if i = j and ##x_i(v_j) = 0## otherwise. But I'm not sure why this is true. I think that in order for it to be true, the following have to be shown.

1) Such a set \{v_1...v_n\} exists.
2) Any vector in V can be expressed as a linear combination of the vectors.
3) The constructed vectors are linearly independent.

I am not sure how 1) can be answered. I think 2) and 3) are actually the same question: since V and V^* have the same dimension, 2) and 3) must be true at the same time, or the Linear Dependence Lemma would be violated. I am not sure how to prove 2), but I think I can prove 3). Suppose some linear combination satisfies ##\sum_i a_i v_i = 0## for scalars ##a_1, \dots, a_n##. If I apply ##x_i## to this linear combination, I get ##a_i = 0##, so in the end all the coefficients are 0. May I know if there is a way to prove 2) directly?

Thank you very much!
 
Let V be a vector space of dimension n. Then it has a basis ##\{x_1, x_2, \dots, x_n\}##. For i from 1 to n, we define ##f_i## by requiring that ##f_i(x_j)= \delta_{ij}##. That is, ##f_i(x_j)## is equal to 1 if i = j, 0 if not. We define ##f_i## for any vector in V "by linearity": if ##x= a_1x_1+ a_2x_2+ \dots+ a_nx_n##, then ##f_i(x)= a_1f_i(x_1)+ a_2f_i(x_2)+ \dots+ a_nf_i(x_n)= a_i##.

Now, we show that this set of linear functionals is a basis for V*. Suppose f is any linear function from V to its underlying field. For all i from 1 to n, let ##a_i= f(x_i)##. Then it is easy to show that ##f(x)= a_1f_1(x)+ a_2f_2(x)+ \dots+ a_nf_n(x)##, so that ##f= a_1f_1+ a_2f_2+ \dots+ a_nf_n##. That is, this set of n linear functionals spans the entire space of linear functionals on V.
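Here is a quick numerical illustration of that spanning formula. This is only a sketch with a made-up basis and functional, representing vectors as columns and functionals as rows:

```python
import numpy as np

# Columns of B form a made-up basis x_1, x_2, x_3 of R^3;
# a linear functional f is represented as a row vector, f(v) = f @ v.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
f = np.array([2.0, -1.0, 3.0])

F = np.linalg.inv(B)            # rows of F are the dual functionals f_1, f_2, f_3
a = f @ B                       # a_i = f(x_i)

# The claim f = a_1 f_1 + a_2 f_2 + a_3 f_3:
print(np.allclose(f, a @ F))    # -> True
```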

For independence, suppose ##a_1f_1+ a_2f_2+ \dots+ a_nf_n= 0##, the zero functional. Applying both sides to the basis vector ##x_j## gives ##a_j= 0## for every j, so the only linear combination of these functionals that gives the zero functional is the trivial one. That shows these functionals are independent.


(By the way, all of this requires that V be finite dimensional. An infinite dimensional vector space is never isomorphic to its full algebraic dual.)
 
Hi HallsofIvy,

Thank you very much for your reply! I understand now how the linear functionals work. Could you help me with the second part as well? How do you prove that a corresponding basis ##\{v_1, \dots, v_n\}## exists, given a basis of the dual space?
 
This is how I like to do these things: First a few comments about notation.

I use the convention that basis vectors of V are written with indices downstairs, and components of vectors of V are written with indices upstairs. So we can write a basis for V as ##\{e_i\}##, and if x is in V, we have ##x=\sum_i x^i e_i##. But I'll write this as ##x=x^i e_i##. The idea is that since there's always a sum over the indices that occur twice, and never a sum over other indices, it's a waste of time and space to type summation sigmas.
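(If it helps to see the convention in action, numpy's einsum expresses exactly this kind of implicit sum; the basis and components below are made up:)

```python
import numpy as np

e = np.array([[1.0, 0.0],       # e[i] is the basis vector e_i (made up)
              [1.0, 1.0]])
x_comp = np.array([2.0, 3.0])   # the components x^i

# x = x^i e_i: the repeated index i is summed over.
x = np.einsum('i,ij->j', x_comp, e)
print(x)                        # -> [5. 3.]
```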

For members of V*, the convention for indices is the opposite. We'll write a basis as ##\{e^i\}##, and if x is in V*, we'll write ##x=x_i e^i##.

Since you're new at this, you may find it difficult to keep track of which symbols denote members of V and which symbols denote members of V*, so for your benefit, I will write members of V as ##\vec x## and members of V* as ##\tilde x##. I found this notation helpful when I was learning this stuff, but I don't use it anymore. By the way, I'm going to omit some "for all" statements, in particular "for all i" and "for all j". I hope it will be obvious when I'm doing so.

Let ##\{\vec e_i\}## be an arbitrary basis for V. I like to define the dual basis ##\{\tilde e^i\}## by saying that for each i, ##\tilde e^i## is the member of V* such that ##\tilde e^i(\vec e_j)=\delta^i_j##. For all ##\vec x\in V##, we have
$$\tilde e^i(\vec x)=\tilde e^i(x^j\vec e_j)=x^j\tilde e^i(\vec e_j)=x^j\delta^i_j=x^i.$$ To prove that ##\{\tilde e^i\}## is a basis for V*, we must prove that it's a linearly independent set that spans V*. To see that it spans V*, let ##\tilde y\in V^*## be arbitrary. For all ##\vec x\in V##,
$$\tilde y(\vec x)=\tilde y(x^i\vec e_i)=x^i\tilde y(\vec e_i)=\tilde e^i(\vec x)\tilde y(\vec e_i) =\tilde y(\vec e_i)\tilde e^i(\vec x) =\big(\tilde y(\vec e_i)\tilde e^i\big)(\vec x).$$ So ##\tilde y=\tilde y(\vec e_i)\tilde e^i##. To see that ##\{\tilde e^i\}## is linearly independent, suppose that ##a_i\tilde e^i=0##. Then ##(a_i\tilde e^i)(\vec x)=0## for all ##\vec x\in V##. In particular, ##(a_i\tilde e^i)(\vec e_j)=0## for all j. So for all j,
$$0=(a_i\tilde e^i)(\vec e_j) =a_i \tilde e^i(\vec e_j)=a_i\delta^i_j=a_j.$$ So all the ##a_i## are 0.

Now let ##\{\tilde e^i\}## be an arbitrary basis for V*. We want to use this to construct a basis ##\{\vec e_i\}## for V such that ##\tilde e^i(\vec e_j)=\delta^i_j##. Maybe there's a more direct approach, but this is the one I see immediately:

Let's write members of V** (the dual of V*) as ##\overleftarrow x##. We will write the dual basis of ##\{\tilde e^i\}## as ##\{\overleftarrow e_i\}##. Define a function ##f:V\to V^{**}## by ##f(\vec x)(\tilde y)=\tilde y(\vec x)##. (Since ##f(\vec x)## is in V**, it takes a member of V* as input.) It's not hard to show that this f is an isomorphism. Then we can define the basis for V by ##\vec e_i=f^{-1}(\overleftarrow e_i)##. Then we just verify that everything works out as intended.
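In coordinates the whole construction collapses to a matrix inversion. Here is a sketch with a made-up basis of V*: if the given functionals ##\tilde e^i## are the rows of an invertible matrix A, then the columns of ##A^{-1}## are the basis vectors ##\vec e_j## you want, because ##AA^{-1}=I## says precisely that ##\tilde e^i(\vec e_j)=\delta^i_j##.

```python
import numpy as np

# Rows of A are a made-up basis of V* = (R^3)*; any invertible A works.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

E = np.linalg.inv(A)            # column j of E is the basis vector e_j

# A @ E = I encodes e^i(e_j) = delta_ij, so {e_j} is the basis we wanted.
print(np.allclose(A @ E, np.eye(3)))   # -> True
```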
 
The vector space is isomorphic to the dual space of the dual space (canonically so, in finite dimensions). Any vector v defines a linear map on dual vectors, ##l \mapsto l(v)##.

The dual basis of the dual basis is the original basis back again.
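A tiny numerical sketch of that evaluation map ##l \mapsto l(v)## (the vector and functional below are made up):

```python
import numpy as np

v = np.array([2.0, -1.0, 3.0])   # a made-up vector in R^3
l = np.array([1.0, 4.0, 0.0])    # a made-up functional, as a row vector


def ev_v(f):
    """v viewed as a member of V**: it eats a functional f and returns f(v)."""
    return f @ v


print(ev_v(l))                   # -> -2.0, the same number as l @ v
```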
 