Hi, I'm learning about vector spaces and I would like to ask some questions about them.
Suppose I have a vector space [itex]V[/itex] and a basis [itex]\{v_1, \dots, v_n\}[/itex] for [itex]V[/itex]. Then there is a dual space [itex]V^*[/itex] consisting of all linear functions whose domain is [itex]V[/itex] and whose codomain is ℝ. The space [itex]V^*[/itex] has a dual basis [itex]\{x_1, \dots, x_n\}[/itex], and one way of constructing this dual basis is to let [itex]x_i[/itex] be the function that returns the [itex]i^{th}[/itex] coordinate of a vector when it is expressed in the given basis for [itex]V[/itex]. The claim is that the span of these functions is equal to [itex]V^*[/itex], and therefore the space and its dual space have the same dimension.
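To make sure I have the construction right, here is how I picture it (this is my own example, not from the textbook): if [itex]v = a_1 v_1 + \dots + a_n v_n[/itex], then
[tex]x_i(v) = x_i(a_1 v_1 + \cdots + a_n v_n) = a_i, \qquad \text{so in particular } x_i(v_j) = \delta_{ij}.[/tex]
For instance, in ℝ² with the standard basis, [itex]x_1(a, b) = a[/itex] and [itex]x_2(a, b) = b[/itex].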
I have a few questions. My textbook states that any linear function on [itex]V[/itex] can be expressed as a linear combination of the coordinate functions, but it does not explain why. May I know why this is the case? I am assuming that this is because a linear function of a vector must be a linear combination of its coordinates, but I'm not sure that this must always be the case, since I think a linear function is merely defined to be a function that preserves addition and scalar multiplication.
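My best guess at the computation (please correct me if this is wrong) is to expand an arbitrary [itex]f \in V^*[/itex] against the basis of [itex]V[/itex]: writing [itex]v = \sum_i x_i(v)\, v_i[/itex] and using the linearity of [itex]f[/itex],
[tex]f(v) = f\!\left(\sum_{i=1}^n x_i(v)\, v_i\right) = \sum_{i=1}^n f(v_i)\, x_i(v),[/tex]
which would mean [itex]f = \sum_i f(v_i)\, x_i[/itex], with the scalars [itex]f(v_i)[/itex] as the coefficients. Is that the idea?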
The textbook also says that the coordinate functions are linearly independent. By this, I think they mean that if a linear combination of the coordinate functions evaluates to 0 on every input vector, then the coefficients of all the coordinate functions must be 0. I think this makes sense, but may I know if I interpreted it correctly?
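If that interpretation is right, then I believe the independence follows by feeding the basis vectors into such a combination (again, my own reasoning, not the textbook's): if [itex]\sum_i c_i x_i[/itex] is the zero function, then for each [itex]j[/itex]
[tex]0 = \left(\sum_{i=1}^n c_i x_i\right)(v_j) = \sum_{i=1}^n c_i\, x_i(v_j) = c_j.[/tex]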
Finally, it is mentioned that given any basis [itex]\{x_1, \dots, x_n\}[/itex] of [itex]V^*[/itex], you can construct a basis for [itex]V[/itex] such that the given basis of [itex]V^*[/itex] acts as the coordinate functions for this basis of [itex]V[/itex]. By this, I think they mean that I can find a set [itex]\{v_1, \dots, v_n\}[/itex] such that [itex]x_i(v_j) = 1[/itex] if [itex]i = j[/itex] and [itex]x_i(v_j) = 0[/itex] otherwise. But I'm not sure why this is true. I think that in order for this to be true, the following questions have to be answered.
1) Such a set [itex]\{v_1, \dots, v_n\}[/itex] exists.
2) Any vector in [itex]V[/itex] can be expressed as a linear combination of the vectors.
3) The constructed vectors are linearly independent.
I am not sure how 1) can be answered. I think 2) and 3) are actually the same question, because we know that both [itex]V[/itex] and [itex]V^*[/itex] have the same dimension, and therefore 2) and 3) must be true at the same time so that the Linear Dependence Lemma is not violated. I am not sure how to prove 2), but I think I can prove 3). Suppose some linear combination of the constructed vectors equals zero, so [itex]\sum_i a_i v_i = 0[/itex] for some scalars [itex]a_1, \dots, a_n[/itex]. If I apply [itex]x_i[/itex] to both sides of this equation, I get [itex]a_i = 0[/itex], so in the end I deduce that all the coefficients are 0. May I know if there is a way to prove 2) directly?
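The closest I can get to a direct argument for 2) (this is my own attempt, so it may have a gap) is the following: given any [itex]v \in V[/itex], form the candidate expansion
[tex]w = \sum_{i=1}^n x_i(v)\, v_i, \qquad \text{so that } x_j(v - w) = x_j(v) - x_j(v) = 0 \text{ for every } j.[/tex]
Since the [itex]x_i[/itex] span [itex]V^*[/itex], this would mean every linear functional vanishes on [itex]v - w[/itex], and I believe the only vector on which every functional vanishes is 0, which would give [itex]v = w[/itex]. Is that last step the part that needs justification?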
Thank you very much!