What Are the Key Concepts of Dual Vector Spaces?

Thread summary: The discussion centers on understanding dual vector spaces and their key concepts, particularly linear functionals and dual bases. A linear functional maps vectors from a vector space V to a field K while preserving linearity, and the dual space V* consists of all such functionals. A functional is uniquely determined by its values on a chosen basis, which leads to the definition of dual basis vectors satisfying \left[ \beta_i, \beta^{*j} \right] = \delta_i^j. The motivation behind these definitions is to make the dual basis vectors linearly independent and to provide a natural way to assign values to vectors based on their coordinates. Overall, the conversation reflects a quest for clarity in advanced linear algebra concepts.
Etenim
Greetings,

Slowly I am beginning to think that I must be missing something basic, since I cannot seem to grasp this fundamental concept. For this post, I will adopt the bracket notation as introduced in P. Halmos' "Finite-Dimensional Vector Spaces": \left[ \cdot, \cdot \right] : V \times V^* \to K.

A linear functional on a vector space V is a scalar-valued function, defined for each v \in V, mapping vectors into the underlying coefficient field and having the well-known property of linearity. -- Let V be a finite-dimensional vector space over a field K. V^* is defined to be the space of all linear functionals f : V \to K, and shall be referred to as the dual space of V.
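To make this concrete (a simple example of my own, not from Halmos): on V = \mathbb{R}^2, the map

f \left( x_1, x_2 \right) = 3 x_1 - 2 x_2

is a linear functional, since f \left( a u + b v \right) = a f \left( u \right) + b f \left( v \right) for all scalars a, b and vectors u, v.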

Once a basis for V is chosen, a functional is pinned down by its values on that basis: if \left[ x, f \right] = \left[ x, f^\prime \right] for all x \in V, then f = f^\prime. This is clear, for after representing an arbitrary vector as a linear combination of V's basis vectors \left( \beta_i \right)_{i=1}^n and applying a linear functional f, only the scalars \left[ \beta_i, f \right] = a_i remain.
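Writing this out for myself: expanding x = \sum_{i=1}^n c^i \beta_i and using linearity,

\left[ x, f \right] = \sum_{i=1}^n c^i \left[ \beta_i, f \right] = \sum_{i=1}^n c^i a_i,

so f is completely determined by the n scalars a_1, \ldots, a_n.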

That is, given scalars a_1, \ldots, a_n \in K, can I find a unique f \in V^* such that \left[ \beta_i, f \right] = a_i for every i?
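My own attempt at an answer: define f on an arbitrary x = \sum_i c^i \beta_i by

\left[ x, f \right] := \sum_{i=1}^n c^i a_i.

This f is linear, since the coordinates c^i depend linearly on x; it satisfies \left[ \beta_i, f \right] = a_i; and by the expansion above, any functional with these values on the basis coincides with it, so it is unique.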

I interpret this as a consequence of our earlier definition of the functional as linear. Conversely, could we take this uniqueness property as an axiom and then deduce that the functional must be linear on such a foundation? Then the result would not seem so coincidental to me, but would rather be a rediscovery of a historical definition made for the very purpose of making the elements of the dual space linear.

Now, I can decompose f and write it as a linear combination of dual basis vectors, the basis of the dual vector space. Given a basis \left( \beta_i \right)_{i=1}^n for V, we define the elements of the dual basis \left( \beta^{*i} \right)_{i=1}^n uniquely by \left[ \beta_i, \beta^{*j} \right] = \delta_i^j.

Why do we do this? To later make the set of dual basis vectors linearly independent? Is there no other choice of value for \left[ \beta_i, \beta^{*j} \right] that achieves this feat? (My own check of the independence claim follows below.)
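Checking the independence claim myself: suppose \sum_{j=1}^n \lambda_j \beta^{*j} = 0 in V^*. Applying both sides to \beta_i gives

0 = \left[ \beta_i, \sum_{j} \lambda_j \beta^{*j} \right] = \sum_{j} \lambda_j \delta_i^j = \lambda_i,

so every coefficient vanishes and the dual basis vectors are linearly independent. For independence alone, any invertible matrix M_i^j could replace \delta_i^j; the choice \delta_i^j is the one that makes \beta^{*j} extract exactly the j-th coordinate.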

I hope I didn't mess up the indices. This is my first exposure to advanced linear algebra - I would be happy if someone could enlighten me about dual spaces, and what motivates the definitions.

Thanks a lot,

Cheers,
- Etenim.
 
A basis represents every vector as a sequence of coordinates (a_1, ..., a_n).

Then the most natural way to assign a number to such a vector is to choose one of its coordinates.

Choosing the i-th coordinate is exactly your definition of the i-th dual basis vector.

What other choice could be simpler?
 
Ah. By 'naturally' defining the action of a dual basis vector on an arbitrary vector v = c^i \beta_i to be \beta^{*j} \left( c^1 \beta_1 + c^2 \beta_2 + \cdots + c^n \beta_n \right) = c^j, we want \beta^{*j} \left( \beta_i \right) = \delta_i^j so that, by the dual basis vector's linearity, it "extracts" the c^j.
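And conversely, checking for myself that these really form a basis of V^*: any f \in V^* expands in the dual basis as

f = \sum_{j=1}^n \left[ \beta_j, f \right] \beta^{*j},

since both sides take the same value on every \beta_i, and a functional is determined by its values on a basis.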

'Naturally'. Well, I wonder what understanding feels like. Meh. But, yes, it makes more sense now. Thanks. :)
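To convince myself numerically: if the basis vectors are taken to be the columns of an invertible matrix B, then the dual basis functionals are the rows of B^{-1}, because \left( B^{-1} B \right)^j_{\ i} = \delta_i^j. A minimal sketch (my own illustration, assuming NumPy; the particular matrix B is just an example):

Python:
import numpy as np

# Basis of R^3: the columns of B (any invertible matrix works).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis functionals are the rows of B^{-1}:
# row j applied to column i of B gives delta_i^j.
B_inv = np.linalg.inv(B)
print(np.allclose(B_inv @ B, np.eye(3)))  # True: [beta_i, beta^{*j}] = delta_i^j

# Applying the j-th dual basis functional to a vector extracts the
# j-th coordinate of that vector with respect to the basis B.
v = B @ np.array([2.0, -1.0, 3.0])  # v has coordinates (2, -1, 3) in basis B
print(B_inv @ v)                    # -> [ 2. -1.  3.]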
 