What Are the Key Concepts of Dual Vector Spaces?

Greetings,

Slowly I am beginning to think that I must be unusually dense for not getting this fundamental concept. For this post, I will adopt the bracket notation introduced in P. Halmos' "Finite-Dimensional Vector Spaces": ##\left[ \cdot, \cdot \right] : V \times V^* \to K##.

A linear functional on a vector space ##V## is a scalar-valued function, defined for each ##v \in V##, mapping vectors into the underlying coefficient field and having the well-known property of linearity. Let ##V## be a finite-dimensional vector space over a field ##K##. Then ##V^*## is defined to be the space of all linear functionals ##f : V \to K##, which shall be referred to as the dual space of ##V##.

Once a basis for ##V## is chosen, we have, for all ##f, f' \in V^*##: if ##\left[ x, f \right] = \left[ x, f' \right]## for every ##x \in V##, then ##f = f'##. This is plausible, for by choosing a basis we can show that ##f## is completely determined by its values on the basis vectors. After representing a vector ##x## as a linear combination of ##V##'s basis vectors ##\left( \beta_i \right)_{i=1}^n## and applying a linear functional ##f##, the terms ##\left[ \beta_i, f \right] = a_i## emerge.
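Spelled out, with ##x = c^1 \beta_1 + \cdots + c^n \beta_n## (just making the step in the previous paragraph explicit):
$$\left[ x, f \right] = \sum_{i=1}^n c^i \left[ \beta_i, f \right] = \sum_{i=1}^n c^i a_i,$$
so ##\left[ x, f \right]## is completely determined by the ##n## scalars ##a_i##, and two functionals that agree on every basis vector agree on all of ##V##.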

That is, given scalars ##a_i \in K##, can I find a unique ##f \in V^*## such that ##\left[ \beta_i, f \right] = a_i## for each ##i##?
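If I have this right, the answer is yes, by extending linearly from the basis: define
$$f\!\left( \sum_{i=1}^n c^i \beta_i \right) = \sum_{i=1}^n c^i a_i .$$
This ##f## is linear by construction and satisfies ##\left[ \beta_i, f \right] = a_i##, and the computation above shows that any functional with those values on the basis must equal it.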

I interpret this to be a consequence of our earlier requirement that the functional be linear. Conversely, could we take the functional's uniqueness property as an axiom and then deduce that it must be linear? Then that result would not seem so coincidental to me, but would rather be a rediscovery of a historical definition made for the very purpose of making the elements of the dual space linear.

Now, I can take ##f## apart and write it as a linear combination of dual basis vectors, the basis of the dual vector space. Given a basis ##\left( \beta_i \right)_{i=1}^n## for ##V##, we define the elements of the dual basis ##\left( \beta^{*i} \right)_{i=1}^n## uniquely by ##\left[ \beta_i, \beta^{*j} \right] = \delta_i^j##.
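A toy example (my own, not from Halmos): take ##V = \mathbb{R}^2## with the standard basis ##\beta_1 = (1, 0)##, ##\beta_2 = (0, 1)##. Then the dual basis vectors are the coordinate projections
$$\beta^{*1}(x, y) = x, \qquad \beta^{*2}(x, y) = y,$$
and one checks ##\beta^{*1}(\beta_1) = 1##, ##\beta^{*1}(\beta_2) = 0##, and so on, which is exactly the ##\left[ \beta_i, \beta^{*j} \right] = \delta_i^j## condition.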

Why do we do this? To later make the set of dual basis vectors linearly independent? Is the Kronecker delta the only choice of values for ##\left[ \beta_i, \beta^{*j} \right]## that accomplishes this?

I hope I didn't mess up the indices. This is my first exposure to advanced linear algebra - I would be happy if someone could enlighten me about dual spaces, and what motivates the definitions.

Thanks a lot,

Cheers,
- Etenim.
 
a basis represents every vector as a sequence of coordinates ##(a_1, \ldots, a_n)##.

then the most natural way to assign a number to such a vector is to choose one of the coordinates.

choosing the ith coordinate is exactly your definition of the ith dual basis vector.

what other choice could be simpler?
 
Ah. By 'naturally' defining the action of a dual basis vector on an arbitrary vector ##v = c^i \beta_i## to be ##\beta^{*j} \left( c^1 \beta_1 + c^2 \beta_2 + \cdots + c^n \beta_n \right) = c^j##, we want, by the dual basis vector's linearity, ##\beta^{*j} \left( \beta_i \right) = \delta_i^j## to "extract" the ##c^j##.
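If it helps to make the coordinate extraction concrete numerically, here is a small sketch (my own construction, not from Halmos) using numpy and a made-up basis of ##\mathbb{R}^2##. Writing the basis vectors as the columns of a matrix ##B##, the dual basis functionals are the rows of ##B^{-1}##:

[code]
import numpy as np

# Made-up basis of R^2: columns of B are beta_1 = (1, 0) and beta_2 = (1, 1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# If v = B @ c (c = coordinates of v in this basis), then c = inv(B) @ v,
# so the j-th dual basis functional beta*^j is "dot row j of inv(B) with v".
B_inv = np.linalg.inv(B)

# Defining property [beta_i, beta*^j] = delta_i^j, i.e. inv(B) @ B = identity:
print(B_inv @ B)  # [[1. 0.], [0. 1.]]

# Coordinate extraction: build v = 2*beta_1 + 3*beta_2 and recover (2, 3).
v = 2.0 * B[:, 0] + 3.0 * B[:, 1]
print(B_inv @ v)  # [2. 3.]
[/code]

Row ##j## of ##B^{-1}## is exactly the functional that picks out the ##j##-th coordinate relative to this basis, i.e. the ##\left[ \beta_i, \beta^{*j} \right] = \delta_i^j## condition in matrix form.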

'Naturally'. Well, I wonder what understanding feels like. Meh. But, yes, it makes more sense now. Thanks. :)
 