# Linear Algebra Dual Space

## Homework Statement

Let Hom(V,W) be the set of linear transformations from V to W. Define addition on Hom(V,W) by (f + g)(v) = f(v) + g(v) and scalar multiplication by (af)(v) = af(v).

If V is a vector space over a field K, define V* = Hom(V,K). This is called the dual space of V. If {v1, ..., vn} is a basis of V, show that for each i there is a vi* in V* satisfying vi*(vj) = d_ij (the Kronecker delta).

*Hopefully I typed this question out clearly enough; I'm hoping that a real maverick of linear algebra happens upon this thread and understands the question.*

## The Attempt at a Solution

I've put a lot of thought into this problem and wanted to discuss my thoughts with someone who knows what's going on. So given that vi* is an element of V*, we need to find what mapping vi* must be such that vi*(vj) = d_ij (the Kronecker delta).

What if vi*(vj) = (vj / |vj|) · e_i, where e_i is a unit vector in the i direction? The factor (vj / |vj|) has length one, since this is the formula for normalizing a vector, and then dotting it with e_i will be zero if j does not equal i.

But then I realized that the dot product with e_i will be zero only if I apply the Gram-Schmidt orthonormalization process first, right?

Anyway, I'm new to the subject and hope somebody can somewhat understand my thoughts and provide some insight! Thanks PF

LCKurtz
Homework Helper
Gold Member


If ##v = \sum_{i=1}^n c_i v_i##, think about defining ##v_j^*## as picking out the ##j##th coefficient:$$v_j^*(v) = c_j$$See if you can do something with that.
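To make the hint concrete, here is a small numeric sketch (my own example, not from the problem): take V = R^3 with a non-standard basis given as the columns of a matrix B. Since the coordinates c of a vector v satisfy Bc = v, row j of B^{-1} is exactly the functional ##v_j^*## that picks out the jth coefficient, and the property ##v_j^*(v_i) = \delta_{ij}## falls out.

```python
import numpy as np

# Hypothetical basis of R^3: v1, v2, v3 are the columns of B (det(B) = 2, so invertible).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Coordinates c of v in this basis satisfy B @ c = v, so c = B^{-1} v.
# Row j of B^{-1} implements the functional v_j*.
B_inv = np.linalg.inv(B)

def dual(j, v):
    """v_j*(v): the j-th coordinate of v in the basis {v1, v2, v3}."""
    return B_inv[j] @ v

# Check the defining property v_j*(v_i) = delta_ij.
for i in range(3):
    for j in range(3):
        delta = 1.0 if i == j else 0.0
        assert np.isclose(dual(j, B[:, i]), delta)
```

Each `dual(j, ·)` is linear automatically, since it is a fixed row vector acting by dot product.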

Stephen Tashi
Your thinking is reasonable, but it depends on having a "standard basis" {e1, e2, ..., eN} available. Some proofs can effectively be done by thinking of the abstract vector space as Euclidean space, in a diagram where the illustrator has set up {e1, e2, ..., eN} for us. However, abstractly phrased problems usually expect students to use abstract methods rather than the concrete "easy way."

My thought on this problem is that you should define a transformation P that has the desired properties, without initially claiming it is a linear transformation, and then prove P is a linear transformation. By the conventions of mathematical writing, the definition:

Let Hom(V,W) be the set of linear transformations from V to W.

implies that any linear transformation you create from V into W is an element of Hom(V,W). So if you can construct a linear transformation P with the desired properties, it will be an element of Hom(V,W).

Given the vector vj in V, you could say "Let Pj be the transformation Pj(vi) = d_ij". However, this does not "well define" Pj, since it doesn't say what Pj does to an arbitrary vector R. To "well define" Pj(R), you could say that Pj is defined as follows: "Express R as its unique linear combination of the vectors in the basis {v1, v2, ..., vN}, and let Pj(R) be the coefficient of vj in that linear combination." (I'm sure you can find a more dignified way of putting it.) You have to prove Pj is a linear transformation. (Pj is "projection on the j-direction" if you are thinking of Euclidean space, but I wouldn't use the word "direction" when you write the proof.)
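A sketch of the linearity check this definition calls for, using only the uniqueness of coordinates in the basis:

```latex
% Write R = \sum_i r_i v_i and S = \sum_i s_i v_i, so P_j(R) = r_j and P_j(S) = s_j.
\begin{aligned}
aR + bS &= \sum_{i=1}^{n} (a r_i + b s_i)\, v_i
  &&\text{(this is the unique expansion of } aR + bS\text{)}\\
P_j(aR + bS) &= a r_j + b s_j = a\,P_j(R) + b\,P_j(S).
\end{aligned}
```

Uniqueness of the expansion is the key step; without it, the middle equality would not follow.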

It is interesting that your course materials, within a single problem, define Hom(V,W) as a set of linear transformations between vector spaces and then specialize to the case Hom(V,K), where K is the scalar field. The special case of Hom(V,K) is very important. The elements of Hom(V,K) are called "linear functionals". You can think of certain sets of functions as vector spaces. In advanced calculus, a linear functional is a linear map that takes functions to numbers. (For example, ##L(f) = \int_0^\infty x f(x)\, dx##.)
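As a hedged numeric illustration of that functional idea (on a finite interval, and with a simple quadrature, so not exactly the ##\int_0^\infty## example above): approximate L(f) = ∫ x f(x) dx over [0, 1] on a grid and check that L(af + bg) = a L(f) + b L(g).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)

def integrate(y, x):
    # Simple trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def L(f):
    # Approximation of the linear functional L(f) = \int_0^1 x f(x) dx.
    return integrate(x * f(x), x)

f, g = np.sin, np.exp
a, b = 2.0, -3.0

lhs = L(lambda t: a * f(t) + b * g(t))
rhs = a * L(f) + b * L(g)
assert np.isclose(lhs, rhs)  # linearity: L(af + bg) = a L(f) + b L(g)
```

Linearity here is exact up to floating-point error, because any fixed quadrature rule is itself linear in the integrand.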

There are also examples of Hom(V,W) in advanced calculus - many of those mysterious "transforms" that take a function f(x) to a function F(s) in a different variable are linear transformations from one vector space of functions to another vector space of functions.
