Confused about the concepts of dual spaces, dual bases, reflexivity and annihilators

In summary, the thread discusses the original poster's confusion about several linear algebra concepts: dual spaces, dual bases, reflexivity, and annihilators. They ask for their loose understanding to be critiqued and for simple numerical examples. Much of the confusion stems from the misconception that a vector must be an arrow with magnitude and direction, rather than any element of a vector space. The poster also introduces a bracket notation and asks whether it is standard.
  • #1
Philmac
My background in linear algebra is pretty basic: high school math and a first year course about matrix math. Now I'm reading a book about finite-dimensional vector spaces and there are a few concepts that are just absolutely bewildering to me: dual spaces, dual bases, reflexivity and annihilators. The book I'm reading explains everything in extremely general terms and doesn't provide any numerical examples, so I can't wrap my head around any of this. I'd really appreciate it if my loose understanding of these concepts could be critiqued/corrected and, if possible, some simple numerical examples could be provided. I really can't make heads or tails of some of this.

Note: I've never seen this bracket notation before, so I'll briefly introduce it in case it isn't something standard:
[x,y][itex]\equiv[/itex]y(x)

First, dual spaces. My understanding of a linear functional is that it's a black box where vectors go in and scalars come out (e.g. dot product). The dual space V' of a vector space V, is the set of all linear functionals that can be applied to that vector space. So, why is this called a "space"? How can things like integration and dot products (i.e. operations) form a space? The author also refers to the elements of V' as "vectors" -- how can an operation be a vector? My understanding of a vector is that it is a value with both magnitude and direction. Obviously, operations produce values, but V' is the set of operations, not values.

Second, dual bases. I just don't understand this at all, so I'll just provide the definition in this book:
If V is an n-dimensional vector space and if X={x1,...,xn} is a basis in V, then there is a uniquely determined basis X' in V', X'={y1,...,yn}, with the property that [xi,yj]=δij. Consequently the dual space of an n-dimensional space is n-dimensional.

The basis X' is called the dual basis of X.
I think δij is the Kronecker delta, but I'm not 100% sure.
So, what I think this means is that there is one functional in V' for each basis vector of V, for which yj(xi)=1 for j=i and 0 for all j≠i. But does V' necessarily have dimension n?

Third, reflexivity. I just don't understand this at all. Here's the definition in the book:
If V is a finite-dimensional vector space, then corresponding to every linear functional z0 on V' there is a vector x0 in V such that z0(y)=[x0,y]=y(x0) for every y in V'; the correspondence z0[itex]\leftrightarrow[/itex]x0 between V'' and V is an isomorphism.

The correspondence described in this statement is called the natural correspondence between V'' and V.

It is important to observe that the theorem shows not only that V and V'' are isomorphic -- this much is trivial from the fact that they have the same dimension -- but that the natural correspondence is an isomorphism. This property of vector spaces is called reflexivity; every finite-dimensional vector space is reflexive.


Fourth, annihilators. I think I understand this concept somewhat, but the proofs presented don't make sense to me. My understanding of an annihilator is that it is any subset of V' which evaluates to 0 for all x in V. The thing I'm confused about is the annihilator of an annihilator.
If M is a subspace in a finite-dimensional vector space V, then M00 (=(M0)0) = M.
Now, I'm willing to accept this proposition, but the proof is relatively short yet I cannot make sense of it. The proof is:
By definition, M00 is the set of all vectors x such that [x,y]=0 for all y in M0. Since, by the definition of M0, [x,y] = 0 for all x in M and all y in M0, it follows that M[itex]\subset[/itex]M00. The desired conclusion now follows from a dimension argument. Let M be m-dimensional; then the dimension of M0 is n-m, and that of M00 is n-(n-m)=m. Hence M = M00, as was to be proved.
The problem I have here is "By definition, M00 is the set of all vectors x such that [x,y]=0 for all y in M0". Shouldn't it be the set of all vectors z (or whatever letter you like) in V'?
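The dimension bookkeeping in the quoted proof can at least be checked on a concrete example. A minimal Python sketch, assuming NumPy is available, representing a subspace by spanning rows and an annihilator by a matrix null space; the subspace M = span{(1,1,0)} inside R^3 is just an illustrative choice:

```python
import numpy as np

def null_space_rows(A, tol=1e-10):
    """Rows spanning {c : A @ c = 0}, computed via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:]

# M = span{(1, 1, 0)} inside V = R^3, so m = 1 and n = 3.
B = np.array([[1.0, 1.0, 0.0]])      # rows span M

# M0: functionals (coefficient rows) vanishing on M; dim = n - m = 2.
M0 = null_space_rows(B)
assert M0.shape[0] == 2

# M00: vectors killed by every functional in M0; dim = n - (n - m) = 1.
M00 = null_space_rows(M0)
assert M00.shape[0] == 1

# M00 is spanned by a multiple of the original vector (1, 1, 0), so M00 = M.
v = M00[0] / M00[0][0]
assert np.allclose(v, [1.0, 1.0, 0.0])
print("dim M0 =", M0.shape[0], "; dim M00 =", M00.shape[0], "; M00 = M")
```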
 
  • #2


Philmac said:
My background in linear algebra is pretty basic: high school math and a first year course about matrix math. Now I'm reading a book about finite-dimensional vector spaces and there are a few concepts that are just absolutely bewildering to me: dual spaces, dual bases, reflexivity and annihilators. The book I'm reading explains everything in extremely general terms and doesn't provide any numerical examples, so I can't wrap my head around any of this. I'd really appreciate it if my loose understanding of these concepts could be critiqued/corrected and, if possible, some simple numerical examples could be provided. I really can't make heads or tails of some of this.

Note: I've never seen this bracket notation before, so I'll briefly introduce it in case it isn't something standard:
[x,y][itex]\equiv[/itex]y(x)

First, dual spaces. My understanding of a linear functional is that it's a black box where vectors go in and scalars come out (e.g. dot product). The dual space V' of a vector space V, is the set of all linear functionals that can be applied to that vector space. So, why is this called a "space"? How can things like integration and dot products (i.e. operations) form a space? The author also refers to the elements of V' as "vectors" -- how can an operation be a vector? My understanding of a vector is that it is a value with both magnitude and direction. Obviously, operations produce values, but V' is the set of operations, not values.
Let's begin by clearing this up. Once you get this, we'll get to the next thing:

You're having a major, major misconception. You think of vector as some kind of "arrow" with both a magnitude and direction. While this is certainly true in basic math, this is absolutely false when you get to more advanced spaces.

In fact, an [itex]\mathbb{R}[/itex]-vector space is any set equipped with an addition and a scalar multiplication (which satisfy some elementary axioms). All a vector is, is an element of a vector space.

The easiest example of a vector space is of course [itex]\mathbb{R}^n[/itex], whose elements can indeed be seen as "arrows" with a magnitude and a direction.

However, this is far from the only example of a vector space. For example:

[tex]\{f:[0,1]\rightarrow \mathbb{R}~\vert~\text{f continuous}\}[/tex]

is also a vector space! All this means is that the sum and scalar multiple of continuous functions is again a continuous function. And it would be very awkward to see this set as a collection of "arrows". The vectors of this set are now continuous functions!

Other vector spaces are the polynomials, the differentiable functions, etc... The thing I want you to realize is that a vector space is a very broad concept. There's a lot that can be a vector space, not just "arrows with magnitude and direction".

When given a vector space V (which can be anything), we can form the dual space

[tex]V^\prime=\{f:V\rightarrow \mathbb{R}~\vert~f~\text{linear}\}[/tex]

This is a vector space. All this means is that the sum and scalar product of linear functions is linear. There's nothing more to it. The vectors here are simply linear functions!

So remember: a vector space can be anything! Once you understand this, we can move on to your next questions. But I feel that you must grasp this first.
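As a concrete check that functions really behave like vectors, here is a minimal Python sketch; the particular functions chosen are just illustrations:

```python
import math

def add(f, g):
    """Vector addition of functions: (f + g)(t) = f(t) + g(t)."""
    return lambda t: f(t) + g(t)

def scale(c, f):
    """Scalar multiplication: (c*f)(t) = c * f(t)."""
    return lambda t: c * f(t)

f = math.sin
g = lambda t: t ** 2

# The "vector" 3*sin + square is again a function on [0, 1].
h = add(scale(3.0, f), g)

# Spot-check h(t) = 3 sin(t) + t^2 at a few points.
for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(h(t) - (3.0 * math.sin(t) + t ** 2)) < 1e-12
print("functions are closed under + and scalar multiplication")
```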
 
  • #3


Thank you very much! There is a list of axioms defining both fields and vector spaces at the beginning of the book I'm reading. I do agree that a dual space satisfies these axioms. However, I errantly believed that "vectors" had to have some geometric interpretation. I thought that perhaps vectors were more abstract than I had previously believed, and you've confirmed that for me. So a vector (space) is simply anything that satisfies the axioms, nothing more than that. I believe I'm ready to hear explanations for the rest of these concepts :)
 
  • #4


OK, dual bases then.

Philmac said:
Second, dual bases. I just don't understand this at all, so I'll just provide the definition in this book:

I think δij is the Kronecker delta, but I'm not 100% sure.
So, what I think this means is that there is one functional in V' for each basis vector of V, for which yj(xi)=1 for j=i and 0 for all j≠i. But does V' necessarily have dimension n?

The definition is maybe a bit more abstract than necessary. But that's not necessarily a bad thing.

Let's first look at the vector space [itex]V=\mathbb{R}^n[/itex], which is the nice vector space of arrows. Elements in the dual space are now just linear functions [itex]T:\mathbb{R}^n\rightarrow \mathbb{R}[/itex].

The first important observation is that T is completely determined by how it acts on a basis. That is, if you know that

[tex]T(1,0,...,0)=y_1,T(0,1,0,...,0)=y_2,..., T(0,0,0,...,1)=y_n[/tex]

then you know completely what T does on every element:

[tex]T(a_1,...,a_n)=a_1y_1+...+a_ny_n[/tex]

So it just suffices to say what T does on a basis.

Now, if V is an arbitrary vector space, then the same holds: we can define a linear function by saying what it does on the basis. Now, let's define such a function. Take a basis [itex]\{e_1,...,e_n\}[/itex] and define

[tex]T_1(e_1)=1, T_1(e_k)=0~\text{for k>1}[/tex]

In general, we have

[tex]T_i(e_i)=1,~T_k(e_i)=0~\text{if}~k\neq i[/tex]

Even more abstractly put: [itex]T_i(e_k)=\delta_{ik}[/itex], where we indeed have the Kronecker delta.

What is our T in our nice space [itex]\mathbb{R}^n[/itex]? Well, you can easily see that

[tex]T_i(a_1,...,a_n)=a_i[/tex]

so Ti is simply the i'th projection!

Now, when V is finite-dimensional, I claim that the Ti form a basis for V' (the dual space). This means nothing more than:

  • The Ti are linearly independent:
    [tex]\lambda_1 T_1+...+\lambda_n T_n=0~\Rightarrow~\lambda_1=...=\lambda_n=0[/tex]
    Indeed, if [itex]\lambda_1 T_1+...+\lambda_n T_n=0[/itex], then

    [tex]\lambda_1 T_1(x)+...+\lambda_n T_n(x)=0[/tex]

    for every vector x. So try the vectors ei as our x's.
  • The Ti span the space.
    This means only that every functional T can be written as

    [tex]\lambda_1 T_1+...+\lambda_n T_n=T[/tex]

    So, we must find [itex]\lambda_i[/itex] such that the above is true. But, it suffices to take [itex]\lambda_i=T(e_i)[/itex] here. Then you can easily check that for any x, it holds that

    [tex]\lambda_1T_1(x)+...+\lambda_nT_n(x)=T(x)[/tex]

I hope that clarifies this.
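To make this concrete: if a functional on R^n is represented by its coefficient row c (acting as x ↦ c·x), the dual basis can be computed numerically. A sketch assuming NumPy is available; the basis below is an arbitrary illustration:

```python
import numpy as np

# A (non-standard) basis {e_1, e_2} of R^2, stored as the columns of E.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The dual basis functionals T_i are the rows of E^{-1}:
# then T_i(e_k) = (E^{-1} E)_{ik} = delta_{ik}, the Kronecker delta.
T = np.linalg.inv(E)
assert np.allclose(T @ E, np.eye(2))

# Any functional S (here S(x, y) = 2x + y, i.e. row [2, 1]) is spanned
# by the T_i, with coefficients lambda_i = S(e_i):
S = np.array([2.0, 1.0])
lam = S @ E                      # lambda_i = S(e_i)
assert np.allclose(lam @ T, S)   # lambda_1 T_1 + lambda_2 T_2 = S
print("dual basis rows:\n", T)
```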

By the way, which book are you reading?
 
  • #5


Thanks again! The book I'm reading is "Finite-Dimensional Vector Spaces" by Paul R. Halmos.

I've read your post many times but I'm having a great deal of trouble understanding it. Math notation has always been extremely confusing for me (I really need concrete examples with numbers), so I apologize if my questions are extremely basic. Here's what I'm having trouble with:

micromass said:
Let's first look at the vector space [itex]V=\mathbb{R}^n[/itex], which is the nice vector space of arrows. Elements in the dual space are now just linear functions [itex]T:\mathbb{R}^n\rightarrow \mathbb{R}[/itex].

The first important observation is that T is completely determined by how it acts on a basis. That is, if you know that

[tex]T(1,0,...,0)=y_1,T(0,1,0,...,0)=y_2,..., T(0,0,0,...,1)=y_n[/tex]

then you know completely what T does on every element:

[tex]T(a_1,...,a_n)=a_1y_1+...+a_ny_n[/tex]

So it just suffices to say what T does on a basis.
What exactly is T? I understand that it is a map from Rn to R1, but are you saying that each element of V'=Tn or are you saying that V'=T? What are the arguments of T? x? The elements y1...yn are operations, so how can anything equate to them? Perhaps this goes back to my original problem with the notion of a dual space.


micromass said:
Now, if V is an arbitrary vector space, then the same holds: we can define a linear function by saying what it does on the basis. Now, let's define such a function. Take a basis [itex]\{e_1,...,e_n\}[/itex] and define

[tex]T_1(e_1)=1, T_1(e_k)=0~\text{for k>1}[/tex]

In general, we have

[tex]T_i(e_i)=1,~T_k(e_i)=0~\text{if}~k\neq i[/tex]

Even more abstractly put: [itex]T_i(e_k)=\delta_{ik}[/itex], where we indeed have the Kronecker delta.

What is our T in our nice space [itex]\mathbb{R}^n[/itex]? Well, you can easily see that

[tex]T_i(a_1,...,a_n)=a_i[/tex]
I sort of understand this... For example, in R3 a basis is (1,0,0),(0,1,0),(0,0,1), so I understand the Kronecker delta here, but a dual base is made up of operations -- what guarantees that each operation in V' will match up with the right (or any) element of the base of V such that it evaluates to 1 (and 0 for all the other elements of the base)?

micromass said:
so Ti is simply the i'th projection!
If I were in the familiar territory of R2 or R3 I would understand this completely, but what does it mean when you project an operation? Say, onto the "integration axis". What would this mean? Say Tk=integration, and ai=3. Would this mean that the operation being performed consists (in part) of integrating the argument and multiplying the result by 3?

micromass said:
Now, when V is finite-dimensional, I claim that the Ti form a basis for V' (the dual space). This means nothing more than:

  • The Ti are linearly independent:
    [tex]\lambda_1 T_1+...+\lambda_n T_n=0~\Rightarrow~\lambda_1=...=\lambda_n=0[/tex]
    Indeed, if [itex]\lambda_1 T_1+...+\lambda_n T_n=0[/itex], then

    [tex]\lambda_1 T_1(x)+...+\lambda_n T_n(x)=0[/tex]

    for every vector x. So try the vectors ei as our x's.
  • The Ti span the space.
    This means only that every functional T can be written as

    [tex]\lambda_1 T_1+...+\lambda_n T_n=T[/tex]

    So, we must find [itex]\lambda_i[/itex] such that the above is true. But, it suffices to take [itex]\lambda_i=T(e_i)[/itex] here. Then you can easily check that for any x, it holds that

    [tex]\lambda_1T_1(x)+...+\lambda_nT_n(x)=T(x)[/tex]
I understand your discussion of linear independence, but how does this work with a dual space? How is it even possible for the elements of a dual space to be linearly dependent? How does one combine, say, multiplication and the dot product to produce integration?
 
  • #6


Philmac said:
Thanks again! The book I'm reading is "Finite-Dimensional Vector Spaces" by Paul R. Halmos.

Hmmm, maybe not the best book for beginners...

I've read your post many times but I'm having a great deal of trouble understanding it. Math notation has always been extremely confusing for me (I really need concrete examples with numbers), so I apologize if my questions are extremely basic. Here's what I'm having trouble with:


What exactly is T? I understand that it is a map from Rn to R1,

Indeed, T is nothing more and nothing less than a linear map [itex]T:R^n\rightarrow R[/itex]. That's all it is.

but are you saying that each element of V'=Tn or are you saying that V'=T?

No, the Ti and T are certainly elements of V'. Why? Because T is a linear map from V to R, and V' is simply the set of all such linear maps.
I'm not claiming that V'=T or anything like that. This would make little sense, since V' is a vector space and T is an operator.

What are the arguments of T? x?

The arguments of T are elements of Rn. So, we can do T(2,3,2) if n=3, for example. In general notation, I write (x1,...,xn) for an n-tuple in Rn.

The elements y1...yn are operations, so how can anything equate to them?

Sorry about this. I didn't mean y1,..., yn as operations here. When I wrote them, I just meant them to be real numbers!

For example, we can have T(1,0,...,0)=2, and then y1=2. I do not mean yi to be an operator here.

I sort of understand this... For example, in R3 a basis is (1,0,0),(0,1,0),(0,0,1), so I understand the Kronecker delta here, but a dual base is made up of operations -- what guarantees that each operation in V' will match up with the right (or any) element of the base of V such that it evaluates to 1 (and 0 for all the other elements of the base)?

Could you explain this more? What do you mean by "matching up with the right element such that it evaluates to 1"?

If I were in the familiar territory of R2 or R3 I would understand this completely, but what does it mean when you project an operation?

I only meant this explanation to be in R2 or R3. There are notions of "projections" in arbitrary vector spaces, but I don't think that now is a good time to discuss this.



I understand your discussion of linear independence, but how does this work with a dual space? How is it even possible for the elements of a dual space to be linearly dependent? How does one combine, say, multiplication and the dot product to produce integration?

Elements of a dual space V' are linearly independent, by definition, if for all x in V it holds that

[tex]a_1T_1(x)+...+a_nT_n(x)=0~\Rightarrow~a_1=...=a_n=0[/tex]

For example, let me look at the dual space of [itex]\mathbb{R}^2[/itex]. Take two operators:

[tex]T:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow 2x+y[/tex]

and

[tex]S:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow x+2y[/tex]

these are elements of (R2)' because the maps are linear. I claim that they are linearly independent. This means that

If for all (x,y) it holds that aT(x,y)+bS(x,y)=0, then a=b=0.
So, let's assume that aT(x,y)+bS(x,y)=0 for all x and y. Translating this gives us

[tex]0=aT(x,y)+bS(x,y)=a(2x+y)+b(x+2y)=(2a+b)x+(a+2b)y[/tex]

This must hold for all x and y. So in particular for x=1 and y=0. So, if we fill that in, we get

[tex]2a+b=0[/tex]

But it must also hold for x=0 and y=1. So, filling that in, we get

[tex]a+2b=0[/tex]

So, if aT(x,y)+bS(x,y)=0 for all x and y, then certainly it must hold true that

[tex]2a+b=0~\text{and}~a+2b=0[/tex]

but this only holds for a=0 and b=0. So S and T are independent.
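The same computation can be checked numerically; a small sketch assuming NumPy is available, with T and S represented by their coefficient rows:

```python
import numpy as np

# Coefficient rows of T(x, y) = 2x + y and S(x, y) = x + 2y.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# aT + bS = 0 on all of R^2 forces the system 2a + b = 0 and a + 2b = 0,
# i.e. M.T @ (a, b) = 0. Full rank means only the trivial solution.
assert np.linalg.matrix_rank(M) == 2

a, b = np.linalg.solve(M.T, np.zeros(2))
assert a == 0.0 and b == 0.0
print("T and S are linearly independent")
```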
 
  • #7


micromass said:
Could you explain this more? What do you mean by "matching up with the right element such that it evaluates to 1"?

My understanding of the Kronecker delta δij is that it evaluates to 1 when i=j and to 0 when i≠j. So, doesn't this mean that yj(xi) must evaluate to 1 for all i=j and 0 for all i≠j for the Kronecker delta to make sense here? What guarantee is there that this will happen? In R≤3 this makes sense because any basis can be reduced to something like (1,0,0), etc. but with operations there is no reducing, there is simply the operation (e.g. integration). I think I'm just not understanding this part at all, now that I think about it some more.

micromass said:
Elements of a dual space V' are linearly independent, by definition, if for all x in V it holds that

[tex]a_1T_1(x)+...+a_nT_n(x)=0~\Rightarrow~a_1=...=a_n=0[/tex]

For example, let me look at the dual space of [itex]\mathbb{R}^2[/itex]. Take two operators:

[tex]T:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow 2x+y[/tex]

and

[tex]S:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow x+2y[/tex]

these are elements of (R2)' because the maps are linear. I claim that they are linearly independent. This means that

If for all (x,y) it holds that aT(x,y)+bS(x,y)=0, then a=b=0.
So, let's assume that aT(x,y)+bS(x,y)=0 for all x and y. Translating this gives us

[tex]0=aT(x,y)+bS(x,y)=a(2x+y)+b(x+2y)=(2a+b)x+(a+2b)y[/tex]

This must hold for all x and y. So in particular for x=1 and y=0. So, if we fill that in, we get

[tex]2a+b=0[/tex]

But it must also hold for x=0 and y=1. So, filling that in, we get

[tex]a+2b=0[/tex]

So, if aT(x,y)+bS(x,y)=0 for all x and y, then certainly it must hold true that

[tex]2a+b=0~\text{and}~a+2b=0[/tex]

but this only holds for a=0 and b=0. So S and T are independent.

Oh, I see. That makes perfect sense.
 
  • #8


Philmac said:
My understanding of the Kronecker delta δij is that it evaluates to 1 when i=j and to 0 when i≠j. So, doesn't this mean that yj(xi) must evaluate to 1 for all i=j and 0 for all i≠j for the Kronecker delta to make sense here? What guarantee is there that this will happen? In R≤3 this makes sense because any basis can be reduced to something like (1,0,0), etc. but with operations there is no reducing, there is simply the operation (e.g. integration). I think I'm just not understanding this part at all, now that I think about it some more.

What guarantee?? You define things that way. You define yi such that

[tex]y_i(e_i)=1~\text{and}~y_i(e_j)=0~\text{if}~i\neq j[/tex]

This always happens by definition!
 
  • #9


micromass said:
What guarantee?? You define things that way. You define yi such that

[tex]y_i(e_i)=1~\text{and}~y_i(e_j)=0~\text{if}~i\neq j[/tex]

This always happens by definition!

Is it always possible to do this?
 
  • #10


Yes, given a vector space and given a basis [itex]\{e_1,...,e_n\}[/itex], I can define a linear function just by specifying what it will do on the basis.

For example, the following function is linear:

[itex]T(\lambda_1 e_1+...+\lambda_n e_n)=\lambda_i[/itex]

and it will be the function such that [itex]T(e_j)=\delta_{ij}[/itex].

Can I find a linear function that sends every [itex]e_i[/itex] to 2, that is, with [itex]T(e_i)=2[/itex]? Yes!

[itex]T(\lambda_1e_1+...+\lambda_n e_n)=2(\lambda_1+...+\lambda_n)[/itex]

will be such a function. I can let the basis go to anything! That's exactly what bases are good for.
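In coordinates, this linear extension looks like the following sketch (assuming NumPy is available; the choice of sending every basis vector to 2 matches the example above):

```python
import numpy as np

E = np.eye(3)                    # standard basis e_1, e_2, e_3 of R^3
v = np.array([2.0, 2.0, 2.0])    # send every e_i to 2, as in the post

def T(x):
    """Linear extension: if x = l_1 e_1 + ... + l_n e_n, return l . v."""
    lam = np.linalg.solve(E, x)  # coordinates of x in the basis
    return lam @ v

# T is exactly T(l_1 e_1 + ... + l_n e_n) = 2(l_1 + ... + l_n).
for i in range(3):
    assert T(E[:, i]) == 2.0
assert T(np.array([1.0, 2.0, 3.0])) == 2.0 * (1 + 2 + 3)
print("T sends each basis vector to 2")
```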
 
  • #11


micromass said:
Yes, given a vector space and given a basis [itex]\{e_1,...,e_n\}[/itex], I can define a linear function just by specifying what it will do on the basis.

For example, the following function is linear:

[itex]T(\lambda_1 e_1+...+\lambda_n e_n)=\lambda_i[/itex]

and it will be the function such that [itex]T(e_j)=\delta_{ij}[/itex].

Can I find a linear function that sends every [itex]e_i[/itex] to 2, that is, with [itex]T(e_i)=2[/itex]? Yes!

[itex]T(\lambda_1e_1+...+\lambda_n e_n)=2(\lambda_1+...+\lambda_n)[/itex]

will be such a function. I can let the basis go to anything! That's exactly what bases are good for.

But how do you know that the appropriate operations will be available? And how do you know that V' is always n-dimensional? Take R1 for example, I can think of at least two linear operations: multiplication and integration. So wouldn't V' have a dimension of at least 2 (even though n=1)?
 
  • #12


Multiplication and integration are not linear functionals on R. All the linear functionals on R have the form

[tex]f:R\rightarrow R:x\rightarrow \lambda x[/tex]

for a certain [itex]\lambda[/itex].
 
  • #13


micromass said:
Multiplication and integration are not linear functionals on R. All the linear functionals on R have the form

[tex]f:R\rightarrow R:x\rightarrow \lambda x[/tex]

for a certain [itex]\lambda[/itex].

Hm, alright. Thank you. I think I'm ready for reflexivity.
 
  • #14


Philmac said:
Hm, alright. Thank you. I think I'm ready for reflexivity.
You're looking for an isomorphism from V into V''. This is very easy, because it turns out that the first function we can think of from V into V'' (except for constant functions of course) is an isomorphism. We want to define a function f:V→V'', so we must specify a member of V'' for each x. This member of V'' will of course be denoted by f(x). A member of V'' is defined by specifying what we get when it acts on an arbitrary member of V'. So we must specify f(x)(ω) for each ω in V'. f(x)(ω) is supposed to be a real number, and ω(x) is a real number. So...

For each x in V, we define f(x) in V'' by f(x)(ω)=ω(x) for all ω in V'. This defines a function f:V→V''.

Now you just need to verify that this function is linear and bijective onto V''.

By the way, I think V* is a more common notation than V'.
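The natural map is short enough to write down directly. A sketch in R^3, assuming NumPy is available and representing functionals ω in V' by coefficient rows acting as ω @ x (an illustrative representation, not the thread's notation):

```python
import numpy as np

def f(x):
    """The natural map V -> V'': f(x)(omega) = omega(x)."""
    return lambda omega: omega @ x

x = np.array([1.0, 2.0, 3.0])
omega = np.array([4.0, 0.0, -1.0])   # the functional omega(x) = 4x1 - x3

# f(x) evaluates each omega at the fixed vector x.
assert f(x)(omega) == omega @ x == 1.0

# f(x) is linear in omega, so it really is a member of V''.
omega2 = np.array([0.0, 5.0, 0.0])
assert f(x)(2 * omega + omega2) == 2 * f(x)(omega) + f(x)(omega2)
print("f(x)(omega) = omega(x); evaluation is linear in omega")
```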
 
  • #15


Fredrik said:
You're looking for an isomorphism from V into V''. This is very easy, because it turns out that the first function we can think of from V into V'' (except for constant functions of course) is an isomorphism. We want to define a function f:V→V'', so we must specify a member of V'' for each x. This member of V'' will of course be denoted by f(x). A member of V'' is defined by specifying what we get when it acts on an arbitrary member of V'. So we must specify f(x)(ω) for each ω in V'. f(x)(ω) is supposed to be a real number, and ω(x) is a real number. So...

For each x in V, we define f(x) in V'' by f(x)(ω)=ω(x) for all ω in V'. This defines a function f:V→V''.

Now you just need to verify that this function is linear and bijective onto V''.

By the way, I think V* is a more common notation than V'.

Thank you! I still don't quite understand yet, but I think I'm a bit closer. If I'm not mistaken, it seems that we define the elements of V'' such that regardless of the value of y (or omega) in V', there is a bijective map (an isomorphism, in this context, I believe) between V and V'', in other words, each value of V'' corresponds to one, and only one, value in x. And since y (or omega) doesn't matter, there is only one value of [x0, y] which corresponds to any element z0 of V''. I hope that made sense :confused:

Also, what you said seems very similar to something in my textbook. I couldn't make sense of it, but after reading what you said I think it makes slightly more sense. Perhaps you could clear some things up about these concepts. Here it is:

If we consider the symbol [x, y] for some fixed y = y0, we obtain nothing new: [x, y0] is merely another way of writing the value y0(x) of the function y0 at the vector x. If, however, we consider the symbol [x, y] for some fixed x = x0, then we observe that the function of the vectors in V', whose value at y is [x0, y], is a scalar-valued function that happens to be linear; in other words, [x0, y] defines a linear functional on V', and, consequently, an element of V''.

I understand the part about [x, y0], but I really don't follow the reasoning about [x0, y]. [x, y] is a scalar, so how does keeping the value of x constant suddenly make (what is seemingly) the exact same thing an element of V''?



Also, backing up a bit to dual bases, I'd just like to verify that I understand. I'd appreciate it if someone could let me know if this example makes sense.

V=R3
X={(1,0,0),(0,1,0),(0,0,1)} is a basis in V
V'={ax1+bx2+cx3|a,b,c[itex]\in[/itex]R}
X'={x1,x2,x3}
 
  • #16


Philmac said:
Thank you! I still don't quite understand yet, but I think I'm a bit closer. If I'm not mistaken, it seems that we define the elements of V'' such that regardless of the value of y (or omega) in V', there is a bijective map (an isomorphism, in this context, I believe) between V and V'', in other words, each value of V'' corresponds to one, and only one, value in x. And since y (or omega) doesn't matter, there is only one value of [x0, y] which corresponds to any element z0 of V''. I hope that made sense :confused:
Not entirely. The first thing that sounds weird to me is "regardless of the value of y (or omega)". The claim that V is isomorphic to V'' has nothing to do with any specific member of V,V' or V''. I guess that could be your point, but to say it this way is like saying that regardless of the value of q, we have 1+1=2. It's true, but it's weird to mention q when we could have just said that 1+1=2.

If you would like to use the [,] notation, we could say that for each x in V, there's exactly one z in V'' such that [y,z]=[x,y] for all y in V'. This z is denoted by f(x), and this defines the function f.

(If I understand the [,] notation correctly, [y,z]=[x,y] means z(y)=y(x)).

Philmac said:
Also, what you said seems very similar to something in my textbook. I couldn't make sense of it, but after reading what you said I think it makes slightly more sense.
It will make more sense after you have verified that the f I defined is an isomorphism (i.e. that it's linear and bijective).

Philmac said:
I understand the part about [x, y0], but I really don't follow the reasoning about [x0, y]. [x, y] is a scalar, so how does keeping the value of x constant suddenly make (what is seemingly) the exact same thing an element of V''?
I don't like the [,] notation, and I'm not crazy about this author's way of explaining it either. You already understand that for each x in V and each y in V', [x,y]=y(x) is a real number. The author is saying that for each y in V', the function that takes x to [x,y] is a function from V into ℝ that we already had a notation for (this function is denoted by y). Then he's saying that for each x in V, the function that takes y to [x,y] is a function from V' to ℝ. Let's denote this function by g. We have [tex]g(ay+bz) = [x,ay+bz] = (ay+bz)(x) = (ay)(x)+(bz)(x) = a(y(x))+b(z(x)) = a[x,y]+b[x,z]=ag(y)+bg(z),[/tex] so g is linear. That means that it's a member of V''.

Philmac said:
Also, backing up a bit to dual bases, I'd just like to verify that I understand. I'd appreciate it if someone could let me know if this example makes sense.

V=R3
X={(1,0,0),(0,1,0),(0,0,1)} is a basis in V
V'={ax1+bx2+cx3|a,b,c[itex]\in[/itex]R}
X'={x1,x2,x3}
You're right that if X is a basis for V=ℝ3 and X' is its dual basis, then the members of V' can be uniquely expressed as linear combinations of members of X'. However, if X' is any basis for V', then the members of V' can still be uniquely expressed as linear combinations of members of X'.

What you mentioned is a property of any basis, not just the dual basis.
 
Last edited:
  • #17


Fredrik said:
Not entirely. The first thing that sounds weird to me is "regardless of the value of y (or omega)". The claim that V is isomorphic to V'' has nothing to do with any specific member of V,V' or V''. I guess that could be your point, but to say it this way is like saying that regardless of the value of q, we have 1+1=2. It's true, but it's weird to mention q when we could have just said that 1+1=2.

If you would like to use the [,] notation, we could say that for each x in V, there's exactly one z in V'' such that [y,z]=[x,y] for all y in V'. This z is denoted by f(x), and this defines the function f.
I like your q, 1+1=2 analogy. That, in combination with what you said below, triggered a bit of an epiphany. V' is the set of functionals that take x as their argument and V'' is the set of functionals that take y as their argument. I find this notation to be slightly misleading (I'm probably just misunderstanding something) -- is there a V'''?

Fredrik said:
(If I understand the [,] notation correctly, [y,z]=[x,y] means z(y)=y(x)).
Exactly.

Fredrik said:
I don't like the [,] notation, and I'm not crazy about this author's way of explaining it either. You already understand that for each x in V and each y in V', [x,y]=y(x) is a real number. The author is saying that for each y in V', the function that takes x to [x,y] is a function from V into ℝ that we already had a notation for (this function is denoted by y). Then he's saying that for each x in V, the function that takes y to [x,y] is a function from V' to ℝ. Let's denote this function by g. We have [tex]g(ay+bz) = [x,ay+bz] = (ay+bz)(x) = (ay)(x)+(bz)(x) = a(y(x))+b(z(x)) = a[x,y]+b[x,z]=ag(y)+bg(z),[/tex] so g is linear. That means that it's a member of V''.
This makes much more sense now, thank you. However, now that I think about it again, how can there only be one zi in V'' for each xi in V? Changing the value of yj will change the value of [xi,yj], so shouldn't V'' have dimension dim V × dim V'?

Fredrik said:
You're right that if X is a basis for V=ℝ3 and X' is its dual basis, then the members of V' can be uniquely expressed as linear combinations of members of X'. However, if X' is any basis for V', then the members of V' can still be uniquely expressed as linear combinations of members of X'.

What you mentioned is a property of any basis, not just the dual basis.
Great, I guess that means I have at least some grasp on the idea :)
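As a concrete numerical sketch of the linearity computation in the quoted post (a hypothetical setup, not from the book: V = ℝ², with vectors and functionals both represented by coordinate pairs, and `apply` standing in for y(x)):

```python
# Sketch: V = R^2; a linear functional y is represented by its
# coefficients (y1, y2), acting as y(x) = y1*x1 + y2*x2.
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

x = (3.0, -2.0)  # a fixed vector in V

# g is the map y -> [x, y] = y(x); a scalar-valued function on V'.
def g(y):
    return apply(y, x)

y = (1.0, 4.0)   # members of V'
z = (2.0, -1.0)
a, b = 5.0, 7.0

# linear combination in V': ay + bz, computed coefficient-wise
ay_bz = (a * y[0] + b * z[0], a * y[1] + b * z[1])

# g(ay+bz) == a*g(y) + b*g(z), so g is linear, i.e. g is a member of V''.
print(g(ay_bz) == a * g(y) + b * g(z))  # True
```

The check succeeds for any choice of x, y, z, a, b, which is the point of the symbolic calculation above.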
 
  • #18


I only have time for a short answer right now.
Philmac said:
how can there only be one [itex]z_i[/itex] in V'' for each [itex]x_i[/itex] in V?
Suppose that there are two. To be more precise, suppose that for all y in V',
\begin{align}
z(y) &=y(x)\\
w(y) &=y(x).
\end{align} Then z=w. (They have the same domain and we have z(y)=w(y) for each y in the domain. If you think about what a function is, it should be obvious that this means that z=w).
 
  • #19


Philmac said:
V' is the set of functionals that take x as their argument and V'' is the set of functionals that take y as their argument. I find this notation to be slightly misleading (I'm probably just misunderstanding something) -- is there a V'''?
V' is the set of linear functions from V into ℝ. V'' is defined as (V')', so it's the set of linear functions from V' into ℝ. V''' is defined as (V'')' so it's the set of linear functions from V'' into ℝ. The sequence goes on forever. Each "V with lots of primes" is isomorphic to V if the number of primes is even, and isomorphic to V' if the number of primes is odd.

Actually, V is also isomorphic to V'. The difference between this isomorphism and the one between V and V'' is that to define an isomorphism between V and V', we need to use something like an inner product on V or a specific basis on V, but to define an isomorphism between V and V'', we just need to understand that members of V'' act on members of V' and that members of V' act on members of V.
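A minimal sketch of why no extra structure is needed for the map into V'' (same hypothetical ℝ² representation as before; `apply` and `f` are illustrative names, not notation from the book): f(x) is built from x alone, by "evaluate the given functional at x".

```python
# V = R^2; a functional y acts as y(x) = y[0]*x[0] + y[1]*x[1].
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

# The canonical map f: V -> V''. Note that f(x) is defined purely as
# "evaluate the given functional at x" -- no basis or inner product
# has to be chosen, which is what makes this isomorphism natural.
def f(x):
    return lambda y: apply(y, x)

x = (2.0, 5.0)
y = (-1.0, 3.0)
print(f(x)(y) == apply(y, x))  # True: f(x)(y) = y(x)
```

By contrast, representing a functional as a coefficient pair at all (as `apply` does) already required picking a basis, which is the kind of extra choice an isomorphism V → V' depends on.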

Philmac said:
However, now that I think about it again, how can there only be one [itex]z_i[/itex] in V'' for each [itex]x_i[/itex] in V? Changing the value of [itex]y_j[/itex] will change the value of [itex][x_i,y_j][/itex],
You should think about what you said here until you understand that it's like replying to the statement
For each integer n, define [itex]f_n:\mathbb R\rightarrow\mathbb R[/itex] by [itex]f_n(x)=x^n[/itex] for each [itex]x\in\mathbb R[/itex].​
by saying
How can there be only one [itex]f_n[/itex] for each n? Changing the value of x will change the value of [itex]f_n(x)[/itex].​
 
  • #20


Ohh, I get it now (I think)! The claim isn't that [itex]z_0(y)[/itex] is always the same value; the claim is simply that [itex]z_0(y)[/itex] will always be the same as [itex]y(x_0)[/itex]. Which should be obvious just by looking at the equation provided, but, oh well. I think I'm ready for annihilators now.
 
  • #21


Philmac said:
I think I'm ready for annihilators now.
I don't understand your definitions. Can you post them as they appear in the book?
 
  • #22


Fredrik said:
I don't understand your definitions. Can you post them as they appear in the book?

Here it is:
The annihilator [itex]S^0[/itex] of any subset S of a vector space V (S need not be a subspace) is the set of all vectors y in V' such that [x, y] is identically zero for all x in S.

Thus [itex]0^0[/itex] = V' and [itex]V^0[/itex] = 0 ([itex]\subset[/itex] V'). If V is finite-dimensional and S contains a non-zero vector, so that S ≠ 0, then [reference to another theorem] shows that [itex]S^0[/itex] ≠ V'.
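Here is a numerical sketch of the definition, under the hypothetical assumption V = ℝ³ with functionals represented by coefficient triples (`apply` and `in_annihilator` are made-up helper names): the annihilator of S = {(1, 0, 0)} consists of exactly the functionals whose first coefficient is zero.

```python
from itertools import product

# V = R^3; a functional y acts as the dot product of its coefficients with x.
def apply(y, x):
    return sum(yi * xi for yi, xi in zip(y, x))

S = [(1, 0, 0)]  # a subset of V (not necessarily a subspace)

def in_annihilator(y, S):
    """y is in S^0 iff [x, y] = y(x) = 0 for every x in S."""
    return all(apply(y, x) == 0 for x in S)

# Scan functionals with small integer coefficients: those in S^0 are
# exactly the ones of the form (0, b, c).
ann = [y for y in product((-1, 0, 1), repeat=3) if in_annihilator(y, S)]
print(all(y[0] == 0 for y in ann))  # True
print(len(ann))  # 9: all (0, b, c) with b, c in {-1, 0, 1}
```

With S = {0} every functional passes the test (0⁰ = V'), and with S = V only the zero functional does (V⁰ = 0), matching the two special cases in the book.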
 
  • #23


Philmac said:
The problem I have here is "By definition, [itex]M^{00}[/itex] is the set of all vectors x such that [x,y]=0 for all y in [itex]M^0[/itex]". Shouldn't it be the set of all vectors z (or whatever letter you like) in V'?
[itex]M^{00}[/itex] isn't equal to M, it's isomorphic to it. What you're supposed to prove is that [itex]f(M)=M^{00}[/itex], where [itex]f:V\rightarrow V''[/itex] is the isomorphism defined in a previous post. [itex]M^{00}[/itex] is the vector space of all z in V'' such that z(y)=0 for all y in [itex]M^0[/itex]. Now use the definition of the isomorphism f to show that x is in M if and only if f(x) is in [itex]M^{00}[/itex].
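A coordinate sketch of this identification (hypothetical example: V = ℝ², M = span{(1, 0)}; the helper names are invented): the condition "z(y) = 0 for all y in the annihilator of M" pulls back along the evaluation map to "y(x) = 0 for all y in the annihilator of M", which picks out exactly M.

```python
# V = R^2; functionals act by dot product of coefficient pairs.
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

# M = span{(1, 0)}. Its annihilator consists of functionals (0, c),
# since (a, b) kills (1, 0) iff a = 0. We keep one spanning element.
M0_span = [(0.0, 1.0)]

# f(x) lies in the double annihilator iff f(x)(y) = y(x) = 0 for
# every y in the annihilator of M.
def fx_in_M00(x):
    return all(apply(y, x) == 0 for y in M0_span)

print(fx_in_M00((3.0, 0.0)))   # True:  (3, 0) is in M
print(fx_in_M00((0.0, 1.0)))   # False: (0, 1) is not in M
```

So in this toy case the set of x with f(x) in the double annihilator is exactly M, which is the finite-dimensional statement being proved.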
 
  • #24


Fredrik said:
[itex]M^{00}[/itex] isn't equal to M, it's isomorphic to it.

Are you saying that the textbook is wrong? The symbol in the book is =, not [itex]\approx[/itex] or [itex]\cong[/itex]. (Although, equality implies isomorphism)

Fredrik said:
What you're supposed to prove is that [itex]f(M)=M^{00}[/itex], where [itex]f:V\rightarrow V''[/itex] is the isomorphism defined in a previous post. [itex]M^{00}[/itex] is the vector space of all z in V'' such that z(y)=0 for all y in [itex]M^0[/itex]. Now use the definition of the isomorphism f to show that x is in M if and only if f(x) is in [itex]M^{00}[/itex].
Not quite sure I follow here. Does this have something to do with reflexivity? That is to say, for every z in [itex]M^{00}[/itex], z(y)=0, so y(x)=0? Which means that [itex]M^{00}[/itex] is isomorphic to [itex]M^0[/itex]?
 
  • #25


Philmac said:
Are you saying that the textbook is wrong? The symbol in the book is =, not [itex]\approx[/itex] or [itex]\cong[/itex]. (Although, equality implies isomorphism)
I'm saying that the author is a bit careless with the notation/terminology. When he writes [itex]M^{00}[/itex]=M, he means [itex]M^{00}[/itex]=f(M). It's obvious from the definitions (assuming that I understood them) that [itex]M^{00}[/itex] can't be equal to M. They are subspaces of two different vector spaces.

Philmac said:
Not quite sure I follow here.
I left out the details on purpose because I think it's time that you start doing some of these things on your own instead of having us do everything for you. I actually typed the proof that f(M)=M00 and then deleted it before I submitted the post.

Philmac said:
Does this have something to do with reflexivity?
...
Which means that M00 is isomorphic to M0?
That's not what "reflexive" means.

Philmac said:
for every z in [itex]M^{00}[/itex], z(y)=0
This statement isn't OK as part of a proof. What is y?

Please try to prove that [itex]M^{00}[/itex]=f(M). Recall that two sets are equal if and only if they have the same members. This means that you can break up the proof into these two steps:

Step 1. Let z in [itex]M^{00}[/itex] be arbitrary. Show that z is in f(M), i.e. that there's an x in M such that f(x)=z.

Step 2. Let z in f(M) be arbitrary. Show that z is in [itex]M^{00}[/itex].

I also recommend that you do the exercise I suggested earlier: Prove that f is an isomorphism.
 
  • #26


I'm exceptionally terrible at proving things, but I'll try.

First, proving that f is an isomorphism:
Fredrik said:
You're looking for an isomorphism from V into V''. This is very easy, because it turns out that the first function we can think of from V into V'' (except for constant functions of course) is an isomorphism. We want to define a function f:V→V'', so we must specify a member of V'' for each x. This member of V'' will of course be denoted by f(x). A member of V'' is defined by specifying what we get when it acts on an arbitrary member of V'. So we must specify f(x)(ω) for each ω in V'. f(x)(ω) is supposed to be a real number, and ω(x) is a real number. So...

For each x in V, we define f(x) in V'' by f(x)(ω)=ω(x) for all ω in V'. This defines a function f:V→V''.

Now you just need to verify that this function is linear and bijective onto V''.

V is an n-dimensional vector space
V' is the dual space of V and is therefore n-dimensional
V''(=(V')') is the dual space of V' and is therefore n-dimensional
Since dim V = dim V'', V is isomorphic to V''.

or

x is an element of V
y is an element of V'
z is an element of V''
Define the function f:V[itex]\rightarrow[/itex]V''
This means that f(x) is an element of V'': z
Since f is a function acting on x, f is an element of V': y
So we have [itex]y(x_0)=z_0[/itex]
I believe that this implies that f is bijective, but I'm not sure how to show that it's linear. Don't we know that it's linear by definition?

Fredrik said:
Please try to prove that [itex]M^{00}[/itex]=f(M). Recall that two sets are equal if and only if they have the same members. This means that you can break up the proof into these two steps:

Step 1. Let z in [itex]M^{00}[/itex] be arbitrary. Show that z is in f(M), i.e. that there's an x in M such that f(x)=z.

Step 2. Let z in f(M) be arbitrary. Show that z is in [itex]M^{00}[/itex].
I'm really not sure at all what to do here. Don't we know that there is such a correspondence due to reflexivity?

Fredrik said:
That's not what "reflexive" means.
Sorry, I meant M, not [itex]M^0[/itex].

Fredrik said:
This statement isn't OK as part of a proof. What is y?
y is an element of V'
 
  • #27


Philmac said:
I'm exceptionally terrible at proving things, but I'll try.
That's OK. We all start out that way.

Philmac said:
V is an n-dimensional vector space
V' is the dual space of V and is therefore n-dimensional
V''(=(V')') is the dual space of V' and is therefore n-dimensional
Since dimV=dimV'', V is isomorphic to V''.
This proves that V and V'' are isomorphic, but it says nothing about whether f is an isomorphism. I was hoping you'd use the definition of f to prove that it's linear and bijective.

Philmac said:
x is an element of V
I understand that you make statements like this to explain the notation before you start the proof, but it's not necessary, because before you refer to x in the actual proof, you're going to have to do one of the following:

1. Specify the value that the variable represents. For example: Let x be the vector (1,1,0) in [itex]\mathbb R^3[/itex].
2. Say that it represents an arbitrary member of some set. "For example: Let x be an arbitrary real number".
3. Precede the statement with "for all x". For example: For all x in [itex]\mathbb R[/itex], we have [itex]x^2\geq 0[/itex].
4. Precede the statement with "there exists an x in ... such that". For example: There exists an x in [itex]\mathbb C[/itex] such that [itex]x^2[/itex]=-1.

So I suggest that you don't include statements like "x is an element of V" in any proofs.
Philmac said:
Define the function f:V[itex]\rightarrow[/itex]V''
This means that f(x) is an element of V'': z
A better way of saying that: The definition of f implies that for all x in V, f(x) is in V''.

There's no need to mention z, unless you think you can simplify the notation in the rest of the proof by saying "define z=f(x)". This only makes sense if you have specified the value of x or used the phrase "Let x in V be arbitrary" earlier.

Philmac said:
Since f is a function acting on x
I don't like phrases like this, because they suggest that it matters what variable symbols we're using. Say something like, "since the domain of f is V..." instead.

Philmac said:
f is an element of V': y
It's actually not. If the codomain of f had been ℝ, it might have been. But even then, you would have had to prove that f is linear before you can draw that conclusion. And the codomain of our f is V'', not ℝ.

Philmac said:
So we have [itex]y(x_0)=z_0[/itex]
I believe that this implies that f is bijective,
Hm, you're saying that y(x)=f(x)? That doesn't make sense. Anyway, in a proof you have to show how the things you know imply the result you want to prove.

Philmac said:
but I'm not sure how to show that it's linear.
Let a,b be arbitrary real numbers, and let x,y be arbitrary members of V. Use the definition of f to rewrite f(ax+by).

Philmac said:
Don't we know that it's linear by definition?
No. We defined [itex]f:V\rightarrow V''[/itex] by saying that for each x in V, f(x) is the member of V'' such that for all ω in V', f(x)(ω)=ω(x). Since the definition doesn't say that f is linear, we have to prove it. You also have to prove that f is bijective. I suggest that you do it in two steps:

Injectivity: Show that for all x,y in V, if f(x)=f(y), then x=y.

Surjectivity: Let z in V'' be arbitrary and show that there exists an x in V such that f(x)=z.
Philmac said:
I'm really not sure at all what to do here. Don't we know that there is such a correspondence due to reflexivity? Sorry, I meant M, not [itex]M^0[/itex].
Reflexivity ensures that there's an isomorphism from M into M'', not that there's an isomorphism from M into [itex]M^{00}[/itex].

Reflexivity also ensures that there's an isomorphism from V into V''. Let f be such an isomorphism. M is a subspace of V. That makes f(M) a subspace of V''. [itex]M^{00}[/itex] is a subspace of V''. You don't know that f(M) and [itex]M^{00}[/itex] are the same subspace of V'' until you have proved that they have the same members.
 
  • #28


Did you finish these proofs or did you give up? Just curious.
 
  • #29


Fredrik said:
Did you finish these proofs or did you give up? Just curious.

Sorry, a bunch of other issues came up and I haven't had time to properly sit down and think about it yet; I probably should have said something before. I'll finally have some time tomorrow afternoon, so I'll post what I come up with then :)
 
  • #30


Fredrik said:
You're looking for an isomorphism from V into V''. This is very easy, because it turns out that the first function we can think of from V into V'' (except for constant functions of course) is an isomorphism. We want to define a function f:V→V'', so we must specify a member of V'' for each x. This member of V'' will of course be denoted by f(x). A member of V'' is defined by specifying what we get when it acts on an arbitrary member of V'. So we must specify f(x)(ω) for each ω in V'. f(x)(ω) is supposed to be a real number, and ω(x) is a real number. So...

For each x in V, we define f(x) in V'' by f(x)(ω)=ω(x) for all ω in V'. This defines a function f:V→V''.

Now you just need to verify that this function is linear and bijective onto V''.

Fredrik said:
This proves that V and V'' are isomorphic, but it says nothing about whether f is an isomorphism. I was hoping you'd use the definition of f to prove that it's linear and bijective.
...
Before you refer to x in the actual proof, you're going to have to do one of the following:

1. Specify the value that the variable represents. For example: Let x be the vector (1,1,0) in [itex]\mathbb R^3[/itex].
2. Say that it represents an arbitrary member of some set. "For example: Let x be an arbitrary real number".
3. Precede the statement with "for all x". For example: For all x in [itex]\mathbb R[/itex], we have [itex]x^2\geq 0[/itex].
4. Precede the statement with "there exists an x in ... such that". For example: There exists an x in [itex]\mathbb C[/itex] such that [itex]x^2[/itex]=-1.
...
A better way of saying that: The definition of f implies that for all x in V, f(x) is in V''.
...
There's no need to mention z, unless you think you can simplify the notation in the rest of the proof by saying "define z=f(x)". This only makes sense if you have specified the value of x or used the phrase "Let x in V be arbitrary" earlier.
...
I don't like phrases like this, because they suggest that it matters what variable symbols we're using. Say something like, "since the domain of f is V..." instead.
...
It's actually not. If the codomain of f had been ℝ, it might have been. But even then, you would have had to prove that f is linear before you can draw that conclusion. And the codomain of our f is V'', not ℝ.
...
Hm, you're saying that y(x)=f(x)? That doesn't make sense. Anyway, in a proof you have to show how the things you know imply the result you want to prove.
...
Let a,b be arbitrary real numbers, and let x,y be arbitrary members of V. Use the definition of f to rewrite f(ax+by).
...
No. We defined [itex]f:V\rightarrow V''[/itex] by saying that for each x in V, f(x) is the member of V'' such that for all ω in V', f(x)(ω)=ω(x). Since the definition doesn't say that f is linear, we have to prove it. You also have to prove that f is bijective. I suggest that you do it in two steps:

Injectivity: Show that for all x,y in V, if f(x)=f(y), then x=y.

Surjectivity: Let z in V'' be arbitrary and show that there exists an x in V such that f(x)=z.

Define [itex]f:V\rightarrow V''[/itex] such that for each x in V, f(x) is the member of V'' such that for all y in V', f(x)(y)=y(x)

Injectivity:

The definition of f implies that for every x in V, there is an f(x) in V'' (I think this is all that is required)

or

Show that for arbitrary [itex]x_1,x_2[/itex] in V, if [itex]f(x_1)=f(x_2)[/itex], then [itex]x_1=x_2[/itex]:
If y is an arbitrary vector in V' and [itex]z_1[/itex] and [itex]z_2[/itex] are vectors in V'' such that [itex]z_1(y)=y(x_1)[/itex] and [itex]z_2(y)=y(x_2)[/itex], then if [itex]z_1=z_2[/itex], then [itex]y(x_1)=y(x_2)[/itex], which implies that [itex]x_1=x_2[/itex].
I think this makes sense, but I don't know if I'm justified in the last part where I say that [itex]y(x_1)=y(x_2)[/itex] implies [itex]x_1=x_2[/itex].

Also, doesn't showing this prove bijectivity, not just injectivity? We know that V and V'' have the same dimension and we know that if [itex]x_1=x_2[/itex] then [itex]z_1=z_2[/itex] (i.e. you cannot have [itex]x_1=x_2[/itex] and [itex]z_1\neq z_2[/itex] and vice versa). This injectivity goes both ways, doesn't it? Since V and V'' have the same dimension, doesn't this mean that V injective to V'' and V'' injective to V implies bijectivity?

Surjectivity:

Let z be an arbitrary member of V''
Since z=f(x), by the definition of f, we know that z(y)=f(x)(y)=y(x). Therefore, for every z in V'', there exists an x in V.


Linearity:

Let [itex]x_1, x_2[/itex] be arbitrary members of V
Let y be an arbitrary member of V'
If f is linear, then it must be shown that [itex]f(ax_1+bx_2)=af(x_1)+bf(x_2)[/itex]
[itex]f(ax_1+bx_2)(y)=y(ax_1+bx_2)[/itex]
and since y is linear...
[itex]y(ax_1+bx_2)=ay(x_1)+by(x_2)=af(x_1)(y)+bf(x_2)(y)[/itex]
Combining this result with the first line...
[itex]f(ax_1+bx_2)(y)=af(x_1)(y)+bf(x_2)(y)[/itex]
[itex]f(ax_1+bx_2)=af(x_1)+bf(x_2)[/itex]
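The chain of equalities in the linearity argument can be spot-checked numerically (hypothetical V = ℝ² setup; `apply` and `f` are illustrative names):

```python
# V = R^2; functionals act by dot product; f is the evaluation map.
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

def f(x):
    return lambda y: apply(y, x)

a, b = 2.0, -3.0
x1, x2 = (1.0, 4.0), (5.0, -2.0)
y = (3.0, 1.0)

# ax1 + bx2, computed coordinate-wise
lin = (a * x1[0] + b * x2[0], a * x1[1] + b * x2[1])

# f(ax1+bx2)(y) should equal a*f(x1)(y) + b*f(x2)(y) for every y.
print(f(lin)(y) == a * f(x1)(y) + b * f(x2)(y))  # True
```

A numeric check like this doesn't replace the proof, but it confirms that the symbolic manipulation is at least consistent.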


Fredrik said:
Prove that [itex]M^{00}[/itex]=f(M). Recall that two sets are equal if and only if they have the same members. This means that you can break up the proof into these two steps:

Step 1. Let z in [itex]M^{00}[/itex] be arbitrary. Show that z is in f(M), i.e. that there's an x in M such that f(x)=z.

Step 2. Let z in f(M) be arbitrary. Show that z is in [itex]M^{00}[/itex].

Fredrik said:
Reflexivity ensures that there's an isomorphism from M into M'', not that there's an isomorphism from M into [itex]M^{00}[/itex].

Reflexivity also ensures that there's an isomorphism from V into V''. Let f be such an isomorphism. M is a subspace of V. That makes f(M) a subspace of V''. [itex]M^{00}[/itex] is a subspace of V''. You don't know that f(M) and [itex]M^{00}[/itex] are the same subspace of V'' until you have proved that they have the same members.

Step 1. Let z in [itex]M^{00}[/itex] be arbitrary. Show that z is in f(M), i.e. that there's an x in M such that f(x)=z.

By the definition of [itex]M^{00}[/itex] we know that [itex]M^{00}\subseteq[/itex] M''. Since z is therefore a member of M'', there must be an x in M such that f(x) is a member of [itex]M^{00}[/itex].


Step 2. Let z in f(M) be arbitrary. Show that z is in [itex]M^{00}[/itex].

If x is an arbitrary member of M and y is an arbitrary member of [itex]M^0[/itex] (and M') then, by reflexivity:
z(y)=f(x)(y)=y(x)=0
Since z(y)=0, it follows that z is in [itex]M^{00}[/itex]



That's what I managed to come up with. I hope I got it right this time.
 
  • #31


Philmac said:
Injectivity:

The definition of f implies that for every x in V, there is an f(x) in V'' (I think this is all that is required)
This is what's required to show that f is a function. To show that it's injective, you need to show this:
Philmac said:
for arbitrary [itex]x_1,x_2[/itex] in V, if [itex]f(x_1)=f(x_2)[/itex], then [itex]x_1=x_2[/itex]:

Philmac said:
If y is an arbitrary vector in V' and [itex]z_1[/itex] and [itex]z_2[/itex] are vectors in V'' such that [itex]z_1(y)=y(x_1)[/itex] and [itex]z_2(y)=y(x_2)[/itex], then if [itex]z_1=z_2[/itex], then [itex]y(x_1)=y(x_2)[/itex], which implies that [itex]x_1=x_2[/itex].
I think this makes sense, but I don't know if I'm justified in the last part where I say that [itex]y(x_1)=y(x_2)[/itex] implies [itex]x_1=x_2[/itex].
You have the right idea, but you're expressing it in a way that makes it hard to see that. This is how I would do it.

Suppose that [itex]f(x_1)=f(x_2)[/itex]. Then for all [itex]y\in V'[/itex], we have [itex]f(x_1)(y)=f(x_2)(y)[/itex]. (The reason is this: [itex]f(x_1)[/itex] and [itex]f(x_2)[/itex] are members of V'', so they are functions from V' into ℝ. Two functions are equal if and only if they have the same domain and the same value at each point in the domain. These two functions are equal and have the domain V', so they must have the same value at each point in V'). This means that for all [itex]y\in V'[/itex], we have [itex]y(x_1)=y(x_2)[/itex]. This implies that [itex]x_1=x_2[/itex].

The last step is not obvious. It's not true that [itex]y(x_1)=y(x_2)[/itex] implies that [itex]x_1=x_2[/itex]. That would mean that y is injective, and it's certainly not true that each y in V' is injective. For example, if [itex]V=\mathbb R^2[/itex] and y is defined so that for all x, y(x) is the projection of x onto the 1 axis, then y is linear but not injective. However, the statement "for all y in V', [itex]y(x_1)=y(x_2)[/itex]" does imply that [itex]x_1=x_2[/itex]. I don't see an easy way to show it right now. I can see a hard way, because by coincidence I read about how to do this in the context of infinite dimensional normed spaces today. But I feel like there must be an easier way. We can think about that tomorrow. I don't have time to answer everything today anyway.

Philmac said:
Also, doesn't showing this prove bijectivity, not just injectivity? We know that V and V'' have the same dimension...
It doesn't. It's true that you can show that dim V=dim V'=dim V'', and (assuming that V is finite-dimensional) this means that they're all isomorphic. But this doesn't mean that the specific function f that we defined is an isomorphism, or even surjective.

I'll answer the rest tomorrow.
 
  • #32


Fredrik said:
It doesn't. It's true that you can show that dim V=dim V'=dim V'', and (assuming that V is finite-dimensional) this means that they're all isomorphic. But this doesn't mean that the specific function f that we defined is an isomorphism, or even surjective.

I don't want to confuse anybody, but I just wanted to mention that an important theorem (the alternative theorem) states that any injective linear function between finite-dimensional spaces of the same dimension is in fact an isomorphism. So our f is indeed an isomorphism! But it only follows from the alternative theorem, not from anything else.
 
  • #33


I think this will be easier if we first work through a few easy facts about basis vectors. Let [itex]\{e_i\}[/itex] be a basis for V. Let [itex]\{e^i\}[/itex] be the dual basis of [itex]\{e_i\}[/itex]. Let [itex]\{\bar e_i\}[/itex] be the dual basis of [itex]\{e^i\}[/itex]. These are bases of V, V' and V'' respectively. (Suppose that we have already proved that, and that we have also already proved that dim V=dim V'=dim V'').
\begin{align}
x &=x^ie_i\\
y &=y_ie^i\\
z &=z^i\bar e_i
\end{align}
The first thing you need to know is what you get when you have one of these things act on a basis vector [tex]y(e_i)=y_j e^j(e_i)=y_j\delta^j_i=y_i[/tex] and when a member of a dual basis acts on an arbitrary vector. [tex]
\begin{align}e^i(x) &=e^i(x^je_j)=x^je^i(e_j)=x^j\delta^i_j=x^i\\
\bar e_i(y) &=\bar e_i(y_j e^j)=y_j\bar e_i(e^j) =y_j\delta^j_i=y_i.
\end{align}[/tex] You also need to know about the relationship between [itex]\{\bar e_i\}[/itex] and [itex]\{e_i\}[/itex]. For all y in V', we have [tex]f(e_i)(y)=y(e_i)=y_i=\bar e_i(y),[/tex] so [itex]\bar e_i=f(e_i)[/itex] for all i.

Now let's return to the thing I didn't prove last night. We found that if [itex]f(x)=f(x')[/itex], then for all y in V', we have [itex]y(x)=y(x')[/itex]. We need to show that this implies that [itex]x=x'[/itex]. The easiest way to do that is to note that since the equality y(x)=y(x') holds for all y in V', we have [itex]e^i(x)=e^i(x')[/itex] for all i. This means that [itex]x^i=x'^i[/itex] for all i, and that means that x=x'.
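A numerical version of these dual-basis computations (hypothetical example, not from the thread: V = ℝ² with the non-standard basis e₁ = (2, 1), e₂ = (1, 1); the dual-basis coefficients are the rows of the inverse of the matrix whose columns are the eᵢ, computed here by hand):

```python
# V = R^2 with basis e1 = (2, 1), e2 = (1, 1).
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

e = [(2.0, 1.0), (1.0, 1.0)]

# Dual basis: rows of the inverse of the matrix whose columns are e1, e2.
# For [[2, 1], [1, 1]] (det = 1), the inverse is [[1, -1], [-1, 2]].
e_dual = [(1.0, -1.0), (-1.0, 2.0)]

# e^i(e_j) = delta^i_j
print(all(apply(e_dual[i], e[j]) == (1.0 if i == j else 0.0)
          for i in range(2) for j in range(2)))  # True

# e^i extracts the i-th component: x = 3*e1 + (-1)*e2 = (5, 2)
x = (5.0, 2.0)
print(apply(e_dual[0], x), apply(e_dual[1], x))  # 3.0 -1.0
```

This is exactly the fact [itex]e^i(x)=x^i[/itex] used in the injectivity argument: once the components agree, the vectors agree.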
Philmac said:
Surjectivity:

Let z be an arbitrary member of V''
Since z=f(x), by the definition of f,
Here you're making a statement that involves x, but you haven't said what x is. Yes, I understand that your notation is to use x for members of V, y for members of V', and z for members of V'', but even if you include that information in the proof, you still have to explain whether you mean that the equality z=f(x) holds for all x in V, or that there exists an x in V such that z=f(x). I'm guessing that you're trying to say that "the definition of f implies that there exists an x in V such that z=f(x)". The thing is, this is the statement we're trying to prove! So you haven't actually proved anything here.

Let z be an arbitrary member of V''. We need to show that there exists an x in V such that f(x)=z. By definition of f, such an x would satisfy z(y)=y(x) for all y. This means that it would satisfy [itex]z(e^i)=e^i(x)[/itex] for all i. This equality is equivalent to [itex]z^i=x^i[/itex]. So if there's an x with the desired property, it has the same components in the basis [itex]\{e_i\}[/itex] as z has in the basis [itex]\{\bar e_i\}[/itex]. This doesn't prove that f is surjective, but it tells us what we should try to do. We should try to prove that [itex]f(z^ie_i)=z[/itex]. This is the same thing as proving that for all y in V', we have [itex]f(z^ie_i)(y)=z(y)[/itex], and this is very easy if we use the results above.
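A sketch of this surjectivity construction in coordinates (hypothetical: V = ℝ² with the standard basis, so the induced bases of V' and V'' are standard too): given z's components, the candidate preimage x with the same components satisfies f(x)(y) = z(y).

```python
# V = R^2 with the standard basis; then the dual basis of V' and the
# double-dual basis of V'' are standard as well.
def apply(y, x):
    return y[0] * x[0] + y[1] * x[1]

# An arbitrary z in V'', given by its components (z^1, z^2);
# z acts on y in V' by z(y) = z^1*y_1 + z^2*y_2.
z_comp = (4.0, -7.0)
def z(y):
    return z_comp[0] * y[0] + z_comp[1] * y[1]

# Candidate preimage: x = z^i e_i has the same components as z.
x = z_comp
def f_x(y):           # f(x)(y) = y(x)
    return apply(y, x)

# f(x) agrees with z on sample functionals, so f(x) = z.
samples = [(1.0, 0.0), (0.0, 1.0), (3.0, 5.0)]
print(all(f_x(y) == z(y) for y in samples))  # True
```

Agreement on the two dual-basis functionals already forces f(x) = z, since both are linear; the third sample is just a sanity check.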

Philmac said:
Linearity:

Let x1, x2 be arbitrary members of V
Let y be an arbitrary member of V'
If f is a linear functional on V, then it must be shown that f(ax1+bx2)=af(x1)+bf(x2)
f(ax1+bx2)(y)=y(ax1+bx2)
and since y is linear...
y(ax1+bx2)=ay(x1)+by(x2)=af(x1)(y)+bf(x2)(y)
Combining this result with the first line...
f(ax1+bx2)(y)=af(x1)(y)+bf(x2)(y)
f(ax1+bx2)=af(x1)+bf(x2)
I don't see a "thumbs up" smiley, so I'll just use the "approve" smiley. :approve:
Philmac said:
Step 1. Let z in [itex]M^{00}[/itex] be arbitrary. Show that z is in f(M), i.e. that there's an x in M such that f(x)=z.

By the definition of [itex]M^{00}[/itex] we know that [itex]M^{00}\subseteq[/itex] M''. Since z is therefore a member of M'', there must be an x in M such that f(x) is a member of [itex]M^{00}[/itex].
The surjectivity of f implies that there must be an x in the domain of f such that f(x)=z, but the domain is V, not M. So how do you know that this x is in M?

Philmac said:
Step 2. Let z in f(M) be arbitrary. Show that z is in [itex]M^{00}[/itex].

If x is an arbitrary member of M and y is an arbitrary member of [itex]M^0[/itex] (and M') then, by reflexivity:
z(y)=f(x)(y)=y(x)=0
Since z(y)=0, it follows that z is in [itex]M^{00}[/itex]
Not bad, but x is not arbitrary in this calculation. It's specifically the x in M such that f(x)=z. (Apart from that, the proof is fine).

Note that what you've proved here is that [itex]f(M)\subset M^{00}[/itex]. The other part of the problem was to show that [itex]M^{00}\subset f(M)[/itex]. Together, these two "inequalities" imply that [itex]f(M)=M^{00}[/itex].
 

1. What is a dual space?

A dual space is the set of all linear functionals on a vector space. In other words, it is the space of all linear transformations from the original vector space to its underlying field (such as the real numbers).

2. What is a dual basis?

A dual basis is a set of linear functionals that form a basis for the dual space. This means that every linear functional in the dual space can be written as a linear combination of the dual basis elements.

3. What is reflexivity?

Reflexivity is a property of a vector space where the canonical map from the space into its double dual is an isomorphism. In other words, the original vector space is naturally isomorphic to the dual of its dual space.

4. What is an annihilator?

An annihilator is a subset of the dual space that contains all linear functionals that evaluate to zero on a given subspace of the original vector space. In other words, it is the set of all functionals that "annihilate" or map to zero on a specific subspace.

5. How are dual spaces, dual bases, reflexivity, and annihilators related?

Dual spaces and dual bases are closely related, with dual bases forming a basis for the dual space. Reflexivity is a property of a vector space that is related to the dual space, as it states that the original space is isomorphic to its double dual space. Annihilators are subsets of the dual space that are used to describe linear functionals that map to zero on a specific subspace of the original vector space.
