Isomorphism between a linear space and its dual

In summary, the conversation discusses a theorem about a finite dimensional vector space and its dual. The theorem states that the map from the space to its dual induced by the inner product (written in the mathematicians' convention, linear in the first argument) is bijective if and only if the inner product is nondegenerate, i.e., ## (y|x) = 0 ## for all ## x ## implies ## y = 0 ##. The discussion focuses on the "if" direction: the original poster proves injectivity but gets stuck on surjectivity and traces the obstruction to the choice of convention. One participant points out that the map should send ## x ## to ## (\cdot|x) ## rather than ## (x|\cdot) ##, and another notes that surjectivity then follows automatically because ## X ## and ## X^* ## have the same finite dimension. The conversation ends with the original poster recording the corrected definition as a margin note.
  • #1
Geofleur
Science Advisor
Gold Member
I have been trying to prove the following theorem, for a finite dimensional vector space ## X ## and its dual ## X^* ##:

Let ## f:X\rightarrow X^* ## be given by ## f(x) = (x|\cdot) ##, where ## (x|\cdot) ## is linear in the first argument and conjugate linear in the second (so I am using the mathematicians' convention here). Then ## f ## is bijective if and only if [## (y|x)=0 ## for all ## x \in X ## implies that ## y = 0 ##].

I have been able to show the "only if" part - that ## f ## being bijective implies the statement about the inner product. For the "if" part of the proof (where we assume that [## (y|x)=0 ## for every ## x \in X ## implies ## y = 0 ##] and prove that ## f ## is bijective), I am able to prove that ## f ## is injective, but I am running into a problem in proving that ## f ## is surjective. Here is my attempt:

Let ##x^*## be an arbitrary linear functional in ## X^*##. Because the space ## X ## is finite dimensional, we may choose an orthonormal basis ## \{e_i\}_{i=1}^n ## and write an arbitrary vector in ## X ## as ## x = \sum_{i=1}^{n}x_i e_i ##. Then we have ## x^*(x) = x^*(\sum_{i=1}^{n} x_i e_i) = \sum_{i=1}^{n}x_i x^*(e_i) ##.

Now we need to find a ## y \in X ## such that ## f(y) = x^*##, i.e., such that ## (y|x) = x^*(x) ## for all ## x \in X ##. For any ## y \in X ## we can write ## y = \sum_{j=1}^{n} y_j e_j ##, so maybe choosing the ## y_j ## appropriately will do the job. We have ## (y|x) = (\sum_{j=1}^{n} y_j e_j | \sum_{i=1}^{n} x_i e_i) = \sum_{j} \sum_{i} y_j \overline{x}_i (e_j|e_i) = \sum_{i} y_i \overline{x}_i ##.

The problem is that I want this to equal ## x^*(x) = \sum_{i=1}^{n}x_i x^*(e_i) ##, but the former expression involves ## \overline{x}_i ## while the latter involves ## x_i ##. If I had used the physicists' convention and made the inner product conjugate linear in the first argument instead, this problem would not have occurred. But it seems strange that a theorem like this would be dependent upon a convention like that. Any insights?
 
  • #2
Isn't the problem that in ##(y|x) = x^*(x)##, the LHS is conjugate linear in ##x##, while the RHS is linear in ##x##?

In other words, shouldn't f be defined as ##f(x) = (\cdot|x)##?
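A sketch of how the surjectivity computation from post #1 goes through with this corrected definition (the explicit choice of coefficients below is filled in here, not spelled out in the thread): given ## x^* \in X^* ##, set

$$ y = \sum_{j=1}^{n} \overline{x^*(e_j)}\, e_j. $$

Then, using the orthonormal basis as before,

$$ (x|y) = \sum_{i=1}^{n} x_i\, \overline{y_i} = \sum_{i=1}^{n} x_i\, x^*(e_i) = x^*(x) \quad \text{for all } x \in X, $$

so ## f(y) = x^* ##. Note that ## x \mapsto (\cdot|x) ## is conjugate linear rather than linear in ## x ##, but it is still a bijection onto ## X^* ##.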
 
  • #3
Another way is to note that ##X## and ##X^*## have the same dimension.
 
  • #4
@Samy_A: That makes sense - it means I would need to make a correction in the book I am reading, though (Analysis, Manifolds, and Physics, Choquet-Bruhat).

@micromass: Do you mean I was wrongfully assuming that ## X ## and ## X^* ## have the same dimension? I thought that all I had assumed is that the action of ## x^* ## is completely determined by its effect on the basis vectors of ## X ##. Did I sneak something else in without realizing it?
 
  • #5
Geofleur said:
@micromass: Do you mean I was wrongfully assuming that ## X ## and ## X^* ## have the same dimension? I thought that all I had assumed is that the action of ## x^* ## is completely determined by its effect on the basis vectors of ## X ##. Did I sneak something else in without realizing it?

No, it's correct. In finite dimensions, ##X## and ##X^*## have the same dimension. But once you know that, the proof is done. See the alternative theorem. This says that a linear map between two spaces of the same finite dimension is an isomorphism iff it is surjective iff it is injective.
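For reference, the step being invoked is a consequence of the rank-nullity theorem (the statement below is standard, not quoted from the thread): for a linear map ## T: V \rightarrow W ## between finite dimensional spaces,

$$ \dim \ker T + \dim \operatorname{im} T = \dim V. $$

If ## T ## is injective then ## \dim \ker T = 0 ##, so ## \dim \operatorname{im} T = \dim V = \dim W ## and ## T ## is surjective; if ## T ## is surjective, the same identity forces ## \dim \ker T = 0 ##. The same argument applies to conjugate linear maps such as ## x \mapsto (\cdot|x) ##, since their kernels and images are still subspaces.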
 
  • #6
Ah - I understand! Thanks :-D
 
  • #7
If you're interested in quantum mechanics or functional analysis, you probably will want to think about how much this result generalizes to infinite dimensions...
 
  • #8
Geofleur said:
Let ## f:X\rightarrow X^* ## be given by ## f(x) = (x|\cdot) ##, where ## (x|\cdot) ## is linear in the first argument and conjugate linear in the second (so I am using the mathematicians' convention here). Then ## f ## is bijective if and only if [## (y|x)=0 ## for all ## x \in X ## implies that ## y = 0 ##]. [...]
Are you assuming a default inner product is defined?
 
  • #9
WWGD said:
Are you assuming a default inner product is defined?

I think so? I am defining a sesquilinear mapping ## X \times X \rightarrow \mathbb{C} ##, ## (x,y) \mapsto (x|y) ##, where

## (x|y) = \overline{(y|x)} ##, and
## (\alpha x + \beta y|z) = \alpha(x|z) + \beta(y|z) ##.

I ended up just writing a note in the margin pointing out that ## (x|\cdot) \in X^* ## if one uses the physics convention; otherwise, we have ## (\cdot|x) \in X^* ## as Samy_A suggested.
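For reference, the margin note can be stated compactly (same content as above, just collected in one line):

$$ \text{mathematicians' convention: } (\alpha x \mid y) = \alpha (x \mid y) \;\Rightarrow\; (\cdot \mid x) \in X^*, \qquad \text{physicists' convention: } (x \mid \alpha y) = \alpha (x \mid y) \;\Rightarrow\; (x \mid \cdot) \in X^*. $$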
 

What is meant by "isomorphism between a linear space and its dual"?

Isomorphism between a linear space and its dual refers to a bijective linear (or, for complex inner product spaces, conjugate linear) map between a vector space and its dual space. This means that for every vector in the original space there is a unique linear functional (or "dual vector") in the dual space, and vice versa, in a way that respects sums and scalar multiples.

Why is it important to study isomorphism between a linear space and its dual?

Understanding isomorphism between a linear space and its dual is important because it allows us to easily switch between the vector space and its dual, and perform calculations in either space. This is particularly useful in areas such as linear algebra, functional analysis, and optimization.

How do you determine if two vector spaces are isomorphic?

Two vector spaces are isomorphic if there exists a linear transformation between them that is both one-to-one and onto. For finite dimensional spaces over the same field, this happens exactly when the two spaces have the same dimension, since a bijective linear map carries a basis of one space to a basis of the other.
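A small illustrative example (the specific spaces are chosen here for concreteness, not taken from the thread): the map

$$ T : \mathbb{R}^2 \rightarrow P_1, \qquad T(a,b) = a + b\,t, $$

where ## P_1 ## is the space of real polynomials of degree at most one, is linear, one-to-one, and onto, so ## \mathbb{R}^2 \cong P_1 ##; both spaces have dimension two.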

Can a vector space be isomorphic to its dual space?

Yes, a vector space can be isomorphic to its dual space. In fact, every finite dimensional inner product space is isomorphic to its dual via a map of the kind discussed in this thread. For example, ##\mathbb{R}^n## with the standard dot product is isomorphic to its dual by sending each vector ## y ## to the functional ## x \mapsto x \cdot y ##.

What is the relationship between a basis for a vector space and its dual space?

A basis for a vector space determines a dual basis for its dual space: for every basis vector in the original space there is a corresponding dual basis vector that picks out the coefficient of that basis vector. The dual basis can be used to construct an explicit (basis dependent) isomorphism between the two spaces.
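As a sketch in the notation used earlier in the thread (the formulas themselves are standard): given a basis ## \{e_i\}_{i=1}^n ## of ## X ##, the dual basis ## \{e^i\}_{i=1}^n \subset X^* ## is defined by

$$ e^i(e_j) = \delta^i_j, $$

and every functional expands as

$$ x^* = \sum_{i=1}^{n} x^*(e_i)\, e^i, $$

since both sides agree on each basis vector ## e_j ##.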
