Isomorphic vector spaces have the same dimension

  • Thread starter: E'lir Kramer
  • Tags: Dimensions
E'lir Kramer
Advanced Calculus of Several Variables, 5.6:

Two vector spaces V and W are called isomorphic if and only if there exist linear mappings S : V \to W and T : W \to V such that S \circ T and T \circ S are the identity mappings of W and V respectively. Prove that two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.

I'm having a hard time proving the statement formally. I'm also having a hard time proving sufficiency of the criterion. In terms of necessity, I have this much:

It's clear that if one space, say V, has more dimensions than the other, then there are distinct vectors in V which all map to the same vector in W. For instance, the vectors v \in V with \langle v, w \rangle = 0 for all w \in W all map to 0 in W (by the positivity property of inner products). There is then no function W \to V that can reclaim the lost information.

Formally, for all S : V \to W, S(v) = 0 when v is orthogonal to every vector in W.

Consider two distinct vectors v_{1} and v_{2} which are both orthogonal to every vector in W.

Then, for example, S(v_{1}) = S(v_{2}) = 0 \in W, and we have (T \circ S)(v_{1}) = T(S(v_{1})) = T(S(v_{2})) = T(0).

In order for the isomorphism to exist, we would need (T \circ S)(v_{1}) = v_{1} and (T \circ S)(v_{2}) = v_{2}. But this is impossible, since (T \circ S)(v_{2}) = (T \circ S)(v_{1}) and v_{1} \neq v_{2}.

Though I am satisfied by this, I have the feeling that I'm supposed to show this using matrix multiplication. The formalism that is eluding me is hiding in the phrase "the identity mappings of W and V respectively". What does that mean? Does it mean S \circ T = I_{W}?
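Spelling out my best guess at what that condition says, writing I_{V} : V \to V and I_{W} : W \to W for the identity maps:

\[
(S \circ T)(w) = w \text{ for all } w \in W, \text{ i.e. } S \circ T = I_{W},
\]
\[
(T \circ S)(v) = v \text{ for all } v \in V, \text{ i.e. } T \circ S = I_{V}.
\]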
 
Are you allowed to assume a given inner product on the spaces? That does not appear in the statement.

I would think that the simplest way to do this would be to show that if \{v_1, v_2, ..., v_n\} is a basis for V then \{S(v_1), S(v_2), ..., S(v_n)\} is a basis for W.
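For the converse, a sketch: if dim V = dim W = n, fix bases \{v_1, ..., v_n\} of V and \{w_1, ..., w_n\} of W, and define S and T on the bases, extending by linearity:

\[
S\left(\sum_{i=1}^{n} a_i v_i\right) = \sum_{i=1}^{n} a_i w_i, \qquad T\left(\sum_{i=1}^{n} a_i w_i\right) = \sum_{i=1}^{n} a_i v_i.
\]

Then S \circ T and T \circ S fix every basis vector, so they are the identity mappings of W and V.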
 
Earlier in the chapter, a theorem is given:

"Let L : V \to W be linear, with V being n-dimensional. If Ker L = 0, then L is one-to-one, and I am L is an n-dimensional subspace of W."

I had a hard time with the proof of this theorem, and the exact place where I had the difficulty is relevant to this proof.
The author writes that

"To show that the subspace I am L is n-dimensional, start with a basis v_{1}, ... v_{n} for V. Since it is clear, (by the linearly of L) that the vectors L(v_{1}), ..., L(v_{n}) generate a basis for I am L, it suffices to prove that they are linearly independent..."

To me, it isn't clear at all how the linearity of L makes L(v_{1}), ..., L(v_{n}) generate Im L. Well, intuitively, I'm not surprised. It seems pretty clear that any linear transformation carries an orthonormal basis to a set that generates its image, but I can't provide a formalism for that. And moreover, I can't see how *any* basis does so, especially not formally. One of my goals in this reading is to get better at providing formal arguments for all of my proofs. I actually spent a few minutes trying to convince myself of this the first time I read the proof, but was never able to do so. And now I've just spent another hour trying again. Could you explain it to me?
 
E'lir Kramer said:
Earlier in the chapter, a theorem is given:

"Let L : V \to W be linear, with V being n-dimensional. If Ker L = 0, then L is one-to-one, and Im L is an n-dimensional subspace of W."

I had a hard time with the proof of this theorem, and the exact place where I had the difficulty is relevant to this proof.
The author writes that

"To show that the subspace Im L is n-dimensional, start with a basis v_{1}, ..., v_{n} for V. Since it is clear (by the linearity of L) that the vectors L(v_{1}), ..., L(v_{n}) generate Im L, it suffices to prove that they are linearly independent..."

To me, it isn't clear at all how the linearity of L makes L(v_{1}), ..., L(v_{n}) generate Im L. Well, intuitively, I'm not surprised. It seems pretty clear that any linear transformation carries an orthonormal basis to a set that generates its image, but I can't provide a formalism for that. And moreover, I can't see how *any* basis does so, especially not formally. One of my goals in this reading is to get better at providing formal arguments for all of my proofs. I actually spent a few minutes trying to convince myself of this the first time I read the proof, but was never able to do so. And now I've just spent another hour trying again. Could you explain it to me?

Any element of Im L can be written as L(v) for some v in V. Since v1,v2,...,vn is a basis of V, v can be written as a1*v1+a2*v2+...+an*vn. If you work out L(a1*v1+a2*v2+...+an*vn) you should be able to show L(v1),L(v2),...,L(vn) must span Im L. Next, to finish showing it's a basis you need to show L(v1),L(v2),...,L(vn) are also linearly independent. Use the definition of linear independence. It will be easiest to show that if you assume they are linearly dependent then that would contradict Ker L={0}. Try it!
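For instance, the span half written out, using linearity:

\[
L(v) = L(a_1 v_1 + \cdots + a_n v_n) = a_1 L(v_1) + \cdots + a_n L(v_n),
\]

so every element of Im L is a linear combination of L(v_1), ..., L(v_n). The independence half is the part to try.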
 
Thanks Dick, your remark that "Any element of Im L can be written as L(v) for some v in V" was what I was missing to prove that L(v_{1}), ..., L(v_{n}) spans Im L if v_{1}, ..., v_{n} spans V. Because any vector v in V can be written as a_{1}v_{1} + ... + a_{n}v_{n}, any vector in Im L can be written as L(a_{1}v_{1} + ... + a_{n}v_{n}). By linearity of L, then, any vector in Im L can be written as a_{1}L(v_{1}) + ... + a_{n}L(v_{n}), proving that L(v_{1}), ..., L(v_{n}) spans Im L.

Now if we have that v_{1}, ..., v_{n} are a basis and Ker L = 0, it's easy to prove that L(v_{1}), ..., L(v_{n}) are also linearly independent and thus constitute a basis. But do I even have to do this for 5.6? I've shown that dim V = dim W is necessary in my first post and sufficient in this one. Why is showing they form a basis necessary?

PS: Ivy, you are correct in pointing out that an inner product is not defined in this problem statement, but we have that dim Ker L = dim V - dim Im L, so we know without an inner product that there are nonzero vectors of V mapping to 0 whenever dim V - dim Im L > 0, and the proof is easily mended.
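Concretely, one way to write the mended necessity argument: if dim V > dim W, then for any linear S : V \to W,

\[
\dim \text{Ker } S = \dim V - \dim \text{Im } S \geq \dim V - \dim W > 0,
\]

so there is a nonzero u \in Ker S, and then (T \circ S)(u) = T(0) = 0 \neq u, so T \circ S cannot be the identity mapping of V.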
 
E'lir Kramer said:
Thanks Dick, your remark that "Any element of Im L can be written as L(v) for some v in V" was what I was missing to prove that L(v_{1}), ..., L(v_{n}) spans Im L if v_{1}, ..., v_{n} spans V. Because any vector v in V can be written as a_{1}v_{1} + ... + a_{n}v_{n}, any vector in Im L can be written as L(a_{1}v_{1} + ... + a_{n}v_{n}). By linearity of L, then, any vector in Im L can be written as a_{1}L(v_{1}) + ... + a_{n}L(v_{n}), proving that L(v_{1}), ..., L(v_{n}) spans Im L.

Now if we have that v_{1}, ..., v_{n} are a basis and Ker L = 0, it's easy to prove that L(v_{1}), ..., L(v_{n}) are also linearly independent and thus constitute a basis. But do I even have to do this for 5.6? I've shown that dim V = dim W is necessary in my first post and sufficient in this one. Why is showing they form a basis necessary?

PS: Ivy, you are correct in pointing out that an inner product is not defined in this problem statement, but we have that dim Ker L = dim V - dim Im L, so we know without an inner product that there are nonzero vectors of V mapping to 0 whenever dim V - dim Im L > 0, and the proof is easily mended.

To show 5.6 using this theorem you don't have to repeat the whole thing. If you can show that T \circ S and S \circ T being the identity means S and T are one-to-one, then it's easy from there.
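For instance, one-to-one for S, assuming T \circ S = I_{V}:

\[
S(v_1) = S(v_2) \implies v_1 = (T \circ S)(v_1) = T(S(v_1)) = T(S(v_2)) = (T \circ S)(v_2) = v_2.
\]

The same argument with S \circ T = I_{W} shows T is one-to-one.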
 