E'lir Kramer
Advanced Calculus of Several Variables, 5.6:
Two vector spaces V and W are called isomorphic if and only if there exist linear mappings S : V \to W and T : W \to V such that S \circ T and T \circ S are the identity mappings of W and V respectively. Prove that two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension.
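(Unpacking the definition in my own words — this reading is mine, not the book's — the two conditions say pointwise that

S(T(w)) = w for every w \in W, i.e. S \circ T = I_{W}, and
T(S(v)) = v for every v \in V, i.e. T \circ S = I_{V}.)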
I'm having a hard time proving this formally. I'm also having a hard time with the sufficiency of the criterion (I've put my partial attempt at that direction at the end of this post). In terms of necessity, I have this much:
It's clear that if one space, say V, has more dimensions than the other, then there are distinct vectors in V which all map to the same value in W. For instance (thinking of W as a subspace of V with an inner product, and of S as the orthogonal projection onto W), the vectors v \in V with <v,w> = 0 for all w \in W all map to 0 in W (by the positive-definiteness of the inner product). There is then no function T : W \to V that can reclaim the lost information.
Formally, for such an S : V \to W, S(v) = 0 when v is orthogonal to every vector in W.
Consider two distinct vectors v_{1} and v_{2} which are both orthogonal to every vector in W (such vectors exist when dim V > dim W).
Then, in particular, S(v_{1}) = S(v_{2}) = 0 \in W, and we have (T \circ S)(v_{1}) = T(S(v_{1})) = T(S(v_{2})) = T(0) = (T \circ S)(v_{2}).
For T \circ S to be the identity mapping of V, we would need (T \circ S)(v_{1}) = v_{1} and (T \circ S)(v_{2}) = v_{2}. But this is impossible, since (T \circ S)(v_{1}) = (T \circ S)(v_{2}) while v_{1} ≠ v_{2}.
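Trying to make this work for an arbitrary linear S (not just the projection), and without the inner product: assuming the rank–nullity theorem is available at this point in the book (I believe it is, but that's my assumption),

dim V = dim(ker S) + dim(im S) ≤ dim(ker S) + dim W,

so dim V > dim W forces dim(ker S) ≥ 1. Any nonzero v \in ker S then gives (T \circ S)(v) = T(0) = 0 ≠ v by the linearity of T, so T \circ S cannot be the identity mapping of V.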
Though I am satisfied by this, I have the feeling that I'm supposed to show it using matrix multiplication. The formalism that is eluding me hides in the phrase "the identity mappings of W and V respectively". I take it this means S \circ T = I_{W} and T \circ S = I_{V} — but what does that look like concretely?
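My best guess at the matrix formalism: choose bases for V and W, say with n = dim V and m = dim W, and let A be the m × n matrix of S and B the n × m matrix of T in those bases. Since composition of linear mappings corresponds to multiplication of their matrices, S \circ T = I_{W} should translate to AB = I_{m}, and T \circ S = I_{V} to BA = I_{n}. If that reading is right, necessity would follow from the fact that AB = I_{m} and BA = I_{n} together force m = n (taking traces, m = tr(AB) = tr(BA) = n) — but I'd like confirmation that this is the intended route.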
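Finally, my partial attempt at sufficiency, assuming dim V = dim W = n and that I may choose bases: let e_{1}, ..., e_{n} be a basis of V and f_{1}, ..., f_{n} a basis of W. Define S : V \to W by S(e_{i}) = f_{i}, extended linearly, and T : W \to V by T(f_{i}) = e_{i}, likewise extended linearly. Then (T \circ S)(e_{i}) = e_{i} for each i, and since T \circ S is linear and agrees with the identity mapping of V on a basis, T \circ S = I_{V}; symmetrically, S \circ T = I_{W}. Is that argument complete, or am I missing something?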