Mistake in Schaum's Group Theory?

SUMMARY

The discussion centers on Schaum's Outline of Group Theory, specifically Section 3.6e, which defines ##\mathrm{L}_n(V,F)## as the set of all one-to-one linear transformations of a vector space V of dimension n over a field F. Participants clarify that a linear transformation from V to V is injective if and only if it is surjective, supported by the rank-nullity theorem. This confirms that ##\mathrm{L}_n(V,F)## is indeed a subset of ##S_V##, as all injective linear transformations in finite-dimensional spaces are also onto.

PREREQUISITES
  • Understanding of linear transformations and their properties
  • Familiarity with vector spaces and dimensions
  • Knowledge of the rank-nullity theorem
  • Basic concepts of injective and surjective functions
NEXT STEPS
  • Study the rank-nullity theorem in detail
  • Learn about the general linear group, denoted as GL(n, V)
  • Explore the implications of injective and surjective mappings in linear algebra
  • Investigate the differences between finite and infinite-dimensional vector spaces
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in the properties of linear transformations and group theory will benefit from this discussion.

jstrunk
TL;DR
Text implies that every one-to-one linear transformation of a vector space of dimension n over a field F is onto
Schaum's Outline of Group Theory, Section 3.6e defines ##\mathrm{L}_n(V,F)## as the set of all one-to-one linear transformations of V,
the vector space of dimension n over the field F.

It then says "##\mathrm{L}_n(V,F) \subseteq S_V##, clearly".
(##S_V## here means the set of all one-to-one mappings of V onto V.)
This isn't clear to me at all.
By the definition given, an element of ##\mathrm{L}_n(V,F)## could potentially not be onto V.
Then it wouldn't be an element of ##S_V##.

Either all such one-to-one linear transformations have to be onto, or the author should have defined ##\mathrm{L}_n(V,F)##
as the set of all one-to-one linear transformations of V onto V, the vector space of dimension n over F.

I haven't had much luck trying to prove that all such one-to-one transformations have to be onto, so I am guessing the author made a mistake.
On the next page after this definition, the author calls ##\mathrm{L}_n(V,F)##, with composition of mappings as the operation,
the full linear group of dimension n. This doesn't seem to be standard terminology, so it's hard to find anything online to verify my suspicion.

Can anyone verify that the author made a mistake, or show me how to prove that all such one-to-one transformations have to be onto?
Thanks.
 
It is no mistake.

A linear transformation ##V\to V## on a finite-dimensional vector space ##V## is injective if and only if it is surjective.

For a proof, consider a linear map ##T: V \to V##.

Do you know the formula

##\dim V = \dim \ker T + \dim T(V)##?

If so, apply this formula and the claim follows trivially.
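
Spelling that step out: since ##T## is injective we have ##\ker T = \{0\}##, so the formula gives
$$
\dim T(V) = \dim V - \dim \ker T = n - 0 = n,
$$
and a subspace of ##V## of full dimension is ##V## itself, so ##T## is onto. Conversely, if ##T## is onto then ##\dim \ker T = \dim V - \dim T(V) = 0##, i.e. ##T## is injective.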
 
jstrunk said:
Summary: Text implies that every one-to-one linear transformation of a vector space of dimension n over a field F is onto

Schaum's Outline of Group Theory, Section 3.6e defines ##\mathrm{L}_n(V,F)## as the set of all one-to-one linear transformations of ##V##
This set is usually abbreviated by ##\operatorname{GL}(n,V)##. We have all ##F##-linear transformations ##V \longrightarrow V## which are one-to-one. I assume that this should mean injective. Now every injective linear mapping ##\varphi\, : \,U \longrightarrow V## is basically an embedding ##U \subseteq V##, and if both are of the same dimension it is necessarily surjective, too. This results e.g. from the rank-nullity theorem:
$$
\operatorname{rank}\varphi + \operatorname{null}\varphi = \operatorname{dim}U
$$
The rank is the dimension of the image of ##\varphi##, the nullity the dimension of the kernel. Now for ##U=V## we get ##\dim(\operatorname{im}(\varphi)) = n - \dim(\operatorname{ker}(\varphi)) = n-0 = n##, so that ##\varphi## is surjective, i.e. onto. It corresponds to an invertible (regular) matrix once we choose a basis for ##V##. In particular, these maps are bijective, i.e. ##\mathrm{L}_n(V,F) = \operatorname{GL}(n,V) \subseteq S_V##.
 
Thanks. This rank and kernel stuff hasn't come up yet in the book but there are a few references further in. I suspect the author just wanted to gloss over it at this point.
 
jstrunk said:
Thanks. This rank and kernel stuff hasn't come up yet in the book but there are a few references further in. I suspect the author just wanted to gloss over it at this point.
It doesn't really belong in group theory, rather in linear algebra. You can also consider it as sets: if we have an injective map from ##V## to ##V##, then every element from the left is mapped to one on the right, but no two elements have the same image. How should it be possible not to hit all points on the right then? If there was such an element on the right, then we would have one more than on the left. This is easy to see for finite fields, where ##V## is therefore also finite. It is not a very good argument for infinite fields and thus vector spaces with infinitely many elements. E.g. there are bijections from ##(0,1)## to ##\mathbb{R}##, so it doesn't work very well with infinite sets. But here we have linearity to save the day:

Let ##\varphi\, : \,V \longrightarrow V## be one to one, i.e. injective and ##\{\,v_1,\ldots,v_n\,\}## a basis of ##V##. Then ##\{\,\varphi(v_1),\ldots,\varphi(v_n)\,\}## are also linearly independent and for dimensional reasons a basis. This is an easy exercise. You need that ##\varphi## is injective. But with such a basis of image vectors, we have the entire vector space as image.
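
Spelling out that exercise (it uses exactly the injectivity of ##\varphi##):
$$
\sum_i a_i\varphi(v_i)=0 \;\Rightarrow\; \varphi\Big(\sum_i a_i v_i\Big)=0 \;\Rightarrow\; \sum_i a_i v_i=0 \;\Rightarrow\; a_1=\dots=a_n=0,
$$
where the second implication uses ##\ker\varphi=\{0\}## and the third uses that the ##v_i## form a basis. So ##\{\varphi(v_1),\ldots,\varphi(v_n)\}## are ##n## linearly independent vectors in the ##n##-dimensional space ##V## and therefore span it.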
 
Maybe this could help: Define ##f: V_1 \rightarrow V_2##, ##f(v)=w##. Then ##f^{-1}(w)=v## (by 1-1-ness), and if there were a ##w'## with ##f(v') \neq w'## for all ##v'## in ##V_1##, then ##f^{-1}## would not be defined at that ##w'## in ##V_2##. Maybe a bit clunky. More informally, if f is 1-1, f(V) is a copy of V, a subspace (of full dimension) of V.
 
WWGD said:
Maybe this could help: Define ##f: V_1 \rightarrow V_2##, ##f(v)=w##. Then ##f^{-1}(w)=v## (by 1-1-ness), and if there were a ##w'## with ##f(v') \neq w'## for all ##v'## in ##V_1##, then ##f^{-1}## would not be defined at that ##w'## in ##V_2##. Maybe a bit clunky. More informally, if f is 1-1, f(V) is a copy of V, a subspace (of full dimension) of V.
One has to use an additional structure like linearity, because counting doesn't work very well on infinite sets.
 
fresh_42 said:
One has to use an additional structure like linearity, because counting doesn't work very well on infinite sets.
I am just using 1-1-ness and the fact that f is defined on the whole of V.
 
WWGD said:
I am just using 1-1-ness and the fact that f is defined on the whole of V.
Yes, but this would also be true for maps between ##(0,1)## and ##\mathbb{R}-\{\,0\,\}##, and those may or may not be bijective.
 
  • #10
fresh_42 said:
Yes, but this would also be true for maps between ##(0,1)## and ##\mathbb{R}-\{\,0\,\}##, and those may or may not be bijective.
You mean injective self-maps in it that are not surjective?
 
  • #11
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##; we can even make the embedding surjective, or not. To conclude from an embedding to a bijection we need either finiteness or additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
 
  • #12
fresh_42 said:
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##; we can even make the embedding surjective, or not. To conclude from an embedding to a bijection we need either finiteness or additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
Ok, I am also using the fact that this is a map from a space to itself. So you would need to give me, e.g., a self-injection of either that is not a surjection.
 
  • #13
WWGD said:
Ok, I am also using the fact that this is a map from a space to itself. So you would need to give me, e.g., a self-injection of either that is not a surjection.
##(0,1) \stackrel{\operatorname{id}}{\hookrightarrow} \mathbb{R}-\{\,0\,\}\stackrel{\varphi}{\rightarrowtail} (-\frac{1}{4},\frac{1}{4}) -\{\,0\,\} \stackrel{+\frac{1}{3}}{\hookrightarrow} (0,1)##
where I only need a bijection ##\varphi## between the real line and an open interval, both without zero to make it easier.
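A concrete choice for ##\varphi## (one of many possible ones) would be
$$
\varphi(x)=\frac{x}{4(1+|x|)},
$$
which maps ##\mathbb{R}## bijectively onto ##(-\tfrac{1}{4},\tfrac{1}{4})## with ##\varphi(0)=0##, so its restriction to ##\mathbb{R}-\{\,0\,\}## is a bijection onto ##(-\tfrac{1}{4},\tfrac{1}{4})-\{\,0\,\}##. The composite of the three maps is then an injection of ##(0,1)## into itself which is clearly not onto.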
 
  • #14
fresh_42 said:
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##; we can even make the embedding surjective, or not. To conclude from an embedding to a bijection we need either finiteness or additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
One-one guarantees a one-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.
 
  • #15
WWGD said:
One-one guarantees a one-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.
My function is injective - whatever one-to-one is supposed to mean. I read 1:1 as bijection but have been told that it means only an injection. So the example I gave is an injection of ##(0,1)## into itself which is not onto, that is, not surjective.
 
  • #16
WWGD said:
One-one guarantees a one-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.

If by 1-1 you mean injection, this is false. Consider ##\mathbb{N} \to \mathbb{N}: n \mapsto n+1##. This is an injection which is not surjective.
 
  • #17
Math_QED said:
If by 1-1 you mean injection, this is false. Consider ##\mathbb{N} \to \mathbb{N}: n \mapsto n+1##. This is an injection which is not surjective.
How is it not surjective? ##n-1 \mapsto (n-1)+1=n##. Give me any natural number; its predecessor will hit it. I am not sure my statement/claim is true, but I think it is.
 
  • #18
WWGD said:
How is it not surjective? ##n-1 \mapsto (n-1)+1=n##. Give me any natural number; its predecessor will hit it. I am not sure my statement/claim is true, but I think it is.

Take the smallest natural number (either ##0## or ##1## depending on the convention you are using). Give me a number that hits it :).
 
  • #19
Math_QED said:
Take the smallest natural number (either ##0## or ##1## depending on the convention you are using). Give me a number that hits it :).
Ah, ok, I was thinking of the integers. Good counter.
 
  • #20
My bad, I was being hard-headed because I could not off-hand think of general counters. There are plenty: the real exponential on the reals, the complex exponential on the complexes, etc. They may even be more numerous by some account than self-injections that are surjections. Will think things through more carefully. I knew this condition was unique to linear maps, but just my hard-headedness in not coming up with counters, and, worse, I did not even spend much time thinking either. Arrogance.
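
Indeed, for the real case one can check directly that the exponential map
$$
\exp:\mathbb{R}\to\mathbb{R},\qquad x\mapsto e^x,
$$
is injective (it is strictly increasing) but not surjective, since its image is only ##(0,\infty)##.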
 
  • #21
WWGD said:
My bad, I was being hard-headed because I could not off-hand think of general counters. There are plenty: the real exponential on the reals, the complex exponential on the complexes, etc. They may even be more numerous by some account than self-injections that are surjections. Will think things through more carefully. I knew this condition was unique to linear maps, but just my hard-headedness in not coming up with counters, and, worse, I did not even spend much time thinking either. Arrogance.

It is not arrogance! It is your mind trying to generalise a concept that's true in many situations (e.g. finite sets, linear maps between finite-dimensional vector spaces,...). It happens mostly in situations where we have developed a certain sense of intuition and in the general context it breaks down.

For example, here is something I believed to be true until recently: if we have two isomorphic groups ##G_1,G_2## with isomorphic normal subgroups ##H_1, H_2##, we could be tempted to believe that the quotient groups ##G_1/H_1## and ##G_2/H_2## are isomorphic. It isn't true, however. The counterexample to a statement often helps to see why intuition breaks down. Here it is because it also matters in what way we embed the subgroup in the other group, and the conditions given don't control that.

So TL;DR: it happens to everyone at some point, and it is not arrogance.
 
  • #22
Math_QED said:
If we have two isomorphic groups ##G_1,G_2## with isomorphic normal subgroups ##H_1, H_2##, we could be tempted to believe that the quotient groups ##G_1/H_1## and ##G_2/H_2## are isomorphic. It isn't true, however.
Why is that? What is a counterexample? A quick look at the 4 Lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. Not sure whether there is also an epimorphism, but the two inclusions are a strong condition.
 
  • #23
Maybe one way of doing the proof for finite-dimensional vector spaces is to use the fact that, given an ordered basis ##\{v_1, v_2,\ldots,v_n\}##, there is an isomorphism between ##\mathrm{L}(V,V)## and ##\mathrm{Mat}_{n\times n}##. And then you can show that a square matrix can only have two-sided inverses, meaning it has to describe a map that is 1-1 and onto.

##AB=I \;\Rightarrow\; B=A^{-1} \;\Rightarrow\; BA =A^{-1}A=I##

So ##B## is both a right- and a left-inverse, and we have a two-sided inverse.
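
One way to justify the first step, ##AB=I \Rightarrow B=A^{-1}##, without assuming in advance that ##A## is invertible (a sketch using determinants, which isn't spelled out above):
$$
\det(A)\det(B)=\det(AB)=\det(I)=1 \;\Rightarrow\; \det(A)\neq 0 \;\Rightarrow\; A^{-1} \text{ exists},\qquad B = A^{-1}(AB) = A^{-1}.
$$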
 
  • #24
fresh_42 said:
Why is that? What is a counterexample? A quick look at the 4 Lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. Not sure whether there is also an epimorphism, but the two inclusions are a strong condition.
I wonder if there is something similar in topology with subspaces ##S_1, S_2## which are, as stand-alone spaces, homeomorphic and both embed in a third space X. Is it the case that ##X/S_1 \simeq X/S_2##, at least up to homotopy (or something else)?

I would think no, because we could, e.g., start with a torus as X, and consider ##S_1, S_2## as loops/circles going around X, longitudinally and latitudinally. So if we do ##X/S_1## (shrink to a point/collapse/quotient out) we pinch the torus, but nothing happens when we do ##X/S_2##. It seems we need additional conditions. Maybe if both ##S_1, S_2## are contractible in X, then there is the result that ##X/C \simeq X## when C is contractible in X. Hope this is not getting too far off-topic.
 
  • #25
WWGD said:
I wonder if there is something similar in topology with subspaces ##S_1, S_2## which are, as stand-alone spaces, homeomorphic and both embed in a third space X. Is it the case that ##X/S_1 \simeq X/S_2##, at least up to homotopy (or something else)?

I would think no, because we could, e.g., start with a torus as X, and consider ##S_1, S_2## as loops/circles going around X, longitudinally and latitudinally. So if we do ##X/S_1## (shrink to a point/collapse/quotient out) we pinch the torus, but nothing happens when we do ##X/S_2##. It seems we need additional conditions. Maybe if both ##S_1, S_2## are contractible in X, then there is the result that ##X/C \simeq X## when C is contractible in X. Hope this is not getting too far off-topic.
Well the 4-,5- and 9-Lemmata should work in ##\mathbf{Top}## as well, but I'm still not convinced. The short 5 Lemma works from the outside in, and we are going from the left side to the right, so it could be possible. But I'd like to see a counterexample, and whether my observation of two inclusions was correct.
 
  • #26
fresh_42 said:
Well the 4-,5- and 9-Lemmata should work in ##\mathbf{Top}## as well, but I'm still not convinced.
?? I've never heard of those names. Can you give a reference, please?
Do you agree with my argument using matrices?
 
  • #27
WWGD said:
?? I've never heard of those names. Can you give a reference, please?
Do you agree with my argument using matrices?
There is no problem with a finite basis and linearity. I only said that the element count doesn't work for infinite sets. The interesting case would be uncountable dimension: can we embed such a vector space in itself without being surjective? Or will Hamel bases, i.e. AC, save the day?
 
  • #29
fresh_42 said:
There is no problem with a finite basis and linearity. I only said that the element count doesn't work for infinite sets. The interesting case would be uncountable dimension: can we embed such a vector space in itself without being surjective? Or will Hamel bases, i.e. AC, save the day?
I never thought cardinality/count was enough; that wasn't part of my argument. I thought a key issue was that we are mapping a space to itself. And that matrices cannot be invertible on only one side, unlike some functions. EDIT: I guess re the quotients ##X/S_1, X/S_2##, we need some condition on maps passing to the quotient. Thanks for the 5-lemma ref. I will see how to fit it in.
 
  • #30
fresh_42 said:
Why is that? What is a counterexample? A quick look at the 4 Lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. Not sure whether there is also an epimorphism, but the two inclusions are a strong condition.

Exactly the point I was trying to make! It seems so true! Take ##G = Z_4 \times Z_2## and consider the cyclic subgroups generated by ##(2,0)## and ##(0,1)##. They are both cyclic of order 2 and hence isomorphic. Quotienting them out gives the two different groups of order ##4##.
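
Checking this explicitly (a quick verification of the counterexample):
$$
(Z_4\times Z_2)/\langle(2,0)\rangle \;\cong\; Z_2\times Z_2, \qquad (Z_4\times Z_2)/\langle(0,1)\rangle \;\cong\; Z_4,
$$
since in the first quotient ##2(a,b)=(2a,0)\in\langle(2,0)\rangle## for every ##(a,b)##, so every element has order at most ##2##, while in the second quotient the coset of ##(1,0)## has order ##4##.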
 
