Mistake in Schaum's Group Theory?

In summary: an injective linear transformation of a finite-dimensional vector space into itself is automatically surjective, by the rank-nullity theorem, so the book's definition is not a mistake.
  • #1
jstrunk
TL;DR Summary
The text implies that every one-to-one linear transformation of an n-dimensional vector space over a field F is onto
Schaum's Outline of Group Theory, Section 3.6e, defines [itex]{{\rm{L}}_n}\left( {V,F} \right)[/itex] as the set of all one-to-one linear transformations of V,
the vector space of dimension n over the field F.

It then says "[itex]{{\rm{L}}_n}\left( {V,F} \right) \subseteq {S_V}[/itex], clearly".
([itex]{S_V}[/itex] here means the set of all one-to-one mappings of V onto V.)
This isn't clear to me at all.
By the definition given, an element of [itex]{{\rm{L}}_n}\left( {V,F} \right)[/itex] could fail to be onto V.
Then it wouldn't be an element of [itex]{S_V}[/itex].

Either all such one-to-one linear transformations have to be onto, or the author should have defined [itex]{{\rm{L}}_n}\left( {V,F} \right)[/itex]
as the set of all one-to-one linear transformations of V onto V, the vector space of dimension n over F.

I haven't had much luck trying to prove that all such one-to-one transformations have to be onto, so I am guessing the author made a mistake.
On the next page after this definition, the author calls [itex]{{\rm{L}}_n}\left( {V,F} \right)[/itex], with composition of mappings as the operation,
the full linear group of dimension n. This doesn't seem to be standard terminology, so it's hard to find anything online to verify my suspicion.

Can anyone verify that the author made a mistake, or show me how to prove that all such one-to-one transformations have to be onto?
Thanks.
 
  • #2
It is no mistake.

A linear transformation ##V\to V## on a finite-dimensional vector space is injective if and only if it is surjective.

For a proof, consider a linear map ##T: V \to V##.

Do you know the rank-nullity formula

##\dim V = \dim \ker T + \dim T(V)##?

If so, apply it: injectivity gives ##\ker T = \{0\}##, so ##\dim T(V) = \dim V## and ##T## is onto.
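(Not from the thread: a quick numerical illustration of this in Python/NumPy, with a generic random square matrix standing in for ##T##; a sketch, not a proof.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n))   # a generic random matrix is injective on R^n

rank = np.linalg.matrix_rank(T)   # dim T(V)
nullity = n - rank                # dim ker T, by rank-nullity
print(rank, nullity)              # 5 0 -> trivial kernel, so T is injective

# Full rank means the columns span R^n, i.e. T is also surjective:
b = rng.standard_normal(n)
x = np.linalg.solve(T, b)         # solvable for every b exactly when rank == n
print(np.allclose(T @ x, b))      # True
```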
 
  • #3
jstrunk said:
Summary: The text implies that every one-to-one linear transformation of an n-dimensional vector space over a field F is onto

Schaum's Outline of Group Theory, Section 3.6e defines ##{{\rm{L}}_n}\left( {V,F} \right)## as the set of all one-to-one linear transformations of ##V##
This set is usually denoted ##\operatorname{GL}(V)##, or ##\operatorname{GL}(n,F)## once a basis is chosen. We have all ##F##-linear transformations ##V \longrightarrow V## which are one-to-one; I assume this means injective. Now every injective linear mapping ##\varphi\, : \,U \longrightarrow V## is essentially an embedding ##U \subseteq V##, and if both spaces have the same finite dimension it is necessarily surjective, too. This follows e.g. from the rank-nullity theorem:
$$
\operatorname{rank}\varphi + \operatorname{null}\varphi = \operatorname{dim}U
$$
The rank is the dimension of the image of ##\varphi##, the nullity the dimension of the kernel. Now for ##U=V## we get ##\dim(\operatorname{im}(\varphi)) = n - \dim(\operatorname{ker}(\varphi)) = n-0 = n##, so ##\varphi## is surjective, i.e. onto. If we choose a basis for ##V##, it is represented by a regular (invertible) matrix. In particular these maps are bijective, i.e. ##{{\rm{L}}_n}\left( {V,F} \right) = \operatorname{GL}(V) \subseteq S_V##.
 
  • #4
Thanks. This rank and kernel stuff hasn't come up yet in the book, but there are a few references to it further on. I suspect the author just wanted to gloss over it at this point.
 
  • #5
jstrunk said:
Thanks. This rank and kernel stuff hasn't come up yet in the book, but there are a few references to it further on. I suspect the author just wanted to gloss over it at this point.
It doesn't really belong in group theory, rather in linear algebra. You can also consider it as sets: if we have an injective map from ##V## to ##V##, then every element on the left is mapped to one on the right, and no two elements have the same image. How should it be possible not to hit all points on the right? If there were such a missed point, the right side would have one more element than the left. This is easy to see for finite fields, where ##V## is then also finite. It is not a good argument for infinite fields, and thus for vector spaces with infinitely many elements: e.g. there are bijections from ##(0,1)## to ##\mathbb{R}##, so counting doesn't work well with infinite sets. But here linearity saves the day:

Let ##\varphi\, : \,V \longrightarrow V## be one-to-one, i.e. injective, and ##\{\,v_1,\ldots,v_n\,\}## a basis of ##V##. Then ##\{\,\varphi(v_1),\ldots,\varphi(v_n)\,\}## is also linearly independent and, for dimensional reasons, a basis. This is an easy exercise; you need that ##\varphi## is injective. But with such a basis of image vectors, the image is the entire vector space.
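(Editorial aside: a small exact-arithmetic check of this basis argument in Python/SymPy, using a hand-picked injective map as an example.)

```python
import sympy as sp

# A concrete injective linear map on Q^3: nonzero determinant -> trivial kernel.
A = sp.Matrix([[1, 2, 0],
               [0, 1, 1],
               [1, 0, 1]])
assert A.det() != 0

# The images of the basis vectors e1, e2, e3 are the columns of A.
images = sp.Matrix.hstack(*[A.col(i) for i in range(3)])
print(images.nullspace())  # [] -> columns are linearly independent, hence a basis
print(images.rank())       # 3  -> the image is all of Q^3, so the map is onto
```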
 
  • #6
Maybe this could help: define ##f: V_1 \rightarrow V_2##, ##f(v)=w##. Then ##f^{-1}(w)=v## (by 1-1-ness), and if there is a ##w'## with ##f(v') \neq w'## for all ##v'## in ##V_1##, then ##f^{-1}## is not defined at ##w'##. Maybe a bit clunky. More informally, if ##f## is 1-1, ##f(V)## is a copy of ##V##, a subspace (of full dimension) of ##V##.
 
  • #7
WWGD said:
Maybe this could help: define ##f: V_1 \rightarrow V_2##, ##f(v)=w##. Then ##f^{-1}(w)=v## (by 1-1-ness), and if there is a ##w'## with ##f(v') \neq w'## for all ##v'## in ##V_1##, then ##f^{-1}## is not defined at ##w'##. Maybe a bit clunky. More informally, if ##f## is 1-1, ##f(V)## is a copy of ##V##, a subspace (of full dimension) of ##V##.
One has to use an additional structure like linearity, because counting doesn't work very well on infinite sets.
 
  • #8
fresh_42 said:
One has to use an additional structure like linearity, because counting doesn't work very well on infinite sets.
I am just using 1-1-ness and the fact that ##f## is defined on the whole of ##V##.
 
  • #9
WWGD said:
I am just using 1-1-ness and the fact that ##f## is defined on the whole of ##V##.
Yes, but this would also be true for ##(0,1)## and ##\mathbb{R}-\{\,0\,\}##, and maps between those may or may not be bijective.
 
  • #10
fresh_42 said:
Yes, but this would also be true for ##(0,1)## and ##\mathbb{R}-\{\,0\,\}##, and maps between those may or may not be bijective.
You mean injective self-maps of such sets that are not surjective?
 
  • #11
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##; we can even do it surjectively, or not at all. To conclude from an embedding to a bijection we need either finiteness or additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
 
  • #12
fresh_42 said:
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##; we can even do it surjectively, or not at all. To conclude from an embedding to a bijection we need either finiteness or additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
Ok, I am also using the fact that this is a map from a space to itself. So you would need to give me, e.g., a self-injection of either set that is not a surjection.
 
  • #13
WWGD said:
Ok, I am also using the fact that this is a map from a space to itself. So you would need to give me, e.g., a self-injection of either set that is not a surjection.
##(0,1) \stackrel{\operatorname{id}}{\hookrightarrow} \mathbb{R}-\{\,0\,\}\stackrel{\varphi}{\rightarrowtail} (-\frac{1}{4},\frac{1}{4}) -\{\,0\,\} \stackrel{+\frac{1}{3}}{\hookrightarrow} (0,1)##
where I only need a bijection ##\varphi## between the real line and an open interval, both with zero removed to make it easier.
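(A concrete instance of this chain, my own sketch: ##\arctan## plays the role of ##\varphi##, compressing the line into a quarter-width interval before shifting back into ##(0,1)##.)

```python
import math

def f(x: float) -> float:
    """Injective self-map of (0,1) that is not onto: view x as a real number,
    squeeze R into (-1/4, 1/4) with arctan, then shift by 1/3 back into (0,1)."""
    return math.atan(x) / (2 * math.pi) + 1 / 3

xs = [i / 1000 for i in range(1, 1000)]
ys = [f(x) for x in xs]
print(min(ys), max(ys))  # image stays inside (1/3, 11/24): most of (0,1) is missed
print(all(a < b for a, b in zip(ys, ys[1:])))  # strictly increasing -> injective
```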
 
  • #14
fresh_42 said:
I mean that element count does not work. You can embed ##(0,1)## in ##\mathbb{R}## or in ##\mathbb{R}-\{\,0\,\}##, we can do it even surjective, or none of them. To conclude from an embedding to a bijection we need either finiteness or an additional information like linearity. If neither of them occurs in a 'proof', then the proof is necessarily wrong.
One-one guarantees that a 1-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.
 
  • #15
WWGD said:
One-one guarantees that a 1-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.
My function is injective, whatever one-to-one should mean. I read 1:1 as bijection but have been told that it means only an injection. So the example I gave is an injection of the interval into itself which is not onto, that is, not surjective.
 
  • #16
WWGD said:
One-one guarantees that a 1-sided inverse exists. I think it is not too hard to show that, for a self-map ##f: X \to X##, the function is also onto.

If you mean injection by ##1-1##, this is false. Consider ##\mathbb{N} \to \mathbb{N}: n \mapsto n+1##. This is an injection which is not surjective.
 
  • #17
Math_QED said:
If you mean injection by ##1-1##, this is false. Consider ##\mathbb{N} \to \mathbb{N}: n \mapsto n+1##. This is an injection which is not surjective.
How is it not surjective? ##n-1 \mapsto (n-1)+1 = n##: give me any natural number and its predecessor will hit it. I am not sure my claim is true, but I think it is.
 
  • #18
WWGD said:
How is it not surjective? ##n-1 \mapsto (n-1)+1 = n##: give me any natural number and its predecessor will hit it. I am not sure my claim is true, but I think it is.

Take the smallest natural number (either ##0## or ##1## depending on the convention you are using). Give me a number that hits it :).
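(A throwaway check of this counterexample on a finite window of ##\mathbb{N}##, my own illustration.)

```python
# n -> n + 1 on the naturals: injective, but nothing maps to 0.
N = range(0, 10_000)
image = {n + 1 for n in N}
print(0 in image)            # False -> not surjective
print(len(image) == len(N))  # True  -> no two inputs collide, i.e. injective here
```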
 
  • #19
Math_QED said:
Take the smallest natural number (either ##0## or ##1## depending on the convention you are using). Give me a number that hits it :).
Ah, ok, I was thinking of the integers. Good counterexample.
 
  • #20
My bad, I was being hard-headed because I could not think of general counterexamples off-hand. There are plenty: the real exponential on the reals, for instance. By some accounting they may even be more numerous than self-injections that are surjections. I will think things through more carefully. I knew this property was special to linear maps, but my hard-headedness kept me from coming up with counterexamples, and, worse, I did not even spend much time thinking. Arrogance.
 
  • #21
WWGD said:
My bad, I was being hard-headed because I could not think of general counterexamples off-hand. There are plenty: the real exponential on the reals, for instance. By some accounting they may even be more numerous than self-injections that are surjections. I will think things through more carefully. I knew this property was special to linear maps, but my hard-headedness kept me from coming up with counterexamples, and, worse, I did not even spend much time thinking. Arrogance.

It is not arrogance! It is your mind trying to generalise a concept that's true in many situations (e.g. finite sets, linear maps between finite-dimensional vector spaces, ...). It happens mostly in situations where we have developed a certain intuition, and in the general context that intuition breaks down.

For example, here is something I believed to be true until recently: if we have two isomorphic groups ##G_1,G_2## with isomorphic normal subgroups ##H_1, H_2##, we could be tempted to believe that the quotient groups ##G_1/H_1## and ##G_2/H_2## are isomorphic. It isn't true, however. A counterexample to a statement often helps to see why intuition breaks down: here it is because the way the subgroup is embedded in the larger group also matters, and the given conditions don't ensure anything about that.

So TL;DR: it happens to everyone at some point, and it is not arrogance.
 
  • #22
Math_QED said:
If we have two isomorphic groups ##G_1,G_2## with isomorphic normal subgroups ##H_1, H_2##, we could be tempted to believe that the quotient groups ##G_1/H_1## and ##G_2/H_2## are isomorphic. It isn't true, however.
Why is that? What is a counterexample? A quick look at the 4-lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. I am not sure whether there is also an epimorphism, but the two inclusions are a strong condition.
 
  • #23
Maybe one way of doing the proof for finite-dimensional vector spaces is to use the fact that, given an ordered basis ##\{ v_1, v_2,\ldots,v_n \}##, there is an isomorphism between ##\operatorname{L}(V,V)## and ##\operatorname{Mat}_{n \times n}##. Then you can show that a matrix can only have 2-sided inverses, meaning it has to describe a map that is 1-1 and onto.

##AB=I \Rightarrow B=A^{-1} \Rightarrow BA =A^{-1}A=I##

So ##B## is both a right- and a left-inverse and we have two-sided inverses.
 
  • #24
fresh_42 said:
Why is that? What is a counterexample? A quick look at the 4-lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. I am not sure whether there is also an epimorphism, but the two inclusions are a strong condition.
I wonder if there is something similar in topology with subspaces ##S_1, S_2## which are homeomorphic as stand-alone spaces and both embed in a third space ##X##. Is it the case that ##X/S_1 \simeq X/S_2##, at least up to homotopy (or something else)?

I would think no, because we could, e.g., start with a torus as ##X## and consider ##S_1, S_2## as loops/circles going around ##X## longitudinally and latitudinally. So if we do ##X/S_1## (shrink to a point/collapse/quotient out) we pinch the torus, but nothing of the sort happens when we do ##X/S_2##. It seems we need additional conditions. Maybe if both ##S_1, S_2## are contractible in ##X##: there is the result that ##X/C \simeq X## when ##C## is contractible in ##X##. Hope this is not getting too far off-topic.
 
  • #25
WWGD said:
I wonder if there is something similar in topology with subspaces ##S_1, S_2## which are homeomorphic as stand-alone spaces and both embed in a third space ##X##. Is it the case that ##X/S_1 \simeq X/S_2##, at least up to homotopy (or something else)?

I would think no, because we could, e.g., start with a torus as ##X## and consider ##S_1, S_2## as loops/circles going around ##X## longitudinally and latitudinally. So if we do ##X/S_1## (shrink to a point/collapse/quotient out) we pinch the torus, but nothing of the sort happens when we do ##X/S_2##. It seems we need additional conditions. Maybe if both ##S_1, S_2## are contractible in ##X##: there is the result that ##X/C \simeq X## when ##C## is contractible in ##X##. Hope this is not getting too far off-topic.
Well, the 4-, 5- and 9-lemmas should work in ##\mathbf{Top}## as well, but I'm still not convinced. The short five lemma works from the outside in, and we are arguing from the left side to the right, so it could be possible. But I'd like to see a counterexample, and whether my observation about the two inclusions was correct.
 
  • #26
fresh_42 said:
Well, the 4-, 5- and 9-lemmas should work in ##\mathbf{Top}## as well, but I'm still not convinced.
?? I have never heard of those names. Can you give a reference, please?
Do you agree with my argument using matrices?
 
  • #27
WWGD said:
?? I have never heard of those names. Can you give a reference, please?
Do you agree with my argument using matrices?
There is no problem with a finite basis and linearity. I only said that the element count doesn't work for infinite sets. The case of uncountable dimension would be interesting: can we embed such a vector space in itself without being surjective? Or will Hamel bases, i.e. AC, save the day?
 
  • #29
fresh_42 said:
There is no problem with a finite basis and linearity. I only said that the element count doesn't work for infinite sets. The case of uncountable dimension would be interesting: can we embed such a vector space in itself without being surjective? Or will Hamel bases, i.e. AC, save the day?
I never thought cardinality/count was enough; that wasn't part of my argument. I thought a key issue was that we are mapping a space to itself, and that matrices are not merely one-sided invertible, unlike some functions. EDIT: I guess, re the quotients ##X/S_1, X/S_2##, we need some condition on maps passing to the quotient. Thanks for the 5-lemma reference; I will see how it fits.
 
  • #30
fresh_42 said:
Why is that? What is a counterexample? A quick look at the 4-lemma suggests there are inclusions ##G_1/H_1 \rightarrowtail G_2/H_2 \rightarrowtail G_1/H_1##. I am not sure whether there is also an epimorphism, but the two inclusions are a strong condition.

Exactly the point I was trying to make! It seems so true! Take ##G = Z_4 \times Z_2## and consider the cyclic subgroups generated by ##(2,0)## and ##(0,1)##. They are both cyclic of order 2 and hence isomorphic. Quotienting them out gives the two different groups of order ##4##.
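(A brute-force verification of this counterexample, my own sketch: the sorted element orders distinguish the two groups of order 4.)

```python
from itertools import product

# G = Z4 x Z2 with componentwise addition.
G = list(product(range(4), range(2)))
add = lambda a, b: ((a[0] + b[0]) % 4, (a[1] + b[1]) % 2)

def quotient_orders(H):
    """Sorted element orders of G/H, computed via coset representatives."""
    H = frozenset(H)
    cosets = {frozenset(add(g, h) for h in H) for g in G}
    orders = []
    for coset in cosets:
        g = next(iter(coset))  # any representative of a coset gives the same order
        x, k = g, 1
        while x not in H:      # order of gH is the least k with k*g in H
            x, k = add(x, g), k + 1
        orders.append(k)
    return sorted(orders)

H1 = [(0, 0), (2, 0)]  # subgroup generated by (2,0)
H2 = [(0, 0), (0, 1)]  # subgroup generated by (0,1)
print(quotient_orders(H1))  # [1, 2, 2, 2] -> G/H1 is Z2 x Z2
print(quotient_orders(H2))  # [1, 2, 4, 4] -> G/H2 is Z4
```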
 
  • #31
WWGD said:
Maybe one way of doing the proof for finite-dimensional vector spaces is to use the fact that, given an ordered basis ##\{ v_1, v_2,\ldots,v_n \}##, there is an isomorphism between ##\operatorname{L}(V,V)## and ##\operatorname{Mat}_{n \times n}##. Then you can show that a matrix can only have 2-sided inverses, meaning it has to describe a map that is 1-1 and onto.

##AB=I \Rightarrow B=A^{-1} \Rightarrow BA =A^{-1}A=I##

So ##B## is both a right- and a left-inverse and we have two-sided inverses.
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: show that for square matrices ##A, B## of the same size, ##AB=I## implies ##BA=I##. It is not trivial that matrix inverses are necessarily two-sided. EDIT: I don't know if/how this generalizes to non-commutative rings with identity: if ##rr'=1##, when does it follow that ##r'r=1##?
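(Numerical evidence for the exercise, my own sketch with a single random matrix; evidence, not a proof.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))    # generic, hence invertible
B = np.linalg.solve(A, np.eye(n))  # a right inverse: A @ B = I by construction

print(np.allclose(A @ B, np.eye(n)))  # True: AB = I
print(np.allclose(B @ A, np.eye(n)))  # True: BA = I comes along for free
```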
 
  • #32
WWGD said:
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: show that for square matrices ##A, B## of the same size, ##AB=I## implies ##BA=I##. It is not trivial that matrix inverses are necessarily two-sided.

An easy way to do this for ##n \times n## matrices is to write the left inverse of ##B## as a linear combination of powers of ##B## (i.e. a polynomial in ##B##). Now right-multiply by ##B##, and each side is thus equal to the identity matrix; but your polynomial commutes with ##B## (evaluate term by term), which means your left inverse commutes with ##B## as well and hence is an actual inverse.

- - - -
The "easy" polynomial for the above uses Cayley-Hamilton. The much more basic result that is actually useful here is that these matrices live in a vector space of dimension ##n^2##, and hence there must be some monic polynomial of degree (at most) ##n^2## that annihilates them... Once you have your polynomial in ##B## equal to zero, multiply on the left by the left inverse of ##B## a suitable number of times, move the lowest-order term (the left inverse of ##B## itself) to the right-hand side, and rescale as needed. (This is where the first paragraph starts.)
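(A SymPy sketch of this polynomial route, with my own example matrix; here the characteristic polynomial plays the role of the annihilating polynomial.)

```python
import sympy as sp

B = sp.Matrix([[2, 1, 0],
               [0, 1, 1],
               [1, 0, 1]])
n = B.shape[0]

p = B.charpoly()    # Cayley-Hamilton: p(B) = 0
c = p.all_coeffs()  # [1, c_{n-1}, ..., c_1, c_0], with c_0 = +-det(B) != 0

# From B^n + c_{n-1} B^{n-1} + ... + c_1 B + c_0 I = 0, solve for the inverse:
#   B^{-1} = -(B^{n-1} + c_{n-1} B^{n-2} + ... + c_1 I) / c_0, a polynomial in B.
inv = -sum((c[i] * B**(n - 1 - i) for i in range(n)), sp.zeros(n, n)) / c[-1]

print(inv * B == sp.eye(n), B * inv == sp.eye(n))  # True True: two-sided inverse
```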
 
  • #33
WWGD said:
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: Show that for A,B same size, AB=I , then BA=I. Not trivial that matrix inverses are necessarily two-sided. EDIT: I don't know if/how this can be generalized to non-commutative rings with identity: if rr'=1 , when does it follow that r'r=1?

##AB= I\implies BA = B(AB)B^{-1} = BB^{-1}= I##
An inverse exists by a determinant argument: ##AB=I## means that ##\det B \neq 0 \neq \det A##

In general in non-commutative rings ##rr'=1## does not imply ##r'r=1##. See e.g. here https://math.stackexchange.com/q/70777/661543
 
  • #34
Math_QED said:
##AB= I\implies BA = B(AB)B^{-1} = BB^{-1}= I##
An inverse exists by a determinant argument: ##AB=I## means that ##\det B \neq 0 \neq \det A##

In general in non-commutative rings ##rr'=1## does not imply ##r'r=1##. See e.g. here https://math.stackexchange.com/q/70777/661543
Yes, but you cheated a bit: the multiplicativity of the determinant, and the existence of ##B^{-1}## as a left and right inverse, which is what we wanted to show! The condition does not hold in arbitrary rings, but it does in groups. So the essential question behind the task was: can we show that ##GL(V)## is a group, and thus ##AB=I \Longrightarrow BA=I##, without using linear algebra? It's easy if there exists a left inverse, but the existence will probably require facts from the definition of the group. Can this be proven, or can we conclude the existence of ##CA=I## only from ##AB=I\,##?
 
  • #35
##BABB^{-1} =I## only gives ##BA=I## if we already know that ##B^{-1}## exists as a two-sided inverse, which is circular. Maybe we can argue instead: if ##AB=I##, then ##(BA)^2 = B(AB)A = BA##, so ##BA## is idempotent; and since ##\det B \det A = \det(AB) = 1##, ##BA## is invertible. The only invertible idempotent is the identity, hence ##BA=I##?
 
