Mistake in Schaum's Group Theory?


Discussion Overview

The discussion revolves around a definition in Schaum's Outline of Group Theory regarding the set ##L_n(V,F)## of one-to-one linear transformations of a vector space V over a field F. Participants examine whether every transformation in this set is necessarily onto, and the implications of this definition in the context of group theory and linear algebra.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant questions the definition of ##L_n(V,F)## as potentially including transformations that are not onto, suggesting a possible mistake in the text.
  • Another participant asserts that a linear transformation from a finite-dimensional vector space is injective if and only if it is surjective, referencing the rank-nullity theorem to support this claim.
  • A later reply summarizes that the text implies every one-to-one linear transformation of an ##n##-dimensional vector space must be onto, and discusses the relationship between injective mappings and embeddings in the context of vector spaces.
  • Some participants introduce the concept of linearity and its importance in proving that injective maps are surjective, particularly in finite-dimensional spaces.
  • There are discussions about the limitations of these arguments for infinite sets, with examples illustrating the complexities of injective and surjective mappings in such cases (a minimal sketch follows this list).
  • Several participants express uncertainty about the applicability of certain arguments to infinite-dimensional spaces and the necessity of additional structures like linearity for conclusions about surjectivity.
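A minimal sketch of the infinite-dimensional phenomenon referenced above (an illustration added for concreteness, not code from the thread): multiplication by ##x## on the space of polynomials is linear and injective, yet not surjective, since nothing maps onto the constant polynomial ##1##.

```python
import sympy as sp

x = sp.symbols('x')

def shift(p):
    """Multiplication by x on the space of polynomials:
    a linear map that is injective but not surjective."""
    return sp.expand(x * p)

p = 3*x**2 + 2*x + 1
q = shift(p)                     # 3*x**3 + 2*x**2 + x
# Injective: x*p = x*r forces p = r (cancel x in the polynomial ring).
# Not surjective: every image vanishes at x = 0, so the constant
# polynomial 1 is never attained.
assert q.subs(x, 0) == 0
```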

Areas of Agreement / Disagreement

Participants do not reach a consensus. While some argue that the definition is correct based on established mathematical principles, others maintain that the definition could be misleading or incomplete without further clarification.

Contextual Notes

Limitations include the potential misunderstanding of terminology (e.g., one-to-one vs. bijection) and the implications of dimensionality in finite versus infinite contexts. The discussion reveals a reliance on definitions and theorems that may not be fully explored in the referenced material.

  • #31
WWGD said:
Maybe one way of doing the proof for a (finite-dimensional) vector space ##V## is to use the fact that, given an ordered basis ##\{ v_1, v_2, \dots, v_n \}##, there is an isomorphism between ##\text L(V,V)## and ##\text{Mat}_{(n,n)}##. And then you can show that a matrix can only have two-sided inverses, meaning it has to describe a map that is 1-1 and onto.

##AB=I \rightarrow B=A^{-1} \rightarrow BA =A^{-1}A=I##

So ##B## is both a right- and a left-inverse, and we have a two-sided inverse.
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: show that for ##A, B## of the same size, if ##AB=I##, then ##BA=I##. It is not trivial that matrix inverses are necessarily two-sided. EDIT: I don't know if/how this can be generalized to non-commutative rings with identity: if ##rr'=1##, when does it follow that ##r'r=1##?
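For reference, a minimal sketch of the isomorphism mentioned above, with a hypothetical map ##T## chosen for illustration: the ##j##-th column of the matrix is the coordinate vector of ##T(v_j)## in the chosen ordered basis.

```python
import sympy as sp

# Hypothetical linear map T on Q^2 and the standard ordered basis.
basis = [sp.Matrix([1, 0]), sp.Matrix([0, 1])]
T = lambda v: sp.Matrix([2*v[0] + v[1], v[0] - v[1]])

# Column j of the matrix = coordinates of T(v_j) in the basis.
M = sp.Matrix.hstack(*[T(v) for v in basis])

v = sp.Matrix([3, 5])
assert M * v == T(v)             # the matrix faithfully represents T
```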
 
  • #32
WWGD said:
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: show that for ##A, B## of the same size, if ##AB=I##, then ##BA=I##. It is not trivial that matrix inverses are necessarily two-sided.

An easy way to do this for ##n \times n## matrices is to write the left inverse of ##B## as a linear combination of powers of ##B## (i.e., a polynomial in ##B##). Now right-multiply by ##B##, and each side is thus equal to the identity matrix; but your polynomial commutes with ##B## (evaluate term by term), which means your left inverse commutes with ##B## as well and hence is an actual inverse.

- - - -
The "easy" polynomial for the above uses Cayley Hamilton. The much more basic result that is actually useful here is that these matrices live in a vectors space with dimension ##n^2## and hence there must be some monic polynomial of degree (at most) ##n^2## that annhilates them... once use you have your polynomial in B equal to zero, multiply on the left by the left inverse of ##B## a suitable number of times and move the lowest order term (the left inverse of B itself), to the right hand side, and rescale as needed. (This is where the first paragraph starts.)
 
  • #33
WWGD said:
BAH. My argument here is incorrect; it does not automatically follow. It is an interesting exercise: show that for ##A, B## of the same size, if ##AB=I##, then ##BA=I##. It is not trivial that matrix inverses are necessarily two-sided. EDIT: I don't know if/how this can be generalized to non-commutative rings with identity: if ##rr'=1##, when does it follow that ##r'r=1##?

##AB= I\implies BA = B(AB)B^{-1} = BB^{-1}= I##
An inverse exists by a determinant argument: ##AB=I## means that ##\det B \neq 0 \neq \det A##

In general in non-commutative rings ##rr'=1## does not imply ##r'r=1##. See e.g. here https://math.stackexchange.com/q/70777/661543
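A sketch of the standard counterexample behind that link, assuming the usual shift operators on the space of finitely supported sequences (encoded here as Python lists): with ##r## the left shift and ##r'## the right shift, ##rr'=1## but ##r'r\neq 1##.

```python
# Shifts on finitely supported sequences (a0, a1, a2, ...); both are linear.

def left_shift(a):               # (a0, a1, a2, ...) -> (a1, a2, ...)
    return a[1:]

def right_shift(a):              # (a0, a1, ...) -> (0, a0, a1, ...)
    return [0] + a

a = [1, 2, 3]
assert left_shift(right_shift(a)) == a        # r r' = 1
assert right_shift(left_shift(a)) != a        # r' r kills a0: not 1
```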
 
  • #34
Math_QED said:
##AB= I\implies BA = B(AB)B^{-1} = BB^{-1}= I##
An inverse exists by a determinant argument: ##AB=I## means that ##\det B \neq 0 \neq \det A##

In general in non-commutative rings ##rr'=1## does not imply ##r'r=1##. See e.g. here https://math.stackexchange.com/q/70777/661543
Yes, but you cheated a bit: the multiplicativity of the determinant, and the existence of ##B^{-1}## as a left and right inverse, which is what we wanted to show! The condition does not hold in arbitrary rings, but it does in groups. So the essential question behind the task was: can we show that ##GL(V)## is a group, and thus ##AB=I \Longrightarrow BA=I##, without using linear algebra? It's easy if there exists a left inverse, but that existence will probably require facts from the definition of the group. Can this be proven, or can we conclude the existence of a ##C## with ##CA=I## only from ##AB=I\,##?
 
  • #35
##BABB^{-1} =I ## implies ##AB=AB^{-1} ## , not necessarily the identity. Maybe we can argue:
##BA=I ## , then ##AB= ABAB=I \rightarrow (AB)^n =I ## , has only identity as solution ( since this is true for all natural ##n## )?
 
  • #36
You said earlier that ##AB=I## is given. I think you confused the order now. This is very confusing, as the order is all we talk about. But given ##AB=I## the following doesn't make much sense:
WWGD said:
##BABB^{-1} =I ## implies ##AB=AB^{-1} ## , not necessarily the identity. Maybe we can argue:
##BA=I ## , then ##AB= ABAB=I \rightarrow (AB)^n =I ## , has only identity as solution ( since this is true for all natural ##n## )?
If we allow a left inverse ##CA=I## then we immediately have ##B=IB=(CA)B=C(AB)=CI=C##. The existence of ##C## is the problem. To write it as ##B^{-1}## is cheating.
 
  • #37
fresh_42 said:
You said earlier that ##AB=I## is given. I think you confused the order now. This is very confusing, as the order is all we talk about. But given ##AB=I## the following doesn't make much sense:

If we allow a left inverse ##CA=I## then we immediately have ##B=IB=(CA)B=C(AB)=CI=C##. The existence of ##C## is the problem. To write it as ##B^{-1}## is cheating.
Yes, my bad, I mixed both things up. Let me go for another doppio, a 2-sided doppio. I intended the post to contain both left- and right-inverses, but I somehow got logged off a few times and lost track of what I was doing :(.
 
  • #38
Ok, this is the argument I intended:
Assume ##AB=I ##
Then ##BA=B(AB)A=(BA)^2=I ## ( just need associativity) and we can extend to ##(BA)^n =I ## for all n
 
  • #39
WWGD said:
Ok, this is the argument I intended:
Assume ##AB=I ##
Then ##BA=B(AB)A=(BA)^2=I ## ( just need associativity) and we can extend to ##(BA)^n =I ## for all n
That's wrong. If ##BA=C## then all we have is ##C=BA=B(AB)A = (BA)^2=C^2##, and by recursion ##C=C^n## for all ##n##. Since there are idempotent matrices which are not the identity, I don't see how ##C=I## should follow.
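A concrete idempotent, added to illustrate this point: a projection satisfies ##C=C^2=C^3=\dots## without being the identity. Note, as the following posts observe, its determinant is ##0##, so it cannot arise as ##BA## with ##AB=I##.

```python
import sympy as sp

# A projection: idempotent, but neither 0 nor the identity.
C = sp.Matrix([[1, 0], [0, 0]])
assert C * C == C                        # hence C == C**n for every n >= 1
assert C != sp.eye(2) and C != sp.zeros(2)
assert C.det() == 0                      # so C != BA whenever AB = I
```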
 
  • #40
fresh_42 said:
That's wrong. If ##BA=C## then all we have is ##C=BA=B(AB)A = (BA)^2=C^2##, and by recursion ##C=C^n## for all ##n##. Since there are idempotent matrices which are not the identity, I don't see how ##C=I## should follow.
Yes, use ##BA## or ##C##, either way. And no: not ##C^n=C## for _some_ ##n##, but ##C^n=C## _for all_ ##n##. Do you have a non-identity ##C## with ##C^2=C^3=\dots=C^n=I##? Maybe; I don't know of one. Also ##\det C=1## here; let's assume real entries. If not, the determinant is a 2nd, 3rd, ..., n-th, ... root of unity. Is there such a number?
 
  • #41
WWGD said:
Yes, use ##BA## or ##C##, either way. And no: not ##C^n=C## for _some_ ##n##, but ##C^n=C## _for all_ ##n##. Do you have a non-identity ##C## with ##C^2=C^3=\dots=C^n=I##?
That's what I said, for all ##n##. And, no, I don't have such a ##C##, but this isn't a formal proof, especially if we do not use linear algebra.
 
  • #42
fresh_42 said:
That's what I said, for all ##n##. And, no, I don't have such a ##C##, but this isn't a formal proof, especially if we do not use linear algebra.
Yes, I know, I never said it is a proof. But this matrix ##BA## satisfies infinitely many polynomial relations ##C^n - C = 0##, which is also strange; but, I admit, not a proof (yet?).
 
  • #43
Just experimenting before aiming for a formal proof.
 
  • #44
Well, ##C=0## is a solution, too. And theoretically we can have zero divisors from the left and not from the right, and similar nasty things. But the question "can we prove ##G=GL(V)## is a group without using linear algebra, using only ##AB=I##" is probably doomed to fail; as @Math_QED correctly mentioned, it doesn't work in rings. But if it's a group, then we are done. If we do not allow this, we have to use something from the definition of ##G##, and that is linear algebra.
 
  • #45
fresh_42 said:
Well, ##C=0## is a solution, too. And theoretically we can have zero divisors from the left and not from the right, and similar nasty things. But the question "can we prove ##G=GL(V)## is a group without using linear algebra, using only ##AB=I##" is probably doomed to fail; as @Math_QED correctly mentioned, it doesn't work in rings. But if it's a group, then we are done. If we do not allow this, we have to use something from the definition of ##G##, and that is linear algebra.
##C=0## is not a solution: ##AB=I##, so ##\det(AB)=\det A \det B=1##, and ##\det C=\det(BA)=1##, but ##\det 0=0##. EDIT: ##\det C=1##, or, if you allow complex entries and ##\det C \neq 1##, then ##\det C## is a 2nd, 3rd, ..., n-th, ... root of unity.
 
  • #46
WWGD said:
##C=0## is not a solution: ##AB=I##, so ##\det(AB)=\det A \det B=1##, and ##\det 0=0##.
We are still on the necessity part. The damn existence of ##C## is the problem. And ##0## is a solution to ##C=C^n## for all ##n##. If you use the determinant, you have to mention the field! What I'm saying is: we will need some argument from LA. In the end we could simply write down the formula for the inverse in terms of ##A## and be done.
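The closing remark presumably refers to the classical adjugate formula ##A^{-1}=\operatorname{adj}(A)/\det(A)##; a quick sketch with a hypothetical example matrix:

```python
import sympy as sp

A = sp.Matrix([[2, 1], [1, 1]])          # hypothetical example, det(A) = 1
A_inv = A.adjugate() / A.det()           # inverse written directly in terms of A
assert A * A_inv == sp.eye(2) and A_inv * A == sp.eye(2)
```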
 
  • #47
fresh_42 said:
We are still on the necessity part. The damn existence of ##C## is the problem. And ##0## is a solution to ##C=C^n## for all ##n##. If you use the determinant, you have to mention the field! What I'm saying is: we will need some argument from LA.
Well, yes, no finite fields, so ##C^n=C##. And I had specified complexes or reals. Anyway, I don't see how this relates to Los Angeles, LA ;). But at any rate, I think we may be able to relax the condition of linearity. All we need is the scaling property and to show that the image is not meager, i.e., that it contains a ball about the origin. Then, if we consider a line segment ##y=cx## from the origin, we take the multiples of that segment, and every point is hit in that way.
 
  • #48
We still have the difficulty that zero divisors are possible as long as we don't have a group. The (unique) solvability of ##x A = B## is equivalent to the group axioms, so we circle around the main topic. Guess we have to visit La La Land.
 
  • #49
The integers are a group in which all nonzero subgroups are normal and isomorphic (each ##n\mathbb{Z}## is isomorphic to ##\mathbb{Z}## itself), but the quotients can be any finite cyclic group.
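A small illustration of this, added for concreteness: multiplication by ##n## maps ##\mathbb{Z}## injectively onto the subgroup ##n\mathbb{Z}\cong\mathbb{Z}##, with quotient ##\mathbb{Z}/n\mathbb{Z}##; an injective endomorphism of an infinite group need not be surjective.

```python
n = 3
triple = lambda k: n * k                     # injective endomorphism of (Z, +)
image = {triple(k) for k in range(-100, 100)}
assert len(image) == 200                     # no collisions on this window
assert 1 not in image                        # 1 is missed: not surjective
assert {k % n for k in range(-100, 100)} == {0, 1, 2}   # quotient Z/3Z
```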
 
  • #50
fresh_42 said:
##(0,1) \stackrel{\operatorname{id}}{\hookrightarrow} \mathbb{R}-\{\,0\,\}\stackrel{\varphi}{\rightarrowtail} (-\frac{1}{4},\frac{1}{4}) -\{\,0\,\} \stackrel{+\frac{1}{3}}{\hookrightarrow} (0,1)##
where I only need a bijection ##\varphi## between the real line and an open interval, both without zero to make it easier.
If I understand what your notation conveys, the last bit should be ## \stackrel{+\frac{1}{4}}{\hookrightarrow} (0,1)##, since you're adding 1/4 to each element of the punctured interval (-1/4, 1/4).
 
  • #51
Mark44 said:
If I understand what your notation conveys, the last bit should be ## \stackrel{+\frac{1}{4}}{\hookrightarrow} (0,1)##, since you're adding 1/4 to each element of the punctured interval (-1/4, 1/4).
No, I just wanted to shift it into the interval injectively. The amount didn't matter. With ##\frac{1}{4}## I would have had to deal with the zero, which I avoided by taking ##\frac{1}{3}##.
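One concrete choice for the chain, where the particular ##\varphi## below is an assumption for illustration rather than necessarily the map fresh_42 had in mind: ##\varphi(x)=\frac{x}{4(1+|x|)}## maps ##\mathbb{R}## bijectively onto ##(-\frac{1}{4},\frac{1}{4})## with ##\varphi(0)=0##, and the final shift by ##\frac{1}{3}## lands back in ##(0,1)## while never hitting ##\frac{1}{3}##.

```python
def phi(x):
    """A bijection R -> (-1/4, 1/4) with phi(0) = 0, hence also a
    bijection R - {0} -> (-1/4, 1/4) - {0}.  Illustrative choice."""
    return x / (4 * (1 + abs(x)))

def chain(x):
    """(0,1) -> R - {0} -> (-1/4, 1/4) - {0} -> (0,1).
    Injective, but not onto: the value 1/3 is never attained."""
    assert 0 < x < 1
    return phi(x) + 1/3

for x in (0.01, 0.5, 0.99):
    assert 0 < chain(x) < 1      # the composite stays inside (0,1)
```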
 
  • #52
fresh_42 said:
We still have the difficulty that zero divisors are possible as long as we don't have a group. The (unique) solvability of ##x A = B## is equivalent to the group axioms, so we circle around the main topic. Guess we have to visit La La Land.
Where are you getting your scalars? We're talking coefficients in a field of characteristic zero now, aren't we?
 
  • #53
WWGD said:
Where are you getting your scalars? We're talking coefficients in a field of characteristic zero now, aren't we?
O.k., but matrices can still multiply to zero. However, since @mathwonk's #49, I consider our riddle solved.
 
