Linear Algebra Texts Recommended for Rigorous Approach

  • Thread starter: JasonRox (Homework Helper, Gold Member)
I am doing Linear Algebra right now, but the text we are using sucks in my opinion.

Elementary Linear Algebra with Applications - Anton and Roberts

If there exists a linear algebra text that has a Spivak (Calculus) style approach, that would be great. I would like a more rigorous approach to linear algebra.

So, what text do you recommend?

Thanks.
 
You might want to check out "Linear Algebra Done Right" by Sheldon Axler.
 
Sounds like a linear algebra textbook tag-along kind of thing.

Is it like Schaum's or whatnot?
 
Not at all. I suggest you try to find a copy and not judge it by its title.

Axler's is the closest analogue in the linear algebra world I've seen to Spivak's Calculus.
 
I used Shilov's. It's pretty decent.
 
Thanks guys.
 
Also, I found rather fun:

http://ocw.mit.edu/OcwWeb/Mathematics/18-06Linear-AlgebraFall2002/VideoLectures/index.htm
 
The best standard linear algebra text is probably Hoffman and Kunze. Anything by Serge Lang tends to be well done, but some of his linear algebra texts are bent over backwards dumbed down.

"Finite Dimensional Vector Spaces" by Paul Halmos is probably closest to a Spivak-type text.

Or just get a good algebra book that focuses on linear algebra, like the excellent book Algebra by Michael Artin.

There is also an excellent book by Paul Dettman, in paperback for a few dollars, from Dover. Anything written in the 60's, as Spivak's was, will have higher standards than today's c**ppy texts.
 
Yes, I too would recommend Sheldon Axler's "Linear Algebra Done Right." It is much more rigorous than your standard intro to linear algebra text. You hardly use matrices and matrix manipulations at all; determinants of matrices are explained in one of the very last chapters of the book. Axler explains the THEORY behind a lot of the linear algebra most students take for granted in their first course in LA.
 
  • #10
Is Schaum's good?
 
  • #11
Schaum's books are always useful, utilitarian collections of facts and exercises. They are never, ever, anywhere near the category of book requested here, i.e. a serious theoretical text like Spivak's Calculus. They are pretty good at what they attempt, which is a minimal acquaintance with a subject, usually old fashioned, and often out of date.

Even their value as a problem solving source seems compromised to me in recent years, as they get thicker and less challenging, responding to the new penchant for dumbing down the material.

After years of recommending them, I found the Schaum's calculus book almost useless this year, and regretted recommending it to my honors class.
 
  • #12
The algebra books 843, 844, and 845, listed for free on the website http://www.math.uga.edu/~roy/

are fantastic, marvellous works of magnificent scholarship. Anyone who even holds them in his/her hand magically becomes algebraically literate.

just as the prey caught in the mouth of the tiger is doomed to never escape, so the student who once gazes into the pages of these works will be forever tied into the community of algebra scholars.
 
  • #13
I got a new text by Friedberg, Spinsel and some other guy.

Seems really good so far.
 
  • #14
Does anyone know of a good linear algebra text with an emphasis on applications and MATLAB? Thanks.
 
  • #15
quasi426 said:
Does anyone know of a good linear algebra text with an emphasis on applications and MATLAB? Thanks.

Linear Algebra by Anton and Roberts has lots of great applications. In fact, a whole chapter is devoted to applications.

It puts some emphasis on the use of programs like MATLAB, but it doesn't show how to use the program.
 
  • #16
Is this Anton and Rorres? That's the book I used as an undergrad. It was decent--I still refer to it from time to time...
 
  • #17
JasonRox said:
I got a new text by Friedberg, Spinsel and some other guy.

Seems really good so far.
:smile: Friedberg, Insel, and Spence.
 
  • #18
gravenewworld said:
Yes, I too would recommend Sheldon Axler's "Linear Algebra Done Right." It is much more rigorous than your standard intro to linear algebra text. You hardly use matrices and matrix manipulations at all; determinants of matrices are explained in one of the very last chapters of the book. Axler explains the THEORY behind a lot of the linear algebra most students take for granted in their first course in LA.
Can that be good? The book I used, "Linear Algebra and Its Applications" by David Lay, proved nearly everything but also had a good early focus on matrices and matrix manipulations. Those manipulations I found very useful in understanding how proofs later in the book proceed; doing a lot of them helps make it all more concrete and gives you a sense of what you're actually proving. Without determinants you probably can't even do eigenvalues well.

The book seemed rigorous, not that rigor in basic linear algebra is so difficult. Once you've mastered the basic manipulation there are ample proofs for you to do in the problems. What did it leave out? Well, there was a theorem or two (I don't remember exactly which) later in the book that it said to consult more advanced texts on rather than proving itself. Also, most of the book focused on real-valued matrices, though I don't know if that would be different in any other introductory text.

On the whole the book was a wonderful guide to a course I loved enough that I've done most of the rest of the book since the course ended.

This is the _only_ book that I have had for linear algebra, and also I want to stress that the teacher I had was a very good lecturer. But I think the book contributed a lot, and maybe this weekend I'll finish the part on quadratic forms.
 
  • #19
0rthodontist said:
Can that be good?

Yes, very. Linear algebra should not equal solving systems of equations, and I think a good motivated student will be able to start with the interesting parts of linear algebra from the start. Axler's might be difficult for most as a first course (he suggests it for a second), but I think it is a good book to look at for someone who asked for a Spivak analogue (see OP).

0rthodontist said:
Without determinants you probably can't even do eigenvalues well.

See http://www.axler.net/DwD.html; it's a paper published by Axler on a determinant-free approach to linear algebra that his text is partly based on.
 
  • #20
Well, at that link he says it is intended for a _second_ course in linear algebra.

I don't see why you don't like the manipulative approach. After all in applied math that's what the computer is doing. How can you understand LUP decomposition without doing row reduction yourself? I can't tell you how many times I've successfully thought through a proof in Lay's book by thinking about the nitty gritty rows and elements. Probably half of Lay's own proofs take that route. Also I especially like determinants because of their concrete area/volume/etc. interpretation.
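On the LUP point: here is a minimal sketch (plain LU with no pivoting, so it assumes the pivots stay nonzero; the P in LUP is exactly what handles the swaps) showing that the factorization is literally row reduction with the multipliers written down. The example matrix is an arbitrary choice.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU by plain row reduction (no pivoting).
    Each elimination step is the row operation you would do by hand;
    L just records the multipliers that were used."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]     # multiplier for this row operation
            U[i, :] -= m * U[k, :]    # R_i <- R_i - m * R_k
            L[i, k] = m               # remember the operation in L
    return L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))  # True: A = LU
```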

It's not as if you're spending the whole course just manipulating matrices, you just spend a few weeks doing manipulation and that gets you an intuition you can use.

I think that all math should be grounded in some kind of practical, almost physical manipulation skill, something that lets you visualize what is going on when it gets abstract. I doubt we would even have any calculus or geometry without the manipulation in two and three dimensions that we do every day going about our lives.
 
  • #21
0rthodontist said:
Well, at that link he says it is intended for a _second_ course in linear algebra.

Yes, I mentioned this above. The text is self-contained though, and does not rely on any prior linear algebra or matrix knowledge.

0rthodontist said:
I don't see why you don't like the manipulative approach. After all in applied math that's what the computer is doing. How can you understand LUP decomposition without doing row reduction yourself?

What you're calling "applied math" here is what I'd call "trivial", or "exceedingly dull computations". I'm not saying that these dull procedures shouldn't be taught (mathematicians will have to teach them to science students one day at any rate), but I do feel your typical intro course spends way too much time on computations that really aren't needed for the more interesting topics.

0rthodontist said:
I can't tell you how many times I've successfully thought through a proof in Lay's book by thinking about the nitty gritty rows and elements. Probably half of Lay's own proofs take that route. Also I especially like determinants because of their concrete area/volume/etc. interpretation.

After pushing all these little indices and sums around, how many of these proofs do you actually understand beyond the technical "line n follows from line n-1 follows from line n-2..."? These kinds of proofs are rarely enlightening.

Take a look at the "Down with Determinants" paper I linked to to see a different approach to things. He introduces determinants for the very purpose of volumes near the end, but to me it's much more motivated than the usual cofactor expansion that leaves students befuddled. Read it and let me know if it doesn't increase your understanding of linear algebra.

Do keep in mind the context of my recommendation of Axler's text. I know full well the moaning that your typical intro linear algebra class produces when you do anything at all that isn't in the form of a 'concrete' matrix. However, Jason isn't the typical student: he's complained many times about the low level of his courses and he seems like he's genuinely interested in learning math, not just some number crunching, so I have no qualms about suggesting this text to him if he is interested in learning some linear algebra. He'll learn enough of the other dull junk in whatever classes his university makes him take.
 
  • #22
shmoe said:
After pushing all these little indices and sums around, how many of these proofs do you actually understand beyond the technical "line n follows from line n-1 follows from line n-2..."? These kinds of proofs are rarely enlightening.

This is so true. I'm glad someone else said it!

shmoe said:
He'll learn enough of the other dull junk in whatever classes his university makes him take.

OTOH, a lot of these classes are what you make of them. Many people are bored in linear algebra class, but it is maybe the most useful math class they ever take. I've heard some people refer to dynamical systems as boring too. This kind of stuff amazes me...

Yeah, solving 3x3 systems gets old. But that isn't the point of these courses (or it shouldn't be anyway)...
 
  • #23
shmoe said:
After pushing all these little indices and sums around, how many of these proofs do you actually understand beyond the technical "line n follows from line n-1 follows from line n-2..."? These kinds of proofs are rarely enlightening.
I understand almost all of it. The "line n follows from ..." type stuff is how at this time I perceive the purely abstract algebra, meaningless fiddling with symbols. My opinion may change when I take a course in abstract algebra, or it may not, but at any rate I can picture how the operations are actually performed and this leads easily to proof.

This reminds me of something I posted a while ago, https://www.physicsforums.com/showthread.php?t=106101. It's an example where I didn't actually manage to get the proof, but it happened to be a very simple idea that you could never get without thinking in terms of row operations.

shmoe said:
Do keep in mind the context of my recommendation of Axler's text. I know full well the moaning that your typical intro linear algebra class produces when you do anything at all that isn't in the form of a 'concrete' matrix. However, Jason isn't the typical student: he's complained many times about the low level of his courses and he seems like he's genuinely interested in learning math, not just some number crunching, so I have no qualms about suggesting this text to him if he is interested in learning some linear algebra. He'll learn enough of the other dull junk in whatever classes his university makes him take.
Oh, so interest in actual manipulative skill makes you somehow "not genuinely interested in math"? And I suppose holding a driver's license disqualifies you as an automotive engineer.
I said I loved my course on linear algebra and I do. My average in that class was above 100%, and a few weeks ago I looked at a few problems in linear algebra from the graduate qualifying exam at my school and found that I could do them.
 
  • #24
0rthodontist said:
I understand almost all of it. The "line n follows from ..." type stuff is how at this time I perceive the purely abstract algebra, meaningless fiddling with symbols. My opinion may change when I take a course in abstract algebra, or it may not, but at any rate I can picture how the operations are actually performed and this leads easily to proof.

"abstract algebra" is really nothing like what they call algebra in high school.

Of course it's sometimes unavoidable to have to look at specific entries of matrices or do tedious calculations. If this adds little to understanding and can be avoided, I think it should be.

(An aside: for your stochastic matrix, you were happy showing you had an eigenvalue of 1. This is obvious if you look at the transpose.)

0rthodontist said:
Oh, so interest in actual manipulative skill makes you somehow "not genuinely interested in math"?

I said nothing like this implication, please read again. Interest in computations and interest in what I consider maths do not necessarily exclude one another.
 
  • #25
0rthodontist said:
I understand almost all of it. The "line n follows from ..." type stuff is how at this time I perceive the purely abstract algebra, meaningless fiddling with symbols. My opinion may change when I take a course in abstract algebra, or it may not, but at any rate I can picture how the operations are actually performed and this leads easily to proof.

Don't let the fact that you've no experience of what "abstract algebra" is stop you from making sweeping and dismissive statements about it then.

This reminds me of something I posted a while ago, https://www.physicsforums.com/showthread.php?t=106101. It's an example where I didn't actually manage to get the proof, but it happened to be a very simple idea that you could never get without thinking in terms of row operations.

That has an elementary proof in 'purely abstract' terms, as shmoe points out: if the columns sum to one then (1,1,...,1) is a left eigenvector with eigenvalue 1 (i.e. an eigenvalue of A^t, and hence of A), which gives the answer.
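A quick numerical illustration of that transpose argument (a sketch; the size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
A /= A.sum(axis=0)   # normalize so every column sums to 1

ones = np.ones(4)
print(np.allclose(ones @ A, ones))                    # (1,...,1) is a left eigenvector
print(np.any(np.isclose(np.linalg.eigvals(A), 1.0)))  # so 1 is an eigenvalue of A
```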
Oh, so interest in actual manipulative skill makes you somehow "not genuinely interested in math"?

Sometimes it is necessary to get one's hands dirty to figure out why something is true, perhaps by verifying a proposition for a few cases to see how a proof might run in general; however, an 'interest' in manipulation could be taken to mean something entirely different.

And I suppose holding a driver's license disqualifies you as an automotive engineer.

I don't see how that is remotely justifiable as an analogy. Perhaps (and this is genuinely tongue in cheek) a better analogy would be 'the ability to change a tyre doesn't make you an automotive engineer'. In any case, I don't think you've understood shmoe's position.
I said I loved my course on linear algebra and I do. My average in that class was above 100%,

and who says there's no such thing as grade inflation these days?

and a few weeks ago I looked at a few problems in linear algebra from the graduate qualifying exam at my school and found that I could do them.

So you presumably understand what a quotient space is then?

There are exactly, what, two things (i.e. proofs) one needs to be taught in linear algebra:

dim(U + V) = dim(U) + dim(V) - dim(U ∩ V)

and, for a linear map f: M --> M on a finite-dimensional space, the rank-nullity relation

dim(M) = dim(Im f) + dim(Ker f).

Ok, probably throw in Sylvester's law of replacement, call it 3 results.

Finally, to give you some idea of why proper theorem/proof stuff is admirable and necessary in linear algebra, try to show that det(AB) = det(A)det(B) for arbitrary nxn matrices. This is remarkably straightforward if we use the fact that det(M) (M a linear map from V to V) is the unique number d such that the induced map

\Lambda^n(M): \Lambda^n(V) \to \Lambda^n(V)

is multiplication by d on the top power of the exterior algebra.
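Spelling the exercise out from that definition (a sketch; the only inputs are that the top wedge is functorial and that \Lambda^n(V) is one-dimensional):

\Lambda^n(AB) = \Lambda^n(A) \circ \Lambda^n(B) : \Lambda^n(V) \to \Lambda^n(V)

The left-hand side is multiplication by det(AB); the right-hand side is multiplication by det(A)det(B). On a one-dimensional space equal maps mean equal scalars, so det(AB) = det(A)det(B).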

How about this, then:

Let S_n, the permutation group, act on some vector space V of dimension n by permuting some basis. Show that the subspace L = span(e_1 + e_2 + ... + e_n) is invariant (e_i the basis vectors) and hence that the quotient V/L is also S_n-invariant. (These are called representations of S_n.) Try to write out a basis for V/L too.
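For the invariance half of the exercise, a quick numerical sanity check (a sketch; n = 4 is arbitrary). Every permutation matrix fixes the all-ones vector e_1 + ... + e_n, so the line it spans is invariant:

```python
import numpy as np
from itertools import permutations

n = 4
v = np.ones(n)  # e_1 + e_2 + ... + e_n, spanning the line L

for sigma in permutations(range(n)):
    P = np.eye(n)[list(sigma)]    # permutation matrix for sigma
    assert np.allclose(P @ v, v)  # P fixes v, so L is invariant
print("L = span(e_1 + ... + e_n) is S_n-invariant")
```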
 
  • #26
Suddenly I'm in an argument with everyone... Well, I'd like to remove myself from this thread.

Shmoe, of course I know that abstract algebra is "really nothing like what they call algebra in high school." I would like you to stop insulting me. I am taking an honors course in formal language theory.

Matt, I qualified my statement about abstract algebra. I have done a very small amount of it in beginning discrete math type settings, and found it very boring and pedantic, but I already said that when I know more I may change my opinion. Maybe I should have withheld my opinion.

Perhaps there is also an easy proof in abstract terms (terms I have never heard of) but the fact is there also exists an easy proof in concrete terms.

Proof that det(AB) = det(A)det(B), with A and B nxn:

Assume that A has rank < n. Then AB has rank < n, and det(AB) = 0 = det(A)det(B).

So I only need to prove the case when A has rank n. In this case, write A as a product of elementary matrices and the identity:

A = E_k E_{k-1} ... E_2 E_1 I

so that

AB = E_k E_{k-1} ... E_2 E_1 B.

Now det(E_1 B) = det(E_1)det(B), because E_1 is the matrix for a row operation, by the determinant rules for row operations. Assume det(E_j ... E_1 B) = det(E_j ... E_1)det(B). Then

det(E_{j+1} E_j ... E_1 B) = det(E_{j+1}) det(E_j ... E_1) det(B),

so by induction det(AB) = det(E_k) det(E_{k-1}) ... det(E_1) det(B). By the same principle proved in the induction, det(E_k) det(E_{k-1}) ... det(E_1) = det(E_k E_{k-1} ... E_1) = det(A).

So det(AB) = det(A)det(B).

Was this proof longer than yours? Yes, but the idea was easy to grasp and the rest was just details.
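For what it's worth, the decomposition step in this proof can be watched numerically. A minimal sketch (it assumes no pivot ever hits zero, so no row swaps are needed, and it uses numpy's det as the referee):

```python
import numpy as np

def elementary_factors(A):
    """Row-reduce invertible A to I, returning the elementary matrices
    E_1, ..., E_k in the order applied, so that E_k ... E_1 A = I.
    Sketch only: assumes no pivot becomes zero (no row swaps needed)."""
    n = A.shape[0]
    R, factors = A.astype(float), []
    for k in range(n):
        E = np.eye(n); E[k, k] = 1.0 / R[k, k]     # scale row k: pivot -> 1
        R, factors = E @ R, factors + [E]
        for i in range(n):
            if i != k and R[i, k] != 0.0:
                E = np.eye(n); E[i, k] = -R[i, k]  # clear entry (i, k)
                R, factors = E @ R, factors + [E]
    return factors

A = np.array([[2.0, 1.0], [1.0, 3.0]])
Es = elementary_factors(A)

# E_k ... E_1 A = I means A = E_1^{-1} ... E_k^{-1}, a product of
# elementary matrices, so det(A) should equal prod(det(E_i^{-1})).
invs = [np.linalg.inv(E) for E in Es]
prod = np.eye(2)
for E in invs:
    prod = prod @ E
print(np.allclose(prod, A))                        # True
print(np.isclose(np.prod([np.linalg.det(E) for E in invs]),
                 np.linalg.det(A)))                # True
```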

I cannot do your second problem because it includes terminology I have never heard of.

I really understand the value of proof and its centrality to math, I just think that it helps a lot in proof to have a good "feel" for the subject matter through manipulation of it. It's the spirit of experimental mathematics.

Anyway I'm out of this thread.
 
  • #27
You need to prove to me that every linear map is the product of elementary ones, i.e. you need to pick a basis. Moreover, you have not demonstrated that the answer you got was independent of the choice of decomposition into elementary matrices, i.e. you have not proven that det(A) = prod(det(E_i)) (this would not be too hard, actually).

This 'basis dependent' proof also sheds no light on what a determinant (volume) is.

I think the point for me is that 'doing row operations' bears absolutely no relation to what 'interesting linear algebra' is, nor does it give any feel for why decent theorems are true. I'd also think it bears no relation to the type of maths that Jason will care about. Anything that fails to mention quotient spaces ought not to call itself a linear maths course (for mathematicians, which is Jason's viewpoint).

I can certainly not imagine any mathematician I know saying that this kind of stuff (row operations) was what made them want to do maths for a career. There are definitely things that people think of as being elegant in maths.

And finally, you asserted that your book (Lay) proved everything. If you're not happy with quotient spaces, then it certainly has not proved everything that it is reasonable to expect to be taught in a course that Jason would benefit from.
 
  • #28
0rthodontist said:
Shmoe, of course I know that abstract algebra is "really nothing like what they call algebra in high school." I would like you to stop insulting me. I am taking an honors course in formal language theory.

I did not mean it as an insult; you haven't taken a course in abstract algebra and your perception of it is way off ("meaningless fiddling with symbols"), so it looked to me like you had no clue yet what abstract algebra entails. Don't take this as an insult either; I obviously don't have much more than a few sentences to judge your abstract algebra knowledge by, and there would really be nothing wrong with knowing next to nothing about a subject you haven't studied in depth.

0rthodontist said:
Perhaps there is also an easy proof in abstract terms (terms I have never heard of) but the fact is there also exists an easy proof in concrete terms.

Proof that det(AB) = det(A)det(B):

Again, I urge you to take a look at the approach in Axler's paper towards determinants. It's much more motivated in my opinion.

matt grime said:
I think the point for me is that 'doing row operations' bears absolutely no relation to what 'interesting linear algebra' is, nor does it give any feel for why decent theorems are true. I'd also think it bears no relation to the type of maths that Jason will care about. Anything that fails to mention quotient spaces ought not to call itself a linear maths course (for mathematicians, which is Jason's viewpoint).

Agreed. I found a copy of the text Jason picked up, Friedberg et al., sitting on my shelf (I didn't even know I had it). At first glance it looks decent, and you'll be at least somewhat pleased to know that quotient spaces at least make an appearance in the exercises (starting in the first chapter), which he will certainly do now if he's paying attention here.
 
  • #29
shmoe said:
...I found a copy of the text Jason picked up, Friedberg et al., sitting on my shelf (I didn't even know I had it). At first glance it looks decent, and you'll be at least somewhat pleased to know that quotient spaces at least make an appearance in the exercises (starting in the first chapter), which he will certainly do now if he's paying attention here.

The book is fine if you don't go through each chapter.

We are going through determinants right now. WHY? I don't know. This is my third linear algebra course, and I still haven't got much further than the first course.

They keep reviewing everything. It's goddamn annoying. Sure, I won't remember everything, but I know it's my responsibility to go over last year's work. For God's sake, we went over the definition of a basis. If a student doesn't know this, they should be forced to drop the course.
 
  • #30
Well, I said I'm out of this thread, and I am, but I don't think there's any harm in finishing up the purely mathematical discussion.
matt grime said:
You need to prove to me that every linear map is the product of elementary ones, i.e. you need to pick a basis. Moreover, you have not demonstrated that the answer you got was independent of the choice of decomposition into elementary matrices, i.e. you have not proven that det(A) = prod(det(E_i)) (this would not be too hard, actually).
I'm not absolutely sure what your objection is, but if you're asking how I know that A can be decomposed into E_k ... E_1 I: well, we know A can be row reduced to I since it has rank n, and we know row reductions are reversible, so I can be changed through row operations back to A, which means A can be written E_k ... E_1 I. The original proof shows that det(A) = prod(det(E_i)), because prod(det(E_i)) = det(prod(E_i)) (by the principle shown in the induction) = det(A).
 
  • #31
That was what you needed to do, now all you need to do is prove that determinants behave as you claim they do under elementary row operations. Why does adding row 1 to row 2 leave the determinant unchanged? I don't think it is obvious that it does.
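For reference, the standard two-line argument, assuming det is already known to be linear in each row and to vanish when two rows coincide (facts that themselves need proof from a cofactor definition, which is exactly the point under dispute):

{\rm det}(r_1,\, r_1 + r_2,\, r_3, \ldots, r_n) = {\rm det}(r_1, r_2, \ldots, r_n) + {\rm det}(r_1, r_1, r_3, \ldots, r_n) = {\rm det}(r_1, r_2, \ldots, r_n) + 0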
 
  • #32
Proving the row operation properties of determinants is an entirely separate question.
 
  • #33
It is not entirely separate, and is in fact absolutely equivalent to the stated problem (in the sense that if det is a multiplicative homomorphism then this implies the row operation result, and the row operation result you have shown implies the det result for all matrices).

So your elementary method of using purely row and column operations requires us to believe this result. I have no problem with appealing to a simpler result, but this result is equivalent to what we want to prove. Can you at least indicate how row and column operations behave properly with respect to determinants in your method?

I presume that you're using the definition of det as expanding by cofactors, although you haven't said, so it would also rely upon us assuming that this definition actually gave zero if the rank of A were less than n too, wouldn't it? And how do you justify that this expression of det is the volume change factor?

You probably feel I'm unduly playing devil's advocate. Perhaps I am. Perhaps I am being completely unfair to you on this one; I certainly can't discount that possibility.
 
  • #34
Well, you know that Rank A < n implies that det A = 0 because if Rank A < n, A is row reducible to an upper triangular matrix with a 0 on the diagonal, whose determinant is 0, and row operations do not change nonzero determinants to zero determinants.

Proving the row operations properties is more difficult, and I can't figure it out myself. Lay proves them, though, by an induction on the size of A.
 
  • #35
It is terribly gross, but not conceptually difficult, to fill in all the questions matt is asking.

A question for matt, though: have you never seen the details of this sort of thing worked out? If not, stay that way. I had them forced on me in my first linear algebra course, and I've since had to force them on students; it's not enjoyable from either end.
 
  • #36
I've never seen these details worked out, nor have I any desire to see them written out.

I am perfectly willing to accept that if one defines det of a matrix by some cofactor expansion method, one can prove these elementary facts, but I do not see 0rthodontist proving any of them. All I see is a reference to an equally horrible-to-prove fact. For instance, from the last post we now have to prove that row operations cannot turn nonzero determinants into zero determinants.

All that being said, even if we demonstrate that these pulled-from-nowhere cofactor definitions do satisfy these results, we still do not explain why on Earth this has any bearing on the idea of volume change.

If we adopt the exterior algebra point of view it is absolutely trivial, and trivial to demonstrate that the determinant satisfies

{\rm det}(a_{ij}) = \sum_{\sigma \in S_n} {\rm sign}(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)}
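A brute-force transcription of that sum, as a sketch (it enumerates all of S_n, so only for tiny matrices), checked against numpy:

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    """Sign of a permutation, computed from its inversion count."""
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    """det(a_ij) = sum over sigma in S_n of sign(sigma) a_{1,sigma(1)} ... a_{n,sigma(n)}."""
    n = A.shape[0]
    return sum(sign(s) * np.prod([A[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

A = np.random.default_rng(1).random((4, 4))
print(np.isclose(leibniz_det(A), np.linalg.det(A)))  # True
```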
 
  • #37
There are three types of elementary row operations:

1. Switch two rows
2. Multiply a row by a constant
3. Add one row to another

Each such operation has a very simple matrix representation (they look "more or less" like the identity matrix; see the examples below). It's easy to show that each such matrix is invertible. So row operations cannot turn nonzero determinants into zero determinants.
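For n = 3, one instance of each kind (a sketch: swap rows 1 and 2; scale row 2 by c \neq 0; add row 1 to row 3):

E_{\rm swap} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad E_{\rm scale} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad E_{\rm add} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}

Each is invertible: a swap is its own inverse, scaling by c is undone by scaling by 1/c, and adding row 1 to row 3 is undone by subtracting it (replace the off-diagonal 1 by -1).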

Anyway, det(A)det(B) = det(AB) is a result from a first course in linear algebra; in fact, I think I might have done it in high school algebra. Would it make sense to teach exterior algebras in high school?
 
  • #38
Would it make sense to teach nxn matrices for all n, and proper proofs, in high school? No: you offer a justification, wave your hands, and claim it is OK, like all maths at that level. There is a difference between using a result, justifying a result, and proving a result. We use real numbers in high school; who was taught that they are the unique complete totally ordered field?

The question is about *proving* that determinants behave properly, not accepting and using the fact that they do. Besides, I still note 0rthodontist has not given his definition of determinant. Is my guess of cofactors correct? I also fail to see why your justification that multiplying by an invertible matrix doesn't make nonzero determinants zero is valid *without assuming that determinants behave properly*.
 
  • #39
matt grime said:
I also fail to see why your justification that multiplying by an invertible matrix doesn't make nonzero determinants zero is valid *without assuming that determinants behave properly*.
Oh you're right, it doesn't. When I wrote that, I forgot that he was trying to prove det(A)det(B) = det(AB) in the first place. The rest of my post was written after I saw that he was trying to prove that, but by that point I wasn't thinking about the stuff I had just written about elementary matrices.
 
  • #40
Well, basically the only unproven point in the proof I gave is the row reduction properties of determinants. Everything else was clear.

Yes, the cofactor definition is the one I am using.

AKG's argument does not depend on determinants. If A and B are invertible then AB is invertible, because AB(B^-1)(A^-1) = I.

Anyway, can you prove that the exterior algebra view is equivalent to the cofactor view in a few words, or would that also take a page or two? (this is a rhetorical question since I likely would not understand your proof)
 
  • #41
ridiculous comment #509:

my favorite treatment of determinants is to prove first the formula for how an exterior power commutes with direct sums:

i.e. the r-th wedge power of a direct sum of two modules is isomorphic to the direct sum of the tensor products of all pairs of lower wedge powers (s, t) of the two modules such that s + t = r.

This implies by induction the existence and uniqueness of determinants subject to the usual alternating axioms, a formula for them, their multiplicativity property, and the computation of the exterior powers of all finite free modules.
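In symbols, the direct-sum formula just described reads:

\Lambda^r(M \oplus N) \;\cong\; \bigoplus_{s+t=r} \Lambda^s(M) \otimes \Lambda^t(N)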

This treatment is contained in the notes for Math 845-3, page 56, on my webpage, for free, for persons not sufficiently challenged by their own linear algebra courses.

These consequences of the theorem are proved there in 2 pages, and then the theorem itself is proved in 3 further pages.
 
  • #42
0rthodontist said:
Well, basically the only unproven point in the proof I gave is the row reduction properties of determinants. Everything else was clear.

but those are the only things you need to prove.

AKG's argument does not depend on determinants. If A and B are invertible then AB is invertible, because AB(B^-1)(A^-1) = I.

Read AKG's own reply to my post. Or consider the following: the position stated is that given X, and some invertible operation on X to get Y, then X is not zero iff Y is not zero. Now put X = 1, take the operation to be adding -1, and see what happens. If you don't prove that this invertible operation actually behaves properly with respect to the property of 'being zero' then you can't use that as a proof.

Anyway, can you prove that the exterior algebra view is equivalent to the cofactor view in a few words, or would that also take a page or two? (this is a rhetorical question since I likely would not understand your proof)

Look at the formula I gave for the determinant: it is an expression of degree n monomials in the entries of the matrix, and S_n acts by changing signs (this is equivalent to swapping rows/cols), hence they are the same quantity (look at mathwonk's uniqueness property in his notes).

The properties of S_n's action also tell you that elementary row ops do what you think, and that det is a multiplicative homomorphism, and the definition of volume means that det corresponds to the scale factor.
 
  • #43
matt grime said:
but those are the only things you need to prove.
I just said that.

Anyway Lay proves it. I'll give his proof here when I have a little time.
Read AKG's own reply to my post. Or consider the following: the position stated is that given X, and some invertible operation on X to get Y, then X is not zero iff Y is not zero. Now put X = 1, take the operation to be adding -1, and see what happens. If you don't prove that this invertible operation actually behaves properly with respect to the property of 'being zero' then you can't use that as a proof.
Noninvertible matrices have zero determinants... well, perhaps this fact does depend on determinants. Anyway, I can use the row operation properties of determinants because I'm not trying to prove those. In that case saying that row operations do not take nonzero determinants to zero determinants is just a matter of looking at the constant they multiply the determinant by, which is what I originally intended.

Look at the formula I gave for the determinant: it is an expression of degree n monomials in the entries of the matrix, and S_n acts by changing signs (this is equivalent to swapping rows/cols), hence they are the same quantity (look at mathwonk's uniqueness property in his notes).

The properties of S_n's action also tell you that elementary row ops do what you think, and that det is a multiplicative homomorphism, and the definition of volume means that det corresponds to the scale factor.
You lost me... What is S?
 
  • #44
S_n is the permutation group on n elements.

So, your proof of the row operation result is going to rest on some result you can't prove that is exactly equivalent to what you need to prove? It is perfectly reasonable to ask you to prove that, especially since you're making claims about the elementary nature of it. You cannot use something that is seemingly harder to prove, and a proof of which you cannot provide, to prove this in a manner that satisfies my curiosity about your position. If indeed it is all a simple matter of just manipulating rows, let's see it.

Revisiting AKG's point (you did read his own reply, right?): you cannot say that since A is invertible, det(AB) is not zero iff det(B) is not zero, without assuming several things, not least that "not invertible" is the same as "det 0" in your definition, and seemingly that det is multiplicative, since you're relying on the fact that xy = 0 iff x = 0 or y = 0. Thus we see this would fail in many ways for something other than matrices over a field (invertible over Z iff det is ±1, for instance).
 
  • #45
matt grime said:
So, your proof of the row operation result is going to rest on some result you can't prove that is exactly equivalent to what you need to prove? It is perfectly reasonable to ask you to prove that, especially since you're making claims about the elementary nature of it. You cannot use something that is seemingly harder to prove, and a proof of which you cannot provide, to prove this in a manner that satisfies my curiosity about your position. If indeed it is all a simple matter of just manipulating rows, let's see it.
No: my proof that rank(A) < n implies det(A) = 0, which is part of my proof that det(AB) = det(A)det(B) and not part of the row operation result, is what rests on the row operation result.

Lay's proof of the row operation result is slightly tricky, not a simple matter of manipulating rows, and it rests on the theorem that you can expand a determinant along any row or column, which he does not prove because he states it would be a lengthy digression. So I have to find a proof for that before I can give you Lay's proof of the row operation result.

... GIVEN the row operation result, everything else is simple.
 
  • #46
0rthodontist said:
... it rests on the theorem that you can expand a determinant along any row or column, which he does not prove because he states it would be a lengthy digression.

I know it's in Nicholson's "Linear Algebra with Applications" if you are really interested. It's another horrid but not difficult thing that, while very important to this scheme of defining determinants, is not usually covered in detail. I know I skipped it when I had to teach determinants this way. While it's distasteful to force students to accept results on my word, nothing would have been gained by covering the details. If you hope to prove everything rigorously this way you cannot avoid the horror.
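The statement in question (expansion along any row gives the same answer) is at least easy to check numerically. A sketch, with the minors expanded recursively along row 0 and numpy's det as the referee:

```python
import numpy as np

def cofactor_expand(A, i=0):
    """Laplace (cofactor) expansion of det(A) along row i."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    minor = lambda r, c: np.delete(np.delete(A, r, axis=0), c, axis=1)
    return sum((-1) ** (i + j) * A[i, j] * cofactor_expand(minor(i, j))
               for j in range(n))

A = np.random.default_rng(2).random((5, 5))
vals = [cofactor_expand(A, i) for i in range(5)]  # expand along each row
print(np.allclose(vals, np.linalg.det(A)))        # True: every row agrees
```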
 
  • #47
0rthodontist said:
... GIVEN the row operation result, everything else is simple.


To summarize: your position is that manipulating rows and columns gives you all the understanding you need in linear algebra, yet it is sufficient to prove an elementary result, det(AB) = det(A)det(B), only providing we are willing to accept a result that is tricky and not provable by manipulating rows? And you wonder why I think your position on linear, and abstract, algebra is not tenable...
 
  • #48
I mean everything else in the proof is simple. I never asserted that row operations are all you need, and anyway I have left that whole discussion.
 