0rthodontist said:
I understand almost all of it. The "line n follows from ..." type stuff is how I currently perceive purely abstract algebra: meaningless fiddling with symbols. My opinion may change when I take a course in abstract algebra, or it may not, but at any rate I can picture how the operations are actually performed, and this leads easily to proof.
Don't let the fact that you've no experience of what "abstract algebra" is stop you from making sweeping and dismissive statements about it then.
This reminds me of something I posted a while ago,
https://www.physicsforums.com/showthread.php?t=106101. It's an example where I didn't actually manage to get the proof, but it happened to be a very simple idea that you could never get without thinking in terms of row operations.
that has an elementary proof in 'purely abstract' terms, as shmoe points out. If the columns sum to one then (1,1,...,1) is a left eigenvector with eigenvalue 1 (ie 1 is an eigenvalue of A^t, hence of A, since A and A^t have the same characteristic polynomial), hence the answer.
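shmoe's observation is easy to sanity-check numerically. A minimal sketch with numpy (the matrix here is just a random column-stochastic example, not anything from the linked thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 4x4 matrix whose columns each sum to 1.
A = rng.random((4, 4))
A /= A.sum(axis=0)

ones = np.ones(4)

# (1, 1, ..., 1) is a left eigenvector with eigenvalue 1: ones @ A == ones,
# because ones @ A is just the vector of column sums.
print(np.allclose(ones @ A, ones))  # True

# Equivalently, 1 is an eigenvalue of A^t, hence of A
# (they share a characteristic polynomial).
eigvals = np.linalg.eigvals(A)
print(np.isclose(eigvals, 1).any())  # True
```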
Oh, so interest in actual manipulative skill makes you somehow "not genuinely interested in math"?
Sometimes it is necessary to get one's hands dirty to figure out why something is true, perhaps by verifying a proposition for a few cases to see how a proof might run in general; however, an 'interest' in manipulation could be taken to mean something entirely different.
And I suppose holding a driver's license disqualifies you as an automotive engineer.
I don't see how that is remotely justifiable as an analogy. Perhaps (and this is genuinely tongue in cheek) a better analogy would be 'the ability to change a tyre doesn't make you an automotive engineer'. In any case, I don't think you've understood shmoe's position.
I said I loved my course on linear algebra and I do. My average in that class was above 100%,
and who says there's no such thing as grade inflation these days?
and a few weeks ago I looked at a few problems in linear algebra from the graduate qualifying exam at my school and found that I could do them.
So you presumably understand what a quotient space is then?
There are exactly, what, two things (ie proofs) one needs to be taught in linear algebra:
dim(U+V) = dim(U) + dim(V) - dim(U∩V)
and, for a linear map f:M-->M with M finite-dimensional, the rank-nullity theorem:
dim(M) = dim(Im(f)) + dim(Ker(f)).
(Note the stronger statement that M = Im(f) + Ker(f) with the sum direct is false in general -- take f nilpotent -- though it does hold when Im(f)∩Ker(f) = 0.)
Ok, probably throw in Sylvester's law of replacement; call it 3 results.
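One caution on the second result: the sum Im(f) + Ker(f) need not be direct, and need not be all of M -- only the dimension count dim(Im(f)) + dim(Ker(f)) = dim(M) is guaranteed. A quick numpy check with the standard counterexample, the nilpotent shift on R^2, where Im(f) and Ker(f) coincide:

```python
import numpy as np

# Nilpotent shift on R^2: f(e1) = 0, f(e2) = e1.
f = np.array([[0.0, 1.0],
              [0.0, 0.0]])

im_basis = f[:, [1]]                  # f(e2) = e1 spans Im(f)
ker_basis = np.array([[1.0], [0.0]])  # f(e1) = 0, so e1 spans Ker(f)

# Im(f) = Ker(f) = span(e1): their sum is 1-dimensional, so it is
# neither direct nor all of R^2.
sum_dim = np.linalg.matrix_rank(np.hstack([im_basis, ker_basis]))
print(sum_dim)  # 1

# The dimension count (rank-nullity) still holds: rank + nullity = 2.
rank = np.linalg.matrix_rank(f)  # dim Im(f) = 1, and dim Ker(f) = 1
print(rank)  # 1
```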
Finally, to give you some idea of why proper theorem/proof stuff is admirable and necessary in linear algebra, try to show that det(AB) = det(A)det(B) for arbitrary nxn matrices A and B. This is remarkably straightforward if we use the fact that det(M) (M a linear map from V to V) is the unique number d such that the induced map
M:Lambda^n(V)--> Lambda^n(V)
is multiplication by d on the top power of the exterior algebra.
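Whatever machinery one uses to prove it, the statement itself is easy to sanity-check numerically before attempting a proof; a throwaway numpy check on random matrices (sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5
A = rng.random((n, n))
B = rng.random((n, n))

# Multiplicativity of the determinant: det(AB) = det(A) det(B).
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.allclose(lhs, rhs))  # True
```

Of course, checking random cases is exactly the "hands dirty" step discussed above; the exterior-algebra argument is what makes it a proof.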
How about this, then:
let the permutation group S_n act on some vector space V of dimension n by permuting some basis. Show that the line L spanned by e_1+e_2+...+e_n is invariant (the e_i being the basis vectors) and hence that the quotient V/L is also S_n invariant. (These are called representations of S_n.) Try to write out a basis for V/L too.
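For anyone who wants to see the invariance concretely before proving it, here is a small numpy check for n = 3, with S_3 acting on R^3 by permutation matrices. Rather than working in V/L directly, it checks the sum-zero subspace, which is an invariant complement of L and is isomorphic to V/L as a representation:

```python
import numpy as np
from itertools import permutations

n = 3
# All six permutation matrices for S_3 (rows of the identity permuted).
perm_mats = [np.eye(n)[list(p)] for p in permutations(range(n))]

ones = np.ones(n)  # spans L = span(e1 + e2 + ... + en)

# Every permutation matrix fixes e1 + ... + en, so L is invariant.
print(all(np.allclose(P @ ones, ones) for P in perm_mats))  # True

# The sum-zero subspace {v : v1 + ... + vn = 0} is also invariant
# (permuting coordinates preserves the coordinate sum).  A basis:
# e1 - e2 and e2 - e3; their images give a basis of V/L.
basis = [np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])]
print(all(np.isclose((P @ v).sum(), 0.0)
          for P in perm_mats for v in basis))  # True
```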