matqkks said:
How much emphasis should be placed on proofs in a first course in Linear Algebra?
I sometimes feel that they (proofs) crowd out a coherent vision for linear algebra.
However, I also think a central theme of a linear algebra course is learning to reason, even though this does not always succeed.
The audience is first-year undergraduate students studying mathematics and physics, possibly extended to engineers. They generally struggle with the idea of proof.
Proofs are pretty central to all of mathematics: one of the reasons we place so much trust in mathematics as a discipline is the proof mechanism. If we didn't have such a system that people agree on, the whole thing would come down like a house of cards and we might as well not have mathematics at all.
Linear algebra is, more or less, the study of representing and analyzing linear spaces and the maps between them. A vector space is something whose elements behave like vectors or arrows, and a map f on it is linear when f(X + Y) = f(X) + f(Y) and f(aX) = af(X) for vectors X and Y and a scalar a.
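To make that concrete, here is a minimal numerical check in Python (assuming NumPy; the matrix A is just an arbitrary example) that a map of the form f(x) = Ax satisfies both identities:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # defines the map f(x) = A @ x
x, y = rng.standard_normal(3), rng.standard_normal(3)
a = 2.5

f = lambda v: A @ v

# f(x + y) == f(x) + f(y)  and  f(a x) == a f(x), up to rounding error
print(np.allclose(f(x + y), f(x) + f(y)))  # True
print(np.allclose(f(a * x), a * f(x)))     # True
```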
The vector space axioms pin down, in a rigorous way, what the addition and scalar multiplication over a field must satisfy, and once a space is shown to satisfy them we are guaranteed the linear structure that the two identities above rely on.
Then you get the problem of solving a system of linear equations, which leads into a whole lot of material, including analytic and numerical frameworks for finding solutions. There is also the topic of finding the best possible solution when the operator is singular (has no inverse), or close enough to singular, and this is where pseudo-inverses come in.
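As a small sketch of that (again assuming NumPy; the system below is an arbitrary overdetermined example), the pseudo-inverse gives the least-squares "best" solution when no exact one exists:

```python
import numpy as np

# Overdetermined system Ax = b: 4 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

x_pinv = np.linalg.pinv(A) @ b                    # Moore-Penrose pseudo-inverse
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # same answer via least squares

print(np.allclose(x_pinv, x_lstsq))  # True: both give the least-squares solution
```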
You also deal with the formal idea of reducing a linear system to a minimal representation; along the way you meet spanning sets and bases, clarify what dimension is, and see how it relates to other properties of linear maps.
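For instance, a quick way to see dimension computationally (NumPy assumed; the vectors are just an illustrative example):

```python
import numpy as np

# Three vectors in R^3, but the third is the sum of the first two,
# so together they span only a 2-dimensional subspace.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))   # 2: the dimension of the span
```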
Then, like every other area of mathematics, we look at possible decompositions of our structures, and this leads (somewhat indirectly) to eigen-decompositions. There are other decompositions too, but they belong to a later course.
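A minimal sketch of an eigen-decomposition (NumPy assumed, arbitrary symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# A acts on each eigenvector as plain scaling: A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True, True

# Reassemble A from its decomposition: A = V diag(lambda) V^{-1}.
V = eigvecs
print(np.allclose(V @ np.diag(eigvals) @ np.linalg.inv(V), A))  # True
```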
Then, on top of the vector space, you introduce normed and inner product spaces, the latter of which adds geometry (lengths and angles) to your space. From this you can deal with projections and introduce general decompositions of vector spaces equipped with an inner product.
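As one concrete case (NumPy assumed, vectors chosen arbitrarily), orthogonal projection onto a line uses nothing but the inner product:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

# Orthogonal projection of v onto the line spanned by u:
# proj_u(v) = (<v, u> / <u, u>) u
proj = (v @ u) / (u @ u) * u
residual = v - proj

print(np.isclose(residual @ u, 0.0))  # True: the residual is orthogonal to u
```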
This kind of thing underlies a lot of the material on Hilbert spaces and integral transforms, which concerns itself with things like Fourier analysis and wavelets.
Finally, linear operators form the basis for studying differentiation in multivariable calculus (both differential and integral) and also for tensor theory (generalized coordinate-system theories), which is pretty useful. The differential itself is a linear operator, and this is exploited in more general theories.
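To illustrate that last point (a sketch of my own, assuming NumPy; the function f and its Jacobian are made up for the example), the derivative at a point is the linear map that best approximates the function nearby:

```python
import numpy as np

# f: R^2 -> R^2; its derivative at a point is the Jacobian matrix,
# i.e. a linear map approximating f near that point.
def f(p):
    x, y = p
    return np.array([x**2 + y, np.sin(x) * y])

def jacobian(p):
    x, y = p
    return np.array([[2 * x,         1.0],
                     [np.cos(x) * y, np.sin(x)]])

p = np.array([1.0, 2.0])
h = np.array([1e-5, -2e-5])

# f(p + h) ~ f(p) + J(p) h for small h
approx = f(p) + jacobian(p) @ h
print(np.allclose(f(p + h), approx, atol=1e-8))  # True
```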
You may actually do the simplest case of tensor analysis, where the two geometries are completely flat (no curvature at all) and you have to transfer between two coordinate systems: given the components of a vector A in coordinate system C1, find its components in C2, where C1 and C2 are described by constant matrices. The extension to general systems is a lot easier to understand if you have been through this example.
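Here is roughly what that flat-coordinates exercise looks like numerically (NumPy assumed; C1 and C2 are arbitrary constant matrices whose columns are the basis vectors of each system):

```python
import numpy as np

# Columns of C1 and C2 are the basis vectors of two flat coordinate systems.
C1 = np.array([[1.0, 1.0],
               [0.0, 1.0]])
C2 = np.array([[2.0, 0.0],
               [0.0, 3.0]])

a_in_C1 = np.array([4.0, -1.0])      # components of the vector A in C1

A = C1 @ a_in_C1                      # the same vector in standard coordinates
a_in_C2 = np.linalg.solve(C2, A)      # its components in C2

print(np.allclose(C2 @ a_in_C2, C1 @ a_in_C1))  # True: same underlying vector
```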
Hopefully that will give you some idea.