I think I can write a shorter one than that:
linear algebra is about linear spaces, i.e. vector spaces, and linear maps between them. the first topic is therefore linear spaces. a (real) linear space V is a set of "vectors" closed under addition and multiplication by real numbers, which is an abelian group under addition (the usual properties of arithmetic hold, like associativity, commutativity, and existence of a zero and negatives) and has the expected properties under scalar multiplication (multiplication by 1 acts as the identity, multiplication distributes over addition, and a(bv) = (ab)v if a,b are numbers and v is a vector).
the basic example is R^n, the set of ordered n-tuples of real numbers, with componentwise addition and scalar multiplication, i.e. (v1,...,vn) + (w1,...,wn) = (v1+w1, v2+w2,...,vn+wn) and a(v1,...,vn) = (av1,...,avn).
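these componentwise operations are easy to sketch in code; here is a minimal illustration using plain tuples (the names vadd and smul are just illustrative, not standard):

```python
def vadd(v, w):
    """componentwise addition: (v1,...,vn) + (w1,...,wn)."""
    return tuple(vi + wi for vi, wi in zip(v, w))

def smul(a, v):
    """scalar multiplication: a(v1,...,vn) = (a*v1,...,a*vn)."""
    return tuple(a * vi for vi in v)

print(vadd((1, 2, 3), (4, 5, 6)))  # (5, 7, 9)
print(smul(2, (1, 2, 3)))          # (2, 4, 6)
```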
a subspace of V is a nonempty subset W closed under addition and scalar multiplication. given a subspace W of V we can define a new vector space V/W by identifying two vectors x,y in V whenever x-y lies in W.
given any two vector spaces V,W we can also define a new space V+W consisting of all ordered pairs (x,y) with x in V and y in W. addition and scalar multiplication are componentwise, as in R^n, which is merely the sum of n copies of the real numbers.
a map f from V to W is called linear if f(x+y) = f(x)+f(y) for all x,y in V, and f(ax) = af(x) for all x in V and all reals a.
an isomorphism is a linear map with a linear inverse.
exercise: any bijective linear map is an isomorphism.
given a space V and a subspace W, the map V-->V/W sending a vector to its equivalence class is a linear map sending all vectors in W to zero.
a "linear combination" of the vectors v1,...,vm,... is a vector which is a finite sum of multiples of the given ones, i.e. a vector of the form a1v1+...+amvm. a set of vectors "spans" or "generates" a vector space if every vector in the space is a linear combination of the given vectors, equivalently if the given vectors are not contained in any proper subspace. a space is called finite dimensional if it has a finite spanning set, or equivalently if there is a linear surjection from some R^n to that space. we restrict attention to finite dimensional spaces from now on.
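testing whether a given vector is a linear combination of some others is a concrete computation; here is a small numerical sketch (using numpy's matrix_rank as a stand-in for exact linear algebra: w lies in the span iff appending it does not raise the rank):

```python
import numpy as np

def in_span(w, vectors):
    """check numerically whether w is a linear combination of the given vectors."""
    M = np.array(vectors)
    return np.linalg.matrix_rank(np.vstack([M, w])) == np.linalg.matrix_rank(M)

v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_span(np.array([2.0, 3.0, 5.0]), [v1, v2]))   # True:  2*v1 + 3*v2
print(in_span(np.array([0.0, 0.0, 1.0]), [v1, v2]))   # False
```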
note: every vector in R^n is a linear combination of the standard unit vectors (1,0,...,0),...,(0,...,0,1), but none of these is a linear combination of the others. hence we call these the standard "basis", e1,...,en.
in any vector space we call a subset a "basis" if every vector is a linear combination of them, but no vector in the subset is a linear combination of the other vectors in that subset. an isomorphism between two spaces always takes a basis to a basis. in particular an isomorphism from R^n to V takes the standard basis of R^n to some basis of V. conversely, any basis v1,...,vn of V defines a unique isomorphism from R^n to V sending (a1,...,an) to a1v1+...+anvn.
thus an (ordered) basis for V may be thought of merely as an isomorphism of V with R^n. in other words, an ordered basis is a way to introduce linear coordinates into V, since each vector gets represented by a sequence of numbers, or coordinates.
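concretely, finding the coordinates of a vector in a given basis of R^n means solving one linear system; a small numerical sketch (the basis below is just an illustrative choice):

```python
import numpy as np

# basis vectors v1 = (1,0), v2 = (1,1) as the columns of B; the coordinates
# of x are the unique (a1,a2) with a1*v1 + a2*v2 = x, i.e. B @ a = x.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])
a = np.linalg.solve(B, x)
print(a)                       # [1. 2.], since (3,2) = 1*(1,0) + 2*(1,1)
assert np.allclose(B @ a, x)   # reconstruct x from its coordinates
```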
example: the space of polynomials of degree at most d has as basis the d+1 monomials 1,X,...,X^d. another basis is the d+1 polynomials 1, (1+X), (1+X+X^2),...,(1+X+...+X^d).
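we can check the second family is a basis (for d = 2, say) by writing its members in monomial coordinates: the resulting matrix is triangular with 1's on the diagonal, hence invertible, and solving gives the coordinates of any polynomial in the new basis. a numerical sketch with an illustrative polynomial, not one from the text:

```python
import numpy as np

# columns are 1, 1+X, 1+X+X^2 written in the monomial basis 1, X, X^2
B = np.array([[1.0, 1.0, 1.0],      # constant terms
              [0.0, 1.0, 1.0],      # X terms
              [0.0, 0.0, 1.0]])     # X^2 terms
p = np.array([2.0, 3.0, 1.0])       # p = 2 + 3X + X^2 in monomial coordinates
b = np.linalg.solve(B, p)
print(b)                            # [-1.  2.  1.]: p = -1 + 2(1+X) + (1+X+X^2)
```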
theorem: every finite dimensional space has a basis, i.e. admits an isomorphism with some R^n.
proof: choose any finite spanning set v1,...,vn. throw out any zero vectors. if v2 is a multiple of v1, throw out v2; if not, keep it. if v3 is a linear combination of v1,v2, throw it out; if not, keep it. continue going through the spanning set, throwing out any vector which is a linear combination of the previous ones. then the ones left are a basis: they still span, and none is a linear combination of the previous ones, hence none is a linear combination of any of the others (if one were, the vector of largest index appearing with nonzero coefficient would be a combination of previous ones). QED.
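the proof is really an algorithm, and can be sketched numerically (using numpy's matrix_rank to decide "is a linear combination of the ones kept so far"; zero vectors get thrown out automatically since they never raise the rank):

```python
import numpy as np

def extract_basis(vectors):
    """walk the spanning set, keeping each vector only if it is not a
    linear combination of the vectors already kept."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept.append(v)
    return kept

spanning = [np.array([1.0, 0.0, 0.0]),
            np.array([2.0, 0.0, 0.0]),   # multiple of the first: thrown out
            np.array([0.0, 1.0, 0.0]),
            np.array([1.0, 1.0, 0.0]),   # combination of previous: thrown out
            np.array([0.0, 0.0, 1.0])]
basis = extract_basis(spanning)
print(len(basis))                        # 3
```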
cor: we have proved that every finite generating set contains a basis. rephrasing this proof in terms of linear maps, it follows that any linear surjection from R^n to V which is not injective restricts to an isomorphism from some proper linear subspace (which may be regarded as an R^m with m < n) onto V.
cor: given any basis of V, there is a one-to-one correspondence between linear maps from V to W and set functions from the basis to W, i.e. every function on the basis extends uniquely to a linear map.
proof: this is true of R^n, hence of all finite dimensional spaces.
exercise: if V = R^n and W is the subspace spanned by en, then V/W is isomorphic to R^(n-1).
exercise: if f:V-->W is a linear map and ker(f) is the subspace of vectors sent by f to zero, then f is constant on equivalence classes in V/ker(f), hence defines a linear map V/ker(f)-->W, which is always injective, and is still surjective if f was.
we define a space to have dimension n if it is isomorphic to R^n. we claim the dimension of a space is "well defined", i.e. that a space cannot have two different dimensions. it suffices to show:
theorem: if R^n and R^m are isomorphic, then n=m.
proof: clearly there is no linear surjection from R^1 onto a higher dimensional space, since the image vectors of a linear map from R^1 to R^m all have proportional entries. suppose n<m and there were a linear surjection f: R^n --> R^m, and let e1,...,en and u1,...,um be the standard bases of these two spaces. then the composition
R^n --> R^m --> R^m/span(um) is a linear surjection onto a space isomorphic to R^(m-1), and it is not injective, since some nonzero vector maps into span(um). hence it restricts to a surjection from some lower dimensional subspace of R^n onto R^(m-1). by induction on n this is a contradiction. QED.
Cor: two spaces are isomorphic if and only if they have the same dimension.
Cor: all bases have the same cardinality.
we agree that the space {0} containing only the zero vector has dimension zero, and has the empty set as a basis.
theorem: if W is a subspace of V, then dim W + dim(V/W) = dim V.
proof sketch: choose a basis for W, and extend it to a basis of V. then the added vectors are a basis for V/W. QED.
theorem: if f:V-->W is a linear surjection, then dim ker(f) + dim W = dim V.
proof: f induces an isomorphism from V/ker(f) to W. QED.
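a numerical spot check of the theorem, on an illustrative surjection f: R^4 --> R^2 given by a rank-2 matrix (the kernel vectors below were found by hand from the two defining equations):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 3.0]])
assert np.linalg.matrix_rank(A) == 2    # rank 2, so f is surjective onto R^2

# two independent vectors in ker(f):
K = np.array([[-1.0, -1.0, 1.0, 0.0],
              [-2.0, -3.0, 0.0, 1.0]])
assert np.allclose(A @ K.T, 0)          # both lie in the kernel
assert np.linalg.matrix_rank(K) == 2    # and they are independent
print(2 + 2 == A.shape[1])              # dim ker(f) + dim W = dim V: True
```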
cor: dim(V+W) = dimV + dimW
proof: the projection taking (x,y) to y is a linear surjection from V+W to W with kernel V. QED.
definition: an indexed set of vectors {vi} is independent if the only linear combination of them that equals the zero vector has all coefficients equal to zero. equivalently, a1v1+...+anvn is not the zero vector whenever some ai is a nonzero scalar.
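in R^n this definition is decidable by a rank computation: the vi are independent iff the matrix with the vi as columns has rank equal to the number of vectors. a small numerical sketch:

```python
import numpy as np

def is_independent(vectors):
    """independent iff the only solution of a1*v1+...+an*vn = 0 is all ai = 0,
    i.e. iff the matrix with the vi as columns has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(is_independent([np.array([1.0, 0.0]), np.array([1.0, 1.0])]))  # True
print(is_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False
```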
lemma: every independent set is contained in a basis.
proof: given independent vectors v1,...,vn, add on any basis to get a generating set v1,...,vn,w1,...,wm. then applying the procedure in the theorem above for reducing a generating set to a basis does the job: since the vi are independent, none of them is thrown out. QED.
cor: if V has dimension n then an independent set of vectors in V has at most n vectors. if a set of vectors in V has more than n vectors, it is not independent. if dimV > dimW, then a linear map V-->W is not injective, and a linear map W-->V is not surjective.
proof: easy exercise.
exercise: if the sequence v1,...,vn is independent then the map R^n-->V taking (a1,...,an) to a1v1+...+anvn is injective. (this is a tautology.)
that pretty much finishes the theory of finite dimensional vector spaces and their dimension. the next topic is to classify linear maps between them.
i would do it by introducing modules over rings, row and column operations on matrices, and diagonalizing the matrix giving a finite presentation of the R[X] module structure on V defined by a linear endomorphism. but not right now.