Here's what I'm thinking.
Take the standard basis of the vector space V of square matrices: the matrix units E_ij, each with a single 1 and all other entries 0. If the matrices are d × d, then n = dim V = d^2, and I'll write this basis as {v_1, ..., v_n} with v_1 = E_11.
If you take a basis {v_1, ..., v_n} and replace v_k with c_1 v_1 + ... + c_n v_n, you still get a basis, as long as c_k is not 0. That is easy to prove, if needed.
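A quick numerical sanity check of that exchange step (just a sketch, assuming numpy; the dimension, the index k, and the random choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2                              # dimension, and the index being replaced
V = rng.normal(size=(n, n))              # rows are v_1, ..., v_n (a basis almost surely)
c = rng.normal(size=n)
c[k] = 1.0                               # force c_k != 0
W = V.copy()
W[k] = c @ V                             # replace v_k with c_1 v_1 + ... + c_n v_n
assert np.linalg.matrix_rank(W) == n     # the rows still form a basis
```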
Take some invertible matrix m_1 = c_1 v_1 + ... + c_n v_n in V such that c_1 ≠ 0, and replace v_1 with it (the identity matrix works: its coefficient on v_1 = E_11 is 1).

Now I'm going to build an invertible matrix m_2 = m_1 + a_2 v_2. Since v_2 has a single nonzero entry, det as a function of that one entry x (all other entries held fixed) is affine, so we can write it as kx + b. The thing to prove is that if kx + b ≠ 0, then there exists a ≠ 0 such that k(x + a) + b ≠ 0. The condition implies that k and b can't both be 0, so there is at most one value of a where k(x + a) + b = 0, and we can always choose a different nonzero one (this uses that the field has more than two elements, e.g. R or C). Thus we can always build m_2.

Likewise m_k = m_(k-1) + a_k v_k for k = 3, ..., n, which gives an invertible basis {m_1, ..., m_n}.
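To see the whole construction run, here is a sketch in Python (assuming numpy; picking a_k from {1, 2} is my own shortcut, justified by the fact that the affine function a ↦ det(m_(k-1) + a v_k) has at most one root):

```python
import numpy as np

def invertible_basis(d):
    """Basis of the d x d matrices in which every basis element is invertible."""
    n = d * d
    # v_1, ..., v_n: the matrix units E_ij, one 1 each (row-major, so v_1 = E_11)
    units = []
    for idx in range(n):
        u = np.zeros((d, d))
        u[idx // d, idx % d] = 1.0
        units.append(u)

    basis = [np.eye(d)]  # m_1 = identity: invertible, coefficient on v_1 is 1 != 0
    for k in range(1, n):
        prev = basis[-1]
        # det(prev + a * v_k) is affine in a and nonzero at a = 0,
        # so at most one value of a is bad: one of a = 1, 2 must work.
        for a in (1.0, 2.0):
            cand = prev + a * units[k]
            if not np.isclose(np.linalg.det(cand), 0.0):
                basis.append(cand)
                break
    return basis

d = 3
B = invertible_basis(d)
assert len(B) == d * d
assert all(not np.isclose(np.linalg.det(m), 0.0) for m in B)   # all invertible
M = np.stack([m.ravel() for m in B])                           # one flattened matrix per row
assert np.linalg.matrix_rank(M) == d * d                       # still a basis of V
print(f"{len(B)} invertible basis matrices for {d} x {d}")
```

The final rank check confirms that the m_k, flattened into vectors, still span all of V, matching the exchange argument above.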