If you consider the columns (or rows) of an n×n matrix to be n-dimensional vectors, then the determinant of the matrix gives the "signed volume" of the n-dimensional parallelepiped defined by those vectors. In a sense, then, it gives the "magnitude" of the matrix, multiplied by a sign that indicates whether the orientation of the vectors is "right-handed" or "left-handed" with respect to the underlying coordinate system.
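As a concrete sketch of the signed-volume idea in two dimensions (the function name `det2` is just an illustrative helper, not a standard routine):

```python
# det2 computes the 2x2 determinant of the matrix with columns
# (a, c) and (b, d): the signed area of the parallelogram they span.
def det2(a, b, c, d):
    return a * d - b * c

# Unit square: area 1, right-handed orientation.
print(det2(1, 0, 0, 1))   # 1
# Swapping the two column vectors flips the orientation, so the sign flips.
print(det2(0, 1, 1, 0))   # -1
# Doubling one edge doubles the (signed) area.
print(det2(2, 0, 0, 1))   # 2
```

The sign change under a swap of columns is exactly the "handedness" mentioned above.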
If any one of the vectors lies in the (n-1)-dimensional space spanned by the remaining (n-1) vectors, then the whole parallelepiped is "squashed" to zero volume (consider the three-dimensional case, where you have three vectors all lying in the same plane). A matrix composed of such vectors, then, represents a degenerate system of equations, where the equations are not all independent (and thus permit infinitely many solutions). One can consider the matrix equation
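The three-dimensional "squashed" case can be checked directly with a cofactor expansion (the helper `det3` below is an illustrative sketch, not a library routine):

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix,
    # given as a list of rows.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns (1,0,0), (0,1,0), and (1,1,0) all lie in the z = 0 plane,
# so the parallelepiped they span is flat and has zero volume.
flat = [[1, 0, 1],
        [0, 1, 1],
        [0, 0, 0]]
print(det3(flat))  # 0
```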
\mathbf{AX} = \mathbf{B}
as a single, abstract entity, with the solution
\mathbf{X} = \mathbf{A^{-1}B}
In this case, the determinant of A can again be regarded as a sort of "magnitude". If it is zero, then the equation has either no solution or infinitely many solutions X, in an analogous sense to the one-dimensional equation
0x = b
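Both behaviors of 0x = b (infinitely many solutions when b = 0, none when b ≠ 0) have matrix counterparts. A small sketch with a singular 2×2 matrix (the example matrix and the helper `matvec` are chosen for illustration):

```python
def matvec(A, x):
    # Multiply a 2x2 matrix (list of rows) by a 2-vector.
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

A = [[1, 2], [2, 4]]            # det = 1*4 - 2*2 = 0, so A is singular
# When b lies in the column space, a whole line of X solves AX = b:
b = [1, 2]
print(matvec(A, [1, 0]) == b)   # True
print(matvec(A, [-1, 1]) == b)  # True
# When b does not: row 2 of A is twice row 1, but for b_bad = [1, 0]
# we would need 2*1 = 0, so no X satisfies both equations.
```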
Furthermore, the expression
\mathbf{A^{-1}B}
yields Cramer's Rule when written out in terms of \det \mathbf{A} and the cofactors of \mathbf{A}.
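For the 2×2 case, writing out A^{-1}B in terms of the determinant and cofactors gives the familiar formulas; a minimal sketch (the function `cramer2` is an illustrative name):

```python
def cramer2(A, b):
    # Cramer's Rule for a 2x2 system: x_i = det(A_i) / det(A),
    # where A_i is A with column i replaced by b.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("singular matrix: no unique solution")
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

# x0 + 2*x1 = 5 and 3*x0 + 4*x1 = 11 have the solution x0 = 1, x1 = 2.
print(cramer2([[1, 2], [3, 4]], [5, 11]))  # [1.0, 2.0]
```

The numerators are exactly the cofactor expansions of the matrices obtained by substituting b for a column of A.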