How to understand it intuitively?

I have come across a theorem which says that det(A)=0 if and only if the row (or column) vectors of A are linearly dependent.

I can understand the proof of it, but I fail to figure out the essential meaning of this theorem. Can someone provide a simple intuitive understanding of it? By intuitive understanding, I mean something that lets one convince oneself of the result without doing much derivation.

Thanks in advance!
 
One way of looking at determinants is that they are simply the factor of contraction/expansion that a transformation matrix exerts on an n-dimensional volume in n-dimensional space. If the image of that transformation has lower dimension than the space in which it lives (its vectors are not linearly independent), then it flattens that n-volume down to zero.

Depending on what you are working on, you can extract a lot more information from the determinant of a linear transformation, as in the implicit function theorem, for example.
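
A minimal numerical sketch of the scale-factor idea, assuming numpy; the two matrices below are just made-up illustrations:

```python
import numpy as np

# A stretches the plane by 2 along x and 3 along y, so it scales every area by 6.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(np.linalg.det(A))  # 6.0: the area scale factor

# B squashes the whole plane onto a line (its columns are parallel),
# so the image of any 2-D region has zero area.
B = np.array([[1.0, 2.0],
              [1.0, 2.0]])
print(np.linalg.det(B))  # 0.0: the scale factor of a collapsing map
```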
 
I personally like this geometric interpretation: you can view the n x m matrix A as a linear transformation from U = R^m to V = R^n.
Then the i-th column, for example, gives the coordinates of the image of the i-th basis vector of U, expressed in some set of basis vectors of V. The determinant det(A) is some measure of scale, for example the scale factor of a parallelogram spanned by two vectors.

Then det(A) = 0 means that the parallelogram is collapsed to something one-dimensional (or lower). In a suitable choice of basis, two basis vectors are then mapped to the same image, up to a rescaling. Thus, two of the columns will be multiples of one another. This property itself is not preserved when you choose a different basis, but there will still be some column which is a linear combination of the others.

Since column and row space are related by transposition, the same argument holds for row space.
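
A small numpy sketch of the collapsed-parallelepiped picture; the columns below are chosen (arbitrarily) so that the third is the sum of the first two:

```python
import numpy as np

# The third column is a linear combination of the first two, so the
# parallelepiped spanned by the columns is flattened into a plane.
c1 = np.array([1.0, 0.0, 2.0])
c2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([c1, c2, c1 + c2])

print(np.linalg.det(A))          # ~0 (up to floating-point noise)
print(np.linalg.matrix_rank(A))  # 2: the image is only two-dimensional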
 
Thanks guys! I understand the theorem now.
 
Thanks guys.

After reading your posts, I came up with the following understanding:
Suppose the matrix A transforms the geometric shape (the unit cube) whose edges are the row vectors of the identity matrix I. The edges of the resulting shape are then the column vectors of A.

For the volume ratio to be zero, the resulting shape after the linear transformation must have lower dimension than the original shape; therefore at least one edge must lie in the plane of the other two edges. So one column vector of A must be a linear combination of the other two column vectors.
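
A rough numpy sketch of that picture (the matrix is made up so that its third column is the sum of the first two):

```python
import numpy as np

# Made-up matrix whose third column equals the sum of the first two.
A = np.array([[1.0, 4.0, 5.0],
              [2.0, 5.0, 7.0],
              [3.0, 6.0, 9.0]])

# The unit cube's edges are the standard basis vectors e_1, e_2, e_3;
# A maps each edge e_i to the i-th column of A.
for i, e in enumerate(np.eye(3)):
    print(A @ e, "=", A[:, i])

# The image parallelepiped is flattened into a plane, so its volume
# (|det A| times the unit cube's volume) is zero.
print(np.linalg.det(A))   # ~0
```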
 
The way I sort of see this to be true has to do with row reduction. You know that if you add a multiple of one row or column to another, you don't change the determinant. You also know that if the rows or columns are dependent, you will be able to get a row of zeros by doing this.

And you know that the determinant of any matrix with a row or column of zeros MUST be zero due to the way it's computed... so...

More intuitively, I guess in the 3 by 3 case it's sort of like asking what the volume of a subset of a plane is... Think of the definition of the volume of a parallelepiped (the scalar triple product of the 3 vectors), i.e. the determinant of the matrix with each of the 3 vectors as a row. If this set of 3 vectors is dependent, then the parallelepiped (well, it won't be a parallelepiped in this case) will lie in a plane (or on a line), and its volume will naturally be 0.
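
A quick numpy sketch of the row-reduction point; the rows below are made up so that the third is a combination of the first two:

```python
import numpy as np

# Dependent rows: the third row equals row1 + 2*row2.
M = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 4.0, 5.0]])
print(np.linalg.det(M))      # ~0

# Adding a multiple of one row to another doesn't change the determinant,
# and for dependent rows it eventually produces a row of zeros.
M2 = M.copy()
M2[2] -= M2[0] + 2 * M2[1]   # eliminate the third row
print(M2[2])                 # [0. 0. 0.]
print(np.linalg.det(M2))     # a zero row forces det = 0
```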
 
The determinant computes the hyper-volume of the parallelepiped spanned by its rows.

It is an exercise in Euclidean coordinate geometry to show that the algebraic formula for the determinant is the formula for that volume.
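
As a quick numerical check of that claim in 3 dimensions (arbitrary made-up vectors; numpy assumed), the determinant of the matrix with rows a, b, c agrees with the scalar triple product a . (b x c):

```python
import numpy as np

# Three arbitrary row vectors, chosen only for illustration.
a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])

triple = np.dot(a, np.cross(b, c))         # signed volume via a . (b x c)
det = np.linalg.det(np.vstack([a, b, c]))  # same volume via the determinant
print(triple, det)                         # both print 13.0
```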
 