Eigenvalues Redux: Deriving the Product of Eigenvalues = Determinant

In summary, the product of the eigenvalues of a matrix equals its determinant. For a diagonalizable matrix this can be proven by diagonalizing the matrix and using the fact that the determinant is independent of basis. However, not all matrices are diagonalizable, and in the general case one can look at the constant term of the characteristic polynomial, which equals the product of the eigenvalues (counted with multiplicity) up to a sign of (-1)^n. This can also be seen geometrically: the determinant is the oriented n-volume of the transformed basis relative to the old basis. The sum of the eigenvalues also appears in the characteristic polynomial, and equals the trace.
  • #1
gnome
In a recent thread
https://www.physicsforums.com/showthread.php?t=67366
matt and cronxeh seemed to imply that we should all know that the product of the eigenvalues of a matrix equals its determinant. I don't remember hearing that very useful fact when I took linear algebra (except in the case of diagonal matrices, where it's obvious), and I can't find it mentioned in the textbook. Sure enough, it's true for every matrix that I've tried, but I can't see how to derive it. Would one of you please show how?
 
  • #2
When you diagonalize a matrix (that is, when you write it in the basis of its own eigenvectors) the eigenvalues appear on the diagonal. Since the determinant of a diagonal matrix is the product of its diagonal entries, and the determinant of a matrix is independent of basis, that proves the point.
 
  • #3
Alas, not all matrices are diagonalizable. In the general case, take a look at the characteristic polynomial, especially its constant term.
 
  • #4
gnome said:
In a recent thread
https://www.physicsforums.com/showthread.php?t=67366
matt and cronxeh seemed to imply that we should all know that the product of the eigenvalues of a matrix equals its determinant. I don't remember hearing that very useful fact when I took linear algebra (except in the case of diagonal matrices, where it's obvious), and I can't find it mentioned in the textbook. Sure enough, it's true for every matrix that I've tried, but I can't see how to derive it. Would one of you please show how?
The determinant of a transformation is the oriented n-volume of the parallelotope formed by the transformed basis relative to the old basis.
I.e., let V be R^3 with the standard basis, and let T be a linear operator on V that doubles the magnitude of all vectors. Then T has the matrix representation (2,0,0;0,2,0;0,0,2). By the above definition, equivalent to the algebraic formula, the determinant is then 8. Note that inverting any one of the axes causes the oriented 3-volume of the parallelepiped to be negative, because the orientation of the axes has changed (you cannot rotate the old basis into the new basis if the new basis has inverted an odd number of axes). Zeroing any one of the basis vectors causes the 3-volume to go to 0, as we do not want to confuse 2-volume (area) with 3-volume.
It is then immediately apparent that if the transformed basis is just a rescaling of the old basis, then the oriented n-volume of the new basis with respect to the old basis is just the product of the scalars (the eigenvalues). Depending on the way you learned linear algebra, this geometric motivation for the determinant may not have been made apparent.
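A tiny numerical illustration of these three cases (a minimal sketch, assuming numpy is available):

[code=python]
# Scaling every axis by 2 gives oriented 3-volume 2*2*2 = 8; inverting
# one axis flips the sign; zeroing one axis collapses the volume to 0.
import numpy as np

T = 2.0 * np.eye(3)
print(np.linalg.det(T))                      # 8.0

T_flipped = np.diag([2.0, 2.0, -2.0])        # one axis inverted
print(np.linalg.det(T_flipped))              # -8.0

T_collapsed = np.diag([2.0, 2.0, 0.0])       # one basis vector zeroed
print(np.linalg.det(T_collapsed))            # 0.0
[/code]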
 
  • #5
Alas, indeed. I've been reading and re-reading the theorems about which matrices are diagonalizable, and I'm more or less comfortable with them. But it appears that the answer to my question is going to be much more complex than I expected.

For example, this cryptic statement
shmoe said:
In the general case, take a look at the characteristic polynomial, especially its constant term.
Please elaborate.
 
  • #6
hypermorphism said:
The determinant of a transformation is the oriented n-volume of the parallelotope formed by the transformed basis relative to the old basis. It is then immediately apparent that if the transformed basis is just a rescaling of the old basis, then the oriented n-volume of the new basis with respect to the old basis is just the product of the scalars (the eigenvalues). Depending on the way you learned linear algebra, this geometric motivation for the determinant may not have been made apparent.
That's easy for you to say. :yuck:

Maybe I'll just accept it & worry about the details some other time.
 
  • #7
The characteristic polynomial of an n×n matrix A is det(xI-A) (or det(A-xI) in some books).

Suppose this thing factors completely into linear factors:

[tex]det(xI-A)=(x-\lambda_1)\dots (x-\lambda_n)[/tex]

Then the lambdas are the eigenvalues. The constant term on the left can be found by sticking in x=0 to get det(-A). The constant term on the right is just the product of the lambdas times (-1)^n (or stick x=0 into the right side as well). That's your result.

If you're looking at a matrix with real coefficients and its characteristic polynomial doesn't factor completely over the reals, you can factor it over the complex numbers and get the result if you are willing to allow complex eigenvalues.

If you wanted to beat this thing down with a big hammer, you could look up Jordan Form, or Schur's theorem about matrices being similar to upper triangular ones (it doesn't always carry Schur's name).

For a bonus lollipop, can you see where the sum of the eigenvalues appears in the characteristic polynomial?
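If you want to see the identity numerically before proving it, here is a minimal sketch (assuming numpy; a random real matrix may have complex eigenvalues, but they come in conjugate pairs, so the product comes out real up to rounding):

[code=python]
# Numerical sanity check: product of eigenvalues = determinant,
# and (the bonus) sum of eigenvalues = trace.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

vals = np.linalg.eigvals(A)                   # complex in general

print(np.prod(vals).real, np.linalg.det(A))   # agree to rounding error
print(np.sum(vals).real, np.trace(A))         # likewise
[/code]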
 
  • #8
hypermorphism said:
It is then immediately apparent that if the transformed basis is just a rescaling of the old basis, then the oriented n-volume of the new basis with respect to the old basis is just the product of the scalars (the eigenvalues). Depending on the way you learned linear algebra, this geometric motivation for the determinant may not have been made apparent.

You're assuming here that you have a basis of eigenvectors, so this will only work for diagonalizable matrices. Maybe I'm missing something obvious?
 
  • #9
If A can be diagonalized, then it can be rewritten as [tex]A = PDP^{-1}[/tex], where D is the diagonal matrix of the eigenvalues of A, and P is the invertible matrix whose columns are the corresponding eigenvectors. The determinant of [tex]PDP^{-1}[/tex] is then the product of the eigenvalues on the diagonal, since [tex]\det(PDP^{-1}) = \det(P)\det(D)\det(P^{-1}) = \det(D)[/tex], and the trace is the sum of the eigenvalues, since [tex]\text{tr}(PDP^{-1}) = \text{tr}(DP^{-1}P) = \text{tr}(D)[/tex].

[tex]A[/tex] can be diagonalized if and only if the rank of the matrix formed by the eigenvectors of [tex]A_{n \times n}[/tex] is n. That is, there are n linearly independent eigenvectors of A.
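As a sketch of this (assuming numpy; the eigenvalues and the random P below are arbitrary choices), one can build A from a chosen D and check both identities:

[code=python]
# Build a diagonalizable A = P D P^{-1} from chosen eigenvalues,
# then verify det(A) = product and trace(A) = sum of those eigenvalues.
import numpy as np

eigenvalues = np.array([2.0, -1.0, 3.0])
D = np.diag(eigenvalues)

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))      # a random P is almost surely invertible
A = P @ D @ np.linalg.inv(P)

print(np.linalg.det(A), np.prod(eigenvalues))   # both -6.0 (up to rounding)
print(np.trace(A), np.sum(eigenvalues))         # both 4.0 (up to rounding)
[/code]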
 
  • #10
cronxeh said:
If A can be diagonalized, then it can be rewritten as [tex]A = PDP^{-1}[/tex], where D is the diagonal matrix of the eigenvalues of A, and P is the invertible matrix whose columns are the corresponding eigenvectors. The determinant of [tex]PDP^{-1}[/tex] is then the product of the eigenvalues on the diagonal, since [tex]\det(PDP^{-1}) = \det(P)\det(D)\det(P^{-1}) = \det(D)[/tex], and the trace is the sum of the eigenvalues, since [tex]\text{tr}(PDP^{-1}) = \text{tr}(DP^{-1}P) = \text{tr}(D)[/tex].

[tex]A[/tex] can be diagonalized if and only if the rank of the matrix formed by the eigenvectors of [tex]A_{n \times n}[/tex] is n. That is, there are n linearly independent eigenvectors of A.
Yeah, that part I got. I'm just trying to comprehend the situations where that doesn't apply, which I realize are simply not addressed at all in the texts (or the PARTS of the texts) that I've read.
 
  • #11
shmoe said:
You're assuming here that you have a basis of eigenvectors, so this will only work for diagonalizable matrices. Maybe I'm missing something obvious?
Heya shmoe,
You're right, that was too specific. I should have said "...if the transformed basis is just a rescaling of any basis of V,...", which segues right into the "diagonalizable" business.
 
  • #12
shmoe said:
The characteristic polynomial of an n×n matrix A is det(xI-A) (or det(A-xI) in some books).

Suppose this thing factors completely into linear factors:

[tex]det(xI-A)=(x-\lambda_1)\dots (x-\lambda_n)[/tex]

Then the lambdas are the eigenvalues. The constant term on the left can be found by sticking in x=0 to get det(-A). The constant term on the right is just the product of the lambdas times (-1)^n (or stick x=0 into the right side as well). That's your result.
One of my books refers to the characteristic equation as [itex]\text{det}(A-\lambda I) = 0[/itex] and the other states it as [itex]\text{det}(\lambda I - A) = 0[/itex]. I haven't gotten up to dealing with complex eigenvalues yet. So, sticking to your linear factors example:

it's been pretty clear to me that we can find the eigenvalues, assuming it factors nicely, by solving
[tex](\lambda-\lambda_1)\dots (\lambda-\lambda_n) = 0[/tex]
and clearly each of those [itex]\lambda_i[/itex] is an eigenvalue. I don't understand what you are doing with your "x=0 to get det(-A)" or "x=0 into the right side ..." leaving the product of all the [itex]\lambda_i[/itex]s.
 
  • #13
Either definition of the characteristic polynomial is fine; they only differ by a factor of (-1)^n.

So you're ok with

[tex]det(xI-A)=(x-\lambda_1)\dots (x-\lambda_n)[/tex]

Now substitute x=0 in on both sides:

[tex]det(0I-A)=(0-\lambda_1)\dots (0-\lambda_n)[/tex]

[tex]det(-A)=(-\lambda_1)\dots (-\lambda_n)[/tex]

(since A is n×n, remember how multiplying a matrix by a scalar affects its determinant: det(cA) = c^n det(A))

[tex](-1)^{n}det(A)=(-1)^{n}\lambda_1\dots \lambda_n[/tex]

[tex]det(A)=\lambda_1\dots \lambda_n[/tex]
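Here is the same substitution done symbolically (a sketch using sympy, whose charpoly convention is det(xI - A), so the constant term is (-1)^n det(A)):

[code=python]
# Symbolic version of the x = 0 substitution.
# sympy's charpoly returns det(x*I - A); evaluating at x = 0
# gives det(-A) = (-1)^n * det(A).
from sympy import Matrix, symbols

x = symbols('x')
A = Matrix([[1, 2], [3, 4]])

p = A.charpoly(x).as_expr()          # x**2 - 5*x - 2
print(p.subs(x, 0))                  # -2, i.e. (-1)^2 * det(A)
print(A.det())                       # -2
[/code]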
 
  • #14
Look up Jordan Normal Form (or equivalently Jordan Canonical Form) if you're really interested in this. The triangular form theorem would also be sufficient for a discussion of determinants.

Essentially, the theorem says that every n x n matrix A over [tex]\mathbb{C}[/tex] is similar to at least one upper triangular matrix. (The restriction to [tex]\mathbb{C}[/tex] is not part of the theorem as usually stated, and can easily be generalised, but you have to be a little bit careful: if the minimal polynomial of A does not factor into linear factors, the triangular form theorem does not apply. Also, "upper triangular" is actually weaker than the theorem's full statement, but it's okay for this discussion.) Obviously, in the special case that A is diagonalizable, we can just say A is similar to a diagonal matrix.

It follows, since similar matrices have the same characteristic polynomial, that the diagonal entries of such an upper triangular matrix must be A's eigenvalues (each repeated a number of times equal to its algebraic multiplicity). But the determinant of a triangular matrix is just the product of its diagonal entries, and similar matrices have the same determinant, so the determinant of A is the product of its eigenvalues, which answers your question.

There are quite a few results necessary before the triangular form theorem becomes obvious.
 
  • #15
here's a possible argument: an equation like det A = product of eigenvalues is true on a closed set. Thus, since diagonalizable matrices are dense, it is true for all matrices iff it is true for diagonalizable ones.

how about them apples?
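To see why density is plausible, here is a small numerical sketch (assuming numpy): perturb a non-diagonalizable Jordan block into a matrix with distinct eigenvalues, and watch the identity hold at every step of the limit.

[code=python]
# A Jordan block [[1,1],[0,1]] is not diagonalizable, but an arbitrarily
# small perturbation of it has two distinct eigenvalues and hence is.
# det = product of eigenvalues holds for each perturbation, and both
# sides converge to det(J) = 1 by continuity.
import numpy as np

J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

for eps in (1e-1, 1e-4, 1e-8):
    A = J + np.diag([0.0, eps])      # eigenvalues 1 and 1 + eps, distinct
    vals = np.linalg.eigvals(A)
    print(eps, np.prod(vals).real, np.linalg.det(A))
[/code]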
 
  • #16
shmoe, I'm not stuck at the end of your argument, but the beginning. You're starting with [itex]det(xI-A)=(x-\lambda_1)\dots (x-\lambda_n)[/itex] where x is some eigenvalue of A, right?

So, if you set x=0 in that equation to show that [itex]det(A)=\lambda_1\dots \lambda_n[/itex], haven't you proven it only for a matrix that has 0 as an eigenvalue?

edit: forget about this. For some reason I was looking at the derivation of this equation completely backwards, as if we were starting with these eigenvalues [itex]\lambda_1 \dots \lambda_n[/itex] and then ... etc etc
 
  • #17
Why would that be true? It's an equation, like any other. If 0 were an eigenvalue, you would get a factor of [itex]x[/itex] multiplying everything on the right, and would find [tex]\mbox{det}(A)=0[/tex].
 
  • #18
OK, I see that now. Thanks.
 
  • #19
The simple product of the eigenvalues is not the determinant - there are multiplicities to worry about.

The constant in det(A-xI) corresponds to the product of the roots of the characteristic polynomial, with multiplicity, assuming certain things about the base field containing enough roots of unity.
 
  • #20
This:
matt grime said:
The simple product of the eigenvalues is not the determinant - there are multiplicities to worry about.
is, I think, apparent if the RHS is simply a factorization of the LHS, right?

but I don't know what this:
matt grime said:
The constant in det(A-xI) corresponds to the product of the roots of the characteristic polynomial, with multiplicity, assuming certain things about the base field containing enough roots of unity.
means; in particular, what is the constant you are referring to, and can you state "the base field containing enough roots of unity" in my non-mathematician's English, or should I simply not worry about it?
 
  • #21
I'll just repeat what others have said.

If A is a matrix, then by definition the characteristic polynomial is a determinant,

namely det(A-X.Id).

as with any polynomial, the constant term is (up to a sign coming from the leading coefficient) the product of the roots, counted with multiplicities; for det(A-X.Id) the signs cancel exactly. (There is a unique field extension of the coefficient field generated by these roots, in which these roots live. If the product is taken there, the answer will nonetheless lie in the original field.)

So to be precise, as matt is pointing out, the correct statement should be that the constant term of the characteristic polynomial is the product of the roots, counted with multiplicities, of that same characteristic polynomial.

Now it is trivial that this constant term is the determinant of A because to get it we set X = 0, getting det (A-0.Id) = det(A).


moreover the roots of the characteristic polynomial are trivially eigenvalues, because each root r makes the determinant of A-r.Id equal to zero.

so it is pretty tautological really that the determinant is the product of the roots of the characteristic polynomial, i.e. the eigenvalues counted with suitable multiplicities.

it follows however that for any matrix with n distinct eigenvalues, their product is the determinant.
 
  • #22
Yes, thanks, I pretty much got all of that already (although I couldn't state it anywhere near as eloquently as you have), except for this one sentence:
(There is a unique field extension of the coefficient field generated by these roots, and in which these roots live. if the product is taken there, the answer will however lie in the original field.)
which I'm sorry to say goes right over my head. Don't even try to explain it. Just please tell me this: in which math course would this be covered?
 
  • #23
what is the constant you are referring to, and can you state "the base field containing enough roots of unity" in my non-mathematician's English, or should I simply not worry about it?

It has to do with the fact that, for example, over the real numbers the only roots of [tex]x^n = 1[/tex] are [tex] \pm 1[/tex] for even [itex]n[/itex] and [itex]1[/itex] for odd [itex]n[/itex], but over the complex numbers [tex]x^n = 1[/tex] has [itex]n[/itex] roots for every integer [itex]n > 0[/itex].

This is why some real polynomials don't have real roots (or have fewer than you would expect).
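For a quick illustration (a sketch using numpy's polynomial root finder):

[code=python]
# x^4 = 1 has only the two real roots +/-1, but over the complex
# numbers it has all four roots: 1, -1, i, -i.
import numpy as np

roots = np.roots([1, 0, 0, 0, -1])   # coefficients of x**4 - 1, highest first
print(roots)                         # [1, -1, i, -i] in some order
[/code]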
 
  • #24
Would that be covered in a number theory course?
 
  • #25
The statement mathwonk made would likely be proven in an abstract algebra course, at least one that gets into field extensions.

However, if you work over R (or any other subfield of C), you can simply wield the sledgehammer and appeal to the fact that any polynomial over C has all of its roots in C (the fundamental theorem of algebra).

(Something I really feel you should know in high school, but am probably being optimistic about)
 
  • #26
Hurkyl said:
...you can simply wield the sledgehammer and appeal to the fact that any polynomial over C has all of its roots in C. (Something I really feel you should know in high school, but am probably being optimistic about)
Perhaps I should go back and complain.
:rofl: :rofl: :rofl:

All kidding aside, while I'm sure you know that to be true, I don't think that, at my level, I have any basis to support that claim other than my ignorance as to the existence of any alternative possibility.
 
  • #27
While not every matrix can be diagonalized, every matrix over [tex]\mathbb{C}[/tex] can be written in "Jordan Normal Form", with its eigenvalues along the diagonal, either 1 or 0 on the "superdiagonal" (the entries just above the diagonal), and 0s everywhere else. It's easy to see that the determinant of such a matrix is just the product of the eigenvalues (counting multiplicity, of course).
 
  • #28
But aren't you now referring to the determinant of the reduced matrix, not the determinant of the original matrix?
 
  • #29
They're the same. The determinant of any matrix similar to A is equal to the determinant of A (which isn't hard to prove).
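A quick numerical spot check of that claim (a sketch, assuming numpy):

[code=python]
# Similarity preserves the determinant: det(P A P^{-1}) = det(A),
# since det(P) and det(P^{-1}) cancel.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))      # almost surely invertible

B = P @ A @ np.linalg.inv(P)         # similar to A
print(np.linalg.det(A), np.linalg.det(B))   # agree up to rounding
[/code]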
 
  • #30
Yep. I was thinking ... hell, I don't know what I was thinking.
 

1. What is the significance of eigenvalues in linear algebra?

Eigenvalues are a fundamental concept in linear algebra: they are the factors by which certain special vectors (the eigenvectors) are stretched or compressed when multiplied by a matrix. They are used to understand the behavior of linear transformations and to solve systems of linear differential equations.

2. How are eigenvalues and eigenvectors related?

Eigenvalues and eigenvectors are closely related: the eigenvectors are the nonzero vectors that a matrix merely scales, and the eigenvalues are the corresponding scale factors. In other words, an eigenvector is a vector whose direction is unchanged (or exactly reversed, for a negative eigenvalue) when multiplied by the matrix; it only changes in length by a factor of the corresponding eigenvalue.

3. Can the product of eigenvalues be calculated without finding the individual eigenvalues?

Yes, the product of the eigenvalues can be calculated without finding the individual eigenvalues: it equals the determinant of the matrix, which can be computed directly. This follows from the characteristic polynomial det(xI - A), whose constant term equals both (-1)^n times the determinant and (-1)^n times the product of the eigenvalues.

4. How is the product of eigenvalues related to the determinant of a matrix?

The product of the eigenvalues (counted with algebraic multiplicity) is equal to the determinant of the matrix. This can be seen by evaluating the factored characteristic polynomial det(xI - A) = (x - λ1)···(x - λn) at x = 0: the left side gives (-1)^n det(A) and the right side gives (-1)^n times the product of the eigenvalues.

5. Can the product of eigenvalues be used to determine the invertibility of a matrix?

Yes, the product of the eigenvalues can be used to determine invertibility. Since the product equals the determinant, the matrix is invertible exactly when the product is non-zero; equivalently, a matrix is singular if and only if 0 is one of its eigenvalues.
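A quick sketch of this (assuming numpy):

[code=python]
# A rank-deficient matrix has 0 as an eigenvalue, so the product of
# its eigenvalues, and hence its determinant, is zero: not invertible.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # second row = 2 * first row

vals = np.linalg.eigvals(A)
print(vals)                              # eigenvalues 0 and 5
print(np.prod(vals), np.linalg.det(A))   # both 0 (up to rounding)
[/code]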
