Negative determinants when calculating eigenvectors?

Summary
The discussion centers on confusion about the determinant's sign when diagonalizing a transformation matrix M with the formula D = C⁻¹MC. It is clarified that the determinant is a similarity invariant equal to the product of the matrix's eigenvalues, so changing the order of the eigenvectors in matrix C only reorders the eigenvalues on the diagonal of D. A negative det(C) does not indicate an error: because C and C⁻¹ both appear in the expression, their determinants cancel and the determinant of the whole product is unchanged. The participants press for a specific example to resolve the misunderstanding; ultimately the confusion stemmed from a miscalculation and the misconception that a negative determinant is problematic.
tamtam402
Let M be a transformation matrix. C is the matrix which diagonalizes M.

I'm trying to use the formula D = C⁻¹MC. I noticed that, depending on how I arrange my vectors in C, I can change the sign of the determinant. If I calculate D using a configuration of C that gives me a negative determinant, my matrix D will have a negative sign in front of the eigenvalues on its diagonal. (Note: the determinant is needed when calculating the inverse of C, so a negative determinant will multiply the C⁻¹MC expression by -1.)

However, I've read that the matrix D should always have the eigenvalues on its diagonal, and I've also heard that it doesn't matter how you set up the matrix C, as long as the eigenvectors are all there.

What's going on? Should I always make sure to use a configuration of C that will get me a positive determinant?
 
tamtam402 said:
Let M be a transformation matrix. C is the matrix which diagonalizes M.

I'm trying to use the formula D = C⁻¹MC. I noticed that, depending on how I arrange my vectors in C, I can change the sign of the determinant.

No, you can't. The determinant is an invariant of matrices (linear operators), and it equals the product of all the matrix's eigenvalues (in some extension field, probably).
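
For a diagonalizable M this can be written out directly: similarity preserves the determinant, and the determinant of a diagonal matrix is the product of its diagonal entries, so

$$\det M = \det(C^{-1}MC) = \det D = \lambda_1\lambda_2\cdots\lambda_n,$$

and reordering the eigenvectors in C merely reorders the factors of this product.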

DonAntonio




tamtam402 said:
If I calculate D using a configuration of C that gives me a negative determinant, my matrix D will have a negative sign in front of the eigenvalues on its diagonal. (Note: the determinant is needed when calculating the inverse of C, so a negative determinant will multiply the C⁻¹MC expression by -1.)

However, I've read that the matrix D should always have the eigenvalues on its diagonal, and I've also heard that it doesn't matter how you set up the matrix C, as long as the eigenvectors are all there.

What's going on? Should I always make sure to use a configuration of C that will get me a positive determinant?
 
You must be making a mistake somewhere, but without a specific example (preferably a small one that is easy to check!) we can't really say much more than post #2 already said.
 
If you change the order of the vectors, you change the matrix, so it is NOT correct to say that you have changed the sign of an eigenvalue. You have an eigenvalue of a different matrix, one that has a different sign.

In particular, we can always represent an abstract linear operator on a vector space by a matrix by choosing a specific ordered basis for the vector space. But changing the basis, or just changing the order of the vectors in the basis, will represent the same linear operator by a different matrix.
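
To make this concrete in the 2x2 case (a sketch, with P the permutation matrix that swaps the two basis vectors): reordering the eigenvector columns of C amounts to right-multiplying by P, so

$$C' = CP,\qquad P=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad C'^{-1}MC' = P^{-1}(C^{-1}MC)P = P^{-1}DP = \begin{pmatrix}\lambda_2&0\\0&\lambda_1\end{pmatrix}.$$

Note that det P = -1, which is exactly the sign flip in det C; the diagonal entries of D are permuted, never negated.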
 
Let's suppose I have 2 eigenvalues: 1 and 3.

The eigenvector for the first eigenvalue is (1,-1). The eigenvector for the second eigenvalue is (1,1).

I can write my matrix C in the following form, which has a determinant equal to -2:

$$\begin{pmatrix}1&1\\1&-1\end{pmatrix}$$

I can also write it in the following form, which has a determinant equal to 2:

$$\begin{pmatrix}1&1\\-1&1\end{pmatrix}$$
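
For reference, inverting both arrangements shows why no overall sign appears: the 1/det factor is absorbed by the adjugate, whose entries rearrange along with the columns, so neither inverse carries a stray -1:

$$\begin{pmatrix}1&1\\1&-1\end{pmatrix}^{-1}=\frac{1}{-2}\begin{pmatrix}-1&-1\\-1&1\end{pmatrix}=\begin{pmatrix}\tfrac12&\tfrac12\\\tfrac12&-\tfrac12\end{pmatrix},\qquad\begin{pmatrix}1&1\\-1&1\end{pmatrix}^{-1}=\frac{1}{2}\begin{pmatrix}1&-1\\1&1\end{pmatrix}=\begin{pmatrix}\tfrac12&-\tfrac12\\\tfrac12&\tfrac12\end{pmatrix}.$$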
 
I think I see what you're getting at here: you can write the eigenvector matrix with its columns in a different order, which will make the determinant change sign.
BUT
you have C⁻¹ there too:

$$\det(C^{-1}MC)=\det(C^{-1})\det(M)\det(C)=\frac{1}{\det(C)}\det(M)\det(C)=\frac{\det(C)}{\det(C)}\det(M)=\det(M)$$

Is this what was confusing you?
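
A minimal numeric check of this cancellation, using the eigen-data from the example above (NumPy assumed; the matrix M is a hypothetical stand-in reconstructed from those eigenvalues and eigenvectors, since the original exercise's matrix was never posted):

```python
import numpy as np

# Eigen-data from the thread's example: eigenvalues 1 and 3,
# eigenvectors (1, -1) and (1, 1). M itself was never posted,
# so reconstruct a stand-in as M = C D C^-1.
D = np.diag([1.0, 3.0])
C = np.array([[1.0, 1.0],
              [-1.0, 1.0]])          # eigenvectors as columns
M = C @ D @ np.linalg.inv(C)         # works out to [[2, 1], [1, 2]]

for cols in ([0, 1], [1, 0]):        # both column orders of C
    Cp = C[:, cols]
    Dp = np.linalg.inv(Cp) @ M @ Cp
    print(np.linalg.det(Cp), np.round(Dp, 12))
# det(Cp) flips sign (about 2 vs -2), but Dp comes out as
# diag(1, 3) or diag(3, 1) -- never -diag(...). In both cases
# det(Dp) = det(M) = 3, as the algebra above predicts.
```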
 
tamtam402 said:
Let's suppose I have 2 eigenvalues: 1 and 3.

The eigenvector for the first eigenvalue is (1,-1). The eigenvector for the second eigenvalue is (1,1).

I can write my matrix C in the following form, which has a determinant equal to -2:

$$\begin{pmatrix}1&1\\1&-1\end{pmatrix}$$

I can also write it in the following form, which has a determinant equal to 2:

$$\begin{pmatrix}1&1\\-1&1\end{pmatrix}$$
This "example" is like the example of a (plane) triangle (in Euclidean geometry) with sides 3, 6, 9: impossible.

You cannot have a 2x2 matrix with eigenvalues 1,3 and its determinant different from 3: impossible. This is why, I

suppose, you were already asked to give a specific example , not only throwing numbers as eigenvalues

and "let's say an eigenvalue is..." things.

DonAntonio
 
genericusrnme said:
I think I see what you're getting at here: you can write the eigenvector matrix with its columns in a different order, which will make the determinant change sign.
BUT
you have C⁻¹ there too:

$$\det(C^{-1}MC)=\det(C^{-1})\det(M)\det(C)=\frac{1}{\det(C)}\det(M)\det(C)=\frac{\det(C)}{\det(C)}\det(M)=\det(M)$$
Is this what was confusing you?

Yes, thank you. Somehow it was my first time getting a negative determinant, and I also messed up elsewhere in the process. That's what led me to believe that negative determinants were bad :)
 
DonAntonio said:
This "example" is like the example of a (plane) triangle (in Euclidean geometry) with sides 3, 6, 9: impossible.

You cannot have a 2x2 matrix with eigenvalues 1,3 and its determinant different from 3: impossible. This is why, I

suppose, you were already asked to give a specific example , not only throwing numbers as eigenvalues

and "let's say an eigenvalue is..." things.

DonAntonio

Please see the post above yours; it seems my example was specific enough.
 
  • #10
tamtam402 said:
Please see the post above yours; it seems my example was specific enough.

Not really. We still don't know what you did wrong.
 
  • #11
tamtam402 said:
Please see the post above yours; it seems my example was specific enough.


No, it isn't, and I find it odd that you stubbornly insist it is. You must give us one specific matrix for which you have specifically calculated the eigenvalues and the eigenvectors, and where specifically changing the order of the eigenvectors gets you a DIFFERENT determinant. Of course, you can't do this, but if you send us the above, we'll perhaps be able to point out specifically where your mistake is.

BTW, my last post has a typo: of course the determinant of any 2x2 matrix with eigenvalues 1, 3 will ALWAYS be 3. I meant to write -3, referring to your claim that you can get the opposite sign.

DonAntonio

P.S. If you refer, as "specific example", to the matrices

$$\begin{pmatrix}1&1\\1&-1\end{pmatrix}\,,\qquad\begin{pmatrix}1&1\\-1&1\end{pmatrix}$$

then this is a non-example, as these matrices have neither the same determinant nor the same trace, so they can't be similar. Please show us the ORIGINAL matrix before you try to change it.
 
  • #12
micromass, donantonio, the problem the guy was having is this:

Let A be some matrix; then we can decompose it as A = SDS⁻¹. This decomposition doesn't depend on the order in which we write the eigenvalues and eigenvectors, so we can permute the columns of S and rearrange D to match while leaving A unchanged. He knew that permuting the columns of a matrix changes the sign of its determinant, which led him to be confused, since he knows the determinant of the RHS should equal the determinant of the LHS. He was forgetting that there is an S and an S⁻¹, which cancel any change caused by reordering the columns of S. He's not trying to claim that two matrices with permuted columns are the same, or have the same eigenvalues, or anything along those lines.
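
In symbols, the cancellation described above (with P the permutation matrix that reorders the columns):

$$A = SDS^{-1} = (SP)\,(P^{-1}DP)\,(SP)^{-1},$$

so permuting the columns of S while permuting the diagonal of D to match leaves A, and hence det A, untouched.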
 
  • #13
genericusrnme said:
micromass, donantonio, the problem the guy was having is this:

Let A be some matrix; then we can decompose it as A = SDS⁻¹. This decomposition doesn't depend on the order in which we write the eigenvalues and eigenvectors, so we can permute the columns of S and rearrange D to match while leaving A unchanged. He knew that permuting the columns of a matrix changes the sign of its determinant, which led him to be confused, since he knows the determinant of the RHS should equal the determinant of the LHS. He was forgetting that there is an S and an S⁻¹, which cancel any change caused by reordering the columns of S. He's not trying to claim that two matrices with permuted columns are the same, or have the same eigenvalues, or anything along those lines.



Wonderful. Why, then, didn't he send a specific example of a matrix with that decomposition? That would have cleared things up much sooner.

DonAntonio
 
  • #14
DonAntonio said:
Wonderful. Why, then, didn't he send a specific example of a matrix with that decomposition? That would have cleared things up much sooner.

DonAntonio

Sorry, I didn't check back on the thread after genericusrnme replied. I didn't post a specific example because I had trashed my work and couldn't figure out which exercise I had botched. When I came back to the thread I was ready to redo the exercise from scratch and post my solution, but I saw genericusrnme's post, and he showed that the sign of the determinant should have no impact on the solution, so I had surely slipped up in a calculation somewhere.
 
