Proving Diagonalizability of Adjoint Operator on Finite Inner Product Space

  • Context: Graduate 
  • Thread starter: sammycaps
  • Tags: Operator

Discussion Overview

The discussion revolves around the diagonalizability of the adjoint operator of a linear operator T on a finite-dimensional inner product space. Participants explore various definitions of diagonalizability and discuss approaches to prove that if T is diagonalizable, then its adjoint operator is also diagonalizable. The conversation includes hints and suggestions for tackling the problem without providing direct answers.

Discussion Character

  • Exploratory
  • Technical explanation
  • Homework-related

Main Points Raised

  • One participant seeks guidance on proving that the adjoint of a diagonalizable linear operator is also diagonalizable, suggesting that the dimensions of the eigenspaces for λ (of T) and for its conjugate λ̄ (of the adjoint) should be compared.
  • Another participant emphasizes the importance of defining "diagonalizable" and offers a hint to write down an explicit formula for the matrix representation of T with respect to a chosen basis.
  • There is a discussion about the implications of using different definitions of diagonalizability, including the potential dependency on the choice of isomorphism when defining diagonal matrices.
  • A suggestion is made to show that there exists a basis of eigenvectors for the adjoint operator, linking the eigenvalues of T and its adjoint.
  • Participants discuss the relationship between the kernel of the adjoint operator and the image of the original operator, indicating a potential direction for the proof.
  • Concerns are raised about the limitations of using the dimension theorem in the context of non-normal operators, with participants reflecting on the implications of normality for their arguments.

Areas of Agreement / Disagreement

Participants express uncertainty and explore multiple approaches without reaching a consensus. There is no agreement on a definitive method to prove the diagonalizability of the adjoint operator, and differing definitions of diagonalizability contribute to the complexity of the discussion.

Contextual Notes

Participants note that the definitions of diagonalizability may vary, and the choice of basis or isomorphism could affect the interpretation of diagonal matrices. The discussion also highlights the challenges posed by non-normal operators in applying certain theorems.

sammycaps
I was looking for a hint on a problem in my professor's notes (class is over and I was just auditing).

I want to show that if T:V→V is a linear operator on a finite-dimensional inner product space and T is diagonalizable (not necessarily orthogonally diagonalizable), then the adjoint of T (with respect to the inner product) is diagonalizable as well.

I think I should show that the eigenspace of T for ##\lambda## and the eigenspace of the adjoint for ##\overline{\lambda}## have the same dimension (I know the eigenspaces themselves need not coincide, since that holds only for normal operators), but I'm not sure if this is the right way to go.

Any small push in the right direction would help. Thanks very much.

EDIT: The definition here of diagonalizable is that there exists a basis ##\chi## such that ##[T]_\chi## is a diagonal matrix (i.e. the matrix representation of T with respect to that basis is diagonal).
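As a quick numerical sanity check of the claim (not a proof; the matrix below is just an illustrative choice, with the adjoint taken with respect to the standard inner product, i.e. the conjugate transpose):

```python
import numpy as np

# Take a diagonalizable but NON-normal matrix A and check that its adjoint
# (conjugate transpose, for the standard inner product) is also
# diagonalizable, with conjugated eigenvalues.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])  # distinct eigenvalues 1, 2 => diagonalizable

# A is not normal: A A* != A* A, so it is not orthogonally diagonalizable.
assert not np.allclose(A @ A.conj().T, A.conj().T @ A)

A_star = A.conj().T
evals, evecs = np.linalg.eig(A_star)

# A* has a full basis of eigenvectors: the eigenvector matrix is invertible.
assert abs(np.linalg.det(evecs)) > 1e-9

# Its eigenvalues are the conjugates of those of A (all real here).
assert np.allclose(np.sort(evals), np.sort(np.linalg.eig(A)[0]))
```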
 
You should post your definition of "diagonalizable".
 
I included the definition in the edit I made above.
 
OK, that is one of several definitions that we can work with. (Another one is that there exists an invertible matrix P such that ##P^{-1}TP## is diagonal). When we're using your definition, I think the easiest way is to just write down an explicit formula for the ij component of ##[T]_\chi## (with i and j arbitrary), that involves the basis vectors and the inner product. Can you do that?

Once you've done that, you just use the definition of the adjoint operator, and you're almost done.

In case you're wondering why I don't just tell you the complete answer, it's because the forum rules say that we should treat every textbook-style problem as homework. So I can only give you hints.
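Concretely, the formula this hint seems to point toward (a sketch, assuming ##\chi = \{e_1,\dots,e_n\}## is orthonormal and the inner product is conjugate-linear in the first argument; adjust to your conventions) is

$$([T]_\chi)_{ij} = \langle e_i, T e_j\rangle,$$

so that, by the definition of the adjoint,

$$([T^*]_\chi)_{ij} = \langle e_i, T^* e_j\rangle = \langle T e_i, e_j\rangle = \overline{\langle e_j, T e_i\rangle} = \overline{([T]_\chi)_{ji}},$$

i.e. ##[T^*]_\chi = ([T]_\chi)^\dagger##, and the conjugate transpose of a diagonal matrix is diagonal. Note this computation really does require ##\chi## orthonormal.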
 
fredrik said:
OK, that is one of several definitions that we can work with. (Another one is that there exists an invertible matrix P such that ##P^{-1}TP## is diagonal). When we're using your definition, I think the easiest way is to just write down an explicit formula for the ij component of ##[T]_\chi## (with i and j arbitrary), that involves the basis vectors and the inner product. Can you do that?

Once you've done that, you just use the definition of the adjoint operator, and you're almost done.

In case you're wondering why I don't just tell you the complete answer, it's because the forum rules say that we should treat every textbook-style problem as homework. So I can only give you hints.

Well if this were an orthonormal basis, I would know how to do it, but since it needn't be, I'm not so sure. I'll have a think.

Not to worry, I didn't want an answer, just a hint.
 
sammycaps said:
Well if this were an orthonormal basis,...
Ah, you're right. The simple solution I had in mind only works with orthonormal bases. I will have to think about it as well. (However, I think I still see a simple solution based on the other definition of diagonalizable).
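One way the two definitions can be bridged for an arbitrary basis (a sketch, again assuming the inner product is conjugate-linear in the first argument): let ##\beta = \{b_1,\dots,b_n\}## be any basis, let ##G_{ij} = \langle b_i, b_j\rangle## be its Gram matrix (invertible and Hermitian), and write ##M = [T]_\beta##. Then ##\langle b_i, T b_j\rangle = (GM)_{ij}##, and the definition of the adjoint gives

$$G\,[T^*]_\beta = (GM)^\dagger = M^\dagger G, \qquad\text{so}\qquad [T^*]_\beta = G^{-1} M^\dagger G.$$

In particular, if ##\beta## is a basis of eigenvectors of T, then M is diagonal, so ##[T^*]_\beta## is similar to the diagonal matrix ##M^\dagger##, and hence ##T^*## is diagonalizable.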
 
How are you defining "diagonal" when you say ##P^{-1}TP## is diagonal?
 
sammycaps said:
How are you defining "diagonal" when you say ##P^{-1}TP## is diagonal?
Sorry, I'm so used to thinking of square matrices and linear operators on finite-dimensional vector spaces as "the same thing" that it didn't even occur to me that this is an issue. If we denote the vector space by X, there's always an isomorphism ##F:X\to\mathbb R^n##. It seems to make sense to call T "diagonal" if the matrix representation of ##F\circ T## with respect to the standard basis of ##\mathbb R^n## is a diagonal matrix.

Unfortunately, this seems to make the answer to the question "Is T diagonal?" depend on the choice of the isomorphism F. So maybe this was a bad idea.
 
How about showing that there exists a basis of eigenvectors for A*?

If ##\lambda## is an eigenvalue of A, then ##\overline{\lambda}## is an eigenvalue of A*. So, as in the OP, it suffices to show that the corresponding eigenspaces have the same dimension.
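One matrix-level way to see the eigenvalue claim (a sketch, representing everything in an orthonormal basis so that A* corresponds to the conjugate transpose):

$$\det\big(A^* - \overline{\lambda} I\big) = \det\big((A - \lambda I)^*\big) = \overline{\det(A - \lambda I)} = 0,$$

so ##\overline{\lambda}## is indeed an eigenvalue of A* whenever ##\lambda## is an eigenvalue of A.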
 
Try to use the relation

$$\ker(A^*) = \big(\operatorname{im}(A)\big)^\perp$$
 
micromass said:
Try to use the relation

$$\ker(A^*) = \big(\operatorname{im}(A)\big)^\perp$$

Well if the operator were normal (i.e. orthogonally diagonalizable) then we could restrict the operator to just the eigenspace and use the dimension theorem to arrive at the answer (I think). But the operator is not normal, so I'm still thinking about how to proceed. Thanks for the hint.
 
sammycaps said:
Well if the operator were normal (i.e. orthogonally diagonalizable) then we could restrict the operator to just the eigenspace and use the dimension theorem to arrive at the answer (I think). But the operator is not normal, so I'm still thinking about how to proceed. Thanks for the hint.

What do you mean with the dimension theorem? Why can't we use it now?
 
micromass said:
What do you mean with the dimension theorem? Why can't we use it now?

Actually I'm not sure. I thought there was something we needed from normal operators, but now I don't think so (for a normal operator the eigenspaces for ##\lambda## and ##\overline{\lambda}## actually coincide, but that is stronger than I need anyway).
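A sketch of how the dimension count might finish the argument, applying micromass's relation to ##B = A - \lambda I## (note ##B^* = A^* - \overline{\lambda} I##):

$$\ker\big(A^* - \overline{\lambda} I\big) = \big(\operatorname{im}(A - \lambda I)\big)^\perp,$$

so by rank–nullity on an n-dimensional space,

$$\dim\ker\big(A^* - \overline{\lambda} I\big) = n - \operatorname{rank}(A - \lambda I) = \dim\ker(A - \lambda I).$$

If A is diagonalizable, its eigenspace dimensions sum to n, so by the equality above the eigenspace dimensions of A* also sum to n, and A* has a basis of eigenvectors. No normality is needed here.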
 
