Doubt about a proof on self-adjoint operators.

In summary, the thread discusses a proof that for every linear transformation ##A## between finite-dimensional spaces, the product ##A^*A## is self-adjoint. The original poster is confused by the step ##(A^*A)^*=A^*A^{**}##, thinking it could only hold if ##A## and ##A^*## commute. The confusion is cleared up by recalling the basic fact that ##(AB)^*=B^*A^*##, which gives ##(A^*A)^*=A^*A^{**}=A^*A## directly (and likewise ##(AA^*)^*=A^{**}A^*=AA^*##).
  • #1
Rodrigo Schmidt
So the statement which the proof is about is: for every linear transformation ##A## (between finite-dimensional spaces), the product ##A^*A## is self-adjoint. So, the proof is:
##(A^*A)^*=A^*A^{**}=A^*A##
What I don't understand is why ##(A^*A)^*=A^*A^{**}##. Isn't that true only if ##A## and ##A^*## commute?
 
  • #2
Rodrigo Schmidt said:
So the statement which the proof is about is: for every linear transformation ##A## (between finite-dimensional spaces), the product ##AA^*## is self-adjoint. So, the proof is:
##(AA^*)^*=A^*A^{**}=A^*A##
What I don't understand is ##(AA^*)^*=A^*A^{**}##. Shouldn't ##(AA^*)^*## be equal to ##A^*A^{**} = A^*A##? Wouldn't this mean that ##AA^*## is self-adjoint only if ##A## and ##A^*## commute?

Your post is very confused, because you have posted the same thing twice as both the correct and wrong versions. What you meant to say, no doubt, was:

The proof is given as:

##(AA^*)^* = A^{**}A^* = AA^*##

But, you think it should be:

##(AA^*)^* = A^*A^{**} = A^*A##

But, you are wrong, because:

##(AB)^* = B^*A^*## and not ##A^*B^*##
 
  • #3
PeroK said:
##(AB)^* = B^*A^*## and not ##A^*B^*##
Oh, thank you! I totally forgot that.
 
  • #4
It may help to think through the proof of these more basic facts.
 
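For reference, a short sketch of those basic facts, using the defining property of the adjoint on a finite-dimensional inner product space, ##\langle Ax, y\rangle = \langle x, A^*y\rangle## for all ##x, y##:

##\langle (AB)x, y\rangle = \langle Bx, A^*y\rangle = \langle x, B^*A^*y\rangle##, so ##(AB)^* = B^*A^*##.
##\langle A^*x, y\rangle = \overline{\langle y, A^*x\rangle} = \overline{\langle Ay, x\rangle} = \langle x, Ay\rangle##, so ##A^{**} = A##.

Combining the two gives ##(AA^*)^* = A^{**}A^* = AA^*## and ##(A^*A)^* = A^*A^{**} = A^*A##, which is the statement in question.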

1. What is a self-adjoint operator?

A self-adjoint operator is a linear operator on a complex inner product space that is equal to its own adjoint, i.e. ##\langle Ax, y\rangle = \langle x, Ay\rangle## for all vectors ##x## and ##y##. Equivalently, with respect to an orthonormal basis, its matrix is equal to its own conjugate transpose.

2. How is self-adjointness related to Hermitian matrices?

A self-adjoint operator is also known as a Hermitian operator, and its matrix representation with respect to an orthonormal basis is a Hermitian matrix. Such a matrix is equal to its own conjugate transpose, and its eigenvalues are real.
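For example, the matrix ##H = \begin{pmatrix} 2 & 1-i \\ 1+i & 3 \end{pmatrix}## equals its own conjugate transpose, and its eigenvalues ##1## and ##4## (the roots of ##\lambda^2 - 5\lambda + 4 = 0##) are real.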

3. Can any operator be self-adjoint?

No, not all operators are self-adjoint. An operator is self-adjoint only if it equals its own adjoint: in finite dimensions this means its matrix with respect to an orthonormal basis is Hermitian, while in infinite dimensions (on a Hilbert space) the domains of the operator and its adjoint must also coincide.

4. What is the importance of self-adjoint operators in quantum mechanics?

In quantum mechanics, observables are represented by self-adjoint operators. This means that their eigenvalues are the possible outcomes of measurements, and their eigenvectors represent the corresponding states of the system.
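For instance, the spin observable ##S_z = \tfrac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}## is self-adjoint; its eigenvalues ##\pm\hbar/2## are the possible measured values, and its eigenvectors ##(1,0)^T## and ##(0,1)^T## are the corresponding spin-up and spin-down states.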

5. How do you prove that an operator is self-adjoint?

To prove that an operator is self-adjoint, you show that it is equal to its own adjoint, i.e. that ##\langle Ax, y\rangle = \langle x, Ay\rangle## for all ##x## and ##y## in its domain. For operators on function spaces, this typically comes down to an integration-by-parts computation with the inner product written as an integral.
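As a sketch (using the convention ##\langle f, g\rangle = \int \bar f\, g\,dx## and assuming the functions vanish at the boundary), take ##A = -i\,\tfrac{d}{dx}##:

##\langle f, Ag\rangle = \int \bar f\,(-i g')\,dx = \big[-i\bar f g\big] + \int i\bar f'\, g\,dx = \int \overline{(-i f')}\, g\,dx = \langle Af, g\rangle,##

so with the boundary term vanishing, ##A## is self-adjoint on that domain.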
