Question about adjoint transformations - is this a valid proof?


Homework Help Overview

The discussion revolves around the properties of self-adjoint linear transformations in the context of an inner product space. The original poster is attempting to prove a relationship involving a self-adjoint transformation P and a positive definite transformation Q, specifically regarding the existence of linearly independent vectors and associated scalars that satisfy a certain equation.

Discussion Character

  • Conceptual clarification, Mathematical reasoning, Problem interpretation

Approaches and Questions Raised

  • The original poster attempts to use the spectral theorem to establish a relationship between the transformations and their eigenvectors. Some participants question the validity of restricting the action of one transformation to the span of eigenvectors of another. Others suggest exploring the implications of diagonalizability and the conditions under which two matrices can be simultaneously diagonalized.

Discussion Status

Participants are actively engaging with the problem, raising questions about the assumptions made in the proof and discussing the implications of linear independence versus orthogonality of the vectors involved. There is recognition of a potential flaw in the original reasoning, and some participants are exploring alternative approaches to clarify the relationships between the transformations.

Contextual Notes

There is an ongoing discussion about the conditions under which the transformations can be related, particularly concerning the diagonalizability of Q and the transformation inv(S)Pinv(S). The original poster notes the challenge of proving the relationship without assuming orthogonality of the eigenvectors.

Zoe-b

Homework Statement


Q is an invertible self-adjoint linear transformation on an inner product space V. Suppose Q is positive definite. I have already shown that inv(Q) is self-adjoint and that all eigenvalues of Q are positive, so there exists S s.t. S^2 = Q.
Now suppose P is any self-adjoint linear transformation on V. I have shown that inv(S)*P*inv(S) is also self-adjoint. The bit I'm trying to do is: deduce, or prove otherwise, that there are linearly independent vectors e_1, ..., e_n and scalars a_1, ..., a_n such that for i between 1 and n:
P e_i = a_i Q e_i
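As a sanity check on the setup above, here is a minimal numerical sketch (numpy; the particular Q is an arbitrary assumed example, not from the thread) of building S with S^2 = Q from the spectral decomposition of a positive definite Q:

```python
import numpy as np

# Assumed example: a symmetric positive definite Q (any such matrix works).
Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Spectral decomposition: Q = V diag(w) V^T, with w > 0 since Q is positive definite.
w, V = np.linalg.eigh(Q)

# S = V diag(sqrt(w)) V^T is self-adjoint and satisfies S^2 = Q.
S = V @ np.diag(np.sqrt(w)) @ V.T

print(np.allclose(S @ S, Q))   # S squared recovers Q
print(np.allclose(S, S.T))     # S is self-adjoint
```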

Homework Equations


Spectral theorem.

The Attempt at a Solution


I have been staring at this for ages. I've done this, but I'm not at all sure that it's valid, so if it's complete rubbish please let me know!

inv(S)*P*inv(S) is self-adjoint, so by the spectral theorem there exists an orthonormal (and therefore linearly independent) basis of eigenvectors e_1, ..., e_n with associated eigenvalues a_1, ..., a_n s.t.

inv(S)*P*inv(S) e_i = a_i e_i for i = 1, ..., n

(possibly dodgy step coming up!)
Now consider the transformation inv(S)*P*inv(S) restricted to span(e_i).
On this subspace of V, inv(S)*P*inv(S) = a_i * I.
Pre-multiply and post-multiply by S
to get P = a_i * Q on this span,
so P e_i = a_i Q e_i.

Is that correct?
Thanks.
 
Can anyone help? So sorry for bumping but even a 'yes' or 'no' answer as to whether the proof is valid would be extremely useful.
Thanks again.
 
Zoe-b said:
Can anyone help? So sorry for bumping but even a 'yes' or 'no' answer as to whether the proof is valid would be extremely useful.
Thanks again.

Well, no. It can't be a valid proof. You can only restrict the action of S to span(e_i) if e_i is also an eigenvector of S. That much should be pretty clear. I'm still not clear whether your final conclusion is true or not, though. I'll tell you one thing I do think is true: pick Q=[[1,0],[0,4]] and P=[[0,1],[0,1]]. I think you can find two independent vectors such that P e_i = a_i Q e_i, but they aren't orthogonal. Clearly I'm still fishing around with this problem.
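The suggested matrices can be probed numerically. A minimal sketch (numpy; note that this P is not symmetric, so it sits outside the homework's hypotheses and only tests the orthogonality question). Since Q is invertible, P v = a Q v is equivalent to the ordinary eigenproblem (inv(Q) P) v = a v:

```python
import numpy as np

# The matrices suggested above.
Q = np.array([[1.0, 0.0],
              [0.0, 4.0]])
P = np.array([[0.0, 1.0],
              [0.0, 1.0]])

# Solve the generalized problem P v = a Q v via inv(Q) P.
a, V = np.linalg.eig(np.linalg.inv(Q) @ P)

# Each column satisfies P v_i = a_i Q v_i ...
print(np.allclose(P @ V, Q @ V @ np.diag(a)))

# ... the columns are linearly independent (V is invertible) ...
print(abs(np.linalg.det(V)) > 1e-12)

# ... but they are not orthogonal:
print(np.dot(V[:, 0], V[:, 1]))
```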
 
Thanks... the dodgy step was indeed dodgy, then. Yeah, the vectors being linearly independent should obviously be easier to prove than their being orthogonal (which may not be true at all), so I will look at that. I'm just struggling at the moment to relate inv(S)Pinv(S) and Q, since there is no guarantee (in fact it's unlikely, surely?) that Q is diagonal over the same basis as inv(S)Pinv(S) is. I have in my notes that two matrices are simultaneously diagonalisable iff they commute, but here that seems to be what I'm trying to prove anyway.
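As an aside, the commuting criterion mentioned in the notes can be illustrated numerically. A minimal sketch (numpy; A and B are assumed example matrices, with B built as a polynomial in A so that they are guaranteed to commute, and A has distinct eigenvalues):

```python
import numpy as np

# Assumed example: symmetric A with distinct eigenvalues, and B a polynomial in A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A @ A + 3.0 * A   # guarantees A B = B A

assert np.allclose(A @ B, B @ A)

# An orthonormal eigenbasis of A ...
_, V = np.linalg.eigh(A)

# ... also diagonalizes B, illustrating the "commute => simultaneously
# diagonalizable" direction (A's distinct eigenvalues make the basis unique
# up to signs, which is what forces B to be diagonal in it).
D = V.T @ B @ V
print(np.allclose(D, np.diag(np.diag(D))))
```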
 
Zoe-b said:
Thanks... the dodgy step was indeed dodgy, then. Yeah, the vectors being linearly independent should obviously be easier to prove than their being orthogonal (which may not be true at all), so I will look at that. I'm just struggling at the moment to relate inv(S)Pinv(S) and Q, since there is no guarantee (in fact it's unlikely, surely?) that Q is diagonal over the same basis as inv(S)Pinv(S) is. I have in my notes that two matrices are simultaneously diagonalisable iff they commute, but here that seems to be what I'm trying to prove anyway.

Ok, I think I see it finally. As you said, you have inv(S)*P*inv(S) e_i = a_i e_i. S is invertible, so pick v_i = inv(S) e_i. Since the e_i are a basis, so are the v_i (but not necessarily orthogonal). The v_i are the vectors you want. Do you see it?
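The whole construction in this reply can be checked numerically. A minimal sketch (numpy; Q and P are arbitrary assumed examples with Q positive definite and P symmetric): build S with S^2 = Q, take an orthonormal eigenbasis e_i of inv(S) P inv(S), and set v_i = inv(S) e_i:

```python
import numpy as np

# Assumed example matrices: Q positive definite, P self-adjoint.
Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])
P = np.array([[1.0, 2.0],
              [2.0, -1.0]])

# S: the positive definite square root of Q, so S^2 = Q and S = S^T.
w, U = np.linalg.eigh(Q)
S = U @ np.diag(np.sqrt(w)) @ U.T
S_inv = np.linalg.inv(S)

# inv(S) P inv(S) is symmetric, so eigh gives an orthonormal eigenbasis e_i.
a, E = np.linalg.eigh(S_inv @ P @ S_inv)

# The suggested vectors v_i = inv(S) e_i ...
V = S_inv @ E

# ... are independent (the image of a basis under an invertible map) and
# satisfy P v_i = a_i Q v_i for each i:
print(abs(np.linalg.det(V)) > 1e-12)
print(np.allclose(P @ V, Q @ V @ np.diag(a)))
```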
 
Yep, I do, thank you! I was getting rather frustrated because I could prove the rest of the question if I had this bit first. Thanks again.
 
