Question about adjoint transformations - is this a valid proof?

  • #1
Zoe-b

Homework Statement


Q is an invertible self-adjoint linear transformation on an inner product space V. Suppose Q is positive definite. I have already shown that inv(Q) is self-adjoint and that all eigenvalues of Q are positive, so there exists a self-adjoint S such that S^2 = Q.
Now suppose P is any self-adjoint linear transformation on V. I have shown that inv(S)*P*inv(S) is also self-adjoint. The bit I'm trying to do is: deduce, or prove otherwise, that there are linearly independent vectors e_1, ..., e_n and scalars a_1, ..., a_n such that for i between 1 and n:
P e_i = a_i Q e_i
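
As an aside (not part of the original post), here is a minimal numerical sketch of that square-root step, assuming we work over the reals so that Q is a symmetric positive-definite matrix; the particular Q below is an arbitrary example.

Code:
# Sketch: build the self-adjoint S with S^2 = Q from the eigendecomposition
# of a symmetric positive-definite Q (the example matrix is arbitrary).
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, eigenvalues 1 and 3 > 0

w, U = np.linalg.eigh(Q)            # Q = U @ diag(w) @ U.T, with U orthogonal
S = U @ np.diag(np.sqrt(w)) @ U.T   # positive square roots of the eigenvalues

assert np.all(w > 0)                # Q is positive definite
assert np.allclose(S, S.T)          # S is self-adjoint
assert np.allclose(S @ S, Q)        # S^2 = Q

Diagonalise, take the positive square root of each eigenvalue, and rotate back; that is all the "there exists S with S^2 = Q" step amounts to in the finite-dimensional case.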

Homework Equations


Spectral theorem.

The Attempt at a Solution


I have been staring at this for ages. I've done this, but I'm not sure at all that it's valid, so if it's complete rubbish please let me know!

inv(S)*P*inv(S) is self-adjoint, so by the spectral theorem there exists an orthonormal (and therefore linearly independent) basis of eigenvectors e_1, ..., e_n with associated eigenvalues a_1, ..., a_n such that

inv(S)*P*inv(S) e_i = a_i e_i for i = 1, ..., n

(possibly dodgy step coming up!)
Now consider the transformation inv(S)*P*inv(S) restricted to span(e_i).
On this subspace of V, inv(S)*P*inv(S) = a_i*I.
Pre-multiply by S and post-multiply by S
to get P = a_i*Q on this span,
so P e_i = a_i Q e_i.

Is that correct?
Thanks.
 
  • #2
Can anyone help? So sorry for bumping, but even a 'yes' or 'no' answer as to whether the proof is valid would be extremely useful.
Thanks again.
 
  • #3
Zoe-b said:
Can anyone help? So sorry for bumping, but even a 'yes' or 'no' answer as to whether the proof is valid would be extremely useful.
Thanks again.

Well, no. It can't be a valid proof. You can only restrict the action of S to span(e_i) if e_i is also an eigenvector of S. That much should be pretty clear. I'm still not sure whether your final conclusion is true or not, though. I'll tell you one thing I do think is true. Pick Q=[[1,0],[0,4]] and P=[[0,1],[1,0]]. I think you can find two independent vectors such that P e_i = a_i Q e_i, but they aren't orthogonal. Clearly I'm still fishing around with this problem.
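
As an aside (not from the thread), here is a minimal numerical check of the 2x2 example, assuming the symmetric P above. scipy.linalg.eigh(P, Q) solves the generalized problem P v = a Q v for symmetric P and positive-definite Q, and the eigenvectors it returns come out Q-orthonormal rather than orthonormal, which is exactly the point being made:

Code:
# Sketch: check the 2x2 example by solving the generalized eigenproblem
# P v = a Q v with scipy's symmetric-definite solver.
import numpy as np
from scipy.linalg import eigh

Q = np.array([[1.0, 0.0],
              [0.0, 4.0]])    # self-adjoint, positive definite
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])    # self-adjoint

a, V = eigh(P, Q)             # columns of V satisfy P v_i = a_i Q v_i

for i in range(2):
    assert np.allclose(P @ V[:, i], a[i] * Q @ V[:, i])

print("a =", a)                              # +-1/2
print("v1 . v2   =", V[:, 0] @ V[:, 1])      # nonzero: not orthogonal
print("v1^T Q v2 =", V[:, 0] @ Q @ V[:, 1])  # ~0: orthogonal w.r.t. Q

Running this prints eigenvalues of +-1/2, a nonzero ordinary dot product between the two eigenvectors, and a Q-inner product that is numerically zero.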
 
  • #4
Thanks... the dodgy step was indeed dodgy, then. Yeah, the vectors being linearly independent should obviously be easier to prove than being orthogonal (which may not be true at all), so I will look at that. I'm just struggling at the moment to relate inv(S)*P*inv(S) and Q, since there is no guarantee (in fact it's unlikely, surely?) that Q is diagonal over the same basis as inv(S)*P*inv(S) is. I have in my notes that two matrices are simultaneously diagonalisable iff they commute, but here that seems to be what I'm trying to prove anyway.
 
  • #5
Zoe-b said:
Thanks... the dodgy step was indeed dodgy, then. Yeah, the vectors being linearly independent should obviously be easier to prove than being orthogonal (which may not be true at all), so I will look at that. I'm just struggling at the moment to relate inv(S)*P*inv(S) and Q, since there is no guarantee (in fact it's unlikely, surely?) that Q is diagonal over the same basis as inv(S)*P*inv(S) is. I have in my notes that two matrices are simultaneously diagonalisable iff they commute, but here that seems to be what I'm trying to prove anyway.

Ok, I think I see it finally. As you said, you have inv(S)*P*inv(S) e_i = a_i e_i. S is invertible. So pick v_i = inv(S) e_i. Since the e_i are a basis, so are the v_i (but not necessarily orthogonal). The v_i are the vectors you want. Do you see it?
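
Spelling the hint out (this is just the algebra, in the notation of post #1):

$$S^{-1}PS^{-1}e_i = a_i e_i \;\Longrightarrow\; P\left(S^{-1}e_i\right) = a_i S e_i = a_i S^2\left(S^{-1}e_i\right) = a_i Q\left(S^{-1}e_i\right),$$

so with $v_i = S^{-1}e_i$ we get $Pv_i = a_i Q v_i$, and the $v_i$ are linearly independent because $S^{-1}$ is invertible and the $e_i$ form a basis.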
 
  • #6
Yep, I do, thank you! I was getting rather frustrated because I could prove the rest of the question if I had this bit first. Thanks again.
 

Related to Question about adjoint transformations - is this a valid proof?

Question 1: What is an adjoint transformation?

The adjoint of a linear transformation T on an inner product space is the unique transformation T* satisfying <Tx, y> = <x, T*y> for all vectors x and y. A transformation with T* = T is called self-adjoint (or Hermitian in the complex case); the Q and P in the thread above are self-adjoint.

Question 2: How is an adjoint transformation related to a matrix?

With respect to an orthonormal basis, the matrix of the adjoint T* is the conjugate transpose of the matrix of T; over the reals this is just the transpose. A transformation is self-adjoint exactly when its matrix in such a basis equals its own conjugate transpose.

Question 3: What is the purpose of an adjoint transformation?

The adjoint is useful chiefly because of the structure it exposes: a self-adjoint transformation has real eigenvalues and, by the spectral theorem, an orthonormal basis of eigenvectors. That spectral decomposition is what the argument in the thread above relies on.

Question 4: Can an adjoint transformation be used as a proof?

No. An adjoint transformation is a mathematical object, not an argument. Its properties (such as self-adjointness) are used together with established results, like the spectral theorem, to build a valid proof.

Question 5: How can I determine if a proof involving an adjoint transformation is valid?

Check each step against the definitions and the theorems being invoked, and make sure every deduction follows logically. In particular, an operator may only be restricted to a subspace that is actually invariant under it; that is the gap identified in the attempted proof above.
