Need help solving an operator equation

  • Context: Graduate
  • Thread starter: TRXW
  • Tags: Operator
SUMMARY

The discussion centers on solving the operator equation A + [B, X] = 0, where A and B are self-adjoint operators in a finite-dimensional Hilbert space. It is established that a solution X exists if and only if Tr(A) = 0. The inquiry extends to countably infinite-dimensional Hilbert spaces, where the trace condition becomes irrelevant, raising the question of whether a solution still exists. The respondent suggests using recursion and block matrix forms to derive a solution, indicating that the approach can be adapted for infinite dimensions by continuing the recursion process.

PREREQUISITES
  • Understanding of self-adjoint operators in Hilbert spaces
  • Familiarity with the concept of the trace of an operator
  • Knowledge of matrix representation of operators
  • Experience with recursion techniques in mathematical proofs
NEXT STEPS
  • Explore the properties of self-adjoint operators in infinite-dimensional Hilbert spaces
  • Study the implications of the trace condition in operator theory
  • Learn about block matrix techniques for operator equations
  • Investigate recursion methods in mathematical proofs and their applications
USEFUL FOR

Mathematicians, physicists, and researchers working with operator theory, particularly those dealing with Hilbert spaces and operator equations.

TRXW
I am working on something and have run into a problem. I need your help!
Let A and B be self-adjoint operators acting on a finite dimensional Hilbert space. Then, the equation

A + [ B , X ] = 0,
has at least one solution X if and only if Tr(A) = 0.
([ B , X ] = BX - XB)

I have proved it by taking the matrix form of the operators.
But my question is about countably infinite-dimensional Hilbert spaces, where the trace condition is irrelevant. Will there always be a solution?
I would appreciate your guidance.

Thanks
Leo
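
For the finite-dimensional direction, here is a minimal numerical sketch (the names and the particular choice of B are illustrative, not from the thread). It handles the special case where A is Hermitian with zero diagonal; any trace-zero Hermitian matrix can be brought to this form by a unitary change of basis. Taking B diagonal with distinct entries, X can be read off entrywise from (b_i − b_j) X_ij = −A_ij:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian A with zero diagonal (a trace-zero special case;
# the general trace-zero case reduces to this by a unitary rotation).
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2
np.fill_diagonal(A, 0)

# Choose B self-adjoint with distinct diagonal entries.
b = np.arange(1.0, n + 1)
B = np.diag(b)

# In B's eigenbasis, [B, X]_ij = (b_i - b_j) X_ij, so A + [B, X] = 0
# is solved entrywise off the diagonal; the diagonal needs A_ii = 0.
X = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        if i != j:
            X[i, j] = -A[i, j] / (b[i] - b[j])

comm = B @ X - X @ B
assert np.allclose(A + comm, 0)
# X comes out anti-Hermitian, as noted in the reply below:
assert np.allclose(X, -X.conj().T)
```

Note that B here is chosen alongside X: for a fixed B with degenerate spectrum (e.g. B = I), the equation can fail even when Tr(A) = 0.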
 
(First: it's easy to show that X can be taken anti-Hermitian.) Here's a suggestion: use recursion. To begin, write out the solution you already have for finite dimensions so as to make the recursion explicit. That is, write the N×N matrix in block form, splitting off the first row and column and leaving the other N−1 dimensions in a block. Your unknowns will be similarly split: X₀₀, X₀ᵢ = −Xᵢ₀* and Xᵢⱼ = −Xⱼᵢ*. It helps to suppress the indices and regard these quantities as a scalar, a vector and a tensor, respectively.

Then you will have three equations for these three unknowns. For example, the first (00) equation will read X·B + X*·B* = −A₀₀, and it's easy to show this can always be satisfied. (You can also assume wlog that A is already diagonal.)

If you can do the above, the idea is to recurse N times, each time block-splitting the Aij further by peeling off the first dimension, etc. Then, to prove it for infinite dimensions, all you have to do is forget to stop recursing. :smile:
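
As a sanity check on the block-split (00) equation, here is a sketch with made-up names (β and ξ denote the first column of B and X below the corner entry; none of these names are from the thread). For Hermitian B and anti-Hermitian X split this way, the (0,0) entry of [B, X] collapses to β†ξ + ξ†β = 2 Re(β†ξ), a single real equation in the vector unknown ξ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hermitian B, split into corner scalar b, vector beta, block Bp.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (M + M.conj().T) / 2
b, beta, Bp = B[0, 0], B[1:, 0], B[1:, 1:]

# Anti-Hermitian X with the same split: imaginary scalar x,
# vector xi, and an anti-Hermitian block Xp.
x = 1j * rng.standard_normal()
xi = rng.standard_normal(n - 1) + 1j * rng.standard_normal(n - 1)
K = rng.standard_normal((n - 1, n - 1)) + 1j * rng.standard_normal((n - 1, n - 1))
Xp = (K - K.conj().T) / 2

X = np.zeros((n, n), dtype=complex)
X[0, 0] = x
X[1:, 0] = xi
X[0, 1:] = -xi.conj()   # enforces X anti-Hermitian
X[1:, 1:] = Xp

comm = B @ X - X @ B
# The corner terms b*x and x*b cancel, leaving only the vector unknown:
assert np.isclose(comm[0, 0], beta.conj() @ xi + xi.conj() @ beta)
```

The point is that the scalar b and the block Bp drop out of the (0,0) equation entirely, which is what makes the peel-off-one-dimension recursion possible.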
 
