Is Tr(ABAB) Nonnegative for Symmetric Matrices A and B?

  • Thread starter: jostpuur
  • Tags: Inequality, Trace
jostpuur
Here's the claim: Assume that A and B are both symmetric matrices of the same size. Also assume that at least one of them does not have negative eigenvalues. Then

$$\textrm{Tr}(ABAB) \geq 0$$

I don't know how to prove this!
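
A quick numerical sanity check of the claim (not a proof; the random test matrices below are just an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

for _ in range(1000):
    # B: an arbitrary symmetric matrix
    M = rng.standard_normal((n, n))
    B = M + M.T

    # A: symmetric with no negative eigenvalues (positive semidefinite)
    N = rng.standard_normal((n, n))
    A = N @ N.T

    # Tr(ABAB) should never be (meaningfully) negative
    assert np.trace(A @ B @ A @ B) >= -1e-9
```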
 
If ##\lambda## is an eigenvalue of ##AB## then ##\lambda^2## is an eigenvalue of ##ABAB##. Since the trace is the sum of the eigenvalues, it suffices to show that the eigenvalues of ##AB## are all real. This would be true if ##AB## were symmetric, but the product of two symmetric matrices need not be symmetric. But can you show that ##AB## is similar to a symmetric matrix?
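
One standard way to act on that hint (a sketch, assuming for the moment that ##A## is positive definite, so that ##A^{1/2}## and ##A^{-1/2}## exist):

$$A^{-1/2}(AB)A^{1/2} = A^{1/2}BA^{1/2},$$

which is symmetric, so ##AB## is similar to a symmetric matrix and its eigenvalues are real. The merely positive semidefinite case can be recovered by a limiting argument, as comes up later in the thread.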
 
Never mind, I succeeded in proving this now. After months of wondering, the proof appeared within a few minutes of posting to PF, as usual :wink:

My hint for this is that you should first prove that you can assume A to be diagonal with non-negative entries.
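
For concreteness, here is one way that hint can be cashed out (a sketch, not necessarily the intended argument): write ##A = QDQ^T## with ##Q## orthogonal and ##D = \mathrm{diag}(d_1,\dots,d_n)##, ##d_i \geq 0##, and put ##B' = Q^TBQ##, which is again symmetric. By the cyclic property of the trace,

$$\textrm{Tr}(ABAB) = \textrm{Tr}(DB'DB') = \sum_{i,j} d_i d_j B'_{ij} B'_{ji} = \sum_{i,j} d_i d_j \left(B'_{ij}\right)^2 \geq 0.$$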
 
According to my formulas this is true too: If A is positive definite, and B is non-zero (and the other assumptions), then

$$\textrm{Tr}(ABAB) > 0$$

It is not clear which form of the statement is the most useful for this problem.
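
In the same sketch as above (again, not necessarily the intended argument): if ##A## is positive definite then every ##d_i > 0##, and since ##Q## is orthogonal, ##B \neq 0## implies ##B' = Q^TBQ \neq 0##, so at least one term of the sum is strictly positive:

$$\textrm{Tr}(ABAB) = \sum_{i,j} d_i d_j \left(B'_{ij}\right)^2 > 0.$$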
 
jbunniii said:
Only positive semidefinite in this case.

My bad. I tripped over the meaning of "at least one of them does not have negative eigenvalues". Too many negatives!
 
Clearly, if ##A## is only required to be positive semidefinite, it can be zero, and then ##ABAB = 0##.

The basic idea of my thinking is to show that ##AB## has a full set of real eigenvalues, by simultaneously diagonalizing ##A## and ##B##. The eigenvalues of ##(AB)(AB)## are then all nonnegative.

If the diagonalization fails because ##A## is singular, replace ##A## with the nonsingular matrix ##A + eI## where ##e > 0##, and take the limit as ##e \to 0##.
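
A small numerical illustration of that limiting argument (random test matrices, just a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A: positive semidefinite and deliberately singular (rank n - 2)
L = rng.standard_normal((n, n - 2))
A = L @ L.T

# B: an arbitrary symmetric matrix
M = rng.standard_normal((n, n))
B = M + M.T

for e in (1.0, 1e-2, 1e-4, 1e-8):
    Ae = A + e * np.eye(n)           # nonsingular perturbation of A
    ev = np.linalg.eigvals(Ae @ B)   # these eigenvalues come out (numerically) real
    print(e, np.max(np.abs(ev.imag)), np.trace(Ae @ B @ Ae @ B))
# Tr((A+eI)B(A+eI)B) >= 0 for every e > 0, and by continuity also in the limit e -> 0
```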
 
AlephZero said:
Clearly, if ##A## is only required to be positive semidefinite, it can be zero, and then ##ABAB = 0##.

The basic idea of my thinking is to show that ##AB## has a full set of real eigenvalues, by simultaneously diagonalizing ##A## and ##B##. The eigenvalues of ##(AB)(AB)## are then all nonnegative.

If the diagonalization fails because ##A## is singular, replace ##A## with the nonsingular matrix ##A + eI## where ##e > 0##, and take the limit as ##e \to 0##.
I was thinking of doing something similar to how one can "compress" the singular value decomposition when some of the singular values are zero. Here's a sketch:

Since ##A## is symmetric, it can be orthogonally diagonalized: ##A = QDQ^T## where ##D## is diagonal. Since ##A## is positive semidefinite, ##D## has nonnegative diagonal elements: namely, the eigenvalues of ##A##. We may remove any zero eigenvalues as follows. For each ##i## for which ##[D]_{i,i} = 0##, delete the ##i##'th row and column from ##D##. Call the result ##D_0##. Also for each such ##i##, delete the ##i##'th column from ##Q##. Call the result ##Q_0##.

Thus if there is a zero eigenvalue with algebraic multiplicity ##k## and the dimension of ##A## is ##n\times n##, then ##D_0## is ##(n-k)\times(n-k)## and ##Q_0## is ##n \times (n-k)##.

We have ##A = Q_0 D_0 Q_0^T## but now ##D_0## is invertible whereas ##D## was not.

Caveat for any subsequent calculations: ##Q_0## is now rectangular, and we still have ##Q_0^T Q_0 = I## but ##Q_0 Q_0^T## will not be ##I##.
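
A numerical sketch of that compression (hypothetical example matrix, just to illustrate the bookkeeping):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2

# A: symmetric positive semidefinite with a zero eigenvalue of multiplicity k
L = rng.standard_normal((n, n - k))
A = L @ L.T

# Orthogonal diagonalization A = Q D Q^T
w, Q = np.linalg.eigh(A)

# Drop the (numerically) zero eigenvalues and the matching columns of Q
keep = w > 1e-10
Q0 = Q[:, keep]           # n x (n-k)
D0 = np.diag(w[keep])     # (n-k) x (n-k), now invertible

assert np.allclose(A, Q0 @ D0 @ Q0.T)           # A = Q_0 D_0 Q_0^T still holds
assert np.allclose(Q0.T @ Q0, np.eye(n - k))    # Q_0^T Q_0 = I
print(np.allclose(Q0 @ Q0.T, np.eye(n)))        # False: Q_0 Q_0^T is only a projection
```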

I haven't checked whether this delivers the goods.
 