How to Diagonalize a 9x9 Matrix with Unitary Vectors?

Ojisan
I'm trying to solve the following problem (not homework :smile:), which is a strange form of diagonalization problem. Standard references and papers didn't turn up anything for me. Does anyone see a possible approach for this?

- Given n x n full-rank random matrices A1, A2, ..., A9
- Find unit-norm vectors x1, x2, x3, y1, y2, and y3, each of length n, such that [y1^H 0 0; 0 y2^H 0; 0 0 y3^H] [A1 A2 A3; A4 A5 A6; A7 A8 A9] [x1 0 0; 0 x2 0; 0 0 x3] reduces to a 3 x 3 diagonal matrix.

^H is the Hermitian transpose and 0's indicate appropriate zero vectors.

It's like a constrained form of the SVD, but I can't seem to get a handle on it. Thanks in advance for any thoughts!
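To make the setup concrete, here is a small NumPy sketch of the objects involved (the choice n = 5 and all variable names are my own, purely for illustration): it builds the 3n x 3n block matrix, stacks the unit vectors into the two block-diagonal 3n x 3 matrices, and prints the resulting 3 x 3 product. For random unit vectors every entry is generically nonzero; the problem is to choose the x's and y's so that the six off-diagonal entries vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5   # illustrative size, not from the problem statement

# Nine random complex n x n blocks A1..A9, assembled into the 3n x 3n matrix.
A = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(9)]
big_A = np.block([[A[0], A[1], A[2]],
                  [A[3], A[4], A[5]],
                  [A[6], A[7], A[8]]])

def unit(v):
    return v / np.linalg.norm(v)

# Random unit-norm vectors x1..x3 and y1..y3.
x = [unit(rng.standard_normal(n) + 1j * rng.standard_normal(n)) for _ in range(3)]
y = [unit(rng.standard_normal(n) + 1j * rng.standard_normal(n)) for _ in range(3)]

# Block-diagonal 3n x 3 matrices holding the x's and y's on their "diagonal".
X = np.zeros((3 * n, 3), dtype=complex)
Y = np.zeros((3 * n, 3), dtype=complex)
for i in range(3):
    X[i * n:(i + 1) * n, i] = x[i]
    Y[i * n:(i + 1) * n, i] = y[i]

# The 3 x 3 matrix with entries y_i^H A_{ij} x_j; we want its off-diagonals to be zero.
M = Y.conj().T @ big_A @ X
print(np.round(M, 3))   # generically all nine entries are nonzero for random x, y
```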
 
If you are careful not to commute anything (as you could if these were ordinary numbers), I think you can simply do the multiplication and get
$$
\left(
\begin{array}{ccc}
y_1^H & 0 & 0 \\
0 & y_2^H & 0 \\
0 & 0 & y_3^H
\end{array}
\right)
\left(
\begin{array}{ccc}
A_1 & A_2 & A_3 \\
A_4 & A_5 & A_6 \\
A_7 & A_8 & A_9
\end{array}
\right)
\left(
\begin{array}{ccc}
x_1 & 0 & 0 \\
0 & x_2 & 0 \\
0 & 0 & x_3
\end{array}
\right)
=
\left(
\begin{array}{ccc}
y_1^H A_1 x_1 & 0 & 0 \\
0 & y_2^H A_5 x_2 & 0 \\
0 & 0 & y_3^H A_9 x_3
\end{array}
\right)
$$
 
Thanks, but I'm not sure if I follow you. You should get the following with straight computation.

$$
\left(
\begin{array}{ccc}
y_1^H & 0 & 0 \\
0 & y_2^H & 0 \\
0 & 0 & y_3^H
\end{array}
\right)
\left(
\begin{array}{ccc}
A_1 & A_2 & A_3 \\
A_4 & A_5 & A_6 \\
A_7 & A_8 & A_9
\end{array}
\right)
\left(
\begin{array}{ccc}
x_1 & 0 & 0 \\
0 & x_2 & 0 \\
0 & 0 & x_3
\end{array}
\right)
=
\left(
\begin{array}{ccc}
y_1^H A_1 x_1 & y_1^H A_2 x_2 & y_1^H A_3 x_3 \\
y_2^H A_4 x_1 & y_2^H A_5 x_2 & y_2^H A_6 x_3 \\
y_3^H A_7 x_1 & y_3^H A_8 x_2 & y_3^H A_9 x_3
\end{array}
\right)
$$

and the objective is to null out the off-diagonals.
 
Are you sure it has a nontrivial solution? Consider the case where n=1...


I suppose you're interested in a particular case where you know it really does have a solution? Well, I suppose the thing to do is to start small.

Can you figure out how to zero out y2^H A4 x1? Can you figure out all possible ways to do so?

Now, what about zeroing out both y2^H A4 x1 and y3^H A7 x1 at the same time?

And keep going until you manage to do it all.
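For concreteness: y2^H A4 x1 = 0 holds exactly when y2 is orthogonal to the vector A4 x1, so the set of all solutions is the unit sphere in that (n-1)-dimensional orthogonal complement; zeroing y3^H A7 x1 as well just adds the analogous condition on y3. A minimal NumPy sketch of this step (the variable names and n = 5 are my own illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5   # any n >= 2 leaves room for the orthogonality condition
A4 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A7 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def unit(v):
    return v / np.linalg.norm(v)

def unit_orthogonal_to(vectors):
    """Gram-Schmidt a random start vector against `vectors`, then normalize."""
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    for w in vectors:
        w = unit(w)
        v = v - (w.conj() @ v) * w
    return unit(v)

x1 = unit(rng.standard_normal(n) + 1j * rng.standard_normal(n))
y2 = unit_orthogonal_to([A4 @ x1])   # forces y2^H A4 x1 = 0
y3 = unit_orthogonal_to([A7 @ x1])   # forces y3^H A7 x1 = 0
print(abs(y2.conj() @ A4 @ x1), abs(y3.conj() @ A7 @ x1))   # both ~ 1e-16
```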



Alternatively, is the matrix of A's diagonalizable? Maybe looking at that might help.
 
Thanks for your suggestion Hurkyl.

Solutions do exist, although they are probably not unique. I can find a solution with a brute-force search, imposing the partial conditions one at a time as you suggest.

The interpretation amounts to, for example, making A2x2 and A3x3 orthogonal to y1 while keeping A1x1 nonorthogonal to y1. Unfortunately, fixing x2 and x3 in this manner affects the other rows, and vice versa.

The matrix of A's is diagonalizable, so I have tried looking at the SVDs of the individual A's as well as the SVD of the combined matrix, which didn't lead to much insight. I wish there were a block unitary diagonalization procedure, but I had no luck in that direction.

I'd appreciate any thoughts.
 
Ojisan said:
The interpretation amounts to, for example, making A2x2 and A3x3 orthogonal to y1 while keeping A1x1 nonorthogonal to y1.

Is the latter necessary? A null block is diagonal as well.
 
True, but it is necessary for the problem I am interested in; I should have stated the problem to require a nonzero diagonal. In fact, I would like to maximize the Frobenius norm of the resulting diagonal matrix, but I was thinking that might be even harder.
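For what it's worth, the brute-force search can be organized as an alternating update: with the x's fixed, each y_i must be orthogonal to the two off-diagonal vectors A_{ij} x_j (j != i), and within that constraint the choice maximizing |y_i^H A_{ii} x_i| is the normalized projection of A_{ii} x_i onto the constraints' orthogonal complement; with the y's fixed, the x's are updated the same way column-wise. This is only a heuristic sketch under my own assumptions (n >= 3 so the constraints leave room, generic random data, no convergence guarantee), not an established algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# A[i][j] is the (i, j) block of the 3 x 3 block matrix.
A = [[rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      for _ in range(3)] for _ in range(3)]

def unit(v):
    return v / np.linalg.norm(v)

def constrained_max(target, constraints):
    """Unit vector orthogonal to each vector in `constraints` that maximizes
    |v^H target|: project `target` onto the constraints' orthogonal complement."""
    Q, _ = np.linalg.qr(np.column_stack(constraints))   # orthonormal constraint basis
    v = target - Q @ (Q.conj().T @ target)
    return unit(v)   # assumes the projection is nonzero (generic data)

x = [unit(rng.standard_normal(n) + 1j * rng.standard_normal(n)) for _ in range(3)]
y = [None] * 3

for _ in range(50):                        # alternate until it (hopefully) settles
    for i in range(3):                     # given the x's, pick y_i to kill row i
        off = [A[i][j] @ x[j] for j in range(3) if j != i]
        y[i] = constrained_max(A[i][i] @ x[i], off)
    for j in range(3):                     # given the y's, pick x_j to kill column j
        off = [A[i][j].conj().T @ y[i] for i in range(3) if i != j]
        x[j] = constrained_max(A[j][j].conj().T @ y[j], off)

M = np.array([[y[i].conj() @ (A[i][j] @ x[j]) for j in range(3)] for i in range(3)])
print(np.round(np.abs(M), 6))              # off-diagonal magnitudes should be ~0
```

After the first y-update every iterate satisfies all six zero conditions exactly, so the loop only redistributes weight among the diagonal entries; whether it actually reaches the maximum Frobenius norm is a separate question.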
 