# Tricky complex square matrix problem

## Main Question or Discussion Point

I have a complex square matrix, $\textbf{C}$, which satisfies:

$\textbf{C}\textbf{C} = (\textbf{I} \odot \textbf{C})$

where $\textbf{I}$ is the identity matrix and $\odot$ denotes the Hadamard (element-by-element) product. In other words, $\textbf{C}\textbf{C}$ is a diagonal matrix whose diagonal entries are the same as the diagonal entries of $\textbf{C}$, which is not necessarily diagonal itself.

Furthermore, $\textbf{C}$ is Hermitian:

$\textbf{C}^{H}=\textbf{C}$

and $\textbf{C}$ must be full rank (in my problem, $\textbf{C} \triangleq (\textbf{A}^{H}\textbf{A})^{-1}$, where $\textbf{A}$ is complex, square and invertible).

I want to determine whether $\textbf{C} = \textbf{I}$ is the only solution, because this would imply that $\textbf{A}$ is unitary. (This is equivalent to proving that $\textbf{C}$ is diagonal, since a full-rank diagonal solution must have $C_{ii}^2 = C_{ii}$, i.e. $C_{ii} = 1$.) By expanding out terms, I've shown that $\textbf{C} = \textbf{I}$ is the only invertible solution for $(3 \times 3)$ matrices, but I can't seem to obtain a general proof.
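
As a quick numerical sanity check (a minimal NumPy sketch; the random test matrix and the QR-based unitary are just my own examples), $\textbf{C} = \textbf{I}$ does satisfy the equation when $\textbf{A}$ is unitary, while a generic invertible $\textbf{A}$ violates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def residual(A):
    """Return the Frobenius norm of C C - (I o C) for C = (A^H A)^{-1}."""
    C = np.linalg.inv(A.conj().T @ A)
    hadamard = np.eye(len(C)) * C   # I o C: keeps only the diagonal of C
    return np.linalg.norm(C @ C - hadamard)

# Generic invertible A: C != I and the equation clearly fails.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
print(residual(A))        # clearly nonzero

# Unitary A (Q factor of a QR factorization): C = I and the equation holds.
Q, _ = np.linalg.qr(A)
print(residual(Q))        # ~ 1e-15
```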

Any help or insight would be very much appreciated - I'm completely stumped!

## Answers and Replies

you can take the square root of your equation

since C is positive definite ($C=(A^\dagger A)^{-1}$), on the left you have $C$ (the unique positive-definite square root of $CC$), and you obtain (in components):
$$C_{ij}=\sqrt{C_{ii}}\,\delta_{ij}$$

in particular $C_{ii}=\sqrt{C_{ii}}$, so every diagonal entry is 0 or 1; full rank rules out 0, from which you can conclude that C is the identity matrix
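
In code, the square-root step looks like this (a minimal SciPy sketch; the particular diagonal matrix is just an illustrative example, and `sqrtm` computes the principal matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm

# A positive-definite C is the unique positive-semidefinite square root
# of C @ C.  When C @ C is diagonal with non-negative entries, that square
# root is simply the elementwise square root of the diagonal.
d = np.array([0.25, 2.0, 5.0])
D = np.diag(d)
print(np.allclose(sqrtm(D), np.diag(np.sqrt(d))))   # True

# On the diagonal the equation reads C_ii = sqrt(C_ii), i.e. C_ii**2 = C_ii,
# whose only solutions are 0 and 1; full rank rules out 0, leaving C = I.
print(np.roots([1.0, -1.0, 0.0]))                   # roots are 1 and 0
```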

Brilliant! Thank you very much indeed! I had really been scratching my head over that one. Many thanks again for your help!

AlephZero
You can show this from "first principles". Let the matrix be
$$\left(\begin{array}{cc} a & b \\ b^* & c \end{array}\right)$$
where a and c are real.

Multiplying the matrices out gives three equations:

$$a^2 + bb^* = a$$
$$c^2 + bb^* = c$$
$$ab + bc = 0$$

Subtracting the first two equations gives $(a-c)(a+c-1) = 0$, so either $a = c$ or $a+c = 1$.
From the third equation, $b(a+c) = 0$, so either $b = 0$ or $a+c = 0$.

So either $b = 0$, or $a+c = 0$; in the latter case $a+c = 1$ is impossible, so $a = c$, hence $a = c = 0$.

But from the first two equations, if $a = c = 0$ then $bb^* = 0$, so $b = 0$ also.

So the first two equations reduce to $a^2 = a$ and $c^2 = c$, and the only solution which gives a matrix of full rank is $C = I$.
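
The same case analysis can be confirmed symbolically (a SymPy sketch; writing $b = x + iy$ with real $x$, $y$ is my own parametrisation):

```python
import sympy as sp

# 2x2 Hermitian C = [[a, b], [conj(b), c]] with a, c real and b = x + i*y.
a, c, x, y = sp.symbols('a c x y', real=True)

# C C = I o C written out entrywise, with |b|^2 = x^2 + y^2 and the
# off-diagonal entry b(a + c) split into real and imaginary parts:
eqs = [a**2 + x**2 + y**2 - a,   # (1,1) entry
       c**2 + x**2 + y**2 - c,   # (2,2) entry
       x * (a + c),              # Re of the (1,2) entry
       y * (a + c)]              # Im of the (1,2) entry

print(sp.solve(eqs, [a, c, x, y], dict=True))
# Four solutions: a, c each 0 or 1 and b = 0; only a = c = 1 has full rank.
```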

Thanks AlephZero. That is the approach I took in order to obtain a proof for $(2 \times 2)$ and $(3 \times 3)$ matrices. (If I understand correctly, your $a$, $b$ and $c$ are scalars.) However, aesir's solution is valid for the general $(n \times n)$ case, which is especially important for me.

A final question on positive definiteness:
If $\textbf{A}$ is not square, but instead is tall (with linearly independent columns) then is it correct to say that $(\textbf{A}^{H}\textbf{A})^{-1}$ is now positive semi-definite?

My reasoning is that $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 \geq 0$ for any $\textbf{z}$ (with equality when $\textbf{z}$ lies in the null space of $\textbf{A}$).

(Therefore aesir's square root still exists in this case).

I don't think so.
It is true that if $\textbf{z}$ is in the null space of $\textbf{A}$ then $\textbf{z}^H\textbf{A}^H\textbf{A}\textbf{z} = \left\Vert{\textbf{Az}}\right\Vert^2 = 0$, but this means that $\textbf{A}^{H}\textbf{A}$ is positive semi-definite, not its inverse (which does not exist if the null space is non-trivial). By the way, if $\textbf{A}$ has linearly independent columns, its null space is $\{0\}$, so $\textbf{A}^{H}\textbf{A}$ is in fact positive definite.
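
For concreteness, a quick NumPy check (the random test matrices are my own examples): a fat $\textbf{A}$ always has a nontrivial null space, so $\textbf{A}^{H}\textbf{A}$ is singular, whereas a tall $\textbf{A}$ with independent columns gives strictly positive eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fat A (3 x 5): the null space is nontrivial, so A^H A is singular
# (positive semi-definite, with zero eigenvalues).
A_fat = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
print(np.linalg.eigvalsh(A_fat.conj().T @ A_fat))    # two eigenvalues ~ 0

# Tall A (5 x 3) with independent columns: A^H A is positive definite.
A_tall = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
print(np.linalg.eigvalsh(A_tall.conj().T @ A_tall))  # all strictly positive
```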

Ah yes, of course. Thanks for clearing that up!

So are the following two statements correct?
(1) $(\textbf{A}^H\textbf{A})$ is positive definite when the columns of $\textbf{A}$ are independent (which requires that $\textbf{A}$ is tall or square). Therefore $(\textbf{A}^H\textbf{A})^{-1}$ is also positive definite.

(2) When the rank of $\textbf{A}$ is less than its number of columns (which includes all fat matrices), $(\textbf{A}^H\textbf{A})$ is positive semidefinite. In this case, $(\textbf{A}^H\textbf{A})^{-1}$ does not exist.

Yes, that's true.
In case (2) you can say a little more. If you split the vector space into $\mathrm{null}(A)$ and its orthogonal complement $V_1$, you have
$$A^H A = \left(\begin{array}{cc} B^H B & 0 \\ 0 & 0 \end{array} \right)$$
where $B$ is the restriction of $A$ to $V_1$; the block $B^H B$ is positive definite, so $A^H A$ has a positive-definite inverse when restricted to $V_1$.
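
Concretely (a NumPy sketch of that restriction; the SVD-based construction of an orthonormal basis for $V_1$ is my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fat A (3 x 5): rank 3, so null(A) has dimension 2.
A = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))       # numerical rank

# Columns of V1 form an orthonormal basis of the orthogonal complement
# of null(A), i.e. the row space of A.
V1 = Vh[:r].conj().T

G = A.conj().T @ A               # positive semi-definite, singular
G_on_V1 = V1.conj().T @ G @ V1   # the B^H B block in the decomposition

# Restricted to V1 the operator is positive definite, hence invertible there;
# its eigenvalues are the squared nonzero singular values of A.
print(np.linalg.eigvalsh(G_on_V1))
```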