- #1


## Homework Statement

I need the steps to follow when finding the polar decomposition of a hermitian matrix

If someone could direct me to a website that would help, or put up an example here please.

thanks :)

- Thread starter syj


- #2


The polar decomposition as stated in Horn and Johnson's Matrix Analysis (corollary 7.3.3):

"If A is an [itex]n\times n[/itex] complex matrix, then it may be written in the form

[tex] A = PU,[/tex]

where P is a positive semidefinite matrix and U is unitary. The matrix P is always uniquely determined as [itex]P = ( AA^*)^{1/2}[/itex]; if A is nonsingular, then U is uniquely determined as [itex]U = P^{-1}A[/itex]."

So, we need to know how to take square roots of matrices. Let M be any positive semidefinite matrix. It is therefore hermitian. We know by the spectral theorem for hermitian matrices (theorem 4.1.5, Horn and Johnson) that it is unitarily equivalent to a diagonal matrix, i.e., it is diagonalizable via a unitary:

[tex] M = U \Lambda U^*[/tex]

where [itex]\Lambda[/itex] is the usual matrix of eigenvalues. Define [itex]M^{1/2} = U \Lambda^{1/2} U^*[/itex], where

[tex] \Lambda^{1/2} = \mathrm{diag}( \lambda_1^{1/2}, \ldots, \lambda_n^{1/2})[/tex]

and the unique NONNEGATIVE square root is taken in each case. This is possible since the spectrum of a positive semidefinite matrix is nonnegative. Then [itex]M^{1/2}[/itex] is hermitian (this is the converse of the spectral theorem) and it is positive semidefinite since its spectrum is nonnegative (theorem 7.2.1, Horn and Johnson).
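
To make this concrete, here is a small numerical sketch of that recipe (the function name and the numpy-based approach are mine, not from Horn and Johnson):

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite matrix M via its
    spectral decomposition M = U diag(lambda) U*."""
    # eigh is for hermitian matrices: real eigenvalues, unitary U.
    lam, U = np.linalg.eigh(M)
    # Clip tiny negative eigenvalues caused by rounding, then take
    # the unique nonnegative scalar square root of each one.
    lam = np.clip(lam, 0.0, None)
    return U @ np.diag(np.sqrt(lam)) @ U.conj().T
```

As a sanity check, `psd_sqrt(M) @ psd_sqrt(M)` should reproduce M up to rounding.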

OK, so now we are to compute [itex] (AA^*)^{1/2} = (A^2)^{1/2}[/itex]. Note that this is in general NOT equal to A itself. Why? Because the eigenvalues of A may be negative, and we take nonnegative roots when computing the square root. First, we had better verify that [itex]AA^*=A^2[/itex] is positive semidefinite: let x be any column vector, then

[tex]x^*AA^*x = (A^*x)^*(A^*x) = ||A^*x||^2 \geq 0,[/tex]

and [itex](AA^*)^* = AA^*,[/itex] as required.
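
A quick numerical spot-check of that inequality, on a random instance (my own sketch, nothing more):

```python
import numpy as np

# Spot-check: x* (A A*) x equals ||A* x||^2 and is nonnegative,
# for a random complex A and a random complex x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
quad = (x.conj() @ (A @ A.conj().T) @ x).real
norm_sq = np.linalg.norm(A.conj().T @ x) ** 2
```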

Now, A is hermitian and therefore we may invoke the spectral theorem one more time to write

[tex] A = W D W^*,[/tex]

where D is the diagonal matrix of eigenvalues and W is unitary. Then

[tex] AA^* = A^2 = W D^2 W^*[/tex]

and thus

[tex] (AA^*)^{1/2} = (A^2)^{1/2} = W (D^2)^{1/2} W^*.[/tex]

Remember to take the nonnegative roots! And we've found our P! Now, if A is nonsingular, then we can find U via the formula in the original theorem. Verifying that U is unitary (EDIT: and uniqueness of P and U!) can be found in Horn and Johnson.
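
Putting the whole recipe together in code (again just a sketch; the function name is mine, A is assumed hermitian, and the U step assumes A is nonsingular):

```python
import numpy as np

def polar_hermitian(A):
    """Polar decomposition A = P U of a hermitian matrix A:
    P = (A^2)^{1/2} = W (D^2)^{1/2} W*, then U = P^{-1} A."""
    d, W = np.linalg.eigh(A)                  # A = W diag(d) W*
    # (D^2)^{1/2} with nonnegative roots is just diag(|d|).
    P = W @ np.diag(np.abs(d)) @ W.conj().T
    U = np.linalg.solve(P, A)                 # U = P^{-1} A, A nonsingular
    return P, U
```

For a hermitian A with a negative eigenvalue, the P this returns differs from A itself, exactly as noted above.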

If you have any more questions or want to see an example, write back.

PS, buy Horn and Johnson.

"If A is an [itex]n\times n[/itex] complex matrix, then it may be written in the form

[tex] A = PU,[/tex]

where P is a positive semidefinite matrix and U is unitary. The matrix P is always uniquely determined as [itex]P = ( AA^*)^{1/2}[/itex]; if A is nonsingular, then U is uniquely determined as [itex]U = P^{-1}A[/itex]."

So, we need to know how to take square roots of matrices. Let M be any positive semidefinite matrix. It is therefore hermitian. We know by the spectral theorem for hermitian matrices (theorem 4.1.5, Horn and Johnson) that it is unitarily equivalent to a diagonal matrix, a.k.a., it is diagonalizable via a unitary:

[tex] M = U \Lambda U^*[/tex]

where [itex]\Lambda[/itex] is the usual matrix of eigenvalues. Define [itex]M^{1/2} = U \Lambda^{1/2} U^*[/itex], where

[tex] \Lambda^{1/2} = diag ( \lambda_1^{1/2}, \ldots, \lambda_n^{1/2})[/tex]

and the unique NONNEGATIVE square root is taken in each case. This is possible since the spectrum of a positive definite matrix is nonnegative. Then [itex]M^{1/2}[/itex] is hermitian (this is the converse of the spectral theorem) and it is positive semidefinite since its spectrum is nonnegative (theorem 7.2.1, Horn and Johnson).

OK, so now we are to compute [itex] (AA^*)^{1/2} = (A^2)^{1/2}[/itex]. Note that this is in general NOT equal to A itself. Why? Because the eigenvalues of A may be negative, and we take nonnegative roots when computing the square root. First, we had better verify that [itex]AA^*=A^2[/itex] is positive semidefinite: let x be any column vector, then

[tex]x^*AA^*x = (A^*x)^*(A^*x) = ||A^*x||^2 \geq 0,[/tex]

and [itex](AA^*)^* = AA^*,[/itex] as required.

Now, A is hermitian and therefore we may invoke the spectral theorem one more time to write

[tex] A = W D W^*,[/tex]

where D is the diagonal matrix of eigenvalues and W is unitary. Then

[tex] AA^* = A^2 = W D^2 W^*[/tex]

and thus

[tex] (AA^*)^{1/2} = (A^2)^{1/2} = W (D^2)^{1/2} W^*.[/tex]

Remember to take the nonnegative roots! And we've found our P! Now, if A is nonsingular, then we can find U via the formula in the original theorem. Verifying that U is unitary (EDIT: and uniqueness of P and U!) can be found in Horn and Johnson.

If you have any more questions or want to see an example, write back.

PS, buy Horn and Johnson.

Last edited:

- #3


I want to follow the steps you posted with the example and see if I can get the correct answer.

Thanks so much.

I'm studying at the University of South Africa. Our prescribed text is "Matrices and Linear Transformations" by Cullen.

I'll see if I can get a copy of the text you suggested.

thanks again

- #4


- #5


- #6


Will this still work if my matrix is REAL and hermitian?

I'm going to find some time today at work to work through your example.

thank you so much!

The textbook I have just has theorems and no examples.

:(

thanks again.

- #7


Yup, it'll work! In the case that the matrix is real and hermitian, we call it symmetric.
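
As a quick sanity check, here is a toy example of my own (not from any text). Take

[tex] A = \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}, \qquad A^2 = 4I.[/tex]

Then

[tex] P = (A^2)^{1/2} = 2I, \qquad U = P^{-1}A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},[/tex]

and indeed PU = A, with U orthogonal (real unitary) and P positive definite. Note that P is not equal to A, since A has the negative eigenvalue -2.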

- #8


For the question I am working on, the answer is given. I notice that if I do not use unit vectors, then I will have the same answer as the book does for the matrix P.

I am still working out the matrix U.

The matrix P is quite ugly (well to me its ugly) and getting its inverse is taking me a while.

I'm sure I am just making calculation errors, as finding the inverse of a matrix is pretty straightforward.

Thanks again.

PS: Did you see my post on finding the projectors for a matrix? Can you help there?

- #9


But you are correct, we don't need those eigenvectors to be unit length. We just need them to diagonalize A.
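
A quick numerical illustration of this point (my own sketch): rescaling the eigenvector columns, so the diagonalizing matrix is no longer unitary, leaves P unchanged, provided you use the inverse rather than the conjugate transpose:

```python
import numpy as np

# Rescale the eigenvector columns: W_scaled is no longer unitary,
# but W_scaled diag(|d|) W_scaled^{-1} is the same P, because the
# diagonal scaling commutes with diag(|d|).
A = np.array([[2.0, 1.0], [1.0, -3.0]])
d, W = np.linalg.eigh(A)
W_scaled = W @ np.diag([3.0, 5.0])   # arbitrary nonzero scalings
P1 = W @ np.diag(np.abs(d)) @ np.linalg.inv(W)
P2 = W_scaled @ np.diag(np.abs(d)) @ np.linalg.inv(W_scaled)
```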

I'll take a look at your other thread.
