Square root of a Matrix

Homework Statement

The matrix $$C$$ is self-adjoint and positive definite on $$\mathbb{C}^2$$. Determine $$\sqrt{C} \in \mathbb{C}^{2\times 2}$$.

$$C=\left(\begin{array}{cc}5&-4i\\4i&5\end{array}\right)$$

The Attempt at a Solution

Characteristic polynomial:
$$(5-\lambda)^2 - 16 = 0$$

What I have done so far is solve for lambda, giving me 9 and 3 respectively. But then I don't really know what to do with those eigenvalues. Do I have to find the eigenvectors as well? In that case, what should I do with them?

I know that at the end I should get something in the following form to get the square root of the matrix:

$$\sqrt{C} = a\cdot\left(\begin{array}{cc}5&-4i\\4i&5\end{array}\right) + b\cdot \left(\begin{array}{cc}1&0\\0&1\end{array}\right)$$

But I don't know how to figure out both a and b.

Any help will be appreciated. I have been looking for tutorials in google, but I couldn't find any regarding the procedure for the square roots of the matrix.

Oh, sorry, the eigenvalues that solve the characteristic polynomial are 9 and 1, respectively ^^
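As a sanity check, the eigenvalues can be verified numerically; here is a minimal sketch in NumPy (assuming NumPy is available; the variable names are arbitrary):

```python
import numpy as np

# The matrix from the problem statement.
C = np.array([[5, -4j],
              [4j, 5]])

# C is Hermitian, so eigh applies; it returns real eigenvalues
# in ascending order.
eigenvalues = np.linalg.eigh(C)[0]
print(eigenvalues)  # approximately [1. 9.]
```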

jbunniii
Homework Helper
Gold Member
If you can factor the matrix as

$$C = SDS^{-1}$$

where D is diagonal, then do you know how to find $\sqrt{C}$?

Yeah, I do know how to get the diagonal matrix. I guess I'd have to find the two eigenvectors and I would be fine, but then again I still don't know what I would have to do with the diagonal matrix to find $$\sqrt{C}$$ :shrug:

Last edited:
jbunniii
Homework Helper
Gold Member
Well, surely you know how to find $\sqrt{D}$, assuming the entries of $D$ are positive. What happens if you square

$$S\sqrt{D}S^{-1}$$?

Oh, well, squaring it would just give:
$$S\sqrt{D}S^{-1}S\sqrt{D}S^{-1}$$

which is $$SDS^{-1}$$ again. Hmm.

jbunniii
Homework Helper
Gold Member
Oh, well, squaring it would just give:
$$S\sqrt{D}S^{-1}S\sqrt{D}S^{-1}$$

which is $$SDS^{-1}$$ again. Hmm.

Right, so

$$S\sqrt{D}S^{-1}$$

is a square root of

$$SDS^{-1}$$

So if you can factor

$$C = SDS^{-1}$$

then it's easy to find a square root. (By the way, is it "a" square root or "the" square root? i.e., is it unique?)

So how do you find S and D such that

$$C = SDS^{-1}$$?

Hint: eigenvalues and eigenvectors.

Hmm, I am afraid it may be 'the' square root, since I didn't see anything like this in the Seminar class.

Well, to find D I would just put the eigenvalues (9 and 1) on the diagonal, and for S, I'd find the eigenvectors and put them in as columns.
But to be honest, in the lecture they gave this procedure (which I don't understand; even though it looks easy, I'd like to know why it is done this way...):

$$9a + b = 3$$
$$1a + b = 1$$

Any suggestions about this system of equations?
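For what it's worth, that system can be solved and checked directly; a minimal sketch in NumPy (the variable names are my own, not from the lecture):

```python
import numpy as np

# The lecture's system: a*lambda_i + b = sqrt(lambda_i) for each eigenvalue:
#   9a + b = 3
#   1a + b = 1
A = np.array([[9.0, 1.0],
              [1.0, 1.0]])
rhs = np.array([3.0, 1.0])
a, b = np.linalg.solve(A, rhs)  # a = 0.25, b = 0.75

# Form aC + bI and verify that it squares back to C.
C = np.array([[5, -4j],
              [4j, 5]])
sqrtC = a * C + b * np.eye(2)   # [[2, -1j], [1j, 2]]
print(np.allclose(sqrtC @ sqrtC, C))  # True
```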

jbunniii
Homework Helper
Gold Member
OK, I worked out what that procedure is doing. This is interesting; I haven't seen it before.

Suppose $D$ is a diagonal matrix consisting of the eigenvalues (which are presumed positive) of $C$, and $\sqrt{D}$ is the square root of $D$.

Then the equation

$$aD + bI = \sqrt{D}$$

is essentially two equations in two unknowns (since the off-diagonals of $D$, $I$, and $\sqrt{D}$ are zero). Suppose $a,b$ is a solution to this equation. Then multiply both sides of the equation, first on the left by $S$ and then on the right by $S^{-1}$, where $S$ is a matrix of linearly independent eigenvectors corresponding to the eigenvalues in $D$, assuming such a matrix $S$ exists. Then:

$$aSDS^{-1} + bSIS^{-1} = S\sqrt{D}S^{-1}$$

or equivalently

$$aC + bI = \sqrt{C}$$

Pretty slick, although it only works for 2 x 2 matrices. It's important to establish what, if any, conditions are needed to ensure that a solution $\{a,b\}$ exists. Does it work, for example, if the matrix has only one eigenvalue? Also, do there necessarily exist two independent eigenvectors in that case?

Oh whoa, you haven't seen this before and you just figured it out! That's impressive. I see now why it works that way, so it could basically have been done the way you were telling me before, right? The diagonalization method is actually more general than this one, because, as you remarked, this one only works on 2x2 matrices.

Anyway, thanks, really! That helped a lot.

Does it work, for example, if the matrix has only one eigenvalue? Also, do there necessarily exist two independent eigenvectors in that case?

If we only had one eigenvalue, I don't think we would be able to use such a method; at least I don't see how we would.
As for the eigenvectors, I don't think they play any role in this procedure, so I think it would be all right even if they are not independent at all. What are your thoughts on it?

Edit: So yeah, I guess it wouldn't work for a matrix whose Jordan canonical form isn't diagonal.

Last edited:
jbunniii
Homework Helper
Gold Member
Oh whoa, you haven't seen this before and you just figured it out! That's impressive. I see now why it works that way, so it could basically have been done the way you were telling me before, right? The diagonalization method is actually more general than this one, because, as you remarked, this one only works on 2x2 matrices.

Yes, the diagonalization method is more general. It works as long as
(1) all the eigenvalues are positive
(2) there exists a full linearly independent set of eigenvectors.

Condition (2) is guaranteed if all the eigenvalues are distinct, but that's not a necessary condition. E.g. the identity matrix has only one eigenvalue, 1, but it has a full set of eigenvectors. Your matrix is Hermitian, so it is guaranteed to have a full set of eigenvectors. (This is a consequence of the spectral theorem, for instance.)

The way it works is as follows. For each eigenvalue/eigenvector pair, we have

$$C v_i = \lambda_i v_i$$

where $\lambda_i$ is the eigenvalue and $v_i$ is the eigenvector.

We can create a matrix S whose columns are the $v_i$'s:

$$S = [v_1 v_2 \ldots v_n]$$

and we can express all the eigenvalue/eigenvector relations simultaneously as follows:

$$C S = S D$$

where D is the diagonal matrix whose i'th diagonal element is $\lambda_i$. (To see why the matrices are multiplied in the order shown, multiply them out elementwise.)

Now, S is invertible because its columns are linearly independent. Thus we can solve for C:

$$C = SDS^{-1}$$

So that's how you obtain the diagonalized form. Then, as I explained earlier, the square root is simply

$$\sqrt{C} = S \sqrt{D} S^{-1}$$

I hope that's clear!
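The whole diagonalization route above can be sketched numerically as well (assuming NumPy; since C is Hermitian, `eigh` returns orthonormal eigenvectors, so S is unitary and $S^{-1} = S^H$):

```python
import numpy as np

C = np.array([[5, -4j],
              [4j, 5]])

# Eigendecomposition: eigvals form the diagonal of D, the columns of S
# are orthonormal eigenvectors, so S is unitary and S^{-1} = S^H.
eigvals, S = np.linalg.eigh(C)

# Principal square root: take the positive root of each (positive) eigenvalue.
sqrtD = np.diag(np.sqrt(eigvals))
sqrtC = S @ sqrtD @ S.conj().T

# Squaring recovers C.
print(np.allclose(sqrtC @ sqrtC, C))  # True
```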

Yeah! That was just the perfect tutorial I was looking for.

So what was the distinction you were trying to draw earlier between 'a square root of a matrix' and 'the square root of a matrix'? Are we dealing with the 'a square root' case here, then?

jbunniii
Homework Helper
Gold Member
Yeah! That was just the perfect tutorial I was looking for.

So what was the distinction you were trying to draw earlier between 'a square root of a matrix' and 'the square root of a matrix'? Are we dealing with the 'a square root' case here, then?

What I was asking was a "food for thought" question: should I say I have found *a* square root or *the* square root? In other words, if we find a square root, is it unique? Can there be two different matrices, say M and N, such that

$$M^2 = C$$

and

$$N^2 = C$$?

The answer is that there are indeed multiple square roots, depending on what sign you choose for each of the diagonal elements of $\sqrt{D}$. Here,

$$D = \left[\begin{array}{cc} 9 & 0 \\ 0 & 1\end{array}\right]$$

and you can choose $\sqrt{D}$ to be any of the following four matrices:

$$\left[\begin{array}{cc} 3 & 0 \\ 0 & 1\end{array}\right]$$

$$\left[\begin{array}{cc} -3 & 0 \\ 0 & 1\end{array}\right]$$

$$\left[\begin{array}{cc} 3 & 0 \\ 0 & -1\end{array}\right]$$

$$\left[\begin{array}{cc} -3 & 0 \\ 0 & -1\end{array}\right]$$

It's easy to see that these four matrices result in four distinct square roots of C.

The natural question is, then, are there any other square roots of C? (Perhaps ones that can be obtained by some means other than diagonalization.) I wasn't sure when I asked the question. After a little research, it turns out that there are not any others, and I even have a proof in front of me in Horn and Johnson, "Matrix Analysis," but it uses Lagrange interpolating polynomials in the proof, and I'm going to have to do a little background reading to understand what the heck they're doing. I wouldn't worry about it if I were you, unless you want to, of course.
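The four sign choices can be enumerated and checked numerically; a small sketch (assuming NumPy):

```python
import numpy as np
from itertools import product

C = np.array([[5, -4j],
              [4j, 5]])
eigvals, S = np.linalg.eigh(C)  # eigvals = [1, 9], S unitary

# One square root per choice of signs on the diagonal of sqrt(D).
roots = []
for signs in product([1, -1], repeat=2):
    sqrtD = np.diag(np.array(signs) * np.sqrt(eigvals))
    R = S @ sqrtD @ S.conj().T
    assert np.allclose(R @ R, C)  # each choice squares back to C
    roots.append(R)

# The four resulting square roots are pairwise distinct.
distinct = all(not np.allclose(roots[i], roots[j])
               for i in range(4) for j in range(i + 1, 4))
print(distinct)  # True
```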

Haha, 'Lagrange interpolating polynomials', that sounds fancy; I can't wait to do that in class. I see your point now about the uniqueness of the square root, or, in our case, the multiple square roots.

Thanks, jbunniii!

Mark44
Mentor
OK, I worked out what that procedure is doing. This is interesting; I haven't seen it before.

Suppose $D$ is a diagonal matrix consisting of the eigenvalues (which are presumed positive) of $C$, and $\sqrt{D}$ is the square root of $D$.

Then the equation

$$aD + bI = \sqrt{D}$$

is essentially two equations in two unknowns (since the off-diagonals of $D$, $I$, and $\sqrt{D}$ are zero). Suppose $a,b$ is a solution to this equation. Then multiply both sides of the equation, first on the left by $S$ and then on the right by $S^{-1}$, where $S$ is a matrix of linearly independent eigenvectors corresponding to the eigenvalues in $D$, assuming such a matrix $S$ exists. Then:

$$aSDS^{-1} + bSIS^{-1} = S\sqrt{D}S^{-1}$$

or equivalently

$$aC + bI = \sqrt{C}$$

Pretty slick, although it only works for 2 x 2 matrices. It's important to establish what, if any, conditions are needed to ensure that a solution $\{a,b\}$ exists. Does it work, for example, if the matrix has only one eigenvalue? Also, do there necessarily exist two independent eigenvectors in that case?

Can you enlighten me as to how you came up with this equation, and why it's applicable only to 2x2 matrices?

jbunniii
Homework Helper
Gold Member
Can you enlighten me as to how you came up with this equation, and why it's applicable only to 2x2 matrices?

I didn't really come up with it, I reverse-engineered it from Redsummers' question.

Certainly I am free to write the equation as stated, and if it happens to have a solution [the solution doesn't have to be unique], then it gives me a way to express $\sqrt{D}$ as a linear combination of $D$ and $I$. And as I showed, the same coefficients allow me to express the square root of the original matrix, $\sqrt{C}$, as a linear combination of $C$ and $I$.

There are two unknowns, $a$ and $b$, so in general if there is to be a solution I must have two (scalar) equations, which is why it only works for 2x2 matrices. If the matrices were larger, I would have an overdetermined system.

There might be a generalization to n x n matrices if you postulate a linear combination of D, I, and $n-2$ other specific diagonal matrices, but I don't know how useful it would be.

The main advantage in the 2x2 case would seem to be that although you still have to find the eigenvalues of the original matrix to form D, you don't need to find eigenvectors, put them into a matrix S, and invert S; instead you just solve a system of 2 equations in 2 unknowns. I guess it's slightly faster, but I'm not sure I really see the point, other than that it's a kind of neat trick.

P.S. I haven't thought very carefully about exactly what conditions must hold in order for this trick to work, but certainly it requires
(1) a square root must exist
and
(2) a solution $a,b$ to the diagonal matrix equation must exist
and
(3) the original matrix must be diagonalizable (in order to conclude that the same linear combination works for $C$ and $\sqrt{C}$).

A sufficient condition would be distinct, positive eigenvalues, but it's not necessary (e.g., it works for C = I).
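A quick check of the C = I remark: with a repeated eigenvalue the two scalar equations collapse into one (a + b = 1), so the system is underdetermined rather than unsolvable, and any solution works (a sketch, assuming NumPy):

```python
import numpy as np

# Degenerate case C = I: both eigenvalues are 1, so the equations
# a*lambda + b = sqrt(lambda) collapse to the single equation a + b = 1.
C = np.eye(2)

# Any (a, b) on that line gives a valid square root aC + bI = I.
for a in (0.0, 0.5, -2.0):
    b = 1.0 - a
    R = a * C + b * np.eye(2)
    assert np.allclose(R @ R, C)
print("ok")
```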

Last edited:
Hey, I just tried to calculate the eigenvectors, but I can't get a solution. The result is always 0.

Edit: Forget it, using the approach with a and b I was able to solve it correctly ^^ (without any eigenvectors)

Last edited: