Why is diagonalizing a matrix beneficial?

matqkks
Why do we need to diagonalize a matrix? What purpose does it serve, apart from finding the powers of a matrix? Is there any tangible application of this?
 
It allows us to define and calculate more complicated functions of matrices. The standard way to extend complicated functions to other systems, say finding cos(z) for z complex, or e^A for A a matrix or linear transformation, is to expand the function in a power series:

e^A= I+ A+ \frac{1}{2}A^2+ \frac{1}{6}A^3+ \cdots+ \frac{1}{n!}A^n+ \cdots

And, of course, a diagonal matrix has its eigenvalues on the diagonal. That's not so important since usually you have to have found the eigenvalues to diagonalize the matrix.

(Oh, and not all matrices can be diagonalized.)
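As a concrete illustration (not from the original post), here is a short numpy/scipy sketch of e^A computed by diagonalizing A; the matrix is the one from a later post in this thread, and it is assumed diagonalizable:

```python
import numpy as np
from scipy.linalg import expm

# Sketch: compute e^A via the eigendecomposition A = M D M^{-1},
# so that e^A = M e^D M^{-1} (assumes A is diagonalizable).
A = np.array([[-1.0, 6.0],
              [-2.0, 6.0]])        # eigenvalues 2 and 3

eigvals, M = np.linalg.eig(A)      # columns of M are eigenvectors
exp_D = np.diag(np.exp(eigvals))   # e^D: exponentiate each diagonal entry
exp_A = M @ exp_D @ np.linalg.inv(M)

# Cross-check against SciPy's general-purpose matrix exponential
assert np.allclose(exp_A, expm(A))
```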
 
I think that what HallsofIvy wants to say is that, if

A=MDM^{-1}

and D is a diagonal matrix with eigenvalues \lambda_i, then

f(A)=Mf(D)M^{-1}

and f(D) is easy to calculate, because it's just the diagonal matrix with eigenvalues f(\lambda_i).
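To make this concrete, here is a small numpy sketch (not from the original posts) of f(A)= Mf(D)M^{-1}; apply_to_matrix is an illustrative name, and A is assumed diagonalizable with f defined on its eigenvalues:

```python
import numpy as np

# Sketch of f(A) = M f(D) M^{-1} for a diagonalizable A and a function f
# defined on A's eigenvalues (apply_to_matrix is an illustrative name).
def apply_to_matrix(f, A):
    eigvals, M = np.linalg.eig(A)
    return M @ np.diag(f(eigvals)) @ np.linalg.inv(M)

A = np.array([[-1.0, 6.0],
              [-2.0, 6.0]])           # eigenvalues 2 and 3
sqrt_A = apply_to_matrix(np.sqrt, A)  # one matrix square root of A
assert np.allclose(sqrt_A @ sqrt_A, A)
```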
 
Sometimes, there are properties of a matrix that are independent of the basis (i.e., properties that are intrinsic to the linear map it represents). For instance, the rank or the determinant. If you have a matrix in diagonal form, then those properties are easier to determine. For instance, the determinant will be just the product of the diagonal elements, and the rank will be the number of nonzero diagonal elements.
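For instance, in numpy (a quick sketch, not from the original post, assuming A is diagonalizable):

```python
import numpy as np

# Once the eigenvalues are known, the determinant and rank of a
# diagonalizable matrix can be read off the diagonal of D.
A = np.array([[-1.0, 6.0],
              [-2.0, 6.0]])
eigvals, _ = np.linalg.eig(A)

det_from_eigs = np.prod(eigvals)                            # product of diagonal entries
rank_from_eigs = np.count_nonzero(~np.isclose(eigvals, 0))  # count of nonzero entries

assert np.isclose(det_from_eigs, np.linalg.det(A))
assert rank_from_eigs == np.linalg.matrix_rank(A)
```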
 
Petr Mugver said:
I think that what HallsofIvy wants to say is that, if

A=MDM^{-1}

and D is a diagonal matrix with eigenvalues \lambda_i, then

f(A)=Mf(D)M^{-1}

and f(D) is easy to calculate, because it's just the diagonal matrix with eigenvalues f(\lambda_i).
No, that's not what I meant to say because, without specifying that f(x) is a function with some important properties, it just isn't true.
 
HallsofIvy said:
No, that's not what I meant to say because, without specifying that f(x) is a function with some important properties, it just isn't true.

I meant functions like the ones in your post, expressed as Taylor series, whose domain must of course include the eigenvalues of the matrix you want to apply them to... or am I getting something wrong?
 
The simplest example of diagonal matrices is the set of 1×1 matrices M(1,R), which is just the set of real numbers R. In a sense, diagonal matrices are easier to work with: whether you are multiplying, solving a system of equations, or finding eigenvalues, it is always better to have a diagonal matrix. If you can change the basis of your vector space to obtain a diagonal matrix, then most problems become trivial. However, not all matrices are diagonalizable. In that case you can always obtain an upper triangular matrix (block triangular over the reals) that is unitarily equivalent: the Schur form.
Vignon S. Oussa
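For a non-diagonalizable example, here is a brief scipy sketch (not from the original post) of the Schur form mentioned above:

```python
import numpy as np
from scipy.linalg import schur

# A Jordan block is not diagonalizable, but the Schur decomposition
# still gives A = Q T Q^H with Q unitary and T upper triangular.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
T, Q = schur(A)
assert np.allclose(Q @ T @ Q.conj().T, A)
```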
 
Another way of looking at it is that diagonalizing a matrix "uncouples" the equations.

A general matrix can be thought of as representing a system of linear equations. If that matrix can be diagonalized, then we have the same number of equations, but each equation now has only one of the unknown values in it. For example, consider the matrix equation

Ax= b= \begin{bmatrix}-1 & 6 \\ -2 & 6\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}= \begin{bmatrix}2 \\ 3 \end{bmatrix}.

That matrix, A, has eigenvalues 2 and 3 with corresponding eigenvectors <2, 1> and <3, 2> respectively. Let P= \begin{bmatrix}2 & 3 \\ 1 & 2\end{bmatrix}. Then P^{-1}= \begin{bmatrix}2 & -3 \\ -1 & 2\end{bmatrix} and P^{-1}AP= \begin{bmatrix}2 & -3 \\ -1 & 2\end{bmatrix}\begin{bmatrix}-1 & 6 \\ -2 & 6\end{bmatrix}\begin{bmatrix}2 & 3 \\ 1 & 2\end{bmatrix}= \begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}.

Now we can rewrite the equation Ax= b as A(PP^{-1})x= b, so (P^{-1}AP)P^{-1}x= P^{-1}b. If we let z= P^{-1}x, here z= \begin{bmatrix}z_1 \\ z_2\end{bmatrix}= \begin{bmatrix}2 & -3 \\ -1 & 2\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}, then P^{-1}b= \begin{bmatrix}2 & -3 \\ -1 & 2\end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix}= \begin{bmatrix}-5 \\ 4\end{bmatrix}, so the matrix equation becomes \begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}\begin{bmatrix}z_1 \\ z_2\end{bmatrix}= \begin{bmatrix}-5 \\ 4\end{bmatrix}, which is exactly the same as the two equations 2z_1= -5 and 3z_2= 4. These are "uncoupled": they can be solved separately. After you have found z, since z= P^{-1}x, you recover x= Pz= \begin{bmatrix}2 & 3 \\ 1 & 2\end{bmatrix}\begin{bmatrix}z_1 \\ z_2\end{bmatrix}.

Of course, the work involved in finding the eigenvalues and eigenvectors of a matrix and diagonalizing it is, here, far more than just solving the equations directly. But think about a situation where your system has, say, 1000 equations in 1000 unknown values. Also, it is not uncommon that applications involve solving many systems of the form Ax= b, each with the same A and different bs. In a situation like that, the diagonalization only has to be done once for the whole problem.

Also, there are important situations where we have systems of linear differential equations. The same things apply there: diagonalizing "uncouples" the equations.

As I said before, not every matrix can be "diagonalized", but every matrix can be put in "Jordan Normal Form", a slight variation on "diagonalized" where we allow some "1"s just above the main diagonal, which almost uncouples the equations: no equation involves more than two of the unknowns.
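Here is a short numpy sketch of this "uncoupling" idea (illustrative, not from the original post): diagonalize A once, and then each new right-hand side b costs only two matrix-vector products and a componentwise division:

```python
import numpy as np

# Diagonalize once: A = P D P^{-1}.
A = np.array([[-1.0, 6.0],
              [-2.0, 6.0]])
eigvals, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

def solve(b):
    # Uncoupled equations: lambda_i * z_i = (P^{-1} b)_i
    z = (P_inv @ b) / eigvals
    # Transform back: x = P z
    return P @ z

b = np.array([2.0, 3.0])
assert np.allclose(A @ solve(b), b)
```

Note that np.linalg.eig returns unit-length eigenvectors rather than <2, 1> and <3, 2>, but any scaling of the eigenvector basis works here.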
 
