Linear Algebra - Find an orthogonal matrix P

Summary
The discussion revolves around finding an orthogonal matrix P using Mathematica for a Linear Algebra course. The user successfully verified that matrix A is symmetric and computed its eigenvalues and eigenvectors, utilizing Mathematica for efficiency. They applied the Gram-Schmidt process to orthogonalize the eigenvectors, forming the orthogonal matrix P, which was confirmed to be orthogonal. However, the user encountered issues when computing the diagonal matrix D using P, initially due to a mistake in matrix multiplication syntax and later due to using MatrixForm incorrectly. After resolving these issues, they confirmed the eigenvalues were correctly positioned on the diagonal of D.
Lelouch
A problem that I have to solve for my Linear Algebra course is the following

1.png

We are supposed to use Mathematica.

What I have done is first check that A is symmetric, i.e., that ##A = A^T##, which is clear by inspection.

Next I computed the eigenvalues of A. The characteristic polynomial is ## \det(A - \lambda I) = \lambda(4+\lambda)^3 ##. Solving ## \lambda(4+\lambda)^3 = 0 ## then yields the eigenvalues ## \lambda_1 = 0 ## and ## \lambda_2 = -4 ##. I used Mathematica to compute the determinant instead of doing it by hand. I know one could also have used the built-in Eigenvalues[] command in Mathematica; however, since I have to report the step-by-step procedure later, I decided not to use the built-in commands.
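The matrix ##A## itself is only shown in the attached image, so it is reconstructed here from the stated eigenvalues and eigenvectors (an assumption): the 4×4 all-ones matrix minus ##4I##. Under that assumption, a quick numerical cross-check of the characteristic polynomial's roots in Python/NumPy:

```python
import numpy as np

# Assumed matrix, reconstructed from the stated eigen-data (the original
# is only visible in the thread's attachment): A = J - 4I, where J is the
# 4x4 all-ones matrix, i.e. -3 on the diagonal and 1 everywhere else.
A = np.ones((4, 4)) - 4 * np.eye(4)

# Eigenvalues of the symmetric matrix; the roots of lambda*(4+lambda)^3 = 0
# should appear: -4 with multiplicity 3, and 0.
eigenvalues = np.linalg.eigvalsh(A)  # returned in ascending order
print(eigenvalues)
```

In Mathematica this corresponds to Eigenvalues[A]; `eigvalsh` exploits the symmetry of A.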

Next, I computed the eigenvectors for each eigenvalue, i.e., I solved ## (A - \lambda_i I)\vec x = \vec 0 ## for ## i = 1, 2 ##. Using Mathematica to solve these linear systems yields the following eigenvectors.
For ## \lambda_1 = 0 ## we have ## \vec v_1 = (1, 1, 1, 1) ##.
For ## \lambda_2 = -4 ## we have ## \vec v_2 = (-1, 1, 0, 0), \vec v_3 = (-1, 0, 1, 0), \vec v_4 = (-1, 0, 0, 1)##.
I also checked the built in command Eigenvectors[] from Mathematica and it yields the same solution.
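Under the same assumption about ##A## (the all-ones matrix minus ##4I##, reconstructed from the eigen-data since the attachment is not reproduced here), one can verify directly that each listed vector satisfies ##A\vec v = \lambda\vec v##. A sketch in Python/NumPy:

```python
import numpy as np

# Assumed A, reconstructed from the stated eigen-data: J - 4I.
A = np.ones((4, 4)) - 4 * np.eye(4)

v1 = np.array([1.0, 1, 1, 1])             # claimed eigenvector for lambda = 0
vs = [np.array([-1.0, 1, 0, 0]),          # claimed eigenvectors for lambda = -4
      np.array([-1.0, 0, 1, 0]),
      np.array([-1.0, 0, 0, 1])]

# Check A v = lambda v for each claimed eigenpair.
ok1 = np.allclose(A @ v1, 0 * v1)
ok2 = all(np.allclose(A @ v, -4 * v) for v in vs)
print(ok1, ok2)  # True True
```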

Next, what I suppose has to be done is to apply Gram-Schmidt orthogonalization to ## \vec v_2, \vec v_3, \vec v_4 ##. This step I did not do by hand; instead I used the command Orthogonalize[], which gave the vectors ## \vec w_2 = (-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0, 0) ##, ## \vec w_3 = (-\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}, \sqrt{\frac{2}{3}}, 0) ##, ## \vec w_4 = (-\frac{1}{2\sqrt{3}}, -\frac{1}{2\sqrt{3}}, -\frac{1}{2\sqrt{3}}, \frac{\sqrt{3}}{2}) ##.
I also normalized the vector ## \vec v_1 = (1, 1, 1, 1) ##, which gives ## \vec w_1 = (\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}) ##.
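For the report, the Orthogonalize[] step can also be reproduced by hand with classical Gram-Schmidt. A minimal sketch in Python/NumPy (the vectors are those listed above; the function name is my own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: subtract the projections onto the basis
    built so far, then normalize the remainder."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

v2 = np.array([-1.0, 1, 0, 0])
v3 = np.array([-1.0, 0, 1, 0])
v4 = np.array([-1.0, 0, 0, 1])
w2, w3, w4 = gram_schmidt([v2, v3, v4])
print(w2)  # (-1/sqrt(2), 1/sqrt(2), 0, 0)
```

The results should match Orthogonalize[{v2, v3, v4}] in Mathematica, which uses Gram-Schmidt by default.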

Then, I formed the matrix ##P## such that
##P =
\begin{pmatrix}
\frac{1}{2} & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & -\frac{1}{2\sqrt{3}} \\
\frac{1}{2} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & -\frac{1}{2\sqrt{3}} \\
\frac{1}{2} & 0 & \sqrt{\frac{2}{3}} & -\frac{1}{2\sqrt{3}} \\
\frac{1}{2} & 0 & 0 & \frac{\sqrt{3}}{2}
\end{pmatrix}
##.
After that I checked whether ## P ## is an orthogonal matrix, i.e., ## P^T = P^{-1} ##. This indeed turns out to be the case, and the Mathematica command OrthogonalMatrixQ[] also returns True.
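The check ##P^T = P^{-1}## is equivalent to ##P^T P = I##, which avoids computing an inverse. A quick numerical sketch in Python/NumPy with the matrix above:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
P = np.array([
    [0.5, -1/s2, -1/s6,        -1/(2*s3)],
    [0.5,  1/s2, -1/s6,        -1/(2*s3)],
    [0.5,  0.0,   np.sqrt(2/3), -1/(2*s3)],
    [0.5,  0.0,   0.0,           s3/2],
])

# P is orthogonal iff its columns are orthonormal, i.e. P^T P = I;
# this is essentially what OrthogonalMatrixQ tests in Mathematica.
is_orthogonal = np.allclose(P.T @ P, np.eye(4))
print(is_orthogonal)  # True
```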
2.png


Lastly, I wanted to compute the diagonal matrix ## D ## via ##P^T A P = D##. But this turns out rather strangely in Mathematica:

3.png


I am not sure what to make of this last result. I suppose I have made a mistake somewhere in the Gram-Schmidt process, but I cannot seem to find it, if there is one. Also note that this is my first day working with Mathematica and the Gram-Schmidt process.
 

You want to use . (Dot) to multiply matrices in Mathematica, not *, which multiplies elementwise. Try that.
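For readers more familiar with NumPy, the same distinction exists there: `*` is elementwise (like Mathematica's `*`, i.e. Times), while `@` is the true matrix product (like Mathematica's `.`, i.e. Dot). A small illustration with made-up 2×2 matrices:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

elementwise = A * B   # entry-by-entry product, like Times in Mathematica
matmul = A @ B        # matrix product, like Dot in Mathematica

print(elementwise)    # [[ 5 12] [21 32]]
print(matmul)         # [[19 22] [43 50]]
```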
 
phyzguy said:
You want to use . (Dot) to multiply matrices in Mathematica, not *, which multiplies elementwise. Try that.

I just did that and it gives the following result:

4.png
 

Oh my... I just used the following instead of PTranspose:

5.png


It seems correct, since the eigenvalues are on the main diagonal. However, why can't I define/use the name PTranspose to equal the transpose of P?
 

@Lelouch, in future posts, please don't delete the homework template. Its use is required.
 
Lelouch said:
Oh my... I just used the following instead of ##PTranspose##

5.png

It seems correct, since the eigenvalues are on the main diagonal. However, why can't I define/use the name PTranspose to equal the transpose of P?

The problem is that you used MatrixForm when you defined PTranspose. MatrixForm wraps its argument in a display form that looks nice, but the result is no longer an ordinary matrix (a list of lists), so it doesn't behave like one in computations. If you eliminate MatrixForm from your definition of PTranspose, it will work.
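Once PTranspose is a plain matrix again (no MatrixForm in the definition), the product ##P^T A P## should come out diagonal, with the eigenvalues in the column order of ##P##. A numerical sketch in Python/NumPy, using the reconstructed ##A## (an assumption, since the original matrix is only shown in the thread's attachment):

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
P = np.array([
    [0.5, -1/s2, -1/s6,        -1/(2*s3)],
    [0.5,  1/s2, -1/s6,        -1/(2*s3)],
    [0.5,  0.0,   np.sqrt(2/3), -1/(2*s3)],
    [0.5,  0.0,   0.0,           s3/2],
])
A = np.ones((4, 4)) - 4 * np.eye(4)  # assumed A: all-ones matrix minus 4I

# Keep everything a plain array (the analogue of avoiding MatrixForm):
# D = P^T A P should be diag(0, -4, -4, -4), matching P's column order.
D = P.T @ A @ P
print(np.round(D, 12))
```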
 
Mark44 said:
@Lelouch, in future posts, please don't delete the homework template. Its use is required.

I apologize for this. It was around 3 am when I wrote this thread; I was tired and had been banging my head against a wall over this problem. I was redoing the theory, the examples, and the proofs in the book by hand, and was frustrated that I couldn't see where the mistake was. I was even more frustrated when I noticed it was such a silly mistake.

phyzguy said:
The problem is that you used MatrixForm when you defined PTranspose. MatrixForm wraps its argument in a display form that looks nice, but the result is no longer an ordinary matrix (a list of lists), so it doesn't behave like one in computations. If you eliminate MatrixForm from your definition of PTranspose, it will work.

Thank you. Also, thanks for pointing out that matrices should be multiplied with . instead of *.