# Unravelling Structure of a Symmetric Matrix

thatboi
Hey guys,
I was wondering if anyone had any thoughts on the following symmetric matrix:
$$\begin{pmatrix} 0.6 & 0.2 & -0.2 & -0.6 & -1\\ 0.2 & -0.2 & -0.2 & 0.2 & 1\\ -0.2 & -0.2 & 0.2 & 0.2 & -1\\ -0.6 & 0.2 & 0.2 & -0.6 & 1\\ -1 & 1 & -1 & 1 & -1 \end{pmatrix}$$
Notably, when one solves for the eigenvalues and eigenvectors of this matrix, one finds that for the largest-magnitude eigenvalues the eigenvectors exhibit oscillatory behavior (the entries alternate between positive and negative), whereas for the smallest-magnitude eigenvalue the eigenvector is "smoother". This most likely has to do with the alternating ±1 entries in the last row and column, but I can't quite figure it out.
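The observation is easy to reproduce; here is a minimal NumPy sketch that computes the eigendecomposition of the matrix above and prints the sign pattern of each eigenvector, ordered by eigenvalue magnitude:

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

# eigh is the right solver for a symmetric matrix: real eigenvalues
# and orthonormal eigenvectors.
vals, vecs = np.linalg.eigh(A)

# Order the modes by |eigenvalue|, smallest magnitude first.
order = np.argsort(np.abs(vals))
for i in order:
    signs = "".join("+" if x >= 0 else "-" for x in vecs[:, i])
    print(f"lambda = {vals[i]: .4f}   sign pattern: {signs}")
```

(Keep in mind that each eigenvector is only defined up to an overall sign, so a pattern and its negation are the same mode.)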

thatboi said:
Hey guys, I was wondering if anyone had any thoughts on the following symmetric matrix: […]
I think this hypothesis can be easily tested.

Hill said:
I think this hypothesis can be easily tested.
Right, so the last row contributes to the eigenvalue in the sense that it gives the last entry of the resulting column vector when the matrix is multiplied by the eigenvector. So if the eigenvector also has entries that alternate in sign, the dot product between the eigenvector and the last row sums "coherently" and thus produces a larger number than it would for an eigenvector with a different sign configuration. Is this the right way to think about it?
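The "coherent sum" intuition can be made concrete with a quick check: dot the last row against a vector whose signs line up with it, versus a constant-sign vector of the same size:

```python
import numpy as np

last_row = np.array([-1.0, 1.0, -1.0, 1.0, -1.0])

# Signs line up with the last row: every term in the dot product is
# positive, so the contributions add coherently.
aligned = np.array([-0.5, 0.5, -0.5, 0.5, -0.5])

# Constant signs: adjacent terms cancel in pairs.
misaligned = np.array([0.5, 0.5, 0.5, 0.5, 0.5])

print(last_row @ aligned)     # 2.5
print(last_row @ misaligned)  # -0.5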

thatboi said:
Right, so the last row contributes to the eigenvalue […] Is this the right way to think about it?
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.

Hill said:
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.
Sure, I already did some modifications and they seemed to match what I said above (for example, if I put a negative sign on only the second element of the last column and the second element of the last row, then the eigenvector corresponding to the largest-magnitude eigenvalue has a negative sign on only the second element as well). I was just wondering whether there is any more intuition/structure in the matrix beyond what I said above.
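For anyone who wants to repeat that experiment, here is a sketch (the matrix is the one from the opening post; index 1 below is the second element, since NumPy is zero-based):

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

# Flip the sign of the second element of the last row and of the
# last column, keeping the matrix symmetric.
B = A.copy()
B[1, 4] *= -1
B[4, 1] *= -1

vals, vecs = np.linalg.eigh(B)
top = vecs[:, np.argmax(np.abs(vals))]
print("largest-|lambda| eigenvector signs:", np.sign(top))
```

Since an eigenvector is only defined up to an overall ±1, compare the sign pattern (or its negation) before and after the flip rather than the raw entries.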

If you described how you generated that matrix, sometimes there are underlying reasons why certain eigenvectors exist.
