Undergrad Unravelling Structure of a Symmetric Matrix

Summary:
The discussion centers on the analysis of a specific symmetric matrix and its eigenvalues and eigenvectors. It highlights that the largest magnitude eigenvalues yield oscillatory eigenvectors, while the smallest magnitude eigenvalue results in more stable eigenvectors. Participants suggest that the alternating signs in the matrix influence the behavior of the eigenvectors, particularly in relation to the last row's contribution. Experimental modifications to the matrix confirm the initial hypothesis about the relationship between the matrix elements and the eigenvectors. Further exploration is encouraged to uncover deeper structural insights within the matrix.
thatboi
Hey guys,
I was wondering if anyone had any thoughts on the following symmetric matrix:
$$\begin{pmatrix}
0.6 & 0.2 & -0.2 & -0.6 & -1\\
0.2 & -0.2 & -0.2 & 0.2 & 1\\
-0.2 & -0.2 & 0.2 & 0.2 & -1\\
-0.6 & 0.2 & 0.2 & -0.6 & 1\\
-1 & 1 & -1 & 1 & -1
\end{pmatrix}
$$
Notably, when one solves for the eigenvalues and eigenvectors of this matrix, one finds that for the largest magnitude eigenvalues, the eigenvectors show oscillatory behavior (the entries of the eigenvector alternate between positive and negative), whereas for the smallest magnitude eigenvalue, the eigenvector has a "nicer" behavior. This most likely has to do with the alternating ±1 entries in the last row and column, but I can't quite figure it out.
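(For anyone who wants to check this numerically, here is a minimal sketch, assuming NumPy is available; the variable names are just illustrative. It prints each eigenvalue together with the sign pattern of its eigenvector, sorted by eigenvalue magnitude.)

```python
import numpy as np

# The matrix from the post above.
A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

# eigh is intended for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
vals, vecs = np.linalg.eigh(A)

# Print modes from smallest to largest |eigenvalue|, with the sign pattern
# of each eigenvector, to see the oscillation described above.
for i in np.argsort(np.abs(vals)):
    print(f"lambda = {vals[i]: .4f}, eigenvector signs = {np.sign(vecs[:, i])}")
```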
 
thatboi said:
Hey guys,
I was wondering if anyone had any thoughts on the following symmetric matrix:
$$\begin{pmatrix}
0.6 & 0.2 & -0.2 & -0.6 & -1\\
0.2 & -0.2 & -0.2 & 0.2 & 1\\
-0.2 & -0.2 & 0.2 & 0.2 & -1\\
-0.6 & 0.2 & 0.2 & -0.6 & 1\\
-1 & 1 & -1 & 1 & -1
\end{pmatrix}
$$
Notably, when one solves for the eigenvalues and eigenvectors of this matrix, one finds that for the largest magnitude eigenvalues, the eigenvectors show oscillatory behavior (the entries of the eigenvector alternate between positive and negative), whereas for the smallest magnitude eigenvalue, the eigenvector has a "nicer" behavior. This most likely has to do with the alternating ±1 entries in the last row and column, but I can't quite figure it out.
I think this hypothesis can be easily tested.
 
Hill said:
I think this hypothesis can be easily tested.
Right, so the last row contributes to the eigenvalue in the sense that its dot product with the eigenvector gives the last entry of the resulting column vector when the matrix is multiplied by the eigenvector. So if the eigenvector's entries also alternate in sign, then the dot product between the eigenvector and the last row adds up "coherently" and thus produces a larger value than it would for an eigenvector with a different sign pattern. Is this the right way to think about it?
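(A rough way to test this numerically, again assuming NumPy and reusing the matrix from the first post as a sketch: dot the last row with each eigenvector and compare the magnitudes across modes. If the signs line up with the alternating ±1 pattern, the sum should come out largest for the largest-magnitude eigenvalues.)

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

vals, vecs = np.linalg.eigh(A)
last_row = A[-1, :]

# Compare |last_row . v| across modes, ordered by eigenvalue magnitude.
# A "coherent" sign pattern should give a larger dot product.
for i in np.argsort(np.abs(vals)):
    v = vecs[:, i]
    print(f"lambda = {vals[i]: .4f}, |last_row . v| = {abs(last_row @ v):.4f}")
```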
 
thatboi said:
Right, so the last row contributes to the eigenvalue in the sense that its dot product with the eigenvector gives the last entry of the resulting column vector when the matrix is multiplied by the eigenvector. So if the eigenvector's entries also alternate in sign, then the dot product between the eigenvector and the last row adds up "coherently" and thus produces a larger value than it would for an eigenvector with a different sign pattern. Is this the right way to think about it?
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.
 
Hill said:
I meant to test it experimentally, by modifying the "suspected" elements and observing how the eigenvectors are affected.
Sure, I already did some modifications and they seemed to match what I said above (for example, if I put a negative sign on only the second element of the last column and the second element of the last row, then the eigenvector corresponding to the largest magnitude eigenvalue has a negative sign only on its second element as well). I was just wondering if there is any more intuition/structure in the matrix beyond what I said above.
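(For reference, a sketch of that modification, assuming NumPy; the indices simply mirror the description above: flip the sign of the second entry of the last row and of the last column, then look at the sign pattern of the eigenvector for the largest-magnitude eigenvalue.)

```python
import numpy as np

A = np.array([
    [ 0.6,  0.2, -0.2, -0.6, -1.0],
    [ 0.2, -0.2, -0.2,  0.2,  1.0],
    [-0.2, -0.2,  0.2,  0.2, -1.0],
    [-0.6,  0.2,  0.2, -0.6,  1.0],
    [-1.0,  1.0, -1.0,  1.0, -1.0],
])

B = A.copy()
B[4, 1] *= -1   # second element of the last row
B[1, 4] *= -1   # second element of the last column (keeps B symmetric)

vals, vecs = np.linalg.eigh(B)
k = np.argmax(np.abs(vals))      # index of the largest-magnitude eigenvalue
print("eigenvalue:", vals[k])
print("eigenvector signs:", np.sign(vecs[:, k]))
```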
 
If you described how you generated that matrix, it might help; there are often underlying reasons why certain eigenvectors exist.
 