MHB Eigenvalues of a Linear Transformation

Summary: The discussion focuses on finding the eigenvalues of the linear transformation represented by the matrix \(A^t A\), where \(A=(a_1,\ldots,a_n)\) is a row vector. This matrix is symmetric and can be decomposed as \(QDQ^T\), where \(D\) contains the eigenvalues. The only nonzero eigenvalue identified is \(\lambda = a_1^2 + a_2^2 + \ldots + a_n^2\), with corresponding eigenvector \(x = (a_1, a_2, \ldots, a_n)^T\). Additionally, any nonzero vector orthogonal to \(x\) is an eigenvector corresponding to the eigenvalue \(0\). The discussion concludes with the realization that \(0\) is indeed an eigenvalue of this transformation.
Sudharaka
Hi everyone, :)

Here's a question I got stuck on. Hope you can shed some light on it. :)

Find all eigenvalues of a linear transformation \(f\) whose matrix in some basis is \(A^{t}.A\) where \(A=(a_1,\cdots, a_n)\).

Of course if we write the matrix of the linear transformation we get,

\[A^{t}.A=\begin{pmatrix}a_1^2 & a_{1}a_2 & \cdots & a_{1}a_{n}\\ a_2 a_1 & a_2^2 &\cdots & a_{2}a_{n}\\ \vdots & \vdots & \ddots & \vdots \\ a_n a_1 & a_{n}a_2 & \cdots & a_{n}^2\end{pmatrix}\]

Now this is a symmetric matrix, so it can be written as \(A^{t}.A=QDQ^T\), where \(Q\) is an orthogonal matrix and \(D\) is a diagonal matrix. If we can do this, the diagonal entries of \(D\) give all the eigenvalues we need. However, I have no idea how to break \(A^{t}.A\) into \(QDQ^T\). Or do any of you see a different approach to this problem that is much easier? :)
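To get a feel for this matrix, here is a small worked example (the values \(n=2\), \(A=(1,2)\) are just an arbitrary sample choice):

\[A^{t}A=\begin{pmatrix}1\\2\end{pmatrix}\begin{pmatrix}1&2\end{pmatrix}=\begin{pmatrix}1 & 2\\ 2 & 4\end{pmatrix}\]

Every column is a multiple of \((1,2)^{T}\), so the matrix has rank one, which already suggests that at most one eigenvalue can be nonzero.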


 

I think I found a way to solve this problem. The method seems quite obvious, but if you see any mistakes in it, please let me know. :)

So we know that,

\[(A^{T}A)x=\lambda x\]

where \(x\) is an eigenvector corresponding to \(\lambda\). Multiplying both sides on the left by \(A\) and using the associativity of matrix multiplication,

\[A(A^{T}A)x=\lambda (Ax)\]

\[(AA^{T})(Ax)=\lambda (Ax)\]

Since \(A\) is the row vector \((a_1,\cdots,a_n)\), the product \(AA^{T}\) is just the scalar \(a_1^2+a_2^2+\cdots+a_n^2\), so

\[(a_1^2+a_2^2+\cdots+a_n^2)(Ax)=\lambda (Ax)\]

Therefore, provided \(Ax\neq 0\),

\[\lambda = a_1^2+a_2^2+\cdots+a_n^2\]

And that's it! Yay, we found the eigenvalue. :p
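The cancellation above requires \(Ax\neq 0\); a quick way to see that such an \(x\) exists (assuming \(A\neq 0\)) is to take \(x=A^{T}=(a_1,\ldots,a_n)^{T}\) itself:

\[(A^{T}A)A^{T}=A^{T}(AA^{T})=(a_1^2+a_2^2+\cdots+a_n^2)A^{T}\]

so \(A^{T}\) really is an eigenvector for this eigenvalue, and for this choice \(Ax=AA^{T}=a_1^2+\cdots+a_n^2\neq 0\).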
 
You have found one eigenvalue, namely $\lambda = a_1^2+a_2^2+\ldots+a_n^2$. In fact, if $x = (a_1,a_2,\ldots,a_n)^T$ then $x$ is an eigenvector, with eigenvalue $\lambda$.

Now suppose that $y = (b_1,b_2,\ldots,b_n)^T$ is a (nonzero) vector orthogonal to $x$, so that $x\cdot y = 0$. If you form the product $A^TAy$, you will find that its $i$th coordinate is $a_i(x\cdot y) = 0$ for $i=1,2,\ldots,n$, and so $A^TAy = 0$. That shows that $y$ is an eigenvector of $A^TA$, corresponding to the eigenvalue $0$. In other words, all the other eigenvalues of $A^TA$ are $0$.
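For anyone who wants a numerical sanity check, here is a short numpy snippet (the sample vector \(a=(1,2,3)\) is an arbitrary choice, and the snippet is only a sketch of the verification, not part of the argument):

```python
import numpy as np

# Arbitrary sample vector a = (1, 2, 3); A is the 1xn row vector (a_1, ..., a_n).
a = np.array([1.0, 2.0, 3.0])

# A^T A is the rank-one symmetric matrix with entries a_i * a_j.
M = np.outer(a, a)

# Eigenvalues of the symmetric matrix: expect one equal to 1^2 + 2^2 + 3^2 = 14
# and the rest equal to 0 (up to floating-point error).
print(np.linalg.eigvalsh(M))

# x = (a_1, ..., a_n)^T is an eigenvector for the nonzero eigenvalue ...
x = a
print(np.allclose(M @ x, np.dot(a, a) * x))   # True

# ... and any vector orthogonal to x, e.g. y = (2, -1, 0)^T, is sent to zero.
y = np.array([2.0, -1.0, 0.0])
print(np.allclose(M @ y, 0))                  # True
```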
 

Wow, thanks very much, Opalg, for completing my answer. It never occurred to me that 0 could be an eigenvalue here. :)
 