Why Do Rank 1 Matrices Have Eigenvalues 0 and the Trace?

brownman
How come a rank 1 square matrix has eigenvalues of 0 and the trace of the matrix?
Is there any proof other than just solving ##\det(A-\lambda I)=0##?
 
I might argue something like the following: by row operations, a rank 1 matrix may be reduced to a matrix in which only the first row is nonzero. The eigenvectors of such a matrix may be chosen to be the standard Euclidean basis, in which case the eigenvalues are zeros and the 11-component of this reduced matrix. As row operations are invertible, the trace is unchanged, and thus this nonzero eigenvalue equals the trace of the original matrix.

Afterthought: But that is probably erroneous, because even though the row operations are indeed invertible, they do not generally preserve the trace. So the last part of my argument fails.

A better argument seems to be the following: for a rank ##k## matrix there exists a basis in which ##k## of its columns are nonzero, the others being zero. The change of basis may be chosen to be orthogonal, thus preserving the trace.
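In fact the trace is preserved by any similarity transform ##P^{-1}AP##, orthogonal or not. Here is a minimal NumPy sketch of that invariance in the orthogonal case (the random matrix, dimension, and seed are just illustrative choices, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Build a random orthogonal change-of-basis matrix via QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

B = Q.T @ A @ Q  # A expressed in the new orthonormal basis
print(np.isclose(np.trace(A), np.trace(B)))  # True: the trace is unchanged
```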
 
We assume ##A## is an ##n \times n## rank one matrix. If ##n > 1##, any rank one matrix is singular. Therefore ##\lambda = 0## is an eigenvalue: for an eigenvector, just take any nonzero ##v## such that ##Av = 0##.

So let's see if there are any nonzero eigenvalues.

If ##A## is a rank one matrix, then all of its columns are scalar multiples of each other. Thus we may write ##A = xy^T## where ##x## and ##y## are nonzero ##n \times 1## vectors.

If ##\lambda## is an eigenvalue of ##A##, then there is a nonzero vector ##v## such that ##Av = \lambda v##. This means that ##(xy^T)v = \lambda v##. By associativity, we may rewrite the left hand side as ##x(y^T v) = \lambda v##.

Note that ##y^T v## is a scalar, and of course ##\lambda## is also a scalar. If we assume ##\lambda \neq 0##, then this means that ##v## is a scalar multiple of ##x##: specifically, ##v = x(y^T v)/\lambda##.

Therefore ##x## itself is an eigenvector associated with ##\lambda##, so we have ##x(y^T x) = \lambda x##, or equivalently, ##x(\lambda - y^T x) = 0##. As ##x## is nonzero, this forces ##\lambda = y^T x##.

All that remains is to recognize that ##y^T x = \sum_{i = 1}^{n} x_i y_i## is the trace of ##A = xy^T##: the ##i##-th diagonal entry of ##xy^T## is ##x_i y_i##, so summing the diagonal gives exactly ##y^T x##.
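To see this concretely, here is a quick numerical check (a sketch using NumPy; the dimension and random vectors are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n)
y = rng.standard_normal(n)

A = np.outer(x, y)  # the rank-one matrix A = x y^T

eigvals = np.linalg.eigvals(A)
# Pick out the eigenvalue of largest magnitude; the other n-1 are (numerically) zero
lam = eigvals[np.argmax(np.abs(eigvals))]

print(np.isclose(lam, y @ x))          # the nonzero eigenvalue is y^T x ...
print(np.isclose(np.trace(A), y @ x))  # ... which is also the trace of A
```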
 
By the way, note that this does not necessarily mean that ##A## has two distinct eigenvalues. The trace may well be zero, for example
$$A = \begin{bmatrix}
1 & 1 \\
-1 & -1
\end{bmatrix}$$
is a rank one matrix whose only eigenvalue is 0.
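Checking that example numerically, in the same spirit as the sketch above:

```python
import numpy as np

A = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])

print(np.linalg.eigvals(A))  # both eigenvalues are 0, matching the trace
print(A @ A)                 # A^2 = 0: A is nilpotent, so 0 is its only eigenvalue
```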
 