If matrix lambda is diagonal with entries 0,1 then lambda squared is lambda

In summary, the conversation discusses a step in a proof involving idempotent matrices: a real symmetric matrix $A$ can be decomposed as $A=Q \Lambda Q^{T}$, and under the assumption that all eigenvalues of $A$ are 0 or 1, $\Lambda$ is a matrix with all non-diagonal entries equal to 0 and diagonal entries equal to 0 or 1. The goal is to show that $\Lambda^2=\Lambda$, which in turn demonstrates that $A^2=A$. The replies use the dot product definition of matrix multiplication: when $i \ne j$, the $i$th row and $j$th column of $\Lambda$ cannot have nonzero entries in the same position, so their dot product is zero; when $i = j$, the dot product is $\Lambda_{ii}^2 = \Lambda_{ii}$ since each diagonal entry is 0 or 1. Together these give $\Lambda^2 = \Lambda$.
  • #1
Jameson
This is part of a proof I am working on involving idempotent matrices. I believe it is true that any real symmetric $n \times n$ matrix $A$, even one with repeated eigenvalues, can be decomposed as $A=Q \Lambda Q^{T}$ with $Q$ orthogonal and $\Lambda$ diagonal (the spectral theorem). For the matrix I'm working on, we assume that all eigenvalues of $A$ are 0 or 1.

What I need to show now is that $\Lambda^2=\Lambda$ (which in turn helps me show that $A^2=A$, i.e. that $A$ is idempotent). It makes sense intuitively: in the row-column multiplications, a term of the sum is non-zero only when both elements being multiplied are non-zero, and since $\Lambda$ is diagonal that can only happen when $i=j$. I'm trying to formulate a more rigorous argument for this, though.

Just to make sure I'm clear, $\Lambda$ is an $n \times n$ matrix with all non-diagonal entries equal to 0, and diagonal entries equal to 0 or 1.

I know that $ \displaystyle \Lambda^2_{ij} = \sum_{k=1}^{n} \Lambda_{ik}\Lambda_{kj}$

Do you think I can start from here to make my argument?
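For what it's worth, here is a quick numerical sanity check of what I'm expecting (a NumPy sketch; the matrices are just illustrative, not the ones from my problem):

```python
import numpy as np

# Illustrative check: a diagonal 0/1 matrix Lambda and A = Q Lambda Q^T
# should both be idempotent.
n = 5
Lam = np.diag(np.random.randint(0, 2, size=n).astype(float))

# Random orthogonal Q via the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(np.random.randn(n, n))
A = Q @ Lam @ Q.T

print(np.allclose(Lam @ Lam, Lam))  # True: Lambda^2 = Lambda
print(np.allclose(A @ A, A))        # True: A^2 = A
```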
 
  • #2
I'd use the dot product definition of matrix multiplication for this one. First get the $i \ne j$ case out of the way: since $\Lambda$ is diagonal, the $i$th row and $j$th column cannot have nonzero entries in the same position, so they are either zero or orthogonal and their dot product is zero. For $i = j$, use the fact that $\Lambda$ is diagonal with entries 0 or 1 to argue that the $i$th row and the $i$th column are equal, so their dot product is 0 when the diagonal entry is 0 (both vectors are zero) and 1 when the diagonal entry is 1 (both vectors have unit length). In either case the dot product equals $\Lambda_{ii}$, which is exactly what is needed.
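Written out entrywise, this is just a sketch of how the sum from the first post collapses (using only that $\Lambda$ is diagonal with entries 0 or 1):

$$\Lambda^2_{ij} = \sum_{k=1}^{n} \Lambda_{ik}\Lambda_{kj} = \Lambda_{ii}\Lambda_{ij},$$

since $\Lambda_{ik} = 0$ whenever $k \ne i$. If $i \ne j$, then $\Lambda_{ij} = 0$, so $\Lambda^2_{ij} = 0 = \Lambda_{ij}$; if $i = j$, then $\Lambda^2_{ii} = \Lambda_{ii}^2 = \Lambda_{ii}$ because each diagonal entry is 0 or 1.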
 
  • #3
Thanks, Bacterius. That was the logic I was thinking of, but I didn't state it as fully as you did. I think I can use your argument to sufficiently show what I need to. :)
 

1. What does it mean for a matrix to be diagonal?

A diagonal matrix is a matrix where all the entries outside the main diagonal (the diagonal from the top left to the bottom right) are equal to zero. This means that the only non-zero entries in the matrix are on the main diagonal.
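For example, the following $3 \times 3$ matrix is diagonal (an illustrative case with diagonal entries 1, 0, 1):

$$\Lambda = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$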

2. How do you find the square of a diagonal matrix?

To square a diagonal matrix, square each entry on the main diagonal. The off-diagonal entries are zero and stay zero, so the result is again a diagonal matrix.
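For instance (a small illustrative check with NumPy):

```python
import numpy as np

D = np.diag([2.0, -1.0, 0.0])    # an arbitrary diagonal matrix
print(D @ D)                     # diagonal matrix with entries 4, 1, 0
print(np.diag(np.diag(D) ** 2))  # same result: square each diagonal entry
```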

3. What happens to a diagonal matrix with entries 0 and 1, and to its eigenvalues, when the matrix is squared?

The eigenvalues of a diagonal matrix are exactly its diagonal entries. Squaring the matrix squares each diagonal entry, and since $0^2 = 0$ and $1^2 = 1$, both the matrix and its eigenvalues are unchanged: such a matrix is idempotent.
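A short illustrative check (the particular matrix is just an example):

```python
import numpy as np

L = np.diag([0.0, 1.0, 1.0, 0.0])
print(np.sort(np.linalg.eigvals(L)))      # eigenvalues 0, 0, 1, 1
print(np.sort(np.linalg.eigvals(L @ L)))  # same eigenvalues, since 0^2 = 0 and 1^2 = 1
print(np.allclose(L @ L, L))              # True: the matrix itself is unchanged
```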

4. Can a non-diagonal matrix also have eigenvalues with the same property as "lambda squared is lambda"?

Yes. The equation $\lambda^2 = \lambda$ holds exactly when $\lambda$ is 0 or 1, and any idempotent matrix (one satisfying $A^2=A$), diagonal or not, has all of its eigenvalues equal to 0 or 1: if $Av = \lambda v$ with $v \ne 0$, then $\lambda^2 v = A^2 v = A v = \lambda v$, so $\lambda^2 = \lambda$. Non-diagonal projection matrices are standard examples.
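A small illustration with a non-diagonal projection matrix (chosen just for the example):

```python
import numpy as np

# Orthogonal projection onto the line spanned by v = (1, 1): P = v v^T / (v^T v).
v = np.array([[1.0], [1.0]])
P = v @ v.T / (v.T @ v)               # P = [[0.5, 0.5], [0.5, 0.5]], not diagonal

print(np.allclose(P @ P, P))          # True: P is idempotent
print(np.sort(np.linalg.eigvals(P)))  # eigenvalues 0 and 1, each satisfying lambda^2 = lambda
```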

5. How can "lambda squared is lambda" be applied in real-world situations?

The property of "lambda squared is lambda" characterizes idempotent (projection) matrices, which appear in many fields. In statistics, the hat matrix of a least-squares regression is a symmetric idempotent matrix; in physics, projection operators in quantum mechanics are idempotent; and in linear algebra generally, any orthogonal projection onto a subspace squares to itself. In each case, eigenvalues equal to 0 or 1 reflect the fact that projecting twice gives the same result as projecting once.
