Proving the uniqueness of eigenspaces

  • Thread starter Eclair_de_XII
  • Tags: Uniqueness
  • #1
Eclair_de_XII
TL;DR Summary
Let ##A## be a linear transformation that maps some vector space to itself. Let ##\lambda,\mu## be two distinct scalars and define ##T_k:=A-k\cdot I##, where ##I## denotes the identity transformation. Show that the following holds:

\begin{eqnarray}
\ker(T_\lambda^2)\cap\ker(T_\mu^2)=0
\end{eqnarray}
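A quick numerical sanity check of the claim, using a hypothetical concrete example (not part of the original problem): take ##A=\mathrm{diag}(1,2)## on ##\mathbb{R}^2## with ##\lambda=1,\mu=2##, and search a small integer grid for vectors lying in both kernels.

```python
from itertools import product

# Hypothetical sanity check: A = diag(1, 2), lambda = 1, mu = 2.
# T_k = A - k*I; we search a small integer grid for vectors in
# ker(T_lambda^2) ∩ ker(T_mu^2) and expect only the zero vector.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, x):
    return tuple(sum(M[i][j] * x[j] for j in range(2)) for i in range(2))

A = [[1, 0], [0, 2]]

def T(k):
    # T_k = A - k*I
    return [[A[i][j] - (k if i == j else 0) for j in range(2)] for i in range(2)]

Tl2 = matmul(T(1), T(1))   # T_lambda^2
Tm2 = matmul(T(2), T(2))   # T_mu^2

common = [x for x in product(range(-3, 4), repeat=2)
          if matvec(Tl2, x) == (0, 0) and matvec(Tm2, x) == (0, 0)]
print(common)  # → [(0, 0)]
```

Here ##T_\lambda^2 x=(0,x_2)## and ##T_\mu^2 x=(x_1,0)##, so the only common solution is the zero vector, as the claim predicts.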
Let ##x\in\ker(T_\lambda^2)\cap\ker(T_\mu^2)##. Then the following must hold:

\begin{eqnarray}
(A^2-2\lambda\cdot A+\lambda^2I)x=0\\
(A^2-2\mu\cdot A+\mu^2I)x=0
\end{eqnarray}

Subtracting the latter equation from the former gives us:

\begin{eqnarray}
0&=&(A^2-2\lambda\cdot A+\lambda^2I)x-(A^2-2\mu\cdot A+\mu^2I)x\\
&=&-2(\lambda-\mu)Ax+(\lambda-\mu)(\lambda+\mu)x
\end{eqnarray}

Since ##\lambda\neq\mu##, dividing both sides of the equality by ##\lambda-\mu## does not change the solution set, so we obtain:

\begin{eqnarray}
0&=&-2Ax+(\lambda+\mu)x\\
&=&-2Ax+(\lambda+\lambda+(\mu-\lambda))x\\
&=&-2Ax+2\lambda\cdot x+(\mu-\lambda)x\\
&=&2(\lambda\cdot I-A)x+(\mu-\lambda)x\\
&=&-2T_\lambda x+(\mu-\lambda)x
\end{eqnarray}

It follows that ##(T_\lambda+\frac{1}{2}(\lambda-\mu)I)x=0##, i.e. ##x\in\ker(T_\lambda+\frac{1}{2}(\lambda-\mu)I)##. It can be shown by the same argument, with the roles of ##\lambda## and ##\mu## exchanged, that ##x\in\ker(T_\mu+\frac{1}{2}(\mu-\lambda)I)##.

(I am still figuring out how to finish this proof, if it can be finished.)
 
  • #2
The problem is unnecessarily complicated. Denote ##A-\lambda\cdot I## by ##B##, and then denote ##A-\mu\cdot I## by ##B-cI##, where ##c=\mu-\lambda\neq 0##. Then try to show that if ##B^2x=0## and also ##(B-cI)^2x=0##, where ##c\neq 0##, then ##x=0##.
 
  • #3
Sure. I show first that ##(B-cI)^2## can be expanded like a polynomial, since ##B## commutes with ##cI##:

\begin{eqnarray}
(B-cI)^2&=&B^2-B(cI)-(cI)B+c^2I\\
&=&B^2-2cB+c^2I
\end{eqnarray}

Let ##x\in\ker(B^2)\cap\ker[(B-c)^2]##.

\begin{eqnarray}
0&=&B^2x-(B-cI)^2x\\
&=&(2cB-c^2I)x
\end{eqnarray}

Since ##c\neq 0##, dividing by ##c## gives:

\begin{eqnarray}
0&=&(2B-cI)x\\
&=&(B+B-cI)x\\
-Bx&=&(B-cI)x
\end{eqnarray}

By uniqueness of the inverse, either ##B-cI=-B## or ##x=0##. The former fails to hold if ##B\neq\frac{1}{2}cI##, in which case, ##x=0##.

If this solution is correct, I am wondering how I would show this is true (if it is true) for higher powers. I don't like to think about the amount of algebra I'd need to prove the general case the way I did it for the case where the power is just two.
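As an aside, the expansion at the start of this post can be checked numerically. The following is a sketch with a hypothetical ##2\times 2## matrix ##B## and scalar ##c## (my own choice of values, not from the thread):

```python
from fractions import Fraction

# Sanity check of the expansion (B - cI)^2 = B^2 - 2cB + c^2 I
# for a hypothetical 2x2 matrix B and scalar c.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(-1)]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
c = Fraction(3)

# Left-hand side: (B - cI)^2
BcI = [[B[i][j] - c * I[i][j] for j in range(2)] for i in range(2)]
lhs = matmul(BcI, BcI)

# Right-hand side: B^2 - 2cB + c^2 I
B2 = matmul(B, B)
rhs = [[B2[i][j] - 2 * c * B[i][j] + c * c * I[i][j] for j in range(2)]
       for i in range(2)]

assert lhs == rhs
print("expansion verified")
```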
 
  • #4
Or: applying ##B## again to ##(2B-cI)x=0## implies ##cBx=0##, so ##Bx=0##; then ##(2cB-c^2I)x=0## also gives ##c^2x=0##, so ##x=0##.

A basic algebra fact is that in the integers, in the polynomial ring over a field, and in any Euclidean domain, the greatest common divisor of two elements can be written as a linear combination of those elements. Hence if ##P,Q## are relatively prime polynomials, so that their gcd is ##1##, there are polynomials ##f,g## such that ##fP+gQ=1##. Then for any map ##A##, plugging ##A## into both sides of this equation gives ##f(A)P(A)+g(A)Q(A)=I##. Now assuming ##P(A)x=Q(A)x=0## gives ##x=Ix=f(A)P(A)x+g(A)Q(A)x=0##.

In your original problem, ##P=(X-\lambda)^2## and ##Q=(X-\mu)^2##.
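A minimal sketch of this identity in the simplest case, assuming hypothetical roots ##a=1,b=2##: there ##P=X-1##, ##Q=X-2##, ##f=1##, ##g=-1##, and ##f(A)P(A)+g(A)Q(A)=(A-I)-(A-2I)=I## for every ##A##, which is exactly why a vector killed by both ##P(A)## and ##Q(A)## must be zero.

```python
# Hypothetical illustration of the gcd argument for P = X - 1, Q = X - 2:
# f = 1, g = -1, so f*P + g*Q = 1, and therefore
# f(A)P(A) + g(A)Q(A) = P(A) - Q(A) = I for any A.

def matsub(M, N):
    return [[M[i][j] - N[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]

def shifted(A, k):
    # A - k*I
    return [[A[i][j] - (k if i == j else 0) for j in range(2)] for i in range(2)]

A = [[1, 5], [0, 2]]           # sample matrix with eigenvalues 1 and 2
PA = shifted(A, 1)             # P(A) = A - I
QA = shifted(A, 2)             # Q(A) = A - 2I

assert matsub(PA, QA) == I     # f(A)P(A) + g(A)Q(A) = I
print("f(A)P(A) + g(A)Q(A) = I verified")
```

Any ##x## with ##P(A)x=Q(A)x=0## then satisfies ##x=Ix=P(A)x-Q(A)x=0##.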
 
  • #5
Thank you. That is very helpful.

mathwonk said:
applying ##B## again implies ##cBx=0##
Will that not change the solution set, though, if for example, ##B## is singular? Wait, never mind. I think that the new solution set obtained from applying ##B## to both sides would just be a superset of the original solution set. And since the new solution set is trivial, it must follow that the original solution set is also trivial.
 
  • #6
Using this idea on your original problem directly: note that if ##P=X-a## and ##Q=X-b## with ##a\neq b##, then ##P-Q=b-a\neq 0##, so we can take ##f=\frac{1}{b-a}## and ##g=-\frac{1}{b-a}## to get ##fP+gQ=1##.

Then to do the case of ##P^2## and ##Q^2##, just cube both sides of this equation. Every term on the left will be divisible by either ##P^2## or ##Q^2##, and the right side will still equal ##1##. So we get an equation of the form ##FP^2+GQ^2=1##. The same trick works for all higher powers, i.e. to deal with ##P^n## and ##Q^n##, raise the equation to the power ##2n-1##, I guess.
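The cubing step can be carried out explicitly. A sketch with the hypothetical choice ##a=0,b=1## (so ##P=X##, ##Q=X-1##, ##f=1##, ##g=-1##): cubing ##P-Q=1## gives ##P^2(P-3Q)+Q^2(3P-Q)=1##, so ##F=P-3Q## and ##G=3P-Q##.

```python
from fractions import Fraction

# Cubing trick for the hypothetical case a = 0, b = 1:
# P = X, Q = X - 1, and (P - Q)^3 = P^2 (P - 3Q) + Q^2 (3P - Q) = 1,
# so F = P - 3Q and G = 3P - Q satisfy F P^2 + G Q^2 = 1.
# Polynomials are coefficient lists, lowest degree first.

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pscale(p, s):
    return [s * a for a in p]

def pmul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

P = [Fraction(0), Fraction(1)]         # X
Q = [Fraction(-1), Fraction(1)]        # X - 1

F = padd(P, pscale(Q, -3))             # F = P - 3Q
G = padd(pscale(P, 3), pscale(Q, -1))  # G = 3P - Q

total = padd(pmul(F, pmul(P, P)), pmul(G, pmul(Q, Q)))
print(trim(total))  # → [Fraction(1, 1)], i.e. the constant polynomial 1
```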
 
  • #7
I don't understand your concern in post #5, since if ##B## is not singular, you are done immediately, so we may assume if we wish that ##B## is singular, but that is irrelevant to the argument. I am just trying to prove that ##x=0##. From ##(2B-cI)x=0## we know that ##2Bx=cx##, so applying ##B## again gives ##2B^2x=cBx##; but ##B^2x=0## by hypothesis, so ##cBx=0##. But ##c\neq 0##, so ##Bx=0##. Now either ##(2cB-c^2I)x=0## or ##2Bx=cx## gives ##0=c^2x## or ##0=cx##; either way ##x=0##.
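This argument can be illustrated numerically even in the extreme singular case. A sketch with a hypothetical nilpotent ##B## and ##c=1## (my own example): here ##B^2=0##, so ##\ker(B^2)## is the whole space, yet the intersection with ##\ker((B-cI)^2)## is still trivial.

```python
from itertools import product

# Hypothetical illustration with a singular (in fact nilpotent) B and c = 1:
# B^2 = 0, so ker(B^2) is everything, yet
# ker(B^2) ∩ ker((B - cI)^2) = {0}, as the argument above guarantees.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, x):
    return tuple(sum(M[i][j] * x[j] for j in range(2)) for i in range(2))

B = [[0, 1], [0, 0]]
c = 1
BcI = [[B[i][j] - (c if i == j else 0) for j in range(2)] for i in range(2)]

B2 = matmul(B, B)
BcI2 = matmul(BcI, BcI)

assert B2 == [[0, 0], [0, 0]]   # B is nilpotent, hence singular
common = [x for x in product(range(-3, 4), repeat=2)
          if matvec(B2, x) == (0, 0) and matvec(BcI2, x) == (0, 0)]
print(common)  # → [(0, 0)]
```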
 

1. What is an eigenspace?

An eigenspace is a subspace of a vector space that contains all of the eigenvectors corresponding to a specific eigenvalue of a linear transformation.

2. How do you prove the uniqueness of eigenspaces?

To prove the uniqueness of the eigenspace for a given eigenvalue ##\lambda##, note that the eigenspace is by definition the kernel of ##A-\lambda I##, which is completely determined by the transformation ##A## and the scalar ##\lambda##. A related fact, proved in this thread, is that the (generalized) eigenspaces belonging to distinct eigenvalues intersect only in the zero vector.

3. Why is proving the uniqueness of eigenspaces important?

Proving the uniqueness of eigenspaces is important because it allows us to fully understand the behavior of a linear transformation and its relationship with its eigenvalues. It also helps us to identify and classify different types of matrices, such as diagonalizable and non-diagonalizable matrices.

4. Are eigenspaces always unique?

The eigenspace belonging to a given eigenvalue is unique, but it need not be one-dimensional: a single eigenvalue may have several linearly independent eigenvectors, in which case its eigenspace has dimension greater than one.

5. How does the uniqueness of eigenspaces relate to the diagonalization of a matrix?

The uniqueness of eigenspaces is closely related to the diagonalization of a matrix. If an ##n\times n## matrix has ##n## linearly independent eigenvectors, it can be diagonalized by using these eigenvectors as the columns of an invertible matrix ##V##, so that ##V^{-1}AV## is diagonal with the eigenvalues on the diagonal.
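The diagonalization described above can be sketched on a hypothetical ##2\times 2## example (all values my own, chosen for illustration): the matrix below has eigenvalues ##2,3## with eigenvectors ##(1,0)## and ##(1,1)##.

```python
from fractions import Fraction

# Sketch of diagonalization for a hypothetical 2x2 matrix A with
# eigenvalues 2 and 3: eigenvectors (1,0) and (1,1) form the columns
# of V, and V^{-1} A V is the diagonal matrix of eigenvalues.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(3)]]
V = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]      # eigenvector columns
Vinv = [[Fraction(1), Fraction(-1)], [Fraction(0), Fraction(1)]]  # inverse of V
D = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(3)]]      # eigenvalues on diagonal

assert matmul(matmul(Vinv, A), V) == D
print("V^{-1} A V is diagonal")
```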
