Interesting theorem, complex eigenvalues.

bobby2k
Take a look at this theorem.

[theorem.png — the theorem in the image: Let ##A## be a real ##2\times 2## matrix with a complex eigenvalue ##\lambda = a - bi## (##b \neq 0##) and an associated eigenvector ##\mathbf{v}## in ##\mathbb{C}^2##. Then
$$A = PCP^{-1}, \qquad \text{where } P = \begin{bmatrix} \operatorname{Re}\mathbf{v} & \operatorname{Im}\mathbf{v} \end{bmatrix} \text{ and } C = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}.]$$


Is there a way to prove this theorem? I would like to show it using the standard method of diagonalizing a matrix.

I mean, if ##P = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{bmatrix}## and
$$D = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix},$$
we have that ##AP = PD## even for complex eigenvectors and eigenvalues.

But the ##P## matrix in this theorem is real, and so is the ##C## matrix. I think they have used that ##\mathbf{v}_1## and ##\mathbf{v}_2## are conjugates, and so are ##\lambda_1## and ##\lambda_2##. (Indeed, since ##A## is real, conjugating ##A\mathbf{v}_1 = \lambda_1\mathbf{v}_1## gives ##A\bar{\mathbf{v}}_1 = \bar{\lambda}_1\bar{\mathbf{v}}_1##.)

How would you show this theorem? Can you use ordinary diagonalization to show it?
 
I would suggest: check that ##AP = PC## by writing ##A## with some matrix elements (say ##p, q, r, s##) and using the definitions.
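
Alternatively, without introducing the entries ##p, q, r, s## explicitly: write ##\lambda = a - bi## and split ##A\mathbf{v} = \lambda\mathbf{v}## into real and imaginary parts. Since
$$\lambda\mathbf{v} = (a-bi)\big(\operatorname{Re}\mathbf{v} + i\operatorname{Im}\mathbf{v}\big) = \big(a\operatorname{Re}\mathbf{v} + b\operatorname{Im}\mathbf{v}\big) + i\big(a\operatorname{Im}\mathbf{v} - b\operatorname{Re}\mathbf{v}\big)$$
and ##A## is real, comparing real and imaginary parts gives
$$A\operatorname{Re}\mathbf{v} = a\operatorname{Re}\mathbf{v} + b\operatorname{Im}\mathbf{v}, \qquad A\operatorname{Im}\mathbf{v} = -b\operatorname{Re}\mathbf{v} + a\operatorname{Im}\mathbf{v},$$
which are exactly the two columns of ##AP = PC##.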
 
Before proving the theorem, I'll first establish a lemma: if two real ##2\times 2## matrices give the same result when multiplied on the right by ##\begin{bmatrix} 1 \\ i \end{bmatrix}##, then the two matrices are equal.

Let ##X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}## and ##Y = \begin{bmatrix} y_1 & y_2 \\ y_3 & y_4 \end{bmatrix}##. Then if
$$X\begin{bmatrix} 1 \\ i \end{bmatrix} = Y\begin{bmatrix} 1 \\ i \end{bmatrix},$$
that is,
$$\begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}\begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} y_1 & y_2 \\ y_3 & y_4 \end{bmatrix}\begin{bmatrix} 1 \\ i \end{bmatrix},$$
then
$$\begin{bmatrix} x_1 + x_2 i \\ x_3 + x_4 i \end{bmatrix} = \begin{bmatrix} y_1 + y_2 i \\ y_3 + y_4 i \end{bmatrix}.$$
Equating real and imaginary parts (all the entries are real) gives ##x_1=y_1##, ##x_2=y_2##, ##x_3=y_3##, ##x_4=y_4##, and hence ##X=Y##.

Now for the theorem itself:
Notice that ##P \begin{bmatrix} 1 \\ i \end{bmatrix} = \mathbf{v}## and that
$$C\begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} a-bi \\ b+ai \end{bmatrix} = \begin{bmatrix} \lambda \\ \lambda i \end{bmatrix} = \lambda\begin{bmatrix} 1 \\ i \end{bmatrix}.$$
Then, since ##A## is real with ##A\mathbf{v} = \lambda\mathbf{v}##, we have
$$AP\begin{bmatrix} 1 \\ i \end{bmatrix} = \lambda P\begin{bmatrix} 1 \\ i \end{bmatrix} = P\lambda\begin{bmatrix} 1 \\ i \end{bmatrix} = PC\begin{bmatrix} 1 \\ i \end{bmatrix}.$$
But both ##AP## and ##PC## are real, hence by the lemma
$$AP = PC, \quad \text{or equivalently} \quad A = PCP^{-1}.$$
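
As for the second part of the question: yes, you can also get there from ordinary diagonalization, at least in outline. Since ##A## is real, conjugating ##A\mathbf{v} = \lambda\mathbf{v}## gives ##A\bar{\mathbf{v}} = \bar{\lambda}\bar{\mathbf{v}}##, so ordinary diagonalization reads
$$A = MDM^{-1}, \qquad M = \begin{bmatrix} \mathbf{v} & \bar{\mathbf{v}} \end{bmatrix}, \qquad D = \begin{bmatrix} \lambda & 0 \\ 0 & \bar{\lambda} \end{bmatrix}.$$
From ##\operatorname{Re}\mathbf{v} = \tfrac{1}{2}(\mathbf{v}+\bar{\mathbf{v}})## and ##\operatorname{Im}\mathbf{v} = \tfrac{1}{2i}(\mathbf{v}-\bar{\mathbf{v}})## we get ##P = MQ## with
$$Q = \frac{1}{2}\begin{bmatrix} 1 & -i \\ 1 & i \end{bmatrix},$$
and a direct computation with ##\lambda = a - bi## shows ##Q^{-1}DQ = C##. Hence
$$A = MDM^{-1} = (PQ^{-1})D(QP^{-1}) = P\big(Q^{-1}DQ\big)P^{-1} = PCP^{-1}.$$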
 
Very nice proof, thank you very much!
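
In case a numerical sanity check is useful to anyone: below is a minimal numpy sketch (the matrix ##A## is just an example I picked). It selects the eigenvalue with negative imaginary part so that ##\lambda = a - bi## matches the theorem's convention, builds ##P## and ##C##, and confirms ##A = PCP^{-1}##.

```python
import numpy as np

# Example real 2x2 matrix with complex eigenvalues (chosen for illustration);
# its eigenvalues are 2 ± i.
A = np.array([[1.0, -2.0],
              [1.0,  3.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Pick the eigenvalue of the form a - bi (negative imaginary part),
# matching the convention lambda = a - bi in the theorem.
k = np.argmin(eigvals.imag)
lam, v = eigvals[k], eigvecs[:, k]

a, b = lam.real, -lam.imag                 # lambda = a - bi
P = np.column_stack([v.real, v.imag])      # P = [Re v  Im v]
C = np.array([[a, -b],
              [b,  a]])

# A should equal P C P^{-1}.
print(np.allclose(A, P @ C @ np.linalg.inv(P)))   # expected: True
```

Note that any eigenvector for ##\lambda## works here: rescaling ##\mathbf{v}## by a complex scalar changes ##P##, but the proof above only uses ##A\mathbf{v} = \lambda\mathbf{v}##, so ##A = PCP^{-1}## still holds.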
 