Linear Algebra - Diagonalisation

In summary: a real 2 x 2 matrix A can be diagonalised by a real matrix when it has two distinct real eigenvalues, and by a complex (but no real) matrix when its eigenvalues are a complex-conjugate pair. When the eigenvalue is repeated and A is not a multiple of the identity, A cannot be diagonalised at all, but it is similar to the Jordan block [[lambda,1],[0,lambda]].
  • #1
kehler

Homework Statement


a) Consider a real matrix, A =
[a b]
[c d]
Give simple algebraic criteria in terms of a,b,c,d for when the following are true:
i)there exists a real matrix P that diagonalises A,
ii)there exists a complex matrix P that diagonalises A, but no real matrix P that does

b) Let A be a 2 x 2 matrix which cannot be diagonalized by any matrix P (real or not). Prove there is an invertible real 2 x 2 matrix P such that
P^(-1)AP =
[ lambda 1 ]
[ 0 lambda]


The Attempt at a Solution


Here's what I have so far (the 'x's are meant to be lambdas):

a) To find the eigenvalues, let det(A - xI) = 0
So, (a - x)(d - x) - bc = 0
x^2 - (a+d)x + (ad - bc) = 0
Using the quadratic formula,
x = ((a + d) +- sqrt((a - d)^2 + 4bc)) / 2

i) When there are two real and linearly independent eigenvectors, A is diagonalisable by a real matrix P. This occurs when A has two distinct real eigenvalues.
So (a - d)^2 + 4bc > 0

ii) When there are two complex and linearly independent eigenvectors, A is diagonalisable by a complex matrix P. This occurs when A has two distinct complex eigenvalues.
So (a - d)^2 + 4bc < 0

When (a - d)^2 + 4bc = 0, A has one eigenvalue, (a+d)/2, of multiplicity 2. Row reducing (A - xI) for this eigenvalue produces at most one free variable, and so there will not be two linearly independent eigenvectors. Thus, A is not diagonalisable when (a - d)^2 + 4bc = 0.
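As a sanity check on the discriminant criterion above, here is a small numerical sketch (numpy; the entries a, b, c, d are made-up illustrative values, not from the problem):

```python
import numpy as np

# Illustrative values only; any real a, b, c, d work here
a, b, c, d = 1.0, 2.0, 3.0, 4.0
A = np.array([[a, b], [c, d]])

# Discriminant of the characteristic polynomial x^2 - (a+d)x + (ad - bc)
disc = (a - d)**2 + 4*b*c

eigvals = np.linalg.eigvals(A)
if disc > 0:
    print("two distinct real eigenvalues:", eigvals)
elif disc < 0:
    print("complex-conjugate eigenvalues:", eigvals)
else:
    print("repeated eigenvalue:", eigvals)
```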

Is this correct?? :S I don't quite know what the question means by 'simple algebraic criteria'..

b) I have no clue how to do this part. If A can't be diagonalised, it will not have two linearly independent eigenvectors. What then will the columns of P be? :S
How should I go about proving this?

Any help would be much appreciated :)
 
  • #2
Anyone? :(
 
  • #3
That seems pretty ok. Except that just because the eigenvalue equation has a double root, that doesn't mean it can't be diagonalized. [[2,0],[0,2]] has a double eigenvalue but it can be pretty easily diagonalized. For b) what P really represents is a choice of basis. Choose a special basis {b1,b2} where b1 is an eigenvalue with eigenvector lambda and b2 is orthogonal to b1.
 
  • #4
Thanks Dick :).
For (a), is it correct to say that if an eigenvalue equation has double roots, it can be diagonalised only when a=d, b=0, and c=0? I've tried putting in numbers and those seem like the conditions that the matrix has to fulfil to be diagonalised when it has a single eigenvalue.
 
  • #5
It sure is. If you have a diagonal matrix with constant entries (which your diagonalized matrix must be), then it's a multiple of the identity. It's the same diagonal matrix in ALL bases.
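A quick numerical check of this point (numpy; both matrices are made-up examples with the same repeated eigenvalue): a 2 x 2 matrix with a double eigenvalue is diagonalisable exactly when the geometric multiplicity is 2, which forces it to be a multiple of the identity.

```python
import numpy as np

lam = 2.0  # the repeated eigenvalue (illustrative)

identity_multiple = np.array([[lam, 0.0], [0.0, lam]])
jordan_block = np.array([[lam, 1.0], [0.0, lam]])

geo_mults = []
for A in (identity_multiple, jordan_block):
    # geometric multiplicity = dim null(A - lam*I) = 2 - rank(A - lam*I)
    geo_mults.append(2 - np.linalg.matrix_rank(A - lam * np.eye(2)))

# The identity multiple has 2 independent eigenvectors, the Jordan block only 1
print(geo_mults)
```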
 
  • #6
Cool thanks :)
where b1 is an eigenvalue with eigenvector lambda
I don't quite get what you mean by this. Do you mean b1=
[lambda]
[   0  ] ?
Or did you mean to say that b1 is an eigenvector with eigenvalue lambda? But this doesn't seem to work because the eigenvector would be
[0]
[0]
which would mean that b2 could be made up of any two numbers :S

Can you give me a pointer on how I can start the proof? Do I just find P and expand P^(-1)AP?
 
  • #7
I meant eigenvector with eigenvalue lambda, sure. A(b1)=lambda*b1. Let's take {b1,b2} to be orthonormal as well. Then in that basis the matrix A_{ij}=(bi).A(bj). Does that help to start?
 
  • #8
Oh ok, thanks :). I'll work on that and see if I get anywhere.
 
  • #9
You shouldn't have to work that hard. Tell me what A_{11} and A_{21} are.
 
  • #10
Hmm, do I just put the values in? In that case,
A_{11} is b1.lambda(b1)
A_{21} is b2.lambda(b1) = 0?? because b2 and b1 are orthogonal
A_{12} is b1.lambda(b2) = 0 for the same reason
A_{22} is b2.lambda(b1)

But I should be getting
[ lambda 1 ]
[ 0 lambda]
shouldn't I? :S
 
  • #11
kehler said:
Hmm, do I just put the values in? In that case,
A_{11} is b1.lambda(b1)
A_{21} is b2.lambda(b1) = 0?? because b2 and b1 are orthogonal
A_{12} is b1.lambda(b2) = 0 for the same reason
A_{22} is b2.lambda(b1)

But I should be getting
[ lambda 1 ]
[ 0 lambda]
shouldn't I? :S

Not yet. We picked b1 and b2 to be orthonormal. So A_11=lambda*b1.b1=lambda. A_21 is lambda*b1.b2=0. A_12 is b1.A(b2) and A_22 is b2.A(b2). We can't say much about them (since b2 isn't an eigenvector). But at least you have two elements of your matrix right. But we also know A should have a double eigenvalue of lambda. You can tell me what A_22 is based on that information, right?
 
  • #12
So A_22 is lambda?
Are we trying to find b2 from this? :S
 
  • #13
Ja, A_22 must be lambda. Now we can forget about the basis. So the matrix looks like A=[[lambda,a],[0,lambda]] with a not equal to zero, correct? It's now pretty easy to find a matrix P so that P^(-1)AP=[[lambda,1],[0,lambda]]. At least that's the way I did it.
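This last step can be checked numerically (numpy; lam and a are arbitrary made-up values with a nonzero):

```python
import numpy as np

lam, a = 3.0, 5.0  # illustrative values, a != 0
A = np.array([[lam, a], [0.0, lam]])
P = np.array([[1.0, 0.0], [0.0, 1.0 / a]])

# P^(-1)AP should be [[lam, 1], [0, lam]]
result = np.linalg.inv(P) @ A @ P
print(result)
```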
 
  • #14
Got to run to school now. Thanks for your help so far :)
 
  • #15
Hey Dick,
I still have a couple of questions about the proof... Hope you don't mind :)

1. How did you know it had to be a basis of the eigenvector and a vector orthogonal to it?
2. How did you get this formula: A_{ij} = (bi).A(bj)?
The only formula I know to change bases is P^(-1)AP :S
3. Why's lambda*b1.b1=lambda?
4. And how is it easy to find P from here?
 
  • #16
2 first. A_{ij} is the ith component of A acting on the jth basis vector. The column vectors of A are the images of the basis vectors. So A_{ij}=bi.A(bj) if the basis is orthonormal. I picked the vectors the way I did because I wanted to get (lambda,0) in the first column. I picked b1 and b2 to be orthonormal, so b1.b2=0 and b1.b1=b2.b2=1. Do you mean the matrix P that will change [[lambda,a],[0,lambda]] into [[lambda,1],[0,lambda]]? It's just [[1,0],[0,1/a]], it's pretty easy to see that.
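The basis-change formula can also be verified numerically. Here is a sketch (numpy) where a non-diagonalisable matrix is built by rotating a Jordan block, so its eigenvector is not along a coordinate axis; the eigenvalue and rotation angle are made up for illustration:

```python
import numpy as np

lam, theta = 2.0, 0.7  # illustrative eigenvalue and rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# A is similar to a Jordan block, so it has the double eigenvalue lam
A = R @ np.array([[lam, 1.0], [0.0, lam]]) @ R.T

b1 = R[:, 0]  # unit eigenvector: A b1 = lam * b1
b2 = R[:, 1]  # orthonormal to b1
B = np.column_stack([b1, b2])

# In the orthonormal basis {b1, b2}: A_ij = b_i . A(b_j) = (B^T A B)_ij
A_in_basis = B.T @ A @ B
print(np.round(A_in_basis, 6))  # first column is (lam, 0)
```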
 
  • #17
Wait... So do you mean P^(-1)A results in [[lambda,a],[0,lambda]] and then multiplying this by P gives us [[lambda,1],[0,lambda]]??
 
  • #18
I mean if A=[[lambda,a],[0,lambda]] and P=[[1,0],[0,1/a]] then P^(-1)AP=[[lambda,1],[0,lambda]].
 
  • #19
Ohh ok :). So what you did was change the basis of A to get it into that form? Sorry, I kinda still fail to see the big picture here...
 
  • #20
I picked a special basis of A and argued what the matrix must look like in that basis. Then I did a final P thing to change the a to 1.
 
  • #21
Ahh ok, I see... Thanks for your help :)
 
  • #22
Dick said:
I picked a special basis of A and argued what the matrix must look like in that basis. Then I did a final P thing to change the a to 1.

Hi, where did the 'a' come from? :S
 
  • #23
muso07 said:
Hi, where did the 'a' come from? :S

The only thing we know about A_12 is that it's not zero (otherwise the matrix would be diagonalizable). So I just called it 'a'.
 

1. What is diagonalisation in linear algebra?

Diagonalisation is a process in linear algebra where a square matrix is transformed into a diagonal matrix by finding a new basis for the vector space on which the matrix acts. This new basis, made up of eigenvectors, allows for simpler calculations and easier interpretation of the matrix.

2. Why is diagonalisation important in linear algebra?

Diagonalisation is important because it simplifies many calculations involving matrices. It also allows for better understanding of the matrix and its properties, such as its eigenvalues and eigenvectors. Diagonalisation is also useful in solving systems of differential equations and finding the power of a matrix.

3. How do you diagonalise a matrix?

To diagonalise a matrix, you must first find the eigenvalues of the matrix. Then, for each eigenvalue, you must find the corresponding eigenvectors. These eigenvectors will form the new basis. Finally, using the eigenvectors as the columns of an invertible matrix P, you obtain P^(-1)AP = D, a diagonal matrix with the eigenvalues as its diagonal entries.
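The steps above can be sketched with numpy (the matrix here is a made-up example with two distinct eigenvalues):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # illustrative; eigenvalues 5 and 2

vals, P = np.linalg.eig(A)  # eigenvector columns form P
D = np.diag(vals)           # diagonal matrix of eigenvalues

# Verify the diagonalisation A = P D P^(-1)
assert np.allclose(P @ D @ np.linalg.inv(P), A)
```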

4. Can all matrices be diagonalised?

No, not all matrices can be diagonalised. An n x n matrix is guaranteed to be diagonalisable if it has n distinct eigenvalues. If the matrix has repeated eigenvalues, it may still be diagonalisable, but only when each eigenvalue supplies enough linearly independent eigenvectors. Additionally, non-square matrices cannot be diagonalised.

5. How is diagonalisation related to eigendecomposition?

Diagonalisation and eigendecomposition are closely related concepts: eigendecomposition writes a matrix as A = PDP^(-1), where D holds the eigenvalues and the columns of P are the eigenvectors, which is exactly what diagonalisation produces. Both apply only to square matrices; for non-square matrices, the analogous factorisation is the singular value decomposition.
