Problem involving eigenvalues/vectors

  • Thread starter: jacks0123
Hi!
Please help me with this problem which must be solved using eigenvalues and eigenvectors:
A geometric sequence of vectors (2x1 column vectors) in which you get from one term to the next by multiplying by a 2x2 matrix:
t_n = (R^(n-1))*a
Where:
t_n is the nth vector in the sequence
a is the first vector in the sequence
R is the 2x2 matrix
R =
[a b]
[c d]


1. Does t_n converge as n -> infinity? What conditions are sufficient for the sequence to converge? What vector does t_n converge to in each case?

2. What is the formula for the sum of the first n vectors in this sequence? Under what conditions does it converge, and to what vector?
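
For concreteness, here is a small numerical sketch of the setup (the entries of R and the starting vector a below are made up purely for illustration):

Code:
import numpy as np

R = np.array([[0.5, 0.1],
              [0.2, 0.4]])   # hypothetical matrix R = [[a, b], [c, d]]
a = np.array([1.0, 2.0])     # hypothetical first vector in the sequence

def t(n):
    # t_n = R^(n-1) * a
    return np.linalg.matrix_power(R, n - 1) @ a

def partial_sum(n):
    # sum of the first n vectors: t_1 + t_2 + ... + t_n
    return sum(t(k) for k in range(1, n + 1))

print(t(10))            # the 10th term of the sequence
print(partial_sum(50))  # sum of the first 50 terms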

Thanks!
 
Welcome to PF, jacks0123! :smile:

For starters, is it possible that your condition should be (a+d)^2 - 4 det(R) > 0?

Did you try anything?
How far did you get?

Can you say anything about the eigenvalues and eigenvectors of R based on the condition (a+d)^2 - 4 det(R) > 0?

Can you diagonalize R?
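
Here's a rough numerical sketch, with a made-up matrix, of what diagonalizing R buys you: if R = P D P^(-1) with D diagonal, then R^n = P D^n P^(-1), so powers of R reduce to powers of the eigenvalues.

Code:
import numpy as np

R = np.array([[0.9, 0.2],
              [0.1, 0.6]])            # made-up example matrix

vals, P = np.linalg.eig(R)            # eigenvalues and eigenvector matrix P
# R = P D P^(-1) with D diagonal, so R^n = P D^n P^(-1),
# and D^n just raises each eigenvalue to the nth power.
n = 5
lhs = np.linalg.matrix_power(R, n)
rhs = P @ np.diag(vals ** n) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))          # True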
 
First, answer the questions assuming the first vector in the sequence is an eigenvector of R. (Hint: the answers will depend on the corresponding eigenvalue.)

Then, think how you can use those answers for an arbitrary starting vector.
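
If it helps, here is a small numerical check (again with made-up numbers) of the eigenvector case: when the first vector is an eigenvector e with eigenvalue lam, the whole sequence collapses to the scalar geometric sequence lam^(n-1) times e.

Code:
import numpy as np

R = np.array([[0.9, 0.2],
              [0.1, 0.6]])                 # made-up example matrix
vals, vecs = np.linalg.eig(R)
lam, e = vals[0], vecs[:, 0]               # one eigenvalue and its eigenvector

# If the first vector is the eigenvector e, then t_n = R^(n-1) e = lam^(n-1) e.
n = 7
print(np.allclose(np.linalg.matrix_power(R, n - 1) @ e,
                  lam ** (n - 1) * e))     # True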
 
I do not understand how to do question 1 at all.

My friend showed me how, but I do not understand what he is talking about. Could someone explain this in more detail?

Suppose you decompose a into its eigenvector components, say a = k1e1 + k2e2 where e1 and e2 are the eigenvectors, then you apply R to it many times. The e1 component will blow up to infinity if abs(k1)>1 and similarly for e2. So for convergence we have the following alternatives:
(a) a=0 obviously never changes
(b) abs(k2)<1, then it converges to zero if abs(k1)<1 and it converges to e1 if k1=1
(c) abs(k1)<1, then it converges to zero if abs(k2)<1 and it converges to e2 if k2=1
 
Suppose the eigenvalues are t1 and t2, then you need to replace abs(k1) by abs(t1), and abs(k2) by abs(t2).

This is because:

R a = R (k1e1 + k2e2) = k1 (R e1) + k2 (R e2) = k1 t1 e1 + k2 t2 e2

R^2 a = R (k1 t1 e1 + k2 t2 e2) = k1 t1^2 e1 + k2 t2^2 e2

R^n a = k1 t1^n e1 + k2 t2^n e2

So the blowing up is with t1 and t2.
If either abs(t1) or abs(t2) is greater than 1, the result blows up.
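
Here is a quick numerical sanity check of that formula, with a made-up R and a:

Code:
import numpy as np

R = np.array([[0.9, 0.2],
              [0.1, 0.6]])                 # made-up example matrix
a = np.array([1.0, 2.0])                   # made-up starting vector

(t1, t2), vecs = np.linalg.eig(R)          # eigenvalues t1, t2
e1, e2 = vecs[:, 0], vecs[:, 1]            # eigenvectors e1, e2

k1, k2 = np.linalg.solve(vecs, a)          # decomposition a = k1*e1 + k2*e2

n = 8
print(np.allclose(np.linalg.matrix_power(R, n) @ a,
                  k1 * t1 ** n * e1 + k2 * t2 ** n * e2))  # True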
 
I'm a bit confused. What is t and what is k? How did you get from
k1 (R e1) + k2 (R e2)
to
k1 t1 e1 + k2 t2 e2
 
Just got up. :zzz:

jacks0123 said:
I'm a bit confused. What is t and what is k? How did you get from
k1 (R e1) + k2 (R e2)
to
k1 t1 e1 + k2 t2 e2

k1 and k2 are defined by the decomposition of "a" into the eigenvectors e1 and e2.
Any 2D vector can be decomposed into a linear combination of 2 independent vectors.

And oh, I meant t1 and t2 to be the eigenvalues of R.
I'll edit my previous post to match.
This means R e1 = t1 e1 since that is the definition of an eigenvalue and its eigenvector.
 
Hi guys, I'm stuck with the same problem... I still don't get what k1 and k2 are. What do you mean by the decomposition of "a" into the eigenvectors e1 and e2? Please reply fast.
 
2 independent vectors form a basis for ℝ2.
Any 2D vector can be decomposed as a linear combination of the (independent) vectors in a basis.

Key to this problem is that there are 2 independent eigenvectors.
The condition given guarantees that, although that is still something that you would need to prove.
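
A small sketch, with a made-up matrix, of why the condition matters: (a+d)^2 - 4 det R is the discriminant of the characteristic polynomial, so when it is positive there are two distinct real eigenvalues and hence two independent eigenvectors.

Code:
import numpy as np

R = np.array([[3.0, 1.0],
              [2.0, 0.0]])                 # made-up example matrix
trace = np.trace(R)                        # a + d
det = np.linalg.det(R)                     # det R
print(trace ** 2 - 4 * det > 0)            # the condition (a+d)^2 - 4 det R > 0: True here

vals, vecs = np.linalg.eig(R)
print(vals)                                # two distinct real eigenvalues
print(np.linalg.matrix_rank(vecs))         # 2, so the eigenvectors are independent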
 
jack201 said:
Hi guys, I'm stuck with the same problem... I still don't get what k1 and k2 are. What do you mean by the decomposition of "a" into the eigenvectors e1 and e2? Please reply fast.

Think of it like decomposing a vector in 3D space into x, y, z components. Eigenvector decomposition does the same thing for a particular matrix: the eigenvectors give you a set of linearly independent basis vectors, and "a" gets written as a combination of them, in the same way you would write a 3D vector in terms of x, y, z.
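
In case a concrete computation helps, here is a sketch (numbers made up) of the decomposition being described: finding k1 and k2 is just solving a 2x2 linear system whose columns are the eigenvectors.

Code:
import numpy as np

R = np.array([[0.9, 0.2],
              [0.1, 0.6]])              # made-up example matrix
a = np.array([1.0, 2.0])                # made-up vector to decompose

vals, vecs = np.linalg.eig(R)           # columns of vecs are the eigenvectors e1, e2
k1, k2 = np.linalg.solve(vecs, a)       # solve [e1 e2] [k1, k2]^T = a

print(np.allclose(k1 * vecs[:, 0] + k2 * vecs[:, 1], a))   # True: a = k1*e1 + k2*e2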
 