Problem involving eigenvalues/vectors

  • Context: Graduate 
  • Thread starter: jacks0123

Discussion Overview

The discussion revolves around a problem involving eigenvalues and eigenvectors in the context of a geometric sequence of vectors defined by a matrix transformation. Participants explore conditions for convergence of the sequence and the formula for the sum of the first n vectors in the sequence, addressing both theoretical and conceptual aspects.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant asks about the convergence of the sequence defined by the matrix R and seeks conditions for convergence and the limiting vector.
  • Another participant suggests a condition involving the determinant of R and the trace to analyze eigenvalues and their implications for convergence.
  • It is proposed that if the first vector is an eigenvector of R, the convergence behavior will depend on the corresponding eigenvalue.
  • A participant expresses confusion about the first question and requests a more detailed explanation, mentioning the decomposition of the initial vector into eigenvector components.
  • Discussion includes the assertion that the behavior of the sequence's convergence is influenced by the absolute values of the eigenvalues.
  • Clarifications are made regarding the definitions of k1 and k2, which represent coefficients in the decomposition of the vector a into its eigenvector components.
  • Another participant emphasizes the importance of independent eigenvectors in forming a basis for the vector space and the implications for the problem at hand.

Areas of Agreement / Disagreement

Participants express varying levels of understanding regarding the concepts of eigenvalues, eigenvectors, and vector decomposition. There is no consensus on the specific conditions for convergence or the interpretation of the coefficients in the decomposition.

Contextual Notes

Some participants reference the need for proof regarding the independence of eigenvectors and the conditions for convergence, indicating that assumptions may be required for certain claims to hold.

Who May Find This Useful

This discussion may be useful for students and practitioners interested in linear algebra, particularly those studying eigenvalues and eigenvectors in the context of matrix transformations and their applications in convergence analysis.

jacks0123
Hi!
Please help me with this problem, which must be solved using eigenvalues and eigenvectors:
A geometric sequence of vectors (2x1 column vectors), where to get from one term to the next you multiply by a 2x2 matrix:
t_n = (R^(n-1))*a
Where:
t_n is the nth vector in the sequence, a is the first vector, and R is the 2x2 matrix
R =
[a b]
[c d]


1. Does t_n converge as n -> infinity? What conditions are sufficient for the sequence to converge, and to what vector does t_n converge in each case?

2. What is the formula for the sum of the first n vectors in this sequence? Under what conditions does it converge, and to what vector?

Thanks!
 
Welcome to PF, jacks0123! :smile:

For starters, is it possible that your condition should be ((a+d)^2)-4detR>0?

Did you try anything?
How far did you get?

Can you say anything about the eigenvalues and eigenvectors of R based on the condition ((a+d)^2)-4detR>0?

Can you diagonalize R?
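
To make the diagonalization step concrete, here is a small numpy sketch (the matrix entries are made up for illustration, not taken from the problem):

```python
import numpy as np

# Illustrative 2x2 matrix (entries are assumptions, not from the problem)
R = np.array([[0.5, 0.2],
              [0.1, 0.4]])

# Eigendecomposition: columns of P are eigenvectors, D is diagonal with eigenvalues
eigvals, P = np.linalg.eig(R)
D = np.diag(eigvals)

# Check the diagonalization R = P D P^{-1}
print(np.allclose(R, P @ D @ np.linalg.inv(P)))  # True
```

Once R = P D P^{-1}, powers are easy: R^n = P D^n P^{-1}, which is what makes the convergence question tractable.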
 
First, answer the questions if the first vector in the sequence is an eigenvector of R. (Hint: the answers will depend on the corresponding eigenvalue.)

Then, think how you can use those answers for an arbitrary starting vector.
 
I do not understand how to do question 1 at all.

My friend showed me how, but I do not understand what he is talking about. Could someone explain this in more detail?

Suppose you decompose a into its eigenvector components, say a = k1e1 + k2e2 where e1 and e2 are the eigenvectors, then you apply R to it many times. The e1 component will blow up to infinity if abs(k1)>1 and similarly for e2. So for convergence we have the following alternatives:
(a) a=0 obviously never changes
(b) abs(k2)<1, then it converges to zero if abs(k1)<1 and it converges to e1 if k1=1
(c) abs(k1)<1, then it converges to zero if abs(k2)<1 and it converges to e2 if k2=1
 
Suppose the eigenvalues are t1 and t2, then you need to replace abs(k1) by abs(t1), and abs(k2) by abs(t2).

This is because:

R a = R (k1e1 + k2e2) = k1 (R e1) + k2 (R e2) = k1 t1 e1 + k2 t2 e2

R^2 a = R (k1 t1 e1 + k2 t2 e2) = k1 t1^2 e1 + k2 t2^2 e2

R^n a = k1 t1^n e1 + k2 t2^n e2

So the blowing up is with t1 and t2.
If either abs(t1) or abs(t2) is greater than 1, the result blows up.
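
The identity R^n a = k1 t1^n e1 + k2 t2^n e2 is easy to check numerically (the matrix and starting vector below are made up for illustration):

```python
import numpy as np

# Illustrative matrix with distinct real eigenvalues (an assumption, not from the thread)
R = np.array([[0.5, 0.2],
              [0.1, 0.4]])
a = np.array([1.0, 2.0])

t, E = np.linalg.eig(R)      # t = eigenvalues, columns of E = eigenvectors
k = np.linalg.solve(E, a)    # decomposition coefficients: a = k1*e1 + k2*e2

n = 5
lhs = np.linalg.matrix_power(R, n) @ a
rhs = k[0] * t[0]**n * E[:, 0] + k[1] * t[1]**n * E[:, 1]
print(np.allclose(lhs, rhs))  # True
```

Since only t1 and t2 get raised to the power n, they alone control whether the sequence blows up or converges.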
 
I'm a bit confused. What is t and what is k? How did you get from
k1 (R e1) + k2 (R e2)
to
k1 t1 e1 + k2 t2 e2
 
Just got up. :zzz:

jacks0123 said:
I'm a bit confused. What is t and what is k? How did you get from
k1 (R e1) + k2 (R e2)
to
k1 t1 e1 + k2 t2 e2

k1 and k2 are defined by the decomposition of "a" into the eigenvectors e1 and e2.
Any 2D vector can be decomposed into a linear combination of 2 independent vectors.

And oh, I meant t1 and t2 to be the eigenvalues of R.
I'll edit my previous post to match.
This means R e1 = t1 e1 since that is the definition of an eigenvalue and its eigenvector.
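
That defining relation can be checked directly in numpy (again with an illustrative matrix, not the one from the problem):

```python
import numpy as np

# Illustrative matrix (an assumption for the sake of the example)
R = np.array([[0.5, 0.2],
              [0.1, 0.4]])

t, E = np.linalg.eig(R)
e1, t1 = E[:, 0], t[0]

# Definition of an eigenpair: applying R just scales e1 by t1
print(np.allclose(R @ e1, t1 * e1))  # True
```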
 
Hi guys, stuck with the same problem... I still don't get what k1 and k2 are. What do you mean by the decomposition of "a" into the eigenvectors e1 and e2? Please reply fast.
 
2 independent vectors form a basis for ℝ².
Any 2D vector can be decomposed as a linear combination of the (independent) vectors in a basis.

Key to this problem is that there are 2 independent eigenvectors.
The condition given guarantees that, although that is still something you would need to prove.
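
Here is a numerical sanity check of that claim, with made-up entries: when (a+d)² - 4 det R > 0, the eigenvalues are real and distinct, so the two eigenvectors are independent.

```python
import numpy as np

# Illustrative entries (assumptions, not from the problem)
a, b, c, d = 0.5, 0.2, 0.1, 0.4
R = np.array([[a, b],
              [c, d]])

# Discriminant of the characteristic polynomial: (trace)^2 - 4 det
disc = (a + d)**2 - 4 * np.linalg.det(R)

t, E = np.linalg.eig(R)

print(disc > 0)                        # True: condition holds
print(abs(t[0] - t[1]) > 1e-12)        # eigenvalues distinct
print(abs(np.linalg.det(E)) > 1e-12)   # eigenvector matrix invertible => independent
```

Distinct eigenvalues always give independent eigenvectors; that is the fact the check above illustrates but does not prove.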
 
jack201 said:
Hi guys, stuck with the same problem... I still don't get what k1 and k2 are. What do you mean by the decomposition of "a" into the eigenvectors e1 and e2? Please reply fast.

Think of it like decomposing a 3D vector into its x, y, z components. Eigenvector decomposition does the same thing for a particular matrix: the linearly independent eigenvectors form a basis, and you "decompose" your vector into components along those eigenvectors, just as you decompose a 3D vector into x, y, z components.
 
