Problem involving eigenvalues/vectors

  1. Oct 23, 2011 #1
    Hi!
    Please help me with this problem, which must be solved using eigenvalues and eigenvectors:
    A geometric sequence of vectors (2x1 column vectors), where you get from one term to the next by multiplying by a 2x2 matrix:
    t_n = (R^(n-1))*a
    Where:
    t_n is the nth vector in the sequence
    a is the first vector in the sequence
    R is the 2x2 matrix
    R =
    [a b]
    [c d]
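
    As a quick numerical illustration of the setup (just a sketch in Python/numpy; the entries of R and the starting vector a below are made-up examples, not values from the problem):

    Code (Python):
    import numpy as np

    # Made-up example data: a 2x2 matrix R and a starting 2x1 vector a.
    R = np.array([[0.5, 0.2],
                  [0.1, 0.4]])
    a = np.array([1.0, 2.0])

    # t_n = (R^(n-1)) * a: print the first few terms of the sequence.
    t = a.copy()
    for n in range(1, 8):
        print("t_%d = %s" % (n, t))
        t = R @ t   # multiply by R to move to the next term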


    1. Does t_n converge as n -> infinity? What conditions are sufficient for the sequence to converge? To what vector does t_n converge in each case?

    2. What is the formula for the sum of the first n vectors in this sequence? Under what conditions, and to what vector, does it converge?

    Thanks!
     
    Last edited: Oct 23, 2011
  3. Oct 23, 2011 #2

    I like Serena

    Homework Helper

    Welcome to PF, jacks0123! :smile:

    For starters, is it possible that your condition should be (a+d)^2 - 4 det R > 0?

    Did you try anything?
    How far did you get?

    Can you say anything about the eigenvalues and eigenvectors of R based on the condition (a+d)^2 - 4 det R > 0?

    Can you diagonalize R?
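
    (If it helps to see the mechanics concretely, here is a minimal numpy sketch of diagonalizing a 2x2 matrix; the entries of R are made up, chosen so that (a+d)^2 - 4 det R > 0.)

    Code (Python):
    import numpy as np

    # Made-up example matrix with (a+d)^2 - 4*det(R) = 16 - 12 = 4 > 0.
    R = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Eigenvalues t1, t2 and eigenvectors e1, e2 (the columns of P).
    eigvals, P = np.linalg.eig(R)
    D = np.diag(eigvals)

    # "Diagonalizing R" means writing R = P D P^(-1) with D diagonal.
    print(eigvals)                                     # the eigenvalues, here 1 and 3 (in some order)
    print(np.allclose(R, P @ D @ np.linalg.inv(P)))    # True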
     
  4. Oct 23, 2011 #3

    AlephZero

    Science Advisor
    Homework Helper

    First, answer the questions for the case where the first vector in the sequence is an eigenvector of R. (Hint: the answers will depend on the corresponding eigenvalue.)

    Then, think how you can use those answers for an arbitrary starting vector.
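
    (Spelled out as a formula, in case it helps: if the first vector a happens to be an eigenvector of R with eigenvalue \lambda, written that way to avoid clashing with the vector t, then)

    [tex]t_n = R^{n-1}a = \lambda^{n-1}a[/tex]

    (so everything depends on that one eigenvalue: the terms go to the zero vector if |\lambda| < 1, stay fixed at a if \lambda = 1, grow without bound if |\lambda| > 1 for nonzero a, and do not converge in the remaining cases, e.g. \lambda = -1.)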
     
  5. Oct 23, 2011 #4
    I do not understand how to do question 1 at all.

    My friend showed me how, but I do not understand what he is talking about. Could someone explain this in more detail?

    Suppose you decompose a into its eigenvector components, say a = k1 e1 + k2 e2 where e1 and e2 are the eigenvectors, then you apply R to it many times. The e1 component will blow up to infinity if abs(k1)>1 and similarly for e2. So for convergence we have the following alternatives:
    (a) a = 0 obviously never changes
    (b) abs(k2)<1, then it converges to zero if abs(k1)<1 and it converges to e1 if k1=1
    (c) abs(k1)<1, then it converges to zero if abs(k2)<1 and it converges to e2 if k2=1
     
  6. Oct 23, 2011 #5

    I like Serena

    Homework Helper

    Suppose the eigenvalues are t1 and t2. Then you need to replace abs(k1) by abs(t1), and abs(k2) by abs(t2).

    This is because:

    R a = R (k1e1 + k2e2) = k1 (R e1) + k2 (R e2) = k1 t1 e1 + k2 t2 e2

    R^2 a = R (k1 t1 e1 + k2 t2 e2) = k1 t1^2 e1 + k2 t2^2 e2

    R^n a = k1 t1^n e1 + k2 t2^n e2

    So the blowing up is governed by t1 and t2.
    If either abs(t1) or abs(t2) is greater than 1 (and the corresponding k is nonzero), the result blows up.
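
    (A quick numerical check of that last line, as a sketch in numpy with made-up numbers:)

    Code (Python):
    import numpy as np

    # Made-up example: a symmetric 2x2 matrix (so the eigenvalues are real) and a starting vector.
    R = np.array([[0.6, 0.2],
                  [0.2, 0.3]])
    a = np.array([1.0, -1.0])
    n = 5

    # Eigenvalues t1, t2 and eigenvectors e1, e2 (the columns of E).
    (t1, t2), E = np.linalg.eig(R)
    e1, e2 = E[:, 0], E[:, 1]

    # Coefficients of the decomposition a = k1*e1 + k2*e2.
    k1, k2 = np.linalg.solve(E, a)

    # Check R^n a = k1 t1^n e1 + k2 t2^n e2.
    lhs = np.linalg.matrix_power(R, n) @ a
    rhs = k1 * t1**n * e1 + k2 * t2**n * e2
    print(np.allclose(lhs, rhs))   # True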
     
    Last edited: Oct 24, 2011
  7. Oct 23, 2011 #6
    I'm a bit confused. What is t and what is k? How did you get from
    k1 (R e1) + k2 (R e2)
    to
    k1 t1 e1 + k2 t2 e2?
     
  8. Oct 24, 2011 #7

    I like Serena

    Homework Helper

    Just got up. :zzz:

    k1 and k2 are defined by the decomposition of "a" into the eigenvectors e1 and e2.
    Any 2D vector can be decomposed into a linear combination of 2 independent vectors.

    And oh, I meant t1 and t2 to be the eigenvalues of R.
    I'll edit my previous post to match.
    This means R e1 = t1 e1 and R e2 = t2 e2, since that is the definition of an eigenvalue and its eigenvector.
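
    A tiny worked example of such a decomposition, with made-up vectors: take e1 = (1, 0), e2 = (1, 1), and a = (3, 2). Then

    [tex]a = \begin{pmatrix}3\\2\end{pmatrix} = 1\cdot\begin{pmatrix}1\\0\end{pmatrix} + 2\cdot\begin{pmatrix}1\\1\end{pmatrix}[/tex]

    so k1 = 1 and k2 = 2 for that particular a.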
     
    Last edited: Oct 24, 2011
  9. Oct 24, 2011 #8
    Hi guys, I'm stuck with the same problem... I still don't get what k1 and k2 are. What do you mean by the decomposition of "a" into the eigenvectors e1 and e2? Please reply fast.
     
  10. Oct 24, 2011 #9

    I like Serena

    Homework Helper

    Two independent vectors form a basis for ℝ².
    Any 2D vector can be decomposed as a linear combination of the (independent) vectors in a basis.

    Key to this problem is that there are 2 independent eigenvectors.
    The condition given guarantees that, although it is still something that you would need to prove.
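
    (For the record, here is where that condition comes from: the eigenvalues of R are the roots of the characteristic polynomial, and the quantity in the condition is exactly its discriminant.)

    [tex]\det(R - \lambda I) = \lambda^2 - (a+d)\lambda + \det R = 0
    \quad\Rightarrow\quad
    \lambda_{1,2} = \frac{(a+d) \pm \sqrt{(a+d)^2 - 4\det R}}{2}[/tex]

    So (a+d)^2 - 4 det R > 0 gives two distinct real eigenvalues, and eigenvectors belonging to distinct eigenvalues are linearly independent.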
     
  11. Oct 25, 2011 #10

    chiro

    Science Advisor

    Think of it like decomposing 3D space into x, y, z vectors. In this case the eigenvector decomposition does that for a particular matrix: it gives you the matrix's linearly independent basis vectors (the eigenvectors), and any vector can be "decomposed" along them, in the same way you decompose a vector in 3D space into x, y, z components.
     