
Motivation behind eigenvalue and eigenvector

  1. Jul 9, 2013 #1
    An eigenvector is defined as a non-zero vector v such that Av = λv.

    I don't understand the motivation behind this. We are trying to find a vector that, when multiplied by a given square matrix, keeps its direction.

    Shouldn't the motivation be the opposite, i.e. finding the matrix A given the vector v?

    I suppose the eigenvector was defined this way with some application in mind.
     
  3. Jul 9, 2013 #2
    There are many applications. For me, one obvious application is to find the set of uncorrelated variables in large data sets. Such data sets are usually initially described by correlated variables, and the object of the analysis is to transform the data into its principal components (called Principal Component Analysis or PCA). The principal components are the eigenvectors of the covariance matrix describing the relations among the initial set of variables. Each eigenvector represents one new variable. These new variables are uncorrelated with each other (the eigenvectors themselves are orthogonal). The eigenvalues of the principal components correspond to the variance of each component.
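    If you want to see that concretely, here is a minimal sketch of the idea in Python/numpy (the data set is just made up for illustration; in a real analysis X would be your measured variables):

    [code]
import numpy as np

# Fake data: 200 samples of 3 variables, with variables 1 and 3 correlated
np.random.seed(0)
X = np.random.randn(200, 3)
X[:, 2] = X[:, 0] + 0.1 * X[:, 2]

C = np.cov(X, rowvar=False)           # covariance matrix of the variables
eigvals, eigvecs = np.linalg.eigh(C)  # eigh, since C is symmetric

# Sort components by decreasing variance (eigenvalue)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Express the data in the new (principal component) variables
scores = (X - X.mean(axis=0)).dot(eigvecs)

print(eigvals)                                     # variance of each component
print(np.round(np.cov(scores, rowvar=False), 6))   # ~diagonal: components uncorrelated
    [/code]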
     
    Last edited: Jul 9, 2013
  4. Jul 9, 2013 #3

    Stephen Tashi

    Science Advisor

    It's not just "a" vector; there can be several.

    One path to understanding is to think about the convenience that a "change of coordinates" brings to many problems. If M is a matrix that represents something in a given set of coordinates, what happens to M when you change coordinates? (If you look into this you'll find that a linear change of coordinates amounts to the multiplication [itex] A^{-1} M A [/itex] for some matrix [itex] A [/itex] )
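    (To see why the eigenvectors survive such a change of coordinates: if [itex]Mv = \lambda v[/itex] and [itex]M' = A^{-1}MA[/itex], then
    [tex]M'(A^{-1}v) = A^{-1}MA\,A^{-1}v = A^{-1}Mv = \lambda (A^{-1}v),[/tex]
    so [itex]A^{-1}v[/itex] is an eigenvector of the transformed matrix with the same eigenvalue.)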

    If M is a complicated matrix, and you want to change coordinates to make it simple, what is the "simplest" type of matrix to deal with? I think a diagonal matrix is simplest. You can't always change coordinates to revise the information in a matrix M so it is displayed as a diagonal matrix, but in many important cases you can. Suppose you do get a diagonal matrix. The eigenvectors of a diagonal matrix are simple to find, aren't they? The corresponding eigenvalues are just the numbers on the diagonal. If you only have to deal with diagonal matrices then eigenvectors and eigenvalues are obviously important concepts - although they are trivial to find.

    If we use a change of coordinates that doesn't change any vectorial properties of the information in M, then the eigenvectors of the diagonalized version of the information should be the eigenvectors of M expressed in a different system of coordinates. If you don't know how to diagonalize M by changing coordinates, then finding the eigenvectors of M can give you a hint about how to do it.
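    As a concrete (made-up) example in Python/numpy, you can build the change of coordinates directly from the eigenvectors and check that it diagonalizes M:

    [code]
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, A = np.linalg.eig(M)         # columns of A are the eigenvectors of M
D = np.linalg.inv(A).dot(M).dot(A)    # the change of coordinates A^{-1} M A

print(np.round(D, 10))   # diagonal, with the eigenvalues 5 and 2 on the diagonal
print(eigvals)
    [/code]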

    For matrices that can't be diagonalized by a change of coordinates, it turns out that they can be put into a somewhat simple form consisting of "blocks" of numbers called "Jordan blocks". Eigenvalues also play a role in the facts about that.
     
  5. Jul 9, 2013 #4

    WannabeNewton

    Science Advisor

  6. Jul 10, 2013 #5
    Consider the differential equation
    [tex] \frac{d \vec{x}}{dt}=A \vec{x},[/tex]
    where [itex] \vec{x}[/itex] is a vector and [itex]A[/itex] is a matrix. If the problem were one-dimensional, then you'd like to say the solution is just [itex]x=e^{tA}(x_{0})[/itex]. What about the multi-dimensional version? We'd still like to say that [itex] \vec{x}=e^{tA}( \vec{x}_{0})[/itex], but we have to make sense out of the expression [itex]e^{tA}.[/itex] What does it mean to exponentiate a matrix? Well, one approach would be to use the Maclaurin expansion for the exponential:
    [tex]e^{x}= \sum_{k=0}^{\infty} \frac{x^{k}}{k!},[/tex]
    and simply substitute in the matrix:
    [tex]e^{tA}= \sum_{k=0}^{ \infty} \frac{t^{k}A^{k}}{k!}.[/tex]
    How do we compute an arbitrary power of a matrix? Well, suppose you could write [itex]A[/itex] this way: [itex]A=PDP^{-1},[/itex] where [itex]D[/itex] is diagonal. Then [itex]A^{k}=PD^{k}P^{-1}[/itex], and arbitrary powers of diagonal matrices are easy to compute.
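    Plugging [itex]A^{k}=PD^{k}P^{-1}[/itex] into the series, the [itex]P[/itex] and [itex]P^{-1}[/itex] factor out of the sum:
    [tex]e^{tA}= \sum_{k=0}^{\infty} \frac{t^{k}PD^{k}P^{-1}}{k!}=P\left( \sum_{k=0}^{\infty} \frac{t^{k}D^{k}}{k!}\right)P^{-1}=Pe^{tD}P^{-1},[/tex]
    and [itex]e^{tD}[/itex] is just the diagonal matrix whose entries are [itex]e^{td_{i}}[/itex].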

    Guess what? Diagonalizing a matrix, when possible, is a matter of finding the eigenvalues and eigenvectors. The eigenvalues form the diagonal of [itex]D[/itex], and the eigenvectors form the columns of the matrix [itex]P[/itex] (when [itex]A[/itex] is symmetric they can even be chosen orthonormal, so that [itex]P^{-1}=P^{T}[/itex]).

    The differential equation I began with is a very useful one. You can write many circuit DE's and mass-spring DE's in that form.
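    Here's a rough numerical sketch of that in Python (the matrix A and initial condition are made up; scipy's expm is only there as a cross-check):

    [code]
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # e.g. a damped mass-spring system in first-order form
x0 = np.array([1.0, 0.0])
t = 1.5

# Build e^{tA} from the eigendecomposition A = P D P^{-1}
eigvals, P = np.linalg.eig(A)
etA = P.dot(np.diag(np.exp(t * eigvals))).dot(np.linalg.inv(P))

print(etA.dot(x0))           # x(t) = e^{tA} x0
print(expm(t * A).dot(x0))   # same answer from scipy directly
    [/code]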
     
    Last edited: Jul 10, 2013
  7. Aug 1, 2013 #6
    Sorry for the late reply.
    I was brushing up on my linear algebra when I stumbled upon this video from Khan Academy

    It basically said we need to find a transformation T such that T(v) = λv. Here λ is the eigenvalue and v the eigenvector.
    Aren't we doing the opposite? For a given T, we find all v.
    Why not go the other way: for a given v, find all T such that T(v) = λv? That makes more sense in my opinion.

    E.g. you have a vector and want to increase its length while preserving its direction, so you look for a T that does this to v.
    But what we are doing is the opposite: we have T and find all v for it.
     