
Eigenvalues and Eigenvectors

  1. Nov 10, 2003 #1



    I'm currently taking linear algebra and it has to be the worst math class EVER. It is extremely easy, but I find the lack of application discouraging. I really want to understand how the concepts arose, not simply memorize an algorithm for tedious, mindless computations. My professor is unhelpful and brushes off any discussion in class, citing the lack of time to cover all the material. He also assumes too much, leaving little room for proofs.

    I'd appreciate it if anyone would be kind enough to post on the importance of eigenvalues and eigenvectors, how they were developed, and possible applications for their use.

    Any input is welcome. :smile:
  3. Nov 10, 2003 #2
    Well, the first step in understanding anything in linear algebra is to think about it geometrically, in my opinion. Instead of matrices, think about linear transformations of vectors. i.e., think about geometric operations on vectors: think in terms of arrows rotating, stretching, shearing, etc. If you choose a basis (a set of axes), then you can write down the matrix components of the transformation in this basis; you can get infinitely many matrices (related by similarity transformations) that are all different ways of representing the same geometric transformation.

    Then, consider some specific transformations, and look for vectors that are unchanged under the transformation: we are looking for symmetry. For instance, if you have a 3D rotation about an axis, any vector pointing along that axis will be unchanged under that rotation; that axis is a symmetry of the rotation. This is an example of an eigenvector.

    We can relax our definition a little: let's look for vectors that are not completely unchanged by a transformation, but just have their direction unchanged. (Well, we'll count reversing direction as leaving the direction unchanged; it still points along the same line.) For example, suppose you have a transformation that stretches vectors in one direction and squashes them in another. Then those two directions are also eigenvectors of the transformation --- e.g., a vector pointing purely in the "stretch" direction gets stretched that way; it gets squashed the other way, but it has no component in the other direction, so the squashing doesn't do anything to it. It remains pointing in the stretched direction. The eigenvalues of those eigenvectors are the amounts of stretching and squashing, respectively.

    Now, if you looked at this transformation from the perspective of a matrix in a basis that didn't point along these directions, this nice geometric property would be obscured. But if you choose a basis consisting of eigenvectors of the transformation, then the matrix becomes simple: it's just diagonal, with the diagonal components being the eigenvalues!

    (That's because a column of a matrix represents how the transformation acts on one of the basis vectors; if you choose your basis axis to be the eigenvectors, then the eigenvectors will have only one nonzero component, and since they remain in the same direction after the transformation, the transformed basis vectors will also have one nonzero component.)

    Example: suppose I choose a linear transformation T with T(u) = 5u and T(v) = 3v, where u and v are orthogonal. This transformation stretches vectors parallel to 'u' by a factor of 5, and it stretches vectors parallel to 'v' by a factor of 3. In this basis, 'u' has components [1 0]T because it is written as u = 1·u + 0·v, and similarly 'v' has components [0 1]T. After transformation, u' = T(u) = 5u = [5 0]T, v' = T(v) = 3v = [0 3]T. So in the {u,v} basis, the transformation T is represented by a diagonal matrix with components,

    [5 0]
    [0 3]

    Not all matrices have a full set of real eigenvalues and eigenvectors, so not all of them can be diagonalized like this. But when they can, it makes the matrix easy to deal with. The eigenvectors tell you which basis will give you the easy diagonal matrix. Even when a matrix can't be diagonalized, eigenvectors are a good way to find out in which directions the transformation is "simple", and they can simplify calculations.
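
    Here's a minimal numpy sketch of the same idea (the matrix is just an illustration I made up): eig gives you the eigenvalues and eigenvectors, and changing to the eigenbasis diagonalizes the matrix.

    import numpy as np

    # A matrix that stretches by 5 along one direction and by 3 along another,
    # written in a basis that does NOT point along those directions.
    A = np.array([[4.0, 1.0],
                  [1.0, 4.0]])

    # Columns of V are eigenvectors; w holds the eigenvalues (order may vary).
    w, V = np.linalg.eig(A)
    print(w)  # [5. 3.]

    # Changing to the eigenbasis diagonalizes the matrix: V^-1 A V = diag(w).
    D = np.linalg.inv(V) @ A @ V
    print(np.round(D, 10))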

    Example: if you want to compute a matrix power An, you could multiply it with itself over and over. Or, you could find the eigenbasis which makes it diagonal (if such a basis exists): raising that matrix to a power is trivial, since you're just raising the diagonal elements to a power. Then, if you want, you can change the answer back to whatever basis you were originally using.
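
    A quick numpy sketch of that trick (same illustrative matrix as above):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [1.0, 4.0]])
    n = 10

    # Diagonalize: A = V D V^-1, so A^n = V D^n V^-1, and raising the
    # diagonal matrix D to a power just raises each eigenvalue to that power.
    w, V = np.linalg.eig(A)
    An = V @ np.diag(w**n) @ np.linalg.inv(V)

    print(np.allclose(An, np.linalg.matrix_power(A, n)))  # True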

    Or, in physics, angular momentum L and angular velocity ω are related by the inertia tensor (matrix) I, by L = Iω. The eigenvectors of the inertia tensor are the "principal axes" of a body: if you rotate the body about one of those axes, then the angular momentum of the body will point in the same direction as the axis of rotation. So you can decompose a general rotation into rotations about those axes and analyze them separately: they "decouple" from each other (like in the stretching/squashing example, where the stretching in one direction and the squashing in another are independent of each other). You also do this "decoupling" to find normal modes of oscillation and such, in other applications: characteristic resonant behavior of a body.
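
    In numpy that looks like the sketch below; the inertia tensor here is a made-up symmetric matrix, purely for illustration.

    import numpy as np

    # A hypothetical inertia tensor (any symmetric matrix works for illustration).
    I = np.array([[ 6.0, -2.0, 0.0],
                  [-2.0,  5.0, 0.0],
                  [ 0.0,  0.0, 3.0]])

    # For symmetric matrices, eigh returns real eigenvalues and orthonormal eigenvectors.
    moments, axes = np.linalg.eigh(I)   # principal moments; columns are principal axes

    # Rotating about a principal axis: L = I @ w is parallel to w.
    w = axes[:, 0]
    print(np.allclose(I @ w, moments[0] * w))  # True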

    I'll also throw in a few recommendations for my favorite linear algebra books:

    Anton, Elementary Linear Algebra
    Axler, Linear Algebra Done Right
    Strang, Introduction to Linear Algebra

    If matrix algebra seems like an unmotivated bunch of meaningless, mindless manipulations on a pile of numbers, these books will help. The secret is, as I said, looking at the geometry of linear transformations acting on abstract vector spaces, not in matrix gymnastics.

    By the way, linear algebra lies behind everything in physics: it's one of the most underrated math courses. Quantum mechanics is just linear algebra on infinite-dimensional vectors. Special relativity involves linear algebra on 4D spacetime vectors (Lorentz transformations are analogous to rotations). General relativity involves lots of tensors, which are generalizations of matrices that act on more than one vector at a time (multilinear transformations). Mechanics has inertia tensors, elasticity tensors, normal modes of wave equations ... electromagnetism has field tensors ... and so on. I had a particularly strong linear algebra background; my linear algebra course has served me better than any other single course that I've ever taken.
    Last edited: Nov 10, 2003
  4. Nov 10, 2003 #3



    Wow, great summary, Ambitwistor. Not to mention very clear! You explain things better than Strang himself. I use his book "Introduction to Linear Algebra" in class, and I've listened to his lectures on MIT OpenCourseWare, but your explanations are much better. Can you teach my class, Ambitwistor? :wink:

    I had no idea that a basis was a set of axes. Now it makes sense that the number of vectors in the basis defines the dimension of the vector space.
  5. Nov 11, 2003 #4



    I'll add a different major application: differential equations.

    The basic theory of linear differential equations IS linear algebra: the set of all solutions to a linear homogeneous differential equation forms a vector space. And 90% of solving non-linear differential equations consists of reducing them to linear equations!

    Linear algebra really is the theory of "linear problems".
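
    To make that concrete, here's a small numpy sketch (the system matrix is just an example I picked): for a constant-coefficient system x' = Ax, decomposing the initial condition into eigenvectors of A turns the coupled system into independent scalar equations, each evolving like exp(λt).

    import numpy as np

    A = np.array([[ 0.0,  1.0],
                  [-2.0, -3.0]])   # illustrative system; eigenvalues are -1 and -2
    x0 = np.array([1.0, 0.0])      # initial condition

    w, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0)     # coefficients of x0 in the eigenbasis

    def x(t):
        # each eigen-component just grows/decays like exp(w*t)
        return (V @ (c * np.exp(w * t))).real

    print(x(1.0))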
  6. Dec 7, 2003 #5
    newbie here

    what's the significance of eigenvectors in terms of describing oscillation?
  7. Dec 7, 2003 #6
    Eigenvectors in a linear mechanical system describe normal modes of oscillation, which are resonant modes that "decouple" from each other (oscillate independently of each other); you can describe the general motion as a superposition of these modes.
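
    For example, here's a minimal numpy sketch for two equal masses joined by three identical springs in a line (wall-mass-mass-wall), with all constants set to 1; the eigenvectors of the stiffness matrix are the mode shapes, and the square roots of its eigenvalues are the mode frequencies.

    import numpy as np

    # Equations of motion: x'' = -K x, with stiffness matrix K.
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])

    w2, modes = np.linalg.eigh(K)
    print(np.sqrt(w2))   # frequencies: 1.0 and sqrt(3)
    print(modes)         # columns (up to sign): in-phase [1,1], out-of-phase [1,-1]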

  8. Dec 7, 2003 #7
    hi, I'm a newbie also, but speaking of eigenvectors, what's the importance of normalizing eigenvectors?

  9. Dec 7, 2003 #8
    It's just convenient. An eigenvector isn't uniquely defined, since you can multiply any eigenvector by any number and get another eigenvector. All the eigenvectors obtained that way correspond to the same eigenvalue (as long as you don't multiply by zero). People like to single out one of them as "representative" of the bunch, and since they differ only in their lengths (up to a sign), the simplest way to do that is to pick the one with unit length.
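
    In code, normalizing is one line; here's a tiny numpy sketch with a made-up vector:

    import numpy as np

    v = np.array([2.0, -4.0, 4.0])        # any eigenvector
    v_hat = v / np.linalg.norm(v)         # same direction, unit length
    print(v_hat, np.linalg.norm(v_hat))   # [ 0.333... -0.666...  0.666...] 1.0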
  10. Dec 8, 2003 #9
    Hi, I'm a newbie as well, and also very confused by normalising.
    OK, so now I understand the importance of normalising eigenvectors, but how do you actually do this operation?
    For example, given the matrix

    3 0 0
    5 4 0
    3 6 1

    I calculate the eigenvalues to be 3, 4 and 1.
    Using eigenvalue = 3,
    I get an eigenvector of


    So how do you normalise this? Any help would be greatly appreciated.
  11. Dec 8, 2003 #10
    Are you sure? I get an eigenvector of,

    [1 -5 -13.5]T

    The same way you normalize any vector: divide it by its magnitude.
  12. Dec 8, 2003 #11
    Brilliant, cheers. You were right, I typed the eigenvector in wrong.

    So if I got this right, then the normalised eigenvector should be

    [1 -5 -13.5]T / √208.25 ≈ [0.069 -0.347 -0.936]T

  13. Dec 8, 2003 #12
    Yes, that's right.
  14. Dec 9, 2003 #13
    thanks for the help Ambitwistor, much appreciated
  15. Dec 9, 2003 #14



    This is probably going to make me sound really stupid, but as this is my first post, go easy on me!!

    I'm a bit stuck on working out eigenvectors for 3 by 3 matrices; I get confused when trying to get the relationships between the three values. Hope that makes sense!
  16. Dec 9, 2003 #15
    I'm not sure what you mean by "the three values". The three components of one of the eigenvectors? The three eigenvalues? Are you able to find the eigenvalues?
  17. Dec 9, 2003 #16



    Yeah, sorry, it's not that clear, is it? I meant the three terms that make up the eigenvector; I can get eigenvalues alright. Take, for example, the matrix that bracy posted: how did you get the [1 -5 -13.5]T?

    I can get the equations out of the matrix OK, but then I get stuck.
  18. Dec 9, 2003 #17
    Well, if the eigenvector is v = [x,y,z]T, then applying bracy's matrix to it yields a vector with components [3x, 5x+4y, 3x+6y+z]T. That has to equal 3 (the eigenvalue) times v, so

    3x = 3x
    5x+4y = 3y
    3x+6y+z = 3z

    The first equation, 3x = 3x, is satisfied by any x, so we're free to choose x = 1 (an eigenvector is only determined up to an overall scale). The second then gives 5+4y = 3y, or y = -5. The third gives 3-30+z = 3z, or 2z = -27, or z = -13.5.
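
    If you want to check this kind of computation, here's a quick numpy sketch using bracy's matrix:

    import numpy as np

    A = np.array([[3.0, 0.0, 0.0],
                  [5.0, 4.0, 0.0],
                  [3.0, 6.0, 1.0]])

    v = np.array([1.0, -5.0, -13.5])
    print(A @ v)   # [3. -15. -40.5], which is exactly 3*v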
  19. Dec 9, 2003 #18


    User Avatar

    OK, let me see if I have got this right. Take the same matrix again, but this time with an eigenvalue of 4.

    So you get [3x, 5x+4y, 3x+6y+z]T again, but this time it has to equal 4 times the vector:

    3x = 4x
    5x+4y = 4y
    3x+6y+z = 4z

    So the first gives x = 1.3, the second gives y = 0, and the third gives z = 1.3 again.

    Have I got that right, or have I just made an idiot of myself?! If I have, then I blame the hard day I've had!!
  20. Dec 9, 2003 #19
    The only solution to the equation 3x = 4x is x = 0. That makes the second equation 4y = 4y, which is satisfied by any y, so we can choose y = 1. The third equation becomes 6+z = 4z, or 3z = 6, or z = 2.
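
    Same check as before, if you want to verify it numerically:

    import numpy as np

    A = np.array([[3.0, 0.0, 0.0],
                  [5.0, 4.0, 0.0],
                  [3.0, 6.0, 1.0]])
    v = np.array([0.0, 1.0, 2.0])
    print(A @ v)   # [0. 4. 8.], which is 4*v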
  21. Dec 9, 2003 #20



    Oh yeah, sorry about that, I didn't think it looked right! Like I said, it's been a long, hard day!!
    Thank you for the help; I will try a few more to make sure I've got it.