I don't get Eigenvalues or Eigenvectors

In summary, eigenvalues and eigenvectors are important concepts in linear algebra that can be used to solve differential equations. An eigenvector of a linear transformation is a vector whose direction is unchanged by the transformation, and its eigenvalue is the factor by which the transformation scales that vector's magnitude. Understanding linear algebra can make concepts like derivatives and differential equations easier to grasp, so taking a linear algebra course before differential equations is recommended.
  • #1
Alex6200
I just finished Differential Equations, and I know how to find eigenvalues/eigenvectors, and I understand how to use them to solve a differential equation.

But I don't really understand "what they are". How is a matrix with complex eigenvalues any different from a matrix with real eigenvalues? What does the eigenvalue tell us about the form of a matrix - about its character?

I've always understood things like derivatives, because they make sense. The derivative of a function is just how steep its slope is. But linear algebra just doesn't make sense to me. What is an eigenvalue? What is a determinant? What is the matrix?
 
  • #2
The most general linear relation between quantities [tex]x_{i}[/tex] and [tex]y_{i}[/tex] is:

[tex]y_{i}=\sum_{j}A_{i,j}x_{j}[/tex]

right? :smile:
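
For concreteness, here's that relation as a minimal numerical sketch (Python with numpy; the coefficients are just an arbitrary example):

[code]
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # arbitrary example coefficients A_ij
x = np.array([5.0, 6.0])

# y_i = sum_j A_ij x_j, written as explicit sums...
y = np.array([sum(A[i, j] * x[j] for j in range(2)) for i in range(2)])

# ...which is exactly matrix-vector multiplication:
assert np.allclose(y, A @ x)
print(y)  # [17. 39.]
[/code]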
 
  • #3
Count Iblis said:
The most general linear relation between quantities [tex]x_{i}[/tex] and [tex]y_{i}[/tex] is:

[tex]y_{i}=\sum_{j}A_{i,j}x_{j}[/tex]

right? :smile:

I guess you could find matrices that fit that rule...
 
  • #4
A map f from a vector space U (over some field F, usually the field of real numbers or the field of complex numbers) to a vector space V is linear iff f(a + sb) = f(a) + sf(b) for all a, b in U and s in the field F.
From linear algebra, we know that all vectors u in U and v in V can be written as linear combination of a basis (a list of vectors in the space that span the space) of that space. That is to say that u = u1e1 + u2e2 + ... + unen, where each ui is an element of the field and the ei's are elements of U and U is n-dimensional. Suppose f is a linear transformation from U into V where V is an m-dimensional space. Use the basis property to show that f can be specified by m*n numbers (in the field F). These are the numbers Aij in Count Iblis's reply.
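
As a small sketch of that last step (Python with numpy; the map f here is just an invented example): the j-th column of the matrix is the image of the j-th basis vector, so the m*n entries pin f down completely.

[code]
import numpy as np

# An example linear map f : R^3 -> R^2
def f(v):
    x, y, z = v
    return np.array([2*x - y, y + 3*z])

# Column j of the matrix A is f applied to the j-th standard basis vector,
# so f is specified by m*n = 2*3 = 6 numbers A_ij.
A = np.column_stack([f(e) for e in np.eye(3)])

v = np.array([1.0, -4.0, 3.0])
assert np.allclose(A @ v, f(v))  # the matrix reproduces the map
[/code]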
 
  • #5
To get a better understanding of the eigen-stuff consider this equation:

[tex] A \, \vec{x} = \lambda \, \vec{x} [/tex]


On the left side of the equation, A operates on an unknown vector x; that is, A transforms x into some other vector. The right side of the equation forces that transformed vector to be a scalar multiple of x itself.

Said differently, x is a vector whose direction remains unchanged under the transformation. If you try to alter it (matrix multiplication), the net result will spit out the same vector you started with, merely rescaled.

We call that vector an eigenvector. The magnitude of the eigenvector can change under the transformation, and the factor by which it changes is the eigenvalue (lambda in the equation). But the vector's direction is unchanged.
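
A quick numerical check of this picture (Python with numpy; the matrix is an arbitrary example):

[code]
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # arbitrary example matrix

lams, vecs = np.linalg.eig(A)     # eigenvalues and eigenvectors
for lam, v in zip(lams, vecs.T):
    # A x is the same vector x, merely rescaled by lambda:
    assert np.allclose(A @ v, lam * v)
print(lams)  # eigenvalues 3 and 1 (order may vary)
[/code]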
 
  • #6
slider142 said:
A map f from a vector space U (over some field F, usually the field of real numbers or the field of complex numbers) to a vector space V is linear iff f(a + sb) = f(a) + sf(b) for all a, b in U and s in the field F.

So f is linear if f(a + b) = f(a) + f(b) and f(constant * x) = constant * f(x).

I understand that much.

From linear algebra, we know that all vectors u in U and v in V can be written as linear combination of a basis (a list of vectors in the space that span the space) of that space. That is to say that u = u1e1 + u2e2 + ... + unen, where each ui is an element of the field and the ei's are elements of U and U is n-dimensional. Suppose f is a linear transformation from U into V where V is an m-dimensional space. Use the basis property to show that f can be specified by m*n numbers (in the field F). These are the numbers Aij in Count Iblis's reply.

Is that relying on some Linear Algebra stuff, or should a Differential Equations student know what that means?
 
  • #7
Alex6200 said:
Is that relying on some Linear Algebra stuff, or should a Differential Equations student know what that means?
Eigenvalues and eigenvectors are linear algebra terms. Multivariable calculus and diffEq's make a lot more sense if you study linear algebra first. There are quite a few resources online as well.
 
  • #8
It's a shame that you didn't take Linear Algebra before taking differential equations. At my university I was instrumental in having Linear Algebra added as a pre-requisite to the introductory differential equations course.

I would recommend that you take it now. Not only will it answer your questions, but it is at least as useful in applications as differential equations.
 
  • #9
Well, I took differential equations at the community college before heading off to college (Worcester Polytechnic Institute, Electrical Engineering), and I'm going to take Linear Algebra first semester there. I couldn't take linear algebra at my high school or local community college.

The thing is, I understand eigenvalues in a formal sense. I know how to use them to solve differential equations. I know the definitions, I just don't... get them, if you know what I mean. Like, I can't just look at a matrix and think "Wow, that matrix must have some big eigenvalues", the way I'd look at a parabola and immediately get a sense of what it would look like.
 
  • #10
what is the simplest function with regard to differentiation? many would say it's the exponential, because D(e^at) = ae^at.

i.e. the derivative is just a multiple of the function. that's what characterizes an eigenvector for a given operation: when you operate, you just multiply by a number.

it doesn't get any better than that.
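
A tiny numerical check of that statement (Python; a and t are arbitrary sample values): differentiating e^(at) just multiplies it by a.

[code]
import numpy as np

a, t, h = 0.7, 1.3, 1e-6
f = lambda s: np.exp(a * s)

# central-difference derivative of e^(at) at t...
deriv = (f(t + h) - f(t - h)) / (2 * h)
# ...equals a times the function value: the exponential is an "eigenvector"
# of differentiation with "eigenvalue" a
assert np.isclose(deriv, a * f(t))
[/code]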
 
  • #11
And given the exponential function, we can write any analytic function as:

[tex]f(x) = e^{xD} f(0)[/tex]

:approve:
 
  • #12
Suppose a and x are scalars, and the time derivative of x is

[tex]\frac{d}{dt}x = ax[/tex]

The solution to this differential equation is of the form

[tex]x(t) = e^{at}x(0)[/tex]

Now, instead suppose x is a vector. The corresponding differential equation is

[tex]\frac{d}{dt}\vec x = \mathbf A \vec x[/tex]

where [itex]\mathbf A[/itex] is a square matrix. The solutions to this multidimensional differential equation are of the form

[tex]\vec x(t) = e^{\mathbf A t}\vec x(0)[/tex]

where the matrix exponential is defined as

[tex]e^{\mathbf A t} = \boldsymbol 1 + \mathbf A t + \frac 1 2 \mathbf A^2 t^2 + \cdots[/tex]

(the first term on the right-hand side is the identity matrix).

The eigenvalues and eigenvectors of the matrix describe the solution. Suppose [itex]\vec x_i[/itex] is an eigenvector of [itex]\mathbf A[/itex] with eigenvalue [itex]k_i[/itex]. With a little work,

[tex]e^{\mathbf A t}\vec x_i = e^{k_i t} \vec x_i[/tex]

Now decompose the initial state [itex]\vec x(0)[/itex] in terms of the eigenvectors:

[tex]\vec x(0) = \sum_i a_i \vec x_i[/tex]

With this, the solutions of the vector differential equation become

[tex]\vec x(t) = \sum_i a_i e^{k_i t}\vec x_i[/tex]
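
Here is that recipe as a short numerical sketch (Python with numpy/scipy; the system matrix and initial state are arbitrary examples):

[code]
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # arbitrary example system (eigenvalues -1, -2)
x0 = np.array([1.0, 0.0])
t = 0.5

k, V = np.linalg.eig(A)          # eigenvalues k_i, eigenvectors as columns of V
a = np.linalg.solve(V, x0)       # coefficients a_i of x(0) in the eigenbasis
x_t = (V * np.exp(k * t)) @ a    # sum_i a_i exp(k_i t) x_i

# Same answer from the matrix exponential directly:
assert np.allclose(x_t, expm(A * t) @ x0)
[/code]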
 
  • #13
So, basically, an eigenvector X of some transformation is a vector for which applying that transformation is equivalent to just multiplying X by a number?

In Diff Eq. that transform would be differentiation, but it could be anything.
 
  • #14
An eigenvector is basically a vector with the special property that when a certain operator (or matrix) acts on it, it gives you the same vector back, possibly rescaled. It's coupled, in a way, to the operator itself: a given operator will only give you the same vector back if the vector is special to it.

But most of the time it's the direction of the vector that matters. If you operate on (2,2,2) and you get (6,6,6) back, well, it's the same direction once normalized, but with a different magnitude. So O.(2,2,2) == (6,6,6) == 3 (2,2,2), and its "eigenvalue" is 3.

In physics it comes into play mainly in quantum mechanics, because everything is a wavefunction (a sum of eigenvectors), and operators that act on a wavefunction without changing it are usually pretty special. If your operator destroys the wavefunction, then it's probably not very useful. So operators that can work on what you have and leave it intact (having the same basis vectors) are quite useful, and they represent many known observables.
 
  • #15
Alex6200 said:
The thing is, I understand eigenvalues in a formal sense. I know how to use them to solve differential equations. I know the definitions, I just don't... get them, if you know what I mean. Like, I can't just look at a matrix and think "Wow, that matrix must have some big eigenvalues", the way I'd look at a parabola and immediately get a sense of what it would look like.
If you're able to visualise things like parabolas, ellipses, and hyperbolas fairly easily, then it may be helpful to look at the connections between eigenvalues and these objects. Specifically, look into the principal axis theorem and how eigenvalues and eigenvectors relate to the major and minor axes of these figures in 2D. The eigenvalues tell you which is the major and which the minor axis, and the eigenvectors tell you the directions of the axes. True, this analogy only holds for symmetric matrices, but it's a good starting point.

There are geometrical relations present in much if not all linear algebra. Unfortunately, most presentations of the subject tend to ignore these entirely.
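
As a small illustration of the principal-axis idea (Python with numpy; the quadratic form is an arbitrary example):

[code]
import numpy as np

# The ellipse 5x^2 + 8xy + 5y^2 = 1, i.e. v^T Q v = 1 with symmetric Q:
Q = np.array([[5.0, 4.0],
              [4.0, 5.0]])

lams, vecs = np.linalg.eig(Q)    # eigenvalues 9 and 1 (order may vary)
# The eigenvectors, ~(1,1)/sqrt(2) and (1,-1)/sqrt(2), point along the
# ellipse's axes, and the semi-axis lengths are 1/sqrt(lambda), so the
# larger eigenvalue marks the shorter (minor) axis.
print(vecs)
print(1 / np.sqrt(lams))         # semi-axis lengths 1/3 and 1
[/code]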
 
  • #16
Very stupid example, but it sometimes works on weird individuals like me... (and it is not general enough to capture all the properties), but life is too short to explain everything in one shot, so here it goes...


Imagine you are pushing a box with a very weird shape, and suppose you can't see the whole box, so it is difficult to estimate how to push it.


You want to push it so that the direction of the force you apply is exactly the direction the box moves; that is, the force vector and the displacement vector point in the same direction, though possibly with different magnitudes! What usually happens is that you get both translation and rotation if you choose a bad direction. But there might be a direction in which you get pure translation. That is your eigenvector. How much translation you get is proportional to your force, and that proportion is your eigenvalue with respect to that direction (or eigenvector). Now let A be the matrix describing the relation between the directions you push in and the translations you get in each direction; then, after some concrete argument, you can show that for some directions your biiiiig matrix really acts like a scalar (you get only translation in that particular direction).


Now, if it makes sense, read the rigorous arguments above again. If not, forget about these immediately :)
 
  • #17
So if I have a transform A, the eigenvector is the vector on which I can perform A and get back that same eigenvector, or that eigenvector with a bigger magnitude (the multiple of the magnitude being the eigenvalue).
 
  • #18
or smaller!

That's exactly what happens when we write
[tex]Ax = \lambda x[/tex]

Actually, it is better to write it like this (with a horrible math style!)

[tex]Something = Ax = \lambda x[/tex]

So for some vectors x (not all!), the matrix only shrinks or stretches them. The amount is given by the eigenvalue.
 
  • #19
Alex6200 said:
Well, I took differential equations at the community college before heading off to college (Worcester Polytechnic Institute, Electrical Engineering), and I'm going to take Linear Algebra first semester there. I couldn't take linear algebra at my high school or local community college.

The thing is, I understand eigenvalues in a formal sense. I know how to use them to solve differential equations. I know the definitions, I just don't... get them, if you know what I mean. Like, I can't just look at a matrix and think "Wow, that matrix must have some big eigenvalues", the way I'd look at a parabola and immediately get a sense of what it would look like.

Alex, what community college did you go to? Was it NSCC by chance? If so, I may have been in that class with you.
 
  • #20
Saladsamurai said:
Alex, what community college did you go to? Was it NSCC by chance? If so, I may have been in that class with you.

Ugh, Carroll. :uhh:
 
  • #21
Alex6200 said:
Ugh, Carroll. :uhh:

Is Carroll the name of the college? Or are you asking me if my name is Carroll? Because it's not, it's Casey.
 
  • #22
No, Carroll's the name of the college.
 
  • #23
edit
 
  • #24
If this guy hasn't done any linear algebra, how do you expect him to know about linear maps and even analytic geometry? I will offer a layman's explanation, and hopefully you can build on it when you take linear algebra.

First of all, you need to know that vectors of Rn can be represented as column matrices. For example, the vector (1,-4,3) would look like this in linear algebra: [tex]\begin{pmatrix}
1 \\
-4 \\
3 \\
\end{pmatrix}
[/tex]

Eigen is German for "own" or "characteristic". So when you are working with a matrix A, you are trying to find an X and a [tex]\lambda[/tex] such that AX=[tex]\lambda[/tex]X. Here, X is the vector (represented as a column matrix) and [tex]\lambda[/tex] is some number. It should be clear that when multiplying matrices A (nxn) and X (nx1) you get an nx1 matrix in return. As you know, multiplying X by a scalar will also give you an nx1 matrix. So you are trying to find an X, called the eigenvector, such that AX is equivalent to X multiplied by some scalar [tex]\lambda[/tex], called the eigenvalue. Sometimes more than one X will work for a certain [tex]\lambda[/tex].

To find eigenvectors, you work with AX = [tex]\lambda[/tex]X, which is the same as AX = [tex]\lambda[/tex]IX, which is the same as (A - [tex]\lambda[/tex]I)X = 0. Obviously X = 0 works for this, but by definition an eigenvector is never the zero vector. The only way to guarantee that some X other than zero exists is to make (A - [tex]\lambda[/tex]I) non-invertible.

Theorem: Suppose BX = 0. If B is invertible, then X = 0 is the only solution.
Proof: If B is invertible, then there is some [itex]B^{-1}[/itex] such that [itex]B^{-1}B = I[/itex].
So [itex]X = IX = B^{-1}BX = B^{-1}(BX) = B^{-1}0 = 0[/itex], because BX = 0.

See? The theorem tells us (A - [tex]\lambda[/tex]I) must not be invertible, otherwise we will only have X = 0 as an eigenvector, which is not allowed! The way you arrange this is to make the determinant zero: there is a theorem in linear algebra that says A is not invertible iff its determinant is zero. So that's what you are doing when finding eigenvalues: expanding the determinant of (A - [tex]\lambda[/tex]I) in terms of [tex]\lambda[/tex] and setting it equal to zero. Only the (A - [tex]\lambda[/tex]I) with those [tex]\lambda[/tex] are non-invertible, so those are the only ones with nonzero X.

Once you have your eigenvalues, you find out which Xs work, which you already know how to do. Ultimately, infinitely many Xs work for each eigenvalue, so we deal with that by picking one representative and excluding all its other scalar multiples. So you've found an eigenvector such that AX is the same as [tex]\lambda[/tex]X; pretty weird, huh? A matrix acting like a scalar towards some X! Each [tex]\lambda[/tex] has its own eigenvectors, and sometimes more than one independent eigenvector is possible.

Eigenvalues and eigenvectors are used in linear algebra to diagonalize matrices, that is, to put them into a nice, simple form. Certain matrices have predictable eigenvalues; others, not so much. You will learn other ways to use them in linear algebra. For now, I hope this clears things up.
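
The whole procedure, as a numerical sketch (Python with numpy; the matrix is an arbitrary example):

[code]
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])       # arbitrary example

# Coefficients of det(A - lambda*I) as a polynomial in lambda:
coeffs = np.poly(A)              # here: lambda^2 - 7*lambda + 10
lams = np.roots(coeffs)          # its roots, 5 and 2, are the eigenvalues

for lam in lams:
    B = A - lam * np.eye(2)      # this matrix is singular...
    assert np.isclose(np.linalg.det(B), 0.0)
    # ...so BX = 0 has nonzero solutions X: the eigenvectors
[/code]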
 
  • #25
trambolin said:
Very stupid example, but it sometimes works on weird individuals like me... (and it is not general enough to capture all the properties), but life is too short to explain everything in one shot, so here it goes...


Imagine you are pushing a box with a very weird shape, and suppose you can't see the whole box, so it is difficult to estimate how to push it.


You want to push it so that the direction of the force you apply is exactly the direction the box moves; that is, the force vector and the displacement vector point in the same direction, though possibly with different magnitudes! What usually happens is that you get both translation and rotation if you choose a bad direction. But there might be a direction in which you get pure translation. That is your eigenvector. How much translation you get is proportional to your force, and that proportion is your eigenvalue with respect to that direction (or eigenvector). Now let A be the matrix describing the relation between the directions you push in and the translations you get in each direction; then, after some concrete argument, you can show that for some directions your biiiiig matrix really acts like a scalar (you get only translation in that particular direction).


Now, if it makes sense, read the rigorous arguments above again. If not, forget about these immediately :)

My God. This explanation is quite possibly the best I've ever heard. Looking at it the way you describe makes eigenvalues/vectors far more intuitive than throwing around numbers and vague terms. Better than the way any of my professors have tried explaining it. This coming from a junior at Carnegie Mellon University...
 
  • #26
Alex6200 said:
But linear algebra just doesn't make sense to me.

I have the same problem some times (though I've studied linear algebra for a few years now).

Sometimes, I think what we really want isn't so much to know what something is, but to develop an intuitive understanding of it. You know what an eigenvector of an operator is: any vector whose direction is not changed under that operator (only its length). And then eigenvalues are the scalars that give the change in length.

But what does it physically signify? I'm not sure myself (I was actually browsing through Wikipedia earlier today for just this reason). But eigenvectors and eigenvalues have lots of super weird properties. The sum of the eigenvalues is the trace of an operator. The product is the determinant. The determinant is the expression that "determines" whether the equation Tx = y (where T is a transformation and y is a vector) has a unique solution for x. The determinant is also used in finding the inverse transformation.
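
Those first two properties are easy to check numerically (Python with numpy; any square matrix will do):

[code]
import numpy as np

A = np.random.rand(4, 4)                 # any square matrix
lams = np.linalg.eigvals(A)

assert np.isclose(np.trace(A), lams.sum())        # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(A), lams.prod())  # determinant = product
[/code]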

But I don't understand this crap either, so I'll leave you with this instead!


What is the matrix?

Neo: What is the Matrix?
Trinity: The answer is out there, Neo, and it's looking for you, and it will find you if you want it to.
 
  • #28
The classic example is the rotating Earth. Create a 3-D vector for each point on the surface of the Earth. As the Earth rotates, all but two of these vectors change direction. The two that don't change direction (at the North and South Poles) are the eigenvectors, and since they don't change length either, their eigenvalues are one.
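
To make that concrete (Python with numpy; a rotation about the z-axis stands in for the Earth's rotation, and this also shows where complex eigenvalues come from):

[code]
import numpy as np

theta = np.radians(30)                   # any rotation angle
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

lams, vecs = np.linalg.eig(Rz)
# One eigenvalue is exactly 1, with eigenvector (0, 0, 1): the rotation axis.
# The other two form the complex pair e^(+i*theta), e^(-i*theta): no other
# real direction survives the rotation unchanged, so they come out complex.
print(lams)
[/code]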
 
  • #29
How did you unearth this thread (and why)? It is about two and a half years old.

And your example isn't very clear. You say "create a 3-D vector for each point on the surface of the Earth" but you appear to be assuming, without saying it, that the vector at each point points directly away from the center of the earth.
 
  • #30
That's true, Halls, but, at least for someone who understands it, it is a pretty nice example, and by now the OP is probably long gone...! So we can enjoy it and think about how to use it next time this is asked. Best wishes.
 

1. What are Eigenvalues and Eigenvectors?

Eigenvalues and Eigenvectors are concepts in linear algebra that are used to understand and analyze the behavior of linear transformations. Eigenvalues represent the scaling factor of an Eigenvector when it is transformed by a linear transformation. Eigenvectors are special vectors that remain in the same direction after being transformed by a linear transformation.

2. Why are Eigenvalues and Eigenvectors important?

Eigenvalues and Eigenvectors are important because they allow us to simplify complex linear transformations and understand their behavior. They are used in a variety of fields, including physics, engineering, and computer science, to solve problems related to linear transformations.

3. How do I find Eigenvalues and Eigenvectors?

To find Eigenvalues and Eigenvectors, you need to solve the characteristic equation of the matrix. The characteristic equation is obtained by setting the determinant of the matrix minus λ times the identity matrix equal to zero. The solutions to this equation are the Eigenvalues, and the corresponding Eigenvectors can be found by plugging each Eigenvalue back into (A - λI)x = 0 and solving for x.
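
A minimal sketch of this procedure (Python with numpy/scipy; the matrix is an arbitrary example):

[code]
import numpy as np
from scipy.linalg import null_space

A = np.array([[6.0, -1.0],
              [2.0,  3.0]])             # arbitrary example

# Roots of the characteristic polynomial det(A - lambda*I) = 0...
lams = np.roots(np.poly(A))             # lambda^2 - 9*lambda + 20 -> 5 and 4
# ...and plugging each eigenvalue back in, an eigenvector spans the
# null space of (A - lambda*I):
for lam in lams:
    v = null_space(A - lam * np.eye(2))[:, 0]
    assert np.allclose(A @ v, lam * v)
[/code]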

4. What is the relationship between Eigenvalues and Eigenvectors?

The relationship between Eigenvalues and Eigenvectors is that Eigenvalues represent the scaling factor of an Eigenvector when it is transformed by a linear transformation. In other words, Eigenvectors are the vectors that remain in the same direction after being transformed by a linear transformation with a scaling factor represented by the Eigenvalue.

5. How are Eigenvalues and Eigenvectors used in data analysis?

Eigenvalues and Eigenvectors are used in data analysis to reduce the dimensionality of a dataset. This is done by finding the Eigenvectors and Eigenvalues of the covariance matrix of the dataset. The Eigenvectors with the highest corresponding Eigenvalues are then used as new variables to represent the original data, resulting in a more compact representation of the data while preserving most of the important information.
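
A bare-bones sketch of that procedure (Python with numpy; the dataset here is random toy data):

[code]
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # toy dataset: 200 samples, 5 features

C = np.cov(X, rowvar=False)        # 5x5 covariance matrix
lams, vecs = np.linalg.eigh(C)     # eigh: C is symmetric, eigenvalues ascending

# Project onto the 2 eigenvectors with the largest eigenvalues:
top2 = vecs[:, -2:][:, ::-1]       # principal components, largest first
X_reduced = X @ top2               # 200x2 compressed representation
[/code]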
