# Eigenvectors + Me= ?AHHHHHHHHHHH

1. Dec 9, 2005

### karen03grae

Eigenvectors + Me= ??????AHHHHHHHHHHH

I am really trying to understand eigenvectors, but you have to understand that my prof. only teaches HOW to get the eigenvalues/vectors and how to use them to solve diff. eqs.
So far I can find the eigenvalues of a 2x2, and from there I can get the eigenvectors. I can also take a coupled system of differential equations and decouple it using X = CY, where C is the eigenvector matrix. Anyway, besides the definition Av = rv (where A is the coefficient matrix of the system, v is an eigenvector, and r is the corresponding eigenvalue), I have no idea what an eigenvector is. If I could SEE an eigenvector on a graph, I think I would be much more enlightened. But here are some questions that may also do the trick:
1. What does an eigenvector look like (i.e. direction, what coordinate system would it belong on)?
2. How is an eigenvector different from regular vectors?
3. I checked out a book on linear algebra, and one example gives a system of equations in matrix form
$$\begin{pmatrix} 2 & 3 \\ 4 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
It says that $\begin{pmatrix} x \\ y \end{pmatrix}$ is a vector AND $\begin{pmatrix} 2 \\ 3 \end{pmatrix}$ is a vector! How? What is the direction of $\begin{pmatrix} 2 \\ 3 \end{pmatrix}$? It has a magnitude, but by itself, where would it go on a graph? Thanks

2. Dec 9, 2005

### hypermorphism

Go back to the definitions. Don't assume that all 2-dimensional vectors are elements of R2 (the Cartesian plane) with the usual Euclidean structure. A vector is an element of a vector space. A 2-dimensional vector is an element of a 2-dimensional vector space. A 2-dimensional vector space has a basis of 2 vectors, so every element of that space can be represented by a sum v1e1 + v2e2, where the e's are basis vectors and the v's are components of that specific vector. That can be abbreviated to a list (v1, v2).
Also note that a notion of length is not necessary to study eigenvalues, but is a useful additional structure. Take the 2-dimensional vector space of polar coordinates. The usual norm in that space is ||(x1,x2)|| = x1, instead of the Euclidean norm ||(x1,x2)|| = $\sqrt{x_1^2 + x_2^2}$. Studying the interaction of linear transformations with metric and inner product spaces will come later in your text.
With respect to what eigenvectors look like, the definition holds all intuition. An eigenvector is an element of a vector space that is simply scaled by a linear transformation, and thus the 1-dimensional subspace it spans is actually mapped into itself by the linear transformation. If a linear transformation over R2 has 2 eigenvectors with non-zero eigenvalues, then it is simply a rescaling of R2. The eigenvalues are how much each axis is scaled (note that the axes correspond to the eigenvectors, not necessarily the original xy-axes).
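That definition can be checked numerically. A minimal sketch (the matrix values below are made up purely for illustration):

```python
import numpy as np

# A hypothetical symmetric 2x2 matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# numpy returns the eigenvalues and unit eigenvectors (as columns of vecs).
vals, vecs = np.linalg.eig(A)

# Each eigenvector is simply scaled by A: A v equals lambda v,
# so the line spanned by v is mapped into itself.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```

Any vector not on one of those two lines gets both stretched and turned; only the eigenvector directions are preserved.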

Last edited: Dec 9, 2005
3. Dec 10, 2005

### karen03grae

Thanks so much for your reply. I wrote down your definition of eigenvectors. But I'm not too familiar with linear transformations. Are eigenvectors perpendicular? I kind of understand them to be like axes that are scaled when a linear transformation occurs.

When I have a system of differential equations and X' represents the matrix of differentiated dependent variables and X represents the matrix of solutions (say for a 2x2 in R^2) then is "A" the linear transformation?

X'=AX

I had two systems of differential equations and I plotted their solutions (approximately) by means of a directional field. (I had to put in initial values). Now if both dependent variables depend on "t" (some variable) and I plotted both of the dependent variables, can I see the eigenvectors in the graph?
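The behavior described above can be sketched numerically. This is only an illustration with a made-up matrix, not Karen's actual system: if the initial value lies exactly on an eigenvector, the solution of X' = AX never leaves that line, which is why trajectories in a direction field hug the eigenvector directions.

```python
import numpy as np

# A hypothetical coupled system X' = A X; the matrix is made up for illustration.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
vals, vecs = np.linalg.eig(A)

# Solve X' = A X via the eigen-decomposition: e^{At} = V e^{Dt} V^{-1}.
def flow(t, x0):
    return vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs) @ x0

# Start exactly on an eigenvector: the trajectory stays on that line.
v = vecs[:, 0]
x = flow(0.5, v)
assert abs(x[0] * v[1] - x[1] * v[0]) < 1e-12  # x is parallel to v
```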

Ps. I tried to attach the graph but it was wayyy too large

Thanks,
Karen

4. Dec 10, 2005

### cronxeh

eigenvectors are linearly independent, so yes they are perpendicular to each other

5. Dec 10, 2005

### rachmaninoff

Err, linear independence does not imply orthogonality. Eigenvectors don't have to be orthogonal. I notice you posted after midnight...

Last edited by a moderator: Dec 10, 2005
6. Dec 10, 2005

### Spectre5

In practice, they can be used to calculate the basis of a new coordinate system that makes certain calculations easier.

For example, in 3D rigid body mechanics, you can solve the eigenvalue/eigenvector problem to get the coordinate system in which all products of inertia are zero. Furthermore, the eigenvalues you calculate will be the moments of inertia (or principal moments of inertia, since the products of inertia are zero) about their corresponding eigenvectors.

Another quick example would be the calculation of the axes in which the shear stress is zero (thus you only have normal stresses) in solid mechanics.

Note that we can always solve the eigenvalue problem in these two cases because they both use symmetric matrices (the inertia tensor and the stress tensor).
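A short sketch of that principal-axis idea, with a made-up symmetric "inertia tensor" (the values are purely illustrative): for a symmetric matrix the eigenvectors are orthonormal, and in that basis the tensor becomes diagonal.

```python
import numpy as np

# A hypothetical symmetric 3x3 inertia-like tensor (values made up).
I = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# eigh is the routine for symmetric matrices: real eigenvalues,
# orthonormal eigenvectors (the principal axes, as columns).
moments, axes = np.linalg.eigh(I)

# In the principal-axis frame the tensor is diagonal:
# the off-diagonal "products of inertia" vanish.
D = axes.T @ I @ axes
assert np.allclose(D, np.diag(moments))
assert np.allclose(axes.T @ axes, np.eye(3))  # axes are orthonormal
```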

7. Dec 10, 2005

### matt grime

How did you get to do a course including coupled differential equations without having the prerequisite linear algebra knowledge?

Think of the pairs (x,y) (imagine that in a column) as pairs of real numbers. They can describe 'position' vectors in the plane; you don't plot them on a graph.

You are doing matrix algebra (forget linear maps for now). An eigenvector of a matrix M is a vector (u,v) such that M sends (u,v) to (tu,tv) for some real number t; t is the eigenvalue. What is special in the cases you're doing is that you can find two eigenvectors, and thus write any other vector as a sum of them. Say these two vectors are x and y; then you can easily describe the action of M on ax+by for real numbers a and b.

Last edited: Dec 10, 2005
8. Dec 10, 2005

### karen03grae

I got into the course because that is how our university sets up its math track. Cal 1, 2, 3, and lastly differential equations. That's all we need to take but I am going to take Linear Algebra and Fourier Series and Wavelets (because F.S.W. fascinates me); actually to make an "A" in diff.eq, all we need to know are the Maple commands.

Okay so my solutions to this system of differential equations will be a linear combination of my eigenvectors. Just like I can express "M" as a linear combination of the vector x and the vector y. Why can't I express my solutions as a linear combination of x and y?

My friend sent me this graph of the directional field of the solutions to a system of differential equations (initially coupled). And he put in initial values to make a trace of the solution on the plot. Also he plotted the eigenvectors, and it appears that the solution follows one of the eigenvectors for a large negative value of "t" (indep. var.) and follows the other eigenvector for a large positive value of "t". So the trajectories show the eigenvectors! But the eigenvectors weren't perpendicular.

Our teacher mentioned that the motion of pistons in a car can be described by a system of coupled differential equations. Which makes sense because the motion of one piston depends on the motion of the others. So say we have 2 pistons (a very unique vehicle), and I wanted to find the equation of motion with respect to time for each piston. I would say that the equations I want to find are related to some other equations via the eigenvectors? Right? And somehow through this substitution I end up with a diagonalized system. Geometrically speaking, I'm not sure how the eigenvectors produce a diagonal system. What makes them so special that they can do that. Magic? Ok. I hope you don't think I'm a lost cause.

*back to studying maple commands*

Karen

9. Dec 10, 2005

### matt grime

well, mutatis mutandis, you probably can; it's just that that won't be an easy thing to do. or helpful. suppose I give you a matrix M and a vector v: what is (M^n)v (i.e. M applied to v n times)? if v were an eigenvector with eigenvalue t, it would be (t^n)v. if v = u + w, where u and w are eigenvectors with eigenvalues s and t respectively, then (M^n)v = (s^n)u + (t^n)w. so eigenvectors are sought for computational reasons.

it's called a change of basis. instead of the column vector (x,y) representing the value xi+yj, we change basis to u, w, which are eigenvectors; then the pair (p,q) represents the point pu+qw, and M acts 'diagonally' on this basis, like a diagonal matrix acts on i, j.

this is abstractly confusing, but really quite easy.

suppose we were doing a street plan, and we did it in the american way, so that streets run in one of two directions. usually these are north-south and east-west. we can describe how to get from A to B as: go 3 blocks north and 2 west. now suppose that the streets were offset so that roads ran either east-west or northeast-southwest (i.e. - or /); then we couldn't say go so many blocks north then west, but we could say go so many blocks northeast then go west.

the directions (matrices) we choose to give are simpler in some map coordinates than others. we just pick the easiest ones.

the important thing is that we have enough basic vectors (i.e. go north, go southwest) to be able to specify any position (uniquely), and we choose the directions depending on the situation to make it as nice as possible.

Last edited: Dec 10, 2005