- #1
What is an eigenvalue, an eigenvector, etc., and what is their physical significance?

-Devanand T

- Thread starter dexterdev

- #2

Simon Bridge

Science Advisor

Homework Helper


If a transformation A applied to a vector v results in a scaled copy of the same vector, [itex]A \vec{v} = \lambda \vec{v}[/itex], then [itex]\vec{v}[/itex] is called an eigenvector of A and the scale factor [itex]\lambda[/itex] is the corresponding eigenvalue.

These need have no physical significance at all - the terminology is mathematical and describes a mathematical relationship. However, some physical models turn out to employ this relationship, so knowing about it is useful.

In sloppy language: if a vector is, for instance, also a solution to the Schrödinger equation, then some transformations on that vector may have eigenvalues. If a transformation has real eigenvalues, then it corresponds to the act of measuring a physical property of the system described by the Schrödinger equation, and the eigenvalue helps predict the results of actually performing that measurement.
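As a minimal numerical sketch of that point (the matrix below is my own example, not from the thread): a Hermitian matrix - the kind that represents a measurable quantity in quantum mechanics - always has real eigenvalues, which numpy can confirm.

```python
import numpy as np

# A 2x2 Hermitian matrix (equal to its own conjugate transpose),
# here the Pauli y-matrix, a standard quantum-mechanical observable.
H = np.array([[0, -1j],
              [1j,  0]])
assert np.allclose(H, H.conj().T)  # check it really is Hermitian

# eigh is numpy's solver for Hermitian matrices; it returns real
# eigenvalues in ascending order.
vals, vecs = np.linalg.eigh(H)
print(vals)  # the possible measurement outcomes: [-1.  1.]
```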

- #3


> These need have no physical significance at all - the terminology is mathematical and describes a mathematical relationship.

I disagree. In any case where an operator describes something physical, its eigenvectors have direct physical significance. They usually form a basis of the space involved, and the corresponding eigenvalues describe the effect of the operator on those basis vectors. So we have, in a way, "principal" directions of the operator.

- #4


Yeah, I was just going to say something similar to voko. What Simon Bridge said is true, but there is *a type* of physical meaning you can have when it comes to eigenvectors and eigenvalues, and it comes about in the following way.

When you look at how matrices operate on vectors to produce new vectors, the generic case is just some [itex]\tilde{A} \vec{v}_{1} = \vec{v}_{2}[/itex], where [itex]\vec{v}_{2}[/itex] is not in the same direction as [itex]\vec{v}_{1}[/itex]. However, there may be special directions in which the output vector points exactly along the input vector, and thus we have the eigenvector/eigenvalue equation [itex]\tilde{A} \vec{v} = \lambda \vec{v}[/itex]. The directions are what really matter here; the magnitude of an eigenvector is irrelevant, because you can multiply both sides of the equation by a scalar. Now, if the number of independent eigenvectors (and thus of "special directions" associated with the matrix) equals the dimension of the space, one can use these eigenvectors as the basis vectors: they can't all lie in the same direction, and they will span the whole n-dimensional space. What does this do for us? It allows us to write the complicated multilinear vector function, [itex]\sum_{j}{\sum_{i}{A_{ij} v_{i} \hat{e}_{j}}}[/itex], as a simple componentwise function, [itex]\sum_{i}{\lambda_{i} v_{i} \hat{e}_{i}}[/itex], where the unit vectors point along the special directions (usually called the "principal directions", which lie along the "principal axes").

When physics problems that involve matrix operations are represented in a coordinate system built from these principal axes, the problem takes on its simplest form; the physics is at its most direct. There is real significance to these axes: they reflect the natural symmetry of the physical object being described by the matrix equation, and they give you huge insight into the salient features of the physical geometry of the problem.

Let's look at a physical case. In the polarization of a material (say, some crystal) by an outside electric field, the crystal usually polarizes more easily (well, differently) in certain directions. These are the principal directions and are related to the structure of the atomic lattice. If one represents the applied electric field in the principal coordinate system, then the polarization of the crystal can be found quite easily; it's a simple vector function like [itex]\vec{P} = \epsilon_{o} \chi \vec{E}[/itex]. Represent the electric field in some other coordinate system, such as Cartesian coordinates not aligned with the principal axes, and you get this monster: [itex]\vec{P} = \sum_{i}{\sum_{j}{\epsilon_{o} \chi_{ij} E_{j} \hat{e}_{i}}}[/itex]. Note that [itex]\vec{P}[/itex] is the polarization per unit volume of the material.
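A minimal numerical sketch of that polarization example (the tensor entries are made up for illustration): diagonalizing a symmetric susceptibility tensor recovers the principal susceptibilities and principal axes, in which [itex]\vec{P} = \epsilon_{o} \chi \vec{E}[/itex] reduces to one multiplication per component.

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

# A made-up symmetric susceptibility tensor, written in lab
# coordinates that are not aligned with the crystal's principal axes.
chi = np.array([[2.5, 0.5, 0.0],
                [0.5, 2.5, 0.0],
                [0.0, 0.0, 1.0]])

# Eigenvalues = principal susceptibilities, eigenvectors = principal axes.
chi_principal, axes = np.linalg.eigh(chi)

# An applied field, rotated into principal coordinates...
E_lab = np.array([1.0, 2.0, 3.0])
E_principal = axes.T @ E_lab

# ...where the tensor equation collapses to componentwise scaling:
P_principal = eps0 * chi_principal * E_principal

# Same answer as the full double-sum ("monster") in lab coordinates:
P_lab = eps0 * chi @ E_lab
assert np.allclose(axes @ P_principal, P_lab)
```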


- #5

Simon Bridge


I was attempting to answer the question as stated :) "What is the physical significance..." but there is a related question: "what is the significance to physics..." and dydxforsn has given a good example of that.

A very well known example is our habit of resolving vectors into components perpendicular and parallel to some other vector. We are decomposing the vector along orthogonal eigenvectors of transformations in space.

- #6


I second Bridge's comment above. I once read somewhere on this forum that any linear system can be expressed in matrix form - a set of linear ODEs, for example - which of course means treating the space like a Euclidean space, even though that space doesn't exist physically: it is only a mathematical way of treating the elements of the set. Since ODEs are widely employed in engineering, the eigenvectors do have physical implications, but they are not necessarily **spatial**, depending on the application. Of course, ODEs are just one example; vectors in mathematics are really general, as Bridge put it.
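A sketch of that non-spatial case (the matrix here is invented for illustration): for a linear ODE system [itex]\dot{\vec{x}} = A \vec{x}[/itex], the eigenvalues of A act as decay rates rather than directions in physical space, and the eigenvector basis turns the coupled system into independent decaying modes.

```python
import numpy as np

# A made-up coupled linear system dx/dt = A x (think of two leaky,
# coupled tanks); nothing in A refers to spatial directions.
A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])

vals, vecs = np.linalg.eigh(A)  # A is symmetric, so eigh applies
x0 = np.array([1.0, 0.0])

def x(t):
    # Decompose x0 along the eigenvectors; each mode then evolves
    # independently as exp(lambda * t), the eigenvalue being its rate.
    coeffs = vecs.T @ x0
    return vecs @ (np.exp(vals * t) * coeffs)

# Sanity check: the eigenbasis solution really satisfies dx/dt = A x
# (central finite difference approximates the derivative).
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-6)
```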


- #7


Essentially, when we say something is an eigenvector of an operator, we mean that when the operator A acts on a member of a vector space (be it an actual vector, a function, etc.), the result is simply the original element multiplied by a constant.

So [itex]A(p) = ap[/itex], where p is an element of the vector space, a is some constant, and A is some operator.
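That definition is easy to check numerically; the matrix and vectors below are just an example chosen for illustration.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# p = (1, 1) is an eigenvector of A: applying A only rescales it.
p = np.array([1.0, 1.0])
a = 3.0
assert np.allclose(A @ p, a * p)  # A(p) = a p with a = 3

# By contrast, a non-eigenvector gets rotated as well as scaled:
q = np.array([1.0, 0.0])
print(A @ q)  # [2. 1.] -- not a multiple of q
```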

- #8

Simon Bridge


I think what we need now is input from the OP to see if we've cleared up the confusion.

How about it dexterdev? Any of this help?
