Understanding Eigenvectors and Eigenvalues in Linear Transformations

  • Context: Undergrad
  • Thread starter: porcupine6789
  • Tags: Eigenvalues, Eigenvectors

Discussion Overview

The discussion centers on the concepts of eigenvectors and eigenvalues in the context of linear transformations, exploring their definitions, properties, and applications, particularly in physics and linear algebra.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants explain that for a matrix A, the relationship A*xi=bi*xi defines eigenvectors and eigenvalues, with applications in physics, such as studying vibrations.
  • Others elaborate on the idea that if a vector w is in the same direction as v after multiplication by A, then w can be expressed as a scalar multiple of v, leading to the formulation Av=λv.
  • One participant questions the nature of operators that do not change the direction of vectors, suggesting they might be linear transformations, while expressing uncertainty about the terminology.
  • Another participant emphasizes the utility of eigenvectors in simplifying matrix multiplications, allowing arbitrary vectors to be expressed as sums of eigenvectors.
  • Some participants discuss the formal definition of linear transformations and their relationship to matrices, noting that linear transformations can exist in infinite-dimensional spaces where they may not be represented by matrices.
  • There is a mention of the differentiation of polynomials as a linear operator that can be represented by an infinite matrix, raising questions about the representation of linear transformations in different contexts.

Areas of Agreement / Disagreement

Participants express various viewpoints on the definitions and implications of eigenvectors and eigenvalues, with no clear consensus on certain aspects, such as the nature of linear transformations and their representations.

Contextual Notes

Some discussions involve assumptions about the dimensionality of vector spaces and the applicability of certain definitions, which may not hold universally across all contexts.

porcupine6789
Can someone explain these to me?
 
For a given matrix A, the eigenvectors [itex]x_i[/itex] and the eigenvalues [itex]b_i[/itex] satisfy the relationship:

[tex]A x_i = b_i x_i[/tex]

Eigenvectors and eigenvalues appear frequently in physics, for example in the study of vibrations.
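To make the relationship concrete, here is a minimal NumPy sketch (the matrix entries are an illustrative assumption, loosely in the spirit of a vibration problem) that computes the eigenpairs and verifies the defining relation:

[code=python]
import numpy as np

# A toy symmetric matrix, as might appear in a vibration problem
# (the specific numbers are an illustrative assumption)
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

b, X = np.linalg.eig(A)  # eigenvalues b[i], eigenvectors as columns X[:, i]

for i in range(len(b)):
    xi = X[:, i]
    print(np.allclose(A @ xi, b[i] * xi))  # True: A*xi = bi*xi
[/code]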
 
At a basic level, suppose we have a vector, v, and we multiply it by a square (n-by-n) matrix A. Computing A*v = w (where w is some new vector), we can look at the direction of w.

If w is in the same direction as v it must be some multiple of v. In other words [tex]w = \lambda v[/tex] where lambda is just some scalar value.

If this is the case we can say that [tex]Av = \lambda v[/tex] where lambda is the eigenvalue and v is an eigenvector of the matrix A
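As a quick sanity check of this definition, the sketch below (with an assumed diagonal matrix and an assumed eigenvector, purely for illustration) multiplies out w = Av and confirms that w is a scalar multiple of v:

[code=python]
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])  # chosen (by assumption) to be an eigenvector of A

w = A @ v                  # w = A*v
lam = w[0] / v[0]          # read off the scale factor from a nonzero component of v
print(np.allclose(w, lam * v))  # True, so Av = lambda*v with lambda = 2.0
[/code]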
 
Feldoh said:
If w is in the same direction as v it must be some multiple of v. In other words [tex]w = \lambda v[/tex] where lambda is just some scalar value.

If this is the case we can say that [tex]Av = \lambda v[/tex] where lambda is the eigenvalue and v is an eigenvector of the matrix A

I have some questions about this. If w is in the same direction as v, this means that A (which is an operator represented by the matrix A) doesn't change the direction of v, i.e. A doesn't rotate the vector v. What do we call such operators (or matrices)? Are they just linear transformations?
 
I guess some people will explain the details. I just want to explain the main reason why you use eigenvectors.

If you find enough vectors [itex]x_i[/itex] which satisfy
[tex]A x_i = q_i x_i[/tex]
where [itex]q_i[/itex] is a scalar constant, then you can write any arbitrary vector y as a sum of these:
[tex]y = t_1 x_1 + t_2 x_2 + t_3 x_3 + \cdots[/tex]
Multiplying by the matrix A is now very easy:
[tex]A y = t_1 q_1 x_1 + t_2 q_2 x_2 + t_3 q_3 x_3 + \cdots[/tex]
You see, eigenvectors simplify matrix multiplication a lot.
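Here is a small NumPy sketch of this idea (the matrix A and the vector y are illustrative assumptions): it expands y in the eigenvector basis and shows that multiplying by A reduces to scaling each coefficient by its eigenvalue:

[code=python]
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q, X = np.linalg.eig(A)    # eigenvalues q[i], eigenvectors x_i as columns of X

y = np.array([3.0, -1.0])
t = np.linalg.solve(X, y)  # coefficients with y = t1*x1 + t2*x2

# Multiplying by A now just scales each coefficient by its eigenvalue:
Ay = X @ (t * q)
print(np.allclose(Ay, A @ y))  # True
[/code]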
 
What you are really asking for is a complete course in linear algebra!

The basic "object of study" in Linear Algebra is the "vector space", an algebraic structure in which we have defined addition and multiplication by numbers.

The functions of interest, then, are those that "preserve" these operations, A(ru + sv) = rA(u) + sA(v): the "linear transformations".

Further, the most fundamental property of "linear" itself is: we can "take apart" a linear problem, solve each part, then put the solutions to each part together to get a solution to the original problem. If I have to solve a problem of the form "Ax= B" then first solving "[itex]Ax= \lambda x[/itex]" shows me the best way to "take apart" the problem.
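As a sketch of this "take apart" strategy (assuming a small invertible, diagonalizable matrix chosen purely for illustration), one can solve Ax = B by expanding B in eigenvectors and dividing each component by its eigenvalue:

[code=python]
import numpy as np

# A hypothetical symmetric system Ax = B, small enough to check by hand
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([1.0, 2.0])

lam, X = np.linalg.eig(A)  # "take apart": eigenvalues and eigenvectors of A
c = np.linalg.solve(X, B)  # expand B as a sum of eigenvectors, B = sum c_i x_i

# Each eigen-component is solved independently, then reassembled:
x = X @ (c / lam)
print(np.allclose(A @ x, B))  # True: same answer as np.linalg.solve(A, B)
[/code]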
 
maxverywell said:
I have some questions about this. If w is in the same direction as v, this means that A (which is an operator represented by the matrix A) doesn't change the direction of v, i.e. A doesn't rotate the vector v. What do we call such operators (or matrices)? Are they just linear transformations?

I've typically heard them called linear transformations; however, I'm not really qualified to say whether they're something else.

I mean, formally a linear transformation is something like T: V -> W, where V and W are vector spaces and

1.) T(v1 + v2) = T(v1) + T(v2), where v1, v2 are in V
2.) T(av) = aT(v), for any scalar a and any v in V

I don't have a proof that these hold for eigenvalues/eigenvectors, but I feel like they are satisfied (a quick numerical check is sketched below).

Also, depending on the operator, couldn't the eigenvalue problem be connected to certain mathematical groups? I'm mainly thinking of invariance groups.
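To back up the "I feel like they are satisfied" above, here is a minimal NumPy sketch (the matrix and vectors are arbitrary illustrative choices) checking the two linearity conditions for a matrix acting on vectors:

[code=python]
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # any matrix acts as a map T: R^3 -> R^3
v1 = rng.standard_normal(3)
v2 = rng.standard_normal(3)
a = 2.5

print(np.allclose(T @ (v1 + v2), T @ v1 + T @ v2))  # condition 1
print(np.allclose(T @ (a * v1), a * (T @ v1)))      # condition 2
[/code]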
 
A linear operator on a vector space is a function, F, such that F(au+ bv)= aF(u)+ bF(v) where u and v are vectors and a and b are scalars.

A matrix is, of course, an array of numbers with specific addition and multiplication defined.

We can think of m by n matrices as linear transformations from the specific vector space [itex]R^n[/itex] to the vector space [itex]R^m[/itex].

More generally, if F is a linear transformation from a vector space of dimension n to a vector space of dimension m, then, given specific ordered bases, it can be represented as an m by n matrix.

But linear transformations are not the same as matrices. For one thing, the correspondence depends on a specific basis (the matrices representing the same linear transformation in different bases are "similar" matrices however). Further, there exist infinite dimensional vector spaces in which linear transformations cannot be represented by matrices. One example is the vector space of all polynomials (with the usual definition of addition of polynomials and multiplication by numbers) with the linear operator being differentiation of the polynomial.
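The "similar matrices" remark can be illustrated with a short NumPy sketch (the matrices A and P here are illustrative assumptions): conjugating by an invertible change-of-basis matrix P produces a different matrix representing the same transformation, with the same eigenvalues:

[code=python]
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # an assumed change-of-basis matrix; must be invertible

B = np.linalg.inv(P) @ A @ P    # the same transformation written in the new basis

# Similar matrices represent one transformation, so they share eigenvalues:
print(np.sort(np.linalg.eigvals(A)))  # [2. 3.]
print(np.sort(np.linalg.eigvals(B)))  # [2. 3.]
[/code]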
 
But differentiation of polynomials in R[t] can be represented by the (infinite) matrix
[tex]\begin{bmatrix}
0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 2 & 0 & \cdots \\
0 & 0 & 0 & 3 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix}[/tex]
relative to the standard basis {1, t, t^2, ...}. So can any other linear transformation between vector spaces with countable (Hamel) bases.
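In practice one can only store a finite truncation of this infinite matrix; here is a minimal NumPy sketch (the truncation size N is an arbitrary choice) that builds the top-left N-by-N block and applies it to a polynomial's coefficient vector:

[code=python]
import numpy as np

# Truncate the infinite differentiation matrix to polynomials of degree < N
N = 5
D = np.diag(np.arange(1.0, N), k=1)  # superdiagonal 1, 2, 3, ... as above

# p(t) = 1 + 2t + 3t^2 as a coefficient vector in the basis {1, t, t^2, ...}
p = np.array([1.0, 2.0, 3.0, 0.0, 0.0])
print(D @ p)  # [2. 6. 0. 0. 0.], i.e. p'(t) = 2 + 6t
[/code]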
 
HallsofIvy said:

But linear transformations are not the same as matrices. For one thing, the correspondence depends on a specific basis (the matrices representing the same linear transformation in different bases are "similar" matrices however). Further, there exist infinite dimensional vector spaces in which linear transformations cannot be represented by matrices. One example is the vector space of all polynomials (with the usual definition of addition of polynomials and multiplication by numbers) with the linear operator being differentiation of the polynomial.

Well, sure, but if the OP doesn't know what an eigenvector/eigenvalue is, it's best to start with the finite-dimensional case.
 
