Why can't we define an eigenvalue of a matrix as any scalar value?

Summary
An eigenvalue cannot be defined as any scalar value because it must correspond to a specific eigenvector satisfying ##A\mathbf x = \lambda \mathbf x##. In the discussed example, the matrix multiplication does not yield a scalar multiple of the original vector, which is what is required for ##\lambda## to be an eigenvalue. The confusion arises from the idea that any scalar can be factored out of the result; in fact, the same eigenvector must appear on both sides of the equation. Therefore ##\lambda = 1## is not an eigenvalue in this case, since the vector involved does not satisfy the eigenvalue condition. Understanding the relationship between eigenvalues and their corresponding eigenvectors is essential for identifying eigenvalues correctly.
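Equivalently (a standard criterion, stated here for reference; it is not quoted in the thread itself): ##\lambda## is an eigenvalue of ##A## exactly when ##\det(A - \lambda I) = 0##, i.e. when ##A - \lambda I## sends some nonzero vector to the zero vector.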
member 731016
Homework Statement
Please see below
Relevant Equations
Please see below
For this,
[Attached image: the computation discussed below, ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\ 5 \end{bmatrix}##]

Does anybody know why we cannot say ##\lambda = 1##, so that ##1## would be an eigenvalue of the matrix?

Many thanks!
 
The result of the multiplication is ##\begin{bmatrix} 1 \\ 5 \end{bmatrix}##, not ##\begin{bmatrix} \lambda \\ 0 \end{bmatrix}##, so it doesn't matter what the value of ##\lambda## is.
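Spelled out componentwise (a short check, added for clarity; it is not in the original post): if ##\begin{bmatrix} 1 \\ 5 \end{bmatrix} = \lambda \begin{bmatrix} 1 \\ 0 \end{bmatrix}##, the first component forces ##\lambda = 1## while the second requires ##5 = \lambda \cdot 0 = 0##, which is impossible.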
 
ChiralSuperfields said:
Does anybody know why we cannot say ##\lambda = 1##, so that ##1## would be an eigenvalue of the matrix?

An eigenvalue ##\lambda## is a number such that ##A\mathbf x = \lambda \mathbf x## for some nonzero vector ##\mathbf x## (an eigenvector).

For the matrix you asked about ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\5 \end{bmatrix} \ne \lambda \begin{bmatrix}1 \\ 0 \end{bmatrix}## for any value of ##\lambda##.
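A quick numerical check (a minimal sketch, assuming NumPy; this code is an editorial addition, not part of the thread) confirms both points: the matrix's actual eigenvalues, and the fact that ##\begin{bmatrix}1 \\ 0 \end{bmatrix}## is not an eigenvector:

```python
import numpy as np

# The matrix from the thread.
A = np.array([[1, 6],
              [5, 2]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
vals, vecs = np.linalg.eig(A)
print(vals)  # approximately [ 7. -4.] (order may vary)

# Each (eigenvalue, eigenvector) pair satisfies A @ v == lam * v,
# with the SAME vector v on both sides.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# By contrast, [1, 0] is not an eigenvector: A @ [1, 0] = [1, 5],
# and [1, 5] is not a scalar multiple of [1, 0].
x = np.array([1.0, 0.0])
print(A @ x)  # [1. 5.]
```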
 
Thank you for your replies @FactChecker and @Mark44!

Sorry, I still don't understand. I'll try to explain my understanding so that any misconception can be exposed. ##\lambda## is the constant that is factored out in front of the column vector ##\vec x##, and that constant is called the eigenvalue. In Examples 1 and 2 below, the constants multiplying the column vectors are ##\lambda = 7## and ##\lambda = -4##, respectively.
[Attached image: textbook Examples 1 and 2, in which the products are ##7## and ##-4## times the respective column vectors]


However, for this example,

##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\5 \end{bmatrix}##, why can't we factor out a ##1## from the column vector to get ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = 1 \begin{bmatrix}1 \\5 \end{bmatrix}##?

According to the textbook, ##\lambda## can be any real number, so why can't ##1## be an eigenvalue?
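For contrast, here is a pair that does satisfy the condition (this check is an editorial addition: the eigenvector ##\begin{bmatrix}1 \\ 1 \end{bmatrix}## was computed from the matrix, while ##\lambda = 7## is one of the two constants mentioned above): ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 1 \end{bmatrix} = \begin{bmatrix}7 \\ 7 \end{bmatrix} = 7 \begin{bmatrix}1 \\ 1 \end{bmatrix}##. Note that the same vector ##\begin{bmatrix}1 \\ 1 \end{bmatrix}## appears on both sides.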

Many thanks!
 
Mark44 said:
An eigenvalue ##\lambda## is a number such that ##A\mathbf x = \lambda \mathbf x## for some nonzero vector ##\mathbf x## (an eigenvector).
You didn't read what I wrote in my previous post carefully enough. An eigenvalue is closely associated with a specific eigenvector. In the equation above, ##\mathbf x## is an eigenvector that appears on both sides of the equation. For an eigenvalue/eigenvector pair, multiplication of the vector by the matrix produces a vector that is a scalar multiple (by the eigenvalue) of that same vector.
ChiralSuperfields said:
However, for this example,
##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = \begin{bmatrix}1 \\5 \end{bmatrix}##, why can't we factor out a 1 from the column vector to get ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 0 \end{bmatrix} = 1 \begin{bmatrix}1 \\5 \end{bmatrix}##.
Because ##\begin{bmatrix}1 \\ 0 \end{bmatrix}## isn't the vector that appears on both sides of the equation.
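In fact (an editorial check, not from the thread), ##\lambda = 1## fails even if we try to use ##\begin{bmatrix}1 \\ 5 \end{bmatrix}## on both sides: ##\begin{bmatrix}1 & 6 \\ 5 & 2\end{bmatrix} \begin{bmatrix}1 \\ 5 \end{bmatrix} = \begin{bmatrix}31 \\ 15 \end{bmatrix} \ne 1 \begin{bmatrix}1 \\ 5 \end{bmatrix}##, so ##\begin{bmatrix}1 \\ 5 \end{bmatrix}## is not an eigenvector for ##\lambda = 1## either.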
 
Mark44 said:
You didn't read what I wrote in my previous post carefully enough. An eigenvalue is closely associated with a specific eigenvector. In the equation above, ##\mathbf x## is an eigenvector that appears on both sides of the equation. For an eigenvalue/eigenvector pair, multiplication of the vector by the matrix produces a vector that is a scalar multiple (by the eigenvalue) of that same vector.

Because ##\begin{bmatrix}1 \\ 0 \end{bmatrix}## isn't the vector that appears on both sides of the equation.
Oh, thank you @Mark44! I see now. Sorry, I forgot that the same column vector has to appear on both sides.
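To close the loop (a standard computation, added for completeness; it does not appear in the thread): the eigenvalues of this matrix come from ##\det(A - \lambda I) = (1-\lambda)(2-\lambda) - 30 = \lambda^2 - 3\lambda - 28 = (\lambda - 7)(\lambda + 4) = 0##, so the only eigenvalues are ##\lambda = 7## and ##\lambda = -4##; ##\lambda = 1## is not among them.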
 