A question on the matrix eigenvalue problem from Eberhard Zeidler's first volume of Nonlinear Functional Analysis

In summary, the question is about finding the dominating eigenvalue and its corresponding eigenvector using the power method. The initial vector must have a non-zero component in the subspace associated with the dominating eigenvalue. The proof and further references can be found in C.D. Meyer's book "Matrix Analysis and Applied Linear Algebra".
  • #1
Alone
The question is posted in the following post in MSE, I'll copy it here:
https://math.stackexchange.com/ques...problem-from-eberhard-zeidlers-first-volume-o

I have a question from Eberhard Zeidler's book on nonlinear functional analysis, question 1.5a. As a reference for this question he gives Wilkinson's book "The Algebraic Eigenvalue Problem", but he doesn't give a page number, and it's hard to find wording there that exactly matches this question.

Anyway, the question is as follows:
Let $A$ be an $N\times N$ matrix with $N$ linearly independent eigenvectors $x_1,\ldots , x_N$ and corresponding eigenvalues $\lambda_i$, where $|\lambda_1|>|\lambda_2| \ge |\lambda_3| \ge \ldots \ge |\lambda_N|$.
Let $x^{(0)}=\sum_{i=1}^N \alpha_i x_i , \ \alpha_1 \ne 0$, and $x^{(n)}=A^nx^{(0)}/\| A^n x^{(0)}\|_{\infty}$. Show that:
as $n\rightarrow \infty$, $x^{(n)}\rightarrow \frac{x_1}{\|x_1\|_{\infty}}$ and $\|Ax^{(n)}\|_{\infty} \rightarrow \lambda_1$ .

It appears on page 40. In the correspondence with the poster on the forum we didn't write out the proof explicitly; he said it's too long. I would still like a full solution to this problem.

Alternatively, do you have a reference where this problem is worked out?

Thanks.
 
  • #2
Hi Alan,

Note that for large $n$

$$x^{(n)}\approx\frac{\sum_{i=1}^{N}\alpha_{i}\lambda_{i}^{n}x_{i}}{|\alpha_{1}\lambda_{1}^{n}|},$$

which comes from the inequality on the eigenvalues: after dividing numerator and denominator by $|\lambda_1|^n$, every term with $i \ge 2$ carries a factor $|\lambda_i/\lambda_1|^n \to 0$. I think if you can work out the details of this hint you can also get the second part of the problem. It is also helpful to note that the matrix of $A$ is diagonal with respect to the basis $x_{1},\ldots, x_{N}$. Let me know if you have further questions.
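
A minimal numerical sketch of this iteration (Python with numpy is assumed; the matrix and starting vector are illustrative, not taken from the problem):

```python
import numpy as np

# Illustrative 2x2 example: eigenvalues 5 and 2, dominant eigenvector (1, 1),
# so |lambda_1| > |lambda_2| as the problem requires.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x = np.array([1.0, 0.0])  # x^(0) has a nonzero component along x_1

for _ in range(50):
    y = A @ x
    x = y / np.linalg.norm(y, np.inf)  # x^(n) = A^n x^(0) / ||A^n x^(0)||_inf

print(x)                              # -> x_1 / ||x_1||_inf, here (1, 1)
print(np.linalg.norm(A @ x, np.inf))  # -> |lambda_1| = 5
```

Here $\lambda_1 = 5 > 0$, so the limit of $\|Ax^{(n)}\|_{\infty}$ really is $\lambda_1$ itself.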
 
  • #3
This is called the "power method" for finding a dominating eigenvalue–eigenvector pair. (It is a particular case, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominating eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.
 
  • #4
Krylov said:
This is called the "power method" for finding a dominating eigenvalue–eigenvector pair. (It is a particular case, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominating eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.

Do you have a specific reference where the proof is covered?
 
  • #5
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm, i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or $\lambda_1$ has a non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
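
A minimal sketch of this signed variant (again Python with numpy; the matrix is an illustrative example with $\lambda_1 = -5$, exactly the case where normalizing by the max norm alone could only recover $|\lambda_1|$):

```python
import numpy as np

# Illustrative example: eigenvalues -5 and -2, dominant eigenvector (1, 1).
A = np.array([[-4.0, -1.0],
              [-2.0, -3.0]])
x = np.array([1.0, 0.0])

for _ in range(50):
    y = A @ x
    p = np.argmax(np.abs(y))  # index of the component with the largest modulus
    x = y / y[p]              # divide by the signed component, not its absolute value

print(x)           # -> x_1 scaled so that component p equals 1
print((A @ x)[p])  # -> lambda_1 = -5, with the correct sign
```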
 
  • #6
Krylov said:
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm, i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or $\lambda_1$ has a non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i \,/\, \max_k \bigl|\bigl(\sum_i \alpha_i \lambda_i^n x_i\bigr)_k\bigr|$, but how do you find the maximum in the denominator of the last equality?
 
  • #7
Alan said:
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i \,/\, \max_k \bigl|\bigl(\sum_i \alpha_i \lambda_i^n x_i\bigr)_k\bigr|$, but how do you find the maximum in the denominator of the last equality?
It is not that obvious that you should divide by the maximum norm; see my post #5.

I gave it some more thought, and in the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)
 
  • #8
Krylov said:
It is not that obvious that you should divide by the maximum norm; see my post #5.

I gave it some more thought, and in the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)

Looks like a great reference book; there's also a chapter on the Perron–Frobenius theorem, whose proof I have always wanted to read.

Thanks!
 

1. What is an eigenvalue?

An eigenvalue is a scalar value that represents the extent to which a particular vector is stretched or compressed by a linear transformation. It is a characteristic of a square matrix and is found by solving the eigenvalue problem.
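
For instance (an illustrative example), the matrix $\begin{pmatrix}3 & 0\\ 0 & 1\end{pmatrix}$ stretches the vector $(1,0)^T$ by a factor of $3$:

$$\begin{pmatrix}3 & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} = 3\begin{pmatrix}1\\ 0\end{pmatrix},$$

so $3$ is an eigenvalue of this matrix with eigenvector $(1,0)^T$.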

2. What is the eigenvalue problem?

The eigenvalue problem is a mathematical problem that involves finding the eigenvalues and corresponding eigenvectors of a square matrix. It is used in various fields of science and engineering to analyze and solve systems of linear equations.
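
In symbols: given a square matrix $A$, the problem is to find scalars $\lambda$ and non-zero vectors $x$ such that

$$Ax = \lambda x.$$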

3. How is the eigenvalue problem solved?

The eigenvalue problem is solved by first finding the characteristic polynomial of the matrix, which is then used to find the eigenvalues. The corresponding eigenvectors can then be found by solving a system of linear equations involving the eigenvalues.
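
As a small sketch in Python (numpy; the matrix is illustrative): the characteristic polynomial of the matrix below is $\det(A - tI) = t^2 - 5t + 4 = (t-1)(t-4)$, so its eigenvalues are $1$ and $4$.

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

# numpy computes eigenvalues and eigenvectors directly;
# column i of vecs is an eigenvector for vals[i].
vals, vecs = np.linalg.eig(A)
print(vals)  # 1.0 and 4.0 (in some order)
print(vecs)
```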

4. Why is the eigenvalue problem important?

The eigenvalue problem is important because it allows us to understand the behavior and properties of linear transformations, which are fundamental in many areas of science and engineering. It is also used in various applications such as data compression, image processing, and quantum mechanics.

5. Are there any real-world applications of the eigenvalue problem?

Yes, there are many real-world applications of the eigenvalue problem. For example, in physics and engineering, it is used to analyze the stability and dynamics of systems. In data analysis, it is used for dimension reduction and pattern recognition. In computer graphics, it is used for image processing and animation. It also has applications in chemistry, economics, and statistics.
