MHB Where Can I Find a Solution to Zeidler's Eigenvalue Problem?

Alone
The question is posted in the following post on MSE; I'll copy it here:
https://math.stackexchange.com/ques...problem-from-eberhard-zeidlers-first-volume-o

I have a question from Eberhard Zeidler's book on nonlinear functional analysis, Problem 1.5a. As a reference for this problem he gives Wilkinson's book "The Algebraic Eigenvalue Problem", but he doesn't give a page number, and it's hard to find wording there that exactly matches the question.

Anyway, the question is as follows:
Let $A$ be an $N\times N$ matrix with $N$ linearly independent eigenvectors $x_1,\ldots , x_N$ and corresponding eigenvalues $\lambda_i$, where $|\lambda_1|>|\lambda_2| \ge |\lambda_3| \ge \ldots \ge |\lambda_N|$.
Let $x^{(0)}=\sum_{i=1}^N \alpha_i x_i , \ \alpha_1 \ne 0$, and $x^{(n)}=A^nx^{(0)}/\| A^n x^{(0)}\|_{\infty}$. Show that:
as $n\rightarrow \infty$, $x^{(n)}\rightarrow \frac{x_1}{\|x_1\|_{\infty}}$ and $\|Ax^{(n)}\|_{\infty} \rightarrow \lambda_1$.

It appears on page 40. In the correspondence with the poster on the forum we didn't write out the proof explicitly; he said it's too long. I would still like a full solution to this problem.

Or do you have a reference where this problem is solved?

Thanks.
 
Hi Alan,

Note that for large $n$

$$x^{(n)}\approx\frac{\sum_{i=1}^{N}\alpha_{i}\lambda_{i}^{n}x_{i}}{|\alpha_{1}\lambda_{1}^{n}|\,\|x_{1}\|_{\infty}},$$

which comes from the strict inequality on the eigenvalues: each ratio $|\lambda_{i}/\lambda_{1}|^{n}$ with $i\ge 2$ decays geometrically to zero, so the $i=1$ term dominates. I think if you can work out the details of this hint you can also get the second part of the problem. It is also helpful to note that $A$ is a diagonal matrix when written in terms of the basis $x_{1},\ldots, x_{N}.$ Let me know if you have further questions.
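
To spell out the dominance step (a sketch; for simplicity take everything real, so the only subtlety is the sign of $\lambda_1$): factor out the leading term,

$$A^{n}x^{(0)}=\sum_{i=1}^{N}\alpha_{i}\lambda_{i}^{n}x_{i}=\alpha_{1}\lambda_{1}^{n}\left(x_{1}+\sum_{i=2}^{N}\frac{\alpha_{i}}{\alpha_{1}}\left(\frac{\lambda_{i}}{\lambda_{1}}\right)^{n}x_{i}\right).$$

Since $|\lambda_{i}/\lambda_{1}|<1$ for $i\ge 2$, the bracketed vector tends to $x_{1}$, so $\|A^{n}x^{(0)}\|_{\infty}=|\alpha_{1}\lambda_{1}^{n}|\left(\|x_{1}\|_{\infty}+o(1)\right)$ and

$$x^{(n)}=\frac{A^{n}x^{(0)}}{\|A^{n}x^{(0)}\|_{\infty}}=\frac{\alpha_{1}\lambda_{1}^{n}}{|\alpha_{1}\lambda_{1}^{n}|}\cdot\frac{x_{1}}{\|x_{1}\|_{\infty}}+o(1).$$

When $\alpha_{1}\lambda_{1}^{n}>0$ for all large $n$ the prefactor is $1$ and the claimed limit follows; otherwise the prefactor oscillates in sign, so the limit has to be interpreted with care.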
 
This is called the "power method" for finding a dominant eigenvalue/eigenvector pair. (It is a particular case of the method, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominant eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.
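
For concreteness, here is a minimal NumPy sketch of the power iteration with sup-norm normalization (my own illustration, not Zeidler's or Wilkinson's formulation; the function name, tolerance, and test matrix are arbitrary):

```python
import numpy as np

def power_method(A, x0, max_iter=1000, tol=1e-12):
    """Power iteration: x^(n) = A^n x^(0) / ||A^n x^(0)||_inf.

    Returns a Rayleigh-quotient estimate of the dominant eigenvalue
    together with the normalized iterate.
    """
    x = x0 / np.linalg.norm(x0, ord=np.inf)
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y, ord=np.inf)
        # NOTE: this stopping test implicitly assumes lambda_1 > 0;
        # for negative lambda_1 the iterates flip sign every step.
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            x = x_new
            break
        x = x_new
    # The Rayleigh quotient recovers lambda_1 with its sign, which
    # ||A x^(n)||_inf alone cannot, since a norm is non-negative.
    lam = x @ (A @ x) / (x @ x)
    return lam, x

# Toy example: eigenvalues 3 and 1, dominant eigenvector (1, 1).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam, v)  # ~3.0, ~[1.0, 1.0]
```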
 
Krylov said:
This is called the "power method" for finding a dominant eigenvalue/eigenvector pair. (It is a particular case of the method, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominant eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.

Do you have a specific reference where the proof is covered?
 
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm - i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or has non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
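
A minimal sketch of that variant (my own illustration of the normalization described above, assuming real arithmetic; dividing by the signed entry of largest modulus makes the scale factor itself converge to $\lambda_1$, sign included):

```python
import numpy as np

def power_method_signed(A, x0, max_iter=1000, tol=1e-12):
    """Power iteration normalized by the signed entry of largest modulus.

    Because we divide by the entry itself (not its absolute value),
    the scaling factor nu converges to lambda_1, sign included,
    whereas ||A x^(n)||_inf can only converge to |lambda_1|.
    """
    x = np.asarray(x0, dtype=float)
    nu = 0.0
    for _ in range(max_iter):
        y = A @ x
        nu_new = y[np.argmax(np.abs(y))]  # signed entry of largest modulus
        x = y / nu_new                    # that entry of x becomes exactly 1
        if abs(nu_new - nu) < tol:
            return nu_new, x
        nu = nu_new
    return nu, x

# Dominant eigenvalue is -3 here: ||A x^(n)||_inf -> 3, but nu -> -3.
A = np.array([[-2.0, -1.0], [-1.0, -2.0]])
lam, v = power_method_signed(A, np.array([1.0, 0.0]))
print(lam, v)  # ~-3.0, ~[1.0, 1.0]
```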
 
Krylov said:
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm - i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or has non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i / \max_k \big| \sum_i \alpha_i \lambda_i^n (x_i)_k \big|$, but how do you find the maximum value in the denominator of the last equality?
 
Alan said:
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i / \max_k \big| \sum_i \alpha_i \lambda_i^n (x_i)_k \big|$, but how do you find the maximum value in the denominator of the last equality?
It is not that obvious that you should divide by the maximum norm; see my post #5.
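
In case it helps, here is the asymptotic form of that denominator (a sketch, under the same assumptions as the hint in post #2): for large $n$,

$$\max_{k}\Big|\sum_{i=1}^{N}\alpha_{i}\lambda_{i}^{n}(x_{i})_{k}\Big|=|\alpha_{1}\lambda_{1}^{n}|\left(\max_{k}|(x_{1})_{k}|+o(1)\right),$$

so in a computation one never needs a closed form for the maximizing index; at each step it is read off directly from the current vector $A x^{(n-1)}$.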

I gave it some more thought, and in the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)
 
Krylov said:
It is not that obvious that you should divide by the maximum norm; see my post #5.

I gave it some more thought, and in the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)

It looks like a great reference book; there's also a chapter on the Perron-Frobenius theorem, whose proof I've always wanted to read.

Thanks!
 