Where Can I Find a Solution to Zeidler's Eigenvalue Problem?


Discussion Overview

The discussion revolves around a problem from Eberhard Zeidler's book on Non-Linear Functional Analysis concerning the behavior of eigenvectors and eigenvalues of a matrix as powers of the matrix are applied. Participants seek a solution or references related to the problem, which involves proving the convergence of a normalized vector and the limit of a matrix's action on that vector.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant shares the original problem statement and expresses a desire for a full solution or reference.
  • Another participant suggests a hint involving the expression of the vector in terms of eigenvalues and eigenvectors, indicating that the details could lead to the solution of the second part of the problem.
  • Some participants identify the problem as related to the "power method" for finding the dominant eigenvalue and eigenvector, noting assumptions about diagonalizability and the initial vector's properties.
  • Concerns are raised about the implications of the maximum norm when the dominant eigenvalue is negative or complex, suggesting a need to adapt the problem's formulation.
  • A participant proposes a potential adjustment to the definition of the normalized vector to address issues with the maximum norm.
  • Another participant questions the method of finding the maximum value in the denominator of the normalization process and references a specific example from a linear algebra textbook as a useful resource.

Areas of Agreement / Disagreement

Participants express differing views on the implications of the maximum norm and the conditions under which the problem can be solved. There is no consensus on the correct approach or solution to the problem, and multiple perspectives on the formulation and interpretation of the problem remain.

Contextual Notes

Participants note potential limitations regarding the assumptions about the eigenvalues and the definitions used in the problem, particularly concerning the maximum norm and its behavior with negative or complex eigenvalues.

Alone
The question is posted in the following MSE post; I'll copy it here:
https://math.stackexchange.com/ques...problem-from-eberhard-zeidlers-first-volume-o

I have a question from Eberhard Zeidler's book on Non-Linear Functional Analysis, question 1.5a. As a reference for this question he gives Wilkinson's book "The Algebraic Eigenvalue Problem", but he doesn't give a page number, and it's hard to find the exact wording of this question there.

Anyway, the question is as follows:
Let $A$ be an $N\times N$ matrix with $N$ linearly independent eigenvectors $x_1,\ldots , x_N$ and corresponding eigenvalues $\lambda_i$, where $|\lambda_1|>|\lambda_2| \ge |\lambda_3| \ge \ldots \ge |\lambda_N|$.
Let $x^{(0)}=\sum_{i=1}^N \alpha_i x_i , \ \alpha_1 \ne 0$, and $x^{(n)}=A^nx^{(0)}/\| A^n x^{(0)}\|_{\infty}$. Show that:
as $n\rightarrow \infty$, $x^{(n)}\rightarrow \frac{x_1}{\|x_1\|_{\infty}}$ and $\|Ax^{(n)}\|_{\infty} \rightarrow \lambda_1$ .

It appears on page 40. In my correspondence with the poster on MSE, the proof was never written out explicitly; he said it's too long. I would still like a full solution to this problem.

Or, alternatively, do you have a reference that covers this problem?

Thanks.
 
Hi Alan,

Note that for large $n$,

$$x^{(n)}\approx\frac{\sum_{i=1}^{N}\alpha_{i}\lambda_{i}^{n}x_{i}}{|\alpha_{1}\lambda_{1}^{n}|\,\|x_{1}\|_{\infty}},$$

which comes from the inequality on the eigenvalues and the fact that $|\lambda_i/\lambda_1|^n \to 0$ for $i \ge 2$. I think if you can work out the details of this hint you can also get the second part of the problem. It is also helpful to note that $A$ is a diagonal matrix when written in terms of the basis $x_{1},\ldots, x_{N}.$ Let me know if you have further questions.
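As a numerical sanity check of this hint (a sketch with a made-up $2\times 2$ matrix, not an example from Zeidler), one can compare $A^n x^{(0)}$ with its dominant term $\alpha_1 \lambda_1^n x_1$:

```python
import numpy as np

# Hypothetical 2x2 example: A^n x0 should be dominated by
# alpha_1 * lambda_1^n * x_1 for large n.
A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
x0 = np.array([1.0, 0.0])

lams, X = np.linalg.eig(A)               # columns of X are eigenvectors
order = np.argsort(-np.abs(lams))        # sort by decreasing modulus
lams, X = lams[order], X[:, order]
alphas = np.linalg.solve(X, x0)          # x0 = sum_i alphas[i] * X[:, i]

n = 30
full = np.linalg.matrix_power(A, n) @ x0
dominant = alphas[0] * lams[0] ** n * X[:, 0]
rel_err = np.linalg.norm(full - dominant) / np.linalg.norm(full)
print(rel_err)   # on the order of (2/5)^30, i.e. roughly 1e-12
```

The relative error shrinks like $|\lambda_2/\lambda_1|^n$, which is exactly what makes the normalized iterates converge to a multiple of $x_1$.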
 
This is called the "power method" for finding a dominating eigenvalue - eigenvector pair. (It is a particular case, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominating eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.
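For concreteness, here is a minimal sketch of the power method as described above (the $2\times 2$ matrix, starting vector, and iteration count are made up for illustration):

```python
import numpy as np

def power_method(A, x0, iters=100):
    """Power iteration sketch: x <- A x / ||A x||_inf.

    Assumes A is diagonalizable with a single eigenvalue of largest
    modulus, and that x0 has a nonzero component along its eigenvector.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y, np.inf)
    return x, np.linalg.norm(A @ x, np.inf)

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
x, lam = power_method(A, [1.0, 0.0])
print(x, lam)   # x -> (1, 1), lam -> 5
```

Note that this returns $\|Ax^{(n)}\|_\infty$, which is real and non-negative, so it can only recover $|\lambda_1|$, not a signed or complex $\lambda_1$.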
 
Krylov said:
This is called the "power method" for finding a dominating eigenvalue - eigenvector pair. (It is a particular case, because Zeidler apparently assumes diagonalizability.) Note that the initial vector is assumed to have a non-zero component in the subspace associated with the dominating eigenvalue. I think you should be able to find it under this name in most numerical linear algebra textbooks.

Do you have a specific reference where the proof is covered?
 
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm - i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or has non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
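A sketch of that adjusted normalization (with a hypothetical example matrix whose dominant eigenvalue is negative; not taken from Zeidler or Wilkinson): dividing by the signed component of largest modulus, rather than by the norm, makes the scaling factor converge to $\lambda_1$ itself, sign included.

```python
import numpy as np

def power_method_signed(A, x0, iters=200):
    """Normalize by the (signed) component of A^n x0 with the largest
    modulus, so the scaling factor converges to lambda_1 with its sign."""
    x = np.asarray(x0, dtype=float)
    c = 0.0
    for _ in range(iters):
        y = A @ x
        c = y[np.argmax(np.abs(y))]   # signed largest-modulus component
        x = y / c                     # largest-modulus entry of x is now 1
    return x, c                       # c -> lambda_1 at convergence

A = np.array([[0.0, 2.0], [3.0, -1.0]])   # eigenvalues 2 and -3
x, lam = power_method_signed(A, [1.0, 0.0])
print(lam)   # -> -3.0, sign included, unlike the max-norm version
```

With the max norm instead, the iterates would flip sign at every step and the scaling factors would converge to $|{-3}| = 3$, illustrating the issue raised above.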
 
Krylov said:
To be sure, I have looked this up in Zeidler and you copied the formulation of the question correctly.

Something strange is going on. I assume that $\|\cdot\|_{\infty}$ is the maximum norm - i.e. the maximum of the absolute values of the components of the vector argument? In that case, what happens when $\lambda_1 < 0$ or has non-zero imaginary part? It can never happen that $\|Ax^{(n)}\|_{\infty} \to \lambda_1$ as $n \to \infty$, because the norm is real and non-negative.

So, it seems we need to slightly adapt the formulation of the question. Still, this is related to the power method and the hint in post #2 is good.

EDIT: I think it may work when we start by defining $x^{(n)}$ as $A^n x^{(0)}$ divided by the component of $A^n x^{(0)}$ with the largest modulus.
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i/\max |\sum_i \alpha_i \lambda_i^n x_i|$, but how do you find the maximum value in the denominator of the last equality?
 
Alan said:
Obviously $A^n x^{(0)} = \sum_i \alpha_i \lambda_i^n x_i$, so $x^{(n)} = \sum_i \alpha_i \lambda_i^n x_i/\max |\sum_i \alpha_i \lambda_i^n x_i|$, but how do you find the maximum value in the denominator of the last equality?
It is not that obvious that you should divide by the maximum norm, see my post #5.

I gave it some more thought and at the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)
 
Krylov said:
It is not that obvious that you should divide by the maximum norm, see my post #5.

I gave it some more thought and at the end I think the best discussion is in Example 7.3.7 of C.D. Meyer's Matrix Analysis and Applied Linear Algebra, SIAM, 2000. (This is also my favorite book on linear algebra.)

Looks like a great reference book; there's also a chapter on the Perron-Frobenius theorem, whose proof I have always wanted to read.

Thanks!
 