Rayleigh Quotient: Finding 2nd Eigenvalue & Vector


Homework Help Overview

The problem involves finding the second eigenvalue and its corresponding eigenvector of a symmetric n x n matrix using the Rayleigh quotient. The original poster discusses the minimization of the Rayleigh coefficient under a constraint that excludes the first eigenvector.

Discussion Character

  • Exploratory, Assumption checking, Mathematical reasoning

Approaches and Questions Raised

  • Participants explore the use of Lagrange multipliers for the minimization problem and question the implications of the orthogonality of eigenvectors. There is discussion about substituting the expanded form of the vector x and minimizing over the coefficients of the remaining eigenvectors.

Discussion Status

Participants are actively engaging with the problem, with some suggesting that the approach is similar to the first part of the exercise. There is a recognition of the need to prove that the second eigenvalue is the minimum, and some guidance is offered regarding the use of Lagrange multipliers.

Contextual Notes

There is an emphasis on the constraint that the vector x must be orthogonal to the first eigenvector, which shapes the approach to the minimization problem. The discussion reflects uncertainty about the application of mathematical techniques and the implications of the constraints involved.

dirk_mec1

Homework Statement


Let A be a symmetric n x n matrix with eigenvalues and orthonormal eigenvectors (\lambda_k, \xi_k); assume the ordering \lambda_1 \leq \dots \leq \lambda_n.

We define the Rayleigh quotient as:

R(x) = \frac{(Ax)^T x}{x^T x}

Show that the following constrained problem produces the second eigenvalue and its eigenvector:

\min \left\{ R(x) \mid x \neq 0,\ x \cdot \xi_1 = 0 \right\}
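Before attempting the proof, a numerical sanity check of the claim may help build intuition (this is only a sketch with a hand-picked matrix, not the requested proof). A diagonal matrix is symmetric, with eigenvalues 1 <= 2 <= 3 and the standard basis vectors as orthonormal eigenvectors, so vectors orthogonal to \xi_1 = e_1 have the form (0, b, c):

```python
# Sanity check: for A = diag(1, 2, 3), minimizing the Rayleigh quotient
# over nonzero vectors orthogonal to the first eigenvector e1 should
# give the second eigenvalue, 2.
A = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]

def rayleigh(x):
    """R(x) = (Ax)^T x / (x^T x)."""
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    num = sum(Ax[i] * x[i] for i in range(3))
    den = sum(x[i] * x[i] for i in range(3))
    return num / den

# Vectors orthogonal to xi_1 = e1 look like (0, b, c).
# Scan a coarse grid of (b, c) and record the minimum of R.
best = min(rayleigh([0.0, b, c])
           for b in [i / 10 for i in range(-10, 11)]
           for c in [i / 10 for i in range(-10, 11)]
           if b != 0 or c != 0)

print(best)  # -> 2.0, the second eigenvalue, attained at multiples of e2
```

On this constrained set R(x) = (2b^2 + 3c^2)/(b^2 + c^2), which is at least 2 and equals 2 exactly when c = 0, i.e. when x is a multiple of \xi_2.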

The Attempt at a Solution



In the first part of the exercise I was asked to prove that (without the inner product constraint) the minimization produces the first eigenvalue. The idea was to use Lagrange multipliers, but I don't see how to use them here.

Do I need to use Lagrange multipliers?
 
Dick
Not really. The dot product condition tells you that x ranges over linear combinations c_i*xi_i with c_1=0. It's just the same as the first problem with the first eigenvector thrown out.
 
Dick said:
Not really. The dot product condition tells you that x is ranging over linear combinations c_i*xi_i with c_1=0.
So, if I understand correctly, the eigenvectors are orthogonal to each other, right?

and so:

x = c_2 \xi_2 + \dots + c_n \xi_n
It's just the same as the first problem with the first eigenvector thrown out.
So I just substitute the above expanded x?
 
Yes, and minimize over c2,...,cn.
 
Dick said:
Yes, and minimize over c2,...,cn.
I get this:


\frac{\sum_{i=2}^n c_i^2 \lambda_i}{\sum_{i=2}^n c_i^2}

but how do I prove that \lambda_2 is the minimum? I've tried setting the partial derivatives to zero and failed.
 
"In the first part of the exercise I was asked to prove that (without the inner product constraint) the minimization produces the first eigenvalue. The idea was to use Lagrange multipliers, but I don't see how to use them here." I thought that meant you proved the first part using Lagrange multipliers. Did you skip that part? Because what you have now looks almost exactly like the first part. If you want to spell out a repetition of the proof of the first part, then yes, use Lagrange multipliers.
 
Dick said:
I thought that meant that you proved the first part using lagrange multipliers. Did you skip that part?
No, I didn't skip it, but there I showed that the minimizer must be an (orthogonal) eigenvector, and upon substitution I get \min_i \lambda_i, from which the first eigenvalue results.

Because what you have now looks almost exactly like the first part. If you want to spell out a repetition of the proof of the first part, yes, use lagrange multipliers.
With the constraint that the vector c has length one? So the problem is to minimize:

\frac{\sum_{i=2}^n \lambda_i c_i^2}{c^T c} over the vector c = (c_2, \dots, c_n).
 
Yes, it's the same as the first problem. Just one dimension lower.
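For completeness, a short direct bound (not spelled out in the thread) also settles why \lambda_2 is the minimum, without Lagrange multipliers: since \lambda_i \geq \lambda_2 for every i \geq 2,

R(x) = \frac{\sum_{i=2}^n c_i^2 \lambda_i}{\sum_{i=2}^n c_i^2} \geq \frac{\sum_{i=2}^n c_i^2 \lambda_2}{\sum_{i=2}^n c_i^2} = \lambda_2,

with equality when c_2 \neq 0 and c_i = 0 for all i > 2, i.e. when x is a nonzero multiple of \xi_2.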
 
