Rayleigh Quotient: Finding 2nd Eigenvalue & Vector

dirk_mec1

Homework Statement


Let A be a symmetric n \times n matrix with eigenvalue-eigenvector pairs (\lambda_k, \xi_k), where the eigenvectors are orthonormal; assume the ordering \lambda_1 \leq \dots \leq \lambda_n.

We define the Rayleigh quotient as:

<br /> R(x) = \frac{(Ax)^T x}{x^T x} <br />Show that the following constrained problem produces the second eigenvalue and its eigenvector:

<br /> min \left( R(X)| x \neq 0, x \bullet \xi_1 = 0 \right) <br />

The Attempt at a Solution



In the first part of the exercise I was asked to prove that (without the inner product being zero) the minimization produces the first eigenvalue. The idea was to use Lagrange multipliers, but I don't know how to use them here.

Do I need to use Lagrange multipliers?
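
As a sanity check of the claim itself, here is a minimal NumPy sketch (the matrix is an arbitrary random symmetric one, and the helper R and names like xi1 are just illustrative, not part of the exercise):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # any symmetric test matrix

lam, Q = np.linalg.eigh(A)             # eigenvalues ascending, orthonormal columns
xi1, xi2 = Q[:, 0], Q[:, 1]

def R(x):
    # Rayleigh quotient R(x) = (Ax)^T x / (x^T x)
    return (A @ x) @ x / (x @ x)

# xi2 is feasible (orthogonal to xi1) and attains lambda_2:
assert abs(xi2 @ xi1) < 1e-12 and abs(R(xi2) - lam[1]) < 1e-12

# every feasible x (projected to be orthogonal to xi1) gives R(x) >= lambda_2:
for _ in range(1000):
    x = rng.standard_normal(5)
    x -= (x @ xi1) * xi1               # enforce the constraint x . xi1 = 0
    assert R(x) >= lam[1] - 1e-12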
 
Not really. The dot product condition tells you that x ranges over linear combinations \sum_i c_i \xi_i with c_1 = 0. It's just the same as the first problem with the first eigenvector thrown out.
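
To spell out why the constraint kills the first coefficient: writing x = \sum_{i=1}^n c_i \xi_i and using orthonormality,

x \cdot \xi_1 = \sum_{i=1}^n c_i (\xi_i \cdot \xi_1) = c_1,

so x \cdot \xi_1 = 0 is exactly the condition c_1 = 0.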
 
Dick said:
Not really. The dot product condition tells you that x ranges over linear combinations \sum_i c_i \xi_i with c_1 = 0.
So if I understand correctly, the eigenvectors are orthogonal to each other, right?

and so:

x = c_2 \xi_2 + \dots + c_n \xi_n
It's just the same as the first problem with the first eigenvector thrown out.
So I just substitute the above expanded x?
 
Yes, and minimize over c_2, \dots, c_n.
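
Carrying out that substitution, using A\xi_i = \lambda_i \xi_i and \xi_i \cdot \xi_j = \delta_{ij}:

(Ax)^T x = \left( \sum_{i=2}^n c_i \lambda_i \xi_i \right) \cdot \left( \sum_{j=2}^n c_j \xi_j \right) = \sum_{i=2}^n \lambda_i c_i^2, \qquad x^T x = \sum_{i=2}^n c_i^2,

which gives the quotient in the next post.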
 
Dick said:
Yes, and minimize over c_2, \dots, c_n.
I get this:


<br /> \frac{\sum_{2=1}^n c_i^2\lambda_i}{\sum_{i=2}^n c_i^2} <br />

but how do I prove that \lambda_2 is the minimum? I've tried setting the partial derivatives to zero and failed.
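
One direct way to see it (a sketch that avoids Lagrange multipliers): since \lambda_i \geq \lambda_2 for all i \geq 2,

\frac{\sum_{i=2}^n c_i^2 \lambda_i}{\sum_{i=2}^n c_i^2} - \lambda_2 = \frac{\sum_{i=2}^n c_i^2 (\lambda_i - \lambda_2)}{\sum_{i=2}^n c_i^2} \geq 0,

with equality when c_i = 0 for every i with \lambda_i > \lambda_2; in particular at x = \xi_2. So the constrained minimum is \lambda_2, attained at the second eigenvector.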
 
"In the first part of the exercise I was asked to proof that (without that inproduct being zero) the minimalisation produces the first eigenvalue. The idea was to use lagrange multipliers but I don't how to use it here." I thought that meant that you proved the first part using lagrange multipliers. Did you skip that part? Because what you have now looks almost exactly like the first part. If you want to spell out a repetition of the proof of the first part, yes, use lagrange multipliers.
 
Dick said:
I thought that meant that you proved the first part using Lagrange multipliers. Did you skip that part?
No, I didn't skip it, but there I showed that the minimizer should be an (orthogonal) eigenvector, and upon substitution I get \min_i(\lambda_i), from which the first eigenvalue results.

Because what you have now looks almost exactly like the first part. If you want to spell out a repetition of the proof of the first part, yes, use Lagrange multipliers.
With the constraint that the length of the vector c is one? So the problem is to minimize

\sum_{i=2}^n \lambda_i c_i^2 \quad \text{subject to} \quad c^T c = 1,

with c the vector of coefficients.
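
Spelling that Lagrange computation out as a sketch: the Lagrangian is

L(c, \mu) = \sum_{i=2}^n \lambda_i c_i^2 - \mu \left( \sum_{i=2}^n c_i^2 - 1 \right),

and \partial L / \partial c_i = 2(\lambda_i - \mu) c_i = 0 forces c_i = 0 unless \lambda_i = \mu. At any stationary point the objective equals \mu \in \{\lambda_2, \dots, \lambda_n\}, and the smallest of these is \lambda_2.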
 
Yes, it's the same as the first problem. Just one dimension lower.
 