Scalar Equations: How to solve them

abhimanipal
Hi,
My name is Abhishek Agrawal. I am a grad student in Computer Science. I know very little about scalar equations, but my research has led me to them and I am at my wits' end about how to go about solving them. I have looked everywhere but am unable to get started.

Homework Statement



http://en.wikipedia.org/wiki/Divide-and-conquer_eigenvalue_algorithm

This is a Wikipedia link on how to use the divide-and-conquer algorithm to find the eigenvalues of a matrix. I understood most of it, but towards the end of the article the author ends up with this equation

1 + w^{T} (D - \lambda I)^{-1} w = 0

which is a scalar equation. He then transforms it into another form (the formatting came out badly in this post; it is better viewed on the Wikipedia page):

1 + \sum_{j=1}^{m} \frac{w_{j}^{2}}{d_{j} - \lambda} = 0.


What I cannot understand is how the summation is applied. How does the author get the w_{j} and d_{j} terms from the matrix? Is he using an entire row, or is there something I am missing?

I would appreciate it if someone could give me some links about what I am missing, or could point me to some tutorials. Any help would be appreciated.
 
Hint: if you wrap the TeX code in tex tags (type [tex] before the code and [/tex] after it), you will get nice formatting, like so:
1 + \sum_{j=1}^{m} \frac{w_{j}^{2}}{d_{j} - \lambda} = 0.

Answer to your question: note that D is a diagonal matrix. Therefore D - λI is a diagonal matrix as well; it looks like
D - \lambda I = \begin{pmatrix} d_1 - \lambda & & 0 \\ & \ddots & \\ 0 & & d_m - \lambda \end{pmatrix}

The inverse of this matrix is then simply
(D - \lambda I)^{-1} = \begin{pmatrix} 1/(d_1 - \lambda) & & 0 \\ & \ddots & \\ 0 & & 1/(d_m - \lambda) \end{pmatrix}

If you now work out w^{T} (D - \lambda I)^{-1} w explicitly for a vector w = (w_1, \ldots, w_m), each component w_j is multiplied by 1/(d_j - \lambda), so the product is \sum_{j=1}^{m} w_j^2 / (d_j - \lambda) and you recover the second expression.
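To see this concretely, here is a minimal NumPy sketch (the numbers are made up for illustration and do not come from the thread) that evaluates both forms of the secular function and checks that they agree:

[code]
import numpy as np

# Hypothetical example data: d holds the diagonal entries d_1, ..., d_m of D,
# and w is the update vector from the divide-and-conquer split.
d = np.array([1.0, 3.0, 5.0])
w = np.array([0.5, -1.0, 2.0])
lam = 2.2  # a trial value of lambda, chosen away from the d_j

# Matrix form: 1 + w^T (D - lambda I)^{-1} w
D = np.diag(d)
matrix_form = 1.0 + w @ np.linalg.inv(D - lam * np.eye(len(d))) @ w

# Summation form: 1 + sum_j w_j^2 / (d_j - lambda)
sum_form = 1.0 + np.sum(w**2 / (d - lam))

print(matrix_form, sum_form)  # the two values coincide
[/code]

Because D - λI is diagonal, no matrix inverse is ever needed in practice: dividing each w_j^2 by (d_j - λ) componentwise is the entire computation.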
 
Thank you for your quick reply. I need one more clarification.
When the author writes
<br /> 1 + \sum_{j=1}^{m} \frac{w_{j}^{2}}{d_{j} - \lambda} = 0. <br />

in this equation, does d_{j} mean the diagonal entry d_{j,j}?
 