Calculating Square Roots - Matrix Algorithm

AI Thread Summary
A matrix algorithm for calculating square roots is discussed, where an initial approximation of √n is refined through iterative matrix multiplication. The method involves using a matrix to create a new approximation, which can be repeated for increased accuracy. The convergence of this method is linked to the stability of fixed points, with the derivative's absolute value at the fixed point indicating stability. The discussion also touches on the eigenvalues of the matrix, suggesting that repeated multiplication leads to convergence towards the eigenspace associated with the largest eigenvalue. The mathematical principles behind this method highlight its effectiveness for approximating square roots.
danago
Hi. I only recently found out about an algorithm for calculating the square root of a number.

Let's say I want to evaluate \sqrt{n}. I can make an approximation by inspection and say \sqrt{n} \approx \frac{a}{b}. Now, using this approximation, I can write:

\left[ \begin{array}{cc} 1 & n \\ 1 & 1 \end{array} \right] \left[ \begin{array}{c} a \\ b \end{array} \right] = \left[ \begin{array}{c} a + bn \\ a + b \end{array} \right]

Treating the resulting vector as a fraction \frac{a + bn}{a + b}, I have a better approximation of \sqrt{n}. If I keep repeating this method with the new approximation, over and over again, I get a more accurate answer. So the next step would be:

\left[ \begin{array}{cc} 1 & n \\ 1 & 1 \end{array} \right] \left[ \begin{array}{c} a + bn \\ a + b \end{array} \right] = \left[ \begin{array}{c} a + an + 2bn \\ 2a + b + bn \end{array} \right]

And \frac{a + an + 2bn}{2a + b + bn} would be an even better approximation to \sqrt{n}.

Even if the starting approximation is way off, after enough iterations the answer will still be accurate.
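
As a concrete illustration (a minimal Python sketch, assuming exact integer arithmetic for a and b), each step just multiplies the vector (a, b) by the matrix and reads off the new fraction:

```python
def matrix_sqrt(n, a=1, b=1, iterations=20):
    """Approximate sqrt(n) by repeatedly applying [[1, n], [1, 1]] to (a, b)."""
    for _ in range(iterations):
        # One matrix-vector multiplication: (a, b) -> (a + b*n, a + b)
        a, b = a + b * n, a + b
    return a / b

print(matrix_sqrt(2))   # approximately 1.41421356...
print(matrix_sqrt(10))  # approximately 3.1622776...
```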


I'm just wondering: why does this method work? Is there a name for it, or anywhere I can look to find more information?

Thanks,
Dan.
 
That sure seems like a complex method.

A Newton's method root finder gives a very simple iteration:

x_{n+1} = \frac 1 2 (x_n + \frac A {x_n})

So A is the number whose root you are computing, and x_0 is your initial guess; it doesn't have to be very good. Compute x_1 with the formula, then continue.

This procedure converges quadratically.
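
A minimal Python sketch of that iteration (assuming double precision is the target and a handful of steps is plenty):

```python
def newton_sqrt(A, x0=1.0, iterations=10):
    """Newton's method for sqrt(A): x_{n+1} = (x_n + A/x_n) / 2."""
    x = x0
    for _ in range(iterations):
        # Each step roughly doubles the number of correct digits.
        x = 0.5 * (x + A / x)
    return x

print(newton_sqrt(2.0))    # 1.4142135623730951
print(newton_sqrt(10.0))   # 3.1622776601683795
```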
 
Hmm, that method does seem good. I'm still interested in the matrix method though, not as a way to actually calculate square roots, but as a mathematical phenomenon. I'm just curious as to why it works.

Of course, I'm still always going to resort to my calculator to evaluate them. :)
 
Well, you are essentially taking an initial value of x and then iterating through
x_{i+1}=\frac{x_{i}+n}{x_{i}+1}
So, let
f(x)=\frac{x+n}{x+1}
The fixed point of f, where
f(x)=x,
is \sqrt{n}.

You can check derivatives and find out when this fixed point is asymptotically stable.
 
tim_lou said:
Well, you are essentially taking an initial value of x and then iterating through
x_{i+1}=\frac{x_{i}+n}{x_{i}+1}
So, let
f(x)=\frac{x+n}{x+1}
The fixed point of f, where
f(x)=x,
is \sqrt{n}.

You can check derivatives and find out when this fixed point is asymptotically stable.

Yep, I understand that I am essentially iterating through x_{i+1}=\frac{x_{i}+n}{x_{i}+1}, but what do you mean when you say 'fixed point'?
 
I just researched it on Wikipedia, and now I understand what a fixed point is.

\begin{array}{l}
f(x) = \frac{x + n}{x + 1} \\
\frac{x + n}{x + 1} = x \\
x^2 + x = x + n \\
x^2 = n \\
x = \sqrt{n}
\end{array}
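
A quick Python sketch of that scalar iteration, just to watch the iterates settle on \sqrt{n} (the starting guess is arbitrary):

```python
def fixed_point_sqrt(n, x0=1.0, iterations=15):
    """Iterate f(x) = (x + n)/(x + 1); the fixed point f(x) = x is sqrt(n)."""
    x = x0
    for i in range(iterations):
        x = (x + n) / (x + 1)
        print(i, x)          # watch the iterates approach sqrt(n)
    return x

fixed_point_sqrt(2.0)        # iterates tend to 1.41421356...
```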

That pretty much exactly answers my question. Thanks very much for that.
 
Hehe, if you want to dig deeper: do you know why this works for all n?

Well, the reason is that the absolute value of f'(x) at the fixed point is less than 1 for all n except n = 0. There is a theorem (from my difference equations book) that says:

If x* is a fixed point of f and f is continuously differentiable at x*, then
(1) if |f'(x*)| < 1, then x* is asymptotically stable;
(2) if |f'(x*)| > 1, then x* is unstable.

So in your case, x* is asymptotically stable.

What if |f'(x*)| = 1? Then you need to do a lot more tests... if you want to know more, go to the library and pick up a difference equations book.
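
A short Python check of that condition: for f(x) = (x + n)/(x + 1), f'(x) = (1 - n)/(x + 1)^2, and at x* = \sqrt{n} this simplifies to (1 - \sqrt{n})/(1 + \sqrt{n}), whose absolute value stays below 1 for every n > 0:

```python
import math

def derivative_at_fixed_point(n):
    """f'(x*) for f(x) = (x + n)/(x + 1), evaluated at x* = sqrt(n)."""
    x_star = math.sqrt(n)
    return (1 - n) / (x_star + 1) ** 2

for n in (0.01, 0.5, 2.0, 10.0, 1e6):
    print(n, abs(derivative_at_fixed_point(n)))  # always strictly less than 1
```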
 
The analysis is a lot easier in the case of multiplying by a matrix. :smile:

The eigenvalues of the matrix

\left[ \begin{array}{cc} 1 & n \\ 1 & 1 \end{array} \right]

are

1 \pm \sqrt{n}

If you take (almost) any vector and repeatedly multiply by the matrix A, the result will approach the eigenspace associated with the eigenvalue of largest magnitude. Here, that's 1 + \sqrt{n}.


Given the rest of the discussion, I expect that the eigenspace associated with the eigenvalue 1 + \sqrt{n} is the set of all vectors of the form:

a \left[ \begin{array}{c} \sqrt{n} \\ 1 \end{array} \right]

In fact, I would go so far as to expect A to factor as:

\left[ \begin{array}{cc} 1 & n \\ 1 & 1 \end{array} \right] = \left[ \begin{array}{cc} \sqrt{n} & -\sqrt{n} \\ 1 & 1 \end{array} \right] \left[ \begin{array}{cc} 1 + \sqrt{n} & 0 \\ 0 & 1 - \sqrt{n} \end{array} \right] \left[ \begin{array}{cc} \sqrt{n} & -\sqrt{n} \\ 1 & 1 \end{array} \right]^{-1}
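
A small numerical sketch (assuming numpy is available) that checks the eigenvalues and watches the ratio of the iterated vector approach \sqrt{n}:

```python
import numpy as np

n = 7.0
A = np.array([[1.0, n],
              [1.0, 1.0]])

# The eigenvalues should come out as 1 + sqrt(7) and 1 - sqrt(7).
print(np.linalg.eigvals(A), 1 + np.sqrt(n), 1 - np.sqrt(n))

# Repeatedly multiplying any starting vector by A pushes it toward the
# dominant eigenvector (sqrt(n), 1), so the ratio of components tends to sqrt(n).
v = np.array([1.0, 1.0])
for _ in range(25):
    v = A @ v
print(v[0] / v[1], np.sqrt(n))
```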
 
