Recognitions:
Gold Member

## Challenge: can you take sqrt(n) using only one operation?

Here is how one takes a square root using only addition: the square root of 200 is reached after 14 sums of odd numbers, 1+3+5+7+9+11+13+15+17+19+21+23+25+27 = 196; the square root of 20000 takes 141 sums, of 2000000, 1414 sums, etc. Not to say it's efficient, but it works with only addition.
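The odd-sum counting idea above can be sketched in a few lines of Python (the function name is mine; it returns the integer part of the root, i.e. the number of additions performed):

```python
def sqrt_by_addition(n):
    """Count how many odd numbers 1, 3, 5, ... can be added before the
    running sum exceeds n; that count is the integer square root of n."""
    total, odd, count = 0, 1, 0
    while total + odd <= n:
        total += odd      # only addition is used on the running sum
        odd += 2
        count += 1
    return count
```

This relies on the identity 1 + 3 + ... + (2k−1) = k², so the count stops exactly at the integer square root.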

 Quote by coolul007 .... Not to say it's efficient, but it will work with only addition.
Conversely, one may start with n (= x²) and use only subtraction. If 215² = 46225:

46225 − 1, − 3, − 5, ... etc. It would take (x =) 215 operations, using this elementary logic.

But subtraction can be more efficient, as it allows one to improve the basic, simple algorithm. Remember that, following the B (or N) scheme (see wiki-link above), you may find a suitable x0 by performing micro-operations in your head. If you guess the first digit and the number of digit pairs (4'62'25 $\rightarrow$ 2 | 0 | 0), you may start by subtracting 4'00'00: 46225 − 40000 = 6225, and then continue subtracting (200 × 2 + 1 =) 401, etc.:

(n − 40000 =) 6225 − 401, − 403, − 405, ... etc. That would take only 15 + 1 = 16 operations.
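The shortcut just described can be sketched as follows (a rough Python rendering of the worked 46225 example, not the poster's own code; the function name is mine). The first guess removes (100h)² in one subtraction, then the odd numbers continue from 2·(100h) + 1:

```python
def sqrt_by_chunked_subtraction(n):
    """Guess the leading digit h (largest h with (100h)^2 <= n), subtract
    (100h)^2 in one step, then subtract successive odd numbers starting
    at 2*(100h) + 1.  Returns (root, number of subtractions)."""
    h = 0
    while (100 * (h + 1)) ** 2 <= n:
        h += 1                    # the "guess the first digit" step
    r = 100 * h
    n -= r * r                    # one big subtraction: 46225 - 40000 = 6225
    ops = 1
    odd = 2 * r + 1               # next odd term: 200*2 + 1 = 401
    while n >= odd:
        n -= odd                  # 6225 - 401, - 403, - 405, ...
        odd += 2
        r += 1
        ops += 1
    return r, ops
```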

But, of course, it takes a more ingenious method, a more sophisticated logic, to outwit the Newton and Babylonian schemes; the weighted mean is simple but powerful.

 I am no expert here at all, but is there a reason why no one posted this algorithm on square roots: http://www.homeschoolmath.net/teachi...-why-works.php ? It appears to use the least number of operations that the OP was asking for. It was taught in elementary school in the days before calculators.
 Mentor I was taught this algorithm back when I was in about the 7th grade or so (in the 50s).

 Quote by Altrepair I am no expert here at all, but is there a reason why no one posted this algorithm on square roots: http://www.homeschoolmath.net/teachi...-why-works.php ? It appears to use the least number of operations that the OP was asking for. It was taught in elementary school in the days before calculators.

This is exactly the same algorithm I posted here. I think it is likely the easiest one to use with pencil and paper.

DonAntonio

 Recognitions: Gold Member I don't know if this is considered "Newton": let a = x² + 2x + 1, then the square root of a is x + 1. Doing some manipulation we are left with x = (a − 1)/(x + 2). Seed x with something close, or just with a itself, and remember to add 1 for the result. In 10 iterations, for a = 2, this gives 1.414213584 (calculator: 1.414213562); in 13 iterations it is the same as the calculator.
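A minimal Python sketch of that iteration (seeding x with a, as suggested; the function name is mine). The fixed point of x → (a − 1)/(x + 2) is √a − 1, so adding 1 back recovers the root:

```python
def sqrt_shifted(a, iterations=13):
    """Iterate x -> (a - 1)/(x + 2), whose fixed point is sqrt(a) - 1,
    then add 1 back to recover sqrt(a)."""
    x = a                          # seed with a itself
    for _ in range(iterations):
        x = (a - 1) / (x + 2)
    return x + 1
```

Convergence is linear (error shrinks by roughly (√a − 1)/(√a + 1)² per step for this a), which matches the 10-to-13-iteration counts reported above.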

 Quote by Mark44 I was taught this algorithm back when I was in about the 7th grade or so (in the 50s).
I, too, was taught the long-division (LD) method way back in '56; using only micro-operations, we had fun taking square/cube roots without pen and paper.

LD is surely better than N: that isn't saying much, since N can be catastrophic (266 ops.!) if you choose the wrong x0. The weak point of LD is that the number of operations (*Nt=o) depends on individual skill (and luck), as it includes "trial-and-error" (the great advantage is that it requires only 3/4-digit operations).

Mark44, you are the greatest expert on algorithms here: could you tell Altrepair whether he can put that algorithm (LD) on a pocket calculator? What do you think of Newton's; could it be used more efficiently, skipping useless iterations?

* The OP appeared confusing to a hasty reader because N[umber of operations] means 4 different things:

a [square] root is found with Nt ops., iterating Ni times an algorithm that requires No ops. using N+ different op-signs: Nt = Ni × No.
(In the OP:) if No = 1 $\rightarrow$ (N+ = 1, Nt = Ni); in the title: N+ = 1 $\rightarrow$ No = 1.

EDIT: $\sqrt{354.045^2}$: (N) N+ = 3, Nt = 20 (5 × 4); (B) N+ = 3, Nt = 15 (5 × 3); (LD, if Ni = 1 $\rightarrow$ Nt = No) N+ = 2, Nt=o = ??

Mentor
 Quote by logics I, too, was taught the long-division (LD) method way back in '56; using only micro-operations, we had fun taking square/cube roots without pen and paper. LD is surely better than N: that isn't saying much, since N can be catastrophic (266 ops.!) if you choose the wrong x0. The weak point is that the number of operations *Nt=o depends on individual skill (and luck), as it includes "trial-and-error". Mark44, you are the greatest expert on algorithms here: could you tell Altrepair whether he can put that algorithm (LD) on a pocket calculator? What do you think of Newton's; could it be used more efficiently, skipping useless iterations?
I wouldn't consider myself to be an expert at algorithms, let alone the "greatest" expert. As far as programming a (programmable) calculator, I'm pretty sure it could be done. If you can write down an algorithm, someone can come along and implement the algorithm in some programming language.

 Quote by logics * The OP appeared confusing to a hasty reader because N[umber of operations] means 4 different things: a [square]root is found with Nt ops, iterating Ni times an algorithm that requires No operations and N+ different op-signs If No = 1 $\rightarrow$ N+ = 1 If (LD) Ni = 1 $\rightarrow$ Nt = No

 Quote by Mark44 As far as programming a (programmable) calculator, I'm pretty sure it could be done... and implement the algorithm in some programming language.
I'm not sure that applies to my old 'programmable' (£10) Sharp EL-509, which has no programming language. It has a 12²-place buffer where I can write formulas and ANS. I can put N (B) there, because it's a simple formula, changing x $\rightarrow$ ANS: {(ANS²+a) : 2ANS}, and press Enter 5 times. But is this an algorithm, just because of the ANS feedback? I can't put LD in my calculator (how do I deal with 'trial-and-error'?); I'd like to learn how to do that without a real algorithm. This problem has been bugging me; I wonder if anyone can kindly help, as:
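What the calculator's ANS feedback is doing can be sketched in Python (the function name and the "presses" framing are mine; each Enter press re-evaluates the stored formula):

```python
def babylonian(a, x0, presses=5):
    """Replay the calculator trick: ANS -> (ANS**2 + a) / (2 * ANS),
    which is the Babylonian/Newton step for sqrt(a)."""
    ans = x0
    for _ in range(presses):          # each press of Enter
        ans = (ans * ans + a) / (2 * ans)
    return ans
```

Since each step roughly doubles the number of correct digits, five presses from any reasonable seed already exhausts double precision for small a.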

I used that term in the OP only because I read it in other threads, but I was afraid it was not appropriate, since, AFAIK, N (or B) is just an iterative method, and an iterated formula requires no logical decisions such as: "if A, then GOTO; if B, ...".
 Quote by logics The best known algorithm... requires 3 operations, ... Do you know a simpler or faster one? Can you find a method that requires only 1 operation?
1 operation: { k [:] m} can hardly be called an 'algorithm'... (I called it, more modestly, a method.)
 Quote by Hurkyl What about "<" as well?....Many algorithms ...rely on being able to tell whether one number is bigger than another, and so "<" is important.
...which, as was rightly pointed out, needs decisions like: if p > q, then ...; if p < q, then STOP, etc. But in post #13, again, N was referred to as Newton's algorithm; so why can N's do without '>' and '<' while mine can't?

Lastly, to explain the sense of my question ("could you tell Altrepair..."): LD is surely less expensive than N and B, especially if we count micro-operations, but it is also more cumbersome than both. My challenge was to find the simplest (and fastest) formula, and LD is much more complex than N (B).
'Only one operation' may seem a tall order, but not only is it possible, it is not even too difficult if you examine the structure of a square.

 Recognitions: Gold Member Science Advisor Staff Emeritus Many languages have conditional expressions in addition to conditional statements: e.g. a function satisfying: $$\textrm{Choose}(P, x, y) = \begin{cases} x & P \\ y & \neg P \end{cases}$$ where P is a truth value. In other languages, conditions return numeric values; e.g. true is 1 and false is 0, but sometimes the opposite. You can implement absolute value, for example, by $$|x| = x (1 - 2(x<0) )$$ Sometimes, various operations can be used as substitutes for tests. e.g. if you have the sign function available, you can use it in ways similar to what I just described when you just care about orderings: $$|x| = x \mathop{\textrm{sign}}(x)$$
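In Python, where True and False behave as 1 and 0 in arithmetic, the branchless absolute value above can be checked directly (the function name is mine):

```python
def branchless_abs(x):
    """|x| = x * (1 - 2*(x < 0)): the comparison yields 0 or 1,
    so the sign flip happens without any if-statement."""
    return x * (1 - 2 * (x < 0))
```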

Recognitions:
Gold Member
Staff Emeritus
 Quote by logics Doesn't that apply to Newton's algorithm?
Yes. But at the time, I had decided to give you the benefit of the doubt and assume you really mean to ask about algorithms for producing approximations to square roots, but just didn't want to say it out loud.

 Quote by Hurkyl I had decided to ..... assume you really mean to ask about algorithms for producing approximations .
You'll greatly oblige me (and future wievers) , if you help me edit the OP, can you (or Mark44, or ... anyone) do it?

 Mentor Wievers - ?? What does that mean? It's not in my dictionary. Did you mean "viewers"?
 It can be done in one step, provided you specify the precision in advance: use the number as the index into an array containing the roots. As long as the precision is specified, each real number is represented by an integer. Most modern computers will perform the indexed register load as an atomic operation (a single step).
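A sketch of that table-lookup idea, assuming roots fixed to three decimal places over a small range (the names, scale, and range are mine, chosen for illustration):

```python
SCALE = 1000   # precision fixed in advance: three decimal places
# Precompute every root once; the table is built before any query.
ROOTS = [round((i ** 0.5) * SCALE) for i in range(1001)]

def sqrt_lookup(n):
    """The single 'operation': one indexed load from the table."""
    return ROOTS[n] / SCALE
```

All the work is paid up front in the precomputation; the query itself really is a single indexed access, at the cost of a table whose size grows with the range and precision demanded.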

 Quote by Mark44 Did you mean "viewers"?
I'm of courteous disposition and act my age, do you expect a reply?

I suggested: (edited OP)
 The best known algorithm (EDIT: i.e. iterative method, 'Babylonian') to take the square root of a number requires 3 operations... Do you know a simpler or faster one? Can you find a method that requires only 1 operation? (No = 1) EDIT: a [square] root is found with Nt ops., iterating Ni times a formula that requires No ops. using N+ different op-signs: Nt = Ni × No. Babylonian method, $\sqrt{354.045^2}$ (if x0 = 300.00): N+ = No = 3, Ni = 4, 3 × 4 $\rightarrow$ Nt = 12 operations. If No = 1 $\rightarrow$ (N+ = 1, Nt = Ni); [in the title: N+ = 1 $\rightarrow$ No = 1, Nt < 12]; Nt < 8 is a 'good' solution, (5 × 1) = 5 operations would be 'brilliant'.
Post #31 proves that new readers can't carefully examine all previous posts. An edit to the OP would probably help. Thanks.