# How do you calculate recursive efficiency?

1. Oct 17, 2014

### David Carroll

I am right now working on a recursive formula whose nature I'll define later. Unfortunately, I have no formal education in math. What I want is some references to where I can find how to calculate the efficiency of a recursive formula and how to prove its efficiency.

For example (though this is not the formula I'm working with), consider the algorithm to find the square root of a number x²: Step 1: Divide x² by any number. Step 2: Take the arithmetic mean of the output number and the number you divided by, and divide x² by this arithmetic mean. Repeat step 2 until the output agrees with its corresponding input to D decimal places, where D is the number of decimal places that fits on some calculating device.
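(This procedure is the classic Babylonian, or Heron's, method. A minimal Python sketch of the steps above — the function name, the `first_divisor` parameter, and the tolerance handling are my own illustrative choices, not part of the original description:)

```python
def heron_sqrt(n, first_divisor, decimals=10):
    """Estimate sqrt(n) by repeated averaging (Babylonian / Heron's method)."""
    guess = n / first_divisor                     # step 1: divide n by any number
    while True:
        new_guess = (guess + n / guess) / 2.0     # step 2: average divisor and quotient
        if abs(new_guess - guess) < 10 ** -decimals:  # stop when D decimals agree
            return new_guess
        guess = new_guess

print(heron_sqrt(2.0, 1.0))
```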

How is the efficiency of the above recursive formula defined and calculated? How is the efficiency of any recursive formula defined and calculated?

2. Oct 18, 2014

### Stephen Tashi

A numerical algorithm may quickly converge to the answer for some inputs (for example divide 4 by 2 to estimate its square root) and slowly converge to an answer for others. So you can begin by deciding what you care about - best case performance? worst case performance? You can attempt to deal with "average case performance", but this can get complicated because you must specify what you mean by "average". Do you have some way of defining a probability distribution for the inputs that are chosen?
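(To make the input dependence concrete, here is a small Python sketch — the helper name, tolerance, and test inputs are illustrative assumptions — that counts Babylonian-method steps for a near guess versus a far-off guess:)

```python
def iterations_to_converge(n, start, tol=1e-12):
    """Count averaging steps until successive guesses agree within tol."""
    guess, steps = start, 0
    while True:
        new_guess = (guess + n / guess) / 2.0
        steps += 1
        if abs(new_guess - guess) < tol:
            return steps
        guess = new_guess

# A guess near sqrt(4) converges in a few steps; a far-off guess needs more.
print(iterations_to_converge(4.0, 2.1))
print(iterations_to_converge(4.0, 1000.0))
```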

3. Oct 18, 2014

### zoki85

Never heard of the term "recursive efficiency". But wait, I remembered something that may be of some help. Recursive algorithms, as encoded in a programming language, typically require extra memory (stack space) to execute. Recursion is easier (and shorter) for the programmer but harder on the machine.

Last edited: Oct 18, 2014
4. Oct 22, 2014

### ellipsis

There's a certain fact that struck me as profound and enlightening when I first learned of it:

"Any recursive function has an iterative counterpart, and any iterative function has a recursive counterpart."

Convert your recursive solution to an iterative one. Then use big-O notation to analyze and formally state the efficiency of your algorithm. Whatever complexity the iterative form has, the recursive form has the same complexity.
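(As an illustration of that equivalence — this example is mine, not from the thread — here are recursive and iterative versions of the same averaging step; both perform n updates, so both are O(n) in the number of iterations:)

```python
def sqrt_recursive(x, guess, n):
    """n averaging steps, written recursively."""
    if n == 0:
        return guess
    return sqrt_recursive(x, (guess + x / guess) / 2.0, n - 1)

def sqrt_iterative(x, guess, n):
    """The same n averaging steps, written as a loop."""
    for _ in range(n):
        guess = (guess + x / guess) / 2.0
    return guess

# Both compute the identical sequence of values.
print(sqrt_recursive(2.0, 2.0, 5) == sqrt_iterative(2.0, 2.0, 5))
```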

If you know exactly how many times something will loop (and sometimes even when you don't), you can put the computation in closed form. The following code outputs your sqrt(x) example after three iterations of step 2, assuming an initial guess of 'x'.

input x
A = x+1
B = A^2
C = 4*x

output ( (C+B)^2+16*x*B ) / ( 8*A*(C+B) )

You can find that by composing the update function of step 2 with itself a few times.
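(One way to check the closed form is to compare it against three explicit averaging steps starting from guess x. Note the middle term must be 16·x·B, not 32·x·B, for the two to agree; the Python sketch below, with my own function names, does the comparison:)

```python
def closed_form(x):
    """Closed form for three averaging steps with initial guess x."""
    A = x + 1
    B = A ** 2
    C = 4 * x
    return ((C + B) ** 2 + 16 * x * B) / (8 * A * (C + B))

def three_steps(x):
    """Three explicit applications of step 2, starting from guess x."""
    guess = x
    for _ in range(3):
        guess = (guess + x / guess) / 2.0
    return guess

print(abs(closed_form(4.0) - three_steps(4.0)) < 1e-9)
```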

Last edited: Oct 22, 2014
5. Oct 22, 2014

### David Carroll

Thanks guys. Actually, this is all a moot point. I found a non-recursive way to deal with what I'm working on.

6. Oct 23, 2014

### Staff: Mentor

I guess you are looking for the rate of convergence.

As examples, linear convergence adds roughly the same number of correct significant digits per step, while quadratic convergence (like the square root finding method mentioned above) roughly doubles the number of correct significant digits each step.
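(A concrete check of the quadratic behavior — this example is mine, not from the thread. Each averaging step for sqrt(2) roughly squares the error, so the count of correct digits about doubles per step:)

```python
import math

errors = []
x, guess = 2.0, 2.0
for step in range(1, 6):
    guess = (guess + x / guess) / 2.0      # one averaging (Newton) step
    errors.append(abs(guess - math.sqrt(x)))
    print(step, errors[-1])                # error shrinks roughly quadratically
```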