# I Ramanujan square roots

1. Jan 22, 2017

### jostpuur

How do you prove the identity

$$3 = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1+4\sqrt{1 + \cdots}}}}$$

with a real proof that actually establishes convergence? I know there are "proofs" that "prove" the identity with trickery that ignores all the convergence issues, and I'm not interested in those tricks.

2. Jan 22, 2017

### stevendaryl

Staff Emeritus
The "trickery" method can be used to prove convergence.

We can generalize Ramanujan's result:

$\sqrt{1+x\sqrt{1+(x+1)\sqrt{1+(x+2)\sqrt{1+ ...}}}}= x+1$

The original result is the special case $x=2$. How to prove the general case?

Let's define a function $f(x) = x+1$. We can prove that:

$f(x) = \sqrt{1 + x f(x+1)}$

We can similarly expand $f(x+1)$:

$f(x+1) = \sqrt{1+(x+1) f(x+2)}$

So we can keep expanding to get, for $n > 0$:

$f(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1 + (x+n-1) f(x+n)}}}}}$

Now, let's define a sequence of functions $f_n(x)$:

$f_n(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1+ (x+n-1)}}}}}$

It's obvious that $f_n(x) < f(x)$, and also that $f_{n+1}(x) > f_n(x)$. So for fixed $x$, we have an increasing sequence that is bounded from above by $f(x) = x+1$. Since every increasing bounded sequence converges, $f_n(x)$ converges.

How do you show that its limit is $f(x)$? Well, if it converges to some function $f_{\infty}(x)$, then (by continuity of the square root) that function satisfies:

$f_{\infty}(x) = \sqrt{1+ x f_{\infty}(x+1)}$

A solution to that is clearly $f_{\infty}(x) = x+1$. Is it the only solution? I don't know, but at this point, the problem isn't convergence, because you know that it converges.
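As a quick numerical sanity check (not part of the argument above; a small Python sketch, with `f_n` as an illustrative name), the truncated radicals $f_n(2)$ can be evaluated directly and compared against the claimed limit $3$:

```python
import math

def f_n(x, n):
    """Truncated nested radical f_n(x): n square-root layers with
    coefficients x, x+1, ..., x+n-1; the innermost radicand is 1 + (x+n-1)."""
    value = math.sqrt(1 + (x + n - 1))
    # wrap the remaining layers from the inside out
    for k in range(n - 2, -1, -1):
        value = math.sqrt(1 + (x + k) * value)
    return value

for n in (1, 2, 5, 10, 20):
    print(n, f_n(2, n))
```

The printed values increase monotonically toward 3, consistent with the monotone-convergence argument above.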

3. Jan 22, 2017

### jostpuur

I think we've pinpointed the last step needed to complete the proof, because that does seem to be the only acceptable solution after all. The functional equation

$$g(x+1) = \frac{(g(x))^2 - 1}{x}$$

obviously has infinitely many solutions, but it seems that it has exactly one solution satisfying the condition $0\leq g(x)\leq x+1$.

If you set g(2) = 2.9, this happens:

g(3) = 3.705
g(4) = 4.2423...
g(5) = 4.2493...
g(6) = 3.4114...
g(7) = 1.7729...
g(8) = 0.3062...
g(9) = -0.1132...

If you set g(2) = 2.99, this happens:

g(3) = 3.9700...
g(4) = 4.9204...
g(5) = 5.8026...
g(6) = 6.5341...
g(7) = 6.9492...
g(8) = 6.7560...
g(9) = 5.5804...
g(10) = 3.3490...
g(11) = 1.0216...
g(12) = 0.0039...
g(13) = -0.0833...

And so on. So it seems that this type of solution is not going to satisfy $g(x)=\sqrt{1 + xg(x+1)}$, due to the eventual appearance of negative values.

If you set g(2) > 3, then obviously the constraint $g(x)\leq x+1$ gets violated. Solutions of that type would satisfy $g(x)= \sqrt{1 + xg(x+1)}$, but we know that the function sequence $f_1,f_2,f_3,\ldots$ does not converge to them.
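The two tables above can be reproduced with a few lines of Python (a quick sketch; `iterate_g` is just an illustrative name):

```python
def iterate_g(g2, n_max=40):
    """Iterate g(n+1) = (g(n)**2 - 1)/n from g(2) = g2, stopping at the
    first negative value (or at n_max as a safety limit)."""
    values = [(2, g2)]
    g, n = g2, 2
    while n < n_max and g >= 0:
        g = (g * g - 1) / n
        n += 1
        values.append((n, g))
    return values

for start in (2.9, 2.99):
    n_neg, g_neg = iterate_g(start)[-1]
    print(f"g(2) = {start}: first negative value is g({n_neg}) = {g_neg:.4f}")
```

For g(2) = 2.9 the iteration first goes negative at g(9), and for g(2) = 2.99 at g(13), matching the tables above.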

4. Jan 23, 2017

### jostpuur

The numerical result I gave in the previous post does not look like an easy thing to prove, so the question of my opening post is still open.

In order to complete the proof in the way envisioned above, the following result should be proven: Assume that $0<g(2)<3$, and that the sequence $g(3),g(4),\ldots$ is generated by the formula
$$g(n+1) = \frac{g(n)^2 - 1}{n},\quad\quad n=2,3,\ldots$$
Then $g(n)<0$ for some $n$.

Here are some remarks concerning the most obvious proof attempt. Suppose
$$g(n) < n + 1 - \epsilon$$
holds with some $\epsilon >0$. Then
$$g(n+1) < \frac{(n+1-\epsilon)^2 - 1}{n} < n + 2 - 2\epsilon + \frac{\epsilon^2}{n}$$
I dropped the term $-\frac{2\epsilon}{n}$ to simplify the inequality a little, weakening it at the same time. If we assume that $\epsilon$ is so small that its square can be ignored, we get
$$g(n+1) \lesssim n + 2 - 2\epsilon.$$

Treating $2\epsilon$ as a "new epsilon" we'll get
$$g(n+2) \lesssim n + 3 - 2^2\epsilon$$
$$g(n+3) \lesssim n+4 - 2^3\epsilon$$
and so on.

Unfortunately the presence of the term $-2^k\epsilon$ after $k$ steps will not prove that the sequence goes below zero, because the assumption about the smallness of the epsilon eventually becomes contradictory. We only learned that the sequence starts to deviate from $n+1$ roughly exponentially as long as the deviation is very small. Once the deviation is large enough that the square of the epsilon can no longer be ignored, this chain of inequalities stops working.

The effect of the term $\frac{\epsilon^2}{n}$ is that it tends to push the sequence upwards, so this doesn't look like an easy problem.

5. Jan 29, 2017

### jostpuur

I managed to prove the missing step. The trick was to handle the exponential growth with a slightly different parametrization. If $g(n)<\alpha(n+1)$ holds with some constant $0<\alpha <1$, then also $g(n+1)<\alpha^2 (n+2)$ holds, because
$$g(n+1) < \frac{\alpha^2(n+1)^2 - 1}{n} \leq \frac{\alpha^2 n(n+2)}{n} = \alpha^2(n+2),$$
where the second inequality uses $(n+1)^2 = n(n+2)+1$ and $\alpha^2 \leq 1$. This implies that if we start from the condition $g(2)<3\alpha$, inductively we get
$$g(n) < \alpha^{2^{n-2}}(n+1)$$
for all $n\in\{2,3,4,\ldots\}$.

This alone does not prove the sought claim that $g(n)<0$ holds for some $n$, but the doubly exponential decay of $\alpha^{2^{n-2}}$ does prove that $g(n)<1$ will hold from some point on. On the other hand, the condition $0\leq g(n)<1$ implies that $g(n+1)\leq g(n)-\frac{1}{n}$, so in other words, if $0<g(n_A)<1$ holds for some $n_A$, then from that point on
$$g(n) \leq g(n_A) - \sum_{k=n_A}^{n-1}\frac{1}{k}$$
will hold for all $n\in\{n_A+1,n_A+2,\ldots, n_B\}$, where $n_B$ is the inevitable point where $g(n_B)<0$ becomes true; such an $n_B$ exists because the harmonic series diverges.
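The single-step lemma behind the bound (if $g(n)<\alpha(n+1)$ with $0<\alpha<1$, then $g(n+1)<\alpha^2(n+2)$) is also easy to stress-test numerically. Below is a quick randomized check in Python (an illustrative sketch, not part of the proof):

```python
import random

def step(g, n):
    """One step of the recursion g(n+1) = (g(n)**2 - 1)/n."""
    return (g * g - 1) / n

random.seed(1)
for _ in range(100_000):
    n = random.randint(2, 100)
    alpha = random.uniform(0.0, 0.9999)
    g = random.uniform(0.0, alpha * (n + 1))  # any g satisfying the hypothesis
    # conclusion of the lemma must hold for every sampled (g, n, alpha)
    assert step(g, n) < alpha ** 2 * (n + 2), (g, n, alpha)
```

No sample violates the conclusion, as expected from the algebraic identity $(n+1)^2 = n(n+2)+1$.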

6. Jan 31, 2017

### jostpuur

I just realized that my proof contained an unnecessary silly complication, and it could have been simpler. Since nobody pointed it out, I can conclude that nobody read my proof very carefully.