Prove Ramanujan Identity: 3 = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+...}}}}

  • Context: Undergrad 
  • Thread starter: jostpuur
  • Tags: Roots, Square

Discussion Overview

The discussion centers around proving the identity 3 = √(1 + 2√(1 + 3√(1 + 4√(1 + ...)))) with a focus on establishing convergence rigorously. Participants express concerns about "trickery" methods that bypass convergence issues and seek a solid proof for the general case of Ramanujan's result.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions how to prove the identity while addressing convergence issues, expressing a desire to avoid non-rigorous methods.
  • Another participant proposes a generalization of Ramanujan's result and defines a function f(x) = x + 1, suggesting that it can be shown that f(x) satisfies a specific functional equation.
  • It is noted that the sequence of functions f_n(x) is increasing and bounded, leading to the conclusion that it converges, although the limit function f_{\infty}(x) is still under discussion.
  • Concerns are raised about whether f_{\infty}(x) = x + 1 is the only solution to the functional equation, with numerical examples provided to illustrate the behavior of potential solutions.
  • Further exploration reveals that if g(2) is set within certain bounds, it leads to negative values for g(n) at some n, raising questions about the validity of the assumptions made.
  • One participant claims to have proven a missing step by analyzing the behavior of g(n) under specific conditions, although they acknowledge that this does not directly prove that g(n) becomes negative.
  • A later post reflects on the complexity of the proof, suggesting that it could have been approached more simply, indicating a level of uncertainty about the clarity and reception of their arguments.

Areas of Agreement / Disagreement

Participants express differing views on the uniqueness of solutions to the functional equation and the behavior of the sequence g(n). The discussion remains unresolved regarding the completeness of the proof and the conditions under which g(n) becomes negative.

Contextual Notes

Participants note that assumptions about the bounds of g(n) and the behavior of the sequence are critical to the discussion, but these assumptions are not universally accepted or proven, leaving some aspects of the proof open to further exploration.

jostpuur
How do you prove the identity

[tex] 3 = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1+4\sqrt{1 + \cdots}}}}[/tex]

with a real proof that actually proves the convergence? I know there are "proofs" that "prove" the identity with some trickery that ignore all the convergence issues, and I'm not interested in those trickeries.
 
The "trickery" method can be used to prove convergence.

We can generalize Ramanujan's result:

[itex]\sqrt{1+x\sqrt{1+(x+1)\sqrt{1+(x+2)\sqrt{1+ ...}}}}= x+1[/itex]

The original result is the special case [itex]x=2[/itex]. How to prove the general case?

Let's define a function [itex]f(x) = x+1[/itex]. We can prove that:

[itex]f(x) = \sqrt{1 + x f(x+1)}[/itex]
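
Indeed, squaring both sides turns this into a polynomial identity:

[tex](x+1)^2 = x^2 + 2x + 1 = 1 + x(x+2) = 1 + x\,f(x+1)[/tex]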

We can similarly expand [itex]f(x+1)[/itex]:

[itex]f(x+1) = \sqrt{1+(x+1) f(x+2)}[/itex]

So we can keep expanding to get, for [itex]n > 0[/itex]:

[itex]f(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1 + (x+n-1) f(x+n)}}}}}[/itex]

Now, let's define a sequence of functions [itex]f_n(x)[/itex]:

[itex]f_n(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1+ (x+n-1)}}}}}[/itex]

It's obvious that [itex]f_n(x) < f(x)[/itex], and also that [itex]f_{n+1}(x) > f_n(x)[/itex]. So for fixed [itex]x[/itex], we have an increasing sequence that is bounded from above by [itex]f(x) = x+1[/itex]. Since every increasing bounded sequence converges, [itex]f_n(x)[/itex] converges.

How do you show that its limit is [itex]f(x)[/itex]? Well, if it converges to some function [itex]f_{\infty}(x)[/itex], then that limit function satisfies:

[itex]f_{\infty}(x) = \sqrt{1+ x f_{\infty}(x+1)}[/itex]

A solution to that is clearly [itex]f_{\infty}(x) = x+1[/itex]. Is it the only solution? I don't know, but at this point, the problem isn't convergence, because you know that it converges.
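
For a quick numerical sanity check (a sketch in Python; the helper name f_n is mine), one can evaluate the truncated radicals from the inside out and watch them climb toward [itex]x+1[/itex]:

```python
import math

def f_n(x, n):
    # Truncated nested radical: the innermost term is sqrt(1 + (x + n - 1)),
    # then we wrap outward with coefficients x + n - 2 down to x.
    value = math.sqrt(1 + (x + n - 1))
    for k in range(n - 2, -1, -1):
        value = math.sqrt(1 + (x + k) * value)
    return value

# For x = 2 the truncations increase toward f(2) = 3.
for n in (2, 5, 10, 20, 30):
    print(n, f_n(2, n))
```

The values are increasing in [itex]n[/itex] and stay below 3, consistent with the monotone-convergence argument above.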
 
stevendaryl said:
A solution to that is clearly [itex]f_{\infty}(x) = x+1[/itex]. Is it the only solution? I don't know

I think we've pinpointed the last step to complete the proof, because that seems to be the only acceptable solution after all. The functional equation

[tex] g(x+1) = \frac{(g(x))^2 - 1}{x}[/tex]

obviously has infinitely many solutions, but it seems to have exactly one solution that satisfies the condition [itex]0\leq g(x)\leq x+1[/itex].

If you set g(2) = 2.9, this happens:

g(3) = 3.705
g(4) = 4.2423...
g(5) = 4.2493...
g(6) = 3.4114...
g(7) = 1.7729...
g(8) = 0.3062...
g(9) = -0.1132...

If you set g(2) = 2.99, this happens:

g(3) = 3.9700...
g(4) = 4.9204...
g(5) = 5.8026...
g(6) = 6.5341...
g(7) = 6.9492...
g(8) = 6.7560...
g(9) = 5.5804...
g(10) = 3.3490...
g(11) = 1.0216...
g(12) = 0.0039...
g(13) = -0.0833...

And so on. So it seems that solutions of this type are not going to satisfy [itex]g(x)=\sqrt{1 + xg(x+1)}[/itex], because a square root cannot produce the negative values that appear.

If you set g(2) > 3, then obviously the constraint [itex]g(x)\leq x+1[/itex] gets violated. Solutions of that type would satisfy [itex]g(x)= \sqrt{1 + xg(x+1)}[/itex], but we know that the function sequence [itex]f_1,f_2,f_3,\ldots[/itex] does not converge to them.
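
Those tables are easy to reproduce with a few lines of Python (a sketch; the function name is mine):

```python
def iterate_g(g2, n_max=100):
    # Iterate g(n+1) = (g(n)^2 - 1)/n from g(2) = g2, recording each
    # value and stopping once the sequence goes negative.
    values = {2: g2}
    g, n = g2, 2
    while n < n_max and g >= 0:
        g = (g * g - 1) / n
        n += 1
        values[n] = g
    return values

for start in (2.9, 2.99):
    vals = iterate_g(start)
    n_last = max(vals)
    print("g(2) = %s: first negative value is g(%d) = %.4f"
          % (start, n_last, vals[n_last]))
```

With g(2) = 2.9 the iteration first goes negative at g(9), and with g(2) = 2.99 at g(13), matching the tables above.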
 
The numerical result I gave in the previous post does not look like an easy thing to prove, so the question of my opening post is still open.

In order to complete the proof in the way envisioned above, the following result should be proven: Assume that [itex]0<g(2)<3[/itex], and that the sequence [itex]g(3),g(4),\ldots[/itex] is generated by the formula
[tex] g(n+1) = \frac{g(n)^2 - 1}{n},\quad\quad n=2,3,\ldots[/tex]
Then [itex]g(n)<0[/itex] with some [itex]n[/itex].

Here are some remarks concerning the most obvious proof attempts. Suppose
[tex] g(n) < n + 1 - \epsilon[/tex]
holds with some [itex]\epsilon >0[/itex]. Then
[tex] g(n+1) < \frac{(n+1-\epsilon)^2 - 1}{n} < n + 2 - 2\epsilon + \frac{\epsilon^2}{n}[/tex]
I dropped one term (the [itex]-\frac{2\epsilon}{n}[/itex] term) to simplify the inequality a little, which only weakens it. If we assume that the epsilon is so small that its square can be ignored, we get
[tex] g(n+1) \lesssim n + 2 - 2\epsilon.[/tex]

Treating [itex]2\epsilon[/itex] as a "new epsilon" we'll get
[tex] g(n+2) \lesssim n + 3 - 2^2\epsilon[/tex]
[tex] g(n+3) \lesssim n+4 - 2^3\epsilon[/tex]
and so on.

Unfortunately, the term [itex]-2^k\epsilon[/itex] in the bound for [itex]g(n+k)[/itex] will not prove that the sequence goes below zero, because the assumption that the epsilon stays small eventually becomes contradictory. We learned only that the sequence deviates from [itex]n+1[/itex] roughly exponentially as long as the deviation is very small. Once the deviation is large enough that the square of the epsilon can no longer be ignored, the chain of inequalities stops working.

The effect of the term [itex]\frac{\epsilon^2}{n}[/itex] is that it tends to push the sequence upwards, so this doesn't look like an easy problem.
 
I managed to prove the missing step. The trick was to approach the exponential stuff with a slightly different touch. If [itex]g(n)<\alpha(n+1)[/itex] holds with some constant [itex]0<\alpha <1[/itex], then also [itex]g(n+1)<\alpha^2 (n+2)[/itex] holds. This implies that if we start with a condition [itex]g(2)<3\alpha[/itex], inductively we get
[tex] g(n) < \alpha^{2^{n-2}}(n+1)[/tex]
for all [itex]n\in\{2,3,4,\ldots\}[/itex].
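
The inductive step can be verified directly: if [itex]0\leq g(n)<\alpha(n+1)[/itex], then since [itex]\alpha^2-1<0[/itex],

[tex] g(n+1) = \frac{g(n)^2-1}{n} < \frac{\alpha^2(n+1)^2-1}{n} = \alpha^2(n+2) + \frac{\alpha^2-1}{n} < \alpha^2(n+2). [/tex]

(If instead [itex]g(n)<0[/itex] at some point, the claim we are after already holds, so assuming [itex]g(n)\geq 0[/itex] costs nothing.)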

This alone does not prove the sought claim that [itex]g(n)<0[/itex] holds for some [itex]n[/itex], but the doubly exponential bound does prove that [itex]g(n)<1[/itex] will hold from some point on. On the other hand, the condition [itex]0\leq g(n)<1[/itex] implies [itex]g(n+1)\leq g(n)-\frac{1}{n}[/itex], so in other words, if [itex]0<g(n_A)<1[/itex] holds with some [itex]n_A[/itex], from that point on
[tex] g(n) \leq g(n_A) - \sum_{k=n_A}^{n-1}\frac{1}{k}[/tex]
will hold for all [itex]n\in\{n_A+1,n_A+2,\ldots, n_B\}[/itex], where [itex]n_B[/itex] is the inevitable point where [itex]g(n_B)<0[/itex] becomes true; such an [itex]n_B[/itex] must exist because the harmonic series diverges.
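
As a numerical illustration of this two-phase behaviour (a sketch; the function name is mine), one can track where the sequence first goes negative for several starting values below 3:

```python
def first_negative(g2):
    # Return the first index n with g(n) < 0 for the recurrence
    # g(n+1) = (g(n)^2 - 1)/n started from g(2) = g2, with 0 < g2 < 3.
    g, n = g2, 2
    while g >= 0:
        g = (g * g - 1) / n
        n += 1
    return n

# The closer g(2) is to 3, the longer g(n) shadows n + 1 before the
# doubly exponential drift and then the harmonic decrements kick in.
for g2 in (2.5, 2.9, 2.99, 2.999):
    print(g2, "-> first negative at n =", first_negative(g2))
```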
 
I just realized that my proof contained an unnecessary silly complication, and it could have been simpler. Since nobody pointed it out, I can conclude that my proof was not read very carefully by anyone :rolleyes:
 
