Ramanujan square roots

  1. Jan 22, 2017 #1
    How do you prove the identity

    [tex]
    3 = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1+4\sqrt{1 + \cdots}}}}
    [/tex]

    with a real proof that actually proves the convergence? I know there are "proofs" that "prove" the identity with some trickery that ignore all the convergence issues, and I'm not interested in those trickeries.
     
  2. Jan 22, 2017 #2

    stevendaryl (Staff Emeritus, Science Advisor)

    The "trickery" method can be used to prove convergence.

    We can generalize Ramanujan's result:

    [itex]\sqrt{1+x\sqrt{1+(x+1)\sqrt{1+(x+2)\sqrt{1+ ...}}}}= x+1[/itex]

    The original result is the special case [itex]x=2[/itex]. How to prove the general case?

    Let's define a function [itex]f(x) = x+1[/itex]. We can prove that:

    [itex]f(x) = \sqrt{1 + x f(x+1)}[/itex]

    since [itex]\sqrt{1 + x f(x+1)} = \sqrt{1 + x(x+2)} = \sqrt{(x+1)^2} = x+1[/itex].

    We can similarly expand [itex]f(x+1)[/itex]:

    [itex]f(x+1) = \sqrt{1+(x+1) f(x+2)}[/itex]

    So we can keep expanding to get, for [itex]n > 0[/itex]:

    [itex]f(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1 + (x+n-1) f(x+n)}}}}}[/itex]

    Now, let's define a sequence of functions [itex]f_n(x)[/itex]:

    [itex]f_n(x) = \sqrt{1+x \sqrt{1+(x+1) \sqrt{1 + (x+2) \sqrt{1 + ... + \sqrt{1+ (x+n-1)}}}}}[/itex]

    It's obvious that [itex]f_n(x) < f(x)[/itex], and also that [itex]f_{n+1}(x) > f_n(x)[/itex]. So for fixed [itex]x[/itex] we have an increasing sequence that is bounded from above by [itex]f(x) = x+1[/itex]. Every increasing bounded sequence converges, so [itex]f_n(x)[/itex] converges.

    How do you show that its limit is [itex]f(x)[/itex]? Well, if the sequence converges to some function [itex]f_{\infty}(x)[/itex], then that function satisfies:

    [itex]f_{\infty}(x) = \sqrt{1+ x f_{\infty}(x+1)}[/itex]

    A solution to that is clearly [itex]f_{\infty}(x) = x+1[/itex]. Is it the only solution? I don't know, but at this point the remaining problem is uniqueness, not convergence, because we already know the sequence converges.
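
    As a numerical sanity check of this argument (an illustration, not part of the proof), the minimal sketch below evaluates the truncations [itex]f_n(x)[/itex] from the inside out; the helper name f_trunc is just for illustration. For [itex]x = 2[/itex] the values increase toward [itex]x+1 = 3[/itex].

[code]
import math

def f_trunc(x, n):
    # Truncated nested radical
    # f_n(x) = sqrt(1 + x*sqrt(1 + (x+1)*sqrt(... sqrt(1 + (x+n-1)))))
    val = 1.0                       # seed so the innermost layer is sqrt(1 + (x+n-1))
    for k in range(n - 1, -1, -1):  # work from the inside out
        val = math.sqrt(1 + (x + k) * val)
    return val

for n in (1, 2, 5, 10, 20):
    print(n, f_trunc(2, n))   # increases toward 3
[/code]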
     
  3. Jan 22, 2017 #3
    I think that uniqueness question is exactly the last step needed to complete the proof, because [itex]x+1[/itex] does seem to be the only acceptable solution after all. The functional equation

    [tex]
    g(x+1) = \frac{(g(x))^2 - 1}{x}
    [/tex]

    obviously has infinitely many solutions, but it seems to have exactly one solution satisfying the condition [itex]0\leq g(x)\leq x+1[/itex].

    If you set g(2) = 2.9, this happens:

    g(3) = 3.705
    g(4) = 4.2423...
    g(5) = 4.2493...
    g(6) = 3.4114...
    g(7) = 1.7729...
    g(8) = 0.3062...
    g(9) = -0.1132...

    If you set g(2) = 2.99, this happens:

    g(3) = 3.9700...
    g(4) = 4.9204...
    g(5) = 5.8026...
    g(6) = 6.5341...
    g(7) = 6.9492...
    g(8) = 6.7560...
    g(9) = 5.5804...
    g(10) = 3.3490...
    g(11) = 1.0216...
    g(12) = 0.0039...
    g(13) = -0.0833...

    And so on. So it seems that solutions of this type are not going to satisfy [itex]g(x)=\sqrt{1 + xg(x+1)}[/itex], due to the appearance of negative values.

    If you set [itex]g(2) > 3[/itex], then obviously the constraint [itex]g(x)\leq x+1[/itex] gets violated. Solutions of that type would satisfy [itex]g(x)= \sqrt{1 + xg(x+1)}[/itex], but we know that the function sequence [itex]f_1,f_2,f_3,\ldots[/itex] does not converge to them.
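
    For anyone who wants to reproduce those numbers, here is a minimal sketch of the iteration (the function name g_sequence is just for illustration); both runs end with negative values, matching the tables above.

[code]
def g_sequence(g2, n_max):
    # Iterate the functional equation g(n+1) = (g(n)^2 - 1)/n starting from g(2) = g2.
    g = g2
    values = [(2, g)]
    for n in range(2, n_max):
        g = (g * g - 1) / n
        values.append((n + 1, g))
    return values

for start in (2.9, 2.99):
    print(start, [(n, round(v, 4)) for n, v in g_sequence(start, 13)])
[/code]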
     
  4. Jan 23, 2017 #4
    The numerical result I gave in the previous post does not look like an easy thing to prove, so the question of my opening post is still open.

    In order to complete the proof in the way envisioned above, the following result should be proven: Assume that [itex]0<g(2)<3[/itex], and that the sequence [itex]g(3),g(4),\ldots[/itex] is generated by the formula
    [tex]
    g(n+1) = \frac{g(n)^2 - 1}{n},\quad\quad n=2,3,\ldots
    [/tex]
    Then [itex]g(n)<0[/itex] for some [itex]n[/itex].

    Here are some remarks concerning the most obvious proof attempts. Suppose that
    [tex]
    g(n) < n + 1 - \epsilon
    [/tex]
    holds for some [itex]\epsilon >0[/itex]. Then
    [tex]
    g(n+1) < \frac{(n+1-\epsilon)^2 - 1}{n} < n + 2 - 2\epsilon + \frac{\epsilon^2}{n}
    [/tex]
    Here I dropped the term [itex]-\frac{2\epsilon}{n}[/itex] to simplify the inequality a little, which only weakens it. If we assume that [itex]\epsilon[/itex] is so small that its square can be ignored, we get
    [tex]
    g(n+1) \lesssim n + 2 - 2\epsilon.
    [/tex]

    Treating [itex]2\epsilon[/itex] as a "new epsilon" we'll get
    [tex]
    g(n+2) \lesssim n + 3 - 2^2\epsilon
    [/tex]
    [tex]
    g(n+3) \lesssim n+4 - 2^3\epsilon
    [/tex]
    and so on.

    Unfortunately, the presence of the term [itex]-2^k\epsilon[/itex] after [itex]k[/itex] steps does not prove that the sequence goes below zero, because the assumption that the epsilon terms stay small eventually contradicts itself. What we learn is that the sequence deviates from [itex]n+1[/itex] roughly exponentially, but only as long as the deviation is very small. Once the deviation is large enough that the square of the epsilon can no longer be ignored, the chain of inequalities stops working.

    The effect of the term [itex]\frac{\epsilon^2}{n}[/itex] is to push the sequence back upwards, so this does not look like an easy problem.
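
    To illustrate the point numerically (an illustration only, not a proof), the sketch below tracks the deviation [itex]\epsilon_n = (n+1) - g(n)[/itex]; a one-step computation with the recursion gives [itex]\epsilon_{n+1} = \left(2 + \frac{2}{n} - \frac{\epsilon_n}{n}\right)\epsilon_n[/itex], so the growth factor is close to 2 while the deviation is small and shrinks once it is not. The helper name deviation_table is just for illustration.

[code]
def deviation_table(g2, n_max):
    # Track eps_n = (n+1) - g(n) under the recursion g(n+1) = (g(n)^2 - 1)/n.
    g, rows = g2, []
    for n in range(2, n_max):
        rows.append((n, (n + 1) - g))
        g = (g * g - 1) / n
    return rows

prev = None
for n, eps in deviation_table(2.99, 11):
    if prev is None:
        print(n, round(eps, 4))
    else:
        print(n, round(eps, 4), round(eps / prev, 2))  # step-to-step growth factor
    prev = eps
[/code]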
     
  5. Jan 29, 2017 #5
    I managed to prove the missing step. The trick was to set up the exponential estimate slightly differently. If [itex]g(n)<\alpha(n+1)[/itex] holds for some constant [itex]0<\alpha <1[/itex], then [itex]g(n+1)<\alpha^2 (n+2)[/itex] also holds (a one-line check of this step is given after this paragraph). This implies that if we start from a condition [itex]g(2)<3\alpha[/itex], inductively we get
    [tex]
    g(n) < \alpha^{2^{n-2}}(n+1)
    [/tex]
    for all [itex]n\in\{2,3,4,\ldots\}[/itex].
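
    For completeness, that step follows directly from the recursion, assuming [itex]g(n)\geq 0[/itex] (if [itex]g(n)<0[/itex] at any point we are already done):
    [tex]
    g(n+1) = \frac{g(n)^2 - 1}{n} < \frac{\alpha^2(n+1)^2 - 1}{n} = \alpha^2 n + 2\alpha^2 + \frac{\alpha^2 - 1}{n} < \alpha^2(n+2),
    [/tex]
    where the last inequality uses [itex]\alpha^2 - 1 < 0[/itex].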

    This alone does not prove the sought claim that [itex]g(n)<0[/itex] holds for some [itex]n[/itex], but the doubly exponential decay of [itex]\alpha^{2^{n-2}}[/itex] does prove that [itex]g(n)<1[/itex] holds from some point on. On the other hand, the condition [itex]0\leq g(n)<1[/itex] implies that [itex]g(n+1)\leq g(n)-\frac{1}{n}[/itex]; in other words, if [itex]0<g(n_A)<1[/itex] holds for some [itex]n_A[/itex], then from that point on
    [tex]
    g(n) \leq g(n_A) - \sum_{k=n_A}^{n-1}\frac{1}{k}
    [/tex]
    will hold for all [itex]n\in\{n_A+1,n_A+2,\ldots, n_B\}[/itex], where [itex]n_B[/itex] is the inevitable point at which [itex]g(n_B)<0[/itex] becomes true, since the partial sums of the harmonic series grow without bound.
     
  6. Jan 31, 2017 #6
    I just realized that my proof contained an unnecessary silly complication, and it could have been simpler. Since nobody pointed it out, I can conclude that my proof was not read very carefully by anyone :rolleyes:
     