An identity with Bessel functions

  • #1
Juan Comas
Hello.
Does anybody know a proof of this formula?
$$J_{2}(e)\equiv\frac{1}{e}\sum_{i=1}^{\infty}\frac{J_{i}(i\cdot e)}{i}\cdot\frac{J_{i+1}((i+1)\cdot e)}{i+1}$$with$$0<e<1$$
We ran into this formula in a project and believe it is correct: it checks out numerically to the precision our computing programs can reach, but we have not been able to find a mathematical proof.
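As a note for readers who want to reproduce the numerical check, here is a minimal sketch in Python using SciPy. The function name `kapteyn_rhs` and the truncation at 1000 terms are my choices; the series converges geometrically for ##0<e<1##, so the truncation error is far below roundoff here.

```python
from scipy.special import jv  # Bessel function of the first kind, J_v(x)

def kapteyn_rhs(e, terms=1000):
    """Partial sum of (1/e) * sum_{i>=1} J_i(i*e) * J_{i+1}((i+1)*e) / (i*(i+1))."""
    return sum(jv(i, i * e) * jv(i + 1, (i + 1) * e) / (i * (i + 1))
               for i in range(1, terms + 1)) / e

for e in (0.1, 0.5, 0.9):
    print(e, kapteyn_rhs(e) - jv(2, e))  # differences should be tiny
```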
 

Answers and Replies

  • #3
Juan Comas
Thank you for your answer.
We have already tried the DLMF, Wolfram, and many other resources without success.
This is not an easy job: it is the summation of a second-order Kapteyn series, whose difficulty has been acknowledged by several authors, for example R.C. Tautz and I. Lerche; I attach two of their articles.
Frankly, we have made long, unsuccessful efforts to sum this Kapteyn series, even though the result, ##J_{2}(e)##, is very simple.
As a new attempt, we come to this forum to see whether somebody already knows the solution.
So my question stands: does anybody know a proof of the formula?
 

Attachments

  • Tautz - 2011.pdf (225.1 KB)
  • Tautz - 2009.pdf (2.4 MB)
  • #4
Petek
Have you considered posting your question on MathOverflow? That site caters to research-level problems. It has a tag for questions about Bessel functions, so you might find some advice there.
 
  • #5
Juan Comas
Thank you for your advice. I will consider it.
 
  • #6
jasonRF
Have you looked in A Treatise on the Theory of Bessel Functions by Watson? It is over 800 pages and full of proofs.
 
  • #7
Juan Comas
Have you looked in A Treatise on the Theory of Bessel Functions by Watson? It is over 800 pages and full of proofs.
Thanks. Indeed, Watson's book has a full chapter (Chapter XVII) on Kapteyn series, but mainly on series of the first kind. At the end there is one short section, 17.6, barely a page, on series of the second type (or order). Unfortunately it considers only series of the form $$
\sum\beta_{n}J_{\mu+n}\left\{ \left(\frac{\mu+\nu}{2}\right)z\right\} J_{\nu+n}\left\{ \left(\frac{\mu+\nu}{2}\right)z\right\} ,$$ that is, with the same argument in both Bessel functions, which is not the case in our formula.
 
  • #8
PeroK
Hello.
Does anybody know a proof of this formula?
$$J_{2}(e)\equiv\frac{1}{e}\sum_{i=1}^{\infty}\frac{J_{i}(i\cdot e)}{i}\cdot\frac{J_{i+1}((i+1)\cdot e)}{i+1}$$with$$0<e<1$$
We ran into this formula in a project, and think that it is correct. It can be checked successfully with numeric computing programs up to where they can reach, but we have not been able to find a mathematical proof.
I assume you have tried showing that the RHS satisfies the appropriate differential equation?
 
  • #9
renormalize
Does anybody know a proof of this formula?
$$J_{2}(e)\equiv\frac{1}{e}\sum_{i=1}^{\infty}\frac{J_{i}(i\cdot e)}{i}\cdot\frac{J_{i+1}((i+1)\cdot e)}{i+1}$$
Playing around in Mathematica, I am almost able to prove this by brute force. I substitute the power series for each of the Bessel functions into the products on the right side, then use the Cauchy product formula to rewrite those products as a single power series. By monkeying with the orders and limits of the various summations, I can complete the proof provided that the following finite summation formula in terms of hypergeometric functions ##{}_{2}F_{1}## is valid:$$\sum_{k=0}^{m}\frac{\left(-1\right)^{k}\left(m-k+1\right)^{m-k}\left(m-k+2\right)^{m+k+1}{}_{2}F_{1}\left(-k,-m-2;m-k+2;\left(\frac{m-k+1}{m-k+2}\right)^{2}\right)}{k!\left(m-k+1\right)!}\overset{?}{=}\frac{2\left(-1\right)^{m}}{m!}$$I've checked that it's true for all integers ##m## from 0 to 100 (physicist's induction!) but I haven't yet been savvy enough to prove it in general. Since the formula amounts to an expansion of the reciprocal factorial function, you might think it would be in the literature somewhere, but if so I haven't been able to find it.

So if and when one of the math whizzes here can verify this expression, I'll go ahead and post my full proof of the OP's Bessel-function identity.
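Not a proof, but the claimed finite sum can at least be re-checked independently of Mathematica. Here is a sketch in Python using mpmath's `hyp2f1` (which terminates, since the first parameter is a non-positive integer); the helper name `lhs` is mine, and the high working precision is needed because the individual summands grow large before cancelling.

```python
from mpmath import mp, mpf, hyp2f1, factorial

mp.dps = 100  # generous precision: summands are individually huge

def lhs(m):
    """Left side of the conjectured finite summation formula."""
    total = mpf(0)
    for k in range(m + 1):
        a, b = mpf(m - k + 1), mpf(m - k + 2)
        total += ((-1)**k * a**(m - k) * b**(m + k + 1)
                  * hyp2f1(-k, -m - 2, m - k + 2, (a / b)**2)
                  / (factorial(k) * factorial(m - k + 1)))
    return total

for m in range(12):
    print(m, lhs(m) - 2 * (-1)**m / factorial(m))  # differences should be negligible
```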
 
  • #10
Juan Comas
I assume you have tried showing that the RHS satisfies the appropriate differential equation?
No, we have not tried this, but I think it is a good idea. I understand that by the appropriate differential equation you mean that $$
\mathrm{RHS}=\sum_{i=1}^{\infty}\frac{1}{e}\frac{J_{i}(ie)}{i}\frac{J_{i+1}(\left(i+1\right)e)}{i+1}$$should satisfy the Bessel equation $$
e^{2}\frac{d^{2}\left(\mathrm{RHS}\right)}{de^{2}}+e\frac{d\left(\mathrm{RHS}\right)}{de}+\left(e^{2}-4\right)\left(\mathrm{RHS}\right)=0.$$ Of course, if the identity is correct the RHS must satisfy this differential equation, but I think this alone would not prove that ##\mathrm{RHS}=J_{2}(e)##, because the equation has other solutions besides ##J_{2}(e)##: its general solution is ##\mathrm{RHS}=\mathrm{A}\cdot J_{2}(e)+\mathrm{B}\cdot Y_{2}(e)## for arbitrary constants A and B.
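As a sanity check on this point, one can confirm numerically that ##J_2## and ##Y_2## both satisfy the equation. A finite-difference sketch assuming SciPy; the helper name `bessel_residual` and the step size `h` are my choices.

```python
from scipy.special import jv, yv  # Bessel functions of the first and second kind

def bessel_residual(f, x, h=1e-4):
    """Central-difference residual of x^2 y'' + x y' + (x^2 - 4) y."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return x**2 * d2 + x * d1 + (x**2 - 4) * f(x)

x = 0.7
print(bessel_residual(lambda t: jv(2, t), x))  # near zero
print(bessel_residual(lambda t: yv(2, t), x))  # near zero
```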
 
  • #11
Juan Comas
Playing around in Mathematica, I am almost able to prove this by brute force. […]
Congratulations. I am looking forward to it.
 
  • #12
pasmith
From here we have the integral representation [tex]
J_n(nx) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{in(\tau - x \sin \tau)}\,d\tau[/tex] whence the integral representation [tex]
\sum_{n=1}^\infty \frac{J_n(nx)J_{n+1}((n+1)x)}{n(n+1)} =
\frac{1}{4\pi^2} \int_{-\pi}^\pi \int_{-\pi}^\pi \sum_{n=1}^\infty \frac{e^{nf(\zeta,\eta,x)}}{n(n+1)}e^{i(\eta - x\sin\eta)}\,d\eta\,d\zeta[/tex] where [tex]
f(\zeta, \eta, x) = i(\zeta + \eta - x(\sin \zeta + \sin \eta)).[/tex] The series can be summed by observing that if [tex]g(\alpha) = \sum_{n=1}^\infty \frac{\alpha^n}{n(n+1)}[/tex] then [tex]
(\alpha^2 g'(\alpha))' = \sum_{n=1}^\infty \alpha^n = \frac{\alpha}{1 - \alpha}.[/tex] The ultimate aim is to end up with [tex]
xJ_2(x) = \frac{1}{2\pi} \int_{-\pi}^\pi xe^{i(2\tau - x\sin \tau)}\,d\tau.[/tex]
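That first integral representation is easy to check numerically. A sketch with SciPy's `quad`, keeping only the real part of the integrand (the imaginary part integrates to zero by symmetry); the function name `J_n_of_nx` is mine.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J_n_of_nx(n, x):
    """Bessel integral: J_n(nx) = (1/pi) * integral_0^pi cos(n*t - n*x*sin(t)) dt."""
    val, _ = quad(lambda t: np.cos(n * t - n * x * np.sin(t)), 0, np.pi)
    return val / np.pi

n, x = 3, 0.7
print(J_n_of_nx(n, x), jv(n, n * x))  # the two values agree
```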
 
  • #13
pasmith
The series can be summed by observing that if [tex]g(\alpha) = \sum_{n=1}^\infty \frac{\alpha^n}{n(n+1)}[/tex] then [tex]
(\alpha^2 g'(\alpha))' = \sum_{n=1}^\infty \alpha^n = \frac{\alpha}{1 - \alpha}.[/tex]

One possible difficulty is that this power series has radius of convergence 1, and we need to apply it for [itex]|\alpha| = 1[/itex]. So it may be necessary to look at [tex]
\sum_{n=1}^\infty \frac{e^{in\theta}}{n(n+1)} = \sum_{n=1}^\infty \frac{\cos n\theta + i\sin n\theta}{n(n+1)}[/tex] and try to recognise the right hand side as the Fourier series of something.
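For what it's worth, ##g## has an elementary closed form: using ##\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}## and ##\sum_{n\ge 1}\alpha^n/n = -\ln(1-\alpha)## gives ##g(\alpha) = 1 + (1-\alpha)\ln(1-\alpha)/\alpha##, which still makes sense on ##|\alpha|=1##, ##\alpha \ne 1##, since the coefficients are ##O(1/n^2)##. A quick numerical sketch (function names and the 200000-term truncation are my choices):

```python
import numpy as np

def g_series(alpha, terms=200000):
    """Direct partial sum of sum_{n>=1} alpha^n / (n*(n+1)); tail is O(1/terms)."""
    n = np.arange(1, terms + 1)
    return np.sum(alpha**n / (n * (n + 1)))

def g_closed(alpha):
    # closed form 1 + (1 - alpha)*log(1 - alpha)/alpha, valid for |alpha| <= 1, alpha != 1
    return 1 + (1 - alpha) * np.log(1 - alpha) / alpha

alpha = np.exp(1j * 1.0)  # a point on the unit circle
print(abs(g_series(alpha) - g_closed(alpha)))  # small: limited only by the truncated tail
```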
 
  • #14
julian
[…] Of course, if the identity is correct RHS must satisfy this differential equation, but I think that this would not prove that ##\mathrm{RHS}=J_{2}(e)##, because the equation has more solutions besides ##J_{2}(e)##. Effectively, this differential equation has all these solutions ##\mathrm{RHS}=\mathrm{A}\cdot J_{2}(e)+\mathrm{B}\cdot Y_{2}(e)##, having A and B any value.
I know you are considering ##0 < e < 1##. But can we not use that ##\lim_{e \rightarrow 0} J_2 (e) = 0## and that ##Y_2 (e)## is singular at ##e=0##?

Wouldn't the sum:

\begin{align*}
\frac{1}{e} \sum_{i=1}^\infty \dfrac{J_i ( i e)}{i} \dfrac{J_{i+1} ((i+1) e)}{i+1}
\end{align*}

tend to ##0## as ##e \rightarrow 0##, because ##J_i(ie) \sim (ie/2)^i/i!## near ##e=0##, so the ##i##-th term behaves like a constant times ##e^{2i}##?
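The term-by-term vanishing can be made concrete: the ##i##-th summand (including the overall ##1/e##) scales like ##e^{2i}##, so halving ##e## should divide the ##i=1## term by about 4 and the ##i=2## term by about 16. A small sketch with SciPy; the helper name `term` is mine.

```python
from scipy.special import jv

def term(i, e):
    """i-th summand of the series, including the overall 1/e factor."""
    return jv(i, i * e) * jv(i + 1, (i + 1) * e) / (e * i * (i + 1))

for e in (0.1, 0.05, 0.025):
    print(e, term(1, e))  # successive values shrink by a factor of about 4
```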
 
  • #15
julian
It's not exactly "driving me batty", but let's switch from ##e## to ##x## anyway. Say we have

\begin{align*}
A J_2 (x) + B Y_2 (x)= \frac{1}{x} \sum_{n=1}^\infty \dfrac{J_n (n x)}{n} \dfrac{J_{n+1} ((n+1) x)}{n+1}
\end{align*}

We have that ##J_2 (0) = 0## and the RHS is zero at ##x=0## (because of the formula for ##J_n (x)## that I will quote in a moment). We must have ##B=0## because ##Y_2 (x)## is singular at ##x=0##.

So we have

\begin{align*}
A J_2 (x) = \frac{1}{x} \sum_{n=1}^\infty \dfrac{J_n (n x)}{n} \dfrac{J_{n+1} ((n+1) x)}{n+1}
\end{align*}

We wish to find ##A##. Let's take the 2nd derivative and set ##x=0##.

We have the formula:

\begin{align*}
J_n (x) = \sum_{k=0}^\infty \dfrac{(-1)^k (x/2)^{n+2k}}{k! (n+k)!}
\end{align*}

In particular

\begin{align*}
J_2 (x) = \sum_{k=0}^\infty \dfrac{(-1)^k (x/2)^{2+2k}}{k! (2+k)!}
\end{align*}

Obviously

\begin{align*}
\left. \frac{d^2}{d x^2} \frac{1}{x} \dfrac{J_n (n x)}{n} \dfrac{J_{n+1} ((n+1) x)}{n+1} \right|_{x=0} = 0 \qquad \text{for } n > 1
\end{align*}

So we need only consider

\begin{align*}
A \left. \frac{d^2}{d x^2} J_2 (x) \right|_{x=0} = \left. \frac{d^2}{d x^2} \frac{1}{x} J_1 ( x) \dfrac{J_2 (2 x)}{2} \right|_{x=0}
\end{align*}

First

\begin{align*}
\left. \frac{d^2}{d x^2} J_2 (x) \right|_{x=0} = \left. \frac{d^2}{d x^2} \dfrac{ (x/2)^2}{0! (2+0)!} \right|_{x=0} = \frac{1}{4} .
\end{align*}

Next

\begin{align*}
& \left. \frac{d^2}{d x^2} \frac{1}{x} J_1 ( x) \dfrac{J_2 (2 x)}{2} \right|_{x=0} =
\nonumber \\
& = \left. \frac{d^2}{d x^2} \left( \frac{1}{x} \sum_{k=0}^\infty \dfrac{(-1)^k (x/2)^{1+2k}}{k! (1+k)!} \right)
\left( \frac{1}{2} \sum_{k=0}^\infty \dfrac{(-1)^k (2x/2)^{2+2k}}{k! (2+k)!} \right)
\right|_{x=0}
\nonumber \\
& = \left. \frac{d^2}{d x^2} \left( \frac{1}{x} \dfrac{(-1)^0 (x/2)}{0! (1+0)!} \right) \left( \frac{1}{2} \dfrac{x^2}{0! (2+0)!} \right)
\right|_{x=0}
\nonumber \\
& = \left. \frac{d^2}{d x^2} \left( \frac{1}{x} (x/2) \right) \left( \frac{1}{2} \dfrac{x^2}{2!} \right)
\right|_{x=0}
\nonumber \\
& = \frac{1}{4}
\end{align*}

implying ##A = 1##.

So it is just left to show that:

\begin{align*}
x^2 \frac{d^2 (RHS)}{d x^2} + x \frac{d (RHS)}{d x} + (x^2 - 4) (RHS) = 0 .
\end{align*}
 
  • #16
Paul Colby
I think I have a good lead on a proof. The ##J_n(x)## are analytic throughout the entire complex plane. If we power series expand both sides of our problem about ##x=0##, we must only show that coefficients for corresponding powers of ##x## are equal. The good news is, only a finite number of terms contribute for any fixed power of ##x## so it's just a matter of arithmetic to show equality. The bad news is, we must show the arithmetic is valid for all powers of ##x##.

For reference, our problem is to show,
$$
J_2(x)=\frac{1}{x}\sum_{n=1}^\infty \frac{J_n(nx)J_{n+1}((n+1)x)}{n(n+1)}
$$
We make repeated use of the series expansion,
$$
J_n(x) = \sum_{k=0}^\infty A(n,k)x^{n+2k},
$$
where
$$
A(n,k) = \frac{(-1)^k}{2^{n+2k}k!(n+k)!}.
$$
Clearly,
$$
\frac{J_n(nx)}{n} = \sum_{r=0}^\infty n^{n+2r-1}A(n,r)\;x^{n+2r},
$$
with equally apparent,
$$
\frac{J_{n+1}((n+1)x)}{x(n+1)} = \sum_{s=0}^\infty (n+1)^{n+2s}A(n+1,s)\;x^{n+2s}.
$$
Substituting these expressions into the right-hand-side, we equate coefficients of equal powers of ##x##,
$$
A(2,k) = \sum_{m+r+s=k} (m+1)^{m+2r}(m+2)^{m+2s+1}A(m+1,r)A(m+2,s).
$$
Note that we've put all indices on an equal footing by choosing ##n=m+1##. This restricts the indices ##m,r,s\ge 0##. For any finite ##k## one may program the expression on the right in the computer algebra program of choice. I chose Maxima because it's free. I've checked the last equation is arithmetically true for ##k=0## to 30. The rest sounds like a job for recursion on ##k##.
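As a sketch, the same finite check can be done in exact rational arithmetic using only the Python standard library, so no CAS is needed (the names `A` and `rhs_coeff` mirror the notation above; the cutoff ##k \le 12## is arbitrary):

```python
from fractions import Fraction
from math import factorial

def A(n, k):
    """Series coefficient: J_n(x) = sum_k A(n,k) x^(n+2k)."""
    return Fraction((-1)**k, 2**(n + 2*k) * factorial(k) * factorial(n + k))

def rhs_coeff(k):
    """Right-hand side of the claimed identity, summed over m + r + s = k."""
    total = Fraction(0)
    for m in range(k + 1):
        for r in range(k - m + 1):
            s = k - m - r
            total += ((m + 1)**(m + 2*r) * (m + 2)**(m + 2*s + 1)
                      * A(m + 1, r) * A(m + 2, s))
    return total

for k in range(13):
    assert rhs_coeff(k) == A(2, k)  # exact equality in the rationals
print("verified for k = 0..12")
```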
 
  • #17
renormalize
Motivated by Paul Colby's comment, I am posting my very similar “proof” of the OP's Bessel-function identity in the hopes it may inspire someone to discover a proper proof of the finite sum formula (7) below involving the Gauss hypergeometric function ##_{2}F_{1}##.

Start with the series representation of the Bessel functions of the first kind:$$J_{n}\left(z\right)=\sum_{k=0}^{\infty}\frac{\left(-1\right)^{k}}{k!\left(k+n\right)!}\left(\frac{z}{2}\right)^{2k+n}\qquad\qquad(1)$$and define the function ##\phi\left(x\right)## by:$$x\thinspace\phi\left(x\right)\equiv\sum_{p=1}^{\infty}\frac{J_{p}\left(p\,x\right)J_{p+1}\left(\left(p+1\right)x\right)}{p\left(p+1\right)}=\sum_{n=0}^{\infty}\frac{J_{n+1}\left(\left(n+1\right)x\right)J_{n+2}\left(\left(n+2\right)x\right)}{\left(n+1\right)\left(n+2\right)}\qquad(2)$$Using (1), the factors under the summation are ##J_{n+1}\left(\left(n+1\right)x\right)/\left(n+1\right)=\sum_{i=0}^{\infty}a_{i}\left(n,x\right)## and ##J_{n+2}\left(\left(n+2\right)x\right)/\left(n+2\right)=\sum_{j=0}^{\infty}b_{j}\left(n,x\right)## where:$$a_{i}\left(n,x\right)\equiv\frac{\left(-1\right)^{i}\left(n+1\right)^{2i+n}}{i!\left(i+n+1\right)!}\left(\frac{x}{2}\right)^{2i+n+1},\;b_{j}\left(n,x\right)\equiv\frac{\left(-1\right)^{j}\left(n+2\right)^{2j+n+1}}{j!\left(j+n+2\right)!}\left(\frac{x}{2}\right)^{2j+n+2}\qquad(3)$$This puts (2) into the form ##x\thinspace\phi\left(x\right)=\sum_{n=0}^{\infty}\left(\sum_{i=0}^{\infty}a_{i}\left(n,x\right)\right)\left(\sum_{j=0}^{\infty}b_{j}\left(n,x\right)\right)##, to which we can apply the Cauchy product formula ##\left(\sum_{i=0}^{\infty}a_{i}\left(n,x\right)\right)\left(\sum_{j=0}^{\infty}b_{j}\left(n,x\right)\right)=\sum_{k=0}^{\infty}\sum_{l=0}^{k}a_{l}\left(n,x\right)b_{k-l}\left(n,x\right)## and (3) to get:$$x\thinspace\phi\left(x\right) =\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\sum_{l=0}^{k}a_{l}\left(n,x\right)b_{k-l}\left(n,x\right)$$$$=\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\sum_{l=0}^{k}\frac{\left(-1\right)^{k}\left(n+1\right)^{2l+n}\left(n+2\right)^{2k-2l+n+1}}{l!\left(k-l\right)!\left(l+n+1\right)!\left(k-l+n+2\right)!}\left(\frac{x}{2}\right)^{2\left(k+n\right)+3}\qquad\qquad(4)$$Now introduce a new index ##m\equiv k+n## ##(0\leq m<\infty)## and make the replacement ##k=m-n##.
Note that since ##k## must be non-negative, the range of ##n## is now restricted to be ##0\leq n\leq m## so that (4) becomes:$$x\thinspace\phi\left(x\right)=\sum_{m=0}^{\infty}\sum_{n=0}^{m}\sum_{l=0}^{m-n}\frac{\left(-1\right)^{m-n}\left(n+1\right)^{n+2l}\left(n+2\right)^{2m-2l-n+1}}{l!\left(m-l+2\right)!\left(n+l+1\right)!\left(m-l-n\right)!}\left(\frac{x}{2}\right)^{2m+3}$$$$=\;x\sum_{m=0}^{\infty}s\left(m\right)\frac{\left(-1\right)^{m}}{m!\left(m+2\right)!}\left(\frac{x}{2}\right)^{2m+2}\qquad\qquad(5)$$where ##s(m)## is the finite double-sum:$$s\left(m\right)\equiv\sum_{n=0}^{m}\sum_{l=0}^{m-n}\frac{m!\left(m+2\right)!\left(-1\right)^{-n}\left(n+1\right)^{n+2l}\left(n+2\right)^{2m-2l-n+1}}{2\;l!\left(m-l+2\right)!\left(n+l+1\right)!\left(m-l-n\right)!}\qquad\qquad(6)$$Remarkably, Mathematica can evaluate the inner-most sum of (6) in closed form:$$s(m)=\sum_{n=0}^{m}\frac{m!\left(-1\right)^{-n}\left(n+1\right)^{n}\left(n+2\right)^{2m-n+1}{}_{2}F_{1}\left(-m-2,n-m;n+2;\left(\frac{n+1}{n+2}\right)^{2}\right)}{2\left(n+1\right)!\left(m-n\right)!}$$$$=\;1\quad(??)\qquad\qquad(7)$$But alas, Mathematica chokes on this second sum. Nevertheless, I have verified that (7) is true for all integers ##m## from 0 to 100. So under the assumption that ##s(m)=1## holds in general, eq.(5) reduces to:$$\phi\left(x\right)=\sum_{m=0}^{\infty}\frac{\left(-1\right)^{m}}{m!\left(m+2\right)!}\left(\frac{x}{2}\right)^{2m+2}=J_{2}\left(x\right)$$which completes the “proof” of the OP's identity.
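The inner-sum evaluation (7) can likewise be spot-checked outside Mathematica. A sketch with mpmath (the helper name `s` mirrors the notation above; high precision because the summands are individually huge before they cancel, and ##(-1)^{-n}=(-1)^n##):

```python
from mpmath import mp, mpf, hyp2f1, factorial

mp.dps = 100  # generous precision for the large cancelling terms

def s(m):
    """Single-sum form (7) of s(m); conjectured to equal 1 for all m >= 0."""
    total = mpf(0)
    for n in range(m + 1):
        total += (factorial(m) * (-1)**n * mpf(n + 1)**n  # (-1)^(-n) == (-1)^n
                  * mpf(n + 2)**(2*m - n + 1)
                  * hyp2f1(-m - 2, n - m, n + 2, (mpf(n + 1) / (n + 2))**2)
                  / (2 * factorial(n + 1) * factorial(m - n)))
    return total

for m in range(12):
    print(m, s(m))  # each value should equal 1 up to rounding
```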
 
  • #18
julian
I am not contributing something new. I am taking a result given in Watson and arriving at the same condition that @renormalize arrived at.

On page 148 of Watson is the formula:

\begin{align*}
J_\mu (az) J_\nu (bz) & = \dfrac{ (\frac{1}{2} az)^\mu (\frac{1}{2} b z)^\nu}{\Gamma (\nu+1)}
\nonumber \\
& \times \sum_{m=0}^\infty \dfrac{(-1)^m (\frac{1}{2} az)^{2m} \;_2F_1 \left( -m , - \mu - m ; \nu+1 ; b^2/a^2 \right)}{ m! \Gamma (\mu+m+1)}
\end{align*}

Here https://books.google.co.uk/books?id=qy1GNv2ovHQC&lpg=PA59&dq=integral+product+bessel+functions&pg=PA59&redir_esc=y#v=onepage&q=integral product bessel functions&f=false they give a slightly different version of the formula:

\begin{align*}
J_\mu (az) J_\nu (bz) & = \dfrac{ (\frac{1}{2} az)^\mu (\frac{1}{2} b z)^\nu}{\Gamma (\mu+1)}
\nonumber \\
& \times \sum_{m=0}^\infty \dfrac{(-1)^m (\frac{1}{2} bz)^{2m} \;_2F_1 \left( -m , - \nu - m ; \mu+1 ; a^2/b^2 \right)}{m! \Gamma (\nu+m+1)}
\end{align*}

which is obtained from the Watson formula by simultaneously performing the interchanges ##a \leftrightarrow b## and ##\mu \leftrightarrow \nu##. We'll use the second version, from which we have:

\begin{align*}
& \dfrac{J_{n+1} ((n+1)x)}{n+1} \dfrac{J_{n+2} ((n+2)x)}{n+2}
\nonumber \\
& = \dfrac{ ((n+1)x/2)^{n+1} ((n+2)x/2)^{n+2}}{(n+1) (n+2) (n+1)!} \sum_{m=0}^\infty \dfrac{(-1)^m ((n+2) x/2)^{2m}}{ m! (n+m+2)! }
\nonumber \\
& \qquad \qquad \times \;_2F_1 \left( -m , - n - 2 - m ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right)
\end{align*}

or

\begin{align*}
& \dfrac{J_{n+1} ((n+1)x)}{n+1} \dfrac{J_{n+2} ((n+2)x)}{n+2}
\nonumber \\
& = \dfrac{(n+1)^n (n+2)^{n+1}}{(n+1)!} \sum_{m=0}^\infty \dfrac{(-1)^m (n+2)^{2m}}{ m! (n+m+2)!}
\nonumber \\
& \qquad \qquad \times \;_2F_1 \left( -m , - n - 2 - m ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right) \left( \frac{x}{2} \right)^{2n+2m+3}
\end{align*}

Then,

\begin{align*}
x \phi (x) & = \sum_{n=0}^\infty \dfrac{J_{n+1} ((n+1)x)}{n+1} \dfrac{J_{n+2} ((n+2)x)}{n+2}
\nonumber \\
& = \sum_{n=0}^\infty \sum_{m=0}^\infty \dfrac{(-1)^m (n+1)^n (n+2)^{2m+n+1}}{(n+1)! m! (n+m+2)!}
\nonumber \\
& \qquad \qquad \times \;_2F_1 \left( -m , - n - 2 - m ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right) \left( \frac{x}{2} \right)^{2n+2m+3}
\nonumber \\
& = \sum_{n=0}^\infty \sum_{m=0}^\infty s_{n,m} \left( \frac{x}{2} \right)^{2n+2m+3}
\end{align*}

where ##s_{n,m}## is defined by

\begin{align*}
s_{n,m} & = \dfrac{(-1)^m (n+1)^n (n+2)^{2m+n+1}}{(n+1)! m! (n+m+2)!}
\;_2F_1 \left( -m , - n - 2 - m ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right)
\end{align*}

The double sum can be written

\begin{align*}
\sum_{n=0}^\infty \sum_{m=0}^\infty s_{n,m} \left( \frac{x}{2} \right)^{2n+2m+3} = \sum_{m=0}^\infty \sum_{n=0}^m s_{n,m-n} \left( \frac{x}{2} \right)^{2m+3}
\end{align*}

Here

\begin{align*}
s_{n,m-n} & = \dfrac{(-1)^{m-n} (n+1)^n (n+2)^{2m-n+1}}{(n+1)! (m-n)! (m+2)!}
\;_2F_1 \left( -m + n , - 2 - m ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right)
\end{align*}

So

\begin{align*}
\phi (x) = \sum_{m=0}^\infty \sum_{n=0}^m \frac{1}{2} s_{n,m-n} \left( \frac{x}{2} \right)^{2m+2} .
\end{align*}

Writing

\begin{align*}
\phi (x) & = \sum_{m=0}^\infty \sum_{n=0}^m \frac{1}{2} s_{n,m-n} (-1)^m m! (m+2)! \dfrac{(-1)^m}{m! (m+2)!} \left( \frac{x}{2} \right)^{2m+2}
\end{align*}

we see that if ##\phi (x) = J_2 (x)## we would have

\begin{align*}
\sum_{n=0}^m \frac{1}{2} (-1)^m m! (m+2)! s_{n,m-n} = 1 \qquad \text{for } m \geq 0 .
\end{align*}

That is, if ##\phi (x) = J_2 (x)## we would have

\begin{align*}
\sum_{n=0}^m \dfrac{m! (-1)^{-n} (n+1)^n (n+2)^{2m-n+1}
\;_2F_1 \left( n-m , - m - 2 ; n+2 ; \left( \frac{n+1}{n+2} \right)^2 \right)}{2 (n+1)! (m-n)!} = 1
\end{align*}

for ##m \geq 0##, which is the same condition that @renormalize arrived at.
 
