How do I solve this Bessel function integral using a u-substitution?

erok81

Homework Statement



This is part of a vibrating circular membrane problem, so if I need to post more details please let me know. Everything is pretty straightforward with the information I'll provide, but you never know.

We haven't really learned what these are, just that they are complicated and we won't learn them for a while. All the stuff I am posting is the extent of what I know. Hopefully this works.

I've used an identity for all of the other integrals I've done. This is the first I have to actually calculate.

\int_{0}^{2} J_{0}( \lambda _{0,n} r) r dr

\lambda_{0,n} = \frac{\alpha_{0,n}}{2}

Homework Equations



None.

The Attempt at a Solution



I'm not really sure at all how to solve these, but here is my first guess of how it could go.

Do a u-substitution with u = λ0,nr.

That might work, followed by integration by parts. Actually, I take that back; that isn't even close to how it should be solved.

So in other words, I have zero idea how to start this one.

Any ideas on how to get this started?
 
Here's an example of the identity I was using for previous problems:

\int _{0}^{a} (a^2 -r^2)r^{k+1} J_k ( \frac{ \alpha }{a}r) dr = 2 \frac{a^{k+4}}{\alpha ^{2}}J_{k+2} (\alpha)

And what led me to the original integral in question...which I should have put in the original post.

a_{0,n}^{*}= \frac{1}{2 \pi \alpha_{0,n}J_{1}^{2}(\alpha_{0,n})} \int_{0}^{2} \int_{0}^{2 \pi}g(r, \theta) J_0 (\lambda_{0,n} r)\, r\, d\theta\, dr

Where g(r,θ) = 1. Performing the θ integral, I end up with

a_{0,n}^{*}= \frac{1}{ \alpha_{0,n}J_{1}^{2}(\alpha_{0,n})} \int_{0}^{2} J_0 (\lambda _{0,n} r) r dr

Which is what I have in post 1 without the stuff to the left of the integral.
 
Abramowitz and Stegun has the identity (11.3.20)

\int_0^z t^\nu J_{\nu-1}(t) dt = z^\nu J_\nu(z), ~~\text{Re}(\nu)>0,

that seems to be what you need. I imagine that this follows directly from the series representation.
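If you want to convince yourself numerically before trusting the table, here's a quick sketch of my own (using SciPy's `jv` and `quad`; the function names `lhs`/`rhs` are just my labels, not anything from A&S):

```python
# Numerical sanity check of A&S 11.3.20:
#   int_0^z t^nu J_{nu-1}(t) dt = z^nu J_nu(z),  Re(nu) > 0
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def lhs(z, nu):
    # Integrate t^nu * J_{nu-1}(t) from 0 to z numerically.
    val, _ = quad(lambda t: t**nu * jv(nu - 1, t), 0.0, z)
    return val

def rhs(z, nu):
    # Closed form from the identity.
    return z**nu * jv(nu, z)

# Check a few (z, nu) pairs.
for z, nu in [(1.0, 1), (2.5, 1), (3.0, 2)]:
    assert np.isclose(lhs(z, nu), rhs(z, nu))
```

The two sides agree to quadrature accuracy for every pair tried, which is good evidence the identity is the right one for your integral (your case is ν = 1).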
 
Yep, that should work nicely. Thanks for posting it. It also helped me see that they use tables for all of these.

And after reading ahead five sections, to the chapter that deals with Bessel functions in detail, I found one example where they actually integrated almost exactly the same thing.

It looked like they did a u-substitution like I mentioned, but didn't integrate by parts, which is weird. They bumped the Bessel function up one, left the r in place, and imposed the integration limits. But now that I look at it, they used a table entry which is an identity similar to yours.

I really hate using integration tables, but I guess it has to be done sometimes.
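For reference, here's how the u-substitution plus that identity plays out for my integral (using λ0,n = α0,n/2 from post 1, so the upper limit becomes 2λ0,n = α0,n):

```latex
% substitute t = \lambda_{0,n} r, so dt = \lambda_{0,n}\,dr:
\int_0^2 J_0(\lambda_{0,n} r)\, r\, dr
  = \frac{1}{\lambda_{0,n}^2} \int_0^{2\lambda_{0,n}} t\, J_0(t)\, dt
  = \frac{1}{\lambda_{0,n}^2} \Big[ t\, J_1(t) \Big]_0^{\alpha_{0,n}}
  = \frac{2}{\lambda_{0,n}}\, J_1(\alpha_{0,n})
```

which is exactly the "bump the Bessel function up one" step the book did.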
 
Like I said, I think the series expansion would work. You can also derive this from the recurrence relation for the derivative:

t \frac{d}{dt} J_\nu(t) = t J_{\nu-1}(t) - \nu J_\nu (t).

We can write

\left( t^\nu \frac{d}{dt} + \nu t^{\nu-1} \right) J_\nu (t) = t^\nu J_{\nu-1}(t),

which rearranges to

t^\nu J_{\nu-1}(t) = \frac{d}{dt} \left( t^\nu J_\nu (t) \right).

Integrating both sides results in the identity above.
 
That makes sense. I had to read ahead a ways and now I see how a series expansion would work. Thanks for the derivation as well. The more the better.

Maybe you could help me with one last thing. I've never understood this and the farther I get the worse it is, because it's used in every separation of variables problem. I can't find it anywhere in my text and I've tried to find a pattern but haven't been successful yet.

Say I start with this.

f(r) = \sum_{n=1}^{\infty} A_{n}J_{0}( \lambda _{n}r)

How does one get to this to solve for An?

A_{n}= \frac{2}{a^{2} J_{1}^{2}( \alpha _{n})} \int _{0}^{a} f(r) J_{0}(\lambda _{n}r)r dr

Where \lambda_{n} = \frac{\alpha_{n}}{a}

I am getting closer to understanding these, but this is the only major thing I cannot see.
 
There's an identity

\int_0^1 dx~x J_\mu(\alpha_{\mu,m}x)J_\mu(\alpha_{\mu,n}x) = \frac{1}{2} \delta_{mn} [J_{\mu+1}(\alpha_{\mu,m})]^2,

where \alpha_{\mu,m} is the m^\text{th} zero of J_\mu(x). It looks like your \lambda_n = \alpha_{0,n}/a, so you just have to adjust the limit of integration to use the identity to prove your formula.
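Here's a numerical sketch of that orthogonality relation if you want to see it in action (my own check, using SciPy's `jn_zeros` for the zeros; `inner` is just my label for the integral):

```python
# Check: int_0^1 x J_mu(a_m x) J_mu(a_n x) dx = (1/2) delta_mn [J_{mu+1}(a_m)]^2,
# where a_m is the m-th positive zero of J_mu.
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

mu = 0
zeros = jn_zeros(mu, 3)   # first three positive zeros of J_0

def inner(m, n):
    # The weighted inner product of the m-th and n-th modes.
    f = lambda x: x * jv(mu, zeros[m] * x) * jv(mu, zeros[n] * x)
    val, _ = quad(f, 0.0, 1.0)
    return val

# Off-diagonal terms vanish; diagonal matches (1/2) [J_{mu+1}(alpha_m)]^2.
assert abs(inner(0, 1)) < 1e-8
assert np.isclose(inner(1, 1), 0.5 * jv(mu + 1, zeros[1])**2)
```

Rescaling x = r/a turns the limit of integration into your (0, a), which is the adjustment mentioned above.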
 
That may have been a poor example because I am not the best at using Bessel functions. I can integrate them fine now; it's moving from the sum to the integral that I'm having trouble with.

Here is an easier example.

My general solution is

u(x,y)= \sum_{n=1}^\infty B_{n} \sin\left(\frac{n \pi x}{a}\right) \sinh\left(\frac{n \pi y}{a}\right)

Then to solve for my Bn I have this integral

B_{n}=\frac{2}{a \sinh \frac{n \pi b}{a}} \int_{0}^{a} f(x) \sin\left(\frac{n \pi x}{a}\right) dx
 
OK, so it looks like u(x,b)=f(x) is a boundary condition. So write

\sum_n B_n \sin\left(\frac{n\pi x}{a}\right)\sinh\left(\frac{n\pi b}{a}\right) = f(x).~~(1)

Now, it's an important fact that the set of functions

t_n(x) = \sin\left(\frac{n\pi x}{a}\right)

are orthogonal, in the sense that

\int_0^a \sin\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi x}{a}\right) dx = \frac{a}{2} \delta_{mn}. ~~(2)

You can easily verify this formula by first considering m\neq n and then considering m=n. This orthogonality is analogous to the orthogonality of basis vectors, although here we are working in a space of functions rather than Euclidean space. You probably saw that there are similar orthogonality conditions on Bessel functions.
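As a quick numerical sanity check of (2) (my own sketch, with a = 3 chosen arbitrarily):

```python
# Verify: int_0^a sin(m pi x / a) sin(n pi x / a) dx = (a/2) delta_mn
import numpy as np
from scipy.integrate import quad

a = 3.0

def inner(m, n):
    # Inner product of the m-th and n-th sine modes on (0, a).
    f = lambda x: np.sin(m * np.pi * x / a) * np.sin(n * np.pi * x / a)
    val, _ = quad(f, 0.0, a)
    return val

assert abs(inner(2, 5)) < 1e-10          # m != n: zero
assert np.isclose(inner(4, 4), a / 2)    # m == n: a/2
```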

So let's go back and integrate (1) with a factor of t_m(x) inserted:

\int_0^a dx \sin\left(\frac{m\pi x}{a}\right) \sum_n B_n \sin\left(\frac{n\pi x}{a}\right)\sinh\left(\frac{n\pi b}{a}\right) =\int_0^a dx \sin\left(\frac{m\pi x}{a}\right) f(x).

Using (2), we can write

\sum_n \frac{a}{2} \delta_{mn} B_n \sinh\left(\frac{n\pi b}{a}\right) =\int_0^a dx \sin\left(\frac{m\pi x}{a}\right) f(x).

The \delta_{mn} picks out the mth term of the sum, so

\frac{a}{2} B_m \sinh\left(\frac{m\pi b}{a}\right) =\int_0^a dx \sin\left(\frac{m\pi x}{a}\right) f(x).

With a bit of algebra and a relabeling of m to n, we recover the formula you asked about.
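To make the whole chain concrete, here's a numerical sketch of my own (a = b = 1 and f(x) = x(a − x) are arbitrary choices, not from your problem) that computes B_n from the formula and checks that the series really does reproduce f(x) on the boundary y = b:

```python
import numpy as np
from scipy.integrate import quad

a = b = 1.0
f = lambda x: x * (a - x)   # example boundary data (my own choice)

def B(n):
    # B_n from the formula derived above.
    integral, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / a),
                       0.0, a, limit=200)
    return 2.0 / (a * np.sinh(n * np.pi * b / a)) * integral

def u(x, y, nmax=40):
    # Partial sum of the general solution.
    return sum(B(n) * np.sin(n * np.pi * x / a) * np.sinh(n * np.pi * y / a)
               for n in range(1, nmax + 1))

# On y = b the series should recover the boundary data f(x).
x0 = 0.37
assert abs(u(x0, b) - f(x0)) < 1e-3
```

The tolerance is loose only because the sum is truncated at 40 terms; the coefficients fall off like 1/n^3, so convergence is fast.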
 
Finally. Now I see what my professor was talking about when he does this step, and I can do it myself. I tried it on a few other formulas in my book and could get all of them.

I am assuming that in your last couple of equations you accidentally left off the sin under the integral?

Thanks for taking the time to type that all out. I've been wanting to understand this step for the last few chapters. Now I finally get it. So thanks again!
 
erok81 said:
Finally. Now I see what my professor was talking about when he does this step, and I can do it myself. I tried it on a few other formulas in my book and could get all of them.

I am assuming that in your last couple of equations you accidentally left off the sin under the integral?

Yes, I had been cutting and pasting, and the sin got left out.

Glad to have cleared that up. The idea that certain functions are orthogonal is of great importance in discussing the solutions of linear differential equations, so you have to know it.
 