Example of Ritz method with Bessel functions for trial function

In summary, the thread discusses using Bessel functions as trial functions in the Ritz method for approximating eigenvalues and eigenfunctions. The conceptual setup is the same as for any weighted inner product; the practical difficulty is evaluating the matrix entries and norms of the trial functions, which is one reason Bessel functions appear less often in numerical methods than, say, Chebyshev polynomials. The relevant chapters of Abramowitz & Stegun are suggested as a starting point. The thread also covers how the result changes for the generalized eigenvalue problem, and where the calculus enters the Rayleigh-Ritz method.
  • #1
member 428835
Hi PF!

Do you know of any examples of the Ritz method which use Bessel functions as trial functions? I’ve seen examples with polynomials, Legendre polynomials, and Fourier modes. However, all of these are orthogonal with weight 1, whereas Bessel functions are orthogonal with respect to a non-constant weight.

Any advice on an example (published journal, notes, book, etc) is SO appreciated!

thanks so much!
 
  • #2
I don't think there's anything more conceptually difficult about evaluating [tex]
\frac{\langle L(f), f \rangle}{\langle f, f \rangle}[/tex] where [tex]\langle f, g \rangle = \int_a^b f(x)g(x)w(x)\,dx[/tex] for non-constant [itex]w[/itex] than there is for [itex]w = 1[/itex].

Given your eigenvalue problem [itex]L(f) = \lambda f[/itex] subject to self-adjoint boundary conditions on [itex]f[/itex], you approximate [itex]f[/itex] as a linear combination of basis functions which satisfy the boundary conditions: [tex]f = \sum_{n = 1}^N a_n \phi_n.[/tex] Then since [itex]L[/itex] is linear, [tex]
L(f) = \sum_{n=1}^N a_n L(\phi_n) = \sum_{n=1}^N a_n\sum_{m=1}^N M_{nm}\phi_m [/tex] where [tex]
L(\phi_n) = \sum_{m=1}^N M_{nm} \phi_m.[/tex] If you let [itex]L(f) = \sum b_m\phi_m(x)[/itex] then you have now [tex]
\mathbf{b}^T = \mathbf{a}^T M[/tex] and taking the inner product with [itex]f[/itex] you get [tex]
\sum_{m} \sum_{n} b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_nb_n \|\phi_n\|^2[/tex] since the basis functions are orthogonal with respect to [itex]\langle \cdot, \cdot \rangle[/itex]. This gives you the approximation [tex]
\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle} \approx
\frac{\mathbf{b}^TD\mathbf{a}}{\mathbf{a}^TD\mathbf{a}}
= \frac{\mathbf{a}^TMD\mathbf{a}}{\mathbf{a}^TD\mathbf{a}}[/tex] where [tex]
D = \operatorname{diag}(\|\phi_1\|^2, \dots, \|\phi_N\|^2).[/tex] The difficulty with using Bessel functions rather than trigonometric or polynomials is not that the weight function is not constant (there are systems of orthogonal polynomials which also have non-constant weight functions) but in determining [tex]
M_{nm} = \langle L(\phi_n), \phi_m \rangle = \int_a^b x L(\phi_n)(x) \phi_m(x)\,dx[/tex] and [tex]\|\phi_n\|^2 = \int_a^b x \phi_n(x)^2\,dx.[/tex] This is one of the reasons why Bessel functions are less likely to be used in numerical methods than Chebyshev polynomials or finite elements.

If you do want to attempt it, then the relevant chapters of Abramowitz & Stegun are probably a good
place to start.
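As a concrete check of the machinery above, here is a minimal sketch (assuming SciPy; the operator, the choice N = 4, and the quadrature are illustrative, not from the thread). The trial functions J0(j_{0,n} x) on (0,1) with weight w(x) = x are exact eigenfunctions of L(f) = -f'' - f'/x with f(1) = 0, so the Ritz eigenvalues should reproduce j_{0,n}^2, which makes the bookkeeping easy to verify:

```python
# Ritz sketch with Bessel trial functions phi_n(x) = J0(j_{0,n} x), weight x.
# These are exact eigenfunctions of L(f) = -f'' - f'/x with f(1) = 0,
# so the computed eigenvalues should match j_{0,n}^2 (a sanity check).
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh
from scipy.special import j0, j1, jn_zeros

N = 4
z = jn_zeros(0, N)                                        # j_{0,1}, ..., j_{0,N}
phi  = [lambda x, zn=zn: j0(zn * x) for zn in z]
dphi = [lambda x, zn=zn: -zn * j1(zn * x) for zn in z]    # d/dx J0(zx) = -z J1(zx)

def wdot(f, g):
    """Weighted inner product <f, g> = int_0^1 x f(x) g(x) dx."""
    return quad(lambda x: x * f(x) * g(x), 0.0, 1.0)[0]

# Stiffness A_{nm} = <L(phi_n), phi_m> = int_0^1 x phi_n'(x) phi_m'(x) dx
# (integration by parts; boundary terms vanish), mass D_{nm} = <phi_n, phi_m>.
A = np.array([[wdot(dphi[n], dphi[m]) for m in range(N)] for n in range(N)])
D = np.array([[wdot(phi[n], phi[m]) for m in range(N)] for n in range(N)])

lam = eigh(A, D, eigvals_only=True)    # solve the pencil A c = lam D c
print(lam)                             # ≈ j_{0,n}^2: [5.783, 30.471, 74.887, 139.040]
```

The weak form is used so that A and D are symmetric; for a problem whose exact eigenfunctions are not Bessel functions, the same two quadrature loops are where the real cost lies.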
 
  • #3
pasmith said:
If you let [itex]L(f) = \sum b_m\phi_m(x)[/itex] then you have now [tex]
\mathbf{b}^T = \mathbf{a}^T M[/tex] and taking the inner product with [itex]f[/itex] you get [tex]
\sum_{m} \sum_{n} b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_nb_n \|\phi_n\|^2[/tex] since the basis functions are orthogonal with respect to [itex]\langle \cdot, \cdot \rangle[/itex]. This gives you the approximation [tex]
\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle} .[/tex]

Okay, I think this makes sense, but if we instead had the generalized eigenvalue problem ##L(f) = \lambda K(f)##, then the results would change, right? Now instead of orthogonality with respect to ##\langle \cdot, \cdot \rangle## we would require orthogonality with respect to ##\langle K[\cdot], \cdot \rangle##, right? Then the approximation would be $$\lambda = \frac{\langle L(f), f \rangle}{\langle K(f), f \rangle}$$
 
  • #4
Also @pasmith, so let me highlight the process and you can tell me if I'm understanding it correctly:

$$L(f) = \lambda K(f) \implies L\left(\sum_j a_j \phi_j\right) = \lambda K\left(\sum_j a_j \phi_j\right)$$

and then taking inner products we have

$$\left(L(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right)= \lambda \left(K(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right) \implies \sum_j \sum_i a_ja_i(L(\phi_j),\phi_i) = \lambda \sum_j \sum_i a_ja_i(K(\phi_j),\phi_i)$$

but from here I'm unsure how you proceeded. Could you explain the next step, i.e. how one ultimately arrives at the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}##?
 
  • #5
You can set [itex]L_{ij} = (L(\phi_i),\phi_j)[/itex], and then [tex]
\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T\mathbf{L}\mathbf{a}[/tex]
 
  • #6
pasmith said:
You can set [itex]L_{ij} = (L(\phi_i),\phi_j)[/itex], and then [tex]
\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T\mathbf{L}\mathbf{a}[/tex]
Right, but ultimately shouldn't we arrive at $$\mathbf{L} \mathbf{a} = \lambda \mathbf{K} \mathbf{a}$$ where ##\mathbf{L}_{ij} = (L(\phi_i),\phi_j)## and ##\mathbf{K}_{ij} = (K(\phi_i),\phi_j)## and ##\mathbf a## is the eigenvector? Isn't Rayleigh-Ritz a variational method? But we've not used any calculus up to this point (aside from inner products).
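The calculus enters exactly here: the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}## is the stationarity condition ##\partial R/\partial a_i = 0## of the discrete Rayleigh quotient ##R(\mathbf a) = (\mathbf{a}^T\mathbf{L}\mathbf{a})/(\mathbf{a}^T\mathbf{K}\mathbf{a})##. A small numerical check (the matrices here are random and purely illustrative, assuming SciPy):

```python
# Check that the gradient of the Rayleigh quotient R(a) = (a^T L a)/(a^T K a)
# vanishes at generalized eigenvectors of L a = lam K a -- the calculus step
# behind the matrix equation.  L and K are random symmetric test matrices.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
L = B + B.T                        # symmetric (self-adjoint L)
C = rng.standard_normal((4, 4))
K = C @ C.T + 4 * np.eye(4)        # symmetric positive definite (self-adjoint K)

lam, vecs = eigh(L, K)             # solves L a = lam K a

a = vecs[:, 0]
R = (a @ L @ a) / (a @ K @ a)                      # Rayleigh quotient at a
grad = 2 * (L @ a - R * (K @ a)) / (a @ K @ a)     # dR/da
print(np.isclose(R, lam[0]), np.max(np.abs(grad))) # True, ≈ 0
```

Setting the gradient ##2(\mathbf{L}\mathbf{a} - R\,\mathbf{K}\mathbf{a})/(\mathbf{a}^T\mathbf{K}\mathbf{a})## to zero recovers the generalized eigenproblem, with the eigenvalue equal to the stationary value of ##R##.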
 

1. What is the Ritz method?

The Ritz method is a numerical technique that approximates the solution of a differential equation, typically an eigenvalue or boundary value problem, by making an associated functional (such as the Rayleigh quotient or an energy integral) stationary over a finite-dimensional space of trial functions. It is widely used in engineering and science for problems that cannot be solved analytically.

2. How does the Ritz method work?

The Ritz method works by writing the unknown solution as a linear combination of trial functions, typically polynomials or other convenient basis functions that satisfy the boundary conditions. The coefficients are then chosen to make the associated functional stationary, which leads to a linear system or a (generalized) matrix eigenvalue problem whose solution gives the approximate answer.
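A minimal worked sketch of those steps (illustrative, assuming SciPy): for -u'' = lam*u on (0,1) with u(0) = u(1) = 0, whose smallest exact eigenvalue is pi^2 ≈ 9.8696, take the two polynomial trial functions x(1-x) and x^2(1-x)^2:

```python
# Minimal Ritz example: -u'' = lam*u on (0,1), u(0) = u(1) = 0.
# Trial functions x(1-x) and x^2(1-x)^2 satisfy the boundary conditions;
# the smallest Ritz eigenvalue approximates pi^2 from above.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

phi  = [lambda x: x * (1 - x),  lambda x: (x * (1 - x))**2]
dphi = [lambda x: 1 - 2 * x,    lambda x: 2 * x - 6 * x**2 + 4 * x**3]
N = len(phi)

# Stiffness A_ij = int_0^1 phi_i' phi_j' dx, mass B_ij = int_0^1 phi_i phi_j dx
A = np.array([[quad(lambda x: dphi[i](x) * dphi[j](x), 0, 1)[0]
               for j in range(N)] for i in range(N)])
B = np.array([[quad(lambda x: phi[i](x) * phi[j](x), 0, 1)[0]
               for j in range(N)] for i in range(N)])

lam = eigh(A, B, eigvals_only=True)   # Ritz eigenvalues
print(lam[0])                         # ≈ 9.8697, vs exact pi^2 ≈ 9.8696
```

With just two trial functions the lowest eigenvalue is already accurate to about four significant figures; the Ritz estimate lies above the exact value, as the variational principle guarantees.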

3. What are Bessel functions?

Bessel functions are a class of special functions that arise in many areas of mathematics and physics, particularly in problems involving circular or cylindrical symmetry. They are named after the mathematician Friedrich Bessel and are characterized by their oscillatory behavior.

4. How are Bessel functions used in the Ritz method?

In the Ritz method, Bessel functions are used as trial functions to approximate the solution of a differential equation. This is because they are eigenfunctions of the Bessel differential operator, which arises naturally in problems with cylindrical symmetry, and they can be chosen to satisfy the boundary conditions. This makes them a natural choice of trial function for such problems.

5. What are some advantages of using the Ritz method with Bessel functions?

One advantage of using the Ritz method with Bessel functions is that it can provide accurate approximations for eigenvalue problems with cylindrical symmetry that cannot be solved analytically. It also allows a flexible choice of trial functions, making it applicable to a wide range of problems. When the problem naturally involves the Bessel operator, the trial functions capture much of the exact behavior, so few terms may be needed; the trade-off, as noted above, is that the required integrals are harder to evaluate than for polynomial bases.
