Graduate example: Ritz method with Bessel functions as trial functions

Summary
The discussion focuses on applying the Ritz method with Bessel functions as trial functions, highlighting the challenges posed by their non-constant weight function compared with other orthogonal families such as polynomials. It emphasizes that while evaluating the eigenvalue problem ##L(f) = \lambda f## is conceptually similar regardless of the weight function, the complexity lies in calculating the matrix elements ##M_{nm}## and the norms of the basis functions. The conversation also touches on the generalized eigenvalue problem ##L(f) = \lambda K(f)##, noting that orthogonality must then be defined with respect to the inner product involving ##K##. Participants seek clarification on the steps leading to the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}## and on the role of the Rayleigh-Ritz method in this context. Overall, the thread offers insight into the mathematical intricacies of using Bessel functions in numerical methods.
member 428835
Hi PF!

Do you know of any examples of the Ritz method which use Bessel functions as trial functions? I've seen examples with polynomials, Legendre polynomials, and Fourier modes; however, all of these are orthogonal with weight 1, whereas Bessel functions are not.

Any advice on an example (published journal, notes, book, etc.) is SO appreciated!

thanks so much!
 
I don't think there's anything more conceptually difficult about evaluating $$\frac{\langle L(f), f \rangle}{\langle f, f \rangle}, \quad \text{where} \quad \langle f, g \rangle = \int_a^b f(x)\,g(x)\,w(x)\,dx,$$ for non-constant ##w## than there is for ##w = 1##.

Given your eigenvalue problem ##L(f) = \lambda f## subject to self-adjoint boundary conditions on ##f##, you approximate ##f## as a linear combination of basis functions which satisfy the boundary conditions: ##f = \sum_{n=1}^N a_n \phi_n##. Then, since ##L## is linear, $$L(f) = \sum_{n=1}^N a_n L(\phi_n) = \sum_{n=1}^N a_n \sum_{m=1}^N M_{nm}\phi_m, \quad \text{where} \quad L(\phi_n) = \sum_{m=1}^N M_{nm}\phi_m.$$ If you write ##L(f) = \sum_m b_m \phi_m(x)##, then ##\mathbf{b}^T = \mathbf{a}^T M##, and taking the inner product with ##f## you get $$\langle L(f), f \rangle = \sum_m \sum_n b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_n b_n \|\phi_n\|^2,$$ since the basis functions are orthogonal with respect to ##\langle \cdot, \cdot \rangle##. This gives you the approximation $$\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle} \approx \frac{\mathbf{b}^T D \mathbf{a}}{\mathbf{a}^T D \mathbf{a}} = \frac{\mathbf{a}^T M D \mathbf{a}}{\mathbf{a}^T D \mathbf{a}}, \quad \text{where} \quad D = \operatorname{diag}(\|\phi_1\|^2, \dots, \|\phi_N\|^2).$$ The difficulty with using Bessel functions rather than trigonometric functions or polynomials is not that the weight function is non-constant (there are systems of orthogonal polynomials which also have non-constant weight functions), but in determining the coefficients ##M_{nm} = \langle L(\phi_n), \phi_m \rangle / \|\phi_m\|^2##, which requires the integrals $$\langle L(\phi_n), \phi_m \rangle = \int_a^b x\, L(\phi_n)(x)\, \phi_m(x)\,dx \quad \text{and} \quad \|\phi_n\|^2 = \int_a^b x\, \phi_n(x)^2\,dx.$$ This is one of the reasons why Bessel functions are less likely to be used in numerical methods than Chebyshev polynomials or finite elements.
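For concreteness, here is a minimal numerical sketch of this procedure, assuming the model problem ##L(f) = -\frac{1}{x}\left(x f'\right)'## on ##(0, 1)## with ##f(1) = 0##, whose exact eigenfunctions are ##J_0(j_{0,n} x)## with eigenvalues ##j_{0,n}^2##; the basis size ##N## and the quadrature are illustrative choices:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

# Minimal sketch, assuming the model problem L(f) = -(1/x)(x f')' on (0, 1)
# with f(1) = 0. The trial functions phi_n(x) = J_0(j_{0,n} x) satisfy the
# boundary condition and are orthogonal with weight w(x) = x.

N = 4
j0n = jn_zeros(0, N)                  # first N positive zeros of J_0

def phi(n, x):
    return jv(0, j0n[n] * x)

def L_phi(n, x):
    # For this model operator, L(phi_n) = j_{0,n}^2 phi_n exactly; for a
    # general L one would differentiate the trial functions instead.
    return j0n[n]**2 * phi(n, x)

# <L(phi_n), phi_m> with weight x, computed by quadrature.
A = np.array([[quad(lambda x: x * L_phi(n, x) * phi(m, x), 0, 1)[0]
               for m in range(N)] for n in range(N)])

# Norms ||phi_n||^2 = int_0^1 x J_0(j_{0,n} x)^2 dx = J_1(j_{0,n})^2 / 2.
D = np.diag(jv(1, j0n)**2 / 2)

# Rayleigh quotient lambda = a^T M D a / a^T D a; note a^T M D a = a^T A a,
# because A[n, m] = M[n, m] * ||phi_m||^2.
a = np.ones(N)
lam = (a @ A @ a) / (a @ D @ a)
print(lam)
```

Minimizing the quotient over ##\mathbf{a}##, rather than fixing ##\mathbf{a} = (1, \dots, 1)## as here, would recover the smallest eigenvalue ##j_{0,1}^2 \approx 5.78##.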

If you do want to attempt it, then the relevant chapters of Abramowitz & Stegun are probably a good place to start.
 
pasmith said:
If you write ##L(f) = \sum_m b_m \phi_m(x)##, then ##\mathbf{b}^T = \mathbf{a}^T M##, and taking the inner product with ##f## you get $$\langle L(f), f \rangle = \sum_m \sum_n b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_n b_n \|\phi_n\|^2,$$ since the basis functions are orthogonal with respect to ##\langle \cdot, \cdot \rangle##. This gives you the approximation $$\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle}.$$

Okay, I think this makes sense, but if we instead had the generalized eigenvalue problem ##L(f) = \lambda K(f)##, then the results would change, right? Now instead of orthogonality with respect to ##\langle \cdot, \cdot \rangle## we would require orthogonality with respect to ##\langle K[\cdot], \cdot \rangle##, right? Then the approximation would be $$\lambda = \frac{\langle L(f), f \rangle}{\langle K(f), f \rangle}.$$
 
Also @pasmith, let me outline the process so you can tell me whether I'm understanding it correctly:

$$L(f) = \lambda K(f) \implies L\left(\sum_j a_j \phi_j\right) = \lambda K\left(\sum_j a_j \phi_j\right)$$

and then taking inner products we have

$$\left(L(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right)= \lambda \left(K(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right) \implies \sum_j \sum_i a_ja_i(L(\phi_j),\phi_i) = \lambda \sum_j \sum_i a_ja_i(K(\phi_j),\phi_i)$$

but from here I'm unsure how you proceeded. Could you explain the next step and how one ultimately arrives at the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}##?
 
You can set ##L_{ij} = (L(\phi_i), \phi_j)##, and then $$\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T \mathbf{L} \mathbf{a}.$$
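As a minimal sketch of that assembly step, assuming the same model Bessel problem as above with ##K(f) = f## (so that ##\mathbf{K}## is just the Gram matrix of the basis under weight ##x##), the generalized problem ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}## can be handed to a standard symmetric-definite solver:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

# Illustrative sketch, reusing the model problem above with K(f) = f.
N = 4
j0n = jn_zeros(0, N)
basis = [lambda x, k=j0n[n]: jv(0, k * x) for n in range(N)]
L_basis = [lambda x, k=j0n[n]: k**2 * jv(0, k * x) for n in range(N)]

def inner(u, v):
    # (u, v) = int_0^1 x u(x) v(x) dx, the weighted inner product.
    return quad(lambda x: x * u(x) * v(x), 0, 1)[0]

Lmat = np.array([[inner(L_basis[i], basis[j]) for j in range(N)]
                 for i in range(N)])
Kmat = np.array([[inner(basis[i], basis[j]) for j in range(N)]
                 for i in range(N)])

# Solve Lmat @ a = lam * Kmat @ a; eigh handles the symmetric-definite
# generalized eigenvalue problem directly.
lams, avecs = eigh(Lmat, Kmat)
print(lams)          # should approximate the exact eigenvalues j_{0,n}^2
```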
 
pasmith said:
You can set ##L_{ij} = (L(\phi_i), \phi_j)##, and then $$\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T \mathbf{L} \mathbf{a}.$$
Right, but ultimately shouldn't we arrive at $$\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a},$$ where ##\mathbf{L}## has entries ##L_{ij} = (L(\phi_i), \phi_j)##, ##\mathbf{K}## has entries ##K_{ij} = (K(\phi_i), \phi_j)##, and ##\mathbf{a}## is the eigenvector? Isn't Rayleigh-Ritz a variational method? But we haven't used any calculus up to this point (aside from inner products).
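For reference, the variational step enters when the Rayleigh quotient is made stationary with respect to the coefficients; this is a standard argument, assuming ##\mathbf{L}## is symmetric and ##\mathbf{K}## is symmetric positive definite: $$\lambda(\mathbf{a}) = \frac{\mathbf{a}^T \mathbf{L}\mathbf{a}}{\mathbf{a}^T \mathbf{K}\mathbf{a}}, \qquad \nabla_{\mathbf{a}}\lambda = \frac{2\,\mathbf{L}\mathbf{a}\,(\mathbf{a}^T\mathbf{K}\mathbf{a}) - 2\,\mathbf{K}\mathbf{a}\,(\mathbf{a}^T\mathbf{L}\mathbf{a})}{(\mathbf{a}^T\mathbf{K}\mathbf{a})^2} = \mathbf{0} \iff \mathbf{L}\mathbf{a} = \lambda(\mathbf{a})\,\mathbf{K}\mathbf{a}.$$ Setting the gradient to zero is exactly the calculus step, and its stationary points are the solutions of the generalized matrix eigenvalue problem.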
 
