Example of Ritz method with Bessel functions for trial function


Discussion Overview

The discussion revolves around the application of the Ritz method using Bessel functions as trial functions. Participants explore the challenges and nuances of employing Bessel functions in this context, particularly in comparison to other orthogonal functions like polynomials and Fourier modes. The conversation includes technical details about eigenvalue problems and inner product formulations.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant inquires about examples of the Ritz method using Bessel functions, noting that most examples encountered involve polynomials or Fourier modes, which are orthogonal with a constant weight.
  • Another participant discusses the evaluation of the eigenvalue problem and suggests that the difficulty with Bessel functions is not solely due to the non-constant weight function but also in determining the matrix elements associated with the operator.
  • A participant raises a question regarding the generalized eigenvalue problem and how it alters the orthogonality conditions, suggesting that the approximation would change accordingly.
  • Further clarification is sought on the process of deriving the matrix equation from the generalized eigenvalue problem, with participants discussing the steps involved in setting up the inner products and matrix representations.
  • There is a mention of the Rayleigh-Ritz method as a variational approach, with a participant questioning the absence of calculus in the current discussion.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and interpretation of the Ritz method and its application with Bessel functions. There is no consensus on the challenges posed by Bessel functions compared to other functions, nor on the implications of the generalized eigenvalue problem.

Contextual Notes

Participants discuss the complexities of using Bessel functions, including the need for specific inner product definitions and the implications of non-constant weight functions. There are unresolved mathematical steps in transitioning from the generalized eigenvalue problem to the matrix formulation.

member 428835
Hi PF!

Do you know of any examples of the Ritz method which use Bessel functions as trial functions? I’ve seen examples with polynomials, Legendre polynomials, and Fourier modes. However, all of these are orthogonal with weight 1; Bessel functions differ in this respect.

Any advice on an example (published journal, notes, book, etc) is SO appreciated!

thanks so much!
 
I don't think there's anything more conceptually difficult about evaluating $$\frac{\langle L(f), f \rangle}{\langle f, f \rangle}$$ where ##\langle f, g \rangle = \int_a^b f(x)g(x)w(x)\,dx## for non-constant ##w## than there is for ##w = 1##.

Given your eigenvalue problem ##L(f) = \lambda f## subject to self-adjoint boundary conditions on ##f##, you approximate ##f## as a linear combination of basis functions which satisfy the boundary conditions: ##f = \sum_{n=1}^N a_n \phi_n##. Then, since ##L## is linear,
$$L(f) = \sum_{n=1}^N a_n L(\phi_n) = \sum_{n=1}^N a_n \sum_{m=1}^N M_{nm}\phi_m,$$
where
$$L(\phi_n) = \sum_{m=1}^N M_{nm}\phi_m.$$
If you let ##L(f) = \sum_m b_m \phi_m(x)##, then you have ##\mathbf{b}^T = \mathbf{a}^T M##, and taking the inner product with ##f## you get
$$\sum_{m} \sum_{n} b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_n b_n \|\phi_n\|^2,$$
since the basis functions are orthogonal with respect to ##\langle \cdot, \cdot \rangle##. This gives you the approximation
$$\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle} \approx \frac{\mathbf{b}^T D \mathbf{a}}{\mathbf{a}^T D \mathbf{a}} = \frac{\mathbf{a}^T M D \mathbf{a}}{\mathbf{a}^T D \mathbf{a}},$$
where ##D = \operatorname{diag}(\|\phi_1\|^2, \dots, \|\phi_N\|^2)##.

The difficulty with using Bessel functions rather than trigonometric functions or polynomials is not that the weight function is non-constant (there are systems of orthogonal polynomials which also have non-constant weight functions) but in determining
$$M_{nm}\,\|\phi_m\|^2 = \langle L(\phi_n), \phi_m \rangle = \int_a^b x\, L(\phi_n)(x)\, \phi_m(x)\,dx \quad\text{and}\quad \|\phi_n\|^2 = \int_a^b x\, \phi_n(x)^2\,dx.$$
This is one of the reasons why Bessel functions are less likely to be used in numerical methods than Chebyshev polynomials or finite elements.

If you do want to attempt it, then the relevant chapters of Abramowitz & Stegun are probably a good
place to start.
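Not from the thread, but as a minimal numerical sketch of the weighted Rayleigh quotient above: for the Bessel operator ##L(f) = -(x f')'/x## on ##(0,1)## with ##f(1)=0## and weight ##w(x) = x##, the exact lowest eigenvalue is ##j_{0,1}^2## (the square of the first zero of ##J_0##). The polynomial trial function ##f(x) = 1 - x^2## is an assumed illustrative choice:

```python
# Sketch (not from the thread): one-term Ritz estimate for the Bessel
# operator L(f) = -(x f')'/x on (0, 1) with f(1) = 0, using the weighted
# inner product <f, g> = int_0^1 f(x) g(x) x dx.
from scipy.integrate import quad
from scipy.special import jn_zeros

# Trial function f(x) = 1 - x^2 satisfies f(1) = 0.
# Direct computation gives L(f) = -(x * (-2x))' / x = 4 (a constant).
f = lambda x: 1.0 - x**2
Lf = lambda x: 4.0

num, _ = quad(lambda x: x * Lf(x) * f(x), 0.0, 1.0)   # <L(f), f>
den, _ = quad(lambda x: x * f(x)**2, 0.0, 1.0)        # <f, f>
lam = num / den                                        # Rayleigh quotient

exact = jn_zeros(0, 1)[0]**2                           # j_{0,1}^2 ~ 5.7832
print(f"Ritz estimate: {lam:.4f}, exact: {exact:.4f}")
```

The estimate comes out to 6.0, an upper bound on the exact eigenvalue (about 5.7832), as the variational principle guarantees.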
 
pasmith said:
If you let ##L(f) = \sum_m b_m\phi_m(x)## then you have ##\mathbf{b}^T = \mathbf{a}^T M##, and taking the inner product with ##f## you get $$\sum_{m} \sum_{n} b_m a_n \langle \phi_m, \phi_n \rangle = \sum_n a_n b_n \|\phi_n\|^2,$$ since the basis functions are orthogonal with respect to ##\langle \cdot, \cdot \rangle##. This gives you the approximation $$\lambda = \frac{\langle L(f), f \rangle}{\langle f, f \rangle}.$$

Okay, I think this makes sense, but if we instead had the generalized eigenvalue problem ##L(f) = \lambda K(f)##, then the results would change, right? Instead of orthogonality with respect to ##\langle \cdot, \cdot \rangle## we would require orthogonality with respect to ##\langle K[\cdot], \cdot \rangle##, right? Then the approximation would be $$\lambda = \frac{\langle L(f), f \rangle}{\langle K(f), f \rangle}$$
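Not part of the thread, but a small sketch of the generalized problem using assumed example matrices: for symmetric ##\mathbf{L}## and positive-definite ##\mathbf{K}##, `scipy.linalg.eigh` solves ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}## directly, and its eigenvectors are exactly ##K##-orthogonal, i.e. orthogonal in the inner product ##\langle K[\cdot], \cdot \rangle##:

```python
# Sketch with assumed 2x2 matrices (not from the thread): generalized
# eigenvalue problem L a = lambda K a, symmetric L, positive-definite K.
import numpy as np
from scipy.linalg import eigh

L = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric "stiffness" matrix
K = np.array([[1.0, 0.2], [0.2, 1.0]])   # SPD "mass"/weight matrix

vals, vecs = eigh(L, K)                  # solves L a = lambda K a

# K-orthogonality: eigh normalizes eigenvectors so a_i^T K a_j = delta_ij.
print(np.allclose(vecs.T @ K @ vecs, np.eye(2)))            # True

# Each eigenvalue equals its generalized Rayleigh quotient.
a = vecs[:, 0]
print(np.isclose(a @ L @ a / (a @ K @ a), vals[0]))         # True
```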
 
Also @pasmith, let me outline the process; you can tell me if I'm understanding it correctly:

$$L(f) = \lambda K(f) \implies L\left(\sum_j a_j \phi_j\right) = \lambda K\left(\sum_j a_j \phi_j\right) \implies \sum_j a_j L(\phi_j) = \lambda \sum_j a_j K(\phi_j)$$

and then taking inner products we have

$$\left(L(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right)= \lambda \left(K(\sum_j a_j \phi_j),\sum_i a_i \phi_i\right) \implies \sum_j \sum_i a_ja_i(L(\phi_j),\phi_i) = \lambda \sum_j \sum_i a_ja_i(K(\phi_j),\phi_i)$$

but from here I'm unsure how you proceeded. Could you explain the next step, i.e. how one ultimately arrives at the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}##?
 
You can set ##L_{ij} = (L(\phi_i),\phi_j)##, and then $$\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T\mathbf{L}\mathbf{a}.$$
 
pasmith said:
You can set ##L_{ij} = (L(\phi_i),\phi_j)##, and then $$\sum_j \sum_i a_i a_j L_{ij} = \sum_j \sum_i a_i L_{ij} a_j = \mathbf{a}^T\mathbf{L}\mathbf{a}.$$
Right, but ultimately shouldn't we arrive at $$\mathbf{L} \mathbf{a} = \lambda \mathbf{K} \mathbf{a},$$ where ##\mathbf{L}## has entries ##L_{ij} = (L(\phi_i),\phi_j)##, ##\mathbf{K}## has entries ##K_{ij} = (K(\phi_i),\phi_j)##, and ##\mathbf{a}## is the eigenvector? Also, isn't Rayleigh-Ritz a variational method? We haven't used any calculus to this point (aside from the inner products).
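Not from the thread, but an end-to-end sketch of the matrix equation ##\mathbf{L}\mathbf{a} = \lambda \mathbf{K}\mathbf{a}## for the model problem ##-u'' = \lambda u## on ##(0,1)## with ##u(0)=u(1)=0## (exact lowest eigenvalue ##\pi^2##), using the assumed trial functions ##\phi_n(x) = [x(1-x)]^n##. After integrating ##(L(\phi_i),\phi_j)## by parts, ##L_{ij} = \int_0^1 \phi_i'\phi_j'\,dx## and ##K_{ij} = \int_0^1 \phi_i\phi_j\,dx##, which is where the calculus enters; the variational step is that stationarity of ##\mathbf{a}^T\mathbf{L}\mathbf{a}/\mathbf{a}^T\mathbf{K}\mathbf{a}## over ##\mathbf{a}## yields exactly ##\mathbf{L}\mathbf{a} = \lambda\mathbf{K}\mathbf{a}##:

```python
# Sketch (not from the thread): two-term Ritz discretization of
# -u'' = lambda u on (0, 1), u(0) = u(1) = 0, with assumed trial
# functions phi_n(x) = [x(1-x)]^n, n = 1, 2.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

def phi(n, x):
    return (x * (1.0 - x))**n

def dphi(n, x):
    # d/dx [x(1-x)]^n = n [x(1-x)]^(n-1) (1 - 2x)
    return n * (x * (1.0 - x))**(n - 1) * (1.0 - 2.0 * x)

N = 2
Lmat = np.empty((N, N))
Kmat = np.empty((N, N))
for i in range(1, N + 1):
    for j in range(1, N + 1):
        # L_ij = int phi_i' phi_j' dx (after integration by parts),
        # K_ij = int phi_i phi_j dx.
        Lmat[i-1, j-1], _ = quad(lambda x: dphi(i, x) * dphi(j, x), 0, 1)
        Kmat[i-1, j-1], _ = quad(lambda x: phi(i, x) * phi(j, x), 0, 1)

vals, vecs = eigh(Lmat, Kmat)   # solves L a = lambda K a
print(f"lowest Ritz eigenvalue: {vals[0]:.5f}, exact pi^2: {np.pi**2:.5f}")
```

The lowest Ritz eigenvalue comes out near 9.8697, an upper bound on the exact value ##\pi^2 \approx 9.8696##, illustrating the variational character of the method.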
 

Similar threads

  • · Replies 4 ·
Replies
4
Views
7K
  • · Replies 7 ·
Replies
7
Views
3K
  • · Replies 6 ·
Replies
6
Views
3K
  • · Replies 3 ·
Replies
3
Views
2K
  • · Replies 13 ·
Replies
13
Views
3K
  • · Replies 2 ·
Replies
2
Views
2K
  • · Replies 1 ·
Replies
1
Views
3K
  • · Replies 6 ·
Replies
6
Views
3K
  • · Replies 12 ·
Replies
12
Views
2K
Replies
8
Views
2K