Rayleigh-Ritz and inner products

  • #1
member 428835
Hi PF!

Say we have the variational problem $$M[\phi(x)] = \lambda K[\phi(x)]$$ where ##\lambda## is the eigenvalue and ##M,K## are linear integro-differential operators. Now, given an expansion in basis functions ##\phi = \sum_j c_j \phi_j##, is it correct to conclude that the eigenvalues ##\lambda## can be approximated via the Rayleigh-Ritz variational technique as the eigenvalues of the generalized matrix eigenvalue problem, written in component form (summation over ##i## implied) as $$\left( M[\phi_i],\phi_j\right) c_i = \lambda \left( K[\phi_i],\phi_j\right) c_i$$

where ##(f,g) \equiv \int f g##? Or does the inner product get more complicated given basis functions that are orthogonal w.r.t. different weights?
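To make the question concrete, here is a minimal sketch of the kind of computation I have in mind, using a toy problem chosen purely for illustration (not my actual operators): ##M[\phi] = -\phi''##, ##K[\phi] = \phi## on ##[0,1]## with ##\phi(0)=\phi(1)=0## and the polynomial basis ##\phi_j = x^j(1-x)##.

[code]
# Toy Rayleigh-Ritz sketch: approximate the eigenvalues of -phi'' = lambda*phi
# on [0,1], phi(0) = phi(1) = 0, with the basis phi_j = x^j (1-x), which
# satisfies the boundary conditions.  Exact eigenvalues are (k*pi)^2.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eig

N = 4
phi  = [lambda x, j=j: x**j * (1 - x)              for j in range(1, N + 1)]
dphi = [lambda x, j=j: j*x**(j - 1)*(1 - x) - x**j for j in range(1, N + 1)]

def ip(f, g):
    return quad(lambda x: f(x)*g(x), 0.0, 1.0)[0]

# M_ji = (M[phi_i], phi_j) = int phi_i' phi_j' dx  (after integrating by parts),
# K_ji = (K[phi_i], phi_j) = int phi_i phi_j dx
M = np.array([[ip(dphi[i], dphi[j]) for i in range(N)] for j in range(N)])
K = np.array([[ip(phi[i],  phi[j])  for i in range(N)] for j in range(N)])

lam = np.sort(eig(M, K, right=False).real)
print(lam)   # lowest values should be close to pi^2 ~ 9.87 and 4 pi^2 ~ 39.5
[/code]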
 
  • #2
You should use the inner product with respect to which the basis functions are orthogonal. That leaves you with the generalized eigenvalue problem [tex]
T(\lambda)\mathbf{c} = 0[/tex] for some [itex]N \times N[/itex] matrix [itex]T[/itex]. How you solve that eigenvalue problem is up to you. However, if your basis functions do not satisfy the boundary conditions of your problem then the eigenvalue [itex]\lambda[/itex] won't appear in every row of [itex]T[/itex] (one or more rows must enforce the boundary condition rather than the operator equation) so methods which assume that will not work.

Whether the eigenvalues you get are a good approximation to those of the original problem is uncertain. Some of them will be; some will not. Boyd, Chebyshev and Fourier Spectral Methods (2nd ed, Dover 2001) discusses this issue in Chapter 7.
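Here is a rough sketch of the sort of thing I mean, using a toy problem chosen only for illustration ([itex]-u'' = \lambda u[/itex] on [itex][0,1][/itex] with [itex]u(0) = u(1) = 0[/itex], and a monomial basis that does not satisfy the boundary conditions, so two rows of the system enforce them instead):

[code]
# Rough sketch: -u'' = lambda*u on [0,1], u(0) = u(1) = 0, with the monomial
# basis 1, x, ..., x^(N-1), which does NOT satisfy the boundary conditions.
# The last two rows enforce u(0) = 0 and u(1) = 0, so lambda is absent from
# those rows (the matching rows of B are zero) and the QZ solver returns
# infinite eigenvalues for them, which we discard.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eig

N = 8
def u(j, x):   return x**j
def d2u(j, x): return j*(j - 1)*x**(j - 2) if j >= 2 else 0.0

A = np.zeros((N, N))
B = np.zeros((N, N))
for j in range(N - 2):                        # weighted-residual rows
    for i in range(N):
        A[j, i] = quad(lambda x: -d2u(i, x)*u(j, x), 0, 1)[0]
        B[j, i] = quad(lambda x:  u(i, x)*u(j, x),   0, 1)[0]
A[N - 2, :] = [u(i, 0.0) for i in range(N)]   # boundary row: u(0) = 0
A[N - 1, :] = [u(i, 1.0) for i in range(N)]   # boundary row: u(1) = 0
# rows N-2, N-1 of B stay zero: no lambda in the boundary rows

lam = eig(A, B, right=False)
finite = np.sort(lam[np.isfinite(lam)].real)
print(finite)   # lowest values should approximate pi^2 ~ 9.87, 4 pi^2 ~ 39.5, ...
[/code]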
 
  • #3
Can you show me exactly why I need to use the inner product with respect to which the basis functions are orthogonal? I believe you, but I can’t convince myself this needs to be the case.
 
  • #4
Ignoring complications from the boundary conditions, it doesn't matter which inner product you use, and the natural choice is the one which makes the basis functions orthogonal.

Suppose the [itex]\phi_i[/itex] are orthonormal with respect to [itex](\cdot,\cdot)_1[/itex]. Then if [itex]\tilde{M}[/itex] is the matrix representation of [itex]M[/itex] where [itex]\tilde{M}_{ji} =(M(\phi_i),\phi_j)_1[/itex] we have for a different inner product [itex](\cdot,\cdot)_2[/itex] that [tex]
(M(\phi_i),\phi_j)_2 = (\phi_k,\phi_j)_2\tilde{M}_{ki} = P_{jk}\tilde{M}_{ki}
[/tex] so changing the inner product amounts to left-multiplication by [itex]P[/itex]. But [itex]P[/itex] is invertible (see below), so the eigenvalues of [itex]P\tilde{M}[/itex] are those of [itex]\tilde{M}[/itex]. (The eigenvectors, of course, will differ.)

To show [itex]P[/itex] is invertible, let [itex]V[/itex] be the finite dimensional space spanned by the [itex]\phi_i[/itex]. Then we can find a basis [itex]\{\psi_i\}[/itex] for [itex]V[/itex] which is orthonormal with respect to [itex](\cdot,\cdot)_2[/itex], and there exists an invertible matrix [itex]B[/itex] such that [itex]\phi_i = B_{ji}\psi_j[/itex]. Then
[tex]P_{ji} = (\phi_i,\phi_j)_2 = B_{ki}B_{lj}(\psi_k,\psi_l)_2 = B_{ki}B_{kj}.[/tex] But [itex]P = B^{T}B[/itex] is invertible, since [itex]B[/itex] and thus [itex]B^T[/itex] are invertible.
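If you want to see that concretely, here is a minimal numerical check, using [itex]\phi_1 = (1-x)x[/itex], [itex]\phi_2 = (1-x)x^2[/itex] and [itex](f,g)_2 = \int_0^1 x f g\,dx[/itex] purely as an example (nothing in the argument depends on this choice):

[code]
# Minimal check that P_{jk} = (phi_k, phi_j)_2 is a Gram matrix and hence
# invertible.  Example choices only: phi_1 = (1-x)x, phi_2 = (1-x)x^2, and
# (f, g)_2 = integral_0^1 x f g dx.
import sympy as sp

x = sp.symbols('x')
phi = [(1 - x)*x, (1 - x)*x**2]
ip2 = lambda f, g: sp.integrate(x*f*g, (x, 0, 1))

P = sp.Matrix(2, 2, lambda j, k: ip2(phi[k], phi[j]))
print(P)             # Matrix([[1/60, 1/105], [1/105, 1/168]])
print(P.det())       # 1/117600, nonzero, so P is invertible
print(P.cholesky())  # P = L*L.T with L invertible, the same kind of
                     # factorisation as P = B^T B above
[/code]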

Having to use rows of [itex]T[/itex] to enforce boundary conditions might result in getting different eigenvalues, but they are only approximations to the eigenvalues of the original problem anyway. However it is best to ensure that every function in [itex]V[/itex] satisfies the boundary conditions.
 
  • #5
pasmith said:
Ignoring complications from the boundary conditions, it doesn't matter which inner product you use, and the natural choice is the one which makes the basis functions orthogonal.

Suppose the [itex]\phi_i[/itex] are orthonormal with respect to [itex](\cdot,\cdot)_1[/itex]. Then if [itex]\tilde{M}[/itex] is the matrix representation of [itex]M[/itex] where [itex]\tilde{M}_{ji} =(M(\phi_i),\phi_j)_1[/itex] we have for a different inner product [itex](\cdot,\cdot)_2[/itex] that [tex]
(M(\phi_i),\phi_j)_2 = (\phi_k,\phi_j)_2\tilde{M}_{ki} = P_{jk}\tilde{M}_{ki}
[/tex] so changing the inner product amounts to left-multiplication by [itex]P[/itex]. But [itex]P[/itex] is invertible (see below), so the eigenvalues of [itex]P\tilde{M}[/itex] are those of [itex]\tilde{M}[/itex]. (The eigenvectors, of course, will differ.)
I'm trying to understand this, but I think I'm missing something. If we define ##M = d/dx## and let ##\phi_1 = (1-x)x##, ##\phi_2 = (1-x)x^2## with [itex](\cdot,\cdot)_1 = \int_0^1 \cdot \, \cdot \, dx[/itex], then, writing rows as ##i## and columns as ##j##, [tex]\tilde{M} = \begin{pmatrix} 0 & -1/60 \\ 1/60 & 0 \end{pmatrix}.[/tex] If we instead take [itex](\cdot,\cdot)_2 = \int_0^1 x \, \cdot \, \cdot \, dx[/itex] as a different inner product, then [tex]\big((M(\phi_i),\phi_j)_2\big) = \begin{pmatrix} -1/60 & -1/60 \\ 0 & -1/210 \end{pmatrix},[/tex] but [tex]\big((\phi_k,\phi_j)_2\tilde{M}_{ki}\big) = \begin{pmatrix} -1/6300 & -1/10080 \\ 1/3600 & 1/6300 \end{pmatrix}.[/tex] So it seems the two inner products give different results and the claimed equality doesn't hold. Am I missing something?
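For reference, here is a small SymPy script that reproduces the three matrices above (under the convention I used: rows indexed by ##i##, columns by ##j##):

[code]
# SymPy check of the matrices in this post: M = d/dx, phi_1 = (1-x)x,
# phi_2 = (1-x)x^2, entries listed with row i, column j.
import sympy as sp

x = sp.symbols('x')
phi = [(1 - x)*x, (1 - x)*x**2]

def ip1(f, g):   # (f, g)_1 = integral_0^1 f g dx
    return sp.integrate(f*g, (x, 0, 1))

def ip2(f, g):   # (f, g)_2 = integral_0^1 x f g dx
    return sp.integrate(x*f*g, (x, 0, 1))

Mphi = [sp.diff(p, x) for p in phi]   # M[phi_i] = phi_i'

Mtilde = sp.Matrix(2, 2, lambda i, j: ip1(Mphi[i], phi[j]))
# Matrix([[0, -1/60], [1/60, 0]])
M2     = sp.Matrix(2, 2, lambda i, j: ip2(Mphi[i], phi[j]))
# Matrix([[-1/60, -1/60], [0, -1/210]])
P      = sp.Matrix(2, 2, lambda j, k: ip2(phi[k], phi[j]))
PM     = sp.Matrix(2, 2, lambda i, j: sum(P[j, k]*Mtilde[i, k] for k in range(2)))
# Matrix([[-1/6300, -1/10080], [1/3600, 1/6300]])

print(Mtilde, M2, PM, sep='\n')
[/code]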
 
  • #6
(attached image)
This might be easier to see
 
  • #7
pasmith said:
Suppose the [itex]\phi_i[/itex] are orthonormal with respect to [itex](\cdot,\cdot)_1[/itex]. Then if [itex]\tilde{M}[/itex] is the matrix representation of [itex]M[/itex] where [itex]\tilde{M}_{ji} =(M(\phi_i),\phi_j)_1[/itex] we have for a different inner product [itex](\cdot,\cdot)_2[/itex] that [tex]
(M(\phi_i),\phi_j)_2 = (\phi_k,\phi_j)_2\tilde{M}_{ki} = P_{jk}\tilde{M}_{ki}
[/tex] so changing the inner product amounts to left-multiplication by [itex]P[/itex]. But [itex]P[/itex] is invertible (see below), so the eigenvalues of [itex]P\tilde{M}[/itex] are those of [itex]\tilde{M}[/itex]. (The eigenvectors, of course, will differ.)

I of course meant that the values of [itex]\lambda[/itex] for which [itex]\det(\tilde{M} - \lambda \tilde{K}) = 0[/itex] are the same as those for which [itex]\det(P\tilde{M} - \lambda P\tilde{K}) = 0[/itex], since [itex]\det(P) \neq 0[/itex]. Naturally if [itex]P[/itex] is not the identity then [itex]P\tilde M \neq \tilde M[/itex].
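To put numbers to it, here is a minimal sketch. I am taking [itex]K[/itex] to be the identity operator, so [itex]\tilde{K}_{ji} = (\phi_i,\phi_j)_1[/itex], and reusing the basis from post #5; both are choices made only to have a concrete [itex]2\times 2[/itex] generalized eigenvalue problem.

[code]
# Numerical check that det(Mt - lam*Kt) = 0 and det(P@Mt - lam*P@Kt) = 0 have
# the same roots.  Assumes K = identity (Kt[i, j] = (phi_i, phi_j)_1) and
# reuses phi_1 = (1-x)x, phi_2 = (1-x)x^2; both are illustrative choices only.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eig

phi  = [lambda x: (1 - x)*x, lambda x: (1 - x)*x**2]
dphi = [lambda x: 1 - 2*x,   lambda x: 2*x - 3*x**2]

def ip(f, g, w=lambda x: 1.0):
    return quad(lambda x: w(x)*f(x)*g(x), 0.0, 1.0)[0]

Mt = np.array([[ip(dphi[i], phi[j]) for j in range(2)] for i in range(2)])
Kt = np.array([[ip(phi[i],  phi[j]) for j in range(2)] for i in range(2)])
P  = np.array([[ip(phi[k],  phi[j], w=lambda x: x) for k in range(2)]
               for j in range(2)])                  # P[j, k] = (phi_k, phi_j)_2

lam1 = eig(Mt, Kt, right=False)            # generalized eigenvalues of (Mt, Kt)
lam2 = eig(P @ Mt, P @ Kt, right=False)    # ...and of (P Mt, P Kt)
print(np.sort_complex(lam1))
print(np.sort_complex(lam2))               # the same, since det(P) != 0
[/code]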
 
  • #8
pasmith said:
I of course meant that the values of [itex]\lambda[/itex] for which [itex]\det(\tilde{M} - \lambda \tilde{K}) = 0[/itex] are the same as those for which [itex]\det(P\tilde{M} - \lambda P\tilde{K}) = 0[/itex], since [itex]\det(P) \neq 0[/itex]. Naturally if [itex]P[/itex] is not the identity then [itex]P\tilde M \neq \tilde M[/itex].
Wow, I feel stupid. Like REALLY stupid. But seriously, thank you so much!

Do you know Mathematica? Would you be interested in helping me out with a very tough problem that is closely related to this? I've done most of the heavy lifting, but something is wrong and I can't figure out what it is.

I can write up a neat description for you in LaTeX, and I have very clear comments on the Mathematica code (you almost don't even need to know the language to read my steps, since Mathematica is pretty user-friendly).

Edit: it's a fluid dynamics problem, which seems like something you're interested in from your bio.
 
  • #9
Solved it, FINALLY! Thanks for the interest everyone!
 

1. What is the Rayleigh-Ritz method?

The Rayleigh-Ritz method is a mathematical technique used to approximate the solutions of differential and variational problems, in particular eigenvalue problems. It involves expressing the solution as a linear combination of basis functions and determining the coefficients by projecting the problem onto those basis functions with an inner product (equivalently, by making a variational functional stationary over the trial space). It is widely used in engineering and science.
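In the notation of the thread above (this restates the system from post #1 rather than introducing anything new): one substitutes the expansion ##\phi \approx \sum_{j} c_j \phi_j## into the variational problem and takes inner products with each basis function, giving $$\sum_{i} \left(M[\phi_i],\phi_j\right) c_i = \lambda \sum_{i} \left(K[\phi_i],\phi_j\right) c_i, \qquad j = 1,\dots,N,$$ i.e. a generalized matrix eigenvalue problem ##\tilde{M}\mathbf{c} = \lambda \tilde{K}\mathbf{c}## for the coefficient vector ##\mathbf{c}##.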

2. What is the inner product in the Rayleigh-Ritz method?

An inner product is a mathematical operation that takes two functions (or vectors) as inputs and produces a scalar, and it is used to measure orthogonality between them; in this thread, ##(f,g) \equiv \int f g## and its weighted variants. In the Rayleigh-Ritz method, the inner product is used to project the operator equation onto the basis functions, producing the matrix entries ##(M[\phi_i],\phi_j)## and ##(K[\phi_i],\phi_j)## from which the coefficients and approximate eigenvalues are computed.

3. What are the advantages of using the Rayleigh-Ritz method?

The Rayleigh-Ritz method has several advantages, including its ability to handle complex problems with high accuracy, its flexibility in choosing the basis functions, and its efficiency in solving large systems of equations. It also allows for easy implementation using computer algorithms, making it a popular choice in numerical analysis and scientific computing.

4. How is the Rayleigh-Ritz method related to the finite element method?

The Rayleigh-Ritz method is closely related to the finite element method: both approximate the solution in a finite-dimensional space of basis functions and determine the coefficients through inner products. The finite element method can be viewed as a Rayleigh-Ritz (or Galerkin) method whose basis functions are piecewise polynomials defined on a mesh, which makes it well suited to irregular geometries, whereas the classical Rayleigh-Ritz method typically uses globally defined basis functions on simple domains.

5. What are some applications of the Rayleigh-Ritz method?

The Rayleigh-Ritz method has a wide range of applications in various fields, including structural engineering, fluid dynamics, heat transfer, and quantum mechanics. It is commonly used to solve problems involving differential equations, such as finding the natural frequencies of a vibrating structure or the temperature distribution in a heat transfer problem. It is also used in optimization and control problems, as well as in the analysis of complex systems in physics and engineering.
