Graduate Eigenvalue Problem and the Calculus of Variations

Summary
The discussion centers on the relationship between the eigenvalue problem defined by the equation Bu = λAu and the calculus of variations, specifically finding stationary values of (Bu, u) under the constraint (Au, u) = 1. Participants suggest using the method of Lagrange multipliers to connect these concepts, emphasizing the importance of minimizing (Bu, u) while adhering to the constraint. The conversation also touches on the distinction between linear operators and matrices, noting that they coincide only in finite-dimensional spaces. Additionally, there is a reference to previous discussions about finding eigenvalues of the products of matrices, indicating ongoing interest in this topic. Overall, the thread highlights the mathematical intricacies of eigenvalue problems in the context of functional analysis.
member 428835
Hi PF!

Given ##B u = \lambda A u## where ##A,B## are linear operators (matrices) and ##u## a function (vector) to be operated on with eigenvalue ##\lambda##, I read that the solution to this eigenvalue problem is equivalent to finding stationary values of ##(Bu,u)## subject to ##(Au,u)=1##, where ##(g,f) = \int fg##.

Can someone explain this to me, or point me in the right direction? I don't see how the two relate.
 
Did you try applying the method of Lagrange multipliers to the stationary value problem?
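For concreteness, here is a minimal sketch of that hint, assuming ##A## and ##B## are self-adjoint with respect to the inner product ##(\cdot,\cdot)##:

```latex
% Form the Lagrangian for stationarizing (Bu, u) subject to (Au, u) = 1:
L[u] = (Bu, u) - \lambda \left[ (Au, u) - 1 \right].
% Perturb u \to u + \epsilon v; to first order in \epsilon, self-adjointness of A and B gives
\delta L = 2(Bu, v) - 2\lambda (Au, v) = 2(Bu - \lambda A u, v) = 0
\quad \text{for all admissible } v,
% which forces the generalized eigenvalue equation
Bu = \lambda A u .
% Moreover, at a stationary point the objective equals the multiplier:
(Bu, u) = \lambda (Au, u) = \lambda .
```

So the stationary values of ##(Bu,u)## on the constraint set are exactly the eigenvalues ##\lambda##, and the stationary points are the corresponding eigenfunctions.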
 
Orodruin said:
Did you try applying the method of Lagrange multipliers to the stationary value problem?
Could you elaborate? I think I've seen something like this done before, where ##u = \sum_i \alpha_i w_i##, the ##\alpha_i## are constants, and the ##w_i## are known trial functions with ##(w_i,w_j)=\delta_{ij}## by construction. Then evidently we choose the ##\alpha_i## to minimize ##(Bu,u)## (why?) under the constraint ##\sum_i \alpha_i^2=1##, and hence we arrive at the eigenvalue problem ##Bu=\lambda u##.
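A small numerical sanity check may help here. The matrices ##A,B## below are made up for illustration (##A## positive definite so that ##(Au,u)=1## is a valid normalization); SciPy's symmetric generalized eigensolver `scipy.linalg.eigh(B, A)` solves ##Bu=\lambda Au## and normalizes eigenvectors so that ##(Au,u)=1##. A brute-force scan over the constraint set then confirms that the minimum of ##(Bu,u)## is the smallest generalized eigenvalue:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative symmetric matrices (made up for this sketch).
# A must be positive definite so (Au, u) = 1 is a valid normalization.
B = np.array([[2.0, 1.0], [1.0, 3.0]])
A = np.array([[2.0, 0.0], [0.0, 1.0]])

# Generalized eigenproblem B u = lambda A u; eigh normalizes
# eigenvectors so that u^T A u = 1 (type=1 problem, the default).
vals, vecs = eigh(B, A)
u = vecs[:, 0]  # eigenvector of the smallest eigenvalue

# Brute-force check: sweep directions, rescale each onto the
# constraint set (Av, v) = 1, and record the objective (Bv, v).
thetas = np.linspace(0.0, 2.0 * np.pi, 4001)
quot = []
for t in thetas:
    v = np.array([np.cos(t), np.sin(t)])
    v = v / np.sqrt(v @ A @ v)   # enforce (Av, v) = 1
    quot.append(v @ B @ v)

print(np.isclose(u @ A @ u, 1.0))                  # constraint holds
print(np.isclose(u @ B @ u, vals[0]))              # stationary value = lambda
print(np.isclose(min(quot), vals[0], atol=1e-4))   # min of (Bv,v) = lambda_min
```

This is exactly the Lagrange-multiplier statement in coordinates: the constrained minimum of ##(Bu,u)## is the smallest eigenvalue, and the other eigenvalues are the remaining stationary values (saddle points or maxima) on the constraint set.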
 
joshmccraney said:
Hi PF!

Given ##B u = \lambda A u## where ##A,B## are linear operators (matrices)…

Note that linear operators and matrices coincide only in finite-dimensional spaces. There is no such matrix representation in infinite-dimensional spaces.
 
StoneTemplePython said:
If you're still trying to tackle the problem of finding eigenvalues of ##B^{-1}A## or ##A^{-1}B## --you posted about this a few months back as I recall, then you may want to check out this thread:

https://www.physicsforums.com/threads/eigenvalues-of-the-product-of-two-matrices.588101
Yes, I definitely did ask about this a while ago. I ended up taking someone's advice on here (I don't recall who it was) and used a built-in function that worked great (the errors were mine but, surprisingly, also the paper's).
 
