Fredholm Integral of Second Kind, Eigenvalues

  • Context: Graduate
  • Thread starter: beautiful1
  • Tags: Eigenvalues, Integral

Discussion Overview

The discussion revolves around solving a Fredholm integral equation of the second kind involving a correlated Gaussian kernel. Participants explore various approaches to handle the eigenvalue equation, including analytical solutions, series expansions, and transformations of variables.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the integral eigenvalue equation and seeks help on handling it, noting that an analytic solution exists for the eigenvalues and eigenfunctions in terms of Hermite functions.
  • A moderator suggests that the thread may be better suited for the differential equations subforum and provides a method to rewrite the equation using constants derived from the Gaussian kernel.
  • Another participant summarizes the previous contributions and reformulates the equation in terms of a definite integral, expressing it with constants and suggesting integration to find eigenvalues.
  • A participant introduces a complication regarding the kernel being correlated and not separable, providing a specific form of the kernel and expressing interest in pursuing the suggested solutions despite this complication.
  • Another participant proposes expanding the kernel in a power series to approximate the solution, indicating that this could be a viable method for the correlated case.
  • A different participant references a method from "Methods of Theoretical Physics" suggesting a series expansion approach for the kernel, leading to a set of simultaneous equations for the coefficients involved.
  • One participant discusses the Taylor series expansion of a specific kernel and questions whether solving the related homogeneous Fredholm equation could yield an approximate solution for the unbounded case.
  • Another participant expresses interest in the kernel expansion method and considers using a Taylor series for the remaining integrals, indicating uncertainty about the solution process.
  • A participant suggests defining new variables to make the correlated kernel separable, but another participant requests clarification on how to apply this substitution in the context of the integral.

Areas of Agreement / Disagreement

Participants express various methods and approaches to tackle the problem, but there is no consensus on a single solution or method. The discussion remains unresolved with multiple competing views and techniques being explored.

Contextual Notes

Participants mention the complexity of the correlated Gaussian kernel and the potential need for specific assumptions or transformations to simplify the problem. The discussion includes various mathematical formulations and approaches that may depend on the definitions and assumptions made by participants.

beautiful1
I need help with an integral eigenvalue equation...I am lost on how to handle this:

[tex] \int_{-\infty}^{\infty} dy K(x,y) \psi_n(y) = \lambda_n \psi_n(x)[/tex]

The kernel, [tex]K(x,y)[/tex], is a 2D, correlated Gaussian. I have read that for this case an analytic solution exists: the eigenvalues [tex]\lambda_n[/tex] and the eigenfunctions [tex]\psi_n(x)[/tex] are given in terms of the Hermite functions (polynomials?). Any suggestions on starting this solution would be appreciated.

p.s. dear moderator, perhaps you know if this should be posted in the differential equations subforum. I wasn't sure.
 
Hey, I'm a moderator and I'm not sure either! It might get more responses in differential equations than calculus, so I will move it there.

"2d Gaussian"? That's [itex]Const e^{-x^2-y^2}= Const e^{-x^2}e^{-y^2}[/itex] isn't it? If so then do this: write the equation as
[tex]Const\, e^{-x^2}\int_{-\infty}^{\infty}e^{-y^2}\psi_n(y)dy= \lambda_n\psi_n(x).[/tex]

Notice that the integral on the left is a definite integral: it is a constant. Let
[itex]X_n= \int_{-\infty}^{\infty}e^{-y^2}\psi_n(y)dy[/itex]. Then the equation says [itex]Const\, X_ne^{-x^2}= \lambda_n\psi_n(x)[/itex]. Multiply both sides by [itex]e^{-x^2}[/itex] to get [itex]Const\, X_n e^{-2x^2}= \lambda_n \psi_n(x)e^{-x^2}[/itex].

Now integrate both sides, with respect to x, from -infinity to infinity:
[itex]Const\, X_n \int_{-\infty}^{\infty}e^{-2x^2}dx= \lambda_n X_n[/itex]. What value of [itex]\lambda_n[/itex] makes that true for any [itex]X_n[/itex]?
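The answer is [itex]\lambda_n = Const\sqrt{\pi/2}[/itex] for the one eigenfunction with [itex]X_n \neq 0[/itex], since [itex]\int_{-\infty}^{\infty}e^{-2x^2}dx = \sqrt{\pi/2}[/itex]. A minimal numerical check of that integral, sketched in Python with the infinite range truncated at ±10 (an illustrative cutoff; the tail beyond it is below 1e-86, so the truncation is harmless):

```python
import math

def gauss_integral(a=2.0, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint-rule approximation of the integral of exp(-a*x^2) over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(math.exp(-a * (lo + (i + 0.5) * h) ** 2) for i in range(n))

value = gauss_integral()            # numerically close to sqrt(pi/2) ~ 1.2533
exact = math.sqrt(math.pi / 2.0)
```

For a smooth, rapidly decaying integrand like this, the midpoint rule converges extremely fast, so the crude cutoff and uniform grid are already more than accurate enough.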
 
Hall, do you mind if I summarize your work here:

[tex]\mathcal{F}\left(\Psi_n\right)=\lambda_n\Psi_n[/tex]

where:

[tex]\mathcal{F}\left\{f\right\}=\int_{-\infty}^{\infty}K(x,y)f(y)dy[/tex]

with:

[tex]K(x,y)=Ce^{-(x^2+y^2)}[/tex]

so that we have:

[tex]\lambda_n\Psi_n(x)=Ce^{-x^2}\int_{-\infty}^{\infty}e^{-y^2}\Psi_n(y)dy[/tex]

Representing the definite integral by the constant [itex]X_n[/itex], as Hall indicated above:

[tex]X_n=\int_{-\infty}^{\infty}e^{-y^2}\Psi_n(y)dy[/tex]

we obtain:

[tex]\lambda_n\Psi_n(x)=CX_ne^{-x^2}[/tex]

Multiplying both sides by [itex]e^{-x^2}[/itex] and integrating:

[tex]\int_{-\infty}^{\infty}\lambda_n\Psi_n(x)e^{-x^2}dx=\int_{-\infty}^{\infty}CX_ne^{-2x^2}dx[/tex]

but from above:

[tex]\int_{-\infty}^{\infty}\Psi_n(v)e^{-v^2}dv=X_n[/tex]

so that we're left with:

[tex]\lambda_n X_n=CX_n\int_{-\infty}^{\infty}e^{-2x^2}dx[/tex]

If I've incorrectly interpreted Hall's analysis above, I'm sure he'll . . . indicate such. :smile:
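That last equation gives [itex]\lambda_n = C\int e^{-2x^2}dx = C\sqrt{\pi/2}[/itex] whenever [itex]X_n \neq 0[/itex]. A minimal Nyström-style cross-check in Python (my own sketch, with the illustrative choices C = 1 and the infinite range truncated at ±8): discretize the operator on a grid and power-iterate. Because the kernel is rank one, a single nonzero eigenvalue appears, and it matches [itex]\sqrt{\pi/2}[/itex]:

```python
import math

# Nystrom discretization of K(x, y) = exp(-x^2) * exp(-y^2) (C = 1) on a
# midpoint grid over [-L, L], followed by power iteration.
N, L = 2000, 8.0
h = 2 * L / N
xs = [-L + (i + 0.5) * h for i in range(N)]
g = [math.exp(-x * x) for x in xs]          # K(x_i, x_j) = g[i] * g[j]

def apply_K(v):
    """Apply the discretized operator: (Kv)_i = sum_j K(x_i, x_j) v_j h."""
    s = h * sum(gj * vj for gj, vj in zip(g, v))   # the inner integral is a constant
    return [gi * s for gi in g]

v = [1.0] * N
for _ in range(20):                          # rank-one, so this converges at once
    w = apply_K(v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam = sum(vi * wi for vi, wi in zip(v, apply_K(v)))   # Rayleigh quotient
```

Every other eigenvalue of the discretized matrix is exactly zero, mirroring the fact that the separable kernel annihilates any function orthogonal to [itex]e^{-y^2}[/itex].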
 
Thank you both for your responses. I will think about this approach as it looks helpful. But there is a slight complication: the Gaussian is correlated, i.e. not separable in x and y. The example I have in mind is
[tex] K(x,y) = C \exp \{-\sigma_1^2 (c x + sy)^2-\sigma_2^2 (cy - sx)^2 \}[/tex]
where [tex]C[/tex], [tex]\sigma_1[/tex], and [tex]\sigma_2[/tex] are real constants and [tex]c = \cos \theta[/tex] and [tex]s = \sin\theta[/tex] for some angle [tex]\theta[/tex]. Plotted, such a function would be a 2D, squeezed Gaussian rotated w.r.t. the x-y axes.
But your solution may also work in this case and I am pursuing that. Sorry for not being more explicit earlier.
 
beautiful1 said:
The example I have in mind is
[tex] K(x,y) = C \exp \{-\sigma_1^2 (c x + sy)^2-\sigma_2^2 (cy - sx)^2 \}[/tex]
where [tex]C[/tex], [tex]\sigma_1[/tex], and [tex]\sigma_2[/tex] are real constants and [tex]c = \cos \theta[/tex] and [tex]s = \sin\theta[/tex] for some angle [tex]\theta[/tex].

How about expanding the kernel in a power series and then solving (approximating) it as per above?

For example:

[tex]e^{-((x+y)^2-(x-y)^2)}= 1-4xy+8x^2y^2-\frac{32x^3y^3}{3}+\frac{32x^4y^4}{3}-...[/tex]
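That expansion is just the Taylor series of [itex]e^{-4xy}[/itex] in the combined variable [itex]xy[/itex], so the quoted coefficients can be checked directly; a quick sketch in Python:

```python
import math

def series(x, y, terms=5):
    """Partial sum of exp(-4*x*y) = sum_k (-4*x*y)**k / k!, as in the expansion above."""
    t = -4.0 * x * y
    return sum(t ** k / math.factorial(k) for k in range(terms))

# The quoted coefficients 1, -4, 8, -32/3, 32/3 are (-4)**k / k! for k = 0..4.
coeffs = [(-4.0) ** k / math.factorial(k) for k in range(5)]
```

For small |xy| the five-term partial sum already agrees with the exponential to many digits; for |4xy| of order one or larger, many more terms are needed.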
 
Thanks for the second reply, saltydog; your suggestion mirrors another approach I found in "Methods of Theoretical Physics" by Morse and Feshbach. There, the suggestion is to assume the kernel has an expansion of the form

[tex] K(x,y)= \sum_{m=0}^{\infty} h_{m}(x) g_{m}(y)[/tex]

where [itex]h_m[/itex] is a complete set of functions and [itex]g_m(y)[/itex] is the corresponding coefficient function. Substituting this formula in yields

[tex] \psi_n(x) = \lambda_n^{-1} \sum_{m} A_m h_m(x)[/tex]

with

[tex]A_m = \int dy\, g_m(y) \psi_n(y).[/tex]

Inserting the series expansion for [itex]\psi_n(y)[/itex] yields

[tex] \lambda_n A_m = \sum_p \alpha_{mp} A_p[/tex]

which is a set of simultaneous equations for the [itex]A[/itex]'s with coefficients

[tex]\alpha_{mp} = \int dy\, g_m(y) h_p(y).[/tex]

This can then be converted into the usual matrix eigenvalue problem. And for a judicious selection of the expansion functions, the problem can be made relatively easy. For example, in the case of the Gaussian, a diagonal basis using the Hermite functions (Hermite polynomials times a Gaussian) is a good choice, since

[tex]\alpha_{mp} = \alpha_{mm} \delta_{pm}[/tex]

and

[tex]\psi_n(x) = h_n(x)[/tex]

I think your suggestions of a power series would be somewhat similar.
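As a concrete toy instance of this expansion scheme (my own illustrative choices of kernel, basis, and interval, not from Morse and Feshbach): take [itex]K(x,y)=e^{-4xy}[/itex] on [itex][-1,1][/itex] with [itex]h_p(x)=x^p[/itex] and [itex]g_m(y)=(-4y)^m/m![/itex], build the coefficient matrix, and check it against the operator trace [itex]\int K(x,x)dx[/itex], which must equal the sum of the eigenvalues:

```python
import math

M = 30                                       # truncation order of the expansion

def alpha(m, p):
    """alpha_{mp} = integral_{-1}^{1} g_m(y) h_p(y) dy, done analytically."""
    if (m + p) % 2:                          # odd power of y integrates to zero
        return 0.0
    return (-4.0) ** m / math.factorial(m) * 2.0 / (m + p + 1)

# The simultaneous equations lambda * A_m = sum_p alpha_{mp} A_p are the
# eigenvalue problem for this matrix:
A = [[alpha(m, p) for p in range(M)] for m in range(M)]

# Consistency check: the matrix trace equals the operator trace
# integral_{-1}^{1} K(x, x) dx = integral of exp(-4 x^2) over [-1, 1].
trace_alpha = sum(A[m][m] for m in range(M))
n = 200_000
h = 2.0 / n
trace_K = h * sum(math.exp(-4.0 * (-1.0 + (i + 0.5) * h) ** 2) for i in range(n))
```

The two traces agree to high accuracy, which is a cheap way to confirm that the truncated expansion faithfully represents the kernel before solving the full eigenproblem.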

Thanks to everyones help.

BTW, there may be some mistakes in the above.
 
I used the wrong equation last night. Consider a kernel of the form:

[tex]K(x,y)=e^{-((x+y)^2+(y-x)^2)}[/tex]

and expand it out to 6 terms in a Taylor series:

[tex] \begin{align*}
e^{-((x+y)^2+(y-x)^2)}&=1-2x^2+2x^4-\frac{4x^6}{3} \\
&+\left(-2+4x^2-4x^4+\frac{8x^6}{3}\right)y^2 \\
&+\left(2-4x^2+4x^4-\frac{8x^6}{3}\right)y^4
\end{align*}[/tex]

I've attached plots of the kernel and its 6-term Taylor equivalent. As you can see they are similar in a region about the origin; the kernel rapidly decays beyond this. Can one then solve the related homogeneous Fredholm equation:

[tex]u(x)=\int_{-a}^{a}T(x,y)u(y)dy[/tex]

where T(x,y) is the Taylor expansion of K(x,y)

and obtain an approximate solution to the unbounded case?

Would the accuracy improve as more terms are added and the limits of integration are expanded?
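One way to probe that numerically (a sketch under my own illustrative choices: a = 1, a midpoint Nyström grid, and a 20-term truncation rather than the 6-term one above, so the two eigenvalues can be compared tightly):

```python
import math

# Compare the dominant eigenvalue of the exact kernel K(x, y) = exp(-2*(x^2 + y^2))
# on [-a, a] with that of its Taylor polynomial T(x, y).
a, N, terms = 1.0, 200, 20
h = 2 * a / N
xs = [-a + (i + 0.5) * h for i in range(N)]   # midpoint grid

def K(x, y):
    return math.exp(-2.0 * (x * x + y * y))

def T(x, y):
    t = -2.0 * (x * x + y * y)
    return sum(t ** k / math.factorial(k) for k in range(terms))

def dominant_eigenvalue(kernel, iters=30):
    """Power iteration on the Nystrom matrix A_ij = kernel(x_i, x_j) * h."""
    A = [[kernel(x, y) * h for y in xs] for x in xs]
    v = [1.0] * N
    lam = 1.0
    for _ in range(iters):
        w = [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
        lam = max(w, key=abs)                 # current eigenvalue estimate
        v = [wi / lam for wi in w]
    return lam

lam_exact = dominant_eigenvalue(K)            # analytically (sqrt(pi)/2) * erf(2)
lam_taylor = dominant_eigenvalue(T)
```

With enough Taylor terms the two eigenvalues coincide to roughly the size of the truncation error of the series on the square [-a, a]², which suggests the answer to the question is yes, provided the truncation keeps pace with the interval.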

I suspect all of this can be analyzed from the perspective of integral operators on Hilbert space and the completeness of their eigenfunctions.

Anyway Beau (how about I just call you that?), I've read your post above and will look into it more. Thanks.
 

Attachments

  • taylor.JPG (16.8 KB)
  • kernel.JPG (17.2 KB)
Thanks saltydog, I really like your approach of expanding the kernel. I would think it is a very general and useful method for many other kernels too.


I'm not sure how to solve the remaining integrals; perhaps by using a Taylor series expansion of u(x)? If this were truncated at the same order as [itex]K[/itex], then maybe coefficients of like powers could be equated. Maybe. I'll check my new favorite book by Morse and Feshbach (which incidentally is selling for almost $1000 on Amazon!).


Thanks for your help. I'll let you know how things turn out.
beau
 
beautiful1 said:
Thank you both for your responses. I will think about this approach as it looks helpful. But there is a slight complication: the Gaussian is correlated, i.e. not separable in x and y. The example I have in mind is
[tex] K(x,y) = C \exp \{-\sigma_1^2 (c x + sy)^2-\sigma_2^2 (cy - sx)^2 \}[/tex]
where [tex]C[/tex], [tex]\sigma_1[/tex], and [tex]\sigma_2[/tex] are real constants and [tex]c = \cos \theta[/tex] and [tex]s = \sin\theta[/tex] for some angle [tex]\theta[/tex]. Plotted, such a function would be a 2D, squeezed Gaussian rotated w.r.t. the x-y axes.
But your solution may also work in this case and I am pursuing that. Sorry for not being more explicit earlier.
Just define new variables:
[tex]u=cx+sy,\ v=sx-cy[/tex]
It's separable in those variables. This just represents a rotation of the axes by an angle of theta.
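A quick pointwise check that the kernel really factorizes in those variables (the parameter values below are illustrative; how the substitution interacts with the y-integration is a separate question):

```python
import math

def K(x, y, s1=1.3, s2=0.7, theta=0.4, C=1.0):
    """The correlated kernel from the thread, with illustrative parameter values."""
    c, s = math.cos(theta), math.sin(theta)
    return C * math.exp(-s1**2 * (c*x + s*y)**2 - s2**2 * (c*y - s*x)**2)

def K_rotated(x, y, s1=1.3, s2=0.7, theta=0.4, C=1.0):
    """Same kernel in krab's rotated variables u = c*x + s*y, v = s*x - c*y."""
    c, s = math.cos(theta), math.sin(theta)
    u, v = c*x + s*y, s*x - c*y
    return C * math.exp(-s1**2 * u**2) * math.exp(-s2**2 * v**2)   # separable in (u, v)
```

The two functions agree at every point, since [itex](cy-sx)^2=(sx-cy)^2=v^2[/itex]; the factorization is exact, not an approximation.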
 
krab said:
Just define new variables:
[tex]u=cx+sy,\ v=sx-cy[/tex]
It's separable in those variables. This just represents a rotation of the axes by an angle of theta.

Hello Krab.

I don't understand how to make that substitution in the rest of the equation: the u(y)dy part of the integral in particular. Also, how would the left-hand side change? Might you explain a little further, please? :confused:
 
