Efficient methods for solving a homogeneous Fredholm integral equation

AI Thread Summary
The discussion focuses on solving a homogeneous Fredholm integral equation with a complex kernel, highlighting challenges in numerical solutions due to kernel sparsity and the need for frequent parameter adjustments. The original method involves discretizing the integral and computing eigenvalues and eigenvectors, but this approach becomes inefficient for small values of the parameter omega, requiring excessive discretization points. Suggestions include exploring non-matrix methods like the method of moments (MoM) or boundary element method (BEM) for efficiency, and using sparse matrix solvers to compute only the largest eigenvalue and eigenvector. Additionally, parallel computing techniques are recommended to expedite the calculations. These strategies aim to provide faster and more efficient solutions to the integral equation.
Mute
Homework Helper
I have an integral equation of the form

y(x) = \int_0^{2\pi} dx'~K(x,x';\omega,\alpha)y(x'),

where K(x,x';\omega,\alpha) is a real, non-symmetric, non-separable kernel that depends on the parameters \omega and \alpha, as well as some others that I won't need to vary as much. The kernel is, unfortunately, relatively ugly, and I can't say too much about its properties analytically.

I have solved the equation numerically (in C) using the usual methods: I discretized the integral using Gauss-Legendre nodes x_j and weights w_j on the [0,2\pi) interval, then found the eigenvalues and eigenvectors of the matrix K_{ij} = K(x_i,x_j;\omega,\alpha)w_j, and picked the eigenvector corresponding to the eigenvalue of 1 (or close to it), which appears to be the largest eigenvalue. I used the GSL routines to compute the weights and solve the eigensystem. With the corresponding eigenvector in hand, the interpolating formula for the solution is

y(x) = \sum_{j=0}^{N-1} K(x,x_j;\omega,\alpha)w_j y_j,

for an N-point discretization, with y_j the components of the eigenvector corresponding to an eigenvalue of 1.
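For concreteness, here is a minimal sketch of the scheme just described, assuming a user-supplied kernel(x, xp, omega, alpha) (the name and signature are my invention), built on GSL's fixed-order Gauss-Legendre tables and nonsymmetric eigensolver:

```c
/* Minimal Nystrom sketch: build K_ij = K(x_i, x_j; omega, alpha) w_j and
 * extract the eigenvector whose eigenvalue is closest to 1.
 * Link with -lgsl -lgslcblas -lm. */
#include <gsl/gsl_integration.h>
#include <gsl/gsl_eigen.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_complex.h>
#include <math.h>

double kernel(double x, double xp, double omega, double alpha); /* hypothetical */

/* On return, x[], w[] hold the nodes/weights and y[] the eigenvector. */
void solve_nystrom(size_t N, double omega, double alpha,
                   double *x, double *w, double *y)
{
    gsl_integration_glfixed_table *t = gsl_integration_glfixed_table_alloc(N);
    gsl_matrix *K = gsl_matrix_alloc(N, N);

    /* Gauss-Legendre nodes and weights on [0, 2*pi) */
    for (size_t i = 0; i < N; i++)
        gsl_integration_glfixed_point(0.0, 2.0 * M_PI, i, &x[i], &w[i], t);

    /* discretized kernel, quadrature weights folded into the columns */
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            gsl_matrix_set(K, i, j, kernel(x[i], x[j], omega, alpha) * w[j]);

    gsl_vector_complex *eval = gsl_vector_complex_alloc(N);
    gsl_matrix_complex *evec = gsl_matrix_complex_alloc(N, N);
    gsl_eigen_nonsymmv_workspace *ws = gsl_eigen_nonsymmv_alloc(N);
    gsl_eigen_nonsymmv(K, eval, evec, ws);   /* note: destroys K */

    /* pick the eigenvalue closest to 1 */
    size_t best = 0;
    double dmin = HUGE_VAL;
    for (size_t k = 0; k < N; k++) {
        gsl_complex lk = gsl_vector_complex_get(eval, k);
        double d = hypot(GSL_REAL(lk) - 1.0, GSL_IMAG(lk));
        if (d < dmin) { dmin = d; best = k; }
    }
    for (size_t i = 0; i < N; i++)
        y[i] = GSL_REAL(gsl_matrix_complex_get(evec, i, best));

    gsl_eigen_nonsymmv_free(ws);
    gsl_matrix_complex_free(evec);
    gsl_vector_complex_free(eval);
    gsl_matrix_free(K);
    gsl_integration_glfixed_table_free(t);
}
```

The interpolating formula above then gives y(x) at arbitrary x from the returned nodes, weights, and eigenvector.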

However, there are a few problems with this approach:

(1) As \omega becomes small (less than ~100 for the typical parameter choices I have been using), the kernel becomes increasingly sparse, being significantly non-zero only near (0,2\pi), (2\pi,0), and a curve near the diagonal. As a result, for small \omega I need to sample very many points when I discretize the integral; otherwise the peaked regions are not well resolved and solving the eigensystem gives a largest eigenvalue that isn't even close to 1. For \omega greater than about 100, the kernel is not very sparse and ~30 discretization points are sufficient to get the right eigenvalue. Near \omega \approx 10, I need around 100 discretization points to get an eigenvalue near 1. Below about 10, the number of points I need is simply too large.

(2) The other problem is that I need to change the parameters \omega and \alpha quite frequently, which means that every time I change the parameters I need to regenerate the discretized kernel and re-solve the eigensystem. A lot of the interesting features I am trying to look at occur for values of \omega below 100, which means that the matrices are typically of size ~100x100. The GSL routines I am using compute all the eigenvalues and eigenvectors of the discrete kernel, but I only need the one eigenvector with eigenvalue (approximately) 1, which seems to be the largest and only real eigenvalue. There doesn't seem to be an option to have the routine compute just the largest eigenvalue and its eigenvector (and I'm not sure whether skipping the others would be significantly faster, or would just use less memory).

So, what I am looking for are methods to quickly compute the solution to such an integral equation, fast enough that the calculation can be repeated many times as the parameters change.

I would be interested in references to non-matrix methods that could solve the problem more quickly than matrix methods (and, ideally, give me an interpolating formula for the result). Alternatively, if I am to stay with matrix methods, I'd welcome pointers to C packages that can quickly compute just the largest eigenvalue and eigenvector, to offset the need for many more points when \omega becomes small, or any other tricks that might speed up the computation but won't take another month to implement.

Thanks for any help,

--Mute
 


Dear Mute,

Thank you for laying out your approach so clearly. After reviewing it, I have a few suggestions that may help.

Firstly, I would recommend exploring projection-based methods such as the method of moments (MoM) or the boundary element method (BEM) for solving your integral equation. These methods handle non-separable, non-symmetric kernels by expanding the solution in a set of basis functions rather than sampling it at quadrature points, which can require far fewer unknowns than a fine Nyström discretization when the kernel is sharply peaked; the basis expansion itself then serves as an interpolating formula for the solution. You can find more information about these methods in the book "Boundary Integral Equations" by George C. Hsiao and Wolfgang L. Wendland.
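To make the projection concrete: a Galerkin-style method of moments expands the unknown in a truncated orthonormal basis \{\phi_n\} on [0,2\pi) (a periodic Fourier basis would be a natural, though purely illustrative, choice here), turning the integral equation into a small algebraic eigenproblem:

y(x) \approx \sum_{n=0}^{M-1} c_n \phi_n(x), \qquad c_m = \sum_{n=0}^{M-1} A_{mn} c_n, \qquad A_{mn} = \int_0^{2\pi}\!\!\int_0^{2\pi} \phi_m(x)\,K(x,x';\omega,\alpha)\,\phi_n(x')\,dx\,dx'.

One again looks for the eigenvector of A with eigenvalue 1; if the solution y(x) is smooth even where the kernel is sharply peaked, M can be far smaller than the number of quadrature points a Nyström scheme needs.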

Secondly, if you prefer to stick with the matrix method, I would suggest an iterative eigensolver rather than the dense GSL routines. Since you only need the dominant eigenpair, methods such as power iteration or implicitly restarted Arnoldi compute only the largest eigenvalue and eigenvector, saving time and memory. A standard package for this is ARPACK (written in Fortran, but callable from C, e.g. via the arpack-ng distribution).
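If you would rather avoid a Fortran dependency, a plain power iteration is only a few lines on top of GSL's BLAS interface. This sketch assumes, as you observe, that the eigenvalue near 1 is dominant in magnitude; if complex eigenvalues could exceed it, you would need an Arnoldi-type method instead:

```c
/* Power iteration for the dominant eigenpair of the discretized kernel.
 * Each step costs one matrix-vector product, O(N^2), instead of the O(N^3)
 * full eigendecomposition. Link with -lgsl -lgslcblas -lm. */
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>
#include <math.h>

/* Returns the eigenvalue estimate; y is overwritten with the eigenvector. */
double power_iteration(const gsl_matrix *K, gsl_vector *y,
                       int max_iter, double tol)
{
    gsl_vector *z = gsl_vector_alloc(y->size);
    double lambda = 0.0, lambda_old = 0.0;

    gsl_vector_set_all(y, 1.0 / sqrt((double) y->size)); /* unit start vector */

    for (int it = 0; it < max_iter; it++) {
        gsl_blas_dgemv(CblasNoTrans, 1.0, K, y, 0.0, z);  /* z = K y */
        gsl_blas_ddot(y, z, &lambda);      /* Rayleigh quotient (y is unit) */
        gsl_vector_scale(z, 1.0 / gsl_blas_dnrm2(z));     /* normalize */
        gsl_vector_memcpy(y, z);
        if (it > 0 && fabs(lambda - lambda_old) < tol * fabs(lambda))
            break;
        lambda_old = lambda;
    }
    gsl_vector_free(z);
    return lambda;
}
```

A useful side benefit when sweeping \omega and \alpha in small steps: the eigenvector from the previous parameter values makes an excellent starting vector, which typically cuts the iteration count substantially.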

Lastly, if the above options do not work for your problem, you can try using a parallel computing approach. By distributing the computation over multiple processors, you can reduce the time required for solving the equation. There are many libraries available for parallel computing in C, such as OpenMP and MPI.
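For example, the kernel-matrix assembly (often the dominant cost when the kernel is expensive to evaluate) parallelizes trivially with OpenMP. This sketch reuses the hypothetical kernel() from the earlier example and assumes it is thread-safe:

```c
/* Parallel assembly of K_ij = K(x_i, x_j; omega, alpha) w_j with OpenMP.
 * Compile with -fopenmp; assumes kernel() has no shared mutable state. */
#include <omp.h>

extern double kernel(double x, double xp, double omega, double alpha);

void fill_kernel_matrix(int N, double omega, double alpha,
                        const double *x, const double *w, double *K)
{
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            K[(size_t) i * N + j] = kernel(x[i], x[j], omega, alpha) * w[j];
}
```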

I hope these suggestions will help you in your research and provide a faster and more efficient solution to your integral equation. Best of luck!
 