Solving a Fredholm Integral Equation Numerically

If you approximate the integral by a finite sum of the form$$\int_a^b h(x,t)f(t)\,dt \approx \sum_{k=1}^n w_k\, h(x,t_k) f(t_k),$$then the integral equation ##f(x)=g(x)+\int_a^b h(x,t)f(t)\,dt## becomes$$f(x)=g(x)+\sum_{k=1}^n w_k\, h(x,t_k) f(t_k).$$Evaluating this at the nodes ##x=t_1,\ldots,t_n## gives a linear system of equations that can be solved for the values ##f(t_k)##.
  • #1
member 428835

Homework Statement


A specific Fredholm integral equation is given as

$$\phi(x) = \pi x^2+\int_0^\pi3(0.5\sin(3x)-tx^2)\phi(t)\,dt$$

and the exact solution is ##\phi(x) = \sin 3x##.

Homework Equations


Nothing comes to mind.

The Attempt at a Solution


I'm unsure how to approach solving this numerically. I can immediately tell that ##\phi(0) = 0##. I'm trying to think of a simple procedural way of numerically solving this equation. Something like an Euler scheme. Any ideas on where to start?
 
  • #2
You can, e.g., use iteration. That is,
$$\phi_n(x)=\pi x^2+\int_0^{\pi} 3(0.5 \sin (3x)-tx^2)\phi_{n-1}(t) dt$$
with ##n=1, 2, \ldots## and ##\phi_0=0##. In this way you can compute the solution for the required values of ##x##. An alternative is to discretize the integral, which leads to a linear system of equations.
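For what it's worth, here is a minimal sketch (my own, not from the thread) of what that iteration might look like in Python, with the integral evaluated by the trapezoidal rule on a fixed grid; the grid size and the use of NumPy are arbitrary choices, and, as it turns out later in the thread, for this particular kernel the iterates actually grow rather than converge.

```python
import numpy as np

def picard_iterate(num_iters=10, n=200):
    """Iterate phi_{k+1}(x) = pi*x^2 + int_0^pi 3*(0.5*sin(3x) - t*x^2) * phi_k(t) dt,
    approximating the integral with the trapezoidal rule on a shared grid for x and t."""
    x = np.linspace(0.0, np.pi, n + 1)
    K = 3.0 * (0.5 * np.sin(3 * x)[:, None] - x[None, :] * x[:, None] ** 2)  # K(x_i, t_j)
    g = np.pi * x ** 2
    phi = np.zeros_like(x)                       # phi_0 = 0
    for _ in range(num_iters):
        phi = g + np.trapz(K * phi[None, :], x, axis=1)
    return x, phi

x, phi = picard_iterate()
print(np.max(np.abs(phi - np.sin(3 * x))))       # large: the iteration diverges here
```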
 
  • #3
joshmccraney said:

Homework Statement


A specific Fredholm integral equation is given as

$$\phi(x) = \pi x^2+\int_0^\pi3(0.5\sin(3x)-tx^2)\phi(t)\,dt$$

and the exact solution is ##\phi(x) = \sin 3x##.

Homework Equations


Nothing comes to mind.

The Attempt at a Solution


I'm unsure how to approach solving this numerically. I can immediately tell that ##\phi(0) = 0##. I'm trying to think of a simple procedural way of numerically solving this equation. Something like an Euler scheme. Any ideas on where to start?
An approach I saw once in a book long ago was a finite-difference scheme, where you approximate the integral by a sum, for example, using the trapezoidal rule:
$$h(t) + \int_0^t f(x) \,dx \approx h(n \Delta x) +\frac{1}{2} f(0) \Delta x + \sum_{i=1}^{n-1} f(i \Delta x) \Delta x + \frac{1}{2} f(n \Delta x) \Delta x,$$
when ##t = n \Delta x##. Using that you can get a set of coupled linear equations for ##f(\Delta x), f(2 \Delta x) , \ldots, f(n \Delta x)##. The equations have, essentially, a triangular form, so we get ##f(\Delta x)## in terms of ##f(0)## and ##h(\Delta x)##, get ##f(2 \Delta x)## in terms of ##h(2 \Delta x), f(0), f(\Delta x)##, etc.
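For concreteness, here is a minimal Python sketch of that forward-substitution idea (my own illustration, for an equation with a variable upper limit, i.e. a Volterra-type equation, rather than the fixed-limit equation in this thread; the toy kernel at the end is made up for the check):

```python
import numpy as np

def solve_volterra_trapezoid(K, g, T, n):
    """Solve f(t) = g(t) + int_0^t K(t, s) f(s) ds on [0, T] with the trapezoidal
    rule: the system is triangular, so each f(m*dt) follows from earlier values."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    f = np.empty(n + 1)
    f[0] = g(t[0])                                   # the integral vanishes at t = 0
    for m in range(1, n + 1):
        # trapezoid weights: 1/2 at both endpoints, 1 at the interior points
        s = 0.5 * K(t[m], t[0]) * f[0] + np.sum(K(t[m], t[1:m]) * f[1:m])
        # f[m] also appears on the right-hand side with weight dt/2; solve for it
        f[m] = (g(t[m]) + dt * s) / (1.0 - 0.5 * dt * K(t[m], t[m]))
    return t, f

# toy check: f(t) = 1 + int_0^t f(s) ds has the exact solution f(t) = exp(t)
t, f = solve_volterra_trapezoid(lambda x, s: 0.0 * s + 1.0, lambda x: 1.0, T=1.0, n=200)
print(np.max(np.abs(f - np.exp(t))))                 # small discretization error
```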
 
  • #4
eys_physics said:
You can, e.g., use iteration. That is,
$$\phi_n(x)=\pi x^2+\int_0^{\pi} 3(0.5 \sin (3x)-tx^2)\phi_{n-1}(t) dt$$
with ##n=1, 2, ...## and ##\phi_0=0##. In this way you can compute the solution for the required values of ##x##.
I like your second suggestion and also the approach Ray Vickson suggested. However, are you saying in your first suggestion that ##\phi(x) = \lim_{n\to\infty}\phi_n(x)##? If so, can you explain how?
 
  • #5
Ray Vickson said:
An approach I saw once in a book long ago was a finite-difference scheme, where you approximate the integral by a sum, for example, using the trapezoidal rule:
$$h(t) + \int_0^t f(x) \,dx \approx h(n \Delta x) +\frac{1}{2} f(0) \Delta x + \sum_{i=1}^{n-1} f(i \Delta x) \Delta x + \frac{1}{2} f(n \Delta x) \Delta x,$$
when ##t = n \Delta x##. Using that you can get a set of coupled linear equations for ##f(\Delta x), f(2 \Delta x) , \ldots, f(n \Delta x)##. The equations have, essentially, a triangular form, so we get ##f(\Delta x)## in terms of ##f(0)## and ##h(\Delta x)##, get ##f(2 \Delta x)## in terms of ##h(2 \Delta x), f(0), f(\Delta x)##, etc.
Hey Ray Vickson, so in my situation wouldn't I have something like $$
\phi(x) = \pi x^2 +\frac{1}{2}\, 3(0.5\sin(3x)-\pi x^2)\phi(\pi)+\sum_{i=1}^{n-1}3\left(0.5\sin(3x)-\frac{i\pi}{n} x^2\right)\phi\left(\frac{i\pi}{n}\right)
$$
How does this give us a system of equations?
 
  • #6
joshmccraney said:
Hey Ray Vickson, so in my situation wouldn't I have something like $$
\phi(x) = \pi x^2 +\frac{1}{2}\, 3(0.5\sin(3x)-\pi x^2)\phi(\pi)+\sum_{i=1}^{n-1}3\left(0.5\sin(3x)-\frac{i\pi}{n} x^2\right)\phi\left(\frac{i\pi}{n}\right)
$$
How does this give us a system of equations?

If we write the equation as
$$\phi(x) = f(x) + \int_0^{\pi} K(x,t) \phi(t) \, dt, $$
then when we discretize ##x## and ##t## to ##0,h, 2h, \cdots, Nh = \pi## and use an integration scheme of the form
$$\int_0^{\pi} F(t) \, dt \approx \sum_{i=0}^N c_i h F(ih), \hspace{4em} (1)$$
then the integral equation is approximated as
$$p_i = f_i + \sum_{j=0}^N h C_{ij} p_j,$$
where ##p_i = \phi(ih), f_i = f(ih)## and ##C_{ij} = c_j K(ih,jh)##. The variables to be solved for are the ##p_0, p_1, \ldots, p_N.##

Note that different integration schemes just correspond to different constants ##c_i## in (1). The trapezoidal rule uses ##c_0=c_N = 1/2## and ##c_i = 1, \,1 \leq i \leq N-1##. Simpson's rule would correspond to some other set of constants ##c_i##.

However, that is not the way I would finally decide to solve your specific problem. Because of the special structure of your right-hand-side, your equation is easily solvable by elementary methods. Unfortunately, I cannot say more without giving away the whole solution. All I can suggest is that you look very carefully at the actual form of your equation. If you still have not got it a week or two after turning in your assignment for marking, I will be happy to outline the best method then.
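For illustration, here is a minimal NumPy sketch of exactly this scheme applied to the equation in post #1, with trapezoidal weights (the value of N and the use of NumPy are my own choices, not prescribed in the thread):

```python
import numpy as np

# Nystrom discretization: p_i = f_i + sum_j h*c_j*K(ih, jh)*p_j, trapezoidal weights.
N = 200
h = np.pi / N
x = np.linspace(0.0, np.pi, N + 1)                         # grid points ih, i = 0..N
c = np.ones(N + 1); c[0] = c[-1] = 0.5                     # c_0 = c_N = 1/2, else 1
K = 3.0 * (0.5 * np.sin(3 * x)[:, None] - x[None, :] * x[:, None] ** 2)  # K(ih, jh)
f = np.pi * x ** 2

A = np.eye(N + 1) - h * K * c[None, :]                     # (I - h*K*diag(c)) p = f
p = np.linalg.solve(A, f)
print(np.max(np.abs(p - np.sin(3 * x))))                   # close to the exact sin(3x)
```

The result is a dense (N+1)-by-(N+1) linear system, which is cheap to solve directly at this size.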
 
  • #7
joshmccraney said:
I like your second suggestion and also the approach Ray Vickson suggested. However, are you saying in your first suggestion that ##\phi(x) = \lim_{n\to\infty}\phi_n(x)##? If so, can you explain how?

Hey, Josh. I'm sorry for the delay in my reply, but I was busy with other things yesterday. You are correct that ##\phi(x)=\lim_{n \rightarrow \infty}\phi_n(x)##.
I cannot give a formal proof of that, but I can give you some ideas. For this purpose, I will consider the slightly more general equation
$$f(x)=g(x)+\int_a^b h(x,t) f(t)\, dt.$$
We can do one iteration by inserting the equation's own definition of ##f(t)## into the right-hand side, i.e.
$$f(x)=g(x)+\int_a^b h(x,t)\left[g(t)+\int_a^b h(t,t_1)f(t_1)\,dt_1\right] dt.$$ This same process can be continued an arbitrary number of times, leading to more and more integrals on the right-hand side.

Consider now the method based on iteration I proposed. Then,
$$f_0=0,$$
$$f_1(x)=g(x)+\int_a^b h(x,t) f_0(t)\, dt,$$
$$f_2(x)=g(x)+\int_a^b h(x,t) f_1(t)\,dt=g(x)+\int_a^b h(x,t)\left[g(t)+\int_a^b h(t,t_1)f_0(t_1)\,dt_1\right] dt,$$ etc.

One can therefore see that the iteration leads to an expression of the type above. It can also be shown (provided the resulting Neumann series converges) that ##\phi(x)=\lim_{n \rightarrow \infty}\phi_n(x)##.

The second method I proposed is quite similar to the one proposed by Ray, but a bit simpler. Consider again the equation
$$f(x)=g(x)+\int_a^b h(x,t) f(t)\, dt.$$
By Gauss-Legendre integration or another quadrature method, one can write
$$f(x)=g(x)+\sum_{k=1}^n w_k h(x,t_k) f(t_k),$$
where ##t_k## are the nodes and ##w_k## the corresponding weights. By evaluating this at a mesh of discrete points ##x_i,\ i=1,\ldots,N## (for instance at the nodes themselves), you obtain a system of linear equations that can be solved by standard methods.
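A short sketch of the same idea with Gauss-Legendre nodes and weights (my own illustration; the nodes are mapped from [-1, 1] to [0, π], and the node count is an arbitrary choice), again applied to the equation from post #1:

```python
import numpy as np

n = 20
z, w = np.polynomial.legendre.leggauss(n)          # nodes/weights on [-1, 1]
t = 0.5 * np.pi * (z + 1.0)                        # map nodes t_k to [0, pi]
w = 0.5 * np.pi * w                                # rescale the weights

K = 3.0 * (0.5 * np.sin(3 * t)[:, None] - t[None, :] * t[:, None] ** 2)  # h(x_i, t_k)
g = np.pi * t ** 2

# f(t_i) = g(t_i) + sum_k w_k h(t_i, t_k) f(t_k)  ->  (I - K diag(w)) f = g
fvals = np.linalg.solve(np.eye(n) - K * w[None, :], g)
print(np.max(np.abs(fvals - np.sin(3 * t))))       # agrees with the exact solution
```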
 
  • #8
The iteration technique does not work. Regardless of ##\phi_0##, the iterates ##\phi_n## do not converge.
 
  • #9
joshmccraney said:
The iteration technique does not work. Regardless of ##\phi_0##, the iterates ##\phi_n## do not converge.

I don't know why your iteration did not converge. Can you give some more details about your implementation?
From the formal point of view the iteration should converge. But, sometimes it is difficult in practice. Usually, the second method I proposed works well.
But, I'm surprised that the iteration doesn't work for such a simple equation.
 
  • #10
eys_physics said:
I don't know why your iteration did not converge. Can you give some more details about your implementation?
From the formal point of view the iteration should converge. But, sometimes it is difficult in practice. Usually, the second method I proposed works well.
But, I'm surprised that the iteration doesn't work for such a simple equation.
Yeah, I did it in Mathematica, but no luck. The coefficients of ##\sin(3x)## grow very large very quickly. I decided to use Ray Vickson's approach, and it worked great.
 
  • #11
Okay, good that it worked out well for you.
 
  • #12
joshmccraney said:
Yeah, I did it in Mathematica, but no luck. The coefficients of ##\sin(3x)## grow very large very quickly. I decided to use Ray Vickson's approach, and it worked great.

Yes, indeed, the iterative method diverges. Looking at the right-hand-side of the integral equation, we have
$$\phi(x) = \pi x^2 + \frac{3}{2} \sin(3x) \underbrace{\int_0^{\pi} \phi(t) \, dt}_{\text{constant}} -
3 x^2 \underbrace{\int_0^{\pi} t \phi(t) \, dt}_{\text{constant}}, $$
so we can write the iterative results as
$$\phi_n(x) = \pi A_n x^2 + B_n \sin(3x),$$
with
$$A_{n+1} = 1-B_n - \frac{3}{4} \pi^4 A_n, \: B_{n+1} = B_n + \frac{1}{2} \pi^4 A_n \hspace{4em}(1)$$
and ##A_0 = 0,\ B_0 = 0## (i.e. starting from ##\phi_0 = 0##).

I solved the recursion (1) with the given initial conditions using Maple, and obtained
$$\begin{array}{rcl} A_n &=& 0.0137696\, (0.3363889)^n - 0.0137696\, (-72.393206)^n \\
B_n &=& 1 - 1.0091243\, (0.3363889)^n + 0.0091243\, (-72.393206)^n
\end{array}
$$
This obviously explodes exponentially.

Nevertheless, a non-iterative solution is easy to get: writing ##\phi(x) = \pi A x^2 + B \sin(3x)##, the integral equation yields the equations
$$A = 1 - \frac{3}{4} \pi^4 A - B, \; B = B + \frac{1}{2} \pi^4 A,$$
so we have ##A=0, B=1##. This gives ##\phi(x) = \sin(3x)## exactly, as claimed.
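As an independent check (my own addition, not part of the original post), a few lines of SymPy confirm both the fixed point and the two growth rates quoted above:

```python
import sympy as sp

x, t = sp.symbols('x t')

# phi(x) = sin(3x) satisfies the integral equation from post #1
rhs = sp.pi * x**2 + sp.integrate(
    3 * (sp.Rational(1, 2) * sp.sin(3 * x) - t * x**2) * sp.sin(3 * t), (t, 0, sp.pi))
print(sp.simplify(rhs - sp.sin(3 * x)))            # prints 0

# growth rates of recursion (1): eigenvalues of its homogeneous part
M = sp.Matrix([[-sp.Rational(3, 4) * sp.pi**4, -1],
               [sp.Rational(1, 2) * sp.pi**4, 1]])
print([sp.N(ev) for ev in M.eigenvals()])          # about 0.336 and -72.39
```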
 
  • #13
Was this the elementary solution you proposed? To be honest I like the linear system much more :oldsmile:
 
  • #14
joshmccraney said:
Was this the elementary solution you proposed? To be honest I like the linear system much more :oldsmile:

Basically, yes. It works for any equation of the form
$$f(x) = g(x) + \int_a^b K(x,y) f(y) \, dy$$
with
$$K(x,y) = u_1(x) v_1(y) + u_2(x) v_2(y) + \cdots + u_r(x) v_r(y)$$
The reason is that the quantities ##C_i = \int_a^b v_i(y) f(y) \, dy## are constants (although unknown), so the solution must be of the form
$$f(x) = g(x) + C_1 u_1(x) + C_2 u_2(x) + \cdots + C_r u_r(x).$$
Plugging that form into the integral equation gives you a set of linear equations for finding the coefficients ##C_i##.
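To spell out that last step (my own addition, just filling in the algebra): substituting the ansatz back into the definition of the constants gives, for each ##i##,
$$C_i = \int_a^b v_i(y) f(y)\, dy = \int_a^b v_i(y) g(y)\, dy + \sum_{j=1}^r C_j \int_a^b v_i(y) u_j(y)\, dy,$$
i.e. ##C_i = d_i + \sum_{j} M_{ij} C_j## with the known numbers ##d_i = \int_a^b v_i(y) g(y)\, dy## and ##M_{ij} = \int_a^b v_i(y) u_j(y)\, dy##, which is an ##r \times r## linear system for the ##C_i##.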
 

What is a Fredholm Integral Equation?

A Fredholm integral equation is an equation in which the unknown function appears under an integral taken over a fixed interval; the equation is then solved for that unknown function.

How is a Fredholm Integral Equation solved numerically?

A Fredholm integral equation is typically solved numerically by quadrature methods, such as Gauss-Legendre or Gauss-Jacobi quadrature: the integral is approximated by a weighted sum over discrete points, and the resulting linear system is solved for the unknown function at those points.

What are the applications of Fredholm Integral Equations?

Fredholm Integral Equations have various applications in different fields of science and engineering, including signal processing, image reconstruction, and quantum mechanics. They are also used in solving boundary value problems and in the study of differential equations.

What are the challenges in solving Fredholm Integral Equations numerically?

One of the main challenges in solving Fredholm Integral Equations numerically is the choice of the numerical method. Different methods may have varying levels of accuracy and efficiency for different types of integral equations. Another challenge is dealing with singularities or highly oscillatory integrands, which can affect the accuracy of the numerical solution.

Are there any software packages available for solving Fredholm Integral Equations numerically?

Yes, there are various software packages that can be used to solve Fredholm integral equations numerically, such as MATLAB, Mathematica, and GNU Octave. These environments provide the quadrature routines and linear-algebra tools needed to implement the methods above efficiently.
