Convergence of numeric schemes

In summary, the author analyzes the rate of convergence of numerical methods for approximating definite integrals and first-order initial value problems. He observes a pattern in the rates of convergence of the various schemes, but cannot prove his hypothesis. The quotient-form limit arising from two implicit methods turns out to have a rate of convergence of order 2 and is the only one that approaches $e$ from above rather than from below.
  • #1
MarkFL
A while back, I dug out a topic I had worked on many years ago, after taking a course in ordinary differential equations, and was left with an unanswered question, which I thought I would post here. While my question arose from studying numerical methods for approximating the solutions to ODEs, I feel it is probably more a question of numerical analysis.

I was analyzing the correlation between the approximation methods for definite integrals and those for first-order initial value problems, via the antiderivative form of the Fundamental Theorem of Calculus. In attempting to determine the rate of convergence of the various methods, I observed what I believe to be a pattern, but I have had difficulty proving my hypothesis.

I began by using the IVP:

$\displaystyle \frac{dy}{dx}=y$ where $y(0)=1$

and used explicit numerical schemes to approximate $y(1)$, which led to approximations of $e$.

Analysis of Euler's method (a Riemann sum over a regular partition) gave rise to

$\displaystyle e=\lim_{n\to\infty}\left(1+\frac{1}{n} \right)^n$

Use of a limit comparison test shows this scheme to be of order 1.
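
Spelled out for $f(y)=y$ and step size $h=1/n$ (a quick derivation for reference), each Euler step multiplies the current value by $(1+h)$:

$\displaystyle y_{k+1}=y_k+h\,y_k=y_k(1+h)\quad\Longrightarrow\quad y_n=\left(1+\frac{1}{n}\right)^n y_0,\qquad y_0=1.$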

Analysis of the improved Euler method (trapezoidal scheme) and the 2nd-order Runge-Kutta method (midpoint scheme), both of which use Euler's method as the predictor in a predictor-corrector step, shows that both give rise to:

$\displaystyle e=\lim_{n\to\infty}\left(1+\frac{1}{n}+\frac{1}{2n^2} \right)^n$

Use of a limit comparison test shows this scheme to be of order 2.
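
Written out for $f(y)=y$ and $h=1/n$ (a short check, consistent with the statement above), both one-step updates collapse to the same factor:

$\displaystyle \text{Heun: } y_{k+1}=y_k+\frac{h}{2}\Bigl(y_k+(1+h)y_k\Bigr)=y_k\left(1+h+\frac{h^2}{2}\right),\qquad \text{midpoint: } y_{k+1}=y_k+h\left(1+\frac{h}{2}\right)y_k=y_k\left(1+h+\frac{h^2}{2}\right).$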

Analysis of the 4th-order Runge-Kutta method (Simpson's, or prismoidal, scheme) gave rise to:

$\displaystyle e=\lim_{n\to\infty}\left(1+\frac{1}{n}+\frac{1}{2n^2}+\frac{1}{6n^3}+\frac{1}{24n^4} \right)^n$

Use of a limit comparison test shows this scheme to be of order 4.
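
The same kind of check works here: collapsing the four Runge-Kutta stages for $f(y)=y$ and $h=1/n$ gives

$\displaystyle k_1=y_k,\quad k_2=\left(1+\frac{h}{2}\right)y_k,\quad k_3=\left(1+\frac{h}{2}+\frac{h^2}{4}\right)y_k,\quad k_4=\left(1+h+\frac{h^2}{2}+\frac{h^3}{4}\right)y_k,$

$\displaystyle y_{k+1}=y_k+\frac{h}{6}\left(k_1+2k_2+2k_3+k_4\right)=y_k\left(1+h+\frac{h^2}{2}+\frac{h^3}{6}+\frac{h^4}{24}\right).$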

My hypothesis is that an explicit numerical scheme that yields

(1) $\displaystyle e=\lim_{n\to\infty}\left(\sum_{i=0}^{p}\left(\frac{1}{i!n^{i}} \right) \right)^n$

will be of order $p$.

Interestingly, two of the implicit methods yield the formula

$\displaystyle e=\lim_{n\to\infty}\left(\frac{2n+1}{2n-1} \right)^n$

but I did not explore its rate of convergence or what type of formula an implicit method of order $p$ will yield.
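
For example, the implicit trapezoidal (Crank-Nicolson) scheme applied to $f(y)=y$ with $h=1/n$ is one implicit method that produces exactly this quotient:

$\displaystyle y_{k+1}=y_k+\frac{h}{2}\left(y_k+y_{k+1}\right)\quad\Longrightarrow\quad y_{k+1}=y_k\,\frac{1+h/2}{1-h/2}=y_k\,\frac{2n+1}{2n-1}.$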

While it was a simple enough matter to show that (1) is valid and that the rate of convergence improves as $p$ increases, demonstrating that this rate of convergence increases linearly with $p$ (that is, that the scheme is of order $p$) is another matter entirely. I applied L'Hôpital's rule and Maclaurin expansions of logarithmic functions, but to no avail. I also looked into using a Taylor formula of order $p$, because of its relationship to the limit comparison test and its use in the derivation of the Runge-Kutta methods and of the Maclaurin power series.

If anyone could shed some light on this or post links to relevant material, I would greatly appreciate it! :cool:
 
  • #2
Just to verify, I created an Excel sheet with these sequences.
A log-log plot of $n$ versus the error of the approximation to $e$ shows the rate of convergence.
And indeed, for each approximation sequence we get a line whose slope corresponds to the order of convergence.
It turns out that the sequence with powers up to $p=6$ has 6th-order convergence.

The quotient form $\left(\dfrac{2n+1}{2n-1}\right)^n$ turns out to have convergence order 2.
It is also the only one that approaches $e$ from above instead of from below.
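
Here is a minimal Python sketch of the same log-log check (in the same spirit as the Excel sheet; $n$ is kept modest for larger $p$ so the errors stay well above double-precision round-off):

```python
import math

def seq(p, n):
    """S_p(n)**n, where S_p(n) = sum_{i=0}^{p} 1/(i! * n**i) -- the limit formula above."""
    return sum(1.0 / (math.factorial(i) * n ** i) for i in range(p + 1)) ** n

# Estimated order = slope of log(error) vs log(n) between two values of n.
# Larger p needs smaller n, otherwise the error drowns in round-off.
pairs = {1: (1000, 2000), 2: (100, 200), 4: (20, 40), 6: (10, 20)}
for p, (n1, n2) in pairs.items():
    e1, e2 = math.e - seq(p, n1), math.e - seq(p, n2)
    slope = math.log(e1 / e2) / math.log(n2 / n1)
    print(f"p = {p}: estimated order ≈ {slope:.2f}")
```

The estimated slopes come out close to $p$ in each case, matching the log-log lines described above.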
 
  • #3
I appreciate you taking the time to verify the rate of convergence up to $p=6$. (Yes)
 
  • #4
Here's an observation.
Suppose we let $p$ go to infinity; then for $n=1$ we get:

$\qquad \displaystyle\left.\left(\sum_{i=0}^\infty \dfrac{1}{i!\,n^i}\right)^n\right|_{n=1} = \left(1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots\right)^1 = e$

Hey! We know this series!
It is the Taylor expansion of $e^x$ evaluated at $x=1$.

And for higher values of n, we simply get $(e^{1/n})^n = e$.
So in the ultimate case we get a constant sequence of just e.
 
  • #5


I find your observations and hypothesis about the convergence of numerical schemes for approximating ODE solutions to be very interesting. It seems like you have put a lot of thought and effort into analyzing these methods and trying to understand their convergence rates.

One suggestion I have is to look into the concept of "order of convergence" in numerical analysis. This refers to the rate at which the error of a numerical method decreases as the step size decreases (equivalently, as the number of subintervals increases). It is usually denoted by the letter "p" and can be determined by analyzing the error term in the method's formula.

In your case, it seems like you have already observed a pattern where the order of convergence increases as the number of terms in the formula increases. This is consistent with the idea that more terms in the formula can lead to a more accurate approximation. However, it is important to note that the order of convergence may also depend on other factors such as the smoothness of the function being approximated.

I would suggest looking into the error analysis of the methods you have studied and comparing them to see if they align with your observations. This may also help you to understand why the implicit methods yield a different formula for $e$ and what their order of convergence is.
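
As a rough illustration of that kind of error analysis (a heuristic sketch, not a rigorous proof), the bracketed sum in (1) is a truncated exponential series:

$\displaystyle \sum_{i=0}^{p}\frac{1}{i!\,n^{i}} = e^{1/n}-R_p(n),\qquad R_p(n)=\sum_{i=p+1}^{\infty}\frac{1}{i!\,n^{i}}\sim\frac{1}{(p+1)!\,n^{p+1}},$

so that

$\displaystyle \left(\sum_{i=0}^{p}\frac{1}{i!\,n^{i}}\right)^{n} = e\left(1-e^{-1/n}R_p(n)\right)^{n}\approx e\left(1-\frac{1}{(p+1)!\,n^{p}}\right),$

suggesting an error that decays like $n^{-p}$, in line with the log-log slopes reported above.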

Overall, your question and observations are a great example of the intersection between numerical analysis and differential equations. I hope you continue to explore this topic and make new discoveries. Good luck!
 

1. What is convergence of numeric schemes?

Convergence of numeric schemes is a concept in numerical analysis that refers to the behavior of a numerical method as the step size approaches zero (equivalently, as the number of steps grows). It indicates whether the numerical method produces results that approach the exact solution of the problem.

2. Why is convergence important in numerical analysis?

Convergence is important because it ensures that the numerical method is producing accurate results. If a numerical method does not converge, it means that the results are not approaching the exact solution and therefore, the method is not reliable for solving the problem.

3. How is the convergence rate of a numerical scheme determined?

The convergence rate of a numerical scheme is determined by analyzing how the error behaves as the step size decreases. It is typically expressed as the power $p$ such that the error is proportional to $h^p$, where $h$ is the step size; for example, halving the step size of a second-order scheme reduces the error by roughly a factor of four.

4. What factors can affect the convergence of a numerical scheme?

There are several factors that can affect the convergence of a numerical scheme, including the choice of the numerical method, the stability of the method, and the complexity of the problem being solved. Other factors such as rounding errors and computational limitations can also impact the convergence of a numerical scheme.

5. How can the convergence of a numerical scheme be improved?

The convergence of a numerical scheme can be improved by using higher order methods, refining the step size, and implementing more stable numerical algorithms. Additionally, careful analysis and selection of the numerical method for a specific problem can also lead to improved convergence rates.
