A finite difference scheme for y'(t) = a*y(t)

feynman1
It seems no finite difference scheme is stable for ##a > 0##, ##\Delta t > 0##, correct?
 
For any discretization I think you end up with a linear recurrence
$$A_{n+1}y_{n+1} = A_ny_n + \dots + A_{n-k}y_{n-k}$$
where each ##A_i \in \mathbb{C}[a\Delta t]##. The solution is then
$$y_n = \sum_{j=1}^{k+1} \alpha_j n^{m_j}\Lambda_j^n$$
where the ##\Lambda_j## are the roots of
$$A_{n+1}\Lambda^{k+1} = A_n\Lambda^k + \dots + A_{n-k}$$
and ##m_j = 0## unless there are repeated roots. The coefficients ##\alpha_j## are determined by ##y_0, y_1, \dots, y_k##. You can see from this that the absolute error ##|e^{na\Delta t} - y_n|## will increase without bound as ##n \to \infty## with ##a\Delta t > 0##.
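This error growth is easy to see numerically. A quick Python sketch (my own illustration; the choice of forward Euler and the values ##a = 1##, ##\Delta t = 0.1## are assumptions, not from the discussion):

```python
import math

# Forward Euler for y' = a*y (illustrative choice of scheme and parameters):
# track the absolute error |e^{n*a*dt} - y_n| as n grows.
a, dt = 1.0, 0.1
y = 1.0
errors = []
for n in range(1, 201):
    y *= (1.0 + a * dt)  # Euler recurrence: y_{n+1} = (1 + a*dt) * y_n
    errors.append(abs(math.exp(n * a * dt) - y))

print(errors[9], errors[99], errors[199])  # error keeps growing with n
```

Both ##(1 + a\Delta t)^n## and ##e^{na\Delta t}## are exponentials, but with different rates, so their difference grows without bound.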
 
pasmith said:
For any discretization I think you end up with a linear recurrence ... You can see from this that the absolute error ##|e^{na\Delta t} - y_n|## will increase without bound as ##n \to \infty## with ##a\Delta t > 0##.
Thanks a lot. Then what finite difference scheme can solve this equation?
 
feynman1 said:
Thanks a lot. Then what finite difference scheme can solve this equation?

I think the main point here is that the solution grows exponentially, and any discretization constructs polynomial approximations. The exponential will eventually grow faster than the polynomial, and then you'll never be able to catch up.

That said, it's a bit unusual to want a finite difference method to actually compute for *all* t>0. If you only care about a fixed time range (even if it's enormous), you can get arbitrarily good approximations in that region.
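To illustrate the fixed-time-range point, here is a minimal sketch (my own; it uses forward Euler with assumed values ##a = 1##, ##T = 5##): halving the step size keeps shrinking the endpoint error, even though the exact solution grows exponentially along the way.

```python
import math

# Forward Euler on a *fixed* interval [0, T] (assumed a = 1, T = 5):
# the endpoint error keeps shrinking as the step size is halved.
a, T = 1.0, 5.0
errs = []
for N in (50, 100, 200, 400):
    dt = T / N
    y = 1.0
    for _ in range(N):
        y *= (1.0 + a * dt)  # y_{n+1} = (1 + a*dt) * y_n
    errs.append(abs(math.exp(a * T) - y))

print(errs)  # decreasing sequence of endpoint errors
```

The error drops roughly by half each time the step is halved, as expected for a first-order method.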
 
feynman1 said:
Thanks a lot. Then what finite difference scheme can solve this equation?

Any of them.

For example, for the Euler method we have
$$y_{n+1} = (1 + a\Delta t)y_n$$
with solution
$$y_n = y_0(1 + a\Delta t)^n.$$
If we let ##\Delta t \to 0## with ##N\Delta t = T## fixed we get
$$y(t) = \lim_{N \to \infty} y_0\left(1 + \frac{aT}{N}\right)^N = y_0e^{aT}$$
which is the analytical solution. Thus the method works, in that you get a more accurate result by taking a smaller timestep.

It is also the case that for ##a > 0## both ##e^{na\Delta t}## and ##(1 + a\Delta t)^n## exhibit the same qualitative behaviour, namely exponential increase with ##n##. The absolute error grows because they do not increase at the same rate: the approximation can be written as ##e^{n\beta\Delta t}## where
$$\beta = \frac{\log(1 + a\Delta t)}{\Delta t} < a$$
and therefore increases more slowly than the analytical solution.

A tedious calculation shows that for the fourth-order Runge-Kutta method we have
$$y_{n+1} = \left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)y_n$$
so that
$$\beta = \frac{\log\left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)}{\Delta t}$$
which doesn't increase as fast as the analytical solution, but does increase faster than the Euler solution. And again we have ##\beta \to a## as ##\Delta t \to 0##.
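The effective growth rates can be checked numerically. A sketch (my own; the value ##a = 1## and the step sizes are assumptions):

```python
import math

# Effective growth rates beta for forward Euler and classical RK4 applied
# to y' = a*y: we expect beta_euler < beta_rk4 < a, both tending to a
# as dt -> 0.
a = 1.0

def beta_euler(dt):
    return math.log(1.0 + a * dt) / dt

def beta_rk4(dt):
    z = a * dt
    R = 1.0 + z + z**2 / 2 + z**3 / 6 + z**4 / 24  # RK4 stability polynomial
    return math.log(R) / dt

print(beta_euler(0.1), beta_rk4(0.1), a)   # increasing order
print(beta_euler(0.001), beta_rk4(0.001))  # both approach a
```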
 
It seems to me that the backward Euler scheme is stable even if a > 0. Stability doesn't mean that the solution does not grow without bound. It means that the difference between the numerical solution and the exact solution does not grow without bound.
 
Chestermiller said:
It seems to me that the backward Euler scheme is stable even if a > 0. Stability doesn't mean that the solution does not grow without bound. It means that the difference between the numerical solution and the exact solution does not grow without bound.
But the stability region for backward Euler is ##|1 - a\Delta t| \geq 1##, which is not satisfied even as ##\Delta t \to 0^+## when ##a > 0##.
 
pasmith said:
Any of them.

For example, for the Euler method we have ##y_{n+1} = (1 + a\Delta t)y_n## with solution ##y_n = y_0(1 + a\Delta t)^n##. ... Thus the method works, in that you get a more accurate result by taking a smaller timestep.
Thanks a lot, but isn't that an analysis of consistency rather than stability? It doesn't work for large ##\Delta t##, does it?
 
Office_Shredder said:
I think the main point here is that the solution grows exponentially, and any discretization constructs polynomial approximations. The exponential will eventually grow faster than the polynomial, and then you'll never be able to catch up.

That said, it's a bit unusual to want a finite difference method to actually compute for *all* t>0. If you only care about a fixed time range (even if it's enormous), you can get arbitrarily good approximations in that region.
Thanks, but in that case any finite difference scheme would work, so why discuss stability at all? Stability applies to finite ##t## as well.
 
  • #10
I think I was mistaken. For backward Euler, the difference scheme is $$y^{n+1}=\frac{y^n}{1-a\Delta t}$$ which is accurate only if ##a\Delta t \ll 1##.
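A quick sketch of that recurrence (my own; the values ##a = 1##, ##y_0 = 1## and the step sizes are assumptions):

```python
import math

# Backward Euler recurrence y^{n+1} = y^n / (1 - a*dt) for y' = a*y, a > 0:
# accurate when a*dt << 1, but the factor 1/(1 - a*dt) misbehaves as
# a*dt approaches 1.
a, y0 = 1.0, 1.0

def backward_euler(dt, N):
    y = y0
    for _ in range(N):
        y /= (1.0 - a * dt)  # y^{n+1} = y^n / (1 - a*dt)
    return y

T = 1.0
exact = y0 * math.exp(a * T)
print(abs(backward_euler(0.001, 1000) - exact))  # small: a*dt = 0.001 << 1
print(backward_euler(0.5, 2), exact)             # a*dt = 0.5: factor 2 per step, crude
```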
 
  • #11
Chestermiller said:
I think I was mistaken. For backward Euler, the difference scheme is $$y^{n+1}=\frac{y^n}{(1-a\Delta t)}$$ which is accurate only if ##a\Delta t<<1##.
But that doesn't fall into the stability region ##|1 - a\Delta t| \geq 1##.
 
  • #12
?
 