
- Thread starter: feynman1

- #1

It seems that no finite difference scheme for y' = a*y is stable when a > 0 and dt > 0. Is that correct?

- #2

Any linear multistep scheme for this equation leads to a recurrence of the form [tex]
A_{n+1}y_{n+1} = A_ny_n + \dots + A_{n-k}y_{n-k}[/tex] where each [itex]A_i \in \mathbb{C}[a\Delta t][/itex]. The solution is then [tex]
y_n = \sum_{j=1}^{k+1} \alpha_j n^{m_j}\Lambda_j^n[/tex] where the [itex]\Lambda_j[/itex] are the roots of [tex]
A_{n+1}\Lambda^{k+1} = A_n\Lambda^k + \dots + A_{n-k}[/tex] and [itex]m_j = 0[/itex] unless there are repeated roots. The coefficients [itex]\alpha_j[/itex] are determined by [itex]y_0, y_1, \dots, y_k[/itex]. You can see from this that the absolute error [itex]|e^{na\Delta t} - y_n|[/itex] will increase without bound as [itex]n \to \infty[/itex] when [itex]a\Delta t > 0[/itex].
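A quick numerical sketch of the point above (my own illustration, not from the thread): for forward Euler, the simplest case k = 0, the recurrence has the single root Λ = 1 + aΔt, and the absolute error against e^{naΔt} grows without bound as n increases.

```python
import math

# Forward Euler applied to y' = a*y with y_0 = 1: the recurrence root is
# Lambda = 1 + a*dt, so y_n = Lambda**n.  Since Lambda < e^{a*dt} for a*dt > 0,
# the absolute error |e^{n*a*dt} - y_n| is positive and grows with n.
a, dt = 1.0, 0.1
Lam = 1.0 + a * dt

errors = []
for n in (10, 100, 1000):
    exact = math.exp(n * a * dt)
    approx = Lam ** n
    errors.append(exact - approx)

assert all(e > 0 for e in errors)          # Euler always lags the true solution
assert errors[0] < errors[1] < errors[2]   # and the gap widens without bound
```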

- #3

Thanks a lot! Then what finite difference scheme can solve this equation?


- #4

Thanks a lot! Then what finite difference scheme can solve this equation?

I think the main point here is that the solution grows exponentially, and any discretization constructs polynomial approximations. The exponential will eventually grow faster than the polynomial, and then you'll never be able to catch up.

That said, it's a bit unusual to want a finite difference method to actually compute for *all* t>0. If you only care about a fixed time range (even if it's enormous), you can get arbitrarily good approximations in that region.
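A small sketch of that last point (parameters are my own, for illustration): on a fixed window [0, T], refining the forward Euler step makes the result at t = T as accurate as desired, even though no fixed step stays accurate for all time.

```python
import math

# Forward Euler for y' = a*y on the fixed window [0, T]: the result
# y_0 * (1 + a*T/N)**N approaches y_0 * e^{a*T} as N grows.
a, T, y0 = 1.0, 5.0, 1.0
exact = y0 * math.exp(a * T)

def euler_at_T(N):
    dt = T / N
    y = y0
    for _ in range(N):
        y += a * dt * y   # y_{n+1} = (1 + a*dt) * y_n
    return y

errs = [abs(exact - euler_at_T(N)) for N in (10, 100, 1000, 10000)]
# Refining the step shrinks the error at the fixed final time T.
assert errs[0] > errs[1] > errs[2] > errs[3]
```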

- #5

Thanks a lot! Then what finite difference scheme can solve this equation?

Any of them.

For example, for the Euler method we have [tex]

y_{n+1} = (1 + a\Delta t)y_n[/tex] with solution [tex]

y_n = y_0(1 + a\Delta t)^n.[/tex] If we let [itex]\Delta t \to 0[/itex] with [itex]N\Delta t = T[/itex] fixed we get [tex]

y(t) = \lim_{N \to \infty} y_0\left(1 + \frac{aT}{N}\right)^N = y_0e^{aT}[/tex] which is the analytical solution. Thus the method works, in that you get a more accurate result by taking a smaller timestep.

It is also the case that for [itex]a > 0[/itex] both [itex]e^{na\Delta t}[/itex] and [itex](1 + a\Delta t)^n[/itex] exhibit the same qualitative behaviour, namely exponential increase with [itex]n[/itex]. The absolute error grows because they do not increase at the same rate: The approximation can be written as [itex]e^{n\beta\Delta t}[/itex] where [tex]

\beta = \frac{\log(1 + a\Delta t)}{\Delta t} < a[/tex] and therefore increases more slowly than the analytical solution.
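This claim is easy to check numerically (my own spot check, with an arbitrary choice of a): the effective Euler rate β = log(1 + aΔt)/Δt sits strictly below a for every Δt > 0 and approaches a as Δt → 0.

```python
import math

# Effective growth rate of forward Euler for y' = a*y:
# beta = log(1 + a*dt) / dt, which is < a for all dt > 0 and -> a as dt -> 0.
a = 2.0
betas = [math.log(1.0 + a * dt) / dt for dt in (1.0, 0.1, 0.01, 0.001)]

assert all(b < a for b in betas)                    # always below the true rate a
assert betas[0] < betas[1] < betas[2] < betas[3]    # improves monotonically
assert abs(betas[-1] - a) < 1e-2                    # close to a for small dt
```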

A tedious calculation shows that for the fourth-order Runge-Kutta method we have [tex]

y_{n+1} = \left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)y_n[/tex] so that [tex]

\beta = \frac{\log\left(1 + (a\Delta t) + \tfrac12(a\Delta t)^2 + \tfrac16(a\Delta t)^3 + \tfrac{1}{24}(a\Delta t)^4\right)}{\Delta t}[/tex] which doesn't increase as fast as the analytical solution, but does increase faster than the Euler solution. And again we have [itex]\beta \to a[/itex] as [itex]\Delta t \to 0[/itex].
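The RK4 comparison can be spot-checked the same way (my own numbers): its amplification factor is the degree-4 Taylor polynomial of e^{aΔt}, so its effective rate lands between the Euler rate and the true rate a.

```python
import math

# Amplification factors for y' = a*y at one step of size dt:
# Euler: 1 + x;  RK4: the degree-4 Taylor polynomial of e^x, with x = a*dt.
a, dt = 1.0, 0.5
x = a * dt
R_euler = 1.0 + x
R_rk4 = 1.0 + x + x**2 / 2 + x**3 / 6 + x**4 / 24

beta_euler = math.log(R_euler) / dt
beta_rk4 = math.log(R_rk4) / dt

# RK4 grows faster than Euler but still more slowly than the true solution.
assert beta_euler < beta_rk4 < a
```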


- #6

- #7

But the stability region for backward Euler is |1 - a*dt| >= 1, which is not satisfied even as dt -> 0+ when a > 0.

- #8

Thanks a lot, but isn't that analysis about consistency rather than stability? Doesn't it fail when dt is large?


- #9

Thanks, but in that case any finite difference scheme would work, so why discuss stability at all? Stability concerns finite t as well.

- #10

- #11

But that doesn't fall within the stability region |1 - a*dt| >= 1.

- #12

?
