- #1
fahraynk
- TL;DR Summary
- Trying to understand the difference between consistency and convergence in finite difference methods for PDEs, and why we need both definitions when they seem to mean the same thing.
What is the definition of consistency?
I have seen a proof that a finite difference scheme is consistent, where they basically plug a true solution ##u(t)## into the finite difference scheme and expand every term, for example ##u^{i+1}_j## and ##u^i_{j+1}##, using Taylor polynomials. Then they show that the residual left over after substituting the Taylor expansions into the scheme goes to zero as ##\Delta t## goes to zero.

So it seems like the definition should be: Consistency: the error of the true solution substituted into the finite difference scheme goes to zero as the time step goes to zero.

So, on the difference between convergence and consistency, it seems like convergence is computing ##u(t+\Delta t)## with the finite difference scheme and getting a small error, while consistency is plugging the true values of ##u(t)## and ##u(t+\Delta t)## into the finite difference scheme and getting a small error.

But... these seem like exactly the same thing: if the scheme is convergent, the approximation of ##u(t+\Delta t)## converges to the true value, so whether I plug the true value or the approximation into the finite difference equation, why would the output be different?

Can anyone give me some intuition?
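To make concrete what I mean by "plugging the true solution into the scheme", here is a rough sketch I put together (my own example, not from the proof I saw): for the heat equation ##u_t = u_{xx}## with exact solution ##u(x,t) = e^{-t}\sin x##, I substitute the exact solution into the forward-time, centered-space scheme and measure the leftover residual as the step sizes shrink.

```python
import numpy as np

def residual(dt, dx, x=1.0, t=0.5):
    """Residual of the exact solution substituted into the FTCS scheme
    for u_t = u_xx. Exact solution (chosen for illustration): u = exp(-t)*sin(x)."""
    u = lambda x, t: np.exp(-t) * np.sin(x)
    # forward difference in time, applied to the *true* solution
    time_diff = (u(x, t + dt) - u(x, t)) / dt
    # centered second difference in space, applied to the *true* solution
    space_diff = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
    # consistency says this residual (truncation error) -> 0 as dt, dx -> 0
    return abs(time_diff - space_diff)

for k in range(4):
    h = 0.1 / 2**k
    print(f"h = {h:.4f}, residual = {residual(h, h):.2e}")
```

The residual shrinks as the steps shrink, which is what the consistency proof establishes. My question is why this is not automatically the same statement as convergence.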