Approximating Equations for Unknowns: How to Justify the Form of $U_0$?

  • Context: MHB
  • Thread starter: evinda

Discussion Overview

The discussion revolves around justifying the form of the equation for the unknown $U_0$ in a finite difference method applied to a boundary value problem. Participants explore the implications of Taylor expansions, the inclusion of second-order corrections, and the invertibility of the coefficient matrix associated with the finite difference scheme.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant presents the finite difference method for approximating the solution to a boundary value problem and seeks clarification on the form of the equation for $U_0$.
  • Another participant suggests that the inclusion of a second-order correction in the Taylor expansion leads to the desired form of the equation for $U_0$.
  • There is a question about the assumption of no error in the Taylor expansion, with a later reply indicating that while there will be an error, it may be of a smaller order.
  • Participants discuss the structure of the coefficient matrix $A$ and its potential invertibility, with one noting that it may not be guaranteed for all functions $q$ and values of $h$.
  • One participant proposes that if $h$ is small enough, the matrix approaches a specific form that may be invertible, referencing properties of tridiagonal matrices.
  • There is a consideration of the error associated with the finite difference method, with a participant questioning whether an error of order $O(h^3)$ can be ignored as it converges to zero.

Areas of Agreement / Disagreement

Participants express differing views on the assumptions regarding errors in Taylor expansions and the conditions under which the coefficient matrix is invertible. The discussion remains unresolved with multiple competing perspectives on these issues.

Contextual Notes

Participants acknowledge the dependence of the invertibility of the matrix on the choice of the function $q$ and the value of $h$. There are also unresolved questions about the formal justification for ignoring certain errors in the approximation.

evinda
Hello! (Wave)

Given the problem $$-u''(x)+q(x)u(x)=f(x), \ \ 0 \leq x \leq 1, \\ u'(0)=u(0), \ \ u(1)=0$$ where $f,q$ are continuous functions on $[0,1]$ with $q(x) \geq q_0>0, \ x \in [0,1]$. Let $U_j$ be the approximations of $u(x_j)$ at the points $x_j=jh, \ j=0, 1, \dots , N+1$, where $(N+1)h=1$, computed by the finite difference method $$-\frac{1}{h^2}\left (U_{j-1}-2U_j+U_{j+1}\right )+q(x_j)U_j=f(x_j), \ \ 1 \leq j \leq N, \\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right ),$$ where $U_{N+1}=0$.

I have to justify the form of the equation for the unknown $U_0$. One way to approximate the first derivative $u'(x_j)$ is the forward difference $$u'(x_j) \approx \frac{u(x_{j+1})-u(x_j)}{h},$$

so from $u'(0)=u(0)$ we have $$\frac{U_1-U_0}{h}=U_0 \Rightarrow \frac{1}{h}(U_1-U_0)-U_0=0$$ but this is not the desired result.

What have I done wrong? How do we get $\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$ ? (Thinking)
 
Hey evinda! (Smile)

I believe we're including the second order correction:
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0)$$
Thus
$$u(h) =u(0) + hu(0) +\frac 12 h^2\Big(q(0)u(0)-f(0)\Big) \\
\Rightarrow U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(0)U_0-f(0)\Big)
$$
(Thinking)
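A quick way to see what that correction buys (a sketch; the test function $u(x)=e^{2x}$ is an arbitrary smooth choice, not from the thread) is to compare the residual of the plain forward difference with the corrected one:

```python
import math

u = lambda x: math.exp(2 * x)  # smooth test function: u'(0) = 2, u''(0) = 4
du0, ddu0 = 2.0, 4.0

def residuals(h):
    forward = (u(h) - u(0)) / h - du0     # plain forward-difference error
    corrected = forward - 0.5 * h * ddu0  # after including the (1/2) h u''(0) term
    return forward, corrected

f1, c1 = residuals(1e-2)
f2, c2 = residuals(1e-3)
print(f1 / f2)  # ~10: forward-difference error is O(h)
print(c1 / c2)  # ~100: corrected error is O(h^2)
```

Shrinking $h$ by a factor of 10 shrinks the corrected residual by roughly 100, i.e. the second-order correction leaves only an $O(h^2)$ error in the derivative approximation.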
 
I like Serena said:
I believe we're including the second order correction... (Thinking)

I see... So do we suppose that there is no error in the Taylor expansion? (Thinking)

- - - Updated - - -

Also how could we show that the matrix of coefficients

$A=\begin{bmatrix}
-\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & 0 & \cdots& 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & 0 & \cdots & 0 \\
0 & -\frac{1}{h^2}& \frac{2}{h^2}+q(x_2) & -\frac{1}{h^2} & & 0\\
& & & & \ddots & 0 \\
& & & & & -\frac{1}{h^2}\\
& & & & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

is invertible? (Thinking)
 
evinda said:
I see... So do we suppose that there is no error in the Taylor expansion? (Thinking)

There will still be an error, just one order higher in $h$ (the remainder is $O(h^3)$).
Not bad eh? (Mmm)
Also how could we show that the matrix of coefficients $A$ is invertible? (Thinking)

We won't be able to guarantee that it's invertible for any $h$ and any function $q$.
I think that for any $h$ there will be a function $q$ such that the matrix is not invertible.

However, we can write $A$ as:
$$A=\frac 1{h^2}\begin{bmatrix}
-1+h+h^2\frac{q(x_0)}{2} & -1 & 0 & 0 & \cdots& 0\\
-1 & 2+h^2q(x_1) & -1 & 0 & \cdots & 0 \\
0 & -1& 2+h^2q(x_2) & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2+h^2q(x_N)
\end{bmatrix}$$
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
-1 & -1 & 0 & 0 & \cdots& 0\\
-1 & 2 & -1 & 0 & \cdots & 0 \\
0 & -1& 2 & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)
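The doubt about guaranteeing invertibility can in fact be made concrete (a sketch, not from the thread, using the first-row entries exactly as written above): take $N=1$, so $h=1/2$ and the matrix is $2\times 2$, and a constant $q \equiv c$. The determinant is $(\frac c2 - 2)(8 + c) - 16$, which vanishes at $c = -2 + \sqrt{68} > 0$, so even an admissible $q \geq q_0 > 0$ can make this matrix singular for a fixed $h$:

```python
import numpy as np

h = 0.5                  # N = 1, since (N + 1) h = 1
c = -2 + np.sqrt(68)     # constant q chosen to make the determinant vanish
A = np.array([
    [-1/h**2 + 1/h + c/2, -1/h**2],   # row for U_0, entries as posted above
    [-1/h**2,             2/h**2 + c] # row for U_1
])
det = np.linalg.det(A)
print(det)  # ~0: the matrix is singular for this q > 0
```

So for this form of the first row, invertibility indeed depends on the interplay of $q$ and $h$.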
 
I like Serena said:
...Would that be invertible? (Wondering)

From the wiki on tridiagonal matrices, it would appear so, since this particular matrix has all the off-diagonal elements equal (it's also Toeplitz, but that's more general). However, these results were only obtained around 1996 or 1997, which is fairly recent, so I'm not sure they've made their way into many textbooks yet.
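One can also check the limit matrix directly (a sketch, not from the thread, with NumPy assumed): for the $m \times m$ matrix with first row $(-1,-1,0,\dots)$ and the standard $(-1,2,-1)$ rows below, expanding the determinant along the first row gives $\det = -D_{m-1} - D_{m-2} = -(2m-1) \neq 0$, using that the standard tridiagonal matrix of size $k$ has determinant $D_k = k+1$. So the limit matrix (without the $1/h^2$ factor) is invertible for every size:

```python
import numpy as np

def limit_matrix(m):
    """m x m limit matrix from the thread, without the 1/h^2 factor:
    standard tridiagonal (-1, 2, -1) with the (0,0) entry replaced by -1."""
    A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    A[0, 0] = -1  # first row becomes (-1, -1, 0, ..., 0)
    return A

dets = [round(np.linalg.det(limit_matrix(m))) for m in range(1, 9)]
print(dets)  # -(2m - 1): [-1, -3, -5, -7, -9, -11, -13, -15]
```

The determinant is never zero, which agrees with the tridiagonal-matrix results mentioned above.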
 
I like Serena said:
There will still be an error, just one order higher in $h$ (the remainder is $O(h^3)$).
Not bad eh? (Mmm)

We would have an error of order $O(h^3)$. Can we ignore it since it converges to $0$?
If so, how could we justify it formally? (Thinking)
 
