Approximating Equations for Unknowns: How to Justify the Form of $U_0$?

  • Context: MHB 
  • Thread starter: evinda
SUMMARY

The discussion centers on justifying the form of the equation for the unknown $U_0$ in the finite difference method applied to the differential equation $$-u''(x)+q(x)u(x)=f(x)$$ with boundary conditions $u'(0)=u(0)$ and $u(1)=0$. The participants derive the relationship $$\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$$ by incorporating a second-order correction term from the Taylor expansion. They also discuss the invertibility of the coefficient matrix $A$, concluding that while it may not be invertible for all $h$ and functions $q$, it approaches a form that is invertible as $h$ becomes small.

PREREQUISITES
  • Understanding of finite difference methods for differential equations
  • Familiarity with Taylor series expansions and their applications
  • Knowledge of matrix theory, particularly regarding tridiagonal matrices
  • Basic concepts of boundary value problems in differential equations
NEXT STEPS
  • Study the derivation and application of finite difference methods in solving boundary value problems
  • Learn about the properties and applications of tridiagonal matrices in numerical analysis
  • Explore Taylor series and their role in numerical approximations
  • Investigate the conditions for the invertibility of matrices in numerical methods
USEFUL FOR

Mathematicians, numerical analysts, and engineers involved in solving differential equations and optimizing numerical methods for boundary value problems.

evinda
Hello! (Wave)

Given the problem $$-u''(x)+q(x)u(x)=f(x), 0 \leq x \leq 1, \\ u'(0)=u(0), \ \ u(1)=0$$ where $f,q$ are continuous functions on $[0,1]$ with $q(x) \geq q_0>0, x \in [0,1]$. Let $U_j$ be the approximations of $u(x_j)$ at the points $x_j=jh, j=0, 1, \dots , N+1$, where $(N+1)h=1$, given by the finite difference method $$-\frac{1}{h^2}\left (U_{j-1}-2U_j+U_{j+1}\right )+q(x_j)U_j=f(x_j), \ \ 1 \leq j \leq N \\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$$ where $U_{N+1}=0$.

I have to justify the form of the equation for the unknown $U_0$. The first derivative at $x_0=0$ can be approximated by the forward difference $$u'(x_0) \approx \frac{u(x_1)-u(x_0)}{h},$$

so from $u'(0)=u(0)$ we have $$\frac{U_1-U_0}{h}=U_0 \Rightarrow \frac{1}{h}(U_1-U_0)-U_0=0$$ but this is not the desired result.

What have I done wrong? How do we get $\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$ ? (Thinking)
 
Hey evinda! (Smile)

I believe we're including the second order correction:
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0)$$
Thus
$$u(h) =u(0) + hu(0) +\frac 12 h^2\Big(q(0)u(0)-f(0)\Big) \\
\Rightarrow U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(0)U_0-f(0)\Big)
$$
(Thinking)
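Rearranging this last relation recovers exactly the equation in question: subtracting $U_0+hU_0$ and then dividing by $h$ gives
$$U_1-U_0-hU_0=\frac 12 h^2\Big(q(x_0)U_0-f(x_0)\Big) \ \Rightarrow\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right ),$$
which is the stated form of the scheme's equation for $U_0$.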
 
I like Serena said:
Hey evinda! (Smile)

I believe we're including the second order correction:
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0)$$
Thus
$$u(h) =u(0) + hu(0) +\frac 12 h^2\Big(q(0)u(0)-f(0)\Big) \\
\Rightarrow U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(0)U_0-f(0)\Big)
$$
(Thinking)

I see... So do we assume that there is no error in the Taylor expansion? (Thinking)


Also how could we show that the matrix of coefficients

$A=\begin{bmatrix}
-\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & 0 & \cdots& 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & 0 & \cdots & 0 \\
0 & -\frac{1}{h^2}& \frac{2}{h^2}+q(x_2) & -\frac{1}{h^2} & & 0\\
& & & & \ddots & 0 \\
& & & & & -\frac{1}{h^2}\\
& & & & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

is invertible? (Thinking)
 
evinda said:
I see... So do we assume that there is no error in the Taylor expansion? (Thinking)

There will still be an error, just one order (in $h$) smaller.
Not bad eh? (Mmm)
Also how could we show that the matrix of coefficients

$A=\begin{bmatrix}
-\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & 0 & \cdots& 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & 0 & \cdots & 0 \\
0 & -\frac{1}{h^2}& \frac{2}{h^2}+q(x_2) & -\frac{1}{h^2} & & 0\\
& & & & \ddots & 0 \\
& & & & & -\frac{1}{h^2}\\
& & & & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

is invertible? (Thinking)

We won't be able to guarantee that it's invertible for any $h$ and any function $q$.
I think that for any $h$ there will be a function $q$ such that the matrix is not invertible.

However, we can write $A$ as:
$$A=\frac 1{h^2}\begin{bmatrix}
-1+h+h^2\frac{q(x_0)}{2} & -1 & 0 & 0 & \cdots& 0\\
-1 & 2+h^2q(x_1) & -1 & 0 & \cdots & 0 \\
0 & -1& 2+h^2q(x_2) & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2+h^2q(x_N)
\end{bmatrix}$$
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
-1 & -1 & 0 & 0 & \cdots& 0\\
-1 & 2 & -1 & 0 & \cdots & 0 \\
0 & -1& 2 & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)
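As a quick sanity check (a numpy sketch of my own, not part of the thread), we can build that limiting matrix for a few sizes $n$ and look at its determinant; a nonzero determinant means it is invertible.

```python
import numpy as np

def limit_matrix(n):
    """Limit matrix from the post, without the 1/h^2 factor:
    diagonal (-1, 2, 2, ..., 2) and -1 on both off-diagonals."""
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = -1.0
    return A

for n in (2, 3, 5, 10, 50):
    print(n, int(round(np.linalg.det(limit_matrix(n)))))
```

The determinants come out as $-(2n-1)$, i.e. $-3,-5,-9,-19,-99$ for the sizes above (and the three-term recurrence $D_n=2D_{n-1}-D_{n-2}$ shows this pattern holds for every $n$), so the determinant never vanishes and the limiting matrix is invertible at every size.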
 
I like Serena said:
There will still be an error, just one order (in $h$) smaller.
Not bad eh? (Mmm)

We won't be able to guarantee that it's invertible for any $h$ and any function $q$.
I think that for any $h$ there will be a function $q$ such that the matrix is not invertible.

However, we can write $A$ as:
$$A=\frac 1{h^2}\begin{bmatrix}
-1+h+h^2\frac{q(x_0)}{2} & -1 & 0 & 0 & \cdots& 0\\
-1 & 2+h^2q(x_1) & -1 & 0 & \cdots & 0 \\
0 & -1& 2+h^2q(x_2) & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2+h^2q(x_N)
\end{bmatrix}$$
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
-1 & -1 & 0 & 0 & \cdots& 0\\
-1 & 2 & -1 & 0 & \cdots & 0 \\
0 & -1& 2 & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)

From the wiki on tridiagonal matrices, it would appear so, since this particular matrix has all the off-diagonal elements equal (it's also Toeplitz, but that's more general). However, these results have only been obtained around 1996 or 1997, fairly recently. Not sure they've made their way into many textbooks yet.
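To check invertibility of the actual matrix at a finite $h$ (with $q$ included) for a concrete case, explicit inverse formulas are not needed: the classical three-term recurrence for tridiagonal determinants gives the determinant in $O(n)$ operations. A minimal sketch, where the choice $q(x)=1$ and all names are mine, for illustration only:

```python
import numpy as np

def tridiag_det(diag, off):
    """Determinant of a tridiagonal matrix with main diagonal `diag` and a
    constant off-diagonal value `off`: D_k = diag[k] D_{k-1} - off^2 D_{k-2}."""
    d2, d1 = 1.0, float(diag[0])
    for a in diag[1:]:
        d2, d1 = d1, a * d1 - off * off * d2
    return d1

q = lambda x: 1.0 + 0.0 * x                     # illustrative choice, q >= q0 > 0

for N in (7, 15, 31):                           # (N + 1) h = 1
    h = 1.0 / (N + 1)
    x = h * np.arange(N + 1)
    diag = 2.0 + h**2 * q(x)                    # rows 1..N of the scaled matrix h^2 A
    diag[0] = -1.0 + h + 0.5 * h**2 * q(x[0])   # first row, as written above
    print(f"h = 1/{N+1:2d}:  det(h^2 A) = {tridiag_det(diag, -1.0):.4f}")
```

A nonzero determinant here certifies invertibility only for that particular $h$ and $q$; it doesn't contradict the earlier caveat that invertibility can't be guaranteed for every $h$ and every $q$.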
 
I like Serena said:
There will still be an error, just one order (in $h$) smaller.
Not bad eh? (Mmm)

We would have an error of order $h^3$. Can we ignore it since it converges to $0$?
If so, how could we justify it formally? (Thinking)
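One way to get a feel for this, as an empirical check rather than the formal justification asked for, is to solve the full scheme against a manufactured exact solution and watch the error as $h$ shrinks. The test data below are my own: $u(x)=(1-x)e^{2x}$ satisfies $u'(0)=u(0)$ and $u(1)=0$, and with $q(x)=1$ it forces $f(x)=(3x+1)e^{2x}$.

```python
import numpy as np

# Manufactured test problem (my own choice, not from the thread).
u_exact = lambda x: (1.0 - x) * np.exp(2.0 * x)   # satisfies u'(0) = u(0), u(1) = 0
q = lambda x: 1.0                                  # q >= q0 > 0
f = lambda x: (3.0 * x + 1.0) * np.exp(2.0 * x)    # f = -u'' + q u for the u above

def solve_scheme(N):
    """Assemble and solve the scheme from the opening post, with (N + 1) h = 1."""
    h = 1.0 / (N + 1)
    x = h * np.arange(N + 1)                       # x_0, ..., x_N  (U_{N+1} = 0)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)

    # Equation for U_0:  (U_1 - U_0)/h - U_0 = (h/2) (q(x_0) U_0 - f(x_0))
    A[0, 0] = -1.0 / h - 1.0 - 0.5 * h * q(x[0])
    A[0, 1] = 1.0 / h
    b[0] = -0.5 * h * f(x[0])

    # Interior equations, 1 <= j <= N
    for j in range(1, N + 1):
        A[j, j - 1] = -1.0 / h**2
        A[j, j] = 2.0 / h**2 + q(x[j])
        if j < N:
            A[j, j + 1] = -1.0 / h**2              # for j = N the neighbour U_{N+1} is 0
        b[j] = f(x[j])

    U = np.linalg.solve(A, b)
    return np.max(np.abs(U - u_exact(x)))

for N in (15, 31, 63, 127):
    print(f"h = 1/{N+1:3d}   max error = {solve_scheme(N):.3e}")
```

If the $O(h^3)$ remainder at the boundary really only costs one power of $h$ after the division by $h$, the maximum error should drop by roughly a factor of $4$ each time $h$ is halved; turning that observation into a proof is the usual consistency-plus-stability argument for the scheme.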
 
