MHB Approximating Equations for Unknowns: How to Justify the Form of $U_0$?

AI Thread Summary
The discussion centers on justifying the form of the equation for the unknown $U_0$ in a finite difference method for a boundary value problem. The initial approach using the first derivative approximation led to an incorrect equation, prompting a reevaluation that includes a second-order correction from Taylor expansion. The matrix of coefficients $A$ is analyzed for invertibility, with the conclusion that it cannot be guaranteed for all functions $q$ and step sizes $h$. However, under certain conditions, particularly with small $h$, the matrix approaches a form that is likely invertible. The error in the approximation is noted to be of order $O(h^3)$, suggesting that it can be considered negligible in the limit as $h$ approaches zero.
evinda
Hello! (Wave)

Given the problem $$-u''(x)+q(x)u(x)=f(x), 0 \leq x \leq 1, \\ u'(0)=u(0), \ \ u(1)=0$$ where $f,q$ are continuous functions on $[0,1]$ with $q(x) \geq q_0>0, x \in [0,1]$. Let $U_j$ be the approximations of $u(x_j)$ at the points $x_j=jh, j=0, 1, \dots , N+1$, where $(N+1)h=1$, given by the finite difference method $$-\frac{1}{h^2}\left (U_{j-1}-2U_j+U_{j+1}\right )+q(x_j)U_j=f(x_j), \ \ 1 \leq j \leq N \\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$$ where $U_{N+1}=0$.

I have to justify the form of the equation for the unknown $U_0$. I used the forward difference approximation of the first derivative at $x_0=0$, $$u'(x_0) \approx \frac{u(x_1)-u(x_0)}{h},$$

so from $u'(0)=u(0)$ we have $$\frac{U_1-U_0}{h}=U_0 \Rightarrow \frac{1}{h}(U_1-U_0)-U_0=0$$ but this is not the desired result.

What have I done wrong? How do we get $\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$ ? (Thinking)
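(For concreteness, here is a minimal numerical sketch of the scheme above, assuming numpy. The manufactured test case $q \equiv 1$, $f(x)=(1+3x)e^{2x}$, with exact solution $u(x)=(1-x)e^{2x}$ satisfying $u'(0)=u(0)$ and $u(1)=0$, is my own choice for illustration and not part of the exercise.)

```python
import numpy as np

# Manufactured test case (illustration only, not part of the exercise):
# u(x) = (1 - x) e^{2x} satisfies u'(0) = u(0) and u(1) = 0, and with
# q(x) = 1 the right-hand side is f(x) = -u'' + q u = (1 + 3x) e^{2x}.
q = lambda x: 1.0 + 0.0 * x
f = lambda x: (1.0 + 3.0 * x) * np.exp(2.0 * x)
u_exact = lambda x: (1.0 - x) * np.exp(2.0 * x)

def max_error(N):
    h = 1.0 / (N + 1)
    x = h * np.arange(N + 1)              # x_0, ..., x_N  (U_{N+1} = 0 is eliminated)
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)

    # Equation for U_0:  (1/h)(U_1 - U_0) - U_0 = (h/2)(q(x_0) U_0 - f(x_0))
    A[0, 0] = -1.0 / h - 1.0 - 0.5 * h * q(x[0])
    A[0, 1] = 1.0 / h
    b[0] = -0.5 * h * f(x[0])

    # Equations for 1 <= j <= N:  -(U_{j-1} - 2 U_j + U_{j+1}) / h^2 + q(x_j) U_j = f(x_j)
    for j in range(1, N + 1):
        A[j, j - 1] = -1.0 / h ** 2
        A[j, j] = 2.0 / h ** 2 + q(x[j])
        if j < N:
            A[j, j + 1] = -1.0 / h ** 2   # for j = N the U_{N+1} term is zero
        b[j] = f(x[j])

    U = np.linalg.solve(A, b)
    return np.max(np.abs(U - u_exact(x)))

for N in (20, 40, 80):
    print(N, max_error(N))                # errors should shrink roughly like O(h^2)
```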
 
Hey evinda! (Smile)

I believe we're including the second order correction in the Taylor expansion:
$$u(h) \approx u(0) + hu'(0) +\frac 12 h^2 u''(0)$$
With the boundary condition $u'(0)=u(0)$ and, from the differential equation, $u''(0)=q(0)u(0)-f(0)$, this becomes
$$u(h) \approx u(0) + hu(0) +\frac 12 h^2\Big(q(0)u(0)-f(0)\Big) \\
\Rightarrow U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(0)U_0-f(0)\Big)
$$
(Thinking)
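Spelling out the remaining algebra (added for completeness; it is just a rearrangement of the last line): dividing by $h$ gives
$$U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(x_0)U_0-f(x_0)\Big) \;\Rightarrow\; \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\Big(q(x_0)U_0-f(x_0)\Big),$$
which is exactly the stated equation for $U_0$ (here $x_0=0$).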
 
I like Serena said:
I believe we're including the second order correction in the Taylor expansion...

I see... So do we suppose that there is no error in the Taylor expansion? (Thinking)


Also how could we show that the matrix of coefficients

$A=\begin{bmatrix}
-\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & 0 & \cdots& 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & 0 & \cdots & 0 \\
0 & -\frac{1}{h^2}& \frac{2}{h^2}+q(x_2) & -\frac{1}{h^2} & & 0\\
& & & & \ddots & 0 \\
& & & & & -\frac{1}{h^2}\\
& & & & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

is invertible? (Thinking)
 
evinda said:
I see... So do we suppose that there is no error in the Taylor expansion? (Thinking)

There will still be an error, it's just one power of $h$ smaller.
Not bad eh? (Mmm)
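To be precise (a sketch, assuming $u$ is three times continuously differentiable near $0$): the Taylor expansion with Lagrange remainder is
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0) + \frac 16 h^3 u'''(\xi) \quad\text{for some } \xi \in (0,h),$$
so the term we drop is $O(h^3)$, one power of $h$ smaller than the $O(h^2)$ term we dropped before.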
evinda said:
Also how could we show that the matrix of coefficients $A$ is invertible? (Thinking)

We won't be able to guarantee that it's invertible for every $h$ and every function $q$.
I suspect that for any given $h$ there is a function $q$ such that the matrix is not invertible.

However, we can write $A$ as:
$$A=\frac 1{h^2}\begin{bmatrix}
-1+h+h^2\frac{q(x_0)}{2} & -1 & 0 & 0 & \cdots& 0\\
-1 & 2+h^2q(x_1) & -1 & 0 & \cdots & 0 \\
0 & -1& 2+h^2q(x_2) & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2+h^2q(x_N)
\end{bmatrix}$$
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
-1 & -1 & 0 & 0 & \cdots& 0\\
-1 & 2 & -1 & 0 & \cdots & 0 \\
0 & -1& 2 & -1 & & 0\\
& & & & \ddots & 0 \\
& & & & & -1\\
& & & & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)
 
I like Serena said:
Would that be invertible? (Wondering)

From the wiki on tridiagonal matrices, it would appear so, since the small-$h$ limit matrix above has all of its off-diagonal elements equal (it is not quite Toeplitz, since the first diagonal entry differs from the others). However, these results were only obtained around 1996 or 1997, which is fairly recent, so I'm not sure they've made their way into many textbooks yet.
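As a quick concrete check (my own addition, using the standard three-term determinant recurrence for tridiagonal matrices; the $1/h^2$ prefactor doesn't affect invertibility), the determinant of the $n \times n$ limit matrix works out to $-(2n-1)$, which is never zero:

```python
import numpy as np

def det_limit_matrix(n):
    # Determinant of the n x n small-h limit matrix: diagonal (-1, 2, ..., 2),
    # sub- and superdiagonal entries all -1, computed with the standard
    # recurrence f_k = a_k f_{k-1} - b_{k-1} c_{k-1} f_{k-2} for tridiagonal matrices.
    a = [2.0] * n
    a[0] = -1.0
    f_prev, f = 1.0, a[0]                        # f_0 = 1, f_1 = a_1
    for k in range(1, n):
        f_prev, f = f, a[k] * f - 1.0 * f_prev   # here b * c = (-1) * (-1) = 1
    return f

for n in (2, 5, 10, 50):
    # Cross-check against numpy's determinant of the explicitly built matrix.
    M = (np.diag([-1.0] + [2.0] * (n - 1))
         + np.diag([-1.0] * (n - 1), 1)
         + np.diag([-1.0] * (n - 1), -1))
    print(n, det_limit_matrix(n), np.linalg.det(M))   # both equal -(2n - 1)
```

So the small-$h$ limit matrix is invertible; whether $A$ itself is invertible for a given $h$ and $q$ would still need a separate argument.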
 
I like Serena said:
There will still be an error, it's just one power of $h$ smaller.
Not bad eh? (Mmm)

We would have an error of order $O(h^3)$. Can we ignore it since it converges to $0$?
If so, how could we justify it formally? (Thinking)
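One way to make it formal (a sketch, assuming the exact solution $u$ is $C^3$ near $0$): substitute $u$ into the equation for $U_0$ and measure the residual. Using $u'(0)=u(0)$, $u''(0)=q(0)u(0)-f(0)$ and the Taylor expansion with remainder,
$$\frac{u(h)-u(0)}{h}-u(0)-\frac h2\Big(q(0)u(0)-f(0)\Big) = \underbrace{u'(0)-u(0)}_{=0} + \frac{h^2}{6}u'''(\xi) = \frac{h^2}{6}u'''(\xi).$$
So the $O(h^3)$ remainder in $u(h)$ becomes an $O(h^2)$ residual once we divide by $h$, the same order as the interior equations, and it vanishes as $h \to 0$.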
 