MHB Approximating Equations for Unknowns: How to Justify the Form of $U_0$?

evinda
Hello! (Wave)

Given the problem $$-u''(x)+q(x)u(x)=f(x), \ \ 0 \leq x \leq 1, \\ u'(0)=u(0), \ \ u(1)=0,$$ where $f,q$ are continuous functions on $[0,1]$ with $q(x) \geq q_0>0$ for $x \in [0,1]$, let $U_j$ be the approximations of $u(x_j)$ at the points $x_j=jh, \ j=0, 1, \dots , N+1$, where $(N+1)h=1$, given by the finite difference method $$-\frac{1}{h^2}\left (U_{j-1}-2U_j+U_{j+1}\right )+q(x_j)U_j=f(x_j), \ \ 1 \leq j \leq N, \\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right ),$$ where $U_{N+1}=0$.
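(Not part of the exercise, but for concreteness, here is a minimal sketch of how this linear system could be assembled and solved numerically; the choices of $N$, $q$ and $f$ below are only placeholders.)

Code:
import numpy as np

# Minimal sketch of the scheme above; N, q and f are placeholder choices.
N = 9
h = 1.0 / (N + 1)
x = np.arange(N + 1) * h          # x_0, ..., x_N  (U_{N+1} = 0 is known)
q = lambda t: 1.0 + t             # any continuous q with q(x) >= q_0 > 0
f = lambda t: 1.0

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)

# Row 0:  (1/h)(U_1 - U_0) - U_0 = (h/2)(q(x_0) U_0 - f(x_0))
A[0, 0] = -1.0 / h - 1.0 - 0.5 * h * q(x[0])
A[0, 1] = 1.0 / h
b[0] = -0.5 * h * f(x[0])

# Rows 1..N:  -(U_{j-1} - 2 U_j + U_{j+1}) / h^2 + q(x_j) U_j = f(x_j)
for j in range(1, N + 1):
    A[j, j - 1] = -1.0 / h ** 2
    A[j, j] = 2.0 / h ** 2 + q(x[j])
    if j < N:                      # for j = N the term with U_{N+1} = 0 drops out
        A[j, j + 1] = -1.0 / h ** 2
    b[j] = f(x[j])

U = np.linalg.solve(A, b)          # approximations U_0, ..., U_N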

I have to justify the form of the equation for the unknown $U_0$. Using the forward difference approximation of the first derivative, $$u'(x_j) \approx \frac{u(x_{j+1})-u(x_j)}{h},$$

and the boundary condition $u'(0)=u(0)$, I get $$\frac{U_1-U_0}{h}=U_0 \Rightarrow \frac{1}{h}(U_1-U_0)-U_0=0,$$ which is not the desired result.

What have I done wrong? How do we get $\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right )$ ? (Thinking)
 
Hey evinda! (Smile)

I believe we're including the second order correction:
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0)$$
Thus, using $u'(0)=u(0)$ and, from the differential equation at $x=0$, $u''(0)=q(0)u(0)-f(0)$:
$$u(h) =u(0) + hu(0) +\frac 12 h^2\Big(q(0)u(0)-f(0)\Big) \\
\Rightarrow U_1 = U_0 + hU_0 +\frac 12h^2\Big(q(0)U_0-f(0)\Big)
$$
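Dividing by $h$ and rearranging then gives exactly the stated equation for $U_0$ (recall that $x_0=0$):
$$\frac{U_1-U_0}{h} = U_0 + \frac 12 h\Big(q(0)U_0-f(0)\Big)
\quad\Rightarrow\quad
\frac 1h(U_1-U_0)-U_0=\frac 12 h\Big(q(x_0)U_0-f(x_0)\Big)$$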
(Thinking)
 
I like Serena said:
I believe we're including the second order correction:
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0)$$

I see... So do we assume that the Taylor expansion has no error? (Thinking)


Also, how could we show that the matrix of coefficients

$A=\begin{bmatrix}
\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & \cdots & 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & \cdots & 0 \\
0 & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_2) & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -\frac{1}{h^2}\\
0 & \cdots & 0 & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

(where the first row is the equation for $U_0$ multiplied by $-\frac{1}{h}$)

is invertible? (Thinking)
 
evinda said:
I see... So do we assume that the Taylor expansion has no error? (Thinking)

There will still be an error, just an order of magnitude less.
Not bad eh? (Mmm)
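More precisely, assuming $u$ is three times continuously differentiable, the expansion with its remainder term is
$$u(h) = u(0) + hu'(0) +\frac 12 h^2 u''(0) + \frac 16 h^3 u'''(\xi), \qquad \xi \in (0,h),$$
so the term we drop is $\frac 16 h^3 u'''(\xi) = O(h^3)$. After dividing the $U_0$ equation by $h$, this leaves a local error of order $O(h^2)$, the same order as the interior difference scheme.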
evinda said:
Also, how could we show that the matrix of coefficients

$A=\begin{bmatrix}
\frac{1}{h^2}+\frac{1}{h}+\frac{q(x_0)}{2} & -\frac{1}{h^2} & 0 & \cdots & 0\\
-\frac{1}{h^2} & \frac{2}{h^2}+q(x_1) & -\frac{1}{h^2} & \cdots & 0 \\
0 & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_2) & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -\frac{1}{h^2}\\
0 & \cdots & 0 & -\frac{1}{h^2} & \frac{2}{h^2}+q(x_N)
\end{bmatrix}$

is invertible? (Thinking)

If we don't use the assumption $q(x) \geq q_0 > 0$, we won't be able to guarantee that it's invertible for every $h$ and every function $q$.
I think that for any $h$ there will be some function $q$ such that the matrix is not invertible.

However, we can write $A$ as:
$$A=\frac 1{h^2}\begin{bmatrix}
1+h+h^2\frac{q(x_0)}{2} & -1 & 0 & \cdots & 0\\
-1 & 2+h^2q(x_1) & -1 & \cdots & 0 \\
0 & -1 & 2+h^2q(x_2) & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -1\\
0 & \cdots & 0 & -1 & 2+h^2q(x_N)
\end{bmatrix}$$
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
1 & -1 & 0 & \cdots & 0\\
-1 & 2 & -1 & \cdots & 0 \\
0 & -1 & 2 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -1\\
0 & \cdots & 0 & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)
 
I like Serena said:
And if $h$ is small enough, it approaches:
$$A \approx \frac 1{h^2}\begin{bmatrix}
1 & -1 & 0 & \cdots & 0\\
-1 & 2 & -1 & \cdots & 0 \\
0 & -1 & 2 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -1\\
0 & \cdots & 0 & -1 & 2
\end{bmatrix} $$
Would that be invertible? (Wondering)

From the wiki on tridiagonal matrices, it would appear so, since this particular matrix has all the off-diagonal elements equal (it's also Toeplitz, but that's more general). However, these results have only been obtained around 1996 or 1997 - fairly recently. Not sure they've made their way into a lot of textbooks yet.
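For what it's worth, here is a quick numerical sanity check on that limiting matrix (the size below is an arbitrary choice, purely illustrative):

Code:
import numpy as np

# Limiting matrix from the post above: diagonal (1, 2, ..., 2), off-diagonals -1.
n = 10
T = np.diag([1.0] + [2.0] * (n - 1))
T -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

print("det =", np.linalg.det(T))                        # comes out 1 (up to rounding)
print("min eigenvalue =", np.linalg.eigvalsh(T).min())  # positive, so invertible

In fact, expanding the determinant along the first row gives $\det = n-(n-1)=1$ for every size $n$ (using the well-known determinant $k+1$ of the $k\times k$ tridiagonal matrix with $2$ on the diagonal and $-1$ off it), so the limiting matrix is invertible for every $n$.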
 
I like Serena said:
There will still be an error, just an order of magnitude less.
Not bad eh? (Mmm)

We would have an error of order $O(h^3)$. Can we ignore it since it converges to $0$?
If so, how could we justify it formally? (Thinking)
 