Explore the Multiple Grid Method: Get Accurate Solutions Faster

In summary, the conversation is about the multiple grid (multigrid) method and its application to solving the linear systems that arise from discretizing differential equations. The method uses a combination of fine and coarse grids to achieve both accuracy and fast convergence. The conversation also discusses the use of the finite difference method to set up the linear system and the application of Fourier analysis in approximating the solution. The concept of a spectrum is also mentioned, which refers to the decomposition of a function into its frequencies.
  • #1
mathmari
Hey! :eek:

I am trying to understand the multiple grid method.

The idea is to use different grids at each step to solve the linear system arising from a differential equation, so that we get the best possible approximation.
It holds that the finer the discretization, the more accurate the solution is, and the coarser the discretization, the faster the iteration converges.
The multiple grid method combines these two grids to get an accurate solution with fast convergence.
Is the idea correct? (Wondering)

We consider the boundary problem \begin{align*}-&y''(x)=f(x) \ \ \text{ for } \ x\in \Omega:=(0,\pi) \\ &y(0)=y(\pi)=0\end{align*}
The usual discretization with a step size $h = \frac{\pi }{ n}$ leads to a one-dimensional grid $\Omega_h=\{x_j=jh \mid j=1, \ldots , n-1\}\subset \Omega$ and finally to a linear equation system \begin{equation*}A_hu_h=f_h, \ \ \ A_h:=\frac{1}{h^2}\begin{bmatrix}2 & -1 & & 0 \\ -1 & 2 & \ddots& \\ & \ddots& \ddots & -1 \\ 0 & & -1 & 2\end{bmatrix}, \ \ f_h:=\begin{bmatrix}f(x_1) \\ f(x_2) \\ \vdots \\ f(x_{n-1})\end{bmatrix}\end{equation*} for a vector $u_h=[u_{h;1}, \ldots , u_{h;n-1}]^T$ of approximations $u_{h;j}\approx y(x_j)$ for the exact solution $y$ on the grid $\Omega_h$.

In this part, did we use the finite difference method? Or how do we get these relations? (Wondering)

So, at the beginning we apply a one-grid method to get a first approximation of the exact solution, or not? The eigenvalues $\lambda_h^{(k)}$ and the eigenvectors $z_h^{(k)}$ of the matrix $A_h$ are the following:
\begin{align*}&z_h^{(k)}:=[\sin kh, \ \sin 2kh, \ \ldots , \ \sin (n-1)kh]^T \\ &\lambda_h^{(k)}:=\frac{1}{h^2}4\sin^2\frac{kh}{2}=\frac{2}{h^2}(1-\cos kh), \ \ \ k=1, 2, \ldots, n-1\end{align*} What exactly is $k$? (Wondering)

We consider the components $\sin jkh=\sin \frac{jk\pi}{n}$ of the vectors $z_h^{(k)}$ at the grid points $x_j$ of $\Omega_h$ for $j=1, \ldots , n-1$. The vectors $z^{(k)}=z_h^{(k)}$ describe oscillations of increasing "frequency" $k$.
The frequency $k$ gives the number of half waves on $\Omega_h$.

Then we check the error of the above method, or not? And according to this we choose the appropriate grid for the next step? Or what do we do here? Could you explain this part of the multiple grid method to me? (Wondering)
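For concreteness, here is a minimal sketch of this discretization in Python with NumPy (my own illustration, not from the book; the function name is an assumption):

```python
import numpy as np

def discretize(f, n):
    """Finite-difference discretization of -y'' = f on (0, pi)
    with y(0) = y(pi) = 0 and step size h = pi/n."""
    h = np.pi / n
    x = h * np.arange(1, n)                # interior grid points x_j = j*h
    # Tridiagonal matrix: 2 on the diagonal, -1 next to it, scaled by 1/h^2
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    return A, f(x), x

# Example: n = 5 gives a 4x4 system A_h u_h = f_h
A_h, f_h, x = discretize(lambda x: np.zeros_like(x), 5)
```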
 
  • #2
Hey mathmari! (Wave)

It seems to me that the matrix indeed comes from the finite difference method.
It identifies the problem that we want to solve.
And if I understand correctly, we want to:
  • Avoid solving the system directly, since it can be very big.
    Instead we want to iteratively improve the approximation.
  • Switch back and forth between a coarse grid and a fine grid to reduce both low and high frequency errors.
    Apparently the multigrid method offers quick convergence with a fixed number of iterations for that.
  • As the first step I think we would reduce the problem to a coarse grid of, say, $2\times 2$, which can be solved directly.
As yet I do not understand how they got those eigenvalues and eigenvectors.
They seem to be part of Fourier analysis, but it is not yet clear to me how.

EDIT: Do you perchance have a reference to the paper? (Thinking)
 
  • #3
Brainstorming some more, I believe we are approximating $y(x)$ with its Fourier form, which is:
$$y(x) \approx \sum_{j=1}^{n-1} a_j \sin(jx)$$
It allows us to inspect the spectrum, which shows the high and low frequency components.
It follows that:
$$-y''(x) =f(x) \approx \sum_{j=1}^{n-1} a_j j^2\sin(jx) \quad\Rightarrow\quad
\sum_{j=1}^{n-1} a_j j^2\sin(jkh) \approx f(kh) = f_k
$$
It gives a perfect match with the given mesh $x_j$ and corresponding function values $f_j$.
And now we know what $k$ is.
Furthermore $h=\frac\pi n$, which leads to the components that you mentioned. (Thinking)
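To make the notion of a spectrum concrete, here is a small sketch (my own illustration, assuming NumPy) that reads off the coefficients $a_k$ of a grid function using the discrete orthogonality $\sum_{j=1}^{n-1}\sin(jkh)\sin(jlh)=\frac n2\delta_{kl}$:

```python
import numpy as np

n = 8
h = np.pi / n
j = np.arange(1, n)        # interior indices 1, ..., n-1
v = np.ones(n - 1)         # grid function to analyse, e.g. the constant 1

# a_k = (2/n) * sum_j v_j sin(jkh) is the weight of the k-th sine mode
for k in range(1, n):
    a_k = (2.0 / n) * np.sum(v * np.sin(j * k * h))
    print(f"k = {k}: a_k = {a_k:+.4f}")
```

The printed list of $a_k$ is exactly the spectrum that comes up below.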
 
  • #4
I like Serena said:
And if I understand correctly, we want to:
  • Avoid solving the system directly, since it can be very big.
    Instead we want to iteratively improve the approximation.
  • Switch back and forth between a coarse grid and a fine grid to reduce both low and high frequency errors.
    Apparently the multigrid method offers quick convergence with a fixed number of iterations for that.
  • As the first step I think we would reduce the problem to a coarse grid of, say, $2\times 2$, which can be solved directly.

Ah ok!

Is the grid $\Omega_h$ of my initial post a $2\times 2$ one?
I like Serena said:
EDIT: Do you perchance have a reference to the paper? (Thinking)

I am reading the book "Numerische Mathematik 2" by Josef Stoer and Roland Bulirsch.
I like Serena said:
Brainstorming some more, I believe we are approximating $y(x)$ with its Fourier form, which is:
$$y(x) \approx \sum_{j=1}^{n-1} a_j \sin(jx)$$

So, do you mean that we use the finite difference method to write the differential equation as a linear system, and now we go back to the differential equation and consider the Fourier form of $y(x)$, right?

Can we approximate each function with its Fourier form?
I like Serena said:
It allows us to inspect the spectrum, which shows the high and low frequency components.

What exactly is meant by spectrum?
I like Serena said:
It follows that:
$$-y''(x) =f(x) \approx \sum_{j=1}^{n-1} a_j j^2\sin(jx) \quad\Rightarrow\quad
\sum_{j=1}^{n-1} a_j j^2\sin(jkh) \approx f(kh) = f_k
$$
It gives a perfect match with the given mesh $x_j$ and corresponding function values $f_j$.
And now we know what $k$ is.

I haven't really understood that part. Why do we calculate that expression at $x=kh$?

(Wondering)
 
  • #5
mathmari said:
Ah ok!

Is the grid $\Omega_h$ of my initial post a $2\times 2$ one?

Why would it be?
I'm assuming that $\Omega_h$ is a fine grid that we have to make coarse (by averaging) to be able to solve it directly. (Thinking)

mathmari said:
I am reading the book "Numerische Mathematik 2" by Josef Stoer and Roland Bulirsch.

Ok.
I'm afraid that I do not have that book available.

mathmari said:
So, do you mean that we use the finite difference method to write the differential equation as a linear system, and now we go back to the differential equation and consider the Fourier form of $y(x)$, right?

Can we approximate each function with its Fourier form?

Yes.

mathmari said:
What exactly is meant by spectrum?

That's what a Fourier transform does. It takes a function and decomposes it into its frequencies.
The $a_i$ that I mentioned show the contribution of each frequency and together they form the spectrum.

We can compare it with the sun. It emits white light. If it passes through raindrops we get a rainbow. (Sun) (Rain)
That is, the white light is decomposed into light of different frequencies, which is what we call a spectrum.

mathmari said:
I haven't really understood that part. Why do we calculate that expression at $x=kh$?

Don't we have the values of $f(x_k)$? That is, the values of $f$ on the grid $\Omega_h$?
And doesn't it hold that $x_k=kh$? (Wondering)

We take the Fourier version of our approximation of $y(x)$ and substitute it into the $-y''(x)=f(x)$.
The result should match with the $f(x_k)$. (Thinking)
 
  • #6
I like Serena said:
Why would it be?
I'm assuming that $\Omega_h$ is a fine grid that we have to make coarse (by averaging) to be able to solve it directly. (Thinking)

So, we now have a grid with $n$ points, or not? This is a fine grid, isn't it?

What do you mean that we have to make it coarse by averaging?

(Wondering)

I like Serena said:
Ok.
I'm afraid that I do not have that book available.

Here is the part of the book about the multiple grid method:

View attachment 8185
I like Serena said:
That's what a Fourier transform does. It takes a function and decomposes it into its frequencies.
The $a_i$ that I mentioned show the contribution of each frequency and together they form the spectrum.
Don't we have the values of $f(x_k)$? That is, the values of $f$ on the grid $\Omega_h$?
And doesn't it hold that $x_k=kh$? (Wondering)

We take the Fourier version of our approximation of $y(x)$ and substitute it into the $-y''(x)=f(x)$.
The result should match with the $f(x_k)$. (Thinking)

Ah ok! And from that we get the eigenvalues? Or how do we use that? (Wondering)
 

Attachments

  • Numerische Mathematik 2 (1)-365-376.pdf
  • #7
The next step is to apply the Jacobi method.

We write the matrix $A_h$ as the sum of its diagonal, strictly lower and strictly upper triangular parts: $A_h=L_h+D_h+U_h$, where $\displaystyle{D_h=\frac{2}{h^2}I}$.

The iteration step is \begin{align*}v^{(i+1)}=D_h^{-1}\left (f_h-\left (A_h-D_h\right )v^{(i)}\right )&\Rightarrow v^{(i+1)}=D_h^{-1}f_h-D_h^{-1}\left (A_h-D_h\right )v^{(i)}\\ & \Rightarrow v^{(i+1)}=D_h^{-1}f_h+J_hv^{(i)} \\ & \Rightarrow v^{(i+1)}=\frac{h^2}{2}f_h+J_hv^{(i)} \end{align*} where $\displaystyle{J_h:=-D_h^{-1}\left (A_h-D_h\right )=-D_h^{-1}A_h+D_h^{-1}D_h=I-D_h^{-1}A_h=I-\frac{h^2}{2}A_h}$.

So we get a sequence $v^{(i)}$ of approximation vectors for the exact solution $u$.

The error is then equal to $e^{(i)}=v^{(i)}-u$.

Now we construct a new system with $e^{(i)}$ on a coarse grid, or not? (Wondering)

Here we apply the Jacobi method again, don't we? (Wondering)
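A minimal sketch of this iteration step (Python/NumPy; the function name is mine):

```python
import numpy as np

def jacobi_step(A, f, v, h):
    """One Jacobi step v <- D^{-1} f + J v with D = (2/h^2) I,
    i.e. v <- (h^2 / 2) f + (I - (h^2 / 2) A) v."""
    J = np.eye(A.shape[0]) - (h**2 / 2.0) * A
    return (h**2 / 2.0) * f + J @ v
```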
 
  • #8
mathmari said:
So, we now have a grid with $n$ points, or not? This is a fine grid, isn't it?

What do you mean that we have to make it coarse by averaging?

On page 861 we can see that the number of points is reduced by a factor of 2 by calculating a weighted average.
That is, we use all the values of the original grid, and we reduce it to half the number of those values.
The averaging formula is $g_{2h;j} = \frac 14(g_{h;(2j-1)} + 2 g_{h;2j} + g_{h;(2j+1)})$.

On the same page they also explain that an alternative approach is to leave out every other value, but that approach is not discussed any further.
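As a sketch, the averaging formula translates directly into code (my own transcription; the boundary values are taken as $0$, as in this problem, so the formula also applies at the edges):

```python
import numpy as np

def restrict(g_h):
    """Weighted restriction g_{2h;j} = (g_{h;2j-1} + 2*g_{h;2j} + g_{h;2j+1}) / 4.
    g_h holds the interior fine-grid values; boundary values are 0."""
    g = np.concatenate(([0.0], g_h, [0.0]))   # pad with the zero boundary values
    n = len(g) - 1                            # number of fine-grid intervals
    j = np.arange(1, (n + 1) // 2)            # coarse-grid interior indices
    return (g[2*j - 1] + 2.0 * g[2*j] + g[2*j + 1]) / 4.0
```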
mathmari said:
Ah ok! And from that we get the eigenvalues? Or how do we use that?

Apparently Abschnitt 8.4 explains how they got those eigenvectors and eigenvalues, but I don't have that. (Sadface)

However, we can verify it by observing that:
$$h^2A_h\,z_h^{(k)} = \begin{bmatrix} 2\sin kh -\sin 2kh \\ -\sin kh +2\sin 2kh -\sin 3kh\\ \vdots\end{bmatrix}
= \begin{bmatrix} 2\sin kh - (\sin 2kh + \sin 0) \\2\sin 2kh - (\sin 3kh + \sin kh)\\ \vdots\end{bmatrix} \\
= \begin{bmatrix} 2\sin kh - 2\sin\frac 12(2kh+0)\cos\frac 12(2kh-0) \\2\sin 2kh - 2\sin\frac 12(3kh+kh)\cos\frac 12(3kh-kh)\\ \vdots\end{bmatrix}
= \begin{bmatrix} 2(1 - \cos kh)\sin kh \\ 2(1 - \cos kh)\sin 2kh\\ \vdots\end{bmatrix}
= 2(1-\cos kh)\, z_h^{(k)}
$$
So for each $k$ we have that $A_h\,z_h^{(k)} = \frac{2}{h^2}(1-\cos kh)\,z_h^{(k)}=\lambda_h^{(k)}\,z_h^{(k)}$.
(Thinking)
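We can also confirm the eigenpairs numerically with a quick sketch (any small $n$ works):

```python
import numpy as np

n = 8
h = np.pi / n
j = np.arange(1, n)
A = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h**2

for k in range(1, n):
    z = np.sin(j * k * h)                        # claimed eigenvector z_h^(k)
    lam = 4.0 / h**2 * np.sin(k * h / 2.0)**2    # claimed eigenvalue lambda_h^(k)
    assert np.allclose(A @ z, lam * z)           # A z = lambda z holds for every k
```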

mathmari said:
The next step is to apply the Jacobi method.

We write the matrix $A_h$ as the sum of its diagonal, strictly lower and strictly upper triangular parts: $A_h=L_h+D_h+U_h$, where $\displaystyle{D_h=\frac{2}{h^2}I}$.

The iteration step is \begin{align*}v^{(i+1)}=D_h^{-1}\left (f_h-\left (A_h-D_h\right )v^{(i)}\right )&\Rightarrow v^{(i+1)}=D_h^{-1}f_h-D_h^{-1}\left (A_h-D_h\right )v^{(i)}\\ & \Rightarrow v^{(i+1)}=D_h^{-1}f_h+J_hv^{(i)} \\ & \Rightarrow v^{(i+1)}=\frac{h^2}{2}f_h+J_hv^{(i)} \end{align*} where $\displaystyle{J_h:=-D_h^{-1}\left (A_h-D_h\right )=-D_h^{-1}A_h+D_h^{-1}D_h=I-D_h^{-1}A_h=I-\frac{h^2}{2}A_h}$.

So we get a sequence $v^{(i)}$ of approximation vectors for the exact solution $u$.

The error is then equal to $e^{(i)}=v^{(i)}-u$.

Now we construct a new system with $e^{(i)}$ on a coarse grid, or not? (Wondering)

Here we apply the Jacobi method again, don't we? (Wondering)

I believe that is indeed what we do in the 2-grid method (8.9.7).
That is, we go to a coarse grid, apply the Jacobi method, take the residual with us back to the fine grid, and apply the Jacobi method again on the residual. (Thinking)

In a cycle of the multigrid method (8.9.14), we take the approximation $v$ with us to the coarser or finer grid, and apply the Jacobi method again.
Apparently we start with the original fine grid, apply the cycle, make the grid coarser, and repeat $j$ times.
Then we go in reverse, make the grid finer, apply the cycle, and repeat until we are back at the original grid. (Thinking)
 
  • #9
Ah ok! (Thinking) I read the method again. It is as follows, isn't it? (Wondering)

First, a solution on a fine grid is calculated. With this solution, an auxiliary problem is set up on a coarser grid, the so-called defect problem, in order to determine a correction of the solution.


For the corrections, a corresponding transfer of the functions between the different grid levels is necessary.

Decisive for the convergence is the smoother. This is a process that does not necessarily have to be suitable as a solver on its own, but has the property of significantly reducing the error of the current solution.
The error is divided into high-frequency and low-frequency components.

High-frequency components are poorly represented when transferred to a coarser grid, which can lead to problems in the calculation. Conversely, under a restriction, low-frequency error components turn into high-frequency error components relative to the coarser grid. Therefore, the smoother is chosen so that it damps the high-frequency error components well on the current grid. The remaining low-frequency components become high-frequency error components on the coarser grid after the restriction and can be worked on there again by the smoother.

For that we often apply the Jacobi or the Gauss-Seidel algorithm. Or do we not use the normal Jacobi or Gauss-Seidel algorithm but the respective damped version? And what exactly is the difference? (Wondering) Then we consider an example using a 2-grid method.

\begin{align*}-&y''(x)=f(x) \ \text{ for } \ x\in \Omega:=(0, \pi) \\ &y(0)=y(\pi )=0\end{align*}

We get the linear system $A_hu_h=f_h$.

Starting from a suitable starting solution $u_1^{(0)}$, we apply the smoothing method several times to get an improved approximation of the solution $u_1^{(\nu)}=S^{\nu}\left (u_1^{(0)}, f_1\right )$. $S^{\nu}$ corresponds to the $\nu$-fold application of the smoother.

We consider now the residual of the system and restrict the problem to the coarse grid. \begin{align*}&r_1=f_1-A_1u_1^{(\nu)} \\ &r_0=R_1^0r_1\end{align*}
At this level the residual problem $A_0v_0=r_0$ will be solved with sufficient accuracy. The vector $ v_0 $ corresponds to a correction of the current solution approximation.

The correction vector $v_0$, which is now calculated with sufficient accuracy at this grid level, is transported back to the fine grid and used to improve the solution approximation. $\mu$-fold smoothing ensures that the high-frequency error components that may have been generated again by the prolongation are attenuated and adapted to the fine mesh. With the solution approximation $u_1^{(\nu+1)}$ as starting solution, the process is continued until the solution meets the required termination criterion, for example a sufficiently small residual.
Are these the steps that we follow? (Wondering)
 
  • #10
mathmari said:
Ah ok! (Thinking) I read the method again. It is as follows, isn't it? (Wondering)

First, a solution on a fine grid is calculated. With this solution, an auxiliary problem is set up on a coarser grid, the so-called defect problem, in order to determine a correction of the solution.


For the corrections, a corresponding transfer of the functions between the different grid levels is necessary.

Decisive for the convergence is the smoother. This is a process that does not necessarily have to be suitable as a solver on its own, but has the property of significantly reducing the error of the current solution.
The error is divided into high-frequency and low-frequency components.

High-frequency components are poorly represented when transferred to a coarser grid, which can lead to problems in the calculation. Conversely, under a restriction, low-frequency error components turn into high-frequency error components relative to the coarser grid. Therefore, the smoother is chosen so that it damps the high-frequency error components well on the current grid. The remaining low-frequency components become high-frequency error components on the coarser grid after the restriction and can be worked on there again by the smoother.

For that we often apply the Jacobi or the Gauss-Seidel algorithm. Or do we not use the normal Jacobi or Gauss-Seidel algorithm but the respective damped version? And what exactly is the difference?

Yes, that is how I understand it as well.
As I understand it both the Jacobi and the Gauss-Seidel algorithm are iterative algorithms that improve the solution to $A\mathbf x = \mathbf b$.
And both dampen the error in the successive approximations at all frequencies, although they dampen least at the extreme frequencies. (Thinking)

mathmari said:
Then we consider an example using a 2-grid method.

\begin{align*}-&y''(x)=f(x) \ \text{ for } \ x\in \Omega:=(0, \pi) \\ &y(0)=y(\pi )=0\end{align*}

We get the linear system $A_hu_h=f_h$.

Starting from a suitable starting solution $u_1^{(0)}$, we apply the smoothing method several times to get an improved approximation of the solution $u_1^{(\nu)}=S^{\nu}\left (u_1^{(0)}, f_1\right )$. $S^{\nu}$ corresponds to the $\nu$-fold application of the smoother.

We consider now the residual of the system and restrict the problem to the coarse grid. \begin{align*}&r_1=f_1-A_1u_1^{(\nu)} \\ &r_0=R_1^0r_1\end{align*}
At this level the residual problem $A_0v_0=r_0$ will be solved with sufficient accuracy. The vector $ v_0 $ corresponds to a correction of the current solution approximation.

The correction vector $v_0$, which is now calculated with sufficient accuracy at this grid level, is transported back to the fine grid and used to improve the solution approximation. $\mu$-fold smoothing ensures that the high-frequency error components that may have been generated again by the prolongation are attenuated and adapted to the fine mesh. With the solution approximation $u_1^{(\nu+1)}$ as starting solution, the process is continued until the solution meets the required termination criterion, for example a sufficiently small residual.

Are these the steps that we follow? (Wondering)

It looks correct to me. (Nod)
 
  • #11
I like Serena said:
Yes, that is how I understand it as well.
As I understand it both the Jacobi and the Gauss-Seidel algorithm are iterative algorithms that improve the solution to $A\mathbf x = \mathbf b$.
And both dampen the error in the successive approximations at all frequencies, although they dampen least at the extreme frequencies. (Thinking)

What is the difference between the damped Jacobi method and the usual Jacobi method? Does the damped version converge faster?

In the multiple grid method, do we use the damped Jacobi method at each step? Or do we calculate the first approximation of the solution with the usual Jacobi method? (Wondering)
I like Serena said:
It looks correct to me. (Nod)

Great! (Happy)
 
  • #12
mathmari said:
What is the difference between the damped Jacobi method and the usual Jacobi method? Does the damped version converge faster?

Page 357 explains that the Jacobi method is:
$$v^{(i+1)} = Jv^{(i)} + \frac{h^2}2f$$
where $J= J_h = I-\frac{h^2}2 A_h$.

They continue to explain on page 358 that the error is damped at all frequencies, but worst at the low and high frequencies.
To improve that, they bring in the damped Jacobi method, which is:
$$v^{(i+1)} = J(\omega)v^{(i)} + \frac\omega 2{h^2}f$$
where $J(\omega) = (1-\omega)I + \omega J$.

So it looks like it would converge more slowly, but it would better reduce the errors at low and high frequencies, wouldn't it? (Thinking)
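One way to see the trade-off is to look at the damping factor of each frequency. Since $J(\omega)=(1-\omega)I+\omega J=I-\frac{\omega h^2}{2}A_h$ has the same eigenvectors $z_h^{(k)}$ as $A_h$, with eigenvalues $\mu_k=1-2\omega\sin^2\frac{kh}{2}$, a small sketch (my own illustration, assuming NumPy) can compare $\omega=1$ with $\omega=2/3$:

```python
import numpy as np

n = 16
h = np.pi / n
k = np.arange(1, n)
for w in (1.0, 2.0 / 3.0):
    mu = 1.0 - 2.0 * w * np.sin(k * h / 2.0)**2   # damping factor of mode k
    high = np.abs(mu[k >= n // 2])                # upper (high-frequency) half
    print(f"omega = {w:.3f}: worst damping of a high frequency = {high.max():.3f}")
```

With $\omega=1$ the highest frequency is barely damped at all ($\mu_k\approx -1$), while with $\omega=2/3$ every mode in the upper half of the spectrum is reduced per step by at least roughly a factor $1/3$, which fits the choice made in the book.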

mathmari said:
In the multiple grid method, do we use the damped Jacobi method at each step? Or do we calculate the first approximation of the solution with the usual Jacobi method?

In a cycle of the multigrid method (page 366) they execute the damped Jacobi method with $\omega=2/3$, don't they? (Wondering)
 
  • #13
I like Serena said:
Page 357 explains that the Jacobi method is:
$$v^{(i+1)} = Jv^{(i)} + \frac{h^2}2f$$
where $J= J_h = I-\frac{h^2}2 A_h$.

They continue to explain on page 358 that the error is damped at all frequencies, but worst at the low and high frequencies.
To improve that, they bring in the damped Jacobi method, which is:
$$v^{(i+1)} = J(\omega)v^{(i)} + \frac\omega 2{h^2}f$$
where $J(\omega) = (1-\omega)I + \omega J$.

So it looks like it would converge more slowly, but it would better reduce the errors at low and high frequencies, wouldn't it? (Thinking)
In a cycle of the multigrid method (page 366) they execute the damped Jacobi method with $\omega=2/3$, don't they? (Wondering)

Ah ok! Do we take $\omega=2/3$ only in the 2-grid method or in general? (Wondering) Could you give me a specific example where we can apply the 2-grid method so that I can understand this better? (Wondering)
 
  • #14
mathmari said:
Ah ok! Do we take $\omega=2/3$ only in the 2-grid method or in general?

We can freely choose to use $\omega=2/3$ or a different $\omega$, or not use the damped Jacobi method at all.
The chapter explains that it's giving only an example of how the multigrid method (or 2-grid method in this case) can be used, but not how it is generally used, nor what the best way is to use it. (Nerd)

mathmari said:
Could you give me a specific example where we can apply the 2-grid method so that I can understand this better? (Wondering)

We can apply it to the problem you have in post #1, can't we?
Pick a grid size $n$ and apply the method? (Thinking)
 
  • #15
I like Serena said:
We can freely choose to use $\omega=2/3$ or a different $\omega$, or not use the damped Jacobi method at all.
The chapter explains that it's giving only an example of how the multigrid method (or 2-grid method in this case) can be used, but not how it is generally used, nor what the best way is to use it. (Nerd)

Ahh, I thought that at proposition 8.9.13 (pg. 365) they meant that we can get a good approximation of the error, but now I read it again and they just say that using this $\omega$ we can bound the error by $0.782\|e_h\|_2$, or not?

On what does the choice of $\omega$ depend? Is there no specific reason that we took $\omega=\frac{2}{3}$?

(Wondering)
I like Serena said:
We can apply it to the problem you have in post #1, can't we?
Pick a grid size $n$ and apply the method? (Thinking)

Do we also have to take a specific function $f(x)$? If yes, which could we take, for example? (Wondering)
 
  • #16
mathmari said:
Ahh, I thought that at proposition 8.9.13 (pg. 365) they meant that we can get a good approximation of the error, but now I read it again and they just say that using this $\omega$ we can bound the error by $0.782\|e_h\|_2$, or not?

Yes. And it also assumes we take $\nu=2$ steps with the damped Jacobi method. (Nod)

mathmari said:
On what does the choice of $\omega$ depend? Is there no specific reason that we took $\omega=\frac{2}{3}$?

It's not quite clear from the text to me.
With a choice of $\omega=1$ we have the undamped Jacobi method, which I think means quick convergence, but bad low and high frequency errors, which is undesired.
It may also be relevant how quickly the error bound goes down, which will also depend on the number of steps $\nu$.
With $\omega=0$ the method does not do anything.

I suspect it's a matter of trial-and-error to see what the best $\omega$ is, and $\omega=2/3$ may be based on experience. (Thinking)

mathmari said:
Do we also have to take a specific function $f(x)$? If yes, which could we take, for example?

I suggest starting with the simplest $f$ that provides a solution.
If we pick $f(x)=0$, the solution becomes $y(x)=0$. We might start with an initial approximation that deviates from $0$, say $v(x)=1$, and see how quickly we converge on the solution $y(x)=0$.
To make it a bit more interesting we might pick $f(x)=2$. What will be the solution then? (Wondering).
 
  • #17
I like Serena said:
I suggest starting with the simplest $f$ that provides a solution.
If we pick $f(x)=0$, the solution becomes $y(x)=0$. We might start with an initial approximation that deviates from $0$, say $v(x)=1$, and see how quickly we converge on the solution $y(x)=0$.
To make it a bit more interesting we might pick $f(x)=2$. What will be the solution then? (Wondering).

We consider the boundary problem \begin{align*}-&y''(x)=0 \ \ \text{ for } \ x\in \Omega:=(0,\pi) \\ &y(0)=y(\pi)=0\end{align*} Let's pick the grid size $n=5$.

The usual discretization with a step size $h = \frac{\pi }{ 5}$ leads to a one-dimensional grid $\Omega_h=\{x_j=jh \mid j=1, \ldots , 4\}\subset \Omega$ and finally to a linear equation system \begin{equation*}A_hu_h=f_h, \ \ \ A_h:=\frac{1}{h^2}\begin{bmatrix}2 & -1 & 0 & 0 \\ -1 & 2 & -1& 0 \\ 0 & -1& 2 & -1 \\ 0 & 0& -1 & 2\end{bmatrix}=\frac{25}{\pi^2}\begin{bmatrix}2 & -1 & 0 & 0 \\ -1 & 2 & -1& 0 \\ 0 & -1& 2 & -1 \\ 0 & 0& -1 & 2\end{bmatrix}, \ \ f_h:=\begin{bmatrix}0 \\ 0 \\ 0 \\ 0\end{bmatrix}\end{equation*} for a vector $u_h=[u_{h;1}, u_{h;2}, u_{h;3}, u_{h;4}]^T$ of approximations $u_{h;j}\approx y(x_j)$ for the exact solution $y$ on the grid $\Omega_h$.

We have the matrix $J= J_h = I-\frac{h^2}2 A_h=\begin{bmatrix}1 & 0 & 0 &0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{bmatrix}-\begin{bmatrix}1 & -\frac{1}{2} & 0 & 0 \\ -\frac{1}{2} & 1 & -\frac{1}{2}& 0 \\ 0 & -\frac{1}{2}& 1 & -\frac{1}{2} \\ 0 & 0& -\frac{1}{2} & 1\end{bmatrix}=\begin{bmatrix}0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2}& 0 \\ 0 & \frac{1}{2}& 0 & \frac{1}{2} \\ 0 & 0& \frac{1}{2} & 0\end{bmatrix}$.

For $\omega=\frac{2}{3}$ we get the following damped version:
$J(\omega) = (1-\omega)I + \omega J= \frac{1}{3}I + \frac{2}{3}J=\begin{bmatrix}\frac{1}{3} & 0 & 0 &0 \\ 0 & \frac{1}{3} & 0 & 0 \\ 0 & 0 & \frac{1}{3} & 0 \\ 0 & 0 & 0 & \frac{1}{3}\end{bmatrix}+\begin{bmatrix}0 & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & 0 & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& 0 & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & 0\end{bmatrix}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}$

The damped Jacobi method is then:
$$v^{(i+1)} = J(\omega)v^{(i)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(i)}$$

Let's consider the starting vector $v^{(0)}=\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix}$.

Then we get the following:
$$v^{(1)} = J(\omega)v^{(0)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(0)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}1\\ 1\\ 1\\ 1\end{bmatrix}=\begin{bmatrix}\frac{2}{3}\\ 1\\ 1\\ \frac{2}{3}\end{bmatrix}$$
$$v^{(2)} = J(\omega)v^{(1)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(1)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{2}{3}\\ 1\\ 1\\ \frac{2}{3}\end{bmatrix}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}$$ The residual is then $$r_h:=f_h-A_hv^{(2)}=-\frac{25}{\pi^2}\begin{bmatrix}2 & -1 & 0 & 0 \\ -1 & 2 & -1& 0 \\ 0 & -1& 2 & -1 \\ 0 & 0& -1 & 2\end{bmatrix}\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}=-\frac{25}{\pi^2}\begin{bmatrix}\frac{2}{9}\\ -\frac{1}{3}\\ \frac{1}{3}\\ \frac{2}{9}\end{bmatrix}=\begin{bmatrix}-\frac{50}{9\pi^2}\\ \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\\ -\frac{50}{9\pi^2}\end{bmatrix}$$

We go to the coarser grid:
$$r_{2h}=I_h^{2h}r_h=\begin{bmatrix}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}-\frac{50}{9\pi^2}\\ \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\\ -\frac{50}{9\pi^2}\end{bmatrix}=\begin{bmatrix} \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\end{bmatrix}$$

Our new problem on the coarse grid is then the following:
$$A_{2h}e_{2h}=-r_{2h}$$ where $$A_{2h}=I_h^{2h}A_h=\begin{bmatrix}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 & 0 \\ -\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \\ 0 & 0& -\frac{25}{\pi^2} & \frac{50}{\pi^2}\end{bmatrix}=\begin{bmatrix}-\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \end{bmatrix}$$ Now we apply the damped Jacobi method twice again, right? Which starting vector do we use? (Wondering)

Is everything correct so far? (Wondering)

Then we calculate $v^{(2)}-I_{2h}^he_{2h}$ where $e_{2h}$ is the solution of the last Jacobi method, or not? (Wondering)

Is this then the result, I mean the last approximation of the solution of the boundary value problem? (Wondering)
 
  • #18
I had a mistake at $r_{2h}$ and $A_{2h}$; we have the following:

\begin{equation*}r_{2h}=I_h^{2h}r_h=\frac{1}{4}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}-\frac{50}{9\pi^2}\\ \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\\ -\frac{50}{9\pi^2}\end{bmatrix} =\begin{bmatrix} \frac{25}{36\pi^2}\\ -\frac{175}{36\pi^2}\end{bmatrix}\end{equation*}

and

\begin{align*}A_{2h}&=I_h^{2h}A_hI_{2h}^h=\frac{1}{4}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}\frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 & 0 \\ -\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \\ 0 & 0& -\frac{25}{\pi^2} & \frac{50}{\pi^2}\end{bmatrix}\frac{1}{2}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ & =\frac{1}{8}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}\frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 & 0 \\ -\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \\ 0 & 0& -\frac{25}{\pi^2} & \frac{50}{\pi^2}\end{bmatrix}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ & =\frac{1}{8}\begin{bmatrix} 0& \frac{50}{\pi^2}& 0& -\frac{25}{\pi^2}\\ 0& -\frac{25}{\pi^2}& 0& \frac{75}{\pi^2}\end{bmatrix}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ &= \begin{bmatrix} \frac{100}{8\pi^2}& -\frac{50}{8\pi^2}\\ -\frac{50}{8\pi^2}& \frac{150}{8\pi^2}\end{bmatrix}= \begin{bmatrix} \frac{25}{2\pi^2}& -\frac{25}{4\pi^2}\\ -\frac{25}{4\pi^2}& \frac{75}{4\pi^2}\end{bmatrix}\end{align*} So, now we have the system $A_{2h}e_{2h}=-r_{2h}$. Do we now apply the damped Jacobi method twice again? Can we take the zero vector as the initial vector here? (Wondering)
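The same computation in a few lines (a sketch with the explicit matrices for $n=5$; the variable names are mine):

```python
import numpy as np

h = np.pi / 5
A_h = (1.0 / h**2) * np.array([[ 2., -1.,  0.,  0.],
                               [-1.,  2., -1.,  0.],
                               [ 0., -1.,  2., -1.],
                               [ 0.,  0., -1.,  2.]])
R = 0.25 * np.array([[1., 2., 1., 0.],   # weighted restriction I_h^{2h}
                     [0., 0., 1., 2.]])
P = 0.5 * np.array([[1., 0.],            # prolongation I_{2h}^h
                    [2., 0.],
                    [1., 1.],
                    [0., 2.]])
A_2h = R @ A_h @ P                       # Galerkin coarse-grid operator
# A_2h agrees with [[25/(2 pi^2), -25/(4 pi^2)], [-25/(4 pi^2), 75/(4 pi^2)]]
```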
 
  • #19
It looks correct to me so far. (Thinking)

mathmari said:
We go to the coarser grid:
$$r_{2h}=I_h^{2h}r_h=\begin{bmatrix}0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}-\frac{50}{9\pi^2}\\ \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\\ -\frac{50}{9\pi^2}\end{bmatrix}=\begin{bmatrix} \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\end{bmatrix}$$

Did you intentionally pick the $I_h^{2h}$ that does not use a weighted average? (Wondering)

mathmari said:
I had a mistake at $r_{2h}$ and $A_{2h}$; we have the following:

\begin{equation*}r_{2h}=I_h^{2h}r_h=\frac{1}{4}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}-\frac{50}{9\pi^2}\\ \frac{25}{3\pi^2}\\ -\frac{25}{3\pi^2}\\ -\frac{50}{9\pi^2}\end{bmatrix} =\begin{bmatrix} \frac{25}{36\pi^2}\\ -\frac{175}{36\pi^2}\end{bmatrix}\end{equation*}

and

\begin{align*}A_{2h}&=I_h^{2h}A_hI_{2h}^h=\frac{1}{4}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}\frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 & 0 \\ -\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \\ 0 & 0& -\frac{25}{\pi^2} & \frac{50}{\pi^2}\end{bmatrix}\frac{1}{2}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ & =\frac{1}{8}\begin{bmatrix}1 & 2 & 1 & 0 \\ 0 & 0 & 1 & 2\end{bmatrix}\begin{bmatrix}\frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 & 0 \\ -\frac{25}{\pi^2} & \frac{50}{\pi^2} & -\frac{25}{\pi^2} & 0 \\ 0 & -\frac{25}{\pi^2}& \frac{50}{\pi^2} & -\frac{25}{\pi^2} \\ 0 & 0& -\frac{25}{\pi^2} & \frac{50}{\pi^2}\end{bmatrix}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ & =\frac{1}{8}\begin{bmatrix} 0& \frac{50}{\pi^2}& 0& -\frac{25}{\pi^2}\\ 0& -\frac{25}{\pi^2}& 0& \frac{75}{\pi^2}\end{bmatrix}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\\ &= \begin{bmatrix} \frac{100}{8\pi^2}& -\frac{50}{8\pi^2}\\ -\frac{50}{8\pi^2}& \frac{150}{8\pi^2}\end{bmatrix}= \begin{bmatrix} \frac{25}{2\pi^2}& -\frac{25}{4\pi^2}\\ -\frac{25}{4\pi^2}& \frac{75}{4\pi^2}\end{bmatrix}\end{align*} So, now we have the system $A_{2h}e_{2h}=-r_{2h}$. Do we now apply the damped Jacobi method twice again? Can we take the zero vector as the initial vector here? (Wondering)

That looks better. (Happy)

Yes, we would again apply the damped Jacobi method twice.
And since we're talking about a residual, I think it is okay if we pick the zero-vector as the initial approximation. (Thinking)
 
  • #20
I like Serena said:
It looks correct to me so far. (Thinking)

(Happy)
I like Serena said:
Did you intentionally pick the $I_h^{2h}$ that does not use a weighted average? (Wondering)

I did it accidentally. (Tmi)
I like Serena said:
That looks better. (Happy)

Yes, we would again apply the damped Jacobi method twice.
And since we're talking about a residual, I think it is okay if we pick the zero-vector as the initial approximation. (Thinking)

Then we get the following, or not? (Wondering) We have the matrix $J= J_{2h} = I-\frac{(2h)^2}2 A_{2h}=I-2h^2 A_{2h}=\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}-\begin{bmatrix}1& -\frac{1}{2}\\ -\frac{1}{2}& \frac{3}{2}\end{bmatrix}=\begin{bmatrix}0& \frac{1}{2}\\ \frac{1}{2}& -\frac{1}{2}\end{bmatrix}$.

For $\omega=\frac{2}{3}$ we get the following damped version:
$J(\omega) = (1-\omega)I + \omega J= \frac{1}{3}I + \frac{2}{3}J=\begin{bmatrix}\frac{1}{3} & 0 \\ 0 & \frac{1}{3} \end{bmatrix}+\begin{bmatrix}0& \frac{1}{3}\\ \frac{1}{3}& -\frac{1}{3}\end{bmatrix}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 \end{bmatrix}$

The damped Jacobi method is then:
$$e^{(i+1)} = J(\omega)e^{(i)} + \frac\omega 2{(2h)^2}(-r_{2h})=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 \end{bmatrix}e^{(i)}-\frac{4\pi^2}{75}\begin{bmatrix} \frac{25}{36\pi^2}\\ -\frac{175}{36\pi^2}\end{bmatrix}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 \end{bmatrix}e^{(i)}-\begin{bmatrix} \frac{1}{27}\\ -\frac{7}{27}\end{bmatrix}$$

Let's consider the starting vector $e^{(0)}=\begin{bmatrix}0\\ 0\end{bmatrix}$.

Then we get the following:
$$e^{(1)} = J(\omega)e^{(0)} + \frac\omega 2{(2h)^2}(-r_{2h})=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 \end{bmatrix}\begin{bmatrix}0\\ 0\end{bmatrix}-\begin{bmatrix} \frac{1}{27}\\ -\frac{7}{27}\end{bmatrix}=\begin{bmatrix} -\frac{1}{27}\\ \frac{7}{27}\end{bmatrix}$$
$$e^{(2)} = J(\omega)e^{(1)} + \frac\omega 2{(2h)^2}(-r_{2h})=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 \end{bmatrix}\begin{bmatrix} -\frac{1}{27}\\ \frac{7}{27}\end{bmatrix}-\begin{bmatrix} \frac{1}{27}\\ -\frac{7}{27}\end{bmatrix}=\begin{bmatrix} -\frac{2}{27}\\ -\frac{1}{81}\end{bmatrix}-\begin{bmatrix} \frac{1}{27}\\ -\frac{7}{27}\end{bmatrix}=\begin{bmatrix} -\frac{3}{27}\\ \frac{20}{81}\end{bmatrix}$$ Then we get $$v^{(2)}+I_{2h}^he_{2h}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}+\frac{1}{2}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\begin{bmatrix} -\frac{3}{27}\\ \frac{20}{81}\end{bmatrix}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}+\frac{1}{2}\begin{bmatrix}-\frac{3}{27} \\ -\frac{6}{27} \\ \frac{11}{81} \\ \frac{20}{81} \end{bmatrix}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}+\begin{bmatrix}-\frac{3}{54} \\ -\frac{3}{27} \\ \frac{11}{162} \\ \frac{10}{81} \end{bmatrix}=\begin{bmatrix}\frac{1}{2}\\ \frac{7}{9}\\ \frac{155}{162}\\ \frac{55}{81}\end{bmatrix}$$ Is this the last approximation that we get? (Wondering)
 
  • #21
Or do we have to apply the damped Jacobi method on the fine grid again? (Wondering)
 
  • #22
Applying the damped Jacobi method for $Au=f$ twice again, we have the following:

We have the matrix $J= J_h = I-\frac{h^2}2 A_h=\begin{bmatrix}0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2}& 0 \\ 0 & \frac{1}{2}& 0 & \frac{1}{2} \\ 0 & 0& \frac{1}{2} & 0\end{bmatrix}$.

For $\omega=\frac{2}{3}$ we get the following damped version:
$J(\omega) = (1-\omega)I + \omega J= \frac{1}{3}I + \frac{2}{3}J=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}$

The damped Jacobi method is then:
$$v^{(i+1)} = J(\omega)v^{(i)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(i)}$$

We consider the starting vector $v^{(0)}=\begin{bmatrix}\frac{1}{2}\\ \frac{7}{9}\\ \frac{155}{162}\\ \frac{55}{81}\end{bmatrix}$, the approximation that we have so far.

Then we get the following:
$$v^{(1)} = J(\omega)v^{(0)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(0)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{1}{2}\\ \frac{7}{9}\\ \frac{155}{162}\\ \frac{55}{81}\end{bmatrix}=\begin{bmatrix}\frac{23}{54}\\ \frac{181}{243}\\ \frac{391}{486}\\ \frac{265}{486}\end{bmatrix}$$
$$v^{(2)} = J(\omega)v^{(1)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(1)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{23}{54}\\ \frac{181}{243}\\ \frac{391}{486}\\ \frac{265}{486}\end{bmatrix}=\begin{bmatrix}\frac{569}{1458}\\ \frac{160}{243}\\ \frac{509}{729}\\ \frac{328}{729}\end{bmatrix}$$ This is now the last approximation, isn't it? (Wondering)
 
  • #23
mathmari said:
Applying the damped Jacobi method for $Au=f$ twice again, we have the following:

We have the matrix $J= J_h = I-\frac{h^2}2 A_h=\begin{bmatrix}0 & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2}& 0 \\ 0 & \frac{1}{2}& 0 & \frac{1}{2} \\ 0 & 0& \frac{1}{2} & 0\end{bmatrix}$.

For $\omega=\frac{2}{3}$ we get the following damped version:
$J(\omega) = (1-\omega)I + \omega J= \frac{1}{3}I + \frac{2}{3}J=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}$

The damped Jacobi method is then:
$$v^{(i+1)} = J(\omega)v^{(i)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(i)}$$

We consider the starting vector $v^{(0)}=\begin{bmatrix}\frac{1}{2}\\ \frac{7}{9}\\ \frac{155}{162}\\ \frac{55}{81}\end{bmatrix}$, the approximation that we have so far.

Then we get the following:
$$v^{(1)} = J(\omega)v^{(0)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(0)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{1}{2}\\ \frac{7}{9}\\ \frac{155}{162}\\ \frac{55}{81}\end{bmatrix}=\begin{bmatrix}\frac{23}{54}\\ \frac{181}{243}\\ \frac{391}{486}\\ \frac{265}{486}\end{bmatrix}$$
$$v^{(2)} = J(\omega)v^{(1)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}v^{(1)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{23}{54}\\ \frac{181}{243}\\ \frac{391}{486}\\ \frac{265}{486}\end{bmatrix}=\begin{bmatrix}\frac{569}{1458}\\ \frac{160}{243}\\ \frac{509}{729}\\ \frac{328}{729}\end{bmatrix}$$ This is now the last approximation, isn't it? (Wondering)

As I understand it, we should:
1. Apply the damped Jacobi method twice on the fine grid.
2. Switch to the residual on the coarse grid.
3. Solve for that residual on the coarse grid. It doesn't say how, but for the proposition that comes after, this may be assumed to be a perfect solution.
4. Update the solution on the fine grid with the result from step 3.

After this procedure we should have reduced the upper bound of the error by a factor $0.782$, shouldn't we? (Wondering)
Is that the case?
We can repeat the procedure. After executing the procedure $j$ times, we should have reduced the error by a factor $0.782^{j}$. (Thinking)
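Putting the four steps together, here is a sketch of one application of the 2-grid method as described above (Python/NumPy; the function name, the helper arguments and the exact coarse solve are my own choices, with R and P as in the earlier sketches):

```python
import numpy as np

def two_grid_step(A_h, f_h, v, h, R, P, omega=2/3, nu=2):
    """One 2-grid (ZG) step: nu damped-Jacobi sweeps, an exact coarse
    residual solve, and the coarse-grid correction."""
    Jw = np.eye(len(v)) - (omega * h**2 / 2.0) * A_h   # damped Jacobi matrix J(omega)
    for _ in range(nu):                                # step 1: smooth nu times
        v = Jw @ v + (omega * h**2 / 2.0) * f_h
    r = f_h - A_h @ v                                  # step 2: residual on the fine grid
    e = np.linalg.solve(R @ A_h @ P, -(R @ r))         # step 3: solve A_2h e = -r_2h
    return v - P @ e                                   # step 4: correct on the fine grid
```

Starting from $v=(1,1,1,1)^T$ with $f=0$, repeated calls of this function should shrink the error per step at least by the factor $0.782$ that the proposition promises.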
 
  • #24
I like Serena said:
As I understand it, we should:
1. Apply the damped Jacobi method twice on the fine grid.
2. Switch to the residual on the coarse grid.
3. Solve for that residual on the coarse grid. It doesn't say how, but for the proposition that comes after, this may be assumed to be a perfect solution.

In this step we can calculate the solution either approximately (applying again a multigrid scheme) or exactly; it depends on how big the coarse grid is, or not?

In this case we have a $2\times 2$ system, which can be solved exactly, or not? (Wondering)

To do that we have to calculate the inverse of $A_{2h}$, right? (Wondering)
I like Serena said:
4. Update the solution on the fine grid with the result from step 3.

We update the solution by adding to it the error after we have transferred it back to the fine grid, or how do we do that? (Wondering)
I like Serena said:
After this procedure we should have reduced the upper bound of the error by a factor $0.782$, shouldn't we? (Wondering)
Is that the case?
We can repeat the procedure. After executing the procedure $j$ times, we should have reduced the error by a factor $0.782^{j}$. (Thinking)

For that we have to calculate the norm of the solution of the residual problem, or not? (Wondering)
 
  • #25
mathmari said:
In this step we can calculate the solution either approximately (applying again a multigrid scheme) or exactly; it depends on how big the coarse grid is, or not?

Correct.

mathmari said:
In this case we have a $2\times 2$ system, which can be solved exactly, or not?

To do that we have to calculate the inverse of $A_{2h}$, right?

For a $2\times 2$ system it's indeed easiest to calculate and apply $A_{2h}^{-1}$. (Nod)

mathmari said:
We update the solution by adding to it the error after we have transferred it back to the fine grid, or how do we do that?

Yes. The result of applying the 2-grid method (ZG) once is:
$$w_h - I_{2h}^h e_{2h}$$
where $w_h$ is the result of 2 iterations of the damped Jacobi method on the fine grid, and $e_{2h} = -A_{2h}^{-1} r_{2h}$.

mathmari said:
For that we have to calculate the norm of the solution of the residual problem, or not? (Wondering)

I think we should see that the error at each point in the mesh has been reduced by at least a factor $0.782$.
We started with $1$ at each point, while we know that the solution is $0$ at each point.
So at every point we started with an error of $1$.
After applying the 2-grid method (ZG) once, we should see that at every point the absolute value is less than $0.782$. (Thinking)
 
  • #26
I like Serena said:
For a $2\times 2$ system it's indeed easiest to calculate and apply $A_{2h}^{-1}$. (Nod)
Yes. The result of applying the 2-grid method (ZG) once is:
$$w_h - I_{2h}^h e_{2h}$$
where $w_h$ is the result of 2 iterations of the damped Jacobi method on the fine grid, and $e_{2h} = -A_{2h}^{-1} r_{2h}$.

So, we have the following:
\begin{equation*}A_{2h}^{-1}=\frac{1}{\det (A_{2h})}\begin{bmatrix} \frac{75}{4\pi^2}& \frac{25}{4\pi^2}\\ \frac{25}{4\pi^2}& \frac{25}{2\pi^2} \end{bmatrix}=\begin{bmatrix} \frac{12\pi^2}{125}& \frac{4\pi^2}{125}\\ \frac{4\pi^2}{125}& \frac{8\pi^2}{125} \end{bmatrix}\end{equation*}

Therefore, we get:
\begin{align*}& e_{2h}=-A_{2h}^{-1}r_{2h}\\ & \Rightarrow e_{2h}=-\begin{bmatrix} \frac{12\pi^2}{125}& \frac{4\pi^2}{125}\\ \frac{4\pi^2}{125}& \frac{8\pi^2}{125} \end{bmatrix}\begin{bmatrix} \frac{25}{36\pi^2}\\ -\frac{175}{36\pi^2}\end{bmatrix}=-\begin{bmatrix} -\frac{4}{45}\\ -\frac{13}{45}\end{bmatrix}=\begin{bmatrix} \frac{4}{45}\\ \frac{13}{45}\end{bmatrix}\end{align*}

Then we update the solution:
\begin{equation*}v^{(2)}-I_{2h}^he_{2h}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}-\frac{1}{2}\begin{bmatrix}1 & 0 \\ 2 & 0 \\ 1 & 1 \\ 0 & 2\end{bmatrix}\begin{bmatrix} \frac{4}{45}\\ \frac{13}{45}\end{bmatrix}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}-\frac{1}{2}\begin{bmatrix}\frac{4}{45} \\ \frac{8}{45} \\ \frac{17}{45} \\ \frac{26}{45} \end{bmatrix}=\begin{bmatrix}\frac{5}{9}\\ \frac{8}{9}\\ \frac{8}{9}\\ \frac{5}{9}\end{bmatrix}-\begin{bmatrix}\frac{2}{45} \\ \frac{4}{45} \\ \frac{17}{90} \\ \frac{13}{45} \end{bmatrix}=\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}\end{equation*}

Do we now apply the damped Jacobi method twice again? Or only if the error is not reduced by at least a factor $0.782$? (Wondering)

We have that $$\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}-\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix}=\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}=\begin{bmatrix}0.511\\ 0.8\\ 0.7\\0.267\end{bmatrix}$$

The error at the second component is not less than $0.782$. Does this mean that we have to apply the damped Jacobi method twice again? (Wondering)

Is the bound of the error the criterion for when we have to stop? (Wondering) We apply the damped Jacobi method with the initial vector $\tilde{v}^{(0)}=\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}$, or not? (Wondering)

Then we get the following:
$$\tilde{v}^{(1)} = J(\omega)\tilde{v}^{(0)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\tilde{v}^{(0)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}=\begin{bmatrix}\frac{59}{135}\\ \frac{181}{270}\\ \frac{53}{90}\\ \frac{29}{90}\end{bmatrix}$$
$$\tilde{v}^{(2)} = J(\omega)\tilde{v}^{(1)} + \frac\omega 2{h^2}f=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\tilde{v}^{(1)}=\begin{bmatrix}\frac{1}{3} & \frac{1}{3} & 0 & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3}& 0 \\ 0 & \frac{1}{3}& \frac{1}{3} & \frac{1}{3} \\ 0 & 0& \frac{1}{3} & \frac{1}{3}\end{bmatrix}\begin{bmatrix}\frac{59}{135}\\ \frac{181}{270}\\ \frac{53}{90}\\ \frac{29}{90}\end{bmatrix}=\begin{bmatrix}\frac{299}{810}\\ \frac{229}{405}\\ \frac{427}{810}\\ \frac{41}{135}\end{bmatrix}$$

We check again the error:
$$\begin{bmatrix}\frac{299}{810}\\ \frac{229}{405}\\ \frac{427}{810}\\ \frac{41}{135}\end{bmatrix}-\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix}=\begin{bmatrix}\frac{299}{810}\\ \frac{229}{405}\\ \frac{427}{810}\\ \frac{41}{135}\end{bmatrix}=\begin{bmatrix}0.369\\ 0.565\\ 0.527\\ 0.304\end{bmatrix}$$

Now every component of the error satisfies the desired upper bound. That means that this is the last approximation, or not? (Wondering)
 
  • #27
Or do we have the following:

$$ZG(v^{(0)})=\begin{bmatrix}\frac{23}{45}\\ \frac{4}{5}\\ \frac{7}{10}\\ \frac{4}{15}\end{bmatrix}$$

$$\|e\|_2=\sqrt{1^2+1^2+1^2+1^2}=\sqrt{4}=2$$

$$\|\tilde{e}\|_2=\sqrt{\left (\frac{23}{45}\right )^2+\left (\frac{4}{5}\right )^2+\left (\frac{7}{10}\right )^2+\left (\frac{4}{15}\right )^2}=\sqrt{\frac{2369}{1620}}\approx 1.2093$$

We have then $$\frac{\|\tilde{e}\|_2}{\|e\|_2}=\frac{1.2093}{2}=0.60465\leq 0.782$$

And so we don't have to apply the damped Jacobi method again, do we? (Wondering)
 
  • #28
So, after applying the 2-grid method with the V-cycle once, we get a better approximation than the initial one. Do we stop here? Or do we have to apply it again? Or do we apply it just once in the V-cycle?

(Wondering)
 
  • #29
mathmari said:
So, after applying the 2-grid method with the V-cycle once, we get a better approximation than the initial one. Do we stop here? Or do we have to apply it again? Or do we apply it just once in the V-cycle?

That looks about right.
For the record, there is currently no defined criterion for when to stop.
We should see after a single application of the 2-grid method (ZG) that the upper bound of the error is reduced by at least a factor of $0.782$.
If it's not, we must be doing something wrong and we would have to figure out what that is.
Other than that, we can repeat the method until the upper bound of the error is lower than any bound that we choose to set on it. For instance, if we want to have a solution with an error less than, say, $0.01$, we would need to repeat the 2-grid method $j$ times with $j$ such that $0.782^j<0.01$, i.e. $j\ge 19$. (Thinking)

Btw, we have not been applying the multigrid method with its V-cycle.
That is a different algorithm. (Nerd)
 
  • #30
I like Serena said:
That looks about right.
For the record, there is currently no defined criterion for when to stop.
We should see after a single application of the 2-grid method (ZG) that the upper bound of the error is reduced by at least a factor of $0.782$.
If it's not, we must be doing something wrong and we would have to figure out what that is.
Other than that, we can repeat the method until the upper bound of the error is lower than any bound that we choose to set on it. For instance, if we want to have a solution with an error less than, say, $0.01$, we would need to repeat the 2-grid method $j$ times with $j$ such that $0.782^j<0.01$, i.e. $j\ge 19$. (Thinking)

Ah ok! I understand! (Mmm)
I like Serena said:
Btw, we have not been applying the multigrid method with its V-cycle.
That is a different algorithm. (Nerd)

Ahh. What have I done differently from the V-cycle scheme?

So, what I did follows no specific scheme?

(Wondering)
 
  • #31
mathmari said:
Ahh. What have I done differently from the V-cycle scheme?

So, what I did follows no specific scheme?

We applied the 2-grid method (ZG or Zweigitterverfahren) that is specified on page 363.
The multigrid method (MV or Mehrgitter-V-cyclus) is specified on page 366. (Thinking)
 
  • #32
I like Serena said:
We applied the 2-grid method (ZG or Zweigitterverfahren) that is specified on page 363.
The multigrid method (MV or Mehrgitter-V-cyclus) is specified on page 366. (Thinking)

Yes, but what is the difference from what I did? (Wondering)
 
  • #33
mathmari said:
Yes, but what is the difference from what I did? (Wondering)

The Mehrgitter-V-cyclus doesn't have a residual $r$ or an error $e$, does it?
Instead it just has an approximation $v_H$ that is refined through coarser and finer grids. (Thinking)
 
  • #34
I like Serena said:
The Mehrgitter-V-cyclus doesn't have a residual $r$ or an error $e$, does it?
Instead it just has an approximation $v_H$ that is refined through coarser and finer grids. (Thinking)

Do we not have the residual problem at step 2 if we are not on the coarsest grid, i.e. if we don't have $H=2^jh$, and so we calculate $f_{2h}$ and $v_{2h}$? (Wondering)
 
  • #35
Could you explain to me the matrices $I_h^{2h}$ and $I_{2h}^h$, and especially the graphs Fig. 25 and Fig. 26? (Wondering)

Consider Fig. 25: Do we take three points on the fine grid, form their weighted average, and the result is then one point on the coarse grid, i.e. the first box? (Wondering)
 
