Finite difference method derivation PDE

fahraynk

Homework Statement


Which algebraic expressions must be solved when you use the finite difference approximation to solve the following Poisson equation inside the square:

$$U_{xx} + U_{yy}=F(x,y)$$
$$0<x<1, \qquad 0<y<1$$
Boundary condition: $$U(x,y)=G(x,y)$$

Homework Equations


Central difference approximations
$$U_{xx}\approx\frac{U(x+h)-2U(x)+U(x-h)}{h^2}$$
$$U_x\approx\frac{U(x+h)-U(x-h)}{2h}$$
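For what it's worth, here is a quick numerical sanity check of these two central-difference formulas (a minimal sketch in Python; the test function sin(x) and the function names are my own choices, not part of the problem). The errors should shrink roughly like h^2 as h is halved:

```python
import math

def second_derivative_cd(f, x, h):
    # Central difference for f''(x): (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def first_derivative_cd(f, x, h):
    # Central difference for f'(x): (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Test on f(x) = sin(x) at x = 1, where f''(x) = -sin(x) and f'(x) = cos(x).
x0 = 1.0
for h in (0.1, 0.05, 0.025):
    err2 = abs(second_derivative_cd(math.sin, x0, h) + math.sin(x0))
    err1 = abs(first_derivative_cd(math.sin, x0, h) - math.cos(x0))
    print(f"h={h:6.3f}  U_xx error: {err2:.2e}  U_x error: {err1:.2e}")
```

Each time h is halved the errors drop by about a factor of 4, consistent with the O(h^2) accuracy of central differences.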

The Attempt at a Solution


$$U_{xx}+U_{yy} = \frac{1}{h^2}[U(x+h,y)+U(x-h,y)-4U(x,y)+U(x,y+h)+U(x,y-h)]=F(x,y)$$
$$U(x,y)=\frac{1}{4}[U(x+h,y)+U(x-h,y)+U(x,y-h)+U(x,y+h)-h^2F(x,y)]$$
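To make "which algebraic expressions must be solved" concrete: on a uniform grid the last line gives one equation per interior point, coupled to the boundary values G, and using it directly as an update rule is the Jacobi iteration. A minimal sketch (the grid size N and the sample choices F = 1, G = 0 are mine, purely for illustration):

```python
import numpy as np

N = 20                      # interior points per direction (my choice)
h = 1.0 / (N + 1)
x = np.linspace(0.0, 1.0, N + 2)
X, Y = np.meshgrid(x, x, indexing="ij")

F = np.ones_like(X)         # sample right-hand side F(x,y) = 1
U = np.zeros_like(X)        # boundary rows/columns stay at G(x,y) = 0 here

# Jacobi iteration built directly from
# U(x,y) = 1/4 [U(x+h,y) + U(x-h,y) + U(x,y+h) + U(x,y-h) - h^2 F(x,y)]
for it in range(5000):
    U_new = U.copy()
    U_new[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1]
                                + U[1:-1, 2:] + U[1:-1, :-2]
                                - h**2 * F[1:-1, 1:-1])
    if np.max(np.abs(U_new - U)) < 1e-10:
        break
    U = U_new
```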

The book's solution is
$$\frac{1}{2}[U(i+1,j)+U(i-1,j)+U(i,j+1)+U(i,j-1)]-\frac{h^2}{4}F(x,y)$$

I know why the book changed x,y to i,j... but I don't get why the fraction is 1/2 instead of 1/4 across the entire equation.
 
fahraynk said:
I know why the book changed x,y to i,j... but I don't get why the fraction is 1/2 instead of 1/4 across the entire equation.

I think the book's expression is wrong; your computation is the standard one that appears in numerous textbooks (maybe not yours!) and on many web pages; just Google "Laplacian + finite differences".
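If you want a concrete check (my own test case, not from the book): take U = x^2 + y^2, so F = 4 and the boundary data G come from the same function. The five-point stencil is exact for quadratics, so your 1/4 formula reproduces U at every interior grid point to rounding error, while the book's version with the leading 1/2 does not:

```python
import numpy as np

N = 5
h = 1.0 / (N + 1)
x = np.linspace(0.0, 1.0, N + 2)
X, Y = np.meshgrid(x, x, indexing="ij")

U_exact = X**2 + Y**2            # exact solution for this test case
F = 4.0 * np.ones_like(X)        # U_xx + U_yy = 4 for this choice

# Sum of the four neighbours at every interior point
nbr_sum = (U_exact[2:, 1:-1] + U_exact[:-2, 1:-1]
           + U_exact[1:-1, 2:] + U_exact[1:-1, :-2])

rhs_quarter = 0.25 * (nbr_sum - h**2 * F[1:-1, 1:-1])      # your formula
rhs_half = 0.5 * nbr_sum - 0.25 * h**2 * F[1:-1, 1:-1]     # book's formula

print("max error, 1/4 formula:", np.max(np.abs(rhs_quarter - U_exact[1:-1, 1:-1])))
print("max error, 1/2 formula:", np.max(np.abs(rhs_half - U_exact[1:-1, 1:-1])))
```

The first error is at the level of machine precision; the second is of order one, which already rules out the 1/2 coefficient.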
 