# Numerical Solution for Complex PDE (Ginzburg-Landau Eqn.)

I can't seem to figure out how to edit the post :/. Here is a picture of the error I get:
http://tinypic.com/r/25tjkm1/9
Strum, without knowing the exact 9-point stencil you used, I tried to replicate the plot you produced (in MATLAB, I assume?). Apparently there is no single standard for the 9-point kernel, although there is some consensus on the 5-point kernel: different formulations of the finite difference formulae yield different kernels. So when I use a 9-point kernel, I obtain the following for the same domain and $\Delta x = \Delta y = h = 0.1$:

It seems like I get a very small $|\epsilon|^2$. By the way, the reason I am insisting on using convolution is purely computational: it can be done in hardware (i.e. on a GPU). It seems we both get comparable error rates. The 5-point stencil yields the following result:

Notice that there are some artefacts that appear around one of the "color rings" in the last error plot. Were you using a method other than convolution?

Twigg
Gold Member
First off, I want to say thanks to Johnny_Sparx for taking the time to make this a great thread. I've learned a lot from it and enjoyed reading every new post.

I needed to take some time to think about this whole thing. The convolution method I was working with has a spectral response. By this I mean the finite boundary extents should yield oscillations at the extents, and these are related to the spatial step size. When I initialize my array with uniformly random values (from the complex unit square, with length less than or equal to one), relative to one another these are all 2D step functions. For example, for an arbitrary array location (i,j), assume I have a random value of 1+0j. Adjacent to it, say at (i+1,j-1), there may be a value of 0+0j. This is a sharp discontinuity whose second derivative is quite large.

For a sufficiently large initialization array with values that are locally discontinuous, the Laplacian can yield large values, especially the way I am iterating in the Runge-Kutta algorithm: repeatedly calling the Laplacian operator. I am just throwing ideas around... what do you think?
I may be missing something, but I don't see how the oscillations at the extents are related to the error in the Laplacian due to large differences divided by small step sizes. There is a way you could test whether these differences are what's giving you the overflow. Go into your initialization code where you sample random points from the complex unit square and make the size of the complex square a variable, not necessarily 1. For clarity, let's call the size of the complex square that the initial values are sampled from g, and let's call the step size h. Write a for loop that runs your simulation for several values of g and h; my instinct would be to sweep orders of magnitude. For example, the loop would start with h = 1.0E-03 and g = 1.0E-03 and run your simulation, then do h = 1.0E-03 and g = 1.0E-02, and so on up to h = 1.0E-03 and g = 1.0E+03, then h = 1.0E-02 and g = 1.0E-03, etc. If you are right and the issue is entirely due to large differences in the Laplacian, then the overflow error will only depend on g/h. If you plot the overflow error over the gh-plane, the resulting surface plot should be independent of the radial coordinate and depend only on the angular coordinate (on average, at least, since the initialization is randomized). If that's not the case, I would venture that something else is up. This may take a lot of computing time.
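The sweep above can be sketched in Python/NumPy. This is not the poster's actual solver; `blows_up` is a hypothetical stand-in that runs a few explicit Euler steps of a GL-type equation on a random initial field of size g and reports whether the field overflows:

```python
import numpy as np

def blows_up(g, h, steps=50, dt=1e-4, n=64):
    """Hypothetical stand-in for the real solver: a few explicit Euler
    steps of a GL-type equation, reporting whether the field overflows."""
    rng = np.random.default_rng(0)
    # random initial values from the complex square of size g
    psi = g * (rng.random((n, n)) + 1j * rng.random((n, n)))
    for _ in range(steps):
        # 5-point Laplacian with periodic wrap, scaled by 1/h^2
        lap = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
               + np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4 * psi) / h**2
        psi = psi + dt * (lap + psi - np.abs(psi)**2 * psi)
        if not np.all(np.isfinite(psi)):
            return True
    return False

# sweep g and h over orders of magnitude, as suggested above
results = {(g, h): blows_up(g, h)
           for g in (1e-3, 1e0, 1e3)
           for h in (1e-3, 1e-1)}
```

With a fixed time step, the small-h runs overflow regardless of g (the explicit scheme goes unstable when dt exceeds roughly h²/4), which is one way the surface plot over the gh-plane can fail to depend on g/h alone.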

Maybe I should've asked a little earlier, but can you tell us what you're using as boundary conditions, and how you tell the kernel to behave on those boundaries? For example, on the edges are you using a 4-point stencil, or is there some contribution from outside? I can't tell whether the errors you're seeing are related to the boundaries or not. Hopefully the above test helps narrow it down.

First off, I want to say thanks to Johnny_Sparx for taking the time to make this a great thread. I've learned a lot from it and enjoyed reading every new post.
Truth be known, in my personal circles, I have been talking about the direction both you and Strum have helped me with. I just posted, you both replied. And I too have been learning a lot of things. Lots of deep insight in this thread. Thought provoking stuff!

I may be missing something, but I don't see how the oscillations at the extents are related to the error in the Laplacian due to large differences divided by small step sizes.
My inspiration for saying this extends from the following idea:
Consider a unit step function $f(t)$ (i.e. the Heaviside function, with a one-dimensional discontinuity). The 1D Laplacian of this function is essentially the second derivative $\frac{\partial^2}{\partial t^2}f(t)$. This is the doublet function, a double-sided unit impulse (FYI, the Kronecker delta is a discrete version of the Dirac delta, and the doublet is its derivative). In the frequency domain these impulses have an infinitely wide spectrum (constant in the frequency domain). Discretely sampling an infinitely wideband signal essentially filters it. To me, derivatives at crisp boundaries create time-domain oscillations that are band-limited by the sampling frequency. That's why, when we look at simulations involving Laplacians or gradients over time, amplitude changes are typically initiated at sharp boundaries and have wave-like patterns.
I just wanted to express motivations for my comment, thanks for calling me to task for this. It helped me clarify my thoughts on the matter.
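As a concrete discrete version of that idea, the second difference of a sampled step is a ±1 doublet at the jump. A minimal NumPy sketch:

```python
import numpy as np

# discrete Heaviside step: 0,...,0,1,...,1
step = np.concatenate([np.zeros(5), np.ones(5)])

# second difference via convolution with the 1D Laplacian kernel [1, -2, 1]
d2 = np.convolve(step, [1, -2, 1], mode='same')
# d2 is a +1/-1 doublet at the jump; note the stray -1 at the far end,
# which is an artefact of the implicit zero padding at the array boundary
```

The trailing nonzero entry is worth noticing: it is exactly the kind of spurious boundary response that zero fill values introduce.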

There is a way you could test whether these differences are what's giving you the overflow. Go into your initialization code where you sample random points from the complex unit square and make the size of the complex square a variable, not necessarily 1. For clarity, let's call the size of the complex square that the initial values are sampled from g, and let's call the step size h. Write a for loop that runs your simulation for several values of g and h; my instinct would be to sweep orders of magnitude. For example, the loop would start with h = 1.0E-03 and g = 1.0E-03 and run your simulation, then do h = 1.0E-03 and g = 1.0E-02, and so on up to h = 1.0E-03 and g = 1.0E+03, then h = 1.0E-02 and g = 1.0E-03, etc. If you are right and the issue is entirely due to large differences in the Laplacian, then the overflow error will only depend on g/h. If you plot the overflow error over the gh-plane, the resulting surface plot should be independent of the radial coordinate and depend only on the angular coordinate (on average, at least, since the initialization is randomized). If that's not the case, I would venture that something else is up. This may take a lot of computing time.
This might take more time than I think I have. What I did instead was test the Laplacian I implemented on a function that does not vanish at the borders. In other words, I used the same function, but biased it by 1 and computed the Laplacian with 0 as the fill value at the border. Essentially this forms a crisp edge. The function used was:$$f(x,y) = {{\rm e}^{-{x}^{2}-{y}^{2}}}+1$$ Note the last term. The analytic Laplacian was found to be:$$\nabla^2 f = -4\,{{\rm e}^{-{x}^{2}-{y}^{2}}}+4\,{x}^{2}{{\rm e}^{-{x}^{2}-{y}^{2}}}+4\,{y}^{2}{{\rm e}^{-{x}^{2}-{y}^{2}}}$$
The 9-point stencil was convolved over the discretized function ($-6 \leq x,y \leq 6$, $h=0.1$). The convolution function was called with fill values of 0 at the boundaries. The result:

And look at those errors! Six orders of magnitude greater than in the previous test result I posted. It seems the convolutional method does not respect boundaries as expected. Next experiment: convolution using a "wrap" at the end of both axes, i.e. the domain is a torus. This continuous domain yields (for the same function):

And now look at the boundary. My conclusion seems to be converging to:

In computing a discrete Laplacian, it seems convolution with a 5- or 9-point stencil (or any other) works on continuous (periodic) boundaries, not on discrete ones, regardless of what fill values you insert at the boundaries.
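The two experiments can be reproduced in a few lines. This is a sketch assuming SciPy's `ndimage.convolve` and one common 9-point kernel (as noted earlier in the thread, there is no single standard for it):

```python
import numpy as np
from scipy.ndimage import convolve

h = 0.1
x = np.arange(-6, 6 + h / 2, h)
X, Y = np.meshgrid(x, x)

f = np.exp(-X**2 - Y**2) + 1          # biased so f = 1 (not 0) at the border
lap_exact = (4 * X**2 + 4 * Y**2 - 4) * np.exp(-X**2 - Y**2)

# one common 9-point Laplacian kernel (other formulations exist)
kernel = np.array([[1,   4, 1],
                   [4, -20, 4],
                   [1,   4, 1]]) / (6 * h**2)

# zero fill at the border vs. wrapping the domain into a torus
err_fill = np.abs(convolve(f, kernel, mode='constant', cval=0.0) - lap_exact).max()
err_wrap = np.abs(convolve(f, kernel, mode='wrap') - lap_exact).max()
# err_fill is dominated by the artificial step the zero fill creates at the
# edge (order 1/h^2); err_wrap stays at the stencil's truncation error
```

The zero-fill error is of order $1/h^2$ because the convolution sees a jump of height 1 at the border, exactly the crisp-edge effect described above.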

Maybe I should've asked a little earlier, but can you tell us what you're using as boundary conditions, and how you tell the kernel to behave on those boundaries? For example, on the edges are you using a 4-point stencil, or is there some contribution from outside? I can't tell whether the errors you're seeing are related to the boundaries or not. Hopefully the above test helps narrow it down.
At the boundary, I need the order parameter to be zero, i.e. $\psi=0$ at the boundaries. What is the name of a method I can use that does not employ convolution to enforce these boundaries? I know we mentioned the use of sparse matrices... is there a name for this method so that I can look it up?

Were you using a method other than convolution?
I used a simple 5- or 9-point finite difference stencil.

Your result in #28 is to be expected, since you are differentiating a function given by ## f(x,y) = e^{-(x^2 + y^2)} + 1 ## in the interior and ## f(x,y) = 0 ## on the boundary. That will generate problems. This, however, is not the situation you will be in when doing your simulation, where the G-L field will find a natural solution with ## \psi = 0 ## on the boundary.

In computing a discrete Laplacian, it seems convolution with a 5- or 9-point stencil (or any other) works on continuous (periodic) boundaries, not on discrete ones, regardless of what fill values you insert at the boundaries.
I am not entirely certain I understand what you are saying, but I suspect it does not make sense. There is no problem specifying Dirichlet boundary conditions with all sorts of different numerical methods.

At the boundary, I need the order parameter to be zero, i.e. ## \psi=0 ## at the boundaries. What is the name of a method I can use that does not employ convolution to enforce these boundaries? I know we mentioned the use of sparse matrices... is there a name for this method so that I can look it up?
Perhaps you should try a finite difference method. See, for example, these slides: http://people.sc.fsu.edu/~jpeterson/notes_fd.pdf . To enforce your boundary conditions, you extend the lattice with a boundary layer of a fixed value (in your case ## 0 ##) which you do not update. This is shown on slide 16. On slide 18 the differentiation matrix for a finite difference method is shown; as you can see, it is sparse.
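That boundary-layer idea can be sketched directly in NumPy: evaluate the stencil only on interior points, and leave the outermost layer (the Dirichlet values, here 0) untouched by the update. A minimal sketch, not the poster's actual code:

```python
import numpy as np

def laplacian_interior(psi, h):
    """5-point Laplacian evaluated only on interior points; the outermost
    layer is the Dirichlet boundary layer and is never written to."""
    lap = np.zeros_like(psi)
    lap[1:-1, 1:-1] = (psi[2:, 1:-1] + psi[:-2, 1:-1]
                       + psi[1:-1, 2:] + psi[1:-1, :-2]
                       - 4 * psi[1:-1, 1:-1]) / h**2
    return lap

h, dt = 0.1, 1e-4
rng = np.random.default_rng(1)
psi = np.zeros((64, 64), dtype=complex)
psi[1:-1, 1:-1] = rng.random((62, 62)) + 1j * rng.random((62, 62))

# one explicit Euler step of the linear part; the boundary layer stays 0
psi += dt * laplacian_interior(psi, h)
```

Because the update is zero on the outer layer, ## \psi = 0 ## there holds exactly at every step, with no fill-value ambiguity.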

Twigg
Gold Member
Finite difference is the method I was thinking of. Specifically, a central-difference scheme for the Laplacian. The catch is that your problem is non-linear due to the ##|\psi|^{2} \psi## term. You will need to use Newton's method to get a solution, and that opens a new can of worms with convergence and validation. I've never tried this personally, but I know at least one piece of commercial software (COMSOL) that uses Newton's method to solve the matrix problems of finite element methods.

Is there a way to adapt your existing code to initialize the order parameter on the boundaries to 0 and then tell the convolutional kernel to act on everything but the outermost cells? That might make a lot more sense if you already have a method that works. For example, if your initial data was [0, 0, 0, 0; 0, 0.3+0.5i, 0.5-0.2i,0;0,0,0,0], then you would tell the convolutional kernel only to operate on the inner two entries and not act on the boundary zeros. Is that possible? Just a thought.
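This masking idea can be kept within the convolution framework: convolve as before, then zero the update on the outermost cells so the boundary values never change. A sketch assuming `scipy.signal.convolve2d` (which accepts complex arrays); the array sizes are illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

h, dt = 0.1, 1e-4
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]]) / h**2

rng = np.random.default_rng(2)
psi = np.zeros((64, 64), dtype=complex)
psi[1:-1, 1:-1] = rng.random((62, 62)) + 1j * rng.random((62, 62))

# convolve the whole array, then mask: the update acts on everything but
# the outermost cells, so psi = 0 on the boundary is enforced every step
lap = convolve2d(psi, kernel, mode='same', boundary='fill', fillvalue=0)
lap[0, :] = lap[-1, :] = lap[:, 0] = lap[:, -1] = 0
psi += dt * lap
```

The masking costs almost nothing and keeps the GPU-friendly convolution as the core operation, which was the original motivation for using it.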

Finite difference is the method I was thinking of. Specifically, a central-difference scheme for the Laplacian. The catch is that your problem is non-linear due to the ##|\psi|^{2} \psi## term. You will need to use Newton's method to get a solution, and that opens a new can of worms with convergence and validation. I've never tried this personally, but I know at least one piece of commercial software (COMSOL) that uses Newton's method to solve the matrix problems of finite element methods.
Why would you think there is a problem with non-linear terms? I really cannot see how this should pose a problem as long as he uses some explicit time integration scheme (and even if he used an implicit one, I cannot see why the difficulties would be related to the finite difference method).

Twigg
Gold Member
Why would you think there is a problem with non-linear terms? I really cannot see how this should pose a problem as long as he uses some explicit time integration scheme (and even if he used an implicit one, I cannot see why the difficulties would be related to the finite difference method).
You're absolutely right, my mistake. I was thinking of a steady-state problem, which this is not.