Twigg said:
First off, I want to say thanks to Johnny_Sparx for taking the time to make this a great thread. I've learned a lot from it and enjoyed reading every new post.
Truth be known, in my personal circles I have been talking about the direction that both you and Strum have helped me with. I just posted, and you both replied. I too have been learning a lot. Lots of deep insight in this thread; thought-provoking stuff!
Twigg said:
I may be missing something, but I don't see how the oscillations at the extents are related to the error in the Laplacian due to large differences divided by small step sizes.
My motivation for saying this comes from the following idea:
Consider a unit step function f(t) (i.e. a Heaviside function with a one-dimensional discontinuity). The 1D Laplacian of this function is essentially the second derivative \frac{\partial^2}{\partial t^2}f(t). This is the doublet function, a double-sided unit impulse (FYI, the Kronecker delta is a discrete version of the Dirac delta, and the doublet is its derivative). These impulses have an infinitely wide spectrum (constant in the frequency domain), and discretely sampling an infinitely wideband signal essentially filters it. To me, derivatives at crisp boundaries create time-domain oscillations that are band-limited by the sampling frequency. That's why, when we look at simulations involving Laplacians or gradients over time, amplitude changes typically originate at sharp boundaries and show wave-like patterns.
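To make this concrete, here is a quick sketch (plain NumPy, not taken from my actual simulation) of the centred second difference of a unit step. The result is a doublet whose spikes scale as 1/h^2, so the sharper the grid, the larger the values injected at the discontinuity:

```python
import numpy as np

# Second difference of a unit step: the discrete analogue of the doublet.
h = 0.1
t = np.arange(-1.0, 1.0, h)
f = np.heaviside(t, 1.0)                        # unit step at t = 0

# Central second difference on the interior points only.
d2f = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2
print(d2f[d2f != 0])                            # two spikes of size +/- 1/h^2 (here +/- 100)
```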
I just wanted to explain the motivation behind my comment. Thanks for taking me to task on this; it helped me clarify my thoughts on the matter.
Twigg said:
There is a way you could test to see if these differences are what's giving you the overflow. Go into your initialization code where you sample random points from the complex unit square and make the radius of the complex square a variable, not necessarily 1. For clarity, let's call the size of the complex square that the initial values are sampled from g, and let's call the stepsize h. Write a for loop to run your simulation for several values of g and h. My instinct would be to have it do orders of magnitude. For example, the loop would start with h = 1.0E-03 and g = 1.0E-03 and run your simulation, then do h = 1.0E-03 and g = 1.0E-02, etc., then h = 1.0E-3 and g = 1.0E3, then h = 1.0E-02 and g = 1.0E-03, etc etc. If you are right and the issue is entirely due to large differences in the Laplacian, then the overflow error will only depend on g/h. If you plot the overflow error over the gh-plane, the resulting surface plot should be independent of the radial coordinate and depend only on the angular coordinate (on average, at least, since the initialization is randomized). If that's not the case, I would venture to say that something else is up. This may take a lot of computing time.
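If I read the suggestion correctly, the sweep would be structured roughly like the sketch below, where run_simulation is just a placeholder for my solver and the returned number is whatever scalar measure of the overflow error I decide to record:

```python
import numpy as np

def run_simulation(g, h):
    """Placeholder for the actual solver; should return a scalar overflow measure."""
    return 0.0  # swap in the real time-stepper here

g_values = 10.0 ** np.arange(-3, 4)   # initialization radius: 1e-3 ... 1e+3
h_values = 10.0 ** np.arange(-3, 4)   # step size:             1e-3 ... 1e+3

overflow = np.zeros((len(g_values), len(h_values)))
for i, g in enumerate(g_values):
    for j, h in enumerate(h_values):
        overflow[i, j] = run_simulation(g, h)

# If the blow-up depends only on g/h, the surface overflow(g, h) should be roughly
# constant along rays of fixed g/h, i.e. independent of the radial coordinate.
```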
This might take some time to do, which I don't think I have. What I did do was test the Laplacian I implemented on a function that does not vanish at the borders. In other words, I used the same function, but biased it by 1 and computed the Laplacian with 0 as the fill value at the border. Essentially this forms a crisp edge. The function used was f(x,y) = e^{-x^{2}-y^{2}} + 1 (note the last term). The analytic Laplacian was found to be \nabla^2 f = -4\,e^{-x^{2}-y^{2}} + 4\,x^{2}e^{-x^{2}-y^{2}} + 4\,y^{2}e^{-x^{2}-y^{2}}.
The 9-point stencil was convolved with the discretized function (-6 \leq x, y \leq 6, h = 0.1). The convolution function was called with fill values of 0 outside the boundaries. The result:
And look at those errors! Six orders of magnitude greater than the previous test result I posted. It seems the convolution method does not respect the boundaries the way I expected. Next experiment: convolution using a "wrap" at the ends of both axes, i.e. the domain is a torus (I sketch both tests in code further down). This continuous domain yields, for the same function:
And now look at the boundary. My conclusion seems to be converging to:
In computing a discrete Laplacian, convolution with a 5- or 9-point stencil (or any other) seems to work only when the function is continuous across the boundaries, not when there is a discontinuity there, regardless of what fill values you insert at the boundaries.
What do you folks think about this conclusion? Make sense?
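For reference, the two tests above look roughly like this (a sketch using scipy.ndimage.convolve; the 9-point stencil written out here is the standard one and may not match my exact weights):

```python
import numpy as np
from scipy.ndimage import convolve

x = np.linspace(-6.0, 6.0, 121)        # h = 0.1
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)

f = np.exp(-X**2 - Y**2) + 1.0         # biased Gaussian: f -> 1 (not 0) at the borders
lap_exact = (4.0 * X**2 + 4.0 * Y**2 - 4.0) * np.exp(-X**2 - Y**2)

# Standard 9-point Laplacian stencil.
stencil = np.array([[1.0,   4.0, 1.0],
                    [4.0, -20.0, 4.0],
                    [1.0,   4.0, 1.0]]) / (6.0 * h**2)

# Test 1: fill with 0 outside the grid -> artificial crisp edge at the border.
lap_fill = convolve(f, stencil, mode="constant", cval=0.0)
print("max |error|, zero fill:", np.abs(lap_fill - lap_exact).max())

# Test 2: wrap both axes (the domain is a torus) -> no discontinuity, since f ~ 1 at every edge.
lap_wrap = convolve(f, stencil, mode="wrap")
print("max |error|, wrap:", np.abs(lap_wrap - lap_exact).max())
```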
Twigg said:
I maybe should've asked a little earlier, but can you tell us what you're using as boundary conditions, and how you tell the kernel to behave on those boundaries? For example, on the edges are you using a 4-point stencil or is there some contribution from outside? I can't tell if the errors you're seeing are related to boundaries or not. Hopefully the above test helps narrow it down.
At the boundary, I need the order parameter to be zero, i.e. \psi = 0 on the boundaries. What is the name of a method I can use that enforces these boundaries without employing convolution? I know we mentioned the use of sparse matrices... is there a name for this method so that I can look it up?
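For what it's worth, my (possibly naive) picture of the sparse-matrix route is to assemble the Laplacian explicitly as a matrix acting on the interior grid points only, so that \psi = 0 on the boundary is built into the operator instead of being supplied as a fill value. Something like the sketch below, assuming scipy.sparse (the grid size and spacing are just example values, not from my code):

```python
import numpy as np
import scipy.sparse as sp

n = 50          # interior points per axis (example value)
h = 0.1         # grid spacing (example value)

# 1D second-difference matrix. The missing neighbours at the two ends are simply
# dropped, which is exactly the homogeneous Dirichlet condition psi = 0.
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2

# 2D Laplacian on the n x n interior grid via Kronecker sums.
I = sp.identity(n)
L = sp.kron(I, D2) + sp.kron(D2, I)

psi = (np.random.rand(n, n) + 1j * np.random.rand(n, n)).ravel()  # example order parameter
lap_psi = (L @ psi).reshape(n, n)   # discrete Laplacian with psi = 0 assumed outside the grid
```

If that is roughly the method you had in mind, I would still like to know its proper name so I can read up on it.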