
A Numerical Solution for Complex PDE (Ginzburg-Landau Eqn.)

  1. Jun 25, 2016 #1
    I am looking to numerically solve the complex time-dependent Ginzburg-Landau (TDGL) equation. I wish to write a Python simulator to observe the nucleation of fluxons over a square 2D superconductor domain (eventually a cubic 3D domain).

    I am using a fourth-order Runge-Kutta (RK4) solver for this, which I wrote in Python (SciPy's odeint solver does not handle complex variables). How do I implement the RK4 algorithm for complex variables?

    The equation I am looking to solve is....

    ∂ψ/∂t = -(p/i)∇²ψ - (q/i)|ψ|²ψ + γψ

    where p and q are chosen within the complex unit square (unit magnitude), and γ is real on [0,1]. Any help implementing this would be appreciated. I'd also appreciate it if you could point me to software or libraries that could advance my efforts.
  3. Jun 26, 2016 #2



    First, to clarify, is this what you're trying to solve?

    ## \frac{\partial \psi}{\partial t} = [-\frac{p}{i} \nabla^{2} - \frac{q}{i} |\psi|^{2}] \psi + \gamma \psi##

    Supposing you had a function worked out that could return the right-hand side as a function of t, the Runge-Kutta integration method is the same for a single complex dependent variable as it is for two real dependent variables. You essentially just do Runge-Kutta 4th-order integration twice, once for the real part and once for the imaginary part.
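To illustrate that equivalence, here is a minimal sketch (the `rk4_step` name is illustrative): integrating dz/dt = iz as one complex variable and as the equivalent real pair d(x,y)/dt = (-y, x) gives the same answer.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical RK4 step; works for real, complex, or array-valued y."""
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2)
    k4 = f(t + dt, y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# Complex form: dz/dt = i*z
fc = lambda t, z: 1j*z
# Equivalent real system for z = x + i*y: d(x, y)/dt = (-y, x)
fr = lambda t, v: np.array([-v[1], v[0]])

z = 1.0 + 0.0j
v = np.array([1.0, 0.0])
t, dt = 0.0, 0.01
for _ in range(100):
    z = rk4_step(fc, t, z, dt)
    v = rk4_step(fr, t, v, dt)
    t += dt
# The complex integration and the split real/imaginary integration agree,
# and both approximate the exact solution z(1) = exp(1j)
```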

    If I were going to do this, my first attempt would be to start with a central difference scheme in the two space dimensions and one time dimension for the special case of q = 0, then try to use those as trial functions, maybe adding in some correction functions based on intuition or a hunch about what q does to the solutions, or just observational insight. There are probably more elegant and effective methods.

    Hope that helps!
  4. Jun 27, 2016 #3
    Yes, it does help. Your equation is the one I want to solve; it seems I could not render my equation correctly (I'm new to BB codes). When I think about it, there are no restrictions in the RK algorithm itself (i.e., nothing restricts it to the real domain). Given that Python can handle complex numbers, I wonder why I would have to modify my direct RK implementation at all. Perhaps there is something within NumPy that makes this an issue.

    I will try this now, but I need a predictable test case to see if things are working out: something non-trivial but straightforward, to exercise the RK4 algorithm. If you have a simple suggestion, I'd appreciate it. By the way, how did you render your equation? My BB coding was not as legible as yours.
  5. Jun 27, 2016 #4



    For a test case, try something simple like ##\frac{dz}{dt} = -9.81it##. Then try ##\frac{dz}{dt} = -9.81t##. Then try something like ##\frac{dz}{dt} = iz##.
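Each of these test cases has a closed-form solution to compare against (a sketch; `integrate` and `rk4_step` are illustrative helper names, and dz/dt = -9.81t is just the purely real analogue of the first case):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical RK4 step for real or complex y."""
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2)
    k4 = f(t + dt, y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

def integrate(f, z0, t_end, n):
    """Integrate dz/dt = f(t, z) from 0 to t_end in n RK4 steps."""
    dt = t_end / n
    t, z = 0.0, z0
    for _ in range(n):
        z = rk4_step(f, t, z, dt)
        t += dt
    return z

# dz/dt = -9.81j*t  ->  z(t) = z0 - 9.81j*t^2/2  (exact up to rounding,
# since the right-hand side is polynomial in t of low degree)
z1 = integrate(lambda t, z: -9.81j*t, 0.0 + 0.0j, 2.0, 100)

# dz/dt = 1j*z  ->  z(t) = z0*exp(1j*t)
z2 = integrate(lambda t, z: 1j*z, 1.0 + 0.0j, 1.0, 100)
```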

    As for the fractions, \frac{x}{y} => ##\frac{x}{y}##
  6. Jun 29, 2016 #5
    I'm using Python/NumPy to do the simulation, and it has complex variable handling. The RK4 algorithm does not seem to have restrictions on domain, but I tried full separation of the real and imaginary parts as you suggested. I also appreciate the simple test cases you suggested, and I used them.

    The fully separated version seemed to match the output from the plain/direct implementation, but I needed special handling for basic 1D and 2D ODEs, which I added (I'm trying to create a 2D simulation for now). Here's another question: how do I implement a complex Laplacian? The TDGL equation has ∇²ψ. At the moment, I use a discrete convolution kernel to find the Laplacian of a purely real 2D ψ array. Should the same kernel be used for complex values?

    See: https://en.wikipedia.org/wiki/Discrete_Laplace_operator#Finite_Element_Method

    I am really having trouble with this simulation and do appreciate help.
  7. Jun 29, 2016 #6



    You can use the discrete Laplacian on the real part and imaginary part separately. This isn't a huge issue. What's going to be hard is the nonlinear term in the PDE. I don't think you'll be able to write a discrete matrix operator for it like you do for the Laplacian, since that term has 3 powers of psi in it. This is where I think you may need to start turning to trial functions or a finite element analysis as opposed to a finite difference analysis. One idea that comes to mind is solving the problem for q = 0 via the finite difference approach and then using trial functions (kind of like a Galerkin method) to find the right correction to account for the q-dependent term. Beyond that, I don't have enough experience to make recommendations. Anyone more familiar with numerical strategies for this equation?
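A quick numerical check of the first point (a sketch; `laplacian_5pt` is an illustrative helper assuming periodic boundaries): because the stencil is linear, applying the 5-point kernel to a complex array elementwise gives the same result as applying it to the real and imaginary parts separately.

```python
import numpy as np

def laplacian_5pt(psi, h):
    """5-point stencil Laplacian with periodic boundaries; psi may be complex."""
    return (np.roll(psi, 1, axis=0) + np.roll(psi, -1, axis=0)
          + np.roll(psi, 1, axis=1) + np.roll(psi, -1, axis=1)
          - 4.0*psi) / h**2

# Compare the direct complex evaluation with the split real/imag evaluation
rng = np.random.default_rng(0)
psi = rng.standard_normal((32, 32)) + 1j*rng.standard_normal((32, 32))
lap = laplacian_5pt(psi, 0.1)
lap_split = laplacian_5pt(psi.real, 0.1) + 1j*laplacian_5pt(psi.imag, 0.1)
# lap and lap_split agree elementwise
```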
  8. Jun 30, 2016 #7
    I can't seem to embed an image, but I did a test on a 2D domain: dψ/dt = ∇²ψ:

    *** 0.00s to 50.00s in 100 timesteps.
    *** -10.00 < x < 10.00, delta_x = 200
    *** -10.00 < y < 10.00, delta_y = 200
    *** Starting simulation at 2016-06-30 15:51:20.
    *** Running ODE solver in array mode.
    *** Simulation concluded after 9 uS.

    The initial value was a 2D step function equal to 1.0+1.0j between ±4 on both axes. My software ran without crashing (a good thing), and the output is roughly what I'd expect it to be at the end of the simulation time. Our conversation helped clear things up in that sense.

    I read your post but was unclear on part of your reply. My Python implementation seems to follow something I found very closely, if not identically:
    1. In this report, Equation 7 sticks out. The first term on the right-hand side makes use of the 1D Laplacian with no issues. For each step in the RK algorithm, ψ(x,t) has t held constant, so the Laplacian over the complex field should pose no problem. The partial derivatives should be fine.
    2. The second term throws me off somewhat, but |ψ|²ψ seems to simply be a complex scalar field... am I missing something here?
    I think I am likely missing some very basic insight into how the GL equation should be computed. If there were a Python package that solved this for me, I'd be ecstatic, but it looks like there's only something for the London equations (because they are restricted versions of Maxwell's equations, which can be solved using the FDTD method, and there's lots of software for that).
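For what it's worth, here is how that reading of the second term looks in code: |ψ|²ψ is evaluated pointwise, and the Laplacian is the only nonlocal operation. This is only a sketch (helper names illustrative, periodic boundaries assumed); note that -1/i = i turns both divisions by i into multiplications.

```python
import numpy as np

def laplacian_5pt(psi, h):
    """5-point Laplacian, periodic boundaries; works on complex arrays."""
    return (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
          + np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4*psi) / h**2

def tdgl_rhs(psi, p, q, gamma, h):
    """RHS of  dpsi/dt = -(p/i) lap(psi) - (q/i)|psi|^2 psi + gamma psi.
    Since -1/i = i, both terms become multiplications by 1j."""
    lap_term = 1j*p*laplacian_5pt(psi, h)     # linear, nonlocal operator
    nonlinear = 1j*q*np.abs(psi)**2*psi       # local: evaluated pointwise
    return lap_term + nonlinear + gamma*psi

# Spot check on a spatially uniform field, where the Laplacian vanishes:
# rhs should equal 1j*q*|c|^2*c + gamma*c everywhere
psi = np.full((8, 8), 0.5 + 0.5j)
rhs = tdgl_rhs(psi, p=1.0, q=1.0, gamma=0.5, h=0.1)
```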
  9. Jul 2, 2016 #8
    How important is system size? If not so much, how about a super simple exponential time integration scheme in Fourier space, with some simple (or no) anti-aliasing on the non-linear term? I seem to recall it worked pretty well back when I worked with it. Have you read The World of the Complex Ginzburg-Landau Equation by Aranson and Kramer? They are bound to have some references to numerical schemes.
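In rough form, such a scheme might look like this (a first-order exponential-time-differencing sketch on a periodic box, with no anti-aliasing; all names are illustrative, not a tuned implementation):

```python
import numpy as np

def etd1_step(psi, dt, p, q, gamma, L):
    """One first-order exponential-time-differencing (ETD1) step for
    dpsi/dt = -(p/i) lap(psi) - (q/i)|psi|^2 psi + gamma psi
    on a periodic L-by-L box. The linear part is advanced exactly in
    Fourier space; the nonlinear term is explicit, with no anti-aliasing."""
    n = psi.shape[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)          # angular wavenumbers
    kx, ky = np.meshgrid(k, k, indexing='ij')
    lam = -1j*p*(kx**2 + ky**2) + gamma           # Fourier symbol of linear part
    z = dt*lam
    zs = np.where(np.abs(z) < 1e-12, 1.0, z)      # guard the removable 0/0
    phi1 = np.where(np.abs(z) < 1e-12, 1.0, (np.exp(zs) - 1.0)/zs)
    nonlin = 1j*q*np.abs(psi)**2*psi              # -(q/i)|psi|^2 psi, as -1/i = i
    return np.fft.ifft2(np.exp(z)*np.fft.fft2(psi) + dt*phi1*np.fft.fft2(nonlin))

# Sanity check: with q = 0, a single Fourier mode is advanced exactly
n, Lbox = 32, 2*np.pi
x = np.arange(n)*Lbox/n
X = np.meshgrid(x, x, indexing='ij')[0]
psi0 = np.exp(1j*X)                               # mode with wavevector (1, 0)
psi1 = etd1_step(psi0, 0.05, p=1.0, q=0.0, gamma=0.1, L=Lbox)
```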
  10. Jul 6, 2016 #9
    I will have to review what you suggested in detail; I think I have seen this reference somewhere. Furthermore, I am not familiar with the method you speak of, so it will be worthwhile to look at this.

    At the moment, I am trying to push forward with what I have... I don't know how to upload a video to PF, but I think I have something that makes sense to a neophyte in superconductivity (my background is in electrical and computer engineering). I randomly initialized a complex field (the order parameter). Over time, the system seems to demonstrate fluxon nucleation... that is to say, normal cores in a sea of superconducting fluid.

    Some updates for Twigg, and now for you, Strum...
    1. The fluxon cores do not seem to move. I assume this is because they have enough space not to exhibit their mutual repulsion.
    2. The cores seem to have a very low concentration of superfluid, consistent with GL theory.
    3. The simulation seems to overflow at times... I suspect this has to do with the 5-point stencil I am using to compute the Laplacian. The convolution kernel I am using does not divide by 4, nor does it take into account the spatial resolution of the simulation (i.e., h).
    4. The simulation runs slowly. I may need to move over to a GPU for the computations; I want to do this in 3D.
    5. I can generate images and video that look pretty, but that doesn't count for anything technical. I somehow need to validate this thing.
    So now, I am looking for two things in the short term as I review the suggested document...
    1. Getting my Laplacian to be more numerically accurate and not produce overflows.
    2. Finding some sort of validation strategy.
    3. Buying a new laptop with a GPU. o_O

    Attached Files:

  11. Jul 6, 2016 #10



    Before you start shopping for new computers: is the code you're using to take the Laplacian your own original source code? If so, did you use sparse matrix encoding? I have never had an issue with a Laplacian matrix being too big for the processor to handle, even in 3D, while using sparse matrix encoding. Granted, it could be that you are asking for higher resolution than I have ever attempted, but I was able to solve Poisson's equation on a 2011 MacBook Pro using this approach with fifty to a hundred nodes per dimension. A friend of mine was able to take this simulation code and put it in a loop to simulate tens of different geometries and display the results as a real-time movie, without buffering the simulation data. I have a hard time believing the Laplacian matrix itself is your biggest demand on memory. Can you verify this?

    I'm unfamiliar with the physics involved, so I'll have to review these papers before I make any further comments. I just figured you might want the above information before you invest in a new laptop.
  12. Jul 7, 2016 #11
    It looks like a MATLAB colormap. In MATLAB, depending on your implementation, you do not need a matrix formulation; you can directly vectorize the input. It is a bit hard to see the lattice size, but already 512³ needs some memory.

    For validation, perhaps you can look at the velocity statistics of the vortices (which, by the way, I recall as moving, either annihilating or repelling; perhaps the friction coefficient can make them stationary?). You could look at this paper:
    Anisotropic velocity statistics of topological defects under shear flow, http://arxiv.org/pdf/1201.5758v1.pdf
    which has a few results and also describes a really nice method for calculating the charge and velocity of the vortices.

    For the numerical method I suggested, I can give you a fast description later if you like.
  13. Jul 10, 2016 #12
    Also, if you do not mind, I would like to see your Runge-Kutta code. I never got around to really learning this method despite it being all over the place.
  14. Jul 10, 2016 #13
    Yes, it is, and no, I did not use sparse encoding, as I did not think it was necessary here; the ψ(x,y) (order parameter) matrix is a scalar field.

    I implemented the discrete Laplacian operator using a convolution kernel for my 2D complex scalar field, and this is an O(n²) complexity operation. The Runge-Kutta integration itself has linear complexity but is applied over the 2D field. This can get computationally intensive, but not necessarily memory intensive IMHO (there is very little internal buffering for the actual computation). I did some very basic profiling and realized that the graphics rendering is the memory hog. My poor laptop... after a bit of rendering, the fan goes crazy. This week, I will be validating the Laplacian with a known discretized analytic function. I will work things out on paper and then inspect the numerical result. I will be sure to get back to you on this.

    To be honest, my computer is an old dual-core unit that is filled with software used at my different jobs over the past 6+ years. Disk/memory swapping may be my problem, but regardless, this PC needs to be replaced soon... I just need a catalyst. I am looking at an MSI gaming laptop that will give me some CUDA capability. I simply can't wait five minutes between simulation trials as I am testing; things will get more complex, and this will accumulate to the point of effective work stoppage. My PhD is a part-time affair as I have two full-time jobs, so sleep is an important scheduled activity. This was actually at the forefront of my thoughts when browsing for laptops. My circumstance is not one of choice. :frown:
  15. Jul 10, 2016 #14
    Is there a way I can paste Python code in this forum? I seem to not be very good at figuring out how to perform basic posting operations. I absolutely don't mind sharing the code. It is basic, but it works well for 1D and 2D fields with no modification. With basic modification, you can specify functional/control systems using state equations... that is REALLY cool for lumped physical models. I just tried cutting and pasting my code: hideous.
  16. Jul 11, 2016 #15
    I wanted to make a video of my work in progress, although it is not completely validated and bug free. I put up a temporary YouTube video to share what I have at the moment:

    I will still be validating the Laplacian later on among other things.
  17. Jul 11, 2016 #16
    Very cinematic! What is your unit of length in the simulation box?

    Perhaps you can paste your code using the code tags.

    Code (Text):

  18. Jul 12, 2016 #17
    Okay... it seems my implementation of the Laplacian has some accuracy issues. My validation method:
    1. Used a 2D function and computed the symbolic Laplacian so I had a perfect reference: f(x,y) = exp(-x²-y²). This is a single 2D hump centered at (0,0); most of it vanishes at a radial distance of 2 units from the center.
    2. Set up the discrete Laplacian function: 5-point stencil & convolution.
    3. For various domain extents (the range of x and y), computed the error between the analytic and discrete solutions.
    4. Generated error plots for each range, with varied values of h (the finite difference step).
    5. BONUS: A pseudocolor plot of the surface, as well as the error surface resulting from the aforementioned comparison.
    Attached are the results (plots are annotated).
    Discrete_Laplacian_ERROR_Test1.png Discrete_Laplacian_ERROR_Test2.png Discrete_Laplacian_ERROR_Test3.png Discrete_Laplacian_ERROR_Test4.png Discrete_Laplacian_ERROR_Test5.png Discrete_Laplacian_ERROR_Test6.png Discrete_Laplacian_Validation.png
    Now this was an INTERESTING exercise. The maximum error that results can be dramatic. As the domain extents come closer to the hump's center, the domain region clips the hump; this is to say, the function is not allowed to vanish, and edge effects take a serious toll. This is likely what is causing my overflow problems. As the domain is extended further from the center, the function vanishes at the edges and the error vanishes as well.
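For reference, this is the kind of check I mean, restricted to interior points so the stencil never reaches outside the grid (a sketch; grid parameters are illustrative): for f = exp(-x²-y²), the exact Laplacian is (4x² + 4y² - 4)exp(-x²-y²), which gives a perfect reference.

```python
import numpy as np

# Analytic reference: for f = exp(-x^2 - y^2),
# the exact Laplacian is (4x^2 + 4y^2 - 4) * exp(-x^2 - y^2)
n, extent = 401, 6.0
x = np.linspace(-extent, extent, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing='ij')
f = np.exp(-X**2 - Y**2)
lap_exact = (4*X**2 + 4*Y**2 - 4) * f

# 5-point stencil evaluated on interior points only, which sidesteps the
# boundary-clipping effect: the stencil never looks outside the grid
lap_num = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
           - 4*f[1:-1, 1:-1]) / h**2
err = np.abs(lap_num - lap_exact[1:-1, 1:-1]).max()
# err scales as O(h^2): refining the grid shrinks it quadratically
```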

    I suppose my question is: is my convolution method just a quick and dirty way of computing the Laplacian, or is there a better way? I know Twigg mentioned a method that involved sparse matrices. I did some thinking while driving and tried to figure out why he mentioned this. Now I can see it as another method (compute the finite difference at each point on the vectorized surface and solve the resulting system)... Can I get more information on this, like an algorithm name?

    Code rendering on this forum is terrible/non-existent. A tab character won't render (Python loves tabs); if there is another solution for posting code, I wouldn't mind knowing about it.
  19. Jul 12, 2016 #18
    Goodness, the code tags don't like tabs. This ruins any formatting that makes the code legible! My simulation length is kinda generated from random parameters (yes, I know... I'm testing stuff:nb)).

    I made the video kinda cinematic deliberately. When I work in industry, you need to get the attention of your peers in R&D. This works. So I just fall back to old habits. Some of my technical videos look like there should be a rapper in there somewhere. It gets the point across in a pleasant way and costs little time. I bring this habit to academia now...:cool:
  20. Jul 12, 2016 #19
    This is not an ideal posting, but I could not ignore your request despite the forum's poor rendering ability (AFAIK).
    This will work for 1D or 2D functions (the second block is straight from the textbook for simple ODEs). With some modification, you can use it to solve state-space equations. If you do any engineering, you can simulate some pretty cool systems. I actually implemented this code on a microcontroller and it worked. I had a real-time thermal simulator. I could heat or cool a virtual piece of material arbitrarily and monitor its temperature. I found it to be pretty impressive, but coupled ODEs are even cooler.
  21. Jul 13, 2016 #20



    I thought this was what you were doing all along. Sorry for misunderstanding! I don't know the fancy name for it, but what I did was construct a 3D discrete Laplacian sparse matrix using Kronecker products. There's a formula for that on Wikipedia. For example, for a 2D 3x3 mesh, I would ordinarily need a 9x9 discrete Laplacian matrix with 81 entries. Using sparse encoding, you cut that number down to only the nonzero entries, which lie on only a few diagonals, and save a lot of memory. I think the math is the same as the convolution kernel you are describing. It speeds things up a lot.
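A minimal sketch of that construction with SciPy's sparse module (sizes illustrative; the 3x3 mesh gives a 9x9 operator with only 33 stored entries instead of 81):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """2D discrete Laplacian (interior 5-point stencil, edges truncated)
    on an n-by-n grid, built as an n^2 x n^2 sparse matrix from
    Kronecker products."""
    D = sp.diags([np.ones(n-1), -2.0*np.ones(n), np.ones(n-1)],
                 [-1, 0, 1], format='csr')       # 1D second-difference matrix
    I = sp.identity(n, format='csr')
    return sp.kron(I, D, format='csr') + sp.kron(D, I, format='csr')

L3 = laplacian_2d(3)        # 9x9, but only the nonzero diagonals are stored
f = np.arange(9.0)          # a flattened 3x3 field
res = L3 @ f                # res[4] applies the 5-point stencil at the center
```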

    I have a speculative theory on this; hopefully it is productive. This convolutional kernel business doesn't seem to take spatial boundary conditions into account. I'm basing this off my experience with the discrete Laplacian method I described above (which, if you think about it, uses the same principle) and Jackson's treatment of Green's functions. The convolution matrix method seems to me like taking an integral convolution, which is like convolving a Green's function with a source distribution, and Green's functions depend on the boundary conditions (explained fully in Jackson, or just take an image charge as an example).

    For a more concrete connection: when I used the discrete Laplacian matrix method (which I _think_ is mathematically identical to the convolutional kernel), we had to delete all entries for mesh points on the boundaries, edges, and corners of our rectangular grid and insert driven constraints from physical considerations. You don't get the usual discrete Laplacian when you consider boundary conditions (Dirichlet or Neumann, either one changes things). In the case of Dirichlet conditions, I would replace an entire row of the Laplacian with a row consisting of a single '1' and all other entries '0', and put the corresponding data point in the solution vector. The problem I was working on was a heat transfer problem, so I didn't have any true Neumann conditions to work with, but for boundary heat sources I had to use a modified row for the discrete Laplacian, which I figured out a pattern for by doing simple examples by hand, plus the heat source coming in from the outside. I don't know if these simplified boundary conditions are helpful for your application, but I have a hunch that boundary conditions are what's causing your edge-related errors.

    NOTE: The discrete Laplacian formula from Wikipedia doesn't include boundary effects either! I had to derive a pattern/strategy by doing very small cases by hand.
  22. Jul 13, 2016 #21



    I don't think I made this very clear in my last post. The ordinary discrete Laplace operator described in the paper you posted works for any point on the interior of your domain. In the presence of boundary conditions, you modify the operator at points on the boundary accordingly. The interior-point Laplacian operator expects every mesh cell to be surrounded by four other mesh cells in 2D (six in 3D). I suspect that your overflow errors are due to the Laplacian "looking" for mesh cells that aren't there because of the finite extent, if that makes sense.
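In 1D, the row-replacement idea looks like this (a sketch with SciPy sparse; the function name and test problem are illustrative): boundary rows become identity rows, and the Dirichlet data moves into the right-hand-side vector.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_1d_dirichlet(n, left, right):
    """1D second-difference matrix with boundary rows replaced by identity
    rows; the Dirichlet boundary values go into the RHS vector b."""
    A = sp.diags([np.ones(n-1), -2.0*np.ones(n), np.ones(n-1)],
                 [-1, 0, 1]).tolil()
    b = np.zeros(n)
    A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = left     # enforce u(x_0) = left
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = right   # enforce u(x_n) = right
    return A.tocsr(), b

# Laplace's equation u'' = 0 with u(0)=0, u(1)=1 has the linear solution
n = 11
A, b = laplacian_1d_dirichlet(n, 0.0, 1.0)
u = spsolve(A, b)
```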
  23. Jul 16, 2016 #22
    A max error on the Laplacian of around 0.005, even for large dx, seems pretty good to me for this function. On the other hand, in your 2D plots it looks weird that the error does not go towards zero away from the middle (assuming red is not 0; you should put a colorbar on those plots). My own quick implementation of both 5- and 9-point stencils goes to zero pretty quickly.
    Thank you for the code, I will look at it, perhaps Sunday. :)
  24. Jul 17, 2016 #23
    I can't seem to figure out how to edit the post :/. Here is a pic of the error I get: 25tjkm1.png http://tinypic.com/r/25tjkm1/9
  25. Jul 21, 2016 #24
    I needed to take some time to think about this whole thing. The convolution method I was working with has a spectral response. By this I mean the finite boundary extents should yield oscillations at the extents, and these are related to the spatial step size. When I initialize my array with uniformly random values (from the complex unit square, with magnitude less than or equal to one), adjacent entries form 2D step functions relative to one another. For example, an arbitrary array location (i,j) may hold a random value of 1+0j while the adjacent location (i+1,j-1) holds 0+0j. This is a sharp discontinuity whose second derivative is quite large.

    For a sufficiently large initialization array with locally discontinuous values, the Laplacian can yield large values, especially the way I am iterating in the Runge-Kutta algorithm... repeatedly calling the Laplacian operator. I am just throwing ideas around... what do you think?
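That scaling is easy to check directly: a unit-height jump between neighbouring cells contributes O(1/h²) to the discrete Laplacian, so halving h quadruples the largest values the right-hand side sees (a sketch; periodic boundaries, names illustrative):

```python
import numpy as np

def laplacian_5pt(psi, h):
    """5-point stencil Laplacian with periodic boundaries."""
    return (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
          + np.roll(psi, 1, 1) + np.roll(psi, -1, 1) - 4*psi) / h**2

rng = np.random.default_rng(1)
psi0 = rng.random((64, 64)) + 1j*rng.random((64, 64))  # rough random field

m1 = np.abs(laplacian_5pt(psi0, 0.1)).max()
m2 = np.abs(laplacian_5pt(psi0, 0.05)).max()
ratio = m2 / m1   # halving h quadruples the peak |lap(psi)|
```

Since the explicit RK4 step is only stable while dt times the largest right-hand-side eigenvalue stays bounded, this 1/h² growth is a plausible route to the overflows: refining the grid without shrinking dt accordingly will blow up.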
  26. Jul 21, 2016 #25
    (How did you get your pic to appear inline???)

    I notice you are using a 9-point stencil here, while I was using a 5-point stencil (on the rectangular grid, you are taking diagonals into account while I am not). Were you also using convolution here?