How Can I Solve a Set of Coupled, Nonlinear PDEs with Two Independent Variables?

  • Thread starter TheCanadian
  • Start date
  • Tags
    Pde
In summary: the thread asks how to numerically solve a set of coupled, nonlinear PDEs in two independent variables (t and z), with equations of the form ##\frac {\partial A}{\partial t} = aAC + bBD - cE## and ##\frac {\partial E}{\partial z} = cCD - fE##; the discussion converges on the method of lines combined with a stiff (Gear-type) ODE integrator.
  • #1
TheCanadian
I've been trying to solve a set of coupled, nonlinear PDEs with the general form:

## \frac {\partial A}{\partial t} = aAC + bBD - cE ##

## \frac {\partial B}{\partial t} = - c(E+F) ##

## \frac {\partial C}{\partial t} = dAD - eE ##

## \frac {\partial D}{\partial t} = dBC - eF ##

## \frac {\partial E}{\partial z} = cCD - fE ##

## \frac {\partial F}{\partial z} = cCD - fF ##

(Upper case letters stand for the solutions sought; lower case letters represent constants)

I've tried existing packages such as FiPy as well as my own basic FEM solver, but neither has been successful aside from a few nontrivial cases. The presence of two independent variables (z and t) in particular seems to complicate the solver. Any advice on existing libraries or numerical methods I should try would be of great interest. (I am most familiar with Python, but am open to suggestions in any programming language.)
 
  • #2
No replies yet, so here are my two cents: open-source tools still can't compete with commercial math software like Mathematica. In most cases, even if you have to earn the money flipping burgers, the time the software saves you will more than cover the time spent earning the money to buy it; that's been my experience. So if you haven't already, check out the APIs or online interfaces of the commercial offerings for solving this problem.
 
  • #3
Hey TheCanadian.

Have you tried to do an error/order analysis of your PDE system?

What I mean by this is have you tried to get some sort of bound for the error or order of error?

If you can derive that using the same sort of logic used to check the error of numerical schemes, then you can get an idea of what step size to use for the cases you intend to simulate, and identify the cases where, for a given step size, numerical techniques become too computationally expensive to be practical.

What is the interval you want to numerically integrate on?
 
  • #4
What is the boundary condition on E and F?
 
  • #5
Fooality said:
No replies yet, so here are my two cents: open-source tools still can't compete with commercial math software like Mathematica. In most cases, even if you have to earn the money flipping burgers, the time the software saves you will more than cover the time spent earning the money to buy it; that's been my experience. So if you haven't already, check out the APIs or online interfaces of the commercial offerings for solving this problem.

Thank you for the response. I have recently gotten Mathematica and will certainly look into its applicability for this problem. After a cursory review of the available documentation and related problems, I have yet to come across an existing technique that demonstrably solves problems exactly like mine (i.e. none of the examples online involve equations that are simultaneously nonlinear, coupled, and in more than one independent variable), although I'll certainly look into it further.
 
Last edited:
  • #6
chiro said:
Hey TheCanadian.

Have you tried to do an error/order analysis of your PDE system?

What I mean by this is have you tried to get some sort of bound for the error or order of error?

If you can derive that using the same sort of logic used to check the error of numerical schemes, then you can get an idea of what step size to use for the cases you intend to simulate, and identify the cases where, for a given step size, numerical techniques become too computationally expensive to be practical.

What is the interval you want to numerically integrate on?

Hmm, one problem I was encountering in determining an appropriate step size for z and t is choosing a suitable method to estimate the error in my system. For example, when using a scheme such as von Neumann analysis, I was having trouble formulating it in a way that places a particular bound on ## \frac {\Delta z}{\Delta t} ##. Any suggestions? (I'll try to lay out a more concrete example of a particular situation I am looking at in the next post, responding to Chestermiller's comment.)
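
For reference, here is the kind of bound a von Neumann analysis produces for a model problem. This is for a first-order upwind discretization of linear advection ## \partial_t u + v\,\partial_z u = 0 ## (with ## v > 0 ##), not for the full coupled system, but it shows the form the constraint relating ## \Delta t ## and ## \Delta z ## takes:

$$ u_j^{n+1} = u_j^n - \frac{v \Delta t}{\Delta z}\left(u_j^n - u_{j-1}^n\right), \qquad u_j^n = g^n e^{ikj\Delta z} \;\Rightarrow\; g = 1 - \frac{v \Delta t}{\Delta z}\left(1 - e^{-ik\Delta z}\right), $$

and ## |g| \le 1 ## for every wavenumber ## k ## exactly when ## 0 \le \frac{v \Delta t}{\Delta z} \le 1 ## (the CFL condition).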

The intervals I am considering below are z = [0,300] and t = [0,500]. I used a uniform grid with 500 steps on each axis.
 
Last edited:
  • #7
Chestermiller said:
What is the boundary condition on E and F?

Since E and F are more general versions of a particular problem I am looking at, I'll outline one case with some example conditions I am working with, along with the associated code (in Python). The equations in my initial post have real-valued solutions (which is the form I would use with an existing solver), but the functions I am solving for below are complex-valued, in case you want to follow along with the code.

My problem at hand:

## \frac {\partial N}{\partial t} = i(PE + P^*E^*) - \frac{N}{T_1/T_R}
\\ \frac {\partial P}{\partial t} = 2i(E^*N) - \frac{P}{T_2/T_R}
\\ \frac {\partial E}{\partial z} = qiP - \frac {E}{L_{diff}}
##

where ## i = \sqrt{-1} ##; ## q = \frac{\omega T_R N_0 d^2}{2c\hbar\epsilon V} ##; ##N, P,## and ##E## are the complex-valued solutions sought; ##T_1, T_2, T_R, N_0, d, \hbar, c, \epsilon, L_{diff}## are constants defined in the code; ##L## is the length of the sample (i.e. z = [0, L]); and ## \theta_0 = 4.7 \times 10^{-5} ##.

Initial conditions:
## N(z,0) = 0.5\sin(\theta_0) \\
P(z,0) = 0.5\cos(\theta_0) ##

Boundary conditions:
I am working on 3 different cases right now: ##E(0,t) = 0##; ##E(0,t) = 0.025i## (written as 0.025j in the code); ##E(L,t) = 0##.

Code:
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np

#constants & parameters
theta0 = 4.7*(10**(-5))
TD = 0.25*(np.log(theta0/(2*np.pi)))**2
d = 1.7*(10**(-30))
omega = 2.*np.pi*(1612.*10**(6))
eps = 8.85*10**(-12)
c = 3.*(10**8)
hbar = (6.626*10**(-34))/(2*np.pi)
eta = 0.01
nn = 10.**7
n = eta*nn

lambdaOH = c/(1612.*10**(6))
gamma = 1.282*(10.**(-11))
Tsp = 1./gamma
TR = 604800.
T1 = 210.*TR
T2 = 210.*TR
L = (Tsp/TR)*(8.*np.pi)/((3.*(lambdaOH**2.))*n)
radius = np.sqrt(lambdaOH*L/np.pi)
A = np.pi*(radius**2)
phi_diffraction = (lambdaOH**2)/A
V = (np.pi*(radius**2))*L
NN = n*V
constant3 = (omega*TR*NN*(d**2)/(2*c*hbar*eps*V))

Fn = 1.
Lp = 1.
Ldiff = Fn*L/0.35

#time
tmax = 500. #time limit
Ngrid = 500
ht = tmax/Ngrid
interval = int(tmax/ht) + 1 #i.e. [0,tmax] in steps of ht; np.linspace needs an integer count
t = np.linspace(0,tmax,interval)

#z space
zmax = L/Lp
w = len(t) - 1 #interval number in z
z = np.linspace(0,zmax,len(t))
hz = zmax/w

Ep = np.zeros((len(z),len(t)),dtype=complex) #number of rows = len(z)
Pp = np.zeros((len(z),len(t)),dtype=complex)
N = np.zeros((len(z),len(t)),dtype=complex) #dtype=complex works (np.complex_ is removed in newer NumPy)
Es = np.zeros((len(z),len(t)),dtype=complex)

#Initial Conditions
Pp[:,0] = 0.5*np.cos(theta0)
N[:,0] = 0.5*np.sin(theta0)
#Boundary Conditions
Ep[0,:] = 0.025j

#----------------solving Bloch equations----------------
#first integrate N and Pp in time along the z = 0 boundary (j = 0) with a midpoint (RK2) step
j = 0
i = 0
while i < (len(t)-1):
    kNB = (1j)*(Pp[j,i]*Ep[j,i] - np.conj(Ep[j,i])*np.conj(Pp[j,i])) \
    - N[j,i]/(T1/TR)
    kPB = (2j)*(np.conj(Ep[j,i]))*N[j,i] - Pp[j,i]/(T2/TR)
    bNB = (1j)*((Pp[j,i] + ht/2*kPB)*Ep[j,i] - \
    np.conj(Ep[j,i])*np.conj(Pp[j,i] + ht/2*kPB)) - (N[j,i] + ht/2*kNB)/(T1/TR)
    bPB = (2j)*(np.conj(Ep[j,i]))*(N[j,i] + ht/2*kNB) - (Pp[j,i] \
    + ht/2*kPB)/(T2/TR)
    N[j,i+1] = N[j,i] + ht*bNB
    Pp[j,i+1] = Pp[j,i] + ht*bPB
    i = i + 1

#march outward in z with a predictor-corrector sweep
k = 1
while k < (len(z)): #k runs from 1 up to and including index len(z)-1
    EpE = Ep.copy() #predictor copy; a plain assignment would alias Ep and overwrite it in place
    kpE = (1j*constant3)*np.conj(Pp[k-1]) - EpE[k-1]/(Ldiff/Lp)
    bpE = (1j*constant3)*np.conj(Pp[k-1]) - (EpE[k-1] + hz/2*kpE)/(Ldiff/Lp)
    EpE[k] = EpE[k-1] + hz*bpE

#----------------calculating NE & PpE----------------
    PpE = Pp.copy()
    NE = N.copy()
    i = 0
    while i < (len(t)-1):
        kNBE = (1j)*(PpE[j,i]*EpE[j,i] - np.conj(EpE[j,i])*np.conj(PpE[j,i])) \
        - NE[j,i]/(T1/TR)
        kPBE = (2j)*(np.conj(EpE[j,i]))*NE[j,i] - PpE[j,i]/(T2/TR)
        bNBE = (1j)*((PpE[j,i] + ht/2*kPBE)*EpE[j,i] - np.conj(EpE[j,i])*np.conj(PpE[j,i] \
        + ht/2*kPBE)) - (NE[j,i] + ht/2*kNBE)/(T1/TR)
        bPBE = (2j)*(np.conj(EpE[j,i]))*(NE[j,i] + ht/2*kNBE) - (PpE[j,i] \
        + ht/2*kPBE)/(T2/TR)
        NE[k,i+1] = NE[k,i] + ht*bNBE
        PpE[k,i+1] = PpE[k,i] + ht*bPBE
        i = i + 1

    NA = N.copy() #averaged (corrector) estimates; .copy() avoids aliasing the main arrays
    NA[k,:] = (NE[k,:]+N[k-1,:])/2

    PpA = Pp.copy()
    PpA[k,:] = (PpE[k,:]+Pp[k-1,:])/2

    EpM = EpE.copy() #Ep field for this step

    kpM = (1j*constant3)*np.conj(PpA[k-1]) - EpM[k-1]/(Ldiff/Lp)
    bpM = (1j*constant3)*np.conj(PpA[k-1]) - (EpM[k-1] + hz/2*kpM)/(Ldiff/Lp)
    EpM[k,:] = EpM[k-1,:] + hz*bpM

    NM = NA.copy() #final N
    PpM = PpA.copy() #final Pp

    i = 0
    while i < (len(t)-1):
        kNBM = (1j)*(PpM[j,i]*EpM[j,i] - np.conj(EpM[j,i])*np.conj(PpM[j,i])) \
        - NM[j,i]/(T1/TR)
        kPBM = (2j)*(np.conj(EpM[j,i]))*NM[j,i] - PpM[j,i]/(T2/TR)
        bNBM = (1j)*((PpM[j,i] + ht/2*kPBM)*EpM[j,i] - np.conj(EpM[j,i])*np.conj(PpM[j,i] \
        + ht/2*kPBM)) - (NM[j,i] + ht/2*kNBM)/(T1/TR)
        bPBM = (2j)*(np.conj(EpM[j,i]))*(NM[j,i] + ht/2*kNBM) - (PpM[j,i] \
        + ht/2*kPBM)/(T2/TR) #2j and kNBM, to match the predictor steps above
        NM[k,i+1] = NM[k,i] + ht*bNBM
        PpM[k,i+1] = PpM[k,i] + ht*bPBM
        i = i + 1

    Ep[k,:] = EpM[k,:]
    N[k,:] = NM[k,:]
    Pp[k,:] = PpM[k,:]

    k = k + 1

fig = plt.figure(1,figsize=(4,4))
X, Y = np.meshgrid(t,z)
ax = fig.add_subplot(1,1,1, projection='3d')
surf = ax.plot_surface(X,Y,(N*np.conj(N)).real, rstride=4, cstride=4, alpha=0.1)
ax.set_xlabel('t (ns)')
ax.set_ylabel('z')
ax.set_zlabel('Population Inversion (scaled)')
ax.view_init(30, 45)
plt.show()

I_SR_scaled = (0.5*c*eps*(Ep*np.conj(Ep))).real # = Itot
I_SR = I_SR_scaled/((d*TR/hbar)**2)
I_nc = NN*hbar*omega*(1./(A*Tsp))*(phi_diffraction/(4.*np.pi))
IntensityRatio = (I_SR/(NN*I_nc))/0.001

fig = plt.figure(2,figsize=(4,4))
X, Y = np.meshgrid(t,z)
ax = fig.add_subplot(1,1,1, projection='3d')
surf = ax.plot_surface(X,Y,IntensityRatio, rstride=4, cstride=4, alpha=0.1)
ax.set_xlabel('t (ns)')
ax.set_ylabel('z')
ax.set_zlabel(r"$ \frac {I_{SR}}{NI_{nc}}$")
ax.view_init(45, 45)
plt.show()
 
  • #8
At any point in time, the equations involving partial derivatives with respect to z can be integrated to give the E and F values at all grid points. The time-dependent equations can then be solved using the so-called method of lines. This involves solving the time-dependent equations at n equally spaced grid points in z; at each grid point they form a set of coupled ODEs, so you have n such sets. The result is a total of n × m stiff coupled ODEs to solve, where m is the number of dependent variables, not including E and F. You need to solve these ODEs using a stiff ODE solver, like the Gear method. Software packages implementing the Gear method are available in the IMSL Library and also online (check out DASSL).
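
To illustrate, here is a minimal Python sketch of this approach for the general system in post #1. It is only a sketch: the constants, grid, and initial profiles are made up, E = F = 0 is assumed at z = 0, and SciPy's BDF method stands in for a Gear-type stiff integrator such as the IMSL/DASSL routines.

Code:
#Method-of-lines sketch for the system in post #1 (illustrative values only)
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, e, f = 0.1, 0.05, 0.2, 0.08, 0.1, 0.3 #made-up constants
nz = 101
z = np.linspace(0.0, 300.0, nz)
dz = z[1] - z[0]

def integrate_E_F(C, D):
    #march dE/dz = c*C*D - f*E and dF/dz = c*C*D - f*F from z = 0, assuming E(0) = F(0) = 0
    E = np.zeros(nz)
    F = np.zeros(nz)
    for k in range(1, nz):
        src = c*C[k-1]*D[k-1]
        E[k] = E[k-1] + dz*(src - f*E[k-1])
        F[k] = F[k-1] + dz*(src - f*F[k-1])
    return E, F

def rhs(t, y):
    A, B, C, D = y.reshape(4, nz) #one row per dependent variable, one column per z grid point
    E, F = integrate_E_F(C, D)    #z-integration at the current time level
    dA = a*A*C + b*B*D - c*E
    dB = -c*(E + F)
    dC = d*A*D - e*E
    dD = d*B*C - e*F
    return np.concatenate([dA, dB, dC, dD])

#made-up initial profiles: A = B = 0.5, C = 1, D = 0.5 everywhere
y0 = np.concatenate([np.full(nz, 0.5), np.full(nz, 0.5), np.ones(nz), np.full(nz, 0.5)])
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF", rtol=1e-6, atol=1e-9)
A_final = sol.y[:nz, -1] #A(z) at the final time

The BDF family used here is the same class of backward differentiation formulas that the classical Gear integrators implement, so the automatic step-size and order control behaves similarly.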
 
  • #9
Chestermiller said:
At any point in time, the equations involving partial derivatives with respect to z can be integrated to give the E and F values at all grid points. The time-dependent equations can then be solved using the so-called method of lines. This involves solving the time-dependent equations at n equally spaced grid points in z; at each grid point they form a set of coupled ODEs, so you have n such sets. The result is a total of n × m stiff coupled ODEs to solve, where m is the number of dependent variables, not including E and F. You need to solve these ODEs using a stiff ODE solver, like the Gear method. Software packages implementing the Gear method are available in the IMSL Library and also online (check out DASSL).

Thank you for the response! I will certainly try that all out! Just one thing: when integrating E and F over z, wouldn't that be a bit problematic when their partial derivatives involve E and F themselves?
 
  • #10
TheCanadian said:
Thank you for the response! I will certainly try that all out! Just one thing: when integrating E and F over z, wouldn't that be a bit problematic when their partial derivatives involve E and F themselves?
I can't see why that would be a problem. At constant t, these are just ODEs in z.
 
  • #11
Very impressive advice, thanks for sharing.
 
  • #12
Chestermiller said:
I can't see why that would be a problem. At constant t, these are just ODEs in z.

Just thought I'd give a quick update since it's been some time. I did use the method of lines as you suggested. Instead of the Gear method, I implemented a 4th-order Runge-Kutta scheme to integrate over the t and z domains. It ended up working very well! Thank you for the advice.

Now, as with any model, we always want to make it more complex. To start off, we're trying to consider much larger domains and will be analyzing higher-frequency terms. The number of time (t) and spatial (z) steps needed with a finite-difference implementation will therefore be significantly higher (on the order of ##10^{12}## steps for the precision we need). I've been looking into adaptive mesh methods and FFT-based finite-difference methods, and there seems to be promise in either choice (possibly combining them is a good strategy). I'm working on implementing both approaches, but do you have any recommendations for PDE solvers that can traverse very large domains with high-frequency components? (Essentially, the slowly varying envelope approximation employed in the Maxwell-Bloch equations is no longer used, but an EM wave of a particular frequency is still considered.)
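
As a small sanity check of the FFT idea, here is the spectral-derivative building block I have in mind. This assumes a periodic domain, which is not quite my boundary conditions, and the field and grid below are made up for illustration.

Code:
#Spectral derivative sketch (assumes a periodic domain of length Lz)
import numpy as np

nz, Lz = 256, 300.0
z = np.linspace(0.0, Lz, nz, endpoint=False)
kz = 2.0*np.pi*np.fft.fftfreq(nz, d=Lz/nz)  #angular wavenumbers
u = np.exp(1j*10.0*2.0*np.pi*z/Lz)          #made-up oscillatory field (10th Fourier mode)
du_dz = np.fft.ifft(1j*kz*np.fft.fft(u))    #d/dz computed in Fourier space
#for this resolved mode, du_dz matches the analytic i*k0*u to machine precision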
 
  • #13
In PDEs describing transient diffusion, increasing the spatial resolution (smaller grid spacing) with an explicit integrator like 4th-order RK requires smaller time steps to maintain numerical stability. Basically, the ODEs are becoming more "stiff." So not only are there more calculations because of more grid points, but there are also more calculations because of smaller time steps. This was the motivation for the development of implicit "stiff" ODE solvers like the Gear integrators. These are stable for arbitrary time-step size and are designed to automatically adjust the time step and order to control accuracy.
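
A quick way to see this effect is to run an explicit and an implicit solver on a toy stiff ODE. The problem below is made up, and SciPy's RK45 and BDF stand in for a generic explicit RK scheme and a Gear-type integrator.

Code:
#Toy stiffness demo: y' = -1000*(y - cos(t)), y(0) = 0
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -1000.0*(y - np.cos(t))

for method in ("RK45", "BDF"):
    sol = solve_ivp(f, (0.0, 10.0), [0.0], method=method, rtol=1e-6, atol=1e-9)
    print(method, "accepted steps:", sol.t.size)
#the explicit method needs thousands of steps to stay stable; the implicit one needs far fewer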
 
  • #14
Chestermiller said:
In PDEs describing transient diffusion, increasing the spatial resolution (smaller grid spacing) with an explicit integrator like 4th-order RK requires smaller time steps to maintain numerical stability. Basically, the ODEs are becoming more "stiff." So not only are there more calculations because of more grid points, but there are also more calculations because of smaller time steps. This was the motivation for the development of implicit "stiff" ODE solvers like the Gear integrators. These are stable for arbitrary time-step size and are designed to automatically adjust the time step and order to control accuracy.

I've still been working on implementing the Gear method. Converting my system into a set of first-order differential equations just increases the number of equations, but do you have any suggestions for handling terms that involve (higher-order) derivatives with respect to the spatial variable? The Gear method works naturally when stepping through a single variable, but when the expressions themselves depend on derivatives with respect to a different variable, I am wondering what the best approach is.

For example, the general set of equations I'm working with is similar to the below:

$$ \frac {\partial y_1}{\partial t} = i(y_2 y_3 - y_3 ^* y_2 ^*) - C_1 y_1 $$

$$ \frac {\partial y_2}{\partial t} = i y_3^*y_1 - C_2 y_2 $$

$$ \frac {\partial ^2 y_3}{\partial t^2} = \frac {\partial ^2 y_3}{\partial z^2} + 2iC_3( \frac {\partial y_3}{\partial t} + \frac {\partial y_3}{\partial z}) - C_4 ( \frac {\partial^2 y_2^*}{\partial t^2} - iC_3 \frac {\partial y_2^*}{\partial t} - C_3 ^2 y_2 ^*) $$

(where ## y_n## are the solutions I am looking for, ## C_n ## are constants, ## i = \sqrt{-1} ##)

In this case, I am just having a bit of trouble handling the first- and second-order derivatives with respect to z. At the moment, I am planning to handle the spatial derivatives with a central difference scheme (and will hopefully be testing it soon), but are there any better methods you are aware of for handling derivatives with respect to different variables?

Also, out of curiosity: when you have terms such as higher-order derivatives (e.g. ## \frac {\partial^2 y_3}{\partial z^2}##) at the boundary (e.g. z = 0), how exactly does one handle them? A central difference scheme would require ## y_3[z=0], y_3[z=hz], ## and ## y_3[z=-hz]##, but ## y_3[z=-hz]## does not exist on the grid.
 
Last edited:
  • #15
To clarify, I can make the above into a set of first order differential equations:

$$ y_1' = i(y_2 y_3 - y_3 ^* y_2 ^*) - C_1 y_1 $$

$$y_2' = i y_3^*y_1 - C_2 y_2 $$

$$ y_4 = y_2' $$

$$ y_5 = y_3'$$

$$ y_5' = \frac {\partial ^2 y_3}{\partial z^2} + 2iC_3( y_5 + \frac {\partial y_3}{\partial z} ) - C_4 (y_4'^* - i C_3 y_4^* - C_3 ^2 y_2 ^*) $$

This is what I will be putting through the Gear method.
 
  • #16
I hope you are not writing your own ad hoc Gear integration routine. I hope you are using a packaged code like DASSL, which is available for free over the internet.

As you said, for higher-order time derivatives, you just add more variables to obtain all first derivatives. For partials with respect to z, you use a central difference formula, employing variables from neighboring grid points. If n is the number of time-derivative equations and m is the number of grid points, you are solving a set of n × m coupled ODEs in time (including coupling in the z direction). This can lead to a very large matrix inversion at each time step. However, the matrices to invert are going to be highly banded, and the Gear integration packages have banded matrix solvers that can be applied to invert them. So, when using the Gear package, you need to set a parameter to choose the banded matrix solver.
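
As a sketch of what that looks like in code (a uniform grid is assumed; the second-order one-sided formulas at the edges are one common way to avoid the ghost point asked about in post #14):

Code:
#Central differences in the interior, one-sided formulas at the boundaries
import numpy as np

def d_dz(y, dz):
    dy = np.empty_like(y)
    dy[1:-1] = (y[2:] - y[:-2])/(2*dz)            #central difference, interior points
    dy[0] = (-3*y[0] + 4*y[1] - y[2])/(2*dz)      #second-order one-sided at z = 0
    dy[-1] = (3*y[-1] - 4*y[-2] + y[-3])/(2*dz)   #and at z = L
    return dy

def d2_dz2(y, dz):
    d2 = np.empty_like(y)
    d2[1:-1] = (y[2:] - 2*y[1:-1] + y[:-2])/dz**2
    d2[0] = (2*y[0] - 5*y[1] + 4*y[2] - y[3])/dz**2      #one-sided second derivative at z = 0
    d2[-1] = (2*y[-1] - 5*y[-2] + 4*y[-3] - y[-4])/dz**2 #and at z = L
    return d2

An equivalent alternative is to add an explicit ghost node outside the grid and set its value from the boundary condition.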
 
  • #17
Chestermiller said:
I hope you are not writing your own ad hoc Gear integration routine. I hope you are using a packaged code like DASSL, which is available for free over the internet.

Haha, oh boy... I think you know the answer to that, unfortunately. I've been learning a lot by writing my own code. Some things are handled easily by existing packages, such as splitting complex terms into real and imaginary parts and working with a system of differential equations, but there's a certain freedom in writing it myself (e.g. easily changing methods or manipulating the grids). I've come across some other codes that I think I can integrate fairly easily with my existing code, although I'll see how things go. It will likely end up with me needing to use these packages for time and sanity's sake. I'll certainly look into DASSL; thank you for the suggestion.

Chestermiller said:
As you said, for higher-order time derivatives, you just add more variables to obtain all first derivatives. For partials with respect to z, you use a central difference formula, employing variables from neighboring grid points. If n is the number of time-derivative equations and m is the number of grid points, you are solving a set of n × m coupled ODEs in time (including coupling in the z direction). This can lead to a very large matrix inversion at each time step. However, the matrices to invert are going to be highly banded, and the Gear integration packages have banded matrix solvers that can be applied to invert them. So, when using the Gear package, you need to set a parameter to choose the banded matrix solver.

I'll definitely give it a go.
 

1. What are PDE solvers?

PDE solvers are mathematical algorithms used to solve partial differential equations (PDEs). These equations describe how a physical system changes over time and space, and PDE solvers allow scientists to simulate and predict the behavior of these systems.

2. How do PDE solvers work?

PDE solvers use numerical methods to approximate the solution to a PDE. These methods involve dividing the space and time domain into a grid of points and solving the equation at each point using mathematical formulas. The solutions at each point are then combined to create a solution for the entire system.
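
For example, a single explicit finite-difference time step for the 1D heat equation ## u_t = \alpha u_{zz} ## looks like this in Python (the grid, initial condition, and time step are made up, with the time step chosen to satisfy the usual explicit stability limit ## \alpha \Delta t/\Delta z^2 \le 0.5 ##):

Code:
#One explicit finite-difference step for u_t = alpha*u_zz on a uniform grid
import numpy as np

alpha, dt = 1.0, 0.004                    #assumed parameters
z = np.linspace(0.0, 1.0, 11)
dz = z[1] - z[0]                          #alpha*dt/dz**2 = 0.4, below the 0.5 stability limit
u = np.sin(np.pi*z)                       #assumed initial condition, u = 0 at both ends
u_new = u.copy()                          #boundary values stay fixed (Dirichlet)
u_new[1:-1] = u[1:-1] + alpha*dt/dz**2*(u[2:] - 2*u[1:-1] + u[:-2])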

3. What types of PDE solvers are there?

There are several types of PDE solvers, including finite difference methods, finite element methods, and spectral methods. Each method has its own advantages and is suited for different types of PDEs or systems.

4. What factors should be considered when choosing a PDE solver?

When choosing a PDE solver, it is important to consider the type of PDE being solved, the accuracy and speed of the solver, the stability of the solution, and the computational resources available. It is also important to understand the limitations and assumptions of the chosen solver.

5. Are there any tips for using PDE solvers effectively?

Some tips for using PDE solvers effectively include choosing the appropriate solver for the problem, carefully considering the initial and boundary conditions, using adaptive methods to improve accuracy, and validating the results with analytical solutions or experimental data.
