# Diffusion Quantum Monte Carlo

1. Jun 19, 2011

### Derivator

Last edited by a moderator: May 5, 2017
2. Jun 21, 2011

### Timo

I'm not an expert in Monte Carlo moves that are inspired by actual dynamics. Anyway: the propagator (equation 128, not 124) gives you the probability that a particle which was at x at t = 0 is at y at t = Δt. This is the dynamics you want to simulate. The propagator is a Gaussian in the spatial coordinates, so it seems very sensible to me to simulate this Gaussian probability distribution by drawing the move from x to a random new y from exactly that Gaussian (side note: you can also propose moves from a different probability distribution, but then you'd have to reject some of those proposals). The other factor in the propagator is treated in an extra step (I think the missing sentence is "If q is greater than 1 the walker survives"), and in a less direct manner.
I'm not sure if I really understood your question, though. It seems a bit strange to ask why one simulates a Gaussian via a Gaussian (even though it is a good question once you dig a bit deeper), so maybe I missed your point.
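The diffusion-plus-branching step described above can be sketched in a few lines. This is only my own toy illustration for a 1D harmonic oscillator: the potential, the trial energy `E_T`, and the branching rule `int(q + u)` (which makes q > 1 walkers survive and possibly multiply) are standard textbook choices, not taken from the text being discussed:

```python
import numpy as np

def V(x):
    # Illustrative potential: 1D harmonic oscillator (hbar = m = omega = 1).
    return 0.5 * x**2

def dmc_step(walkers, dt, E_T, rng):
    """One diffusion + branching step on an array of walker positions."""
    # Diffusion: sample the Gaussian part of the propagator directly,
    # i.e. displace each walker by a normal step with variance dt.
    new = walkers + rng.normal(0.0, np.sqrt(dt), size=walkers.shape)
    # Branching: the remaining factor of the propagator is a weight q.
    # Each walker makes int(q + u) copies of itself (u uniform in [0, 1)),
    # so for q > 1 the walker survives and may multiply.
    q = np.exp(-dt * (0.5 * (V(walkers) + V(new)) - E_T))
    copies = (q + rng.uniform(size=walkers.shape)).astype(int)
    return np.repeat(new, copies)

rng = np.random.default_rng(0)
walkers = rng.normal(0.0, 1.0, 500)
E_T = 0.5  # exact ground-state energy, so the population stays roughly stable
for _ in range(200):
    walkers = dmc_step(walkers, dt=0.01, E_T=E_T, rng=rng)
```

In a real calculation E_T would be adjusted on the fly to keep the walker population stable; here it is fixed at the known exact value for simplicity.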

Last edited: Jun 21, 2011
3. Jun 21, 2011

### Derivator

Well, I think you did. I was not aware that the propagator gives the probability that a particle which was at x at t = 0 is at y at t = Δt.

If I got it right, the propagator G is sampled via the Monte Carlo walkers (I assume one uses the Metropolis algorithm), and in a second step the integral right below formula 128 is calculated (via Monte Carlo integration?). I assume that the initial density function \rho(y,t) in this integral is arbitrary?

Last edited: Jun 21, 2011
4. Jun 25, 2011

### Timo

Hi Derivator,

sorry for the late reply. I don't have much time to invest in forum discussions, so I can't offer you super thought-through comments. Still, I should at least give some feedback, I think:
I'd rather call it sampling a process according to the dynamics given by the propagator.
That's not said in the part of the text you quoted, I think. The formula directly below (128) is merely the claim that the error (whatever that may be in detail) scales quadratically with the time step (note that you sample small time steps).
In the theory of Monte Carlo simulations, there are two conditions which (in theory) guarantee that a process started from any arbitrary starting state will converge towards equilibrium, and from there on sample states according to the equilibrium distribution: ergodicity and detailed balance. I don't understand the specific process you are describing well enough to comment on them in this particular case. But I think you can assume that this is supposed to sample an equilibrium case, and that the author of the text knows how to construct Monte Carlo algorithms => the starting state probably doesn't matter.
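The point that the starting state doesn't matter can be seen in a toy Metropolis sampler (my own minimal example, not tied to the specific DMC algorithm in the text): the proposal is symmetric and the acceptance rule enforces detailed balance, so even a chain started absurdly far from equilibrium ends up sampling the target distribution, here a standard normal exp(-x²/2):

```python
import numpy as np

def metropolis(n_steps, x0, step=1.0, rng=None):
    """Metropolis chain targeting a standard normal distribution."""
    if rng is None:
        rng = np.random.default_rng(1)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)  # symmetric proposal (ergodic)
        # Detailed balance: accept with probability min(1, p(y)/p(x)),
        # where p(x) ~ exp(-x^2/2).
        if rng.uniform() < np.exp(0.5 * (x**2 - y**2)):
            x = y
        samples.append(x)
    return np.array(samples)

# Start deliberately far from equilibrium.
chain = metropolis(20000, x0=50.0)
equilibrated = chain[5000:]  # discard burn-in
```

After the burn-in, the sample mean and variance come out close to 0 and 1 regardless of the starting point x0, which is exactly the "starting state probably doesn't matter" argument.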