# Path integrals and supposed sum over paths

1. May 19, 2010

### marsel martin

If I understand correctly--a big if--the path integration method, at least when applied to plain old QM, is described as (1) every possible path the particle could take is assigned an amplitude, (2) sum up (integrate over) these amplitudes for all possible paths.

The problem I have with this is that when you look at the actual math, there seem to me to be contributions from discontinuous paths. And I don't just mean continuous-but-wildly-jagged paths -- I mean radically discontinuous ones.

To actually derive the path integral expression, you slice up the time between the initial position and final position into N time intervals, and then integrate over all possible positions at each time interval. As you increase the number of slices, you increase the number of position integrations, until the number of position integrations supposedly "goes over" into a continuum limit as N goes to infinity (which doesn't sound all that convincing to me).
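For reference, the standard time-sliced expression I have in mind (for a particle of mass $m$ in a potential $V$, with $\delta t = (t_f - t_i)/N$ and the endpoints fixed at $q_0 = q_i$, $q_N = q_f$) is

$$K(q_f, t_f; q_i, t_i) = \lim_{N \to \infty} \left( \frac{m}{2\pi i \hbar\, \delta t} \right)^{N/2} \int dq_1 \cdots dq_{N-1}\, \exp\left\{ \frac{i}{\hbar} \sum_{j=0}^{N-1} \left[ \frac{m (q_{j+1} - q_j)^2}{2\, \delta t} - V(q_j)\, \delta t \right] \right\}$$

where each intermediate $q_j$ is integrated over the whole real line, independently of the others.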

But for each N, the integrations over position are all independent of each other. Of course, for any wild set of N positions, we can always connect them with a continuous function q(t). But as we increase N, we increase the number of positions being integrated over, which necessarily brings in wilder and wilder contributions. I can't see any reason why they would all settle down into continuous functions as N goes to infinity.

I am not just talking about very wild continuous paths. What I mean is: if we look at two close time slices $q_j, t_j$ and $q_{j+1}, t_j + \delta t$, and we integrate over $q_j$ and $q_{j+1}$ independently, over the whole real line, then there are a lot of contributions from amplitudes associated with large values of $|q_{j+1} - q_j|$. That's not going to settle down into anything like continuous paths. $|q_{j+1} - q_j|$ is not going to get any smaller as $\delta t$ goes to zero.

However, the author I am reading (MacKenzie http://xxx.lanl.gov/abs/quant-ph/0004090 ) uses arguments based on variations in the action integral from the particle's classical path (cf. section 2.2.2, in which MacKenzie calculates the harmonic oscillator propagator) which, I would think, would only work if we are restricted to nice continuous variations in the path.

I don't have any problem with the strict definition of the path integral in terms of a limit. But after that the conceptual exposition sounds pretty shaky.

What's my question? I guess it is, do I understand the above correctly?

2. May 19, 2010

### xepma

You are definitely on to something here, and it is intimately related to the divergences encountered in a quantum field theory.

The problematic contributions that you describe aren't necessarily discontinuous, however. No matter how weird the behavior of the path is, it's still possible to simply 'connect the dots', so to speak. What does become problematic is the fact that these paths you describe are non-differentiable. Since the kinetic term is based on comparing the values of a field at different instants in time, you can see that non-differentiable paths can give rise to infinities.
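A quick numerical sketch of this (my own illustration, using the Euclidean/imaginary-time analogue, where the dominant paths are Brownian): the increments $|q_{j+1} - q_j|$ do shrink with the time step, but only like $\sqrt{\delta t}$, so the difference quotient entering the kinetic term diverges:

```python
import math
import random

# My own illustration, not from the thread: in the Euclidean (imaginary-time)
# version of the path integral, the dominant paths behave like Brownian
# motion, with increments dq ~ N(0, dt), i.e. |dq| ~ sqrt(dt).  The paths are
# continuous in the limit, but the naive velocity dq/dt blows up as dt -> 0,
# which is exactly what makes the kinetic term (dq/dt)^2 delicate.

random.seed(0)

def mean_abs_velocity(dt, n_slices=10000):
    """Average |dq/dt| for Brownian-type increments dq ~ N(0, dt)."""
    total = 0.0
    for _ in range(n_slices):
        dq = random.gauss(0.0, math.sqrt(dt))
        total += abs(dq / dt)
    return total / n_slices

for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    # |dq| shrinks like sqrt(dt), but |dq/dt| grows like 1/sqrt(dt)
    print(f"dt = {dt:g}: mean |dq/dt| = {mean_abs_velocity(dt):.1f}")
```

So the paths do connect up continuously, but the 'velocity' entering the kinetic term is ill-defined, which is where the infinities come from.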

To properly treat these infinities you have to introduce some way of taming them. This is called renormalization. Whether these non-differentiable paths really give rise to problematic behavior depends on the theory; sometimes you do not have to worry about them, but for most QFTs they have very real and measurable consequences. Even from a mathematical point of view you are entirely correct to state that it is not allowed to simply regard all paths as continuous (and in quantum field theory such a treatment always leads to infinities!). Ordinary quantum mechanics does not need to be regularized, and the 'naive' approach turns out to be correct (probably for some good reason). Luckily, calculations in QFT are not done in this naive manner.

On a side note, renormalization can be interpreted as introducing a smallest distance scale in your theory, called the cutoff. The cutoff basically allows you to 'disregard' the infinities due to these non-differentiable paths, as they do not 'exist' in your discretized spacetime. This does lead to a redefined (renormalized) theory in which the infinities do not appear, at the cost of 'something else' -- this 'something else' is a true quantum effect and can take the form of, for instance, a running coupling constant or an anomaly.

Renormalization (or maybe better: regularization) is a very important concept in quantum field theory from both a physical and axiomatic (mathematical) perspective.

3. May 21, 2010

### LukeD

You are right that these paths are sometimes very jagged (but they are always continuous; that is an assumption we make). However, since our integral is taken over all paths, we can partition our space of paths in a way that makes the computations easier. For very jagged, nowhere-differentiable paths, it makes sense to consider their quadratic variation (which is 0 for any continuously differentiable path). A representative path of Brownian motion, for instance, accumulates 1 unit of quadratic variation for each unit of time elapsed.
The wildly varying paths that you are probably thinking of are ones that have very large (or even unbounded) quadratic variation.

The weighting on each path in the path integral is a unit complex number. Sets of paths only contribute a significant amount to the integral if their weights add coherently (if they are unit complex numbers that point in roughly the same direction).
It turns out that paths with large quadratic variation contribute very little to the path integral of non-relativistic QM. At least I've been told so by my professors. I'm still trying to find a calculation that shows this though...
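A small numerical sketch of quadratic variation (my own illustration, with assumed step counts): for a simulated Brownian path on $[0, T]$ the quadratic variation stays near $T$ no matter how finely you slice, while for a smooth path it shrinks to zero:

```python
import math
import random

# My own illustration: quadratic variation sum_j (q_{j+1} - q_j)^2 over [0, T].
# For Brownian increments it concentrates near T under refinement; for the
# smooth path q(t) = t it vanishes like T^2 / n.

random.seed(1)
T = 1.0

def quadratic_variation(increments):
    return sum(dq * dq for dq in increments)

for n in (100, 10000):
    dt = T / n
    brownian = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    smooth = [dt] * n  # increments of the differentiable path q(t) = t
    print(f"n = {n}: Brownian QV = {quadratic_variation(brownian):.3f}, "
          f"smooth QV = {quadratic_variation(smooth):.6f}")
```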

4. May 21, 2010

### schieghoven

Hello,

I never got to first base with path integrals, because the following argument:

The path integral used in quantum theory seems to be using the Euclidean measure in N dimensions, taking the limit as N goes to infinity.

In N-dimensional Euclidean space, if the unit hypercube [0,1]^N has measure (i.e., volume) V > 0, then the smaller hypercube [0,1/2]^N must have measure 2^{-N} V, because there are 2^N disjoint copies of the smaller cube contained within the larger one. But in infinite-dimensional Euclidean space, there are infinitely many disjoint copies of the smaller hypercube contained within the larger one, so the only way to avoid a contradiction is to conclude that each of the small hypercubes, and likewise the large one, has measure zero. In other words, Euclidean measure in infinite dimensions is trivial, and any time we talk about taking the limit N to infinity we should simply read '0'.
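A toy numerical version of the scaling (just the arithmetic of the argument above, nothing deeper):

```python
# The small cube [0, 1/2]^N has volume 2^(-N), and the 2^N disjoint copies
# of it tile the unit cube exactly; as N grows, each copy's volume vanishes
# while the count of copies explodes, which is what breaks in infinite
# dimensions.
for n in (1, 10, 100):
    small_volume = 0.5 ** n
    copies = 2 ** n  # disjoint translates of the small cube inside [0,1]^N
    print(f"N = {n}: small-cube volume = {small_volume:.3e}, "
          f"copies * volume = {copies * small_volume}")
```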

Have I missed a trick here -- or misapplied the argument?

Cheers

Dave

Apologies for the lack of tex -- the parser kept messing up and mixing up the expressions, no idea why.

5. May 21, 2010

### xepma

The Boltzmann-like factor $$e^{iS/\hbar}$$, with $$S$$ the action, serves as the measure for the path integral: each path contributes a weight $$e^{iS/\hbar}$$, with $$S$$ evaluated along that path.

You have to be careful with how you define measures in infinite-dimensional spaces, but you are correct to note that the 'conventional' measure (i.e. the Lebesgue measure) does not exist in infinite-dimensional spaces. There are other measures you can define, though.

6. Jun 2, 2010

### marsel martin

7. Jun 2, 2010

### genneth

And now you understand why mathematicians regard path integrals as sophistry. Nevertheless, just like we didn't *really* know how calculus worked before we had proper analysis, we can still get perfectly correct answers with it. In fact, now we have more than one theory that produces calculus in its high-school form.

See http://www.mth.kcl.ac.uk/~streater/lostcauses.html#IX

But remember that physics is not mathematics. In physics, we call something right when it agrees with reality. Failing that is the ultimate sin, regardless of what small-minded people say about rigour.

"A proof is a repeatable experiment in persuasion." -- Jim Horning

8. Jun 2, 2010

### sumeetkd

Shankar's QM book says exactly this: the wild paths are possible, it's just that their contribution is very small, and the classical path is what we should recover with the highest probability in this method. I haven't used this much, but I guess the wild paths cancel out in the summation and hence don't contribute.
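A toy sketch of that cancellation (my own illustration, not from Shankar): summing unit-modulus weights $$e^{iS}$$ over a grid of configurations, the contributions near a stationary point of the action add coherently, while those away from it spin around and nearly cancel:

```python
import cmath

# My own illustration of stationary-phase cancellation.  The "action"
# S(q) = lam * q^2 is stationary at q = 0; phases there are nearly aligned,
# while around q = 1 they wind rapidly and the sum nearly cancels.

def phase_sum(action, qs):
    return sum(cmath.exp(1j * action(q)) for q in qs)

lam = 200.0
action = lambda q: lam * q * q
step = 0.001
near = [i * step for i in range(-50, 51)]       # around the stationary point
far = [1.0 + i * step for i in range(-50, 51)]  # away from it

print(f"|sum| near q = 0: {abs(phase_sum(action, near)):.1f}")
print(f"|sum| near q = 1: {abs(phase_sum(action, far)):.1f}")
```

This is the usual heuristic for why the classical (stationary-action) path dominates in the semiclassical limit.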

9. Jun 2, 2010

### Geigerclick

What is wrong with considering Path Integrals as non-physical tools for calculation?

10. Jun 2, 2010