Struggles With The Continuum – Part 2

Last time we saw that nobody yet knows if Newtonian gravity, applied to point particles, truly succeeds in predicting the future. To be precise: for four or more particles, nobody has proved that almost all initial conditions give a well-defined solution for all times!

The problem is related to the continuum nature of space: as particles get arbitrarily close to each other, an infinite amount of potential energy can be converted to kinetic energy in a finite amount of time.

I left off by asking if this problem is solved by more sophisticated theories. For example, does the ‘speed limit’ imposed by special relativity help the situation? Or might quantum mechanics help, since it describes particles as ‘probability clouds’, and puts limits on how accurately we can simultaneously know both their position and momentum?

We begin with quantum mechanics, which indeed does help.

The quantum mechanics of charged particles

Few people spend much time thinking about ‘quantum celestial mechanics’—that is, quantum particles obeying Schrödinger’s equation, that attract each other gravitationally, obeying an inverse-square force law. But Newtonian gravity is a lot like the electrostatic force between charged particles. The main difference is a minus sign, which makes like masses attract, while like charges repel. In chemistry, people spend a lot of time thinking about charged particles obeying Schrödinger’s equation, attracting or repelling each other electrostatically. This approximation neglects magnetic fields, spin, and indeed anything related to the finiteness of the speed of light, but it’s good enough to explain quite a bit about atoms and molecules.

In this approximation, a collection of charged particles is described by a wavefunction ##\psi##, which is a complex-valued function of all the particles’ positions and also of time. The basic idea is that ##\psi## obeys Schrödinger’s equation
$$ \frac{d \psi}{dt} = – i H \psi $$
where ##H## is an operator called the Hamiltonian, and I’m working in units where ##\hbar = 1##.

Does this equation succeed in predicting ##\psi## at a later time given ##\psi## at time zero? To answer this, we must first decide what kind of function ##\psi## should be, what concept of derivative applies to such functions, and so on. These issues were worked out by von Neumann and others starting in the late 1920s. It required a lot of new mathematics. Skimming the surface, we can say this.

At any time, we want ##\psi## to lie in the Hilbert space consisting of square-integrable functions of all the particles’ positions. We can then formally solve Schrödinger’s equation as
$$ \psi(t) = \exp(-i t H) \psi(0) $$
where ##\psi(t)## is the solution at time ##t##. But for this to really work, we need ##H## to be a self-adjoint operator on the chosen Hilbert space. The correct definition of ‘self-adjoint’ is a bit subtler than what most physicists learn in a first course on quantum mechanics. In particular, an operator can be superficially self-adjoint—the actual term for this is ‘symmetric’—but not truly self-adjoint.
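
To make this concrete, here is a minimal numerical sketch of my own (a single particle on a one-dimensional grid with an arbitrary softened attractive potential; not part of the argument above): once space is discretized, ##H## becomes a finite Hermitian matrix and the formal solution ##\psi(t) = \exp(-i t H) \psi(0)## can be evaluated literally as a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# One particle on a 1D grid, in units with hbar = m = 1; H = K + V as finite matrices.
N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy: K = -(1/2) d^2/dx^2, discretized as a second-difference matrix.
K = -0.5 * (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
            + np.diag(np.ones(N - 1), -1)) / dx**2

# A softened attractive Coulomb-like potential (softening avoids 1/0 on the grid).
V = np.diag(-1.0 / np.sqrt(x**2 + 1.0))
H = K + V                                # Hermitian, so expm(-i t H) is unitary

psi0 = np.exp(-(x - 5.0)**2)             # arbitrary Gaussian initial state
psi0 = psi0 / np.linalg.norm(psi0)

psi_t = expm(-1j * 2.0 * H) @ psi0       # evolve to t = 2
print(np.linalg.norm(psi_t))             # stays 1: the evolution is unitary
```

The printed norm stays at 1, as it must when the evolution is unitary. The subtleties discussed here are about whether this still works when ##H## is a genuinely unbounded operator on an infinite-dimensional Hilbert space rather than a finite matrix.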

In 1951, based on earlier work of Rellich, Kato proved that ##H## is indeed self-adjoint for a collection of nonrelativistic quantum particles interacting via inverse-square forces. So, this simple model of chemistry works fine. We can also conclude that ‘celestial quantum mechanics’ would dodge the nasty problems that we saw in Newtonian gravity.

The reason, simply put, is the uncertainty principle.

In the classical case, bad things happen because the energy is not bounded below. A pair of classical particles attracting each other with an inverse square force law can have arbitrarily large negative energy, simply by being very close to each other. Since energy is conserved, if you have a way to make some particles get an arbitrarily large negative energy, you can balance the books by letting others get an arbitrarily large positive energy and shoot to infinity in a finite amount of time!
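
In symbols: for two particles with masses ##m_1, m_2##, positions ##q_1, q_2## and momenta ##p_1, p_2##, the total energy is
$$ E = \frac{|p_1|^2}{2m_1} + \frac{|p_2|^2}{2m_2} - \frac{G m_1 m_2}{|q_1 - q_2|} , $$
and at any fixed momenta this drops below every bound as ##|q_1 - q_2| \to 0##. So conservation of ##E## puts no ceiling on how much kinetic energy can be extracted by letting two particles approach each other.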

When we switch to quantum mechanics, the energy of any collection of particles becomes bounded below. The reason is that to make the potential energy of two particles large and negative, they must be very close. Thus, their difference in position must be very small. In particular, this difference must be accurately known! Thus, by the uncertainty principle, their difference in momentum must be very poorly known: at least one of its components must have a large standard deviation. This in turn means that the expected value of the kinetic energy must be large.
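
Here is the usual back-of-the-envelope version of this argument (only a heuristic; the real proof is the Kato–Rellich machinery described next). If the separation of two particles is localized to within ##\Delta x##, the uncertainty principle forces a relative momentum spread of order ##1/\Delta x##, so roughly
$$ \langle H \rangle \;\gtrsim\; \frac{1}{2m\,(\Delta x)^2} - \frac{k}{\Delta x} , $$
where ##m## is the reduced mass and ##k## is the strength of the attraction, in units with ##\hbar = 1##. The right-hand side has a finite minimum, about ##-mk^2/2## at ##\Delta x \sim 1/(mk)##, so squeezing the particles together cannot drive the energy to ##-\infty##. This is essentially the estimate behind the ground-state energy of hydrogen.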

This must all be made quantitative, to prove that as particles get close, the uncertainty principle provides enough positive kinetic energy to counterbalance the negative potential energy. The Kato–Lax–Milgram–Nelson theorem, a refinement of the original Kato–Rellich theorem, is the key to understanding this issue. The Hamiltonian ##H## for a collection of particles interacting by inverse square forces can be written as
$$ H = K + V $$
where ##K## is an operator for the kinetic energy and ##V## is an operator for the potential energy. With some clever work one can prove that for any ##\epsilon > 0##, there exists ##c > 0## such that if ##\psi## is a smooth normalized wavefunction that vanishes at infinity and at points where particles collide, then
$$ | \langle \psi , V \psi \rangle | \le \epsilon \langle \psi, K\psi \rangle + c. $$
Remember that ##\langle \psi , V \psi \rangle## is the expected value of the potential energy, while ##\langle \psi, K \psi \rangle## is the expected value of the kinetic energy. Thus, this inequality is a precise way of saying how kinetic energy triumphs over potential energy.
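
One can watch this numerically. The following is a small illustrative computation of my own (units with ##\hbar = m = 1##, an attractive ##-1/r## potential, and a Gaussian trial wavefunction of width ##\sigma## for the separation of two particles): as ##\sigma## shrinks, the expected kinetic energy grows like ##1/\sigma^2## while the expected potential energy only grows in magnitude like ##1/\sigma##, which is why an inequality of the above form has a chance to hold.

```python
import numpy as np

# Relative coordinate of two particles attracting via -1/r, in units with hbar = m = 1.
# For a spherically symmetric state psi, write psi = u(r)/r up to normalization; then
#   <K> = (1/2) * integral |u'(r)|^2 dr   and   <V> = -integral |u(r)|^2 / r dr.
r = np.linspace(1e-6, 40.0, 400_000)
dr = r[1] - r[0]

for sigma in (1.0, 0.1, 0.01):
    u = r * np.exp(-r**2 / (4 * sigma**2))          # Gaussian of width sigma
    u /= np.sqrt(np.sum(u**2) * dr)                 # normalize: integral |u|^2 dr = 1
    K = 0.5 * np.sum(np.gradient(u, dr)**2) * dr    # grows like 3/(8 sigma^2)
    V = -np.sum(u**2 / r) * dr                      # grows in magnitude like 0.8/sigma
    print(f"sigma = {sigma:5.2f}   <K> = {K:9.2f}   <V> = {V:8.2f}   <H> = {K + V:8.2f}")
```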

By taking ##\epsilon = 1##, it follows that the Hamiltonian is bounded below on such
states ##\psi##:
$$ \langle \psi , H \psi \rangle \ge -c . $$
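
Spelled out, that step is just
$$ \langle \psi , H \psi \rangle \;=\; \langle \psi , K \psi \rangle + \langle \psi , V \psi \rangle \;\ge\; \langle \psi , K \psi \rangle - \big( \langle \psi , K \psi \rangle + c \big) \;=\; -c , $$
where the middle inequality uses the bound above with ##\epsilon = 1##.
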
But the fact that the inequality holds even for smaller values of ##\epsilon## is the key to showing ##H## is ‘essentially self-adjoint’. This means that while ##H## is not self-adjoint when defined only on smooth wavefunctions that vanish at infinity and at points where particles collide, it has a unique self-adjoint extension to some larger domain. Thus, we can unambiguously take this extension to be the true Hamiltonian for this problem.

To understand what a great triumph this is, one needs to see what could have gone wrong! Suppose space had an extra dimension. In 3-dimensional space, Newtonian gravity obeys an inverse square force law because the area of a sphere is proportional to its radius squared. In 4-dimensional space, the force obeys an inverse cube law:
$$ F = -\frac{Gm_1 m_2}{r^3} . $$
Using a cube instead of a square here makes the force stronger at short distances, with dramatic effects. For example, even for the classical 2-body problem, the equations of motion no longer ‘almost always’ have a well-defined solution for all times. For an open set of initial conditions, the particles spiral into each other in a finite amount of time!

Hyperbolic spiral – a fairly common orbit in an inverse cube force.
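
To see this collapse concretely, here is a minimal numerical sketch of my own (the force constant, initial conditions, and cutoff are arbitrary choices): integrate the planar motion of the relative coordinate under an attractive inverse cube force and watch the separation reach zero at a finite time.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relative coordinate of two bodies with an attractive inverse cube force:
# the force has magnitude k/r^3, so the acceleration vector is -k (x, y) / r^4.
k = 1.0

def rhs(t, s):
    x, y, vx, vy = s
    r2 = x * x + y * y
    a = -k / r2**2                       # -k / r^4; multiplied by (x, y) below
    return [vx, vy, a * x, a * y]

# With this data the angular momentum is too small to hold the bodies apart.
s0 = [1.0, 0.0, 0.0, 0.6]

def collided(t, s):
    return np.hypot(s[0], s[1]) - 1e-3   # stop when the separation is essentially zero
collided.terminal = True

sol = solve_ivp(rhs, (0.0, 50.0), s0, events=collided, rtol=1e-9, atol=1e-12)
print("separation hit zero at t =", sol.t_events[0][0])
```

With these particular numbers the printed collision time comes out around ##t \approx 1.25##: the spiral really does end in a finite amount of time, which is exactly the behavior that ruins ‘almost always’ well-defined solutions.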

The quantum version of this theory is also problematic. The uncertainty principle is not enough to save the day. The inequalities above no longer hold: kinetic energy does not triumph over potential energy. The Hamiltonian is no longer essentially self-adjoint on the set of wavefunctions that I described.

In fact, this Hamiltonian has infinitely many self-adjoint extensions! Each one describes different physics: namely, a different choice of what happens when particles collide. Moreover, when ##G## exceeds a certain critical value, the energy is no longer bounded below.
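
Where the critical value comes from can be seen in a standard toy version of the problem (the reduced radial operator on a half-line, not the full many-body Hamiltonian). The relevant operator has the form
$$ -\frac{d^2}{dr^2} - \frac{\alpha}{r^2} \qquad \text{on } (0, \infty) , $$
and Hardy’s inequality
$$ \int_0^\infty |u'(r)|^2 \, dr \;\ge\; \frac{1}{4} \int_0^\infty \frac{|u(r)|^2}{r^2} \, dr $$
shows that the energy is bounded below precisely when ##\alpha \le 1/4##. For stronger attraction the particle can ‘fall to the center’ and the energy is unbounded below.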

The same problems afflict quantum particles interacting by the electrostatic force in 4d space, as long as some of the particles have opposite charges. So, chemistry would be quite problematic in a world with four dimensions of space.

With more dimensions of space, the situation becomes even worse. In fact, this is part of a general pattern in mathematical physics: our struggles with the continuum tend to become worse in higher dimensions. String theory and M-theory may provide exceptions.

Next time we’ll look at what happens to point particles interacting electromagnetically when we take special relativity into account. After that, we’ll try to put special relativity and quantum mechanics together!

For more

For more on the inverse cube force law, see:

• John Baez, The inverse cube force law, Azimuth, 30 August 2015.

It turns out Newton made some fascinating discoveries about this law in his Principia; it has remarkable properties both classically and in quantum mechanics.

The hyperbolic spiral is one of 3 kinds of orbits possible in an inverse cube force; for the others see:

• Cotes’s spiral, Wikipedia.

The picture of a hyperbolic spiral was drawn by Anarkman and Pbroks13 and placed on Wikicommons under a Creative Commons Attribution-Share Alike 3.0 Unported license.

 


I’m a mathematical physicist. I work at the math department at U. C. Riverside in California, and also at the Centre for Quantum Technologies in Singapore. I used to do quantum gravity and n-categories, but now I mainly work on network theory and the Azimuth Project, which is a way for scientists, engineers and mathematicians to do something about the global ecological crisis.
39 replies
  1. Telemachus
    Telemachus says:

    Is there any previous work, any attempt on trying to discretize space, or even space time and trying to work the laws of physics, dynamics, classical mechanics, quantum mechanics, and relativity in such a discretized space? I suppose that in the limit of the grid spacing tending to zero one could get the usual mechanics in continuum space or space-time, but I would like to know what kind of physical predictions the discretized space would make, and the possibility to observe experimentally some of those predictions.

  2. FabioFumi
    FabioFumi says:

    Hi – I'll be a bit naive here, forgive me, but couldn't it be that our mathematics is not representing reality, but just a model of it? Nature could be not-continuous by itself. I cannot think of a "real" particle being "infinitely close" to another one… And even quantization, or discretizations used in computerized models, aren't real themselves, but just convenient models of nature, useful to make good predictions only up to a certain "extent". I'm afraid we still don't have mathematical models that really represent nature completely (will we ever?) and the difficulties in having converging solutions under certain circumstances might be the result of using inadequate mathematics. Sorry if my point is only loosely defined, but I wonder if this or similar arguments have ever been raised before (as I guess). Thanks John for the excellent presentation of these concepts, anyhow.

  3. glaucousNoise
    glaucousNoise says:

    It doesn't seem to me that anything is ever continuous; continuous is always an approximation, even in classical mechanics. Take a baseball. Suppose you integrate numerically the simple case where there is just gravity, no air resistance. Can you really make your timestep arbitrarily small? How about if you integrated at a femtosecond timescale? In principle you could obtain the entire motion, but in practice each step would, individually, produce no information about the system, since the change in position would be, relative to the characteristic scales of the problem, zero during each step. There is arguably a minimum time step, which is the smallest step that produces a nonzero change in state relative to the characteristic scales.

    Space is also always discretized in terms of the characteristic scales of the system.

  4. nagamani r
    nagamani r says:

    I'm new here, but I think if you keep breaking down physical objects, you will reach a point where breaking it down further will yield no information, but it can still be theoretically broken down. Would this be discrete or continuous?

  5. martinbn
    martinbn says:

    And he has a point! The real numbers are a bit more man-made than the integers. After all, from the rationals to the reals there is a choice: you can complete them to get the reals or any of the p-adic numbers. Maybe one shouldn't take the reals over any of the p-adics. Maybe the way to go is to work with the adeles.

  6. fizzle
    fizzle says:

    Is the problem with your Newtonian example due to the implied instantaneous-action-at-a-distance in the fundamental equation (which means the result isn't fully conservative)? At some point you have to modify the equations to allow for changes in the gravitational system to propagate, like Heaviside did in the 1890s, and then you get significantly different results in extreme cases.

    One final note. Your reply to the God-made/Man-made joke was uncomfortable but you have to remember that discussing the continuum is really the physics equivalent to a "religious" question. I'm an atheist but when I step back, ignore all the equations/models/theories, and look around … I wonder "what in the hell is all this stuff anyway, none of it makes sense". It seems like the only thing we can do is perpetually oscillate between experimental and theoretical advances, never reaching a "ta da, we're done!" moment.

  7. Hendrik Boom
    Hendrik Boom says:

    The closest I've encountered about grains of space is in loop quantum gravity where space seems to be broken into tiny pieces, maybe a googol of them in a teaspoon.  But as I understand it, the pieces are not arranged in a regular grid; they are constantly rearranging themselves, and all their possible arrangements get quantum-superposed so they are all smudged together into something that feels kind of continuous. Is this image even approximately a correct view of the theory?

  8. Jimster41
    Jimster41 says:

    [I]”At any time, we want ψ to lie in the Hilbert space consisting of square-integrable functions of all the particle’s positions. We can then formally solve Schrödinger’s equation as
    ψ(t)=exp(−itH)ψ(0)

    where ψ(t) is the solution at time t. But for this to really work, we need H to be a self-adjoint operator on the chosen Hilbert space. The correct definition of [URL=’https://en.wikipedia.org/wiki/Self-adjoint_operator#Self-adjoint_operators’]‘self-adjoint’[/URL] is a bit subtler than what most physicists learn in a first course on quantum mechanics. In particular, an operator can be superficially self-adjoint—the actual term for this is [URL=’https://en.wikipedia.org/wiki/Self-adjoint_operator#Symmetric_operators’]‘symmetric’[/URL]—but not truly self-adjoint.”[/I]

    Is this because we want the movement of a system in that space to be perfectly reversible? Is Riemannian continuity (if that’s the right term) really about wanting to assume things are equally able or likely to go in any direction in their phase-space from any point?

    Basically I am confused here by similar but different terms “Continuity”, “Reversibility”, “Symmetry”, “Commutativity”, “Hermitian-ness” and “Self-Adjoint-ness”. I think the last two may be synonymous, but are these terms just stronger/weaker versions of the same idea, precluding any preferred “direction of the grain” in the phase space?

    Thanks for the awesome articles by the way.

  9. john baez
    john baez says:

    Thanks!

    I suggest that you look up “Continuity”, “Reversibility”, “Symmetry”, “Commutativity”, “Hermitianness” and “Self-Adjointness” on Wikipedia. They all mean very different things – except for the last two, which are closely related. In my post I was using “symmetric” in a specific technical sense, closely akin to “hermitian”, which is quite different from the general concept of “symmetry” in physics. That’s why I included a link to the definition.

    Learning the precise definitions of technical terms is crucial to learning physics. You’re saying a lot of things that don’t make sense, I’m afraid, so I can’t really comment on most of them. That sounds rude, but I’m really hoping a bit of honesty may help here.

    Anyway: we need the Hamiltonian H to be self-adjoint for the time evolution operator exp(-itH) to be unitary. And we need time evolution to be unitary for probabilities to add up to 1, as they should.
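
    A tiny numerical illustration of that chain of implications, with a random Hermitian matrix standing in for H (a sketch of my own, just finite-dimensional linear algebra):

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    H = (A + A.conj().T) / 2                       # Hermitian "Hamiltonian"

    U = expm(-1j * 0.7 * H)                        # time evolution operator for t = 0.7
    print(np.allclose(U.conj().T @ U, np.eye(5)))  # True: U is unitary

    psi = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    psi /= np.linalg.norm(psi)
    print(np.linalg.norm(U @ psi))                 # 1.0: total probability is conserved
    ```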

  10. john baez
    john baez says:

    Is there any previous work, any attempt on trying to discretize space, or even space time and trying to work the laws of physics, dynamics, classical mechanics, quantum mechanics, and relativity in such a discretized space?

    Sure, lots.

    I suppose that in the limit of the grid spacing tending to zero one could get the usual mechanics in continuum space or space-time…

    Yes, for example when they compute the mass of the proton, they discretize spacetime and use lattice gauge theory to calculate the answer – nobody knows any other practical way. But you try to make the grid spacing small to get a good answer.

    … but I would like to know what kind of physical predictions the discretized space would make, and the possibility to observe experimentally some of those predictions.

    Wouldn’t the “nice” properties of space be lost, like homogeneity, isotropy, etc?

    You’d mainly tend to lose isotropy – that is, rotation symmetry. Homogeneity – that is, translation symmetry – will still hold for discrete translations that map your grid to itself. People have looked for violations of isotropy, but perhaps more at large scales than microscopic scales.

    What seems cool to me is how cleverly chosen lattice models of fluid dynamics can actually do a darn good job of getting approximate rotation symmetry. For example, in 2 dimensions a square lattice is no good, but a hexagonal lattice is good, thanks to some nice math facts.

  11. Telemachus
    Telemachus says:

    Thanks for your answer John!

    What happens with, for example, angular momentum conservation when isotropy is lost? What would Newtonian physics look like in this discretized space? Is there any need of a reformulation of Newton’s laws, or can you recover our macroscopic physics even with a discretized space and the usual laws, for example Newton’s second law with discrete variables (without the need of taking the grid spacing tending to zero)? I guess that inertia would hold, as it is related to space homogeneity. And I suppose that people working in numerical methods of physics actually use this kind of discretization every time they make for example a Riemann integral, or use finite difference methods. Is this equivalent to discretizing space in Newtonian physics? And is there any departure from the predictions in continuum space, or does everything work in the same way?

    Perhaps I’m being too incisive about this; after all, the post is about the struggles with the continuum and not about the possibility of a discretized space itself. But I thought of it as complementary. If space is not a continuum it has to be discrete, right? Or are there other possibilities here?

  12. john baez
    john baez says:

    What happens with for example angular momentum conservation when isotropy is lost?

    Typically it goes away, but if you’re clever you can arrange to preserve it, by choosing particle interactions that conserve it.

    What would Newtonian physics look like in this discretized space?

    The link I sent you is all about ‘lattice Boltzmann gases’, which have particles moving on a lattice and bouncing off each other when they collide. This is one of the earlier papers on this subject, by Steve Wolfram.

    On a square lattice in 2 dimensions, you can easily detect macroscopic deviations from isotropy in the behavior of such a gas. On a hexagonal lattice the deviations are much subtler, because hexagonal symmetry implies complete rotation invariance for a number of tensors that are important in fluid flow.

    Is there any need of a reformulation of Newton’s laws, or can you recover our macroscopic physics even with a discretized space and the usual laws, for example Newton’s second law with discrete variables (without the need of taking the grid spacing tending to zero)?

    There’s a rather beautiful harmonic oscillator where the particle moves in discrete time steps on a 2d square lattice, but I’m pretty sure the inverse square law is going to be ugly. As you said, you can just think of the lattice as the discretization imposed in numerical analysis by working with numbers that have only a certain number of digits. But in general this is pretty ugly: there’s nothing especially nice about physics where ’roundoff errors’ gradually violate conservation laws.

    I find it more interesting to look for discrete models where you can still use a version of Noether’s theorem to get exact conservation laws from symmetries. I had a grad student who wrote his PhD thesis on this:
    [LIST]
    [*]James Gilliam, [URL=’http://math.ucr.edu/home/baez/thesis_gilliam.pdf’]Lagrangian and Symplectic Techniques in Discrete Mechanics[/URL], Ph.D. thesis, U. C. Riverside, 1996.
    [/LIST]
    and we published a paper about it:
    [LIST]
    [*]James Gilliam and John Baez, [URL=’http://math.ucr.edu/home/baez/ca.pdf’]An algebraic approach to discrete mechanics[/URL], Lett. Math. Phys. 31 (1994), 205-212.
    [/LIST]
    However, while it’s fun, I don’t think it’s the right way to go in physics.

  13. john baez
    john baez says:

    Hi – I’ll be a bit naive here, forgive me, but couldn’t it be that our mathematics is not representing reality, but just a model of it? Nature could be not-continuous by itself.

    We use mathematics to model nature. We find ourselves in a mysterious world and try to understand it – we don’t really know what it’s like. But the discrete and the continuous are abstractions we’ve devised, to help us make sense of things. The number 473 is just as ‘man-made’ as the numbers π, i, j and k.

    The universe may be fundamentally mathematical, it may not be – we don’t know. If it is, what kind of mathematics does it use? We don’t know that either. These questions are too hard for now. People have been arguing about them at least since 500 BC when Pythagoras claimed all things are generated from numbers.

    If we ever figure out laws of physics that fully describe what we see, we’ll be in a better position to tackle these hard questions. For now I find it more productive to examine the most successful theories of physics and see what issues they raise.

  14. john baez
    john baez says:

    It doesn’t seem to me that anything is ever continuous; continuous is always an approximation, even in classical mechanics. Take a baseball. Suppose you integrate numerically the simple case where there is just gravity, no air resistance.

    When you speak of numerical integration you’re no longer speaking about a baseball: you’re speaking about a computer program. Of course if you do numerical integration on a computer using time steps, there will be time steps.

    Space is also always discretized in terms of the characteristic scales of the system.

    I have never seen any actual discretization of space, unless you’re talking about man-made structures like pixels on the computer screens we’re looking at now.

  15. Telemachus
    Telemachus says:

    I asked this question to myself last night; perhaps John knows the answer (great responses btw, thank you John, I had a final exam last Friday, so I didn’t have time to read all the details of your responses, but I did yesterday). The question that arose to me was: if space-time is actually discrete, do you know how small the grid spacing should be? According to what we actually know of the laws of physics, is there any upper and/or lower bound for the space-time grid spacing?

    Many people believe that the space grid is Planck’s constant. But I don’t think there is really any physical reason to believe so: what is discrete at the Planck length is phase space, because of the uncertainty principle, not space-time (actually, Planck’s constant has units of action, not length). Space-time is always treated as a continuum in physics. And I didn’t know until now about the struggles with the continuum. But usually the continuum hypothesis works, and gives reliable physical predictions in most known physics.

  16. glaucousNoise
    glaucousNoise says:

    When you speak of numerical integration you’re no longer speaking about a baseball: you’re speaking about a computer program. Of course if you do numerical integration on a computer using time steps, there will be time steps.

    I have never seen any actual discretization of space, unless you’re talking about man-made structures like pixels on the computer screens we’re looking at now.

    Wait, you think your point particle with no air resistance is a baseball? It seems to me that neither the differential equation nor the difference equations found on my computer are baseballs, just models of baseballs.

    Space is clearly discrete for the baseball model. A femtometer is zero compared to the typical flight distance, so it is below some minimal distance. Worse, your continuum approximation breaks down for distances on the scale of the atoms which make up the baseball.

    It seems to me that much of the confusion regarding the continuum arises from assuming that "infinitesimals" and "infinity" have meaningful interpretations outside of notions of characteristic scale in a problem! There is only one kind of infinity, and that's what happens when you take a number significantly larger than the largest scale of your system, and such a number is indeed finite. The distance from here to the Andromeda galaxy is "infinite" compared with the typical baseball flight distance; a wavefunction of an electron prepared in a lab is zero when evaluated at the Andromeda galaxy, since this is much larger than, say, the width of the potential.

  17. Nugatory
    Nugatory says:

    Many people believe that the space grid is Planck’s constant. But I don’t think there is really any physical reason to believe so

    And indeed we have another recent Insights article on exactly that question: [URL]https://www.physicsforums.com/insights/hand-wavy-discussion-planck-length/[/URL]

  18. Nugatory
    Nugatory says:

    How about if you integrated at a femtosecond timescale? In principle you could obtain the entire motion, but in practice each step would, individually, produce no information about the system.

    If you can obtain the entire motion “in principle” and only the practical difficulties of getting measurements at the desired resolution stops us from doing it “in practice”, then the underlying system is (pretty much by definition) continuous at that scale.

  19. glaucousNoise
    glaucousNoise says:

    If you can obtain the entire motion “in principle” and only the practical difficulties of getting measurements at the desired resolution stops us from doing it “in practice”, then the underlying system is (pretty much by definition) continuous at that scale.

    Well, if the universe has a finite age/size, this would presumably put realistic limits on what can or cannot be computed, but I digress from this point: apart from the fact that your continuum model breaks down at the femtosecond timescale, if you are interested in the ballistics of the baseball, no information about the ballistics is stored at this scale; you’d have to run through thousands of meaningless individual steps before such information began to emerge.

    The question is, if at a certain resolution no information about the system is generated, can one really argue that any time has elapsed at all? Time is related to how a system changes, which is a matter of what you want to learn about the system. If I’m interested in macroscopic details, a glass of water in thermodynamic equilibrium, while containing many changes at spatial and temporal resolutions I do not care about, is unchanging on the macroscopic scale, and is at this scale time independent.

    The whole point I’m meandering towards here is that information, I think, plays an enormous role in this problem, and was not really discussed in this post.

