Argument for discreteness of spacetime

In summary: Smolin argues that because spacetime cannot be described by continuous classical fields with infinitely many degrees of freedom, it must be discrete. This is not really a good argument as stated; one could just as easily argue only that no area smaller than the Planck area can be measured, which is not the same as spacetime being discrete. The argument is not universally accepted, and the thread explores the holes in it.
  • #1
bcrowell
The following is a paraphrase of an argument for the discreteness of spacetime, made by Smolin in his popular-level book Three Roads to Quantum Gravity. The Bekenstein bound says there's a limit on how much information can be stored within a given region of space. If spacetime could be described by continuous classical fields with infinitely many degrees of freedom, then there would be no such limit. Therefore spacetime is discrete.

I gather that this argument is far from being universally accepted. Where are the holes in the argument?

I would appreciate nontechnical answers. My background is that I did my PhD in low-energy nuclear physics, and I have a pretty good understanding of general relativity, but I don't have any technical expertise in quantum gravity. My only knowledge of quantum gravity comes from popular-level books like Smolin's and Susskind's.
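
For a sense of scale, the bound being paraphrased is usually written S ≤ 2πkRE/(ħc). A rough numerical sketch in Python, with a 1 kg, 1 m system chosen arbitrarily as an example:

```python
# Rough numerical illustration of the Bekenstein bound S <= 2*pi*k*R*E/(hbar*c),
# expressed in bits. The 1 kg mass and 1 m radius are arbitrary example values.
import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
k_B  = 1.380649e-23      # J/K

def bekenstein_bound_bits(radius_m, energy_J):
    """Upper bound on information (in bits) inside a sphere of given radius and energy."""
    S = 2 * math.pi * k_B * radius_m * energy_J / (hbar * c)   # entropy in J/K
    return S / (k_B * math.log(2))                             # convert to bits

M = 1.0                      # kg (example)
R = 1.0                      # m  (example)
E = M * c**2                 # rest energy in joules
print(f"Bekenstein bound: ~{bekenstein_bound_bits(R, E):.2e} bits")
# ~2.6e43 bits -- finite, which is the point of the paraphrased argument.
```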
 
  • #2
I think this is the argument that gravity cannot be a "normal" quantum field theory, and that some sort of "holography" should hold, not that spacetime is discrete. See, e.g., sections 2 and 3 of http://arxiv.org/abs/gr-qc/9508064 , which include a short critique at the end noting that the hoop conjecture is used; I think to this day there is no formal statement of the hoop conjecture.
 
  • #3
Let's start from a totally different problem: in continuous spacetime (down to arbitrarily small length scales) quantum fluctuations of fields could form virtual black holes; these processes would spoil any quantum field theory.

Think about quantum fluctuations of size L, where L can become arbitrarily small. Let L(E) be the Compton wavelength of an object of energy E; L(E) can be considered the "typical size" of this object. If the object's energy E increases, its size L(E) decreases, and the object eventually becomes smaller than its own Schwarzschild radius; according to GR it eventually collapses and forms a black hole. If one equates the Schwarzschild radius with the Compton wavelength, one finds that this process will happen at the Planck energy.

This argument does not mean that space necessarily becomes discrete below the Planck length, but that the usual formalism of quantum field theory and GR must be replaced by something different.
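
For a rough check of this heuristic (ignoring all factors of order one, and treating the use of the reduced Compton wavelength and the factor of 2 in the Schwarzschild radius as conventions rather than essentials):

```python
# Heuristic: set Compton wavelength hbar/(m*c) equal to Schwarzschild radius 2*G*m/c^2
# and solve for the mass/energy. Factors of order one are ignored, as in the argument above.
import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

m_crossover = math.sqrt(hbar * c / (2 * G))          # kg, where the two radii coincide
E_crossover = m_crossover * c**2                     # J
E_planck    = math.sqrt(hbar * c**5 / G)             # standard Planck energy, J

GeV = 1.602176634e-10                                # J per GeV
print(f"crossover energy : {E_crossover/GeV:.2e} GeV")
print(f"Planck energy    : {E_planck/GeV:.2e} GeV")
# Both come out around 10^19 GeV, differing only by an O(1) factor.
```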
 
  • #4
bcrowell said:
...The Bekenstein bound says there's a limit on how much information can be stored within a given region of space. If spacetime could be described by continuous classical fields with infinitely many degrees of freedom, then there would be no such limit. Therefore spacetime is discrete.
...

But what does "spacetime is discrete" mean? One way to show the difficulties with that argument is to look at the example of LQG (but not at a popular level; popular expositions often mislead and confuse, since we are talking about mathematical models, not verbal ones).

In LQG one starts with a continuum---a differential manifold---representing spacetime.
As usual it is connected. You can run a continuous path between any two points. It is not discrete---does not have discrete topology. Just the usual continuum that mathematicians have been using for over 150 years.

On a spatial slice of that continuum one constructs quantum states of geometry.

A Hilbert space of states of geometry. Operators corresponding to making geometric measurements. Observables.

It turns out that the area and volume operators have discrete spectra. One proves as a theorem that there is a smallest measurable area---essentially the Planck area.
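
For reference, the kinematical area spectrum usually quoted is A = 8*pi*gamma*l_P^2 * sum_i sqrt(j_i(j_i+1)), summed over the spin-network edges puncturing the surface. A minimal numerical sketch, taking the often-quoted Immirzi parameter value gamma ≈ 0.2375 as an assumption:

```python
# Minimal sketch of the standard (kinematical) LQG area spectrum
#   A = 8*pi*gamma*l_P^2 * sum_i sqrt(j_i*(j_i+1)),
# where the j_i are half-integer spins labelling edges that puncture the surface.
# The Immirzi parameter value below is the one often quoted from black-hole
# entropy matching; treat it as an assumption, not a prediction.
import math

hbar  = 1.054571817e-34
c     = 2.99792458e8
G     = 6.67430e-11
l_P2  = hbar * G / c**3          # Planck area, m^2
gamma = 0.2375                   # assumed Immirzi parameter

def lqg_area(spins):
    """Area eigenvalue for a surface punctured by edges carrying the given spins."""
    return 8 * math.pi * gamma * l_P2 * sum(math.sqrt(j * (j + 1)) for j in spins)

area_gap = lqg_area([0.5])       # smallest nonzero eigenvalue: a single j = 1/2 puncture
print(f"Planck area : {l_P2:.2e} m^2")
print(f"area gap    : {area_gap:.2e} m^2   (same order as the Planck area)")
```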

This does not mean that space is topologically discrete. It does not consist (in the LQG context) of separate points.

And one can prove the entropy bound in the LQG context. Indeed Ashtekar recently published a proof of the Bousso covariant entropy bound. This is something that fails as one approaches a singularity in classical GR. So Ashtekar went Bousso one better :biggrin: He proved the covariant entropy bound more generally---extending it to places where it classically fails.

And still, in LQG, space and spacetime are not divided up into little isolated bits. We do not have simpleminded discreteness. There is a discreteness in the operator spectra---at the level of what we can know, and measure, and meaningfully talk about. We cannot measure an area smaller than Planck area.

Notice I'm not claiming LQG is right. These are just rigorous mathematical theorems. You set up a continuum, you define quantum states of geometry in a certain seemingly natural way, you find certain operators have discrete spectra. It turns out there is a limit, for some unknown reason, on what one can measure (in the LQG context). It is somewhat analogous to the Heisenberg limitation on how accurately one can know position and momentum---limits on knowledge, limits on what it is meaningful to talk about, on measurement. Except that these are limits in the realm of geometry itself, rather than merely in the realm of fields or particles defined on some fixed geometry.

LQG people, as a kind of careless shorthand, especially in a popular wide-audience discussion, will talk about spacetime discreteness when what they mean is this kind of discreteness at the level of geometric information.
 
  • #5
Thanks, folks, for your interesting responses!

tom.stoer said:
This argument does not mean that space necessarily becomes discrete below the Planck length, but that the usual formalism of quantum field theory and GR must be replaced by something different.
Hmm...you lost me at the "but." Why do the ideas you've described require that "the usual formalism of quantum field theory and GR must be replaced by something different"?

marcus said:
It turns out that the area and volume operators have discrete spectra. One proves as a theorem that there is a smallest measurable area---essentially the Planck area.
In the context of the original argument given in #1, it seems to me that this shows that LQG has a natural mechanism for complying with the Bekenstein bound -- and I believe this is essentially what Smolin was claiming in Three Roads. Since the discrete spectra of the area and volume operators appear to be a specific property of LQG, what happens in other models like string theory? How do they comply with the Bekenstein bound? Or is it that they don't comply with the Bekenstein bound, and that's okay because the Bekenstein bound is not firmly established on model-independent grounds?
 
  • #6
The holography arguments against gravity being a normal QFT suggest that Asymptotic Safety is wrong; however, they are just heuristic, and not rigorous arguments against a non-trivial UV fixed point. Niedermaier and Reuter comment: "In the context of the asymptotic safety scenario, on the other hand, the presumed reduction to effectively two-dimensional propagating degrees of freedom is a consequence of the renormalization group dynamics, which in this case acts like a 'holographic map'." http://relativity.livingreviews.org/Articles/lrr-2006-5/ [Broken]

However, if AS doesn't work, and gravity is not a normal QFT, then a priori a discrete spacetime cannot be ruled out. But that doesn't mean that spacetime must be discrete. In fact, in AdS/CFT we seem to have a quantum theory of gravity (for some universe, not ours) that respects holography and has a smooth "background" with emergent spacetime.

http://arxiv.org/abs/0808.3773
Area laws for the entanglement entropy - a review
J. Eisert, M. Cramer, M.B. Plenio
 
  • #7
I'll try to clarify: the ideas I described are based on a rather well-understood formalism in quantum field theory. It is known that quantum field theories are plagued with infinities that have to be removed via renormalization. If one tries to apply this renormalization procedure to general relativity it fails for several reasons, so usually it is claimed that general relativity is not (perturbatively) renormalizable.

To summarize: this means that applying QFT methods to GR is inconsistent.

Now we shouldn't jump to the conclusion that quantum gravity requires spacetime to be discrete; the only logical conclusion is that the failure of QFT methods requires a change of formalism when applied to quantum gravity. There are a couple of ideas, not all of them leading to "discrete" spacetime [in addition, it is not clear whether one should require spacetime to be discrete, or whether one should let the formalism decide if it eventually becomes discrete; compare it to the harmonic oscillator: the discreteness of the spectrum is not an input but an output of the theory].

Some approaches:
- LQG
- string theory
- supergravity (which can be seen as low-energy limit of string theory or as a theory on its own)

I am not completely satisfied with Marcus' statements. In LQG one starts with a continuous spacetime and applies a slightly modified formalism of quantization. The result is a theory in which area operators have a discrete spectrum. BUT: these area operators are not physical (= gauge invariant) operators, so one must not conclude that "spacetime itself is discrete". In addition, in the final formalism there is no spacetime anymore! One ends up with a space of so-called spin networks from which spacetime (as we know it from GR) and GR should emerge in a semiclassical limit and as a low-energy effective theory, respectively.

In certain supergravity theories spacetime stays continuous. There is still some hope that SUGRA could be a perturbatively renormalizable theory. The main difference from ordinary GR is that one uses so-called on-shell methods, both for the proof of the closure of certain algebras (which do not close off-shell) and for the proof of the finiteness of Green's functions (which should have certain symmetries valid on-shell) or physical amplitudes.
 
  • #8
bcrowell said:
The following is a paraphrase of an argument for the discreteness of spacetime, made by Smolin in his popular-level book Three Roads to Quantum Gravity. The Bekenstein bound says there's a limit on how much information can be stored within a given region of space. If spacetime could be described by continuous classical fields with infinitely many degrees of freedom, then there would be no such limit. Therefore spacetime is discrete.

I gather that this argument is far from being universally accepted. Where are the holes in the argument?

Every time continuum models are used, they are part of the input to the model; there is no physical measurement process, and no physical representation, that is one-to-one with the continuum. So in that sense the continuum does not seem observable, even in the form of an index space, as it would correspond to an infinite amount of information.

But what some non-observable parts are like is hardly an interesting question. IMHO, a discrete index space from the observability point of view doesn't imply that there is a naive objective discreteness, but neither does it imply that there is a naive objective continuum. Maybe we simply can't decide, and that's enough? But which seems to be the more redundant description of the two?

I prefer to view the information bound not as a limit on the amount of objectively hidden information behind the screen, but as a limit on the amount of inferrable (measurable) information about the other side. But then, it is not even clear that there is a notion of an objective screen, since each observer effectively has their own screen.

I think there is a lot around this that is still extremely unclear, and most things are as yet very much semiclassical arguments that lack coherence of thought.

/Fredrik
 
  • #9
marcus said:
But what does "spacetime is discrete" mean?

I'd take it to mean that, if one naively pictures the discrete event-index to be embedded in a continuum, all "points" in the local neighbourhood of an index node are indistinguishable. This doesn't perhaps forbid that there actually is a continuum, any more than we can prove that God doesn't exist, but it seems at least very redundant.

But to claim that this index is the same for all observers is of course a different thing.

But I fail to see the sense in which any inside observer could resolve a continuum. This, I think, questions the sense and use of starting with "let's consider a continuous manifold". It almost leaves me with a flavour of the physics "spherical cow" joke.

/Fredrik
 
  • #10
tom.stoer said:
Let's start from a totally different problem: in continuous spacetime (down to arbitrarily small length scales) quantum fluctuations of fields could form virtual black holes; these processes would spoil any quantum field theory.

Think about quantum fluctuations of size L, where L can become arbitrarily small. Let L(E) be the Compton wavelength of an object of energy E; L(E) can be considered the "typical size" of this object. If the object's energy E increases, its size L(E) decreases, and the object eventually becomes smaller than its own Schwarzschild radius; according to GR it eventually collapses and forms a black hole. If one equates the Schwarzschild radius with the Compton wavelength, one finds that this process will happen at the Planck energy.

This argument does not mean that space necessarily becomes discrete below the Planck length, but that the usual formalism of quantum field theory and GR must be replaced by something different.

Let's take a step back, because something in your argument is very wrong. L(E) can certainly not be considered the typical size of an object of mass M = E. Think of a star or any macroscopic object: the Compton wavelength is inversely proportional to the mass. (I made this mistake just the other day.) What causes the problems in quantum gravity is not the presence of an apparent horizon, which defines a black hole, but the presence of a central singularity.

As far as QFT goes, it is not "QFT methods" that fail with gravity; rather it is "perturbative methods" that fail when applied to gravity.
 
  • #11
My personal take on the matter of discreteness is probably closest to marcus's. I think that it is the very notion of spacetime that breaks down. Rather than its being replaced by a discrete spacetime, there is instead some "background independent" microscopic theory which has an effective description as a continuous spacetime. Of course I have no idea what this theory is. The length scale at which this effective description breaks down can be considered "the smallest length", but this does not imply that lengths are discrete.
 
  • #12
Ben, for my part I have NOT found a convincing argument against discreteness...at least not from Smolin, Susskind, Penrose, Greene, Thorne and a few others...

here's a thread that might be of interest if you did not see it at the time...

https://www.physicsforums.com/showthread.php?t=323105&highlight=continuous+frequency

and there is one other if I can find it...

and that some sort of "holography" should hold, not that spacetime is discrete.

Susskind takes such a "holographic principle" to mean that spacetime IS discrete...Planck-size areas, one bit per Planck area...Bekenstein bound, etc...which tends to support the original premise...

On the other hand, although I argued FOR discrete spacetime in the above thread, and generally favor that concept, for me the odd characteristic is that so much of quantum theory comes from quantizing an UNPROVEN continuous formalization...which seems an odd way to conclude discreteness or not. I just noticed that Marcus commented on it:
In LQG one starts with a continuum---a differential manifold---representing spacetime...
Worse, we know of situations where both formalizations (QM and relativity) so far fail...so I like the comments above in this thread implying something is missing.

Marcus:
It turns out that the area and volume operators have discrete spectra. One proves as a theorem that there is a smallest measurable area---essentially the Planck area.

yes! So right where we are likely MOST interested in discrete vs continuous we run into THAT obstacle...In another recent thread, information as a basis for theory was discussed...and IF the information in a finite area or volume is finite, it's hard to figure out how spacetime or anything would be infinitely divisible (another holographic argument).

Originally Posted by tom.stoer
This argument does not mean that space necessarily becomes discrete below the Planck length, but that the usual formalism of quantum field theory and GR must be replaced by something different.
Hmm...you lost me at the "but." Why do the ideas you've described require that "the usual formalism of quantum field theory and GR must be replaced by something different"?

Because neither applies below the Planck scale...we have no theory so far to probe there...everything seems to disappear (lose its familiar characteristics) in quantum "foam"...violent irregularities...analogous to Heisenberg uncertainty...some observables seem to have limited access...
 
  • #13
Finbar said:
Let's take a step back, because something in your argument is very wrong. L(E) can certainly not be considered the typical size of an object of mass M = E. Think of a star or any macroscopic object: the Compton wavelength is inversely proportional to the mass. (I made this mistake just the other day.) What causes the problems in quantum gravity is not the presence of an apparent horizon, which defines a black hole, but the presence of a central singularity.

As far as QFT goes, it is not "QFT methods" that fail with gravity; rather it is "perturbative methods" that fail when applied to gravity.
I am sorry, but your argument is wrong.

The difference is that the Compton wavelength is the typical size of a quantum object. So it does not apply to classical objects like stars, but certainly to quantum fluctuations and elementary particles. My argument is non-technical. It simply shows that as soon as quantum mechanics (Compton wavelength) and gravity (Schwarzschild radius) come together, something goes wrong.

You are right in the sense that this shows that perturbative methods fail - but this is how QFT is defined in most cases.

Again: the argument is non-technical, but that should be clear from the beginning, as the first post explicitly asked for non-technical arguments :-)

Btw.: if this argument is wrong, how do you get the Planck energy from it? Just a coincidence? And why is it used quite frequently to motivate quantum gravity?
 
  • #14
Fra said:
Every time continuum models are used, they are part of the input to the model; there is no physical measurement process, and no physical representation, that is one-to-one with the continuum. So in that sense the continuum does not seem observable, even in the form of an index space, as it would correspond to an infinite amount of information.

But what some non-observable parts are like is hardly an interesting question. IMHO, a discrete index space from the observability point of view doesn't imply that there is a naive objective discreteness, but neither does it imply that there is a naive objective continuum. Maybe we simply can't decide, and that's enough? But which seems to be the more redundant description of the two?

I prefer to view the information bound not as a limit on the amount of objectively hidden information behind the screen, but as a limit on the amount of inferrable (measurable) information about the other side. But then, it is not even clear that there is a notion of an objective screen, since each observer effectively has their own screen.

I think there is a lot around this that is still extremely unclear, and most things are as yet very much semiclassical arguments that lack coherence of thought.

/Fredrik

How about:

If entropy is fundamental, then something (what?) must be discrete, because entropy ~ p(x) log p(x), and if x is continuous and has units, then log p(x) will be sensitive to the choice of units, which is unphysical. So x must have some fundamental units, which would be provided by x being discrete.

However, if the mutual information is fundamental, then the logarithmic term is log(p(x)/q(x)), which is unitless whether x is discrete or continuous. Probably the "observer" point of view should take the mutual information to be fundamental, so from that point of view discreteness is not required.
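
A small toy illustration of that point (Gaussians chosen only because both quantities have closed forms): rescaling x, i.e. changing its units, shifts the differential entropy but leaves the relative entropy between two distributions unchanged.

```python
# Toy illustration: differential entropy of a continuous variable changes under a
# change of units (rescaling x), while the relative entropy (KL divergence) between
# two distributions on x does not. The parameter values are arbitrary.
import math

def gaussian_entropy(sigma):
    """Differential entropy of N(mu, sigma^2) in nats: 0.5*ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def gaussian_kl(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ) in nats."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

scale = 100.0   # e.g. switching from metres to centimetres
print(gaussian_entropy(1.0), gaussian_entropy(1.0 * scale))            # differs by ln(scale)
print(gaussian_kl(0.0, 1.0, 2.0, 3.0),
      gaussian_kl(0.0 * scale, 1.0 * scale, 2.0 * scale, 3.0 * scale)) # identical
```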
 
  • #15
Just a suggestion to the board. It might help if you define your terms a little bit more carefully. There are 10 different concepts floating around in this thread, and they are not at all a priori related, which makes it impossible to even begin to disambiguate the mess.

Generally speaking, when someone says 'discrete spacetime', it means to me that you want to stick GR (or something like GR) on a lattice.

There are many ways to do this (in fact an uncountable infinity of ways).

Now, the canonical argument against having a fundamental theory based on this is that it breaks Lorentz invariance (in the bad way), and that's quite general and not hard to prove. Lorentz invariance is only recovered in the continuum, infinite volume limit (if at all), and a theorist must go to great lengths to ensure that both the Lorentz group as well as larger groups (like the diffeomorphism group in the case of some formulations of gravity) remain as residual symmetries upon taking this limit.

If you do not take the continuum limit and instead keep space discretized (in the above sense), you invariably hit issues with the matter sector, because there is nothing that protects certain dimension-4 operators from being generated in the UV, which will then require a tremendous amount of fine-tuning in order to satisfy experimental bounds in the IR. There are of course many other related mathematical no-go statements to the same effect.

Anyway, I am not aware of any formulation of quantum gravity that strictly speaking insists on spacetime being discrete. All the lattice gravity models out on the market (CDT, random triangulations, Weyl, Regge, etc) are not supposed to make sense unless you morally take the continuum limit at the end.
 
  • #16
atyy said:
If entropy is fundamental

I actually don't quite know for sure what everyone means by "fundamental", but I suspect you mean that there is a measure of information (entropy) that is a timeless observer invariant? If so, it's not what I think. I think each observer and subsystem encodes their own entropy functional. The case where each subsystem encodes the same entropy functional for some common thing like space can only be an equilibrium exception, I think.

atyy said:
If entropy is fundamental, then something (what?) must be discrete, because entropy ~ p(x) log p(x), and if x is continuous and has units, then log p(x) will be sensitive to the choice of units, which is unphysical. So x must have some fundamental units, which would be provided by x being discrete.

Hmm... the way I envision things, "x" here should simply be a label of distinguishable events, so there are not really any units. So first we have a set of distinguishable events, a set. So far there is no distance metric, and even less a geometry, defined. These are all higher constructs to be explained IMO.

So in that sense I think I agree with what you say: x is discrete in the sense that it's simply a label of distinguishable events. Now whether it's more natural to mathematically label these events by integers or real numbers is another story. Personally I find it very unnatural to think that the set of distinguishable events is uncountable, simply because it seems to suggest that the observer doing the counting must be infinitely complex. The problem I have comes later, when you are about to compute transition probabilities: how do you rationally compare two possibilities with infinite weight?

Edit: Another thing we also need to reconstruct is stuff like Feynman's path integral. So far there is no proper understanding or explanation of why this is the way to evaluate transition probabilities. If we constrain ourselves to counting only physically distinguishable possibilities, I doubt that the continuum will be present except as some limiting case, but then the measure in the integral should follow from the construction, rather than being left unclear.

Also, from the point of view of representing the information state vector without imaginary ensembles, I think that even the probability p is discrete. The "possible" ratings might not actually cover the entire [0,1] continuum. So not only is x discrete, I think p is too. This is why I'm more radical and think that continuum probability itself needs to be replaced by a combinatorial approach, because I don't think the [0,1] continuum is physical. And it really screws up the "counting" in the action computations.

I think if we already, on the input side of the model, start by assuming a continuum, we have already bypassed several questions that I think are important.

So my own angle on this is more towards discrete information in general. This should apply to information about states, geometry and physical law. I think what we think of as the continuum spacetime is something that will be emergent from a more abstract framework. But then even dimensionality and cardinality must be emergent. To start with a 4D continuum is not explaining anything IMO.

/Fredrik
 
  • #17
atyy said:
If entropy is fundamental, then
...
Probably the "observer" point of view should take the mutual information to be fundamental

Ok, now I think I see the distinction you make. In my view, there is no observer-independent view at all. There are only interacting views. But there are "effective" observer-independent views that are emergent locally. This should include what we usually see as symmetries of nature, but I think none of these symmetries should be seen as timeless and eternal, for the simple reason that that bypasses the measurement requirement, that we should only speak of what we can say/infer about nature.

The notion of fundamental, timeless, observer-independent symmetries is a realist type of assumption or belief that is not the result of a scientific or measurement process.

If we look in detail, the symmetries are ALMOST a result of human science, but what we have inferred is a well-justified expectation of such a symmetry; we have NOT deduced that these symmetries are perfect, eternal and observer independent.

But I think we don't need that; the important thing comes when you see theories as tools for interacting with your own environment, and using the imperfect tools we have is still fully rational. So what happens during interactions is that all parties act according to their tools, and the result is also an evolution of the "population of tools".

I think in this hierarchy of laws and evolving laws, all interactions must fit in. Not only gravity, but also the SM of particle physics. But I think that won't work unless we find an abstract framework that works both for subsystems, such as what we study in particle physics, and for cosmological theories. Clearly, and something that I refuse to deny, the notion of ensembles and repeated experiments, which makes sense in particle physics where you can make preparations and which is really a key building block in our current "measurement theory", just doesn't make any sense for cosmological models. The notion of an observer-independent wavefunction of the universe is just outrageous to me.

So I think we need a "measurement theory" that also makes sense for such models. Ordinary QM, with its fixed Hilbert-space structures, does not; that's my clear opinion. Of course there are more problems in it than just the continuum, but I think the continuum is a key problem, although not the only one.

/Fredrik
 
  • #18
tom.stoer said:
I am sorry, but your argument is wrong.

The difference is that the Compton wavelength is the typical size of a quantum object. So it does not apply to classical objects like stars, but certainly to quantum fluctuations and elementary particles. My argument is non-technical. It simply shows that as soon as quantum mechanics (Compton wavelength) and gravity (Schwarzschild radius) come together, something goes wrong.

You are right in the sense that this shows that perturbative methods fail - but this is how QFT is defined in most cases.

Again: the argument is non-technical, but that should be clear from the beginning, as the first post explicitly asked for non-technical arguments :-)

Btw.: if this argument is wrong, how do you get the Planck energy from it? Just a coincidence? And why is it used quite frequently to motivate quantum gravity?

I admit that your argument is non-technical but I disagree with it and I think what you say is misleading. The Compton wavelength is the length for which quantum effects become important. To a lot of people a black hole is a collapsed star so it would seem to me that one should clarify that the "typical size" one associates with most black holes is certainly not their Compton wavelength.

The Planck energy or Planck length is the scale at which classical gravity breaks down. This happens at the centre of a massive black hole but not at the horizon. It's here, at the centre of the black hole, if anywhere, that things "go wrong".


I totally disagree that QFTs are defined by perturbation theory. Perturbation theory is an approximation to exact QFT and will break down at some scale. This is true of QED, QCD and gravity.
 
  • #19
From atyy's reference in post #2 ( http://arxiv.org/abs/gr-qc/9508064 , a Smolin paper, 1995):

it seems that Lorentz invariance cannot be consistent with a theory that has a finite number of degrees of freedom per fixed spatial region. It is then very impressive that there is one context in which this problem has been definitely solved, which is perturbative string theory. The problem is solved there because the elementary excitations are extended one dimensional objects. As is explained in detail in [...], string theory is consistent with Lorentz invariance in spite of having a finite number of degrees of freedom per fixed spatial region because the strings, representing the small excitations of the vacuum, can diffuse transversally as they are boosted longitudinally...

I did not know that (!), and whereas I took this statement from Haelfix's post #15 to be accurate, now I am considerably less sure:

...Now, the canonical argument against having a fundamental theory based on this (discreteness) is that it breaks Lorentz invariance (in the bad way), and that's quite general and not hard to prove. Lorentz invariance is only recovered in the continuum, infinite volume limit (if at all)

Anyway, back to bcrowell's original question, if there IS an argument against discreteness, this might be it, although I have not considered it to be personally convincing in the past and now have Smolin's view additionally...
 
  • #20
It's not clear to me that the holographic principle says anything about the discreteness of spacetime. Horizons block us from observing things behind them. But I don't think space all by itself is an observable thing. Only things floating about in space are observable, not space itself.
 
  • #21
Finbar said:
The Compton wavelength is the length for which quantum effects become important. To a lot of people a black hole is a collapsed star so it would seem to me that one should clarify that the "typical size" one associates with most black holes is certainly not their Compton wavelength.
You are right: at the Compton wavelength quantum effects become important. But the typical size of a quantum black hole is exactly its Compton wavelength!

You can use similar arguments to calculate the energy in atomic spectra from the size of the atom. Take the uncertainty relation, take the typical size of an atom and you get a typical energy scale. The same works for nuclear physics. It's a very simplified argument, but it works. In the same sense it will work for quantum gravity.
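
A sketch of that kind of estimate, with the Bohr radius taken as the "typical size" and all factors of order one dropped:

```python
# Order-of-magnitude estimate in the spirit described above: take a typical size L,
# use the uncertainty relation p ~ hbar/L, and read off an energy scale E ~ p^2/(2m).
# For L = Bohr radius and m = electron mass this lands at the eV scale.
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837e-31     # kg
a_0  = 5.29177e-11       # m, Bohr radius
eV   = 1.602176634e-19   # J

p = hbar / a_0
E = p**2 / (2 * m_e)
print(f"estimated atomic energy scale: ~{E/eV:.1f} eV")   # ~13.6 eV, the right ballpark
```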

Finbar said:
The Planck energy or Planck length is the scale at which classical gravity breaks down. This happens at the centre of a massive black hole but not at the horizon. It's here, at the centre of the black hole, if anywhere, that things "go wrong".
I am not talking about classical black holes but about quantum black holes. I agree that at the center of classical black holes classical physics breaks down, but this does not give you a typical length scale.

Finbar said:
I totally disagree that QFTs are defined by perturbation theory. Perturbation theory is an approximation to exact QFT and will break down at some scale. This is true of QED, QCD and gravity.
So please have a look at standard QFT textbooks; they are full of path integrals and their perturbative expansion. No non-perturbative methods.

I agree that perturbation expansion is not the definition of the theory in a mathematical sense, but in most cases the path integral as you typically write it down is only formal; you bring it to life by using its perturbation expansion. It's exactly what you expect in quantum gravity: perturbative quantum gravity can be understood as a low-energy effective theory below the Planck scale, but beyond the Planck scale it calls for a paradigm shift.

If you do not agree with my argument to introduce the Planck scale: how would you motivate the Planck scale?
 
  • #22
Naty1 said:
... if there IS an argument against discreteness, this might be it, although I have not considered it to be personally convincing in the past and now have Smolin's view additionally...

I would like to stress the difference between the introduction of discreteness by hand (which in most cases will break Lorentz invariance) and discreteness as the result of a calculation (as you see it in LQG). In LQG all classical symmetries are manifest in the physical Hilbert space, as the physical Hilbert space is defined by the kernel of the generators of the symmetries. Nevertheless certain operators (which are not Dirac observables!) have discrete spectra.

Look at angular momentum: you never introduce any discrete structure, but the angular momentum algebra produces discrete eigenvalues; look at hadrons: you never introduce any discrete structure, but QCD produces a discrete mass spectrum.
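
A tiny numerical illustration of discreteness as an output rather than an input (just the textbook spin-1 angular momentum algebra, nothing LQG-specific):

```python
# Construct the spin-1 angular momentum matrices (in units of hbar) and check that
# J^2 and J_z have discrete eigenvalues, even though nothing discrete was assumed
# about "space" in setting them up.
import numpy as np

s = 1.0 / np.sqrt(2.0)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]])
Jy = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Jz = np.diag([1.0, 0.0, -1.0])

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.round(np.linalg.eigvalsh(J2), 6))   # all equal to j(j+1) = 2
print(np.round(np.linalg.eigvalsh(Jz), 6))   # -1, 0, +1 -- a discrete spectrum
```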
 
  • #23
tom.stoer said:
If you do not agree with my argument to introduce the Planck scale: how would you motivate the Planck scale?

The Planck scale is the scale at which perturbative quantum gravity breaks down. It's the scale at which the dimensionless Newton's constant g = E^2 G becomes of order unity, such that we can no longer use perturbation theory. It's the same situation in QCD: once we reach the energy Lambda_QCD we can no longer use perturbation theory. The difference is that in QCD perturbation theory works for high energies and breaks down for low energies, whereas for gravity (and also QED) it's the other way around.

The situation has consequences for black holes, but one cannot begin with the idea of a "quantum black hole" and then argue that these define the Planck scale. All we know is that quantum effects become important at the horizon of a black hole when its Compton wavelength approaches its Schwarzschild radius.
 
  • #24
:-)

I like the last sentence "All we know is that quantum effects become important ... when its Compton wavelength approaches its Schwarzschild radius." That's my reasoning.

My argument does not mean that these quantum black holes do exist. It means that they don't exist, but that you cannot explain this by using arguments of GR + QFT, because these two theories would imply their existence. So it's an idea to show why something goes fundamentally wrong at the Planck scale, not to discuss what really happens there; we don't know yet.

Your idea to compare LambdaQCD with the Planck scale is not fully correct. The Planck scale can be deduced by purely dimensional arguments, whereas LambdaQCD emerges from a classically scale-free theory. That's why it makes sense to insist on the idea that new physics emerges at the Planck scale even if you are not able to derive this. In QCD it's fundamentally different: nobody knows about LambdaQCD before calculating beta functions, going through all the renormalization group stuff, and deriving scaling violation.
 
  • #25
Naty1 said:
...Now, the canonical argument against having a fundamental theory based on this (discreteness) is that it breaks Lorentz invariance (in the bad way), and that's quite general and not hard to prove. Lorentz invariance is only recovered in the continuum, infinite volume limit (if at all)
Anyway, back to bcrowell's original question, if there IS an argument against discreteness, this might be it, although I have not considered it to be personally convincing in the past and now have Smolin's view additionally...

I, for one, am open to symmetry breaking; I just want to add a possible idea of how to handle a possible Lorentz violation. Although for me the case here is more about symmetry breaking in general, and my arguments are not specific to Lorentz symmetry.

Look at SR, where we have global Lorentz symmetry. This is broken globally in GR, but the way it's broken gives rise to gravity, as it basically extends the "class of observers". So each symmetry corresponds to a class of observers, and to say that a symmetry is broken just amounts to saying that the generators of the original symmetry do not exhaust the set of possible observers.

The usual way of dealing with this is to extend the symmetry to a larger symmetry, which extends the set of possible observers. But there is also another possible way to deal with it: to consider all symmetries to be evolving, in a sense where there are no hard, timeless, objective symmetries. In an environment where the actual population of observers is constrained to be described by a certain symmetry, any larger symmetry describing "possible" but not realisable observers is redundant at that point. So symmetry breaking is then related to distinguishing new interactions.

So, until we have unified EW and strong interactions with gravity, I find it strange to be categorically against violating the symmetries of SR and GR, because maybe the way they violate the symmetries is the key to the missing unification?

To insist on Lorentz invariance being impeccable is, to me, to say that the class of observers generated by the Lorentz or Poincaré group is exhaustive. This makes little sense to me since it is, first of all, a statement about spacetime only; it ignores the EW and strong interactions of the observers. So I'm not buying the flat argument that Lorentz invariance can under no circumstances be broken. No one suggests that the breaking should take place in the currently tested domains anyway, so I see no empirical support for a "perfect Lorentz symmetry". Not to mention that the notion of Lorentz symmetry HAS to break down, or at minimum be reformulated, if and when spacetime as we know it does.

/Fredrik
 
  • #26
Sorry to repeat myself, but I think a key question for this discussion is also how we view the notion of symmetry of nature.

Do we see the symmetries of nature as realist traits of nature? Without having to be observable, or follow from the result of observation?

If we do, no one can deny that the symmetries of nature we talk about now are the result of an inference process we call "science". They are the result of human science. We can think that the analysis of the scientific process is more psychology than physics, and maintain a "realist view" of the symmetries we have inferred from this process as "timeless facts of nature".

Or we can require that an observing subsystem infers the symmetries of its environment by physical processes, on par with how it infers information about the STATE of its environment by the measurement process (which, in the case of subsystems, is what QM describes). So instead of thinking that the "scientific process" is not interesting for fundamental physics, we can think that there is a not-yet-properly-described process whereby a subsystem of the universe infers symmetries of its environment, and that this furthermore influences the first system's reaction to its environment. And that THIS context may be the better way of seeing symmetries.

I of course subscribe to the latter view, but I suspect the majority of others subscribe to the first view.

Is the first view really satisfactory? In either case I think the choice of attitude here strongly influences our reasoning about, for example, Lorentz symmetry.

/Fredrik
 
  • #27
tom.stoer said:
Your idea to compare LambdaQCD with the Planck scale is not fully correct. The Planck scale can be deduced by purely dimensional arguments, whereas LambdaQCD emerges from a classically scale-free theory. That's why it makes sense to insist on the idea that new physics emerges at the Planck scale even if you are not able to derive this. In QCD it's fundamentally different: nobody knows about LambdaQCD before calculating beta functions, going through all the renormalization group stuff, and deriving scaling violation.


On the contrary. The reason that you can make a naive estimate of the Planck scale on dimensional grounds is the dimensionality of Newton's constant. We know that gravity becomes non-perturbative once g = G E^2 ~ 1, so this suggests that this happens once E^2 = 1/G. However, just like in QCD, the coupling, in this case G, is itself a function of the energy scale, G(E). So in gravity, too, we do not know the scale at which E^2 = 1/G(E) before we calculate the beta functions for gravity.
 
  • #28
I think you misunderstood. I simply want to say that there's a difference in how the two scales show up in their respective theories.

For the Planck scale you know from classical physics that there's a scale dependency: this is due to the dimension of the gravitational constant in the classical Einstein-Hilbert action. There is no question that gravity contains a fundamental scale, and we expect it to become physically relevant w/o knowing from an experiment that this is really the case. In QCD you cannot derive the scale from dimensional arguments applied to the classical action, because it is scale invariant. In contrast to GR, you do not know that the theory contains a scale before you have gone through all the QFT calculations.

It is strange: in QG you are not able to go through all that stuff, but you can construct - from the very beginning and w/o any calculation - a quantum gravity scale simply from G, c and the (quantum mechanical!) Planck constant.
 
  • #29
Discrete space has been proposed at the Planck length. But has any other distance been proposed (or studied), like something near the proton width?
 
  • #30
qsa said:
Discrete space has been proposed at the Planck length. But has any other distance been proposed (or studied), like something near the proton width?

I think the diameter of a proton is definitely ruled out, because high-energy scattering experiments have probed distances a couple of orders of magnitude less than that.

In a theory of quantum gravity, there is only one length scale that you can build out of the relevant fundamental constants, and that's the Planck scale. Physicists already feel like there are too many arbitrary scales in physics, e.g., the electroweak scale; they don't want to add another one if they can help it.
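
A rough version of that estimate, using L ~ ħc/E for the probed length scale and a ~TeV collision energy as an assumed benchmark:

```python
# Rough conversion of a collision energy to the length scale it probes, via L ~ hbar*c/E.
# The ~TeV benchmark energy is an assumption chosen to represent modern colliders.
hbar_c_GeV_fm = 0.1973          # hbar*c in GeV*femtometres
proton_radius_fm = 0.84         # charge radius of the proton, femtometres (approx.)

E_GeV = 1000.0                  # ~1 TeV per collision (illustrative)
L_fm = hbar_c_GeV_fm / E_GeV
print(f"probed scale ~ {L_fm:.1e} fm, i.e. ~{proton_radius_fm / L_fm:.0f}x "
      f"smaller than the proton radius")
# Comes out around 2e-4 fm, three to four orders of magnitude below the proton size.
```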
 
  • #31
bcrowell said:
I think the diameter of a proton is definitely ruled out, because high-energy scattering experiments have probed distances a couple of orders of magnitude less than that.

In a theory of quantum gravity, there is only one length scale that you can build out of the relevant fundamental constants, and that's the Planck scale. Physicists already feel like there are too many arbitrary scales in physics, e.g., the electroweak scale; they don't want to add another one if they can help it.


Thanks for the reply. I am familiar with the standard physics, but a few months back I read something like that and I don't remember it any more. Of course, when I said "near" I meant within a number of orders of magnitude (I know I was not clear).


In my own model (my profile), something strange happens when I make position discrete: when I almost hit 355, strange things happen to the energies of the particles (it is like fixed points). It is known that if you compute 355/113 you get pi with six-figure accuracy. Moreover, as I approach 4 all the energies cap at 1, in a behavior similar to black-body radiation, i.e. when energies are discrete the result becomes finite. But if I make my random throws on the real line, then all hell breaks loose and there is no stopping the energies. For various reasons, in my model it appears that 4 could represent a length of 1 to 1/1000 times the proton diameter. I am not sure; I have to find out, or maybe I am just calculating the wrong thing.


The other strange thing in my model is that if I don't make space discrete, I am simply not able to compute interactions (including gravity) properly, and there will be ambiguities. But calculating energies is no problem: the discrete and the real cases give me the same numbers, that is, above position 355.
 
  • #32
bcrowell said:
I think the diameter of a proton is definitely ruled out, because high-energy scattering experiments have probed distances a couple of orders of magnitude less than that.

In a theory of quantum gravity, there is only one length scale that you can build out of the relevant fundamental constants, and that's the Planck scale. Physicists already feel like there are too many arbitrary scales in physics, e.g., the electroweak scale; they don't want to add another one if they can help it.

Here is a quote from a paper from this link:

https://www.physicsforums.com/showthread.php?p=2721537#post2721537

Entropic force, noncommutative gravity and un-gravity

"Without loosing in generality,
but having in mind Noncommutative Geometry
as a specific tool for the description of the microscopic
structure of a quantum manifold, we start a revision of
Verlinde’s assumptions. Noncommutative Geometry encodes
the spacetime microscopic degrees of freedom by
means of a new uncertainty relation among coordinates
xμx  . (16)
The parameter  has the dimension of a length squared
and emerges as a natural ultraviolet cut off from the geometry
when coordinate operators fail to commute
[xμ, x ] = iμ (17)
with  = |μ|. In other words, the spacetime turns
out to be endowed with an effective minimal length beyond
which non further coordinate resolution is possible
.
This a feature of the phenomenology of any approach to
quantum gravity and it can be found not only in Noncommutative
Geometry (for reviews see [10]), but also
in the framework of Loop Quantum Gravity, Generalized
Uncertainty Principle, Asymptotically Safe Gravity
etc.. The scale at which the minimal length emerges is
not specified a priori, and it is kept generic saying that
at the most p < 10−16 cm, namely smaller than the
typical scale of the Standard Model of particle physics
.
Along this line of reasoning, we have to revise at least
two of the Verlinde’s assumptions."

I guess I was not too far off. Please see my earlier posts. But how does that relate to the Planck length? Anybody?
 
  • #33
bcrowell said:
The Bekenstein bound says there's a limit on how much information can be stored within a given region of space.

How can you be sure that information has a lower limit for scale? Maybe information can occur in infinitely smaller forms, allowing infinite amounts to occupy any given region.
 
  • #34
The smallest amount of information is one bit. It can store the information of whether it's zero or one. Something smaller than one bit would always have the value zero (no storage for more information :-), but that is no longer information.
 
  • #35
brainstorm said:
How can you be sure that information has a lower limit for scale? Maybe information can occur in infinitely smaller forms, allowing infinite amounts to occupy any given region.

One of the arguments is the holographic principle. It states that our reality is just a hologram of the information contained on a screen; that is why we observe the spatial object. Each point of the object is created from the product of two or more pieces of information.
The maximum amount of information on a screen equals the area divided by four Planck lengths squared (A / 4 l_P^2).
Therefore we can count the amount of information in our observable Universe.
If the amount of information is limited, spacetime has to be discrete too.
The holographic principle has been developed by prominent physicists: Hawking, Bekenstein, Verlinde, Smoot, 't Hooft and others. It is currently one of the most promising ideas in physics.
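
A back-of-the-envelope version of that count, taking the comoving radius of the observable universe (~46.5 billion light years) as the assumed "screen":

```python
# Back-of-the-envelope count of the holographic bound A/(4*l_P^2) for a sphere the
# size of the observable universe. The radius is an assumed choice of "screen".
import math

hbar = 1.054571817e-34
c    = 2.99792458e8
G    = 6.67430e-11

l_P2 = hbar * G / c**3                      # Planck area, m^2
R    = 46.5e9 * 9.461e15                    # ~46.5 Gly in metres (assumed screen radius)
A    = 4 * math.pi * R**2

print(f"bound ~ {A / (4 * l_P2):.1e}  (in bits/nats up to O(1) factors)")
# Of order 10^123 -- enormous, but finite, which is the point being made above.
```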

As a curiosity:
(l_P / l_x) * (l_P / l_y) = -α * F_g / F_e
where:
l_P * l_P – Planck length squared = hG/c^3
l_x, l_y – Compton wavelengths of the two interacting particles, l = h/mc
α (alfa) – the fine-structure constant = ke^2/hc ( http://en.wikipedia.org/wiki/Fine_structure_constant )
F_g – gravitational Newtonian interaction between particles m(x) and m(y)
F_e – electrostatic Coulomb interaction = ke^2/r^2

Each oscillation due to the Compton wave causes an electromagnetic interaction and a space curvature which we call gravity. The interference of the non-local information of the Compton wavelength causes length contraction (space curvature) and time dilation.
This equation is possible only if spacetime is discrete.
http://www.cramerti.home.pl/ [Broken]
 

1. What is the argument for discreteness of spacetime?

The argument for discreteness of spacetime is based on the idea that space and time are not continuous, but instead are made up of individual, discrete units. This means that there is a smallest possible unit of space and time, and everything else is made up of these smaller units.

2. How does this argument relate to quantum mechanics?

This argument is closely related to quantum mechanics because in the quantum world, particles and energy are thought to exist in discrete units, rather than continuously. This suggests that spacetime, which is the fabric of the universe, may also be discrete in nature.

3. What evidence supports the argument for discreteness of spacetime?

One piece of evidence comes from the phenomenon of black holes. According to the theory of general relativity, the gravitational pull of a black hole is so strong that it causes a distortion in spacetime. This distortion is thought to be made up of discrete units, which supports the idea of a discrete spacetime.

4. Are there any counterarguments to this theory?

Yes, there are some counterarguments to the theory of discreteness of spacetime. Some scientists argue that the concept of a smallest unit of space and time is not necessary and that a continuous model of spacetime can still explain the behavior of particles and energy in the universe.

5. What are the implications of a discrete spacetime for our understanding of the universe?

If the argument for discreteness of spacetime is proven to be true, it would have significant implications for our understanding of the universe. It would mean that our current models of physics may need to be revised, and it could potentially lead to new discoveries about the fundamental nature of space and time.
