Why Should Outlaws in Quantum Gravity Embrace Constructive Criticism?

In summary, Garrett has reservations about CDT and asks whether there are justifications for its restrictions. Marcus suggests that Willem Westra is the person most capable of extending the 2D result on topology change to higher dimensions.
  • #1
marcus
there is a possibility of having a friendly argument over Loll Gravity (CDT)

the missing ingredient was always some benign opposition willing to put in the necessary work to understand it and discuss the weak points

I was listing the various outlaw approaches to QG in another thread
https://www.physicsforums.com/showthread.php?t=102147
and in post #7 Garrett offered a good summary of CDT and some preliminary reasons why he didn't like it, or in any case wasn't sure about it.
https://www.physicsforums.com/showthread.php?p=853927#post853927

this is a generous and collective-minded thing to do, because he has his OWN approach to QG. Instead of wanting only to talk about his own ideas, and being purely negative or closed to others, he was looking at CDT in a receptive way without any animosity. You may think that this is the obvious way to act, but I don't think it is common enough.

I think the best strategy for outlaws is to understand each other's work and give each other friendly critiques as one would do in a research seminar.

I'm saying something blatantly obvious, right? If we were a face-to-face community of grad students at a good university this kind of constructive behavior would be TAKEN FOR GRANTED. But I am on the west coast, Garrett is on Maui, Torsten is in Berlin, and Renate Loll, who is in Utrecht, doesn't even come to PF.
Actually what we need most is not Loll, it is Willem Westra. He is the CDT guy working on topology change, which bears directly on what Garrett said.
 
  • #2
Anyway this thread is just in case Garrett or anybody else wants to say what's wrong with Loll Gravity. And I or others may then respond.

If I had the smarts and the knowledge and the time, this is what I would want to have for EACH of the outlaw QG approaches. I think these outlaw approaches are what ought to be happening, and maybe there should even be a CONFERENCE of outlaws. But I don't have the resources to examine each of these non-string, non-loop QG approaches. All I can really do is list them and keep watch for new papers about them.

So CDT is what is on the table in this thread and here is what Garrett said, by way of reservations about it, in his post #7:

"So... it looks like CDT can be summarized as a restricted quantum Regge calculus? They took the evil behaving QRC and constrained it until it played nice. But I'm not sure I like it. Are there justifications for the restrictions? And I kind of like the idea of there being topology change down at the Planck scale."

This is an important issue. Loll and Westra have been working to include brief microscopic topology change down at the Planck scale. I will try to dig up a PF thread and some links.

(Not to duck Garrett's primary objection: the constraints that make CDT causal are indeed confining, and the main justifications AFAIK are pragmatic. I will try to address this later.)

We had a PF poll about whether Loll and Westra would be able to extend their 2D result on topology change to higher dimensions.

https://www.physicsforums.com/showthread.php?t=81626

This thread has links to some Loll/Westra papers, and one Loll/Westra/Zohren:
http://arxiv.org/abs/hep-th/0306183
http://arxiv.org/abs/hep-th/0309012
http://arxiv.org/abs/hep-th/0507012
Zohren, BTW, has left Loll's group in Utrecht and is now at Imperial College London in Dowker's group.

Extending microscopic topology change to 3D is probably Westra's PhD problem, and we haven't heard much since the 2D result in 2003. It is probably really hard. Several people in our poll thought it could not be done (i.e. within the context of CDT). I happen not to be so pessimistic: for whatever my guess is worth (not much), I expect Westra's PhD thesis to appear in 2006, or some definite advance comparable in scope to that.
 
  • #3
I spent very little time reading a few CDT papers, so I don't consider myself qualified to properly critique the whole model.

But... what about the length fixing of the simplex legs? What's with that? The topology restriction, in my opinion, is the lesser of the two constraints.

To add detail: Loll et al. restrict their 4-simplexes to have some legs of equal length in space and some of equal length in time, with the lengths related by a hand-picked (is that part true?) constant. As far as I can tell this is done to keep the computer from blowing up. Is this at all justifiable?
 
  • #4
garrett said:
I spent very little time reading a few CDT papers, so I don't consider myself qualified to properly critique the whole model.
But... what about the length fixing of the simplex legs? What's with that? The topology restriction, in my opinion, is the lesser of the two constraints.
To add detail: Loll et al. restrict their 4-simplexes to have some legs of equal length in space and some of equal length in time, with the lengths related by a hand-picked (is that part true?) constant. As far as I can tell this is done to keep the computer from blowing up. Is this at all justifiable?

Good :smile:
To answer "(is that part true?)": yes, that is true.
We are accumulating some quite sensible questions about CDT that should be addressed. Yes, the ratio of timelike edge length to spacelike is picked somehow, and maybe even changed during a limit-taking process. And "to keep the computer from blowing up" is a really good way to express it, I think. YES! WE DO need to ask whether this is justifiable.
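
To be concrete, if I remember the convention in the Ambjorn-Jurkiewicz-Loll papers right (treat this as my recollection, not gospel), there is a single asymmetry parameter relating the two squared edge lengths:

```latex
% CDT edge-length convention (as I recall it from the AJL papers;
% notation may differ from paper to paper): all spacelike edges share
% one squared length, all timelike edges another,
\ell_{\mathrm{space}}^2 = a^2, \qquad \ell_{\mathrm{time}}^2 = -\alpha\, a^2, \qquad \alpha > 0,
% and the Wick rotation to the Euclidean theory is the analytic
% continuation \alpha \to -\alpha.
```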

I have a chorus rehearsal tonight for which I must prepare intensively, so right now I cannot respond, Garrett. All I can say is it is great to be asking these questions, and I am actually pretty optimistic that the DT approach IS justified.

I may not be able to convince you, but I am pretty confident and I will certainly try my best to persuade you!

BTW the business of using simplices that are all the same size (or all the same edge length) is called DT, and was invented by Ambjorn and others, IIRC in the early 1990s. Many people worked on DT before Loll. Ambjorn has a late-1990s Cambridge UP book about it, "Quantum Geometry", which I have not seen. DT never worked right.
The argument to JUSTIFY DT would have been made back then in the early 1990s: why is it all right to do Regge with edge lengths that can't change?

As soon as I can, I will check out what the justification was. Then in 1998 Loll got into it, and wrote a paper with Ambjorn in which the "causal" restriction was proposed.

my feeling is the weakest part of CDT is the universal time parameter, that is, the fixed foliation. I believe it could go away somehow in the limit and be proved not to matter---but it is needed for the initial construction and it has not been shown yet to be non-essential.

How I wish that Renate Loll or one of her postdocs would drop out of the sky and answer your questions, but I will try as soon as this rehearsal is over, or tomorrow morning.
 
  • #5
OK, thanks Marcus, I'll look forward to hearing what you can dig up about the length restriction.

People get all huffy about using a universal time parameter, but it doesn't bother me so much. True, the equations you start with need to be Lorentz invariant, but a particular solution doesn't. In fact, our universe has a perfectly good universal time foliation -- a rest frame can be assigned at each point as the frame in which the cosmic background radiation dipole moment vanishes.

Plus, I don't see picking universal time as much different than an arbitrary choice of coordinates, as long as your physical predictions are independent of these choices.
 
  • #6
garrett said:
OK, thanks Marcus, I'll look forward to hearing what you can dig up about the length restriction.

your view of the time parameter issue makes sense to me

I'm going to respond to the length restriction issue on my own, before seeing what I can find in the papers: I think that the original Regge approach used a fixed simplicial complex (and allowed the lengths to vary)

so in a way the Regge approach is just as restrictive, and it is a TRADE-OFF. With Regge you don't get to stuff more simplices in: you pick some triangulation and from then on you are stuck with that one triangulation and can only change the lengths.

but with DT ("dynamical triangulations") you get a comparable freedom, because the triangulation itself is what changes.

You can, in effect, stretch some distance relative to some other by stuffing in more simplices. You can change the curvature at a point by making more triangles (in the 2D case) MEET at that point,

or by making more tetrahedra meet at some edge (in the 3D case), or more 4-simplices meet at some triangle (in 4D).
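
In the 2D case this curvature-by-counting is just the Regge deficit angle. Writing it out for readers (standard Regge calculus, my addition):

```latex
% Curvature at a vertex where N equilateral triangles meet (2D case):
% each triangle contributes an interior angle of \pi/3, so the deficit is
\delta(N) = 2\pi - N\,\frac{\pi}{3}.
% N < 6: positive deficit (sphere-like curvature),
% N = 6: flat,
% N > 6: negative deficit (saddle-like curvature).
```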

so with DT you have just as much freedom (*waves hands enthusiastically*) as you do with vintage Regge, except you get that freedom in a different way, and you get it in a bumpy, rough, jagged way instead of a SMOOTH way,

because when you stuff in more simplices to change the curvature, or take some away, the geometry changes in an abrupt, jerky way instead of smoothly. And here the DT people, and Loll's CDT as well, will justify this by saying that it is QUANTUM: even though the individual spacetimes are somewhat bumpy and the changes somewhat jerky, it will all BLUR OUT.

When we do the path integral, they will say, it will be like regularizing a particle's motion by piecewise linear paths: each path is jerky, but they blur out smooth in the sum.

I remember Loll pointing out that with identical simplices (or her two types) one cannot even tile flat space. Anything you make with these dumb little legos can only be *approximately* flat. But because she is doing a quantum dynamics she declares that this doesn't bother her.

Admittedly I can't speak for those who introduced DT as a replacement for Regge back circa 1990 (IIRC), so I should research this and see how DT was justified back at the start, by Ambjorn or someone else involved at the beginning.
 
  • #7
Thanks marcus, the description of the path integral smoothing over jagged shapes makes sense, but then... I have a couple more questions:

Are the DT simplexes describing a spacetime supposed to be non-intersecting and complete, in the same way that a finite element description like a Regge discretization is? i.e. do they properly "tile" a spacetime?

If this is so, I would hope that there is a regular arrangement of these two 4D simplexes that fills a flat spacetime, or maybe that fills something like S^1 x S^3?

To use a 2D analogy:
I see this the same way that equilateral triangles can tile an icosahedron. I would expect that in this way the icosahedron would be a DT approximation to S2. And "stuffing in more simplexes" is like saying take one triangular face (2D simplex) on your icosahedron and replace it with the three faces of a tetrahedron that would have had as its fourth face the simplex that is replaced. This way, you move from an icosahedron to an icosahedron with a bump on it -- representing a localized bit of higher curvature space. Repeat this procedure to get more and more bumpy spaces.
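
Here's a toy sketch of the move I mean (my own illustration; I use the boundary of a tetrahedron instead of the full icosahedron to keep it short). The move preserves the Euler characteristic, so the result is still a topological sphere:

```python
from itertools import combinations

# Triangulated S^2: the boundary of a tetrahedron, 4 triangles on vertices 0..3
tris = {frozenset(t) for t in combinations(range(4), 3)}

def euler(tris):
    """Euler characteristic V - E + F of a set of triangles."""
    V = {v for t in tris for v in t}
    E = {frozenset(e) for t in tris for e in combinations(sorted(t), 2)}
    return len(V) - len(E) + len(tris)

def bump(tris, tri, apex):
    """The 'add a bump' move: replace one triangle by the cone over its edges."""
    out = set(tris)
    out.remove(tri)
    for e in combinations(sorted(tri), 2):
        out.add(frozenset(e) | {apex})
    return out

bumped = bump(tris, next(iter(tris)), apex=4)
print(euler(tris), euler(bumped))  # 2 2 -- both are spheres
```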

Does this sound right?

And, if so, what are the spacetimes that a regular collection of Loll's two 4D simplexes can approximate? The simpler, and I think related, question is: what 3D spaces are approximated by regular arrangements of equilateral 3D tetrahedra?
 
  • #8
garrett said:
If this is so, I would hope that there is a regular arrangement of these two 4D simplexes that fills a flat spacetime, or maybe that fills something like S^1 x S^3?
...

Garrett, here's something that might be satisfactory: page 8 of
hep-th/0505154

---quote---
Before measurements can be performed, one needs a well thermalized configuration of a given volume. In order to double-check the quality of the thermalization, we used two different methods to produce starting configurations for the measurement runs. In the first method, we evolved from an initial minimal four-dimensional triangulation of prescribed topology and of a given time extension t, obtained by repeated gluing of a particular triangulated space-time slice of delta tau = 1 and topology [0, 1] × S^3, which consists of 30 four-simplices. The spatial in- and out-geometries of the slice are minimal spheres S^3, made of five tetrahedra.
---endquote---

It speaks of a minimal S^3 being made with 5 tets----those would be purely spatial tets, and therefore equilateral, I assume.

Are the DT simplexes describing a spacetime supposed to be non-intersecting and complete...? i.e. do they properly "tile" a spacetime?

Yes, I believe so! In the Monte Carlo runs, they start out with a minimal "tiling" of a stock regular sort and then they make many passes to allow it to grow (to the desired volume) and to thermalize. Only then can they start taking statistics from it.

By the time their spacetime has thermalized, it is no longer regular in any sense AFAIK.

Whenever you ask a question (it's great to have these questions, BTW!) I always start by thinking: why don't we write email to Loll or one of her grad students? Having someone able to speak with even a modest amount of authority would be so relaxing.
 
  • #9
Hmm, how well can five equilateral tetrahedra approximate a sphere?

For 2D shapes, 20 equilateral triangles glued into an icosahedron make a decent 2D sphere. Gluing 6 triangles together makes a thing with spherical topology, but it has two points of concentrated curvature, so it's lumpy.

Is there a 3D analog for making nice 3D spheres out of equilateral tetrahedra? And, if so, I would expect there to be only one with regular curvature. How many tetrahedra does it have?

Also, it occurs to me you could make a space with regular negative curvature. To make a 2D icosahedron you glue together 5 triangles at each vertex; gluing 6 at each vertex gives flat space. If you glue together 7, you should get an approximation to a 2D surface with constant negative curvature -- a hyperbolic plane. I'm sure there's a 3D analog.

This stuff must be worked out somewhere for 3D shapes, yes?
 
  • #10
Google to the rescue:
http://astronomy.swin.edu.au/~pbourke/geometry/platonic4d/

In 4 dimensions there are 3 regular polytopes one can construct from equilateral tetrahedra:

4-simplex (5-cell) - made of 5 tetrahedra, 3 meeting at an edge
16-cell - made of 16 tetrahedra, 4 meeting at an edge
600-cell - made of 600 tetrahedra, 5 meeting at an edge

Marcus, are these the 3D space triangulations Loll starts with? My guess is they've done 5 (as you found) and 16, and are looking for a big computer to do 600.
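
And a quick back-of-envelope check (my own, not from that page) on why exactly these three exist: the dihedral angle of a regular tetrahedron is arccos(1/3), so 3, 4, or 5 tetrahedra around an edge leave a positive angle deficit, while 6 or more overshoot 2π -- the negative-curvature gluing I was asking about in #9:

```python
import math

# Dihedral angle of a regular tetrahedron: arccos(1/3) ~ 70.53 degrees
dihedral = math.acos(1.0 / 3.0)

# Deficit angle when n equilateral tetrahedra share an edge; a positive
# deficit means positive curvature concentrated on that edge.
for n in range(3, 8):
    deficit = 2 * math.pi - n * dihedral
    print(f"{n} tets at an edge: deficit = {math.degrees(deficit):+7.1f} deg")

# 3, 4, 5 -> positive deficit (the 4-simplex, 16-cell, 600-cell)
# 6, 7, ... -> negative deficit (a hyperbolic-looking 3D space)
```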
 
  • #11
garrett said:
Google to the rescue:
http://astronomy.swin.edu.au/~pbourke/geometry/platonic4d/
In 4 dimensions there are 3 regular polytopes one can construct from equilateral tetrahedra:
4-simplex (5-cell) - made of 5 tetrahedra, 3 meeting at an edge
...

Nice page. So the minimal tiling of S^3 by tets is simply the shell (boundary) of a 4-simplex!
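
To make that concrete, here is a little combinatorial check (my own sketch, not from the papers): take vertices 0..4, let the tetrahedra be all 4-element subsets, and verify the manifold condition -- every triangle shared by exactly two tetrahedra -- plus your "3 meeting at an edge":

```python
from itertools import combinations

vertices = range(5)
tets = [frozenset(t) for t in combinations(vertices, 4)]  # the 5 tetrahedra

# Manifold condition: each triangle is a face of exactly 2 tetrahedra.
for tri in combinations(vertices, 3):
    assert sum(set(tri) <= t for t in tets) == 2

# And 3 tetrahedra meet at every edge, as on the page Garrett found.
for edge in combinations(vertices, 2):
    assert sum(set(edge) <= t for t in tets) == 3

print(f"{len(tets)} tetrahedra tile S^3 as the boundary of a 4-simplex")
```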

How I picture what they do: they define a set of "moves" that can be performed on a PL manifold (or on a simplicial complex, where the moves are careful to preserve the manifold condition),

and these moves are able to GROW the number of simplexes, as well as to shuffle them around so that the way they are assembled together gets more and more random.

Because moves can increase the number of simplexes, they do not have to start out with a lot; it does no harm to start out with the smallest number that will make the desired topology.

When they do a "sweep" in the computer it may involve a million moves or so, and they do many sweeps before they start extracting statistics. The moves are supposed to be ergodic, in the sense that if you do enough moves you will visit all possible geometries (of that topology).

When they are in the process of doing a sweep and they randomly generate a possible move, the computer is programmed to CONSIDER the move first, and use the Lagrangian to determine a probability of doing that move. So it tosses a coin! NOTICE THAT THERE IS LOTS OF ROOM TO "CHEAT" HERE, BECAUSE YOU CAN TINKER WITH THE LAGRANGIAN! But actually I trust Loll, she has a very strong character and---well, this is just my impression---I think they use a good honest Lagrangian based on the Regge form of the Einstein action.
However, this Lagrangian has a little term in it which prefers a certain SIZE for the spacetime to grow to, so the probabilities of doing moves will, in the initial stages, be slightly BIASED in favor of growing, until the target size is reached!

And then it stabilizes at that size, and they can begin to take statistics and find the expectation values of their observables. So that is how I picture their Monte Carlo process: they don't need to start big, they can start with only a few simplexes in the machine, and then they grow the spacetime.
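
To make the grow-then-stabilize picture concrete, here is a toy caricature (entirely my own sketch, NOT the real CDT move set or action): the geometry is reduced to just its volume N, a made-up action eps*(N - N_target)^2 stands in for the volume-fixing term, and Metropolis acceptance stands in for the coin toss:

```python
import math
import random

N_TARGET = 1000   # made-up target volume (number of simplices)
EPS = 1e-4        # made-up strength of the volume-fixing term

def action(N):
    """Toy stand-in for the volume-fixing term in the Lagrangian."""
    return EPS * (N - N_TARGET) ** 2

def sweep(N, n_moves=20_000):
    """One 'sweep': many proposed grow/shrink moves, Metropolis-accepted."""
    for _ in range(n_moves):
        trial = N + random.choice((-1, +1))  # propose adding/removing a simplex
        if trial < 1:
            continue
        dS = action(trial) - action(N)
        if dS <= 0 or random.random() < math.exp(-dS):  # the coin toss
            N = trial
    return N

N = 10  # start from a minimal triangulation, then let it grow
for i in range(5):
    N = sweep(N)
    print(f"after sweep {i + 1}: volume = {N}")
# N drifts up toward N_TARGET, then just fluctuates around it
```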

Once they get one of the right size they can continue using it: they count something (like a dimension), then they shuffle and randomize it by millions of moves, then they count again, then shuffle again, and so on.

Each sample spacetime is a "path" between the beginning and the end of a small universe, and by generating and studying many such paths they make a "path integral" over the histories of this small universe.

Garrett, I guess I am saying all this not for you, because you have already been reading the Loll papers and know about it, but in case others are reading the thread and want a picture. If you or anyone has a clearer idea of their Monte Carlo spacetime dynamics, please give a further description!
 
  • #12
OK, it sounds like good fun.

But there's still the question of how to justify using only same-edge-length simplexes. Any luck finding an answer to that?
 
