What if Weinberg had succeeded in 1979?

  • Thread starter marcus
  • Tags
    Weinberg
In summary, the conversation discusses the misconception that gravity is unrenormalizable and can only be quantized in the context of unification with other forces. In 1979, Steven Weinberg proposed a way to renormalize gravity as is, and in 1998 Martin Reuter presented evidence that this "asymptotic safety" program can in fact be made to work. The approach also offers an explanation for the high entropy of the CMB and could have predicted inflation without the need for an "inflaton". Weinberg's idea was not widely pursued at the time, and Reuter's results only began to gain recognition around 2007. The conversation highlights the importance of the scale dependence of the gravitational parameters, G and Lambda, for understanding the evolution of the universe.
  • #1
marcus
From 1980 onwards a lot of people have been operating under the misconception that gravity, by itself, is UNRENORMALIZABLE.

People have tended to suppose that it could only be handled within a unification context, where the Einstein-Hilbert action is abandoned for dreams of more complex Lagrangians to be discovered in the future.

In 1979, Steven Weinberg had an idea for renormalizing gravity AS IS and he tried it with only partial success---he could do it in lower dimensions but couldn't crack the D = 4 case.

In a paper published in 1998, Martin Reuter showed that Weinberg WAS RIGHT AFTER ALL, he could have succeeded if he'd had better mathematical tools, and gravity is, by itself, RENORMALIZABLE.

But by that time there was a huge mob of people with the idea fixed firmly in their heads that gravity could not be quantized except in unification with other forces, and the programs based on that mistaken notion had a lot of momentum. They assumed the E-H action was not fundamental but only an effective theory that worked approximately at low energy. Almost nobody could HEAR what Reuter said in 1998.

So Reuter and collaborators devoted time to checking this result and making sure about it. When people don't want to listen, you have to be very certain in order to get through to them.

So this is, I think, a very important thing. One way to see how physics can get back on track is to visualize what would have happened if Weinberg had succeeded in 1979 and got Reuter's 1998-2007 results.

And because Weinberg commanded (and still does) considerable influence, the significance of his finding would have been more immediately appreciated.

One thing is that the Einstein-Hilbert action (plus quantum corrections) is not just a low-energy effective action, it is a HIGH ENERGY BARE ACTION if you put in the right values of G and Lambda. The E-H action is a FIXED POINT of the RG flow, and trajectories converge to it. It is a feature of Nature.

So the E-H action is not something Reuter (or a successful Weinberg) would need to put in by hand; it is a prediction that comes out of the theory.

Another thing is that from Reuter-Saueressig's plot of the RG flow it looks to me as if A SMALL POSITIVE LAMBDA is either predicted or strongly suggested. So a successful Weinberg would have had all the more reason to expect that.

Another thing is that Weinberg could well have come close to SCOOPING GUTH AND LINDE ON INFLATION without even needing to fantasize exotic matter in the form of an "inflaton". The RG flow for gravity PREDICTS INFLATION SIMPLY FROM THE RUNNING OF LAMBDA in the high-k early universe. It doesn't need an inflaton.

So I can imagine that people in the alternative 1980s are getting pretty excited by all this, when Weinberg tells them about a natural inflation of some 60 e-foldings that naturally shuts itself off as the universe expands and Lambda runs down.
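
To make that last claim concrete, here is a toy numerical version of the mechanism. It assumes the fixed-point scaling Lambda(k) ≈ λ*k² together with a cutoff identification k = ξ/t of the kind used in the RG-improved cosmology papers, it ignores matter entirely, and the values of λ* and ξ below are made up for illustration---they are not Reuter's numbers.

Code:
import numpy as np

# Toy RG-improved, Lambda-dominated Friedmann equation:
#   H(t)^2 = Lambda(k(t)) / 3,  with Lambda(k) = lam_star * k^2  (fixed-point regime)
#   and the cutoff identification k(t) = xi / t.
# Then H = sqrt(lam_star/3) * xi / t, so a(t) ~ t^alpha with alpha = xi*sqrt(lam_star/3):
# power-law inflation whenever alpha > 1, with no inflaton field anywhere.
lam_star = 0.5      # illustrative fixed-point value (made up for this sketch)
xi       = 5.0      # illustrative constant in k = xi/t (made up)

alpha = xi * np.sqrt(lam_star / 3.0)
print("power-law exponent alpha =", alpha)   # > 1 means accelerated expansion

# e-folds accumulated between t_i and t_f while the trajectory stays in this regime:
#   N = ln[a(t_f)/a(t_i)] = alpha * ln(t_f/t_i)
# In the full Bonanno-Reuter picture the trajectory eventually leaves the
# fixed-point regime, Lambda(k) stops growing like k^2, and inflation ends by
# itself; this sketch just stops at t_f by hand.
t_i, t_f = 1.0, 1.0e13                       # purely illustrative endpoints (Planck units)
N = alpha * np.log(t_f / t_i)
print("e-folds N =", N)                      # of order 60 for these made-up numbers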

Or how do you picture it? If Weinberg had succeeded in 1979.
 
Last edited:
  • #2
The entropy of the CMB

The Reuter route to renormalizing gravity solves some interesting puzzles, such as the entropy of the CMB.

As I understand what Reuter is saying, if you measure in natural units and take the entropy of the entire CMB inside the Hubble radius (some 13.8 billion lightyears) it comes out to be a huge number: 10^88 Planck units of entropy.

IIRC, the Planck unit of entropy is the Planck energy divided by the Planck temperature: roughly 2 GJ divided by 1.4E32 kelvin, which is around 1.4E-23 J/K---in other words the Boltzmann constant.
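
As a sanity check on those figures, here is a little back-of-the-envelope script. The CMB temperature, the value of H0, and the use of the photon entropy density formula s = (4/3)aT^3 are my own standard-textbook inputs, not numbers taken from Reuter's slides.

Code:
import math

# Physical constants (SI)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # J/K

# Planck energy and Planck temperature; their ratio is the "Planck unit of entropy"
E_pl = math.sqrt(hbar * c**5 / G)            # ~2e9 J (about 2 GJ)
T_pl = E_pl / k_B                            # ~1.4e32 K
print("Planck entropy unit E_pl/T_pl =", E_pl / T_pl, "J/K  (= k_B by definition)")

# CMB photon entropy density s = (4/3) a_rad T^3, a_rad = pi^2 k_B^4 / (15 hbar^3 c^3)
T_cmb = 2.725                                # K
a_rad = math.pi**2 * k_B**4 / (15 * hbar**3 * c**3)
s_cmb = (4.0 / 3.0) * a_rad * T_cmb**3       # J K^-1 m^-3, about 1.5e9 k_B per m^3

# Hubble radius c/H0 with H0 ~ 70 km/s/Mpc (assumed round number)
H0  = 70e3 / 3.086e22                        # s^-1
R_H = c / H0                                 # ~1.3e26 m
V_H = (4.0 / 3.0) * math.pi * R_H**3

S_total = s_cmb * V_H                        # total CMB entropy inside the Hubble radius
print("CMB entropy inside Hubble radius ~", S_total / k_B, "k_B")   # ~1e88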

Anyway, the present entropy of the CMB is something that needs to be explained. How did we ever get such a whopping amount of entropy in the world?

I think we should give Steven Weinberg the benefit of the doubt and assume that he would have EXPLAINED THAT TOO back in 1979 :smile: if he had succeeded in his plan to renormalize quantum gravity.

Instead it had to wait until the paper of Bonanno and Reuter: http://arxiv.org/abs/0706.0174 . In case Bonanno sounds like "banana" to you, consider that it means "good year" in Italian.

I should give some links.
Here is the recent Reuter Saueressig paper (7 August, pedagogical)
http://arxiv.org/abs/0708.1317

Here is the paper Reuter posted in 1996 and got published in Physical Review D in 1998.
http://arxiv.org/abs/hep-th/9605030
I would date the Reuter revolution 1998 because of that publication. It was a year that saw a lot of changes in cosmology and quantum gravity.

Here are the pdf and the mp3 for Reuter's talk at the June Loops conference.

http://www.matmor.unam.mx/eventos/loops07/talks/PL3/Reuter.pdf
http://www.matmor.unam.mx/eventos/loops07/talks/PL3/Reuter.mp3

IF YOU WANT TO KNOW WHERE THE ENTROPY CAME FROM then look around page 16 of Reuter.pdf (the Loops 07 talk)
and then again around page 20---the slides have some misnumbering so it could be page 19
and then on page 22
and finally there is a graph on page 27 that shows a spike in entropy production in the very early universe, which results AFAICS from the running of Lambda.
Reuter explains this in the audio on Reuter.mp3 if you have time to listen. It is roughly 30 minutes into his talk.

It is really a pity that Weinberg didn't get this in 1979. It would have saved us from so much misdirected research and compensatory hype.
Or?
Do you have a different take on it?
 
Last edited:
  • #3
Bonanno Reuter Entropy Signature of the Running Cosmological Constant

http://arxiv.org/abs/0706.0174
Entropy signature of the running cosmological constant
Alfio Bonanno, Martin Reuter
57 pages, 7 figures
(Submitted on 1 Jun 2007 (v1), last revised 3 Jun 2007 (this version, v2))

"Renormalization group (RG) improved cosmologies based upon a RG trajectory of Quantum Einstein Gravity (QEG) with realistic parameter values are investigated using a system of cosmological evolution equations which allows for an unrestricted energy exchange between the vacuum and the matter sector. It is demonstrated that the scale dependence of the gravitational parameters, the cosmological constant in particular, leads to an entropy production in the matter system. The picture emerges that the Universe started out from a state of vanishing entropy, and that the radiation entropy observed today is essentially due to the coarse graining (RG flow) in the quantum gravity sector which is related to the expansion of the Universe. Furthermore, the RG improved field equations are shown to possesses solutions with an epoch of power law inflation immediately after the initial singularity. The inflation is driven by the cosmological constant and ends automatically once the RG running has reduced the vacuum energy to the level of the matter energy density."

this is a dynamite paper.

If you would like to hear how Weinberg described his 1979 initiative there is this HINDSIGHT on page 13 of a 1997 paper
http://arxiv.org/abs/hep-th/9702027
This was a talk given in 1996, and at this point Weinberg obviously does not know about the progress made by Reuter.
So he says there are two possibilities for quantum gravity: A) a fixed point of the RG flow is found, making it renormalizable, or B) some entirely new behavior takes over at high energy. And he does not seem very hopeful about alternative A).
 
Last edited:
  • #4
The 'fixed point' Weinberg was alluding to appears to be an imaginary fragment of wishful thinking, IMO.
 
  • #5
Chronos said:
The 'fixed point' Weinberg was alluding to appears to be an imaginary fragment of wishful thinking, IMO.

They have managed to "take a picture" of the fixed point---it could become an icon of quantum gravity. Might make a nice T-shirt.

A lot of numerical study over the past 10 years has gone into calculating the flowlines of the renormalization group and whenever they have had a computer do this, it has traced the ghostly whirlpool of the fixed point.

No matter which case, no matter which angle of projection. Obviously at this point a fair amount of fantasizing is required to deny its existence.

Sorry if I am skeptical, Chronos, but I have to take your comment as unsupported speculation unless you have read the relevant papers and can point to something wrong. :smile:

If anyone would like to do the homework needed to enable meaningful comment, here's a paper
http://arxiv.org/abs/0708.1317
Page 39 has a sample plot of a projection of the RG flow showing how sample trajectories (numerical solutions to the flow equation) spiral into the fixed point.
The initial computer work on this was published by Reuter and Saueressig in 2002 in Physical Review D, preprint hep-th/0110054, so that paper would have similar plots.
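
If you want to play with the qualitative picture yourself, here is a toy flow---emphatically NOT the actual QEG beta functions from those papers, just a made-up two-coupling system whose linearization at the fixed point has complex eigenvalues---integrated numerically in the same general way, so you can watch trajectories spiral into the fixed point.

Code:
import numpy as np

# Toy beta functions for two dimensionless couplings (g, lam). These are NOT
# the Einstein-Hilbert truncation beta functions; they are chosen so that the
# linearization at the fixed point (g*, lam*) has complex eigenvalues with
# negative real part, so trajectories spiral into it as t = ln k increases.
g_star, lam_star = 0.7, 0.2

def beta(x):
    g, lam = x
    dg   = -0.5 * (g - g_star) + 1.5 * (lam - lam_star)
    dlam = -1.5 * (g - g_star) - 0.5 * (lam - lam_star)
    return np.array([dg, dlam])

def integrate(x0, dt=0.01, steps=2000):
    """RK4 integration of dx/dt = beta(x) toward the UV (t -> infinity)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        k1 = beta(x)
        k2 = beta(x + 0.5 * dt * k1)
        k3 = beta(x + 0.5 * dt * k2)
        k4 = beta(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x.copy())
    return np.array(traj)

# Several starting points; all of them spiral into (g*, lam*) in the UV.
for x0 in [(0.1, 0.05), (1.2, 0.5), (0.3, 0.6)]:
    traj = integrate(x0)
    print(x0, "->", traj[-1])   # ends near (0.7, 0.2)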

Maybe the 2002 plot should be what we put on a T-shirt :biggrin:

==============
EDIT: reply to the following post.
Thanks for catching the typo, Arivero. I originally typed 1713 instead of 1317. It's fixed now.
 
Last edited:
  • #7
He failed but published his article? I did not know this used to happen in physics.

Anyway, can't we simply evaluate the divergent integrals by summing an approximate divergent series?

Also, I have taken a look at the renormalization group equation; could you REALLY solve it to get something? It seems too complicated for me, even using numerical methods.
 
  • #8
Klaus_Hoffmann said:
He failed but published his article? I did not know this used to happen in physics.

I don't understand your question, Klaus. Weinberg got the desired result in lower dimensions, but not in D = 4. So he published in 1979 and he urged people to pursue the result for D = 4. But in his 1997 paper he recounts that this did not go so well:

==quote page 12, 13==
One possibility is that the theory remains a quantum field theory, but one in which the finite or infinite number of renormalized couplings do not run off to infinity with increasing energy, but hit a fixed point of the renormalization group equations. One way that can happen is provided by asymptotic freedom in a renormalizable theory, where the fixed point is at zero coupling, but it’s possible to have more general fixed points with infinite numbers of non-zero nonrenormalizable couplings. Now, we don’t know how to calculate these non-zero fixed points very well, but one thing we know with fair certainty is that the trajectories that run into a fixed point in the ultraviolet limit form a finite dimensional subspace of the infinite dimensional space of all coupling constants. (If anyone wants to know how we know that, I’ll explain this later.) That means that the condition, that the trajectories hit a fixed point, is just as restrictive in a nice way as renormalizability used to be: It reduces the number of free coupling parameters to a finite number. We don’t yet know how to do calculations for fixed points that are not near zero coupling.

Some time ago I proposed [26] that these calculations could be done in the theory of gravitation by working in 2 + epsilon dimensions and expanding in powers of epsilon = 2, in analogy with the way that Wilson and Fisher [27] had calculated critical exponents by working in 4 − epsilon dimensions and expanding in powers of epsilon = 1, but this program doesn’t seem to be working very well.

The other possibility, which I have to admit is a priori more likely, is that at very high energy we will run into really new physics, not describable in terms of a quantum field theory. I think that by far the most likely possibility is that this will be something like a string theory.
==endquote==

This program that wasn't going so well is what Reuter made work.

Weinberg reference [26] is to what he wrote in 1979
26. S. Weinberg, in General Relativity, eds. S. W. Hawking and W. Israel (Cambridge University Press, Cambridge, 1979), p. 790.
Also, I have taken a look at the renormalization group equation; could you REALLY solve it to get something? It seems too complicated for me, even using numerical methods.

Did you look at figure 3 of http://arxiv.org/abs/0708.1317 ?
It is on page 39.
This is the paper I cited a few posts back and Arivero corrected the typo.
They used numerical methods to plot many many RG trajectories.

For more information on the numerical business of solving the flow equation and plotting the trajectories, there is a paper in Physical Review D, of 2002. If you want I can get the arXiv preprint number.
 
Last edited:
  • #9
Numerical simulations are still crude. I would not bank on that methodology. Apparently I did not sufficiently grasp the point of the paper. And apparently I am not alone.
 
  • #10
Let us suppose that Reuter is right. Then do we still need LQG? If yes, why?
 
  • #11
Jacques Distler thinks that there are problems with Reuter's work.

For an interesting review and discussion of Reuter's work, see this entry in the blog Reality Conditions.
 
  • #12
If Weinberg had succeeded (or if Reuter succeeds now), would we be looking with more enthusiasm for a really good explanation of all our cooked-up quantization recipes by now? Because once we were happy having "successfully" quantized all four interactions, maybe we would be interested in what all those mathematical gymnastics really lead us to. Hmm... no. Maybe we would just keep looking for more and more unified theories...
 
  • #13
George Jones said:
For an interesting review and discussion of Reuter's work, see this entry in the blog Reality Conditions.
Excellent review!
 
  • #14
George Jones said:
...
For an interesting review and discussion of Reuter's work, see this entry in the blog Reality Conditions.

George, thanks for posting the link to Alejandro Satz 10 April review of the Reuter lectures at the Zakopane QGQG school!
http://realityconditions.blogspot.com/2007/04/report-on-quantum-gravity-school_10.html

It is a fine review and supplements what we have from Reuter, explaining a lot of things clearly in words. The comments from Aaron Bergman, and Alejandro's replies to them, strike me as valuable as well.

If I had to hazard a guess as to the overall lay of the land, I would say that
Reuter QEG and Bojowald LQC need each other.

Bojowald Loop Quantum Cosmology needs Reuter because Reuter tells him how the G and Lambda RUN with scale as the universe expands.
This can provide Bojowald with a satisfactory inflatonless inflation, with natural "reheating", with natural "exit", with scale-invariant structure.
So Reuter is a BONANZA for Bojowald.

But conversely Reuter QEG also needs Bojowald because QEG by itself does not remove the cosmo singularity. QEG is still so close to classic GR that it cannot probe before the bounce. In fact QEG is still so close to conventional QFT that it is subject to the quibbling and bellyaching of Distler :biggrin:

So it will be good if Bojowald can implement the "RG improvement" results of QEG in a completely background independent way, resolve the singularity, and move the whole thing to a venue where it is out of reach of Distlerian claws and fangs.

And then beyond that, there should be a synthesis leading to a more fundamental theory that explains us WHY the G and Lambda run. That is just my tentative scoping out for the time being---nothing depends on its being right, it is just the context in which I see the current exciting events.

===============
REPLY TO Demy's post that follows: Dear Harvey, the present distance between the theories is part of what makes a possible convergence (which Reuter talked about in one of the discussions at Loops 07 that Satz reported) so interesting. I am of course aware that Reuter QEG has no Immirzi :smile:
 
Last edited:
  • #15
Marcus, QEG and LQC are based on two different methods of quantization. It is far from clear that these two methods are mutually compatible. In particular, QEG does not depend on the value of the Immirzi parameter, while LQC does.
 
  • #16
http://arxiv.org/PS_cache/arxiv/pdf/0708/0708.1317v1.pdf
Functional Renormalization Group Equations,
Asymptotic Safety, and Quantum Einstein Gravity
Martin Reuter and Frank Saueressig
09 Aug 2007

p.53 (Last comment of paper)
A general discussion of the geometrical issues involved (scale dependent diffeomorphisms, symmetries, causal structures, etc.) was given in [27], and in [26] these ideas were applied to show that QEG can generate a minimum length dynamically. In [3, 5] it has been pointed out that the QEG spacetimes should have fractal properties, with a fractal dimension equal to 4 on macroscopic and 2 on microscopic scales. This picture was confirmed by the computation of their spectral dimension in [28]. Quite remarkably, the same dynamical dimensional reduction from 4 to 2 has also been observed in Monte-Carlo simulations using the causal triangulation approach [29, 30, 31]. It is therefore intriguing to speculate that this discrete approach and the gravitational average action actually describe the same underlying theory.
-----------------
Can someone add/rephrase/paraphrase/explain?
-------------------

I’ve looked at his citations
http://xxx.lanl.gov/abs/hep-th/0508202
Fractal Spacetime Structure in Asymptotically Safe Gravity
Authors: O. Lauscher, M. Reuter
(Submitted on 26 Aug 2005)
------------------
http://www.arxiv.org/abs/hep-th/0604212
Quantum Gravity, or The Art of Building Spacetime
Authors: J. Ambjorn, J. Jurkiewicz, R. Loll
(Submitted on 28 Apr 2006)
----------------------
A summary is at
http://en.wikipedia.org/wiki/Causal_dynamical_triangulation
 
  • #17
jal said:
..."... In [3, 5] it has been pointed out that the QEG spacetimes should have fractal properties, with a fractal dimension equal to 4 on macroscopic and 2 on microscopic scales.[/b] This picture was confirmed by the computation of their spectral dimension in [28]. Quite remarkably, the same dynamical dimensional reduction from 4 to 2 has also been observed in Monte-Carlo simulations using the causal triangulation approach [29, 30, 31]. It is therefore intriguing to speculate that this discrete approach and the gravitational average action actually describe the same underlying theory.

...

Good. You are researching the citations and checking WikiP. This is going about it systematically!

I won't try to explain the whole picture but just to help fill in some detail. We had a lot of discussion of this at PF back in 2005 when the Spectral Dimension paper of Ambjorn Loll came out. I don't know if I could find those 2005 threads.

One of the points is that THE DIMENSIONALITY OF THE SPACE AROUND YOU IS AN EXPERIMENTALLY MEASURABLE OBSERVABLE and it depends in a natural way on the SCALE at which you measure it.

Pay careful attention to this next, Jal. There are two simple ways to measure the dimensionality at some location. One is that you inflate little balls, compare the radius and the volume, PLOT the experimentally determined relation between radius and volume, and see what the exponent is. You fit a curve. Does V go as R^1.7, or as R^2.39, or as R^4.5?
If you actually perform this experiment where you live I would expect that you will find it goes very nearly as R^3---with an exponent of 3 +/- epsilon.
That method is called the Hausdorff method. The dimension you get, as the exponent when you fit the curve, is called the Hausdorff dimension.
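
For concreteness, here is a minimal sketch of that curve fit. The (radius, volume) data below is synthetic---made-up measurements of an ordinary 3D region with a little noise---just to show the log-log fit that extracts the exponent.

Code:
import numpy as np

# Synthetic (radius, volume) measurements for an ordinary 3D region:
# V = (4/3) pi R^3 with a few percent of noise, mimicking real measurements.
rng = np.random.default_rng(0)
R = np.linspace(0.5, 5.0, 20)
V = (4.0 / 3.0) * np.pi * R**3 * rng.normal(1.0, 0.02, size=R.size)

# Fit log V = d * log R + const; the slope d is the Hausdorff dimension estimate.
d, const = np.polyfit(np.log(R), np.log(V), 1)
print("fitted Hausdorff dimension ~", d)   # comes out close to 3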

Then there is the Spectral method, which employs a RANDOM WALK OR DIFFUSION PROCESS and measures the PROBABILITY OF GETTING LOST and never finding your way back. The higher the dimensionality of the space, the greater the chance of getting permanently lost and the smaller the chance of accidentally wandering back to your start point. It is known that random walks follow the heat equation or diffusion equation, and they can be set up so that you can run the walk over and over again, experimentally determine the return probability, and then calculate what the dimensionality is from the measured return probability.
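
And here is the same idea for the spectral method, as a rough sketch on a fixed flat lattice (so not the fluctuating CDT geometries of the next paragraph), assuming the standard relation that the return probability of a random walk falls off like t^(-d_s/2).

Code:
import numpy as np

def return_probability(dim, t_max, n_walkers=100_000, seed=1):
    """Simple random walks on the Z^dim lattice: record the fraction of walkers
    sitting at the origin after each even number of steps."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_walkers, dim), dtype=np.int64)
    probs = {}
    for t in range(1, t_max + 1):
        axis = rng.integers(0, dim, size=n_walkers)     # which coordinate moves
        step = rng.choice([-1, 1], size=n_walkers)      # move +1 or -1
        pos[np.arange(n_walkers), axis] += step
        if t % 2 == 0:                                  # origin only reachable at even t
            probs[t] = np.mean(np.all(pos == 0, axis=1))
    return probs

# Return probability scales as P(t) ~ t^(-d_s/2), so the slope of
# log P versus log t gives -d_s/2.
probs = return_probability(dim=2, t_max=200)
t = np.array(sorted(probs))
P = np.array([probs[x] for x in t])
slope, _ = np.polyfit(np.log(t[5:]), np.log(P[5:]), 1)
print("estimated spectral dimension ~", -2 * slope)     # ~2 for a 2D lattice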

This kind of thing was done by Ambjorn and Loll in 2004 and 2005 with RANDOM UNIVERSES. They used a simplex model, Causal Dynamical Triangulations (CDT)---a 4-simplex is like a tetrahedron but in 4D.
It was a Monte Carlo (random gamble) thing. They had the universe build itself over and over again out of hundreds of thousands of little simplex lego-blocks. After a million scramblings they would stop and examine the universe and check its dimensionality around some random point. They would check both the SPACE dimension at that point (by keeping the random walk to a spatial slice) and the SPACETIME dimensionality by letting the random walk wander freely.

They also measured the Hausdorff dimension at randomly chosen locations in the universe. The radius would be how many simplexes you hop outwards and the volume would be how many simplexes are enclosed in that radius. The measure of radius and volume is just discrete counting.

THE REASON THIS IS SO INTERESTING IS THAT THEY FOUND DIMENSIONALITY VARYING continuously WITH SCALE IN A SIMILAR WAY TO WHAT REUTER FOUND WITH A COMPLETELY DIFFERENT MODEL THAT DOESN'T HAVE ANY SIMPLICES.

The Reuter approach and the Ambjorn Loll approach are totally different---about as different as two QG approaches can get. But both of them found that if you measure the LARGESCALE macroscopic dimension by taking a big radius around the point you will get the 4D that you expect---human scale perception matches the QG models.
But they both found that as you start to go down to Planck scale the dimensionality starts to vary continuously down to around 1.9 or 2.0 or 2.1----thereabouts.
On the one hand Ambjorn Loll were finding this EXPERIMENTALLY by Monte Carlo, studying their randomly generated lego-block universes.
On the other hand Reuter was finding this ANALYTICALLY by renormalizing quantum gravity with "running" G and Lambda constants (running means varying with scale).

The reason Reuter points this out is that it CORROBORATES his method: it is a remarkable coincidence that his method leads to an unexpected result which some other completely different method ALSO leads to.

Now you probably want to know how space could be so crumpled and wrinkly at small scale that it defies intuition and has the wrong Hausdorff dimensionality, and YET at large scale it acts perfectly innocent with a smooth bland 4D demeanor. And how can one picture in one's mind how it can continuously vary from fractal kinkiness up to smoothness?

==========================
REPLY TO HAELFIX NEXT POST:
People vary as to how much and what kind of vision they bring to the issues you raise. Here is something Reuter himself had to say about future converging with LQG.
this is another Satz report, that followed the one which George Jones linked to:
http://realityconditions.blogspot.com/2007/07/loops-07-conference-report-part-3.html
I think the point is that neither QEG nor LQG is in its final form, and each has things it doesn't do which the other does. Check out what Reuter says,
looking ahead over the next 5 years.
 
Last edited:
  • #18
If Reuter is correct, *pure* QG is solved. It's just a field theory given by the Einstein-Hilbert action, albeit a peculiar one which barely escapes being formally infinite.

But as has been emphasized many times now, it says absolutely nothing about stability against matter, the real physical regime. Even worse, since we don't know what that matter *is* absent a Planckian accelerator, we have absolutely no way of ever computing whatever new nontrivial fixed point emerges, if it even exists. Game over so to speak.

Having said that, Distler's argument is pretty convincing and straightforward, and I don't see a good way around it.
 
  • #19
marcus said:
People vary as to how much and what kind of vision they bring to the issues you raise. Here is something Reuter himself had to say about future converging with LQG. This is another Satz report, that followed the one which George Jones linked to:
http://realityconditions.blogspot.com/2007/07/loops-07-conference-report-part-3.html
I think the point is that neither QEG nor LQG is in its final form, and each has things it doesn't do which the other does. Check out what Reuter says, looking ahead over the next 5 years.

==Reality conditions==
Reuter had one of the most concrete dreams: "It is shown that LQG is equivalent to Asymptotic Safety, and that the quantization ambiguities in it are finite in number and equivalent to the dimensionality of the Non-Gaussian Fixed Point."
==endquote==
 
Last edited:
  • #20
marcus said:
One way to see how physics can get back on track is to visualize what would have happened if Weinberg had succeeded in 1979 and got Reuter's 1998-2007 results.

What are you talking about? This makes no sense whatsoever!
 
  • #21
jal said:
and in [26] these ideas were applied to show that QEG can generate a minimum length dynamically.
-------------------


You still have not said you looked at their reference [26], which is the 2005 Reuter and Schwindt paper. I would like us to do that. I will check it out when I get back; I have to go out now. It is interesting that a minimal length has come up in Reuter, as I would not have expected it to in either Ambjorn Loll CDT or in Reuter QEG.

(not every approach covers every base, which is why the process of convergence of approaches can be so interesting to watch :smile: )

http://arxiv.org/abs/hep-th/0511021
A Minimal Length from the Cutoff Modes in Asymptotically Safe Quantum Gravity
Authors: Martin Reuter, Jan-Markus Schwindt
(Submitted on 2 Nov 2005)

Abstract: Within asymptotically safe Quantum Einstein Gravity (QEG), the quantum 4-sphere is discussed as a specific example of a fractal spacetime manifold. The relation between the infrared cutoff built into the effective average action and the corresponding coarse graining scale is investigated. Analyzing the properties of the pertinent cutoff modes, the possibility that QEG generates a minimal length scale dynamically is explored. While there exists no minimal proper length, the QEG sphere appears to be "fuzzy" in the sense that there is a minimal angular separation below which two points cannot be resolved by the cutoff modes.

Comments: 26 pages, 1 figure

wish I had time to look at this now.
 
Last edited by a moderator:
  • #22
Can someone add/rephrase/paraphrase/explain?
Thanks! Marcus!
The last papers that you have found seem to be questioning some of the conclusions of Martin Reuter and Frank Saueressig (spectral dimension).
The following paper forms part of my citations for a Quantum Minimum Length Structure (QMLS)
Yes, I’ve read it and I would love to get expert input.
Does everyone reject the possibility that at the quantum level the “points” are two dimensional or that the structure could be fractal?
Does everyone reject the possibility that 3d arises from the dynamic dance of those 2d quantum “points”?

(I just did a cut and paste. Anyone interested in clarity would go to the paper)
http://arxiv.org/abs/hep-th/0511021
A Minimal Length from the Cutoff Modes in Asymptotically Safe Quantum Gravity
Authors: Martin Reuter, Jan-Markus Schwindt
p. 5
For a non-gauge theory in flat space the coarse graining or averaging of fields is a well defined procedure, based upon ordinary Fourier analysis, and one finds that in this case the length ℓ is essentially the wave length of the last modes integrated out, the COMs.
p.11
In fact, on the sphere it is easy to write down the geodesic distance (10) explicitly.
Without loss of generality we may assume that the two points x and y are both located on the equator ζ = η = θ = π/2. Denoting their φ-angles by φ(x) and φ(y), respectively, and exploiting that on the equator.
------------
p.19
As a result, the intrinsic distance of x and y is either undefined, or there exist at least two different lengths which satisfy the self-consistency condition (31).
---------------
If you look at my latest blog, WHY? – UNCERTAINTY – SPIN - CONFINEMENT, https://www.physicsforums.com/blogs/jal-58039/why-uncertainty-spin-confinement-1029/ , you will get my simple explanation.
The first drawing shows position #1, 2, 3, 4.
“An energy node can only be at position #1 or position #3. That translates to 50% uncertainty. Position # 1 and # 4 are too close and violate the minimum length.
If it helps you, think of position #1 as real or positive and position #3 as imaginary or negative.
You might argue that there could be an energy node at position #1 and also at position #3. Correct! It could! But each of them would have a different center of spin/orbit. If both of them had the same center of spin/orbit then you would need to identify them as if they had different spin/orbit since that would be the only way to tell them apart. They both would behave as if they had different center of spin/orbit.”
For simplicity, it would be best to ignore one of the points since it would in all probability belong to another spin/orbit and would only be realized at the surface of a black hole which has extreme curvature.
----------
p. 13
As expected, the angular resolution implied by the COMs depends on the RG trajectory. It does so only via the function λ = λ(k) and, as a result, can be of the same size for different values of k. In particular Δφ(k) = Δφ(k♯) and, in the linear regime, Δφ(k) = Δφ(kT/k2).
---------
The separation, k, does not need to be Planck length. It can be any length and there is no need for the point to be Planck size.
--------------
The angular separation (21) is the coordinate distance of two consecutive zeros of the real or imaginary part of Y± along the equator ζ = η = θ = π/2.
----------------
This is where I have said that position #2 and #4 cannot be occupied.
---------------
p.17
Should we therefore expect to find an L_min^macro of the order of 10^-3 cm in the real world? The answer is no, most probably. The reason is that our present discussion is based upon the vacuum field equations where it is the value of the cosmological constant alone which determines the curvature of spacetime. In presence of matter, the scale dependence of Lambda can have an observable effect only if the vacuum energy density ρ_Λ ≡ Λ/8πG is comparable to the matter energy density (including the matter energy density of the measuring device).
------------------
I could give some argument for 10^-3 cm. However, I would require a better definition of "vacuum energy density" and "quark sea" which is within the "drip line". The curvature of spacetime is pronounced in the "quark sea" because of the matter density but is negligible between galaxies (vacuum).
--------------
p. 21
As we pointed out already, the existence of a finite Δφ_min is perfectly consistent with having integrated out all modes of the quantum metric.
---------------
I guess that I've raised too many points... too many questions
jal
 
Last edited by a moderator:
  • #23
Looks like another person whose work we need to keep track of is Max Niedermaier. He has a 77-page review which is to appear in the CQG journal (Classical and Quantum Gravity). His final revision prior to publication was posted just last month.

http://arxiv.org/abs/gr-qc/0610018
The Asymptotic Safety Scenario in Quantum Gravity -- An Introduction
Max Niedermaier
77p, 1 figure; v2: revised and updated; discussion of perturbation theory in higher derivative theories extended. To appear as topical review in CQG
(Submitted on 5 Oct 2006 (v1), last revised 19 Jul 2007 (this version, v2))

"The asymptotic safety scenario in quantum gravity is reviewed, according to which a renormalizable quantum theory of the gravitational field is feasible which reconciles asymptotically safe couplings with unitarity. All presently known evidence is surveyed: (a) from the 2+ epsilon expansion, (b) from the perturbation theory of higher derivative gravity theories and a `large N' expansion in the number of matter fields, (c) from the 2-Killing vector reduction, and (d) from truncated flow equations for the effective average action. Special emphasis is given to the role of perturbation theory as a guide to 'asymptotic safety'. Further it is argued that as a consequence of the scenario the selfinteractions appear two-dimensional in the extreme ultraviolet. Two appendices discuss the distinct roles of the ultraviolet renormalization in perturbation theory and in the flow equation formalism."

As a reminder, ASYMPTOTIC SAFETY for gravity is Weinberg's 1979 program.
The word "asymptotic" here is merely a code-name for the UV limit. The limiting action of gravity at extreme proximity and energy. "Safety" simply means that the action converges to a meaningful limit (a fixed point of the flow that governs its "running", or the "running" of the numerical parameters that determine it.)

I have seen a reference to joint work by Niedermaier and Reuter at the website Living Reviews in Relativity---of the Albert Einstein Institute Potsdam. I'll try to find a link. Yes, here is one:
http://relativity.livingreviews.org/Articles/lrr-2006-5/download/index.html
The Asymptotic Safety Scenario in Quantum Gravity
by
Max Niedermaier and Martin Reuter
 
Last edited by a moderator:
  • #24
marcus said:
The word "asymptotic" here is merely a code-name for the UV limit. The limiting action of gravity at extreme proximity and energy. "Safety" simply means that the action converges to a meaningful limit (a fixed point of the flow that governs its "running", or the "running" of the numerical parameters that determine it.)

No, asymptotic safety is not "merely a code-name for the UV limit", but rather describes something very specific about the behaviour of theories at high energies which is that the renormalized couplings remain finite. This is relevant for gravity in that this condition can be defined even when the theory isn't renormalizable in the ordinary sense of eliminating all but a finite number of free parameters.

Also, the action including all higher order corrections won't in general converge to a meaningful limit nor do we need it to. We simply need to be able to render finite the action or observables to whatever order in the coupling we want to calculate.
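
To put that in formulas (my own schematic paraphrase of the usual definition, not a quote from Weinberg, Reuter, or Niedermaier): writing g_i(k) for the dimensionless essential couplings at RG scale k, asymptotic safety requires

\lim_{k \to \infty} g_i(k) = g_i^{*} \quad \text{with} \quad \beta_i\!\left(g_1^{*}, g_2^{*}, \dots\right) = 0 \ \text{for all } i,

together with the condition that the UV critical surface---the set of trajectories attracted to g^{*} as k \to \infty---is finite dimensional, so that fixing finitely many parameters by experiment determines the theory.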
 
Last edited by a moderator:
  • #25
Getting back to the main question "What if Weinberg had succeeded?" with his program to renormalize gravity 1976-1979.
Part of the idea of the question is to get a better understanding of the relevant history.

Rovelli has a section in his book that gives a timeline sketch of QG history. I'll quote some excerpts. The essential thing is that in this period 1976-1979 many physicists became convinced of something that may in fact be wrong, namely

General Relativity, since "it's unrenormalizable", must be merely an effective theory giving the right numbers in the low energy regime, and there must be a very different fundamental theory which applies at high energy.

Many therefore decided it was a useless exercise to continue trying to quantize General Relativity in ways which respect its main features, such as freely varying geometry represented by the metric, the Einstein-Hilbert action possibly with quantum corrections, and avoidance of a fixed background.

Rovelli has an interesting History section starting on page 393, and he gets to the 1976-1979 period (where it seems things could have gone quite differently) at page 405. I will copy some excerpts, as time permits.

I should mention a SEMANTIC PROBLEM. I think Reuter is now the leading authority on asymptotic safety of gravity. He and his collaborators have the main papers on it and it has become quite a large body of literature. They say gravity is NON-PERTURBATIVELY RENORMALIZABLE. That is, you can experimentally determine a finite number of parameters and then make predictions (at any k-level, including the high-energy limit as k --> infty). That is what renormalizable means, practically speaking. There just is NO PERTURBATION SERIES playing an essential role as there was in earlier cases of renormalizability.

I think we need to go along with Reuter's use of terminology when he says gravity is non-perturbatively renormalizable.

But there are other people who only think of renormalizing in the perturbation theory context. They don't imagine any other way to do it. This in fact was the context in which it first came up. So they would say that "non-perturbatively renormalizable" is a contradiction in terms! They would say that when Reuter asserts that gravity is renormalizable, then for purely VERBAL reasons he must be wrong because (they would claim) renormalizing is only possible in the context of a perturbation series.

This type of objection can lead only to a sterile semantic squabble. So I will just adopt Reuter's terminology and point out where earlier writings differ.
 
Last edited:
  • #26
marcus said:
Getting back to the main question "What if Weinberg had succeeded?"

I believe my post was OT. I guess the reason you couldn't see that is that you failed to understand what asymptotic safety actually is.
 
  • #27
==quote Rovelli page 404 and following==

1975: It becomes generally accepted that GR coupled to matter is not renormalizable. The research program started with Rosenfeld, Fierz, and Pauli is dead.

1976: A first attempt to save the covariant program is made by Steven Weinberg, who also explores the idea of asymptotic safety [37], developing earlier ideas from Giorgio Parisi [347], Kenneth Wilson and others, suggesting that nonrenormalizable theories could nevertheless be meaningful.

To resuscitate the covariant theory, even if in modified form, the path has already been indicated: find a high-energy modification of GR.

Preserving general covariance, there is not much one can do to modify GR. An idea that attracts much attention is supergravity [348]: it seems that by simply coupling a spin-3/2 particle to GR, ...one can get a theory finite even at two loops.

Supersymmetric string theory is born [349]
==endquote==

Reference [37] is the usual one---something by Weinberg published in 1979.
Rovelli's history gives an idea of when the work was done, and under what circumstances.

==page 7==
The failure of perturbative quantum GR is interpreted as a replay of the Fermi theory [footnote: Fermi theory was an empirically successful but nonrenormalizable theory of the weak interactions, just as GR is an empirically successful but nonrenormalizable theory of the gravitational interaction. The solution has been the Glashow-Weinberg-Salam electroweak theory, which corrects Fermi theory at high energy.]
==endquote==

Rovelli was writing this in 2003, and when he says nonrenormalizable he means PERTURBATIVELY nonrenormalizable. Yes it is that, but it is renormalizable after all outside the perturbation series context---where you expand in a series and try to fix successive terms to make the whole thing finite and predictive. Reuter makes it finite and predictive a different way, one that avoids the perturbation series. And that was what Weinberg was trying to do in 1976.

So we can see where things went wrong circa 1976 (on Rovelli's timeline).
People said this theory is a replay of the Fermi weak interaction: it works fine at low energy but it isn't finite at higher energy, so it must be merely effective in a limited range and not fundamental, and we have to try to UNIFY it with something else (like those Nobel guys unified weak into electroweak). Hopefully that will show us what the high-energy version looks like. It will LOOK DIFFERENT from General Relativity. Maybe it won't even be about spacetime geometry. Maybe instead of a geometrical theory it will be a UNIFICATION OF FORCES on a fixed background. We don't have to take GR seriously as a theory-building model.

So by and large people stopped trying to QUANTIZE THE METRIC (the metric was the form in which spacetime geometry was represented in Einstein's theory) and went back to the idea of particles and forces operating on a fixed metric background. And that gets us up to Rovelli's history sketch of the 1980s.
 
Last edited:

