Why all these prejudices against a constant? ("dark energy" is a fake problem)

marcus

==sample quote==
It is especially wrong to talk about a mysterious “substance” to denote dark energy. The expression “substance” is inappropriate and misleading. It is like saying that the centrifugal force that pushes out from a merry-go-round is the “effect of a mysterious substance”.
==endquote==

http://arxiv.org/abs/1002.3966
Why all these prejudices against a constant?
Eugenio Bianchi, Carlo Rovelli
9 pages, 4 figures
(Submitted on 21 Feb 2010)
"The expansion of the observed universe appears to be accelerating. A simple explanation of this phenomenon is provided by the non-vanishing of the cosmological constant in the Einstein equations. Arguments are commonly presented to the effect that this simple explanation is not viable or not sufficient, and therefore we are facing the 'great mystery' of the 'nature of a dark energy'. We argue that these arguments are unconvincing, or ill-founded."
 


The lambda constant is just a constant that naturally occurs when you write down the most general form of the action. With Einstein it was there already at the start! Not something he put in as an afterthought to make cosmology come out right.

==quote==
In fact, it is not even true that Einstein introduced the λ term because of cosmology. He knew about this term in the gravitational equations much earlier than his cosmological work. This can be deduced from a footnote of his 1916 main work on general relativity [9] (the footnote is on page 180 of the English version). Einstein derives the gravitational field equations from a list of physical requirements. In the footnote, he notices that the field equations he writes are not the most general possible ones, because there is another possible term, which is in fact the cosmological term (the notation “λ” already appears in this footnote).
The most general low-energy second order action for the gravitational field, invariant under the relevant symmetry (diffeomorphisms) is...
...
which leads to (1). It depends on two constants, the Newton constant G and the cosmological constant λ, and there is no physical reason for discarding the second term...
==endquote==
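For reference, the action the quote elides is presumably the standard Einstein-Hilbert action with the λ term (sketched here from the standard textbook form, not copied verbatim from the paper):

```latex
S[g] \;=\; \frac{1}{16\pi G} \int \mathrm{d}^4x \,\sqrt{-g}\,\bigl(R[g] - 2\lambda\bigr)
```

Varying this with respect to the metric gives the field equations with the λ g_{μν} term included, which is presumably the "(1)" the quote refers to.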
 


The cosmological constant becomes a mystery as soon as you do not write it on the left hand = "the gravity" side of the equations

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}

but if you write it on the right hand = "the matter" side.

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}

If you leave this term on the left hand side, the question where it comes from and why it is there is still open, but it is not a question about matter, dark energy or something like that; it is a question about gravity.
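A quick way to see what the vacuum equation implies is to take its trace with g^{μν} in four dimensions; here is a small sympy sketch of that one-line computation:

```python
import sympy as sp

# Trace the vacuum equation R_uv - (1/2) g_uv R = -Lambda g_uv with g^{uv}.
# In four dimensions g^{uv} g_uv = 4 and g^{uv} R_uv = R, so the trace reads
# R - 2R = -4*Lambda.
R, Lam = sp.symbols('R Lambda')
trace_eq = sp.Eq(R - sp.Rational(1, 2) * 4 * R, -4 * Lam)
print(sp.solve(trace_eq, R))  # -> [4*Lambda]
```

Substituting R = 4Λ back into the equation gives R_uv = Λ g_uv, i.e. a constant-curvature (de Sitter for Λ > 0) vacuum.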
 


lambda is the coupling to the volume of space-time. It belongs on the left-hand side; you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

The question is not why it is there but rather what sets its value. In particular why is it so small? One answer coming from the ERG approach is that there is an infrared fixed point for which lambda=0. Hence on large scales lambda is small.
 


Finbar said:
lambda is the coupling to the volume of space-time. It belongs on the left-hand side; you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

The question is not why it is there but rather what sets its value. In particular why is it so small? One answer coming from the ERG approach is that there is an infrared fixed point for which lambda=0. Hence on large scales lambda is small.

http://arxiv.org/abs/0910.5167 fig 3, 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems.
 


atyy said:
http://arxiv.org/abs/0910.5167 fig 3, 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems.

The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.
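To illustrate why landing on the IR fixed point looks special, here is a toy flow that keeps only the canonical-scaling part of the beta functions for the dimensionless couplings g(k) = G k^2 and lam(k) = Λ/k^2 (a deliberate oversimplification of the full Einstein-Hilbert truncation; the numbers are illustrative only):

```python
import math

# Toy RG flow near the Gaussian fixed point, canonical scaling only:
#   dg/dt = +2 g,   dlam/dt = -2 lam,   with t = ln k.
# Toward the infrared (t -> -inf) g dies off, but lam blows up unless it is
# exactly zero: the Gaussian fixed point is IR-attractive in g and
# IR-repulsive in lam, so only the lam = 0 trajectory actually reaches it.
def flow(g0, lam0, t):
    return g0 * math.exp(2 * t), lam0 * math.exp(-2 * t)

for lam0 in (0.0, 1e-6):
    g, lam = flow(1.0, lam0, -10.0)  # run 10 e-folds toward the IR
    print(f"lam0 = {lam0:g}: g = {g:.3e}, lam = {lam:.3e}")
```

Even a tiny initial lam gets amplified toward the IR, which is one way to phrase the "odd that we sit exactly on that trajectory" worry.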
 


Finbar said:
It belongs on the left-hand side you can see this from the fact that G does not couple to it...

The question is not why it is there but rather what sets its value.

Yes. Just like other constants, why is alpha = 1/137?
Why is the ratio of electron mass to Planck mass so small?
I think part of the aim of the paper is to deflate some of the hype surrounding this particular constant.
Not to say it's not interesting though! It would be great to get some handle on why it's that particular size.
 
Last edited:


Finbar said:
The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?

Finbar said:
I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.

I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298
 


atyy said:
Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?



I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298

Hmm, well, both papers mention AS and Horava. I think the CDT guys are hoping that it can be both AS and Horava depending on how they tune their parameters.


The Gaussian fixed point is always going to be there, as it just corresponds to the vanishing of the dimensionless couplings. But I don't think you need to be on a trajectory that flows to the IR fixed point. Better to read this paper

http://arXiv.org/abs/hep-th/0410119

"Assuming that Quantum Einstein Gravity (QEG) is the correct theory of gravity on all length scales we use analytical results from nonperturbative renormalization group (RG) equations as well as experimental input in order to characterize the special RG trajectory of QEG which is realized in Nature and to determine its parameters. On this trajectory, we identify a regime of scales where gravitational physics is well described by classical General Relativity. Strong renormalization effects occur at both larger and smaller momentum scales. The latter lead to a growth of Newton's constant at large distances. We argue that this effect becomes visible at the scale of galaxies and could provide a solution to the astrophysical missing mass problem which does not require any dark matter. We show that an extremely weak power law running of Newton's constant leads to flat galaxy rotation curves similar to those observed in Nature. Furthermore, a possible resolution of the cosmological constant problem is proposed by noting that all RG trajectories admitting a long classical regime automatically give rise to a small cosmological constant."
 
  • #10


tom.stoer said:
The cosmological constant becomes a mystery as soon as you do not write it on the left hand = "the gravity" side of the equations

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}

but if you write it on the right hand = "the matter" side.

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}

If you leave this term on the left hand side, the question where it comes from and why it is there is still open, but it is not a question about matter, dark energy or something like that; it is a question about gravity.

I'd like to get into the habit of thinking of it on the left hand side.
However I'm used to seeing the constant given in the form OmegaLambda. A common estimate is OmegaLambda = 0.73.

That means the (fictional?) dark energy is 73% of critical density. If I express critical density in terms of today's Hubble rate H, then what I seem to get is that

Lambda = 3 * H^2 * OmegaLambda

= 3 * 0.73 * (71 km/s per megaparsec)^2 ~ 10^-35 second^-2

Can you confirm that this is the right way to get the lefthandside Lambda from the information we are usually given?
 
Last edited:
  • #11


Finbar posted:

lambda is the coupling to the volume of space-time. It belongs on the left-hand side you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

I REALLY like that thought! What's the origin of these two relationships?? I don't mean I doubt it, but what led to these particular couplings?? Is it of purely mathematical origin or instead physical insights...or a combination??

"The man who had the courage to tell everybody that their ideas on space and time had to be changed, then did not dare predicting an expanding universe, even if his jewel theory was saying so..." and "Even a total genius can be silly, at times." ...suggests Einstein did not appreciate the nature of lambda early on.
 
Last edited:
  • #12


marcus said:
I'd like to get into the habit of thinking of it on the left hand side.

...

Can you confirm that this is right way to get the lefthandside Lambda from the information we are usually given?
Yes, I can confirm that this is the way you get OmegaLambda. But your interpretation
marcus said:
That means the (fictional?) dark energy is 73% of critical density.
means that (implicitly) you think about it as something that appears on the right hand side: density, dark energy, ...
 
  • #13


tom.stoer said:
Yes, I can confirm that this is the way you get OmegaLambda...

That's not what I was asking about. We are constantly being told that OmegaLambda = 0.73, or thereabouts. We can take that as the current estimate.

What I never (or almost never) see an estimate for is LAMBDA ITSELF. The genuine lefthand side article.

That is what I want to calculate. I think it comes to 1.1 x 10^-35 second^-2
or thereabouts.

Would it be more correct to express it in units of reciprocal area, like in meter^-2?

That's what I think of as a common unit for curvature?

What I want confirmation for, or at least your opinion on, is if we are thinking just of the lefthandside form of the cosmo constant, which we are calling Lambda, and if we are given the commonly published figures of 71 for Hubble, and 0.73 for (fictional?) "dark energy fraction,"
then do we use the stated formula to get Lambda? Namely:

Lambda = 3 *H^2* OmegaLambda
 
Last edited:
  • #14


marcus said:
Lambda = 3 *H^2* OmegaLambda

Yes.
 
  • #15


marcus said:
Lambda = 3 *H^2* OmegaLambda

George Jones said:
Yes.

Good. Thank you George. So since I'm always seeing the published figures
71 km/s per megaparsec, and 0.73 (for the Hubble rate and the "dark energy fraction") I can plug the blue thing into google and get Lambda.

3*0.73*(71 km/s per megaparsec)^2

Anybody can do it themselves. If you paste that blue expression into the google window, what you get is:
1.15946854e-35 s^-2

So that is what Lambda "really is", if you round off appropriately:

Lambda = 1.16 x 10^-35 seconds^-2
plus or minus whatever uncertainty is contributed by the 0.73 and the 71.

And you can change the numbers 71 and 0.73 to agree with whatever the latest observations indicate, and the google calculator will give you the corresponding estimate for Lambda, accordingly.
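The whole calculation also fits in a few lines of Python; the unit conversions are the only extra ingredient, and dividing by c^2 answers the earlier question about quoting Lambda in m^-2 (curvature units):

```python
# Lambda = 3 * OmegaLambda * H0^2, using the figures quoted in the thread.
H0 = 71e3 / 3.0857e22     # 71 km/s per megaparsec, converted to s^-1
Omega_Lambda = 0.73
c = 2.998e8               # speed of light in m/s

Lam_per_s2 = 3 * Omega_Lambda * H0**2   # Lambda in s^-2
Lam_per_m2 = Lam_per_s2 / c**2          # divide by c^2 for curvature units m^-2

print(f"Lambda = {Lam_per_s2:.3e} s^-2 = {Lam_per_m2:.3e} m^-2")
# -> Lambda = 1.159e-35 s^-2 = 1.290e-52 m^-2
```

Swapping in newer values of H0 and OmegaLambda updates the estimate the same way the google-calculator recipe does.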

If Rovelli is right, and the others that share his views on the subject, then this is a basic constant of nature and we had better start getting to know it, getting used to it, and treating it with some of the same respect we normally show basic constants.
 
Last edited:
  • #16


Bianchi and Rovelli are not saying anything new, are they? Take eg. this 2007 review

http://arxiv.org/abs/0705.2533
"The observational and theoretical features described above suggests that one should consider cosmological constant as the most natural candidate for dark energy. Though it leads to well known problems, it is also the most economical [just one number] and simplest explanation for all the observations. Once we invoke the cosmological constant, classical gravity will be described by the three constants G, c and Lambda"
 
  • #17


I agree, they are not saying anything new. They stress the following:
1) it is natural to consider Lambda as a constant of nature
2) one should distinguish between "QFT is the source of Lambda" and "QFT could cause corrections to the value of Lambda"
3) the reason for the puzzle is not Lambda, but Lambda on the right hand side of the equation ...
4) ... plus an idea how to calculate it - which fails by 120 orders of magnitude

Let's assume I have some biological theory; unfortunately it says not so much about mammals, birds etc., but I claim that this theory provides an explanation of the zoogenesis of the duckbill from first principles. But applying this theory, it predicts the duckbill to look like an orca ...

Now I ask you: does this make the duckbill even more enigmatic, or does it mean that my theory is plainly wrong?
 
  • #18


I guess ultimately one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are like they are. Maybe a lot of the mystery around lambda is due to the fact that it appears to be so closely related (although there are certainly known problems with this) to the expected zero point energy density that it's hard to resist the temptation that there is something more to this - something that may or may not explain the value of this constant in the deeper way that we probably all seek anyway?

As far as I am concerned, I still seek a deeper understanding of the whole E-H action, where all terms and constants beg for explanation. My working picture at the moment, also in line with some of the statistical approaches to this, makes me think of Einstein's equations rather as an equation of state, defining an equilibrium, suggesting that the "constants" might not be proper constants; maybe they just appear constant to us at this moment. So maybe it's not that important what the values are; the most interesting thing is the logic that sets their values. The actual logic today is our cosmological models: from the Hubble expansion we infer the value, but this is a human-level mechanism, and the value would in principle evolve with the evolution of human cosmology. But we still don't have the deeper intrinsic physical mechanism for its evolution.

But I also think that seeking the answer in the form of hidden or dark matter or energy "out there" that is like ordinary matter, except not visible, might be a sidetrack. I think the "appearance" of hidden energy or an accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons, rather than expanding universes etc. But that's still very underdeveloped.

/Fredrik
 
  • #19


Marcus: again, thanks for bringing another great paper to our attention...


Bianchi and Rovelli are not saying anything new, are they?

That seems right, but I did see some perspectives in the paper that were new to me, such as:

An effect commonly put forward to support the "reality" of such a vacuum energy is the Casimir effect. But the Casimir effect does not reveal the existence of a vacuum energy: it reveals the effect of a "change" in vacuum energy, and it says nothing about where the zero point value of this energy is.


and this regarding the coincidence argument:

In order for us to comfortably exist and observe it, we must be ... in a period of the history of the universe where a civilization like ours could exist. Namely when heavy elements abound, but the universe is not dead yet. ...in a universe with the value of lambda like in our universe, it is quite reasonable that humans exist during those 10 or so billion years when omega_b and omega_lambda are within a few orders of magnitude from each other. Not much earlier, when there is nothing like the Earth, not much later when stars like ours will be dying.


FRA
I think the "appearance" of hidden energy or an accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons, rather than expanding universes etc

yes!
I also think we are currently missing much of what appears as an information-based universe.
 
  • #20


I guess ultimately, one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are like they are

no guessing..absolutely "yes" :

In gravitational physics there is nothing mysterious in the cosmological constant. At least nothing more mysterious than the Maxwell equations, the Yang-Mills equations, the Dirac equation, or the Standard Model equations. These equations contain constants whose values we are not able to compute from first principles. The cosmological constant is in no sense more of a "mystery" than any other among the numerous constants in our fundamental theories.

I wish the authors would have instead acknowledged they are ALL equally 'mysterious'..
 
Last edited:
  • #21


Naty1 said:
I wish the authors would have instead acknowledged they are ALL equally 'mysterious'..

Yes, but one can respond to this in two ways, either

1) stop worrying about the "mysterious lambda"

or

2) START worrying about all mysterious "constants". In the generic sense "constant" could also apply to things other than numbers, for example "constant" symmetries, and thus "physical law" itself. I.e. the fact that it may be "just another constant" doesn't make it less mysterious. Maybe we just have more "clues" to this particular constant than to, say, the gravitational constant or Planck's constant?

/Fredrik
 
  • #22


START worrying about all mysterious "constants".

makes sense and I think in general scientists do...still strikes me as odd that we don't have the first principles to determine the basics in the standard model...clearly we are missing some "information"...pun intended...
 
  • #23


The CC is not a problem for GR (well, except that it makes various cosmological solutions a little uglier), but really a generic quantum problem.

It doesn't matter what side of the field equation you put it on; the problem is the same, namely an unnatural cancellation between two quantities that a priori have nothing to do with one another (read: no known physical relation).

In the full quantum theory, we are interested in the expectation value of the stress energy tensor <Tuv>, which in vacuum is proportional to <P> guv by Lorentz invariance. By inspection of Einstein's field equations, this is equivalent to adding a term to the effective cosmological constant.

lambda effective = lambda + 8piG <P>, where lambda is just the old simple (undetermined classical constant of integration) and <P> is the energy density of the quantum vacuum.

Now (lambda effective/8piG) = Pv ~ 10^-47 GeV^4 by experiment (WMAP, say).

The problem is that we know how to calculate <P>: generically it is simply summing up all the normal modes of the zero point energy of some field (or sets of fields), up to some cutoff. If we take the cutoff to be Mpl, <P> will be something like ~10^72 GeV^4, and then you notice that (lambda/8piG + <P>), in order to satisfy experimental bounds, must delicately conspire to cancel to something like 120 decimal places. The problem is that there is no obvious physical reason why lambda/8piG (a quantity arising from a classical equation) should have anything whatsoever to do with <P> (a quantum quantity). Now, if there were an unknown symmetry that related them, you might venture to guess that they could cancel exactly, but no such symmetry is known, and worse, they don't cancel exactly.

I should point out that you can't get around this miracle trivially. Even if you completely ignore everything from the electroweak scale all the way up to the Planck scale and instead only consider standard model physics, setting the cutoff to something like the QCD scale, you still have about 40 orders of magnitude worth of decimal places to account for.
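The sizes quoted above can be put side by side in a couple of lines (the densities are the thread's round numbers, not a careful QFT calculation):

```python
import math

# Observed vacuum energy density vs. the naive Planck-cutoff zero-point
# estimate, both in GeV^4, as quoted in the post above.
rho_obs = 1e-47     # from WMAP-era fits
rho_planck = 1e72   # naive cutoff at the Planck scale
print(f"Planck cutoff: ~{math.log10(rho_planck / rho_obs):.0f} decimal places of cancellation")

# Even a QCD-scale cutoff (~0.2 GeV, so rho ~ 0.2**4 GeV^4) leaves a big gap:
rho_qcd = 0.2**4
print(f"QCD cutoff:    ~{math.log10(rho_qcd / rho_obs):.0f} decimal places of cancellation")
```

This reproduces the "~120 decimal places" and "about 40 orders of magnitude" figures from the post.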

Now you are of course free to simply set the constant term equal to (minus) the tree level or semiclassical contribution to <P>, so that the two cancel. The problem then is that you still have to talk about the shift of the vacuum energy by higher order terms induced by radiative corrections, and so you have to arrange it so that each constant appearing in front of the counterterms is fine-tuned order by order, such that the final sum approaches the experimental bound.

It is this, more than anything else, that really is the crux of the problem. The thing we measure in our telescopes is necessarily the full theory (b/c quantum mechanics is part of the real world). And in this quantum theory, nothing protects the vacuum energy from radiative corrections.

An analogous (though much less severe) problem occurs for instance, when you consider the smallness of the Neutrino mass relative to say the Higgs vev. If you naively proceed as above, you see that there is an unnatural cancellation that should take place. Of course there, we are rescued by a mechanism that forces the two terms involved in the cancellation to actually be close (this is called the seesaw mechanism).
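For comparison, the seesaw arithmetic mentioned above is essentially a one-liner (the scales are illustrative round numbers, not fitted values):

```python
# Seesaw estimate: a Dirac mass near the electroweak scale and a heavy
# Majorana scale M give a light neutrino mass m_nu ~ mD^2 / M.
mD = 100.0    # GeV, electroweak-scale Dirac mass (illustrative)
M = 1e15      # GeV, heavy Majorana scale (illustrative)
m_nu_eV = (mD**2 / M) * 1e9   # convert GeV to eV
print(f"m_nu ~ {m_nu_eV:.2f} eV")  # -> m_nu ~ 0.01 eV, the right ballpark
```

The heavy scale naturally suppresses the light mass, which is what "forces the two terms involved in the cancellation to actually be close".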
 
Last edited:
  • #24


It's worth pointing out how supersymmetry almost solves the problem and how it elucidates the nature of the issue.

Assume that the <P> appearing above was for some reason identically zero; then indeed lambda effective would just be a constant of integration and everyone would be happy. No rhyme or reason why it's small or big, or whatever. Who cares, it's just a number that experiment happened to find. You could invoke the anthropic principle trivially if you really wanted to at that point and no one would mind.

And indeed, in exact rigid supersymmetry it was noted long ago that fermion loops exactly cancel boson loops and the net vacuum contribution is zero (at least perturbatively).

The problem is, exact supersymmetry is not the way the world works, and it must be broken. When you break rigid supersymmetry in this framework you induce terms that necessarily set <P> != 0 (and if you include gravity and make the susy local, the superpotential and the Kahler potential will not in general exactly cancel!) and you are back to worrying about how weird it is for physical cancellations of that magnitude to take place (although now the problem is cut in two on a log scale and may also be sensitive to exactly where the supersymmetry breaking scale is set).
 
  • #25


Haelfix,

I think you agree with Rovelli, but I am not sure. He does not deny that the smallness of lambda is a problem, but he says that one must distinguish between 1) a small classical (tree level) lambda which stays small even if you calculate quantum corrections and 2) a zero classical lambda where the non-zero part is purely quantum mechanical.

So according to Rovelli the problem is why lambda stays small even if it is subject to quantum corrections, not how quantum corrections create a lambda which is zero classically.

Compare it to the Higgs mass. It is unclear how the Higgs mass is protected against quantum corrections pushing it to the Planck scale. But these effects (or their absence) seem to have nothing to do with the creation of the Higgs mass at all. The same applies to all other parameters in the SM. It is a puzzle where they are coming from, but it is fairly well understood how they behave under quantum corrections (the Higgs mass is an exception).

What you are saying about SUSY is the core of the problem. Zero lambda would be fine, but tiny lambda including quantum corrections from broken SUSY is a riddle. But if you look at all other classical field theories, they make perfect sense w/o quantum corrections in a certain regime. If you restrict yourself to this regime there is no problem with the constants at all.
 
  • #26


I don't quite follow.

The way I read the paper was that it wasn't anything new to the standard story I told above, it just restated it differently.

To simplify the terminology, and with total abuse of notation and disregard for constants, I'll just state the above equation again: Lambda total = lambda classical + lambda quantum. Lambda total is fixed by experiment.

You are free to set lt = lc, but then you have to explain why lq is zero (this is what I think he wants us to do). You can set lc = 0, but then you have to explain why lq is however many orders of magnitude different than a qft calculation tells us it should be. Or you can insist that there is some sort of mechanism that relates lc and lq such that they are extremely close and the fine-tuning becomes natural.

The problem is the same in all three cases; it's basically just a relabeling of what you want to call things.
 
  • #27


Naty1 said:
makes sense and I think in general scientists do...still strikes me as odd that we don't have the first principles to determine the basics in the standard model...clearly we are missing some "information"...pun intended...

I think the intrinsic information view that I seek (rather than the extrinsic block-based info-picture) is generally not something "most physicists" are officially interested in, as far as I can judge. Maybe secretly, but a lot of the reasoning in some research papers still maintains a somewhat realist view of physical law. I think this is a realist heritage we are still stuck with.

QM and relativity did away with some realism, but not all of it. Both are somehow attempts at acknowledging the incompleteness and relativity of nature, but without acknowledging the incompleteness and relativity of physical law. There is an ambiguity there IMO, because _information about physical law_ is not treated on the same footing as _information about the initial state_ of a system, when IMO it should be.

/Fredrik
 
  • #28


Haelfix said:
The problem is the same in all three cases; it's basically just a relabeling of what you want to call things.

I am not quite sure.

Let's try a different approach: for many constants in nature one expects that they are scale-dependent. If they aren't, one has to find a mechanism explaining why they are protected. What we measure are neither the bare nor the tree-level values but always the "dressed" values where all quantum corrections are already taken into account.

Now we split the constants in a tree level and a quantum correction part (I do not know if this is really a good idea :-)

I think one can state the problem as follows:
1) if we believe in this "classical part + quantum correction part" story, then we have to solve two problems: what causes the classical part, and why should it be protected against scaling?
2) if we do not believe in this split, we have to solve the problem of what causes the cc at all.

I think what Rovelli is saying is that it's not clear to him why the mechanism which causes the existence of the cc at all should be the same as the mechanism that causes its scaling.

(example: we understand the mechanism which scales a mass-term in QFT, but we do not understand where this mass term comes from; if we use the Higgs, again we understand the scaling of the Yukawa-coupling, but we do not understand why it is there at all)
 
  • #29


Just one question here: why all this quibbling, if Marcus himself supports asymptotic safety? The small value of the cosmological constant is just a consequence of the non-trivial fixed point in G x Lambda space, due to the nonperturbative renormalizability of the Einstein-Hilbert action.
 
  • #30


This means that lambda is a term on the left hand side and is somehow protected against UV corrections. It would solve the problem of its smallness, not of its existence.

What is the current status of asymptotic safety?
 
  • #31


Well, asymptotic safety doesn't work without lambda...
 
  • #32


Here's an alternative take on the CC problem:
http://arxiv.org/abs/1103.4841
The cosmological constant: a lesson from Bose-Einstein condensates
Stefano Finazzi, Stefano Liberati, Lorenzo Sindoni
(Submitted on 24 Mar 2011)
The cosmological constant is one of the most pressing problems in modern physics. In this Letter, we address the issue of its nature and computation using an analogue gravity standpoint as a toy model for an emergent gravity scenario. Even if it is well known that phonons in some condensed matter systems propagate like a quantum field on a curved spacetime, only recently it has been shown that the dynamics of the analogue metric in a Bose-Einstein condensate can be described by a Poisson-like equation with a vacuum source term reminiscent of a cosmological constant. Here we directly compute this term and confront it with the other energy scales of the system. On the gravity side of the analogy, this model suggests that in emergent gravity scenarios it is natural for the cosmological constant to be much smaller than its naif value computed as the zero-point energy of the emergent effective field theory. The striking outcome of our investigation is that the value of this constant cannot be easily predicted by just looking at the ground state energy of the microscopic system from which spacetime and its dynamics should emerge. A proper computation would require the knowledge of both the full microscopic quantum theory and a detailed understanding about how Einstein equations emerge from such a fundamental theory. In this light, the cosmological constant appears even more a decisive test bench for any quantum/emergent gravity scenario.
5 pages, 1 figure
 
Last edited:
  • #33
  • #34


The Finazzi-Liberati-Sindoni (FLS) paper could be something of a game-changer, so I want to back up and reconsider what I was saying. Here is an excerpt from their conclusions:
==quote FLS http://arxiv.org/abs/1103.4841 ==
...The implications for gravity are twofold. First, there could be no a priori reason why the cosmological constant should be computed as the zero-point energy of the system. More properly, its computation must inevitably pass through the derivation of Einstein equations emerging from the underlying microscopic system. Second, the energy scale of Λ can be several orders of magnitude smaller than all the other energy scales for the presence of a very small number, nonperturbative in origin, which cannot be computed within the framework of an effective field theory dealing only with the emergent degrees of freedom (i.e. semiclassical gravity).

The model discussed in this Letter shows all this explicitly. Furthermore, it strongly supports a picture where gravity is a collective phenomenon in a pregeometric theory. In fact, the cosmological constant puzzle is elegantly solved in those scenarios. From an emergent gravity approach, the low energy effective action (and its renormalization group flow) is obviously computed within a framework that has nothing to do with quantum field theories in curved spacetime. Indeed, if we interpreted the cosmological constant as a coupling constant controlling some self-interaction of the gravitational field, rather than as a vacuum energy, it would straightforwardly follow that the explanation of its value (and of its properties under renormalization) would naturally sit outside the domain of semiclassical gravity.

For instance, in a group field theory scenario (a generalization to higher dimensions of matrix models for two dimensional quantum gravity [19]), it is transparent that the origin of the gravitational coupling constants has nothing to do with ideas like “vacuum energy” or statements like “energy gravitates”, because energy itself is an emergent concept. Rather, the value of Λ is determined by the microphysics, and, most importantly, by the procedure to approach the continuum semiclassical limit. In this respect, it is conceivable that the very notion of cosmological constant as a form of energy intrinsic to the vacuum is ultimately misleading. To date, little is known about the macroscopic regime of models like group field theories, even though some preliminary steps have been recently done [20]. Nonetheless, analogue models elucidate in simple ways what is expected to happen and can suggest how to further develop investigations in quantum gravity models. In this respect, the reasoning of this Letter sheds a totally different light on the cosmological constant problem, turning it from a failure of effective field theory to a question about the emergence of the spacetime.
==endquote==

This is a brief paper (besides references, only 4 pages!) with potentially far-reaching implications, it seems to me. I don't recall our discussing it---any comments?
 
Last edited:
  • #35


More reasons to mistrust the "dark energy" interpretation of the cosmological constant (and the touted bafflement about its size) can be found in a review article for the special SIGMA issue on Loop gravity and cosmology, written by Larry Sindoni of AEI Potsdam.

http://arxiv.org/abs/1110.0686
Emergent models for gravity: an overview
L. Sindoni
(Submitted on 4 Oct 2011)
We give a critical overview of various attempts to describe gravity as an emergent phenomenon, starting from examples of condensed matter physics, to arrive to more sophisticated pregeometric models. The common line of thought is to view the graviton as a composite particle/collective mode. However, we will describe many different ways in which this idea is realized in practice.
54 pages. Invited review for SIGMA Special Issue "Loop Quantum Gravity and Cosmology".

I tend now to expect this Sindoni review of Emergent Gravity will become a basic well-cited paper, and that the SIGMA special LQG/C issue will constitute the next big Loop gravity book. Many of its chapters have now been posted as arxiv preprints. It's clearly going to be a valuable collection.

Lorenzo Sindoni gave a seminar in December 2008 on Emergent and Analogue Gravity that is on video http://pirsa.org/08120049/.
Stefano Liberati at SISSA was his advisor, PhD in 2009 if I remember right.
 
Last edited:
  • #36


Marcus, I believe I'm interested in the subject of your last three postings but it’s a little beyond me. Do you have the patience to explain it one more time in a simpler way? Thanks
 
  • #37


Bill,
I can try to help but I don't know very much about the things Sindoni talks about in his Review paper on Emergent Gravity. Also I don't know much about what is covered in the Finazzi Liberati Sindoni paper. I'm impressed, but it's new stuff to me.
Sindoni http://arxiv.org/abs/1110.0686
FLS http://arxiv.org/abs/1103.4841

I come at this from the perspective of the paper mentioned in the initial post of this thread:
Bianchi Rovelli "Why all these prejudices...?" http://arxiv.org/abs/1002.3966
That, by contrast, is an easy paper to read, very down to earth---you could start there.

They basically say that Lambda (a small constant curvature---or reciprocal-area constant) belongs naturally on the lefthand side of the Einstein GR equation, because it is allowed by the symmetry of the theory--covariance.
There is no need to think of it as an energy. No need to drag it over to the righthand side where the energy and matter terms are.
No need to get it confused with the QFT vacuum energy. That is QFT's problem: they calculate something in a fixed flat-geometry context (quite alien to GR) and it comes out ridiculously wrong. So they should deal with it.
Lambda on the lefthand side of the GR equation is a tiny constant curvature determined observationally.

"Dark energy" is a phony idea. "Dark energy problem" is hype. Case closed. So that's simple enough.

Tom Stoer has a good discussion at the beginning of the thread.

Now what you express interest in here is different. I was talking about two new papers that I don't understand and wish someone here would explain to me. THEY PRESENT AN IDEA OF HOW GR COULD EMERGE FROM SOMETHING DEEPER (pre-geometry?) and even HOW LAMBDA MIGHT COME TO BE what it is.

So one thing they do is strengthen the case that we should not think of Lambda as some kind of "dark energy" field. And they say this explicitly. It is a feature that emerges along with the rest of GR, in their scheme, from some more fundamental degrees of freedom.

Today I have been spending time offline trying to read the Sindoni paper. I am woefully unprepared to explain it, or help you.
Same with the Finazzi Liberati Sindoni (FLS). I was struggling with that yesterday. I guess I should get back to it now.
 
Last edited:
  • #38


Thanks Marcus I appreciate any help. It is the two papers that interest me. I think I’ll try reading through them a few more times.
 
  • #39


Me too, maybe we can help each other out.
There are other papers in this cluster, by Sindoni et al, that appeared earlier this year. They may help us understand.
 
  • #40


I have read that Albert Einstein declared his introduction of the cosmological constant the greatest blunder of his life.
 
  • #41


PatrickPowers said:
I have read that Albert Einstein declared his introduction of the cosmological constant the greatest blunder of his life.
Patrick,
there's a subtle point here that is significant but often missed. A very readable discussion, one that does not short-cut the facts, is on page 2 of the "Why all these prejudices...?" paper http://arxiv.org/abs/1002.3966.
You might be interested in having a look at the half-page of discussion leading up to this conclusion, which I quote:
Einstein had in his hands a theory that predicted the cosmic expansion (or contraction) without cosmological constant, with a generic value of the cosmological constant, and even, because of the instability, with a fine-tuned value of the cosmological constant. But he nevertheless chose to believe in the fine-tuned value, goofed-out on the instability, and wrote a paper claiming that his equations were compatible with a static universe! These are facts. No surprise that later he referred to all this as his “greatest blunder”: he had a spectacular prediction in his notebook and contrived his own theory to the point of making a mistake about stability, just to avoid making ... a correct prediction! Even a total genius can be silly, at times.
Why is this relevant for the debate about the cosmological constant? Because short-cutting this story into reporting that Einstein added the cosmological term and then declared this his “greatest blunder” is to charge the cosmological term with a negative judgment that Einstein certainly never meant.
In fact, it may not even be true that Einstein introduced the λ term because of cosmology. He probably knew about this term in the gravitational equations much earlier than his cosmological work. This can be deduced from a footnote of his 1916 main work on general relativity [9] (the footnote is on page 180 of the English version). Einstein derives the gravitational field equations from a list of physical requirements. In the footnote, he notices that the field equations he writes are not the most general possible ones, because there are other possible terms. The cosmological term is one of these (the notation “λ” already appears in this footnote)...​
 
Last edited by a moderator:
  • #42


marcus said:
"Dark energy" is a phony idea. "Dark energy problem" is hype. Case closed. So that's simple enough.
"No need to get it confused with the QFT Vacuum Energy. That is QFT's problem, they calculate something in a fixed flat geometry context (quite alien to GR) and it comes out ridiculously wrong. So they should deal with it."

I apologize, but this is ridiculous and completely misses the point. It does not suffice to invent a new theory of quantum gravity and explain why the contributions to the cosmological constant are smaller than expected at that energy scale. There are hundreds of papers out there with ideas like that; the analogue gravity paper is no exception, and the reason none of them has convinced anybody is that they only answer the first step of what is a much bigger universality problem.

The real problem is that we know experimentally that at least certain quantum contributions at our normal energy scales do in fact gravitate. Every time you step on a scale, approximately 90% of your weight lies in this magic (I am of course talking about virtual gluon contributions to the mass of nucleons). If this did not gravitate, it would instantly show up in violent departures from the equivalence principle.

Now, let us for simplicity restrict to a world which only includes gravity and QED, since we know a lot about the latter up to energy scales of at least 100 GeV, where it is very precisely described by an effective field theory that is weakly coupled and pointlike.

Now I can't draw it here, but there is a diagram that contributes to the famous Lamb shift, but this time weakly coupled to gravity (so it looks like a tadpole). If we take the cutoff scale as 100 GeV, this diagram's contribution to the zero-point energy of the electron is still approximately 10^55 times larger than experiment. So the statement of the problem is now the following:

Why does *this* contribution to the zero-point energy of the electron in vacuum vanish (or get tuned away, or get canceled by some unknown mechanism), while the analogous diagram in the environment of atoms, which represents the shift in atomic mass arising from ZPE fluctuations, does not (and, by tests of the equivalence principle, very accurately gravitates)?
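Haelfix's 10^55 figure is for one specific diagram, but the generic size of the mismatch is easy to reproduce. A rough sketch (my own numbers, not from the post: a naive vacuum energy density scaling as the fourth power of the cutoff, compared against a measured dark-energy scale of roughly 2.3e-3 eV):

```python
import math

# Naive QFT estimate: vacuum energy density ~ (cutoff energy)^4.
# Observed dark-energy density corresponds to an energy scale of
# roughly 2.3e-3 eV, i.e. rho_obs ~ (2.3e-3 eV)^4.
cutoff_eV = 100e9           # the 100 GeV cutoff used in the post above
observed_scale_eV = 2.3e-3  # approximate value of (rho_Lambda)^(1/4)

ratio = (cutoff_eV / observed_scale_eV) ** 4
print(f"naive / observed ~ 10^{math.log10(ratio):.0f}")
```

Pushing the cutoff up to the Planck scale (~10^28 eV) turns the same ratio into the famous ~10^120, give or take a few orders of magnitude.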

Now, it gets worse... If you think you have an answer to the above problem, you have to explain another puzzle. Why do the vacuum contributions vanish in the real world (with a mix of complicated matter fields all contributing in various ways), but not in the far more symmetric electroweak vacuum state arising from SU(2)×U(1)? They cannot vanish in both, since the mass of the electron vanishes in the unbroken phase, and it is precisely this mass that contributes both to the subleading contributions to the aforementioned electron ZPE (some ~10^53 too large) and to the classical value of the Higgs potential (it is really top-quark loops that dominate here, but the electron also contributes).

The point being, you cannot answer the question by simply begging it, as Rovelli does. Everyone agrees that if you could actually SHOW explicitly that the ZPE vanishes, then you would have at least partially solved the problem. But he doesn't, which is why it is a complete nonanswer.

Anyway, you can be sure that the answer to this puzzle sends whoever solves it straight to Stockholm. So I assure you, it is not 'hype'! It is a problem that has to have a solution; it's just that no one has figured one out yet b/c it is very difficult.

(Addendum: if someone does not understand what I am writing above, or the exact details, it is probably best to start at the beginning with a classic review paper that at least states the problem clearly.)

For cosmologists, Sean Carroll has written a fairly elementary treatment here:

http://relativity.livingreviews.org/Articles/lrr-2001-1/

as well as his CERN course video (highly recommended):
http://www.youtube.com/watch?feature=player_embedded&v=cYVj2RhXxeU

Once you have understood and digested the above, the more theoretically rigorous review is given by Weinberg's classic paper

http://www-itp.particle.uni-karlsruhe.de/~sahlmann/gr+c_seminarII/pdfs/T3.pdf
 
Last edited by a moderator:
  • #43


It looks to me as if Haelfix is just saying stuff that is irrelevant but obvious, stuff everybody knows that does not connect with the topic. Earlier I quoted the FLS paper in the hope someone might comment. No FLS-relevant comment so far.
marcus said:
The Finazzi-Liberati-Sindoni (FLS) paper could be something of a game-changer, so I want to back up and reconsider what I was saying. Here is an excerpt from their conclusions...
==quote FLS http://arxiv.org/abs/1103.4841 ==
...The implications for gravity are twofold. First, there could be no a priori reason why the cosmological constant should be computed as the zero-point energy of the system. More properly, its computation must inevitably pass through the derivation of Einstein equations emerging from the underlying microscopic system. Second, the energy scale of Λ can be several orders of magnitude smaller than all the other energy scales for the presence of a very small number, nonperturbative in origin, which cannot be computed within the framework of an effective field theory dealing only with the emergent degrees of freedom (i.e. semiclassical gravity).

The model discussed in this Letter shows all this explicitly. Furthermore, it strongly supports a picture where gravity is a collective phenomenon in a pregeometric theory. In fact, the cosmological constant puzzle is elegantly solved in those scenarios. From an emergent gravity approach, the low energy effective action (and its renormalization group flow) is obviously computed within a framework that has nothing to do with quantum field theories in curved spacetime. Indeed, if we interpreted the cosmological constant as a coupling constant controlling some self-interaction of the gravitational field, rather than as a vacuum energy, it would straightforwardly follow that the explanation of its value (and of its properties under renormalization) would naturally sit outside the domain of semiclassical gravity.

For instance, in a group field theory scenario (a generalization to higher dimensions of matrix models for two dimensional quantum gravity [19]), it is transparent that the origin of the gravitational coupling constants has nothing to do with ideas like “vacuum energy” or statements like “energy gravitates”, because energy itself is an emergent concept. Rather, the value of Λ is determined by the microphysics, and, most importantly, by the procedure to approach the continuum semiclassical limit. In this respect, it is conceivable that the very notion of cosmological constant as a form of energy intrinsic to the vacuum is ultimately misleading. To date, little is known about the macroscopic regime of models like group field theories, even though some preliminary steps have been recently done [20]. Nonetheless, analogue models elucidate in simple ways what is expected to happen and can suggest how to further develop investigations in quantum gravity models. In this respect, the reasoning of this Letter sheds a totally different light on the cosmological constant problem, turning it from a failure of effective field theory to a question about the emergence of the spacetime.
==endquote==
 
Last edited:
  • #44


To throw in another stick, I pondered an inferential interpretation of the cosmological constant in this old thread https://www.physicsforums.com/showthread.php?t=239414 with the purpose of stimulating some thinking.

Of course my argument appeals to the general form of any action, understood as -log P where P is a transition probability.

I just compared the FORM of the E-H action, with the FORM of an action I get from a particular construction.

The conclusion is that a kind of integrated "cosmological constant term" appears generically in any action of that form, and it's interpreted as having to do with the observer's truncation of confidence. If the maximum probability were 100%, then the cosmological constant should approach zero. When the maximum probability is large but finite, due to the limited inference capacity of the finite observer, the term is bound to be non-zero, but finite.

This is not very specific, but it illustrates an alternative logic that MIGHT be able to work out specifically.

I think that to connect this to the specific cosmological constant in 4D spacetime, one needs to construct spacetime by inference from the microstructure of information, where dimensionality is bound to be emergent as well, perhaps a little like a truncated principal component analysis for dimensional regulation, where the truncation is forced upon the inference by the observer's incompleteness.

Then the point is that all QFT thinking seems to picture the observer at infinity or in some background, which then effectively has infinite mass; thus the cosmological constant "should be zero" if they could only find out how to cancel the summation properly... But this perspective fails for gravity, where the observer is inside, so it seems reasonable that the cosmological constant effectively inferred by earth-based cosmological observations is not expected to be zero. Here the "massive background" does not exist. I think this is at least conceptually related to this issue.

But to get from here to explicit solutions seems hard indeed, since it involves the entire chain of complexities: mass generation, theory scaling and evolution, etc.

/Fredrik
 
  • #45


Some readers may have overlooked the points in the Finazzi Liberati Sindoni paper that I'm asking for comment on, so I will boil them down and highlight:

==quote FLS http://arxiv.org/abs/1103.4841 ==
... there could be no a priori reason why the cosmological constant should be computed as the zero-point energy of the system. More properly, its computation must inevitably pass through the derivation of Einstein equations emerging from the underlying microscopic system.
... it strongly supports a picture where gravity is a collective phenomenon in a pregeometric theory. In fact, the cosmological constant puzzle is elegantly solved in those scenarios. From an emergent gravity approach, the low energy effective action (and its renormalization group flow) is obviously computed within a framework that has nothing to do with quantum field theories in curved spacetime. Indeed, if we interpreted the cosmological constant as a coupling constant controlling some self-interaction of the gravitational field, rather than as a vacuum energy, it would straightforwardly follow that the explanation of its value (and of its properties under renormalization) would naturally sit outside the domain of semiclassical gravity.

For instance, in a group field theory scenario (a generalization to higher dimensions of matrix models for two dimensional quantum gravity [19]), it is transparent that the origin of the gravitational coupling constants has nothing to do with ideas like “vacuum energy” or statements like “energy gravitates”, because energy itself is an emergent concept. Rather, the value of Λ is determined by the microphysics, and, most importantly, by the procedure to approach the continuum semiclassical limit. In this respect, it is conceivable that the very notion of cosmological constant as a form of energy intrinsic to the vacuum is ultimately misleading. ... In this respect, the reasoning of this Letter sheds a totally different light on the cosmological constant problem, turning it from a failure of effective field theory to a question about the emergence of the spacetime.
==endquote==

Since we are considering a matter of critical judgment here, I might mention that Liberati has well over 3000 citations
http://inspirehep.net/search?ln=en&rm=citation&jrec=1&p=a+Liberati
The guy is a world-class cosmologist/phenomenologist. Still fairly young (40 years old) and turning out top-cited papers.
His PhD adviser was Dennis Sciama, if the name means anything to you.
In case anyone might be confused about this, Liberati is not part of the Loop QG community. He has never attended the biennial Loops conference. He's more the outsider phenomenologist type---long-time interest in cosmological/astrophysical tests.

No Fra, the FLS paper is not "Rovelli-style" :biggrin:
 
Last edited:
  • #46


I didn't read the paper to see what they mean, but in my thinking the "cosmological term" in the general action, when it comes specifically to 4D spacetime, can only be understood beyond the "just another parameter" level if we also understand how 4D spacetime emerges, because that is what would somehow factor out that term.

Probably the difference lies in what is meant by emergence. Each time I've read Rovelli-style papers before, it was clear that there are different meanings of the concept. I don't believe in any fundamental DOFs. The alternative concept of emergence is just interacting effective theories: since there is no master theory (in my view, theory attaches to observer machinery), there are no fundamental DOFs. The task thus becomes how to even construct something without referring to fundamental DOFs.

The difference for emergence would be between "emergence FROM something else" and "emergence as just evolution relative to the prior state", where there is no fixed background context at all.

This is even the constructing principle behind the association in my old post. The idea is that the possible future can only be rated in terms of an action measure over a specific reduced time history. This is why P_max < 1, and thus why the normally unbounded information divergence IS bounded in this case. And this was also the key that allows the expectation that the term is small, but strictly non-zero as inferred by a finite observer.

So maybe if we focus on the "emergence of spacetime", the cosmological constant problem will be solved automatically, if done the right way. IMO the emergence of spacetime is an inference, and it's hosted by an observer. I.e., an extension of the essence that Smolin et al. mention in the relative locality idea: that spacetime is simply the result of an inference from actual data! Now that data needs to be stored and processed by something with finite capacity. This is where a lot of things are missing...

/Fredrik
 
  • #47


atyy said:
Bianchi and Rovelli are not saying anything new, are they? Take eg. this 2007 review

http://arxiv.org/abs/0705.2533
"The observational and theoretical features described above suggests that one should consider cosmological constant as the most natural candidate for dark energy. Though it leads to well known problems, it is also the most economical [just one number] and simplest explanation for all the observations. Once we invoke the cosmological constant, classical gravity will be described by the three constants G, c and Lambda"

Thanks for pointing that out! I think you are right. It has been a fairly commonplace view among cosmologists all along. That is, Lambda is not some kind of exotic energy field with possibly varying density and equation of state parameters.

For a while after 1998 people naturally wanted to make sure that they were right---no variation was observed, so there is increasing confidence in the standard Lambda-CDM cosmic model (which treats Lambda as a constant curvature built into spacetime).

So I don't see that Bianchi Rovelli are saying anything new. They are just puncturing a bubble of hype. Pointing out that the "dark energy" Emperor is walking down the street buck naked :biggrin:

What I do see as new is what Liberati et al are saying. They look deeper into the quantum origin of this classical curvature constant. Why, when spacetime emerges from pregeometry d.o.f., does it emerge with this curvature?
They illustrate with a "what-if" group field theory (GFT) example.

BTW Atyy in a sense you chose the perfect example (that 2007 review) to point out that Bianchi Rovelli's message is mainstream. It was an invited review for a special issue of GRG edited by Herman Nicolai, Roy Maartens, and George Ellis---than which there is no whicher :biggrin:
 
Last edited:
  • #48


Does anyone here doubt that quantum vacuum energy exists?
 
  • #49


Harv said:
Does anyone here doubt that quantum vacuum energy exists?

Of course not! :smile: How about this: read the Bianchi Rovelli paper. They discuss quantum vacuum energy at length, and the difficulties with calculating it accurately.
But by all means read the paper. Anyone who wants to join in the discussion should. It is a fairly non-technical easy read.

I already gave the link. But I will again:

http://arxiv.org/abs/1002.3966
Why all these prejudices against a constant?
Eugenio Bianchi, Carlo Rovelli
9 pages, 4 figures
(Submitted on 21 Feb 2010)
"The expansion of the observed universe appears to be accelerating. A simple explanation of this phenomenon is provided by the non-vanishing of the cosmological constant in the Einstein equations. Arguments are commonly presented to the effect that this simple explanation is not viable or not sufficient, and therefore we are facing the 'great mystery' of the 'nature of a dark energy'. We argue that these arguments are unconvincing, or ill-founded."

There is also a short (4-page) paper by Stefano Liberati et al, which is both relevant and fascinating. It considers where that Lambda constant in classical spacetime geometry might be coming from in an emergent spacetime picture. Classically Lambda is a curvature naturally occurring on the lefthand side of the Einstein field equation, whose value is measured to be about
1.16 × 10^-35 s^-2
You might want to take a look at the final page of the Liberati paper where they state their conclusions:

http://arxiv.org/abs/1103.4841
The cosmological constant: a lesson from Bose-Einstein condensates
Stefano Finazzi, Stefano Liberati, Lorenzo Sindoni
(Submitted on 24 Mar 2011)
...Here we directly compute this term and confront it with the other energy scales of the system. On the gravity side of the analogy, this model suggests that in emergent gravity scenarios it is natural for the cosmological constant to be much smaller than its naive value computed as the zero-point energy of the emergent effective field theory. The striking outcome of our investigation is that the value of this constant cannot be easily predicted by just looking at the ground state energy of the microscopic system from which spacetime and its dynamics should emerge. A proper computation would require the knowledge of both the full microscopic quantum theory and a detailed understanding of how Einstein equations emerge from such a fundamental theory. In this light, the cosmological constant appears even more a decisive test bench for any quantum/emergent gravity scenario.
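A quick consistency check on the measured value quoted above (my own rough conversion, with round numbers for H0, not from either paper): treating the measured Λ, in curvature units of s^-2, as driving pure de Sitter expansion with H = sqrt(Λ/3), and comparing with today's Hubble rate:

```python
import math

Lam = 1.16e-35                # s^-2, the measured value quoted above
H_lam = math.sqrt(Lam / 3.0)  # pure-Lambda (de Sitter) expansion rate

# Today's Hubble rate: ~70 km/s/Mpc, converted to s^-1.
Mpc_in_m = 3.086e22
H0 = 70e3 / Mpc_in_m          # ~2.3e-18 s^-1

# Fraction of the critical density that Lambda accounts for:
Omega_lam = (H_lam / H0) ** 2
print(f"H_Lambda ~ {H_lam:.2e} s^-1, Omega_Lambda ~ {Omega_lam:.2f}")
```

The result lands near the familiar Omega_Lambda ~ 0.7, which is just the statement that Lambda currently dominates the cosmic energy budget.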

=============================

The tendency in observational cosmology in recent years has been to confirm and accept that Lambda is in fact simply a constant and not necessarily connected with the naive QFT calculation of vacuum energy (which is after all based on a non-quantum static flat Minkowski geometry.) To some extent this is a matter of one's background and opinions---I'm not talking about diehard QFT-ers, this is the trend I see in observational cosmology. Here are some illustrative links.

Here is one I found by Paolo Serra et al (2009):
http://arxiv.org/abs/0908.3186
No Evidence for Dark Energy Dynamics from a Global Analysis of Cosmological Data
Paolo Serra (UC Irvine), Asantha Cooray (UC Irvine), Daniel E. Holz (Los Alamos National Laboratory), Alessandro Melchiorri (University of Rome), Stefania Pandolfi (University of Rome), Devdeep Sarkar (UC Irvine, University of Michigan)
Physical Review D

From the Serra et al conclusions [their italics :biggrin:]:
"We find no evidence for a temporal evolution of dark energy—the data is completely consistent with a cosmological constant. This agrees with most previous results, but significantly improves the overall constraints [13, 14, 19, 20]."

Here is another by Tamara Davis et al (2007)
http://inspirehep.net/record/742618
Scrutinizing Exotic Cosmological Models Using ESSENCE Supernova Data Combined with Other Cosmological Probes
Astrophysical Journal

One by Wood-Vasey et al (2007)
http://inspirehep.net/record/741585?ln=en
Observational Constraints on the Nature of the Dark Energy: First Cosmological Results from the ESSENCE Supernova Survey
Astrophysical Journal

There is also the "WMAP7" report of Komatsu et al. which appeared in 2010.
This was part of a NASA series of papers presenting the full 7-year data from the WMAP mission.
Here is the link. http://arxiv.org/abs/1001.4538
Page 24 has some constraints on the equation-of-state number w, which, in case Lambda is simply a constant, would be exactly w = -1. Indeed that is about what you get combining the latest WMAP+BAO+SN data. (The high-z supernova data SN are the most effective at constraining w. The BAO data are based on galaxy counts and are also good---they combined all the best.)
For example on page 24 in section 5.1 you see:
"The high-z supernova data provide the most stringent limit on w. Using WMAP+BAO+SN, we find w = −0.980±0.053 (68% CL)..."

That is really, really close to -1. As time goes on the constraints seem to tighten, and I hear less and less about Lambda being considered an actual "energy". We may be getting closer to accepting it simply as a small constant amount of curvature.
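For readers wondering what the w measurement actually tests: for a component with constant equation of state w = p/ρ, the FRW continuity equation fixes how its density dilutes with the scale factor,

```latex
\rho(a) \propto a^{-3(1+w)} ,
```

so w = -1 means ρ is constant in time, which is exactly the behavior of a constant Lambda term; any measured deviation from -1 would instead signal a dynamical field.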
 
Last edited:
  • #50


marcus said:
The tendency in observational cosmology in recent years has been to confirm and accept that Lambda is in fact simply a constant and not necessarily connected with the naive QFT calculation of vacuum energy (which is after all based on a non-quantum static flat Minkowski geometry.) To some extent this is a matter of one's background and opinions---I'm not talking about diehard QFT-ers, this is the trend I see in observational cosmology. Here are some illustrative links:

That is really really close to -1. As time goes on the constraints seem to tighten and I hear less and less about Lambda considered as an actual "energy". We may be getting closer to accepting it simply as a small constant amount of curvature.

Hi Marcus, you are mixing up two things here. w = -1 is very much what the simple QFT prescription is about. It is interpreted as arising from a cosmological constant, with units of energy density (g/cm^3). You are also free to think of it as a sort of negative pressure in the context of the FRW lambda-dust solution.

What you are confusing this with is the case w ≠ -1, which is what is commonly known as quintessence (a scalar field that mimics the observed cosmological constant in our epoch but introduces an explicit time dependence). The latter involves very exotic physics, and is decidedly NOT predicted by the standard QFT calculation.

Now the separate confusion is that there is absolutely no problem whatsoever in moving the cosmological constant term from the left side to the right side of the Einstein field equations in general. You can always do that!

That does not change the predictions or physics in any way, in particular whether the term is renormalized or not!

So consider an empty box in a theory that contains a huge positive cosmological constant term. You can think of weighing the box by putting it on a scale, or alternatively you can think of the geometry this induces (an expansion scale factor that grows like a(t) ~ e^(Ht)). But a quantum vacuum density some 120 orders of magnitude too big is wrong on physical grounds, no matter which side of the field equations you put it on. An empty box simply does not weigh that much, and it does not induce curvature of that magnitude in the real world!
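The "120 orders of magnitude" figure is easy to reproduce with rough numbers. A minimal back-of-the-envelope sketch (my own, not from the thread), assuming the naive QFT estimate is the Planck density and using round Planck-satellite-style values for H0 and Omega_Lambda:

```python
import math

# Physical constants (SI units)
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J*s
G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2

# Naive QFT estimate: cut the zero-point sum off at the Planck scale,
# which gives roughly the Planck mass density.
rho_planck = c**5 / (hbar * G**2)          # ~5e96 kg/m^3

# Observed vacuum density: Omega_Lambda times the critical density.
H0 = 67.7e3 / 3.086e22                     # Hubble constant in 1/s (67.7 km/s/Mpc)
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # ~9e-27 kg/m^3
rho_lambda = 0.69 * rho_crit               # Omega_Lambda ~ 0.69

ratio = rho_planck / rho_lambda
print(f"mismatch: ~10^{math.log10(ratio):.0f}")   # prints mismatch: ~10^123
```

The exact exponent depends on where you place the cutoff; the Planck-scale choice gives the famous "worst prediction in physics" figure.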

What saves the prediction (but also what defines the problem) is that the quantum vacuum is not the only contribution to the total cosmological constant: we are instructed to cancel it against a classical, bare contribution. This latter term is arbitrary, and we are thus left with the fine-tuning problem. Why do two apparently unrelated physical quantities cancel to such fantastic accuracy?
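In symbols, the observed value is the sum of a bare term and the vacuum contribution, and the complaint is about the delicacy of the required cancellation (schematic; the exponent is the usual Planck-cutoff figure):

$$\Lambda_{\rm obs} = \Lambda_{\rm bare} + \Lambda_{\rm vac}, \qquad \frac{|\Lambda_{\rm obs}|}{|\Lambda_{\rm vac}|} \sim 10^{-120}$$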

So this is the problem! It is not that we have a theory that gives a wrong prediction; we can make our theories give the right value. The problem is that this value is wildly different from what you might consider natural!

One possible resolution is to just say that the quantum ZPE (zero-point energy) sums to identically zero in a more refined theory. And that is FINE and would indeed partially solve the problem! For instance, with exact supersymmetry you can show that this is exactly what happens!
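Schematically, the zero-point sum runs over bosons and fermions with opposite signs, so with exact supersymmetry (degenerate boson and fermion masses) it vanishes mode by mode:

$$\rho_{\rm vac} \;\propto\; \sum_i (-1)^{F_i}\, \frac{n_i}{2} \int \frac{d^3k}{(2\pi)^3}\, \sqrt{k^2 + m_i^2} \;=\; 0 \quad \text{(exact SUSY)}$$

where $F_i$ is the fermion number and $n_i$ counts the degrees of freedom of species $i$.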

However, the cancellation MUST hold at all scales, not just up at the Planck scale, and so it must be visible within the low-energy effective formalism defined up to ~100 GeV. In the context of supersymmetry, for instance, one can see explicitly that what cancels the electron tadpole diagrams are the analogous selectron diagrams!

So the point is you have to actually SHOW this mechanism explicitly.

To give an analogy, it would be like arguing for the Clay Millennium Prize regarding QCD. You can't simply say, 'well, we don't observe free quarks in nature, therefore QCD is confining, QED'. The whole point is in *showing* this, mathematically!
 