# Why all these prejudices against a constant? ("dark energy" is a fake problem)

Gold Member
Dearly Missed

==sample quote==
It is especially wrong to talk about a mysterious “substance” to denote dark energy. The expression “substance” is inappropriate and misleading. It is like saying that the centrifugal force that pushes out from a merry-go-round is the “effect of a mysterious substance”.
==endquote==

http://arxiv.org/abs/1002.3966
Why all these prejudices against a constant?
Eugenio Bianchi, Carlo Rovelli
9 pages, 4 figures
(Submitted on 21 Feb 2010)
"The expansion of the observed universe appears to be accelerating. A simple explanation of this phenomenon is provided by the non-vanishing of the cosmological constant in the Einstein equations. Arguments are commonly presented to the effect that this simple explanation is not viable or not sufficient, and therefore we are facing the 'great mystery' of the 'nature of a dark energy'. We argue that these arguments are unconvincing, or ill-founded."


The lambda constant is just a constant that naturally occurs when you write down the most general form of the action. With Einstein it was there already at the start! Not something he put in as an afterthought to make cosmology come out right.

==quote==
In fact, it is not even true that Einstein introduced the λ term only in his cosmological work; it appears in the gravitational equations much earlier. This can be deduced from a footnote of his 1916 main work on general relativity [9] (the footnote is on page 180 of the English version). Einstein derives the gravitational field equations from a list of physical requirements. In the footnote, he notices that the field equations he writes are not the most general possible ones, because there is another possible term, which is in fact the cosmological term (the notation “λ” already appears in this footnote).
The most general low-energy second order action for the gravitational field, invariant under the relevant symmetry (diffeomorphisms) is...
...
which leads to (1). It depends on two constants, the Newton constant G and the cosmological constant λ, and there is no physical reason for discarding the second term...
==endquote==

tom.stoer

The cosmological constant becomes a mystery as soon as you do not write it on the left hand = "the gravity" side of the equations

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}$$

but if you write it on the right hand = "the matter" side:

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}$$

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}$$

If you leave this term on the left hand side, the question where it comes from and why it is there is still open, but it is not a question about matter, dark energy or something like that; it is a question about gravity.
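To spell out the small step behind that vacuum equation: contracting it with $$g^{\mu\nu}$$ (in four dimensions, where $$g^{\mu\nu}g_{\mu\nu}=4$$) gives

$$R - 2R = -4\Lambda \quad\Rightarrow\quad R = 4\Lambda$$

and substituting back,

$$R_{\mu\nu} = \Lambda g_{\mu\nu}$$

So even with T = 0 spacetime has constant curvature set by Lambda (de Sitter space for positive Lambda), which is exactly why reading Lambda as a property of gravity rather than as "matter" is perfectly consistent.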

lambda is the coupling to the volume of space-time. It belongs on the left-hand side; you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

The question is not why it is there but rather what sets its value. In particular why is it so small? One answer coming from the ERG approach is that there is an infra-red fixed point for which lambda=0. Hence on large scales lambda is small.

atyy

lambda is the coupling to the volume of space-time. It belongs on the left-hand side; you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

The question is not why it is there but rather what sets its value. In particular why is it so small? One answer coming from the ERG approach is that there is an infra-red fixed point for which lambda=0. Hence on large scales lambda is small.

http://arxiv.org/abs/0910.5167 fig 3, 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems.

http://arxiv.org/abs/0910.5167 fig 3, 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems.

The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz invariance (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.


It belongs on the left-hand side; you can see this from the fact that G does not couple to it...

The question is not why it is there but rather what sets its value.

Yes. Just like other constants, why is alpha = 1/137?
Why is the ratio of electron mass to planck mass so small?
I think part of the aim of the paper is to deflate some of the hype surrounding this particular constant.
Not to say it's not interesting though! It would be great to get some handle on why it's that particular size.

atyy

The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?

I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz invariance (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.

I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298

Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?

I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298

Hmm, well, both papers mention AS and Horava. I think the CDT guys are hoping that it can be both AS and Horava depending on how they tune their parameters.

The Gaussian fixed point is always going to be there, as it just corresponds to the vanishing of the dimensionless couplings. But I don't think you need to be on a trajectory that flows to the IR fixed point. Better to read this paper:

http://arXiv.org/abs/hep-th/0410119

"Assuming that Quantum Einstein Gravity (QEG) is the correct theory of gravity on all length scales we use analytical results from nonperturbative renormalization group (RG) equations as well as experimental input in order to characterize the special RG trajectory of QEG which is realized in Nature and to determine its parameters. On this trajectory, we identify a regime of scales where gravitational physics is well described by classical General Relativity. Strong renormalization effects occur at both larger and smaller momentum scales. The latter lead to a growth of Newton's constant at large distances. We argue that this effect becomes visible at the scale of galaxies and could provide a solution to the astrophysical missing mass problem which does not require any dark matter. We show that an extremely weak power law running of Newton's constant leads to flat galaxy rotation curves similar to those observed in Nature. Furthermore, a possible resolution of the cosmological constant problem is proposed by noting that all RG trajectories admitting a long classical regime automatically give rise to a small cosmological constant."
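The fixed-point structure Finbar describes can be illustrated with a toy two-coupling flow. To be clear, the beta functions below are invented for the example and are NOT the actual QEG beta functions; this is only a sketch of the generic picture (a UV-attractive non-Gaussian fixed point plus a Gaussian fixed point that is a saddle):

```python
import numpy as np

# Toy two-coupling RG flow, invented purely for illustration -- these are NOT
# the actual QEG beta functions.  t = ln k, so t -> +infinity is the UV:
#   dg/dt   = 2g - g^2
#   dlam/dt = -2*lam + g^2/8
def beta(g, lam):
    return np.array([2*g - g**2, -2*lam + g**2 / 8])

def jacobian(g, lam):
    # Linearization of the flow around a point (g, lam)
    return np.array([[2 - 2*g, 0.0],
                     [g / 4,  -2.0]])

# This toy has a Gaussian fixed point at (0, 0) and a non-Gaussian one at (2, 1/4).
for (g0, l0), name in [((0.0, 0.0), "Gaussian"), ((2.0, 0.25), "non-Gaussian")]:
    eigs = np.linalg.eigvals(jacobian(g0, l0)).real
    print(name, "fixed point, eigenvalues:", eigs)
```

Both eigenvalues at the non-Gaussian point come out negative, so it attracts trajectories as t grows (UV-attractive), while the Gaussian point is a saddle: flowing toward the IR, only the single trajectory along its attractive direction actually reaches lambda = 0, which is why sitting exactly on that trajectory looks special, just as Finbar says.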


The cosmological constant becomes a mystery as soon as you do not write it on the left hand = "the gravity" side of the equations

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}$$

but if you write it on the right hand = "the matter" side:

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}$$

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}$$

If you leave this term on the left hand side, the question where it comes from and why it is there is still open, but it is not a question about matter, dark energy or something like that; it is a question about gravity.

I'd like to get into the habit of thinking of it on the left hand side.
However I'm used to seeing the constant given in the form OmegaLambda. A common estimate is OmegaLambda = 0.73

That means the (fictional?) dark energy is 73% of critical density. If I express critical density in terms of today's Hubble rate H, then what I seem to get is that

$$\Lambda = 3H^2\,\Omega_\Lambda$$

= 3 × 0.73 × (71 km/s per megaparsec)^2 ~ 10^-35 s^-2

Can you confirm that this is right way to get the lefthandside Lambda from the information we are usually given?
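For reference, the relation follows in one line from the definitions (a sketch in the convention where Lambda carries units of s^-2; the reciprocal-area form in m^-2 is this divided by c^2):

$$\Omega_\Lambda \equiv \frac{\rho_\Lambda}{\rho_{\rm crit}} = \frac{\Lambda/(8\pi G)}{3H^2/(8\pi G)} = \frac{\Lambda}{3H^2} \quad\Rightarrow\quad \Lambda = 3H^2\,\Omega_\Lambda$$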


Finbar posted:

lambda is the coupling to the volume of space-time. It belongs on the left-hand side you can see this from the fact that G does not couple to it. G couples to the energy-momentum.

I REALLY like that thought! What's the origin of these two relationships?? I don't mean I doubt it, but what led to these particular couplings?? Is it of purely mathematical origin or instead physical insights...or a combination??

"The man who had the courage to tell everybody that their ideas on space and time had to
be changed, then did not dare predicting an expanding universe, even if his jewel theory was saying so..." and " Even a total genius can be silly,
at times." .....suggests Einstein did not appreciate the nature of lambda early on.

tom.stoer

I'd like to get into the habit of thinking of it on the left hand side.

...

Can you confirm that this is right way to get the lefthandside Lambda from the information we are usually given?
Yes, I can confirm that this is the way you get OmegaLambda. But your interpretation
That means the (fictional?) dark energy is 73% of critical density.
means that (implicitly) you think about it as something that appears on the right hand side: density, dark energy, ...


Yes, I can confirm that this is the way you get OmegaLambda...

That's not what I was asking about. We are constantly being told that OmegaLambda = 0.73, or thereabouts. We can take that as the current estimate.

What I never (or almost never) see is an estimate for LAMBDA ITSELF. The genuine left-hand-side article.

That is what I want to calculate. I think it comes to 1.1 × 10^-35 s^-2

Would it be more correct to express it in units of reciprocal area, like in m^-2?

That's what I think of as a common unit for curvature?

What I want confirmation for, or at least your opinion on, is this: if we are thinking just of the left-hand-side form of the cosmological constant, which we are calling Lambda, and we are given the commonly published figures of 71 for the Hubble rate and 0.73 for the (fictional?) "dark energy fraction," then do we use the stated formula to get Lambda? Namely:

$$\Lambda = 3H^2\,\Omega_\Lambda$$

George Jones

$$\Lambda = 3H^2\,\Omega_\Lambda$$

Yes.


$$\Lambda = 3H^2\,\Omega_\Lambda$$

Yes.

Good. Thank you George. So since I'm always seeing the published figures
71 km/s per megaparsec, and 0.73 (for the Hubble rate and the "dark energy fraction"), I can plug the blue thing into google and get Lambda.

3*0.73*(71 km/s per megaparsec)^2

Anybody can do it themselves. If you paste that blue expression into the google window, what you get is:
1.15946854e-35 s^-2

So that is what Lambda "really is", if you round off appropriately:

Lambda = 1.16 × 10^-35 s^-2
plus or minus whatever uncertainty is contributed by the 0.73 and the 71.

And you can change the numbers 71 and 0.73 to agree with whatever the latest observations indicate, and the google calculator will give you the corresponding estimate for Lambda, accordingly.
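The same arithmetic as a short self-contained sketch (using the values quoted in the thread; the last line also converts to the reciprocal-area form asked about earlier, which is just a division by c^2):

```python
# Compute the "left-hand-side" Lambda from the commonly quoted values
# H0 = 71 km/s per megaparsec and OmegaLambda = 0.73.
MPC_IN_M = 3.0857e22        # metres per megaparsec
C = 2.99792458e8            # speed of light, m/s

H0 = 71e3 / MPC_IN_M        # Hubble rate in s^-1
omega_lambda = 0.73

lam_per_s2 = 3 * H0**2 * omega_lambda   # Lambda = 3 H^2 OmegaLambda, in s^-2
lam_per_m2 = lam_per_s2 / C**2          # same constant expressed as curvature, m^-2

print(f"Lambda = {lam_per_s2:.3e} s^-2 = {lam_per_m2:.3e} m^-2")
```

This reproduces the google-calculator value of about 1.16 × 10^-35 s^-2, or roughly 1.3 × 10^-52 m^-2 in curvature units.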

If Rovelli is right, along with the others who share his views on the subject, then this is a basic constant of nature, and we had better start getting to know it, getting used to it, and treating it with some of the same respect we normally show basic constants.

atyy

Bianchi and Rovelli are not saying anything new, are they? Take eg. this 2007 review

http://arxiv.org/abs/0705.2533
"The observational and theoretical features described above suggests that one should consider cosmological constant as the most natural candidate for dark energy. Though it leads to well known problems, it is also the most economical [just one number] and simplest explanation for all the observations. Once we invoke the cosmological constant, classical gravity will be described by the three constants G, c and Lambda"

tom.stoer

I agree, they are not saying anything new. They stress the following:
1) it is natural to consider Lambda as a constant of nature
2) one should distinguish between "QFT is the source of Lambda" and "QFT could cause corrections to the value of Lambda"
3) the reason for the puzzle is not Lambda, but Lambda on the right hand side of the equation ...
4) ... plus an idea how to calculate it - which fails by 120 orders of magnitude

Let's assume I have some biological theory; unfortunately it says not so much about mammals, birds etc., but I claim that this theory provides an explanation of the zoogenesis of the duckbill from first principles. But applying this theory, it predicts the duckbill to look like an orca ...

Now I ask you: does this make the duckbill even more enigmatic, or does it mean that my theory is plainly wrong?

I guess ultimately one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are as they are. Maybe a lot of the mystery around lambda is due to the fact that it appears to be so closely related (although there are certainly the known problems with this) to the expected zero-point energy density that it's hard to resist the temptation that there is something more to this, something that may or may not explain the value of this constant in the deeper way we probably all seek anyway?

As far as I am concerned, I still seek a deeper understanding of the whole E-H action, where all terms and constants beg for explanation. My working picture at the moment, also in line with some of the statistical approaches to this, is to think of Einstein's equations as an equation of state defining an equilibrium, suggesting that the "constants" might not be proper constants; maybe they just appear constant to us at this moment. So maybe it's not that important what the values are; the most interesting thing is the logic that sets them. The actual logic today is our cosmological models: from the Hubble expansion we infer the value, but this is a human-level mechanism, and the value would in principle evolve with the evolution of human cosmology. We still don't have the deeper intrinsic physical mechanism for its evolution.

But I also think that seeking the answer in the form of hidden or dark matter or energy "out there", like ordinary matter except not visible, might be a sidetrack. I think the "appearance" of hidden energy or an accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons rather than expanding universes etc. But that's still very underdeveloped.

/Fredrik

Marcus: again, thanks for bringing another great paper to our attention....

Bianchi and Rovelli are not saying anything new, are they?

That seems right, but I did see some perspectives in the paper that were new to me, such as:

An effect commonly put forward to support the "reality" of such a vacuum energy is the Casimir effect. But the Casimir effect does not reveal the existence of a vacuum energy: it reveals the effect of a "change" in vacuum energy, and it says nothing about where the zero point value of this energy is.

and this regarding the coincidence argument:

In order for us to comfortably exist and observe it, we must be ... in a period of the history of the universe where a civilization like ours could exist. Namely when heavy elements abound, but the universe is not dead yet. ... in a universe with the value of lambda like in our universe, it is quite reasonable that humans exist during those 10 or so billion years when Omega_b and Omega_Lambda are within a few orders of magnitude from each other. Not much earlier, when there is nothing like the Earth, not much later when stars like ours will be dying.

FRA
I think the "appearance" of hidden energy or accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons, rather than expanding universes etc

yes!!!!!!!!!!
I also think we are currently missing much of what appears as an information based universe.

I guess ultimately, one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are like they are

no guessing..absolutely "yes" :

In gravitational physics there is nothing mysterious in the cosmological constant. At least nothing more mysterious than the Maxwell equations, the Yang-Mills equations, the Dirac equation, or the Standard Model equations. These equations contain constants whose values we are not able to compute from first principles. The cosmological constant is in no sense more of a "mystery" than any other among the numerous constants in our fundamental theories.

I wish the authors would have instead acknowledged they are ALL equally 'mysterious'..


I wish the authors would have instead acknowledged they are ALL equally 'mysterious'..

Yes, but one can respond to this in two ways, either

1) stop worrying about the "mysterious lambda"

or

2) START worrying about all mysterious "constants". In the generic sense "constant" could also apply to other things than numbers, for example "constant" symmetries, and thus "physical law" itself. I.e. the fact that it may be "just another constant" doesn't make it less mysterious. Maybe we just have more "clues" to this particular constant than to, say, the gravitational constant or Planck's constant?

/Fredrik

START worrying about all mysterious "constants".

makes sense and I think in general scientists do....still strikes me as odd that we don't have the first principles to determine the basics in the standard model...clearly we are missing some "information"....pun intended....

Haelfix

The CC is not a problem for GR (well except that it makes various cosmological solutions a little more ugly), but really a generic quantum problem.

It doesn't matter what side of the field equation you put it on; the problem is the same, namely an unnatural cancellation between two quantities that a priori have nothing to do with one another (read: no known physical relation).

In the full quantum theory, we are interested in the expectation value of the stress energy tensor <T_uv>, which in vacuum is proportional to <P> g_uv by Lorentz invariance. By inspection of Einstein's field equations, this is equivalent to adding a term to the effective cosmological constant.

lambda effective = lambda + 8piG <P>, where lambda is just the old simple (undetermined classical constant of integration) and <P> is the energy density of the quantum vacuum.

Now (lambda effective/8piG) = Pv ~ 10^-47 GeV^4 by experiment (WMAP, say).

The problem is that we know how to calculate <P>: generically it is simply summing up all the normal modes of the zero point energy of some field (or sets of fields), up to some cutoff. If we take the cutoff to be M_Pl, <P> will be something like ~10^72 GeV^4, and then you notice that (lambda/8piG + <P>), in order to satisfy experimental bounds, must delicately conspire to cancel to something like 120 decimal places. The problem is that there is no obvious physical reason why lambda/8piG (a quantity arising from a classical equation) should have anything whatsoever to do with <P> (a quantum quantity). Now, if there were an unknown symmetry that related them, you might venture to guess that they could cancel exactly, but no such symmetry is known, and worse, they don't cancel exactly.
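The size of that mismatch is easy to reproduce as an order-of-magnitude sketch (the reduced Planck mass below is my assumed cutoff choice; the exact count of orders depends on which cutoff and conventions you pick, which is why quoted figures range from ~40 up to ~120):

```python
import math

# Observed vacuum energy density (WMAP-era value quoted above):
rho_obs = 1e-47           # GeV^4

# Naive zero-point estimate with an assumed cutoff at the reduced Planck mass:
M_PL_REDUCED = 2.435e18   # GeV
rho_cutoff = M_PL_REDUCED**4          # roughly 3.5e73 GeV^4

# How many decimal places the classical term must cancel:
orders = math.log10(rho_cutoff / rho_obs)
print(f"cancellation needed: about {orders:.1f} orders of magnitude")
```

With this cutoff choice the mismatch comes out at roughly 120 orders of magnitude, consistent with the figure quoted in the thread.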

I should point out that you can't get around this miracle trivially. Even if you completely ignore everything from the electroweak scale all the way up to the Planck scale and instead only consider standard model physics, so that you set the cutoff value to something like, say, the QCD scale, you still have about 40 orders of magnitude worth of decimal places to account for.

Now you are of course free to simply choose the constant term to cancel the tree level or semiclassical contribution to <P>, setting their sum to zero. The problem then is that you still have to deal with the shift of the vacuum energy by higher order terms induced by radiative corrections, and so you have to arrange it so that each constant appearing in front of the counterterms is fine-tuned order by order, such that the final sum respects the experimental bound.

It is this, more than anything else, that really is the crux of the problem. The thing we measure in our telescopes is necessarily the full theory (b/c quantum mechanics is part of the real world). And in this quantum theory, nothing protects the vacuum energy from radiative corrections.

An analogous (though much less severe) problem occurs, for instance, when you consider the smallness of the neutrino mass relative to, say, the Higgs vev. If you naively proceed as above, you see that there is an unnatural cancellation that should take place. Of course there, we are rescued by a mechanism that forces the two terms involved in the cancellation to actually be close (this is called the seesaw mechanism).

Haelfix

It's worth pointing out how supersymmetry almost solves the problem and how it elucidates the nature of the issue.

Assume that <P> appearing above was for some reason identically zero; then indeed lambda effective would just be a constant of integration and everyone would be happy. No rhyme or reason why it's small or big, or whatever. Who cares, it's just a number that experiment happened to find. You could invoke the anthropic principle trivially if you really wanted to at that point and no one would mind.

And indeed, in exact rigid supersymmetry it was noted long ago that fermion loops exactly cancel boson loops and the net vacuum contribution is zero (at least perturbatively).

The problem is, exact supersymmetry is not the way the world works, and it must be broken. When you break rigid supersymmetry in this framework you induce terms that necessarily set <P> != 0 (and if you include gravity and make the SUSY local, the superpotential and the Kahler potential will not in general exactly cancel!), and you are back to worrying about how weird it is for physical cancellations of that magnitude to take place (although now the problem is cut in two on a log scale, and it may also be sensitive to exactly where the supersymmetry breaking scale is set).
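The boson/fermion cancellation described above can be illustrated numerically with a toy free-field sketch (arbitrary units, a hard momentum cutoff, and made-up masses; this is only a cartoon of the real calculation, not the actual SUSY-breaking computation):

```python
import numpy as np

def zero_point_density(m, cutoff, n=200_000):
    """Zero-point energy density of one free bosonic species:
    rho = 1/(4 pi^2) * Integral_0^cutoff k^2 sqrt(k^2 + m^2) dk,
    evaluated with a midpoint rule (arbitrary units)."""
    dk = cutoff / n
    k = (np.arange(n) + 0.5) * dk
    return np.sum(k**2 * np.sqrt(k**2 + m**2)) * dk / (4 * np.pi**2)

cutoff = 100.0
# Exact SUSY: the fermion partner has the same mass and contributes with opposite sign.
rho_exact = zero_point_density(1.0, cutoff) - zero_point_density(1.0, cutoff)
# Broken SUSY: the partner mass is split, and the cancellation is no longer exact.
rho_broken = zero_point_density(1.0, cutoff) - zero_point_density(1.5, cutoff)

print(rho_exact, rho_broken)
```

With equal masses the cancellation is exact; with split masses the remainder scales like the mass splitting times cutoff^2 rather than cutoff^4, which is the sense in which broken SUSY softens the problem without eliminating it.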

tom.stoer

Haelfix,

I think you agree with Rovelli, but I am not sure. He does not deny that the smallness of lambda is a problem, but he says that one must distinguish between 1) a small classical (tree level) lambda which stays small even if you calculate quantum corrections and 2) a zero classical lambda where the non-zero part is purely quantum mechanical.

So according to Rovelli the problem is why lambda stays small even if it is subject to quantum corrections, not how quantum corrections create a lambda which is zero classically.

Compare it to the Higgs mass. It is unclear how the Higgs mass is protected against quantum corrections pushing it to the Planck scale. But these effects (or their absence) seem to have nothing to do with the creation of the Higgs mass at all. The same applies to all other parameters in the SM. It is a puzzle where they are coming from, but it is fairly well understood how they behave under quantum corrections (the Higgs mass is an exception).

What you are saying about SUSY is the core of the problem. Zero lambda would be fine, but tiny lambda including quantum corrections from broken SUSY is a riddle. But if you look at all other classical field theories, they perfectly make sense w/o quantum corrections in a certain regime. If you restrict yourself to this regime there is no problem with the constants at all.