Why all these prejudices against a constant? ("dark energy" is a fake problem)

In summary, the thread discusses dark energy and the prejudices against the idea of a cosmological constant. The participants argue that the constant is not a mysterious substance but a term that occurs naturally in the most general form of the action. They also discuss why it is present and why its value is so small, and touch on different approaches to understanding dark energy, including the Regge action in CDT and Horava's theories. Overall, the aim is to clarify the concept of dark energy and the role of the cosmological constant in physics.
  • #1
marcus
Why all these prejudices against a constant? ("dark energy" is a fake problem)

==sample quote==
It is especially wrong to talk about a mysterious “substance” to denote dark energy. The expression “substance” is inappropriate and misleading. It is like saying that the centrifugal force that pushes out from a merry-go-round is the “effect of a mysterious substance”.
==endquote==

http://arxiv.org/abs/1002.3966
Why all these prejudices against a constant?
Eugenio Bianchi, Carlo Rovelli
9 pages, 4 figures
(Submitted on 21 Feb 2010)
"The expansion of the observed universe appears to be accelerating. A simple explanation of this phenomenon is provided by the non-vanishing of the cosmological constant in the Einstein equations. Arguments are commonly presented to the effect that this simple explanation is not viable or not sufficient, and therefore we are facing the 'great mystery' of the 'nature of a dark energy'. We argue that these arguments are unconvincing, or ill-founded."
 
  • #2


The lambda constant is just a constant that naturally occurs when you write down the most general form of the action. With Einstein it was there already at the start! Not something he put in as an afterthought to make cosmology come out right.

==quote==
In fact, it is not even true that Einstein introduced the λ term because of cosmology. He knew about this term in the gravitational equations much earlier than his cosmological work. This can be deduced from a footnote of his 1916 main work on general relativity [9] (the footnote is on page 180 of the English version). Einstein derives the gravitational field equations from a list of physical requirements. In the footnote, he notices that the field equations he writes are not the most general possible ones, because there is another possible term, which is in fact the cosmological term (the notation “λ” already appears in this footnote).
The most general low-energy second order action for the gravitational field, invariant under the relevant symmetry (diffeomorphisms) is...
...
which leads to (1). It depends on two constants, the Newton constant G and the cosmological constant λ, and there is no physical reason for discarding the second term...
==endquote==
 
  • #3


The cosmological constant becomes a mystery as soon as you do not write it on the left-hand = "the gravity" side of the equations

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}[/tex]

but if you write it on the right-hand = "the matter" side:

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}[/tex]

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}[/tex]

If you leave this term on the left-hand side, the question of where it comes from and why it is there is still open, but it is not a question about matter, dark energy or anything like that; it is a question about gravity.
 
  • #4


Lambda is the coupling to the volume of spacetime. It belongs on the left-hand side; you can see this from the fact that G does not couple to it, whereas G does couple to the energy-momentum.

The question is not why it is there, but rather what sets its value. In particular, why is it so small? One answer, coming from the ERG approach, is that there is an infrared fixed point at which lambda = 0. Hence on large scales lambda is small.
 
  • #5


Finbar said:
Lambda is the coupling to the volume of spacetime. It belongs on the left-hand side; you can see this from the fact that G does not couple to it, whereas G does couple to the energy-momentum.

The question is not why it is there, but rather what sets its value. In particular, why is it so small? One answer, coming from the ERG approach, is that there is an infrared fixed point at which lambda = 0. Hence on large scales lambda is small.

http://arxiv.org/abs/0910.5167 figs. 3 and 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems?
 
  • #6


atyy said:
http://arxiv.org/abs/0910.5167 figs. 3 and 5 seem to suggest only a UV fixed point?

I think Xu and Horava http://arxiv.org/abs/1003.0009 got an IR fixed point, but not at z=1, which I think is what one would like?

BTW, has the evidence shifted away from AS now that CDT seems to have gone over to Horava? And does that mean that CDT is also problematic, since Horava seemed to have all sorts of problems?

The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz invariance (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.
 
  • #7


Finbar said:
It belongs on the left-hand side; you can see this from the fact that G does not couple to it...

The question is not why it is there but rather what sets its value.

Yes. Just like other constants, why is alpha = 1/137?
Why is the ratio of electron mass to Planck mass so small?
I think part of the aim of the paper is to deflate some of the hype surrounding this particular constant.
Not to say it's not interesting though! It would be great to get some handle on why it's that particular size.
 
  • #8


Finbar said:
The IR fixed point is at the origin in fig 3. As you can see there is a trajectory that goes from the UV fixed point to the IR fixed point. But it would seem odd that we sit exactly on that trajectory.

Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?

Finbar said:
I don't know what you're on about with CDT, AS and Horava. CDT does not violate Lorentz invariance (at least not in the way Horava does). CDT uses the Regge action, which is a discrete version of Einstein-Hilbert. So I don't see why you think there is a connection between CDT and Horava? If there is, it's not obvious.

I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298
 
  • #9


atyy said:
Is that really a fixed point? Also, if it is, is its stability such that it would explain the cosmological constant (apparently not, since you say it's odd we'd be exactly on that trajectory)?



I'm thinking of - ?
http://arxiv.org/abs/0911.0401
http://arxiv.org/abs/1002.3298

Hmm, well, both papers mention AS and Horava. I think the CDT guys are hoping that it can be both AS and Horava, depending on how they tune their parameters.


The Gaussian fixed point is always going to be there, as it just corresponds to the vanishing of the dimensionless couplings. But I don't think you need to be on a trajectory that flows to the IR fixed point. Better to read this paper:

http://arXiv.org/abs/hep-th/0410119

"Assuming that Quantum Einstein Gravity (QEG) is the correct theory of gravity on all length scales we use analytical results from nonperturbative renormalization group (RG) equations as well as experimental input in order to characterize the special RG trajectory of QEG which is realized in Nature and to determine its parameters. On this trajectory, we identify a regime of scales where gravitational physics is well described by classical General Relativity. Strong renormalization effects occur at both larger and smaller momentum scales. The latter lead to a growth of Newton's constant at large distances. We argue that this effect becomes visible at the scale of galaxies and could provide a solution to the astrophysical missing mass problem which does not require any dark matter. We show that an extremely weak power law running of Newton's constant leads to flat galaxy rotation curves similar to those observed in Nature. Furthermore, a possible resolution of the cosmological constant problem is proposed by noting that all RG trajectories admitting a long classical regime automatically give rise to a small cosmological constant."
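To see why sitting exactly on a trajectory into the IR fixed point looks fine-tuned, here is a deliberately oversimplified toy flow in Python. These are NOT the actual ERG beta functions of QEG; they are a made-up linear caricature with a Gaussian fixed point at the origin, chosen only to illustrate how a relevant direction amplifies any deviation from the critical trajectory on the way to the IR.

```python
def flow(lam0, g0, t_end=-5.0, dt=-1e-3):
    """Toy RG flow toward the IR (t = ln k decreasing).

    Caricature beta functions (NOT the real QEG ones):
        d(lam)/dt = -2*lam + g
        d(g)/dt   =  2*g
    with a Gaussian fixed point at (lam, g) = (0, 0).
    """
    lam, g, t = lam0, g0, 0.0
    while t > t_end:
        lam += dt * (-2.0 * lam + g)
        g += dt * (2.0 * g)
        t += dt
    return lam, g

g0 = 0.1
# On the critical trajectory (here lam0 = g0/4) the flow heads into the origin...
lam_on, _ = flow(lam0=g0 / 4.0, g0=g0)
# ...but a start that is off by only 1e-6 is blown up by the relevant
# (growing-toward-the-IR) mode long before the IR is reached.
lam_off, _ = flow(lam0=g0 / 4.0 + 1e-6, g0=g0)

print(lam_on, lam_off)   # lam_on stays tiny; lam_off does not
```

With these numbers lam_on ends up around 1e-6 while lam_off grows to order 1e-2, which is the toy version of Finbar's point: generic trajectories miss the IR fixed point.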
 
  • #10


tom.stoer said:
The cosmological constant becomes a mystery as soon as you do not write it on the left-hand = "the gravity" side of the equations

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}[/tex]

but if you write it on the right-hand = "the matter" side:

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4}T_{\mu\nu} - \Lambda g_{\mu\nu}[/tex]

In vacuum (with T=0) you still have some kind of "matter" which affects spacetime:

[tex]R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = - \Lambda g_{\mu\nu}[/tex]

If you leave this term on the left-hand side, the question of where it comes from and why it is there is still open, but it is not a question about matter, dark energy or anything like that; it is a question about gravity.

I'd like to get into the habit of thinking of it on the left hand side.
However, I'm used to seeing the constant given in the form OmegaLambda. A common estimate is OmegaLambda = 0.73.

That means the (fictional?) dark energy is 73% of the critical density. If I express the critical density in terms of today's Hubble rate H, then what I seem to get is that

Lambda = 3 *H^2* OmegaLambda

= 3 * 0.73 * (71 km/s per megaparsec)^2 ≈ 1.16 × 10^-35 s^-2

Can you confirm that this is the right way to get the left-hand-side Lambda from the information we are usually given?
 
  • #11


Finbar posted:

Lambda is the coupling to the volume of spacetime. It belongs on the left-hand side; you can see this from the fact that G does not couple to it, whereas G does couple to the energy-momentum.

I REALLY like that thought! What's the origin of these two relationships? I don't mean I doubt it, but what led to these particular couplings? Is it of purely mathematical origin, or physical insight, or a combination?

"The man who had the courage to tell everybody that their ideas on space and time had to be changed, then did not dare predicting an expanding universe, even if his jewel theory was saying so..." and "Even a total genius can be silly, at times." ... suggests Einstein did not appreciate the nature of lambda early on.
 
  • #12


marcus said:
I'd like to get into the habit of thinking of it on the left hand side.

...

Can you confirm that this is right way to get the lefthandside Lambda from the information we are usually given?
Yes, I can confirm that this is the way you get OmegaLambda. But your interpretation
marcus said:
That means the (fictional?) dark energy is 73% of critical density.
means that (implicitly) you think about it as something that appears on the right hand side: density, dark energy, ...
 
  • #13


tom.stoer said:
Yes, I can confirm that this is the way you get OmegaLambda...

That's not what I was asking about. We are constantly being told that OmegaLambda = 0.73, or thereabouts. We can take that as the current estimate.

What I never (or almost never) see an estimate for is LAMBDA ITSELF. The genuine left-hand-side article.

That is what I want to calculate. I think it comes to 1.1 × 10^-35 s^-2, or thereabouts.

Would it be more correct to express it in units of reciprocal area, like meter^-2?

That's what I think of as a common unit for curvature.

What I want confirmation for, or at least your opinion on, is this: if we are thinking just of the left-hand-side form of the cosmological constant, which we are calling Lambda, and we are given the commonly published figures of 71 for the Hubble rate and 0.73 for the (fictional?) "dark energy fraction," do we then use the stated formula to get Lambda? Namely:

Lambda = 3 *H^2* OmegaLambda
 
  • #14


marcus said:
Lambda = 3 *H^2* OmegaLambda

Yes.
 
  • #15


marcus said:
Lambda = 3 *H^2* OmegaLambda

George Jones said:
Yes.

Good. Thank you, George. So since I'm always seeing the published figures
71 km/s per megaparsec and 0.73 (for the Hubble rate and the "dark energy fraction"), I can plug the blue thing into Google and get Lambda.

3*0.73*(71 km/s per megaparsec)^2

Anybody can do it themselves. If you paste that blue expression into the google window, what you get is:
1.15946854e-35 s^-2

So that is what Lambda "really is", if you round off appropriately:

Lambda = 1.16 × 10^-35 s^-2
plus or minus whatever uncertainty is contributed by the 0.73 and the 71.

And you can change the numbers 71 and 0.73 to agree with whatever the latest observations indicate, and the Google calculator will give you the corresponding estimate for Lambda.
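The same conversion is easy to script. A minimal Python sketch (assuming the 71 km/s/Mpc and 0.73 figures quoted above, and the standard megaparsec-to-metre conversion; c is only needed for the reciprocal-area form):

```python
# Compute the left-hand-side Lambda from the commonly published figures.
# Assumed inputs (from the discussion above): H0 = 71 km/s/Mpc, OmegaLambda = 0.73.

H0_kms_per_Mpc = 71.0       # Hubble rate, km/s per megaparsec
omega_lambda = 0.73         # "dark energy fraction" of the critical density

Mpc_in_m = 3.0857e22        # one megaparsec in metres
H0 = H0_kms_per_Mpc * 1000.0 / Mpc_in_m   # Hubble rate in s^-1

# Lambda = 3 * H^2 * OmegaLambda, in s^-2
Lam_per_s2 = 3.0 * H0**2 * omega_lambda

# The same constant as a curvature (reciprocal area): divide by c^2
c = 2.998e8                 # speed of light, m/s
Lam_per_m2 = Lam_per_s2 / c**2

print(f"Lambda = {Lam_per_s2:.3e} s^-2")   # ~1.16e-35 s^-2
print(f"Lambda = {Lam_per_m2:.3e} m^-2")   # ~1.29e-52 m^-2
```

The second number addresses the earlier question about reciprocal-area units: dividing the s^-2 form by c^2 gives the m^-2 curvature form.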

If Rovelli is right, along with the others who share his views on the subject, then this is a basic constant of nature, and we had better start getting to know it, getting used to it, and treating it with some of the same respect we normally show basic constants.
 
  • #16


Bianchi and Rovelli are not saying anything new, are they? Take, e.g., this 2007 review:

http://arxiv.org/abs/0705.2533
"The observational and theoretical features described above suggests that one should consider cosmological constant as the most natural candidate for dark energy. Though it leads to well known problems, it is also the most economical [just one number] and simplest explanation for all the observations. Once we invoke the cosmological constant, classical gravity will be described by the three constants G, c and Lambda"
 
  • #17


I agree, they are not saying anything new. They stress the following:
1) it is natural to consider Lambda as a constant of nature
2) one should distinguish between "QFT is the source of Lambda" and "QFT could cause corrections to the value of Lambda"
3) the reason for the puzzle is not Lambda, but Lambda on the right-hand side of the equation ...
4) ... plus an idea of how to calculate it, which fails by 120 orders of magnitude

Let's assume I have some biological theory; unfortunately it does not say much about mammals, birds, etc., but I claim that it provides an explanation of the zoogenesis of the duckbill from first principles. But when applied, the theory predicts the duckbill to look like an orca ...

Now I ask you: does this make the duckbill even more enigmatic, or does it mean that my theory is plainly wrong?
 
  • #18


I guess ultimately one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are as they are. Maybe a lot of the mystery around lambda is due to the fact that it appears to be so closely related (although there are certainly known problems with this) to the expected zero-point energy density that it's hard to resist the temptation that there is something more to it, something that may or may not explain the value of this constant in the deeper way we probably all seek anyway.

As far as I am concerned, I still seek a deeper understanding of the whole E-H action, where all terms and constants beg for explanation. My working picture at the moment, also in line with some of the statistical approaches to this, is to think of Einstein's equations as an equation of state defining an equilibrium, suggesting that the "constants" might not be proper constants; maybe they just appear constant to us at this moment. So maybe it's not that important what the values are; the most interesting thing is the logic that sets them. The actual logic today is our cosmological models: from the Hubble expansion we infer the value, but this is a human-level mechanism, and its value would in principle evolve with the evolution of human cosmology. We still don't have the deeper intrinsic physical mechanism for its evolution.

But I also think that seeking the answer in the form of hidden or dark matter or energy "out there" that is like ordinary matter, except not visible, might be a sidetrack. I think the "appearance" of hidden energy or an accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons rather than expanding universes. But that's still very underdeveloped.

/Fredrik
 
  • #19


Marcus: again, thanks for bringing another great paper to our attention...


Bianchi and Rovelli are not saying anything new, are they?

That seems right, but I did see some perspectives in the paper that were new to me, such as:

An effect commonly put forward to support the "reality" of such a vacuum energy is the Casimir effect. But the Casimir effect does not reveal the existence of a vacuum energy: it reveals the effect of a "change" in vacuum energy, and it says nothing about where the zero point value of this energy is.


and this regarding the coincidence argument:

In order for us to comfortably exist and observe it, we must be ... in a period of the history of the universe where a civilization like ours could exist. Namely when heavy elements abound, but the universe is not dead yet. ... in a universe with the value of lambda like in our universe, it is quite reasonable that humans exist during those 10 or so billion years when Omega_b and Omega_Lambda are within a few orders of magnitude from each other. Not much earlier, when there is nothing like the Earth, not much later when stars like ours will be dying.


FRA
I think the "appearance" of hidden energy or an accelerating universe could be better understood from the information point of view, where one considers expanding and accelerating event horizons rather than expanding universes.

yes!
I also think we are currently missing much of what appears to be an information-based universe.
 
  • #20


I guess ultimately, one would seek a deeper understanding of why all constants of nature are what they are, and why the laws are like they are

No guessing... absolutely "yes":

In gravitational physics there is nothing mysterious in the cosmological constant. At least nothing more mysterious than the Maxwell equations, the Yang-Mills equations, the Dirac equation, or the Standard Model equations. These equations contain constants whose values we are not able to compute from first principles. The cosmological constant is in no sense more of a "mystery" than any other among the numerous constants in our fundamental theories.

I wish the authors had instead acknowledged that they are ALL equally 'mysterious'...
 
  • #21


Naty1 said:
I wish the authors had instead acknowledged that they are ALL equally 'mysterious'...

Yes, but one can respond to this in two ways, either

1) stop worrying about the "mysterious lambda"

or

2) START worrying about all mysterious "constants". In the generic sense, "constant" could also apply to things other than numbers, for example "constant" symmetries, and thus "physical law" itself. I.e., the fact that it may be "just another constant" doesn't make it less mysterious. Maybe we just have more "clues" to this particular constant than to, say, the gravitational constant or Planck's constant?

/Fredrik
 
  • #22


START worrying about all mysterious "constants".

Makes sense, and I think in general scientists do... it still strikes me as odd that we don't have the first principles to determine the basics of the standard model... clearly we are missing some "information"... pun intended...
 
  • #23


The CC is not a problem for GR (well, except that it makes various cosmological solutions a little uglier), but really a generic quantum problem.

It doesn't matter which side of the field equations you put it on; the problem is the same, namely an unnatural cancellation between two quantities that a priori have nothing to do with one another (read: no known physical relation).

In the full quantum theory, we are interested in the expectation value of the stress-energy tensor <Tuv>, which in vacuum is proportional to <P> g_uv by Lorentz invariance. By inspection of Einstein's field equations, this is equivalent to adding a term to the effective cosmological constant.

lambda_effective = lambda + 8 pi G <P>, where lambda is just the old simple (undetermined) classical constant of integration and <P> is the energy density of the quantum vacuum.

Now (lambda_effective / 8 pi G) = P_v ~ 10^-47 GeV^4 by experiment (WMAP, say).

The problem is that we know how to calculate <P>: generically it is simply summing up all the normal modes of the zero-point energy of some field (or sets of fields), up to some cutoff. If we take the cutoff to be M_pl, <P> will be something like ~10^72 GeV^4,
and then you notice that (lambda / 8 pi G + <P>), in order to satisfy the experimental bounds, must delicately conspire to cancel to something like 120 decimal places. The problem is that there is no obvious physical reason why lambda / 8 pi G (a quantity arising from a classical equation) should have anything whatsoever to do with <P> (a quantum quantity). Now, if there were an unknown symmetry that related them, you might venture to guess that they could cancel exactly, but no such symmetry is known, and worse, they don't cancel exactly.

I should point out that you can't get around this miracle trivially. Even if you completely ignore everything from the electroweak scale all the way up to the Planck scale and instead consider only standard-model physics, setting the cutoff to something like, say, the QCD scale, you still have about 40 orders of magnitude worth of decimal places to account for.
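The orders-of-magnitude counts in this argument are easy to reproduce. A rough Python sketch (the 1/16π² prefactor is the standard one-field mode-sum estimate, and the cutoff values are conventional ballpark choices, so the counts come out near, not exactly at, the 120 and 40 quoted above):

```python
import math

rho_obs = 1e-47   # observed vacuum energy density, GeV^4 (WMAP-era ballpark)

def zero_point_density(cutoff_GeV):
    """Naive zero-point energy density of a single bosonic field:
    integrating (1/2)*k over d^3k/(2*pi)^3 up to the cutoff gives
    roughly cutoff^4 / (16 * pi^2)."""
    return cutoff_GeV**4 / (16.0 * math.pi**2)

for name, cutoff in [("reduced Planck scale", 2.4e18),
                     ("QCD scale", 0.2)]:
    rho = zero_point_density(cutoff)           # GeV^4
    orders = math.log10(rho / rho_obs)
    print(f"{name}: <P> ~ {rho:.1e} GeV^4, off by ~{orders:.0f} orders")
```

With the reduced Planck mass as cutoff the mismatch comes out near 120 orders of magnitude; with the QCD scale, near 40, matching the two cases discussed above.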

Now you are of course free to simply set the constant term equal to minus the tree-level or semiclassical contribution to <P>, so that the two cancel. The problem then is that you still have to talk about the shift of the vacuum energy by higher-order terms induced by radiative corrections, and so you have to arrange things so that each constant appearing in front of the counterterms is fine-tuned order by order, such that the final sum respects the experimental bound.

It is this, more than anything else, that really is the crux of the problem. The thing we measure in our telescopes is necessarily the full theory (because quantum mechanics is part of the real world). And in this quantum theory, nothing protects the vacuum energy from radiative corrections.

An analogous (though much less severe) problem occurs, for instance, when you consider the smallness of the neutrino mass relative to, say, the Higgs vev. If you naively proceed as above, you see that there is an unnatural cancellation that should take place. Of course, there we are rescued by a mechanism that forces the two terms involved in the cancellation to actually be close (this is called the seesaw mechanism).
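The seesaw mechanism mentioned here can be made concrete with the textbook 2x2 mass matrix; a small numerical sketch (the 100 GeV Dirac mass and the 10^15 GeV heavy scale are the usual illustrative inputs, not measured values):

```python
import math

m_D = 100.0   # Dirac mass, GeV (roughly the electroweak scale)
M = 1e15      # heavy Majorana mass, GeV (illustrative near-GUT scale)

# Eigenvalues of the seesaw mass matrix [[0, m_D], [m_D, M]] are
# (M +/- sqrt(M^2 + 4*m_D^2)) / 2.
disc = math.sqrt(M**2 + 4.0 * m_D**2)
m_heavy = (M + disc) / 2.0
# Numerically stable form of |(M - disc)/2|, avoiding catastrophic
# cancellation between two nearly equal large numbers:
m_light = 2.0 * m_D**2 / (M + disc)

# The light eigenvalue is ~ m_D^2 / M: small without any fine-tuning.
print(f"m_light ~ {m_light:.1e} GeV (m_D^2/M = {m_D**2 / M:.1e} GeV)")
```

Here m_light comes out around 1e-11 GeV, i.e. about 0.01 eV: small because the heavy scale suppresses it, not because two unrelated terms conspire to cancel.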
 
  • #24


It's worth pointing out how supersymmetry almost solves the problem and how it elucidates the nature of the issue.

Assume that the <P> appearing above were, for some reason, identically zero; then indeed lambda_effective would just be a constant of integration, and everyone would be happy. No rhyme or reason why it's small or big, or whatever. Who cares; it's just a number that experiment happened to find. You could invoke the anthropic principle trivially if you really wanted to at that point, and no one would mind.

And indeed, in exact rigid supersymmetry it was noted long ago that fermion loops exactly cancel boson loops and the net vacuum contribution is zero (at least perturbatively).

The problem is, exact supersymmetry is not the way the world works, and it must be broken. When you break rigid supersymmetry in this framework, you induce terms that necessarily set <P> != 0 (and if you include gravity and make the SUSY local, the superpotential and the Kahler potential will not in general exactly cancel!), and you are back to worrying about how weird it is for physical cancellations of that magnitude to take place (although now the problem is cut in two on a log scale and may also be sensitive to exactly where the supersymmetry-breaking scale is set).
 
  • #25


Haelfix,

I think you agree with Rovelli, but I am not sure. He does not deny that the smallness of lambda is a problem, but he says that one must distinguish between 1) a small classical (tree-level) lambda which stays small even when you calculate quantum corrections and 2) a zero classical lambda where the non-zero part is purely quantum mechanical.

So according to Rovelli, the problem is why lambda stays small even though it is subject to quantum corrections, not how quantum corrections create a lambda which is zero classically.

Compare it to the Higgs mass. It is unclear how the Higgs mass is protected against quantum corrections pushing it to the Planck scale. But these effects (or their absence) seem to have nothing to do with the creation of the Higgs mass at all. The same applies to all other parameters in the SM. It is a puzzle where they come from, but it is fairly well understood how they behave under quantum corrections (the Higgs mass being the exception).

What you are saying about SUSY is the core of the problem. Zero lambda would be fine, but a tiny lambda including quantum corrections from broken SUSY is a riddle. But if you look at all other classical field theories, they make perfect sense without quantum corrections in a certain regime. If you restrict yourself to this regime, there is no problem with the constants at all.
 
  • #26


I don't quite follow.

The way I read the paper was that it wasn't anything new relative to the standard story I told above; it just restated it differently.

To simplify the terminology, and with total abuse of notation and disregard for constants, I'll just state the above equation again: lambda_total = lambda_classical + lambda_quantum. Lambda_total is fixed by experiment.

You are free to set lt = lc, but then you have to explain why lq is zero (this is what I think he wants us to do). You can set lc = 0, but then you have to explain why lq is however many orders of magnitude different from what a QFT calculation tells us it should be. Or you can insist that there is some sort of mechanism that relates lc and lq such that they are extremely close, and the fine-tuning becomes natural.

The problem is the same in all three cases; it's basically just a relabeling of what you want to call things.
 
  • #27


Naty1 said:
makes sense and I think in general scientists do...still strikes me as odd that we don't have the first principles to determine the basics in the standard model...clearly we are missing some "information"...pun intended...

I think the intrinsic information view that I seek (rather than the extrinsic block-based info picture) is generally not something most physicists are officially interested in, as far as I can judge. Maybe secretly, but a lot of the reasoning in some research papers still maintains a somewhat realist view of physical law. I think this is a realist heritage we are still stuck with.

QM and relativity did away with some realism, but not all of it. Both are somehow attempts at acknowledging the incompleteness and relativity of nature, without acknowledging the incompleteness and relativity of physical law. There is an ambiguity there, IMO, because _information about physical law_ is not treated on the same footing as _information about the initial state_ of a system, when IMO it should be.

/Fredrik
 
  • #28


Haelfix said:
The problem is the same in all three cases; it's basically just a relabeling of what you want to call things.

I am not quite sure.

Let's try a different approach: for many constants in nature one expects that they are scale-dependent. If they aren't, one has to find a mechanism explaining why they are protected. What we measure are neither the bare nor the tree-level values, but always the "dressed" values, where all quantum corrections are already taken into account.

Now we split the constants into a tree-level part and a quantum-correction part (I do not know if this is really a good idea :-)

I think one can state the problem as follows:
1) if we believe in this "classical part + quantum correction part" story, then we have to solve two problems: what causes the classical part, and why should it be protected against scaling?
2) if we do not believe in this split, we have to solve the problem of what causes the cc at all.

I think what Rovelli is saying is that it's not clear to him why the mechanism which causes the existence of the cc should be the same as the mechanism that causes its scaling.

(example: we understand the mechanism which scales a mass term in QFT, but we do not understand where this mass term comes from; if we use the Higgs, again we understand the scaling of the Yukawa coupling, but we do not understand why it is there at all)
 
  • #29


Just one question here: why all this quibbling, if Marcus himself supports asymptotic safety? The small value of the cosmological constant is just a consequence of the non-trivial fixed point in G × Λ space, due to the nonperturbative renormalizability of the Einstein-Hilbert action.
 
  • #30


That would mean lambda is a term on the left-hand side and is somehow protected against UV corrections. It would solve the problem of its smallness, not of its existence.

What is the current status of asymptotic safety?
 
  • #31


Well, asymptotic safety doesn't work without lambda...
 
  • #32


Here's an alternative take on the CC problem:
http://arxiv.org/abs/1103.4841
The cosmological constant: a lesson from Bose-Einstein condensates
Stefano Finazzi, Stefano Liberati, Lorenzo Sindoni
(Submitted on 24 Mar 2011)
The cosmological constant is one of the most pressing problems in modern physics. In this Letter, we address the issue of its nature and computation using an analogue gravity standpoint as a toy model for an emergent gravity scenario. Even if it is well known that phonons in some condense matter systems propagate like a quantum field on a curved spacetime, only recently it has been shown that the dynamics of the analogue metric in a Bose-Einstein condensate can be described by a Poisson-like equation with a vacuum source term reminiscent of a cosmological constant. Here we directly compute this term and confront it with the other energy scales of the system. On the gravity side of the analogy, this model suggests that in emergent gravity scenarios it is natural for the cosmological constant to be much smaller than its naif value computed as the zero-point energy of the emergent effective field theory. The striking outcome of our investigation is that the value of this constant cannot be easily predicted by just looking at the ground state energy of the microscopic system from which spacetime and its dynamics should emerge. A proper computation would require the knowledge of both the full microscopic quantum theory and a detailed understanding about how Einstein equations emerge from such a fundamental theory. In this light, the cosmological constant appears even more a decisive test bench for any quantum/emergent gravity scenario.
5 pages, 1 figure
 
Last edited:
  • #34


The Finazzi-Liberati-Sindoni (FLS) paper could be something of a game-changer, so I want to back up and reconsider what I was saying. Here is an excerpt from their conclusions:
==quote FLS http://arxiv.org/abs/1103.4841 ==
...The implications for gravity are twofold. First, there could be no a priori reason why the cosmological constant should be computed as the zero-point energy of the system. More properly, its computation must inevitably pass through the derivation of Einstein equations emerging from the underlying microscopic system. Second, the energy scale of Λ can be several orders of magnitude smaller than all the other energy scales for the presence of a very small number, nonperturbative in origin, which cannot be computed within the framework of an effective field theory dealing only with the emergent degrees of freedom (i.e. semiclassical gravity).

The model discussed in this Letter shows all this explicitly. Furthermore, it strongly supports a picture where gravity is a collective phenomenon in a pregeometric theory. In fact, the cosmological constant puzzle is elegantly solved in those scenarios. From an emergent gravity approach, the low energy effective action (and its renormalization group flow) is obviously computed within a framework that has nothing to do with quantum field theories in curved spacetime. Indeed, if we interpreted the cosmological constant as a coupling constant controlling some self-interaction of the gravitational field, rather than as a vacuum energy, it would straightforwardly follow that the explanation of its value (and of its properties under renormalization) would naturally sit outside the domain of semiclassical gravity.

For instance, in a group field theory scenario (a generalization to higher dimensions of matrix models for two dimensional quantum gravity [19]), it is transparent that the origin of the gravitational coupling constants has nothing to do with ideas like “vacuum energy” or statements like “energy gravitates”, because energy itself is an emergent concept. Rather, the value of Λ is determined by the microphysics, and, most importantly, by the procedure to approach the continuum semiclassical limit. In this respect, it is conceivable that the very notion of cosmological constant as a form of energy intrinsic to the vacuum is ultimately misleading. To date, little is known about the macroscopic regime of models like group field theories, even though some preliminary steps have been recently done [20]. Nonetheless, analogue models elucidate in simple ways what is expected to happen and can suggest how to further develop investigations in quantum gravity models. In this respect, the reasoning of this Letter sheds a totally different light on the cosmological constant problem, turning it from a failure of effective field theory to a question about the emergence of the spacetime.
==endquote==

This is a brief paper (besides references, only 4 pages!) with potentially far-reaching implications, it seems to me. I don't recall our discussing it---any comments?
 
Last edited:
  • #35


More reasons to mistrust the "dark energy" interpretation of the cosmological constant (and the touted bafflement about its size) can be found in a review article for the special SIGMA issue on Loop gravity and cosmology, written by Larry Sindoni of AEI Potsdam.

http://arxiv.org/abs/1110.0686
Emergent models for gravity: an overview
L. Sindoni
(Submitted on 4 Oct 2011)
We give a critical overview of various attempts to describe gravity as an emergent phenomenon, starting from examples of condensed matter physics, to arrive at more sophisticated pregeometric models. The common line of thought is to view the graviton as a composite particle/collective mode. However, we will describe many different ways in which this idea is realized in practice.
54 pages. Invited review for SIGMA Special Issue "Loop Quantum Gravity and Cosmology".

I tend now to expect this Sindoni review of Emergent Gravity will become a basic well-cited paper, and that the SIGMA special LQG/C issue will constitute the next big Loop gravity book. Many of its chapters have now been posted as arxiv preprints. It's clearly going to be a valuable collection.

Lorenzo Sindoni gave a seminar in December 2008 on Emergent and Analog Gravity that is on video: http://pirsa.org/08120049/.
Stefano Liberati at SISSA was his advisor, PhD in 2009 if I remember right.
 
Last edited:
