New Barrow paper - A New Solution of The Cosmological Constant Problems

  • #1
nicksauce
New Barrow paper - "A New Solution of The Cosmological Constant Problems"

http://arxiv.org/abs/1007.3086

Anyone care to comment?

I haven't taken too careful of a look yet, because it is late, but I'll try to write a response to it tomorrow.
 
  • #2


nicksauce said:
http://arxiv.org/abs/1007.3086

Anyone care to comment?

I haven't taken too careful of a look yet, because it is late, but I'll try to write a response to it tomorrow.

I'm not sure I really have the expertise to comment on the naturalness or validity of this approach. However, I would like to comment that it seems to me this paper is tackling a non-problem. If a small value of the cosmological constant is required for structure formation to occur, then nobody could ever observe anything but a small value of the cosmological constant, and thus we have no right to be surprised that that's exactly what we observe.
 
  • #3
marcus

nicksauce said:
http://arxiv.org/abs/1007.3086

Anyone care to comment?

I haven't taken too careful of a look yet, because it is late, but I'll try to write a response to it tomorrow.

I'm looking forward to seeing your response. It's potentially very interesting.

=========================

In case anyone just browsing is curious, I'll copy the abstract here:

http://arxiv.org/abs/1007.3086
A New Solution of The Cosmological Constant Problems
John D. Barrow, Douglas J. Shaw
5 pages
(Submitted on 19 Jul 2010)
"We extend the usual gravitational action principle by promoting the bare cosmological constant (CC) from a parameter to a field which can take many possible values. Variation leads to a new integral constraint equation which determines the classical value of the effective CC that dominates the wave function of the universe. In a Friedmann background cosmology with observed matter and radiation content the expected value of the effective CC, is calculated from measurable quantities to be O(tU-2)~ 10-122 (in natural units), as observed, where t_U is the present age of the universe. Any application of our model produces a falsifiable prediction for Lambda in terms of other measurable quantities. This leads to a specific prediction for the observed spatial curvature parameter of Omegak0 = 5.2 x 10-5, which is of the magnitude expected if inhomogeneities have an inflationary origin. This explanation of the CC requires no fine tunings, extra dark energy fields, or Bayesian selection in a multiverse."


========================
Nicksauce, here is a place on page 1 where they refer to work in preparation:

"The variation leads to a new field equation which determines the value of λ, and hence the
effective CC, in terms of other properties of the observed universe. Crucially, one finds that the observed classical history naturally has tΛ ∼tU. Further details of our paradigm are presented elsewhere [26]. When it is applied to GR, λ (and hence Λ, except during phase transitions) is a true constant and is not seen to evolve. Hence, the resulting history is indistinguishable from GR with
the value of Λ put in by hand."

[26] D.J. Shaw and J.D. Barrow, in preparation (2010)

My sense of the situation is that this is just a 4-page note presenting results and that (without seeing detailed steps) one cannot yet say if they are sound. But I don't feel especially confident about that judgement.

In any case it is potentially interesting and worth discussing.
 
  • #4
marcus

The convention about Omega_k that cosmologists use is that if it is zero then Omega_total = 1 and the universe is spatially flat.
If Omega_k is positive then the spatial curvature is negative.

That is counterintuitive and bears remarking. By definition
Omega_total = 1 - Omega_k
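(Explicitly, with the definition the paper uses, Omega_k = -k/(a^2 H^2), so positive Omega_k corresponds to k < 0, i.e. negatively curved, open spatial sections.)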

They claim to PREDICT that Omegak = 0.000052

That is, their model predicts that the spatial curvature will turn out to be just slightly negative.
People measure it. There is a published 95% confidence interval from WMAP7 data.
I think that confidence interval would have to shrink down by a factor of 100 before it could begin to test the Barrow Shaw prediction.

That is, some space instruments would have to be 100 times better than the existing WMAP and Planck space instruments. It is possible. I am reasoning very loosely, just reacting mindlessly really. I mistrust what they are saying but it gets my attention.
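
Rough arithmetic, just to put a number on that (taking the WMAP7+BAO 95% bound to be of order |Omega_k| ≲ 0.01, which is my approximate recollection rather than an exact figure):
[tex]
\frac{\vert\Omega_k\vert_{95\%}}{\Omega_{k,\rm predicted}} \sim \frac{10^{-2}}{5.2\times 10^{-5}} \approx 200,
[/tex]
so a couple of hundred, the same order as the loose factor-of-100 estimate above.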
 
  • #5


marcus said:
The convention about Omega_k that cosmologists use is that if it is zero then Omega_total = 1 and the universe is spatially flat.
If Omega_k is positive then the spatial curvature is negative.

That is counterintuitive and bears remarking. By definition
Omega_total = 1 - Omega_k

They claim to PREDICT that Omegak = 0.000052

That is, their model predicts that the spatial curvature will turn out to be just slightly negative.
People measure it. There is a published 95% confidence interval from WMAP7 data.
I think that confidence interval would have to shrink down by a factor of 100 before it could begin to test the Barrow Shaw prediction.

That is, some space instruments would have to be 100 times better than the existing WMAP and Planck space instruments. It is possible. I am reasoning very loosely, just reacting mindlessly really. I mistrust what they are saying but it gets my attention.
My main concern here would be the cosmic variance limitation on curvature measurements. I'm not entirely sure what it is, but I worry it may be too close to this magnitude for the prediction to be measurable.

I should mention, however, that it isn't a CMB instrument that best estimates curvature, but instead a combination of CMB data with nearby data. Currently the best nearby sort of data to use is Baryon Acoustic Oscillations.

P.S. For the uninitiated, doing 100 times better in an experiment means you either need to reduce your noise by a factor of 100, or increase your data sample by a factor of 10,000.
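(The factor of 10,000 is just the usual [itex]1/\sqrt{N}[/itex] scaling of statistical errors, assuming roughly uncorrelated noise: to shrink [itex]\sigma \propto 1/\sqrt{N}[/itex] by a factor of 100 you need [itex]N \rightarrow 100^2 N = 10^4 N[/itex].)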
 
  • #6
Ich

P.S. For the uninitiated, doing 100 times better in an experiment means you either need to reduce your noise by a factor of 100, or increase your data sample by a factor of 10,000.
...and get rid of the systematics. I guess that's the real problem concerning CMB measurements.
 
  • #7


Ich said:
...and get rid of the systematics. I guess that's the real problem concerning CMB measurements.
Well, now it's the primary remaining problem. Basically, as your instrumental noise gets lower and lower, you need to do more and more work to deal with the systematics. WMAP was already pushing this boundary, and Planck will push it even further.

But systematics are an even bigger issue with cosmological observations that rely upon galaxies, because galaxies are just that much more complex a phenomenon. There is an order of magnitude more work involved in getting Baryon Acoustic Oscillation systematic effects nailed down (same with weak lensing surveys).
 
  • #8


marcus said:
My sense of the situation is that this is just a 4-page note presenting results and that (without seeing detailed steps) one cannot yet say if they are sound. But I don't feel especially confident about that judgement.

Upon a deeper reading, this is what I found as well. It could just be my lack of expertise, but I found their arguments very unclear/unconvincing. For example, the following is claimed without citation,

In this scenario, the effective value [itex]\Omega_k=-k/a^2H^2[/itex] today is an observer-dependent parameter that vanishes on average but has a characteristic variance, [itex]\sigma_k^2\propto Q^2[/itex] where [itex]\sigma_k\propto\Omega_m^{0.65}[/itex] for small [itex]\Omega_m=\kappa\rho_{m}/3H^2[/itex]. In our visible universe [itex]Q\approx 1.9\times10^{-5}[/itex], [itex]\Omega_{m}\approx 0.27[/itex], and so we find [itex]\sigma_{k}\sim 5\times 10^{-5}[/itex]

It sounds plausible, but it isn't exactly a convincing argument, at least to me. I am interested in seeing what is contained in this reference [26] paper in preparation, though, and will try to reserve judgement until then. As marcus said, this seems just like a paper presenting their results, rather than trying to present completely sound arguments for them.

So yeah, they predict a value for Omega_k that Planck will most likely not be able to verify. So I'm guessing not too much will come of this paper...
 
  • #9


Hi all. As one of the authors of this paper I might be well placed to answer some of the questions about it.

As has been stated before, this is just a short summary of calculations found in a longer paper that is currently at the advanced draft stage (we need to check for typos etc. and finish writing the summary, but all the calculations are finalized; all things being equal it will be on the arxiv within a month). For those interested in the technical details I suggest you wait till then. The calculations are fairly technical and long (48+ pages at the moment) and so might not interest everyone. For that reason we have attempted to summarize the results in a less technical manner in this short 4 page version.

The main result is really that, for a reasonable value of the effective spatial curvature, the observed cosmological constant has a value such that the total action [itex]I[/itex] is stationary, i.e.

[tex]
\frac{{\rm d} I}{{\rm d} \Lambda} = 0
[/tex]

for the observed value of [itex]\Lambda[/itex].

I notice from some of the posts that there are questions about the origin of the expected magnitude of [itex]\Omega_k[/itex].

For this we use a nice standard result that is easily rederived (we give such a rederivation in the long paper): if one starts with a linear perturbation of a flat FRW universe with matter and CC, then letting [itex]\delta_{\rm m}[/itex] be the perturbation in the matter density and defining:
[tex]
\nabla^2 \phi = \frac{3}{2}a^2 \Omega_{\rm m} H^2 \delta_{\rm m},
[/tex]
In the far past, [itex]\phi \rightarrow \phi_{I}(x)[/itex], and [itex]\phi_{I}(x)[/itex] is the primordial perturbation in the effective Newton's potential.

Then a surface [itex]\Sigma[/itex] of constant time and fixed boundary has a 3-volume that scales as [itex]a_{\Sigma}^{3}(t)[/itex] where [itex]a_{\Sigma}(t)[/itex] is an effective scale factor that obeys a Friedmann equation of a non-flat matter + Lambda FRW universe. The effective curvature is given by:
[tex]
k_{\rm eff} = \frac{10}{9} \left\langle\nabla^2 \phi_{I}(x)\right\rangle_{\Sigma}
[/tex]
where the angled brackets indicate an average over [itex]\Sigma[/itex].

Now inflation says that [itex]\phi_{I}(x)[/itex] comes from an almost gaussian distribution with vanishing mean. The two point function [itex]\langle \phi_{I}(x) \phi_{I}(x+y)\rangle[/itex] is also given in terms of the primordial power spectrum and transfer function. Thus on an average over many different hypersurfaces [itex]\Sigma[/itex], [itex]k_{\rm eff}[/itex] should vanish and we can also calculate its variance in terms of the primordial power spectrum and transfer function.

Now this calculation is for the effective curvature of a constant-time hypersurface, whereas in our model we want the effective curvature of the past light cone. To do this we average the curvature contribution to the Friedmann equation over the past light cone, i.e. we define a constant [itex]k_{\ast}[/itex], say, such that
[tex]
\left\langle \frac{k_{\rm eff}}{a^2}\right\rangle = \left\langle \frac{k_{\ast}}{a^2}\right\rangle
[/tex]
where the average is over the past light cone. Again on an average over many past light cones [itex]k_{\ast}[/itex] vanishes and we can calculate its variance in terms of the primordial power spectrum and transfer function. Doing this for our Universe gives the quoted [itex]\sigma_{k}[/itex]. [itex]\sigma_{k}[/itex] scales linearly with the magnitude of primordial fluctuations [itex]Q[/itex]. So that's where that comes from.

Another way of looking at it is to say that if the observable portion of our universe is slightly underdense at a level [itex]Q[/itex] compared with the whole inflated region, then we would have an effective curvature [itex]\Omega_{\rm k} \sim Q[/itex]. Inflation generates perturbations at the few [itex]\times 10^{-5}[/itex] level, so [itex]\vert \Omega_{\rm k}\vert \sim \text{a few} \times 10^{-5}[/itex].
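
Purely as a back-of-envelope (the precise order-unity coefficient comes from the full calculation in the long paper):
[tex]
\sigma_k \sim \mathcal{O}(1)\times Q \approx \mathcal{O}(1)\times 1.9\times 10^{-5} \sim \text{a few}\times 10^{-5},
[/tex]
which is where the quoted [itex]\sigma_k \approx 5\times 10^{-5}[/itex] sits.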

Whilst our predicted [itex]\Omega_{\rm k}[/itex] is almost certainly much too small to be seen by Planck, our model would nonetheless be ruled out by an observation of [itex]\Omega_{\rm k} <0[/itex] or [itex]\Omega_{\rm k} \gg \text{a few} \times 10^{-5}[/itex], both of which Planck could feasibly see. In this way the situation is similar to non-gaussianity in inflation. Standard single-field inflation predicts a non-gaussianity that is too small to see; however, it could still be ruled out by a detection of a much larger non-gaussianity (which some think is likely).

On the question of the extent to which the CC problem is or isn't a problem, I realize there are differing views here. Clearly it is a problem if the CC can fundamentally only take one value, since the value it takes is pretty weird in terms of size and its coincidence with our observation time. The only other explanation of the CC problems that I know of is based on arguments of Bayesian anthropic selection in a multiverse. This has its roots in an old argument put forward by Weinberg, and also Barrow and Tipler. Tegmark, Rees and others updated this in 2006. It involves noting that if [itex]\Lambda[/itex] were much (more than ~100-1000 times) larger than its observed value, the structure required for life such as ourselves could not form. If you have a multiverse where the value of the CC in each universe has an approximately uniform distribution near 0 (i.e. when the CC [itex]\ll M_{\rm pl}^4[/itex]), then you would naturally expect to live in a universe where the CC is close to the maximum value that still allows structure to form. We do indeed live in such a universe.

The potential problem here is the assumption about the distribution of the CC near zero. If, for instance, it is logarithmic or, as some proposed prior to the detection of [itex]\Lambda[/itex], proportional to [itex]\exp(1/\Lambda)[/itex], one would naturally expect the CC to be much, much smaller than its observed value. To date, I don't believe anyone really knows what the prior distribution of the CC in a multiverse scenario should be; uniform is reasonable, but so are other possibilities. That for me represents the more interesting aspect of the CC problem: not so much why it is not much bigger (it can't be, we couldn't exist) but why it is not much, much smaller. In the multiverse scenario it is a question of priors.

In our scenario, however, when the other properties of the observed universe are fixed there is only one allowed value of [itex]\Lambda[/itex]. This means that the distribution of [itex]\Lambda[/itex] is projected onto it from the distribution of the other properties of the universe, the most important of which seems to be the effective spatial curvature. That distribution is known, though (Gaussian), at least in inflationary models, so there is no ambiguity about prior distributions. A [itex]\Lambda[/itex] much smaller than the observed value would require a fine tuning of [itex]\Omega_{k}[/itex] that is highly unnatural.

Anyway, hope that helps,

Best
Doug

[EDITED to put in tex tags, thanks Chalnoth]
 
  • #10


Just fyi, for this board, you can insert latex into text by sandwiching the latex between the [noparse][itex][/noparse] and [noparse][/itex][/noparse] commands ($'s not required). For equations outside of text, the [noparse][tex][/tex][/noparse] tags are preferred.
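
For example, typing [noparse][itex]\Omega_k[/itex][/noparse] renders inline as [itex]\Omega_k[/itex], and the same expression between [noparse][tex][/tex][/noparse] tags is set as a displayed equation.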
 
  • #11


Also, just on the issue of cosmic variance. If you were trying to measure the true underlying spatial curvature of the universe, at and around the [itex]10^{-5}[/itex] level, on horizon scales you would be unable to separate contributions to it from inflationary perturbations because you only have one universe to observe, so you would be limited by cosmic variance. For our purposes though, what enters the equations is the effective spatial curvature including contributions from horizon-scale perturbations (in other words, underlying spatial curvature + cosmic variance), so that shouldn't be an issue in principle. One can see the effective spatial curvature as just a fitting parameter in the expansion history (that's the way it enters our calculations) and, at least in principle, one could measure this at the [itex]10^{-5}[/itex] level, although admittedly not any time soon.
 
