Dark Energy in Light of the Cosmic Horizon

In summary, the author of this paper suggests that what we take to be the current age of the universe is actually the horizon time th=R0/c, which is always shorter than the true age t0. This would remove the apparent coincidence between the cosmic horizon R0 and c times the perceived current age of the universe.
  • #1
wolram
arXiv:0711.4810
Title: Dark Energy in Light of the Cosmic Horizon
Authors: Fulvio Melia
Comments: Submitted to MNRAS
Subjects: Astrophysics (astro-ph)
Based on dramatic observations of the CMB with WMAP and of Type Ia supernovae with the Hubble Space Telescope and ground-based facilities, it is now generally believed that the Universe's expansion is accelerating. Within the context of standard cosmology, the Universe must therefore contain a third `dark' component of energy, beyond matter and radiation. However, the current data are still deemed insufficient to distinguish between an evolving dark energy component and the simplest model of a time-independent cosmological constant. In this paper, we examine the role played by our cosmic horizon R0 in our interrogation of the data, and reach the rather firm conclusion that the existence of a cosmological constant is untenable. The observations are telling us that R0=c t0, where t0 is the perceived current age of the Universe, yet a cosmological constant would drive R0 towards ct (where t is the cosmic time) only once, and that would have to occur right now. In contrast, scaling solutions simultaneously eliminate several conundrums in the standard model, including the `coincidence' and `flatness' problems, and account very well for the fact that R0=c t0. We show here that for such dynamical dark energy models, either R0=ct for all time (thus eliminating the apparent coincidence altogether), or that what we believe to be the current age of the universe is actually the horizon time th=R0/c, which is always shorter than t0. Our best fit to the Type Ia supernova data indicates that t0 would then have to be ~16.9 billion years. Though surprising at first, an older universe such as this would actually eliminate several other long-standing problems in cosmology, including the (too) early appearance of supermassive black holes (at a redshift > 6) and the glaring deficit of dwarf halos in the local group.
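(Just to make the coincidence in the abstract concrete, here is a rough numerical check of my own, not anything from the paper, using the usual concordance values H0 ≈ 70 km/s/Mpc and t0 ≈ 13.7 Gyr: the Hubble radius c/H0 and the light-travel distance c*t0 agree to within a few percent.)

[code]
# Back-of-the-envelope check of the R0 ~ c*t0 coincidence (my own sketch,
# not from the paper).  H0 = 70 km/s/Mpc and t0 = 13.7 Gyr are assumed
# concordance values, for illustration only.
c = 2.998e5              # speed of light [km/s]
H0 = 70.0                # Hubble constant [km/s/Mpc]
Mpc_km = 3.086e19        # kilometres per megaparsec
Gyr_s = 3.156e16         # seconds per gigayear

R_hubble = c / H0                          # Hubble radius c/H0 [Mpc]
ct0 = c * 13.7 * Gyr_s / Mpc_km            # light-travel distance c*t0 [Mpc]
print(R_hubble, ct0, ct0 / R_hubble)       # ~4280 Mpc, ~4200 Mpc, ratio ~0.98
[/code]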
 
  • #2
wolram said:
Our best fit to the Type Ia supernova data indicates that t0 would then have to be ~16.9 billion years. Though surprising at first, an older universe such as this would actually eliminate several other long-standing problems in cosmology.

In this scenario, no light was emitted for the first 4 billion years after t = 0. Any thoughts as to what was going on for those 4 billion years?
 
  • #3
I haven't fully digested this paper yet, but it does seem to contain some glaring problems. The alternative cosmology requires that there be ~70% dark energy density at recombination. Studies of the effect of even 1% of w < -1/3 material at that time have found it to be inconsistent with the observed CMB. This was not given even the slightest mention.

It also claims to resolve the substructure problem (more small DM halos are seen in simulations than in observations) but doesn't show how this would be the case in any detail. The overall normalisation of the amplitude of fluctuations is also in very good agreement for LCDM between the CMB and galaxy surveys. The model proposed in this paper would not agree with this, and again the issue isn't given any thought.

New proposals need to explain all the data, not just one small part, to be reasonable.

Methodologically, the approach to science taken here is also flawed. The author finds it completely inconceivable that a number measured in nature could be close to unity, and this is the sole basis for the rejection of LCDM, yet he makes this rather odd statement about his own model:

Unfortunately [the model presented] is not fully consistent with Type Ia supernova data, so either our interpretation of current observations is wrong or the Universe just doesn't work this way. But it's worth our while spending some time with it because of the sheer elegance it brings to the table.

The proposal is that if an observation gives a model a value near a number with a preconceived notion of being special (in this case unity), the model must be wrong. However, a model that in the opinion of its inventor is 'elegant' should be right, even if unsupported by data.

Such thinking was attractive to Aristotle, but science has moved on a little since then...
 
  • #4
Well, I've read through it once and still need to think about all the details as well, but I
don't see any glaring problems yet. First of all, I don't see any new models presented
here. He's just comparing existing models and showing that LCDM doesn't work. And it
doesn't work, not because something has a magical number of 1, but because it has a
magical number of one ONLY NOW, at this very moment, not 2 billion years ago, not
1/2 billion years ago, but now. That's weird by any account.

I think what's happening is that if w is < -1/3, then the universe has to be older than
13.7 billion years. Light produced before that time is gravitationally redshifted to infinity
at and beyond the horizon radius, at 13.5 billion years (if I got that right).
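(A crude way to see the age point: my own back-of-the-envelope, assuming a flat universe dominated by a single fluid with constant w, for which a ∝ t^(2/[3(1+w)]) and t0 = 2/[3(1+w)H0]; the value H0 = 70 km/s/Mpc is just illustrative.)

[code]
# Age of a flat, single-fluid universe with constant equation of state w:
#   a ~ t^(2/[3(1+w)])  =>  t0 = 2 / (3*(1+w)*H0),   valid for w > -1.
# My own sketch with H0 = 70 km/s/Mpc assumed, not a result from the paper.
H0_per_Gyr = 70.0 * 3.156e16 / 3.086e19    # H0 converted to 1/Gyr

for w in (0.0, -1.0 / 3.0, -0.5):
    t0 = 2.0 / (3.0 * (1.0 + w) * H0_per_Gyr)
    print(f"w = {w:+.2f}:  t0 = {t0:4.1f} Gyr")
# w = +0.00 ->  ~9.3 Gyr (matter only)
# w = -0.33 -> ~14.0 Gyr (equals 1/H0)
# w = -0.50 -> ~18.6 Gyr (older than the usual 13.7 Gyr)
[/code]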

He says in the paper that the dwarf halo problem goes away since there would have been
more time for hierarchical merging to deplete the lower end of the mass distribution. I
don't know if 3 or 4 billion years is enough to do that, but it would help I guess.

I also took a look at the earlier paper (The Cosmic Horizon). I didn't understand much
of that one since it's more mathematical, but this one is based heavily on that one.
You should take a look.
 
  • #5
I've had a brief look at the other paper you refer to, but haven't looked at it in detail.

patty144 said:
...it
doesn't work, not because something has a magical number of 1, but because it has a
magical number of one ONLY NOW, at this very moment, not 2 billion years ago, not
1/2 billion years ago, but now. That's weird by any account.

Why is it weird? If the value it takes now, and would only ever take once, were 2.6, would you be amazed? Clearly not. The only reason this is at all remarkable is a preconceived notion that the number one is special. This is not good science, but merely numerology!

patty144 said:
I think what's happening is that if w is < -1/3, then the universe has to be older than
13.7 billion years. Light produced before that time is gravitationally redshifted to infinity
at and beyond the horizon radius, at 13.5 billion years (if I got that right).

He says in the paper that the dwarf halo problem goes away since there would have been
more time for hierarchical merging to deplete the lower end of the mass distribution. I
don't know if 3 or 4 billion years is enough to do that, but it would help I guess.

Exactly, there would be more time for the dwarfs to merge, but is 3-4 Gyr enough? Who knows? You can't claim that this new model (where by new model I mean a dark energy that tracks the matter density perfectly for all time) removes the substructure problem without attempting even the most rudimentary analysis of structure growth in that model.

There are three main observational pillars of modern cosmology: distance-redshift measurements (the most important of which are the SN Ia), the CMB and large-scale structure. The success of LCDM is that it agrees with all three of these, with the same parameters. Making 'first order' calculations of the predictions of a given cosmological model for these three observables is something any decent cosmologist is capable of doing. Any new model proposed must therefore address all three, not just one as in the case of this paper. The analysis doesn't have to be too intricate, but it must be shown to be consistent at least to first order with these observations for anyone to take a new idea seriously.
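For what it's worth, here is the kind of 'first order' distance-redshift calculation I mean, just as a sketch of my own (the parameter values are illustrative, not the paper's fit): the distance modulus for flat LCDM versus a strictly coasting (a ∝ t) model.

[code]
import numpy as np
from scipy.integrate import quad

# First-order distance-redshift sketch (my own, for illustration only).
# Assumed values: H0 = 70 km/s/Mpc, flat LCDM with Omega_m = 0.3.
c, H0 = 2.998e5, 70.0              # km/s, km/s/Mpc

def dL_lcdm(z, Om=0.3):
    """Luminosity distance [Mpc] in flat LCDM."""
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + (1 - Om))
    Dc = quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]   # comoving distance / (c/H0)
    return (c / H0) * (1 + z) * Dc

def dL_coasting(z):
    """Luminosity distance [Mpc] in a spatially flat coasting (a ~ t) model."""
    return (c / H0) * (1 + z) * np.log(1 + z)

def mu(d_Mpc):
    """Distance modulus from a distance in Mpc."""
    return 5 * np.log10(d_Mpc * 1e6 / 10)

for z in (0.1, 0.5, 1.0):
    print(z, round(mu(dL_lcdm(z)), 2), round(mu(dL_coasting(z)), 2))
[/code]

Doing the analogous first-order exercise for the CMB acoustic scale and for the growth of structure is exactly what I would want to see before taking the scaling model seriously.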
 
  • #6
hmm, when I look at figure 1 in the paper, I don't see numerology. I see R/ct as a
function of time and I see that it heads towards the value 1 only at the present time. I
don't know, it sure looks weird to me. Are we that special that in the entire history of
the universe we live just at the right time for this to be happening?

When I look at figure 3, LCDM looks deplorable. It doesn't fit the SN1 data at all. Yet figure
9 shows that a tracking solution fits it with an amazing chi^2. Again, I don't know, but the comparison looks pretty amazing to me.

Again, I don't see where the new model is here. He is not presenting any new model. He's comparing models that have been around for years. I still don't see anything wrong with
what's in the paper.
 
  • #7
Wallace said:
...(where by new model I mean a dark energy that tracks the matter density perfectly for all time) ...

J. G. Pereira and R. Aldrovandi have a model where dark energy tracks the matter density, as I understand it. This is one of the things that has been worrying me about their model.
I was fearing that it was a fatal flaw.

Now this paper comes along and seems to make a virtue out of it! I will have a look.
 
  • #8
patty144 said:
hmm, when I look at figure 1 in the paper, I don't see numerology. I see R/ct as a
function of time and I see that it heads towards the value 1 only at the present time. I
don't know, it sure looks weird to me. Are we that special that in the entire history of
the universe we live just at the right time for this to be happening?

You have to establish the specialness of 'what is happening just now' in order to determine whether the fact that it only happens just now is weird. The coincidence problem always looks a lot worse in a log-log plot as well; a very small difference in time either way (a billion or two years either side of now) suffers the same issue, so we aren't that special.

In any case, the problem with testing a model by coincidence is that it comes down to an entirely subjective appraisal of what is 'special'. I prefer to let the data speak, rather than appeal to aesthetics.

patty144 said:
When I look at figure 3, LCDM looks deplorable. It doesn't fit the SN1 data at all. Yet figure
9 shows that a tracking solution fits it with an amazing chi^2. Again, I don't know, but the comparison looks pretty amazing to me.

Those plots are deplorable, but only because of the shiftiness the author has shown in putting them together! For the 'LCDM' plot, he has used the first-order canonical values of 0.3, 0.7 rather than the actual best-fit values for these parameters! If the LCDM fit were really that bad, we never would have settled on the model! I've played around with the SN data and cosmology enough to know that you can only make LCDM look that bad by trying.

By contrast, his scaling model has been fine-tuned to give the optimal fit. In any case, he has still only addressed one of the three main pieces of evidence for LCDM. It is easy to beat a model that satisfies all the data with one that only satisfies a small part of it.

patty144 said:
Again, I don't see where the new model is here. He is not presenting any new model. He's comparing models that have been around for years. I still don't see anything wrong with
what's in the paper.

Sure, scaling-type solutions have been around for years, but this idea is quite different. The usual approach is for the scaling to moderate the change over time between the matter- and Lambda-dominated epochs, making the transition longer and reducing the coincidence problem. This approach is much more drastic, taking in some sense a 'perfect' scaling solution that keeps the energy densities not just similar but in a fixed ratio at all times.

The reason the proposed model makes no sense is that it completely alters the CMB signal and structure formation. The presence of all that negative-pressure material in the early universe suppresses the growth of structure in a way that a few more billion years is not going to make up for. The linear growth factor as a function of time is an elementary calculation that should have been performed for this model. It has either been omitted because it would show that the model fails at the first hurdle for structure formation, or because the author isn't aware that cosmology extends beyond supernova measurements.
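To be concrete about what I mean by 'elementary calculation', here is a rough sketch of my own (not anything from the paper) of the linear growth factor integration for flat LCDM; swapping in the H(a) and Omega_m(a) of the scaling model is all it would take to see how strongly the extra early-time negative pressure suppresses growth.

[code]
import numpy as np
from scipy.integrate import odeint

# Linear growth of matter perturbations (sketch of my own, illustrative only):
#   D'' + (2 + dlnH/dlna) D' = (3/2) Omega_m(a) D,   ' = d/dlna
# shown here for flat LCDM with Omega_m0 = 0.3.  A scaling dark-energy
# background would enter through a different E(a) and Omega_m(a).
Om0 = 0.3

def E(a):                               # H(a)/H0 for flat LCDM
    return np.sqrt(Om0 * a**-3 + (1 - Om0))

def Om_a(a):                            # matter density parameter at scale factor a
    return Om0 * a**-3 / E(a)**2

def dlnE_dlna(a, eps=1e-4):             # numerical logarithmic derivative of E
    return (np.log(E(a * (1 + eps))) - np.log(E(a * (1 - eps)))) / (2 * eps)

def rhs(y, lna):
    D, Dp = y
    a = np.exp(lna)
    return [Dp, -(2 + dlnE_dlna(a)) * Dp + 1.5 * Om_a(a) * D]

lna = np.linspace(np.log(1e-3), 0.0, 500)
sol = odeint(rhs, [1e-3, 1e-3], lna)    # matter-era initial condition D ~ a
print("D(a=1) relative to pure-matter growth:", sol[-1, 0])   # ~0.78 for LCDM
[/code]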

Either way the paper is not a fair assessment of the two main models it compares, and should not make the firm statements that it does without doing these additional calculations.
 
  • #9
Another 'problem' is the assumption Dark Matter is a single 'species' of particle. I'm very suspicious of this proposition. Introducing a variety of flavors of DM particles changes everything. We already know that neutrinos [a DM particle of sorts] come in a number of flavors. We also know they cannot comprise more than a tiny fraction of the total amount of DM in the universe.
 
  • #10
Fulvio Melia has also published a similar preprint, already accepted by MNRAS: The Cosmic Horizon
The cosmological principle, promoting the view that the universe is homogeneous and isotropic, is embodied within the mathematical structure of the Robertson-Walker (RW) metric. The equations derived from an application of this metric to the Einstein Field Equations describe the expansion of the universe in terms of comoving coordinates, from which physical distances may be derived using a time-dependent expansion factor. These coordinates, however, do not explicitly reveal properties of the cosmic spacetime manifested in Birkhoff's theorem and its corollary. In this paper, we compare two forms of the metric--written in (the traditional) comoving coordinates, and a set of observer-dependent coordinates--first for the well-known de Sitter universe containing only dark energy, and then for a newly derived form of the RW metric, for a universe with dark energy and matter. We show that Rindler's event horizon--evident in the co-moving system--coincides with what one might call the "curvature horizon" appearing in the observer-dependent frame. The advantage of this dual prescription of the cosmic spacetime is that with the latest WMAP results, we now have a much better determination of the universe's mass-energy content, which permits us to calculate this curvature with unprecedented accuracy. We use it here to demonstrate that our observations have probed the limit beyond which the cosmic curvature prevents any signal from having ever reached us. In the case of de Sitter, where the mass-energy density is a constant, this limit is fixed for all time. For a universe with a changing density, this horizon expands until de Sitter is reached asymptotically, and then it too ceases to change.

Garth
 
  • #11
Wallace said:
...The reason the proposed model makes no sense is that it completely alters the CMB signal and structure formation. The presence of all that negative-pressure material in the early universe suppresses the growth of structure in a way that a few more billion years is not going to make up for...

This is just the sort of thing that was worrying me over the past few days about the Pereira and Aldrovandi paper. I wonder if there is some way out of the difficulty.

Patty and Wallace, excuse me for interrupting your conversation. I find these two papers by Fulvio Melia very interesting.

In case either of you is curious here is the link to the paper I was reading earlier, which seems to bear some relation to these.

http://arxiv.org/abs/0711.2274
de Sitter Relativity: a New Road to Quantum Gravity
R. Aldrovandi, J. G. Pereira
17 pages
(Submitted on 14 Nov 2007)

"The Poincaré group generalizes the Galilei group for high-velocity kinematics. The de Sitter group is here assumed to go one step further, generalizing Poincaré as the group governing high-energy kinematics. Algebraically, this is done by supplementing spacetime translations with proper conformal transformations. This change in special relativity implies concomitant changes in general relativity -- yielding a de Sitter general relativity. The source current turns out to include now, in addition to energy-momentum, the proper conformal current, which appears as the origin of the cosmological constant. In consequence, it is no longer a free parameter, and can be determined in terms of other quantities. When applied to the propagation of ultra-high energy photons, de Sitter general relativity gives a good estimate of the time delay observed in extragalactic gamma-ray flares. It can, for this reason, be considered a new approach to quantum gravity."

They calculate a value for the effective Lambda which agrees roughly with observation, and they provide a mechanism explaining how the dark energy effect arises (it isn't put in by hand). They also calculate a delay for 10 TeV gamma rays which agrees roughly with what was reported by MAGIC from observations of Markarian 501 AGN flares. Both of these things would appear to be "too good to be true". Their gamma-ray dispersion relation exposes their model to immediate risk of falsification as soon as more AGN flare data appear.

Their de Sitter general relativity model is something they have been developing for quite a few years, and they are now beginning to derive numerical predictions from it. I find it very interesting that it has some apparent overlap with what Fulvio Melia is saying.
 
  • #12
Garth said:
Fulvio Melia has also published this similar preprint already accepted in MNRAS The Cosmic Horizon

Garth

Yes Garth, that is the paper which Patty was talking about earlier (post #4) called The Cosmic Horizon.
patty144 said:
I also took a look at the earlier paper (The Cosmic Horizon). I didn't understand much
of that one since it's more mathematical, but this one is based heavily on that one.
You should take a look.

The two papers should be read together. It is encouraging that both were submitted to MNRAS and the first one has already been accepted for publication. Since the second is a continuation, extending rather than repeating the first, I expect it too will be promptly accepted.
 
  • #13
marcus said:
The two papers should be read together. It is encouraging that both were submitted to MNRAS and the first one has already been accepted for publication. Since the second is a continuation, extending rather than repeating the first, I expect it too will be promptly accepted.

I'm not so sure. I still haven't looked at the first paper in detail, but it seems to be better than the second in that it outlines, as far as I can see, a previously unknown property of the FRW solution to GR, or at least illuminates the significance of a previously little-known result. But as I say, I can't assess it too much as I haven't read it properly yet.

The second paper though falls short for the reasons I've already stated.
 
  • #14
marcus said:
Yes Garth, that is the paper which Patty was talking about earlier (post #4) called The Cosmic Horizon.
Thank you marcus, sorry Patty144 - I didn't notice that comment of yours and BTW a very warm welcome to these Forums! :smile:

Wallace - although the LCDM model fits the data very well and the OP link paper could only have made the LCDM model "look that bad by trying", nevertheless the question arises of whether other models, such as those suggested in these papers, might also eventually fit the data as well.

I note the number of free parameters that are necessary in the mainstream model and are constrained to make that model fit the data, and my question is whether any proposed model is actually describing the real physics or just emulating it. A laboratory detection of the 'Inflaton' and of a DM particle with the required properties would help resolve this problem.

Garth
 
  • #15
Wallace said:
The second paper though falls short for the reasons I've already stated.

One of the main reasons you stated is structure formation, and the paper I mentioned by P and A hints at an answer to that one.

To roughly paraphrase, you pointed out that if DE is proportional to the matter density through all of history, then the early universe has much more DE than we are used to supposing. So clusters that are trying to form would be dissipated and scattered by the expansion and never get a chance, or would take much longer to form because of an uphill fight against all that expansion.

This was worrying me for several days as I was trying to read the Pereira Aldrovandi paper---so I think I know what you mean. They also have this roughly constant proportionality, but for them, DE DOES NOT GRAVITATE, because it is an effect that comes out of de Sitter GR. So they require a lot more DARK MATTER in their universe, so as to achieve critical density for spatial flatness.

This gives me the faint hope that (whether or not P and A are on the right track) something could work out that makes Fulvio Melia's idea OK----something that addresses your objection about structure formation.
 
  • #16
Heard an interesting journal-club style presentation on this today. Seems to be getting
a lot of attention. The reason his LCDM fit does poorly is that he's using basic LCDM, as
he states. The SN people get better fits because they need to introduce past
deceleration (beyond redshift 0.5) before acceleration. As such, his results agree with
them. They get a good fit only if they introduce new parameters into the model, which
is NOT basic LCDM.

After reading this paper through again carefully and listening to the talk, it's clear that he
is not a proponent of any one model. His main emphasis is simply to show that a
consideration of the Cosmic Horizon is necessary in order to interpret the data properly.
We rederived his equation (5), and as far as we can tell, it's correct and
cannot be ignored. LCDM doesn't seem to fit with this.

What's also interesting, and I hadn't noticed/heard before, is that scaling solutions have
the potential of eliminating both the coincidence and flatness problems, as he shows
with his equations (10)-(13). That might be new.
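(For what it's worth, a quick way to see the point; this is my own paraphrase rather than a transcription of his equations (10)-(13). If the dark-energy density tracks the matter density so that the total effective equation of state is [tex]w_{\rm eff} = -1/3[/tex], the acceleration equation [tex]\ddot{a}/a = -\tfrac{4\pi G}{3c^2}(\rho + 3p)[/tex] gives [tex]\ddot{a} = 0[/tex], hence [tex]a \propto t[/tex] and [tex]H = 1/t[/tex]. Taking R0 to be the Hubble radius [tex]c/H[/tex], which is what the enclosed mass-energy argument gives for a flat universe, one then has [tex]R_0 = ct[/tex] at every epoch, so [tex]R_0 = ct_0[/tex] holds automatically rather than only now. And since the total density then scales as [tex]a^{-2}[/tex], the same way as curvature, the flatness problem is also defused: Omega is not driven away from its initial value.)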

Also remember, an important point I heard this morning, is that just because you
have a scaling solution doesn't mean it has to go all the way back to the beginning.
There may be some effect that drops off when z reaches 5, 10 or beyond. We just
don't know. I think the main point is that certainly for z<3-5, scaling solutions work
_much_ better than LCDM. Take a look at Fig 9, in particular. A chi^2 of 1.001 is
pretty darned impressive, and there is no need to introduce deceleration followed by
acceleration.

This is fun!
 
  • #17
Is this talk available online?
 
  • #18
patty144 said:
Heard an interesting journal-club style presentation on this today. Seems to be getting
a lot of attention. The reason his LCDM fit does poorly is that he's using basic LCDM, as
he states. The SN people get better fits because they need to introduce past
deceleration (beyond redshift 0.5) before acceleration. As such, his results agree with
them. They get a good fit only if they introduce new parameters into the model, which
is NOT basic LCDM.

'Basic' LCDM inevitably contains deceleration before z~1 and acceleration afterwards. This is not artificially introduced just to fit the SN data! The Riess et al papers used LCDM only and fit the data without any additional hacks. The reason Melia's LCDM plot doesn't look good is that he is using [tex]\Omega_m = 0.3[/tex], [tex]\Omega_{\Lambda} = 0.7[/tex], which are not the best-fit LCDM parameters! Have a read of the Riess et al papers to see the better fits.
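Just to spell out the first point (a sketch of my own, with the illustrative values 0.3, 0.7; the precise best-fit numbers differ a little): the deceleration-then-acceleration history is already built into plain LCDM, with no extra parameters.

[code]
import numpy as np

# Deceleration parameter q(z) in flat LCDM (my own sketch, illustrative values):
#   q(z) = [0.5*Om*(1+z)^3 - OL] / [Om*(1+z)^3 + OL]
# With (Om, OL) = (0.3, 0.7) this is negative (accelerating) at low z and
# positive (decelerating) at high z, with nothing added by hand.
Om, OL = 0.3, 0.7

def q(z):
    x = Om * (1 + z)**3
    return (0.5 * x - OL) / (x + OL)

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:4.2f}  q = {q(z):+.2f}")

z_transition = (2 * OL / Om)**(1/3) - 1   # q = 0 crossing, ~0.67 here
print("acceleration sets in below z ~", round(z_transition, 2))
[/code]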

patty144 said:
Also remember, an important point I heard this morning, is that just because you
have a scaling solution doesn't mean it has to go all the way back to the beginning.
There may be some effect that drops off when z reaches 5, 10 or beyond. We just
don't know. I think the main point is that certainly for z<3-5, scaling solutions work
_much_ better than LCDM. Take a look at Fig 9, in particular. A chi^2 of 1.001 is
pretty darned impressive, and there is no need to introduce deceleration followed by
acceleration.

I made that point previously in this thread: scaling solutions proposed in the past have not been the 'perfect' scaling proposed in this paper but, as you say, look more like LCDM in the early universe.

The overriding point, though, is that we can speculate about models for as long as we want, but when simple calculations relating them to data can be performed yet are ignored, the speculation is pointless.
 
  • #19
This is what is actually said in the paper: "Given the broad range of alternative
theories of dark energy that are still considered to be viable, it is beyond the scope
of this paper to exhaustively study all dynamical scenarios. Instead, we shall focus
on a class of solutions with particular importance to cosmology---those in which
the energy density of the scalar field mimics the background fluid energy density."

I don't think he's proposing any model. I may be wrong, but my interpretation is
different. I think the main point he's making is that one cannot ignore the role
of the Cosmic Horizon, as when he says: "In this paper, we examine
the role played by our cosmic horizon R0 in our interrogation of
the data..."

And then he claims that the behavior of LCDM shown in figure 1, which is far worse
than previously supposed for the coincidence problem, makes it very unlikely that
dark energy is due to Lambda.

To me, this odd behavior of LCDM, and the fact that a simple scaling solution fits
the SN data as well as shown in Figure 9, suggests that LCDM is in trouble. Then
there's the question of why Lambda should be so much smaller than predicted for
the vacuum energy in quantum mechanics. I looked up the Klypin paper, and according
to them, LCDM predicts 10 times as many small halos as are seen.

There's no doubt that with enough free parameters one can make LCDM fit the SN
data. But scaling solutions fit the SN data too. What I get out of the paper is that
LCDM doesn't explain R0=ct0 well at all, but scaling solutions do. It seems to me that
the balance is in favor of the latter. At the very least, one should keep an open mind.
I don't see why there's so much vested in LCDM that there's fear to consider something
else, especially if the data doesn't agree with it.
 
  • #20
patty144 said:
And then he claims that the behavior of LCDM shown in figure 1, which is far worse
than previously supposed for the coincidence problem, makes it very unlikely that
dark energy is due to Lambda.

We only have one universe. If the one we got to look at has a coincidence, then it has a coincidence. If that is what the data says then so be it.

patty144 said:
To me, this odd behavior of LCDM, and the fact that a simple scaling solution fits
the SN data as well as shown in Figure 9, suggests that LCDM is in trouble. Then
there's the question of why Lambda should be so much smaller than predicted for
the vacuum energy in quantum mechanics. I looked up the Klypin paper, and according
to them, LCDM predicts 10 times as many small halos as are seen.

Right, so the Klypin paper carefully calculates the structure expected for LCDM, and a well-known issue is found. The paper in question spends one sentence simply asserting that the scaling solution will fix this problem. No justification, no calculation, not even an order-of-magnitude or first-order estimate. It is easy to beat a model that has been rigorously interrogated with one whose predictions you simply make up to suit the data.

patty144 said:
There's no doubt that with enough free parameters one can make LCDM fit the SN
data.

You are missing the point: the issue is that figure 3 intentionally uses the wrong values of the LCDM parameters, not that it uses more or fewer 'free parameters'. It is clearly designed to make LCDM look bad. With the LCDM parameter values that are actually fitted to the data, it is a much better fit.

patty144 said:
But scaling solutions fit the SN data too. What I get out of the paper is that
LCDM doesn't explain R0=ct0 well at all, but scaling solutions do.

Again, the paper compares an LCDM model with intentionally ill-fitting parameters against a scaling solution that has an extra free parameter compared to LCDM and has its parameters properly fitted. Hardly a fair comparison!

patty144 said:
It seems to me that
the balance is in favor of the latter.

Only when you consider one small part of available cosmological data, and when you fiddle with the numbers for maximum effect!

patty144 said:
At the very least, one should keep an open mind.

There really needs to be a rule, let's call it Wallace's Law for the sake of argument, that as soon as you appeal to 'having an open mind' you must have run out of intelligent things to say and have therefore lost the argument. I have a very open mind when it comes to cosmology, as do most people who get this banal line thrown at them. The point is that I'll consider any idea on its merits. If an alternative idea has huge gaping holes compared to the current best bet, then of course I will express concern over those holes and demand that they be addressed.

It's not about having an open mind, but an active one that questions everything. I'll put my money on the model that best answers those questions.

My own research revolves almost entirely around non-LCDM models. The point is that you have to do the hard yards and actually consider the full implications of a model to do it properly, not make a smash-and-grab paper that takes a skewed view of a small subset of the data and then makes grandiose claims.

patty144 said:
I don't see why there's so much vested in LCDM that there's fear to consider something
else, especially if the data doesn't agree with it.

Again, your emotive appeal to 'a fear' of considering alternatives is ill-considered. We know there isn't perfect agreement between all the data and LCDM, but if we try to pretend that we've fixed things by ignoring the majority of the data, we aren't getting anywhere.
 
  • #21
sysreset said:
In this scenario, no light was emitted for the first 4 billion years after t = 0. Any thoughts as to what was going on for those 4 billion years?
a huge black hole consumed the whole universe or was the original universe
 
  • #22
andrewj said:
a huge black hole consumed the whole universe or was the original universe

Seriously?? I don't think so. We would still be in there. An entire universe with a 13 billion light year cosmic horizon inside a black hole...
 
  • #23
Wallace said:
...The point is that I'll consider any idea on its merits...

Wallace and Patty, I sense that both of you are in or around grad/postdoc level cosmology and astrophysics.
I would consider myself fortunate if you would have a look at this recent Pereira Aldrovandi paper that has been gnawing at me for a week or so.

I will get the link, in case you would be willing to look it over and respond. The point is they have a SCALED relation between the effective Lambda and the matter density.

In the dust case they derive a scaling ratio of 2. This is similar to the figure I saw in Fulvio Melia's paper of around 2.33. My sense is that this paper tends to reinforce that of Melia, or vice versa. Melia says that a scaled model has a chance of fitting the data in a not terribly half-assed way, which gives support to P&A, while P&A provide a mechanism for why you get a scaled model: it comes out of their (de Sitter modified) Einstein field equation, which does not have a Lambda term.

http://arxiv.org/abs/0711.2274
de Sitter Relativity: a New Road to Quantum Gravity
R. Aldrovandi, J. G. Pereira
17 pages
(Submitted on 14 Nov 2007)

"The Poincaré group generalizes the Galilei group for high-velocity kinematics. The de Sitter group is here assumed to go one step further, generalizing Poincaré as the group governing high-energy kinematics. Algebraically, this is done by supplementing spacetime translations with proper conformal transformations. This change in special relativity implies concomitant changes in general relativity -- yielding a de Sitter general relativity. The source current turns out to include now, in addition to energy-momentum, the proper conformal current, which appears as the origin of the cosmological constant. In consequence, it is no longer a free parameter, and can be determined in terms of other quantities. When applied to the propagation of ultra-high energy photons, de Sitter general relativity gives a good estimate of the time delay observed in extragalactic gamma-ray flares. It can, for this reason, be considered a new approach to quantum gravity."

They calculate a value for the effective Lambda which agrees roughly with observation, and they provide a mechanism explaining how the dark energy effect arises. They also calculate a delay for 10 TeV gamma rays which agrees roughly with what was reported by MAGIC from observations of Markarian 501 AGN flares. Both of these things would appear to be "too good to be true". Their gamma-ray dispersion relation exposes their model to immediate risk of falsification.

====EDIT TO REPLY TO OLDMAN'S POST #25 BELOW====
Thank you, a friend in need is a friend indeed.
oldman said:
It's been gnawing at me also. This paper seems to me to have a stimulating mix of constructive ingredients: the authors build on and extend SR and GR, rather than crankily trying to undermine these cornerstones; they re-interpret the cosmological constant in a (to me) novel way; they point out that "...there exists a deep relationship between optical media and metrics ... (which) allows (one) to reduce the problem of the propagation of electromagnetic waves in a gravitational field to the problem of wave propagation in a refractive medium in flat spacetime", and go on to apply this relationship to explain actual observations (gamma-ray flares).

But I'm unfamiliar with such things as "proper conformal currents" and "the Wigner–Inönü processes of group contraction and expansion" that underlie their suggestions. So who am I to judge? I'd really appreciate enlightened comments on this paper from folk who understand such matters.

Is this a very important paper that is concordant with some of the suggestions in Fulvio Melia's paper, under discussion here? Or not?

About Wigner-Inönü: it happened that when John Baez was posting here for a few weeks last year, he explained Wigner-Inönü contraction intuitively using the example of the de Sitter group contracting down to Poincaré (or vice versa, my memory of this is somewhat faded).
I think he may have used a lower-dimensional example also, like the circle group going to the additive group of R as the circle gets larger. It seemed that Wigner-Inönü contraction was basically very intuitive: you have two groups that you can parametrize in a compatible way, and you find that one is the limit of the other as you take the parameter to zero or infinity. What Wigner and Inönü accomplished was to make the process rigorous in greater generality, instead of simply obvious in some simple cases.

When Baez was visiting here and talking about this, I remember seeing a paper about this by Inonu that was available online. I may be able to track it down.

What bothers me is the idea of a CONFORMAL CURRENT. You mention both this and the Wigner-Inönü business.
 
  • #24
Our best fit to the Type Ia supernova data indicates that t0 would then
have to be ≈ 16.9 billion years.

-------
http://arxiv.org/abs/0711.4181
The Cosmic Horizon
Authors: Fulvio Melia
(Submitted on 27 Nov 2007)

Birkhoff’s theorem states that the metric inside an empty spherical cavity, at the center of a spherically symmetric system, must be equivalent to the flat-space Minkowski metric.
Space must be flat in a spherical cavity even if the system is infinite. It matters not what the constituents of the medium outside the cavity are, as long as the medium is spherically symmetric.
If one then imagines placing a spherically symmetric mass at the center of this cavity, according to Birkhoff’s theorem and its corollary, the metric between this mass and the edge of the cavity is necessarily of the Schwarzschild type.
This consequence of the corollary to Birkhoff’s theorem is so important—and critical to the discussion in this paper—that it merits re-statement: the spacetime curvature of a worldline linking any point in the universe to an observer a distance R away may be determined by calculating the mass-energy enclosed within a sphere of radius R centered at the origin (i.e., at the location of the observer). The mass-energy outside of this volume has a net zero effect on observations made within the sphere.
There is little doubt that a cosmic horizon exists.
It is required by the application of the corollary to Birkhoff’s theorem to an infinite, homogeneous medium, and there is some evidence that we have already observed phenomena close to it. However, it may be that observational cosmology is not entirely consistent with the condition R0 ≈ ct in the current epoch. If not, there must be some other reason for this apparent coincidence. Perhaps the assumption of an infinite, homogeneous universe is incorrect. Whatever the case may be, the answer could be even more interesting than the one we have explored here.
-----------
Birkhoff’s theorem seems to be the building block of this model, and it deserves some discussion.
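Working the quoted statement through, as a sketch of my own arithmetic rather than anything taken from the paper: if one defines the horizon by the Schwarzschild-like condition [tex]R_0 = 2GM(R_0)/c^2[/tex], with [tex]M(R_0) = \tfrac{4\pi}{3}R_0^3\,\rho/c^2[/tex] the enclosed mass-energy, then solving gives [tex]R_0 = \sqrt{3c^4/(8\pi G\rho)}[/tex]. Since the flat-universe Friedmann equation is [tex]H^2 = 8\pi G\rho/(3c^2)[/tex], this is just [tex]R_0 = c/H[/tex], the Hubble radius. The observed coincidence [tex]R_0 \approx ct_0[/tex] is then the statement that [tex]1/H_0 \approx t_0[/tex], which is where the whole discussion starts.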
jal
 
  • #25
marcus said:
Wallace and Patty...I would consider myself fortunate if you would have a look at this recent Pereira Aldrovandi paper that has been gnawing at me for a week or so...http://arxiv.org/abs/0711.2274
de Sitter Relativity: a New Road to Quantum Gravity
R. Aldrovandi, J. G. Pereira

It's been gnawing at me also. This paper seems to me to have a stimulating mix of constructive ingredients: the authors build on and extend SR and GR, rather than crankily trying to undermine these cornerstones; they re-interpret the cosmological constant in a (to me) novel way; they point out that "...there exists a deep relationship between optical media and metrics ... (which) allows (one) to reduce the problem of the propagation of electromagnetic waves in a gravitational field to the problem of wave propagation in a refractive medium in flat spacetime", and go on to apply this relationship to explain actual observations (gamma-ray flares).

But I'm unfamiliar with such things as "proper conformal currents" and "the Wigner–Inönü processes of group contraction and expansion" that underlie their suggestions. So who am I to judge? I'd really appreciate enlightened comments on this paper from folk who understand such matters.

Is this a very important paper that is concordant with some of the suggestions in Fulvio Melia's paper, under discussion here? Or not?
 
  • #26
Wallace, I've read both of the cosmic horizon papers several times now, and I can't see the problems you're describing in them. For example, it's quite a stretch to claim that he's purposely fiddling with the numbers to make one or another thing look bad. That's attributing a motivation that I just don't see. To me, the main point seems to be that one cannot ignore the cosmic horizon when interpreting the data. No one I've talked to agrees with you that figure 1 can be ignored. To keep saying that "if such a coincidence exists, then so be it" is to give up trying to understand what is really happening. There's always some reason why things like R0=ct0 come up in nature. LCDM probably is not it. Probably many other models don't work either. But no one seems to have realized that there exists this additional constraint in FRW. Probably the existence of this horizon affects your own research too.
 
  • #27
oldman said:
Is this a very important paper that is concordant with some of the suggestions in Fulvio Melia's paper, under discussion here? Or not?

Oldman, I appended my post #23 (back a few) to reply to yours at more length. Briefly, I do think Pereira Aldrovandi is concordant with what Melia says in the paper under discussion here, in particular by defining a mechanism that could cause what Melia calls a scaling solution to arise. What I'm eager to hear is some reasons why P&A is not what you say. Maybe someone active on CosmoCoffee will put it up for discussion there.
 
  • #28
Hi patty, welcome to PF! I'm leaning towards Wallace's side of the fence. My question is what 'cosmic horizon' are you talking about? I did not perceive the nature of your disagreement beyond that point.
 
  • #29
Hi Chronos. Take a look at arXiv:0711.4181, The Cosmic Horizon. This paper on Dark Energy is a followup to that one, which is being published by MNRAS. Thanks.
 
  • #30
String theories also have bouncing universes and cosmic horizons. Their calculations are done within a "closed universe", which can only be interpreted as lying within a greater cosmic horizon.
http://arxiv.org/abs/gr-qc/0506040
Regular two-component bouncing cosmologies and perturbations therein
Authors: V. Bozza, G. Veneziano
(Submitted on 7 Jun 2005 (v1), last revised 8 Sep 2005 (this version, v2))

The first proposal of a bouncing string cosmology was the so-called Pre-Big Bang scenario [1], which exploits the non-minimal coupling of the dilaton in superstring theory to drive a Pre-Big Bang super-inflationary epoch, ending when higher-order derivatives and/or loop corrections
become non-negligible. A second realization of the same idea was given by the ekpyrotic/cyclic scenario [2], inspired by Horava–Witten braneworlds [3]. Here the pre-bounce is characterized by the slow approach of two parallel branes, which eventually collide ...
Note: I would say two equal-size black holes collide and the cosmic horizon is increased.

... and then move away, giving rise to an ordinary expanding universe on each of the two branes. Both scenarios, when viewed in a four-dimensional Einstein frame, can be represented by a universe bouncing from contraction to a standard expansion.
He then goes on to say:
“In the context of GR, the more conservative possibility is to study closed universes, where the bounce is possible without violating the NEC [10].”
Note: I would say, "within a cosmic horizon". Later, when he says "... far from the bounce", I would still put him within the "cosmic horizon" of the "collision of the black holes" that was originally created.
---------
 
  • #31
Recent post on this issue: http://xxx.lanl.gov/abs/1001.4795

Interestingly (and I say so as an author), the time average of the deceleration parameter
appears to be very close to zero.
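That remark can be checked quickly. Here is a little sketch of my own (not from the paper), using the identity q = d/dt(1/H) - 1, whose time average over the age of the universe is 1/(H0*t0) - 1, so an average q near zero is the same statement as t0 ≈ 1/H0:

[code]
import numpy as np
from scipy.integrate import quad

# Time-averaged deceleration parameter in flat LCDM (sketch of my own).
# Since q = d/dt(1/H) - 1 and 1/H -> 0 as t -> 0, the average over the
# age of the universe is <q> = 1/(H0*t0) - 1.
Om, OL = 0.3, 0.7                      # illustrative parameters
E = lambda a: np.sqrt(Om * a**-3 + OL)

# dimensionless age: H0*t0 = integral_0^1 da / (a*E(a))
H0t0 = quad(lambda a: 1.0 / (a * E(a)), 0.0, 1.0)[0]
print("H0*t0 =", round(H0t0, 3), " <q> =", round(1.0 / H0t0 - 1.0, 3))
# gives H0*t0 ~ 0.96 and <q> ~ +0.04 for these parameters
[/code]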


Through the Looking Glass: Why the "Cosmic Horizon" is not a horizon
Authors: Pim van Oirschot, Juliana Kwan, Geraint F. Lewis
(Submitted on 26 Jan 2010)

Abstract: The present standard model of cosmology, $\Lambda$CDM, contains some intriguing coincidences. Not only are the dominant contributions to the energy density approximately of the same order at the present epoch, but we note that contrary to the emergence of cosmic acceleration as a recent phenomenon, the time averaged value of the deceleration parameter over the age of the universe is nearly zero. Curious features like these in $\Lambda$CDM give rise to a number of alternate cosmologies being proposed to remove them, including models with an equation of state w = -1/3. In this paper, we examine the validity of some of these alternate models and we also address some persistent misconceptions about the Hubble sphere and the event horizon that lead to erroneous conclusions about cosmology.

Comments: Accepted for publication by MNRAS, 6 pages, 3 figures
Subjects: Cosmology and Extragalactic Astrophysics (astro-ph.CO)
Cite as: arXiv:1001.4795v1 [astro-ph.CO]
 

1. What is dark energy?

Dark energy is a theoretical form of energy that is thought to make up about 70% of the total energy in the universe. It is believed to be responsible for the accelerating expansion of the universe.

2. How does dark energy relate to the cosmic horizon?

The cosmic horizon is the maximum distance that light has been able to travel since the beginning of the universe. Dark energy plays a crucial role in the expansion of the universe, which in turn affects the size and position of the cosmic horizon.

3. How do scientists study dark energy in light of the cosmic horizon?

Scientists use various methods such as observing the brightness and distance of supernovae, measuring the cosmic microwave background radiation, and studying the large-scale structure of the universe to study the effects of dark energy on the cosmic horizon.

4. What is the significance of understanding dark energy in light of the cosmic horizon?

Understanding dark energy and its effects on the cosmic horizon can provide valuable insights into the past, present, and future of the universe. It can also help us understand the fundamental laws of physics and the nature of space and time.

5. Are there any current theories or explanations for dark energy in light of the cosmic horizon?

There are several theories and explanations for dark energy, but the most widely accepted one is the cosmological constant, which suggests that dark energy is a constant energy density that fills the entire universe. However, there are ongoing studies and research to further understand this mysterious force.
