Dark energy a furphy, says new paper

AI Thread Summary
A new paper by Wiltshire suggests that the age of the universe varies based on the observer's location, proposing it is 14.7 billion years from our galactic perspective but over 18 billion years from a void. Wiltshire argues that abandoning the Copernican principle and accepting a non-uniform distribution of matter could eliminate the need for dark energy. However, this approach raises concerns about scientific rigor, as it allows for excessive flexibility in modeling, complicating data fitting. Critics note that some quantum gravity theories necessitate a positive cosmological constant, which Wiltshire's model dismisses. The implications of a variable universe age and the potential for complex life in different cosmic environments remain topics of ongoing discussion.
SF
http://www.abc.net.au/science/articles/2007/12/21/2124258.htm

He makes some interesting claims:
The age of the universe also depends on where you're standing, as Wiltshire discovered in calculations published in the New Journal of Physics.

The universe is 14.7 billion years old, a billion years older than the currently accepted age, from our galactic observation point.

But it is more than 18 billion years old from an average location in a void.
 
Here are Wiltshire's preprints
http://arxiv.org/find/grp_physics/1/au:+Wiltshire_D/0/1/0/all/0/1
I believe the one he just published this week in PRL, that the newsletter mentions, is #4 on the list (Exact solution to the averaging problem in cosmology)
The other one, about time, that was mentioned further down, is #5 on the list (Cosmic clocks, cosmic variance and cosmic averages). That's the one published in New Journal of Physics.

Here at Cosmology Forum we have been watching with some interest for the past couple of years, as Wiltshire has repeatedly published papers arguing that by abandoning the Copernican principle and assuming a large-scale uneven distribution of matter we can do away with the need for dark energy. My feeling is that this is a drastic price to pay to get rid of something that may be required for other reasons, which Wiltshire does not acknowledge.

Kea, who used to post a lot at PF, often called attention to Wiltshire's ideas. She is a grad student at the same Kiwi university as Wiltshire, and knows him. He is a respected reputable cosmologist and is doing what scientists are supposed to do----explore alternatives.

There are several drawbacks. One is that abandoning the assumption of uniformity gives too much freedom. If you assume large-scale irregularity you can concoct pictures so as to make virtually ANYTHING happen. Cosmologists usually assume large-scale uniformity (a homogeneous, isotropic universe---the Cosmological Principle).
Those assumptions make it harder to fit model to data--there's less wiggle room to fudge, so they narrow the field of competing models----they give you traction.

It would be extremely inconvenient to give up the Cosmological Principle.

Although, as Wiltshire points out, giving it up would let you EXPLAIN things, by freely imagining a wealth of different distributions of matter out beyond where we can see.

That is one drawback---it would be like pouring oil on the road. Less traction. Harder to do science. But that is just a practical disadvantage.

Another drawback is that Wiltshire sets the cosmological constant to zero. BUT SOME UP-AND-COMING QUANTUM GRAVITY theories require a small positive cosmological constant.
They don't put it in just because they WANT it. The model forces there to be a Lambda and refuses to work without it.
I am not talking about some fine-tuned Lambda being needed to fit the data. I am saying Lambda is needed just for the theory to work at all.
Some of the leading QG contenders are like that. And also some of the newest arrivals.
Without saying which is which, I will mention a few QG approaches
Reuter QEG
Loll CDT
Pereira dS-GR (actually predicts the observed Lambda value in relation to matter density)
Sorkin Causal Sets approach (also predicts a value)

This is not to say that these QG approaches are right, but there seems to be an effective need for Lambda coming up in theories----in effect the theories are recognizing the cosmo constant and saying what it IS in their terms. Usually in these cases no particle or field is required, the effect of a small positive cosmological constant emerges from the theory.

I guess you could say that some or all of these approaches ALSO do away with the need for "dark energy" (in the sense of a mysterious field or particle with negative pressure) but they do away with it WITHOUT HAVING TO REARRANGE MATTER OUT BEYOND THE HORIZON, the way Wiltshire does.
=============

Anyway, if anyone is curious about what Wiltshire just got published in Physical Review Letters, the preprint is on that ArXiv list
 
The concept of the age of the universe being dependent on frame of reference seems quite solid to me, and you don't have to abandon the cosmological principle to embrace it. Due to time dilation in strong gravitational fields and in rapidly moving frames of reference, the age of the universe has to vary significantly, even within our observable universe with its lace-like structure. I hope much discussion of the implications of our "universe of variable age" continues.
 
sysreset said:
... Due to time dilation in strong gravitational fields and in rapidly moving frames of reference, the age of the universe has to vary significantly, even within our observable universe with its lace-like structure.

Yes that seems pretty straightforward. I haven't looked at his article about clock variation that was #5 on the list and published in New Journal of Physics. I mentioned it because it was referred to in the popular newsletter.

AFAIK Wiltshire doesn't say anything new or controversial in that one. But he may, you'd have to actually look at it.

What is commonly accepted is (what you say) that time varies a lot depending on where the clock is. Recession speed does not affect it. And the speeds that do have an effect are typically LOW, like a few hundred km/s (a tenth of a percent of the speed of light), which does not affect time very much. But depth in a gravitational field could-----you mention the lacey cobwebby structure. That may be all Wiltshire is talking about in the article, in which case it wouldn't be especially interesting.

You might want to take a look at the article and see if he gets into more radical territory. He may apply his notion of extra irregular density to explain acceleration as a local effect. That additional irregularity (which is so far just conjectural) would have an even larger effect on differences in time. There might be some observable consequences of that---but I'm only guessing.

Why not take a look at the Clocks paper? If Wiltshire's line of investigation interests you.
http://arxiv.org/abs/gr-qc/0702082
Ooops! I see that it is highly controversial too. It is not just about the well-established fact about time being slowed by gravity-well depth. It is another place where he goes whole-hog and explains away the cosmological constant. Well you still might want to glance at it.
 
SF, you may want to check out the thread at https://www.physicsforums.com/showthread.php?t=201702&page=4 here on PF.

Jon
 
"The age of the universe also depends on where you're standing, as Wiltshire discovered in calculations published in the New Journal of Physics.

The universe is 14.7 billion years old, a billion years older than the currently accepted age, from our galactic observation point.

But it is more than 18 billion years old from an average location in a void."

=================================================================

When I first read this, my first thought was: is it possible that there is an alien life form out there that is 3.7 billion years more advanced than us in evolution and technology? My second thought is that any life form reasonably similar to us would probably have to evolve on a planet that has the minimum mass to hold water on its surface and have an atmosphere. The gravitational field on such a planet would largely be determined by the mass of the planet and insignificantly affected by the location of the planet in a void or otherwise. If that is a correct assumption, then it is unlikely that there is a massive body supporting life forms that have a head start of billions of years over us. Does that seem reasonable?
 
Hi Kev,

I haven't done much reading on the topic of which galaxies might have the highest propensity for intelligent life at the present time. But I think the bottom line answer is that while there's a lot of speculation, no one really knows.

Obviously, only a very small percentage of all galaxies are located well inside voids, so their opportunity to develop a given number of planets with complex life is far, far less than for filament/wall galaxies. But a small percentage still includes a very large (and potentially infinite) number of stars, so there is lots of statistical room for something to happen, in some void somewhere.

An earth-type planet will develop only in a star system that has a reasonably high (but not too high) level of metallicity. Can't build Earth out of just hydrogen and helium. Moderately high metallicity typically requires the star to form in a relatively long-lived galaxy which has had (potentially multiple) generations of star birth and death, enabling metal elements to be created and distributed by supernovae. Stellar lifespans are believed to have been quite short in the early universe, as stars were very large but had little metal content.

New generations of active stellar birth tend to occur in regions (such as an AGN, an active galactic nucleus) that are disrupted by some powerful influence, such as shockwaves from a nearby black hole consuming stars, or from mergers between galaxies, dwarf galaxies and globular clusters. The more matter there is in a particular region, the more frequently these events occur. So overdense local regions probably are much more active in stellar regeneration in general than underdense local regions.

On the other hand, life as we know it is likely to be utterly obliterated in regions where powerful energetic events occur, such as gamma-ray and X-ray emissions in relativistic jets near some black holes, which can extend far enough that they could potentially destroy all life in a nearby galaxy aligned with the axis of the jet.

So, the odds suggest that development and continuation of complex life is most likely to occur in modestly overdense regions in which the structure has been relatively stable for many gigayears. Not surprisingly, our Milky Way galaxy resides in what seems to have been a fairly quiet corner of the local region, in a small filament not inside any massive rich galactic cluster, but nearby (attached to) a large supercluster. And our Sun is in a region of our galaxy which is at a relatively safe distance from the energetic events occurring in the galactic nucleus, but still within a "sweet spot" of moderately high metallicity.

As I understand it, the Milky Way galaxy is believed to have been a fairly stable structure for more than 10 Gy. It probably was formed by accretion of numerous smaller galaxies, and more recently has merged with some dwarf galaxies and globular clusters. But it is not believed to have merged with any other large galaxy (unlike Andromeda, which is believed to be the result of such a merger). And the level of perturbation in the past 5 Gy probably has not stirred up much energetic activity in the Milky Way's nucleus.

It is estimated that the Milky Way will merge with Andromeda 2-4 gigayears in the future, and that collision may stir up some dangerous activity in our nucleus, or not. Other than that, the future looks reasonably quiet for the galaxy, which probably is good news for us. But then, we simply don't know enough to predict various events which might prove catastrophic. And of course an isolated disruptive event such as an errant asteroid or comet collision could occur in our solar system which could wipe out life on Earth.

To make a long story short, my (relatively uneducated) guess is that complex life is statistically far less likely to arise in a void galaxy, since I predict that metallicity would be slow to rise to a sufficient level, although according to Wiltshire void galaxies could be up to 5 Gy behind us in metallicity development and yet still be "caught up" with us from an evolutionary perspective because of their faster clocks.

Jon
 
Jon, that was a very nicely written informative post. Thanks ;)

One small issue you did not address was the relative time dilation of a void compared to a cluster. I have not gone into the maths in great detail, but I imagine that a planet like ours, in a galaxy similar to ours but in a vast void, would not differ greatly in time rate. I have seen a formula for the proper time of an orbiting object that takes the orbital velocity, the altitude and the mass of the object it is orbiting into account. The distribution of matter outside its orbit (void or dense) would not appear to make much difference, and a difference in the region of 4 billion years seems difficult to justify. For example, the difference in time rate between a particle in the most empty remote part of space and a particle on the surface of the Earth would not be that great, and the vast majority of the difference would be due to the mass of the Earth and very little due to the mass of our galaxy or our velocity around the galaxy. To put it into context, the figures Wiltshire mentions imply a time dilation factor of around 30%, which is a huge amount, experienced only at a distance of twice the Schwarzschild radius from a black hole or when moving at a relative velocity of 70% of the speed of light.
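Kev's 30% figure can be sanity-checked with the textbook special- and general-relativistic dilation formulas (a rough sketch of standard SR/GR results, not Wiltshire's quasilocal calculation):

```python
import math

def gravitational_dilation(r_over_rs):
    """Clock-rate factor sqrt(1 - rs/r) for a static clock at radius r
    outside a Schwarzschild mass with Schwarzschild radius rs."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

def velocity_dilation(beta):
    """Clock-rate factor sqrt(1 - v^2/c^2) for a clock moving at v = beta*c."""
    return math.sqrt(1.0 - beta**2)

# At r = 2 rs, a static clock runs at ~71% of the far-away rate (~29% slow):
print(gravitational_dilation(2.0))    # ~0.707

# A clock moving at 0.7c likewise runs at ~71% of the rest rate:
print(velocity_dilation(0.7))         # ~0.714

# Typical galactic peculiar speeds (~300 km/s, beta ~ 0.001) barely matter:
print(1.0 - velocity_dilation(0.001)) # ~5e-7
```

Both of kev's benchmarks (r = 2 rs, v = 0.7c) indeed give roughly a 30% slowdown, while ordinary peculiar velocities give a dilation of less than one part per million.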
 
kev said:
I imagine that a planet like ours in a galaxy similar to ours but in a vast void would not differ greatly in time rate... To put it into context, the figures Wiltshire mentions imply a time dilation factor of around 30% which is a huge amount and only experienced at distance of twice the Schwarzschild radius from a black hole or when moving at a relative velocity of 70% the speed of light.

Hi Kev,

Your point is excellent, and is correct as a description of gravitationally bound systems.

Wiltshire's central thesis is that although the geometry of space within bound wall and filament systems is approximately flat, the geometry in large voids has significant negative (open) curvature. This follows logically from the significant (relative) underdensity of matter within voids. Negative curvature means that there is very powerful quasilocal (anti-) gravitational energy within voids. He uses a version of the Einstein equations (called Buchert's equations) to calculate that this gravitational energy differential, as compared to bound walls and filaments, causes void clocks to run significantly faster.

Wiltshire points out that our galaxy has been gravitationally stable for at least 10 Gy, so there has been that much time for void clocks to continue diverging from our local clocks. In addition, the clock discrepancy obviously is larger when measured by the faster void clocks than by our local clocks. Wiltshire considers void clocks to be the proper basis for cosmic measurements because voids dominate the current observable universe on a relative volume basis, constituting about 76% of the total volume.

His calculations are quite precise, subject to normal observational errors. In http://arxiv.org/abs/0709.2535v2 he calculates that the accumulated discrepancy of dominant void clocks compared to wall/filament clocks is 38%, as measured by void clocks.
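As a back-of-envelope consistency check (my own arithmetic, not Wiltshire's), the quoted ages from the news article can be compared against a present-day clock-rate ratio of ~1.38; the 18.0 Gyr figure below is just the "more than 18 billion years" lower bound from the article:

```python
# Back-of-envelope check relating the quoted ages to the ~38%
# present-day clock-rate discrepancy (my arithmetic, not Wiltshire's).
wall_age_gyr = 14.7        # age by wall (galaxy) clocks, per the article
void_age_gyr = 18.0        # "more than 18" billion years by void clocks
present_rate_ratio = 1.38  # present void/wall clock-rate ratio (Wiltshire)

age_ratio = void_age_gyr / wall_age_gyr
print(round(age_ratio, 3))  # ~1.224

# The accumulated age ratio should sit between 1 and the present-day
# rate ratio if the clock rates diverged gradually from ~1 early on:
assert 1.0 < age_ratio < present_rate_ratio
```

The accumulated age ratio (~1.22) being smaller than the present-day rate ratio (1.38) is what one would expect if the clocks started out synchronized and drifted apart over cosmic time.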

Wiltshire's GR math is complex. It looks logical to me, but it is beyond my mathematical ability to actually verify it. Feel free to help yourself!

Jon
 
  • #10
Wiltshire published another paper, http://arxiv.org/abs/0712.3984, on Christmas Eve. It provides a slightly less technical overview of his "Fractal Bubble" theory, together with some added commentary. He also says he will be publishing two additional papers on this subject soon, which I look forward to reading.

Jon
 
  • #11
jonmtkisco said:
Negative curvature means that there is very powerful quasilocal (anti-) gravitational energy within voids. He uses a version of the Einstein equations (called Buchert's equations) to calculate that this gravitational energy differential, as compared to bound walls and filaments, causes void clocks to run significantly faster.

Hi Jon.

I feel a bit uncomfortable with the terms "quasilocal (anti-) gravitational energy" in the voids. I did not notice the "anti-" in Wiltshire's papers. What do you mean by it? (Or point me to Wiltshire's definition).

Jorrie
 
  • #12
Jorrie said:
Hi Jon.

I feel a bit uncomfortable with the terms "quasilocal (anti-) gravitational energy" in the voids. I did not notice the "anti-" in Wiltshire's papers. What do you mean by it? (Or point me to Wiltshire's definition).

Jorrie

I suppose I too am a bit uncomfortable about the "anti-", because it is just my interpretation to try to make sense out of Wiltshire's technical jargon. Are you uncomfortable just because I added it, or because you think it's wrong?

In his 12/24 paper referenced above, Wiltshire describes why some recent analysis done by others has calculated negative quasilocal energy in a k = -1 negative curvature Friedmann universe (p.10):

"These results are expected in the current approach, since one is effectively subtracting a fiducial flat spacetime in each case, and the relative sign of energy depends on the observer. An isotropic k = 0 Friedmann observer has zero quasilocal energy in the approach of Chen, Liu, and Nestor; thus relative to the k = -1 geometry the k = 0 geometry has negative quasilocal energy, but conversely relative to the k = 0 geometry the k = -1 has positive quasilocal energy. Our viewpoint here will be that the fiducial reference point is the k = 0 geometry of the finite infinity region. This agrees with the Newtonian version of energy in the Friedmann equation, the LTB [Lemaitre-Tolman-Bondi] energy function, and with the idea that binding energy is negative."

I interpret Wiltshire to say that, from the perspective of an observer (such as us) located in a flat (k = 0) geometry, the geometry in a negatively curved (k = -1) void will have "positive" gravitational energy. Since gravitational binding energy is "negative", doesn't it make sense to interpret that positive gravitational energy as equivalent to (anti-) binding energy? I'd appreciate your analysis on this point.

It makes sense to me that any gravitational force associated with the negative curvature in voids would be "anti-gravitational", in the sense that it adds to repulsion rather than to attraction. It also makes void clocks run relatively faster, not relatively slower. Otherwise the underdensity and negative curvature of the void would tend to cancel each other out rather than reinforce each other. At the simplest level, I can't visualize a void as being a normal gravitational well.

Jon
 
  • #13
jonmtkisco said:
It makes sense to me that any gravitational force associated with the negative curvature in voids would be "anti-gravitational", in the sense that it adds to repulsion rather than to attraction. It also makes void clocks run relatively faster, not relatively slower. Otherwise the underdensity and negative curvature of the void would tend to cancel each other out rather than reinforce each other. At the simplest level, I can't visualize a void as being a normal gravitational well.

Jon, to me this is roughly analogous to the "void" between Earth and the Moon. Objects initially at rest near the Lagrangian point L1 will tend to free-fall away from it. This is not anti-gravity, but just normal gravity (and orbital dynamics), caused by the gravitational wells of the two massive orbiting bodies. Also, clocks near L1 will gain time on clocks on Earth or on the Moon. I suppose one can view the spacetime curvature at L1 to be negative, hence geodesics diverge.

In the same way I think of void galaxies as being 'attracted' to the walls, not being 'pushed' by some anti-gravity. Maybe one should rather say that the geodesics of the void galaxies diverge due to the negative spacetime curvature there.

jonmtkisco said:
I interpret Wiltshire to say that, from the perspective of an observer (such as us) located in a flat (k = 0) geometry, the geometry in negatively curved (k= -1) void will have "positive" gravitational energy. Since gravitational binding energy is "negative", doesn't it make sense to interpret that positive gravitational energy is equivalent to (anti-) binding energy?

Wiltshire's "quasi-local" gravitational binding energy in the voids is less negative than in the void walls, depending on what the reference point is. But I would not call that "(anti-) binding energy"!
 
  • #14
Hi Jorrie,

Hmmm, well first I think I need to change my terminology. Wiltshire's point is that the cosmic expansion is not accelerating per se; rather the acceleration is apparent only, a result of measuring the expansion rate of voids by reference to our wall clocks. If instead we referred to the dominant void clocks, Wiltshire says we would see the voids expanding at an Einstein-de Sitter rate which is appropriate for their average underdensity. This means that even in voids the expansion rate is decelerating, although it must be decelerating more slowly than in overdense regions. Deceleration of the voids must trend towards zero over time as their relative underdensity continues to increase.

In the absence of any true accelerative force in the voids in the Wiltshire model, I guess the negative curvature by definition isn't manifested as either an attractive or a repulsive force; it's not manifested as a force at all. It is manifested only as an Einstein-de Sitter underlying Hubble expansion.

Having said that, I don't understand Wiltshire's use of the term "positive gravitational energy". He clearly is portraying it as some kind of inverse to "negative" binding energy. Would it be more accurate to call it "kinetic energy of expansion" instead?

I do not agree with your more general point that void galaxies are only pulled by wall galaxies and are not pushed by the intrinsic expansion of space in the void. Even if you are referring only to Wiltshire's "apparent acceleration" of the voids as measured by wall clocks [edit: or as measured by void clocks], it seems obvious to me that over time a void, together with its integrated wall structure, is on net expanding faster than the cosmic-average Hubble rate, which certainly is uncharacteristic of a gravitationally bound structure.

I'm interpreting your push/pull argument as not being limited just to the Wiltshire model. On the assumption that's so, I'm going to write more on this subject in my thread "Hubble expansion in a contracting supercluster", which is directed more to that subject.

Jon
 
  • #15
2 cents

Just to start off, I think these ideas are really interesting and deserve some real thought. I think the terminology is confusing (finite infinity regions for example) and I don't understand a lot of it, but it does attempt to open up a very sensible line of reasoning. The FLRW metric that is used in the standard model can be used to determine many things but it is a metric in which there is NO structure. Everything is distributed evenly throughout all of space. Averaging over some very large distance this appears to be true, but it is not clear that all of our observations can be interpreted with this metric.

Consider the question "Why does the expansion of the Universe not pull our solar system apart, or pull the Moon and the Earth apart?" The usual answer is that gravitationally bound structures are not affected by cosmic expansion. The more correct (but in the same spirit) answer is that the solar system and our Earth/Moon system are not described by something like an FLRW metric. They are closer to a Schwarzschild metric if anything. Also our galaxy, our local group, and, in some important ways, the filamentary structure of the Universe are not described well by an FLRW metric. He (Wiltshire) is not saying that there is some special configuration of matter that is fine-tuned to produce cosmic acceleration; he is saying (I think) that if you properly take into account the observed structure of the Universe (the swiss cheese with large voids/filaments/sheets) then standard general relativity could explain our observation of acceleration.

Sitting in the middle of a large void is like being in an FLRW metric with a density less than the critical density, which could mimic an FLRW universe at critical density that is accelerating. This is an interesting (and short) paper by Caldwell and Stebbins about an idea for testing this kind of hypothesis.
http://arxiv.org/abs/0711.3459

Always good conversations on these forums ; )
 
  • #16
Hi Allday,
I think the general view is that Hubble expansion exists as a background phenomenon everywhere in the universe, in contention against the opposing local influence of gravitational acceleration. When the local space experiences net zero, or negative expansion (gravitational collapse), then obviously gravitational acceleration is dominant in that region. The one vector is directly subtracted from the other.

To answer your specific question, a number of calculations have been done regarding what the Hubble expansion effect is at the scale of our solar system. The answer seems to be that there is an effect, but it is utterly insignificant at our scale. One paper, http://lanl.arxiv.org/abs/astro-ph/9803097v1, says that at the scale of the Earth's orbit it is 44 orders of magnitude smaller than the Sun's gravity. Another paper pointed out that it's far too small to explain the Pioneer anomaly, and besides, that anomaly is toward us, not away from us.
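A crude order-of-magnitude sketch of why the effect is negligible at solar-system scales (my own estimate comparing acceleration scales; the cited paper's 44-orders figure uses a different measure, so the exponents differ, but the conclusion is the same):

```python
# Compare a characteristic cosmological acceleration scale at 1 AU
# against the Sun's Newtonian gravity there. Rough constants only.
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
AU    = 1.496e11    # m
H0    = 2.27e-18    # s^-1  (~70 km/s/Mpc)

# Newtonian gravitational acceleration of Earth toward the Sun:
g_sun = G * M_sun / AU**2   # ~5.9e-3 m/s^2

# Characteristic cosmological acceleration at radius r, of order H0^2 * r:
a_cosmo = H0**2 * AU        # ~7.7e-25 m/s^2

print(a_cosmo / g_sun)      # ~1e-22: utterly negligible
```

Even this generous estimate puts the expansion effect some 22 orders of magnitude below solar gravity at Earth's orbit, so it plays no role in solar-system dynamics.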

It is accurate to describe a non-expanding region of asymptotically flat vacuum around a concentrated mass as a Schwarzschild space. But that doesn't mean that anything fundamental has changed about the background Hubble expansion within that space; it just means that the local gravity vector happens to be dominating the expansion vector for now. If the object has peculiar motion, then each succeeding Schwarzschild region it departs will of course immediately rejoin the overall Hubble expansion once the gravitational influence has moved away.

Saying that "a lack of structure causes a higher rate of expansion" is not mutually exclusive with "the existence of structure causes a lower rate of expansion." It depends on the baseline you choose to start measuring from. The baseline for mainstream cosmology is the cosmic-average Hubble rate, so that seems to leave semantic room for both "peculiar expansion" and "peculiar contraction." In the Wiltshire model, there is no "real" acceleration of expansion in voids; the "apparent" acceleration is an artifact of the differing clock rates.

The idea that we may be located near the center of a local "Hubble Bubble" has been discussed quite a bit in the literature. Hopefully the tests described by Caldwell and Stebbins will help answer the question. Personally I tend to think it's unlikely we are in such a void -- the Hubble Bubble local redshift discrepancies are more likely due to some measurement inaccuracy or artifact, or some other effect. But who knows. In any event, the Hubble Bubble is not to be confused with the Local Void. Our Local Sheet forms part of the wall structure of the Local Void, so we are not actually inside that void. The paper you reference isn't clear about whether their measurement technique could detect indications of voids which are nearby, but which we are outside of.

By the way, I think it's just a matter of semantics to say that if we are in the middle of a void, it violates the Copernican Principle. The Copernican Principle in no way contradicts the existence of voids and other structure, and presumably there can be an infinite number of voids and void observers, none of whom should be considered any more "privileged" than the potentially infinite number of non-void observers. In fact at present, the universe is believed to be 95% voids, by volume.

Jon
 
  • #17
Hi jonmtkisco,

I think the general view is that Hubble expansion exists as a background phenomenon everywhere in the universe, in contention against the opposing local influence of gravitational acceleration. When the local space experiences net zero, or negative expansion (gravitational collapse), then obviously gravitational acceleration is dominant in that region. The one vector is directly subtracted from the other.

Thinking of the Hubble expansion as separate from the detailed structure of the Universe is the problem, I think. What you describe is a simple way to paste together two ideas: 1. the Newtonian approximation of flat, zero-cosmological-constant space that contains matter in a swiss cheese structure, and 2. the FLRW metric, which is a solution to the Einstein field equations for a given set of omegas (matter, lambda, radiation ...) in a perfectly smooth universe.

The solution of the Friedmann equation gives you the evolution of the scale factor (and therefore the Hubble parameter) with time in the smooth FLRW Universe.

H(t) = \frac{\dot{a}(t)}{a(t)}

The pasting of this expansion onto the lumpy universe is what I think is the important issue here. Our local Hubble parameter could be derived using standard general relativity, but with metrics that more accurately describe our observed universe. If not a completely new metric, then at least a pasting together of different FLRW metrics that match the swiss cheese nature of the observations better.
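For concreteness, here is a minimal numerical sketch (plain Python, no cosmology libraries, units chosen so H0 = 1) of the smooth flat matter-only FLRW case, where the Friedmann equation gives H(a) = H0 a^(-3/2) and the scale factor grows as a(t) ∝ t^(2/3):

```python
H0 = 1.0  # Hubble constant in arbitrary units (sets the time scale)

def integrate_scale_factor(a_start=1e-4, a_end=1.0, steps=100_000):
    """Integrate da/dt = a * H(a) = H0 * a**(-1/2) for a flat,
    matter-only (Einstein-de Sitter) universe."""
    a = a_start
    t = (2.0 / (3.0 * H0)) * a_start**1.5  # analytic EdS age at a_start
    da = (a_end - a_start) / steps
    while a < a_end:
        dadt = H0 * a**(-0.5)  # \dot{a} from the Friedmann equation
        t += da / dadt         # advance time as a grows by da
        a += da
    return t

t_now = integrate_scale_factor()
# Analytic EdS age of the universe: t0 = 2 / (3 H0)
print(t_now, 2.0 / (3.0 * H0))  # numerically close
```

The numerical age agrees with the analytic Einstein-de Sitter result t0 = 2/(3 H0); Wiltshire's point is precisely that this kind of single smooth metric is being asked to describe a universe that is actually lumpy.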

Just to be a little more clear, I am not saying that this is without a doubt the solution to the dark energy problem, I just think that more people should take into account the bumpy nature of the current Universe when interpreting observations and try to crank out some GR with no dark energy and something other than the FLRW metric. By the way, do you have any links to research that shows that we are in a sheet, filament, or void? I was not aware that we had placed ourselves within one of these structures with any survey data.

I really will have to read the paper before I post again ; )

Also, I agree that it is not a violation of the Copernican Principle to be located at a void center (just as it wouldn't be a violation if we happened to be in the CMB rest frame).
 
  • #18
Hi Allday,
Allday said:
The pasting of this expansion onto the lumpy universe is what I think is the important issue here.
I agree, but I'd say it the other way around: A lumpy matter structure has been overlaid onto a smooth, constant primordial FLRW Hubble expansion. The gravitational lumpiness both distorts the expansion rate locally, and causes the overall expansion to permanently lose expansion "momentum". (If there is dark energy, then it is believed to be reaccelerating that momentum.)

Allday said:
If not a completely new metric than at least a pasting together of different FLRW metrics that match the swiss cheese nature of the observations better.
As mentioned in my thread "Hubble expansion within a collapsing supercluster", Chernin et al. are developing a lumpy version of FLRW based on swiss cheese "vacuoles". They have not yet suggested that it provides an alternative to dark energy; in fact their modelling all assumes dark energy.

Allday said:
I was not aware that we had placed ourselves within one of these structures with any survey data.
Recent observations indicate that our Local Sheet is part of the wall structure of our Local Void. Please see the Tully papers referenced in my "Hubble expansion..." thread mentioned above. The matter is not entirely certain, because a big chunk of the void is blocked from our view by the "Zone of Avoidance", the body of our Milky Way galaxy.

Jon
 
Last edited:
  • #19
After having read the paper

Thanks for all the references, Jon; I'll take a look.

So, I read the short one (http://arxiv.org/abs/0709.0732), "Exact solution to the averaging problem in cosmology". I am a lot clearer now than I was before on what Wiltshire is doing. I wrongly assumed he was doing something like calculating the consequences of us being in a void or Hubble Bubble, but now I see he is (in rough language) averaging a solution of the Einstein equations over voids (where we are not) and void walls (where we are). This gives rise to "bare" and "dressed" values of the cosmological parameters. The dressed values are those measured by us as wall observers, and they can imitate those that would be expected in an FLRW model with dark energy.
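For intuition, here is a minimal two-region toy of volume-weighted ("Buchert-style") averaging. This is my own schematic illustration, not Wiltshire's actual solution: an empty Milne-like void (a proportional to t) next to an Einstein-de Sitter wall region (a proportional to t^(2/3)), with equal volumes at t = 1. All function names and numbers are invented for the sketch.

```python
def toy_average(t):
    """Volume-weighted average expansion rate for a two-region toy
    universe: a Milne-like void with a_v = t and an Einstein-de Sitter
    wall region with a_w = t**(2/3), equal volumes at t = 1.
    (Schematic illustration only, not Wiltshire's exact solution.)"""
    a_v, a_w = t, t ** (2.0 / 3.0)
    V_v, V_w = a_v ** 3, a_w ** 3          # regional volumes
    f_v = V_v / (V_v + V_w)                # void volume fraction
    H_v, H_w = 1.0 / t, 2.0 / (3.0 * t)    # regional Hubble rates
    H_avg = f_v * H_v + (1.0 - f_v) * H_w  # volume-weighted average
    return f_v, H_avg, H_v, H_w

f_v, H_avg, H_v, H_w = toy_average(1.0)
print(f_v, H_avg)   # 0.5, and H_avg lies between H_w and H_v
```

The averaged rate sits between the two regional rates, and the void fraction grows with time. Wiltshire's "dressed" parameters involve much more than this (calibrating clocks and rods between wall and void observers), but a volume-weighted average like the above is the basic averaging ingredient.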

Really cool stuff. What would be the observations that could discriminate dark energy from an averaging solution?
 
  • #20
Wiltshire has made some specific predictions, though by his own admission they are very hard to measure. I can't remember the exact details, but I think one relates to the dispersion of peculiar velocities of galaxies: on average, how far galaxies deviate from a perfectly linear Hubble law. I think this is in his papers somewhere; I just remember him talking about it at a conference recently.
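To make "dispersion about a linear Hubble law" concrete, here is a mock-data sketch. All numbers (H0 = 70 km/s/Mpc, a 300 km/s peculiar-velocity scatter) are illustrative placeholders, not Wiltshire's predicted values:

```python
import numpy as np

rng = np.random.default_rng(0)
H0_true = 70.0                          # km/s/Mpc (illustrative)
d = rng.uniform(10.0, 100.0, 500)       # mock galaxy distances, Mpc
v_pec = rng.normal(0.0, 300.0, 500)     # mock peculiar velocities, km/s
v = H0_true * d + v_pec                 # observed recession velocities

# Least-squares slope through the origin recovers the Hubble constant;
# the residual scatter estimates the peculiar-velocity dispersion.
H0_fit = np.sum(v * d) / np.sum(d * d)
sigma = np.std(v - H0_fit * d)
print(H0_fit, sigma)   # close to 70 and 300 respectively
```

A discriminating measurement would compare this dispersion (or how it varies with scale and environment) against the predictions of LCDM versus an averaging model.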

Really though, this is a theoretical, not an observational, problem. The debate boils down to whether or not spatially averaging a lumpy universe gives you the correct background expansion. This is a question that must be solved theoretically. Actually, the problem is that 95% of the cosmology community believe it is solved already and don't bother even refuting the papers of Wiltshire and others, since they are regarded as simply flawed reasoning.

To say there is a debate about averaging in cosmology would be misleading. What is happening is that a minority are making very loud noises about it and the majority have already considered these ideas, come to the conclusion that they are wrong and aren't really paying much attention. I'm not taking sides either way but I think that is an accurate (if blunt) depiction of things at present. Note that the work of Chernin and co-workers has nothing at all to do with this issue.

If I may digress for a moment, this points to a bit of an issue with the way research works. There is little value in a researcher spending time refuting these kinds of ideas, since the majority of the community aren't interested and so don't need convincing, and the minority are the minority, which means a much smaller group of people who might cite your work (and in research, citations make the world go around). I know of only one paper that systematically argues against dark energy being due to averaging not working in GR, and I wouldn't hold your breath for more.

The problem with this is that no one has actually made a systematic calculation demonstrating that the effect of perturbations is large enough to spoof the data into looking like dark energy. If you read Wiltshire's papers very closely, he spends a lot of time talking about 'finite infinity', 'quasi-local energy' and other phrases that he introduces, which he argues justify the form of the improved FRW equations that he writes down. But again, if you read carefully you will see that these equations contain completely free parameters whose values are determined by fitting to data. This is not a robust argument, nor is it even a proof in principle. He is arguing against very reasonable approximations (read any standard textbook for a detailed discussion) without actually doing any calculation to demonstrate that the effect is anywhere near big enough.

Wiltshire criticizes Rocky Kolb and his group (who have been working on lumpy models for a long time) yet they at least are trying to do the theoretical calculation, even if they aren't getting anywhere with it very fast.

No one disputes that averaging introduces some error, but the standard textbook calculation puts the error at around 10^{-4} or something like that, from memory. In other words, the difference between the averaged expansion rate and the one in the real world is expected to be less than 0.01%. The same applies to any difference in the luminosity distance, clock rates or any other quantity. The challenge for anyone wanting to claim what Wiltshire and others do is to explain why the error is in fact 5 orders of magnitude bigger than expected. That's a pretty tough call; not impossible, certainly, but a convincing calculation has yet to be demonstrated.
 
Last edited:
  • #21
Wallace said:
To say there is a debate about averaging in cosmology would be misleading. What is happening is that a minority are making very loud noises about it and the majority have already considered these ideas, come to the conclusion that they are wrong and aren't really paying much attention.

Hi Wallace,

I agree that there are plenty of cosmologists who are skeptical that the "backreaction" from cosmological inhomogeneity could be large enough to replace dark energy. They may be right, but hopefully this will get resolved soon. I haven't seen too many who have commented specifically on Wiltshire's point that negative curvature in voids reinforces the effect of inhomogeneity and therefore may push it up to the necessary magnitude.

In any event, the subject does seem to be getting a fair amount of discussion and debate recently. Here's a sampling of recent papers on the subject (the list is not all-inclusive); some cite Wiltshire, others don't.

"[URL [astro-ph] 16 Jan 2008"]Paranjape & Singh, 1/08 [/URL]
"[URL [astro-ph] 22 Jan 2008"]Nan Li, Seikel, Schwarz 1/08[/URL]
"[URL [astro-ph] 17 Jan 2008"]Rasanen 1/08[/URL]
"[URL [math-ph] 3 Jan 2008"]Carfora & Buchert 1/08[/URL]
"[URL [astro-ph] 18 Dec 2007"]Beherend, Brown, Robbers 12/07[/URL]
"[URL [gr-qc] 3 Dec 2007"]Buchert 12/07[/URL]
"[URL [astro-ph] 10 Sep 2007"]Mattsson & Ronkainen 9/07[/URL]
"[URL [astro-ph] 19 Jun 2007"]Vanderveld et al 6/07[/URL]
"[URL [gr-qc] 13 Apr 2007"]Coley4/07[/URL]

Happy reading!

Jon
 
Last edited by a moderator:
  • #22
Sure there is discussion, but it is amongst the small minority who think there is something in this idea at all. Most have decided that the standard approximation is robust and there is no good reason that it would break down so severely and unexpectedly. This is why this will not be resolved soon, since only the small minority who write those papers don't believe it has been resolved already. I'm pressed for time at the moment, but could you indicate which if any of the papers you list argue against the whole concept?

As I mentioned previously, I know of only a single refereed paper that has done this, so I'd be curious to read any others.
 
  • #23
Wallace said:
Sure there is discussion, but it is amongst the small minority who think there is something in this idea at all. Most have decided that the standard approximation is robust and there is no good reason that it would break down so severely and unexpectedly. This is why this will not be resolved soon, since only the small minority who write those papers don't believe it has been resolved already. I'm pressed for time at the moment, but could you indicate which if any of the papers you list argue against the whole concept?

As I mentioned previously, I know of only a single refereed paper that has done this, so I'd be curious to read any others.

I agree with your perspective. The idea of explaining acceleration by inhomogeneity has been around since before 2005, and for the great majority its goose was cooked by the Ishibashi and Wald paper published in 2006. Robert Wald is a major figure, for sure.

Compared with the main body of work on dark energy and the cosmological constant, there has only been a trickle of papers of this type, and I don't think Wiltshire is even the most prolific or cited in that small minority. I would put Biswas ahead of him and maybe Buchert.

I am not saying it's right or wrong, just that the general consensus noticed the idea around 2005, dismissed it around 2006, and now it is kind of fringe (whatever its merits).

http://arxiv.org/abs/gr-qc/0509108
Can the Acceleration of Our Universe Be Explained by the Effects of Inhomogeneities?
Akihiro Ishibashi, Robert M. Wald
20 pages, 1 figure, published Class.Quant.Grav. 23 (2006) 235-250
(Submitted on 27 Sep 2005)

"No. It is simply not plausible that cosmic acceleration could arise within the context of general relativity from a back-reaction effect of inhomogeneities in our universe, without the presence of a cosmological constant or 'dark energy'. We point out that our universe appears to be described very accurately on all scales by a Newtonianly perturbed FLRW metric. (This assertion is entirely consistent with the fact that we commonly encounter \delta \rho/\rho &gt; 10^{30}.) If the universe is accurately described by a Newtonianly perturbed FLRW metric, then the back-reaction of inhomogeneities on the dynamics of the universe is negligible. If not, then it is the burden of an alternative model to account for the observed properties of our universe. We emphasize with concrete examples that it is not adequate to attempt to justify a model by merely showing that some spatially averaged quantities behave the same way as in FLRW models with acceleration. A quantity representing the 'scale factor' may 'accelerate' without there being any physically observable consequences of this acceleration. It also is not adequate to calculate the second-order stress energy tensor and show that it has a form similar to that of a cosmological constant of the appropriate magnitude. The second-order stress energy tensor is gauge dependent, and if it were large, contributions of higher perturbative order could not be neglected. We attempt to clear up the apparent confusion between the second-order stress energy tensor arising in perturbation theory and the 'effective stress energy tensor' arising in the 'shortwave approximation.' "

It will be interesting to see if anything happens. But I think it may have been downhill since 2006. Here is an invited review paper by Copeland Sami and Tsujikawa
http://arxiv.org/abs/hep-th/0603057
Dynamics of dark energy
Edmund J. Copeland, M. Sami, Shinji Tsujikawa
93 pages, 26 figures, Invited Review published in the International Journal of Modern Physics D15 (2006) 1753-1936
(Submitted on 8 Mar 2006)

"In this paper we review in detail a number of approaches that have been adopted to try and explain the remarkable observation of our accelerating Universe... "

In this 94 page review with over 500 citations, the idea of explaining acceleration this way is mentioned briefly in two paragraphs, mostly devoted to paraphrasing the argument by Wald and Ishibashi, which is their reference [510]. Wiltshire is not mentioned by name but a paper of his is one of a dozen or so papers of the same type which are lumped together as reference [511].
 
Last edited:
  • #24
Yes Ishibashi and Wald, that was the 'one' paper I was thinking of, I couldn't remember who wrote it. Thanks Marcus, always good for a reference :)

To be honest, I must admit I wasn't completely taken with their arguments; I found they mainly rehash the kind of textbook justifications for averaging rather than give a more rigorous calculation demonstrating that the approximation really is fine. This criticism is a bit harsh, since that calculation is incredibly difficult, which is why it is not done.
 
  • #25
Hi Wallace! I think Wald and Ishibashi probably overstate. The review article by Copeland et al takes a more balanced view and is more mellow. They repeat Wald's arguments, and they say, well, there is not much if any indication that Wiltshire's idea is right, but wouldn't it be nice if it WERE :-) because then we could explain everything without any novel dark energy.

So it is a minor or fringe idea but it is still alive and getting some attention from a few people. That said we can look at the small bunch of papers about this and see who the main proponents are. Within that group I am impressed by how much Biswas is cited. Here is a list that gives LINKS TO ABSTRACTS so anyone who is curious can quickly see what the papers are about:
this is Spires with the command FIND C GR-QC/0503099
http://www.slac.stanford.edu/spires/find/hep?c=GR-QC/0503099

If I copy and paste this, it will not have the links, but if you just go to that Spires URL you get the links and information about how much the various papers have been cited.

This is not a very selective list. It is just the 20 papers which cited Wiltshire's main 2005 paper on the subject. That Wiltshire paper got 20 cites, and some of the subsequent ones by other people (who cited Wiltshire) have in turn gotten more than 20.

1) Local Void vs Dark Energy: Confrontation with WMAP and Type Ia Supernovae.
Stephon Alexander, Tirthabir Biswas (Penn State U.) , Alessio Notari (McGill U. & CERN) , Deepak Vaid (Penn State U.) . IGPG-07-2-1, Dec 2007. 26pp.
e-Print: arXiv:0712.0370 [astro-ph]
Cited 5 times

2) The Spatial averaging limit of covariant macroscopic gravity: Scalar corrections to the cosmological equations.
Aseem Paranjape, T.P. Singh (Tata Inst.) . Mar 2007. 24pp.
Published in Phys.Rev.D76:044006,2007.
e-Print: gr-qc/0703106
Cited 14 times

3) Swiss-Cheese Inhomogeneous Cosmology and the Dark Energy Problem.
Tirthabir Biswas (McGill U. & Penn State U.) , Alessio Notari (McGill U.) . IGPG-07-2-2, Feb 2007. 35pp.
e-Print: astro-ph/0702555
Cited 10 times

4) Cosmic clocks, cosmic variance and cosmic averages.
David L. Wiltshire (Canterbury U.) . Feb 2007. 72pp.
Published in New J.Phys.9:377,2007.
e-Print: gr-qc/0702082
Cited 18 times

5) Nonlinear Structure Formation and Apparent Acceleration: An Investigation.
Tirthabir Biswas (McGill U.) , Reza Mansouri (McGill U. & Sharif U. of Tech.) , Alessio Notari (McGill U.) . Jun 2006. 57pp.
Published in JCAP 0712:017,2007.
e-Print: astro-ph/0606703
Cited 32 times

6) Correspondence between kinematical backreaction and scalar field cosmologies: The `Morphon field'.
Thomas Buchert (Bielefeld U. & Munich U.) , Julien Larena, Jean-Michel Alimi (LUTH, Meudon) . Jun 2006. 36pp.
Published in Class.Quant.Grav.23:6379-6408,2006.
e-Print: gr-qc/0606020
Cited 25 times

7) Dark matter, and its darkness.
Dharam Vir Ahluwalia . ASGBG-PREPRINT:-20-03-2006AH, Mar 2006. 12pp.
* Brief entry *.
Published in Int.J.Mod.Phys.D15:2267-2278,2006.
e-Print: astro-ph/0603545
Cited 1 time

8) Dynamics of dark energy.
Edmund J. Copeland (Nottingham U.) , M. Sami (Jamia Millia Islamia) , Shinji Tsujikawa (Gunma Coll. Tech.) . Mar 2006. 84pp.
Published in Int.J.Mod.Phys.D15:1753-1936,2006.
e-Print: hep-th/0603057
Cited 434 times

9) Structured frw universe leads to acceleration: a non-perturbative approach.
Reza Mansouri (McGill U.) . Dec 2005.
e-Print: astro-ph/0512605
Cited 28 times

10) An inhomogeneous alternative to dark energy?
Havard Alnes (Oslo U.) , Morad Amarzguioui (Inst. Theor. Astrophys., Oslo) , Oyvind Gron (Oslo Coll. & Oslo U.) . Dec 2005. 8pp.
Published in Phys.Rev.D73:083519,2006.
e-Print: astro-ph/0512006
Cited 45 times

11) Asymmetric inflation: Exact solutions.
Roman V. Buniy (Oregon U.) , Arjun Berera (Edinburgh U.) , Thomas W. Kephart (Vanderbilt U.) . Nov 2005. 42pp.
Published in Phys.Rev.D73:063529,2006.
e-Print: hep-th/0511115
Cited 13 times

12) Long-wavelength modes of cosmological scalar fields.
Marcin Jankiewicz, Thomas W. Kephart (Vanderbilt U.) . Oct 2005. 12pp.
Published in Phys.Rev.D73:123514,2006.
e-Print: hep-ph/0510009
Cited 6 times

13) On globally static and stationary cosmologies with or without a cosmological constant and the dark energy problem.
Thomas Buchert (Munich U.) . Sep 2005. 33pp.
Published in Class.Quant.Grav.23:817-844,2006.
e-Print: gr-qc/0509124
Cited 28 times

14) Can a dust dominated Universe have accelerated expansion?
Havard Alnes (Oslo U.) , Morad Amarzguioui (Inst. Theor. Astrophys., Oslo) , Oyvind Gron (Oslo Coll. & Oslo U.) . Jun 2005. 11pp.
Published in JCAP 0701:007,2007.
e-Print: astro-ph/0506449
Cited 21 times

15) Inflessence: A Phenomenological model for inflationary quintessence.
Vincenzo F. Cardone, A. Troisi (Salerno U. & INFN, Salerno) , S. Capozziello (Naples U. & INFN, Naples) . Jun 2005. 12pp.
Published in Phys.Rev.D72:043501,2005.
e-Print: astro-ph/0506371
Cited 14 times

16) Late-time inhomogeneity and acceleration without dark energy.
John W. Moffat (Perimeter Inst. Theor. Phys. & Waterloo U.) . May 2005. 10pp.
Published in JCAP 0605:001,2006.
e-Print: astro-ph/0505326
Cited 30 times

17) The Effects of gravitational back-reaction on cosmological perturbations.
Patrick Martineau (McGill U.) , Robert H. Brandenberger (McGill U. & Brown U.) . May 2005. 9pp.
Published in Phys.Rev.D72:023507,2005.
e-Print: astro-ph/0505236
Cited 17 times

18) Type Ia supernovae tests of fractal bubble universe with no cosmic acceleration.
Benedict M.N. Carter, Ben M. Leith (Canterbury U.) , S.C.Cindy Ng (Singapore Natl. U.) , Alex B. Nielsen, David L. Wiltshire (Canterbury U.) . Apr 2005. 10pp.
e-Print: astro-ph/0504192
Cited 10 times

19) Large scale cosmological inhomogeneities, inflation and acceleration without dark matter.
John W. Moffat (Perimeter Inst. Theor. Phys. & Waterloo U.) . Apr 2005. 10pp.
e-Print: astro-ph/0504004
Cited 9 times

20) Cosmic microwave background, accelerating Universe and inhomogeneous cosmology.
John W. Moffat (Perimeter Inst. Theor. Phys. & Waterloo U.) . Feb 2005. 18pp.
Published in JCAP 0510:012,2005.
e-Print: astro-ph/0502110
Cited 37 times
 
Last edited by a moderator:
  • #26
Thanks, everyone, for posting those review papers. I still think this averaging solution could hold something important, but I agree that a majority of big-name researchers don't think about these things. I would gamble that (for the majority in the mainstream) it's not so much that they have thought deeply about the models with no cosmological constant and dismissed them, but that they've thought to themselves, "That sounds interesting, but very difficult to calculate. GR is hard enough using the simple FLRW metric, and I'll stick with that until these averaging guys can (ever) make a really strong claim about something observable that proves them right; then I'll pay some more attention to it."
 
  • #27
I'd take you up on that bet. The issue of averaging is covered in most textbooks on cosmology. There are good, well thought-out reasons why it is not thought to change the observed or 'actual' expansion rate. No one has made a convincing argument as to why these arguments are flawed and this is why there is not much attention to them.

Certainly these issues (and others) are talked about in the community, but it is at the level of chatting over coffee at morning tea or over a beer at a conference. People are aware of this and are thinking the issues over but keep coming to the same conclusion, which is why there is no talk at the level of published papers.

That's my bet anyway :)
 
  • #28
Hi Allday and Wallace,

I don't know if Wiltshire is right or wrong. However, I want to repeat that he also agrees that backreaction from averaging of inhomogeneities is not a large enough effect to cause actual acceleration as large as current observations indicate.

His point is different. He says that there is no actual acceleration occurring. Instead, we observe "apparent acceleration" which is the result of measuring void expansion by wall observer clocks, instead of cosmic average clocks based on the dominant voids.

Therefore, I think most of the papers on the subject of backreaction are not directly relevant to his point. We'll have to wait and see if other cosmologists publish papers that are directed towards his specific point. Although I have issues with some of Wiltshire's analysis, in my opinion it's just too soon to handicap the outcome.

Jon
 
  • #29
jonmtkisco said:
Hi Allday and Wallace,

I don't know if Wiltshire is right or wrong. However, I want to repeat that he also agrees that backreaction from averaging of inhomogeneities is not a large enough effect to cause actual acceleration as large as current observations indicate.

His point is different. He says that there is no actual acceleration occurring. Instead, we observe "apparent acceleration" which is the result of measuring void expansion by wall observer clocks, instead of cosmic average clocks based on the dominant voids.

Wiltshire is not alone here, but my comments previously were aimed at either proposal: 'real' acceleration, or apparent acceleration via some unexpected altering of an observational signal due to averaging.

jonmtkisco said:
Therefore, I think most of the papers on the subject of backreaction are not directly relevant to his point. We'll have to wait and see if other cosmologists publish papers that are directed towards his specific point. Although I have issues with some of Wiltshire's analysis, in my opinion it's just too soon to handicap the outcome.

Jon

Again, Wiltshire's specific mechanism is his own; however, as far back as Einstein, people have been examining how inhomogeneities in the Universe affect the expected results, not just via backreaction but also luminosity distances, clock rates, etc.

I think you'll only see the 'mainstream' community bothering to publish in this area if someone demonstrates that the effects are big enough to spoof dark energy. Neither Wiltshire nor anyone else has done this yet (by his own admission, if you read the papers and not the press release!).

When do you think it will no longer be 'too soon'?

I hope I'm not being a complete wet blanket here; I think this is an interesting area, but we need to be realistic about what has and hasn't been demonstrated and what the chances are of these ideas working.
 
  • #30
Wallace said:
I think you'll only see the 'mainstream' community bothering to publish in this area if someone demonstrates that the effects are big enough to spoof dark energy. Neither Wiltshire nor anyone else has done this yet (by his own admission, if you read the papers and not the press release!).
Hi Wallace,
In his three 2007 papers, Wiltshire very definitely claims that the accumulated difference in clock rates between void and wall clocks over 13.7 Gyr is sufficient to explain the "apparent" acceleration of expansion (as he terms it). You should read them if you haven't. They give a straightforward explanation of his analysis and calculations, as well as how he differentiates his theory from other recent backreaction propositions.
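As a rough back-of-envelope check using only the ages quoted in the ABC article at the top of this thread (14.7 Gyr on wall clocks, "more than 18" Gyr on void clocks; I use 18.0 as a lower bound, my own arithmetic, not a derivation from Wiltshire's equations):

```python
wall_age = 14.7   # Gyr elapsed on wall-observer clocks (per the article)
void_age = 18.0   # Gyr on void clocks; "more than 18" per the article
ratio = void_age / wall_age
print(round(ratio, 3))   # about 1.22: void clocks have ticked ~22% more time
```

So the cumulative void-to-wall clock-rate ratio would need to be at least around 20 percent; whether a difference of that size is compatible with all the other data is exactly the fitting question being debated here.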

Wallace said:
When do you think it will no longer be 'too soon'?
I can't predict when we can all say that the issue has been reasonably resolved. However, I will be very surprised if there aren't several papers from other authors this year dealing more specifically with Wiltshire's clock rate proposition.

Jon
 
  • #31
jonmtkisco said:
Hi Wallace,
In his three 2007 papers, Wiltshire very definitely claims that the accumulated difference in clock rates between void and wall clocks over 13.7 Gyr is sufficient to explain the "apparent" acceleration of expansion (as he terms it). You should read them if you haven't. They give a straightforward explanation of his analysis and calculations, as well as how he differentiates his theory from other recent backreaction propositions.

You need to read between the lines a little better. Wiltshire, by his own admission, has not actually calculated, from the Einstein Equations, the magnitude of the effect due to inhomogeneities. He only claims that what he has done so far is a first attempt at how such a calculation may be formulated, but nowhere has he claimed to have actually done it. This is not a criticism, since no one else has done it either and frankly no one really knows how you would even go about it.

What he has done is to argue why a particular form of the Friedmann equations, which he calls 'dressed', is a good first approximation. However, there are completely free parameters in these equations, and the values of those parameters go to the heart of this issue. Among them is the 'shift' parameter that describes the difference between wall and void clock rates. Wiltshire has argued in his papers why he thinks they should differ, but has not calculated from theory what value this should take given the level of structure growth that has occurred in the Universe. In order to get values for these parameters, Wiltshire fits his equations to the data. He finds that his equations, with certain parameter values, give a model concordant with all the data, although again, by his own admission, there is a lot of work to go in accurately determining the observational consequences of his model. His recent work has been in this area, rather than on refining the theory itself.

To reiterate, what is needed to prove this kind of proposal is a robust calculation, from theory alone, showing that the difference in clock rates, luminosity distance, backreaction, or whatever else inhomogeneities are claimed to produce is large enough to explain dark energy. This is a big ask, but it is what would be needed to get the 'mainstream' to sit up and listen more closely. Until then, I don't think you'll hear much from most of the community about this.
 
  • #32
Hi Wallace,

Wallace said:
You need to read between the lines a little better.

I keep trying!

Wallace said:
Wiltshire, by his own admission, has not actually calculated, from the Einstein Equations, the magnitude of the effect due to inhomogeneities. He only claims that what he has done so far is a first attempt at how such a calculation may be formulated, but nowhere has he claimed to have actually done it. This is not a criticism, since no one else has done it either and frankly no one really knows how you would even go about it.

I agree that Wiltshire does not claim that the problem of differential clock rates in regions of different geometric curvature has been or can ever be calculated exactly from the Einstein equations, essentially because the equations do not seem capable of solving the problem of quasilocal gravitational energy. Wiltshire comments:

"It is unfortunate that general relativists have been obsessed by exact solutions of Einstein’s equations, whether they involve likely or unlikely approximations for the matter distribution. We should face up to the fact that the solution for the actual matter distribution is analytically intractable, and therefore the question of cosmological averaging is paramount. Furthermore, once we do take an average we must address the fundamental problem that the relationship of rods and clocks at one point to those at a distant point, a conceptual centrepiece of general relativity, is highly non–trivial once gradients in spatial curvature and gravitational energy are considered. In an expanding universe these involve subtle dynamical aspects of general relativity, which cannot be localized at a point on account of the equivalence principle."

So, Wiltshire applies Buchert's averaging equations (derived from the Einstein equations) to tackle the problem.

Wallace said:
What he has done is to argue why a particular form of the, as he calls 'dressed', Friedmann equations is a good first approximation.
In one of his 2007 papers, he claims to formulate "exact solutions" to the Buchert equations for a two-scale model comprising idealized (1) gravitationally bound wall observers and (2) observers in dominant (48 Mpc) voids.

The term "dressed parameters" should not be viewed as pejorative. It simply refers to the need to translate between the clocks and rods of wall and void observers.

Wallace said:
However, there are completely free parameters in these equations and the values of those parameters go to the heart of this issue. Among these are the 'shift' parameter that describes the difference between wall and void clock rates.

In his most recent papers, Wiltshire claims that there are 4 free parameters, of which 2 are so constrained by CMB priors and a tracking solution that they are insignificant. That leaves only 2 significant free parameters: the "bare" Hubble constant and the present void fraction (by volume). He notes that the present void fraction should eventually be estimable by observation. He comments:

"This illustrates the power of the FB model as compared to the spatially flat LCDM model. Both models depend on the Hubble parameter and one other free parameter. However, [Lambda] is not directly observable, whereas the void–volume fraction, fv, is empirically observable in principle."

In conclusion, Wiltshire asks the following question, which in my opinion is perfectly reasonable:

"In my view caution should always be exercised, but this includes caution with the conceptual basis of our theory and the operational interpretation of measurements. To those who are uncomfortable with my proposal about cosmological quasilocal gravitational energy let me ask the following: Without reference to an asymptotically flat static reference scale, which does not exist given the universe is expanding, and without reference to a background which evolves by the Friedmann equation at some level, an assumption which is manifestly violated by the observed inhomogeneities, What keeps clocks synchronized in cosmic evolution? Please explain."

From: Dark energy without dark energy, arXiv:0712.3984, 32 page overview, from talks given at NZIP2007, GRG18, Dark2007; to appear in the Proceedings of the Dark 2007 Conference, Sydney, Australia, Sept 2007, eds H. Klapdor-Kleingrothaus and G.F. Lewis, (World Scientific, Singapore, 2008).

Jon
 
Last edited:
  • #33
jonmtkisco said:
In the absence of any true accelerative force in the voids in the Wiltshire model, I guess the negative curvature by definition isn't manifested as either an attractive or a repulsive force; it's not manifested as a force at all. It is manifested only as an Einstein-de Sitter underlying Hubble expansion.

Having said that, I don't understand Wiltshire's use of the term "positive gravitational energy". He clearly is portraying it as some kind of inverse to "negative" binding energy. Would it be more accurate to call it "kinetic energy of expansion" instead?

I still don't really understand the term "positive gravitational energy". However, I see that Wiltshire describes an interesting aspect of the negative geometric curvature within voids. He points out that a negatively curved region has a volume which is larger compared to its radius than would be the case in a flat region. In other words,
V > \frac{4}{3}\pi r^{3}.
So a void's volume is larger, and its density (M/V) is lower, than its observed radius would lead us to expect. That super-expansion rate within a constrained radius is the source of positive gravitational energy which causes time anti-dilation.
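That volume inequality can be checked numerically. In hyperbolic 3-space with curvature radius R, the volume of a geodesic ball of proper radius r is V = pi R^3 (sinh(2r/R) - 2r/R), which reduces to the Euclidean (4/3) pi r^3 for small r and exceeds it for every r > 0. A quick sketch, with unit curvature radius assumed purely for illustration:

```python
import math

def ball_volume_hyperbolic(r, R=1.0):
    """Geodesic-ball volume in hyperbolic 3-space of curvature radius R:
    V = pi * R**3 * (sinh(2r/R) - 2r/R)."""
    u = r / R
    return math.pi * R ** 3 * (math.sinh(2.0 * u) - 2.0 * u)

def ball_volume_euclidean(r):
    return 4.0 / 3.0 * math.pi * r ** 3

for r in (0.1, 0.5, 1.0, 2.0):
    excess = ball_volume_hyperbolic(r) / ball_volume_euclidean(r)
    print(r, excess)   # ratio exceeds 1 and grows with r
```

So a region that appears a given radius across contains more volume, and hence has lower density, than Euclidean intuition suggests, which is the geometric fact being leaned on here.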

Wiltshire also clearly differentiates the positive gravitational energy of negative curvature from the kinetic energy of expansion. He says that the latter is a lesser contribution to differential clock rates in voids, and he has not included it in his calculations to date.

Jon
 
Last edited:
  • #34
jonmtkisco said:
I still don't really understand the term "positive gravitational energy". . . . So a void's volume is larger, and its density (M/V) is lower, than its observed radius would lead us to expect. That super-expansion rate within a constrained radius is the source of positive gravitational energy which causes time anti-dilation.

Jon, the simplistic way that I understood this is that the negative gravitational potential energy in the voids is simply less negative than in the filaments; hence, relative to our neighborhood it's positive.

I'm not sure that it can be determined that a void's negative curvature makes its volume larger than its observed radius would lead us to expect. Remember, we observe "along" the presumed negatively curved space of the void.

"Time anti-dilation"? It's a funny term!:frown:
 
Last edited:
  • #35
Jorrie said:
Jon, the simplistic way that I understood this is that the negative gravitational potential energy in the voids is simply less negative than in the filaments; hence, relative to our neighborhood it's positive.
Jorrie, the problem I have with that explanation is that the negative curvature doesn't seem to play any independent role in creating gravitational energy. The total absence of matter (and its associated binding gravitational energy) merely brings the void to a proper "zero" gravitational energy. That doesn't seem like it would create enough energy differentiation from filaments to achieve Wiltshire's clock differentials. In order to increase the differentiation, isn't some truly "positive" gravitational energy required?

Jorrie said:
"Time anti-dilation"? It's a funny term!:frown:
Aww c'mon Jorrie, you know what I meant. Time contraction.

Jon
 
  • #36
jonmtkisco said:
In his most recent papers, Wiltshire claims that there are 4 free parameters, of which 2 are so constrained by CMB priors and a tracking solution that they are insignificant. That leaves only 2 significant free parameters: the "bare" Hubble constant and the present void fraction (by volume). He notes that the present void fraction should eventually be measurable by observation. He comments:

"In my view caution should always be exercised, but this includes caution with the conceptual basis of our theory and the operational interpretation of measurements. To those who are uncomfortable with my proposal about cosmological quasilocal gravitational energy let me ask the following: Without reference to an asymptotically flat static reference scale, which does not exist given the universe is expanding, and without reference to a background which evolves by the Friedmann equation at some level, an assumption which is manifestly violated by the observed inhomogeneities, What keeps clocks synchronized in cosmic evolution? Please explain."

One might equally ask, what makes the clock rates different? Wiltshire makes a lot of broad statements about 'quasilocal energy', 'finite infinity' and such to justify a large difference between wall and void clock rates.

However, the crucial point is that he has still not demonstrated how to calculate what this difference would be, given a level of inhomogeneity, in any example universe. He has only fitted this value to data. This is not good enough. Again, I'm not meaning to be critical of Wiltshire, as he may well provide this in time, and you can't do everything at once.

The key parameter is \gamma, the 'lapse' function (I erroneously called this the shift in a previous post). Wiltshire has this at around 1.5, which is ridiculously high. This means clocks in walls run 1.5 times faster than in voids (or is it the other way around?). You'd have to be traveling close to the speed of light or be sitting very close to a black hole for General Relativity to predict such a lapse. It's a big ask for the very weak potentials present in the large-scale structure of the Universe to be responsible for this.
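To put that point in numbers (my own illustrative sketch, not from the thread): for a static clock in a weak field, General Relativity gives the rate d\tau/dt = \sqrt{1 + 2\Phi/c^2}, so a sustained rate ratio of 1.5 would require a potential depth found only near compact objects, whereas cluster-scale potentials are of order 10^{-5}:

```python
import math

def clock_rate(phi_over_c2):
    """Static clock rate dtau/dt = sqrt(1 + 2*Phi/c^2) relative to a
    distant observer (weak/strong static field, no motion)."""
    return math.sqrt(1.0 + 2.0 * phi_over_c2)

# Potential depth needed for a clock to tick at 1/1.5 the distant rate:
# sqrt(1 + 2*phi) = 1/1.5  =>  phi = ((1/1.5)**2 - 1) / 2
phi_needed = ((1.0 / 1.5) ** 2 - 1.0) / 2.0
print(f"Phi/c^2 needed for a 1.5 lapse: {phi_needed:.3f}")

# Typical depth of a rich-cluster potential, for comparison:
print(f"Clock rate at Phi/c^2 = -1e-5: {clock_rate(-1e-5):.6f}")
```

The contrast between the two printed numbers is the substance of the objection: ordinary gravitational time dilation from large-scale structure is orders of magnitude too small.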

This is why Wiltshire must show how this lapse can actually be calculated, and say precisely how it evolves with cosmic time (since it starts out at unity). There is nothing approaching this in any of his papers to date.
 
  • #37
Hi Wallace,

Wallace said:
The key parameter is \gamma, the 'lapse' function (I erroneously called this the shift in a previous post). Wiltshire has this at around 1.5 which is ridiculously high. This means clock in walls run 1.5 times faster than in voids (or is it the other way around?). You'd have to be traveling close to the speed of light or be sitting very close to a black hold for General Relativity to predict such a lapse.

A key point of Wiltshire's model is that "apparent" acceleration is not a direct function of the value of the "lapse" parameter \overline{\gamma}. Rather, the illusion of cosmic acceleration occurs only during a specific epoch, while the void fraction f_{v} is increasing at a high rate. He calculates that the illusion of acceleration began when f_{v} = 0.59, at about 7 Gyr (z \approx 0.9). He puts the present void fraction at about 0.76, having begun very close to zero and increased slowly at first. The illusion of acceleration will reach a maximum in the near future when f_{v} \simeq 0.77, when \ddot{f_{v}} \rightarrow 0. After that, \ddot{f_{v}} will go negative, and the illusion will begin to fade away. Note that at no point do void observers measure any apparent acceleration; they observe a decelerating Einstein-de Sitter universe.

Wiltshire calculates the lapse parameter \overline{\gamma} at 1.38 now, not 1.5. Again, \overline{\gamma} begins at 1 with almost zero rate of increase. It then grows monotonically to its current value. Its average over 10+ Gyr is about 1.1-1.2. At present, the time variation in the lapse function [ \ddot{\overline{\gamma}} ] is near zero.

The 1.5 figure is the upper bound on how high \overline{\gamma} can ever get. Wiltshire explains: "In the absence of a dark energy component the only absolute upper bound on the difference in clock rates is that within voids \overline{\gamma}(\tau, X) < \frac{3}{2}, which represents the ratio of the local expansion rate of an empty Milne universe region to an Einstein-de Sitter one."
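That 3/2 bound follows directly from comparing the two limiting expansion laws at the same cosmic time: a Milne (empty) region has a \propto t, so H = 1/t, while an Einstein-de Sitter region has a \propto t^{2/3}, so H = 2/(3t). A quick check of the ratio:

```python
# Hubble rates at the same cosmic time t for the two limiting cases:
#   Milne (empty):             a ~ t        =>  H = 1/t
#   Einstein-de Sitter (flat): a ~ t^(2/3)  =>  H = 2/(3t)
def H_milne(t):
    return 1.0 / t

def H_eds(t):
    return 2.0 / (3.0 * t)

t = 1.0  # any common time; the time dependence cancels in the ratio
print(round(H_milne(t) / H_eds(t), 10))  # -> 1.5, the quoted upper bound
```

Since both rates scale as 1/t, the ratio is 3/2 at every epoch, which is why it serves as an absolute bound rather than an evolving quantity.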

When Wiltshire plugs "reasonable fit" numbers into his equations, he calculates the global matter density parameter at \Omega_{M0} = 0.127, far lower than the same parameter in the \Lambda CDM model. This figure is consistent with a transition to an 'open' FLRW universe. The deceleration parameter q he calculates is also significantly smaller in magnitude than in \Lambda CDM, but it remains negative.

Regarding the gravitational energy of negative curvature which causes the time differential, Wiltshire says:

"The l.h.s. of the Friedmann equation ... can be regarded as the difference of a kinetic energy density per unit rest mass, E_{kin} = \frac{1}{2}\frac{\dot{a}^{2}}{a^{2}}, and a total energy density per unit rest mass, E_{tot} = -\frac{1}{2}\frac{k c^{2}}{a^{2}}, of the opposite sign to the Gaussian curvature, k. Such terms represent forms of gravitational energy, but since they are identical for all observers in an isotropic homogeneous geometry, they are not often discussed in introductory cosmology texts. ... In an inhomogeneous cosmology, gradients in the kinetic energy of expansion and in spatial curvature will be manifest in the Einstein tensor, leading to variations in gravitational energy that cannot be localised. ... Clocks run slower where mass is concentrated, but because this time dilation relates mainly to energy associated with spatial curvature gradients, the differences can be significantly larger than those we would arrive at in considering only binding energy below the finite infinity scale, which is very small."

Jon
 
Last edited:
  • #38
I think we're still not on the same page here Jon. The point is that Wiltshire conjectures a model and then determines the parameters of the model a posteriori. Having done so he then demonstrates how his model fits other predictions (ellipticity in the CMB etc).

What is missing is a demonstration that the physical mechanism of the model can actually do what is being claimed, mere parameter values mean nothing without this and it is entirely absent.

It doesn't matter how many claims from papers you pull out, it doesn't strengthen the argument at all, since all of those claims rest on the same unstable base.

To give you an idea of what is needed, Wiltshire (or anyone else) would need to produce a process that could calculate the observational signature of a cosmological model specified by the homogeneous variables (mean densities etc.) plus the power spectrum and amplitude of density fluctuations. In the standard case the homogeneous parameters affect the evolution of the density fluctuations but not the other way around (either in 'reality' or in apparent 'dressed' parameters). Wiltshire and others claim that in fact both of these feed back on each other, but the models they propose cannot be transparently tested, since there is no coherent description of how these things relate in general. I.e. you should be able to play with the parameters and see how many different Universes would look, not just deal with a single set of parameters from our Universe.
 
  • #39
Hi Wallace,

Wallace said:
The point is that Wiltshire conjectures a model and then determines the parameters of the model a posteriori. ...

What is missing is a demonstration that the physical mechanism of the model can actually do what is being claimed, mere parameter values mean nothing without this and it is entirely absent.

To give you an idea of what is needed, Wiltshire (or anyone else) would need to produce a process that could calculate the observational signature of a cosmological model specified by the homogeneous variables (mean densities etc.) plus the power spectrum and amplitude of density fluctuations. ... I.e. you should be able to play with the parameters and see how many different Universes would look, not just deal with a single set of parameters from our Universe.

I don't understand how that's different from FLRW with \Lambda CDM. Friedmann conjectured the original model before there was any observation of the Hubble constant parameter. The observed figure later was 'plugged in'. Even later, a 'best fit' number was plugged in for \Lambda. These numbers were then changed and refined to reflect new data such as WMAP.

Wiltshire has a set of equations, into which he plugs in selected parameters. He 'best fits' an initial void fraction value which yields a reasonable current void fraction value, and yields other reasonable present parameters.

Wallace said:
Wiltshire and others claim that in fact both of these feedback on each other but the models they propose cannot be transparently tested, since there is no coherent description of how these things relate in general.

Wiltshire does not claim that inhomogeneities "feedback" on the average expansion rate. That is what the "backreaction" advocates claim. Wiltshire does not believe that backreaction can explain apparent acceleration.

Wiltshire claims simply that the observations of wall observers like us are misleading because they don't take into account the difference in wall and void clock rates caused by the significant negative curvature of voids. In fact he says that wall and void expansion rates are identical and decelerating when measured by a single volume-average clock. It's a fairly straightforward and logical concept, except that there is no accepted equation for exactly calculating averaged quasi-local energy values. That's why he supplies one, based on Buchert's equations. I don't know if it's right or wrong, but it is capable of generating appropriate results from what appear to be reasonable input parameters. In my book that's a very good start.

I have cited a number of his claims in these posts only because I want to ensure that his model is accurately described.

Jon
 
Last edited:
  • #40
jonmtkisco said:
I don't understand how that's different from FLRW with \Lambda CDM. Friedmann conjectured the original model before there was any observation of the Hubble constant parameter. The observed figure later was 'plugged in'. Even later, a 'best fit' number was plugged in for \Lambda. These numbers were then changed and refined to reflect new data such as WMAP.

Wiltshire has a set of equations, into which he plugs in selected parameters. He 'best fits' an initial void fraction value which yields a reasonable current void fraction value, and yields other reasonable present parameters.

Now we're getting somewhere. In a sense you're right: the standard model has of course also been shaped by data. However, there is still a big something missing. In the standard approach you can specify the physics, say the potential of a quintessence field; then, along with the density parameters and Hubble's constant, you can predict what the observables would look like. You can then compare with the data. In practice, in order to properly fit a model you need to calculate what thousands of slightly different parameter sets 'look like'.

This is what Wiltshire's model currently lacks. There is no theoretical tool that can determine the general observational signature from a given physics. As such you cannot properly test the model. Wiltshire has calculated the time delay implied by his model that gives the 'apparent' acceleration we observe by fitting to the data of our Universe. However he cannot predict this time delay for a Universe in general, from a hypothetical set of conditions. This is the basic requirement of a cosmological model.

jonmtkisco said:
Wiltshire does not claim that inhomogeneities "feedback" on the average expansion rate. That is what the "backreaction" advocates claim. Wiltshire does not believe that backreaction can explain apparent acceleration.

You misunderstand me, sorry if I wasn't clear enough. By 'feedback' I don't necessarily mean a physical mechanism in the backreaction sense. Let me explain. Take two Universes with the same mean density and curvature. One is completely smooth; the other has the kind of structure we see in our Universe. In the standard model the structure does not significantly change observables such as supernovae measurements that are intended to probe the homogeneous background expansion. In Wiltshire's model, however, these two Universes would look different. The one with structure has a set of 'dressed' or 'apparent' parameters that differ from the homogeneous ones. That is, if we interpret the results from the structured Universe assuming it is smooth (the way that cosmology operates today) we get an 'apparent' set of parameters that are different from the true ones. That is what I mean by feedback: in Wiltshire's model, if we wanted to know just the mean properties of the Universe, the equations we are working with need to know about the structure. Normally, when we use the FLRW metric for, say, supernovae results, the equations don't care about the structure.

The problem is that Wiltshire has not provided the equations to do this with. We cannot predict what any arbitrary level and type of structure will do to measurements of the background with his work except for one set of parameters, the ones he has fitted to data. This means we don't know if his mechanism is valid.

jonmtkisco said:
Wiltshire claims simply that the observations of wall observers like us are misleading because they don't take into account the difference in wall and void clock rates caused by the significant negative curvature of voids. It's a fairly straightforward and logical concept, except that there is no accepted equation for exactly calculating averaged quasi-local energy values. That's why he supplies one, based on Buchert's equations. I don't know if it's right or wrong, but it is capable of generating appropriate results from what appear to be reasonable input parameters. In my book that's a very good start.

Be careful; again you need to read between the lines. When you say he supplies an exact equation, you need to point out that the equations he uses have not been solved. They are merely a 'template' for dealing with the issue and demonstrate a possible form of the solution, but he does not solve the equations. That is, he does not start with a description of a perturbed density field, plug it into his equations and produce a solution that predicts observables. He gets as far as a general form, then fixes the parameter values from the data. Without actually solving the equations for a realistic density field it is impossible to judge whether the 'energy gradients' etc. that he claims cause the apparent acceleration are anywhere near big enough to do the job.
 
Last edited:
  • #41
Wallace said:
Now we're getting somewhere.

Hurray, Wallace!

Wallace said:
Wiltshire has calculated the time delay implied by his model that gives the 'apparent' acceleration we observe by fitting to the data of our Universe. However he cannot predict this time delay for a Universe in general, from a hypothetical set of conditions. This is the basic requirement of a cosmological model. ...

The problem is that Wiltshire has not provided the equations to do this with. We cannot predict what any arbitrary level and type of structure will do to measurements of the background with his work except for one set of parameters, the ones he has fitted to data.

Maybe you understand his equations better than I do, but the factual basis for your assertion escapes me.

Wiltshire starts with a crisp global definition of 'finite infinity' (valid for any data set), and sets the 'true' critical density (as evolved by FLRW) at that location in space. He uses the CMB data as the data input to work backwards to this critical density value. That becomes his baseline for all the other equations. If the CMB data changes, he can adjust his critical density value accordingly. He then uses his baseline to calculate the 'bare' Hubble constant.

Based on observations, he selects a "dominant" void size. Then he selects an initial void fraction value which, when plugged into the Buchert equations, will generate reasonable output parameters. If he plugged in a slightly different initial void fraction, the resulting output parameters would be slightly different. Why doesn't that meet your test?

Wallace said:
He gets as far as a general form, then fixes the parameter values from the data. Without actually solving the equations for a realistic density field it is impossible to actually judge if the 'energy gradients' etc that he claims causing the apparent acceleration are anywhere big enough to do the job.

Can you please point more specifically to where in his calculations he stops solving equations and starts using templates? I don't see it. He admits in his 2/07 paper that he integrated forward to calculate results; but in his 9/07 paper he replaced the integration process with an exact calculation of the Buchert equations.

Jon
 
  • #42
An interesting corollary to Wiltshire's model occurs to me. Imagine two separate universes, one exactly at 'critical density' with flat geometry, the other below critical density with negative curvature. I think his model means that there is no meaningful difference in the expansion rate of the two universes. To the extent that an observer of both universes measures the 'open' universe to be expanding faster, it is merely an artifact of the different clock rates in the two universes. A 'common' clock would show both universes to be expanding at exactly the same rate. [All of this assumes some hypothetical observer who can measure both universes concurrently.] [Edit: Well, not necessarily. If one counts clock ticks based on some constant periodic event, such as the orbital period of a hydrogen electron, the same number of elapsed ticks would correlate to the combined absolute scale factor and density of every universe, even if there were no observer in a position to count both sets of ticks concurrently.]

A further conjecture: If the second universe were instead at above critical density and contracting, then would its clock rate need to literally 'run backwards' in order to align the closed universe's negative expansion rate to the expansion rate of the flat universe?

Of course if there is no absolute metric of time, then "it's all relative." What one observer describes as a clock running backwards can be described by another observer merely as a clock running relatively slower. Hmmm... food for thought.

This suggests that the flow of time and cosmic expansion are causally inseparable; they truly are different physical manifestations of a single phenomenon. Every constant periodic event in any universe bears a fixed metric relationship to the combination of that universe's scale factor and density.
Jon
 
Last edited:
  • #43
jonmtkisco said:
Maybe you understand his equations better than I do

Agreed :p

jonmtkisco said:
Wiltshire starts with a crisp global definition of 'finite infinity' (valid for any data set), and sets the 'true' critical density (as evolved by FLRW) at that location in space. He uses the CMB data as the data input to work backwards to this critical density value. That becomes his baseline for all the other equations. If the CMB data changes, he can adjust his critical density value accordingly. He then uses his baseline to calculate the 'bare' Hubble constant.

In the above, the emphasis is mine, and this is where you've been misled. This calculation is not done. I can't point to the exact place where it isn't done, now can I? He does 'calculate' this value assuming his equations are valid, but does not do the required calculation to demonstrate that validity. You could therefore write down any old equation and make the same claim.

jonmtkisco said:
Based on observations, he selects a "dominant" void size. Then he selects an initial void fraction value which, when plugged into the Buchert equations, will generate reasonable output parameters. If he plugged in a slightly different initial void fraction, the resulting output parameters would be slightly different. Why doesn't that meet your test?

The Buchert equations you speak of are not solutions to the Einstein Field Equations for the density field in question. They are in principle forms of solutions. This is why plugging numbers in doesn't help, since these equations differ from FRW and hence will clearly give a different result. What needs to be demonstrated is that these equations themselves are valid. This is what has not been done! I'm not sure how many times I can say this. You can't prove that the form of an equation that has simply been written down is valid by fitting arbitrary parameters of it to the data. In this process you are fitting data to data!

To be clear, for instance, you say Wiltshire takes the observed void fraction as an observational input. The problem is that his equations that depend on the void fraction have not been shown to be valid. In other words, the proposition that the void fraction matters cannot be proven by showing that a model that relies on knowing the void fraction gives an accurate prediction. What needs to be demonstrated is how the void fraction matters. I know Wiltshire has said a lot of words about finite infinity and such, yet inescapably the equations he uses are not solutions to the field equations, and hence it cannot be shown that they are at all valid.

jonmtkisco said:
Can you please point more specifically to where in his calculations he stops solving equations and starts using templates? I don't see it. He admits in his 2/07 paper that he integrated forward to calculate results; but in his 9/07 paper he replaced the integration process with an exact calculation of the Buchert equations.

Jon

I'll turn it around. Can you show anywhere that his equations are demonstrated to be solutions to the Einstein Field Equations? Of course it is impossible to show where something is not done! Don't take the Buchert equations as gospel; they aren't solutions to the EFE, they are just metric equations (i.e. start with a metric and work out the dynamics, rather than find a metric that matches a density field and then find the dynamics).

So he solves equations yes, but the equations he solves (his step 1) do not come from a demonstrably valid description of a perturbed FLRW universe. If we can't establish step 1 then all the subsequent steps are not very useful.

To try and be clearer, I'm going to be a little absurdist for a moment. In a very exaggerated way, here is an example of the reasoning used (with apologies to the Flying Spaghetti Monster):

I have a proposition: that the number of pirates in the world is linked to global temperature (cf. apparent acceleration is caused by inhomogeneities). I can't yet calculate exactly the strength of this connection, but I think it might look like

temperature = A + B * Number of Pirates
(cf. Wiltshire's version of the Buchert equations, with free parameters but no theoretical calculation of the values the parameters would take in a given density field representing the structure in the Universe)

Now, by looking at the temperature of the Earth and the population of pirates as a function of time, we can see that A = (some number) and B = (some number) (cf. the lapse function etc. in Wiltshire's equations).

Therefore the number of pirates determines the global temperature and the model makes an accurate prediction of this.

Obviously the example is silly, but it is somewhat analogous to the reasoning used. In the pirate case, what is lacking is an actual prediction, without any data, of what the function temperature(pirates) looks like. It is the same here. Without actually demonstrating that the equations Wiltshire uses are solutions of the EFE for a particular density field, we can have no faith that the form of the equations is correct. Likewise, without a calculation from a hypothetical density field to the expected values of the parameters, we can have no faith that the parameters Wiltshire fits to the data have any physical significance.
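The pirate analogy can be made concrete: with two free parameters, a least-squares fit will "confirm" almost any pair of roughly monotone series. A toy illustration in Python (the numbers are invented for the example, not real data):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = A + B*x, no libraries needed."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    B = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    A = my - B * mx
    return A, B

# Invented illustrative data: declining pirate counts, rising temperatures.
pirates = [35000, 20000, 5000, 400]
temperature = [14.2, 14.4, 14.8, 15.3]

A, B = linear_fit(pirates, temperature)
print(f"A = {A:.2f}, B = {B:.2e}")
# The fit "works" (B < 0, matching the story), but nothing here tests the
# mechanism; the model only becomes falsifiable once B is predicted from
# independent physics before looking at the data.
```

The point of the sketch is exactly Wallace's: fitting A and B to the data and then declaring the model confirmed is fitting data to data.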
 
Last edited:
  • #44
Wallace said:
The Buchert equations you speak of are not solutions to the Einstein Field Equations for the density field in question. ...

Can you show anywhere in which his equations are shown to be solutions to the Einstein Field Equations? Of course it is impossible to show where something is not done! Don't rely on the Buchert equations being gospel, they aren't solutions to the EFE...

OK Wallace, I understand that the Buchert equations are not exact solutions to the Einstein Field Equations. Several posts back I quoted Wiltshire saying that an exact solution to the EFE for an inhomogeneous universe is intractable. Probably no solution in our lifetime or the next.

We can't expect any inhomogeneous model to be "solved" to that standard, so "serious" cosmologists will need to ignore all such models indefinitely. OK.

Jon
 
  • #45
jonmtkisco said:
OK Wallace, I understand that the Buchert equations are not exact solutions to the Einstein Field Equations. Several posts back I quoted Wiltshire saying that an exact solution to the EFE for an inhomogeneous universe is intractable. Probably no solution in our lifetime or the next.

Right, a complete solution may be effectively impossible, but that doesn't mean we can't try to make approximations. The point is that the process needs to be open and progress needs to be assessed honestly. A full solution may not be necessary, but what does need to be done are calculations showing quantitatively what the departure from the 'averaged' solution might be. Wiltshire does give a lot of qualitative justification, but there is still no detail in the maths as to why the standard approach fails and how his ideas give an effect of a sufficient order of magnitude.

jonmtkisco said:
We can't expect any inhomogeneous model to be "solved" to that standard, so "serious" cosmologists will need to ignore all such models indefinitely. OK.

Jon

I detect a note of sarcasm here. Shame.

The issue is not that inhomogeneous models can't be solved, though. That is not why the, as you drawl, "serious" cosmologists aren't too interested. You are completely misrepresenting the views of the cosmology community. The reason is that while you can't solve the full equations, you can use perturbation theory to show that the deviations from the averaged solution that a full inhomogeneous solution would predict are very small. So in fact the calculation, for all intents and purposes, has already been done. The hard task for Wiltshire and others is to demonstrate why perturbation theory should fail so spectacularly, in a way that is unprecedented in physics.

So yes, I'm skeptical of this and other such works, but at the same time interested. It would be fantastic if this idea worked, but we've got to look carefully at the details rather than believe the hype out of a desire for it to be right.
 
  • #46
Wallace said:
The reason is that while you can't solve the full equations you can use perturbation theory to show that the deviations from the averaged solution that a full inhomogeneous solution would predict are very small. So in fact the calculation, for all intents and purposes, has already been done. The hard task for Wiltshire and others is to demonstrate why perturbation theory should fail so spectacularly in a way that is unprecedented in physics.

Hi Wallace. As I've said several times, Wiltshire agrees that perturbations on an FLRW model would be too small to cause acceleration. He agrees that the backreaction models have been reasonably well proved to not generate results consistent with observations.

I'm just trying to emphasize that his model is based on an entirely different concept. Therefore it is unreasonable to point to the failure of the backreaction models as an independent reason to not treat Wiltshire seriously. I think many people are unaware of this point.

Wallace said:
It would be fantastic if this idea worked, but we've got to look carefully at the details rather than believe the hype as such in a desire for it to be right.

Agreed absolutely. As I've said repeatedly, I don't know if Wiltshire's model is right or wrong. I have no intention of being swayed by hype. Personally, for certain reasons I'd prefer it were wrong. But in my modest opinion it is a very solidly conceived model. In particular, understanding his work is helpful to anyone trying to get their arms around the general subject of inhomogeneity.

Jon
 
  • #47
jonmtkisco said:
Hi Wallace. As I've said several times, Wiltshire agrees that perturbations on an FLRW model would be too small to cause acceleration. He agrees that the backreaction models have been reasonably well proved to not generate results consistent with observations.

I'm just trying to emphasize that his model is based on an entirely different concept. Therefore it is unreasonable to point to the failure of the backreaction models as an independent reason to not treat Wiltshire seriously. I think many people are unaware of this point.

Most people (where by 'people' I mean cosmologists) are aware of this point, and I was not referring to backreaction. From the 'standard' set of tools (perturbation theory etc.) you can show why the inhomogeneities in the Universe don't cause any part of the FRW approximation to break down, either in a direct back-reaction sense or through any other means, including any difference in clock rates, luminosity distance or anything else. I don't know why you thought I was talking about backreaction only? It's not a new idea to suggest that inhomogeneities alter the appearance of observables relative to the FRW case through any number of causes.

jonmtkisco said:
Agreed absolutely. As I've said repeatedly, I don't know if Wiltshire's model is right or wrong. I have no intention of being swayed by hype. Personally, for certain reasons I'd prefer it were wrong. But in my modest opinion it is a very solidly conceived model. In particular, understanding his work is helpful to anyone trying to get their arms around the general subject of inhomogeneity.

Jon

Exactly, which is why I've been trying to help you (and by extension anyone else reading) to understand what Wiltshire's papers do and do not say.
 
  • #48
Wallace said:
From the 'standard' set of tools (perturbation theory etc.) you can show why the inhomogeneities in the Universe don't cause any part of the FRW approximation to break down, either in a direct back-reaction sense or through any other means, including any difference in clock rates, luminosity distance or anything else.

Thanks Wallace, it would greatly aid my understanding of this subject if you could briefly show us how standard perturbation tools prove that the differential clock rate caused by negative curvature in a rapidly expanding void fraction is too insignificant to cause the temporary illusion of acceleration that Wiltshire describes. I'm not aware that anyone has published such a proof.

Jon
 
  • #49
Addressing every particular point that you, Wiltshire or anyone else raises, in the fashion you ask, is like punching smoke. Fortunately, it isn't necessary.

The standard argument (which can be found in any cosmology textbook, or for more details see Ishibashi & Wald 2006) starts by saying that on large enough scales the FRW metric is exact if the matter density field is spatially averaged. This is not in dispute. Now, we know that General Relativity reproduces Newtonian gravity in the limit of weak fields and low velocities; indeed GR would simply be wrong if it did not. Therefore, we can write down the 'Newtonian perturbed FRW metric', which looks like this:

d\tau^2 = -(1+2\Phi)\,dt^2 + a^2(t)(1-2\Phi)\left(d\chi^2 + S_k^2(\chi)\,d\Omega^2\right)

where \Phi is the Newtonian potential.

To cut a long story short, if you plug in values of \Phi describing the strength of the gravitational fields of the structures in the Universe, you find that these perturbations are quite small. Plugging this metric into the Einstein tensor, the new terms that appear, which are not present in the simple unperturbed FRW metric, are very small compared to the existing terms.

These new terms describe all of the effects that inhomogeneities have, whether on the 'true' background expansion (back-reaction) or as 'apparent' effects such as those described by Wiltshire and others. It is easy to see that all of these effects are very small (again, see any textbook, or Ishibashi and Wald, for the details).
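To put rough numbers on this, here is a back-of-the-envelope sketch (my own illustrative figures, not taken from the post) of the size of \Phi for a large galaxy cluster, one of the deepest potential wells in the Universe, and hence of the relative size of the quadratic correction terms:

```python
# Order-of-magnitude estimate of the dimensionless Newtonian potential
# |Phi|/c^2 for a rich galaxy cluster. The mass and radius are assumed,
# illustrative values, not figures from the discussion above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_sun = 1.989e30     # solar mass, kg
Mpc = 3.086e22       # megaparsec, m

M = 1e15 * M_sun     # assumed cluster mass
R = 2 * Mpc          # assumed characteristic radius

phi = G * M / (R * c**2)   # dimensionless potential |Phi|/c^2
print(f"|Phi| ~ {phi:.1e}")

# The new terms in the perturbed Einstein tensor carry extra powers of Phi,
# so their size relative to the leading terms is of order Phi itself.
print(f"correction/leading term ratio ~ {phi:.1e}")
```

This gives |Phi| of a few times 10^{-5}, consistent with the figure quoted later in the post.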

It is hard to see how this process could fail, though it might if, for instance, the first term in a perturbation expansion is tiny and so is ignored, while for some reason a second- or higher-order term becomes large. It is unusual but not impossible for an equation to behave this way, particularly a highly non-linear equation such as the Einstein equation. However, despite this possibility, it has never been demonstrated to happen here.

Wiltshire's differential clock rates due to negative curvature must, if they truly exist in general relativity, be embodied by a term or terms in the Einstein tensor for a valid metric describing the Universe; furthermore, the resultant Einstein tensor must equate to a stress-energy tensor that accurately describes the density field of the Universe. Now, as we have agreed, such a full solution is too ambitious to ask for. However, it is not unreasonable to ask: if the voids in the Universe cause such a dramatic difference in clock rates, why does this not fall out naturally from the weak-field metric describing a void in an expanding Universe? The strengths and gradients of the Newtonian potentials in question are not large (of order 10^{-5} at most in geometric units), so the weak-field metric should give the correct answer. If it does not, then GR does not reduce to the Newtonian limit and must therefore be wrong, since Newtonian gravity is much better tested than GR and is known to be very accurate.
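The scale of the tension here can be sketched numerically (my own rough comparison, under the assumption that the void-wall potential contrast is of order 10^{-5}): in the weak-field metric the proper-time rate is d\tau/dt = \sqrt{1+2\Phi} \approx 1+\Phi, so the fractional clock-rate difference between regions is of order \Phi, whereas the article's summary of Wiltshire's claim (18 vs 14.7 billion years) amounts to roughly a 20% difference:

```python
# Rough comparison, not a calculation from the post itself:
# weak-field clock-rate difference vs the age difference attributed
# to Wiltshire in the linked article (18 vs 14.7 Gyr).
phi = 1e-5   # assumed |Phi|/c^2 contrast between a void and a dense wall

# In the weak-field metric, d(tau)/dt = sqrt(1 + 2*Phi) ~ 1 + Phi,
# so the fractional clock-rate difference is of order Phi.
weak_field_effect = phi
claimed_age_difference = 18.0 / 14.7 - 1.0   # ~0.22, i.e. ~22%

print(f"weak-field clock-rate difference ~ {weak_field_effect:.0e}")
print(f"claimed fractional age difference ~ {claimed_age_difference:.2f}")
print(f"discrepancy factor ~ {claimed_age_difference / weak_field_effect:.0e}")
```

On these assumptions the claimed effect exceeds the naive weak-field estimate by some four orders of magnitude, which is exactly the gap Wallace is asking Wiltshire to explain.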

Wiltshire must address this question to convince 'the establishment', since it is the obvious first objection, as discussed in Ishibashi and Wald. Wiltshire argues that there 'may' be an effect due to curvature and potential gradients, but does not show why this effect should be so big, nor why the weak-field metric fails in a regime that is thoroughly weak-field.

In the past I've done some simple calculations of how well the weak-field metric describes the Universe. By plugging that metric into the Einstein tensor you can see directly, via the stress-energy tensor, what density field it implies. The difference between the density field that generated the potentials in the metric and the density field implied by the resulting Einstein tensor was of order 10^{-15}\%. I'd call that a pretty good approximation!
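The structure of that consistency check can be illustrated with a toy 1D example (my own construction, not Wallace's actual GR calculation, and in units with 4\pi G = 1): derive a potential from a density field via the Poisson equation, then recompute the source density from the potential and compare with what you started from:

```python
import numpy as np

# Toy 1D version of the consistency-check procedure described above:
# density -> potential (Poisson equation) -> implied density -> compare.
N = 256
L = 100.0                                          # box size, arbitrary units
x = np.linspace(0, L, N, endpoint=False)
rho = 1.0 + 0.5 * np.sin(2 * np.pi * 3 * x / L)    # assumed density field
delta = rho - rho.mean()                           # Poisson sources fluctuations only

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
delta_k = np.fft.fft(delta)

# Solve d^2(Phi)/dx^2 = delta in Fourier space, zeroing the k=0 mode
phi_k = np.zeros_like(delta_k)
phi_k[1:] = -delta_k[1:] / k[1:] ** 2
phi = np.fft.ifft(phi_k).real

# Recompute the source density from Phi and compare with the input
delta_back = np.fft.ifft(-k ** 2 * np.fft.fft(phi)).real
err = np.max(np.abs(delta_back - delta))
print(f"max reconstruction error: {err:.1e}")
```

In this linear toy case the round trip closes to machine precision; the non-trivial point in Wallace's version is that doing the same thing with the full Einstein tensor of the perturbed metric introduces only tiny corrections.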

To claim that something in standard general relativity makes this process break down, without demonstrating why this apparently excellent approximation should be off by a factor of 2, is a brave call.
 
  • #50
Thanks for the explanation Wallace.

My sense is that the Newtonian approximation deals with energy gradients arising from mass differentials (which are too small to generate significant deviations), but not with energy gradients arising from spatial curvature. I believe the latter must be calculated entirely within GR. Since no suitable exact solution of the Einstein field equations is available to us (and may not exist), in my opinion it comes down to understanding the Buchert averaging equations and assessing whether they are sound and whether Wiltshire is applying them properly. I hope that additional scholars will weigh in on this specific subject, and not view the whole area as tarnished by the failure of the backreaction solutions.
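For readers unfamiliar with the Buchert equations mentioned here, the averaged Hamiltonian constraint for an irrotational dust domain \mathcal{D} takes the standard form (with \theta the expansion scalar and \sigma the shear; this is textbook material, not taken from Wiltshire's papers):

```latex
3\left(\frac{\dot a_{\mathcal D}}{a_{\mathcal D}}\right)^{2}
  = 8\pi G \,\langle \rho \rangle_{\mathcal D}
  - \tfrac{1}{2}\langle \mathcal R \rangle_{\mathcal D}
  - \tfrac{1}{2} Q_{\mathcal D},
\qquad
Q_{\mathcal D} = \tfrac{2}{3}\left(\langle \theta^{2} \rangle_{\mathcal D}
  - \langle \theta \rangle_{\mathcal D}^{2}\right)
  - 2 \langle \sigma^{2} \rangle_{\mathcal D}
```

The kinematical backreaction term Q_{\mathcal D} and the averaged spatial curvature \langle \mathcal R \rangle_{\mathcal D} are the quantities whose size and interpretation the whole debate turns on.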

So I'm going to keep an eye out for future publications; I understand that you're not holding your breath. Wiltshire says he has a couple of new papers forthcoming. In any event, I think we've thoroughly beaten this subject to death. Until there's news, it's time to move on to a new subject!

Thanks again Wallace.

Jon
 