For more of David Wiltshire's point of view, here is an email he sent me on Dec 31, 2009:
Edwin,
Thanks for letting me know that the paper had "sparked some discussion"; I
was not aware of this until your email, and just did a Google search... Eeek,
those forums again. It is a bit amusing to see this only picked up now, as
this essay has been publicly available at the FQXi competition website
http://fqxi.org/community/essay/winners/2008.1#Wiltshire for over a year
- and the longer Physical Review D paper on which the essay was based
[arxiv:0809.1183 = PR D78 (2008) 084032] came before that. I was just
tidying things up at the end of the year and - prompted by receiving
the proofs of the essay - put a few old things (this essay and some
conference articles 0912.5234, 0912.5236) on the arxiv.
Contributors to forums like PF tend to get such a lot of things wrong (since,
as they admit, they are not experts in the subjects under discussion), and
I don't have time to comment on all the wrong things - but it is reassuring
to see that a couple of people have realized that 0912.4563 is only
"hand-wavy" because it is an essay, and the real work is in the various
other papers like 0809.1183 and 0909.0749, which very often go unnoticed
at places like PF.
So just a few comments, which will make this missive long enough...
The understanding of quasilocal energy and conservation laws is an unsolved
problem in GR, which Einstein himself and many a mathematical relativist
since have struggled with. I never said Einstein was wrong; there are simply
bits of his theory which have never been fully understood. If "new" physics
means a gravitational action beyond the Einstein-Hilbert one then there is
no "new" physics here, but since not everything in GR has been settled there
are new things to be found in it. Every expert in relativity knows that, and
the area of quasilocal energy is a playground for mathematical relativists
of the variety who only publish in mathematically oriented journals, and
never touch data. Such mathematical relativists are often surprised by my
work as they never imagine that these issues could be of more than arcane
mathematical relevance. What I am doing is trying to put this on a more
physical footing - with, I claim, important consequences for cosmology - via
a physical proposal about how the equivalence principle can be extended to
attack the averaging problem in cosmology in a way consistent with the
general philosophy of Mach's principle. In doing so, it reduces the
space of possible solutions to the Einstein equations: models
with global anisotropies (various Bianchi models) or closed timelike curves
(Goedel's universe) are excluded, while physically relevant ones
(anything asymptotically flat, like a black hole) are kept, and the
possible cosmological backgrounds are still extended to a class of
inhomogeneous models much larger than the smooth, homogeneous, isotropic
Friedmann-Lemaitre-Robertson-Walker (FLRW) class. The observational evidence is that
the present universe has strong inhomogeneities on scales less than 200 Mpc.
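For reference, the FLRW class means the exactly homogeneous and isotropic
geometries which, in standard notation (units with c = 1), can be written as

    ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2}
           + r^2 (d\theta^2 + \sin^2\theta \, d\phi^2) \right],

where a(t) is the scale factor and k = -1, 0, +1 labels the spatial curvature.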
A number of other people (Buchert, Carfora, Zalaletdinov, Rasanen, Coley,
Ellis, Mattsson, etc) have (in some cases for well over a decade) looked at
the averaging problem - most recently with a view to understanding the
expansion history for which we invoke dark energy. But given an initial
spectrum of perturbations consistent with the evidence of the CMB, these
approaches, which only consider a change to the average evolution as
inhomogeneity grows, cannot realistically match observation in a statistical
sense. The clock effect idea is my own "crazy" contribution, which the
others in the averaging community in mathematical relativity have not yet
subscribed to. But with this idea I can begin to make testable predictions
(which the others cannot to the same degree), which are in broad quantitative
agreement with current observations, and which can be distinguished from
"dark energy" in a smooth universe by future tests. My recent paper
0909.0749, published in PRD this month, describes several tests and
compares data where possible. The essay, which summarises the earlier
PR D78 (2008) 084032, is an attempt to describe in non-technical
language why this "crazy idea" is physically natural.
One important point I tackle, which has not been much touched by my
colleagues in the community (a couple of papers of Buchert and
Carfora excepted), is that as soon as there is inhomogeneity
we must go beyond simply looking at the changes to the average evolution,
because when there is significant variance in the geometry not all
observers are equivalent. Structure formation gives a natural division of
scales below the scale of homogeneity. Observers only exist in regions
which were greater than critical density; i.e., dense enough to overcome
the expansion of the universe and form structure. Supernovae "near a void"
will not have different properties from other supernovae (apart from the
small differences due to the different metallicities, etc., between rich
clusters of galaxies and void galaxies), because all supernovae are in
galaxies and all galaxies are denser than critical density.
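For the non-experts: "critical density" here is the standard

    \rho_{\rm crit} = \frac{3 H^2}{8 \pi G},

the dividing density at which a region just keeps expanding forever; regions
that started out denser than this overcome the expansion, collapse, and form
the bound structures - galaxies and clusters - in which all observers and all
supernovae sit.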
Of course, my project is just at the beginning and much remains to be
done to be able to quantitatively perform several of the tests that
observational cosmologists are currently starting to attempt, especially
those that relate to the growth of structure (e.g., weak lensing, redshift
space distortions, integrated Sachs-Wolfe effect).
It is true that numerical simulations are an important goal. The problem
with this is not so much the computer power as the development of an
appropriate mathematical framework in numerical relativity. Because of
the dynamical nature of spacetime, one has to be extremely careful in
choosing how to split spacetime to treat it as an evolution problem.
There are lots of issues to do with gauge ambiguities and control of
singularities. The two-black-hole problem was only solved in 2005
(by Pretorius) after many decades of work by many people.
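To make the point about splitting spacetime concrete: in the standard 3+1
(ADM) decomposition - textbook material, not specific to any one approach -
the metric is written as

    ds^2 = -N^2 dt^2 + \gamma_{ij} (dx^i + N^i dt)(dx^j + N^j dt),

where \gamma_{ij} is the 3-metric on the spatial slices and the lapse N and
shift N^i encode the freedom in how the slices are chosen and how coordinates
are carried between them. The Einstein equations then split into constraint
and evolution equations, and it is largely through the choices of N and N^i
that the gauge ambiguities just mentioned enter.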
In numerical cosmology at present, general relativity is not really used.
(One sometimes sees statements that some test such as the one Rachel
Bean looked at in 0909.3853 is evidence against "general relativity" when
all that is really being tested is a Newtonianly perturbed
Friedmann-Lemaitre universe.) The only sense in which GR enters numerical
simulations in cosmology at present is that the expansion rate of a LCDM
Friedmann-Lemaitre universe is put in by hand, and structure formation is
treated by Newtonian gravity on top of the base expansion. This explains
some but not all of the features of the observed universe (e.g., simulated
voids do not tend to be as "empty" as the observed ones). Anyway, the base
expansion rate is kept artificially uniform on each constant-time slice, and
the expansion and matter sources are not directly coupled as they are in
Einstein's theory.
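To spell out what "put in by hand" means in practice, here is a minimal
sketch (illustrative parameter values only, not taken from any particular
simulation code) of the fixed flat-LCDM background expansion rate that
Newtonian N-body codes prescribe before layering Newtonian gravity on top:

    import numpy as np

    # Flat LCDM Friedmann equation: H(a) = H0 * sqrt(Om/a^3 + OL).
    # Parameter values are illustrative only.
    H0 = 70.0           # Hubble constant today, km/s/Mpc
    Om = 0.3            # matter density parameter
    OL = 1.0 - Om       # cosmological constant term (flat universe)

    def hubble_rate(a):
        """Background expansion rate H(a), in km/s/Mpc, for flat LCDM."""
        return H0 * np.sqrt(Om / a**3 + OL)

    # This H(a) is imposed as a uniform background on every constant-time
    # slice; structure is then grown with Newtonian gravity on top of it,
    # so the matter and the expansion are never coupled as in full GR.
    for a in np.linspace(0.1, 1.0, 10):
        print(f"a = {a:.2f}   H = {hubble_rate(a):7.2f} km/s/Mpc")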
The full GR problem is just very difficult. But a former postdoc of
Pretorius has told me that he has begun looking into it in his spare time
when not doing colliding black holes. Making the problem tractable is so
difficult that, as far as I know, no one has yet got funding to do the
numerical problem as a day job.
To make progress with the numerical problem one has to really make a
very good guess at what slicing to choose for the evolution equations.
The right guess, physically informed, can simplify the problem. My proposal
suggests that a slicing which preserves a uniform quasilocal Hubble flow
[proper length (cube root of volume) with respect to proper time] of
isotropic observers is the way to go. This would be a "CMC gauge"
(constant mean extrinsic curvature) which happens to be the one favoured
by many mathematical relativists studying existence and uniqueness in
the PDEs of GR. At a perturbative level near a FLRW geometry, such a
slicing - in terms of a uniform Hubble flow condition [as in one of
the classic gauges of Bardeen (1980)] supplemented by a minimal shift
distortion condition [as separately investigated by York in the 1970s] -
has also been arrived at by Bicak, Katz and Lynden-Bell (2007) as one of the
slicings that can be used to best understand Mach's principle. I mentioned
this sort of stuff in my first serious paper on this, in the New J Physics
special focus issue on dark energy in 2007, gr-qc/0702082 or
http://www.iop.org/EJ/abstract/1367-2630/8/12/E07 [New J. Phys. 9 (2007) 377].
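For readers wondering what a CMC slicing is, the standard definition in one
line (with the usual caveat that sign conventions vary): a slicing is CMC if
the trace of the extrinsic curvature of each slice,

    K \equiv \gamma^{ij} K_{ij},

is spatially constant, K = K(t). For observers moving orthogonally to the
slices the volume expansion is \theta = -K (in one common convention), so
demanding constant mean curvature is the same as demanding that the local
Hubble parameter H = \theta/3 be uniform on each slice - which is why it
matches the uniform Hubble flow condition described above.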
To begin to do things like this numerically one must first recast the
averaging problem in the appropriate formalism. Buchert's formalism
uses a comoving constant time slicing because people did not think that
clock effects could be important in the averaging problem as we are
talking about "weak fields". [This is why I am claiming a "new" effect:
when one has the lifetime of the universe to play with, an extremely small
relative regional volume deceleration (typically one angstrom per second^2)
can nonetheless have a significant cumulative effect. As physicists
we are most used to thinking about special relativity and boosts; but
this is not a boost - it is a collective degree of freedom of the regional
background; something you can only get by averaging on cosmological scales
in general relativity.] So anyway, while Buchert's formalism - with
my physical reinterpretation which requires coarse-graining the dust
at the scale of statistical homogeneity (200 Mpc) - has been adequate for
describing gross features of the average geometry (and relevant
quantitative tests), to do all the fine detail one wants to revisit
the mathematics of the averaging scheme. This would be a precursor to
the numerical investigations.
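For those who want to see what "Buchert's formalism" amounts to, the averaged
equations for irrotational dust (as derived in Buchert's papers; quoted here
only to fix notation, with the question of how the dust is coarse-grained
set aside) are, for a spatial domain D with volume scale factor
a_D \propto V_D^{1/3}:

    3 \frac{\ddot{a}_D}{a_D} = -4\pi G \langle\rho\rangle_D + Q_D,

    3 \left(\frac{\dot{a}_D}{a_D}\right)^2 = 8\pi G \langle\rho\rangle_D
        - \tfrac{1}{2} \langle R \rangle_D - \tfrac{1}{2} Q_D,

    Q_D = \tfrac{2}{3} \left( \langle\theta^2\rangle_D
        - \langle\theta\rangle_D^2 \right) - 2 \langle\sigma^2\rangle_D,

where \theta is the expansion, \sigma^2 the shear scalar, \langle R \rangle_D
the average spatial curvature, and Q_D the kinematical backreaction arising
from the variance of the expansion. The variance term is where inhomogeneity
feeds back on the average expansion; the relative calibration of clocks
between regions, discussed above, is the extra ingredient of my own proposal.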
These are not easy tasks, as one is redoing everything from first
principles. At least I now have a postdoc to help.
At one level, it does not matter whether my proposal as it stands is
right or wrong. Physics involves asking the right hard questions in
the first place; and that is something I am trying to do. For
decades we have been adding epicycles to the gravitational action,
while keeping the geometry simple because we know how to deal with
simple geometries. I have played those games myself for most of my career;
but none of those games was ever physically compelling. Once one is
"an expert in GR" one appreciates that real physics involves trying
to deeply understand the nature of space and time, symmetries and
conservation laws, rather than invoking new forces or particles
just for the hell of it. GR as a whole - beyond the simple arenas of
black holes and exact solutions in cosmology - is conceptually difficult
and not completely understood. But I am convinced that to begin to
resolve the complexity one needs to think carefully about the
conceptual foundations, to address the not-completely-resolved issues
such as Mach's principle which stand at its foundations. The phenomenon
of "dark energy" is, I think, an important clue. Whether I am right or
wrong the hard foundational problems - along with observational puzzles
which in a number of cases do not quite fit LCDM - are the issues
that have to be faced.
Happy New Year and best wishes,
David W
PS You are welcome to post and share this with your PF friends, but I don't
have time to get involved in long discussions or trying to explain all the
various technical things (such as CMC slicings etc). Writing this missive
has taken long enough!