From time to timescape – Einstein’s unfinished revolution?

  • Thread starter: Garth
  • Tags: Revolution, Time
Summary
David Wiltshire's paper proposes a new perspective on dark energy, suggesting it may stem from misidentified gravitational energy gradients and varying clock rates, rather than being a fundamental force. He introduces the Cosmological Equivalence Principle, which emphasizes the influence of the universe's evolving density on clock synchronization and inertial frame calibration. This model indicates that observers in different regions of the universe could assign vastly different ages to the universe, challenging traditional notions of time and simultaneity. The discussion highlights the potential for a paradigm shift in understanding cosmology, as Wiltshire's approach could lead to testable predictions. Overall, the conversation reflects a mix of skepticism and intrigue regarding the implications of Wiltshire's theories on established physics.
  • #31
edpell said:
When I put up a post mentioning Mach's principle it was deleted as "personal theory". I guess my question is why is this whole thread not deleted as "personal theory"? Or is truth defined by having a tenured professorship?

It's because he is playing the game and there's enough meat in the paper so that people in the field have something to think about. What he is suggesting is new and original, and it's fun to read new and original ideas even if they happen to be wrong.
 
  • #32
twofish-quant said:
Part of the problem with the paper was that it wasn't clear whether or not he was arguing for new physics, and after the third time I read through it carefully, I came to the conclusion that he *was* arguing for a non-standard theory of gravity.

I'm still really surprised that this is your interpretation, so it would be nice to hear if others see the same.

As I say, I had a good half-hour conversation with Wiltshire, and I think his belief is that he is doing GR more deeply - yes, via a valid extension of the equivalence principle - rather than proposing new physics in the sense that anything was wrong or needs correcting at the level of the equations.

Making a probably non-standard suggestion myself, there is an interesting question when it comes to averaging over any system, but especially an open or expanding system.

In a closed or static system, we would expect averages to be Gaussian. But in open or expanding systems, we expect distributions that are straight lines on log-log plots, i.e. power laws.

So I guess there is the possibility that the standard way of averaging the flatness of the universe builds on that Gaussian expectation. And perhaps the reality may be fractal in some real sense. So for example, we might have relativistic curvature of quasi-localities over all scales. Mostly, we look at the universe as being large and flat. But around black holes, clearly the curvature becomes extreme.

So if we could actually profile the average relativistic curvature of the timescape, it could perhaps be not generally "very flat" with a few local exceptions like black holes, but instead flat only in a power-law sense.

I'm sure I will be told I'm wrong here. But I put it forward to be educated as to how I should be thinking about this. The timescape seems to say the universe is lumpy and so has local variations in spacetime curvature. But it could even be lumpy in a power-law fashion.
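
To make the Gaussian versus power-law contrast concrete, here is a minimal Python sketch (my own illustration, nothing from Wiltshire's paper) of how slowly the sample mean of a heavy-tailed power-law distribution settles down compared with a Gaussian one:

Code:
import numpy as np

rng = np.random.default_rng(0)

# Compare sample means: a Gaussian versus a classical Pareto (power-law)
# distribution with index alpha = 1.5. Both have true mean 3, but the
# Pareto has infinite variance, so its sample mean converges painfully slowly.
for n in (100, 10_000, 1_000_000):
    gauss = rng.normal(loc=3.0, scale=1.0, size=n)
    pareto = rng.pareto(1.5, size=n) + 1.0  # shift Lomax to classical Pareto, x_min = 1
    print(f"n={n:>9,}: Gaussian mean={gauss.mean():.3f}  Pareto mean={pareto.mean():.3f}")

The power-law mean is dominated by rare large values, which is exactly the kind of behaviour that breaks naive averaging.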

This connects with another long-running cosmo debate I could never follow - the apparent upset caused by fractal universe stories. All the debate about galactic walls, filaments, etc, and how large-scale cosmic structure would be a problem for the assumption of homogeneity, isotropy, what have you.

Wiltshire was certainly saying that it appears the universe is void-dominated over 200 megaparsecs. And that would fit in with the sound horizon of the big bang. Below that scale, the variation would have been scrambled and look close to Gaussian (which would of course mean that the universe would not actually fit a pure power-law matter/curvature distribution over all scales).
 
  • #33
To quote Wiltshire from the peer-reviewed literature: "In laying the foundations of general relativity, Einstein sought to refine our physical understanding of that most central physical concept: inertia. As he stated: “In a consistent theory of relativity there can be no inertia relatively to ‘space’, but only an inertia of masses relatively to one another”. This is the general philosophy that underlies Mach’s principle, which strongly guided Einstein."

How do we feel about this idea that inertia is defined only relative to other masses? Did Einstein think that? Does Wiltshire think that? Do you agree?
 
  • #34
twofish-quant said:
For some people, it is.

But edpell did not ask about "some people"; he asked about the policies of Physics Forums.
twofish-quant said:
Not for me, since I can think of some papers in Ap.J. that I (and pretty much everyone in the field) think are crackpot. I know of people in the National Academy of Sciences, and people with Nobel prizes, who have ideas that pretty much everyone in the field thinks are crackpot (i.e. don't mention the topic of black holes or accretion jets or redshift around so-and-so, since he'll bore you with his "proof" that they don't exist).

Yes, there are many examples of stuff like this, which is why Physics Forums Rules require more than just "published in professional peer-reviewed journals." For my take on the wording (which I think is overly convoluted) of the relevant part of Physics Forums Rules, see

https://www.physicsforums.com/showthread.php?p=2251832#post2251832.

This is a judgment call by the Mentors (moderators).
 
  • #35
Despite the crackpot points, most of them very irritating, I think it's worth the time to read it.
 
  • #36
The fear and hatred that people express towards the non-canonical is just as high as the Catholic Church's fear and hatred of Galileo. I think we have discovered a truth about human nature. People hate new ideas.

They really don't. It's part of the way that science works. If you are in a boxing ring and a prize fighter doesn't take a swing at you, then you really get disappointed. When I come up with a new idea, I spend about a day thinking about everything that could be wrong with it. Then I go to the person next door, and then we spend about a week trying to kill the idea. After a few weeks, if it passes the gauntlet, then eventually it gets published and everyone starts beating up on it.

Theorists love new ideas, but the way you come up with new ideas is to take an idea, put it into a gladiatorial arena and then toss lions at it.
 
  • #37
George, at your pointer you get "published and mainstream". OK, so why has this whole thread not been deleted, as it is "not mainstream"? I do like that there is a forum section at the bottom for basically "other stuff"; maybe it should be there?
 
  • #38
edpell said:
How do we feel about this idea that inertia is defined only relative to other masses? Did Einstein think that? Does Wiltshire think that? Do you agree?

Personally, if I'm understanding his paper (and I may not be), it's a point which I find irrelevant and totally uninteresting (although other people may disagree). I'm more interested in the latter half of the paper, where he writes down a metric and enough information that I can more or less do a calculation from it.
 
  • #39
twofish-quant said:
...the way you come up with new ideas is to take an idea, put it into a gladiatorial arena and then toss lions at it.

Love the phrase, particularly the "toss lions at it" :)
 
  • #40
twofish-quant said:
The final reason I'm pretty sure that Wiltshire *is* invoking new physics is that he doesn't do any detailed calculations. If he *were* saying that inhomogeneities are being handled incorrectly, then it wouldn't be hard to do a "we have a problem" calculation using standard GR. What I think he is doing is to use a new equivalence principle to create a new *class* of models, but since you have a class of parameterizable models rather than a single model, the next step is to try to put in numbers that let you do calculations.

OK, that sounds more reasonable. And there were indeed mutterings about the dangers of opening up a "landscape" of new GR modelling if you give up the simplicity of the existing cosmological calculation machinery.

Wiltshire is certainly pleased that he has just had funding for a new post-doc, Teppo Mattsson from Helsinki, who has calculational skills in this area.

And he threw up some slides which show places where his predictions and dark energy predictions should differ. "Baryon acoustic" and a few other things I didn't recognise.
 
  • #41
twofish-quant said:
I'm more interested in the latter half of the paper, where he writes down a metric and enough information that I can more or less do a calculation from it.

Are you talking about his equation #2? What can you calculate from it?
 
  • #42
Dark Energy as simply a mis-identification of gravitational energy gradients and the resulting variance in clock rates?

If that were true then there would be no dark energy in our galaxy, because there would be little variance in clock rates, but I'm sure there is no evidence suggesting dark energy doesn't exist in this galaxy.
I have my own theory on dark energy, and it does have a great deal to do with clock rates, just not in the way you suggest; but, not wanting to be labeled a crackpot, as seems to be inevitable after reading comments on this thread, I will leave it at that.
 
  • #43
edpell said:
George, at your pointer you get "published and mainstream". OK, so why has this whole thread not been deleted, as it is "not mainstream"? I do like that there is a forum section at the bottom for basically "other stuff"; maybe it should be there?

As I said, it's a judgement call by the Mentors, and, so far, no Mentor has seen Wiltshire's work as sufficiently far from mainstream. This digression really is disrupting the thread by taking it far off topic.
twofish-quant said:
Personally, if I'm understanding his paper (and I may not be), it's a point which I find irrelevant and totally uninteresting (although other people may disagree).

I have only scanned the paper very quickly, but it looks like I agree.
 
  • #44
apeiron said:
As I say, I had a good half-hour conversation with Wiltshire, and I think his belief is that he is doing GR more deeply - yes, via a valid extension of the equivalence principle - rather than proposing new physics in the sense that anything was wrong or needs correcting at the level of the equations.

I'm staring at the metric that he wrote down, and I just don't see how it's consistent with standard GR. I *will* be interested to see how the people that do the cosmological simulations respond to the paper. If it's the case that Wiltshire believes that he is "doing GR correctly" (and by implication the people who are doing the simulations are doing GR incorrectly) then I think we'll have a "battle royale" and I'll sit back and munch popcorn and watch the fireworks.

apeiron said:
I'm sure I will be told I'm wrong here. But I put it forward to be educated as to how I should be thinking about this. The timescape seems to say the universe is lumpy and so has local variations in spacetime curvature. But it could even be lumpy in a power-law fashion.

The standard FRW cosmology assumes that the universe is isotropic and homogeneous. In reality it isn't, so the LCDM model puts all of the lumpiness into first-order perturbations and models them as sound waves. In modeling the perturbations as sound waves you ignore self-gravitation, for the same reasons that you ignore self-gravitation when you model sound waves in air or ocean waves: it's too weak to make a difference.

Wiltshire says this is wrong, but the people who invented LCDM didn't make these assumptions without careful thought. One problem is that if you don't separate out pressure effects from gravitational effects, you end up with a total mess, unable to calculate anything.

One other problem with Wiltshire's model is that I'm pretty sure you would see some weird lensing effects. Also, I'd expect the acceleration for supernovae Ia behind voids to be very different from the acceleration of those that aren't behind voids.
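
For a feel for the standard machinery being defended here, a minimal Python sketch (textbook linear perturbation theory, not Wiltshire's formalism): in a matter-dominated Einstein-de Sitter background the density contrast obeys delta'' + 2*H*delta' = (3/2)*H^2*delta with H = 2/(3t), and the growing mode scales like the scale factor, delta ~ t^(2/3).

Code:
from scipy.integrate import solve_ivp

# Linear growth of the density contrast delta in an Einstein-de Sitter universe:
#   delta'' + 2 H delta' = (3/2) H^2 delta,   with H = 2 / (3 t).
# The growing mode is delta ~ t^(2/3), i.e. proportional to the scale factor a(t).
def rhs(t, y):
    delta, ddelta = y
    H = 2.0 / (3.0 * t)
    return [ddelta, 1.5 * H**2 * delta - 2.0 * H * ddelta]

# Start on the pure growing mode: delta(1) = 1, delta'(1) = 2/3.
sol = solve_ivp(rhs, (1.0, 1000.0), [1.0, 2.0 / 3.0], dense_output=True, rtol=1e-10)
for t in (1.0, 10.0, 100.0, 1000.0):
    print(f"t={t:7.1f}  delta={sol.sol(t)[0]:10.3f}  t^(2/3)={t ** (2 / 3):10.3f}")

In this linearised picture the lumps grow gently and never react back on the background expansion, which is exactly the assumption Wiltshire and company are questioning.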

apeiron said:
This connects with another long-running cosmo debate I could never follow - the apparent upset caused by fractal universe stories. All the debate about galactic walls, filaments, etc, and how large-scale cosmic structure would be a problem for the assumption of homogeneity, isotropy, what have you.

I think that a lot of the debate got garbled. First, we need to clearly define what a "fractal" is. A fractal is a shape that is self-similar. You get self-similarity when you have tightly coupled, chaotic, non-linear interacting systems. LCDM models pressure differences as "small" changes from the average. If we really did see fractals, then there would be something basically wrong with LCDM; we do see lumps, but they aren't fractal lumps.

apeiron said:
Below that scale, the variation would have been scrambled and look close to Gaussian (which would of course mean that the universe would not actually fit a pure power-law matter/curvature distribution over all scales).

Which is a good thing for LCDM.
 
  • #45
Reading some more...

http://adsabs.harvard.edu/abs/2009PhRvD..80l3512W

which IMHO is a much better paper, but it's a matter of taste. The interesting thing that I got out of this is that trying to explain acceleration as an artifact of GR inhomogeneities isn't Wiltshire's idea alone; there is a whole group of people trying to do that, but the basic problem is that these calculations are really, really hard to do. What Wiltshire brings to the table is a formalism that actually allows comparison with real data.

As for whether what he proposes is new physics: now that I've read his Phys. Rev. D paper, it's pretty clear that he doesn't think so. The trouble is that I look at his equations, and I don't see how they are consistent with standard GR. The further trouble is that the argument I'd use to show this involves some assumptions that Wiltshire and the people he cites would consider invalid. To resolve this, you'd have to solve the full Einstein equations, and my bet would be that what you'd end up with is something much closer to Lambda-CDM than what Wiltshire is proposing, but obviously he would disagree with that.
 
  • #46
George Jones said:
As I said, it's a judgement call by the Mentors, and, so far, no Mentor has seen Wiltshire's work as sufficiently far from mainstream.

What Wiltshire and the people he cites are doing seems pretty clearly "non-mainstream." They are arguing that the acceleration observed in supernovae Ia may be due to GR-related inhomogeneities, which is a pretty radical and non-standard idea, but it's an interesting one worth thinking about.

I think what it boils down to is that Wiltshire has done his homework and so he has come up with fresh new ideas that aren't obviously wrong or untestable. That makes his ideas interesting.
 
  • #47
apeiron said:
And there were indeed mutterings about the dangers of opening up a "landscape" of new GR modelling if you give up the simplicity of the existing cosmological calculation machinery.

Which may not be a bad thing if it turns out that the current machinery is seriously flawed. The "standard LCDM" assumes that you can model density fluctuations as corrections to an average field, and if you go into Wiltshire's references, there are about a dozen people questioning that idea who have presented some things suggesting that maybe you can't. But there are no smoking guns. What the Wiltshire paper has done is three things:

1) put together a detailed model that *is* observationally different from the standard cosmological model
2) explained how that model is different from the standard model so that you can translate between the two
3) suggested a symmetry principle that his model satisfies, that the standard LCDM model does not, and that he believes GR also requires

It's pretty clear that he and I think about GR in very different ways. The way I think about it is very heavily influenced by the "membrane paradigm" of Kip Thorne. What Thorne did was to invent a way of thinking about black holes which (and there is the hard part) he showed was justified by Einstein's equations. It appears that no one has done the same thing with cosmological models. A lot of the work in GR that Thorne and his colleagues have done could be titled "how a non-super-math-genius can think about GR without going crazy."

apeiron said:
Wiltshire is certainly pleased that he has just had funding for a new post-doc, Teppo Mattsson from Helsinki, who has calculational skills in this area.

Cool. Here is one of his papers:

http://arxiv.org/abs/0711.4264

apeiron said:
And he threw up some slides which show places where his predictions and dark energy predictions should differ. "Baryon acoustic" and a few other things I didn't recognise.

Interesting. However, looking over the papers, I don't see mention of what I think would be a big smoking gun. If the acceleration in the universe were caused by inhomogeneity, then you should see supernovae Ia next to known voids behave very differently from those that aren't, and there should be some sort of gravitational lensing effect.
 
  • #48
aggy said:
Dark Energy as simply a mis-identification of gravitational energy gradients and the resulting variance in clock rates?

Teppo Mattsson wrote a paper that describes the idea

http://arxiv.org/pdf/0711.4264

The idea is that it's known that clocks in regions of high density run more slowly than clocks in regions of low density. So if we happen to be in a region of low density, the rest of the universe will appear to speed up (i.e. you have the illusion of acceleration), even though it's really just a difference in clock rates.

Now what Mattsson is saying is that you *can* get something like this to explain cosmic acceleration, but there is a price. You have to assume that we are in the middle of a giant spherical void, and you have to assume that the density of our patch of the universe evolved in a certain way to get the right numbers. The void has to be spherical, and we have to be at its center, because if it were non-spherical or we were off center, then galaxies in one half of the sky would look different from those in the other half. Ummm... sounds fishy, and Mattsson knows it sounds fishy, so he spends the rest of the paper trying to come up with ideas that are less bogus-sounding.

For example, instead of being in the center of one big void, what happens if you are in the middle of lots of little ones? But then you have another problem. If you assume that we are in the middle of one big void, then the math is easy. If you don't, then the math is very messy. Messy math is a bad thing.

The cool thing is that once you've proposed a theoretical model, you can think about it and come up with observational tests and build on that idea theoretically.
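
For a rough feel for the size of the effect being invoked, here is a back-of-envelope Python sketch (my own toy numbers, weak-field only; Mattsson's actual calculation uses exact inhomogeneous GR models). A clock sitting at Newtonian potential Phi ticks at a rate dtau/dt ~ 1 + Phi/c^2, so the potential difference between a void centre and the cosmic mean translates into a fractional clock-rate offset:

Code:
import math

# Weak-field clock-rate offset at the centre of an underdense spherical void.
# All numbers are illustrative assumptions, not values from the papers.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
Mpc = 3.086e22       # one megaparsec in metres

R = 150 * Mpc        # assumed void radius (a bubble ~300 Mpc across)
rho_m = 2.7e-27      # rough cosmic mean matter density, kg/m^3
delta = -0.8         # assumed density contrast of the void

# Potential at the centre of a uniform sphere of density perturbation delta*rho_m:
#   Phi = -(3/2) G M / R,  M = (4/3) pi R^3 delta rho_m  =>  Phi = -2 pi G delta rho_m R^2
Phi = -2.0 * math.pi * G * delta * rho_m * R**2
print(f"Phi/c^2 = {Phi / c**2:.1e}")   # ~ +2e-4 for these numbers

An instantaneous offset of a few parts in 10^4 is tiny; the point of the Mattsson/Wiltshire work is that the cumulative effect over the age of the universe, handled with full GR rather than this weak-field cartoon, can be much larger.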

aggy said:
If that were true then there would be no dark energy in our galaxy, because there would be little variance in clock rates, but I'm sure there is no evidence suggesting dark energy doesn't exist in this galaxy.

Mattsson is suggesting that maybe there isn't any dark energy anywhere. Our clocks are just slow. Also he wrote that paper in 2007, and things may have changed since then.

aggy said:
I have my own theory on dark energy, and it does have a great deal to do with clock rates, just not in the way you suggest; but, not wanting to be labeled a crackpot, as seems to be inevitable after reading comments on this thread, I will leave it at that.

The interesting thing here is that Mattsson and Wiltshire are both coming up with wild and crazy ideas that are non-standard and non-mainstream, and this is a good example of how those ideas are handled. The important thing is that Mattsson and Wiltshire are "playing the game." The papers don't say ***I HAVE FOUND THE SECRET TO THE UNIVERSE, BUT UNFORTUNATELY I DON'T KNOW ENOUGH MATH OR PHYSICS TO DO ANY DETAILED CALCULATIONS, BUT IF YOU DISAGREE WITH ME YOU ARE BEING CLOSED-MINDED***. It's a lot of "here are some interesting ideas; I've worked through these equations, gotten these results, what do you think?"
 
  • #49
edpell said:
Or more generally, there are two kinds of physicists: 1) those whose status and self-worth are tied to their mastery of an existing canon of theory and experimental data, and who feel threatened by their current view being called into question, and 2) those who enjoy learning and discovery and ideas (ars gratia artis); an example would be Feynman.

I've never met anyone in the first category. Part of the thing is that in order to come up with a new and original and earth-shaking idea, you have to know a *HUGE* amount of data. If you aren't swimming in the existing canon of theory and experimental data, you are going to come up with stuff that people thought of fifty years ago and rejected for very good reasons. The neat thing is that all of the existing canon of theory and experimental data is now online. All you need is a tour guide who goes through the papers and does some translations. That's where I come in.

The other thing is that there is much too much for any one person to know, so a lot of the conversations involve interactions between people who have very different information pools.

People don't get Nobel prizes for being unoriginal, but being original is a lot harder than it sounds.

edpell said:
But the hate is a psychological problem of the hater. If you have some rationale for filtering something, tell me about it, but I have no interest in hearing your hate or using your hate as a filter.

But people in physics have weird ways of expressing love. If you go into any physics department, you'll find people *screaming* at each other in ways that make you think that they are going to kill each other, but then after about an hour they stop, shake hands, and then go out for drinks. It's really cool to watch two experts go at each other like that.

If a physicist really thinks that you have an interesting idea, they are going to try to blow it to smithereens. If you get into the ring with a heavyweight champion and he tries to beat the living stuffing out of you, it's not because he hates or disrespects you. If he really hated or disrespected you, he *wouldn't* be trying to beat the living stuffing out of you.

One important rite of passage is the Ph.D. defense. That's when you get in a room with five or so of your teachers, and they take the work that you have been doing for the last five years and try to rip it to shreds. If you've ever been in that situation, it's a lot like the kung fu movies in which the hero stands in the center of the ring while five people try to bash him to pieces. The whole point of the process is to see if you can fight back and hold your ground. If you can, then you get the Ph.D.
 
  • #50
twofish-quant said:
For example, instead of being in the center of one big void, what happens if you are in the middle of lots of little ones? But then you have another problem. If you assume that we are in the middle of one big void, then the math is easy. If you don't, then the math is very messy. Messy math is a bad thing.

Wiltshire was definitely thinking not of a single void, but of a foamy picture where there are voids over all scales above 200 megaparsecs.
 
  • #51
So the physical universe has some structure, some lumps and bumps (or more correctly voids and walls and filaments), and this means that at some level of accuracy simple calculations based on uniform distributions are not accurate enough. Understandably, the folks doing the computations do not want harder work and so resist the idea. Until some hungry young guy/gal thinks: hey, if I do the work and it is important, I will be a winner. Then they do it and receive acclaim, or find they wasted five years of effort.

Why is this viewed as such a complex calculation? You make a series of Monte Carlo model universes and do the integration at several points and compare? It is the computer that is doing the work.
 
  • #52
edpell said:
So the physical universe has some structure, some lumps and bumps (or more correctly voids and walls and filaments), and this means that at some level of accuracy simple calculations based on uniform distributions are not accurate enough.

Or maybe they are. Not clear right now.

edpell said:
Understandably, the folks doing the computations do not want harder work and so resist the idea.

Utter and total nonsense.

The first thing that you try to do when you have a problem like this is a quick "is this a totally nutty idea or not" calculation, which was what I was planning to do when I read Wiltshire's paper. However, Teppo Mattsson already did the calculation that I was planning on doing, on pages 13 and 14 of the paper that I referenced earlier. What he shows is that if you are sitting in a big empty bubble that's 300 megaparsecs wide, then yes, clocks can slow down enough to make it look like the universe is accelerating. Now this probably *isn't* anything like the real universe. But it's a quick toy calculation that says that this is a half-decent idea that we need to look into further.

What Wiltshire is trying to do is to take things from being a "toy model" into something that you can actually compare to real experiments. Now that I understand what he is trying to do, it's a decent idea. One problem with the way that Wiltshire is going about it is that he is using math that's great for human number crunchers but totally awful for computers.

edpell said:
Until some hungry young guy/gal thinks: hey, if I do the work and it is important, I will be a winner. Then they do it and receive acclaim, or find they wasted five years of effort.

If someone goes through the effort of figuring out whether or not it works, and it doesn't, it's not a wasted effort. If nothing else, you understand how inhomogeneities in GR work. If someone spends about five years and then comes up with an airtight argument why none of this will work, that's worth a Ph.D. Also, the cool thing is that while you are looking for X, you invariably stumble onto Y.

edpell said:
Why is this viewed as such a complex calculation? You make a series of Monte Carlo model universes and do the integration at several points and compare? It is the computer that is doing the work.

Well, computers need programmers. We are talking about 10 coupled non-linear equations *just for the gravity* in a 10,000x10,000x10,000 cube with maybe 100,000 time steps. If you run the full simulation, it's just not doable with current technology. So you end up with clever ways of reducing computer time, which - cross your fingers - don't actually destroy the calculation.

These simulations can eat up a month of supercomputing time. If you just dump the equations into a computer, chances are that the computer will just spit out "I can't do this calculation" and give you random noise. The first time you do a test run, the simulation will invariably not work. So you spend a few months debugging, and debugging, and finally you come up with something that looks reasonable. But is it?

And even getting to the point where you can code it is a challenge.

For example, one problem with the way that Wiltshire does the problem is that he splits things into "calculations you do at the voids" and "calculations you do in the non-voids". If you try to put that into a computer program, then chances are the computer will go nuts at the boundary conditions. Also, you don't want if-statements in the inner loops of a computer program. The computer chips like to add arrays of numbers. If you have branching statements, then the chip has to go down two different code paths, your pipelines get trashed, your L1 caches get overwritten, and a calculation that would have taken two weeks now takes a year and can't be done. Also, he does a lot of averaging. Averaging is bad. What do you average? How do you average?
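
To put rough numbers on the grid quoted above (my arithmetic, not from any paper):

Code:
# Back-of-envelope cost of the brute-force simulation described above:
# 10 coupled fields on a 10,000^3 grid, with ~100,000 time steps.
fields = 10
cells = 10_000 ** 3            # 1e12 grid cells
steps = 100_000
bytes_per_value = 8            # double precision

snapshot = fields * cells * bytes_per_value
print(f"Memory for one snapshot: {snapshot / 1e12:.0f} TB")   # ~80 TB

# Assume, optimistically, ~100 floating-point ops per field per cell per step:
flops = 100 * fields * cells * steps
print(f"Total work: ~{flops:.0e} FLOPs")                      # ~1e20 FLOPs

Eighty terabytes just to hold a single time slice, before you store any history at all, is why nobody simply dumps the raw equations into a computer.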
 
  • #53
twofish-quant said:
Well, computers need programmers. We are talking about 10 coupled non-linear equations *just for the gravity* in a 10,000x10,000x10,000 cube with maybe 100,000 time steps. If you run the full simulation, it's just not doable with current technology.

I would love to know the computational size of this problem versus the computational size of the calculations done by the lattice gauge folks to compute particle masses. I think the lattice gauge folks go as far as building special-purpose compute hardware for the specific calculation.
 
  • #54
There is a nice intro to numerical relativity at Caltech: http://www.black-holes.org/numrel1.html

From the pages it is clear this is a new area.
 
  • #55
twofish-quant said:
Wiltshire and the people he cites ... are arguing that the acceleration observed in supernovae Ia may be due to GR-related inhomogeneities, which is a pretty radical and non-standard idea ...

But not nearly as radical as contradicting the NASA statement (http://nasascience.nasa.gov/astrophysics/what-is-dark-energy) that
NASA said:
roughly 70% of the Universe is dark energy

You've been very helpful in clarifying what Wiltshire is doing, TQ. But you seem to imply that it is only the SN Ia results which Wiltshire is taking to be an artefact of GR in a lumpy universe.

What about the 70% invisible stuff that helps to flatten the universe?
 
  • #56
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.
 
  • #57
edpell said:
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.

On the contrary, it is a very widely held view amongst the Cosmological Community.

The standard model is called the LCDM or ΛCDM model; the L, or better Λ, stands for the cosmological constant, shorthand for DE, whatever it may finally turn out to be.

In the standard model, DE is necessary to bolster the 4% baryonic matter and 23% Dark Matter to bring the total to near 100% of the critical density, to account for the observed flatness, or near flatness, of the geometry of space (WMAP observations etc.).

Also, with an equation of state of w = -1, DE explains the observed acceleration of the cosmological expansion.
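
For reference, a minimal sketch of that budget, and of why w = -1 produces acceleration: the standard acceleration equation gives, evaluated today, (a''/a)/H0^2 = -0.5 * sum of Omega_i (1 + 3 w_i) over the components.

Code:
# The LCDM budget quoted above, and the sign of the acceleration via
#   (a''/a) / H0^2 = -0.5 * sum_i Omega_i * (1 + 3 w_i)   (evaluated today).
components = {                      # (Omega_i, w_i)
    "baryons":     (0.04,  0.0),
    "dark matter": (0.23,  0.0),
    "dark energy": (0.73, -1.0),    # w = -1 is a cosmological constant
}
total = sum(omega for omega, w in components.values())
accel = -0.5 * sum(omega * (1 + 3 * w) for omega, w in components.values())
print(f"Total Omega = {total:.2f}  (a flat universe needs 1.00)")
print(f"(a''/a)/H0^2 = {accel:+.2f}  (positive means accelerating)")

Only a component with w < -1/3 can make the acceleration positive; matter alone (w = 0) always decelerates.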

Garth
 
  • #58
Garth said:
In the standard model, DE is necessary to bolster the 4% baryonic matter and 23% Dark Matter to bring the total to near 100% of the critical density, to account for the observed flatness, or near flatness, of the geometry of space (WMAP observations etc.).

I would like to understand this better. There are two uses of "flat" (I think): one meaning uniformity of density, and one meaning a certain topological shape. I think you mean the latter, the topological shape of the universe? How does WMAP tell us the topological shape of the universe? [I am not disagreeing; this is just new subject matter for me.]
 
  • #59
edpell said:
I would like to understand this better. There are two uses of "flat" (I think): one meaning uniformity of density, and one meaning a certain topological shape. I think you mean the latter, the topological shape of the universe? How does WMAP tell us the topological shape of the universe? [I am not disagreeing; this is just new subject matter for me.]
By 'flat' I do mean the geometric shape of the 3D space foliation (slice) of the 4D space-time of the universe.

The surface could be spherical (a 3D version of the Earth's 2D surface), flat, or hyperbolic (saddle shaped), depending on how much average density there is in the universe. This is a basic property of the cosmological solution of Einstein's GR Field Equation.

You can tell the type of surface that you are living in by studying the geometry around you.
A flat surface has triangles whose interior angles sum to 180°, a spherical surface has triangles whose interior angles sum to more than 180°, and a hyperbolic surface has triangles whose interior angles sum to less than 180°. Try it in 2D on different curved surfaces.

The WMAP observations are consistent with a flat surface.

This would require an average density equal to, or very nearly equal to, the critical density in Einstein's equations.

Garth
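
To put a number on that critical density, a minimal sketch (assuming H0 ≈ 70 km/s/Mpc, a representative value, not one quoted in this thread):

Code:
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), for an assumed H0 of 70 km/s/Mpc.
G = 6.674e-11                     # m^3 kg^-1 s^-2
Mpc = 3.086e22                    # m
H0 = 70.0e3 / Mpc                 # convert km/s/Mpc to 1/s

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"rho_c = {rho_c:.2e} kg/m^3")                             # ~9.2e-27 kg/m^3
print(f"      ~ {rho_c / 1.67e-27:.1f} hydrogen atoms per m^3")  # ~5.5 per m^3

A universe at critical density averages only a few hydrogen atoms per cubic metre.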
 
  • #60
edpell said:
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.

I was wrong; I withdraw the above statement.
 
