What Are the Key Questions About Supernova Data and the Modified Milne Model?

In summary: the author presents graphs and data that seem to support the idea that the universe may be a modified Milne model. Questions remain about the accuracy of the data and the validity of the modified Milne model. The author also discusses the implications of adding major events to the Milne model.
  • #71
Chalnoth said:
There is no physical definition for any particular coordinates in General Relativity. These are not physical entities, just labels we place on the system.

However, in this case the primed coordinates are the Minkowski coordinates, with the unprimed coordinates being the Milne coordinates. The Milne coordinates can be thought of as the inside of a light cone in Minkowski space-time. There is a plot here that shows the shape:
http://world.std.com/~mmcirvin/milne.html#time

In any case, coordinates are not physical things, and are merely chosen for their convenience for a particular problem. The Milne coordinates are exactly equivalent to the Minkowski coordinates.

If you have Mathematica, you can paste this into it. Otherwise, treat it as pseudocode.

(* Comoving worldlines: a vertical segment x = r from t = 0 to t = 1 for each r *)
e0 = Table[{r, 0}, {r, -10, 10}];
e1 = Table[{r, 1}, {r, -10, 10}];
comovingWorldLines = Transpose[{e0, e1}];
ListLinePlot[comovingWorldLines]

(* Milne worldlines: constant-rapidity segments from the origin (proper time 0)
   out to the unit hyperbola (proper time 1) *)
e0 = Table[{0 Sinh[r], 0 Cosh[r]}, {r, -1.5, 1.5, .1}];
e1 = Table[{1 Sinh[r], 1 Cosh[r]}, {r, -1.5, 1.5, .1}];
milneWorldLines = Transpose[{e0, e1}];
ListLinePlot[milneWorldLines]
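For readers without Mathematica, here is a rough standard-library Python equivalent that builds the same two families of line segments (the variable names are my own, and plotting is left to whatever library you prefer):

```python
import math

# Comoving worldlines (first ListLinePlot): vertical segments x = r, t in [0, 1].
comoving = [[(r, 0.0), (r, 1.0)] for r in range(-10, 11)]

# Milne worldlines (second ListLinePlot): constant-rapidity segments from the
# origin (proper time 0) out to the unit hyperbola (proper time 1).
phis = [i * 0.1 for i in range(-15, 16)]
milne = [[(0.0, 0.0), (math.sinh(p), math.cosh(p))] for p in phis]

# Sanity check: every Milne endpoint lies on t^2 - x^2 = 1 (proper time 1).
for (_, _), (x, t) in milne:
    assert abs(t * t - x * x - 1.0) < 1e-9

# Each entry of `comoving` / `milne` is a two-point segment; feed them to any
# plotting library (e.g. matplotlib's plt.plot) to reproduce the two figures.
```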

What "the metric" is doing is converting a homogeneous group of comoving particles into a set of particles which are separated by an equipartition of rapidity. (i.e. they start together at a point, and are flying away from each other.)

These two things are in no way the same. Milne's model is flying apart. The standard model is standing still. There's no way to claim they're both the same. In Milne's model, the particles were all at the same point at t=0. In the Standard Model, all the particles were at different points at t=0.

This is analogous to a Mercator projection of the Earth. On the Mercator projection, the north and south poles occupy the same space as the equator. In real life, the north and south poles are points. And there is no confusion about which form is real.
 
  • #72
JDoolin said:
These two things are in no way the same. Milne's model is flying apart. The standard model is standing still. There's no way to claim they're both the same. In Milne's model, the particles were all at the same point at t=0. In the Standard Model, all the particles were at different points at t=0.
In post #40 you derived the metric of the Milne model yourself, and it turned out to be the Minkowski metric, so what's your point?

Is this some kind of joke?
 
  • #73
A good way to re-write the equations in a sensible manner would be as follows:

[tex]\begin{matrix}
t = t' \cosh \phi\\
r = t' \sinh \phi
\end{matrix}[/tex]​

where t and r are the time and position, in the reference frame of a stationary observer, at which a particle that has traveled at constant rapidity [itex]\phi[/itex] from the Big Bang event (t=0, r=0) reaches the proper age t'.
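As a sanity check (a small Python sketch, not from the original post): these relations imply t² − r² = t′² for every rapidity, so the invariant interval from the Big Bang event recovers the particle's proper age.

```python
import math

def milne_coords(t_prime, phi):
    """Lab-frame (t, r) of a particle of proper age t_prime and rapidity phi."""
    return t_prime * math.cosh(phi), t_prime * math.sinh(phi)

for phi in (-2.0, -0.5, 0.0, 1.0, 3.0):
    t, r = milne_coords(2.5, phi)
    # The Minkowski interval from (0, 0) recovers the proper age 2.5.
    assert abs(math.sqrt(t * t - r * r) - 2.5) < 1e-9
```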

The use of a fictional reference frame where the particles are comoving is entirely unnecessary.
 
  • #74
While Milne was attempting to show how ridiculous Eddington's ideas were, he gave an equation that would map comoving world-lines ([itex]t \in (0, \infty)[/itex], r = constant) to world-lines (t', r') moving away from a single event at constant velocity.

[tex]\begin{matrix}
t' \to t \cosh r\\
r' \to t \sinh r
\end{matrix}[/tex]​

The equation was nonsense, and Milne's point was that it was nonsense. (To make it legitimate, the r term should be rapidity--not a distance. The t term refers to the proper time since the (0,0) event of the particle.)

However, because his point was also that Eddington's ideas were ridiculous, the Eddington followers latched onto the very equation that Milne was describing as nonsense, and began calling it The Milne Model.

The Minkowski metric and the real Milne metric are equivalent.
[tex]ds^2=dt^2-dx^2-dy^2-dz^2[/tex]​

However when you map in the nonsense equation,

[tex]\begin{matrix}
t \to t \cosh r\\
r \to t \sinh r
\end{matrix}[/tex]​

you "derive" the metric given on Wikipedia for the Milne Model:
[tex]ds^2 = dt^2-t^2(dr^2+\sinh^2{r} d\Omega^2) [/tex]​
where
[tex]d\Omega^2 = d\theta^2+\sin^2\theta d\phi^2 [/tex]​

This metric is no longer equivalent to the Minkowski Metric.
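The substitution itself can be checked numerically (a finite-difference Python sketch of my own, radial part only; the angular terms work out similarly): plugging t′ = t cosh r, r′ = t sinh r into ds² = dt′² − dr′² reproduces ds² = dt² − t² dr² to first order in the displacements.

```python
import math
import random

def to_minkowski(t, r):
    # The substitution under discussion: t' = t cosh r, r' = t sinh r.
    return t * math.cosh(r), t * math.sinh(r)

random.seed(0)
eps = 1e-6
for _ in range(100):
    t = random.uniform(0.5, 5.0)
    r = random.uniform(-2.0, 2.0)
    dt = eps * random.uniform(-1, 1)
    dr = eps * random.uniform(-1, 1)
    t1, r1 = to_minkowski(t, r)
    t2, r2 = to_minkowski(t + dt, r + dr)
    ds2_minkowski = (t2 - t1) ** 2 - (r2 - r1) ** 2
    ds2_milne = dt ** 2 - t ** 2 * dr ** 2
    # The two line elements agree to first order in the displacements.
    assert abs(ds2_minkowski - ds2_milne) < 1e-14
```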

Why is it important not to change metrics?

Your distance and time are fundamentally different things from the rapidity and proper time of a distant particle that has maintained constant velocity since (t=0, r=0). If I have to try to use the Lorentz transformations but am only allowed to use the initial rapidity of the particle, and its proper age assuming it remained at that initial rapidity, I won't be able to do any good physics at all.

If you have a set of comoving galaxies, and treat it in Minkowski spacetime, then when an observer changes velocity, you'll have length contraction of the entire universe. In other words, there is one unique velocity at which the universe appears to be "at rest."

If you have a set of particles, all equipartitioned by rapidity, all coming from a single event, and treat the system in Minkowski spacetime, the result is a Lorentz-invariant expanding sphere. Meaning, if an observer accelerates, no matter how large the [itex]\Delta v[/itex], he will continue to be inside an expanding spherical shape. This means there is no "special" velocity at which the universe appears spherical. There is also no "special" particle within this system who can say "only I am at the center." No matter how fast the observer is going, the universe will look like a sphere. And no matter which particle you pick, it looks like it is in the center.
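The Lorentz-invariance claim can be illustrated numerically (a Python sketch of my own construction): boosting every particle in the family shifts each rapidity by the same amount, so the boosted set is again an equipartitioned family on the same proper-time hyperbola, still inside the light cone.

```python
import math

tau = 1.0                                  # common proper age of every particle
phis = [0.2 * k for k in range(-10, 11)]   # equipartitioned rapidities

def boost(t, x, dphi):
    """Lorentz boost of an event (t, x) by rapidity dphi."""
    return (t * math.cosh(dphi) - x * math.sinh(dphi),
            x * math.cosh(dphi) - t * math.sinh(dphi))

particles = [(tau * math.cosh(p), tau * math.sinh(p)) for p in phis]
boosted = [boost(t, x, 0.7) for t, x in particles]

# Every boosted particle is still on the tau = 1 hyperbola (inside the light
# cone), and its rapidity is just the old one shifted by the boost rapidity.
for t, x in boosted:
    assert abs(t * t - x * x - tau * tau) < 1e-9
new_phis = [math.atanh(x / t) for t, x in boosted]
shifts = [pn - po for pn, po in zip(new_phis, phis)]
assert all(abs(s + 0.7) < 1e-9 for s in shifts)
```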

Jonathan
 
  • #75
So you can't use Lorentz transformations. So what? Any physical quantity you could ever calculate will give you the exact same result in either coordinate system.

An example of a physical quantity, by the way, would be the time required for a light beam to bounce off some far-away mirror and return.
 
  • #76
Chalnoth said:
So you can't use Lorentz transformations. So what? Any physical quantity you could ever calculate will give you the exact same result in either coordinate system.

An example of a physical quantity, by the way, would be the time required for a light beam to bounce off some far-away mirror and return.

Yeah. Unfortunately, I ended with what I realized was my weakest point. I deleted the line, but then I realized you had already responded. Sorry about that.

However, since you brought it up, though you make the case that you "can't use Lorentz Transformations" that is a cop-out. Changing the metric does not release you from the Lorentz Transformation--it only changes the form of the Lorentz Transformation.
 
  • #77
JDoolin said:
Yeah. Unfortunately, I ended with what I realized was my weakest point. I deleted the line, but then I realized you had already responded. Sorry about that.
Fair enough, but I still don't see how a change in coordinates is something to argue against. If they're useful, they're useful. If not, not. A coordinate change doesn't change anything measurable.

JDoolin said:
However, since you brought it up, though you make the case that you "can't use Lorentz Transformations" that is a cop-out. Changing the metric does not release you from the Lorentz Transformation--it only changes the form of the Lorentz Transformation.
Yes.
 
  • #78
JDoolin said:
If you have a set of particles, all equipartitioned by rapidity, all coming from a single event, and treat the system in Minkowski Spacetime,

How is this a solution to Einstein's equation for general relativity?
 
  • #79
George Jones said:
How is this a solution to Einstein's equation for general relativity?

Obviously, it's not. Milne never accepted GR.
 
  • #80
Hmmm. I'd better start distinguishing between the Minkowski-Milne model and the Friedman-Milne model. The Minkowski-Milne model describes an infinite number of particles flying apart from a single event into pre-existing "Minkowski" space.

The Friedman-Milne model is a mapping from one spacetime where all of the particles are comoving to another spacetime where all of the particles are flying apart. Which of these two spacetimes is the one where the Minkowski Metric applies? And which one of them is what you think of as the "true" metric?

George Jones said:
How is this a solution to Einstein's equation for general relativity?

I don't entirely understand the Einstein Field Equations or what they are for. They are the "equations you solve to do General Relativity" and have something to do with gravity.

I still am stuck, conceptually, on how taking a derivative of the scale factor has any meaningful relationship to gravity. Part of the problem is that I stubbornly insist that the scale factor is constant. From my perspective, of course, it appears you are stubbornly insisting the scale factor is NOT constant, though I cannot fathom your reason to suppose it is changing.

Certainly I have seen pictures of earth-colored balls causing dents in a sheet, and then other balls will roll down to them. At one time, I actually thought "aha!" but over time I realized this had no explanatory power whatsoever. All that model does is turn the source of the gravity perpendicular to the plane of motion. This would require a fourth spatial dimension if it were a valid description.

If you want to see where I'm at in understanding the Einstein Field Equations, go back into this thread and read posts 11, 14, 21, 23-28.

From my own explorations, I am rather swayed that only time is affected by gravity. For instance, from my (not entirely complete) analysis of the Rindler coordinate problem, it seems to me that the deeper a clock is in a gravitational well, the slower it will tick. Though I'm still working on it, I currently suspect this slowing in time also slows the speed of light. As long as the speed of light goes slower, and not faster, then all of the event-intervals associated with that disturbed light ray become timelike (which means they won't make causality problems.)

But the rocket in the Rindler problem is actually exactly the same length to a person on board the rocket as it is to an inertial observer with whom the rocket is instantaneously at rest.

If the rocket appears to be the same length to both parties, this means that "acceleration" does not cause a warping of space--hence I would expect that gravity does not either.

As such, I would propose a theory of gravity which merely slows the clocks (and possibly the speed of light) in gravitational wells, but does not affect the scale of space.

I don't know whether such a thing is compatible with the Einstein Field Equations. There are apparently 10 Einstein field equations, so if it is compatible, perhaps this would reduce their number, and simplify them greatly.

Jonathan
 
  • #81
JDoolin said:
I still am stuck, conceptually, on how taking a derivative of the scale factor has any meaningful relationship to gravity. Part of the problem is that I stubbornly insist that the scale factor is constant. From my perspective, of course, it appears you are stubbornly insisting the scale factor is NOT constant, though I cannot fathom your reason to suppose it is changing.
One way to look at it is this. Let's imagine that we want to answer the question, "What is the most general type of metric we can write down that is both homogeneous and isotropic?"

First of all, if it is to be isotropic, the metric must not have any off-diagonal components. That is, there are no [itex]dxdy[/itex] or [itex]drd\theta[/itex] components.

Now, if we multiply the entire metric by any function, it doesn't change the physics, so we can arbitrarily choose the [itex]dt^2[/itex] component to have no pre-factors. Now, to make things simple, we'll work in Euclidean space for the three spatial components, and ask what sorts of metric factors they can pick up. Well, since we demand isotropy, we know that whatever function we choose, we must place the same function in front of every spatial component of the metric. Otherwise we would be picking out a specific direction in space.

Now this function we place in front of the other components of the metric can obviously be a function of time and retain homogeneity and isotropy. Naively we wouldn't think, however, that it could be a function of space. But it does turn out that there is a specific choice of function that does depend upon space which still obeys homogeneity and isotropy: constant spatial curvature.

So our general homogeneous, isotropic metric becomes:

[tex]ds^2 = dt^2 - {a^2(t) \over 1 - k(x^2 + y^2 + z^2)}(dx^2 + dy^2 + dz^2)[/tex]

So we automatically get a scale factor that depends upon time just by asking what the most general homogeneous, isotropic metric can be. It then becomes an exercise in math to determine what this metric does in General Relativity, and we are led inexorably to the Friedmann equations.
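The empty-universe limit of this can be checked directly: with ρ = 0 and k = −1 the Friedmann equation (ȧ/a)² = (8πG/3)ρ − k/a² reduces to ȧ = 1, i.e. a(t) = t, the coasting Milne scale factor. A minimal Euler integration in Python (an illustrative sketch, not from the original posts) confirms it.

```python
import math

def a_dot(a, rho=0.0, k=-1.0):
    # Friedmann equation: a_dot^2 = (8*pi*G/3) * rho * a^2 - k
    G = 1.0  # value is irrelevant here since rho = 0
    return math.sqrt((8 * math.pi * G / 3) * rho * a * a - k)

# Integrate forward from a(1) = 1; with rho = 0 this should track a(t) = t.
t, a, dt = 1.0, 1.0, 1e-4
while t < 5.0:
    a += a_dot(a) * dt
    t += dt

assert abs(a - t) < 1e-3
```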

JDoolin said:
Certainly I have seen pictures of earth-colored balls causing dents in a sheet, and then other balls will roll down to them. At one time, I actually thought "aha!" but over time I realized this had no explanatory power whatsoever. All that model does is turn the source of the gravity perpendicular to the plane of motion. This would require a fourth spatial dimension if it were a valid description.
This is just a visualization of the curvature. General Relativity requires no extra dimensions to describe the curvature of space-time, but we can't very well visualize the curvature without artificially adding an extra dimension.

What happens in General Relativity, though, is that so-called "test particles" always follow paths that mark the shortest space-time distance between two points in space-time. These hypothetical test particles are objects which respond to the space-time curvature but don't affect it. They are a good approximation to reality whenever you're tracking the path of an object that is much less massive/energetic than the sources of the gravitational field it's traveling in.

Now, in flat space-time, the shortest path between any two events is always a straight line. This means that in flat space-time, objects always move with constant speed in a constant direction.

So when we see an object like the Moon orbiting the Earth, that means there is a massive departure from flat space-time surrounding the Earth: instead of going in a straight line, the Moon goes in a circle! This can be visualized as space-time being sort of a rubber sheet and the Earth providing an indentation on that sheet, an indentation which the Moon follows, but this is just a visualization because we simply can't visualize four-dimensional space-time curvature directly.

One thing that we know from General Relativity, however, is that the only way you can have flat space-time, which is the case for Minkowski/Milne space-time, is if the universe is empty. If you take the above homogeneous, isotropic metric, for example, the Milne metric pops out as the metric you get when you set the energy density of the universe to zero.
 
  • #82
JDoolin said:
This metric is no longer equivalent to the Minkowski Metric.

Yes it is. Let's define some light beams. When you have a beam of light then ds = 0, and you'll find that the curves for which ds=0 are the same. Once you have a grid of light beams, then you can start describing the path of an object in reference to different light beams, and if you change from one coordinate to another, you'll find that the paths are the same.

The particles in your universe don't know anything about r or t. They can only do experiments by sending light beams over to each other or describing their location with respect to light beams, and you'll find that those are the same.
 
  • #83
JDoolin said:
You are replacing distance with distance, and time with time. Certainly, you preserve all of the information by doing so, but you do not preserve the shape.

The information about the shape is in the ds equation. When you change your coordinates, then the distance equation changes so that the shapes are the same.

Three of these transformations significantly affect the shape of the earth, while the fourth only affects the size and position.

They only change the shape if you throw away the metric equation.
 
  • #84
JDoolin said:
What "the metric" is doing is converting a homogeneous group of comoving particles into a set of particles which are separated by an equipartition of rapidity. (i.e. they start together at a point, and are flying away from each other.)

No, you aren't. You are just replacing one piece of graph paper with one that has different lines. Now if you have particles that follow the lines of one piece of graph paper, and then you change the graph paper radically, they're no longer going to follow the lines on the other piece.

But that doesn't matter.

These two things are in no way the same. Milne's model is flying apart. The standard model is standing still. There's no way to claim they're both the same.

Different pieces of graph paper. Beams of light will travel along lines in which ds=0.
 
  • #85
JDoolin said:
Hmmm. I'd better start distinguishing between the Minkowski-Milne model and the Friedman-Milne model. The Minkowski-Milne model describes an infinite number of particles flying apart from a single event into pre-existing "Minkowski" space.

You are using the word "metric" in a way that I don't understand.

In SR, you can use any set of coordinates you want to describe a physical situation. The important number is the "space-time distance" between two events, and two observers will always agree on that. If you have a beam of light, the coordinates through which the beam of light goes through is always going to be ds=0.

Everything else is just graph paper.

Now if you are proposing something different, that's fine, but you aren't talking about metrics.

But it doesn't matter...

Also, to relate this to observational cosmology: it's really all rather unimportant when you compare to observations. The only thing that you care about is how quickly the universe expands. Whether it expands according to GR, SR, or something else isn't important. Once you get an equation for how quickly the universe expands, then you see how sound waves go through the expanding universe, and you get a lumpiness factor.

Now it turns out that you can punch in numbers to your computer programs in which the universe expands in exactly the same way that the Milne model says it should, and you find that the universe expands too quickly. The faster the universe expands, the quicker it cools and the more deuterium you end up with. Also, the faster the universe expands, the further sound waves can go before they stall...

http://cmb.as.arizona.edu/~eisenste/acousticpeak/acoustic_physics.html

The important thing to point out is that *these* calculations only involve gas physics; gravity only enters as far as it tells you how quickly the universe expands.
 
  • #86
Chalnoth said:
One way to look at it is this. Let's imagine that we want to answer the question, "What is the most general type of metric we can write down that is both homogeneous and isotropic?"

First of all, if it is to be isotropic, the metric must not have any off-diagonal components. That is, there are no [itex]dxdy[/itex] or [itex]drd\theta[/itex] components.

Now, if we multiply the entire metric by any function, it doesn't change the physics, so we can arbitrarily choose the [itex]dt^2[/itex] component to have no pre-factors. Now, to make things simple, we'll work in Euclidean space for the three spatial components, and ask what sorts of metric factors they can pick up. Well, since we demand isotropy, we know that whatever function we choose, we must place the same function in front of every spatial component of the metric. Otherwise we would be picking out a specific direction in space.

To me, claiming that the space is stretching represents a HUGE change in the physics. To me, claiming that Lorentz Transformations are not valid in cosmology represents a HUGE change in the physics. If it did not represent a change in the physics then we would not be arguing with each other. We would be saying to one another: "ah, yes, that's another perfectly valid way to look at it."

For the Milne-Minkowski model, I would suggest that we should consider the view of this planet from a distant galaxy traveling away at 90% or 99% of the speed of light. If the alien is asked to compute the speed of a clock on Earth, then for a good approximation he may freely neglect the rotational velocity of the arms of the Milky Way Galaxy. And the effect of the Earth's gravity on the speed of the clock will be even more negligible than that. The small effects of general relativity will be tiny compared to the effects of Special Relativity.

But I frequently hear proponents of the "standard model" say that the effects of Special Relativity are only a local effect. (since all the galaxies are comoving, I gather, there is no time-dilation or desynchronization between the galaxies.) This is simply not true in the Milne-Minkowski model--where you must consider the relativity of simultaneity. This represents another HUGE change in the physics based on the metric.

Chalnoth said:
Now this function we place in front of the other components of the metric can obviously be a function of time and retain homogeneity and isotropy. Naively we wouldn't think, however, that it could be a function of space. But it does turn out that there is a specific choice of function that does depend upon space which still obeys homogeneity and isotropy: constant spatial curvature.

Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?

This is what Milne already has found--a distribution of matter in Minkowski Space that is both homogeneous and isotropic. Isn't the only reason that Friedmann etc. continued to look for a "metric" because they erroneously denied that Milne's model was homogeneous and isotropic?

Chalnoth said:
So our general homogeneous, isotropic metric becomes:

[tex]ds^2 = dt^2 - {a^2(t) \over 1 - k(x^2 + y^2 + z^2)}(dx^2 + dy^2 + dz^2)[/tex]

So we automatically get a scale factor that depends upon time just by asking what the most general homogeneous, isotropic metric can be. It then becomes an exercise in math to determine what this metric does in General Relativity, and we are led inexorably to the Friedmann equations.

We should check the possibility that the variety of "metrics" you are creating may well be ways to map a stationary or comoving distribution of matter into a variety of homogeneous isotropic moving distributions of matter.

If so, there may be some compatibility between what we are each talking about, and I strongly suspect there is.

Chalnoth said:
This is just a visualization of the curvature. General Relativity requires no extra dimensions to describe the curvature of space-time, but we can't very well visualize the curvature without artificially adding an extra dimension.

What happens in General Relativity, though, is that so-called "test particles" always follow paths that mark the shortest space-time distance between two points in space-time. These hypothetical test particles are objects which respond to the space-time curvature but don't affect it. They are a good approximation to reality whenever you're tracking the path of an object that is much less massive/energetic than the sources of the gravitational field it's traveling in.

In this area, I will not argue with you. When you're talking about local gravitational effects, I can entertain the idea of a non-constant metric. But it has to be a mapping from one view to another view--for instance the free-falling view, vs. the view from the ground, vs. the view from orbit, vs. the view from the center of the planet.

The variables must represent different physical quantities before and after the "metric" is applied.

I think the case has been made for the local effects of gravity, but from afar, all these local effects will simply manifest themselves as a slowing of the speed of light. All of the events can still be mapped to a Minkowskian global metric. The large scale global metric does not need to adjust for these modified light-like intervals, for we already have many examples of materials (glass, water, etc) slowing the speed of light.

Chalnoth said:
Now, in flat space-time, the shortest path between any two events is always a straight line. This means that in flat space-time, objects always move with constant speed in a constant direction.

So when we see an object like the Moon orbiting the Earth, that means there is a massive departure from flat space-time surrounding the Earth: instead of going in a straight line, the Moon goes in a circle! This can be visualized as space-time being sort of a rubber sheet and the Earth providing an indentation on that sheet, an indentation which the Moon follows, but this is just a visualization because we simply can't visualize four-dimensional space-time curvature directly.

One thing that we know from General Relativity, however, is that the only way you can have flat space-time, which is the case for Minksowki/Milne space-time, is if the universe is empty. If you take the above homogeneous, isotropic metric, for example, the Milne metric pops out as the metric you get when you set the energy density of the universe to zero.

I'm pretty sure you are still applying the Friedman/Milne logic. In the Friedman/Milne model, you pretend that you don't need to worry about the relativity of simultaneity, because all the galaxies are comoving.

But remember, in the Minkowski/Milne model, we have already found a homogeneous, isotropic distribution of matter, without any change in "metric" at all. Since the distribution is isotropic, no matter how much matter or energy there is, it should all balance out--there's no net force in any direction, no matter how much "matter density" or "energy density" you have.

You have said the Milne model introduces an "explosion" which you find unaesthetic. But I think this is more aesthetically pleasing than what the standard model offers: In the standard model, everything in the universe appeared all at once, at t=0, uniformly distributed through space, all perfectly stationary with each other, but in a universe with a scale factor of zero.

So, instead of a single event creating all the matter in the universe, the standard model offers an infinite number of events, all occurring at the same time, at different places, but in the same place because the scale factor was zero.

Perhaps you find the point "explosion" idea unaesthetic, but do you really think it is more bizarre than the standard model's tiny infinite universe?
 
  • #87
twofish-quant said:
Now it turns out that you can punch in numbers to your computer programs in which the universe expands in exactly the same way that the Milne model says it should, and you find that the universe expands too quickly.

I need more detail here. Exactly how did they make this analysis that Milne's model universe would expand too quickly? Was this after or before they decided Milne's model had no matter in it?

The outer radius of the Minkowski/Milne's universe would expand at a speed of precisely the speed of light, though, as I've mentioned elsewhere, to an accelerating observer, the twin paradox manifests itself as universal inflation.

As for the local expansion, that would be determined, approximately, by an equipartition of rapidity, and the scale of the partition would be determined somehow by Planck's constant, and the mass of the primordial particles. If the size of those particles were extremely large, this velocity would be extremely low. I don't think you can say exactly how fast the Milne model would expand, unless you know the nature of the first particles, and how fast they moved away from each other.

In the context of the Minkowski/Milne model, what I would recommend is determine how fast the universe appears to expand, locally, and then, from that they could determine the size of this primordial particle.

In my original post, I said...
The reason I wish to modify the Milne model is to add two or three major events. These events are sudden accelerations of our galaxy or explosions of the matter around our galaxy, while the universe was still very dense, well before our galaxy actually spread out into stars.

The possibility had occurred to me that some of these events might be the quantum decay processes of gargantuan primordial particles.
 
  • #88
JDoolin said:
To me, claiming that the space is stretching represents a HUGE change in the physics. To me, claiming that Lorentz Transformations are not valid in cosmology represents a HUGE change in the physics.
Here's the thing: if you work with purely Newtonian gravity and work out how an expanding universe would behave, you get the same answer. So arguing against the expanding universe requires arguing that the behavior of gravity changes drastically on large distance scales. And we don't have any evidence of that.
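The Newtonian parallel can be sketched numerically (a Python illustration in arbitrary units, my own construction, using the marginally bound case for simplicity): a dust shell obeying r̈ = −GM/r² with zero total energy expands as r ∝ t^(2/3), the same behaviour the matter-dominated Friedmann equations give.

```python
import math

# Marginally bound Newtonian shell: (1/2) r_dot^2 = G*M/r, so r_dot = sqrt(2*G*M/r).
# The exact solution of this is r(t) = (9*G*M/2)**(1/3) * t**(2/3).
G, M = 1.0, 1.0

def analytic(t):
    return (4.5 * G * M) ** (1 / 3) * t ** (2 / 3)

# Integrate numerically from t = 1 and check we stay on the t^(2/3) curve.
t, r, dt = 1.0, analytic(1.0), 1e-5
while t < 4.0:
    r += math.sqrt(2 * G * M / r) * dt
    t += dt

assert abs(r / analytic(t) - 1) < 1e-3
```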

What's more, we do have ample evidence against the Milne cosmology. Nobody is disagreeing that the Milne cosmology is different. It's just that the Milne cosmology is ruled out by observation.

JDoolin said:
But I frequently hear proponents of the "standard model" say that the effects of Special Relativity are only a local effect. (since all the galaxies are comoving, I gather, there is no time-dilation or desynchronization between the galaxies.) This is simply not true in the Milne-Minkowski model--where you must consider the relativity of simultaneity. This represents another HUGE change in the physics based on the metric.
Yes, because in General Relativity, you can use Minkowski space-time to describe the local region about any point. But if you try to apply special relativity globally, you start getting the wrong answers pretty quickly. Now, many of the same effects you see in Special Relativity still exist in General Relativity; it's just that the details differ. You may think of General Relativity as only talking about effects due to the local galaxy, but in cosmology it also adds effects due to the intervening curvature between us and a far-away galaxy. Of course, you have to go very far because the cosmological curvature is very small, but when you get out to a few billion light years, the differences start to become significant.

JDoolin said:
Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?
Well, if the distribution of matter obeys homogeneity and isotropy, then the particular solution to the Einstein equations must also obey the same symmetries. Thus we write down a metric that obeys homogeneity and isotropy in order to reduce the number of degrees of freedom, to make the system easier to solve. In this case, it reduces to a function of time (the scale factor) and a constant parameter (the spatial curvature). The relationship between these and a homogeneous, isotropic matter distribution leads us, through the Einstein field equations, to the Friedmann equations.

JDoolin said:
This is what Milne already has found--a distribution of matter in Minkowski Space that is both homogeneous and isotropic. Isn't the only reason that Friedmann etc. continued to look for a "metric" because they erroneously denied that Milne's model was homogeneous and isotropic?
So? It's observationally wrong.

JDoolin said:
You have said the Milne model introduces an "explosion" which you find unaesthetic.
It's not "unaesthetic". It's observationally wrong.
 
  • #89
JDoolin said:
I need more detail here. Exactly how did they make this analysis that Milne's model universe would expand too quickly?

OK. Let's forget about a theory of gravity. You just give me some equations telling me how you think the universe is behaving, and then I run them through a simulation that models the behavior of gas under the conditions you gave me.

The three things that I can get out of those simulations are:

1) the composition of the universe from nuclear reaction rates
2) the lumpiness factors of the cosmic microwave background
3) the lumpiness factors of the galaxies

So let's have things expand at a constant rate, and let's not be concerned about how that happens. What you find is that the universe cools very quickly, and so you end up not burning off the deuterium. The second thing you find is that the sound waves travel further before they run into each other, and so you end up with a universe that is much less lumpy.

The important thing about these constraints is that you are very limited as to the amount of weird physics that you can put in. Gas is gas. Nuclear reactions are nuclear reactions. What happens is that when you put in all of the known physics, it doesn't work either. At that point you ask yourself what you have to do to get things to work, and you find that things work out if you put in just the right amount of dark matter and dark energy.

In the context of the Minkowski/Milne model, what I would recommend is to determine how fast the universe appears to expand, locally, and then, from that, determine the size of this primordial particle.

If I'm understanding the Milne model, things are expanding at a constant rate, so you just take the current Hubble expansion and then assume that there is no slowdown.
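To put a rough number on that (an illustrative back-of-the-envelope sketch of my own; the value H0 = 70 km/s/Mpc and the unit conversions are assumptions, not numbers from this thread): in a coasting Milne universe a(t) is proportional to t, so the age of the universe is exactly 1/H0, while a matter-dominated Einstein-de Sitter universe gives (2/3)/H0.

```python
# Ages of the universe under two expansion histories (sketch; H0 assumed).
H0_KM_S_MPC = 70.0       # assumed Hubble constant, km/s/Mpc
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
S_PER_GYR = 3.156e16     # seconds in one gigayear

H0_per_s = H0_KM_S_MPC / KM_PER_MPC            # H0 in inverse seconds
age_milne_gyr = (1.0 / H0_per_s) / S_PER_GYR   # coasting: a(t) ~ t
age_eds_gyr = (2.0 / 3.0) / H0_per_s / S_PER_GYR  # Einstein-de Sitter: a(t) ~ t^(2/3)

print(f"Milne age: {age_milne_gyr:.1f} Gyr, EdS age: {age_eds_gyr:.1f} Gyr")
```

The coasting age comes out near 14 Gyr, the decelerating matter-dominated age near 9.3 Gyr, which is why the early thermal history differs so much between the two.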
 
  • #90
JDoolin said:
To me, claiming that the space is stretching represents a HUGE change in the physics.

Curiously, the fact that space "bends" is something that you can test experimentally with spacecraft.

Anyway, if you find GR weird as a theory of gravity and want to propose a new one, that's fine. There is an entire industry of physicists proposing alternative theories of gravity. However, if you want to apply any new theory to the universe, you have to deal with the observational constraints that I've mentioned. You tell me how the universe expands, you push the numbers into your favorite nucleosynthesis and lumpiness-factor code, and I tell you whether that will work or not.

The two things that the standard models get right are the deuterium abundances and the existence of the first acoustic peak.

Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?

Because you don't get the right deuterium abundances and the first acoustic peak.

You have said the Milne model introduces an "explosion" which you find unaesthetic. But I think this is more aesthetically pleasing than what the standard model offers: In the standard model, everything in the universe appeared all at once, at t=0, uniformly distributed through space, all perfectly stationary with each other, but in a universe with a scale factor of zero.

No it doesn't. The standard model of cosmology says *NOTHING* about what happened pre-inflation. I have to put this in bold because this is something people get wrong. With current observations you can get to the inflationary period, but what happened before is *NOT* part of the standard model.

So, instead of a single event creating all the matter in the universe, the standard model offers an infinite number of events, all occurring at the same time, at different places, but in the same place because the scale factor was zero.

No it doesn't. The standard model says *NOTHING* about how things behaved at t=0.
 
  • #91
twofish-quant said:
1) the composition of the universe from nuclear reaction rates

So let's have things expand at a constant rate, and let's not be concerned about how that happens. What you find is that the universe cools very quickly and so you end up without burning deuterium.
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis -
that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same
authors.
 
  • #92
Old Smuggler said:
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis -
that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same
authors.
I have a hard time seeing how elements much heavier than helium would fail to form in the early universe in such a cosmology.

But at any rate, it doesn't much matter, because it's completely ruled out by the scale of inhomogeneities in the CMB.
 
  • #93
Old Smuggler said:
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis -
that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same
authors.

No it's not.

It's easy to get the right amount of helium with any sort of BB model. What happens is that the ratio of protons to neutrons is rather constant regardless of what you do, and most of it is going to get burned to He4. The really hard thing to get right is deuterium, because the amount of deuterium changes radically depending on how quickly the temperatures cool. The authors of the paper realize this and mention it on page 4.

To explain away the wrong number for deuterium, they invoke spallation and cite an obsolete paper from the 1970s, saying that "If one considers spallation of a helium deficient cloud onto a helium rich cloud, it is easy to produce deuterium as demonstrated by Epstein", which is just flatly wrong. If you try to produce deuterium through spallation, it turns out that you never produce any: if your energies are too low, you produce lots of lithium, and if your energies are too high, things just shatter and you produce nothing.

People tried very hard to get the models to work using spallation and the consensus is that they don't. Put in some dark matter and they work just fine. The dark matter keeps the universe from expanding too quickly and this burns off deuterium.
 
  • #94
I've completed a one-spatial-dimension demonstration of what the Milne Minkowski model would predict.

Here, there are two major events represented. One of them is the big bang event. The second is not technically a single event, but represents many, many primordial particles decaying at approximately the same point in time and space, thus resulting in something approximating a second "big bang".

This is what I was trying to get across in my ASCII diagrams (http://groups.google.com/group/sci.astro/msg/2751e0dc068c725c?hl=en).


[Attached image: modMilne3-1.jpg — world-line diagram of the modified Milne model with one secondary decay "bang"]


Within this model there are several parameters that are "To Be Determined."

maxRapidity should be infinite, representing the initial big bang event.
deltaRapidity1 is a function of the initial primordial particles.
firstHalfLife is a property of the initial primordial particles.
deltaRapidity2 is a function of the energy of the decay process.

You see that some of the world-lines cross each other. This would have to modify the model somewhat, as it will mean particles are ramming into each other all over the universe.

Also, in the diagram, I have only represented one secondary decay-process "bang." The full model should have an infinite number of such secondary bangs, all along a hyperbolic arc of constant tau=halflife.

Code:
maxRapidity = 5;
deltaRapidity1 = .2;
bigBangEvent = {0, 0}; (*All primary world lines start at the origin.*)
e0 = Table[
   bigBangEvent, {rapidity, -maxRapidity, maxRapidity, deltaRapidity1}];
e1 = Table[{Sinh[rapidity],
    Cosh[rapidity]}, {rapidity, -maxRapidity, maxRapidity,
    deltaRapidity1}];
(*Endpoints after one unit of proper time.*)

firstHalfLife = 0.4; fHL = firstHalfLife;
secondHalfLife = 1 - firstHalfLife; sHL = secondHalfLife;
initialRapidity = -3; iR = initialRapidity;
deltaRapidity2 = .1; dR = deltaRapidity2;
resultingParticles = 12; rP = resultingParticles;
decayEvent = {fHL Sinh[iR], fHL Cosh[iR]};
nextWorldLinesBegin = Table[decayEvent, {n, -rP, +rP}];
nextWorldLinesEnd =
  Table[decayEvent + {sHL*Sinh[iR + n*dR],
     sHL*Cosh[iR + n*dR]}, {n, -rP, rP}];

e0 = Join[e0, nextWorldLinesBegin];
e1 = Join[e1, nextWorldLinesEnd];

(*Translate so the decay event is at the origin, then apply the Lorentz transformation around it.*)
decayEventList = Table[decayEvent, {n, 1, Length[e0]}];
e0 = e0 - decayEventList;
e1 = e1 - decayEventList;

LT[theta_] := {{Cosh[theta], -Sinh[theta]}, {-Sinh[theta],
    Cosh[theta]}};
Manipulate[
 ePrime0 = Transpose[LT[theta].Transpose[e0]];
 ePrime1 = Transpose[LT[theta].Transpose[e1]];
 milneWorldLines = Transpose[{ePrime0, ePrime1}];
 ListLinePlot[milneWorldLines,
  PlotRange -> {{-2, 2}, {-.5, 2}}], {{theta, iR}, iR - (rP*dR)/2,
  iR + (rP*dR)/2}]
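For anyone without Mathematica, here is an equivalent plain-Python sketch of the primary worldline construction (my own rendering; plotting is omitted, and the helper names endpoint and boost are mine). It also checks the key geometric fact: every endpoint lies on the hyperbola of constant proper time, and a Lorentz boost maps the family onto itself.

```python
import math

# Particles leave the origin with equally spaced rapidities; after proper
# time tau each sits at (x, t) = (tau*sinh(phi), tau*cosh(phi)).

def endpoint(tau, phi):
    """Endpoint of a worldline with rapidity phi after proper time tau."""
    return (tau * math.sinh(phi), tau * math.cosh(phi))

def boost(point, theta):
    """Lorentz boost by rapidity theta in 1+1 dimensions (c = 1)."""
    x, t = point
    return (x * math.cosh(theta) - t * math.sinh(theta),
            t * math.cosh(theta) - x * math.sinh(theta))

max_rapidity, delta_rapidity = 5.0, 0.2
rapidities = [-max_rapidity + n * delta_rapidity for n in range(51)]
endpoints = [endpoint(1.0, phi) for phi in rapidities]

# A boost just shifts every rapidity by -theta, so the picture looks the
# same to every comoving observer; each endpoint stays on t^2 - x^2 = 1.
boosted = [boost(p, 0.4) for p in endpoints]
```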

Now you keep telling me that they tried it and it didn't work, but I think this analysis I'm doing is unique. I've not seen anybody really give the model half a chance.
 
  • #95
If you're not going to pay any attention to the observational evidence already mentioned, why should we pay the model any further attention when it completely ignores gravity?
 
  • #96
twofish-quant said:
Curiously the fact that space "bends" is something that you can test experimentally with spacecraft .

Again, this is a local effect.

Anyway, if you find GR weird as a theory of gravity and want to propose a new one, that's fine. There is an entire industry of physicists proposing alternative theories of gravity. However, if you want to apply any new theory to the universe, you have to deal with the observational constraints that I've mentioned. You tell me how the universe expands, you push the numbers into your favorite nucleosynthesis and lumpiness-factor code, and I tell you whether that will work or not.

The two things that the standard models get right are the deuterium abundances and the existence of the first acoustic peak.

I propose no theory of gravity, except to say that if you have isotropy, there's no net pull in any direction.



Because you don't get the right deuterium abundances and the first acoustic peak.

I'm not at all that far along. I approximate the many decay processes as a single event, occurring where the proper time reaches the half-life. It calls into question whether early decays would cause a chain reaction, or whether the decay rate would follow a regular exponential curve in time. If you got a chain reaction, maybe it would create a flow of matter.

No it doesn't. The standard model of cosmology says *NOTHING* about what happened pre-inflation. I have to put this in bold because this is something people get wrong. With current observations you can get to the inflationary period, but what happened before is *NOT* part of the standard model.



No it doesn't. The standard model says *NOTHING* about how things behaved at t=0.

I am modeling right back to t=0, in Minkowski spacetime, because I think by doing so, we can actually explain inflation, and explain variation in Hubble's Constant.

The point of going back to t=0 is that it forces us to ask the question: which makes more sense? A universe that began at a single event, or a universe which began simultaneously at many points in space? Especially since there is no universal meaning of "simultaneously." Distant events that are simultaneous for one observer are spread out in space and time, to unlimited extent, for another.
 
  • #97
Chalnoth said:
If you're not going to pay any attention to the observational evidence already mentioned, why should we pay the model any further attention when it completely ignores gravity?

ISOTROPY! You have no net force in any direction.
 
  • #98
JDoolin said:
I propose no theory of gravity, except to say that if you have isotropy, there's no net pull in any direction.
As I said earlier, you can do the calculations for the interaction between gravity and a uniform fluid either in General Relativity or in Newtonian gravity. You get the same answer either way.
 
  • #99
Chalnoth said:
As I said earlier, you can do the calculations for the interaction between gravity and a uniform fluid either in General Relativity or in Newtonian gravity. You get the same answer either way.

Well, that sounds like a nice place to start. An infinite uniform fluid? And what do you find that answer to be?
 
  • #100
JDoolin said:
Well, that sounds like a nice place to start. An infinite uniform fluid? And what do you find that answer to be?
This leads to the Friedmann equations, which describe how the rate of expansion relates to the energy density and pressure of the contents of the fluid.
 
  • #101
Chalnoth said:
This leads to the Friedmann equations, which describe how the rate of expansion relates to the energy density and pressure of the contents of the fluid.

What I would prefer to see are the actual logical steps coming from the assumptions of homogeneity and isotropy which lead to the Friedmann equations.

For instance, I seem to recall reading an article where, to calculate the force on a particle, the author picked an arbitrary distant particle, imagined a sphere around it, and used Gauss's law for gravity (http://en.wikipedia.org/wiki/Gauss%27_law_for_gravity). I also recall being rather dismayed at the author's choice of using symmetry around an arbitrary distant particle, instead of using symmetry around the point of interest.

Perhaps a more worthy analysis of the uniform perfect fluid is to ask what happens when you have a minor perturbation in the uniformity. For instance, if one particle is removed, or pushed away from its position, all of the adjacent particles are affected. I believe it may even result in an expanding hole, since all of the adjacent particles would then be pulled away from that opening.

In the uniform model, the effect of distant particles goes down as 1/r^2, while the density remains constant. In the Minkowski-Milne model, the density is NOT constant; the density tends to infinity at a finite distance.

The question calls for some kind of vector volume integral, and deserves more thought.
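If I remember Milne's result correctly (this should be checked against his original work before relying on it), the particle density seen in the inertial frame, in units with c = 1, is

[tex]n(r,t) = \frac{B\,t}{\left(t^2 - r^2\right)^2},[/tex]

with [itex]B[/itex] a constant, which is uniform in the comoving sense but, as noted above, diverges as [itex]r \to t[/itex] at the light-cone boundary.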
 
  • #102
JDoolin said:
What I would prefer to see are the actual logical steps coming from the assumptions of homogeneity and isotropy which lead to the Friedmann equations.
1. Construct a homogeneous, isotropic stress-energy tensor. This isn't terribly difficult: if we start with Euclidean coordinates, it must be diagonal and the diagonal spatial components must be the same. So we end up with just two degrees of freedom: energy density and pressure. The energy density tells us how much of the stuff there is, and the pressure is then determined from the energy density based upon what kind of stuff we have.

2. Construct a homogeneous, isotropic metric. I already showed you this part of it. It ends up depending on two parameters: a function of time (by convention, [itex]a(t)[/itex]), and the spatial curvature ([itex]k[/itex]).

3. From the homogeneous, isotropic metric we can calculate the Einstein tensor. The exact steps here are a bit hairy, but suffice it to say you end up with a tensor that only has diagonal components, and those components depend upon [itex]a(t)[/itex] and [itex]k[/itex].

4. The Einstein Field equations now equate the Einstein tensor (which depends upon [itex]a(t)[/itex] and [itex]k[/itex]) to the stress-energy tensor we constructed earlier (which depends upon [itex]\rho[/itex] and [itex]p[/itex]). In principle this gives us four equations, but all of the spatial equations are identical, so there's really just two independent equations. The time-time equation can be reduced to the first Friedmann equation. The second equation, by convention again, comes from the sum of all four equations.
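Written out, the two independent equations from step 4 are the standard Friedmann equations (quoted here for reference, in units with c = 1):

[tex]\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}, \qquad \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)[/tex]

The first comes from the time-time component, the second from combining it with the spatial components.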

This is how it works in General Relativity, of course. You can do the same thing in Newtonian physics just by assuming a homogeneous, isotropic fluid that has some energy density (which equates to the mass density in Newtonian physics) and pressure.

JDoolin said:
For instance, I seem to recall reading an article where, to calculate the force on a particle, the author picked an arbitrary distant particle, imagined a sphere around it, and used http://en.wikipedia.org/wiki/Gauss%27_law_for_gravity" . I also recall being rather dismayed at the author's choice of using symmetry around an arbitrary distant particle, instead of using symmetry around the point of interest.
Well, in general when you want to exploit the symmetry of the system, the symmetry has to actually exist for it to be valid. You don't have complete freedom to choose the symmetry.

In Newtonian physics, when you compute the force between two particles, you only consider the gravitational field around one of them (basically, a particle's own gravitational field doesn't contribute to the force that particle feels, so it's irrelevant). Thus the correct point of symmetry is not the particle on which you're calculating the force, but the particle that is the source of the force you're calculating.

JDoolin said:
Perhaps, a more worthy analysis of the uniform perfect fluid is to ask what happens when you have a minor perturbation in the uniformity. For instance, if one particle is removed, or pushed away from it's position, all of the adjacent particles are affected. I believe it may even result in an expanding hole, since all of the adjacent particles would then be pulled away from that opening.
This is a whole topic in cosmology, called perturbation theory. The basic idea is you start with a uniform fluid, and allow there to be deviations from uniformity. You then calculate the effects of those deviations. In general this is a very difficult thing to do, but there are approximations you can make that allow you to calculate the behavior under certain constraints. For our universe, those constraints mean that you can use perturbation theory to accurately calculate the formation of structure in our universe at very large scales. At smaller scales things get much messier and we have to use N-body simulations.
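For reference (a standard textbook result, not something derived in this thread): for a matter-dominated universe on sub-horizon scales, the linearized equations reduce to a single equation for the density contrast [itex]\delta = \delta\rho/\bar{\rho}[/itex],

[tex]\ddot{\delta} + 2H\dot{\delta} - 4\pi G\bar{\rho}\,\delta = 0,[/tex]

whose growing solution in the Einstein-de Sitter case is [itex]\delta \propto t^{2/3}[/itex], i.e. perturbations grow in proportion to the scale factor.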
 
  • #103
Chalnoth said:
This is how it works in General Relativity, of course. You can do the same thing in Newtonian physics just by assuming a homogeneous, isotropic fluid that has some energy density (which equates to the mass density in Newtonian physics) and pressure.

If possible, I would like to approach the problem with basically Newtonian physics. My first thought is that one could perform a volume integral around the affected particle.

integrate (Gravitational function*density*differential Volume element)

The two hard parts are figuring out the density and figuring out the gravitational function.

The density will not be symmetrical around the "observer particle" but will be symmetrical around the world-line through the Big Bang event, parallel to the tangent of the observer-particle's world-curve. There should be a polar symmetry, since the end result of all the acceleration must be a single velocity. We should be able to express the final density as a function of (r,theta).

Also, the density should not be calculated as it is "now" in the observer's frame; I would presume that the speed of gravity is the same as the speed of light. So if we take Milne's density function as given, we still have to take this delay into account.

Also, the gravitational field of a receding body is going to be less than the gravitational field of an oncoming body. I can see it in the demonstration of the linear motion of a point charge (http://www.its.caltech.edu/~phys1/java/phys1/MovingCharge/MovingCharge.html) by clicking "Go" and then adjusting the velocity slider, but I need some way of expressing this mathematically.
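For the electromagnetic case that demonstration shows, the field of a charge in uniform motion can be written in closed form (a standard result, quoted here for reference, with [itex]\beta = v/c[/itex] and [itex]\theta[/itex] the angle between the velocity and the line to the field point):

[tex]\mathbf{E} = \frac{q}{4\pi\epsilon_0 r^2}\,\frac{1-\beta^2}{\left(1-\beta^2\sin^2\theta\right)^{3/2}}\,\hat{\mathbf{r}}[/tex]

The field is weakened by a factor [itex]1-\beta^2[/itex] along the line of motion and enhanced transverse to it. Whether gravity has any such velocity dependence is a separate question.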

I think, after having precise mathematical answers to these questions, I can give a legitimate answer to the question I posed in post 12.

JDoolin said:
Okay. Which way is the particle at (r,t) pulled by gravity? Toward the center, or away from the center, and why? (and by how much?)

As a hint: Milne claimed a particle at rest (v=0) in this reference frame would be pulled toward the center. I do not recall how he reasoned this out, though; it was not entirely clear. I would have expected there to be no pull in either direction, because a particle at the same position, but with v=r/t, would be at the center in its own reference frame, and so would feel no such pull.
 
  • #104
JDoolin said:
If possible, I would like to approach the problem with a basically Newtonian physics. My first thought is that one could perform a volume integral around the affected particle.

integrate (Gravitational function*density*differential Volume element)

The two hard parts are figuring out the density and figuring out the gravitational function.
Yes, this is relatively difficult. For some help, you can try reading this Wikipedia article section.

JDoolin said:
Also, the gravitational field of a receding body is going to be less than the gravitational field of an oncoming body.
This is wrong. In Newtonian physics it's obvious that it's wrong, because there is no velocity dependence of gravity at all. In General Relativity, it's also wrong but the argument gets a bit more subtle. Basically, because velocity is arbitrary, the gravitational field of a moving particle is just the gravitational field of a stationary particle in a coordinate system moving with respect to the particle.
 
  • #105
Chalnoth said:
Yes, this is relatively difficult. For some help, you can try reading this Wikipedia article section.


This is wrong. In Newtonian physics it's obvious that it's wrong, because there is no velocity dependence of gravity at all. In General Relativity, it's also wrong but the argument gets a bit more subtle. Basically, because velocity is arbitrary, the gravitational field of a moving particle is just the gravitational field of a stationary particle in a coordinate system moving with respect to the particle.

In Special Relativity, there's another problem--simultaneity. If I try to use the gravitational field of the distant, receding particle in its "current" rest frame, then I would be talking about an event that happened long in the past on earth.

If I want to talk about the event that Earth is experiencing now in the receding particle's rest frame, that event is far far in the future in the frame of the receding particle.
 