Interpreting the Supernova Data

  • Thread starter JDoolin
  • #76
JDoolin
Gold Member
So you can't use Lorentz transformations. So what? Any physical quantity you could ever calculate will give you the exact same result in either coordinate system.

An example of a physical quantity, by the way, would be the time required for a light beam to bounce off some far-away mirror and return.
Yeah. Unfortunately, I ended with what I realized was my weakest point. I deleted the line, but then I realized you had already responded. Sorry about that.

However, since you brought it up: though you make the case that you "can't use Lorentz Transformations," that is a cop-out. Changing the metric does not release you from the Lorentz Transformation--it only changes the form of the Lorentz Transformation.
 
  • #77
Chalnoth
Science Advisor
Yeah. Unfortunately, I ended with what I realized was my weakest point. I deleted the line, but then I realized you had already responded. Sorry about that.
Fair enough, but I still don't see how a change in coordinates is something to argue against. If they're useful, they're useful. If not, not. A coordinate change doesn't change anything measurable.

However, since you brought it up: though you make the case that you "can't use Lorentz Transformations," that is a cop-out. Changing the metric does not release you from the Lorentz Transformation--it only changes the form of the Lorentz Transformation.
Yes.
 
  • #78
George Jones
Staff Emeritus
Science Advisor
Gold Member
If you have a set of particles, all equipartitioned by rapidity, all coming from a single event, and treat the system in Minkowski Spacetime,
How is this a solution to Einstein's equation for general relativity?
 
  • #79
How is this a solution to Einstein's equation for general relativity?
Obviously, it's not. Milne never accepted GR.
 
  • #80
JDoolin
Gold Member
Hmmm. I'd better start distinguishing between the Minkowski-Milne model and the Friedmann-Milne model. The Minkowski-Milne model describes an infinite number of particles flying apart from a single event into pre-existing "Minkowski" space.

The Friedmann-Milne model is a mapping from one spacetime where all of the particles are comoving to another spacetime where all of the particles are flying apart. Which of these two spacetimes is the one where the Minkowski Metric applies? And which one of them is what you think of as the "true" metric?

How is this a solution to Einstein's equation for general relativity?
I don't entirely understand the Einstein Field Equations or what they are for. They are the "equations you solve to do General Relativity" and have something to do with gravity.

I still am stuck, conceptually, on how taking a derivative of the scale factor has any meaningful relationship to gravity. Part of the problem is that I stubbornly insist that the scale factor is constant. From my perspective, of course, it appears you are stubbornly insisting the scale factor is NOT constant, though I cannot fathom your reason to suppose it is changing.

Certainly I have seen pictures of earth-colored balls causing dents in a sheet, and then other balls will roll down to them. At one time, I actually thought "aha!" but over time I realized this had no explanatory power whatsoever. All that model does is turn the source of the gravity perpendicular to the plane of motion. This would require a fourth spatial dimension if it were a valid description.

If you want to see where I'm at in understanding the Einstein Field Equations, go back into this thread and read posts 11, 14, 21, 23-28.

From my own explorations, I am rather swayed that only time is affected by gravity. For instance, from my (not entirely complete) analysis of the Rindler coordinate problem, it seems to me that the deeper a clock is in a gravitational well, the slower it will tick. Though I'm still working on it, I currently suspect this slowing of time also slows the speed of light. As long as the speed of light goes slower, and not faster, then all of the event-intervals associated with that disturbed light ray become timelike (which means they won't create causality problems).
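To put a rough number on that intuition (this is just the standard Rindler result, quoted here as a sketch; g is the acceleration and h the height separating the two clocks--symbols I'm introducing just for this):

[tex]\frac{d\tau_{\text{lower}}}{d\tau_{\text{upper}}} \approx 1 - \frac{gh}{c^2}[/tex]

so the clock "deeper" in the accelerating rocket really does tick slower, which is what leads me to expect the same of a gravitational well.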

But the rocket in the Rindler problem is actually exactly the same length to a person on board the rocket as it is to an inertial observer with whom the rocket is instantaneously at rest.

If the rocket appears to be the same length to both parties, this means that "acceleration" does not cause a warping of space--hence I would expect that gravity does not either.

As such, I would propose a theory of gravity which merely slows the clocks (and possibly the speed of light) in gravitational wells, but does not affect the scale of space.

I don't know whether such a thing is compatible with the Einstein Field Equations. There are apparently 10 Einstein field equations, so if it is compatible, perhaps this would reduce their number, and simplify them greatly.

Jonathan
 
  • #81
Chalnoth
Science Advisor
I still am stuck, conceptually, on how taking a derivative of the scale factor has any meaningful relationship to gravity. Part of the problem is that I stubbornly insist that the scale factor is constant. From my perspective, of course, it appears you are stubbornly insisting the scale factor is NOT constant, though I cannot fathom your reason to suppose it is changing.
One way to look at it is this. Let's imagine that we want to answer the question, "What is the most general type of metric we can write down that is both homogeneous and isotropic?"

First of all, if it is to be isotropic, the metric must not have any off-diagonal components. That is, there are no [itex]dxdy[/itex] or [itex]drd\theta[/itex] components.

Now, if we multiply the entire metric by any function, it doesn't change the physics, so we can arbitrarily choose the [itex]dt^2[/itex] component to have no pre-factors. Now, to make things simple, we'll work in Euclidean space for the three spatial components, and ask what sorts of metric factors they can pick up. Well, since we demand isotropy, we know that whatever function we choose, we must place the same function in front of every spatial component of the metric. Otherwise we would be picking out a specific direction in space.

Now this function we place in front of the other components of the metric can obviously be a function of time and retain homogeneity and isotropy. Naively we wouldn't think, however, that it could be a function of space. But it does turn out that there is a specific choice of function that does depend upon space which still obeys homogeneity and isotropy: constant spatial curvature.

So our general homogeneous, isotropic metric becomes:

[tex]ds^2 = dt^2 - {a^2(t) \over 1 - k(x^2 + y^2 + z^2)}(dx^2 + dy^2 + dz^2)[/tex]

So we automatically get a scale factor that depends upon time just by asking what the most general homogeneous, isotropic metric can be. It then becomes an exercise in math to determine what this metric does in General Relativity, and we are led inexorably to the Friedmann equations.
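For reference, here is where that exercise lands (a sketch in units with c = 1, writing [itex]\rho[/itex] for the energy density and [itex]p[/itex] for the pressure of the contents--symbols introduced just here):

[tex]\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k}{a^2}, \qquad \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)[/tex]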

Certainly I have seen pictures of earth-colored balls causing dents in a sheet, and then other balls will roll down to them. At one time, I actually thought "aha!" but over time I realized this had no explanatory power whatsoever. All that model does is turn the source of the gravity perpendicular to the plane of motion. This would require a fourth spatial dimension if it were a valid description.
This is just a visualization of the curvature. General Relativity requires no extra dimensions to describe the curvature of space-time, but we can't very well visualize the curvature without artificially adding an extra dimension.

What happens in General Relativity, though, is that so-called "test particles" always follow paths that mark the shortest space-time distance between two points in space-time. These hypothetical test particles are objects which respond to the space-time curvature but don't affect it. They are a good approximation to reality whenever you're tracking the path of an object that is much less massive/energetic than the sources of the gravitational field it's traveling in.
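In equations (a sketch, with [itex]\tau[/itex] the proper time along the path and [itex]\Gamma^\mu_{\alpha\beta}[/itex] the Christoffel symbols that encode the curvature--notation not used above), a test particle's path [itex]x^\mu(\tau)[/itex] satisfies the geodesic equation:

[tex]\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0[/tex]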

Now, in flat space-time, the shortest path between any two events is always a straight line. This means that in flat space-time, objects always move with constant speed in a constant direction.

So when we see an object like the Moon orbiting the Earth, that means there is a massive departure from flat space-time surrounding the Earth: instead of going in a straight line, the Moon goes in a circle! This can be visualized as space-time being sort of a rubber sheet and the Earth providing an indentation on that sheet, an indentation which the Moon follows, but this is just a visualization because we simply can't visualize four-dimensional space-time curvature directly.

One thing that we know from General Relativity, however, is that the only way you can have flat space-time, which is the case for Minkowski/Milne space-time, is if the universe is empty. If you take the above homogeneous, isotropic metric, for example, the Milne metric pops out as the metric you get when you set the energy density of the universe to zero.
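Explicitly (a sketch, writing [itex]\chi[/itex] for the comoving radial coordinate, which isn't named above): setting the energy density to zero forces [itex]a(t) = t[/itex] and [itex]k = -1[/itex], giving

[tex]ds^2 = dt^2 - t^2\left[d\chi^2 + \sinh^2\chi \, d\Omega^2\right][/tex]

and the substitution [itex]T = t\cosh\chi,\; R = t\sinh\chi[/itex] turns this into plain Minkowski space, [itex]ds^2 = dT^2 - dR^2 - R^2 d\Omega^2[/itex].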
 
  • #82
This metric is no longer equivalent to the Minkowski Metric.
Yes it is. Let's define some light beams. When you have a beam of light then ds = 0, and you'll find that the curves for which ds=0 are the same. Once you have a grid of light beams, then you can start describing the path of an object in reference to different light beams, and if you change from one coordinate system to another, you'll find that the paths are the same.

The particles in your universe don't know anything about r or t. They can only do experiments by sending light beams over to each other or describing their location with respect to light beams, and you'll find that those are the same.
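As a concrete one-dimensional example (a sketch in Milne-style coordinates t, χ, which I haven't defined above--t is the comoving time, χ the comoving radial coordinate): the light-beam condition is

[tex]ds^2 = dt^2 - t^2 d\chi^2 = 0 \quad\Rightarrow\quad \chi = \pm\ln t + \text{const}[/tex]

and if you re-label those same events with the Minkowski coordinates [itex]T = t\cosh\chi,\; R = t\sinh\chi[/itex], these curves become exactly the straight lines [itex]T \mp R = \text{const}[/itex]. Same light beams, different graph paper.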
 
  • #83
You are replacing distance with distance, and time with time. Certainly, you preserve all of the information by doing so, but you do not preserve the shape.
The information about the shape is in the ds equation. When you change your coordinates, then the distance equation changes so that the shapes are the same.

Three of these transformations significantly affect the shape of the earth, while the fourth only affects the size and position.
They only change the shape if you throw away the metric equation.
 
  • #84
What "the metric" is doing is converting a homogeneous group of comoving particles into a set of particles which are separated by an equipartition of rapidity. (i.e. they start together at a point, and are flying away from each other.)
No, you aren't. You are just replacing one piece of graph paper with one that has different lines. Now if you have particles that follow the lines of one piece of graph paper, and then you change the graph paper radically, then they're no longer going to follow the lines on the other piece.

But that doesn't matter.

These two things are in no way the same. Milne's model is flying apart. The standard model is standing still. There's no way to claim they're both the same.
Different pieces of graph paper. Beams of light will travel along lines in which ds=0.
 
  • #85
Hmmm. I'd better start distinguishing between the Minkowski-Milne model and the Friedmann-Milne model. The Minkowski-Milne model describes an infinite number of particles flying apart from a single event into pre-existing "Minkowski" space.
You are using the word "metric" in a way that I don't understand.

In SR, you can use any set of coordinates you want to describe a physical situation. The important number is the "space-time distance" between two events, and two observers will always agree on that. If you have a beam of light, the curve it traces through the coordinates will always have ds = 0.

Everything else is just graph paper.

Now if you are proposing something different, that's fine, but you aren't talking about metrics.

But it doesn't matter.....

Also to relate this to observational cosmology. It's really all rather unimportant when you compare to observations. The only thing that you care about is how quickly the universe expands. Whether it expands according to GR, SR, or something else isn't important. Once you get an equation for how quickly the universe expands, then you see how sound waves go through the expanding universe, and you get a lumpiness factor.

Now it turns out that you can punch in numbers to your computer programs in which the universe expands in exactly the same way that the Milne model says it should, and you find that the universe expands too quickly. The faster the universe expands, the quicker it cools and the more deuterium you end up with. Also, the faster the universe expands, the further sound waves can go before they stall....

http://cmb.as.arizona.edu/~eisenste/acousticpeak/acoustic_physics.html

The important thing to point out is that *these* calculations only involve gas physics; gravity only enters as far as it tells you how quickly the universe expands.
 
  • #86
JDoolin
Gold Member
One way to look at it is this. Let's imagine that we want to answer the question, "What is the most general type of metric we can write down that is both homogeneous and isotropic?"

First of all, if it is to be isotropic, the metric must not have any off-diagonal components. That is, there are no [itex]dxdy[/itex] or [itex]drd\theta[/itex] components.

Now, if we multiply the entire metric by any function, it doesn't change the physics, so we can arbitrarily choose the [itex]dt^2[/itex] component to have no pre-factors. Now, to make things simple, we'll work in Euclidean space for the three spatial components, and ask what sorts of metric factors they can pick up. Well, since we demand isotropy, we know that whatever function we choose, we must place the same function in front of every spatial component of the metric. Otherwise we would be picking out a specific direction in space.
To me, claiming that the space is stretching represents a HUGE change in the physics. To me, claiming that Lorentz Transformations are not valid in cosmology represents a HUGE change in the physics. If it did not represent a change in the physics then we would not be arguing with each other. We would be saying to one another: "ah, yes, that's another perfectly valid way to look at it."

For the Milne-Minkowski model, I would suggest that we should consider the view of this planet from a distant galaxy traveling away at 90 or 99% of the speed of light. If the alien is asked to compute the speed of a clock on Earth, then to a good approximation he may freely neglect the rotational velocity of the arms of the Milky Way Galaxy. And the effect of the Earth's gravity on the speed of the clock will be even more negligible than that. The small effects of general relativity will be tiny compared to the effects of Special Relativity.

But I frequently hear proponents of the "standard model" say that the effects of Special Relativity are only a local effect. (since all the galaxies are comoving, I gather, there is no time-dilation or desynchronization between the galaxies.) This is simply not true in the Milne-Minkowski model--where you must consider the relativity of simultaneity. This represents another HUGE change in the physics based on the metric.

Now this function we place in front of the other components of the metric can obviously be a function of time and retain homogeneity and isotropy. Naively we wouldn't think, however, that it could be a function of space. But it does turn out that there is a specific choice of function that does depend upon space which still obeys homogeneity and isotropy: constant spatial curvature.
Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?

This is what Milne already has found--a distribution of matter in Minkowski Space that is both homogeneous and isotropic. Isn't the only reason that Friedmann etc. continued to look for a "metric" because they erroneously denied that Milne's model was homogeneous and isotropic?

So our general homogeneous, isotropic metric becomes:

[tex]ds^2 = dt^2 - {a^2(t) \over 1 - k(x^2 + y^2 + z^2)}(dx^2 + dy^2 + dz^2)[/tex]

So we automatically get a scale factor that depends upon time just by asking what the most general homogeneous, isotropic metric can be. It then becomes an exercise in math to determine what this metric does in General Relativity, and we are led inexorably to the Friedmann equations.
We should check the possibility that the variety of "metrics" you are creating may well be ways to map a stationary or comoving distribution of matter into a variety of homogeneous isotropic moving distributions of matter.

If so, there may be some compatibility between what we are each talking about, and I strongly suspect there is.

This is just a visualization of the curvature. General Relativity requires no extra dimensions to describe the curvature of space-time, but we can't very well visualize the curvature without artificially adding an extra dimension.

What happens in General Relativity, though, is that so-called "test particles" always follow paths that mark the shortest space-time distance between two points in space-time. These hypothetical test particles are objects which respond to the space-time curvature but don't affect it. They are a good approximation to reality whenever you're tracking the path of an object that is much less massive/energetic than the sources of the gravitational field it's traveling in.
In this area, I will not argue with you. When you're talking about local gravitational effects, I can entertain the idea of a non-constant metric. But it has to be a mapping from one view to another view--for instance the free-falling view, vs. the view from the ground, vs. the view from orbit, vs. the view from the center of the planet.

The variables must represent different physical quantities before and after the "metric" is applied.

I think the case has been made for the local effects of gravity, but from afar, all these local effects will simply manifest themselves as a slowing of the speed of light. All of the events can still be mapped to a Minkowskian global metric. The large-scale global metric does not need to adjust for these modified light-like intervals, for we already have many examples of materials (glass, water, etc.) slowing the speed of light.
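There is, in fact, a standard result that looks like this (a sketch, using Schwarzschild coordinates for the field of a single mass M--coordinates and symbols I'm introducing just for this). Setting [itex]ds^2 = 0[/itex] for a radial light ray gives a coordinate speed

[tex]\frac{dr}{dt} = \pm c\left(1 - \frac{2GM}{rc^2}\right)[/tex]

which is slower the deeper the ray is in the well; this is the effect behind the measured Shapiro delay.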

Now, in flat space-time, the shortest path between any two events is always a straight line. This means that in flat space-time, objects always move with constant speed in a constant direction.

So when we see an object like the Moon orbiting the Earth, that means there is a massive departure from flat space-time surrounding the Earth: instead of going in a straight line, the Moon goes in a circle! This can be visualized as space-time being sort of a rubber sheet and the Earth providing an indentation on that sheet, an indentation which the Moon follows, but this is just a visualization because we simply can't visualize four-dimensional space-time curvature directly.

One thing that we know from General Relativity, however, is that the only way you can have flat space-time, which is the case for Minkowski/Milne space-time, is if the universe is empty. If you take the above homogeneous, isotropic metric, for example, the Milne metric pops out as the metric you get when you set the energy density of the universe to zero.
I'm pretty sure you are still applying the Friedmann/Milne logic. In the Friedmann/Milne model, you pretend that you don't need to worry about the relativity of simultaneity, because all the galaxies are comoving.

But remember, in the Minkowski/Milne model, we have already found a homogeneous, isotropic distribution of matter, without any change in "metric" at all. Since the distribution is isotropic, no matter how much matter or energy there is, it should all balance out--there's no net force in any direction, no matter how much "matter density" or "energy density" you have.

You have said the Milne model introduces an "explosion" which you find unaesthetic. But I think this is more aesthetically pleasing than what the standard model offers: In the standard model, everything in the universe appeared all at once, at t=0, uniformly distributed through space, all perfectly stationary with each other, but in a universe with a scale factor of zero.

So, instead of a single event creating all the matter in the universe, the standard model offers an infinite number of events, all occurring at the same time, at different places, but in the same place because the scale factor was zero.

Perhaps you find the point "explosion" idea unaesthetic, but do you really think it is more bizarre than the standard model's tiny infinite universe?
 
  • #87
JDoolin
Gold Member
Now it turns out that you can punch in numbers to your computer programs in which the universe expands in exactly the same way that the Milne model says it should, and you find that the universe expands too quickly.
I need more detail here. Exactly how did they make this analysis that Milne's model universe would expand too quickly? Was this after or before they decided Milne's model had no matter in it?

The outer radius of the Minkowski/Milne universe would expand at precisely the speed of light, though, as I've mentioned elsewhere, to an accelerating observer the twin paradox manifests itself as universal inflation.

As for the local expansion, that would be determined, approximately, by an equipartition of rapidity, and the scale of the partition would be determined somehow by Planck's constant, and the mass of the primordial particles. If the size of those particles were extremely large, this velocity would be extremely low. I don't think you can say exactly how fast the Milne model would expand, unless you know the nature of the first particles, and how fast they moved away from each other.
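To spell out what I mean by an equipartition of rapidity in one dimension (a sketch; [itex]\Delta\varphi[/itex] is the rapidity step, a symbol I'm introducing here): particle n moves with

[tex]v_n = c\tanh(n\,\Delta\varphi)[/tex]

and since rapidities simply add under boosts, every particle sees its nearest neighbors receding at the same speed [itex]c\tanh(\Delta\varphi)[/itex]--which is why the distribution looks the same from every particle.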

In the context of the Minkowski/Milne model, what I would recommend is to determine how fast the universe appears to expand locally, and then from that determine the size of this primordial particle.

In my original post, I said...
The reason I wish to modify the Milne model is to add two or three major events. These events are sudden accelerations of our galaxy or explosions of the matter around our galaxy, while the universe was still very dense, well before our galaxy actually spread out into stars.
The possibility had occurred to me that some of these events might be the quantum decay processes of gargantuan primordial particles.
 
  • #88
Chalnoth
Science Advisor
To me, claiming that the space is stretching represents a HUGE change in the physics. To me, claiming that Lorentz Transformations are not valid in cosmology represents a HUGE change in the physics.
Here's the thing: if you work with purely Newtonian gravity and work out how an expanding universe would behave, you get the same answer. So arguing against the expanding universe requires arguing that the behavior of gravity changes drastically on large distance scales. And we don't have any evidence of that.
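Here is the sketch of that Newtonian calculation (m, E, and the sphere of radius a(t) are labels introduced just for this sketch): take a test mass m riding on the edge of a uniform sphere of density [itex]\rho[/itex] and radius a(t). Conservation of energy per unit mass gives

[tex]\frac{1}{2}\dot{a}^2 - \frac{4\pi G}{3}\rho\, a^2 = E \quad\Rightarrow\quad \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho + \frac{2E}{a^2}[/tex]

which, with 2E relabeled as -k, is exactly the first Friedmann equation that General Relativity gives for a uniform fluid.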

What's more, we do have ample evidence against the Milne cosmology. Nobody is disagreeing that the Milne cosmology is different. It's just that the Milne cosmology is ruled out by observation.

But I frequently hear proponents of the "standard model" say that the effects of Special Relativity are only a local effect. (since all the galaxies are comoving, I gather, there is no time-dilation or desynchronization between the galaxies.) This is simply not true in the Milne-Minkowski model--where you must consider the relativity of simultaneity. This represents another HUGE change in the physics based on the metric.
Yes, because in General Relativity, you can use Minkowski space-time to describe the local region about any point. But if you try to apply special relativity globally, you start getting the wrong answers pretty quickly. Now, many of the same effects you see in Special Relativity still exist in General Relativity; it's just that the details differ. You may think of General Relativity as only talking about effects due to the local galaxy, but in cosmology it also adds effects due to the intervening curvature between us and a far-away galaxy. Of course, you have to go very far because the cosmological curvature is very small, but when you get out to a few billion light years, the differences start to become significant.

Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?
Well, if the distribution of matter obeys homogeneity and isotropy, then the particular solution to the Einstein equations must also obey the same symmetries. Thus we write down a metric that obeys homogeneity and isotropy in order to reduce the number of degrees of freedom, to make the system easier to solve. In this case, it reduces to a function of time (the scale factor) and a constant parameter (the spatial curvature). The relationship between these and a homogeneous, isotropic matter distribution leads us, through the Einstein field equations, to the Friedmann equations.

This is what Milne already has found--a distribution of matter in Minkowski Space that is both homogeneous and isotropic. Isn't the only reason that Friedmann etc. continued to look for a "metric" because they erroneously denied that Milne's model was homogeneous and isotropic?
So? It's observationally wrong.

You have said the Milne model introduces an "explosion" which you find unaesthetic.
It's not "unaesthetic". It's observationally wrong.
 
  • #89
I need more detail here. Exactly how did they make this analysis that Milne's model universe would expand too quickly?
OK. Let's forget about a theory of gravity. You just give me some equations telling me how you think the universe is behaving, and then I run them through a simulation that just models the behavior of gas under the conditions you gave me.

The three things that I can get out of that simulation are:

1) the composition of the universe from nuclear reaction rates
2) the lumpiness factors of the cosmic microwave background
3) the lumpiness factors of the galaxies

So let's have things expand at a constant rate, and let's not be concerned about how that happens. What you find is that the universe cools very quickly and so you end up not burning off the deuterium. The second thing that you find is that the sound waves travel further before they run into each other and so you end up with a universe that is much less lumpy.
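If it helps, here is a toy illustration of why the expansion history matters at all. This is emphatically not a real nucleosynthesis code--every function and number in it is invented just to show the sensitivity:

Code:
(* Toy sketch only: the surviving abundance of a species depends on the
   expansion history a(t), because a(t) sets how fast the temperature,
   and hence the burn rate, falls. All functions and numbers invented. *)
temp[a_] := 1/a;              (* toy: temperature falls as the universe grows *)
burnRate[tK_] := Exp[-1/tK];  (* toy: burning shuts off as temperature drops  *)
survivingFraction[aFun_] := Module[{sol},
  sol = NDSolve[{x'[t] == -burnRate[temp[aFun[t]]] x[t], x[1] == 1},
     x, {t, 1, 100}][[1]];
  x[100] /. sol];
survivingFraction[Sqrt]      (* decelerating expansion, a ~ t^(1/2) *)
survivingFraction[Identity]  (* coasting expansion, a ~ t           *)

Run it and the two expansion laws leave different amounts behind: in this toy, the faster-growing coasting law shuts the burning off sooner and leaves more unburned. A real code does the same thing with actual nuclear reaction networks.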

The important thing about these calculations is that you are very limited as to the amount of weird physics you can put in. Gas is gas. Nuclear reactions are nuclear reactions. What happens is that when you put in all of the known physics, it doesn't work either. At that point you ask yourself what you have to do to get things to work, and you find that things work out if you put in just the right amount of dark matter and dark energy.

In the context of the Minkowski/Milne model, what I would recommend is to determine how fast the universe appears to expand locally, and then from that determine the size of this primordial particle.
If I'm understanding the Milne model, things are expanding at a constant rate, so you just take the current Hubble expansion and then assume that there is no slowdown.
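Concretely (in the usual notation where [itex]H_0[/itex] is the current Hubble parameter): a coasting universe has [itex]a(t) \propto t[/itex], so

[tex]H(t) = \frac{\dot{a}}{a} = \frac{1}{t} \quad\Rightarrow\quad t_0 = \frac{1}{H_0}[/tex]

i.e. you just extrapolate the current expansion back with no deceleration, and the age comes out as exactly [itex]1/H_0[/itex].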
 
  • #90
To me, claiming that the space is stretching represents a HUGE change in the physics.
Curiously, the fact that space "bends" is something that you can test experimentally with spacecraft.

Anyway, if you find GR weird as a theory of gravity and want to propose a new one, that's fine. There is an entire industry of physicists proposing alternative theories of gravity. However, if you want to apply any new theory to the universe, you have to deal with the observational constraints that I've mentioned. You tell me how the universe expands, you push the numbers into your favorite nucleosynthesis and lumpiness factor code, and I tell you if that will work or not.

The two things that the standard model gets right are the deuterium abundances and the existence of the first acoustic peak.

Why is your goal to find a metric where homogeneity and isotropy are retained? Why don't you, instead, make the goal to find a distribution of matter in which homogeneity and isotropy are retained?
Because you don't get the right deuterium abundances and the first acoustic peak.

You have said the Milne model introduces an "explosion" which you find unaesthetic. But I think this is more aesthetically pleasing than what the standard model offers: In the standard model, everything in the universe appeared all at once, at t=0, uniformly distributed through space, all perfectly stationary with each other, but in a universe with a scale factor of zero.
No it doesn't. The standard model of cosmology says *NOTHING* about what happened pre-inflation. I have to put this in bold because this is something people get wrong. With current observations you can get to the inflationary period, but what happened before is *NOT* part of the standard model.

So, instead of a single event creating all the matter in the universe, the standard model offers an infinite number of events, all occurring at the same time, at different places, but in the same place because the scale factor was zero.
No it doesn't. The standard model says *NOTHING* about how things behaved at t=0.
 
  • #91
1) the composition of the universe from nuclear reaction rates

So let's have things expand at a constant rate, and let's not be concerned about how that happens. What you find is that the universe cools very quickly and so you end up not burning off the deuterium.
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis--that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same authors.
 
  • #92
Chalnoth
Science Advisor
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis--that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same authors.
I have a hard time seeing how much heavier elements would fail to form in the early universe in such a cosmology.

But at any rate, it doesn't much matter, because it's completely ruled out by the scale of inhomogeneities in the CMB.
 
  • #93
But a coasting universe is consistent with observational restrictions on primordial nucleosynthesis--that has been known for some time. See, e.g., astro-ph/9903084, or more recent papers by the same authors.
No it's not.

It's easy to get the right amount of helium with any sort of BB model. What happens is that the ratio of protons to neutrons is rather constant regardless of what you do, and most of it is going to get burned to He4. The really hard thing to get right is deuterium, because the amount of deuterium changes radically depending on how quickly the temperature cools. The authors of the paper realize this and mention it on page 4.

To explain how they get the wrong number for deuterium, they invoke spallation, citing an obsolete paper from the 1970s, and say that "If one considers spallation of a helium deficient cloud onto a helium rich cloud, it is easy to produce deuterium as demonstrated by Epstein," which is just flatly wrong. If you try to produce deuterium through spallation, it turns out that you never produce any: if your energies are too low, you produce lots of lithium, and if your energies are too high, things just shatter and you produce nothing.

People tried very hard to get the models to work using spallation and the consensus is that they don't. Put in some dark matter and they work just fine. The dark matter keeps the universe from expanding too quickly and this burns off deuterium.
 
  • #94
JDoolin
Gold Member
I've completed a one-spatial-dimension demonstration of what the Milne Minkowski model would predict.

Here, there are two major events represented. One of them is the big bang event. The second is not technically a single event, but represents many, many primordial particles decaying at approximately the same point in time and space, thus resulting in something approximating a second "big bang."

This is what I was trying to get across in my ASCII diagrams: http://groups.google.com/group/sci.astro/msg/2751e0dc068c725c?hl=en


[Attached image: modMilne3-1.jpg -- spacetime diagram of the world-lines in the modified Milne model, showing the big bang and a secondary "bang"]


Within this model there are several parameters that are "To Be Determined."

maxRapidity should be infinite, representing the initial big bang event.
deltaRapidity1 is a function of the initial primordial particles.
firstHalfLife is a property of the initial primordial particles.
deltaRapidity2 is a function of the energy of the decay process.

You can see that some of the world-lines cross each other. This would require modifying the model somewhat, as it means particles would be ramming into each other all over the universe.

Also, in the diagram, I have only represented one secondary decay-process "bang." The full model should have an infinite number of such secondary bangs, all along a hyperbolic arc of constant tau=halflife.
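(In units with c = 1, that arc is just the set of events at proper time [itex]t_{1/2}[/itex] from the big bang event at the origin:

[tex]t^2 - x^2 = t_{1/2}^2[/tex]

where [itex]t_{1/2}[/itex] stands for the first half-life--the firstHalfLife parameter in the code below.)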

Code:
(* World-lines from the big bang event: every particle starts at the
   origin {x, t} = {0, 0} and moves with rapidity n*deltaRapidity1. *)
maxRapidity = 5;      (* stands in for "infinite" *)
deltaRapidity1 = .2;  (* rapidity step between big-bang particles *)
bigBangEvent = {0, 0};
e0 = Table[bigBangEvent,
   {rapidity, -maxRapidity, maxRapidity, deltaRapidity1}];
(* End points after one unit of proper time: {x, t} = {Sinh, Cosh} *)
e1 = Table[{Sinh[rapidity], Cosh[rapidity]},
   {rapidity, -maxRapidity, maxRapidity, deltaRapidity1}];

(* The secondary "bang": a particle of rapidity iR decays after proper
   time fHL, spraying 2 rP + 1 daughters over rapidities iR + n*dR. *)
firstHalfLife = 0.4; fHL = firstHalfLife;
secondHalfLife = 1 - firstHalfLife; sHL = secondHalfLife;
initialRapidity = -3; iR = initialRapidity;
deltaRapidity2 = .1; dR = deltaRapidity2;
resultingParticles = 12; rP = resultingParticles;
decayEvent = {fHL Sinh[iR], fHL Cosh[iR]};
nextWorldLinesBegin = Table[decayEvent, {n, -rP, rP}];
nextWorldLinesEnd = Table[
   decayEvent + {sHL Sinh[iR + n dR], sHL Cosh[iR + n dR]}, {n, -rP, rP}];

e0 = Join[e0, nextWorldLinesBegin];
e1 = Join[e1, nextWorldLinesEnd];

(* Re-center on the decay event, then apply a Lorentz transformation
   about it; theta is the boost rapidity. *)
decayEventList = Table[decayEvent, {n, 1, Length[e0]}];
e0 = e0 - decayEventList;
e1 = e1 - decayEventList;

LT[theta_] := {{Cosh[theta], -Sinh[theta]}, {-Sinh[theta], Cosh[theta]}};
Manipulate[
 ePrime0 = Transpose[LT[theta].Transpose[e0]];
 ePrime1 = Transpose[LT[theta].Transpose[e1]];
 milneWorldLines = Transpose[{ePrime0, ePrime1}];
 ListLinePlot[milneWorldLines,
  PlotRange -> {{-2, 2}, {-.5, 2}}],
 {{theta, iR}, iR - (rP*dR)/2, iR + (rP*dR)/2}]
Now you keep telling me that they tried it and it didn't work, but I think this analysis I'm doing is unique. I've not seen anybody really give the model half a chance.
 
  • #95
Chalnoth
Science Advisor
If you're not going to pay any attention to the observational evidence already mentioned, why should we pay the model any further attention when it completely ignores gravity?
 
  • #96
JDoolin
Gold Member
Curiously, the fact that space "bends" is something that you can test experimentally with spacecraft.
Again, this is a local effect.

Anyway, if you find GR weird as a theory of gravity and want to propose a new one, that's fine. There is an entire industry of physicists proposing alternative theories of gravity. However, if you want to apply any new theory to the universe, you have to deal with the observational constraints that I've mentioned. You tell me how the universe expands, you push the numbers into your favorite nucleosynthesis and lumpiness factor code, and I tell you if that will work or not.

The two things that the standard model gets right are the deuterium abundances and the existence of the first acoustic peak.
I propose no theory of gravity, except to say that if you have isotropy, there's no net pull in any direction.



Because you don't get the right deuterium abundances and the first acoustic peak.
I'm not nearly that far along. I simulate a lot of decay processes going on, and call them one event occurring where the proper time reaches the half-life. This raises the question of whether early decays would cause a chain reaction, or whether the decay rate would follow a regular exponential curve in time. If you got a chain reaction, maybe it would create a flow of matter.

No it doesn't. The standard model of cosmology says *NOTHING* about what happened pre-inflation. I have to put this in bold because this is something people get wrong. With current observations you can get to the inflationary period, but what happened before is *NOT* part of the standard model.



No it doesn't. The standard model says *NOTHING* about how things behaved at t=0.
I am modeling right back to t=0, in Minkowski spacetime, because I think by doing so, we can actually explain inflation, and explain variation in Hubble's Constant.

The point to going back to t=0 is it forces us to ask the question--which makes more sense? A universe that began at a single event, or a universe which began simultaneously at many points in space? Especially since there is no universal meaning of "simultaneously." What are simultaneous distant events to one observer are spread out in space and time to unlimited extent to another observer.
 
  • #97
JDoolin
Gold Member
720
9
If you're not going to pay any attention to the observational evidence already mentioned, why should we pay the model any further attention when it completely ignores gravity?
ISOTROPY! You have no net force in any direction.
 
  • #98
Chalnoth
Science Advisor
I propose no theory of gravity, except to say that if you have isotropy, there's no net pull in any direction.
As I said earlier, you can do the calculations for the interaction between gravity and a uniform fluid either in General Relativity or in Newtonian gravity. You get the same answer either way.
 
  • #99
JDoolin
Gold Member
As I said earlier, you can do the calculations for the interaction between gravity and a uniform fluid either in General Relativity or in Newtonian gravity. You get the same answer either way.
Well, that sounds like a nice place to start. An infinite uniform fluid? And what do you find that answer to be?
 
  • #100
Chalnoth
Science Advisor
Well, that sounds like a nice place to start. An infinite uniform fluid? And what do you find that answer to be?
This leads to the Friedmann equations, which describe how the rate of expansion relates to the energy density and pressure of the contents of the fluid.
 
