Perlmutter & Supernovae: Debunking the Myth of Accelerating Galaxies

AI Thread Summary
The discussion centers around the interpretation of Perlmutter's findings on the acceleration of galaxies, which suggests that distant galaxies are moving away from us faster than closer ones. Critics argue that this interpretation may be flawed, as it relies on observations of light from the past, which complicates the understanding of current velocities. The Hubble Law, established over 60 years ago, indicates that the rate of expansion is proportional to distance, but does not directly imply acceleration. Some participants emphasize the need for long-term observations to confirm changes in redshift over time, rather than relying on snapshots. The conversation highlights the complexities of measuring cosmic expansion and the ongoing debate about the role of dark energy in the universe's acceleration.
  • #51
Let me attempt to articulate my objection to the claim that cosmological kinematics are on par with terrestrial kinematics. In terrestrial physics we can make local measurements of position versus time and the spacetime metric is not a variable. In cosmological physics the spacetime metric is a variable and we can't directly measure position versus time, which is a local quantity in our theory of spacetime. That is, the metric is spatiotemporally local in GR and an important variable in cosmology, yet we have no way to do local, direct measurements of spacetime intervals in cosmology. So, when you try to put these two kinematics on equal footing, I strongly object because the differences are too pronounced.
 
  • #52
RUTA said:
What I mean by "expansion rate" is given by the scale factor in GR, i.e., a(t).

First disagreement. You end up with a scale factor if you have *any* model of the universe that is isotropic and homogeneous. You can assume that the universe is Newtonian or Galilean or whatever. As long as you assume that the universe is isotropic and homogeneous, you end up with a scale factor. Now GR provides a specific set of equations for a(t), but you can substitute alternative ones.

Now there are non-GR principles that you can use to constrain a(t). For example, if a(t) results in local velocities that exceed the speed of light, you have problems. If a(t) is not monotonic, you end up with shells colliding with each other. Etc.

This is responsible for the deceleration parameter q and the Hubble "constant" H in GR cosmology.

Disagree. H and q have nothing to do with GR at all. Just as a(t) has nothing to do with GR, neither do H and q. Now GR provides a specific equation for a(t), but you don't have to use that equation.

If, as is done with SN, one produces luminosity distance as a function of redshift and I want to know whether or not that indicates accelerated expansion, I have to find the GR model that best fits the data and the a(t) for that model then tells me whether the universe is accelerating or decelerating.

a(t) has nothing to do with GR.

You're claiming (?) that I can skip the choice of cosmology model and render a definition of expansion rate in terms of ... luminosity distance and redshift directly?

Not exactly. It turns out that the specifics of GR enter into the equation because GR asserts that gravity changes geometry, and because gravity changes geometry, you don't have a 1/r^2 power law for brightness. So you have to correct for geometry effects. I'm claiming that these corrections are not huge, and once you put in "plausible" geometry corrections you quickly figure out that you still have acceleration, and that this really isn't one of the parameters that causes a lot of uncertainty in the results.

Now what do I mean by "plausible" geometry? We do know that GR is correct at galactic scales from pulsar observations. We do know that gravity is locally Newtonian. We are pretty sure that information can't travel faster than the speed of light. Using only those principles, you can already pretty tightly constrain the possible geometry, to the point that there isn't a huge amount of uncertainty.

There is another way of thinking about it. You can think of GR = Newtonian + correction terms, and then you can think of "real gravity" = Newtonian + known GR correction terms + unknown corrections. We know that the "unknown corrections" go to zero in the limit of galactic scales. We can constrain the size of the "unknown corrections" via various arguments. If gravity "suddenly" changes, then you ought to see light refract. My claim is that if you feed the "unknown gravity effects" back into the equation, they aren't huge and they aren't enough to get rid of acceleration.

I'm not willing to grant you that it's a model-independent result. You've tacitly chosen a model via your particular definition of "acceleration" that involves luminosity distance and redshift.

In order to define "acceleration", you merely need to assume isotropy and homogeneity. Once you assume isotropy and homogeneity, you get a scale factor a(t). Once you have a(t), you get q, and you get a definition of "cosmic acceleration".
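
For concreteness, the standard definitions once you have a scale factor a(t) (the same ones GR textbooks use; nothing GR-specific in them) are

$$H(t) \equiv \frac{\dot a}{a}, \qquad q(t) \equiv -\frac{\ddot a\, a}{\dot a^{2}},$$

so "accelerating expansion" just means the second derivative of a(t) is positive, i.e. q < 0.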

Note that isotropy and homogeneity are "round Earth" assumptions. We *know* that the universe is not perfectly isotropic and homogeneous, so we know our model doesn't *exactly* reflect reality. So then we go back and check whether it matters, and it doesn't (at least so far).

Also note, people are much more interested in investigating isotropy and homogeneity than gravity models. You can constrain gravity pretty tightly. The assumptions of isotropy and homogeneity are more fundamental, and the constraints are less severe. For example, we are pretty sure that gravity doesn't change based on the direction you go in the universe (or else a lot of weird things would happen), but it's perfectly possible that we are in a pancake-shaped void.

Because, again, what GR textbooks define as q involves a(t), so someone could convert your luminosity distances and redshifts to proper distances versus cosmological redshifts in their cosmology model (as in GR cosmology) and obtain a resulting best fit model for which a(t) says the universe isn't accelerating.

No you can't, for any model that reduces to GR (and Newtonian physics) at t=now. The problem is that the data says that q(now) = -0.6. I can calculate q_mymodel and q_GR, and if mymodel=GR for t=now, then q_mymodel must equal q_GR for t=now. (I'm fudging a bit, because the data really is q(almost now), but you get the point.)

Now if you assert that GR doesn't work for t=now, then we can drop apples and I can pull out my GPS. Also if you accept that GR is correct at t=now, then that strongly limits the possible gravitational theories for t=(almost now).

Also, if your model assumes that gravity is the same in all parts of the universe at a specific time, then you can mathematically express the difference between GR and your model by mathematically describing the differences in a(t).

However, the flat, matter-dominated GR model without a cosmological constant is a decelerating model that fits luminosity distance vs z data nicely for small z.

Acceleration is a second derivative, which means that if you have data at only one point, you can't calculate it. If you have only one z point, then mathematically you can't calculate acceleration. If you have three points that are close to each other, then you need extremely precise measurements of z to get acceleration, and there is a limit to how precise you can get z measurements.

If your z measurements are all at small redshift, then your error bars are large enough that you can't say anything about q, which is why people didn't.
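
To put a toy number on that (my own illustration, not anything from the actual SN analysis): estimating a second derivative from three closely spaced points amplifies per-point measurement noise by roughly 1/h^2, where h is the spacing.

```python
import numpy as np

# Toy illustration: second differences of closely spaced, slightly noisy points.
rng = np.random.default_rng(0)

def f(x):
    return np.sin(x)                 # "true" signal; its second derivative is -sin(x)

x0, h, sigma = 1.0, 1e-2, 1e-4       # point spacing h, per-point noise sigma
samples = f(np.array([x0 - h, x0, x0 + h])) + rng.normal(0.0, sigma, 3)
d2_est = (samples[0] - 2.0 * samples[1] + samples[2]) / h**2

print("true f''(x0)   :", -np.sin(x0))   # about -0.84
print("noisy estimate :", d2_est)        # error is of order sqrt(6)*sigma/h^2, here ~2.4
```

The same bookkeeping is why a handful of closely spaced low-z points with realistic error bars cannot pin down q.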

Therefore, I disagree with your claim that your definition of acceleration puts your SN kinematical results on par with terrestrial physics. I do not run into an ambiguity with the definition of acceleration in intro physics.

Note here that terrestrial physics is important. I would claim that knowing *only* that GR is correct within the Milky Way, plus about a half dozen "reasonable" assumptions (isotropy, homogeneity, causality), you can exclude deceleration. Once you've established that GR is correct within the Milky Way, then causality limits how much the geometry can change, and how different your model can be from GR.
 
  • #53
RUTA said:
In terrestrial physics we can make local measurements of position versus time and the spacetime metric is not a variable. In cosmological physics the spacetime metric is a variable and we can't directly measure position versus time, which is a local quantity in our theory of spacetime.

It gets a little messy. What you end up having to do is work with several different definitions of "distance" and "time", and it's important to keep those definitions straight. "Brightness distance", for example, ends up being different from "light travel distance".

But one important mathematical characteristic of any definition is that as you go to small distances, all of the different definitions of distance have to converge, and that turns out to give you a lot of constraints.

My claim (and a lot of this comes from being around people that do modified gravity models; I haven't worked this out myself) is that at z=1, "distance ambiguity" isn't enough to kill the observations. At z=10, you have a different story.

That is, the metric is spatiotemporally local in GR and an important variable in cosmology, yet we have no way to do local, direct measurements of spacetime intervals in cosmology.

We have no way of doing direct local measurements of Mars or Alpha Centauri either. Other than the fact that we have more data about Mars and Alpha Centauri, I don't see why cosmology is different.

And then there is GPS. For GPS to work, GR has to work to very, very tight tolerances, but figuring out where you are using GPS involves no local, direct measurements of spacetime intervals, and it turns out that getting the metric right is pretty essential for GPS to work. I don't see why cosmological measurements are more "suspicious" than GPS, other than the fact that people run GPS measurements more often.
 
  • #54
Thanks for your extensive replies. I could nitpick several points, but they don't bear on the main issue -- there are significant assumptions needed to do cosmological kinematics that are not needed in terrestrial kinematics and your post only serves to support this fact.
 
  • #55
RUTA said:
Thanks for your extensive replies. I could nitpick several points, but they don't bear on the main issue -- there are significant assumptions needed to do cosmological kinematics that are not needed in terrestrial kinematics and your post only serves to support this fact.

OK. Let's list them

1) the universe is large-scale isotropic
2) the universe is large-scale homogeneous
3) SR is correct locally (which implies that causality holds)
4) QM is correct locally
5) The true theory of gravity reduces locally to GR and then to Newtonian mechanics
6) There are no gravitational effects in redshift emission

I claim that with those assumptions you can read off the scale factor directly from the supernova results. I also claim that none of these assumptions is untestable. In particular, we know that the universe isn't perfectly isotropic and homogeneous, and we can test the limits.

One way of showing this is to do things in the Newtonian limit with a tiny bit of special relativity.

http://spiff.rit.edu/classes/phys443/lectures/Newton/Newton.html

Look specifically at the derivation of the luminosity equation.

No GR at all in that derivation, and you get out all of the numbers. The only thing that's close to GR is when they talk about the Robertson-Walker metric, but you can get that out of "isotropy + homogeneity + local SR". If you assume that isotropy and homogeneity hold and that special relativity works locally, then you end up with an expression for proper time.

So what I'm asserting is that to get the result that the universe is accelerating, you don't have to assume a precise cosmological model. You just have to assume isotropy + homogeneity + a gravity model that reduces to Newtonian gravity plus some SR.
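
For completeness, the core of that Newtonian argument (my sketch of standard textbook material, assuming a uniform density rho and a comoving test shell at radius r = a(t)x) is just energy conservation for the shell:

$$\frac{1}{2}\dot{r}^{2}-\frac{G M(r)}{r}=E,\qquad M(r)=\frac{4\pi}{3}\rho\,r^{3}
\quad\Longrightarrow\quad
\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho-\frac{k c^{2}}{a^{2}},$$

where the constant of integration E plays the role of the curvature term. Nothing specifically general-relativistic is needed to get a Friedmann-type equation for a(t).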
 
  • #56
I should point out that the assumptions of isotropy and homogeneity are pretty big assumptions.

What you are essentially saying is that if you can show that the laws of physics are X, Y, Z *anywhere*, then they are true *everywhere* at a given time. This means that if you want to know what happens if an apple drops at quasar 3C273, you don't have to go to 3C273. You drop an apple on Earth, and whatever it does on Earth, it's going to do that at 3C273. Having isotropy and homogeneity in space still allows the laws of physics to change over time, but not by much. We know, for example, that the fine structure constant and the gravitational constant didn't change by much over the last five billion years on Earth, and with the "magic assumption" this means they didn't change by much *anywhere*.
 
  • #57
Again, thanks for taking the time to explain exactly what you understand to be the assumptions needed to measure q. And, again, I could nitpick some of your statements, but I think it's easiest to simply compare your list of assumptions with those necessary to measure the acceleration of a ball rolling down an inclined plane.

1. Newtonian mechanics holds in the lab

And, I have direct access to all spatiotemporal regions needed to make the spatial and temporal measurements for the ball on the incline while, as you admit, you do not have comparable access in cosmology. Thus, we can find statements such as:

The first question is whether drifting observers in a perturbed, dust-dominated Friedmann-Robertson-Walker (FRW) universe and those following the Hubble expansion could assign different values (and signs) to their respective deceleration parameters. Whether, in particular, it is theoretically possible for a peculiarly moving observer to “experience” accelerated expansion while the Universe is actually decelerating. We find that the answer to this question is positive, when the peculiar velocity field adds to the Hubble expansion. In other words, the drifting observer should reside in a region that expands faster than the background universe. Then, around every typical observer in that patch, there can be a section where the deceleration parameter takes negative values and beyond which it becomes positive again. Moreover, even small (relative to the Hubble rate) peculiar velocities can lead to such local acceleration. The principle is fairly simple: two decelerated expansions (in our case the background and the peculiar) can combine to give an accelerating one, as long as the acceleration is “weak” (with −1 < q < 0, where q is the deceleration parameter) and not “strong” (with q < −1); see Sec. II C below. Overall, accelerated expansion for a drifting observer does not necessarily imply the same for the Universe itself. Peculiar motions can locally mimic the effects of dark energy. Furthermore, the affected scales can be large enough to give the false impression that the whole Universe has recently entered an accelerating phase.

in Phys Rev (the Sep 2011 Tsagas paper referenced earlier) concerning our understanding of cosmological kinematics, while no comparable publications will be found concerning the acceleration of balls rolling down inclined planes. And, while we have checked Newtonian physics, SR, GR, and QM on cosmologically small scales, any of these theories can be challenged on large scales simply because we don't have cosmological access. Modified Newtonian dynamics was proposed to explain dark matter, and in this paper: Arto Annila, “Least-time paths of light,” Mon. Not. R. Astron. Soc. 416, 2944-2948 (2011), the author "argues that the supernovae data does not imply that the universe is undergoing an accelerating expansion." http://www.physorg.com/news/2011-10-supernovae-universe-expansion-understood-dark.html.

Now you can argue that these challenges are baseless, but they were published in respected journals this year and I cannot say I've seen one such publication concerning the conclusion that balls accelerate while rolling down inclined planes in the intro physics lab.

Why is that? Because the assumptions required to conclude the universe is undergoing accelerating expansion are significant compared to those required to conclude a ball is accelerating as it rolls down an inclined plane. Thus my claim that cosmological kinematics is not on par with terrestrial kinematics.
 
  • #58
Keep in mind that I'm an insider here, i.e., I got my PhD in GR cosmology, I teach cosmology, astronomy and GR, I love this stuff! I've been doing some curve fitting with the Union2 Compilation, it's great data! I'm VERY happy with the work done by you guys! So, I don't want to sound unappreciative. I'm only saying what I think is pretty obvious, i.e., cosmology faces challenges that terrestrial physics doesn't face. Here are two statements by Ellis, for example (Class. Quantum Grav. 16 (1999) A37–A75):

The second is the series of problems that arise, with the arrow of time issue being symptomatic, because we do not know what influence the form of the universe has on the physical laws operational in the universe. Many speculations have occurred about such possible effects, particularly under the name of Mach’s principle‡, and, for example, made specific in various theories about a possible time variation in the ‘fundamental constants’ of nature, and specifically the gravitational constant (Dirac 1938). These proposals are to some extent open to test (Cowie and Songaila 1995), as in the case of the Dirac–Jordan–Brans–Dicke theories of a time-varying gravitational constant. Nevertheless, in the end the foundations of these speculations are untestable because we live in one universe whose boundary conditions are given to us and are not amenable to alteration, so we cannot experiment to see what the result is if they are different. The uniqueness of the universe is an essential ultimate limit on our ability to test our cosmological theories experimentally, particularly with regard to the interaction between local physics and the boundary conditions in the universe (Ellis 1999b). This therefore also applies to our ability to use cosmological data to test the theory of gravitation under the dynamic conditions of the early universe.


Appropriate handling of the uniqueness of the universe. Underlying all these issues is the series of problems arising because of the uniqueness of the universe, which is what gives cosmology its particular character, underlying the special problems in cosmological modelling and the application of probability theory to cosmology (Ellis 1999b). Proposals to deal with this by considering an ensemble of universes realized in one way or another are in fact untestable and, hence, of a metaphysical rather than physical nature; but this needs further exploration. Can this be made plausible? Alternatively, how can the scientific method properly handle a theory which has only one unique object of application?


Clearly, that's not an issue with balls rolling down inclined planes, so while I love cosmology, I keep it in proper perspective.
 
  • #59
RUTA said:
And, I have direct access to all spatiotemporal regions needed to make the spatial and temporal measurements for the ball on the incline while, as you admit, you do not have comparable access in cosmology.

I claim that you have comparable experiments. If gravity were markedly non-Newtonian at small scales, then you would end up with very different stellar evolution. The supernova mechanism is very sensitive to gravity.

And, while we have ck'd Newtonian physics, SR, GR, and QM on cosmologically small scales, any of these theories can be challenged on large scales simply because we don't have cosmological access.

So let's look at cosmologically small scales. If you take the latest supernova measurements and bin them, you can see acceleration at z<0.4 and z<0.1. OK, you might be able to convince me that "something weird" happens at z=1. But at 0.1 < z < 0.3, (v/c)^2 < 0.1, GR becomes Newtonian, and if something weird happens, then it's got to be very weird.

http://www.astro.ucla.edu/~wright/sne_cosmology.html

Also you have this paper...

Model and calibration-independent test of cosmic acceleration
http://arxiv.org/PS_cache/arxiv/pdf/0810/0810.4484v3.pdf

Now you can argue that these challenges are baseless, but they were published in respected journals this year and I cannot say I've seen one such publication concerning the conclusion that balls accelerate while rolling down inclined planes in the intro physics lab.

That's because sometimes things are so obvious that they aren't going to be published. For example, the Tsagas paper was published in Phys Rev. D. I really doubt that it would have been published in Ap.J. without some revision because the stuff in that paper was "general knowledge."

I haven't read the MNRAS paper, but my first reaction is "good grief, not another tired light model." The problem with tired light models is that anything that says "something weird happens to the light from supernova" means "something weird happens to the light from things beyond supernova." Now I haven't read the paper, so if the first thing he says is "I know that you aren't in the mood to see another tired light model, and I know the standard flaws with tired light, but..." then I'm interested. If in reading the paper he doesn't seem to have any notion of the standard problems with tired light models, then it goes in the trash.

Why is that? Because the assumptions required to conclude the universe is undergoing accelerating expansion are significant compared to those required to conclude a ball is accelerating as it rolls down an inclined plane.

OK, let's forget about the ball going downhill. What about GPS? What about observations of Alpha Centauri?

Also as far as what gets published where, that goes a lot into the sociology of science. And there is really no need for going into "proof by sociology". Write down all of the assumptions that go into GPS. Write down all of the assumptions that go into the accelerating universe. I claim that the lists aren't very different.

It's also bad to get into generalizations.

One other thing goes with the Columbus analogy. The 1997 low-z SN studies didn't see the accelerating universe because their measurements were not precise enough. However, if you restrict yourself to z<0.3, you can see the universe accelerate very clearly with 2011 data.

What that means is that Perlmutter and Riess were in some sense lucky. If Columbus hadn't discovered America, someone else would have. If you don't do high-z supernova studies and just do z<0.3, then someone would have spotted the acceleration by 2004, and that person would have gotten the Nobel.

That also means that Perlmutter/Riess shouldn't have gotten the Nobel for high-z supernova studies any more than Columbus gets known for being a good sailor.
 
  • #60
RUTA said:
Keep in mind that I'm an insider here, i.e., I got my PhD in GR cosmology, I teach cosmology, astronomy and GR, I love this stuff!

I'm also an insider. I got my Ph.D. in supernova theory.

particularly with regard to the interaction between local physics and the boundary conditions in the universe (Ellis 1999b). This therefore also applies to our ability to use cosmological data to test the theory of gravitation under the dynamic conditions of the early universe.

Which is true but in this situation irrelevant. We aren't talking about the early universe. For z=0.3, we are talking about lookback times of 3 billion years. There are rocks that are older than that. If you want to convince me that gravity was really different 10 billion years ago, that's all cool. If you want to convince me that gravity was really different 3 billion years ago, then that's going to take some convincing.

Underlying all these issues is the series of problems arising because of the uniqueness of the universe, which is what gives cosmology its particular character, underlying the special problems in cosmological modelling and the application of probability theory to cosmology

Again I don't see the relevance of this to supernova data. The universe is unique, but supernova, galaxies, and stars aren't.

Part of the way that you deal with difficult problems is to figure out when you can avoid the problem. There are a lot of deep theoretical problems when you deal with the early universe. The nice thing about supernova data is that you aren't dealing with the early universe. By the time you end up with supernova, you are in a part of the universe in which stars form and explode, which means that it's not completely kooky.

Proposals to deal with this by considering an ensemble of universes realized in one way or another are in fact untestable and, hence, of a metaphysical rather than physical nature; but this needs further exploration. Can this be made plausible? Alternatively, how can the scientific method properly handle a theory which has only one unique object of application?


Clearly, that's not an issue with balls rolling down inclined planes, so while I love cosmology, I keep it in proper perspective.

It's also not a problem with supernova. Also this is why the possibility that supernova Ia evolve is a much bigger hole than gravity. I would be very, very surprised if gravity worked very differently 3 billion years ago. I *wouldn't* be surprised if supernova Ia worked very differently 3 billion years ago since we don't really know what causes supernova Ia.

The fact that supernova Ia seem to be standard candles is an empirical fact, but we have *NO IDEA* why that happens. It's an assumption. We have observational reasons for that assumption, but it's an assumption.

Part of the reason why the supernova (and galaxy count) data is so strong is that we are *NOT* in weird physical regimes.
 
  • #61
twofish-quant said:
Which is true but in this situation irrelevant. We aren't talking about the early universe. For z=0.3, we are talking about lookback times of 3 billion years. There are rocks that are older than that. If you want to convince me that gravity was really different 10 billion years ago, that's all cool. If you want to convince me that gravity was really different 3 billion years ago, then that's going to take some convincing.
It's not the time evolution of the dynamical phenomena I’m questioning here (although, that is something people play with in cosmology), it's the fact that distance is not directly measurable at these scales. We can't lay meter sticks along the proper distance corresponding to z = 0.3, which in the GR flat, dust model with age of 14 Gy is 5.2 Gcy, i.e., 12% of the way to the particle horizon (42 Gcy). We certainly can't bounce radar signals off objects at z = 0.3, we can’t even bounce radar signals off the galactic center 30,000 cy away. Direct kinematical measurements are just not possible. And the various distance measures are already starting to differ significantly at z = 0.3. The light was emitted when the universe was 9.44 Gy old (same model), i.e., when the universe was only 2/3 its current age. Thus, the light traveled for (14 – 9.44)Gy = 4.6 Gy (where did you get 3 Gy?), so the time-of-flight distance is 4.6 Gcy which differs from the proper distance of 5.2 Gcy by 12%. And the difference between luminosity distance and proper distance is 30% in this model, i.e., lumin dist = (1+z)(prop dist).
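
A quick numerical sketch (my own check of the numbers quoted above, assuming the Einstein-de Sitter relations a ∝ t^(2/3), particle horizon 3ct0, and Dp = 3ct0[1 − (1+z)^(−1/2)]):

```python
# Flat, matter-only (Einstein-de Sitter) model with age t0 = 14 Gyr, z = 0.3.
# Working in units where c = 1 Gly/Gyr, so times in Gyr give distances in Gly ("Gcy").
t0, z = 14.0, 0.3

t_emit   = t0 / (1 + z)**1.5              # a ~ t^(2/3)  =>  1 + z = (t0 / t_emit)^(2/3)
t_flight = t0 - t_emit                    # light-travel time
D_hor    = 3 * t0                         # particle horizon
D_prop   = 3 * t0 * (1 - (1 + z)**-0.5)   # proper distance today
D_lum    = (1 + z) * D_prop               # luminosity distance in the flat model

print(t_emit, t_flight, D_hor, D_prop, D_lum)
# -> roughly 9.4 Gyr, 4.6 Gyr, 42 Gly, 5.2 Gly, 6.7 Gly, i.e. the numbers quoted above
```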

So, yes, concerns with the stability of physical law over cosmological time scales are an issue, and we hope that the laws were at least consistent since the formation of Earth (4.6 Gy ago). Of course, we don’t know that and can’t ever check it directly; that’s a limitation inherent in cosmology, as Ellis points out. But I’m also pointing out that we don’t know about the applicability of the laws as they currently stand over cosmological distances, and we can’t check that directly either.

I would be less skeptical if we had a well-established model of super unified physics. But, we don’t have super unified physics and we don’t know what such a theory might hold for our understanding of current physics, so it might be that the dark energy phenomenon is providing evidence that could help in our search for new fundamental physics. Therefore, I’m not willing to close theoretical options.

There seems to be a theme in our disagreement. You’re saying I should be more skeptical of the data and I’m saying you should be more skeptical of the theory. Sounds like we just live in two different camps :smile:
 
  • #62
RUTA said:
It's not the time evolution of the dynamical phenomena I’m questioning here (although, that is something people play with in cosmology), it's the fact that distance is not directly measurable at these scales. We can't lay meter sticks along the proper distance corresponding to z = 0.3, which in the GR flat, dust model with age of 14 Gy is 5.2 Gcy, i.e., 12% of the way to the particle horizon (42 Gcy).

We can't lay meter sticks to Alpha Centauri either.

And the difference between luminosity distance and proper distance is 30% in this model, i.e., lumin dist = (1+z)(prop dist).

So go down to z=0.1. The moment you move past the "local peculiar motions" you should (and, as the latest measurements indicate, we do) see the acceleration of the universe. Also, the luminosity distance equation is derivable from special relativity, so you *don't* need a specific cosmological model to get it to work.

What I don't get is how measurements of the cosmological constant are that different from measurements of, say, intergalactic hydrogen.

So, yes, concerns with the stability of physical law over cosmological time scales is an issue and we hope that the laws were at least consistent since the formation of Earth (4.6 Gy ago). Of course, we don’t know that and can’t ever check it directly, that’s a limitation inherent in cosmology as Ellis points out.

And this is where I disagree. If G or the fine structure constant were "different enough" at cosmological distances and times we'd see it.

You keep using the word "directly" as if there were some difference between direct and indirect measurements, and I don't see where that comes from.

I would be less skeptical if we had a well-established model of super unified physics.

I'd be skeptical of any model of super unified physics. I don't trust theory. What I'm arguing is that in the case of this specific data, I don't have to. Which is a good thing since these results depend crucially on the idea that SN Ia are standard candles, which is something that we have *NO* theoretical basis to believe.

There seems to be a theme in our disagreement. You’re saying I should be more skeptical of the data and I’m saying you should be more skeptical of the theory. Sounds like we just live in two different camps :smile:

Actually I would have thought that it was the opposite. I think you should be less skeptical of the data and more skeptical of the theory.

It's pretty obvious that we have some deep philosophical disagreement on something, but right now it's not obvious what that is.
 
  • #63
twofish-quant said:
So go down to z=0.1. The moment you move past the "local peculiar motions" you should (and as the latest measurements indicate that we do) see the acceleration of the universe.

If I confine myself to z < 0.1 in the Union2 Compilation and fit log(DL/Gpc) vs log(z) with a line, I get R = 0.9869 and a sum of squares error (SSE) of 0.208533. If I fit the flat, dust model of GR, I get SSE of 0.208452 for Ho = 68.6 km/s/Mpc (only parameter). If I fit the LambdaCDM model, I get SSE of 0.208086 for Ho = 69.0 km/s/Mpc and OmegaM = 0.74 (two parameters here). That is, both an accelerating and a decelerating model fit the data equally well. Now using all the Union2 data (out to z = 1.4), I find a best fit line with R = 0.9955 and SSE of 1.95. LCDM gives SSE of 1.79 for Ho = 69.2 and OmegaM = 0.29. The flat, dust model of GR gives SSE of 2.68 for Ho = 60.9. Now it's easy to see that the accelerating model is superior to the decelerating model. But you need those large z, and that's where assumptions concerning the nature of distance matter.
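
For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of the fit described above (my own construction, not the actual code used; the data file name and two-column format are hypothetical, since the Union2 release tabulates distance moduli rather than DL in Gpc):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C = 2.998e5  # speed of light in km/s

def dl_flat_dust(z, H0):
    # Flat, matter-only (Einstein-de Sitter) luminosity distance in Gpc
    return 2 * C / H0 * (1 + z) * (1 - 1 / np.sqrt(1 + z)) / 1e3

def dl_lcdm(z, H0, Om):
    # Flat LambdaCDM luminosity distance in Gpc
    E = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + (1 - Om))
    Dc = np.array([quad(E, 0, zi)[0] for zi in np.atleast_1d(z)])
    return (1 + z) * C / H0 * Dc / 1e3

def sse_loglog(dl_model, dl_obs):
    # Sum of squared residuals in log10(DL/Gpc), i.e. the "SSE" quoted above
    return np.sum((np.log10(dl_obs) - np.log10(dl_model))**2)

# Hypothetical two-column file: redshift z, luminosity distance DL in Gpc
z, dl_obs = np.loadtxt("union2_dl.txt", unpack=True)

fit_dust = minimize(lambda p: sse_loglog(dl_flat_dust(z, p[0]), dl_obs),
                    x0=[70.0], method="Nelder-Mead")
fit_lcdm = minimize(lambda p: sse_loglog(dl_lcdm(z, p[0], p[1]), dl_obs),
                    x0=[70.0, 0.3], method="Nelder-Mead")

print("flat dust:  H0 =", fit_dust.x[0], " SSE =", fit_dust.fun)
print("LambdaCDM:  H0, OmegaM =", fit_lcdm.x, " SSE =", fit_lcdm.fun)
```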

twofish-quant said:
Also the luminosity distance equation is derivable from special relativity, so you *don't* need a specific cosmological model to get it to work.

DL = (1+z)Dp only in the flat model. DL depends on spatial curvature in GR cosmology, so it's related differently to Dp in the open and closed models. Here is a nice summary:

http://arxiv.org/PS_cache/astro-ph/pdf/9905/9905116v4.pdf
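
For reference, the distance relations summarized in that review (my transcription; what is called Dp above corresponds to the comoving distance D_C there) are

$$D_C=\frac{c}{H_0}\int_0^{z}\frac{dz'}{E(z')},\qquad D_L=(1+z)\,D_M,\qquad
D_M=\begin{cases}
\dfrac{c}{H_0\sqrt{\Omega_k}}\,\sinh\!\left(\sqrt{\Omega_k}\,\dfrac{H_0 D_C}{c}\right), & \Omega_k>0,\\[2ex]
D_C, & \Omega_k=0,\\[2ex]
\dfrac{c}{H_0\sqrt{|\Omega_k|}}\,\sin\!\left(\sqrt{|\Omega_k|}\,\dfrac{H_0 D_C}{c}\right), & \Omega_k<0,
\end{cases}$$

so DL = (1+z)Dp holds exactly only in the spatially flat case, as stated above.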

twofish-quant said:
You keep using the word "directly" as if there were some different between direct and indirect measurements, and I don't see where that comes from.

So, you need large z to discriminate between accelerating and decelerating GR models and the relationship between what you "measure" (DL) and what tells you the universe is accelerating or decelerating (Dp) is model dependent at large z. Therefore, without a means of measuring Dp directly, your conclusion that the universe is undergoing accelerating expansion is model dependent.

You claim to have a super general model in which you can detect acceleration at small z using only the six assumptions given earlier. If your model is super general, then it must subsume the GR models I'm using above (they certainly meet your assumptions). Thus, if you can indeed show acceleration at z < 0.1 using your super general model, there must be a mistake in my calculations. Can you show me that mistake?
 
  • #64
RUTA said:
You claim to have a super general model in which you can detect acceleration at small z using only the six assumptions given earlier. If your model is super general, then it must subsume the GR models I'm using above (they certainly meet your assumptions). Thus, if you can indeed show acceleration at z < 0.1 using your super general model, there must be a mistake in my calculations. Can you show me that mistake?

I can't, but Seikel and Schwarz have written a paper on this topic:

Model- and calibration-independent test of cosmic acceleration
http://arxiv.org/PS_cache/arxiv/pdf/0810/0810.4484v3.pdf

Their claim is that with 0.1<z<0.3 and the assumption of isotropy and homogeneity, the universe is accelerating. They don't try to fit to a GR model, but rather use nearby supernovae to compare against those that are far away.

Also, I seem to have misread their paper. They can show that the acceleration holds if you *either* take the low-redshift sample with a flat or closed universe *or* take all the data and then vary the GR model. They didn't explicitly cover the case where you both take the low-redshift sample *and* vary the model parameters.

However, the question is: if you can't see acceleration at z=0.1 but you can with z=1.4, what's the minimum set of data that you need to see acceleration? The answer seems to be closer to z=0.1 than z=1.4.

The other point is that there is an industry of papers that try to make sense of the supernova data with model-independent approaches. A search on adswww.harvard.edu with the terms "model independent" and supernova gets you this...

Bayesian Analysis and Constraints on Kinematic Models from Union SNIa
http://arxiv.org/abs/0904.3550

A Model-Independent Determination of the Expansion and Acceleration Rates of the Universe as a Function of Redshift and Constraints on Dark Energy
http://adsabs.harvard.edu/abs/2003ApJ...597...9D

Improved Constraints on the Acceleration History of the Universe and the Properties of the Dark Energy
http://adsabs.harvard.edu/abs/2008ApJ...677...1D

(One cool thing that Daly does is that she looks at angular distance.)

Model independent constraints on the cosmological expansion rate
http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0981v2.pdf

The general theme of those papers is that instead of fitting against a specific model, they parameterize the data and figure out what can be inferred from it.

Here is a cool paper.

Direct evidence of acceleration from distance modulus redshift graph
http://arxiv.org/PS_cache/astro-ph/pdf/0703/0703583v2.pdf
 
  • #65
The other thing is that we need to be careful about the claims:

1) What *can* be shown with current SN data?
2) What *was* shown in 1998, 2002, 2008 with supernova data?
3) What can be shown with other data?

Also establishing what happens at 0.1<z<0.3 is important because somewhere between z=0.3 and z=0.5, the acceleration turns into a deceleration.

The other thing I think we agree on (which is why I'm arguing the point) is that if it turns out that you need to fit GR expansion curves to z=1 / 1.4 to establish that there is acceleration at low z's, then you are screwed.
 
  • #66
Something else that I noticed: if you do a best fit of the Union supernova data, you are getting H_0=69.0 at z=0.1 regardless of model. However, if you measure the Hubble constant to the nearest galaxies, you end up getting H_0=74.0 +/- 3.0.

http://hubblesite.org/pubinfo/pdf/2011/08/pdf.pdf

Hmmmmmm...

Now since you have data, I'd be interested in seeing what your fits look like if you fix H_0=74.0 at z=0. Once you fix that number, my guess is that decelerating models no longer fit the nearby supernova data. We can get the number for H_0 from the type of measurements that de Vaucouleurs and Sandage have been doing since the 1970s.

Now you can argue that we don't really know that H_0=74.0 at z=0, since there are local measurements that are lower than that, or you could argue that there is some apples/oranges effect. These are valid arguments, but they involve observational issues that have nothing to do with the gravitational model.
 
  • #67
Thanks for the Seikel and Schwarz reference, hopefully I can use this to clarify my philosophical position.

I have no qualms with their analysis or conclusion, which means that, given their assumptions, I agree the SN data out to z = 0.2 indicates accelerated expansion. I don’t contest their assumption of homogeneity and isotropy, and they take into account positive and negative spatial curvature. The assumption I want to relax (there could be others) is DL = (1+z)Dp in flat space, i.e., the assumed relationship between what we “measure,” luminosity distance (DL), and what we use to define expansion rate, proper distance (Dp). They make this assumption in obtaining Eq 2 from Eq 1 (Dp = (c/Ho) ln(1+z) in the empty universe), with the counterparts in open and closed universes assumed in Eq 8. But, suppose that DL = (1+z)Dp is only true for ‘small’ Dp. Then the challenge is to find a DL as a function of Dp for a spatially flat, homogeneous and isotropic model (so as to keep in accord with WMAP data) that reduces to DL = (1+z)Dp for ‘small’ Dp and, therefore, doesn’t change kinematics at z < 0.01 (so as not to affect Ho measurements), and that gives a decelerating universe with the SN data. Does this require new physics? Yes, but so does accepting an accelerating universe (it requires an otherwise unmotivated cosmological constant, quintessence, f(R) gravity, etc.).

Thus, I’ve been arguing for more theoretical skepticism. By subscribing to the belief that we’ve “discovered the accelerating expansion of the universe,” we’re ruling out theoretical possibilities that involve decelerated expansion (the one I’ve pointed out and possibly others). Why would you restrict your explanation of the data to accelerating options when either way you’ve got to invoke new physics? That strikes me as unnecessarily restrictive. That’s my point.
 
  • #68
Perhaps I am mistaken, as I don't have a good grasp of the math of all this, but isn't the accelerating universe model the "best fit" to the data? Would assuming that DL=(1+z)Dp is true only for small Dp be a less reasonable assumption than assuming it is true for all values? Do we have any real reason for believing that?
 
  • #69
Drakkith said:
Perhaps I am mistaken, as I don't have a good grasp of the math of all this, but isn't the accelerating universe model the "best fit" to the data?

I have not seen an alternative to accelerated expansion that fits the data as well as the concordance model (LambdaCDM).

Drakkith said:
Would assuming that DL=(1+z)Dp is true only for small Dp be a less reasonable assumption than assuming it is true for all values? Do we have any real reason for believing that?

It is an example of an alternative assumption that might be made because we don't measure Dp directly. Whether someone would consider alternatives to the assumptions required to render an accelerated expansion depends on their particular motivations. I'm not here to argue for or against any particular assumption, I'm using this as an example to convey a general point. If you keep all the assumptions that lead to accelerated expansion, then you're left having to explain the acceleration. So, why close the door on alternative assumptions motivated by other ideas for new physics that lead to decelerated expansion? But, when the community says they've discovered the accelerated expansion of the universe, that's exactly what they're doing. If in, say, 20 years we have a robust unified picture of physics and it points to and explains accelerated expansion, I will be on board. I'm not arguing *against* accelerated expansion. I'm arguing for skepticism.
 
  • #70
RUTA said:
The assumption I want to relax (there could be others) is DL = (1+z)Dp in flat space, i.e., the assumed relationship between what we “measure,” luminosity distance (DL), and what we use to define expansion rate, proper distance (Dp).

And that's a perfectly reasonable thing to do. However, one thing you quickly figure out is that in order to fit the data, you end up with relationships that are not allowed by GR. Basically, to explain the data, you have to assume that space is negatively curved more than is allowed by GR.

One other thing is that there are observational limits on what you can assume for DL. You can argue all sorts of weird things for the relationship between DL and Dp, but it's much harder to argue for weird things in the relationship between DL and Da (angular distance), and there are observational tests for angular distance. Also, if you have a weird DL/Dp relationship, then there are implications for gravitational lensing.

But, suppose that DL = (1+z)Dp is only true for ‘small’ Dp. Then the challenge is to find a DL as a function of Dp for a spatially flat, homogeneous and isotropic model (so as to keep in accord with WMAP data)

Whoa. This doesn't work at all...

It's known that you *cannot* come up with a DL/Dp relationship that fits the data and still reduces to general relativity. You can try every DL-Dp relationship that is allowed by GR, and it doesn't work. Basically you want to spread out the light as much as possible. If the universe is negatively curved, that spreads out light more, but maximum negative curvature occurs when the universe is empty, and even then, it's not going to fit.

So you can throw out GR. That's fine, but if you throw out GR, then you have to reinterpret the WMAP data with your new theory of gravity, at which point there is no theoretical evidence for a flat, homogeneous, isotropic model, since you've thrown out the theoretical basis for concluding that there is one.

The "problem" with the cosmic acceleration is that it's not a "early universe" thing. If you throw out all of the data we have for z<0.5, then everything fits nicely with a decelerating universe. Acceleration only starts at between z=0.3 and z=0.5, and increases as you go to z=0.0. This poses a problem for any weird theory of gravity, because you'd expect things to go in the opposite direction. The higher the z, the more weird gravity gets.

But that's not what we see.

that reduces to DL= (1+z)Dp for ‘small’ Dp and, therefore, doesn’t change kinematics at z < 0.01 (so as not to affect Ho measurements), and that gives a decelerating universe with the SN data.

And then you end up having to fit gravitational lensing statistics and cosmological masers with your model as well. The thing about those is that they give you angular distance.

Also as we get more data, it's going to be harder to get things to work. New data is coming in constantly, and as we get new data, the error bars go down.

Does this require new physics? Yes, but so does accepting an accelerating universe (requires cosmological constant which is otherwise unmotivated, quintessence, f(R) gravity, etc).

Sure. I don't have a problem with new physics, but new physics has got to fit the data, and that's hard since we have a lot of data. One reason I like *this* problem more than talking about quantum cosmology at t=0 is that for t=0, you can make up anything you want. The universe was created by Fred the cosmic dragon. There is no data that tells you otherwise.

For cosmic acceleration, things are data driven.

Thus, I’ve been arguing for more theoretical skepticism. By subscribing to the belief that we’ve “discovered the accelerating expansion of the universe,” we’re ruling out theoretical possibilities that involve decelerated expansion

And the problem with those theoretical possibilities is that for the most part they don't fit the data. The data is such that no gravitational theory that reduces to GR at intermediate z will fit the data. That leaves you with gravitational theories that don't reduce to GR, at which point you are going to have problems with gravitational lensing data.

Also, there *are* viable theoretical possibilities that don't involve weird gravity. The most likely explanation of the data that doesn't involve acceleration are that we are in an odd part of the universe (i.e. a local void) or that there is weird evolution of SN Ia. However, in both those cases, one should expect that they become either less viable or more viable as you have new data.

Why would you restrict your explanation of the data to accelerating options when either way you’ve got to invoke new physics?

Because once you try to invoke new physics, you find that it doesn't get rid of the acceleration or blows up for some other reason (so people have told me, I'm not an expert in modified gravity).

Where the signal happens is important. If you tell me that gravity behaves weird at z=1, then I'm game. If you tell me that gravity behaves weird at z=0.1, then you are going to have a lot of explaining to do.

Also you don't have to invoke new physics. There are some explanations for the data that invoke *NO* new physics. The two big ones are local void or SN Ia evolution.

That strikes me as unnecessarily restrictive. That’s my point.

And people have been thinking about alternative explanations. The problem is that for the most part, they don't fit the data.

The other thing is that there are some things that have to do with the sociology of science. Working on theory is like digging for gold. There is an element of luck and risk. Suppose I spend three years working on a new theory of gravity, and after those three years I come up with something that fits the data as of 2011. The problem is that this is not good enough. The error bars are going down, so I'm going to have to fit the data as of 2014, and if it turns out that it doesn't, then I've just wasted my time that I could have spent looking for gold somewhere else.

On the other hand if I spend my time with local void and SN Ia models, then even if it turns out that they don't kill cosmic acceleration, I still end up with something useful at the end of the effort.
 
  • #71
Drakkith said:
Perhaps I am mistaken, as I don't have a good grasp of the math of all this, but isn't the accelerating universe model the "best fit" to the data?

I don't think this is a good way of thinking about the problem, since "best fit" by itself really means nothing. The problem is that the reasoning is circular. In order to have a "best fit" you have to have a model of the problem, which is a problem if you don't understand what is going on. If you don't have a model for what is going on, then how can you tell if one fit is "better" than another?

One reason I'm arguing with RUTA is that I do think we would have a serious problem if cosmologists were doing what he thinks they are doing, but they aren't.

What is better is to look at the data, go through all of the possible explanations, and then see which ones are excluded and which ones are allowed. As you get more data, the number of viable explanations goes down.

Would assuming that DL=(1+z)Dp is true only for small Dp be a less reasonable assumption than assuming it is true for all values?

Doesn't matter. The problem is that when you are dealing with unexpected data, there is no basis for figuring out what is a "reasonable assumption." So what you do is to assume that you've got the relationship wrong, and then see what happens.

In fact, what happens is that you end up with a Taylor expansion, and for small z, the first term is (1+z)Dp.
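
To be a bit more precise about that expansion: the standard low-redshift kinematic series for the luminosity distance, which assumes only a smooth a(t) and not any particular gravity theory, is

$$D_L(z)=\frac{cz}{H_0}\left[1+\frac{1}{2}\,(1-q_0)\,z+O(z^{2})\right],$$

so to leading order every homogeneous, isotropic model gives the same Hubble-law behavior, and q_0 only enters at second order in z, which is why you need either very precise low-z data or data at larger z to pin it down.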

Do we have any real reason for believing that?

GR says that DL *isn't* (1+z)Dp for curved spacetime. However, GR also puts some limits on what the relationship between DL and Dp can be.
 
  • #72
twofish-quant said:
What is better is to look at the data, go through all of the possible explanations, and then see which ones are excluded and which ones are allowed. As you get more data, the number of viable explanations goes down.

I'm not sure I see the difference between this and what I said. Is that not a "best fit"? Or am I missing a key point between the two?
 
  • #73
RUTA said:
I have not seen an alternative to accelerated expansion that fits the data as well as the concordance model (LambdaCDM).

I have. Void models seem to work, but they have other problems. Also, as a supernova geek, I'm *really* worried about the assumption that SN Ia are standard candles, but fortunately people are reproducing the data with other distance measures.

One other problem is what the "fit" tells you. For example, I can take the data and draw a line through it, but that tells me nothing.

Whether someone would consider alternatives to the assumptions required to render an accelerated expansion depends on their particular motivations.

I don't think it really does have much to do with motivations.

If you keep all the assumptions that lead to accelerated expansion, then you're left having to explain the acceleration. So, why close the door on alternative assumptions motivated by other ideas for new physics that lead to decelerated expansion?

1) First of all, you eliminate the low-hanging fruit first. There are a *LOT* of possible explanations for the data that involve no new physics at all. As we get more and more data, those explanations become less and less plausible.

2) Second of all, no one is closing the door on new physics that leads to decelerated expansion. The trouble is that no one has come up with a model that fits the data. There is an entire industry of people working on alternative gravity models.

But, when the community says they've discovered the accelerated expansion of the universe, that's exactly what they're doing.

What people are saying is that we've spent ten years trying to come up with explanations, and none of them seem to work. If saying "we've discovered accelerated expansion" is stopping people from looking into modified gravity models, that's a bad thing, but I see no evidence that this is the case. It's the reverse, the modified gravity people are telling us that they've tried to come up with alternative explanations, and none of them seem to work.

They might come up with something tomorrow, but if you look for Bigfoot and can't find him, then maybe it's because he isn't there.

If in, say, 20 years we have a robust unified picture of physics and it points to and explains accelerated expansion, I will be on board. I'm not arguing *against* accelerated expansion. I'm arguing for skepticism.

What I'm saying is that if you assume X, Y, and Z, you get acceleration. I then go through X, Y, and Z and explain the current state of research for each.

Also, part of the point here is to figure out what we need to research next.
 
  • #74
Twofish, you and I could continue to discuss technical details associated with alternative assumptions, but unless I can keep that conversation centered on a published paper, such a discussion would violate forum rules (rightfully so, it’s too speculative). Thankfully, that discussion is not essential to the point at hand.

You have presented arguments for your claim that the accelerated expansion of the universe has been directly measured, where by “directly measured” you mean in a sense equivalent to measuring the acceleration of a car on the street or a ball rolling down an inclined plane. I was very interested in these arguments because if I could be convinced that the acceleration was “directly measured,” I would certainly accept it as “fact” and forgo any attempt to explore decelerating alternatives. While you have failed to convince me that we have “discovered the accelerating expansion of the universe,” i.e., that we have indeed “directly measured” accelerating expansion, this discussion allows readers to see why such a claim is made and why it is challenged. They can now make a more informed decision as to whether to believe or remain skeptical.
 
  • #75
Drakkith said:
I'm not sure I see the difference between this and what I said. Is that not a "best fit"? Or am I missing a key point between the two?

Twofish, you say you've seen fits to the SN data that match LCDM. [I'm inferring that these are decelerating models given the context in which you made that statement.] Of course, you can't share them here if they're not published, but do you have any published examples? The decelerating fits I've seen are all discernibly weaker than LCDM at large z.
 
  • #76
RUTA said:
Twofish, you say you've seen fits to the SN data that match LCDM. [I'm inferring that these are decelerating models given the context in which you made that statement.] Of course, you can't share them here if they're not published, but do you have any published examples? The decelerating fits I've seen are all discernibly weaker than LCDM at large z.

Are you meaning to quote me and talk to twofish, or is that just a mistake?
 
  • #77
Drakkith said:
Are you meaning to quote me and talk to twofish?

Yep.
 
  • #78
Here is an example of what I'm talking about:
Figure 2 in arXiv:gr-qc/0605088v2 (published in Class. Quant. Grav.). You can see the two curves (m vs z) diverging at z = 0.8. The figure stops at z = 1, but if the divergence continues at this rate, the fit would be terrible at z = 1.4 (end of Union2 Compilation, for example).

Here is another example:
http://www.physorg.com/newman/gfx/news/2011/supernovaelight2.jpg (published in Mon. Not. R. Ast. Soc.). He doesn't show the LCDM fit, but I've done this fit (mu vs log(z)) with the SN data in this range and LCDM is discernibly better at large z than this fit.

Anyway, Twofish, if you know of any decelerating models that fit the SN data at large z as well as LCDM, please let me know.
 
  • #79
Here is an example from the Supernova Cosmology Project website showing the difference between accelerating and decelerating cases being determined at large z (0.2 and up):

http://supernova.lbl.gov/PDFs/HubbleDiagramPhysicsToday.pdf

I'm working on a sum of squares error for Annila's version of mu (Mon. Not. R. Astron. Soc. 416, 2944–2948 (2011)) using linearized Union2 data from the SCP website. Then we can see how it compares to LCDM's 1.79 and the flat, dust-filled model's 2.68 posted earlier in this thread. Annila shows a fit of mu vs log(z) using data from the SCP website (Fig 3 of his paper), but he does not provide an SSE. Annila's mu = 5log(z*c*T*sqrt(1+z)/10 pc) (obtained via Eq 4 of his paper) so it has only one fitting parameter, T, age of the universe. In that same paper, he has DL = c*T*z/(1+z), so I notice he is not using mu = 5log(DL/10 pc). I'm hoping Twofish will have something to say about that. Anyway, his mu vs z with T = 14Gy maps roughly (eyeball) to LCDM with Ho = 65 km/s/Mpc, OmegaM = 0.24 and OmegaL = 0.76 (best fit for mu vs log(z) in Fig 4 of arXiv:astro-ph/9805201v1 which appeared in Ap. J.). Here is that "eyeball fit"

http://users.etown.edu/s/stuckeym/Plot 15Gy.pdf

Green curve is Annila and red is LCDM. In Fig 3 of his paper, his "best fit" uses T = 13.7Gy, but it looks weak at larger z. I'll let you know what I find.
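
For anyone who wants to redo the comparison, here is a minimal sketch (my own reconstruction of the "eyeball" comparison above; the parameter values are the ones quoted in this post, not a new fit):

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 2.998e5        # speed of light in km/s
PC_PER_GLY = 3.066e8   # parsecs in one giga-light-year

def mu_annila(z, T_gyr):
    # Annila's Eq 4 as quoted above: mu = 5 log10( z * c*T * sqrt(1+z) / 10 pc )
    d_pc = z * (T_gyr * PC_PER_GLY) * np.sqrt(1 + z)
    return 5 * np.log10(d_pc / 10.0)

def mu_lcdm(z, H0, Om):
    # Flat LambdaCDM distance modulus, mu = 5 log10( DL / 10 pc )
    E = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + (1 - Om))
    dl_pc = (1 + z) * (C_KMS / H0) * quad(E, 0, z)[0] * 1e6
    return 5 * np.log10(dl_pc / 10.0)

for z in [0.01, 0.1, 0.3, 0.7, 1.0, 1.4]:
    print(f"z = {z:4.2f}   mu_Annila(T=14 Gyr) = {mu_annila(z, 14.0):6.2f}   "
          f"mu_LCDM(H0=65, Om=0.24) = {mu_lcdm(z, 65.0, 0.24):6.2f}")
```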
 
  • #80
Oops, that comparison of Annila with LCDM was using T = 15Gy, not 14Gy. Here's the comparison using T = 14Gy:

http://users.etown.edu/s/stuckeym/Plot 14Gy.pdf

In this figure you can see Annila is a bit lower than LCDM at high z, which is consistent with the curve in his Fig 3 looking like it's a bit low at high z using T = 13.7Gy per his figure caption.

http://users.etown.edu/s/stuckeym/Annila Figure 3.jpg

The best fit for Annila gave SSE = 1.95 (same as the best-fit line) using T = 14.9Gy. For T = 13.7Gy (per his caption) I have SSE = 2.69 (same as the best-fit flat, dust-filled model). To remind you, I had SSE = 1.79 for LCDM using Ho = 69.2 km/s/Mpc, OmegaM = 0.29 and OmegaL = 0.71. So, Annila's model isn't as good as LCDM, but it's an improvement over the flat, dust-filled model (LCDM without Lambda).
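
If anyone wants to reproduce that kind of number, here is a rough sketch of the one-parameter SSE fit. The file name and two-column (z, mu) format are hypothetical stand-ins for however you store the linearized Union2 data from the SCP website, and the exact SSE will depend on that linearization.

Code:
# Sketch: one-parameter SSE fit of Annila's distance modulus to (z, mu) pairs.
# "union2_linearized.txt" is a hypothetical two-column file: z, mu.
import numpy as np
from scipy.optimize import minimize_scalar

PC_PER_GYR = 306.601e6   # light-travel distance in 1 Gyr, in parsecs

def mu_annila(z, T_gyr):
    return 5.0 * np.log10(z * T_gyr * PC_PER_GYR * np.sqrt(1.0 + z) / 10.0)

z_obs, mu_obs = np.loadtxt("union2_linearized.txt", unpack=True)

def sse(T_gyr):
    # sum of squared residuals between the model and the tabulated moduli
    return np.sum((mu_annila(z_obs, T_gyr) - mu_obs) ** 2)

best = minimize_scalar(sse, bounds=(5.0, 30.0), method="bounded")
print(f"best-fit T = {best.x:.2f} Gy, SSE = {best.fun:.3f}")
print(f"SSE at T = 13.7 Gy: {sse(13.7):.3f}")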
 
  • #81
Here is a paper that was just accepted at Class. Quant. Grav. I couldn't say anything about it before since it wasn't yet accepted, but it fits the Union2 Compilation data with a decelerating universe just as well as LambdaCDM does. It's a flat, matter-dominated universe, and as far as I can tell it shouldn't have any problems with WMAP either, although I'd be interested in comments in that regard.
 

  • #82
Our essay (http://users.etown.edu/s/STUCKEYM/GRFessay2012.pdf) “Explaining the Supernova Data without Accelerating Expansion” won Honorable Mention in the Gravity Research Foundation 2012 Awards for Essays on Gravitation.

http://www.gravityresearchfoundation.org/announcements.html

There's a nice quote in the essay from Yousaf Butt at the Harvard-Smithsonian Center for Astrophysics:

Various alternatives to an accelerating universe have also been proposed (see, for example, C. Tsagas, Phys. Rev. D 84, 063503 (2011)). Whether such alternatives are viable remains to be seen, but the Nobel Committee for Physics has perhaps acted somewhat prematurely by selecting a preferred interpretation of the supernova projects’ data. The effect, intentional or not, is to bully the skeptics into silence, self-censorship, or ridicule, whereas good science proceeds with a healthy dose of skepticism and with open minds.

There were some big names in the Honorable Mention list to include Jacob D. Bekenstein, Carlo Rovelli, and Ted Jacobson, so we were indeed “honored” to be “mentioned” in that list :-)

The essay is based on our March 2012 paper (see previous post) in Classical and Quantum Gravity (http://arxiv.org/abs/1110.3973) where we fit the supernova data without accelerating expansion or dark energy by suggesting a correction to GR. The idea for proposing such a correction to GR comes from our interpretation of quantum mechanics as described most recently in our April 2012 paper in Foundations of Physics (http://arxiv.org/abs/1108.2261).
 
  • #83
The problem with the papers is that it's not obvious that the universe ends up decelerating. The data presentation doesn't include the standard delta distance modulus diagram, and there's no comparison of the a(t) evolution over time. They assert that their model ends up with a decelerating universe, but nowhere did I see the graphs to *show* that it does.

The other problem is that the paper tries to do two things at once: it ends up with a new theory of gravity and then tries to show that this results in a decelerating universe. I would have liked to see a graph of a(t) using their best-fit parameters versus a graph of a(t) in the standard cosmology, along with more discussion about where the differences come from. They do two things: they change the DL <-> DM factor, and they also have a new evolution equation for a(t). Which one causes the universe to decelerate?
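
To make the kind of check I mean concrete: for a standard FRW model you can just plot the deceleration parameter q(z) and see where (or whether) it changes sign. Here's a minimal sketch for flat LCDM versus Einstein-de Sitter; this is only the standard FRW bookkeeping, not their modified dynamics.

Code:
# Deceleration parameter q(z) for flat LCDM vs Einstein-de Sitter (EdS).
# q > 0 means decelerating, q < 0 means accelerating; standard FRW only.
import numpy as np

def q_lcdm(z, Om=0.29, OL=0.71):
    """q(z) = Om(z)/2 - OL(z) for a flat matter + Lambda universe."""
    E2 = Om * (1.0 + z) ** 3 + OL
    return 0.5 * Om * (1.0 + z) ** 3 / E2 - OL / E2

def q_eds(z):
    """Flat, matter-only (EdS): q = 1/2 at every redshift."""
    return 0.5

for z in [0.0, 0.3, 0.5, 0.7, 1.0, 2.0]:
    print(f"z = {z:3.1f}  q_LCDM = {q_lcdm(z):+.3f}  q_EdS = {q_eds(z):+.3f}")
# With these parameters q_LCDM changes sign near z ~ 0.7: decelerating at
# high z, accelerating at low z, while EdS decelerates at all times.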
 
  • #84
One thing that bothers me is that it looks like an example of "tired light" and there are reasons to rule out those models...

http://en.wikipedia.org/wiki/Tired_light
http://en.wikipedia.org/wiki/Tolman_surface_brightness_test

One thing that wasn't clear from the papers was how much change in the modulus is needed to eliminate acceleration. One way of showing this would be to graph DP_GR versus DP_new_model. Once you have that number, show it to some observers, and I'm pretty sure they'll consider the amount of darkening you need to be out of bounds.
 
  • #85
The more I think about it, the more the paper looks like a weak tired light model.
 
"Tired light" was a class of cosmological models that assumed that GR did something weird to light so that the universe was not expanding. In this situation, the assertion is that the universe is expanding, but that GR is doing something to light to make it look like the universe is expanding more quickly than it really is. At that point, the experimental evidence against "tired light" becomes important.

So my guess is that if someone goes through the papers on tired light, they will find one or more experiments that kill the idea. I'm guessing that someone already did this, but it's not publishable to find that yes, "weak tired light" doesn't work.

Now if it turns out that the experimental evidence doesn't rule out "weak tired light" then you've got a paper.

The general way of presenting unconventional results is to present the paper as something that will confirm the conventional result. If you find something that supports the "weak tired light" claim, call it an anomaly that requires further investigation, and say that if you find X, that will support the prevailing theory. Of course, you may be of the opinion that people will find not-X, at which point you act surprised.

The problem with the paper as written is that it's a quantum gravity paper and not an observational astronomy paper. The parameter that causes false acceleration to be observed is a free parameter, and I know that if I twist the parameters hard enough, I can get whatever result I want. The question I'd be interested in is "how hard do you have to twist the parameters," and are there any observational blocks to twisting those parameters?
 
  • #86
Thanks for your response, twofish. The evolution of the modified equations follows a(t) for Einstein-deSitter very precisely all the way back to the stop point, so it is in fact decelerating. Also, there is no mechanism causing light to redshift in transit as in tired light. We are proposing a different mechanism altogether for the coupling of EM sources in an FRW universe.

Do we believe astrophysicists should be exploring such a proposed change to general relativity? No. The proposed modification has serious consequences for many other things that work well, i.e., all those associated with the Schwarzschild solution. Until those ramifications are fleshed out, the idea is largely worthless for astrophysics. We are working on the Schwarzschild modifications now and that issue will be resolved in the next year or two.

The reason the paper is published in CQG is, as you point out, because it’s a paper on quantum gravity rather than astrophysics. The reason I posted it here is, as I argued earlier, because I believe the Nobel citation claiming “the discovery of the accelerating expansion of the universe” is premature. Is accelerating expansion the best explanation of the data as of now? Yes, but who knows what the future holds. The Nobel committee decided to award the prize for a particular interpretation of the data, rather than for acquiring the data itself (which I think is worthy). As Dr. Butt said, “The effect, intentional or not, is to bully the skeptics into silence, self-censorship, or ridicule, whereas good science proceeds with a healthy dose of skepticism and with open minds.”
 
  • #87
RUTA said:
Also, there is no mechanism causing light to redshift in transit as in tired light. We are proposing a different mechanism altogether for the coupling of EM sources in an FRW universe.

I'm less interested in the specific mechanisms than in the observational tests. Changing the DL / DP relationship may cause changes that act "as if" it were tired light (even though it isn't). What I'm interested in is whether the observational evidence against tired light also constrains the DL / DP relationship. One reason I'm interested in this is that there *aren't* observational constraints on the DL / DP relationship; this is a "hole" in Perlmutter's paper, and it's something that should be patched up.

One problem is that because I'm not physically at a university, I don't have easy access to people I can ask about this. There were some people I'd have asked about this in the past, and sometimes they'd come up in five minutes with a reason why this won't work. If they think about it for two weeks and can't come up with anything, then it's a paper.

The SN papers did a pretty good job at "patching holes," and I don't recall anyone mentioning variations on the DL/DP relationship. This could either be because it's so obviously wrong that no one bothered mentioning it, or because no one thought of it as an issue.

Do we believe astrophysicists should be exploring such a proposed change to general relativity? No.

But even if the quantum gravity theory is wrong, if something else is changing the DL/DP relationship, that's still very interesting.

The reason I posted it here is, as I argued earlier, because I believe the Nobel citation claiming “the discovery of the accelerating expansion of the universe” is premature.

On the other hand, it's clear that Perlmutter discovered *something big*. If it turns out that the universe is decelerating and GR is wrong, that's even more earth-shattering than an accelerating universe, and worth a Nobel.

The Nobel committee decided to award the prize for a particular interpretation of the data, rather than for acquiring the data itself (which I think is worthy).

I don't think that's quite true. "Dark energy" is the "least weird" explanation for Perlmutter's results. There are other explanations, but they are all weirder.

“The effect, intentional or not, is to bully the skeptics into silence, self-censorship, or ridicule, whereas good science proceeds with a healthy dose of skepticism and with open minds.”

I don't think this is worse than any other "dominant paradigm" and I don't think it's too bad in astrophysics.

Also, there are social tricks that get around this. There's the "Columbo strategy." If I thought that the world were flat, I wouldn't publish a paper saying "The World Is Flat"; I'd publish a paper saying "Observational Constraints on the Roundness of the Earth." Here are some observational tests that you can do to show that the world is round. Oh wait, you did those tests and they didn't work? Well... that's surprising... Hmmm... Well, since we all know the world is round, why don't you try doing this... Oh... You're coming up with odd answers... Well... What do *you* think is going on?

I don't think astrophysics is ossified. I do have very little respect for academic finance and economics, but that's something else.
 
  • #88
Knowledge is power and power is jealous. The ancients clearly understood this and defended their knowledge from prying eyes. Sacrificing 'heretics' was a well received and popular tactic dating back thousands of years.
 
  • #89
Chronos said:
Knowledge is power and power is jealous. The ancients clearly understood this and defended their knowledge from prying eyes. Sacrificing 'heretics' was a well received and popular tactic dating back thousands of years.

In science sometimes the lunatics end up running the asylum.

Also, there are a surprising number of people with power that have very unconventional views. I know of at least two Nobel prize winners that are convinced that black holes don't exist, and I know of a former president of the American Astronomical Society that has extremely unconventional views on galactic jets.
 
