Offshoot from 'Theoretically how far can one see in the universe'

  • Thread starter Rymer
  • Start date
  • Tags
    Universe
In summary, most of the statements made in this conversation are correct within the current model used for cosmological redshift. However, the model itself is not fully proved, and details such as the actual values of distances quoted are subject to change and revision. Some statements, like changes in the expansion rate and implied relative velocities greater than the speed of light, are still part of the unproved portion of the current model. While there is talk of "precision cosmology," there is still a long way to go before it can be claimed. The evidence is strongly against the idea that there is no gravity at large scales, and the majority of evidence supports the Standard Model. Having an open mind means being willing to change in response to evidence.
  • #1
Rymer
My only comment on this is that most of it is correct WITHIN the current model used for the cosmological redshift. The model itself is NOT fully proved, and the details -- such as the actual values of distances quoted -- are subject to change and revision of the model -- as is true for all scientific models.

Some statements -- like changes in the expansion rate and implied relative velocities greater than the speed of light -- are part of the unproved portion of the current model.

For all the talk about 'precision cosmology' we are still very far away from being able to claim that. (This claim seems to be more related to attracting money from various sources than to any real scientific accuracy.)
 
  • #2


Rymer said:
My only comment on this is that most of it is correct WITHIN the current model used for the cosmological redshift. The model itself is NOT fully proved, and the details -- such as the actual values of distances quoted -- are subject to change and revision of the model -- as is true for all scientific models.
While in principle this is true, our limits on how much those distances can change are now quite small. The probability of a qualitative difference is essentially nil.

Rymer said:
Some statements -- like changes in the expansion rate and implied relative velocities greater than the speed of light -- are part of the unproved portion of the current model.
You keep saying this, but the evidence is strongly against you.
 
  • #3


Chalnoth said:
While in principle this is true, our limits on how much those distances can change are now quite small. The probability of a qualitative difference is essentially nil.
Not true -- or more accurately only within your model.

Chalnoth said:
You keep saying this, but the evidence is strongly against you.
It's the same evidence for all models at this point -- the support for both is nearly identical.
So I guess this means the evidence is against your model too.

Chalnoth, you have a closed mind -- and remind me of the same kind of people that hounded Boltzmann to death a hundred years ago.

At this time there are only a few hundred data points that contribute much to determining the cosmological redshift relation, and their accuracy is very poor. The error needs to be decreased by at least an order of magnitude in order to be able to differentiate between the models.

The Standard Model at the moment is only a 'winner of a popularity contest' -- and is NOT scientifically proved or even supported as the best fit to the data. Further, it cannot even DERIVE much from a theoretical point of view. I have never seen a derivation of the Hubble constant or of Omega matter -- both should be possible if the model is accurate.
 
Last edited:
  • #4


Rymer said:
It's the same evidence for all models at this point -- the support for both is nearly identical.
So I guess this means the evidence is against your model too.
Except, as I've already pointed out, you are completely ignoring the majority of the evidence we have available to us. You have focused only upon the supernova evidence, and have ignored the copious amounts of other evidence, including that from the CMB, from baryon acoustic oscillations, from weak lensing, from cluster counts, etc.

When you consider the evidence as a whole, your view that there is no gravity at large scales is obviously false.

Rymer said:
Chalnoth, you have a closed mind -- and remind me of the same kind of people that hounded Boltzmann to death a hundred years ago.
I'm not the one that's ignoring the evidence. Having an open mind is the willingness to change one's mind in response to evidence. You haven't even provided the minimum tests I asked for earlier (a chi square analysis, just on supernova data), while I have provided quite a bit of evidence.

Rymer said:
The Standard Model at the moment is only a 'winner of a popularity contest' -- and is NOT scientifically proved or even supported as the best fit to the data. Further, it cannot even DERIVE much from a theoretical point of view. I have never seen a derivation of the Hubble constant or of Omega matter -- both should be possible if the model is accurate.
That is a positively silly critique. Accuracy is not determined by theoretical motivation. Accuracy is determined by the evidence.

You might as well argue that clearly, people two hundred years ago couldn't have known that the sky is blue because they didn't understand how light interacts with atoms in our atmosphere.
 
  • #5


Chalnoth said:
Except, as I've already pointed out, you are completely ignoring the majority of the evidence we have available to us. You have focused only upon the supernova evidence, and have ignored the copious amounts of other evidence, including that from the CMB, from baryon acoustic oscillations, from weak lensing, from cluster counts, etc.

When you consider the evidence as a whole, your view that there is no gravity at large scales is obviously false.

I'm not the one that's ignoring the evidence. Having an open mind is the willingness to change one's mind in response to evidence. You haven't even provided the minimum tests I asked for earlier (a chi square analysis, just on supernova data), while I have provided quite a bit of evidence.

That is a positively silly critique. Accuracy is not determined by theoretical motivation. Accuracy is determined by the evidence.

You might as well argue that clearly, people two hundred years ago couldn't have known that the sky is blue because they didn't understand how light interacts with atoms in our atmosphere.
You love to argue, don't you? For cosmological redshift the ONLY data of importance are the higher redshift data with otherwise determined distances. Yes, mostly supernovae -- but some gamma-ray bursts and even a few other possibilities (Tully-Fisher, etc).

As far as I'm aware the CMB data is not affected by the difference in these models. I do question some of the popular statements that have been made about the CMB -- but that has nothing to do with the cosmological redshift relation.

YOU provide me with the dataset you want tested with chi^2 -- the method in detail you want used for the test -- a comparison/equivalent for Standard Model along with the detailed technique used to arrive at the values for that model.

The problem I have found is that the models by their nature require completely different fitting techniques. So what criteria are you proposing we use for the comparison? How can you do a chi^2 with any meaning in such a case?

I'm willing to give it a shot if you can define a way to do it -- I have tried before and found no real difference. And that is the point. With current data it cannot be determined.

If there is a noticeable gravitational effect at large scale then the universe would be unlikely to appear FLAT. The only explanation I've seen for cosmologically scaled gravity and a FLAT universe is 'coincidence'. Even the Dark Energy solution is a 'coincidence' solution. Do YOU have a different explanation (I don't like 'coincidence')?


Added note: my model does NOT require fitting to the data. There are derived values for all the parameters needed -- derived from theory. The fitting on this model is only used to confirm these values -- as much as they can be with the poor data.
 
Last edited:
  • #6


Rymer said:
You love to argue, don't you? For cosmological redshift the ONLY data of importance are the higher redshift data with otherwise determined distances. Yes, mostly supernovae -- but some gamma-ray bursts and even a few other possibilities (Tully-Fisher, etc).
Except it's all interrelated. You can't just single out a single piece of experimental data, taken out of context of the whole body of evidence. That's called 'cherry picking'.

Rymer said:
As far as I'm aware the CMB data is not affected by the difference in these models. I do question some of the popular statements that have been made about the CMB -- but that has nothing to do with the cosmological redshift relation.
One of the most tightly-constrained parameters for the CMB is its angular diameter distance (measured from the average angular size of the fluctuations). And we also know its redshift to extremely high accuracy. So yes, it most definitely has quite a lot to do with the cosmological redshift relation.

Rymer said:
YOU provide me with the dataset you want tested with chi^2 -- the method in detail you want used for the test -- a comparison/equivalent for Standard Model along with the detailed technique used to arrive at the values for that model.
Meh, don't worry so much about comparing against the standard model (yet). Just figure out the chi^2 for your "model" for, say, one set of supernova data (the SNLS data would be good here). And don't forget to show your work.

Rymer said:
The problem I have found is that the models by their nature require completely different fitting techniques. So what criteria are you proposing we use for the comparison? How can you do a chi^2 with any meaning in such a case?
Clearly you don't know much of anything about what the chi^2 test means. As long as we have accurate error bars on the data points, it's possible to perform a simple chi^2 test on any theoretical model to see if it's at least a somewhat reasonable model. It's not a terribly robust check, but it's a good first-blush check.

Rymer said:
If there is a noticeable gravitational effect at large scale then the universe would be unlikely to appear FLAT. The only explanation I've seen for cosmologically scaled gravity and a FLAT universe is 'coincidence'. Even the Dark Energy solution is a 'coincidence' solution. Do YOU have a different explanation (I don't like 'coincidence')?
The flatness problem is a separate issue that is solved by inflation.

P.S. Every model requires some degree of "fitting", as there are always at least some free parameters. Yours has the Hubble constant, for instance. Also (at the absolute least) the distance at which gravity "turns off".
 
Last edited:
  • #7


Chalnoth said:
Except it's all interrelated. You can't just single out a single piece of experimental data, taken out of context of the whole body of evidence. That's called 'cherry picking'.


One of the most tightly-constrained parameters for the CMB is its angular diameter distance (measured from the average angular size of the fluctuations). And we also know its redshift to extremely high accuracy. So yes, it most definitely has quite a lot to do with the cosmological redshift relation.


Meh, don't worry so much about comparing against the standard model (yet). Just figure out the chi^2 for your "model" for, say, one set of supernova data (the SNLS data would be good here). And don't forget to show your work.


Clearly you don't know much of anything about what the chi^2 test means. As long as we have accurate error bars on the data points, it's possible to perform a simple chi^2 test on any theoretical model to see if it's at least a somewhat reasonable model. It's not a terribly robust check, but it's a good first-blush check.


The flatness problem is a separate issue that is solved by inflation.

P.S. Every model requires some degree of "fitting", as there are always at least some free parameters. Yours has the Hubble constant, for instance.

Actually, the Hubble constant is derived -- under some assumptions of course: 70.506

Inflation -- the genie -- pick the right value and it works. Nonsense.

YOU don't understand. My model does not require any fitting in the fully derived form.

And you are right -- I do NOT understand how to apply chi^2 to such a situation -- that is why I want to see how it's applied to the Standard Model first -- I want the issue to be the result and not the technique. So what is the Standard Model result?
 
  • #8


Rymer said:
Actually, the Hubble constant is derived -- under some assumptions of course: 70.506
Derived? How?

Rymer said:
Inflation -- the genie -- pick the right value and it works. Nonsense.
Inflation is not without its faults, but it nevertheless does solve the flatness problem, and its predictions match observation.

Rymer said:
And you are right -- I do NOT understand how to apply chi^2 to such a situation -- that is why I want to see how it's applied to the Standard Model first -- I want the issue to be the result and not the technique. So what is the Standard Model result?
Wow, okay. Here is the chi^2 test:

[tex]\chi^2 = \sum_i \frac{(d_i - t_i)^2}{\sigma_i^2}[/tex]
Here [tex]d_i[/tex] is the data value (in this case, it's typically the apparent magnitude of the supernova), [tex]t_i[/tex] is the theoretical value (which will be some function of the redshift, which is assumed to be perfectly-known), and [tex]\sigma_i[/tex] is the RMS uncertainty for that particular data point.

You then get a number. If your fit is a good one, then the number should be close to the number of data points. If the fit is very poor, then it will be many times the number of data points.
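
In code form, the sum above is just the following (a minimal C sketch matching the formula -- not either poster's actual program; the array names are placeholders):

#include <stddef.h>

/* chi^2 for n data points: d[] observed values, t[] theoretical values,
   sigma[] RMS uncertainties -- all placeholder names for illustration */
double chi_square(const double *d, const double *t, const double *sigma, size_t n)
{
    double chi2 = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double r = (d[i] - t[i]) / sigma[i];   /* residual in units of sigma */
        chi2 += r * r;
    }
    return chi2;   /* a good fit gives chi2 roughly equal to n */
}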
 
  • #9


Chalnoth said:
Derived? How?

Inflation is not without its faults, but it nevertheless does solve the flatness problem, and its predictions match observation.

Wow, okay. Here is the chi^2 test:

[tex]\chi^2 = \sum_i \frac{(d_i - t_i)^2}{\sigma_i^2}[/tex]
Here [tex]d_i[/tex] is the data value (in this case, it's typically the apparent magnitude of the supernova), [tex]t_i[/tex] is the theoretical value (which will be some function of the redshift, which is assumed to be perfectly-known), and [tex]\sigma_i[/tex] is the RMS uncertainty for that particular data point.

You then get a number. If your fit is a good one, then the number should be close to the number of data points. If the fit is very poor, then it will be many times the number of data points.

Assuming I calculated right -- with no comparison I have no idea.

OK: Derived value with NO CORRECTION: 3.785337 *398
Derived value with -0.0653 correction (implied by Riess May 2009): 3.436380 *398

As said before -- data is poor.
Fitting will take a little longer to do.
 
  • #10


Rymer said:
Assuming I calculated right -- with no comparison I have no idea.

OK: Derived value with NO CORRECTION: 3.785337 *398
Derived value with -0.0653 correction (implied by Riess May 2009): 3.436380 *398

Fitting will take a little longer to do.
Right, which means it's a rather poor fit, between 2 and 3 sigma away from a proper fit.

This is one of the things about having lots of data points: even if, by eye, it looks like the fit line goes through the data points, the statistical power of having a large number of them may mean that the data are not explained by the line at all.
 
  • #11


Chalnoth said:
Right, which means it's a rather poor fit, between 2 and 3 sigma away from a proper fit.

This is one of the things about having lots of data points: even if, by eye, it looks like the fit line goes through the data points, the statistical power of having a large number of them may mean that the data are not explained by the line at all.

EXACTLY what I've been saying -- the data is too poor to make a determination.

When I check with a fit using Ned Wright's calculator and optimizing for a slope of one and offset of zero I get Chi^2 = 1326; for my derived value above it's 1368.

So what does your basic Standard Model Flat fit result in?

With that data scatter the difference is meaningless.

Added: My comparable fitted result is about 1330 (corrected -- wrong fit)
 
Last edited:
  • #12


Rymer said:
EXACTLY what I've been saying -- the data is too poor to make a determination.

When I check with a fit using Ned Wright's calculator and optimizing for a slope of one and offset of zero I get Chi^2 = 1326; for my derived fit above it's 1368.

So what does your basic Standard Model Flat fit result in?

With that data scatter the difference is meaningless.

Added: My comparable fitted result is about 1330 (corrected -- wrong fit)
I'd have to calculate it. Got a link to the specific data that you used?

And by the way, no, the scatter is taken into account with the error bars on the data (provided the error bars are accurate, of course).
 
Last edited:
  • #13


OK -- rewriting for clarity:

398 datapoints from SCPunion

Used Ned Wright's Standard Model calculator, iterating a slope of 1 and offset of 0 (to 6 decimal places) with a Reduced Major Axis fit (gives the lowest chi^2), result: 1326.34

Using my model iterating a slope of 1 and offset of 0 with a Reduced Major Axis fit gives 1329.75

The difference is statistically meaningless -- however there is one interesting difference: my model allows for the derivation of relation parameter values -- Standard Model does not (to my knowledge). The purely derived curve has a chi^2 = 1368

Chalnoth said:
I'd have to calculate it. Got a link to the specific data that you used?

And by the way, no, the scatter is taken into account with the error bars on the data.

Yes. http://www.sgm-cosmology.org/SCPUnion_AllSNe.tex From the Kowalski paper.

Hummm ... yes, the data scatter is included in the algorithm -- HOWEVER the numerical difference between the two fits is meaningless given how large the values are.

Also, note Ned Wright's calculator includes some corrections that are not in my model at this point in time. However, my model does have some corrections for gravitation due to nearby supernovae (about 0.022c); using this correction gives a result of 1326.38

Too many apples and oranges.
 
Last edited by a moderator:
  • #14


Sorry, I went away from my computer for a bit. One more question: what is the redshift/distance relation you used in your model?

I'll get to producing the Chi^2 for a best-fit standard cosmology shortly.
 
  • #15


Chalnoth said:
Sorry, I went away from my computer for a bit. One more question: what is the redshift/distance relation you used in your model?

I'll get to producing the Chi^2 for a best-fit standard cosmology shortly.

My model starts with the Doppler formula for velocity, transforms it into an index in co-moving space using the law of cosines and an expansion velocity, then using a 'distance reference' (and a Hubble-like relation) converts to co-moving distance, then (1+z) into luminosity distance, etc.
(Requires an iterative numerical solution.)

See: http://www.sqm-cosmology.org
 
Last edited by a moderator:
  • #16


Rymer said:
My model starts with the Doppler formula for velocity, transforms it into an index in co-moving space using the law of cosines and an expansion velocity, then using a 'distance reference' (and a Hubble-like relation) converts to co-moving distance, then (1+z) into luminosity distance, etc.
(Requires an iterative numerical solution.)

See: http://www.sqm-cosmology.org
Link doesn't work.

However, here's the Chi^2 I compute for the standard model, using the best-fit parameters in SCP Union paper, and the Riess et. al. (2009) value for the Hubble constant:

Chi^2 = 448.04

With N = 307 supernovae, this makes Chi^2/N = 1.46. That's a fairly decent fit. It's not quite Chi^2/N = 1, but then we don't expect it to be, as most real errors in data have longer tails than Gaussian. But in any case, this is a pretty good fit. There's no reason to suggest that it's wrong just from these data, anyway.

Contrast that with the Chi^2 you compute above: that's a horrible fit. Anyway, if you can provide a link that works, I can make some pretty pictures showing why you get such a better fit with the standard cosmology.

P.S. The exact parameters I use are:
[tex]\Omega_m = 0.287[/tex]
[tex]\Omega_\Lambda = 0.713[/tex]
[tex]H_0 = 74.2[/tex]
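
For reference, the luminosity distance for a flat cosmology with parameters like these can be computed with a simple numerical integral, along these lines (a minimal sketch, not the code actually used for the numbers above):

#include <math.h>

/* Luminosity distance in Mpc for a flat universe:
   D_C = (c/H0) * integral from 0 to z of dz'/sqrt(Om*(1+z')^3 + OL),
   D_L = (1+z) * D_C.  Simple trapezoidal integration. */
double lum_dist_flat_lcdm(double z, double H0, double Om, double OL)
{
    const double c_km_s = 299792.458;   /* speed of light, km/s */
    const int n = 10000;
    const double dz = z / n;
    double sum = 0.0;
    for (int i = 0; i <= n; ++i) {
        double zi = i * dz;
        double E  = sqrt(Om * pow(1.0 + zi, 3) + OL);
        double w  = (i == 0 || i == n) ? 0.5 : 1.0;  /* trapezoid end weights */
        sum += w / E;
    }
    double d_c = (c_km_s / H0) * sum * dz;   /* comoving distance, Mpc */
    return (1.0 + z) * d_c;                  /* luminosity distance, Mpc */
}

The theoretical magnitude for each supernova is then the usual distance modulus, m = 5 log10(D_L / 10 pc) + M, which is what goes into the chi^2 sum against the observed magnitudes.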
 
Last edited by a moderator:
  • #17


Addendum:
Okay, I found your website, it's:

http://www.sgm-cosmology.org/

I don't see anywhere in there where you compute the luminosity distance from a given redshift. You seem to go from the redshift to a recession velocity, and from a luminosity distance to said recession velocity, but that's not a useful comparison as the recession velocity isn't a measured quantity. How do you go from a redshift to a luminosity distance in your model?
 
Last edited by a moderator:
  • #18


Chalnoth said:
Addendum:
Okay, I found your website, it's:

http://www.sgm-cosmology.org/

I don't see anywhere in there where you compute the luminosity distance from a given redshift. You seem to go from the redshift to a recession velocity, and from a luminosity distance to said recession velocity, but that's not a useful comparison as the recession velocity isn't a measured quantity. How do you go from a redshift to a luminosity distance in your model?

That is what requires the iteration.

/* 'd' is redshift velocity from Doppler */
/* Note: ev, pi, and Err are globals defined elsewhere in the attached file,
   and Abs is a user-defined absolute-value macro; <math.h> and <stdio.h>
   are needed for cos, sin, sqrt and fprintf. */

double VI(long double d)
{
    long double x, x0;
    long double q = 0.000000000000001;   /* convergence tolerance */
    long double D;

    D = d / ev;
    x0 = 0;
    /* fixed-point iteration for the velocity index x */
    while (1)
    {
        x = 1 - cos(x0) + sqrt(D * D - sin(x0) * sin(x0));
        while (x < 0.0) x += 1.0;
        x = (x + x0) / 2;                /* average with the previous estimate */
        if (Abs(x - x0) <= q)
            break;                       /* converged */
        x0 = x;
    }

    if (x0 >= pi / 2)
    {
        Err = 1;
        fprintf(stderr, "Warning: Data exceeds maximum possible for ev=%4.3lf at vi=%lf\n",
                ev, (double)(ev * x));
    }

    return ((double)(x) * ev);           /* velocity index, scaled by ev */
}

Returns a velocity index that, when scaled by (1+z)/ev and a distance constant, gives the luminosity distance.
 
Last edited by a moderator:
  • #19


What is your value for ev? And the distance constant?
 
  • #20


FYI: My DERIVED result for the 'best fit' dataset for SCPUnion is:

Chi^2 = 395.346774 with 307 datapoints (Chi^2/N = 1.287775, unshifted)

Just for the fun of it I found the data shift value that would give the lowest Chi^2
Found -0.0853 (magnitude shift nearer) giving:
Chi^2 = 334.835791 with 307 datapoints (Chi^2/N = 1.09067)
(this was with the derived parameter values)
 
  • #21


Chalnoth said:
What is your value for ev? And the distance constant?

Using derived values: ev=0.868479c and dref=17.03331 bln-lyrs
 
  • #22


FYI: I used Ned Wright's Calculator on the dataset with your parameters and got:
Calculating with Ho=74.2 Om=0.287 Ov=.713
Chi^2 = 448.129454 with 307 datapoints (Chi^2/N = 1.459705)

The small difference is likely due to his additional 'neutrino' radiation corrections.
 
  • #23


Yeah, that's not what I'm getting at all. I'm getting Chi^2 = 2964 using those "derived" values. Go through, step by step, each calculation you do to get D_L.
 
  • #24


Chalnoth said:
Yeah, that's not what I'm getting at all. I'm getting Chi^2 = 2964 using those "derived" values. Go through, step by step, each calculation you do to get D_L.

Sorry, just woke up:

First Vr is found from the redshift z:

Vr = ((1+z)^2 - 1) / ((1+z)^2 + 1)

VI is then determined from the iterative C code

The luminosity distance relation to the velocity index is:

DL = Dref * (1+z) * VI / ev

(Note: VI is returned scaled to the speed of light for separate plotting purposes)
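
Putting those steps together, the calculation amounts to something like this (a sketch only, not the attached code, assuming the VI() routine posted earlier and the ev and Dref values quoted above):

/* Sketch: luminosity distance from redshift using the relations above.
   ev_ and Dref_ must match the ev and Dref used by VI() (the derived values
   quoted earlier were ev = 0.868479c, Dref = 17.03331 bln-lyrs); the result
   comes out in the same units as Dref_. */
double VI(long double d);   /* iterative solver from the earlier post / attachment */

double lum_distance_sgm(double z, double ev_, double Dref_)
{
    double a  = (1.0 + z) * (1.0 + z);
    double vr = (a - 1.0) / (a + 1.0);    /* Vr = ((1+z)^2 - 1) / ((1+z)^2 + 1) */
    double vi = VI((long double)vr);      /* velocity index, already scaled by ev */
    return Dref_ * (1.0 + z) * vi / ev_;  /* DL = Dref * (1+z) * VI / ev */
}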
 
  • #25


See the attached C code for an example.
 

Attachments

  • lum_c.txt (9 KB)
  • #26


Rymer said:
Sorry, just woke up:

First Vr is found from the redshift z:

Vr = ((1+z)^2 - 1) / ((1+z)^2 + 1)

VI is then determined from the iterative C code

The luminosity distance relation to the velocity index is:

DL = Dref * (1+z) * VI / ev

(Note: VI is returned scaled to the speed of light for separate plotting purposes)
Okay, I was missing the last division by ev. Now it's more reasonable. I've switched back to using the WMAP best-fit cosmology, however, with the following parameters:
Omegam = 0.256
OmegaL = 0.744
H_0 = 71.9

With this change, the Chi^2 tests become:

For the standard cosmology:
Chi^2 = 328.421

For yours:
Chi^2 = 400.048

Now, this isn't actually big enough of a difference to say definitively which model is better, given the number of data points. But, let's just add one more data point:

[tex]d_A = 14279 \pm 187 ~\mathrm{Mpc}[/tex]

This is the comoving distance to the surface of last scattering. What is the Chi^2 if I add just this single data point in?

Well, for the standard cosmology with the above parameters, I compute:
[tex]d_A^\mathrm{std} = 14194.7 ~\mathrm{Mpc}[/tex]

This adds to Chi^2 a mere 0.2. And for your model?

[tex]d_A^\mathrm{sgm} = 7898.12 ~\mathrm{Mpc}[/tex]

Which adds a whopping 1164.33 to the Chi^2.

This is what I said before about your model not holding up against other data. Sure, you can fit the supernovae relatively well, but you can't fit both the supernovae and CMB at the same time. I know that you say that this relationship doesn't hold for the CMB, but then the onus is upon you to demonstrate how your model does fit the CMB. It's also worth mentioning that the [tex]d_A[/tex] that I calculate with your model for z=1089 doesn't have [tex]x > \pi/2[/tex], so it fits within your model.
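
Those two contributions can be checked directly from the numbers above (a quick sketch, not part of either poster's analysis):

#include <stdio.h>

int main(void)
{
    const double d_obs = 14279.0, sigma = 187.0;    /* measured comoving distance to last scattering, Mpc */
    const double d_std = 14194.7, d_sgm = 7898.12;  /* the two model predictions quoted above, Mpc */

    double r_std = (d_std - d_obs) / sigma;
    double r_sgm = (d_sgm - d_obs) / sigma;

    printf("standard cosmology adds %.2f to chi^2\n", r_std * r_std);  /* ~0.20 */
    printf("SGM adds %.2f to chi^2\n", r_sgm * r_sgm);                 /* ~1164.3 */
    return 0;
}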

From looking over the rest of your model in detail however, let me say that I am not optimistic. You have no physical motivation whatsoever for any of what you have written. There is no physical interpretation for the "reference distance" D_ref, or for the "expansion velocity" ev. And the Hubble constant that you add in requires an ad-hoc multiplication to become anything remotely reasonable. Furthermore, there is no link at all between this idea of gravity and where you claim it comes from, that is, pair production.
 
  • #27


Chalnoth said:
Okay, I was missing the last division by ev. Now it's more reasonable. I've switched back to using the WMAP best-fit cosmology, however, with the following parameters:
Omegam = 0.256
OmegaL = 0.744
H_0 = 71.9

With this change, the Chi^2 tests become:

For the standard cosmology:
Chi^2 = 328.421

For yours:
Chi^2 = 400.048

Now, this isn't actually big enough of a difference to say definitively which model is better, given the number of data points. But, let's just add one more data point:

[tex]d_A = 14279 \pm 187 ~\mathrm{Mpc}[/tex]

This is the comoving distance to the surface of last scattering. What is the Chi^2 if I add just this single data point in?

Well, for the standard cosmology with the above parameters, I compute:
[tex]d_A^\mathrm{std} = 14194.7 ~\mathrm{Mpc}[/tex]

This adds to Chi^2 a mere 0.2. And for your model?

[tex]d_A^\mathrm{sgm} = 7898.12 ~\mathrm{Mpc}[/tex]

Which adds a whopping 1164.33 to the Chi^2.

This is what I said before about your model not holding up against other data. Sure, you can fit the supernovae relatively well, but you can't fit both the supernovae and CMB at the same time. I know that you say that this relationship doesn't hold for the CMB, but then the onus is upon you to demonstrate how your model does fit the CMB. It's also worth mentioning that the [tex]d_A[/tex] that I calculate with your model for z=1089 doesn't have [tex]x > \pi/2[/tex], so it fits within your model.

From looking over the rest of your model in detail however, let me say that I am not optimistic. You have no physical motivation whatsoever for any of what you have written. There is no physical interpretation for the "reference distance" D_ref, or for the "expansion velocity" ev. And the Hubble constant that you add in requires an ad-hoc multiplication to become anything remotely reasonable. Furthermore, there is no link at all between this idea of gravity and where you claim it comes from, that is, pair production.

Well, early days -- more work to come.

But the reports of pair production associated with supermassive objects in the galactic core is what started me looking at this again -- after 35 years.

Frankly, I didn't expect it to hold up as well as it has. So the question is: what is the last scattering 'distance' based on? Redshift is one thing with its issues -- but what justifies the distance itself? And to what accuracy?
 
  • #28


Rymer said:
Well, early days -- more work to come.

But the reports of pair production associated with supermassive objects in the galactic core is what started me looking at this again -- after 35 years.

Frankly, I didn't expect it to hold up as well as it has. So the question is: what is the last scattering 'distance' based on? Redshift is one thing with its issues -- but what justifies the distance itself? And to what accuracy?
Well, I listed the accuracy above. But the distance is measured based upon the angular size of the first acoustic peak, which comes from the waves bunching up at the sound horizon. It depends upon two things, then:
1. The speed of sound in the plasma.
2. The time since the "big bang" that the CMB was emitted.

These depend somewhat upon the contents of the universe, but since dark energy had to have been very sub-dominant at that time, the primary components have quite well-known properties: dark matter, normal matter, and radiation. Our knowledge of the matter/radiation makeup of the universe is further supported by observations of primordial light element abundances (which stem from the time that protons and neutrons condensed out of the quark-gluon plasma, in much the same way that neutral atoms condensed out of the plasma at the time the CMB was emitted).
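
In rough outline (standard relations, paraphrased): the sound speed and the time of emission set the size of the sound horizon [tex]r_s[/tex], and the observed angular scale [tex]\theta_s[/tex] of the first acoustic peak then fixes the distance through approximately [tex]d \approx r_s / \theta_s[/tex].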
 
  • #29


Chalnoth said:
Well, I listed the accuracy above. But the distance is measured based upon the angular size of the first acoustic peak, which comes from the waves bunching up at the sound horizon. It depends upon two things, then:
1. The speed of sound in the plasma.
2. The time since the "big bang" that the CMB was emitted.

These depend somewhat upon the contents of the universe, but since dark energy had to have been very sub-dominant at that time, the primary components have quite well-known properties: dark matter, normal matter, and radiation. Our knowledge of the matter/radiation makeup of the universe is further supported by observations of primordial light element abundances (which stem from the time that protons and neutrons condensed out of the quark-gluon plasma, in much the same way that neutral atoms condensed out of the plasma at the time the CMB was emitted).

OK ... how is the 'time since the Big Bang' determined?
 
  • #30


Rymer said:
OK ... how is the 'time since the Big Bang' determined?
It's as I said above: it depends upon the contents, as the contents of the universe affect the expansion rate.

Now, I am aware that you've got some radically different idea of what is going on there, but you're going to have to show how your model explains the CMB if you want to make any headway. And then there's of course all the other cosmological data, such as BAO, weak lensing surveys, etc. etc.

This is the fundamental problem to proposing completely different ideas of how the universe works: the current model explains a very wide array of data. Any competing model also has to explain all of these same data, to as good or better accuracy, if it is to even be given a glance by the scientific community. There's no point in bothering with a model that only explains one tiny fraction of the data, and appears to be completely contradicted by other pieces of evidence, or fails to explain them altogether. It's hard work, then, to produce a completely new idea for how the world works.

The way that new scientific theories almost always do this is that the people working on said theories demonstrate that for most of the experiments done to date, the new theory predicts the same thing as the old theory (thus they don't have to go back and recalculate what their theory does in each and every experiment). Once that is accomplished, they show where the theory diverges from the old one, and how this accords better with experiment (or at least they propose where it could accord better if the experiments were done).
 
  • #31


Chalnoth said:
It's as I said above: it depends upon the contents, as the contents of the universe affect the expansion rate.

Now, I am aware that you've got some radically different idea of what is going on there, but you're going to have to show how your model explains the CMB if you want to make any headway. And then there's of course all the other cosmological data, such as BAO, weak lensing surveys, etc. etc.

This is the fundamental problem to proposing completely different ideas of how the universe works: the current model explains a very wide array of data. Any competing model also has to explain all of these same data, to as good or better accuracy, if it is to even be given a glance by the scientific community. There's no point in bothering with a model that only explains one tiny fraction of the data, and appears to be completely contradicted by other pieces of evidence, or fails to explain them altogether. It's hard work, then, to produce a completely new idea for how the world works.

The way that new scientific theories almost always do this is that the people working on said theories demonstrate that for most of the experiments done to date, the new theory predicts the same thing as the old theory (thus they don't have to go back and recalculate what their theory does in each and every experiment). Once that is accomplished, they show where the theory diverges from the old one, and how this accords better with experiment (or at least they propose where it could accord better if the experiments were done).

As you have indicated, there are several problems with this model and CMB.

1) Model ONLY addresses redshift AFTER CMB and not the CMB value.

2) The model is specific to an expanding universe of matter at a constant velocity
that is NOT the speed of light.

3) The distance reference used is derived from a concept of gravity that indicates
it does NOT exist (as we define it today) at or prior to the time of last scattering.

4) The model redshift is specific to atomic emission/absorption lines -- not blackbody
displacement peaks.

5) Redshift is assumed to be related to a specific 'piece of matter' -- in the past --
NOT including the entire observable universe at the time (as is CMB).

6) The photons in this model are assumed NOT to be affected in any way -- redshift being due to a Doppler recession -- no other redshift mechanisms are included.

As you have implied, the 'CMB redshift' is more related to a 'time' than to a 'distance'.

Also, my 'fitted' Chi^2 is 335 (not 400) to compare with the fitted standard model 328.
(again a meaningless difference)

Since the model specifically EXCLUDES the CMB data as being within the range of computable data, there is indeed a problem in reconciling the differences. As I have stated before this is 'A' solution to the problem and was never intended to be 'THE' solution.

Until there is a proper quantum gravity model I do not see how this 'bridge' can be crossed.
In fact, the entire point of this work was intended to identify a possible starting point for a quantum gravity model. (The expanded version still being worked on gives 6 matter states and no singularities even in this very basic approach.)
 
  • #32


Rymer said:
As you have indicated, there are several problems with this model and CMB.

1) Model ONLY addresses redshift AFTER CMB and not the CMB value.
I know. But your reasons for this are completely arbitrary and irrelevant. This is cherry-picking.

Rymer said:
2) The model is specific to an expanding universe of matter at a constant velocity
that is NOT the speed of light.
Expansion isn't a velocity, though. It's a rate.

Rymer said:
3) The distance reference used is derived from a concept of gravity that indicates
it does NOT exist (as we define it today) at or prior to the time of last scattering.
It's not derived. It's completely ad-hoc.

Rymer said:
4) The model redshift is specific to atomic emission/absorption lines -- not blackbody
displacement peaks.
The way in which the redshift is measured is completely irrelevant to this discussion.

Rymer said:
5) Redshift is assumed to be related to a specific 'piece of matter' -- in the past --
NOT including the entire observable universe at the time (as is CMB).
Except the CMB was emitted by lots of pieces of matter. So this reasoning is flawed. If redshift in your model applies to pieces of matter in the past, then it also applies to the CMB.

Rymer said:
Also, my 'fitted' Chi^2 is 335 (not 400) to compare with the fitted standard model 328.
(again a meaningless difference)
Why not? You can always compare the Chi^2 between different models used to explain the same data.

Rymer said:
Since the model specifically EXCLUDES the CMB data as being within the range of computable data, there is indeed a problem in reconciling the differences. As I have stated before this is 'A' solution to the problem and was never intended to be 'THE' solution.

Until there is a proper quantum gravity model I do not see how this 'bridge' can be crossed.
In fact, the entire point of this work was intended to identify a possible starting point for a quantum gravity model. (The expanded version still being worked on gives 6 matter states and no singularities even in this very basic approach.)
It's a solution to a non-existent problem, though. Just using standard General Relativity (in combination with other known laws of physics) not only explains the redshift-distance relationship for supernovae, but it also explains the CMB, baryon acoustic oscillations, weak lensing surveys, the primordial abundances of light elements, etc.
 
  • #33


Rymer said:
As you have indicated, there are several problems with this model and CMB.

1) Model ONLY addresses redshift AFTER CMB and not the CMB value.

Chalnoth said:
I know. But your reasons for this are completely arbitrary and irrelevant. This is cherry-picking.

Depends on your point of view -- including CMB is 'cherry-picking' from mine.

Rymer said:
2) The model is specific to an expanding universe of matter at a constant velocity that is NOT the speed of light.

Chalnoth said:
Expansion isn't a velocity, though. It's a rate.

That is the point of the statement -- in THIS model it's a velocity.

Rymer said:
3) The distance reference used is derived from a concept of gravity that indicates it does NOT exist (as we define it today) at or prior to the time of last scattering.

Chalnoth said:
It's not derived. It's completely ad-hoc.

Again a viewpoint thing -- the distance reference was first derived 35 years ago as a simple 'first stab' in an anticipated development. The Simple Geometric Model is recent, built on the present supernova data.

Rymer said:
4) The model redshift is specific to atomic emission/absorption lines -- not blackbody displacement peaks.

Chalnoth said:
The way in which the redshift is measured is completely irrelevant to this discussion.

No it isn't. YOU are the one making the assertion that they are the same.

Rymer said:
5) Redshift is assumed to be related to a specific 'piece of matter' -- in the past -- NOT including the entire observable universe at the time (as is CMB).

Chalnoth said:
Except the CMB was emitted by lots of pieces of matter. So this reasoning is flawed. If redshift in your model applies to pieces of matter in the past, then it also applies to the CMB.

Not necessarily. I believe your assumption of the equivalence of these redshifts (emission versus blackbody peak) may be flawed.

Rymer said:
Also, my 'fitted' Chi^2 is 335 (not 400) to compare with the fitted standard model 328.
(again a meaningless difference)

Chalnoth said:
Why not? You can always compare the Chi^2 between different models used to explain the same data.

No, not really in this case. If there is a systematic shift in the data, then it needs to be 'removed' when comparing to a completely derived relation. You can't fit one model to the data and not the other -- and expect anything like a valid comparison. This was one of my original concerns about even attempting this Chi^2 test.

Rymer said:
Since the model specifically EXCLUDES the CMB data as being within the range of computable data, there is indeed a problem in reconciling the differences. As I have stated before this is 'A' solution to the problem and was never intended to be 'THE' solution.

Until there is a proper quantum gravity model I do not see how this 'bridge' can be crossed.
In fact, the entire point of this work was intended to identify a possible starting point for a quantum gravity model. (The expanded version still being worked on gives 6 matter states and no singularities even in this very basic approach.)

Chalnoth said:
It's a solution to a non-existent problem, though. Just using standard General Relativity (in combination with other known laws of physics) not only explains the redshift-distance relationship for supernovae, but it also explains the CMB, baryon acoustic oscillations, weak lensing surveys, the primordial abundances of light elements, etc.

I've asked before -- and have not seen -- exactly HOW General Relativity has anything to do with 'the CMB, baryon acoustic oscillations, weak lensing surveys, the primordial abundances of light elements, etc.' The CMB shows a 'flat' universe -- so no need for GR.

AND in my model, a proper derivation of the 'distance reference' would seem to require a quantum gravity model. In fact, I believe that a proper quantum gravity model should include a new redshift relation that hopefully is APPROXIMATED by the Simple Geometric Model. That was the intent of developing SGM. It is a 'stepping-stone' model -- not 'THE' solution.
 
Last edited:
  • #34


Rymer said:
Depends on your point of view -- including CMB is 'cherry-picking' from mine.
Including additional data is never cherry picking. You should read the definition of the word before pulling it out.

Rymer said:
I've asked before -- and have not seen -- exactly HOW General Relativity has anything to do with 'the CMB, baryon acoustic oscillations, weak lensing surveys, the primordial abundances of light elements, etc.' The CMB shows a 'flat' universe -- so no need for GR.
Look, if you're going to try to overturn current cosmology, you should at least seek to understand current cosmology first. The things you are asking here are part of one of the most basic fundamentals of modern cosmology: linear structure formation.

Your statement that the "CMB shows a 'flat' universe" is particularly revealing of your abject ignorance of the field of cosmology, because:
1. The measurement of flatness assumes General Relativity.
2. The measurement is that the universe is nearly spatially flat, but that there is quite a lot of space-time curvature. So General Relativity is very much required.

Rymer said:
AND in my model, a proper derivation of the 'distance reference' would seem to require a quantum gravity model. In fact, I believe that a proper quantum gravity model should include a new redshift relation that hopefully is APPROXIMATED by the Simple Geometric Model. That was the intent of developing SGM. It is a 'stepping-stone' model -- not 'THE' solution.
Cosmological redshifts are entirely within the regime of classical gravity, and it is obscenely unlikely that quantum gravity has anything whatsoever to say here.
 
  • #35


You seem to be insisting that any model be an 'everything' model. This one never was and never was intended to be.

The 'Distance Reference' and the concept of the approach come from a simple 'particle in a box' treatment of gravity. The actual quantum gravity model is not expected to be quite this simple. This is only a first attempt -- a concept checker. So of course it will have shortcomings -- that is no reason to reject it out of hand for something it cannot address.

In fact, the version currently being worked on does not allow redshifts greater than about 25.6 -- this is due to a slightly lower expansion velocity of 0.8660254c. The view is that the simple classically derived value for the 'Distance Reference' is likely not really a constant but has some redshift dependence due to these high relative velocities. Could be wrong -- don't know yet. And that is the point. With the current data and the current level of development of the model -- or an enhanced version -- this cannot be ruled out. (Remember that the blackbody radiation curve was the origin of quantum physics; I do not see how CMB can be incorporated into this gravity-mechanism based model without using a quantum model for gravity.)


This approach -- whether these values are correct or not -- does seem to indicate that a quantum theory of gravity has the possibility of predicting a new redshift relation -- one that includes DERIVED values (or much tighter ranges of) parameter values. The current Standard Model does NOT. From my viewpoint this is a MAJOR failing of the Standard Model.


Note: I am not saying the current General Relativity is 'wrong' -- just not needed for what is currently being done in large scale cosmology. A Quantum Model of Gravity is required in order to properly integrate the various concepts in the proposed 'everything' Standard Models.

Your insisted-upon CMB requirement is the same as having to produce a Quantum Theory of Gravity -- for THIS model. That just can't be done at this time.
 
