A glitch in Jorrie’s Cosmo-Calculator?

  • Thread starter: JimJCW
Summary
Jorrie’s Cosmo-Calculator is experiencing persistent inconsistencies in results for Dnow and Dthen compared to Gnedin’s and Wright’s calculators, particularly at low redshift values. Users speculate there may be a glitch, possibly linked to numerical integration methods used in the calculator. Adjustments to the integration parameters have shown some reduction in error but not a complete fix, indicating deeper issues in the implementation. The calculator was originally designed as an educational tool, which may explain some of its limitations. Ongoing investigations are needed to address these discrepancies and improve accuracy.
  • #61
Ibix said:
Any thoughts?
Yes that's exactly what I was thinking (plus step 0 being sanitizing inputs and other initialization).
 
  • #62
I'm impressed with the apparent enthusiasm for the community project, which should finally become a PF Cosmo tool, decoupled from any one member and managed by an elected committee on a neutral platform.
 
  • #63
Well, kind of self-elected! PM your GitHub user IDs to be added as project admins.
 
  • #64
If there is no good open source quadrature package available for JavaScript, it seems to me that the methods described in section 4.5 of the third edition of “Numerical Recipes” are more than adequate for the required integrals, and are much less code than trying to reproduce something like QUADPACK.
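If rolling our own, the variable-transformation family really does take very little code. Below is a minimal tanh-sinh (double-exponential) rule in JavaScript; this is my own sketch of the general idea rather than code from the book, and the fixed step `h` and truncation `tMax` are naive stand-ins for the adaptive refinement a production version would need.

```javascript
// Minimal tanh-sinh (double-exponential) quadrature for Integral_a^b f.
// Substitution: x = tanh((pi/2) sinh t), then trapezoid rule in t.
function tanhSinh(f, a, b, h = 0.125, tMax = 4) {
  const c = (b - a) / 2;               // scale [-1, 1] -> [a, b]
  const d = (a + b) / 2;
  let sum = 0;
  for (let t = -tMax; t <= tMax; t += h) {
    const u = (Math.PI / 2) * Math.sinh(t);
    const x = Math.tanh(u);            // abscissa in (-1, 1)
    const w = (Math.PI / 2) * Math.cosh(t) / Math.cosh(u) ** 2; // weight
    sum += w * f(c * x + d);
  }
  return c * h * sum;                  // trapezoid rule in t
}
```

For a smooth test integrand such as ## \int_0^1 4/(1+x^2)\,dx = \pi ## the 65 sample points here are already far more accurate than we need, and the same transformation also tames integrable endpoint singularities, which is what makes this family attractive for so little code.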
 
  • #65
I'm quite rusty on my cosmology... so I can't vouch for the equations.
When this thread started, I was interested in trying to reproduce the calculations in Desmos.

So, here is what I have so far:
https://www.desmos.com/calculator/dorewco5ms
See the table at the end to check the numerical calculations.

You'll have to check the equations ( based on calculations by Jorrie and Ibix ; see also work by pbuk ) all documented in the Desmos file.

I don't have any more time to play around with it.
Maybe someone might find it useful (once the equations are verified).


(A note on Desmos: the ordering of the entries doesn't matter. Also, unlike GeoGebra or Trinket/GlowScript, every Desmos "save" generates a new unique URL, which the saver has to share for others to see.)
 
  • #66
@robphy , looking at Desmos worksheet (really nice, by the way, never heard of this tool), I have one question. You have a formula using ##f_T##, but I don’t see a definition for it.
 
  • #67
PAllen said:
@robphy , looking at Desmos worksheet (really nice, by the way, never heard of this tool), I have one question. You have a formula using ##f_T##, but I don’t see a definition for it.
It’s in the “constants” folder.
It’s a conversion unit.
 
  • #68
pbuk said:
I'm not. Jorrie's calculation uses a fixed step size, the integration steps don't match the result steps exactly, it doesn't treat the integration to infinity properly, and it only manages to get as close as it does by using far more integration steps than it needs to - in short, it needs rewriting.

I have been calculating Dnow using various calculators. The results are often slightly different from one another. The results calculated with the cosmic and Toth calculators are added to the table shown in Post #57. They are consistent with those calculated by @Ibix, but with more digits kept:

| z | Gnedin | Jorrie, 2022 | ibix | cosmic | Toth |
|---|---|---|---|---|---|
| 0.1 | 1.41004 | 1.4084 | 1.41 | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.287 | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | 1.476332 | 1.476349 |

For the discussion below, let’s assume that the cosmic result is correct and use it as a reference for comparison.

For two properly working calculators, the difference in their calculated Dnow’s as a function of z should be a slightly fluctuating plot around the y = 0 line, as exemplified by the Dcosmic - DToth plot shown in the following figure:

[Attached figure: Dcosmic − DToth vs z, fluctuating about zero]


The Dcosmic - DJorrie plot, however, shows a peculiar behavior: there is an abrupt break around z = 0, and DJorrie(z) deviates systematically from Dcosmic(z). It would be nice if we could get a clue as to why this is happening. If you or @Jorrie have some ideas about the location(s) in the Calculation.js file that is responsible for this behavior, please let us know.
 
  • #69
JimJCW said:
If you or @Jorrie have some ideas about the location(s) in the Calculation.js file that is responsible for this behavior, please let us know.
It's more complicated than that, see post #55 above. Anyway we have moved on from there, the Dnow calculations seem to be much improved over the range near z = 0, but other calculations need to be checked and a few other things sorted. The results in the table below are taken from the current development version (1.2.0) running at https://lightcone7.github.io/LightCone7-develop.html.

| z | Gnedin | Jorrie, 2022 | ibix | cosmic | Toth | v1.2.0 |
|---|---|---|---|---|---|---|
| 0.1 | 1.41004 | 1.4084 | 1.41 | 1.409592 | 1.409608 | 1.409700 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | 1.27174 | 1.271756 | 1.271836 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | 1.133189 | 1.133219 | 1.133276 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | 0.993944 | 0.993964 | 0.940199 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | 0.854005 | 0.854024 | 0.854071 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | 0.713377 | 0.713399 | 0.713433 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | 0.572063 | 0.57209 | 0.572107 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | 0.430065 | 0.430097 | 0. |
| 0.02 | 0.28748 | 0.286651 | 0.287 | 0.287385 | 0.287382 | 0.287407 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | 0.144029 | 0.144017 | 0.144040 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | 0.144698 | 0.144702 | 0.144040 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | 0.290061 | 0.290057 | 0.290084 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | 0.436084 | 0.436097 | 0.436119 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | 0.582763 | 0.582789 | 0.582810 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | 0.730094 | 0.730101 | 0.730153 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | 0.878072 | 0.878098 | 0.878142 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | 1.026688 | 1.026715 | 1.026771 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | 1.175941 | 1.175951 | 1.176036 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | 1.325824 | 1.32584 | 1.325931 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | 1.476332 | 1.476349 | 1.476451 |
 
  • #70
pbuk said:
It's more complicated than that, see post #55 above. Anyway we have moved on from there, the Dnow calculations seem to be much improved over the range near z = 0, but other calculations need to be checked and a few other things sorted. The results in the table below are taken from the current development version (1.2.0) running at https://lightcone7.github.io/LightCone7-develop.html.

I noticed that using version (1.2.0), I can’t duplicate your Dnow(z) in one run of the calculator; I had to calculate one or two values at a time. Some of the Dnow values probably were not correctly entered in your table. For example, Dnow = 0 for z = 0.03.

If we add (Dcosmic - Dv1.2.0) vs z curve to the figure shown in Post #68, we get the following result:

[Attached figure: Dcosmic − D differences vs z, now including Dcosmic − Dv1.2.0]


The Dv1.2.0(z) result shows a big improvement over the Jorrie 2022 result, but it still deviates systematically from Dcosmic(z). A better indication of this is shown below:

[Attached figure: Dcosmic − Dv1.2.0 vs z, enlarged]
 
  • #71
JimJCW said:
I noticed that using version (1.2.0), I can’t duplicate your Dnow(z) in one run of the calculator; I had to calculate one or two values at a time.
Yes, it doesn't currently do linear intervals; this should be easy to add.

JimJCW said:
The Dv1.2.0(z) result shows a big improvement over the Jorrie 2022 result, but it still deviates systematically from Dcosmic(z).
I'm not really surprised at this deviation; steps to improve it include:
  • Check the precision of the physical constants and model values
  • Try midpoint instead of the trapezium rule, which always overestimates when the integrand is convex.
  • Add error estimation/adaptive step size.
But these are all enhancements for 1.4.0.
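To illustrate the midpoint-versus-trapezium point with a toy example (my own, not taken from the calculator): on a convex integrand such as 1/x, the fixed-step trapezoid rule brackets the true value from above, the midpoint rule brackets it from below, and the midpoint error is roughly half the size.

```javascript
// Fixed-step trapezoid vs midpoint on a convex integrand (f'' > 0):
// the chord lies above a convex curve, so trapezoid overestimates;
// the tangent-based midpoint rule underestimates by about half as much.
function trapezoid(f, a, b, n) {
  const h = (b - a) / n;
  let sum = (f(a) + f(b)) / 2;
  for (let i = 1; i < n; i++) sum += f(a + i * h);
  return sum * h;
}

function midpoint(f, a, b, n) {
  const h = (b - a) / n;
  let sum = 0;
  for (let i = 0; i < n; i++) sum += f(a + (i + 0.5) * h);
  return sum * h;
}

const f = (x) => 1 / x;               // convex on [1, 2]
const exact = Math.log(2);            // Integral_1^2 dx/x
const trap = trapezoid(f, 1, 2, 64);  // comes out above exact
const mid = midpoint(f, 1, 2, 64);    // comes out below exact
```

For a concave integrand the signs flip, so which rule over- or under-shoots depends entirely on the curvature of 1/H over the range being integrated.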
 
Last edited:
  • #72
pbuk said:
I'm not really surprised at this deviation; steps to improve it include:
These calculators use different values for the constants* (and Cosmic doesn't take ## \Omega_R ## into account at all). It's not surprising they give different results with a regular pattern. Note that the "new Jorrie" site does not yet allow overriding the default values (again this is something that is easy to add but pointless until we are happy with ALL the calculations, not just ## D_{now} ##).

*e.g. ## H_0 ## Jorrie, pbuk 67.74 km/s/Mpc, cosmic 67.04, Gnedin 68.14.
 
  • #73
pbuk said:
These calculators use different values for the constants* (and Cosmic doesn't take ## \Omega_R ## into account at all). It's not surprising they give different results with a regular pattern. Note that the "new Jorrie" site does not yet allow overriding the default values (again this is something that is easy to add but pointless until we are happy with ALL the calculations, not just ## D_{now} ##).

*e.g. ## H_0 ## Jorrie, pbuk 67.74 km/s/Mpc, cosmic 67.04, Gnedin 68.14.

You are right; all other calculators I have tried do not include ΩR. That’s why I like and use Jorrie’s calculator. The effect of ΩR is significant only during early cosmological times (high z values) and it probably does not have any noticeable effects in our Dnow calculations.

All calculators I tried use H0, Ωm and ΩΛ as input parameters and I use H0 = 67.74 km/s/Mpc, Ωm = 0.309 and ΩΛ = 0.691 for all of them.
 
  • #74
JimJCW said:
You are right; all other calculators I have tried do not include ΩR. That’s why I like and use Jorrie’s calculator. The effect of ΩR is significant only during early cosmological times (high z values) and it probably does not have any noticeable effects in our Dnow calculations.
It would be easy enough to exclude it for better comparisons: then with the same constants the only thing that should be different is the integration algorithm.

JimJCW said:
All calculators I tried use H0, Ωm and ΩΛ as input parameters and I use H0 = 67.74 km/s/Mpc, Ωm = 0.309 and ΩΛ = 0.691 for all of them.
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?
 
Last edited:
  • #75
pbuk said:
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?
Keep in mind that the accuracy of our knowledge of the Omegas is far worse than the rounding contributions. It is good to chase mathematical integrity in algorithms, but is it worth it to get far ahead of the quality of the observations that we work with?
 
  • #76
Jorrie said:
is it worth it to get far ahead of the quality of the observations that we work with?
Not in actual use, but in testing it is important to determine whether differences in results arise from different models or poor integration methods.
 
  • #77
pbuk said:
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?

See discussion in StackExchange:

[Attached screenshot of the StackExchange discussion]


If we change zeq from 3370 to 999998 in the current version of the calculator (i.e., neglect the radiation energy density), Ωm,0 will change from 0.3089 to 0.309.

In the current version, changing zeq in the calculator does not change the calculated Dnow. When it is fixed, we can compare the calculated Dnow using large zeq values with that calculated with the cosmic calculator (no radiation energy term).
 
  • #78
Hi @Ibix,

In the figures included in Post #70, Dv1.2.0(z) by @pbuk shows a small, systematic deviation from Dcosmic(z) near z = 0. It would be interesting to see whether this is caused by the omission of the radiation term ΩR,0 in the cosmic model. Your Python version includes the radiation term and may provide information concerning this. Would you please post your result with more digits? (See Post #57)

| z | Gnedin | Jorrie, 2022 | ibix | ibix_2 | cosmic | Toth |
|---|---|---|---|---|---|---|
| 0.1 | 1.41004 | 1.4084 | 1.41 | | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.28 | | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | | 1.476332 | 1.476349 |
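In the meantime, here is a rough cross-check one can run in a few lines of JavaScript: a standard flat-ΛCDM Dnow with and without the radiation term, using the constants from this thread. The Simpson integration, the Gly conversion factor, and the choice to leave Ωm untouched when ΩR is dropped are my own assumptions, so the trailing digits should not be taken as authoritative.

```javascript
// Rough cross-check: flat LCDM Dnow(z) with and without the radiation
// term. Constants are the ones used in this thread; the Gly conversion
// and the Simpson integration are my own choices, not LightCone code.
const H0 = 67.74;                    // km/s/Mpc
const OMEGA_L = 0.691;
const OMEGA_M = 0.3089;
const Z_EQ = 3370;                   // matter-radiation equality redshift
const OMEGA_R = OMEGA_M / (Z_EQ + 1);
const C_KM_S = 299792.458;           // speed of light, km/s
const MPC_TO_GLY = 3.26156e-3;       // 1 Mpc in Gly (assumed conversion)

// Dimensionless Hubble rate E(z) = H(z)/H0 for a flat model.
// Note: when withRadiation is false, OMEGA_M is left unchanged, so that
// variant is very slightly open rather than renormalized to flatness.
function E(z, withRadiation) {
  const s = 1 + z;
  const rad = withRadiation ? OMEGA_R * s ** 4 : 0;
  return Math.sqrt(OMEGA_L + OMEGA_M * s ** 3 + rad);
}

// Dnow(z) = (c/H0) * Integral_0^z dz'/E(z'), composite Simpson (n even).
function dNow(z, withRadiation, n = 1000) {
  const h = z / n;
  let sum = 1 / E(0, withRadiation) + 1 / E(z, withRadiation);
  for (let i = 1; i < n; i++) {
    sum += (i % 2 ? 4 : 2) / E(i * h, withRadiation);
  }
  return (C_KM_S / H0) * MPC_TO_GLY * (h / 3) * sum;
}
```

With these inputs `dNow(0.1, true)` lands near 1.41 Gly, within a few parts in 10⁴ of the tabulated values (the residual is presumably down to unit conversions), and zeroing ΩR shifts the result only at the 10⁻⁴ Gly level, so ΩR alone seems unlikely to explain the larger deviations.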
 
  • #79
Hi @JimJCW,

Yes, this is part of my plan for generating test data. I am away on a sailing trip at the moment so can't work on it for a while though.
 
  • #80
Hi @Jorrie, @pbuk, @Ibix,

I think the cosmology calculator by the International Centre for Radio Astronomy Research (ICRAR) can be useful for testing in our calculator development. It is written in the R language and accepts ΩR as an input parameter. The inputs to this calculator for the PLANCK 2015 data are:

H0 = 67.74​
ΩΛ = 0.691​
ΩR = 0.0000916 [obtained from ΩR = Ωm / (zeq + 1)]​
Ωm = 1 - 0.691 - 0.0000916 = 0.3089​
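The parameter bookkeeping above is circular (ΩR needs Ωm, and Ωm needs ΩR), but it settles to a fixed point after a couple of iterations. A quick self-contained check (my own, outside any calculator) reproduces the quoted values:

```javascript
// Self-consistent OmegaM and OmegaR for a flat model:
// OmegaR = OmegaM / (zeq + 1) and OmegaM = 1 - OmegaL - OmegaR.
const OMEGA_L = 0.691;
const Z_EQ = 3370;
let OmegaM = 1 - OMEGA_L;          // 0.309 if radiation is ignored
let OmegaR = 0;
for (let i = 0; i < 10; i++) {
  OmegaR = OmegaM / (Z_EQ + 1);    // radiation from equality redshift
  OmegaM = 1 - OMEGA_L - OmegaR;   // flatness closure
}
// converges to OmegaM ≈ 0.3089, OmegaR ≈ 0.0000916, as quoted above
```

This also makes plain why Ωm,0 reads 0.3089 with zeq = 3370 but 0.309 when the radiation term is effectively switched off.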

I have some questions about some of the terms:

What is the difference between ‘The Comoving Radial Distance LoS to z’ and ‘The Comoving Radial Distance Tran to z’?​
What is Sigma8[0]?​
 
Last edited:
  • #81
JimJCW said:
Hi @Jorrie, @pbuk, @Ibix,

I think the cosmology calculator by the International Centre for Radio Astronomy Research (ICRAR) can be useful for testing in our calculator development. It is written in the R language and accepts ΩR as an input parameter. ...

I have some questions about some of the terms:

What is the difference between ‘The Comoving Radial Distance LoS to z’ and ‘The Comoving Radial Distance Tran to z’?​
What is Sigma8[0]?​
You will find the definitions of comoving distances in Hogg, “Distance Measures in Cosmology” (arXiv:astro-ph/9905116), section 5.

I have no clue about Sigma8[0]
 
  • #83
JimJCW said:
In the figures included in Post #70, Dv1.2.0(z) by @pbuk shows a small, systematic deviation from Dcosmic(z) near z = 0. It would be interesting to see whether this is caused by the omission of the radiation term ΩR,0 in the cosmic model. Your Python version includes the radiation term and may provide information concerning this. Would you please post your result with more digits? (See Post #57)
I have posted results from Ibix's Python program below with a LOT more digits. I have also written a much better integrator for my JavaScript code*, which I am pleased to say agrees almost to machine precision with Python's scipy.integrate.quad here. I have posted these results as "pbuk g7k15" below; the two columns agree except in the last digit or two. I may be able to get this into GitHub and the demo site over the weekend.

I have also run Ibix's code with the radiation parameters ## \Omega_{R0} ## and ## \Omega_R ## removed. You can see from the table below that it doesn't make much difference so there must be some other reason for the differences from the other calculators.

| z | Gnedin | Jorrie, 2022 | ibix | pbuk g7k15 | ibix no ## \Omega_R ## | cosmic | Toth |
|---|---|---|---|---|---|---|---|
| 0.1 | 1.41004 | 1.4084 | 1.409699537167435 | 1.409699537167435 | 1.409707 | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.2718364599617440 | 1.2718364599617438 | 1.271842 | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.133275788110779 | 1.133275788110779 | 1.133280 | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.9940198604891496 | 0.9940198604891496 | 0.994023 | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.8540712154740808 | 0.8540712154740807 | 0.854074 | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.7134325928253945 | 0.7134325928253945 | 0.713434 | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.5721069353785664 | 0.5721069353785664 | 0.572108 | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.4300973905418238 | 0.4300973905418237 | 0.430098 | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.2874073115884052 | 0.2874073115884052 | 0.287408 | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.1440402587352972 | 0.1440402587352972 | 0.144040 | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.1447094881728547 | 0.1447094881728547 | 0.144709 | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.2900840205228997 | 0.2900840205228997 | 0.290084 | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.43611920283095362 | 0.43611920283095357 | 0.436119 | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.5828104320285896 | 0.5828104320285897 | 0.582809 | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.7301528965670417 | 0.7301528965670419 | 0.730151 | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.8781415770516691 | 0.8781415770516691 | 0.878139 | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.0267712471476862 | 1.0267712471476862 | 1.026768 | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.1760364747623280 | 1.1760364747623278 | 1.176033 | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.3259316235080327 | 1.3259316235080325 | 1.325927 | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.4764508544505859 | 1.4764508544505859 | 1.476445 | 1.476332 | 1.476349 |

* Given that scipy.integrate.quad simply links the venerable Fortran library QUADPACK in, I am quite pleased that my Gauss-Kronrod [7, 15] implementation seems to perform as well, on this problem at least. I need to make sure it handles the discontinuity at ## s = 0 ## well and add transformation for the integration to ## \infty ## before I put it into the 'new' LightCone7.
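On the transformation for the integration to ## \infty ##: the usual device, which I believe is also what QUADPACK's QAGI routine applies before its Gauss-Kronrod rule, is to fold ## [a, \infty) ## onto ## (0, 1] ## with ## x = a + (1-t)/t ##, ## dx = dt/t^2 ##. A sketch, with plain Simpson standing in for the Gauss-Kronrod evaluation:

```javascript
// Map [a, Infinity) onto (0, 1] via x = a + (1 - t)/t, dx = dt/t^2,
// then integrate with composite Simpson (standing in for Gauss-Kronrod).
// The nudge eps off t = 0 is a crude stand-in for proper endpoint
// handling; it assumes f decays fast enough that g(t) -> 0 as t -> 0.
function integrateToInfinity(f, a, n = 2000) {
  const g = (t) => f(a + (1 - t) / t) / (t * t);
  const eps = 1e-9;
  const h = (1 - eps) / n;
  let sum = g(eps) + g(1);
  for (let i = 1; i < n; i++) sum += (i % 2 ? 4 : 2) * g(eps + i * h);
  return (h / 3) * sum;
}
```

For fast-decaying integrands such as ## e^{-x} ## this recovers the exact value to many digits; a Gauss-Kronrod rule avoids the endpoint nudge naturally, since its nodes never touch the interval ends.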
 
  • #84
Hi @pbuk,

Thanks for the results (Post #83) obtained with the python program by @Ibix. It is nice to know that when the ΩR term is included, pbuk g7k15 = version (1.2.0) = ibix. Please confirm that if ΩR = 0, pbuk g7k15 = ibix is also true.

When comparing the various results, what do you think about using the following assumptions?

The cosmic calculator works properly when ΩR = 0.​
The ICRAR calculator works properly when the ΩR term is included.​
 
  • #85
pbuk said:
I need to make sure it handles the discontinuity at ## s = 0 ## well and add transformation for the integration to ## \infty ## before I put it into the 'new' LightCone7.
I presume you meant "at ## z = 0 ##" ?
There is no real discontinuity at z = 0. One could just as well have had distances (and some other outputs) go negative for future times (negative z). The original LightCone team chose to keep things positive for better resolution on the vertical graph axis, while keeping the graphs vertically compact.

Maybe we should start talking about Lightcone8 (or X) due to the major rework and so avoid confusion...?
 
  • #86
JimJCW said:
Please confirm that if ΩR = 0, pbuk g7k15 = ibix is also true.
Yes this is true: the calculations in pbuk g7k15, ibix and jorrie are in fact identical except for the numerical integration method summarised below
  • jorrie uses a simple fixed step method with the 'glitch' in the title of this thread.
  • ibix uses Python's scipy.integrate.quad which links the ancient, well-tested FORTRAN 77 code QUADPACK.
  • pbuk g7k15 (note this is not yet the version used on the website) uses a new JavaScript routine written from scratch. I believe that for infinite ranges it uses the same transformation and underlying algorithm as QUADPACK (15-point Gauss-Kronrod), although there are differences in the error estimation and hence the adaptive step calculations. For finite ranges, however, QUADPACK uses a higher-order (21-point) method; this does not seem to make any material difference.

JimJCW said:
When comparing the various results, what do you think about using the following assumptions?

The cosmic calculator works properly when ΩR = 0.​
The ICRAR calculator works properly when the ΩR term is included.​
I'm not sure, other calculators seem to include other parameters, for instance Ned Wright's includes this:

JavaScript:
// by Ned Wright
// 25 Jul 1999 - revised 2 Dec 2005
// Copyright Edward L. Wright, all rights reserved.

// ...

// correction for annihilations of particles not present now like e+/e-
// added 13-Aug-03 based on T_vs_t.f
  var lpz = Math.log((1+1.0*z))/Math.log(10.0);
  var dzage = 0;
  if (lpz >  7.500) dzage = 0.002 * (lpz -  7.500);
  if (lpz >  8.000) dzage = 0.014 * (lpz -  8.000) +  0.001;
  if (lpz >  8.500) dzage = 0.040 * (lpz -  8.500) +  0.008;
  if (lpz >  9.000) dzage = 0.020 * (lpz -  9.000) +  0.028;
  if (lpz >  9.500) dzage = 0.019 * (lpz -  9.500) +  0.039;
  if (lpz > 10.000) dzage = 0.048;
  if (lpz > 10.775) dzage = 0.035 * (lpz - 10.775) +  0.048;
  if (lpz > 11.851) dzage = 0.069 * (lpz - 11.851) +  0.086;
  if (lpz > 12.258) dzage = 0.461 * (lpz - 12.258) +  0.114;
  if (lpz > 12.382) dzage = 0.024 * (lpz - 12.382) +  0.171;
  if (lpz > 13.055) dzage = 0.013 * (lpz - 13.055) +  0.188;
  if (lpz > 14.081) dzage = 0.013 * (lpz - 14.081) +  0.201;
  if (lpz > 15.107) dzage = 0.214;
  zage = zage*Math.pow(10.0,dzage);

There is clearly a lot more to this than I realized at first: I'm just a humble coder :wink:
 
Last edited:
  • #87
Jorrie said:
I presume you meant "at ## z = 0 ##" ?
Well I meant at ## s = 0 ## because I was confusing the calculation of ## T_{now} = \int_{s=S}^\infty \frac{ds}{sH(s)} ## with ## D_{hor} = \frac1S \int_{s=0}^S \frac{ds}{H(s)} ## and thinking we had to compute ## \int_{s=0}^S \frac{ds}{sH(s)} ## somewhere! So there is no discontinuity: that's good, my integrator doesn't handle discontinuities as well as QUADPACK.

Jorrie said:
Maybe we should start talking about Lightcone8 (or X) due to the major rework and so avoid confusion...?
Yes, a bit of housekeeping to do here.
 
  • #88
It may be interesting to compare Dnow vs z curves from different calculators for larger z values. The result is shown below:

[Attached figure: Dnow vs z from the various calculators at large z]

This suggests that for large z values, the results from all these calculators are almost indistinguishable and the glitch discussed in this thread is very small. I am working on the comparison for small z values.

Note that DJorrie,2021 is the modified version discussed in Posts #15 and #17.
 
  • #89
In the following discussion, I assume that the ICRAR calculator works properly and use it as a reference for comparison.

[Attached figure: differences from the ICRAR reference vs z]


[Attached figure: differences from the ICRAR reference vs z, continued]


These figures suggest that there are real discrepancies among the three calculators. When we modify and/or rewrite Jorrie’s calculator, maybe we can use the ICRAR calculator as a tool for testing.
 
  • #90
JimJCW said:
When we modify and/or rewrite Jorrie’s calculator, maybe we can use the ICRAR calculator as a tool for testing.
It's already rewritten, now working here: https://light-cone-calc.github.io. There's not much point testing it against the ICRAR calculator though as that uses a more sophisticated model.
 
