A glitch in Jorrie’s Cosmo-Calculator?

  • Thread starter: JimJCW
  • #51
Ibix said:
This has an ODE solver: https://ccc-js.github.io/numeric2/index.html. I think that could be slightly abused to do integration.
I don't think that is a very good idea: (i) the project is even more stale than Lightcone7 (no commits for 10 years); (ii) robust methods for quadrature are very different from methods for initial value problems.

Ibix said:
Or we could just use the approach in the existing module.
I think that would be better.

Ibix said:
I don't think it's a bad approach and seems pretty responsive (although I'd move some stuff around to make it a bit easier to see what's going on, and you can get rid of one pass of the integrator, I think). It's just grown messy and needs a dead code pruning round.
I agree with all of that. An approach I have found successful in the past is to extract the existing code into a stand-alone module and set up continuous integration with some high-level functional tests, so you can check you don't break anything while refactoring the existing code to figure out how it works (and why it doesn't).

Once you have done this you can prioritize what needs to be done next (quick fix to address bugs? rewrite algorithm for better stability/accuracy/speed? Etc...).
 
  • #52
pbuk said:
(ii) robust methods for quadrature are very different from methods for initial value problems.
While I have no comment on the referenced project, far more development of adaptive and variable-step-size methods has occurred for ODEs than for quadrature. In particular, for spiky functions poorly approximated by polynomials, a given accuracy is often achieved with much less computation by casting the quadrature as an ODE and using modern ODE methods.
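To illustrate the idea (a stand-alone sketch, not code from any of the projects discussed): writing I(x) = ∫ₐˣ f(t) dt as the initial value problem I′ = f(x), I(a) = 0 lets any ODE stepper act as an integrator. With a fixed-step RK4 this actually degenerates to Simpson's rule; the payoff comes when an adaptive stepper concentrates effort where f is spiky.

```javascript
// Casting a quadrature as an initial value problem: I(x) = ∫_a^x f(t) dt
// satisfies I'(x) = f(x), I(a) = 0, so an ODE stepper doubles as an
// integrator.  Classic fixed-step RK4 is used here; since f does not depend
// on I, the k2 and k3 stages coincide and the step reduces to Simpson's rule.
function integrateAsOde(f, a, b, n) {
  const h = (b - a) / n;
  let x = a, I = 0;
  for (let i = 0; i < n; i++) {
    const k1 = f(x);
    const k2 = f(x + h / 2);
    const k3 = f(x + h / 2); // equals k2 because I' is independent of I
    const k4 = f(x + h);
    I += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4);
    x += h;
  }
  return I;
}

// Smoke test: ∫_0^1 dt/(1+t²) = atan(1) = π/4
console.log(integrateAsOde(t => 1 / (1 + t * t), 0, 1, 100));
```

An adaptive ODE solver would replace the fixed h with error-controlled steps, which is where PAllen's efficiency argument for spiky integrands comes in.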
 
  • #53
Here it is https://lightcone7.github.io/LightCone7.html with the calculations split out into a separate module (source here) with the bare minimum of changes to get it to work independently.

Needs some tidying up and documentation: PM me with github user details if you want to use/take over this.
 
  • #54
PAllen said:
In particular, for spikey functions poorly approximated by polynomials, a given accuracy is often achieved with much less computation by casting the quadrature as an ODE, and using modern ODE methods.
I didn't realize that, although these functions are generally smooth.

PAllen said:
Far more development of adaptive and variable step size has occurred for ODEs than quadrature.
Yes this is true in general, but there is not much implemented in Javascript for ODEs either.
 
  • #55
pbuk said:
I didn't realize that, although these functions are generally smooth.
Agreed.
pbuk said:
Yes this is true in general, but there is not much implemented in Javascript for ODEs either.
Probably true, but in other languages publicly available implementations of almost any method exist. The SciPy quadrature interface @Ibix mentioned looks good; it uses adaptive step size. SciPy also has ODE methods.

[edit: I see this was all discussed before - browser availability is a requirement. I have no idea what is available open source in this context.]
 
  • #56
pbuk said:
I have some experience of collaborative Javascript projects, @Jorrie do you mind if I set something up as an illustration of how to transition the calculation module to a robust modern approach?
By all means, do that. :smile:
 
  • #57
Ibix said:
Install python, man! . . .

That sounds like a good idea. Thanks for calculating the Dnow values! A comparison is shown below:

| z | Gnedin | Jorrie, 2022 | ibix |
| --- | --- | --- | --- |
| 0.1 | 1.41004 | 1.4084 | 1.41 |
| 0.09 | 1.27214 | 1.270609 | 1.27 |
| 0.08 | 1.13354 | 1.132106 | 1.13 |
| 0.07 | 0.99426 | 0.992931 | 0.994 |
| 0.06 | 0.85427 | 0.85305 | 0.854 |
| 0.05 | 0.7136 | 0.712477 | 0.713 |
| 0.04 | 0.57225 | 0.571225 | 0.572 |
| 0.03 | 0.43023 | 0.429278 | 0.43 |
| 0.02 | 0.28748 | 0.286651 | 0.28 |
| 0.01 | 0.14408 | 0.14336 | 0.144 |
| 0 | 0 | 0.000621 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 |
| -0.02 | 0.29015 | 0.290567 | 0.29 |
| -0.03 | 0.43623 | 0.436536 | 0.436 |
| -0.04 | 0.58295 | 0.583161 | 0.583 |
| -0.05 | 0.73032 | 0.730433 | 0.73 |
| -0.06 | 0.87834 | 0.878358 | 0.878 |
| -0.07 | 1.02699 | 1.026926 | 1.03 |
| -0.08 | 1.17631 | 1.176124 | 1.18 |
| -0.09 | 1.32623 | 1.325968 | 1.33 |
| -0.1 | 1.47679 | 1.476416 | 1.48 |

If we plot the three Dnow vs z curves, they are just about indistinguishable. If we add the (DGnedin – Dibix) result to the plot shown in Post #36, we get the following:

[attached plot]

In general, (DGnedin - Dibix) fluctuates around the y = 0 line. The greater deviations from the line near the two ends are probably due to round-off effects in the calculated Dibix. I am still puzzled by the slightly different behaviors of the two data sets.
 
  • #58
JimJCW said:
I am still puzzled by the slightly different behaviors of the two data sets.
I'm not. Jorrie's calculation uses a fixed step size, the integration steps don't match the result steps exactly, it doesn't treat the integration to infinity properly, and it only manages to get as close as it does by using far more integration steps than it needs. In short, it needs rewriting.
 
  • #59
pbuk said:
In short, it needs rewriting.
It looks to me that Calculate() can be split up into three steps
  1. Compute a list of the ##z## values requested, making sure ##z=0## is there
  2. Do the integrals ##\int_0^S\frac{1}{H(s)}ds## and ##\int_0^S\frac{1}{sH(s)}ds## for all ##S## corresponding to those ##z## and also for ##S=\infty##.
  3. Compute all output statistics for each ##z## (some of which depend on the integral up to ##S=1## or ##S=\infty##, hence doing this after the integration run).
I would expect that this is just reordering existing code, so ought to make little to no difference to the output. Then we could work on doing the integration in a better way.
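To make the proposed split concrete, here is a minimal stand-alone sketch; all function names, the fixed-step trapezium placeholder, and the parameter values (H0 = 67.74 km/s/Mpc, ΩΛ = 0.691, zeq = 3370) are illustrative assumptions, not the actual LightCone7 code:

```javascript
// Step 0: inputs/initialization (illustrative values, not LightCone7's code).
const H0 = 67.74 / 978; // km/s/Mpc -> Gyr^-1
const OmegaL = 0.691;
const OmegaM = 0.3089;
const OmegaR = OmegaM / 3371; // assumes z_eq = 3370, so s_eq = 3371
// Friedmann equation in terms of stretch s = 1 + z:
const H = s => H0 * Math.sqrt(OmegaL + OmegaM * s ** 3 + OmegaR * s ** 4);

// Step 1: build the list of requested z values, making sure z = 0 is there.
function makeZList(zFirst, zLast, steps) {
  const zs = [];
  for (let i = 0; i <= steps; i++) zs.push(zFirst + (i * (zLast - zFirst)) / steps);
  if (!zs.includes(0)) zs.push(0);
  return zs.sort((a, b) => a - b);
}

// Step 2: integrate 1/H(s) up to each S = 1 + z (fixed-step trapezium here,
// purely as a placeholder for a proper adaptive quadrature).
function integrate(f, a, b, n = 1000) {
  if (a === b) return 0;
  const h = (b - a) / n;
  let sum = (f(a) + f(b)) / 2;
  for (let i = 1; i < n; i++) sum += f(a + i * h);
  return sum * h;
}

// Step 3: compute output statistics per z from the stored integrals.
function calculate(zs) {
  return zs.map(z => ({ z, integral: integrate(s => 1 / H(s), 1, 1 + z) }));
}

console.log(calculate(makeZList(-0.1, 0.1, 4)));
```

The point is only the shape: the three steps are independent, so the integrator in step 2 can be swapped out without touching the rest.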

Any thoughts? I haven't looked at your github page yet @pbuk, but will do so when I'm on something with a slightly larger screen.
 
  • #60
Sounds good. The present Calculate() also has a separate T_age module, simply because there is a legacy requirement to immediately see the effect on present age when any relevant input parameter is changed. But this could be integrated with the ##\int_0^S\frac{1}{H(s)}ds## module.
 
  • #61
Ibix said:
Any thoughts?
Yes that's exactly what I was thinking (plus step 0 being sanitizing inputs and other initialization).
 
  • #62
I'm impressed with the apparent enthusiasm for the community project, which should finally become a PF Cosmo tool, decoupled from any one member and managed by an elected committee on a neutral platform.
 
  • #63
Well kind of self-elected! PM GitHub user ids to be added as project admins.
 
  • #64
If there is no good open source quadrature package available for JavaScript, it seems to me that the methods described in section 4.5 of the third edition of “Numerical Recipes” are more than adequate for the required integrals, and are much less code than trying to reproduce something like QUADPACK.
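For what it's worth, if §4.5 of the third edition is the "Quadrature by Variable Transformation" section (as I recall), the core tanh-sinh (double-exponential) rule really is only a few lines. A stand-alone sketch in JavaScript follows; the step size h and cutoff tmax are arbitrary choices here, not values from the book:

```javascript
// Tanh-sinh (double-exponential) quadrature sketch: substitute
// x = tanh((π/2)·sinh(t)), so the weights decay double-exponentially and a
// plain trapezoidal sum in t converges extremely fast for smooth integrands.
function tanhSinh(f, a, b, { h = 0.1, tmax = 4 } = {}) {
  const c = (b - a) / 2, m = (a + b) / 2; // map [a, b] onto [-1, 1]
  let sum = 0;
  for (let t = -tmax; t <= tmax; t += h) {
    const u = (Math.PI / 2) * Math.sinh(t);
    const x = Math.tanh(u);
    const w = ((Math.PI / 2) * Math.cosh(t)) / Math.cosh(u) ** 2;
    sum += w * f(m + c * x);
  }
  return c * h * sum;
}

// Smoke test: ∫_0^1 4/(1+x²) dx = π
console.log(tanhSinh(x => 4 / (1 + x * x), 0, 1));
```

This is certainly much less code than reproducing something like QUADPACK, at the cost of the error estimation and interval subdivision that an adaptive package provides.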
 
  • #65
I'm quite rusty on my cosmology... so I can't vouch for the equations.
When this thread started, I was interested in trying to reproduce the calculations in Desmos.

So, here is what I have so far:
https://www.desmos.com/calculator/dorewco5ms
See the table at the end to check the numerical calculations.

You'll have to check the equations ( based on calculations by Jorrie and Ibix ; see also work by pbuk ) all documented in the Desmos file.

I don't have any more time to play around with it.
Maybe someone might find it useful (once the equations are verified).


(A note on Desmos... the ordering of the entries doesn't matter.
Unlike geogebra or trinket/Glowscript,
every Desmos "save" generates a new unique URL... which the saver has to share for others to see.)
 
  • #66
@robphy , looking at Desmos worksheet (really nice, by the way, never heard of this tool), I have one question. You have a formula using ##f_T##, but I don’t see a definition for it.
 
  • #67
PAllen said:
@robphy , looking at Desmos worksheet (really nice, by the way, never heard of this tool), I have one question. You have a formula using ##f_T##, but I don’t see a definition for it.
It’s in the “constants” folder.
It’s a conversion unit.
 
  • #68
pbuk said:
I'm not. Jorrie's calculation uses a fixed step size, the integration steps don't match the result steps exactly, it doesn't treat the integration to infinity properly, and it only manages to get as close as it does by using far more integration steps than it needs. In short, it needs rewriting.

I have been calculating Dnow using various calculators. The results are often slightly different from one another. The results calculated with the cosmic and Toth calculators are added to the table shown in Post #57. They are consistent with those calculated by @Ibix, but with more digits kept:

| z | Gnedin | Jorrie, 2022 | ibix | cosmic | Toth |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 1.41004 | 1.4084 | 1.41 | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.287 | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | 1.476332 | 1.476349 |

For the discussion below, let's assume that the cosmic result is correct and use it as a reference for comparison.

For two properly working calculators, the difference in their calculated Dnow’s as a function of z should be a slightly fluctuating plot around the y = 0 line, as exemplified by the Dcosmic - DToth plot shown in the following figure:

[attached plot]


The Dcosmic - DJorrie plot, however, shows a peculiar behavior: there is an abrupt break around z = 0, and DJorrie(z) deviates systematically from Dcosmic(z). It would be nice to get a clue as to why this is happening. If you or @Jorrie have any ideas about the location(s) in the Calculation.js file responsible for this behavior, please let us know.
 
  • #69
JimJCW said:
If you or @Jorrie have some ideas about the location(s) in the Calculation.js file that is responsible for this behavior, please let us know.
It's more complicated than that, see post #55 above. Anyway we have moved on from there, the Dnow calculations seem to be much improved over the range near z = 0, but other calculations need to be checked and a few other things sorted. The results in the table below are taken from the current development version (1.2.0) running at https://lightcone7.github.io/LightCone7-develop.html.

| z | Gnedin | Jorrie, 2022 | ibix | cosmic | Toth | v1.2.0 |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 1.41004 | 1.4084 | 1.41 | 1.409592 | 1.409608 | 1.409700 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | 1.27174 | 1.271756 | 1.271836 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | 1.133189 | 1.133219 | 1.133276 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | 0.993944 | 0.993964 | 0.940199 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | 0.854005 | 0.854024 | 0.854071 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | 0.713377 | 0.713399 | 0.713433 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | 0.572063 | 0.57209 | 0.572107 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | 0.430065 | 0.430097 | 0. |
| 0.02 | 0.28748 | 0.286651 | 0.287 | 0.287385 | 0.287382 | 0.287407 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | 0.144029 | 0.144017 | 0.144040 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | 0.144698 | 0.144702 | 0.144040 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | 0.290061 | 0.290057 | 0.290084 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | 0.436084 | 0.436097 | 0.436119 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | 0.582763 | 0.582789 | 0.582810 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | 0.730094 | 0.730101 | 0.730153 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | 0.878072 | 0.878098 | 0.878142 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | 1.026688 | 1.026715 | 1.026771 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | 1.175941 | 1.175951 | 1.176036 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | 1.325824 | 1.32584 | 1.325931 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | 1.476332 | 1.476349 | 1.476451 |
 
  • #70
pbuk said:
It's more complicated than that, see post #55 above. Anyway we have moved on from there, the Dnow calculations seem to be much improved over the range near z = 0, but other calculations need to be checked and a few other things sorted. The results in the table below are taken from the current development version (1.2.0) running at https://lightcone7.github.io/LightCone7-develop.html.

I noticed that using version (1.2.0), I can’t duplicate your Dnow(z) in one run of the calculator; I had to calculate one or two values at a time. Some of the Dnow values probably were not correctly entered in your table. For example, Dnow = 0 for z = 0.03.

If we add (Dcosmic - Dv1.2.0) vs z curve to the figure shown in Post #68, we get the following result:

[attached plot]


The Dv1.2.0(z) result shows a big improvement over the Jorrie 2022 result, but it still deviates systematically from Dcosmic(z). A better indication of this is shown below:

[attached plot]
 
  • #71
JimJCW said:
I noticed that using version (1.2.0), I can’t duplicate your Dnow(z) in one run of the calculator; I had to calculate one or two values at a time.
Yes, it doesn't currently do linear intervals; this should be easy to add.

JimJCW said:
The Dv1.2.0(z) result shows a big improvement over the Jorrie 2022 result, but it still deviates systematically from Dcosmic(z).
I'm not really surprised at this deviation, steps to improve it include:
  • Check the precision of the physical constants and model values
  • Try the midpoint rule instead of the trapezium rule, which always overestimates for a convex (concave-up) integrand.
  • Add error estimation/adaptive step size.
But these are all enhancements for 1.4.0.
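On the trapezium-vs-midpoint point, a quick stand-alone numerical check (e⁻ˣ is chosen here because it is convex, where the trapezium rule overestimates and the midpoint rule underestimates with roughly half the error; none of this is the LightCone7 code):

```javascript
// Fixed-step trapezium rule: chords lie above a convex curve -> overestimate.
function trapezium(f, a, b, n) {
  const h = (b - a) / n;
  let s = (f(a) + f(b)) / 2;
  for (let i = 1; i < n; i++) s += f(a + i * h);
  return s * h;
}
// Fixed-step midpoint rule: tangent trapezoids lie below -> underestimate,
// with an error about half the trapezium rule's, and opposite in sign.
function midpoint(f, a, b, n) {
  const h = (b - a) / n;
  let s = 0;
  for (let i = 0; i < n; i++) s += f(a + (i + 0.5) * h);
  return s * h;
}

const f = x => Math.exp(-x);       // convex on [0, 1]
const exact = 1 - Math.exp(-1);    // ∫_0^1 e^-x dx
console.log(trapezium(f, 0, 1, 50) - exact); // positive (overestimate)
console.log(midpoint(f, 0, 1, 50) - exact);  // negative, about half the size
```

The opposite signs also mean a weighted combination (2·midpoint + trapezium)/3, i.e. Simpson's rule, cancels the leading error term.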
 
  • #72
pbuk said:
I'm not really surprised at this deviation, steps to improve it include:
These calculators use different values for the constants* (and Cosmic doesn't take ## \Omega_R ## into account at all). It's not surprising they give different results with a regular pattern. Note that the "new Jorrie" site does not yet allow overriding the default values (again, this is something that is easy to add, but pointless until we are happy with ALL the calculations, not just ## D_{now} ##).

*e.g. ## H_0 ## Jorrie, pbuk 67.74 km/s/Mpc, cosmic 67.04, Gnedin 68.14.
 
  • #73
pbuk said:
These calculators use different values for the constants* (and Cosmic doesn't take ## \Omega_R ## into account at all). It's not surprising they give different results with a regular pattern. Note that the "new Jorrie" site does not yet allow overriding the default values (again, this is something that is easy to add, but pointless until we are happy with ALL the calculations, not just ## D_{now} ##).

*e.g. ## H_0 ## Jorrie, pbuk 67.74 km/s/Mpc, cosmic 67.04, Gnedin 68.14.

You are right; all other calculators I have tried do not include ΩR. That’s why I like and use Jorrie’s calculator. The effect of ΩR is significant only during early cosmological times (high z values) and it probably does not have any noticeable effects in our Dnow calculations.

All calculators I tried use H0, Ωm and ΩΛ as input parameters and I use H0 = 67.74 km/s/Mpc, Ωm = 0.309 and ΩΛ = 0.691 for all of them.
 
  • #74
JimJCW said:
You are right; all other calculators I have tried do not include ΩR. That’s why I like and use Jorrie’s calculator. The effect of ΩR is significant only during early cosmological times (high z values) and it probably does not have any noticeable effects in our Dnow calculations.
It would be easy enough to exclude it for better comparisons: then with the same constants the only thing that should be different is the integration algorithm.

JimJCW said:
All calculators I tried use H0, Ωm and ΩΛ as input parameters and I use H0 = 67.74 km/s/Mpc, Ωm = 0.309 and ΩΛ = 0.691 for all of them.
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?
 
  • #75
pbuk said:
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?
Keep in mind that the accuracy of our knowledge of the Omegas is far worse than the rounding contributions. It is good to chase mathematical integrity in algorithms, but is it worth it to get far ahead of the quality of the observations that we work with?
 
  • #76
Jorrie said:
is it worth it to get far ahead of the quality of the observations that we work with?
Not in actual use, but in testing it is important to determine whether differences in results arise from different models or poor integration methods.
 
  • #77
pbuk said:
Ah OK. Is Ωm = 0.309 correct or is it Ωm = 0.3089 as per Jorrie's defaults?

See discussion in StackExchange:

[attached screenshot of the StackExchange discussion]


If we change zeq from 3370 to 999998 in the current version of the calculator (i.e., neglect the radiation energy density), Ωm,0 will change from 0.3089 to 0.309.

In the current version, changing zeq in the calculator does not change the calculated Dnow. When that is fixed, we can compare the Dnow calculated using large zeq values with the result from the cosmic calculator (no radiation energy term).
 
  • #78
Hi @Ibix,

In the figures included in Post #70, Dv1.2.0(z) by @pbuk shows a small, systematic deviation from Dcosmic(z) near z = 0. It would be interesting to see whether this is caused by the omission of the radiation term ΩR,0 in the cosmic model. Your Python version includes the radiation term and may provide information about this. Would you please post your result with more digits? (See Post #57)

| z | Gnedin | Jorrie, 2022 | ibix | ibix_2 | cosmic | Toth |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 1.41004 | 1.4084 | 1.41 | | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.27 | | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.13 | | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.994 | | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.854 | | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.713 | | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.572 | | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.43 | | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.28 | | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.144 | | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 | | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.29 | | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.436 | | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.583 | | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.73 | | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.878 | | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.03 | | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.18 | | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.33 | | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.48 | | 1.476332 | 1.476349 |
 
  • #79
Hi @JimJCW,

Yes, this is part of my plan for generating test data. I am away on a sailing trip at the moment so can't work on it for a while though.
 
  • #80
Hi @Jorrie, @pbuk, @Ibix,

I think the cosmology calculator by the International Centre for Radio Astronomy Research, ICRAR, can be useful for testing in our calculator development. It is written in R and can take ΩR as an input parameter. The inputs to this calculator for the PLANCK 2015 data are:

H0 = 67.74​
ΩΛ = 0.691​
ΩR = 0.0000916 [obtained from ΩR = Ωm / (zeq + 1)]​
Ωm = 1 - 0.691 - 0.0000916 = 0.3089​
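These inputs are self-consistent, which a couple of lines of arithmetic confirm (a stand-alone check; zeq = 3370 is assumed here, as in LightCone7's defaults, and the closed form below just solves the two quoted relations simultaneously):

```javascript
// Check the quoted ICRAR inputs: with OmegaR = OmegaM / (z_eq + 1) and
// OmegaM = 1 - OmegaL - OmegaR, we get OmegaM * (1 + 1/(z_eq + 1)) = 1 - OmegaL.
const OmegaL = 0.691;
const z_eq = 3370; // assumed matter-radiation equality redshift
const OmegaM = (1 - OmegaL) / (1 + 1 / (z_eq + 1));
const OmegaR = OmegaM / (z_eq + 1);

console.log(OmegaM.toFixed(4));        // ≈ 0.3089
console.log(OmegaR.toExponential(3));  // ≈ 9.16e-5
```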

I have some questions about some of the terms:

What is the difference between ‘The Comoving Radial Distance LoS to z’ and ‘The Comoving Radial Distance Tran to z’?​
What is Sigma8[0]?​
 
  • #81
JimJCW said:
Hi @Jorrie, @pbuk, @Ibix,

I think the cosmology calculator by the International Centre for Radio Astronomy Research, ICRAR, can be useful for testing in our calculator development. It is written in R and can take ΩR as an input parameter. ...

I have some questions about some of the terms:

What is the difference between ‘The Comoving Radial Distance LoS to z’ and ‘The Comoving Radial Distance Tran to z’?​
What is Sigma8[0]?​
You will find the definitions of comoving distances in Hogg
(arXiv:astro-ph/9905116v4 16 Dec 2000) section 5.

I have no clue about Sigma8[0].
 
  • #83
JimJCW said:
In the figures included in Post #70, Dv1.2.0(z) by @pbuk shows a small, systematic deviation from Dcosmic(z) near z = 0. It is interesting to see whether this is caused by the omission of the radiation term ΩR,0 in the cosmic model. Your python version includes the radiation term and may provide information concerning this. Would you please post your result with more digits? (See Post #57)
I have posted results from Ibix's Python program below with a LOT more digits. I have also written a much better integrator for my JavaScript code*, which I am pleased to say agrees almost to machine precision with Python's scipy.integrate.quad here. I have posted these results (with differing digits shown in bold) as "pbuk g7k15" below. I may be able to get this into GitHub and the demo site over the weekend.

I have also run Ibix's code with the radiation parameters ## \Omega_{R0} ## and ## \Omega_R ## removed. You can see from the table below that it doesn't make much difference so there must be some other reason for the differences from the other calculators.

| z | Gnedin | Jorrie, 2022 | ibix | pbuk g7k15 | ibix no ## \Omega_R ## | cosmic | Toth |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 1.41004 | 1.4084 | 1.409699537167435 | 1.409699537167435 | 1.409707 | 1.409592 | 1.409608 |
| 0.09 | 1.27214 | 1.270609 | 1.2718364599617440 | 1.2718364599617438 | 1.271842 | 1.27174 | 1.271756 |
| 0.08 | 1.13354 | 1.132106 | 1.133275788110779 | 1.133275788110779 | 1.133280 | 1.133189 | 1.133219 |
| 0.07 | 0.99426 | 0.992931 | 0.9940198604891496 | 0.9940198604891496 | 0.994023 | 0.993944 | 0.993964 |
| 0.06 | 0.85427 | 0.85305 | 0.8540712154740808 | 0.8540712154740807 | 0.854074 | 0.854005 | 0.854024 |
| 0.05 | 0.7136 | 0.712477 | 0.7134325928253945 | 0.7134325928253945 | 0.713434 | 0.713377 | 0.713399 |
| 0.04 | 0.57225 | 0.571225 | 0.5721069353785664 | 0.5721069353785664 | 0.572108 | 0.572063 | 0.57209 |
| 0.03 | 0.43023 | 0.429278 | 0.4300973905418238 | 0.4300973905418237 | 0.430098 | 0.430065 | 0.430062 |
| 0.02 | 0.28748 | 0.286651 | 0.2874073115884052 | 0.2874073115884052 | 0.287408 | 0.287385 | 0.287382 |
| 0.01 | 0.14408 | 0.14336 | 0.1440402587352972 | 0.1440402587352972 | 0.144040 | 0.144029 | 0.144017 |
| 0 | 0 | 0.000621 | 0 | 0 | 0 | 0 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.1447094881728547 | 0.1447094881728547 | 0.144709 | 0.144698 | 0.144702 |
| -0.02 | 0.29015 | 0.290567 | 0.2900840205228997 | 0.2900840205228997 | 0.290084 | 0.290061 | 0.290057 |
| -0.03 | 0.43623 | 0.436536 | 0.43611920283095362 | 0.43611920283095357 | 0.436119 | 0.436084 | 0.436097 |
| -0.04 | 0.58295 | 0.583161 | 0.5828104320285896 | 0.5828104320285897 | 0.582809 | 0.582763 | 0.582789 |
| -0.05 | 0.73032 | 0.730433 | 0.7301528965670417 | 0.7301528965670419 | 0.730151 | 0.730094 | 0.730101 |
| -0.06 | 0.87834 | 0.878358 | 0.8781415770516691 | 0.8781415770516691 | 0.878139 | 0.878072 | 0.878098 |
| -0.07 | 1.02699 | 1.026926 | 1.0267712471476862 | 1.0267712471476862 | 1.026768 | 1.026688 | 1.026715 |
| -0.08 | 1.17631 | 1.176124 | 1.1760364747623280 | 1.1760364747623278 | 1.176033 | 1.175941 | 1.175951 |
| -0.09 | 1.32623 | 1.325968 | 1.3259316235080327 | 1.3259316235080325 | 1.325927 | 1.325824 | 1.32584 |
| -0.1 | 1.47679 | 1.476416 | 1.4764508544505859 | 1.4764508544505859 | 1.476445 | 1.476332 | 1.476349 |

* Given that scipy.integrate.quad simply links the venerable Fortran library QUADPACK in, I am quite pleased that my Gauss-Kronrod [7, 15] implementation seems to perform as well, on this problem at least. I need to make sure it handles the discontinuity at ## s = 0 ## well and add transformation for the integration to ## \infty ## before I put it into the 'new' LightCone7.
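The transformation for the integration to ∞ is simple to state; the sketch below is stand-alone (the helper names are mine, not pbuk's code), showing the kind of substitution QUADPACK's infinite-range routine uses: with s = a + t/(1−t) we have ds = dt/(1−t)², so ∫ₐ^∞ f(s) ds = ∫₀¹ f(a + t/(1−t))/(1−t)² dt, after which any finite-interval rule (such as Gauss-Kronrod) applies.

```javascript
// Map a semi-infinite integrand onto (0, 1): s = a + t/(1-t), ds = dt/(1-t)^2.
function transformToUnit(f, a) {
  return t => f(a + t / (1 - t)) / ((1 - t) * (1 - t));
}

// Check with a crude open rule (midpoint avoids the t = 1 endpoint):
// ∫_1^∞ e^-s ds = e^-1 ≈ 0.36788
function midpoint(g, n) {
  const h = 1 / n;
  let s = 0;
  for (let i = 0; i < n; i++) s += g((i + 0.5) * h);
  return s * h;
}
console.log(midpoint(transformToUnit(s => Math.exp(-s), 1), 1000));
```

A pleasant special case: for f(s) = 1/s² with a = 1 the transformed integrand is identically 1, which makes the substitution easy to verify by hand.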
 
  • #84
Hi @pbuk,

Thanks for the results (Post #83) obtained with the Python program by @Ibix. It is nice to know that when the ΩR term is included, pbuk g7k15 = version (1.2.0) = ibix. Please confirm that if ΩR = 0, pbuk g7k15 = ibix is also true.

When comparing the various results, what do you think about using the following assumptions?

The cosmic calculator works properly when ΩR = 0.​
The ICRAR calculator works properly when the ΩR term is included.​
 
  • #85
pbuk said:
I need to make sure it handles the discontinuity at ## s = 0 ## well and add transformation for the integration to ## \infty ## before I put it into the 'new' LightCone7.
I presume you meant "at ## z = 0 ##" ?
There is no real discontinuity at z = 0. One could just as well have had distances (and some other outputs) go negative for future times (negative z). The reason the original Lightcone team have chosen to keep things positive was for better resolution on the vertical graph axis, while keeping the graphs vertically compact.

Maybe we should start talking about Lightcone8 (or X) due to the major rework and so avoid confusion...?
 
  • #86
JimJCW said:
Please confirm that if ΩR = 0, pbuk g7k15 = ibix is also true.
Yes this is true: the calculations in pbuk g7k15, ibix and jorrie are in fact identical except for the numerical integration method summarised below
  • jorrie uses a simple fixed step method with the 'glitch' in the title of this thread.
  • ibix uses Python's scipy.integrate.quad which links the ancient, well-tested FORTRAN 77 code QUADPACK.
  • pbuk g7k15 (note this is not yet the version used on the website) uses a new JavaScript routine written from scratch. I believe that for infinite ranges this uses the same transformation and underlying algorithm as QUADPACK (15-point Gauss-Kronrod), although there are differences in the error estimation and hence the adaptive step calculations. For finite ranges QUADPACK uses a higher-order (21-point) method; this does not seem to make any material difference.

JimJCW said:
When comparing the various results, what do you think about using the following assumptions?

The cosmic calculator works properly when ΩR = 0.​
The ICRAR calculator works properly when the ΩR term is included.​
I'm not sure; other calculators seem to include other parameters. For instance, Ned Wright's includes this:

JavaScript:
// by Ned Wright
// 25 Jul 1999 - revised 2 Dec 2005
// Copyright Edward L. Wright, all rights reserved.

// ...

// correction for annihilations of particles not present now like e+/e-
// added 13-Aug-03 based on T_vs_t.f
  var lpz = Math.log((1+1.0*z))/Math.log(10.0);
  var dzage = 0;
  if (lpz >  7.500) dzage = 0.002 * (lpz -  7.500);
  if (lpz >  8.000) dzage = 0.014 * (lpz -  8.000) +  0.001;
  if (lpz >  8.500) dzage = 0.040 * (lpz -  8.500) +  0.008;
  if (lpz >  9.000) dzage = 0.020 * (lpz -  9.000) +  0.028;
  if (lpz >  9.500) dzage = 0.019 * (lpz -  9.500) +  0.039;
  if (lpz > 10.000) dzage = 0.048;
  if (lpz > 10.775) dzage = 0.035 * (lpz - 10.775) +  0.048;
  if (lpz > 11.851) dzage = 0.069 * (lpz - 11.851) +  0.086;
  if (lpz > 12.258) dzage = 0.461 * (lpz - 12.258) +  0.114;
  if (lpz > 12.382) dzage = 0.024 * (lpz - 12.382) +  0.171;
  if (lpz > 13.055) dzage = 0.013 * (lpz - 13.055) +  0.188;
  if (lpz > 14.081) dzage = 0.013 * (lpz - 14.081) +  0.201;
  if (lpz > 15.107) dzage = 0.214;
  zage = zage*Math.pow(10.0,dzage);

There is clearly a lot more to this than I realized at first: I'm just a humble coder :wink:
 
  • #87
Jorrie said:
I presume you meant "at ## z = 0 ##" ?
Well I meant at ## s = 0 ## because I was confusing the calculation of ## T_{now} = \int_{s=S}^\infty \frac{1}{sH(s)}ds ## with ## D_{hor} = \frac1S \int_{s=0}^S \frac{1}{H(s)}ds ## and thinking we had to compute ## \int_{s=0}^S \frac{1}{sH(s)}ds ## somewhere! So there is no discontinuity: that's good, as my integrator doesn't handle discontinuities as well as QUADPACK.

Jorrie said:
Maybe we should start talking about Lightcone8 (or X) due to the major rework and so avoid confusion...?
Yes, a bit of housekeeping to do here.
 
  • #88
It may be interesting to compare Dnow vs z curves from different calculators for larger z values. The result is shown below:

[attached plot]

This suggests that for large z values, the results from all these calculators are almost indistinguishable and the glitch discussed in this thread is very small. I am working on the comparison for small z values.

Note that DJorrie,2021 is the modified version discussed in Posts #15 and #17.
 
  • #89
In the following discussion, I assume that the ICRAR calculator works properly and use it as a reference for comparison.

[attached plot]


[attached plot]


These figures suggest that there exist real discrepancies among the three calculators. When we modify and/or rewrite Jorrie’s calculator, maybe we can use the ICRAR calculator as a tool for testing.
 
  • #90
JimJCW said:
When we modify and/or rewrite Jorrie’s calculator, maybe we can use ICRAR calculator as a tool for testing.
It's already rewritten, now working here: https://light-cone-calc.github.io. There's not much point testing it against the ICRAR calculator though as that uses a more sophisticated model.
 
  • #91
pbuk said:
It's already rewritten, now working here: https://light-cone-calc.github.io. There's not much point testing it against the ICRAR calculator though as that uses a more sophisticated model.

Great! I can make more comparisons.

Please explain what you mean by “. . . the ICRAR calculator . . . uses a more sophisticated model”. Doesn’t it calculate the same Dnow?
 
  • #92
JimJCW said:
Please explain what you mean by “. . . the ICRAR calculator . . . uses a more sophisticated model”. Doesn’t it calculate the same Dnow?
Not exactly, there is a much more sophisticated treatment of dark energy. You can see the code (in R) if you go to https://cosmocalc.icrar.org/ and click the "R code" tab.
 
  • #93
pbuk said:
Not exactly, there is a much more sophisticated treatment of dark energy. You can see the code (in R) if you go to https://cosmocalc.icrar.org/ and click the "R code" tab.

How about comparing the cosmic model with your new version (setting ΩR = 0)? A similar discrepancy exists.
 
  • #94
JimJCW said:
How about comparing the cosmic model with your new version (setting ΩR = 0)? Similar discrepancy exists.
Rather than reverse-engineering the output, I'd look at the implementations themselves. For instance, the constants used in Jorrie's original model include these:
JavaScript:
// https://github.com/light-cone-calc/light-cone-calc/blob/main/src/model.ts
const physicalConstants = {
  rhoConst: 1.7885e9, // 3 / (8 pi G)
  secInGy: 3.1536e16, // s / Gyr
  tempNow: 2.725, // CMB temperature now
  convertToGyr: 1 / 978, // Convert km/s/Mpc -> Gyr^-1
};

whereas cosmic uses these

C++:
// https://github.com/joshkempner/Cosmic.NET/blob/main/Cosmic.NET/cosmo/Cosmo.cs
        private const double c = 2.99792458e5; // speed of light
        private const double G = 6.67259e-8;   // gravitational constant
        private const double KmPerMpc = 3.08567758e19;
        private const double TropicalYear = 3.1556926e7; // in seconds

Also, as noted in the code, I'm not sure about the calculation of ## \Omega_m ##
JavaScript:
//  s_eq: 1 + 3370, // Stretch when OmegaM=OmegaR
...

  //@TODO check this - should it be s_eq + 1 as the original, or as below
  // from Ibix?
  // const OmegaM = ((Omega - OmegaL) * s_eq) / (s_eq + 1); // Energy density of matter
  // const OmegaR = OmegaM / s_eq; // Energy density of radiation
  const OmegaM = ((Omega - OmegaL) * (s_eq + 1)) / (s_eq + 2); // Energy density of matter

A better place to discuss this is probably https://github.com/light-cone-calc/light-cone-calc/issues - in fact I've already noted the last point here https://github.com/light-cone-calc/light-cone-calc/issues/6.
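For what it's worth, the practical impact of the two conventions in that @TODO can be checked directly (a stand-alone check; Omega = 1, OmegaL = 0.691 and s_eq = 3371, i.e. "1 + 3370", are taken from the quoted code as assumptions):

```javascript
// Compare the two OmegaM conventions queried in the @TODO above.
const Omega = 1, OmegaL = 0.691, s_eq = 3371; // s_eq = 1 + z_eq, z_eq = 3370
const OmegaM_a = ((Omega - OmegaL) * s_eq) / (s_eq + 1);       // Ibix's form
const OmegaM_b = ((Omega - OmegaL) * (s_eq + 1)) / (s_eq + 2); // the other form

// Both are ~0.3089; the difference is of order 1e-8, far below the
// observational uncertainty in the density parameters.
console.log(OmegaM_a, OmegaM_b, Math.abs(OmegaM_a - OmegaM_b));
```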
 
  • #95
Pbuk, Jim and Ibix, thanks for your great work!
We have a local 4-day weekend starting tomorrow, so I will take a good look at your Lightcone8 version and also at the issue that you raised on GitHub and discuss it there.
 
  • #96
Yes, there is an error in my old legacy JS code, line 356.
Ibix is correct, i.e.
// const OmegaM = ((Omega - OmegaL) * s_eq) / (s_eq + 1); // Energy density of matter

Tiny, tiny influence, considering the uncertainty on the z_eq value.

@pbuk, which one did you use in Lightcone8?
 
  • #97
Jorrie said:
Ibix is correct, i.e.
// const OmegaM = ((Omega - OmegaL) * s_eq) / (s_eq + 1); // Energy density of matter

Tiny, tiny influence, considering the uncertainty on the z_eq value.
Yes of course.

Jorrie said:
@pbuk, which one did you use in Lightcone8?
I used yours, but it's a quick fix.
 
  • #98
Jorrie said:
Tiny, tiny influence, considering the uncertainty on the z_eq value.
Tiny but important: this was the remaining part of the problem causing the offset at z = 0.

Code:
z: 0
Dnow: 3.552713678800501e-15
Dthen: 3.552713678800501e-15
Vnow: 2.460744627831758e-16
Vthen: 2.4607446278317573e-16
These values should all be 0 of course. Now fixed.
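As a side note, that leftover Dnow value is recognisable floating-point dust: it is exactly 2⁻⁴⁸, i.e. 16 ulps of 1.0, the kind of residue left when subtracting nearly equal doubles. A quick check in Node:

```javascript
// The residual Dnow value quoted above is exactly 2^-48,
// which is 16 ulps of 1.0 (Number.EPSILON = 2^-52).
console.log(2 ** -48);                         // 3.552713678800501e-15
console.log(2 ** -48 === 16 * Number.EPSILON); // true
```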
 
  • #99
pbuk said:
Code:
z: 0
Dnow: 3.552713678800501e-15
Dthen: 3.552713678800501e-15
Vnow: 2.460744627831758e-16
Vthen: 2.4607446278317573e-16
These values should all be 0 of course. Now fixed.
Great stuff! I will link https://light-cone-calc.github.io/ in my signature and you can do the same, if you so wish.
Maybe do a last edit to take some credit for the improvement?
 
  • #100
Hi @Jorrie, @pbuk,

When trying LightCone8, I noticed that if zlower = 0, a duplicate line is printed at z = 0. For example:

[attached screenshot of the duplicated z = 0 output row]
 