A glitch in Jorrie’s Cosmo-Calculator?

JimJCW
TL;DR Summary
Recently I noticed that when calculating Dnow and Dthen using Jorrie’s calculator, the results are slightly inconsistent with those obtained from Gnedin’s or Wright’s calculator. Could there be a glitch in Jorrie’s calculator?
Jorrie’s LightCone7-2017-02-08 Cosmo-Calculator can be very useful and handy when making calculations based on the ΛCDM model. Recently I noticed that when calculating Dnow and Dthen using Jorrie’s calculator, the results are slightly, but persistently, inconsistent with those obtained from Gnedin’s or Wright’s calculator. An example is shown below:

[Attached image: comparison of Dnow and Dthen from Jorrie’s, Gnedin’s, and Wright’s calculators]

The inconsistency becomes more obvious for small z values. For example,

[Attached image: comparison at small z values]

Based on the observation that the results from Gnedin’s and Wright’s calculators are consistent with each other, and that Jorrie’s result shown in the above figure is peculiar for small z values, one may wonder whether there is a glitch in Jorrie’s calculator. You can help by doing the following:

(1) Verify that there is such a peculiar result in Jorrie’s program and​
(2) Contact @Jorrie about it if you know how.​
 
This is why relying on other people's calculations if you can't do them yourself is a bad idea.
 
Vanadium 50 said:
This is why relying on other people's calculations if you can't do them yourself is a bad idea.

How would you fix the glitch, assuming there is one in Jorrie’s calculator?
 
Hmm. @Jorrie would probably like to look at this problem.
 
JimJCW said:
How would you fix the glitch
First I'd have to find it.
 
Knowing how these calculations are done, chances are it's an issue with the numerical integration. It would make sense if Jorrie used a cruder approximation for that integral, causing this result. The behavior at low z suggests, to me, that using a fixed delta-z for the integral might be the culprit.
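For illustration, here is a minimal Python sketch (toy flat-ΛCDM parameters, not Jorrie's actual code) of how a fixed step size produces exactly this signature: the absolute error is of order one step, which is negligible at large z but swamps the small distances near z = 0.
Python:
import math

H0, Om, OL = 0.0693, 0.309, 0.691          # ~67.7 km/s/Mpc in 1/Gyr; illustrative flat LCDM

def H(s):                                   # Hubble rate as a function of stretch s = 1 + z
    return H0 * math.sqrt(Om * s**3 + OL)

def Dnow(z, ds):                            # fixed-step midpoint sum for D = int_1^{1+z} ds/H
    s, D = 1.0, 0.0
    while s < 1 + z:                        # stops at the first grid point past 1 + z,
        D += ds / H(s + ds / 2)             # so the result overshoots by up to one step
        s += ds
    return D

for z in (0.00123, 0.0123, 0.123, 1.23):
    coarse, fine = Dnow(z, 1e-3), Dnow(z, 1e-5)   # fine sum serves as the reference
    print(f"z = {z}: relative error ~ {abs(coarse - fine) / fine:.2%}")
The relative error is huge at the smallest z and fades into insignificance at large z, which is the shape of the discrepancy reported above.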
 
By looking at the sources of the webpages
(and, in some cases, the list of .js files called as scripts),
one can find the underlying calculations of these three resources.

As @Vanadium 50 suggests, it probably would be good to be able to obtain the results from one's own calculation. I think it makes it easier to understand what others have done... and possibly to identify different approaches that may explain the discrepancies you note.
 
jim mcnamara said:
Hmm. @Jorrie would probably like to look at this problem.
Hmm, yes will do. Perhaps good to note that this calculator was more of an educational tool than an attempt at a quasi-professional tool.
 
Jorrie said:
Hmm, yes will do. Perhaps good to note that this calculator was more of an educational tool than an attempt at a quasi-professional tool.
It seems that in the process of evolving the calculator so that the 'future', i.e. "negative redshifts" could be visible, the distance parameters got out of step with the redshift parameter by a small amount. I will have to investigate deeper to fix.
[Attached image: plot showing the distance/redshift offset]
 
  • #10
Jorrie said:
It seems that in the process of evolving the calculator so that the 'future', i.e. "negative redshifts" could be visible, the distance parameters got out of step with the redshift parameter by a small amount. I will have to investigate deeper to fix.
View attachment 301053

Using the 2021 version of the calculator, I get a different result:

[Attached image: result from the 2021 version of the calculator]


Note that Dthen is in Gly.
 
  • #11
JimJCW said:
Using the 2021 version of the calculator, I get a different result:
Note that Dthen is in Gly.
Apart from the anomalous shift to the right, taking into account the Gly/Gpc distance scaling, it is only the single data point at the origin that is different between the two graphs. The latter is an easily fixable error. The shift to the right is more troublesome, due to the way that the algorithm was implemented, requiring somewhat of a reprogramming. Will hopefully get around to it soon.

I think these anomalies were not noticed before, because for larger redshifts they disappear in the noise. Cool that you have spotted them. :-))
 
  • #12
JimJCW said:
Summary:: Recently I noticed that when calculating Dnow and Dthen using Jorrie’s calculator, the results are slightly inconsistent with those obtained from Gnedin’s or Wright’s calculator. Could there be a glitch in Jorrie’s calculator?

Jorrie’s LightCone7-2017-02-08 Cosmo-Calculator
...actually links to the 2021 version:
http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html .
Here is the 2017 version
http://jorrie.epizy.com/LightCone7-2017-02-08/LightCone_Ho7.html .
 
  • #13
(I'm not actively working in this...
I was just curious about the question
and was poking around...)

From a diff (using https://www.diffchecker.com/diff )
of the two calculation files, there are two differences.

One in Calculate()
JavaScript:
if (s > 0.9999999 && s < 1.0000001)
{
    z = 0;        // the 'now' line, force z to zero
}
else
{
    z = s - 1;    // redshift
}
s = z + 1;        // ensure they are in step
H = s * qa * Ho;
The "s=z+1" (line 236) is new in the 2021 version.

The other in ScaleResults().
Line 357 has a revised value
var ConversionHo = 978
which was 973.1 in the 2017 version.
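(For context, 978 is the standard conversion between km/s/Mpc and reciprocal gigayears; a quick arithmetic check of my own, not taken from either version of the file:)
Python:
# 1/H in Gyr = ConversionHo / (H in km/s/Mpc), where ConversionHo is 1 Mpc/(km/s) in Gyr:
Mpc_in_km = 3.0857e19      # kilometres per megaparsec
Gyr_in_s = 3.1557e16       # seconds per gigayear
print(Mpc_in_km / Gyr_in_s)   # ~977.8, which rounds to the 978 in the 2021 version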

(My remarks in #7 still apply.)
 
  • #14
robphy said:

You are right; Jorrie’s LightCone7-2017-02-08 Cosmo-Calculator ...actually links to the 2021 version: http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html. That was my mistake; I meant to say, “Jorrie’s LightCone7-2021-03-12 Cosmo-Calculator”.

Jorrie changed the 2017 version to the 2021 version to fix the discrepancy in the Hubble parameter calculation. See the thread, Discrepancy in Jorrie’s LightCone7-2017-02-08 Cosmo-Calculator.
 
  • #15
Jorrie said:
Apart from the anomalous shift to the right, taking into account the Gly/Gpc distance scaling, it is only the single data point at the origin that is different between the two graphs. The latter is an easily fixable error. The shift to the right is more troublesome, due to the way that the algorithm was implemented, requiring somewhat of a reprogramming. Will hopefully get around to it soon.

I did some experiments with the calculator and noticed the following:

If we change the number of fine-grain steps (N) and the fine-grain step size (sf) in the Calculation.js file from

N = 10000000​
sf = 0.00001​

to

N = 1000000000​
sf = 0.000001​

We can get the following result:

[Attached image: result with the tightened integration steps]

While this reduces the effect of the glitch, the glitch is not completely eliminated (see the following result for smaller z):

[Attached image: result for smaller z values]


Does this suggest anything?
 
  • #16
JimJCW said:
... While this reduces the effect of the glitch, the glitch is not completely eliminated (see the following result for smaller z):
Does this suggest anything?
Yup, tightening the integration loops always helps, but it slows things down quite a bit. It was more of a problem when the calculator was first prototyped (around 2009), so I guess one can now use your values, given the faster computers of today.
However, the factor-100 tightening of the loops just reduces the error at very low z by a factor of 100, so it does not fix the real problem in the implementation. I'm still looking at that.
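That linear scaling is the signature of a first-order residual: shrinking the step shrinks the error in proportion, so no finite tightening removes it. A quick toy check in Python (illustrative parameters, not Lightcone7's code):
Python:
import math

H0, Om, OL = 0.0693, 0.309, 0.691

def H(s):
    return H0 * math.sqrt(Om * s**3 + OL)

def Dnow(z, ds):                  # fixed-step sum that stops at the first grid point past 1+z
    s, D = 1.0, 0.0
    while s < 1 + z:
        D += ds / H(s + ds / 2)
        s += ds
    return D

for ds in (1e-3, 1e-4, 1e-5, 1e-6):
    # successive values close in on the true answer in proportion to ds
    print(f"ds = {ds:.0e}: Dnow(0.00123) = {Dnow(0.00123, ds):.8f}")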

The problem with 'funnel' shape around the origin is easy to fix, at least temporarily, by taking out the statements in lines 241-245,
if (Math.abs(a-1) < 0.0001)
{
Dnow = 0;
Dthen = 0;
}
which forces Dnow and Dthen to zero for values of a very close to 1 (i.e., 'now').
The issue should disappear when the 'asymmetry problem' is fixed.
 
  • #17
Jorrie said:
The problem with 'funnel' shape around the origin is easy to fix, at least temporarily, by taking out the statements in lines 241-245,
if (Math.abs(a-1) < 0.0001)
{
Dnow = 0;
Dthen = 0;
}
which forces Dnow and Dthen to zero for values of a very close to 1 (i.e., 'now').

You are right; if I do the following:

if (Math.abs(a-1) < 0.000001)​
// if (Math.abs(a-1) < 0.0001)​
{​
Dnow = 0;​
Dthen = 0;​
}​

I get this:

[Attached image: result after the change]
 
  • #18
Cool! :cool:
A little bit of background on the (convoluted path of the) development of Lightcone7. It really started all the way back in 2007, after a thread: https://www.physicsforums.com/threads/cosmo-calculator-recession-speed-tutorial.163996/ by the late and much appreciated Marcus.

Later in the thread, member hellfire confirmed a link to his own super-efficient, single-pass calculator (post #18). With his permission, I started to use it in efforts of my own. Unfortunately @hellfire has not been seen around since 2010 and AFAIK his original calculator is no longer accessible.

That "humble calculator" (@hellfire's words) has since grown to a multi-pass, much more generalized tool, but that efficient single pass code/module, although not very easy to read, is still used multiple times in Lightcone7. Marcus played a huge role in guiding the development, as a sort-off community project. Other members also played significant roles and to some extent, it became a 'camel', a "horse designed by a committee"! In a way, this community project is still continuing...

Since personal (and other) websites come and go, maybe we should find a more permanent place for it to reside?
 
  • #19
While I got nostalgic, I looked around on a backup and I found @hellfire's 2009 Calculate Function that I later incorporated into Lightcone7. For what it's worth, here it is:

JavaScript:
function Calculate(OmegaM, OmegaR, OmegaL, HubbleP, zo, Omegat)
{
    // >>>>>> Latest update 12-05-2009
    // >>>>>> Original: http://www.geocities.com/alschairn/cc_e.htm

    // >>>>>> input

    var Ol = OmegaL * 1;          // Lambda density par
    var Or = OmegaR * 1;          // Radiation density par
    var Om = OmegaM * 1;          // matter density par
    var Ok = 1 - Om - Or - Ol;    // curvature density par
    var H = HubbleP * 1;          // Hubble const now
    var zt = zo * 1;              // redshift of object
    var at = (1 / (1 + zt));      // requested redshift value

    // >>>>>> output
    var T = 0;    // time
    var Dc = 0;   // comoving distance
    var Dp = 0;   // proper distance
    var Da = 0;   // angular diameter distance
    var Dl = 0;   // luminosity distance
    var Hp = 0;   // time variable Hubble constant
    var Omt = 0;
    var Olt = 0;
    var Ort = 0;

    // >>>>>> auxiliary
    var N = 100000;
    var a = 0;
    var a_ic = 0;
    var qa = 0;
    var pa = 1 / N;   // Incremental a
    var Eset = 0;
    var Dtc = 0;
    var Dtp = 0;

    // >>>>>> conversion 1/H to Myr
    var Tyr = 978000;

    // >>>>>> Loop from a = 0 to a = 1, stop to get at values
    while (a_ic < 1)
    {
        a = a_ic + (pa / 2);   // expansion factor as incremented
        qa = Math.sqrt((Om / a) + Ok + (Or / (a * a)) + (Ol * a * a));   // time variable density function (Or input 10000 times hi)
        T = T + (pa / qa);                 // time (age)
        Dc = Dc + ((1 / (a * qa)) * pa);   // proper distance now
        Dp = Dp + ((1 / qa) * pa);         // proper distance then
        a_ic = a_ic + pa;                  // new a
        if ((a > at) && (Eset == 0))
        {
            window.document.cc_e.Epoch.value = Math.round((T * (Tyr / H)) * 10000) / 10000;
            Dtc = Dc;
            Dtp = Dp;
            Dtl = Dl;
            Dta = Da;
            Eset = 1;
        };
    }

    // >>>>>> auxiliary
    Okc = Ok;

    // >>>>>> Angular diameter distance -- D. Hogg, astro-ph/9905116
    if ((Math.abs(Okc)) < 0.05)
    {
        Da = at * (Dc - Dtc);
    }
    else if (Okc > 0)
    {
        Da = at * (1 / Math.sqrt(Okc)) * 0.5 * (Math.exp(Math.sqrt(Okc) * (Dc - Dtc)) - Math.exp(- Math.sqrt(Okc) * (Dc - Dtc)));
    }
    else
    {
        Okc = - Okc;
        Da = at * (1 / Math.sqrt(Okc)) * Math.sin(Math.sqrt(Okc) * (Dc - Dtc));
    }

    if (Da < 0)
    {
        Da = - Da;
    }

    // >>>>>> Luminosity distance
    Dl = Da / (at * at);

    // >>>>>> Conversion
    T = T * (Tyr / H);
    Dc = Dc * (Tyr / H);
    Dtc = Dtc * (Tyr / H);
    Dp = Dp * (Tyr / H);
    Dtp = Dtp * (Tyr / H);
    Da = Da * (Tyr / H);
    Dl = Dl * (Tyr / H);

    // >>>>>> Output "now"
    window.document.cc_e.Age.value = Math.round(T * 10000) / 10000;
    window.document.cc_e.Size.value = Math.round(Dc * 10000) / 10000;
    window.document.cc_e.Dcomoving.value = Math.round((Dc - Dtc) * 10000) / 10000;
    window.document.cc_e.Dproper.value = Math.round((Dp - Dtp) * 10000) / 10000;
    window.document.cc_e.Dluminosity.value = Math.round(Dl * 10) / 10;
    window.document.cc_e.Dangular.value = Math.round(Da * 10000) / 10000;
    //window.document.cc_e.Vrec.value = Math.round((1-at*at) / (1+at*at)*10000)/10000; //temp Vrec calc using normal Doppler
    window.document.cc_e.Vrec.value = Math.round(((((Dc - Dtc) / 3.26) * H) / 300000) * 10000) / 10000;

    // >>>>>> Hubble parameter then, at a = at
    Hp = (1 / at) * H * Math.sqrt((Om * (1 / at)) + (Or * (1 / (at * at))) + (Ok) + (Ol * at * at));

    // >>>>>> Omegas
    Omt = (Om * (H * H) / (Hp * Hp * at * at * at));
    Olt = (Ol * (H * H) / (Hp * Hp));
    Ort = (Or * (H * H) / (Hp * Hp * at * at * at * at));
    // >>>Ot = (Omt + Olt + Ort);
    Ot = 0.0001878754 * Hp * Hp;
    //Ot = Ot + ""
    //Ot = Ot.substring(0,4)
    //parseFloat(Ot)

    // >>>>>> auxiliary
    Okc = 1 - Omt - Olt - Ort;

    // >>>>>> Output "then"
    window.document.cc_e.Hthen.value = Math.round(Hp * 10000) / 10000;
    window.document.cc_e.Dthen.value = Math.round((Dc - Dtc) * at * 10000) / 10000;
    window.document.cc_e.Vrecthen.value = Math.round(((((Dc - Dtc) * at / 3.26) * Hp) / 300000) * 10000) / 10000;
    window.document.cc_e.Sizethen.value = Math.round(Dtc * 10000) / 10000;
    window.document.cc_e.OmegaMt.value = Math.round(Omt * 10000000) / 10000000;
    window.document.cc_e.OmegaLt.value = Math.round(Olt * 10000000) / 10000000;
    window.document.cc_e.OmegaRt.value = Math.round(Ort * 10000000) / 10000000;
    window.document.cc_e.Omegat.value = Math.round(Ot * 10000000) / 10000000;
    // window.document.cc_e.Omegat.value = Ot;
}
 
  • #20
Jorrie said:
Since personal (and other) websites come and go, maybe we should find a more permanent place for it to reside?
You could put the code on github? I expect github won't last forever (nothing does), but it has a huge amount of inertia. Your code wouldn't run there, but it would always be available for someone to host should your version disappear.
 
  • #21
Jorrie said:
While I got nostalgic, I looked around on a backup and I found @hellfire's 2009 Calculate Function that I later incorporated into Lightcone7. For what it's worth, here it is:
Great.
Here is the archived webpage
http://web.archive.org/web/2008*/http://www.geocities.com/alschairn/cc_e.htm

While the snapshots have the form interface, the .js script (presumably what you posted above) wasn't archived.
Here's one snapshot
http://web.archive.org/web/20080813234529/http://www.geocities.com/alschairn/cc_e.htm

For posterity, someone could piece it together.

If the code were text on a webpage (as opposed to just being a script), archive.org might have archived it.

This thread was captured a few days ago.
If archive.org captures it again, the capture will include the code you posted as text.
https://web.archive.org/web/2022050...a-glitch-in-jorries-cosmo-calculator.1014779/
 
  • #22
Thanks robphy, I will look into the github option.
 
  • #23
Jorrie said:
Thanks robphy, I will look into the github option.
Just to clarify:
The github idea was due to @Ibix .
I just found the webpage [without its working script] on archive.org.
 
  • #24
Thanks to you both, and also to JimJCW for the heads-up.
If any of you are interested in participating in modifying, testing, and ironing out bugs in Lightcone7, that would be great.

I'm still pondering the solution to the proper-distance/redshift offset problem encountered, which is suppressed by tightening of the integration loops, but not completely removed.

Perhaps a community project can take it to the next level of "amateur professionalism".
 
  • #25
Jorrie said:
If any of you are interested in participating in modifying, testing, and ironing out bugs in Lightcone7, that would be great.
Sounds interesting - happy to help.
 
  • #26
Thanks! Will let you know when I pick up the ropes again - still puzzling about a better way to economically implement the standard model's equations, in order to eliminate irritating offsets, like the one this thread is about.
 
  • #27
Jorrie said:
- still puzzling about a better way to economically implement the standard model's equations, in order to eliminate irritating offsets, like the one this thread is about.

I have been playing with Line 248 of the Calculation.js file:

Dnow = Math.abs(Dtc-Dc);​

Here is my result:

[Attached images: Dnow comparison plots from the experiment]


I don’t understand what is going on, but hope you can see something interesting here.
 
  • #28
One should not need to put correcting offsets into that subtraction. The problem arises because both Dc and Dtc are the result of massive integration loops, and at z=0 they do not agree (they should be equal and they are not). I think it is differences in round-off errors in the two loops that cause it.

Essentially, Lightcone's z can run from near "positive infinity" to near "negative infinity". I think it will be better to start at zero and work towards the two extremes, but this may cause other issues.

I'm slowly working towards such a solution, and I may ask you guys to help with testing.
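(A minimal sketch of this "integrate outward from z = 0" idea, with illustrative parameters rather than Lightcone7's actual code: if the distance is accumulated from s = 1 outward in both directions, Dnow(0) = 0 by construction instead of being the difference of two large, independently accumulated sums.)
Python:
import math
import scipy.integrate

H0, Om, OL = 0.0693, 0.309, 0.691   # illustrative flat-LCDM values in 1/Gyr

def TH(s):                          # Hubble time as a function of stretch s = 1 + z
    return 1.0 / (H0 * math.sqrt(OL + Om * s**3))

def Dnow(z):
    # integrate outward from s = 1 (i.e. z = 0); past (z > 0) and future (z < 0)
    # are handled the same way, and Dnow(0) is exactly zero
    return abs(scipy.integrate.quad(TH, 1.0, 1.0 + z)[0])

print(Dnow(0.0), Dnow(0.1), Dnow(-0.1))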
 
  • #29
Jorrie said:
One should not need to put correcting offsets into that subtraction. The problem arises because both Dc and Dtc are the result of massive integration loops, and at z=0 they do not agree (they should be equal and they are not). I think it is differences in round-off errors in the two loops that cause it.

Essentially, Lightcone's z can run from near "positive infinity" to near "negative infinity". I think it will be better to start at zero and work towards the two extremes, but this may cause other issues.

I'm slowly working towards such a solution, and I may ask you guys to help with testing.

I think that makes sense. I can help with testing.
 
  • #30
Here is a patched version of Lightcone7, dated 2022-05-14, which numerically corrects for the offset in the low-z regime.
When testing in that regime, remember to set zlower and zupper in the input boxes to a range compatible with the z-regime of interest. When simply run over the default range of z, it produces too coarse an output for proper testing.

There are still anomalies around z=0, but it is good to remember that the homogeneity condition required by LCDM does not hold for z < 0.025 or so, roughly the value for the Coma cluster.

I leave the old link in my signature for now, in case you want to compare. Still pondering an overhaul of the legacy-ridden calculation.js module.
 
  • #31
Jorrie said:
Here is a patched version of Lightcone7, dated 2022-05-14, which numerically corrects for the offset in the low-z regime.
When testing in that regime, remember to set zlower and zupper in the input boxes to a range compatible with the z-regime of interest. When simply run over the default range of z, it produces too coarse an output for proper testing.

There are still anomalies around z=0, but it is good to remember that the homogeneity condition required by LCDM does not hold for z < 0.025 or so, roughly the value for the Coma cluster.

I leave the old link in my signature for now, in case you want to compare. Still pondering an overhaul of the legacy-ridden calculation.js module.

I will look at it tonight. I noticed the following:

When specifying the following:

[Attached image: input settings]
I got this:

[Attached image: resulting output]


The input and output ranges of z seem inconsistent.
 
  • #32
Hi @Jorrie. I've spent a peaceful afternoon reading your Insights on how this works and looking through the Javascript before implementing it in python using one of its canned integration routines to do the hard work. I've uncovered a couple of possible issues that you might want to look at.

First I looked at your Lightcone7 Tutorial Part III to try to understand the maths you were implementing. I noticed a minor bug in that article - it defines ##\Omega_r=\Omega_r/S_{eq}##, which I think ought to be ##\Omega_m/S_{eq}##, or possibly even ##\Omega_mS/S_{eq}##.

But between that page and the Javascript I was able to write a python program to replicate the outputs. There's no UI - "inputs" are just variables and you need to specify all the ##z## values manually and the result comes out as a file. The code is below and should just work as long as you have the scipy.integrate package installed, which is what does the numerical work. It's partially commented - see the Insight linked above and Lightcone7 itself for details of what's actually being calculated.
Python:
import math, scipy.integrate

##########################################################################
# Various constants
rhoConst = 1.7885e+9  # 3 / (8 pi G)
secInGy = 3.1536e+16  # s / Gyr
tempNow = 2.725       # CMB temperature now
Hconv = 1 / 978       # Convert km/s/Mpc -> Gyr^-1
infty = float("inf")  # Infinity - used as a limit in integrator

##########################################################################
# Inputs (Planck 2015, copied from Jorrie's calculator)
H0 = 67.74                # H0 control
OmegaL = 0.691            # OmegaL control
Omega = 1                 # Omega control
z_eq = 3370               # z_eq control
zvals = [ 1089.999,       # The z values copied from Lightcone7's table
          339.0316542,
          104.9771906,
          32.02928913,
          9.293570928,
          2.207515378,
          0,
          -0.6893293728,
          -0.8690917659,
          -0.9451725312,
          -0.9773721629,
          -0.991 ]

##########################################################################
# Constants derived from inputs
H0 *= Hconv                                    # H0 in Gyr^-1
rhocritNow = rhoConst * (H0 / secInGy)**2      # Critical density now
s_eq = 1 + z_eq                                # Stretch when OmegaM=OmegaR
OmegaM = (Omega - OmegaL) * s_eq / (s_eq + 1)  # Energy density of matter
OmegaR = OmegaM / s_eq                         # Energy density of radiation
OmegaK = 1 - OmegaM - OmegaR - OmegaL          # Curvature energy density

##########################################################################
# Functions
def H(s):
    """Hubble constant as a function of stretch, s = 1/a, where a is the
    usual FLRW scale factor"""
    return H0 * math.sqrt(OmegaL
                            + OmegaK * s**2
                            + OmegaM * s**3
                            + OmegaR * s**4)
def TH(s):
    return 1 / H(s)
def THs(s):
    return TH(s) / s
def formatted(vals):
    """Convenience function to apply standard formatting to a bunch of
    numbers"""
    return ",".join(("%7.2e" % v) for v in vals)

##########################################################################
# For each specified z, calculate all the things Jorrie's calculator does
# and output them as CSV for easy pasting into a spreadsheet
with open("planck.csv","w") as op:
    op.write("z,Scale (a),S,Tnow Gyr,R Gly,Dnow Gly,Dthen Gly," \
                +"DHor Gly,Dpar Gly,Vgen/c,Vnow/c,Vthen/c,H(z)," \
                +"Temp (K),rho kg/m3,OmegaM,OmegaL,OmegaR,OmegaT\n")
    for z in zvals:
        s = 1 + z
        #s = 1.001 + z  # Uncomment to replicate a systematic bug in Lightcone7
        a = 1 / s
        Tnow = scipy.integrate.quad(THs, s, infty)[0]
        R = 1 / H(s)
        Dnow = abs(scipy.integrate.quad(TH, 1, s)[0])
        Dthen = Dnow / s
        Dhor = scipy.integrate.quad(TH, 0, s)[0] / s
        Dpar = scipy.integrate.quad(TH, s, infty)[0] / s
        Vgen = H(s) / (H0 * s)
        Vnow = Dnow * H0
        Vthen = Dthen * H(s)
        Temp = tempNow * s
        rhocrit = rhoConst * (H(s) / secInGy)**2
        OmegaMT = (Omega-OmegaL) * s**3 * (H0 / H(s))**2
        OmegaLT = (H0 / H(s))**2 * OmegaL
        OmegaRT = OmegaMT * s / s_eq
        OmegaTT = OmegaMT + OmegaLT + OmegaRT
        op.write(formatted([z, a, s, Tnow, R,
                            Dnow, Dthen, Dhor, Dpar,
                            Vgen, Vnow, Vthen,
                            H(s) / Hconv,
                            Temp, rhocrit,
                            OmegaMT, OmegaLT, OmegaRT, OmegaTT])+"\n")
If you run this it should produce a table of data in a CSV file called planck.csv that should open in a spreadsheet package. It produces all of the columns Lightcone7 does, in the same order and to Lightcone7's default precision, to make it easy to compare with the Javascript results. I uncovered two issues by doing this, working with your 2022-05-14 version.

I think the first issue is what's causing the bug that @JimJCW is seeing. Your ##S## actually appears to be ##1.001+z## across the board. Since everything calculates from ##S## the only practical effect of this is that the ##z## being used in the calculation is off by 0.001 compared to the results, which looks to be about right for where the graphs posted on page 1 are hitting the horizontal axis.

There's a line commented out in the loop at the end of my code that switches to defining ##S=1.001+z## and almost all of the (fairly slight) differences between python and Javascript results go away if you uncomment that. The remainder are, I think, rounding disagreements.

The second issue is that I can't get my Dpar and Dhor values to match yours. According to your Insight, $$\begin{eqnarray*}
D_{par}&=&\frac 1S\int_{s=S}^\infty T_Hds\\
D_{hor}&=&\frac 1S\int_{s=0}^S T_Hds
\end{eqnarray*}$$and that's fairly simple to implement, so I don't think I've made a mistake there. Given those definitions, I'd expect ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}##, and that's true of my results but is not true in the Lightcone7. So I think there might be something wrong either in those calculations or the definition in the Insight. I haven't investigated further - not even going and looking up those formulae in a textbook.
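For what it's worth, here's a quick numerical check (using a simplified flat H(s) in the same style as the script above; parameters illustrative) that the identity does hold for any correct implementation of those two definitions, which is why its failure in Lightcone7 points at the implementation:
Python:
import math
import scipy.integrate

H0, Om, OL, Or = 0.0693, 0.3089, 0.691, 9.2e-5   # illustrative flat-LCDM values

def TH(s):
    return 1.0 / (H0 * math.sqrt(OL + Om * s**3 + Or * s**4))

infty = float("inf")
total = scipy.integrate.quad(TH, 0, infty)[0]    # the constant: integral of T_H over all s
for S in (0.5, 1.0, 10.0, 1000.0):
    Dpar = scipy.integrate.quad(TH, S, infty)[0] / S
    Dhor = scipy.integrate.quad(TH, 0, S)[0] / S
    print(S, (Dpar + Dhor) * S - total)          # ~0 up to quadrature tolerance, for every S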

I have had a quick look at the Javascript backing this, and I would say it could probably be simplified quite a lot. It looks like it's been through quite a few re-writes but could probably do with another one focussed on cleaning it up a bit. Happy to help with that if you like - I have some ideas, but I don't want to trample on your toes.
 
  • #33
Ibix said:
Hi @Jorrie. ...

I have had a quick look at the Javascript backing this, and I would say it could probably be simplified quite a lot. It looks like it's been through quite a few re-writes but could probably do with another one focussed on cleaning it up a bit. Happy to help with that if you like - I have some ideas, but I don't want to trample on your toes.
Hi Ibix, thanks a ton! Yes, part of that Js code dates back to 2010 and comes from @hellfire's original. Didn't want to fix what was working, but it has been added to so many times that it is quite a mess by now. I tried to get too much mileage out of that old engine...
I think both the distance and horizon anomalies originate there.

I would prefer to leave the overall Input/Output structure the same, but the calculation module to be completely redesigned and cleared up to produce the required outputs. I would be grateful if you want to assist - and it looks like you already have a working algorithm!

And yes, it should be ##\Omega_r=\Omega_m/S_{eq}##. Will fix that, if I can figure out how to edit that old tutorial...
 
  • #34
Ibix said:
Hi @Jorrie.
...
The second issue is that I can't get my Dpar and Dhor values to match yours. According to your Insight, $$\begin{eqnarray*}
D_{par}&=&\frac 1S\int_{s=S}^\infty T_Hds\\
D_{hor}&=&\frac 1S\int_{s=0}^S T_Hds
\end{eqnarray*}$$and that's fairly simple to implement, so I don't think I've made a mistake there. Given those definitions, I'd expect ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}##, and that's true of my results but is not true in the Lightcone7. So I think there might be something wrong either in those calculations or the definition in the Insight. I haven't investigated further - not even going and looking up those formulae in a textbook.
I have used Tamara Davis' "Fundamental Aspects of the Expansion of the Universe and Cosmic Horizons" equations in the Tutorial, but the implementation in Lightcone7 was a bit different and quite possibly inaccurate.

Note that ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}## only holds for comoving distances and not for the proper distances used in Lightcone7, where ##D_{par}## tends to infinity and ##D_{hor}## tends to a finite constant value. This is clear from the Davis Three-panel Cosmological Expansion graphs in my sig.
 
  • #35
Ibix said:
I think the first issue is what's causing the bug that @JimJCW is seeing. Your ##S## actually appears to be ##1.001+z## across the board. Since everything calculates from ##S## the only practical effect of this is that the ##z## being used in the calculation is off by 0.001 compared to the results, which looks to be about right for where the graphs posted on page 1 are hitting the horizontal axis.

Please let me know the line number and file name concerning the '1.001 + z' problem.
 
  • #36
Jorrie said:
Here is a patched version of Lightcone7, dated 2022-05-14, which numerically corrects for the offset in the low-z regime.

First, the output z range is still different from the input z range:

[Attached images: input settings and output table showing the differing z ranges]


I did the following experiment:

If I calculate Dnow using Gnedin's and your 2022 calculator and plot 'DGnedin - DJorrie' vs z, I expect to see points fluctuating around the DGnedin - DJorrie = 0 line. But I got the following result:

[Attached image: plot of DGnedin - DJorrie vs z]


I wonder what clues we can get out of this.
 
  • #37
@JimJCW - You asked: "Please let me know the line number and file name concerning the '1.001 + z' problem."

In my python version there's a line near the bottom with a comment saying something like "to replicate a bug in Lightcone7 uncomment this line". In the Javascript version, no idea. I noticed it in the output - if you turn on both the ##S## and ##z## columns and bump up the number of decimal places they display, you can verify it yourself by calculating ##S-z##, but I don't know why it happens.
 
  • #38
Jorrie said:
I have used Tamara Davis' "Fundamental Aspects of the Expansion of the Universe and Cosmic Horizons" equations in the Tutorial, but the implementation in Lightcone7 was a bit different and quite possibly inaccurate.

Note that ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}## only holds for comoving distances and not for the proper distances used in Lightcone7, where ##D_{par}## tends to infinity and ##D_{hor}## tends to a finite constant value. This is clear from the Davis Three-panel Cosmological Expansion graphs in my sig.
Right. I'll do a bit more reading.
 
  • #39
JimJCW said:
First, the output z range is still different from the input z range:

View attachment 301506
View attachment 301507
Hi Jim, yes there is that difference, but that's not the problem; the input range must just be a little larger than the output range. As long as the 'distance now' output is correct for the displayed z, that's acceptable for now.
[Attached image: output after the trial correction]

For a trial, I've just subtracted 0.001 (line 274 in the 2022-05-14 version) from the output z in order to cancel the origin's horizontal offset, which is caused by the way the distances are computed in the legacy algorithm. The question is: are those (new) values correct? Perhaps you can check it against Gnedin's.

I know that this is not the right way to handle it, so we should be investigating a purer algorithm, like @Ibix is looking at. My own time is slightly limited atm, but I will keep watching and contributing as it hopefully develops as a community project.

Thanks for contributing!
 

  • #40
Jorrie said:
Still pondering an overhaul of the legacy-ridden calculation.js module.
That might be an idea. Is the duplication of the line result.H_t = result.H_t*Ynow; in ScaleResults() an error?

JavaScript:
      case "Normalized":
          result.Tnow = result.Tnow/Tage;
          result.Y = result.Y/Ynow;
          result.Dnow  = result.Dnow/Ynow;
          result.Dthen = result.Dthen/Ynow;
          result.Dhor = result.Dhor/Ynow;
          result.Dpar = result.Dpar/Ynow;
          result.H_t = result.H_t*Ynow;
          result.H_t = result.H_t*Ynow;
          result.TemperatureT = result.TemperatureT / 2.725 ;  
        break;
 
  • #41
Ibix said:
You could put the code on github? I expect github won't last forever (nothing does), but it has a huge amount of inertia. Your code wouldn't run there, but it would always be available for someone to host should your version disappear.
GitHub provides a (free for open source) service called GitHub Pages which can host this so that it WILL run there (as long as it doesn't go over the free usage limits which should be adequate).

Now that GitHub is owned by Microsoft, and they have moved a lot of their code (both open source and proprietary) onto it, it is not going to disappear for a LONG time.
 
  • #42
pbuk said:
they [Microsoft] have moved a lot of their code (both open source and proprietary) onto it, it is not going to disappear for a LONG time.
As much as we might want it to. <sigh>
 
  • #43
pbuk said:
That might be an idea. Is the duplication of the line result.H_t = result.H_t*Ynow; in ScaleResults() an error?

JavaScript:
      case "Normalized":
          result.Tnow = result.Tnow/Tage;
          result.Y = result.Y/Ynow;
          result.Dnow  = result.Dnow/Ynow;
          result.Dthen = result.Dthen/Ynow;
          result.Dhor = result.Dhor/Ynow;
          result.Dpar = result.Dpar/Ynow;
          result.H_t = result.H_t*Ynow;
          result.H_t = result.H_t*Ynow;
          result.TemperatureT = result.TemperatureT / 2.725 ; 
        break;
Yes, good spot! Case "Normalized" is not often used, so that's maybe why the mistake escaped detection.
 
  • #44
Ibix said:
@JimJCW - You asked: "Please let me know the line number and file name concerning the '1.001 + z' problem."

In my python version there's a line near the bottom with a comment saying something like "to replicate a bug in Lightcone7 uncomment this line". In the Javascript version, no idea. I noticed it in the output - if you turn on both the ##S## and ##z## columns and bump up the number of decimal places they display, you can verify it yourself by calculating ##S-z##, but I don't know why it happens.

Would you please calculate the following Dnow values using your calculator so I can do some comparison?

z       Gnedin    Jorrie, 2022   Ibix
0.1     1.41004   1.4084
0.09    1.27214   1.270609
0.08    1.13354   1.132106
0.07    0.99426   0.992931
0.06    0.85427   0.85305
0.05    0.7136    0.712477
0.04    0.57225   0.571225
0.03    0.43023   0.429278
0.02    0.28748   0.286651
0.01    0.14408   0.14336
0       0         0.000621
-0.01   0.14474   0.14526
-0.02   0.29015   0.290567
-0.03   0.43623   0.436536
-0.04   0.58295   0.583161
-0.05   0.73032   0.730433
-0.06   0.87834   0.878358
-0.07   1.02699   1.026926
-0.08   1.17631   1.176124
-0.09   1.32623   1.325968
-0.1    1.47679   1.476416

I am puzzled by the plot shown in Post #36.
 
  • #45
I think the puzzles will go away if we get the distances algorithm sorted out.
 
  • #46
pbuk said:
GitHub provides a (free for open source) service called GitHub Pages which can host this so that it WILL run there (as long as it doesn't go over the free usage limits which should be adequate).
I will give GitHub Pages a try, because it seems a good place for a collaborative project.
 
  • #47
JimJCW said:
Would you please calculate the following Dnow values using your calculator so I can do some comparison?
Install python, man! If you're remotely interested in any scientific computing it's a useful tool. I believe Anaconda is the recommended Windows distribution, and I believe it comes with the scipy library I used in my code. Then you can play around with my code to your heart's content.

Here are the results. ##S## is set to the correct ##1+z##, not the compatibility ##1.001+z## mode.
z       Dnow Gly
0.1     1.41
0.09    1.27
0.08    1.13
0.07    0.994
0.06    0.854
0.05    0.713
0.04    0.572
0.03    0.43
0.02    0.287
0.01    0.144
0       0
-0.01   0.145
-0.02   0.29
-0.03   0.436
-0.04   0.583
-0.05   0.73
-0.06   0.878
-0.07   1.03
-0.08   1.18
-0.09   1.33
-0.1    1.48
 
  • #48
Jorrie said:
I think the puzzles will go away if we get the distances algorithm sorted out.
Yes, although this is not going to be as simple as porting @Ibix's code, which leverages a scipy module (which in turn leverages a FORTRAN module) to do integration with infinite limits and automatic step-size adjustment. This module is not available in the browser, so it is necessary to implement a similar algorithm "by hand".
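For what it's worth, the infinite limit itself is not hard to handle by hand: substituting a = 1/s maps it onto a finite interval with a bounded integrand (as long as OmegaR > 0), after which plain composite Simpson works. A sketch of that technique in Python (my own illustration with toy parameters; the few lines port directly to Javascript):
Python:
import math

H0, Om, OL, Or = 0.0693, 0.3089, 0.691, 9.2e-5   # illustrative flat-LCDM values

def integrand(a):
    # integral_S^inf ds / H(s) with s = 1/a becomes
    # integral_0^{1/S} da / (H0 * sqrt(OL*a^4 + Om*a + Or)), bounded at a = 0 since Or > 0
    return 1.0 / (H0 * math.sqrt(OL * a**4 + Om * a + Or))

def simpson(f, lo, hi, n=2000):      # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3

S = 1.0
print(simpson(integrand, 0.0, 1.0 / S))   # closely matches scipy.integrate.quad(TH, S, inf)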

I have some experience of collaborative Javascript projects; @Jorrie, do you mind if I set something up as an illustration of how to transition the calculation module to a robust modern approach?
 
  • #49
Jorrie said:
I think the puzzles will go away if we get the distances algorithm sorted out.
I've been having a look at your Calculation.js module. Your ##D_{hor}## is just ##1/H(z)## multiplied by the appropriate unit conversion factor, so is actually the same as your ##R## column - I can do that.

However your ##D_{par}## seems to be ##a\int_0^a\frac{da'}{a'H(a')}## which, noting that ##a=1/S##, ##a'=1/s## and ##da'=-ds/s^2##, is ##\frac 1S\int_S^\infty\frac{ds}{sH(s)}##, which is what I've implemented, so I'm a bit confused. Also full of viruses today, so it's possible I've screwed something up.
 
  • #50
pbuk said:
This module is not available in the browser, so it is necessary to implement a similar algorithm "by hand".
This has an ODE solver: https://ccc-js.github.io/numeric2/index.html. I think that could be slightly abused to do integration. Or we could just use the approach in the existing module, which seems to work fine with the possible exception of the ##D_{par}## calculation that we need to revisit. I don't think it's a bad approach, and it seems pretty responsive (although I'd move some stuff around to make it a bit easier to see what's going on, and you can get rid of one pass of the integrator, I think). It's just grown messy and needs a dead-code pruning round.
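To make the ODE-solver trick concrete, here is what it looks like in Python terms (scipy's solve_ivp standing in for the role numeric.js's dopri would play in the browser; toy parameters): comoving distance is just the solution of dD/ds = T_H(s) with D(1) = 0, so an adaptive stepper produces the whole D(z) table in one pass.
Python:
import math
import numpy as np
from scipy.integrate import solve_ivp

H0, Om, OL = 0.0693, 0.309, 0.691       # illustrative flat-LCDM values in 1/Gyr

def rhs(s, D):                          # dD/ds = 1/H(s); D itself never appears
    return [1.0 / (H0 * math.sqrt(OL + Om * s**3))]

s_grid = np.linspace(1.0, 1101.0, 12)   # from z = 0 out to z = 1100
sol = solve_ivp(rhs, (1.0, 1101.0), [0.0], t_eval=s_grid, rtol=1e-8)
for s, D in zip(sol.t, sol.y[0]):
    print(f"s = {s:7.1f}   Dnow = {D:7.3f} Gly")   # with c = 1, Gyr of light travel = Gly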
 