LightCone Calculator Improvements

  • Thread starter: Jorrie
  • Tags: Calculator
  • #51
Jorrie said:
Jim, I don't quite understand your objection against https://light-cone-calc.github.io/, because the two give identical outputs for the Planck + BAO input data, . . .

Both deviate from the Planck + BAO input data for Ωm, for reasons discussed before.

Among Ω0, ΩΛ,0, Ωm,0, ΩR,0 and zeq, only three of them can be used as inputs because of the following two relations:
$$\Omega_0 = \Omega_{m,0} + \Omega_{\Lambda,0} + \Omega_{R,0}, \qquad \Omega_{R,0} = \frac{\Omega_{m,0}}{z_{eq} + 1}$$
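(Eliminating ΩR,0 between these two relations gives the expression quoted later in Post #53, $$\Omega_{\Lambda,0} = \Omega_0 - \Omega_{m,0}\,\frac{z_{eq} + 2}{z_{eq} + 1},$$ so once any three of the five quantities are fixed, the remaining two follow.)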

LightCone8 (https://burtjordaan.github.io/light-cone-calc.github.io/) correctly uses only three: Ω0, ΩΛ,0, and zeq, but LightCone8.1.2 - Jorrie Trial UI (https://light-cone-calc.github.io/) incorrectly uses four: Ω0, ΩΛ,0, Ωm,0, and zeq. This causes the discrepancy in LightCone8.1.2 (see Post #30, #40, and #42).

I noticed that in LightCone8.1.2 the inputted Ωm,0 is used only in calculating the ΩR,0 printed to the right. The rest of the calculation is the same as in LightCone8, using only Ω0, ΩΛ,0, and zeq. Let me illustrate this with an exaggerated example (setting Ωm,0 = 0.9):

[Screenshot: LightCone8.1.2 input panel with Ωm,0 set to 0.9]


The output remains the same as in LightCone8:
[Screenshot: output table, identical to LightCone8's]

So, using both ΩΛ,0 and Ωm,0 as inputs at the same time is incorrect and produces discrepancies.

Suggestion:

We can use Ωm,0 and ΩR,0 as derived quantities and print them on the right-hand side:

[Screenshot: suggested layout with Ωm,0 and ΩR,0 shown as derived values on the right]
 
  • #52
JimJCW said:
Suggestion:

We can use Ωm,0 and ΩR,0 as derived quantities and print them on the right-hand side:

Oops, yes, thanks for the heads-up - I see now that the table calculation module ignores the matter density input value. I will temporarily reset the old input tables. But I would still like to use Ωm,0 as one of the primary inputs (for reasons I mentioned in my prior post). So it would mean swapping ΩΛ,0 and Ωm,0 in what you have above.

To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.
 
  • #53
Jorrie said:
Oops, yes, thanks for the heads-up - I see now that the table calculation module ignores the matter density input value. I will temporarily reset the old input tables. But I would still like to use Ωm,0 as one of the primary inputs (for reasons I mentioned in my prior post). So it would mean swapping ΩΛ,0 and Ωm,0 in what you have above.

To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.

Please see Post #13:

If Ωm,0 is used as input (see Post #2, where seq = zeq + 1),

$$\Omega_{\Lambda,0} = \Omega_0 - \Omega_{m,0}\,\frac{z_{eq} + 2}{z_{eq} + 1}, \qquad \Omega_{R,0} = \frac{\Omega_{m,0}}{z_{eq} + 1}$$
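For concreteness, a minimal sketch of these relations in code (variable names and sample values are illustrative, not the actual module):

JavaScript:
    // Derive the dependent densities from omega0, omegaM0 and zEq,
    // using sEq = zEq + 1 as in Post #2.
    const omega0 = 1.0, omegaM0 = 0.3111, zEq = 3387;  // sample inputs
    const sEq = zEq + 1;
    const omegaLambda0 = omega0 - omegaM0 * (sEq + 1) / sEq;
    const omegaR0 = omegaM0 / sEq;
    console.log(omegaLambda0, omegaR0);  // ~0.68881, ~9.182e-5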

@pbuk
 
Last edited:
  • #55
Jorrie said:
To do the change in primary input, @pbuk must also alter his calculation module to use Ωm,0 as primary input. I will coordinate with him to get us there.

The underlying model (since 24 July) does use Ωm,0, but only if ΩΛ,0 is not provided by the UI coupling. In that case it will apply the whole of the Ωrad,0 adjustment to ΩΛ,0, which I think we have agreed is not the best thing to do. If ΩΛ,0 is provided it will apply the whole of the Ωrad,0 adjustment to Ωm,0.

JavaScript:
    // Use omegaLambda0 if it is provided.
    if (props.omegaLambda0 != null) {
      omegaLambda0 = props.omegaLambda0;
      // Matter takes what is left after lambda, shared with radiation
      // via omegaR0 = omegaM0 / sEq (where sEq = zEq + 1).
      omegaM0 = (omega0 - omegaLambda0) * (sEq / (sEq + 1));
    } else if (props.omegaM0 != null) {
      omegaM0 = props.omegaM0;
      // Lambda takes what is left after matter plus its radiation share.
      omegaLambda0 = omega0 - omegaM0 * ((sEq + 1) / sEq);
    } else {
      throw new Error('Must provide either omegaM0 or omegaLambda0');
    }
(see https://github.com/cosmic-expansion...0b12217c50ce8991181686fd406/src/model.ts#L156)

I think what we are now suggesting is that if both Ωm,0 and ΩΛ,0 are provided we should apply the Ωrad,0 adjustment proportionately across Ωm,0 and ΩΛ,0?

I'll have a look at this.
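One possible reading of "proportionately", as a minimal sketch (assuming ΩR,0 stays tied to Ωm,0 through zeq; variable names and sample values are illustrative, not the module code):

JavaScript:
    // Scale both supplied densities by a common factor k so that
    // omegaM0 + omegaLambda0 + omegaR0 = omega0 still holds,
    // with omegaR0 = omegaM0 / sEq and sEq = zEq + 1.
    const omega0 = 1.0, zEq = 3387, sEq = zEq + 1;
    const omegaM0In = 0.3111, omegaLambda0In = 0.6889;  // as supplied
    const k = omega0 / (omegaM0In * (sEq + 1) / sEq + omegaLambda0In);
    const omegaM0 = k * omegaM0In;
    const omegaLambda0 = k * omegaLambda0In;
    const omegaR0 = omegaM0 / sEq;
    console.log(omegaM0 + omegaLambda0 + omegaR0);  // 1 (flat by construction)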
 
  • #56
pbuk said:
I think what we are now suggesting is that if both Ωm,0 and ΩΛ,0 are provided we should apply the Ωrad,0 adjustment proportionately across Ωm,0 and ΩΛ,0?

I'll have a look at this.
Yes. Although I started to think we must use Ωm,0 (as the more directly established parameter), since we do not yet understand why the Planck collaboration elected to present the data with this apparent inconsistency, it may be best to take their values at face value as inputs.

Since we want to keep Ω0 = 1, I suppose we can decide how to process that. Adjusting proportionately across both seems the more neutral scheme.
 
  • #57
Jorrie said:
Yes. Although I started to think we must use Ωm,0 (as the more directly established parameter), since we do not yet understand why the Planck collaboration elected to present the data with this apparent inconsistency, it may be best to take their values at face value as inputs.

Since we want to keep Ω0 = 1, I suppose we can decide how to process that. Adjusting proportionately across both seems the more neutral scheme.

The source of our problem is that the data in 'Planck 2018 results. I' is inconsistent for a flat universe:
[Screenshot: parameter table from 'Planck 2018 results. I']

We shouldn’t use these numbers as given at the same time.

We have been using ΩΛ,0 as the input, with Ωm,0 and ΩR,0 as derived quantities. We are planning to use Ωm,0 as the input, with ΩΛ,0 and ΩR,0 as derived quantities. I think either one is good.

@pbuk
 
  • #58
JimJCW said:
The source of our problem is that the data in 'Planck 2018 results. I' is inconsistent for a flat universe:
We shouldn’t use these numbers as given at the same time.
Yea, you are right. Trying to use all 4 will give inconsistencies.
For now, let's leave it as it is.
 
  • #59
If we ever find Omega_0 slightly above 1, the most likely culprit will be dark matter. Here is a graph for Omega_0 = 1.1 (an exaggerated case for visibility)
[Graph: evolution of Omega over time for Omega_0 = 1.1]

Interesting how Omega will first rise (in this case slightly overshooting the "current" 1.1) and then be dragged back down to unity by dark energy dominance.
 
  • #60
Jorrie said:
If we ever find Omega_0 slightly above 1, the most likely culprit will be dark matter. Here is a graph for Omega_0 = 1.1 (an exaggerated case for visibility)
Interesting how Omega will first rise (in this case slightly overshooting the "current" 1.1) and then be dragged back down to unity by dark energy dominance.

The expansion of space in the Big Bang model makes many situations very complicated. I often use the LightCone calculator to help me get pictures in my mind. The following two pictures are similar to yours:

For Ω0 = 1:

[Graph: evolution of Omega for Ω0 = 1]


For Ω0 = 0.9:

[Graph: evolution of Omega for Ω0 = 0.9]
 
Last edited:
  • #61
Cool! - it looks like we've got ourselves a workable calculator. :smile:
I have just pushed a small update that shows ##\Omega_M## and ##\Omega_R## on the conversion side, as discussed before.
 
  • #62
Discrepancy between LightCone8 and LightCone7:

When comparing the outputs of LightCone8 and LightCone7, I noticed a discrepancy in the calculated event horizon:

[Screenshot: LightCone8 vs LightCone7 outputs; the event horizon values differ]


I think the value calculated with LightCone8 is questionable.

@Jorrie, @pbuk
 
  • #63
JimJCW said:
I think the value calculated with LightCone8 is questionable
It seems to be exactly the same as R; I'll have a look tonight (UK).
 
  • #64
JimJCW said:
Discrepancy between LightCone8 and LightCone7:

When comparing the outputs of LightCone8 and LightCone7, I noticed a discrepancy in the calculated event horizon:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.

I have now added the correct calculation in the back-end development branch; I will push this to the live site over the weekend along with the new UI.
 
  • #65
pbuk said:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.
No idea how that happened, except that lots of experimentation happened around that time.
Anyway, thanks for the excellent work done by JimJCW and yourself.
 
Last edited:
  • #66
Jorrie said:
No idea how that happened, except that lots of experimentation happened around that time.
@pbuk It happens in line 47 of your new calculate.js, where the mapping for Dhor is incorrect. Note Y (legacy) is the same as R later.
JavaScript:
    44    Y: entry.r,
    ...
    47    Dhor: entry.r,
 
  • #67
Jorrie said:
@pbuk It happens in line 47 of your new calculate.js, where the mapping for Dhor is incorrect. Note Y (legacy) is the same as R later.
JavaScript:
    44    Y: entry.r,
    ...
    47    Dhor: entry.r,

Yes, I picked this up from line 267 of the 2022-05-14 version of LightCone7
JavaScript:
          if (Dhor < Y)
              Dhor = Y ;
I removed the test because Dhor was never being set above 0 anywhere else in the code, so it was always being set to Y (r). This was because the line
JavaScript:
          Dhor =  a * (Dte-Dc);
which was there in LightCone7 2021-03-12 disappeared in the later version.
 
  • #68
Yes, correct. Is it something that you can fix on the calculate side, or must I fix it outside of calculate?
 
  • #69
pbuk said:
Interesting, I had started with the code from the last version of LightCone7 http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html and it gives exactly the same incorrect results. But when I use an older version of LightCone7 e.g. http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html I get the expected results.

Let’s call
A: http://jorrie.epizy.com/Lightcone7-2021-03-12/LightCone_Ho7.html
B: http://jorrie.epizy.com/lightcone7/2022-05-14/LightCone7.html
and use PLANCK Data (2015) for the present discussion.

Result from A:

1660648897696.png


Result from B:

1660648942045.png

Note that R0 and Dhor overlap with each other.

Comparing the Calculation.js files of A and B, I noticed that some statements in A are not in B, near line 180 and line 245. If we put the missing statements back into B:

JavaScript:
    pa = a_ic * sf;
    // *** Add the following missing line (this was the problem):
    a = a_ic + (pa / 2);
    qa = Math.sqrt((Om / a) + Ok + (Or / (a * a)) + (Ol * a * a));
    Dthen = 0;
    }
    // *** Add the following missing lines:
    else
    {
        Dnow = Math.abs(Dtc - Dc);
        Dthen = a * Dnow;
    }
    Dhor = a * (Dte - Dc);
    // *** End of missing lines.
    Dpar = a * Dc;

the modified B gives the following result:

[Screenshot: modified B output with corrected Dhor]


@Jorrie
 
  • #70
Jorrie said:
Yes, correct. Is it something that you can fix on the calculate side, or must I fix it outside of calculate?
Fixed now. I've also tidied up the html (there were some extra empty <td>s and unmatched </tr>s) and bumped the version to 8.3 for the added CSV functionality.

I've pulled all these changes into the 'main' branch and changed the source for the live site back to 'main' from 'develop' for good practice.
 
Last edited:
  • #71
Maybe a cache problem, but although it shows version 8.3, the changes correcting Dhor have not pulled through. Strange.
Outputs updated: 2022-08-16 15:36:10
[Screenshot: output table still showing the uncorrected Dhor]

Copyright © 2009-2022 jorrie.epizy.com | AGPL license | LightCone v8.3.0 | CosmicExpansion v1.1.1
 
  • #72
Jorrie said:
Maybe a cache problem
Oops, no it was me (updated the calculations, forgot to update the UI). Fixed now.
 
  • #74
Since we now have a seemingly well-behaved Lightcone8, e.g.
[Screenshot: LightCone8 output with default inputs]

I decided to stress test it a little. By simply turning the knob labeled OmegaL down well below the stated minimum, namely to 0.0001, and expanding the range of the graph by trial and error, I got this amazing result (from both a programming and perhaps a cosmological POV):

[Graph: expansion history with OmegaL = 0.0001]

[Graph: Hubble radius and cosmological horizon at very late times]

If true, it shows the amazing performance of @pbuk's new calculation module, and also that even a relatively tiny OmegaL still makes the Hubble radius R0 and the cosmological horizon approach each other, albeit only well beyond 2 trillion years.

Perhaps @JimJCW (our master tester) can advise us how far we can relax the input ranges before we run into gross inaccuracies.
 
  • #75
Jorrie said:
If true, it shows the amazing performance of @pbuk's new calculation module

Perhaps @JimJCW (our master tester) can advise us how far we can relax the input ranges before we run into gross inaccuracies.
😊 Actually, I'm pretty confident in the integration now: the functions become very smooth as ## t \to \infty ## and the integration uses the transformation $$ g(t) = \frac{f \left( a + \frac{t}{1-t} \right)}{(1-t)^2} \implies \int_a^\infty f(x)\, dx = \int_0^1 g(t)\, dt $$ to deal with the improper integral.
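To make the transformation concrete, here is a minimal sketch using a plain midpoint rule (illustrative only; the module's actual quadrature is adaptive):

JavaScript:
    // Map [a, Infinity) onto [0, 1) with x = a + t/(1-t), dx = dt/(1-t)^2,
    // then integrate with the midpoint rule on n subintervals.
    function integrateToInfinity(f, a, n = 1000) {
      let sum = 0;
      const h = 1 / n;
      for (let i = 0; i < n; i++) {
        const t = (i + 0.5) * h;          // midpoint of the subinterval
        const x = a + t / (1 - t);        // transformed abscissa
        sum += (f(x) / ((1 - t) * (1 - t))) * h;
      }
      return sum;
    }
    // Check: integral of e^(-x) over [0, Infinity) = 1
    console.log(integrateToInfinity((x) => Math.exp(-x), 0));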
 
  • #76
Jorrie said:
. . . how far we can relax the input ranges before we run into gross inaccuracies.

In LightCone7:

[Screenshot: LightCone7 input range specifications]


In LightCone8.3:

[Screenshot: LightCone8.3 input range specifications]


Note that in LightCone8.3, the range specification, ‘>0.001 to 1.00’, does not work. The specification in LightCone7, ‘>0 to 1.00’, seems to be more suitable.

In ICRAR, ΩΛ,0 can be set to 0 or a small value such as 0.000000001:

[Screenshot: ICRAR calculator accepting ΩΛ,0 = 0]


@pbuk
 
  • #77
JimJCW said:
Note that in LightCone8.3, the range specification, ‘>0.001 to 1.00’, does not work. The specification in LightCone7, ‘>0 to 1.00’, seems to be more suitable.
The test is simply that it is greater than 0, so the text should probably be changed, yes. Setting it to 0 seems to prevent the integration from converging (probably because the physics says it doesn't, but I can't think exactly why at the moment).
 
  • #78
pbuk said:
The test is simply that it is greater than 0, so the text should probably be changed, yes. Setting it to 0 seems to prevent the integration from converging (probably because the physics says it doesn't, but I can't think exactly why at the moment).
Yes ##\Omega_\Lambda>0 ## is correct for the programming as it stands. I will change the input text.
When ##\Omega_\Lambda = 0##, there is no cosmological event horizon - this is probably why the integration does not converge in that case.
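For reference, a quick way to see why: the event horizon is the improper integral $$ D_{hor}(t) = a(t) \int_t^\infty \frac{c\, dt'}{a(t')}, $$ and with ##\Omega_\Lambda = 0## the late universe is matter dominated, ##a \propto t^{2/3}##, so the integrand falls off only as ##t'^{-2/3}## and the integral diverges. With any ##\Omega_\Lambda > 0## the late-time ##a \propto e^{Ht}## makes it converge.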
 
  • #79
While on the topic of stretching the input boundaries (ranges), a few others can possibly also be changed to allow for more broad comparison with other cosmo-calculators. I'm thinking of zupper up to a million and zlower to >-1, e.g.,

[Screenshot: proposed input ranges with zupper up to 1 000 000 and zlower > -1]


Any others?

Edit: possibly with a warning that a very high zupper may cause slow processing on laptops, tablets etc.
 
  • #80
I don't think high z is a problem anyway; we already integrate to infinity to get ## t_0 ##.
 
  • #81
pbuk said:
I don't think high z is a problem anyway; we already integrate to infinity to get ## t_0 ##.
For some reason I find a severe slowdown when starting z at 1E6, possibly because of the table that takes longer for larger ranges.
 
  • #82
Jorrie said:
For some reason I find a severe slowdown when starting z at 1E6, possibly because of the table that takes longer for larger ranges.
Perhaps the excessive slowdown for the more extreme ranges could be affected by internet speed. As a test, I opened up the z range from -0.9999 to 1 000 000 and got between 10 and 15 seconds of delay before the result table was presented. On my end I'm on 50 Mbps fiber, but there are some intercontinental delays as well.

If I understand it correctly, your integration routine does not run locally, but on some remote server? If so, a large number of requests need to go to it, I guess.

For what it's worth, if it would be workable, it gave the following range in cosmic time:

[Screenshot: result table spanning the full cosmic time range]

That's from less than one year, way out to 173 Gyr...
 
  • #83
Jorrie said:
If I understand it correctly, your integration routine does not run locally, but on some remote server? If so, a large number of requests need to go to it, I guess.
No, it's all running on the local machine. I think the problem may be that we are seeking too high a precision at extreme redshifts.
 
  • #84
Jorrie said:
For some reason I find a severe slowdown when starting z at 1E6, possibly because of the table that takes longer for larger ranges.

Where or how can I try the calculator?
 
  • #86
Jorrie said:
For some reason I find a severe slowdown when starting z at 1E6, possibly because of the table that takes longer for larger ranges.

The value z = 1000000 is unusual (I don't know why); it takes more than 10 sec to get my calculation result. For other values, such as z = 99999.99999, it takes only a fraction of a sec. You could change the range to,

zupper > zlower to 99999
 
  • #87
JimJCW said:
The value z = 1000000 is unusual (I don't know why); it takes more than 10 sec to get my calculation result. For other values, such as z = 99999.99999, it takes only a fraction of a sec.
I don't find z = 1000000 (1 million) to be special. It seems the calc time goes up exponentially with large z. Up to z = 300000 it seems to stay within 1 second execution time. This gives around 8 years after T = 0.

There is of course a different route that can be followed for times shortly after inflation, when radiation dominated overwhelmingly, i.e.,
$$H =H_0 (z+1)^2 \sqrt{\Omega_r}$$
(Edit2: sorry I had this upside down originally, as well as wrong in a silly way)

Not sure if it is worth the effort in coding though...
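As a quick sanity check of that shortcut (a sketch; the H0 and Omega_r values below are typical Planck-like numbers, assumed for illustration, not taken from the calculator), using the radiation-era relation t = 1/(2H):

JavaScript:
    // Radiation-dominated age estimate: H = H0*sqrt(Omega_r)*(z+1)^2
    // and t = 1/(2H). Constants are illustrative.
    const H0 = 67.74 / 3.0857e19;                      // km/s/Mpc -> 1/s
    const omegaR = 9.24e-5;
    const z = 300000;
    const H = H0 * Math.sqrt(omegaR) * (z + 1) ** 2;   // 1/s
    const tYears = 1 / (2 * H) / 3.156e7;              // seconds -> years
    console.log(tYears.toFixed(1));   // ~8 years, matching Post #87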
 
Last edited:
  • #88
Jorrie said:
I don't find z = 1000000 (1 million) to be special. It seems the calc time goes up exponentially with large z. Up to z = 300000 it seems to stay within 1 second execution time. This gives around 8 years after T = 0.

You are right; I mistakenly thought '99999' = '1000000 - 1'.
 
  • #89
Jorrie said:
http://jorrie.epizy.com/docs/index.html

I'm still struggling to get a working link out of my fork for direct use on github
I have marked it v8.3.x
A recap on links:

The stable version (currently 8.3.0, incorporating CSV download, ## D_{hor} ## correction and the other changes we have discussed) is at https://light-cone-calc.github.io/.

@Jorrie your fork is at https://burtjordaan.github.io/light-cone-calc.github.io/; this serves whatever is in the docs folder of the main branch, which currently appears to be v8.1.2 (although I think it is actually a bit modified from the "original" 8.1.2). But this code seems to be a bit behind what we are now working on (I think this was in the develop branch but that seems to have disappeared).
 
  • #90
Calculator computation time:

The computation time of LightCone8 is discussed in Posts #79-83 and #86-88. For example, when the value of z is increased from 1 to 1000000, the computation time increases from a fraction of a second to around 10 seconds.

To get some ideas about this question, I did some experiments with the ICRAR calculator using PLANCK(2018+BAO) data and various values of z. The calculation times are all of the order of 1 sec. An example is given below:

[Screenshot: ICRAR calculator result for PLANCK(2018+BAO) data]


@Jorrie, @pbuk
 
  • #91
I don't know the ICRAR calculator. I see it can produce graphs, but numerically it looks like a very advanced one-shot-z calculator of professional quality and accuracy. I think one-shot makes things considerably easier and faster.

Lightcone8's algorithm can possibly be optimized further, but for its intended purpose, as a relatively easy-to-use educational tool, I doubt it is worth the effort needed to pursue the ultra-high-z regime.
 
  • #92
I seem to remember the ICRAR calculator changes the model to radiation-only for very early redshifts, as Burt suggested a few posts back. There are a few other differences that may be worth investigating, and one thing that definitely needs looking at is the target error for the integration. This all needs work at a lower level than via the LightCone GUI, so if you are interested I suggest you set up a NodeJS development environment and play with the underlying model, which lives at https://github.com/cosmic-expansion/cosmic-expansion-js.
 
  • #93
pbuk said:
I seem to remember the ICRAR calculator changes the model to radiation-only for very early redshifts, as Burt suggested a few posts back.

I wouldn't say that the ICRAR calculator changes the model to radiation-only for very early redshifts. Instead, I would say that for high z values the radiation term becomes dominant in both the ICRAR calculator and LightCone8:

[Screenshot: radiation term dominating at high z in the ICRAR calculator]


[Screenshot: radiation term dominating at high z in LightCone8]


By the way, where can I find Burt’s post?

@Jorrie
 
  • #94
Jorrie said:
I don't know the ICRAR calculator. I see it can produce graphs, but numerically it looks like a very advanced one-shot-z calculator of professional quality and accuracy. I think one-shot makes things considerably easier and faster.

The ICRAR result shown in Post #90 suggests that the computation time using that calculator remains the same even for z = 10^50, but using LightCone8 it increases by a factor of 10 when z = 10^6. It is a little peculiar, but not crucial.

Taking your result into consideration, we can use 'zupper > zlower to 300000' as the permitted range for the upper-row redshift. At z = 300000, ΩR already dominates strongly (see Post #93).

@pbuk
 
  • #95
JimJCW said:
Taking your result into consideration, we can use 'zupper > zlower to 300000' as the permitted range for the upper-row redshift. At z = 300000, ΩR already dominates strongly (see Post #93).
I have left zupper at 1e6, but included a recommendation of 3e5. There is also an indicator that the integration is running for values up to 1e6.
http://jorrie.epizy.com/docs/index.html?i=1
This link will remain the same from one 8.3.x version to the next. When we accept the version as valid, we can bump it up to the main branch.
 
  • #96
JimJCW said:
I wouldn't say that the ICRAR calculator changes the model to radiation-only for very early redshifts.
I don't see how you can establish that by looking at the outputs; you need to inspect the code, which you can see by clicking the "R Code" tab on the ICRAR site (although it may actually be the code at https://github.com/asgr/celestial/blob/master/R/cosgrow.R). Having said that, it does look as though it doesn't change the integrand for high z.

As I say, if you want to investigate what is causing the slowdown, you need to work with the underlying model and try adjusting the epsilon parameter of the integration, which controls the relative error and is currently set to ## 10^{-8} ##, and also look at the full return value from the integrator to see the number of steps that are taken.
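To illustrate the tradeoff, a generic adaptive-Simpson sketch (not the cosmic-expansion-js integrator or its API) showing how a tighter relative-error target drives up the number of function evaluations:

JavaScript:
    // Count function evaluations as the relative-error target tightens.
    function adaptiveSimpson(f, a, b, eps) {
      let evals = 0;
      const g = (x) => { evals++; return f(x); };
      const simpson = (fa, fm, fb, a, b) => ((b - a) / 6) * (fa + 4 * fm + fb);
      function recurse(a, b, fa, fm, fb, whole, eps) {
        const m = (a + b) / 2;
        const flm = g((a + m) / 2), frm = g((m + b) / 2);
        const left = simpson(fa, flm, fm, a, m);
        const right = simpson(fm, frm, fb, m, b);
        const delta = left + right - whole;
        if (Math.abs(delta) <= 15 * eps * Math.abs(whole)) {
          return left + right + delta / 15;   // Richardson correction
        }
        return recurse(a, m, fa, flm, fm, left, eps / 2) +
               recurse(m, b, fm, frm, fb, right, eps / 2);
      }
      const fa = g(a), fb = g(b), fm = g((a + b) / 2);
      const whole = simpson(fa, fm, fb, a, b);
      return { value: recurse(a, b, fa, fm, fb, whole, eps), evals };
    }
    // A steep integrand (like 1/E(z) at high z): evals grow as eps shrinks.
    const f = (x) => 1 / Math.sqrt(x);
    for (const eps of [1e-4, 1e-6, 1e-8]) {
      console.log(eps, adaptiveSimpson(f, 1e-6, 1, eps).evals);
    }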

None of this is near the top of my priority list because I believe the current code achieves the objectives for the LightCone application.
 
Last edited:
  • #97
pbuk said:
I don't see how you can establish that by looking at the outputs, you need to inspect the code which you can see by clicking the "R Code" tab on the ICRAR site (although it may actually be the code at https://github.com/asgr/celestial/blob/master/R/cosgrow.R). Having said that, it does look as though it doesn't change the integrand for high z.

As I say if you want to investigate what is causing the slowdown you need to work with the underlying model and try adjusting the epsilon parameter to the integration which controls relative error and is currently set to ## 10^{-8} ##, and also look at the full return value from the integrator to see the number of steps that are taken.

None of this is near the top of my priority list because I believe the current code achieves the objectives for the LightCone application.

In Post #22, it is demonstrated that the calculation results from Lightcone8 and ICRAR are consistent for z = 0.02. A similar conclusion can be reached for z = 300000:

Quantity        LightCone8         ICRAR
z               300000             300000
Scale (a)       3.33332222E-06     3.33332222E-06
T Gyr           8.34828635E-09     8.34837088E-09
R Gpc           5.10964255E-09     -
Dnow Gpc        1.41602035E+01     1.41602035E+04
Dthen Gpc       4.72005209E-05     4.72005209E-02
DHor Gpc        5.10964255E-09     -
Dpar Gpc        5.12398784E-09     -
Vgen/c          2.89051673E+03     -
Vnow/c          3.19580877E+00     -
Vthen/c         9.23753872E+03     -
H(z)            5.86719042E+10     5.86719042E+10
Temp K          8.17646726E+05     -
rho kg/m3       6.46598880E-09     6.46643447E-09
OmegaM          1.11671814E-02     1.11671814E-02
OmegaL          9.16135695E-19     9.16135695E-19
OmegaR          9.88832819E-01     9.88832819E-01
OmegaT          1.000000000E+00    1.000000000E+00

This result suggests that the ICRAR calculator uses all three parameters, Ωm, ΩΛ, and ΩR even for high z values, just like LightCone8.

I have been a happy user of LightCone calculator(s) and am now very happy with LightCone8.

I think we are only trying to tighten some loose ends, right?

@Jorrie
 
  • #98
JimJCW said:
I think we are only trying to tighten some loose ends, right?
Yup, and at the same time we have opened up the range of usability quite a bit. Lightcone7 was limited to a zupper of 20,000. With Lightcone8 we are now confident up to 300,000, and 1,000,000 at a stretch (an age of about 1 year).

Plus, the future expansion capability has doubled and we have an accurate cosmological horizon calculation that many calculators lack.

Team effort paying off... Thanks guys :smile:
 
  • #99
Hi @Jorrie,

What is the status of the LightCone8 Cosmological Calculator (v8.3.x) at http://jorrie.epizy.com/docs/? It still gives an incorrect Dhor (R0 and Dhor overlap):

[Screenshot: v8.3.x output with R0 and Dhor still overlapping]


Please see Post #69.
 
