A glitch in Jorrie’s Cosmo-Calculator?

  • Thread starter: JimJCW

Summary:
Jorrie’s Cosmo-Calculator is experiencing persistent inconsistencies in results for Dnow and Dthen compared to Gnedin’s and Wright’s calculators, particularly at low redshift values. Users speculate there may be a glitch, possibly linked to numerical integration methods used in the calculator. Adjustments to the integration parameters have shown some reduction in error but not a complete fix, indicating deeper issues in the implementation. The calculator was originally designed as an educational tool, which may explain some of its limitations. Ongoing investigations are needed to address these discrepancies and improve accuracy.
  • #31
Jorrie said:
Here is a patched version of Lightcone7, dated 2022-05-14, which numerically corrects for the offset in the low-z regime.
When testing in that regime, remember to set zlower and zupper in the input boxes to a range compatible with the z-regime of interest. When simply run over the default range of z, it produces too coarse an output for proper testing.

There are still anomalies around z=0, but it is good to remember that the homogeneity condition required by LCDM does not hold for z < 0.025 or so, roughly the redshift of the Coma cluster.

I leave the old link in my signature for now, in case you want to compare. Still pondering an overhaul of the legacy-ridden calculation.js module.

I will look at it tonight. I noticed the following:

When specifying the following:

[screenshot: 1652609736024.png]

I got this:

[screenshot: 1652609804349.png]


The ranges of z values seem inconsistent.
 
  • #32
Hi @Jorrie. I've spent a peaceful afternoon reading your Insights on how this works and looking through the Javascript before implementing it in python using one of its canned integration routines to do the hard work. I've uncovered a couple of possible issues that you might want to look at.

First I looked at your Lightcone7 Tutorial Part III to try to understand the maths you were implementing. I noticed a minor bug in that article - it defines ##\Omega_r=\Omega_r/S_{eq}##, which I think ought to be ##\Omega_m/S_{eq}##, or possibly even ##\Omega_mS/S_{eq}##.

But between that page and the Javascript I was able to write a python program to replicate the outputs. There's no UI: "inputs" are just variables, you specify all the ##z## values manually, and the result comes out as a file. The code is below and should just work as long as you have the scipy.integrate package installed, which is what does the numerical work. It's partially commented - see the Insight linked above and Lightcone7 itself for details of what's actually being calculated.
Python:
import math, scipy.integrate

##########################################################################
# Various constants
rhoConst = 1.7885e+9  # 3 / (8 pi G)
secInGy = 3.1536e+16  # s / Gyr
tempNow = 2.725       # CMB temperature now
Hconv = 1 / 978       # Convert km/s/Mpc -> Gyr^-1
infty = float("inf")  # Infinity - used as a limit in integrator

##########################################################################
# Inputs (Planck 2015, copied from Jorrie's calculator)
H0 = 67.74                # H0 control
OmegaL = 0.691            # OmegaL control
Omega = 1                 # Omega control
z_eq = 3370               # z_eq control
zvals = [ 1089.999,       # The z values copied from Lightcone7's table
          339.0316542,
          104.9771906,
          32.02928913,
          9.293570928,
          2.207515378,
          0,
          -0.6893293728,
          -0.8690917659,
          -0.9451725312,
          -0.9773721629,
          -0.991 ]

##########################################################################
# Constants derived from inputs
H0 *= Hconv                                    # H0 in Gyr^-1
rhocritNow = rhoConst * (H0 / secInGy)**2      # Critical density now
s_eq = 1 + z_eq                                # Stretch when OmegaM=OmegaR
OmegaM = (Omega - OmegaL) * s_eq / (s_eq + 1)  # Energy density of matter
OmegaR = OmegaM / s_eq                         # Energy density of radiation
OmegaK = 1 - OmegaM - OmegaR - OmegaL          # Curvature energy density

##########################################################################
# Functions
def H(s):
    """Hubble constant as a function of stretch, s = 1/a, where a is the
    usual FLRW scale factor"""
    return H0 * math.sqrt(OmegaL
                            + OmegaK * s**2
                            + OmegaM * s**3
                            + OmegaR * s**4)
def TH(s):
    return 1 / H(s)
def THs(s):
    return TH(s) / s
def formatted(vals):
    """Convenience function to apply standard formatting to a bunch of
    numbers"""
    return ",".join(("%7.2e" % v) for v in vals)

##########################################################################
# For each specified z, calculate all the things Jorrie's calculator does
# and output them as CSV for easy pasting into a spreadsheet
with open("planck.csv","w") as op:
    op.write("z,Scale (a),S,Tnow Gyr,R Gly,Dnow Gly,Dthen Gly," \
                +"DHor Gly,Dpar Gly,Vgen/c,Vnow/c,Vthen/c,H(z)," \
                +"Temp (K),rho kg/m3,OmegaM,OmegaL,OmegaR,OmegaT\n")
    for z in zvals:
        s = 1 + z
        #s = 1.001 + z  # Uncomment to replicate a systematic bug in Lightcone7
        a = 1 / s
        Tnow = scipy.integrate.quad(THs, s, infty)[0]
        R = 1 / H(s)
        Dnow = abs(scipy.integrate.quad(TH, 1, s)[0])
        Dthen = Dnow / s
        Dhor = scipy.integrate.quad(TH, 0, s)[0] / s
        Dpar = scipy.integrate.quad(TH, s, infty)[0] / s
        Vgen = H(s) / (H0 * s)
        Vnow = Dnow * H0
        Vthen = Dthen * H(s)
        Temp = tempNow * s
        rhocrit = rhoConst * (H(s) / secInGy)**2
        OmegaMT = (Omega-OmegaL) * s**3 * (H0 / H(s))**2
        OmegaLT = (H0 / H(s))**2 * OmegaL
        OmegaRT = OmegaMT * s / s_eq
        OmegaTT = OmegaMT + OmegaLT + OmegaRT
        op.write(formatted([z, a, s, Tnow, R,
                            Dnow, Dthen, Dhor, Dpar,
                            Vgen, Vnow, Vthen,
                            H(s) / Hconv,
                            Temp, rhocrit,
                            OmegaMT, OmegaLT, OmegaRT, OmegaTT])+"\n")
If you run this it should produce a table of data in a CSV file called planck.csv that should open in a spreadsheet package. It produces all of the columns Lightcone7 does, in the same order and to Lightcone7's default precision, to make it easy to compare with the Javascript results. I uncovered two issues by doing this, working with your 2022-05-14 version.

I think the first issue is what's causing the bug that @JimJCW is seeing. Your ##S## actually appears to be ##1.001+z## across the board. Since everything calculates from ##S## the only practical effect of this is that the ##z## being used in the calculation is off by 0.001 compared to the results, which looks to be about right for where the graphs posted on page 1 are hitting the horizontal axis.

There's a line commented out in the loop at the end of my code that switches to defining ##S=1.001+z## and almost all of the (fairly slight) differences between python and Javascript results go away if you uncomment that. The remainder are, I think, rounding disagreements.

The second issue is that I can't get my Dpar and Dhor values to match yours. According to your Insight, $$\begin{eqnarray*}
D_{par}&=&\frac 1S\int_{s=S}^\infty T_Hds\\
D_{hor}&=&\frac 1S\int_{s=0}^S T_Hds
\end{eqnarray*}$$and that's fairly simple to implement, so I don't think I've made a mistake there. Given those definitions, I'd expect ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}##, and that's true of my results but is not true in the Lightcone7. So I think there might be something wrong either in those calculations or the definition in the Insight. I haven't investigated further - not even going and looking up those formulae in a textbook.
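For the record, that constancy check is easy to script. Here is a sketch reusing the same scipy routines and Planck 2015 numbers as my code above (the 1e-6 tolerance and the sample ##S## values are arbitrary choices):

```python
import math
import scipy.integrate

# Same Planck 2015 style parameters as the script above
H0 = 67.74 / 978                               # H0 in Gyr^-1
OmegaL = 0.691
s_eq = 3371                                    # 1 + z_eq
OmegaM = (1 - OmegaL) * s_eq / (s_eq + 1)
OmegaR = OmegaM / s_eq
OmegaK = 1 - OmegaM - OmegaR - OmegaL

def TH(s):
    """Hubble time 1/H(s) as a function of stretch."""
    return 1 / (H0 * math.sqrt(OmegaL + OmegaK * s**2
                               + OmegaM * s**3 + OmegaR * s**4))

total = scipy.integrate.quad(TH, 0, math.inf)[0]
for S in (1.0, 2.0, 10.0, 1090.0):
    Dpar = scipy.integrate.quad(TH, S, math.inf)[0] / S
    Dhor = scipy.integrate.quad(TH, 0, S)[0] / S
    # (Dpar + Dhor) * S should equal the full 0..infinity integral for any S
    assert abs((Dpar + Dhor) * S - total) < 1e-6 * total
```

My python results pass this for every ##S##; the Lightcone7 columns do not.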

I have had a quick look at the Javascript backing this, and I would say it could probably be simplified quite a lot. It looks like it's been through quite a few re-writes but could probably do with another one focussed on cleaning it up a bit. Happy to help with that if you like - I have some ideas, but I don't want to trample on your toes.
 
  • #33
Ibix said:
Hi @Jorrie. ...

I have had a quick look at the Javascript backing this, and I would say it could probably be simplified quite a lot. It looks like it's been through quite a few re-writes but could probably do with another one focussed on cleaning it up a bit. Happy to help with that if you like - I have some ideas, but I don't want to trample on your toes.
Hi Ibix, thanks a ton! Yes, part of that Js code dates back to 2010 and comes from @Hellfire's original. Didn't want to fix what was working, but it has been added to so many times that it is quite a mess by now. I tried to get too much mileage out of that old engine...
I think both the distance and horizon anomalies originate there.

I would prefer to leave the overall Input/Output structure the same, but the calculation module should be completely redesigned and cleaned up to produce the required outputs. I would be grateful if you want to assist - and it looks like you already have a working algorithm!

And yes, it should be ##\Omega_r=\Omega_m/S_{eq}##. Will fix that, if I can figure out how to edit that old tutorial...
 
  • #34
Ibix said:
Hi @Jorrie.
...
The second issue is that I can't get my Dpar and Dhor values to match yours. According to your Insight, $$\begin{eqnarray*}
D_{par}&=&\frac 1S\int_{s=S}^\infty T_Hds\\
D_{hor}&=&\frac 1S\int_{s=0}^S T_Hds
\end{eqnarray*}$$and that's fairly simple to implement, so I don't think I've made a mistake there. Given those definitions, I'd expect ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}##, and that's true of my results but is not true in the Lightcone7. So I think there might be something wrong either in those calculations or the definition in the Insight. I haven't investigated further - not even going and looking up those formulae in a textbook.
I have used Tamara Davis' "Fundamental Aspects of the Expansion of the Universe and Cosmic Horizons" equations in the Tutorial, but the implementation in Lightcone7 was a bit different and quite possibly inaccurate.

Note that ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}## only holds for comoving distances and not for the proper distances used in Lightcone7, where ##D_{par}## tends to infinity and ##D_{hor}## tends to a finite constant value. This is clear from the Davis Three-panel Cosmological Expansion graphs in my sig.
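Spelling that out: multiplying the proper-distance definitions quoted above by ##S## gives the comoving versions, and only for those does the sum telescope to a constant:
$$S\left(D_{par}+D_{hor}\right)=\int_{S}^{\infty}T_H\,ds+\int_{0}^{S}T_H\,ds=\int_{0}^{\infty}T_H\,ds.$$
The proper quantities themselves carry the extra ##1/S## factor and so evolve with epoch.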
 
  • #35
Ibix said:
I think the first issue is what's causing the bug that @JimJCW is seeing. Your ##S## actually appears to be ##1.001+z## across the board. Since everything calculates from ##S## the only practical effect of this is that the ##z## being used in the calculation is off by 0.001 compared to the results, which looks to be about right for where the graphs posted on page 1 are hitting the horizontal axis.

Please let me know the line number and file name concerning the '1.001 + z' problem.
 
  • #36
Jorrie said:
Here is a patched version of Lightcone7, dated 2022-05-14, which numerically corrects for the offset in the low-z regime.

First, the output z range is still different from the input z range:

[screenshot: 1652699864880.png]

[screenshot: 1652699912214.png]


I did the following experiment:

If I calculate Dnow using Gnedin's and your 2022 calculator and plot 'DGnedin - DJorrie' vs z, I expect to see points fluctuating around the DGnedin - DJorrie = 0 line. But I got the following result:

[screenshot: 1652700186389.png]


I wonder what clues we can get out of this.
 
  • #37
@JimJCW - You asked: Please let me know the line number and file name concerning the '1.001 + z' problem

In my python version there's a line near the bottom with a comment saying something like "to replicate a bug in Lightcone7 uncomment this line". In the Javascript version, no idea. I noticed it in the output - if you turn on both the ##S## and ##z## columns and increase the number of decimal places they display, you can verify it yourself by calculating ##S-z##, but I don't know why it happens.
 
  • #38
Jorrie said:
I have used Tamara Davis' "Fundamental Aspects of the Expansion of the Universe and Cosmic Horizons" equations in the Tutorial, but the implementation in Lightcone7 was a bit different and quite possibly inaccurate.

Note that ##(D_{par}+D_{hor})S=\int_0^\infty T_Hds=\mathrm{const}## only holds for comoving distances and not for the proper distances used in Lightcone7, where ##D_{par}## tends to infinity and ##D_{hor}## tends to a finite constant value. This is clear from the Davis Three-panel Cosmological Expansion graphs in my sig.
Right. I'll do a bit more reading.
 
  • #39
JimJCW said:
First, the output z range is still different from the input z range:

View attachment 301506
View attachment 301507
Hi Jim, yes there is that difference, but that's not the problem; the input range must just be a little larger than the output range. As long as the 'distance now' output is correct for the displayed z, that's acceptable for now.
[screenshot: 1652712589257.png]

For a trial, I've just subtracted 0.001 (line 274 in the 2022-05-14 version) from the output z in order to cancel the origin's horizontal offset, which is caused by the way the distances are computed in the legacy algorithm. The question is: are those (new) values correct? Perhaps you can check it against Gnedin's.

I know that this is not the right way to handle it, so we should be investigating a cleaner algorithm, like the one @Ibix is looking at. My own time is a little limited at the moment, but I will keep watching and contributing as this hopefully develops into a community project.

Thanks for contributing!
 

  • #40
Jorrie said:
Still pondering an overhaul of the legacy-ridden calculation.js module.
That might be an idea. Is the duplication of the line result.H_t = result.H_t*Ynow; in ScaleResults an error?

JavaScript:
      case "Normalized":
          result.Tnow = result.Tnow/Tage;
          result.Y = result.Y/Ynow;
          result.Dnow  = result.Dnow/Ynow;
          result.Dthen = result.Dthen/Ynow;
          result.Dhor = result.Dhor/Ynow;
          result.Dpar = result.Dpar/Ynow;
          result.H_t = result.H_t*Ynow;
          result.H_t = result.H_t*Ynow;
          result.TemperatureT = result.TemperatureT / 2.725 ;  
        break;
 
  • #41
Ibix said:
You could put the code to github? I expect github won't last forever (nothing does), but it has a huge amount of inertia. Your code wouldn't run there, but it would always be available for someone to host should your version disappear.
GitHub provides a (free for open source) service called GitHub Pages which can host this so that it WILL run there (as long as it doesn't go over the free usage limits which should be adequate).

Now that GitHub is owned by Microsoft and they have moved a lot of their code (both open source and proprietary) onto it, it is not going to disappear for a LONG time.
 
  • #42
pbuk said:
they [Microsoft] have moved a lot of their code (both open source and proprietary) onto it, it is not going to disappear for a LONG time.
As much as we might want it to. <sigh>
 
  • #43
pbuk said:
That might be an idea. Is the duplication of the line result.H_t = result.H_t*Ynow; in ScaleResults an error?

JavaScript:
      case "Normalized":
          result.Tnow = result.Tnow/Tage;
          result.Y = result.Y/Ynow;
          result.Dnow  = result.Dnow/Ynow;
          result.Dthen = result.Dthen/Ynow;
          result.Dhor = result.Dhor/Ynow;
          result.Dpar = result.Dpar/Ynow;
          result.H_t = result.H_t*Ynow;
          result.H_t = result.H_t*Ynow;
          result.TemperatureT = result.TemperatureT / 2.725 ; 
        break;
Yes, good spot! Case "Normalized" is not often used, which is maybe why the mistake escaped detection.
 
  • #44
Ibix said:
@JimJCW - You asked: Please let me know the line number and file name concerning the '1.001 + z' problem

In my python version there's a line near the bottom with a comment saying something like "to replicate a bug in Lightcone7 uncomment this line". In the Javascript version, no idea. I noticed it in the output - if you turn on both the ##S## and ##z## columns and increase the number of decimal places they display, you can verify it yourself by calculating ##S-z##, but I don't know why it happens.

Would you please calculate the following Dnow values using your calculator so I can do some comparison?

| z | Gnedin | Jorrie, 2022 | ibix |
|-------|---------|--------------|------|
| 0.1 | 1.41004 | 1.4084 | |
| 0.09 | 1.27214 | 1.270609 | |
| 0.08 | 1.13354 | 1.132106 | |
| 0.07 | 0.99426 | 0.992931 | |
| 0.06 | 0.85427 | 0.85305 | |
| 0.05 | 0.7136 | 0.712477 | |
| 0.04 | 0.57225 | 0.571225 | |
| 0.03 | 0.43023 | 0.429278 | |
| 0.02 | 0.28748 | 0.286651 | |
| 0.01 | 0.14408 | 0.14336 | |
| 0 | 0 | 0.000621 | |
| -0.01 | 0.14474 | 0.14526 | |
| -0.02 | 0.29015 | 0.290567 | |
| -0.03 | 0.43623 | 0.436536 | |
| -0.04 | 0.58295 | 0.583161 | |
| -0.05 | 0.73032 | 0.730433 | |
| -0.06 | 0.87834 | 0.878358 | |
| -0.07 | 1.02699 | 1.026926 | |
| -0.08 | 1.17631 | 1.176124 | |
| -0.09 | 1.32623 | 1.325968 | |
| -0.1 | 1.47679 | 1.476416 | |

I am puzzled by the plot shown in Post #36.
 
  • #45
I think the puzzles will go away if we get the distances algorithm sorted out.
 
  • #46
pbuk said:
GitHub provides a (free for open source) service called GitHub Pages which can host this so that it WILL run there (as long as it doesn't go over the free usage limits which should be adequate).
I will give GitHub Pages a try, because it seems a good place for a collaborative project.
 
  • #47
JimJCW said:
Would you please calculate the following Dnow values using your calculator so I can do some comparison?
Install python, man! If you're remotely interested in any scientific computing it's a useful tool. I believe Anaconda is the recommended Windows distribution, and I believe it comes with the scipy library I used in my code. Then you can play around with my code to your heart's content.

Here are the results. ##S## is set to the correct ##1+z##, not the compatibility ##1.001+z## mode.
| z | Dnow Gly |
|-------|----------|
| 0.1 | 1.41 |
| 0.09 | 1.27 |
| 0.08 | 1.13 |
| 0.07 | 0.994 |
| 0.06 | 0.854 |
| 0.05 | 0.713 |
| 0.04 | 0.572 |
| 0.03 | 0.43 |
| 0.02 | 0.287 |
| 0.01 | 0.144 |
| 0 | 0 |
| -0.01 | 0.145 |
| -0.02 | 0.29 |
| -0.03 | 0.436 |
| -0.04 | 0.583 |
| -0.05 | 0.73 |
| -0.06 | 0.878 |
| -0.07 | 1.03 |
| -0.08 | 1.18 |
| -0.09 | 1.33 |
| -0.1 | 1.48 |
 
  • #48
Jorrie said:
I think the puzzles will go away if we get the distances algorithm sorted out.
Yes, although this is not going to be as simple as porting @Ibix's code, which leverages a scipy module (which in turn leverages a FORTRAN module) to do integration with infinite limits and automatic step-size adjustment. This module is not available in the browser, so it is necessary to implement a similar algorithm "by hand".
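As an illustration of the kind of thing that would need to be hand-rolled, here is a textbook adaptive-Simpson routine in python (not Lightcone7's algorithm; the tolerance and recursion limit are arbitrary choices), which ports to Javascript almost line for line:

```python
def adaptive_simpson(f, a, b, tol=1e-9, depth=50):
    """Textbook recursive adaptive-Simpson quadrature."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6 * (fa + 4 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = (a + b) / 2
        flm, frm = f((a + m) / 2), f((m + b) / 2)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Richardson-style error estimate; subdivide while it exceeds tol
        if depth <= 0 or abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return (recurse(a, m, fa, flm, fm, left, tol / 2, depth - 1)
                + recurse(m, b, fm, frm, fb, right, tol / 2, depth - 1))

    fm = f((a + b) / 2)
    fa, fb = f(a), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, depth)

# Example: int_1^1000 of s^-2 ds = 1 - 1/1000 = 0.999
approx = adaptive_simpson(lambda s: 1 / s**2, 1, 1000)
```

The step size adapts to the integrand automatically, which is exactly what the fixed-step legacy loop lacks.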

I have some experience of collaborative Javascript projects. @Jorrie, do you mind if I set something up as an illustration of how to transition the calculation module to a robust modern approach?
 
  • #49
Jorrie said:
I think the puzzles will go away if we get the distances algorithm sorted out.
I've been having a look at your Calculation.js module. Your ##D_{hor}## is just ##1/H(z)## multiplied by the appropriate unit conversion factor, so it is actually the same as your ##R## column - I can reproduce that.

However your ##D_{par}## seems to be ##a\int_0^a\frac{da'}{a'H(a')}## which, noting that ##a=1/S##, ##a'=1/s## and ##da'=-ds/s^2##, is ##\frac 1S\int_S^\infty\frac{ds}{sH(s)}##, which is what I've implemented, so I'm a bit confused. I'm also full of viruses today, so it's possible I've screwed something up.
 
  • #50
pbuk said:
This module is not available in the browser so it is necessary to implement a similar algorithm "by hand".
This has an ODE solver: https://ccc-js.github.io/numeric2/index.html. I think that could be slightly abused to do integration. Or we could just use the approach in the existing module, which seems to work fine with the possible exception of the ##D_{par}## calculation that we need to revisit. I don't think it's a bad approach and seems pretty responsive (although I'd move some stuff around to make it a bit easier to see what's going on, and you can get rid of one pass of the integrator, I think). It's just grown messy and needs a dead code pruning round.
 
  • #51
Ibix said:
This has an ODE solver: https://ccc-js.github.io/numeric2/index.html. I think that could be slightly abused to do integration.
I don't think that is a very good idea: (i) the project is even more stale than Lightcone7 (no commits for 10 years); (ii) robust methods for quadrature are very different from methods for initial value problems.

Ibix said:
Or we could just use the approach in the existing module.
I think that would be better.

Ibix said:
I don't think it's a bad approach and seems pretty responsive (although I'd move some stuff around to make it a bit easier to see what's going on, and you can get rid of one pass of the integrator, I think). It's just grown messy and needs a dead code pruning round.
I agree with all of that. An approach I have found successful in the past is to extract the existing code into a stand-alone module and implement continuous integration with some high level functional tests to check you don't break anything before refactoring the existing code so that you can figure out how it works/why it doesn't.

Once you have done this you can prioritize what needs to be done next (quick fix to address bugs? rewrite algorithm for better stability/accuracy/speed? Etc...).
 
  • #52
pbuk said:
(ii) robust methods for quadrature are very different from methods for initial value problems.
While I have no comment on the referenced project, far more development of adaptive and variable step-size methods has occurred for ODEs than for quadrature. In particular, for spiky functions poorly approximated by polynomials, a given accuracy is often achieved with much less computation by casting the quadrature as an ODE and using modern ODE methods.
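As a concrete sketch of that recasting (with a stand-in integrand, and SciPy purely for illustration): treat ##I=\int_a^b f(s)\,ds## as the initial value problem ##y'(s)=f(s)##, ##y(a)=0##, so that ##y(b)=I##.

```python
import math
from scipy.integrate import quad, solve_ivp

f = lambda s: math.exp(-s * s)   # stand-in integrand

# Quadrature recast as an ODE: y'(s) = f(s), y(0) = 0, so y(3) = integral
sol = solve_ivp(lambda s, y: [f(s)], (0.0, 3.0), [0.0],
                rtol=1e-10, atol=1e-12)
ode_value = sol.y[0, -1]

quad_value = quad(f, 0, 3)[0]    # conventional adaptive quadrature
assert abs(ode_value - quad_value) < 1e-7
```

The ODE solver's adaptive stepping does the work that an adaptive quadrature rule would otherwise have to do.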
 
  • #53
Here it is https://lightcone7.github.io/LightCone7.html with the calculations split out into a separate module (source here) with the bare minimum of changes to get it to work independently.

Needs some tidying up and documentation: PM me with github user details if you want to use/take over this.
 
  • #54
PAllen said:
In particular, for spiky functions poorly approximated by polynomials, a given accuracy is often achieved with much less computation by casting the quadrature as an ODE and using modern ODE methods.
I didn't realize that, although these functions are generally smooth.

PAllen said:
Far more development of adaptive and variable step-size methods has occurred for ODEs than for quadrature.
Yes this is true in general, but there is not much implemented in Javascript for ODEs either.
 
  • #55
pbuk said:
I didn't realize that, although these functions are generally smooth.
Agreed.
pbuk said:
Yes this is true in general, but there is not much implemented in Javascript for ODEs either.
Probably true, but in other languages publicly available implementations of most any method exist. The SciPy quadrature interface @Ibix mentioned looks good; it uses adaptive step sizes. SciPy also has ODE methods.

[edit: I see this was all discussed before - browser availability is a requirement. I have no idea what is available open source in this context.]
 
  • #56
pbuk said:
I have some experience of collaborative Javascript projects. @Jorrie, do you mind if I set something up as an illustration of how to transition the calculation module to a robust modern approach?
By all means, do that. :smile:
 
  • #57
Ibix said:
Install python, man! . . .

That sounds like a good idea. Thanks for calculating the Dnow values! A comparison is shown below:

| z | Gnedin | Jorrie, 2022 | ibix |
|-------|---------|--------------|-------|
| 0.1 | 1.41004 | 1.4084 | 1.41 |
| 0.09 | 1.27214 | 1.270609 | 1.27 |
| 0.08 | 1.13354 | 1.132106 | 1.13 |
| 0.07 | 0.99426 | 0.992931 | 0.994 |
| 0.06 | 0.85427 | 0.85305 | 0.854 |
| 0.05 | 0.7136 | 0.712477 | 0.713 |
| 0.04 | 0.57225 | 0.571225 | 0.572 |
| 0.03 | 0.43023 | 0.429278 | 0.43 |
| 0.02 | 0.28748 | 0.286651 | 0.28 |
| 0.01 | 0.14408 | 0.14336 | 0.144 |
| 0 | 0 | 0.000621 | 0 |
| -0.01 | 0.14474 | 0.14526 | 0.145 |
| -0.02 | 0.29015 | 0.290567 | 0.29 |
| -0.03 | 0.43623 | 0.436536 | 0.436 |
| -0.04 | 0.58295 | 0.583161 | 0.583 |
| -0.05 | 0.73032 | 0.730433 | 0.73 |
| -0.06 | 0.87834 | 0.878358 | 0.878 |
| -0.07 | 1.02699 | 1.026926 | 1.03 |
| -0.08 | 1.17631 | 1.176124 | 1.18 |
| -0.09 | 1.32623 | 1.325968 | 1.33 |
| -0.1 | 1.47679 | 1.476416 | 1.48 |

If we plot the three Dnow vs z curves, they are just about indistinguishable. If we add the (DGnedin – Dibix) result to the plot shown in Post #36, we get the following:

[screenshot: 1652873102275.png]

In general, (DGnedin - Dibix) fluctuates around the y = 0 line. The greater deviations from the line near the two ends are probably due to round-off in the calculated Dibix values, which are quoted to three significant figures. I am still puzzled by the slightly different behaviors of the two data sets.
 
  • #58
JimJCW said:
I am still puzzled by the slightly different behaviors of the two data sets.
I'm not. Jorrie's calculation uses a fixed step size, the integration steps don't match the result steps exactly, it doesn't treat the integration to infinity properly, and it only manages to get as close as it does by using far more integration steps than it needs - in short, it needs rewriting.
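For the infinite limit specifically, one standard trick (an illustration, not a patch to the existing code) is the substitution ##u=1/s##, which maps ##\int_S^\infty T_H\,ds## onto the finite interval ##(0,1/S]## where a fixed-step rule can work: ##\int_S^\infty T_H(s)\,ds=\int_0^{1/S} T_H(1/u)\,u^{-2}\,du##.

```python
import math
from scipy.integrate import quad

def TH(s):
    """Toy Hubble-time integrand with the correct TH ~ s^-2 falloff."""
    return 1 / math.sqrt(0.7 + 0.3 * s**3 + 1e-4 * s**4)

S = 2.0
direct = quad(TH, S, math.inf)[0]                      # improper integral
# After u = 1/s the integrand is bounded on (0, 1/S], so fixed steps suffice
substituted = quad(lambda u: TH(1 / u) / u**2, 0, 1 / S)[0]
assert abs(direct - substituted) < 1e-7
```

The parameter values here are made up for illustration; only the ##s^{-2}## falloff matters for the substitution to behave.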
 
  • #59
pbuk said:
in short it needs rewriting.
It looks to me that Calculate() can be split up into three steps:
  1. Compute a list of the ##z## values requested, making sure ##z=0## is there
  2. Do the integrals ##\int_0^S\frac{1}{H(s)}ds## and ##\int_0^S\frac{1}{sH(s)}ds## for all ##S## corresponding to those ##z## and also for ##S=\infty##.
  3. Compute all output statistics for each ##z## (some of which depend on the integral up to ##S=1## or ##S=\infty##, hence doing this after the integration run).
I would expect that this is just reordering existing code, so ought to make little to no difference to the output. Then we could work on doing the integration in a better way.
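That shape might look something like this in python (a sketch with a toy flat-LCDM ##H(s)## and made-up variable names; scipy stands in here for whatever integrator the Javascript ends up with):

```python
import math
from scipy.integrate import quad

H0 = 67.74 / 978                                  # Gyr^-1
def H(s):
    """Toy flat LCDM Hubble rate as a function of stretch s."""
    return H0 * math.sqrt(0.691 + 0.309 * s**3)

def TH(s):
    return 1 / H(s)

# Step 1: requested z values, with z = 0 (i.e. s = 1) guaranteed present
zvals = sorted({0.0, 0.5, 1.0, 3.0, 9.0})
svals = [1 + z for z in zvals]

# Step 2: one cumulative pass over consecutive s intervals, plus the tail
cumTH = {}
running, prev = 0.0, 0.0
for s in svals:
    running += quad(TH, prev, s)[0]
    cumTH[s] = running                            # int_0^s TH ds
    prev = s
to_infinity = running + quad(TH, svals[-1], math.inf)[0]

# Step 3: per-z outputs reuse the tabulated integrals
Dnow = {s: abs(cumTH[s] - cumTH[1.0]) for s in svals}   # needs the s = 1 entry
Dhor = {s: cumTH[s] / s for s in svals}
```

The point is that each interval is integrated once and every output column is then pure arithmetic on the tabulated values.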

Any thoughts? I haven't looked at your github page yet @pbuk, but will do so when I'm on something with a slightly larger screen.
 
  • #60
Sounds good. The present Calculate() also has a separate T_age module, simply because there is a legacy requirement to immediately see the effect on the present age when any relevant input parameter is changed. But this could be integrated with the ##\int_0^S\frac{1}{H(s)}ds## module.
 
