Graduate Fisher Forecasting For EUCLID Survey Help

Summary
The discussion revolves around recreating results from a specific paper to derive constraints for matter density and the Hubble constant using a Fisher Matrix approach. The user consistently encounters excessively high values in their Fisher Matrix, suggesting potential issues with their calculations, particularly related to the large survey volume. Observational uncertainties, including redshift errors and parameter uncertainties, are discussed as potential factors influencing the results. The user seeks advice on whether the number of observed galaxies can help mitigate the large survey volume effect and queries the proper representation of their Fisher Matrix equations. General debugging strategies are suggested, including simplifying the system and ensuring the scaling of the data aligns with expectations.
xdrgnh
I'm trying to recreate the results of this paper https://arxiv.org/pdf/1607.08016.pdf
to obtain the constraints for the matter density and Hubble constant h.

However, every time I try to recreate their results, my Fisher matrix has elements of order 10^14, which is far too high. I suspect this is happening because the Vsurvey I'm calculating is so large, and I have no idea how they were able to obtain their results. I'll attach my Mathematica code for the F_11 element of the Fisher matrix. I don't know if I'm misunderstanding a formula, if it's a Mathematica error, or if there is some missing step.

Parallelize[
Total[Table[(NIntegrate[(E^(0)) ((D[
Log[Pobs,
H]] /. {Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
H -> 68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + z)^3 +
0.00008824284992310034` (1 +
z)^4)})^2)*(Veff /. {Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
H -> 68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + z)^3 +
0.00008824284992310034` (1 + z)^4)})*
k^2/(8*Pi^2), {u, -1, 1}, {k, 0,
f[z]}]) ((D[(100 hh Sqrt[
1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + z)^3 + (
0.000041769223554` (1 + z)^4)/(hh)^2]),
MM] /. {MM -> .2984, hh -> .688})^2) +
2*(D[(300000/(1 + z)*
NIntegrate[
1/(100 hh Sqrt[
1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + Z)^3 + (
0.000041769223554` (1 + Z)^4)/(hh)^2]), {Z, 0, z}]),
MM] /. {MM -> .2984,
hh -> .688})*(D[(100 hh Sqrt[
1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + z)^3 + (
0.000041769223554` (1 + z)^4)/(hh)^2]),
MM] /. {MM -> .2984, hh -> .688})*
NIntegrate[(E^(0)) (D[
Log[Pobs,
H]] /. {H -> (68.8` Sqrt[
0.7015117571500769` + 0.2984` (1 + z)^3 +
0.00008824284992310034` (1 + z)^4]),
Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0,
z}])}) (D[
Log[Pobs,
Da]] /. {Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
H -> (68.8` Sqrt[
0.7015117571500769` + 0.2984` (1 + z)^3 +
0.00008824284992310034` (1 +
z)^4])})*(Veff /. {H -> (68.8` Sqrt[
0.7015117571500769` + 0.2984` (1 + z)^3 +
0.00008824284992310034` (1 + z)^4]),
Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0,
z}])})*k^2/(8*Pi^2), {u, -1, 1}, {k, 0,
f[z]}] + ((D[(300000/(1 + z)*
NIntegrate[
1/(100 hh Sqrt[
1 - 0.000041769223554`/(hh)^2 - MM + MM (1 + Z)^3 + (
0.000041769223554` (1 + Z)^4)/(hh)^2]), {Z, 0, z}]),
MM] /. {MM -> .2984,
hh -> .688})^2)*(NIntegrate[(E^(0)) ((D[
Log[Pobs,
Da]] /. {H -> (68.8` Sqrt[
0.7015117571500769` + 0.2984` (1 + z)^3 +
0.00008824284992310034` (1 + z)^4]),
Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0,
z}])})^2)*(Veff /. {Da -> (300000/(1 + z)*
NIntegrate[
1/(68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + Z)^3 +
0.00008824284992310034` (1 + Z)^4)), {Z, 0, z}]),
H -> 68.8` \[Sqrt](0.7015117571500769` +
0.2984` (1 + z)^3 +
0.00008824284992310034` (1 + z)^4)})*
k^2/(8*Pi^2), {u, -1, 1}, {k, 0, f[z]}]), {z, .7, 2.1, .1}]]]

This code is supposed to calculate F_11, using the following definitions:

Pmatter = E^(-k^2*u^2*rr^2)*(((8 Pi^2*(300000)^4*0.002*2.45*10^-9)/(25*((100*h)^4)*M^2))*(0.02257/(h^2*M)*Tb + ((M - 0.02257)/M)*Tc)^2)*((Gz/Go)^2)*(k/0.002)^0.96

Pobs = ((Dref)^2*H)/(Da^2*Href)*Pg;
Veff = (((1.2*Pg)/(1.2*Pg + 1))^2)*Vsurvey;

Pg = (1 + z)*(1 + (0.4840378144001318 k^2)/((k^2 + u^2) Sqrt[1 + z]))^2*Pmatter


If anyone has had similar issues, can offer any help, or has done this calculation before, I would greatly appreciate it.

Oh, and my Vsurvey is Vsurvey = 5.98795694781456*^11 (Mpc)^3.
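As a rough sanity check on that number, the comoving survey volume can be estimated directly. The sketch below (in Python for illustration) assumes a Euclid-like sky area of 15,000 deg^2, which is not stated in the thread, together with the fiducial flat ΛCDM parameters used above (H0 = 68.8 km/s/Mpc, Ω_M = 0.2984, radiation neglected); it lands in the same 10^11 Mpc^3 ballpark as the quoted Vsurvey.

```python
import math

# Order-of-magnitude check of the comoving survey volume over 0.7 < z < 2.1.
# The 15,000 deg^2 sky area is an assumed Euclid-like value, not from the thread.
C_KMS = 299792.458          # speed of light, km/s
H0 = 68.8                   # km/s/Mpc
OM = 0.2984

def E(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(1.0 - OM + OM * (1.0 + z) ** 3)

def comoving_distance(z, n=2000):
    """r(z) = c * integral_0^z dz'/H(z'), trapezoidal rule, in Mpc."""
    zs = [i * z / n for i in range(n + 1)]
    vals = [1.0 / E(zz) for zz in zs]
    integral = (z / n) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return C_KMS / H0 * integral

def survey_volume(z_min, z_max, area_deg2):
    """Comoving volume of the shell z_min < z < z_max over the given sky area."""
    omega = area_deg2 * (math.pi / 180.0) ** 2           # sky area in steradians
    r_min, r_max = comoving_distance(z_min), comoving_distance(z_max)
    return omega / 3.0 * (r_max ** 3 - r_min ** 3)       # Mpc^3

V = survey_volume(0.7, 2.1, 15000.0)
print(f"V_survey ~ {V:.3e} Mpc^3")    # order 10^11, consistent with the thread
```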
 
It's very difficult to parse this code as written, so I don't know where the issue lies. However, very large Fisher matrix values should not be occurring. Those would indicate values which have close to zero variance, i.e. are almost perfectly-determined by the data. That's not likely to be the case.

One would, however, expect a Fisher matrix with incredibly tiny eigenvalues on occasion: this will happen if there are combinations of variables that are not constrained by the data at all.

This may indicate that somewhere in the code you've mixed up your Fisher matrix and your Covariance matrix. Or, if you're computing derivatives as shown at the bottom of your post, that you've got a singularity.
 
kimbyd said:
It's very difficult to parse this code as written, so I don't know where the issue lies. However, very large Fisher matrix values should not be occurring. Those would indicate values which have close to zero variance, i.e. are almost perfectly-determined by the data. That's not likely to be the case.

One would, however, expect a Fisher matrix with incredibly tiny eigenvalues on occasion: this will happen if there are combinations of variables that are not constrained by the data at all.

This may indicate that somewhere in the code you've mixed up your Fisher matrix and your Covariance matrix. Or, if you're computing derivatives as shown at the bottom of your post, that you've got a singularity.
I checked each derivative myself and none of them have any singularities. My survey volume is of the order of 10^11 and that is why my answer comes out to 10^14. I suspect I am not doing some step which would heavily suppress my huge survey volume size. Is there a type of normalization I can do to my galaxy power spectrum?
 
xdrgnh said:
I checked each derivative myself and none of them have any singularities. My survey volume is of the order of 10^11 and that is why my answer comes out to 10^14. I suspect I am not doing some step which would heavily suppress my huge survey volume size. Is there a type of normalization I can do to my galaxy power spectrum?
There may be some type of observational uncertainty that needs to be added to make the result sensible. What sorts of uncertainties are you assuming in the underlying data?
 
kimbyd said:
There may be some type of observational uncertainty that needs to be added to make the result sensible. What sorts of uncertainties are you assuming in the underlying data?

In the survey, a redshift error of Δz = 0.001(1+z) is assumed, and residual noise is explicitly neglected. For the fiducial parameters the uncertainties are Ω_M ± 0.0096 and h ± 0.0075. The error bars I'm supposed to get for each parameter are ±0.0015 and ±0.0010, respectively. Now, as I write down the fiducial error bars, are they supposed to play some role in calculating my Fisher matrix?
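For scale, that redshift error enters the forecast as a radial damping of the observed power spectrum: σ_z = 0.001(1+z) corresponds to a comoving smearing σ_r = c σ_z / H(z), which multiplies the spectrum by exp(-k² μ² σ_r²) — the E^(-k^2 u^2 rr^2) factor in the Pmatter definition above. A minimal sketch, assuming the thread's fiducial parameters:

```python
import math

# Radial smearing scale from the spectroscopic redshift error
# sigma_z = 0.001 (1 + z), with the thread's fiducial flat-LCDM H(z).
C_KMS = 299792.458
H0, OM = 68.8, 0.2984

def H(z):
    """H(z) in km/s/Mpc for flat LCDM (radiation neglected)."""
    return H0 * math.sqrt(1.0 - OM + OM * (1.0 + z) ** 3)

def sigma_r(z):
    """Comoving radial smearing in Mpc: sigma_r = c * sigma_z / H(z)."""
    return C_KMS * 0.001 * (1.0 + z) / H(z)

for z in (0.7, 1.0, 2.1):
    print(f"z = {z}: sigma_r = {sigma_r(z):.2f} Mpc")
```

A few Mpc of smearing only damps modes with k μ of order 1/σ_r and above, which is why this error source is usually subdominant for a spectroscopic survey.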
 
xdrgnh said:
In the survey, a redshift error of Δz = 0.001(1+z) is assumed, and residual noise is explicitly neglected. For the fiducial parameters the uncertainties are Ω_M ± 0.0096 and h ± 0.0075. The error bars I'm supposed to get for each parameter are ±0.0015 and ±0.0010, respectively. Now, as I write down the fiducial error bars, are they supposed to play some role in calculating my Fisher matrix?
The parameter errors shouldn't be related. Only the experimental ones.

The redshift error is usually the least significant source of error for such surveys. What are the other observables? And are you making sure to use a data set that is stochastic?
 
kimbyd said:
The parameter errors shouldn't be related. Only the experimental ones.

The redshift error is usually the least significant source of error for such surveys. What are the other observables? And are you making sure to use a data set that is stochastic?
My observables are the Hubble parameter H(z) and the angular diameter distance Da(z). The Fisher matrix for those observables is then propagated to a Fisher matrix for the parameters Ω_M and little h. My problem is that the derivative of the log of the power spectrum is not a small enough number to balance the large Vsurvey. There is one parameter given in the paper that I haven't utilized: they say the number of galaxies observed is 50*10^6. I initially thought that number is used to calculate the number density. However, do you think it can be used to offset the huge Vsurvey, which depending on the z values is between 10^9 and 10^11?
 
Oh and I propagate the matrix in the following way

F_11 = (f_11)*D[H,M]^2 + 2*D[Da,M]*D[H,M]*(f_12) + (f_22)*(D[Da,M])^2

F_22 = (f_11)*D[H,h]^2 + 2*D[Da,h]*D[H,h]*(f_12) + (f_22)*(D[Da,h])^2

F_12 = F_21 = (f_11)*D[H,M]*D[H,h] + (f_12)*D[H,M]*D[Da,h] + (f_21)*D[Da,M]*D[H,h] + (f_22)*D[Da,M]*D[Da,h]

where M is Ω_M; q1 is M and q2 is h, while p1 is H and p2 is Da.
Does this look like a faithful representation of the last formula?
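The component expressions above are just the expansion of the standard Jacobian propagation F' = Jᵀ f J, with J[i][j] = ∂p_i/∂q_j, p = (H, Da), q = (Ω_M, h). A small numerical sketch (the derivative values are arbitrary placeholders, not real cosmology) can confirm the matrix form and the expanded F_11 agree:

```python
import numpy as np

# Sanity check: F' = J^T f J reproduces the expanded component formulas.
f = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # symmetric Fisher matrix in (H, Da)

# Placeholder partial derivatives, purely for illustration:
dH_dM, dH_dh = 0.8, 1.5
dDa_dM, dDa_dh = -0.6, 0.9
J = np.array([[dH_dM, dH_dh],
              [dDa_dM, dDa_dh]])      # rows: p = (H, Da); columns: q = (M, h)

F = J.T @ f @ J                       # Fisher matrix in (Omega_M, h)

# Component-by-component check against the expanded F_11:
F11 = f[0, 0] * dH_dM**2 + 2 * f[0, 1] * dH_dM * dDa_dM + f[1, 1] * dDa_dM**2
assert np.isclose(F[0, 0], F11)
print(F)
```

Because f is symmetric (f_12 = f_21), pairing the cross terms either way gives the same number, so a transcription slip there would not by itself produce 10^14-sized elements.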
 
xdrgnh said:
Oh and I propagate the matrix in the following way

F_11 = (f_11)*D[H,M]^2 + 2*D[Da,M]*D[H,M]*(f_12) + (f_22)*(D[Da,M])^2

F_22 = (f_11)*D[H,h]^2 + 2*D[Da,h]*D[H,h]*(f_12) + (f_22)*(D[Da,h])^2

F_12 = F_21 = (f_11)*D[H,M]*D[H,h] + (f_12)*D[H,M]*D[Da,h] + (f_21)*D[Da,M]*D[H,h] + (f_22)*D[Da,M]*D[Da,h]

where M is Ω_M; q1 is M and q2 is h, while p1 is H and p2 is Da.
Does this look like a faithful representation of the last formula?
Sorry for the delayed response. Haven't been checking my e-mail over the 4th of July weekend.

Unfortunately I'm not really willing to spend the time required to parse and understand an equation this complicated, though you might want to consider using LaTeX to display equations like this (see the LaTeX link near the bottom of every post page for instructions).

All I can suggest are general debugging tips:
1. Simplify the system to one that you can fully understand, and make sure your Fisher matrix code does the expected thing. If it doesn't behave as expected, you can use that to debug. For example, you might reduce the system to only a handful of data points (say, two or three), and see if you can't come up with an alternative method of obtaining the result by hand with so few data points.
2. Make sure the scaling of the system has the expected result. For example, if you halve the number of data samples, your errors should be increased by a factor of approximately ##\sqrt{2}##. Note that for this to work you can't change the overall properties of the dataset: you'll get a very different answer if you selectively remove only the nearby data samples. See if you can come up with other scalings that the answer should respect. If you get a discrepancy, you can use that to debug the problem.
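The scaling check in point 2 can be made concrete with a toy model. For N independent Gaussian measurements of a mean μ with per-sample noise σ, the Fisher information is F = N/σ², so the forecast error σ_μ = 1/√F grows by √2 when N is halved; any correctly implemented Fisher code should respect the analogous scaling. A minimal sketch:

```python
import math

# Toy model: Fisher information for the mean of N Gaussian samples.
# F = N / sigma^2, so the forecast error is sigma / sqrt(N).
def fisher_error(n_samples, sigma=0.1):
    F = n_samples / sigma**2          # 1x1 "Fisher matrix" for the mean
    return 1.0 / math.sqrt(F)

err_full = fisher_error(1000)
err_half = fisher_error(500)
print(err_half / err_full)            # ratio should be sqrt(2) ~ 1.414
```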
 
kimbyd said:
Sorry for the delayed response. Haven't been checking my e-mail over the 4th of July weekend.

Unfortunately I'm not really willing to spend the time required to parse and understand an equation this complicated, though you might want to consider using LaTeX to display equations like this (see the LaTeX link near the bottom of every post page for instructions).

All I can suggest are general debugging tips:
1. Simplify the system to one that you can fully understand, and make sure your Fisher matrix code does the expected thing. If it doesn't behave as expected, you can use that to debug. For example, you might reduce the system to only a handful of data points (say, two or three), and see if you can't come up with an alternative method of obtaining the result by hand with so few data points.
2. Make sure the scaling of the system has the expected result. For example, if you halve the number of data samples, your errors should be increased by a factor of approximately ##\sqrt{2}##. Note that for this to work you can't change the overall properties of the dataset: you'll get a very different answer if you selectively remove only the nearby data samples. See if you can come up with other scalings that the answer should respect. If you get a discrepancy, you can use that to debug the problem.
Thank you so much for taking the time to write out this detailed response; I greatly appreciated all of your help. I'm happy to say that I found out what I was doing wrong: specifically, D[Log[Pobs,H]] should have been written as D[Log[Pobs],H]. There were also a few steps, not explicitly mentioned in the paper, that I wasn't doing; thankfully the authors were able to explain them to me. Mainly, I had to evaluate the power spectrum at values of z in between the bins, and the effective volume had to be evaluated at the bin widths. I was able to reproduce their results and can move on to the next part of my research. Speaking of that, I'm about to ask another question.
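The distinction matters because in Mathematica the two-argument form Log[b, x] is the base-b logarithm, so Log[Pobs, H] is log base Pobs of H rather than ln(Pobs). The intended quantity, ∂ln P/∂H = (∂P/∂H)/P, can be checked numerically; here is a finite-difference sketch in Python on a toy power spectrum P(H) = A/H² (a placeholder, not the paper's Pobs):

```python
import math

# Check that d ln P / dH = (dP/dH) / P via central finite differences,
# using a toy spectrum P(H) = A / H^2 with analytic d ln P/dH = -2/H.
def P(H, A=1.0e4):
    return A / H**2

H0, eps = 68.8, 1e-4
dlnP_numeric = (math.log(P(H0 + eps)) - math.log(P(H0 - eps))) / (2 * eps)
dlnP_analytic = -2.0 / H0
print(dlnP_numeric, dlnP_analytic)
```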
 
xdrgnh said:
Thank you so much for taking the time to write out this detailed response; I greatly appreciated all of your help. I'm happy to say that I found out what I was doing wrong: specifically, D[Log[Pobs,H]] should have been written as D[Log[Pobs],H]. There were also a few steps, not explicitly mentioned in the paper, that I wasn't doing; thankfully the authors were able to explain them to me. Mainly, I had to evaluate the power spectrum at values of z in between the bins, and the effective volume had to be evaluated at the bin widths. I was able to reproduce their results and can move on to the next part of my research. Speaking of that, I'm about to ask another question.
Great! Glad to hear you figured it out!
 
