# Fisher Forecasting for the Euclid Survey: Help Needed

1. Jun 27, 2017

### xdrgnh

I'm trying to recreate the results of this paper: https://arxiv.org/pdf/1607.08016.pdf

to obtain the constraints on the matter density Ω_M and the Hubble constant h.

However, every time I try to reproduce their results, my Fisher matrix has elements of order 10^14, which is far too high. I suspect this is happening because the Vsurvey I'm calculating is so large, and I have no idea how they obtained their results. I'll attach my Mathematica code for the F_11 element of the Fisher matrix. I don't know if I'm misunderstanding a formula, if it's a Mathematica error, or if there is some missing step.

```mathematica
(* Fiducial background: H(z) and Da(z) at MM = 0.2984, hh = 0.688 (c = 300000 km/s) *)
Hfid[z_] := 68.8 Sqrt[0.7015117571500769 + 0.2984 (1 + z)^3 +
    0.00008824284992310034 (1 + z)^4];
Dafid[z_] := 300000/(1 + z) NIntegrate[1/Hfid[Z], {Z, 0, z}];
fid[z_] := {H -> Hfid[z], Da -> Dafid[z]};

(* H(z) and Da(z) as functions of the parameters MM = Omega_M and hh = h *)
Hpar[z_] := 100 hh Sqrt[1 - 0.000041769223554/hh^2 - MM + MM (1 + z)^3 +
    0.000041769223554 (1 + z)^4/hh^2];
Dapar[z_] := 300000/(1 + z) NIntegrate[1/Hpar[Z], {Z, 0, z}];

(* Derivatives of the observables with respect to MM, at the fiducial point *)
dHdM[z_] := D[Hpar[z], MM] /. {MM -> .2984, hh -> .688};
dDadM[z_] := D[Dapar[z], MM] /. {MM -> .2984, hh -> .688};

(* Sub-Fisher elements in the (H, Da) observable basis for one redshift bin.
   Pobs and Veff are defined below; f[z] is the upper k limit. *)
fHH[z_] := NIntegrate[(E^(0)) ((D[Log[Pobs, H]] /. fid[z])^2)*
    (Veff /. fid[z]) k^2/(8 Pi^2), {u, -1, 1}, {k, 0, f[z]}];
fHDa[z_] := NIntegrate[(E^(0)) (D[Log[Pobs, H]] /. fid[z])*
    (D[Log[Pobs, Da]] /. fid[z]) (Veff /. fid[z]) k^2/(8 Pi^2),
    {u, -1, 1}, {k, 0, f[z]}];
fDaDa[z_] := NIntegrate[(E^(0)) ((D[Log[Pobs, Da]] /. fid[z])^2)*
    (Veff /. fid[z]) k^2/(8 Pi^2), {u, -1, 1}, {k, 0, f[z]}];

(* F_11: propagate to Omega_M and sum over the redshift bins z = 0.7 .. 2.1 *)
Parallelize[Total[Table[
    fHH[z] dHdM[z]^2 + 2 dDadM[z] dHdM[z] fHDa[z] + dDadM[z]^2 fDaDa[z],
    {z, .7, 2.1, .1}]]]
```

This code is supposed to calculate F_11, using the definitions

```mathematica
Pmatter = E^(-k^2 u^2 rr^2)*
   ((8 Pi^2 (300000)^4 .002*2.45*10^-9)/(25 (100 h)^4 M^2))*
   (0.02257/(h^2 M) Tb + ((M - 0.02257)/M) Tc)^2*
   ((Gz/Go)^2) (k/.002)^.96;

Pobs = ((Dref)^2 H)/(Da^2 Href) Pg;

Veff = (((1.2 Pg)/(1.2 Pg + 1))^2) Vsurvey;

Pg = (1 + z) (1 + (0.4840378144001318 k^2)/((k^2 + u^2) Sqrt[1 + z]))^2*
   Pmatter;
```

If anyone has had similar issues, can offer any help, or has done this calculation before, I would greatly appreciate it.

Oh, and my survey volume is Vsurvey = 5.98795694781456*^11 Mpc^3.

2. Jun 28, 2017

### kimbyd

It's very difficult to parse this code as written, so I don't know where the issue lies. However, very large Fisher-matrix values should not be occurring: they would indicate parameters with close to zero variance, i.e. parameters almost perfectly determined by the data. That's not likely to be the case.

One would, however, expect a Fisher matrix with incredibly tiny eigenvalues on occasion: this will happen if there are combinations of variables that are not constrained by the data at all.

This may indicate that somewhere in the code you've mixed up your Fisher matrix and your covariance matrix. Or, if you're computing derivatives as shown at the bottom of your post, that you've hit a singularity.
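To make the first point concrete: a rough numeric sketch (Python, with made-up matrix entries, not values from the paper) of what Fisher elements of order 10^14 would imply for the forecast errors:

```python
import numpy as np

# Hypothetical 2x2 Fisher matrix with entries of order 1e14, as reported
# in the thread; the specific numbers here are illustrative only.
F = np.array([[1.0e14, 3.0e13],
              [3.0e13, 5.0e13]])

# Marginalized 1-sigma errors are the square roots of the diagonal of the
# inverse Fisher matrix (i.e. of the covariance matrix).
cov = np.linalg.inv(F)
sigmas = np.sqrt(np.diag(cov))
print(sigmas)  # errors of order 1e-7 -- far tighter than the expected ~1e-3
```

Errors of order 10^-7 on Ω_M and h would be implausibly tight, which is why elements of 10^14 signal a missing suppression somewhere.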

3. Jun 29, 2017

### xdrgnh

I checked each derivative myself and none of them has any singularities. My survey volume is of order 10^11, and that is why my answer comes out at 10^14. I suspect I am skipping some step that would heavily suppress the huge survey volume. Is there some kind of normalization I can apply to my galaxy power spectrum?

4. Jun 30, 2017

### kimbyd

There may be some type of observational uncertainty that needs to be added to make the result sensible. What sorts of uncertainties are you assuming in the underlying data?

5. Jun 30, 2017

### xdrgnh

In the survey a redshift error of Δz = 0.001(1 + z) is assumed; residual noise is explicitly neglected. For the fiducial parameters the uncertainties are Ω_M ± 0.0096 and h ± 0.0075. The error bars I'm supposed to get for each parameter are ±0.0015 and ±0.0010, respectively. Now, as I write down the fiducial error bars, are they supposed to play some role in calculating my Fisher matrix?

Last edited: Jun 30, 2017
6. Jul 1, 2017

### kimbyd

The parameter errors shouldn't enter the calculation. Only the experimental ones should.

The redshift error is usually the least significant source of error for such surveys. What are the other observables? And are you making sure to use a data set that is stochastic?

7. Jul 2, 2017

### xdrgnh

My observables are the Hubble parameter H(z) and the angular diameter distance Da(z). From those observables the Fisher matrix is propagated to a Fisher matrix for the parameters Ω_M and little h. My problem is that the derivative of the log of the power spectrum is not small enough to balance the large Vsurvey. There is one parameter given in the paper that I haven't used: they say the number of galaxies observed is 50*10^6. I initially thought that number is used to calculate the number density. However, do you think it can be used to offset the huge Vsurvey, which, depending on the z values, is between 10^9 and 10^11?
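For what it's worth, the usual role of N_gal = 50×10^6 is indeed to set the comoving number density n = N_gal/V_survey, which enters the effective volume through the shot-noise factor (nP/(1 + nP))^2 and can strongly suppress the raw V_survey. A rough sketch (Python; the power-spectrum amplitudes are placeholders, not the paper's values):

```python
# Sketch of how the galaxy count tames the raw survey volume via shot noise.
# N_gal and V_survey are taken from the thread; the P_g values below are
# placeholder amplitudes, not the paper's power spectrum.
N_gal = 50e6              # total number of observed galaxies
V_survey = 5.99e11        # survey volume in Mpc^3 (from the first post)

n = N_gal / V_survey      # comoving number density, ~8e-5 Mpc^-3

for P_g in [1e3, 1e4, 1e5]:                  # galaxy power spectrum in Mpc^3
    factor = (n * P_g / (1 + n * P_g))**2    # effective-volume suppression
    print(P_g, factor, factor * V_survey)
```

Where nP << 1 the effective volume is a small fraction of the raw V_survey, which is the kind of suppression you seem to be missing.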

8. Jul 2, 2017

### xdrgnh

Oh, and I propagate the matrix in the following way:

F_11 = (f_11)*D[H,M]^2 + 2*(f_12)*D[H,M]*D[Da,M] + (f_22)*D[Da,M]^2

F_22 = (f_11)*D[H,h]^2 + 2*(f_12)*D[H,h]*D[Da,h] + (f_22)*D[Da,h]^2

F_12 = F_21 = (f_11)*D[H,M]*D[H,h] + (f_12)*D[H,M]*D[Da,h] + (f_21)*D[Da,M]*D[H,h] + (f_22)*D[Da,M]*D[Da,h]

where M is Ω_M; the parameters are q1 = M and q2 = h, and the observables are p1 = H and p2 = Da.
Does this look like a faithful representation of the last formula?
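The bin-by-bin propagation above is just the Jacobian sandwich F = Jᵀ f J with J_ai = ∂p_a/∂q_i, which gives an easy way to cross-check the expanded formulas. A minimal sketch (Python, with dummy numbers standing in for the derivatives and sub-Fisher elements):

```python
import numpy as np

# Dummy symmetric observable-space sub-Fisher matrix f and Jacobian J.
# Rows/columns of f index the observables p = (H, Da); columns of J index
# the parameters q = (Omega_M, h). All numbers here are illustrative.
f = np.array([[4.0, 1.0],
              [1.0, 9.0]])
J = np.array([[0.5, 2.0],    # [ dH/dM,  dH/dh  ]
              [3.0, 0.7]])   # [ dDa/dM, dDa/dh ]

# Matrix form of the propagation
F = J.T @ f @ J

# Expanded form, term by term (note the squared derivatives on the diagonal)
F11 = f[0, 0]*J[0, 0]**2 + 2*f[0, 1]*J[0, 0]*J[1, 0] + f[1, 1]*J[1, 0]**2
F22 = f[0, 0]*J[0, 1]**2 + 2*f[0, 1]*J[0, 1]*J[1, 1] + f[1, 1]*J[1, 1]**2
F12 = (f[0, 0]*J[0, 0]*J[0, 1] + f[0, 1]*(J[0, 0]*J[1, 1] + J[1, 0]*J[0, 1])
       + f[1, 1]*J[1, 0]*J[1, 1])

print(np.allclose(F, [[F11, F12], [F12, F22]]))  # True
```

If the expanded expressions ever disagree with J.T @ f @ J, an index or a square has been dropped somewhere.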

9. Jul 5, 2017

### kimbyd

Sorry for the delayed response. Haven't been checking my e-mail over the 4th of July weekend.

Unfortunately I'm not really willing to spend the time required to parse and understand an equation this complicated, though you might want to consider using LaTeX to display equations like this (see the LaTeX link near the bottom of every post page for instructions).

All I can suggest are general debugging tips:
1. Simplify the system to one that you can fully understand, and make sure your Fisher matrix code does the expected thing. If it doesn't behave as expected, you can use that to debug. For example, you might reduce the system to only a handful of data points (say, two or three), and see if you can't come up with an alternative method of obtaining the result by hand with so few data points.
2. Make sure the scaling of the system has the expected result. For example, if you halve the number of data samples, your errors should be increased by a factor of approximately $\sqrt{2}$. Note that for this to work you can't change the overall properties of the dataset: you'll get a very different answer if you selectively remove only the nearby data samples. See if you can come up with other scalings that the answer should respect. If you get a discrepancy, you can use that to debug the problem.
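Tip 2 can be tried on the simplest possible case first: estimating a mean from N Gaussian samples, where the Fisher information is N/σ² and the forecast error is σ/√N (a toy sketch, not the survey calculation):

```python
import math

# Toy check of the sqrt(2) scaling: for N i.i.d. Gaussian samples with
# known standard deviation sigma, the Fisher information for the mean is
# N / sigma^2, so the forecast error is sigma / sqrt(N).
sigma = 0.3

def forecast_error(n_samples):
    fisher = n_samples / sigma**2
    return 1.0 / math.sqrt(fisher)

# Halving the number of samples should inflate the error by sqrt(2).
ratio = forecast_error(50) / forecast_error(100)
print(ratio)  # ~1.414
```

A Fisher pipeline that fails this kind of scaling check on a toy model has a bug independent of any cosmology.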

10. Jul 8, 2017

### xdrgnh

Thank you so much for taking the time to write out this detailed response; I greatly appreciate all of your help. I'm happy to say that I found out what I was doing wrong: specifically, D[Log[Pobs, H]] should have been written as D[Log[Pobs], H]. (In Mathematica, Log[Pobs, H] is the base-Pobs logarithm of H, and D with a single argument returns its argument unchanged, so the expression evaluated without error to the wrong quantity.) There were also a few steps not explicitly mentioned in the paper that I wasn't doing; thankfully the authors were able to explain them to me. Mainly, I had to evaluate the power spectrum at values of z between the bins, and the effective volume had to be evaluated at the bin widths. I was able to reproduce their results and can move on to the next part of my research. Speaking of that, I'm about to ask another question.

11. Jul 10, 2017

### kimbyd

Great! Glad to hear you figured it out!