How To Do Fisher Forecasting For Constraining Parameters

Discussion Overview

The discussion centers on the process of conducting a Fisher Forecast for experimental physics, specifically focusing on parameter estimation and uncertainty quantification. Participants explore the methodology, including the formation and inversion of Fisher matrices, the use of covariance matrices, and considerations regarding data and model parameters.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant outlines their approach to Fisher Forecasting, detailing the use of observables, expected noise levels, and covariance matrices from past experiments to form a Fisher matrix.
  • Another participant confirms that the diagonal elements of the inverse of the Fisher information matrix correspond to uncertainties on individual variables, cautioning about potential instability if small eigenvalues are present.
  • Concerns are raised about the number of variables relative to the constraints provided by the experiments, with a suggestion that having more variables than constraints can lead to issues.
  • Participants discuss the expected calculation time for the Fisher Forecast, indicating that while the computation may be quick, significant time may be spent on understanding and implementing the method correctly.
  • One participant mentions using various data sources (Supernova, GRB, BAO, CMB) and seeks clarification on the concept of nuisance parameters and their marginalization in the context of the Fisher Forecast.
  • Another participant clarifies the distinction between nuisance parameters and degeneracies in data, emphasizing the importance of monitoring eigenvalues of the Fisher matrix to identify potential issues.

Areas of Agreement / Disagreement

Participants largely agree on the overall methodology; the main cautions raised concern numerical stability when the Fisher matrix has small eigenvalues, the risk of parameter degeneracies, and the distinction between nuisance parameters and degeneracies in the data.

Contextual Notes

Participants note the importance of understanding the relationship between data points and uncertainties, as well as the potential for degeneracies in parameter estimation. There is an acknowledgment of the need for careful consideration of model parameters and their effects on the analysis.

xdrgnh
I'm doing a Fisher Forecast to satisfy my experimental physics requirement. I've never done anything like this before, so I'm somewhat in the dark about how to proceed. I have done some reading on Fisher Forecasting and would like to outline how I would do it, and then ask if you can tell me whether I'm right.

  1. I have a list of observables and how they vary with the parameters I want to find the error bars for.
  2. I also have a list of upcoming experiments which can constrain these parameters, with their expected noise levels and instrumental uncertainties.
  3. I have covariance matrices from past experiments that constrain these parameters.
  4. I then form my Fisher matrix with the following equation:
    [attached image: Fisher matrix equation]

  5. Once I have those matrices from all of my experiments, I invert them, add them to the covariance matrices from past experiments, and invert that to get my Fisher Information Matrix.
  6. To find the uncertainty on each of my parameters once I have the Fisher Information Matrix, I compute
    [attached image: uncertainty from the inverse Fisher matrix]
    where the 11 can be ij, and all I need to worry about are the diagonal elements of this inverse.
And then I'm done, I think. Can anyone tell me if it is really this simple, or am I missing a whole bunch of stuff? I have never done data analysis before, and I need to be fully done by the end of this month.

Thank you
 
Oh, and the model in the equation is a modified Friedmann equation, and I'm assuming all of my errors are Gaussian in nature.
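The pipeline outlined in this post can be sketched in a few lines of NumPy. Everything below (the model derivatives, noise levels, and prior covariance) is invented purely for illustration; the steps shown are forming the Fisher matrix from derivatives and Gaussian noise, folding in a past constraint, and reading uncertainties off the diagonal of the inverse.

```python
import numpy as np

# Toy setup (all numbers here are made up for illustration):
# n_obs observables, n_par parameters; dmu[k, i] = d(observable k)/d(parameter i),
# evaluated at the fiducial parameter values.
n_obs, n_par = 10, 3
rng = np.random.default_rng(0)
dmu = rng.normal(size=(n_obs, n_par))      # hypothetical model derivatives
sigma = 0.1 * np.ones(n_obs)               # expected noise per observable

# Fisher matrix for uncorrelated Gaussian noise:
# F_ij = sum_k dmu[k, i] * dmu[k, j] / sigma_k^2
F_new = dmu.T @ (dmu / sigma[:, None] ** 2)

# Combine with a prior from a past experiment by adding Fisher matrices,
# i.e. adding the INVERSE of the old covariance matrix.
C_old = np.diag([0.5, 0.5, 0.5])           # hypothetical prior covariance
F_total = F_new + np.linalg.inv(C_old)

# Forecast covariance and 1-sigma uncertainties (diagonal of the inverse).
C_forecast = np.linalg.inv(F_total)
errors = np.sqrt(np.diag(C_forecast))
```

Note that past constraints are combined at the Fisher level (adding inverse covariances), not by adding covariance matrices directly.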
 
kimbyd
Right, the diagonal elements of the inverse of the Fisher information matrix are the uncertainties on the individual variables. This is because the inverse of the Fisher Information Matrix is the Covariance matrix, whose diagonal elements are just the variances.

In practice you have to be careful with this kind of calculation, because if your Fisher information matrix has any really small eigenvalues it can make the calculations unstable. Typically this means that you are using a model with more variables than your experiment can constrain. There are ways to deal with this, but it's always a bit finicky.
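The eigenvalue check described here can be automated before attempting the inversion. A hypothetical helper, assuming the Fisher matrix `F` has already been built as a NumPy array (the `ratio_limit` default is only a rough rule of thumb):

```python
import numpy as np

def check_conditioning(F, ratio_limit=1e10):
    """Raise if the Fisher matrix has a near-degenerate direction.

    A Fisher matrix should be symmetric with all-positive eigenvalues.
    An eigenvalue much smaller than the largest signals a parameter
    (or combination of parameters) the data barely constrain, and
    inverting F will amplify numerical noise along that direction.
    """
    eigvals = np.linalg.eigvalsh(F)        # eigenvalues, sorted ascending
    if eigvals[0] <= 0:
        raise ValueError("Fisher matrix is not positive definite")
    ratio = eigvals[-1] / eigvals[0]
    if ratio > ratio_limit:
        raise ValueError(f"ill-conditioned Fisher matrix (eigenvalue ratio {ratio:.2e})")
    return ratio

# Example: a well-conditioned 2x2 Fisher matrix passes the check.
F_good = np.array([[4.0, 1.0],
                   [1.0, 3.0]])
print(check_conditioning(F_good))
```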
 
xdrgnh
I'm trying to constrain 5 parameters, and I will be doing all of this in Mathematica. In total, from past experiments, I have roughly 600 data points. In your experience, how long does it take to perform a Fisher Forecast?
 
kimbyd
In terms of calculation time, it should take a couple of seconds at the most (likely much less). All of the time will be spent on figuring out precisely how to do it, and getting it to work.

The number of data points isn't as important as what those data points do to the uncertainties. You could have ten million data points and there could still be a problem if you have variables (or a combination of variables) that aren't affected by the data.

To take one example, if you make use of supernova data, and allow the absolute magnitude of the supernovae to be a free parameter, then you cannot measure ##H_0## from the supernova data at all. You can get ##H_0## only if you combine the supernova data with something else (such as an estimate of the absolute magnitude of the supernovae, or of ##H_0## from a different source such as the CMB).
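This degeneracy is easy to see in a toy Fisher calculation: if the absolute magnitude ##M## and ##H_0## enter every supernova's apparent magnitude only through one combination, the derivative columns are proportional and the Fisher matrix picks up a zero eigenvalue. A hypothetical NumPy illustration (the derivative values are schematic, not a real supernova likelihood):

```python
import numpy as np

# Toy model: each supernova's apparent magnitude depends on M (absolute
# magnitude) and log10(H0) only through the combination M - 5*log10(H0),
# so the two derivative columns are exactly proportional.
n_sn = 50
dm_dM = np.ones(n_sn)                      # d(mag)/dM = 1 for every SN
dm_dlogH0 = -5.0 * np.ones(n_sn)           # d(mag)/d(log10 H0) = -5
dmu = np.column_stack([dm_dM, dm_dlogH0])
sigma = 0.15                               # magnitude scatter per SN

F = dmu.T @ dmu / sigma**2                 # 2x2 Fisher matrix
eigvals = np.linalg.eigvalsh(F)
# One eigenvalue is (numerically) zero: supernova data alone cannot
# separate M from H0, no matter how many supernovae are added.
print(eigvals)
```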
 
xdrgnh
Well, I'm using Supernova data, GRB data, BAO data, and CMB data. I haven't begun yet because my adviser first wants me to outline the Fisher Forecast method to her. Once she thinks I understand it well enough, she will give me papers for upcoming experiments, and then I'll be on my way to Fisher Forecasting on Tuesday of this upcoming week. Those variables that are not affected by the data: are those the nuisance parameters you are supposed to marginalize over?
 
kimbyd

That sounds like a good combination of data that should work well to constrain most cosmological parameters.

As for nuisance parameters, I think you're misunderstanding slightly. The absolute magnitude is a nuisance parameter because it's not one that helps you understand the expansion of the universe being measured. And yes, you have to marginalize over it.

What I was describing, however, was a degeneracy in the data. And while I described a degeneracy between a nuisance parameter and a parameter of interest (##H_0##), degeneracies can occur between any parameters. Ideally you won't have this problem. If you do, it can be tricky to deal with. I'd forge ahead assuming it won't happen for now. Just bear in mind that it might, and the way you can tell is by looking at the eigenvalues of your Fisher matrix. If any of them are much, much smaller than the others, you've got a degeneracy (how much smaller depends upon the calculation method, but you're likely to have issues if one eigenvalue is more than about ##10^{10}## times another).
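In the Fisher formalism, marginalizing over a nuisance parameter amounts to inverting the full matrix and then keeping the sub-block for the parameters of interest; fixing the parameter instead (pretending it is known exactly) means deleting its row and column before inverting. A hypothetical 3-parameter example, with all matrix entries made up for illustration:

```python
import numpy as np

# Hypothetical 3-parameter Fisher matrix: parameters of interest are
# indices 0 and 1; index 2 is a nuisance parameter (e.g. the supernova
# absolute magnitude).
F = np.array([[10.0, 2.0, 3.0],
              [ 2.0, 8.0, 1.0],
              [ 3.0, 1.0, 5.0]])

# Marginalizing over the nuisance parameter: invert the FULL Fisher
# matrix, then keep the sub-block for the parameters of interest.
C_marg = np.linalg.inv(F)[:2, :2]
sigma_marg = np.sqrt(np.diag(C_marg))

# Fixing the nuisance parameter instead: delete its row and column
# BEFORE inverting.
C_fixed = np.linalg.inv(F[:2, :2])
sigma_fixed = np.sqrt(np.diag(C_fixed))

# Marginalized errors are never smaller than fixed-parameter errors.
print(sigma_marg, sigma_fixed)
```

The gap between the two sets of errors shows how much constraining power is lost to the nuisance parameter.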
 
