Covariance matrix with asymmetric uncertainties

AI Thread Summary
The discussion concerns constructing a covariance matrix for a large dataset with asymmetric uncertainties while calculating the Chi-Squared for a fit of cosmic proton flux data. The challenge lies in representing the asymmetric uncertainties in the covariance matrix without rebuilding it at each iteration of the minimization. Suggestions include using symmetric uncertainties for an initial estimate and then plugging in the direction-dependent uncertainties for a re-run, or letting the minimization program call an externally calculated likelihood, which can also accommodate more complex uncertainty models. The goal is to find the best-fit parameters of a cosmic ray flux formula while properly accounting for the asymmetric uncertainties; a likelihood-based approach is recommended as it may represent the uncertainties more accurately.
Daaavde
Hello everyone, I'm currently building the covariance matrix of a large dataset in order to calculate the Chi-Squared. The covariance matrix has this form:

$$
\begin{bmatrix}
\sigma^2_{1,\text{stat}} + \sigma^2_{1,\text{syst}} & \rho_{12}\,\sigma_{1,\text{syst}}\,\sigma_{2,\text{syst}} & \cdots \\
\rho_{12}\,\sigma_{1,\text{syst}}\,\sigma_{2,\text{syst}} & \sigma^2_{2,\text{stat}} + \sigma^2_{2,\text{syst}} & \cdots \\
\vdots & \vdots & \ddots
\end{bmatrix}
$$

However, all my data points have asymmetric uncertainties $d^{+\sigma^+_n}_{-\sigma^-_n}$ with $\sigma^+_n \neq \sigma^-_n$.
How do I calculate the Chi-Squared in this case?
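
For reference, a minimal NumPy sketch of how the symmetric-uncertainty version of such a matrix and the resulting Chi-Squared could be assembled (all values are stand-ins, not the actual analysis):

```python
import numpy as np

n = 500
sigma_stat = np.full(n, 0.05)      # stand-in statistical uncertainties
sigma_syst = np.full(n, 0.03)      # stand-in systematic uncertainties
corr = np.full((n, n), 0.5)        # stand-in systematic correlations rho_ij
np.fill_diagonal(corr, 1.0)

# Off-diagonal: rho_ij * sigma_syst_i * sigma_syst_j; the diagonal adds
# the statistical variance on top of the systematic one.
cov = corr * np.outer(sigma_syst, sigma_syst) + np.diag(sigma_stat**2)

def chi2(data, model, cov):
    """Chi-squared d^T C^{-1} d; solve() avoids forming the explicit inverse."""
    d = data - model
    return d @ np.linalg.solve(cov, d)
```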
 
If your uncertainties are asymmetric, reducing them to two numbers can be dangerous, because the likelihood is probably not perfectly Gaussian on each side separately. You could use whichever uncertainty applies in your case (pick the one for the right direction), but a likelihood-based analysis might be better.
 
I thought about picking the uncertainty that applies in each case (the lower uncertainty if the fit lies below the data point, and vice versa), but the problem is that I'm using the covariance matrix inside a minimizer to find the best-fit parameters of my test formula.

Currently I'm generating my matrix (500×500) outside the minimizer (the minimizer loops over the values of the parameters of my fit formula, so only the difference vectors need to be recalculated at each iteration). Picking the direction-dependent uncertainties when building the covariance matrix would mean constructing a different matrix at each iteration. Is there a way to avoid that?

I'm interested in the likelihood-based analysis you mentioned; how would it solve the asymmetric uncertainty problem?
 
Where do your uncertainties come from, and what exactly are you fitting, and how?
See: likelihood.
 
My uncertainties are statistical and systematic uncertainties on data points representing the flux of cosmic protons as a function of energy. The systematic uncertainties come from several sources related to the detector, its resolution, and the Monte Carlo simulation.

I'm currently performing a global fit including different experiments that measure the flux of cosmic protons. To do that, I compare a formula (GSHL) predicting the flux of cosmic protons with the actual data (their difference forms the numerator of my Chi-Squared). The cosmic ray formula depends on four parameters. By minimizing the Chi-Squared (looping through different values of the four parameters) I intend to determine the best-fit values of the four parameters and their respective uncertainties.
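
For illustration, a sketch of what such a global fit could look like with SciPy. The flux parametrization below is a four-parameter placeholder of the GSHL type, $\phi(E) = K\,(E + b\,e^{-c\sqrt{E}})^{-\alpha}$; the exact form and the starting values are assumptions, not taken from the analysis.

```python
import numpy as np
from scipy.optimize import minimize

def flux_model(E, K, b, c, alpha):
    """Placeholder four-parameter flux formula of the GSHL type;
    substitute the exact parametrization used in the analysis."""
    return K * (E + b * np.exp(-c * np.sqrt(E)))**(-alpha)

def chi2(params, E, data, cov):
    """Only the residual vector changes per iteration; cov is built once."""
    d = data - flux_model(E, *params)
    return d @ np.linalg.solve(cov, d)

# Hypothetical usage, with E, data and cov prepared as above:
# result = minimize(chi2, x0=[1.0e4, 2.0, 0.5, 2.7],
#                   args=(E, data, cov), method="Nelder-Mead")
# best_fit = result.x
```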
 
The minimizer probably uses this covariance matrix to produce a likelihood estimate and maximizes this likelihood (more likely: minimizes its negative logarithm). Approaches I see:
- Use symmetric uncertainties to get an estimate accurate enough to tell in which direction your deviation goes for each bin, then plug in the correct direction and re-run. This should work if the asymmetries are not too large.
- Figure out whether your minimization program lets you calculate the likelihood externally, so that you can pick the right direction in every iteration (see the sketch after this post).

The second approach also lets you include more complex uncertainty estimates. The asymmetric errors are probably just an approximation to a more complex likelihood function, and using this function directly would be more accurate.
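
As a concrete illustration of the second approach: minimizers such as MINUIT (via iminuit) or scipy.optimize accept an arbitrary Python cost function, so the direction-dependent uncertainty can be selected inside it at every iteration. Below is a minimal sketch reusing the hypothetical flux_model from the earlier sketch; the correlation matrix is precomputed once, so rebuilding the covariance matrix is a cheap vectorized operation even at 500×500. The sign convention (upper uncertainties when the model lies above the data point) is an assumption to check against the experiments' definitions.

```python
import numpy as np

def asymmetric_chi2(params, E, data, corr,
                    stat_plus, stat_minus, syst_plus, syst_minus):
    """Cost function that picks the direction-dependent uncertainties
    for every data point at each iteration (all names hypothetical)."""
    model = flux_model(E, *params)        # from the earlier sketch
    above = model > data                  # model above data -> upper errors
    stat = np.where(above, stat_plus, stat_minus)
    syst = np.where(above, syst_plus, syst_minus)
    # Rebuild the covariance matrix from the fixed correlation structure.
    cov = corr * np.outer(syst, syst) + np.diag(stat**2)
    d = data - model
    return d @ np.linalg.solve(cov, d)

# Hypothetical usage:
# from scipy.optimize import minimize
# result = minimize(asymmetric_chi2, x0,
#                   args=(E, data, corr, stat_plus, stat_minus,
#                         syst_plus, syst_minus))
```

The first approach then falls out as a special case: run once with symmetrized uncertainties, freeze the resulting sign pattern, and re-fit with the covariance matrix built from the frozen pattern.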
 