Why Does the Covariance Matrix Change with Different Functions?

Summary
The discussion centers on the calculation of covariance matrices using the Python package iminuit for different functions. When testing the function x^2 + y^2, the covariance matrix results in a simple identity matrix, indicating independence between parameters. In contrast, the function (x - y)^2 yields a covariance matrix with values greater than 1, reflecting the lack of a unique minimum and the dependence of the parameters. It is clarified that covariance is not constrained to values between -1 and 1, unlike correlation, which is limited in magnitude. Understanding these differences is crucial for interpreting covariance in the context of parameter fitting.
Silviu:
Hello! I have to calculate the covariance between 2 parameters of a fit function. I found a Python package called iminuit that does a good fit and also calculates the covariance matrix of the parameters. I tested the package on a simple function, and I am not sure I understand the result. When the function I put in is x^2 + y^2, which has its minimum at x = y = 0, I obtain ((1.0, 0.0), (0.0, 1.0)) as the covariance matrix. When I use (x - y)^2, I obtain ((250.24975024975475, 249.75024975025426), (249.75024975025426, 250.24975024975475)) as the covariance matrix. I don't understand why I get covariance values greater than 1, and why in the first case I get 0 off the diagonal and 1 on the main diagonal. This is the first time I have encountered covariance, so I am not sure I have it right. Thank you!
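For anyone who wants to reproduce this, here is a minimal sketch using iminuit's Minuit interface (the 2.x API is assumed; the starting values are arbitrary):

```python
from iminuit import Minuit

def f(x, y):
    return x**2 + y**2

m = Minuit(f, x=1.0, y=1.0)
m.errordef = Minuit.LEAST_SQUARES  # errordef = 1, for a chi-square-like cost function
m.migrad()                         # run the minimizer
print(m.covariance)                # close to the identity matrix for x^2 + y^2
```

Swapping f for (x - y)**2 should reproduce the large, nearly equal entries described above.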
 
The definition of covariance is
$$\operatorname{cov}(x, y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}).$$


If two variables are independent, we would expect the covariance to approach 0 in the limit of large n. If they are dependent, the covariance is not limited to 1.
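To make that concrete, here is a short numpy sketch (with made-up Gaussian samples): independent variables give a sample covariance near 0, while scaling a dependent pair makes the covariance arbitrarily large.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)             # independent of x

print(np.cov(x, y)[0, 1])                # ~0: independent variables
print(np.cov(10 * x, 10 * x + y)[0, 1])  # ~100: dependent, far above 1
```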

Perhaps you are thinking of the correlation matrix, call it ##\Sigma##, whose diagonal elements ##\Sigma_{ii}## are always 1 and whose off-diagonal elements satisfy ##-1 \le \Sigma_{ij} \le 1##.

Your second function does not have a unique minimum; instead it has an infinitely long trough of minima along the line x = y.
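To see this quantitatively: MINUIT's covariance matrix is essentially the inverse of the Hessian at the minimum (scaled by the errordef), so compare the Hessians of the two functions:

$$H_{x^2+y^2} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad H_{(x-y)^2} = \begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix}.$$

The first inverts cleanly (and with errordef = 1 gives exactly the identity covariance), while the second is singular: along the trough the function has no curvature at all, so the numerical inverse produces the huge, nearly equal entries you observed.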
 
Covariance is not limited to 1. Correlation, the ratio of the covariance to the square root of the product of the variances, is limited (in magnitude) to 1.
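A quick numerical check of that ratio (again a numpy sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 3 * x + rng.normal(size=100_000)   # strongly dependent on x

c = np.cov(x, y)
corr = c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])  # cov / sqrt(var_x * var_y)
print(c[0, 1])                  # covariance ~3, not limited to 1
print(corr)                     # ~0.95, within [-1, 1]
print(np.corrcoef(x, y)[0, 1])  # numpy's built-in correlation agrees
```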
 