Is the Likelihood Function a Multivariate Gaussian Near a Minimum?

Summary
Near the minimum of the chi-square function the likelihood approaches a Gaussian in each parameter, with the mean at the minimizing value. That univariate form holds only when the other parameters are held fixed at their values at the minimum. More generally, a multivariate Gaussian, whose off-diagonal terms encode the correlations between parameters, describes the likelihood in a small region around the minimum. The distinction matters for accurate data analysis and error assessment.
kelly0303
Hello! I am reading Data Reduction and Error Analysis for the Physical Sciences by Bevington, 3rd Edition, and in Chapter 8.1, "Variation of ##\chi^2## Near a Minimum", he states that for enough data points the likelihood function becomes a Gaussian function of each parameter, with the mean being the value that minimizes the chi-square: $$P(a_j)=Ae^{-(a_j-a_j')^2/2\sigma_j^2}$$ where ##A## is a function of the other parameters, but not of ##a_j##. Is this the general formula, or is it a simplification where the correlation between the parameters is zero? From some examples later I guess this is just a particular case, and I assume the most general formula would be a multivariate Gaussian, but he doesn't explicitly state this anywhere. Can someone tell me what the actual formula is? Also, can someone point me towards a proof of this? Thank you!
 
It is true if you only consider changes in ##a_j## and fix all other parameters to their values at the minimum, but not otherwise.
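A sketch of the standard argument behind this: the likelihood is ##L \propto e^{-\chi^2/2}##, and at the minimum the first derivative of ##\chi^2## with respect to ##a_j## vanishes, so with the other parameters held fixed a second-order Taylor expansion gives $$\chi^2(a_j) \approx \chi^2_{\min} + \frac{1}{2}\left.\frac{\partial^2 \chi^2}{\partial a_j^2}\right|_{\min}(a_j - a_j')^2,$$ and exponentiating reproduces Bevington's Gaussian with $$\frac{1}{\sigma_j^2} = \frac{1}{2}\left.\frac{\partial^2 \chi^2}{\partial a_j^2}\right|_{\min}.$$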
 
mfb said:
It is true if you only consider changes in ##a_j## and fix all other parameters to their values at the minimum, but not otherwise.
Thank you! Does it become a multivariate Gaussian, though, away from the minimum (if there are correlations)? Or does this formula apply only at the minimum value?
 
A multivariate Gaussian can describe the function in a small region around the minimum.
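Written out, that local multivariate form is the standard second-order expansion (a sketch filling in what the thread leaves implicit; the notation ##C## for the covariance matrix is mine): $$P(\mathbf{a}) \propto \exp\left[-\frac{1}{2}(\mathbf{a}-\mathbf{a}')^{\mathsf T} C^{-1} (\mathbf{a}-\mathbf{a}')\right], \qquad \left(C^{-1}\right)_{jk} = \frac{1}{2}\left.\frac{\partial^2 \chi^2}{\partial a_j \, \partial a_k}\right|_{\min},$$ where ##\mathbf{a}'## is the minimizing parameter vector and the off-diagonal elements of ##C## carry the correlations. Fixing every parameter except ##a_j## slices this along one axis and recovers the univariate Gaussian with ##1/\sigma_j^2 = (C^{-1})_{jj}##; marginalizing over the others instead gives the generally larger width ##\sqrt{C_{jj}}##.

A quick numerical check of the sliced case, using a hypothetical straight-line fit (the data and variable names below are illustrative, not from the thread): for a model linear in the parameters, ##\chi^2## is exactly quadratic, so the sliced likelihood should match the curvature-derived Gaussian up to rounding.

Python:
import numpy as np

# Hypothetical data: straight line y = a0 + a1*x with known Gaussian errors.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
sig = 0.5
y = 1.0 + 2.0 * x + rng.normal(0.0, sig, x.size)

def chi2(a0, a1):
    return np.sum((y - a0 - a1 * x) ** 2) / sig ** 2

# Weighted least-squares solution = minimum of chi^2.
A = np.column_stack([np.ones_like(x), x]) / sig
a0_hat, a1_hat = np.linalg.lstsq(A, y / sig, rcond=None)[0]

# Slice: vary a1 with a0 fixed at its value at the joint minimum.
a1_grid = np.linspace(a1_hat - 0.05, a1_hat + 0.05, 201)
dchi2 = np.array([chi2(a0_hat, a1) for a1 in a1_grid]) - chi2(a0_hat, a1_hat)
likelihood = np.exp(-dchi2 / 2.0)  # normalized to 1 at the minimum

# Width from the curvature: 1/sigma_a1^2 = (1/2) d^2chi^2/da1^2 = sum(x^2)/sig^2.
sigma_a1 = sig / np.sqrt(np.sum(x ** 2))
gaussian = np.exp(-((a1_grid - a1_hat) ** 2) / (2.0 * sigma_a1 ** 2))

print(np.max(np.abs(likelihood - gaussian)))  # ~0: exact for a linear model, up to rounding

For a nonlinear model the agreement would only hold in a small region around the minimum, which is exactly the caveat in the reply above.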
 