About chi-squared and r-squared test for fitting data

AI Thread Summary
The discussion centers on the differences between R-squared and chi-squared tests in data fitting. R-squared is typically used in ordinary least squares (OLS) regression, which has specific conditions like linearity and constant variance that may not always be met in experimental data. Chi-squared, derived from maximum likelihood estimation, is presented as a more general method that does not require these restrictions, allowing for fitting with any known variance. Participants also highlight the importance of understanding the distinction between maximum likelihood and probability, emphasizing that maximum likelihood estimates may not always represent likely outcomes. Overall, the conversation explores the broader applicability of chi-squared in statistical analysis compared to R-squared.
chastiell
Hi all, I just want you to tell me whether my ideas are correct or not:

As far as I can see, the R^2 test is usually used with the OLS (ordinary least squares) method, where several conditions on the data are assumed (something like linearity in the coefficients, zero expectation value for the perturbations, and constant variance for the perturbations).

Many times in experiments these conditions are not satisfied. After reading something like that, I found an old book in my files, Statistical Data Analysis by Glenn Cowan, where least squares is derived from the maximum likelihood parameter estimation method. I think this feels very natural and general because the linearity and constant-variance restrictions do not appear: you have data with any known variance, and any model where linearity in the coefficients is not obligatory; then you only need to minimize the function:

##\chi^2=\sum_i {(ydata_i-f(parameters,xdata))^2\over yerror_i^2}##
(sorry for that strange latex code but i don't know how to use equations in this editor)
<<Moderator's note: simply use the proper tags. See https://www.physicsforums.com/help/latexhelp/>>

for the parameters; then chi-squared divided by the number of degrees of freedom is used as a measure of goodness of fit. Any minimization method can be used (I prefer numerical methods).

Without more information (because I didn't find any), I'm really tempted to conclude (at least this is my hypothesis) that r-squared is a goodness-of-fit coefficient only valid when the conditions are met (there are few ways to use it if the variance condition is not satisfied), and that the chi-squared goodness of fit is used in a more general way (I mean, without the variance and linearity conditions). Do you agree with me? Why or why not? Thanks for your answers :)
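The fitting procedure described above can be sketched numerically. This is a minimal example, not from the thread: the straight-line model, the error model, and all the numbers are hypothetical, and `scipy.optimize.minimize` stands in for "any numerical minimization method".

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data (hypothetical): y = 2x + 1 with known, non-constant errors,
# illustrating that chi-squared fitting does not need constant variance.
xdata = np.linspace(0.0, 10.0, 20)
yerror = 0.1 + 0.05 * xdata                  # any known per-point error is allowed
rng = np.random.default_rng(0)
ydata = 2.0 * xdata + 1.0 + rng.normal(0.0, yerror)

def f(params, x):
    a, b = params
    return a * x + b

def chi2(params):
    # chi^2 = sum_i ((ydata_i - f(params, xdata_i)) / yerror_i)^2
    return np.sum(((ydata - f(params, xdata)) / yerror) ** 2)

res = minimize(chi2, x0=[1.0, 0.0])          # any numerical minimizer works here
ndof = len(xdata) - len(res.x)               # degrees of freedom = points - parameters
print(res.x, res.fun / ndof)                 # chi^2/ndof near 1 indicates a good fit
```

Note that the model `f` need not be linear in its parameters; only the known `yerror` values are required.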
 
chastiell said:
After reading something like that, I found an old book in my files, Statistical Data Analysis by Glenn Cowan, where least squares is derived from the maximum likelihood parameter estimation method. I think this feels very natural and general because the linearity and constant-variance restrictions do not appear: you have data with any known variance, and any model where linearity in the coefficients is not obligatory; then you only need to minimize the function:

##\chi^2=\sum_i {(ydata_i-f(parameters,xdata))^2\over yerror_i^2}##
I think these notes summarize what you are talking about: https://www.physics.ohio-state.edu/~gan/teaching/spring04/Chapter6.pdf

You need an "i" subscript on the "xdata": ##\chi^2=\sum_i {(ydata_i-f(parameters,xdata_i))^2\over yerror_i^2}##
chastiell said:
and that chi-squared goodness of fit is used in a more general way (I mean, without the variance and linearity conditions). Do you agree with me? Why or why not? Thanks for your answers :)

This is a use of chi-square which is distinct from "Pearson's chi-square". Unlike Pearson's, it does not use binned data. However, you need to know ##yerror_i##, the standard deviation of the errors in each of the measurements. In the terminology of the notes in that link, you need the same information for R as for ##\chi^2##. Perhaps you are talking about a definition of R that differs from the one given in that link.

I agree that a maximum likelihood fit has a clearer intuitive meaning than a least squares fit. The only caution about maximum likelihood is that the most likely thing that can happen in a model may not be very likely at all. Maximum likelihood fits are convincing when there is a large probability that something "about the same" as the most likely event will happen. If the maximum likelihood event is an isolated thin peak in a distribution, then that event or something "about the same" may have a very small probability of occurring.
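The "isolated thin peak" caution can be illustrated numerically. This sketch (all numbers hypothetical, not from the thread) compares two densities with the same mode: the thin-peaked one has a much higher density at the mode, yet the probability of landing near the mode is much smaller.

```python
import numpy as np

# Grid on [0, 10] for numerical integration of two densities.
x = np.linspace(0.0, 10.0, 100001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

broad = gauss(x, 5.0, 1.0)                            # broad density, mode at x = 5
# Isolated thin peak at x = 5 carrying only 5% of the mass; the rest is spread widely.
thin = 0.05 * gauss(x, 5.0, 0.01) + 0.95 * gauss(x, 5.0, 3.0)

near = np.abs(x - 5.0) < 0.5                          # "about the same" as the mode
p_broad = np.sum(broad[near]) * dx                    # large: near the mode is likely
p_thin = np.sum(thin[near]) * dx                      # small, despite the taller peak
print(p_broad, p_thin)
```

So the thin-peaked model "wins" on likelihood at the mode while assigning little probability to anything near it, which is exactly why a maximum likelihood event need not be a likely one.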
 
Hi Stephen, I'm a bit confused about your last paragraph. Can you elaborate on what you mean? A likelihood function is not a probability, so I get a bit confused when you say you use likelihood and large probability in the same sentence.
 
MarneMath said:
Hi Stephen, I'm a bit confused about your last paragraph. Can you elaborate on what you mean? A likelihood function is not a probability, so I get a bit confused when you say you use likelihood and large probability in the same sentence.

An interval around the single value x = a that produces maximum likelihood has a probability. Fitting by the criterion of maximum likelihood makes sense when the predicted distribution has an interval "near" x = a that has a large probability, using whatever definition of "near" applies to the specific practical problem.
 
I think I get what you're saying now. I usually hear your point in terms of the Bayesian criticism of the MLE, i.e., that the MLE fails to account for the volume of the parameter space that fits the data well.
 
Hi all, thanks for your answers, and thanks for the link :)
 