- #1
chastiell
Hi all, I just want you to tell me whether my ideas are correct or not:
As far as I can see, the R^2 test is usually used with the OLS (ordinary least squares) method, where several conditions on the data are assumed: linearity in the coefficients, zero expectation value for the perturbations, and constant variance for the perturbations. Just to show what I mean, there is a little sketch below of how I would compute R^2 for an OLS line fit.
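Here is a minimal Python sketch (the straight-line data are made up purely for illustration) that fits a line by ordinary least squares and computes R^2 from the residual and total sums of squares:

[CODE=python]
import numpy as np

# Made-up example data for a straight-line OLS fit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Ordinary least squares for a line y = m*x + c.
m, c = np.polyfit(x, y, 1)
y_hat = m * x + c

# R^2 = 1 - (residual sum of squares) / (total sum of squares).
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print("R^2 =", r_squared)
[/CODE]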
Many times in experiments these conditions are not met. After reading about that, I found an old book in my files, Statistical Data Analysis by Glenn Cowan, where least squares is derived from the maximum-likelihood method of parameter estimation. This feels very natural and general to me because the linearity and constant-variance restrictions do not appear: you have data with any known variances and a model that need not be linear in the coefficients, and then you only need to minimize, with respect to the parameters ##\boldsymbol\theta##, the function
##\chi^2=\sum_i \frac{\left(y_i - f(x_i;\boldsymbol\theta)\right)^2}{\sigma_i^2},##
where ##y_i## are the measured values, ##\sigma_i## their known errors, and ##f(x_i;\boldsymbol\theta)## the model prediction at ##x_i##. The chi-squared divided by the number of degrees of freedom is then used as a measure of goodness of fit. Any minimization method can be used (I prefer numerical methods); a sketch follows below.
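To make the procedure concrete, here is a minimal Python sketch of what I mean; the data and the exponential model are made up purely for illustration. It minimizes the chi-squared numerically (with scipy's Nelder-Mead) and reports the chi-squared per degree of freedom:

[CODE=python]
import numpy as np
from scipy.optimize import minimize

# Made-up data: y measured at points x with known errors yerr.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.8, 3.1, 5.9, 10.2, 17.5, 30.1])
yerr = np.array([0.3, 0.3, 0.5, 0.8, 1.2, 2.0])

def model(theta, x):
    # A model that is nonlinear in its parameters: a * exp(b*x).
    a, b = theta
    return a * np.exp(b * x)

def chi2(theta):
    # Weighted sum of squared residuals with known per-point errors.
    r = (y - model(theta, x)) / yerr
    return np.sum(r ** 2)

# Numerical minimization; no linearity assumption is needed.
res = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
dof = len(y) - len(res.x)  # degrees of freedom = data points - fitted parameters
print("best-fit parameters:", res.x)
print("chi2/dof:", res.fun / dof)
[/CODE]

If the errors ##\sigma_i## are correct and the model is adequate, the chi-squared per degree of freedom should come out near 1.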
Without more information (because I didn't find any), I'm really tempted to conclude (at least as a hypothesis) that R^2 is a goodness-of-fit coefficient only when the OLS conditions hold (there are few ways to use it if the constant-variance condition fails), whereas the chi-squared goodness of fit can be used more generally (that is, without the constant-variance and linearity conditions). Do you agree with me? Why, or why not? Thanks for your answers :)