- #1
pergradus
Normalized SSE for a set of data??
Hi, suppose I have a set of data points, and each data point has a certain uncertainty associated with it.
Suppose also that I have a function which models the data. What I'd like to know is how one quantitatively measures how good the fit of the model is to the data, in such a way that the model can be compared across different sets of data.
For example, taking the SSE defined as:
[itex]\sum(y_i - f(x_i , \beta))^2[/itex]
where [itex] \beta[/itex] is a set of parameters, one can measure the difference between the model and the data. However, this does not take into account the number of data points or the degree of uncertainty. If I have a very large number of data points, even a small per-point difference between the data and the model will result in a very large SSE, even if the model is a very good fit. Conversely, even if the model is poor, a small set of data points may produce a small SSE, so there is no way to compare data sets for the same model.
Also, one must consider the magnitude of the data points when comparing sets. For example, if my [itex] y_i [/itex] values range from 100...500 for one set, a small fractional difference between the model and the data may still produce a huge SSE, while a huge fractional difference for data ranging from 0.01...0.5 will result in a small SSE.
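To make the first point concrete, here's a minimal sketch in Python with NumPy of what I mean. The linear model, noise level, and point counts are just placeholders I made up for illustration: two data sets are generated from the same model with the same per-point scatter, and the only difference is the number of points, yet the SSE as defined above comes out roughly 100 times larger for the larger set.
[code]
import numpy as np

def sse(y, x, f, beta):
    """Sum of squared errors: sum_i (y_i - f(x_i, beta))^2."""
    return np.sum((y - f(x, beta)) ** 2)

# Hypothetical linear model: f(x, beta) = beta[0]*x + beta[1]
def f(x, beta):
    return beta[0] * x + beta[1]

rng = np.random.default_rng(0)
beta = (2.0, 1.0)

# Two data sets from the same model and the same noise level (sigma = 0.5),
# differing only in the number of points (20 vs 2000).
x_small = np.linspace(0, 10, 20)
x_large = np.linspace(0, 10, 2000)
y_small = f(x_small, beta) + rng.normal(0, 0.5, x_small.size)
y_large = f(x_large, beta) + rng.normal(0, 0.5, x_large.size)

print(sse(y_small, x_small, f, beta))  # small SSE
print(sse(y_large, x_large, f, beta))  # ~100x larger, though the fit is equally good per point
[/code]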
So, what I'd like is a way to compare the goodness of a fit across a wide variety of data sets that takes into account the error in the data, the number of data points, and the magnitude of the dependent variables. Can someone explain how to do this, and what such a quantity is called?