I have a dataset of roughly 1000 points, each with two measurements, and a family of two-parameter models that I fit to these data to find the best-fitting pair of parameters. There is quite a spread around the best-fitting model because none of the models is perfect; this scatter is not caused by the measurement errors, which are known. Because the fit is poor, the reduced chi-squared value is much greater than 1. That is acceptable here: it's well known that the models aren't perfect. But is it still possible to calculate a confidence interval (say, 1 sigma) on my best fit, and if so, what value of Delta chi-squared should I use when I vary the parameters to find that interval?
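For concreteness, here is a minimal sketch of the kind of fit I mean, using synthetic data and a toy two-parameter linear model (my real data and models are different; the sinusoidal term just stands in for the model imperfection that inflates the scatter beyond the known errors):

```python
import numpy as np

# Synthetic stand-in for the real data: ~1000 points with known errors.
rng = np.random.default_rng(0)
n = 1000
x = np.linspace(0.0, 10.0, n)
sigma = 0.5                                   # known measurement error
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, n)
y += 1.0 * np.sin(x)                          # model imperfection -> excess scatter

# Toy two-parameter model: y = a * x + b, evaluated on a grid.
a_grid = np.linspace(1.8, 2.2, 61)
b_grid = np.linspace(0.5, 1.5, 61)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

# Chi-squared at every (a, b) grid point.
resid = y[None, None, :] - (A[:, :, None] * x[None, None, :] + B[:, :, None])
chi2 = np.sum((resid / sigma) ** 2, axis=-1)

# Best fit and reduced chi-squared (n points, 2 fitted parameters).
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
chi2_min = chi2[i, j]
dof = n - 2
print("best fit: a =", a_grid[i], " b =", b_grid[j])
print("reduced chi-squared:", chi2_min / dof)  # > 1: imperfect model

# My question is which contour chi2 = chi2_min + Delta bounds
# the 1-sigma region for the two parameters in this situation.
```

The grid scan makes the problem explicit: the minimum of the chi-squared surface is well defined, but with reduced chi-squared much greater than 1 it is unclear what Delta chi-squared contour corresponds to 1 sigma.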