Hello, I have done a chi-squared fit to the measurements from a neutron flux experiment to get the best parameters for a function of the form Ysim = A*cos(B*X). I used Solver in Excel to minimize over the parameters. The statistic takes the form

chi^2 = SUM_i (Ysim_i - Y_i)^2 / (sigma_i)^2

(divided by the degrees of freedom if I want the reduced chi^2), where the Y_i are the measured values of the flux at various heights and sigma_i is the uncertainty in flux measurement i.

What I want to do is find the uncertainty in the parameter B. I have been told that if I shift the parameter B away from its best-fit value until the minimum value of chi^2 increases to chi^2 + 1, then the difference between the original value of B and the new value of B can be used as the uncertainty in B. Does this make any kind of mathematical sense? I've found hints that a change of chi^2 by 1 corresponds to shifting the parameter by one standard deviation, but I have not found any hard evidence of this. Has anyone seen this method referenced anywhere?
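To make it concrete, here is a minimal sketch of the procedure I'm describing, using made-up data (true values A = 2.0, B = 0.5, sigma = 0.1 are my own invented numbers, not from the experiment). For each trial B, A is re-minimized (here analytically, since the model is linear in A), and then B is stepped away from the best fit until chi^2 rises by 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "measurements": flux vs height, true model y = 2.0 * cos(0.5 * x)
x = np.linspace(0.0, 6.0, 25)
sigma = np.full_like(x, 0.1)
y = 2.0 * np.cos(0.5 * x) + rng.normal(0.0, sigma)

def profile_chi2(B):
    """chi^2 at this B, with A set to its best value for that B.

    For fixed B the model is linear in A, so the optimal A is a
    weighted linear least-squares solution.
    """
    c = np.cos(B * x)
    A = np.sum(y * c / sigma**2) / np.sum(c**2 / sigma**2)
    return np.sum((A * c - y) ** 2 / sigma**2)

# Scan B on a fine grid around the expected minimum
Bs = np.linspace(0.3, 0.7, 4001)
chi2s = np.array([profile_chi2(B) for B in Bs])
i_min = int(np.argmin(chi2s))
B_best, chi2_min = Bs[i_min], chi2s[i_min]

# Walk outward until chi^2 has risen by 1; that distance in B
# is the claimed 1-sigma uncertainty on B
above = np.where(chi2s > chi2_min + 1.0)[0]
B_hi = Bs[above[above > i_min][0]]   # first crossing on the high side
B_lo = Bs[above[above < i_min][-1]]  # first crossing on the low side
dB = 0.5 * (B_hi - B_lo)

print(f"B = {B_best:.4f} +/- {dB:.4f}")
```

The key detail I'm unsure about is whether the other parameter (A here) should be held fixed or re-minimized at each trial B while scanning; the sketch re-minimizes it, which is what I understand "profiling" the chi^2 to mean.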