Hey all, I'm doing some research on protein dynamics that involves fitting models to data. We believe our system is described by a complex, 5-parameter model rather than the generally accepted 2-parameter model. The data come from binding experiments: we add a chemical species to a solution containing a receptor, and the amount of binding is quantified by fluorescence that occurs only upon binding. Because there is a lot of error involved, we ran many replicates of this type of experiment; the experiments were run in slightly different ways, varying the concentrations of some chemical species.

For each experiment, we fit curves for both models to the data by least squares. Naively comparing the sums of squared errors between the two fits won't work, since the more complex model will always fit at least as well. Because the two models are nested, we instead use an F-test to compare the fits. This produces good results that support our hypothesis in most cases. However, we must run an F-test for each experiment, and each test can only compare the models within that one experiment.

What we want to do now is perform a single, final comparison that takes all of the data (all experiments) into account, allowing us to say with what certainty we can choose the more complex model. I'm very clueless as to how to do this, and I'm not sure it can even be done at all! Biological systems are inherently variable, even when they are genetically identical, so perhaps it is fundamentally flawed to ask about the relationship among independent samples. Is there a way to compare F statistics from slightly dissimilar, unique experiments? If not, is there a way to consolidate these experiments before testing a hypothesis, so that we can look at overall behavior? Perhaps the best we can do is simply compare p-values across the experiments.
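For concreteness, here is roughly what our per-experiment comparison looks like. The numbers (sample size and residual sums of squares) are made up for illustration, and scipy is just what we happen to use for the F distribution:

```python
from scipy import stats

# Hypothetical numbers for one experiment: n data points, and the
# residual sums of squares from least-squares fits of the simple
# (2-parameter) and complex (5-parameter) nested models.
n = 40                               # data points in this experiment
p_simple, p_complex = 2, 5           # parameter counts of the two models
ss_simple, ss_complex = 12.4, 7.1    # made-up sums of squared errors

# F statistic for comparing nested least-squares fits:
# (drop in SSE per extra parameter) / (SSE per remaining dof of the full model)
f_stat = ((ss_simple - ss_complex) / (p_complex - p_simple)) / (
    ss_complex / (n - p_complex))

# p-value: upper tail of F with (p_complex - p_simple, n - p_complex) dof
p_value = stats.f.sf(f_stat, p_complex - p_simple, n - p_complex)
print(f_stat, p_value)
```

With these made-up numbers the extra parameters reduce the SSE enough that the test favors the complex model, which matches what we see in most of our real experiments.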
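On that last point: if pooling p-values is even a sensible direction here, something like Fisher's method is what I had in mind (scipy exposes it as `scipy.stats.combine_pvalues`), though I don't know whether it's appropriate for experiments run under slightly different conditions. A toy sketch with made-up per-experiment p-values:

```python
from scipy import stats

# Made-up p-values from the individual per-experiment F-tests
p_values = [0.03, 0.008, 0.12, 0.04, 0.001]

# Fisher's method: -2 * sum(log p_i) ~ chi-squared with 2k dof
# under the joint null that the simple model suffices everywhere
stat, combined_p = stats.combine_pvalues(p_values, method='fisher')
print(stat, combined_p)
```

My worry is that this only tests whether *any* experiment rejects the simple model, which may not be the same as saying the complex model is better overall.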
Thank you for any help - I'm sure there is much statistical intuition I can gain from working this out.