This is perhaps a philosophical question, but I am trying to make the transition from engineer to scientist, and I am trying to relearn how to think and ask questions. As an engineer, we can often get away with making something that consistently works without understanding it. A common scenario where I worked earlier was that the chips we designed were such a limited run that we could simply run a screen that tested the functionality we needed for long enough, despite having very low yield. We didn't have to publish a guarantee, and if a chip failed, we could replace it. The customers would not have been around if we had waited for a good understanding of what the issues were, but they were perfectly happy with us replacing parts when they failed. Here, just finding the maximum-likelihood Poisson distribution seemed good enough (and perhaps even overkill).

It seems to me that some of the best scientists are the ones good at forming hypotheses. But a lot of the time I see people advocating that we should let the data speak for itself, that we ought to make no hypotheses, and that we should let the data tell us what is true. However, this sort of statistical thinking seems to lack something when it comes to forming and testing the types of hypotheses important in science.

I could be wrong. But if I am, I was wondering: if we pretended we didn't know Newton's laws, would it be possible, using inferential statistics, to infer F=m*a directly from the data (in other words, without an a priori hypothesis)? What experiments would we do? What would we measure, and what sort of statistical analysis would we do?
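For concreteness, here is the sort of analysis I imagine one might attempt. This is only a minimal sketch with simulated data (the experimental setup, the multiplicative noise model, and the power-law model class F = k·m^α·a^β are all my own assumptions): apply known forces to known masses, measure accelerations, and fit the exponents by least squares on log-transformed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: apply known forces to known masses and
# measure the resulting accelerations, with multiplicative noise.
n = 500
m = rng.uniform(0.5, 5.0, n)                    # masses (kg)
F = rng.uniform(1.0, 20.0, n)                   # applied forces (N)
a_true = F / m                                  # what Newton's law gives
a_meas = a_true * rng.lognormal(0.0, 0.05, n)   # noisy measurement

# Fit the assumed power-law model F = k * m^alpha * a^beta by ordinary
# least squares on logs: log F = log k + alpha*log m + beta*log a.
X = np.column_stack([np.ones(n), np.log(m), np.log(a_meas)])
y = np.log(F)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
log_k, alpha, beta = coef
print(f"k ~ {np.exp(log_k):.3f}, alpha ~ {alpha:.3f}, beta ~ {beta:.3f}")
# With clean enough data, alpha and beta both come out near 1, i.e.
# the fit "suggests" F proportional to m*a.
```

But note that even here the power-law model class was chosen a priori, which seems to be exactly the kind of hypothesis the "let the data speak" view claims we can do without.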