- #1

- 3

- 0

**Summary::** When experimenting to improve a theory, account for the fact that your experimental equipment is built using the very theory you are trying to improve.

1.) It would take many decades (~80 years?) to design and build equipment entirely from a newly formulated theory.

2.) When testing an improvement to an existing theory (e.g. refining an emergent theory, or adjusting assumptions when the original theory's assumptions are not self-consistent, as in quantum mechanics), one uses equipment that is designed and understood using the existing theory. Yet the proposed theory's predictions are close enough to the old ones to have gone undetected in previous experiments - in a sense, it has "fooled" existing experiments. One example: Newton's theory fooled us until general relativity came along (and Newtonian physics is still used wherever the required accuracy and precision allow its simpler mathematics).

3.) To account for the fact that a proposed theory approximately follows the old theory, yet is tested with equipment built under the old theory, one can introduce an accuracy error that covers the equipment and setup being derived and built under the old theory. Ideally, scientists would develop the proposed theory to the point where one could build a new set of experimental equipment understood entirely under it. However, as noted above, that would take many decades - probably longer than a physicist's lifetime. Hence the introduction of this accuracy error (one could call it an "alpha-error").
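One simple way to make this concrete (a sketch of my own, not worked out in the notes above): treat the alpha-error as one more systematic term in the uncertainty budget, combined in quadrature with the usual statistical and systematic errors. The function name and the numerical values are hypothetical.

```python
import math

def total_uncertainty(stat_err, syst_err, alpha_err):
    """Combine statistical, systematic, and theory-dependence
    ("alpha") errors in quadrature. alpha_err stands for the
    assumed uncertainty arising from the apparatus being designed
    and understood under the old theory (a hypothetical model)."""
    return math.sqrt(stat_err**2 + syst_err**2 + alpha_err**2)

# Example: 1% statistical and 0.5% systematic error, plus an
# assumed 0.2% alpha-error from theory dependence.
print(total_uncertainty(0.010, 0.005, 0.002))
```

Quadrature assumes the alpha-error is independent of the other error sources; if the new theory biases all components of the apparatus in the same direction, a correlated (linear) combination would be more conservative.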

The accuracy error can be thought of as a "fuzzing" of the input values, and it changes under different apparatus. For example, on an optics table one has the source (nowadays a big box, but it can be thought of as a light bulb), the lenses, and the detector. Each of these should be built once under the old theory and once under the newly formed hypothesis. In the analysis stage we already account for the differences in predictions between the two theories; it is only in the experimental apparatus that, under current methods, we exclusively use the old theory to describe and understand what is happening.
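The "fuzzing" picture suggests a Monte Carlo sketch: perturb each apparatus parameter by a small relative alpha-error and propagate through the old theory's model of the setup. Below, the thin-lens equation stands in for the apparatus model, and the 1% fuzz level and all numbers are hypothetical assumptions of mine, not values from these notes.

```python
import random

random.seed(0)  # reproducible sketch

def image_distance(f, o):
    # Thin-lens equation (old theory's model of the lens):
    # 1/f = 1/o + 1/i  ->  i = 1 / (1/f - 1/o)
    return 1.0 / (1.0 / f - 1.0 / o)

def fuzz(value, rel_err):
    # Perturb a nominal value by a uniform relative "alpha-error".
    return value * (1.0 + random.uniform(-rel_err, rel_err))

# Nominal apparatus values (hypothetical, in metres).
f_nominal, o_nominal = 0.10, 0.30
alpha = 0.01  # assumed 1% theory-dependence fuzz per component

samples = [image_distance(fuzz(f_nominal, alpha), fuzz(o_nominal, alpha))
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
print(mean, spread)
```

The spread of the fuzzed predictions gives a rough size for the alpha-error band around the old theory's nominal prediction (0.15 m here).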

4.) This accuracy error has also been described as a computer-based approximation to mathematical equations, like a region-based Taylor approximation. This application came from speaking with one of my students, and I haven't fully worked it out. For this application, one feeds a domain and range(s) into a mathematical equation, setting values for constants if needed, to find its values in a certain experimental (or theoretical) regime. Then, assign error or approximation regions to each point found this way. Bear in mind that standard error bars are confidence intervals, which say NOT that the experiment is 99% accurate, but rather that the procedure producing the error bar will contain the true value 99% of the time (the Bayesian analogue is a credible interval). Perform a set of fits, using different regimes or values in each, finding simple-to-solve equations within that approximation region and discarding equations which amount to over-fitting or which are so long or complicated that we could not hope to understand the physics behind them. If desired, identify equations which resemble known physics equations (this is analogous to an algorithmic search for results like the method of images, in electromagnetism). Finally, combine the fits, with the help of a mathematician if needed, into a theoretical equation that describes the system under the conditions used in this computer-based mathematical experiment.
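The fitting loop above can be sketched as follows: within a chosen regime, try polynomial fits of increasing degree and keep the lowest degree whose worst-case residual stays inside the approximation region, so that over-complicated fits are discarded by construction. This is a minimal stand-in of my own (using polynomials, not a general equation search); the function name, tolerance, and regime are hypothetical.

```python
import numpy as np

def simplest_fit(f, lo, hi, tol, max_degree=8):
    """Within the regime [lo, hi], return the lowest-degree
    polynomial whose worst-case residual stays inside the
    tolerance band `tol` (the "approximation region").
    Preferring low degree is the guard against over-fitting
    and uninterpretable equations described above."""
    x = np.linspace(lo, hi, 200)
    y = f(x)
    for deg in range(max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        resid = np.max(np.abs(np.polyval(coeffs, x) - y))
        if resid <= tol:
            return deg, coeffs
    return None  # nothing simple enough fits this regime

# In the small-angle regime, sin(x) should come back as ~linear,
# recovering the familiar small-angle approximation.
deg, coeffs = simplest_fit(np.sin, -0.1, 0.1, tol=1e-3)
print(deg)
```

Running the same search over several regimes and then stitching the per-regime equations together corresponds to the "combine the fits" step; recognizing a recovered form (here, sin x ≈ x) is the analogue of spotting a known physics equation.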

Any thoughts/ideas? Technically this is philosophy of science, akin to Karl Popper's work, and it is very applicable in physics. The latter mathematical approximation tool is cool, if that application is doable.