Methodology / Philosophy of Science

In summary: This leads to the introduction of an accuracy error, which can be thought of as a "fuzzing" of the input values and which changes from one apparatus to another. It is needed because building new equipment entirely under the proposed theory would take far too long. However, this approach has been criticized as offering only a limited view of scientific progress and the workings of the sciences.
  • #1
Pinewater234
Summary: When experimenting to improve a theory, account for the fact that your experimental equipment is made using the very same theory which you are trying to improve.

1.) It would take many decades (~ 80 years?) to design and make equipment entirely using a proposed new theory which has just been formulated.

2.) When testing an improvement to an existing theory (e.g. improving an emergent theory, or playing with assumptions when the original theory's assumptions are not self-consistent, as in quantum mechanics), one uses equipment which is designed and understood using the existing theory. Yet the newly proposed theory is close enough in its predictions to have gone undetected in previous experiments - in a sense, it has "fooled" existing experiments. One example is that Newton's theory fooled us until general relativity came along (and Newtonian physics is still used where accuracy and precision allow us to use its simpler mathematics).

3.) To account for the fact that a new, proposed theory will approximately follow the old theory, yet is tested using equipment built under the old theory, one can introduce an accuracy error, which accounts for the equipment and setup being derived and built under the old theory. Ideally, the scientists would formulate the proposed theory and develop it to the point where one could build a new set of experimental equipment, understood entirely under the new proposed theory. However, as noted above, that task would take many decades - probably longer than a physicist's lifetime. Hence the introduction of this accuracy error (one could call it an "alpha-error").
The accuracy error can be thought of as a "fuzzing" of the input values, and it changes from one apparatus to another. For example, on an optics table one has the source (nowadays a big box, but it could be thought of as a light bulb), the lenses, and the detector. Each of these should be built once under the old theory and once under the newly formed hypothesis. In the analysis stage we already account for the differences in predictions between the two theories - it is only in the experimental apparatus that, under current methods, we exclusively use the old theory to describe and understand what is happening.
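Here is a minimal sketch of how such an alpha-error could enter an analysis, under made-up assumptions: a toy optics measurement where each apparatus element (source, lens, detector) gets its own fractional fuzzing width because its behaviour is modelled with the old theory. The element names, widths, and nominal values are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical alpha-errors: fractional fuzzing widths assigned to each
# apparatus element because it is designed and understood under the old theory.
ALPHA = {"source": 0.002, "lens": 0.001, "detector": 0.003}

def fuzz(value, element, n=10_000):
    """Return Monte Carlo samples of `value`, fuzzed by the element's alpha-error."""
    return value * (1.0 + ALPHA[element] * rng.standard_normal(n))

# Toy measurement: intensity = source power * lens transmission * detector efficiency.
power = fuzz(1.00, "source")        # W, nominal value under the old theory
transmission = fuzz(0.92, "lens")
efficiency = fuzz(0.85, "detector")

intensity = power * transmission * efficiency
print(f"intensity = {intensity.mean():.4f} +/- {intensity.std():.4f} (alpha-error only)")
```

The widths would in practice have to be set by how far the old and new theories are expected to disagree about each element, which is exactly the part that still needs to be worked out.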

4.) This accuracy error has also been described as a computer-based approximation to mathematical equations, like a region-based Taylor approximation. This application came from speaking with one of my students, and I haven't fully worked it out. The idea: input a domain and range(s) to a mathematical equation, setting values for constants if needed, to find its values in a certain experimental (or theoretical) regime. Then assign error or approximation regions to each point found in this way. Bear in mind that error bars are typically confidence intervals, which say NOT that the experiment is 99% accurate, but rather that 99% of the time the experimental result will lie within the error bar. Perform a set of fits, using different regimes or values in each, finding simple-to-solve equations within that approximation region and cutting out equations which amount to over-fitting, or which are so long/complicated that we could not hope to understand the physics behind them. If desired, identify equations which bear resemblance to known physics equations (analogous to an algorithmic search for results like the method of images in electromagnetism). Finally, combine the fits, with the help of a mathematician if needed, into a theoretical equation which describes the system under the conditions used in this computer-based mathematical experiment. (A rough sketch of this procedure is given at the end of this post.)

Any thoughts/ideas? Technically this is philosophy of science, akin to Karl Popper's work, and it's very applicable in physics. The latter mathematical approximation tool is cool, if that application is doable.
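Here is that sketch - a made-up illustration rather than a worked-out method. It samples a stand-in function over a domain, tries progressively higher-degree polynomial fits on each regime, and rejects fits that exceed a chosen approximation tolerance or a complexity cap; the target function, tolerance, domain split, and degree cap are all arbitrary placeholders.

```python
import numpy as np

def target(x):
    # Stand-in for the equation under study; sin(x) is purely illustrative.
    return np.sin(x)

TOLERANCE = 1e-3     # hypothetical half-width of the "approximation region"
MAX_DEGREE = 3       # fits above this are treated as too complex to interpret

def fit_region(lo, hi, n=200):
    """Find the lowest-degree polynomial that stays within TOLERANCE on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    y = target(x)
    for degree in range(MAX_DEGREE + 1):
        coeffs = np.polyfit(x, y, degree)
        residual = np.max(np.abs(np.polyval(coeffs, x) - y))
        if residual <= TOLERANCE:
            return degree, coeffs, residual
    return None  # no acceptably simple fit exists in this regime

# Split the domain into regimes and fit each one separately.
edges = np.linspace(0.0, np.pi, 7)
for lo, hi in zip(edges[:-1], edges[1:]):
    result = fit_region(lo, hi)
    if result is None:
        print(f"[{lo:.2f}, {hi:.2f}]: no simple fit within tolerance")
    else:
        degree, _, residual = result
        print(f"[{lo:.2f}, {hi:.2f}]: degree {degree}, max residual {residual:.1e}")
```

The "combine the fits" and "recognize known physics equations" steps are the genuinely hard parts and are not attempted here.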
 
  • #2
The preferred way to do this is to use a test theory instead. The test theory should have some free parameters such that you recover every theory of interest for specific values of the parameters. Then you design an experiment to measure one or more of the free parameters. This takes less than 80 years historically, although “decades” is accurate.
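A minimal sketch of that idea, under invented assumptions: a prediction with a single free parameter alpha that reduces to the old theory at alpha = 0 and to the candidate theory at alpha = 1, with alpha then estimated from (here, simulated) data. The two placeholder theories and the noise level are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def old_theory(x):
    return x                   # placeholder prediction of the established theory

def new_theory(x):
    return x + 0.05 * x**2     # placeholder prediction of the candidate theory

def test_theory(x, alpha):
    # Interpolating test theory: alpha = 0 recovers the old theory,
    # alpha = 1 recovers the candidate one.
    return (1 - alpha) * old_theory(x) + alpha * new_theory(x)

# Simulated data drawn from the candidate theory plus measurement noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = new_theory(x) + rng.normal(scale=0.1, size=x.size)

(alpha_hat,), cov = curve_fit(test_theory, x, y, p0=[0.5])
print(f"alpha = {alpha_hat:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
```

Real test theories (e.g. parametrized post-Newtonian formalisms) have many parameters and far more structure, but the logic is the same: measure the parameters and see which theory's values the data prefer.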
 
  • #3
I think this philosophy of science offers a very limited view of the real progress of science. Scientific progress and the workings of the sciences cannot be explained or described in a scientific/philosophical framework. If it were applied to real science and all practitioners adopted this view, science would come to a screeching halt - or maybe some very limited progress would be observed. But it is a nice view.
 
  • #4
Prishon said:
I think this philosophy of science offers a very limited view of the real progress of science. Scientific progress and the workings of the sciences cannot be explained or described in a scientific/philosophical framework. If it were applied to real science and all practitioners adopted this view, science would come to a screeching halt - or maybe some very limited progress would be observed. But it is a nice view.
Well, that is one philosophical view. Hopefully you see the problem with opposing a caricature of the philosophy of science - you still face the questions of what defines science, what its limits are, and what kinds of questions it can or cannot answer. Every practitioner of science has a philosophy of science, even if they are not conscious of it.
 
  • #5
Pinewater234 said:
Summary: When experimenting to improve a theory, account for the fact that your experimental equipment is made using the very same theory which you are trying to improve.

1.) It would take many decades (~ 80 years?) to design and make equipment entirely using a proposed new theory which has just been formulated.

2.) When testing an improvement to an existing theory (e.g. improving an emergent theory, or playing with assumptions when the original theory's assumptions are not self-consistent, as in quantum mechanics), one uses equipment which is designed and understood using the existing theory. Yet the newly proposed theory is close enough in its predictions to have gone undetected in previous experiments - in a sense, it has "fooled" existing experiments. One example is that Newton's theory fooled us until general relativity came along (and Newtonian physics is still used where accuracy and precision allow us to use its simpler mathematics).

I don’t see the issue, and are you conflating theories and hypotheses?

Science works by building on existing knowledge, so I fail to see how a ‘new theory’ being reliant on an existing one is a problem. It would be problematic if the new theory completely discarded well-established elements of a former one. GR built on Newtonian gravity and was confirmed with the tools of Newtonian astronomy - by explaining the formerly unexplainable precession of the perihelion of Mercury. How would one even go about making equipment based on GR to test it, and why would it matter?
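For concreteness, the number behind that example comes from GR's leading-order formula for the extra perihelion advance per orbit, delta_phi = 6*pi*G*M / (a*(1 - e^2)*c^2). The short calculation below, using standard published values for Mercury's orbit, reproduces the well-known ~43 arcseconds per century that Newtonian gravity could not account for.

```python
import math

GM_SUN = 1.32712440018e20   # m^3 s^-2, standard gravitational parameter of the Sun
C = 2.99792458e8            # m/s, speed of light
A_MERCURY = 5.7909e10       # m, semi-major axis of Mercury's orbit
E_MERCURY = 0.2056          # orbital eccentricity
PERIOD_DAYS = 87.969        # orbital period in days

# GR's leading-order extra perihelion advance per orbit (radians).
delta_phi = 6 * math.pi * GM_SUN / (A_MERCURY * (1 - E_MERCURY**2) * C**2)

orbits_per_century = 100 * 365.25 / PERIOD_DAYS
arcsec_per_century = math.degrees(delta_phi * orbits_per_century) * 3600

print(f"GR perihelion advance: {arcsec_per_century:.1f} arcsec/century")  # ~43
```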
 

What is the difference between deductive and inductive reasoning?

Deductive reasoning is a logical process where specific conclusions are drawn from general principles or premises. Inductive reasoning, on the other hand, involves making generalizations based on specific observations or evidence.

What is the scientific method?

The scientific method is a systematic approach to solving problems and answering questions through observation, experimentation, and analysis. It involves formulating a hypothesis, designing experiments to test the hypothesis, collecting and analyzing data, and drawing conclusions based on the results.

What is the role of falsifiability in science?

Falsifiability is the ability of a scientific hypothesis or theory to be proven false through observation or experimentation. It is an important concept in science because it allows for theories to be tested and potentially revised or discarded if they are proven to be incorrect.

What is the difference between qualitative and quantitative research?

Qualitative research involves collecting and analyzing non-numerical data, such as words, images, or observations, to gain an understanding of a phenomenon. Quantitative research, on the other hand, involves collecting and analyzing numerical data to test hypotheses and make statistical inferences.

What is the role of peer review in the scientific community?

Peer review is the process of evaluating and providing feedback on scientific research by experts in the same field. It ensures that research is of high quality, valid, and reliable before it is published and shared with the wider scientific community. It also allows for constructive criticism and helps to improve the overall quality of scientific research.
