reilly said:
There is a presumption that classical physics describes a world of certainty. For example, moving classical objects are, supposedly, described by well defined trajectories, and so on. And, in fact, this take seems to work wonderfully well in practice in the prescribed theaters of physics.
But... In physics labs, at any level, we learn that we must take errors of measurement into account. One measurement is virtually never sufficient to pin something down. In the usual drill, errors are assumed to be Gaussian, and standard statistics is usually sufficient to set the standard error.
So, given the reality of experimental errors, what can be said about the certainty of classical physics?
Regards,
Reilly Atkinson
What you really obtain in an experiment for an observable A is
A = <A> + deltaA
and classical physics (EXCEPT thermodynamics) traditionally focuses ONLY on average values.
For example, Newton's second law, usually written as
ma = F
would be written as
m<a> = <F>
The most general equation is the Langevin one
ma = <F> + F_random
Nobody can prove that F_random is the result of an underlying deterministic force (among other requirements, one would need laboratory instruments with INFINITE precision, which is obviously impossible).
What can be said is that determinism is a philosophical option: some argue (but never prove) that F_random is the reflection of some ASSUMED underlying deterministic force f; others argue that the world is inherently non-deterministic (e.g. Juan R., Prigogine, etc.) and that f does not exist.
From a scientific point of view our universe is stochastic (never deterministic), and in real laboratory measurements we ALWAYS obtain
ma = <F> + F_random
This is the reason we repeat experiments to obtain average values, and write down equations for those average values:
m<a> = <F>
The equations of classical mechanics, EM, etc. are valid only for the averages.
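The point can be illustrated numerically. Below is a minimal sketch (my own illustration, not from the original post) that integrates the Langevin equation m dv/dt = <F> + F_random by the Euler-Maruyama method, modeling F_random as Gaussian white noise. Every number (mass, force, noise strength, step sizes) is an arbitrary illustrative choice. Each individual trajectory is erratic, yet the ensemble average of the final velocity tracks the deterministic prediction (<F>/m)·t, i.e. the equation that holds "only for the averages."

```python
import random

# Euler-Maruyama integration of a Langevin equation:
#   m dv/dt = F_mean + F_random
# with F_random modeled as Gaussian white noise of strength sigma.
# All parameter values below are illustrative assumptions.

random.seed(0)
m, F_mean, sigma = 1.0, 2.0, 5.0    # mass, deterministic force, noise strength
dt, steps, trials = 0.001, 1000, 2000

final_v = []
for _ in range(trials):
    v = 0.0
    for _ in range(steps):
        # deterministic drift plus random kick; dW ~ N(0, dt)
        v += (F_mean / m) * dt + (sigma / m) * random.gauss(0.0, dt ** 0.5)
    final_v.append(v)

t = steps * dt
mean_v = sum(final_v) / trials
print(mean_v)   # close to (F_mean/m) * t = 2.0, though single runs scatter widely
```

Any single element of `final_v` can be far from 2.0; only averaging over many repetitions of the "experiment" recovers m<a> = <F>.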
The usual (elementary) equations of other disciplines are likewise valid only for averages. For example, for the chemical reaction A --> B, the kinetic equation
d[A]/dt = - k [A]
that appears in any elementary chemistry textbook is only valid for <[A]>:
d<[A]>/dt = - k <[A]>
The most general expression is
d[A]/dt = - k [A] + c
where c is a chemical random force. This is the reason that when one experimentally measures the REAL [A] versus time, one does NOT obtain the exponential predicted by the equation valid for the average. One then extracts <[A]> from the real data via statistical analysis and computes the value of k for that reaction by theoretical methods. In macroscopic chemical kinetics one is generally interested only in k; in single-molecule dynamics, however, one is really interested in c. In fact, c plays a crucial role in the chemistry of biological systems, for example in biological channels in membranes.
P.S.: If I am not wrong, we have found a possible demonstration that f does not exist, using a relativistic quantum formulation that also includes quantum gravity, at the Center for CANONICAL |SCIENCE). If this research is correct, relativistic quantum-gravity restrictions on spacetime 'foam' add an inherent indeterminism to the universe.
Remember that the dynamical structure of QM (contrary to common belief) is DETERMINISTIC. Indeterminism in QM arises only in the quantum measurement process, which is not explained by the Schrödinger equation. That quantum gravity may play a special role in the quantum measurement process is also maintained by a number of authors, e.g. Penrose. In Penrose's theory, the indeterminism of quantum mechanics arises from the structure of spacetime in quantum gravity.