Science and Mathematics (heh)

1. Dec 28, 2003

Sikz

Mathematics is a thing of deduction. We use deductive logic to find "truths".

Science is a thing of induction. We use inductive logic to find "truths".

All Mathematics is based on postulates. Postulates are "truths" found through inductive logic (or they are simply "definitions").

If postulates are found inductively and Science operates inductively... Perhaps our Science is so young that we are still in the "finding postulates" stage? Perhaps later we will progress to theorems and the like... Or maybe Technology is the study of theorems based on the postulates studied by Science? Perhaps that, perhaps the other, perhaps both... What do you all think?

2. Dec 29, 2003

Staff Emeritus
The "Theory of Everything" would have to be unique. It would also have to have the property that "it must be so", that it cannot be otherwise than what it is. Otherwise it isn't much of a theory of everything.

Those properties are mathematical, so I say that if and when we ever get to a theory of everything, it will be through the unification of physics and mathematics. But I am not holding my breath.

3. Dec 29, 2003

marcus

that reminds me of something I just saw on SPR, a reference to a short Baez essay dated 2000
http://math.ucr.edu/home/baez/background.html
the essay indicates how it may be possible to carry
the search for a "background-free" theory to extremes

a theory that explains everything and "must be so"
perhaps such a theory was declared to be a heresy in 1277 by
the Bishop of Paris

he did not like the idea of consistently applying the idea that if A affects B then B affects A----reciprocity of cause and effect, related to Newton's law of the reciprocity of force.
something, said the Bishop, (we might say the laws of mathematics and physics) should provide a fixed unalterable background

in an "everything" theory presumably everything would be on the table, capable of alteration, perhaps affected by everything else, and subject to explanation. a shocking idea.

there is a hierarchy of backgrounds and the question is how far back to go

let us dispense with a fixed geometry of space, very well that leaves us with the topology

should we dispense, as well, with a fixed topology, or with the mathematical and physical laws governing space, or with constant quantities serving as coefficients in these laws

shall 3 spatial dimensions and the speed of light be explained

the Bishop declared that this peeling away of backgrounds must stop
whereas you mildly decline to hold your breath

I mean that his declaring something heretical in 1277 and your refusing to hold your breath in 2004----while different in degree and, one may say, solemnity----are aligned in both upholding a kind of commonsense view regarding excesses of explanatory zeal.

Last edited: Dec 29, 2003
4. Dec 29, 2003

marcus

you suggest that Science is "young" and so what was a postulate may now be deduced from deeper assumptions
what is now postulated may, in future, be derived from still deeper laws, as yet undiscovered

the domains of science and mathematics are not fixed
what today is physics (the experimental value of the fine structure constant 1/137.036..., for which there is no known mathematical formula) may tomorrow turn out to be mathematics (a power series for it might be postulated and somehow "proven" just as we have formulas for pi, although all attempts have failed so far)

the postulates in mathematics are not fixed. deeper postulates are gradually found. geometry and arithmetic can now be constructed on the basis of deeper axioms---those of set theory.
this deepening happens so slowly that it is barely noticeable
it is like plate tectonics

the laws of physics are not fixed. they tend gradually to be subject to explanation on the basis of deeper laws

in these senses, Science (as you say) is "young"

I do not say it is "young" because I see no maturity in the future where growth stops. I would say, instead of young, growing.
Growing is nearly the same thing as young.

5. Jan 9, 2004

metacristi

Actually science is based on both induction and deduction (though some philosophers, Popper for example, argue, with good reason strictly philosophically speaking, that induction is not involved in science). Since the subject is very complex, I have to make a digression in order to make things clearer.

The scientific methodology of establishing what is real is a form of empiricism (based on the correspondence and coherence philosophical theories of truth), with intersubjective observational evidence as the 'highest authority'. But unlike the so-called 'common truths' (things people agree on, usually based on some superficial observations), the scientific method carries out its research methodically, and with the best devices available, before accepting certain statements about the physical world.

The main steps involved by the scientific method are:

1.Observation/experiment

2. Hypothesis. As someone put it, 'a hypothesis is tentative; it's a kind of educated guess that must be tested further'. It doesn't really count how we derive it; intuition is accepted too. All that counts is coherence, conformity with the observed facts, and predictions [potentially falsifiable].

3. Scientific law. A hypothesis that is confirmed by many experiments, using induction, is considered to represent a 'law', giving us 'objective knowledge' at least provisionally. If there are several successful competing hypotheses, the most 'confirmed' hypothesis is chosen. If this is not enough, the simpler one 'wins' (by applying Occam's Razor). Occam's Razor is only a logical criterion indeed, but since science is intrinsically pragmatic it is acceptable to use it. Indeed, the first goal of science is to find the simplest theories that 'work' for all our practical purposes, not some absolute truths.

4. Scientific theory = a set of scientific laws, coherent with each other, whose components have been chosen as above and not yet disproved, accounting for a broader domain of reality. It is considered to give us 'objective knowledge', provisionally. When it is no longer sustained by all well-conducted experiments, or when it becomes theoretically and experimentally stagnant ---> loop back to 2, and so on.

Reliable experiments/observations are the highest authority, representing in the majority of cases the starting point for the hypothesis-making process. Basically, in order to explain the observed results of some experiments, we must deduce them from a set of premises, considered true, inferred preferably from empirical evidence (and not yet falsified). These could be simple statements deduced directly from empirical evidence (scientific laws), or other successful scientific theories (or only parts of them), plus the basic axioms of science (such as the a priori rejection of all types of idealism, the assumption that nature can be understood, and so on). That is, the newly observed fact is explained within an existing paradigm. But sometimes scientists have to create brand new theories, to change paradigm, in order to explain them.

It can happen in this process that, together with empirically verifiable facts, they are forced to posit in the premises the existence of as-yet-unobserved physical entities. This is totally acceptable, with the sole condition that they be indispensable theoretical constructs, that is, directly responsible for the empirical success of the hypothesis. In other words, they cannot be discarded, it being conceivable that we will be able to prove their existence empirically (even if only indirectly) later (though of course this is not a rule, since metaphysics could still lead to very successful scientific theories! -- as I'll argue below). Even 'invented' mathematical formalisms (as Schrodinger's equation is, for example) are acceptable as long as they comply with the observed facts. It must be mentioned, however, that all would-be successful new scientific theories must be able not only to engulf the results of all previous reliable experiments but also to make 'forward', new predictions (not yet verified) that are potentially falsifiable.

The results of experiments count as predictions (deductions from a set of premises, in fact), induction being only a part of the process. Technological devices based on some successful scientific theories count as their predictions, even in cases when we first build them in practice and only later find a successful scientific theory explaining their functioning.

Thus in science, unlike mathematics, we have to begin from the end in order to construct a successful theory. Since we can never be sure that all premises are true in an absolute sense, the only logical basis we can find for the scientific approach is the so-called hypothetico-deductive method, with modus tollens, based on the notion of implication [->]:

If a -> b is valid
and
a=(considered) TRUE
then
b=TRUE

where a is the set of premises and b is the set of predictions.

But if we find experimentally ~b (a certain prediction is falsified), where ~b = not b,
then we have ~a (this means that one of the premises is false) ---> scientists must return to the process of hypothesis making; the false premise must be found and eliminated, and other premises added instead.
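The two inference schemas above (affirming the antecedent, and modus tollens) can also be checked formally. A minimal sketch in Lean 4, with made-up theorem names:

```lean
-- modus ponens: from a -> b and a, conclude b
theorem mp (a b : Prop) (h : a → b) (ha : a) : b :=
  h ha

-- modus tollens: from a -> b and ¬b, conclude ¬a
theorem mt (a b : Prop) (h : a → b) (nb : ¬b) : ¬a :=
  fun ha => nb (h ha)
```

Note that `mt` is just `mp` run "backwards": assuming the premises `a` held, the implication would yield `b`, contradicting the observed `¬b`.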

As long as the predictions are supported by experiments, b = TRUE, but we cannot say anything about the truth status of the premises (both a = TRUE and a = FALSE (one premise might be false) could lead to b = TRUE; in other words, metaphysics could still lead to very successful scientific theories!).

But when b = FALSE, by applying modus tollens, 'a' must necessarily be false, a = FALSE (more exactly, one or more premises are false). The hypothesis is then falsified, soundly disproved. This logical asymmetry is one of the bases for the 'falsificationist' approach. In practice it is very difficult to really disprove a theory, since we can always postulate that we are not yet aware of subtle relations between known entities which could potentially solve the puzzle later within the same paradigm. That is why some label simple Popperian falsificationism 'naive', this being the basis of the (justified) Lakatosian criticism of simple falsificationism.
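The asymmetry can be seen by brute-force enumeration of truth values. A toy Python sketch of my own (names invented for illustration): among the assignments consistent with the accepted law a -> b, a confirmed prediction leaves the premises undetermined, while a falsified prediction forces them to be false.

```python
# Enumerate truth assignments consistent with the law a -> b to show
# the logical asymmetry behind falsificationism.

def implies(a, b):
    """Material implication: a -> b is false only when a and not b."""
    return (not a) or b

# assignments (a, b) consistent with the accepted law a -> b
consistent = [(a, b) for a in (True, False) for b in (True, False)
              if implies(a, b)]

# prediction confirmed (b True): both a=True and a=False survive,
# so confirmation tells us nothing definite about the premises
a_when_confirmed = {a for a, b in consistent if b}
print(a_when_confirmed)   # {True, False}

# prediction falsified (b False): only a=False survives (modus tollens)
a_when_falsified = {a for a, b in consistent if not b}
print(a_when_falsified)   # {False}
```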

[edited for some minor changes in layout]

Last edited: Jan 17, 2004