Evaluating Scientific Experiments: Has the Standard of Proof Shifted?

AI Thread Summary
The discussion centers on whether the evaluation of scientific experiments has fundamentally changed, suggesting a shift in the standard of proof due to the complexities of modern science. Participants explore the philosophical underpinnings of scientific inquiry, emphasizing that science is more about modeling and prediction than absolute truth. The conversation highlights the challenges of induction and the existence of multiple theories that can explain the same evidence. It also touches on the evolution of scientific methods, such as the use of double-blind experiments, and the increasing complexity of scientific fields. Ultimately, the debate reflects a recognition that while the methodology of science remains consistent, the context and interpretation of scientific findings have evolved significantly.
Grimble
I thought this was the appropriate place for what is essentially (at least to my mind) a philosophical idea.

Has there been a fundamental change in the way scientific experiments are evaluated?

Has the standard of proof shifted, as indeed it probably has to, due to the phenomena we are investigating?

I was taught at school that the scientific process was to investigate, to evaluate, to form a theory and to test that theory. In particular, that one could only conclude a theory was correct if every part of it happened as envisaged and could have had no other possible cause.

But as the scope of scientific theory continues to grow we focus on the extremes: the 'infinitely' small, sub-atomic particles; the 'infinitely' large, cosmological entities and processes; the 'infinitely' brief, etc, etc.
We seem to be tending towards the line of argument: if A then B, and if evidence is found that can be considered to comply with A, then B is considered to be proven.

Please understand that I am not in any way decrying modern science, for perhaps this has always been the way?

The example of Phlogiston comes to mind: first given that name by Georg Ernst Stahl in 1703, and said by some to have had negative weight, the theory lasted for nearly 100 years.

Grimble :)
 
Good question! You're definitely talking about philosophy here, or more specifically, philosophy of science. Philosophy of science asks questions like "what does science tell us?" and "what are the limits of science?"

Many people have spent their lives trying to answer these questions, and we still don't have consensus. The short answer, however, is that you have been lied to :smile:. You'll find that this happens a lot actually, so it's good to ask what's really going on. The fact is that we don't really know as much as we say we do about a lot of topics, but in order to teach anything without getting bogged down we just have to pretend that some things are true when they are deemed close enough.

Science works on the principle of induction. We assume that if we take a measurement repeatedly we will always get the same reading. We assume that the same principles of nature that cause us to measure Earth's gravity to be 9.8 m/s^2 at some locations will hold at all locations. We then further generalize and predict what the acceleration due to gravity will be on planets we have never been to. The problem with induction is that we can never have absolute proof that things will always hold up how we think they will. This is known as the problem of induction (http://plato.stanford.edu/entries/induction-problem/).
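To make the inductive move concrete, here is a minimal Python sketch (my own illustration, nothing formal, with assumed numbers): a handful of consistent local measurements of g gets generalized into a law that we then apply to a planet nobody has ever visited.
[code]
import random

# Toy illustration of inductive generalization (assumed values throughout).
# Step 1: repeated local measurements of g all come out near 9.8 m/s^2.
random.seed(0)
measurements = [9.8 + random.gauss(0, 0.02) for _ in range(10)]
g_earth = sum(measurements) / len(measurements)
print(f"Inferred g on Earth: {g_earth:.2f} m/s^2")

# Step 2: generalize to g = G*M/r^2 and predict gravity on a planet
# we have never stood on (rough figures for Mars).
G = 6.674e-11       # m^3 kg^-1 s^-2
M_mars = 6.417e23   # kg
r_mars = 3.3895e6   # m
g_mars = G * M_mars / r_mars**2
print(f"Predicted g on Mars: {g_mars:.2f} m/s^2")

# Nothing in the ten readings deductively guarantees either step;
# that gap is exactly the problem of induction.
[/code]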

Furthermore, we can always come up with multiple alternative theories that fit experiments equally well. There are always other possible choices. Are you familiar with Galileo and the Copernican vs Ptolemaic models of the solar system? Both the heliocentric and geocentric models fit experiments equally well. We ended up switching to thinking of the sun as the center of the solar system, rather than calling Earth the center of the universe, purely because the math is simpler when we put the sun at the center.

In philosophy there is a famous thought experiment about "grue." Grue is a property of objects which causes them to be green before some year t and blue afterward. Many of the objects you had theorized were green might actually instead be grue. You have equal experimental evidence for both the theories of green and grue. From the Wikipedia article on the new problem of induction (http://en.wikipedia.org/wiki/The_New_Problem_of_Induction):
The problem is as follows. A standard example of induction is this: All emeralds examined thus far are green. This leads us to conclude (by induction) that also in the future emeralds will be green, and every next green emerald discovered strengthens this belief. Goodman observed that (assuming t has yet to pass) it is equally true that every emerald that has been observed is grue. Why, then, do we not conclude that emeralds first observed after t will also be grue, and why is the next grue emerald that comes along not considered further evidence in support of that conclusion?
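As a toy sketch of the grue point (illustration only; the cutoff year T is chosen arbitrarily), you can encode both predicates and check that every observation made so far supports them equally:
[code]
# Toy sketch of Goodman's "grue" (illustrative only; T is an arbitrary cutoff).
T = 2030

def is_green(color):
    return color == "green"

def is_grue(color, year_observed):
    # Grue: appears green if examined before T, blue if first examined after T.
    return color == "green" if year_observed < T else color == "blue"

# Every emerald examined so far (all before T) has looked green.
observations = [("green", year) for year in range(1900, 2024)]

green_support = all(is_green(c) for c, _ in observations)
grue_support = all(is_grue(c, y) for c, y in observations)

print(green_support, grue_support)  # True True: the evidence fits both
# Yet the two hypotheses disagree about emeralds first examined after T,
# and nothing in the data tells us which induction to trust.
[/code]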

You ask another question about having only indirect experimental evidence to support our theories. This too has always been the case, although it is more clearly evident in modern science. When you use an electron microscope to "see" something, you are relying on all of our theories involving how that technology will work. When you just step on a scale you are relying on our theories of gravity to present you with a proper measurement of mass. The form of argument you gave though, if A then B and A is true so B is true, is perfectly valid. If, in fact, you have proven A, then you have proven B without a doubt. The problem is, as discussed above, you can't ever deductively prove any scientific theories (although you can prove them wrong).
 
Grimble said:
Has there been a fundamental change in the way scientific experiments are evaluated?

This is a very interesting question. And we could talk about both the theory and the practice.

The theory of how to do science is still pretty much what it always was. Though the limits are better appreciated perhaps.

Science is modelling. It is a mistake to think it is purely a search for "the truth". It is about creating general models which generally predict the world - or more accurately, predict the kinds of things we are most interested in predicting. So modelling is tied to human purposes. And the main purpose (that gets socially rewarded) is the making of machines. Technology. Control over the world.

This would then be one kind of change perhaps - that theory building has become even more closely tied to the technological fruits that may result.

There is also the status research, the national prestige research - men on the moon, supercolliders, etc. This is science tied to political purposes - though there is good spin-off for military and economic purposes when it comes to nuclear physics, space races, etc.

Well, already I'm talking about the practical changes. But then there is a whole other kind of change in the fact that there are vastly more academics doing "research" at universities. And that so much of the straightforward science has been done. You now have a situation where multiple standards of quality apply. So there are domains of high rigour and others where anything goes. It is just a much more varied eco-scape these days. It is hard to speak of "science" as a monolithic or homogeneous discipline and judge it good or bad.

Returning to your key point, which seems to be about the difficulty of making the measurements that confirm the theories, yes, at the extremes we are getting towards the situation where we can essentially make only the one measurement. The state of the universe we are within.

And people also dream they may discover a ToE - which I see as a measurement-less theory of reality. Just from considerations of pure symmetry, there may be reasons why we live in this 3D realm with certain particle symmetries, etc. A few constants might be randomly chosen as the way this symmetry breaks.

Would this still be science? Well it would probably not deliver us much in the way of new technology. It would probably not be testable by new kinds of measurement (though it would have to fit all the existing ones).

But it could be more like science as the pursuit of pure truth. Indeed, more like philosophy!

And personally I don't see that as a bad thing. I would spare a few taxpayer dollars to fund it.

And with the wrangles over the status of string theory and anthropic principles, we can see people trying to thrash out the rules for entering this next phase perhaps.
 
Grimble said:
I was taught at school that the scientific process was to investigate, to evaluate, to form a theory and to test that theory.

This actually touches on a pretty major disagreement in the philosophy of science. Which comes first, evidence or hypothesis? It's a problem with formulating the method.

This is related to the rationalism vs empiricism argument, as it deals with what inspires a theory. Some theories do indeed seem to come from our imagination... and then get proved or disproved via evidence. But at the other end of things, sometimes scientists are inspired by the evidence, that is, the evidence points to a theory that no one had thought of before.

It's really a chicken-and-egg problem. Scientists are always evaluating evidence, but they can also be inspired by seemingly unrelated things.

The use of the 'double-blind' experiment also comes to mind as a 'development' in how science is done.
 
JoeDawg said:
The use of the 'double-blind' experiment also comes to mind as a 'development' in how science is done.

Oh, so is psychology a science now :wink:? I think that's the development.
 
JoeDawg said:
This actually touches on a pretty major disagreement in the philosophy of science. Which comes first, evidence or hypothesis? It's a problem with formulating the method... It's really a chicken-and-egg problem.

Ah, well at least that is always an easy loop to get out of.

Understanding begins in a vague state of development and becomes crisply developed - dichotomised into generals and particulars, models and measurements. The two naturally go hand in hand as in synergistic activity, each driving the other in their opposed directions. So no need to worry about which comes first.
 
kote said:
Oh, so is psychology a science now :wink:? I think that's the development.
Both are, but psychology is just a new field of study; it's not a change in method. You can apply 'scientific method' to anything really, including economics, and even astrology.

The double blind is a good example of how to control for bias; an appreciation of the observer's impact is something that has come to science slowly. Even now, physicists are grappling with this in terms of quantum theory.
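A rough sketch of the mechanics, if it helps (my own toy example, not a real protocol; names, codes, and scores are made up): assignments are randomised and hidden behind codes, so neither the participant nor the person scoring the outcome knows who got what until the blind is broken.
[code]
import random

# Toy double-blind assignment (illustration only; all values are invented).
random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 21)]

# Balanced, randomised allocation, generated once and held by a third party.
groups = ["treatment"] * 10 + ["placebo"] * 10
random.shuffle(groups)
allocation = dict(zip(participants, groups))

# Experimenters and participants only ever see opaque kit codes.
codes = random.sample(range(1000, 10000), len(participants))
blinded_label = {p: f"KIT-{code}" for p, code in zip(participants, codes)}

# Outcomes are recorded against the blinded label, so the assessor's
# expectations can't leak into the scoring.
results = {blinded_label[p]: random.random() for p in participants}

# Only after all outcomes are in is the blind broken and the groups compared.
by_group = {"treatment": [], "placebo": []}
for p in participants:
    by_group[allocation[p]].append(results[blinded_label[p]])
print({g: round(sum(v) / len(v), 3) for g, v in by_group.items()})
[/code]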
 
apeiron said:
So no need to worry about which comes first.
That doesn't really address the problem of defining method.
 
JoeDawg said:
That doesn't really address the problem of defining method.

This is the method...

Understanding begins in a vague state of development and becomes crisply developed - dichotomised into generals and particulars, models and measurements. The two naturally go hand in hand as in synergistic activity, each driving the other in their opposed directions.
 
  • #10
No, there has not been a fundamental change in the way we do science, though I do agree one is needed.
 
  • #11
Thank you, Gentlemen. (No, I must not assume that, must I?) [STRIKE]Gentlemen[/STRIKE] Good People, one and all: it is a much broader subject than I had anticipated, and deeper too.

In my original post I expressed the following opinion:
We seem to be tending towards the line of argument: if A then B, and if evidence is found that can be considered to comply with A, then B is considered to be proven.

To which I received the following reply:
The form of argument you gave though, if A then B and A is true so B is true, is perfectly valid. If, in fact, you have proven A, then you have proven B without a doubt.
(thank you, kote, and thank you too for a good summary of the difficulties; your reference to the fascinating problem of induction was very welcome :smile:).

But I had mixed up my own line of reasoning there; what I should have said was: We seem to be tending towards the line of argument: if A then B, and if evidence is found that can be considered to comply with B, then A is considered to be proven.

An example of this would be: 'if a star has planets, and if their orbit is in our plane, then that star's brilliance will diminish regularly. Then if such regular dimming of the star occurs, the star has planets'.
(please understand that I am not saying that this is what scientists have said, but it is the way it has been reported) and my question would be "but is there anything else that could give rise to the same evidence/observation?"
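To put the worry in concrete terms (a toy sketch of my own, not real astronomy; the period and dip depth are invented), two quite different hypotheses can imply exactly the same observable dimming:
[code]
# Toy light-curve sketch (invented numbers): hypothesis A (a transiting planet)
# and hypothesis C (a grazing eclipsing binary) both predict the same evidence B,
# a regular 1% dip every 3.5 days.
def lightcurve(period_days, depth, n_days=30, samples_per_day=24):
    curve = []
    for i in range(n_days * samples_per_day):
        t = i / samples_per_day
        phase = (t % period_days) / period_days
        curve.append(1.0 - depth if phase < 0.02 else 1.0)  # brief regular dip
    return curve

observed = lightcurve(period_days=3.5, depth=0.01)
predicted_by_planet = lightcurve(3.5, 0.01)
predicted_by_binary = lightcurve(3.5, 0.01)

print(observed == predicted_by_planet, observed == predicted_by_binary)  # True True
# Finding B is consistent with A, but concluding A from B alone is affirming
# the consequent: something other than a planet could have produced the dips.
[/code]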

Grimble:wink:
 
  • #12
Interpretations of what a physical finding means will always be contested. "If A then B, and if B then A" will hold until we find a contradiction. When we find one, the assumptions on which a scientific investigation is always based will have to be revised. You can say that this is what is currently going on in physics.
 
  • #13
apeiron said:
This is the method...
That is your opinion. There is a wide array of opinions within the philosophy of science on this particular topic, and there has been a distinct shift in opinion over the last several centuries.

And really, you haven't said much of anything about an actual method. If anything, what you have said indicates a lack of discernible method.

One of the problems with defining any method is that scientists often find the greatest success by being unorthodox... while others work with the tools that function best in their field, which means there are either many different methods, or none at all, that is, anything that generates success... wins.
 
  • #14
JoeDawg said:
One of the problems with defining any method is that scientists often find the greatest success by being unorthodox... while others work with the tools that function best in their field, which means there are either many different methods, or none at all, that is, anything that generates success... wins.


I agree with this. I have to look up who said something to the effect of:

The greatest barrier to new knowledge in science is its past success.
 
  • #15
Apeiron, what do you mean by "crispness"?
 
  • #16
Jarle said:
Apeiron, what do you mean by "crispness"?

I use it specifically as the complementary term to vagueness, so it is a technical term.

It would mean definitely and clearly existing, or fully developed, present in most certain form.

The point is that most people assume that all reality IS crisp. Something either exists or it doesn't. So crisp would be a redundant term because that is simply the way things are. Any vagueness would be semantic - as in the sorites paradox.

But I am interested in logics founded on vagueness - the dichotomous separation of pure potential. So the logic of Anaximander, some versions of Tao, and of CS Peirce. Vague and crisp are two ordinary English terms that seem to come closest to capturing the essence of the technical ideas involved.

I picked them both up from Stan Salthe, who uses them in his hierarchy theory approach.
 
  • #17
JoeDawg said:
That is your opinion.

It is an opinion supported by argument and shared by others. I've cited Rosen's modelling relations and anticipatory systems work here often enough.

Your problem is that you have not once either shown the basic idea of dichotomisation is in error, or provided some arguments and references in support of some other view.

I can only conclude you don't actually study current epistemology and are only responding from a casual layman's understanding of the issues.

So here are three refs for you to chew on.

The first compares Popper and Rosen. The second is a general intro to Rosen. The third is about bottom-up~top-down, vague to crisp, dynamic logic, based on Grossberg's anticipatory neural nets.

http://www.osti.gov/bridge/purl.cov...CC01B257B2EA5?purl=/10460-5uGkyu/webviewable/

http://www.people.vcu.edu/~mikuleck/PPRISS3.html

http://www.scitopics.com/Dynamic_logic.html
 
  • #18
apeiron said:
Your problem is that you have not once either shown the basic idea of dichotomisation is in error, or provided some arguments and references in support of some other view.

No, my problem is you repeat this stock answer about crispiness and dichotomies in response to just about any issue that comes along, and ignore any references that disagree with your rather narrow understandings. I know what a dichotomy is, I also know what a false dichotomy is.

That crisp enough for you?
 
  • #19
JoeDawg said:
I know what a dichotomy is, I also know what a false dichotomy is.

That crisp enough for you?

No. Please tell me what you understand by the terms?

I am still waiting for even the vaguest argument against the foundational importance of dichotomies in philosophical thought.

I have argued that they are the ONLY method by which people have arrived at fundamental concepts. I have not found a counter-example.

So a crisp response on that particular question would be much appreciated.
 
  • #20
Ok, for one thing I never said 'dichotomization is an error'. You seem to be going out of your way to misrepresent me... once again.

What I said was that dichotomies are not fundamental to epistemology. Dichotomies are binary constructs, which describe two distinct 'defined' states.
As in: 1,0
As in: A and not-A

Binaries can be *useful* in describing things, but knowledge is not limited to them.
A binary is just a framework.
We can describe, for instance, something with 3 states, a trichotomy.
Past, Present, Future
Dead cat, alive cat, dead-and-alive cat
Wave, particle, wave-particle.
Human color vision cones: short (blue), medium(green) and long (yellow-red)

The strength and weakness of dichotomies (and trichotomies) are that they are abstractly defined, or limited by definition. We can map a dichotomy onto anything, and that can be useful, but it can also be misleading, and invariably is, at least on some level.

Red and Blue, for instance, could be used to describe light. It's either red or not-red (blue). (i.e. green would be included in blue and yellow included in red.)
But given what we know about the biology of the human eye, saying something is 'either blue or not blue', while it may be technically true, doesn't accurately describe what you are talking about. It's misleading, because you have mapped a dichotomy onto something that is better described in another way. In fact, color is just one aspect of light the eye can see.

We can also describe something as a spectrum, or as 'approaching a state'.
A dichotomy is simply a type of map, or a convenient way of organizing. It's not fundamental to knowledge. And there are other systems of knowledge, as I pointed out before, which deal with unities, or wholes. The fact you are not impressed by them is irrelevant.

But I'm just a poor casual layman after all; you're much more impressive with your overly verbose, meandering sermons about crispiness.
 
  • #21
JoeDawg said:
Binaries can be *useful* in describing things, but knowledge is not limited to them.
A binary is just a framework.

If dichotomies were merely useful for counting things, then they would indeed be trivial - mere numerology.

There are many examples of this, like the five Chinese elements or the 12 astrological signs.

JoeDawg said:
We can describe, for instance, something with 3 states, a trichotomy.
Past, Present, Future
Dead cat, alive cat, dead-and-alive cat
Wave, particle, wave-particle.
Human color vision cones: short (blue), medium(green) and long (yellow-red)

Trichotomies are actually hierarchies when they are "good" - conceptually fundamental. And hierarchies of course are formed by dichotomies: when what has been separated then mixes.

So have you identified any legitimate triads here?

Past, Present, Future - This one is tricky as it depends how you view time. It deserves a thread of its own.

Dead cat,alive cat,deadandalive cat/Wave, particle, wave-particle - particle~wave is a dichotomy and their superposition state would be the vagueness from which they arise as separate alternatives. So really just a dichotomy. Schrodinger's cat I suppose is a dichotomy in that we can divide the world into bios and a-bios. But it is not a fundamental distinction about nature.

Human color vision cones - you will be aware how the eye actually works? That dichotomies apply widely? As with the "on-center" and "off-center" ganglion cells. And the colour opponent channels. So perhaps three cones, but four primaries - the colour pairs of red~green and blue~yellow. Even colours don't compute unless there is A and not-A.

JoeDawg said:
Red and Blue, for instance, could be used to describe light. It's either red or not-red (blue). (i.e. green would be included in blue and yellow included in red.)
But given what we know about the biology of the human eye, saying something is 'either blue or not blue', while it may be technically true, doesn't accurately describe what you are talking about. It's misleading, because you have mapped a dichotomy onto something that is better described in another way. In fact, color is just one aspect of light the eye can see.

Not blue would be yellow. Not red would be green.

But while it is revealing that the brain relies deeply on dichotomies (attention~habit, ideas~impressions, what~where, sensory~motor) this is not quite the same as the philosophically fundamental dichotomies like substance~form or stasis~flux.

Taking stasis~flux for example, can you think of a third or even fourth possibility here? Stasis and flux, by being mutually exclusive, also exclude all other options.

Well you might suggest that a third thing is the superposition of stasis~flux - and I would agree that this would be a vaguer undivided state.

Or you could also say that a mixture of stasis and flux is the third thing - and I would agree that this is then the triad of a hierarchy. We can imagine such a mixture being fractal. Indeed, as in the NK Boolean networks of Kauffman, where the fluid and the crystalline regimes are mixed "to the edge of chaos", as they used to say.

JoeDawg said:
We can also describe something as a spectrum, or as 'approaching a state'.
A dichotomy is simply a type of map, or a convenient way of organizing. It's not fundamental to knowledge.

This is an assertion not an argument.

Again, as Bohr, Hegel, Peirce and many others have pointed out, when alternatives become mutually exclusive, they must be fundamental. It is not a convenience that the discrete and the continuous exhaust the possibilities. If you can show it is actually just a convenience, then you will have an argument.

Again, focus on what seem to be the fundamental dichotomies that have arisen in the history of philosophy and thought generally.

local~global

substance~form

discrete~continuous

chance~necessity

stasis~flux

particular~general

atom~void

figure~ground

separation~mixing

simple~complex

vague~crisp
 
  • #22
From my perspective reading this thread, I see no specific discussion of experiments, as indicated in the topic. It appears more to be a generic discussion of the workings of science, rather than the role of experiments. I also wonder how many of the people who have offered responses here have actually done any extensive scientific experiments. This isn't a criticism. It is merely something I often use to judge the background expertise of the contributor.

It is highly premature to make any kind of conclusion or judgment if you are not fully aware of how things are done. Using generic impressions, or basing them on things you have no first-hand knowledge of, will simply perpetuate misconception, misinformation, or downright false impression. The "standard" impression of how science works that has been mentioned here is grossly outdated. I can cite many examples of new physics that came from nowhere other than an unexpected experimental observation that no existing theory at that time had predicted (example: CP-violation, superconductivity, fractional quantum hall effect, etc.). In other words, the experimental discovery was the impetus for a new theory that expanded physics. In fact, if you've read Harry Lipkin's sharp article (http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=PHTOAD000053000007000015000001&idtype=cvips&prog=normal), you would have come out with a strong impression that it is experiments that drive physics.

The process of knowledge doesn't follow one single path, and it would be quite naive to think that one can easily make a clean summarization of how things are done in science. Not only is there a closed loop, but it is also a self-consistent feedback loop, where at each stage both theory and experiment feed off each other to refine the understanding of something. This is especially true when the search for a valid description of a phenomenon is still ongoing (example: understanding high-Tc superconductors). Each experimental discovery drives a refinement of existing theory, or even the formation of new theory, which then drives more experiments that try to test those theories, while in the process often making startling new observations (example: the pseudogap in the underdoped phase), which in turn go back to the theorists to figure something out. And the cycle continues.

What philosophical principle you derive out of this is your expertise, not mine. But it would help if you first had accurate "data" to work with, rather than some handwaving impression. And for background info, I am an experimental physicist.

Zz.
 
  • #23
ZapperZ said:
It is highly premature to make any kind of conclusion or judgment if you are not fully aware of how things are done. Using generic impressions, or basing them on things you have no first-hand knowledge of, will simply perpetuate misconception, misinformation, or downright false impression. The "standard" impression of how science works that has been mentioned here is grossly outdated.

Hi ZapperZ. Which standard impression are you referring to? There is a difference between what contemporary scientists do and the general concepts of inductive empirical inquiry. There are also different philosophies regarding each.

Every one of us does experiments every day in the sense of pure empiricism. I believe that the sun will rise in the morning because I've seen it rise every morning. Every time I wake up is a new experiment in a sense. When we talk about this kind of experiment, we certainly aren't saying anything about the sociology of contemporary physics, or paradigms, or how science is or should be done. We are talking about the philosophical underpinnings that allow us to have science at all and the limits that those underpinnings place on the knowledge science can give us. These basics are necessary for us to consider how valid or not it is for experiments to drive science, or for theories to drive science, etc.
 
  • #24
kote said:
Hi ZapperZ. Which standard impression are you referring to? There is a difference between what contemporary scientists do and the general concepts of inductive empirical inquiry. There are also different philosophies regarding each.

Every one of us does experiments every day in the sense of pure empiricism. I believe that the sun will rise in the morning because I've seen it rise every morning. Every time I wake up is a new experiment in a sense. When we talk about this kind of experiment, we certainly aren't saying anything about the sociology of contemporary physics, or paradigms, or how science is or should be done. We are talking about the philosophical underpinnings that allow us to have science at all and the limits that those underpinnings place on the knowledge science can give us. These basics are necessary for us to consider how valid or not it is for experiments to drive science, or for theories to drive science, etc.

The topic asked about "scientific experiments". I used the conventional understanding of what is implied by that, rather than anecdotal observations or simply observations without any kind of systematic analysis of both the observed and the observer. It would be an insult, don't you think, to treat your everyday observation as anywhere near identical to a typical scientific experiment at, say, the LHC, and to put the two on the same footing.

Zz.
 
  • #25
ZapperZ said:
The topic asked about "scientific experiments". I used the conventional understanding of what is implied by that, rather than anecdotal observations or simply observations without any kind of systematic analysis of both the observed and the observer. It would be an insult, don't you think, to treat your everyday observation as anywhere near identical to a typical scientific experiment at, say, the LHC, and to put the two on the same footing.

Zz.

I don't think it's an insult at all. Logically the two use the same methods to draw conclusions about the correlation between observations. It's easier to consider the logic using simple examples than it is to start off with a very complicated experiment that took thousands of years to arrive at. Empirical knowledge is gained through the inductive consideration of observations and the development of theories to model those observations. If I did not perform a systematic analysis on my observation of the sun rising each morning, I would never come to expect that it will rise regularly.

I'm not disagreeing with you at all about contemporary science. Anyone who has taken an introductory philosophy of science course will be familiar with your concept of experiments leading to new theory. Kuhn's The Structure of Scientific Revolutions is standard reading, as are Popper and Feyerabend. Kuhn draws a distinction between "normal" and "revolutionary" science, placing the creation of new theories arising from unexpected and unexplainable observations in the revolutionary category. Feyerabend's Against Method argues that there is no such thing as a strict method that science follows (or can or should follow).

If anyone reading has been thinking that, beyond the basic inductive method of inference from repeated observation, there is a particular structured "scientific method" that is followed, I recommend taking a look at http://en.wikipedia.org/wiki/Epistemological_anarchism.

The Stanford Encyclopedia of Philosophy also has good introductory articles on Thomas Kuhn (http://plato.stanford.edu/entries/thomas-kuhn/) and other related topics (see "Related Entries" at the bottom of each page).
 
  • #26
apeiron said:
I use it specifically as the complementary term to vagueness, so it is a technical term.

It would mean definitely and clearly existing, or fully developed, present in most certain form.

The point is that most people assume that all reality IS crisp. Something either exists or it doesn't. So crisp would be a redundant term because that is simply the way things are. Any vagueness would be semantic - as in the sorites paradox.

But I am interested in logics founded on vagueness - the dichotomous separation of pure potential. So the logic of Anaximander, some versions of Tao, and of CS Peirce. Vague and crisp are two ordinary English terms that seem to come closest to capturing the essence of the technical ideas involved.

I picked them both up from Stan Salthe, who uses them in his hierarchy theory approach.

I agree with you.
 
  • #27
ZapperZ said:
The topic asked about "scientific experiments". I used the conventional understanding of what is implied by that, rather than anecdotal observations or simply observations without any kind of systematic analysis of both the observed and the observer. It would be an insult, don't you think, to treat your everyday observation as anywhere near identical to a typical scientific experiment at, say, the LHC, and to put the two on the same footing.

It is a good point that the unpredicted or mispredicted drives new theory. That is one of the features that makes science develop faster. We measure the world in a way that seeks the exceptions to the rule.

Fundamentally, the processes of scientific knowing and ordinary knowing are the same. Any kind of mind - animals included - is a model of reality that anticipates what will happen and then responds to the exceptions.

As I reach out my hand to pick up my coffee cup, I have a mental prediction of just how it is going to feel to the grasp. And so accurate is the prediction that I don't really notice the feeling as I pick the cup up. But should the cup have been transformed into a rubber doppelganger, then that exception to prediction becomes the urgent subject of new theory-building!

So there is this basic hypothesis-and-test principle in the architecture of mind. This, some feel, is the best way to model mind and life - as anticipatory systems.
http://www.istc.cnr.it/doc/1a_0000b_20080724d_anticipation.pdf

Is science with its expensive tools like the LHC any different?

Well I think science can be credited with a more systematic openness to experimental exceptions. Minds do try to gloss over the holes in predictions. We call it prejudice. Because of the effort of changing well-established thought habits, exceptions will be assimilated rather than responded to a lot of the time.

This would be because our mental structures are a fairly homogeneous whole - a world model. Science is different in that it operates in a much more fractured way. There are thousands of sub-disciplines and so each shard can make changes to its local world model, so to speak.

We would also want to talk about the grades of observation practiced in science. In softer sciences like biology, there is the uncontrolled experiment - going out and observing, noticing, cataloguing, collecting. Field work as well as lab work.

And we have to bear in mind the purposes of the modeller. So it is naive to think that even the LHC is some pure search after naked truth. It is also about prestige. It probably no longer has an overt weapons technology purpose, but particle physics has a large establishment because this was a historical purpose.

The OP was really about the possibly changing balance, or nature, of the two parts of the knowing process that describes minds, and the extension to what minds do that we call science. Now that we are trying to reach beyond practical measurement, does this mean science stops, that science becomes unreliable, that science has to find some new balance, some new method?

From my perspective, I think science - or at least those parts of it addressing the boundary questions of the very small, the very large, and the very complex - has to change its approach to make progress.

Theory and measurement must continue to be the formula in some way as this is simply the way minds work in modelling reality. But I think it is pretty obvious that the current activity in those boundary domains is too much in the analytic and technological tradition. The sociology needs to be tilted towards the synthetic and philosophic end of the spectrum.

So measurements would need to focus on measuring the whole rather than measuring the parts, for instance. What does this even mean? Well if you take a systems approach, then you would look for models such as dissipative structure theory or semiotics which span many kinds of systems (biological, chemical, physical). Then measure reality in terms of that common coin.

We are sort of doing that already. Witness the way thermodynamics became part of cosmology (inflation, black hole entropy, the Bekenstein bound). Or the way condensed matter approaches are being suggested for particle physics (Wen, Volovik, Laughlin).

Philosophy of science would be about discussing such shifts in collective behaviour explicitly.
 
  • #28
I think most of you are forgetting something very important. Physics, for example, isn't just about describing that what goes up must come down. It must also involve, get this, when and where it will come down! The quantitative aspect of it is a huge part of physics, and science in general. When you lack that, you are simply making a handwaving argument. You say that the sun rises each morning? Fine. Can you tell me when, where, and how it goes through the sky, please?
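For instance, a bare-bones sketch (illustration only, with made-up launch values and no air resistance) of what a quantitative answer to "what goes up must come down" looks like:
[code]
import math

# Bare-bones projectile sketch (no drag; v0 and angle are assumed values).
# The point: the model says when and where it comes down, not just that it does.
g = 9.81                    # m/s^2
v0 = 20.0                   # launch speed, m/s
angle = math.radians(40.0)  # launch angle

t_flight = 2 * v0 * math.sin(angle) / g    # when it lands (s)
x_range = v0 * math.cos(angle) * t_flight  # where it lands (m)

print(f"comes down after {t_flight:.2f} s, {x_range:.1f} m downrange")
[/code]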

Casual observation is NOT the same as scientific experiment. If it is, then every single paranormal pseudoscience would be part of science. You can also prove that to me by publishing what you observe casually in a science journal. Till then, all we have is what you insist on without any valid foundation. What makes your opinion any better than mine, which states that they are not the same thing?

Zz.
 
  • #29
ZapperZ said:
I think most of you are forgetting something very important. Physics, for example, isn't just about describing that what goes up must come down. It must also involve, get this, when and where it will come down! The quantitative aspect of it is a huge part of physics, and science in general. When you lack that, you are simply making a handwaving argument. You say that the sun rises each morning? Fine. Can you tell me when, where, and how it goes through the sky, please?

Casual observation is NOT the same as scientific experiment. If it is, then every single paranormal pseudoscience would be part of science. You can also prove that to me by publishing what you observe casually in a science journal. Till then, all we have is what you insist on without any valid foundation. What makes your opinion any better than mine, which states that they are not the same thing?

Zz.

We're not forgetting :smile:. What you are describing is called the demarcation problem (http://en.wikipedia.org/wiki/Demarcation_problem). There are serious logical problems with demarcating science. Falsification was Popper's influential method of demarcation. According to Popper, if you could potentially prove a theory wrong by observation, then it counts as science.

By the falsification criterion of demarcation, a theory of the sun rising each morning and setting each night is absolutely science. The problem is that it turns out to be logically impossible to falsify a theory by reference to evidence or observation (http://en.wikipedia.org/wiki/Confirmation_holism).

Taking your criteria for demarcation above... the sun rising every day can be quantified. Every 24 hours the sun will rise. Of course, experiments will show that this is not exact. But when was the last time you exactly measured anything? We ignore experimental error if our theory still remains the best explanation.

As for the how, science never answers that question at the fundamental level. In psychology we answer "how" by reference to biology. In biology we answer "how" by reference to chemistry. In chemistry we answer "how" by reference to physics. Physics doesn't even attempt an explanation - it just tells you how things are. Was Planck not doing science when he came up with his constant while simultaneously claiming that it "could not be expected to have more than a formal significance?"

The fact is that it is very difficult, and most likely impossible, to smoothly draw a line between science and pseudo-science. So what we are left with are things like journals and grants and other sociological factors that determine what counts and what doesn't. I don't agree that journals are a good test either. Try submitting any science done 100 years ago or more to a journal. Surely science has been around for longer than 100 years, but none of that would ever be published.

Casual observation is clearly not science. I would argue that coming up with a quantified model based on repeated observations is. My model of the sun rising every 24 hours is both quantified and based on repeated experiments, and no experiments have shown it to be wrong outside of a certain experimental error. Of course you can come up with better models, but does that mean the original wasn't science? Were Newton's laws declared to no longer be science when we came up with relativistic versions?

Demarcation is not such a simple problem, and agreement only seems to come primarily by looking at individual cases (as is common in ethics and aesthetics). http://plato.stanford.edu/entries/pseudo-science/#UniDiv:
Kuhn observed that although his own and Popper's criteria of demarcation are profoundly different, they lead to essentially the same conclusions on what should be counted as science respectively pseudoscience (Kuhn 1974, 803). This convergence of theoretically divergent demarcation criteria is a quite general phenomenon. Philosophers and other theoreticians of science differ widely in their views of what science is. Nevertheless, there is virtual unanimity in the community of knowledge disciplines on most particular issues of demarcation. There is widespread agreement for instance that creationism, astrology, homeopathy, Kirlian photography, dowsing, ufology, ancient astronaut theory, Holocaust denialism, and Velikovskian catastrophism are pseudosciences. There are a few points of controversy, for instance concerning the status of Freudian psychoanalysis, but the general picture is one of consensus rather than controversy in particular issues of demarcation.

It is in a sense paradoxical that so much agreement has been reached in particular issues in spite of almost complete disagreement on the general criteria that these judgments should presumably be based upon.
 
  • #30
kote said:
We're not forgetting :smile:. What you are describing is called the demarcation problem (http://en.wikipedia.org/wiki/Demarcation_problem). There are serious logical problems with demarcating science. Falsification was Popper's influential method of demarcation. According to Popper, if you could potentially prove a theory wrong by observation, then it counts as science.

By the falsification criterion of demarcation, a theory of the sun rising each morning and setting each night is absolutely science. The problem is that it turns out to be logically impossible to falsify a theory by reference to evidence or observation (http://en.wikipedia.org/wiki/Confirmation_holism).

Taking your criteria for demarcation above... the sun rising every day can be quantified. Every 24 hours the sun will rise. Of course, experiments will show that this is not exact. But when was the last time you exactly measured anything? We ignore experimental error if our theory still remains the best explanation.

As for the how, science never answers that question at the fundamental level. In psychology we answer "how" by reference to biology. In biology we answer "how" by reference to chemistry. In chemistry we answer "how" by reference to physics. Physics doesn't even attempt an explanation - it just tells you how things are. Was Planck not doing science when he came up with his constant while simultaneously claiming that it "could not be expected to have more than a formal significance?"

The fact is that it is very difficult, and most likely impossible, to smoothly draw a line between science and pseudo-science. So what we are left with are things like journals and grants and other sociological factors that determine what counts and what doesn't. I don't agree that journals are a good test either. Try submitting any science done 100 years ago or more to a journal. Surely science has been around for longer than 100 years, but none of that would ever be published.

Casual observation is clearly not science. I would argue that coming up with a quantified model based on repeated observations is. My model of the sun rising every 24 hours is both quantified and based on repeated experiments, and no experiments have shown it to be wrong outside of a certain experimental error. Of course you can come up with better models, but does that mean the original wasn't science? Were Newton's laws declared to no longer be science when we came up with relativistic versions?

Demarcation is not such a simple problem, and agreement only seems to come primarily by looking at individual cases (as is common in ethics and aesthetics). http://plato.stanford.edu/entries/pseudo-science/#UniDiv:

So let me try to put words into your mouth here. You think you have a good grasp, beyond just superficial knowledge, of what experimental science really is, enough to make general characterizations about it?

With respect to your analogy to the sun rising every 24 hours, there are variations to such a "rule" beyond just "experimental error". We are not dealing YET with experimental errors here. Variation in sunrise and sunset times occurs at all locations on the Earth over the course of a year. Even ancient civilizations made detailed studies of such things. So a simple handwaving statement that the sun rises every 24 hours is quantitatively in error.

And I have no idea why I'm being given a lecture on what science/physics does or doesn't do. Unless you are disputing my view of how experiments are done and how they fit into how science is practiced, I don't see any relevance to such a thing. My incursion into this thread was not meant to "enhance" your philosophical discussion. I'm stating things based on what I perceived to be severe misconceptions about something I have first-hand knowledge of. If this is something you do not like, all I can say is, I'm sorry, but that's just the way it is.

Zz.
 
  • #31
ZapperZ said:
I think most of you are forgetting something very important... The quantitative aspect of it is a huge part of physics, and science in general... What makes your opinion any better than mine, which states that they are not the same thing?

Where are you seeing this in what is being said? My main citation all along has been Rosen's modelling relations. Is this what you think he is saying?

However, in modelling, quality is as important as quantity. And precision arises from the dichotomisation. You have to push the accuracy of both to achieve sharper (ie: crisper) modelling.

For instance, physics has made its strides by defining reality in terms of qualities that can be measured as quantities. You create the very specific notion of energy, or duration, or charge, then you can go out and measure the world more exactly and objectively in those constructed terms.

This is the essence of science. Creating the quantity~quality dyads that are objective in the sense that we can all measure reality in the same way to compare results.

If I say that sofa is red, you may have different visual paths that make you say it is puce. But if we both step back to something more abstract like the concept of wavelength, then we can invent measuring apparatus that allow us to compare quantitative readings.

The big problem in mind science, as an example, is that no one really knows what they should be measuring. They don't have the qualitative concepts that allow for the quantitative measurements. The field is a mess. It tried to rally around the notion of the hunt for the neural correlates of consciousness in the 1990s, but as I say, just could not agree on a correct qualitative definition that would allow actual meaningful measurements.

So the philosophy of science would be concerned about the method of developing qualitative concepts as much as the quantitative measurements.

I have a particular interest in the qualitative question. And this is because 1) mind science has plenty of data, but no proven concepts. And 2) because even physics has settled into a set of concepts and has not really done the groundwork for a new systems-level approach to reality modelling.
 
  • #32
I would not touch "mind science" with a 10-foot sofa.

BTW, how is the concept of "wavelength" more "abstract" than the concept of color? Wavelength is well-defined. Color isn't.

Zz.
 
  • #33
ZapperZ said:
So let me try to put words into your mouth here. You think you have a good grasp, beyond just superficial knowledge, of what experimental science really is, enough to make general characterizations about it?

Zz.

That certainly wasn't the point. I think it's safe to say that I'm familiar with experimental science, although I don't claim to be an expert in any particular field. Or are you asking for my personal background?

The point was that there is an academic discipline that professionally studies what science is and which has established a large body of knowledge after centuries of debate. Kuhn, one of the most influential philosophers and historians of science, whom I have studied, began as a physics professor before switching to philosophy. Feyerabend also started in physics.

There are things that science can and can't be. There are methods of determining what counts as science and what doesn't. These have been studied and debated by scientists and philosophers. I didn't claim that there is consensus or that I know the correct method of demarcation.

Epistemology is the branch of philosophy that studies rational knowledge, including the logical justification for beliefs. Both deductive and analytic beliefs and inductive empirical beliefs are studied. Science, in using inductive empirical reasoning, is subject to the constraints of inductive logic. Philosophy studies what those constraints are.

So yes, there are generalizations that can be made about science. In the course of this thread, I haven't presented anything that isn't already accepted by the academic discipline that studies what science is. I've tried to show sources, but I can provide more specifics if you're interested.
 
  • #34
ZapperZ said:
I would not touch "mind science" with a 10-foot sofa.

BTW, how is the concept of "wavelength" more "abstract" than the concept of color? Wavelength is well-defined. Color isn't.

Zz.

Tell that to someone who hasn't studied physics :wink:. I suppose this one is relative, but if you take what you immediately perceive - you immediately perceive color. It takes more inference to get to a concept of wavelength. In that sense, it's more abstract. Obviously from a physics point of view this isn't the case.

I'm not aware of any research into levels of abstractness in this sense :smile:.
 
  • #35
I see you added to your post.
ZapperZ said:
So a simple handwaving statement that the sun rises every 24 hours is quantitatively in error.

Even theories I think we could both agree are science are known to quantitatively be in error. That was the point I tried to make with Newton's laws. Newton's laws are quantitatively in error. They were never shown to exactly predict experiments. We now know that Newton's laws are not fundamental at all, except as approximations. Was Newton not doing science?

We know that quantum mechanics is quantitatively in error. It doesn't jibe with gravity. It is also a theory that only gives approximate or statistical predictions. Is quantum mechanics not science?

The fact that a theory is approximate does not seem to exclude it from the realm of science. In its logical principles, the theory of the sun rising every 24 hours, as an approximate theory, is no different.

ZapperZ said:
And I have no idea why I'm being given a lecture on what science/physics does or doesn't do. Unless you are disputing my view of how experiments are done and how they fit into how science is practiced, I don't see any relevance to such a thing. My incursion into this thread was not meant to "enhance" your philosophical discussion. I'm stating things based on what I perceived to be severe misconceptions about something I have first-hand knowledge of. If this is something you do not like, all I can say is, I'm sorry, but that's just the way it is.

I guess it's unfortunate that you aren't interested in the issue beyond posting your personal opinion. I thought it certainly was adding to the discussion. It sounds like we're actually in agreement on how normal experimental science is done.

Everything I've said has been said before me by professional physicists who have explicitly studied the issue. I also have first hand experience with serious experimental research, although I'm sure you have more. I don't blame you for not being interested in the study of what science is, though. In the words of Feynman, "Philosophy of science is about as useful to scientists as ornithology is to birds."
 
  • #36
ZapperZ said:
I would not touch "mind science" with a 10-foot sofa.

But you would also like to see mind science done right?

ZapperZ said:
BTW, how is the concept of "wavelength" more "abstract" than the concept of color? Wavelength is well-defined. Color isn't.

Wavelength is based on the concept of a wave, or a resonance. It is a mathematical concept made precise by sine and other quantifying tricks. It is abstract in that we can apply it to many more things than actual waves on the sea.

But this is obvious to you.

Colour is clearly not a very objective concept. When we talk of the colour of quarks, it is the vaguest of analogies - a reminder of threeness - not meant as a general physical resemblance.

So my point was that comparing colours is a subjective exercise - though if you were doing psychophysics experiments in the laboratory, there are ways of making measurements that are relatively well controlled.

But wavelength is a physical idea. It has two parts. There is the qualitative concept of "a wave" and then the quantitative machinery that can be used to measure the "waviness" of many aspects of reality.

In fact, the correlation between wavelength and colour experience is not one-to-one. Google Land color constancy if you are interested.
 
  • #37
kote said:
I see you added to your post. Even theories I think we could both agree are science are known to quantitatively be in error. That was the point I tried to make with Newton's laws. Newton's laws are quantitatively in error. They were never shown to exactly predict experiments. We now know that Newton's laws are not fundamental at all, except as approximations. Was Newton not doing science?

This is totally irrelevant to my point. Newton's laws are PERFECTLY VALID in their realm of application. A structural engineer will look at you funny if you insist on using QM or SR to build a house. My point has nothing to do with something being an "approximation". It has everything to do with the "First Principle" aspect of the description. Your casual observation of the description of the sun's rise is faulty IN PRINCIPLE.

Furthermore, this misses the way in which such a description is done in science, and especially in physics. We don't simply say "The electric field at a point r away from a source charge is E V/m". This says nothing, in the very same way that saying the sun rises every 24 hours says nothing. However, writing that

E = \frac{kq}{r^2}

provides the useful description of the phenomenon, which is why physics is done this way. What you did was merely "stamp-collecting". It is not the same type of experiment that is done in science. It is void of any study of the relationships between two or more quantities being studied, i.e. how does A behave with B? This is what Newton's laws have. It has nothing to do with being "exact" or an "approximation".
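A minimal sketch of that point, for what it's worth (illustration only; the charge value is made up): the formula lets you ask how E behaves as r changes, which the bare statement "the field at r is E V/m" cannot.
[code]
# Minimal sketch of the "how does A behave with B" point (assumed values).
# Coulomb's law E = k*q / r^2 relates field strength to distance.
k = 8.99e9   # Coulomb constant, N m^2 / C^2
q = 1e-9     # a 1 nC source charge

for r in (0.1, 0.2, 0.4, 0.8):  # metres
    E = k * q / r**2
    print(f"r = {r:.1f} m  ->  E = {E:8.2f} V/m")

# Doubling r quarters E; that relationship is the content of the law,
# and it is what a table of isolated readings ("stamp-collecting") lacks.
[/code]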

kote said:
I guess it's unfortunate that you aren't interested in the issue beyond posting your personal opinion. I thought it certainly was adding to the discussion. It sounds like we're actually in agreement on how normal experimental science is done.

But I thought this whole thread (and generally, this forum) is filled with "personal opinion". My personal opinion here is that there are misconceptions about what is deemed experimental science and how it fits into how science is practiced. And that opinion comes from a practicing experimentalist. If you think that you and everyone involved already have a good grasp of what it is that you are talking about, then I'm sorry to have attempted to correct the error. So carry on.

Zz.
 
  • #38
apeiron said:
mere numerology.
There are many examples of this, like the five Chinese elements or the 12 astrological signs.
And these are not useful because they don't actually, regardless of claims to the contrary, represent anything concrete. It is mere numerology if it doesn't map to something more concrete.
Trichotomies are actually hierarchies
If you like, fine. But it doesn't really matter to what I'm saying if they are. Hierarchies are constructs; they don't actually exist concretely, they are abstract representations. Taking an example from biology... biologists used to refer to 'food chains', the implication being that a vertical hierarchy exists, but now they refer to 'food webs'. Similarly the 'north' and 'south' poles of a magnet are just labels. There is no reason not to call the south pole 'north'. It's just an arbitrary standard. It's based on something concrete, so you can't just arbitrarily change it; the pole called north does have certain properties the pole called south doesn't, but the label itself doesn't matter. The abstract is arbitrary, the concrete is not.
Even colours don't compute unless there is A and not-A.
That's because colors are artificial. There are no colors, there is only a spectrum of wavelengths interpreted by your brain. Colors are arbitrary. Some human languages don't distinguish between green and blue.
Not blue would be yellow. Not red would be green.
The ones we select will depend on which ones are concretely useful, that is the litmus test. Pigment color wheels are different from 'direct light' color wheels. Print color is normally CMYK, video RGB.

And yes, you can dichotomize... if it's useful to do so. It may not be. Dichotomies are just one way of representing information.
Taking stasis~flux for example, can you think of a third or even fourth possibility here?
That would depend on what concrete thing you were talking about. Let's look at water, for example. When water is frozen it's in a form of stasis (there are other forms, obviously). When it melts it changes, when it evaporates it changes again.

Solid/Liquid/Gas

One could argue these are the 'fundamental' states of water. Saying it's either solid or not is accurate, but not always useful. What is fundamental then depends on the concrete, but as soon as you start talking about concrete things, the line between states gets less precise.
when alternatives become mutually exclusive, they must be fundamental.
If you're talking purely abstractly, it is definitional, and purely arbitrary (numerology, as you say). If you are talking concretely, then it depends on what is useful. In the latter sense, what is useful is not arbitrary, but it does depend on context.
 
  • #39
JoeDawg said:
If you're talking purely abstractly, it is definitional, and purely arbitrary (numerology, as you say). If you are talking concretely, then it depends on what is useful. In the latter sense, what is useful is not arbitrary, but it does depend on context.

First, the definition of "arbitrary" is: determined by chance, whim, or impulse, and not by necessity, reason, or principle.

The series of dichotomies I list are distinguished by their non-arbitrary nature. The point is that the existence of one as a concept then demands its other as a matter of necessary logic.

So if you have stasis as one kind of extreme, then the opposite would be flux. It couldn't be anything else. It couldn't be two or three other kinds of things. That is as far from arbitrary as you can get.
 
  • #40
ZapperZ said:
The "standard" impression of how science works that has been mentioned here is grossly outdated. I can cite many new physics, for example, that came out of nowhere other than simply an expected experimental observation that no existing theory at that time had predicted
How science is done has changed since 'the method' was first defined, and can be very different depending on the field one is in. Physics and astronomy (I'm not speaking professionally here, so feel free to judge and insult me) are almost entirely data driven, largely because of technology. We have developed recording devices that allow us to accumulate/analyse large amounts of data. More primitive physics and astronomy are more dependent on theories and hypotheses, because you have to choose your experiments wisely or you won't get the data you need.

Which is why everyone is theorizing like crazy leading up to the LHC turning on. They are looking for a headstart on analysing the data... when they finally get it. But this situation is similar to how science used to be done, almost uniformly, which is why the method was strictly defined the way it was.

The process of knowledge doesn't follow one single path, and it would be quite naive to think that one can easily make a clean summarization of how things are done in science.
I would agree, but that is also very problematic for science, because it makes distinguishing pseudo-science difficult. You can always argue 'common sense', but that's rarely convincing.
 
  • #41
ZapperZ said:
You can also prove that to me by publishing what you observe casually in a science journal.

You mean, like those cold fusion and cloning experiments that got published? :)
 
  • #42
Is it possible in practice to win an argument with an experienced philosopher? An experienced philosopher will argue anything and everything, while staying within the wide limits of generalisations.
 
  • #43
apeiron said:
First, the definition of "arbitrary" is: determined by chance, whim, or impulse, and not by necessity, reason, or principle.
Sounds good.
The series of dichotomies I list are distinguished by their non-arbitrary nature.
Due to the fact that they correspond to concrete observation, and only to the extent that they correspond.

Try these binaries:

Finite/Infinite
Something/Nothing

In both cases the first part of the pairing is non-arbitrary; we can find a concrete example of it. However, the second part of each pairing is more problematic, largely because we don't have a concrete example. The second is a pure negation of the first, purely abstract.

Then try:
Justice/Injustice
Perfection/Imperfection.

These are certainly dichotomies, and if you have one then you don't have the other, but the more abstract they are the more arbitrary they are. Logic won't save you. You need a foundation on which to build premises about justice, something concrete, or you can't distinguish that there is any difference at all.
The point is that the existence of one as a concept then demands its other as a matter of necessary logic.

Fine:

Fliddidle/Baddidle

It's a dichotomy. A completely arbitrary one. I've really only replaced A and not-A with nonsense words. Where does the meaning come from? I could create a whole series of dependent binaries based on this one. No, I'm not going to.

So if you have stasis as one kind of extreme, then the opposite would be flux. It couldn't be anything else. It couldn't be two or three other kinds of things. That is as far from arbitrary as you can get.

But only because you have defined them as opposites... Unless you have something concrete to base them on... which you do... and quite a lot.
 
  • #44
JoeDawg said:
Finite/Infinite
Something/Nothing

Finite and infinite are not a dichotomy. But if we generalise further, we get discrete~continuous. You understand the difference?

Is the not-finite the infinite? Well, no, because it could be the infinitesimal. But is the not-discrete the continuous? Well, we can't think of any other intelligible alternative.

Something and nothing are also not really general enough to be dichotomies. The fact we can think of a third thing as a natural part of this group (everything) should tell you this.

I would generalise nothing and something to the vague~crisp.

JoeDawg said:
Then try:
Justice/Injustice
Perfection/Imperfection.

These are simple negations, not the real thing: an asymmetric dichotomy. You can't do much philosophical heavy-lifting with simple negations.

If something is not just, is it necessarily unjust? If something is not perfect, is it imperfect? It seems we need more information here as the answer is not contained within the dichotomy itself.

If something is perfect, well that sounds pretty absolute. But is something that is merely OK also imperfect in that absolute sense, or just a relative sense? Both halves of the dichotomy have to be equally developed, equally crisp, equally absolute.

JoeDawg said:
Fliddidle/Baddidle

Its a dichotomy. A completely arbitary one. I've really only replaced A and not-A with nonsense words. Where does the meaning come from?? I could create a whole series of dependent binaries based on this one. No, I'm not going to.

Correct. This is indeed nonsense. How could you see this as constituting an argument?

You still seem to miss the essential point about dichotomies...as they relate to philosophy. There are not an infinite number. They arise from the search for what is most basic, most general, about reality.

My own opinion is that the number of truly universal dichotomies can in fact be reduced to just two. There is the dichotomy of development - vague~crisp, and the dichotomy of existence - local~global.

But you can still get the principle of dichotomies just from considering the classical ones like substance~form, discrete~continuous, stasis~flux, chance~necessity, etc. There was a reason Plato, Aristotle and the rest spent so much time working up these dichotomies, and why they persist even into modern scientific theory as the basic dualities.

Is there some reason why you always make up other "dichotomies" and don't address the ones I say are actually dichotomies?
 
  • #45
kote said:
If I did not perform a systematic analysis on my observation of the sun rising each morning, I would never come to expect that it will rise regularly.

When you wrote "If I did not perform a systematic analysis on my observation of the sun rising each morning...", did you mean:
" perform a systematic analysis... each morning",
"... my observation of the sun rising each morning...", or
"... the sun rising each morning..."?

Whichever it is, I am not sure what you are getting at, for I am sure that there must be many, like myself, who do none of that yet expect that the sun will rise each morning. (And, what do you know... it does!)

Zz is quite right in his observations. I (the OP) was querying the way some experiments were reported, I suppose, as I stated in my last post, where I corrected a mistake in my first one.
Perhaps someone would be good enough to reply to that...

Thank you, one and all, Grimbleo:)
 
  • #46
apeiron said:
You still seem to miss the essential point about dichotomies...as they relate to philosophy. There are not an infinite number.
Yes there are; some are just more useful than others.
They arise from the search for what is most basic, most general, about reality.
Exactly, we use the ones that reflect our experience. Experience is the foundation, like I have been saying; it's not dichotomies or axioms that are fundamental, those are just generalizations from experience.

We take particulars and generalize, then use those generalizations to predict future (or unobserved) particulars. What is fundamental is experience, however, because it is where we get logic and data.
 
  • #47
Grimble said:
Zz is quite right in his observations. I (the OP) was querying the way some experiments were reported, I suppose, as I stated in my last post, where I corrected a mistake in my first one.
Perhaps someone would be good enough to reply to that...

Not sure I understand what either of you is getting at... but one of the problems in science, and this may relate to the theory of phlogiston, is as follows...

http://en.wikipedia.org/wiki/Affirming_the_consequent#Use_of_the_fallacy_in_science
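For anyone not following the link: the fallacy it describes, affirming the consequent, has the standard textbook form below (this rendering is my own, not quoted from the article), shown alongside the valid modus ponens for contrast.

(P \rightarrow Q), \; Q \;\; \therefore \; P \quad \text{(affirming the consequent, invalid)}

(P \rightarrow Q), \; P \;\; \therefore \; Q \quad \text{(modus ponens, valid)}

In scientific terms: a theory T predicts observation O, O is observed, and one concludes T is true. That inference has the invalid form above, which is one reason confirming observations corroborate a theory rather than prove it.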
 
  • #48
JoeDawg said:
We take particulars and generalize, then use those generalizations to predict future (or unobserved) particulars. What is fundamental is experience, however, because it is where we get logic and data.

Phew, it's taken a while but you're very nearly there.

Particulars and generals are the modelling dichotomy - the subject of this thread. And in happy fashion, the modelling dichotomy is the way to model fundamental experience. So it all fits nicely together.

So it is not some arbitrary choice. The way our minds work is also the way our scientific modelling works.

The mind is an anticipatory system that predicts the world. It develops general ideas that frame the particular impressions. And in dichotomous fashion, the parade of particular impressions is generalised to form those general ideas.

A child at first sees cats and dogs in a vague way because it lacks the developed ideas that allow more precise impressions. Later, learning through unsuccessful predictions (hey, that cat barked!), the child will develop more crisp ideas of "the general dog" and "the general cat" that allow more definite particular experiences of these animals. Still later, it may learn to tell Persians and Maine Coons apart. The long term memories and the short term memories develop hand in hand.

So fundamental experience is in fact fundamentally dichotomised into generals and particulars - the framing ideas and the moment to moment states of anticipation and comprehension we would call our impressions.

Then science is the same process writ larger and more systematic. Hypothesis and test, formal model and informal measurement, theory and prediction, concept and quantification, universals and specifics, generals and particulars. There are many ways of saying the same thing. But dichotomisation is the way rather than merely a way.
 
  • #49
apeiron said:
So it is not some arbitrary choice.
I have not changed my position. Dichotomies are not fundamental to epistemology.

If they are purely abstract, dichotomies are arbitrary.
If they are purely abstract, axioms are arbitrary.

If they are based on experience, then they have a foundation.
The problem is, experience is not precise, nor is it universal, and experience doesn't prove anything.

The only way you can have proof is if you define things purely abstractly, but then where you start is arbitrary. You can choose to start anywhere, you can choose any initial premise, any form of logic.

It is experience that puts limits on the type of logic you use, and the premises you choose. Experience, however, is wholly subjective; it is descriptive, not definitional.

So fundamental experience is in fact fundamentally dichotomised into generals and particulars - the framing ideas and the moment to moment states of anticipation and comprehension we would call our impressions.

No, that is the way we frame experience; it is the way we generalize experience. Experience is overflowing and messy, it's too much for us to digest, so we create an artificial structure in order to make sense of it.

Dichotomies are artificial, constructed, created... and unless they are based on experience, they are arbitrary.
 
  • #50
JoeDawg said:
I have not changed my position. Dichotomies are not fundamental to epistemology.

If they are purely abstract, dichotomies are arbitrary.
If they are purely abstract, axioms are arbitrary.

Again, to remind you, the definition of "arbitrary" is: determined by chance, whim, or impulse, and not by necessity, reason, or principle.

So you really want to say that abstractions are a matter of chance, whim or impulse?

Do you think you could flesh this surprising claim out with some further argument, or references to others who have taken this view?

Of course we can all agree that these are matters of subjective choice. But hopefully it is a reasoned and principled choice.

I indeed go further and feel it is a necessary choice. And I can see there is room to debate this claim. But the idea that the choices of axioms and dichotomies have ever been arbitrary in human history - well that's plain nuts.

So you are saying I could model reality with pears~Heinkel 111s or liquid helium~Paris Hilton just as well as stasis~flux or discrete~continuous?

Perhaps you are misunderstanding the definition of abstraction too? It comes from the Latin for 'to draw off'. So the question has to be: drawn off from what? From what else apart from subjective experience? And you can tell this is thought to be a purposeful action. Not an arbitrary matter.

JoeDawg said:
Dichotomies are artificial, constructed, created... and unless they are based on experience, they are arbitrary.

Phew. So we can agree then. Of course dichotomies are constructs. Ideas. Generalisations.

And the only ones of interest here are those derived systematically from experience. Abstracted from it, as we would say. It would be arbitrary to choose ones that did not seem justified by experience.

So dichotomies are constructs used to model reality. They derive from experience. And because generalisation proves itself to be such a powerful method, we would want to abstract our way to the most general possible dichotomies. I've supplied a list of those that have been around a good 2300 years now. Perhaps you might want to address your responses to some particular one.

It is your free choice which. But what about stasis~flux?

How is that not abstracted from experience? In what way is it arbitrary - on the same footing as liquid helium~Paris Hilton?

How is it not deeply reasonable that whatever is not changing is static, and whatever is static is not changing? Except if it is vague of course.

You can see the difference if we were to suggest that whatever is not liquid helium is Paris Hilton and whatever is Paris Hilton is not liquid helium. Immediately the lack of mutual definition is obvious.

Or perhaps you want to try this litmus test with other more clearly abstract dichotomies which are almost plausible.

So what about truth~ugliness? This would be claiming that whatever is not true is ugly, and whatever is ugly is not true.

Well, we might just about get away with the first part but not its inverse I feel.

So here we have an easy test of true dichotomies. They have to pass the A~notA criterion. The inverse statement has to be equally true. Nothing whimsical or random about that, is there? As Bohr realized, it is in fact a remarkably stiff test.

If you would ever just consider the dichotomies which I say are dichotomies - the ones that have been central to metaphysics ever since the Miletians - rather than continuing to invent your own non-dichotomies, bogus pairings which merely ape the dichotomy notation, then you would actually understand the point.

And out of curiosity, why do you never cite any references germane to your arguments?
 
