Vagueness as a model of initial conditions

In summary: if we go beyond the realm of possibilities and consider the universe itself to be an effect, then it must have a cause. But what could that cause be? A model of initial conditions becomes necessary to answer this question. This thread discusses the idea of vagueness and its potential implications for understanding the universe's initial conditions. Vagueness is a higher-level symmetry that exists prior even to the space of all possible symmetries; it is a model of "pre-symmetry". Treated as an ontic possibility, it gives us another way of modelling the initial conditions. The first post collects some general references on vagueness, recent papers on ontic vagueness, and Russell's 1923 argument against it.
  • #1
apeiron
This thread will attempt to bring together "vagueness"-based approaches to going beyond the standard model. Vagueness gives a different way to model the Universe's initial conditions (and also quantum indeterminacy, the two being not unrelated).

Vagueness in logic means indistinct, undecided, a potential not yet organised. It is a state that is pre-statistics, pre-evolution, pre-geometry, because there is not even yet a crisp variety upon which the machinery of selection can act.

We could call it “the symmetry of chaos” as – like a formless fog – a vagueness would look the same in every conceivable direction, free of any orderliness either spatial or temporal. It is a higher level of symmetry than even, say, the monster Lie algebras, as it is the “space” that contains all possible symmetries. Or more accurately, the space of all possible symmetry breakings. Vagueness is a model of “pre-symmetry”.

Vagueness is an idea that dates back to ancient Greek and Buddhist thought and it became central to the logic of CS Peirce, the founder of pragmatism. But it fell from favour when Bertrand Russell argued that vagueness could only be semantic – a lack of precision in our language or theories of the world – and not ontic, a possible fact about reality itself.

Russell gave the example of a smudged photographic plate. The image might be a picture of Brown or Jones or Robinson. We cannot really be sure. But we can see the vagueness lies only in the representation. Out in the real world there will be a real person.

Russell’s arguments were taken as conclusive largely because many people wanted them to be so. It seemed a necessary truth to support the greater project of logical positivism – the prevailing epistemology of physics. But a century on, vagueness is again being considered as an ontic possibility. It gives us another way of modelling the initial conditions from which a universe, or even chaotically spawning multiverse, might spring.

Some general references on vagueness...
http://www.btinternet.com/~justin.needle/
http://plato.stanford.edu/entries/vagueness/

Some recent papers on ontic vagueness...
http://www.unicamp.br/~chibeni/public/whatisonticvagueness.pdf
http://www.ifs.csic.es/sorites/Issue_15/chibeni.htm [Broken]
http://www.personal.leeds.ac.uk/~phljrgw/wip/onticvagueness.pdf

Russell's 1923 argument against ontic vagueness...
http://www.cscs.umich.edu/~crshalizi/Russell/vagueness/
 
  • #2
So where does vagueness fit into the current landscape of thinking in physics?

Paul Davies summed up the standard metaphysics of initial conditions in his 2006 book, The Goldilocks Enigma. The Universe must have arisen out of something. So what is that set of possible "somethings" we need to consider?

1) Nothingness: The universe (or multiverse) could have sprung into being out of pure nothingness. For some reason, there was nothing (not even time, let alone space or matter) and then an orderly world just started up, abruptly coming into being. It would appear that such a world would either have to be uncaused, which would be paradoxical, or it would have to be called into being by some kind of teleological cause - a cause acting from its "future". In which case we would need a theory (a Theory of Everything) that can conjure up outcomes from absolute nothingness. Not easy.

People are often pushed towards Platonic approaches here, but even Plato was forced to allow for the existence of the chora, the formless substance that could take the imprint of his forms. His was not a pure nothingness ontology.

2) No beginning (or somethingness): Alternatively, the universe, space and time, could have been in existence exactly as we know it forever. There was no beginning, no creation event, and so no need for a model of initial conditions. Existence is infinite and uncaused. With no essential change – either development or evolution – reality just is. We would also call this the somethingness story as we are now saying there was always something there.

This would seem a failure of explanation. But we have to consider the possibility there is actually nothing there to explain. A model of initial conditions is unnecessary as there is at least one effect that never had a cause.

3) Circular logic or the Ouroboros hypothesis: An attractive way to get the best out of both the “from out of nothingness” and the “no beginnings” stories is some kind of circular approach where endings are also beginnings. Effects are also causes, but not in a “dangerous” backwards-in-time teleological sense, only in a progressive forward-moving sense.

So like the Big Crunch model of the universe, a world could have both existed forever and also undergone periodic birth and death. Final states become the new initiating conditions without either beginning or end. And both developing and non-developing versions are possible. We can have essentially the same universe repeating cyclically, or we can have a branching tree, a spawning variety of world-lets and histories. Through the weak anthropic principle, we can then happen to find ourselves in one of the world-let branches conducive to our kind of complex physicality.

4) Everythingness: Circularity, like nothingness and no beginnings, does not really tackle the question of how something can first come to be. It inherits their paradoxical elements. So a fourth way of thinking about the issue of initial conditions is instead to jump out to a third extreme, to say that in the beginning, everything existed. There was a plenum or infinoverse. Then our own world is the result of a constraint of this infinite possibility. We are a specific subset of a realm of general being.

So think of a sculptor. He can either construct bottom-up from nothing, create by adding bits and pieces together in a void. Or he can work top-down from an everythingness, taking a block of marble that solidly represents every possible statue that could be imagined, and then chipping away to produce some actual statue.

****

So we have four distinct metaphysical positions on the initial conditions that could ground a standard model of particle physics and cosmological origins. Either it all began with nothing, with something, or with everything. Or fourthly, some more complicated mix of these ingredients that would be circular, recursive, or would otherwise embed a selection principle (a final cause that hopefully does not look like the kind of final cause that physicists so dread).

And none of these four alternatives invoke the notion of vagueness. They are all crisp approaches, ones where the initial conditions are definite. Nothingness is definitely nothing. Everything is definitely everything. Something is definitely something.

So vagueness would be choice no. 5, one that Davies did not consider. Though in many ways it is close to an everythingness approach - the idea of chaotic possibility.

The claim would become not that everything "existed" but that everything was "potential".
 
  • #3
To complete my preamble on the current landscape of thinking on initial conditions, there is a further useful subdivision that can be made here.

There are many thinkers like Smolin, Davies and Linde now being drawn to more complex metaphysical accounts of the origins of things, ones that, as I say, embed some version of a selectionist or anthropic principle (as final causes cannot, in the end, be avoided).

We can divide their approaches into the developmental and the evolutionary – that is, selection without memory, and selection with memory.

So for example, Linde would employ an “eternal somethingness” ontology. There is the eternally existing something of an inflaton field which spawns a multiplicity of world-lets. But this is a purely developmental story. Every variety of state can develop (within the constraints set by the notion of a self-inflating field). Then through the weak anthropic principle, somewhere within this infinite variety we can expect to find a world with observers like our own.

Smolin on the other hand suggests an evolutionary scenario which demands the further ingredient of a memory. Smolin again starts with a tacit “somethingness” as his initial conditions. But there is now an active selection, not a passive one. Each new universe spawned through some mechanism, like a black hole showing white on its other side, carries a memory of its particular initial conditions. So with “time”, the right kind of universes become infinitely more frequent as only they have the right stuff to reproduce.

So where Linde says every kind of universe would exist, our liveable one would be a statistical outlier, infinitely rare. Smolin, on the other hand, is arguing every kind of universe can exist, but only our kind would be exceptionally common.

Smolin would be an “improvement” on Linde because he delivers something we want. That is one set of initiating conditions leading to one outcome. We are very uncomfortable with the randomness of Linde – the “senseless” production of so much to get the little something, our world, that we want. Smolin gives us a story in which only our kind of world becomes likely again. But Smolin still gives no clear account of initial conditions. He still has to say “there was something” that set the whole game going. When you examine it, there is still the usual “turtles all the way down” that comes with circularity or somethingness approaches.

And again tying this to vagueness, we will still need some equivalent to a selection principle.

So vagueness is most like the everythingness ontology except that instead of claiming everything "exists" we are saying everything is "potential". And now as part of our intellectual machinery, we need some principle by which the potential becomes actual.
 
  • #4
For example...(Davies citing Wheeler)

"The problem of “what exists” takes on a different complexion if one relinquishes an excessively Platonic view of physical law. In sections 3 and 4 I described a possible scenario to express Wheeler’s “law without law” concept, in which the laws of physics emerge from the ferment of the cosmic origin gradually over time, steadily “congealing” onto excellent but still imperfect approximations to their idealized textbook forms. Using the imagery of Fig. 1, the boundaries A and B start out fuzzy and indistinct, but firm up over time. The inherent ambiguity implied by this ontology means that the problem of what exists is not well-posed. That opens the way to a richer description of nature in which there is room for a hierarchy of laws at various levels of complexity, and in which the ancient dualism between laws and states becomes blurred."

http://arxiv.org/abs/astro-ph/0602420

And then once the ontological possibilities are recognised, there follows the task of finding the "selection" principle that operates to regulate the "congealing".

As Davies also understands...

"Ultimately the problem of what exists cannot be solved within this framework. Needed is an additional criterion, such as Leibniz’s optimization principle or Wheeler’s self-consistent closed circuit of meaning, which he describes as “a self-referential deductive axiomatic system” (Wheeler, 1989, p. 357). A discussion of these topics will be given elsewhere (Davies 2006)."

So it would appear to be a legitimate project among knowledgeable cosmologists now.
 
  • #5
Then Wheeler himself...

"The belief is expressed that particles, fields of force, space-time, and ``initial conditions'' are only intermediate entities in the building of physics, that at bottom there is no ``law,'' that everything is built higgledy-piggledy on the unpredictable outcomes of billions upon billions of elementary quantum phenomena, and that the laws and initial conditions of physics arise out of this chaos by the action of a regulating principle, the discovery and proper formulation of which is the number one task of the coming third era of physics. What a regulating principle means and how it works is illustrated in the far more modest content of (1) Boltzmann's law for the distribution of energy among molecules, (2) universality of exponents near thermodynamic critical points, (3) Wigner's ``semicircle law'' for the distribution of characteristic frequencies of a randomly coupled system, and (4) a new ``physicist's version'' of the problem of the traveling salesman. The regulating principles to be seen in these simple examples fall far short—in scope and simplicity—of the sought-for regulating principle. The search for it lies in the new domain of ``recognition physics,'' being explored today on four fronts and at least half a dozen centers of investigation."

http://dx.doi.org/10.1119/1.13224

Note that Wheeler flags criticality: the rules of phase transitions would seem to be an example of how a vagueness might be transformed into something more definite, how a symmetry of chaos might be broken.
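
To get a concrete feel for the sort of regulating principle Wheeler lists, here is a minimal sketch (Python with NumPy; my own illustration, not anything from Wheeler's paper) of his example (3): the eigenvalue density of a large, randomly coupled symmetric system settles onto the definite semicircle shape even though every individual coupling is drawn blindly at random.

```python
# Minimal sketch of Wigner's semicircle law: a definite, law-like
# distribution emerging from purely random couplings.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

a = rng.normal(size=(n, n))
h = (a + a.T) / np.sqrt(2 * n)          # random symmetric "coupling matrix"

eigs = np.linalg.eigvalsh(h)

# Compare the empirical eigenvalue histogram with the semicircle density
# rho(x) = sqrt(4 - x^2) / (2*pi) on the interval [-2, 2].
hist, edges = np.histogram(eigs, bins=40, range=(-2, 2), density=True)
centres = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centres**2, 0, None)) / (2 * np.pi)

print("max deviation from the semicircle:", float(np.abs(hist - semicircle).max()))
```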
 
  • #6
Thanks for the lucid synopsis! And for the quotations (in another thread) from Charles Peirce, sounding almost Hegelian --

apeiron said:
"Out of the womb of indeterminacy we must say that there would have come something, by the principle of Firstness, which we may call a flash. Then by the principle of habit there would have been a second flash. Though time would not yet have been, this second flash was in some sense after the first, because resulting from it. Then there would have come other successions ever more and more closely connected, the habits and the tendency to take them ever strengthening themselves, until the events would have been bound together into something like a continuous flow." (CP 1.412)


But ultimately this sort of account of the emergence of the world from vagueness remains dishearteningly vague, and therefore not very persuasive. In contrast, what makes Darwin's account of evolution in biology so illuminating is that it's very clear how something like this can work, given that organisms reproduce themselves, and some reproduce more than others. If pigeon-breeders can develop new traits by selective breeding, evidently this can happen through natural selection too.

My sense is that what's going on in physics may ultimately be just as comprehensible as what's going on in biology -- once we can grasp the basic functionality that's evolving in the physical world, analogous to differential reproduction in biology. I'm thinking the difficulty may be that it's something so familiar and so obvious that we just tend to take it for granted.

So I think Wheeler may have been on the right track. QM gives us a way of describing indeterminacy as a superposition of possibilities... and specifically, it describes a physical system as a superposition of all the possible results of an observation of that system, made within a specified context. Wheeler took seriously the notion that for there to be a fact about something, it has to be observed -- i.e. there needs to be a physical context in which that fact makes a definable difference to something else. And then that difference presumably has to make a difference to something else in another context, and so on.

Certainly one of the things we take most for granted about our world is that it communicates information about itself. If something has a certain property, such as mass or electric charge, then of course there will also be some kind of interaction-context in which that property can be measured -- i.e. defined in terms of other properties of other kinds of things, for which there are also measurement-contexts. Unfortunately I don't know of any attempts to understand what's required for a "self-referential" system like this to work.

I'm guessing that if we could describe clearly what's going on with this business of physically observing and defining information, we'd find it would be clear how such a system could evolve, just as it's clear how biological evolution works.

The thing is, in biology we're looking at things out there in the world -- living organisms -- and seeing them do their thing. In physics, this very process of seeing may be basic to what's going on in the world. So it's much harder to grasp, because we're so completely engaged as participants in the system we're trying to describe.
 
  • #7
A few quick comments...

"In contrast, what makes Darwin's account of evolution in biology so illuminating is that it's very clear how something like this can work"

Biology does split into two parts - evo and devo. Or the easy and the hard subjects.

Darwinian selection is a beautifully simple selection rule. A way of reducing variety. But then the hard part for biology is to account for the generation of variety, the development part.

I guess it could be argued that physics has always focused on modelling the hard part, the development of things. It did this via a succession of mechanics. And where it ran into selection issues, as with QM, it took the easy way out, resorting to the (in my mind) nonsensical many worlds interpretation, or the conscious observer collapse.

Decoherence approaches would be a way of putting the selection rule into the model properly. So the wave function models development, and decoherence (in its yet-to-be fully worked out form) will model the evolution of spacetimes.
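
As a cartoon of what a decoherence-style account adds (a minimal sketch in Python with NumPy; my own illustration, deliberately oversimplified): the interference terms of a superposition's density matrix decay away, leaving a classical-looking mixture, though note that nothing in this step selects which outcome actually occurs.

```python
# Cartoon of decoherence: off-diagonal (interference) terms of a density
# matrix are suppressed, leaving a classical-looking mixture of outcomes.
# Nothing here picks WHICH outcome is realised.
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)      # equal superposition of two states
rho = np.outer(psi, psi.conj())              # pure-state density matrix

for decay in (1.0, 0.5, 0.01):               # 1.0 = fully coherent, 0.01 = decohered
    damp = np.array([[1.0, decay],
                     [decay, 1.0]])
    print(f"coherence factor {decay}:\n{rho * damp}\n")
```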

"QM gives us a way of describing indeterminacy as a superposition of possibilities... and specifically, it describes a physical system as a superposition of all the possible results of an observation of that system, made within a specified context."

Alternatively, we could say QM is a way of describing vagueness. If something is in-determinate, that implies there exists a definite outcome, it just lies in the future. But saying the situation is vague rather than merely indeterminate commits us to something deeper ontologically.

So all the machinery of QM modelling works, no question. But it starts to make more sense once it is transplanted to a better ontology.

It is the same for non-local. The traditional assumption is that everything should be local just as everything should be determinate. And physicists use non-local and in-determinate to signal that all is not well with their ontology, yet they are not going to give it up. A negative construction is as far as they will go. A positive step would be to feel comfortable about an ontology in which positive constructions could be used.

So instead of indeterminate, we say vague. And instead of non-local, we would say global. The mathematics does not change, but the interpretation would.

"The thing is, in biology we're looking at things out there in the world -- living organisms -- and seeing them do their thing. In physics, this very process of seeing may be basic to what's going on in the world. So it's much harder to grasp, because we're so completely engaged as participants in the system we're trying to describe."

Agreed. Biology is the easiest science. But my field was neuroscience and consciousness studies - which is also in a mess due to participant effects!

So my argument is that science needs to reconsider its foundational story most in the three most difficult areas - our modelling of the very small (QM), the very large (universe) and very complex (mind).

Biology happens to be the place where I have found the different kind of thinking that I find appealing. Perhaps because it is easier overall, it has been easier to see there are these divides like evo and devo. Or to see how systems science and semiotics apply.
 
  • #8
apeiron said:
Darwinian selection is a beautifully simple selection rule. A way of reducing variety. But then the hard part for biology is to account for the generation of variety, the development part.

I see what you mean. In biology it's easy to understand the basics -- what's going on and why. Including, why variety develops. But the how is vastly complicated.

apeiron said:
I guess it could be argued that physics has always focused on modeling the hard part, the development of things. ...And where it ran into selection issues, as with QM, it took the easy way out, resorting to the (in my mind) nonsensical many worlds interpretation, or the conscious observer collapse.

Decoherence approaches would be a way of putting the selection rule into the model properly.

Makes sense to me. I've just started to appreciate the decoherence idea. But I don't think it clarifies the basic question of selection, the "why". What you call the easy part!

apeiron said:
Alternatively, we could say QM is a way of describing vagueness. If something is in-determinate, that implies there exists a definite outcome, it just lies in the future. But saying the situation is vague rather than merely indeterminate commits us to something deeper ontologically.

Here I might be missing your point. I would agree that what the wave function describes is not a priori a combination of distinct future possibilities. It resolves into that when the system it describes is put in a context in which various things can be observed. But I don't know what you have in mind by "something deeper ontologically."

apeiron said:
It is the same for non-local. The traditional assumption is that everything should be local just as everything should be determinate.

Hmmm... yes, it makes sense to me that the distinct spacetime separation of events is not built in a priori. But I haven't yet seen where you're going with the idea of a basis that is vague and global -- what the positive connotation is for these terms.

apeiron said:
But my field was neuroscience and consciousness studies - which is also in a mess due to participant effects!

So my argument is that science needs to reconsider its foundational story most in the three most difficult areas - our modeling of the very small (QM), the very large (universe) and very complex (mind).

Yes! I agree that dragging consciousness into quantum physics is a terrible idea -- and on the other hand, there is a profound connection between the two fields. Both are caught in deep difficulties just in clarifying what the fundamental issues are, and I think for essentially the same reason.

That is, the world we're born into is a world made of communication -- both at the physical level and, quite separately, at the human level. We become "conscious" beings only because we grow up in an environment where people talk to us and expect us to learn how to talk back. A human who grew up without ever communicating with other humans might well be "conscious" in some sense of the word, but not in a human sense. But just because communication is what we live and breathe, we see right through it, take it for granted, and think of it (if at all) in terms of the world of things we see through it.

I must have a dozen or so books lying around here on the evolution of consciousness and language. In only one or two is there even a glimpse of the thought that communication might be able to evolve, in a way distinct from evolution by reproduction. There is the "meme" idea, that behaviors might reproduce themselves through imitation, and so evolve. But what we learn to do with other people, as infants, is so much more than imitate them!

Sorry, but I find it hard not to rant about certain things. But I think there is a sober thought here, which is that our blindness about the evolution of human communication is deeply related to our blindness about how communication works in the physical world. In both cases we tend to think in terms of the transmission of data, as if the question of how the data comes to mean something in particular, in two distinct contexts, were not a key issue.

And I apologize for dragging your inquiry into the vagueness of initial conditions so far off track! I hope someone can give you a more pertinent response!
 
  • #9
It may not be so far off track, as the evolution of language and the human mind was the field I started out in. First Vygotskian psychology and then "Lurian" neuroscience, you could say. You may even have some of my books!

I progressed from this specific area to the general question of how to model complex systems, and now any kind of system. Hence my current interest in the metaphysics adopted (implicitly or explicitly) in physics and cosmology.

If you get into systems science and complexity modelling, you will naturally start to stress the ideas of communication, process, relations, constraints - seeing these as fundamental and not just add-ons. And you may find yourself studying hierarchy theory and semiotics.

So there is a path. Vagueness is one of the key ideas I found at the end of it that made sense of so much. A powerful conceptual tool.
 
  • #10
I just saw this thread. I want to add that your notion of vagueness seems closely related to the reasoning I made in some of the other threads. Now I see why you liked parts of that reasoning. (I didn't know of this thread.)

The symmetry of chaos, as you say, and the solution to the problem of landscapes of initial values and fine-tuning go well with this.

I never used the word vagueness but I think we are pretty much talking about a similar idea. As I explained from the other perspective, I see it from the point of view of an "inside observer", and the effect is indeed a symmetry of the chaos as you scale down the complexity of the observer. The observed laws and symmetries become simple. Then what is usually called the breaking of symmetry and the emergence of differently distinguished forces is a result of self-organisation taking place as the inside observer grows larger and organises. Out of the vagueness, more confident structures emerge.

I only saw the first post. I'll read more later.

/Fredrik
 
  • #11
I skimmed the thread and in short I think the focus is good. I share the general vision here, so I don't have much to argue about. I guess the question coming out of this is what the more specific implementation of this idea in physics would be.

What abstractions and connections to physics and what mathematical formalism may come out of this.

I am working on some personal ideas on this, but aside from that

1) Smolin's cosmological natural selection, as presented in his many papers and talks, as well as in his book The Life of the Cosmos, is one.

Some related ideas, IMHO, are Rovelli's reasoning in relational quantum mechanics and Olaf Dreyer's internal relativity.

None of them might be fully to my liking, but all of them have some golden grains of reasoning IMO.

What relates Rovelli's relational QM and Dreyer's internal relativity is the "inside view" that both sniff at: the attempt to express physics as an inside observer would. Suppose we asked an inside observer to write down the laws of physics. What would we get?

It's this connection I think is fruitful. The inside view, when the observer is scaled down to minimal complexity, is to me the physical meaning of vagueness, in the sense that the laws of physics themselves become vague as the inside observer becomes "small".

The initial value problem goes away if you formulate it from the inside observer, because the observable state space would correspondingly shrink as the observer shrinks. So the solution, rather than picking initial conditions in an infinitely large state space, becomes that of growing the state space. When the state space is small enough, there isn't much of a choice to make. Instead the only choice evolves on, and the observable and distinguishable state space (as seen from an inside observer) grows with evolution.

This is how I see it. I've got some ideas for how to rebuild a mathematical structure, essentially a discrete form of probability theory, from the simple combinatorial starting point of a simple observer.

The complications are: why does the observer's information capacity grow? I.e. what process generates its mass? Here, the analogy I see is learning. How come you can start with a completely random guess, take a chance, learn from feedback and eventually grow confident information, all starting from that random guess? Here I picture variation and selection also coming into play. I think of this as a self-preserving, replicating organism. But the replication doesn't need to be hands-on sex or black hole bounces; it's simply that by acting, you encourage the birth of "consistent observers" in your environment. A sort of reproduction by induction.

/Fredrik
 
  • #12
Semantic vs Ontic Vagueness

There are two notions of vagueness, one that is about semantic content, the other an assertion about reality itself.

Semantic vagueness is the trivial and untroublesome kind usually illustrated by the Sorites paradox, a logic puzzle attributed to Eubulides of Miletus.

Sorites comes from the Greek word soros, meaning a heap. Eubulides asked: would you say a single grain of wheat was a heap? No? Well, what about two grains? Or perhaps three? At some point, there must surely be enough to make a heap, but where exactly do we draw the line?

Another version of this riddle is the falakros or bald man. Would we describe a person with one hair on his head as bald? Yes? Then what about two or three strands? Again where is the line crossed between bald and hairy?

Quite clearly the issue here is semantic. The real world is always in some definite state. A heap is crisply a certain size. A man has some exact number of hairs on his head. It is just that we have not invented words to distinguish these finer shades of difference.
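
To make the structure of the puzzle explicit, here is a minimal sketch (Python; my own illustration, with the threshold value obviously arbitrary): give "heap" any crisp cut-off and the innocent premise "one grain never makes the difference" fails at exactly one place.

```python
# Minimal sketch of the Sorites structure: a crisp threshold for "heap"
# forces the premise "adding one grain never turns a non-heap into a heap"
# to fail at exactly one arbitrary point.
THRESHOLD = 10_000          # arbitrary crisp cut-off, purely for illustration

def is_heap(grains: int) -> bool:
    return grains >= THRESHOLD

flips = [n for n in range(20_000) if is_heap(n + 1) != is_heap(n)]
print(flips)                # -> [9999]: all the vagueness squeezed into one point
```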

The philosopher Bertrand Russell was widely taken to have proven that all vagueness is merely semantic. The imprecision is in our mental representation of the world and to believe anything else is to commit “the fallacy of verbalism”.

Russell gave the example of a smudged photographic plate. It might be a picture of Brown or Jones or Robinson – we cannot really be sure. But we can see that the vagueness lies in the representation. Out in the real world there will be a real person.

It is the same with our words, images, thoughts or even scientific models. The representations may be smudged but we should always believe that the world itself is definite, capable of being measured with complete accuracy if so desired.

Ontic vagueness by contrast says the physical world itself can be in some ill-determined or undecided state. Semantic vagueness is about epistemology – about the limits of what we can know. Ontology is about what actually is, whether we are making an effort to know it or not.
 
  • #13
Vagueness - the origins of a word

The idea of a vagueness dates all the way back to the first recorded metaphysician, Anaximander, who coined the term apeiron, meaning the limitless, the boundless.

The term vagueness itself was first popularised as an ontological term by the philosopher and logician CS Peirce in the 1890s (though much of Peirce's writing was not published until quite recently).

In the 1930s, Max Black distinguished vagueness from ambiguity, generality, and indeterminacy (though he defined vagueness as semantic, not ontic). Other scholars such as Kotarbiński, Ajdukiewicz and Fleck were drawn to discuss vagueness without adding much.

In the 1950s, the geometer Karl Menger talked about a geometry based on vague objects – variable lumps rather than crisp Euclidean points – which he called “ensembles flous” or hazy sets.

Around the same time, Post, Tarski, Kleene and Lukasiewicz played around with logics that allowed indecision. The middle ground between two crisply defined alternatives was only somewhat or loosely excluded.

Then in the 1960s, Lotfi Zadeh popularised the idea of fuzzy sets in which the excluded middle was represented as an actual spectrum of possibility. Things could have graded membership of a set or class of events. This eventually became a big deal because it offered a way of doing computing when objects were semantically ill-defined. It also led to a lot of new talk about vagueness even though it did not shed any particular light on the matter.

There are still other recent movements such as the rise of Bayesian probability and paraconsistent logic that hinge on the question of why things may be uncertain or ill-determined.

But vagueness is not fuzziness or other statistical forms of uncertainty because these assume that an answer is to be found "in the vicinity" of some safely anchored (that is, crisp) value. Fuzziness lacks the absolute indistinctness that we mean to conjure up with the word "vague".

Fuzzy or hazy means roughly about there somewhere. Vague means where, what, who?
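
For contrast, here is a minimal sketch of Zadeh-style graded membership (Python; my own illustration, with the numbers purely hypothetical): the grading is real, but it is still anchored to crisp parameters, which is exactly why fuzziness falls short of vagueness in the ontic sense discussed here.

```python
# Minimal sketch of a fuzzy set: "tall" with graded membership between 0 and 1.
# The grading is anchored to crisp numbers (a centre and a width), so the
# uncertainty is "roughly about there somewhere" rather than genuinely vague.

def tall_membership(height_cm: float, centre: float = 180.0, width: float = 15.0) -> float:
    """Degree of membership in the fuzzy set 'tall'."""
    return min(1.0, max(0.0, (height_cm - (centre - width)) / (2 * width)))

for h in (150, 170, 180, 190, 210):
    print(h, round(tall_membership(h), 2))   # 0.0, 0.17, 0.5, 0.83, 1.0
```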
 
  • #14
Modelling vagueness

Clearly we are dealing with something slippery here. So we need a few principles to underpin our modelling of vagueness as a "state", a realm that vaguely exists.

1) The precursor principle

A first step in modelling vagueness is to say that logically, whatever it is, it must be composed of whatever later comes out of it. We start off with something that is murky and ill-defined - a raw potential. Then something definite develops out of it. So whatever came out of it, must have been "in there" as a potential in the first place.

For example, say we are thinking about a world that has both chance and necessity, both randomness and determinism. These are crisp categories, two quite distinct things. So if we project backwards to whatever seed state gave birth to this world, then randomness and determinism must have been in that state as not yet firmed up potentials.

2) Vagueness is infinite or unlimited symmetry

Symmetry is about changes which are not a change. Turn a circle and it looks the same - no change. Or equally, see a circle not turning - well actually you have no idea that it is not spinning like a top.
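
A minimal numerical sketch of that point (Python with NumPy; my own illustration): rotate the points of a circle by any angle you like and nothing observable about the set has changed.

```python
# A change that is not a change: rotating a circle leaves it indistinguishable
# from the unrotated circle.
import numpy as np

theta = 0.73                                    # any rotation angle at all
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

angles = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
circle = np.stack([np.cos(angles), np.sin(angles)])   # points on the unit circle
rotated = rot @ circle

# The only observable, distance from the centre, is untouched.
print(np.allclose(np.linalg.norm(circle, axis=0), 1.0),
      np.allclose(np.linalg.norm(rotated, axis=0), 1.0))   # True True
```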

A vagueness would seem to be an ultimate state of symmetry. All changes are possible because none are visible.

Think of a block of stone before the sculptor begins his work. It is in a high-symmetry state. Every kind of statue is possible, and none is actual. The bust of Nero is in there. So is a carving of a MiG fighter. All possible sculptures have vague existence. But the symmetry of the stone has yet to be broken.
 
  • #15
apeiron said:
But vagueness is not fuzziness or other statistical forms of uncertainty because these assume that an answer is to be found "in the vicinity" of some safely anchored (that is, crisp) value. Fuzziness lacks the absolute indistinctness that we mean to conjure up with the word "vague".

I appreciate you taking the time to summarize all this! I made a brief stab at the links you provided at the start of this thread, but quickly got lost hunting for the relevance to physics.

And I agree -- "fuzziness" presupposes that other things are more or less well-defined. And what's most striking about QM is that everything we thought was well-defined, seems to emerge from a depth where rules and definitions seem not to apply.


apeiron said:
A first step in modelling vagueness is to say that logically, whatever it is, it must be composed of whatever later comes out of it. We start off with something that is murky and ill-defined - a raw potential. Then something definite develops out of it. So whatever came out of it, must have been "in there" as a potential in the first place.

Here's where I have a problem... when it comes to such unfamiliar territory, I don't trust statements like "logically... it must be..." It's not that I think your "precursor principle" is wrong. But I've looked at too many websites where people feel they understand how the world began because of something that logically must be the case. Not that you're making such claims!

More specifically, my personal prejudice is that ancient Greek thinking put us on the track of believing in and conceptualizing the reality of things "in themselves" while dismissing their connections with each other, i.e. "the mere appearance" of things. Concepts like "what things are made of", "potential" and "symmetry" describe things or systems existing in themselves. My intuition is that we need to give at least equal weight to the "existing for each other" aspect of the world, or the world as a system of communicative relationships... which perhaps can only be seen or defined from inside, by participants. Speaking of precursors, our concepts of what's real have evolved over 2,500 years, while it's been only a century since "the observer" showed up in physics. There's still no consensus at all about what that means.

Anyway, I admire your bravery in facing "vagueness" head-on. But I have doubts that "logic" can get us from undefinedness to something definite "in itself". Hegel certainly believed in that, and I admire his bravery as well. In some ways he seems closest of any modern thinker to the spirit of ancient philosophy.
 
  • #16
The precursor principle would be an axiom. So it is subject to Gödelian incompleteness considerations.

All I can say as a modelling human is that I can't for the life of me imagine this principle not being the case and therefore I am proceeding with it as a foundational assumption. So what would things look like if this axiom does hold?

So this is not some act of blind faith. Instead, it identifies the actual place where a founding assumption is being made, so that it is something that can be explicitly challenged.

The precursor argument is not from me, by the way. I'm just presenting vagueness as an ontic paradigm that would seem potentially very useful for physics where it has to deal with initial conditions.

QM and the Big Bang would be two obvious candidates for the relevancy of a different way of modelling initial conditions.

I think you are wrong about Greek philosophy being all existence-based rather than a process or persistence view. The Greeks actually explored both views. And Anaximander was a real process and relationships guy.

Hegel was re-capping the greek view and adding a Christian twist for his day.

Vagueness is the initial conditions, and then the only logical way for these initial conditions to develop into something definite is through symmetry breaking - the dichotomisation into an asymmetric this and that. What Hegel called thesis and antithesis.

Anaximander dichotomised the apeiron into the hot and the cold. The history of Greek thought was the search for more fundamental dichotomies. Stasis and flux, chance and necessity, substance and form, atom and void.

Hegel is OK in an Aristotelean mould, but skip forward to Peirce for the real advance in logic - one based on observers too.
 
  • #17
apeiron said:
And again tying this to vagueness, we will still need some equivalent to a selection principle.

So vagueness is most like the everythingness ontology except that instead of claiming everything "exists" we are saying everything is "potential". And now as part of our intellectual machinery, we need some principle by which the potential becomes actual.

Jumping back a ways in the thread, to what seems to be the key point...

We're trying to imagine an evolution of something "crisp" -- definite and specific -- from an initial vagueness, indefinite and nonspecific.

Do you see this in connection with quantum measurement? It seems that we could interpret QM as telling us that the world consists of measurement-interactions in which specific possibilities are somehow selected as "actual" -- and those set new boundary conditions for the future possibilities of the systems involved.

So it seems as though in QM we're seeing the "selection principle" you're looking for, in action, creating actual facts out of potential ones. This "measurement" business happens in a lot of different ways, though, at a lot of different scales. It's not clear (to me) how best to characterize what they all share in common -- what the core functionality is here.

I'm suggesting (vaguely) that we probably know what the selection principle is, or at least where it is, physically. The challenge seems to be conceptualizing it.
 
  • #18
Yes, clearly all this seems to fit QM - or at least offer a naturalistic interpretation of it. (The formal maths of QM works and does not in itself need interpretation, of course.)

The usual view of QM potential is as a superposition of microstates. A collection of crisp probabilities. Then from outside comes an act of observation - selection and collapse. One ball wins the jackpot.

Which works fine. Though it does not model the specifics of the observation process (how and when does it happen?). It does not account for the instantaneous nonlocal nature of the collapse. And we have to wonder about how to talk about these crisp microstates in the absence of an observing act. In what way does the inside terrain of a wavefunction exist if there is no collapser to reveal its shape?

So in general, the standard QM model is saying here are a bunch of microstates. They exist. Now make a selection. Shut your eyes and pick out one at random.
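
For concreteness, here is a minimal toy of exactly that standard recipe (Python with NumPy; my own illustration, not a claim about any particular interpretation): a list of crisp microstates with amplitudes, and a "collapse" that is nothing more than a blind pick weighted by the Born rule.

```python
# Toy of the standard picture: crisp microstates with amplitudes, then a
# blind random selection weighted by |amplitude|^2.
import numpy as np

rng = np.random.default_rng()

microstates = ["up", "down"]
amplitudes = np.array([1.0, 1.0j]) / np.sqrt(2)      # an equal superposition

born_weights = np.abs(amplitudes) ** 2               # -> [0.5, 0.5]
outcome = rng.choice(microstates, p=born_weights)    # shut your eyes and pick one

print(born_weights, outcome)
```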

Vagueness would be different in making both the creation of crisp variety and the selection process the responsibilities of the system. The two aspects would be tied together rather than being separate.

Standard QM based on crisp microstates says some finite set of outcomes are possible. Now choose.

A vagueness approach would say anything is possible. But for anything to become actual, it must be a self-consistent balance of micro and macro scales. The act of forming crisp microstates is tied to the possibility they are also choosable - that they will weave a system that could choose them.

This is a systems science or cybernetic view. And is teleological in its way. The ends justify the means. Anything could have happened (in a vague way). But the only thing that actually happens is that which could develop in a synergistic or self-organising, self-sustaining fashion.

to be continued...
 
  • #19
apeiron said:
So in general, the standard QM model is saying here are a bunch of microstates. They exist. Now make a selection...

Vagueness would be different in making both the creation of crisp variety and the selection process the responsibilities of the system. The two aspects would be tied together rather than being separate.

Standard QM based on crisp microstates says some finite set of outcomes are possible. Now choose.

A vagueness approach would say anything is possible. But for anything to become actual, it must be a self-consistent balance of micro and macro scales. The act of forming crisp microstates is tied to the possibility they are also choosable - that they will weave a system that could choose them.

I don't quite agree about standard QM. As I understand it, it's only when we set up a system in a specific measurement-context that we can describe it as a superposition of "crisp" possibilities. That is, the QM description involves two steps - describing / arranging the system's interactional context, and then actually "choosing" a specific outcome by carrying out a measurement. And of course no measurement actually does away with "vagueness". There is always a certain "uncertainty" remaining in the parameter being measured, since no measurement is perfectly accurate, and the more accurately we measure one parameter, the more "uncertainty" we create in other parameters of the system.

Maybe what you're pointing out is that the only clear description QM can give of a system is via a mathematical combination of "crisp microstates". But both the fact that such a description works so well, and also that it has very specifically defined limits, must be giving us important information about what's going on here.

Now as to "weaving a system that could choose them" -- this is well said. Likewise "making both the creation of crisp variety and the selection process the responsibilities of the system." Current interpretations of QM don't give us much help here.

I'm trying to work out something to post on Rovelli's Relational QM. It doesn't give the answer either, but I think it does point the way into the question -- "Physics... is concerned with the description that physical systems give of other physical systems."
 
  • #20
ConradDJ said:
And of course no measurement actually does away with "vagueness". There is always a certain "uncertainty" remaining in the parameter being measured, since no measurement is perfectly accurate, and the more accurately we measure one parameter, the more "uncertainty" we create in other parameters of the system.

Yes, but the symmetry of the uncertainty is broken. As we become certain of one potential aspect (that was vague) we become correspondingly more uncertain about the value of the other.

So we began, pre-collapse (or pre-constraint, pre-decoherence), with a vague idea of both location and momentum, say. Then as we try to fix one, the other does not just stay "in the same place" as a probability. It diverges from mild to extreme uncertainty.

We begin with a symmetry (location and momentum are equally vaguely defined) and end up with an asymmetry (location exact, momentum radically uncertain).
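
In standard notation the trade-off being described is just the Heisenberg relation (a worked restatement added here for reference): squeeze the spread in position and the spread in momentum is forced up reciprocally,

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad\text{so}\qquad \Delta p \;\ge\; \frac{\hbar}{2\,\Delta x} \;\to\; \infty \quad\text{as}\quad \Delta x \to 0.$$

The symmetric starting point (both spreads moderate) and the asymmetric end point (one spread pinned, the other divergent) are both consistent with the same bound.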


ConradDJ said:
Maybe what you're pointing out is that the only clear description QM can give of a system is via a mathematical combination of "crisp microstates". But both the fact that such a description works so well, and also that it has very specifically defined limits, must be giving us important information about what's going on here.

The model works well by assuming that the crisp microstates were "in" the pre-collapse realm of quantum potential. Then all the selection mechanism needs to do is make its choice.

This allows for a very stripped down story on the selection side of things. But note how it is a circular story in that what was "inside the wavefunction" is known by what came out of it. A wavefunction is a yo-yoing around some concrete single value. It is not an infinite variety of possibility but already a selection.

To try to express this more clearly, the classical world is observed to be asymmetric, symmetry broken. A particle has a crisp location and momentum (when seen from a suitable distance away to average away observational uncertainty).

Wavefunction modelling then recreates that asymmetry as two divergence axes of possibility. Zoom in too close and the glued-together asymmetry of the classical realm is turned into an unglued asymmetry. Two now independent axes where focus on one (attempts to constrain to some certain status) is modeled as producing complete lack of focus (complete absence of constraint) on the other.

In the vagueness approach, we can still say that both location and motion can be modelled as being "in there" - but as potentials rather than actual existing axes (one of which we actually measure, sending the other one off the scale).

The vagueness approach is tied to a selection mechanism - a system based on the downward causality of global constraints. The realm of the system is sharply divided (into its locations and motions, its space and its time). But it runs out of the ability to crisply constrain as it approaches the Planck scale. Things go out of focus. The system can no longer see "both things at once". It can try to see one thing (constrain it to some exact value) and the other goes even more badly out of view. The making of a crisp asymmetry breaks down.

So in reality (taking the ontic vagueness route to QM interpretation) there is nothing there below the Planck scale except raw potential. But it certainly suits us to model it as a symmetry of the two aspects which we know appear later at the classical scale as a crisp asymmetry. We say there is an equal amount of location and momentum, but they have just become unglued and so impossible to see together in the usual crisp way.
 
  • #21
Vagueness and Aristotle's law of the excluded middle...

Aristotle of course created the basis for modern logic with his law of the excluded middle. That is, it is impossible for something both to be something and also not that something. It either is, or isn't the case. Crisply so. Any third option or middle ground is excluded.

And this binary yes/no, true/false, 0/1 thinking became basic to logic. It has become hard to think in other terms.

But Aristotle also countered this with examples of situations where middles have not been excluded - where the situations are not crisp or definite but vague, in a state of non-decision. An unbroken symmetry.

There was his account of Diodorus's question about the sea-battle. Tomorrow, the Greek and Persian fleets may engage. Well, tomorrow either one or the other outcome will be definitely the case. But for today, there is only the potential. The situation is vague. Not either, but not neither either. Just poised to become something definite.

Note how a Laplacean determinist, or a hidden variables enthusiast, would still want to trace back some crisp causal sequence. Perhaps some tiny thing, like what the Persian commander had for breakfast, or a butterfly flap in Brazil which turned the wind a different direction, meant that the next day was already crisply determined.

But vagueness says no. Just accept the symmetry is perfect. There really is only a naked unblemished potential. That then can become crisply broken.
 
  • #22
Interesting.

I've always found the whole idea of "determinism" very hard to swallow, basically because it's based on the premise that we "could" calculate initial conditions to infinite decimal values.

However, the reality is that we cannot ever in practice measure anything to infinite accuracy, so calling chaos deterministic is really a misnomer.
 
  • #23
Coldcall said:
I've always found the whole idea of "determinism" very hard to swallow, basically because it's based on the premise that we "could" calculate initial conditions to infinite decimal values.

Me too.

Also, even if we had the initial conditions absolutely exact, there's no analytical solution to the equations describing many-particle interaction. So not even an outrageously powerful supercomputer could actually determine the future of any real physical system. And of course, QM makes determinism at the sub-microscopic level more than questionable.

But the amazing thing is that even though the initial conditions are vague, and the dynamics is hopelessly non-computable, things in the macroscopic world we experience behave in a remarkably precise, deterministic way. Ultimately, I think this may be the primary thing physical theory needs to explain.

Apeiron's approach seems to rely on the notion of spontaneous symmetry-breaking -- in the usual example, a pencil balanced on its tip will choose a certain direction in which to fall. But if what we have to begin with is just undefined-ness, where do we get the structured set of possibilities from which certain ones can be randomly chosen?
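
For reference, the textbook formalisation of the pencil example is a potential that is itself perfectly symmetric while every one of its stable resting states breaks that symmetry, e.g. the familiar "Mexican hat"

$$V(\phi) \;=\; -\,\mu^{2}\,|\phi|^{2} \;+\; \lambda\,|\phi|^{4}, \qquad \mu^{2},\ \lambda > 0,$$

which is invariant under $\phi \to e^{i\theta}\phi$ yet has its minima on the whole circle $|\phi| = \mu/\sqrt{2\lambda}$. The symmetric law already supplies the structured ring of possibilities; the puzzle raised above is where that structure itself comes from if we start from sheer undefined-ness.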

QM essentially describes the world in terms of structured sets of possibilities, defined in terms of what can be measured ("determined") in specific situations. So my inclination is to try to understand how physical parameters actually get measured in terms of other physical parameters, as an approach to the question of where "determinacy" ultimately comes from.
 
  • #24
ConradDJ,

"But the amazing thing is that even though the initial conditions are vague, and the dynamics is hopelessly non-computable, things in the macroscopic world we experience behave in a remarkably precise, deterministic way. Ultimately, I think this may be the primary thing physical theory needs to explain."

I think spooky action at a distance may be responsible for the "appearance" of consistency and determinism on a macroscopic scale. I have zero proof other than the odd fact that there appears to be a law of entanglement which means in theory at least, particles and matter can change their values quicker than the speed of light. Which by definition means they can change faster than we could notice them change since our visual perceptions are limited to the speed of light hitting our retina.

I think one of the problems we may have with "determinacy" is that it represents a human emotional need for certainty. If our reality is based on some sort of universal probability theory as seen through QM, then we must accept that our idea of "determinacy" has more to do with our own assumptions than the true character of nature.
 
  • #25
ConradDJ said:
Apeiron's approach seems to rely on the notion of spontaneous symmetry-breaking -- in the usual example, a pencil balanced on its tip will choose a certain direction in which to fall. But if what we have to begin with is just undefined-ness, where do we get the structured set of possibilities from which certain ones can be randomly chosen?

This is a very good example of how standard approaches presume some crisp and exact initial conditions. Everything about the situation is first determined - and then the outcome can be "pure chance". Someone holds a pencil poised (the crisply determined bit). Then lets it fall as it will (the crisply undetermined bit).

Chaos theory does the same trick. The crisply undetermined (chaotic behaviour) is modeled as an iterative act over all scales (the fractal expression of some motif). So this is what has been demonstrated. Chaos is some action "freely" expressed over all scales. But the iterative act is completely determined (it is a crisply expressed equation, with some completely crisp set of initial conditions plugged in as measurements).

So interesting and useful models of reality are built from a modelling trick that divides the world into the crisply determined and the crisply free - chance and necessity, randomness and order.

Again, like a die or roulette wheel, the context is completely controlled (a die has 6 sides, the roulette wheel 37 slots). Then completely uncontrolled actions produce an outcome (the die is tossed, the wheel spun).

So how should we treat all this from the point of view of vagueness? Using the precursor principle (what came out of vagueness must have been in there as a potential), we would say that both chance and necessity, both the determined and undetermined aspects of some system, must have been once more vaguely defined. Part of a shared potential.

We can imagine this for example as a rather rough-hewn die. Or one sort of dropped rather than tossed properly. It would become vaguer as to whether the outcome was random or determined. The issue would seem muddy to us as it is now more a matter of potential than certainty. The die came up 6. But you tried to drop it that way. Or the fact the die is not a completely crisp cube made it more likely to fall that way. The throw was now somewhere between random and determined.
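
As a toy version of that rough-hewn die (a minimal sketch in Python with NumPy; my own illustration, with the bias parameter purely hypothetical), we can dial continuously between pure chance and pure necessity and watch the two crisp categories blur into one another:

```python
# Toy "rough-hewn die": bias = 0 is a fair die (pure chance), bias = 1
# always lands on 6 (pure necessity). In between, chance and necessity
# are no longer crisply separate categories.
import numpy as np

rng = np.random.default_rng(2)

def throw(bias: float, n: int = 10_000) -> np.ndarray:
    probs = np.full(6, (1 - bias) / 6)
    probs[5] += bias                       # pile the extra weight onto the 6
    return rng.choice(np.arange(1, 7), size=n, p=probs)

for bias in (0.0, 0.5, 1.0):
    rolls = throw(bias)
    print(bias, np.round(np.bincount(rolls, minlength=7)[1:] / len(rolls), 2))
```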

Note too how this view also fits the idea that vagueness is symmetry, crisp outcomes are asymmetric (the most extreme possible form of symmetry-breaking).

So we are talking about an asymmetric world - one where events like falling pencils and tumbling dice (and QM events!) are completely, crisply, divided by our notion of order and disorder, random and determined. And then we can see how these contrasting, opposed, extremes can be folded back into each other. As they become muddied and mixed again, a symmetry is restored. Not a crisp symmetry of course, but a symmetry of vague potential.

We become increasingly uncertain about whether a situation is random or determined. Just that it seems to have a potential to be divided in a way that will split these two alternatives apart.

QM is based on a whole bunch of asymmetries - dichotomies. You have local and nonlocal (local and global) for a start. Location and momentum. Time and energy. Wavefunction and collapse.

You can start to see how it is the system that is crisply divided, the system that manufactures the strong asymmetry. A die or roulette wheel or poised pencil has to be prepared. And classical reality (which is crisply based on divisions) would be manufacturing these divisions from a prior potential state, a vaguer realm, which we model with quantum mechanics.

So quantum symmetry => classical asymmetry (a world crisply divided into this and that, such as chance and necessity, atom and void, stasis and change).
 
  • #26
Coldcall said:
I think one of the problems we may have with "determinacy" is that it represents a human emotional need for certainty. If our reality is based on some sort of universal probability theory as seen through qm, then we must accept that our idea of "determinacy" has more to do with our own assumptions than the true character of nature.

It is not so irrational as all that. It is in fact an effective modelling tactic to seek out what is determined, so that the rest of the story can then be treated, crisply, as "chance".

So it is like starting with a misshapen die and improving it so that it is as cube-like as possible. In improving its deterministic characteristics, you are also improving its ability to produce completely chance outcomes. In fixing one thing, you fix the other as well.

The more science can extract the deterministic, the more it is also left with the pure chance.

Another related way of putting this is that science modelling divides the world into models and measurements. There is the completely determined aspect (the general laws we create). And then the completely undetermined aspect (the particular measurements we might plug into those laws, into those equations).

This is what happens with deterministic chaos. We have the crisp equation. We just need to plug in some crisp measurements and we are away. But with non-linear laws, we have a little problem in that we now need infinitely accurate measurements because our errors (in lawful fashion) multiply exponentially too.

For ordinary linear equations, rough measurements are okay as errors are contained. For a chaotic system like the weather, the simulation will start out okay but soon run off track.
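To make the point about error growth concrete, here is a small Python sketch (the logistic map again, standing in for "the weather"): two runs of the same deterministic rule, with initial conditions differing by one part in ten billion, track each other at first and then separate completely.

Code:
# Same deterministic rule, two initial conditions differing by 1e-10.
# The separation grows roughly exponentially until the two runs are
# effectively unrelated - the simulation has 'run off track'.
r = 4.0
x, y = 0.2, 0.2 + 1e-10
for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))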

Is determinacy in nature? Nature certainly seems lawful. It has regularities. But our actual problem is that nature appears to be asymmetrically divided so that it has both crisp universals (the laws) and crisp particulars (the local measurements). We have got good at modelling this situation through various brands of "mechanics".

Mechanics capture the determined or universal aspects of the universe as a system. Then either initial conditions (the measurement of starting points) or the outcomes (measurements of ending points) become undetermined (or undeterminable) - matters of chance.

We worry about this. Everything seems so determined, and yet there is this pesky chance involved too.

But we have made the world that way in our models! And it is also an asymmetry that exists in the world - so the world has made itself that way!

Yes we do want a way of modelling both aspects of reality in the same frame. We want to see the chance and the determined, the local and the global, the modeled and the measured, the context and the event, both together in the one meta-model.

Which is why you would need a logic of vagueness. A way of melting these crisp, and crisply opposed, extremes back into some common prior unity.

Of course, chance and necessity would disappear as different things when you melt them down to a pure symmetry of potential. You would lose them. Which provokes an emotional response in people. Because we love our mechanics, it is very hard to let go of our crisp alternatives, which generate so much logical paradox.

But now that would be irrational rather than scientific :wink:
 
  • #27
Hegel's version of vagueness as being and becoming...

“Being” seems to be both “immediate” and simple, but reflection reveals that it itself is, in fact, only meaningful in opposition to another concept, “nothing.” In fact, the attempt to think “being” as immediate, and so as not mediated by its opposing concept “nothing,” has so deprived it of any determinacy or meaning at all that it effectively becomes nothing. That is, on reflection it is grasped as having passed over into its “negation”. Thus, while “being” and “nothing” seem both absolutely distinct and opposed, from another point of view they appear the same as no criterion can be invoked which differentiates them. The only way out of this paradox is to posit a third category, “becoming,” which seems to save thinking from paralysis because it accommodates both concepts: “becoming” contains “being” and “nothing” since when something “becomes” it passes, as it were, between nothingness and being. That is, when something becomes it seems to possess aspects of both being and nothingness, and it is in this sense that the third category of such triads can be understood as containing the first two as sublated “moments.”
http://plato.stanford.edu/entries/hegel/

This is certainly a way for human thought to arrive at the concept of vagueness, but then the way reality operates would be the other way round - becoming => being. Or rather the dichotomous version - becoming => being-nonbeing.

So becoming is another way of talking about a state of pure potential, a vagueness. It is neither being nor nothing but a greater state of symmetry into which these two crisp opposites are dissolved.
 
  • #28
Some of the comments philosophers have been making about ontic or metaphysical vagueness (or indeterminacy)...

I'm actually pretty attracted to the view that the right way to think about these things would be to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency.

http://theoriesnthings.blogspot.com/2007/06/vagueness-and-quantum-stuff.html


The idea of ontic vagueness is in one way very simple – it's vagueness in the world, vagueness in what there is as opposed to our descriptions or knowledge of what there is. But glosses like this don't do much more than frame the concept, and they'll do little to appease the prevailing worry that ontic vagueness is somehow mysterious, or even unintelligible. Large amounts have been written on the subject, but there remains a lurking suspicion that ontic vagueness is not in dialectical good standing and that those who talk about it are at the end of the day talking nonsense. This suspicion may stem, in large part, from the fact that though much has been written on particular puzzles involving ontic vagueness (vague persistence, objects with vague spatial boundaries, vague identity, etc), very little has been written on the phenomenon more generally. This, in turn, leads many to worry that ontic vagueness is in fact a topic which cannot be systematically addressed.
This paper aims to allay such worries...

http://www.personal.leeds.ac.uk/~phlejb/ontic%20vagueness%20(final).pdf [Broken]

According to quantum mechanics, electrons (to take a stock example) are entities such that, in most quantum states, they are indeterminate instances not of one, but of all point position properties (such as lying on the x-axis at exactly 1 meter from the origin of the coordinate system). Such entities are, thus, genuinely vague in a very strong sense.

http://www.unicamp.br/~chibeni/public/whatisonticvagueness.pdf

The source of this metaphysical vagueness in all cases is the ‘entangled’ states which quantum particles enter into. It is such states which lie at the heart of the quantum mystery and which force all the attempts at metaphysical understanding, vague or otherwise. We believe that such understanding might be achieved by tackling the metaphysics head on, as it were, and considering these states in terms of basic metaphysical categories such as that of individuality.

http://www.cfh.ufsc.br/~dkrause/Artigos/Vagueness.pdf

How to model deep metaphysical indeterminacy remains an open question.

http://web.mit.edu/bskow/www/research/quantum-indef.pdf
 
  • #29
apeiron said:
Some of the comments philosophers have been making about ontic or metaphysical vagueness (or indeterminacy)...

I'm actually pretty attracted to the view that the right way to think about these things would be to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency.

Hi Apeiron -- Thanks for the links... I had time only for a quick look today.

I certainly agree that we need to open up our ontology to include aspects of the world that are not intrinsically well-defined "in themselves". But it seems to me that Quantum Mechanics doesn't really presuppose any particular notion of indeterminacy. It says that everything is indeterminate except when it is actually being determined (measured) in some particular interaction-context.

So for me the issue is not what "indeterminacy" means but what it takes to determine anything. I started a thread on this -- "Why is anything measurable?" Of course in a world where everything happens in discrete, Planck-size interactions, nothing will ever be precisely determinate...

Conrad
 
  • #30
ConradDJ said:
Of course in a world where everything happens in discrete, Planck-size interactions, nothing will ever be precisely determinate...

*appears to happen in...

We can't actually know either way. But I suppose that's where apeiron's metaphysical vagueness comes in. Continue :smile:.
 
  • #31
This is all about modelling - about what works as metaphysical description - rather than starting with a claim about what is true. Both standard reductionism and a systems approach to causality are models. You might have reason to use both, but in different domains. For instance, a machine-like logic is very good for building machines. A systems logic may simply be what you need for understanding fundamental nature.

Then to the question: does ontological vagueness (a general metaphysical idea) equal quantum indeterminacy (a possible specific example - perhaps even the canonical example)?

Currently I would say that QM models indeterminacy as probabilities - that is, as crisp variety, countable microstates. So not vague states, but crisp alternatives like spin up and spin down that are "in superposition" or "entangled".
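For concreteness, in standard textbook notation (nothing here is specific to the vagueness proposal) such a state is written

$$|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where spin up and spin down are already a discrete, countable pair of alternatives and the Born rule assigns them the crisp probabilities $|\alpha|^2$ and $|\beta|^2$.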

But it is exactly this that makes a mystery of the collapse mechanism.

I'd look at it differently, using the systems view. I would see the observing context as the crisp macrostate - the constraining boundary conditions. It is the classical context of an experiment that sets up the wavefunction and imposes constraints on a region of quantum vagueness. Crisp outcomes are determined via top-down causality - only spin up or spin down can be observed, due to the experimental set-up - and quantum vagueness decoheres accordingly, the local potential to be various things developing crisply into some localised event.

Non-locality fits in here because the decohering context is the whole of the context involved. And it is also a final-cause argument, as the global scale involves both space and time, allowing a retro-causal (top-down from the largest scale) or transactional approach.

Vagueness itself I define as infinite symmetry. So collapse of a wave-function would be a symmetry breaking.

Symmetry breaking can take two general forms. We can either end up with a symmetry breaking within scale (as in the left/right mirror-image breaking of charge, where both halves are still the same size, and unstably broken as a result - the two halves want to get back together).

Or we can have a symmetry breaking across scale - a local~global asymmetry where one half ends up shrunken very small and the other becomes very large. We would call this figure and ground, event and context. Or other things, like atom and void, QM and GR, the discrete and the continuous...

Symmetry breakings across scale - asymmetrical outcomes - are stable because the two poles of being have moved as far away as possible from each other, and also - being a systems story - they are mutually defining. This is why certain metaphysical dichotomies have proved so robust. Whatever is not changing is by asymmetric definition static. Whatever is not material is void. Whatever is not signal is noise.

The point here is that we find these kinds of dichotomies at the heart of QM. The various local~global descriptions such as particle~wave, position~momentum, energy~time.
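These conjugate pairs have a standard formal expression - ordinary textbook quantum mechanics, nothing special to this argument:

$$\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2},$$

so sharpening one member of a pair necessarily blurs the other (the energy-time relation is the looser, heuristic one, hence the weaker inequality).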

So in QM already, there is something that looks just like the infinite symmetry, the unbroken potential, of vagueness. It is just being modeled in mechanical terms as a set of crisp entangled probabilities (that we reason post-measurement as having had to exist "in there", in the wavefunction).

And in QM we have something that looks just like the dichotomous symmetry breaking mechanism that is about asymmetry of scale, a breaking which finds the form local~global.

Uncertainty arises when the scale of observation is so constrained that there is no room for stable dichotomisation (into a classically certain particle or event within a classically certain world or context). You instead get the Planck grain effect. Radically unconstrained outcomes.
 
  • #32
I definitely agree with a vagueness view of QM. This does seem to get rid of any collapse problems. You can similarly avoid those issues by just taking an instrumentalist view of science, or an epistemological view of the vagueness. That leaves all sorts of problems about what ontology is, though.

Whether the vagueness is epistemological or ontological, whether you collapse the two or consider it some other way... I think it's necessary. There are just too many inconsistencies with collapse and with a lot of other QM interpretations. In that way, I think QM is a good case study for vagueness.
 

1. What is "vagueness" as a model of initial conditions?

Vagueness refers to the idea that initial conditions may not be crisply defined at all - not merely unknown or spread over a range of values, but genuinely indeterminate, a pure potential prior to any definite alternatives. This contrasts both with strict determinism and with ordinary statistical uncertainty.

2. How is vagueness used in scientific models?

Vagueness can be used in scientific models to account for unknown or imprecise factors that may affect the outcome of an experiment or observation. It allows for a more realistic representation of the complexity and unpredictability of natural phenomena.

3. What are the benefits of using vagueness as a model of initial conditions?

Using vagueness in models can help to avoid oversimplification and provide a more accurate representation of reality. It also allows for more flexibility and adaptability in the model, as it can account for unexpected or unknown factors.

4. Are there any limitations to using vagueness as a model of initial conditions?

One limitation of using vagueness is that it can make the model more difficult to interpret and analyze. It may also be challenging to determine the boundaries of the vague parameters, leading to potential inconsistencies or inaccuracies in the model.

5. How does vagueness impact the validity of scientific findings?

Vagueness in initial conditions can make it more difficult to draw definitive conclusions from scientific findings. However, it can also allow for a more nuanced understanding of complex phenomena and provide insights that may not have been possible with a more rigid model.
