Entanglement, Mind and Causality

In summary, the opening post speculates that quantum mechanics implies causality is mutual and symmetrical, similar to action and reaction in Newton's laws: the observer of an experiment is also influenced by the event being observed, creating a state of entanglement. The poster hypothesizes that this does not apply to conscious entities, which can influence the outside universe without being equally influenced in return, leading to imperfect entanglement and the phenomenon of the "wave function collapse"; this might also explain the perception of time and leave room for the idea of free will. Later replies caution that, while there may be some similarities between quantum physics and other philosophical concepts, it is not advisable to draw connections on the basis of superficial similarity alone.
  • #1
SMERSH
It seems to me a lot of what quantum mechanics is essentially saying is that "causality of any given event is equal and opposite to the causing event" - leading to entanglement.

This is in some way analogous to Newton's law that "action and reaction are equal and opposite".

What I mean is that, generally, the so-called observer that caused the experiment to happen has also been influenced in some equal and opposite way by the observed event, and is therefore inextricably "entangled" with the experiment by this causality law.

So if you are a fan of the movie "The Matrix": the Merovingian, who laced the lady's drink with something in the Matrix world to set off a chain reaction, did not realize that perhaps he was also influenced in some equal and opposite way by the event he thought he caused... who was the one really causing things to happen? Anyways, I digress a bit, pardon moi.

This causal symmetry would then apply to any "non-conscious" systems as well that influence each other in a symmetric fashion, in equal and opposite ways.

However, I also hypothesize that what we call "mind" or consciousness is a unique entity that is not governed by this "equal and opposite" causality law. This "mind" is able to influence other entities ("physical world" or "outside universe") without being influenced *as much* in return (asymmetric), therefore it (the mind) is uniquely ***"closed"*** from a causal perspective relative to the "universe" out there. Sort of like unidirectional or semi-unidirectional gate for the flow of causality.

The "flow" of causality is then more from the conscious entity (or "mind" or whatever you want to call it) to the outside universe than vice versa unlike other non-conscious systems in which causality goes equally in both directions. Lack of perfect causal symmetry between the mind and outside universe then leads to lack of perfect entanglement.

This then leads to what we call the "wave function collapse" because the mind is no longer perfectly entangled with the outside physical world and forces the physical world to behave differently than it otherwise would when it interacts with non-conscious systems.

This causal asymmetry may somehow also lead to the perception of the flow of time...

This concept may then allow room for the idea of "free will" since consciousness is somehow causally insulated (at least partly) from the causal chaos in the surrounding universe or multiverse whatever you want to call it.

Mere speculation but it's something I think is worth pondering and I'm using quite general, vague ideas rather than hard specifics.
 
  • #2
however tempting it might be, associating various quantum physics phenomena with all kinds of mystical/metaphysical/vari-philosophical ideas and concepts is something (at least I think) we should refrain from.

sure, some quantum phenomena and epiphenomena have a certain "exotic" flavour to them, but still, a golden rule of thumb is: never try to pull out more than is there... this kind of associative chain works against the enrichment of knowledge, not alongside it...

noticing some apparent similarity just isn't a good enough hint to try and mix two different things together.

ps: I guess I'm not talking just about this particular thread but also about many other such topics I noticed on this board (although, admittedly there are some that do bring some relatively interesting points)...
 
  • #3
SMERSH said:
It seems to me a lot of what quantum mechanics is essentially saying is that "causality of any given event is equal and opposite to the causing event" - leading to entanglement.
This is in some way analogous to Newton's law that "action and reaction are equal and opposite".
What I mean is that, generally, the so-called observer that caused the experiment to happen has also been influenced in some equal and opposite way by the observed event, and is therefore inextricably "entangled" with the experiment by this causality law.

Actually there is a kernel of an important idea here. And as usual, it is about the application of dichotomies in modelling.

What smersh is talking about is an equilibrium of interaction - a balance between events and their contexts, the local and global scale of a system.

Consider Newton's action and reaction. The need for this idea of a world that pushes back at you becomes less mysterious once you realize that it is a book-keeping exercise to correct for the fact that the Newtonian approach breaks the world down into a tale of one-way chains of cause and effect. A force - by definition - is an action in a single direction. Yet if reality is actually woven of interactions, causality must always be mutual in some way. The supposed doer and done-to need to have the properties that allow them to engage - to create an event - in the first place.

According to the third law, if a person pushes a door shut, the door is said to push just as hard against the person’s hand. A diagram would show two equal force vectors springing to life through the line of contact.

However what is really occurring is an interaction between two complex contexts. There is the door with all its material properties, such as a certain structural rigidity, a relative lightness, and a low-friction hinge. Then there is the person, also with a certain structural rigidity, but a greater mass and the ability to employ the frictional resistance of feet braced against the even greater bulk of the Earth. And so any interaction between hand and door will strike an emergent balance. In this case, the door will be seen to move a lot, the person and the Earth very little.
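The emergent imbalance described above can be sketched with a toy calculation (the masses and force below are illustrative assumptions, not figures from the post): the forces are equal and opposite, yet the accelerations they produce differ by many orders of magnitude.

```python
# A toy sketch of the third law's "emergent balance"; all numbers are
# made up purely for illustration.

def accelerations(force, m_door, m_braced):
    """Equal and opposite forces act on both bodies (Newton's third law),
    but the resulting accelerations scale inversely with mass (a = F/m)."""
    return force / m_door, force / m_braced

# A 10 N push on a 5 kg door vs. a person braced against the Earth (~6e24 kg).
a_door, a_braced = accelerations(10.0, 5.0, 6.0e24)
print(a_door)    # 2.0 m/s^2: the door visibly swings
print(a_braced)  # ~1.7e-24 m/s^2: the person plus Earth barely budge
```

The symmetry is in the forces; the asymmetry we actually see is in the responses of two wildly unequal contexts.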

Then to simplify all this messy reality down to the kind of math vectors that can easily be manipulated, Newton created the first two laws to deal with the very idea of vectors, then the third law to deal with the symmetry of interaction - the equilibrium balance that emerges even when the two interacting contexts are wildly asymmetric, or local~global.

So what about QM? Well we should arguably expect the same story to apply. Science thrives on simplifications. But the model still has to capture the messy reality.

Now QM famously does not have the observer, the global context, as part of its formal structure. The observer has to be added in uncertain fashion as a pragmatic step in everyday use of the theory, and leads to a host of competing interpretations as to "what is really going on".

But if the observer, the globally constraining scale, is to be added to the theory, then Newton's third law seems a good example of the kind of way it would be done.

This is I believe the way the decoherence interpretation of QM should pan out. Local QM potentials in interaction with global GR/classical contexts. It has ABSOLUTELY NOTHING to do with human consciousness or biological complexity (excuse the shouting). It has everything to do with the way classical reality is an equilibrium structure in which the bottom-up constructing push of QM is in constant dynamic balance with the top-down constraining "push" of an already decohered universe.

At every scale of observation (by that I mean actual observation by human scientists) we will find that there is then a symmetry - a steady-state balance between two quite unequal scales of action. This would in fact be a formal prediction of a dissipative-structure, systems-style approach to QM. And we do see that, generally speaking, the classical world really does dominate over quantum "weirdness" or uncertainty. Humans have to create special isolated systems - as in double-slit experiments - to get naked QM to show at all.

Again, the mind and freewill have NOTHING to do with any of this. Though you could perhaps say that human experimenters do manage to set up corners of the universe where there is LESS constraint on QM phenomena. We ease the reaction to make the action more visible.
 
  • #4
tauon said:
however tempting it might be, associating various quantum physics phenomena with all kinds of mystical/metaphysical/vari-philosophical ideas and concepts is something (at least I think) we should refrain from.

sure, some quantum phenomena and epiphenomena have a certain "exotic" flavour to them but still, a golden rule of thumb is: never try to pull out more than it's there... this kind of associative chain is rather working against the enrichment of knowledge not alongside it...

noticing some apparent similarity just isn't a good enough hint to try and mix two different things together.

I think you do have a point that caution is required when dealing with the possible implications of our current knowledge of quantum physics and applying it in general to metaphysical ideas.

That is why these ideas, as I described in the original post, are (needless to say) highly speculative, as you suggest.

However, I think that speculation is an essential method of exploring the "phase space" or "solution space" of possible models of reality that *may* provide useful information for ruling certain ideas of this reality in or out. Completely abandoning such speculation is just as dogmatic and anti-intellectual as overvaluing it.

There is always a middle ground: strong intellectual curiosity and the drive to seek out knowledge, however strange it may be, while maintaining healthy scepticism. For example, I do not consider *anything* to have a 100% probability of being true. *Every* theory is subject to possible error.

In my opinion, there is currently an inconsistent "mosaic" of data out there, including the current understanding of quantum physics, classical physics and mathematics, that has essentially hit a theoretical "dead end" in the last few decades, and therefore an open mind is needed to explore new ideas ***however strange they may be to current scientific dogma***. Historically, dogma has tended to be wrong, and also, honestly, quite boring.

We need more empirical data, but while we wait for it, it is harmless (and perhaps intellectually interesting to some people) to speculate on a message board about how we can fit the current "mosaic" of data into some vaguely sensible, simple but speculative model or models. The merits of any such speculation are of course highly subjective, since it is not based on solid empirical data.
 
  • #5
apeiron said:
Actually there is a kernel of an important idea here. And as usual, it is about the application of dichotomies in modelling.

What smersh is talking about is an equilibrium of interaction - a balance between events and their contexts, the local and global scale of a system.

But if the observer, the globally constraining scale, is to be added to the theory, then Newton's third law seems a good example of the kind of way it would be done.

This is I believe the way the decoherence interpretation of QM should pan out. Local QM potentials in interaction with global GR/classical contexts. It has ABSOLUTELY NOTHING to do with human consciousness or biological complexity (excuse the shouting). It has everything to do with the way classical reality is an equilibrium structure in which the bottom-up constructing push of QM is in constant dynamic balance with the top-down constraining "push" of an already decohered universe.

At every scale of observation (by that I mean actual observation by human scientists) we will find that there is then a symmetry - a steady-state balance between two quite unequal scales of action. This would in fact be a formal prediction of a dissipative-structure, systems-style approach to QM. And we do see that, generally speaking, the classical world really does dominate over quantum "weirdness" or uncertainty. Humans have to create special isolated systems - as in double-slit experiments - to get naked QM to show at all.

Again, the mind and freewill have NOTHING to do with any of this. Though you could perhaps say that human experimenters do manage to set up corners of the universe where there is LESS constraint on QM phenomena. We ease the reaction to make the action more visible.

I have seen your previous and current posts about this very interesting description of a sort of symmetry between local and global, top-down and bottom-up causality, and it appears to be consistent with the idea that the rules that govern this universe we are in tend to be highly if not completely symmetric.

Intuitively, any self-consistent universe with consistent rules needs a kind of symmetry.

However, I am not sure how this approach is able to explain the concept of "I" (considering the limitations of semantics and language, I guess "I" seems to be the most often used word for this).

The classic quote from Descartes is almost a cliche: "I think, therefore I am". It sounds to many like the one statement that could be counted as fact, if anything is to be considered fact at all. If this statement by Descartes is false, then everything else I think is either false, or everything else doesn't matter anyway, since "I" do not exist.

By induction, one could consider that there are other "I"s or other entities that are also thinking (ie other thinking beings).

So what is this "I" that thinks? What is this entity or process "I"? And why is this "I" seemingly "tied" to this specific physical "body" (my body) rather than for example your body?

Let's say there is a set A (a, b, c, d, ...) of apparent entities consisting of all the "I"s (whatever these "I"s are) on this planet Earth, and a set A' (a', b', c', d', ...) of apparent entities representing the physical bodies that appear to correspond to the "I"s in the set A.

What is the process or law that specifically says that "a" in the set A must correspond to a' in the set A'? Is this process or law deterministic or random? Is it governed by the local or the global, and why?

Some may say the set A' does not exist (physical bodies do not exist, etc.). If so, then at least the set A must exist, from Descartes's argument. What law or process distinguishes "a" from "b" or from "c" within the set A? In other words, what makes the members of the set A INTRINSICALLY DIFFERENT? Again, is this law or process deterministic or random? Is it governed by the local or global, and why?

Some may say that there is no distinction between members of set A, and therefore there is only one member, "a". To me that is analogous to the previous dogma that the Earth was the centre of the universe - an anthropocentric notion - but nonetheless I suppose some may consider it valid.

Some may say that this set A simply does not exist, and therefore by implication this "I", not even one member of the set A exists, or in some sense A is a null set. Therefore in this case, Descartes was wrong and "I" does not exist.

So if this "I" does not exist and A is a null set, then what is the basis for determining from our logic and thinking that anything else exists, much less what the laws are that govern these things?

I wonder what your thoughts are on these questions and how they may relate to your ideas.

Essentially, what I'm getting at is that a lot of theories do not seem to confront this idea of "I", which seems to be the STARTING POINT of any belief in anything that may exist, and I think a theory is incomplete if it doesn't directly address what this "I" is.

In my opinion any metaphysical theory has to start at this point, with what we consider most certain ("I" exist), before it addresses the entities that are less certain (empirical data, the quantum and classical physical universes, metaverses, the models and laws governing everything, etc.). I think the fact that we "think" is more concrete evidence than anything else.

I would almost propose a "level of evidence" scale for metaphysical discussion: Level 0 (Absolute truth, unknowable), Level 1 (I think therefore I am, or similar such proof), Level 2 (Empirical data, experiments), Level 3 (Theories or models that predict experiments and empirical data, ? Qualia), etc.
 
Last edited:
  • #6
SMERSH said:
Essentially, what I'm getting at is that a lot of theories do not seem to confront this idea of "I", which seems to be the STARTING POINT of any belief in anything that may exist, and I think a theory is incomplete if it doesn't directly address what this "I" is.

The question you have to ask yourself is whether "I" is the simplest thing or instead the most complex thing. Is it a fundamental aspect of reality, or is it perhaps rather the intersection of quite a variety of activities and processes?

Well, science supports the second view, even if naive realism seems to say that I-ness is somehow the irreducible fundamental - the only thing that can't be doubted, imagined to be non-existent, etc.
 
  • #7
apeiron said:
The question you have to ask yourself is whether "I" is the simplest thing or instead the most complex thing. Is it a fundamental aspect of reality, or is it perhaps rather the intersection of quite a variety of activities and processes?

Well, science supports the second view, even if naive realism seems to say that I-ness is somehow the irreducible fundamental - the only thing that can't be doubted, imagined to be non-existent, etc.

Well said. The second view seems to me too reminiscent of a return to classical reductionism: that this I-ness is a construct of smaller, less complex subunits (activities or processes), as opposed to a "thing" that exists in itself (or that has a strongly emergent property and exists independently, at least).

Also, the second view seems self-contradictory. This I-ness (the thinking process) by definition encompasses the very same logical thinking processes that we are using to come up with all our questions, our theories, our logic, our mathematics, our thoughts, etc. So if this thinking by "I" is somehow false or not real, then all the content of the thinking (all our theories and logic) is also somehow false and not real (since it cannot be proven without thought). Therefore, on the second view, ANY theory we come up with is not real, since the thinking is not real.
 
  • #8
SMERSH said:
Well said. The second view seems to me too reminiscent of a return to classical reductionism...

Actually, it is the second view I believe in and was arguing for.

But of course, I don't advocate classical reductionism. I advocate the appropriate approach to modelling complexity which is the systems science approach. Which would be what is now the norm in theoretical biology and theoretical neuroscience.

Examples of classical reductionism still exist of course - evolutionary psychiatry and overly computational approaches to cognitive psychology for instance. Very popular still in US and UK especially.
 
  • #9
apeiron said:
Actually, it is the second view I believe in and was arguing for.

But of course, I don't advocate classical reductionism. I advocate the appropriate approach to modelling complexity which is the systems science approach. Which would be what is now the norm in theoretical biology and theoretical neuroscience.

Examples of classical reductionism still exist of course - evolutionary psychiatry and overly computational approaches to cognitive psychology for instance. Very popular still in US and UK especially.

I understood you were advocating for the second view and I merely meant you stated it well, although I did not agree with it, and you have made some very interesting points.

I think one of the problematic ideas with reductionism is that something complex must necessarily be less fundamental and must consist of simpler, more fundamental components. But there is no logic that says this is necessarily so. Something *may* be exceedingly complex yet fundamental and irreducible to any simpler components.
 
  • #10
SMERSH said:
I understood you were advocating for the second view and I merely meant you stated it well, although I did not agree with it, and you have made some very interesting points.

Thanks.

SMERSH said:
I think one of the problematic ideas with reductionism is that something that is complex must necessarily be less fundamental and must consist of simpler more fundamental components. But there is no logic that this is necessarily so. Something *may* be exceedingly complex yet fundamental and irriducible to any simpler components.

Or what about reductions that go in two directions instead of just the one?

Take water for example. An H2O molecule can be "reduced" both to its substantive components and to its global form. And I mean reduced not as "breaking something big into little pieces", but in the modelling sense of generalisation - that is, reducing the amount of information needed to model some aspect of reality.

So we can talk about H2O as particular example of a collection of atoms. Or we can also talk about it as a particular example of the global form we call "a liquid" (or even more generally of course, a phase of matter, so bringing in solids, gases and plasmas).

Does the atomic description of H2O ever tell us everything about H2O-ness? No. And equally, we need to know more about H2O than that it is a liquid sharing a form common to many liquids. But given both the atomic description and the global systems behaviour, we do now have a pretty good understanding of H2O molecules.

Same with "conscious" humans. We can look to the neurons. But we should also look to the global form of brains, which for example could be modelled as an anticipatory system, an autopoietic system, or a complex adaptive system - there are a few choices that say reasonably much the same thing. These would be generalisations about the organisation of the parts, just as "liquid" is a generalisation about the organisation of atoms.

But much more complex!
 
  • #11
Not being a philosopher I feel unqualified to join this discussion but I did take a QM class that went through entanglement.

My professor did not view entanglement as a reason to doubt quantum mechanics. Originally, entanglement was posed as the Achilles heel of the theory because it implied non-local signalling of events. A measurement on Earth can instantaneously inform an observer in the Andromeda galaxy. Einstein felt that signals need to travel through the intervening space and that any theory allowing instantaneous signalling could not be correct, because it violated any rational picture of how the Universe must work. His objection was much like an application of the principle of sufficient reason.

Not long after Einstein died, experiments were devised to test for non-locality. They showed that non-locality was correct and that Einstein was wrong. Since then, many physicists accept entanglement as an established feature of the Universe.
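The kind of test described here is a Bell-type experiment. As a minimal sketch of the arithmetic (using the textbook quantum prediction E(a, b) = -cos(a - b) for singlet spin correlations and the standard CHSH angle choices, nothing specific to the posts above):

```python
import math

def E(a, b):
    """Quantum-mechanical correlation for spin-1/2 singlet measurements
    along analyzer angles a and b (radians): E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH angle choices.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; any local hidden-variable theory obeys |S| <= 2.
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # ~2.828 (= 2*sqrt(2)): the quantum prediction violates the bound
```

Experiments of the Aspect type measured correlations consistent with the quantum value rather than the local-realist bound.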

However, the real conceptual problem is not with entanglement per se but with the nature of measurement. Quantum mechanics describes the outcomes of measurements but not what a measurement is. For instance, a measurement is not a solution of the Schrödinger equation.

The collapse of the wave function during measurement remains a mystery. There is no good fundamental theory yet of how this happens. All that we have is a phenomenological description of the outcomes of repeated measurements.

Eugene Wigner believed that measurement involved the duality of mind and matter. Matter, QM teaches us, has the quality of non-definiteness - of a distribution of states that are simultaneously combined - whereas mind has the property of definiteness and of discrete, specific objects: concepts and particular perceptions. Wigner believed that when the two realities of matter and mind interact, the mind is forced by its intrinsic nature to convert the indefinite world of matter into the specific world of mind, that is, of definite, discrete ideas. This, for him, was a measurement. For Wigner, dualism was a theory of measurement.

In Wigner's theory the observer doesn't really affect reality. Measurement is the interpretation of one world (matter) by another (mind) - I think.

Naively, it seems to me that Wigner's theory is not reductionist. The wave function/matter is more complex than the observed reality, and the process of observation, which requires the mind-matter system, is actually simpler.

Physicists today are still trying to reconcile the problem of measurement and do not entertain Wigner's theory probably because it does not take them any further.

An important point, though, is that it is really the problem of measurement, not of entanglement, that makes QM confusing. For instance, the measurement on Earth of the momentum of a proton collapses its wave function and instantaneously provides the information to the observer in the Andromeda galaxy. So measurement contains entanglement as a special case.
 
  • #12
wofsy said:
The collapse of the wave function during measurement remains a mystery. There is no good fundamental theory yet of how this happens. All that we have is a phenomenological description of the outcomes of repeated measurements.
The mystery is only there if you reify (treat as a representation of reality) the wave-function.
When you follow the orthodox Copenhagen Interpretation the wave-function is a representation of information about the system in the same sense as a classical probability distribution. That is to say it is one level deeper in abstraction than a physical representation of reality.

Similarly, entanglement is simply a non-classically describable but purely statistical correlation between observables of two systems. The non-locality is a red herring and provably non-causal (no Bell telephones are possible). If the non-local phenomena cannot be used to send a signal, then the non-locality must be with regard to non-physical components of a model. (We don't observe wave-functions; we write wave-functions for systems which we do observe.)

Given that interpretation there is no mystery to the collapse of the wave-function as it is qualitatively no different from the collapse of a classical probability distribution upon updating assumptions about the physical system.

The analogue I often use is the "non-local collapse" in value of lottery tickets when the drawing is made. It expresses a change in relationships subject to non-local constraints (there is only one winning ticket among all the tickets distributed over all of space).
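The lottery analogy can be sketched as a classical Bayesian update (the ticket labels and four-ticket draw below are hypothetical placeholders): the distribution over tickets "collapses" everywhere at once when the result is announced, yet nothing physical travels between ticket holders.

```python
# Toy lottery: four tickets, one winner; labels are made up for illustration.

tickets = ["a", "b", "c", "d"]
prior = {t: 1.0 / len(tickets) for t in tickets}  # each ticket "worth" 1/4

def announce_winner(dist, winner):
    """Condition on the news of the draw: the classical distribution
    collapses at once for every ticket holder, however far apart they are,
    with no physical signal passing between them."""
    return {t: (1.0 if t == winner else 0.0) for t in dist}

posterior = announce_winner(prior, "c")
print(posterior)  # {'a': 0.0, 'b': 0.0, 'c': 1.0, 'd': 0.0}
```

On this reading the "collapse" is a change in a representation of knowledge subject to a non-local constraint (exactly one winner exists), not a physical process.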

Take the example of an observation of a particle's position. The assumption that you are dealing with a system of, say, exactly one proton over a large volume is the imposition of a non-local constraint. Measuring the position of that proton is a non-local measurement because it incorporates the continued constraint of, e.g., not allowing another proton to form via pair production or to enter the experimental region. When one measures the proton at one position, one is simultaneously observing all positions, or at least observing proton fluxes across a boundary surrounding all regions in question, and hence the action is not localized. Why then should you expect your representation of your knowledge about the proton's behavior to be localized?

As to the system not satisfying the Schrödinger equation during measurement... this is not a fundamental transgression of the SE, but rather reflects the fact that a measurement process involves an interaction between system and measuring device which must invoke an unknown Hamiltonian involving dissipative effects inherent in the measurement process. Measurement is a thermodynamically non-trivial act. The non-unitarity of the system's evolution during measurement is merely a feature of the fact that during measurement the system is not isolated but rather part of a larger entropic system-plus-measuring-device.

Failure of unitary evolution during measurement is no different than non-conservation of energy for a system in contact with an entropy dump. One does not need to believe this non-conservation is occurring at a fundamental level, only that the usual assumption that the system is evolving in isolation is explicitly being invalidated.
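This non-isolation point can be illustrated with a minimal toy model (a single "environment" qubit standing in for the measuring device, an assumption made purely for illustration): the joint evolution is perfectly unitary, but tracing out the environment leaves the system in a mixed state whose coherences are gone.

```python
import numpy as np

# System qubit in superposition; one environment qubit "records" it via CNOT.
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # system: (|0> + |1>)/sqrt(2)
env = np.array([1.0, 0.0])                 # environment: |0>
joint = np.kron(plus, env)                 # initial product state

# CNOT with the system as control.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = cnot @ joint                       # perfectly unitary interaction

rho = np.outer(joint, joint.conj())        # joint density matrix
# Trace out the environment: rho_sys[s, s'] = sum_e rho[(s, e), (s', e)].
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)                      # diag(1/2, 1/2): coherences are gone
print(np.trace(rho_sys @ rho_sys))  # purity ~0.5 < 1: the subsystem is mixed
```

Nothing non-unitary happens at the fundamental level; the apparent non-unitarity appears only when the environment's degrees of freedom are ignored.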

Mind you, mysteries and unanswered questions remain w.r.t. quantum theory, but I don't think there are any great mysteries with regard to entanglement and WF-collapse per se.
 
  • #13
I do not believe that there is any way to introduce a new Hamiltonian into the system to account for measurement. That is the whole point.
 
  • #14
It seems to me that your lottery ticket analogy begs the question of what a measurement is.

Your arguments about the non-locality of position measurement are not predicted by quantum mechanics, unless all you mean by this is that a particular position gives you the information that all other positions are excluded. This is true of any measurement of anything. I do not see that it explains the collapse of the wave function. To do this, I would think you need a physical theory that describes the selection of the lottery ticket. In classical physics this description exists because the theory requires that particles have positions and momenta in phase space. But in QM this is not true. The wave function evolves according to a law which is then suddenly violated in measurement.
 
Last edited:
  • #15
jambaugh said:
The mystery is only there if you reify (treat as a representation of reality) the wave-function.
When you follow the orthodox Copenhagen Interpretation the wave-function is a representation of information about the system in the same sense as a classical probability distribution.


In classical probability, the so-called 'probability' is equal to our 'ignorance' of the values of the variables (e.g. flipping coins). This isn't the case in QM, which presents a completely new situation bearing no resemblance to classical ignorance: the indeterminacy of the observables is inherent. Moreover, contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables.


Similarly entanglement is simply a non-classically describable but purely statistical correlation between observables of two systems.


Why are you asserting this? What does it explain?


The non-locality is a red herring and provably non-causal (no Bell telephones are possible.)


So if information cannot be sent by humans, then no causal signalling is at play? This assertion cries out for elaboration and elucidation.


If the non-local phenomena cannot be used to send a signal then it must be non-locality with regard to non-physical components of a model. (We don't observe wave-functions we write wave-functions for systems which we do observe.)

I agree with this. But in my reasoning, there is something fundamental unaccounted for, that cannot be patched up with "this is just a strong correlation", "it's just statistics", "it's just a feature of the universe we live in", etc.

Given that interpretation there is no mystery to the collapse of the wave-function as it is qualitatively no different from the collapse of a classical probability distribution upon updating assumptions about the physical system.


Do you mean that nonlocal signalling is not a physical process?


The analogue I often use is the "non-local collapse" in value of lottery tickets when the drawing is made. It expresses a change in relationships subject to non-local constraints (there is only one winning ticket among all the tickets distributed over all of space).


The tickets always have definite properties, whether you measure them or not. You cannot claim so with quantum objects, so your example cannot be valid. As the saying goes - "Heisenberg may have been here".:smile:
 
  • #16
WaveJumper said:
In classical probability, the so-called 'probability' is equal to our 'ignorance' of the values of the variables(e.g. flipping coins).

This isn't the case in QM, where the observables are inherently indeterminate; that makes for a completely new situation with no classical counterpart.
Within the classical context of ontological description. When you transition to quantum theory you transition to "praxic" probabilities.

Remember you can reverse this "representation of ignorance" to "representation of limited knowledge". You then are not referring to what is potentially operationally meaningless and sticking to the operationally meaningful "knowledge about the system" in the form of observables and the probability of what you will observe.

One is still making the "classicalness" assumption when one partitions degree of information about a system into "knowledge vs. ignorance" as in "knowledge vs. ignorance about the state of reality". Even in the classical paradigm this can lead to problems as with Gibbs paradox. In the quantum paradigm the two qualities gain more independence. One can still quantify ignorance in the form of entropy but without making a priori judgments about nature.

Moreover, contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables.
One can make simultaneous predictions in the form of stating expectation values for all observables (including squares of the given observables and hence their variances).

Rather one cannot simultaneously test all predictions.
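This distinction (simultaneous predictions, but not simultaneous tests) can be made concrete. A minimal NumPy sketch, assuming a single qubit; the state, operators, and helper names are my own illustrations, not anything from the thread:

```python
import numpy as np

# Minimal illustration: one qubit state predicts means and variances for
# non-commuting observables simultaneously, even though the corresponding
# tests cannot coexist.
sx = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z

# State |+x> = (|0> + |1>)/sqrt(2), written as a density operator.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def expval(rho, A):
    """Predicted mean of observable A: <A> = Tr(rho A)."""
    return np.trace(rho @ A).real

def variance(rho, A):
    """Predicted variance: <A^2> - <A>^2."""
    return expval(rho, A @ A) - expval(rho, A) ** 2

print(expval(rho, sx), variance(rho, sx))  # sharp: mean 1, variance 0
print(expval(rho, sz), variance(rho, sz))  # unsharp: mean 0, variance 1
```

Both pairs of predictions exist at once in the single operator `rho`; only the acts of testing them conflict.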

entanglement is simply a non-classically describable but purely statistical correlation
Why are you asserting this? What does it explain?
The nature of entanglement. Specifically that there isn't necessarily some unobservable causal connection between actions (e.g. acts of measurement) on entangled pairs.
So if information cannot be sent by humans, then no causal signalling is at play? This assertion cries out for elaboration and elucidation.
So I shall elaborate. If information cannot be sent by humans then any causal signaling which you may assert is empirically meaningless, in the same sense that aether explanations of the Michelson-Morley experiment are empirically meaningless. You can say "Ha! You cannot disprove the aether!" and naturally one cannot empirically verify the non-existence of the fundamentally empirically invisible. Likewise you can say "Ha! You cannot disprove causal signaling!". But such hypotheses, being inherently untestable (within the given theory), should be excised with Occam's ever-sharp scalpel.

I agree with this. But in my reasoning, there is something fundamental unaccounted for, that cannot be patched up with "this is just a strong correlation", "it's just statistics", "it's just a feature of the universe we live in", etc.

How many angels can dance on the head of a pin? Is there a God? There is much which is "unaccounted for" in physics. Our first task is to excise from the theories what is fundamentally unaccountable because it is empirically meaningless. There is still room for it in the language of physics and you are welcome to speak of a particular model for what lies beneath the observable physical phenomena. E.g. it is not a priori improper to speak of modeling entangled pairs and EPR experiments with this addition of causal signaling. But the traditional use of models is to act as scaffolding for the generation of the actual theories, and once they are built this scaffolding ought to be removed.

Do you mean that nonlocal signalling is not a physical process?
Observable non-local signaling is a "physical process" in that it is a testable hypothesis. Physical processes are observable processes so a "fundamentally unobservable physical process" is an oxymoron.

Now mind you, we can play with new theories invoking more in the way of what is observable about nature. E.g. one can speak of a theory in which there are exotic means to actually perceive the aether. Likewise one could theorize about exotic means to directly observe Bohm's pilot waves. One is then stepping beyond the theory in hand. Feel free to play with alternative theories. But otherwise, while one is in the given theory, one should not make physicality assertions about what is assumed within that theory to be fundamentally unobservable.

The tickets always have definite properties, whether you measure them or not. You cannot claim so with quantum objects, so your example cannot be valid. As the saying goes - "Heisenberg may have been here".:smile:
Yes, this isn't an example of a quantum system. Nonetheless there is still a "collapse"! This analogy is intended to show that collapse in description needn't imply some physical seismic activity "out there". The update in our knowledge about the tickets is demonstrably non-physical (i.e. not a sudden non-local change in the nature of each ticket). Because the tickets are classical objects we can directly assert this and it is obvious.

Now the lotto drawing is not a measurement of the tickets. The analogy is not intended to be pushed that far. It is only a demonstration of the nature of "collapse".
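The lottery "collapse" is easy to make explicit in code. A toy sketch, assuming nothing beyond the analogy itself (all names are mine):

```python
import random

# Toy model of the lottery analogy: one winning ticket among N.
# The physical makeup of every ticket is fixed throughout; only our
# *description* (the probability distribution) changes at the drawing.
N = 1000
winner = random.randrange(N)   # fixed at printing time, unknown to holders

p_before = 1 / N               # pre-drawing description: uniform

def p_after(ticket):
    """Post-announcement description: certainty, updated 'non-locally'
    for every ticket everywhere at once."""
    return 1.0 if ticket == winner else 0.0

print(p_before)                # 0.001
print(p_after(winner))         # 1.0
```

Nothing in `p_after` touches any ticket; the "collapse" is entirely an update of the describing function.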

In the actual quantum case of a measurement one can first assert a specific measurement is made (a physical action) in which case one changes the system description. After a measurement is made and before one has asserted which value one has measured the proper description is a density operator diagonal in the eigen-basis of the observable in question (well diagonal assuming it is a non-degenerate observable, otherwise block diagonal...)

The "quantum probabilities" become "classical probabilities". It is then when you make the assertion that "the measured value was x!" that you collapse these classical probabilities to a singular certainty and you'll note it is a classical collapse. You can then update the density operator, assert it is in fact represents a sharp description and write the corresponding "ket" or "wave function" if you prefer that format. But if you stick to density operator format you loose nothing physical in the description.

In that format it is easier to break down the physical act of measurement. Clearly by interacting with our measuring device the quantum system has become entangled. Thus the joint density operator for system + device yields more entropy when we do a partial trace. This is simply decoherence of the system.
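The entropy increase under the partial trace can be checked directly. A small NumPy sketch, with a Bell pair standing in for system + device; the helper names are my own:

```python
import numpy as np

# Entanglement with a "measuring device": a Bell pair as a stand-in.
# |Phi+> = (|00> + |11>)/sqrt(2) on system (x) device.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_joint = np.outer(phi, phi.conj())        # pure joint state: entropy 0

def partial_trace_device(rho):
    """Trace out the second qubit of a 2-qubit density operator."""
    r = rho.reshape(2, 2, 2, 2)              # indices [i1, i2, j1, j2]
    return np.einsum('ikjk->ij', r)          # sum over i2 = j2

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]             # drop numerical zeros
    return float(-(evals * np.log(evals)).sum())

rho_sys = partial_trace_device(rho_joint)    # maximally mixed: I/2
print(von_neumann_entropy(rho_joint))        # ~0.0
print(von_neumann_entropy(rho_sys))          # ~0.693 = ln 2
```

The joint state is pure (zero entropy), yet the system alone carries a full bit of entropy: decoherence of the system in miniature.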

Now if you are still looking for a physical collapse note that simply by changing our description we can change "where and when the collapse occurs". It is hardly reasonable to assert that a phenomenon so format dependent is actually physical. Not impossible mind you but if so, you necessarily should be able to recast exactly what is going on in a format independent way. Have at it but I suspect such an attempt with regards to collapse is futile.

Ultimately the distinction between classical and quantum theory is not the scale of the system but the scale of the observer (measuring devices). Classically we assume the observer can be arbitrarily small so as to be affected by the system without affecting it. (The bug notices the bus without the bus noticing the bug.) Note the definition and role in classical theories of the test particle.

We always (semantically) begin with a system description which is meaningful as our knowledge about the system. This is necessarily so given the empirical epistemology of science.

The classical assumption is then that we can recast a representation of "our knowledge about the system" to a representation of the reality of the system. This assumption we make implicitly and intuitively, but it gets made. But given the physical nature of our knowledge (it derives from a physical act of measurement, i.e. a physical interaction with the system), this assumption has physical consequences. It is a testable component of the theory, which gets invalidated when we note that acts of measurement do not commute.

When we "explicitize" and thence withhold this assumption and the corresponding transition to an ontological description we are still doing good science. But we must consistently stay in the realm of "system description as our knowledge about the system". Attempting to look deeper is attempting to reassert that which we so necessarily rejected.

Thinking in terms of "the real state of the system" is as improper in this context in just the same way as is thinking in terms of "which twin is really older" in the symmetric twins paradox of SR. One can choose to define an absolute answer to the latter by asserting an absolute frame defined by the aether. But in so doing one is both rejecting the cornerstone of the theory and stepping outside the realm of science in making operationally meaningless assertions.

QM relativizes the "absolute state of reality" in the same way that SR relativizes absolute time.
 
  • #17

I would love to understand your points but I find your exposition difficult. Can you simplify the language?

Here are a couple of talking points that may help direct your response.

- In classical mechanics one can in principle follow the path of any particle although in practice this may be impossible. In QM it is in principle impossible to follow the path of a particle. So it is not really a case of ignorance or limited knowledge.

The example of a lottery doesn't isolate this distinction. For instance, I can think of the lottery as measuring the position of a particle in continuous Brownian motion.

Further, in Brownian motion the act of measurement does not change the stochastic process governing the path. In QM, measurement creates a jump discontinuity in the stochastic process. In Brownian motion there is no collapse of the wave function; the stochastic process of the particle is unchanged.

- A QM measurement can change the stochastic process of characteristics of the QM entity other than the one being measured. It can for instance throw a particle out of an eigenstate of another operator. Again the lottery analogy does not cover this.

- While I have never worked this out, I wonder what the evolution of measurement probabilities for the position operator is. In QM it seems that the real stochastic process is the evolution of amplitudes. This is a Markov-like process except that amplitudes replace probabilities. This again is not reflected in the lottery analogy.

- One might summarize all of these points by saying that the stochastic processes of QM are different from processes where only probabilities are evolving. Another example is that interference in QM is really the superposition of two "complex Markov processes."

- In entanglement there is no signaling, merely collapse of the wave function. A signal would have to travel at a finite speed. One might try to mimic signaling by setting up an apparatus on Betelgeuse that detects the second electron's spin whenever it actually had a definite spin, although I am not sure that such an apparatus could in principle be constructed. Then when we measure the spin of the first electron here on Earth the apparatus on Betelgeuse will immediately record the spin of the second electron. The problem with this seems to be that simultaneity is not an absolute concept.

Instantaneous changes occur in classical physics even in General Relativity. The motion of a star changes the curvature of all of space continuously and instantaneously.

- You say, "Thinking in terms of 'the real state of the system' is as improper in this context in just the same way as is thinking in terms of 'which twin is really older' in the symmetric twins paradox of SR."

If a measurement throws a particle out of a previously measured eigenstate of another operator then you do know the state of the particle with respect to the other operator. It is a definite linear combination of eigenstates of that operator.
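On the "no signaling, merely collapse" point above: it can be verified numerically that whatever measurement is performed on one half of an entangled pair, the other half's locally observable statistics are unchanged. A minimal NumPy sketch, assuming a Bell pair and a sigma_z measurement on Alice's side; all names are illustrative:

```python
import numpy as np

# No-signalling check on a Bell pair: Alice's act of measuring (before
# anyone learns the outcome) leaves Bob's reduced density operator,
# i.e. everything Bob can locally observe, exactly as it was.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi.conj())

def bob_state(rho):
    """Bob's reduced density operator (trace out Alice, the first qubit)."""
    return np.einsum('kikj->ij', rho.reshape(2, 2, 2, 2))

# Projectors for Alice measuring sigma_z on her qubit.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
P1 = np.kron(np.diag([0.0, 1.0]), np.eye(2))

# Ensemble after Alice's measurement, outcome not yet announced:
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1

print(np.allclose(bob_state(rho), bob_state(rho_after)))  # True
```

This is the content of the no-signalling theorem in miniature: the Betelgeuse apparatus would record correlations, but nothing Bob alone can measure distinguishes "Alice measured" from "Alice didn't".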
 
  • #18
jambaugh said:
Within the classical context of ontological description. When you transition to quantum theory you transition to "praxic" probabilities.

Remember you can reverse this "representation of ignorance" to "representation of limited knowledge". You then are not referring to what is potentially operationally meaningless and sticking to the operationally meaningful "knowledge about the system" in the form of observables and the probability of what you will observe.

One is still making the "classicalness" assumption when one partitions degree of information about a system into "knowledge vs. ignorance" as in "knowledge vs. ignorance about the state of reality". Even in the classical paradigm this can lead to problems as with Gibbs paradox. In the quantum paradigm the two qualities gain more independence. One can still quantify ignorance in the form of entropy but without making a priori judgments about nature.


I don't agree with the assumption that unmeasured states have definite observables that are inaccessible to our experiments(i.e. limited knowledge of the system). That seems to be the dividing line between our viewpoints.


One can make simultaneous predictions in the form of stating expectation values for all observables (including squares of the given observables and hence their variances).

Rather one cannot simultaneously test all predictions.


The nature of entanglement. Specifically that there isn't necessarily some unobservable causal connection between actions (e.g. acts of measurement) on entangled pairs.

So I shall elaborate. If information cannot be sent by humans then any causal signaling which you may assert is empirically meaningless, in the same sense that aether explanations of the Michelson-Morley experiment are empirically meaningless. You can say "Ha! You cannot disprove the aether!" and naturally one cannot empirically verify the non-existence of the fundamentally empirically invisible. Likewise you can say "Ha! You cannot disprove causal signaling!". But such hypotheses, being inherently untestable (within the given theory), should be excised with Occam's ever-sharp scalpel.


If you didn't assume that particles had definite properties at all times, merely hidden from our view, you'd be hard pressed to come up with a classical-like example. In my view, a polarised photon has neither a horizontal nor a vertical polarisation until you do a measurement. I find it meaningless to talk about unmeasured events. It appears you are advocating the Bohmian approach.





Now mind you, we can play with new theories invoking more in the way of what is observable about nature. E.g. one can speak of a theory in which there are exotic means to actually perceive the aether. Likewise one could theorize about exotic means to directly observe Bohm's pilot waves. One is then stepping beyond the theory in hand. Feel free to play with alternative theories. But otherwise, while one is in the given theory, one should not make physicality assertions about what is assumed within that theory to be fundamentally unobservable.


Yes, this isn't an example of a quantum system. Nonetheless there is still a "collapse"! This analogy is intended to show that collapse in description needn't imply some physical seismic activity "out there". The update in our knowledge about the tickets is demonstrably non-physical (i.e. not a sudden non-local change in the nature of each ticket). Because the tickets are classical objects we can directly assert this and it is obvious.

Now the lotto drawing is not a measurement of the tickets. The analogy is not intended to be pushed that far. It is only a demonstration of the nature of "collapse".

In the actual quantum case of a measurement one can first assert a specific measurement is made (a physical action) in which case one changes the system description. After a measurement is made and before one has asserted which value one has measured the proper description is a density operator diagonal in the eigen-basis of the observable in question (well diagonal assuming it is a non-degenerate observable, otherwise block diagonal...)

The "quantum probabilities" become "classical probabilities". It is then when you make the assertion that "the measured value was x!" that you collapse these classical probabilities to a singular certainty and you'll note it is a classical collapse. You can then update the density operator, assert it is in fact represents a sharp description and write the corresponding "ket" or "wave function" if you prefer that format. But if you stick to density operator format you loose nothing physical in the description.

In that format it is easier to break down the physical act of measurement. Clearly by interacting with our measuring device the quantum system has become entangled. Thus the joint density operator for system + device yields more entropy when we do a partial trace. This is simply decoherence of the system.

Now if you are still looking for a physical collapse note that simply by changing our description we can change "where and when the collapse occurs". It is hardly reasonable to assert that a phenomenon so format dependent is actually physical. Not impossible mind you but if so, you necessarily should be able to recast exactly what is going on in a format independent way. Have at it but I suspect such an attempt with regards to collapse is futile.

Ultimately the distinction between classical and quantum theory is not the scale of the system but the scale of the observer (measuring devices)...



I can't decide if my confusion stems from you mixing Bohmian mechanics with QM or if I misunderstood your point.
 
  • #19
WaveJumper said:
In my view, a polarised photon has neither a horizontal, nor a vertical polarisation until you do a measurement.

Why do you think this? To me this is post hoc ergo propter hoc; to assume the UP specifies that objects are inherently undetermined until hit by photons, electrons, or some other combination of particles that we use to measure them is a logical fallacy. While I agree that the UP specifies that any observation necessitates a change in that system, and we therefore cannot know momentum and position simultaneously, I do not agree that it necessitates that reality is undetermined until interacted with, whether intentionally (observation) or not.

Consider this thought experiment:

Define the Universe to have one quantum system, a hydrogen atom, H. Let us assume that the observer is disregarded and that we can somehow, however implausible, know about H. We are somehow privy to the absolute, which includes that which does and does not exist.

By your reasoning:
- H would never emerge into existence because there are no other quantum systems other than H. (Causality is violated--reality itself, not to be confused with determinism, is somehow defined by other portions of reality? Hmm. I don't know if I can agree with this. This is like saying nothing would exist if it weren't for something else interacting with it. Please explain if you feel this is true. And what happens if a particle is utterly isolated, does it wisp out of existence? Wouldn't that violate conservation?)

By my reasoning:
- H is there, but could change if other quantum systems were introduced to the universe. (Causality is preserved.)

A third reasoning (undecided):

- H isn't there yet, but exists in some quasi-fundamental plane/state in which an interaction (observation, etc) with another quantum system will cause it to emerge into the Universe, but based deterministically on the interactions with said system(s). (Causality is preserved.)

Looking forward to your thoughts on this.

-Truth
 
  • #20
1Truthseeker said:
Why do you think this? To me this is post hoc ergo propter hoc; to assume the UP specifies that objects are inherently undetermined until hit by photons, electrons, or some other combination of particles that we use to measure them is a logical fallacy. While I agree that the UP specifies that any observation necessitates a change in that system, and we therefore cannot know momentum and position simultaneously, I do not agree that it necessitates that reality is undetermined until interacted with, whether intentionally (observation) or not.

Following from http://plato.stanford.edu/entries/bell-theorem/, when considering classical complementary properties like position and momentum, it is a necessary fact of QM that the two values cannot have simultaneous definite existence. The suggestion that properties actually belong to particles but are merely hidden from our knowledge is a claim of local realism, which has been falsified by experiment and is rejected by QM and by every interpretation anywhere near the mainstream.

This is why Bohr often referred to Heisenberg's "indeterminacy relation" rather than his "uncertainty principle." The historical use of "uncertainty" is confusing and unfortunate.
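For the curious, the Bell/CHSH argument referenced above can be checked numerically: local realism bounds the CHSH combination by 2, while the quantum prediction for a Bell pair reaches 2*sqrt(2). A NumPy sketch; the angles and helper names are my own choices, not from the thread:

```python
import numpy as np

# CHSH check: quantum correlations on a Bell state violate the
# local-realist bound |S| <= 2, reaching 2*sqrt(2) at the right angles.
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)               # (|00> + |11>)/sqrt(2)

def spin(theta):
    """Spin observable cos(theta)*sigma_z + sin(theta)*sigma_x."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    """Correlation <A(a) (x) B(b)> in the Bell state; equals cos(a - b)."""
    AB = np.kron(spin(a), spin(b))
    return float(np.real(phi.conj() @ AB @ phi))

a0, a1 = 0.0, np.pi / 2                        # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4                 # Bob's two settings
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(S)   # ~2.828, comfortably above the local-realist bound of 2
```

Any assignment of pre-existing values ±1 to all four observables gives |S| ≤ 2, which is exactly the local-realist claim that the computed quantum value contradicts.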
 
  • #21
WaveJumper said:
I don't agree with the assumption that unmeasured states have definite observables that are inaccessible to our experiments(i.e. limited knowledge of the system). That seems to be the dividing line between our viewpoints.

I think I missed where this was stated in jambaugh's comment. Mind giving some clarification jambaugh? Or mind explaining where you see this Wavejumper?

WaveJumper said:
I can't decide if my confusion stems from you mixing Bohmian mechanics with QM or if I misunderstood your point.

I'm not an expert on what would be considered modern day orthodox CI, but I thought jambaugh's description was a very good retelling of what were essentially Bohr's views, including the statistical knowledge-only meaning of the wave function, the non-reality of collapse, the necessity of the expression of reality in classical terms, and minimal speculation on anything else lying beneath. I'm missing the traces of Bohm here.

By the way, http://plato.stanford.edu/entries/qm-copenhagen/ has a very accurate recap of CI and Bohr in particular for anyone interested in more.
 
  • #22
kote said:
Following from http://plato.stanford.edu/entries/bell-theorem/, when considering classical complementary properties like position and momentum, it is a necessary fact of QM that the two values cannot have simultaneous definite existence. The suggestion that properties actually belong to particles but are merely hidden from our knowledge is a claim of local realism, which has been falsified by experiment and is rejected by QM and by every interpretation anywhere near the mainstream.

This is why Bohr often referred to Heisenberg's "indeterminacy relation" rather than his "uncertainty principle." The historical use of "uncertainty" is confusing and unfortunate.

Does the QM system pop into existence or not? That is the question. Its properties are another thing entirely. I don't mind that. Its existence, its being or not being, is the real concern. Can you agree that an electron exists 100% of the time, or are you saying that the electron doesn't exist until we make measurement?

My view is that it does not pop into existence, it was always there, but that its fundamental nature is of superposition; that is the way that type of thing exists in reality. Which is to paraphrase Feynman, like nothing we have ever seen before, so don't try and relate it to your everyday experience.

I totally accept an electron probability density. What I do not accept is that the electron doesn't exist before collapse. That to me seems an outrageous misinterpretation of the theories. QED reasons that the thing exists. How does CI differ from QED?

Agree/disagree? Why
 
  • #23
1Truthseeker said:
Does the QM system pop into existence or not? That is the question. Its properties are another thing entirely. I don't mind that. Its existence, its being or not being, is the real concern. Can you agree that an electron exists 100% of the time, or are you saying that the electron doesn't exist until we make measurement?

My view is that it does not pop into existence, it was always there, but that its fundamental nature is of superposition; that is the way that type of thing exists in reality. Which is to paraphrase Feynman, like nothing we have ever seen before, so don't try and relate it to your everyday experience.

I totally accept an electron probability density. What I do not accept is that the electron doesn't exist before collapse. That to me seems an outrageous misinterpretation of the theories. QED reasons that the thing exists. How does CI differ from QED?

Agree/disagree? Why

I could go either way, but I think it's largely a question of semantics. I believe Bohr would tell you that the electron always exists. I'm not so sure. I was careful to concentrate on properties to avoid this question actually :wink:.

Here's a question for you: to what extent is the existence of an object defined by its properties? Can a causally ineffective property-less object be said to exist? In my mind existence is a classical concept having to do with persistence. If an object has persistent properties - position, momentum, charge, etc - then it exists. The problem in QM is that there is no such thing as a persistent objective property*. Is it fair to call something an electron if it doesn't have such a thing as charge? What if it doesn't have such things as position or momentum?

We either have to radically redefine what it is to be an electron, or we have to redefine what it is for something to exist. Bohr's program was to keep the language as classical as possible, redefining the classical concepts in a quantum framework when necessary. If properties, which define how we conceive of objects, don't have independent persistence, then existence must not require the persistence of properties. Problem solved. Existence redefined.

Bohr, actually, doesn't allow you to talk about independent existence, since all phenomena require a complete description of the context of their measurement or interaction to be meaningful. Electrons exist for him, but nothing is independent and objective. So in the normal sense of existence, which is synonymous with objective persistence, I don't think we can fairly say that anything exists. Existence, in this sense, becomes an obsolete classical notion.

*Neglecting contemporary Bohmian interpretations for the moment, in which a single causally ineffectual property (location) is granted persistent existence.
 
  • #24
kote said:
I think I missed where this was stated in jambaugh's comment. Mind giving some clarification jambaugh? Or mind explaining where you see this Wavejumper?


Anyone claiming that there is no signalling between entangled pairs of particles assumes those particles have 'hidden', well-defined observables, which is what classical intuition would lead you to believe. Here in post 16 he states:

jambaugh said:
The nature of entanglement. Specifically that there isn't necessarily some unobservable causal connection between actions (e.g. acts of measurement) on entangled pairs.

If there is no information transfer, then realism is wrong. If there is information transfer, then it's a non-local effect. And if both locality and realism are true, then entanglement is wrong, yet entanglement is established beyond any reasonable doubt. Local hidden variable theories are what he appears to allude to, and my impression that he was stating that particles always had definite unobserved properties was further amplified by:

jambaugh said:
Remember you can reverse this "representation of ignorance" to "representation of limited knowledge".




kote said:
I'm not an expert on what would be considered modern day orthodox CI, but I thought jambaugh's description was a very good retelling of what were essentially Bohr's views, including the statistical knowledge-only meaning of the wave function, the non-reality of collapse, the necessity of the expression of reality in classical terms, and minimal speculation on anything else lying beneath. I'm missing the traces of Bohm here.

By the way, http://plato.stanford.edu/entries/qm-copenhagen/ has a very accurate recap of CI and Bohr in particular for anyone interested in more.


OK, maybe I am missing the point and maybe you could clarify - how would you propose that we keep both realism and locality (no FTL signalling)? Or do you know of a way around Bell's theorem and the disproof of LHV theories? Or did he imply realism was wrong and I missed his point by a large margin?
 
  • #25
1Truthseeker said:
Why do you think this? To me this is post hoc ergo propter hoc; to assume the UP specifies that objects are inherently undetermined until hit by photons, electrons, or some other combination of particles that we use to measure them is a logical fallacy. While I agree that the UP specifies that any observation necessitates a change in that system, and we therefore cannot know momentum and position simultaneously, I do not agree that it necessitates that reality is undetermined until interacted with, whether intentionally (observation) or not.


That is a basic tenet of QM - it is a non-deterministic theory and does not predict the outcome of any measurement with certainty. Instead, it tells us what the probabilities of the outcomes are. This leads to the situation where measurements of a certain property done on two apparently identical systems can give different answers.



Consider this thought experiment:

Define the Universe to have one quantum system, a hydrogen atom, H. Let us assume that the observer is disregarded and that we can somehow, however implausible, know about H. We are somehow privy to the absolute, which includes that which does and does not exist.

By your reasoning:
- H would never emerge into existence because there are no other quantum systems other than H. (Causality is violated--reality itself, not to be confused with determinism, is somehow defined by other portions of reality? Hmm. I don't know if I can agree with this. This is like saying nothing would exist if it weren't for something else interacting with it. Please explain if you feel this is true. And what happens if a particle is utterly isolated, does it wisp out of existence? Wouldn't that violate conservation?)


I have no objections to this. If realism is violated, we can no longer maintain the first law of thermodynamics. But this is going too far even for a philosophy discussion.



By my reasoning:
- H is there, but could change if other quantum systems were introduced to the universe. (Causality is preserved.)


H is a wave when unobserved or in a coherent state. It isn't the kind of "object" that most people imagine, even if it were possible that H had properties that were not context-relative.


A third reasoning (undecided):

- H isn't there yet, but exists in some quasi-fundamental plane/state in which an interaction (observation, etc) with another quantum system will cause it to emerge into the Universe, but based deterministically on the interactions with said system(s). (Causality is preserved.)

Looking forward to your thoughts on this.

-Truth

This is somewhat close to decoherence, but decoherence gives only an appearance of classical objects. A kind of optical illusion of same-phase sine waves that add on top of each other to produce the illusion of a classical world. If this is how the world really is, then I am sorry I ever had anything to do with physics, philosophy and the world "out there".
 
  • #26
WaveJumper said:
Anyone claiming that there is no signalling between entangled pairs of particles assumes those particles have 'hidden', well-defined observables, which is what classical intuition would lead you to believe. Here in post 16 he states:

I can't speak for jambaugh, but I take his statement to be a denial of nonlocal signaling, or realism (in the EPR sense). I don't propose that we keep realism in this sense if we go with CI/Bohr/jambaugh.

One can still have a lack of knowledge about a system without implying hidden variables. The knowledge you are missing could simply be what the outcome of your measurement will be. It doesn't have to be ignorance about some deterministic intrinsic properties that the system currently has and which will cause it to have a certain outcome.

For Bohr, there were no hidden variables, and there was also no signaling. The fact that you made a previous measurement on an entangled photon was just part of the context of your subsequent measurement. Without context it was meaningless for him to talk about any properties existing. By establishing context you aren't signaling your photon to exhibit a certain polarization any more than an observer in special relativity is signaling a rocket to exhibit a certain velocity.
 
  • #27
kote said:
Here's a question for you: to what extent is the existence of an object defined by its properties?

That depends on whether a property is defining or declaratory; in computer science we distinguish the two all the time when dealing with the creation of variables and procedures. The reason for this distinction is that we can declare something, which is to say that we state it is like this or that and has certain properties. If we define something, we have created its exact nature: how it acts, its very structure, and its behaviors. You can change the declaration of something without changing its internal workings.

In reality, we do not create an electron through measurement; we augment it, so we define nothing; we can only declare. This is why I reason that we describe the fundamental point-particles only as abstractions. We declare them like functions and variables. They make sense, and they work, but this is not the same as their definition--their true and exacting nature is unknown to us; only the manifestation and statistical analysis of their behaviors is what we declare to be reasonably known.
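The declaration/definition split being invoked here can be sketched in code. Below is a minimal Python illustration (the class names and the toy scatter rule are invented for this example, nothing from physics): an abstract base class *declares* that a property and a behavior exist, while a concrete subclass *defines* the internal workings.

```python
from abc import ABC, abstractmethod

class Electron(ABC):
    """A *declaration*: asserts that a charge property and a
    scatter behavior exist, without fixing how they work."""
    charge: float

    @abstractmethod
    def scatter(self, angle: float) -> float:
        ...

class ModelledElectron(Electron):
    """A *definition*: supplies the exact internal workings."""
    charge = -1.0

    def scatter(self, angle: float) -> float:
        # Toy rule standing in for real dynamics -- purely illustrative.
        return abs(angle) % 3.14159

e = ModelledElectron()
print(e.charge)        # -1.0, the declared (and now defined) property
print(e.scatter(4.0))
```

Code written entirely against the declaration (`Electron`) keeps working if the definition is swapped out, which is the asymmetry the post is pointing at.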

I would say that anything that exists is an ordered system. We can all agree to that. Systems theory is very useful in this context. An electron, as a system, cannot pop in and out of reality if it did not first exist somewhere, at some place. Otherwise, what one is saying, if one does not accept this, is that we can create electrons from nothing based on the way in which we interact with the vacuum.

kote said:
Can a causally ineffective property-less object be said to exist?

Another awesome question. Yes. There are many systems beyond our perception that exist and have properties we are only beginning to understand. To say otherwise is to say that we understand everything. Take, for example, the core of the Earth. We do not fully understand it; we know only limited properties of it, yet it surely does exist and can and does have an impact on our lives.

But you would reason that a property is not just a semantic term, but an informational, ordered dimensional quality of a system with order. One could represent a property/declaration as a distinct dimension of truth about something that is lower than maximum entropy. If this is the case, then without at least one dimension of truth, it would be untrue and at maximum entropy.

But the problem with confusing entropy with declarations/properties is that we are trying to say something doesn't exist because we cannot access something about it. If we accept that electrons and other fundamentals spontaneously emerge into our Universe, then we must accept a point of origin beyond our Universe, OR accept that it is a fundamental axiom of our Universe that entropy spontaneously inverts into lower entropy at an event horizon between existence and non-existence, triggered by interaction or observation between other systems. This is the part of CI that I cannot agree with. I do not argue for hidden anything, only that our properties may be incomplete, or that we are not truly at the fundamental level yet.

kote said:
In my mind existence is a classical concept having to do with persistence. If an object has persistent properties - position, momentum, charge, etc - then it exists.

We should work with something more fundamental than that. Existence isn't qualitatively reached under certain conditions; it holds notwithstanding intrinsic properties, resting on whether or not it satisfies a single proposition: does it perturb a field of maximum entropy in a system? If so, then that system is said to exist. Can we agree to that? If for no other reason than to be on the same page with terms.
 
  • #28
WaveJumper said:
H is a wave when unobserved or in a coherent state. It isn't the kind of "object" that most people imagine, even if it were possible that H had properties that were not context-relative.

H is whatever it is. I specifically didn't state what H was, other than a hydrogen atom, to avoid a potential conflict. It doesn't matter what H is, so long as it exists. H is information, and it is true in that regard; it is a system of lower than maximum entropy. That is agreeable.

I could debate with you on the wave statement, but I won't. I want to focus instead on whether or not something existed before decoherence.
 
  • #29
1Truthseeker said:
We should work with something more fundamental than that. Existence isn't qualitatively reached under certain conditions; it holds notwithstanding intrinsic properties, resting on whether or not it satisfies a single proposition: does it perturb a field of maximum entropy in a system? If so, then that system is said to exist. Can we agree to that? If for no other reason than to be on the same page with terms.

If it perturbs a field of maximum entropy (and I'm not exactly sure what you mean by that) then by definition it has at least one property. Well two, because it is found at a place, at a locale, as well as having an interaction with its context, its global environment.

Then in the classical analysis, if it has the property of location, it actually also has the minimal properties of motion (or lack of), shape (or lack of) and extension (or lack of). I add lack of, because even the absence of these properties would be a positive fact within the global context.

Of course, you may be thinking of your field of maxent as some kind of virtual computational space - one itself lacking cohesive structure, properties of any kind. Which would seem both nonphysical and also wiring in the answer you require. Which is why I'm interested but confused by your comments.

Generally speaking, I would be with Kote in believing nothing brutely exists. Some things are events. Others may be persistent. But their "existence" is always contextual.
 
  • #30
apeiron said:
If it perturbs a field of maximum entropy (and I'm not exactly sure what you mean by that) then by definition it has at least one property. Well two, because it is found at a place, at a locale, as well as having an interaction with its context, its global environment.

To make clear what I mean by a field of maximum entropy, I will define it mathematically: for any dimension of an n-dimensional field of maximum entropy, let there be a hyperplane that extends along that dimension infinitely in every direction. Time is not treated as a special case, but as a dimension. Over the domain of all values, a function of maximum entropy has a range equal to zero. Perturbations to the field result in deformations of equilibrium and describe lower entropy. If zero is the baseline, one can then graph order from disorder in any direction away from zero. The only absolutes are negative infinity, positive infinity, and zero. For simplification we will use natural numbers to describe the range. Dimensions do not necessarily correlate to physical locations in time or space, as n-dimensional maximum-entropy fields may not correlate to a physical system, but to information or otherwise.

Your conclusion that I am basing it on a computational field is very accurate, only even more fundamental than that, as a computational field strictly expresses a notion of computer science.

In plain sense, I am describing a system in equilibrium.
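For a discrete analogue of this equilibrium picture, Shannon entropy gives one concrete handle (a rough sketch, not the poster's formal construction): the uniform distribution plays the role of the "field of maximum entropy", and any perturbation away from uniformity lowers the entropy, i.e. registers as order.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

n = 8
uniform = [1.0 / n] * n        # maximum entropy: the equilibrium baseline
perturbed = [0.3] + [0.1] * 7  # a local deformation of the equilibrium

print(shannon_entropy(uniform))    # log2(8) = 3.0 bits, the maximum for 8 states
print(shannon_entropy(perturbed))  # strictly less than 3.0: the perturbation is "order"
```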

apeiron said:
Of course, you may be thinking of your field of maxent as some kind of virtual computational space - one itself lacking cohesive structure, properties of any kind. Which would seem both nonphysical and also wiring in the answer you require. Which is why I'm interested but confused by your comments.

This is correct.

apeiron said:
Generally speaking, I would be with Kote in believing nothing brutely exists. Some things are events. Others may be persistent. But their "existence" is always contextual.


I will not decry your views; however, I will say that it is possible to absolutely and objectively quantify something (given the technological means) as existing or not, if it can be quantified as a perturbation of a field of maximum entropy. Does the thing in question, A, have maximum entropy? No. Then it exists. Where or why is not important at this step in the discussion, as constructing the phenomenological experience of reality is a much higher-order abstraction.

This is really getting good now. Truly great comments, Apeiron. In fact I had seen your comments in an earlier thread and had meant to contact you as we have similar philosophical interests.

By the way, did you just now coin the term 'maxent'? I like that very much as a brief summary of maximum entropy, and I would like to use that term henceforth.

You are, indeed, correct that I am expressing maxent as a non-descriptive computational field, but one that also applies as a foundational substrate for any conceivable reality and its associated laws of physics. Distortions to maxent necessitate order, otherwise known as information or negative entropy.

Thus, we can use this model to presuppose fundamental point-particles and help us argue the fine points of any interpretation of QM. Let us work on the central issue of whether or not fundamental particles in a coherent state exist somewhere. I reason that they must, or they violate the most fundamental logical precepts of truth: spontaneous deviation from maxent is work, and work requires energy. Neither the standard model nor any of the wildest dreams of theoretical physics can ignore the reasoning of the laws of conservation of matter and energy.

For two QM systems, regardless of being fermion, quark, or boson, what I have heard from many proponents of the present mainstream is that they reason that a QM system spontaneously has order, and thus is detectable and knowable, only after decoherence. And that while coherent, the system is said to be undetermined, and not yet manifest, but (un-)exists as a potential, a probability, which allows for the strangeness we observe.

I argue that the strangeness is not the result of an (un-)reality, but of a possible misinterpretation of the available data. I say this with very great caution, as I respect the work of those who came before me with the greatest approbation, but I must concede that if anything violates the common sense of entropy, we are likely in error.

There may be an alternative explanation that isn't hidden variables, but another fundamental layer to reality we have yet to uncover, which isn't necessarily another dimension or another reality, but perhaps just a deeper truth.

Thoughts?
 
  • #31
WaveJumper said:
I don't agree with the assumption that unmeasured states have definite observables that are inaccessible to our experiments(i.e. limited knowledge of the system). That seems to be the dividing line between our viewpoints.
No, you misunderstand. The limit to knowledge may be an absence of knowledge about existent information (as with classical ignorance), or it may be a fundamental limit, in that a system is limited to encoding only so many bits of information. In the quantum case, when we have gone over to the praxic (pragmatic) operational definition, we should be properly agnostic about the nature of those limits (except where we can, e.g., define entropy). We should specifically not positively assume existent values apart from actualized measurements, but neither should we rule this out completely.

Think of that last part as similar to saying that while science has no business asserting the existence of God, it by the very same arguments has no business asserting his non-existence. Science should be atheistic, not anti-theistic. Similarly with ontological reality... quantum theory should be agnostic with regard to the reality of unmeasured values.
If you didn't assume that particles had definite properties at all times that lie blocked from our view, you'd be hard pressed to come up with a classical-like example. In my view, a polarised photon has neither a horizontal nor a vertical polarisation until you do a measurement. I find it meaningless to talk about unmeasured events. It appears you are advocating the Bohmian approach.

I can't decide if my confusion stems from you mixing Bohmian mechanics with QM or if I misunderstood your point.

No you misunderstand my position. I fully adhere to Orthodox Copenhagen. The wave function is a representation of our knowledge not a representation of reality. The definition of a value for an observable is that it is an actualized measured value and hence is undefined in the absence of an act of measurement.

Remember that in reference to an object, we think of it having a property, such as momentum, and a value for that property, such as 3 kg·m/s. A quantum particle at all times has the definite property of momentum, in that at any time it is meaningful to measure the momentum (and, e.g., to write the momentum operator down), but that property doesn't have a defined value until and unless we actualize that measurement. This is simply saying that the meaning of "having a value" is that the measurement has been actualized (or the equivalent w.r.t. other measurements and specific dynamic evolution).

We are in agreement there but for a bit of semantics in distinguishing "property" and "value of property".
 
  • #32
1Truthseeker said:
This is really getting good now. Truly great comments, Apeiron. In fact I had seen your comments in an earlier thread and had meant to contact you as we have similar philosophical interests.

Agreed :-p. Welcome to the forums!

Regarding your basic field, there's one major issue I see. Whatever it is in your field that is upholding the existence of an electron, even when it is not exhibiting any observable properties, is inherently beyond the limits of empirical science. In other words, if it doesn't correspond to observables, we can't ever know anything about it. It is and will always remain pure speculation.

Philosophers have known, since Plato and his cave and shadows, that we don't actually know anything about the source of our perceptions. We can only model the observables we perceive.

There was an operating assumption in classical physics, due to its determinism, that our models did in fact truly represent reality. From a philosophical standpoint, this was an unjustified assumption. The probabilistic nature of QM forces us to reevaluate this assumption.

The Bohmian interpretation probably comes closest to the view you are talking about, one that insists on persistence. I agree with Heisenberg's comment on this interpretation, which is very relevant to your maxent field, in his 1958 Physics and Philosophy: The Revolution in Modern Science:
What does it mean to call waves in configuration space “real”? This space is a very abstract space. The word “real” goes back to the Latin word “res,” which means “thing;” but things are in the ordinary three-dimensional space, not in an abstract configuration space. ... Bohm considers himself able to assert: “We do not need to abandon the precise, rational, and objective description of individual systems in the realm of quantum theory.” This objective description, however, reveals itself as a kind of “ideological superstructure,” which has little to do with immediate physical reality.
But even Bohm does not read too much into his "Ontological Interpretation." Bohm, in his (1986?) A New Theory of the Relationship of Mind and Matter, writes:
The deeper reality is something beyond either mind or matter, both of which are only aspects that serve as terms for analysis
Bohr, of course, also makes this explicit in Atomic Theory and the Description of Nature (1934):
We meet here in a new light the old truth that in our description of nature the purpose is not to disclose the real essence of the phenomena but only to track down, so far as it is possible, relations between the manifold aspects of our experience.
Of course there are also those with opposite views, but the fact is that even if you come up with a deterministic theory to replace QM, you will never have any evidence whatsoever of what is causing us to see the patterns we do. This is one belief even Bohm and Bohr shared and made explicit. The structure of the matrix, deeper reality, or the manifold, is hidden from us :wink:.
 
  • #33
1Truthseeker said:
To make clear what I mean by a field of maximum entropy, I will define it mathematically: for any dimension of an n-dimensional field of maximum entropy, let there be a hyperplane that extends along that dimension infinitely in every direction. Time is not treated as a special case, but as a dimension...In plain sense, I am describing a system in equilibrium.

Aha. That's my kind of thinking.

But where I would differ is in trying to be even more general here. I think you are imagining a static realm - time being just one of the n-dimensions. I would assume a dynamic view in which things expand in some equilibrium fashion. To prevent expansion, there would have to be some further constraints applied.

So a truly max-ent state (that is a fairly common phrase, I think) would involve maximum disorder in time as well as space. Which is then going to be associated with power-law statistics (rather than gaussian, as with an ideal gas trapped statically in a container, for instance). So we would be talking Renyi, Tsallis and others taking a non-extensive approach to entropy. Perhaps 1/f noise in the way events occur chaotically over a time dimension.

I'm not sure how this would map to your hyperplane notion - though it might turn it conformal perhaps! I don't know.
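The non-extensive measures mentioned above can be written down directly. Here is a quick sketch using the standard textbook forms (the example distribution is arbitrary): Shannon, Rényi and Tsallis entropies, where the latter two recover Shannon in the limit alpha, q -> 1.

```python
import math

def shannon(p):
    """Shannon entropy (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha != 1)."""
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

def tsallis(p, q):
    """Tsallis (non-extensive) entropy of index q (q != 1)."""
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]
print(shannon(p))
print(renyi(p, 2))       # collision entropy: -log(sum p_i^2)
print(tsallis(p, 2))     # 1 - sum p_i^2
print(renyi(p, 1.0001))  # approaches shannon(p) as alpha -> 1
```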

1Truthseeker said:
I will not decry your views; however, I will say that it is possible to absolutely and objectively quantify something (given the technological means) as existing or not, if it can be quantified as a perturbation of a field of maximum entropy.

I agree, I think. Max-ent is a limit (whether we take the static or dynamic case). And it is a naturally self-organising concept. So it is naturally persistent - equilibrium is by definition what is most likely to exist, if anything exists. Then local deviations from equilibrium will be distinctive as events (if brief departures) and objects (if enduring departures).

We can then yoke this to Prigogine style dissipative structure thinking. Negentropy as you say. This accounts for locally persisting structure such as solitons and quasiparticles - the order that arises on the back of disorder (for a time).

1Truthseeker said:
Thus, we can use this model to presuppose fundamental point-particles and help us argue the fine points of any interpretation of QM. Let us work on the central issue of whether or not fundamental particles in a coherent state exist somewhere. I reason that they must, or they violate the most fundamental logical precepts of truth: spontaneous deviation from maxent is work, and work requires energy. Neither the standard model nor any of the wildest dreams of theoretical physics can ignore the reasoning of the laws of conservation of matter and energy.

This is then a critical problem you get to with the condensed matter approach. Why do electrons and protons persist probably "forever"?

One answer, from the open systems approach, is that they would have to be supplied with a continuous throughput of sustaining energy in some way, like ordinary solitons.

But another possibility is - I think - that they are locked into the fabric of things because spacetime expands. So they are the product of a different kind of open system. Instead of being a static system kept alive by energy throughput, they are knots in the fabric that cannot fall apart because the fabric keeps expanding (and cooling).

This may be quite wrong as the explanation, but I think you are highlighting a core problem with taking a condensate approach (as Wilczek calls it).

1Truthseeker said:
For two QM systems, regardless of being fermion, quark, or boson, what I have heard from many proponents of the present mainstream is that they reason that a QM system spontaneously has order, and thus is detectable and knowable, only after decoherence. And that while coherent, the system is said to be undetermined, and not yet manifest, but (un-)exists as a potential, a probability, which allows for the strangeness we observe.

I argue that the strangeness is not the result of an (un-)reality, but of a possible misinterpretation of the available data. I say this with very great caution, as I respect the work of those who came before me with the greatest approbation, but I must concede that if anything violates the common sense of entropy, we are likely in error.

There may be an alternative explanation that isn't hidden variables, but another fundamental layer to reality we have yet to uncover, which isn't necessarily another dimension or another reality, but perhaps just a deeper truth.

Thoughts?

Here we get back to the OP. And now I think we need to accept the reality of QM non-locality. The arguments over twin slits and Bell's inequality have persuaded me in the past, and I've not yet heard anything that allows us to dismiss them as proof that non-locality is a fact. Of course, all is modelling. But I would personally still put non-locality as something existing "out there".

What this would mean is that a condensed matter/soliton approach to problem of the persistence of particles would need to be married to a non-local interpretation of the existence of spacetime generally.

So the fabric would arise (as an equilibrium causal realm with strongly local properties) from the deeper reality of a local~non-local decoherence machinery. Particles would then be knots in that equilibrium fabric.

I think one of the things no one talks about is how limited non-locality appears to be. To me, it seems that non-locality is all about the freedoms that get suppressed by the self-organisation of spacetime. Get rid of all non-locality (dissipate it!) and things become cleanly local. Only the faintest residue of weirdness is left (such as Bell's inequalities).

This is a thermo or systems view of QM. An extension of the trend already started with decoherence approaches.

I guess this would be the next revolution. Mind science and life science have already begun a rapid reduction to thermo principles over the past 30 years. You have Prigogine's dissipative structures, Brier's biosemiotics, Salthe's specification hierarchy, Kay/Schneider's entropy degraders, Swenson's maximum entropy principle - a whole bunch of ways of saying much the same thing, of reducing bios to a thermodynamic basis.

And the same would seem to be occurring in physics (if people would let it). We have Laughlin, Wen and others trying to bring in condensed matter approaches to particle physics. There is all the black hole, holographic horizon stuff in GR. There is decoherence in QM.

This is how I am seeing the bigger picture anyway.
 
  • #34
WaveJumper said:
Anyone claiming that there is no signalling between entangled pairs of particles assumes those particles have 'hidden', well-defined observables, which is what a classical intuition would lead you to believe.
Nope. There is a "gripping hand" alternative. When you fully flush the habitual implicit realism we all grow up with and carefully parse the EPR experiment there is no need to assume hidden variables. Said assumption is again invoking realism and the necessity that outcomes of measurements are determinations of some aspect of an underlying state of reality.

Quantum Mechanics in the Orthodox CI is simply a series of statements about the probabilistic correlation between system-device interactions. Given the system interacted with device A in a certain way, it will have a calculable probability of interacting with device B in a certain way. Causal propagation, entanglement, and all the rest get defined in terms of such statements. There is no need for, nor any contradiction in, remaining agnostic about an underlying reality. If one asks about hidden variables then one has invoked a Buddhist koan. No definite answer is correct. One should rather stand mute.

(Well we don't have to be that Zen about it. One should in fact state that within QM such questions are as ill posed as asking the value of an observable in the absence of an actual measurement.)

To be fair one should be just as agnostic about "no signaling" excepting that one can reject it (for classical relativistic reasons) with impunity.

If there is no information transfer, then realism is wrong.
Absolute Realism is what you give up in QM just as absolute time is what you give up in SR.
If you want, you can call it "relative realism" but that's probably not helpful.
If there is information transfer, then it's a non-local effect. If both locality and realism are true, then entanglement is wrong, yet entanglement is established beyond any reasonable doubt. Local hidden variable theories are what he appears to allude to, and my impression that he was stating that particles always had definite unobserved properties was further amplified by:

Or do you know of a way around Bell's theorem and the disproof of LHV? Or did he imply realism was wrong and I missed his point by a large margin?
The locality business is a "red herring" in the Bell theorem derivation. Bell's inequality is basically the triangle inequality of the "metric" defined for subsets of a state manifold by taking a measure of the symmetric difference (xor).

The main RAA hypothesis of Bell's theorem is that there exists a set of realities and that probabilities for outcomes of measurements derive from a probability measure over that set. The locality business is just the easiest of many ways you can assert the measurements of two observables are causally independent i.e. commute. And yet you can, according to QM, prepare systems so that the probabilities violate the respective Bell inequality.

Forget about locality issues entirely. One should either reject realism or one must acknowledge that there is no such thing as two commuting observables (beyond trivial cases). Note signaling invalidates the assumed commutativity.

That two observables commute invalidates the "signaling" loophole as the product order IS the causal order in the event of any causal effect (hidden or no) on the outcomes. That there are such commuting observables is predicted by QM so you can't assume signaling when commuting quantum observables are strongly correlated (entangled) without invalidating QM.
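jambaugh's point about Bell's theorem can be made concrete with a brute-force check. This is a sketch using the standard CHSH form, not his promised rigorous write-up: every deterministic local assignment of ±1 outcomes respects the bound |S| <= 2, while a quantum correlation of the form cos(a − b), as arises for suitably entangled pairs, reaches 2√2 at the usual angle choices.

```python
import itertools
import math

# Local-realistic check: for every deterministic assignment of outcomes
# A(a), A(a'), B(b), B(b') in {-1, +1}, the CHSH combination
#   S = A(a)B(b) + A(a)B(b') + A(a')B(b) - A(a')B(b')
# never exceeds 2 in absolute value.
lhv_bound = max(abs(A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2)
                for A1, A2, B1, B2 in itertools.product([-1, 1], repeat=4))
print(lhv_bound)  # 2: the Bell/CHSH bound for any local hidden-variable model

# A quantum correlation of the form E(a, b) = cos(a - b) violates it
# at the standard angle choices:
def E(a, b):
    return math.cos(a - b)

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(S)  # 2*sqrt(2), the Tsirelson bound, beyond any local model
```

Since every deterministic assignment satisfies the bound, so does any probabilistic mixture of them; that is the content of the inequality the quantum correlations violate.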

Let me try to write this up in some rigorous detail and I'll post it.
 
  • #35
jambaugh said:
Forget about locality issues entirely. One should either reject realism or one must acknowledge that there is no such thing as two commuting observables (beyond trivial cases). Note signaling invalidates the assumed commutativity.

I like your point about rejecting realism and accepting "relative realism". I think the analogy with GR is actually right.

But that then would mean we could hope for a new level of modelling on that basis. We don't have to be just agnostic about "what is really going on".

The question is then relative to what? My suggestion, made rather too frequently perhaps, is that we need a developmental notion like vague to crisp. So we can have relatively crisply developed realities, and then very crisply developed realities.

The current view of QM is very black and white. We want to insist there is either nothing or everything. Either we have QM indeterminacy or classical definiteness.

But what if instead all reality was a gradient of decoherence? So a twin slit apparatus at least constrains events relatively to a realistic path. But the final bit of development has to be added late in the day. A wavefunction always pins down quite a lot, even if it does not pin down everything. So a wavefunction would be a relatively developed state of realism in this view. And even after the wavefunction is collapsed, it would be more a case of there now being a train of events that is asymptotically close to being definite.

This is in the spirit of GR, where mass can approach lightspeed yet never actually reach it. Thus it remains relative.

jambaugh said:
Let me try to write this up in some rigorous detail and I'll post it.

It would be good if you could also offer some views on transactional interpretations. I don't believe in a literal back and forth Bohmian wave as per Cramer. That is just the clunky method of making the calculations.

But the basic idea that whole events are made in ways that cross time (and space, given Bell's inequality) is the future for "realistic" QM for me. This would make non-locality real, and locality emergent I think.

And hey, Cramer even believes the transactional approach is testable. So it is science as well!
 
