I won't debate the wavefunction collapse

  • #101
Vanesch said:
After the fact, there's no point in assigning probabilities to outcomes. Napoleon lost, with 100% certainty.
If MWI is true, it is possible to define a probability measure for this outcome! I.e. the ratio of the number of universes he lost in to the number he won in.

Seriously, there seems to be a general consensus in this thread that there's a limit to what we can understand.

Fredrik:
Nature doesn't "think" - it just seems to take the shortest path, or most likely path - as judged from the subjective viewpoint - but I think this will, as the complexity increases, give the appearance of "intelligence". But it has IMO nothing to do with anything "human", divine or anything of the sort. It's still fundamental reality.
Sounds OK. But we could get side-tracked trying to define 'intelligence' (a human-centred concept).
 
  • #102
Mentz114 said:
Yep. I would call it a psychological construct and I don't grant it physical existence outside our heads.

I agree that probability as per the normal "probability theory" is an idealisation, and the main idealisation (= the problem) lies IMO in two main points:

1) There is no finite measurement that can determine a probability. An infinite measurement series with infinite data storage seems unrealistic.

2) The other quite serious problem is that the event space itself is not easily deduced from observations. So not only is there an uncertainty in the probability (the state) but also an uncertainty in the probability space itself (the space of states).

Again, considering a larger space of possible "spaces of states" solves nothing in principle; it just makes another iteration using the same logic, and it could go on forever unless we have another principle that prevents this. So point 2 seems to suggest that reality is somehow an infinitely dimensional, infinitely complex thing (or at least "infinite" to the extent of the entire universe). This seems to make it impossible to make models, because the models would be infinitely complex and thus nonsensical. But the stabilizing factor is that the bound of relational complexity prevents this. A given observer can, I think, only represent a finite amount of information, and we need frameworks that can handle this.

So in this sense, I think even the probability spaces we think of can be observable, but the observational resolution is limited by the observer himself, unless the observer keeps growing and doesn't release relations/storage capacity. Because even though we have witnessed our past, the memory is bound to dissipate. We can't retain all the information we have ever consumed - it makes no sense. So another "decision" on what to discard needs to be made (minimum loss).

To me the challenge is to understand how effective probability spaces and effectively stable structures are emergent in this description, and also how the effective dynamics emerges from this picture. I am sufficiently convinced it can be done to try it, but it seems hard.

/Fredrik
 
  • #103
Fra said:
For example: in normal QM, the probability space itself is assumed to be objective and known with certainty - this alone does not quite, IMO, comply with the basic idea that we should deal with information at hand, and that information is always induced.
The state space is algebraically derivable from the relationships between different kinds of measurements. This is one point of the C*-algebra formalism; once we write down the measurement algebra as a C*-algebra, we can select a unitary representation, which gives us a Hilbert space of states. Or, we can study representation theory so as to catalog all possible unitary representations. And this approach covers all cases -- by the GNS theorem, any state can be represented by a vector in some unitary representation of our C*-algebra.
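To make that concrete, here is a minimal finite-dimensional sketch of the GNS idea in Python; the 2x2 matrix algebra and the particular density matrix are just illustrative choices of mine, not anything mandated by the formalism above.

```python
import numpy as np

# Sketch: the *-algebra of 2x2 complex matrices (spanned by the identity and
# the Pauli matrices) together with a state omega(A) = Tr(rho A), where rho is
# an arbitrarily chosen density matrix.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]                     # a linear basis of the algebra

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # an assumed, valid state
omega = lambda A: np.trace(rho @ A)

# GNS inner product on the algebra itself: <A, B> = omega(A^dagger B).
gram = np.array([[omega(A.conj().T @ B) for B in basis] for A in basis])

# Positivity of the state makes this Gram matrix positive semidefinite, so the
# algebra becomes a (pre-)Hilbert space carrying the representation
# pi(A): B -> A @ B, with the identity as the cyclic vector.
print("Gram eigenvalues (>= 0):", np.round(np.linalg.eigvalsh(gram), 6))

# Expectation values are recovered as <1, pi(A) 1> = omega(A).
for name, A in zip(["1", "sx", "sy", "sz"], basis):
    print(name, " omega(A) =", np.round(omega(A).real, 4),
          "  <1, A 1> =", np.round(omega(I2.conj().T @ (A @ I2)).real, 4))
```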


If you like to think there exists an objective reality, then I would like to see a foolproof formula that guarantees that any two arbitrary observers will always see the same reality when consuming different subsets of the information flow (note that two observers typically can't make the SAME observation), and an explanation of how the actual comparison takes place.
We're doing science, not formal logic! A foolproof formula is an unreasonable demand; what we do have is empirical evidence. Not only the direct kind: it is also a prediction of quantum mechanics, which itself has mounds of experimental evidence.


Also, what exactly is a probability - in terms of something real, measurable, and retainable by a real observer?
If, when repeating an experiment many times, the proportion of times that a given outcome is seen converges almost surely to a particular ratio, then that ratio is the probability of that outcome in that experiment.
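As a quick illustration of that frequentist reading (and of the earlier point that a finite run only pins the probability down to within a spread of order 1/sqrt(n)), here is a small simulation; the "true" probability used is an arbitrary choice of mine.

```python
import numpy as np

# The relative frequency of an outcome over repeated trials settles toward the
# underlying probability, while any finite run still carries a statistical
# spread of order sqrt(p(1-p)/n).
rng = np.random.default_rng(0)
p_true = 0.37                                  # assumed "true" probability

for n in (10, 100, 10_000, 1_000_000):
    freq = (rng.random(n) < p_true).mean()     # relative frequency in n trials
    spread = np.sqrt(p_true * (1 - p_true) / n)
    print(f"n = {n:>9}: relative frequency = {freq:.4f}   expected spread ~ {spread:.4f}")
```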
 
  • #104
We need to be careful when we talk about "probabilities" here. There is a significant difference between classical probability theory and the probabilistic interpretation of QM; they are not mathematically equivalent (which has been known for a long time; von Neumann even proved it around 1930). The reason is essentially that there are non-commuting operators, which is why we use pseudo-distributions in QM such as the Wigner distribution; the latter is the closest thing you can get to a classical distribution but has some very "non-classical" properties, e.g. it can be negative.
Hence, if we assume that QM is a more "fundamental" theory than classical physics, ordinary probability theory can't be used.
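For a concrete instance of that negativity, here is a small sketch (my own choice of state and of units, hbar = m = omega = 1) evaluating the Wigner function of the first excited harmonic-oscillator state, which is negative at the origin.

```python
import numpy as np

def wigner_n1(x, p):
    """Closed form for the n = 1 oscillator eigenstate (hbar = m = omega = 1)."""
    r2 = x**2 + p**2
    return (2.0 * r2 - 1.0) * np.exp(-r2) / np.pi

print("W_1(0, 0) =", wigner_n1(0.0, 0.0))          # = -1/pi < 0: not a classical density

# Cross-check at the origin from the defining integral
#   W(x, p) = (1/pi) * Integral dy  psi*(x + y) psi(x - y) exp(2 i p y)
psi1 = lambda x: (1.0 / np.pi) ** 0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)
y = np.linspace(-10.0, 10.0, 200_001)
integrand = np.conj(psi1(0.0 + y)) * psi1(0.0 - y) * np.exp(2j * 0.0 * y)
print("numerical check =", ((integrand * (y[1] - y[0])).sum() / np.pi).real)
# The marginals, by contrast, are honest non-negative probability densities.
```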
 
  • #105
Mentz114 said:
Vanesch said:
If MWI is true, it is possible to define a probability measure for this outcome! I.e. the ratio of the number of universes he lost in to the number he won in.

Or better, the ratio of the summed squared Hilbert norms of the universes he won in. There's no a priori need to introduce a uniform probability distribution over "universes"; or, in other words, there's no need to assign equal probabilities to universes with different Hilbert norms.
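A toy calculation may help here (the state and the frequency band are my own choices): weighting branches by squared Hilbert norm, rather than counting them uniformly, is what concentrates the weight on branches whose relative frequencies match the Born value.

```python
from math import comb

# Measure a qubit a|0> + b|1> with |b|^2 = 0.8 (assumed) N times; each outcome
# string is one "branch".  Compare branch counting with squared-norm weighting
# for branches whose relative frequency of "1" is near 0.8.
N = 100
p = 0.8                                   # squared Hilbert norm of the "1" outcome
k_lo, k_hi = 75, 85

# Uniform counting: what fraction of the 2^N branches land in that band?
count_fraction = sum(comb(N, k) for k in range(k_lo, k_hi + 1)) / 2**N

# Weighting each branch by its squared norm p^k (1-p)^(N-k):
norm_fraction = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                    for k in range(k_lo, k_hi + 1))

print(f"fraction of branches, counted uniformly : {count_fraction:.2e}")
print(f"total squared norm of the same branches : {norm_fraction:.3f}")
# Counting universes says almost none of them show ~80% "1"s; the squared-norm
# measure puts most of the weight exactly there.
```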
 
  • #106
f95toli said:
We need to be careful when we talk about "probabilities" here. There is a significant difference between classical probability theory and the probabilistic interpretation of QM; they are not mathematically equivalent (which has been known for a long time; von Neumann even proved it around 1930). The reason is essentially that there are non-commuting operators, which is why we use pseudo-distributions in QM such as the Wigner distribution; the latter is the closest thing you can get to a classical distribution but has some very "non-classical" properties, e.g. it can be negative.
Hence, if we assume that QM is a more "fundamental" theory than classical physics, ordinary probability theory can't be used.

This is only one view on the issue, and in fact it makes the assumption of hidden variables. The probability distributions generated by QM are entirely within "classical probability theory". It is only when we assign hypothetical values to hypothetical measurement results that we run into such non-classical probabilities, but these are probabilities of non-physically-possible measurement results. In other words, it is only when insisting upon the existence of well-determined values for non-performed measurements that one runs into these issues. It is, for instance, when you insist upon the existence of pre-determined values of outcomes in a hidden-variable model for EPR experiments that you cannot avoid having to introduce negative probabilities.
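A small sketch of that last point (angles and sign convention are my own choices): once all four outcomes are given pre-determined values, the CHSH combination is bounded by 2, while the quantum singlet correlations reach 2*sqrt(2), so no non-negative joint distribution over pre-determined values can reproduce them.

```python
import itertools
import numpy as np

# Give all four outcomes pre-determined values a1, a2, b1, b2 = +/-1.  Every
# such assignment has |S| <= 2 for the CHSH combination, hence so does any
# non-negative probability distribution over assignments.
S_vals = {a1*b1 + a1*b2 + a2*b1 - a2*b2
          for a1, a2, b1, b2 in itertools.product((+1, -1), repeat=4)}
print("CHSH values of deterministic assignments:", sorted(S_vals))   # [-2, 2]

# The quantum singlet correlations E(a, b) = -cos(a - b), at standard angles,
# give |S| = 2*sqrt(2) > 2.
E = lambda a, b: -np.cos(a - b)
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4
S_qm = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print("quantum |S| =", abs(S_qm))                                    # 2.828...
```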
 
  • #107
Hurkyl said:
The state space is algebraically derivable from the relationships between different kinds of measurements. This is one point of the C*-algebra formalism; once we write down the measurement algebra as a C*-algebra, we can select a unitary representation, which gives us a Hilbert space of states. Or, we can study representation theory so as to catalog all possible unitary representations. And this approach covers all cases -- by the GNS theorem, any state can be represented by a vector in some unitary representation of our C*-algebra.

I can't accept the concept of starting with a measurement algebra as a first principle. What is the origin of this algebra? Is it induced from past experiments? If so, this coupling should be explicit. If not, it is too ad hoc to be satisfactory. Ad hoc, however, doesn't mean it's wrong; it just means I see it as a high-risk strategy.

Many things can be stated as: given this and that, we can prove this. But the weak point is often the initial assumptions. It sure is true that it's hard to find a non-trivial and unambiguous starting point, but this kind of starting point is just over the top to qualify as a first principle in my world.

Hurkyl said:
If, when repeating an experiment many times, the proportion of times that a given outcome is seen converges almost surely to a particular ratio, then that ratio is the probability of that outcome in that experiment.

I understand this and it's a standard interpretation but it does not satisfy me because...

a) It means that for any finite measurement series there is an uncertainty in the probability, as all we get is a relative frequency. And what about the sample space? Does it make sense to know the set of possible distinguishable outcomes before we have seen a single sample? I think not?

b) Making an infinitely long measurement series takes a (long) time, making the issue complex, as it raises the question of when the information is to be "dated".

c) What assurance do we have that the repeated experiment is comparable and identical? Clearly the world around us generally evolves.

d) Can a real observer relate to the continuum that would be required by an infinitely resolved probability? What is the physical basis for this infinite resolution? If the resolution of observation is limited by the observer himself, what implications does this have for the objectivity of probability, since this resolution is probably different for different observers?

Not to seem silly, I'll add that in many cases these issues are practically insignificant, as verified by a finite amount of experience, but my comments are entirely based on the view that we are talking about, or probing, supposed fundamental principles here and not practical matters only.

/Fredrik
 
  • #108
You can get around the problems associated with probabilities by reformulating the postulates so that they don't mention "probability" anymore, but only deal with certainties. E.g. the rule that says that measuring an observable will yield one of the eigenvalues with probability given by the absolute value squared of the inner product of the state with the eigenstate can be replaced by a rule that doesn't mention probabilities:

If a state is an eigenstate of an observable, then measuring the observable will yield the corresponding eigenvalue.

This looks like a weaker statement, because it doesn't say what will happen if we measure an observable when the state is not an eigenstate. However, you can consider the tensor product of the system with itself N times. For this system you consider the operator that measures the relative frequency of a particular outcome when you measure the observable. In the limit N to infinity this operator becomes a diagonal operator. Since all states are then eigenstates, you can apply the weakened postulate. The result is, of course, that the statistics are given by the usual formula.
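Here is a small numerical sketch of that argument (the particular state a|0> + b|1> is an arbitrary choice of mine): the relative-frequency operator on the N-fold product has expectation |b|^2 and a variance that shrinks like 1/N.

```python
import numpy as np
from functools import reduce

# Take |psi> = a|0> + b|1>, form |psi> tensored N times, and look at
# F = (1/N) * sum_i P1^(i), the operator for the relative frequency of "1".
a, b = np.sqrt(0.7), np.sqrt(0.3)
psi = np.array([a, b])
P1 = np.diag([0.0, 1.0])
I2 = np.eye(2)
kron_all = lambda ops: reduce(np.kron, ops)

for N in (2, 4, 8, 10):
    state = kron_all([psi] * N)
    F = sum(kron_all([P1 if i == j else I2 for i in range(N)]) for j in range(N)) / N
    mean = state @ F @ state
    var = state @ F @ F @ state - mean**2
    print(f"N = {N:2d}:  <F> = {mean:.4f}   Var(F) = {var:.5f}")
# <F> stays at |b|^2 = 0.3 while Var(F) = |a|^2 |b|^2 / N shrinks, so the
# product state approaches an eigenstate of F with eigenvalue |b|^2 -- which is
# where the weakened postulate hands back the Born statistics.
```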
 
  • #109
I've got IT:


if you put two bananas end-to-end (one 'up', one 'down'), it will look like a 'wave' !

---------

of course, you've got to cut the stems off
 
  • #110
Fra said:
I can't accept the concept of starting with a measurement algebra as a first principle. What is the origin of this algebra? Is it induced from past experiments? If so, this coupling should be explicit. If not, it is too ad hoc to be satisfactory. Ad hoc, however, doesn't mean it's wrong; it just means I see it as a high-risk strategy.

Many things can be stated as: given this and that, we can prove this. But the weak point is often the initial assumptions. It sure is true that it's hard to find a non-trivial and unambiguous starting point, but this kind of starting point is just over the top to qualify as a first principle in my world.
Of course it comes from experiments; that's the whole point! Each experiment we can perform is postulated to correspond to an element of our measurement algebra, and the algebraic structure of the algebra is supposed to be given by the observed relationships between measurements.

The point of the algebraic approach is that this is all the postulating we need to do -- from the algebra, we can derive what sorts of "stuff" exists and what "properties" it might have.

Any scientific theory has to talk about measurement, otherwise it couldn't connect to experiment. So starting with the properties of measurement is more conservative than other approaches!
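To give a toy example of what "algebraic structure read off from observed relationships" can look like (the sequential Stern-Gerlach scenario below is my own illustration, not a derivation of the full algebra): repeating a z-measurement reproduces its result, while interposing an x-measurement scrambles it, and in the algebra this shows up as idempotent projectors that fail to commute.

```python
import numpy as np

# Usual spin-1/2 projectors; the example is just an illustration.
Pz_up = np.array([[1.0, 0.0], [0.0, 0.0]])     # "spin up along z"
Px_up = np.array([[0.5, 0.5], [0.5, 0.5]])     # "spin up along x"
Pz_dn = np.eye(2) - Pz_up

print("repeatability, Pz Pz == Pz    :", np.allclose(Pz_up @ Pz_up, Pz_up))
print("compatibility, Pz Px == Px Pz :", np.allclose(Pz_up @ Px_up, Px_up @ Pz_up))

# Prepared spin-up along z, then filtered z-up -> x-up -> z-down:
psi = np.array([1.0, 0.0])
amp = Pz_dn @ Px_up @ Pz_up @ psi
print("P(z-up, then x-up, then z-down) =", float(amp @ amp))   # 0.25, not 0
```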


Your argument here is sort of a red herring -- it's nothing more than a generic foundational concern. Nothing about it is specific to measurement algebra: you could replace measurement algebra with just about any other notion and the quoted passage doesn't vary at all in meaning or relevance. (It would only vary in target, and possibly in alignment with your personal opinion)
 
  • #111
Fra said:
I understand this and it's a standard interpretation but it does not satisfy me because...

a) It means that for any finite measurement series there is an uncertainty in the probability, as all we get is a relative frequency. And what about the sample space? Does it make sense to know the set of possible distinguishable outcomes before we have seen a single sample? I think not?

b) Making an infinitely long measurement series takes a (long) time, making the issue complex, as it raises the question of when the information is to be "dated".

c) What assurance do we have that the repeated experiment is comparable and identical? Clearly the world around us generally evolves.

d) Can a real observer relate to the continuum that would be required by an infinitely resolved probability? What is the physical basis for this infinite resolution? If the resolution of observation is limited by the observer himself, what implications does this have for the objectivity of probability, since this resolution is probably different for different observers?

Not to seem silly, I'll add that in many cases these issues are practically insignificant, as verified by a finite amount of experience, but my comments are entirely based on the view that we are talking about, or probing, supposed fundamental principles here and not practical matters only.

/Fredrik
This is why statistics is an entire branch of mathematics, rather than simply a one-semester course. :smile:

(Note that these issues are not specific to physics)
 
  • #112
Hurkyl said:
Of course it comes from experiments; that's the whole point! Each experiment we can perform is postulated to correspond to an element of our measurement algebra, and the algebraic structure of the algebra is supposed to be given by the observed relationships between measurements.

OK, so what is the algebraic structure that's connected to measurement? Thanks.
 
  • #113
Hurkyl said:
(Note that these issues are not specific to physics)

I agree, you are completely right :)

But IMO they happen to be of such fundamental importance, even to physics, that doing physics without analysing the foundations is a strategy with poor risk analysis. I have no problem if others do it, but that's not how I do it.

You're also right that my comments above are not specific to measurement algebras only.

The issues I have relate more specifically to the scientific method in general, which is my point. My issues with some of these things are not meant to pick on particular theories. The problem gets worse when you see the "theories" in the perspective of evolution. Then theories are nothing but evolving structures, and the task then becomes not just to find a falsifiable theory that we keep testing; the more fundamental thing IMO is to evolve the theories in an efficient manner. Which means that I think the interesting part is exactly when a theory is found inappropriate: what does the transition to the new theory look like, and what is the information-view of this process itself? In this view, more of the postulates can to a larger extent be attributed measurable status, but measurements don't necessarily correspond only to the idea of "projections" of some state vector.

The Popperian ideal seems to suggest that we come up with falsifiable theories. And the scientific ideal doesn't seem to specify a method, so the ad hoc method is fine. But is the ad hoc method the most efficient/best one? Or can we evolve, not only our theories, but also our scientific method?

/Fredrik
 
  • #114
Hurkyl said:
Your argument here is sort of a red herring -- it's nothing more than a generic foundational concern.

True, but all the more important!

Say we want to build a house; then the foundation is as important as the house itself. In fact, investing too much in a house built on a shaky foundation is a high-risk project. I am happy to take limited risks at low odds, but I wouldn't want to invest a significant part of my total resources in something without making sure the foundational issues can be defended. A good foundation should last for several generations of houses.

/Fredrik
 
  • #115
Hurkyl said:
Of course it comes from experiments; that's the whole point! Each experiment we can perform is postulated to correspond to an element of our measurement algebra, and the algebraic structure of the algebra is supposed to be given by the observed relationships between measurements.

The interesting part here is the "mechanics" of postulation. What leads us, given a certain experience, to make a particular postulate, and is it unique? Is there not logic behind this process beyond the "ad hoc"? I think there is! And I think this can and should be formalised.

/Fredrik
 
  • #116
Fra said:
True, but all the more important!

Say we want to build a house; then the foundation is as important as the house itself. In fact, investing too much in a house built on a shaky foundation is a high-risk project. I am happy to take limited risks at low odds, but I wouldn't want to invest a significant part of my total resources in something without making sure the foundational issues can be defended. A good foundation should last for several generations of houses.

/Fredrik
But as I said, every scientific theory has to deal with measurement. So the programme of having a theory axiomatize measurement and from there derive the existence and properties of "stuff" is going to be a conservative approach.
 
  • #117
Of course, there is bound to be some kind of "channel" through which an observer gets his information about the rest of the "world"; this we call experiments or interactions, and I agree that one way or the other some idea of this is needed. But by no means do I agree that the current QM scheme is the only way, the unique way or the best way.

To be a little more specific, what I lack in the standard formalism is a relational base for the measurement axiomatizations. By that I mean that the actual result of a measurement needs to relate to the observer's internal state somehow. And I would even like to take it as far as to define measurements in terms of changes and uncertainties in the observer's own state - as a mirror of the environment. This sort of renders the measurement objects themselves relative or subjective. It makes things more complex, but for me personally I think it is more correct, because it is more in line with how I perceive reality. The objective measurements are then rather emergent at a higher level, not fundamental.

So some sort of formalism of measurements is indeed needed. But at least the representations of these strategies that I have seen have not been satisfactory in depth. The formalism and postulates seem innocent and "clean", but they clearly contain loads of assumptions about reality that I can't buy.

I think ultimately a measurement is a change, or an interaction. The idealized measurements we make in a lab, with a controlled apparatus, are hardly a fundamental thing; they are a very highly advanced kind of measurement, with no clear correspondence for, say, an electron making measurements/interactions on another particle.

I am trying to find a satisfactory solution that doesn't only make sense for macroscopic and idealized measurements, because I think that in a consistent model interactions and measurements must be treated on a similar footing.

/Fredrik
 
  • #118
So what I look for is to axiomatize information first of all, and then define measurements in terms of uncertainties in the information. I don't have a solution yet, but I am not just complaining generically without seeing a possible better solution.

So what is information? IMO it's first of all A having information about B. So information is a relation. This should also mean that the information A can possibly have about B is limited by the relational capacity of A. (Ultimately I associate this with energy, and I think it can allow for a fundamental definition thereof.)
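One hedged way to make that quantitative, borrowing from ordinary information theory (the joint distribution below is an arbitrary toy example of mine): the mutual information I(A;B) that A holds about B can never exceed A's own entropy H(A).

```python
import numpy as np

# Toy quantification of "the information A holds about B is limited by A's
# own capacity": for any joint distribution, I(A;B) <= min(H(A), H(B)).
p_ab = np.array([[0.40, 0.05],          # rows: states of A, columns: states of B
                 [0.10, 0.45]])

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

p_a = p_ab.sum(axis=1)
p_b = p_ab.sum(axis=0)
I_ab = H(p_a) + H(p_b) - H(p_ab.ravel())

print(f"H(A)   = {H(p_a):.3f} bits   (A's 'relational capacity')")
print(f"I(A;B) = {I_ab:.3f} bits   (what A actually registers about B)")
# A small A simply cannot mirror a big B in full detail.
```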

/Fredrik
 
  • #119
Fra said:
Of course, there is bound to be some kind of "channel" through which an observer gets his information about the rest of the "world"; this we call experiments or interactions, and I agree that one way or the other some idea of this is needed. But by no means do I agree that the current QM scheme is the only way, the unique way or the best way.

To be a little more specific, what I lack in the standard formalism is a relational base for the measurement axiomatizations. By that I mean that the actual result of a measurement needs to relate to the observer's internal state somehow. And I would even like to take it as far as to define measurements in terms of changes and uncertainties in the observer's own state - as a mirror of the environment. This sort of renders the measurement objects themselves relative or subjective. It makes things more complex, but for me personally I think it is more correct, because it is more in line with how I perceive reality. The objective measurements are then rather emergent at a higher level, not fundamental.

So some sort of formalism of measurements is indeed needed. But at least the representations of these strategies that I have seen have not been satisfactory in depth. The formalism and postulates seem innocent and "clean", but they clearly contain loads of assumptions about reality that I can't buy.

I think ultimately a measurement is a change, or an interaction. The idealized measurements we make in a lab, with a controlled apparatus, are hardly a fundamental thing; they are a very highly advanced kind of measurement, with no clear correspondence for, say, an electron making measurements/interactions on another particle.

I am trying to find a satisfactory solution that doesn't only make sense for macroscopic and idealized measurements, because I think that in a consistent model interactions and measurements must be treated on a similar footing.


/Fredrik


Between Quantum and MWI, there's a chance that if I wave to myself in the mirror, I'll turn into a banana.
 
  • #120
Interactions are not necessarily measurements

Hurkyl said:
But as I said, every scientific theory has to deal with measurement. So the programme of having a theory axiomatize measurement and from there derive the existence and properties of "stuff" is going to be a conservative approach.

I would welcome a comment on this alternative view:

Every scientific theory has to deal with interactions. So, the programme of having a theory axiomatize interactions, and from there derive the existence and properties of "stuff", is going to be the most conservative approach.

I have in mind that, seeking to measure the width of my desk, the nature of the interaction (with a tape measure) will determine the extent to which the outcome is an accurate "measurement".

Or, more critically, the (supposed) "measured" polarization of a photon is the outcome of a "severe" interaction and is not therefore a "measurement" in any common-sense meaning of the word --- ?

In other words; seeking to speak with precision: Interactions are more general entities than measurements.
 
  • #121
The reason every scientific theory has to deal with measurement is because measurement is what gathers empirical data. If a theory doesn't relate to empirical data, then it's not scientific.

There is no similar argument that a scientific theory must relate to interactions that are not measurements.

In terms of familiar theories, taking interaction as a foundation lies somewhere between taking measurement as a foundation and taking stuff as a foundation.
 
  • #122
Hurkyl said:
The reason every scientific theory has to deal with measurement is because measurement is what gathers empirical data. If a theory doesn't relate to empirical data, then it's not scientific.

There is no similar argument that a scientific theory must relate to interactions that are not measurements.

In terms of familiar theories, taking interaction as a foundation lies somewhere between taking measurement as a foundation and taking stuff as a foundation.

I have to slightly disagree -- or, more specifically, foundational theories are built more on a specific hypothesis about the measurements and stuff, e.g. "nothing travels faster than light", because nothing has ever been observed conclusively/scientifically to travel FTL.
 
  • #123
rewebster said:
I have to slightly disagree -- or, more specifically, foundational theories are built more on a specific hypothesis about the measurements and stuff, e.g. "nothing travels faster than light", because nothing has ever been observed conclusively/scientifically to travel FTL.
I don't agree with that. If you talk about SR, it's based on the empirical data (measurement) that light's speed doesn't depend on the relative speed between source and observer, more than on the hypothesis "nothing travels faster than light".
I apologize in case this is not what you intended.
 
  • #124
lightarrow:
If you talk about SR, it's based on the empirical data (measurement) that light's speed doesn't depend on the relative speed between source and observer.
What empirical data would that be? I don't believe a laboratory experiment is feasible, and astronomical data has large error bars.
 
  • #125
Mentz114 said:
lightarrow:

What empirical data would that be? I don't believe a laboratory experiment is feasible, and astronomical data has large error bars.

Check, for example,

T. Alvager, F. J. M. Farley, J. Kjellman, I. Wallin, "Test of the second postulate of special relativity in the GeV region", Phys. Lett., 12 (1964) 260.

They used the time-of-flight method to measure the speed of gamma quanta emitted by high-speed π⁰ mesons from an accelerator. This was a direct and accurate confirmation that the speed of light does not depend on the velocity of the source.
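A rough back-of-envelope comparison (my own assumed numbers, not taken from the paper) shows why the time-of-flight test is so decisive: an emission ("ballistic") theory would have the forward-emitted gammas travel at roughly c + v, i.e. nearly 2c.

```python
import math

# Assumed round numbers for illustration: a pi0 of ~6 GeV total energy and a
# ~30 m flight path.  Relativity predicts the gammas travel at exactly c; an
# emission theory predicts roughly c + v for forward emission.
c = 299_792_458.0          # m/s
m_pi0 = 134.977e6          # eV/c^2
E = 6.0e9                  # eV, assumed total pion energy

gamma = E / m_pi0
beta = math.sqrt(1.0 - 1.0 / gamma**2)
v_source = beta * c

L = 30.0                   # m, assumed flight path
t_rel = L / c
t_ballistic = L / (c + v_source)
print(f"source speed          : {beta:.6f} c")
print(f"time of flight (SR)   : {t_rel * 1e9:.1f} ns")
print(f"time of flight (c+v)  : {t_ballistic * 1e9:.1f} ns")
print(f"difference            : {(t_rel - t_ballistic) * 1e9:.1f} ns")
# The measured time of flight was consistent with c, not with c + v.
```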

Eugene.
 
  • #126
lightarrow said:
I don't agree with that. If you talk about SR, it's based on the empirical data (measurement) that light's speed doesn't depend on the relative speed between source and observer, more than on the hypothesis "nothing travels faster than light".
I apologize in case this is not what you intended.

Yes -- there is more, of course -- but, for example, one reason that relativity hasn't won a Nobel Prize may be that, while it may be highly correlative and useful in some circumstances (as was the celestial model), it is, at its core, a speculation (a hypothesis) that hasn't been totally proven. Things such as energy conversion/equivalence may be just coincidentally close.
 
  • #127
nrqed said:
But, in my humble opinion, this is simply replacing one mystery with another mystery. How does this "interaction" occur? What is the physical process behind it? When does it occur? Etc. etc.

saying that "hocus-pocus, the wavefunction of the particle becomes entangled with the the measurement device when we do the measurement" is as mysterious as saying "the wavefunction collapses".

I am not saying I disagree with your point. I do agree that a formalism in which the collapse never occurs is more satisfying than the collapse approach. I am just pointing out that saying this opens up as many questions as it answers, IMHO.

Agreed, this wavefunction collapse is just garbage; as Gell-Mann said, Niels Bohr brainwashed a whole generation of physicists into believing that the problem had been solved.
 
  • #128
So what has been totally proven?
Regards,
Reilly Atkinson

rewebster said:
Yes -- there is more, of course -- but, for example, one reason that relativity hasn't won a Nobel Prize may be that, while it may be highly correlative and useful in some circumstances (as was the celestial model), it is, at its core, a speculation (a hypothesis) that hasn't been totally proven. Things such as energy conversion/equivalence may be just coincidentally close.
 
  • #129
meopemuk said:
I think it is dangerous to pretend that we know what happens to the system "in reality", i.e., while we are not watching. This is a sure way to logical paradoxes. The whole point of complex amplitudes in quantum mechanics is to refuse any statements about "reality" and concentrate only on probabilities of measurable outcomes of experiments.

Eugene.

Recent experiments have shown that the Bell inequality is violated, so the viewpoint of "local reality" is wrong and incompatible with quantum mechanics.
 
  • #130
rewebster said:
Yes -- there is more, of course -- but, for example, one reason that relativity hasn't won a Nobel Prize may be that, while it may be highly correlative and useful in some circumstances (as was the celestial model), it is, at its core, a speculation (a hypothesis) that hasn't been totally proven. Things such as energy conversion/equivalence may be just coincidentally close.

I disagree in several ways. Relativity is not a hypothesis, but a theory.

And it is not the goal of science to totally prove things. If you read a bit about the philosophy behind science (Popper or the Quine-Duhem thesis), you will notice that science can only totally disprove theories. And even that only in a limited sense (see Quine-Duhem). Also, any experimental result might just be coincidentally close to theory.

Earlier this month ZapperZ posted a wonderful essay about the scientific meaning of words like theory or hypothesis and common misconceptions:
https://www.physicsforums.com/showthread.php?t=149923

Anyway, enough of that. This is getting slightly off topic.
 
  • #131
Isn't the "wavefunction collapse" nothing more than the wavefunction CHANGING?

Take two "free" electrons, non-entangled, defned by Y1(x,t), Y2(x,t). The probability of them interacting at some space 'x' and time 't' is a function of both wavefunctions: |Y1*Y2|(x,t). If the interaction occurs, the particles are subsequently defined by new complementary wavefunctions Y3(x,t) and Y3'(x,t) and, thus, entangled. Interaction gave the particles new wave functions. Then, once entangled, a further interaction/change in one particle's wave function causes a complementary change in the other.

The "collapse" language was introduced to ease the minds of people afraid of spooky action at a distance, i.e., accepting that change (even a statistical one) can be caused by something nonlocal.

Personally, I'm convinced that if it were possible to trace the wave function of every particle since the big bang, we would find the apparent "non-local" influence to be nothing more than a consequence of the infinite complexity of the entanglement of every particle with one another. But I won't even try to prove that. ;)
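As a toy version of "interaction gave the particles new wave functions" (two qubits and a CNOT-style coupling, chosen purely for illustration): the interaction simply replaces the product wavefunction with an entangled joint one, with nothing collapsing.

```python
import numpy as np

# Two qubits start in a product state; an interaction unitary (here a CNOT)
# turns the joint state into an entangled one.
psi_1 = np.array([1.0, 1.0]) / np.sqrt(2)      # particle 1: superposition
psi_2 = np.array([1.0, 0.0])                   # particle 2: definite state
joint_before = np.kron(psi_1, psi_2)

U_int = np.array([[1, 0, 0, 0],                # CNOT: flips particle 2 iff
                  [0, 1, 0, 0],                # particle 1 is in |1>
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
joint_after = U_int @ joint_before
print("before:", np.round(joint_before, 3))    # separable product state
print("after :", np.round(joint_after, 3))     # (|00> + |11>)/sqrt(2), entangled

# Entanglement check via Schmidt coefficients (singular values of the 2x2
# coefficient matrix): one nonzero value = product state; two = entangled.
for label, v in (("before", joint_before), ("after", joint_after)):
    svals = np.linalg.svd(v.reshape(2, 2), compute_uv=False)
    print(label, "Schmidt coefficients:", np.round(svals, 3))
```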
 
  • #132


I'm new to the forum. While not being ‘technically’ informed and having more of a visual-type understanding (as limited, and limiting to technical discussion as that is) I find QM (QFT whatever) extremely fascinating.

Now, if I were claiming to ‘understand/know’ any of this, I'd not be posting here with my occasional query. That said:


Take two "free" electrons, non-entangled, defned by Y1(x,t), Y2(x,t). The probability of them interacting at some space 'x' and time 't' is a function of both wavefunctions: |Y1*Y2|(x,t). If the interaction occurs, the particles are subsequently defined by new complementary wavefunctions Y3(x,t) and Y3'(x,t) and, thus, entangled. Interaction gave the particles new wave functions. Then, once entangled, a further interaction/change in one particle's wave function causes a complementary change in the other.

Personally, I'm convinced that if it were possible to trace the wave function of every particle since the big bang, we would find the apparent "non-local" influence to be nothing more than a consequence of the infinite complexity of the entanglement of every particle with one another. But I won't even try to prove that.


My previous understanding was that the degree/extent of 'entanglement' depends upon the nature(?) length(?) of the interaction. Is this not so? If it is so, what indicates, to you, a causal linkage (through entangled wavefunctions) to the degree you've suggested above, i.e. 'finding the apparent "non-local" influence to be nothing more than a consequence of the infinite complexity of the entanglement of every particle with one another'?

Just curious. It’s provoked a few tangentially similar thoughts.
 
  • #133
Oh, I'm not claiming to understand/know any of this either. ;) Nonetheless -

They are indeed entangled to the extent of their interaction. If the interaction is a "classical" collision between two massive particles, for example, their momentum and spin would be entangled. If the interaction is, say, absorption of a photon by an electron, then the photon ceases to exist entirely and the electron has more energy (you could say the electron and the now non-existent photon are entangled). If the interaction was between two photons, they might constructively or destructively interfere. If the interaction was beta decay of a neutron, then the electron, proton, and neutrino would all be charge / mass / momentum entangled.

Now take the case of entangled photons where one goes through a lens. Everyone likes to simplify the problem and assume the photon going out is the same one going in, but we all know that's not what happens. The photon gets absorbed by an atom in the lens; for an instant, the atom is entangled with the now-non-existent photon and its twin on the other side of the lab. It might be a "loose" entanglement, but it is still there. An instant later, the atom emits a new photon, which is partly entangled with the atom in the lens, and partly entangled with its original twin. And so forth, until the final atom in the lens emits a photon at the other end, in which case the new photon is still entangled with its twin to a degree, e.g., it's still polarization-entangled, but no longer direction-entangled - it's been "bent". Then there's that one last atom in the lens that's still entangled with that last photon - largely, in fact, for an instant. But then the other atoms in the lens all rapidly influence that last atom to such a large degree as to, on any perceivable scale, render completely negligible the photon's influence on the atom. But the history of the interaction is still a part of that atom's wave function, no matter how small a part. It never "collapsed" - it just became infinitely small. But that infinitely small influence propagates through the entire lens, the surrounding air, the earth, the solar wind, etc. Thus, every interaction between any two particles alters the wave function of the entire universe at an infinitesimal level.

Hence my statement that everything is entangled in an infinitely complex way, and my hypothesis that if one could model a huge number of particles interacting and entangling as such, always obeying the laws of nature locally, the chaos that would ensue would, I believe, entirely mimic the quantum observations without the need for non-locality. Or, put another way, my hypothesis is that any chaotic system appears to exhibit quantum-like effects when viewed at a sufficiently large scale.

Sorry for the overtly metaphysical babble.
 
