Is Polarisation Entanglement Possible in Photon Detection?

  • #201
stevendaryl said:
You're making a philosophical point that I disagree with.
Most of the discussion here is philosophical; labeling a particular statement as such does not help.
stevendaryl said:
To me, if someone flips a coin and hides the result, then I use probabilities to reflect my ignorance about the fine details of the coin-flipping process. In my opinion, bringing up ensembles is unnecessary and unhelpful.
Thus you and the someone will assign different states to the same physical situation. This means that in the situation you describe, the assigned state is purely subjective and contains no physics. It is a property of your mind and not of the coin. You cannot check the validity of your probability assignment; so any probability is as good as any other. Applying probabilities to single coin flips is simply meaningless.
 
  • #202
forcefield said:
Yes. Alice always measures before Bob or vice versa.
By physical collapse, do you mean that measurement of Alice's photon changes the polarization of Bob's photon? Meaning that if initially we model Bob's mixed state as a statistical mixture of orthogonal pure states H/V, then after Alice's measurement in the H'/V' basis the components of Bob's mixed state change to the H'/V' basis, right?
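To make the question concrete, here is a minimal numpy sketch (my own, assuming the standard two-qubit polarization formalism). It suggests that the two mixtures are in fact the same operator, so the "change of components" would not be locally observable on Bob's side:

```python
import numpy as np

# My own minimal sketch (not anyone's formalism from the thread): for a
# singlet-like pair (|HV> - |VH>)/sqrt(2), Bob's reduced density matrix
# is I/2, and an equal mixture of H'/V' for ANY rotated basis gives
# exactly the same operator.

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)  # entangled pair
rho_pair = np.outer(psi, psi.conj())

# Bob's reduced state: trace out Alice (the first subsystem)
rho_bob = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=0, axis2=2)

t = 0.7  # arbitrary angle defining the rotated basis H', V'
Hp = np.cos(t) * H + np.sin(t) * V
Vp = -np.sin(t) * H + np.cos(t) * V
rho_mix = 0.5 * np.outer(Hp, Hp) + 0.5 * np.outer(Vp, Vp)

print(np.allclose(rho_bob, np.eye(2) / 2))  # True
print(np.allclose(rho_bob, rho_mix))        # True, for any angle t
```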
 
  • #203
Simon Phoenix said:
The only way out of this (that I can see) is to assume that there is no meaning to a system being 'in' a state and the word 'state' means a mathematical quantity that is merely descriptive of our knowledge and not descriptive of some objective physical property of an entity.
Simon Phoenix said:
I would (grudgingly) agree that this 'knowledge' viewpoint makes more coherent logical sense, but as a physicist it leaves me very unsatisfied because I no longer have any real physical 'picture' of what's happening but must deal with things in a very operational way using vague terms like 'knowledge' or 'what can be known' in order to interpret things.
As I see it, this is similar to my own disappointment with defining "state" in a "what can be known" way. To me it seems that the intuitive meaning of the concept "state" is a model of a real physical situation, i.e., a model that explains our observations rather than the observations themselves. And because that concept has been taken over for something else, it is harder to talk about a model of the real physical situation.

Simon Phoenix said:
It also doesn't really explain (to my mind, at least) why our 'knowledge' has to be encoded in a mathematical object that evolves according to the Schrodinger equation (involving physical things like energy and interactions), lives in an abstract complex space, has such close connections at a deeper level to classical mechanics, and yet is not supposed to model 'reality' in any objective way.
Let me oppose you here. The mathematical object that evolves according to the Schrödinger equation is a bit closer to a real physical 'picture' and is not quite identical to the state (in the "what can be known" sense).
I found this jtbell post https://www.physicsforums.com/threa...-of-schrodinger-equation.889605/#post-5596330 quite interesting, and it sort of confirms my sentiments. As I understand it, the intuition that helped Schrödinger arrive at his equation was this:
"is one not greatly tempted to investigate whether the non-applicability of ordinary mechanics to micro-mechanical problems is perhaps of exactly the same kind as the non-applicability of geometrical optics to the phenomena of diffraction or interference and may, perhaps, be overcome in an exactly similar way?"
So interference phenomena for massive particles were his starting point.
And it's interesting that Feynman, too, had a very special attitude toward interference phenomena:
"We choose to examine a phenomenon which is impossible, absolutely impossible, to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery."

As I see it, "the heart of quantum mechanics" is represented by the phase factor. So to me it seems unwise to hide it away or to try to drop it entirely.
 
  • #204
zonde said:
As I see it, "the heart of quantum mechanics" is represented by the phase factor. So to me it seems unwise to hide it away or to try to drop it entirely.

I don't think anyone is saying to get rid of phase information. As you say, it's absolutely at the heart of quantum phenomena. The density matrix formulation does not get rid of relative phase information, only overall phase, which plays no role in interference.
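A minimal sketch of what I mean (my own toy example): a global phase drops out of the density matrix entirely, while a relative phase survives in the off-diagonal elements that drive interference.

```python
import numpy as np

# My own illustration: a global phase leaves the density matrix
# unchanged, while a relative phase shows up in the off-diagonal
# (interference) terms.

psi = np.array([1.0, 1.0]) / np.sqrt(2)

def dm(state):
    """Density matrix |state><state|."""
    return np.outer(state, state.conj())

# Global phase: exactly the same density matrix
print(np.allclose(dm(psi), dm(np.exp(1j * 0.9) * psi)))  # True

# Relative phase: the off-diagonal entries change
psi_rel = np.array([1.0, np.exp(1j * 0.9)]) / np.sqrt(2)
print(np.allclose(dm(psi), dm(psi_rel)))  # False
print(dm(psi_rel))  # relative phase sits in the off-diagonal elements
```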
 
  • #205
stevendaryl said:
I don't think anyone is saying to get rid of phase information. As you say, it's absolutely at the heart of quantum phenomena. The density matrix formulation does not get rid of relative phase information, only overall phase, which plays no role in interference.

Quantum interference is somewhat like classical wave interference, but is subtly different: a classical wave is a wave in physical space; you have a field that extends throughout space and propagates as a function of time according to a wave equation. A quantum wave function is a wave in configuration space. In QM, the interference effects involve interference between different possibilities.

For classical interference of, say, light waves, you can understand it in terms of part of the wave going one way (through one slit, for instance) while another part of the wave goes another way (through a different slit). The interference is not interference between possibilities, it's interference between actualities: there really are electromagnetic fields going through both slits. But when you attempt to apply that idea to quantum mechanics, it seems to me that you are forced to a many-worlds view, where different possibilities are equally real. Most people reject that interpretation, and understandably so (it seems to posit the existence of whole worlds that are unobservable), but if you reject the reality of alternative possibilities, then it's hard for me to understand what interference effects are about. (Note: The Bohmian interpretation seems more realistic than other interpretations, since it only has one world, with definite positions for particles at all times. But in the Bohmian interpretation, there is still a wave function that acts as a "guide" to particle motion, and this wave function is determined by interference effects among possibilities, even though only one possibility is considered "real".)
 
  • #206
stevendaryl said:
In QM, the interference effects involve interference between different possibilities.
I would say that interference between "possibilities" is a heuristic that avoids hard and currently unanswerable questions.
stevendaryl said:
But in the Bohmian interpretation, there is still a wave function that acts as a "guide" to particle motion, and this wave function is determined by interference effects among possibilities, even though only one possibility is considered "real".
I don't know the Bohmian interpretation very well, but judging by its key features I think it goes in the right direction. What I'm missing in that interpretation is the particle's effect on the pilot wave. As I understand it, the many-interacting-worlds interpretation is a variation of the Bohmian interpretation that models the pilot wave using many particles, so it sort of fills that gap. But I have not investigated it, as there is not much to read about it at my level.
 
  • #207
zonde said:
I would say that interference between "possibilities" is a heuristic that avoids hard and currently unanswerable questions.

In what way?
 
  • #208
A. Neumaier said:
Thus you and the someone will assign different states to the same physical situation. This means that in the situation you describe, the assigned state is purely subjective and contains no physics. It is a property of your mind and not of the coin. You cannot check the validity of your probability assignment; so any probability is as good as any other. Applying probabilities to single coin flips is simply meaningless.
Just because the assignment of probability to a single event is subjective and cannot be checked does not mean it's meaningless. Such a Bayesian subjective assignment of probability may be useful in making decisions. This is something that people do (often intuitively and unconsciously) every day. (For instance, I have to buy shoes for my wedding (and I have never bought wedding shoes before), so I have to decide which shop to visit first. I choose the one for which I estimate a larger probability of finding shoes I will be satisfied with.)
 
  • #209
A. Neumaier said:
Thus you and the someone will assign different states to the same physical situation. This means that in the situation you describe, the assigned state is purely subjective and contains no physics.

That's plainly not true. You assign (subjective) probabilities to initial conditions, and then you evolve them in time using physics, to get derived probabilities for future conditions. There's plenty of physics involved.

The assumption that if there is a subjective element to your reasoning, then the entire reasoning process is nonscientific would, if taken seriously, imply that science is impossible. Whether you perform an experiment 5 times or 1 million times, there is the logical possibility that the statistics that you gather are a "fluke". To make any conclusion requires a subjective judgment that your data is sufficient to rule out some possibility. Without making such subjective judgments, you really couldn't make any conclusions in science.

Saying that the physics describes ensembles rather than individual events does absolutely nothing to change the fundamental subjectivity of probability judgments. If the actual ensemble is finite (which it always is), then in reality you have the same problem as with single events: how to make judgments based on finite data.
 
  • #210
stevendaryl said:
In what way?
You don't have to speak about actualities. Meaning you don't care about giving a realistic model of interference.
 
  • #211
Demystifier said:
Just because the assignment of probability to a single event is subjective and cannot be checked does not mean it's meaningless. Such a Bayesian subjective assignment of probability may be useful in making decisions. This is something that people do (often intuitively and unconsciously) every day. (For instance, I have to buy shoes for my wedding (and I have never bought wedding shoes before), so I have to decide which shop to visit first. I choose the one for which I estimate a larger probability of finding shoes I will be satisfied with.)
Yes, but buying shoes is not physics.
 
  • #212
zonde said:
You don't have to speak about actualities. Meaning you don't care about giving a realistic model of interference.

Well, it's a pure fact that quantum mechanical probabilities involve summing over possibilities. That's the basis for Feynman's path integral formulation, but it's true for any formulation: the probability amplitude ##\psi(A, B, t_0, t)## to go from state ##A## at time ##t_0## to state ##B## at time ##t## is equal to the sum, over a complete set of intermediate states ##C##, of ##\psi(A, C, t_0, t_1)\, \psi(C, B, t_1, t)##, where ##t_0 < t_1 < t##.
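Here is a toy numerical check of that composition rule (my own sketch, with an arbitrary made-up 3-level Hamiltonian):

```python
import numpy as np

# My own toy check: with a (hypothetical, arbitrary) 3-level
# Hamiltonian, the amplitude from A at t0 to B at t equals the sum
# over intermediate states C at t1 of amp(A -> C) * amp(C -> B).

H = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.5, 0.3],
              [0.2, 0.3, -0.4]])  # toy Hermitian Hamiltonian, hbar = 1

def U(dt):
    """Propagator exp(-i H dt), built from the spectral decomposition."""
    evals, evecs = np.linalg.eigh(H)
    return (evecs * np.exp(-1j * evals * dt)) @ evecs.conj().T

t0, t1, t = 0.0, 0.6, 1.5
A, B = 0, 2  # pick two basis states

direct = U(t - t0)[B, A]  # psi(A, B, t0, t)
via_C = sum(U(t - t1)[B, C] * U(t1 - t0)[C, A] for C in range(3))

print(np.allclose(direct, via_C))  # True
```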
 
  • #213
stevendaryl said:
If the actual ensemble is finite (which it always is), then in reality you have the same problem as with single events: how to make judgments based on finite data.
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision. This is why thermodynamics makes definite predictions: it averages over ##10^{23}## molecules.

And this is why repeatability is the hallmark of scientific work. If something is not repeatable in 1 out of 1000 cases, one usually ignores the single exception, attributing it to side effects unaccounted for (which is indeed what it boils down to, since we can't know the precise state of the universe, which evolves deterministically).
 
  • #214
stevendaryl said:
You assign (subjective) probabilities to initial conditions, and then you evolve them in time using physics, to get derived probabilities for future conditions. There's plenty of physics involved.
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested since different subjects can assign arbitrary probabilities but there will be only one outcome independent of anyone's probability.

And under repetition, there will be only one relative frequency, and among all subjective probabilities only those that match the observed relative frequency within the statistically correct uncertainty are scientific. All others are unscientific, though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
 
  • #215
A. Neumaier said:
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision.

I disagree. There are many aspects of assessing data that are subjective. Is it really the case that it is an ensemble of identically prepared systems? Is the system really in equilibrium?

I think you're wrong on two counts: (1) that subjectivity makes it nonscientific, and (2) that it is possible to eliminate subjectivity. I don't think either is true.
 
  • #216
A. Neumaier said:
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

I'm saying that they are not meaningless, and in fact it is inconsistent to say that they are meaningless. If probabilities for single events are meaningless, then probabilities for 10 events are meaningless, and probabilities for 10,000 events are meaningless. Any finite number of events would be equally meaningless.

Garbage in: Probabilities for single events are meaningless.
Garbage out: Probabilities for any finite number of events are meaningless.
 
  • #217
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
 
  • #218
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
But it can increase only to one; otherwise one has multiple coin tosses.
 
  • #219
A. Neumaier said:
But it can increase only to one; otherwise one has multiple coin tosses.
You misunderstand what I wrote. I amend it thus

... the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely, if I performed this.

It is the empirical definition of probability. I thought it was standard.
 
  • #220
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.

But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
 
  • #221
stevendaryl said:
But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
1) If there is no limit, then the distribution has no first moment, i.e., ##\langle x \rangle## is undefined and no predictions are possible (for instance, the Cauchy pdf).
2) Yes. Just like ##\pi##, we can only get estimates.
 
  • #222
Mentz114 said:
if I performed this.
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
 
  • #223
Mentz114 said:
It is the empirical definition of probability. I thought it was standard.
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
 
  • #224
Mentz114 said:
1) If there is no limit, then the distribution has no first moment, i.e., ##\langle x \rangle## is undefined and no predictions are possible (for instance, the Cauchy pdf)

What I mean is that there is no guarantee that, when flipping a coin repeatedly, the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is ##p##, then the probability that ##N## independent coin flips will yield a relative frequency of heads much different from ##p## goes to zero in the limit as ##N \rightarrow \infty##. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
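A quick simulation of this point (my own sketch): the spread of the relative frequency around ##p## shrinks like ##\sqrt{p(1-p)/N}##, but never reaches zero for any finite ##N##.

```python
import numpy as np

# My own sketch: simulate many runs of N coin flips and watch the
# spread of the relative frequency m/N around p shrink as N grows;
# likely close to p, but never guaranteed for finite N.

rng = np.random.default_rng(0)
p = 0.5

for N in (10, 100, 10_000):
    freqs = rng.binomial(N, p, size=100_000) / N  # 100k runs of N flips
    print(f"N={N:>6}: std of m/N = {freqs.std():.4f}, "
          f"predicted {np.sqrt(p * (1 - p) / N):.4f}")
```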
 
  • #225
A. Neumaier said:
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
 
  • #226
stevendaryl said:
What I mean is that there is no guarantee that, when flipping a coin repeatedly, the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is ##p##, then the probability that ##N## independent coin flips will yield a relative frequency of heads much different from ##p## goes to zero in the limit as ##N \rightarrow \infty##. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
Have you got some equations to back this up?
I guess I'll just have to ride my luck.
 
  • #227
Mentz114 said:
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
Then it is a property of the latter but not of the former.

It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
 
  • #228
Mentz114 said:
Have you got some equations to back this up?
I guess I'll just have to ride my luck.

It's pretty standard. If you make ##N## trials, and each trial has a probability of success ##p##, then the probability that you will get ##m## successes is:

$$p_{m,N} = \frac{N!}{m!\,(N-m)!}\, p^{m} (1-p)^{N-m}$$

Now, write ##m = N (p + x)##. Using Stirling's approximation, we can estimate this for small ##x## to be:

$$p_{m,N} \approx e^{-\frac{x^2}{2\sigma^2}}$$

where (if I've done the calculation correctly) ##\sigma = \sqrt{\frac{p(1-p)}{N}}##.

If ##N## is large, the probability distribution for ##x## (which measures the departure of the relative frequency from ##p##) approaches a strongly peaked Gaussian, whose standard deviation ##\sigma \rightarrow 0##.
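A quick numerical check (my own sketch; I write the exponent as ##-x^2/(2\sigma^2)##, which is what the stated ##\sigma## implies, and restore the normalization ##1/(N\sqrt{2\pi}\,\sigma)##):

```python
from math import comb, exp, sqrt, pi

# My own check of the approximation above: compare the exact binomial
# probability of m = N(p + x) successes with the normalized Gaussian
# exp(-x^2 / (2 sigma^2)) / (N sqrt(2 pi) sigma).

N, p = 1000, 0.5
sigma = sqrt(p * (1 - p) / N)

for x in (0.0, 0.01, 0.03):
    m = round(N * (p + x))
    exact = comb(N, m) * p**m * (1 - p)**(N - m)
    gauss = exp(-x**2 / (2 * sigma**2)) / (N * sqrt(2 * pi) * sigma)
    print(f"x={x:.2f}: binomial={exact:.5f}, gaussian={gauss:.5f}")
```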
 
  • #229
A. Neumaier said:
It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
 
  • #230
stevendaryl said:
It's pretty standard. If you make ##N## trials, and each trial has a probability of success ##p##, then the probability that you will get ##m## successes is: ... If ##N## is large, the probability distribution for ##x## (which measures the departure of the relative frequency from ##p##) approaches a strongly peaked Gaussian, whose standard deviation ##\sigma \rightarrow 0##.

If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

I have not checked the bias of the binomial estimator ##m/N##, but in the large-sample limit I'll bet (:wink:) it is unbiased also.
 
  • #231
Mentz114 said:
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability that a man (heavy smoker, age 60) will die of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of the ensemble of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
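A toy simulation (my own, with entirely made-up numbers) makes the reference-class dependence explicit: the same individual inherits a different relative frequency from each ensemble he is referred to.

```python
import numpy as np

# My own toy illustration; all numbers are invented. The same man gets
# a different probability depending on which ensemble we refer him to,
# because each probability is a relative frequency in that ensemble.

rng = np.random.default_rng(1)
n = 100_000
smoker = rng.random(n) < 0.3
age60 = rng.random(n) < 0.2
died = rng.random(n) < (0.02 + 0.08 * smoker + 0.05 * age60)  # toy risk model

for label, mask in [("all men", np.ones(n, dtype=bool)),
                    ("heavy smokers", smoker),
                    ("men of age 60", age60),
                    ("heavy smokers of age 60", smoker & age60)]:
    print(f"{label:>24}: P(death) ~ {died[mask].mean():.3f}")
```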
 
  • #232
Mentz114 said:
If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

Right, that's the way I defined it: ##x = \frac{m}{N} - p##. So ##x = 0## corresponds to the relative frequency ##\frac{m}{N}## being equal to the probability ##p##.

Anyway, the point is that when ##N## is large, ##\frac{m}{N}## is very likely to be nearly equal to ##p##. But there is no guarantee.
 
  • #233
A. Neumaier said:
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability that a man (heavy smoker, age 60) will die of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of the ensemble of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
Naturally this is entirely correct. So it is sensible to talk about a single case when the ensemble is specified.

The ensemble of identically tossed identical coins is one ensemble and its members 'inherit' from only this ensemble. So obviously a probability distribution belongs to the ensemble, but describes the individuals. So it is sensible to talk about an individual.

The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Likewise @stevendaryl's assertion that subjective probabilities are essential in physics.
 
  • #234
Mentz114 said:
The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Then you're probably more of a Bayesian at heart o0)

I think it's meaningful to talk about probabilities of single events too. It seems to be a common position of so-called frequentists to assert that the probability of a single event is meaningless. I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.

Of course if we want to experimentally determine a probability then a single event is somewhat useless, and we're going to need lots of trials. But I don't see why that should prevent us from talking meaningfully about probabilities applied to single events.
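To be concrete about the beamsplitter example (a minimal sketch of my own, using one standard convention for the 50:50 unitary):

```python
import numpy as np

# My own sketch: a lossless 50:50 beamsplitter maps the single-photon
# input amplitudes by a unitary; the detection probability in each
# output arm is |amplitude|^2.

BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)  # one common 50:50 convention

amps_in = np.array([1.0, 0.0])  # single photon entering port 0
amps_out = BS @ amps_in

print(np.abs(amps_out) ** 2)    # [0.5 0.5]
```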

Getting a precise technical definition of probability (or perhaps more specifically randomness) is also, surprisingly perhaps, non-trivial and essentially recursive as far as I can see.

David MacKay discusses these issues and gives some great examples of the Bayesian vs. frequentist approaches in his fantastic book "Information Theory, Inference and Learning Algorithms", which you can read online:

http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
 
  • #235
A. Neumaier said:
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested since different subjects can assign arbitrary probabilities but there will be only one outcome independent of anyone's probability.

And under repetition, there will be only one relative frequency, and among all subjective probabilities only those that match the observed relative frequency within the statistically correct uncertainty are scientific. All others are unscientific, though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
This is also a wrong argument that you hear very often. Just because the notion of state in QT has a probabilistic meaning, that doesn't mean that the association of a state to a physical situation is subjective. This becomes clear when you step back from the formalism for a moment and think about what the state means concretely in the lab: in an operational sense it's an equivalence class of preparation procedures, and the preparation can completely determine the state, which is described in the formalism by a pure state. That means that by your preparation procedure you determine a complete set of compatible observables with certain values. This is possible only for very simple systems, e.g., the protons in the LHC, which have a pretty well-determined momentum. Already their polarization is not determined, and thus you do not have a complete preparation of the proton state, but you describe them as unpolarized. This can, of course, be checked in principle. If you find a polarization, you correct your probabilistic description, but it's not subjective. You can always gain information about a system (sometimes implying that you change the state, due to the interaction between measurement apparatus and system that is necessary to gain the information you want).

Other systems, particularly macroscopic many-body systems, are very difficult to prepare in a pure state, and thus you associate mixed states based on the (incomplete) information you have. Here the choice of the statistical operator is not unique, but you can use objective concepts to determine one, e.g., the maximum-entropy principle, which associates the state of "least prejudice" taking into account the constraints given by the available information on the system. Whether this state is a good guess or not is again subject to observations, i.e., you can test the hypothesis with clear, objective statistical methods and refine it. E.g., if you have a cup of tea on your desk, sitting there for a while so that at least it's not moving anymore, it's a good hypothesis to assume that it is in (local) thermal equilibrium. Then you measure its temperature (maybe even at different places within the cup) and check whether the hypothesis is good or not. You can also determine the temperature of the surroundings to see whether the cup of tea is even in equilibrium with the rest of your office, and so on.

I'd rather call it "uncertainty" than "subjectivity" when you don't have complete information to determine the state. In the end, experiments and careful observations always have to verify your "educated guess" about the association of a statistical operator with the real situation in nature. Physics is an empirical (and objective!) natural science!
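To illustrate the maximum-entropy principle mentioned above, here is a minimal sketch of my own (toy energy levels and a made-up target mean energy): the maximum-entropy state compatible with a given ##\langle H \rangle = E## is the Gibbs state ##\rho = e^{-\beta H}/Z##, with ##\beta## fixed numerically by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

# My own toy sketch: find the Gibbs populations (the max-entropy state
# in the energy eigenbasis) that reproduce a given mean energy.

E_levels = np.array([0.0, 1.0, 2.0, 5.0])  # toy energy eigenvalues
E_target = 1.2                             # assumed measured mean energy

def mean_energy(beta):
    w = np.exp(-beta * E_levels)
    return np.sum(E_levels * w) / np.sum(w)

# Solve <H>(beta) = E_target; the bracket covers both signs of beta
beta = brentq(lambda b: mean_energy(b) - E_target, -10.0, 10.0)
w = np.exp(-beta * E_levels)
rho_diag = w / w.sum()  # Gibbs populations

print(f"beta = {beta:.4f}, populations = {rho_diag.round(4)}")
print(f"check <H> = {np.sum(E_levels * rho_diag):.4f}")  # ~1.2
```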
 
  • #236
A. Neumaier said:
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
Sure, that's why I never understood all this talk about Bayesianism, let alone its extreme form in QT, known as QBism ;-)). If I want to test a probabilistic statement, I have to "collect enough statistics" to test the hypothesis. That's the easy part of empirical science: you repeat the experiment on a large sample of equally prepared (as well as you can, at least) objects and measure, as well as you can, the observables in question to test the hypothesis. Much more complicated is the reliable estimate of the systematic errors ;-).
 
  • #237
A question for @A. Neumaier :

Suppose I perform a measurement about which I have no theoretical knowledge, except that only two results are a priori possible: result A and result B. Suppose that I repeat the measurement 10 times and get A each time. Now I want to use this result to make a prediction about future measurements. What is the confidence that I will get A when I perform the measurement next time?

Now consider a variation in which I perform only one measurement and get A. What is now the confidence that I will get A when I perform the measurement next time?
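For concreteness, one standard Bayesian benchmark (Laplace's rule of succession, which assumes a uniform prior on the unknown probability of A, itself a subjective choice) gives the following; I sketch it only as a point of comparison:

```python
# My own sketch of Laplace's rule of succession, one standard Bayesian
# answer to the question above: with a uniform (Beta(1,1)) prior on the
# unknown probability of A, after n trials that all gave A, the
# posterior predictive probability of A next time is (n+1)/(n+2).

def prob_next_A(n_A, n_trials):
    """Posterior predictive P(next result = A) under a Beta(1,1) prior."""
    return (n_A + 1) / (n_trials + 2)

print(prob_next_A(10, 10))  # 11/12 ~ 0.917 after ten A's
print(prob_next_A(1, 1))    # 2/3  ~ 0.667 after a single A
```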
 
  • #238
Simon Phoenix said:
I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.
This is indeed meaningful since, according to standard grammar, "a photon" is an anonymous photon from an ensemble, just like "a person" doesn't specify which person.

Once one carefully defines the language one gets rid of many of the apparent paradoxes caused by sloppy conceptualization. See also the thread Quantum mechanics is not weird, unless presented as such.
 
  • #239
Mentz114 said:
So obviously a probability distribution belongs to the ensemble, but describes the individuals.
It describes the anonymous individuals collectively (as an ensemble) but no single one. To use the property of the ensemble for a particular case is a common thing but has no basis in the formalism and therefore leads to paradoxes when pushed to the extreme.
 
  • #240
For ensembles we have statistics. Probability is a model for the individual case, based on the statistics of an ensemble.
 
  • #241
zonde said:
For ensembles we have statistics. Probability is a model for the individual case, based on the statistics of an ensemble.
No. Probability is the theoretical tool in terms of which statistics is formulated. For individual cases we just have observations, together with a sloppy (or subjective) tradition of misusing the notion of probability.
 
  • #242
A. Neumaier said:
No. Probability is the theoretical tool in terms of which statistics is formulated. For individual cases we just have observations, together with a sloppy (or subjective) tradition of misusing the notion of probability.

The subjective treatment of probability is anything but sloppy. It's much more careful than the usual frequentist approach.
 
  • #243
zonde said:
By physical collapse, do you mean that measurement of Alice's photon changes the polarization of Bob's photon? Meaning that if initially we model Bob's mixed state as a statistical mixture of orthogonal pure states H/V, then after Alice's measurement in the H'/V' basis the components of Bob's mixed state change to the H'/V' basis, right?
Let's say that Alice always "measures" first. Then when the photon pair interacts with her polarizer, it prepares the state for both Alice and Bob. I think Simon has said more or less the same thing.

Interestingly, I saw yesterday a Danish TV program from 2013, where the main message seemed to be that people should just accept non-locality a la Bohr. I did not recognize the other people talking there, but they did have Zeilinger. They also had "Bohr" and "Einstein" traveling back and forth on a train, discussing Bohr's ideas and whether the moon is there when nobody is looking. "Bohr" just said that "Einstein" can't prove it.
 
  • #244
vanhees71 said:
the preparation can completely determine the state, which is described in the formalism by a pure state.
In most cases, when the model is sufficiently accurate, only by a mixed state. Whatever one prepares, the state is objectively given by the experimental setting. No subjective interpretation enters, except for the choice of a level of detail and accuracy with which the situation is modeled.
vanhees71 said:
the protons in the LHC, which have a pretty well-determined momentum
Even the states of protons will generally be mixed, since their position/momentum uncertainty is larger than that required for a pure state.
vanhees71 said:
you associate mixed states based on the (incomplete) information you have.
No. Otherwise the state would change if the experimenter had a stroke and forgot the information, while the assistant who completes the experiment has not yet read the experimental logbook where this information was recorded.

One associates mixed states based on the knowledge (or hope) that these mixed states correctly describe the experimental situation. The predictions with a mixed state will be correct if and only if this mixed state actually describes the experiment, and this is completely independent of the knowledge various people have.

Introducing talk about knowledge introduces a nonscientific, subjective aspect into the setting that is completely spurious. What counts is the knowledge that Nature has, not that of one of the persons involved in an experiment. Whose knowledge should count in the case of collision experiments at CERN, where most experimental information is gathered completely automatically and nobody ever looks at all the details?
 
  • #245
A. Neumaier said:
In most cases, when the model is sufficiently accurate, only by a mixed state. Whatever one prepares, the state is objectively given by the experimental setting. No subjective interpretation enters, except for the choice of a level of detail and accuracy with which the situation is modeled.

That's like saying no subjective interpretation enters, other than the parts that are subjective.
 
  • #246
A basic question from a beginner. Two polarization-entangled photons are generated, and set off in opposite directions across the universe. One of them bumps into a heavenly body and gets absorbed by one of its atoms, displacing an electron into a higher orbit. And then no longer exists as a photon. What happens to the other, still out in free space?
 
  • #247
jeremyfiennes said:
A basic question from a beginner. Two polarization-entangled photons are generated, and set off in opposite directions across the universe. One of them bumps into a heavenly body and gets absorbed by one of its atoms, displacing an electron into a higher orbit. And then no longer exists as a photon. What happens to the other, still out in free space?

The simplest answer is: nothing at all happens to the other photon. That's according to some interpretations of QM. OTOH other interpretations might say its polarization wavefunction collapses. (Of course its energy or direction wouldn't be affected.)

To avoid that interpretation issue, change the question to "if we measure the other photon's polarization, can we say anything about the result?" That depends on whether the (first photon's) absorption is considered a measurement. Some interpretations would say it is, others not.

To avoid that interpretation issue, let's assume a scientist observes the "heavenly body" atom after the photon is absorbed. With an appropriate extremely sensitive detector, he can theoretically determine what its polarization was. All interpretations agree that constitutes a measurement.

Then the other photon would definitely be measured with the expected "entangled" polarization. Typically, opposite to the first photon.

AFAIK.
 
  • #248
Ok. Thanks. I got the measurement bit. My main doubt is that I have read that entangled photons are "forever entangled". But what if one is absorbed, and hence ceases to exist as a photon, before 'forever' expires? A further, more basic question arises from this. If a measurement is made on one photon, collapsing the common wave function and determining the polarization state of the other, are the photons still entangled after that?
 
  • #249
Excuse my ignorance: what is "AFAIK"?
 
  • #250
jeremyfiennes said:
Ok. Thanks. I got the measurement bit. My main doubt is that I have read that entangled photons are "forever entangled". But what if one is absorbed, and hence ceases to exist as a photon, before 'forever' expires? A further, more basic question arises from this. If a measurement is made on one photon, collapsing the common wave function and determining the polarization state of the other, are the photons still entangled after that?

Particles typically cease to be entangled when a measurement is performed on either member of a pair. That is a general statement, and there are a number of caveats to consider. For one, no one knows the precise moment at which entanglement ceases. Also, a particle can be measured in one basis and remain entangled in another.
 