Undergrad Is Polarisation Entanglement Possible in Photon Detection?

Summary
The discussion centers on the nature of photon polarization states, particularly in the context of entangled photons. When the polarization state of a photon is unknown, it is better described as a mixed state, represented by a density matrix, than as a superposition of states. For entangled photons, while the composite system can be in a pure state, each individual photon is in a mixed state because of the entanglement. The distinction between superposition and mixture is crucial: a superposition is itself a definite pure state with a definite probability distribution for measurement outcomes, whereas a mixture encodes ignorance about which state was prepared. Ultimately, the conversation highlights the complexities of quantum states and the importance of understanding the difference between pure and mixed states in quantum mechanics.
  • #211
Demystifier said:
Just because the assignment of probability to a single event is subjective and cannot be checked does not mean it's meaningless. Such a Bayesian subjective assignment of probability may be useful in making decisions. This is something that people do (often intuitively and unconsciously) every day. (For instance, I have to buy shoes for my wedding (and I was never buying wedding shoes before), so have to decide which shop I will visit first. I choose the one for which I estimate a larger probability of finding shoes I will be satisfied with.)
Yes, but buying shoes is not physics.
 
  • Like
Likes Mentz114
  • #212
zonde said:
You don't have to speak about actualities. Meaning you don't care how to give realistic model of interference.

Well, it's a pure fact that quantum mechanical probabilities involve summing over possibilities. That's the basis for Feynman's path integral formulation, but it's true for any formulation: the probability amplitude ##\psi(A,B,t_0,t)## to go from state ##A## at time ##t_0## to state ##B## at time ##t## is equal to the sum over a complete set of intermediate states ##C## of ##\psi(A,C,t_0,t_1)\,\psi(C,B,t_1,t)##, where ##t_0 < t_1 < t##.
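A minimal numerical sketch of that composition rule (a toy 3-state system with randomly generated propagators; all names and values here are illustrative, not from the post): summing the amplitude over a complete set of intermediate states ##C## is just matrix multiplication of the two propagators.

```python
# psi(A,B; t0,t) = sum_C psi(A,C; t0,t1) * psi(C,B; t1,t)  is matrix multiplication
# of the two propagators, here modelled as random unitaries on a toy 3-state system.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """Random n x n unitary via QR decomposition (a stand-in for a propagator)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

n = 3                      # number of basis states
U1 = random_unitary(n)     # propagator from t0 to t1, U1[C, A] = psi(A, C; t0, t1)
U2 = random_unitary(n)     # propagator from t1 to t,  U2[B, C] = psi(C, B; t1, t)

A, B = 0, 2                # initial and final basis states
direct = (U2 @ U1)[B, A]                                  # composed propagator
summed = sum(U2[B, C] * U1[C, A] for C in range(n))       # explicit sum over C
print(np.isclose(direct, summed))                         # True
```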
 
  • #213
stevendaryl said:
If the actual ensemble is finite (which it always is), then in reality, you have the same problem as single events, which is how to make judgments based on finite data.
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision. This is why thermodynamics makes definite predictions: it averages over ##10^{23}## molecules.

And this is why repeatability is the hallmark of scientific work. If something is not repeatable in 1 out of 1000 cases, one usually ignores the single exception, attributing it to side effects unaccounted for (which is indeed what it boils down to, since we can't know the precise state of the universe, which evolves deterministically).
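A rough back-of-envelope sketch of why averaging over ##10^{23}## molecules gives effectively deterministic predictions, assuming the usual ##1/\sqrt{N}## scaling of relative fluctuations of an average over ##N## independent contributions:

```python
# Relative fluctuation of an average over N independent contributions scales like 1/sqrt(N).
import math

for N in (1e3, 1e6, 1e23):
    print(f"N = {N:.0e}:  1/sqrt(N) = {1 / math.sqrt(N):.1e}")
# For N ~ 1e23 the relative fluctuation is ~3e-12, far below any realistic measurement error.
```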
 
  • #214
stevendaryl said:
You assign (subjective) probabilities to initial conditions, and then you evolve them in time using physics, to get derived probabilities for future conditions. There's plenty of physics involved.
But probabilities of single events are meaningless; hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested, since different subjects can assign arbitrary probabilities but there will be only one outcome, independent of anyone's probability.

And under repetition there will be only one relative frequency, and among all subjective probabilities only those that match the observed relative frequency within the statistically correct uncertainty are scientific. All others are unscientific, though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
 
  • #215
A. Neumaier said:
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision.

I disagree. There are many aspects of assessing data that are subjective. Is it really an ensemble of identically prepared systems? Is the system really in equilibrium?

I think you're wrong on two counts: (1) that subjectivity makes it nonscientific, and (2) that it is possible to eliminate subjectivity. I don't think either is true.
 
  • Like
Likes Demystifier
  • #216
A. Neumaier said:
But probabilities of single events are meaningless; hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

I'm saying that they are not meaningless, and in fact it is inconsistent to say that they are meaningless. If probabilities for single events are meaningless, then probabilities for 10 events are meaningless, and probabilities for 10,000 events are meaningless. Any finite number of events would be equally meaningless.

Garbage in: Probabilities for single events are meaningless.
Garbage out: Probabilities for any finite number of events are meaningless.
 
  • #217
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
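A quick simulation of that frequency definition, assuming a fair coin (the sample size and seed are arbitrary): the running ratio of heads to tosses settles toward 1/2 as the number of tosses grows.

```python
# Running relative frequency of heads in repeated simulated fair-coin tosses.
import numpy as np

rng = np.random.default_rng(1)
tosses = rng.integers(0, 2, size=100_000)                 # 1 = heads, 0 = tails
running_freq = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

for n in (10, 100, 10_000, 100_000):
    print(f"after {n:>7} tosses: relative frequency = {running_freq[n - 1]:.4f}")
# The ratio drifts toward 0.5 as the number of tosses grows (law of large numbers).
```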
 
  • #218
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
but it can increase only to one, as otherwise one has multiple coin tosses.
 
  • #219
A. Neumaier said:
but it can increase only to one, as otherwise one has multiple coin tosses.
You misunderstand what I wrote. I amend it thus

... the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely, if I performed this.

It is the empirical definition of probability. I thought it was standard.
 
  • #220
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.

But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
 
  • #221
stevendaryl said:
But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
1) If there is no limit, then the distribution has no first moment, i.e. ##\langle x \rangle## is undefined and no predictions are possible (for instance the Cauchy pdf).
2) Yes. Just like ##\pi##, we can only get estimates.
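A small illustration of point 1), assuming a standard Cauchy distribution: because the first moment does not exist, sample means never settle down, no matter how large the sample.

```python
# Sample means of Cauchy draws do not converge: the distribution has no first moment.
import numpy as np

rng = np.random.default_rng(2)
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n = {n:>7}:  Cauchy sample mean = {rng.standard_cauchy(n).mean():+9.2f}")
# The sample means keep jumping around even for very large n; for a distribution with a
# finite mean (such as a coin toss) they would settle near the true mean instead.
```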
 
  • #222
Mentz114 said:
if I performed this.
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
 
  • #223
Mentz114 said:
It is the empirical definition of probability. I thought it was standard.
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
 
  • #224
Mentz114 said:
1) If there is no limit, then the distribution has no first moment, i.e. ##\langle x \rangle## is undefined and no predictions are possible (for instance the Cauchy pdf).

What I mean is that there is no guarantee that when flipping a coin repeatedly, the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is ##p##, then the probability that ##N## independent coin flips will yield a relative frequency of heads much different from ##p## goes to zero in the limit as ##N \rightarrow \infty##. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
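A Monte-Carlo sketch of that statement, with a fair coin and a tolerance of 0.01 chosen purely for illustration: the probability that the relative frequency lands farther than the tolerance from ##p## shrinks as ##N## grows, but is never exactly zero for finite ##N##.

```python
# Monte-Carlo check: P(|m/N - p| > eps) shrinks as N grows (weak law of large numbers).
import numpy as np

rng = np.random.default_rng(3)
p, eps, trials = 0.5, 0.01, 20_000

for N in (100, 1_000, 10_000, 100_000):
    m = rng.binomial(N, p, size=trials)              # number of heads in each of `trials` runs
    frac_outside = np.mean(np.abs(m / N - p) > eps)  # fraction of runs with frequency far from p
    print(f"N = {N:>6}:  estimated P(|m/N - p| > {eps}) = {frac_outside:.4f}")
# The probability of a large deviation goes to zero, but any single run can still deviate.
```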
 
  • #225
A. Neumaier said:
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
 
  • #226
stevendaryl said:
What I mean is that there is no guarantee that when flipping a coin repeatedly, the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is ##p##, then the probability that ##N## independent coin flips will yield a relative frequency of heads much different from ##p## goes to zero in the limit as ##N \rightarrow \infty##. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
Have you got some equations to back this up?
I guess I'll just have to ride my luck.
 
  • #227
Mentz114 said:
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
Then it is a property of the latter but not of the former.

It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
 
  • #228
Mentz114 said:
Have you got some equations to back this up?
I guess I'll just have to ride my luck.

It's pretty standard. If you make ##N## trials, each with probability of success ##p##, then the probability that you will get ##m## successes is:

$$p_{m,N} = \frac{N!}{m! (N-m)!}\, p^{m} (1-p)^{N-m}$$

Now write ##m = N (p + x)##. Using Stirling's approximation, we can estimate this for small ##x## to be (up to normalization):

$$p_{m,N} \approx e^{- \frac{x^2}{2\sigma^2}}$$

where (if I've done the calculation correctly) ##\sigma = \sqrt{\frac{p(1-p)}{N}}##.

If ##N## is large, the probability distribution for ##x## (which measures the departure of the relative frequency from ##p##) approaches a strongly peaked Gaussian, whose standard deviation ##\sigma \rightarrow 0##.
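A quick numerical check of that approximation (with the normalization prefactor ##1/(N\sigma\sqrt{2\pi})## written out explicitly, and ##N = 1000##, ##p = 0.5## picked arbitrarily): the exact binomial probability and the Gaussian estimate agree closely near the peak.

```python
# Compare the exact binomial probability p_{m,N} with its Gaussian approximation
#   p_{m,N} ~ exp(-x**2 / (2 * sigma**2)) / (N * sigma * sqrt(2 * pi)),
# where x = m/N - p and sigma = sqrt(p * (1 - p) / N).
import math

N, p = 1_000, 0.5
sigma = math.sqrt(p * (1 - p) / N)

for m in (480, 500, 520):
    exact = math.comb(N, m) * p**m * (1 - p)**(N - m)
    x = m / N - p
    approx = math.exp(-x**2 / (2 * sigma**2)) / (N * sigma * math.sqrt(2 * math.pi))
    print(f"m = {m}: exact = {exact:.5e},  Gaussian approx = {approx:.5e}")
```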
 
  • #229
A. Neumaier said:
It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
 
  • #230
stevendaryl said:
It's pretty standard. If you make ##N## trials, each with probability of success ##p##, then the probability that you will get ##m## successes is: [...] If ##N## is large, the probability distribution for ##x## (which measures the departure of the relative frequency from ##p##) approaches a strongly peaked Gaussian, whose standard deviation ##\sigma \rightarrow 0##.

If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

I have not checked the bias of the binomial estimator ##m/N##, but in the large-sample limit I'll bet (:wink:) it is unbiased too.
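As it happens, the bet can be settled exactly from the binomial formula a few posts up: ##E[m/N] = p## for every ##N##, so the estimator is unbiased without taking any large-sample limit. A minimal check, with parameters chosen arbitrarily:

```python
# Check that the relative frequency m/N is an unbiased estimator of p:
# E[m/N] = sum_m (m/N) * p_{m,N} = p exactly, for any N.
import math

def expected_relative_frequency(N, p):
    return sum((m / N) * math.comb(N, m) * p**m * (1 - p)**(N - m) for m in range(N + 1))

for N, p in ((10, 0.3), (50, 0.5), (200, 0.8)):
    print(f"N = {N:>3}, p = {p}:  E[m/N] = {expected_relative_frequency(N, p):.6f}")
# Each expectation equals p (up to floating-point rounding), so the bias is zero for any N.
```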
 
  • #231
Mentz114 said:
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability that a man (a heavy smoker, age 60) dies of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of the ensemble of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
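A toy numerical version of this point, using entirely made-up records (nothing here is real mortality data): the same individual is assigned a different relative frequency depending on which reference class he is counted in.

```python
# Toy illustration (fabricated data): the relative frequency assigned to one individual
# depends on which reference class (ensemble) he is counted in.
records = [
    # (smoker, age, died_within_5_years)
    (True, 60, True), (True, 60, False), (True, 45, False), (False, 60, False),
    (False, 60, True), (False, 30, False), (True, 60, True), (False, 45, False),
]

def rate(predicate):
    selected = [died for smoker, age, died in records if predicate(smoker, age)]
    return sum(selected) / len(selected)

print("all men:                ", rate(lambda s, a: True))           # 0.375
print("heavy smokers:          ", rate(lambda s, a: s))              # 0.5
print("men of age 60:          ", rate(lambda s, a: a == 60))        # 0.6
print("heavy smokers of age 60:", rate(lambda s, a: s and a == 60))  # ~0.667
# Same man, four different ensembles, four different numbers.
```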
 
  • #232
Mentz114 said:
If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

Right, that's the way I defined it: ##x = \frac{m}{N} - p##. So ##x = 0## corresponds to the relative frequency ##\frac{m}{N}## being equal to the probability ##p##.

Anyway, the point is that when ##N## is large, ##\frac{m}{N}## is very likely to be nearly equal to ##p##. But there is no guarantee.
 
  • #233
A. Neumaier said:
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability that a man (a heavy smoker, age 60) dies of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of the ensemble of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
Naturally this is entirely correct. So it is sensible to talk about a single case when the ensemble is specified.

The ensemble of identically tossed identical coins is one ensemble, and its members 'inherit' from only this ensemble. So obviously a probability distribution belongs to the ensemble, but describes the individuals. So it is sensible to talk about an individual.

The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Likewise @stevendaryl's assertion that subjective probabilities are essential in physics.
 
  • #234
Mentz114 said:
The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Then you're probably more of a Bayesian at heart o0)

I think it's meaningful to talk about probabilities of single events too. It seems to be a common position of so-called frequentists to assert that the probability of a single event is meaningless. I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.
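For what it's worth, here is where the 1/2 comes from in the standard description, as a minimal sketch (this uses one common convention for the 50:50 beamsplitter unitary; the mode labels are mine):

```python
# Single photon on a 50:50 beamsplitter: detection probability in each output arm.
import numpy as np

# One common convention for the 50:50 beamsplitter unitary acting on the two spatial modes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon_in = np.array([1, 0])          # photon enters in mode 0, nothing in mode 1
photon_out = BS @ photon_in           # output amplitudes in the two arms

print(np.abs(photon_out) ** 2)        # [0.5 0.5] -> probability 1/2 in each output arm
```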

Of course if we want to experimentally determine a probability then a single event is somewhat useless, and we're going to need lots of trials. But I don't see why that should prevent us from talking meaningfully about probabilities applied to single events.

Getting a precise technical definition of probability (or perhaps more specifically randomness) is also, surprisingly perhaps, non-trivial and essentially recursive as far as I can see.

David MacKay discusses these issues and gives some great examples of the Bayes vs. Frequency approaches in his fantastic book "Information Theory, Inference and Learning Algorithms" which you can read online

http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
 
  • #235
A. Neumaier said:
But probabilities of single events are meaningless; hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested, since different subjects can assign arbitrary probabilities but there will be only one outcome, independent of anyone's probability.

And under repetition there will be only one relative frequency, and among all subjective probabilities only those that match the observed relative frequency within the statistically correct uncertainty are scientific. All others are unscientific, though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
This is also a wrong argument you hear very often. Just because the notion of state in QT has a probabilistic meaning doesn't mean that the association of a state to a physical situation is subjective. This becomes clear when you step back from the formalism for a moment and think about what the state means concretely in the lab: in an operational sense it's an equivalence class of preparation procedures, and the preparation can completely determine the state, which is described in the formalism by a pure state. That means that by your preparation procedure you determine a complete set of compatible observables with certain values. This is possible only for very simple systems, e.g., the protons in the LHC, which have a pretty well-determined momentum. Already their polarization is not determined, and thus you do not have a complete preparation of the proton state, but you describe them as unpolarized. This can, of course, be checked in principle. If you find a polarization, you correct your probabilistic description, but it's not subjective. You can always gain information about a system (sometimes implying that you change the state due to the interaction between the measurement apparatus and the system, which is necessary to gain the information you want).

Other systems, particularly macroscopic many-body systems, are very difficult to prepare in a pure state, and thus you associate mixed states based on the (incomplete) information you have. Here the choice of the statistical operator is not unique, but you can use objective concepts to determine one, e.g., the maximum-entropy principle, which associates the state of "least prejudice" taking into account the constraints given by the available information on the system. Whether this state is a good guess or not is again subject to observations, i.e., you can test the hypothesis, again with clear objective statistical methods, given by the association of a statistical operator to the information, and refine this hypothesis. E.g., if you have a cup of tea on your desk, sitting there for a while so that at least it's not moving anymore, it's a good hypothesis to assume that it is in (local) thermal equilibrium. Then you measure its temperature (maybe even at different places within the cup) and check whether the hypothesis is good or not. You can also determine the temperature of the surroundings to see whether the cup of tea is even in equilibrium with the rest of your office, and so on.

I'd rather call it "uncertainty" than "subjectivity" when you don't have complete information to determine the state. In the end, experiments and careful observations always have to verify your "educated guess" about the association of a statistical operator with the real situation in nature. Physics is an empirical (and objective!) natural science!
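A small sketch of the maximum-entropy assignment in the simplest possible case, assuming a toy two-level system with made-up parameters: among density matrices with the same mean energy, the Gibbs state ##e^{-\beta H}/Z## has the largest von Neumann entropy.

```python
# Maximum-entropy state for a toy two-level system with a fixed mean energy:
# among density matrices with the same <H>, the Gibbs state exp(-beta*H)/Z
# has the largest von Neumann entropy.
import numpy as np

E = np.array([0.0, 1.0])                   # toy two-level energies
beta = 1.0

weights = np.exp(-beta * E)
gibbs = np.diag(weights / weights.sum())   # Gibbs state (diagonal in the energy basis)
H = np.diag(E)
mean_E = float(np.trace(gibbs @ H))

def entropy(rho):
    """von Neumann entropy -Tr(rho ln rho)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# Another valid density matrix with the same mean energy (extra off-diagonal coherence):
p1 = gibbs[1, 1]
other = np.array([[1 - p1, 0.3],
                  [0.3,     p1]])

print("Gibbs: <H> =", round(mean_E, 4), " S =", round(entropy(gibbs), 4))
print("other: <H> =", round(float(np.trace(other @ H)), 4), " S =", round(entropy(other), 4))
# The Gibbs state has the larger entropy among states with this mean energy.
```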
 
  • #236
A. Neumaier said:
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
Sure, that's why I never understood all this talk about Bayesianism, let alone its extreme form in QT, known as QBism ;-)). If I want to test a probabilistic statement, I have to "collect enough statistics" to test the hypothesis. That's the easy part of empirical science: you repeat the experiment on a large sample of equally prepared (as well as you can, at least) objects and measure the observables in question as accurately as you can to test the hypothesis. Much more complicated is the reliable estimate of the systematic errors ;-).
 
  • #237
A question for @A. Neumaier:

Suppose I perform a measurement about which I have no theoretical knowledge, except that only two results are a priori possible: result A and result B. Suppose that I repeat the measurement 10 times and get A each time. Now I want to use this result to make a prediction about future measurements. What is the confidence that I will get A when I perform the measurement next time?

Now consider a variation in which I perform only one measurement and get A. What is now the confidence that I will get A when I perform the measurement next time?
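One standard Bayesian answer to exactly this kind of question (offered as an illustration, not as anyone's position in the thread) is Laplace's rule of succession: with a uniform prior on the unknown probability of A, after ##k## outcomes A in ##n## trials the predictive probability of A on the next trial is ##(k+1)/(n+2)##.

```python
# Laplace's rule of succession: with a uniform prior on the unknown probability of A,
# after observing k outcomes A in n trials, P(next outcome is A) = (k + 1) / (n + 2).
def prob_next_A(k, n):
    return (k + 1) / (n + 2)

print(prob_next_A(10, 10))   # ten A's in ten trials   -> 11/12 ~ 0.917
print(prob_next_A(1, 1))     # one A in a single trial ->  2/3  ~ 0.667
```

How much weight those numbers deserve of course depends on the choice of prior, which is exactly where the subjectivity debate above enters.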
 
  • #238
Simon Phoenix said:
I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.
This is indeed meaningful since, according to standard grammar, "a photon" is an anonymous photon from an ensemble, just like "a person" doesn't specify which person.

Once one carefully defines the language one gets rid of many of the apparent paradoxes caused by sloppy conceptualization. See also the thread Quantum mechanics is not weird, unless presented as such.
 
  • #239
Mentz114 said:
So obviously a probability distribution belongs to the ensemble, but describes the individuals.
It describes the anonymous individuals collectively (as an ensemble), but no single one. Using a property of the ensemble for a particular case is common practice, but it has no basis in the formalism and therefore leads to paradoxes when pushed to the extreme.
 
  • #240
For ensembles we have statistics. Probability is a model for the individual case, based on the statistics of the ensemble.
 
