Is action at a distance possible as envisaged by the EPR Paradox?

Thread summary:
The discussion centers on the possibility of action at a distance as proposed by the EPR Paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and many-worlds interpretation, while emphasizing that Bell's theorem suggests no local hidden variables can account for quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #721
ThomasT said:
...2. No Local Realistic physical theory can ever reproduce all of the predictions of Quantum Mechanics.

We KNOW that 2. is incorrect, because viable LR models of entanglement exist, and they remain unrefuted. If you refuse to acknowledge them, then so what. They exist nonetheless.

I want readers of this thread to understand this. There are LR theories of entanglement which reproduce all of the predictions of qm. They're in the preprint archives at arxiv.org, and there are some that have even been published in peer reviewed journals. Period. If you, DrChinese, want to dispute this, then it's incumbent on you, or anyone who disputes these claims, to analyze the theories in question and refute their claims regarding locality or realism or compatibility with qm.

If you don't want to inform casual readers of this thread of this fact, then fine. I've informed them.

And just so there's no confusion about this, let me say it again. Bell's theorem does not rule out local realistic theories of entanglement. If DrChinese disagrees with this, then I want you, the casual reader of this thread, to demand that DrChinese analyze a purported LR theory and show that it either isn't local or realistic or both or that it doesn't reproduce qm predictions.
...

There are, at least, a dozen different LR models of entanglement in the literature which reproduce the qm predictions. Of course, if you won't look at any of them then 10^1000 wouldn't be enough. Would it?

All you have to do is look at one. If you think it doesn't qualify as a local or a realistic model, then you can point out why (but don't require that it produce incorrect predictions, because that's just silly). If you're unwilling to do that, then your Einstein quote is just fluffy fuzziness wrt your position on LR models of entanglement.

I want you to refute an LR theory of entanglement that I present. You've been called out. Will you accept the challenge?

I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.

Perhaps you will show me where any of the top scientific teams have written something to the effect of "local realism is tenable after Bell". Because all of the teams I know about state the diametric opposite. Here is Zeilinger (1999) in a typical quote of his perspective:

"Second, a most important development was due to John Bell (1964) who continued the EPR line of reasoning and demonstrated that a contradiction arises between the EPR assumptions and quantum physics. The most essential assumptions are realism and locality. This contradiction is called Bell’s theorem."

I would hope you would recognize the above as nearly identical to my line of reasoning. So if you know of any hypothesis that contradicts the above AND yields a local realistic dataset, please give a link and I will give you my thoughts. But I cannot critique that which does not exist. (Again, an exception for the De Raedt model which has a different set of issues entirely.)
 
  • #722
my_wan said:
Because precisely what we can presume about reality, counterfactuals, context, etc., plays a large role in what we can consider to get GR <> QM to GR = QM.


Maybe you’re right. Personally, I think semantic discussions on "reality" could keep you occupied for a thousand years, without substantial progress. What if Einstein presented something like this:

"The causal reality for the joint probabilities of E having a relation to M, in respect of the ideal context, is strongly correlated to C."

Except for the very fine "sophistication" – could this be of any real use?

Maybe I’m wrong, and Einstein indeed used this very method to get to:

E = mc²

... I don’t know ...

But wrt "reality", I think we have a very real problem, in that the discordance for parallel (aligned) settings is 0:

N(0°, 0°) = 0​

If we then turn one by minus thirty degrees and the other by plus thirty degrees, from a classical point of view we should get:

N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)​

Meaning that the discordance when both are turned cannot be greater than the sum of the two turned separately, which is very logical and natural.

But this is NOT true according to quantum mechanical predictions and experiments!
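For concreteness, here is a quick numerical check, assuming the standard QM mismatch (discordance) rate sin^2(a-b) for polarization-entangled photon pairs; the function N below just mirrors the notation above:

Code:
import numpy as np

# QM discordance (mismatch) rate for polarization-entangled pairs,
# taken here as sin^2 of the angle between the two polarizer settings.
def N(a_deg, b_deg):
    return np.sin(np.radians(a_deg - b_deg)) ** 2

lhs = N(+30, -30)            # both turned: sin^2(60) = 0.75
rhs = N(+30, 0) + N(0, -30)  # each turned separately: 0.25 + 0.25 = 0.50
print(lhs, rhs, lhs <= rhs)  # 0.75 > 0.50, so the classical bound fails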

Even a high school freshman can understand this problem; you don’t have to be "sophisticated" or "intellectually superior", that’s just BS.

Now, to start long die-hard discussions on "elementary optics" to get an illusion of a probable solution is not very bright, not even "sophisticated".

I think most here realize that attacking the mathematics as such cannot be considered "healthy".

To discuss what’s real or not maybe could lead to "something", but it will never change the mathematical reality.

Therefore, the only plausible way 'forward' is to find a 'flaw' in QM, which will be very, very hard since QM is the most precise scientific theory we have. That potential 'flaw' in QM has to be mathematical, not semantic – words won’t change anything about the EPR paradox and the mathematical predictions of QM.

I find your attempts to get a 'classical' explanation for what happens in EPR-Bell experiments very interesting, but how is this ever going to change the real mathematical truth, which we both know is true?
 
  • #723
my_wan said:
1. ThomasT has a point wrt mainstream view on the realism issue. I know very few that take as hard a view on realism as DrC.

2. Only with a very restricted notion of realism and what it entails can this be said. I also never got a response to my objection to calling realistic ways of contextualizing such variables a Fair Sampling argument.

3. I would love to hear a definition of contextual variables. Certain statements made it sound like contextual variables, by definition, meant non-realistic. I never got a response to the question: is velocity a contextual variable?

4. I also never got an objection when I pointed out that straightforward squaring of any vector leads to values that are unavoidably coordinate dependent; that is, it produces different answers, not just the same answer expressed in a different coordinate system. Yet the requirement that a realistic model must model arbitrary detector settings, rather than arbitrary offsets, requires a coordinate-independent square of a vector.

5. To say realism is falsified most certainly is an overreach of what can be ascertained from the facts. I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.

In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.

2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.

3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.

4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?

5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
 
  • #724
my_wan said:
I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.
I’m with you on this one 1000%. This is what we should discuss, not "elementary optics".

I think that it’s overlooked in this thread that this was a major problem for John Bell as well (and I’m going to prove this statement in a few days).

Bell knew that his theorem creates a strong contradiction between QM & SR; one or both must be more or less wrong. Then if QM is more or less wrong, it could mean that Bell’s theorem is also more or less wrong, since it builds its argument on QM predictions.

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.

Don’t you think that interpretations are a little too easy a way out of this? I don’t think John Bell would have agreed with you here...
 
  • #725
DevilsAvocado said:
Bell knew that his theorem creates a strong contradiction between QM & SR; one or both must be more or less wrong. Then if QM is more or less wrong, it could mean that Bell’s theorem is also more or less wrong, since it builds its argument on QM predictions.

Don’t you think that interpretations are a little too easy a way out of this? I don’t think John Bell would have agreed with you here...

Bell shifted a bit on interpretations. I think the majority view is that he supported a Bohmian perspective, but I am not sure he came down fully in any one interpretation. At any rate, I really don't know what we can say about underlying physical mechanisms. We just don't know how nature manages to implement what we call the formalism.

And don't forget that Bell does not require QM to be correct, just that the QM predictions are incompatible with LR predictions. Of course, Bell tests confirm QM to many standard deviations.
 
  • #726
DrChinese said:
... QM predictions are incompatible with LR predictions.

Yes you are right, and this is what causes the dilemma. The Einsteinian argument fails:

no action at a distance (polarisers parallel) ⇒ determinism

determinism (polarisers nonparallel) ⇒ action at a distance

Meaning QM <> SR.
 
  • #727
zonde said:
Hmm, you think that I am questioning 50%/50% statistics?
I don't do that.
No. I understood what you asserted.

zonde said:
I am questioning your statement that "it's reasonable to say ~15% that would have gone left go right, and vice versa."
Yes, I've seen that. The pure case is in fact what I used to empirically verify the assumption.

zonde said:
That is not reasonable or alternatively it is reasonable only if you assume that you have source with even mixture of H and V photons.
And this is where you go wrong again. I stand by my factual statement (not assumption) that randomized photon polarizations will have the same route switching statistics as an even mixture of pure H and V polarizations. I verified it both mathematically and in computer simulations.

Consider, in the pure case where you got it right, where you move a detector setting from 0 to 22.5 degrees. The route switching statistics look like cos^2(22.5) = sin^2(67.5), thus you are correct about the pure polarization pairs at 90 degree offsets. Now notice that cos^2(theta) = sin^2(theta +- 90) for ANY arbitrary theta.

Now add a second pair of pure H and V photon polarizations that is offset 45 degrees from the first pair. At a 0 angle detector setting you've added 50% more photons to be detected from the new H and 50% from the new V polarization beams. Since cos^2(theta) = sin^2(theta +- 90) in ALL cases, the overall statistics have not changed. To add more pure beam pairs without changing overall statistics, you have to add 2 pairs of pure H and V beams at both 22.5 and 67.5 degree offsets. The next step after that requires 4 more pure H and V beams offset equidistant from those 4, then 8 to maintain the same statistics, and so on; simply take the limit. You then end up with a completely randomized set of photon polarizations that exhibits the exact same path switching statistics as the pure H and V case, because cos^2(theta) = sin^2(theta +- 90) for absolutely ALL values of theta.

So if you still don't believe it, show me. If you want a computer program that uses a random number generator to generate randomly polarized photons and send them to a virtual detector, ask. I can write the program pretty quickly. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutoIt installed, you can run the script directly without compiling it.
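Not the offered AutoIt script, but a minimal Python sketch of such a virtual-detector test. It makes explicit the one assumption the whole exchange turns on: how a photon's outcome at one setting is coupled to its outcome at the other. Two common choices are shown; neither is claimed here to be the poster's intended model.

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
delta = np.radians(22.5)  # polarizer moved from 0 to 22.5 degrees

def switch_fraction(pol):
    p0 = np.cos(pol) ** 2          # pass probability at setting 0
    p1 = np.cos(pol - delta) ** 2  # pass probability at setting 22.5
    # Coupling A: outcomes at the two settings are independent draws.
    a = p0 * (1 - p1) + (1 - p0) * p1
    # Coupling B: only the unavoidable |p0 - p1| of photons switch route.
    b = np.abs(p0 - p1)
    return a.mean(), b.mean()

hv = rng.choice([0.0, np.pi / 2], size=N)  # even mixture of pure V and H
rand = rng.uniform(0, np.pi, size=N)       # fully randomized polarizations

for name, pol in (("even H/V mixture", hv), ("random polarizations", rand)):
    a, b = switch_fraction(pol)
    print(f"{name}: coupling A ~{a:.3f}, coupling B ~{b:.3f}")

Under both couplings the even H/V mixture gives ~15%, while the randomized source gives a larger figure (~32% and ~24% respectively), so whether the two source types agree depends entirely on the route-switching rule one assumes, which is the very point at issue between the posters.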

zonde said:
If you have source that consists of even mixture of photons with any polarization then reasonable assumption is that ~25% changed their channel.
False, and that is not an assumption, it is a demonstrable fact. So long as the pure H and V case exhibits those statistics, physically so must the completely randomized case. This fact is central to EPR modeling attempts. If you can demonstrate otherwise, I'll add a sig line to my profile stating that and linking to where you made a fool of me.
 
  • #728
zonde said:
Hmm, you think that I am questioning 50%/50% statistics?
I don't do that. I am questioning your statement that "it's reasonable to say ~15% that would have gone left go right, and vice versa."
That is not reasonable or alternatively it is reasonable only if you assume that you have source with even mixture of H and V photons.
If you have source that consists of even mixture of photons with any polarization then reasonable assumption is that ~25% changed their channel.
I also just noticed you contradicted yourself. You say:
1) ...it is reasonable only if you assume that you have source with even mixture of H and V photons.
2) If you have source that consists of even mixture of photons with any polarization then reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same 2 axes. For a random distribution, there statistically exists both an opposite and a perpendicular case for every possible polarization instance.
 
  • #729
DrChinese said:
In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
Don't have much to refute this with. I've read the arguments and counterarguments; I was more curious about the general opinion among physicists, whether they have published positions on EPR or not.

DrChinese said:
2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.
The question was the reasoning behind labeling any specific form of contextualization of contextual variables a Fair Sampling argument. I'm not even sure what this response has to do with the issue as stated. Though I have previously expressed confusion about how you define precisely what does or doesn't qualify as realism even with that definition. Merely restating the definition doesn't help much. Nor does it indicate whether realistic models can exist that don't respect that definition.

DrChinese said:
3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.
Nice definition, I'll keep that for future reference. I'm well aware that my single polarizer example contains contextual dependencies, yet empirically valid consequences. It was the fact that the contextual values didn't depend on any correlations to anything that was important to the argument. Thus it was limited to refuting a non-local claim, not a realism claim. What it indicates is that a classical mechanism for the nonlinear path switching of uncorrelated photon responses to a single polarizer is required to fully justify a realistic model. I even give the opinion that a mechanistic explanation of the Born rule might be required to pull this off. Some would be happy to just accept the empirical mechanism itself as a local classical optics effect and go from there. I'm not. I'm aware contextuality was not an assumption of Bell. Hence the requirement of some form of classical contextuality to escape the stated consequences of his inequality.

DrChinese said:
4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool ball collision under 2 different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, but to an entirely different answer altogether. For this reason such vector operations are generally avoided, using scalar multiplication instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of vectors, there exists no way of -uniquely- recovering the particular vectors that went into defining it.
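The coordinate-dependence claim is easy to check numerically; a small sketch (componentwise squaring stands in for the vector product being described):

Code:
import numpy as np

def rot(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

v = np.array([3.0, 4.0])
R = rot(30)

print(R @ (v * v))        # square components, then rotate: [-0.21, 18.36]
print((R @ v) * (R @ v))  # rotate, then square components: [ 0.36, 24.64]
print(v @ v, (R @ v) @ (R @ v))  # the scalar square is invariant: 25.0 25.0

The componentwise square gives genuinely different numbers in the two coordinate systems, while the scalar square v·v is invariant, which is why scalar products are the preferred operation, as the post says.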

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.
 
  • #730
my_wan said:
Consider, in the pure case where you got it right, where you move a detector setting from 0 to 22.5 degrees. The route switching statistics look like cos^2(22.5) = sin^2(67.5), thus you are correct about the pure polarization pairs at 90 degree offsets.
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|

my_wan said:
Now notice that cos^2(theta) = sin^2(theta +- 90) for ANY arbitrary theta. Now add a second pair of pure H and V photon polarizations that is offset 45 degrees from the first pair. At a 0 angle detector setting you've added 50% more photons to be detected from the new H and 50% from the new V polarization beams. Since cos^2(theta) = sin^2(theta +- 90) in ALL cases, the overall statistics have not changed.
The same way as above
|cos^2(67.5)-cos^2(45)| and it is not equal to |cos^2(22.5)-cos^2(0)|
and
|sin^2(67.5)-sin^2(45)| and it is not equal to |sin^2(22.5)-sin^2(0)|

so if you add H and V photons that are offset by 45 degrees you change your statistics.

my_wan said:
So if you still don't believe it, show me. If you want a computer program that uses a random number generator to generate randomly polarized photons and send them to a virtual detector, ask. I can write the program pretty quickly. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutoIt installed, you can run the script directly without compiling it.
I would stick to simple example:
Code:
      polarizer at 0    polarizer at 22.5
p=0   cos^2(0-0)  =1    cos^2(0-22.5)  =0.85  difference=0.15
p=45  cos^2(45-0) =0.5  cos^2(45-22.5) =0.85  difference=0.35
p=90  cos^2(90-0) =0    cos^2(90-22.5) =0.15  difference=0.15
p=135 cos^2(135-0)=0.5  cos^2(135-22.5)=0.15  difference=0.35
average difference=0.25

my_wan said:
I also just noticed you contradicted yourself. You say:
1) ...it is reasonable only if you assume that you have source with even mixture of H and V photons.
2) If you have source that consists of even mixture of photons with any polarization then reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same 2 axes. For a random distribution, there statistically exists both an opposite and a perpendicular case for every possible polarization instance.
The statement in bold ("with any polarization") makes the difference between 1) and 2).
 
  • #731
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.

Consider, you have 1 polarizer at 1 setting (0 degrees) with 1 uncorrelated beam pointed at it, such that 50% of the light goes through. You change settings to 22.5 degrees. Now 15% of the V photons switch from going through to not going through the detector, sin^2(22.5). Now at the SAME 22.5 degree setting, you get cos^2(67.5) = 15% more detections from the H photons. 15% lost from V and 15% gained from H. This is even more general in that sin^2(theta) = |cos^2(90-theta)| for all theta. This is NOT a counterfactual measure. This is what you get from the one measure you are getting at the one setting. So you can't use cos from the previous measurement you are not currently measuring. Else it amounts to subtracting cos from a cos that's not even part of the polarizer setting at that time, which breaks its consistency with BI violation statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is DON'T mix the math for both settings for one measurement. This is key to subverting the counterfactuals in BI and still getting the same statistics. Only count what photons you can empirically expect to switch routes upon switching to that ONE setting by counting H adds and V subtracts at that ONE setting.

Then, by noting it's applicable at all thetas it remains perfectly valid for fully randomized photon polarizations at ANY arbitrary setting, provided you are allowed to arbitrarily relabel the 0 point of the non-physical coordinate labels.
 
  • #732
Besides, you can't change my formula, then claim my formula doesn't do what I claimed because the formula you swapped in doesn't. :wink:
 
  • #733
my_wan said:
This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.
Yes, that is only speculation. Nothing straightforwardly testable.

my_wan said:
Consider, you have 1 polarizer at 1 setting (0 degrees) with 1 uncorrelated beam pointed at it, such that 50% of the light goes through. You change settings to 22.5 degrees. Now 15% of the V photons switch from going through to not going through the detector, sin^2(22.5). Now at the SAME 22.5 degree setting, you get cos^2(67.5) = 15% more detections from the H photons. 15% lost from V and 15% gained from H.
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
Question is about this number - 15%.
If you keep insisting that it's 15% because it's 15% both ways, then we can stop our discussion right there.

my_wan said:
This is even more general in that sin^2(theta) = |cos^2(90-theta)| for all theta.
sin(theta)=cos(90-theta) is a trivial trigonometric identity. What do you expect to prove with that?

my_wan said:
This is NOT a counterfactual measure. This is what you get from the one measure you are getting at the one setting. So you can't use cos from the previous measurement you are not currently measuring. Else it amounts to subtracting cos from a cos that's not even part of the polarizer setting at that time, which breaks its consistency with BI violation statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is DON'T mix the math for both settings for one measurement. This is key to subverting the counterfactuals in BI and still getting the same statistics. Only count what photons you can empirically expect to switch routes upon switching to that ONE setting by counting H adds and V subtracts at that ONE setting.
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ... otherwise there is no switching.
 
  • #734
my_wan said:
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.

To me, the (Un)Fair Sampling argument is as follows: "The full universe does not respect Bell's Inequality (or similar), while a sample does. The reason an attribute of the sample is different from that of the universe is that certain data elements are more likely to be detected than others, causing a skewing of the results."

I reject this argument as untenable; however, I would say my position is not generally accepted. A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.

Now, I am not sure how this crept into our discussion except that as I recall, you indicated that this had some relevance to Bell. I think it is more relevant to tests of Bell's Inequality, which we aren't really discussing. So if there is nothing further to this line, we can drop it.
 
  • #735
my_wan said:
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool ball collision under 2 different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, but to an entirely different answer altogether. For this reason such vector operations are generally avoided, using scalar multiplication instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of vectors, there exists no way of -uniquely- recovering the particular vectors that went into defining it.

Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
 
  • #736
DrChinese said:
A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible so it's hard to discuss it. But if you give your reference then maybe we will be able to discuss the point.
 
  • #737
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

Again, you might take a look at the lotto card analogy I offered in post #2 here. If Alice and Bob are each sent scratch lotto cards with a choice of one of three boxes to scratch, and we find that on every trial where they choose the same box to scratch they end up seeing the same fruit, a natural theory would be that the source is always creating pairs of cards that have the same set of "hidden fruits" behind each of the three boxes. But this leads to the conclusion that on the trials where they choose different boxes there should be at least a 1/3 probability they'll see the same fruit, so if the actual observed frequency of seeing the same fruit when they scratch different boxes is some smaller number like 1/4, this can be taken as a falsification of the idea that the identical results when identical boxes are chosen can be explained by each card being assigned identical hidden properties by the source.
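The 1/3 bound is easy to verify by brute force; a tiny enumeration in Python over all possible hidden-fruit assignments (C for cherry, L for lemon):

Code:
from itertools import product, combinations

rates = {}
for hidden in product("CL", repeat=3):       # fruits behind boxes A, B, C
    pairs = list(combinations(range(3), 2))  # 3 ways to scratch different boxes
    same = sum(hidden[i] == hidden[j] for i, j in pairs) / len(pairs)
    rates["".join(hidden)] = same

print(rates)                # CCC -> 1.0, CCL -> 1/3, CLC -> 1/3, ...
print(min(rates.values()))  # 0.333...: no assignment gets below 1/3

So any mixture of hidden assignments predicts same-fruit on different boxes at least 1/3 of the time, and an observed rate like 1/4 rules the whole family out.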
ThomasT said:
Do you doubt that this is the view of virtually all physicists?
Virtually all physicists would agree that the violation of Bell inequalities constitutes a falsification of the kind of theory you describe, assuming you're talking about a purely local theory.
 
  • #738
zonde said:
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible so it's hard to discuss it. But if you give your reference then maybe we will be able to discuss the point.

Here are a couple that may help us:

Theory:
http://www.cs.rochester.edu/~cding/Teaching/573Spring2005/ur_only/GHZ-AJP90.pdf

Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
 
  • #739
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.

Do you doubt that this is the view of virtually all physicists?

Do you see anything wrong with this view?

Sorry, I may have missed this post, and I saw JesseM replying so I thought I would chime in...

The EPR conclusion is most certainly not the view which is currently accepted. That is because the EPR view has been theoretically (Bell) and experimentally (Aspect) rejected. But that was not the case in 1935. At that time, the jury was still out.

What is wrong with this view is that it violates the Heisenberg Uncertainty Principle. Nature does not allow that.
 
  • #740
zonde said:
Yes, that is only speculation. Nothing straightforwardly testable.
It is demonstrably consistent with any test. This consistency comes from the fact that if you take a polarized beam and offset a polarizer in its path (offset defined by the difference between the light's polarization and the polarizer setting), the statistics of what is passed, defined by the light intensity making it through that polarizer, exactly match in all cases the assumptions I am making.

To demonstrate you can use this polarizer applet:
http://www.lon-capa.org/~mmp/kap24/polarizers/Polarizer.htm
Just add a second polarizer and consider the light coming through the first polarizer your polarized beam, which means you double whatever percentage is read, because the 50% lost to the first polarizer doesn't count.

zonde said:
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
Question is about this number - 15%.
If you keep insisting that it's 15% because it's 15% both ways, then we can stop our discussion right there.
The number 15% only results from the 22.5 setting. If we use a 45 setting then it's 50% lost and 50% gained. Any setting cos^2(theta) defines both lost and gained, because sin^2(theta) = |cos^2(90-theta)| in all cases. There is nothing special about 22.5 and 15%.


zonde said:
sin(theta)=cos(90-theta) is a trivial trigonometric identity. What do you expect to prove with that?
That's why it constitutes a proof at all angles, and not just the 22.5 degree setting that gets 15% lost and gained in the example used.

zonde said:
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ... otherwise there is no switching.
Lost is photons that would have passed the polarizer but didn't at that setting. Gained is photons that wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS so we can divide things into H, V and L, R routes through the polarizer.

Consider a PBS rather than a plain polarizer placed in front of a simple polarized beam of light that evenly contains pure H and V polarized photons. We'll label the V polarization as angle 0. So, a PBS set at angle 0 will have 100% of the V photons take the L route, and 100% of the H photons take R. At 22.5 degrees, L is ~85% V photons and ~15% H photons, while the R beam now contains ~15% V photons and ~85% H photons. WARNING: You have to consider that by measuring the photons at a new setting, it changes the photons' polarization to be consistent with that new setting. At a setting of 45 degrees you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.

Now what the sin^2(theta) = cos^2(90-theta) represents here is any one (but only one) polarizer setting, such that theta=theta in both cases, and our sin^2(theta) is V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.

Now since this is a trig identity for all cases, it's valid for ANY uniform mixture of polarizations, whether 2 pure H and V beams or a random distribution, which by definition is a uniform mixture of polarizations.

It would even be easy to make non-uniform beam mixtures, where certain ranges of polarizations are missing in the beam, such that sin^2(theta) = cos^2(90-theta) can be used to define the ratios of beam intensities as theta, the polarizer setting, is adjusted. If ANY situation can be defined where sin^2(theta) = cos^2(90-theta) doesn't properly predict beam intensity ratios, from any crafted beam mixture, then I'm wrong.

And here's the kicker: by defining properties in terms of photon properties, rather than properties as defined by the polarizer settings that detect them, and using these polarizer path statistics, BI violation statistics also result as a consequence.
 
  • #741
DrChinese said:
Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
It relates to the arbitrary angle condition placed on the modeling of hv models and nothing else.
Consider:
Hidden variable model successfully models QM coincidence statistics, but requires coordinate freedom that is objected to. The following properties are noted:
1) One or the other, but not both, detector settings must be defined to have a 0 angle setting. (objection noted)
2) The detector defined as a zero setting has zero information about the other detector's setting.
3) The zero setting can be arbitrarily changed to any absolute setting, along with the detector angle changes, with or WITHOUT redefining absolute photon polarizations in the process.
4) The default photon polarizations can be rotated with absolute impunity, having no effect whatsoever on coincidence statistics.
5) The only thing considered for detections/non-detections is the photon polarization relative to the setting of the detector it actually hit.

Thus this proves the 0 coordinate requirement in no way hinges upon physical properties unique to the angles chosen. It is a mathematical artifact, related to non-commuting vectors. It's essentially the equivalent of giving only the path of a pool ball and demanding that the path of the cue ball that hit it must be uniquely calculable in order to prove pool balls are real.

I'll get around to attempting to use predefined non-commutative vectors to get around it soon, but I have grave doubts. Disallowing arbitrary 0 coordinates is tantamount to disallowing an inertial observer from defining their own velocity as 0, which would require a universal 0 velocity.

At the very least, I would appreciate it if you quit misrepresenting the 0 angle condition as a statistically unique physical state at that angle.
 
  • #742
JesseM said:
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

When you say: "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats. That is, that the properties supposed to be carried by the photons are uniquely identified by the route each takes at a detector.

If a particle has a perfectly distinct property, in which a detector setting tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue. The problem for models is that vector products are non-commutative, requiring a 0 angle to be defined for one of the detectors.

Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about. Another coordinate system to the detectors, which the emitter knows nothing about, but rotates in tandem with one or the other detector. Now rotating the emitter has absolutely no effect on the coincidence statistics whatsoever, thus proving that the statistics are not unique to physical states of the particles at a given setting. You can also have any arbitrary offset between the two detectors, and consistency with QM is also maintained. Thus the non-commutativity of vectors is the stumbling block for such models. But the complete insensitivity to arbitrary emitter settings proves it's not a physical stumbling block.
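The insensitivity to a global rotation is at least true of the QM prediction itself, which depends only on the relative angle; a one-line check:

Code:
import numpy as np

def coincidence(a_deg, b_deg):
    # QM matched-outcome rate for polarization-entangled pairs
    return np.cos(np.radians(a_deg - b_deg)) ** 2

for phi in (0, 17, 133):                     # arbitrary global rotation
    print(coincidence(0 + phi, 22.5 + phi))  # ~0.854 for every phi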

So perhaps you can explain to me the physical significance of requiring a non-physical coordinate choice to give exactly the same answers to vector products, under arbitrary rotations, when you can't even do that on a pool table?
 
  • #743
my_wan said:
Lost is photons that would have passed the polarizer but didn't at that setting. Gained is photons that wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS so we can divide things into H, V and L, R routes through the polarizer.

Consider a PBS rather than a plain polarizer placed in front of a simple polarized beam of light that evenly contains pure H and V polarized photons. We'll label the V polarization as angle 0. So, a PBS set at angle 0 will have 100% of the V photons take the L route, and 100% of the H photons take R. At 22.5 degrees, L is ~85% V photons and ~15% H photons, while the R beam now contains ~15% V photons and ~85% H photons. WARNING: You have to consider that by measuring the photons at a new setting, it changes the photons' polarization to be consistent with that new setting. At a setting of 45 degrees you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.
You compare a measurement at the 22.5 angle with a hypothetical measurement at the 0 angle. When I used similar reasoning, your comment was that:

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.
So how is your reasoning so radically different than mine that you are allowed to use reasoning like that but I am not allowed?

But let's say it's fine and look at a slightly modified case of yours.
Now take a beam of light that consists of H, V, +45 and -45 polarized light. What angle should be taken as the 0 angle in this case? Let's say it's again the V polarization that is at 0 angle. Can you work out photon rates in the L and R beams for all photons (H, V, +45, -45)?
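For concreteness, the rates being asked for can be tabulated under the per-photon cos^2 rule used earlier in this exchange (a sketch; whether that rule is the right model is exactly what is in question):

Code:
import numpy as np

setting = 22.5  # PBS angle, with V polarization labeled 0 degrees as above
for name, p in (("V", 0), ("H", 90), ("+45", 45), ("-45", 135)):
    L = np.cos(np.radians(p - setting)) ** 2  # fraction taking the L route
    print(f"{name:>3}: L = {L:.3f}, R = {1 - L:.3f}")
# Each polarization splits by cos^2 of its angle to the PBS axis; the four
# rates average to 0.5 in either output beam.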

my_wan said:
Now what the sin^2(theta) = cos^2(90-theta) represents here is any one (but only one) polarizer setting, such that theta=theta in both cases, and our sin^2(theta) is V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition that theta1 = theta2 - 90?
Otherwise it's quite unclear what you mean by your statement.
 
  • #744
zonde said:
So how is your reasoning so radically different than mine that you are allowed to use reasoning like that but I am not allowed?

When I give the formula sin^2(theta) = |cos^2(90-theta)|, theta and theta are the same number from the same measurement. Hence:
sin^2(0) = |cos^2(90-0)|
sin^2(22.5) = |cos^2(90-22.5)|
sin^2(45) = |cos^2(90-45)|
etc.

You only make presumptions about the path statistics of individual photons, and wait till after the fact to do any comparing to another measurement.

You previously gave the formula:
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|
Here you put in the 0 from the first measurement as if it's part of what you are now measuring. It's not. The 22.5 is ALL that you are now measuring. The only thing you are comparing after the fact is the resulting path effects. You don't include measurements you are not presently performing to calculate the results of the measurement you are now making. This is to keep the reasoning separate, and avoid the interdependence inherent in the presumed non-local aspect of EPR correlations. It also allows you to compare it to any arbitrary other measurement without redoing the calculation. It's a non-trivial condition of modeling EPR correlations without non-local effects to keep the measurements separate. On these grounds alone mixing settings from other measurements to calculate results of the present measurement must be rejected. Only the after the fact results may be compared, to see if the local path assumptions remain empirically and universally consistent, with and without EPR correlations.

The primary issue remains whether the path statistics are consistent for both the pure H and V case and the randomized polarization case. This is the point on which I will put my pride ALL in by stating this is unequivocally a factual yes. This should also calculate cases in which the intensities of H and V are unequal, giving the variations of intensity at various polarizer settings at different angles. Such non-uniform beam mixtures can quite easily be tested experimentally. From a QM perspective this would be equivalent to interference in the wavefunction at certain angles.
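A minimal sketch of the unequal-intensity test described above, applying Malus's law per component (the V fraction f is an illustrative parameter, not a value from the thread):

Code:
import numpy as np

def transmission(theta_deg, f):
    # Beam that is a fraction f V-polarized (0 deg) and 1-f H-polarized (90 deg)
    t = np.radians(theta_deg)
    return f * np.cos(t) ** 2 + (1 - f) * np.sin(t) ** 2

for theta in (0, 22.5, 45, 67.5, 90):
    print(theta, round(transmission(theta, 0.7), 3))
# An equal mixture (f = 0.5) is flat at 0.5 for all settings; unequal
# mixtures oscillate with theta, which is what the proposed test would see.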
 
  • #745
DrChinese said:
Theory:
http://www.cs.rochester.edu/~cding/Teaching/573Spring2005/ur_only/GHZ-AJP90.pdf
Nice, it's exactly the same paper I looked at. I was just unsure whether posting that link would violate forum rules.
As the file is not searchable, I can point out that the text I quoted can be found on p. 1136, in the last full paragraph (end of the page).

DrChinese said:
Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
I think I caught the point you are making.

Let's see if I will be able to explain my objections from the viewpoint of contextuality.
First about EPR, Bell and non-contextuality.
If we take a photon that has polarization angle 0° and put it through a polarizer at angle 0°, it goes through with certainty. However, if we change the polarizer angle to 45°, it goes through with a 50% chance (that's basically Malus's law).
So when we have entangled photons, we have a prediction that this 50% chance is somehow correlated between the two entangled photons.
Bell's solution to this was non-contextuality, i.e. the photon is predetermined to take its chance one way or the other. I would argue that EPR does not contain any considerations regarding the solution of this particular problem - it was just a statement of the general problem.

So what are the options different from Bell's solution? As I see it, another solution is that photons can be considered as taking this 50% chance (under a 45° measurement base) dependent on the particular conditions of the polarizer (the context of measurement). But in that case it is obvious that this correlation between two entangled photons taking their chances the same way should be a correlation between the measurement conditions of the two photons, and not only a correlation between the photons themselves. This of course leaves the question of how the measurement conditions get "entangled", and here I speculate that some leading photons from the ensemble transfer their "entanglement" to the equipment at the cost of becoming uncorrelated.
That way we have classical correlation when we measure photons in the same base as they were created in (0° and 90° measurement base), and quantum (measurement context) correlation when we measure photons using a base incompatible with the one they were created in (+45° and -45° measurement base).

Now if we go back to GHZ. These inequalities were derived using Bell's non-contextual approach. If we look at them from the perspective of contextuality, then we can see that this measurement context correlation is not strictly tied to photon polarizations, but by varying the experiment setup it could be possible to get quite different correlations than the ones you would expect from pure classical polarization correlations.
And if we isolate conditions so that we measure mostly measurement context correlations then pure classical polarization correlations will be only indirectly related to observed results.
 
  • #746
my_wan said:
When I give the formula sin^2(theta) = |cos^2(90-theta)|, theta and theta are the same number from the same measurement.
Please tell me what theta represents physically.

As I asked already:
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition that theta1 = theta2 - 90?
Or is it something else?
 
  • #747
zonde said:
Now if we go back to GHZ. ...

Imagine that for a Bell Inequality, you look at some group of observations. The local realistic expectation is different from the QM expectation by a few %. Perhaps 30% versus 25% or something like that.

On the other hand, GHZ essentially makes a prediction of Heads for LR, and Tails for QM, every time. You essentially NEVER get a Heads in an actual experiment; every event is Tails. So you don't have to ask whether the sample is fair. There can be no bias - unless Heads events are per se not detectable, but how could that be? There are no Tails events ever predicted according to Realism.

So using a different attack on Local Realism, you get the same results: Local Realism is ruled out. Now again, there is a slight split here, as there are scientists who conclude from GHZ that Realism (non-contextuality) is excluded in all forms. And there are others who restrict this conclusion only to Local Realism.
 
  • #748
my_wan said:
When you say: "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats. That is, that the properties supposed to be carried by the photons are uniquely identified by the route each takes at a detector.

If a particle has a perfectly distinct property, in which a detector setting tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue.
Do you just mean that local properties of the particle are affected by local properties of the detector it comes into contact with? If so, no, this cannot lead to any violations of the Bell inequalities. Suppose the experimenters each have a choice of three detector settings, and they find that on any trial where they both chose the same detector setting they always got the same measurement outcome. Then in a local hidden variables model where you have some variables associated with the particle and some with the detector, the only way to explain this is to suppose the variables associated with the two particles predetermined the result they would give for each of the three detector settings; if there was any probabilistic element to how the variables of the particles interacted with the state of the detector to produce a measurement outcome, then there would be a finite probability that the two experimenters could both choose the same detector setting and get different outcomes. Do you disagree?
my_wan said:
Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about. Another coordinate system to the detectors, which the emitter knows nothing about, but rotates in tandem with one or the other detector.
What do you mean by "assigning" coordinate systems? Coordinate systems are not associated with physical objects, they are just aspects of how we analyze a physical situation by assigning space and time coordinates to different events. Any physical situation can be analyzed using any coordinate system you like, the choice of coordinate system cannot affect your predictions about coordinate-invariant physical facts.

Anyway, your description isn't at all clear, could you come up with a mathematical description of the type of "hv model" you're imagining, rather than a verbal one?
 
  • #749
DrChinese said:
ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
Yes, the generally accepted view does use language like you do. And the generally accepted view for 30 years was that von Neumann's proof disallowed hidden variable theories, even though that proof had been shown to be unacceptable some 30 years before Bell's paper.

Zeilinger's language in the quote you provided, and the general tone of his continuing program, and your language wrt Bell, indicate to me that neither of you understand the subtleties of the arguments being presented here and in certain papers (which are, evidently, not being as clearly presented as necessary) regarding the interpretation of Bell's theorem (ie., the physical basis of Bell inequalities).

You can provide all the quotes you want. Quotes don't refute arguments. You're going to have to refute some purported LR models that reproduce qm predictions but are not rendered in the form of Bell's LHV model.

However, you refuse to look at them because:

DrChinese said:
I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.

Please explain what you mean by "a local realistic model that can provide data values for 3 simultaneous settings". Three simultaneous settings of what? In the archetypal optical Bell test setup there's an emitter, two polarizers, and two detectors. The value of (a-b), the angular difference in the polarizer settings, can't have more than one value associated with any given pair of detection attributes. So, I just don't know what you're talking about wrt your 'requirement'.

My not understanding your 'requirement' might well be just a 'mental block' of some sort on my part. In any case, before we can continue, so that you might actually 'refute' something (which you haven't yet), you're going to have to explain, as clearly as you can, what this "data values for 3 simultaneous settings" means and how it is a 'requirement' that purported LR models of entanglement must conform to.

DrChinese said:
(Again, an exception for the De Raedt model which has a different set of issues entirely.)
My understanding is that a simulation is not, per se, a model. So, a simulation might do what a model can't. If this is incorrect, then please inform me. But if it is incorrect, then what's the point of a simulation -- when a model would suffice?

Here's my thinking about this: suppose we eventually get a simulation of an optical Bell test which reproduces the observed results. And further suppose that this simulation involves only 'locally' produced 'relationships' between counter-propagating optical disturbances. And further suppose that this simulation can only be modeled in a nonseparable (nonfactorizable) way. Then what might that tell us about Bell's ansatz?
 
  • #750
DevilsAvocado said:
I must inform the casual reader: Don’t believe everything you read at PF, especially if the poster defines you as "less sophisticated".
No offense DA, but you are 'the casual reader'.

DevilsAvocado said:
Everything is very simple: If you have one peer reviewed theory (without references or link) stating that 2 + 2 = 5 and a generally accepted and mathematical proven theorem stating 2 + 2 = 4, then one of them must be false.
No. Interpreting Bell's theorem (ie., Bell inequalities) is not that simple. If it was then physicists, and logicians, and mathematicians wouldn't still be at odds about the physical meaning of Bell's theorem. But they are, regardless of the fact that those trying to clarify matters are, apparently, a small minority at the present time.

DevilsAvocado said:
And remember: Bell’s theorem has absolutely nothing to do with "elementary optics" or any other "optics", I repeat – absolutely nothing. Period.
Do you think that optical Bell tests (which comprise almost all Bell tests to date) have nothing to do with optics? Even the 'casual reader' will sense that something is wrong with that assessment.

The point is that if optical Bell tests have to do with optics, then any model of those experimental situations must have to do with optics also.

By the way, the fact that I think you're way off in your thinking on this doesn't diminish my admiration for your obvious desire to learn, and your contributions to this thread. Your zealous investigations and often amusing and informative posts are most welcome. And, I still feel like an idiot for overreacting to what I took at the time to be an unnecessarily slanderous post. (Maybe I was just having a bad day. Or, maybe, it isn't within your purview to make statements about other posters' orientations regarding scientific methodology -- unless they've clearly indicated that orientation. The fact is that the correct application of the scientific method sometimes requires deep logical analysis. My view, and the view of many others, is that Bell's 'logical' analysis didn't go deep enough. And, therefore, the common interpretations of Bell's theorem are flawed.)

So, while it's granted that your, and DrC's, and maybe even most physicists, current opinion and expression regarding the physical meaning of violations of BIs is the 'common' view -- consider the possibility that you just might be missing something. You seem to understand that Bell's theorem has nothing to do with optics. I agree. Is that maybe one way of approaching, and understanding, the question of why Bell's ansatz gives incorrect predictions wrt optical Bell tests?
 
