I Is quantum weirdness really weird?

  • Thread starter: Dadface
  • Tags: Quantum Weird
Summary
The discussion centers on the perception of "weirdness" in quantum mechanics (QM), with participants debating whether certain aspects are genuinely strange or merely counterintuitive. Many experts argue that what is often labeled as weird can be rationally explained through a deeper understanding of the mathematics involved. However, some participants maintain that QM does defy common sense, particularly in phenomena like entanglement and the Cheshire cat effect, which challenge traditional notions of reality. The conversation highlights the subjective nature of "weirdness," emphasizing that definitions vary among individuals based on their familiarity with quantum concepts. Ultimately, the thread illustrates the ongoing debate about the nature of QM and its implications for our understanding of reality.
  • #91
Simon Phoenix said:
the output from a parametric downconverter with coherent state input isn't a coherent state but a squeezed (and entangled) state.
'Coherent' has multiple meanings. A squeezed state is still very coherent, just with a different group defining the coherent state.
Simon Phoenix said:
Of course, but does it have those properties before measurement?
Of course. Neither the state of the laser nor of the beam is changed by a measurement at the end of the beam. And its properties are reproducible. So why should anyone (except those who want to maintain a weird view of Nature for other reasons) think that these properties should depend on measurement?
 
  • Like
Likes vanhees71
  • #92
A. Neumaier said:
Neither the state of the laser nor of the beam is changed by a measurement at the end of the beam. And its properties are reproducible.
How does this compare with the EPR criterion: "A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system"?
 
  • #93
The EPR criterion, "A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system", is satisfied by stationary (or sufficiently slowly varying) optical sources and arrangements of beams.

But quantum particles do not satisfy it. That's why an inappropriate focus on the particle aspect of quantum mechanics creates the appearance of weirdness.

In this sense, sources and beams are much more real than particles.
 
  • #94
rubi said:
Every theory that reproduces the predictions of standard QM must violate counterfactual definiteness.

How do you define "counterfactual definiteness"?

rubi said:
We can assign names to all the mathematical statements that appear in the assumptions of Bell's theorem, and one of them is often called counterfactual definiteness.

Which one?

rubi said:
It was shown by Stapp and Eberhard that counterfactual definiteness appears as an assumption in all known proofs of Bell type inequalities that also assume locality.

What about the ones that don't?
 
  • Like
Likes zonde
  • #95
I like a simplistic approach to "weirdness" in quantum mechanics, particularly when teaching amateur scientists. The big three weirdnesses are (1) the uncertainty principle, (2) wave-particle duality, and (3) entanglement.
1: Everything in the universe, notably subatomic particles, is always in at least some random motion. So if we try to pin down location, momentum is uncertain, and vice versa.
2: Particles are particles, but their locations in space-time may be wave-like if graphed or plotted. I.e., the waves in this duality are waves of probability in the behavior of particles.
3: Two entangled particles may show interdependent behavior, but that behavior is always, to at least some extent, uncertain. So only probabilities are entangled, not firm unequivocal information. Also, there may be more than four dimensions, and entangled particles may be immediately adjacent in one of those additional dimensions.
 
  • #96
ljagerman said:
I like a simplistic approach to "weirdness" in quantum mechanics, particularly when teaching amateur scientists. The big three weirdnesses are (1) the uncertainty principle, (2) wave-particle duality, and (3) entanglement.

1: Everything in the universe, notably subatomic particles, is always in at least some random motion. So if we try to pin down location, momentum is uncertain, and vice versa.
2: Particles are particles, but their locations in space-time may be wave-like if graphed or plotted. I.e., the waves in this duality are waves of probability in the behavior of particles.
3: Two entangled particles may show interdependent behavior, but that behavior is always, to at least some extent, uncertain. So only probabilities are entangled, not firm unequivocal information. Also, there may be more than four dimensions, and entangled particles may be immediately adjacent in one of those additional dimensions.

That's too simplistic for this discussion, I'd say, and perhaps not entirely correct.

Point 1 implies the particle is real and has a definite position. Its random motion makes it impossible to "pin down" location and momentum simultaneously. But that's (more or less) true only in one interpretation, pilot wave. Most people wouldn't agree. This is called the "realism" assumption (in EPR). Its apparent violation is key to so-called quantum weirdness and can't be ignored even in elementary discussion. BTW, I don't feel QM is at all weird.

Point 2 emphasizes that "particles are particles". Personally, I have no problem with that, but again it's probably not mainstream. QFT represents particles as "excitations of the field".

Point 3 seems misleading. The correlation between the two entangled particles is "certain" - theoretically, at least. The value they have when measured is, as you indicate, uncertain. Finally, I'd say extra dimensions are out of scope for a simplistic explanation.

Apart from that it's on the right track!
 
  • #97
PeterDonis said:
How do you define "counterfactual definiteness"?
It means that you can assign values to unperformed measurements. Mathematically, it's just the requirement that you have functions ##O_\xi : \Lambda\rightarrow \mathbb R## from the space ##\Lambda## of states to the real numbers for all possible measurements ##\xi##. If you want to prove Bell's theorem, it's enough to have these functions for all spin measurements ##\xi=(\text{Alice},\alpha)## of Alice and ##\xi=(\text{Bob},\beta)## for Bob. (Concretely, this means that the functions ##A(\alpha,\lambda)## and ##B(\beta,\lambda)## exist.)

What about the ones that don't?
I don't know a proof of the inequality that doesn't assume counterfactual definiteness.
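To make the definition concrete, here is a toy example in Python (purely illustrative and hypothetical; it is not a model from this thread and makes no claim to reproduce QM) of what such functions ##A(\alpha,\lambda)## and ##B(\beta,\lambda)## look like: a deterministic hidden-variable model in which ##\lambda## is a random angle and every setting has a definite outcome, measured or not.

```python
import numpy as np

# Purely illustrative toy hidden-variable model (hypothetical, not from this
# thread, and not claimed to reproduce QM): counterfactual definiteness just
# means that functions like A(alpha, lam) and B(beta, lam) exist, assigning a
# definite +/-1 outcome to every setting, whether it is measured or not.
def A(alpha, lam):
    """Alice's outcome for analyzer angle alpha, given hidden variable lam."""
    return 1.0 if np.cos(alpha - lam) >= 0 else -1.0

def B(beta, lam):
    """Bob's outcome for analyzer angle beta, given hidden variable lam."""
    return -1.0 if np.cos(beta - lam) >= 0 else 1.0

rng = np.random.default_rng(0)
lams = rng.uniform(0.0, 2 * np.pi, 50_000)        # ensemble of hidden states
alpha, beta = 0.0, np.pi / 8
E = np.mean([A(alpha, lam) * B(beta, lam) for lam in lams])
print(f"model correlation E(alpha, beta) = {E:.3f}")  # obeys Bell's inequality
```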
 
  • #98
PeterDonis said:
What about the ones that don't?
Is there such a thing? Asking, not arguing.
 
  • #99
Nugatory said:
Is there such a thing?

It depends on what "assume locality" means. AFAIK every proof has some form of "factorizability" assumption for the probability distribution, but I don't know that all sources agree on whether that assumption captures "locality".
 
  • #100
Nugatory said:
Is there such a thing? Asking, not arguing.
Yes, you can prove Bell-type inequalities for general random variables in a probability space. For example let ##A,B,C,D:\Lambda\rightarrow\{-1,1\}##. Then it is easy to show that ##\left|A(\lambda)B(\lambda)+A(\lambda)C(\lambda)+B(\lambda)D(\lambda)-C(\lambda)D(\lambda)\right|\leq 2##. Thus ##\left|\left<AB\right>+\left<AC\right>+\left<BD\right>-\left<CD\right>\right|\leq 2##. It doesn't matter whether ##A##, ##B##, ##C## and ##D## represent locally separated observables of a physical theory or not.
 
  • #101
rubi said:
Yes, you can prove Bell-type inequalities for general random variables in a probability space. For example let ##A,B,C,D:\Lambda\rightarrow\{-1,1\}##. Then it is easy to show that ##\left|A(\lambda)B(\lambda)+A(\lambda)C(\lambda)+B(\lambda)D(\lambda)-C(\lambda)D(\lambda)\right|\leq 2##. Thus ##\left|\left<AB\right>+\left<AC\right>+\left<BD\right>-\left<CD\right>\right|\leq 2##. It doesn't matter whether ##A##, ##B##, ##C## and ##D## represent locally separated observables of a physical theory or not.

Sure, but you have to assume they're independent - not communicating. For instance, if Alice's and Bob's detectors share their settings (imagine them as networked computers), they can easily produce a sequence of measurements that matches the QM predictions. That's the whole point of the recent Bell-type experiments, with spacelike-separated detectors.
 
  • #102
secur said:
Sure, but you have to assume they're independent - not communicating.
No, I really only assumed that they are random variables on a probability space with values ##1## or ##-1##, nothing more. You have ##A(\lambda)B(\lambda)+A(\lambda)C(\lambda)+B(\lambda)D(\lambda)-C(\lambda)D(\lambda) = A(\lambda)\left(B(\lambda)+C(\lambda)\right)+\left(B(\lambda)-C(\lambda)\right)D(\lambda)##. Since ##B(\lambda)## and ##C(\lambda)## are ##\pm 1##, either ##B(\lambda)+C(\lambda) = \pm 2## and ##B(\lambda)-C(\lambda) = 0##, or ##B(\lambda)+C(\lambda) = 0## and ##B(\lambda)-C(\lambda) = \pm 2##. Hence, the expression is always ##+2## or ##-2##. Take the expectation value and you get the inequality without any further assumption.
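For concreteness, a minimal sketch in Python that exhaustively checks this pointwise statement for all sixteen possible ##\pm 1## assignments; the bound on the expectation value then follows for any joint distribution whatsoever:

```python
import itertools

# Minimal sketch: check that A*B + A*C + B*D - C*D equals +2 or -2 for every
# possible assignment of +/-1 values to A, B, C, D.  No locality or
# independence enters anywhere; an average of numbers lying in {-2, +2} is
# automatically bounded by 2 in magnitude.
for A, B, C, D in itertools.product([-1, 1], repeat=4):
    value = A * B + A * C + B * D - C * D
    assert value in (-2, 2), (A, B, C, D, value)
print("All 16 assignments give +/-2, so |<AB> + <AC> + <BD> - <CD>| <= 2.")
```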
 
  • #103
I realize now that you're assuming counterfactual definiteness. In that case you're right.
 
  • Like
Likes Zafa Pi
  • #104
zonde said:
Do you mean, why is Bell's inequality violated?
Bell's inequality is the result of Bell's theorem. Experiments show that Bell's inequality is violated. Thus some part of the hypotheses of Bell's theorem must fail. What do you think it is?
 
  • #105
secur said:
I realize now that you're assuming counterfactual definiteness.

Please specify what mathematical assumption this is. We have had enough use of vague ordinary language in this thread. Since it is an "I" level thread, use of precise math is within scope.
 
  • #106
PeterDonis said:
Please specify what mathematical assumption this is.

@rubi defined the mathematical assumption of "counterfactual definiteness" above:

rubi said:
It means that you can assign values to unperformed measurements. Mathematically, it's just the requirement that you have functions ##O_\xi : \Lambda\rightarrow \mathbb R## from the space ##\Lambda## of states to the real numbers for all possible measurements ##\xi##. If you want to prove Bell's theorem, it's enough to have these functions for all spin measurements ##\xi=(\text{Alice},\alpha)## of Alice and ##\xi=(\text{Bob},\beta)## for Bob. (Concretely, this means that the functions ##A(\alpha,\lambda)## and ##B(\beta,\lambda)## exist.)

Except in this case we're using four RVs (for the CHSH inequality):

rubi said:
Yes, you can prove Bell-type inequalities for general random variables in a probability space. For example let ##A,B,C,D:\Lambda\rightarrow\{-1,1\}##. Then it is easy to show that ##\left|A(\lambda)B(\lambda)+A(\lambda)C(\lambda)+B(\lambda)D(\lambda)-C(\lambda)D(\lambda)\right|\leq 2##. Thus ##\left|\left<AB\right>+\left<AC\right>+\left<BD\right>-\left<CD\right>\right|\leq 2##.

So in this case it means that you can assume there exist definite values for all four RVs in each run of the experiment, even though you don't actually measure all of them.
 
  • Like
Likes Zafa Pi
  • #107
rubi said:
I don't know a proof of the inequality that doesn't assume counterfactual definiteness.

I was asking about the assumption of locality in the particular question you responded to here.
 
  • #108
secur said:
@rubi defined the mathematical assumption of "counterfactual definiteness" above

Ah, ok, I had missed that.
 
  • #109
A. Neumaier said:
The sources have properties independent of measurement, and the beams have properties independent of measurement. These are the real players and the real objects.

I have no idea what you mean by this - especially not these days, when manipulation and measurement of single quantum 'entities' are commonplace.

I think you're using 'properties' in a different sense than I was, too.

Let's take the situation where we have a 2-level atom in its excited state fired through a high-Q cavity containing the vacuum. There's a 'beam', I guess, but it consists of just one atom. If we tailor the cavity flight time right, the atom and field are going to be in an entangled state when the atom has left the cavity (and if we send a second 2-level atom, in its ground state, through with a different tailored flight time, we can end up with atom 1 and atom 2 entangled; these kinds of experiments have been done).
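For concreteness, here's a minimal numerical sketch of that first step (a toy calculation, assuming resonant Jaynes-Cummings dynamics truncated to the ##\{|e,0\rangle, |g,1\rangle\}## subspace): a flight time with ##gt = \pi/4## leaves the atom and the cavity field maximally entangled.

```python
import numpy as np

# Toy sketch (assuming resonant Jaynes-Cummings dynamics, truncated to the
# two-dimensional subspace spanned by |e,0> and |g,1>):
#   |psi(t)> = cos(g t)|e,0> - i sin(g t)|g,1>,
# so a flight time with g*t = pi/4 leaves atom and cavity field maximally
# entangled.
g, t = 1.0, np.pi / 4                     # coupling and tailored flight time

e0 = np.kron([1.0, 0.0], [1.0, 0.0])      # |e,0> in the atom (x) field basis
g1 = np.kron([0.0, 1.0], [0.0, 1.0])      # |g,1>
psi = np.cos(g * t) * e0 - 1j * np.sin(g * t) * g1

# Reduced state of the atom: trace out the field mode.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_atom = np.einsum('afbf->ab', rho)

# Von Neumann entropy of the reduced state; 1 bit means maximal entanglement.
p = np.linalg.eigvalsh(rho_atom)
entropy = -sum(x * np.log2(x) for x in p if x > 1e-12)
print(f"atom entanglement entropy = {entropy:.3f} bits")   # ~1.000
```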

In this case I don't see how the notion of 'beams' helps us understand the properties of the 2 entangled entities (one's an atom and one's a field, or in the second case we have 2 entangled atoms). Nor do I see how any subsequent correlation measurements (obviously we need to repeat the experiment lots of times) are going to be explicable by assuming some collection of variables (properties) that have an existence independent of measurement.

I don't think it matters that we begin with the atom and field in some definite (pure) states - which have some definite properties granted. If we assume that any collection of such definite properties (variables) that have an existence independent of measurement is sufficient to describe the subsequent atom-field interaction and resulting state of the overall system then we're not going to be able to construct a model that matches the experimental results.

The fact that there is no way to fully describe this using these kinds of 'realistic' properties means that this entangled entity (consisting of the atom and field, or the 2 atoms) does not possess some of these properties independent of measurement.

So we've gone from a classical situation, in which we can assume that things are described by a collection of variables (even if we have to treat those variables statistically because we don't know their values), to the quantum situation, where it's not even legitimate to think in these terms. There's no way we can replace QM with an 'ignorance' model; we can't say "oh, the properties or variables exist but we just don't know them".

So the very properties we measure in experiments are inextricably bound with the measurement. Those properties, or variables, aren't 'there' just waiting to be discovered by the measurement - in a real sense they're not 'there' at all until we do the measurement.

I like the intro to Feynman's classic path integral paper, in which he shows that the classical law for chaining conditional probabilities gets mapped to the same law but now applied to amplitudes in QM - he draws conclusions about the existence of 'properties' from this, and I've always seen that as a kind of precursor to Bell's treatment.

My view is that this is just one feature of the 'weirdness' of QM. Same 'probability' laws but now applied to amplitudes - I can't explain that in any satisfactory way other than to say "them there's the rules - get over it".

Another point of weirdness is the fact that in classical mechanics we can have two phase space points, arbitrarily close together, that we can always in principle distinguish. Distinguishability in QM is characterized by orthogonality and there's a sense in which two non-orthogonal states can 'mimic' each other with a certain probability. Can I explain this other than by saying "them there's the rules - get over it"? Nope.

If anyone else can then I'd love to be enlightened.
 
  • #110
rubi said:
It means that you can assign values to unperformed measurements. Mathematically, it's just the requirement that you have functions ##O_\xi : \Lambda\rightarrow \mathbb R## from the space ##\Lambda## of states to the real numbers for all possible measurements ##\xi##. If you want to prove Bell's theorem, it's enough to have these functions for all spin measurements ##\xi=(\text{Alice},\alpha)## of Alice and ##\xi=(\text{Bob},\beta)## for Bob. (Concretely, this means that the functions ##A(\alpha,\lambda)## and ##B(\beta,\lambda)## exist.)

I don't know a proof of the inequality that doesn't assume counterfactual definiteness.
Any scientific model has to make predictions. Doesn't it follow that any scientific model has to include some form of counterfactual definiteness?
 
  • #111
A. Neumaier said:
Yes, that was me...
Thank you...
Carry on.
 
  • #112
ljagerman said:
I like a simplistic approach to "weirdness" in quantum mechanics, particularly when teaching amateur scientists. The big three weirdnesses are (1) the uncertainty principle, (2) wave-particle duality, and (3) entanglement.
1: Everything in the universe, notably subatomic particles, is always in at least some random motion. So if we try to pin down location, momentum is uncertain, and vice versa.
2: Particles are particles, but their locations in space-time may be wave-like if graphed or plotted. I.e., the waves in this duality are waves of probability in the behavior of particles.
3: Two entangled particles may show interdependent behavior, but that behavior is always, to at least some extent, uncertain. So only probabilities are entangled, not firm unequivocal information. Also, there may be more than four dimensions, and entangled particles may be immediately adjacent in one of those additional dimensions.
Of course, if you present QT like this, it's weird. The uncertainty principle is a quite straightforward mathematical consequence of the structure of quantum theory. Item (2) is a no-brainer, since there has been no wave-particle duality in quantum theory for more than 90 years now. Fortunately we no longer teach Aristotelian physics to our high schoolers and university freshmen. We also shouldn't teach "old quantum theory" anymore (or only in a class about the history of physics, which is a very interesting and important subject in itself but shouldn't be used as a didactic route into QT).

That leaves entanglement, and that's indeed a pretty amazing consequence of the formalism of QT that we are unused to in everyday life. Here you need to get the concepts straight, particularly a good grasp of probability theory and the important difference between correlations and causal effects. Unfortunately this subject is presented wrongly in almost all popular-science writing about QT. The trouble is that many popular-science authors like to present QT as weird because they think this makes the subject more interesting for readers, but it does a poor job of conveying to laymen what's really done in physics labs around the world. Rather, one should try to tell the public what's really done in the labs and what's found in experimental and theoretical research!
 
  • #113
vanhees71 said:
The uncertainty principle is a quite straight-forward consequence of the structure of quantum theory. It's a mathematical consequence.

That's true - but lots of things in QM could be described as "straightforward consequences" of the formalism. I have never regarded dicking about with the formalism as equivalent to 'understanding' - that's a very 'recipe'-driven approach. Ultimately, and frustratingly for me, it may be all we can actually get from QM. We spend a lot of time learning classical physics and much of it is pretty intuitive, but I don't have the same kind of intuition when it comes to QM. The intuition I've had to learn for QM mostly derives from the formalism and from using it. So I have an intuition about the formalism and how to use it, but I have no real intuitive feel for what that formalism actually means in a 'physical' sense (OK, that's vague, I know, but I hope you get the drift).

That's not a very satisfactory state of affairs, for me at least. Can I explain why we have to represent 'states' using an abstract mathematical object that might ultimately bear no relation to 'reality' but is just some abstract mathematical device we use to kind of update our 'probabilities' (or more accurately pre-probabilities)? I don't really have a good feel for why this should be so. Can I explain why the conditional probability chaining rule gets applied to amplitudes in QM and what that really means? I don't even have any idea how to think of that as being natural and 'obvious' - other than saying that's just how it is. Often things become 'natural' and 'obvious' when viewed from the right perspective - I don't think that 'right' perspective exists in QM just yet (except for large systems or ensembles where largely 'classical' thinking can be applied).

On the one hand we have the macroscopic classical world, which can be described by a set of laws or axioms that are all, to a greater or lesser extent, reasonably intuitive. But underneath all of this is the quantum substrate from which these 'intuitive' laws and behaviours arise, almost certainly through decoherence. The laws governing this quantum substrate are not at all intuitive to my mind. They are simply a recipe that must be learned, from which we eventually gain a kind of intuition about how it works through use and practice.

There is, of course, no reason why nature should behave according to a set of laws that appear 'intuitive' to our evolution-conditioned brains. Ultimately we have this picture in which underneath it all is this substrate that has to be described more in terms of potentialities from which our macroscopic world of actualities emerges. I think that's quite strange - but maybe it's not to some. The underlying quantum world seems somehow insubstantial and we only connect to it through measurement (or environmental 'measurement' perhaps).

If we consider a 'particle' in a box (maybe an ion in a trap, for example) then to ask what its momentum is, without reference to measurement, is meaningless. Not only that, to assume that it actually possesses some value for this momentum is incorrect because not only is that tantamount to assuming a hidden-variable description for QM, it also implies that the classical chain rule can be applied to the probabilities of those values. That could all certainly be described as a very straightforward consequence of the formalism.

I've seen lots of posts on these forums from (I assume) interested non-professionals trying to get a grasp of what QM means. Ultimately we can't satisfy them; the only answer we give essentially boils down to "learn the maths and the formalism". I wish we could do better :-)

Maybe all I'm saying is that the formalism isn't 'enough' for me. I get the feeling that I'm in the minority here :frown:
 
  • Like
Likes OCR, vanhees71 and Nugatory
  • #114
PeterDonis said:
I was asking about the assumption of locality in the particular question you responded to here.
See my post #100.

zonde said:
Any scientific model has to make predictions. Doesn't it follow that any scientific model has to include some form of counterfactual definiteness?
No, because a model can make predictions about what happens without predicting what would have happened. That's the case in QM. The Kochen-Specker theorem excludes the possibility for these functions ##O_\xi## to exist. The value of the observable ##\hat O_1## depends on what other commuting observables ##\hat O_\xi## are measured simultaneously. Hence, ##\hat O_\xi## cannot possibly be modeled as functions on some state space, because the value of an ordinary function doesn't depend on what other functions you're looking at. If you give me a certain ##\lambda##, then ##O_1(\lambda)## will always be the same number, no matter what other ##O_\xi(\lambda)## I care to compute.
 
  • #115
ljagerman said:
The big three weirdnesses are (1) the uncertainty principle,
A similar uncertainty principle holds already in classical mechanics. Do you find it weird that one cannot resolve an oscillating signal arbitrarily well both in time and in frequency? If not, why do you find the same relation weird between position and momentum?
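To make the analogy concrete, here is a small numerical sketch (a Gaussian test pulse, chosen only for illustration): the RMS widths of the signal in time and in angular frequency multiply to about 1/2, and narrowing one necessarily broadens the other.

```python
import numpy as np

# Sketch: classical time-frequency uncertainty.  For any signal f(t), the RMS
# widths of |f(t)|^2 and of |F(omega)|^2 satisfy sigma_t * sigma_omega >= 1/2,
# with equality for a Gaussian pulse (the case checked here).
sigma = 1.0
t = np.linspace(-50.0, 50.0, 2**14)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2.0 * sigma**2))

def rms_width(x, weight):
    """RMS width of the normalized distribution `weight` over coordinate `x`."""
    w = weight / weight.sum()
    mean = (x * w).sum()
    return np.sqrt((((x - mean) ** 2) * w).sum())

F = np.fft.fft(f)                                    # spectrum (phase irrelevant here)
omega = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)   # angular frequencies

sigma_t = rms_width(t, np.abs(f) ** 2)
sigma_w = rms_width(omega, np.abs(F) ** 2)
print(f"sigma_t * sigma_omega = {sigma_t * sigma_w:.3f}   (lower bound 0.5)")
```

Shrinking ##\sigma## narrows the pulse in time and broadens its spectrum by the same factor; via ##p=\hbar k## this is exactly the position-momentum relation.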
 
  • #116
Simon Phoenix said:
I think you're using 'properties' in a different sense than I was too.
You are talking about a different concept of 'object' than I. Thus you get weirdness where I get meaning.

It is somewhat inconsistent to cling to a weird philosophy of what an object is and at the same time complain that the result is weirdness.

Also, you changed the subject, whereas I was responding to your example of laser light and parametric downconversion.
 
Last edited:
  • #117
Nugatory said:
How does this compare with the EPR criterion: "A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system"?

A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47 (1935), 777-781
defined the so-called EPR criterion verbatim as follows:
EPR said:
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.

I just added a slightly polished version of my contributions in this thread, including this quote, to my thermal interpretation FAQ.
 
  • #118
Simon Phoenix said:
Can I explain why we have to represent 'states' using an abstract mathematical object that might ultimately bear no relation to 'reality' but is just some abstract mathematical device we use to kind of update our 'probabilities' (or more accurately pre-probabilities)? I don't really have a good feel for why this should be so. Can I explain why the conditional probability chaining rule gets applied to amplitudes in QM and what that really means? I don't even have any idea how to think of that as being natural and 'obvious' - other than saying that's just how it is. Often things become 'natural' and 'obvious' when viewed from the right perspective
In my view the right perspective is given by my thermal interpretation of QM; see the link in my preceding post. At least I find nothing weird in it.

I abstracted this interpretation from paying attention to what I saw the majority of physicists actually do in their papers on applications of quantum mechanics (mostly in the shut-up-and-calculate mode), rather than listening to what the minority of physicists writing about quantum interpretations think about these issues. It made a huge difference! The latter had left me unsatisfied for many years...
 
  • #119
rubi said:
No, because a model can make predictions about what happens without predicting what would have happened.
Please explain or give a simple example.
My position is that we use the model exactly the same way whether we ask "what will happen?" or we ask "what would have happened?". It's exactly the same input for the model and therefore it has to produce exactly the same output.
 
  • #120
zonde said:
My position is that we use the model exactly the same way whether we ask "what will happen?" or we ask "what would have happened?". It's exactly the same input for the model and therefore it has to produce exactly the same output.
That only works in models whose predictions are computed by functions that are defined on some state space. This is exactly not the case in QM.

Assume we have a model whose predictions are computed by functions ##O_\xi :\Lambda\rightarrow\mathbb R##. Then we can add these functions and multiply them as follows: ##(O_\xi + O_\zeta)(\lambda) := O_\xi(\lambda) + O_\zeta(\lambda)## and ##(O_\xi O_\zeta)(\lambda) := O_\xi(\lambda) O_\zeta(\lambda)##.
Given some element ##\lambda\in\Lambda##, we can define the evaluation map ##v_\lambda## that takes a function ##O_\xi## and evaluates it at ##\lambda##: ##v_\lambda(O_\xi) := O_\xi(\lambda)##.
It is now easy to prove that ##v_\lambda(O_\xi + O_\zeta) = v_\lambda(O_\xi) + v_\lambda(O_\zeta)## and ##v_\lambda(O_\xi O_\zeta) = v_\lambda(O_\xi) v_\lambda(O_\zeta)##. We take these identities as the defining identities for an evaluation map.

In quantum mechanics, observables aren't functions ##O_\xi : \Lambda\rightarrow\mathbb R##, but rather operators ##\hat O_\xi## that are defined on a Hilbert space. We can now ask ourselves whether this is just an artifact of the formulation. It turns out that it is impossible to reformulate the theory in the previous language. If it were possible to map the operators ##\hat O_\xi## to ordinary functions ##O_\xi## on some state space ##\Lambda##, then there would be evaluation maps ##v## such that at least for commuting ##\hat O_\xi##, the defining identities of such evaluation maps would be satisfied, i.e. for commuting ##\hat O_\xi##, ##\hat O_\zeta##, we would have ##v(\hat O_\xi + \hat O_\zeta) = v(\hat O_\xi) + v(\hat O_\zeta)## and ##v(\hat O_\xi \hat O_\zeta) = v(\hat O_\xi) v(\hat O_\zeta)##. The Kochen-Specker theorem tells us that no such evaluation map ##v## exists. However, if the ##\hat O_\xi## could be mapped to ordinary functions on some state space ##\Lambda##, there would be plenty of these evaluation maps: One for every ##\lambda\in\Lambda##. Thus, not all quantum mechanical observables ##\hat O_\xi## can be represented as ordinary functions ##O_\xi:\Lambda\rightarrow \mathbb R## on some state space ##\Lambda##. Hence, QM violates counterfactual definiteness.

The simplest example of this is the GHZ state. See also http://www.phy.pku.edu.cn/~qiongyihe/content/download/3-2.pdf.
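For readers who want to see the simplest case worked out, here is a short numerical sketch of the standard GHZ argument (an illustration using numpy): the four product observables have definite quantum predictions on the GHZ state, yet no assignment of pre-existing ##\pm 1## values can reproduce all of them.

```python
import numpy as np
from itertools import product
from functools import reduce

# Sketch of the standard GHZ argument: on the state (|000> + |111>)/sqrt(2),
# QM predicts with certainty XXX = +1 and XYY = YXY = YYX = -1.  But any
# counterfactual assignment of values x_i, y_i in {-1,+1} satisfies
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3, so all four predictions can
# never hold simultaneously.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expval(ops):
    """Expectation value of a three-qubit product observable on the GHZ state."""
    O = reduce(np.kron, ops)
    return float(np.real(ghz.conj() @ O @ ghz))

for label, ops in [("XXX", (X, X, X)), ("XYY", (X, Y, Y)),
                   ("YXY", (Y, X, Y)), ("YYX", (Y, Y, X))]:
    print(label, "=", round(expval(ops), 3))          # +1, -1, -1, -1

# Exhaustive check: no assignment of pre-existing values matches all four.
ok = any(x1*x2*x3 == 1 and x1*y2*y3 == -1 and y1*x2*y3 == -1 and y1*y2*x3 == -1
         for x1, x2, x3, y1, y2, y3 in product([-1, 1], repeat=6))
print("counterfactual value assignment exists:", ok)  # False
```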
 
Last edited:
  • Like
Likes Mentz114
