I Quantum physics vs Probability theory

  • #61
PeroK said:
This second quotation is, quite simply, nonsense. It does not reflect a failure of 100 years of QM development by the leading physicists of the 20th century. It reflects your failure, hitherto, to understand what QM is saying.
I don't have an issue understanding QM, in the sense of understanding how to use the formalism and how to apply it to correctly calculate results, for which you don't need to resolve those kinds of issues. In that sense I understand QM quite well.

But whenever physics textbooks tried to explain QM aspects "intuitively", it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this. But when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt by physicists to explain it in "classical" terms that usually also contradict the math of QM.
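What I mean concretely: the general uncertainty relation is essentially one application of the Cauchy-Schwarz inequality in Hilbert space, e.g. in the Robertson form
$$\sigma_A\,\sigma_B \;\ge\; \frac{1}{2}\,\big|\langle[\hat A,\hat B]\rangle\big|, \qquad [\hat x,\hat p]=i\hbar \;\Rightarrow\; \sigma_x\,\sigma_p \ge \frac{\hbar}{2},$$
and read this way it is a plain statement about the standard deviations of two observables in a given state - no talk of measurement "disturbance" required.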

PeroK said:
My concern is that we've indulged you in a fairly pointless exercise in analysing the foundations of QM vis-a-vis classical PT. Whereas, all along your issue is simply that of someone trying to learn QM for the first time and being confused by it.
I found the answer to the question I asked, albeit it took 3 pages.

Indeed I hadn't realized that what I am looking for is a far more general framework for analysing possible constructions of theories capable of describing quantum experiments - just the kind of framework the no-go theorems need to discuss the possibility of hidden-variable theories. The underlying premise for the required framework is the same.

My concern is that this point of view is just too different from that of most physicists, so it gets difficult to express the questions I have in terms they understand. Then again, this is an issue I might have better posted in the probability theory forums, since it needed only basic information about the physical experiments in question, while a deeper understanding of PT was needed for the entire rest. The fact that I explicitly did not want a model in terms of classic QM may also be problematic for people who are too familiar with it to even understand why anyone would want that - given that QM works well enough. Then again, given the article of Kochen-Specker, where such a formalism was developed, I wonder why it initially appeared unthinkable to so many here.
 
  • #62
Killtech said:
I don't have an issue understanding QM, in the sense of understanding how to use the formalism and how to apply it to correctly calculate results, for which you don't need to resolve those kinds of issues. In that sense I understand QM quite well.

But whenever physics textbooks tried to explain QM aspects "intuitively", it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this. But when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt by physicists to explain it in "classical" terms that usually also contradict the math of QM.

What book are you using?

One problem I can see with your approach is how you would map your mathematics to experiment. Especially as experiments involve macroscopic, classical apparatus. Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
 
  • Like
Likes Killtech
  • #63
Killtech said:
But whenever physics textbooks tried to explain QM aspects "intuitively", it left me more confused than before. Heisenberg's uncertainty principle is a prime example of this. But when I learned the theoretical proof, it was rather easy to understand what it meant. From that point on I learned that, at least for me, it is far better to derive my intuition from the behavior of the mathematical apparatus rather than rely on any attempt by physicists to explain it in "classical" terms that usually also contradict the math of QM.

That's it! You have to build your intuition from the math. There's no other way. It's good advice to stay away from any text that claims otherwise. In terms of the writings of the founding fathers, for me that meant reading Schrödinger, Dirac, Born, Pauli, and particularly Sommerfeld rather than Bohr or Heisenberg.

Concerning foundations, stay away from philosophy books, where even ordinary words lose any clear meaning, leaving you in the dark and fog of utmost confusion ;-)).

Concerning foundational physical questions like EPR, entanglement, and the like, it's also good to look at the real-lab experiments by quantum opticians and read their papers (with a good theoretical textbook as background, like Garrison and Chiao, Quantum Optics, to get the full QFT description, which is the only true thing).
 
  • #64
Killtech said:
The fact that I explicitly did not want a model in terms of classic QM may also be problematic for people who are too familiar with it to even understand why anyone would want that - given that QM works well enough. Then again, given the article of Kochen-Specker, where such a formalism was developed, I wonder why it initially appeared unthinkable to so many here.
This is the double-edged sword of specialization into camps: the breeding of researchers into large silos who vehemently overreact to anyone who speaks against the accepted wisdom of the group. This is a widely documented phenomenon within the social sciences, studied from a variety of viewpoints (pedagogic, sociological, economic, political, doxastic, etc.), but I digress.

The direct downside of specialization is that specialists in different fields are unable to converse with each other, even when talking about the same topic, for a multitude of reasons. To quote Feynman: "In this age of specialization men who thoroughly know one field are often incompetent to discuss another. The great problems of the relations between one and another aspect of human activity have for this reason been discussed less and less in public. When we look at the past great debates on these subjects we feel jealous of those times, for we should have liked the excitement of such argument."
 
  • Like
Likes Killtech
  • #65
PeroK said:
What book are you using?
Over all that time - 15 years - I looked through quite a few different books, but also scripts I could find on the internet. Not all of them, but quite a few, left me with a brain hemorrhage :) - more often those with an experimental focus. But there were also those that stuck to axiomatic approaches... I liked those the most.

PeroK said:
One problem I can see with your approach is how you would map your mathematics to experiment. Especially as experiments involve macroscopic, classical apparatus. Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
Now you are starting to understand where I am coming from, because this is exactly the fundamental problem I am running into. For a person initially at home in pure mathematics, and on the autistic spectrum, this is the most difficult part to sort out. I just can't handle the constructions physics has made here (to me it appears as far from canonical as it can get), and even something simple like a well-defined mapping algorithm between an experimental setup and the corresponding observable operator it measures lacks a proper definition, which leaves me without a well-defined interpretation mapping.

Then again, PT offers a framework to model experiments in a very clear and reasonable way I can fully understand, so it is a natural tool to fall back on in order to see how I can close my experiment-to-math gap.
 
  • Like
Likes PeroK
  • #66
PeroK said:
Can you explain the double-slit experiment, for example, purely in terms of the mathematical formalism?
I am not sure what exactly your question means - "explain" is a wide term. Does any modelling of a random experiment in PT explain anything? Reading through Kochen-Specker, I wonder whether you are asking if this can be done in a non-contextual way; in that sense there would be a canonical way to apply it to a wide array of other instances. If that is what your question means, I think it might be possible.
 
  • #67
Killtech said:
I am not sure what exactly your question means - "explain" is a wide term. Does any modelling of a random experiment in PT explain anything? Reading through Kochen-Specker, I wonder whether you are asking if this can be done in a non-contextual way; in that sense there would be a canonical way to apply it to a wide array of other instances. If that is what your question means, I think it might be possible.

You might want to check that your mathematical formalism predicts the results obtained by experiment. Somehow you have to map the mathematical model to a specific experimental set-up.
 
  • #68
PeroK said:
You might want to check that your mathematical formalism predicts the results obtained by experiment. Somehow you have to map the mathematical model to a specific experimental set-up.
The PT toolbox provides you with both - albeit the mapping is at first trivial. For example, the way you distinguish outcomes implicitly defines what can be observed directly: observables. You could in general define a mapping from all types of possible detectors to these observables, by which you identify outcomes. But of course this stays entirely on a macroscopic level. Once this is established and you have a correct model for your experiment, you can start comparing the QM model with yours, since both yield the same results. Now you try to find a mapping from each piece of information stored in your quantum state model (e.g. a wave function in the sense of a decomposition into some basis, with each coefficient holding one real number of abstract information) to your macroscopic outcome space, such that varying each such piece of information (within its allowed range) yields the same change in results for both models.

Now the problem is that a wave function stores a lot of information, so you need a very general experimental setup to make each piece of information produce a distinguishable difference in the results (taken over many realizations with the same starting conditions). The simple double-slit setup doesn't even have parameters to play with, so it isn't suited for this. But one could think of each slit width as a parameter, and so on, until one gets enough degrees of freedom for this kind of mapping.

In the end you should have a mapping of parameters to outcome distributions (events in PT terminology), and those in turn are associated with the QM mathematical framework via its interpretations. The final stage is to rearrange the original state space of the PT model in terms of the mathematical objects of QM via that mapping function - which therefore functions as an interpretation.
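A minimal numerical sketch of the kind of comparison I mean - everything here is a hypothetical toy: a single qubit with one preparation parameter stands in for the "general experimental setup", and the PT side is just a probability measure over the two detector outcomes chosen to reproduce the statistics:

```python
import numpy as np

def qm_distribution(theta):
    """Born-rule outcome distribution: qubit prepared by a rotation
    of angle theta, measured in the fixed detector basis."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.abs(psi) ** 2  # probabilities of the two detector clicks

def pt_distribution(theta):
    """Classical PT model of the same experiment: a plain probability
    measure on the macroscopic outcome space {click_0, click_1}."""
    p0 = np.cos(theta / 2) ** 2
    return np.array([p0, 1.0 - p0])

# The 'mapping' requirement: varying the preparation parameter must
# change the outcome distribution identically in both models.
for theta in np.linspace(0.0, np.pi, 5):
    assert np.allclose(qm_distribution(theta), pt_distribution(theta))
```

The interesting question is then whether one fixed PT state space can host all such parameter families at once - which is exactly where Kochen-Specker-style contextuality bites.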
 
Last edited:
  • #69
Killtech said:
The PT toolbox provides you with both - albeit the mapping is at first trivial. For example, the way you distinguish outcomes implicitly defines what can be observed directly: observables. You could in general define a mapping from all types of possible detectors to these observables, by which you identify outcomes. But of course this stays entirely on a macroscopic level. Once this is established and you have a correct model for your experiment, you can start comparing the QM model with yours, since both yield the same results. Now you try to find a mapping from each piece of information stored in your quantum state model (e.g. a wave function in the sense of a decomposition into some basis, with each coefficient holding one real number of abstract information) to your macroscopic outcome space, such that varying each such piece of information (within its allowed range) yields the same change in results for both models.

Now the problem is that a wave function stores a lot of information, so you need a very general experimental setup to make each piece of information produce a distinguishable difference in the results (taken over many realizations with the same starting conditions).

Hmm. You're not saying anything specific here. Let's say I'm an experimenter and I have results from a double-slit experiment using electrons. When either slit is open I get a single-slit pattern. But, when both slits are open, I do not get the sum of two single-slit patterns; I get a different pattern: an "interference" pattern.

How does the mathematical formalism of QM explain that result? It has to be specific to that experiment.
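For reference, the bare formalism only says that amplitudes, not probabilities, superpose:
$$P_{12}(x) \;=\; \big|\psi_1(x)+\psi_2(x)\big|^2 \;=\; \big|\psi_1(x)\big|^2+\big|\psi_2(x)\big|^2+2\,\mathrm{Re}\!\big[\psi_1^*(x)\,\psi_2(x)\big],$$
with the cross term producing the interference pattern. The part I'm asking about is precisely how you justify assigning ##\psi_1## and ##\psi_2## to this particular apparatus in the first place.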

If you can't do that, then you are studying pure mathematics; but not physics. Not that there is anything wrong with pure mathematics!
 
  • Like
Likes vanhees71 and Auto-Didact
  • #70
PeroK said:
Hmm. You're not saying anything specific here. Let's say I'm an experimenter and I have results from a double-slit experiment using electrons. When either slit is open I get a single-slit pattern. But, when both slits are open, I do not get the sum of two single-slit patterns; I get a different pattern: an "interference" pattern.

How does the mathematical formalism of QM explain that result? It has to be specific to that experiment.
Sorry, I edited my prior post after posting with a little more elaboration. But I think you misunderstand my goal a little. I do not aim to explain anything. I am rather looking for a clear construction principle for how to associate elements of the QM formalism with macroscopic observations made in the experiments - other than using the standard interpretations I am struggling with.
 
  • #71
Killtech said:
Sorry, I edited my prior post after posting with a little more elaboration. But I think you misunderstand my goal a little. I do not aim to explain anything. I am rather looking for a clear construction principle for how to associate elements of the QM formalism with macroscopic observations made in the experiments - other than using the standard interpretations I am struggling with.

I don't believe you can, if we exclude highly specialist macroscopic objects that have been experimentally created to display QM phenomena and look at "ordinary" macroscopic objects, like a particle detector.

You can't account for every particle in the detector and environment explicitly. Schrödinger's cat might be a good example. Just how, in QM terms, are you going to define a "live" cat and a "dead" cat? How do you define a cat, for that matter! You can do it in veterinary terms. But there is no QM definition of a cat.

You have to accept that mathematical and physical reasoning from QM does not extend to a cat. Roughly you need at least:

QM
Molecular chemistry
Organic chemistry
Cell biology
Biology

QM underpins the whole edifice, but you can't understand a cat using only QM.

Theoretically, let's assume, we could do it. But it's practically impossible.
 
  • #72
PeroK said:
I don't believe you can, if we exclude highly specialist macroscopic objects that have been experimentally created to display QM phenomena and look at "ordinary" macroscopic objects, like a particle detector.

Theoretically, let's assume, we could do it. But it's practically impossible.
Well, I have to disagree here. Initially I was simply looking for a way to express Schrödinger's equation in pictures to better understand what it does - because I have found that when dealing with differential equations it is extremely helpful to depict them visually, to get a good intuition of how solutions should look and why certain theorems hold - and the fact that the equation is complex valued was a bit of an obstacle. So using the polar representation ##\Psi = \sqrt {\rho} e^{i u}## I got around it and checked the time evolution equations for both quantities. Since it always helps to find similar, already well understood equations as a shortcut to picturing these, I found that classical physics offers a lot: the time evolution of ##\rho## is the simple continuity equation, while the one for the probability-density current can be written in terms of the Navier-Stokes equations of a superfluid with a peculiar non-linear self-interaction term ##\hbar^2 \frac {\nabla^2 \sqrt \rho} {2 m \sqrt \rho}## (a pressure term?). I then looked at how this object interacts with its environment, just to find that it does so again in a quite familiar fashion - along the lines of Schrödinger's original attempt to interpret the wave function as the charge density (albeit here ##\rho## is encoded in it via Born's rule), and motivated by the interpretation of the Dirac equation when its continuity equation goes negative. Most convincing, however, is how intuitive this makes the H-atom solutions: a problem with Bohr's model was that a charged particle with angular momentum would emit an EM wave and lose energy - but a charged fluid can take the form of a disk with non-zero angular momentum that does not change over time, thus emitting no energy. Only if you combine two different energy eigenstates do you immediately find an oscillating charge distribution, $$\partial_{t} \rho = \partial_{t} \langle E_{1}+E_{2}|E_{1}+E_{2}\rangle = \partial_{t}\, 2\,\mathrm{Re}\langle E_{1}|E_{2}\rangle \propto \sin\big((E_{2}-E_{1})\,t/\hbar\big),$$ so such a solution should (classically) rapidly lose energy by emitting an EM wave of the proper frequency and collapse to the lower state. This behavior, which would make only a discrete set of energy eigenstates classically stable, is however only possible in a non-linear system.
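Written out, this substitution gives what is usually called the Madelung form of the Schrödinger equation (a standard result; sign conventions for the quantum-potential term vary):
$$\partial_t \rho + \nabla\cdot(\rho\, v) = 0, \qquad \partial_t v + (v\cdot\nabla)\, v = -\frac{1}{m}\nabla\!\left(V - \frac{\hbar^2}{2m}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}\right), \qquad v = \frac{\hbar}{m}\nabla u .$$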

But generally what is stirring me up is that the time evolution of the probability density has a non-linear self-interaction term according to Schrödinger - from a PT point of view this is a no-go for a true probability density, because it would allow different realizations/outcomes of an experiment to interact with each other (this seems to be the root of all QM non-classicality). That said, interpreting this as an interaction with an alternate universe makes a lot of sense to me here. But the easiest way to remedy this problem would be if ##\rho## were simply a physical density instead (to which a probability density is merely proportional), since physical densities obviously do have such interactions. At this crossroads I view the latter approach as the more canonical, while most QM interpretations take the other road - and I must understand why.

But yeah, I know superfluids are not exactly common on the macroscopic level, and neither are non-linear systems of that kind easy to find. However, there are a lot of macroscopic non-linear examples that show interesting behavior, for example solitons: solutions of non-linear wave equations which exhibit particle behavior. So there may not be a perfect macroscopic match for the wave function's behavior, but you can get quite close. And I find it sufficient to follow that visualization of QM rather than sticking to abstract point particles.

But without a connection to experiments it has limited usability.
 
Last edited:
  • #73
Killtech said:
I am terribly sorry to have misunderstood this classification. Is there a way I can remedy this mistake?

After reviewing the thread, I have changed the level to "I". Some aspects of the discussion probably can't be addressed fully except at the "A" level, but your posts indicate that you do not have the background needed for an "A" level discussion. Since the discussion is clearly beyond the "B" level at this point, "I" level seems like the best compromise.
 
  • #74
Stephen Tashi said:
If that refers to my questions, the problem is to show that "quantum logic" or "quantum probability" or "probability amplitudes" are organized mathematical topics that generalize ordinary probability theory. The alternative to that possibility is that these terms are not defined in some unified mathematical context, but are informal descriptions of aspects of calculations in QM.
What do you think about Streater's Classical and Quantum Probability?

There's also Hardy's Quantum Theory From Five Reasonable Axioms which tries to reconstruct both classical and quantum probabilistic theories.
 
  • Like
Likes PeroK
  • #75
kith said:
What do you think about Streater's Classical and Quantum Probability?

There's also Hardy's Quantum Theory From Five Reasonable Axioms which tries to reconstruct both classical and quantum probabilistic theories.

I haven't looked at Streater's paper yet. Hardy's approach uses "physical probability". It's what I'd call the Axiom Of Average Luck. It modifies the Law Of Large Numbers to say that a probability can be physically approximated to any given desired accuracy by independent experiments - as opposed to the mathematical statement of the law, which only deals with the probability of getting a good approximation.
 
  • #76
Stephen Tashi said:
Hardy's approach uses "physical probability". It's what I'd call the Axiom Of Average Luck. It modifies the Law Of Large Numbers to say that a probability can be physically approximated to any given desired accuracy by independent experiments - as opposed to the mathematical statement of the law, which only deals with the probability of getting a good approximation.
Do you think this is enough to do physics or is there something missing? If it is enough, classical and quantum probabilities in physics are on equal footing (because rigorous probability theory itself isn't needed for physics if we take this point of view).

But try Streater, I think his treatment is much more aligned with what you are looking for.
 
  • #77
kith said:
If it is enough, classical and quantum probabilities in physics are on equal footing (because rigorous probability theory itself isn't needed for physics if we take this point of view).

Connecting probability theory with applications of probability theory is (yet another) problem of interpretation. Probability theory doesn't say that random variables have realizations, it doesn't say that we can do random sampling, and it doesn't comment on whether events are "possible" or "impossible". Probability theory is circular. It only talks about probabilities.

Attempts to connect the law of large numbers to physical reality seem to work well. However, attempts to use martingale methods of gambling may also seem to work well. Suppose the probability of an event can always(!) be approximated to two decimal places by 10,000 independent trials. How many times will Nature be performing sets of 10,000 trials? Will there be a physical consequence if one set of these trials fails to achieve two-decimal accuracy? I don't know how to reconcile the concept of "physical probability" (results of repeated experiments) with a scheme for how many times Nature conducts such series of experiments. There is also the problem that if I look for places where Nature has repeated an experiment, it is me who is grouping things into batches of independent experiments. The frequency of successes will depend on how I group them.
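A quick simulation makes the gap concrete: the mathematical law only gives a probability that a batch of 10,000 trials achieves two-decimal accuracy; it never guarantees that any particular batch does. A minimal sketch (the specific numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, batches = 0.3, 10_000, 1_000

# Simulate many independent batches of n Bernoulli(p) trials each
# and form the relative-frequency estimate of p in every batch.
estimates = rng.binomial(n, p, size=batches) / n

# Fraction of batches whose estimate hits two-decimal accuracy.
hit = np.mean(np.abs(estimates - p) < 0.005)
print(f"batches within 0.005 of p: {hit:.1%}")  # roughly 72%, never 100%
```

No finite batch size turns "close with high probability" into "physically guaranteed close" - which is the circularity referred to above.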
 
  • Like
Likes bhobba
  • #78
From comments on this thread, I take away (among other things) that classical probability is fine in its domain, but there are some instances where it won't work (Bell, two-slit, etc.) But outside of the evident counter-examples, I am not always sure where the boundary lies. For example, if I google "Schrödinger equation and brownian motion", I get a number of articles attempting, using classical statistical methods, to derive the equation, or to apply it to non-quantum phenomena, such as
https://www.researchgate.net/publication/237152270_Quantum_equations_from_Brownian_motion
https://www.springer.com/gp/book/9783540570301
https://onlinelibrary.wiley.com/doi...978(199811)46:6/8<889::AID-PROP889>3.0.CO;2-Z
But could such an endeavor (either deriving the S. equation by applying classical statistics to stochastic processes, or conversely, applying the S. equation to a macro phenomenon) even make sense?
 
  • #79
nomadreid said:
From comments on this thread, I take away (among other things) that classical probability is fine in its domain, but there are some instances where it won't work (Bell, two-slit, etc.)

Are there actually instances where classical probability theory "won't work"? Or are such failures the failure of the assumptions made in modeling phenomena with classical probability theory - for example, assuming events are independent when they (empirically) are not.

Griffiths uses the term "pre-probabilities" to describe mathematical structures that are used to derive probabilities, but which are not themselves probabilities (section 3.5, https://plato.stanford.edu/entries/qm-consistent-histories/). The manipulations of "pre-probabilities" can resemble the manipulations used for probabilities. Because the pre-probabilities of QM use complex numbers, one might call them "complex" or "quantum" probabilities. But the success of pre-probabilities does not imply that classical probability theory won't work. It does imply that modeling certain physical phenomena is best done by thinking in terms of pre-probabilities instead of making simplifying assumptions of independence and applying classical probability theory directly.
 
  • Like
Likes nomadreid, vanhees71, *now* and 1 other person
  • #80
Stephen Tashi said:
Are there actually instances where classical probability theory "won't work"?
No. In the objective Bayesian interpretation, probability is simply the logic of plausible reasoning. Logic always works. If logic seems to fail, the error is somewhere else.
nomadreid said:
But could such an endeavor (either deriving the S. equation by applying classical statistics to stochastic processes, or conversely, applying the S. equation to a macro phenomenon) even make sense?
It makes sense.

The classical derivation comes from
Nelson, E. (1966). Derivation of the Schrödinger Equation from Newtonian Mechanics, Phys Rev 150(4), 1079-1085
and is known as Nelsonian stochastics.

A conceptually IMHO much better variant comes from Caticha and is named "entropic dynamics":
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44, 225303, arxiv:1005.2357

Both have a problem known as the "Wallstrom objection": the Schrödinger equation is derived only for wave functions which have no zeros in the configuration space.
 
  • Like
Likes Stephen Tashi and nomadreid
  • #81
Wasn't Nelson himself quite critical of his own baby recently? I'd have to search for the source where I read about it ;-)).
 
  • #82
Elias1960, thanks very much for the very informative answer.
Elias1960 said:
A conceptually IMHO much better variant comes from Caticha and is named "entropic dynamics":
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44, 225303, arxiv:1005.2357

Both have a problem known as the "Wallstrom objection": the Schrödinger equation is derived only for wave functions which have no zeros in the configuration space.

The Caticha variant has the added advantage that it is more accessible. :woot: Anyway, when I looked up the "Wallstrom objection", I got a lot of attempts to get around it, such as https://arxiv.org/abs/1101.5774, https://arxiv.org/abs/1905.03075, and others. Have any of them successfully served as a complement to either the classic derivation or to entropic dynamics?
 
  • #83
nomadreid said:
The Caticha variant has the added advantage that it is more accessible. :woot: Anyway, when I looked up the "Wallstrom objection", I got a lot of attempts to get around it, such as https://arxiv.org/abs/1101.5774, https://arxiv.org/abs/1905.03075, and others. Have any of them successfully served as a complement to either the classic derivation or to entropic dynamics?
It seems the first of your quoted approaches, https://arxiv.org/abs/1101.5774, would fail to save entropic dynamics. There would be no potential ##v^i(q) = \partial_i \phi(q)## at all, but entropic dynamics requires that such a function exists globally.
If instead one simply excludes explicitly all ##\psi(q)## with zeros somewhere, as in https://arxiv.org/abs/1905.03075, then the potential exists as a global function ##v^i(q) = \partial_i \phi(q)##, and Caticha's interpretation makes sense.
 
  • Like
Likes nomadreid
  • #84
Many thanks for that, Elias1960!
 
  • #85
Greetings all!

Stephen Tashi said:
Are there actually instances where classical probability theory "won't work"?
It depends. Ultimately quantum probabilities can be seen as classical probabilities that are implicitly conditional. See the works of Andrei Khrennikov for nice expositions (https://arxiv.org/abs/1406.4886); this would be related to the "pre-probability" view above. In essence, every quantum probability is like ##P(E_{i}|Q)##, i.e. the chance of outcome ##E_{i}## given that variable ##Q## has been selected, whereas classical probability can have unconditional probabilities ##P(E_{i})##.
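Concretely, in Khrennikov's contextual reformulation the non-classicality shows up as an interference correction to the classical law of total probability; schematically, for a dichotomous conditioning variable ##Q \in \{Q_1, Q_2\}##,
$$P(E_i) \;=\; \sum_{j=1,2} P(Q_j)\,P(E_i \mid Q_j) \;+\; 2\cos\theta_i \sqrt{P(Q_1)P(E_i \mid Q_1)\,P(Q_2)P(E_i \mid Q_2)},$$
where ##\cos\theta_i = 0## recovers the classical case.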

However, constantly treating quantum probability this way is underdeveloped and probably more difficult than the standard way of folding all variable selections into a single non-commutative von Neumann algebra. It would be very difficult to treat quantum stochastic processes, such as those of Belavkin, this way.

Another way of phrasing the difference is that in quantum theory we can have fundamentally incompatible but non-contradictory events.
 
Last edited:
  • #86
vanhees71 said:
It's also obvious that with the SGE measurement of this spin component you change the state of the particle. Say you have prepared the particle to have a certain spin-z component ##\sigma_z = +\hbar/2##, and now you measure the spin-x component. Then your particle is randomly deflected up or down (with 50% probability each)
Oh, you invoked the collapse! I had thought this was a no-no for you!
 
Last edited:
  • Like
Likes Auto-Didact
  • #87
No, I did not invoke the collapse. The time evolution of the wave function is entirely described by unitary time evolution, and the probabilities for finding the particle in one or the other partial beam after the ##\sigma_x## measurement are entirely determined by Born's rule using the time-evolved wave function. There's no need for collapse, particularly not in this simple example, where you can solve the time-dependent SGE (almost) exactly analytically.
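For the spin part this is literally a one-line application of Born's rule:
$$|{+}z\rangle \;=\; \tfrac{1}{\sqrt{2}}\big(|{+}x\rangle + |{-}x\rangle\big) \;\Rightarrow\; P(\pm x) \;=\; \big|\langle \pm x \,|\, {+}z\rangle\big|^2 \;=\; \tfrac{1}{2},$$
exactly the 50% per partial beam quoted above.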
 
  • #88
Why would one "avoid" collapse? Isn't state updating a normal part of QM?
 
  • #89
The collapse is an ad-hoc prescription which works well as such, but it has very fundamental problems in connection with relativistic QFT. It contradicts the very construction of relativistic QFTs, where the only known (and very successful) models are those where interactions are strictly local, i.e., local observable operators commute at spacelike separation of their arguments. This implies that a local measurement cannot have instant effects on far-distant parts of entangled systems, while a collapse would mean an effect across spacelike-separated measurement events.
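In formulas, the locality condition (microcausality) is simply
$$\big[\hat O_1(x),\, \hat O_2(y)\big] \;=\; 0 \qquad \text{whenever } x \text{ and } y \text{ are spacelike separated},$$
for any two local observable operators ##\hat O_1## and ##\hat O_2##.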
 
  • #90
That's not true though. State updating can easily be generalised to QFT without any problems with special relativity. See Hamhalter, J., Quantum Measure Theory. It's a tough book, but Gleason's theorem and the Lüders rule are generalised there.

State updating doesn't lead to any problems, just like it doesn't cause signalling in entanglement in non-relativistic QM.

How do you update states in QFT if not via the usual rule? I know we don't do it normally in S-matrix calculations.
 
  • Informative
Likes Auto-Didact
