How do entanglement experiments benefit from QFT (over QM)?

  • Thread starter DrChinese

DrChinese

Science Advisor
Gold Member
Going back a bit in this thread...

I think I have learned a bit more about QFT from some of the great posts here, especially about the situations in which QFT is the right tool to apply. Specifically, it seems QFT is best applied when scattering is being discussed and the results might include any of a variety of particles. On the other hand, while QFT does include elements that describe entanglement, that is apparently a weaker/less useful side of the theory. My sense is that this explains why entanglement experiments don't require the deeper theory of QFT: the basics of entanglement are well described by QM/QED without the need for any relativistic considerations (I don't consider descriptions of entangled photons to be relativistic, although others might).

And as to some of the discussions about "microcausality": as I now understand it, there are two key (and somewhat opposing) elements at play. Both relate to the act of performing a measurement on Alice's particle and considering what happens to Bob's remote, previously entangled partner particle:

1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.
2) Experimentally demonstrated quantum nonlocality: the state of Bob's particle DOES change due to Alice's choice of measurement. In short, Bob's particle is cast into a pure state relative to Alice's outcome.
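
As a concrete illustration of point 1 (a standard textbook computation, added here only for reference): for the singlet state ##|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle_A|{\downarrow}\rangle_B - |{\downarrow}\rangle_A|{\uparrow}\rangle_B\right)##, Bob's reduced density matrix is
$$\rho_B = \mathrm{Tr}_A\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\mathbb{1},$$
and it remains ##\tfrac{1}{2}\mathbb{1}## whatever observable Alice measures (or whether she measures at all), so Bob's marginal statistics carry no trace of Alice's choice.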

I realize some of the posters here may not agree with my assessments, no problem there. But hopefully I am a little further along than before. :smile:
 

vanhees71

Science Advisor
Insights Author
Gold Member
Again, as soon as photons are involved, there's no other way than QED to describe them adequately. Of course QFT, like any flavor of QT, contains everything about entanglement.

Also read again in your QFT books what "local interaction" and "microcausality" mean, in contradistinction to long-ranged correlations due to entanglement. This resolves the apparent contradiction between the possibility of long-range correlations described by entanglement on the one hand and the fact that no instantaneous or even acausal influence of A's measurement on B's photons is necessary on the other.
 
1) The no-signaling theorem: the marginal probability of an outcome for Bob does NOT change due to Alice's choice of measurement. In short, Bob's outcomes are always random.
Yes.

2) Experimentally demonstrated quantum nonlocality: the state of Bob's particle DOES change due to Alice's choice of measurement. In short, Bob's particle is cast into a pure state relative to Alice's outcome.
I think a better phrasing would be that the correlations between Bob's and Alice's measurement outcomes can violate the Bell inequalities. Putting it in terms of "change of state" raises issues (discussed already quite thoroughly in some recent thread or other) that don't need to be raised to describe the experimental facts of quantum nonlocality.
 

DarMM

Science Advisor
Gold Member
Yes sure, but it's an established fact after 100 years of testing QT. For me that's the only conclusion I can come to in view of all the Bell tests disproving local deterministic HV theories and confirming QT.
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value. It means, for example, that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}##, then you will get ##\uparrow## with probability ##1##.
Taking it otherwise, that is, as meaning the particle actually has angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
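
To make the sample-space point concrete (a minimal textbook example, nothing beyond what's said above): the same preparation ##|\uparrow_{z}\rangle## yields
$$P(\uparrow_z) = |\langle\uparrow_z|\uparrow_z\rangle|^2 = 1, \qquad P(\uparrow_x) = |\langle\uparrow_x|\uparrow_z\rangle|^2 = \tfrac{1}{2},$$
so which sample space the probabilities refer to is fixed only once the measured observable (##S_z## or ##S_x##) has been chosen.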
 

DrChinese

Science Advisor
Gold Member
1. Again, as soon as photons are involved, there's no other way than QED to describe them adequately.

2. Of course QFT, like any flavor of QT, contains everything about entanglement.
1. And yet, entanglement fundamentally does not require photons and does not require QFT. Hard to make that a case for the more complex theory. The old case of spin-1/2 electrons already raises the fundamental issues of quantum nonlocality that we wish to resolve.

2. I guess I can't dispute that. But I certainly saw doubts about the entanglement side from a number of the posters. Apparently there are some entanglement issues that are not fully resolved. Although you seem satisfied, so that is a good recommendation.
 

Lord Jestocost

Gold Member
2018 Award
Kurt Gottfried and Tung-Mow Yan in “Quantum Mechanics: Fundamentals” (Second Edition):

“Thus it is finally a matter of taste whether one calls quantum mechanics local or not. In the statistical distribution of measurement outcomes on separate systems in entangled states there is no hint of non-locality. Quantum theory does not offer any means for superluminal signaling. But quantum mechanics, and by that token nature itself, does display perfect correlations between distant outcomes, even though Bell's theorem establishes that pre-existing values cannot be assigned to such outcomes and it is impossible to predict which of the correlated outcomes any particular event will reveal.” [emphasized by LJ]
 

vanhees71

Science Advisor
Insights Author
Gold Member
Indeed, but the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value. It means, for example, that we cannot think of the state ##|\uparrow_{z}\rangle## as actually representing a particle with angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, for we must select the sample space via our measurement choice. It only means:
If you choose to measure ##S_{z}##, then you will get ##\uparrow## with probability ##1##.
Taking it otherwise, that is, as meaning the particle actually has angular momentum ##\frac{\hbar}{2}## about the ##z##-axis, leads to nonlocality issues.

It is in this sense that we are led to the measurement "creating the value". I don't think it is sloppy language.
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value"

is what leads to the misunderstandings documented by @DrChinese 's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated, there's no temporal order at all!), because you don't need the collapse proponent's argument that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, as described by the delayed-choice setups of Bell tests (and these are realized in various "quantum-eraser" setups in real-world labs nowadays!), allow you to choose different subensembles based on the measurements recorded in the measurement protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.
 

DarMM

Science Advisor
Gold Member
Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
 

Lord Jestocost

Gold Member
2018 Award
I think this formulation: "the fact that your choice of measurement selects the sample space is what leads to the fact that the measurement "creates" in some sense the value"

is what leads to the misunderstandings documented by @DrChinese 's point of view. Taking the minimal statistical interpretation seriously, you should rather say: "the choice of measurements selects the ensembles you consider, given an ensemble defined by the preparation of the state".

In this way you get rid of the misunderstanding that the local measurement at A must lead to an instantaneous influence on the measured entities at B. It is in accordance with the fact that the temporal order of the measurements does not play any role (if the measurement events are space-like separated, there's no temporal order at all!), because you don't need the collapse proponent's argument that the measurement at A causally affects the measurement at B. Both A and B can choose what they measure, and all you know from the state preparation are the probabilities for the outcomes of measurements at A and B. Sufficiently detailed measurement protocols and clever arrangements, as described by the delayed-choice setups of Bell tests (and these are realized in various "quantum-eraser" setups in real-world labs nowadays!), allow you to choose different subensembles based on the measurements recorded in the measurement protocol.

For me the only consistent interpretation, i.e., obeying both the locality/microcausality principle of the usual QFT formalism and the possibility of stronger-than-classically-possible long-ranged correlations described through entanglement, is the minimal statistical interpretation, based on the assumption that the random nature of the outcome of measurements (no matter whether you describe them in idealized (gedanken) setups as complete measurements or more realistically, taking into account the non-ideality of real-world measurement devices in terms of the POVM formalism) is inherent in nature and not due to incomplete knowledge of the state as in classical statistical physics.

The important lesson to be learned from all these discussions is that, when in doubt on metaphysical concepts, which are necessarily unsharp compared to the scientific content of a theory, you have to go back to the successful formalism and find a metaphysical interpretation that is consistent with it, i.e., the empirically well-established facts about the behavior of nature as analyzed for over 100 years since the first discovery of quantum aspects of nature in 1900. The great success of modern natural science methodology is due to the decoupling of science from philosophy, and as far as I can see, philosophy can only a posteriori build a metaphysical world view after the scientific issues are clear, and then it might be of some value also for the understanding of the implications of the scientific discoveries for a more general worldview.
Much wording around a simple question: does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagen camp denies that an observable has any value before the measurement.)
 

vanhees71

Science Advisor
Insights Author
Gold Member
I'm not sure it is this easy. So the initial preparation gives one an ensemble ##\rho##. When one selects a measurement you're saying it "selects the ensemble" you consider. What is the relation of this ensemble to the original ensemble given by a preparation? A subensemble or what?
In a highly idealized way you start with a preparation procedure. E.g., you prepare polarization-entangled (say, singlet-state) photon pairs via parametric downconversion. Then A and B measure the polarization of the two photons. If both choose to measure the polarization in the same direction, each by itself sees just ideally unpolarized photons. With sufficiently precise time stamps in each observer's measurement protocol, they can relate the outcomes of their polarization measurements to each entangled pair and later select subensembles, i.e., they can select all pairs where A measured horizontal polarization and look at what B found for the partner photon, finding the 100% correlation: whenever A finds H, B finds V, and vice versa. It's of course a subensemble half as large as the original. The other partial ensemble is just complementary, and the total ensemble simply reflects that each single photon is perfectly unpolarized.

Of course, for a more realistic evaluation of real-lab experiments you have to take into account that all preparations and measurements are non-ideal, and you have to carefully evaluate the systematic and statistical errors. In the formalism that can (sometimes) be described by the POVM formalism. I'm not arguing against the POVM formalism but against the claim that it's something going beyond standard Q(F)T in the minimal interpretation.

Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations. You can choose appropriately different measurement setups to also demonstrate the violation of Bell's inequality. This is of course only possible on ensembles, because you need measurements of incompatible observables, which cannot be realized on a single system but only subsequently on ensembles of equally prepared systems. All these are indeed probabilistic statements about the outcomes of measurements and nothing more.
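
As a toy illustration of this subensemble bookkeeping (my own numerical sketch, assuming only the standard singlet correlator ##E(\alpha,\beta) = -\cos 2(\alpha-\beta)## for polarization-entangled pairs; the function name is made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_singlet_pairs(alpha, beta, n):
    """Sample n outcome pairs (+1/-1) for polarizers at angles alpha, beta,
    reproducing singlet statistics: 50/50 marginals on each side (no
    signaling) and correlator E = -cos 2(alpha - beta)."""
    a = rng.choice([1, -1], size=n)                      # Alice's marginal: unpolarized
    p_equal = 0.5 * (1.0 - np.cos(2 * (alpha - beta)))   # P(b = a | a)
    b = np.where(rng.random(n) < p_equal, a, -a)
    return a, b

a, b = sample_singlet_pairs(0.0, 0.0, 100_000)  # same direction for A and B
print(np.mean(a), np.mean(b))        # both ~ 0: each photon looks unpolarized
print(np.mean(a * b))                # ~ -1: perfect anticorrelation
sub_b_given_a_plus = b[a == 1]       # subensemble selected from the protocol
print(np.mean(sub_b_given_a_plus))   # ~ -1: "whenever A finds H, B finds V"
```

At equal angles each marginal stays 50/50 while the subensemble selected by A's outcomes shows B finding the opposite result essentially every time: exactly the H/V bookkeeping described above.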
 

vanhees71

Science Advisor
Insights Author
Gold Member
Much wording around a simple question: does an observable of a quantum system have the same value just before the measurement as is obtained by the measurement, or not? (The Copenhagen camp denies that an observable has any value before the measurement.)
Within the minimal interpretation, which is a no-nonsense flavor of Copenhagen, whether an observable has a determined value or not depends on the prepared state. If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in. That's it. There's no necessity for any additional elements within QT. It's a complete description of what's observed, including the randomness of the outcomes of measurements of observables that are not determined by the state preparation.
 

DarMM

Science Advisor
Gold Member
Of course A and B can choose arbitrary directions for their polarization measurements, and you can still select subensembles and evaluate the correlations
But one's choice of measurement produces a complete sample space that cannot be understood as a subensemble of the preparation. The state ##\rho## and one's choice of a context give a complete sample space that cannot be seen as a subensemble of another; that's basically what the CHSH setup tells you, as does Fine's theorem.

That's what's confusing about QM: the preparation alone is not an ensemble, only the preparation together with a context is.
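
For reference, the step that fails without a common sample space (standard CHSH algebra, nothing beyond the results cited above): if ##a, a', b, b' = \pm 1## were all random variables on one common sample space, then pointwise
$$a(b - b') + a'(b + b') = \pm 2,$$
since one bracket vanishes and the other equals ##\pm 2##; taking expectations gives ##|\langle ab\rangle - \langle ab'\rangle + \langle a'b\rangle + \langle a'b'\rangle| \le 2##. Quantum mechanics reaches ##2\sqrt{2}## at suitable angles, so the four measured correlators cannot be marginals of one joint distribution, which is the content of Fine's theorem.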
 

DarMM

Science Advisor
Gold Member
If it does not have a determined value, you only know the probabilities for the outcomes of measurements of this observable, given the state the measured system is prepared in
Yes, but the Kochen-Specker theorem, the CHSH inequality, and Fine's theorem show you that even though ##|\uparrow_{z}\rangle## will give ##\frac{\hbar}{2}## with certainty when measured along the ##z##-axis, it does not follow that the particle actually possesses ##\frac{\hbar}{2}## along the ##z##-axis prior to the measurement.

I mean, in a certain sense one just needs the Kochen-Specker theorem alone. If you cannot assign pre-existent values to variables, but one nevertheless obtains a value in the measurement, then how do you escape the conclusion that the value arises in the measurement?

I mean, you are either saying there was a value prior to measurement or there wasn't. If there was, you run into contextuality problems and possible fine-tuning, and you're essentially talking about a hidden variable theory. If you are saying the latter, then the value is literally created by the measurement process. I don't see what else one could be saying.
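
For anyone following along, the obstruction can be seen in a compact nine-observable example (the standard Mermin-Peres square; I add it only as an illustration of the KS argument): consider the two-qubit observables
$$\begin{array}{ccc} \sigma_x \otimes \mathbb{1} & \mathbb{1} \otimes \sigma_x & \sigma_x \otimes \sigma_x \\ \mathbb{1} \otimes \sigma_y & \sigma_y \otimes \mathbb{1} & \sigma_y \otimes \sigma_y \\ \sigma_x \otimes \sigma_y & \sigma_y \otimes \sigma_x & \sigma_z \otimes \sigma_z \end{array}$$
Each row and each column consists of mutually commuting observables; the product along each row and along the first two columns is ##+\mathbb{1}##, while the product along the third column is ##-\mathbb{1}##. Any assignment of non-contextual pre-existing values ##\pm 1## would have to satisfy all six product constraints at once, but multiplying all nine values row-wise gives ##+1## while column-wise gives ##-1##: a contradiction.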
 

vanhees71

Science Advisor
Insights Author
Gold Member
I don't understand this statement. Of course all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.

Concerning CHSH, I think the example in Wikipedia is described correctly. You indeed need "four subexperiments" distinguished by different relative orientations of the polarization measurements. You can of course not do all four measurements on a single realization. So each measurement setup selects its own mutually exclusive "subensembles"; the total ensemble is given by the same state preparation in all subexperiments.
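
Plugging in the standard angles of such a test (a short numerical check, assuming only the singlet correlator ##E(\alpha,\beta) = -\cos 2(\alpha - \beta)##; not taken from the Wikipedia page itself):

```python
import numpy as np

def E(alpha, beta):
    """Singlet polarization correlator for analyzer angles alpha, beta (radians)."""
    return -np.cos(2 * (alpha - beta))

# four incompatible subexperiments, one correlator each
a, a2 = 0.0, np.pi / 4            # Alice's two settings
b, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's two settings
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # -2.8284..., i.e. |S| = 2*sqrt(2) > 2
```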
 

DarMM

Science Advisor
Gold Member
I don't understand this statement. Of course all subensembles together give the prepared ensemble (everything in an idealized sense of no losses). The choice of the subensembles of course depends on the specific measurement setup.
But it is literally not true, due to the structure of quantum probability. By Fine's theorem, the variables in a CHSH test cannot all be considered as defined on a common sample space. Thus they cannot be considered to be drawn from a common ensemble. If they're not marginals on a common sample space, they cannot be thought of as subensembles. See Streater's book "Lost Causes in Theoretical Physics", Chapter 6. They're not sub-experiments.

However the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existent value or they do but they are contextual. Which one do you take to be true? If the former how do you avoid the conclusion that the measurement creates the value?
 

RUTA

Science Advisor
Insights Author
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants. Although, I must admit, I'm almost as bad when I point out that a desire for dynamical explanation, e.g., via causal mechanisms and/or hidden variables, is what has to be abandoned and replaced by constraint-based explanation (with no dynamical counterpart). For many people, that's equivalent to telling them to forget ontological explanation altogether :-)
 
It's clear that vanhees71 isn't bothered by entanglement because it has a precise mathematical description and is empirically verified. He doesn't require any ontological explanation for entanglement and is confused by the fact that anyone else does. What confuses me is that he participates in foundations discussions, given his lack of appreciation for the ontological motives of the participants.
Forgive the psychoanalysis, but from my experience with such matters, the fact that he sees that other serious physicists are worried about these issues and continues to participate in good faith demonstrates to me that he either feels he can actually alleviate our worries through his explanations, or that - even though he believes his stance is pragmatically justified - he has some uncertainty regarding this important issue about which he subconsciously wishes to learn more; and what better place to learn than directly from those honestly expressing their doubts?
 

DarMM

Science Advisor
Gold Member
Well this just broaches the topic of vanhees realism once more. All we can say is that for suitably prepared initial posts there are probabilities of various vanhees responses formed as irreversible records on our monitors. To go beyond this and posit an ontic vanhees is unwarranted by the formalism.

Considering the physicsforums servers are in America and there are vanhees observations in Frankfurt, the correlation between these would require a nonlocally extended ontic vanhees. There is literally no other explanation.
 
In any case, I am happy that @vanhees71 does continue to discuss these matters because it helps to demonstrate - from the more rigorous contrary position - exactly how fragile the minimal interpretation actually is. The demonstration thereof in the public domain may naturally elicit feelings of uneasiness among physicists - who are not used to encountering such fragile arguments w.r.t. physics - but it is necessary for them to take these feelings seriously, because we are talking about the currently accepted foundations of physics: all of (theoretical) physics based on these foundations is what is at stake.

In the face of this uneasiness the scientist is actually being forced to make an ethical decision which displays his character: either he confronts the matter head on and honestly admits that he doesn't know, or he pretends to know and so abandons the very principles of science; those who opt for the latter choice are easy to detect, because they will then tend even to begin arguing for censorship of further discussion. Self-censorship is the beginning of the death of science; it is very interesting to note that Peter Woit's latest blog post is also on this very topic.

As Feynman puts it, the scientist - funded by and therefore having obligations to society - actually only has one choice: the scientist must fearlessly admit that he does not know and live with the uncertainty that his beloved theory might actually be wrong: doing anything else is just an exercise in self-deception and - even worse - deception of others, including deception of the public; Smolin has made this very point clearer than anyone else I have encountered either in the popular or professional literature.

As Feynman says: "I can live with doubt, and uncertainty, and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers, and possible beliefs, and different degrees of certainty about different things, but I'm not absolutely sure of anything. There are many things I don't know anything about, such as whether it means anything to ask 'Why are we here?' I might think about it a little bit, and if I can't figure it out then I go on to something else. But I don't have to know an answer. I don't feel frightened by not knowing things, by being lost in the mysterious universe without having any purpose — which is the way it really is, as far as I can tell. Possibly. It doesn't frighten me."
 

Lord Jestocost

Gold Member
2018 Award
Regarding statistical interpretations of quantum mechanics, Paul Busch, Pekka J. Lahti and Peter Mittelstaedt put it in the following way in chapter IV.3, “Ensemble and Hidden Variable Interpretations”, in “The Quantum Theory of Measurement” (Second Revised Edition, Springer):

“The statistical interpretations of quantum mechanics can be divided into two groups, the measurement statistics and the statistical ensemble interpretations (Sects. III.3.2-3). These interpretations rely explicitly on the relative frequency interpretation of probability, and in them the meaning of probability is often wrongly identified with the common method of testing probability assertions. ...

In the measurement statistics interpretation the quantum mechanical probability distributions, such as ##p^A_T##, are considered only epistemically as the distributions for measurement outcomes. ... In this pragmatic view quantum mechanics is only a theory of measurement outcomes providing convenient means for calculating the possible distributions of such outcomes. It may well be that such an interpretation is sufficient for some practical purposes; but it is outside the interest of this treatise to go into any further details, for example, to study the presuppositions of such a minimal interpretation. The measurement problem is simply excluded in such an interpretation. ...

The ensemble interpretation of quantum mechanics describes individual objects only statistically as members of ensembles. This interpretation is motivated by the idea that each physical quantity has a definite value at all times. Thus no measurement problem would occur in this interpretation. Some merits of the ensemble interpretation of quantum mechanics are put forward, for example, in [Bal70,88, d'Espagnat 1971]. But these merits seem to consist only of a more or less trivial avoiding of the conceptual problems, like the measurement problem, arising in a realistic approach. In fact it is only in the hidden variable approaches that one tries to take seriously the idea of the value-definiteness of all quantities.”
 

vanhees71

Science Advisor
Insights Author
Gold Member
But it is literally not true, due to the structure of quantum probability. By Fine's theorem, the variables in a CHSH test cannot all be considered as defined on a common sample space. Thus they cannot be considered to be drawn from a common ensemble. If they're not marginals on a common sample space, they cannot be thought of as subensembles. See Streater's book "Lost Causes in Theoretical Physics", Chapter 6. They're not sub-experiments.

However the main point here is how you react to the Kochen-Specker theorem. It says that observables either have no pre-existent value or they do but they are contextual. Which one do you take to be true? If the former how do you avoid the conclusion that the measurement creates the value?
Again I don't understand. It must be possible to do the described experiments to test the CHSH relation. If you cannot do this within QT, you cannot even define the quantities entering this relation to test it.

In the Wikipedia example quoted above, four mutually incompatible experimental setups are necessary. Each experiment very clearly subdivides an ensemble into subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment.

Since the measurements are mutually incompatible, you need to prepare four ensembles in the same state, perform one of the four measurements to divide each of them into the appropriate subensembles, and then combine the probabilistic outcomes to check the CHSH relation.

It's like the simpler example of testing the uncertainty relation ##\Delta x \Delta p \geq \hbar/2##. Of course you cannot measure position and momentum accurately on one particle. Thus you need to prepare a first ensemble of particles in the state ##\hat{\rho}##, do a very accurate position measurement, and evaluate ##\Delta x##. Then you have to prepare a second ensemble, again in the same state ##\hat{\rho}##, measure momentum very accurately, and evaluate ##\Delta p##. With these two incompatible measurements together you can test the uncertainty relation for particles prepared in the state ##\hat{\rho}##.
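
As a quick consistency check on this protocol (a standard Gaussian example, not tied to any particular experiment): for the minimum-uncertainty wave packet ##\psi(x) \propto e^{-x^2/(4\sigma^2)}## the two separately measured ensembles yield
$$\Delta x = \sigma, \qquad \Delta p = \frac{\hbar}{2\sigma}, \qquad \Delta x\,\Delta p = \frac{\hbar}{2},$$
so the position ensemble and the momentum ensemble together saturate the bound.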
 

DarMM

Science Advisor
Gold Member
Each experiment very clearly subdivides an ensemble into subensembles according to the polarization measurements on the two photons. If this were not possible, you couldn't do this very experiment
That's not the point. The point is that the ensembles found in each of the four measurement choices have mathematical features preventing them from being understood as selections from one common ensemble. If you measure ##A, C## and you measure ##B, D##, the results cannot be thought of as subensembles of one ensemble, nor as alternate coarse-grainings of a common ensemble. They are simply two different ensembles. That is a mathematical fact, reflected in the absence of a Gelfand homomorphism subsuming all four observables.

However, even the whole CHSH setup is a side point. The main point is the KS theorem, which says either the values don't pre-exist the measurement or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
 

Mentz114

Gold Member
However, even the whole CHSH setup is a side point. The main point is the KS theorem, which says either the values don't pre-exist the measurement or they do but are contextual. It's one or the other. If you take the option that they don't pre-exist the measurement, then how do you avoid the measurement creating them?
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamic variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?

The remote influence vs non-local correlations argument cannot be settled by the formalism.
The remote influence believers should explain what actually is passed between the locations and how this can be experimentally detected.
 

DarMM

Science Advisor
Gold Member
The problem that makes this unconvincing to me is that a projective measurement is not a measurement of a state but an enforced change of state. Measuring a dynamic variable is never projective. Given that the 'value' before projection is irrelevant to the dynamics (the angular momentum is preserved), isn't KS just saying that we cannot assign values because we cannot know them?
No, this is pretty clear in its actual proof.
 

Mentz114

Gold Member
No, this is pretty clear in its actual proof.
The proof only applies to projections, not measurements.
What is the point in assigning values to irrelevant and unknowable properties?
No mathematical theorem can prove the existence or non-existence of a real thing.
 
