Measurement results and two conflicting interpretations

A. Neumaier

Summary
An illustration is given of the differences in the interpretation of measurement results in the thermal interpretation and in Born's statistical interpretation.
The following two examples illustrate the differences between the thermal interpretation and Born's statistical interpretation in how they treat measurement results.

1. Consider some piece of digital equipment with a 3-digit display measuring some physical quantity ##X## using ##N## independent measurements. Suppose the measurement results were 6.57 in 20% of the cases and 6.58 in 80% of the cases. Every engineer or physicist would compute the mean ##\bar X= 6.578## and the standard deviation ##\sigma_X=0.004## and conclude that the true value of the quantity ##X## deviates from ##6.578## by an error of the order of ##0.004N^{-1/2}##.
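As a hedged illustration (not part of the original post; it assumes Python with numpy and a purely illustrative value of ##N##), the error analysis described above can be reproduced as follows:

```python
# Hedged illustration of the classical error analysis in example 1:
# N readings, 20% of them 6.57 and 80% of them 6.58.
import numpy as np

N = 1000                                     # hypothetical number of measurements
readings = np.array([6.57] * (N // 5) + [6.58] * (4 * N // 5))

mean = readings.mean()                       # 6.578
sigma = readings.std()                       # 0.004 (spread of the individual readings)
std_error = sigma / np.sqrt(N)               # error of the mean, ~ 0.004 * N**(-1/2)

print(mean, sigma, std_error)
```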

2. Consider the measurement of a Hermitian quantity ##X\in\mathbb{C}^{2\times 2}## of a 2-state quantum system in the pure up state, using ##N## independent measurements, and suppose that we obtain exactly the same results. The thermal interpretation proceeds as before and draws the same conclusion. But Born's statistical interpretation proceeds differently and claims that there is no measurement error. Instead, each measurement result reveals one of the eigenvalues ##x_1=6.57## or ##x_2=6.58## in an unpredictable fashion, with probabilities ##p=0.2## and ##1-p=0.8##, up to statistical errors of order ##O(N^{-1/2})##. For ##X=\begin{pmatrix}6.578 & 0.004 \\ 0.004 & 6.572\end{pmatrix}##, both interpretations of the results for the 2-state quantum system are consistent with theory. However, Born's statistical interpretation deviates radically from engineering practice, without any apparent necessity.
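As a hedged numerical check (not part of the original post; it assumes Python with numpy), the following sketch confirms that the matrix quoted above has eigenvalues 6.57 and 6.58, that the Born probabilities in the up state are 0.2 and 0.8, and that the q-expectation equals 6.578:

```python
# Hedged check of the 2x2 example: eigenvalues, Born probabilities, q-expectation.
import numpy as np

X = np.array([[6.578, 0.004],
              [0.004, 6.572]])

eigvals, eigvecs = np.linalg.eigh(X)           # eigenvalues ~ [6.57, 6.58]
up = np.array([1.0, 0.0])                      # pure up state

born_probs = np.abs(eigvecs.conj().T @ up)**2  # |<x_k|up>|^2 ~ [0.2, 0.8]
q_expectation = up @ X @ up                    # <up|X|up> = 6.578

print(eigvals, born_probs, q_expectation)
```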

Clearly, the thermal interpretation is much more natural than Born's statistical interpretation, since it needs no other conventions than those that are valid for all cases where multiple measurements of the same quantity produce deviating results.
 

vanhees71

1. is of course correct and has nothing to do with QT to begin with. It describes the measurement device's function. It's obviously constructed so as to measure expectation values of ##X## on an ensemble of ##N## (hopefully uncorrelated but equally) prepared systems. That the measurement then gives an expectation value, with the standard deviation as an estimate of the statistical accuracy, is of course a tautology, because that's how the measurement device has been constructed.

2. A quantity ##X## is not Hermitian. It's something defined by a measurement. In the quantum formalism it is described by a self-adjoint operator ##\hat{X}## (which in finite-dimensional unitary spaces is the same as Hermitian), which can be represented for a two-level system by a matrix in ##\mathbb{C}^{2 \times 2}##, given a basis of the 2D unitary vector space.

Born's statistical interpretation does not claim that there is no measurement error but, as usual when formulating a theory, first discusses the case in which the measurement errors are so small that they can be neglected. If you deal with imprecise measurements, the analysis becomes much more complicated, since then you deal not only with the quantum statistics a la Born but also with the statistics (and systematics!) of the inaccurate measurement device. That cannot be theorized about in general but has to be treated individually for each experiment; it is thus not the subject of theoretical physics but part of a correctly conducted evaluation of experimental results for the given (preparation and) measurement procedure.

As already detailed in a posting in another thread (forking threads is of course also a good way to confuse a discussion), the thermal interpretation is NOT equivalent to the standard interpretation. It's only equivalent if you use a measurement device as you describe here. If you don't resolve individual accurate measurements but simply measure coarse-grained expectation values by construction, then of course you get these expectation values (with the usual statistics, provided the errors are Gaussian distributed, as you seem to assume tacitly here). But whenever physicists do resolve individual measurement results, the standard interpretation has not been falsified, i.e., you find the eigenvalues of the Hermitian operator rather than q-expectation values, and the statistics is consistent with the probabilities predicted a la Born.

Just saying that sufficiently coarse-grained ensemble-averaged measurements deliver q-expectation values is not a new interpretation but a tautology! It doesn't solve any (real or philosophical) interpretational problem of QT!
 

A. Neumaier

(forking threads is of course also a good way to confuse a discussion)
I am forking specific problems from the discussion of generalities. The other thread is so long that specific special situations discussed get drowned, and searching for them becomes quite time-consuming.
 

A. Neumaier

It doesn't solve any (real or philosophical) interpretational problem of QT!
Of course not for you, since you always argue that there are no such problems. Nonexistent problems cannot be solved. I am surprised that you take part at all in discussions about the foundations, since you always insist that there is nothing to be discussed there.

The thermal interpretation is for those who, like me, find the standard interpretation unsatisfactory and look for solutions of problems you don't even perceive.
 

vanhees71

But the thermal interpretation as presented in #1 of this thread is a tautology, because you assume a measurement device constructed to measure expectation values, which is of course sometimes possible but is not what the issue of "interpretational problems" is about. Then it's just empty.

On the other hand, you claim that what's always measured are q-expectation values, but that's obviously contradicting observational practice in labs. The experimentalists are very well able to measure observables of individual quantum systems accurately, and what comes out so far is what's predicted by standard QT and not q-expectation values.

I'm taking part in discussions about so-called "interpretational problems of QT" because I find them interesting. It's a social phenomenon that's nearly incomprehensible to me. I can understand why Einstein was fighting against the Copenhagen collapse doctrine of his time, because it fundamentally contradicts the relativistic causality principle. Since this issue, however, has been resolved for quite a while by, e.g., using collapse-free interpretations like the minimal interpretation or many worlds, it's enigmatic to me why there's still an issue.

Even Weinberg in his newest textbook tries to address an apparent issue with measurement problems by showing that it's impossible to derive Born's rule from the other postulates of QT, but what the point of this task is supposed to be is not clear to me either. Born's rule is simply a postulate of the theory, discovered early in the development of modern QT, because it very naturally resolves the wave-particle paradox of the "old QT". That it's the ##|\psi|^2## rule that correctly provides the probabilities has also been established quite recently by precise experiments, ruling out higher-order contributions to "interference terms", e.g., in

https://arxiv.org/abs/1612.08563

For me the final deathblow to the "measurement problem" is Bell's work on local deterministic HV models and the huge body of research by both theorists and experimentalists following it. The many very accurate Bell tests so far clearly show that the inseparability predicted by QT, due to the possibility of entanglement of far-distant parts of composite quantum systems, is real. The stronger-than-classical far-distant correlations between (indetermined) observables are for me an empirically established fact.

In short: what QT in the minimal statistical interpretation predicts describes correctly, with high precision, all so far established empirical facts about Nature. Thus there is no problem with the theory or the theory's interpretation (interpretation in the scientific sense of providing empirically testable predictions) from a physics point of view.

Maybe there's a problem from some philosophical points of view, but on the other hand, why should Nature care about our philosophical prejudices about how she should behave ;-)).

Last but not least, the irony is that there indeed still is a real problem in modern physics, and that's the inconsistency of General Relativity and Quantum Theory, i.e., there's no working quantum theory of the gravitational interaction (which may well mean that what's needed is a quantum theory of spacetime, given that it is pretty hard to find relativistic theories of gravitation not related to spacetime geometry). One might argue that GR (or some modification of it, but still a classical theory) could be all there is concerning gravitation, but then there's the problem of the unavoidable singularities, like black holes or the big-bang singularity of cosmology. Although my astro colleagues have just made a "picture of a black hole", that doesn't mean that we understand everything related to black holes. In particular, it seems that classical relativistic theories of gravitation, including GR (which must be very close to any right theory, at least as far as classical theories can be, since there is more and more accurate evidence for it being right; in particular the celebrated EHT result is in accordance with the expectations of GR, and this is indeed in the strong-gravitation limit of a "mass monster" of 6.3 billion solar masses), unavoidably have this issue with the singularities, and there indeed these theories break down.

Another, maybe related, fundamental issue is the question of the nature of dark energy. Recent redshift-distance measurements based on the use of quasars as standard candles, which are in accordance with other standard candles like type Ia supernovae but provide more precise data at larger redshifts, indicate that the simple assumption of a "cosmological constant" is probably wrong, which would probably also help to solve the puzzle about the discrepancies between the values of the Hubble constant when measured in different ways:

https://www.aps.org/publications/apsnews/201805/hubble.cfm
https://doi.org/10.1038/s41550-018-0657-z

I think these are the true big scientific problems in contemporary physics, and not some philosophical quibbles with the interpretation of the very well established modern QT. Maybe these problems indeed lead to a modification and extension not only of the still classical spacetime description a la GR or variations thereof but also of QT. Who knows? But I don't think we will find a solution of these scientific problems by beating the dead horse of the so-called "measurement problem of" or "interpretational issues with" QT.
 

A. Neumaier

But I don't think we will find a solution of these scientific problems by beating the dead horse of the so-called "measurement problem of" or "interpretational issues with" QT.
But the horse never died, and, as I already hinted at in my PF interview, the measurement problem may well be closely related to a solution of the quantum gravity problem. Indeed, these problems you mention all require that one considers the whole universe (or at least huge parts of it that allow astronomers to make breakthrough pictures from very far away) as a single quantum system - something the statistical interpretation cannot do. Unlike you, both Weinberg and Peres are well aware of this. For Peres see this post, which you never commented on.

you assume a measurement device constructed to measure expectation values
No, I didn't make this assumption; you interpolated it. For any measurement device giving different results upon repetition, one always draws this conclusion. This is learnt in the first experimental physics exercises, where it is applied to repeated measurements of masses or currents, and one asks for the best value - the value closest to the true one. A spin measurement repeated on an ensemble of identically prepared spins gives different 2-valued results upon repetition, hence is of this kind. The individual results are not reproducible, only the statistics is, just as with mass or current measurements. And the 2-valuedness may with good reason be regarded as a property of the detector, not of the continuous spin! There is no stringent reason to assume that this law of statistics breaks down in microphysics.

A photodetector is also such a digital device, since it can respond only in integer values, no matter how weak the field is. But the intensity is a continuous parameter. Hence at low intensity, a photodetector necessarily stutters. See post #614 in the main thread.

On the other hand, you claim that what's always measured are q-expectation values, but that's obviously contradicting observational practice in labs. The experimentalists are very well able to measure observables of individual quantum systems accurately, and what comes out so far is what's predicted by standard QT and not q-expectation values.
No matter how accurately you measure the position of a dot created by a single event in a Stern-Gerlach experiment, it always gives only a very inaccurate measurement of the incident field intensity, which is given by a q-expectation.
 
However, Born's statistical interpretation deviates radically from engineering practice, without any apparent necessity.
The first reason is the need to explain interference. If the uncertainty is just epistemic uncertainty due to measurement-device error/resolution limits, then we would expect the 20-80 split between 6.57 and 6.58 to be insensitive to whether the device is placed in the near field or far field of the (beam) source. But in QM it is possible for the eigenstates to interfere, and so the probability of measuring 6.57 or 6.58 can vary based on this path length (and varies with a high regularity that is very well predicted by summing over histories).
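As a hedged numerical sketch of this point (not part of the original post; it assumes numpy and a hypothetical two-path state whose relative phase ##\phi## is set by the path-length difference), the Born probabilities of the two outcomes oscillate with ##\phi##, something a fixed epistemic read-out error could not reproduce:

```python
# Hypothetical two-path state with relative phase phi, measured with the same
# observable X as in the opening post; the outcome probabilities depend on phi.
import numpy as np

X = np.array([[6.578, 0.004],
              [0.004, 6.572]])
eigvals, eigvecs = np.linalg.eigh(X)              # outcomes ~6.57 and ~6.58

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

for phi in np.linspace(0.0, np.pi, 5):
    psi = (up + np.exp(1j * phi) * down) / np.sqrt(2)    # relative phase from path length
    probs = np.abs(eigvecs.conj().T @ psi) ** 2          # Born probabilities for x1, x2
    print(f"phi = {phi:4.2f}   P({eigvals[0]:.2f}) = {probs[0]:.3f}   P({eigvals[1]:.2f}) = {probs[1]:.3f}")
```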

I don't question you already know this, but I think the reason you are untroubled by this relative to others is due to the superdeterminist underpinnings of the Thermal Interpretation, which have not been fully articulated. I think without being more explicit on this, you and others will just continue to talk past each other.

The second reason is the need to explain correlations between detectors.
 
No matter how accurately you measure a dot created by a single event in a Stern-Gerlach experiment, it always gives only a very inaccurate measurement of the incident field intensity, which is given by a q-expectation.
It is difficult to understand how the SG detectors can be so inaccurate as to wrongly report +1 or -1 when the true value is 0, yet at the same time so reliable that A) one or the other detector always misreports on every single experimental run and B) they never both misreport at the same time.
The discussion in this previous thread seems relevant:


[Edit: Several posts and portions of posts from this thread referring specifically to the SG experiment have been moved to the thread linked above. Please direct further replies regarding the SG experiment to that thread.]
 

atyy

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.
 

A. Neumaier

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.
Which statement of the OP do you refer to? And how is it related to your second sentence?

My statements in the OP refer to Born's rule in the form stated in Wikipedia (and in many textbooks).

It doesn't matter which observable was measured; the concrete matrix given was just an example that the numbers in the example are consistent with some observable.
 

atyy

Born's statistical interpretation does not claim that there is no measurement error but, as usual when formulating a theory, first discusses the case in which the measurement errors are so small that they can be neglected. If you deal with imprecise measurements, the analysis becomes much more complicated, since then you deal not only with the quantum statistics a la Born but also with the statistics (and systematics!) of the inaccurate measurement device.
I don't agree with all of @vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.
 

A. Neumaier

I don't agree with all of vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.
Thus you disagree with the statement in Wikipedia and many other textbooks, e.g., Landau and Lifschitz? They say unanimously:
Wikipedia said:
if an observable corresponding to a self-adjoint operator ##A## with discrete spectrum is measured [...] the measured result will be one of the eigenvalues ##\lambda## of ##A##.
No approximation is implied; nothing indicates that this would only apply to idealized measurements.
 

vanhees71

The OP is incorrect. Standard QM does claim that even if there is no measurement error, results will be probabilistic, but it does not exclude other sources of uncertainty, including uncertainty in which observable was measured.
Indeed, and it claims that the results of measurements are the eigenvalues of the self-adjoint operators representing the observables and not the q-expectation values. The success of QT in describing what's really measured obviously shows that this is an empirically correct assumption, and it refutes the postulate of the "thermal interpretation" that the results of measurements are q-expectation values.
 

vanhees71

Thus you disagree with the statement in Wikipedia and many other textbooks, e.g., Landau and Lifschitz? They say unanimously:

No approximation is implied; nothing indicates that this would only apply to idealized measurements.
There is no contradiction. Of course, in theoretical physics one first discusses idealized measurements, where the error due to the construction of the measurement apparatus is negligible. Indeed, better and better measurement devices are better and better approximations to such idealized measurement devices, and no matter which interpretation you follow (or believe in, since this seems to be a quasi-religious issue ;-)), it's an established fact since 1925 that what you measure, if you measure with sufficient precision, are the eigenvalues of the self-adjoint operators that represent the observables and not q-expectation values. Thus the crucial assumption in which the thermal interpretation deviates from the standard interpretation is unambiguously empirically invalidated.
 

DarMM

No approximation is implied; nothing indicates that this would only apply to idealized measurements
Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.
 

vanhees71

I don't agree with all of @vanhees71's post #2, but I do agree with his point that standard quantum theory allows measurement error.
I'm really surprised that you disagree with well-established empirical facts. Unfortunately you don't say in detail which of these you disagree with and why.
 

vanhees71

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.
Of course. In theory textbooks one starts with discussing general laws of nature as described by theories and one deals with idealized precise measurements.

The very important topic of analyzing measurement errors (statistical and (!) systematic) is the subject of experimental physics and is taught particularly in the introductory and advanced labs, where students really do experiments, and this is for a good reason. The errors of measurement devices depend on each individual measurement device, and you need both theoretical analyses and empirical tests with real-world devices to learn the art of doing correct and complete measurements.

Of course, to understand why the macroscopic matter around us seems to behave according to the laws of classical physics rather than quantum physics, one has to understand coarse graining and how measurement devices (including our senses!) do the corresponding "averaging over many microscopic degrees of freedom", and why it is so difficult to observe genuine quantum behavior in macroscopic objects (decoherence).
 

A. Neumaier

it's an established fact since 1925 that what you measure, if you measure with sufficient precision, are the eigenvalues of the self-adjoint operators that represent the observables and not q-expectation values.
It's a convention since 1927, when (as reported in Part I of my series of papers) Jordan, Dirac and von Neumann cast the findings of Born into the general statement today known as Born's rule. The established facts at the time were lots of spectral data and nothing else. There was not the least mention of measuring with sufficient precision. Indeed, some passages, like the following (translated from p.181f) of Heisenberg's 1927 paper on the uncertainty relation, sound like the thermal interpretation:
Werner Heisenberg said:
When we want to derive physical results from that mathematical framework, then we have to associate numbers with the quantum-theoretical magnitudes -- that is, with the matrices (or 'tensors' in multidimensional space). [...] One can therefore say that associated with every quantum-theoretical quantity or matrix is a number which gives its 'value' within a certain definite statistical error.
His 'value' is the q-expectation, not an eigenvalue! And his statistical errors are the deviations from these, not the lack of precision in the sense of your claimed ''established fact since 1925''!
The very important topic of analyzing measurement errors (statistical and (!) systematic) is the subject of experimental physics and is taught particularly in the introductory and advanced labs, where students really do experiments, and this is for a good reason.
And for a good reason one learns there that measurement errors are always to be treated as in the first example of post #1, to get better accuracy through repetition of the measurement.
 

A. Neumaier

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?
Well, it is nowhere discussed. The story is complicated.

The early Copenhagen view (Bohr and Heisenberg, end of 1927, Como and Solvay conferences) was that conserved quantities were beables that had exact values independent of measurement, and that states were eigenstates, with random transitions between them. Then Jordan, Dirac and von Neumann generalized the setting to arbitrary maximally commuting systems of observables, still talking about 'being' rather than 'measuring'. Then it was recognized that this is inconsistent, since joint probabilities cannot be defined; in the early literature, wrong statements and derivations not taking this into account exist until at least 1928. Then people were forced to indicate the measurement context, and the modern formulation emerged.

Nobody ever seems to have felt the need to investigate measurement errors beyond Heisenberg's uncertainty relation, which was viewed by Heisenberg as a statistical uncertainty of deviation from the mean; see the quote in post #19. On p.77f of his 1972 book ''Der Teil und das Ganze'', Heisenberg still writes:
Werner Heisenberg said:
Can one represent in quantum mechanics a situation in which an electron is located approximately -- that is, with a certain imprecision -- at a given place, and at the same time possesses approximately -- that is, again with a certain imprecision -- a given velocity, and can one make these imprecisions so small that one does not run into difficulties with experiment?
This shows that Heisenberg's informal intuition was, and remained throughout his life, that of the thermal interpretation, which provides an electron with a definite but uncertain world tube in phase space.
 

A. Neumaier

Is that not just typical textbook glossing over of experimental realities or are they actually making the literal claim that the eigenvalue directly results regardless of the precision of the device?

I would have assumed the former.
If one assumes the former, i.e., that measurement results are not eigenvalues but only approximations to eigenvalues (the new convention presented explicitly by @vanhees71 in his lecture notes, which I have never seen elsewhere), one has other problems:

In this case one cannot even justify the interpretation of the q-expectation as an expectation value of measurement results - since the latter deviate arbitrarily (depending on the measurement accuracy) from the eigenvalues that figure in the derivation of the expectation value.

Instead, with the new convention of vanhees71, one would have to argue that there are additional, purely theoretical true values (the eigenvalues) that figure in the Born rule and take the place of the true positions and momenta in a classical description. But these theoretical true values are only approximately related to measurement, as approximately as the true positions and momenta in classical physics. Thus it would not make sense to refer in Born's rule to the eigenvalues as measurement results, since they are only theoretical constructs approximately related to measurement. As approximately as the true positions and momenta in classical physics, where one never refers to their measurement in the foundations.

The next problem is that with the new convention of vanhees71, the notion of measurement error becomes questionable. A spin measurement of an electron or silver atom is regarded, depending on who is talking about it, as an exact measurement of ##\pm 1##, ##\pm 1/2##, or (most properly) ##\pm \hbar/2##, even though until recently, ##\hbar## was a constant with an inaccurately known value. Measured is an inaccurate spot on a screen; the exactness results not from measurement but from theory. Similarly, in photon counting experiments, measured are inaccurate peaks of a photocurrent; the exactness results not from measurement but from theory, which (in idealized models predicting 100% efficiency) predicts these peaks to equal in number the integral number of photons arriving.

Worse, with the new convention of vanhees71, there is no longer a way to determine the accuracy of a measurement of quantities represented by an operator with continuous spectrum. Since there is no theory telling which of the true values counts (as it is a random variable), the true value is completely unknown; hence the measurement error cannot even be quantified. The only quantifiable measurement error is the statistical error given by the deviation from the mean - the true value of the thermal interpretation, not that of the new convention of vanhees71.

Thus the merit of the new convention of vanhees71 is very questionable.
 

atyy

I'm really surprised that you disagree with well-established empirical facts. Unfortunately you don't say in detail which of these you disagree with and why.
Sorry, post #2 is fine (though I exclude your comments on the thermal interpretation, on which I reserve judgement, since I haven't studied this new attempt at interpretation). It is your subsequent post with your usual remarks about interpretation that I don't quite agree with.
 

Demystifier

1. Consider some piece of digital equipment with a 3-digit display measuring some physical quantity ##X## using ##N## independent measurements. Suppose the measurement results were 6.57 in 20% of the cases and 6.58 in 80% of the cases. Every engineer or physicist would compute the mean ##\bar X= 6.578## and the standard deviation ##\sigma_X=0.004## and conclude that the true value of the quantity ##X## deviates from ##6.578## by an error of the order of ##0.004N^{-1/2}##.
What if the digital equipment had a 6-digit display instead of a 3-digit one, with everything else being the same? What would every engineer and physicist do in that case? And what would the thermal interpretation say?
 

A. Neumaier

What if the digital equipment had a 6-digit display instead of a 3-digit one, with everything else being the same? What would every engineer and physicist do in that case? And what would the thermal interpretation say?
1. An engineer might first suspect that, internally, some rounding is performed and would tentatively conclude that the last three digits are spurious. If, on other occasions, the last three digits were arbitrary, the engineer would be surprised and puzzled on the given occasion, just as the pioneers of quantum mechanics were, would consult the literature, and after reading some of it would conclude that he is measuring a quantum system with selection rules that forbid the intermediate values. The thermal interpretation would say that these selection rules are due to the system having stable slow modes at these two values, while being unstable in between.

2. The measurements would correspond exactly to Born's rule for a quantum system with a Hilbert space of at least 1001 dimensions, measuring an observable in a state for which only the first and the 1001st component in the corresponding eigenbasis (ordered by eigenvalues) are occupied.
 

vanhees71

His 'value' is the q-expectation, not an eigenvalue! And his statistical errors are the deviations from these, not the lack of precision in the sense of your claimed ''established fact since 1925''!
Take again the Stern-Gerlach experiment as an example. If your claim that one always measures a q-expectation value were true, the SGE would always result in a single strip of Ag atoms on the screen, as predicted by classical physics; but already in 1922 Stern and Gerlach found two strips, as predicted by "old quantum theory" (which is wrong in two ways, and by coincidence these two errors compensate). Modern quantum theory predicts, however, the correct result, namely in general two strips of Ag atoms on the screen (except for the case that the Ag atoms are prepared in a ##\sigma_z## eigenstate). That clearly proves your claim wrong, independent of any interpretation of QT.
 
