Did they do a loophole-free Bell test?

  • Thread starter Nick666
  • Start date
  • Tags
    Bell Test
In summary, the conversation discusses the progress and challenges in conducting a loopholes-free Bell test. While some experiments have been able to eliminate two out of three loopholes, there are still plans to conduct more comprehensive experiments in the near future. The conversation also touches on the topic of closing the "free will" loophole and its implications in scientific models. Additionally, the conversation mentions the importance of loophole-free Bell violations in the field of quantum cryptography. Finally, there is a discussion on a recent experiment that claims to have achieved a loophole-free Bell violation, although there are some concerns and unanswered questions about the results.
  • #36
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".
 
  • #37
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".
It's a technical term. It has been around for more than 20 years. http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.71.4287

‘‘Event-ready-detectors’’ Bell experiment via entanglement swapping
M. Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert
Phys. Rev. Lett. 71, 4287 – Published 27 December 1993
 
  • #38
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".

Well, it is a very special and unusual sort of "detection". We start with two photon-electron entangled pairs, and if we detect (that is, interact with) the photons in just the right way, we end up with the electrons entangled instead. When this happens, we say that we've "swapped" the photon-electron entanglement that we had for the electron-electron entanglement that we wanted. And as Gill says, the term has been used that way for decades.

Googling for "Barrett-Kok entanglement swapping" will bring up a whole bunch of fairly technical stuff on the techniques used in this experiment.
 
  • #39
Nugatory said:
Well, it is a very special and unusual sort of "detection". We start with two photon-electron entangled pairs, and if we detect (that is, interact with) the photons in just the right way, we end up with the electrons entangled instead. When this happens, we say that we've "swapped" the photon-electron entanglement that we had for the electron-electron entanglement that we wanted. And as Gill says, the term has been used that way for decades.

Googling for "Barrett-Kok entanglement swapping" will bring up a whole bunch of fairly technical stuff on the techniques used in this experiment.
If we had been talking about correlation instead of entanglement it would not have been weird at all. Suppose particles A and B have equal and opposite momenta of random magnitude. Suppose that, completely independently of this, particles C and D also have equal and opposite, randomly varying, momenta. Now catch particles B and C, and if their momenta are equal and opposite say "go". It is no surprise that particles A and D have highly correlated momenta if we only look at them on those occasions when we got the "go" signal. The extraordinary (and beautiful) thing is that the mathematics of Hilbert-space quantum entanglement works in just the same way...
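This classical analogy is easy to check numerically. Here is a minimal sketch (the function name, tolerance, and momentum range are my own choices, not from the thread): generate the two independent equal-and-opposite pairs, emit "go" when B and C nearly cancel, and measure the conditional correlation of A and D.

```python
import random

def momentum_swap_demo(n_trials=100_000, tol=0.05, seed=1):
    """Classical 'correlation swapping': pairs (A,B) and (C,D) each have
    equal and opposite momenta of random magnitude; say 'go' when B and C
    happen to be (nearly) equal and opposite, then the post-selected A and
    D values are almost perfectly (anti-)correlated."""
    rng = random.Random(seed)
    a_vals, d_vals = [], []
    for _ in range(n_trials):
        p1 = rng.uniform(-1, 1)          # A = p1,  B = -p1
        p2 = rng.uniform(-1, 1)          # C = p2,  D = -p2
        if abs(-p1 + p2) < tol:          # 'go': B and C equal and opposite
            a_vals.append(p1)
            d_vals.append(-p2)
    n = len(a_vals)
    ma, md = sum(a_vals) / n, sum(d_vals) / n
    cov = sum((a - ma) * (d - md) for a, d in zip(a_vals, d_vals)) / n
    va = sum((a - ma) ** 2 for a in a_vals) / n
    vd = sum((d - md) ** 2 for d in d_vals) / n
    return cov / (va * vd) ** 0.5        # Pearson correlation of A and D
```

On the "go" trials A ≈ p2 while D = -p2, so the conditional correlation comes out very close to -1: A and D are equal and opposite, even though they were generated completely independently.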
 
  • #40
Perhaps a stupid question, but do they also do these kinds of test with un-entangled particles for calibration? And if so how different are the results from entangled particles?
 
  • #41
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".

It is an active process that creates the entangled state. The requirement is that the photons i) are detected in a certain signature manner AND ii) are indistinguishable. It is theoretically possible to perform i) without ii). If you were to do that, you would not get entanglement, thus proving that the entanglement of A & D is dependent on the swapping operation on B & C. This is the creation of the event-ready pairs.

An example of how you would accomplish such detection (of B & C) without creating entanglement (for A & D) would be to use photons B & C that are of different frequencies. So you would be post selecting ONLY, and there are no entangled pairs (of A & D) that result. So post selection alone won't accurately describe anything.
 
  • #42
akhmeteli said:
First, it was noted in another thread that the probability $p$=0.019/0.039 is not very impressive.
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link). That's what I have read, and today there was another piece discussing the experiment:

Physicists claim 'loophole-free' Bell-violation experiment
http://physicsworld.com/cws/article...claim-loophole-free-bell-violation-experiment
There are still a few ways to quibble with the result. The experiment was so tough that the p-value – a measure of statistical significance – was relatively high for work in physics. Other sciences like biology normally accept a p-value below 5 per cent as a significant result, but physicists tend to insist on values millions of times smaller, meaning the result is more statistically sound. Hanson’s group reports a p-value of around 4 per cent, just below that higher threshold. That isn’t too concerning, says Zeilinger. “I expect they have improved the experiment, and by the time it is published they’ll have better data,” he says. “There is no doubt it will withstand scrutiny.”
Quantum weirdness proved real in first loophole-free experiment
https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/
 
Last edited:
  • #43
bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link). That's what I have read, and today there was another piece discussing the experiment:

Physicists claim 'loophole-free' Bell-violation experiment
http://physicsworld.com/cws/article...claim-loophole-free-bell-violation-experiment

Quantum weirdness proved real in first loophole-free experiment
https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/
As long as the experimenters only have N = 245 pairs of measurements and S = 2.4, that p-value is not going to go down. Here is a simple calculation which explains why. An empirical correlation between binary variables based on a random sample of size N has a variance of (1 - rho^2)/N; the worst case is rho = 0, giving variance 1/N. In fact we are looking at four empirical correlations, each approximately ±0.6. So if we believe that we have four random samples of pairs of binary outcomes, then each empirical correlation has a variance of about 0.64/N, where N is the number of pairs of observations for each pair of settings. If the four samples are statistically independent, the variance of S is about 4 × 0.64/N with N = 245/4. This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is no larger than 2. Since N is large enough that the normal approximation is not bad, we can say that we have 0.4/0.2 = 2 standard deviations of departure from local realism. The probability of this occurring by chance is about 0.025.
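The back-of-the-envelope arithmetic above can be reproduced in a few lines of Python (a sketch of the same calculation; the one-sided normal tail is obtained from `math.erfc` to avoid external dependencies):

```python
import math

N_total = 245                       # event-ready pairs in the experiment
S_obs = 2.4                         # observed CHSH value
n_per_setting = N_total / 4

# Each empirical correlation (rho ~ +/-0.6) has variance (1 - rho^2)/n ~ 0.64/n.
var_corr = 0.64 / n_per_setting
var_S = 4 * var_corr                # four independent correlations
se = math.sqrt(var_S)               # standard error of S, about 0.2

z = (S_obs - 2.0) / se              # departure from the local-realist bound of 2
p = 0.5 * math.erfc(z / math.sqrt(2.0))   # one-sided normal tail, about 0.025
```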

Here's a little Monte Carlo simulation experiment which shows that this rough calculation is pretty reliable http://rpubs.com/gill1109/delft

However, if actually they performed several experiments, and the N = 245 is just one of them, and they combine the results of several experiments in a statistically responsible way, then obviously their p-value can get much smaller.
 
  • #44
bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably.
Yes, I guess that this was what they felt was the maximal p-value they could get away with and still establish priority (they are in a race here), and that they are still running the experiment to achieve p-values at a level that is now regarded as the norm in physics (4 - 5 sigmas).
 
Last edited:
  • #45
Michel_vdg said:
Perhaps a stupid question, but do they also do these kinds of test with un-entangled particles for calibration? And if so how different are the results from entangled particles?
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?
 
  • #46
Michel_vdg said:
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?

I am sure there were no end of null results while they were getting everything tuned up. Realistically, I am sure they were calibrating to get a maximum of perfect correlations - which is the sure fire way to see that you are experiencing entanglement. The closer you get to 100% (as opposed to 50% for non-entangled pairs), the better entanglement you are achieving.

Are you asking why they don't publish their calibration process, and along with it, the null results too? That is not usually published as part of most papers because it is of little interest to the intended readers. If the result is in concert with theory (as in this case), there is not much incentive. If there were controversy as to the accuracy of the results, or whether they can be replicated, then I am sure they would gladly provide that information.

Besides, this is not the first experiment to use entanglement. You don't really need to prove out every element of an experiment every time. (Are the detectors reliable, beam splitters effective, etc?)
 
  • #47
Michel_vdg said:
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?

Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.
 
  • #48
Is entanglement swapping simply post-selection? From the paper:

"We generate entanglement between the two distant spins by entanglement swapping in the Barrett-Kok scheme using a third location C (roughly midway between A and B, see Fig. 1e). First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state..."


You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swapping by bringing the photons together in a manner in which they are NOT indistinguishable, and then there will be no swapping. For example: their paths do not overlap, their frequencies are different, etc.

By calling this POST-SELECTION you are really saying you think a local realistic explanation is viable. But if you are asserting local realism as a candidate explanation, obviously nothing that occurs at C can affect the outcomes at A & B; it occurs too late! So making the photons registered at C overlap and be indistinguishable cannot make a difference to the sub-sample. But it does, because if they are distinguishable there will be no Bell inequality violation.

So that is a contradiction. Don't call it post-selection unless you think that the overlap can be done away with and still get a sample that shows the same statistics as when they are indistinguishable pairs.
 
  • #49
Nugatory said:
Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.

"Before running the Bell test we first characterize the setup and the preparation of the spin-spin entangled state."


They go on to provide background on the key points. There is also Supplementary Information available, which they refer to.
 
  • #50
The important thing here is that the recording at C (which basically tells that the pair is an interesting successfully entangled pair) is spacelike separated from the random selection of settings and read-out of the spins at both A and B. So the "post selection" can in no local way depend on the settings at A or B, or vice versa. This rules out LHV explanations, and the detection loophole.
 
Last edited:
  • #51
Nugatory said:
Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.
Yes, the equipment needs to be calibrated; that's also the case in medicine. But I thought that once it is all calibrated they would also do a non-entangled test run to compare with the 245 pairs of measurements they obtained.
 
  • #52
Michel_vdg said:
... but I thought that once it is all calibrated they would also do a non-entangled test run to compare with the 245 pairs of measurements they obtained.

If they could get entanglement without swapping, that would be even bigger news! :smile:
 
  • #53
DrChinese said:
You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swapping by bringing the photons together in a manner in which they are NOT indistinguishable, and then there will be no swapping. For example: their paths do not overlap, their frequencies are different, etc.
Do I miss something? Whether photons are indistinguishable or not is measured.
In the paper about fig. 3b they write:
"(b) Time-resolved two-photon quantum interference signal. When the NV centres at A and B emit indistinguishable photons (orange), the probability of a coincident detection of two photons, one in each output arm of the beam-splitter at C is expected to vanish. The observed contrast between the case of indistinguishable versus the case of distinguishable photons of 3 versus 28 events in the central peak yields a visibility of (90±6)% (Supplementary Information)."
 
  • #54
billschnieder said:
And the photons are produced by the microwave pulses hitting the crystals (https://d1o50x50snmhul.cloudfront.net/wp-content/uploads/2015/08/09_12908151-800x897.jpg)? And the post-selection is based on the photons?
Yes, post selection is based on the photons. But these photons are emitted earlier, before measurement settings are generated by RNG.
There are two pulses. One earlier that generates electron-photon entanglement and one later that can be one of two different pulses that do spin rotation by two different angles.
 
  • #55
zonde said:
Do I miss something? Whether photons are indistinguishable or not is measured.
In the paper about fig. 3b they write:
"(b) Time-resolved two-photon quantum interference signal. When the NV centres at A and B emit indistinguishable photons (orange), the probability of a coincident detection of two photons, one in each output arm of the beam-splitter at C is expected to vanish. The observed contrast between the case of indistinguishable versus the case of distinguishable photons of 3 versus 28 events in the central peak yields a visibility of (90±6)% (Supplementary Information)."

You didn't miss anything. But they are measured indirectly to determine indistinguishability, just not at C.

At C they are made to overlap. This is done in the set up. That makes them indistinguishable, and causes the entanglement swap. The measurement is later performed at A and B by observing a violation of a Bell inequality. That demonstrates the entanglement swap occurred, in accordance with theory.
 
  • #56
From their paper (page 3):

"First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state[...]. These detections herald the successful preparation and play the role of the event-ready signal in Bell's proposed setup. As can be seen in the spacetime diagram in Fig. 2a, we ensure that this event-ready signal is space-like separated from the random input bit generation at locations A and B."
 
Last edited:
  • Like
Likes DrChinese
  • #57
gill1109 said:
This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is no larger than 2. Since N is large enough that the normal approximation is not bad, we can say that we have 0.4/0.2 = 2 standard deviations of departure from local realism.
Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic, and it could have an expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down.

One approach is to work out the randomization distribution of S (actually S without the modulus) by randomizing the data between bins. This will give an estimate of the mean and variance of the statistic.
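The randomization idea can be sketched as follows (the helper names and the trial-record format are my own assumptions): permute the setting labels across the recorded trials, recompute S each time, and read a p-value off the resulting null distribution.

```python
import random

def chsh_S(trials):
    """S (without the modulus) from a list of (x, y, a, b) records:
    settings x, y in {0, 1}, outcomes a, b in {-1, +1}."""
    sums = {(x, y): [0, 0] for x in (0, 1) for y in (0, 1)}
    for x, y, a, b in trials:
        sums[(x, y)][0] += a * b
        sums[(x, y)][1] += 1
    E = {s: v[0] / v[1] if v[1] else 0.0 for s, v in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

def randomization_pvalue(trials, n_perm=2000, seed=0):
    """Estimate P(S_perm >= S_obs) by permuting the setting labels
    across trials, which breaks any setting-outcome dependence."""
    rng = random.Random(seed)
    s_obs = chsh_S(trials)
    settings = [(x, y) for x, y, _, _ in trials]
    outcomes = [(a, b) for _, _, a, b in trials]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(settings)
        permuted = [s + o for s, o in zip(settings, outcomes)]
        if chsh_S(permuted) >= s_obs:
            hits += 1
    return s_obs, (hits + 1) / (n_perm + 1)  # add-one for a valid p-value
```

Under the null hypothesis that outcomes are independent of settings, relabelling the settings leaves the distribution of S unchanged, which is exactly what licenses this test.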
 
Last edited:
  • #58
Mentz114 said:
Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic and it could have a expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down.

One approach is to work out the randomization distribution of S ( actually S without the modulus) by randomizing the data between bins. This will give an estimate of the mean and variance of the statistic.
They did also compute a randomization only based, possibly conservative, p-value. It was 0.039
 
  • #59
DrChinese said:
You didn't miss anything. But they are measured indirectly to determine indistinguishability, just not at C.

At C they are made to overlap. This is done in the set up. That makes them indistinguishable, and causes the entanglement swap. The measurement is later performed at A and B by observing a violation of a Bell inequality. That demonstrates the entanglement swap occurred, in accordance with theory.
Just to be sure about details I looked at the paper about their previous experiment - http://arxiv.org/abs/1212.6136
They do a lot of things to tune the setups at A and B and make the photons indistinguishable. Success of the tuning is verified by observing HOM interference at C. But I still don't get your argument against calling the detection at C a post-selection.
DrChinese said:
Is entanglement swapping simply post-selection? From the paper:

"We generate entanglement between the two distant spins by entanglement swapping in the Barrett-Kok scheme using a third location C (roughly midway between A and B, see Fig. 1e). First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state..."


You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swapping by bringing the photons together in a manner in which they are NOT indistinguishable, and then there will be no swapping. For example: their paths do not overlap, their frequencies are different, etc.

By calling this POST-SELECTION you are really saying you think a local realistic explanation is viable. But if you are asserting local realism as a candidate explanation, obviously nothing that occurs at C can affect the outcomes at A & B; it occurs too late! So making the photons registered at C overlap and be indistinguishable cannot make a difference to the sub-sample. But it does, because if they are distinguishable there will be no Bell inequality violation.

So that is a contradiction. Don't call it post-selection unless you think that the overlap can be done away with and still get a sample that shows the same statistics as when they are indistinguishable pairs.
Detection at C distinguishes different entanglement states:
[itex]|\psi^-\rangle = (|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)/\sqrt{2}[/itex]
[itex]|\psi^+\rangle = (|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle)/\sqrt{2}[/itex]
[itex]|\phi^\pm \rangle = (|\uparrow\uparrow\rangle \pm |\downarrow\downarrow\rangle)/\sqrt{2}[/itex]
And without singling out (post-selecting) just one of those entangled states there is no Bell inequality violation.
And the action that is occurring at C is interference between two photons (photon modes) so that [itex]|\psi^-\rangle[/itex] and [itex]|\psi^+\rangle[/itex] can be told apart.
 
  • #60
Since they used a quantum random number generator instead of human random decisions, can no conclusions be drawn regarding this aspect? Something like... we have as much "free will" as a quantum random number generator? They do mention "free will" in the arXiv paper.
 
  • #61
DirkMan said:
Since they used a quantum random number generator instead of human random decisions, can no conclusions be drawn regarding this aspect? Something like... we have as much "free will" as a quantum random number generator? They do mention "free will" in the arXiv paper.
IMHO, it would have been better to choose the settings by a cascade of classical (pseudo) random number generators. Or even just read them from a big file of pre-generated settings. But they need to get these random settings very fast and it appears that quantum RNGs are faster than state of the art classical pseudo RNGs.

Yes, it is amusing that a Bell-type experiment is supposed to show that nature is intrinsically non-deterministic (and moreover, in a non-local way), yet to make the experiment convincing we have to assume that we have at least effective local randomness. Thus you cannot escape the super-determinism (conspiracy) loophole; you can only make appealing to it look ludicrous. Occam's razor has to come to the rescue.
 
  • #62
zonde said:
But I still don't get your argument against calling detection at C a post-selection.
With post-selection one usually means selection made after the experiment has been done, using knowledge of the results from both wings of the experiment.

Here, the selection is done outside the lightcone of the experiment. There is no way a local hidden variable model could make use of such a selection mechanism in order to violate the CHSH-inequality.
 
Last edited:
  • #63
zonde said:
Just to be sure about details I looked at the paper about their previous experiment - http://arxiv.org/abs/1212.6136

1. They do a lot of things to tune the setups at A and B and make photons indistinguishable.

2. Success of tuning is verified by observing HOM interference at C.

3. But I still don't get your argument against calling detection at C a post-selection.

4. And without singling out (post-selecting) just one of those entangled states there is no Bell inequality violation.
And the action that is occurring at C is interference between two photons (photon modes) so that [itex]|\psi^-\rangle[/itex] and [itex]|\psi^+\rangle[/itex] can be told apart.

Yes, there is post selection - no quibble about that. But if swapping were not occurring, that sample would not tell us anything. The local realist denies there is swapping that affects the resulting sample. They would say indistinguishability and overlap do not matter. Those things are ingredients for the swapping but NOT for the post-selection. That is what I am saying, as best I can tell.

1. For swapping purposes only. Not for post-selection.

2. HOM is not demonstrated on individual events. Just a couple of clicks separated by a specified timing.

3. As said, it is also post-selection.

4. I don't think so. The clicks in separate detectors show this, correct?

My assertion is that the post-selection steps could be performed WITHOUT indistinguishability (and therefore without swapping). Of course, then the experiment would not work. But the local realist shouldn't care since they will say that nothing that happens at C affects A & B's results. We know that idea is wrong.
 
  • #64
This beautifully crafted experiment now gives rise to a modified Quantum Randi Challenge (for all local realists out there):

Program three computers A, B, and C (or alternatively three subroutines that can be distributed on three computers), so that:

1. Computers A and B each send a signal to computer C.

2. Based on these signals, computer C outputs select/don't select.

3. Computers A and B are now both given exogenous binary random inputs, 0 or 1 (independently for both computers). 0 or 1 is just shorthand for each wing's binary choice of angles in a CHSH experiment.

4. Based on these inputs, computers A and B independently output 1 or -1.

5. Repeat from 1.

The only allowed communication between computers is the signals in step 1.

The challenge is this: For all selected pairs, the CHSH-inequality should be significantly violated. The above loop should run until the number of selected pairs is 1000 or more.
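The challenge loop can be expressed as a small harness (a sketch with hypothetical names, not an official implementation of the Quantum Randi Challenge): each "computer" is a pair of pure functions of its own private state, so locality is enforced by construction, and the only cross-talk is the signals handed to C.

```python
import random

def chsh_challenge(signal_A, signal_B, select_C, answer_A, answer_B,
                   target_pairs=1000, seed=0):
    """Run the challenge loop until target_pairs selected pairs are
    collected; return (S, number_of_selected_pairs)."""
    rng = random.Random(seed)
    sums = {(x, y): [0, 0] for x in (0, 1) for y in (0, 1)}
    n = 0
    while n < target_pairs:
        lam_a, lam_b = rng.random(), rng.random()   # private local states
        # Steps 1-2: A and B each signal C; C outputs select / don't select.
        if not select_C(signal_A(lam_a), signal_B(lam_b)):
            continue
        # Steps 3-4: exogenous random settings; independent +/-1 outputs.
        x, y = rng.randint(0, 1), rng.randint(0, 1)
        sums[(x, y)][0] += answer_A(x, lam_a) * answer_B(y, lam_b)
        sums[(x, y)][1] += 1
        n += 1
    E = {s: v[0] / v[1] for s, v in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)], n

# A trivial local strategy: always select, always answer +1.
S, n = chsh_challenge(lambda l: 0, lambda l: 0, lambda sa, sb: True,
                      lambda x, l: 1, lambda y, l: 1)
```

The trivial strategy reaches S = 2 exactly, the classical bound; the challenge is to write `signal_*`, `select_C` and `answer_*` so that S significantly exceeds 2, which Bell's theorem says is impossible for local programs of this form.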
 
Last edited:
  • #65
Heinera said:
The above loop should run until the number of selected pairs is 1000 or more.
And why is 245 not enough?
 
  • #66
billschnieder said:
And why is 245 not enough?
Because we want to minimize the impact of flukes. You understand this, I'm sure.
 
  • Like
Likes billschnieder
  • #67
DrChinese said:
Yes, there is post selection - no quibble about that. But if swapping were not occurring, that sample would not tell us anything. The local realist denies there is swapping that affects the resulting sample. They would say indistinguishability and overlap do not matter. Those things are ingredients for the swapping but NOT for the post-selection.
Without post-selection, there will be no swapping. You have to understand that swapping is precisely the process of selecting a sub-ensemble which is correlated in a specific way from a larger ensemble which is not correlated.

For example, take a set of pairs of numbers X,Y where the corresponding numbers of each pair (x,y) are related by y = sin(x). It follows that x and y are correlated. Now take two such sets, say X1, Y1 and X2, Y2, randomly generated at space-like separated locations A and B. We could do an additional "measurement" z on each x, e.g. z = cos(x), giving sets of measurement results Z1 and Z2 at A and B respectively. There is no correlation between X1 and X2, and therefore no correlation between Y1 and Y2 or Z1 and Z2. For each pair (x,y) from each arm we send the y value to a distant location C, even before the z values are available. At C we simply compare the incoming values and, if they are indistinguishable, we generate a "good" signal. Depending on the RNG used at A and B, we may not get very many "good" signals, but we will get a few. By post-selecting the Z1 and Z2 values using the "good" results from location C, we get a sub-ensemble of the Z1 and Z2 results which are correlated with each other. This is essentially the process of "swapping". Replace "correlation" with "entanglement" and you have "entanglement swapping". Perhaps the confusion is with the common practice of discussing the technique in the context of a single measurement as opposed to ensembles.
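This construction can be run numerically. The sketch below restricts x to [0, 1] so that y = sin(x) determines x uniquely (an assumption added for the sketch, not part of the original example):

```python
import math
import random

def classical_swap_demo(n_trials=200_000, tol=0.01, seed=2):
    """x1, x2 drawn independently at A and B; y = sin(x) is sent to C,
    z = cos(x) is kept locally.  C emits 'good' when y1 and y2 are
    (nearly) indistinguishable; the post-selected z1, z2 sub-ensemble is
    then strongly correlated although the full ensembles are not."""
    rng = random.Random(seed)
    z1_good, z2_good = [], []
    for _ in range(n_trials):
        x1, x2 = rng.uniform(0, 1), rng.uniform(0, 1)  # keeps sin invertible
        if abs(math.sin(x1) - math.sin(x2)) < tol:     # 'good' signal at C
            z1_good.append(math.cos(x1))
            z2_good.append(math.cos(x2))
    n = len(z1_good)
    m1, m2 = sum(z1_good) / n, sum(z2_good) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(z1_good, z2_good)) / n
    v1 = sum((a - m1) ** 2 for a in z1_good) / n
    v2 = sum((b - m2) ** 2 for b in z2_good) / n
    return cov / (v1 * v2) ** 0.5      # correlation of the 'good' sub-ensemble
```

The full Z1 and Z2 ensembles are uncorrelated, yet the "good"-flagged sub-ensemble shows correlation close to 1: the correlation is "swapped in" purely by the selection at C.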

There is no way for Alice (Bob) to know which of their results are "good" without information going from both Alice and Bob to station C, and then back to Alice (Bob) in the form of "good" signals.

BTW, local realists do not deny swapping, they believe swapping has a local realistic explanation.
 
  • #68
billschnieder said:
Without post-selection, there will be no swapping. You have to understand that swapping is precisely the process of selecting a sub-ensemble which is correlated in a specific way from a larger ensemble which is not correlated.
Whether there is swapping or not does not really matter here. We could equally well have a theory where we just happened to produce the two electrons in an entangled state in the first place, and then the measurement of the photons at C would just confirm this entanglement, with no swapping taking place. The math would be the same. The only thing that matters for an LHV model is that this confirmation is outside the lightcone of the experiment performed. So in no way can the settings or the results of the experiment influence the confirmation, nor vice versa.
 
  • #69
Heinera said:
Whether there is swapping or not does not really matter here.
It matters to have a proper understanding of what entanglement swapping entails.

The only thing that matters for an LHV model is that this confirmation is outside the lightcone of the experiment performed. So in no way can the settings or the results of the experiment influence the confirmation, nor vice versa.
It matters also, that filtering results after the fact using information from both stations introduces "nonlocality". The "good" Z1 and Z2 ensembles are therefore nonlocally generated.
 
  • #70
billschnieder said:
It matters also, that filtering results after the fact using information from both stations introduces "nonlocality". The "good" Z1 and Z2 ensembles are therefore nonlocally generated.
But the whole point here is that in the experiment, they are not filtered after the fact. They are actually filtered prior to the fact (i.e. prior to the performance of the experiment). See my post on the revised Quantum Randi Challenge earlier in this thread.
 
Last edited:
