Analyzing Innsbruck Bell experiment raw data sample

  • #1
Jabbu
Raw data sample source:
http://people.isy.liu.se/jalar/belltiming/

999 detections parsed in .txt format:
http://www.mediafire.com/download/1pi64hrydzs7r7h/bell999.zip

Below are the first 99 detections. This is unmatched raw data, so the A-B pairs on the right will be different once the data is sorted out and ready for counting coincidences. Alice and Bob each have four detectors (0,1,2,3): two for 0 degrees and two for 45 degrees polarizer rotation.

Code:
# 0=vertical, no rotation
# 1=horizontal, no rotation
# 2=vertical, 45degree rotation
# 3=horizontal, 45degree rotation

      A-TIME    B-TIME      A-B
  1:  0.0000022 0.0000529   0-0
  2:  0.0000080 0.0000776   1-3
  3:  0.0000127 0.0000927   0-0
  4:  0.0000497 0.0001099   1-3
  5:  0.0000529 0.0001300   0-0
  6:  0.0000973 0.0002323   2-1
  7:  0.0001282 0.0002482   0-0
  8:  0.0001301 0.0002905   2-3
  9:  0.0001443 0.0003017   0-0
 10:  0.0001562 0.0003396   0-0
 11:  0.0002056 0.0003470   0-0
 12:  0.0002165 0.0003603   2-1
 13:  0.0002194 0.0003883   0-0
 14:  0.0002808 0.0004288   3-0
 15:  0.0002983 0.0005772   0-0
 16:  0.0003171 0.0005891   0-0
 17:  0.0003296 0.0006135   0-0
 18:  0.0003309 0.0006371   0-0
 19:  0.0003424 0.0006647   0-0
 20:  0.0003554 0.0007317   3-0
 21:  0.0003615 0.0007469   0-0
 22:  0.0003676 0.0007601   0-0
 23:  0.0003898 0.0007825   0-0
 24:  0.0004362 0.0008330   3-1
 25:  0.0004437 0.0008839   0-0
 26:  0.0004539 0.0008957   2-3
 27:  0.0004914 0.0008982   0-0
 28:  0.0004991 0.0009989   3-3
 29:  0.0005282 0.0010620   0-0
 30:  0.0005351 0.0010826   0-0
 31:  0.0005416 0.0011126   0-0
 32:  0.0005655 0.0011489   0-3
 33:  0.0006468 0.0011678   0-0
 34:  0.0006708 0.0012064   3-0
 35:  0.0006767 0.0012985   0-0
 36:  0.0006962 0.0013142   2-0
 37:  0.0007817 0.0013765   0-0
 38:  0.0007857 0.0014157   0-1
 39:  0.0007993 0.0015135   0-0
 40:  0.0008088 0.0015221   1-2
 41:  0.0008259 0.0015419   0-0
 42:  0.0008357 0.0015776   2-0
 43:  0.0008391 0.0016791   0-0
 44:  0.0008424 0.0017757   2-0
 45:  0.0008485 0.0017868   0-0
 46:  0.0008640 0.0018922   1-0
 47:  0.0008770 0.0019115   0-0
 48:  0.0008957 0.0019672   3-2
 49:  0.0009142 0.0019713   0-0
 50:  0.0009252 0.0019919   2-0
 51:  0.0010379 0.0020254   0-0
 52:  0.0010510 0.0020501   1-3
 53:  0.0010650 0.0021090   0-0
 54:  0.0010687 0.0021181   1-2
 55:  0.0010735 0.0021392   0-0
 56:  0.0010751 0.0021629   2-1
 57:  0.0010763 0.0021994   0-0
 58:  0.0010855 0.0022013   1-3
 59:  0.0010991 0.0022063   0-0
 60:  0.0011117 0.0022513   0-3
 61:  0.0011236 0.0022674   0-0
 62:  0.0011500 0.0023667   1-3
 63:  0.0011513 0.0023788   0-0
 64:  0.0012012 0.0024184   1-0
 65:  0.0012200 0.0025019   0-0
 66:  0.0012253 0.0025062   1-1
 67:  0.0012410 0.0026088   0-0
 68:  0.0012471 0.0026173   3-1
 69:  0.0012574 0.0026490   0-0
 70:  0.0012875 0.0026576   2-3
 71:  0.0013107 0.0028493   0-0
 72:  0.0013316 0.0028756   1-0
 73:  0.0013761 0.0029057   0-0
 74:  0.0014241 0.0029142   2-3
 75:  0.0014748 0.0029372   0-0
 76:  0.0014851 0.0029474   2-2
 77:  0.0015024 0.0030059   0-0
 78:  0.0015044 0.0030309   2-3
 79:  0.0015106 0.0031342   0-0
 80:  0.0015129 0.0031550   2-1
 81:  0.0015150 0.0031565   0-0
 82:  0.0015295 0.0032586   0-1
 83:  0.0015542 0.0032745   0-0
 84:  0.0015822 0.0032994   2-0
 85:  0.0016125 0.0033041   0-0
 86:  0.0016299 0.0033495   1-3
 87:  0.0016477 0.0033801   0-0
 88:  0.0016560 0.0033820   2-2
 89:  0.0016966 0.0034207   0-0
 90:  0.0016975 0.0034965   1-0
 91:  0.0017108 0.0035036   0-0
 92:  0.0017323 0.0036224   3-1
 93:  0.0018108 0.0037002   0-0
 94:  0.0018483 0.0037304   0-1
 95:  0.0018778 0.0037377   0-0
 96:  0.0019019 0.0037468   0-0
 97:  0.0019060 0.0037955   0-0
 98:  0.0019224 0.0038050   1-2
 99:  0.0019713 0.0038233   0-0

Out of 999 ticks, Alice recorded 649 "+" and 115 "-" detections on her 0-degree detectors (0,1), while there were 135 "+" and 100 "-" detections for the 45-degree rotation (2,3).

Out of 999 ticks, Bob recorded 634 "+" and 121 "-" detections on his 0-degree detectors (0,1), while there were 116 "+" and 128 "-" detections for the 45-degree rotation (2,3).

Time-stamps and match-making. This is really the only problem here, but what a strange problem it is. You would think true matching pairs would be detected close together in the timeline and far away from other pairs, so they could be recognized and singled out, but for some reason they obviously are not.

The most peculiar thing, however, is that Alice's detectors consistently trigger at a higher rate than Bob's. Almost half of Alice's data simply cannot be matched and has to be discarded, but what to throw out and what to keep? So what in the world are those "extra" detections, and why are there consistently so many more detections on Alice's detectors than on Bob's?
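For reference, a minimal sketch of how such per-detector tallies can be reproduced from the parsed .txt. The column layout (index, A-time, B-time, A-B detector pair) is assumed from the sample above, and "bell999.txt" is just a placeholder name:

Code:
# Tally detections per detector for Alice and Bob.
# Assumes lines like "  1:  0.0000022 0.0000529   0-0" as in the sample above;
# "bell999.txt" is a placeholder file name.
from collections import Counter

alice, bob = Counter(), Counter()
with open("bell999.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 4 or "-" not in parts[3]:
            continue  # skip comments, headers and blank lines
        a_det, b_det = parts[3].split("-")
        alice[int(a_det)] += 1
        bob[int(b_det)] += 1

# Detectors 0,1 are the 0-degree pair, 2,3 the 45-degree pair.
print("Alice 0deg:", alice[0], alice[1], " 45deg:", alice[2], alice[3])
print("Bob   0deg:", bob[0], bob[1], " 45deg:", bob[2], bob[3])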
 
  • #2
Nugatory said:
You need to look at a sample that's large enough to pick the signal out of the noise. Often only one member of the pair will be detected, and not all of the photons that make it into the detector are members of entangled pairs.

Have you tried playing around with the software that came along with that data sample?

The ratio does smooth out close to 50%-50% for the 45-degree rotation, but the 649:115 and 634:121 ratios for the 0-degree polarizer rotation are still far from that. Doesn't that look like the incoming photons' polarization averages out around a certain angle, rather than being uniformly random?
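(In fractions: 649/764 ≈ 85% and 634/755 ≈ 84% "+" at 0 degrees, versus 135/235 ≈ 57% and 116/244 ≈ 48% "+" at 45 degrees.)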

Their software wouldn't run for me; I guess I need an older Python version. Does it work for you?
 
  • #3
Jabbu said:
Time-stamps and match-making. This is really the only problem here, but what a strange problem it is. You would think true matching pairs would be detected close together in the timeline and far away from other pairs, so they could be recognized and singled out, but for some reason they obviously are not.

The most peculiar thing, however, is that Alice's detectors consistently trigger at a higher rate than Bob's. Almost half of Alice's data simply cannot be matched and has to be discarded, but what to throw out and what to keep? So what in the world are those "extra" detections, and why are there consistently so many more detections on Alice's detectors than on Bob's?

The issue about unmatched detections is not much really once you follow the logic. Here are a few points:

a. Perhaps 1 in a billion input photons are down-converted. Virtually all of the rest are filtered out. But some light may bounce around a bit and not be filtered; if so, it may be unmatched, and it will obviously not be entangled.

b. The creation time of the 2 photons is not exactly the same. But pairs are generally created far enough apart that 2 pairs will not "overlap". Keep in mind that a nanosecond translates into about a foot of distance.

c. Consequently, a time window must be defined to match up pairs. This is something the data analyst can work with. Extensive review of this parameter has not turned up any bias to date. PF member Peter Morgan (a mathematical physicist) has done substantial work on that.

d. Pairs may be created which are NOT polarization entangled. This can occur for a variety of reasons. Usually it is because they are distinguishable in some manner.

e. The actual collection of pairs is done by harvesting output photons at certain conic angles, i.e., a small deviation from straight out; usually about 2% off straight. Virtually all of the unconverted photons go straight through, so this also helps to filter out the unwanted ones.

In the end:

i) You end up losing some entangled pairs; these cannot contribute to accurate entanglement statistics. This can happen because one or both of the photons in the pair are lost.
ii) You end up counting some unentangled pairs by mistake; these cannot contribute to accurate entanglement statistics either.

The result of i) and ii) is always a smaller violation of your Bell inequality than expected. A typical result for the CHSH inequality is 2.40, where 2.00 is the maximum possible for local realism and 2√2 ≈ 2.83 is the absolute maximum predicted by QM assuming perfect efficiency. A 2.40 reading, depending on the setup, may amount to 30 or more standard deviations.
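To make the CHSH arithmetic concrete, here is a small sketch with made-up coincidence counts (illustrative numbers only, not Innsbruck data):

Code:
# CHSH from coincidence counts -- made-up numbers, not Innsbruck data.
# E = (N++ + N-- - N+- - N-+) / N_total for one pair of settings.
def E(npp, npm, nmp, nmm):
    return (npp + nmm - npm - nmp) / (npp + npm + nmp + nmm)

E_ab   = E(360, 90, 90, 360)   # +0.6
E_abp  = E(90, 360, 360, 90)   # -0.6
E_apb  = E(360, 90, 90, 360)   # +0.6
E_apbp = E(360, 90, 90, 360)   # +0.6

S = E_ab - E_abp + E_apb + E_apbp
print(S)  # 2.4 > 2.0 (local realism bound), < 2*sqrt(2) = 2.83 (QM max)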
 
  • #4
When the detection pair is 2-2, 2-3, 3-2, or 3-3, both Alice's and Bob's polarizers are rotated at 45 degrees, but does that mean a 0 or a 90 degree relative angle?

I think it's like this:

0-0, 0-1, 1-0, 1-1: theta_A = 0, theta_B = 0
0-2, 0-3, 1-2, 1-3: theta_A = 0, theta_B = +45
2-2, 2-3, 3-2, 3-3: theta_A = -45, theta_B = +45
2-0, 2-1, 3-0, 3-1: theta_A = -45, theta_B = 0

...so there are three possible relative angles: 0, 45, and 90 degrees, with two setting combinations giving 45 degrees and one each giving 0 and 90 degrees?
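If that convention is right, it is straightforward to encode; a small sketch assuming exactly the mapping listed above:

Code:
# Map an (Alice, Bob) detector pair to a relative polarizer angle,
# assuming the convention above: detectors 0,1 = no rotation,
# detectors 2,3 = 45-degree rotation (Alice at -45, Bob at +45).
def relative_angle(a_det, b_det):
    theta_a = 0 if a_det in (0, 1) else -45
    theta_b = 0 if b_det in (0, 1) else +45
    return abs(theta_b - theta_a)

# relative_angle(0, 0) -> 0, relative_angle(1, 3) -> 45,
# relative_angle(2, 1) -> 45, relative_angle(2, 3) -> 90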


Consider B-time 0.0001300: it's easy to match it with A-time 0.0001301, but there are roughly two candidate A-times for each B-time, and sometimes they are equally far away, so we have to choose either the time before or the time after the B-time. Is one choice preferred over the other? Is the distance on both sides supposed to be the same, or is one photon actually supposed to arrive later than the other?

One A-time will usually be closer to a given B-time than any other, so is it reasonable to simply match every B-time with the closest A-time?
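One way to test that idea is to actually compute, for each B-time, the nearest A-time and look at the spread of the differences. A minimal sketch, assuming the two time columns have been read into sorted lists:

Code:
# For each B-time, find the nearest A-time (naive nearest-neighbor matching).
# Assumes a_times is a sorted list of floats read from the data.
import bisect

def nearest_a(b_time, a_times):
    i = bisect.bisect_left(a_times, b_time)
    candidates = a_times[max(0, i - 1):i + 1]  # neighbor on each side
    return min(candidates, key=lambda a: abs(a - b_time))

a_times = [0.0000022, 0.0000529, 0.0001282, 0.0001301]
print(nearest_a(0.0001300, a_times))  # -> 0.0001301, the pair noted above

Note that this greedy per-B matching can assign the same A-time to two different B-times; a real coincidence search presumably also needs a clock offset and an acceptance window.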
 
  • #5
DrChinese said:
c. Consequently, a time window must be defined to match up pairs. This is something the data analyst can work with. Extensive review of this parameter has not turned up any bias to date. PF member Peter Morgan (a mathematical physicist) has done substantial work on that.

Can you single out several matching pairs from the data to give an example of how to apply that "time window"? And what value is it supposed to have for this data set?
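To make the question concrete, here is the kind of time-window matching I have in mind (a sketch; the offset and window values are placeholders, not values from the Innsbruck analysis):

Code:
# Windowed coincidence matching -- offset and window are placeholders,
# not parameters from the Innsbruck analysis.
def match_pairs(a_times, b_times, offset=0.0, window=1e-6):
    pairs, i = [], 0
    for b in b_times:                      # both lists assumed sorted
        t = b - offset                     # shift B onto A's clock
        while i < len(a_times) and a_times[i] < t - window / 2:
            i += 1                         # skip A events too early to match
        if i < len(a_times) and abs(a_times[i] - t) <= window / 2:
            pairs.append((a_times[i], b))
            i += 1                         # each A event used at most once
    return pairs

The window width would be exactly the parameter from point c above; presumably one can sweep it and watch how the coincidence count and the correlations change.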
 
  • #6
It is surprising that only one side has more photons, but I suppose there are double events, where detectors click on both sides, and single events, where only one side clicks.
 
  • #7
A closed thread is not an invitation to continue the same discussion in another place.
 

What is the purpose of analyzing the Innsbruck Bell experiment raw data sample?

The purpose is to study the phenomenon of quantum entanglement and to further understand the principles of quantum mechanics.

How is the raw data sample collected in the Innsbruck Bell experiment?

The raw data sample is collected by measuring correlations between entangled particles. This is done by performing measurements on the particles' properties, such as spin, polarization, or momentum.

What types of statistical analysis are commonly used on the Innsbruck Bell experiment raw data sample?

Commonly used techniques include correlation analysis, the chi-square test, and the t-test. These methods help determine the significance of the observed correlations and validate the results of the experiment.

How is quantum noise accounted for in the analysis?

Quantum noise is accounted for with statistical techniques such as signal-to-noise ratio analysis and error bars in graphs. These methods help distinguish the effects of quantum entanglement from random noise in the data.

What are some challenges in analyzing the Innsbruck Bell experiment raw data sample?

Challenges include the small sample size, due to the difficulty of creating entangled particles, potential experimental errors, and interpreting the results in the context of quantum mechanics.
