Heisenberg Uncertainty vs Measurement Error

  • #1
Derek P
Spun off from https://www.physicsforums.com/threads/is-quantum-physics-retro-deterministic.945431/#post-5984157.

@Gerinski said:
if we measure a particle's position at time X (not caring about its momentum) and we measure it again at a later time Y and we find it at some other position (again not caring about its momentum), do those 2 position measurements not enable us to infer what its position and momentum must have been between time X to Y?

@A. Neumaier said:
Yes, this is indeed more or less the way momentum is inferred from particle tracks. But the measured positions are uncertain, which implies a much bigger uncertainty in the resulting momentum. There is no way to escape the Heisenberg uncertainty relation.

I said:
It depends on the experimental arrangement but it is not difficult to use a shutter to fix the position and time with arbitrary precision at both measurements.

@A. Neumaier said:
You cannot construct a shutter with sharp enough borders to guarantee this - at some point the molecular structure of the shutter gets in the way.


My argument does not depend on being able to push the experimental accuracy to arbitrary limits. The point is that experimental limitations are not the same as quantum uncertainty and may usually be reduced to many orders of magnitude less than the latter. To put it simply, the uncertainty in the measurement of momentum does not derive from errors or even the uncertainty of the two positional measurements.

Experimentally, a one-dimensional system is harder to use than a two-dimensional one: (a) the particles have to have a large drift velocity on top of the momentum being measured; (b) slicing a moving beam is likely to be much less precise than using a static slit or pinhole; and (c) in two dimensions the deflection momentum is translated into an angle, so the system resolves the measurement to a position on a screen. But no matter: the timing method was, after all, a gedankenexperiment.

So, on to a feasible experiment with back-of-the-envelope numbers.

Consider a field emission cathode and a polished dynode detector. The drift potential is 1 V, the drift distance is 10 m and the temperature is 0.1 K. The cathode is pulsed with a 70 ps pulse and the resolution of the detector is the same.

Absolute accuracy is not relevant, it is repeatability that matters here. I'll use the term "error" here for the uncontrollable random variations.

Call the time error 100 ps.
The drift velocity is v ≈ 600,000 m/s (for an electron accelerated through 1 V).
The effective shutter width is thus v * 100 ps ≈ 0.06 mm, which is far worse than the limitations of the metal surfaces; these will therefore be ignored.
Time of flight ≈ 17 µs.
The time-of-flight error is what limits the repeatability of the momentum measurement.
Temperature adds roughly +/- 8 µeV to the 1 eV drift energy, or about 4 ppm in velocity.
This gives a scatter of about 70 ps, which is a little less than the timing error.
Just add the errors, although this will be pessimistic:
Total measurement error < (100 ps + 70 ps)/17 µs, i.e. < 10 ppm.
The actual momentum of the electron is 600,000 m/s * 10^-30 kg (taking the electron mass as roughly 10^-30 kg), i.e. 6 * 10^-25 kg m/s.
The momentum measurement error is therefore 6 * 10^-30 kg m/s.
The actual positional measurement is defined by the metal surfaces to a few atomic dimensions, say 10^-9 m.
The error product is thus 6 * 10^-39 J s,
which is 5 orders of magnitude smaller than Planck's constant at 6.62 * 10^-34 J s.
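For anyone who wants to check the arithmetic, here is a minimal Python sketch of the numbers above. It assumes an ideal electron accelerated through the stated 1 V over 10 m at 0.1 K with a 100 ps timing error, takes kT as the thermal energy spread, and ignores surface and penetration effects; the exact figures shift slightly with the electron mass used.

```python
# Minimal sketch of the back-of-the-envelope numbers above (SI units throughout).
# Assumes an ideal electron, 1 V drift potential, 10 m flight path, 0.1 K source
# temperature and a 100 ps timing error; surface and penetration effects ignored.
import math

e, m_e, k_B, h = 1.602e-19, 9.11e-31, 1.381e-23, 6.626e-34

V, L, T, dt = 1.0, 10.0, 0.1, 100e-12        # drift potential, path, temperature, timing error

v = math.sqrt(2 * e * V / m_e)               # drift velocity, ~6e5 m/s
shutter_width = v * dt                       # ~6e-5 m, i.e. ~0.06 mm
t_flight = L / v                             # ~1.7e-5 s, i.e. ~17 us

dE_thermal = k_B * T                         # ~8.6 ueV of thermal energy spread
dv_over_v = 0.5 * dE_thermal / (e * V)       # ~4 ppm, since v ~ sqrt(E)
dt_thermal = dv_over_v * t_flight            # ~70 ps of arrival-time scatter

rel_err = (dt + dt_thermal) / t_flight       # pessimistic straight sum, ~1e-5
p = m_e * v                                  # ~5.5e-25 kg m/s
dp = rel_err * p                             # ~5e-30 kg m/s
dx = 1e-9                                    # positional repeatability, a few atomic spacings

print(f"dp * dx = {dp * dx:.1e} J s,  h = {h:.1e} J s")   # ~5e-39 vs 6.6e-34
```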
 
  • #2
It appears to me that you are calculating a classical measurement error, without factoring in any inherent quantum uncertainties for pairs of conjugate variables. Classical measurement error is with respect to a supposed true value, whereas quantum measurement error just says that if you prepare a large number of quantum systems in identical states and then perform measurements of one variable C on some of those systems and of D on others, then the standard deviation of the C results times the standard deviation of the D results will satisfy a certain inequality.
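To make that ensemble reading concrete, here is an illustrative sketch (not a model of the apparatus in this thread). It assumes the prepared state is a minimum-uncertainty Gaussian wavepacket of width sigma_x, for which position outcomes scatter as N(0, sigma_x) and momentum outcomes as N(0, hbar/(2 sigma_x)); measuring x on one sub-ensemble and p on another, the product of the two sample standard deviations approaches hbar/2, which is what the inequality constrains.

```python
# Illustrative only: ensemble reading of the uncertainty relation for an assumed
# minimum-uncertainty Gaussian state.  x is measured on one sub-ensemble and p on
# a different one; the bound constrains the product of the two spreads, not any
# single (x, p) pair of results.
import numpy as np

hbar = 1.055e-34
sigma_x = 1e-9                      # assumed wavepacket width (m)
sigma_p = hbar / (2 * sigma_x)      # corresponding minimum momentum spread (kg m/s)

rng = np.random.default_rng(0)
x_results = rng.normal(0.0, sigma_x, 100_000)   # position outcomes on one sub-ensemble
p_results = rng.normal(0.0, sigma_p, 100_000)   # momentum outcomes on another

print(f"std(x) * std(p) = {x_results.std() * p_results.std():.3e}")
print(f"hbar / 2        = {hbar / 2:.3e}")      # the product approaches this bound
```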
 
  • #3
To follow up on this: the HUP is a statistical law involving the variance of measurements on a large number of identically prepared systems. It doesn't really say anything about a specific pair of measurements. Even if you could say that at a certain time a particle had a definite measurement of position and a definite measurement of momentum, that would not directly contradict the HUP, as these are individual measurements and not a variance.
 
  • #4
At least for free quantum fields, for at least some physical cases, there is a way to consider quantum noise to be just noise, in the commonplace classical sense.

Europhys. Lett. 87, 31002 (2009), "Equivalence of the Klein-Gordon random field and the complex Klein-Gordon quantum field" (which is open access), shows that the Hilbert space of a quantized complex Klein-Gordon field is isomorphic to the Hilbert space of a Klein-Gordon random field. The reason the paper is not more widely known will be fairly clear if you look at it: it's the complex Klein-Gordon field, which is not known to be physical; the construction is somewhat messy; and the paper doesn't place the construction in a wider context very well. All three of these issues are fixed, as well as I'm currently able, in a paper now with J. Math. Phys. (it's past the editorial decision to send it to referees), arXiv:1709.06711, "Classical states, quantum field measurement", which gives a similar construction for the quantized electromagnetic field (and an unsurprisingly somewhat more abstruse construction for the Dirac spinor field), does so much more neatly (with a much cleaner derivation of the quantized complex Klein-Gordon case in Appendix B), and places the construction much more clearly in the literature, I think, in terms of the Koopman-von Neumann presentation of classical mechanics.

Heisenberg uncertainty, from the perspective/coordinatization of this formalism, is "just" noise/fluctuations; however, it differs from thermal noise/fluctuations in that it is Poincaré invariant instead of being invariant only under a subgroup of the Poincaré group that leaves a time-like 4-vector invariant. From a classical perspective, measurement error is caused by uncontrolled noise/fluctuations, so Heisenberg uncertainty is, again from this perspective, "just" a measurement error. For thermal noise/fluctuations we can reduce the temperature of an object by placing it in contact with a heat bath at a lower temperature, and comparably (at least for the free quantized EM field, i.e. quantum optics) we can construct squeezed light states to create regions of space-time that have reduced quantum noise/fluctuations (inevitably at the cost of other regions of space-time having higher noise/fluctuations; understanding the process gets into the nature of interactions of quantum optics with macroscopic material apparatus, which I am far from understanding well enough). Note, however, that quantum noise is a computational resource in quantum computation, so we may want either to reduce or to increase it, with proper care.

Note for @Derek P or other admin: feel free to delete this as being at an inappropriate level for the question. It's about fields, not about particles; but, in its defense, particles are for most physicists just a particular coordinatization of fields, and it's past time to start replacing wave/particle thinking with wave/field thinking at the Intermediate level, as has been advocated for some time by https://www.researchgate.net/profile/Art_Hobson (whose personal site at uark.edu seems to have disappeared) in the physics teaching literature.
 
  • #5
nomadreid said:
It appears to me that you are calculating a classical measurement error, without factoring in any inherent quantum uncertainties for pairs of conjugate variables. Classical measurement error is with respect to a supposed true value, whereas quantum measurement error just says that if you prepare a large number of quantum systems in identical states and then perform measurements of one variable C on some of those systems and of D on others, then the standard deviation of the C results times the standard deviation of the D results will satisfy a certain inequality.
Absolutely. But this thread is specifically about the matter raised in the original thread, namely whether it is possible - with one-dimensional motion - to reduce the classical measurement error sufficiently to make a direct measurement of the uncertainty product. Since someone said it wasn't, I'm providing an example where it is!
 
  • #6
Peter Morgan said:
At least for free quantum fields, for at least some physical cases, there is a way to consider quantum noise to be just noise, in the commonplace classical sense. [..]
Well, the experiment I suggested makes no assumptions about whether the system is "really" particles or fields. It is about measuring position and time. Okay, there is an assumption that a time-of-flight measurement with particles of known mass is a valid measurement of the momentum. But that assumption was never disputed. It's not my call whether posts like yours remain, but yes I do think it is off-topic. Not that anyone is going to listen to me!
 
  • #7
Derek P said:
The actual positional measurement is defined by the metal surfaces to a few atomic dimensions, say 10^-9 m

Maybe I don't understand your estimate. Do you know exactly the point in space where the electron “leaves” the field emission cathode and the point in space where the electron “arrives” at the active area of the first dynode?
 
  • #8
Derek P said:
Absolutely. But this thread is specifically about the matter raised in the original thread, namely whether it is possible - with one-dimensional motion - to reduce the classical measurement error sufficiently to make a direct measurement of the uncertainty product. Since someone said it wasn't, I'm providing an example where it is!
Ah, since I did not read the original thread, I did not know that this is what you were attempting; it wasn't clear in your first post in this thread. In fact, your statement that
Derek P said:
The error product is thus 6 * 10^-39
Which is 5 orders of magnitude smaller than Planck's constant
(in which you are comparing apples and oranges, as PeroK points out) makes it sound as if you are attempting to give a situation where the total measured error violates the Uncertainty Principle.
Now that this is understood, I can take my comment
nomadreid said:
It appears to me that you are calculating a classical measurement error, without factoring in any inherent quantum uncertainties for pairs of conjugate variables.
and let it bring up a further point: if you are attempting to quantify the classical measurement error, how do you tell, when measuring, how much of it is due to classical causes and how much is due to a quantum deviation? Any measurement you do will include both. In any case, again as PeroK points out, you would have to do this over a large number of measurements for the classical error and the quantum deviation even to begin to be comparable, and you would then be attempting to do something analogous to minimizing two variables in a single equation.
 
  • #9
PeroK said:
To follow up on this: the HUP is a statistical law involving the variance of measurements on a large number of identically prepared systems. It doesn't really say anything about a specific pair of measurements. Even if you could say that at a certain time a particle had a definite measurement of position and a definite measurement of momentum, that would not directly contradict the HUP.
Well, I think @bhobba has said something along those lines in another thread. However actually making both measurements together is impossible if, as we currently believe, they are represented by non-commuting operators. It's off-topic anyway as this thread is not about interpretation.
 
  • #10
Lord Jestocost said:
Maybe I don't understand your estimate. Do you know exactly the point in space where the electron “leaves” the field emission cathode and the point in space where the electron “arrives” at the active area of the first dynode?
Yes, if the cathode is a needle point and the anode a polished plate (curved to put the needle tip at the centre if you want to be fussy!). And since the scenario is about the variance of the momentum, absolute accuracy is not needed; the distance just needs to stay the same each time.
 
  • #11
nomadreid said:
comparing apples and oranges
Well that's nonsense. They are still variances whether caused by HUP or by limitations in the apparatus.
makes it sound as if you are attempting to give a situation where the total measured error violates the Uncertainty Principle.
No, it violates the inequality. It does not violate the principle. That's allowed because the measurements are not on the same wavefunction: they are consecutive.
and let it bring up a further point: if you are attempting to quantify the classical measurement error, how do you tell, when measuring, how much of it is due to classical causes and how much is due to a quantum deviation? Any measurement you do will include both. In any case, again as PeroK points out, you would have to do this over a large number of measurements for the classical error and the quantum deviation to even begin to be comparable, and you would then be attempting to do something analogous to minimizing two variables in a single equation
If I've done my sums right, the total error will be five orders of magnitude greater than the classical error so, short of malicious demons interfering with the apparatus, the observed result can only be due to HUP.
 
  • #12
Derek P said:
Yes, if the cathode is a needle point and the anode a polished plate (curved to put the needle tip at the centre if you want to be fussy!). And since the scenario is about the variance of the momentum, absolute accuracy is not needed; the distance just needs to stay the same each time.

I think you can see that as you attempt to measure position at *both* times X (say at point A) and Y (at point B) in a progressively more accurate fashion, the time window must narrow and the location window must narrow too. How you expect that to work is beyond me. You would be waiting for a long time for a single trial to go from A to B.
 
  • #13
Derek P said:
They are still variances whether caused by HUP or by limitations in the apparatus.
Well, yes, but they are different variables, and even different kinds of variables. The error variables in classical measurement relate to single measurement errors caused by factors outside the measured quantities, whereas the Heisenberg uncertainty is not an "error" at all, but rather a statistical deviation due to the observables themselves being non-commuting.
Derek P said:
That's allowed because the measurements are not on the same wavefunction: they are consecutive.
I'm not sure what you mean by this.
Derek P said:
If I've done my sums right, the total error will be five orders of magnitude greater than the classical error so, short of malicious demons interfering with the apparatus, the observed result can only be due to HUP.
Offhand I don't see any theoretical objection to somehow getting the classical error to an insignificant level, but I am not convinced that your experiment does this. Both the classical measurement and the quantum deviation involve ranges; the quantum measurement has a huge range, including the possibility of being less than (or greater than) the limits of your classical measurement error. Indeed, on a single measurement the sum of the quantum deviation and the classical error can still be within your classical error range, and you will at best have been able to put some limits on the quantum deviation, but no more than that. In other words, you have not made clear how you intend to distinguish the classical error from a quantum deviation on a single measurement, or even with repeated measurements. For example, if quantum deviations were indeed greater than the classical measurement error, the calculation of your classical error would be different: roughly, the "correct value" would be different (and would fluctuate unpredictably from one measurement to the next, which would play havoc with such calculations). In other words, you can't dissect your measurements as if quantum effects were an afterthought to Newtonian mechanics.
 
  • #15
Derek P said:
Well that's nonsense. They are still variances whether caused by HUP or by limitations in the apparatus.

No, it violates the inequality. It does not violate the principle. That's allowed because the measurements are not on the same wavefunction: they are consecutive.

If I've done my sums right, the total error will be five orders of magnitude greater than the classical error so, short of malicious demons interfering with the apparatus, the observed result can only be due to HUP.
Have you calculated how many repetitions of your experiment are required to give a sample variance ##{\sigma^2}_N## of around ##10^{-30}##?
If the apparatus is accurate to ##10^{-12}## then my guess is ##N \approx 10^{18}##.
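In case it helps make the sample-size question concrete, here is one possible framing (my assumption about what is being asked, not necessarily the calculation behind the guess above): for roughly Gaussian scatter, the sample variance of N repeats has a standard error of about sigma^2 * sqrt(2/(N-1)), so resolving a variance component that is a fraction f of the total variance needs N of order 2/f^2.

```python
# One way to frame the sample-size question (an assumed Gaussian-scatter model,
# not necessarily the calculation behind the N ~ 10^18 guess above): the sample
# variance s^2 of N repeats has standard error ~ sigma^2 * sqrt(2 / (N - 1)).
import math

def repeats_needed(fraction):
    """Repetitions needed so SE(s^2) falls below `fraction` of the total variance."""
    return math.ceil(2.0 / fraction ** 2) + 1

for f in (0.5, 0.1, 0.01):
    print(f"to resolve a variance component of {f:.0%} of the total: N ~ {repeats_needed(f):,}")
```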
 
  • #16
Derek P said:
Yes, if the cathode is a needle point and the anode a polished plate (curved to put the needle tip at the centre if you want to be fussy!). And since the scenario is about the variance of the momentum, absolute accuracy is not needed; the distance just needs to stay the same each time.

With regard to the dynode: Do you know the exact penetration depth of the primary electron in question, namely the depth below the surface at which the secondary electrons are generated?
 
  • #18
Lord Jestocost said:
With regard to the dynode: Do you know the exact penetration depth of the primary electron in question, namely the depth below the surface at which the secondary electrons are generated?
Nope, but it works with very thin films, so I don't think you are going to recover five orders of magnitude that way.
 
  • #19
Derek P said:
Well no. Because the particular setup I was discussing is the one dimensional case as raised by @Gerinski.

Why would that matter, when it is the concept and the principle that are relevant?

Zz.
 
  • #20
Derek P said:
this thread is specifically about the matter raised in the original thread, namely whether it is possible - with one-dimensional motion - to reduce the classical measurement error sufficiently to make a direct measurement of the uncertainty product. Since someone said it wasn't, I'm providing an example where it is!

nomadreid said:
since I did not read the original thread, I did not know that this is what you were attempting; it wasn't clear in your first post in this thread

While it is true that the point under discussion in this thread originally came up in a previous thread, the discussion here can and should stand on its own, regardless of what anyone did or didn't say in another thread. All participants please bear that in mind.
 
  • #21
Mentz114 said:
Have you calculated how many repetitions of your experiment are required to give a sample variance ##{\sigma^2}_N## of around ##10^{-30}##?
If the apparatus is accurate to ##10^{-12}## then my guess is ##N \approx 10^{18}##.
I have no idea what those numbers are but I suspect you may have misunderstood my figures. They are in ordinary mks units. I can't see what you are trying to do or why on Earth you would need such a large sample to get a good estimate of the uncertainty. Or why it matters.
 
  • #22
Derek P said:
I have no idea what those numbers are but I suspect you may have misunderstood my figures. They are in ordinary mks units. I can't see what you are trying to do or why on Earth you would need such a large sample to get a good estimate of the uncertainty. Or why it matters.
Yes, I have certainly misunderstood the list of numbers and imagined the wrong procedure. A complete waste of time, actually.
 
  • #23
ZapperZ said:
Why would that matter, when it is the concept and the principle that are relevant?

Zz.
It matters because the OP used the example. If it were up to me I'd have stuck with the single slit. But the OP was struggling with the idea of simultaneous measurement and came up with the idea of measuring momentum by time-of-flight. That was then dismissed on the grounds of uncertainty about the path length. So I decided to see whether it was a reasonable objection, basically because I can think of no fundamental reason why you shouldn't measure the x momentum with "arbitrary precision subject to technological limitations of the apparatus".
 
  • #24
DrChinese said:
I think you can see that as you attempt to measure position at *both* times X (say at point A) and Y (at point B) in a progressively more accurate fashion, the time window must narrow and the location window must narrow too. How you expect that to work is beyond me. You would be waiting for a long time for a single trial to go from A to B.
The example used specific figures and got 5 orders of magnitude better than h-bar. So there's no need to push for higher precision.
 
  • #25
Mentz114 said:
Yes, I have certainly misunderstood the list of numbers and imagined the wrong procedure. A complete waste of time, actually.
No problem. Thanks for saying so.
 
  • #26
Derek P said:
[..]
I can think of no fundamental reason why you shouldn't measure the x momentum with "arbitrary precision subject to technological limitations of the apparatus".
Previous threads about the HUP have agreed with this. In principle there is no limit - but that is not the issue.
In the ensemble interpretation the HUP means it is not possible to prepare an ensemble in which the HUP (say between x and p) is violated. Nothing is implied about individual x or p measurements.

But I know that the ensemble is not your favourite thing.
 
  • #27
nomadreid said:
Derek P said: That's allowed because the measurements are not on the same wavefunction: they are consecutive.
I'm not sure what you mean by this.
After the first measurement the system is in a new state, specifically an eigenstate of the observation operator.
Offhand I don't see any theoretical objection to somehow getting the classical error to an insignificant level, but I am not convinced that your experiment does this. [..] In other words, you have not made clear how you intend to distinguish the classical error from a quantum deviation on a single measurement, or even with repeated measurements. [..]
Are you seriously suggesting that positional uncertainty of the solid metal electrodes could be comparable to the 100 microns imposed by the sampling time?
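As a purely schematic illustration of the remark above that a measurement leaves the system in an eigenstate of the measured observable, here is a toy two-level projective measurement; it has nothing to do with the actual apparatus and is just the textbook projection postulate in code.

```python
# Toy projective measurement on a two-level system: the outcome is drawn with the
# Born probabilities and the post-measurement state is the corresponding
# eigenvector, so a second measurement acts on a different state.
import numpy as np

rng = np.random.default_rng(1)
psi = np.array([0.6, 0.8])               # initial state in the measurement basis
probs = np.abs(psi) ** 2                 # Born probabilities: 0.36 and 0.64

outcome = rng.choice(len(psi), p=probs)  # record one outcome
psi_after = np.zeros_like(psi)
psi_after[outcome] = 1.0                 # collapse onto the observed eigenstate

print("outcome:", outcome, "post-measurement state:", psi_after)
```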
 
  • #28
Mentz114 said:
Previous threads about the HUP have agreed with this. In principle there is no limit - but that is not the issue.
In the ensemble interpretation the HUP means it is not possible to prepare an ensemble in which the HUP (say between x and p) is violated. Nothing is implied about individual x or p measurements.

But I know that the ensemble is not your favourite thing.
Fortunately, in this thread interpretation doesn't come into it. An adequate sample size to provide acceptable confidence limits goes without saying. That will be the same whether one privately considers the Ensemble Interpretation to be a cop-out or that Many Worlders are hippie mystics who've fried their brains on acid :)
 
  • #29
Derek P said:
Are you seriously suggesting that positional uncertainty of the solid metal electrodes could be comparable to the 100 microns imposed by the sampling time?
Earlier (post #10) you stated
Derek P said:
the distance just needs to stay the same each time.
This sounds as if you are assuming zero variation in the distance. Good trick. There are a few other hidden assumptions like that in your initial values.
Derek P said:
The actual momentum of the electron is 600,000 m/s * 10^-30 kg (taking the electron mass as roughly 10^-30 kg), i.e. 6 * 10^-25 kg m/s.
The momentum measurement error is therefore 6 * 10^-30 kg m/s.
Sorry, I missed the "therefore" part of that implication. I won't start on the word "actual".

In any case, if I understand your argument, it is that if you can get the classical variation down below the variation allowed by the Uncertainty Principle... but before completing this argument, we hit the main snag that multiple posts in this thread have attempted to point out: the classical measurement error refers to a single measurement (even when obtained by multiple measurements), whereas the uncertainty of the Uncertainty Principle does not. Therefore, this initial formulation cannot even use the word "below" (i.e., <) and have it make mathematical sense.
 
  • #30
nomadreid said:
If I understand your argument
Aye, there's the rub. You do not understand my argument.
, it is that if you can get the classical variation down below the variation allowed by the Uncertainty Principle...
No, it's that you can easily do so. Nothing more, nothing less.
but before completing this argument,
The argument is complete and the point proven as soon as the possibility of "getting the classical variation down below the variation allowed by the Uncertainty Principle" is demonstrated.
we hit the main snag that multiple posts in this thread have attempted to point out:
Quite so and in every case the poster has assumed I am trying to prove something which I am not. Reading my first post would probably help.
the classical measurement error refers to a single measurement (even when obtained by multiple measurements), whereas the uncertainty of the Uncertainty Principle does not. Therefore, this initial formulation cannot even use the word "below" (i.e., <) and have it make mathematical sense
Incorrect. The experiment that I suggested measures distance with a tape measure and velocity with a stopwatch. That's two measurements on two different properties with two different measurement errors. There is absolutely no reason why you shouldn't multiply the errors together if you want to.

Please re-read posts #1, #5 and #23 before arguing against something I haven't said.
 
  • #31
Derek P said:
Quite so and in every case the poster has assumed I am trying to prove something which I am not. Reading my first post would probably help.

Derek P said:
Please re-read posts #1, #5 and #23 before arguing against something I haven't said.

You apparently have never been a teacher. If your audience does not understand your explanation, one doesn't just attribute the problem to the audience; rather, it is time to try to reformulate the explanation. Hence, I would suggest that you go back, figure out where the misunderstandings are coming from, and restate your argument in such a way that those misunderstandings (if they are indeed such) are less likely to occur in your new explanation.

Derek P said:
That's two measurements on two different properties with two different measurement errors. There is absolutely no reason why you shouldn't multiply the errors together if you want to.

Fine. Just... that is not what the HUP is doing. But obviously I am not making this point strongly enough. Perhaps someone else (hello, other contributors?) can do a better job than I am doing.
 
  • #32
Derek P said:
That's two measurements on two different properties with two different measurement errors. There is absolutely no reason why you shouldn't multiply the errors together if you want to.
Right, and furthermore you can make both measurements with arbitrarily high precision and therefore make that product arbitrarily small. That's not a violation of the uncertainty principle.
 
  • #33
Nugatory said:
Right, and furthermore you can make both measurements with arbitrarily high precision and therefore make that product arbitrarily small. That's not a violation of the uncertainty principle.
Not sure whether I should say "Yes, that's what I said" or "No, I never said it was". :biggrin::biggrin::biggrin:
 
  • #34
nomadreid said:
You apparently have never been a teacher.
I find it hard to understand why anyone should mistake my activity here for teaching. Discussion, yes; argument, yes. But teaching?

Anyway if you really need it spelled out, my argument, as you call it, is simply a rebuttal of Dr Neumaier's claims concerning the apparatus proposed by Gerinski, namely that such apparatus is impossible. Nowhere have I suggested that the HUP can be violated or circumvented.
 
  • #35
Derek P said:
The example used specific figures and got 5 orders of magnitude better than h-bar. So there's no need to push for higher precision.

I think you are missing my point, although I seriously doubt your calculation makes sense (but don't care to debate it). There won't be anything in that very small region that came from any known region at any particular point in time. You won't have a sample.
 
