Does uncertainty principle imply non-conservation of energy?

In summary, Heisenberg's Energy-Time uncertainty inequality does not imply non-conservation of energy. Energy conservation is still fundamental in classical and relativistic physics, and it remains valid in quantum mechanics but with a probabilistic aspect. The uncertainty inequality simply means that in order to measure the energy with precision, one must interact with the system for a certain minimum time. This does not violate energy conservation.
  • #36
vanesch said:
But no need to argue here: it is a statement that can be falsified. Show me an apparatus that can make a measurement of the energy E of a system when it has access to the system during time T, and whose accuracy dE is better than that given by the uncertainty relationship: meaning: the apparatus will be able to distinguish with high certainty two different incoming states which differ by less than dE.

In order to follow up on this, I would like to propose the following.
Imagine a single particle in a pure momentum state (and, it being free, hence a pure energy state). Imagine that at t = 0, we do a "position measurement", where this position measurement can be very crude, or very accurate. If it is very crude, then we assume that the particle is still very nearly in its pure energy state (say, we know its position only to within 1 cm). If it is very accurate, then the particle is now in an almost pure position state.

The challenge is now: construct me a momentum measuring apparatus which will give me the momentum (or energy) within an accuracy dE, and where the measurement is completed after time T, such that T.dE << hbar. This precise measurement will then be used to find out whether I applied the "crude" or the "precise" position measurement, in order to establish the energy (or momentum) distribution of the state.
For the "crude" measurement, this should then be a highly peaked distribution, while for the precise measurement, this should be a very broad distribution.

My claim is that you cannot think up a setup that can do this if T.dE << hbar.

For BIG T, there's no problem of course: let the particle fly freely over 20 km, and measure its arrival time and position, and you then have the momentum at high precision. But it takes a long time to have your particle fly over 20 km. You'll have an initial uncertainty on the position too (depending on exactly how you want to do it, but in order to respect the momentum resolution for the crude case, this cannot be better than 1 cm).
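To make the trade-off concrete, here is a rough numerical sketch of this time-of-flight scheme. The particle parameters (a thermal neutron, 20 km flight path, 1 cm initial position uncertainty) are illustrative assumptions, not values from the thread. The deduced resolution satisfies dE.T = p.dx, which can never beat hbar, since roughly p >= dp >= hbar/dx:

```python
# Illustrative time-of-flight momentum/energy measurement (assumed numbers).
hbar = 1.0545718e-34        # J*s
m = 1.675e-27               # neutron mass, kg
v = 2200.0                  # thermal neutron speed, m/s
L = 20e3                    # flight path, m
dx = 1e-2                   # initial position uncertainty, m

T = L / v                   # time the apparatus needs: ~9 s
p = m * v
dp = m * dx / T             # momentum resolution of the time-of-flight estimate
dE = p * dp / m             # corresponding energy resolution
ratio = dE * T / hbar       # algebraically equal to p*dx/hbar

print(ratio)                # ~3.5e8 : nowhere near beating dE.T ~ hbar
```

Improving dx to sharpen dE simultaneously destroys the "crude vs. precise" distinction the challenge is meant to test, which is the point of the exercise.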

So go ahead and think up a measurement system that will give me E within dE after less than a time T, such that dE.T << hbar. I think it cannot be done. If it can, my statement is indeed wrong.

EDIT: you may want the particle to be charged, if this makes it any easier.
 
  • #37
vanesch said:
I have to agree with nrqed here, Marlon. Normally you have a delta function [tex] \delta^4(\Sigma p_i)[/tex] at each vertex. What simply changes between the external lines and the internal lines (= virtual particles) is that for the external lines we have [tex] p^2 = m^2 [/tex], while for the internal lines this is not true; it is what's usually called the "on-shell" condition for external particles, which does not hold for the internal lines.

I know. Again, I am not talking about what conditions need to be imposed or respected at the vertex points. There is no debate necessary there. My point is that the violation of energy conservation occurs if you compare external lines to internal lines. This is also what my example on beta decay was trying to illustrate.

But, as Pat pointed out, the on-shell condition [tex] p^2 = m^2[/tex] has nothing to do with the conservation of the 4-momentum at a vertex. It is not because a particle is off-shell that the 4 components of p are not conserved at the vertex.

But when did I say this is untrue? Again, I am not talking about vertices only, I am talking about the difference in energy distribution between internal and external lines. For the internal lines the on-mass-shell condition is no longer respected.


marlon
 
  • #38
marlon said:
I know. Again, I am not talking about what conditions need to be imposed or respected at the vertex points. There is no debate necessary there. My point is that the violation of energy conservation occurs if you compare external lines to internal lines. This is also what my example on beta decay was trying to illustrate.

Well, I don't understand what you want to say.

Consider all the ingoing lines, set In, and all the outgoing lines, set Out.

Now, I think we both agree that if all the lines in In are pure momentum states (or almost so, in wave packets, for the nitpickers), that In has a well-defined energy value. So does Out, and the point is (I think we both agree here) that these two energy values are equal, hence energy conservation between ingoing and outgoing particles. It is what physically matters.

However (although this is not really about physics, but about the calculational procedure), consider now that you draw an arbitrary cut across the diagram, which is now split into two disjoint pieces A and B, in such a way that all of the In lines are in A, and all of the Out lines are in B.
You cut through a set of virtual particle lines in doing so. Now, my claim is that if you add up all the energy values for all these virtual particles (by picking just any value for all the free loop momenta over which you have to integrate to calculate the Feynman graph), that this algebraic sum of energies "flowing out of A" (and hence "flowing into B") is exactly equal to the energy value of the In states (= energy of the Out states).
Same for 3-momentum.

I think this is what Pat wanted to say.
 
  • #39
vanesch said:
This view makes it impossible to analyse the precise physics of a measurement apparatus, because it is "brought in by hand" externally to the formalism.
A measurement, by my definition, is any actual event or state of affairs from which the truth value of a proposition "S has property P" or "O has value V" can be inferred. There is nothing in this definition that prevents you from analyzing the physics of a measurement apparatus to your heart's content. Nor can you question the occurrence or existence of actual events or states of affairs, since the quantum formalism presupposes them. A probability algorithm without events to which probabilities can be assigned is as useful as a third sleeve.
you've taken away the only tool that might help you analyse what exactly goes on during a measurement process (when the apparatus interacts with the system)
A lot of pseudo-questions and gratuitous answers are generated by trying to analyze what is beyond mathematical and/or rational analysis. Strange that the founders understood this, starting with Newton, who refused to frame hypotheses. Heisenberg insisted that "there is no description of what happens to the system between the initial observation and the next measurement... If we want to describe what happens in an atomic event, we have to realize that the word 'happens' can apply only to the observation, not to the state of affairs between two observations." Pauli stressed that "the appearance of a definite position x during an observation" is to be "regarded as a creation existing outside the laws of nature." Neither Bohr nor Heisenberg nor Pauli (and certainly not Schrödinger) spoke of collapses. They were invented by von Neumann (and Lüders) as a natural consequence of his transmogrification of a probability algorithm dependent on the times of the events to which the probabilities are assigned, into an evolving instantaneous state of affairs.
I can't imagine how far our understanding of the quantum world would have progressed by now if the amount of energy and ingenuity wasted on MWI and such had been invested in understanding why quantum mechanics is essentially a probability algorithm and why outcome-indicating events or states of affairs play a special role.
Show me an apparatus that can make a measurement of the energy E of a system when it has access to the system during time T, and whose accuracy dE is better than that given by the uncertainty relationship: meaning: the apparatus will be able to distinguish with high certainty two different incoming states which differ by less than dE.
You seem to be speaking of the relation between the energy spread (line width) of a quantum state and its lifetime. A shorter lifetime (and hence a sharper temporal localization of the quantum state) implies a larger energy spread. In other words, the shorter the time during which a state |a> changes into a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the energy spread of that state. This is the analogue of the following relation: the shorter the distance by which a state |a> has to be translated until it becomes a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the momentum spread of that state. The latter relation is not the uncertainty relation that limits the simultaneous measurement of position and momentum. This uncertainty relation has no temporal analogue. There is no such uncertainty relation for energy and time because there is no time operator and hence no commutator for energy and time.
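The linewidth-lifetime relation described here is easy to check numerically. The sketch below (units with hbar = 1; the Gaussian energy spread is an arbitrary illustrative value) verifies that for a Gaussian energy distribution the overlap |<psi(0)|psi(t)>| decays on the timescale hbar/sigma_E:

```python
import numpy as np

hbar = 1.0
sigma_E = 0.5                    # energy spread (assumed, arbitrary units)
E = np.linspace(-10, 10, 4001)   # energy grid, +-20 sigma
w = np.exp(-(E**2) / (2 * sigma_E**2))
w /= w.sum()                     # normalized energy distribution |c(E)|^2

def overlap(t):
    # |<psi(0)|psi(t)>| = |sum_E |c(E)|^2 exp(-i E t / hbar)|
    return abs(np.sum(w * np.exp(-1j * E * t / hbar)))

t_char = hbar / sigma_E          # predicted decay time of the overlap
print(overlap(0.0))              # 1.0
print(overlap(t_char))           # ~exp(-1/2) ~ 0.607 : sigma_E * t_char = hbar
```

A broader spectrum (larger sigma_E) makes the overlap decay proportionally faster, which is exactly the "inner-time" relation at issue.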
Needless to say, when I use the "standard" language (which is as convenient as it is misleading) and speak of a state changing from |a> into |b>, I mean a probability algorithm that depends on the time of a measurement to the possible outcomes of which it assigns probabilities. Nothing in the formalism of (non-relativistic) quantum mechanics rules out that this time is an instant. What is ruled out is the possibility of distinguishing between states differing by a sufficiently small translation in time and/or space.
(Which, by the way, is why the spatiotemporal differentiation of the world is incomplete - it doesn’t go all the way down. Which, by the way, is of utmost importance for understanding the consistency of a fundamental physical theory that is nothing but a probability algorithm, with the existence of the events presupposed by this algorithm.)
...you do not necessarily end up with MWI (of course, if you try to be logically consistent, you would).
Claiming logical consistency for MWI is nothing short of hilarious.
...you will have to describe this interaction quantum mechanically.
Trouble is that the quantum-mechanical description (by your definition) is incomplete. It leads to correlations without ever producing correlata (measurement outcomes). Logically consistent, my foot!
 
  • #40
vanesch said:
Well, I don't understand what you want to say.

Consider all the ingoing lines, set In, and all the outgoing lines, set Out.

Now, I think we both agree that if all the lines in In are pure momentum states (or almost so, in wave packets, for the nitpickers), that In has a well-defined energy value. So does Out, and the point is (I think we both agree here) that these two energy values are equal, hence energy conservation between ingoing and outgoing particles. It is what physically matters.

I agree completely. I always stated that energy conservation is defined based on the initial and final state.


However (although this is not really about physics, but about the calculational procedure), consider now that you draw an arbitrary cut across the diagram, which is now split into two disjoint pieces A and B, in such a way that all of the In lines are in A, and all of the Out lines are in B.
You cut through a set of virtual particle lines in doing so. Now, my claim is that if you add up all the energy values for all these virtual particles (by picking just any value for all the free loop momenta over which you have to integrate to calculate the Feynman graph), that this algebraic sum of energies "flowing out of A" (and hence "flowing into B") is exactly equal to the energy value of the In states (= energy of the Out states).
Same for 3-momentum.

I think this is what Pat wanted to say.
Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B. So if you compare between this virtual particle and, let's say, point A: energy conservation is not respected. This is what happens in beta decay.

marlon
 
  • #41
Sometimes it is very difficult to figure out what is being argued here. So I will ask this question:

Are people arguing that the uncertainty relation between [itex]\Delta(E)[/itex] and [itex]\Delta(t)[/itex] doesn't exist, or invalid, or should be colored green?

Zz.
 
  • #42
koantum said:
A measurement, by my definition, is any actual event or state of affairs from which the truth value of a proposition "S has property P" or "O has value V" can be inferred.

Well, the statement you're disputing is the following:
a measurement apparatus to which a system is presented starting at t = 0, and which has to provide an answer at time t = T, cannot give you a more accurate value of the energy than dE, where dE.T ~ hbar.

I ask you for a counter example to disprove the statement.

I can't imagine how far our understanding of the quantum world would have progressed by now if the amount of energy and ingenuity wasted on MWI and such had been invested in understanding why quantum mechanics is essentially a probability algorithm and why outcome-indicating events or states of affairs play a special role.

Well, with your superior understanding, I'm sure you're going to give me a superior counterexample.

You seem to be speaking of the relation between the energy spread (line width) of a quantum state and its lifetime.

No, I'm not. I'm talking about the time that a measurement apparatus has access to the system under study, and its relationship to the potential accuracy of the energy measurement that can result from it. The state of the system can be as old, or as young, as you want, but the measurement apparatus does not yet have access to it.


A shorter lifetime (and hence a sharper temporal localization of the quantum state) implies a larger energy spread. In other words, the shorter the time during which a state |a> changes into a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the energy spread of that state. This is the analogue of the following relation: the shorter the distance by which a state |a> has to be translated until it becomes a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the momentum spread of that state. The latter relation is not the uncertainty relation that limits the simultaneous measurement of position and momentum. This uncertainty relation has no temporal analogue. There is no such uncertainty relation for energy and time because there is no time operator and hence no commutator for energy and time.

I know all that, but it is not what I'm talking about. I'm talking about the thing you disputed. You said (using the quote of I don't remember who) that it is a FALSE statement that a measurement apparatus which completes its measurement in a time T cannot give a better resolution in its determined energy value than dE. The negation of that statement (which hence you hold for true) is:
there exists a measurement apparatus which completes its measurement within a time T, and which has an intrinsic energy resolution better than dE, right? So I now ask you to show me such an apparatus.
This is NOT something about the state of the system under study. I'm going to offer to that measurement apparatus a pure energy state (or at least one which is highly accurate), and then I'm going to offer to that apparatus one which is badly defined (for instance, because it follows immediately after a position measurement). And you will have to show me that the SAME apparatus will deduce, in the first case, a sharp peak, and in the second case, a broad spread. Because that's what an accurate apparatus is supposed to give you.
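As a minimal numerical check of the timescale behind this challenge, consider how long it takes two states differing in energy by dE to become distinguishable at all. For an equal superposition of two energy eigenstates (units with hbar = 1; the splitting dE is an arbitrary illustrative value), the survival amplitude first vanishes at T ~ hbar/dE:

```python
import numpy as np

hbar = 1.0
dE = 0.2   # energy splitting (assumed, arbitrary units)

def survival(t):
    # |<psi(0)|psi(t)>| for psi = (|E1> + |E2>)/sqrt(2), with E2 - E1 = dE
    return abs(np.cos(dE * t / (2 * hbar)))

t_perp = np.pi * hbar / dE   # earliest time the state becomes orthogonal
print(survival(0.0))          # 1.0
print(survival(t_perp))       # ~0 : states separated by dE need T ~ hbar/dE
```

No apparatus output can depend on which state was presented before the states themselves have evolved measurably apart, so t_perp.dE = pi*hbar sets the scale of the trade-off under discussion.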

Needless to say, when I use the "standard" language (which is as convenient as it is misleading) and speak of a state changing from |a> into |b>, I mean a probability algorithm that depends on the time of a measurement to the possible outcomes of which it assigns probabilities. Nothing in the formalism of (non-relativistic) quantum mechanics rules out that this time is an instant.

Well, SHOW ME SUCH AN APPARATUS, and tell me how it works.
And show me that I can present, to this apparatus, the two states I was talking about, and that it produces the peak in the first case, and the broad line in the second, and that I obtain this result within an arbitrary short time interval which is in contradiction with dE.T ~ hbar.

Claiming logical consistency for MWI is nothing short of hilarious.

I would advise you to keep your condescending remarks to yourself. This doesn't advance any discussion. And I wasn't talking about MWI here. My starting point was that the relationship between the time needed to obtain a definite measurement outcome and the energy resolution of the apparatus, when treating the system+apparatus quantum mechanically, is limited by the extra interaction term in the Hamiltonian which describes the interaction between the system and the apparatus. If this term is big, the evolution can be fast, but the error introduced by it is large too. If this term is small (weak interaction), then the error introduced by it is small, but it takes more time to complete the evolution.
This point is OBVIOUS starting from MWI, but is the same whenever you treat (no matter what interpretation) the apparatus + system quantum mechanically.

It is of course correct, in standard quantum theory, that you could conceive, within one picosecond, 10^30 alternations between a position measurement, a momentum measurement, and an energy measurement, each with mindboggling accuracy. But only with "magical" deus ex machina measurement apparatus.
A real apparatus has to interact somehow physically with the system, and I'm asking you to provide me with an example of such an apparatus, as a thought experiment.
 
  • #43
marlon said:
Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B. So if you compare between this virtual particle and, let's say, point A: energy conservation is not respected. This is what happens in beta decay.

I don't understand this. The TOTAL energy flowing through all the cut virtual particles from the A part to the B part will be exactly equal to the energy of the in particles. This is guaranteed because, at each vertex, there is conservation of energy, so through a complete cut of the diagram into two pieces can flow only the correct amount of energy. Of course, this says nothing about individual lines: one virtual line can carry more energy, as long as this is compensated by another one.
This is like an electrical circuit: if you cut it in two, then the current flowing through all of the cut wires, algebraically summed, will equal the current flowing into A (and flowing out of B), simply because the currents are conserved at each node point (Kirchhoff's law).
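The circuit analogy can be made concrete in a few lines of code. The toy graph below (the flow values are illustrative, not taken from any particular diagram) has conservation at every internal vertex, and any cut separating the "In" side from the "Out" side carries exactly the incoming total, even though the individual lines split it unevenly:

```python
# Each edge carries a conserved "energy flow"; conservation holds at every
# internal vertex, as in Kirchhoff's law.  Edges: (from, to, flow) -- toy values.
edges = [
    ("In", "A", 3.0),
    ("A", "B", 1.0),   # one internal line can carry any share...
    ("A", "C", 2.0),   # ...as long as the vertex sums balance
    ("B", "D", 1.0),
    ("C", "D", 2.0),
    ("D", "Out", 3.0),
]

def flow_across(part_a):
    """Net flow from the node set part_a to its complement."""
    total = 0.0
    for u, v, f in edges:
        if u in part_a and v not in part_a:
            total += f
        elif v in part_a and u not in part_a:
            total -= f
    return total

# Any cut separating "In" from "Out" carries the full incoming flow:
print(flow_across({"In", "A"}))          # 3.0
print(flow_across({"In", "A", "B"}))     # 3.0 (cut through different lines)
```

Changing which lines the cut crosses never changes the total, which is the claim about summing the energies of the cut virtual lines.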
 
  • #44
marlon said:
I agree completely. I always stated that energy conservation is defined based on the initial and final state.
What Patrick VanEsch and I are trying to argue is that even in any intermediate state, four-momentum is conserved, which is ensured by conservation of four-momenta at the vertices. There is no mathematical way to have the four-momentum not conserved in all intermediate states if it is conserved at the vertices!

It's like an electrical circuit, as Patrick (the other Patrick) pointed out. If you have 3A flowing into two wires in parallel, because the current is conserved at the vertex (the node), the total current in the intermediate state (the two wires in parallel) MUST be equal to the total initial current. Now, if you look at *only one* of two wires in parallel, let's say one which contains 1 A and you claim that current is not conserved, then it is because you are not including all the wires in the intermediate state (i.e. all the particles in the intermediate state of a Feynman diagram). That's all there is to it!

Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B.
I am not sure about the meaning of this last line. Since you agree that the energy is conserved between initial and final states, shouldn't the energy difference between A and B necessarily be zero?

So if you compare between this virtual particle and, let's say, point A : energy conservation is not respected. This is what happens in beta decay.

marlon

Can we look at an explicit Feynman diagram, please? If you want to focus on beta decay, then tell us exactly what diagram you have in mind. I am assuming you have in mind the *tree* diagram? With a W exchange?
At *all* steps of the diagram the energy is conserved.

Of course, the W is off-shell, but that has nothing to do with energy not being conserved, as we are trying to emphasize.

An even cleaner example is electron-positron scattering in the s channel at tree level. The photon is off-shell, but its energy *is* equal to the initial combined energy of the electron-positron pair. Do you dispute that? If yes, then we should simply write down the expression instead of talking "in the air".

Regards

The other Patrick
 
  • #45
Once again: if the Hamiltonian is time-independent, as it is, for example, in QED, then energy is strictly conserved, for a very short time or a very long time, or any time -- we are talking Noether's theorem. And, yes, there are statements that can be made about the evolution of individual states within a state described by a density matrix.

There is, of course, an energy-time uncertainty principle in classical E&M, something to do with basic properties of Fourier representations. As far as I know, energy is still conserved in E&M. And sometimes it's useful to remember the distinction between open and closed systems when probing the mysteries of any HUP, or close cousin thereof.

However, the name of the game is: we are talking exact solutions; that is, in field theory, all appropriate Feynman diagrams. Leave some out, and the approximate solution will not, repeat, not conserve energy -- that is, it is a faulty, inexact representation of the system at hand. But with a finagle here, a finesse there, we can overcome this nominally serious difficulty -- this has been discussed ad nauseam for 70 years or so. Any (well, almost any) discussion of time-independent perturbation theory or of scattering theory covers this; best to start with non-relativistic theory before jumping into LSZ, Wightman, and so on. In the past, I've mentioned the work of Wigner and Weisskopf, and of Breit and Wigner, on resonances. They worked out many of the issues discussed here quite a long time ago, and their work still stands as impressive, correct, and useful. Probably a third or more of Cohen-Tannoudji et al's Atom-Photon Interactions: Basic Processes and Applications is devoted to the issues raised in this thread. So-called dispersion or S-matrix theory dealt, and deals, with the issues of this thread as well -- see Weinberg's Field Theory, Vol. I, for an introduction.

There's no mystery here:

A finite approximation to an infinite series is, generally, not a terribly good approximation. The symmetries of the finite approximation may well not be shared by the exact series sum. So, what's the problem?

Regards,
Reilly
 
  • #46
Hi vanesch, let me begin by noting a few points. They may have been raised already in this thread, but if so they bear repetition.

As said, there is no commutation relation for energy and time operators because there is no time operator.

All measurements are ultimately position measurements, so when we talk about measurements of momentum or energy, we have to think up, if not actually implement, a measurement procedure, which won't be unique.

Experimentalists differ from theorists, who frequently proceed on the assumption that every possible ONB corresponds to an "in principle" implementable measurement. In this they manifest their boundless faith in the ingenuity of experimentalists, hats off. (I was going to write, heads off.) In non-relativistic quantum mechanics, they can get away with this assumption, which has such consequences as: take a particle, make three measurements in a row, the first and third with detectors in position space, the second with detectors in momentum space, and get virtually instantaneous propagation over any distance.

The analogy between the quantum-mechanical Psi(q,t) and its (spatial) Fourier transform Phi(p,t) associated with a particle on the one hand and, on the other, the proper wave function f(x,t) and its transform g(k,t), which are used in the study of classical signals, is superficial and misleading. What comes out if you pop x and t into f is a physical quantity associated with every point x and every instant t. What you insert into Psi(q,t) and what comes out is totally different. q is not a point of space but an eigenvalue of the position operator, which serves to calculate probability distributions over possible outcomes of position measurements. t is not an instant of time but the time of a measurement. And Psi is not a physical quantity but an abstract expression whose absolute square, integrated over a region R, gives the probability of finding the particle in R if the appropriate measurement is made at the time t. Psi(q,t) is not a physical quantity associated with every t, nor does it serve to (statistically) answer the question: when is the particle at a given point? It concerns the question: where is the particle at a given time? (There is an extensive literature on the time-of-arrival problem, but this is essentially a discussion of how to realize a measurement of the time of an event as a position measurement, and of finding a suitable operator for this measurement.)

Since there are no detectors in momentum space, the closest mock-up of a bona fide quantum-mechanical momentum measurement is to make position measurements at two given times. And whenever position measurements are made (which is whenever measurements are made), the uncertainty relation for q and p must be taken into account. A sharper measurement of the distance between the two positions implies a fuzzier relative momentum. If, using the (relativistic or non-relativistic) relation between the energy and the momentum of a freely propagating particle, we turn this into an energy measurement, then that uncertainty relation turns into one between energy and position rather than one between energy and time.
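This two-position mock-up of a momentum measurement is easy to simulate classically. In the sketch below the mass, true momentum, flight time, and detector resolution are all illustrative assumptions; it shows the estimator spread scaling as sqrt(2)*m*dx/T, tying the momentum (and hence energy) resolution to position resolution and flight time rather than to any energy-time commutator:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p_true, T = 1.0, 5.0, 2.0      # mass, true momentum, time between measurements
dx = 0.01                          # position resolution of each detector (assumed)
n = 100_000                        # Monte Carlo trials

x1 = rng.normal(0.0, dx, n)                  # first position measurement
x2 = rng.normal(p_true / m * T, dx, n)       # second, after free flight
p_est = m * (x2 - x1) / T                    # inferred momentum per trial

print(np.mean(p_est))              # ~5.0 (unbiased)
print(np.std(p_est))               # ~sqrt(2)*m*dx/T ~ 0.00707
```

Sharpening dx or lengthening T sharpens the inferred momentum, exactly the trade-off described in the surrounding text.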

The uncertainty between q and p, combined with relativistic covariance, appears to require a corresponding uncertainty between t and E. Such an uncertainty relation exists: as a sharper probability distribution over the possible outcomes of a position measurement implies a fuzzier probability distribution over the possible outcomes of a momentum measurement, so a sharper probability distribution over the possible outcomes of a measurement of the time of an event (such as the time at which a clock's second hand points upward) implies a fuzzier probability distribution over the possible outcomes of a measurement of the hand's position. There is a fairly extensive literature on quantum clocks, the common denominator of which appears to be that a sharper time indicator implies a fuzzier momentum or angular momentum, which in turn implies a fuzzier energy. So it's again essentially position and energy rather than time and energy. And it's primarily about time measurements, which are comparatively easy to mock up, rather than about energy measurements, which appear to be the most problematic. Einstein's famous photon-box argument invokes the relativistic equivalence of energy and mass to measure the photon's energy via a weight loss of the box, and Bohr's famous rebuttal even invokes the general relativistic effect of gravitational fields on the rate at which clocks "tick". This is heavy artillery to resolve a purely quantum-mechanical issue.

So at the end of the day, the defeat of Einstein and others discourages me from spending time trying to invent a realistic measurement scheme that is not constrained by some ET uncertainty. However, while the PQ uncertainty is a fundamental feature of pure, non-relativistic quantum mechanics, the ET uncertainty is not, and such a measurement scheme was found by Aharonov and Bohm, whose names you will surely recollect. I quote the Summary and Conclusion of their article "Time in the quantum theory and the uncertainty relation for time and energy", which is reprinted in the volume Quantum Theory and Measurement edited by Wheeler and Zurek.
There has been an erroneous interpretation of uncertainty relations of energy and time. It is commonly realized, of course, that the "inner" times of the observed system (defined as, for example, by Mandelstamm and Tamm) do obey an uncertainty relation [tex]\Delta E\Delta t\geq h[/tex] where [tex]\Delta E[/tex] is the uncertainty of the energy of the system, and [tex]\Delta t[/tex] is, in effect, a lifetime of states in that system. It goes without saying that whenever the energy of any system is measured, these "inner" times must become uncertain in accordance with the above relation, and that this uncertainty will follow in any treatment of the measurement process. In addition, however, there has been a widespread impression that there is a further uncertainty relation between the duration of measurement and the energy transfer to the observed system. Since this cannot be deduced directly from the operators of the observed system and their statistical fluctuation, it was regarded as an additional principle that had to be postulated independently and justified by suitable illustrative examples. As was shown by us, however, this procedure is not consistent with the general principles of the quantum theory, and its justification was based on examples that are not general enough.
Our conclusion is then that there are no limitations on measurability which are not obtainable from the mathematical formalism by considering the appropriate operators and their statistical fluctuation; and as a special case we see that energy can be measured reproducibly in an arbitrarily short time.​
Regards - koantum
 
  • #47
koantum said:
Hi vanesch, let me begin by noting a few points. They may have been raised already in this thread, but if so they bear repetition.

As said, there is no commutation relation for energy and time operators because there is no time operator.

Yes, I know that, and as I said, this is not the issue.

All measurements are ultimately position measurements, so when we talk about measurements of momentum or energy, we have to think up, if not actually implement, a measurement procedure, which won't be unique.

Indeed, you can start with this. I'm not sure that all measurements are ultimately position measurements in principle, in fact. But we can surely take it as a working hypothesis. And this already implies something important: it means that there is no way to consider, say, the momentum operator as a Hermitian "measurement operator" in quantum theory. If you want to be precise, you'll have to continue your unitary evolution until you arrive at a position measurement. Now, of course, if there's enough decoherence, then you can take the shortcut and avoid this last step, as a good approximation. But if you want to look into the fine details of how your "desired quantity" transforms into a position measurement (or 2 position measurements), well, then you will have to work this unitary evolution out in detail.

And now we come to the main point: IF you consider that you can only do position measurements, and you want to set up a (combination of) position measurements such that you want to turn this into an energy measurement AND you require the entire measurement not to last longer than a time T, THEN this implies that you have an inaccuracy of dE on your deduced energy measurement.

EVEN if you can consider position measurements "instantaneous".



Experimentalists differ from theorists, who frequently proceed on the assumption that every possible ONB corresponds to an "in principle" implementable measurement. In this they manifest their boundless faith in the ingenuity of experimentalists, hats off. (I was going to write, heads off.)

:rofl: :rofl:

Well, you can - as you do - turn a head-less theorist into a reasonable experimentalist, by claiming that the only measurements you can REALLY perform are position measurements, and that you now have to DEDUCE other quantities by thinking up combinations of position measurements.


The analogy between the quantum-mechanical Psi(q,t) and its (spatial) Fourier transform Phi(p,t) associated with a particle on the one hand and, on the other, the proper wave function f(x,t) and its transform g(k,t), which are used in the study of classical signals, is superficial and misleading. What comes out if you pop x and t into f is a physical quantity associated with every point x and every instant t. What you insert into Psi(q,t) and what comes out is totally different. q is not a point of space but an eigenvalue of the position operator, which serves to calculate probability distributions over possible outcomes of position measurements. t is not an instant of time but the time of a measurement. And Psi is not a physical quantity but an abstract expression that, when square-integrated over a region R, gives the probability of finding the particle in R if the appropriate measurement is made at the time t. Psi(q,t) is not a physical quantity associated with every t, nor does it serve to (statistically) answer the question: when is the particle at a given point? It concerns the question: where is the particle at a given time? (There is an extensive literature on the time-of-arrival problem, but this is essentially a discussion of how to realize a measurement of the time of an event as a position measurement, and of finding a suitable operator for this measurement.)

I agree with all this, but it is beside the point. Well, I don't fully agree with it, in that it makes statements which do not necessarily have to be true (they make statements about the ontological status of certain objects, where I prefer to assign different truth values), but I agree that this is a position one can take. But all this has nothing to do with what I am claiming, which CAN be framed in this view. The claim is that if your entire measurement scheme is finished in a time T, then the energy value you can deduce from it has an error of dE.

Since there are no detectors in momentum space, the closest mock-up of a bona fide quantum-mechanical momentum measurement is to make position measurements at two given times. And whenever position measurements are made (which is whenever measurements are made), the uncertainty relation for q and p must be taken into account. A sharper measurement of the distance between the two positions implies a fuzzier relative momentum. If, using the (relativistic or non-relativistic) relation between the energy and the momentum of a freely propagating particle, we turn this into an energy measurement, then that uncertainty relation turns into one between energy and position rather than one between energy and time.

Yes, but it will turn out that this entire scheme can only be COMPLETED within a time T (that is, from the moment your system is "available" to the moment where you have your result in your hands) if the accuracy on the energy is smaller than dE. That's all I'm saying.

The uncertainty between q and p, combined with relativistic covariance, appears to require a corresponding uncertainty between t and E. Such an uncertainty relation exists: as a sharper probability distribution over the possible outcomes of a position measurement implies a fuzzier probability distribution over the possible outcomes of a momentum measurement, so a sharper probability distribution over the possible outcomes of a measurement of the time of an event (such as the time at which a clock's second hand points upward) implies a fuzzier probability distribution over the possible outcomes of a measurement of the hand's position. There is a fairly extensive literature on quantum clocks, the common denominator of which appears to be that a sharper time indicator implies a fuzzier momentum or angular momentum, which in turn implies a fuzzier energy.

It's not so much the sharpness of the time measurement that I'm talking about, but an upper bound on the total duration of the measurement scheme which allows you to deduce the energy.

For instance, you will not be able, no matter what you do, to measure the energy of, say, a cold neutron in one nanosecond, with a precision better than one nano-electronvolt.
Although the quote of the article seems to contradict this:

and as a special case we see that energy can be measured reproducibly in an arbitrarily short time.

I'd like to see the setup of the measurement scheme that does this. I am going to look at it; the library of my institute seems to have the book available...
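The cold-neutron figure above is easy to check (a back-of-the-envelope sketch; the one-nanosecond window is the only input):

```python
hbar_eVs = 6.582119569e-16   # hbar in eV*s
T = 1e-9                     # the one-nanosecond measurement window
dE_eV = hbar_eVs / T
print(dE_eV)                 # ~6.6e-7 eV, i.e. about 660 nano-electronvolts
```

So a 1 neV determination in 1 ns would beat the hbar/T bound by nearly three orders of magnitude, which is exactly what the statement above rules out.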
 
  • #48
vanesch said:
I'd like to see the setup of the measurement scheme that does this. I am going to look at it; the library of my institute seems to have the book available...

Ok, I looked at the paper. It is certainly a clever example! I'll concentrate on the description around page 723.

It seems to me, though, that although the interaction time with the system is indeed arbitrarily short, the result is not available in this time, because what happened is that there was a transfer of energy from the particle to the first (and the second) condenser, and now we have displaced the problem to the energy measurement of the condensers. So in order to measure THIS energy accurately enough, we will again need a time T (or an ingenious scheme which will, in its turn, transfer the knowledge to a further system).

But I admit that I learned something here, so thanks.
What I learned (correct me if I'm wrong) is that the system need not remain "available" during the entire measurement time T (the time between the "availability" of the system state and the "presentation" of the result, which here corresponds to the completion of the energy measurement of the condensers). In MWI speak (even though you do not like it), the final evolution into clearly distinguished pointer states takes time T, but the interaction between system and apparatus can stop before that and only needs a time delta-t.

The key seems to be equation (25), which we can in fact augment:

[tex]H = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + y\,p_x\,g(t)[/tex]

and the usual justification of the energy-accuracy versus measurement-time uncertainty relation goes by saying that the perturbation introduced by the third term gives you dE, while its duration is T. What is done here instead is to introduce a second interaction term:

[tex]H = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + y\,p_x\,g(t) - y\,p_x\,g(t-\Delta T)[/tex]

so that the perturbative effect of the first is compensated by the corrective action of the second.
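A toy illustration of this compensation (mine, not from the paper): because the Hamiltonian is quadratic, the Heisenberg equations of motion coincide with the classical ones, and because the interaction term conserves px itself, an impulsive kick of strength k followed by an equal and opposite kick a time DT later restores the unperturbed energy exactly:

```python
# arbitrary test values; g(t) idealized as an instantaneous kick of strength k
m, k, DT = 1.0, 0.7, 5.0
px, py, y = 2.0, 0.3, 1.0
E0 = (px**2 + py**2) / (2 * m)    # unperturbed energy before the pulses

py -= px * k          # first kick:  dpy = -px*k  (px is conserved by y*px)
y += (py / m) * DT    # free flight between the two pulses
py += px * k          # second kick: +px*k -- exact compensation, since px
                      # has not changed in the meantime
E1 = (px**2 + py**2) / (2 * m)
print(abs(E1 - E0))   # zero up to rounding: the perturbation's effect cancels
```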
 
  • #49
Regarding dE*dt - this expression does not mean that it takes a time dt to measure energy to a precision of dE. Counterexamples can be found in the literature. One of them comes to mind:

Y. Aharonov and D. Bohm, "Time in the Quantum Theory and the Uncertainty Relation for Time and Energy", Phys. Rev. 122(8), 1649–1658 (June 1, 1961). The abstract reads:
Because time does not appear in the Schrödinger equation as an operator but only as a parameter, the time-energy uncertainty relation must be formulated in a special way. This problem has in fact been studied by many authors and we give a summary of their treatments. We then criticize the main conclusion of these treatments; viz., that in a measurement of energy carried out in a time interval, dt, there must be a minimum uncertainty in the transfer of energy to the observed system, given by d(E' - E) > h/dt. We show that this conclusion is erroneous in two respects. First, it is not consistent with the general principles of the quantum theory, which require that all uncertainty relations be expressible in terms of the mathematical formalism, i.e., by means of operators, wave functions, etc. Secondly, the examples of the measurement processes that were used to derive the above uncertainty relation are not general enough. We then develop a systematic presentation of our point of view, with regard to the role of time in quantum theory, and give a concrete example of a measurement process not satisfying the above uncertainty relation.

Pete
 
  • #50
Yes, this is exactly the paper that koantum talked about and on which I commented. It is apparently true that the interaction between the measurement apparatus and the system can be arbitrarily short; I didn't realize this. But this was not the claim. The claim was that it takes a time dT to have the result available.
In the Bohm example, the energy equivalent of the px momentum (and eventually the py momentum) has been transferred to the condenser state in a very short time...
But now we must still measure the energy of the condenser!
While it is true (and I was unaware of this) that you can now consider the condensers as independent from the system under measurement, you will now have a new energy measurement to perform, so the problem starts all over again.
If you accept to talk about the quantum states of the condensers: after the interaction with the particle, they are now entangled with the particle states, but their own states are not yet sufficiently separated for them to be "pointer states". They will need to evolve for at least a time dT before they are "grossly orthogonal" for different energy input states which differ by an amount of the order of dE.

Nevertheless (and that's what I learned), the advantage of this is that you can make energy measurements in rapid succession on the same system.

However, you do not, for instance, have the result available in a very short time (so that you could, say, make a decision based upon it to act on the system).
 
  • #51
Hi Vanesch, in my previous post I just collected my thoughts on the subject; I didn’t mean they are all equally relevant to your challenge, though they certainly have a bearing on it.

I just had a look at what must be your bible (I'm kidding) - The Many-Worlds Interpretation of Quantum Mechanics edited by DeWitt and Graham. I actually own the book, which isn't half bad. It contains an article by DeWitt, "The many-universes interpretation of quantum mechanics", which contains a discussion of a system-apparatus interaction in the Heisenberg picture. DeWitt shows that the apparatus operator after the measurement depends on the undisturbed system operator (which was measured). Since in the Heisenberg picture the system operator depends on time (as a c-number parameter), this means that the system's energy (like any other system observable) can be measured with unlimited precision at a precise time!

There is an ambiguity here which is often overlooked: "time of measurement" can mean (i) time at which the outcome becomes "available" or (ii) time at which the system observable possessed the indicated value. We seem to agree now that what I just said is true of (ii). You still deny that it is true of (i).
vanesch said:
It seems to me, though, that although the interaction time with the system is indeed arbitrarily short, the result is not available in this time, because what happened is that there was a transfer of energy from the particle to the first (and the second) condenser, and now we have displaced the problem to the energy measurement of the condensers. So in order to measure THIS energy accurately enough, we will again need a time T (or an ingenious scheme which will, in its turn, transfer the knowledge to a further system)... What I learned... is that the system need not remain "available" during the entire measurement time T (the time between the "availability" of the system state, and the "presentation" of the result, which corresponds here with the completion of the energy measurement of the condensers).
What do you mean by "available", "transfer of knowledge", and "presentation of the result"? You are playing the old game of the "shifty cut" (as Bell called it), agreeing that the buck must stop somewhere (such as when the outcome is "available" or "presented"). Available to whom? Presented to whom? I'm echoing Bell's famous question: whose knowledge? Those who use evolution speak (which I try to avoid) call this buck stopper "collapse" (I wouldn’t know of what) or "world branching" (which isn't any better), and they either endorse the slogan "quantum states are states of knowledge" or invoke observers to account for the real or apparent collapses or for the real or apparent world branchings.
In MWI speak... the final evolution into clearly distinguished pointer states...
This is not specifically MWI speak. So when are pointer states clearly distinguished? When they are distinguishable according to the neurobiology of human, or primate, or mammalian, or vertebrate... perception? If this is your buck stopper, then you are absolutely right because perception (even human) is a notoriously slow process, as psychologists and neurobiologists will confirm.
 
Last edited:
  • #52
koantum said:
There is an ambiguity here which is often overlooked: "time of measurement" can mean (i) time at which the outcome becomes "available" or (ii) time at which the system observable possessed the indicated value. We seem to agree now that what I just said is true of (ii). You still deny that it is true of (i).

Well, yes, but I admit having been puzzled by the Bohm example, and that it indicated something important. I think that there is a flaw in the reasoning which tells us that dT.dE > hbar because of the interaction term. It is true that the MAGNITUDE of this interaction term is correctly described that way, but the magnitude is too coarse a measure to identify with the measurement error, which can indeed be made smaller by introducing a reverse evolution. So it is not because you introduce an interaction term with magnitude > dE that you have introduced an irreducible error on the eventual energy measurement (as is often claimed, and as I believed). So this kills the claimed limitation in sense (ii), and this is what I learned.

However, (i) finds its origin elsewhere. It follows from the claim that the expectation values of all quantities of states which differ by dE only start to show significantly different time evolutions after a time dt such that dt.dE>hbar.

And inasmuch as, following Bohm, a very fast interaction can now transfer the energy value to another system, this other system's state will now have to evolve such that whatever quantity is eventually measured (position, for instance) is clearly different for E and for E+dE (first of the system, and then of the "helper system", such as the condensers).
Now, to clearly distinguish between a system in a state E and a state E+dE, one must measure something whose expectation value is different, meaning that we will have to let it evolve for at least the time dt needed for the expectation values of these states to become clearly distinguishable.
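A minimal two-level sketch of this last point (my own toy numbers): for an equal superposition of two energy eigenstates split by dE, any off-diagonal expectation value oscillates as cos(dE*t/hbar), so it takes a time of order hbar/dE before the states E and E+dE lead to appreciably different readings:

```python
import numpy as np

hbar = 1.0                       # work in units where hbar = 1
dE = 0.1                         # energy splitting between the two states
t = np.linspace(0, 100, 1001)

# for |psi> = (|E> + |E+dE>)/sqrt(2), an off-diagonal observable evolves as:
x_exp = np.cos(dE * t / hbar)

# first time at which the expectation value has moved appreciably (below 1/2):
t_dist = t[np.argmax(x_exp < 0.5)]
print(t_dist * dE / hbar)        # of order 1: t_dist ~ hbar/dE
```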

This is not specifically MWI speak. So when are pointer states clearly distinguished?

When their inner product is negligible.
 
  • #53
vanesch said:
When their inner product is negligible.
I expected this answer. So what are the conditions of negligibility?
 
  • #54
vanesch said:
However (although this is not really about physics, but about the calculational procedure), consider now that you draw an arbitrary cut across the diagram, which is now split into two disjoint pieces A and B, in such a way that all of the In lines are in A, and all of the Out lines are in B.
You cut through a set of virtual particle lines in doing so. Now, my claim is that if you add up all the energy values for all these virtual particles (picking any value for the free loop momenta over which you have to integrate to calculate the Feynman graph), this algebraic sum of energies "flowing out of A" (and hence "flowing into B") is exactly equal to the energy of the In states (= energy of the Out states).

Well, to make my point more clear, let's look at the Feynman diagram of neutron beta decay: a neutron (two down quarks and one up) disappears and is replaced by a proton (two up quarks and one down), an electron, and an anti-electron neutrino.

The Standard Model will tell us that one down quark disappears in this process while an up quark and a VIRTUAL W boson are produced. The W boson then decays to produce an electron and an anti-electron-type neutrino. So this is the Feynman diagram I want to talk about.

Now, let's look at the virtual W boson. The mass-energy difference between a neutron and a proton is very much less than the mass-energy of a W boson. This would imply that the W boson cannot exist. Of course, it does... So what do we do with this apparent contradiction? Well, the answer is in the fact that the W boson is VIRTUAL. Now, if you compare the energy of the W boson to that of the initial and final states (neutron and proton), energy is NOT conserved. This is what I wanted to say. Of course, when comparing initial and final energies, there is no problem. The violation only appears when we compare initial states with intermediate states, or intermediate states with final states. That's what I meant by saying "in between vertex points".

regards
marlon
 
  • #55
marlon said:
Now, let's look at the virtual W boson. The mass-energy difference between a neutron and a proton is very much less than the mass-energy of a W boson. This would imply that the W boson cannot exist. Of course, it does... So what do we do with this apparent contradiction? Well, the answer is in the fact that the W boson is VIRTUAL. Now, if you compare the energy of the W boson to that of the initial and final states (neutron and proton), energy is NOT conserved. This is what I wanted to say.

But the "mass-energy" of your *virtual* W boson is NOT c^2 times the mass listed in the PDG files for the (real) W boson. That's exactly what it means to be off shell, that E^2 - p^2 must NOT be equal to m^2 (for c=1).
If, in your diagram, you calculate the 4th component of the energy-momentum vector (E) of your virtual W boson, you will find that it is exactly equal to the sum of the energies of your up and down quarks. As such, the total energy of this virtual W boson is MUCH LOWER than m c^2.
No REAL W boson can exist with such a low total energy, because for a REAL W boson we have E^2 = p^2 + m^2 (the on-shell condition), so E > m. But this condition is not valid for a virtual boson, and E can have any value, even much lower than m. And E will have exactly the right value for energy to be conserved:

4th component of (real) quark u + (real) quark d = 4th component of (virtual) W boson = 4th component of the final states. So there IS conservation of energy, no?

EDIT: anti-up quark instead of up quark...
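The four-momentum bookkeeping at the vertex can be made concrete (a sketch with toy quark momenta in GeV, invented for the illustration; metric (+,-,-,-)):

```python
import numpy as np

def minkowski_sq(p):
    """q^2 = E^2 - |p|^2 with metric (+,-,-,-)."""
    return p[0]**2 - np.sum(p[1:]**2)

# invented four-momenta (GeV) for the quarks at the d -> u + W- vertex:
p_d = np.array([0.350, 0.020, 0.000, 0.010])   # incoming down quark
p_u = np.array([0.340, 0.010, 0.000, 0.005])   # outgoing up quark
q_W = p_d - p_u                                # virtual W- four-momentum

M_W = 80.4                                     # real W mass (GeV)

print(q_W[0])              # E_W = E_d - E_u: the energy balances exactly
print(minkowski_sq(q_W))   # q^2 ~ -2.5e-5 GeV^2, nowhere near M_W^2 ~ 6464
```

The energy component of the virtual W is fixed by conservation at the vertex; what is "violated" is only the on-shell relation q^2 = M_W^2.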
 
Last edited:
  • #56
As vanesch has emphasized, we are just bantering words back and forth about a perturbation calculation. Sometimes it helps if the words give us a physical picture in our minds.

In your example, another way of thinking about what happens between the vertices is to say that energy is conserved, which means that the "virtual" W does not have the same (rest) mass as a "real" W. To observe "real" W's directly requires large amounts of energy.

I think this is the viewpoint of Patrick^2, and, though I am no expert, it is a viewpoint that I think is becoming quite common these days. For example, take this quote from Ticciati's field theory text:

"The activity of the [itex]\psi[/itex] field represented by this line is not a particle state since q^2 is not required to be m^2. It is called a virtual particle, though it really has nothing to do with particles. It represents unobservable, transient behavior of the field which gives rise to scattering and is the quantum field theory version of a force."

Regards,
George
 
  • #57
George Jones said:
As vanesch has emphasized, we are just bantering words back and forth about a perturbation calculation. Sometimes it helps if the words give us a physical picture in our minds.

In your example, another way of thinking about what happens between the vertices is to say that energy is conserved, which means that the "virtual" W does not have the same (rest) mass as a "real" W. To observe "real" W's directly requires large amounts of energy.

I think this is the viewpoint of Patrick^2, and, though I am no expert, it is a viewpoint that I think is becoming quite common these days. For example, take this quote from Ticciati's field theory text:

"The activity of the [itex]\psi[/itex] field represented by this line is not a particle state since q^2 is not required to be m^2. It is called a virtual particle, though it really has nothing to do with particles. It represents unobservable, transient behavior of the field which gives rise to scattering and is the quantum field theory version of a force."

Regards,
George


Indeed, that's exactly what I meant.
 
  • #58
George Jones said:
In your example, another way of thinking about what happens between the vertices is to say that energy is conserved, which means that the "virtual" W does not have the same (rest) mass as a "real" W. To observe "real" W's directly requires large amounts of energy.

Yes, I indeed agree with this and I do get the point. I also realize that both vanesch and nrqed are saying this. My point is, however, that total energy conservation is defined in terms of final and initial states. Thus, when one wants to talk about the W boson in this context, one automatically refers to real W bosons. It is this point that I was trying to outline: the (virtual) W bosons in beta decay are off mass shell precisely because real W bosons could never be used in this example of beta decay.

However, I do realize that this is becoming a semantic discussion, and in the long run I understand very well what vanesch is trying to say. So as a summary, let's conclude with this: "the (virtual) W bosons in beta decay are off mass shell precisely because real W bosons could never be used in this example of beta decay. These virtual W bosons can exist thanks to the HUP, and they are therefore short-lived, existing just long enough to mediate the interaction."


marlon
 
  • #59
marlon said:
Yes, I indeed agree with this and I do get the point. I also realize that both vanesch and nrqed are saying this. My point is, however, that total energy conservation is defined in terms of final and initial states.

Sure, I agree with this, and I'd even say that *it doesn't matter* whether, in a Feynman graph, there is "calculational conservation of energy" in the intermediate lines, because it is a *calculational tool* in order to calculate a series expansion approximation and not something physical. Of course, conserving energy in each step of the calculation will guarantee you conservation between initial and final states, so it is *useful* to require this.

Only, in the standard way of calculating a Feynman graph, it so happens that there is this conservation (and if you happened to find your own different way of calculating such a diagram, in which it is not conserved, that wouldn't even be a problem).
 
  • #60
marlon said:
Yes, I indeed agree with this and I do get the point. I also realize that both vanesch and nrqed are saying this. My point is, however, that total energy conservation is defined in terms of final and initial states. Thus, when one wants to talk about the W boson in this context, one automatically refers to real W bosons. It is this point that I was trying to outline: the (virtual) W bosons in beta decay are off mass shell precisely because real W bosons could never be used in this example of beta decay.

However, I do realize that this is becoming a semantic discussion, and in the long run I understand very well what vanesch is trying to say. So as a summary, let's conclude with this: "the (virtual) W bosons in beta decay are off mass shell precisely because real W bosons could never be used in this example of beta decay. These virtual W bosons can exist thanks to the HUP, and they are therefore short-lived, existing just long enough to mediate the interaction."


marlon


First, energy conservation, when valid, applies to a system throughout its lifetime, not just for initial and final states. (See, for example, Noether's theorem.)

Second: Feynman diagrams represent terms in an infinite perturbation series. It is the series sum (if it exists) that preserves unitarity, and thereby forces the inclusion of all possible intermediate states -- sometimes, conveniently or not, termed virtual states or virtual particles. Just as in, say, NR Coulomb scattering, the exact scattering states are superpositions of plane waves of many different free-Hamiltonian energies. But, of course, a free particle is not an eigenstate of the full Hamiltonian, hence said free particle is not in a well-defined energy state.

With my very cynical hat on, I'll say, once again, 'virtual' particles are a convenient conceptual fiction -- a perusal of the history of QFT will flesh out this notion. They are useful to the theoretician, as long as she does not take them too seriously.

And, as can be seen from many books -- cf. Goldberger & Watson's Collision Theory; Blatt and Weisskopf's nuclear theory text, which talks about a virtual n-p state; and, given the loose nature of the use of "virtual" in physics language, similar discussions of "virtual" in Mott and Massey's Theory of Atomic Collisions (1933) -- the idea of "virtual" has been around for a very long time. To be fair, I'll note that in the early days, virtual states often meant resonant or unstable states. Somehow the idea of virtual has, in my opinion, probably outrun its usefulness -- too many folks are too confused about the idea to make it safe to use. Better to talk about an internal line in a diagram, period.

The problem is strictly one of language, not of physics. People sometimes take figurative language too seriously, and then spend enormous energy trying to understand that which cannot be well understood. (How high is up?)

Regards,
Reilly
 
  • #61
reilly said:
First, energy conservation, when valid, applies to a system throughout its lifetime, not just for initial and final states. (See, for example, Noether's theorem.)

Yep, that is true. I don't think anybody tries to counter that. The question is of course about the "when valid" part.

With my very cynical hat on, I'll say, once again, 'virtual' particles are a convenient conceptual fiction -- a perusal of the history of QFT will flesh out this notion. They are useful to the theoretician, as long as she does not take them too seriously.
Well, obviously this is your opinion on this matter and I respect it. I just do not agree with what you are saying, but there is no point in starting a debate on this.

regards
marlon
 
  • #62
marlon said:
Yep, that is true. I don't think anybody tries to counter that. The question is of course about the "when valid" part.


Well, obviously this is your opinion on this matter and I respect it. I just do not agree with what you are saying, but there is no point in starting a debate on this.

regards
marlon
I am still hoping that you will pick a specific Feynman diagram (a weak interaction at tree level with a virtual W boson in the intermediate state if you want, or anything else) so that we can write down the expression and show that the total energy is conserved even in the intermediate state.

Let me consider the electron-positron scattering diagram in the s channel at tree level. The photon in the intermediate state is off-shell, of course. But the energy (as well as the three-momentum) of the off-shell photon is equal to the energy (and three-momentum) of the initial electron and positron, [itex] q_{\gamma}= P_{electron} + P_{positron}[/itex] (where the P's are the physical four-momenta of the electron and positron). Therefore energy is conserved. QED (no pun intended)
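A quick numerical rendering of this statement (a sketch in GeV units, metric (+,-,-,-); the 1 GeV beam energy is an invented example value):

```python
import numpy as np

m_e = 0.000511                        # electron mass in GeV
E = 1.0                               # assumed beam energy per particle
pz = np.sqrt(E**2 - m_e**2)           # head-on collision along z

P_electron = np.array([E, 0.0, 0.0,  pz])
P_positron = np.array([E, 0.0, 0.0, -pz])
q_gamma = P_electron + P_positron     # four-momentum of the virtual photon

q2 = q_gamma[0]**2 - np.sum(q_gamma[1:]**2)

print(q_gamma[0])   # 2.0 GeV: energy conserved at the vertex
print(q2)           # s = 4 GeV^2, far from 0 = (mass of a real photon)^2
```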



Regards

Patrick
 
  • #63
nrqed said:
I am still hoping that you will pick a specific Feynman diagram (a weak interaction at tree level with a virtual W boson in the intermediate state if you want, or anything else) so that we can write down the expression and show that the total energy is conserved even in the intermediate state.

OK. How about we look at the Feynman diagram of neutron beta decay? This is the example that I have used several times to illustrate my point. The W boson NEEDS to be off mass shell because a real W boson is far too heavy. As we have agreed before, energy conservation is defined for initial and final states, for real particles, etc... For energy conservation to be respected, the W boson would have to be real, yet this is impossible. Hence the apparent violation. Now of course, in beta decay energy is indeed conserved throughout the entire diagram (thanks to the W boson being virtual, not real). I know that that is exactly what you want to say, and it has been my fault not to agree with you on that directly. I should have been clearer on that.

Let me consider the electron-positron scattering diagram in the s channel at tree level. The photon in the intermediate state is off-shell, of course. But the energy (as well as the three-momentum) of the off-shell photon is equal to the energy (and three-momentum) of the initial electron and positron, [itex] q_{\gamma}= P_{electron} + P_{positron}[/itex] (where the P's are the physical four-momenta of the electron and positron). Therefore energy is conserved. QED (no pun intended)



Regards

Patrick
Yes, that is true. I do see your point, and I hope that I have made my point clearer. Again, I think most of the confusion was actually caused by the fact that I did not state my point clearly enough. I hope that's solved now.

regards
marlon
 
  • #64
marlon said:
OK. How about we look at the Feynman diagram of neutron beta decay? This is the example that I have used several times to illustrate my point. The W boson NEEDS to be off mass shell because a real W boson is far too heavy. As we have agreed before, energy conservation is defined for initial and final states, for real particles, etc... For energy conservation to be respected, the W boson would have to be real, yet this is impossible. Hence the apparent violation. Now of course, in beta decay energy is indeed conserved throughout the entire diagram (thanks to the W boson being virtual, not real). I know that that is exactly what you want to say, and it has been my fault not to agree with you on that directly. I should have been clearer on that.
It still seems to me that we are not saying the same thing. Well, some of your statements seem to agree with me while others seem to be in disagreement. I agree that the energy is conserved throughout the entire diagram thanks to the W being off-shell, but then you seem to say the opposite in the next sentences, so I am a bit confused.

I am saying that even for the virtual (i.e. off-shell) states energy is conserved. My example of the s-channel of the electron-positron scattering is such an example. Energy of the initial state = energy of the virtual photon in the intermediate state = energy of the final state. Don't we agree on this?

This is just saying that the total energy of the initial state (electron + positron) = energy of the virtual photon in the intermediate state = total energy of the electron and positron in the final state. This is what I mean by energy conservation in the intermediate state!


Also, you say that for energy to be conserved, the W would have to be real. I would say exactly the opposite: if the W were real, energy would not be conserved. Imposing energy conservation forces the W to be off-shell (i.e. virtual).

If you say that the W is off-shell and that energy is not conserved, then how do you calculate the energy of the W? There would be no way to calculate a cross section or anything else, because there would be no rule fixing the four-momentum of the virtual W (one cannot use P^2 = M^2 c^4, since it is not on-shell), and the whole Feynman diagram would be undefined. I am saying that the way the energy of the virtual W is determined is precisely by imposing conservation of energy!
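A sketch of the same bookkeeping for beta decay (my own illustration, using simplified two-body kinematics at the n → p W⁻ vertex in the neutron rest frame, with a purely assumed proton recoil momentum of 1 MeV): imposing conservation at the vertex determines the virtual W's four-momentum, and its invariant mass squared comes out nowhere near M_W².

```python
import math

# Masses in GeV (c = 1)
m_n, m_p, M_W = 0.93957, 0.93827, 80.38
p_rec = 0.001  # GeV: an assumed small proton recoil, for illustration only

E_p = math.sqrt(m_p**2 + p_rec**2)   # proton energy in the neutron rest frame

# Energy-momentum conservation at the n -> p W vertex fixes q_W:
q_W = (m_n - E_p, -p_rec)            # (energy, momentum along recoil axis)

q2 = q_W[0]**2 - q_W[1]**2           # invariant mass squared of the virtual W
print(q2, "GeV^2 vs on-shell", M_W**2, "GeV^2")
# q2 is of order 1e-6 GeV^2, nowhere near M_W^2 ~ 6461 GeV^2:
# conservation is what forces the W so far off-shell.
```

The point of the sketch: without the conservation rule there is simply no value to assign to q_W, so the diagram could not be evaluated at all.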


Regards

Patrick
 
  • #65
vanesch said:
Yes, this is exactly the paper that koantum talked about and which I commented on. It is apparently true that the interaction between the measurement apparatus and the system can be arbitrarily short; I hadn't realized this. But this was not the claim. The claim was that it takes a time dT before the result is available.
In the Bohm example, in a very short time, the px momentum (and eventually, the py momentum) energy equivalent has been transferred to the condenser state...
But now we must still measure the energy of the condenser !
While it is true (and I had overlooked this) that you can now treat the condensers as independent of the system under measurement, you now have a new energy measurement to perform, so the problem starts all over again.
If you accept to talk about the quantum states of the condensers: after the interaction with the particle they are entangled with the particle states, but their own states are not yet sufficiently separated to serve as "pointer states". They will need to evolve for at least a time dT before they are "grossly orthogonal" for input states whose energies differ by an amount of the order of dE.

Nevertheless (and that's what I learned), the advantage of this is that you can perform rapid, successive energy measurements on the same system.

However, you do not have the result available in a very short time (such that you could, for instance, make a decision based upon it to act on the system).
The article I quoted shows, not that you can measure energy in an instant, but that dE*dT > h can be violated.
This expression is not a postulate of quantum mechanics. It arises in certain systems in which the relation holds, but the relation is not valid in all possible situations, and not as interpreted in the way I believe you are interpreting it. BTW - I don't know what you're talking about regarding this capacitor example you gave. Where did it come into play here?
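For a sense of scale in this dE·dT debate (my own back-of-the-envelope numbers, not from either poster): if a relation of the form dE·T ≳ ħ did hold, a measurement lasting T = 1 ns could resolve energy no better than roughly ħ/T.

```python
hbar = 1.0546e-34   # J * s

T = 1e-9            # assumed measurement duration: 1 ns
dE_J = hbar / T     # energy resolution implied by dE * T ~ hbar
dE_eV = dE_J / 1.602e-19  # convert joules to electron-volts

print(dE_eV)        # ~6.6e-7 eV: a micro-eV-scale limit for a 1 ns measurement
```

Whether such a limit actually applies to all measurement schemes is precisely what is being disputed in this exchange.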

Pete
 
  • #66
I found this somewhere; it may be useful.
Do they violate energy conservation?
We are really using the quantum-mechanical approximation method known as perturbation theory. In perturbation theory, systems can go through intermediate "virtual states" that normally have energies different from that of the initial and final states. This is because of another uncertainty principle, which relates time and energy.
In the pictured example, we consider an intermediate state with a virtual photon in it. It isn't classically possible for a charged particle to just emit a photon and remain unchanged (except for recoil) itself. The state with the photon in it has too much energy, assuming conservation of momentum. However, since the intermediate state lasts only a short time, the state's energy becomes uncertain, and it can actually have the same energy as the initial and final states. This allows the system to pass through this state with some probability without violating energy conservation.
Some descriptions of this phenomenon instead say that the energy of the system becomes uncertain for a short period of time, that energy is somehow "borrowed" for a brief interval. This is just another way of talking about the same mathematics. However, it obscures the fact that all this talk of virtual states is just an approximation to quantum mechanics, in which energy is conserved at all times. The way I've described it also corresponds to the usual way of talking about Feynman diagrams, in which energy is conserved, but virtual particles can carry amounts of energy not normally allowed by the laws of motion.
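As a back-of-the-envelope companion to the quoted explanation (my own estimate, not part of the original text): the heuristic dt ~ ħ/(M_W c²) gives the "duration" of a virtual W state, and c·dt then reproduces the famously short range of the weak interaction.

```python
hbar = 6.582e-25   # GeV * s
c = 2.998e8        # m / s
M_W = 80.38        # W boson mass in GeV (c = 1 units for the energy)

dt = hbar / M_W    # heuristic "lifetime" of the virtual W state, in seconds
range_W = c * dt   # corresponding range of the weak interaction, in meters
print(dt, range_W) # ~8e-27 s and ~2.5e-18 m, far below a proton radius
```

This is only the heuristic "borrowing" picture criticized above; as the quoted text stresses, the underlying quantum mechanics conserves energy at all times.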
 
