Does the uncertainty principle imply non-conservation of energy?

  • Thread starter loom91
  • #26
nrqed
Science Advisor
Homework Helper
Gold Member
marlon said:
Well, here's our fundamental difference. In my second post here, I clearly stated that total energy conservation applies to the initial and final state and NOT the intermediate state. In QFT, thanks to this violation of total energy conservation, many interactions exhibit the following property: due to the existence of virtual particles (which DO NOT exist in the initial state, for the obvious reason), there is more "total" energy than in the initial state. Many such interactions are known in QFT, and this can only happen if energy conservation is violated during the period between the initial and final state. It is in this period that interactions with the vacuum can occur as well. If this were not possible, concepts like the vacuum polarization tensor would not exist or be useful.
It is clear that we won't make progress unless we look at a specific Feynman diagram, because I am saying that the total energy in the intermediate state is equal to the initial energy and you are saying that it is not. So we just have to write down the expression for a Feynman diagram and see! Let's apply the Feynman rules, look at the intermediate state (in which there are virtual particles), add up their energies and see what happens. Otherwise, it's no different than being indoors and arguing "the sky is blue", "no, the sky is orange", "no, the sky is blue", "no, the sky is orange!" and so on, without going out and looking at the darn sky.


This is not the point. We are talking about the spread of energy that exists during vertex points !
I am really not sure what "during vertex points" means. I still argue that if four-momentum conservation is imposed at all vertices, then four-momentum will be conserved in all intermediate states.

The simplest example I can think of is the one-loop vacuum polarization diagram. A photon goes to a virtual electron-positron pair and this pair turns back into a photon. If we call q_0 the energy of the initial photon, the sum of the energies of the electron and positron will be q_0, for any loop momentum. Do you disagree with that? (Notice that the energy of a virtual particle is the zeroth component of its four-vector. One cannot use [itex]\sqrt{m^2 + {\vec p}^{\,2}}[/itex] since it is not on-shell.)
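To make this concrete, here is a quick numerical sketch of the bookkeeping I have in mind (my own illustration, in Python; the four-momenta and units are arbitrary made-up numbers): the electron line carries the loop momentum k, the positron line carries q - k as vertex conservation demands, and we simply add things up.

[code]
import numpy as np

def minkowski_sq(p):
    """p = (E, px, py, pz), metric (+,-,-,-)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

m_e = 0.511e-3  # electron mass in GeV (illustrative units)

# Virtual photon four-momentum q and a completely arbitrary loop momentum k
q = np.array([1.0, 0.0, 0.0, 0.3])
k = np.array([0.7, 0.2, -0.1, 0.4])

electron = k        # one internal line carries k
positron = q - k    # the other carries q - k (conservation at both vertices)

# The energy (and 3-momentum) of the intermediate state equals that of the
# photon, for ANY choice of the loop momentum k:
print("sum of intermediate energies :", electron[0] + positron[0], "vs q0 =", q[0])
print("sum of intermediate 3-momenta:", electron[1:] + positron[1:], "vs", q[1:])

# Both internal lines are off-shell (p^2 != m^2); that is where the freedom
# lives, not in the total energy:
print("electron line p^2 - m^2:", minkowski_sq(electron) - m_e**2)
print("positron line p^2 - m^2:", minkowski_sq(positron) - m_e**2)
[/code]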

Or if you prefer, take the scattering of an electron off an external EM field. There are 4 diagrams at one loop (vertex correction, mass renormalization on each of the two external electron lines, vacuum polarization on the photon line). Pick any intermediate state and we'll calculate the total energy to see.

No, one cannot speak about the four-momentum of particles in intermediate states since energy is uncertain. Also, many interactions are known (I gave an example) where this is not true. What you state here is totally contradictory.

regards
marlon
Then how can one ever write down an expression for a Feynman integral? The propagator for a photon, say, goes as 1/q^2. So if the momentum of a virtual photon is not defined, how can we calculate anything?

Regards,

Patrick
 
  • #27
nrqed
Science Advisor
Homework Helper
Gold Member
marlon said:
I never stated that four-momentum is partially conserved. Again, my definition of energy conservation is very clear and I have stated it several times.


marlon
Well, you said that three-momentum is conserved everywhere but that energy is not. So I see that as partial conservation of four-momentum, which cannot hold as a covariant principle.
 
  • #28
vanesch
Staff Emeritus
Science Advisor
Gold Member
I have to agree with nrqed here, Marlon. Normally you have a delta function [tex] \delta^4(\Sigma p_i)[/tex] at each vertex. What simply changes between the external lines and the internal lines (= virtual particles) is that for the external lines we have [tex] p^2 = m^2 [/tex], while for the internal lines this is not true; it is what's usually called the "on shell" condition for external particles, which does not hold for the internal lines.
But, as Pat pointed out, the on-shell condition [tex] p^2 = m^2[/tex] has nothing to do with the conservation of the 4-momentum in a vertex. It is not because a particle is off-shell that the 4 components of p are not conserved at the vertex; it is simply that the "energy" and "3-momentum" parts are redistributed over the different lines meeting in the vertex in such a way that they no longer match the on-shell condition.

At least, that's how I understood this.

Now, I can conceive that there are other ways, in QFT, to calculate the map from initial states to final states - after all, this Feynman diagram stuff is only a calculational procedure. There may be ways to do this without conserving energy in intermediate stages, I don't know. If the procedure is not explicitly Lorentz covariant, this is conceivable.
 
  • #29
vanesch
Staff Emeritus
Science Advisor
Gold Member
koantum said:
Huh? You can't know a given system will always keep a particular energy, can you? So how can you know its precise energy without waiting till the end of time?
Well, simply because a stationary state (an eigenstate of the Hamiltonian) evolves in time only by a phase factor.


The following is a quote from an excellent article by Jan Hilgevoord, The uncertainty principle for energy and time (American Journal of Physics 64 (12), pp 1451-6):
There exist many other formulations of the uncertainty principle for energy and time on which we shall only comment briefly. Some formulations are simply wrong, such as the statement that for a measurement of the energy with accuracy dE a time dt>hbar/dE is needed. This statement is wrong because it is an assumption of quantum mechanics that all observables can be measured with arbitrary accuracy in an arbitrarily short time and the energy is no exception to this. Indeed, consider a free particle; its energy is a simple function of its momentum and a measurement of the latter is, at the same time, a measurement of the former. Hence, if we assume that momentum can be accurately measured in an arbitrarily short time, so can energy...​
And right you are, Pete.
Well, I'm afraid that this is a wrong statement, but it touches upon interpretational issues again. If you introduce again a "magical instantaneous collapse", then yes, of course you can measure energy instantaneously... and now you'll have to explain to me how you build such an apparatus (always the same problem bites you when postulating this immediate collapse!).

If you consider that you can treat the quantum system and the measurement apparatus as quantum objects, then of course the "instantaneous collapse" doesn't happen, and one needs to introduce a unitary evolution of the interaction between system and measurement apparatus. And here the devil comes in:
this interaction is described through A TERM IN THE HAMILTONIAN OF THE OVERALL SYSTEM. Now if you want this measurement to complete quickly (in this view: to have full entanglement between the system states and the pointer states of the apparatus), then this interaction has to be rather strong, but that means that it INTRODUCES AN ERROR IN THE ENERGY OF THE SYSTEM ALONE. The measurement has *ALTERED* the hamiltonian significantly. So the only way to do this is by introducing a small interaction, which will then give rise to a slow time evolution from product state into fully entangled state. The more accurate you want your energy measurement to be, the smaller the interaction term between the system and apparatus has to be, and hence the slower the time evolution which will entangle the system fully with the pointer states.
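If it helps, here is a toy model of my own (a minimal sketch, not a real apparatus; the coupling form g sz⊗sx and the criterion "measurement complete = conditional pointer states orthogonal" are my own choices): a two-level system is coupled to a single "pointer" qubit with interaction strength g. The time needed to complete the measurement scales like 1/g, while the energy blur introduced by the coupling term scales like g, so the product stays of order hbar whatever g you pick.

[code]
import numpy as np

# Pauli matrices, hbar = 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def evolve(H, psi, t):
    """psi(t) = exp(-i H t) psi via eigendecomposition of the Hermitian H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

omega = 1.0   # level splitting of the system, i.e. the energy to be "measured"

for g in (0.05, 0.5):   # weak and strong measurement coupling
    # system energy term + measurement coupling to the pointer qubit
    H = 0.5 * omega * np.kron(sz, I2) + g * np.kron(sz, sx)

    up   = np.kron([1, 0], [1, 0]).astype(complex)   # system up,   pointer |0>
    down = np.kron([0, 1], [1, 0]).astype(complex)   # system down, pointer |0>

    T = np.pi / (4 * g)   # time at which the conditional pointer states become orthogonal
    ptr_up   = evolve(H, up,   T).reshape(2, 2)[0]   # pointer state given "system up"
    ptr_down = evolve(H, down, T).reshape(2, 2)[1]   # pointer state given "system down"

    overlap = abs(np.vdot(ptr_up, ptr_down))   # 0 means the pointer fully distinguishes them
    dE = 2 * g   # the coupling term has eigenvalues +/- g, so it blurs the energy by ~2g
    print(f"g={g}: overlap at T = {overlap:.3f}, T = {T:.2f}, dE*T = {dE*T:.2f}")
[/code]

The weak coupling gives a small energy disturbance but a long measurement time, the strong coupling the opposite; dE*T comes out the same (of order one in units of hbar) in both cases.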

In a way, it occurs to me now that the statement in the article gives us a means to know whether collapse is "immediate": find an energy-measuring apparatus which violates the energy-time uncertainty (measurement time versus measured-value precision). If such an apparatus can be built, then this is proof that the measurement doesn't happen through unitary evolution, but is an "immediate projection".
 
  • #30
koantum
vanesch said:
Well, simply because a stationary state (an eigenstate of the Hamiltonian) evolves in time only by a phase factor.
There you go again with your time evolution. A stationary state (like any ket or quantum state) is a tool for calculating the probabilities of possible measurement outcomes on the basis of actual outcomes (the "preparation"). The time dependence of this tool is a dependence on the time of the measurement to the possible outcomes of which the probabilities are assigned. There is no such thing as an evolving quantum state.
If you introduce again a "magical instantaneous collapse", then yes, of course you can measure energy instantaneously.
Again??? And where does Hilgevoord introduce this "magical instantaneous collapse"? It is YOU who force those who buy your evolving quantum states to postulate reduction in order to avoid the absurdities that arise from YOUR chimera of an evolving quantum state.
and now you'll have to explain to me how you build such an apparatus (always the same problem bites you when postulating this immediate collapse!).
You are confusing separate issues. Ultimately it is only positions that can be measured. The values of other observables are inferred from the outcomes of position measurements. There are different ways of using position measurements to measure, for instance, momentum, which is why "momentum measurement" isn't well-defined unless a measurement procedure is specified. Whatever conceptual or practical problems may be associated with energy measurements, they have nothing to do with the projection postulate.
The measurement has *ALTERED* the hamiltonian significantly.
Since quantum states are represented by unit vectors, the dependence of the probabilities of measurement outcomes on the time of measurement can of course be taken care of by a unitary transformation U. The hamiltonian is the self-adjoint operator that appears in the exponent of U. It has something to do with the interval between measurements. It has nothing to do with the measurements themselves.
 
  • #31
vanesch
Staff Emeritus
Science Advisor
Gold Member
There you go again with your time evolution. A stationary state (like any ket or quantum state) is a tool for calculating the probabilities of possible measurement outcomes on the basis of actual outcomes (the "preparation"). The time dependence of this tool is a dependence on the time of the measurement to the possible outcomes of which the probabilities are assigned. There is no such thing as an evolving quantum state.
As anybody adhering to this view will sooner or later discover, this view, by definition, makes it impossible to analyse the precise physics of a measurement apparatus, because it is "brought in by hand" externally to the formalism. So discussing the physics of the time dependence of a measurement will be difficult from this starting point (one of my main reasons not to adhere to this vision, btw), because you've taken away the only tool that might help you analyse what exactly goes on during a measurement process (when the apparatus interacts with the system under study).
Now, as long as we stay far away from any limits eventually imposed by quantum theory, this can be intuitively handled, but when we come to things such as the constraints of uncertainty principles, there's no tool left.


koantum said:
You are confusing separate issues. Ultimately it is only positions that can be measured. The values of other observables are inferred from the outcomes of position measurements. There are different ways of using position measurements to measure, for instance, momentum, which is why "momentum measurement" isn't well-defined unless a measurement procedure is specified.
Agreed, but in order to be able to link a position measurement to an energy value, you'll find out that you'll always need to let the state being analysed evolve for a finite amount of time in an apparatus (for instance, a setup such as a mass spectrometer).

It has something to do with the interval between measurements. It has nothing to do with the measurements themselves.
That's interpretation-dependent. Even von Neumann considers this evolution into pointer states as the pre-measurement stage, described of course by standard unitary evolution.

But no need to argue here: it is a statement that can be falsified. Show me an apparatus that can make a measurement of the energy E of a system when it has access to the system during a time T, and whose accuracy dE is better than that given by the uncertainty relationship, meaning that the apparatus will be able to distinguish with high certainty two different incoming states which differ by less than dE.
 
  • #32
selfAdjoint
Staff Emeritus
Gold Member
Dearly Missed
vanesch said:
because you've taken away the only tool that might help you analyse what exactly goes on during a measurement process (when the apparatus interacts with the system under study).
But that's not a reason to reject the viewpoint. In fact the viewpoint is the only one that really respects the model without bending or adding to it. When you attempt to introduce the measuring apparatus into the quantum world and analyze what goes on during the measurement process, what do you get? Multiple Worlds! Science Fiction! Putting the measurement apparatus in by hand obeys the Bohr criterion that our world - the world of the apparatus - is not a quantum world.

There is nothing presented in QM that justifies breaking Bohr. You would have to discover physics beyond QM to justify it, and I don't think anything I've seen in all the self-made puzzlements over the measurement problem qualifies as that.
 
  • #33
reilly
Science Advisor
Don't forget that the true Compton scattering amplitude is the sum of all possible Feynman diagrams. The usual second-order diagrams therefore give an approximation. Looked at from an old-fashioned viewpoint, this is saying that out of the infinite number of QED states carrying an electron's charge, we are looking at two. But, virtually by definition, a two-intermediate-state approximation is not an energy eigenstate, so there's no reason to suppose that in low-order approximations energy is conserved -- all the terms in the perturbation series are required to preserve strict energy conservation. (This is thoroughly discussed in most any text that covers time-dependent perturbation theory and/or scattering theory -- Lippmann-Schwinger, dispersion theory.)

Virtual particles are a convenient fiction; they are the grandchildren of what used to be called virtual levels. When used with a good dose of common sense, they are a highly valuable concept. They make it easier to talk about equations. That's all there is to it.

I don't expect much agreement. So let me provide a challenge: it's well known that the problem of finding the scattering states of an electron in a static Coulomb field can be solved exactly in parabolic coordinates. This means that for this problem all the diagrams can be summed. So not only "virtual photons" but "virtual electrons" as well are buried in this exact amplitude. By looking at the Fourier transforms of the wavefunctions and the scattering amplitude, you'll be able to ferret out the role of virtual states in a rigorous way. See if you can prove me wrong.

What you'll see is that a single pole term gets mediated by many other terms, the combination of which is needed to satisfy energy conservation in intermediate states -- all of them together in blissful superposition.

Or, here are the bones of an even simpler approach to the issue of "virtual" particles. Think of a square-well problem in 1-D. Say in the middle there's a potential V, say from 0 to L. An exact solution in the potential region will have the form [itex]e^{+Wx}[/itex] or [itex]e^{-Wx}[/itex], where [itex]W^2 = V - E[/itex] (taking [itex]\hbar^2/2m = 1[/itex]), whereas before and after the potential [itex]W^2 = -E[/itex]. So there's an energy jump. But this energy jump will not show up in regular perturbation theory. It will take the full infinite expression from perturbation theory to get the correct "in-potential" wave function. (The diagrams show the particle as "free" as it traverses the potential; that does not look good for energy conservation.)
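A quick numerical illustration of that last point (my own sketch, arbitrary numbers, units hbar = 2m = 1): for a rectangular potential of height V between 0 and L, the exact solution inside oscillates with wave number sqrt(E - V) instead of the free sqrt(E), and the exact transmission probability depends on that interior wave number, which is precisely what a finite number of "free inside the potential" terms cannot reproduce.

[code]
import numpy as np

# Units: hbar = 2m = 1, so the free dispersion is E = k**2
E, V, L = 2.0, 1.5, 3.0   # energy, potential height, width of the region (arbitrary)

k_out = np.sqrt(E)        # wave number before and after the potential region
k_in  = np.sqrt(E - V)    # wave number of the exact solution INSIDE (here E > V)

# Standard closed-form transmission probability for a rectangular barrier/well;
# note that it depends on k_in * L, i.e. on the interior wave number.
T_exact = 1.0 / (1.0 + (V**2 * np.sin(k_in * L)**2) / (4.0 * E * (E - V)))

print("free wave number         :", k_out)
print("wave number inside region:", k_in)
print("phase across the region  : exact", k_in * L, "vs free", k_out * L)
print("exact transmission prob  :", T_exact)
[/code]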

So, beware of anything but a figurative approach to the idea of a virtual particle; otherwise you'll have to deal with some pretty serious math.

Regards,
Reilly
 
  • #34
vanesch
Staff Emeritus
Science Advisor
Gold Member
selfAdjoint said:
But that's not a reason to reject the viewpoint. In fact the viewpoint is the only one that really respects the model without bending or adding to it. When you attempt to introduce the measuring apparatus into the quantum world and analyze what goes on during the measurement process, what do you get? Multiple Worlds! Science Fiction! Putting the measurement apparatus in by hand obeys the Bohr criterion that our world - the world of the apparatus - is not a quantum world.
There are two ways to answer this. The first one is the way von Neumann answered it: you can put the "cut" where you want. I think von Neumann would agree that if you were to inquire into the nature of the physics of a specific measurement apparatus, you should treat it quantum-mechanically (and then he'd put the cut later, as in reading off the display or something). So you do not necessarily end up with MWI (of course, if you try to be logically consistent, you would).
The second one is that we cannot make statements about the physics happening in a measurement apparatus. This is an annoying feature (especially for an instrumentalist such as me :-).

So now let's go back to the statement over which there was a dispute:
my statement was that "an apparatus yielding an energy measurement with accuracy dE needs to interact with the system for at least a time dt given by the E-t uncertainty relationship", which was said to be a false statement.
My answer is that if you consider quantum theory as ONLY a theory that is an "algorithm to calculate probabilities of outcomes" (which it certainly is too, of course), then you cannot make ANY STATEMENT about the correctness or not of the above statement, because YOU HAVE NO MEANS to figure out how much time it would take for a measurement to reach a given energy accuracy, AS YOU DON'T HAVE A PHYSICAL DESCRIPTION of what goes on in the apparatus. You have to believe the salesman! You cannot analyse the apparatus.

So I'm not (this time :-) fighting over MWI, I'm just arguing that, if you want to make any meaningful statement about the potential limits on measurement time and energy accuracy during the interaction of a measurement apparatus and a system, you will have to describe this interaction quantum mechanically. Otherwise you wouldn't even be able to say anything, unless the salesman of the apparatus told you that it was an "instantaneous energy measurement apparatus" and hence that the corresponding Hermitean operator is the Hamiltonian. And there would be no way to verify the salesman's statements.
Now, IF you allow for a quantum mechanical description of the apparatus-system interaction (or even only the "essential part" of it, not including the electronics or anything), then you will find out that there's a link between the measurement time (the evolution time of the system) and the possible energy accuracy, limited by the Heisenberg E-t relationship, simply because of the disturbance NEEDED in the Hamiltonian to make the apparatus+system evolve quickly enough, namely the interaction term between the two.
You can consider this QM interaction in the Copenhagen picture, on the condition that you put the Heisenberg cut AFTER the essential part of the measurement process, in which case the interaction is described in a unitary way. And if you DON'T do that, you simply have nothing to say about this interaction.
 
  • #35
vanesch
Staff Emeritus
Science Advisor
Gold Member
reilly said:
I don't expect much agreement. So let me provide a challenge: it's well known that the problem of finding the scattering states of an electron in a static Coulomb field can be solved exactly in parabolic coordinates. This means that for this problem all the diagrams can be summed. So not only "virtual photons" but "virtual electrons" as well are buried in this exact amplitude. By looking at the Fourier transforms of the wavefunctions and the scattering amplitude, you'll be able to ferret out the role of virtual states in a rigorous way. See if you can prove me wrong.

What you'll see is that a single pole term gets mediated by many other terms, the combination of which is needed to satisfy energy conservation in intermediate states -- all of them together in blissful superposition.
The claim is simply that during this process, if you started out with a state which had a quite well-defined energy, then at ANY time, when considering the WHOLE state, you'd find the same energy expectation value, with small dispersion.

You're right of course that, to go from the initial to the final state, you can use whatever calculational procedure you like, and that there's a difference between "energy conservation during time evolution" and "energy conservation during calculation" :smile: , the latter not being any law of nature.
That said, in the USUAL way of using perturbative QFT, with Feynman graphs, there IS also "energy conservation during calculation", which is what we were discussing, I thought. But nothing stops you from using another calculation where this might not be true.
 
  • #36
vanesch
Staff Emeritus
Science Advisor
Gold Member
vanesch said:
But no need to argue here: it is a statement that can be falsified. Show me an apparatus that can make a measurement of the energy E of a system when it has access to the system during a time T, and whose accuracy dE is better than that given by the uncertainty relationship, meaning that the apparatus will be able to distinguish with high certainty two different incoming states which differ by less than dE.
In order to follow up on this, I would like to propose the following.
Imagine a single particle in a pure momentum state (and, it being free, hence a pure energy state). Imagine that at t = 0 we do a "position measurement", where this position measurement can be very crude or very accurate. If it is very crude, then we assume that the particle is still closely in its pure energy state (say, we know its position only to within 1 cm). If it is very accurate, then the particle is now in an almost pure position state.

The challenge is now: construct me a momentum measuring apparatus which will give me the momentum (or energy) within an accuracy dE, and where the measurement is completed after time T, such that T.dE << hbar. This precise measurement will then be used to find out whether I applied the "crude" or the "precise" position measurement, in order to establish the energy (or momentum) distribution of the state.
For the "crude" measurement, this should then be a highly peaked distribution, while for the precise measurement, this should be a very broad distribution.

My claim is that you cannot think up a setup that can do this if T.dE << hbar.

For BIG T, there's no problem of course: let the particle fly freely over 20 km, measure its arrival time and position, and you then have the momentum at high precision. But it takes a long time to have your particle fly over 20 km. You'll have an initial uncertainty on the position too (depending on exactly how you want to do it, but in order to respect the momentum resolution for the crude case, this cannot be better than 1 cm).
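To put rough numbers on that (a back-of-the-envelope sketch only; the neutron speed and the 1 cm position uncertainty are assumed values): the momentum is deduced as p = m L / T_flight, so a starting position uncertainty dx turns into dp ~ m dx / T_flight and dE = v dp, and the product dE * T_flight ~ p dx comes out enormously larger than hbar, which is why "BIG T" poses no problem.

[code]
hbar = 1.0545718e-34   # J s
m_n  = 1.675e-27       # neutron mass, kg
eV   = 1.602e-19       # J per eV

v  = 1000.0            # assumed neutron speed, m/s
L  = 20e3              # flight path, m
dx = 1e-2              # initial position uncertainty, m

T_flight = L / v                 # duration of the whole scheme (~20 s)
dp = m_n * dx / T_flight         # momentum error inherited from the position uncertainty
dE = v * dp                      # corresponding energy error

print(f"flight time       : {T_flight:.1f} s")
print(f"energy resolution : {dE / eV * 1e9:.1f} neV")
print(f"dE * T / hbar     : {dE * T_flight / hbar:.2e}")   # >> 1, far above the bound
[/code]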

So go ahead and think up a measurement system that will give me E within dE after less than a time T, such that dE.T << hbar. I think it cannot be done. If it can, my statement is indeed wrong.

EDIT: you may want the particle to be charged, if this makes it any easier.
 
  • #37
marlon
vanesch said:
I have to agree with nrqed here, Marlon. Normally you have a delta function [tex] \delta^4(\Sigma p_i)[/tex] at each vertex. What simply changes between the external lines and the internal lines (= virtual particles) is that for the external lines we have [tex] p^2 = m^2 [/tex], while for the internal lines this is not true; it is what's usually called the "on shell" condition for external particles, which does not hold for the internal lines.
I know. Again, I am not talking about what conditions need to be imposed or respected at the vertex points. There is no debate necessary there. My point is that the violation of energy conservation occurs if you compare external lines to internal lines. This is also what my example on beta decay was trying to illustrate.

But, as Pat pointed out, the on-shell condition [tex] p^2 = m^2[/tex] has nothing to do with the conservation of the 4-momentum in a vertex. It is not because a particle is off-shell that the 4 components of p are not conserved at the vertex
But when did I say this is untrue? Again, I am not talking about vertices only, I am talking about the difference in "energy distribution" between internal and external lines. Over the internal lines the on-mass-shell condition is no longer respected.


marlon
 
  • #38
vanesch
Staff Emeritus
Science Advisor
Gold Member
marlon said:
I know. Again, I am not talking about what conditions need to be imposed or respected at the vertex points. There is no debate necessary there. My point is that the violation of energy conservation occurs if you compare external lines to internal lines. This is also what my example on beta decay was trying to illustrate.
Well, I don't understand what you want to say.

Consider all the ingoing lines, set In, and all the outgoing lines, set Out.

Now, I think we both agree that if all the lines in In are pure momentum states (or almost so, in wave packets, for the nitpickers), then In has a well-defined energy value. So does Out, and the point is (I think we both agree here) that these two energy values are equal, hence energy conservation between ingoing and outgoing particles. That is what physically matters.

However (although this is not really about physics, but about the calculational procedure), consider now that you draw an arbitrary cut across the diagram, which is now split into two disjoint pieces A and B, in such a way that all of the In lines are in A and all of the Out lines are in B.
You cut through a set of virtual-particle lines in doing so. Now, my claim is that if you add up all the energy values for all these virtual particles (picking just any value for all the free loop momenta over which you have to integrate to calculate the Feynman graph), this algebraic sum of energies "flowing out of A" (and hence "flowing into B") is exactly equal to the energy value of the In states (= energy of the Out states).
Same for 3-momentum.

I think this is what Pat wanted to say.
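Here is a minimal numerical version of that claim (my own illustration, with arbitrary four-momenta): take the one-loop vertex correction for an electron scattering off an external photon q, impose four-momentum conservation at every vertex, pick any loop momentum k, and cut the diagram between the second and third vertices. The energies crossing the cut always add up to the total incoming energy.

[code]
import numpy as np

p = np.array([1.2, 0.0, 0.0, 1.0])    # incoming electron four-momentum (arbitrary)
q = np.array([0.5, 0.3, 0.0, -0.2])   # external photon four-momentum (arbitrary)
k = np.array([0.4, 0.1, 0.2, 0.3])    # loop momentum: ANY value is allowed

# Conservation imposed at each vertex of the one-loop vertex correction:
#   vertex 1: p -> (p - k) + k              (loop photon k emitted)
#   vertex 2: (p - k) + q -> (p - k + q)    (external photon absorbed)
#   vertex 3: (p - k + q) + k -> p + q      (loop photon reabsorbed)
lines_crossing_cut = [p - k + q,   # internal electron between vertices 2 and 3
                      k]           # loop photon, which also crosses the cut

E_across_cut = sum(line[0] for line in lines_crossing_cut)
print("energy flowing across the cut:", E_across_cut)
print("total incoming energy p0 + q0:", p[0] + q[0])   # equal for any choice of k
[/code]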
 
  • #39
koantum
vanesch said:
This view makes it impossible to analyse the precise physics of a measurement apparatus, because it is "brought in by hand" externally to the formalism.
A measurement, by my definition, is any actual event or state of affairs from which the truth value of a proposition "S has property P" or "O has value V" can be inferred. There is nothing in this definition that prevents you from analyzing the physics of a measurement apparatus to your heart's content. Nor can you question the occurrence or existence of actual events or states of affairs, since the quantum formalism presupposes them. A probability algorithm without events to which probabilities can be assigned is as useful as a third sleeve.
you've taken away the only tool that might help you analysing what exactly goes on during a measurement process (when the apparatus interacts with the device)
A lot of pseudo-questions and gratuitous answers are generated by trying to analyze what is beyond mathematical and/or rational analysis. Strange that the founders understood this, starting from Newton, who refused to frame hypotheses. Heisenberg insisted that "there is no description of what happens to the system between the initial observation and the next measurement... If we want to describe what happens in an atomic event, we have to realize that the word 'happens' can apply only to the observation, not to the state of affairs between two observations." Pauli stressed that "the appearance of a definite position x during an observation" is to be "regarded as a creation existing outside the laws of nature." Neither Bohr nor Heisenberg nor Pauli (and certainly not Schrödinger) spoke of collapses. They were invented by von Neumann (and Lüders) as a natural consequence of his transmogrification of a probability algorithm, dependent on the times of the events to which the probabilities are assigned, into an evolving instantaneous state of affairs.
I can't imagine how far our understanding of the quantum world would have progressed by now if the amount of energy and ingenuity wasted on MWI and such had been invested in understanding why quantum mechanics is essentially a probability algorithm and why outcome-indicating events or states of affairs play a special role.
Show me an apparatus that can make a measurement of the energy E of a system when it has access to the system during a time T, and whose accuracy dE is better than that given by the uncertainty relationship, meaning that the apparatus will be able to distinguish with high certainty two different incoming states which differ by less than dE.
You seem to be speaking of the relation between the energy spread (line width) of a quantum state and its lifetime. A shorter lifetime (and hence a sharper temporal localization of the quantum state) implies a larger energy spread. In other words, the shorter the time during which a state |a> changes into a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the energy spread of that state. This is the analogue of the following relation: the shorter the distance by which a state |a> has to be translated until it becomes a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the momentum spread of that state. The latter relation is not the uncertainty relation that limits the simultaneous measurement of position and momentum. This uncertainty relation has no temporal analogue. There is no such uncertainty relation for energy and time because there is no time operator and hence no commutator for energy and time.
Needless to say, when I use the "standard" language (which is as convenient as it is misleading) and speak of a state changing from |a> into |b>, I mean a probability algorithm that depends on the time of a measurement to the possible outcomes of which it assigns probabilities. Nothing in the formalism of (non-relativistic) quantum mechanics rules out that this time is an instant. What is ruled out is the possibility of distinguishing between states differing by a sufficiently small translation in time and/or space.
(Which, by the way, is why the spatiotemporal differentiation of the world is incomplete - it doesn’t go all the way down. Which, by the way, is of utmost importance for understanding the consistency of a fundamental physical theory that is nothing but a probability algorithm, with the existence of the events presupposed by this algorithm.)
...you do not necessarily end up with MWI (of course, if you try to be logically consistent, you would).
Claiming logical consistency for MWI is nothing short of hilarious.
...you will have to describe this interaction quantum mechanically.
Trouble is that the quantum-mechanical description (by your definition) is incomplete. It leads to correlations without ever producing correlata (measurement outcomes). Logically consistent, my foot!
 
  • #40
marlon
vanesch said:
Well, I don't understand what you want to say.

Consider all the ingoing lines, set In, and all the outgoing lines, set Out.

Now, I think we both agree that if all the lines in In are pure momentum states (or almost so, in wave packets, for the nitpickers), then In has a well-defined energy value. So does Out, and the point is (I think we both agree here) that these two energy values are equal, hence energy conservation between ingoing and outgoing particles. That is what physically matters.
I agree completely. I always stated that energy conservation is defined based upon the initial and final state.


However (although this is not really about physics, but about the calculational procedure), consider now that you draw an arbitrary cut across the diagram, which is now split into two disjoint pieces A and B, in such a way that all of the In lines are in A and all of the Out lines are in B.
You cut through a set of virtual-particle lines in doing so. Now, my claim is that if you add up all the energy values for all these virtual particles (picking just any value for all the free loop momenta over which you have to integrate to calculate the Feynman graph), this algebraic sum of energies "flowing out of A" (and hence "flowing into B") is exactly equal to the energy value of the In states (= energy of the Out states).
Same for 3-momentum.

I think this is what Pat wanted to say.
Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B. So if you compare between this virtual particle and, let's say, point A: energy conservation is not respected. This is what happens in beta decay.

marlon
 
  • #41
ZapperZ
Staff Emeritus
Science Advisor
Education Advisor
Insights Author
Sometimes it is very difficult to figure out what is being argued here. So I will ask this question:

Are people arguing that the uncertainty relation between [itex]\Delta E[/itex] and [itex]\Delta t[/itex] doesn't exist, is invalid, or should be colored green?

Zz.
 
  • #42
vanesch
Staff Emeritus
Science Advisor
Gold Member
koantum said:
A measurement, by my definition, is any actual event or state of affairs from which the truth value of a proposition "S has property P" or "O has value V" can be inferred.
Well, the statement you're disputing is the following:
a measurement apparatus to which a system is presented starting at t = 0, and which has to provide an answer at time t = T, cannot give you a more accurate value of the energy than dE, where dE.T ~ hbar.

I ask you for a counter example to disprove the statement.

I can't imagine how far our understanding of the quantum world would have progressed by now if the amount of energy and ingenuity wasted on MWI and such had been invested in understanding why quantum mechanics is essentially a probability algorithm and why outcome-indicating events or states of affairs play a special role.
Well, with your superior understanding, I'm sure you're going to give me a superior counterexample :biggrin:

You seem to be speaking of the relation between the energy spread (line width) of a quantum state and its lifetime.
No, I'm not. I'm talking about the time during which a measurement apparatus has access to the system under study and its relationship to the potential accuracy of the energy measurement that can result from it. The state of the system can be as old, or as young, as you want, but the measurement apparatus has not yet had access to it.


A shorter lifetime (and hence a sharper temporal localization of the quantum state) implies a larger energy spread. In other words, the shorter the time during which a state |a> changes into a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the energy spread of that state. This is the analogue of the following relation: the shorter the distance by which a state |a> has to be translated until it becomes a state |b> such that |<b|a>|^2 is smaller than a chosen value, the larger the momentum spread of that state. The latter relation is not the uncertainty relation that limits the simultaneous measurement of position and momentum. This uncertainty relation has no temporal analogue. There is no such uncertainty relation for energy and time because there is no time operator and hence no commutator for energy and time.
I know all that, but it is not what I'm talking about. I'm talking about the thing you disputed. You said (using a quote from I don't remember whom) that it is a FALSE statement that a measurement apparatus which completes its measurement in a time T cannot give a better resolution in its determined energy value than dE. The negation of that statement (which hence you hold to be true) is:
there exists a measurement apparatus which completes its measurement within a time T, and which has an intrinsic energy resolution better than dE, right? So I now ask you to show me such an apparatus.
This is NOT something about the state of the system under study. I'm going to offer to that measurement apparatus a pure energy state (or at least one whose energy is very sharply defined), and then I'm going to offer to that apparatus one which is badly defined (for instance, because it follows immediately after a position measurement). And you will have to show me that the SAME apparatus will deduce, in the first case, a sharp peak, and in the second case, a broad spread. Because that's what an accurate apparatus is supposed to give you.

Needless to say, when I use the "standard" language (which is as convenient as it is misleading) and speak of a state changing from |a> into |b>, I mean a probability algorithm that depends on the time of a measurement to the possible outcomes of which it assigns probabilities. Nothing in the formalism of (non-relativistic) quantum mechanics rules out that this time is an instant.
Well, SHOW ME SUCH AN APPARATUS, and tell me how it works.
And show me that I can present, to this apparatus, the two states I was talking about, and that it produces the peak in the first case and the broad line in the second, and that I obtain this result within an arbitrarily short time interval which is in contradiction with dE.T ~ hbar.

Claiming logical consistency for MWI is nothing short of hilarious.
I would advise you to keep your condescending remarks to yourself. This doesn't advance any discussion. And I wasn't talking about MWI here. My starting point was that the relationship between the time needed to obtain a definite measurement outcome and the energy resolution of the apparatus, when treating the system+apparatus quantum mechanically, is set by the extra interaction term in the Hamiltonian which describes the interaction between the system and the apparatus. If this term is big, the evolution can be fast, but the error introduced by it is large too. If this term is small (weak interaction), then the error introduced by it is small, but it takes more time to complete the evolution.
This point is OBVIOUS starting from MWI, but is the same whenever you treat (no matter what interpretation) the apparatus + system quantum mechanically.

It is of course correct, in standard quantum theory, that you could conceive, within one picosecond, 10^30 alternations between a position measurement, a momentum measurement, and an energy measurement, each with mind-boggling accuracy. But only with "magical" deus ex machina measurement apparatuses.
A real apparatus has to interact somehow physically with the system, and I'm asking you to provide me with an example of such an apparatus, as a thought experiment.
 
  • #43
vanesch
Staff Emeritus
Science Advisor
Gold Member
marlon said:
Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B. So if you compare between this virtual particle and, let's say, point A: energy conservation is not respected. This is what happens in beta decay.
I don't understand this. The TOTAL energy flowing through all the cut virtual particles from the A part to the B part will be exactly equal to the energy of the In particles. This is guaranteed because at each vertex there is conservation of energy, so through a complete cut of the diagram into two pieces only the correct amount of energy can flow. Of course this says nothing about what happens on individual lines: one virtual line can carry more energy, if this is compensated by another one.
This is like an electrical circuit: if you cut it in two, then the current flowing through all of the cut wires, algebraically summed, will equal the current flowing into A (and flowing out of B), simply because the currents are conserved at each node point (Kirchhoff's law).
 
  • #44
nrqed
Science Advisor
Homework Helper
Gold Member
marlon said:
I agree completely. I always stated that energy conservation is defined based upon the initial and final state.
What Patrick VanEsch and I are trying to argue is that even in any intermediate state, four-momentum is conserved, which is ensured by conservation of four-momentum at the vertices. There is no mathematical way to have the four-momentum not conserved in all intermediate states if it is conserved at the vertices!

It's like an electrical circuit, as Patrick (the other Patrick) pointed out. If you have 3 A flowing into two wires in parallel, then because the current is conserved at the vertex (the node), the total current in the intermediate state (the two wires in parallel) MUST be equal to the total initial current. Now, if you look at *only one* of the two wires in parallel, let's say the one which carries 1 A, and you claim that current is not conserved, then it is because you are not including all the wires in the intermediate state (i.e. all the particles in the intermediate state of a Feynman diagram). That's all there is to it!

Well, it's possible that I don't completely get what you say, but in between the A and B points (which we call initial and final states) there can be virtual particles that have too much energy if you compare this value to the energy difference between A and B.
I am not sure about the meaning of this last line. Since you agree that the energy is conserved between initial and final states, shouldn't the energy difference between A and B necessarily be zero?

So if you compare between this virtual particle and, let's say, point A: energy conservation is not respected. This is what happens in beta decay.

marlon
Can we look at an explicit Feynman diagram, please? If you want to focus on beta decay, then tell us exactly what diagram you have in mind. I am assuming you have in mind the *tree* diagram? With a W exchange?
At *all* steps of the diagram the energy is conserved.

Of course, the W is off-shell, but that has nothing to do with energy not being conserved, as we are trying to emphasize.

An even more clean-cut example is electron-positron scattering in the s channel at tree level. The photon is off-shell, but its energy *is* equal to the initial combined energy of the electron-positron pair. Do you dispute that? If yes, then we should simply write down the expression instead of talking "in the air".
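Just to write that expression down in numbers (arbitrary on-shell momenta for the incoming pair, purely illustrative): the s-channel photon carries q = p_minus + p_plus, so its energy is exactly the combined initial energy, even though q^2 is nowhere near zero.

[code]
import numpy as np

m_e = 0.511e-3   # electron mass, GeV

def onshell(m, p3):
    """Four-momentum (E, px, py, pz) of an on-shell particle of mass m."""
    p3 = np.asarray(p3, dtype=float)
    return np.concatenate(([np.sqrt(m**2 + p3 @ p3)], p3))

p_minus = onshell(m_e, [0.0, 0.0,  1.0])   # incoming electron
p_plus  = onshell(m_e, [0.0, 0.0, -1.0])   # incoming positron

q = p_minus + p_plus                       # s-channel virtual photon
q_sq = q[0]**2 - q[1:] @ q[1:]             # its invariant mass squared

print("photon energy q0      :", q[0], "= E- + E+ =", p_minus[0] + p_plus[0])
print("photon q^2 (off-shell):", q_sq, "(a real photon would have q^2 = 0)")
[/code]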

Regards

The other Patrick
 
  • #45
reilly
Science Advisor
Once again: if the Hamiltonian is time-independent, as it is, for example, in QED, then energy is strictly conserved, for a very short time or a very long time, or any time -- we are talking Noether's theorem. And yes, there are statements that can be made about the evolution of individual states within a state described by a density matrix.

There is, of course, an energy-time uncertainty principle in classical E&M, something to do with basic properties of Fourier representations. As far as I know, energy is still conserved in E&M. And sometimes it's useful to remember the distinction between open and closed systems when probing the mysteries of any HUP, or close cousin thereof.

However, the name of the game is: we are talking exact solutions; that is, in field theory, all appropriate Feynman diagrams. Leave some out, and the approximate solution will not, repeat, not conserve energy -- that is, it is a faulty, inexact representation of the system at hand. But with a finagle here, a finesse there, we can overcome this nominally serious difficulty -- this has been discussed ad nauseam for 70 years or so. See any (well, almost any) discussion of time-independent perturbation theory or of scattering theory -- best to start with non-relativistic theory before jumping into LSZ, Wightman, .... (In the past, I've mentioned Wigner and Weisskopf's and Breit and Wigner's work on resonances. They worked out many of the issues discussed here quite a long time ago, and their work still stands as impressive, correct, and useful. Probably a third or more of Cohen-Tannoudji et al.'s Atom-Photon Interactions: Basic Processes and Applications is devoted to the issues raised in this thread. So-called dispersion or S-matrix theory dealt and deals with the issues of this thread as well -- see Weinberg's Field Theory, Vol. I, for an introduction.)

There's no mystery here:

A finite approximation to an infinite series is, generally, not a terribly good approximation. The symmetries of the exact series sum may well not be shared by the finite approximation. So, what's the problem?

Regards,
Reilly
 
  • #46
koantum
Hi vanesch, let me begin by noting a few points. They may have been raised already in this thread, but if so they bear repetition.

As said, there is no commutation relation for energy and time operators because there is no time operator.

All measurements are ultimately position measurements, so when we talk about measurements of momentum or energy, we have to think up, if not actually implement, a measurement procedure, which won't be unique.

Experimentalists differ from theorists, who frequently proceed on the assumption that every possible ONB corresponds to an "in principle" implementable measurement. In this they manifest their boundless faith in the ingenuity of experimentalists, hats off. (I was going to write, heads off.) In non-relativistic quantum mechanics, they can get away with this assumption, which has such consequences as: take a particle, make three measurements in a row, the first and third with detectors in position space, the second with detectors in momentum space, and get virtually instantaneous propagation over any distance.

The analogy between the quantum-mechanical Psi(q,t) and its (spatial) Fourier transform Phi(p,t) associated with a particle on the one hand and, on the other, the proper wave function f(x,t) and its transform g(k,t), which are used in the study of classical signals, is superficial and misleading. What comes out if you pop x and t into f is a physical quantity associated with every point x and every instant t. What you insert into Psi(q,t) and what comes out is totally different. q is not a point of space but an eigenvalue of the position operator, which serves to calculate probability distributions over possible outcomes of position measurements. t is not an instant of time but the time of a measurement. And Psi is not a physical quantity but an abstract expression that, when square-integrated over a region R, gives the probability of finding the particle in R if the appropriate measurement is made at the time t. Psi(q,t) is not a physical quantity associated with every t, nor does it serve to (statistically) answer the question: when is the particle at a given point? It concerns the question: where is the particle at a given time? (There is an extensive literature on the time-of-arrival problem, but this is essentially a discussion of how to realize a measurement of the time of an event as a position measurement, and of finding a suitable operator for this measurement.)

Since there are no detectors in momentum space, the closest mock-up of a bona fide quantum-mechanical momentum measurement is to make position measurements at two given times. And whenever position measurements are made (which is whenever measurements are made), the uncertainty relation for q and p must be taken into account. A sharper measurement of the distance between the two positions implies a fuzzier relative momentum. If, using the (relativistic or non-relativistic) relation between the energy and the momentum of a freely propagating particle, we turn this into an energy measurement, then that uncertainty relation turns into one between energy and position rather than one between energy and time.

The uncertainty between q and p, combined with relativistic covariance, appears to require a corresponding uncertainty between t and E. Such an uncertainty relation exists: as a sharper probability distribution over the possible outcomes of a position measurement implies a fuzzier probability distribution over the possible outcomes of a momentum measurement, so a sharper probability distribution over the possible outcomes of a measurement of the time of an event (such as the time at which a clock's second hand points upward) implies a fuzzier probability distribution over the possible outcomes of a measurement of the hand's position. There is a fairly extensive literature on quantum clocks, the common denominator of which appears to be that a sharper time indicator implies a fuzzier momentum or angular momentum, which in turn implies a fuzzier energy. So it's again essentially position and energy rather than time and energy. And it's primarily about time measurements, which are comparatively easy to mock up, rather than about energy measurements, which appear to be the most problematic. Einstein's famous photon-box argument invokes the relativistic equivalence of energy and mass to measure the photon's energy via a weight loss of the box, and Bohr's famous rebuttal even invokes the general relativistic effect of gravitational fields on the rate at which clocks "tick". This is heavy artillery to resolve a purely quantum-mechanical issue.

So at the end of the day I'm discouraged by the defeat of Einstein and others from spending time trying to invent a realistic measurement scheme that is not constrained by some E-T uncertainty. However, while the P-Q uncertainty is a fundamental feature of pure, non-relativistic quantum mechanics, the E-T uncertainty is not, and such a measurement scheme was found by Aharonov and Bohm, whose names you will surely recollect. I quote the Summary and Conclusion of their article "Time in the quantum theory and the uncertainty relation for time and energy", which is reprinted in the volume Quantum Theory and Measurement edited by Wheeler and Zurek.
There has been an erroneous interpretation of uncertainty relations of energy and time. It is commonly realized, of course, that the "inner" times of the observed system (defined as, for example, by Mandelstamm and Tamm) do obey an uncertainty relation [tex]\Delta E\Delta t\geq h[/tex] where [tex]\Delta E[/tex] is the uncertainty of the energy of the system, and [tex]\Delta t[/tex] is, in effect, a lifetime of states in that system. It goes without saying that whenever the energy of any system is measured, these "inner" times must become uncertain in accordance with the above relation, and that this uncertainty will follow in any treatment of the measurement process. In addition, however, there has been a widespread impression that there is a further uncertainty relation between the duration of measurement and the energy transfer to the observed system. Since this cannot be deduced directly from the operators of the observed system and their statistical fluctuation, it was regarded as an additional principle that had to be postulated independently and justified by suitable illustrative examples. As was shown by us, however, this procedure is not consistent with the general principles of the quantum theory, and its justification was based on examples that are not general enough.
Our conclusion is then that there are no limitations on measurability which are not obtainable from the mathematical formalism by considering the appropriate operators and their statistical fluctuation; and as a special case we see that energy can be measured reproducibly in an arbitrarily short time.​
Regards - koantum
 
  • #47
vanesch
Staff Emeritus
Science Advisor
Gold Member
koantum said:
Hi vanesch, let me begin by noting a few points. They may have been raised already in this thread, but if so they bear repetition.

As said, there is no commutation relation for energy and time operators because there is no time operator.
Yes, I know that, and as I said, this is not the issue.

All measurements are ultimately position measurements, so when we talk about measurements of momentum or energy, we have to think up, if not actually implement, a measurement procedure, which won't be unique.
Indeed, you can start with this. I'm not sure that all measurements are ultimately position measurements in principle, in fact, but we can surely take it as a working hypothesis. And this already implies something important: it means that there is no way to consider, say, the momentum operator as a Hermitean "measurement operator" in quantum theory. If you want to be precise, you'll have to continue your unitary evolution until you arrive at a position. Now, of course, if there's enough decoherence, then you can take the shortcut and avoid this last step, as a good approximation. But if you want to look into the fine details of how your "desired quantity" transforms into a position measurement (or two position measurements), well, then you will have to work this unitary evolution out in detail.

And now we come to the main point: IF you consider that you can only do position measurements, and you want to set up a (combination of) position measurements that you can turn into an energy measurement, AND you require the entire measurement not to last longer than a time T, THEN this implies that you have an inaccuracy of dE on your deduced energy value.

EVEN if you can consider position measurements "instantaneous".



Experimentalists differ from theorists, who frequently proceed on the assumption that every possible ONB corresponds to an "in principle" implementable measurement. In this they manifest their boundless faith in the ingenuity of experimentalists, hats off. (I was going to write, heads off.)
:rofl: :rofl:

Well, you can - as you do - turn a head-less theorist into a reasonable experimentalist, by claiming that the only measurements you can REALLY perform are position measurements, and that you now have to DEDUCE other quantities by thinking up combinations of position measurements.


The analogy between the quantum-mechanical Psi(q,t) and its (spatial) Fourier transform Phi(p,t) associated with a particle on the one hand and, on the other, the proper wave function f(x,t) and its transform g(k,t), which are used in the study of classical signals, is superficial and misleading. What comes out if you pop x and t into f is a physical quantity associated with every point x and every instant t. What you insert into Psi(q,t) and what comes out is totally different. q is not a point of space but an eigenvalue of the position operator, which serves to calculate probability distributions over possible outcomes of position measurements. t is not an instant of time but the time of a measurement. And Psi is not a physical quantity but an abstract expression that, when square-integrated over a region R, gives the probability of finding the particle in R if the appropriate measurement is made at the time t. Psi(q,t) is not a physical quantity associated with every t, nor does it serve to (statistically) answer the question: when is the particle at a given point? It concerns the question: where is the particle at a given time? (There is an extensive literature on the time-of-arrival problem, but this is essentially a discussion of how to realize a measurement of the time of an event as a position measurement, and of finding a suitable operator for this measurement.)
I agree with all this but it is beside the point. Well, I don't fully agree with it, in that it makes statements which do not necessarily have to be true (they make statements about the ontological status of certain objects, where I prefer to assign different truth values), but I agree that this is a position one can take. But all this has nothing to do with what I am claiming, which CAN be framed in this view. The claim is that if your entire measurement scheme is finished in a time T, then the energy value you can deduce from it has an error of dE.

Since there are no detectors in momentum space, the closest mock-up of a bona fide quantum-mechanical momentum measurement is to make position measurements at two given times. And whenever position measurements are made (which is whenever measurements are made), the uncertainty relation for q and p must be taken into account. A sharper measurement of the distance between the two positions implies a fuzzier relative momentum. If, using the (relativistic or non-relativistic) relation between the energy and the momentum of a freely propagating particle, we turn this into an energy measurement, then that uncertainty relation turns into one between energy and position rather than one between energy and time.
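To spell out that last step (a sketch under the stated assumptions, taking the non-relativistic case for simplicity): for a freely propagating particle [itex]E = p^2/2m[/itex], so a momentum spread translates into an energy spread [itex]\delta E = (p/m)\,\delta p = v\,\delta p[/itex], and combining this with [itex]\delta q\,\delta p \gtrsim \hbar[/itex] gives

[tex]\delta E\,\delta q \;\gtrsim\; \hbar\, v ,[/tex]

i.e. an uncertainty relation between energy and position, as stated, rather than between energy and time.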
Yes, but it will turn out that this entire scheme can only be COMPLETED within a time T (that is, from the moment your system is "available" to the moment where you have your result in your hands) if the error on the deduced energy is at least of the order of dE. That's all I'm saying.

The uncertainty between q and p, combined with relativistic covariance, appears to require a corresponding uncertainty between t and E. Such an uncertainty relation exists: as a sharper probability distribution over the possible outcomes of a position measurement implies a fuzzier probability distribution over the possible outcomes of a momentum measurement, so a sharper probability distribution over the possible outcomes of a measurement of the time of an event (such as the time at which a clock's second hand points upward) implies a fuzzier probability distribution over the possible outcomes of a measurement of the hand's position. There is a fairly extensive literature on quantum clocks, the common denominator of which appears to be that a sharper time indicator implies a fuzzier momentum or angular momentum, which in turn implies a fuzzier energy.
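One standard way to make the clock argument concrete (my own back-of-the-envelope sketch, not taken from that literature): model the second hand as a rotor with moment of inertia I, angle [itex]\varphi[/itex] and angular momentum L, turning at angular velocity [itex]\omega[/itex]. Reading the time to within [itex]\delta t[/itex] from the hand's angle means resolving [itex]\delta\varphi = \omega\,\delta t[/itex]; the rotor's energy is [itex]E = L^2/2I[/itex], so [itex]\delta E = (L/I)\,\delta L = \omega\,\delta L[/itex]; and with [itex]\delta\varphi\,\delta L \gtrsim \hbar[/itex] one gets

[tex]\delta E\,\delta t \;=\; (\omega\,\delta L)\,\frac{\delta\varphi}{\omega} \;=\; \delta L\,\delta\varphi \;\gtrsim\; \hbar .[/tex]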
It's not so much the sharpness of the time measurement that I'm talking about, but rather an upper bound on the total duration of the measurement scheme from which you deduce the energy.

For instance, you will not be able, no matter what you do, to measure the energy of, say, a cold neutron in one nanosecond, with a precision better than one nano-electronvolt.
Although the quote of the article seems to contradict this:

and as a special case we see that energy can be measured reproducibly in an arbitrarily short time.
I'd like to see the setup of the measurement scheme that does this. I am going to look at it, the library of my institute seems to have the book available...
 
  • #48
vanesch
Staff Emeritus
Science Advisor
Gold Member
5,028
16
vanesch said:
I'd like to see the setup of the measurement scheme that does this. I am going to look at it, the library of my institute seems to have the book available...
Ok, I looked at the paper. For sure it is a smart example! I'll concentrate on the description around page 723.

It seems to me, though, that although the interaction time with the system is indeed arbitrarily short, the result is not available within this time, because what happened is that energy was transferred from the particle to the first (and the second) condenser, and we have now displaced the problem to the energy measurement of the condensers. So in order to measure THIS energy accurately enough, we will again need a time T (or another ingenious scheme which will, in its turn, transfer the knowledge to a further system).

But I admit that I learned something here, so thanks.
What I learned (correct me if I'm wrong) is that the system need not remain "available" during the entire measurement time T (the time between the "availability" of the system state and the "presentation" of the result, which here corresponds to the completion of the energy measurement of the condensers). In MWI speak (even though you do not like it), the final evolution into clearly distinguished pointer states takes a time T, but the interaction between system and apparatus can stop before that and only needs a time delta-t.

The key seems to be equation (25), which we can in fact augment:

[tex] H = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + y\, p_x\, g(t) [/tex]

and the usual justification of the uncertainty relation between energy accuracy and the time needed goes by saying that the perturbation introduced by the third term gives you dE while its duration is T. What is done here is to introduce a second interaction term:

[tex] H = \frac{p_x^2}{2m} + \frac{p_y^2}{2m} + y\, p_x\, g(t) - y\, p_x\, g(t - \Delta T) [/tex]

so that the perturbative effect of the first is compensated by the corrective action of the second.
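For what it's worth, here is a quick sketch (standard Heisenberg-picture reasoning, not copied from the paper) of why the coupling in equation (25) can read out [itex]p_x[/itex] with an arbitrarily short pulse without disturbing it: H does not depend on x, so [itex]p_x[/itex] is a constant of the motion, while the pointer momentum [itex]p_y[/itex] obeys

[tex]\dot p_y = -\frac{\partial H}{\partial y} = -p_x\, g(t) \qquad\Longrightarrow\qquad p_y(\mathrm{after}) = p_y(\mathrm{before}) - p_x \int g(t)\,dt ,[/tex]

so [itex]p_y[/itex] picks up a record of [itex]p_x[/itex] (and hence of the energy [itex]p_x^2/2m[/itex]) whose size is set by [itex]\int g\,dt[/itex], which can be kept large even when g(t) is nonzero only for a very short time.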
 
  • #49
2,946
0
Regarding dE*dt - this expression does not mean that it takes a time dt to measure energy to a precision of dE. Counterexamples can be found in the literature. One of them comes to mind:

Time in the Quantum Theory and the Uncertainty Relation for Time and Energy, Y. Aharonov and D. Bohm, Phys. Rev. Vol. 122(8) June 1, 1961, pp 1649 - 1658. The abstract reads
Because time does not appear in the Schrödinger equation as an operator but only as a parameter, the time-energy uncertainty relation must be formulated in a special way. This problem has in fact been studied by many authors and we give a summary of their treatments. We then criticize the main conclusion of these treatments; viz., that in a measurement of energy carried out in a time interval, dt, there must be a minimum uncertainty in the transfer of energy to the observed system, given by d(E' - E) > h/dt. We show that this conclusion is erroneous in two respects. First, it is not consistent with the general principles of the quantum theory, which require that all uncertainty relations be expressible in terms of the mathematical formalism, i.e., by means of operators, wave functions, etc. Secondly, the examples of the measurement processes that were used to derive the above uncertainty relation are not general enough. We then develop a systematic presentation of our point of view, with regard to the role of time in quantum theory, and give a concrete example of a measurement process not satisfying the above uncertainty relation.
Pete
 
  • #50
vanesch
Staff Emeritus
Science Advisor
Gold Member
5,028
16
Yes, this is exactly the paper that koantum talked about and which I commented on. It is apparently true that the interaction between the measurement apparatus and the system can be arbitrarily short; I didn't realise this. But this was not the claim. The claim was that it takes a time dT to have the result available.
In the Bohm example, in a very short time, the energy equivalent of the px momentum (and eventually of the py momentum) is transferred to the condenser state...
But now we must still measure the energy of the condenser!
While it is true (and I hadn't realised this) that you can now consider the condensers as independent of the system under measurement, you now have a new energy measurement to perform, so the problem starts all over again.
If you are willing to talk about the quantum states of the condensers: after the interaction with the particle they are entangled with the particle states, but their own states are not yet sufficiently separated for them to count as "pointer states". They will need to evolve for at least a time dT before they become "grossly orthogonal" for input energy states that differ by something of the order of dE.

Nevertheless (and that's what I learned), the advantage of this is that you can perform energy measurements in rapid succession on the same system.

However, you do not have the result available in a very short time (so you could not, for instance, take a decision based upon it and act on the system).
 
