Is there a probability in QM that an event happens at time t?

In summary, the conversation discusses the probability amplitude of a particle at a given time and location, as well as the possibility of predicting the probability of an event occurring at a certain time. Correlation functions and their limitations in measuring actual correlations are also mentioned. The conversation then delves into a scenario involving particle decay and the potential use of Hilbert spaces to calculate probabilities.
  • #36
vanhees71 said:
Already for the momentum operator a wave function which is not smooth is not in the domain, and thus you cannot naively calculate expectation values or matrix elements with such functions.
Sure, one cannot calculate naively, but that doesn't mean that one cannot calculate at all. To calculate such things, one has to define precisely how one calculates them. Eq. (9) is exactly the precise definition of our calculation.

vanhees71 said:
The way out is to use a sequence of wave functions that approximate the wave function in question and take the limit for the expectation values/matrix elements.
No, it is a way out, not the way out. In this paper we use a different way out. Our alternative definition turns out to have physical consequences, which in the end solve physical problems that appear with the standard mathematical definitions. So we solve a physical problem by thinking more carefully about how certain formal mathematical entities should really be defined, in order to get sensible physics. From this point of view, one can say that it turns out that the standard (not ours) definition was "naive".
 
  • #37
Demystifier said:
Sure, one cannot calculate naively, but that doesn't mean that one cannot calculate at all. To calculate such things, one has to define precisely how one calculates them. Eq. (9) is exactly the precise definition of our calculation.
As I said, you need to use the correct limiting procedures.
Demystifier said:
No, it is a way out, not the way out. In this paper we use a different way out. Our alternative definition turns out to have physical consequences, which in the end solve physical problems that appear with the standard mathematical definitions. So we solve a physical problem by thinking more carefully about how certain formal mathematical entities should really be defined, in order to get sensible physics. From this point of view, one can say that it turns out that the standard (not ours) definition was "naive".
I don't understand what problem you want to solve to begin with. No matter what, I'm not convinced that starting from obviously wrong maths leads to anything useful. Self-adjoint operators don't have non-real expectation values, as soon as you properly define them!
 
  • #38
Demystifier said:
Note that the operator ##\overline{H}## is not an observable. It is a non-hermitian and non-self-adjoint operator that generates a non-unitary evolution.
If ##H## is self-adjoint then also ##\pi H \pi## is self-adjoint.
 
  • #39
vanhees71 said:
If ##H## is self-adjoint then also ##\pi H \pi## is self-adjoint.
In finite-dimensional Hilbert space, yes. In infinite-dimensional, not necessarily. We have explained it in detail in the cited paper.
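A standard textbook illustration of the infinite-dimensional subtlety (not taken from our paper): the momentum operator ##p=-i\hbar\, d/dx## on ##L^2(0,\infty)## with domain ##\psi(0)=0## is hermitian, yet
$$p^\dagger \psi = \pm i\psi \;\Rightarrow\; \psi_\pm(x) \propto e^{\mp x/\hbar}, \qquad \psi_+ \in L^2(0,\infty),\quad \psi_- \notin L^2(0,\infty),$$
so its deficiency indices are ##(1,0)## and it admits no self-adjoint extension at all. Hermiticity alone guarantees nothing in infinite dimensions.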
 
  • #40
vanhees71 said:
Self-adjoint operators don't have non-real expectation values, as soon as you properly define them!
Definitions are not written in stone; they can be changed.
 
  • #41
Then use another word. In the natural sciences it's good practice not to change the meaning of words!
 
  • #42
vanhees71 said:
Then use another word. In the natural sciences it's good practice not to change the meaning of words!
I'm thinking about what word would better convey the idea. Perhaps "extended Hamilton operator", because we are extending the domain on which the operator ##H\propto\nabla^2## is allowed to act.
 
  • #43
vanhees71 said:
I don't understand what problem you want to solve to begin with.
The arrival time problem, as a central problem of the more general problem indicated by the title of this thread.
 
  • #44
IIRC from a previous thread, it was concluded that the Zeno effect isn't due to measurement per se, but to the interaction between the measurement device and the measured system. I.e. the effect is present in the dynamics (even for seemingly, but not actually, indirect measurement scenarios) regardless of when we build "collapsed states" of measurement outcomes from some projective decomposition. https://www.physicsforums.com/threads/geiger-counters-and-measurement.1015428/
 
  • #45
I will not claim that I have read your article "properly", but I have read through a lot of it and scanned through the rest.

My questions are:

In your setup, you sample at ##t_0, t_1, \dots## until you get a hit, and you calculate P(0), P(1), ...
Let's say that we performed these experiments in two ways. In the first case (case 'a'), I sample exactly as described in your paper (from ##t_0## on until I get a hit). I will call these probabilities ##P_a(x)## for x = 0, 1, ....

In the second case (case 'b'), I start my sampling at ##t_{n-1}##. I will call these ##P_b(x)## for x = n-1, n, n+1, ....

I am guessing that by performing measurements more often, I will get fewer hits (a la Zeno). So by the time I reach ##t_{n-1}##, we will be less likely to have reached the end in case 'a' than in case 'b', i.e. ##\sum_{x=0}^{n-1} P_a(x) < P_b(n-1)##.
Is that correct?

Next, I will "normalize" the result "tails" (samples ##t_n## and on) for both cases as follows:
1) Compute the sums ##S_a = \sum P_a(x)## and ##S_b = \sum P_b(x)## over x = n, n+1, n+2, ...
2) Compute ##T_a(x) = P_a(x+n)/S_a## and ##T_b(x) = P_b(x+n)/S_b## for x = 0, 1, 2, ...
Let's call ##T_a## and ##T_b## the tails.
Will the tails always be the same?
My guess would be that the different sampling history from ##t_0## to ##t_{n-1}## would (in some cases) have an effect that cannot be completely erased by the ##S_a## and ##S_b## "normalization" that we applied. But it would take me some time with your math to verify this.
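To make the question concrete, here is a minimal numerical sketch of the two cases in a constant-hazard toy model (my stand-in, not the model from your paper: every measurement clicks independently with probability p, and nothing happens between measurements):

```python
import numpy as np

p = 0.3   # per-measurement click probability (toy assumption, not from the paper)
n = 5     # case 'b' starts sampling at t_{n-1}
N = 200   # truncation of the infinite sums

# Case 'a': measurements at t_0, t_1, ...  ->  P_a(x) = (1-p)^x * p
Pa = (1 - p) ** np.arange(N) * p

# Case 'b': first measurement at t_{n-1}  ->  P_b(n-1+k) = (1-p)^k * p
Pb = np.zeros(N)
Pb[n - 1:] = (1 - p) ** np.arange(N - n + 1) * p

# Normalized tails over samples t_n and later
Ta = Pa[n:] / Pa[n:].sum()
Tb = Pb[n:] / Pb[n:].sum()

print(np.allclose(Ta, Tb))  # True: in this memoryless toy the tails coincide
```

In this toy the tails coincide exactly; whether the model in your paper erases the pre-##t_n## history in the same way is what I am asking.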

Thanks.
 
  • #46
Morbert said:
IIRC from a previous thread, it was concluded that the Zeno effect isn't due to measurement per se, but to the interaction between the measurement device and the measured system. I.e. the effect is present in the dynamics (even for seemingly, but not actually, indirect measurement scenarios) regardless of when we build "collapsed states" of measurement outcomes from some projective decomposition. https://www.physicsforums.com/threads/geiger-counters-and-measurement.1015428/
I don't have time to read the whole thread. Can you tell me which post concludes it?
 
  • #47
More questions:
The quasi-spontaneous case is interesting. I assume it would apply to isotope decay.
For example, Lithium-8 has a half-life of under a second. But left in an empty universe, would it ever decay?
It sounds to me like an empty universe (or one with only a single Lithium-8 atom) would be "timeless".
 
  • #48
.Scott said:
More questions:
The quasi-spontaneous case is interesting. I assume it would apply to isotope decay.
For example, Lithium-8 has a half-life of under a second. But left in an empty universe, would it ever decay?
We claim that it wouldn't.
.Scott said:
It sounds to me like an empty universe (or one with only a single Lithium-8 atom) would be "timeless".
Not necessarily; it can oscillate if it is in a superposition of two energies.
 
  • #49
.Scott said:
In the second case (case 'b'), I start my sampling at ##t_{n-1}##. I will call these ##P_b(x)## for x = n-1, n, n+1, ....
It doesn't really matter when the sampling starts, i.e. what time is chosen as "initial".
.Scott said:
I am guessing that by performing measurements more often, I will get fewer hits (a la Zeno). So by the time I reach ##t_{n-1}##, we will be less likely to have reached the end in case 'a' than in case 'b'.
I don't understand. What does performing measurements more often have to do with the choice of the time of the first measurement?
 
  • #50
Demystifier said:
It doesn't really matter when the sampling starts, i.e. what time is chosen as "initial".
That would suggest that you were discussing (in your paper) strictly exponential decay, so that P(n)/P(n-1) is a constant.
Is that really the case?
If it isn't, then knowledge of what measurements were attempted before ##t_0## could be used to tweak the P(n)'s.


Demystifier said:
I don't understand. What does performing measurements more often have to do with the choice of the time of the first measurement?
So you have two labs (a and b), each making observations at the same delta t (say 100 msec), and each one sets up for a series of measurements at the start of each minute. By 2 seconds into the minute, lab a has already made 21 measurements but lab b is just making its first measurement. My thought is that ##P_b(20)## will be greater than the sum of ##P_a(0)## to ##P_a(20)## only because the rapid samplings at 'a' would have a Zeno-type effect, keeping the experiment in its initial state.
 
  • #51
Morbert said:
IIRC from a previous thread, it was concluded that the Zeno effect isn't due to measurement per se, but to the interaction between the measurement device and the measured system. I.e. the effect is present in the dynamics (even for seemingly, but not actually, indirect measurement scenarios) regardless of when we build "collapsed states" of measurement outcomes from some projective decomposition. https://www.physicsforums.com/threads/geiger-counters-and-measurement.1015428/
Indeed. Measurement always involves an interaction between the measurement device and the measured system. While for macroscopic objects one can often neglect the influence of the measurement on the system, that's impossible for microscopic objects like single elementary particles. For sure, the explanation of the quantum Zeno effect is not to be based on an abuse of the well-understood mathematics of quantum theory and self-adjoint (not hermitean!) operators in Hilbert space!
 
  • #52
.Scott said:
That would suggest that you were discussing (in your paper) strictly exponential decay, so that P(n)/P(n-1) is a constant.
Is that really the case?
If it isn't, then knowledge of what measurements were attempted before ##t_0## could be used to tweak the P(n)'s.
Measurements performed before ##t_0## are irrelevant to events that will happen after ##t_0##. The decay does not need to be exponential, but the stochastic process is Markovian (if you are familiar with that concept).
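To spell out the Markov point with a worked equation (a sketch, not the notation of our paper; I write ##q_k## for the conditional probability of a click at ##t_k## given no earlier click): the first-click distribution factorizes as
$$P(x) = q_x \prod_{k=0}^{x-1}(1-q_k),$$
so in the normalized tail the common history factor ##\prod_{k=0}^{n-1}(1-q_k)## cancels,
$$T(x) = \frac{P(x+n)}{\sum_{m\ge n} P(m)} = \frac{q_{x+n}\prod_{k=n}^{x+n-1}(1-q_k)}{\sum_{m\ge n} q_m \prod_{k=n}^{m-1}(1-q_k)}.$$
Only the hazards from ##t_n## onward survive, which is the precise sense in which what happened before is irrelevant, provided the ##q_k## themselves do not depend on the earlier measurement schedule.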
 
  • #53
vanhees71 said:
For sure, the explanation of the quantum Zeno effect is not to be based on an abuse of the well-understood mathematics of quantum theory and self-adjoint (not hermitean!) operators in Hilbert space!
Just tell me one thing. If a wave packet travels towards the detector, and a part of the wave packet has already entered the detector region but the detector has not clicked yet, do I have to update the wave function of the packet?
 
  • #54
Demystifier said:
If a wave packet travels towards the detector, and a part of the wave packet has already entered the detector region but the detector has not clicked yet, do I have to update the wave function of the packet?
Does your model include the detailed interaction between the wave packet and the detector? Or does your model just treat the detector as a black box that emits clicks?

The answer to your question will depend on which type of model you have.
 
  • #55
Demystifier said:
Measurements performed before ##t_0## are irrelevant to events that will happen after ##t_0##. The decay does not need to be exponential, but the stochastic process is Markovian (if you are familiar with that concept).
I just read an article on "Markovian", and it involved a "current state", which I believe is at the crux of my question. In essence, I am asking about the nature of this "current state" in the kind of experiments explored in your paper.

I think I have a better way of asking this question. Basically I want to know whether there can be a QM case where this "current state" can be interrogated to reveal some of its history.

So, let's say that I have 512 experimental set-ups numbered 0 to 511. Each one simply repeats the same experiment over and over, but each one runs a slightly different variation of what is described in your paper. In all 512 set-ups, ##t_0##, ##t_{10}##, and all measurements after ##t_{10}## are always measured. But depending on the set-up, some of the ##t_1## to ##t_9## measurements are made and some are skipped.

As examples:
In set-up number 0 (binary 000000000), ##t_1## through ##t_9## are all skipped.
In set-up number 1 (binary 000000001), ##t_1## through ##t_8## are skipped, but ##t_9## is made.
In set-up number 9 (binary 000001001), only ##t_6## and ##t_9## are made, the other 7 are skipped.

In set-up number 511, all the ##t_n##'s are measured, so the P(n)'s can be calculated exactly from the equations in your paper.
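In code, the encoding I have in mind would look like this (a hypothetical helper, using the bit convention implied by the examples above, where the most significant of the 9 bits stands for ##t_1##):

```python
def schedule(setup: int) -> list[int]:
    """Indices k in 1..9 at which the t_k measurement is made in this set-up."""
    return [k for k in range(1, 10) if (setup >> (9 - k)) & 1]

print(schedule(0))    # []     : t_1 through t_9 all skipped
print(schedule(1))    # [9]    : only t_9 is made
print(schedule(9))    # [6, 9] : t_6 and t_9 are made
print(schedule(511))  # [1, 2, 3, 4, 5, 6, 7, 8, 9] : all nine are made
```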

In every case, we capture measurement results starting with ##t_{10}##.

Question: Based only on the results of that captured information, could it ever be possible to deduce which data set goes with which set-up?

Can that much information be available in what the description of "Markovian" refers to as the "current state"?
 
  • #56
Demystifier said:
I don't have time to read the whole thread. Can you tell me which post concludes it?
Here is the relevant paper, which the thread eventually touches on: https://arxiv.org/abs/quant-ph/0307075

The paper models a continuous indirect measurement of the decay of an unstable atom with a Hamiltonian consisting of three terms: the atom, the interaction between the atom and the photon+detector terms that couple to it, and the photon+detector terms decoupled from the atom (see equations 7+8). The detector is physical insofar as it has a finite bandwidth.

The distinction between measurement and non-measurement is made with the Hamiltonian, specifically a form-factor that gets renormalised when the detector is present.
 
  • #57
PeterDonis said:
Does your model include the detailed interaction between the wave packet and the detector?
No.
PeterDonis said:
Or does your model just treat the detector as a black box that emits clicks?
Yes.
 
  • #58
.Scott said:
Can that much information be available in what the description of "Markovian" refers to as the "current state"?
No.
 
  • #59
Demystifier said:
Measurements performed before ##t_0## are irrelevant to events that will happen after ##t_0##. The decay does not need to be exponential, but the stochastic process is Markovian (if you are familiar with that concept).
The decay cannot even be exactly exponential, because the energy is bounded from below. See the textbook by Sakurai about this. The stochastic process of a closed quantum system is Markovian.
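A sketch of the standard argument (Khalfin's theorem), assuming a spectral density ##\rho(E)## that vanishes below the ground-state energy ##E_{\min}##: the survival amplitude is
$$A(t) = \langle\psi|e^{-iHt/\hbar}|\psi\rangle = \int_{E_{\min}}^{\infty} dE\, \rho(E)\, e^{-iEt/\hbar}.$$
Since the Fourier transform of ##A## is supported on a half-line, the Paley-Wiener theorem requires
$$\int_{-\infty}^{\infty} \frac{|\ln|A(t)||}{1+t^2}\, dt < \infty,$$
which a strict exponential ##|A(t)| = e^{-\Gamma t/2}## would violate. Hence the decay law must deviate from exponential at very short and very long times.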
 
  • #60
vanhees71 said:
No matter what, I'm not convinced that starting from obviously wrong maths leads to anything useful.
Sometimes it does. A good example is the Dirac delta function: when Dirac introduced it, it was "obviously wrong", as it was not even a function. Nevertheless, it produced results that made perfect sense from a physical point of view, and later Laurent Schwartz found a way to make it "right" in a rigorous mathematical sense. I'm convinced that we have found something similar, in the sense that our idea is essentially consistent and correct, even if it still needs to be somewhat refined for the sake of mathematical purity. We developed our idea on the level of Dirac (and a little bit more); what remains is for someone to do it on the level of Schwartz.
 
  • #62
Closing this thread since the OP's question has been answered.

Jedi
 
