Demystifier said:
Measurements performed before ##t_0## are irrelevant to events that will happen after ##t_0##. The decay does not need to be exponential, but the stochastic process is Markovian (if you are familiar with that concept).
I just read an article on "Markovian" processes, and it involved the notion of a "current state", which I believe is at the crux of my question. In essence, I am asking about the nature of this "current state" in the kind of experiments explored in your paper.
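To make sure I understand the memoryless property, here is a small Python sketch (my own illustration, not from the paper): it uses exponential decay as the simplest Markovian example, even though the actual decay need not be exponential. Among systems that survive to ##t_0##, the distribution of remaining lifetimes matches the original one, so history before ##t_0## drops out.

```python
import random

random.seed(0)
RATE = 1.0    # decay rate (arbitrary choice for illustration)
N = 200_000   # number of simulated decays

# Draw exponential decay times: the simplest Markovian decay process.
times = [random.expovariate(RATE) for _ in range(N)]

# Memorylessness: among systems still undecayed at t0, the distribution
# of the *remaining* lifetime matches the original lifetime distribution.
t0 = 1.0
remaining = [t - t0 for t in times if t > t0]

mean_all = sum(times) / len(times)
mean_rem = sum(remaining) / len(remaining)
print(f"mean lifetime:           {mean_all:.3f}")
print(f"mean remaining after t0: {mean_rem:.3f}")  # approximately the same
```

The two printed means agree (up to sampling noise), which is the sense in which measurements before ##t_0## are irrelevant to what happens after ##t_0##.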
I think I have a better way of asking this question: can there be a QM case where this "current state" can be interrogated to reveal some of its history?
So, let's say that I have 512 experimental set-ups, numbered 0 to 511. Each one simply repeats the same experiment over and over, but each runs a slightly different variation of what is described in your paper. In all 512 set-ups, the measurements at ##t_0##, at ##t_{10}##, and at all times after ##t_{10}## are always performed. But depending on the set-up, some of the ##t_1## to ##t_9## measurements are made and some are skipped (9 optional measurements, hence ##2^9 = 512## set-ups).
As examples:
In set-up number 0 (binary 000000000), ##t_1## through ##t_9## are all skipped.
In set-up number 1 (binary 000000001), ##t_1## through ##t_8## are skipped, but ##t_9## is made.
In set-up number 9 (binary 000001001), only ##t_6## and ##t_9## are made; the other 7 are skipped.
In set-up number 511 (binary 111111111), all the ##t_n##'s are measured, so the P(n)'s can be calculated exactly from the equations in your paper.
In every case, we capture measurement results starting with ##t_{10}##.
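For concreteness, here is a small Python sketch of the numbering convention I have in mind. The bit-to-##t_n## mapping is my own assumption, inferred from the examples above: the least-significant bit corresponds to ##t_9##, the most-significant of the 9 bits to ##t_1##.

```python
def measurements_made(setup: int) -> list:
    """Return which of t_1..t_9 are measured in a given set-up.

    Assumed convention (inferred from the examples): the least-significant
    bit of the 9-bit set-up number corresponds to t_9, the most-significant
    to t_1. t_0, t_10, and everything after t_10 are always measured.
    """
    assert 0 <= setup < 512
    return [n for n in range(1, 10) if (setup >> (9 - n)) & 1]

print(measurements_made(0))    # [] -- t_1 through t_9 all skipped
print(measurements_made(1))    # [9]
print(measurements_made(9))    # [6, 9]
print(measurements_made(511))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```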
Question: Based only on that captured information, could it ever be possible to deduce which data set goes with which set-up?
Can that much information be available in what the description of "Markovian" refers to as the "current state"?