Peter Morgan
Gold Member
Thank you. I didn't have a copy of Ballentine available yesterday. I associate a focus on preparation as distinct from measurement with Margenau, but there is surely room for Ballentine's take on that approach to differ in its details.

PeterDonis said: "In his textbook, he gives basically the same description you quoted, but he also says that an equivalent way of looking at the quantum state is as representing an ensemble of preparations all done according to the same preparation process. Since the preparation process is describable in ordinary classical terms, it seems like a better place to anchor the meaning of the quantum state than in a 'quantum system' that we can't observe directly anyway."
I very much like his Postulate 1 on page 43.
That wording delicately avoids introducing the idea of a 'system' as orthodox interpretations usually do, but leaves space for people reading it from an orthodox perspective to think there's an implied idea of an ensemble of systems.
On page 49, Ballentine introduces a modified Postulate 1a, "To each dynamical variable there is a Hermitian operator whose eigenvalues are the possible values of the dynamical variable", which is almost right for me, except that in my version I would replace "dynamical variable" with "dataset", giving
Postulate 1b: To each dataset there is a Hermitian operator whose eigenvalues include all the entries that occur in the dataset.
One point of this kind of formulation is that it makes no assertion that we are discussing particles or fields. It is too operational for most people (and also for me), so the task from this point on is to rebuild the world we experience from this much too abstract idea of what the interplay between theory and experiment is about: the comparison of expected statistics, predicted by a theory, with the statistics we can compute for actually recorded datasets, together with the decision problem we face when there is inevitably a mismatch between actual and expected.

The decision problem is subtle insofar as statistical significance is subtle: a single new data entry will only very rarely change our future predictions, and even large differences between expected and actual statistics for large datasets may for decades be judged by orthodoxy to be not statistically significant enough, or otherwise flawed, often in hard-to-pin-down ways that are subject to personal choice.
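To make the idea of Postulate 1b concrete, here is a minimal sketch of my own (an illustration, not anything from Ballentine): given a recorded dataset, a diagonal matrix over the distinct entries is a Hermitian operator whose eigenvalues include every entry that occurs in the dataset, and the empirical relative frequencies are what we would compare against a theory's predicted statistics. The dataset values and the state amplitudes below are hypothetical.

```python
# A toy illustration of "Postulate 1b": a diagonal Hermitian operator
# whose eigenvalues include every entry of a recorded dataset.
import numpy as np

dataset = [0.5, -0.5, 0.5, 0.5, -0.5]   # hypothetical recorded outcomes
values = sorted(set(dataset))            # the distinct entries
A = np.diag(values).astype(float)        # diagonal, hence trivially Hermitian

assert np.allclose(A, A.conj().T)        # Hermitian check
eigvals = np.linalg.eigvalsh(A)
# Every dataset entry appears among the operator's eigenvalues:
assert all(any(np.isclose(x, e) for e in eigvals) for x in dataset)

# Comparing expected statistics (probabilities predicted by a theory for
# a hypothetical state) with the actual relative frequencies recorded:
state = np.array([np.sqrt(0.7), np.sqrt(0.3)])   # hypothetical amplitudes
expected = np.abs(state) ** 2                     # predicted probabilities
actual = np.array([dataset.count(v) for v in values]) / len(dataset)
print("expected:", expected)
print("actual:  ", actual)
```

Whether the mismatch between `expected` and `actual` is statistically significant is, of course, exactly the decision problem described above; nothing in the construction settles that.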
I keep trying to pull my comments here back to the question of orthodoxy in the original post, but I have to apologize for my thinking being on a different continent from much of the current orthodoxy. I could go on and on about what I love about Ballentine and where I think we can helpfully adjust his take, but I will stop now.