Incompleteness of Griffiths' consistent histories interpretation

In summary: the "Everett interpretation" was not considered because it would have required "erasing the history of the interpretation" (to paraphrase Saunders). This is the principle of Liberty: the physicist can use whatever framework he chooses when describing a system. This is a nice principle, but it raises the question whether it would not be more straightforward to require only ##\operatorname{Re}D(\alpha,\beta)=0## for all ##\alpha\neq \beta##, which would be enough to ensure the applicability of classical logic and probability theory within a single framework. This is not done because of the observation by Lajos Diósi that the (canonical) weaker condition runs into trouble for (tensor) products of independent systems (see the discussion of the Diósi paper below in the thread).
  • #106
Demystifier said:
Which leads me to the crux of the problem. In CH interpretation, what is a feature of the matter (rather than a feature of particular descriptions)?
Vanilla* CH won't let you assert properties independent of some declared set of histories. How palatable you find this will determine your attitude towards CH, which is reasonable. But at the very least, in exchange for accepting the protean nature of quantum state spaces, we get clear and unambiguous accounts of measurement processes and closed quantum systems.

*Extended probability decoherent histories (EP-DH) privileges a specific fine-grained set (histories of particle positions in the case of quantum mechanics), and hence might be more palatable.
 
  • #107
Morbert said:
@Fra I'm not sure I understand your objections. I linked that paper to square two points:
i) CH does not select any set as more correct than any other
ii) We rely on sets of histories of quasiclassical/hydrodynamic variables to describe our experiences and observations

I.e. There is a selection principle imposed by our quasiclassical nature. It's just that it doesn't select sets of quasiclassical histories as more correct. Instead it selects sets of quasiclassical histories as more useful for quasiclassical beings like ourselves.

This is distinct from a typical Copenhagen treatment because while Copenhagen keeps the classical world outside our quantum description (either via a Heisenberg cut or a region of quantum-classical ambivalence), CH lets us describe all components of measurement quantum mechanically.
My dissatisfaction with this is that it provides no additional explanation of either the "quantum mechanical" nature of interactions or the nature of the Hamiltonian, which is what I see as the implicit challenge.

It even worsens things by removing the ("classical") context where the inferences on the quantum part are made (the empirically sane part). In my view, trying to "get rid" of the measurement-centered stance is not a goal; I prefer to embrace it. Instead CH seems to assume that the model we have corroborated for small subsystems is valid for the whole universe imagined as a closed system. That generally leads to unreasonable fine-tuning, and seen as a learning algorithm I find it not viable. Such methods do work for small subsystems, but we are seduced by their success and try to apply them to the whole universe, which I think is a fallacy.

/Fredrik
 
  • #108
Fra said:
My dissatisfaction with this is that it provides no additional explanation of either the "quantum mechanical" nature of interactions or the nature of the Hamiltonian, which is what I see as the implicit challenge.
I don't think any theory or interpretation will do that. There will always be some ab initio assumptions.
Instead CH seems to assume that the model we have corroborated for small subsystems is valid for the whole universe imagined as a closed system. That generally leads to unreasonable fine-tuning, and seen as a learning algorithm I find it not viable. Such methods do work for small subsystems, but we are seduced by their success and try to apply them to the whole universe, which I think is a fallacy.
Skimming the literature, I see a fair few CH publications addressing the relation between classical and quantum phenomena, and how quantum is the more universal paradigm: see chapter 6 of Roland Omnes's "The Interpretation of Quantum Mechanics" or chapters 10, 11, 16-19 of his "Understanding Quantum Mechanics" or this paper (pdf) by Omnes exploring quantum-classical correspondence. Or this paper by Omnes, or this paper by Gell-Mann, etc.

Re/ the closed universe and fine-tuning: CH doesn't insist the universe is closed; it just provides a way to treat the measurer and the measured together under the same theory, which is useful for cosmologists who want to apply this thinking to the universe as a whole, as if it were closed. I'm not a cosmologist so I'm not too familiar with the state of the art, but I know people like JJ Halliwell have written extensively on the application of decoherent histories to cosmology. Fine-tuning will presumably ultimately be addressed by a theory like inflation or CCC or some no-boundary proposal (if it can be recovered from recent criticisms by Neil Turok et al) rather than by an interpretation per se.
 
  • #109
Morbert said:
Could you expand on this? I.e. What do you think is the significance of the Diosi paper re/ incompleteness of CH?
gentzen said:
The Diósi paper is a strong indication that a closer investigation of (tensor) products should have been done, or rather that issues which occur in the context of (tensor) products had been neglected before. Which issues exactly I cannot say, because I have not done that investigation either. Maybe this is related to what Griffiths writes in the section "8.4 Open issues" (or maybe not):
I have now studied in more detail what people do with CH and what they wrote about it. Basically, they do what is convenient, and this includes working with inhomogeneous histories by just algebraically adding them. Section "8.4.1 Entangled histories" is about something less convenient, even though it also happens when you use such inhomogeneous histories for coarse graining. In conclusion, I would say that this section is unrelated to a closer investigation of (tensor) products.

Getting rid of the real part in the consistency condition is desirable, because it simplifies the structure of the theory, and is also the right thing to do in many ways. But improving the analysis of (tensor) products and independence would probably just indicate that those don't force giving up that real part. In a certain sense, CH is constructed in a simple but non-relativistic way. And classical logic (and probability theory) too are enforced in CH in a rather simple way. That is a strength, not a weakness. But it makes it hard to upgrade CH to QFT (without sacrificing that simplicity), and also hard to "properly" analyse locality with it (without slightly cheating, by going outside of the provided logical language, and by "proving too little").

Morbert said:
@gentzen I think it's ok to interrogate assumed distinctions in interpretations. Lubos Motl once called consistent histories a homework problem for Copenhagen. There's no problem with Saunders arguing that Hartle agrees with him if his argument is interesting (I don't know if it is). And if consistent histories as a project ends up being useful to people doing work on many worlds, great!
Saunders claims here that people like Halliwell or Hartle would actually support MWI. I always wondered where the claim that CH and MWI were nearly identical came from. This video made me look into Saunders' writings, and indeed I could find that claim there, written in a way that I did not like. Wallace later demonstrated how such "content" can be written in an appropriate way, by inserting footnotes like "I do not want to make any historical claim here as to the influence or otherwise of Dowker and Kent’s work on proponents of consistent-histories approaches: my account is intended to capture the logic of the situation, rather than its chronology." or by parenthetical clarifications like "(And indeed, this is how Hartle, on my reading, does seem to understand it; see Hartle 2010.)"

For Saunders, I haven't yet read stuff that felt like being influenced by CH, Hartle, or Gell-Mann. For Wallace, it was rather the opposite: I first heard stuff from him which I thought made sense, and later found out that Hartle had said similar things (I don't care who said them first, they could be wrong anyway).
What is an Observer? A Panel with James Hartle, Susanne Still, David Wallace, and Alan Guth
The Return of the Observer by James Hartle

For Wallace, what confuses me is how he can base arguments for MWI on CH, and then claim that MWI would just work for QFT, while CH seems non-relativistic to me.

Morbert said:
Also, Weinberg's objections here seem somewhat subjective. Just as the freedom of a physicist to choose a coarse-graining is a feature not a bug, perhaps the ability to choose from different fine-grainings is also a feature.
I find his objections fair, because both Griffiths and Omnès are quite clear that they like Copenhagen in general. So it seems fair to use the general objections against instrumentalism against them.
 
  • #110
gentzen said:
Saunders claims here that people like Halliwell or Hartle would actually support MWI. I always wondered where the claim that CH and MWI were nearly identical came from. This video made me look into Saunders' writings, and indeed I could find that claim there, written in a way that I did not like. Wallace later demonstrated how such "content" can be written in an appropriate way, by inserting footnotes like "I do not want to make any historical claim here as to the influence or otherwise of Dowker and Kent’s work on proponents of consistent-histories approaches: my account is intended to capture the logic of the situation, rather than its chronology." or by parenthetical clarifications like "(And indeed, this is how Hartle, on my reading, does seem to understand it; see Hartle 2010.)"

For Saunders, I haven't yet read stuff that felt like being influenced by CH, Hartle, or Gell-Mann. For Wallace, it was rather the opposite: I first heard stuff from him which I thought made sense, and later found out that Hartle had said similar things (I don't care who said them first, they could be wrong anyway).
What is an Observer? A Panel with James Hartle, Susanne Still, David Wallace, and Alan Guth
The Return of the Observer by James Hartle
Saunders should probably be careful with his double-edged sword. I could see Hartle and Gell-Mann accepting the charge that they are Everettian and proceeding to distinguish Everettian contributions from contemporary MWI talk of all histories occurring. Here Gell-Mann stipulates the "equally real histories" of Everett to mean the theory treats all histories equally apart from their probabilities, and not to mean all histories occur.

For Wallace, what confuses me is how he can base arguments for MWI on CH, and then claim that MWI would just work for QFT, while CH seems non-relativistic to me.
Hmm, I thought one of the features of CH was that it is readily generalisable. Here, Hartle discusses CH applied to Hamiltonian QM, to sum-over-histories QM (with and without time), and to QFT. Here he outlines "Generalised Decoherent Histories Quantum Mechanics of Quantum Spacetime (GDH)".
 
  • #111
Thanks for your suggestions. My main message in the thread was a way in which I find CH incomplete, though so are many other interpretations IMO. I've learned that different people think QM or various interpretations are "incomplete" in different ways (beyond the old way Einstein thought QM was "incomplete", which I do not even count).
Morbert said:
I don't think any theory or interpretation will do that. There will always be some ab initio assumptions.
Yes, I agree none of the major mainstream interpretations do this as far as I know, which is a decent reason for trying to invent one that does, while keeping ab initio assumptions at a minimum and placing them where I think it is more natural, which is in the observer/agent-centered view.

Morbert said:
Skimming the literature, I see a fair few CH publications addressing the relation between classical and quantum phenomena, and how quantum is the more universal paradigm: see chapter 6 of Roland Omnes's "The Interpretation of Quantum Mechanics" or chapters 10, 11, 16-19 of his "Understanding Quantum Mechanics" or this paper (pdf) by Omnes exploring quantum-classical correspondence. Or this paper by Omnes, or this paper by Gell-Mann, etc.
Hmm, my point was subtle. I do not suggest that the classical paradigm is better than the quantum paradigm in any way. IF you had to choose, of course QM is more universal, I agree. I just find that, from the way the theory is constructed, and if you TRY to understand it from inference, the quantum description sort of must "live" in a domain where an observer can register and store information and make preparations(*). If you try to explain this from another perspective, one is no longer answering the original question. I can see how one can say: just assume the paradigm of QM (Hilbert space + Hamiltonian), or take it as an axiom, and then argue from there about how to explain measurements from the bigger perspective; then CH could make more sense. I just don't think that answers the right question, and I do not see how it helps solve any open questions.

(*) Edit: And as this solid reference (i.e. classical reality) does not, as we all agree, actually exist except in a classical or quasi-classical sense, the theory itself might need revisions to make sense even without it; but without instead trying to make use of some imaginary external view, which does not need measurements/interactions but is just postulated.

/Fredrik
 
  • #112
Fra said:
In my view, trying to "get rid" of the measurement-centered stance is not a goal; I prefer to embrace it. Instead CH seems to assume that the model we have corroborated for small subsystems is valid for the whole universe imagined as a closed system.
Making measurements superfluous is certainly one of the main goals of CH. And doing so in a closed system, preferably even one which is rather small, is also one of the goals. Why rather small? Because they want to calculate!

Now Hartle undoubtedly wants to think about cosmology (or "the whole universe"). But he acknowledges that CH has to be "generalized" for that. So modeling the whole universe as a closed system described by CH may not be among the goals of "pure" CH. This is also hinted at by how environmental decoherence is integrated with CH by proponents, namely as a way to justify why the consistency conditions will be satisfied (approximately) in typical measurement situations.

The connection to "thinking about cosmology" is probably rather that CH wants to recover normal logic and probabilistic reasoning. But how does CH recover normal logic? In your terms, by ensuring that there can be an agent which knows (remembers) all those things which have happened in the past. I guess you don't like that, but that is why I investigated it, and now write it down. The consistency conditions simply force the Hilbert space to be big enough, at least if the initial state is pure (##\rho = \ket{\phi}\bra{\phi}##). The consistency condition ##\operatorname{Tr}(C_\alpha\rho C_\beta^\dagger)=0## then reduces to ##(C_\beta\ket{\phi},C_\alpha\ket{\phi})=0##, i.e. the vectors ##C_\alpha\ket{\phi}## are orthogonal. So if there are ##m## histories ##\alpha## with non-zero probability, then the dimension ##N## of the Hilbert space satisfies ##N \geq m##.
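To make this concrete, here is a minimal NumPy sketch of the pure-state case (a toy model of my own; the single-time projector family and the random unitary are arbitrary choices, not anything from the Diósi paper). The branch vectors come out orthogonal, the decoherence functional is diagonal, and the number of nonzero-probability histories cannot exceed ##N##:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                          # Hilbert-space dimension

# Random pure initial state |phi> and a random unitary U as "time evolution"
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi /= np.linalg.norm(phi)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Single-time histories C_a = P_a U, with P_a = |a><a| in a fixed basis
def chain(a):
    P = np.zeros((N, N), dtype=complex)
    P[a, a] = 1.0
    return P @ U

branches = [chain(a) @ phi for a in range(N)]  # branch vectors C_a|phi>

# Decoherence functional D(a,b) = Tr(C_a rho C_b^dagger) = <C_b phi|C_a phi>
D = np.array([[np.vdot(branches[b], branches[a]) for a in range(N)]
              for b in range(N)])

assert np.allclose(D, np.diag(np.diag(D)))     # off-diagonals vanish: consistent set
m = int(np.sum(np.real(np.diag(D)) > 1e-12))   # histories with nonzero probability
print(m, "<=", N)                              # the bound m <= N
```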

If the initial state ##\rho## has rank ##r## instead of being pure, then we only get ##r N \geq m##. (One proves this by replacing ##\rho## with a pure state in a suitably enlarged Hilbert space, see Diósi for details.) This bound can really be achieved, for example take ##r## (orthogonal) projectors ##P_i## to eigenspaces of ##\rho## at ##t_1##, and ##N## (orthogonal) projectors ##Q_j## at ##t_2##, such that ##Q_j P_i \neq 0## for all ##i,j##. Then ##\operatorname{Tr}(Q_l P_k \rho P_i Q_j)=\delta_{ik}\delta_{jl}p_{ij}## with ##p_{ij}>0##. This result at first feels a bit less intuitive. Perhaps an intuitive explanation is that CH is about suppressing interference, and starting with a density matrix where interference is already suppressed is just as good as knowing which things happened in the past. Hmm ... not overly convincing. Maybe that was one reason why Gell-Mann and Hartle additionally investigated strong decoherence conditions.
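And a matching sketch of that rank-##r## construction (again a toy example of my own; the Fourier basis at ##t_2## is just a convenient way to guarantee ##Q_j P_i \neq 0## for all ##i,j##). It yields ##rN## mutually consistent histories with nonzero probability in an ##N##-dimensional space:

```python
import numpy as np

N, r = 4, 2                                    # Hilbert dimension and rank of rho

# rho of rank r, diagonal in the computational basis {|e_i>}
lam = np.array([0.6, 0.4] + [0.0] * (N - r))
rho = np.diag(lam).astype(complex)

# P_i: projectors onto the r eigenvectors of rho with lam_i > 0, at t1
# Q_j: projectors onto the Fourier basis {|f_j>} at t2, so <f_j|e_i> != 0 always
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
P = [np.diag((np.arange(N) == i).astype(complex)) for i in range(r)]
Q = [np.outer(F[:, j], F[:, j].conj()) for j in range(N)]

# Two-time chain operators C_(i,j) = Q_j P_i and their decoherence functional
hist = [(i, j) for i in range(r) for j in range(N)]
C = [Q[j] @ P[i] for (i, j) in hist]
D = np.array([[np.trace(Ca @ rho @ Cb.conj().T) for Ca in C] for Cb in C])

assert np.allclose(D, np.diag(np.diag(D)))     # consistent, despite r*N > N histories
print(len(hist), "histories, probabilities:", np.real(np.diag(D)).round(3))
```

The printed probabilities are ##p_{ij}=\lambda_i/N## and sum to 1, so all ##rN## histories really occur with nonzero probability.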
 
  • #113
Fra said:
I can see how one can say: just assume the paradigm of QM (Hilbert space + Hamiltonian), or take it as an axiom, and then argue from there about how to explain measurements from the bigger perspective; then CH could make more sense. I just don't think that answers the right question, and I do not see how it helps solve any open questions.
This charge could also be laid at classical theories and their interpretations, no? We would judge a classical theory by its ability to reproduce characteristics of our world, without insisting the theory explain its success either inherently or through an associated interpretation.

CH as a project does not intend to explain the success of quantum theories. It instead seeks to eliminate the ambiguities associated with more traditional presentations of quantum theories that explicitly invoke measurement contexts.
 
  • #114
gentzen said:
The connection to "thinking about cosmology" is probably rather that CH wants to recover normal logic and probabilistic reasoning. But how does CH recover normal logic? In your terms, by ensuring that there can be an agent which knows (remembers) all those things which have happened in the past. I guess you don't like that, but that is why I investigated it, and now write it down. The consistency conditions simply force the Hilbert space to be big enough, at least if the initial state is pure (##\rho = \ket{\phi}\bra{\phi}##). The consistency condition ##\operatorname{Tr}(C_\alpha\rho C_\beta^\dagger)=0## then reduces to ##(C_\beta\ket{\phi},C_\alpha\ket{\phi})=0##, i.e. the vectors ##C_\alpha\ket{\phi}## are orthogonal. So if there are ##m## histories ##\alpha## with non-zero probability, then the dimension ##N## of the Hilbert space satisfies ##N \geq m##.

If the initial state ##\rho## has rank ##r## instead of being pure, then we only get ##r N \geq m##. (One proves this by replacing ##\rho## with a pure state in a suitably enlarged Hilbert space, see Diósi for details.) This bound can really be achieved, for example take ##r## (orthogonal) projectors ##P_i## to eigenspaces of ##\rho## at ##t_1##, and ##N## (orthogonal) projectors ##Q_j## at ##t_2##, such that ##Q_j P_i \neq 0## for all ##i,j##. Then ##\operatorname{Tr}(Q_l P_k \rho P_i Q_j)=\delta_{ik}\delta_{jl}p_{ij}## with ##p_{ij}>0##. This result at first feels a bit less intuitive. Perhaps an intuitive explanation is that CH is about suppressing interference, and starting with a density matrix where interference is already suppressed is just as good as knowing which things happened in the past. Hmm ... not overly convincing. Maybe that was one reason why Gell-Mann and Hartle additionally investigated strong decoherence conditions.
The following might be an aside, and might not have a sharp point, but the concept of "an agent that knows all those things which have happened in the past" is an interesting idea that probably hides some subtlety, and touches on my comment in the other thread. Using your example above, and assuming a pure state, we have some set that satisfies the medium decoherence condition ##(C_\beta\ket{\phi},C_\alpha\ket{\phi})=0##. We also know that ##C_\alpha|\phi\rangle = c|\phi_\alpha\rangle##, and so we can identify the "pointer observable" ##O = \sum^m_\alpha \lambda_\alpha |\phi_\alpha\rangle\langle \phi_\alpha| = \sum_\alpha^m \lambda_\alpha P_{\phi_\alpha}##, allowing us to extend our histories like so: ##C^\dagger_\alpha \rightarrow C^\dagger_\alpha P_{\phi_\alpha}(T)##, where ##T## is some time in the future of each history ##C_\alpha##. I.e. the property ##P_{\phi_\alpha}(T)## records the history ##C_\alpha##, and if ##m=N## then ##O## cannot be further fine-grained. So it would be impossible to record history if ##m\gt N##, and a history being recordable implies that the history belongs to a framework satisfying medium decoherence. "Exact medium decoherence can thus be characterized by records, and the physical formation of records is a way to understand mechanisms by which medium decoherence occurs" -- Gell-Mann and Hartle

So we could associate an agent that knows all things that happened in the past with an observable that constitutes an exhaustive record of a history in a given maximally fine-grained framework satisfying medium decoherence. Of course, this stretches the notion of an agent. If we insist our agent is a human, or some similar entity that relies on biology, then ##m \ll N## and the histories our agent can know will be much more coarse-grained.

The subtlety is, while ##O## records the history in the set ##\{C_\alpha\}## that occurs, we can also identify an observable ##O'## that doesn't commute with ##O##, and records the history in the alternative set ##\{C_{\alpha'}\}##. Working with this set, we can't consistently extend these histories with ##O##, and so we cannot conclude the agent ##O## records these histories.

So we might alternatively say there is no agent which knows all things that happened in the past in the sense that there is no pointer observable capable of resolving which history occurs for all frameworks.
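For concreteness, here is a small NumPy sketch of the record construction (using the same toy single-time consistent set as in the earlier snippet; the record projectors are just projectors onto the normalized branch vectors, taken to act at the later time ##T## with trivial evolution in between, which is my simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi /= np.linalg.norm(phi)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

def chain(a):                                   # toy histories C_a = P_a U
    P = np.zeros((N, N), dtype=complex)
    P[a, a] = 1.0
    return P @ U

branches = [chain(a) @ phi for a in range(N)]
probs = [np.vdot(b, b).real for b in branches]

# Record projectors P_{phi_a} = |phi_a><phi_a| from the normalized branches,
# and a pointer observable O = sum_a lambda_a P_{phi_a} with distinct eigenvalues
records = [np.outer(b, b.conj()) / p for b, p in zip(branches, probs)]
O = sum((a + 1) * records[a] for a in range(N))

# Extend each history with its own record projector (trivial evolution up to T)
extended = [records[a] @ chain(a) for a in range(N)]
ext_probs = [np.vdot(Ce @ phi, Ce @ phi).real for Ce in extended]

# The record "fires" with probability 1 given its history, and never otherwise
assert np.allclose(probs, ext_probs)
assert np.allclose(records[0] @ branches[1], 0)
```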

[edit]-
Perhaps an intuitive explanation is that CH is about suppressing interference, and starting with a density matrix where interference is already suppressed is just as good as knowing which things happened in the past. Hmm ... not overly convincing. Maybe that was one reason why Gell-Mann and Hartle additionally investigated strong decoherence conditions.
After working through the case where ##\rho## is not pure, I think we get an interesting result: It allows us to consistently increase the number of histories in our set to ##rN## as you show, but now there is no agent that can know which history occurred with certainty, because there is no observable with a large enough decomposition to correlate with each possible history, even though these histories satisfy medium decoherence.

[edit 2] - Oops, after rereading your post I think I just ended up saying the same thing you did
 
  • #115
Morbert said:
This charge could also be laid at classical theories and their interpretations no?
Yes, absolutely...
Morbert said:
We would judge a classical theory by its ability to reproduce characteristics of our world, without insisting the theory explain its success either inherently or through an associated interpretation.
... but as I see it we aren't judging classical theories anymore (that's moot). IMHO, classical theories are not held to the same inferential standard as a theory of measurement. This is why my expectations for a reconstructed theory of measurement are higher than anything we ever had in classical mechanics.
Morbert said:
CH as a project does not intend to explain the success of quantum theories. It instead seeks to eliminate the ambiguities associated with more traditional presentations of quantum theories that explicitly invoke measurement contexts.
Yes I see, this is fine, it's nice to see there is an interpretation for every obsession :smile:

/Fredrik
 
  • #116
gentzen said:
The consistency conditions simply force the Hilbert space to be big enough
Yes. Indeed this is a general issue with QM: if you take QM to be fundamentally correct in its mathematical structure for the whole universe, you are led to a description with a Hilbert space and Hamiltonian that is out of bounds for any reasonable computation and representation. You are also typically led into ridiculous fine-tuning problems when trying to determine all the parameters required to even attempt to define it. As long as we do not have to actually compute something, this may seem like a non-issue, as mathematicians can always consider an imaginary supercomputer that can do it. But then we lose contact with reality.

A "theory" that is not computable, is not a viable theory IMO.

For small subsystems, the QM formalism makes perfect sense, even though the reason for the quantum logic probably can be clarified. So if the discussion is limited to small subsystems, then most of my objections are not relevant. But as we have no QG theory, I always keep this in mind even when pondering QM foundations, as I think they are connected.

/Fredrik
 
  • #117
There are some interesting things discussed here... but I am not sure what to say, as I think we are entering the realm of speculation outside the mainstream interpretations...
Morbert said:
The following might be an aside, and might not have a sharp point, but the concept of "an agent that knows all those things which have happened in the past" is an interesting idea that probably hides some subtlety, and touches on my comment in the other thread. Using your example above, and assuming a pure state, we have some set that satisfies the medium decoherence condition ##(C_\beta\ket{\phi},C_\alpha\ket{\phi})=0##. We also know that ##C_\alpha|\phi\rangle = c|\phi_\alpha\rangle##, and so we can identify the "pointer observable" ##O = \sum^m_\alpha \lambda_\alpha |\phi_\alpha\rangle\langle \phi_\alpha| = \sum_\alpha^m \lambda_\alpha P_{\phi_\alpha}##, allowing us to extend our histories like so: ##C^\dagger_\alpha \rightarrow C^\dagger_\alpha P_{\phi_\alpha}(T)##, where ##T## is some time in the future of each history ##C_\alpha##. I.e. the property ##P_{\phi_\alpha}(T)## records the history ##C_\alpha##, and if ##m=N## then ##O## cannot be further fine-grained. So it would be impossible to record history if ##m\gt N##, and a history being recordable implies that the history belongs to a framework satisfying medium decoherence. "Exact medium decoherence can thus be characterized by records, and the physical formation of records is a way to understand mechanisms by which medium decoherence occurs" -- Gell-Mann and Hartle

So we could associate an agent that knows all things that happened in the past with an observable that constitutes an exhaustive record of a history in a given maximally fine-grained framework satisfying medium decoherence. Of course, this stretches the notion of an agent. If we insist our agent is a human, or some similar entity that relies on biology, then ##m \ll N## and the histories our agent can know will be much more coarse-grained.
A possible conceptual solution to the limited-encoding problem that I envision is that, if you consider a common evolutionary context, an agent can be in a sense "consistent" with a past that it has technically "forgotten/erased", in the sense that positions that are never "challenged" by the environment do not need to be explicitly stored. In this way, evolution solves a fine-tuning problem, and the encoding problem.

But as I read CH, there are no such ideas in there, as the idea of evolution is definitely a new theory. I can see this as a potential way out of some of these ponderings... but if one insists on a pure interpretation, I fear we are unlikely to get anywhere except in circles, because as I see it at least most of the objections I have are rooted in the paradigm of QM itself; you just can't get around them with "interpretations" alone.

/Fredrik
 
  • #118
Morbert said:
It instead seeks to eliminate the ambiguities associated with more traditional presentations of quantum theories that explicitly invoke measurement contexts.
The reason WHY I prefer to embrace the "ambiguous measurement" context, rather than hide it, is this:

Like many others, I also see that of course a "measurement process" is nothing but a "physical interaction"! As an observer is just made of matter, this is all clear; I don't think anyone questions this.

But obviously there are dual views here? You can equally turn it around and say that physical interactions are just parts of the universe making inferences/measurements on each other. This is my choice, and when you take this perspective, which should be an allowed dual, it seems clear that we need another paradigm.

It's trying to solve the same problem in two dual ways, and each strategy has its own problems, but each view also offers unique insights.

/Fredrik
 
  • #119
One of the ideas triggered by this thread was that CH could be incomplete as a generalized probability theory, by not singling out the complex numbers. After working through Scott Aaronson's Why are amplitudes complex? (17 Dec 2018), my conclusion is that this is true. The formalism of CH works just as well for real numbers and for quaternions. (And none of the interpretation-related discussions raises any points changing that conclusion.) It is even a good thing in a certain sense, in that this leads to well-recognizable differences (for example, the allowed projectors are different, as is the consistency condition) between the three variants. (What is also good is that the notion of locality makes sense for all three variants, does occur in the interpretation-related discussions, and hence can be "granted" to CH too.)
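As a toy illustration of the real-number variant (my own example, not Aaronson's): nothing in the consistency check refers to the complex structure, so the same machinery runs verbatim over a real Hilbert space (quaternions would need a dedicated type, but the formal structure is the same):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
phi = rng.normal(size=N)                       # real state vector
phi /= np.linalg.norm(phi)
R, _ = np.linalg.qr(rng.normal(size=(N, N)))   # real orthogonal "time evolution"

def chain(a):                                  # real projectors P_a = |a><a|
    P = np.zeros((N, N))
    P[a, a] = 1.0
    return P @ R

branches = [chain(a) @ phi for a in range(N)]
D = np.array([[branches[b] @ branches[a] for a in range(N)] for b in range(N)])
assert np.allclose(D, np.diag(np.diag(D)))     # the consistency check, purely real
```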

But I also learned that this part is hard to capture, and most interpretations are incomplete in this sense. The main exceptions are the information-based interpretations and QBism. The important point seems to be that a state which can be reproduced arbitrarily often is measurable, and not just "theoretically measurable", but measurable by local quantum measurements and the correlations between the classical results of those local measurements. MWI certainly fails in this respect, but it remains slightly unclear whether Copenhagen (or the minimal statistical interpretation) "fully" fails or not. After all, it somehow assumes that the initial state is prepared by a repeatable procedure, and that it can be known. While trying to learn whether newer developments addressed this point, I came across What is quantum mechanics? A minimal formulation (11 Nov 2017) by R. Friedberg, P. C. Hohenberg. Which is sad, because those are former CH proponents, who now dismiss CH as Excess Baggage. But they don't manage to single out complex numbers either, nor do their references. And CH is dismissed because the structure of the consistent sets is complicated, not because they would have explained some "real insight" that would have made them superfluous. (And since they were focusing on nonrelativistic quantum mechanics, time and histories should have been unproblematic.)

The complicated structure of the consistent sets is an old topic. But Griffiths and Hartle's Reply to Adrian Kent (3 Oct 1997) is fully correct in its assessment. If you want, make your own theory with properties similar to CH, but don't claim that CH must be updated to fix some inconsistency. The rules for using frameworks are clear: if the facts to be discussed are included as projectors in a framework, then you can reason in that framework about those facts, and any other similar framework will give the same conclusions. So proposals like Petros Wallden's Contrary Inferences in Consistent Histories and a Set Selection Criterion might be nice theories or interpretations in their own right, but they are not needed for fixing CH.
 
  • #120
gentzen said:
generalized probability theory, by not singling out the complex numbers
Without going into murky details, my personal stance right now is that the complex state is related to the state containing a mix of non-commutative information, and the reason in turn WHY the IGUS should make use of such an encoding scheme I think might be understood in some way, so roughly these things do not worry me anymore. What I am concerned with is singling out the use of real numbers (on which complex numbers also rely, of course). This is a problem if you consider the IGUS to have limited capacity, or a bound. Then how can such an IGUS represent the continuum? This - for me - is a bigger problem. I expect the continuum model to be a limiting case of when a LARGE IGUS describes a small system. The problem with a too-large embedding is that when you consider random walks and entropy, the a priori probability for a random fixed real number is zero. And you immediately get into renormalisation questions, without us understanding the proper limit in which the continuum models do make sense. This is another reason why I am not comfortable with any of the major interpretations.

/Fredrik
 
  • #121
Fra said:
what I am concerned with is singling out the use of real numbers (on which complex numbers also rely, of course). This is a problem if you consider the IGUS to have limited capacity, or a bound. Then how can such an IGUS represent the continuum? This - for me - is a bigger problem. I expect the continuum model to be a limiting case of when a LARGE IGUS describes a small system.
The limit is rather the assumption of a state that can be "reproduced arbitrarily often". If you go to the description of bigger and bigger systems, then this assumption becomes more and more ridiculous. So the size of the IGUS is less problematic than the size of the system it tries to describe.

CH doesn't seem well suited for studying such accuracy issues. The information-based interpretations (and also the thermal interpretation) are better suited for that. For example, Scott also points out in his post an exponential accuracy issue with using correlations between classical measurement results for measuring nonlocal states. (And this issue translates into the state preparation and measurements having to be repeated ridiculously often.) Quantum memory could provide an exponential advantage here, but I once raised doubts whether preparation procedures are really as accurately repeatable as required for such results.
 
