Bob_for_short said:
Information on some object consists of many "points" like a high quality image consists of many pixels. This is what QM describes. One, single point on a screen in a double-slit experiment is useless for getting information about the whole interference picture. It is just one element of the ensemble of elements representing information about the system.
meopemuk said:
I would prefer the point of view that there is a degree of randomness (or unpredictability, or lack of a cause-effect relationship) in Nature, which cannot be explained by our existing theories.
I can relate to both of these points, and I think a possible resolution even answers the personal question I wrote in the other post:
"And what about the "information" contained in the evolution laws? Why does only some information evolve, and other parts are eternal? If "infomation is in some sense fundamental", then it appears very ad hoc or even incoherent (to me at least) to make such a distinction."
My objection here is exactly this apparent incoherence: at one level (individual QM events) there is a lack of causality, while at another level (the ensemble or statistical level) there is PERFECT causation (determinism).
This is why my personal view is that even the rules of causation (the laws of physics ~ the Hamiltonian) must be treated on the same footing, and thus we should talk about information about the laws; this is exactly what one gets in the inference approach, where "information about law" is respected. We then get a link between the two apparently unrelated levels (the single-event level and the perfect-statistics level), and I think reality is in fact somewhere in between, which means that perfect causation at the probability level is actually an idealisation. But the symptoms become apparent only in extreme domains.
So if we also constrain the causal laws to the operational perspective, then I argue that the complexity bounds of the observer actually put a physical limit on which causal relations are physically inferrable or measurable.
Then at least we reach coherence in the framework, where ALL information is subject to operational constraints. No "laws of inference" will escape as meta-laws.
Then the explanation of why causality is lacking between individual events is that it's not possible to infer - with any level of confidence - anything from a single data point. The more data we have, the more confident we get. But the limiting case of PERFECT confidence is also unphysical - THIS is not respected in current physics abstractions, and it's why I find them incoherent from this choice of analysis.
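To make the single-point vs. ensemble point concrete, here is a minimal numerical sketch (my own toy illustration, not anyone's formalism; the cos^2 fringe pattern and every parameter are just assumptions): detector hits are drawn one at a time from an interference-like distribution, and only with many hits does the empirical histogram start to resemble the pattern with any confidence.

```python
# Toy illustration: single detector hits vs. the ensemble in a double-slit-like setup.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy intensity: cos^2 fringes under a Gaussian envelope (arbitrary units).
x = np.linspace(-1.0, 1.0, 400)
intensity = np.cos(8 * np.pi * x) ** 2 * np.exp(-4 * x ** 2)
p = intensity / intensity.sum()          # normalise to a probability distribution

def sample_hits(n):
    """Draw n detector positions from the interference distribution."""
    return rng.choice(x, size=n, p=p)

# "True" binned pattern to compare against.
true, _ = np.histogram(x, bins=40, range=(-1, 1), weights=p, density=True)

for n in (1, 10, 10_000):
    hits = sample_hits(n)
    hist, _ = np.histogram(hits, bins=40, range=(-1, 1), density=True)
    err = np.abs(hist - true).mean()     # how far the empirical pattern is from the true one
    print(f"n = {n:6d}  mean abs deviation from true pattern = {err:.3f}")
```

With n = 1 the deviation is large (one point carries essentially no information about the pattern); it shrinks as n grows, but it never reaches exactly zero for any finite ensemble.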
Edit: This is IMO also the root of a lot of infinities. By assuming infinite confidence in some things (when this is not really true), it's no surprise that odd things happen, like "infinite probabilities" etc. When we "count" possibilities, the standard procedure ignores WEIGHTING the possibilities by the limited confidence in the inference system (the causation rules) that is used. To connect to Bob's quite different "reformulation" - this is how I would like to perform a "reformulation". By ignoring the fact that some inferences are not perfect, we assign them an unphysical weight, and then of course when we try to sum the result, it diverges.
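As a toy illustration of that last point (again entirely my own construction; the exponential confidence weight and the scale are arbitrary stand-ins, not a derived physical law): counting possibilities with full unit weight grows without bound, while weighting each possibility by a finite, decaying confidence keeps the same sum finite.

```python
# Toy illustration: counting possibilities with and without a confidence weight.
import math

def unweighted_count(n_possibilities):
    # Standard "counting": every possibility enters with full weight 1.
    return sum(1.0 for _ in range(n_possibilities))

def weighted_count(n_possibilities, scale=100.0):
    # Hypothetical confidence weight: possibilities far beyond what the observer
    # can resolve are exponentially suppressed (arbitrary choice of form).
    return sum(math.exp(-k / scale) for k in range(n_possibilities))

for n in (10**2, 10**4, 10**6):
    print(f"n = {n:>7}: unweighted = {unweighted_count(n):.1f}, "
          f"weighted = {weighted_count(n):.1f}")

# The unweighted total grows without bound (~n), while the weighted total
# approaches a finite limit, 1 / (1 - e^(-1/scale)) ~ 100.5 here.
```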
/Fredrik