Fra
This is hard to discuss. I admit that part of my confidence comes from other directions, which happen to merge with Barandes's picture at the intersection.
I'll try again in this way: not to fill in the missing parts, but to try to address what I think are the keys.
pines-demon said:
How can we understand entanglement under the Barandes interpretation?
- My main concerns are sections V to VII. In these sections he tries to see causal locality in a Bayesian-network analogy. I would like to understand some version of it.
- His new microscopic principle of causality is defined as:
The key lies in the composition of subsystems, namely in what the constraint implicit in the composition implies.

jbergman said:
I should dig into this paper but don't have the time. I did want to say that Bayesian networks have very strong Markovian properties, which his unistochastic processes don't have, so I am not sure what the analogy is here.
If you consider two independent Bayesian networks, you can combine them either "as is", as two independent parts, or under some constraint. That constraint implies a dependence between the two previously independent parts, in different ways depending on what the constraint is, of course. But the general observation has nothing to do with those details.
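To make that concrete, here is a minimal toy sketch of my own (the variables, the constraint A + B == 1, and the numbers are all invented for illustration): two independent binary variables become dependent as soon as the combined system is conditioned on a shared "conservation" constraint.

```python
# Toy example (my own, not from Barandes's papers): two independent binary
# variables A and B become dependent once we condition on a shared
# constraint, here the invented conservation rule A + B == 1.
from itertools import product

# Independent priors for the two networks.
pA = {0: 0.5, 1: 0.5}
pB = {0: 0.7, 1: 0.3}

# Joint distribution of the combined system "as is" (no constraint).
joint = {(a, b): pA[a] * pB[b] for a, b in product(pA, pB)}

# Impose the constraint and renormalize.
Z = sum(p for (a, b), p in joint.items() if a + b == 1)
constrained = {(a, b): (p / Z if a + b == 1 else 0.0)
               for (a, b), p in joint.items()}

# Marginals under the constraint, and a check for dependence.
pA_c = {a: sum(constrained[(a, b)] for b in pB) for a in pA}
pB_c = {b: sum(constrained[(a, b)] for a in pA) for b in pB}
dependent = any(abs(constrained[(a, b)] - pA_c[a] * pB_c[b]) > 1e-12
                for a, b in product(pA, pB))
print(dependent)  # True: the constraint makes A and B statistically dependent
```

The details of the constraint only change *how* the parts become correlated; that they become correlated at all is generic.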
The magic lies in understanding the constraints and what they mean conceptually. Barandes only has a correspondence, which implements this constraint via the time-dependent stochastic matrices, but he offers no first-principles explanation. This is missing, and I agree.
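For reference, the correspondence itself is easy to state: the transition matrix is the entrywise squared magnitude of a unitary, Gamma_ij = |U_ij|^2, which makes it unistochastic (and in particular doubly stochastic). A minimal numerical sketch, where the 2x2 rotation and the angle are just an arbitrary example:

```python
# Sketch of the unistochastic construction: build a transition matrix from
# a unitary by taking entrywise squared magnitudes, Gamma_ij = |U_ij|^2.
import numpy as np

theta = 0.4  # arbitrary rotation angle for the example
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a real unitary (rotation)

Gamma = np.abs(U) ** 2  # the induced unistochastic transition matrix

# Unistochastic matrices are in particular doubly stochastic:
print(Gamma.sum(axis=0))  # each column sums to 1
print(Gamma.sum(axis=1))  # each row sums to 1

p0 = np.array([1.0, 0.0])  # start in the first configuration
p1 = Gamma @ p0            # transition probabilities after one step
```

What the correspondence does not say is *why* nature should single out exactly this family of stochastic matrices, which is the missing first-principles part.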
So I think the generalization is to consider not one Bayesian network but a system of Bayesian networks that are dependent via constraints; and under certain circumstances this generalization is a unistochastic process.
The reason for my confidence in the ideas is that my own interpretation contains alternative mechanisms here that fill in the missing parts, at least for me.
The most natural interpretation of the constraint is to see the two independent Bayesian networks as encoding predictions of the future, but from non-commutative perspectives, or different bases. Sometimes this may offer better predictive power per unit of storage, so a global memory constraint would be a natural interpretation of the constraint, or an energy constraint if that makes more sense. I mean, no matter how you divide a system into parts, there is usually some sort of conservation implicit in the division. What that conserved quantity is can vary.
In this sense this can all be seen as a generalized inference system, where instead of having a "simple" Bayesian network, you entertain parallel networks representing different encodings and consider encoding their composition. Via a constraint that can mean different things in different contexts, this implies that the parts become dependent in a top-down way. What properties does such a system have? If it is unistochastic, then Barandes showed that it will always exhibit quantum behaviour.
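One way to see why such systems behave non-classically (my own toy illustration, not taken from the paper): chop a unitary evolution into two steps. The unistochastic matrix of the whole evolution is not the product of the stepwise unistochastic matrices, so the process fails the classical Markov chaining that jbergman mentions, and the discrepancy is exactly the interference cross terms.

```python
# Toy illustration (mine): compose a unitary evolution in two steps. The
# unistochastic matrix of the full step is NOT the product of the stepwise
# unistochastic matrices, i.e. the process is not Markovian in the
# classical sense -- the mismatch is quantum interference.
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def unistochastic(U):
    return np.abs(U) ** 2

U1, U2 = rotation(0.3), rotation(0.5)

direct = unistochastic(U2 @ U1)                   # |(U2 U1)_ij|^2
stepwise = unistochastic(U2) @ unistochastic(U1)  # classical Markov chaining

print(np.allclose(direct, stepwise))  # False: the cross terms interfere
```

So the analogy with a single Bayesian network indeed breaks, which is why I think the right picture is a *system* of networks coupled by a constraint rather than one network.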
So what possible intuition do we have for such a top-down constraint? There are apparently many candidates from a modelling perspective, but which ones would be suitable for nature, and why? This is strongly related, I think, to information-theoretic interpretations of QM, which is why I like Barandes's correspondence, and why I feel I ought to defend the idea when some suggest that it adds nothing, which I think is not very fair.
His view at least offers a handle on a new paradigm, to be modest. But it's up to us what to do with this handle: build onto it, or consider it a useless appendix. There are many other interpretations or formalisms I would consider useless before Barandes's.
I would say none of this is "weird", but it does reject realism, in the sense that it is a strongly information-theoretic approach. But if you take information processing as real when done not by humans but by physical subsystems doing "stochastic" basic processing, then it is abstract but still "real". There is nothing surrealistic in this. For me it is a "realistic" explanation, though not in the sense in which Bell uses the word.
/Fredrik