Information Causality: A New Physical Principle by M. Pawlowski et al.

In summary, the paper introduces the principle of Information Causality, stating that communication of m classical bits causes information gain of at most m bits. They show that this principle is respected both in classical and quantum physics, and that all stronger than quantum correlations violate it. They suggest that Information Causality, being a generalization of no-signaling, is one of the foundational properties of Nature.
  • #1
MTd2
http://arxiv.org/abs/0905.2292

A new physical principle: Information Causality
Authors: M. Pawlowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, M. Zukowski
(Submitted on 14 May 2009)

Abstract: Quantum physics exhibits many remarkable features. For example, it gives probabilistic predictions (non-determinism), does not allow copying of unknown states (no-cloning), its correlations are stronger than any classical correlations but information cannot be transmitted faster than light (no-signaling). However, all the mentioned features do not single out quantum physics. A broad class of theories exist which share all of them with quantum mechanics and allow even stronger than quantum correlations. Here, we introduce the principle of Information Causality, stating that communication of m classical bits causes information gain of at most m bits. We show that this principle is respected both in classical and quantum physics, and that all stronger than quantum correlations violate it. We suggest that Information Causality, being a generalization of no-signaling, is one of the foundational properties of Nature.

***

I checked the backgrounds of the authors. They seem to be quite respectable people who work on very down-to-earth stuff.
 
  • #2


MTd2, thanks for thinking of me :)

I'll try to skim the paper later. Just from a first look at it, I appreciate that they are looking for new information-theoretic founding principles of QM, but the argument whereby QM is emergent and unique still seems to be missing.

In my elaborations there are also bounds on the mutual entropy. But this mutual entropy is the information divergence between the independent view and the full view.

[tex]I(x,y) = - S_{KL}(p(x,y)|p(x)p(y))[/tex]
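
For reference, in the standard convention the mutual information is itself the (nonnegative) relative entropy between the joint distribution and the product of the marginals:

[tex]I(X;Y) = \sum_{x,y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)} = S_{KL}\big(p(x,y)\,\|\,p(x)p(y)\big) \geq 0[/tex]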

Still, this measure is dependent on the implicit structure of the probability spaces.

One bound, in my view, is that the information divergence, as constructed by the inside observer, is bounded by its own information capacity (~mass). If the microstructure of the probability spaces is also constrained to be encoded by the physical observer with finite complexity, this loosely speaking introduces a cutoff P_max, beyond which further differentiation in certainty is impossible.

I expect QM to be emergent when such observers are interacting, and the key is the reconstruction of the probability microstructure. The player's own complexity constrains their own actions, so a simple observer can only afford "simple strategies" due to this limit. This reconstruction of probability logic would, I think, yield quantum logic as an evolutionary result: it is simply a more efficient strategy than classical logic. The microstructure implicit in the probability space, and in particular its dynamics, is my personal expectation of what the key is.

This is related to their bound, but their argument seems to ignore several issues I definitely expect to be in there, for example the dependence on the observer's mass and the constrained "probability space". The continuum probability baggage, if reconstructed, would probably provide a more powerful angle on this.

I'll try to read it better during the weekend and see if they have any input on this.

/Fredrik
 
  • #3


About the expression above: I don't think in terms of independence, I think in terms of prior structure. So the "independence assumption" would correspond to the PRIOR structure of the probability. This is subject to correction as interactions proceed. Encoded in this is, I think, the observer's strategy and action. So not only does this IMHO have the potential to explain QM, it has the potential to explain and unify interactions from an information-theoretic view.

[tex]I(x,y) = - S_{KL}(p(x,y)|p(x)p(y)) = - S_{KL}(p|p_{prior})[/tex]
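
To make the kind of quantity I have in mind a bit more concrete, here is a tiny numerical sketch (my own illustration, using the standard positive relative-entropy convention) of the divergence between an updated distribution and its prior:

[code]
import math

def kl_divergence(p, p_prior):
    """D_KL(p || p_prior) = sum_i p_i * log2(p_i / p_prior_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, p_prior) if pi > 0)

p_prior = [0.25, 0.25, 0.25, 0.25]   # the observer's prior over four distinguishable states
p       = [0.70, 0.10, 0.10, 0.10]   # the state after some interaction/update
print(kl_divergence(p, p_prior))     # ~0.64 bits gained relative to the prior
[/code]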

So I personally think of this more as a function of p and p_prior, rather than as between x and y; x and y are just labels of distinguishable states in my view, or an index for the states implicit in p.

So the measure would then be interpreted as a divergence between two information states - related by evolution, rather than two "variables".

Often in various "Bob-Alice pictures" the entire reasoning takes place relative to a fictive outside observer. If this is to be cured, I think the mass of the "view" must enter the reasoning?

Since I don't like to start with continuum probability baggage, the above is misleading; I would write

[tex]I(x,y) =- S_{KL}(m|m_{prior})[/tex]

Where m stands abstractly for the microstate of the "view" or "observer". This microstructure would have complexity as a qualifier, but also a state index. But this is the simplest possible construct; as evolution proceeds, this could split, and a complex consisting of several interconnected microstructures, evolved to more efficiently represent a viable strategy, will form.

The correspondence between microstructure, microstate and probability, probability spaces and Hilbert spaces and QM state vectors is exactly what I am looking to understand. In that picture, complexity and mass problems can't be avoided, I think. Also the problem of time seems to be unavoidable. Given any observer, there is always a "present". But the justification of the present is only in terms of evolutionary stability.

/Fredrik
 
  • #4


I like the direction they are probing, but I can't bring myself to appreciate their fundamental way of abstraction.

They work within a given background context, and that disturbs me.

For me this includes the measures themselves. The microstructure needed to even physically define a measure of relative information is somehow given as one of the premises. I can't accept that as a fundamental starting point.

I might be stubborn, but I think we need to be more radical. The "context" needed, which is usually considered either part of the premise, "background structure", bird's-eye views etc., must in my view be explained by evolution. And the reasoning should thus start from a minimum of structure. To me, the entire context can only be encoded in the observer's microstructure.

Not even the choice of logic, can escape this constraint. This is why I see no other way than to reconstruct the physical logic.

They say in the end

"Among the correlations that do not violate that bound it is not known whether Information Causality singles out exactly those allowed by quantum physics. If it does the new principle would acquire even stronger status, as the defning feature of quantum physics."

I also think that eventually we might understand QM logic in a larger context. But I think the information-theoretic approach needed is a new one, not the background-dependent Shannon type of information theory.

Part of the problem with the current theory that I find painful is that the communication channels are often assumed to be part of a given context. I really can't see how that is a realistic inside scenario. Suppose you are Alice: then first of all "Bob" is not well defined, and basically everything in your environment is subject to questioning.

The entire argument laid out in the paper doesn't correspond to any inside view at all, as I see it. It disturbs me a lot.

The entire premise in the reasoning comes out as unphysical to me.

Usually the "context" is supported not by Alice nor by Bob, but by the entire laboratory frame and its microstructure. This works fine as long as we constrain ourselves to particle physics, but that is a special case as I see it. And to understand the logic of the special case, I think we need to understand the general case.

We lack an "inside view" of information theory. Most attempts I've seen actually work in a presumed fixed context, from which you can by reduction define imagined inside views. But that's not the same thing. In a real inside view, there is no outside, there is no "external context", or at least it's completely invisible, even on the level of logic.

Edit: in particular, this means that I don't think there is determinism in a real inside view. Note that QM as it stands is deterministic on the level of state-vector evolution. This is why the QM formalism, exactly as it stands today, is IMO effective, not fundamental.

/Fredrik
 
  • #5


Fra said:
We lack an "inside view" of information theory. Most attempts I've seen actually work in a presumed fixed context, from which you can by reduction define imagined inside views. But that's not the same thing. In a real inside view, there is no outside, there is no "external context", or at least it's completely invisible, even on the level of logic.

I see it a bit like this.

The situation that we IMHO are currently in is perhaps a bit analogous to when Riemann started to develop the intrinsic view of differential geometry, rather than the previous one, which was always embedded in a larger external context without true intrinsic curvature.

I see that the story is the same with information theory. We still lack a solid intrinsic formulation of this. I think the quest for that goes hand in hand with the physical basis of probability, and reconstruction of the continuum.

/Fredrik
 
  • #6


Many thanks MTd2 for posting that. Sounds very interesting, I'll have a read through in detail when I have some more time. Hopefully it solves the following puzzle, phrased as a thought experiment. Apologies if this is just something really obvious, but it has puzzled me before and this paper sounds like it will answer it.

We have two black boxes, each has two buttons marked 1 & 2 and two light bulbs, one green and one red, and we give one box each to Alice and Bob. Suppose that they travel outside of each other's light cones, so no communication is possible. They each press one of the two buttons, and then one of the two bulbs lights up according to some probability distribution. Regardless of how the boxes are wired up internally, what are the possibilities?
Say, A1 = outcome of Alice's box if she presses 1, A2 if she presses 2 (assign -1 to green, +1 to red).
B1,B2 similarly for Bob.

Classically, the correlation [itex]\langle A_1 B_1\rangle + \langle A_1 B_2\rangle + \langle A_2 B_1\rangle - \langle A_2 B_2\rangle[/itex] must be bounded by 2. As is well known, if they were performing a Bell test, measuring the spins of entangled particles along some directions, it can be as high as [itex]2\sqrt{2}[/itex]. By the Tsirelson bound, this is the maximum possible correlation in QM.
All we really need physically (afaik) is that there is no signaling, so that the button Alice chooses to press has no effect on Bob's probabilities and vice-versa.
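
For concreteness (sketching the standard calculation): for a suitably prepared maximally entangled pair, the correlators take the form [itex]E(\theta_A,\theta_B)=\cos(\theta_A-\theta_B)[/itex] for measurement angles in a common plane, and choosing [itex]\theta_{A_1}=0,\ \theta_{A_2}=\pi/2,\ \theta_{B_1}=\pi/4,\ \theta_{B_2}=-\pi/4[/itex] gives

[tex]\langle A_1B_1\rangle + \langle A_1B_2\rangle + \langle A_2B_1\rangle - \langle A_2B_2\rangle = \tfrac{1}{\sqrt{2}}+\tfrac{1}{\sqrt{2}}+\tfrac{1}{\sqrt{2}}-\left(-\tfrac{1}{\sqrt{2}}\right) = 2\sqrt{2}.[/tex]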

The following setup would satisfy no-signaling, but has a correlation of 4 and therefore is outside what can be achieved with QM (a quick numerical check is sketched after the list):

1) Alice and Bob each have a 50/50 chance of seeing either bulb light up.
2) If one or both of them press button 1 then they both see the same colour bulb light up.
3) If they both press button 2 then they each see a different colour light up.
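
Here is a rough numerical check (my own sketch, not anything from the paper) that such boxes reach 4 for the correlation above while keeping the local outcome statistics at 50/50, i.e. no-signaling:

[code]
import itertools
import random

def box_pair(x, y):
    """x, y in {1, 2} are the buttons pressed; returns (a, b) in {-1, +1}.
    Each outcome is locally a fair coin; the colours agree unless both press 2."""
    a = random.choice([-1, +1])
    b = -a if (x == 2 and y == 2) else a
    return a, b

def E(x, y, trials=100_000):
    """Estimate the correlator <A_x B_y>."""
    return sum(a * b for a, b in (box_pair(x, y) for _ in range(trials))) / trials

chsh = E(1, 1) + E(1, 2) + E(2, 1) - E(2, 2)
print("CHSH value:", chsh)  # exactly 4 for these boxes (vs. 2 classically, 2*sqrt(2) quantum)

# No-signaling: Alice's marginal is 50/50 whatever button Bob presses (and vice versa).
for x, y in itertools.product([1, 2], repeat=2):
    mean_a = sum(box_pair(x, y)[0] for _ in range(100_000)) / 100_000
    print(f"<A_{x}> given Bob presses {y}: {mean_a:+.3f}")  # all approximately 0
[/code]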

The question is, how could such pairs of boxes be set up to violate some clear physical principle? I assume that it could be used somehow to break "Information Causality"?

Edit: Reading a bit further, I see that they do actually mention my thought experiment, which they refer to as van Dam's protocol, referenced from http://arxiv.org/abs/quant-ph/0501159v1, and it allows a "computational free lunch".
 
  • #7


I see that the paper is answering exactly what I asked.
Suppose Alice was allowed to send a single bit of classical information to Bob (so they can't actually be outside each other's light cones), but no more. Then, utilizing her black box, she would actually be able to send two bits. Bob wouldn't be able to read both bits, but he would be able to choose which bit to read, and could then determine it with certainty. A bit like encoding two bits of information as two non-commuting variables and sending this down a single-bit communication channel. This kind of thing can happen whenever the Tsirelson bound is broken.
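
Here is a rough sketch of how I understand that protocol (my own illustration, not code from the paper), using a perfect PR-type box with 0/1 inputs and outputs (pressing button 2 above corresponds to input 1): one use of the box plus a single classical bit lets Bob learn whichever of Alice's two bits he chooses, with certainty.

[code]
import random

def pr_box(x, y):
    """Perfect PR box: inputs x, y in {0, 1}; outputs a, b locally uniform with a XOR b = x*y."""
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

def van_dam_round(a0, a1, wanted):
    """Alice holds bits a0, a1; Bob wants bit number `wanted` (0 or 1)."""
    x = a0 ^ a1            # Alice's input to her end of the box
    y = wanted             # Bob's input to his end
    a, b = pr_box(x, y)
    message = a0 ^ a       # the single classical bit Alice sends to Bob
    return message ^ b     # Bob's guess: a0 ^ (a XOR b) = a0 ^ (a0 ^ a1)*wanted

# Check every case: Bob is always right, although only one bit was communicated.
assert all(
    van_dam_round(a0, a1, w) == (a0, a1)[w]
    for a0 in (0, 1) for a1 in (0, 1) for w in (0, 1)
)
print("Bob recovers whichever of Alice's bits he asks for, every time.")
[/code]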

This raises two questions though.

1) Is this a realistic physical principle? That is, is there a good reason why Information Causality should not be able to be broken in this way? I've no idea; it doesn't sound very convincing.

2) They only argue that the new principle is equivalent to Tsirelson's bound, and not to QM. So, is Tsirelson's bound equivalent to QM? That's a purely mathematical statement and it shouldn't be too difficult to either prove or come up with a counterexample (if someone else hasn't already done it...).

I'd have liked them to go a bit further with this. As it is, it is rather unconvincing.
 

1. What is information causality and how does it differ from other principles?

Information causality is a principle stating that the information one observer can gain about data held by another observer is limited by the number of classical bits that are communicated, no matter what correlations the two share in advance. This differs from other principles, such as causality and locality, which focus on the physical influence of one event on another rather than the exchange of information.

2. How was information causality first proposed?

Information causality was first proposed by M. Pawlowski et al. in 2009 in their paper "Information Causality as a Physical Principle." The authors used the concept of quantum information theory to develop a framework for understanding the limitations of information exchange between two observers.

3. What are the implications of information causality for quantum mechanics?

Information causality has important implications for our understanding of quantum mechanics. It provides a new perspective on the relationship between information and physical systems, and has potential applications in areas such as cryptography and quantum computing.

4. How does information causality relate to the concept of entanglement?

Information causality and entanglement are closely related concepts. Entanglement refers to the phenomenon where two or more particles become correlated in such a way that the state of one particle is dependent on the state of the other. Information causality limits how much such shared correlations can increase the information one observer gains from a limited classical message sent by the other.

5. What are some potential criticisms of information causality?

Some potential criticisms of information causality include the lack of a clear definition of "information" and the difficulty in applying the principle to complex systems. Additionally, some researchers have questioned whether information causality is truly a fundamental physical principle or if it is an emergent property of other principles, such as causality and locality.
