A Blue-Eye Paradox: Solution Not Unique

  • Thread starter: Demystifier
  • Tags: Paradox
  • #151
Demystifier said:
@maline and @stevendaryl, from your responses I can conclude that the main controversy is whether or not the message
"At least one of the monks has blue eyes. This message is sent to all monks."
is equivalent to
"At least one of the monks has blue eyes. Now all monks know that."

I say it's equivalent, and you say it's not. You say that the first message contains more relevant information than the second one. But intuitively, that doesn't make sense to me. So I need to think more about it.

As I said, there are (at least) four levels of statements about Alice, Bob and Carol (the three islanders):
  1. Alice's eye color
  2. What Alice believes about her eye color.
  3. What Alice believes about what Bob believes about his own eye color.
  4. What Alice believes about what Bob believes about what Carol believes about her own eye color.
When the monk announces to everyone "There is at least one blue-eyed person", that doesn't affect level 1, nor level 2, nor even level 3, but it does affect level 4.
 
  • #152
Demystifier said:
But we are talking about logical people. There is no logical reason for Alice to believe that she has brown eyes.

But there is no logical reason for her to conclude that she has blue eyes. As far as she knows, she MIGHT have brown eyes.

Once again, here are 4 possible worlds:
  1. W1: All three have blue eyes.
  2. W2: Only Carol and Bob have blue eyes.
  3. W3: Only Carol has blue eyes.
  4. W4: Nobody has blue eyes.
In W1, Alice believes that W2 is possible.
In W2, Bob believes that W3 is possible.
In W3, Carol believes that W4 is possible.

So Alice believes that it is possible that (Bob believes that it is possible that (Carol believes it is possible that (nobody has blue eyes))).

After the announcement, Alice no longer believes this.
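This possible-worlds argument can be checked mechanically. Below is a minimal sketch (the function and variable names are my own, not from the thread) that encodes W1-W4, lets each islander be uncertain only about their own eye color, and tests whether the nested possibility chain survives the announcement:

```python
# Each world is the set of people who have blue eyes in that world.
W1 = frozenset({"Alice", "Bob", "Carol"})
W2 = frozenset({"Bob", "Carol"})
W3 = frozenset({"Carol"})
W4 = frozenset()
worlds = [W1, W2, W3, W4]
people = ["Alice", "Bob", "Carol"]

def accessible(agent, w, model):
    """Worlds the agent cannot distinguish from w: everyone else's eye
    color matches w; only the agent's own eye color may differ."""
    others = [p for p in people if p != agent]
    return [v for v in model
            if all((p in v) == (p in w) for p in others)]

def nested_possible(w, chain, model):
    """Does 'chain[0] considers it possible that chain[1] considers it
    possible that ... nobody has blue eyes' hold at world w?"""
    if not chain:
        return len(w) == 0           # the innermost claim: no blue eyes
    agent, rest = chain[0], chain[1:]
    return any(nested_possible(v, rest, model)
               for v in accessible(agent, w, model))

# Before the announcement, at W1 (the actual world), the chain holds:
print(nested_possible(W1, ["Alice", "Bob", "Carol"], worlds))   # True

# The announcement makes "at least one blue-eyed" common knowledge,
# which deletes W4 from the model. The chain then collapses:
after = [w for w in worlds if len(w) > 0]
print(nested_possible(W1, ["Alice", "Bob", "Carol"], after))    # False
```

Removing W4 from the model is exactly what the public announcement does, and Alice's level-4 belief collapses with it.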
 
  • #153
lukesfn said:
With the above graph, have you thought about worlds where no announcements were made?

You can consider such worlds, if you like, but the point is that nobody in the actual world would think that they were in that world. So there is no chain of arrows from the actual world to that world. So that world would not come into play in the reasoning.
 
  • #154
stevendaryl said:
After the announcement, Alice no longer believes this.
The correct number of days after the announcement, Alice no longer believes W2 is possible, but she already knew W3 and W4 were not possible.

stevendaryl said:
When the monk announces to everyone "There is at least one blue-eyed person", that doesn't affect level 1, nor level 2, nor even level 3, but it does affect level 4
The announcement affects a world that Alice already didn't believe existed, which is odd.
 
  • #155
Demystifier said:
@maline and @stevendaryl, from your responses I can conclude that the main controversy is whether or not the message
"At least one of the monks has blue eyes. This message is sent to all monks."
is equivalent to
"At least one of the monks has blue eyes. Now all monks know that."

I say it's equivalent, and you say it's not. You say that the first message contains more relevant information than the second one. But intuitively, that doesn't make sense to me. So I need to think more about it.
Let me try to explain why I might be wrong.
Suppose that I receive the letter
"At least one of the monks has blue eyes. Now all monks know that."
What can I conclude from it? I can interpret it in two interestingly different ways:
The first interpretation is that all the others have received the same message as I did, namely
"At least one of the monks has blue eyes. Now all monks know that."
The second interpretation is that all the others received a shorter message than I did, namely
"At least one of the monks has blue eyes."
In the first interpretation, "that" refers to both sentences, so it involves self-reference.
In the second interpretation, "that" refers only to the first sentence, so it doesn't involve self-reference.

The first interpretation is equivalent to "At least one of the monks has blue eyes. This message is sent to all monks." (Do you agree?)
But the second interpretation is not equivalent to it. So my mistake may have been to first take the first interpretation (to achieve equivalence), but later take the second interpretation (to argue that it does not carry much new information). If that was indeed my mistake (which seems to be the case), then I admit that I was wrong and now accept that the standard solution is correct. So it seems I know where my mistake was, but let me not draw a final conclusion before thinking about it once again.
 
  • Likes: Heinera
  • #156
lukesfn said:
I am wondering if, for certain numbers of blue-eyed people, the inductive reasoning feels so strong that it is mistaken for deductive reasoning.

I think that you might be confusing "inductive reasoning" with the mathematical technique of "mathematical induction", which is a form of DEDUCTIVE reasoning.

Inductive reasoning is a matter of generalizing from a bunch of examples. Every swan I've ever seen is white, so I conclude "All swans are white". That's not logically valid, because there might be a black swan that I've never seen.

Mathematical induction amounts to proving a statement about all natural numbers by showing that it's true for n=0 and showing that the case for each number reduces to the case for a smaller number. Mathematical induction is logically valid.
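As a sketch (my own paraphrase, instantiated for this puzzle rather than taken from the post), the induction schema being described is:

```latex
% P(n): "if exactly n islanders have blue eyes, they all act on day n."
% From the base case and the inductive step, the claim follows for all n:
P(1), \qquad \forall n \ge 1 \,\bigl( P(n) \Rightarrow P(n+1) \bigr)
\;\;\vdash\;\; \forall n \ge 1 \, P(n)
```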
 
  • #157
lukesfn said:
The correct number of days after the announcement, Alice no longer believes W2 is possible, but she already knew W3 and W4 were not possible.

Yes, Alice knows that W3 and W4 are not possible, but as far as she knows, Bob believes that W3 is possible, and as far as she knows, Bob believes that Carol believes that W4 is possible.
 
  • #158
Demystifier said:
Let me try to explain why I might be wrong.
Suppose that I receive the letter
"At least one of the monks has blue eyes. Now all monks know that."
What can I conclude from it? I can interpret it in two interestingly different ways:
The first interpretation is that all the others have received the same message as I did, namely
"At least one of the monks has blue eyes. Now all monks know that."
The second interpretation is that all the others received a shorter message than I did, namely
"At least one of the monks has blue eyes."
In the first interpretation, "that" refers to both sentences, so it involves self-reference.
In the second interpretation, "that" refers only to the first sentence, so it doesn't involve self-reference.

Exactly right. Saying "At least one monk has blue eyes, and all monks received this message" is equivalent to:
  1. At least one monk has blue eyes.
  2. All monks know that 1 is the case.
  3. All monks know that 2 is the case.
  4. All monks know that 3 is the case.
  5. Etc.
Each additional statement gives more information (up until you run out of monks). If a monk only receives message 1, he might think that he's the only one who received the message. If he receives messages 1&2, he might think that the other monks only received message 1. If he receives messages 1, 2, and 3, he might think that the other monks only received messages 1&2. Etc.
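This hierarchy of messages is what turns the statement into common knowledge. One way to see the difference computationally: common knowledge of P at a world is equivalent to P holding at every world reachable by any finite chain of "some agent considers this possible" steps. A hedged sketch (the encoding of worlds as tuples of eye colors, and all names, are my own assumptions):

```python
from itertools import product

def all_worlds(n):
    return set(product([True, False], repeat=n))   # True = blue eyes

def accessible(i, w, allowed):
    # Agent i cannot see their own eyes: try both values of coordinate i,
    # keeping only worlds still present in the model.
    candidates = (w[:i] + (True,) + w[i+1:], w[:i] + (False,) + w[i+1:])
    return [v for v in candidates if v in allowed]

def common_knowledge(prop, w, n_agents, allowed):
    # P is common knowledge at w iff P holds at every world reachable by
    # any finite chain of "agent i considers this possible" steps.
    seen, stack = set(), [w]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        if not prop(v):
            return False
        for i in range(n_agents):
            stack.extend(accessible(i, v, allowed))
    return True

full = all_worlds(3)
w = (True, True, True)        # all three monks have blue eyes

# Before the announcement: a chain of steps reaches the no-blue-eyes world,
# so "at least one has blue eyes" is NOT common knowledge.
print(common_knowledge(any, w, 3, full))     # False

# The announcement deletes that world from the model, and the statement
# becomes common knowledge.
pruned = full - {(False, False, False)}
print(common_knowledge(any, w, 3, pruned))   # True
```

The pruning step is the whole content of the announcement: no individual learns anything about eye colors, but one world disappears from everyone's shared model.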
 
  • #159
Demystifier said:
Yes. My answer is that, for ##n\geq 3##, they will all kill themselves after ##n## days, counting from the day on which they all together learned about the suicide law. This assumes that they learned the suicide law together on the same day. If they learned it in another way, then the answer depends on how exactly they learned it.
Learning about the suicide rule is not equivalent to a new public statement that there is at least one blue-eyed monk. The countdown starts when the statement made by the guru is combined with the daily suicide opportunity to publicly demonstrate a minimum number of blue-eyed monks that is greater than 1.

A simple statement of the rules would never, under any circumstances, resolve the blue-eyed monk count to any monk. In contrast, a statement that there is at least one blue-eyed monk would resolve the blue-eyed monk count to a monk if he was the only blue-eyed monk ... and he would communicate this to all other monks on the next scheduled suicide opportunity.
 
  • #160
stevendaryl said:
Mathematical induction amount to proving a statement about all positive integers by showing that it's true for n=0 and showing that it is possible to reduce the case for one number to the case for a smaller number. Mathematical induction is logically valid.
Yes, thank you, you are probably correct; what little formal knowledge I had in this area has long been forgotten.

Also, I don't think anything else I am saying leads anywhere particularly useful apart from being things my intuition doesn't like.

I feel like either the instant application of perfect logic, being something quite unnatural, can cause very unintuitive results in some cases, or the concept of instantly applied perfect logic has some flaw within the scope of this puzzle.
 
  • #161
lukesfn said:
Yes, thank you, you are probably correct; what little formal knowledge I had in this area has long been forgotten.

Also, I don't think anything else I am saying leads anywhere particularly useful apart from being things my intuition doesn't like.

I feel like either the instant application of perfect logic, being something quite unnatural, can cause very unintuitive results in some cases, or the concept of instantly applied perfect logic has some flaw within the scope of this puzzle.

I think the reasoning is completely air-tight.

Suppose that only Carol has blue eyes, and she receives the announcement that there is at least one blue-eyed person. Then she would know it was her, and would kill herself. So we conclude:
  • Fact 1: If Carol has blue eyes and nobody else has blue eyes, she kills herself in one day.
This is logically equivalent to:
  • Fact 1': If Carol has blue eyes, and she doesn't kill herself after one day, then somebody else also has blue eyes.
Now, suppose that only Carol and Bob have blue eyes and they receive the announcement. Bob doesn't know whether he has blue eyes, or not. But he knows Fact 1'. So if Carol doesn't kill herself after one day, then there are at least two blue-eyed people. Looking around, Bob would know that the second person was him. So he would kill himself on the second day. So we conclude:
  • Fact 2: If Carol and Bob have blue eyes and Alice doesn't have blue eyes, then Bob kills himself on the second day.
This is logically equivalent to:
  • Fact 2': If Carol and Bob both have blue eyes, and Bob does not kill himself on the second day, then Alice has blue eyes.
Now, suppose that Alice, Bob and Carol all three have blue eyes. After two days, Bob doesn't kill himself. Then Alice uses Fact 2' to conclude that she herself has blue eyes. So she kills herself on the third day. So we conclude:
  • Fact 3: If Alice, Bob and Carol all have blue eyes, then Alice kills herself on the third day.
So the mathematical induction here is just the chain from Fact 1 to Fact 2 to Fact 3.
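The chain from Fact 1 to Fact 3 generalizes to any number of blue-eyed islanders, and the day-counting can be sketched in a few lines (the function name and day-numbering convention are my choices, not from the post):

```python
def suicide_day(n_blue):
    """Day on which the blue-eyed islanders all act, counting the day of
    the announcement as day 1. Returns None when nobody has blue eyes."""
    if n_blue == 0:
        return None
    known_minimum = 1   # the announcement: at least one blue-eyed person
    day = 1
    # Each blue-eyed islander sees n_blue - 1 others; they act as soon as
    # the publicly demonstrated minimum exceeds the count they can see.
    while known_minimum < n_blue:
        known_minimum += 1   # no suicide today raises the public minimum
        day += 1
    return day

# Matches Facts 1-3: Carol alone acts on day 1, Carol and Bob on day 2,
# all three on day 3.
print([suicide_day(n) for n in (1, 2, 3)])   # [1, 2, 3]
```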
 
  • #162
stevendaryl said:
Okay, well I'm assuming that the population is fixed (other than deaths by suicide).
That is a good assumption. There's no reason to make the problem more complex than it already is.

Not quite. For the puzzle to work, a blue-eyed islander has to consider it possible that his eyes are non-blue.
That there have to exist non blue-eyed islanders (or whatever) does not necessarily follow. An islander merely has to consider it to be possible that their own eyes are not blue.
Demystifier said:
(3) No other source of information, except by the prophet himself, provides sufficient information for all blue-eyers to eventually realize that they are blue-eyers.

then one arrives at the final result

(4) If there are n blue-eyers, they will realize that they are blue-eyers after n days, starting from the first day at which the information was given by the prophet.

Note that (1) and (2) are not sufficient to get (4). To get (4), one also needs (3). In other words, (3) is tacitly assumed in the standard solution that leads to the final result (4).

Your assumption #3 is very explicitly stated in the well-formulated versions of the riddle. The story has to be carefully crafted so as to make your assumption #3 explicit. For example, the island has no reflective surfaces, and communicating eye color (whether by talking, writing, or even surreptitious looks) is forbidden.

This is why I have been complaining about the pervasive use of non-standard versions of the riddle in this thread. It's akin to people trying to understand the twin paradox in special relativity by creating their own non-standard quintuplet paradox that misses the point. This is why this thread has become so long.

The other thing that has to be made clear is that the new knowledge imparted by the guru has to be common knowledge. Everyone has to have heard, understood, and believed the guru's statement.
 
  • #163
stevendaryl said:
I think the reasoning is completely air-tight.
Well, I have thought it all through from many angles, and I can't see any cracks myself, but I often waste large amounts of time on impossible problems before I really understand exactly why they are impossible. The only way I can see to attack the problem is to try to think of a reason why Alice wouldn't apply the induction method, but any such argument seems unconvincing.
 
  • #164
Demystifier said:
Yes they will.
Ok, so you claim three blue-eyed monks will commit suicide at t+3 without any announcement. Imagine three blue-eyed monks, each sitting in solitude. They all know about the suicide rule, but have no idea about their own eye colour, so they just sit there. One day (time = t) they are all placed in the same room. According to your argument, this action alone should somehow trigger a logical process so that they all know they have blue eyes at time t+3. How?
 
  • Likes: .Scott
  • #165
stevendaryl said:
it seems notoriously difficult to formulate mathematically.
I believe I have it:
##T_n## represents the statement that there are ##n## blue-eyed monks on the island.
##M_m## represents monk number ##m##.
##M_b## represents every other blue-eyed monk. More specifically, for the 1st person monk ##M_m## given in the context of the ##M_b## usage, it is the set of all blue-eyed monks except ##M_m##.
##B_m## represents the statement that ##M_m## is blue-eyed.
##B## represents the statement "I am blue-eyed". More specifically, for each ##m## given in the context of where ##B## is used, ##M_m## knows ##B_m##.

##Q_{0,0}## represents what a blue-eyed monk knows who sees no blue-eyed monks and has no guru clue.
##Q_{0,0} = (B\Rightarrow T_1)\wedge(\overline{B}\Rightarrow T_0)##

##Q_{n,0}## represents what a blue-eyed monk knows who sees ##n## blue-eyed monks where ##n>0## and has no guru clue.
##Q_{n,0} = (B\Rightarrow (T_{n+1}\wedge(M_b## knows ##Q_{n,0})))\wedge(\overline{B}\Rightarrow (T_n\wedge(M_b## knows ##Q_{n-1,0})))##

##Q_{0,0.5}## represents what a blue-eyed monk knows who sees no blue-eyed monks and has received the ##\overline{T_0}## public announcement from the guru and has not had a suicide opportunity.
##Q_{0,0.5} = B \wedge T_1##

##Q_{n,0.5}## represents what a blue-eyed monk knows who sees ##n## blue-eyed monks where ##n>0## and has received the ##\overline{T_0}## public announcement from the guru but there has not been a suicide opportunity since then.
##Q_{n,0.5} = (B\Rightarrow (T_{n+1}\wedge(M_b## knows ##Q_{n,0.5})))\wedge(\overline{B}\Rightarrow (T_n\wedge(M_b## knows ##Q_{n-1,0.5})))##

##Q_{1,1}## represents what a blue-eyed monk knows who sees 1 blue-eyed monk and there has been 1 suicide opportunity since the ##\overline{T_0}## public announcement.
##Q_{1,1} = B\wedge T_2\wedge(M_b## knows ##Q_{1,1})##
The other term to that, ##\overline{B}\Rightarrow(T_1\wedge(M_b## knows ##Q_{0,1}))##, is dropped because ##(T_1\wedge(M_b## knows ##Q_{0,1}))##, translated roughly as "there is only one blue-eyed monk and he knows he is dead", is demonstrated as false.

##Q_{n,1}## represents what a blue-eyed monk knows who sees ##n## blue-eyed monks where ##n>1## and has received the ##\overline{T_0}## public announcement from the guru and there has been 1 suicide opportunity since then.
##Q_{n,1} = (B\Rightarrow (T_{n+1}\wedge(M_b## knows ##Q_{n,1})))\wedge(\overline{B}\Rightarrow (T_n\wedge(M_b## knows ##Q_{n-1,1})))##

These can be used to demonstrate that you need the declaration and that you need as many suicide opportunities as blue-eyed monks.
But more importantly, it shows the correct recursive statements.
 
  • #166
Heinera said:
Ok, so you claim three blue-eyed monks will commit suicide at t+3 without any announcement. Imagine three blue-eyed monks, each sitting in solitude. They all know about the suicide rule, but have no idea about their own eye colour, so they just sit there. One day (time = t) they are all placed in the same room. According to your argument, this action alone should somehow trigger a logical process so that they all know they have blue eyes at time t+3. How?
In the meantime I have changed my mind in post #155, which you liked.
 
  • #167
Demystifier said:
Yes. My answer is that, for ##n\geq 3##, they will all kill themselves after ##n## days, counting from the day on which they all together learned about the suicide law. This assumes that they learned the suicide law together on the same day. If they learned it in another way, then the answer depends on how exactly they learned it.
I believe your logic here is that learning the rules together and seeing blue-eyed monks is a suitable substitute for a declaration from the guru that he sees a blue-eyed monk. But that is not the case. The difference is that monks observing other monks can never reveal to any monk anything about his own eye color. In contrast, there is a condition (##T_1##) when such a declaration will reveal the eye color to a monk.
That is the critical difference - and why simply learning the rule together on the same day will not trigger the same progression as the guru.

If that was not your logic, tell me what it is.
 
  • #168
Regarding "Okay, well I'm assuming that the population is fixed (other than deaths by suicide)."
D H said:
That is a good assumption. There's no reason to make the problem more complex than it already is.
Our perfectly logical monks will make no such assumption. They will need a reason to believe that the blue-eyed monk population isn't changing. So the problem description needs to include either a static-population rule or a stipulation that the monks can recognize each other.

I prefer the stipulation that they recognize each other because, from my point of view, analysis of the original game is complete and it's time to move on to scenarios where not all monks have the same information - such as post #111.
 
  • #169
.Scott said:
If that was not your logic, tell me what it is.
In the meanwhile, I have changed my logic. See post #155.
 
  • #170
Demystifier said:
In the meanwhile, I have changed my logic. See post #155.

Well, your post #155 admitted the possibility that your reasoning was mistaken, but didn't go so far as to confirm or deny the standard conclusion.
 
  • Likes: Demystifier
  • #171
.Scott said:
I prefer the stipulation that they recognize each other because, from my point of view, analysis of the original game is complete and it's time to move on to scenarios where not all monks have the same information - such as post #111.
With regard to what happens when six monks die on the first day after the guru makes her statement? It breaks the chain. They might have broken the rule against looking at mirrors, they might have broken the rule about discussing eye color, or they might have committed suicide for some unrelated reasons. Whichever is the case, the chain is broken. Terence Tao, in a comment on his version of the problem, stated

"With these sorts of logic puzzles, there is always the implied assumption of “idealised conditions” (no deafness, colour-blindness, or other physical disabilities, no pirates to come and randomly abduct half the population on Day 17, no unexpected breakdown of causality or other laws of physics, and so forth). It is a trivial matter to “break” the logic puzzle using non-idealised conditions, but this is not the most interesting aspect to the puzzle."​

To be brutally honest, these side discussions are akin to discussing the relativistic triplet paradox when the discussants don't understand the twin paradox. Properly constructed versions of the problem showcase the difference between mutual knowledge and common knowledge. Throughout this thread, there hasn't been much discussion on those key differences.

This thread started on a bad footing, and it shows in that the discussion has gone on for nine pages. (Long threads typically are not a sign of a good discussion.) One way it started off on a wrong footing was that it did not create a base of common knowledge for the discussion. Which version of the problem should be discussed? Monks committing suicide? That's a rather non-standard version of the problem.

This lack of a common basis for a discussion is a bit strange given that the problem is about the difference between mutual knowledge and common knowledge. The opening post did link to the wikipedia article on common knowledge, but in my opinion, that article (like many at wikipedia) isn't that good. The article on common knowledge at the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/common-knowledge/, is much better and is much more in depth.
 
  • #172
D H said:
This lack of a common basis for a discussion is a bit strange given that the problem is about the difference between mutual knowledge and common knowledge. The opening post did link to the wikipedia article on common knowledge, but in my opinion, that article (like many at wikipedia) isn't that good. The article on common knowledge at the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/common-knowledge/, is much better and is much more in depth.

The difference between common knowledge and mutual knowledge can be formally captured using indexed "knowledge operators". Let K_i P mean "Person #i knows P". Then to say that P is mutual knowledge is to say that, for each individual i,

K_i P

To say that P is common knowledge is to say

For all i: K_i P
For all i and j: K_i (K_j P)
For all i, j and k: K_i (K_j (K_k P))
etc.
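These definitions can be animated directly. The sketch below (my own encoding, not from the post: worlds are tuples of eye colors, and agent i is uncertain only about coordinate i) iterates the "everyone knows" operator and shows that, with three blue-eyed people and no announcement, the nesting holds up to two levels and fails at the third:

```python
def accessible(i, w):
    # Agent i considers possible the worlds differing only in coordinate i.
    return [w[:i] + (b,) + w[i+1:] for b in (True, False)]

def K(i, prop):
    # K_i P: P holds in every world agent i considers possible.
    return lambda w: all(prop(v) for v in accessible(i, w))

def everyone_knows(prop, n_agents):
    # E P (mutual knowledge): for every agent i, K_i P.
    return lambda w: all(K(i, prop)(w) for i in range(n_agents))

at_least_one_blue = any      # a world satisfies this if any entry is True

world = (True, True, True)   # three people, all blue-eyed
prop = at_least_one_blue
for k in range(1, 4):
    prop = everyone_knows(prop, 3)
    print(k, prop(world))    # 1 True, 2 True, 3 False
```

With n blue-eyed people, E^k holds for k < n and fails at k = n before the announcement, which is exactly the gap between mutual and common knowledge that the K_i hierarchy above formalizes.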
 
  • #173
D H said:
With regard to what happens when six monks die on the first day after the guru makes her statement? It breaks the chain. They might have broken the rule against looking at mirrors, they might have broken the rule about discussing eye color, or they might have committed suicide for some unrelated reasons. Whichever is the case, the chain is broken.
Bear in mind that this problem is intended to reflect "history". All monks are following the rule, and no monk commits suicide unless he knows he is blue-eyed.
 
  • #174
Demystifier said:
In the meanwhile, I have changed my logic. See post #155.
Does this mean you do not believe that day 1 will become a trigger to the suicide countdown - even with no messages or statements from the guru?
 
  • #175
Hi @Demystifier:

I confess that I have not read through all 174 previous posts. However I did randomly scan through about 20-30 of them. None of the ones I scanned mentioned the following point.

Before the public announcement that at least one person had blue eyes, the following was not common knowledge:
Every person knows that: every person knows that: every person knows that: . . . etc. at least one person has blue eyes.​
If I understand the problem statement correctly, the above must be common knowledge that they all have at the same time in order for all the N people with blue eyes to deduce, after N-1 days, that they all have blue eyes.

CORRECTION
I have come to realize that the above description of the common knowledge gained from the public announcement is incomplete.
Every person knows that at least one person has blue eyes, AND
every person knows that every person knows that at least one person has blue eyes, AND
every person knows that every person knows that every person knows that at least one person has
blue eyes, AND​
. . . etc.​

Note that before the announcement:
if there is one person with blue eyes, it is not true that every person knows that at least one person has blue eyes;
if there are two persons with blue eyes, it is not true that every person knows that every person knows that at least one person has blue eyes;
if there are three persons with blue eyes, it is not true that every person knows that every person knows that every person knows that at least one person has blue eyes;
etc.

Regards,
Buzz
 
  • #176
What is the deeper message of all this?

After a rather convoluted roundabout argument (which I don't want to reproduce here) I have finally found a way to convince myself that the standard solution is OK, and that my alternative solution is not. I guess it answers some of the most recent questions addressed to me.

But my philosophic mind is not satisfied with this. I want some deeper message to be extracted from this logical riddle. So is there a deeper or more general conclusion that can be inferred from the solution of the blue-eyes puzzle? For instance, has this puzzle been used to illustrate some more general result of mathematical logic? Something like Gödel's theorems, illustrated by the puzzling sentence "This sentence cannot be proved"?

I don't know about any general theorem of that kind, but let me explain what philosophical message I have extracted from it.

When I see a new conceptual problem, usually my first reaction is to try to solve it intuitively, at once, without using any formal argument. Often such an approach doesn't work.

In the next step I study a more detailed and technical solution of the problem, and after that, in hindsight, I try to modify my intuition to make me see the solution even without the detailed technical procedure. When I succeed in that, then I feel that I learned something deep. Often this approach works for me.

But sometimes even this doesn't work. Sometimes I cannot intuitively comprehend the solution even after I see the formal technical one. For instance, I cannot see intuitively that some simple dynamical equation has chaotic solutions, even after seeing that at a technical formal level.

Well, the blue-eyes puzzle is of that last kind, at least for me. At the intuitive level, I want to see what new message is conveyed by the prophet. But to answer that question, it seems unavoidable to use statements of the form: "I know that you know that he knows that you know ..." And no matter how hard I try, such sentences are intuitively incomprehensible to me. I can find a way to deal with them formally, but my intuition fails. And this is what makes me frustrated. (And what made me doubtful about the standard solution.)

So this is the deeper philosophical message I take from it. The solutions of some problems are too complex for intuitive understanding. In some cases, the only way to understand the solution is by following the formal technical procedure. One has to accept it and live with it. Accepting this makes me less frustrated. :smile:
 
  • #177
Demystifier said:
But to answer that question, it seems unavoidable to use statements of the form: "I know that you know that he knows that you know ..." And no matter how hard I try, such sentences are intuitively incomprehensible to me.
Perhaps this may help:The Kiss
By Coventry Patmore (1823–1896)

‘I SAW you take his kiss!’ ‘’Tis true.’
‘O modesty!’ ‘’Twas strictly kept:
He thought me asleep—at least, I knew
He thought I thought he thought I slept.’
 
  • Likes: Andreas C
  • #178
Demystifier said:
So this is the deeper philosophical message I take from it. The solutions of some problems are too complex for intuitive understanding. In some cases, the only way to understand the solution is by following the formal technical procedure.

No, you CAN intuitively understand it. It's hard, not complex; its difficulty is based on quantity instead of "quality". With 3 monks/islanders, it's very easy to understand. When you add more persons, you just add more steps. Finding the 8940198th powers of the 3rd, 1928th, 5904508139th and 99999929893895821st digits of ##2^{12345678901234567890}## without the aid of a computer is hard. Really hard. Is it complex? No. It's just a bunch of arithmetic. Proving Fermat's theorem is extremely hard because it's actually complex; it's not just a bunch of complications one on top of the other.
 
  • #179
Demystifier said:
Well, the blue-eyes puzzle is of that last kind, at least for me. At the intuitive level, I want to see what new message is conveyed by the prophet. But to answer that question, it seems unavoidable to use statements of the form: "I know that you know that he knows that you know ..." And no matter how hard I try, such sentences are intuitively incomprehensible to me. I can find a way to deal with them formally, but my intuition fails.
Those "I know that you know that he knows that you know ..." statements were just distractions. It's more a case of "I might think that someone else might think that someone else might think ...". I thought stevendaryl's post #144 made it clear. Also, why bother with those statements when there is the much simpler sequence "Day 1 the guru says there is at least 1 blue-eyed. No suicide on day 2 demonstrates to everyone that there are at least 2. No suicide on day 3 ..."? It's like the bird that flies between two oncoming trains: you can solve for its total flying distance with an infinite series, or you can just calculate how long it will be before the trains collide. Do it the simple way.
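The train analogy is easy to make concrete (the numbers below are illustrative and mine, not from the post): instead of summing the infinite series of the bird's back-and-forth legs, compute the collision time once.

```python
# Two trains 120 km apart, each moving at 60 km/h toward the other;
# a bird flies at 90 km/h between them until they meet.
distance = 120.0   # km between the trains
v_train = 60.0     # km/h, each train
v_bird = 90.0      # km/h

time_to_collide = distance / (2 * v_train)   # closing speed is 120 km/h
bird_distance = v_bird * time_to_collide     # total flown; no series needed

print(time_to_collide, bird_distance)        # 1.0 90.0
```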

This puzzle has modified my "intuition" by making me more careful in describing what might be new information in a public declaration with content that apparently everyone already knows and to be more on the alert to situations that can create "domino" logic.
 
  • #180
Andreas C said:
No, you CAN intuitively understand it. It's hard, not complex; its difficulty is based on quantity instead of "quality". With 3 monks/islanders, it's very easy to understand. When you add more persons, you just add more steps. Finding the 8940198th powers of the 3rd, 1928th, 5904508139th and 99999929893895821st digits of ##2^{12345678901234567890}## without the aid of a computer is hard. Really hard. Is it complex? No. It's just a bunch of arithmetic. Proving Fermat's theorem is extremely hard because it's actually complex; it's not just a bunch of complications one on top of the other.
You have a point; the word "complex" is not the best word to describe some kinds of hardness.
 
  • #181
.Scott said:
Those "I know that you know that he knows that you know ..." statements were just distractions. It's more a case of "I might think that someone else might think that someone else might think ...". I thought stevendaryl's post #144 made it clear. Also, why bother with those statements when there is the much simpler sequence "Day 1 the guru says there is at least 1 blue-eyed. No suicide on day 2 demonstrates to everyone that there are at least 2. No suicide on day 3 ..."? It's like the bird that flies between two oncoming trains: you can solve for its total flying distance with an infinite series, or you can just calculate how long it will be before the trains collide. Do it the simple way.

This puzzle has modified my "intuition" by making me more careful in describing what might be new information in a public declaration with content that apparently everyone already knows and to be more on the alert to situations that can create "domino" logic.
I agree, it's technically simpler not to think about the information content of the prophet's public statement, but to concentrate on the day-by-day analysis. But my point (and my philosophic problem) is that it is not intuitive to ignore the information content.
 
  • #182
.Scott said:
Also, why bother with those statements when the much simpler sequence of "Day 1 the guru says there is at least 1 blue-eyed. No suicide on day 2 demonstrates to everyone that there are at least 2. No suicide on day 3 ..."?

Right. You can prove by induction that "no suicide on day n demonstrates to everyone that there are at least n people with blue eyes". That doesn't seem to involve "Alice knows that Bob knows that Carol knows that ..." But they have to be equivalent, I would think. So I would think that, implicitly, the reasoning about Day n must involve n levels of "knows that".
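The day-count bookkeeping in that induction can be sketched in a few lines (my own illustration; the function name and the "growing common-knowledge lower bound" framing are assumptions of the sketch, not notation from the thread):

```python
def days_until_departure(n_blue):
    """Day (counting from the guru's announcement) on which all n_blue
    blue-eyed monks deduce their own color and leave together."""
    bound = 1  # after the announcement: "at least 1 blue" is public
    day = 1
    while bound <= n_blue - 1:  # each blue sees n_blue - 1 blues
        # No suicide overnight demonstrates to everyone that there
        # are at least bound + 1 blues.
        bound += 1
        day += 1
    return day

# n blues leave on day n, for any n:
assert [days_until_departure(n) for n in (1, 2, 3, 100)] == [1, 2, 3, 100]
```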
 
  • #183
.Scott said:
Also, why bother with those statements when the much simpler sequence of "Day 1 the guru says there is at least 1 blue-eyed. No suicide on day 2 demonstrates to everyone that there are at least 2. No suicide on day 3 ..."?
That would not clarify the central point: that "demonstrating to everyone" - i.e. common knowledge - is different from "everyone knowing".
 
  • #184
I believe I may have managed to construct a formal, non-infinitely recursive, statement and proof of the problem, without the use of indeterminate terms like 'perfect logician', or the need for modal logic. I would be very appreciative of any comments anybody can offer. The statement and proof are too long to post here, so instead I have saved them as a pdf where they are accessible on this link. The latex version is saved here, in case anybody wants to use some of that in commenting.

What I have actually proved (E&OE) is that if there are n blues then they all leave/suicide no later than the nth day (which I have numbered as day n-1 because things can be expressed more concisely in symbolic logic if you start counting at 0). I don't think anybody (maybe not even Terence Tao, at least, as far as I can see) has yet proven the other part, which is that nobody leaves/suicides before that day. I think that is a much harder thing to prove, because it effectively involves proving that the proposition 'I am blue' is undecidable in the theories available to the blues prior to day n-1. And we all know how difficult proving undecidability is! I have an idea about how such a proof might go, and will play around with it to see where it goes, but I am not terribly optimistic.

I was surprised to find that, in the end, it was possible to axiomatise the problem without getting into any complex 'A knows that B knows that C knows that ...' jargon, nor was it necessary to invoke the infinitely recursive concept of 'common knowledge'. It seems to be possible to axiomatise it with about a dozen axioms, most of which are obvious, basic arithmetic, common-sense facts about the world, or prescribed rules of the island's society. These axioms are arranged in two series of nested logical theories in an 'object' language L0, with the theories in a sequence corresponding to days on the island. One series is for propositions that all monks know (Public Knowledge, but not necessarily Common Knowledge). The other is for propositions that individual monks know (which consists of all the facts that all monks know, plus any private knowledge). The only difference between corresponding elements of the two series is what a given monk can see on the given day, and can remember seeing from earlier days. Seeing generates private knowledge, since the monks are not allowed to communicate.

Only one axiom asserts anything about what others know. It is:
$$5:\ \ \ numBlue(d)\geq y+1 \wedge seesBlue(m,d)\leq y\to knowsSelfIsBlue(m,d)$$
Informally, it says that if there are at least ##y+1## blues, and monk ##m## - which may be the monk doing the reasoning or another monk - sees fewer blues than that, then monk ##m## will know that he is blue.
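Note that, for a given day ##d##, the two inequalities collapse to the single condition ##numBlue(d) > seesBlue(m,d)##: a suitable ##y## exists exactly when the strict inequality holds. As a sanity check, in code (my own paraphrase of the axiom, not andrewkirk's notation):

```python
def knows_self_is_blue(num_blue, sees_blue):
    """Axiom 5 paraphrased: some y satisfies num_blue >= y + 1 and
    sees_blue <= y exactly when num_blue > sees_blue; then monk m has
    accounted for every blue he can see, plus one more -- himself."""
    return num_blue > sees_blue

assert knows_self_is_blue(1, 0)        # the sole blue sees no blues
assert not knows_self_is_blue(3, 3)    # a brown-eyed monk sees all 3 blues
```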

I would be grateful for any comments or suggestions.
 
  • Like
Likes Demystifier
  • #185
stevendaryl said:
You can prove by induction that "no suicide on day n demonstrates to everyone that there are at least n people with blue eyes".
Hi Steven:

It seems to me that this proof requires some omitted prior common knowledge. Specifically, the required prior common knowledge is the "Everyone knows ... etc." that I presented in post #175. Without that prior knowledge, no blue eyed person can logically deduce they have blue eyes from seeing others with blue eyes.

That prior common knowledge is exactly what the guru's statement produces.

ADDITION
Actually, although the guru's announcement does provide the common knowledge I describe in post #175, a more limited common knowledge than that would be sufficient for the proof to be valid.

Let ##P(X)## represent: "Everyone with blue eyes knows ##X##."
Let ##Y## represent: "There is at least one person on the island with blue eyes."
Let ##P_1 = P(Y)##
Let ##P_k = P(P_{k-1})##
Then the required common knowledge, if there are ##N## persons with blue eyes, is:
##P_1 \wedge P_2 \wedge \dots \wedge P_N##
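That exactly the N-th level fails before the guru speaks can be checked mechanically in a small possible-worlds model. The sketch below is my own illustration (the model, the accessibility relation, and all names are assumptions, not anything Buzz specified): worlds are eye-color assignments, and an agent cannot distinguish two worlds that differ only in his own color.

```python
N = 3
actual = (1,) * N  # all three monks are blue-eyed (1 = blue)

def accessible(w, i):
    """Pre-announcement: agent i sees everyone's eyes but his own,
    so he considers w and the world with his own color flipped."""
    flipped = list(w)
    flipped[i] ^= 1
    return [w, tuple(flipped)]

def blue_knows(prop):
    """P(prop): every blue-eyed agent knows prop, i.e. prop holds in
    every world that agent considers possible."""
    return lambda w: all(prop(v)
                         for i in range(N) if w[i] == 1
                         for v in accessible(w, i))

Y = lambda w: any(w)  # "at least one blue-eyed person"

P = [None, blue_knows(Y)]           # P[1] = P(Y)
for k in range(2, N + 1):
    P.append(blue_knows(P[k - 1]))  # P[k] = P(P[k-1])

# Before the announcement P_1 and P_2 already hold at the actual
# world, but P_3 fails -- precisely the level the guru's statement adds.
assert [P[k](actual) for k in (1, 2, 3)] == [True, True, False]
```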

Regards,
Buzz
 
Last edited:
  • #186
stevendaryl said:
Right. You can prove by induction that "no suicide on day n demonstrates to everyone that there are at least n people with blue eyes". That doesn't seem to involve "Alice knows that Bob knows that Carol knows that ..." But they have to be equivalent, I would think. So I would think that, implicitly, the reasoning about Day n must involve n levels of "knows that".
You can't demonstrate it with "Alice knows that Bob knows..." because that is the wrong line of reasoning. You can demonstrate it with something more like: Alice can't be sure that Bob can be sure that Carol can be sure that there are no blue-eyed monks.
But if you want the whole thing you need your post 144 or my post 165.
 
  • #187
maline said:
That would not clarify the central point: that "demonstrating to everyone" - i.e. common knowledge - is different from "everyone knowing".
I'm not sure what the point of the "demonstrating to everyone" vs. "everyone knows" debate is.

The key point is that the guru makes an announcement that eliminates the possibility that no one is blue-eyed (##\overline{T_0}##), that all the blue-eyed monks get this message, and that all the blue-eyed monks know that all the blue-eyed monks got the message. If there are other semantics that do not unambiguously denote the situation as either compliant or non-compliant with those conditions, then don't use those semantics.
 
  • #188
andrewkirk said:
I don't think anybody (maybe not even Terence Tao, at least, as far as I can see) has yet proven the other part, which is that nobody leaves/suicides before that day.
Someone may leave/suicide before that day. The issue isn't to prove that it doesn't happen, but to specify the conditions under which it cannot happen.
If we change the problem by stipulating this sequence on day 0, then we can demonstrate that there is no information available to any blue-eyed monk that can lead to him proving that he is not blue-eyed:
1) The memories of all monks are erased.
2) All monks are given the rules.
3) All monks are introduced to each other.
4) All monks are told that all monks have gone through this same induction process.

So far, no information has been provided to any monk about their own eye color.

Since monks are not allowed to communicate anything about eye color, that source of information is not available.
The lack of suicides/departures does not communicate anything. This was the exception before. Although the monks were not allowed to communicate anything about eye-color before, the act of suicide or departure was implicitly exempted. If it had not been exempted, then the monks would have acted differently. For example, if they departed/suicided, they would have arranged to conceal this activity - or conversely, in situations where not departing/suiciding would have tipped off other monks, they would have flipped a coin: heads they hide or simulate suicide, tails they continue as usual.

By the rules, only the guru and rules-following monks know the eye color of other monks. If the guru stays silent, there are no other mechanisms for monks to communicate eye color, so no monk can ever discover his own eye color.

Which brings us back to this (slightly modified to accommodate DH's remarks in post #171):
.Scott said:
Try this one:
Suicide opportunities are only at midnights.

Given that all monks:
1) Recognize all other monks on the island that they share.
2) Always exercise perfect logic.
3) Will commit suicide if and only if they know they are blue-eyed.
4) Always know when a suicide has taken place on their island and who committed suicide.
5) Know that all other monks are like themselves, always logical, always follow the rules, always recognize all other monks, always know when a suicide has taken place on the island they are on.
6) All monks follow the rules and no monk commits suicide unless he knows that his eyes are blue.

You, a perfectly logical monk, arrive on the island on day 100 and see 12 blue-eyed monks (blues). On day 103, the guru makes his declaration. Then on day 104, six blues commit suicide.

If the guru makes no additional announcements, what if any suicide activity will occur?
 
  • #189
.Scott said:
The key point is that the guru makes an announcement that eliminates the possibility that no one is blue-eyed (##\overline{T_0}##), that all the blue-eyed monks get this message, and that all the blue-eyed monks know that all the blue-eyed monks got the message
But for "eliminating the possibility that no one is blue-eyed" to be meaningful at all, you need to point out that there is someone who doesn't know that someone knows that someone knows etc. that someone has blue eyes.
 
  • #190
maline said:
But for "eliminating the possibility that no one is blue-eyed" to be meaningful at all, you need to point out that there is someone who doesn't know that someone knows that someone knows etc. that someone has blue eyes.
Someone can't be sure that others can't be sure ... that others can't be sure that there are no blue-eyed monks. Then "everyone knowing that everyone has new information on the same day that there is a blue-eyed monk" becomes consequential.
 
  • #191
.Scott said:
Someone can't be sure that others can't be sure ... that others can't be sure that there are no blue-eyed monks. Then "everyone knowing that everyone has new information on the same day that there is a blue-eyed monk" becomes consequential.
Okay, so we're all saying the same thing.
 
  • #192
andrewkirk said:
Only one axiom asserts anything about what others know. It is:
$$5:\ \ \ numBlue(d)\geq y+1 \wedge seesBlue(m,d)\leq y\to knowsSelfIsBlue(m,d)$$
Informally, it says that if there are at least ##y+1## blues, and monk ##m## - which may be the monk doing the reasoning or another monk - sees fewer blues than that, then monk ##m## will know that he is blue.
I don't think this is correct. You need monk m to know that "numBlue(d)≥y+1".

andrewkirk said:
I don't think anybody (maybe not even Terence Tao, at least, as far as I can see) has yet proven the other part, which is that nobody leaves/suicides before that day. I think that is a much harder thing to prove, because it effectively involves proving that the proposition 'I am blue' is undecidable in the theories available to the blues prior to day n-1. And we all know how difficult proving undecidability is!
What is hard about undecidability? All you need to do is demonstrate two "models" - scenarios as to who is in fact blue, and how people react, satisfying all the axioms - one in which monk m is blue, and one in which he is not, while in both, no one leaves up until day n-3. Then on day n-2, monk m will be unable to differentiate between these models, and so cannot conclude he is blue. Similar models exist for each monk, so no one leaves until day n-1.
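The two-model argument can be illustrated with a toy simulation (my own sketch, not from the thread; it assumes the standard result that, after the announcement, all blues leave together on the day equal to the number of blues):

```python
def departure_schedule(colors):
    """colors[i] is True if monk i is blue. Assuming the standard
    result, all blues leave together on the day equal to the number
    of blues; return the set of monks leaving on each day."""
    n_blue = sum(colors)
    blues = {i for i, b in enumerate(colors) if b}
    return {day: (blues if day == n_blue else set())
            for day in range(1, n_blue + 1)}

def observed_by(m, colors, up_to_day):
    """Departure history monk m has seen by the end of up_to_day."""
    sched = departure_schedule(colors)
    return [sorted(sched.get(d, set()) - {m})
            for d in range(1, up_to_day + 1)]

n = 4
model_blue     = [True] * n                  # monk 0 blue: n blues
model_not_blue = [False] + [True] * (n - 1)  # monk 0 not blue: n-1 blues

# Through day n-2 the two models are indistinguishable to monk 0 ...
assert observed_by(0, model_blue, n - 2) == observed_by(0, model_not_blue, n - 2)
# ... and they first diverge on day n-1, when the n-1 blues of the
# second model all leave:
assert observed_by(0, model_blue, n - 1) != observed_by(0, model_not_blue, n - 1)
```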
 
Last edited:
  • #193
 
  • Like
Likes maline
  • #194
maline said:
I don't think this is correct. You need monk m to know that "numBlue(d)≥y+1".
If the antecedent ##numBlue(d)\geq y+1## is a theorem of the theory we are working in (##T(0,k)##) then all monks know it, including monk m, because that theory represents what all monks know on day ##k##.
Alternatively, if it is not a theorem, the entailment is satisfied because the antecedent is denied.
maline said:
What is hard about undecidability? All you need to do is demonstrate two "models" - scenarios as to who is in fact blue, and how people react, satisfying all the axioms - one in which monk m is blue, and one in which he is not, while in both, no one leaves up until day n-3. Then on day n-2, monk m will be unable to differentiate between these models, and so cannot conclude he is blue. Similar models exist for each monk, so no one leaves until day n-1.
I considered that possibility before posting, and couldn't see a way to make it rigorous - although that could just be because I haven't done much model theory. We need to prove that the monk is unable to differentiate between possible models. To me that sounds equivalent to our proving (within our meta-language L and meta-theory T2) that monk m cannot prove from within ##T(1,n-2,m)## in L0 that ##numBlue(n-2)>n-2##, in other words that the proposition ##numBlue(n-2)>n-2## is undecidable in ##T(1,n-2,m)##.
Can you think of a rigorous way to prove that the monk is unable to differentiate? It seems blindingly obvious to me that he cannot differentiate. But seeming obvious is one thing and proof is another.
 
  • #195
andrewkirk said:
Can you think of a rigorous way to prove that the monk is unable to differentiate? It seems blindingly obvious to me that he cannot differentiate. But seeming obvious is one thing and proof is another.
Didn't I do that at the start of post #188? Do I need to make it more detailed?
 
  • #196
andrewkirk said:
Alternatively, if it is not a theorem, the entailment is satisfied because the antecedent is denied.
No, for the antecedent to be denied you need the negation of the antecedent to be a theorem.

In English: The number of blues was n from day zero, but no one left because the statement about the number of blues was not yet a known fact. So your condition needs to be "If 'numBlue(d)≥y+1' is a theorem of T(1,d,m)" (The knowledge of monk m on day d). Of course that forces you to use a language and system that allow such determinations to be made internally, by the monks- they are thinking about what each other are thinking.
 
  • #197
andrewkirk said:
Can you think of a rigorous way to prove that the monk is unable to differentiate? It seems blindingly obvious to me that he cannot differentiate. But seeming obvious is one thing and proof is another.
Again, all you have to do is show that both situations are consistent with the axioms. For any finite n you can just list off what each monk sees and knows. I'm not sure how to do it for the pronumeral case, but it doesn't seem difficult...
 
  • #198
.Scott said:
Didn't I do that at the start of post #188? Do I need to make it more detailed?
What you have outlined there may be able to be turned into a proof, but we cannot be certain unless/until that is done. For instance, it relies on a concept of 'information', which is not formally defined and about which we have no axioms.

maline said:
No, for the antecedent to be denied you need the negation of the entailment to be a theorem.
I think you may have something there. I will need to reflect on that. If the objection holds then, as you say it may push us towards having to axiomatise by adding another level of language. When I started drafting the note, my initial idea was to not only have nested theories in sequences indexed by day, but matching nested languages too, with each language able to refer to what is provable in earlier languages. That would allow us to formally define the conditions of each monk on day d reasoning about what other monks know on day d-1, without running into the problem of a circular definition that prompted me to try axiomatising the problem. And fortunately, monks only need to know what the others know on previous days in order to reach the correct conclusions. They don't need to know anything about what the others know today. That sequence of languages would be a bit like the hierarchies of set-like objects that Russell used to avoid his own paradox - before the neater solution of ZF set theory was devised.

I then thought of the axiomatisation in my note and thought that would greatly shortcut the machinery of having to have a nested sequence of languages. But if you are right, it is an illegal shortcut so it'll be back to the nested language sequence to get a non-circular axiomatisation. I could probably manage the axiomatisation but I fear that then even semi-formally proving the result in the new, much larger, system would be beyond what my 21st century concentration span can cope with.

maline said:
Again, all you have to do is show that both situations are consistent with the axioms.
What I am unsure of here is how we can show that. It seems to me that what we need to prove is relative consistency - i.e. that, assuming the system is consistent to start with, it remains consistent if we add an axiom that says nobody leaves before day n-1 (relative, because otherwise we would need to start by proving the consistency of arithmetic, which may be an easy task for Gentzen, but not for me). I have been unable to convince myself that going through and showing that all axioms are satisfied by a model with n-1 blues is sufficient. How can I be sure that there won't be some obscure combination of the expanded set of axioms into a deduction that proves a contradiction, which cannot be proved in the original axiom set?
 
  • Like
Likes Demystifier and maline
  • #199
andrewkirk said:
my initial idea was to not only have nested theories in sequences indexed by day, but matching nested languages too, with each language able to refer to what is provable in earlier languages. That would allow us to formally define the conditions of each monk on day d reasoning about what other monks know on day d-1, without running into the problem of a circular definition that prompted me to try axiomatising the problem. And fortunately, monks only need to know what the others know on previous days in order to reach the correct conclusions. They don't need to know anything about what the others know today.
I like this solution; it is much more elegant than a recursion on the monks, and shows clearly how the system is finite and self-contained. Of course, you still need the theories to be indexed by "monk" as well, because the monks have different values for "SeesBlue" depending on their own colors.

I am still curious what a formalization of the full infinitely-recursive idea of "common knowledge" looks like. I do believe it's possible.

andrewkirk said:
I have been unable to convince myself that going through and showing that all axioms are satisfied by a model with n-1 blues is sufficient. How can I be sure that there won't be some obscure combination of the expanded set of axioms into a deduction that proves a contradiction, which cannot be proved in the original axiom set?
I'm not an expert here, but this is how I understand it: You build a model by explicitly defining all the constants, functions etc. of your language using terms from the basic underlying system - Robinson Arithmetic in our case. Then all wff's of your language have a defined truth value. The model consists of all the true wff's in this construction, and it is actually a subtheory of Robinson Arithmetic. Then its consistency is guaranteed by the consistency of the arithmetic (which we assume), so if you can show that the axioms you want are true wff's of the model, you have proven that they are consistent. If another such model includes the negation of one of the axioms, then you have proven that this axiom is independent of the others and undecidable in terms of them.
 
  • #200
maline said:
Again, all you have to do is show that both situations are consistent with the axioms. For any finite n you can just list off what each monk sees and knows.
Hi Maline:

It is unnecessary to consider all monks. It is sufficient to consider only blue eyed monks. If there are N blue eyed monks, then each of them will see N-1 blue eyed monks. Until the guru tells all of them that there is at least one blue eyed monk, each blue eyed monk, say M, knows that there are two possibilities: if M has blue eyes, then there are N blue eyed monks, and if M's eyes are not blue, then there are N-1 blue eyed monks. M does not know and cannot deduce that he has blue eyes even after N or more days have passed with no actions taken by the blue eyed monks that M can see.

After the guru announces to all N blue monks that there is at least one blue eyed monk, each blue eyed monk, say M, can deduce that:
If M does not have blue eyes, then all of the N-1 blue eyed monks that M can see will know that they all have blue eyes after N-1 days, and they will all take appropriate action then. If they do not take that action, M will deduce that he has blue eyes at that point.

Regards,
Buzz
 