Does probability depend on the amount of information one has?

  • Context: Undergrad
  • Thread starter: philosophe
  • Tags: Information, Probability

Discussion Overview

The discussion centers around the nature of probability and whether it is influenced by the amount of information one possesses. Participants explore this concept through examples, including the probability of a mail room being open at a specific time and the probabilities associated with answering multiple-choice questions. The conversation touches on different interpretations of probability, particularly in relation to prior information.

Discussion Character

  • Exploratory
  • Debate/contested
  • Conceptual clarification

Main Points Raised

  • One participant argues that their friend's assertion of a 100% probability for the mail room being open is based on a single instance, which raises questions about the validity of such an estimate.
  • Another participant suggests that the friend's estimate is unbiased but has infinite variance, drawing a parallel to coin tosses where previous outcomes do not determine future results.
  • A different viewpoint distinguishes between prior probability and the probability being discussed, indicating a potential misunderstanding between the two concepts.
  • One participant introduces the Bayesian interpretation of probability and emphasizes the importance of prior information in understanding probability, referencing the late Edward Jaynes' work.
  • There is a mention of significant mathematical theorems and their relevance to the discussion of probability, suggesting a broader philosophical context.

Areas of Agreement / Disagreement

Participants express differing views on the nature of probability, particularly regarding the influence of prior information and the validity of estimates based on limited data. There is no consensus on the interpretation of probability or the implications of the examples provided.

Contextual Notes

The discussion highlights limitations in understanding probability based on single instances and the potential for infinite variance in estimates. The distinction between prior probability and other forms of probability remains unresolved.

philosophe
The following is the account of an argument that I had with my friend the other day:

At 7:45 PM my friend asked me how late the mail room would be open. I said that I did not know, but that it was unlikely the mail room would be open that late. He said he was not busy at the moment, so he might as well go and check; he came back five minutes later with his mail and said that it was in fact open. I said that although it happened to be open, the probability of the mail room being open at 7:45 PM was quite low, so taking that chance wasn't necessarily the best decision. But my friend argued that the probability of the mail room being open at 7:45 PM was 100%, because there is no instance in which he could go down to the mail room at 7:45 PM and find it closed!

Compare this to a multiple choice quiz with four options for every question. Intuitively it seems that putting down A, B, C or D would give you a 25% chance of being correct. But if the answer is B and you mark it D, then you have a 0% chance of being correct. How are the two cases different, if they are different?
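The contrast between the two cases can be made concrete in a short simulation. This is a minimal sketch (the option labels and the choice of "B" as the correct answer are illustrative): the 25% figure describes the guesser's state of information, while the 0% figure assumes the answer is already known.

```python
import random

random.seed(1)
options = ["A", "B", "C", "D"]
answer = "B"  # fixed, but unknown to the guesser

# With no information about the answer, a uniform guess is right
# about 25% of the time across many attempts.
trials = 100_000
hits = sum(random.choice(options) == answer for _ in range(trials))
print(hits / trials)  # close to 0.25

# With full information (the answer is known to be B), marking D is
# simply wrong: the "probability" collapses to 0 or 1.
guess = "D"
print(1.0 if guess == answer else 0.0)  # 0.0
```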

Basically I want to get to the question of whether or not probability depends on the amount of information one has.
 
What you're trying to do is estimate a population parameter (the probability that the mail room will be open on any arbitrary day at 7:45 PM) based on a sample of size one.
The estimate of the population parameter is still unbiased, so your friend is right in saying his estimate is 100% since there is no evidence that it is ever closed, but his estimate has infinite variance.
By his logic, if you toss a coin and it comes up heads, all subsequent tosses of the same coin must be heads, since the coin has never shown a tail.
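The size-one-sample point can be illustrated numerically. A minimal sketch, assuming a hypothetical true probability of 0.3 that the room is open (that value is invented for illustration):

```python
import random

random.seed(0)

def estimate(p_true, n):
    """Fraction of 'open' outcomes observed in n independent visits."""
    return sum(random.random() < p_true for _ in range(n)) / n

p_true = 0.3  # hypothetical true chance the room is open at 7:45 PM
estimates = [estimate(p_true, 1) for _ in range(10_000)]

# Unbiased: across many repetitions the estimates average out to about p_true...
print(sum(estimates) / len(estimates))  # near 0.3
# ...but any single size-one estimate is always 0% or 100%, like the friend's.
print(set(estimates))  # {0.0, 1.0}
```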
 
You're talking about prior probability (the chance that it would be open on a given day at that time) and he's just talking about probability.
 
philosophe said:
Basically I want to get to the question of whether or not probability depends on the amount of information one has.

The buzzwords you want are "the Bayesian interpretation of probability" and "prior information". The late Edward Jaynes was working on his magnum opus when he died; fortunately it has been made available on the web and makes for fascinating reading; see this website.
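In the Bayesian picture, the friend's single observation updates a prior rather than fixing the probability at 100%. A minimal sketch, assuming a uniform Beta(1, 1) prior over the unknown chance that the room is open (the prior choice is an assumption for illustration):

```python
from fractions import Fraction

# Uniform Beta(1, 1) prior over p = P(mail room open at 7:45 PM):
# alpha tracks "open" observations + 1, beta tracks "closed" observations + 1.
alpha, beta = Fraction(1), Fraction(1)

# One observation of the room being open updates the posterior to Beta(2, 1).
alpha += 1

# Posterior mean, i.e. Laplace's rule of succession: (opens + 1) / (visits + 2).
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 2/3 -- above the prior 1/2, but well short of 100%
```

One observation does shift the probability toward "open", which is exactly the sense in which probability depends on the information one has; it just doesn't shift it all the way to certainty.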

From your handle I guess you might be an 18th-century natural philosopher (i.e. what we'd now call a scientist), but if you are a philosophy student, I have argued that the most important problem in philosophy, which philosophers are almost entirely ignoring (in preference to far less momentous problems like space and time), is the problem of the interpretation of probability (and statistics). I have also pointed out that a result in ergodic Ramsey theory widely regarded as the most important theorem of the previous century is almost unknown to the public:

The most important theorem for 21st century mathematics is Szemerédi's theorem, which belongs to ergodic Ramsey theory; see Terence Tao, What is Good Mathematics?, which proposes Szemerédi's theorem as the canonical example of good mathematics. For philosophy as well as math students, there is no topic more important.
 
