On the myth that probability depends on knowledge
by A. Neumaier
Tags: depends, knowledge, myth, probability

#1
May 2, 2011, 05:22 AM

Sci Advisor
PF Gold
P: 1,942





#2
May 2, 2011, 01:01 PM

Sci Advisor
P: 3,175

Someone will have to explain what "objective probabilities" are. If you begin with the assumption that there are probabilities that would be agreed upon by every observer, I suppose you automatically make them independent of knowledge, by postulating that all those observers have the same knowledge.




#3
May 2, 2011, 01:18 PM

Sci Advisor
HW Helper
P: 1,322

This thread title made me laugh, so I'll bite.
What is the objective probability that the gas molecules in a box of air are in configuration x? Given that the gas molecules were in a definite state in the past, can the "objective" answer be anything other than [tex] \delta(x - x_{\mbox{actual}}(t)) [/tex] (schematically)? I'm genuinely curious what people think. 



#4
May 2, 2011, 01:32 PM

Sci Advisor
P: 8,470

On the myth that probability depends on knowledge
Objective probabilities in an experiment can be understood in frequentist terms: the frequency with which some event would occur in the limit as the number of trials of the experiment goes to infinity, with the factors you wish to control (because you want the probability of A given some facts B about the conditions) held the same in each trial while the others are allowed to vary. For example, on each trial of Physics Monkey's experiment involving a box of air we might make sure that all the macroscopic conditions such as temperature, pressure, and volume are identical; then, in the limit as the number of trials goes to infinity, we can look at the fraction of trials in which the molecules were in configuration x. This would define an objective probability that a box of air with a given temperature, pressure, volume, etc. has its molecules in configuration x.
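The limiting-frequency idea can be sketched with a toy simulation. The event and its true probability (0.3) are made up purely for illustration; the point is only that the empirical frequency over repeated trials, with the controlled conditions held fixed, settles toward a single number that no observer's knowledge enters into:

```python
import random

def empirical_frequency(trials, p_true=0.3, seed=0):
    """Relative frequency of an event with true probability p_true
    over the given number of independent, identically prepared trials."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.random() < p_true)
    return hits / trials

# As the number of trials grows, the frequency approaches the
# "objective" probability 0.3.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(n))
```

The seed is fixed only to make the toy run reproducible; the frequentist definition itself refers to the idealized infinite limit, not to any finite run.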




#5
May 2, 2011, 01:48 PM

P: 5,462

I would be interested in the forum's comments on the following scenario.
Consider the PC screen you are looking at. It has M pixels, each of which can take on N colours. This limits us to a finite number of states of the screen. Some of these states offer information, some do not. The first question of interest is: what is the entropy change in passing from one screen state to another, given that zero energy change is involved? The second question is more subtle. For any pixel, the presence of any colour (except one) implies a signal, which implies information. It is possible to draw up an error-correcting scheme to obtain the 'correct' pixel colour for any colour except one. A black pixel implies either that the signal specifies no colour or that the signal is absent for some reason (i.e. no connection). It is not possible to distinguish between these two cases. 
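For the counting part of this scenario: a screen of M pixels with N colours each has N^M possible states, so if all states were equally likely its Shannon entropy would be M·log2(N) bits. A minimal sketch (the display dimensions and colour depth below are hypothetical examples, not anything from the post):

```python
import math

def max_screen_entropy_bits(pixels, colours):
    """Maximum Shannon entropy (in bits) of a screen with `pixels` pixels,
    each independently taking one of `colours` equally likely colours:
    log2(colours ** pixels) = pixels * log2(colours)."""
    return pixels * math.log2(colours)

# Hypothetical example: a 1920x1080 display with 24-bit colour.
print(max_screen_entropy_bits(1920 * 1080, 2 ** 24))
```

This is only the upper bound for a uniform distribution over states; the post's actual question, about the entropy change between two particular states at zero energy cost, depends on which distribution over states one assigns, which is exactly the point under debate in the thread.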



#6
May 2, 2011, 01:53 PM

Sci Advisor
PF Gold
P: 1,942

Thus in your case there is no x_actual, since there are many boxes of air, and what is actual depends on the box; but the probability does not. 



#7
May 2, 2011, 01:56 PM

Sci Advisor
PF Gold
P: 1,942

The probability of decay of any particular radioactive isotope is a well-defined, measurable quantity, independent of what observers know about this isotope. 
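This well-defined quantity follows directly from the isotope's half-life: the probability that a single nucleus decays within time t is 1 - 2^(-t / T_half), with no reference to any observer. A minimal sketch, using carbon-14's half-life of about 5730 years as the example:

```python
def decay_probability(t, half_life):
    """Probability that a single radioactive nucleus decays within time t,
    given its half-life: P = 1 - 2**(-t / half_life)."""
    return 1.0 - 2.0 ** (-t / half_life)

# Carbon-14, half-life roughly 5730 years:
print(decay_probability(5730, 5730))      # 0.5, by definition of half-life
print(decay_probability(2 * 5730, 5730))  # 0.75 after two half-lives
```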



#8
May 2, 2011, 01:58 PM

Sci Advisor
PF Gold
P: 1,942





#9
May 2, 2011, 02:09 PM

Sci Advisor
P: 3,175

As to the mathematics, I compare it to the following very ordinary situation. Let ABC be a right triangle with right angle BCA, and let BC = 3. Does the length of the hypotenuse depend on our knowledge of side CA, or does it have some "objective" length no matter what we know or don't know? On the one hand, you can argue that the statement "Let ABC be a right triangle..." specifies a particular right triangle, and that its hypotenuse must therefore have an objective length regardless of our state of knowledge. On the other hand, you can argue that the length of the hypotenuse is a function of what else is known about the triangle.

As to dealing with any problem of forgetting information, the situation with Bayesian probability is no worse than the situation with triangles. In the above situation, suppose that we are given that CA = 4 and then you "forget" that fact. Does the hypotenuse go from being 5 to being unknown? A reasonable practical answer could be yes. For example, if someone read you a homework problem that included the information CA = 4 and then said, "No, wait. I told you wrong. Forget that. The side CA wasn't given," would you keep thinking that the hypotenuse must be 5? 
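The triangle analogy can be made concrete in a couple of lines (the function and its None convention are my own illustrative choices, not from the post): with CA known, the hypotenuse is determined by Pythagoras; with CA "forgotten", the same question has no determinate answer.

```python
import math

def hypotenuse(bc, ca=None):
    """Length of hypotenuse AB of a right triangle with legs BC and CA.
    If CA is unknown, the length is undetermined (any value > BC is possible)."""
    if ca is None:
        return None  # knowledge of CA withdrawn: no single answer
    return math.hypot(bc, ca)

print(hypotenuse(3, 4))  # 5.0 while we know CA = 4
print(hypotenuse(3))     # None once CA is "forgotten"
```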



#10
May 2, 2011, 02:15 PM

Sci Advisor
HW Helper
P: 1,322





#11
May 2, 2011, 02:21 PM

Sci Advisor
P: 8,470





#12
May 2, 2011, 02:22 PM

Sci Advisor
HW Helper
P: 1,322

And besides, who are you to say that I cannot think about probabilities for a single case? You are just declaring by fiat that the Bayesian school is wrong. But what would you say to the standard sort of gambling example? Imagine I offer you the following game. I'll roll one die, and I don't tell you anything about the die except that it is six-sided. You can pick either {1} or {2,...,6}, and if your set comes up then you get a big wad of money. Assuming you like money, which set would you choose? The choice to go with {2,...,6} in the absence of other information is a form of probabilistic reasoning about only a single event. 
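The reasoning here is just the principle of indifference plus expected value: knowing only that the die is six-sided, assign each face probability 1/6, and compare the expected payoffs of the two choices. A sketch, with a made-up payout amount standing in for the "wad of money":

```python
from fractions import Fraction

# Uniform prior over the six faces (principle of indifference).
p_single = Fraction(1, 6)   # probability assigned to the set {1}
p_rest = Fraction(5, 6)     # probability assigned to the set {2,...,6}

payout = 100  # hypothetical size of the wad of money

ev_single = p_single * payout
ev_rest = p_rest * payout
print(ev_single, ev_rest)   # expected winnings: pick {2,...,6}
```

Note that this is a bet on one roll of one die; no ensemble of repeated rolls is invoked anywhere in the calculation.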



#13
May 2, 2011, 02:37 PM

P: 5,462





#14
May 2, 2011, 02:50 PM

P: 2,799

One should ask: what is the whole point of the probability measure in the first place? Either you just define some measures, decide some axioms, and you've got a measure-theoretic definition, some mathematics, but then what? Or you see it as a way to determine the odds of a possible future, in the context of inductive inference: as a guide for further action. In this case the ensemble makes no sense. The ensemble is a descriptive view; it is completely sterile as a tool for placing bets on the future.

I think we can all agree that the question isn't to discuss the axioms of probability theory. The question is what value they have in realistic situations, where we need to make decisions based upon incomplete information. The main value of probability is not just statistics or bookkeeping. Not in my book.

I haven't had time to read up on anything yet, but I noticed Neumaier referring to someone (Whittaker, I think?) who derived the probability axioms starting from expectations. In that context I'll also note that Cox, Jaynes and others derived probability as a somewhat unique set of rules of rational inference. This does tie probability to inductive inference.

/Fredrik 



#15
May 2, 2011, 02:57 PM

P: 2,799

So scientific knowledge is really nothing but the negotiated agreement of a group of observers. But the point is that this consensus is still not objective; it can only be judged from a particular observer, or from another, competing group of observers. There IS no outside or external perspective from which scientific agreements are judged. This is why, technically, it is still the knowledge of a particular observer (or just the agreement of a GROUP of observers).

/Fredrik 



#16
May 2, 2011, 03:07 PM

P: 2,799

For me the whole purpose of probability is that it is a measure of the odds, or propensity, conditional upon the given situation. The question to which probability theory is the answer (in the inductive-inference view) is: what is the mathematical framework for rationally rating degrees of belief, and thus for the rational constraints on any rational action in a game-theoretic scenario?

This renders the measure completely observer-dependent, where the observer IS the "player", the one placing bets and taking risks. The only problem is of course that the above well-known view is only classical. I.e., it only works for commuting sets of information, which are combined with classical logic. We need the corresponding generalisation to rational actions based upon the corresponding "measure" that is constructed by "adding" non-commuting information sets.

All this does not need any ensembles or imaginary "repeats". Instead the EXPECTATIONS of the future are inferred from some rational measure on the futures, based on the present. In the classical case it's just classical statistics and logic. The quantum case is confused, but it's some quantum-logic form of the same. There is no coherent understanding of it yet, and I think this is at the root of a lot of the confusion.

/Fredrik 



#17
May 2, 2011, 03:15 PM

P: 2,799

I've got my own view and don't claim to be a pure Bayesian, but I'll throw in my two cents.
If there is NO history at all, I'd say not even the probability space makes sense. In this sense even the probability SPACE can fade out and be erased. This concerns what happens to all the points in state space that are rarely or never visited during the lifetime of a system: are they still real, or physical?

/Fredrik 



#18
May 2, 2011, 04:16 PM

P: 2,799

Then one asks: what is the purpose of this expectation? Is the PURPOSE just to compare frequencies of historical events, in retrospect? No. That has no survival value. I think the purpose is as an action guide. This means that it does not in fact matter whether the expectations are met or not; they still constrain the actions of the individual system holding them. Just look at how a poker game works: expectations rule rational actions. It doesn't matter whether the expectations were "right" in retrospect, because by then there are new decisions to make. You always look forward, not back.

/Fredrik 

