On the myth that probability depends on knowledge 
#145
May 11, 2011, 02:21 PM

Mentor
P: 17,530




#146
May 11, 2011, 03:03 PM

Sci Advisor
P: 1,943

The behavior of a physical system is independent of what anyone knows or doesn't know about it, hence doesn't depend on knowledge. Physics describes physical systems as they are, independent of who considers them and who knows how much about them. The probabilities in physics express properties of Nature, not of the knowledge of observers. At a time when nobody was there to know anything, the decay probability of C14 atoms was already the same as today, and we use this today to date old artifacts. Poor or good knowledge only affects how close one comes, with one's chosen description, to what actually is the case. 
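The dating procedure alluded to here can be sketched in a few lines. This is an illustrative example, not from the post itself: it assumes pure exponential decay and uses the commonly quoted C-14 half-life of about 5730 years; the function name is mine.

```python
import math

# Illustrative sketch: dating a sample from a known half-life,
# assuming pure exponential decay N(t) = N0 * exp(-lambda * t).
HALF_LIFE_YEARS = 5730.0  # approximate C-14 half-life
decay_const = math.log(2) / HALF_LIFE_YEARS

def age_from_fraction(remaining_fraction):
    """Age of a sample given the fraction of the isotope still present."""
    return -math.log(remaining_fraction) / decay_const

# A sample retaining half its C-14 is one half-life old.
print(round(age_from_fraction(0.5)))   # 5730
print(round(age_from_fraction(0.25)))  # 11460
```

The decay constant is an objective property of the isotope, which is the point of the post: the same formula applies whether or not anyone is around to know the outcome.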


#147
May 11, 2011, 05:00 PM

P: 2,799

It's in this sense that even the "data", if you prefer that word, is encoded in the microstructure that constitutes the observing system. IMO, there exist no fixed, timeless, observer-independent degrees of freedom of nature. Even the DOFs are observer-dependent; thus so is any real data (encoded in physical states). The belief in some fundamental DOFs that encode "data" in the objective sense would be nice, and a lot of people do think this, but it's nevertheless a plain conjecture that has no rational justification. What do exist are effective DOFs that interacting observers agree upon; so much is clear, and so much is necessary. Anything beyond this is, IMHO, an assumption structural realists can't do without. /Fredrik 


#148
May 11, 2011, 07:36 PM

Mentor
P: 17,530

As I said before, a Bayesian definition of probability does depend on knowledge. I don't know why you bother asserting the contrary when it is such a widely-known definition of probability. 


#149
May 11, 2011, 07:54 PM

Sci Advisor
P: 1,395

I am still stuck on the concept that you can't make meaningful statements about the probabilities of single events. What about the following scenario:
1) You have a group of 2 atoms of isotope A, with a 5-second half-life.
2) You have a group of 2 atoms of isotope B, with a 5-year half-life.
What is the probability that one of the A atoms will decay before one of the B atoms? From posts Arnold Neumaier has made on this thread, it seems he will say that the question as I have phrased it above is not scientifically meaningful. If this is true (i.e. Arnold does think that it is meaningless, and I have not misunderstood something), then please answer the following question: How big do I have to make the pools (5 atoms, 5000 atoms, 5x10^23 atoms) before the question DOES become scientifically meaningful? Because if I have not misunderstood, other statements Prof. Neumaier has made on this thread indicate that he *does* think scientifically meaningful statements can be made about probabilities of events from "large ensembles", so it seems that at some point the pools must reach a critical size where "statistical significance" (or whatever the proper term is) is achieved. 
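For what it's worth, the scenario above has a clean answer under the standard exponential-decay model (an assumption of this sketch, not something the post asserts): the first decay among n atoms of rate lambda is itself exponential with rate n*lambda, so the chance that an A atom decays first is lam_a/(lam_a + lam_b). The variable names and the Monte Carlo check are mine.

```python
import math
import random

# Sketch of the scenario in the post: two A atoms (5 s half-life) vs
# two B atoms (5 yr half-life), each decaying as an independent
# exponential. The minimum of n exponentials of rate lam is exponential
# with rate n*lam, so P(A first) = 2*lam_a / (2*lam_a + 2*lam_b).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
lam_a = math.log(2) / 5.0                       # per second
lam_b = math.log(2) / (5.0 * SECONDS_PER_YEAR)  # per second

p_exact = lam_a / (lam_a + lam_b)  # ~0.99999997

# Monte Carlo check of the same quantity.
random.seed(0)
trials = 100_000
wins = sum(
    min(random.expovariate(lam_a) for _ in range(2))
    < min(random.expovariate(lam_b) for _ in range(2))
    for _ in range(trials)
)
print(p_exact)
print(wins / trials)  # close to 1.0
```

Whether such a number is "scientifically meaningful" for a single pair of pools is, of course, exactly what the thread is debating; the calculation itself only refers to the theoretical ensemble.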


#150
May 12, 2011, 02:57 AM

Sci Advisor
P: 1,943




#151
May 12, 2011, 03:00 AM

Sci Advisor
P: 1,943

from it. The subjective interpretation may be legitimate as a guide to actions, but it is not science. I have been successfully using Bayesian methods without this concept of Bayesian probability, in an objective context. 


#152
May 12, 2011, 03:24 AM

P: 2,799

The big difference is that the action space of a computer is largely constrained. A computer cannot ACT upon its information in the same way a human can. The computer can at best print buy or sell recommendations on the screen. But the feedback to programs and computers is different: a computer program that makes good predictions gets to live; bad programs are deleted. In theory, however, one can imagine an AI system that uses the feedback from stock market business to secure its own existence. Then systems that fail to learn will die out, and good learners are preferred. So the analogy is different just because the state and action space of a "classical normal computer" IS fixed, at least in the context we refer to it here, as an abstraction. A general system in nature does not have a fixed state or action space. This is exactly how learning works. "Artificial" intelligence with preprogrammed strategies and selections fails to be real intelligence just because there is no feedback to revise and evolve the action space. Some self-modifying algorithms can partly do this, but it's still living in a given computer. This is in principle not different from how the cell-based complex biological system we call the human brain can ENCODE and know about the stock market. The biggest difference is that of complexity, and the flexibility of state and action spaces. The actions possible for a computer are VERY constrained, because of how it's built. /Fredrik 


#153
May 12, 2011, 03:32 AM

Sci Advisor
P: 1,943

This is the case, e.g., when analysing past data. You can say p% of the population of the US in the census of year X earned above Y dollars. It is also the case when you have a theoretical model defining the ensemble. You can say the probability of casting an even number with a perfect die is 50%, since the die is an anonymous member of the theoretical ensemble. But you cannot say anything about the probability of casting an even number in the next throw at a particular location in space and time, since this is an ensemble of size 1, so the associated probabilities are provably 0 or 1.

In practice, interest is mainly in the prediction of incompletely specified ensembles. In this case, the scientific practice is to replace the intended ensemble by a theoretical model of the ensemble, which is precisely known once one estimates its parameters from the available part of the ensemble, using a procedure that may also depend on other assumptions such as a prior (or a class of priors whose parameters are estimated as well). In this case, all computed/estimated probabilities refer to this theoretical (often infinitely large) ensemble, not to a particular instance. (From a mathematical point of view, ensemble = probability space, the sample space being the set of all realizations of the ensemble.)

Now there is a standard way to infer from the model statements about the intended ensemble: One specifies one's assumptions going into the model (such as independence assumptions, Gaussian measure assumptions, etc.), the method of estimating the parameters from the data, a confidence level deemed adequate, and which statistical tests are used to check the confidence level for a particular prediction in a particular situation. Then one makes a definite statement about the prediction (such as ''this bridge is safe for crossing by trucks up to 10 tons''), accompanied perhaps by mentioning the confidence level.

The definite statement satisfies the scientific standards of derivation and is checkable. It may still be right or wrong; this is in the nature of scientific statements. If a method of prediction and assessment of confidence leads to wrong predictions significantly more often than the assigned confidence level allows, the method will be branded as unreliable and phased out of scientific practice. Note that this again requires an ensemble, i.e., many predictions, to be implementable. Again, a confidence level for a single prediction may serve only as a subjective guide.

The statement ''Isotope X has a half-life of Y years'' is a statement about the ensemble of all atoms representing isotope X. A huge subensemble of the still far huger full ensemble has been observed, so that we know the objective value of Y quite well, with a very small uncertainty, and we also know the underlying model of a Poisson process. If we now have a group of N atoms of isotope X, we can calculate from this information a confidence interval for any statement of the form ''In a time interval T, between M-K and M+K of the N atoms will decay''. If the confidence is large enough, we can state it as a prediction that in the next experiment checking this, this statement will be found correct. And we would be entitled to publish it if X were a new or interesting isotope whose decay was measured by a new method, say.

Nowhere in all I said was any reference made to ''a measure of a state of knowledge'', so the ''Bayesian probability interpretation'' as defined in http://en.wikipedia.org/wiki/Bayesian_probability is clearly inapplicable. 
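The M-K .. M+K interval described above can be sketched concretely. This is my own minimal illustration, assuming (as the post does) independent exponential decays, so the count of decays in time T is Binomial(N, p) with p = 1 - 2^(-T/t_half); the normal approximation and the function name are my choices.

```python
import math

# Sketch of the interval in the post: for N atoms of an isotope with
# half-life t_half, the number decaying within time t_obs is
# Binomial(N, p) with p = 1 - 2**(-t_obs / t_half). The normal
# approximation gives an interval M-K .. M+K at confidence level z.
def decay_interval(n_atoms, t_half, t_obs, z=1.96):  # z=1.96 ~ 95%
    p = 1.0 - 2.0 ** (-t_obs / t_half)
    mean = n_atoms * p                          # M
    k = z * math.sqrt(n_atoms * p * (1.0 - p))  # K
    return mean, k

# One million atoms observed for exactly one half-life: p = 0.5.
m, k = decay_interval(n_atoms=10**6, t_half=5.0, t_obs=5.0)
print(f"expect {m:.0f} +/- {k:.0f} decays")  # expect 500000 +/- 980 decays
```

Note how the relative width K/M shrinks like 1/sqrt(N), which is one way to read the earlier question about how large the pools must be before the statement becomes sharp.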


#154
May 12, 2011, 04:29 AM

P: 154

If I created a device to drop a coin the exact same way each time, and I put the coin in heads up each time, the first drop would presumably be the only drop with a probability of 50-50. It seems the knowledge of that outcome would affect the probability of every other drop. Please help me out if my thinking is flawed.



#155
May 12, 2011, 05:32 AM

Mentor
P: 17,530

The point is that it is perfectly well accepted to consider probability to depend on knowledge. It is not a myth. Your continued refusal to recognize this obvious fact makes you seem irrational and biased. How can anyone reason or debate with someone who won't even acknowledge commonly accepted meanings of terms? 


#156
May 12, 2011, 06:26 AM

Sci Advisor
P: 1,943




#157
May 12, 2011, 06:34 AM

Sci Advisor
P: 1,943

I never saw anyone before equating interpretation with definition. They are worlds apart. And about the semantics of "myth", see http://en.wikipedia.org/wiki/Myth : 


#158
May 12, 2011, 06:45 AM

P: 154




#159
May 12, 2011, 07:03 AM

Sci Advisor
P: 1,943

The objective probability is independent of how much an observer knows, and can be determined approximately from sufficiently many experiments. To someone who knows none or only a few experimental outcomes, the objective probability will be unknown rather than 50-50. The subjective probability depends on the prejudice an observer has (encoded in the prior) and the amount of data (which modifies the prior), so it may well be 50-50 for an observer with no knowledge. 
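The prior-plus-data picture in this post can be made concrete with a standard Beta-Binomial update. This is my own illustrative sketch, not anything from the thread: a Beta(a, b) prior on a coin's heads probability has posterior mean (a + heads)/(a + b + tosses), and the uniform prior Beta(1, 1) encodes the "no knowledge" observer.

```python
# Sketch of the contrast in the post: the subjective (Bayesian)
# estimate starts from a prior and is pulled toward the data, while
# the raw frequency estimate uses the data alone. Beta(a, b) prior on
# the heads probability; posterior mean after `heads` in `tosses`.
def posterior_mean(heads, tosses, prior_a=1.0, prior_b=1.0):
    return (prior_a + heads) / (prior_a + prior_b + tosses)

# With no data, a uniform prior gives the "50-50" of an ignorant observer.
print(posterior_mean(0, 0))       # 0.5
# With lots of data the prior washes out and both estimates converge.
print(posterior_mean(700, 1000))  # ~0.6996
print(700 / 1000)                 # 0.7
```

The first output is exactly the subjective 50-50 of the ignorant observer mentioned above; the last two show the subjective estimate converging toward the objective frequency as data accumulates.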


#160
May 12, 2011, 07:06 AM

Mentor
P: 17,530




#161
May 12, 2011, 07:52 AM

P: 154




#162
May 12, 2011, 07:58 AM

Sci Advisor
P: 1,943

Subjectively, it depends on what you are willing to substitute for your ignorance. If _I_ were the subject and had no knowledge, I'd defer judgment rather than assert an arbitrary probability. This is the scientifically sound way to proceed. 

