On the myth that probability depends on knowledge

  • #101
Fra said:
Umm... I'd say physics (and natural science in general) is ALL about us learning ABOUT nature, what we can say about nature.

''us learning'' is the subject of psychology, not of physics. The subject of physics is the objective description of the kinematics and dynamics of systems of Nature.
 
  • #102
A. Neumaier said:
''us learning'' is the subject of psychology, not of physics.

In the case of an observer = human scientist, that's of course correct. I agree.

But as I've argued, the subjective interpretation would make no sense if it were all about human observers. Science is FAPP objective in terms of human-human comparisons.

All human scientists will agree upon the description of nature in the sense physicists talk about. We agree there.

But THE physics is about how one subsystem of the universe "learns" about the states and behaviour of the other subsystems. It's about how the state of a proton encodes and infers expectations about its environment (fellow observers, such as other neutrons, electrons etc.), and how the action of the proton follows from rationality constraints in this game.

This will have testable predictions for human science, and it may help us understand how interactions are rescaled as the observer scales down from a human laboratory device to a proton, which is then a proper inside observer (except that WE humans observe this inside observer from the outside, in the lab).

So the physics analogy is that the action of a proton is similarly a game. The action of the proton is based upon its own subjective expectations of its environment. It tests these by acting ("driving over the bridge"). A stable proton in equilibrium will have a holographically encoded picture corresponding to external reality. But a system not in equilibrium, or not in agreement with its environment, will heavily evolve and change its state; sometimes it even decomposes and is destroyed.

This is the "learning" I'm talking about. But it's actually analogous to how science works, so the analogy is still good; the real thing is that one subsystem of the universe makes inferences about its physical environment. We humans are like very MASSIVE observing systems that observe these inside observers interacting. So human science IS like a DESCRIPTION of the inside game. BUT as we also consider cosmological models, this asymmetry does not hold, and we are forced to consider that human scientists are indeed also inside observers playing a game, not JUST descriptive scientists. Except of course that on a cosmological scale all EARTH-BASED human scientists will clearly still agree upon science.

So nothing of what I say threatens the integrity and soundness of science. On the contrary, it deepens it.

/Fredrik
 
  • #103
A. Neumaier said:
Originally Posted by Studiot
Subjective probability has a place in physical science.

No, since it is not testable.

It is testable: humans are testable!
 
  • #104
lalbatros said:
It is testable: humans are testable!

There is a difference between testing a human and testing the assertion that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time. The latter is impossible and proves that the statement has no scientific content.
 
  • #105
Fra said:
In the case of and observer = human scientist, that's of course correct. I agree.
In the case of a machine, it is a matter of artificial intelligence, not of physics.

Physics is about interpreting experiments in an observer-independent way.
 
  • #106
There is a difference between testing a human and testing the assertion that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time. The latter is impossible and proves that the statement has no scientific content.

Actually that is where you are wrong.

You introduced the example of radioactive decay, which is exactly parallel.

I take particular exception to the notion that my statements 'have no scientific content'.

That is a highly coloured value judgement sir!
 
  • #107
Studiot said:
Actually that is where you are wrong.
You haven't proven me wrong. You haven't provided a way to test the statement, thus making it amenable to the scientific method.
Studiot said:
You introduced the example of radioactive decay, which is exactly parallel.
No, it isn't. Radioactive decay is a mass phenomenon and the probability for decay applies (as I had explicitly argued) _only_ to the ensemble of all isotopes of a particular kind, and not to any single decay. The latter is completely unpredictable, and a probability statement about it is - like any statement assigning a probability different from 0 or 1 to a single event - completely uncheckable.

Thus applying the probability for the decay of an anonymous atom to a particular atom has as much scientific content as claiming that a ghost has appeared on my desk.
Studiot said:
I take particular exception to the notion that my statements 'have no scientific content'.
The statement that I called devoid of scientific content, namely ''that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time'' was mine, not yours.
 
  • #108
A. Neumaier said:
Studiot said:
I take particular exception to the notion that my statements 'have no scientific content'.
The statement that I called devoid of scientific content, namely ''that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time'' was mine, not yours.
Whereas the statement that you actually made in this context, namely
Studiot said:
You test your assessment by driving over the bridge.
is plain wrong.

How can a nontestable statement have scientific content?
 
  • #109
No, it isn't. Radioactive decay is a mass phenomenon and the probability for decay applies (as I had explicitly argued) _only_ to the ensemble of all isotopes of a particular kind, and not to any single decay. The latter is completely unpredictable, and a probability statement about it is - like any statement assigning a probability different from 0 or 1 to a single event - completely uncheckable.

I grow weary of this verbal fencing - it achieves nothing.

Instead of constantly flatly refuting everyone else's comments you might gain something if you asked for more information about why such and such statement was made.

Radioactive decay, for instance, is actually a function of time, not mass.
The objective measure is the fraction ( a pure number) decaying within a certain time period.

So it is with lorry journeys and bridges.

Again I repeat this is a quantum mechanics forum.

In QM there are at least two ways of interpreting probability, since there are at least two independent variables.

So it is with lorry journeys and bridges.
 
  • #110
A. Neumaier said:
Of course, the model reflects knowledge, prejudice, assumptions, the authorities trusted, assessment errors, and all that, but that's the same as in _all_ modeling. Hence it is not a special characteristics of probability.
If the model depends on knowledge and the result of the model is a probability, then how can you claim that probability does not depend on knowledge? And I agree that it is not peculiar to probability.

I think you are confusing your concept of "subjective" with knowledge. With a specified family of priors and an algorithm for determining the hyperparameters from the available knowledge, the probability depends on the knowledge objectively. I believe that you are really just saying that scientists shouldn't just use subjective "gut feeling" priors.
 
  • #111
A. Neumaier said:
There is a difference between testing a human and testing the assertion that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time. The latter is impossible and proves that the statement has no scientific content.

Ok .. let's work this through:

There is a bridge, trucks drive over it. Each time a truck drives over it, one of two things will happen .. it will collapse or it won't. Objectively, for each trial (i.e. truck journey) there is no way to say with certainty which outcome will be obtained until either the truck crosses safely, or the bridge collapses. Ok so far?

Now consider two bridges, a wooden bridge designed for pedestrian traffic, and a steel bridge designed for truck traffic. You are the truck driver ... which bridge do you take? I guess that is what you are calling a subjective probability judgment? It seems to me that there is a higher objective probability that the wooden bridge will collapse when the truck is driven across it. Do you agree with that? If you do agree, then can you explain how you measure the difference between the cases? Or is the difference unmeasurable?

Note that exactly the same analogy can be drawn for radioactive decay lifetimes of different isotopes: given two atoms of different isotopes, one with a half-life of 5 seconds, the other with a half-life of 5 years, which is more likely to decay in a given time interval? It seems that there is a clear, objective difference between the probabilities of the two events. What is wrong with that analysis?
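This comparison can be made quantitative with the exponential decay law: the probability that a single atom decays within a time t, given half-life T, is 1 - 2^(-t/T). A minimal sketch (the 10 s window is just an illustrative number; the two half-lives are the ones from the example):

```python
# Probability that a single atom decays within t seconds, given its
# half-life in seconds (exponential decay law): p = 1 - 2^(-t / T_half).
def decay_probability(t_seconds: float, half_life_seconds: float) -> float:
    return 1.0 - 2.0 ** (-t_seconds / half_life_seconds)

YEAR = 365.25 * 24 * 3600  # seconds in a (Julian) year

# The two isotopes from the example: half-lives of 5 seconds and 5 years.
p_fast = decay_probability(10.0, 5.0)        # 10 s window, 5 s half-life
p_slow = decay_probability(10.0, 5 * YEAR)   # 10 s window, 5 y half-life

print(f"5 s half-life: P(decay in 10 s) = {p_fast:.3f}")   # 0.750
print(f"5 y half-life: P(decay in 10 s) = {p_slow:.2e}")   # ~4.4e-08
```

Whether such a number says anything about one particular atom, rather than about the ensemble of atoms of that isotope, is of course exactly what is in dispute in this thread.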
 
  • #112
Well SC you seem to have caught the essence of it.

The bridge assessment question is faced by some bridge engineers every working day of their lives.

You may have heard of AILs - Abnormal Indivisible Loads.

When a load larger than the legally allowable maximum gross weight needs to be transported, the transport company approaches the bridge authority for any bridge it proposes to pass over, to ask under what conditions it can cross the bridge.

A real world example might be a train company transporting a 250 tonne locomotive to another location along roads and across bridges where the max gross weight is 38 tonnes.
 
  • #113
A. Neumaier said:
There is a difference between testing a human and testing the assertion that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time. The latter is impossible and proves that the statement has no scientific content.

Is that not bad news for engineers, especially in the nuclear safety field?
Reliability theory and practice are completely built on the assumption that probabilities (even very small ones) have a meaning, even though they often cannot be measured.

The book "The Black Swan" by Taleb illustrated very well the risk of blindly using probabilities.

This discussion is interesting.
Jaynes has clearly shown that the concept of probability needs to be analysed more deeply.
I have no doubt that probabilities are - in a way - subjective, and that this explains conceptual difficulties in physics, especially quantum mechanics.

The "frequentist" interpretation is conveniently used to hide difficulties in quantum mechanics, but these difficulties remain even if they are hidden.
 
  • #114
Reliability theory and practice are completely built on the assumption that probabilities (even very small ones) have a meaning, even though they often cannot be measured.

The engineering answer to a quantity that cannot be calculated or measured exactly is to bracket it between upper and lower bounds, to prove that it lies within acceptable limits.

The whole idea of limit state is that the probability of failure is quantifiable in this way and acceptably low.
 
  • #115
Studiot said:
The engineering answer to a quantity that cannot be calculated or measured exactly is to bracket it between upper and lower bounds, to prove that it lies within acceptable limits.

The whole idea of limit state is that the probability of failure is quantifiable in this way and acceptably low.

I agree, but the evaluations cannot be tested ... and humans often make mistakes!
The black swan approach would be to ban nuclear power plants: never trust Gaussian assumptions if your life is at stake.
Or in extended form: never trust any assumption if your life is at stake.
 
  • #116
Mistakes can usually be caught and corrected if proper procedures are followed.
That is what the independent check is all about for instance.

Deliberate mis-evaluation is more difficult to cope with.
 
  • #117
Studiot said:
Instead of constantly flatly refuting everyone else's comments.
I only refute what doesn't hold water.
Studiot said:
Radioactive decay, for instance, is actually a function of time, not mass.
The objective measure is the fraction ( a pure number) decaying within a certain time period.
A ''mass phenomenon'' does not refer to masses measured in kg, but to masses measured in large numbers. I could just as well have written ''ensemble phenomenon''.
Studiot said:
So it is with lorry journeys and bridges.

Again I repeat this is a quantum mechanics forum.
I don't see the connection of lorries and bridges with quantum mechanics.
 
  • #118
SpectraCat said:
There is a bridge, trucks drive over it. Each time a truck drives over it, one of two things will happen .. it will collapse or it won't. Objectively, for each trial (i.e. truck journey) there is no way to say with certainty which outcome will be obtained until either the truck crosses safely, or the bridge collapses. Ok so far?
Yes, and since you say ''each'' time, you acknowledge that it is a matter of ensembles, not of driving across this bridge now. The single instance is not a matter of probability, but what happens each time someone does something is. That's the whole point.
SpectraCat said:
Note the exactly the same analogy can be drawn for radioactive decay lifetimes of different isotopes: given two atoms of different isotopes, one with a half-life of 5 seconds, the other with a half-life of 5 years, which is more likely to decay in a given time interval? It seems that there is a clear, objective difference between the probabilities of the two events. What is wrong with that analysis?
That you equate objective probabilities for ''each time'' with subjective probabilities for a single instance. Applying the probability is admissible only if you regard the single instance as a member of the observed ensemble, and then it refers to the ensemble and not to the single instance. This becomes obvious if you ask for the reason why the subjective probability was assigned: invariably there will be an explanation involving ''each time''.

Suppose a second person would assign different probabilities based on ignorance, and a third person would assign different probabilities based on better knowledge unknown to the driver. Since all are subjective probabilities, each is as valid as any other. Now the driver picks one of the roads and drives - with or without success. Which of the three was right or wrong? Being subjective probabilities, all were right. Thus the scientific method is impotent to distinguish between these probability assignments - although they would be mutually conflicting if they were saying something about the bridge rather than the subject defining them. This clearly shows that subjective probabilities are properties of the subject and not properties of the bridge.
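The impotence claimed here can be made concrete with a toy likelihood comparison: one trial barely separates competing probability assignments, whereas an ensemble of trials does. This is only an illustrative sketch; the collapse rate of 0.25 and the three assignments are hypothetical numbers, not anything from the thread:

```python
import math
import random

random.seed(0)

def log_likelihood(p: float, outcomes: list) -> float:
    """Log-likelihood of a sequence of True/False outcomes under assignment p."""
    return sum(math.log(p if x else 1.0 - p) for x in outcomes)

TRUE_P = 0.25                      # hypothetical "each time" failure rate
assignments = [0.75, 0.50, 0.25]   # three observers' subjective assignments

# One crossing: the log-likelihoods differ by at most about one nat,
# so a single trial cannot rank the three assignments reliably.
one = [random.random() < TRUE_P]
print([round(log_likelihood(p, one), 2) for p in assignments])

# 10,000 crossings: the assignment matching the ensemble frequency
# wins by thousands of nats -- only the ensemble claim is testable.
many = [random.random() < TRUE_P for _ in range(10_000)]
print([round(log_likelihood(p, many)) for p in assignments])
```

The design choice of comparing log-likelihoods is deliberate: it is the standard way to ask which probability assignment the observed frequencies favour, and it shows directly that the discrimination power grows with the number of trials.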
 
  • #119
Studiot said:
When a load larger than the legally allowable maximum gross weight needs to be transported, the transport company approaches the bridge authority for any bridge it proposes to pass over, to ask under what conditions it can cross the bridge.
But this is a matter of law, not of science.
 
  • #120
But this is a matter of law, not of science.

Are you seriously suggesting that the weight-carrying capacity (i.e. whether it is physically possible to support a stated weight) of a structure is a matter of human legislation, not of science?

''mass phenomenon'' does not refer to masses measured in kg, but to masses measured in large numbers

Are you suggesting that the probability of atomic decay (chain reactions apart) is a function of the number of atoms present?
And I always thought that the measure was the probability that a certain % would decay in a specific time, regardless of quantity.
 
  • #121
Studiot said:
Are you seriously suggesting that the weight-carrying capacity (i.e. whether it is physically possible to support a stated weight) of a structure is a matter of human legislation, not of science?
Don't exaggerate my statements so that they look foolish!

I am seriously suggesting that the probability of failure of a particular structure at a particular time (unless it virtually equals 0 or 1) is not a matter of science, since there is no way to check the agreement of the assignment with what actually happens.

What is a matter of science is the calibration of an ensemble model for bridges of a certain kind that allows one to assign failure probabilities to arbitrary bridges in the ensemble.

Such a model can be used by legislation to place limits on the weights of specific bridges in dependence on their characteristic parameters, in such a way that the failure probability in the ensemble under the legally allowed operation conditions remains below a level tolerated by the legislating body.

This is how limit state analysis is applied in real life.
Studiot said:
Are you suggesting that the probability of atomic decay (chain reactions apart) is a function of the number of atoms present?
And I always thought that the measure was the probability that a certain % would decay in a specific time, regardless of quantity.
No. I was suggesting that verifying decay probabilities is done by measuring how many atoms from a huge ensemble decay in a certain time interval, large enough that so many decays actually happen that the probabilistic estimate has some statistical accuracy.

Nobody is able to check a statement about decay probabilities by looking at a single particle for a single half-life, to see whether it decays with 50% probability.
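The asymmetry described here is easy to reproduce in a quick simulation. The sketch below is hypothetical (using 10^6 rather than a huge ensemble of atoms, to keep it fast): a single atom returns only 0 or 1, while a large ensemble pins the decayed fraction down with statistical accuracy of order 1/sqrt(N).

```python
import random

random.seed(42)

HALF_LIFE_DECAY_P = 0.5  # probability that a given atom decays within one half-life

def observed_fraction(n_atoms: int) -> float:
    """Simulate n_atoms for one half-life and return the fraction that decayed."""
    decayed = sum(random.random() < HALF_LIFE_DECAY_P for _ in range(n_atoms))
    return decayed / n_atoms

# A single atom yields only 0.0 or 1.0 -- the statement "50%" cannot be checked.
print("single atom:", observed_fraction(1))

# A million atoms reproduce the 50% prediction to within about 0.1%.
print("one million atoms:", observed_fraction(1_000_000))
```

The single-atom call illustrates the point at issue: no value it can ever return confirms or refutes a 50% assignment, while the ensemble fraction is a checkable prediction.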
 
  • #122
I think your analysis of atomic decay is flawed.

You claim

''mass phenomenon'' does not refer to masses measured in kg, but to masses measured in large numbers

Whether we are talking about kg or numbers of atoms they are essentially the same, since one is directly proportional to the other.

Where in the SI (or any other) system are masses measured not in kilograms but in 'large numbers'?

Your presentation of statistics is also flawed.

It should yield the same result whether you
Take one single atom and observe the decay for a specific time period and repeat the observation 10 million times, combining the results into a probability.
Or whether you take 10 million atoms and observe them all for one single specific time period together, again combining the results into a % probability.

Let us now say that in 100 hours of observation of the 10 million atoms taken together, 5 million have decayed.

Are you saying that, if you had observed the 10 million atoms separately for 100 hours each, some number other than 5 million (near enough) would have decayed?

I maintain that the probability of any one of these particular atoms decaying in 100 hours is 50%, regardless of whether it is surrounded by zero or trillions of similar atoms.

Do you not agree?
 
  • #123
Studiot said:
I think your analysis of atomic decay is flawed.

You claim
Whether we are talking about kg or numbers of atoms they are essentially the same, since one is directly proportional to the other.

Where in the SI (or any other) system are masses measured not in kilograms but in 'large numbers'?
Why should this create a flaw in my analysis? You mentioned ''not mass but time'', and I replied that you had misinterpreted my usage of the word ''mass''. For the statistical analysis only the number of instances matters, not any equivalent description in other units.
Studiot said:
Your presentation of statistics is also flawed.

It should yield the same result whether you
Take one single atom and observe the decay for a specific time period and repeat the observation 10 million times, combining the results into a probability.

Or whether you take 10 million atoms and observe them all for one single specific time period together, again combining the results into a % probability.
No. _Your_ argument is flawed. Suppose I observe a single particle in 10 million consecutive periods whose length is one half-life. I observe exactly one decay, say in period 2. Or in period 8. Or in period 50. In a very unlucky case perhaps in period 2345. In no case can I conclude anything about the true decay probability in that period.

If there were anything to be combined into a probability, the decay probability per period would appear to be 10^{-7} in each case, which is nonsense since the 10 million observations are not independent.

On the other hand, if I observe 10 million atoms for a half-life and find 50.2% decayed, I have a good confirmation of my theoretical model.

Thus there is a world of differences between the two scenarios you described.
 
  • #124
Suppose I observe a single particle in 10 million consecutive periods whose length is one half-life. I observe exactly one decay, say in period 2. Or in period 8. Or in period 50. In a very unlucky case perhaps in period 2345. In neither case can I conclude anything about the true decay probability in that period.

That is a different experiment from the one I proposed, and one I have not commented on.

It is quite invalid to use it to provide any commentary whatsoever on the experiment I proposed, although I agree with your observation that, since you have only observed one decay, you have not gained much information.

So I repeat my question

Do you agree with my conclusions from the experiments as I posted them or not?
 
  • #125
Studiot said:
I agree with your observation that since you have only observed 1 decay you have not gained much information.
Thus you should agree that applied to only one particle, one can't check any probabilistic statement about it. Therefore assigning probabilities to single events is scientifically meaningless.

Studiot said:
So I repeat my question

Do you agree with my conclusions from the experiments as I posted them or not?
Once one can repeat experiments on multiple particles, i.e., on an ensemble, and if the size of the ensemble is large enough, it is meaningful to talk about probabilities.

Thus your experiments do not contradict my statement that assigning probabilities to single events is scientifically meaningless.
 
  • #126
Thus you should agree that applied to only one particle, one can't check any probabilistic statement about it. Therefore assigning probabilities to single events is scientifically meaningless.

No the second statement does not follow from the former.

And yes, one can check some probabilistic statements about even one single atom.

Throughout most of your high-handed, sometimes rude, responses to my comments I have mainly been trying to point out that you make sweeping, all-embracing statements through the use of small words like 'any'. You are just courting refutation by using them.

For instance I can check the statement:

The probability that one atom will decay within 30 years is 1 (or zero).

It may be a far fetched scenario but it is checkable.

Edit - I realize that zero probability is strictly not checkable in this case so should be struck out.
 
  • #127
Studiot said:
No the second statement does not follow from the former.

And yes, one can check some probabilistic statements about even one single atom.

Throughout most of your high-handed, sometimes rude, responses to my comments I have mainly been trying to point out that you make sweeping, all-embracing statements through the use of small words like 'any'. You are just courting refutation by using them.

For instance I can check the statement:

The probability that one atom will decay within 30 years is 1 (or zero).

It may be a far fetched scenario but it is checkable.

Edit - I realize that zero probability is strictly not checkable in this case so should be struck out.

Of course. Usually I qualified my statements of this kind by saying ''probability different from 0 and 1'', and this was also meant in the present case.

Thus the conclusion of our long dispute is that applied to a single instance of a system, one can't check any statement about it of the form ''The probability that the statement
S applies to this system is p'', where 0<p<1. Therefore assigning probabilities different
from 0 or 1 to single events is scientifically meaningless.
 
  • #128
Thus the conclusion of our long dispute is that applied to a single instance of a system, one can't check any statement about it of the form ''The probability that the statement
S applies to this system is p'', where 0<p<1. Therefore assigning probabilities different
from 0 or 1 to single events is scientifically meaningless.

Again no I don't agree.

If you had stuck to decaying atoms that would have been fine but you again chose to generalise.

This brings us back to structural engineering.

The bridge assessment example I gave is a real world example from my professional experience.
Admittedly extreme examples like that only occurred 2 or 3 times a year - the overload was normally much less severe.

But it did occur and had to be coped with in a scientific (=rational) manner.

And yes I got the load across safely.

As regards limit state, you presumably realize that there are many limit states and that the controlling limit state is usually not the limit state of collapse but the limit state of serviceability.

Since this last state is a non destructive state it can be checked, even in a single instance.
 
  • #129
A. Neumaier said:
Thus you should agree that applied to only one particle, one can't check any probabilistic statement about it. Therefore assigning probabilities to single events is scientifically meaningless.
...
Once one can repeat experiments on multiple particles, i.e., on an ensemble, and if the size of the ensemble is large enough, it is meaningful to talk about probabilities.

I see your point of view, but my main objection is this:

As far as I can tell, your concept of probability is useless in decision making, simply because the decision generally has to be made before the confidence in the odds is well enough defined.

This is the basic challenge of reasoning upon incomplete information in the first place. Not only do we not know the outcome, we do not even know (in your objective sense) the odds, so we need to place our bets based upon EXPECTATIONS of the odds. Yes, at some point these expectations are subjective.

But the question is then: what do you do? The point is that either you make a decision or you don't. But reality will not halt the game; "no decision" is in fact also a decision: it's the decision that "we do not have a sufficient scientific basis for a decision".

You describe a strict descriptive view, and you pretty much say that when the descriptive view fails then so does science, right? If we accept that, I can understand your perspective. There is a kind of rationality in your reasoning. This again brings us to the root issue, what is the basic problem here?

I think you see it as a descriptive problem?
I see it as a decision problem.

Actually, if we for the sake of argument accept the descriptive view, I agree with a lot of what you have said.

/Fredrik
 
  • #130
The probability that one atom will decay within 30 years is 1

There is an absolutely wonderful short story by Ray Bradbury on this subject, about two ornaments on a mantleshelf.
 
  • #131
Studiot said:
The bridge assessment example I gave is a real world example from my professional experience.
Admittedly extreme examples like that only occurred 2 or 3 times a year - the overload was normally much less severe.

But it did occur and had to be coped with in a scientific (=rational) manner.

And yes I got the load across safely.

As regards to limit state, you presumably realize that there are many limit states and that the controlling limit state is usually not the limit state of collapse but the limit state of serviceability.

Since this last state is a non destructive state it can be checked, even in a single instance.

Ah. But now you changed the assertion.

You no longer discuss a probabilistic statement of the form ''the probability for crossing the bridge safely in this particular instance is 99.99%'', which is uncheckable and hence unscientific.

Instead, you discuss a definite statement ''the bridge can be crossed safely'' (because of an underlying probabilistic analysis)!

I agree that the latter is a scientific statement, based on a probabilistic analysis that refers to the ensemble of all bridges taken into consideration when constructing the model on which the analysis is based.
 
  • #132
Are you avoiding my previous: https://www.physicsforums.com/showpost.php?p=3284903&postcount=110

This whole debate is purely semantic. If you require probabilities to be defined only over an ensemble then the probabilities do not depend on knowledge (for Bayesians the posterior is not a function of the prior given an infinite amount of data). If you allow probabilities to be defined over individual trials or small samples then the posterior is a function of the prior so the probabilities do depend on knowledge.

That dependence on knowledge may be objective if you have a well-defined rule for generating a prior based on the knowledge, or it may be subjective if you have a "gut feeling" prior.
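The prior-washout claim above can be illustrated with a conjugate Beta-Binomial sketch (the priors and counts are hypothetical): for a small sample the posterior mean depends strongly on the prior, while for a large sample two quite different priors give essentially the same answer.

```python
# Beta-Binomial conjugate update: with a Beta(alpha, beta) prior and
# k successes in n trials, the posterior mean is (alpha + k) / (alpha + beta + n).
def posterior_mean(alpha: float, beta: float, successes: int, trials: int) -> float:
    return (alpha + successes) / (alpha + beta + trials)

priors = {"uniform Beta(1,1)": (1.0, 1.0), "skeptical Beta(50,50)": (50.0, 50.0)}

for n, k in [(10, 7), (100_000, 70_000)]:  # 70% observed frequency in both cases
    for name, (a, b) in priors.items():
        print(f"n={n:>6}  {name}: {posterior_mean(a, b, k, n):.3f}")
    # small n: the two priors disagree (0.667 vs 0.518)
    # large n: both converge to the data frequency (0.700)
```

In these terms: over a large enough ensemble the dependence on the prior (and hence on prior knowledge) disappears, while for a single trial or a small sample it does not.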
 
  • #133
DaleSpam said:
Are you avoiding my previous: https://www.physicsforums.com/showpost.php?p=3284903&postcount=110
No. It was an oversight.
DaleSpam said:
If the model depends on knowledge and the result of the model is a probability then how can you claim that probability does not depend on knowledge? And I agree that it is not peculiar to probability.
Because this sort of dependence on knowledge is universal to every discussion, and hence adds no information to the discussion. It is like emphasizing, in a discussion of a computer program, that ''programs depend on knowledge'' - true, but not relevant to the substance of what a computer program is.

Knowledge needs no mention in discussing deterministic models, so it creates an undue and misleading emphasis if mentioned for probabilities. The usual usage there suggests that the dependence of probability on knowledge somehow explains its peculiar nature, while in fact it acts as a smoke screen hiding the real issues.
DaleSpam said:
I think you are confusing your concept of "subjective" with knowledge. With a specified family of priors and an algorithm for determining the hyperparameters from the available knowledge, the probability depends on the knowledge objectively. I believe that you are really just saying that scientists shouldn't just use subjective "gut feeling" priors.
I am saying more:

With a specified family of priors and an algorithm for determining the hyperparameters from a set of data, the probability depends on the data objectively - independently of whether the data arise from knowledge, simulation from a hypothetical source, prejudice, fraud, divination, or anything else.

That it depends on knowledge if the data depend on knowledge is true but irrelevant.

The model is only as good as the data, that's the only relevant point here.
DaleSpam said:
This whole debate is purely semantic.
Of course. It is a matter of precise usage of the concepts. Semantics is important in interpretation issues.
DaleSpam said:
If you require probabilities to be defined only over an ensemble then the probabilities do not depend on knowledge (for Bayesians the posterior is not a function of the prior given an infinite amount of data).
But one is never given that much data.
DaleSpam said:
If you allow probabilities to be defined over individual trials or small samples then the posterior is a function of the prior so the probabilities do depend on knowledge.
No. It depends on the sample, which could come from a computer simulation rather than from real data. It depends on knowledge only if the sample represents the knowledge someone has about the intended application; so mentioning knowledge is less accurate and makes more unspoken assumptions.

What if nobody has ever seen the data but the computer program processing it? Does the program then know? Or does the human who started the program know? Knowledge is a philosophically difficult concept prone to misunderstanding.
DaleSpam said:
That dependence on knowledge may be objective if you have a well-defined rule for generating a prior based on the knowledge, or it may be subjective if you have a "gut feeling" prior.
If you substitute ''knowledge'' by ''data'' I'd agree. The latter is a much more descriptive word.

Why substitute it with an unspecific word that assumes that there is someone having the knowledge and invites associations with states of the mind of experimenters?
 
  • #134
It seems to me that the thought of God controls all quantum processes, and that the thought of God works continuously at all times, but humans cannot understand the thought of God. Thus, before the eyes of people, the objectively probabilistic nature of quantum physics appears.
 
  • #135
ndung200790 said:
It seems to me that the thought of God controls all quantum processes, and that the thought of God works continuously at all times, but humans cannot understand the thought of God. Thus, before the eyes of people, the objectively probabilistic nature of quantum physics appears.

Note that, with the definition I gave here, objective probability is not restricted to the quantum domain.
 
  • #136
Please teach me your definition of objective probability. In classical physics, probability depends on human knowledge, but we could predict the outcome with certainty if we were supplied with enough information.
 
  • #137
I think that the ''definite happening'' of events in classical physics does not affect the ''law'' of large numbers. The large number of ''definitely happening'' events regulates the probability of a single event if we do not have enough information about this single concrete event.
 
  • #138
ndung200790 said:
Please teach me your definition of objective probability. In classical physics, probability depends on human knowledge, but we could predict the outcome with certainty if we were supplied with enough information.

It is amply discussed if you follow the thread; you should read it all and post questions to a particular posting if you don't understand something there.
 
  • #139
A. Neumaier said:
Yes, and since you say ''each'' time, you acknowledge that it is a matter of ensembles, not of driving across this bridge now. The single instance is not a matter of probability, but what happens each time someone does something is. That's the whole point.

That you equate objective probabilities for ''each time'' with subjective probabilities for a single instance. Applying the probability is admissible only if you regard the single instance as a member of the observed ensemble, and then it refers to the ensemble and not to the single instance. This becomes obvious if you ask for the reason why the subjective probability was assigned: invariably there will be an explanation involving ''each time''.

Suppose a second person would assign different probabilities based on ignorance, and a third person would assign different probabilities based on better knowledge unknown to the driver. Since all are subjective probabilities, each is as valid as any other. Now the driver picks one of the roads and drives - with or without success. Which of the three was right or wrong? Being subjective probabilities, all were right. Thus the scientific method is impotent to distinguish between these probability assignments, although they would be mutually conflicting if they were saying something about the bridge rather than about the subject defining them. This clearly shows that subjective probabilities are properties of the subject and not properties of the bridge.

Maybe I don't completely understand, but it seems like you didn't really answer my question .. since this is a quantum forum, let us stick to the question of atomic decay. Consider my example from before: Atom A is of an isotope with a half-life of 5 seconds, atom B is from an isotope with a half-life of 5 years. I agree that the half-lives are characteristics of ensembles, which I think you agree can be stated objectively.

So we have two objective statements:

1) Atoms A & B come from different ensembles.

2) The half-lives of the two ensembles are 5 sec & 5 years for A & B, respectively.

Can you please answer the following questions?

In your view, is it possible to make an objective statement about the *relative* probability of decay of A vs. B for some time interval?

If so, how should it be phrased? Is there a way of obtaining a quantitative measure of the relative probability?

If it is not possible to make an objective statement about the relative decay probabilities, then please explain why? Is it because we are relying on the knowledge about the ensemble statistics for A & B in order to make such a statement? Does that automatically make it a subjective statement in your view? Or is there something else that makes any such judgment subjective?
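One way to make the question quantitative, under the standard exponential-decay model and my own reading of "relative probability" as a ratio (neither of which the thread settles), is:

```python
# Probability that an exponentially decaying atom with the given half-life
# decays within a time window t (both in the same units).
def decay_prob(t, half_life):
    return 1.0 - 2.0 ** (-t / half_life)

T_A = 5.0                          # seconds (isotope A)
T_B = 5.0 * 365.25 * 24 * 3600     # 5 years, in seconds (isotope B)

t = 1.0                            # a one-second observation window
p_A = decay_prob(t, T_A)
p_B = decay_prob(t, T_B)
print(p_A, p_B, p_A / p_B)         # the ratio as one candidate "relative probability"
```

Whether such a ratio for two *particular* atoms counts as an objective statement is exactly what the thread disputes; the formula itself only encodes ensemble frequencies.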
 
  • #140
SpectraCat said:
Consider my example from before: Atom A is of an isotope with a half-life of 5 seconds, atom B is from an isotope with a half-life of 5 years. I agree that the half-lives are characteristics of ensembles, which I think you agree can be stated objectively.

So we have two objective statements:

1) Atoms A & B come from different ensembles.

2) The half-lives of the two ensembles are 5 sec & 5 years for A & B, respectively.

Can you please answer the following questions?

In your view, is it possible to make an objective statement about the *relative* probability of decay of A vs. B for some time interval?
It depends (a) on your definition of relative probability (I don't know this concept), and
(b) on whether A,B are anonymous atoms from a large ensemble (where the answer is likely yes) or particular selected atoms (where the answer is no if the statement of interest still contains a probability).
SpectraCat said:
If so, how should it be phrased? Is there a way of obtaining a quantitative measure of the relative probability?
Since you invented the concept, you are responsible for giving it an appropriate meaning, before we can discuss it.
SpectraCat said:
If it is not possible to make an objective statement about the relative decay probabilities, then please explain why?
At the moment I can't say anything since I don't understand what you mean.

So let me guess: One possible intended interpretation might be:

A, B are specific atoms (defined by their position under an atom microscope, say), and the statement is that in the next ten minutes atom A is N times as likely to decay as atom B, where N is the number of seconds in a year. This statement is untestable and hence subjective.
 
  • #141
A. Neumaier said:
[...]
If you substitute ''knowledge'' by ''data'' I'd agree. The latter is a much more descriptive word.

Why substitute it with an unspecific word that assumes that there is someone having the knowledge and invites associations with states of the mind of experimenters?

Ah so that's what you are talking about! "available data is input for a calculation" is certainly very different from "states of the mind of experimenters affect the experiment". :smile:
 
  • #142
harrylin said:
Ah so that's what you are talking about! "available data is input for a calculation" is certainly very different from "states of the mind of experimenters affect the experiment". :smile:

Yes. Knowledge is very different from data. Probabilistic models depend on the data from which they are derived, but this is very different from a dependence on knowledge.

Wikipedia says (http://en.wikipedia.org/wiki/Knowledge ):
Knowledge is a collection of facts, information, and/or skills acquired through experience or education or (more generally) the theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); and it can be more or less formal or systematic.[1] In philosophy, the study of knowledge is called epistemology, and the philosopher Plato famously defined knowledge as "justified true belief." There is however no single agreed upon definition of knowledge, and there are numerous theories to explain it.
Knowledge acquisition involves complex cognitive processes: perception, learning, communication, association and reasoning

Knowledge is something that someone has, or may have in different degrees; it is very difficult to say what it means to have knowledge, and what counts as knowledge (rather than prejudice, assumption, guess, etc.) is equally hard to pin down.
 
  • #143
A. Neumaier said:
Knowledge needs no mention in discussing deterministic models
I disagree. How else can you reconcile Liouville's theorem and the determinism of classical mechanics with our inability to predict chaotic systems and the second law of thermo? I think that an understanding of how knowledge (or data) impacts our ability to predict a system's behavior is crucial to all models, deterministic or not.

A. Neumaier said:
in fact it acts as a smoke screen hiding the real issues.
What are these real issues you are referring to?

A. Neumaier said:
With a specified family of priors and an algorithm for determining the hyperparameters from a set of data, the probability depends on the data objectively, independently of whether the data arise from knowledge, simulation from a hypothetical source, prejudice, fraud, divination, or anything else.

That it depends on knowledge if the data depend on knowledge is true but irrelevant.
Huh? The data is the knowledge. I don't get your point here.

A. Neumaier said:
But one is never given that much data.
Which is one reason why I like the more general Bayesian definition of probability.

A. Neumaier said:
If you substitute ''knowledge'' by ''data'' I'd agree. The latter is a much more descriptive word.

Why substitute it with an unspecific word that assumes that there is someone having the knowledge and invites associations with states of the mind of experimenters?
I agree. "Data" is a better word without connotations of some person. This is kind of similar to how the word "observer" or "observation" has irritating human-mind connotations when it usually means some sort of measurement device.
 
  • #144
DaleSpam said:
I disagree. How else can you reconcile Liouville's theorem and the determinism of classical mechanics with our inability to predict chaotic systems and the second law of thermo? I think that an understanding of how knowledge (or data) impacts our ability to predict a system's behavior is crucial to all models, deterministic or not.
Our inability to predict chaotic systems is not due to lack of knowledge but due to the system's sensitivity to even the tiniest perturbations - perturbations so tiny that the classical description breaks down before they can be taken into account.

The second law of thermodynamics does _not_ follow from the determinism of classical mechanics. But the assumption that only macroscopic variables are considered relevant together with the Markov approximation produces the second law, without any recourse to questions of knowledge.

The second law was already in operation long before there was anyone around to know.
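The "macroscopic variables plus Markov approximation" route to the second law can be illustrated with a toy chain. For a doubly stochastic transition matrix (my illustrative choice), the Shannon entropy of the distribution never decreases, and no notion of knowledge enters:

```python
import math

# A doubly stochastic transition matrix (rows and columns each sum to 1).
# For such chains Shannon entropy is non-decreasing: a toy second law.
P = [[0.90, 0.05, 0.05],
     [0.05, 0.90, 0.05],
     [0.05, 0.05, 0.90]]

def step(p):
    return [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

p = [1.0, 0.0, 0.0]     # start in a definite macrostate: zero entropy
hs = []
for _ in range(5):
    hs.append(entropy(p))
    p = step(p)
print(hs)               # entropy rises monotonically toward log 3
```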
DaleSpam said:
What are these real issues you are referring to?
The things probabilities actually and immediately depend on: The model and its parameters. All other dependence is implicit and redundant.
DaleSpam said:
Huh? The data is the knowledge. I don't get your point here.

Data are not knowledge since they don't depend (like knowledge) on a knower.

Data may be produced from sloppy or careful measurements, from a simulation, from manipulation of raw measurements by removing outliers, performing transformations, and lots of other stuff that make the connection between data and knowledge long and tenuous. Do you know that x= 1 kg simply because someone hands you the data?
 
  • #145
A. Neumaier said:
Our inability to predict chaotic systems is not due to lack of knowledge
This is not always correct. Even in a system without any perturbations, our inability to know the initial conditions exactly leads directly to an inability to predict the results for chaotic systems.


A. Neumaier said:
The things probabilities actually and immediately depend on: The model and its parameters. All other dependence is implicit and redundant.
The frequentist definition of probability does not depend on knowledge, but a Bayesian definition of probability does depend "actually and immediately" on knowledge. Whether you apply those definitions to physics or some other pursuit doesn't change the definitions.
 
  • #146
DaleSpam said:
This is not always correct. Even in a system without any perturbations our inability to know exactly the initial conditions leads directly to an inability to predict the results for chaotic systems.
I was referring to perturbations in the initial conditions. But even if we knew them exactly we could not solve the system exactly, so after the first time step we have introduced tiny perturbations in the initial conditions of the next step, which change the subsequent trajectory.
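The effect of these tiny perturbations can be demonstrated with the standard logistic-map toy example (my choice; it is not discussed in the thread): two trajectories that start 10^-12 apart reach order-one separation within a few dozen steps.

```python
# Chaotic logistic map x -> 4x(1-x); perturbations roughly double per step.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12    # initial conditions differing by 10^-12
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))
print(max_sep)             # of order 1, despite the 1e-12 initial difference
```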
DaleSpam said:
The frequentist definition of probability does not depend on knowledge, but a Bayesian definition of probability does depend "actually and immediately" on knowledge. Whether you apply those definitions to physics or some other pursuit doesn't change the definitions.
Even a Bayesian must today rely on the definition of probability given by Kolmogorov, or a mathematically equivalent one like that in Peter Whittle's nice book ''Probability via Expectation''. None of these depends on knowledge.

The behavior of a physical system is independent of what anyone knows or doesn't know about it, hence doesn't depend on knowledge. Physics describes physical systems as they are, independent of who considers them and who knows how much about them. The probabilities in physics express properties of Nature, not of the knowledge of observers.
At a time when nobody was there to know anything, the decay probability of C14 atoms was already the same as today - and we use this today to date old artifacts.

Poor or good knowledge only affects how close one comes with one's chosen description to what actually is the case.
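As an aside, the dating application rests on nothing but this constancy and the half-life formula; a minimal sketch, using the conventional 5730-year half-life of C14:

```python
import math

T_HALF_C14 = 5730.0   # years, conventional half-life of carbon-14

# Age implied by the fraction of the original C14 still present:
# remaining = 2**(-age / T_half)  =>  age = T_half * log2(1 / remaining).
def age_from_fraction(remaining):
    return T_HALF_C14 * math.log2(1.0 / remaining)

print(age_from_fraction(0.5))    # one half-life
print(age_from_fraction(0.25))   # two half-lives
```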
 
  • #147
A. Neumaier said:
Data are not knowledge since they don't depend (like knowledge) on a knower.

Data, represented in reality, depend on a memory structure (or microstructure, or a system of non-commuting microstructures) to encode them.

It's in this sense that even the "data", if you prefer that word, are encoded in the system of microstructures that constitutes the observing system.

IMO, there exist no fixed, timeless, observer-independent degrees of freedom of nature. Even the DOFs are observer-dependent; thus so is any real data (encoded in physical states).

The belief in some fundamental DOFs that encode "data" in the objective sense would be nice, and a lot of people do think this, but it's nevertheless a plain conjecture that has no rational justification.

What do exist are effective DOFs that interacting observers agree upon; so much is clear and so much is necessary. Anything beyond this is, IMHO, an assumption that structural realists can't do without.

/Fredrik
 
Last edited:
  • #148
A. Neumaier said:
Even a Bayesian must today rely on the definition of probability given by Kolmogorov, or a mathematically equivalent one like that in Peter Whittle's nice book ''Probability via Expectation''. None of these depends on knowledge.
http://en.wikipedia.org/wiki/Bayesian_probability "Bayesian probability interprets the concept of probability as 'a measure of a state of knowledge', in contrast to interpreting it as a frequency or a 'propensity' of some phenomenon."

As I said before, a Bayesian definition of probability does depend on knowledge. I don't know why you bother asserting the contrary when it is such a widely-known definition of probability.
 
Last edited:
  • #149
I am still stuck on the concept that you can't make meaningful statements about the probabilities of single events. What about the following scenario:

1) you have a group of 2 atoms of isotope A, with 5 second half-life
2) you have a group of 2 atoms of isotope B, with 5 year half-life

What is the probability that one of the A atoms will decay before one of the B atoms?

From posts Arnold Neumaier has made on this thread, it seems he will say that the question as I have phrased it above is not scientifically meaningful. If this is true (i.e., Arnold does think that it is meaningless, and I have not misunderstood something), then please answer the following question:

How big do I have to make the pools (5 atoms, 5000 atoms, 5x10^23 atoms) before the question DOES become scientifically meaningful? Because if I have not misunderstood, other statements Prof. Neumaier has made on this thread indicate that he *does* think scientifically meaningful statements can be made about probabilities of events from "large ensembles", so it seems that at some point, the pools must reach a critical size where "statistical significance" (or whatever the proper term is) is achieved.
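If one assumes the standard model of independent exponential lifetimes (an assumption of mine; the thread does not fix a model), the question has a clean answer that is independent of the pool size: in a pool of n atoms the first decay occurs at rate n·λ, so for equal pools P(an A atom decays first) = λ_A/(λ_A + λ_B), with λ = ln 2 / T_half. A sketch with a Monte Carlo check:

```python
import math
import random

LN2 = math.log(2.0)
T_A = 5.0                         # seconds
T_B = 5.0 * 365.25 * 24 * 3600    # 5 years in seconds
lam_A, lam_B = LN2 / T_A, LN2 / T_B

# First decay in a pool of n atoms has rate n*lambda, so for equal pools
# the n cancels:  P(A pool fires first) = lam_A / (lam_A + lam_B).
p_analytic = lam_A / (lam_A + lam_B)

random.seed(1)
trials, wins = 20000, 0
for _ in range(trials):
    first_A = min(random.expovariate(lam_A) for _ in range(2))
    first_B = min(random.expovariate(lam_B) for _ in range(2))
    wins += first_A < first_B
print(p_analytic, wins / trials)   # both essentially 1
```

On this reading, no critical pool size is needed for the formula itself; what changes with pool size is only how well a finite run of experiments can test it.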
 
Last edited:
  • #150
Fra said:
Data, represented in reality, depend on a memory structure (or microstructure, or a system of non-commuting microstructures) to encode them.

It's in this sense that even the "data", if you prefer that word, are encoded in the system of microstructures that constitutes the observing system.

So you'd say that a program that receives a continuous stream of data, uses it to compute and store some statistics (the data themselves are never looked at by anyone or anything except this program), and then spits out a predicted probability that the Dow Jones index will be above some threshold at a fixed date, knows about the stock market?
 