On the myth that probability depends on knowledge

In summary, the thread discusses the concept of objective probabilities and how they relate to knowledge. It is argued that objective probabilities are properties of an ensemble, not of single cases, and that they can be understood in frequentist terms as the limiting frequency of an event over infinitely many trials. The idea of forgetting knowledge and its effect on probabilities is also discussed, with one participant strongly disagreeing and another questioning the definition of "objective probabilities."
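For instance, a minimal sketch of the frequentist picture (an editorial illustration, not code from the thread): the empirical frequency of heads in repeated fair-coin trials approaches the single value 0.5 as the number of trials grows, while any individual trial is simply heads or tails.

```python
import random

# Empirical frequency of heads converging to 0.5 over many trials.
# The seed and trial counts are arbitrary illustrative choices.
random.seed(0)
heads = 0
for n in range(1, 100_001):
    heads += random.random() < 0.5
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"after {n:>6} trials: frequency of heads = {heads / n:.4f}")
```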
  • #141
A. Neumaier said:
[...]
If you substitute ''knowledge'' by ''data'' I'd agree. The latter is a much more descriptive word.

Why substitute it with an unspecific word that assumes that there is someone having the knowledge and invites associations with states of the mind of experimenters?

Ah so that's what you are talking about! "available data is input for a calculation" is certainly very different from "states of the mind of experimenters affect the experiment". :smile:
 
  • #142
harrylin said:
Ah so that's what you are talking about! "available data is input for a calculation" is certainly very different from "states of the mind of experimenters affect the experiment". :smile:

Yes. Knowledge is very different from data. Probabilistic models depend on the data from which they are derived, but this is very different from a dependence on knowledge.

Wikipedia says (http://en.wikipedia.org/wiki/Knowledge ):
Knowledge is a collection of facts, information, and/or skills acquired through experience or education or (more generally) the theoretical or practical understanding of a subject. It can be implicit (as with practical skill or expertise) or explicit (as with the theoretical understanding of a subject); and it can be more or less formal or systematic.[1] In philosophy, the study of knowledge is called epistemology, and the philosopher Plato famously defined knowledge as "justified true belief." There is however no single agreed upon definition of knowledge, and there are numerous theories to explain it.
Knowledge acquisition involves complex cognitive processes: perception, learning, communication, association and reasoning

Knowledge is something that someone has, or may have in different degrees; it is very difficult to say what it means to have knowledge, and equally difficult to say what counts as knowledge (rather than as prejudice, assumption, guess, etc.).
 
  • #143
A. Neumaier said:
Knowledge needs no mention in discussing deterministic models
I disagree. How else can you reconcile Liouville's theorem and the determinism of classical mechanics with our inability to predict chaotic systems and the second law of thermo? I think that an understanding of how knowledge (or data) impacts our ability to predict a system's behavior is crucial to all models, deterministic or not.

A. Neumaier said:
in fact it acts as a smoke screen hiding the real issues.
What are these real issues you are referring to?

A. Neumaier said:
With a specified family of priors and an algorithm for determining the hyperparameters from a set of data, the probability depends on the data objectively - independently of whether the data arise from knowledge, simulation from a hypothetical source, prejudice, fraud, divination, or anything else.

That it depends on knowledge if the data depend on knowledge is true but irrelevant.
Huh? The data is the knowledge. I don't get your point here.

A. Neumaier said:
But one is never given that much data.
Which is one reason why I like the more general Bayesian definition of probability.

A. Neumaier said:
If you substitute ''knowledge'' by ''data'' I'd agree. The latter is a much more descriptive word.

Why substitute it with an unspecific word that assumes that there is someone having the knowledge and invites associations with states of the mind of experimenters?
I agree. "Data" is a better word without connotations of some person. This is kind of similar to how the word "observer" or "observation" has irritating human-mind connotations when it usually means some sort of measurement device.
 
  • #144
DaleSpam said:
I disagree. How else can you reconcile Liouville's theorem and the determinism of classical mechanics with our inability to predict chaotic systems and the second law of thermo? I think that an understanding of how knowledge (or data) impacts our ability to predict a system's behavior is crucial to all models, deterministic or not.
Our inability to predict chaotic systems is not due to lack of knowledge but to the system's sensitivity to even the tiniest perturbations - perturbations so tiny that the classical description breaks down before they can be taken into account.

The second law of thermodynamics does _not_ follow from the determinism of classical mechanics. But the assumption that only macroscopic variables are relevant, together with the Markov approximation, produces the second law, without any recourse to questions of knowledge.

The second law was already in operation long before there was anyone around to know.
DaleSpam said:
What are these real issues you are referring to?
The things probabilities actually and immediately depend on: The model and its parameters. All other dependence is implicit and redundant.
DaleSpam said:
Huh? The data is the knowledge. I don't get your point here.

Data are not knowledge, since they don't depend (as knowledge does) on a knower.

Data may be produced from sloppy or careful measurements, from a simulation, from manipulation of raw measurements by removing outliers, performing transformations, and lots of other stuff that make the connection between data and knowledge long and tenuous. Do you know that x= 1 kg simply because someone hands you the data?
 
  • #145
A. Neumaier said:
Our inability to predict chaotic systems is not due to lack of knowledge
This is not always correct. Even in a system without any perturbations, our inability to know the initial conditions exactly leads directly to an inability to predict the results for chaotic systems.


A. Neumaier said:
The things probabilities actually and immediately depend on: The model and its parameters. All other dependence is implicit and redundant.
The frequentist definition of probability does not depend on knowledge, but a Bayesian definition of probability does depend "actually and immediately" on knowledge. Whether you apply those definitions to physics or some other pursuit doesn't change the definitions.
 
  • #146
DaleSpam said:
This is not always correct. Even in a system without any perturbations, our inability to know the initial conditions exactly leads directly to an inability to predict the results for chaotic systems.
I was referring to perturbations in the initial conditions. But even if we knew them exactly we could not solve the system exactly, so after the first time step we have introduced tiny perturbations in the initial conditions of the next step, which change the subsequent trajectory.
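A minimal numerical sketch of this sensitivity (the map, starting points, and step count are illustrative choices, not a system discussed in the thread): two trajectories of the chaotic logistic map x -> 4x(1-x) starting 1e-12 apart disagree completely within a few dozen steps.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# a distance 1e-12 apart; the gap grows roughly by a factor 2 per step
# until it saturates at order 1.
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```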
DaleSpam said:
The frequentist definition of probability does not depend on knowledge, but a Bayesian definition of probability does depend "actually and immediately" on knowledge. Whether you apply those definitions to physics or some other pursuit doesn't change the definitions.
Even a Bayesian must today rely on the definition of probability given by Kolmogorov, or a mathematically equivalent one like that in Peter Whittle's nice book ''Probability via Expectation''. None of these depends on knowledge.

The behavior of a physical system is independent of what anyone knows or doesn't know about it, hence doesn't depend on knowledge. Physics describes physical systems as they are, independent of who considers them and who knows how much about them. The probabilities in physics express properties of Nature, not of the knowledge of observers.
At a time when nobody was there to know anything, the decay probability of C14 atoms was already the same as today - and we use this today to date old artifacts.

Poor or good knowledge only affects how close one comes, with one's chosen description, to what actually is the case.
 
  • #147
A. Neumaier said:
Data are not knowledge, since they don't depend (as knowledge does) on a knower.

Data, as represented in reality, depend on a memory structure (or microstructure, or a system of non-commuting microstructures) to encode them.

It's in this sense that even the "data", if you prefer that word, are encoded in the system of microstructures that constitutes the observing system.

IMO, there exist no fixed, timeless, observer-independent degrees of freedom of nature. Even the DOFs are observer dependent; thus so is any real data (encoded in physical states).

The belief in some fundamental DOFs that encode "data" in the objective sense would be nice, and a lot of people do think this, but it is nevertheless a plain conjecture that has no rational justification.

What do exist are effective DOFs that interacting observers agree upon; so much is clear, and so much is necessary. Anything beyond this is, IMHO, an assumption structural realists can't do without.

/Fredrik
 
  • #148
A. Neumaier said:
Even a Bayesian must today rely on the definition of probability given by Kolmogorov, or a mathematically equivalent one like that in Peter Whittle's nice book ''Probability via Expectation''. None of these depends on knowledge.
http://en.wikipedia.org/wiki/Bayesian_probability "Bayesian probability interprets the concept of probability as 'a measure of a state of knowledge', in contrast to interpreting it as a frequency or a 'propensity' of some phenomenon."

As I said before, a Bayesian definition of probability does depend on knowledge. I don't know why you bother asserting the contrary when it is such a widely-known definition of probability.
 
  • #149
I am still stuck on the concept that you can't make meaningful statements about the probabilities of single events. What about the following scenario:

1) you have a group of 2 atoms of isotope A, with 5 second half-life
2) you have a group of 2 atoms of isotope B, with 5 year half-life

What is the probability that one of the A atoms will decay before one of the B atoms?

From posts Arnold Neumaier has made on this thread, it seems he will say that the question as I have phrased it above is not scientifically meaningful. If this is true (i.e. Arnold does think that it is meaningless, and I have not misunderstood something), then please answer the following question:

How big do I have to make the pools (5 atoms, 5000 atoms, 5x10^23 atoms) before the question DOES become scientifically meaningful? Because if I have not misunderstood, other statements Prof. Neumaier has made on this thread indicate that he *does* think scientifically meaningful statements can be made about probabilities of events from "large ensembles", so it seems that at some point, the pools must reach a critical size where "statistical significance" (or whatever the proper term is) is achieved.
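Under the standard model that each atom decays independently with an exponential lifetime of rate lambda = ln 2 / half-life (a modeling assumption, not something asserted in the thread), the 2-vs-2 question above has a clean closed form, sketched here with illustrative unit conversions:

```python
import math

# Race between the first decay among two A atoms and the first decay
# among two B atoms, each atom decaying independently and exponentially.
half_life_A = 5.0                    # seconds
half_life_B = 5.0 * 365.25 * 86400   # 5 years, in seconds

lam_A = math.log(2) / half_life_A    # decay rate of a single A atom
lam_B = math.log(2) / half_life_B    # decay rate of a single B atom

# The minimum of two independent exponentials with rate lam is itself
# exponential with rate 2*lam, and for independent exponentials
# P(T_A < T_B) = rate_A / (rate_A + rate_B).
p = (2 * lam_A) / (2 * lam_A + 2 * lam_B)
print(f"P(an A atom decays before any B atom) = {p:.10f}")  # ~= 1 - 3.2e-8
```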
 
  • #150
Fra said:
Data, as represented in reality, depend on a memory structure (or microstructure, or a system of non-commuting microstructures) to encode them.

It's in this sense that even the "data", if you prefer that word, are encoded in the system of microstructures that constitutes the observing system.

So you'd say that a program that receives a continuous stream of data, uses it to compute and store some statistics (not the data themselves, which are never looked at by anyone or anything except this program), and then spits out a predicted probability for the Dow Jones index to be above some threshold at a fixed date, knows about the stock market?
 
  • #151
DaleSpam said:
http://en.wikipedia.org/wiki/Bayesian_probability "Bayesian probability interprets the concept of probability as 'a measure of a state of knowledge', in contrast to interpreting it as a frequency or a 'propensity' of some phenomenon."

As I said before, a Bayesian definition of probability does depend on knowledge. I don't know why you bother asserting the contrary when it is such a widely-known definition of probability.

As Wikipedia says, the above is a particular _interpretation_, not a _definition_ of probability. If you'd take it as a definition, you'd not be able to derive the slightest thing from it.

The subjective interpretation may be legitimate to guide actions, but it is not science.

I have been successfully using Bayesian methods, in an objective context, without this concept of Bayesian probability.
 
  • #152
A. Neumaier said:
So you'd say that a program that receives a continuous stream of data, uses it to compute and store some statistics (not the data themselves, which are never looked at by anyone or anything except this program), and then spits out a predicted probability for the Dow Jones index to be above some threshold at a fixed date, knows about the stock market?

In the obviously restricted sense, yes.

The big difference is that the action space of a computer is largely constrained. A computer cannot ACT upon its information in the same way a human can. The computer can at best print buy or sell recommendations on the screen. The feedback to programs and computers is also different: a computer program that makes good predictions gets to live, and bad programs are deleted. In theory, however, one can imagine an AI system that uses the feedback from stock market business to secure its own existence. Then systems that fail to learn will die out, and good learners are preferred.

So the analogy differs just because the state and action space of a "classical normal computer" IS fixed, at least in the context we refer to it here, as an abstraction. A general system in nature does not have a fixed state or action space. This is exactly how learning works. "Artificial" intelligence with preprogrammed strategies and selections fails to be real intelligence just because there is no feedback to revise and evolve the action space. Some self-modifying algorithms can partly do this, but they are still living in a given computer.

This is in principle no different from how the cell-based complex biological system we call the human brain can ENCODE and know about the stock market. The biggest differences are the complexity and the flexibility of the state and action spaces.

The actions possible for a computer are VERY constrained, because of how it is built.

/Fredrik
 
  • #153
SpectraCat said:
I am still stuck on the concept that you can't make meaningful statements about the probabilities of single events. What about the following scenario:

1) you have a group of 2 atoms of isotope A, with 5 second half-life
2) you have a group of 2 atoms of isotope B, with 5 year half-life

What is the probability that one of the A atoms will decay before one of the B atoms?

From posts Arnold Neumaier has made on this thread, it seems he will say that the question as I have phrased it above is not scientifically meaningful. If this is true (i.e. Arnold does think that it is meaningless, and I have not misunderstood something), then please answer the following question:

How big do I have to make the pools (5 atoms, 5000 atoms, 5x10^23 atoms) before the question DOES become scientifically meaningful? Because if I have not misunderstood, other statements Prof. Neumaier has made on this thread indicate that he *does* think scientifically meaningful statements can be made about probabilities of events from "large ensembles", so it seems that at some point, the pools must reach a critical size where "statistical significance" (or whatever the proper term is) is achieved.

In general, if you have a complete specification of an ensemble, you can derive scientific statements about anonymous members of the ensemble.

This is the case e.g., when analysing past data. You can say p% of the population of the US in the census of year X earned above Y Dollars.

It is also the case when you have a theoretical model defining the ensemble. You can say the probability of casting an even number with a perfect die is 50%, since the die is an anonymous member of the theoretical ensemble. But you cannot say anything about the probability of casting an even number in the next throw at a particular location in space and time, since this is an ensemble of size 1, whose associated probabilities are provably 0 or 1.

In practice, interest is mainly in the prediction of incompletely specified ensembles. In this case, the scientific practice is to replace the intended ensemble by a theoretical model of the ensemble, which is precisely known once one estimates its parameters from the available part of the ensemble, using a procedure that may also depend on other assumptions such as a prior (or a class of priors whose parameters are estimated as well).

In this case, all computed/estimated probabilities refer to this theoretical (often infinitely large) ensemble, not to a particular instance. (From a mathematical point of view, ensemble = probability space, the sample space being the set of all realizations of the ensemble.)

Now there is a standard way to infer from the model statements about the intended ensemble: One specifies one's assumptions going into the model (such as independence assumptions, Gaussian measure assumptions, etc.), the method of estimating the parameters from the data, a confidence level deemed adequate, and which statistical tests are used to check the confidence level for a particular prediction in a particular situation. Then one makes a definite statement about the prediction (such as ''this bridge is safe for crossing by trucks up to 10 tons''), accompanied perhaps by mention of the confidence level. The definite statement satisfies the scientific standards of derivation and is checkable. It may still be right or wrong - this is in the nature of scientific statements.

If a method of prediction and assessment of confidence leads to wrong predictions at a rate significantly higher than the assigned confidence level allows, the method will be branded as unreliable and phased out of scientific practice. Note that this again requires an ensemble - i.e., many predictions - to be implementable. Again, a confidence level for a single prediction may serve only as a subjective guide.

The statement ''Isotope X has a half life of Y years'' is a statement about the ensemble of all atoms representing isotope X. A huge subensemble of the still far huger full ensemble has been observed, so that we know the objective value of Y quite well, with a very small uncertainty, and we also know the underlying model of a Poisson process.

If we now have a group of N atoms of isotope X, we can calculate from this information a confidence interval for any statement of the form ''In a time interval T, between M-K and M+K of the N atoms will decay''. If the confidence is large enough, we can state as a prediction that in the next experiment checking this, the statement will be found correct. And we would be entitled to publish it if, say, X were a new or interesting isotope whose decay was measured by a new method.
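A short sketch of such a calculation (all numbers are illustrative; the binomial model and normal approximation are standard textbook choices, not specified in the thread):

```python
import math

# For N atoms of half-life T_half, the number decaying within time T is
# binomially distributed with p = 1 - 2**(-T / T_half). A normal
# approximation gives a central ~95% interval [M - K, M + K].
N, T_half, T = 10_000, 5730.0, 1000.0      # illustrative values, in years
p = 1 - 2 ** (-T / T_half)
M = N * p                                   # expected number of decays
K = 1.96 * math.sqrt(N * p * (1 - p))       # ~95% half-width
print(f"expect {M:.0f} +/- {K:.0f} of the {N} atoms to decay within {T:.0f} years")
```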

Nowhere in all I said was any reference made to a "a measure of a state of knowledge", so that the ''Bayesian probability interpretation'' as defined in http://en.wikipedia.org/wiki/Bayesian_probability is clearly inapplicable.
 
  • #154
If I created a device to drop a coin the same exact way each time, and I put the coin in heads up each time, the first drop would presumably be the only drop with a probability of 50-50. It seems the knowledge of that outcome would affect the probability of every other drop. Please help me out if my thinking is flawed.
 
  • #155
A. Neumaier said:
As Wikipedia says, the above is a particular _interpretation_, not a _definition_ of probability.
Now you want to take a semantic debate about the word "probability" and add a semantic debate about the word "definition". :rolleyes:

The point is that it is perfectly well-accepted to consider probability to depend on knowledge. It is not a myth. Your continued refusal to recognize this obvious fact makes you seem irrational and biased. How can anyone reason or debate with someone who won't even acknowledge commonly accepted meanings of terms?
 
  • #156
Darken-Sol said:
If I created a device to drop a coin the same exact way each time, and I put the coin in heads up each time, the first drop would presumably be the only drop with a probability of 50-50. It seems the knowledge of that outcome would affect the probability of every other drop. Please help me out if my thinking is flawed.
If your device were deterministic, and you were able to replicate things with infinite precision, the later outcomes would be the same as the first one. But neither of these assumptions can be realized.
 
  • #157
DaleSpam said:
Now you want to take a semantic debate about the word "probability" and add a semantic debate about the word "definition". :rolleyes:

The point is that it is perfectly well-accepted to consider probability to depend on knowledge. It is not a myth. Your continued refusal to recognize this obvious fact makes you seem irrational and biased. How can anyone reason or debate with someone who won't even acknowledge commonly accepted meanings of terms?

You seem to imply that semantics is irrelevant for meaning.

I have never before seen anyone equate interpretation with definition. They are worlds apart.

And about the semantics of myth:

from http://en.wikipedia.org/wiki/Myth :
Many scholars in other fields use the term "myth" in somewhat different ways. In a very broad sense, the word can refer to any traditional story.

from http://en.wikipedia.org/wiki/National_myth :
A national myth is an inspiring narrative or anecdote about a nation's past. Such myths often serve as an important national symbol and affirm a set of national values.

Thus something may be well accepted and still be a myth.
 
  • #158
A. Neumaier said:
If your device were deterministic, and you were able to replicate things with infinite precision, the later outcomes would be the same as the first one. But neither of these assumptions can be realized.

I'm just using a cheap chute and a pencil. Nine out of ten times it's heads, so far. That one tails - does that set the odds back to 50-50, even though the results say 90% heads? Would an observer with no knowledge have a 50-50 chance?
 
  • #159
Darken-Sol said:
I'm just using a cheap chute and a pencil. Nine out of ten times it's heads, so far. That one tails - does that set the odds back to 50-50, even though the results say 90% heads? Would an observer with no knowledge have a 50-50 chance?
It depends on whether you think in terms of subjective or objective probability.

The objective probability is independent of how much an observer knows, and can be determined approximately from sufficiently many experiments. To someone who knows none or only a few experimental outcomes, the objective probability will be unknown rather than 50-50.

The subjective probability depends on the prejudice an observer has (encoded in the prior) and the amount of data (which modify the prior), so it may well be 50-50 for an observer with no knowledge.
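A minimal sketch of such a subjective update (the uniform prior and conjugate Beta update are standard choices; the counts come from the coin-chute description above):

```python
# A "no knowledge" subjective start is the uniform prior Beta(1, 1),
# whose mean is 0.5. Observing 9 heads and 1 tail updates it by the
# standard conjugate rule to Beta(1 + 9, 1 + 1) = Beta(10, 2).
a, b = 1, 1
heads, tails = 9, 1
a, b = a + heads, b + tails
print(f"prior mean: 0.500, posterior mean: {a / (a + b):.3f}")  # 0.833
```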
 
  • #160
A. Neumaier said:
You seem to imply that semantics is irrelevant for meaning.

I have never before seen anyone equate interpretation with definition. They are worlds apart.

And about the semantics of myth:

from http://en.wikipedia.org/wiki/Myth :


from http://en.wikipedia.org/wiki/National_myth :


Thus something may be well accepted and still be a myth.
You are clearly not a reasonable person to discuss with. No progress can be made in such a conversation.
 
  • #161
A. Neumaier said:
It depends on whether you think in terms of subjective or objective probability.

The objective probability is independent of how much an observer knows, and can be determined approximately from sufficiently many experiments. To someone who knows none or only a few experimental outcomes, the objective probability will be unknown rather than 50-50.

The subjective probability depends on the prejudice an observer has (encoded in the prior) and the amount of data (which modify the prior), so it may well be 50-50 for an observer with no knowledge.

You're saying there was only one outcome objectively, even though I couldn't be certain. So subjectively I had 2 choices, and then one choice for each successive drop?
 
  • #162
Darken-Sol said:
You're saying there was only one outcome objectively, even though I couldn't be certain. So subjectively I had 2 choices, and then one choice for each successive drop?

Objectively, the odds seem to be close to 90-10, according to your description, though I don't know whether your sample was large enough to draw this conclusion with some confidence.

Subjectively, it depends on what you are willing to substitute for your ignorance.

If _I_ were the subject and had no knowledge, I'd defer judgment rather than assert an arbitrary probability. This is the scientifically sound way to proceed.
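For reference, a quick check of how strongly ten drops support the 90-10 estimate (an illustrative calculation using only the ten reported outcomes):

```python
from math import comb

# Two-sided tail probability of a result at least as extreme as
# 9 heads in 10 drops, computed under the hypothesis of a fair coin.
n, k = 10, 9
extreme = list(range(0, n - k + 1)) + list(range(k, n + 1))  # {0, 1, 9, 10}
p_value = sum(comb(n, i) for i in extreme) / 2 ** n
print(f"two-sided p-value under a fair coin: {p_value:.4f}")  # ~0.0215
```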
 
  • #163
The knowledge has no effect on the probability of the outcome, just on the probable correct answers. I think I got it. I guess I agree with you then.
 
  • #164
Quantum mechanics has demonstrated that what we do not know can arise from what we cannot know. Information that parts of a system can have about other parts of a system is not really separate from the systems themselves. We have to stop pretending to be omniscient.
 
  • #165
I have just now been introduced to probability theory by Jaynes, and probability as he describes it (as a tool for prediction) definitely depends on information. I suppose that what you call "probability" is what he might have called statistical "frequency".

Thus it is "just" a matter of words and definition, but, as I just discovered, it's an important one and you are right to bring it up!

Jaynes argues, or in fact shows, that quite a few paradoxes (including, in QM, Bell's) result from confusions between, on the one hand:
- our probabilistic inferences and predictions based on the information that we have,
and on the other hand:
- the effects and statistics of physical measurements that allow us to verify those predictions.

Harald
 
  • #166
harrylin said:
I have just now been introduced to probability theory by Jaynes, and probability as he describes it (as a tool for prediction) definitely depends on information. I suppose that what you call "probability" is what he might have called statistical "frequency".

Thus it is "just" a matter of words and definition, but, as I just discovered, it's an important one and you are right to bring it up!

Jaynes' probabilities are subjective; there, the dependence on knowledge is appropriate.
When he applies them to statistical mechanics, though, he gets the right results only if he assumes the right sort of knowledge, namely knowledge of the additive conserved quantities. Were someone to apply his max entropy principle using only knowledge about the expectation of the square of H, say, they would get very wrong formulas.

Thus one needs to know the correct formulas to know which sort of information one may use as input to his subjective approach...
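For concreteness, the standard maximum entropy calculation (sketched here in outline; notation mine): maximizing S = -sum_i p_i log p_i subject to sum_i p_i = 1 and sum_i p_i E_i = U yields p_i proportional to exp(-beta E_i), the canonical distribution, with beta the Lagrange multiplier of the energy constraint. Replacing the constraint by a fixed sum_i p_i E_i^2 instead yields p_i proportional to exp(-gamma E_i^2), which is not the canonical distribution and gives wrong thermodynamics.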

For a detailed discussion, see Sections 10.6 and 10.7 of my book

Arnold Neumaier and Dennis Westra,
Classical and Quantum Mechanics via Lie algebras,
2008, 2011. http://lanl.arxiv.org/abs/0810.1019
 
  • #167
A. Neumaier said:
Jaynes' probabilities are subjective; there, the dependence on knowledge is appropriate.
When he applies them to statistical mechanics, though, he gets the right results only if he assumes the right sort of knowledge, namely knowledge of the additive conserved quantities. Were someone to apply his max entropy principle using only knowledge about the expectation of the square of H, say, they would get very wrong formulas.

Thus one needs to know the correct formulas to know which sort of information one may use as input to his subjective approach...

For a detailed discussion, see Sections 10.6 and 10.7 of my book

Arnold Neumaier and Dennis Westra,
Classical and Quantum Mechanics via Lie algebras,
2008, 2011. http://lanl.arxiv.org/abs/0810.1019

It appears to me that what you call "subjective" is what he called "objective"; and of course any prediction is based on certain assumptions (theories that are based on human knowledge). Anyway, thanks for the link - and if you want to call a prediction based on QM "subjective", then that's fine by me. :wink:
 

1. What is the myth that probability depends on knowledge?

The myth that probability depends on knowledge is the belief that the likelihood of an event occurring can be affected by what we know or believe about it. This is often used to justify superstitious or irrational beliefs, but in reality, probability is based on objective factors and cannot be influenced by subjective knowledge or beliefs.

2. Why is this myth misleading?

This myth is misleading because it implies that our beliefs or knowledge can somehow alter the likelihood of an event occurring, when in fact, probability is determined by objective factors such as chance and randomness. Our knowledge or beliefs may influence our perception of probability, but they do not actually affect the outcome of events.

3. What evidence disproves this myth?

There is ample evidence from various fields such as mathematics, statistics, and psychology that disproves this myth. For example, the laws of probability, such as the law of large numbers, are based on objective principles and are not influenced by our knowledge or beliefs. Additionally, studies have shown that people's perception of probability is often biased and can be easily manipulated, further disproving the idea that probability depends on knowledge.

4. How does this myth impact decision making?

This myth can have a significant impact on decision making, as people may make choices based on their subjective beliefs about probability rather than objective factors. This can lead to poor decision making and can also perpetuate superstitious or irrational thinking. It is important to understand that probability is not influenced by our knowledge or beliefs in order to make informed and rational decisions.

5. How can we combat this myth?

To combat this myth, it is important to educate ourselves and others about the principles of probability and how it is determined by objective factors. We can also practice critical thinking and question our own biases and beliefs when making decisions. Additionally, promoting scientific literacy and critical thinking skills in education can help combat this and other myths that can lead to misinformation and irrational thinking.
