Regarding consciousness causing wavefunction collapse

Trollfaz
What are the experiments that disprove the idea that consciousness causes wavefunction collapse?
 
Trollfaz said:
What are the experiments that disprove the idea that consciousness causes wavefunction collapse?
There are no such experiments (despite the fact that a paper coauthored by my brother (who is a psychologist by education) claims the opposite).
 
  • Like
Likes bhobba
Is there any proof for the consciousness-causes-collapse idea?
 
I believe this idea was entertained by a few in the very early days of QM and only for a short time, but the mythology persists.

Cheers
 
Demystifier said:
There are no such experiments (despite the fact that a paper coauthored by my brother (who is a psychologist by education) claims the opposite).

Does consciousness cause wave function collapse in Bohmian Mechanics?
 
  • Like
Likes Feeble Wonk
Trollfaz said:
Is there any proof for the consciousness-causes-collapse idea?

Of course not. It's very much like solipsism - inherently unprovable. Even the reason for its introduction, which leads to all sorts of weird effects, is no longer relevant. It's very much a backwater these days - like Lorentz Ether Theory is to relativity. You can't disprove it, but modern presentations of SR based on symmetry make it totally irrelevant.

Thanks
Bill
 
But didn't the scientists conduct the double-slit experiment without anyone recording the results, but with the detector on?
 
atyy said:
Does consciousness cause wave function collapse in Bohmian Mechanics?
No, why do you ask?
 
  • #10
Trollfaz said:
But didn't the scientists conduct the double-slit experiment without anyone recording the results, but with the detector on?
Yes, but the scientists didn't check whether the detector detected anything when nobody was looking at it.
 
  • #11
Hm, but you can look later at the photoplate or (nowadays) the digitally stored measurement data and check what the detector has registered. The investigated system only "cares" about what it's really interacting with, i.e., the detector, and not about some "consciousness" (whatever that might be) looking at the result (maybe 100 years later)!
 
  • #12
vanhees71 said:
Hm, but you can look later at the photoplate or (nowadays) the digitally stored measurement data and check what the detector has registered. The investigated system only "cares" about what it's really interacting with, i.e., the detector, and not about some "consciousness" (whatever that might be) looking at the result (maybe 100 years later)!
Yes, but if you look later, you only know what is there later. You cannot know what was there before. You can only assume that it was there before, but you cannot prove that assumption by the scientific method. You can "prove" it by using some philosophy, but philosophy is not science, right? :-p
 
  • Like
Likes bohm2 and AlexCaledin
  • #13
Now you got me ;-).
 
  • Like
Likes Hypercube and Demystifier
  • #14
Demystifier said:
No, why do you ask?

In Bohmian Mechanics, the wave function of the universe does not collapse. Yet Bohmian Mechanics says that predictions obtained with collapse are correct. Since objectively the wave function of the universe does not collapse, I thought wave function collapse in Bohmian Mechanics is subjective (ie. requires consciousness).
 
  • #15
atyy said:
In Bohmian Mechanics, the wave function of the universe does not collapse. Yet Bohmian Mechanics says that predictions obtained with collapse are correct. Since objectively the wave function of the universe does not collapse, I thought wave function collapse in Bohmian Mechanics is subjective (ie. requires consciousness).
This is very much like saying that the validity of Bayes' formula for conditional probability requires consciousness. Would you say that Bayes' formula requires consciousness?
 
  • Like
Likes vanhees71
  • #16
Nowadays, with an argument involving Bayes and his (purely mathematical) theorem, you can argue for anything you like, including a huge pile of bovine excrement. SCNR :mad:
 
  • #17
vanhees71 said:
Nowadays, with an argument involving Bayes and his (purely mathematical) theorem, you can argue for anything you like, including a huge pile of bovine excrement. SCNR :mad:
How does that work? I would also like to learn that general, powerful technique of argumentation based on Bayes. :biggrin:
 
  • #18
Well, you can, e.g., create a whole new philosophy "of it all" called "quantum Bayesianism".
 
  • #19
Demystifier said:
This is very much like saying that the validity of Bayes' formula for conditional probability requires consciousness. Would you say that Bayes' formula requires consciousness?

I'm not sure. My instinct is to say it depends.

If interpreted in a frequentist sense, then Bayes's theorem does not require consciousness.

If interpreted in a subjective Bayesian sense, then Bayes's theorem does require consciousness.

I don't believe the objective Bayesian approach makes any sense.
 
  • #20
vanhees71 said:
Well, you can, e.g., create a whole new philosophy "of it all" called "quantum Bayesianism".

Surely no need to "create" since the name at least is already in use? E.g.

https://plato.stanford.edu/entries/quantum-bayesian/

https://arxiv.org/pdf/quant-ph/0608190.pdf

http://www.physics.usyd.edu.au/~ericc/SQF2014/slides/Ruediger%20Schack.pdf

etc.

I know about this only because it is one of many interpretations discussed in Michael Raymer's July 2017 book from Oxford U. Press, Quantum Physics: What Everyone Needs to Know.

And certainly @atyy is correct when he says "If interpreted in a subjective Bayesian sense, then Bayes's theorem does require consciousness"; here's a syllogism from the last link above, a slide show put together by Schack:

A quantum state determines probabilities through the Born rule.
Probabilities are personal judgements of the agent who assigns them.
HENCE: A quantum state is a personal judgement of the agent who assigns it.​
 
Last edited:
  • #21
UsableThought said:
Surely no need to "create" since the name at least is already in use? E.g.

https://plato.stanford.edu/entries/quantum-bayesian/

https://arxiv.org/pdf/quant-ph/0608190.pdf

http://www.physics.usyd.edu.au/~ericc/SQF2014/slides/Ruediger%20Schack.pdf

etc.

I know about this only because it is one of many interpretations discussed in Michael Raymer's July 2017 book from Oxford U. Press, Quantum Physics: What Everyone Needs to Know.

And certainly @atyy is correct when he says "If interpreted in a subjective Bayesian sense, then Bayes's theorem does require consciousness"; here's a syllogism from the last link above, a slide show put together by Schack:

A quantum state determines probabilities through the Born rule.
Probabilities are personal judgements of the agent who assigns them.
HENCE: A quantum state is a personal judgement of the agent who assigns it.​
Sounds wise. How does the personal judgement of the agent affect a future interaction with or measurement of the state? Is there still a state if there is no agent?
 
  • #22
I heard from Sean Carroll that if our consciousness did affect experiments, it would have to act through the four fundamental forces or an unknown fifth force. He argued that such a "fifth force" would already have been detected if it existed; the fact that nothing has been found shows that psychokinesis is wrong: we cannot change the wavefunction with our consciousness.
 
  • #23
Ok, it's a matter of opinion, but I consider this subjective interpretation of probabilities gibberish. Nobody following this new idea (why it is attributed to poor Bayes is not clear to me either, by the way) has ever been able to explain to me what it means for real-world measurements, which of course use the frequentist interpretation of probabilities, and the frequentist interpretation just works. So why do I need a new, unsharp, subjective redefinition of the statistical meaning of probability theory?
 
  • Like
Likes Mathematech
  • #24
That's why I would say that the Global Consciousness Project and Dean Radin's double-slit experiments are pseudoscience. The conclusions are all derived from cherry-picking of data.
 
  • #25
vanhees71 said:
Ok, it's a matter of opinion, but I consider this subjective interpretation of probabilities gibberish. Nobody following this new idea (why it is attributed to poor Bayes is not clear to me either, by the way) has ever been able to explain to me what it means for real-world measurements, which of course use the frequentist interpretation of probabilities, and the frequentist interpretation just works. So why do I need a new, unsharp, subjective redefinition of the statistical meaning of probability theory?

I would say that Bayesian probability is probability done right, but luckily for frequentists, the difference between a correct Bayesian analysis and an incorrect frequentist analysis disappears in the limit of many trials. :wink:

Suppose I flip a coin once and I get heads. So the relative frequency for heads is 1. Does that mean that the probability is 1? Of course not! I don't have enough data to say that. So I flip the coin 10 times, and I get 4 heads and 6 tails. Does that mean that the probability of heads is 40%? No, those 10 coin flips could have been a fluke. So I flip the coin 100 times or 1000 times. How many flips does it take before I know that the pattern isn't a fluke? The answer is: there is never a time that I know for certain that it isn't a fluke.

Bayesian reasoning is reasoning in the presence of uncertainty, when there is a limited amount of data. But we're ALWAYS in that situation.
 
  • Like
Likes Auto-Didact
  • #26
stevendaryl said:
I would say that Bayesian probability is probability done right, but luckily for frequentists, the difference between a correct Bayesian analysis and an incorrect frequentist analysis disappears in the limit of many trials. :wink:

Suppose I flip a coin once and I get heads. So the relative frequency for heads is 1. Does that mean that the probability is 1? Of course not! I don't have enough data to say that. So I flip the coin 10 times, and I get 4 heads and 6 tails. Does that mean that the probability of heads is 40%? No, those 10 coin flips could have been a fluke. So I flip the coin 100 times or 1000 times. How many flips does it take before I know that the pattern isn't a fluke? The answer is: there is never a time that I know for certain that it isn't a fluke.

Bayesian reasoning is reasoning in the presence of uncertainty, when there is a limited amount of data. But we're ALWAYS in that situation.

In practice, frequentist probability is more mathematically tractable than Bayesian probability. Using Bayesian probability, there is always a potentially infinite number of hypotheses about what is going on, and the only effect of data gathering is to shift the relative likelihood of the various possibilities. In contrast, frequentist probability has a criterion for rejecting hypotheses. The hypothesis that a coin is a fair coin can be rejected if repeated coin flips show a departure from 50/50 that is larger than the level of significance. So a frequentist approach is a lot less cluttered, since you are constantly clearing away falsified hypotheses.
 
  • #27
stevendaryl said:
I would say that Bayesian probability is probability done right, but luckily for frequentists, the difference between a correct Bayesian analysis and an incorrect frequentist analysis disappears in the limit of many trials. :wink:

Suppose I flip a coin once and I get heads. So the relative frequency for heads is 1. Does that mean that the probability is 1? Of course not! I don't have enough data to say that. So I flip the coin 10 times, and I get 4 heads and 6 tails. Does that mean that the probability of heads is 40%? No, those 10 coin flips could have been a fluke. So I flip the coin 100 times or 1000 times. How many flips does it take before I know that the pattern isn't a fluke? The answer is: there is never a time that I know for certain that it isn't a fluke.

Bayesian reasoning is reasoning in the presence of uncertainty, when there is a limited amount of data. But we're ALWAYS in that situation.
Well, you should do the analysis in a complete way and give the uncertainties (e.g., by giving the standard deviations of your result). The point is that, as you admit, to get the probabilities from experiment you have to repeat the experiment often enough to "collect enough statistics". That's the frequentist approach to statistics, which is well founded in probability theory in terms of the law of large numbers.
 
  • #28
vanhees71 said:
Well, you should do the analysis in a complete way and give the uncertainties (e.g., by giving the standard deviations of your result). The point is that, as you admit, to get the probabilities from experiment

I'm not admitting that. I'm saying that it's actually impossible to get objective probabilities from experiment.

you have to repeat the experiment often enough to "collect enough statistics".

No, that's what frequentists say--that you have to collect enough data. I'm saying the opposite, that there is no such thing as collecting enough statistics. No matter how much data you collect, your estimate of probability will always be subjective.

That's the frequentist approach to statistics, which is well founded in probability theory in terms of the law of large numbers.

I'm saying the opposite of that. The law of large numbers doesn't support the frequentist approach. What the law of large numbers says is that the difference between the (incorrect) frequentist approach and the (correct) Bayesian approach goes to zero as the number of trials goes to infinity.
 
  • #29
Hm, how do you then explain the amazing accuracy with which many of the probabilistic predictions of QT are confirmed by experiments, using the frequentist interpretation of probability?

Or, put another way: how do you, as a "Bayesian", interpret probabilities, and how can you, if there's no objective way to empirically measure probabilities with higher and higher precision by "collecting statistics", verify or falsify the probabilistic predictions of QT?
 
  • #30
vanhees71 said:
Well, you should do the analysis in a complete way and give the uncertainties (e.g., by giving the standard deviations of your result).

The frequentist approach to giving uncertainties is just wrong. It's backwards.

Let me illustrate with coin flipping. Suppose you want to know whether you have a fair coin. (There's actually evidence that there is no such thing as a biased coin: weighting one side doesn't actually make it more likely to land on that side. But that's sort of beside the point...) What you'd like to be able to do is to flip the coin a bunch of times, and note how many heads and tails you get, and use that data to decide whether your coin is fair or not. In other words, what you want to know is:
  • What is the probability that my coin is unfair, given the data?

But the uncertainty that frequentists compute is:
  • What is the probability of getting that data, if I assume that the coin is unfair?
By itself, that doesn't tell us anything about the likelihood of having a fair or unfair coin.

(Note: technically, you would compute something like the probability of getting that data under the assumption that the coin's true probability for heads, ##P_H##, is more than ##\epsilon## away from ##\frac{1}{2}##.)
 
  • #31
vanhees71 said:
Hm, how do you then explain the amazing accuracy with which many of the probabilistic predictions of QT are confirmed by experiments, using the frequentist interpretation of probability?

I already said how: The difference between the (incorrect) frequentist analysis and the (correct) Bayesian analysis goes to zero in the limit as the number of trials becomes large.

Or, put another way: how do you, as a "Bayesian", interpret probabilities, and how can you, if there's no objective way to empirically measure probabilities with higher and higher precision by "collecting statistics", verify or falsify the probabilistic predictions of QT?

For a Bayesian, at any given time, there are many alternative hypotheses that could all explain the given data. Gathering more data will tend to make some hypotheses more likely, and other hypotheses less likely. The point of gathering more data is to decrease your uncertainty about the various hypotheses. But unlike for frequentists, nothing is ever verified, and nothing is ever falsified. That isn't a problem, in principle. In practice, it's cumbersome to keep around hypotheses that have negligible likelihood. So I think there is a sense in which Popperian falsification is a heuristic tool to make science more tractable.
 
  • #32
I'm again too stupid to follow this argument. I'd describe the coin-tossing probability experiment as follows. I assume that the coin is stable and there's a probability ##p## for showing heads (then necessarily the probability for showing tails is ##q=1-p##).

As a frequentist, to figure out the probability ##p## I have to toss the coin very often and check the relative frequencies with which I get heads or tails, and standard probability theory tells me that this is not as stupid an idea as you claim, since we can easily verify the Law of Large Numbers for this simple case. The probability for getting ##0 \leq H \leq N## heads obviously is
$$P_N(H)=\binom{N}{H} p^H(1-p)^{N-H}.$$
To go on I define the generating function
$$f(x)=\sum_{H=0}^N \binom{N}{H} \exp(x H)\, p^H(1-p)^{N-H}=(1+p \exp x-p)^N$$
to evaluate the expectation value for ##H## and its standard deviation
$$\overline{H}=\langle H \rangle =f'(0)= N p, \quad \sigma_{H}^2=\langle H^2 \rangle-\langle H \rangle^2=Np(1-p).$$
The expectation value of the relative frequency for heads is thus
$$p_N = \frac{\overline{H}}{N}=p$$
and its standard deviation
$$\sigma_{p_N}=\frac{\sigma_{H}}{N}=\sqrt{\frac{p(1-p)}{N}}.$$
For large ##N## the probability distribution for ##p_N## is Gaussian around the mean value ##p## with a width of ##\mathcal{O}(1/\sqrt{N})##, i.e., for ##N \rightarrow \infty## the relative frequencies for heads converge in some weak (or "probabilistic") sense to ##p##.

That's more a plausibility argument than a real strict proof, but it can be made rigorous, and it shows that the frequentist interpretation is valid. I don't thus see any need to introduce another interpretation of probabilities than the frequentist one for any practical purpose.

Of course, if you cannot make ##N## very large for some reason, you have to live with large uncertainties. Then you might start philosophical speculations about the "meaning of probabilities for a small number of events". Since physics claims to be an objective science, there are demands on the significance of a discovery (e.g., the famous ##5\sigma## rule in HEP physics).
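
As a rough numerical illustration of this ##1/\sqrt{N}## scaling, here is a minimal simulation sketch (assuming Python with NumPy is available; the value ##p=0.5## and the sample sizes are arbitrary choices for illustration, not anything prescribed above):

Code:
import numpy as np

rng = np.random.default_rng(0)
p = 0.5  # assumed true probability of heads (illustrative choice)

for N in (100, 10_000, 1_000_000):
    flips = rng.random(N) < p          # N Bernoulli(p) trials
    p_N = flips.mean()                 # relative frequency of heads
    sigma = np.sqrt(p * (1 - p) / N)   # predicted spread of the relative frequency
    print(f"N={N:>9}: relative frequency={p_N:.4f}, predicted sigma={sigma:.5f}")

The deviation of the relative frequency from ##p## shrinks roughly like ##1/\sqrt{N}##, as the binomial calculation predicts.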
 
  • #33
stevendaryl said:
I already said how: The difference between the (incorrect) frequentist analysis and the (correct) Bayesian analysis goes to zero in the limit as the number of trials becomes large.
How then can the "frequentist analysis" be wrong? It cannot be wrong, because in the hard empirical sciences we consider only sufficiently often repeatable observations as clear evidence for the correctness of a probabilistic description. "Unrepeatable one-time experiments" are useless for science.

For a Bayesian, at any given time, there are many alternative hypotheses that could all explain the given data. Gathering more data will tend to make some hypotheses more likely, and other hypotheses less likely. The point of gathering more data is to decrease your uncertainty about the various hypotheses. But unlike for frequentists, nothing is ever verified, and nothing is ever falsified. That isn't a problem, in principle. In practice, it's cumbersome to keep around hypotheses that have negligible likelihood. So I think there is a sense in which Popperian falsification is a heuristic tool to make science more tractable.
Then Bayesianism is simply irrelevant for the natural sciences.
 
  • Like
Likes Mentz114
  • #34
Let me bring up a hoary example illustrating the problem with the frequentist notion of uncertainty.

Suppose you're a doctor, and you have some fairly accurate test for some disease. You've confirmed that:
  • If you have the disease, there is a 99% probability that you will test positive, and only a 1% chance that you will test negative.
  • If you don't have the disease, there is a 99% probability that you will test negative, and only a 1% chance that you will test positive.
So you test a patient, and he tests positive for the disease. You tell him: "You probably have the disease; but there is a 1% uncertainty in the diagnosis." Should the patient be worried, or not?

Well, 99% certainty sounds pretty certain, so the patient ought to be worried. But the Bayesian analysis would tell us this:
  • Let ##p(D)## be the a priori probability that the patient has the disease (before any tests are performed).
  • Let ##p(\neg D) = 1 - p(D)## be the a priori probability that he doesn't have the disease.
  • Let ##p(P|D)## be the probability of testing positive, given that the patient has the disease (99% in our example).
  • Let ##p(P|\neg D)## be the probability of testing positive, given that the patient does not have the disease (1% in our example).
  • Then the probability of the patient having the disease, given that he tests positive, is ##p(D|P) = \frac{p(P|D)\,p(D)}{p(P|D)\,p(D) + p(P|\neg D)\,p(\neg D)}##.
If ##p(D) = 0.0001## (1 in 10,000), this gives ##p(D|P) \approx 0.98\%##. In other words, the probability that he doesn't have the disease is 99%.

So the 1% uncertainty in the test accuracy is completely inaccurate as a way to estimate the uncertainty in whether the patient has the disease.
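
A minimal numerical check of this calculation (a sketch assuming Python; the numbers are just the ones from the example above):

Code:
# Bayes' theorem for the disease-test example above
p_D = 0.0001             # prior probability of having the disease (1 in 10,000)
p_pos_given_D = 0.99     # P(positive | disease)
p_pos_given_notD = 0.01  # P(positive | no disease)

p_pos = p_pos_given_D * p_D + p_pos_given_notD * (1 - p_D)  # total P(positive)
p_D_given_pos = p_pos_given_D * p_D / p_pos                 # P(disease | positive)

print(f"P(disease | positive test) = {p_D_given_pos:.4%}")  # about 0.98%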
 
  • #35
What does this example have to do with what we are discussing?
 
  • #36
vanhees71 said:
I'm again too stupid to follow this argument. I'd describe the coin-throughing probability experiment as follows. I assume that the coin is stable and there's a probability ##p## for showing head (then necessarily the probability for showing tail is ##q=1-p##).

[Stuff deleted]

That's more a plausibility argument than a real strict proof, but it can be made rigorous, and it shows that the frequentist interpretation is valid. I don't thus see any need to introduce another interpretation of probabilities than the frequentist one for any practical purpose.

That's backwards from what you really want. You're starting with a probability, ##p##, and then you're calculating the likelihood that you get ##H## heads out of ##N## flips. What you want is to calculate ##p## from ##H## and ##N##, because ##p## is the unknown.

There are two different uncertainties involved in this thought experiment:
  • The uncertainty in ##p##, given ##\frac{H}{N}##.
  • The uncertainty in ##\frac{H}{N}##, given ##p##.
What you want is the first, but what you calculate is the second. Of course, in the limit that ##N \rightarrow \infty##, if the second goes to zero, then so does the first. But for finite ##N## (which is all we ever have), we don't have any way to calculate the relationship between the two without using subjective priors.

If ##N## is finite (which it always is), it's just incorrect for the frequentist to say that there is an uncertainty of 1% that the coin's true probability is ##\frac{1}{2} \pm \epsilon##.
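
One way to make the first kind of uncertainty concrete is the textbook Bayesian treatment with a Beta prior on ##p## (a sketch assuming Python with SciPy; the uniform prior, the counts and the tolerance ##\epsilon## are illustrative assumptions, not numbers from this thread):

Code:
from scipy import stats

# Illustrative data: H heads out of N flips
N, H = 100, 56

# Uniform prior Beta(1, 1) on p; the posterior is then Beta(1 + H, 1 + N - H)
posterior = stats.beta(1 + H, 1 + N - H)

# Uncertainty in p given the data (the first kind of uncertainty)
lo, hi = posterior.interval(0.95)
print(f"95% credible interval for p: ({lo:.3f}, {hi:.3f})")

# Posterior probability that the bias exceeds epsilon
eps = 0.05
p_biased = posterior.cdf(0.5 - eps) + posterior.sf(0.5 + eps)
print(f"P(|p - 1/2| > {eps} | data) = {p_biased:.3f}")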
 
  • #37
vanhees71 said:
What does this example have to do with what we are discussing?

The issue is the meaning of frequentist uncertainty. If we're trying to determine whether a coin is biased, then what we want to know is the likelihood that the coin is biased (or to make it definite, the likelihood that its bias is greater than ##\epsilon## for some appropriate ##\epsilon##). The frequentist uncertainty doesn't tell us this.
 
  • #38
Of course, if I have thrown the coin only a few times, my uncertainty about ##p## for heads, given the relative frequency, is very large, and thus my uncertainty about whether it's biased (or not) is also large. Of course, you have to do the experiment with sufficient statistics to decide at some given significance level. That's why, e.g., physicists built the LHC just to find the Higgs with sufficient significance (not that it wouldn't be good to find something else too, but that's not the issue here).
 
  • #39
UsableThought said:
I know about this only because it is one of many interpretations discussed in Michael Raymer's July 2017 book from Oxford U. Press, Quantum Physics: What Everyone Needs to Know.
Good book, I have a copy of it. For a lay audience, I'll be recommending people read it in addition to 'Sneaking a Look at God's Cards'.
 
  • #40
stevendaryl said:
That's backwards from what you really want. You're starting with a probability, ##p##, and then you're calculating the likelihood that you get ##H## heads out of ##N## flips. What you want is to calculate ##p## from ##H## and ##N##, because ##p## is the unknown.

There are two different uncertainties involved in this thought experiment:
  • The uncertainty in ##p##, given ##\frac{H}{N}##.
  • The uncertainty in ##\frac{H}{N}##, given ##p##.
What you want is the first, but what you calculate is the second. Of course, in the limit that ##N \rightarrow \infty##, if the second goes to zero, then so does the first. But for finite ##N## (which is all we ever have), we don't have any way to calculate the relationship between the two without using subjective priors.

If ##N## is finite (which it always is), it's just incorrect for the frequentist to say that there is an uncertainty of 1% that the coin's true probability is ##\frac{1}{2} \pm \epsilon##.
This is an interesting discussion. I can see why the frequentist method might have some pitfalls. It's essentially trying to do a proof by contradiction based on assumed values. So we can see why statements of probability get to the heart of the matter much more directly. But isn't one of the pitfalls of Bayesian logic that it depends on good priors? If our subjective beliefs about the prior probability are wrong (not merely inaccurate), then the posterior would be further from the truth than a frequentist analysis.
 
  • #41
vanhees71 said:
That's more a plausibility argument than a real strict proof, but it can be made rigorous, and it shows that the frequentist interpretation is valid. I don't thus see any need to introduce another interpretation of probabilities than the frequentist one for any practical purpose.

. . . Bayesianism is simply irrelevant for the natural sciences.

For any practical purpose? Irrelevant for the natural sciences?

Examples of the usefulness of subjective probability (a category which of course includes Bayesian probability) can be found in primers on scientific inference; it is regarded as especially handy for situations which lack enough information to support a reference class. This "reference class problem" affects all models, but is considered especially difficult for frequentism.

Here's one such example of using subjective probability when a reference class is lacking; this is drawn from Philosophy of Science: A Very Short Introduction, by Samir Okasha (note this was published in 2002 and so does not reflect more recent Mars missions & speculation about microorganism habitat):

Suppose a scientist tells you that the probability of finding life on Mars is extremely low. Does this mean that life is found only on a small proportion of all celestial bodies? Surely not. For one thing, no one knows how many celestial bodies there are, nor how many of them contain life. So a different notion of probability is at work here. Now since there either is life on Mars or there isn't, talk of probability in this context must presumably reflect our ignorance of the state of the world, rather than describing an objective feature of the world itself. So it is natural to take the scientist's statement to mean that in the light of all the evidence, the rational degree of belief to have in the hypothesis that there is life on Mars is very low.

stevendaryl said:
If we're trying to determine whether a coin is biased, then what we want to know is the likelihood that the coin is biased (or to make it definite, the likelihood that its bias is greater than ##\epsilon## for some appropriate ##\epsilon##). The frequentist uncertainty doesn't tell us this.

Not sure you'll think this relevant, but frequency counting has been used to identify biased dice. In 1894, a zoologist named Raphael Weldon rolled a set of dice more than 26,000 times; the numbers 5 and 6 came up too often; examination of the dice showed that the way holes were drilled in the faces, to represent the numbers, resulted in consistent imbalances. Wikipedia mentions Weldon's dice trial, but the description I just cited comes from yet another "Very Short" primer, this one on probability, by John Haigh. That book also mentions a trial done about 70 years later by a man named Willard Longcor, in which Longcor collected various makes of dice and threw each make over 20,000 times; cheaply made dice again showed bias, whereas precision dice such as those used in Las Vegas casinos did not show bias - at least not after 20,000 throws. That experiment is mentioned in a blog post here.
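
For what it's worth, the kind of frequency test used in such dice trials can be sketched in a few lines (assuming Python with SciPy; the face counts below are made up for illustration, not Weldon's or Longcor's actual data):

Code:
from scipy.stats import chisquare

# Hypothetical counts of faces 1..6 from 12,000 rolls of one die
observed = [1950, 1980, 2010, 1940, 2070, 2050]

# Null hypothesis: a fair die, i.e. all faces equally likely (the default expectation)
stat, p_value = chisquare(observed)
print(f"chi-square = {stat:.2f}, p-value = {p_value:.3f}")
# A frequentist would read a very small p-value as evidence that the die is biased.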

FallenApple said:
But isn't one of the pitfalls of Bayesian logic that it depends on good priors?

Absolutely. From what I read, this is how the reference class problem manifests itself in Bayesian probability, or so mentions Wikipedia in its article on the problem.

Back to the argument about "who's better, frequentist or Bayesian" - assertions that any particular approach is "always" superior seem to me to miss the point: probability models can only be said to be valid to the extent they are useful; and the utility of any particular model seems as if it must vary according to the situation. A recent and interesting book I am reading about the evolution of probability and how in some ways Bayesian analysis in particular has run into trouble in medical studies and similarly difficult applications is Willful Ignorance: The Mismeasurement of Uncertainty, by the statistician and author Herbert Weisberg. I will close with a couple of interesting quotes from that book, starting with this, describing arguments between Bayesians and frequentists:

The disagreement between Bayesians and frequentists arises from a clash between two extreme positions. Bayesians assume that our prior uncertainty should always be framed in terms of mathematical probabilities; frequentists assume it should play no role in our deliberations. Very little serious attention has been paid recently to approaches that attempt to reconcile or transcend these differences.
The other quote has to do with a problem that slides by beneath many discussions of probability: sometimes people assume that probability, as it is mathematically described, is a feature of the universe; when actually, as I think Weisberg makes a good argument for, it is an invention. And this invention, in its various iterations and variations, carries assumptions about the nature of uncertainty which are not always adequate and can be misleading - e.g. the unwitting belief that uncertainty in all cases can be viewed in the manner first introduced by classical probability. Weisberg cites Nassim Nicholas Taleb (The Black Swan, etc.) on this point:

Taleb has dubbed unquestioning belief in the "laws" of classical probability theory the ludic fallacy. The term is derived from the Latin word ludus (game). Taleb chose this term because the underlying metaphor of mathematical probability is the world as a huge casino, with rules like those in a game of chance. Ludic probability gradually supplanted an earlier usage of the word probability that reflected a qualitative analysis of uncertainty grounded in legal, ethical, and even religious considerations.​

Note that Weisberg has no interest in going back in time to a non-mathematical approach to probability. I haven't gotten all the way through, but as I mentioned above, he promises to eventually examine problems with Bayesian analysis that have cropped up with trials in medicine, etc., where results can't be replicated and so on. He has ideas for how to improve this situation and says that this is the real point of his book.
 
Last edited:
  • #42
FallenApple said:
isn't one of the pitfalls of Bayesian logic that it depends on good priors?

That's not really a "pitfall" of Bayesian logic, it's a manifestation of the way that Bayesian logic forces you to make your prior assumptions explicit so you can reason about them.

Also, the more data you collect, the smaller the effect of your priors.

FallenApple said:
If our subjective beliefs about the prior probability are wrong (not merely inaccurate), then the posterior would be further from the truth than a frequentist analysis.

How so?
 
  • #43
PeterDonis said:
That's not really a "pitfall" of Bayesian logic, it's a manifestation of the way that Bayesian logic forces you to make your prior assumptions explicit so you can reason about them.

Also, the more data you collect, the smaller the effect of your priors.
How so?

The posterior is just the likelihood (from the current data) times the prior, up to normalization. If I just throw in some distribution that was heavily based on incorrect past analysis, then wouldn't the posterior estimates be worse than a standalone analysis? Collecting more data reduces the effect of priors, but if the priors were good in the first place, then we would not need to rely on the current data as much. I'm not saying that Bayesian logic is bad. I really find the idea of updating knowledge to be more consistent with scientific progress (adding pieces of knowledge one at a time to contribute to the overall picture). But part of this is that if the past knowledge is wrong, the current evidence is just going to be dragged back, because too much credence is given where it shouldn't be.
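
A small sketch of that trade-off (assuming Python with SciPy; the "confidently wrong" prior and the idealized data are invented purely for illustration): with little data a badly chosen prior dominates the posterior, while with a lot of data both priors end up in roughly the same place.

Code:
from scipy import stats

p_true = 0.5  # illustrative true bias of the coin

# Two priors on p: a vague one and a confidently wrong one peaked near 0.9
priors = {"vague Beta(1,1)": (1, 1), "wrong Beta(90,10)": (90, 10)}

for N in (10, 10_000):
    heads = round(p_true * N)  # idealized data: exactly p_true * N heads
    for name, (a, b) in priors.items():
        post_mean = stats.beta(a + heads, b + N - heads).mean()
        print(f"N={N:>6}, prior={name:<18}: posterior mean of p = {post_mean:.3f}")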
 
  • #44
FallenApple said:
If I just throw in some distribution that was heavily based on incorrect past analysis, then wouldn't the posterior estimates be worse than a standalone analysis?

A standalone analysis based on what?

Basically you seem to be saying that a badly done analysis will give worse results than an analysis that isn't badly done. Of course that's true, but so what?

FallenApple said:
if the priors were good in the first place, then we would not need to rely on current data as much.

In other words, if you already know the right answer, more data doesn't change the answer. Again, that's true, but so what?

If you are saying that frequentist analysis somehow magically avoids the problem of having bad starting assumptions, I don't see how that's the case. If you have bad starting assumptions, you're going to have problems no matter what technique you use. But Bayesian analysis, as I said, forces you to at least make those bad starting assumptions explicit.
 
  • #45
PeterDonis said:
That's not really a "pitfall" of Bayesian logic, it's a manifestation of the way that Bayesian logic forces you to make your prior assumptions explicit so you can reason about them.

Bayesian methods seem to get a lot of different names for very closely related procedures. So for purposes of this discussion I'm going to assume that Bayesian logic = Bayesian inference = Bayesian statistics = Bayesian probability. With that in mind, this statement seems to suggest that priors are at all times strictly an advantage of Bayesian methods, thus devoid of any problem that could be called a pitfall. However, there are views to the contrary; see for example https://plato.stanford.edu/entries/statistics/#DetPri
PeterDonis said:
If you are saying that frequentist analysis somehow magically avoids the problem of having bad starting assumptions, I don't see how that's the case. If you have bad starting assumptions, you're going to have problems no matter what technique you use. But Bayesian analysis, as I said, forces you to at least make those bad starting assumptions explicit.

And this statement seems to imply that frequentism inherently can't or doesn't make its assumptions explicit. This can hardly be the case, or else there would be no such problem as the reference class problem - i.e. you can't have a reference class problem if you aren't choosing a reference class to begin with as one of your starting assumptions. See the third paragraph in this section of the same reference: https://plato.stanford.edu/entries/statistics/#PhyProClaSta

It's not that there aren't differences; it's how to describe these in a non-partisan manner. E.g. this is the conclusion of the section linked to above; "classical statistical procedures" refers to procedures that interpret probabilities as frequencies, i.e. frequentist statistics:

Summing up, it remains problematic that Bayesian statistics is sensitive to subjective input. The undeniable advantage of the classical statistical procedures is that they do not need any such input, although arguably the classical procedures are in turn sensitive to choices concerning the sample space (Lindley 2000). Against this, Bayesian statisticians point to the advantage of being able to incorporate initial opinions into the statistical analysis.​

For anyone who wants to dig further into all this, the "Lindley 2000" reference above leads to a very long and technical paper that can be found online here: http://www.phil.vt.edu/dmayo/personal_website/Lindley_Philosophy_of_Statistics.pdf
 
Last edited:
  • #46
stevendaryl said:
But the Bayesian analysis would tell us this:
  • Let ##p(D)## be the a priori probability that the patient has the disease (before any tests are performed).
  • Let ##p(\neg D) = 1 - p(D)## be the a priori probability that he doesn't have the disease.
  • Let ##p(P|D)## be the probability of testing positive, given that the patient has the disease (99% in our example).
  • Let ##p(P|\neg D)## be the probability of testing positive, given that the patient does not have the disease (1% in our example).
  • Then the probability of the patient having the disease, given that he tests positive, is ##p(D|P) = \frac{p(P|D)\,p(D)}{p(P|D)\,p(D) + p(P|\neg D)\,p(\neg D)}##.
If ##p(D) = 0.0001## (1 in 10,000), this gives ##p(D|P) \approx 0.98\%##.

What is particularly Bayesian about this? As far as I understand, no one debates this result. A frequentist would just say that this means that, if the doctor performs this test on ##N## randomly selected people, the fraction of people who actually have the disease among the ones he diagnoses with the disease will only be about 1% (for large enough ##N##). The Bayesian would say

stevendaryl said:
In other words, the probability that he doesn't have the disease is 99%.

This seems to introduce a new probability concept, as all probabilities so far were relative frequencies (##p(P|D)## and ##p(P|\neg D)## would most likely be the relative frequency in a clinical trial in practice, and you said yourself that ##p(D)## is the relative frequency of the disease in the population). To me it appears just confusing to also call this a probability.
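
That frequentist reading is easy to check by brute-force simulation (a sketch assuming Python with NumPy; the population size is an arbitrary choice):

Code:
import numpy as np

rng = np.random.default_rng(1)
N = 10_000_000  # number of randomly selected people (illustrative)

has_disease = rng.random(N) < 0.0001             # prevalence 1 in 10,000
tests_positive = np.where(has_disease,
                          rng.random(N) < 0.99,   # sensitivity 99%
                          rng.random(N) < 0.01)   # false-positive rate 1%

frac = has_disease[tests_positive].mean()  # fraction of positives who are actually ill
print(f"Fraction of positive tests that are true positives: {frac:.4f}")  # about 0.01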
 
  • Like
Likes vanhees71 and Mentz114
  • #47
Dr.AbeNikIanEdL said:
What is particularly Bayesian about this? As far as I understand, no one debates this result. A frequentist would just say that this means that, if the doctor performs this test on ##N## randomly selected people, the fraction of people who actually have the disease among the ones he diagnoses with the disease will only be about 1% (for large enough ##N##).

The issue is that the frequentists' criterion for significance is like substituting the accuracy of the test for the likelihood of the disease.

As I said, the frequentists are computing: what is the likelihood of getting result ##R## if hypothesis ##H## is true, i.e. ##P(R|H)##? When ##H## is the null hypothesis, they want to say that their result is significant if ##P(R|H)## is tiny. But what you really care about is the likelihood that hypothesis ##H## is true given result ##R##, i.e. ##P(H|R)##. Those are completely different numbers.
 
  • Like
Likes DanielMB and PeterDonis
  • #48
stevendaryl said:
The issue is that the frequentists' criterion for significance is like substituting the accuracy of the test for the likelihood of the disease.

As I said, the frequentists are computing: what is the likelihood of getting result ##R## if hypothesis ##H## is true, i.e. ##P(R|H)##? When ##H## is the null hypothesis, they want to say that their result is significant if ##P(R|H)## is tiny. But what you really care about is the likelihood that hypothesis ##H## is true given result ##R##, i.e. ##P(H|R)##. Those are completely different numbers.
Yes, a significance test only quantifies the support that the data gives to the null hypothesis. There are other techniques. I used receiver operating characteristics to evaluate test results (when I practised medical statistics), as did many others.

See https://en.wikipedia.org/wiki/Receiver_operating_characteristic
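
For reference, a minimal sketch of how an ROC curve is built from test scores (assuming Python with NumPy; the scores and labels are invented for illustration):

Code:
import numpy as np

# Invented data: continuous test scores and true disease labels (1 = ill)
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5])
labels = np.array([0,   0,   1,    1,   1,   0,   1,   0])

# Sweep a decision threshold and record (false positive rate, true positive rate)
for thr in sorted(set(scores), reverse=True):
    predicted = scores >= thr
    tpr = (predicted & (labels == 1)).sum() / (labels == 1).sum()
    fpr = (predicted & (labels == 0)).sum() / (labels == 0).sum()
    print(f"threshold={thr:.2f}: FPR={fpr:.2f}, TPR={tpr:.2f}")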
 
  • #49
UsableThought said:
this statement seems to imply that frequentism inherently can't or doesn't make its assumptions explicit. This can hardly be the case...

UsableThought said:
...it remains problematic that Bayesian statistics is sensitive to subjective input. The undeniable advantage of the classical statistical procedures is that they do not need any such input

Do you see the contradiction between these two statements?
 