Trollfaz said: What are the experiments that disprove the idea that consciousness causes wavefunction collapse?
Demystifier said: There are no such experiments (despite the fact that a paper coauthored by my brother, who is a psychologist by education, claims the opposite).
Trollfaz said: Is there any proof for the "consciousness causes collapse" idea?
atyy said: Does consciousness cause wave function collapse in Bohmian Mechanics?

Demystifier said: No, why do you ask?
Trollfaz said: But didn't the scientists conduct the double slit experiment without anyone recording the results, but with the detector on?

Demystifier said: Yes, but the scientists didn't check whether the detector detected anything when nobody was looking at it.
vanhees71 said: Hm, but you can look later at the photoplate or (nowadays) the digitally stored measurement data and check what the detector has registered. The investigated system only "cares" about what it's really interacting with, i.e., the detector, and not with some "consciousness" (whatever that might be) looking at the result (maybe 100 years later)!

Demystifier said: Yes, but if you look later, you only know what is there later. You cannot know what was there before. You can only assume that it was there before, but you cannot prove that assumption by the scientific method. You can "prove" it by using some philosophy, but philosophy is not science, right?
Demystifier said: No, why do you ask?
atyy said: In Bohmian Mechanics, the wave function of the universe does not collapse. Yet Bohmian Mechanics says that predictions obtained with collapse are correct. Since objectively the wave function of the universe does not collapse, I thought wave function collapse in Bohmian Mechanics is subjective (i.e. requires consciousness).

Demystifier said: This is very much like saying that the validity of Bayes' formula for conditional probability requires consciousness. Would you say that Bayes' formula requires consciousness?
vanhees71 said: With an argument involving Bayes and his (purely mathematical) theorem you can nowadays argue for anything you like, including a huge pile of bovine excrement. SCNR!

Demystifier said: How does that work? I would also like to know that general, powerful technique of argumentation based on Bayes.
Demystifier said: This is very much like saying that the validity of Bayes' formula for conditional probability requires consciousness. Would you say that Bayes' formula requires consciousness?

vanhees71 said: Well, you can, e.g., create a whole new philosophy "of it all" called "quantum Bayesianism".
UsableThought said: Surely no need to "create" since the name at least is already in use? E.g.
https://plato.stanford.edu/entries/quantum-bayesian/
https://arxiv.org/pdf/quant-ph/0608190.pdf
http://www.physics.usyd.edu.au/~ericc/SQF2014/slides/Ruediger%20Schack.pdf
etc.
I know about this only because it is one of many interpretations discussed in Michael Raymer's July 2017 book from Oxford U. Press, Quantum Physics: What Everyone Needs to Know.
And certainly @atyy is correct when he says "If interpreted in a subjective Bayesian sense, then Bayes's theorem does require consciousness"; here's a syllogism from the last link above, a slide show put together by Schack:
- A quantum state determines probabilities through the Born rule.
- Probabilities are personal judgements of the agent who assigns them.
- HENCE: A quantum state is a personal judgement of the agent who assigns it.

Sounds wise. How does the personal judgement of the agent affect a future interaction or measurement of the state? Is there still a state if there is no agent?
vanhees71 said: Ok, it's a matter of opinion, but I consider this subjective interpretation of probabilities gibberish. Nobody following this new idea (why it is attributed to poor Bayes is not clear to me either, by the way) has ever been able to explain to me what it means for real-world measurements, which of course use the frequentist interpretation of probabilities, and the frequentist interpretation just works. So why do I need a new, unsharp, subjective redefinition of the statistical meaning of probability theory?
stevendaryl said: I would say that Bayesian probability is probability done right, but luckily for frequentists, the difference between a correct Bayesian analysis and an incorrect frequentist analysis disappears in the limit of many trials.
Suppose I flip a coin once and I get heads. So the relative frequency for heads is 1. Does that mean that the probability is 1? Of course not! I don't have enough data to say that. So I flip the coin 10 times, and I get 4 heads and 6 tails. Does that mean that the probability of heads is 40%? No, those 10 coin flips could have been a fluke. So I flip the coin 100 times or 1000 times. How many flips does it take before I know that the pattern isn't a fluke? The answer is: there is never a time that I know for certain that it isn't a fluke.
Bayesian reasoning is reasoning in the presence of uncertainty, when there is a limited amount of data. But we're ALWAYS in that situation.
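stevendaryl's coin example can be made concrete with a standard Bayesian update. The sketch below is an editorial illustration, not code from the thread; it assumes a uniform prior, giving the textbook Beta-Binomial model, and shows that the same relative frequency of heads carries very different uncertainty depending on how many flips back it up:

```python
# Bayesian update for a coin with unknown heads-probability p.
# Assumed model (illustration only): uniform prior Beta(1, 1), so after
# observing `heads` and `tails` the posterior is Beta(1 + heads, 1 + tails).

def posterior_mean_sd(heads: int, tails: int) -> tuple[float, float]:
    a, b = 1 + heads, 1 + tails
    mean = a / (a + b)                          # posterior mean of p
    var = a * b / ((a + b) ** 2 * (a + b + 1))  # posterior variance of p
    return mean, var ** 0.5

# One flip, one head: relative frequency is 1, but p is still wildly uncertain.
print(posterior_mean_sd(1, 0))      # mean 2/3, sd ~0.24
# 4 heads / 6 tails and 400 heads / 600 tails: same frequency of 0.4,
# but the larger sample pins p down far more tightly.
print(posterior_mean_sd(4, 6))      # mean ~0.42, sd ~0.14
print(posterior_mean_sd(400, 600))  # mean ~0.40, sd ~0.015
```

The point of the sketch: the uncertainty never reaches zero for finite data, it only shrinks.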
vanhees71 said: Well, you should do the analysis in a complete way and give the uncertainties (e.g., by giving the standard deviations of your result). The point is that, as you admit, to get the probabilities from experiment you have to repeat the experiment often enough to "collect enough statistics". That's the frequentist approach to statistics, which is well founded in probability theory in terms of the law of large numbers.
vanhees71 said: Hm, how do you then explain the amazing accuracy with which many of the probabilistic predictions of QT are confirmed by experiments, using the frequentist interpretation of probability?
Or, put another way: how do you, as a "Bayesian", interpret probabilities, and how can you, if there's no objective way to empirically measure probabilities with higher and higher precision by "collecting statistics", verify or falsify the probabilistic predictions of QT?
stevendaryl said: I already said how: the difference between the (incorrect) frequentist analysis and the (correct) Bayesian analysis goes to zero in the limit as the number of trials becomes large.

vanhees71 said: How then can the "frequentist analysis" be wrong? It cannot be wrong, because in the hard empirical sciences we consider only sufficiently often repeatable observations as clear evidence for the correctness of a probabilistic description. "Unrepeatable one-time experiments" are useless for science.
stevendaryl said: For a Bayesian, at any given time, there are many alternative hypotheses that could all explain the given data. Gathering more data will tend to make some hypotheses more likely, and other hypotheses less likely. The point of gathering more data is to decrease your uncertainty about the various hypotheses. But unlike for frequentists, nothing is ever verified, and nothing is ever falsified. That isn't a problem in principle. In practice, it's cumbersome to keep around hypotheses that have negligible likelihood. So I think there is a sense in which Popperian falsification is a heuristic tool to make science more tractable.

vanhees71 said: Then Bayesianism is simply irrelevant for the natural sciences.
vanhees71 said: I'm again too stupid to follow this argument. I'd describe the coin-throwing probability experiment as follows. I assume that the coin is stable and that there's a probability ##p## for showing heads (then necessarily the probability for showing tails is ##q = 1 - p##).
[Stuff deleted]
That's more a plausibility argument than a real strict proof, but it can be made rigorous, and it shows that the frequentist interpretation is valid. I don't thus see any need to introduce another interpretation of probabilities than the frequentist one for any practical purpose.
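vanhees71's appeal to the law of large numbers is easy to demonstrate by simulation. The following is an editorial sketch (the true ##p## and the seed are arbitrary choices, not values from the thread): the relative frequency of heads approaches the assumed true probability as the number of trials grows.

```python
import random

random.seed(0)   # arbitrary seed, for reproducibility
p_true = 0.5     # assumed "true" heads probability

def relative_frequency(n_flips: int) -> float:
    """Simulate n_flips Bernoulli(p_true) trials and return the heads fraction."""
    heads = sum(random.random() < p_true for _ in range(n_flips))
    return heads / n_flips

for n in (10, 1_000, 100_000):
    f = relative_frequency(n)
    print(f"{n:>7} flips: frequency {f:.4f}, error {abs(f - p_true):.4f}")
```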
vanhees71 said: What does this example have to do with what we are discussing?
UsableThought said: I know about this only because it is one of many interpretations discussed in Michael Raymer's July 2017 book from Oxford U. Press, Quantum Physics: What Everyone Needs to Know.

Good book, I have a copy of it. I'll be recommending that people read it, for a lay audience, in addition to 'Sneaking a Look at God's Cards'.
stevendaryl said: That's backwards from what you really want. You're starting with a probability ##p##, and then you're calculating the likelihood that you get ##M## heads out of ##N## flips. What you want is to calculate ##p## from ##M## and ##N##, because ##p## is the unknown. There are two different uncertainties involved in this thought experiment:
- The uncertainty in ##p##, given ##\frac{M}{N}##.
- The uncertainty in ##\frac{M}{N}##, given ##p##.
What you want is the first, but what you calculate is the second. Of course, in the limit ##N \rightarrow \infty##, if the second goes to zero, then so does the first. But for finite ##N## (which is all we ever have), we don't have any way to calculate the relationship between the two without using subjective priors. If ##N## is finite (which it always is), it's just incorrect for the frequentist to say that there is an uncertainty of 1% that the coin's true probability is ##\frac{1}{2} \pm \epsilon##.

FallenApple said: This is an interesting discussion. I can see why the frequentist method might have some pitfalls. It's essentially trying to do a proof by contradiction based on assumed values. So we can see why statements of probability get to the heart of the matter much more directly. But isn't one of the pitfalls of Bayesian logic that it depends on good priors? If our subjective beliefs about the prior probability are wrong (not merely inaccurate), then the posterior would be further from the truth than a frequentist analysis.
vanhees71 said: That's more a plausibility argument than a real strict proof, but it can be made rigorous, and it shows that the frequentist interpretation is valid. I don't thus see any need to introduce another interpretation of probabilities than the frequentist one for any practical purpose. ... Bayesianism is simply irrelevant for the natural sciences.

stevendaryl said: If we're trying to determine whether a coin is biased, then what we want to know is the likelihood that the coin is biased (or, to make it definite, the likelihood that its bias is greater than ##\epsilon## for some appropriate ##\epsilon##). The frequentist uncertainty doesn't tell us this.
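The quantity stevendaryl asks for, the posterior probability that the bias exceeds some ##\epsilon##, can be sketched numerically. Under an assumed flat prior on ##p## (an illustrative choice on my part, not something the thread fixes), the posterior is just the normalized likelihood, and a crude grid integration suffices:

```python
def prob_bias_exceeds(heads: int, tails: int, eps: float, grid: int = 100_000) -> float:
    """P(|p - 1/2| > eps | data) under a flat prior on p, by grid integration."""
    total = biased = 0.0
    for i in range(1, grid):
        p = i / grid
        w = p ** heads * (1 - p) ** tails   # unnormalized posterior density
        total += w
        if abs(p - 0.5) > eps:
            biased += w
    return biased / total

# 60 heads in 100 flips: the posterior probability of a bias larger
# than 0.05 is already high (roughly 0.8).
print(prob_bias_exceeds(60, 40, 0.05))
```

The grid sum is a stand-in for the Beta-distribution CDF; a library routine would do the same job.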
FallenApple said: But isn't one of the pitfalls of Bayesian logic that it depends on good priors? If our subjective beliefs about the prior probability are wrong (not merely inaccurate), then the posterior would be further from the truth than a frequentist analysis.
PeterDonis said: That's not really a "pitfall" of Bayesian logic, it's a manifestation of the way that Bayesian logic forces you to make your prior assumptions explicit so you can reason about them.
Also, the more data you collect, the smaller the effect of your priors.
How so?
FallenApple said: If I just throw in some distribution that was heavily based on incorrect past analysis, then wouldn't the posterior estimates be worse than a standalone analysis?
FallenApple said: If the priors were good in the first place, then we would not need to rely on current data as much.
PeterDonis said: That's not really a "pitfall" of Bayesian logic, it's a manifestation of the way that Bayesian logic forces you to make your prior assumptions explicit so you can reason about them.
PeterDonis said: If you are saying that frequentist analysis somehow magically avoids the problem of having bad starting assumptions, I don't see how that's the case. If you have bad starting assumptions, you're going to have problems no matter what technique you use. But Bayesian analysis, as I said, forces you to at least make those bad starting assumptions explicit.
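PeterDonis's earlier remark that collecting more data shrinks the effect of the priors can be illustrated directly. In this sketch (the Beta priors are arbitrary illustrations, not values anyone in the thread proposed), two agents with sharply conflicting priors about the same coin converge once the data dominate:

```python
def posterior_mean(prior_a: float, prior_b: float, heads: int, tails: int) -> float:
    """Posterior mean of p under a Beta(prior_a, prior_b) prior."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

optimist = (8.0, 2.0)   # prior belief: coin favours heads
skeptic = (2.0, 8.0)    # prior belief: coin favours tails

for heads, tails in [(5, 5), (50, 50), (5000, 5000)]:
    m_opt = posterior_mean(*optimist, heads, tails)
    m_skp = posterior_mean(*skeptic, heads, tails)
    print(f"n={heads + tails:>5}: {m_opt:.4f} vs {m_skp:.4f}, gap {m_opt - m_skp:.4f}")
```

After 10 flips the two posterior means still differ by 0.30; after 10,000 flips the gap is under 0.001.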
stevendaryl said: But the Bayesian analysis would tell us this:
- Let ##p(D)## be the a priori probability that the patient has the disease (before any tests are performed).
- Let ##p(\neg D) = 1 - p(D)## be the a priori probability that he doesn't have the disease.
- Let ##p(P|D)## be the probability of testing positive, given that the patient has the disease (99% in our example).
- Let ##p(P|\neg D)## be the probability of testing positive, given that the patient does not have the disease (1% in our example).
- Then the probability of the patient having the disease, given that he tests positive, is ##p(D|P) = \frac{p(P|D)\,p(D)}{p(P|D)\,p(D) + p(P|\neg D)\,p(\neg D)}##.

If ##p(D) = 0.0001## (1 in 10,000), then this gives us ##p(D|P) \approx 0.98\%##.
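Plugging stevendaryl's numbers (99% sensitivity, 1% false-positive rate, 1-in-10,000 base rate) into Bayes' theorem confirms the result:

```python
p_D = 0.0001             # a priori probability of having the disease (1 in 10,000)
p_pos_given_D = 0.99     # P(positive | disease): the test's sensitivity
p_pos_given_notD = 0.01  # P(positive | no disease): false-positive rate

# Bayes' theorem: P(D | positive) = P(positive | D) P(D) / P(positive)
p_pos = p_pos_given_D * p_D + p_pos_given_notD * (1 - p_D)
p_D_given_pos = p_pos_given_D * p_D / p_pos
print(f"P(disease | positive test) = {p_D_given_pos:.4%}")  # about 0.98%
```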
stevendaryl said: In other words, the probability that he doesn't have the disease is 99%.
Dr.AbeNikIanEdL said: What is particularly Bayesian about this? As far as I understand, no one disputes this result. A frequentist would just say that it means that, if the doctor performs this test on ##N## randomly selected people, the fraction of people who actually have the disease among the ones he diagnoses with the disease will only be about 1% (for large enough ##N##).
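Dr.AbeNikIanEdL's frequentist restatement can be checked by brute force. The sketch below (the value of ##N## and the seed are arbitrary editorial choices) tests a million simulated people and looks at the fraction of genuinely diseased patients among everyone who tested positive:

```python
import random

random.seed(1)   # arbitrary seed, for reproducibility
N = 1_000_000
true_pos = false_pos = 0
for _ in range(N):
    if random.random() < 0.0001:       # person actually has the disease
        if random.random() < 0.99:     # sensitivity: 99% of them test positive
            true_pos += 1
    elif random.random() < 0.01:       # false-positive rate for the healthy
        false_pos += 1

fraction = true_pos / (true_pos + false_pos)
print(f"diseased among positives: {fraction:.4f}")  # fluctuates around 0.0098
```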
stevendaryl said: The issue is that the frequentists' criterion for significance is like substituting the accuracy of the test as the criterion for the likelihood of the disease. As I said, the frequentists are computing: what is the likelihood of getting result R if hypothesis H is true, i.e. ##P(R|H)##? When H is the null hypothesis, they want to say that their result is significant if ##P(R|H)## is tiny. But what you really care about is the likelihood that hypothesis H is true given result R, ##P(H|R)##. Those are completely different numbers.

Yes, a significance test only quantifies the support that the data gives to the null hypothesis. There are other techniques. I used receiver operating characteristics to evaluate test results (when I practised medical statistics), as did many others.
UsableThought said: this statement seems to imply that frequentism inherently can't or doesn't make its assumptions explicit. This can hardly be the case...
UsableThought said: ...it remains problematic that Bayesian statistics is sensitive to subjective input. The undeniable advantage of the classical statistical procedures is that they do not need any such input