"Expected Result" Bias in Research

Summary
The discussion centers on the challenges of bias in research, particularly in Civil Engineering experiments. A key concern is how to handle anomalous data readings that contradict prior knowledge or seem physically impossible. Researchers face the dilemma of whether to discard such results or investigate further, which can introduce bias in the interpretation of data. The conversation highlights the importance of statistical methods, such as running multiple experiments to establish averages and reduce reliance on single data points. The application of Bayes' theorem is discussed as a framework for understanding how prior beliefs influence the interpretation of new data. It raises questions about the potential for systematic biases in historical research, particularly in particle physics, where previous findings may have shaped current hypotheses. The importance of collaboration and peer discussion is emphasized, suggesting that researchers should not work in isolation but rather engage with others to validate their analyses and mitigate bias. Overall, the dialogue underscores the complexity of balancing reasonable skepticism with the need for rigorous scientific inquiry.
person123
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple of years now. One thing I've thought about repeatedly is bias toward the desired result. Of course, sometimes people might be biased because of an ulterior motive or sloppy work, but the bias I'm interested in is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
 
person123 said:
Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense.
Then you need to run multiple repeats of the experiment to get multiple readings and determine an average.
Start playing with statistics (NOT my forte), but others here will hopefully guide you.

The point is, you cannot rely on just a small number of data-points
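
As a rough illustration of this advice (the readings and the 3.5 cutoff below are made up), a minimal Python sketch: take repeated readings, flag any that sit far from the median by a rule fixed in advance, and average the rest. The key point is that the rejection rule is decided before you know whether the answer "comes out right."

```python
import statistics

def flag_outliers(readings, threshold=3.5):
    """Flag readings far from the median, using the modified z-score
    (based on the median absolute deviation), which is robust to a
    single bad gauge value."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # readings are essentially identical
    return [x for x in readings if 0.6745 * abs(x - med) / mad > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 42.7]   # one obviously suspect value
suspects = flag_outliers(readings)
kept = [x for x in readings if x not in suspects]
print("suspect readings:", suspects)
print("mean of all readings:", statistics.mean(readings))
print("mean of kept readings:", statistics.mean(kept))
```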
 
This is a real effect - no skulduggery required. You work as hard as you can, find all the sources of error you can think of, and if you get the "right" answer you pat yourself on the back for a job well done. If you get the "wrong" answer, you go back and keep looking for errors.

The Particle Data Book has historical data on various measurements. One is below. You can pretty much see that (whatever else is going on) the errors are not Gaussian and random.

[Plot: historical values of one measured quantity from the Particle Data Book, showing how successive published measurements shifted over time]

Now, how do we get around this? One way is by blinding. For example, in a counting experiment, one might not count in the signal region until counts in nearby control regions are demonstrated to be understood. Or one might introduce an offset unknown to the bulk of the collaboration that is removed only at the very end; again this is only done when various control checks pass.
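
A toy sketch of the offset idea (the class and numbers here are purely illustrative, not any collaboration's actual code): analyzers only ever see values shifted by a hidden offset, which is removed once the analysis is frozen.

```python
import random

class OffsetBlinding:
    """Illustrative offset blinding: data shown to analyzers is shifted by a
    hidden random offset, so cuts and fits cannot be steered toward the
    'expected' answer. The offset is removed only at the very end."""

    def __init__(self, seed):
        self._offset = random.Random(seed).uniform(-5.0, 5.0)  # unknown to analyzers

    def blind(self, value):
        return value + self._offset

    def unblind(self, blinded_value):
        return blinded_value - self._offset

blinding = OffsetBlinding(seed=20210112)
blinded = blinding.blind(93.2)        # analyzers work only with this number
# ... cuts, fits and control checks are finalized on the blinded value ...
result = blinding.unblind(blinded)    # revealed only after the analysis is frozen
print(result)
```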

Is this perfect? Absolutely not. Science is done by people.
 
person123 said:
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple of years now. One thing I've thought about repeatedly is bias toward the desired result. Of course, sometimes people might be biased because of an ulterior motive or sloppy work, but the bias I'm interested in is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?

This all makes sense if you approach the problem from the perspective of Bayes' theorem. According to Bayes' theorem, the confidence you have in a hypothesis ##H## given a set of observations ##O##, ##p(H|O)##, is the product of two quantities: the strength of the evidence, ##p(O|H)/p(O)##, and the prior probability of the hypothesis being correct, ##p(H)##:
$$p(H|O) = \frac{p(O|H)}{p(O)}p(H)$$

For gauge readings that make sense (i.e. those which confirm your prior beliefs), you don't need very strong evidence in favor of those hypotheses for you to believe them. However, if a reading goes against your prior beliefs and supports a hypothesis with a low prior probability, you would want much stronger evidence (e.g. more repeats of the experiment and independent sources of evidence) to convince you that the alternative hypothesis with a low prior is correct.
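
A quick numerical illustration of that point (all numbers invented): with the same strength of evidence from a single reading, a hypothesis consistent with prior work ends up near certainty, while a surprising one barely moves; repeating the measurement is what eventually overcomes the low prior.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' theorem for a hypothesis H against its complement:
    p(H|O) = p(O|H) p(H) / [p(O|H) p(H) + p(O|~H) p(~H)]."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1.0 - prior)
    return p_obs_given_h * prior / p_obs

# A single reading that is 5x more likely if H is true than if it is not.
p_if_true, p_if_false = 0.5, 0.1

# Hypothesis consistent with prior work: one reading is already convincing.
print(posterior(0.90, p_if_true, p_if_false))        # ~0.98

# Surprising hypothesis with a low prior: the same reading barely moves it.
print(posterior(0.01, p_if_true, p_if_false))        # ~0.05

# Three independent repeats multiply the evidence (5**3 = 125x) and start
# to overcome even the low prior.
print(posterior(0.01, p_if_true**3, p_if_false**3))  # ~0.56
```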
 
My question then becomes: can we prevent situations like the particle data history above, where (at least as I understand it) researchers consistently got similar erroneous results because they used previous research as their prior while applying Bayes' theorem correctly? Or does using our priors necessarily lead to these issues of bias?

(Also, I know almost nothing about particle physics, but could the shift have come from systematic errors in the measurement techniques of the time? For example, is it possible that in the 70s and 80s the techniques used consistently overestimated the mass?)
 
person123 said:
How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
Don't do your research in isolation? Your question reads as if you're the only one analyzing the data. Do what davenn suggests: run multiple experiments, ask yourself where your errors could possibly be, and then ask your peers whether they think your analysis is plausible given your setup. This should be done on pretty much EVERY experiment, IMO.

A lot of papers acknowledge people the authors had conversations with, and guarding against bias is one of the reasons why. You HAVE to talk to other scientists! Don't do everything in isolation! Don't be scared to talk to others in your field about your research!
 
