"Expected Result" Bias in Research

  • #1
person123
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple years now. One thing which I've thought about repeatedly is a bias toward the desired result. Of course sometimes people might be biased for some ulterior motive or sloppy work, but the bias I'm interested is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
 

Answers and Replies

  • #2
davenn
Science Advisor
Gold Member
Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense.


Then you need to do multiple repeats of the experiment to get multiple readings and determine an average.
Start playing with statistics (NOT my forte), but others here will hopefully guide you.

The point is, you cannot rely on just a small number of data points.
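As a minimal sketch of that idea: with repeated readings you can flag a grossly inconsistent one using a robust statistic rather than gut feeling. The function below (a hypothetical helper, not from any post in this thread) uses the modified z-score based on the median and the median absolute deviation, which a single faulty reading cannot distort the way the mean and standard deviation can:

```python
import statistics

def reject_outliers(readings, cutoff=3.5):
    """Average repeated readings after flagging gross outliers.

    A reading is flagged when its modified z-score,
    0.6745 * |r - median| / MAD, exceeds the cutoff (3.5 is a
    commonly used threshold)."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:
        # All readings essentially identical; nothing to reject.
        return statistics.mean(readings), []
    kept = [r for r in readings if 0.6745 * abs(r - med) / mad <= cutoff]
    outliers = [r for r in readings if 0.6745 * abs(r - med) / mad > cutoff]
    return statistics.mean(kept), outliers

avg, flagged = reject_outliers([10.1, 9.9, 10.2, 10.0, 57.3])
# flagged -> [57.3]; avg is the mean of the four consistent readings
```

The point of writing the rule down in advance is exactly the OP's concern: the rejection criterion is fixed before you see whether the "outlier" helps or hurts your hypothesis.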
 
  • #3
Vanadium 50
Staff Emeritus
Science Advisor
Education Advisor
This is a real effect - no skulduggery required. You work as hard as you can, find all the error sources you can think of, and if you get the "right" answer you pat yourself on the back for a job well done. If you get the "wrong" answer, you go back and keep looking for errors.

The Particle Data Book has historical data on various measurements. One is below. You can pretty much see that (whatever else is going on) the errors are not Gaussian and random.

[Attached plot: historical values of a particle property measurement over time, from the Particle Data Book.]

Now, how do we get around this? One way is by blinding. For example, in a counting experiment, one might not count in the signal region until counts in nearby control regions are demonstrated to be understood. Or one might introduce an offset unknown to the bulk of the collaboration that is removed only at the very end; again this is only done when various control checks pass.
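The hidden-offset flavor of blinding described above can be sketched in a few lines. This is an illustrative toy, not any collaboration's actual procedure: one designated person holds the seed, everyone else analyzes shifted values, and the offset is removed only after the control checks pass.

```python
import random

def make_blinder(seed):
    """Create blind/unblind functions sharing a hidden additive offset.

    The seed (and hence the offset) is known only to one person;
    the rest of the collaboration works with blinded values."""
    rng = random.Random(seed)
    offset = rng.uniform(-5.0, 5.0)
    def blind(x):
        return x + offset
    def unblind(x):
        return x - offset
    return blind, unblind

blind, unblind = make_blinder(seed=20210112)
measurement = 3.14
blinded = blind(measurement)    # what the analysts see and fit
recovered = unblind(blinded)    # done once, at the very end
```

Because the offset is the same for every value, relative quantities (resolutions, background shapes, control-region counts) are unaffected, so all the sanity checks can be completed before anyone learns the true central value.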

Is this perfect? Absolutely not. Science is done by people.
 
  • #4
Ygggdrasil
Science Advisor
Insights Author
Gold Member
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple years now. One thing which I've thought about repeatedly is a bias toward the desired result. Of course sometimes people might be biased for some ulterior motive or sloppy work, but the bias I'm interested is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?

This all makes sense by approaching the problem from the perspective of Bayes' theorem. According to Bayes' theorem, the confidence you have in a hypothesis ##H## given a set of observations ##O##, ##p(H|O)##, is the product of two quantities: the strength of the evidence, ##p(O|H)/p(O)##, and the prior probability of the hypothesis being correct, ##p(H)##:
$$p(H|O) = \frac{p(O|H)}{p(O)}p(H)$$

For gauge readings that make sense (i.e. those which confirm your prior beliefs), you don't need very strong evidence in favor of those hypotheses for you to believe them. However, if a reading goes against your prior beliefs and supports a hypothesis with a low prior probability, you would want much stronger evidence (e.g. more repeats of the experiment and independent sources of evidence) to convince you that the alternative hypothesis with a low prior is correct.
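To make that concrete, here is a small numerical sketch (the 10:1 likelihood ratio and the prior values are made up for illustration): the same piece of evidence moves an even-odds prior most of the way to certainty, but barely dents a skeptical prior.

```python
def posterior(prior, p_obs_given_h, p_obs_given_alt):
    """Bayes' theorem for a hypothesis H versus its alternative:
    p(H|O) = p(O|H) p(H) / p(O), where
    p(O) = p(O|H) p(H) + p(O|~H) (1 - p(H))."""
    p_obs = p_obs_given_h * prior + p_obs_given_alt * (1 - prior)
    return p_obs_given_h * prior / p_obs

# One reading with a 10:1 likelihood ratio in favor of H:
print(posterior(0.50, 0.10, 0.01))  # even-odds prior -> ~0.91
print(posterior(0.01, 0.10, 0.01))  # skeptical 1% prior -> ~0.09
```

This is exactly why a surprising gauge reading reasonably prompts more repeats or an independent measurement before you believe it: when the prior is low, one observation is simply not enough evidence.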
 
  • #5
person123
My question then becomes: can we prevent situations like the particle data example above, where (at least as I understand it) researchers consistently got similar erroneous results because they used previous research as their prior, even while applying Bayes' theorem correctly? Or does using our priors necessarily lead to these issues of bias?

(Also, I know almost nothing about particle physics, but would it be possible that there were systematic errors in the measurement techniques of the time which caused the shift? Like, is it possible that in the 70s and 80s the techniques used consistently overestimated the mass?)
 
  • #6
romsofia
How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
Don't do your research in isolation. Your question assumes you're the only one analyzing the data. Do what davenn suggests: run multiple experiments, ask yourself where your errors could possibly be, and then ask your peers whether they think your analysis is plausible given your setup. This should be done on pretty much EVERY experiment, IMO.

A lot of papers acknowledge people the authors had conversations with, and checking for bias is one of the reasons. You HAVE to talk to other scientists! Don't do everything in isolation! Don't be scared to talk to others in your field about your research!
 
