"Expected Result" Bias in Research


Discussion Overview

The discussion revolves around the concept of bias in research, particularly focusing on the tendency to favor expected results over potentially erroneous data. Participants explore the implications of this bias in experimental settings, especially in the context of Civil Engineering and particle physics. The conversation touches on statistical methods, the role of prior beliefs in hypothesis testing, and the importance of collaboration in research.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant raises concerns about the bias towards desired results in research, questioning how to discern when it is reasonable to discard anomalous data versus when it reflects a bias towards fitting data to expected outcomes.
  • Another participant suggests that relying on a small number of data points is insufficient and emphasizes the need for multiple repetitions of experiments to establish reliable averages.
  • A different viewpoint highlights that biases can occur without malicious intent, noting that researchers may unconsciously seek confirmation of their hypotheses and may overlook errors when results align with expectations.
  • Bayes' theorem is introduced as a framework for understanding how prior beliefs influence the interpretation of experimental results, with one participant explaining that stronger evidence is required to accept hypotheses that contradict prior beliefs.
  • A participant questions whether reliance on prior research can lead to systematic biases, particularly in the context of historical particle physics data, and wonders if measurement techniques of the past may have contributed to consistent errors.
  • Another participant stresses the importance of collaboration and peer discussion in research, arguing against conducting research in isolation and advocating for seeking feedback from others in the field.

Areas of Agreement / Disagreement

Participants express a range of views on the nature and implications of bias in research, indicating that there is no consensus on how to best address or understand these biases. Some agree on the necessity of collaboration, while others focus on the statistical and theoretical aspects of bias.

Contextual Notes

Limitations in the discussion include the potential for missing assumptions regarding the nature of biases, the dependence on definitions of what constitutes a "reasonable" result, and unresolved questions about the historical context of measurement techniques.

Who May Find This Useful

This discussion may be of interest to researchers in Civil Engineering, particle physics, and those studying research methodologies, particularly in relation to biases in data interpretation and the role of collaboration in scientific inquiry.

person123
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple of years now. One thing which I've thought about repeatedly is a bias toward the desired result. Of course sometimes people might be biased for some ulterior motive or sloppy work, but the bias I'm interested in is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
 
person123 said:
Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense.
Then you need to do multiple repeats of the experiment to get multiple readings and determine an average.
Start playing with statistics (NOT my forte), but others here will hopefully guide you.

The point is, you cannot rely on just a small number of data points.
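One standard way to make the "repeat and average" advice concrete is an outlier test such as Chauvenet's criterion. This is a sketch, not the only valid rule, and the gauge readings below are invented for illustration:

```python
import statistics
from math import erf, sqrt

def chauvenet_outliers(readings):
    """Flag readings that Chauvenet's criterion deems improbably far from
    the sample mean, given the sample size (one common rule of thumb)."""
    n = len(readings)
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    flagged = []
    for x in readings:
        z = abs(x - mean) / stdev
        # two-sided tail probability of a deviation at least this large
        p = 1.0 - erf(z / sqrt(2))
        # flag if the expected number of such deviations in n trials < 1/2
        if n * p < 0.5:
            flagged.append(x)
    return flagged

# Hypothetical repeated gauge readings, one of which is wildly off:
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0]
print(chauvenet_outliers(readings))  # → [25.0]
```

The point of using an explicit criterion, rather than eyeballing, is that the decision to discard a point is made by a rule fixed in advance, which is exactly what guards against the "it looked wrong so I dropped it" bias the thread is about.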
 
This is a real effect - no skulduggery required. You work as hard as you can, find all the error sources you can think of, and if you get the "right" answer you pat yourself on the back for a job well done. If you get the "wrong" answer, you go back and keep looking for errors.

The Particle Data Book has historical data on various measurements. One is below. You can pretty much see that (whatever else is going on) the errors are not Gaussian and random.

[Attached figure: Particle Data Group plot of one quantity's measured value over time, with successive measurements drifting together rather than scattering randomly about the final value]

Now, how do we get around this? One way is by blinding. For example, in a counting experiment, one might not count in the signal region until counts in nearby control regions are demonstrated to be understood. Or one might introduce an offset unknown to the bulk of the collaboration that is removed only at the very end; again this is only done when various control checks pass.
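The hidden-offset style of blinding described above can be sketched as follows. The class, seed, and numbers are invented for illustration; real collaborations handle the secret offset with far more care:

```python
import random

class BlindedAnalysis:
    """Sketch of offset blinding: analysts only ever see data shifted by a
    secret offset, which is removed once control checks pass."""

    def __init__(self, seed):
        # The seed (and hence the offset) is held by one person or script,
        # not by the analysts doing the fits.
        rng = random.Random(seed)
        self._offset = rng.uniform(-1.0, 1.0)

    def blind(self, measurements):
        """Return the shifted values that the bulk of the team works with."""
        return [m + self._offset for m in measurements]

    def unblind(self, blinded_result, checks_passed):
        """Remove the offset, but only at the very end."""
        if not checks_passed:
            raise RuntimeError("control checks failed; stay blinded")
        return blinded_result - self._offset

analysis = BlindedAnalysis(seed=42)
shifted = analysis.blind([5.01, 4.98, 5.03])
blinded_mean = sum(shifted) / len(shifted)
true_mean = analysis.unblind(blinded_mean, checks_passed=True)
```

Because analysts never see the true central value while debugging, they cannot (even unconsciously) stop hunting for errors the moment the answer looks "right".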

Is this perfect? Absolutely not. Science is done by people.
 
person123 said:
Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

This all makes sense if you approach the problem from the perspective of Bayes' theorem. According to Bayes' theorem, the confidence you have in a hypothesis ##H## given a set of observations ##O##, ##p(H|O)##, is the product of two quantities: the strength of the evidence, ##p(O|H)/p(O)##, and the prior probability of the hypothesis being correct, ##p(H)##:
$$p(H|O) = \frac{p(O|H)}{p(O)}p(H)$$

For gauge readings that make sense (i.e. those which confirm your prior beliefs), you don't need very strong evidence in favor of those hypotheses for you to believe them. However, if a reading goes against your prior beliefs and supports a hypothesis with a low prior probability, you would want much stronger evidence (e.g. more repeats of the experiment and independent sources of evidence) to convince you that the alternative hypothesis with a low prior is correct.
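A toy numerical version of this argument, with competing hypotheses "gauge glitch" versus "the reading is real". All the probabilities are invented for illustration; the point is only how the prior moves the posterior for the same evidence:

```python
def posteriors(priors, likelihoods):
    """Bayes' theorem over competing hypotheses:
    p(H_i|O) = p(O|H_i) p(H_i) / p(O), where p(O) = sum_j p(O|H_j) p(H_j)."""
    p_obs = sum(p * l for p, l in zip(priors, likelihoods))
    return [l * p / p_obs for p, l in zip(priors, likelihoods)]

# H0 = "gauge glitch", H1 = "reading is real".
# Likelihoods p(O|H) of seeing the anomalous reading (made-up numbers):
likelihoods = [0.3, 0.9]  # the reading fits H1 three times better

# With a strong prior toward "glitch", the same evidence barely moves us:
skeptical = posteriors(priors=[0.99, 0.01], likelihoods=likelihoods)
# With even priors, the evidence dominates:
neutral = posteriors(priors=[0.5, 0.5], likelihoods=likelihoods)

print(skeptical)  # p(real) ≈ 0.03: still almost certainly a glitch
print(neutral)    # p(real) = 0.75
```

This is the quantitative version of "extraordinary claims require extraordinary evidence": with a 0.01 prior, a likelihood ratio of 3 leaves the "real effect" hypothesis at about 3% posterior probability, so you would want far stronger or independent evidence before believing the gauge.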
 
My question then becomes: can we prevent situations like the particle data case, where (at least as I understand it) researchers consistently got similar erroneous results because they used previous research as their prior, even while applying Bayes' theorem correctly? Or does using our priors necessarily lead to these issues of bias?

(Also, I know almost nothing about particle physics, but would it be possible that there were systematic errors due to measurements at the time which caused the shift? Like is it possible that in the 70s and 80s the techniques used consistently overestimated the mass?)
 
person123 said:
How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
Don't do your research in isolation. Your question assumes you're the only one analyzing the data. Do what davenn suggests: run multiple experiments, ask yourself where your errors could possibly be, and then ask your peers whether they think your analysis is plausible given your setup. This should be done on pretty much EVERY experiment IMO.

A lot of papers acknowledge people the authors had conversations with, and guarding against bias is one of the reasons. You HAVE to talk to other scientists! Don't do everything in isolation! Don't be scared to talk to others in your field about your research!
 
