"Expected Result" Bias in Research

In summary, when you're working on an experiment, it's reasonable to be biased in favor of the results that make sense to you. You need multiple repeats of the experiment to get multiple readings and determine an average. This is a real effect - no skulduggery required.
  • #1
person123
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple of years now. One thing which I've thought about repeatedly is a bias toward the desired result. Of course sometimes people might be biased for some ulterior motive or because of sloppy work, but the bias I'm interested in is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
 
  • #2
person123 said:
Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense.
Then you need multiple repeats of the experiment to get multiple readings and determine an average.
Start playing with statistics (NOT my forte), but others here will hopefully guide you.

The point is, you cannot rely on just a small number of data points.
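
To make that concrete, here is a minimal sketch (my own illustration, not anyone's actual workflow) of averaging repeated readings and flagging a suspect one with a robust median/MAD rule; the data and the 5-MAD cutoff are invented for the example:

```python
import statistics

# Hypothetical repeated gauge readings from the same test setup.
readings = [9.8, 10.1, 9.9, 10.0, 47.3, 10.2]

# The outlier drags the mean and standard deviation with it, so use
# robust statistics (median and MAD) to flag it instead.
med = statistics.median(readings)
mad = statistics.median([abs(x - med) for x in readings])

# Flag anything more than 5 MADs from the median
# (5 is an arbitrary cutoff -- pick it BEFORE looking at the data).
kept = [x for x in readings if abs(x - med) <= 5 * mad]
suspect = [x for x in readings if abs(x - med) > 5 * mad]

print(f"median = {med}, MAD = {mad}")          # median = 10.05, MAD = 0.15
print(f"suspect readings: {suspect}")          # [47.3]
print(f"mean of kept readings: {statistics.mean(kept):.2f}")  # 10.00
```

The point of fixing the rejection rule in advance is that the rule, not your expectation about the answer, decides which readings get dropped.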
 
  • #3
This is a real effect - no skulduggery required. You work as hard as you can, find all the sources you think of, and if you get the "right" answer you pat yourself on the back for a job well done. If you get the "wrong" answer, you go back and keep looking for errors.

The Particle Data Book has historical data on various measurements. One is below. You can pretty much see that (whatever else is going on) the errors are not Gaussian and random.

[Attached figure: historical record of one Particle Data Book measurement plotted over time]

Now, how do we get around this? One way is by blinding. For example, in a counting experiment, one might not count in the signal region until counts in nearby control regions are demonstrated to be understood. Or one might introduce an offset unknown to the bulk of the collaboration that is removed only at the very end; again this is only done when various control checks pass.

Is this perfect? Absolutely not. Science is done by people.
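
As a concrete (purely hypothetical) illustration of the hidden-offset idea, a blind analysis might be organized like this sketch, where the offset is created once, kept away from the analyzers, and removed only after the control checks pass:

```python
import random

# Sketch of a blind analysis via a hidden offset. Illustrative only;
# real collaborations use far more careful machinery than this.

def blinding_offset(seed, scale=1.0):
    """Generated once by a designated person; analyzers never look at it."""
    return random.Random(seed).uniform(-scale, scale)

SECRET_SEED = 123456  # hypothetical; kept out of the analysis code

def blind(measurement):
    # Everyone debugs, tunes cuts, and runs control checks on this value.
    return measurement + blinding_offset(SECRET_SEED)

def unblind(blinded_value):
    # Run exactly once, at the very end, after all checks have passed.
    return blinded_value - blinding_offset(SECRET_SEED)

raw = 3.1415
b = blind(raw)
print(f"value seen during analysis: {b:.4f}")
print(f"unblinded at the end:       {unblind(b):.4f}")  # recovers 3.1415
```

Because nobody knows whether their result is "right" or "wrong" until the offset is removed, there is no target to steer toward while hunting for errors.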
 
  • #4
person123 said:
Hi. I'm an undergraduate student and I've been doing research in Civil Engineering for a couple of years now. One thing which I've thought about repeatedly is a bias toward the desired result. Of course sometimes people might be biased for some ulterior motive or because of sloppy work, but the bias I'm interested in is one where it's unclear to me what the correct approach is.

Say you're running an experiment and a gauge reads a result which is clearly wrong; maybe it goes against all previous work, it's not physically possible, or just doesn't make sense. It might be reasonable to discard that result, or maybe check/replace the gauge -- to do something in response to that specific reading. In a way you're biased (you wouldn't discard the result if it seemed reasonable), but it makes some sense to be biased (you don't want to rerun the entire experiment because of one reading, and you don't want to publish a result which is clearly wrong).

How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?

This all makes sense if you approach the problem from the perspective of Bayes' theorem. According to Bayes' theorem, the confidence you have in a hypothesis ##H## given a set of observations ##O##, ##p(H|O)##, is the product of two quantities: the strength of the evidence, ##p(O|H)/p(O)##, and the prior probability of the hypothesis being correct, ##p(H)##:
$$p(H|O) = \frac{p(O|H)}{p(O)}p(H)$$

For gauge readings that make sense (i.e. those which confirm your prior beliefs), you don't need very strong evidence in favor of those hypotheses for you to believe them. However, if a reading goes against your prior beliefs and supports a hypothesis with a low prior probability, you would want much stronger evidence (e.g. more repeats of the experiment and independent sources of evidence) to convince you that the alternative hypothesis with a low prior is correct.
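
A toy calculation (numbers invented purely for illustration) makes the asymmetry explicit:

```python
# Toy Bayes update: how far one surprising reading should move you
# depends on the prior. All numbers here are assumptions.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """p(H|O) from Bayes' theorem, expanding p(O) over H and not-H."""
    p_obs = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / p_obs

# H = "the strange gauge reading reflects a real effect", and the
# reading is assumed 10x more likely under H than under a glitch.
print(posterior(prior=0.5, p_obs_given_h=0.10, p_obs_given_not_h=0.01))
# ~0.91: with an even prior, a single such reading is fairly convincing.

print(posterior(prior=0.001, p_obs_given_h=0.10, p_obs_given_not_h=0.01))
# ~0.01: with a very low prior, the same evidence barely moves you;
# you'd want repeats and independent lines of evidence first.
```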
 
  • Like
Likes Choppy and person123
  • #5
My question then becomes: can we prevent situations like the particle data drifting through time, where (at least as I understand it) researchers consistently got similar erroneous results because they used previous research as their prior, even while applying Bayes' theorem correctly? Or does using our priors necessarily lead to these issues of bias?

(Also, I know almost nothing about particle physics, but would it be possible that there were systematic errors in the measurement techniques of the time which caused the shift? Like, is it possible that in the 70s and 80s the techniques used consistently overestimated the mass?)
 
  • #6
person123 said:
How do you decide when it makes sense to be biased? When are you being reasonable, and when are you trying to fit the data with the results you're looking for? People who do research, has this been a significant concern for you?
Don't do your research in isolation. Your question reads as if you're the only one who gets to analyze the data. Do what davenn suggests: run multiple experiments, ask yourself where your errors could possibly be, and then ask your peers whether they think your analysis is plausible given your setup. This should be done on pretty much EVERY experiment IMO.

A lot of papers acknowledge people the authors had conversations with, and guarding against bias is one of the reasons for those conversations. You HAVE to talk to other scientists! Don't do everything in isolation! Don't be scared to talk to others in your field about your research!
 

What is "Expected Result" Bias in Research?

"Expected Result" Bias in Research refers to the tendency of researchers to unconsciously manipulate or interpret their data in a way that confirms their original hypothesis or expected results. This can lead to inaccurate conclusions and misleading findings.

How does "Expected Result" Bias occur?

"Expected Result" Bias can occur due to a researcher's personal beliefs, prior knowledge, or desire to obtain a certain outcome. It can also be influenced by external factors such as funding sources or pressure to produce positive results.

What are the consequences of "Expected Result" Bias?

The consequences of "Expected Result" Bias can range from minor inaccuracies in research findings to severe impacts on public policy decisions. It can also undermine the credibility of scientific research and hinder progress in a particular field.

How can "Expected Result" Bias be avoided?

To avoid "Expected Result" Bias, researchers should strive for objectivity and transparency in their methods and analysis. This can include using blind studies, having multiple researchers review the data, and openly discussing any potential biases.

What are some examples of "Expected Result" Bias in research?

One example of "Expected Result" Bias is the pharmaceutical industry's tendency to publish only studies that show positive results for their drugs, while leaving studies that show negative results unpublished. Another example is p-hacking, where researchers massage their analysis until it yields statistically significant results.
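
As a small (purely illustrative) demonstration of why p-hacking works, the simulation below runs batches of twenty tests on pure noise and reports how often at least one comes out "significant"; every number in it is an assumption of the example:

```python
import random

random.seed(0)

def noise_study(n=100):
    """One 'study' on pure noise: z-test whether a fair coin is biased."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n * 0.5) / (n * 0.25) ** 0.5  # normal approximation
    return abs(z) > 1.96                       # 'significant' at p < 0.05

batches = 2000
hits = sum(any(noise_study() for _ in range(20)) for _ in range(batches))

# With ~5% false positives per test, at least one of 20 tests 'succeeds'
# about two-thirds of the time (1 - 0.95**20 is roughly 0.64; slightly
# higher here because the discrete test's true rate is a bit above 5%).
print(f"batches with at least one 'finding': {hits / batches:.2f}")
```

Even with no effect anywhere, a researcher who quietly tries twenty analyses and reports only the one that "worked" will usually have something to report.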
