Dangers of Using Statistics Wrongly in Scientific Research

In summary: if you have a hypothesis, then you should test it using different data. P-hacking is statistics backwards.
  • #2
Indeed a very interesting article. Thanks for sharing it.
jedishrfu said:
We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Yes, I completely agree with you.
 
  • #3
There has been an ongoing discussion on the blog of Andrew Gelman, a professor of statistics at Columbia University, regarding p-hacking and deep data dives in general, and in particular the work of Brian Wansink, which the Ars Technica article above refers to.

Here is one blog post, among many others:

http://andrewgelman.com/2016/12/15/hark-hark-p-value-heavens-gate-sings/
 
  • #4
FiveThirtyEight has also run a few features about p-hacking. One has a nice interactive demonstrating how a single dataset can be p-hacked to support one conclusion or another (https://fivethirtyeight.com/features/science-isnt-broken/#part1), and in another they run some surveys and p-hack their way to spurious correlations, such as links between raw tomato consumption and Judaism, or between drinking lemonade and believing Crash deserved to win Best Picture (http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/).
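
To see the mechanism those interactives exploit, here is a minimal Python sketch (my own illustration, not FiveThirtyEight's code): generate pure noise, test every pair of variables, and watch "significant" correlations appear by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 100, 20

# Pure noise: no variable is actually related to any other.
data = rng.normal(size=(n_subjects, n_variables))

significant = 0
n_tests = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        _, p = stats.pearsonr(data[:, i], data[:, j])
        n_tests += 1
        if p < 0.05:
            significant += 1

# 20 variables give 190 pairwise tests; at alpha = 0.05 we expect
# roughly 9-10 of them to come out "significant" by chance alone.
print(f"{significant} of {n_tests} tests significant at p < 0.05")
```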

These are important points to keep in mind when someone makes wild claims about how data mining with artificial intelligence will revolutionize a field or help cure cancer.
 
  • #5
In his book "Introduction to Medical Statistics," Second Edition, Robert Mould gives an example of the dangers of interpreting correlations. Actual data on the number of storks documented in various towns shows a striking linear correlation with population. Finding correlations between seemingly unrelated variables is dangerous for drawing conclusions if we do not start with some underlying idea of a possible relationship between the variables. In the case of the stork data, a biologist would know that storks make nests on houses, so the correlation with population is no surprise.
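
The stork example is a classic confounder at work. A minimal Python sketch of that structure, with synthetic numbers rather than Mould's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_towns = 50

# Synthetic towns: population is the common cause (confounder).
population = rng.uniform(1_000, 100_000, size=n_towns)

# More people -> more houses -> more nesting sites -> more storks.
houses = population / 4 + rng.normal(0, 1_000, size=n_towns)
storks = houses / 500 + rng.normal(0, 5, size=n_towns)

r, p = stats.pearsonr(storks, population)
print(f"storks vs. population: r = {r:.2f}, p = {p:.2g}")
# The correlation is strong, but storks don't cause people; both
# track the number of houses, which is what a biologist would spot.
```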

P-hacking is statistics bass-ackwards.
 
  • #6
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?
 
  • #7
Stephen Tashi said:
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?

I would say do the opposite of p-hacking. Analyze your data in multiple ways, and only trust your conclusion if the statistical significance is robust to multiple means of analysis.
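
For instance, a minimal Python sketch of that robustness check (the data and the particular trio of tests are just illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative paired data with a genuine underlying relationship.
x = rng.normal(size=80)
y = 0.5 * x + rng.normal(size=80)

def pearson_r(xs):
    # y stays fixed; permuting x alone randomizes the pairings.
    return stats.pearsonr(xs, y)[0]

# Ask the same question three different ways.
_, p_pearson = stats.pearsonr(x, y)
_, p_spearman = stats.spearmanr(x, y)
perm = stats.permutation_test((x,), pearson_r,
                              permutation_type="pairings",
                              n_resamples=2_000, random_state=0)

print(f"Pearson:     p = {p_pearson:.4f}")
print(f"Spearman:    p = {p_spearman:.4f}")
print(f"Permutation: p = {perm.pvalue:.4f}")
# A conclusion worth trusting should survive all three analyses,
# not just whichever one happens to cross p < 0.05.
```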
 
  • #8
Folks in data mining do a form of p-hacking when scoring and clustering groups of data, and they must then develop a rationale that describes what they found.

As an example, analysis of bank customer histories can identify a group of customers likely to leave the bank, because their behavior matches that of customers who already have. From there you can drill down to see why the two groups are similar and develop marketing plans to stem the loss.

In contrast, Cornell researchers developed a program that teases out the equations describing a system from measurements alone. It successfully discovered the equations of motion of a compound pendulum.

Some biology researchers did the same thing and got some great equations, but they couldn't publish because they couldn't explain the equations with a plausible new theory.
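
For the bank example, a minimal Python sketch of that kind of churn scoring (the features, the left_bank label, and the model choice are all hypothetical illustrations, not any bank's actual system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1_000

# Synthetic customer history: balance trend, logins/month, complaints.
X = np.column_stack([
    rng.normal(0, 1, n),   # balance trend (standardized)
    rng.poisson(5, n),     # logins per month
    rng.poisson(0.5, n),   # complaints filed
])

# Hypothetical label: customers who already left the bank.
logit = -1.0 - 0.8 * X[:, 0] - 0.2 * X[:, 1] + 1.5 * X[:, 2]
left_bank = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    X, left_bank, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Score customers: a high probability means they resemble past leavers.
at_risk = model.predict_proba(X_test)[:, 1] > 0.5
print(f"{at_risk.sum()} of {len(X_test)} customers flagged as at risk")
```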
 
  • #9
jedishrfu said:
Folks in data mining do a form of p-hacking when scoring and clustering groups of data, and they must then develop a rationale that describes what they found.
Therefore they adjust their significance levels for multiple testing.
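
For instance, a Bonferroni correction, one of the standard adjustments (the p-values below are made up for illustration):

```python
import numpy as np

# Hypothetical p-values from ten exploratory tests on the same data.
p_values = np.array([0.003, 0.012, 0.020, 0.041, 0.049,
                     0.110, 0.240, 0.380, 0.620, 0.910])
alpha = 0.05
m = len(p_values)

# Bonferroni: control the family-wise error rate by comparing each
# p-value against alpha / m instead of alpha.
naive = p_values < alpha
bonferroni = p_values < alpha / m

print(f"significant without correction: {naive.sum()}")       # 5
print(f"significant with Bonferroni:    {bonferroni.sum()}")  # 1
```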
 
  • #10
jedishrfu said:
An interesting article in Ars Technica on p-hacking vs deep data dives:

https://arstechnica.com/science/201...mindless-eating-mindless-research-is-bad-too/

We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Of course you should look at data to find something interesting. The point is that you shouldn't use the same data to test your hypotheses.
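
A minimal Python sketch of that discipline, splitting the data up front so exploration and confirmation never touch the same observations (the 50/50 split is an illustrative choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Illustrative dataset of paired measurements.
x = rng.normal(size=200)
y = 0.3 * x + rng.normal(size=200)

# Split once, up front: explore on one half, confirm on the other.
idx = rng.permutation(200)
explore, confirm = idx[:100], idx[100:]

# Exploration half: look for anything interesting.
r_explore, _ = stats.pearsonr(x[explore], y[explore])
print(f"exploratory correlation: r = {r_explore:.2f}")

# Confirmation half: a single pre-specified test of the one
# hypothesis the exploration suggested.
r_confirm, p = stats.pearsonr(x[confirm], y[confirm])
print(f"confirmatory test: r = {r_confirm:.2f}, p = {p:.4f}")
```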
 

1. What are the potential consequences of using statistics incorrectly in scientific research?

The consequences of using statistics incorrectly in scientific research can be severe. It can lead to inaccurate conclusions, which can result in wasted time, resources, and funding. It can also damage the credibility of the research and the researcher.

2. How can using statistics incorrectly impact the validity of a study?

Using statistics incorrectly can significantly impact the validity of a study. It can lead to biased results, invalid conclusions, and erroneous findings. This can ultimately render the entire study useless and call into question the integrity of the research.

3. What are some common mistakes that researchers make when using statistics?

Some common mistakes that researchers make when using statistics include using small sample sizes, failing to account for confounding variables, and misinterpreting p-values. Other mistakes include using inappropriate statistical tests or failing to check for assumptions before conducting the analysis.
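
To make the small-sample point concrete, here is a minimal Python sketch (the effect size is a made-up illustration) showing how unreliable p-values become when samples are small:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# A real but modest effect: two groups whose means differ by 0.3 SD.
def one_study(n):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.3, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

for n in (10, 50, 200):
    pvals = np.array([one_study(n) for _ in range(1_000)])
    power = (pvals < 0.05).mean()
    print(f"n = {n:3d}: fraction of studies with p < 0.05 = {power:.2f}")
# With n = 10 the same true effect is "significant" only rarely, and
# which studies cross the threshold is essentially luck.
```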

4. How can researchers ensure they are using statistics correctly in their research?

Researchers can ensure they are using statistics correctly by seeking guidance from a statistician or consulting statistical resources. It is also essential to carefully plan and design the study, use appropriate statistical methods, and thoroughly check for errors and assumptions before drawing conclusions.

5. What are some best practices for using statistics in scientific research?

Some best practices for using statistics in scientific research include clearly defining the research question, choosing appropriate statistical methods, using a large and representative sample size, and properly analyzing and interpreting the data. It is also crucial to be transparent and report all findings, even if they do not support the initial hypothesis.
