Dangers of Using Statistics Wrongly in Scientific Research

  • Thread starter jedishrfu
  • #1

Answers and Replies

  • #2
eys_physics
Indeed a very interesting article. Thanks for sharing it.
We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Yes, I completely agree with you.
 
  • #3
StatGuy2000
Education Advisor
There has been an ongoing discussion on the blog of Andrew Gelman, a professor of statistics at Columbia University, about p-hacking and deep data dives in general, and in particular about the work of Brian Wansink, to which the Ars Technica article above refers.

Here is one blog post, among many others:

http://andrewgelman.com/2016/12/15/hark-hark-p-value-heavens-gate-sings/
 
  • #4
Ygggdrasil
Science Advisor
Insights Author
Gold Member
FiveThirtyEight has also run a few features about p-hacking. One has a nice interactive demonstrating how one can p-hack a dataset to support either of two opposing conclusions (https://fivethirtyeight.com/features/science-isnt-broken/#part1). In another, they run some surveys and p-hack the results to find spurious correlations, such as a link between eating raw tomatoes and Judaism, or between drinking lemonade and believing Crash deserved to win Best Picture (http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/).

These are important points to consider when someone starts making wild claims about how data mining with artificial intelligence will revolutionize a field or help to cure cancer.
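The mechanism behind those FiveThirtyEight demos is easy to reproduce. Below is a minimal Python sketch (stdlib only, all numbers illustrative): twenty "studies" in which the null hypothesis is true by construction, each checked with an exact two-sided binomial test. On average about one of them will still come out "significant" at the 0.05 level.

```python
import math
import random

def binom_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of every
    outcome at least as unlikely as observing k successes in n trials."""
    pk = math.comb(n, k) * p**k * (1 - p)**(n - k)
    total = 0.0
    for i in range(n + 1):
        pi = math.comb(n, i) * p**i * (1 - p)**(n - i)
        if pi <= pk * (1 + 1e-9):  # small tolerance for float comparison
            total += pi
    return min(total, 1.0)

random.seed(0)
n_flips, n_tests = 100, 20
p_values = []
for _ in range(n_tests):
    # A fair coin: any "effect" found here is a false positive.
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    p_values.append(binom_p_two_sided(heads, n_flips))

false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives} of {n_tests} pure-noise tests look 'significant'")
```

Searching many variables and reporting only the one that clears p < 0.05 is exactly this loop with the losing tests left out.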
 
  • #5
gleem
Science Advisor
Education Advisor
In his book "Introduction to Medical Statistics" (Second Edition), Robert Mould gives an example of the dangers of interpreting correlations. Actual counts of storks documented in various towns show a striking linear correlation with the towns' populations. Drawing conclusions from correlations between seemingly unrelated variables is dangerous unless we start with some underlying idea of a possible relationship between them. In the case of the stork data, a biologist would know that storks nest on houses, so the correlation with population is no surprise.

P-hacking is statistics bass-ackwards.
 
  • #6
Stephen Tashi
Science Advisor
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?
 
  • #7
Ygggdrasil
Science Advisor
Insights Author
Gold Member
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?

I would say do the opposite of p-hacking. Analyze your data in multiple ways, and only trust your conclusion if the statistical significance is robust to multiple means of analysis.
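As a sketch of that advice (the data and the choice of tests below are invented for illustration): run the same comparison through two different test statistics, here a mean difference and a median difference, each assessed with a simple permutation test, and only call the result significant if both agree.

```python
import random
import statistics

def perm_p(x, y, stat, n_perm=2000, seed=1):
    """One-sided permutation p-value for stat(x) - stat(y)."""
    rng = random.Random(seed)
    observed = stat(x) - stat(y)
    pooled = x + y
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat(pooled[:len(x)]) - stat(pooled[len(x):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one rule avoids p = 0

treated = [5.1, 6.0, 5.8, 6.3, 5.5, 6.1, 5.9, 6.4]
control = [4.8, 5.0, 5.2, 4.9, 5.3, 5.1, 4.7, 5.4]

p_mean = perm_p(treated, control, statistics.mean)
p_median = perm_p(treated, control, statistics.median)

# Require agreement across both analyses before trusting the result.
robust = p_mean < 0.05 and p_median < 0.05
print(f"mean test p={p_mean:.4f}, median test p={p_median:.4f}, robust={robust}")
```

A finding that survives only one carefully chosen analysis is precisely the kind that p-hacking produces.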
 
  • #8
jedishrfu
Folks in data mining do a form of p-hacking when they score and clump groups of data: they must then develop a rationale that explains what they found.

As an example, analysis of bank customer histories can identify a group of customers likely to leave the bank, because they match others who already have. From there you can drill down to see why the two groups are similar and develop marketing plans to stem the loss.

In contrast, Cornell researchers developed a program that teases out the equations describing a system from measurements alone. It successfully discovered the equations of motion of a compound pendulum.

Some biology researchers did the same thing and got some great equations, but they couldn't publish because they couldn't explain the equations with a plausible theory.
 
  • #9
DrDu
Science Advisor
Folks in data mining do a form of p-hacking when they score and clump groups of data: they must then develop a rationale that explains what they found.
Therefore they adjust their significance levels for multiple testing.
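The two textbook family-wise adjustments are short enough to state in code (a generic sketch of the standard methods, not a description of any particular data-mining pipeline):

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0_i only when p_i < alpha / m, which keeps the
    family-wise error rate at or below alpha."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm's step-down method: more powerful than plain Bonferroni
    while still controlling the family-wise error rate."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] < alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

ps = [0.001, 0.012, 0.020, 0.040, 0.300]
print(bonferroni(ps))  # [True, False, False, False, False]
print(holm(ps))        # [True, True, False, False, False]
```

Note that Holm rescues the 0.012 result that Bonferroni's flat alpha/m = 0.01 threshold discards.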
 
  • #10
DrDu
Science Advisor
An interesting article in Ars Technica on p-hacking vs deep data dives:

https://arstechnica.com/science/201...mindless-eating-mindless-research-is-bad-too/

We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Of course you should look at data to find something interesting. The point is that you shouldn't use the same data to test your hypotheses.
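A minimal sketch of that split-sample discipline (the names and sizes are illustrative): partition the data once, explore freely in one half, and test the single chosen hypothesis only on the untouched half.

```python
import random

def split_explore_confirm(records, frac=0.5, seed=7):
    """Randomly partition a dataset into an exploration half (hunt
    for hypotheses here) and a confirmation half (test only the one
    pre-chosen hypothesis here)."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))  # stand-in for real observations
explore, confirm = split_explore_confirm(data)
print(len(explore), len(confirm))  # 50 50
```

Because the confirmation half never influences which hypothesis gets tested, the p-value computed on it retains its usual meaning.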
 
