News Dangers of Using Statistics Wrongly in Scientific Research

AI Thread Summary
The discussion centers on the concepts of p-hacking and deep data dives, emphasizing the importance of starting with a hypothesis rather than searching for interesting data patterns without guidance. The Ars Technica article highlights the pitfalls of this approach, particularly in the context of Brian Wansink's research. It is noted that p-hacking leads to misleading conclusions, as seen in various examples, including correlations between unrelated variables, such as stork populations and human populations. The conversation explores the tension between visual data analysis and the dangers of drawing conclusions from spurious correlations. Recommendations include analyzing data through multiple methods and ensuring that findings are statistically significant across these analyses. The importance of developing a rationale for data findings is also discussed, with examples from banking and physics illustrating the need for a theoretical framework to support conclusions. Overall, the consensus is that while exploring data is valuable, it must be done with a clear hypothesis to avoid misleading interpretations.
Indeed a very interesting article. Thanks for sharing it.
jedishrfu said:
We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Yes, I completely agree with you.
 
There has been an ongoing discussion on the blog of Andrew Gelman, a professor of statistics at Columbia University, regarding p-hacking and deep data dives in general, and in particular the work of Brian Wansink, to which the Ars Technica article above refers.

Here is one blog post, among many others:

http://andrewgelman.com/2016/12/15/hark-hark-p-value-heavens-gate-sings/
 
FiveThirtyEight has also run a few features about p-hacking. One includes a nice interactive demonstrating how one can p-hack a dataset to support one conclusion or another (https://fivethirtyeight.com/features/science-isnt-broken/#part1), and in another they run surveys and p-hack the results to find spurious correlations, such as links between eating raw tomatoes and Judaism, or between drinking lemonade and believing Crash deserved to win Best Picture (http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/).
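To make that concrete, here's a minimal sketch of the underlying problem: if you test enough unrelated survey questions against pure noise at the usual 5% threshold, some will come out "significant" by chance alone. The sample size, question count, and threshold below are illustrative choices of mine, not anything from the FiveThirtyEight pieces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 200   # hypothetical survey respondents
n_questions = 50   # many unrelated yes/no questions

# Pure noise: an outcome with no real relationship to any question.
outcome = rng.normal(size=n_subjects)
answers = rng.integers(0, 2, size=(n_questions, n_subjects))

false_positives = 0
for q in answers:
    _, p = stats.ttest_ind(outcome[q == 1], outcome[q == 0])
    if p < 0.05:
        false_positives += 1

# With 50 independent tests at alpha = 0.05, a few "hits" are expected by chance.
print(f"'Significant' associations found in pure noise: {false_positives}")
```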

These are important points to consider when someone starts making wild claims about how data mining with artificial intelligence will revolutionize a field or do something like cure cancer.
 
In his book "Introduction to Medical Statistics" (second edition), Robert Mould gives an example of the dangers of interpreting correlations. Actual data on the number of storks documented in various towns shows a striking linear correlation with population. Finding correlations between seemingly unrelated variables is dangerous for drawing conclusions if we do not start with some underlying idea of a possible relationship between the variables. In the case of the stork data, a biologist would know that storks nest on houses, so the correlation with population is no surprise.
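Here's a minimal sketch of the stork effect with made-up numbers: both quantities are driven by a third variable (the number of houses in a town), so they correlate strongly even though neither causes the other. The coefficients and noise levels are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical towns: the number of houses drives both quantities.
houses = rng.uniform(100, 10_000, size=50)

# Storks nest on houses; people live in them. Neither causes the other.
storks = 0.02 * houses + rng.normal(scale=5, size=50)
population = 3.0 * houses + rng.normal(scale=500, size=50)

r = np.corrcoef(storks, population)[0, 1]
print(f"Correlation between stork count and population: {r:.2f}")  # close to 1
```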

P-hacking is statistics bass-ackwards.
 
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?
 
Stephen Tashi said:
How do we reconcile the advice "Don't do p-hacking" with advice like "Always graph your data to see what it looks like"? Is this just a matter of accepting perceptions of patterns that we find visually "obvious" and rejecting patterns detected by other means?

I would say do the opposite of p-hacking. Analyze your data in multiple ways, and only trust your conclusion if the statistical significance is robust to multiple means of analysis.
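Here's a minimal sketch of that idea, using simulated data and an arbitrary choice of three tests (a t-test, a Mann-Whitney test, and a permutation test); the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two simulated samples with a genuine difference in means.
a = rng.normal(loc=0.0, scale=1.0, size=80)
b = rng.normal(loc=0.5, scale=1.0, size=80)

_, p_ttest = stats.ttest_ind(a, b)       # parametric
_, p_mwu = stats.mannwhitneyu(a, b)      # rank-based
p_perm = stats.permutation_test(
    (a, b), lambda x, y: np.mean(x) - np.mean(y), n_resamples=5000
).pvalue                                  # resampling-based

alpha = 0.05
robust = all(p < alpha for p in (p_ttest, p_mwu, p_perm))
print(f"t-test p={p_ttest:.3f}, Mann-Whitney p={p_mwu:.3f}, permutation p={p_perm:.3f}")
print("Conclusion trusted only if all three agree:", robust)
```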
 
Folks in data mining do a form of p-hacking when scoring and clustering groups of data, and they must then develop a rationale that describes what they found.

As an example, analysis of bank customer history can identify a group of customers who are likely to leave the bank because they match others who already have. From there you can drill down to see why the two groups are similar and develop marketing plans to stem the loss.
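Here's a minimal sketch of what that kind of scoring might look like (scikit-learn, with made-up features and an arbitrary 0.7 risk threshold; nothing here reflects any real bank's data or model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical customer history: [balance trend, complaints filed, logins per month]
X = rng.normal(size=(1000, 3))
# Past customers who left (1) vs. stayed (0), loosely driven by those features.
y = (X @ np.array([-1.0, 1.5, -0.8]) + rng.normal(size=1000) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score current customers; the highest-scoring ones "match" the past leavers.
current = rng.normal(size=(200, 3))
churn_risk = model.predict_proba(current)[:, 1]
flagged = current[churn_risk > 0.7]
print(f"{len(flagged)} customers flagged for retention marketing")
```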

In contrast, Cornell researchers developed a program that teases out the equations describing a system from measurements alone. It successfully discovered the equations of motion of a compound pendulum.

Some biology researchers did the same thing and got some great equations, but they couldn't publish because they couldn't explain them with a plausible new theory.
 
jedishrfu said:
Folks in data mining do a form of p-hacking when scoring and clustering groups of data, and they must then develop a rationale that describes what they found.
Therefore they adjust their significance levels for multiple testing.
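For example, here's a minimal sketch of two common adjustments, Bonferroni and Holm, applied by hand to made-up p-values:

```python
import numpy as np

# Hypothetical raw p-values from 6 exploratory tests on the same data set.
p_values = np.array([0.003, 0.012, 0.030, 0.041, 0.20, 0.55])
alpha = 0.05
m = len(p_values)

# Bonferroni: compare each p-value against alpha / m.
bonferroni_rejects = p_values < alpha / m

# Holm: sort the p-values, compare the k-th smallest against alpha / (m - k),
# and stop at the first failure.
order = np.argsort(p_values)
holm_rejects = np.zeros(m, dtype=bool)
for k, idx in enumerate(order):
    if p_values[idx] < alpha / (m - k):
        holm_rejects[idx] = True
    else:
        break

print("Bonferroni keeps:", bonferroni_rejects)
print("Holm keeps:      ", holm_rejects)
```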
 
jedishrfu said:
An interesting article in Ars Technica on p-hacking vs deep data dives:

https://arstechnica.com/science/201...mindless-eating-mindless-research-is-bad-too/

We shouldn't look at data trying to find something interesting; instead, we should have a hypothesis in mind and let the data prove or disprove it. The article shows what can happen when we don't do that.
Of course you should look at data to find something interesting. The point is that you shouldn't use the same data to test your hypotheses.
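Here's a minimal sketch of that discipline with simulated data: explore freely on one half of the data, then confirm the single chosen hypothesis only on the held-out half. The split, variable count, and effect are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated data set with many candidate variables and one noisy outcome.
X = rng.normal(size=(400, 20))
y = X[:, 7] * 0.3 + rng.normal(size=400)   # only variable 7 really matters

# Split once, up front: explore on one half, confirm on the other.
explore, confirm = X[:200], X[200:]
y_explore, y_confirm = y[:200], y[200:]

# Exploration: pick the variable with the smallest p-value (the "data dive").
p_explore = [stats.pearsonr(explore[:, j], y_explore)[1] for j in range(20)]
best = int(np.argmin(p_explore))

# Confirmation: test that single pre-registered hypothesis on fresh data.
r, p_confirm = stats.pearsonr(confirm[:, best], y_confirm)
print(f"Chosen variable: {best}, confirmatory p-value: {p_confirm:.4f}")
```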
 