cmmcnamara
Hey all, I have what I assume is a fairly vague question. I'm taking a MATLAB programming class, and our current project is a simple histogram display built from reading an Excel data file. The coding itself is extremely easy; however, having never taken a statistics course and having only a vague understanding of the subject, I'm a bit stumped on how to properly choose the bin width, which in turn determines the number of bins MATLAB draws in the histogram. I've been reading a bit on the topic on Wikipedia, but the methodology for choosing a bin width seems highly subjective to me and dependent on the nature of the collected data.
For my project the professor chose the heights of our class as the data to be analyzed. My rationale and choice of method go as follows. I chose the Freedman-Diaconis rule, which sets the bin width to twice the interquartile range divided by the cube root of the number of data points, i.e. h = 2*IQR/n^(1/3) (see http://en.wikipedia.org/wiki/Histogram#Number_of_bins_and_width). Based on Wikipedia's description, it tends to be less sensitive to extreme data points than the standard-deviation rule. Given my class's small size (32 people), I figured the histogram would be highly sensitive to extreme values, so a rule that is less sensitive to outliers should give a more representative picture of the data. Would this be correct reasoning for selecting the bin width by this method? There are quite a few other methods listed, but some, such as the square-root rule, don't seem to come with any useful justification. Could someone validate my reasoning here? I realize this topic seems to have "no right answer," but I at least want to believe my logic is sound. Thanks in advance!
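In case it helps, here is roughly what my MATLAB code looks like. The file name and column layout are just placeholders for whatever the professor gives us, and iqr() assumes the Statistics Toolbox is installed:

```matlab
% Sketch: histogram with a Freedman-Diaconis bin width.
% Assumes the heights sit in the first column of class_heights.xlsx
% (placeholder name) and that iqr() from the Statistics Toolbox exists.
heights = xlsread('class_heights.xlsx');       % read the data column
n = numel(heights);                            % sample size (32 for my class)
h = 2 * iqr(heights) / n^(1/3);                % Freedman-Diaconis bin width
nbins = ceil((max(heights) - min(heights)) / h); % bins needed to span the data
hist(heights, nbins);                          % draw the histogram
xlabel('Height'); ylabel('Count');
```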