
Appropriate statistical test for this situation?

  1. Jul 18, 2013 #1
    Can anyone help me with this, please?

    It's about how you go about trying to decide if two distributions are consistent, statistically speaking; specifically, what statistical test, or tests, is (are) most appropriate to use.

    Here's the data:

    N(A) N(B) G/R/P
    0043 0046 #101
    0264 0235 #102
    0033 0029 #103

    N(A) N(B) G/R/P
    0172 0201 #201
    1686 1496 #202
    1444 1336 #203

    Astronomical observations were made and reduced to data by two quite different teams, using different telescopes, cameras, data reduction routines, etc. The first two columns (N(A) and N(B)) are counts, with leading zeros so that everything lines up nicely. "A" and "B" are two states, or conditions, or ...; they are distinct and, for the purposes of this question, unambiguous. So the first cell of the first table says 43 cases of A (or with condition A) were observed.

    The third column (G/R/P) is the name/label of the group/region/population observed. The two teams each observed the same group/region/population; the first table is the first team's data, the second the second.

    There is nothing to say what the underlying ("true") distribution is, or should be. Nor is there any way to compare what the two teams observed: the 43 could be a proper subset of the 172 (first column, first row of the second table), an overlap, or disjoint. However, assume no mistakes at all in the assignment of "A" and "B".

    Clearly, the two distributions - of states A and B, across the three groups/regions/populations - are different. However, is that difference statistically significant? What test - or tests - are appropriate to use, here?

    More details? Consider these:

    i) what's observed is white dwarf stars, in three different clusters; A is DA white dwarfs, B DB ones
    ii) globular clusters, in three different galaxies; A is 'red' GCs, B 'blue'
    iii) spiral galaxies, in three different galaxy clusters; A is 'anti-clockwise', B 'clockwise'
    iv) radio galaxies, in three different redshift bins; A is 'FR-I', B 'FR-II'
    v) GRBs, in three different RA bins; A is 'long', B is 'short'

    (I don't think the details matter, in terms of the type of statistical test to use; am I right?)
     
  3. Jul 18, 2013 #2
    To clarify something, namely, what 'distribution' am I asking about?

    Express the data as ratios N(A)/N(B) (rounded to two decimal places):

    A/B G/R/P
    0.93 #101
    1.12 #102
    1.14 #103

    A/B G/R/P
    0.86 #201
    1.13 #202
    1.08 #203

    Now 0.93 != 0.86, 1.12 != 1.13, and 1.14 != 1.08, so the two teams' values of the three ratios are not the same (duh!)

    The ratio in group/region/population #01 (i.e. #101/#201, the same region as seen by the two teams) is not, necessarily, the same as that in G/R/P #02 (ditto #03).

    Given the underlying data - which is counts, not ratios - is the ordered triple* (0.93, 1.12, 1.14) inconsistent with the ordered triple (0.86, 1.13, 1.08)?

    Oh, and in the original post I should have asked whether they are inconsistent with each other ...

    * am I using the term correctly?
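
    (In case it's useful, a trivial Python snippet that reproduces the quoted ratios from the raw counts; the data layout is just for illustration:)

    [code]
    # Reproduce the quoted ratios N(A)/N(B), rounded to two decimal places.
    team1 = {"#101": (43, 46), "#102": (264, 235), "#103": (33, 29)}
    team2 = {"#201": (172, 201), "#202": (1686, 1496), "#203": (1444, 1336)}

    for counts in (team1, team2):
        for grp, (na, nb) in counts.items():
            print(grp, round(na / nb, 2))
    [/code]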
     
  4. Jul 18, 2013 #3

    bapowell

    Science Advisor

    A [itex]\chi^2[/itex] test would likely be appropriate here. Once you've computed the [itex]\chi^2[/itex] statistic, the associated p-value can be compared against a chosen significance level, [itex]\alpha[/itex]. The p-value is computed assuming that the two distributions are actually the same, and gives the probability of seeing differences at least as large as those observed purely as a statistical fluke. The conventional reading of a test that gives a p-value below a significance level of, say, [itex]\alpha = 0.05[/itex], is that there is only a 5% chance of differences this large arising as a statistical fluke. Often, this is phrased as "there is a 5% chance of falsely rejecting the null hypothesis" (null hypothesis = the hypothesis assumed in the significance test -- that the two distributions are the same). This is known as a Type I error, or false positive. People are often tempted to invert this and say that there is a 95% chance that the two distributions are different, but strictly speaking this is incorrect and sloppy.
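
    For illustration only, here is a minimal Python/SciPy sketch of the mechanics (comparing the two teams' counts for the first region in a 2x2 table is just one possible arrangement, not necessarily the right one for this data set):

    [code]
    # A minimal sketch, assuming SciPy is available.
    # Compare the A/B split seen by team 1 and team 2 for the first region
    # (#101 vs #201) with a chi-squared test on a 2x2 contingency table.
    from scipy.stats import chi2_contingency

    #            N(A)  N(B)
    table = [[   43,   46],    # team 1, region #101
             [  172,  201]]    # team 2, region #201

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, dof = {dof}, p-value = {p:.3f}")
    # A p-value below the chosen significance level (e.g. alpha = 0.05) would
    # conventionally be read as evidence against the null hypothesis that both
    # teams are sampling the same A/B split for this region.
    [/code]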
     
  5. Jul 18, 2013 #4
    Thanks!

    Calculate the [itex]\chi^2[/itex] statistic on the following (a contingency table)? Or something else?

    NA.1 NB.1 NA.2 NB.2 G/R/P
    0043 0046 0172 0201 #01
    0264 0235 1686 1496 #02
    0033 0029 1444 1336 #03
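
    For concreteness, here is a minimal sketch (assuming SciPy) of feeding that table to a [itex]\chi^2[/itex] routine exactly as laid out; whether a single test on the full 3x4 table is the right arrangement, rather than region-by-region comparisons, is exactly what I'm unsure about:

    [code]
    # A minimal sketch, assuming SciPy is available.  This runs a chi-squared
    # test on the 3x4 table exactly as laid out above; whether that arrangement
    # is the appropriate one is the question being asked.
    from scipy.stats import chi2_contingency

    #           NA.1  NB.1  NA.2  NB.2
    table = [[   43,   46,  172,  201],   # G/R/P #01
             [  264,  235, 1686, 1496],   # G/R/P #02
             [   33,   29, 1444, 1336]]   # G/R/P #03

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, dof = {dof}, p-value = {p:.3f}")
    [/code]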
     