Appropriate statistical test for this situation?

  • Thread starter: Jean Tate
  • Tags: Statistical Test
AI Thread Summary
To determine whether the difference between the two distributions of states A and B is statistically significant, a chi-squared test is recommended. The test compares the observed counts from two different teams across three groups to assess how likely any differences are to have arisen by chance. The resulting p-value indicates whether the null hypothesis (that the distributions are the same) can be rejected; a p-value below the conventional threshold of 0.05 suggests a significant difference between the distributions. The discussion emphasizes the importance of correctly interpreting the p-value and avoiding common misconceptions about statistical significance.
Jean Tate
Can anyone help me with this, please?

It's about how you go about trying to decide if two distributions are consistent, statistically speaking; specifically, what statistical test, or tests, is (are) most appropriate to use.

Here's the data:

N(A) N(B) G/R/P
0043 0046 #101
0264 0235 #102
0033 0029 #103

N(A) N(B) G/R/P
0172 0201 #201
1686 1496 #202
1444 1336 #203

Astronomical observations were made and reduced to data by two quite different teams, using different telescopes, cameras, data-reduction routines, etc. The first two columns (N(A) and N(B)) are counts, with leading zeros to ensure everything lines up nicely. "A" and "B" are two states, or conditions, or ... they are distinct and - for the purposes of this question - unambiguous. So the first cell of the first table says 43 cases of A (or with condition A) were observed.

The third column (G/R/P) is the name/label of the group/region/population observed. The two teams each observed the same group/region/population; the first table is the first team's data, the second the second.

There is nothing to say what the underlying ("true") distribution is, or should be. Nor is there any way to compare what the two teams observed: the 43 could be a proper subset of the 172 (first column, first row of each table), an overlap, or disjoint. However, assume no mistakes at all in the assignment of "A" and "B".

Clearly, the two distributions - of states A and B, across the three groups/regions/populations - are different. However, is that difference statistically significant? What test - or tests - are appropriate to use, here?

More details? Consider these:

i) what's observed is white dwarf stars, in three different clusters; A is DA white dwarfs, B DB ones
ii) globular clusters, in three different galaxies; A is 'red' GCs, B 'blue'
iii) spiral galaxies, in three different galaxy clusters; A is 'anti-clockwise', B 'clockwise'
iv) radio galaxies, in three different redshift bins; A is 'FR-I', B 'FR-II'
v) GRBs, in three different RA bins; A is 'long', B is 'short'

(I don't think the details matter, in terms of the type of statistical test to use; am I right?)
 
To clarify something; namely, what 'distribution' am I asking about?

Express the data as ratios, N(A)/N(B), rounded to two decimal places:

A/B G/R/P
0.93 #101
1.12 #102
1.14 #103

A/B G/R/P
0.86 #201
1.13 #202
1.08 #203

Now 0.93 != 0.86, 1.12 != 1.13, and 1.14 != 1.08, so the two teams' values of the three ratios are not the same (duh!)
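For the record, those ratios follow directly from the counts. A quick sketch (the dictionary layout is just my own way of holding the data):

```python
# Counts per group: (N(A), N(B)) for each team, keyed by the G/R/P label
team1 = {"#101": (43, 46), "#102": (264, 235), "#103": (33, 29)}
team2 = {"#201": (172, 201), "#202": (1686, 1496), "#203": (1444, 1336)}

for team in (team1, team2):
    for label, (na, nb) in team.items():
        # N(A)/N(B), rounded to two decimal places as in the tables above
        print(label, round(na / nb, 2))
```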

The ratio in group/region/population #01 is not, necessarily, the same as that in G/R/P #02 (ditto #03).

Given the underlying data - which is counts, not ratios - is the ordered triple* (0.93, 1.12, 1.14) inconsistent with the ordered triple (0.86, 1.13, 1.08)?

Oh, and I should have asked about inconsistent with ...

* am I using the term correctly?
 
Jean Tate said:
Clearly, the two distributions - of states A and B, across the three groups/regions/populations - are different. However, is that difference statistically significant? What test - or tests - are appropriate to use, here?
A \chi^2 test would likely be appropriate here. Once you've computed the \chi^2 statistic, the associated p-value can be used to judge significance at some chosen level \alpha. The p-value assumes that the two distributions are actually the same (the null hypothesis) and gives the probability of observing differences at least as large as those in the data purely by chance. If the p-value falls below a conventional significance level such as \alpha = 0.05, the null hypothesis is rejected: were it true, data this discrepant would arise less than 5% of the time. Rejecting a true null hypothesis is known as a Type I error, or false positive, and \alpha is the probability of making one. People are often tempted to invert this and say that there is a 95% chance that the two distributions are different, but strictly speaking this is incorrect and sloppy.
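For small tables the \chi^2 survival function has a closed form, so the p-value can be sketched with nothing but the standard library (a minimal illustration; the function name is my own, and only df = 1 and df = 2 are covered, which is all a 2x2 or 2x3 table needs):

```python
import math

def chi2_pvalue(x, df):
    """P(X >= x) for a chi-squared variable with df degrees of freedom.
    Closed forms: df = 1 uses the complementary error function,
    df = 2 is a plain exponential."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    raise NotImplementedError("only df = 1 or 2 in this sketch")

# A statistic of about 3.84 with 1 degree of freedom sits right at the
# conventional alpha = 0.05 boundary:
print(round(chi2_pvalue(3.841, 1), 3))  # ~0.05
```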
 
bapowell said:
A \chi^2 test would likely be appropriate here. ...
Thanks!

Calculate the \chi^2 statistic on the following (a contingency table)? Or something else?

NA.1 NB.1 NA.2 NB.2 G/R/P
0043 0046 0172 0201 #01
0264 0235 1686 1496 #02
0033 0029 1444 1336 #03
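One concrete way to set this up (a sketch only, and just one of several reasonable layouts, not necessarily the one intended above): treat each group as its own 2x2 table of team versus state, compute Pearson's \chi^2 by hand, and get the df = 1 p-value from its closed form. The function names are my own; everything is standard library.

```python
import math

def pearson_chi2(table):
    """Pearson chi-squared statistic for an r x c table of counts."""
    rows = [sum(r) for r in table]          # row totals
    cols = [sum(c) for c in zip(*table)]    # column totals
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under the null
            stat += (obs - exp) ** 2 / exp
    return stat

def p_value_df1(x):
    # chi-squared survival function for 1 degree of freedom
    return math.erfc(math.sqrt(x / 2.0))

# Per-group 2x2 tables: rows are the two teams, columns are states A and B
groups = {
    "#01": [[43, 46], [172, 201]],
    "#02": [[264, 235], [1686, 1496]],
    "#03": [[33, 29], [1444, 1336]],
}
for name, table in groups.items():
    x = pearson_chi2(table)
    print(name, round(x, 3), round(p_value_df1(x), 3))
```

A (2 - 1) x (2 - 1) table has one degree of freedom, hence `p_value_df1`. If scipy is available, `scipy.stats.chi2_contingency` does the same computation (plus an optional continuity correction) for tables of any shape.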
 