LostMinosHeins
According to the CDC, 1 out of 102 males in a particular demographic has HIV. That means there is a 1/102 ≈ 0.0098, or roughly 1%, chance of having HIV. There is a swab HIV test available that is 91.7% accurate at identifying HIV-positive people, meaning about 11 out of 12 people who are HIV positive will test positive on the test.
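A quick sanity check of those two figures, written out in Python (the 91.7% number is simply taken at face value here):

```python
# Quick arithmetic check of the two quoted figures.
prevalence = 1 / 102          # CDC figure: 1 out of 102 males in the demographic
print(prevalence)             # 0.0098..., i.e. about 0.98%, roughly 1%

sensitivity = 0.917           # swab test: probability an HIV-positive person tests positive
false_negative_rate = 1 - sensitivity
print(false_negative_rate)    # 0.083, i.e. about 1 in 12 HIV-positive people test negative
```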
So say a person takes this HIV test and they fit into the demographic of the CDC statistic. If about 1 in 12 people who are HIV positive tests negative on the test, it seems there is an 8.3% possibility that any person taking the test at all is HIV positive. Is that correct? On its face it doesn't seem to make much sense, because it doesn't even put you below the 1% chance of having HIV that you already have just for being a male of that demographic in the USA according to the CDC. So does that really mean that being a male of that demographic in the USA makes you less likely to have HIV than being a person who tests negative on the swab test? It might be comparing apples and oranges, because the 1-in-12 number assumes every one of the 12 has HIV, but it would still be useful to compare them.
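One way to see why the 8.3% and the 1% are not directly comparable is to count through a hypothetical group of 10,200 men from that demographic (the round cohort size is chosen only so the counts come out even):

```python
# Hypothetical cohort of 10,200 men from the demographic, sized for round numbers.
cohort = 10_200
hiv_positive = cohort * (1 / 102)             # about 100 men actually have HIV
hiv_negative = cohort - hiv_positive          # about 10,100 men do not

false_negatives = hiv_positive * (1 - 0.917)  # about 8 of the ~100 HIV-positive men test negative

# The 8.3% figure is false_negatives / hiv_positive: the share of *HIV-positive* men
# who test negative. By itself it says nothing about what fraction of all testers
# (or of all negative testers) has HIV, which is what the 1% base rate describes.
print(false_negatives / hiv_positive)         # ~0.083
print(hiv_positive / cohort)                  # ~0.0098, the ~1% base rate
```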
But using the 1% statistic, and assuming the person is a randomly selected individual of that demographic, it seems (logically) like the probability would be reduced below 1% if someone in that category tested negative on the test. It seems like a compound statistical problem. Is there a way of calculating, using these two statistics, how far below 1% the chance of having HIV now is for a person who has tested negative on the swab test? One statistic appears to assume every person being tested is HIV positive, while the other comes from a random sample of all males in the US of a particular demographic, but it would be informative to combine them to derive this. I just don't know whether they can be combined, because I was told that if you use the rule of multiplying probabilities, they have to refer to the same group and add up to 100%. I think it depends on whether the probabilities represent independent or dependent events. Which category do these two events fall into? And how can they be manipulated to calculate how likely it is that the person has HIV if they fall into both categories?
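The two events are dependent (the test result depends on whether the person has HIV), so the numbers are not simply multiplied; they combine through Bayes' theorem. A minimal sketch of that calculation follows, with the caveat that it needs a third number the post does not give: the specificity, i.e. the probability that an HIV-negative person correctly tests negative. The 99% used here is only an assumed placeholder, not a figure for the real swab test.

```python
# Sketch of the Bayes' theorem calculation for P(HIV | negative swab test).
# NOTE: the specificity below is an assumed placeholder, not a figure from the post.
prevalence  = 1 / 102   # P(HIV) for a random male of the CDC demographic, ~1%
sensitivity = 0.917     # P(test positive | HIV)
specificity = 0.99      # P(test negative | no HIV) -- ASSUMED value for illustration

p_neg_given_hiv    = 1 - sensitivity   # ~0.083, the "1 in 12" false-negative figure
p_neg_given_no_hiv = specificity

# Total probability of a negative result, then Bayes' theorem.
p_neg = p_neg_given_hiv * prevalence + p_neg_given_no_hiv * (1 - prevalence)
p_hiv_given_neg = (p_neg_given_hiv * prevalence) / p_neg

print(p_hiv_given_neg)   # ~0.00083, i.e. roughly 0.08%, well below the 1% base rate
```

Under these assumed numbers, a negative swab result moves the chance from about 1% down to roughly 0.08%; the exact figure shifts with whatever specificity the real test has.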