
Stats: How to combine the results of several binary tests

  1. Jul 17, 2011 #1
    (I asked this question on Stack Exchange (http://stats.stackexchange.com/questions/13014/how-to-combine-the-results-of-several-binary-tests), but didn't get anything that was helpful to me.)

    I'm a programmer (former EE) working on a medical application. My last stats course was in engineering school 38 years ago, so I'm not exactly current on my stats lingo.

    I have the results of essentially 18 different binary (in programmer-speak -- ie, yes/no results, with no adjustable threshold) medical tests, each one of which is a (presumably independent) "proxy" measurement for the disorder being tested for. For each of the tests I have statistics (though in a few cases "artistically derived") for # of true positive, false positive, true negative, false negative when compared to the "gold standard", so I can compute specificity, sensitivity, PPV, NPV, etc. (Typical specificity/sensitivity values are, in %, 50/71, 24/85, 29/84, 72/52.) I do not have a collection of results for the entire suite of tests which would show which combination was true for a given specific patient, and I have no real prospect of making such measurements (or any other new measurements), at least not before an initial formula is produced.

    What I want to do is, given the individual statistics, derive a formula that, for a given set of inputs (list of test results for a single patient), will decide "probably positive", "probably negative", or "ambiguous" (and, to keep the FDA and the medical bean counters happy, it would be nice if the formula had some small degree of rigor). (The whole idea here is to avoid the expensive and uncomfortable "gold standard" test where possible, especially on the negative side.)

    The best scheme I've come up with so far is to combine specificities using the formula
    Code (Text):
    spec_combined = 1 - (1 - spec_1) * (1 - spec_2) * ... * (1 - spec_N)
    combine the sensitivities the same way, and then take the ratio
    Code (Text):
    (1 - sens_combined) / (1 - spec_combined)
    using >> 1 for POSITIVE, << 1 for NEGATIVE, and near 1 for "ambiguous".

    This works fairly well, though it seems to behave strangely for some inputs, and it clearly lacks any real mathematical/statistical rigor.
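    In code, the scheme looks roughly like this (the spec/sens values are the ones quoted above; the decision thresholds are arbitrary placeholders, not validated cutoffs):

```python
# Sketch of the ad-hoc combining scheme described above. The spec/sens
# values are the four pairs quoted earlier; the thresholds on the
# ratio are arbitrary placeholders.

def combine(values):
    """Returns 1 - (1 - v1) * (1 - v2) * ... * (1 - vN)."""
    prod = 1.0
    for v in values:
        prod *= 1.0 - v
    return 1.0 - prod

specs = [0.50, 0.24, 0.29, 0.72]  # specificities
senss = [0.71, 0.85, 0.84, 0.52]  # sensitivities

spec_combined = combine(specs)
sens_combined = combine(senss)
ratio = (1.0 - sens_combined) / (1.0 - spec_combined)

if ratio > 2.0:       # ">> 1"
    verdict = "probably positive"
elif ratio < 0.5:     # "<< 1"
    verdict = "probably negative"
else:
    verdict = "ambiguous"
```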

    So I'm wondering how one should go about deriving the "correct" formula, given the available statistics.
    Last edited by a moderator: May 5, 2017
  3. Jul 17, 2011 #2
    What you should be doing is Bayesian inference (http://en.wikipedia.org/wiki/Bayesian_inference). In addition to the data you have, you'll also need the frequency of the disorder in the population. (That'll give you "prior probabilities".) If you have those frequencies as a function of age and ethnicity, all the better.

    I'm afraid your description of what you're doing was confusing enough that I didn't fully follow it. (Definitions of "specificity, sensitivity, PPV, NPV" would help. I could probably guess at specificity and sensitivity if I really wanted to think about it, but the only thing NPV means to me is "net present value", and PPV is a complete mystery.) It might be that what you're already doing is Bayesian inference, or close, in which case you can now give it a name and make claim to that small degree of rigor you desire. If not, read up!
    Last edited by a moderator: May 5, 2017
  4. Jul 17, 2011 #3
    Of course, the "population" is people referred for testing, so there's already a suspicion of the disorder that upsets the validity of any frequency measurement over the general population. But we can probably come up with an approximate frequency in the referred population. (It would be on the order of 30%, I suspect.)

    I've looked at Bayesian inference, but (with my weak stats background) have not been able to make much out of it, without a few hints at what I should be looking at.

    Specificity, et al., I assumed are standard stats terms.

  5. Jul 17, 2011 #4
    The idea is actually very simple. You ask, "What is the probability of getting the results I got if the person is sick?", and "What is the probability of getting the results I got if the person is well?" You'll probably assume, unless you have other information, that each test is independent of the others, so the calculations just turn into a multiplication of the appropriate factors. You then multiply those factors by the prior probabilities to get the ratio [itex]\frac{\mathbb{P}\left[\text{sick}\right]}{\mathbb{P}\left[\text{well}\right]}[/itex]. Don't know if this description is clear, but it's actually quite intuitive once you understand. It is now pretty clear to me that this isn't what you're doing, since you would need the stats for all four possible test results, and you only seem to be using two.
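    As a sketch, for a single binary test the two questions above turn into likelihood ratios, and the prior odds come from the frequency of the disorder in the tested population (all numbers below are illustrative only):

```python
# Illustrative sketch (made-up numbers): sensitivity and specificity
# of one binary test give the likelihood of each result under "sick"
# and under "well"; the prior odds come from disease frequency.
sens, spec = 0.85, 0.24                  # one test's sensitivity, specificity
lr_positive = sens / (1.0 - spec)        # P(pos | sick) / P(pos | well)
lr_negative = (1.0 - sens) / spec        # P(neg | sick) / P(neg | well)

prior_sick = 0.30                        # assumed frequency in the referred population
prior_odds = prior_sick / (1.0 - prior_sick)

# Posterior odds P[sick] / P[well] after one positive result:
posterior_odds_if_positive = prior_odds * lr_positive
```

    With several independent tests you multiply in one likelihood-ratio factor per test result before comparing to 1.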

    Thanks for the links. Yes, if you were talking to people who earn a living doing exactly this, you probably wouldn't need to define your terms or even your abbreviations. But on a general purpose forum like this, there might be non-experts who know the answer but not the jargon. The folks I talk to generally use the terms false positive, true positive, false negative, and true negative, which seem a lot more obvious to me.
    Last edited: Jul 17, 2011
  6. Jul 17, 2011 #5
    Like I said, I've got the 4 quantities true positive/false positive/true negative/false negative -- it's just a matter of figuring out how to use them.

    Looking at Bayesian inference again, it seems like maybe I can work something out. I'll give it a shot.
  7. Jul 17, 2011 #6
    Yes -- I understand. What I meant was that you weren't using them all in your formulas.

    I think you'll be surprised how obvious it is once you've figured it out.
  8. Jul 17, 2011 #7
    What you are doing looks basically right, but there are lots of continuity issues. If you are doing this for something medical, and you don't really understand what you are doing, you should definitely have someone check over your results.

    pmsrw3 is right you are really looking to do something along the lines of Bayesian Inference. It sounds like you don't actually have to do the inference, since the results for each *independent* test are handed to you (they may in fact not be independent, which could be very important for your actual results if you mix them).

    Generally speaking you would try and *classify* the patient as sick/healthy/unsure, based on the ratio of the two probabilities p(sick)/p(healthy).

    The issue that pmsrw3 raised was the frequencies of sick or healthy in your test population (these are best thought of as proportions and not priors). Once again, those are all done separately, could be over totally different populations from one another, and could be over a totally different population from the one your code is applying the model to. All of those issues could be bad for what you are trying to do.

    Anyways, the model you are trying to use given those tables, and assuming independence, is very straightforward.

    Write down your tables.
    Test 1
                   Passed   Failed
    Truly Sick       p1       f1
    Truly Healthy    p2       f2

    where p1 + f1 = 1 and p2 + f2 = 1 (each row is conditioned on the true state of the patient).

    Now, for one test, read off the likelihood of each result given the true state:
    p(test = passed | Sick) = p1
    p(test = failed | Sick) = f1
    p(test = passed | Healthy) = p2
    p(test = failed | Healthy) = f2

    Now for the patient, assuming all of the tests are independent, calculate the probability of the observed results given that they are sick, and given that they are healthy: the per-test likelihoods are multiplied. (Generally speaking, if you can do the experiment yourself, you don't want all of these different separate studies being mushed into one classifier.)

    p(results | sick) = p(T1 = Result1 | sick) * p(T2 = Result2 | sick) * ...
    p(results | healthy) = p(T1 = Result1 | healthy) * p(T2 = Result2 | healthy) * ...

    Then take the ratio of the two (weighted by the prior odds, if you have them):
    R = p(results | sick) / p(results | healthy)

    You have to choose what the "safe" regions are. High R's and very low R's are of course more discerning than R's near 1.
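    A rough sketch of the whole procedure (all probabilities, results, and R thresholds below are made-up placeholders):

```python
# Sketch of the classification rule above, assuming independent tests.
# p_pass_sick[i] = P(pass | sick) and p_pass_well[i] = P(pass | well)
# for test i, as estimated from each test's 2x2 table. The numbers
# and the R thresholds are made-up placeholders.

def classify(results, p_pass_sick, p_pass_well, r_hi=3.0, r_lo=1.0 / 3.0):
    like_sick = 1.0
    like_well = 1.0
    for r, p, q in zip(results, p_pass_sick, p_pass_well):
        like_sick *= p if r else (1.0 - p)
        like_well *= q if r else (1.0 - q)
    R = like_sick / like_well
    if R > r_hi:
        return "sick", R
    if R < r_lo:
        return "healthy", R
    return "unsure", R

label, R = classify(
    results=[True, True, False],        # True = test passed
    p_pass_sick=[0.71, 0.85, 0.84],
    p_pass_well=[0.50, 0.76, 0.71],
)
```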
  9. Jul 18, 2011 #8
    One point about the scenario is that people taking these tests have a "high suspicion" of a positive -- probably in the neighborhood of 10-20% (though of course it will tend to vary perversely -- the more accurate & accessible the tests become the more likely they are to be applied to lower suspicion populations, skewing the stats that feed the formula, and it will vary by practitioner, etc).

    But the probability of a "positive" is high enough that we're not dealing with a thin "tail" on the probability curve, which I would interpret as (helpfully) lowering the sensitivity of the scheme to the precise population makeup.
  10. Jul 18, 2011 #9
    Yes, I agree. If the ratio of your prior probabilities is not much different from 1, they can practically be ignored. (I mean, if you can get something, I would use it, but...) It's when you're doing random testing for conditions with an incidence of 10^-5 that this becomes important.
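    A toy illustration of why the prior matters at low incidence (all numbers made up):

```python
# At an incidence of 1e-5, even strongly "positive" combined evidence
# leaves the patient almost certainly well, because the prior odds are
# so small. All numbers here are made up for illustration.
incidence = 1e-5
prior_odds = incidence / (1.0 - incidence)     # about 1e-5

combined_lr = 50.0                             # strong evidence for "sick"
posterior_odds = prior_odds * combined_lr      # still only about 5e-4
posterior_prob = posterior_odds / (1.0 + posterior_odds)
```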
  11. Jul 19, 2011 #10
    What about mutually exclusive tests?

    What if I have mutually exclusive tests? E.g., I can test for red, green, blue, and yellow, and only one of those four can be true. How do I factor that into Bayesian inference?

    If I were testing just red or not red and correlating that with my "gold standard" I can see how I'd compute the Bayesian factors, but I suspect it falls apart with mutually exclusive tests.
  12. Jul 19, 2011 #11
    Re: What about mutually exclusive tests?

    Could you clarify what you mean by "mutually exclusive tests"? Do you mean each patient is either red, green, blue, or yellow, and you have some tests that, with less than perfect fidelity, report on that? (In that case nothing has really changed -- this was your original problem, but with two possibilities instead of four.) Or do you mean that the tests themselves are mutually exclusive: e.g., if you test a patient for red you can't test him for blue? Or do you mean that the test results are mutually exclusive: the test will always report red, green, blue, or yellow?

    Anyway, it doesn't noticeably change anything. You calculate the probability of getting the results you got for each of the (possibly four now) possibilities, then multiply those by the prior probabilities for each.

    Bayesian inference is really a very broad framework. It is not restricted to binary tests, and in fact it can even be used when there's a multidimensional continuum of possibilities.
  13. Jul 19, 2011 #12
    I mean each of the four colors has a different correlation to the diagnosis of foot fungus, and if one color is present the others are not. And since they're mutually exclusive they're obviously not "independent".
  14. Jul 19, 2011 #13
    The only place we used independence in the above dialog was in calculating how to combine the results of multiple tests. If I understand right, you're saying that one test (or observation, or whatever) gives one of the four results. This doesn't affect its independence from OTHER tests, so it doesn't affect the ability to multiply results from different tests. We assumed mutually exclusive results in the case of the binary test (result always either positive or negative, never both, never neither), so it's even formally similar. In this case, instead of having four numbers (true/false negative/positive), you'll have eight: the probability that a sick person shows red, green, blue, or yellow, and the probability that a well person shows red, green, blue, or yellow. You get those 8 P's from just the kind of data you already described to us: a collection of cases in which both the color test and the gold standard were applied. If a patient tests red (for instance), you multiply P(sick) by P(red|sick) and you multiply P(well) by P(red|well). It's completely parallel to the binary case.
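    As a sketch with made-up numbers (each row of probabilities sums to 1, since every patient shows exactly one of the four colors):

```python
# Sketch of the eight-number scheme above (all probabilities are
# made-up placeholders). Each row sums to 1, because the four
# results are mutually exclusive and exhaustive.
p_color_given_sick = {"red": 0.50, "green": 0.25, "blue": 0.15, "yellow": 0.10}
p_color_given_well = {"red": 0.10, "green": 0.30, "blue": 0.35, "yellow": 0.25}

def color_factor(color):
    """Likelihood-ratio factor contributed by the color test: multiply
    the running P(sick)/P(well) odds by this, just as in the binary case."""
    return p_color_given_sick[color] / p_color_given_well[color]
```

    Seeing red, for instance, would multiply the odds by 0.50 / 0.10 = 5 in favor of "sick".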

    More broadly, independence of tests is not a condition, anyway, as long as you can calculate a probability of getting the results you got. Correlation just means you won't be able to do a simple multiplication of the P's from the distinct tests.
  15. Jul 20, 2011 #14
    I don't suppose you have any suggestions as to how to "fudge" non-independent measurements, so they don't muck up an estimate too much? The vast majority of discussions I can find on the topic of non-independence just deal with the obvious cases analogous to drawing cards from a deck.
  16. Jul 20, 2011 #15
    I was actually thinking about that. I have some ideas. I don't think it's fundamentally difficult (at least as long as you restrict yourself to pairwise correlations), but it would certainly require data that you said in the OP that you don't have -- test results on an individual basis. That is, you'd want, for each individual, a list of the tests that were done on him/her, and their results. (For this purpose, it might not be necessary that each of these series include the gold standard test, if that would help you get bigger numbers.)
  17. Jul 20, 2011 #16
    Basically, for a pair of tests, I know that if the first test contributes, say, a multiplier of 1.05 to the probability, then the second test will be "influenced" by about 1.025. I suppose I could simply divide the running product by the square root of the first test's multiplier, but I'm not sure how well-behaved that approach would be, and it's hard to apply to other "influence factors".
  18. Jul 20, 2011 #17
    Sorry, I don't get that. How do you know this? How do you know that one test influences another at all? How do you know by what factor? You could only know the answers to these questions on the basis of data that you said in your first post you don't have!
  19. Jul 21, 2011 #18
    Call it a hunch.
  20. Aug 3, 2011 #19
    Just want to say that I've worked this out fairly well, using Bayesian inference and an ad-hoc mechanism to deal with the observations that are strongly connected.

    (Again, mathematical rigor is not so important here so long as the algorithm "behaves" well for most cases.)