Agreement between diagnostic tests controlled for raters

  1. Jun 8, 2015 #1
    Suppose I want to calculate the agreement between two different diagnostic tests for detecting a disease, using two experienced raters. The following assumptions hold:
    1) The rating of the disease is on a categorical scale (grade I, II, and so on), so the task is to categorise the disease into different grades.
    2) Neither of the tests is considered a reference standard.
    3) The raters are independent.

    My goal is to calculate the agreement between the two tests after controlling for the variation between the raters' opinions. In other words, I want to know whether there is a real difference between the two tests in categorising the disease, without the result being polluted by the expected variation in the raters' opinions.
     
  2. Jun 10, 2015 #2
    You can only do this if the two scorers are both working on (subsets of) the two sets of diagnostic tests, or if you have some other data about the scorers. Or, to put it another way: if diagnostic X is being scored by scorer A and diagnostic Y by scorer B, then you need some other data regarding the scorers.
     
  3. Jun 10, 2015 #3
    Both tests X and Y are scored by both raters A and B on a case-by-case basis. In other words, all cases are scanned using X and Y, and then scored by the two raters (independently). But I don't want the variation between A's and B's scores to spoil the true agreement value between the X and Y scans. So what should I do?
    I could use the following method: each rater scores the disease, and the agreement between X and Y is measured for that rater using Cohen's kappa (or an alternative measure for categorical data). Then the average kappa over the two raters is used as the final measure of agreement between the tests. However, this method seems rather weak, because I guess it gives equal weight to both raters.
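    For concreteness, here is a minimal sketch of that averaging idea in Python. The grades (coded 1, 2, 3 for grades I, II, III), the case counts, and the choice of quadratic weights are all made up for illustration; scikit-learn's cohen_kappa_score does the kappa computation.

    Code:
    # Cohen's kappa between tests X and Y, computed separately for each rater, then averaged.
    from sklearn.metrics import cohen_kappa_score

    # grades[rater][test] -> one grade per case, coded 1/2/3 for grade I/II/III (made-up data)
    grades = {
        "A": {"X": [1, 2, 2, 3, 1, 2, 3, 1],
              "Y": [1, 2, 3, 3, 1, 2, 2, 1]},
        "B": {"X": [1, 2, 2, 3, 2, 2, 3, 1],
              "Y": [1, 1, 2, 3, 2, 2, 3, 1]},
    }

    # weights="quadratic" treats the grades as ordered; drop it for purely nominal grades
    kappas = {rater: cohen_kappa_score(g["X"], g["Y"], weights="quadratic")
              for rater, g in grades.items()}

    avg_kappa = sum(kappas.values()) / len(kappas)
    print(kappas, avg_kappa)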
     
    Last edited: Jun 10, 2015
  4. Jun 10, 2015 #4

    WWGD

    Science Advisor
    Gold Member

    Maybe Mann-Whitney, or Kruskal-Wallis?
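    For example, something along these lines (a sketch only, with made-up grades coded 1-3; note that these rank tests ask whether the grade distributions differ, not how well paired scores agree):

    Code:
    # Mann-Whitney U: do tests X and Y produce different grade distributions (raters pooled)?
    # Kruskal-Wallis: same question across all four test/rater combinations.
    from scipy.stats import mannwhitneyu, kruskal

    x_grades = [1, 2, 2, 3, 1, 2, 3, 1, 2, 2]   # test X grades, both raters pooled (made up)
    y_grades = [1, 2, 3, 3, 1, 2, 2, 1, 1, 2]   # test Y grades, both raters pooled (made up)
    u_stat, u_p = mannwhitneyu(x_grades, y_grades, alternative="two-sided")

    # four hypothetical groups: (X, rater A), (X, rater B), (Y, rater A), (Y, rater B)
    xa = [1, 2, 2, 3, 1]; xb = [2, 3, 1, 2, 2]
    ya = [1, 2, 3, 3, 1]; yb = [2, 2, 1, 1, 2]
    h_stat, h_p = kruskal(xa, xb, ya, yb)

    print(u_stat, u_p, h_stat, h_p)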
     
  5. Jun 10, 2015 #5
    So how can we combine the non-parametric analysis using Kruskal-Wallis with Cohen's kappa, which analyses the degree of agreement between raters?
     
  6. Jun 29, 2015 #6
    If I understand this right, it sounds like you are trying to determine the convergent validity of the two tests; that is how I am making sense of "agreement of the tests." Since the data are ordered categories, you could use either a Spearman correlation or Kendall's tau correlation to check the correlation between the tests.
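    For instance (a sketch only; the paired grades below are made up and coded 1-3, and in practice you might compute these per rater):

    Code:
    # Rank correlations between the two tests' grades on the same cases
    from scipy.stats import spearmanr, kendalltau

    x_grades = [1, 2, 2, 3, 1, 2, 3, 1]   # test X grades for 8 cases (made up)
    y_grades = [1, 2, 3, 3, 1, 2, 2, 1]   # test Y grades for the same cases (made up)

    rho, rho_p = spearmanr(x_grades, y_grades)
    tau, tau_p = kendalltau(x_grades, y_grades)
    print(rho, rho_p, tau, tau_p)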

    Also, from a design standpoint, I think you need more pairs of raters, with the order counterbalanced: some should use test 1 first, and some should use test 2 first.
     