DaanV
This question comes from lab-work experiments; I have no background in mathematics.
We have an experimental setup that measures the levels of two targets: a reference (R) and a target of interest (I). We are interested in the ratio I/R. For healthy individuals we know this ratio to be precisely 1; for patients we expect I/R > 1.
To validate our experimental setup we want to estimate the technical variance of our results, so that we can set a threshold above which we can confidently distinguish patients from healthy subjects (e.g. "if I/R ≥ 1.034, the subject has the disease"). To estimate this we will measure healthy material in ~30-fold replication, yielding 30 scores each for I, R, and thus I/R.
In most publications on the subject that I have seen so far, people seem to simply take the I/R scores and calculate the variance and confidence intervals directly from those, roughly as in the sketch below.
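For concreteness, here is a minimal sketch (in Python) of what I understand that direct approach to be. The numbers are made up stand-ins for our 30 replicates, and the one-sided 95% cutoff is just an illustrative choice, not anything from those publications:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical replicate I/R scores standing in for ~30 measurements of healthy material.
ratios = rng.normal(loc=1.0, scale=0.02, size=30)

n = len(ratios)
mean = ratios.mean()
sd = ratios.std(ddof=1)                      # sample SD of the observed ratios

# 95% confidence interval for the mean ratio (t distribution, n - 1 degrees of freedom)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * sd / np.sqrt(n), mean + t_crit * sd / np.sqrt(n))

# One-sided decision threshold: mean + 1.645 * SD covers roughly 95% of healthy
# technical replicates if the ratios are approximately normal.
threshold = mean + stats.norm.ppf(0.95) * sd

print(mean, sd, ci, threshold)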
My question is: Is this a statistically valid approach?
My gut feeling is that, since I and R each have their own independent technical variance (even if their means are not independent), the true variance of the ratio should be larger than what you get by simply taking the variance of the observed I/R scores. Can anyone confirm or refute this feeling?
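To illustrate what I am comparing, here is a small sketch that puts the two views side by side: the variance computed directly from the observed ratios, versus an estimate propagated from the variances of I and R. The propagation formula is the standard first-order (delta-method) approximation for the variance of a ratio, not something from our protocol, and the data are simulated placeholders:

import numpy as np

rng = np.random.default_rng(1)

# Simulated technical replicates (made-up means and SDs; substitute the real 30 measurements).
I = rng.normal(loc=100.0, scale=3.0, size=30)   # target of interest
R = rng.normal(loc=100.0, scale=3.0, size=30)   # reference

ratios = I / R

# Approach 1: variance computed directly from the observed I/R scores.
var_direct = ratios.var(ddof=1)

# Approach 2: first-order (delta-method) propagation from the moments of I and R:
#   Var(I/R) ~= (mI/mR)^2 * ( Var(I)/mI^2 + Var(R)/mR^2 - 2*Cov(I,R)/(mI*mR) )
mI, mR = I.mean(), R.mean()
vI, vR = I.var(ddof=1), R.var(ddof=1)
cov_IR = np.cov(I, R, ddof=1)[0, 1]
var_propagated = (mI / mR) ** 2 * (vI / mI**2 + vR / mR**2 - 2 * cov_IR / (mI * mR))

print(var_direct, var_propagated)   # the two estimates should be of the same order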
Thanks in advance for any help provided!
Sincerest apologies if this topic is in the wrong place.