# Covariance of two related sums

1. Feb 8, 2007

### Pieter2

I have a series of channels that contain the number of radioactive counts within a small energy range. Since the occurrence of radioactive decay is statistical, the error in the number of counts is simply the square root of the number of counts. Each channel contains counts from two different sources. I can determine the sum of all counts originating from source A (sum_A), which also gives me an error on that sum. What I would now like to do is calculate the sum of counts from source B (sum_B) by subtracting sum_A from the sum of the total counts (sum_total): sum_B = sum_total - sum_A. I can now derive the error in sum_B as follows:

error(sum_B)^2 = error(sum_total)^2 * (d(sum_B)/d(sum_total))^2 + error(sum_A)^2 * (d(sum_B)/d(sum_A))^2 + 2 * cov(sum_A, sum_total) * (d(sum_B)/d(sum_total)) * (d(sum_B)/d(sum_A))

which, with d(sum_B)/d(sum_total) = +1 and d(sum_B)/d(sum_A) = -1, reduces to

error(sum_B)^2 = error(sum_total)^2 + error(sum_A)^2 - 2 * cov(sum_A, sum_total)

This would be simple, if only I knew the covariance of sum_A and sum_total. I have no idea how to determine this covariance; does anyone else?

Or in other words: I have a series of numbers x_i that follow the formula x_i = p_i - q_i. How do I determine cov(sum(p), sum(q)), where the sums run over all i from 0 to n?
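The covariance in question can also be explored numerically. Below is a minimal Monte Carlo sketch assuming the channels contain counts from two *independent* Poisson sources (the mean counts and the NumPy usage are my own illustration, not from the thread). Because sum_total = sum_A + sum_B, the covariance cov(sum_A, sum_total) is not zero; for independent sources it equals var(sum_A), and plugging it into the propagation formula above recovers the variance of sum_B:

```python
import numpy as np

# Hypothetical illustration: two independent Poisson sources per spectrum.
rng = np.random.default_rng(0)
n_trials = 200_000

# Assumed true mean total counts for each source (made up for this sketch).
mean_A, mean_B = 400.0, 250.0

sum_A = rng.poisson(mean_A, n_trials).astype(float)
sum_B_true = rng.poisson(mean_B, n_trials).astype(float)
sum_total = sum_A + sum_B_true  # what the detector actually records

# Empirical covariance of sum_A with sum_total.
cov_A_total = np.cov(sum_A, sum_total)[0, 1]

# For independent sources, cov(sum_A, sum_total) = cov(sum_A, sum_A + sum_B)
# = var(sum_A), so these two numbers should agree closely:
print(cov_A_total, np.var(sum_A))

# Propagation formula with d(sum_B)/d(sum_total) = +1, d(sum_B)/d(sum_A) = -1:
var_B_prop = np.var(sum_total) + np.var(sum_A) - 2 * cov_A_total
print(var_B_prop, np.var(sum_B_true))  # should agree closely
```

The simulation only checks the algebra; it does not replace the analytical argument given later in the thread.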

2. Feb 8, 2007

### Pieter2

I have simplified the problem, using the fact that the covariance of a sum equals the sum of the covariances. I now have A = B + C, where the errors in B and C are known. What is cov(B, C)?

Last edited: Feb 8, 2007
3. Feb 8, 2007

### D H

Staff Emeritus
Your simplification is not necessarily valid.

The variance of the sum of two or more random variables equals the sum of their individual variances only when the random variables are uncorrelated (independence is sufficient). In other words, in "using the fact that the covariance of a sum equals the sum of the covariances", you implicitly assumed that cov(B, C) is identically zero.

If B and C are truly independent random processes (and they are, if I read the setup correctly), then cov(B, C) = 0 and your simplification is valid.

Last edited: Feb 8, 2007
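The practical consequence of this answer can be checked with a quick numerical example (the count totals below are made up, not from the thread): if cov(B, C) = 0, then var(A) = var(B) + var(C), so the error on the source-B sum follows by subtraction in quadrature. For Poisson counts the errors are square roots of the counts:

```python
import math

# Hypothetical totals, chosen for round numbers.
sum_total = 10_000
sum_A = 6_400

# Poisson counting errors: square root of the number of counts.
err_total = math.sqrt(sum_total)  # 100.0
err_A = math.sqrt(sum_A)          # 80.0

# With zero covariance between the two sources:
# var(total) = var(A) + var(B)  =>  var(B) = var(total) - var(A)
sum_B = sum_total - sum_A
err_B = math.sqrt(err_total**2 - err_A**2)
print(sum_B, err_B)

# Consistency check: err_B equals sqrt(sum_B), exactly what Poisson
# statistics would predict for source B on its own.
```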