Covariance of two related sums

  • Context: Graduate
  • Thread starter: Pieter2
  • Tags: Covariance, Sums
SUMMARY

The discussion focuses on calculating the covariance of two related sums in the context of radioactive decay counts. The user seeks to determine the covariance between the total counts from two sources, A and B, using the formula for error propagation. Key points include the relationship between the variances of independent random variables and the necessity of knowing the covariance of sum_A and sum_total to accurately compute sum_B's error. The conclusion emphasizes that if A and B are independent, the covariance is zero, validating the simplification used in the calculations.

PREREQUISITES
  • Understanding of statistical error propagation
  • Knowledge of covariance and variance in random variables
  • Familiarity with radioactive decay statistics
  • Basic concepts of independent random processes
NEXT STEPS
  • Research statistical error propagation techniques in radioactive decay analysis
  • Learn about covariance calculations in multivariate statistics
  • Explore the implications of independence in statistical models
  • Study advanced topics in probability theory related to random variables
USEFUL FOR

Researchers in nuclear physics, statisticians working with radioactive decay data, and anyone involved in error analysis of statistical measurements.

Pieter2
I have a series of channels, each containing the number of radioactive counts within a small energy range. Since radioactive decay is a statistical (Poisson) process, the error in the number of counts is simply the square root of the number of counts. Each channel contains counts from two different sources. I can determine the sum of all counts originating from source A (sum_A), which also carries an error. What I would now like to do is calculate the sum of counts from source B (sum_B) by subtracting sum_A from the total counts (sum_total): sum_B = sum_total - sum_A. I can then derive the error in sum_B as follows:

error(sum_B)^2 = error(sum_total)^2 * (d(sum_B)/d(sum_total))^2
               + error(sum_A)^2 * (d(sum_B)/d(sum_A))^2
               + 2 * cov(sum_A, sum_total) * (d(sum_B)/d(sum_total)) * (d(sum_B)/d(sum_A))

Since d(sum_B)/d(sum_total) = 1 and d(sum_B)/d(sum_A) = -1, this reduces to error(sum_B)^2 = error(sum_total)^2 + error(sum_A)^2 - 2 * cov(sum_A, sum_total).

This would be simple if only I knew the covariance of sum_A and sum_total. I have no idea how to determine this covariance. Does anyone know?

Or in other words: I have a series of numbers x_i that follow x_i = p_i - q_i. How do I determine cov(sum(p_i), sum(q_i)), where the sums run over all i between 0 and n?
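A quick Monte Carlo sketch (Python with NumPy; the per-channel rates are made-up values, not from the thread) shows what cov(sum_A, sum_total) looks like when the two sources are independent Poisson processes: since sum_total = sum_A + sum_B, the covariance comes out equal to var(sum_A).

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000   # number of simulated experiments
n_channels = 20      # number of energy channels

# Hypothetical true count rates per channel (assumed values for illustration).
rate_A = rng.uniform(5.0, 20.0, n_channels)
rate_B = rng.uniform(5.0, 20.0, n_channels)

# Each trial draws independent Poisson counts for both sources in every channel.
counts_A = rng.poisson(rate_A, (n_trials, n_channels))
counts_B = rng.poisson(rate_B, (n_trials, n_channels))

sum_A = counts_A.sum(axis=1)
sum_total = sum_A + counts_B.sum(axis=1)

# For independent sources, cov(sum_A, sum_total) = var(sum_A):
cov_A_total = np.cov(sum_A, sum_total)[0, 1]
var_A = sum_A.var(ddof=1)
print(cov_A_total, var_A)  # the two numbers should agree closely
```

With that covariance, the propagation formula above reduces to error(sum_B)^2 = error(sum_total)^2 - error(sum_A)^2: the covariance term cancels the double-counted contribution of source A.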
 
I have simplified the problem, using the fact that the covariance of a sum equals the sum of the covariances. I now have A = B + C, where the errors in B and C are known. What is cov(B, C)?
 
Your simplification is not necessarily valid.

The variance of the sum of two or more random variables equals the sum of their variances only when the random variables are uncorrelated (independence is sufficient). In other words, in "using the fact that the covariance of a sum equals the sum of the covariances", you implicitly assumed that cov(B, C) is identically zero.

If B and C are truly independent random processes (and they are, if I read the setup correctly), then cov(B, C) = 0 and your simplification is valid.
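A small simulation (a Python/NumPy sketch; the Poisson means 400 and 250 are arbitrary assumed values) illustrates both points: the sample covariance of two independent Poisson variables hovers near zero, and the variance of their sum matches the sum of their variances.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000  # number of simulated (B, C) pairs

# Independent Poisson count sums with arbitrary assumed means.
B = rng.poisson(400.0, n)
C = rng.poisson(250.0, n)
A = B + C

cov_BC = np.cov(B, C)[0, 1]           # should hover near zero
var_sum = A.var(ddof=1)               # variance of the sum
var_parts = B.var(ddof=1) + C.var(ddof=1)
print(cov_BC, var_sum, var_parts)     # var_sum and var_parts should agree closely
```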
 
