Standard Deviation with Paired Values

Summary
The discussion focuses on the proper method for calculating fluorescence normalized by absorbance in bacterial cultures grown in multiple wells. The original approach involved computing mean absorbance and background subtraction, but it was recognized that each well's absorbance should be treated individually to account for variability. The proposed method involves calculating fluorescence per well divided by the corresponding absorbance, then propagating the standard deviations accordingly. The user seeks confirmation on the error propagation strategy, particularly regarding the treatment of the regression intercept as a correlated error source across measurements. Overall, the discussion emphasizes the importance of accurate statistical methods in experimental data analysis.
Roo2
Hello,

Upon following advice from BvU I'm starting a new thread to request help on a concrete issue I have. I have a container with 12x8 wells. Each well contains a roughly equal amount of bacteria. The bacteria are fluorescent, and I am trying to quantify their fluorescence as precisely as possible. Each strain of bacteria that I test is treated in replicate - that is, I grow it in 4 - 6 wells, depending on the experiment. To normalize for the fact that the pipetting and growth rates are not identical between wells, I am normalizing the fluorescence of the bacteria by the absorbance through the well, which scales linearly with bacterial concentration and volume of solution.

When I read absorbance, zero bacteria (empty media) produced a non-zero value. Therefore, I made a standard curve comparing absorbance on the multiplate reader against absorbance of the same sample on a cuvette spectrophotometer, in which the path length is constant, measurement variability is minimal, and the instrument is blanked so that empty media reads 0. I prepared a dilution series of a bacterial culture and, for each concentration, measured once on the spectrophotometer and in 12 replicate wells on the plate reader (to capture variability in pipetting and well-to-well variation). The standard curve appears below:

[Standard curve: plate-reader absorbance vs. cuvette-spectrophotometer absorbance]


For tabulating my value of interest (fluorescence normalized by absorbance), I was originally doing the following:

1. Compute mean absorbance for n replicates of a given sample
2. Subtract the intercept of the calibration curve (0.042) as background, and propagate the standard deviation of the mean absorbance with that of the intercept (0.0008).
3. Divide mean fluorescence by the quantity computed in step 2, and propagate the standard deviations of both quantities.

However, I realized that this is probably the wrong way to go about it: the absorbance of a given well provides information only about that well, not about any of the other replicates. It therefore seems that the best way to compute my value of interest is to compute (fluorescence/absorbance) for each of the n wells and report the mean +/- s.d. of those values. Here, "absorbance" is the background-subtracted quantity from step 2 above, now computed from each individual well's absorbance rather than from the mean absorbance.
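As a minimal sketch of this per-well approach (the well readings below are made-up numbers; only the intercept, 0.042, comes from the calibration described above):

```python
import numpy as np

# Hypothetical readings for one strain grown in 4 replicate wells.
fluorescence = np.array([1520.0, 1480.0, 1610.0, 1555.0])
absorbance_raw = np.array([0.512, 0.498, 0.530, 0.521])
intercept = 0.042  # background from the calibration curve

# Background-subtract each well individually, then normalize per well.
absorbance = absorbance_raw - intercept
ratio = fluorescence / absorbance

mean_ratio = ratio.mean()
sd_ratio = ratio.std(ddof=1)  # sample s.d. across the replicate wells
print(f"{mean_ratio:.1f} +/- {sd_ratio:.1f}")
```

Note that the s.d. computed this way reflects the observed scatter between wells, which is distinct from the propagated instrument/intercept uncertainty discussed below.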

My point of confusion is in the error propagation. I am computing a difference between a minuend with no standard deviation (the raw absorbance measurement for the nth well) and a subtrahend with an associated standard deviation (the intercept of my regression). In this case, is the following strategy appropriate for propagating the measurement error?

1. s.d.(absorbance) = s.d.(absorbance_raw) propagated with s.d.(regression intercept)
- result: s.d.(absorbance) = s.d.(intercept) = 0.0008, because the s.d. of a single raw absorbance measurement is 0.

2. fluorescence (f) / absorbance (a): s.d.(f) = 0 because it is a single measurement; s.d.(a) is calculated in step 1.
- result: s.d.(f/a) = (f/a) * sqrt{ (s.d.(f)/f)^2 + (s.d.(a)/a)^2 } = (f/a) * sqrt{ 0 + (s.d.(a)/a)^2 } = (f/a) * (s.d.(a)/a).

3. s.d. of the mean (fluorescence/absorbance): sum the n values from step 2, propagating their errors through the addition, and divide by n; the s.d. of the mean is then the s.d. of the sum divided by n, since n carries no error.

This seems like the correct way to go about it, but statistics are not my forte, so I was hoping someone could take a look and catch my mistakes, if present.
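The three steps above can be sketched numerically as follows (the per-well readings are hypothetical; the intercept and its s.d., 0.042 +/- 0.0008, are the values quoted earlier):

```python
import numpy as np

f = np.array([1520.0, 1480.0, 1610.0, 1555.0])   # fluorescence per well (hypothetical)
a_raw = np.array([0.512, 0.498, 0.530, 0.521])   # raw absorbance per well (hypothetical)
intercept, sd_intercept = 0.042, 0.0008

# Step 1: background subtraction. The single raw reading contributes no s.d.,
# so each well's absorbance inherits the intercept's s.d.
a = a_raw - intercept
sd_a = np.full(a.shape, sd_intercept)

# Step 2: per-well ratio. The fluorescence term drops out of the quadrature sum.
r = f / a
sd_r = r * (sd_a / a)            # (f/a) * sqrt(0 + (sd_a/a)^2)

# Step 3: mean of the n ratios; s.d. of the sum divided by n,
# here combining the per-well terms in quadrature (independence assumed).
mean_r = r.mean()
sd_mean = np.sqrt(np.sum(sd_r**2)) / len(r)
```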
 
Your calibration intercept is the same for all measurements, right? Then it is a 100% correlated error source for all measurements.
Adding its contribution linearly for the final average should work, as its effect is always in the same direction.
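The difference between quadrature and linear addition can be seen in a small sketch (same hypothetical well readings as the question; only the intercept s.d. of 0.0008 is from the calibration). Because every well shares the same intercept error, the linear sum is the appropriate, and larger, combination:

```python
import numpy as np

f = np.array([1520.0, 1480.0, 1610.0, 1555.0])       # hypothetical fluorescence
a = np.array([0.512, 0.498, 0.530, 0.521]) - 0.042   # background-subtracted absorbance
sd_intercept = 0.0008

r = f / a
sd_r = r * sd_intercept / a      # per-well s.d. from the intercept alone

n = len(r)
sd_mean_indep = np.sqrt(np.sum(sd_r**2)) / n   # if the errors were independent
sd_mean_corr = sd_r.sum() / n                  # 100% correlated: add linearly
```

Linear addition always gives the larger (conservative) value, since a shared intercept error shifts every well in the same direction and cannot partially cancel between wells.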
 