Obtaining standard deviation of a linear regression intercept

Summary:
The discussion focuses on calculating the standard deviation of the intercept from a linear regression, used for background correction in an experiment involving quantities A and B. The user asks whether the standard error reported by Excel's LINEST function must be multiplied by the square root of the number of data points, and how to account for the lost degrees of freedom. They are also unsure how to propagate errors when normalizing quantity A by the background-subtracted quantity B, since A and B are related within each sample but not necessarily between samples. A concrete worked example is suggested to make the error-propagation steps easier to follow.
Roo2
Hello,

I have an experiment that I'm trying to conduct where I measure quantity A and normalize by quantity B. I then want to report normalized quantity A with error bars showing the standard deviation. Quantity B is obtained via a standard curve that I generated (8 values of the independent variable, each measured once, with the dependent variable measured 10 times at each value). From this I performed a linear regression and, using Excel's LINEST function, obtained the standard errors of the slope and intercept.

I don't really care about the slope (since I'm normalizing, I don't care what the true value of B is; I just need it to be correct relative to the other samples). All I want to do is perform background correction by subtracting the intercept and carrying out the appropriate error propagation. However, for the error propagation I need the s.d. of the intercept, and LINEST gives me the s.e. For the conversion, do I multiply the s.e. by the square root of the number of data points in the regression? Do I subtract 2 from N to account for the lost degrees of freedom? Does it matter that for each independent variable I have 10 measurements of the dependent variable (i.e. is my N going to be 80)?
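To make the question concrete, here is a minimal sketch in Python (NumPy) with made-up data mimicking the standard-curve setup above. It shows how the standard error of the intercept is computed in an ordinary least-squares fit: the residual variance uses N - 2 degrees of freedom, and the resulting "standard error" is itself the estimated standard deviation of the intercept estimate.

```python
import numpy as np

# Made-up data mimicking the standard curve described above:
# 8 x-values, with the y response measured 10 times at each.
rng = np.random.default_rng(0)
x = np.repeat(np.arange(1.0, 9.0), 10)   # N = 80 points in total
y = 2.0 * x + 5.0 + rng.normal(0.0, 0.3, size=x.size)

N = x.size
slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares

# Residual variance uses N - 2 degrees of freedom, because both the
# slope and the intercept are estimated from the data.
resid = y - (slope * x + intercept)
s2 = np.sum(resid**2) / (N - 2)

# Standard error of the intercept. This is already the estimated
# standard deviation of the intercept estimate; no extra sqrt(N) factor.
se_intercept = np.sqrt(s2 * np.sum(x**2) / (N * np.sum((x - x.mean())**2)))
print(se_intercept)
```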

Thanks for any advice!
 
Check out this thread for expressions. Note that the error on the intercept is usually very strongly correlated to the error on the slope: unless the center of mass of the measurements is on the y axis, "wiggling the slope" changes the intercept.

[edit] note I changed the link to the thorough one that has the references in it.

My impression is that LINEST returns the standard deviation of the intercept estimate (they just call it the standard error, which is the usual term for the standard deviation of an estimator), so no conversion is needed.
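A quick numerical check of that slope-intercept correlation, as a sketch in Python with made-up data (variable names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 8.0, 80)            # data not centered on x = 0
y = 2.0 * x + 5.0 + rng.normal(0.0, 0.3, size=x.size)

# cov=True returns the covariance matrix of the fitted [slope, intercept].
coeffs, cov = np.polyfit(x, y, 1, cov=True)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(corr)   # strongly negative: wiggling the slope moves the intercept

# Centering x on its mean removes the correlation, because the
# "intercept" then sits at the center of mass of the measurements.
x_centered = x - x.mean()
_, cov_c = np.polyfit(x_centered, y, 1, cov=True)
print(cov_c[0, 1] / np.sqrt(cov_c[0, 0] * cov_c[1, 1]))   # ~ 0
```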
 
Thanks! This was very informative.

If I may, I'd like to ask one more question that's related to this topic, though not necessarily to the subject line. Quantity B is related in a linear way to quantity A: the more of quantity B there is, the more of quantity A. When I measure these quantities for a sample treated under a given condition, I combine n measurements of A and n measurements of B, background-subtract the mean of B according to the linear regression (propagating the SD of the intercept along with the SD of B), and then divide mean(A) by mean(B)subtracted, propagating the previously propagated SD of B with the SD of A.
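In code, the procedure I've been using looks roughly like this (a Python sketch with hypothetical numbers; it assumes independent errors and first-order propagation):

```python
import numpy as np

# Hypothetical numbers standing in for the steps described above:
# means and SDs of A and B over n measurements, plus the regression
# intercept b0 and its standard error.
mean_A, sd_A = 10.1, 0.3
mean_B, sd_B = 20.2, 0.5
b0, sd_b0 = 5.0, 0.1

# Step 1: background-subtract mean(B); for a difference of independent
# quantities the absolute errors add in quadrature.
B_sub = mean_B - b0
sd_B_sub = np.hypot(sd_B, sd_b0)

# Step 2: divide mean(A) by the corrected mean(B); for a quotient the
# *relative* errors add in quadrature (first-order propagation).
ratio = mean_A / B_sub
sd_ratio = ratio * np.hypot(sd_A / mean_A, sd_B_sub / B_sub)
print(ratio, sd_ratio)
```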

However, I don't think I'm doing this correctly: A and B are related for each sample but not necessarily between samples, and mean(An)/mean(Bn) != mean(An/Bn). Given this, I'm a bit confused about where to start calculating the deviation. The standard deviation of mean(An/Bn) should capture the variation of both quantity A and quantity B; however, B first needs to be background-subtracted according to the linear regression. How do I propagate the error of the intercept from the regression, given that I apply it to n individual samples which are then pooled?
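For concreteness, one possible alternative I've considered is to subtract the background per sample and treat the intercept error as shared (fully correlated) across samples; again just a sketch with hypothetical numbers, not necessarily the right treatment:

```python
import numpy as np

# Hypothetical paired per-sample measurements of A and B, plus the
# intercept b0 and its standard error from the regression.
A = np.array([10.2, 9.8, 10.5, 10.1, 9.9])
B = np.array([20.4, 19.7, 21.0, 20.2, 19.8])
b0, se_b0 = 5.0, 0.1

# Background-subtract B sample by sample, then form per-sample ratios,
# so the pairing of A and B within each sample is preserved.
r = A / (B - b0)

# Statistical uncertainty of the mean ratio from the sample scatter.
stat = r.std(ddof=1) / np.sqrt(r.size)

# The intercept error is shared by all samples (fully correlated), so
# propagate it once through the mean ratio: dr/db0 = A / (B - b0)**2.
syst = np.mean(A / (B - b0)**2) * se_b0

total = np.hypot(stat, syst)   # combine the two parts in quadrature
print(r.mean(), total)
```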

Thanks again.
 
I hope you can understand that this is very hard to follow for a reader.
I can't make out what mean(B)subtracted could possibly be.
Perhaps better to post a new thread with a concrete case/example so people can follow your steps and give comment.
 