Mathematica: Nonlinear Least Squares

AI Thread Summary
The discussion centers on the interpretation of the p-value when using NonlinearModelFit in Mathematica. The p-value is the probability, under the null hypothesis that the parameter is zero, of observing a t-statistic at least as extreme as the one calculated. The conversation explains that the p-value is derived from the distribution of the test statistic, in close analogy with Z-scores, and emphasizes the relationship between the t-distribution and the p-value. A small p-value suggests that the parameter is significantly different from zero, which matters when judging the validity of the fitted model parameters.
Niles
Hi

I like to fit data in Mathematica using NonlinearModelFit. When I look at the fitted parameters, there is an entry called "P-value". Here is what the documentation says it means: "The p-value is the probability of observing a t-statistic at least as far from 0 as the one obtained." I'm not quite sure what this means. Is it something like chi-square?


Niles.
 
Hey Niles.

Basically you have a distribution for your statistic, which is usually related to an estimator.

For example, when you estimate the population mean from a sample, the estimator (the sample mean) follows a distribution whose centre is the mean of the sample, and whose variance typically shrinks as the sample gets bigger.

Just as we have Z-scores for the normal distribution, we have analogous scores for the t-distribution.

Now, in line with Z-scores, imagine for a second that you want to measure how far a particular Z-statistic is from the origin. In terms of probability we can write this as P(Z > -z and Z < z) for z > 0 (if z is negative, make it positive).

This translates into finding P(-z < Z < z) for the value |z| corresponding to your statistic (we take the absolute value).

Now if |z| is at the centre this gives a probability of 0, but if |z| is far away it gives a probability close to 1, meaning your statistic has landed far out in the tails. The p-value is the complement, P(|Z| > |z|) = 1 - P(-|z| < Z < |z|), so an extreme statistic corresponds to a small p-value.
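As a quick sketch of that relationship (in Python with SciPy rather than Mathematica, using a made-up Z value of 1.96 purely for illustration), the inside probability P(-z < Z < z) and its complement, the two-sided p-value, can be computed like this:

```python
from scipy.stats import norm

z = 1.96  # illustrative Z-statistic, not from any actual fit

# P(-|z| < Z < |z|): how far out the statistic sits
inside = norm.cdf(abs(z)) - norm.cdf(-abs(z))

# Two-sided p-value P(|Z| > |z|) is the complement
p_value = 1.0 - inside
# equivalently: p_value = 2 * norm.sf(abs(z))
```

With z = 1.96 the inside probability is about 0.95, so the p-value comes out near the familiar 0.05 threshold.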

I'd double-check the reference to see whether it's P(T < |t|) as opposed to P(T > |t|), but the idea is the same: the two-sided version corresponds to P(T < -|t| or T > |t|) for a 'normalized' test-statistic distribution like the t-distribution.
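The two-sided t p-value P(T < -|t| or T > |t|) can be sketched in Python with SciPy; the t-statistic and degrees of freedom below are hypothetical illustrative numbers, not values from an actual NonlinearModelFit run:

```python
from scipy.stats import t as t_dist

t_stat = 2.5  # hypothetical t-statistic for a fitted parameter
df = 8        # hypothetical residual degrees of freedom (data points minus parameters)

# Two-sided p-value: probability of a t-statistic at least as far from 0
# as the observed one, i.e. P(T < -|t|) + P(T > |t|)
p_value = 2 * t_dist.sf(abs(t_stat), df)
```

Because the t-distribution is symmetric about 0, doubling the upper-tail probability gives exactly the sum of the two tail probabilities.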
 