Mathematica: Nonlinear Least Squares

SUMMARY

The discussion focuses on the interpretation of the "P-value" reported by the NonlinearModelFit function in Mathematica. The P-value is the probability of observing a t-statistic at least as extreme as the one obtained, and it is used to test whether a fitted parameter differs significantly from zero. The conversation clarifies the relationship between P-values and statistical distributions, particularly the t-distribution, and emphasizes the importance of interpreting these values correctly in statistical analysis.

PREREQUISITES
  • Understanding of statistical concepts such as P-values and t-statistics.
  • Familiarity with Mathematica and its NonlinearModelFit function.
  • Knowledge of statistical distributions, particularly the t-distribution.
  • Basic proficiency in hypothesis testing and significance levels.
NEXT STEPS
  • Research the implications of P-values in statistical hypothesis testing.
  • Learn about the t-distribution and its applications in statistical analysis.
  • Explore advanced features of Mathematica's NonlinearModelFit for model fitting.
  • Study the relationship between sample size and variance in statistical estimations.
USEFUL FOR

Statisticians, data analysts, researchers using Mathematica for nonlinear modeling, and anyone interested in understanding the significance of P-values in statistical tests.

Niles
Hi

I like to fit data in Mathematica using NonlinearModelFit. When I look at the fitted parameters, there is an entry called "P-value". The documentation says: "The p-value is the probability of observing a t-statistic at least as far from 0 as the one obtained." I'm not quite sure what this means. Is it something like chi-square?


Niles.
 
Hey Niles.

Basically you have a distribution for your statistic, which is usually related to an estimator.

For example, when you estimate the population mean from a sample, that estimator follows a distribution centred at the sample mean, and its variance typically shrinks as the sample size grows.
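That shrinking spread can be checked numerically. A minimal Python sketch (not Mathematica code; the function name is my own), drawing many samples from a standard normal population and measuring how the sample mean varies:

```python
# Illustrates the claim above: the sample mean is centred on the
# population mean, and its spread shrinks as the sample grows.
import random
import statistics

random.seed(0)  # reproducible illustration

def sample_mean_spread(n, trials=2000):
    """Standard deviation of the sample mean over many samples of
    size n drawn from a population with mean 0 and std 1."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# Theory: spread of the mean is 1/sqrt(n), so quadrupling n halves it.
print(sample_mean_spread(25))   # ~ 1/5  = 0.2
print(sample_mean_spread(100))  # ~ 1/10 = 0.1
```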

Just as we have Z-scores for the normal distribution, we have analogous standardized statistics for the t-distribution.

Now, in line with Z-scores, imagine for a second that you want to find the probability that a particular Z-statistic lands within a given distance of the origin. In terms of probability we write this as P(Z > -z and Z < z) for z > 0 (if z is negative, make it positive).

This translates into finding P(-z < Z < z) for the value |z| corresponding to your statistic (we take the absolute value).

Now if |z| is at the centre this gives us a probability of 0, but if |z| is far from the centre this gives us a very large probability, signifying that the statistic is far out in the tails of the distribution.

I'd double-check the reference to make sure whether it's P(T < |t|) as opposed to P(T > |t|), but the idea is the same, except that the latter corresponds to P(T < -t OR T > t) for a 'normalized' test-statistic distribution (like the t-distribution).
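The two-sided tail probability is just the complement of the central probability: P(|Z| >= |z|) = 1 - P(-z < Z < z). A standard-normal sketch of how a two-sided p-value is formed (NonlinearModelFit actually uses the t-distribution with the fit's residual degrees of freedom, which a stdlib-only example cannot reproduce):

```python
# Two-sided tail probability P(|Z| >= |z|) for a standard normal Z,
# i.e. the complement of the central probability erf(|z|/sqrt(2)).
import math

def two_sided_p(z):
    """P(Z <= -|z|) + P(Z >= |z|) for Z ~ N(0, 1)."""
    return 1.0 - math.erf(abs(z) / math.sqrt(2))

print(two_sided_p(1.96))  # ~ 0.05: significant at the 5% level
print(two_sided_p(0.5))   # large p: consistent with the parameter being 0
```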
 
