- #1
arivero
Gold Member
I have found a narrative reading of bootstrap theory: http://www.slac.stanford.edu/spires/find/hep/www?key=1264621 . It is dated August 1984.
In particle physics, "bootstrap theory" refers to Geoffrey Chew's S-matrix bootstrap program of the 1960s, in which strongly interacting particles are regarded as composites of one another rather than being built from a fixed set of fundamental constituents ("nuclear democracy"). It should not be confused with the statistical bootstrap, a non-parametric resampling method introduced by Bradley Efron in 1979 for estimating the sampling distribution of a statistic from a single sample, widely used in hypothesis testing and in constructing confidence intervals.
The statistical bootstrap works by repeatedly resampling the original dataset, with replacement, to create new samples of the same size. This yields a large number of resampled datasets, on each of which the statistic of interest is recomputed. The distribution of these recomputed statistics then approximates the sampling distribution of the original statistic.
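The resampling loop described above can be sketched in a few lines of Python; the function name, toy sample, and parameter choices here are illustrative, not from any particular library:

```python
import random
import statistics

def bootstrap_distribution(data, stat=statistics.mean, n_resamples=2000, seed=0):
    """Approximate the sampling distribution of `stat` by drawing
    `n_resamples` samples from `data` with replacement (Efron's bootstrap)."""
    rng = random.Random(seed)
    n = len(data)
    # Each resample has the same size as the original dataset.
    return [stat([rng.choice(data) for _ in range(n)])
            for _ in range(n_resamples)]

# Example: a 95% percentile bootstrap confidence interval for the mean.
sample = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 3.8, 2.2]
boot_means = sorted(bootstrap_distribution(sample))
lo, hi = boot_means[int(0.025 * 2000)], boot_means[int(0.975 * 2000)]
```

Production code would more likely use a vectorized implementation (e.g. `scipy.stats.bootstrap`), but the logic is the same: resample, recompute, and read the interval off the empirical distribution of the recomputed statistic.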
One of the main advantages of the statistical bootstrap is that it makes no assumptions about the underlying distribution of the data. It is also flexible, applying to a wide range of statistics for which no analytic sampling distribution is known, and it can give more accurate intervals than methods that rely on normal approximations.
Its limitations are the mirror image: it may perform poorly with very small samples or heavily skewed data, it can be computationally intensive on large datasets, and the results are themselves random, varying with the particular dataset, the sample size, and the number of resamples drawn.