This question concerns estimating the PDF of noise, based on observations of a data stream consisting of noise with embedded transient signals. I would like to know whether my Proposed Solution below is a correct approach.

Suppose I have a "long" stream of seismic data, consisting of noise plus some occasional pulses of transient signal; we can make assumptions about the time-bandwidth product. I DO assume that the time width of each signal pulse is very small relative to the length of the data stream.

Objective: Estimate the noise in the data.

Question: How much of my data stream can be signal for my estimate of the noise variance to remain correct to within Δσ?

Proposed Solution:
(1) Perform a hypothesis test with H0: pure noise, versus H1: signal plus noise, with unknown variance.
(2) Form a maximum-likelihood ratio.
(3) If the probability of a missed detection (the probability of choosing H0 over H1 when H1 is true) is sufficiently large, then that means my data is mostly noise.
(4) How do I estimate the error, or uncertainty, Δσ?

Thanks! I am sure this is easy for you DSP types?
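To make the question concrete, here is a minimal simulation sketch (my own assumptions, not from the question: Gaussian noise, a 1% signal fraction `pulse_frac`, and a fixed pulse amplitude `pulse_amp`). It compares a naive variance estimate on the contaminated stream against an estimate that first rejects samples flagged as "signal" by a simple energy threshold, which stands in for the likelihood-ratio test in step (2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters -- assumptions for illustration only
sigma_true = 1.0      # true noise standard deviation
n = 100_000           # length of the data stream
pulse_frac = 0.01     # fraction of samples occupied by transient pulses
pulse_amp = 5.0       # pulse amplitude relative to the noise sigma

# Simulate: Gaussian noise plus sparse transient pulses
x = rng.normal(0.0, sigma_true, n)
idx = rng.choice(n, int(pulse_frac * n), replace=False)
x[idx] += pulse_amp

# Naive estimate: biased upward by the signal energy
sigma_naive = x.std()

# Energy-threshold rejection (a crude stand-in for the GLRT):
# set the threshold from a robust scale so the pulses don't inflate it
mad = np.median(np.abs(x - np.median(x)))
sigma_robust = 1.4826 * mad          # MAD -> sigma for Gaussian noise
keep = np.abs(x) < 4.0 * sigma_robust
sigma_clean = x[keep].std()

# Delta-sigma: how far each estimate is from the truth
delta_naive = abs(sigma_naive - sigma_true)
delta_clean = abs(sigma_clean - sigma_true)
print(delta_naive, delta_clean)
```

In this sketch the naive error grows roughly as the signal fraction times the mean signal energy (Var ≈ σ² + ε·E[s²] for signal fraction ε), while the rejection-based estimate stays close to σ, which is the trade-off the Δσ question is asking about.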