I have a research problem that requires me to find the 95th percentile of a sampling distribution like this: http://www.nuclearphynance.com/User Files/9231/1minVARCLOSE.png

My first question is: what would be a mathematically sound way to calculate this, given the several clusters, the gaps between the 10's, and the significant leptokurtosis? More specifically, this comes from quantitative finance, where there is the notion of "value at risk" (VaR): one wants a threshold such that there is only a 5% chance that the one-day loss will exceed it, i.e. the 95th percentile of the loss distribution. The naive approach is to fit a normal distribution to the historical losses and then read off the 95th percentile of that fitted distribution, but I have noticed that this is not meaningful for a leptokurtic distribution.

My second question is less about statistics than about quant finance, but I'll ask it anyway in case someone has an idea how to approach it. Note that my losses are sampled every minute. This means I would end up calculating a 1-minute VaR, which again is not very meaningful, because one usually wants to know the worst-case scenario over, say, one day. A common approach seems to be to scale the losses into expected one-day losses, i.e. multiply them by a factor of 7*60 (7 hours * 60 minutes in a trading day) before taking the VaR. While this works well in practice if you start with daily returns and want to know the 1-month VaR, 1-week VaR, etc., nonlinear behavior is more pronounced at high frequency (1 second, 1 minute, etc.), so I don't believe it is sound to scale the losses by a factor of 7*60. How should I deal with this? Should I perhaps throw out the VaR technique altogether for intraday returns?
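To make the two questions concrete, here is a minimal sketch in Python (NumPy/SciPy). The Student-t losses with 3 degrees of freedom are purely a hypothetical stand-in for a leptokurtic loss sample; the sketch contrasts historical-simulation VaR (the empirical percentile, no distributional assumption) with the naive normal fit, and then shows the two horizon-scaling conventions side by side:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the 1-minute loss sample: Student-t with
# 3 degrees of freedom is strongly leptokurtic, unlike anything a
# fitted normal can capture.
rng = np.random.default_rng(42)
losses = stats.t.rvs(df=3, size=100_000, random_state=rng)

# Historical-simulation VaR: simply the empirical 95th percentile.
# No distributional assumption, so clusters and fat tails are kept.
var_hist = np.percentile(losses, 95)

# Naive normal-fit VaR for comparison: fit N(mu, sigma) to the same
# data and read off its 95th percentile.
mu, sigma = losses.mean(), losses.std(ddof=1)
var_norm = stats.norm.ppf(0.95, loc=mu, scale=sigma)

print(f"empirical  95% VaR: {var_hist:.3f}")
print(f"normal-fit 95% VaR: {var_norm:.3f}")

# Horizon scaling: if 1-minute losses were i.i.d., variances (not
# losses) would add across minutes, giving the square-root-of-time
# rule; multiplying by 7*60 itself scales linearly with the horizon.
minutes_per_day = 7 * 60
var_day_sqrt = var_hist * np.sqrt(minutes_per_day)  # sqrt-of-time rule
var_day_linear = var_hist * minutes_per_day         # linear scaling

print(f"1-day VaR, sqrt-of-time: {var_day_sqrt:.2f}")
print(f"1-day VaR, linear:       {var_day_linear:.2f}")
```

One caveat worth noting: with fat tails the normal fit tends to overstate moderate quantiles like the 95th (the tails inflate the fitted sigma) while understating deep tail quantiles, so the disagreement depends on the confidence level. And the square-root-of-time rule rests on exactly the i.i.d. assumption that intraday microstructure effects break, which is the heart of the second question.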