Aside from the discussion about derivatives, there is an interesting discussion in the document about risk management using VaR (Value at Risk). I am unclear what assumptions are made about the statistics and the sample space when trying to inductively determine the risks.
I do know the statistics of financial events are heavy tailed, but the central limit theorem tells us that even heavy-tailed statistics should eventually average out to Gaussian statistics, provided the process is ergodic, stationary, and bounded (boundedness guarantees the finite variance the theorem needs). However, there is no upper limit on the number of measurements required to reach a given degree of confidence in the estimate of the statistics.
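As a quick sanity check on how slowly that averaging can happen, here is a minimal simulation sketch (my own, not from the document), using a clipped Pareto as a stand-in for heavy-tailed but bounded returns:

```python
import numpy as np

# Hypothetical illustration: heavy-tailed draws, clipped so they are bounded
# (and therefore have the finite variance the CLT requires), then averaged
# over windows of increasing length n.
rng = np.random.default_rng(0)
raw = rng.pareto(a=1.5, size=1_000_000)  # very heavy tail
bounded = np.clip(raw, 0.0, 50.0)        # bound the draws

for n in (1, 10, 100, 1000):
    means = bounded[: (len(bounded) // n) * n].reshape(-1, n).mean(axis=1)
    z = (means - means.mean()) / means.std()
    # Excess kurtosis tends to 0 as the window means approach Gaussian.
    print(f"n={n:5d}  excess kurtosis of window means: {np.mean(z**4) - 3:7.2f}")
```

Even with a million samples the drift toward zero excess kurtosis is visibly slow, which is the "no limit on the number of measurements" point.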
VaRs measure the maximum loss one is likely to see at a given confidence level. This is from the article:
This suggests that most firms are computing their VaR purely from their own historical data. That may be okay, since a VaR typically predicts the expected maximum loss at a given confidence level over only a short period (one to two days typically, but 10 days for capital-requirement risk assessment); even so, it is only valid if the firm's recent statistics are a good measure of its near-term future performance.
This is likely not the case in an irrational market, so sampling should be done over a period long enough to average out short-term market irrationality. Of course, over a long period a firm's stock volatility can change significantly, so one would hope there are metrics of a firm's performance (such as price-to-earnings ratio and leverage) whose statistics correlate better with loss risk over longer intervals.
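For concreteness, here is a minimal sketch of the purely historical (inductive) one-day VaR computation I take these firms to be doing; the return series and its roughly 1% daily volatility are invented for illustration:

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day historical-simulation VaR: the loss at the (1 - confidence)
    quantile of the observed returns, reported as a positive number."""
    return -np.quantile(returns, 1.0 - confidence)

# Hypothetical data: 500 days of fat-tailed fake daily returns.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=500) * 0.01

print(f"95% one-day VaR: {historical_var(returns):.2%} of portfolio value")
```

Note that with 500 days of history the 5% tail quantile rests on roughly 25 observations, which is exactly the small-sample problem above.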
Additionally, in the grand scheme of things, a 95% confidence bound on daily risk may not be that reassuring: after two days our confidence of not seeing an event outside the bound drops to 0.95^2 = 90.25%, and after about 14 days (0.95^14 ≈ 49%) we can't even be 50% confident. In other words, 95% confidence says that on any one day we are probably safe, but in about two weeks it's a crap shoot. Of course, I'm presuming successive days are independent events; if there is a lot of low-frequency (autoregressive) structure, the confidence won't fall off so quickly.
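The erosion is just the compounding of independent days, P(no breach in n days) = 0.95^n; a two-line check of the numbers above:

```python
# How fast does "95% confident each day" erode over n independent days?
for n in (1, 2, 5, 10, 14):
    print(f"{n:2d} days: P(no VaR breach) = {0.95**n:.4f}")
```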
Finally, pay attention to the parts I bolded. Banks are risk-weighting their capital based on purely inductive statistics, but they do back-test their models and add a multiplier to give a safety factor against errors. They use a 99% confidence level and a 10-day holding period for their capital requirements. I would like to dive into the statistics of this further, but perhaps in another thread. Before I do, does anyone besides me think it's odd that financial firms report their VaRs with less robust confidence levels and time frames than are required to meet capital-requirement regulations?
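To put numbers on that mismatch: under the textbook square-root-of-time rule (my assumption here, with i.i.d. normal returns; nothing in the document says firms rescale this way), a reported 1-day 95% VaR maps onto the regulatory 10-day 99% figure like so:

```python
from scipy.stats import norm

def rescale_var(var_1d_95, horizon_days=10, new_conf=0.99):
    """Rescale a 1-day 95% VaR to a longer horizon and higher confidence,
    assuming i.i.d. normal returns (the square-root-of-time rule)."""
    quantile_ratio = norm.ppf(new_conf) / norm.ppf(0.95)
    return var_1d_95 * quantile_ratio * horizon_days ** 0.5

# Hypothetical: a firm reports a 1-day 95% VaR of 2% of portfolio value.
print(f"Implied 10-day 99% VaR: {rescale_var(0.02):.2%}")
```

The i.i.d. normal assumption is exactly what the heavy tails above call into question, which is presumably part of why the regulators bolt on the back-testing multiplier.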