One of my interests is financial risk. I am fascinated by complex problems involving numbers and behaviour. One of the most popular approaches to financial risk management is called Value at Risk or VaR.
VaR and the assumptions underlying it are key elements of today's financial system, and VaR is likely to remain the dominant risk methodology going forward. VaR comes in lots of flavors:
- Parametric
- Historical
- Extreme
- Vanilla
- Flaming
Ok, I am making the last one up; the good people in academia and RiskMetrics haven't invented flaming VaR on purpose. Flaming VaR is created by market reality hitting bad models. There is a tiny secret that people in the multi-billion dollar financial risk business don't want you to know... VaR blows goats. The nice thing about VaR as a risk system is that it only fails 1-5% of the time, which of course is exactly when you need it the most. It is a bit like buying airbags for your car and being told they work in every accident situation except head-on collisions.
Why does VaR fail, and why should we be scared?
VaR fails because it assumes prices are log-normally distributed. Common sense and a brief reading of financial history will tell you this isn't so.
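To see the size of the problem, here is a minimal sketch, with made-up numbers, using Student-t returns as a stand-in for fat-tailed reality. It compares the VaR a normal-distribution model reports against the empirical tail of the very same data:

```python
# Sketch: fat-tailed returns vs. a normal-model VaR. All numbers are
# illustrative and not calibrated to any real market.
import numpy as np
from scipy import stats

# Pretend reality is fat-tailed: Student-t (df=3), scaled to ~1% daily vol.
returns = stats.t(df=3).rvs(size=100_000, random_state=42) * 0.01 / np.sqrt(3.0)

# A parametric model fits a normal distribution to the very same data...
mu, sigma = returns.mean(), returns.std()
for level in (0.99, 0.999):
    var_model = -(mu + sigma * stats.norm.ppf(1 - level))    # model's answer
    var_actual = -np.percentile(returns, (1 - level) * 100)  # what the data says
    print(f"{level:.1%} VaR -- normal model: {var_model:.4f}, "
          f"empirical: {var_actual:.4f}")
# The deeper into the tail you look, the more the normal model
# understates the loss.
```

The two agree reasonably well at the 99% level and then diverge badly at 99.9%: exactly the region where a risk system is supposed to earn its keep.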
VaR scenarios are usually built using limited price data sets. Example: today people are pointing to previous US recessions looking for clues about the future. Most of these commentators are looking back to the 1970s or the 1950s, and only at the US. Small hint: a statistical sample of 3-7 events is pretty much pointless for analysis. An anecdote is not analysis.
VaR expresses risk as a dollar amount at risk over a given period of time, at a given confidence level. Right now some bright risk manager at some bank is dutifully reporting to someone, "sir, we have $5m at risk daily within the 99% confidence interval." His counterpart is then replying, "good Humphrey, because we have $10m of risk padding".
Confidence intervals and statistics make most people roll over and play dead. The certainty with which someone quotes VaR makes the other person think, "hmm, OK, we are safe, this fellow speaks math and must be bright." But let's look at that earlier VaR statement. It basically says that on any given day there is a 1% chance that we are going to lose more than $5m. Hey, there are most likely going to be two or three of those days next year (1% of roughly 252 trading days). Now the belt-and-suspenders genius in the room multiplies the $5m by 2 and says we are OK. You may have noticed some problems already: VaR says nothing about how far beyond $5m the loss can go, and most financial losses play out over many days.
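Here is the back-of-envelope version, assuming ~252 trading days a year and independent days:

```python
# Back-of-envelope on what "99%, one-day, $5m" VaR actually promises.
trading_days = 252
level = 0.99

breach_days = trading_days * (1 - level)
print(f"Expected days per year losing MORE than $5m: {breach_days:.1f}")  # ~2.5

p_any_breach = 1 - level ** trading_days
print(f"P(at least one such day in the year): {p_any_breach:.0%}")        # ~92%

# And note what the statement does NOT say: how far past $5m the loss goes.
# Multiplying by 2 caps nothing.
```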
With VaR approaches, the longer the time horizon, the smaller the effective sample. Huh??? Think about it: if I have a pool of 30 years of data, there are roughly 7,800 days to look at. But if I want to sample one-month periods, I only have 360 non-overlapping samples. For the smart kids in the class, yes, a rolling approach can be used, but rolling windows share the same flaw: I am trying to catch an extreme, externally driven price shock (a macro-economic shock, etc.) inside a 30-year window. Those shocks are few and far between, and this is the problem with historical and even parametric approaches to risk modeling: they treat markets like they live in a bubble. It is a classic system failure. Hint: Warren Buffett thinks VaR and options theory are junk.
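The shrinking-sample arithmetic, as a sketch (the counts assume ~260 trading days a year; the shock count is hypothetical):

```python
# How the effective sample size collapses as the horizon grows.
years = 30
samples = {
    "daily": years * 260,        # ~7,800 observations
    "monthly": years * 12,       # only 360 non-overlapping months
    "yearly": years,             # 30 non-overlapping years
    "macro shocks seen": 4,      # hypothetical count of big external shocks
}
for label, n in samples.items():
    print(f"{label:>18}: {n:>5}")
# Rolling (overlapping) windows raise the count but not the information:
# adjacent windows share nearly all their days, so they are not independent.
```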
Some of the bright lads over at Goldman got this wrong, but they are in good company with the Nobel Prize winners of LTCM among others. They calculated that the move that blew up their portfolio was a 25-standard-deviation event. To normal people, a 25-standard-deviation event works out to odds of roughly 1 in 10^137, give or take, which means the sun will blow up (in about 7.5 billion years) long before the portfolio should ever have been stressed. Now if this market worked in a bubble that might be right, but disasters happen, governments fall, things change, stock markets disappear every 150 years or so. I am sure there is some poor quant looking at his numbers wondering how this could happen. It happened at LTCM and it will happen wherever VaR approaches are used. The problem is the model; read The Black Swan for a good insight into it. Historical price data are useless for external, event-based shocks. I bet there isn't a financial model out there that has the impact of a second (post-1906) San Francisco earthquake modeled into it, even though such an event has been put at roughly a 25% probability by 2025. Historical approaches that use market price data are flawed and doomed.
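For the record, here is what a 25-standard-deviation event means if you take the normal distribution literally (a one-line scipy check):

```python
# Tail probability of a 25-sigma move if returns really were normal.
from scipy import stats

p = stats.norm.sf(25)                        # one-sided tail beyond 25 sigma
print(f"P(move > 25 sigma) ~ {p:.1e}")       # ~3.1e-138
print(f"Expected wait: ~{1 / p:.1e} days")   # ~3e137 days
# When the model says a move should take ~1e137 days to show up once and
# you just saw it several days in a row, the model is what failed.
```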
It is a bit like piloting a ship and predicting the landscape for the next few months by looking out the side porthole. For those of you who like Monte Carlo forecasting, feel free to look out the porthole a few million times; the information won't be any more effective, but you will have greater statistical confidence when you order the ship to go faster.
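A sketch of the porthole problem for the Monte Carlo fans (illustrative numbers only): bootstrap the same calm window as many times as you like, the error bars shrink, but the crash that isn't in the window never appears.

```python
# The porthole, a few million times: bootstrapping one calm window
# tightens the error bars but cannot reveal what the window never saw.
import numpy as np

rng = np.random.default_rng(7)
history = rng.normal(0.0, 0.01, size=2_000)   # ~8 quiet years; no crash inside

# Re-estimate 99% VaR from resamples of the same window.
estimates = [-np.percentile(rng.choice(history, size=history.size), 1)
             for _ in range(10_000)]

print(f"99% VaR: {np.mean(estimates):.4f} +/- {np.std(estimates):.4f}")
# A beautifully tight band around a number that knows nothing about the
# -20% day that was never in the sample. More resamples, same blindness.
```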
Imagine that back in 1905 you and I are planning to build the Titanic. You tell me that after looking over all of the history books you discover that a ship that size has never sunk. You then proceed to tell me that, given this fact, statistically speaking my ship will never sink. I then proceed to build a bigger, and therefore safer, ship.
Once you give a mathematician a framework, good or bad, they will iterate on it (improve it) forever. Nothing is more fun than honing a fine model to a pointless point. Rarely do people challenge the framework itself; I mean, look at all the data and the dancing they can make it do. I would advise using common sense, or Gott's theorem, to pull them back to reality. Expand the reference frame. Hint: there is big money to be made betting against the VaR heads. Go long gamma in the tails and spread the bets.
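For the curious, Gott's argument fits in a few lines; a sketch, using the market's ~150-year track record mentioned above as the hypothetical input:

```python
# Gott's delta-t argument: observed at a random moment of its life, a thing's
# remaining lifetime lies, with confidence c, between t_past*(1-c)/(1+c)
# and t_past*(1+c)/(1-c). A crude prior for reality-checking models.
def gott_interval(t_past: float, c: float = 0.95) -> tuple[float, float]:
    return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

low, high = gott_interval(150.0)  # hypothetical: a ~150-year-old stock market
print(f"95% bounds on remaining life: {low:.1f} to {high:.0f} years")  # ~3.8 to 5,850
```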
Moody's and S&P, for example, rate securities called CDO-squareds as super AAA. These things were, and for some fools still are, considered safer than the US government. The flaw in this, of course, is that things that are themselves components of a larger system at risk can't overcome the failure of that larger system. Logically, if the US government were to have an issue, US-based securities would most likely have even greater issues. Common sense isn't a requirement for a math or economics PhD. I think there should be a common-sense variable, CS, applied to each risk equation; CS would be positive or negative, indicating whether someone with common sense had agreed that the results, not just the approach, made sense.
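The nested-system point can be made with toy probabilities (entirely made-up numbers; the only thing that matters is the inequality):

```python
# If failure of the larger system implies failure of the component, the
# component's failure probability can never drop below the system's.
p_system = 0.001          # made-up: the larger system (say, the US govt) fails
p_idiosyncratic = 0.0005  # made-up: the component's own independent failure risk

# The component fails if either the system takes it down or it fails alone.
p_component = 1 - (1 - p_system) * (1 - p_idiosyncratic)

print(f"P(component fails) = {p_component:.4%}")  # always >= P(system fails)
assert p_component >= p_system
```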
The reason to be nervous about this whole thing is that having a bad risk system is worse than having no risk system. Financial risk is a mix of human behaviour in the pursuit of return. If we believe we are risk-secure rather than exposed, we exhibit different behaviour. The current horrible risk systems (VaR, Basel II, and others) are far more dangerous than earlier naive risk systems, which were usually based on rules of thumb and common sense. It is safer to have everyone a little bit worried and driving more cautiously than to tell everyone they are driving Hummers and have Michael Schumacher's skills when they don't. Basel II is going to cause a financial disaster.
Prepare for more fallout in the coming 6-9 months as one or two more international banks fail and a few hundred mid-sized US banks suffer the same fate. The ratings system, which used a mean-variance approach to risk and past prices, delivered this one.
It turns out we are all collectively safer when we all feel a little more at risk. Quite a few hedge funds in the statistical arbitrage, debt arbitrage, and mortgage-backed space will also be closing in the coming weeks and months as margin calls and collateral requirements get upped. The global macro guys and the 130/30 equity guys should be OK, as they were seeking real alpha with less leverage.
The sad thing is that the global financial system is implementing Basel II. This means that after we start making it out of this issue, most likely in late '08 to early '09, we can get ready for the big one, which will probably be a 30-standard-deviation event starting in 2011, just 1,000 days away.