It seems impossible to read the news today without seeing a story about the impact of a financial services institution's failure to control an unforeseen risk or event. Traditionally, the practice of risk management has been built around the expected result, not the unexpected, or rare, occurrence. The historical examples of December 14, 1914 (an 18-sigma event) and October 19, 1987 (a 20-sigma event), coupled with the more recent financial crisis of 2008, which included 18 events beyond 5 sigma, must make us reexamine how we manage risk in our financial services institutions.
If your risk models assume a normal distribution, you would expect a 5-sigma event to occur roughly once every 7,000 years; historically, we have seen a 5-sigma event nearly every 2.5 years. Many risk models disregard rare events such as these as statistically insignificant, but these events, resident in the tails of the distributions used in most risk models, are where the majority of your organization's exposure lurks. The measurement error from these events also compounds very quickly when one relies on risk models predicated on a normal distribution.
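The "once every 7,000 years" figure can be checked directly from the normal distribution. The short sketch below computes the two-tailed probability of a move beyond k standard deviations and converts it into an expected recurrence interval, assuming 252 trading days per year (an assumption of this example, not stated in the text):

```python
from math import erfc, sqrt

def tail_prob(k: float) -> float:
    """Two-tailed probability of a daily move beyond k standard
    deviations under a normal distribution: P(|Z| > k) = erfc(k/sqrt(2))."""
    return erfc(k / sqrt(2))

TRADING_DAYS = 252  # assumed trading days per year

for k in (3, 5, 18, 20):
    p = tail_prob(k)
    years = 1 / (p * TRADING_DAYS)
    print(f"{k}-sigma: p = {p:.3e}, expected once every {years:,.0f} years")
```

For k = 5 this yields an interval of roughly 7,000 years, matching the figure above; for the 18- and 20-sigma events cited, the implied intervals exceed the age of the universe by dozens of orders of magnitude, which is the clearest evidence that the normal assumption is wrong.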
This drama in the market continues to give regulators more support to augment and develop enhanced regulations aimed at controlling the industry's next move. But are the increasing regulatory requirements helping to predict the next catastrophic event? Are organizations able to mitigate risk to avoid financial and reputational loss? Or are they another example of lemmings following their peers off a cliff?
Adding new regulations that require additional models in an attempt to predict risk across asset categories may only result in a false sense of security.
In order to effectively manage risk, today’s risk manager must strike a balance between quantitative models and the qualitative understanding of the makeup of their business environment.
Organizational leadership is responsible for understanding and providing guidance to manage risk as well as to predict risk. It is the obligation of the Risk Committee to understand and question the key assumptions that are being used to drive decision making, from managing risks to forecasting revenue to establishing capital reserves. At the heart of the multitude of risk reports attempting to analyze disparate data are models. These models are built to calculate the likelihood and impact of an event, or events, on the organization's health. Risk models today have become commonplace. However, it has been found that a larger volume of risk models alone will not improve risk management; it is a risk model's ability to both quantify and qualify risk that enables the organization to better forecast where and how an unforeseen event could occur.
To demonstrate the naivety of using past data to predict the future, Nassim Taleb offered this analogy: "I have just completed a thorough statistical examination of the life of President Bush. For 55 years, close to 16,000 observations, he did not die once. I can hence pronounce him as immortal, with a high degree of statistical significance." (Taleb 2001)
By challenging risk assumptions and evaluating them against the specific nature of the economic environment faced at present, an organization can best determine how much risk is present in the extreme ends (tails) of its models. Common pitfalls of quantitative approaches used to predict the future and quantify risk are as follows.
Many organizations have constructed models built around managing during "normal" markets.
By building models that assume a normal market, it could be said that an organization is not managing risk at all. If your organization is facing fat tails (unexpected results), as shown through the historical examples above, the power of those tails can have immense impact.
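One way to see the power of fat tails is to compare the normal distribution against a fat-tailed alternative. The sketch below uses a Student-t distribution with 3 degrees of freedom, a common illustrative stand-in for fat-tailed asset returns (the choice of distribution and degrees of freedom are assumptions of this example, not drawn from the text), using its closed-form tail probability:

```python
from math import atan, erfc, pi, sqrt

def normal_tail(k: float) -> float:
    """One-sided tail probability P(Z > k) for a standard normal."""
    return 0.5 * erfc(k / sqrt(2))

def t3_tail(k: float) -> float:
    """One-sided tail probability P(T > k) for a Student-t distribution
    with 3 degrees of freedom (closed form exists for nu = 3),
    an illustrative fat-tailed alternative to the normal."""
    x = k / sqrt(3)
    return 0.5 - (x / (1 + x * x) + atan(x)) / pi

for k in (3, 5):
    n, t = normal_tail(k), t3_tail(k)
    print(f"{k}-sigma: normal {n:.2e}, fat-tailed {t:.2e}, ratio {t / n:,.0f}x")
```

At 5 sigma the fat-tailed distribution assigns a probability more than four orders of magnitude higher than the normal: the same "impossible" event becomes merely uncommon once the tail assumption changes.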
Using the past to predict the future is one common pitfall.
Often the historical sample chosen does not reflect the future, or reflects only one market condition. Model governance is one of the tools that can help organizations better align their risk profiles with their risk appetites. The key to successful risk management is accurately diagnosing how much risk lurks in the extreme conditions of your models. The risk that the new regulatory environment is trying to address does not occur during periods of stable business environments. In the past, most models have been built upon the assumption of a normal distribution of returns, or stable environments. However, it has been shown that few if any markets of financial assets adhere to a normal distribution, or even any type of stable distribution.
It is important to note that risk is not variance; variance cannot substitute for understanding the business drivers and economic environment in which an organization operates.
Put another way, risk is variability combined with the specific characteristics and nature of the asset class being evaluated. It has been shown that the impacts of the 2008 economic crisis were exponentially increased by over-reliance on VaR (Value at Risk) models. These models do not adequately predict the multiplying, or leverage, effect of situations that fall outside the bounds of a normal environment. This was especially true given that the historical sample sets used at the time of the crash were typically based upon a five-year period. The 2003-2007 sample set can be characterized as a sustained bull market in almost every asset class. VaR models, while a useful tool during stable and growing economic times, are a poor predictor of outcomes during turbulent times. As such, financial institutions were lulled into a false sense of security and over-levered their organizations, leading to the largest financial crisis of our time.
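The calibration problem described above is easy to reproduce. The sketch below fits a parametric (normal) 99% one-day VaR to a hypothetical calm bull-market sample of roughly five years of daily returns, then asks how a 2008-style crash day looks under that calibration. All numbers are illustrative assumptions of this example, not market data:

```python
import random
from statistics import NormalDist, mean, pstdev

random.seed(7)

# Hypothetical calm bull-market sample: ~5 years (1250 trading days)
# of daily returns with a small upward drift and ~1% daily volatility.
calm = [random.gauss(0.0005, 0.01) for _ in range(1250)]

mu, sigma = mean(calm), pstdev(calm)
z99 = NormalDist().inv_cdf(0.01)        # ~ -2.33 for the 1% quantile
parametric_var = -(mu + z99 * sigma)    # normal-assumption 99% one-day VaR

crash_day = -0.09                       # a 2008-style daily loss
sigmas = (crash_day - mu) / sigma       # how extreme the crash looks

print(f"99% one-day VaR from calm sample: {parametric_var:.2%}")
print(f"A -9% crash day = {abs(sigmas):.1f} sigma under that model")
```

The model, calibrated on benign data, reports a VaR of roughly 2-3% and treats the crash day as an essentially impossible ~9-sigma event, which is exactly the false sense of security described above.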
As a result of an over-reliance on models to manage risk, regulators are now mandating the use of more models to evaluate risk. The objective of distilling all risk faced by organizations into a series of variables that provide a quantifiable indication of risk exposure may not be entirely feasible.
As with most reactions to human crises, we have gone to an extreme in the hope of gaining a sense of control over our future. The reality is that an over-reliance on quantifying risk may prevent today's risk managers and risk committees from adequately fulfilling their responsibilities. Markets have many layers of complexity that cannot be easily, if at all, distilled into a single multivariable equation.
Today's risk manager must truly understand the qualitative risks inherent in their asset portfolio in order to apply mathematical analysis as an additional input into the evaluation of their organization's risk profile.
Examples of qualifying variables include:
- The broader economic environment.
- The risk, or tail, profile of the specific asset category: is variance good, bad, or neutral for that asset category?
- Asset specific information, e.g. liquidation value, leverage, etc.
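One way to make sure these qualitative variables actually reach the final number, rather than living in a footnote, is to carry them alongside the model output. The sketch below is purely illustrative: the asset names, the overlay factors, and the idea of applying them as multipliers to a model VaR are all assumptions of this example, not a method from the text or any standard:

```python
from dataclasses import dataclass

@dataclass
class AssetRisk:
    name: str
    model_var: float          # quantitative 99% VaR from the model
    liquidity_haircut: float  # qualitative: 1.0 = liquid, >1 = illiquid
    leverage: float           # embedded or structural leverage multiplier
    regime_factor: float      # judgment call on the current economic regime

    def adjusted_var(self) -> float:
        """Model VaR scaled by the qualitative overlays -- one illustrative
        way to force the qualitative review into the reported figure."""
        return (self.model_var * self.liquidity_haircut
                * self.leverage * self.regime_factor)

# Hypothetical book: the model alone ranks these two assets similarly.
book = [
    AssetRisk("large-cap equity", 0.023, 1.0, 1.0, 1.2),
    AssetRisk("structured credit", 0.018, 1.8, 2.5, 1.5),
]
for a in book:
    print(f"{a.name}: model {a.model_var:.1%} -> adjusted {a.adjusted_var():.1%}")
```

The point of the sketch is the reversal: the asset with the lower model VaR ends up several times riskier once liquidity, leverage, and the economic regime are taken into account.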
The objective of the Basel Committee on Banking Supervision (BCBS) framework is to enable improved data interoperability and comparability, which will over the long term improve our understanding of risk across the industry. While it is important to note that the improved focus on data infrastructure will not by itself improve risk management, the principles will likely lead to higher-quality output from risk models and improved alignment of risk exposure with your organization's risk appetite. The benefits of effectively adopting these principles go far beyond regulatory compliance and go directly to the bottom line.
A sound approach to risk management must avoid the following:
- Equating variance with risk.
- Over-reliance on quantitative measures without qualitative insights.
- Assuming a normal distribution of returns.
- Predicting the future based upon past results.