Impact of Volatility Clustering on Value at Risk

We have seen that volatility clustering is a common phenomenon in financial data. It has serious implications for how risk managers must calculate VaR if their portfolios are to be adequately covered.

Financial markets are characterized by unexpected information and shocks. When a market shock occurs, the volatility of asset prices increases significantly. The phenomenon of volatility clustering suggests that the impact of this shock will persist for some time. Risk managers should use this knowledge to adjust their VaR estimates upward, reduce risk, hold more capital, or take other measures to ensure that their institution can bear the additional risk during the high-volatility period.
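To see how directly a higher volatility estimate feeds into VaR, consider a minimal parametric sketch in Python (an illustration under a normal-returns assumption; the portfolio value and the before/after volatility figures are hypothetical):

```python
from scipy.stats import norm

# Parametric one-day VaR under a normal-returns assumption:
# VaR = z * sigma * portfolio value.
portfolio_value = 10_000_000   # hypothetical $10m portfolio
z_99 = norm.ppf(0.99)          # ~2.33 for 99% confidence

for sigma in (0.01, 0.02):     # daily volatility before and after a shock
    var_99 = z_99 * sigma * portfolio_value
    print(f"sigma = {sigma:.0%}: 99% one-day VaR = ${var_99:,.0f}")
```

Doubling the volatility estimate doubles the VaR, which is why a model that is slow to register the shock will understate risk for an extended period.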

We learned that one way to do this is to use EWMA volatility instead of simple rolling volatility. EWMA volatility is conditional, meaning each forecast depends most heavily on the most recent returns, so it reacts faster to a market shock. Unconditional methods, such as an equally weighted rolling standard deviation, generate forecasts under the assumption that returns are independently and identically distributed, and therefore respond only slowly when the volatility regime changes.
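The following sketch contrasts the two estimators on simulated data (the simulated returns, the 250-day window, and the RiskMetrics decay factor λ = 0.94 are all assumptions made for illustration):

```python
import numpy as np
import pandas as pd

# Simulate daily returns whose true volatility jumps from 1% to 3% on day 500.
rng = np.random.default_rng(42)
true_vol = np.where(np.arange(1000) < 500, 0.01, 0.03)
returns = pd.Series(rng.normal(0.0, true_vol))

# Unconditional estimate: equally weighted 250-day rolling standard deviation.
rolling_vol = returns.rolling(window=250).std()

# Conditional estimate: RiskMetrics EWMA,
# sigma^2_{t+1} = lam * sigma^2_t + (1 - lam) * r_t^2.
lam = 0.94
ewma_vol = np.sqrt(returns.pow(2).ewm(alpha=1 - lam, adjust=False).mean())

# Shortly after the shock the EWMA estimate is already near the new 3% level,
# while the rolling estimate is still dominated by the calm pre-shock days.
print(rolling_vol.iloc[520], ewma_vol.iloc[520])
```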

It is important for risk managers to incorporate volatility clustering in their models to avoid taking on more risk than they intend.

This will, however, not entirely eliminate VaR breaches. When the model is backtested, you will still find some violations where actual losses exceed the VaR level. What matters is the timing and size of these exceptions. Risk managers need to watch out for clusters of large losses; scattered small exceptions that show no dependence are relatively less important.
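Here is a minimal sketch of such a backtest, assuming aligned NumPy arrays of daily returns and one-day VaR forecasts (the function name and the five-day clustering window are illustrative choices, not a regulatory standard):

```python
import numpy as np

def backtest_var(returns, var_forecasts):
    """Count VaR exceptions and flag possible clustering.

    `returns` and `var_forecasts` are aligned daily arrays;
    VaR is expressed as a positive loss threshold.
    """
    losses = -np.asarray(returns)
    breaches = losses > np.asarray(var_forecasts)

    n_breaches = int(breaches.sum())
    breach_days = np.flatnonzero(breaches)

    # Crude clustering check: breaches within 5 trading days of the last one.
    clustered = int(np.sum(np.diff(breach_days) <= 5)) if n_breaches > 1 else 0
    return n_breaches, clustered
```

A high count of clustered breaches suggests the model is not adapting to volatility regimes, even if the total number of exceptions looks acceptable.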

The Basel regulations provide guidelines on the minimum period of historical data to be used:

The choice of historical observation period (sample period) for calculating value-at-risk will be constrained to a minimum length of one year. For banks that use a weighting scheme or other methods for the historical observation period, the “effective” observation period must be at least one year (that is, the weighted average time lag of the individual observations cannot be less than 6 months).

The objective of this clause was to set a minimum observation period; unfortunately, it also ends up constraining how banks forecast volatility in their models, effectively ruling out conditional volatility models. For example, Jorion (2002a) found the EWMA to have a weighted average time lag of only 16.7 days, compared with the minimum six-month lag required by the standard. He found that the requirement of a minimum "effective" observation period destroyed the advantage of a weighting scheme that responds to recent volatility changes, such as the EWMA. According to Jorion, the Basel II rule that the "effective" observation period be at least one year constrained banks to use slow-moving models in order to generate smoother capital requirements.
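The 16.7-day figure can be reproduced from the EWMA weights themselves: the weight on the observation lagged i days is (1 - λ)λ^(i-1), so the weighted average lag is 1/(1 - λ), which is about 16.7 trading days for the standard RiskMetrics decay factor λ = 0.94. A quick numerical check:

```python
import numpy as np

# Weighted average lag of the EWMA weights (1 - lam) * lam**(i - 1), i = 1, 2, ...
# Closed form: 1 / (1 - lam).
lam = 0.94
i = np.arange(1, 10_000)
weights = (1 - lam) * lam ** (i - 1)
print(weights @ i)    # ~16.67 trading days (numerical sum)
print(1 / (1 - lam))  # 16.67 (closed form)
```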
