In this paper we compare different volatility models using daily AEX option data over the period 2000-2013. The benchmark against which we compare the models is implied volatility. Implied volatility is derived from information available in the market and should therefore, in theory, incorporate all available information.
We compare implied volatility with historical volatility and with the AR, ARMA, ARCH, GARCH(1,1), EWMA, EGARCH, and GJR-GARCH models. To evaluate the models we use the RMSE, HMSE, and MAD loss functions. To test whether the loss functions differ significantly from one another, we use the Diebold-Mariano statistic.
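For concreteness, the evaluation machinery can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the HMSE variant shown (relative errors squared) and the Diebold-Mariano statistic without autocorrelation corrections are assumptions on my part, since the text does not spell out the exact definitions used.

```python
import math

def rmse(forecasts, realized):
    """Root mean squared error between forecast and realized variance."""
    n = len(forecasts)
    return math.sqrt(sum((f - r) ** 2 for f, r in zip(forecasts, realized)) / n)

def hmse(forecasts, realized):
    """Heteroskedasticity-adjusted MSE: squared relative errors.
    One common variant; the paper may use a different normalization."""
    return sum((1 - f / r) ** 2 for f, r in zip(forecasts, realized)) / len(forecasts)

def mad(forecasts, realized):
    """Mean absolute deviation of forecasts from realized values."""
    return sum(abs(f - r) for f, r in zip(forecasts, realized)) / len(forecasts)

def diebold_mariano(losses1, losses2):
    """Simple Diebold-Mariano statistic for equal predictive accuracy.
    Ignores autocorrelation in the loss differential (a simplification);
    inputs are the per-period losses of the two competing models."""
    d = [a - b for a, b in zip(losses1, losses2)]
    n = len(d)
    d_bar = sum(d) / n
    var_d = sum((x - d_bar) ** 2 for x in d) / (n - 1)
    return d_bar / math.sqrt(var_d / n)
```

Under the null of equal predictive accuracy, the statistic is approximately standard normal, so large absolute values indicate that one model's losses are significantly smaller.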
It appears that.....
This paper gives some new insights into the broad literature in this field because of the time frame we consider. The financial crisis of 2007 provides a natural experiment for examining how the volatility models behave in the pre-crisis, crisis, and post-crisis periods. What remains unresolved
Introduction
Time variation in volatility is of crucial importance in financial time series for several reasons. Pricing derivatives, calculating risk measures, and hedging against portfolio risk all require the conditional variance, which captures the market's information. This has led to enormous interest in the topic among researchers, and many models have been developed as a result. Nevertheless, since different volatility models can yield different estimates, it is difficult to single out one model as the best.
Not only is the selection of the right model challenging; the fact that the conditional variance cannot actually be observed and must be estimated with a volatility model warrants a discussion of its own. The two most common ways to estimate volatility are based on historical volatility and on implied volatility. The historical approach uses past observations to forecast future volatility, whereas implied volatility is derived from an option pricing model. The important difference is that historical volatility is backward looking while implied volatility is forward looking.
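As a minimal sketch of the backward-looking approach, historical volatility can be computed as the annualized standard deviation of log returns over a rolling window. The window length and the 252-trading-day annualization factor below are illustrative assumptions, not choices taken from the paper:

```python
import math

def historical_volatility(prices, window=30, trading_days=252):
    """Annualized historical volatility from the most recent `window`
    daily log returns. `prices` is a sequence of daily closing prices;
    window=30 and trading_days=252 are illustrative defaults."""
    log_returns = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    recent = log_returns[-window:]
    mean = sum(recent) / len(recent)
    # Sample variance of daily returns, then annualize before taking the root.
    daily_var = sum((r - mean) ** 2 for r in recent) / (len(recent) - 1)
    return math.sqrt(daily_var * trading_days)
```

Implied volatility, by contrast, is obtained by inverting an option pricing model such as Black-Scholes-Merton: the volatility input is solved for numerically so that the model price matches the observed market price.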
Canina and Figlewski (1993) note that because implied volatility contains information about market expectations, it has been widely accepted as a good forecast of future volatility. However, they find that implied volatility is not an unbiased predictor of future volatility. One explanation for this apparent contradiction between theory and empirical findings is documented by Lehar et al. (2001), who point to the volatility smile, which exposes the limitations of the Black-Scholes-Merton model and thus shows that the assumptions behind implied volatility do not hold in practice. On the other hand, Giot (2003) states that implied volatility remains an important measure of market volatility, since the volatility index is based on it; for that reason it is widely used not only by practitioners but by academics as well.

New methods for volatility modeling have been introduced in the last few decades. One of the most important is the Autoregressive Conditional Heteroskedasticity (ARCH) model, "which captures time-varying volatility and volatility clustering" (Liu et al., 2009). Another is the Generalized ARCH (GARCH) model, which captures important features of returns data and is flexible enough to accommodate specific aspects of individual assets (Christoffersen, 2003). By contrast, the stochastic volatility model assumes that "volatility is changing randomly according to some stochastic differential equation or some discrete random process" (Anderson, 2003); Bluhm and Yun (2001) note that it gives a more realistic description of financial time series than the ARCH-type models.
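The conditional-variance recursions behind two of the models compared in this paper, GARCH(1,1) and EWMA, can be sketched as follows. The parameter values and the initialization choices are illustrative assumptions, not estimates from the paper:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variances under GARCH(1,1):
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Initialized at the sample variance of the returns (an illustrative choice)."""
    n = len(returns)
    mean = sum(returns) / n
    sigma2 = sum((r - mean) ** 2 for r in returns) / n
    variances = [sigma2]
    for r in returns[:-1]:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
        variances.append(sigma2)
    return variances

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA recursion:
        sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2.
    lam=0.94 is the conventional RiskMetrics value for daily data."""
    sigma2 = returns[0] ** 2  # illustrative initialization
    variances = [sigma2]
    for r in returns[:-1]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
        variances.append(sigma2)
    return variances
```

EWMA can be read as a restricted GARCH(1,1) with omega = 0 and alpha + beta = 1, which is why the two models often produce similar short-horizon forecasts while GARCH, being mean-reverting, behaves differently at longer horizons.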
Extensions of the traditional GARCH