The Predictive Performance of Extreme Value Analysis-Based Models in Forecasting the Volatility of Cryptocurrencies


1. Introduction

Cryptocurrencies have attracted a lot of attention since Bitcoin was first proposed by Nakamoto [1] . They are highly volatile and show extreme tail movements compared to traditional financial markets and fiat currencies, providing a new investment asset category to investors, practitioners, and policymakers in financial markets and portfolio management. Bitcoin is the most traded and still the largest cryptocurrency, representing about 62.24% of the total estimated cryptocurrency capitalization as of March 2021 (https://coinmarketcap.com) [2] . As of March 28, 2021, the cryptocurrency market capitalization was valued at about US$1517 billion. Remarkable growth has also been witnessed in other important digital currencies such as Ethereum, Ripple, and Litecoin, which are among the top ten cryptocurrencies by market capitalization. Despite being largely unregulated by government institutions, cryptocurrency prices and exchanges exhibit most stylized facts from established exchanges [3] . Nevertheless, these cryptocurrencies are characterized by periods of high volatility, large shocks and extreme price jumps.

Accurate forecasts of volatility, and hence of Value-at-Risk, are important to investors, practitioners, and policymakers for making informed decisions and for portfolio risk management. It is also important to utilize a model capable of capturing the stylized characteristics and volatility dynamics of cryptocurrencies by combining conventional and novel techniques [4] . The Generalized Autoregressive Conditional Heteroscedastic (GARCH) model and its variants are popular volatility models for traditional financial time series as well as for cryptocurrencies. The popularity of GARCH-type models for describing the dynamics of cryptocurrency volatility is due to their deterministic dependence of the conditional variance on past observations.

Several studies have employed variants of GARCH-type models for several cryptocurrencies to select the best volatility model or a superior set of models. Fakhfekh and Jeribi [5] applied various GARCH-type models with different error distributions to sixteen of the most popular cryptocurrencies and found that the TGARCH model with double exponential distribution provided the best fit. Ngunyi *et al.* [6] applied several GARCH-type models with different error distributions to eight of the most popular cryptocurrencies and found that asymmetric GARCH models with long memory and heavy-tailed innovations provided the best fit for all cryptocurrencies. Chu *et al.* [7] , using GARCH models with different error distributions, concluded that the IGARCH (1, 1) model estimates Bitcoin volatility better than the competing models. The selection of an appropriate distribution for cryptocurrency returns is therefore also a major challenge in cryptocurrency risk management.

Alternatively, extreme value theory could be useful to better understand the characteristics of the extreme tail distribution of cryptocurrencies. However, only a few attempts have been made so far to examine extreme price movements of different cryptocurrencies. Borri [8] modelled the conditional tail-risk in four major cryptocurrencies and showed that these cryptocurrencies are highly exposed to tail-risk within the crypto market. Gangwal and Longin [9] presented an extreme value analysis of Bitcoin returns and showed that the returns follow a Fréchet distribution; Begušić *et al.* [10] also provided evidence that extreme price movements of Bitcoin are considerably more frequent, implying that Bitcoin exhibits heavier tails than stock returns. Zhang *et al.* [11] utilized extreme value analysis to investigate the tail risk behaviour of the high-frequency (hourly) log-returns of the four most popular cryptocurrencies, estimating Value at Risk and expected shortfall with varying thresholds. Their results showed that Ripple was the riskiest cryptocurrency, exhibiting the largest potential gain or loss for both positive and negative (hourly) log-returns at every percentile and threshold, while Bitcoin was the least risky.

In a Value-at-Risk context, Gkillas and Katsiampa [12] applied extreme value theory to estimate Value at Risk and Expected Shortfall as measures of tail risk for five cryptocurrencies. Likitratcharoen *et al.* [13] predicted the Value at Risk (VaR) of Bitcoin, Ethereum and Ripple using historical and Gaussian parametric VaR. Their backtesting results show that the historical VaR model is suitable for measuring cryptocurrency risk, outperforming delta-normal VaR only at high confidence levels.

The objective of this study is twofold. First, a comprehensive in-sample volatility modelling exercise is implemented utilizing a variety of GARCH-type models to account for the volatility clustering and leverage effects present in cryptocurrency returns. The probability distributions assumed for the standardized innovations include the skewed Student-*t*, skewed generalized error (SGED), generalized hyperbolic (GHYP) and Johnson’s SU distributions. Second, we apply the GARCH-EVT model, which combines a conditional heteroscedastic model with extreme value theory, to examine the tail behaviour of eight major cryptocurrencies. The GARCH models and the GARCH-EVT model are then used to estimate out-of-sample 1-day-ahead Value at Risk (VaR) forecasts. The forecasting performance is evaluated using unconditional and conditional coverage tests to backtest the accuracy of the VaR forecasts and to determine which technique most accurately models extreme market risk for the eight cryptocurrencies.

The research contributes to the literature in two ways. First, it fits GARCH-type models with heavy-tailed innovation distributions to account for the volatility clustering, asymmetry and leverage effects present in cryptocurrency returns. Second, it provides more accurate results based on a hybrid model combining a conditional heteroscedastic model with extreme value analysis, namely the generalized Pareto distribution (GPD). The GPD is the only non-degenerate distribution that asymptotically approximates the limiting distribution of exceedances. We therefore consider only the relevant information on extremes, providing more accurate risk estimates. The remainder of the paper is organised as follows: Section 2 describes the methodology: GARCH modelling with the selected innovation distributions, extreme value theory, Value-at-Risk estimation and backtesting procedures. Section 3 presents the data description, empirical results and a discussion of the backtesting results. Finally, Section 4 concludes the study.

2. Methodology

2.1. GARCH Modelling

The generalized autoregressive conditional heteroscedastic (GARCH) model (Engle, [14] ; Bollerslev, [15] ) constitutes a benchmark in financial econometrics that is commonly used to estimate and forecast the volatility of financial returns.

Let
${r}_{t}$ denote the daily log returns of the corresponding cryptocurrencies data series at time *t* for
$t=1,\cdots ,n$ , computed as the natural logarithm of the ratio of the closing price at the end of day *t* to the price at the end of the preceding day
$t-1$ ,
${r}_{t}=\mathrm{ln}\left({p}_{t}/{p}_{t-1}\right)$ . The GARCH model can be specified as:

${r}_{t}={\mu}_{t}+{\sigma}_{t}{z}_{t}$ (1)

where ${\mu}_{t}$ denotes the conditional mean and ${\sigma}_{t}$ denotes the volatility process ( ${\sigma}_{t}^{2}$ being the conditional variance). The innovations ${z}_{t}$ are independent and identically distributed with zero mean and unit variance. For brevity, all selected GARCH models are restricted to a maximum order of one ( $p=q=1$ ). The parsimonious GARCH (1, 1) models tend to be more flexible, efficient and significant than higher-order models in out-of-sample analysis [16] .

In this study, several GARCH-type specifications are considered namely the Standard GARCH (SGARCH), IGARCH (1, 1), EGARCH (1, 1), GJR-GARCH (1, 1), Asymmetric Power ARCH (APARCH) (1, 1), Threshold GARCH (TGARCH) (1, 1) and Component GARCH (CGARCH) (1, 1), to model the time-varying volatility of the selected cryptocurrencies. All of the GARCH-type models selected follow the specification in Equation (1); however, they differ in the conditional variance specification.

The conditional variance for the standard GARCH (SGARCH) (1, 1) process is given by:

${\sigma}_{t}^{2}=\omega +\alpha {\epsilon}_{t-1}^{2}+\beta {\sigma}_{t-1}^{2},$ (2)

where $\omega >0$ , $\alpha \ge 0$ , $\beta \ge 0$ and $\alpha +\beta <1$ to ensure a uniquely stationary process and positive conditional variance. The GARCH (1, 1) model captures volatility clustering in the data through the persistence parameter $\alpha +\beta $ . However, if the persistence parameter $\alpha +\beta $ equals 1, the GARCH model reduces to the Integrated GARCH model, in which shocks to the conditional variance are infinitely persistent.
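A minimal simulation sketch of the recursion in Equation (2), with illustrative parameter values rather than estimates from the paper's data:

```python
import numpy as np

# Illustrative SGARCH(1,1) parameters (not estimated from real data)
omega, alpha, beta = 0.05, 0.10, 0.85
# Stationarity and positivity restrictions from Equation (2)
assert omega > 0 and alpha >= 0 and beta >= 0 and alpha + beta < 1

rng = np.random.default_rng(0)
n = 5000
z = rng.standard_normal(n)              # i.i.d. innovations, zero mean, unit variance
sigma2 = np.empty(n)
eps = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)  # unconditional variance as starting value
eps[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, n):
    # sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * z[t]
```

With these values the persistence parameter is $\alpha +\beta =0.95$, producing the pronounced volatility clustering typical of cryptocurrency returns.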

The Integrated GARCH (IGARCH) model is a special version of SGARCH (1, 1) model where, the persistence parameter ( $\alpha +\beta $ ) is equal to 1 and typically allows a unit root under the GARCH process. Thus, the conditional variance in the IGARCH (1, 1) is expressed in Equation (3), given that $\beta $ is set equal to ( $1-\alpha $ ) with parameter restrictions $\omega >0$ , $\alpha \ge 0$ and $1-\alpha \ge 0$ :

${\sigma}_{t}^{2}=\omega +\alpha {\epsilon}_{t-1}^{2}+\left(1-\alpha \right){\sigma}_{t-1}^{2}\mathrm{.}$ (3)

In both the SGARCH and IGARCH models, the impact of positive and negative news on the conditional variance is assumed to be symmetric. These models restrict all coefficients to be non-negative and thus cannot capture the negative correlation between returns and volatility. Asymmetric and long-memory GARCH-type models are therefore also introduced to forecast cryptocurrency price volatility, capturing stylized facts such as asymmetry and fat tails in the return innovations and providing better VaR computations.

The exponential GARCH (EGARCH) model by Nelson [17] incorporates the asymmetric impact of positive and negative shocks on volatility, whereby negative shocks are believed to produce greater volatility than positive shocks of the same magnitude. The model is specified in logarithmic form, so the parameters are unrestricted and may take negative values while the conditional variance remains positive. In addition, the conditional variance is written as a function of past standardized innovations instead of past innovations. The volatility dynamics of an EGARCH (1, 1) can be expressed as:

${\mathrm{log}}_{e}{\sigma}_{t}^{2}=\omega +{\alpha}_{1}{z}_{t-1}+{\gamma}_{1}\left(\left|{z}_{t-1}\right|-E\left|{z}_{t-1}\right|\right)+{\beta}_{1}{\mathrm{log}}_{e}\left({\sigma}_{t-1}^{2}\right)$ (4)

where the coefficient ${\alpha}_{1}$ captures the sign effect, and ${\gamma}_{1}>0$ the size of the leverage effect. The persistence parameter for this model is ${\beta}_{1}$ .
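As an illustration, the log-variance recursion of Equation (4) can be sketched with illustrative (not estimated) parameter values; $E\left|{z}_{t-1}\right|$ is taken as $\sqrt{2/\pi}$, its value under standard normal innovations (it differs for the heavy-tailed laws used later):

```python
import numpy as np

def egarch_logvar(z, omega, alpha1, gamma1, beta1, logvar0):
    """EGARCH(1,1) log-variance recursion of Equation (4); E|z| = sqrt(2/pi)
    holds for standard normal innovations."""
    e_abs = np.sqrt(2.0 / np.pi)
    logvar = np.empty(len(z))
    logvar[0] = logvar0
    for t in range(1, len(z)):
        logvar[t] = (omega + alpha1 * z[t - 1]
                     + gamma1 * (abs(z[t - 1]) - e_abs)
                     + beta1 * logvar[t - 1])
    return logvar

# Illustrative values: a negative sign coefficient alpha1 makes negative
# shocks raise volatility more than positive shocks of the same magnitude.
omega, alpha1, gamma1, beta1 = -0.1, -0.08, 0.15, 0.97
rng = np.random.default_rng(0)
logvar = egarch_logvar(rng.standard_normal(500), omega, alpha1, gamma1, beta1, 0.0)

# One-step log-variance responses to shocks of equal size but opposite sign
e_abs = np.sqrt(2.0 / np.pi)
resp_neg = omega + alpha1 * (-2.0) + gamma1 * (2.0 - e_abs)
resp_pos = omega + alpha1 * (+2.0) + gamma1 * (2.0 - e_abs)
```

Here `resp_neg` exceeds `resp_pos`, which is the asymmetric (leverage) response the model is designed to capture.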

The Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten *et al.* [18] is similar to EGARCH (1, 1) in incorporating the asymmetric impact of positive and negative shocks. The conditional variance responds asymmetrically via the use of an indicator function *I*. The volatility equation of a GJR-GARCH (1, 1) model is given as:

${\sigma}_{t}^{2}=\omega +{\alpha}_{1}{\epsilon}_{t-1}^{2}+{\gamma}_{1}{I}_{t-1}{\epsilon}_{t-1}^{2}+{\beta}_{1}{\sigma}_{t-1}^{2},$ (5)

where
${\gamma}_{1}$ now represents the “leverage” term. The indicator function *I* takes the value 1 for
${\epsilon}_{t-1}\le 0$ and 0 otherwise. The persistence depends on the parameter
${\gamma}_{1}$ , through
${\alpha}_{1}+{\beta}_{1}+{\gamma}_{1}\kappa $ , where
$\kappa $ denotes the expected value of the standardized residuals.

The asymmetric power ARCH (APARCH) model of Ding *et al.* [19] allows for both leverage and the Taylor effect, named after Taylor [20] , who observed that the sample autocorrelation of absolute returns was usually larger than that of squared returns.

The APARCH (1, 1) model can be expressed as:

${\sigma}_{t}^{\delta}=\omega +{\alpha}_{1}{\left(\left|{z}_{t-1}\right|-{\gamma}_{1}{z}_{t-1}\right)}^{\delta}+{\beta}_{1}{\sigma}_{t-1}^{\delta},$ (6)

where $\delta \in {\mathbb{R}}^{+}$ is the Box-Cox transformation parameter of ${\sigma}_{t}$ , and $-1<{\gamma}_{1}<1$ is the coefficient of the leverage term. The persistence parameter is equal to ${\beta}_{1}+{\alpha}_{1}{\kappa}_{1}$ , where ${\kappa}_{1}$ is the expected value of the standardized residuals under the Box-Cox transformation of the term that includes the leverage parameter ${\gamma}_{1}$ .

The component standard GARCH (CS-GARCH) model of Engle and Lee [21] decomposes the conditional variance into permanent and transitory components so as to investigate the long- and short-run movements of volatility. Let ${q}_{t}$ represent the permanent component of the conditional variance; the component model can be written as

${\sigma}_{t}^{2}={q}_{t}+{\alpha}_{1}\left({\epsilon}_{t-1}^{2}-{q}_{t-1}\right)+{\beta}_{1}\left({\sigma}_{t-1}^{2}-{q}_{t-1}\right)$ (7)

${q}_{t}={\alpha}_{0}+\rho {q}_{t-1}+\varphi \left({\epsilon}_{t-1}^{2}-{\sigma}_{t-1}^{2}\right)$

where, effectively, the intercept of the GARCH model is now time-varying, following first-order autoregressive dynamics.

The Nonlinear GARCH (NGARCH) model of Higgins *et al.* [22] is given by

${\sigma}_{t}^{2}=\omega +{\alpha}_{1}{\epsilon}_{t-1}^{2}+{\gamma}_{1}{\epsilon}_{t-1}+{\beta}_{1}{\sigma}_{t-1}^{2}$ (8)

The Nonlinear Asymmetric GARCH (NAGARCH) model of Engle and Ng [23] is a model with the specification:

${\sigma}_{t}^{2}=\omega +\alpha {\left({\epsilon}_{t-1}-\theta {\sigma}_{t-1}\right)}^{2}+\beta {\sigma}_{t-1}^{2}$ (9)

where $\alpha \ge 0$ , $\beta \ge 0$ , $\omega >0$ and $\alpha \left(1+{\theta}^{2}\right)+\beta <1$ , which ensures the non-negativity and stationarity of the variance process.

For stock returns, the parameter $\theta $ is usually estimated to be positive; in this case, it reflects a phenomenon referred to as the “leverage effect”, signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude.

For each GARCH-type model, the innovation process
${z}_{t}$ is allowed to follow one of the following four skewed and heavy-tailed distributions: the skewed Student-*t*, skewed generalized error (SGED), generalized hyperbolic (GHYP) and Johnson’s SU distributions, since cryptocurrency returns have heavier tails than the normal distribution.

The skewed Student-t (SST) distribution by Azzalini and Capitanio [24] has a density given by

$f\left(x;\delta ,\nu ,\mu ,\beta \right)=\frac{1}{\delta}{t}_{\nu}\left(\frac{x-\mu}{\delta}\right)2{T}_{\nu +1}\left(\beta \left(\frac{x-\mu}{\delta}\right)\sqrt{\frac{\nu +1}{{\left(\frac{x-\mu}{\delta}\right)}^{2}+\nu}}\right)$ (10)

where
${t}_{\nu}$ is the density of standard Student *t* distribution with
$\nu $ degrees of freedom and
${T}_{\nu +1}$ is the distribution function of the standard Student *t* distribution with
$\nu +1$ degrees of freedom.

The skewed generalized error distribution (SGED) by Theodossiou [25] is given by

${f}_{\text{SGED}}\left(x\mathrm{;}\mu \mathrm{,}\sigma \mathrm{,}k\mathrm{,}\lambda \right)=\frac{C}{\sigma}\mathrm{exp}\left(-\frac{{\left|x-\mu +\delta \sigma \right|}^{k}}{{\left[1-\text{sign}\left(x-\mu +\delta \sigma \right)\lambda \right]}^{k}{\theta}^{k}{\sigma}^{k}}\right)$ (11)

where

$C=\frac{k}{2\theta}\Gamma {\left(\frac{1}{k}\right)}^{-1}\mathrm{,}$

$\theta =\Gamma {\left(\frac{1}{k}\right)}^{\frac{1}{2}}\Gamma {\left(\frac{3}{k}\right)}^{-\frac{1}{2}}S{\left(\lambda \right)}^{-1}\mathrm{,}$

$\delta =2\lambda AS{\left(\lambda \right)}^{-1}\mathrm{,}$

$S\left(\lambda \right)=\sqrt{1+3{\lambda}^{2}-4{A}^{2}{\lambda}^{2}}\mathrm{,}$

and

$A=\Gamma \left(\frac{2}{k}\right)\Gamma {\left(\frac{1}{k}\right)}^{-\frac{1}{2}}\Gamma {\left(\frac{3}{k}\right)}^{-\frac{1}{2}},$

$\mu $ and
$\sigma $ are the mean and standard deviation parameters respectively,
$\lambda $ is a skewness parameter, sign is the sign function, and
$\Gamma \left(\alpha \right)={\displaystyle {\int}_{0}^{\infty}}{z}^{\alpha -1}{\text{e}}^{-z}\text{d}z$ is the gamma function. The scaling parameter *k* and
$\lambda $ satisfy the following constraints
$k>0$ and
$-1<\lambda <1$ . The parameter *k* controls the height and tails of the density function and the skewness parameter
$\lambda $ controls the rate of descent of the density around the mode of the random variable *x*, where
$\text{mode}\left(x\right)=\mu -\delta \sigma $ .
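The density in Equation (11), with the auxiliary constants above, can be implemented directly; as a sanity check, it reduces to the normal density when $\lambda =0$ and $k=2$:

```python
import numpy as np
from scipy.special import gamma as G

def sged_pdf(x, mu, sigma, k, lam):
    """SGED density of Equation (11), with the auxiliary constants
    A, S(lambda), theta, delta and C as defined in the text."""
    A = G(2.0 / k) * G(1.0 / k) ** -0.5 * G(3.0 / k) ** -0.5
    S = np.sqrt(1.0 + 3.0 * lam**2 - 4.0 * A**2 * lam**2)
    theta = G(1.0 / k) ** 0.5 * G(3.0 / k) ** -0.5 / S
    delta = 2.0 * lam * A / S
    C = k / (2.0 * theta * G(1.0 / k))
    u = x - mu + delta * sigma
    return (C / sigma) * np.exp(
        -np.abs(u) ** k / ((1.0 - np.sign(u) * lam) ** k * theta**k * sigma**k)
    )

# Sanity check: with lam = 0 and k = 2 the SGED reduces to the standard normal
print(sged_pdf(0.0, 0.0, 1.0, 2.0, 0.0))  # 1/sqrt(2*pi) ≈ 0.3989
```

A numerical integration of the density over a wide grid also confirms that it integrates to one for skewed, heavy-tailed parameter choices.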

The generalized hyperbolic (GH) distribution by Barndorff-Nielsen [26] is given by

$\begin{array}{c}f\left(x\mathrm{;}\lambda \mathrm{,}\alpha \mathrm{,}\beta \mathrm{,}\mu \mathrm{,}\delta \right)=\frac{{\left(\delta \sqrt{{\alpha}^{2}-{\beta}^{2}}\right)}^{\lambda}{\left(\delta \alpha \right)}^{1/2-\lambda}}{\sqrt{2\pi}\delta {K}_{\lambda}\left(\delta \sqrt{{\alpha}^{2}-{\beta}^{2}}\right)}{\left(1+\frac{{\left(x-\mu \right)}^{2}}{{\delta}^{2}}\right)}^{\lambda /2-1/4}\\ \text{\hspace{0.17em}}\times \mathrm{exp}\left(\beta \left(x-\mu \right)\right){K}_{\lambda -1/2}\left(\alpha \delta \sqrt{1+\frac{{\left(x-\mu \right)}^{2}}{{\delta}^{2}}}\right)\end{array}$ (12)

where ${K}_{\lambda}$ is the modified Bessel function of the third kind with index $\lambda $ . The density is defined under the following parameter restrictions.

$\delta \ge 0\text{\hspace{1em}}\text{and}\text{\hspace{1em}}\left|\beta \right|<\alpha \text{\hspace{1em}}\text{if}\text{\hspace{1em}}\lambda >0$

$\delta >0\text{\hspace{1em}}\text{and}\text{\hspace{1em}}\left|\beta \right|<\alpha \text{\hspace{1em}}\text{if}\text{\hspace{1em}}\lambda =0$

$\delta >0\text{\hspace{1em}}\text{and}\text{\hspace{1em}}\left|\beta \right|\le \alpha \text{\hspace{1em}}\text{if}\text{\hspace{1em}}\lambda <0$

The class of generalized hyperbolic distribution variants can be obtained by changing the values of the parameter $\lambda $ ; hence, $\lambda $ is called the class-defining parameter.

The Johnson system of distributions consists of families of distributions that, through specified transformations, can be reduced to the standard normal random variable. A random variable *X* from the Johnson translation system is represented as a transformation of the normal distribution given by

$X=\xi +\lambda {r}^{-1}\left(\frac{Z-\gamma}{\delta}\right)$

where *Z* is a standard normal random variable,
$\gamma $ and
$\delta $ are shape parameters,
$\xi $ is a location parameter,
$\lambda $ is a scale parameter and
$r(\cdot )$ denotes one of the following normalizing transformations:

$r\left(y\right)=(\begin{array}{ll}y\hfill & \text{for the}\text{\hspace{0.17em}}{S}_{N}\left(\text{normal}\right)\text{\hspace{0.17em}}\text{family}\mathrm{,}\hfill \\ \mathrm{log}\left(y\right)\hfill & \text{for the}\text{\hspace{0.17em}}{S}_{L}\left(\text{lognormal}\right)\text{\hspace{0.17em}}\text{family}\mathrm{,}\hfill \\ \mathrm{log}\left(y/\left(1-y\right)\right)\hfill & \text{for the}\text{\hspace{0.17em}}{S}_{B}\left(\text{bounded}\right)\text{\hspace{0.17em}}\text{family}\mathrm{,}\hfill \\ \mathrm{log}\left(y+\sqrt{{y}^{2}+1}\right)\hfill & \text{for the}\text{\hspace{0.17em}}{S}_{U}\left(\text{unbounded}\right)\text{\hspace{0.17em}}\text{family}\hfill \end{array}$

where $X>\xi $ and $\lambda =1$ for the ${S}_{L}$ family. Each family covers a particular feasible combination of skewness and kurtosis values. The cryptocurrency returns considered in this study have skewness and kurtosis values that correspond to Johnson’s ${S}_{U}$ -distribution; thus, we consider only the ${S}_{U}$ family of the Johnson translation system. The reparameterized Johnson SU distribution, as discussed in Rigby and Stasinopoulos [27] , is a four-parameter distribution denoted by JSU $\left(\mu \mathrm{,}\sigma \mathrm{,}\nu \mathrm{,}\tau \right)$ , with mean $\mu $ and standard deviation $\sigma $ for all values of the skew and shape parameters $\nu $ and $\tau $ respectively.
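For the ${S}_{U}$ family, $r\left(y\right)=\mathrm{log}\left(y+\sqrt{{y}^{2}+1}\right)=\text{asinh}\left(y\right)$, so the inverse transformation is $X=\xi +\lambda \mathrm{sinh}\left(\left(Z-\gamma \right)/\delta \right)$. A sketch using `scipy.stats.johnsonsu`, whose shape parameters `(a, b)` correspond to $\left(\gamma \mathrm{,}\delta \right)$, with illustrative parameter values:

```python
import numpy as np
from scipy.stats import johnsonsu, norm

# Illustrative shape (gamma, delta), location xi and scale lam
gamma_, delta_, xi, lam = 0.5, 1.5, 0.0, 1.0

# Direct inverse transformation of a standard normal variate Z
z = np.linspace(-3.0, 3.0, 61)
x_direct = xi + lam * np.sinh((z - gamma_) / delta_)

# Same mapping via scipy: johnsonsu(a, b, loc, scale) with a = gamma, b = delta
x_scipy = johnsonsu.ppf(norm.cdf(z), gamma_, delta_, loc=xi, scale=lam)
```

The two computations agree, confirming that the scipy parameterization matches the translation-system form used here.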

The parameters of all GARCH-type models are estimated by maximum likelihood, which is generally consistent and efficient and provides asymptotic standard errors that are valid under non-normality. The most appropriate GARCH-type model is the one that minimizes the Kullback-Leibler distance between the model and the observed data. The selection is based on information criteria, namely the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
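A minimal sketch of this selection step; the log-likelihoods and parameter counts below are hypothetical, not fits to the paper's data:

```python
import numpy as np

def aic_bic(loglik, n_params, n_obs):
    """AIC = 2k - 2*logL and BIC = k*ln(n) - 2*logL; smaller is better."""
    return 2 * n_params - 2 * loglik, n_params * np.log(n_obs) - 2 * loglik

# Hypothetical log-likelihoods and parameter counts for two candidate
# GARCH fits on n = 1500 observations
candidates = {"SGARCH-JSU": (2510.4, 6), "EGARCH-JSU": (2516.9, 7)}
scores = {name: aic_bic(ll, k, 1500) for name, (ll, k) in candidates.items()}
best_by_aic = min(scores, key=lambda m: scores[m][0])
```

In this hypothetical comparison the extra parameter of the asymmetric model is justified by the higher log-likelihood, so it is preferred under both criteria.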

2.2. Extreme Value Theory and the Peaks-over-Threshold Model

In this section, we describe how to obtain the quantile
${z}_{q}$ by applying EVT techniques to the distribution of GARCH-models filtered innovations. The Peak-over-threshold (POT) modelling approach is illustrated as follows. First, we fix a sufficiently high threshold *u* and assume that excess residuals over this threshold follow a generalized Pareto distribution (GPD) with tail index
$\xi $ .

${G}_{\xi \mathrm{,}\beta}\left(y\right)=(\begin{array}{ll}1-{\left(1+\frac{\xi y}{\beta}\right)}^{-\frac{1}{\xi}}\hfill & \text{if}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\xi \ne \mathrm{0,}\hfill \\ 1-\mathrm{exp}\left(-\frac{y}{\beta}\right)\hfill & \text{if}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\xi =\mathrm{0,}\hfill \end{array}$ (13)

where
$\beta >0$ is scale parameter and the support is
$y\ge 0$ when
$\xi \ge 0$ and
$0\le y\le -\beta /\xi $ when
$\xi <0$ .
$\xi $ is the shape parameter, which governs the tail behaviour of
${G}_{\xi \mathrm{,}\beta}\left(y\right)$ . Consider a general distribution function *F* and the corresponding excess distribution above the threshold *u* defined by:

${F}_{u}\left(y\right)=\mathrm{Pr}\left(X-u\le y|X>u\right)=\frac{F\left(u+y\right)-F\left(u\right)}{1-F\left(u\right)},\text{\hspace{1em}}y\ge 0$ (14)

For
$0\le y$ , Balkema and De Haan [28] and Pickands [29] showed that for a large class of distributions *F* it is possible to find a positive measurable function
$\sigma \left(u\right)$ such that

$\underset{u\to {x}_{F}}{lim}\underset{0\le y\le {x}_{F}-u}{\mathrm{sup}}\left|{F}_{u}\left(y\right)-{G}_{\xi ,\beta \left(u\right)}\left(y\right)\right|=0$ (15)

The GPD is generalized in the sense that it subsumes several other specific distributions under its parametrization. When $\xi >0$ , the distribution function ${G}_{\xi \mathrm{,}\beta}$ is the parameterized version of a heavy-tailed ordinary Pareto distribution; when $\xi =0$ we have a light-tailed exponential distribution and when $\xi <0$ we have a short-tailed Pareto type II distribution.

The tail of the underlying distribution is assumed to begin at the threshold *u*, with ${N}_{u}$ denoting the number of observations exceeding the threshold. For a random sample of size *n*, the proportion of extremes is then ${N}_{u}/n$ . Assuming that the
${N}_{u}$ excesses over the threshold are independently and identically distributed (i.i.d) with exact GPD distribution, the parameters
$\xi $ and
$\beta $ are estimated by maximum likelihood. Smith [30] showed that maximum likelihood estimates
$\stackrel{^}{\xi}$ and
$\stackrel{^}{\beta}$ of the GPD parameters
$\xi $ and
$\beta $ are consistent and asymptotically normal as
${N}_{u}\to \infty $ provided
$\xi >-1/2$ . Even under the weaker assumption that the excesses are i.i.d. from
${F}_{u}\left(y\right)$ , which is only approximately GPD, Smith also obtained unbiased and asymptotically normal estimates of
$\xi $ and
$\beta $ provided a sufficient rate of convergence.
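As a sketch of this estimation step, the GPD parameters can be fitted to simulated exceedances by maximum likelihood using `scipy.stats.genpareto`; the data here are simulated with known parameters, not the paper's residuals:

```python
import numpy as np
from scipy.stats import genpareto

# Simulated exceedances over a threshold, with true shape xi = 0.25 and
# scale beta = 1 (illustrative values)
excesses = genpareto.rvs(0.25, loc=0.0, scale=1.0, size=5000, random_state=42)

# Maximum likelihood fit of (xi, beta), with the location pinned at the
# threshold (floc=0) so only shape and scale are estimated
xi_hat, _, beta_hat = genpareto.fit(excesses, floc=0)
```

With ${N}_{u}=5000$ excesses the ML estimates recover the true shape and scale closely, consistent with Smith's asymptotic results.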

By setting
$x=u+y$ , the following equality holds for points
$x>u$ in the tail of *F* obtained from Equation (14):

$1-F\left(x\right)=\left(1-F\left(u\right)\right)\left(1-{F}_{u}\left(x-u\right)\right)$ (16)

The first term,
$\left(1-F\left(u\right)\right)$ , can be estimated non-parametrically using the random proportion of the data on the tail *N*/*n* and we can also estimate the term
$1-{F}_{u}\left(x-u\right)$ , by approximating the excess distribution,
${F}_{u}\left(y\right)$ with a GPD fitted by maximum likelihood, to get the tail estimator:

$\stackrel{^}{F}\left(x\right)=1-\frac{{N}_{u}}{n}{\left(1+\stackrel{^}{\xi}\left(\frac{x-u}{\stackrel{^}{\beta}}\right)\right)}^{-\frac{1}{\stackrel{^}{\xi}}}\mathrm{.}$ (17)

If the number of observations in the tail is fixed at ${N}_{u}=k$ , this gives a random threshold at the $\left(k+1\right)\text{th}$ order statistic. The GPD with parameters $\xi $ and $\beta $ is fitted to the data ${Z}_{\left(1\right)}-{Z}_{\left(k+1\right)}\mathrm{,}\cdots \mathrm{,}{Z}_{\left(k\right)}-{Z}_{\left(k+1\right)}$ , the excess amounts over the threshold for all residuals exceeding it. The tail estimator for ${F}_{Z}\left(z\right)$ is then given by

${\stackrel{^}{F}}_{Z}\left(z\right)=1-\frac{k}{n}{\left(1+\stackrel{^}{\xi}\left(\frac{z-{z}_{\left(k+1\right)}}{\stackrel{^}{\beta}}\right)\right)}^{-1/\stackrel{^}{\xi}}\mathrm{,}$ (18)

For $q>1-k/n$ , we can invert Equation (18) to get

${\stackrel{^}{z}}_{q}={z}_{\left(k+1\right)}+\frac{\stackrel{^}{\beta}}{\stackrel{^}{\xi}}\left({\left(\frac{1-q}{k/n}\right)}^{-\stackrel{^}{\xi}}-1\right)$ (19)

which estimates the *q*-th quantile of the innovation distribution.
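Equations (18) and (19) can be sketched as follows; the innovations are simulated Student-t draws standing in for GARCH-filtered residuals, and `k` is a hypothetical tail-fraction choice:

```python
import numpy as np
from scipy.stats import genpareto

def evt_quantile(z, k, q):
    """POT tail quantile of Equations (18)-(19): threshold at the (k+1)th
    largest residual, GPD fitted by ML to the k excesses, then inverted at q."""
    z_desc = np.sort(z)[::-1]          # descending order statistics
    u = z_desc[k]                      # random threshold z_(k+1)
    excess = z_desc[:k] - u            # exceedances over the threshold
    xi_hat, _, beta_hat = genpareto.fit(excess, floc=0)
    n = len(z)
    return u + (beta_hat / xi_hat) * (((1 - q) / (k / n)) ** (-xi_hat) - 1)

rng = np.random.default_rng(1)
z = rng.standard_t(df=4, size=20000)   # heavy-tailed stand-in for residuals
z99 = evt_quantile(z, k=500, q=0.99)
```

For t(4) innovations the estimate lands close to the true 0.99 quantile (about 3.75), illustrating that the tail estimator recovers extreme quantiles from a modest tail fraction.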

2.3. Measure of Value-at-Risk

Value at Risk (VaR) is a measure of risk that quantifies the loss that may occur in extreme events at a given confidence level. The main parameters of VaR are the confidence level (
$1-\alpha $ ) and the risk horizon (*h*), the period of time in trading days over which the loss is measured.

Consider $\left({X}_{t}\mathrm{,}t\in \mathbb{Z}\right)$ , a strictly stationary time series representing daily observations of the negative log-return of a financial asset price. The dynamics of ${X}_{t}$ are assumed to be given by:

${X}_{t}={\mu}_{t}+{\sigma}_{t}{Z}_{t}$ (20)

where the innovations ${Z}_{t}$ follow a strict white noise process, independent and identically distributed, with zero mean, unit variance and marginal distribution function ${F}_{Z}\left(z\right)$ . We assume that ${\mu}_{t}$ and ${\sigma}_{t}$ are both measurable with respect to ${F}_{t-1}$ the information about the return process available up to time $t-1$ .

Let
${F}_{X}\left(x\right)$ denote the marginal distribution of
$\left({X}_{t}\right)$ and, for a horizon
$h\in N$ , let
${F}_{{X}_{t+1}+\cdots +{X}_{t+h}|{F}_{t}}\left(x\right)$ denote the predictive distribution of the return over the next *h* days, given information on returns up to and including day *t*. For
$0<q<1$ , the *q*-th unconditional quantile of the marginal distribution is denoted by:

${x}_{q}=\mathrm{inf}\left\{x\in \mathbb{R}:P\left(X>x\right)\le 1-q\right\}=\mathrm{inf}\left\{x\in \mathbb{R}:{F}_{X}\left(x\right)\ge q\right\},$ (21)

and a conditional quantile is a quantile of the predictive distribution for the return over the next *h* days denoted by

${x}_{q}^{t}\left(h\right)=\mathrm{inf}\left\{x\in \mathbb{R}:{F}_{{X}_{t+1}+\cdots +{X}_{t+h|{F}_{t}}}\left(x\right)\ge q\right\}$ (22)

We are principally interested in estimating unconditional and conditional quantiles in the tails of negative log-returns for the 1-step predictive distribution. Since

${F}_{{X}_{t+1}|{F}_{t}}\left(x\right)=P\left\{{\mu}_{t+1}+{\sigma}_{t+1}{Z}_{t+1}\le x|{F}_{t}\right\}={F}_{Z}\left(\frac{x-{\mu}_{t+1}}{{\sigma}_{t+1}}\right)$

the conditional quantile, denoted ${x}_{q}^{t}$ , simplifies to

${x}_{q}^{t}={\mu}_{t+1}+{\sigma}_{t+1}{z}_{q}$ (23)

where
${z}_{q}$ is the upper *q*-th quantile of the marginal distribution of
${Z}_{t}$ which by assumption does not depend on *t*. Mathematically, VaR is the *q*-th quantile of the underlying distribution of returns.

To estimate the VaR risk measure for the cryptocurrency market, our main interest is in extreme value theory-based models: we consider the conditional GPD approach alongside the conventional GARCH models.

The Peak over Threshold: Conditional GPD Approach

Different approaches have been proposed in the literature to estimate risk measures. The unconditional GPD has the advantage that it focuses directly on the tail of the distribution; however, it ignores the fact that returns are not i.i.d. Econometric models of volatility, such as GARCH processes under different innovation distributions, yield VaR estimates that reflect the current volatility dynamics. The weakness of this GARCH modelling approach is that it models the whole conditional return distribution as time-varying, rather than only the tail of interest, and may therefore fail to accurately estimate risk measures like VaR.

In order to overcome the drawbacks of each of the above methods, McNeil and Frey [31] proposed combining ideas from the two approaches. By first filtering the returns with a GARCH model, we obtain an essentially i.i.d. series to which EVT techniques can be readily applied. The advantage of this GARCH-EVT combination lies in its ability to capture conditional heteroscedasticity in the time series through the GARCH framework while simultaneously modelling the extreme tail behaviour through the EVT method. The conditional GPD thus produces a VaR that reflects the current volatility background. The combined approach, denoted conditional GPD, may be presented in the following three steps:

Step 1: Fit a GARCH-type model to the return data by quasi-maximum likelihood. Estimate ${\mu}_{t+1}$ and ${\sigma}_{t+1}$ from the fitted model and extract the standardized residuals ${z}_{t}$ .

Step 2: Consider the standardized residuals computed in Step 1 to be realizations of a white noise process, and estimate the tails of the innovations using extreme value theory. Next, compute the quantiles of the innovations.

Step 3: Construct VaR from parameters estimated in steps 1 and 2.

Assume that the volatility dynamics of the log-returns can be represented by Equation (2). Given the 1-step-ahead forecasts ${\mu}_{t+1}$ and ${\sigma}_{t+1}$ and the estimated quantile ${\stackrel{^}{z}}_{q}$ of the standardized residual series from Equation (19), the VaR for the return series can be estimated as:

${\stackrel{^}{\text{VaR}}}_{t+1}^{q}={\stackrel{^}{\mu}}_{t+1}+{\stackrel{^}{\sigma}}_{t+1}{\stackrel{^}{z}}_{q}$ (24)
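The three steps can be sketched end-to-end; the GARCH parameters are assumed known here (standing in for the quasi-ML fit of Step 1), the return series is a simulated placeholder, and the conditional mean is taken as zero:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Step 1 (stand-in): filter a placeholder return series with a GARCH(1,1)
# whose parameters are assumed known; in practice they come from quasi-ML.
omega, alpha, beta = 0.05, 0.10, 0.85
r = rng.standard_normal(3000)
sigma2 = np.empty_like(r)
sigma2[0] = omega / (1 - alpha - beta)
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
z = r / np.sqrt(sigma2)                      # standardized residuals

# Step 2: GPD tail of the residuals and the quantile of Equation (19)
k, q = 300, 0.99
z_desc = np.sort(z)[::-1]
u = z_desc[k]                                # threshold z_(k+1)
xi, _, b = genpareto.fit(z_desc[:k] - u, floc=0)
z_q = u + (b / xi) * (((1 - q) / (k / len(z))) ** (-xi) - 1)

# Step 3: 1-day-ahead VaR from Equation (24), with mu_{t+1} = 0 here
sigma_next = np.sqrt(omega + alpha * r[-1] ** 2 + beta * sigma2[-1])
var_next = sigma_next * z_q
```

Because the volatility forecast multiplies the EVT quantile, the resulting VaR rises and falls with current market conditions rather than staying fixed at an unconditional level.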

2.4. Statistical Backtesting of Model-Based VaR Forecasts

To backtest the accuracy of the estimated VaRs, we compute the empirical failure rates. By definition, the failure rate is the proportion of times that losses exceed the forecasted VaR. If the model is correctly specified, the failure rate should equal the specified VaR level. In this study, VaR backtesting is based on Kupiec’s [32] unconditional coverage test and Christoffersen’s [33] conditional coverage test.

For purposes of implementing VaR forecast tests, the first step is to define the “hit sequence” of VaR violations:

${I}_{t+1}=(\begin{array}{ll}1\hfill & \text{if}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{r}_{t+1}<-{\text{VaR}}_{t+1}^{\alpha}\hfill \\ 0\hfill & \text{if}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{r}_{t+1}\ge -{\text{VaR}}_{t+1}^{\alpha}\hfill \end{array}$ (25)

where ${\text{VaR}}_{t+1}^{\alpha}$ is the VaR prediction at time $t+1$ for risk quantile level $\alpha $ . Under the null hypothesis of correct specification, the hit sequence is an i.i.d. Bernoulli( $\alpha $ ) variable.

${H}_{0}\mathrm{:}{I}_{t+1}\sim Bernoulli\left(\alpha \right)\mathrm{,}$ (26)

$f\left({I}_{t+1}\mathrm{,}p\right)={\left(1-p\right)}^{1-{I}_{t+1}}{p}^{{I}_{t+1}}\mathrm{.}$ (27)

The accuracy and reliability of a VaR methodology are tested by evaluating the out-of-sample performance of the estimated VaR forecasts. The backtesting procedure consists of comparing the out-of-sample VaR estimates with the actual realized loss in the next period. For a VaR forecast model to be accurate in its predictions, the average of the hit sequence (the hit ratio, or failure rate) over the full sample should equal $\alpha $ for the $\left(1-\alpha \right)\mathrm{\%}$ quantile VaR (*i.e.*, 5% for the 95% VaR). The closer the hit ratio is to its expected value, the better the forecasts of the risk model. If the hit ratio is greater than $\alpha $ , the model underestimates risk; if it is smaller than $\alpha $ , the model overestimates risk.

The unconditional coverage (UC) test compares the observed fraction of violations for a particular risk model, $\pi$ , with the expected level $\alpha$ . For this purpose the Bernoulli likelihood function is required:

$L\left(\pi \right)={\displaystyle \prod}_{t=1}^{T}{\left(1-\pi \right)}^{1-{I}_{t}}{\pi}^{{I}_{t}}={\left(1-\pi \right)}^{{T}_{0}}{\pi}^{{T}_{1}}$ (28)

where ${T}_{0}$ , ${T}_{1}$ are the number of 0 s and 1 s in the sample $\left(T={T}_{0}+{T}_{1}\right)$ . The maximum likelihood estimator is $\stackrel{^}{\pi}={T}_{1}/T$ . The null hypothesis can be tested by means of the following likelihood ratio test:

$L{R}_{uc}=-2\mathrm{ln}\left(\frac{L\left(\alpha \right)}{L\left(\stackrel{^}{\pi}\right)}\right)\sim {\chi}_{1}^{2}$ (29)

Under the null hypothesis that the VaR model is correct $L{R}_{uc}$ is asymptotically chi-square distributed with one degree of freedom. However, this test focuses only on the number of exceptions.
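A minimal implementation of the Kupiec test, assuming `scipy` is available, might look as follows; the `max(..., 1e-300)` guard against degenerate samples with zero violations is an implementation choice, not part of the original derivation:

```python
import numpy as np
from scipy import stats

def kupiec_uc(hits, alpha):
    """Kupiec unconditional coverage LR test (Equations (28)-(29)).

    hits  : 0/1 hit sequence from Equation (25)
    alpha : expected violation probability (e.g. 0.05)."""
    hits = np.asarray(hits)
    T1 = int(hits.sum())            # number of violations
    T0 = len(hits) - T1             # number of non-violations
    pi_hat = T1 / len(hits)         # MLE of the violation probability

    def loglik(p):  # Bernoulli log-likelihood, guarded against log(0)
        return T0 * np.log(max(1 - p, 1e-300)) + T1 * np.log(max(p, 1e-300))

    lr = -2.0 * (loglik(alpha) - loglik(pi_hat))
    p_value = 1.0 - stats.chi2.cdf(lr, df=1)
    return lr, p_value

# A sample with exactly the expected 5% violation rate gives LR = 0,
# so the null of correct unconditional coverage is not rejected.
lr, p_value = kupiec_uc([1] * 5 + [0] * 95, alpha=0.05)
```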

In practice, situations arise when the VaR model passes the unconditional coverage test but all violations are clustered. To reject a VaR model with clustered violations, a test of independence of the hit sequence is required. Suppose the hit sequence is assumed to exhibit time dependence and follows a first-order Markov sequence with the following transition probability matrix:

${\pi}_{1}=\left(\begin{array}{cc}1-{\pi}_{01}& {\pi}_{01}\\ 1-{\pi}_{11}& {\pi}_{11}\end{array}\right)$ (30)

where ${\pi}_{ij}=\mathrm{Pr}\left({I}_{t+1}=j|{I}_{t}=i\right)$ : ${\pi}_{01}$ is the probability of a violation tomorrow given no violation today, and ${\pi}_{11}$ is the probability of a violation tomorrow given a violation today. The corresponding likelihood function is:

$L\left({\pi}_{1}\right)={\left(1-{\pi}_{01}\right)}^{{T}_{00}}{\pi}_{01}^{{T}_{01}}{\left(1-{\pi}_{11}\right)}^{{T}_{10}}{\pi}_{11}^{{T}_{11}},$ (31)

where ${T}_{ij}$ is the number of observations with value *j* following value *i*. If the hit sequence is independent over time, the probability of a violation tomorrow does not depend on whether there is a violation today. Hence, the null hypothesis of the independence test is ${H}_{0}:{\pi}_{01}={\pi}_{11}=\pi$ . Under this hypothesis the transition probability matrix takes the form:

$\stackrel{^}{\pi}=\left(\begin{array}{cc}1-\stackrel{^}{\pi}& \stackrel{^}{\pi}\\ 1-\stackrel{^}{\pi}& \stackrel{^}{\pi}\end{array}\right)$ (32)

Then, independence can be tested using a likelihood ratio test statistic defined as follows:

$L{R}_{ind}=-2\mathrm{ln}\left(\frac{L\left(\stackrel{^}{\pi}\right)}{L\left({\stackrel{^}{\pi}}_{1}\right)}\right)\sim {\chi}_{1}^{2}\mathrm{.}$ (33)

Ultimately, VaR users are interested in being able to test simultaneously whether the hit sequence is independent and the average number of violations is correct. The conditional coverage (CC) test jointly examines whether the percentage of exceptions is statistically equal to the one expected and the serial independence of the exception indicator. A sequence of VaR forecasts at-risk level $\alpha $ has the correct conditional coverage if $\left\{{I}_{t}\left(\alpha \right)\mathrm{;}t=\mathrm{1,}\cdots ,T\right\}$ is an independent and identically distributed sequence of Bernoulli random variables with parameter $\alpha $ . In this test, the null hypothesis takes the form: ${H}_{0}:{\pi}_{01}={\pi}_{11}=\alpha $ . To test this hypothesis a joint test of independence of the hit sequence and the unconditional coverage of the VaR forecasts is required. Thus, under the null hypothesis of the expected proportion of exceptions equals $\alpha $ and the failure process is independent, the appropriate likelihood ratio test statistic is of the form:

$L{R}_{cc}=-2\mathrm{ln}\left(\frac{L\left(\alpha \right)}{L\left({\stackrel{^}{\pi}}_{1}\right)}\right)\sim {\chi}_{2}^{2}$ (34)

Under the null hypothesis the likelihood ratio statistic, $L{R}_{cc}$ , is asymptotically chi-square distributed with two degrees of freedom. Note also that $L{R}_{cc}=L{R}_{uc}+L{R}_{ind}$ .
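The independence and conditional coverage tests can be sketched from the transition counts ${T}_{ij}$ , again assuming `scipy`; the log-likelihood guard is an implementation choice:

```python
import numpy as np
from scipy import stats

def christoffersen_tests(hits, alpha):
    """Independence (Eq. (33)) and conditional coverage (Eq. (34)) LR
    tests from the first-order Markov transition counts T_ij (number
    of observations with value j following value i)."""
    h = np.asarray(hits)
    pairs = list(zip(h[:-1], h[1:]))
    T00 = sum(1 for a, b in pairs if a == 0 and b == 0)
    T01 = sum(1 for a, b in pairs if a == 0 and b == 1)
    T10 = sum(1 for a, b in pairs if a == 1 and b == 0)
    T11 = sum(1 for a, b in pairs if a == 1 and b == 1)
    pi01 = T01 / max(T00 + T01, 1)          # P(violation | no violation)
    pi11 = T11 / max(T10 + T11, 1)          # P(violation | violation)
    pi = (T01 + T11) / max(len(pairs), 1)   # pooled violation probability

    def ll(p, n0, n1):  # Bernoulli log-likelihood, guarded against log(0)
        return n0 * np.log(max(1 - p, 1e-300)) + n1 * np.log(max(p, 1e-300))

    ll_markov = ll(pi01, T00, T01) + ll(pi11, T10, T11)
    lr_ind = -2.0 * (ll(pi, T00 + T10, T01 + T11) - ll_markov)
    lr_cc = -2.0 * (ll(alpha, T00 + T10, T01 + T11) - ll_markov)
    return (lr_ind, 1.0 - stats.chi2.cdf(lr_ind, df=1)), \
           (lr_cc, 1.0 - stats.chi2.cdf(lr_cc, df=2))

# A perfectly alternating hit sequence is maximally dependent in the
# Markov sense, so the independence test should reject it strongly.
(lr_ind, p_ind), (lr_cc, p_cc) = christoffersen_tests([0, 1] * 50, alpha=0.5)
```

Note that $LR_{cc} = LR_{uc} + LR_{ind}$ holds by construction, so `lr_cc` can never fall below `lr_ind`.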

3. Data Description and Empirical Results

3.1. Data Description

In this study, the data set consists of daily closing prices (in US dollars) of the eight largest cryptocurrencies by market capitalization, traded from August 8, 2015 to September 16, 2020 (1859 observations). The data are publicly available online at https://coinmarketcap.com/coins/. We only considered cryptocurrencies with no missing values, which resulted in eight cryptocurrencies: Bitcoin (BTC), Ethereum (ETH), Ripple (XRP), Litecoin (LTC), Monero (XMR), Stellar (XLM), Dash (DASH) and Tether (USDT). Figure 1 presents time series plots of the daily trading prices over this period. The sample period covers both relatively volatile and stable phases, with price fluctuations and occasional extreme price jumps. All the cryptocurrencies display visible patterns of volatility clustering over time.

To visualize key features of each cryptocurrency series, daily returns were computed as ${r}_{t}=\mathrm{ln}\left({p}_{t}/{p}_{t-1}\right)$ , where ${p}_{t}$ denotes the daily closing price at time *t*. This transformation yields stationary return series while preserving the heteroscedasticity to be modelled. Figure 2 presents the dynamic evolution of the log-return series for all cryptocurrencies and illustrates the stylized pattern of time-varying volatility clustering, where periods of high (low) volatility tend to be followed by further periods of high (low) volatility, a pattern which gives rise to the leptokurtosis observed in the unconditional return distributions.
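The return transformation is a one-liner; the closing prices below are hypothetical stand-ins for the coinmarketcap.com series:

```python
import numpy as np

def log_returns(prices):
    """Daily log returns r_t = ln(p_t / p_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.log(p[1:] / p[:-1])

# Four hypothetical daily closing prices give three daily returns.
r = log_returns([100.0, 110.0, 99.0, 99.0])
```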

Figure 1. Daily prices of the eight major cryptocurrencies for the period starting from August 8, 2015 to September 16, 2020.

Figure 2. Daily logarithmic returns of the eight major cryptocurrencies for the period starting from August 8, 2015 to September 16, 2020.

Table 1 reports summary statistics of the cryptocurrencies and statistical test results. All cryptocurrencies record a mean close to zero, negative in every case except Tether, while the standard deviations are all slightly above zero. Except for Bitcoin, all the other cryptocurrencies are significantly negatively skewed. Additionally, all series exhibit excess kurtosis, implying fat tails and non-normal distributions. Consistent with this, the Jarque-Bera test rejects normality for all the cryptocurrencies. To test for stationarity of the return series, the Augmented Dickey-Fuller (ADF) test is used. The ADF test rejects the null hypothesis of a unit root, meaning that all the return series are stationary. The significant autoregressive conditional heteroscedasticity (ARCH) statistics confirm the presence of ARCH effects in all the cryptocurrencies studied, and the Ljung-Box Q statistics on the squared returns at lag (20) corroborate these significant ARCH effects.
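The moment and normality statistics in Table 1 can be reproduced with standard tools; the sketch below uses simulated Student-*t* data in place of the actual return series, purely to illustrate the procedure:

```python
import numpy as np
from scipy import stats

# Simulated heavy-tailed returns (Student-t with 5 df) standing in
# for the cryptocurrency series, which come from coinmarketcap.com.
r = stats.t.rvs(df=5, size=2000, random_state=np.random.default_rng(0))

sample_skew = stats.skew(r)
excess_kurt = stats.kurtosis(r)        # Fisher definition: normal -> 0
jb_stat, jb_p = stats.jarque_bera(r)   # H0: the data are normal
```

With heavy-tailed data and a sample this large, the Jarque-Bera test rejects normality decisively, mirroring the pattern reported in Table 1.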

Table 1. Descriptive statistics and statistical test results for eight cryptocurrencies for the period starting from August 8, 2015 to September 8, 2020.

Note: Std Dev = standard deviation; J.B. = Jarque-Bera; ARCH = autoregressive conditional heteroscedasticity; ADF = augmented Dickey-Fuller. * indicates that the J.B., ADF, ARCH(5), ARCH(10), *Q*(5), *Q*(10), *Q*^{2}(5) and *Q*^{2}(10) Ljung-Box statistics are statistically significant at the 1% level.

3.2. Parameter Estimates of GARCH-Type Models

In this section, results from the estimated GARCH-type models are presented. The sample period is divided into two sub-periods: the in-sample period extending from October 13, 2015 to December 3, 2018, and the out-of-sample period covering December 4, 2018 to November 18, 2019. In-sample returns are used to estimate the parameters of the selected models, subject to the assumptions and constraints of each model. The estimated in-sample parameters are then applied to forecast the volatilities for both the in-sample and out-of-sample periods. First, we estimate GARCH, EGARCH, GJRGARCH, APARCH, CSGARCH, NGARCH and NAGARCH models, informed by the long-memory test results, to account for the long-memory properties of the cryptocurrency returns.

Table 2 presents the BIC values of the fitted GARCH-type specifications: GARCH, EGARCH, GJRGARCH, APARCH, CSGARCH, NGARCH and NAGARCH under different error distributions. The skewed generalized error distribution attains the minimum BIC values for Bitcoin, Ethereum, Ripple and Litecoin. Overall, the skewed Student’s-*t* distribution, which accounts for both asymmetry and heavy tails, is selected as the most suitable distribution for modelling this data set. The results thus justify the use of a fat-tailed distribution for the innovations.

Table 3 (*Panel* *A*) reports the estimation results of the NGARCH model with the selected innovations distribution. The mean parameters are not significantly different from zero for all eight cryptocurrency return series, and the estimated GARCH components are covariance stationary. The GARCH (1, 1)-type results reveal that the lagged conditional volatility for each cryptocurrency is statistically significant. In addition, the squared-shock term in the variance equation is statistically significant, meaning that lagged volatility and current news are immediately reflected in the prices of the cryptocurrencies. The parameter estimates vary under different distributional assumptions, implying that the distributional assumption has a certain effect on the estimation process. The skewness parameter, having a very low *p*-value, is highly significant. Moreover, the shape parameters for both the Student’s-*t* and skewed-*t* distributions are significant, confirming the presence of heavy tails in the series. The results further show that the *p*-values of the GARCH parameters are very low except for LTC and ETH, indicating that these parameters are also highly significant.

For the goodness-of-fit tests (*Panel* *B*), the diagnostic results reveal that the NGARCH specification filters out the serial autocorrelation, conditional volatility dynamics and leverage effects present in the cryptocurrency return series. The Box-Pierce and ARCH-LM tests do not reject the null hypothesis of a correct model specification, demonstrating the ability of the NGARCH model to account for the major stylized facts of price behaviour. However, the NGARCH model alone fails to capture the extreme events commonly experienced in cryptocurrency markets. The standardized residuals of the NGARCH model are approximately independently and identically distributed (i.i.d.), which is a standard requirement for applying extreme value theory. EVT methods can therefore be applied to the residual series. In what follows, we use the NGARCH-EVT approach to compute the one-day-ahead VaR for all cryptocurrencies, and we evaluate the forecast performance of this model over the out-of-sample period using appropriate performance criteria.

Table 2. The Bayesian Information Criterion (BIC) for GARCH model selection.

Table 3. Estimation results of NGARCH (1, 1) models with selected innovations distribution.

3.3. Parameter Estimates of the GARCH-EVT Model

In extreme value theory (EVT) modelling, the peaks-over-threshold (POT) approach is normally used to estimate the parameters of the generalized Pareto distribution (GPD). The POT method depends critically on the choice of threshold. In this study, the optimal threshold is set at the 90% quantile of the observations to estimate the GPD parameters for both the left and right tails. Table 4 presents the parameter estimates of the fitted GPD, with their corresponding standard errors in brackets, for both tails of the standardized residuals of the cryptocurrencies. For the right tail, the shape parameter ( $\xi$ ) is positive and significantly different from zero for all cryptocurrencies, indicating heavy-tailed distributions belonging to the Fréchet class. For the left tail, the shape parameter is negative for all cryptocurrencies except Ethereum. The scale parameters are positive and significant for all cryptocurrencies in both tails.
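A sketch of the POT fit, assuming `scipy` and using simulated heavy-tailed data in place of the NGARCH-filtered residuals:

```python
import numpy as np
from scipy import stats

# Simulated standardized residuals stand in for the NGARCH-filtered
# series; Student-t(4) gives the heavy right tail that POT targets.
z = stats.t.rvs(df=4, size=2000, random_state=np.random.default_rng(42))

u = np.quantile(z, 0.90)   # threshold at the 90% quantile, as in the paper
excesses = z[z > u] - u    # peaks over the threshold

# Fit the GPD to the excesses, fixing the location at zero.
xi, loc, beta = stats.genpareto.fit(excesses, floc=0)
```

For data this heavy-tailed, the estimated shape $\xi$ comes out positive, consistent with a Fréchet-class tail; the same fit applied to $-z$ would give the left-tail parameters.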

3.4. Forecasting Performance Analysis

Table 4. Parameter estimates of the Generalized Pareto model for the selected threshold u for the daily log returns of the eight cryptocurrencies.

To evaluate the out-of-sample performance of the VaR forecast models, we used a rolling-window scheme with a window size of 1358 days, reserving 500 days for the out-of-sample forecast period. The evaluation is based on one-step-ahead forecasts produced from a rolling sample: the estimation window of 1358 observations is kept at a constant size and rolled forward, with the models re-estimated every 25 days. The advantage of the rolling-window procedure is two-fold: it assesses both the stability of the model over time and the accuracy of the forecasts, where stability amounts to examining whether the coefficients are time-invariant. The one-day-ahead VaR is calculated at the 95% and 99% confidence levels. Both confidence levels are used for out-of-sample backtesting of VaR, following the Basel II backtesting requirements, which stipulate that VaR should also be backtested at confidence levels other than 99%. Backtesting is used to evaluate the relative performance of the conventional GARCH models and the GARCH-EVT approach in forecasting Value at Risk. Kupiec’s unconditional coverage and Christoffersen’s conditional coverage tests are applied at the 95% and 99% confidence levels, which are considered to reflect extreme market conditions.
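The rolling scheme can be sketched as follows; the historical-simulation quantile below is a stand-in for the full GARCH-EVT refit, used only to illustrate the mechanics of a fixed-size window re-estimated every 25 days:

```python
import numpy as np

def rolling_var_forecasts(returns, window=1358, refit_every=25, alpha=0.01):
    """Sketch of the rolling-window scheme: keep the most recent
    `window` observations, re-estimate every `refit_every` days, and
    emit a one-day-ahead VaR forecast each day. A simple empirical
    quantile stands in for the GARCH-EVT fit."""
    returns = np.asarray(returns)
    forecasts = []
    q = 0.0
    for t in range(window, len(returns)):
        if (t - window) % refit_every == 0:        # re-estimation point
            q = np.quantile(returns[t - window:t], alpha)
        forecasts.append(-q)                       # VaR as a positive loss
    return np.array(forecasts)

# 1858 observations, roughly as in the data set:
# 1358 in-sample + 500 out-of-sample forecasts.
r = np.random.default_rng(1).standard_normal(1858) * 0.04
var_path = rolling_var_forecasts(r)
```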

Table 5 presents the VaR forecast violation percentages, with the *p*-values of the unconditional coverage tests in parentheses, for the GARCH (1, 1), EGARCH (1, 1), APARCH (1, 1), NGARCH (1, 1) and GARCH (1, 1)-EVT models with skewed-*t* distribution for the eight cryptocurrency return series. The exceedance count is obtained by counting the number of realized returns that exceed the VaR forecast and comparing this number with the expected number of exceedances. The closer the observed number of exceedances is to the expected number, the more accurate the model’s forecasts. More exceedances than expected indicate that the model underestimates Value at Risk; fewer exceedances indicate that it overestimates Value at Risk. The expected numbers of exceedances are 25 at the 95% confidence level and 5 at the 99% confidence level. The null hypothesis of Kupiec’s unconditional coverage test is that the probability of a violation equals the chosen significance level; a good model is therefore one for which the null hypothesis is not rejected. Hence, a *p*-value greater than 0.05 for the unconditional coverage test indicates that the number of violations is statistically equal to the expected number. These backtesting results demonstrate that GARCH-EVT clearly outperforms the benchmark GARCH VaR predictors.

Table 5. VaR forecast violations of the cryptocurrencies in terms of actual and expected exceedances and Unconditional Coverage (UC) results.

Table 6 presents the test statistics and *p*-values in parentheses of the conditional coverage tests for the GARCH (1, 1), EGARCH (1, 1), APARCH (1, 1), NGARCH (1, 1) and GARCH (1, 1)-EVT models with skewed-*t* distribution. For the conditional coverage test, likewise, a good model should not lead to rejection of the null hypothesis, that is, it should produce the correct number of violations and these violations should be independent. The null hypothesis of the conditional coverage test is that the probability of a violation equals the expected significance level and that violations are independently distributed over time. The empirical results suggest that the combined GARCH-EVT model performs best in estimating the out-of-sample VaR forecasts over the backtesting period, making it relatively better for forecasting VaR. This superior performance is attributed to the combined approach’s ability to appropriately capture the statistical features of the data.

Table 6. Conditional Coverage (CC) results of backtesting.

4. Conclusion

Cryptocurrencies, unlike conventional financial assets such as currency exchange rates and stock prices, are characterized by high volatility and extreme price movements. This paper employed GARCH-type models and extreme value theory to model the volatility and tail behaviour of cryptocurrency returns. Modelling the tail behaviour of cryptocurrency returns is of utmost importance for both investors and policymakers. The GARCH-EVT approach is implemented to model the tail distribution of the cryptocurrency return series and to forecast out-of-sample Value at Risk. The backtesting results demonstrate the superiority of the heavy-tailed GARCH-EVT models in forecasting out-of-sample Value at Risk. Overall, the model provides a significant improvement in forecasting Value-at-Risk over the widely used conventional GARCH models. This study can be extended by considering intra-day cryptocurrency data and more flexible models, such as the GARCH-EVT-copula model, which can also capture the dependence structure between cryptocurrencies.

References

[1] Nakamoto, S. (2008) Bitcoin: A Peer-to-Peer Electronic Cash System. Bitcoin, 4.

https://bitcoin.org/bitcoin.pdf

[2] coinmarketcap.com. (2021) Overview of Available Cryptocurrencies.

https://coinmarketcap.com

[3] Schnaubelt, M., Rende, J. and Krauss, C. (2019) Testing Stylized Facts of Bitcoin Limit Order Books. Journal of Risk and Financial Management, 12, Article No. 25.

https://doi.org/10.3390/jrfm12010025

[4] Peng, Y., Albuquerque, P.H.M., de Sá, J.M.C., Padula, A.J.A. and Montenegro, M.R. (2018) The Best of Two Worlds: Forecasting High-Frequency Volatility for Cryptocurrencies and Traditional Currencies with Support Vector Regression. Expert Systems with Applications, 97, 177-192. https://doi.org/10.1016/j.eswa.2017.12.004

[5] Fakhfekh, M. and Jeribi, A. (2020) Volatility Dynamics of Cryptocurrencies Returns: Evidence from Asymmetric and Long Memory GARCH Models. Research in International Business and Finance, 51, Article ID: 101075.

https://doi.org/10.1016/j.ribaf.2019.101075

[6] Ngunyi, A., Mundia, S. and Omari, C. (2019) Modelling Volatility Dynamics of Cryptocurrencies Using GARCH Models. Journal of Mathematical Finance, 9, 591-615. https://doi.org/10.4236/jmf.2019.94030

[7] Chu, J., Chan, S., Nadarajah, S. and Osterrieder, J. (2017) GARCH Modelling of Cryptocurrencies. Journal of Risk and Financial Management, 10, Article No. 17.

https://doi.org/10.3390/jrfm10040017

[8] Borri, N. (2019) Conditional Tail-Risk in Cryptocurrency Markets. Journal of Empirical Finance, 50, 1-19. https://doi.org/10.1016/j.jempfin.2018.11.002

[9] Gangwal, S. and Longin, F. (2018) Extreme Movements in Bitcoin Prices: A Study Based on Extreme Value Theory. ESSEC Working Paper Ser, 8, 1-17.

[10] Begušić, S., Kostanjčar, Z., Stanley, H.E. and Podobnik, B. (2018) Scaling Properties of Extreme Price Fluctuations in Bitcoin Markets. Physica A: Statistical Mechanics and its Applications, 510, 400-406. https://doi.org/10.1016/j.physa.2018.06.131

[11] Zhang, Y., Chan, S. and Nadarajah, S. (2019) Extreme Value Analysis of High-Frequency Cryptocurrencies. High Frequency, 2, 61-69.

https://doi.org/10.1002/hf2.10032

[12] Gkillas, K. and Katsiampa, P. (2018) An Application of Extreme Value Theory to Cryptocurrencies. Economics Letters, 164, 109-111.

https://doi.org/10.1016/j.econlet.2018.01.020

[13] Likitratcharoen, D., Ranong, T.N., Chuengsuksomboon, R., Sritanee, N. and Pansriwong, A. (2018) Value at Risk Performance in Cryptocurrencies. The Journal of Risk Management and Insurance, 22, 11-28.

[14] Engle, R.F. (1982) Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica, 50, 987-1007.

https://doi.org/10.2307/1912773

[15] Bollerslev, T. (1986) Generalized Autoregressive Conditional Heteroskedasticity. Journal of Econometrics, 31, 307-327. https://doi.org/10.1016/0304-4076(86)90063-1

[16] Hansen, P.R. and Lunde, A. (2005) A Forecast Comparison of Volatility Models: Does Anything Beat a GARCH (1, 1)? Journal of Applied Econometrics, 20, 873-889. https://doi.org/10.1002/jae.800

[17] Nelson, D.B. (1991) Conditional Heteroskedasticity in Asset Returns: A New Approach. Econometrica, 59, 347-370. https://doi.org/10.2307/2938260

[18] Glosten, L.R., Jagannathan, R. and Runkle, D.E. (1993) On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks. The Journal of Finance, 48, 1779-1801.

https://doi.org/10.1111/j.1540-6261.1993.tb05128.x

[19] Ding, Z., Granger, C.W. and Engle, R.F. (1993) A Long Memory Property of Stock Market Returns and a New Model. Journal of Empirical Finance, 1, 83-106.

https://doi.org/10.1016/0927-5398(93)90006-D

[20] Taylor, S. (1986) Modelling Financial Time Series. John Wiley & Sons, Great Britain.

[21] Engle, R. F. and Lee, G. (1999) A Long-Run and Short-Run Component Model of Stock Return Volatility. In: Cointegration, Causality, and Forecasting: A Festschrift in Honour of Clive WJ Granger, 475-497.

[22] Higgins, M.L. and Bera, A.K. (1992) A Class of Nonlinear ARCH Models. International Economic Review, 33, 137-158. https://doi.org/10.2307/2526988

[23] Engle, R. F. and Ng, V. K. (1993) Measuring and Testing the Impact of News on Volatility. The Journal of Finance, 48, 1749-1778.

https://doi.org/10.1111/j.1540-6261.1993.tb05127.x

[24] Azzalini, A. and Capitanio, A. (2003) Distributions Generated by Perturbation of Symmetry with Emphasis on a Multivariate Skew T-Distribution. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65, 367-389.

https://doi.org/10.1111/1467-9868.00391

[25] Theodossiou, P. (2015) Skewed Generalized Error Distribution of Financial Assets and Option Pricing. Multinational Finance Journal, 19, 223-266.

https://doi.org/10.17578/19-4-1

[26] Barndorff-Nielsen, O. (1978) Hyperbolic Distributions and Distributions on Hyperbolae. Scandinavian Journal of Statistics, 5, 151-157.

[27] Rigby, R.A. and Stasinopoulos, D.M. (2005) Generalized Additive Models for Location, Scale and Shape. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54, 507-554. https://doi.org/10.1111/j.1467-9876.2005.00510.x

[28] Balkema, A.A. and de Haan, L. (1974) Residual Life Time at Great Age. The Annals of Probability, 2, 792-804. https://doi.org/10.1214/aop/1176996548

[29] Pickands, J. (1975) Statistical Inference Using Extreme Order Statistics. Annals of Statistics, 3, 119-131. https://doi.org/10.1214/aos/1176343003

[30] Smith, R. L. (1987) Approximations in Extreme Value Theory. North Carolina University at Chapel Hill Center for Stochastic Processes.

[31] McNeil, A.J. and Frey, R. (2000) Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach. Journal of Empirical Finance, 7, 271-300. https://doi.org/10.1016/S0927-5398(00)00012-8

[32] Kupiec, P.H. (1995) Techniques for Verifying the Accuracy of Risk Measurement Models. The Journal of Derivatives, 3, 73-84.

https://doi.org/10.3905/jod.1995.407942

[33] Christoffersen, P.F. (1998) Evaluating Interval Forecasts. International Economic Review, 39, 841-862. https://doi.org/10.2307/2527341