Open access

Long Memory in the Volatility of Local Currency Bond Markets: Evidence from Hong Kong, Mexico and South Africa

Written By

Pako Thupayagale

Submitted: 01 December 2011 Published: 12 September 2012

DOI: 10.5772/50888

From the Edited Volume

Risk Management - Current Issues and Challenges

Edited by Nerija Banaitiene


1. Introduction

Investors can potentially improve the risk-adjusted performance of their portfolios by investing internationally, thereby taking advantage of the associated return and diversification benefits. The potential gains offered by emerging markets have attracted significant investor attention which, in turn, has led to substantial capital inflows to these economies. Local currency-denominated sovereign bonds have been the fastest growing segment of the emerging market space in the past few years. More recently, the global quest for yield in a context of accommodative monetary policies in the advanced economies has created a positive external environment for emerging market bonds. In addition, the secondary market for emerging market local currency debt has been supported by high interest and amortisation payments. Emerging market bonds also benefit from a track record of strong risk-adjusted returns and low correlations with other asset classes; such characteristics are attractive from a portfolio optimisation perspective. These attributes have drawn considerable attention to the analysis of the risk-return profile of these markets. In particular, recent empirical literature has focused on the characterisation of the volatility profile of emerging market bond returns. Indeed, the accurate estimation of volatility plays a central role in many applications in finance, including optimal portfolio selection (e.g., diversification strategies), valuation of derivatives (e.g., option pricing) and risk management (e.g., value-at-risk calculation). These applications have motivated an extensive empirical literature on volatility modelling.

While recent empirical literature has focused on the characterisation of the volatility profile of a variety of asset classes, and in particular on the long memory properties of these assets, surprisingly little attention has been devoted to the analysis of fixed income markets, especially in emerging markets. The empirical literature on long memory dynamics in fixed income volatility and its implications for portfolio and risk management appears to be limited, and most of the extant work is concentrated on the advanced economies. For example, several authors have examined various aspects of long memory behaviour in interest rates and yield spreads. [1-3] The purpose of this chapter is to augment this line of analysis concerning the long memory attributes of fixed income volatility in emerging market local currency debt markets, in light of investor interest in the potential alpha generation of these markets, amid wider capital inflows into emerging market bonds as investors search for yield.

This study will focus on government bond markets from Hong Kong, Mexico and South Africa. According to the most recent survey from the Emerging Markets Trading Association (EMTA), these three local currency debt markets are among the most vibrant and actively traded in emerging markets. [4] As a result of their comparatively high liquidity, developed institutional frameworks and credible monetary policies, these markets are of particular interest to investors. Indeed, local currency bond markets have emerged as an important asset class in many emerging markets, a point which becomes salient in the current low-yield environment, where investors targeting high returns and diversification benefits have channelled capital to emerging bond markets such as these.

As a result of regulatory initiatives and various reforms, emerging market local currency sovereign debt markets have grown rapidly in size and sophistication. According to [4], emerging market debt trading volumes stood at USD1.8 trillion in the third quarter of 2011, a 3 percent increase from the USD1.7 trillion reported for the second quarter of 2011. Turnover in local market instruments was USD1.3 trillion in the third quarter of 2011 (i.e., 76 percent of total reported volume). Mexican and Hong Kong securities were the first and second most frequently traded emerging market debt in the third quarter of 2011, at USD282 billion and USD176 billion, respectively, compared to USD136 billion and USD201 billion a year earlier. The next most frequently traded local markets debt were those of Brazil (USD160 billion), South Africa (USD113 billion) and Turkey (USD69 billion).


Domestic institutional investors are typically the largest investors in local currency bond markets. For example, pension funds tend to have long-term liabilities which are typically funded by investments in long-term investment grade securities that provide a prudent risk-return profile. As a result, the examination of long memory in volatility would appear to be of interest to institutional and other long-term investors. Furthermore, it has been shown that it is important to model the long memory volatility structure when pricing derivative contracts with long maturities. [5] In addition, in order to assess future returns from both active and passive investment strategies, or the need for policy intervention, especially over long horizons, it is important to forecast volatility. These applications have motivated an extensive empirical literature on modelling long memory dynamics in asset return data. Analysis of the long-term volatility dynamics of emerging market local currency bonds, however, appears limited. Therefore, this study attempts to help fill this gap by addressing a range of issues relating to the estimation and forecasting of fixed income return volatility, especially over long horizons.

Against this background, this chapter has three objectives. First, evidence of long memory in volatility within leading emerging fixed income markets is investigated. In particular, the existence of long memory behaviour in the volatility of returns from Hong Kong, Mexico and South Africa is examined; little or no previous research appears to have established the long memory properties of these markets. In order to estimate the long memory parameter d, this study makes use of methods based on wavelets, which have recently been used to capture the fractal structure of high frequency data. [6] Second, the existence of long memory dynamics in bond volatility data (i.e., a fractal structure in the data) is further investigated in order to test whether the extraction of this long-run component can be exploited to generate improved volatility forecasts, especially over long horizons. The long memory property is examined in order to determine whether it helps deliver more accurate forecasts over a long(er) horizon. Third, an important and topical area of research concerns the calculation of value-at-risk (VaR) in financial markets, a methodology widely used by financial institutions and regulatory agencies to measure, monitor and manage market risk. This analysis examines whether long memory volatility estimates can deliver more accurate VaR estimates relative to standard models (i.e., GARCH and RiskMetrics).

In total, the findings of this investigation provide a range of volatility estimates and forecasts which could potentially inform portfolio management strategies and guide policymaking. While most empirical studies focus on the United States and other developed markets, recent research has begun to look at emerging markets; even so, evidence for these markets remains limited. This analysis contributes to the empirical literature by focusing on various aspects of long memory behaviour in local currency debt markets, and its findings complement and provide an interesting comparison to existing studies.

The rest of the chapter is structured as follows. Section II presents a description of long memory in time series. Section III introduces the data. Section IV presents the empirical methodology and associated results, beginning with the standard GARCH model, which is often used to provide initial evidence of long memory behaviour, and then turning to wavelet analysis, the discrete wavelet transform and the estimator employed, along with the relevant findings. Section V provides the forecast evaluation techniques used and the out-of-sample forecast results. Section VI considers the evaluation of value-at-risk in the context of the Basle adequacy criteria. Section VII concludes and identifies topics for further research.


2. Long memory in time series

Interest in long memory (or long range dependent) processes can be traced to the examination of data in the physical sciences. Formal models with long memory initially pertained to hydrological studies investigating how to regularise the flow of the Nile river in view of its nonperiodic (flooding) cycles. [7] This feature was described as the “Joseph effect”, alluding to the biblical account in which seven years of plenty were to be followed by seven years of famine. [8] In this sense, long memory processes concern observations in the remote past that are highly correlated with observations in the distant future. The implications of long memory for financial markets were related to the use of Hurst’s ‘rescaled range’ statistic to detect long memory behaviour in asset return data. [9] It was observed that if security prices display long memory then the arrival of new market information cannot be arbitraged away, which in turn means that martingale models for security prices cannot be derived through arbitrage. As such, long memory processes can be characterised as having fractal dimensions, in the form of non-linear behaviour marked by distinct but nonperiodic cyclical patterns and long-term dependence between distant observations. [10]

A variety of measures have been used to detect long memory in time series. For example, in the time domain, long memory is associated with a hyperbolically decaying autocovariance function. Meanwhile, in the frequency domain, the presence of long memory is indicated by a spectral density function that approaches infinity near the zero frequency; in other words, such series display power at low frequencies. [11] Finally, a pattern of self-similarity in the aggregated sequences of a time series is an indicator of long memory (this refers to the property of a self-similar process, in which different time aggregates display the same autocorrelation structure). These notions have led several authors to develop stochastic models that capture long memory behaviour, such as the fractionally-integrated I(d) time series models. [12-13] In particular, fractional integration theory asserts that the fractional difference parameter, which indicates the order of integration, is not an integer value (0 or 1) but a fractional value. Fractionally integrated processes are distinct from both stationary and unit-root processes in that they are persistent (i.e., they reflect long memory) but are also mean reverting, and as a consequence they provide a flexible alternative to standard I(1) and I(0) processes. [14] Specifically, long memory corresponds to d ∈ (0, 0.5), while for d > 0.5 the series is nonstationary and for d ∈ (−0.5, 0) the series is antipersistent.
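To make the role of d concrete, the fractional difference operator (1 − L)^d can be expanded into a series of weights that decay hyperbolically rather than geometrically, which is precisely why distant observations retain influence. The following minimal sketch (Python, assuming only NumPy; the choice d = 0.3 is purely illustrative) computes these weights:

```python
import numpy as np

def frac_diff_weights(d: float, n_lags: int) -> np.ndarray:
    """Weights of the binomial expansion of (1 - L)^d.

    w_0 = 1 and w_k = w_{k-1} * (k - 1 - d) / k; for 0 < d < 0.5 the
    weights decay hyperbolically (roughly k^(-d-1)), so distant lags
    keep a non-negligible influence -- the hallmark of long memory.
    """
    w = np.empty(n_lags)
    w[0] = 1.0
    for k in range(1, n_lags):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# Illustrative value d = 0.3: note how slowly the weights shrink
# compared with the geometric decay of a stationary ARMA process.
print(frac_diff_weights(0.3, 12))
```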

Since non-zero values of the fractional differencing parameter imply dependence between distant observations, considerable attention has been directed to the analysis of fractional dynamics in financial time series data. Indeed, long memory behaviour has been reported in the returns of various asset classes. [15] Against this background, a rapidly expanding set of models has been developed to capture long memory dynamics in asset return data.


3. Data description

The data analysed in this study are obtained from the global bond index (GBI) series for emerging markets (EM) compiled by JP Morgan. In particular, the fixed income data comprise daily total returns for Hong Kong, Mexico and South Africa from December 31, 2001 to April 9, 2012, representing 2571 observations.

More formally, the change in the local bond index B can be expressed as:

B_t / B_{t-1} − 1 = ℓ_t + γ_t    (1)

where ℓ_t is the local currency return and γ_t is the currency return, with ℓ_t = −(yield_t − yield_{t-1}) × DV01_t + coupon return_t and γ_t = (1 + ℓ_t)(FX_t / FX_{t-1} − 1), where the coupon return is the return derived from the interest payment made on the fixed income product. Therefore, when an investor buys a local market index, equation (1) suggests that fixed income returns can be decomposed into the predictable coupon return, FX changes, and changes in local yields. Furthermore, in order to compute return volatility, this analysis focuses on squared daily returns as a proxy for the volatility of the selected emerging markets. In addition, the volatility series is standardised prior to further analysis.
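As a minimal numeric illustration of equation (1), the sketch below plugs hypothetical values into the decomposition; the DV01 figure and all inputs are invented for illustration and are not drawn from the GBI-EM data:

```python
# Hypothetical one-day inputs (illustrative only, not GBI-EM data).
yield_change  = -0.0005   # yield_t - yield_{t-1}: a 5 basis point rally
dv01          = 4.2       # assumed index DV01 (return sensitivity to yield)
coupon_return = 0.0003    # accrued coupon return for the day
fx_change     = 0.0020    # FX_t / FX_{t-1} - 1: local currency appreciation

local_return    = -yield_change * dv01 + coupon_return  # l_t in equation (1)
currency_return = (1.0 + local_return) * fx_change      # gamma_t in equation (1)
total_return    = local_return + currency_return        # B_t / B_{t-1} - 1

print(f"local {local_return:.5f} + currency {currency_return:.5f} "
      f"= total {total_return:.5f}")
```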


4. Empirical methodology and results

4.1. Preliminary observations

Table 1 presents the time series properties of the data using some basic methods. The results of the Augmented Dickey-Fuller (ADF) unit root test offer evidence in favour of stationary fixed income returns. While this test may be deficient in terms of its ability to capture an order of integration that is not an integer, the finding of stationary bond returns is consistent with many previous studies. [15] However, based on the standard normality and Lagrange Multiplier ARCH tests, the fixed income return data exhibit non-normality and ARCH effects. [16-17] These non-white noise characteristics of the data motivate estimation of a GARCH(1,1) model under the assumption of a Student-t distribution.

                         Mexico       South Africa   Hong Kong
Mean                     0.040496     0.040533       0.017929
Standard deviation       0.339043     0.381478       0.186277
Skewness                 0.538071     -0.220239      -0.008431
Kurtosis                 20.74158     8.738281       6.603305
Normality test           33829**      3546**         1390**
ARCH (5) test            64.97**      92.71**        72.38**
ARCH (10) test           42.23**      53.92**        47.92**
ADF unit root test:
  Constant               -46.72**     -33.53**       -61.55**
  Constant & trend       -46.81**     -32.88**       -61.57**

Table 1.

Description of the Data

4.2. GARCH(1,1) model

The GARCH (1,1) specification comprises a return (or mean) and a variance equation. In particular, the returns generating process can be described by:

r_t = μ + ε_t,  where ε_t | Φ_{t-1} ~ N(0, h_t)    (2)

where r_t denotes the returns process, which may include autoregressive and moving average components, and ε_t is the error term, which is assumed to be normally distributed with zero mean and variance h_t, given the information set Φ_{t-1}. The conditional variance is modelled as:

h_t = ω + α ε²_{t-1} + β h_{t-1}    (3)

For h_t to be well-defined, ω, α and β are constrained to be non-negative. In addition, the unconditional variance is given by σ² = ω / (1 − α − β), and for a finite unconditional variance to exist, α + β < 1. Furthermore, in the GARCH model the effect of a shock on volatility decays exponentially over time, and the speed of decay is measured by the extent of volatility persistence (which is reflected in the magnitude and significance of the sum of the α and β parameters).
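As a sketch of how such a model can be estimated in practice, the snippet below fits a GARCH(1,1) with Student-t errors using the third-party Python `arch` package (an assumption of this illustration; the chapter does not specify the software used). The simulated input simply stands in for a daily GBI-EM return series:

```python
import numpy as np
from arch import arch_model  # third-party 'arch' package, assumed installed

# Placeholder input: in the study this would be a daily total return series.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=6, size=2571) * 0.3

# GARCH(1,1) with a constant mean and Student-t errors, as motivated by the
# non-normality and ARCH effects reported in Table 1.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

# Volatility persistence alpha + beta; values near one, as in Table 2,
# are suggestive of a long memory component.
persistence = result.params["alpha[1]"] + result.params["beta[1]"]
print(f"persistence (alpha + beta): {persistence:.4f}")
```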

The GARCH(1,1) model estimates are reported in Table 2. The results confirm the earlier diagnostics on the importance of GARCH effects by showing that the ARCH and GARCH terms are all statistically significant. The parameters of the conditional variance equations are all positive and therefore satisfy the positivity constraint of the GARCH(1,1) model. Evidence of persistence in variance is reflected in the magnitude and significance of the ARCH and GARCH terms (indeed, the closer their sum is to unity, the greater the degree of persistence). Therefore, in order to obtain an indication of long memory in fixed income return volatility, the level of volatility persistence (i.e., α + β) is assessed.

The results indicate that volatility in these markets is very persistent, with the level of volatility persistence being 0.9775 for Mexico, 0.9782 for South Africa and 0.9912 for Hong Kong. This underscores the highly persistent nature of shocks to volatility, which in turn is suggestive of a long memory component in volatility behaviour in these fixed income markets. The models’ adequacy has also been checked by applying the Box-Pierce Q statistic to the standardised and squared standardised residuals. These basic diagnostics indicate that the GARCH models are well-specified.

4.3. Wavelet analysis

To estimate the long memory parameter d of the emerging market local currency debt of Hong Kong, Mexico and South Africa, this study considers methods based on the discrete wavelet transform. Wavelet analysis plays an important role in the characterisation of time series by detecting scaling structures in data. More precisely, since wavelets are localised in both time and frequency, they provide a means to collect information on both the frequency and time characteristics of a time series. In fact, wavelets are already widely used as detectors of patterns in areas as diverse as digital signal processing and exploration geophysics. In the empirical finance literature they have recently been used to determine time-dependence in asset return data by comparing the scale decompositions of observations that exhibit significant autocorrelation between observations widely separated in time. [18] In this manner, the scaling properties of daily returns of emerging market government bonds are analysed in order to capture temporal dependencies in the volatility process.

                           Mexico              South Africa        Hong Kong
ω                          0.0029 [0.0006]**   0.0031 [0.0007]**   0.0003 [0.0001]*
α                          0.1245 [0.0166]**   0.0756 [0.0118]**   0.0510 [0.0089]**
β                          0.8530 [0.0170]**   0.9026 [0.0123]**   0.9402 [0.0095]**
Q(5) 1/                    1.2197              4.0842              1.0838
Q(5) 2/                    1.8322              5.4248              1.3785
Sign bias test             1.1775              2.1147              2.6833
Negative size bias test    1.3357              1.4692              2.2911
Positive size bias test    3.1661**            3.3072**            5.6812**
Joint test                 7.7588**            8.3955              11.3702**

Note: standard errors in brackets; 1/ and 2/ denote the Box-Pierce Q statistic on the standardised and squared standardised residuals, respectively.

Table 2.

GARCH Estimates

A wavelet is defined as a wave-like function with an amplitude that oscillates around zero and has a finite or quickly decreasing time support. These functions are well suited to locally approximating variables in time or space as they can be manipulated, by being either ‘stretched’ or ‘squeezed’, so as to mimic the series under investigation. [19] The power of wavelet analysis is that it makes it possible to decompose a time series into its high- and low-frequency components, which are localised in time. Wavelets also allow the selection of an appropriate trade-off between resolution in the time and frequency domains, whereas traditional Fourier analysis stresses resolution in the frequency domain at the expense of the time domain. [20] Wavelets therefore provide a convenient and efficient method to analyse complex signals. [21] Wavelet theory is applicable to several subjects; wavelets are especially useful where a signal (e.g., long memory) lasts for a finite time or shows markedly different behaviour in different time periods. These methods have emerged as a useful tool in the empirical finance literature, where long-run and short-run relationships can be distinguished. [22]

4.4. The discrete wavelet transform

A discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled, as is often the case in econometric analysis. A wavelet transform uses a scaling function to decompose a signal into father (Φ) and mother (ψ) wavelets, where the former represent a signal’s trend component (i.e., the approximation coefficients) and the latter represent the deviations from the trend component (i.e., the detail coefficients). The discrete wavelet series approximation to a continuous signal f(t) is given by

f(t) ≈ Σ_k a_{J,k} Φ_{J,k}(t) + Σ_k d_{J,k} ψ_{J,k}(t) + Σ_k d_{J-1,k} ψ_{J-1,k}(t) + … + Σ_k d_{1,k} ψ_{1,k}(t)    (4)

where J is the number of multi-resolution scales, k ranges from 1 to the number of coefficients in the corresponding scale, and the coefficients a_{J,k}, d_{J,k}, …, d_{1,k} are the wavelet transform coefficients. Applications of wavelet analysis to time series make use of a DWT. The DWT maps the vector f = (f_1, f_2, …, f_n)' to a vector of wavelet coefficients containing a_{J,k} and d_{j,k}, j = 1, 2, …, J, which are the approximation and detail coefficients, respectively.

In the empirical literature, Haar and Daubechies wavelets represent typical wavelets and have been used in the characterisation of the time series properties of asset return data. The Haar wavelet is the simplest wavelet and provides a basis for studying more complex wavelets.


Since a wavelet is used to decompose a given function into different scale components, it follows that each scale component can then be studied with a resolution that matches its scale. [23]

The data used in this study are discretely sampled; accordingly, the discrete wavelet transform is used, which permits the generation of the approximation coefficients, a_{j,k}, which capture the trend of a time series, and the detail coefficients, d_{j,k}, which reflect the deviations from the trend at each scale. Because the original function can be represented in terms of a wavelet expansion, data operations can be performed using the corresponding wavelet coefficients. This leads to a continuum of time-scale representations of the signal, all with different resolutions. Hence multi-resolution analysis allows the computation of the coefficients corresponding to the wavelet transform of the observed time series.
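The following sketch shows one level of a Haar DWT using the PyWavelets package (an assumed dependency; any DWT implementation would do), splitting a toy series into trend (approximation) and fluctuation (detail) coefficients of half its length:

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

signal = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 8.0, 6.0, 4.0])

# One level of the Haar DWT: pairwise (scaled) averages form the trend,
# pairwise (scaled) differences form the fluctuations around it.
approx, detail = pywt.dwt(signal, "haar")
print("approximation (trend):      ", approx)
print("detail (deviations from it):", detail)
```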

The analysis of fractionally integrated processes through the use of wavelets is based on the result that the detail coefficients of a zero mean long memory process are asymptotically normally distributed with variance σ² 2^{2(J−j)d}, where σ² is a finite constant, j is the scaling parameter and d measures long memory in the relevant volatility series. [24] To estimate d using wavelet theory, the logarithmic variance transformation regression estimator is widely used. This procedure exploits the variance of the detail coefficients at each scale, which generates a statistically consistent estimator of the long memory parameter. The estimator of the parameter of fractional integration, d, is based on the following least squares regression:

ln Var(d_{j,k}) = ln σ² + d ln 2^{2(J−j)} + ε    (5)

where Var(d_{j,k}) is the variance of the detail coefficients associated with the value of the scaling parameter j, where j = 1, …, J, and ε is a random error term. Since the variance of the detail coefficients decomposes the variance of the initial time series over different scales, this permits an analysis of the dynamics of the series at each scale.

To estimate d, this study uses multi-resolution analysis via the Haar wavelet to generate the detail coefficients of each volatility series at each value of the scaling parameter. From here, the variances of the detail coefficients at each scale are computed and the regression specified in equation (5) is performed, where the slope coefficient provides an estimate of d. To check the robustness of the results, and therefore avoid spurious conclusions of long memory dynamics, the Daubechies 4 (D4) wavelet is also examined.
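A minimal version of this estimation pipeline is sketched below, again assuming PyWavelets. It decomposes a series with the Haar wavelet, computes the log variance of the detail coefficients at each level, and reads d off the slope of the regression in equation (5). PyWavelets indexes levels from finest to coarsest, which differs from the chapter's (J − j) indexing only by an additive shift in the regressor, leaving the slope unchanged:

```python
import numpy as np
import pywt  # PyWavelets, an assumed dependency

def wavelet_d_estimate(x: np.ndarray, wavelet: str = "haar", levels: int = 6) -> float:
    """Log-variance regression estimate of the long memory parameter d.

    Regresses ln Var(d_{j,k}) on ln 2^(2 * level); under the variance
    relation in the text the slope is a consistent estimate of d.
    """
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:]                        # coarsest level first
    octave = np.arange(levels, 0, -1)           # pywt level of each detail vector
    log_var = np.log([np.var(d) for d in details])
    regressor = 2.0 * octave * np.log(2.0)      # ln 2^(2 * level)
    slope, _intercept = np.polyfit(regressor, log_var, 1)
    return slope

# Sanity check on white noise, where the true value is d = 0; the real
# inputs would be the standardised squared-return (volatility) series.
rng = np.random.default_rng(1)
print(f"d-hat on white noise: {wavelet_d_estimate(rng.standard_normal(2048)):.3f}")
```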

The regression results of this analysis are presented in Tables 3 and 4. The slope coefficient of the regression given in equation (5) provides the estimate of d. Table 3 shows that, when the Haar wavelet is used, evidence of long memory is found across all three volatility series. The long memory parameter d ranges from 0.2363 (Mexico) to 0.3423 (Hong Kong). Furthermore, the estimates are significantly different from zero for all the fixed income markets. These results indicate that volatility realisations have a predictable component insofar as distant observations in the volatility series are associated with each other, albeit over long lags. The significant size of d obtained from this model illustrates the importance of modelling long memory in fixed income data. Furthermore, the finding of d ∈ (0, 0.5) from these models contrasts with the unit root tests, which pointed to d = 0.

Volatility series   Identifier   Parameter Estimate   Standard Error   R²
Hong Kong           Intercept    1.4167**             0.1288           0.9396
                    Slope (d)    0.3423               0.0310
Mexico              Intercept    1.0344*              0.2958           0.9164
                    Slope (d)    0.2363**             0.0281
South Africa        Intercept    1.0151*              0.1989           0.9172
                    Slope (d)    0.2679**             0.0664

Table 3.

Estimates of the Long Memory Parameter using the Haar Wavelet

In econometric analysis, it is important to perform diagnostic checks in order to assess the validity of the initial estimates of d. Therefore, to avoid spurious evidence of long memory (due to the choice of wavelet employed) in the volatility process of the time series, equation (5) is re-estimated using the Daubechies 4 (D4) wavelet. These results are presented in Table 4. The results are broadly similar in magnitude to those obtained using the Haar wavelet. The noticeable exception relates to the case of South Africa, where the long memory parameter falls from 0.2679 (when the Haar wavelet is used) to 0.1784 (when the D4 wavelet is used). This notwithstanding, the results are all statistically significant. In sum, the results of this analysis suggest that bond return volatility in emerging markets is characterised by stochastic processes which have a long memory component.

Volatility series   Identifier   Parameter Estimate   Standard Error   R²
Hong Kong           Intercept    1.0822**             0.0105           0.8953
                    Slope (d)    0.3577**             0.0887
Mexico              Intercept    1.5824**             0.1996           0.9076
                    Slope (d)    0.2611*              0.1083
South Africa        Intercept    1.6585**             0.2210           0.9412
                    Slope (d)    0.1784**             0.1575

Table 4.

Estimates of the Long Memory Parameter using the Daubechies 4 Wavelet

The analysis indicates robust evidence of long memory behaviour in the return volatility of emerging market debt. Further, the wavelet methods provide a robust fit to the data, as evidenced by the R² readings presented in the final columns of Tables 3 and 4. If fixed income data exhibit long memory, then they display significant autocorrelation between distant observations. This, in turn, implies that the series realisations may have a predictable component and, perhaps, that past trends in the data can be used to predict future volatility. Therefore, attention now turns to an exploration of the forecast performance of models with long memory relative to standard volatility models.


5. Volatility forecasting

The evidence accumulated so far suggests that fixed income return volatility in emerging markets follows a long memory process. This, in turn, implies the existence of fractional dynamics in the data which may be exploited to construct potentially improved volatility forecasts, especially over longer forecasting horizons. In order to evaluate the forecasting performance of long memory models (especially over long(er) horizons) against short memory models (i.e., the GARCH model), each data set is split in half: each model is estimated for all series over the first half of the sample, and these estimates are then used to forecast volatility over the period covered by the second half. In this manner, out-of-sample forecast accuracy is evaluated. In addition to calculating daily forecasts, this study also calculates monthly forecasts using the well-known property that variance forecasts are additive, such that the sum of five daily volatility forecasts produces a weekly forecast, and the summation of weekly forecasts produces a monthly forecast.

In addition to the GARCH and long memory models, the RiskMetrics model is also considered for comparative purposes. The RiskMetrics model was popularised by the investment bank JP Morgan and is widely used by financial institutions to model and forecast volatility, especially in the context of the Basle Committee adequacy criteria. This model is essentially an exponentially weighted moving average (EWMA). Under the EWMA, the fitted variance from the model, h_t, which provides the multi-step ahead volatility forecast, is a weighted function of the immediately preceding volatility forecast and actual volatility, as given below:

h_t = λ h_{t-1} + (1 − λ) r²_{t-1}    (6)

where 0 ≤ λ ≤ 1 is the smoothing parameter, such that when λ = 0 the model reduces to a random walk process, and when λ = 1 the model is equivalent to the prior period forecast of volatility. The value of λ is determined empirically by the value that minimises the ‘in-sample’ sum of squared prediction errors. In this study λ is set to 0.94 following standard market practice, which is also consistent with previous research indicating that this value produces accurate forecasts. [25]
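A direct implementation of this recursion is straightforward; the sketch below uses squared returns as the ‘actual volatility’ input, consistent with the proxy adopted in Section 3, and initialises the recursion at the sample variance (an assumption, since the chapter does not state the initialisation):

```python
import numpy as np

def riskmetrics_variance(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """EWMA recursion of equation (6): h_t = lam * h_{t-1} + (1 - lam) * r_{t-1}^2."""
    h = np.empty_like(returns, dtype=float)
    h[0] = np.var(returns)  # initialisation is an assumption of this sketch
    for t in range(1, len(returns)):
        h[t] = lam * h[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
    return h
```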

5.1. Standard forecast evaluation

Two standard symmetric measures are used to evaluate forecast accuracy, namely, the mean absolute error (MAE) and the root mean square error (RMSE). They are defined below:

MAE = (1/τ) Σ_{t=T+1}^{T+τ} | h_t^f − r_t² |    (7)

RMSE = √[ (1/τ) Σ_{t=T+1}^{T+τ} ( h_t^f − r_t² )² ]    (8)

where τ is the number of forecast data points, h_t^f is the volatility forecast and r_t² is the proxy for volatility. Both the MAE and RMSE assume the underlying loss function to be symmetric. Furthermore, under these evaluation criteria the model which minimises the loss function is preferred.
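In code, the two loss functions and the horizon aggregation described above reduce to a few lines (a sketch; the variable names are illustrative):

```python
import numpy as np

def mae(h_forecast: np.ndarray, sq_returns: np.ndarray) -> float:
    """Equation (7): mean absolute error against the squared-return proxy."""
    return float(np.mean(np.abs(h_forecast - sq_returns)))

def rmse(h_forecast: np.ndarray, sq_returns: np.ndarray) -> float:
    """Equation (8): root mean squared error against the squared-return proxy."""
    return float(np.sqrt(np.mean((h_forecast - sq_returns) ** 2)))

def aggregate(daily: np.ndarray, horizon: int) -> np.ndarray:
    """Sum daily variance forecasts into non-overlapping blocks (e.g. horizon=5
    for weekly forecasts), exploiting the additivity of variance forecasts."""
    n = len(daily) // horizon * horizon
    return daily[:n].reshape(-1, horizon).sum(axis=1)
```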

Table 5 reports the out-of-sample performance of the estimated models based on the MAE and RMSE forecast error statistics. At the daily level, the results are not unexpected. The GARCH model dominates forecast accuracy for South Africa on the basis of both the MAE and RMSE. For Mexico, the RiskMetrics model delivers the lowest MAE, while the GARCH model is the most accurate when the RMSE is used as the criterion. For Hong Kong the GARCH process is preferred on the basis of the MAE, while, surprisingly, the long memory model delivers the best performance when the RMSE is used as a reference. However, in some cases the forecast accuracy of the models is close; for instance, at the daily level the MAE statistics of the GARCH and FIGARCH models for South Africa are virtually indistinguishable. More generally, the findings of GARCH superiority at the daily level are consistent with a wide empirical literature attesting to the superiority of the GARCH model in forecasting volatility over daily frequencies or short horizons.

Model            GARCH                   RiskMetrics             FIGARCH
                 MAE        RMSE         MAE        RMSE         MAE        RMSE
Hong Kong        1.84e-05*  2.82e-04     3.36e-05   8.09e-04     2.73e-04   1.92e-05*
Mexico           1.91e-04   3.18e-04*    4.49e-05*  5.72e-04     6.22e-05   6.73e-04
South Africa     2.63e-05*  1.83e-04*    2.82e-04   1.95e-04     2.88e-05   2.31e-04

Table 5.

Daily Forecast Results

At the monthly level (i.e., at a longer horizon) the GARCH model also delivers the most accurate results. This finding is surprising: long memory implies that widely separated observations are associated with each other, which in turn suggests that volatility realisations are connected over long lags. Yet the results show that even at comparatively longer horizons the GARCH model still delivers the most accurate volatility forecasts. Indeed, Table 6 shows that the forecast MAE statistics for Mexico and South Africa are 3.13e-03 and 3.92e-03, respectively, which are smaller than those from the long memory model. The same holds true for the forecast RMSE statistics. For Hong Kong, the GARCH model likewise produces a lower MAE and RMSE than the long memory model. This result appears to suggest that long memory models, while theoretically appealing, are not particularly helpful in deriving accurate volatility forecasts, especially over long horizons.

Model            GARCH                   RiskMetrics             FIGARCH
                 MAE        RMSE         MAE        RMSE         MAE        RMSE
Hong Kong        4.27e-04   2.26e-03*    2.35e-03*  4.09e-03     4.39e-04   4.17e-03
Mexico           3.13e-03*  4.66e-03     5.72e-03   4.31e-03*    4.58e-03   6.30e-03
South Africa     3.92e-03*  4.23e-03*    4.89e-03   4.82e-03     4.27e-03   4.31e-03

Table 6.

Monthly Forecast Results


6. Value-At-Risk evaluation

VaR is a widely used measure to capture the exposure of a portfolio to market risk. The VaR of a position describes the expected maximum loss over a target horizon within a given confidence interval due to an adverse movement in the relevant fixed income yield (or price). VaR is now widely used as an internal risk management tool by financial institutions and as a regulatory measure of risk exposure. [26] In addition, the VaR method is the cornerstone of the 1996 market risk amendment to the Basle Accord (Bank for International Settlements (BIS), 1996). The Basle Accord prescribes the VaR method so that financial institutions can meet the capital requirements to cover the market risk they incur in the course of their daily business operations. Under this framework, operational evaluation takes the form of backtesting volatility forecasts and exception reporting.

In particular, the Basle Accord stipulates that, for the purpose of calculating regulatory market risk capital, VaR estimates be calculated at the 99 percent probability level using daily data over a minimum sample period of one business year (equivalent to 250 trading days), and that these estimates be updated at least every quarter (i.e., 60 trading days). Against this background, the well-known delta-normal specification is employed:

VaR = 3 · N_α · h_f · V    (9)

where N_α is the appropriate standard normal deviate, h_f is the volatility forecast, the number three represents the minimum regulatory Basle multiplicative factor and V is the initial portfolio value. While the Basle Accord prescribes the 99 percent probability level, the 97.5 and 95 percent confidence levels are also examined for comprehensiveness and consistency with previous studies. The validity of such VaR calculations is assessed, or ‘backtested’, by comparing actual daily trading (net) losses with the estimated VaR and noting the number of ‘exceptions’, in the sense of days when the VaR estimate was insufficient to cover actual trading losses. Regulatory scrutiny is triggered where such exceptions occur frequently, and in practice this leads to a range of penalties for the financial institution concerned. [27]
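A sketch of the delta-normal calculation and the associated exception count is given below; it treats h_f as a variance forecast whose square root enters the formula (an interpretive assumption of this sketch) and uses SciPy's normal quantile for N_α:

```python
import numpy as np
from scipy.stats import norm

def delta_normal_var(h_forecast: float, value: float,
                     p: float = 0.99, multiplier: float = 3.0) -> float:
    """Equation (9): VaR = multiplier * N_alpha * sqrt(h_f) * V.

    Treats h_forecast as a variance forecast (an assumption of this sketch);
    multiplier = 3 is the minimum regulatory Basle factor.
    """
    n_alpha = norm.ppf(p)  # e.g. approximately 2.326 at the 99 percent level
    return multiplier * n_alpha * np.sqrt(h_forecast) * value

def var_failure_rate(losses: np.ndarray, var_estimates: np.ndarray) -> float:
    """Backtest: share of days on which the realised loss exceeded the VaR."""
    return float(np.mean(losses > var_estimates))
```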

In line with the rolling window approach to VaR evaluation mandated by the Basle Committee rules, initial volatility forecasts and VaR measures are constructed over intervals of 60 trading days, with the estimation sample then rolled forward and the models updated every 60 observations before the next set of volatility forecasts is produced. The first three years of data (752 observations) are used for initial model parameter estimation, leaving 1819 observations for volatility forecasting and the construction and evaluation of VaR measures. Specifically, this provides 30 sub-samples of 60 trading days in length over which VaR is assessed. The assessment of VaR performance is conducted through appraisal of the ‘out-of-sample’ VaR failure rates associated with VaR measures constructed using the forecast values derived from the GARCH, RiskMetrics and long memory models. The focus on ‘out-of-sample’ failure rates is motivated by the requirements of risk managers, who obtain VaR estimates in real time and must use parameters obtained from an already observed sample in order to evaluate the risks associated with current and future random movements in risk factors. As a result, a credible test of VaR construction methods under alternative volatility forecasting models is their performance outside the sample used to estimate the underlying parameters.

Table 7 reports the out-of-sample VaR failure rates. The results are quite diverse and highlight that in many of the markets considered the forecasting model that minimises the percentage of daily VaR exceedances is sensitive to the specification of the probability level. When the Basle Committee rules are applied (i.e., the 99 percent probability level), the results indicate that it is the GARCH and RiskMetrics models that provide the exceedance-minimising methods for the fixed income markets considered. At the 99 percent probability level the long memory model is generally the weakest performer, although in the cases of Hong Kong and South Africa it is the second best model in terms of delivering accurate VaR measures. In addition, it is important to note that in many cases the level of accuracy across models is close, as reflected by the closeness of the VaR failure rates. At the 97.5 and 95 percent probability levels model performance is more varied, with all models demonstrating varying degrees of accuracy. As a generalisation, the results are mixed, but the evidence suggests that at the Basle prudential level the simpler models provide improved VaR estimates that minimise occasions when the minimum capital requirement identified by the VaR methodology would have fallen short of actual trading losses.

           Hong Kong                      Mexico                         South Africa
Model      99%      97.5%    95%         99%      97.5%    95%          99%      97.5%    95%
RM         0.0178*  0.0347   0.0224*     0.0154   0.0326   0.0378       0.0224   0.0311   0.0218*
GARCH      0.0192   0.0256*  0.0536      0.0152*  0.0312   0.0356*      0.0192*  0.0286*  0.0261
FIGARCH    0.0185   0.0391   0.0493      0.0179   0.0297*  0.0521       0.0222   0.0303   0.0323

Table 7.

VaR Failure Rates – Out-of-Sample


7. Conclusions

Recent empirical evidence concerning the nature of volatility dynamics in fixed income markets suggests the existence of a long memory component. Since volatility in fixed income returns is an important aspect of portfolio management, it is essential to accurately characterise the time series properties of fixed income volatility, especially in the context of emerging markets, where local currency-denominated sovereign bonds have been the fastest growing market in recent years. Accordingly, the objective of this analysis was to examine the existence of long memory behaviour in the volatility structure of total return indices for the local currency bond markets of Hong Kong, Mexico and South Africa. Against this background, the long memory parameter is estimated using methods based on wavelets, which have gained prominence in recent years. Furthermore, this study has compared and evaluated the performance of a long memory model against standard volatility models (the ubiquitous GARCH and RiskMetrics processes) in order to evaluate their power in delivering accurate volatility forecasts over long(er) horizons in an out-of-sample setting. This endeavour is motivated by recognition of the importance of accurate volatility forecasts in a wide range of applications, including tactical and strategic decision making, and by the limited empirical evidence available to date for emerging fixed income markets. Finally, the performance of the standard GARCH, RiskMetrics and FIGARCH models is evaluated in the context of value-at-risk (VaR) estimation given the Basle regulatory framework.

The main findings of this research are threefold. First, evidence of long memory is conclusively demonstrated in emerging market local currency sovereign debt markets. In addition, to counteract the possibility of finding spurious evidence of long memory, a variety of wavelet forms are considered; the findings from these tests are complementary and therefore suggest that the finding of long memory is not spurious. Second, the presence of a long memory structure in the volatility of these fixed income markets suggests that volatility observations in the recent past and the remote past are associated with each other. Since the series realisations are not independent over time, past volatility may potentially be exploited to predict future volatility, especially over long horizons. Accordingly, the out-of-sample forecasting performance of the long memory model and the standard GARCH and RiskMetrics models is compared. While none of the estimated models consistently outperforms the others, a key generalisation can be made: on the basis of the forecast MAE and RMSE statistics, the information content of long memory models does not consistently generate improved volatility forecasts, especially over long horizons, relative to the standard GARCH model. Indeed, the GARCH model generally provides the most accurate forecasts at the monthly horizon. With respect to VaR estimation, the results show that both the standard GARCH and RiskMetrics models generally deliver more accurate VaR measures relative to the long memory process.

These findings have three important implications. First, the exploitation of long memory models based on wavelet analysis may not have great relevance in the context of emerging market debt in terms of delivering superior forecast performance. Second, the existence of a long memory structure in volatility is not a sufficient condition for the derivation of accurate volatility forecasts, especially over long horizons. Indeed, this research suggests that long memory models appear to be of limited practical forecast value over long horizons for Hong Kong, Mexico and South Africa. Put differently, the computational complexity of long memory modelling is not commensurate with the benefits (in terms of forecast power). Third, the results of the VaR estimation may provide guidance on more effective prudential standards for operational risk measurement and, as a result, may help ensure adequate capitalisation and reduce the probability of financial distress. The results highlight the importance of using out-of-sample forecasting techniques and the stipulated probability level for the identification of methods that minimise the occurrence of VaR exceptions. Standard models – RiskMetrics and GARCH – that are already widely used by market participants are generally shown to outperform the more computationally intensive long memory (FIGARCH) model in estimating VaR across the probability levels considered.

In sum, this research has evaluated the long memory properties of return volatility in fixed income markets. It also complements the literature on long memory models and their forecast performance that has attracted interest in other asset classes. In addition, the results of this study may potentially be used to inform portfolio and risk analysis. In particular, it is shown that in the context of VaR estimation existing models based on the GARCH and/or RiskMetrics process are more accurate (and simpler) than their long memory counterpart. Some caveats to these results exist, however. First, squared returns provide a noisy proxy for the ‘true’ volatility; in this analysis, data constraints limited the alternatives, but future research may find that the application of realised variance produces more accurate forecasts. Second, future research may also consider exploring the relevance of other long memory models, for example, models with asymmetric effects, given that market volatility is often reported as being ‘directional’, i.e., higher in a down market than in an up market.

References

  1. McCarthy J, Pantalone C, Li HC. Investigating long memory in yield spreads. Journal of Fixed Income. 2009;19(1):73-81.
  2. Schotman PC, Tschernig R, Budek J. Long memory and the term structure of risk. Journal of Financial Econometrics. 2008;6:459-97.
  3. McCarthy J, DiSario R, Saraoglu H, Li H. Tests of long-range dependence in interest rates using wavelets. Quarterly Review of Economics and Finance. 2004;44:180-9.
  4. Emerging Markets Trading Association. Third quarter 2011 debt trading volume survey: supplementary analysis. EMTA; 2011.
  5. Bollerslev T, Mikkelsen HO. Modelling and pricing long memory in stock market volatility. Journal of Econometrics. 1996;73:151-84.
  6. Gencay R, Selcuk F, Whitcher B. Scaling properties of foreign exchange volatility. Physica A. 2001;289:249-66.
  7. Hurst HE. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers. 1951;116:770-99.
  8. Mandelbrot BB, Wallis JR. Noah, Joseph, and operational hydrology. Water Resources Research. 1968;4:909-18.
  9. Mandelbrot BB. When can price be arbitraged efficiently? A limit to the validity of the random walk and martingale models. Review of Economics and Statistics. 1971;53:225-36.
  10. Mandelbrot BB. Fractals: Form, Chance and Dimension. New York: Free Press; 1977.
  11. Lo AW. Long-term memory in stock market prices. Econometrica. 1991;59(5):1279-1313.
  12. Granger CWJ, Joyeux R. An introduction to long-memory time series models and fractional differencing. Journal of Time Series Analysis. 1980;1:15-29.
  13. Hosking JRM. Fractional differencing. Biometrika. 1981;68(1):165-76.
  14. Baillie RT. Long memory processes and fractional integration in econometrics. Journal of Econometrics. 1996;73:5-59.
  15. Poon SH. A Practical Guide to Forecasting Financial Market Volatility. Chichester: John Wiley & Sons; 2005.
  16. Jarque CM, Bera AK. A test of normality of observations and regression residuals. International Statistical Review. 1987;55:163-72.
  17. Engle RF. Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica. 1982;50:987-1007.
  18. DiSario R, Li H, McCarthy J, Saraoglu H. Long memory in the volatility of an emerging equity market: the case of Turkey. Journal of International Financial Markets, Institutions & Money. 2008;18:305-12.
  19. Jensen M. Using wavelets to obtain a consistent ordinary least squares estimator of the fractional differencing parameter. Journal of Forecasting. 1999;18:17-32.
  20. Gencay R, Selcuk F, Whitcher B. An Introduction to Wavelets and Other Filtering Methods in Finance and Economics. San Diego: Academic Press; 2002.
  21. Ramsey JB. Wavelets in economics and finance: past and future. Studies in Nonlinear Dynamics & Econometrics. 2002;6(3):1-27.
  22. Aussem A, Campbell J, Murtagh F. Wavelet-based feature extraction and decomposition strategies for financial forecasting. Journal of Computational Intelligence in Finance. 1998;March/April:5-12.
  23. Mallat SG. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1989;11:674-93.
  24. Jensen M. An alternative maximum likelihood estimator of long-memory processes using compactly supported wavelets. Journal of Economic Dynamics and Control. 2000;24:361-87.
  25. Fleming J, Kirby C, Ostdiek B. The economic value of volatility timing. Journal of Finance. 2001;56:329-52.
  26. Jorion P. Value at Risk. 3rd ed. New York: McGraw-Hill; 2007.
  27. Saunders A, Cornett MM. Financial Institutions Management: A Risk Management Approach. 4th ed. New York: McGraw-Hill; 2003.

Notes

  • Analysis is on Hong Kong, Mexico and South Africa due to the availability of data.
  • The Haar transform assumes a discrete signal and decomposes the signal into two sub-signals of half its length reflecting the trend process and fluctuations from the trend process.
