
Fractional Brownian Motions in Financial Models and Their Monte Carlo Simulation

By Masaaki Kijima and Chun Ming Tam

Submitted: April 30th 2012. Reviewed: September 19th 2012. Published: March 6th 2013.

DOI: 10.5772/53568


1. Introduction

Fractional Brownian motion (fBm) was first introduced within a Hilbert space framework by Kolmogorov [1], and was further studied and given the name 'fractional Brownian motion' in the 1968 paper by Mandelbrot and Van Ness [2]. It has been widely used in various scientific fields, most notably in hydrology, as first suggested in [3]. It also plays an important role in communication technology by enriching queuing theory for the simulation of real network traffic.

In recent years, it has been steadily gaining attention in the area of finance, as the traditional stochastic volatility model driven by ordinary Brownian motion implies a geometric time-decay of the volatility smile, which is much faster than what we see in real market data (which has a hyperbolic decay). This stylized feature is called volatility persistence; see [4] for a detailed discussion of statistical tests for the existence of long-range memory (the author has also proposed a robust extension of the R/S statistic for that particular purpose). Several modeling approaches have been suggested to capture this persistence in conditional variance, either via a unit-root or a long-memory process. In order to keep the pricing framework largely intact, it is more interesting to study the long-memory process, and fBm is a particularly good match due to its similarity to ordinary Brownian motion and its Gaussian properties.

In this chapter, we will outline several approaches to simulate fractional Brownian motion with $H > 1/2$, including exact methods and approximate methods. The Hurst index $H$, first made popular by Benoit Mandelbrot, is the parameter used in the literature to generalize Brownian motion into fractional Brownian motion; we give a formal definition in Definition 1.1. We also provide a brief introduction to the truncated fractional Brownian motion (a long-memory model in continuous time) as proposed in [5,6].

1.1. Financial motivation and backgrounds

Numerous empirical studies have pointed out that, in options markets, the implied volatility backed out from the Black-Scholes equation displays volatility skews or smiles; the smile effect, which is well known to practitioners, refers to the U-shaped distortion of the implied volatility surface.

Hull and White [7] and Scott [8] proposed capturing this feature of volatility with a stochastic regime, known as the stochastic volatility model:

$$\begin{cases} \dfrac{dS(t)}{S(t)} = \mu(t,S_t)\,dt + \sigma(t)\,dB_1(t) \\ d\ln\sigma(t) = k\big(\theta - \ln\sigma(t)\big)\,dt + \nu\,dB_2(t) \end{cases} \qquad (1)$$

Here, $S_t$ is the asset price and $\sigma(t)$ is the instantaneous volatility at time $t$, and $\{B_1(t), B_2(t)\}$ are ordinary standard Brownian motions. Hull and White [7] have shown that the price of a European option at time $t$ with exercise date $T$ is the conditional expectation of the Black-Scholes option pricing formula, where the constant volatility of the original formula is replaced by its quadratic average over the period $[t,T]$:

$$\sigma_{t,T}^2 = \frac{1}{T-t}\int_t^T \sigma^2(u)\,du \qquad (2)$$

Such models are successful at capturing the symmetric smile and the skewness of the implied volatility by imposing relations between the driving Brownian motions in (1): a symmetric smile is explained by independence, while skewness can be explained by linear correlation.

Due to the temporal aggregation effect evident in (2), however, the smile effect deteriorates with time-to-maturity, since the temporal aggregation gradually erases the conditional heteroskedasticity; in the standard stochastic volatility setup, this decay is much faster than what is observed in market data. The phenomenon of a slowly decaying volatility smile is known as volatility persistence (long-range dependence of the volatility process). This phenomenon is particularly pronounced for high-frequency data, for which the conditional variance process displays near unit-root behavior.

Furthermore, the existence of this phenomenon is corroborated by a large body of research pointing out that the conditional volatility of asset returns also displays long-range dependence: [9-12] discuss extensively the evidence of this phenomenon in empirical data, and Bayesian estimation of stochastic volatility models in [13] shows similar patterns of persistence.

Motivated by this inadequacy, long-memory processes were deemed a more appropriate enrichment for this purpose. Fractional Brownian motion is a prime candidate among long-memory processes, given its tractability and similarity to ordinary Brownian motion: both are self-similar with a similar Gaussian structure. For discussions of estimation and evidence of long-range dependence in the conditional variance of asset returns, the reader is referred to [10] and Section 3.1 of [14].

Now, we provide a formal definition of fractional Brownian motion (fBm). We adopt the definition as given in [15].

Definition 1.1 Let $H \in (0,1)$ be a constant. A fractional Brownian motion $\{B_H(t)\}_{t \ge 0}$ with Hurst index $H$ is a continuous and centered Gaussian process with covariance function

$$E[B_H(t)B_H(s)] = \frac12\left(t^{2H} + s^{2H} - |t-s|^{2H}\right) \qquad (3)$$

In particular, for $H = \frac12$, it reduces to the ordinary Brownian motion with $E[B_H(t)B_H(s)] = \min(t,s)$.

From equation (3) we have the following properties:

  1. $B_H(0) = 0$ and $E[B_H(t)] = 0$ for all $t \ge 0$.

  2. $B_H(\cdot)$ has stationary increments: $B_H(t+s) - B_H(s)$ has the same distribution as $B_H(t)$ for any $s, t \ge 0$.

  3. $B_H(\cdot)$ is self-similar, meaning that $B_H(Tt)$ has the same distribution law as $T^H B_H(t)$.

  4. $B_H(\cdot)$ is a Gaussian process with variance $E[B_H(t)^2] = t^{2H}$, $t \ge 0$.

  5. $B_H(\cdot)$ has continuous trajectories.

Fractional Brownian motions fall into three very different categories: $H < \frac12$, $H = \frac12$, and $H > \frac12$. The distinction is of particular importance because there is a qualitative difference between the cases $H < \frac12$ and $H > \frac12$, as we see by introducing the mathematical notion of long-range dependence.

Definition 1.2 (Long-range dependence) A stationary sequence $\{X_n\}_{n\in\mathbb{N}}$ exhibits long-range dependence if the autocovariance function $\gamma(n) := \mathrm{Cov}(X_k, X_{k+n})$ satisfies

$$\lim_{n\to\infty} \frac{\gamma(n)}{c\,n^{-\alpha}} = 1 \qquad (4)$$

for some constants $c$ and $\alpha \in (0,1)$. This can be written as $\gamma(n) \sim c\,n^{-\alpha}$ as $n \to \infty$. In this case, the dependence between $X_k$ and $X_{k+n}$ decays slowly as $n \to \infty$, and

$$\sum_{n=1}^{\infty} \gamma(n) = \infty$$

If we set $X_k = B_H(k) - B_H(k-1)$ and $X_{k+n} = B_H(k+n) - B_H(k+n-1)$ and apply equation (3), we have

$$\gamma_H(n) = \frac12\Big[(n+1)^{2H} + (n-1)^{2H} - 2n^{2H}\Big]$$

where $\gamma_H(n) = \mathrm{Cov}\big(B_H(k) - B_H(k-1),\ B_H(k+n) - B_H(k+n-1)\big)$. In particular,

$$\lim_{n\to\infty} \frac{\gamma_H(n)}{H(2H-1)\,n^{2H-2}} = 1$$

Therefore, we conclude that for $H > 1/2$, $\sum_{n=1}^{\infty}\gamma_H(n) = \infty$, while for $H < 1/2$, $\sum_{n=1}^{\infty}\gamma_H(n) < \infty$. Hence, only in the case $H > 1/2$ do fractional Brownian motions display long-range dependence.
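To make this concrete, the following Python snippet (a minimal illustration of our own; the function name is not from the source) evaluates $\gamma_H(n)$, checks the asymptotic relation above, and shows that the partial sums keep growing for $H > 1/2$:

```python
import numpy as np

def fgn_autocovariance(n, H):
    """Autocovariance gamma_H(n) of fractional Gaussian noise, from equation (3)."""
    n = np.abs(n)
    return 0.5 * ((n + 1) ** (2 * H) + np.abs(n - 1) ** (2 * H) - 2 * n ** (2 * H))

H = 0.75
lags = np.arange(1, 10**6)
gamma = fgn_autocovariance(lags, H)

# gamma_H(n) ~ H(2H-1) n^(2H-2): the ratio tends to 1 for large n
print(gamma[-1] / (H * (2 * H - 1) * lags[-1] ** (2 * H - 2)))

# partial sums grow without bound for H > 1/2 (long-range dependence)
print(np.cumsum(gamma)[[10**2 - 1, 10**4 - 1, 10**6 - 2]])
```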

As pointed out in [16], the autocovariance $\gamma(\cdot)$ at large lags may be difficult to estimate in practice, so models with long-range dependence are often formulated in terms of self-similarity. Self-similarity allows us to extrapolate across time scales and deduce long-time behavior from short-time behavior, which is more readily observed.

Because we are interested in capturing the long-memory phenomenon observed in financial markets, the rest of this chapter will only concern the case of H>1/2.

1.2. Stochastic integral representation

In the original paper [2], Mandelbrot and Van Ness represented fBm as a stochastic integral with respect to ordinary Brownian motion:

$$B_H(t) = \frac{1}{\Gamma(H+\frac12)}\left(\int_{-\infty}^{0}\Big[(t-s)^{H-\frac12} - (-s)^{H-\frac12}\Big]\,dB(s) + \int_0^t (t-s)^{H-\frac12}\,dB(s)\right) \qquad (5)$$

where $\Gamma(\cdot)$ is the gamma function.

They have also included an alternative representation of the fBm which is the basis of the model in [5,6]:

$$\hat{B}_H(t) = \int_0^t \frac{(t-s)^{H-\frac12}}{\Gamma(H+\frac12)}\,dB(s) \qquad (6)$$

This version of fBm is ‘truncated’ in the sense that the integration from negative infinity to zero in equation (5) is truncated. We will refer to the model (6) as the ‘truncated fractional Brownian motion’ in the rest of this chapter. As pointed out in [2], the representation (6) was first proposed by Paul Lévy to define the fBm by the Riemann-Liouville fractional integral, while the original integral in equation (5) is the Weyl fractional integral.

The definition (6) of fBm is in general not asymptotically covariance-stationary, even though it retains self-similarity. For further technical discussion and rigorous definition of the truncated fractional Brownian motion, we refer the reader to [5].

Given this difference, analytical tools such as Malliavin calculus should be applied with care, noting the differences between the two versions of fBm; [17] offers an in-depth discussion of the theoretical differences.

2. Fractional Brownian motions in financial models

We first look at several examples that utilize the fractional Brownian motions in the realm of financial modeling.

2.1. Asset price model

In the previous section, we mentioned that the motivation for fBm in finance models is to capture the long-range dependence of the volatility process. However, it is worth discussing the possibility of applying it to the asset process itself.

In practice, it is considered that an asset process driven by fBm will result in arbitrage. The idea is that, since $B_H(t)$ is not a semimartingale in continuous time for $H \ne 1/2$, an asset process described by $B_H(t)$ violates NFLVR (no free lunch with vanishing risk), a weaker notion of no-arbitrage, and thus does not admit an equivalent martingale measure, according to Theorem 7.2 of [18]. Such findings and the construction of arbitrage can be found in Rogers [19].

In contrast, Cheridito [20] has given multiple classes of trading strategies that allow various levels of arbitrage (NFLVR, arbitrage in the classical sense, and strong arbitrage) under fBm-driven assets, and has shown that the arbitrages are all eliminated if the inter-transaction time is not zero, i.e., the classes of strategies become arbitrage-free. This assumption is reasonable in practice, given physical limits and non-zero transaction costs. For more information on arbitrage theory and its generalization, the reader is referred to [18-20].

2.2. Stochastic volatility model

As we have mentioned in the introductory section, the main motivation for fBm is to capture the volatility persistence, the stylized feature observed in empirical data. There are several prominent models involving a volatility process with fBm. Here, we just outline several of them.

2.2.1. Long-memory continuous model

In [5, 6], Comte and Renault consider the following stochastic volatility model driven by fBm:

$$\begin{cases} \dfrac{dS(t)}{S(t)} = r\,dt + \sigma(t)\,dB(t) \\ \sigma(t) = \sigma_0 e^{x(t)} \\ dx(t) = -kx(t)\,dt + \nu\,d\hat{B}_H(t) \end{cases} \qquad (7)$$

where the log-volatility term follows a fractional-OU process driven by the truncated fBm (6). The asset price process follows a geometric Brownian motion with volatility persistence.

Although Mandelbrot [2] deemed this representation to overemphasize the origin, the model (7) is easier and more robust to simulate than models driven by the ordinary fBm. A simulation example is explored in Section 5.

2.2.2. Affine fractional stochastic volatility model

In [21], Comte et al. assume that the squared-volatility process follows

$$\sigma^2(t) = \theta + X^{(\alpha)}(t) \qquad (8)$$

where $X^{(\alpha)}(t)$ is defined by the fractional integral

$$X^{(\alpha)}(t) = \int_{-\infty}^{t} \frac{(t-s)^{H-1/2}}{\Gamma(H+1/2)}\,x(s)\,ds \qquad (9)$$

Similar treatment of the fractional integration is outlined in [5]. The affine structure in (8) is similar to the one originally studied by Duffie et al. [22].

The affine structure is adopted for its extra tractability, and is thus better suited for practical option pricing and hedging than the original idea (7). In fact, Comte et al. [21] have shown that this model can better depict the difference between short- and long-memory properties in the resulting option prices.

2.2.3. Martingale expansion

Fukasawa [23] adopts and expands the asymptotic expansion technique, first proposed by Yoshida [24], of European option prices around the Black-Scholes equation by means of a perturbation technique and partial Malliavin calculus. It is shown that the logarithm of the asset process can be expressed as

$$\ln S_t = Z_t = rt - \frac12\int_0^t g(Y_s^n)^2\,ds + \int_0^t g(Y_s^n)\Big(\theta\,dW_s' + \sqrt{1-\theta^2}\,dW_s\Big)$$

with

$$Y_s^n = y + \epsilon_n W_s^H, \qquad W_t^H = \int_0^t K_H(t,s)\,dW_s'$$

Here, $r$ is the riskless rate, $\theta \in (-1,1)$, $y \in \mathbb{R}$ is a constant, $(W, W')$ is a 2-dimensional standard (independent) Brownian motion, $\epsilon_n \to 0$ is a deterministic sequence as $n \to \infty$, and $g(\cdot)$ is the stochastic volatility process, adapted to the minimal filtration.

Note that $W_t^H$ is a fractional Brownian motion with Hurst index $H$, where $K_H(t,s)$ is the kernel of the stochastic integral representation of Brownian motion over a finite interval. According to [15], for the case of interest $H > 1/2$, the kernel has the following expression:

$$K_H(t,s) = c_H\,s^{1/2-H}\int_s^t (u-s)^{H-3/2}\,u^{H-1/2}\,du$$

where

$$c_H = \left[\frac{H(2H-1)}{\beta(2-2H,\ H-1/2)}\right]^{1/2}$$

Then, according to Corollary (2.6) of Fukasawa [23], the implied volatility can be expressed as

$$\sigma\Big\{1 - \frac{\epsilon_n}{2}\rho_{13}\,d_2\Big\} + o(\epsilon_n) = a\,T^{H-1/2}\log(K/S) + \sigma + b\,T^{H+1/2} + o(\epsilon_n) \qquad (10)$$

where $d_2$ is the usual argument of $N(d_2)$ in the Black-Scholes formula, and

$$\rho_{13} = \frac{2\theta\,g'(y)\,c_H'\,T^H}{g(y)}, \qquad \sigma = g(y), \qquad a = \frac{\theta\,g'(y)\,c_H'}{\sigma}\,\epsilon_n, \qquad b = -a\Big(r - \frac{\sigma^2}{2}\Big) \qquad (11)$$

Equation (10) can be seen as an approximation in which $\epsilon_n$ is a model parameter calibrated from market data. The shape of this approximation remains largely stationary; to check its accuracy, we have no means but to perform Monte Carlo simulation of fBm.

3. Simulation with exact methods

First, we look at methods that completely capture the covariance structure and produce true realizations of fractional Brownian motion (fBm) or fractional Gaussian noise (fGn). Every method described in this section has its starting point at the covariance matrix. The approximate schemes we see later are merely numerically close to the value of fBm (or fGn), or coincide with it asymptotically. The collection of algorithms in this section is not meant to be exhaustive; for more algorithms and discussion, see [25].

3.1. Hosking method

The Hosking method utilizes the well-known conditional distribution formula for the multivariate Gaussian distribution in a recursive scheme to generate samples based on the explicit covariance structure. The method generates a general stationary Gaussian process with a given covariance structure; it is not limited to generating fBm.

More specifically, this algorithm generates an fGn sequence $\{Z_k\}$, and fBm is recovered by cumulative summation. The distribution of $Z_{n+1}$ conditioned on the previous realizations $Z_n, \dots, Z_1, Z_0$ can be computed explicitly.

Denote by $\gamma(k)$ the autocovariance function of the zero-mean process:

$$\gamma(k) := E[X_n X_{n+k}]$$

where we assume for convenience that $\gamma(0) = 1$. For $n, k = 0, 1, 2, \dots$, we have the following recursive relationship for the $(n+1)\times(n+1)$ autocovariance matrix $\Gamma(n) = [\gamma(i-j)]_{i,j=0,\dots,n}$:

$$\Gamma(n+1) = \begin{pmatrix} 1 & c(n)' \\ c(n) & \Gamma(n) \end{pmatrix} \qquad (12)$$

$$\phantom{\Gamma(n+1)} = \begin{pmatrix} \Gamma(n) & F(n)c(n) \\ c(n)'F(n) & 1 \end{pmatrix} \qquad (13)$$

where $c(n)$ is the $(n+1)$-column vector with elements $c(n)_k = \gamma(k+1)$, $k = 0,\dots,n$, and $F(n) = [\mathbf{1}(i = n-j)]_{i,j=0,\dots,n}$ is the $(n+1)\times(n+1)$ 'mirrored' identity matrix

$$F(n) = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & & & & \vdots \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}$$

Theorem 3.1 (Multivariate Gaussian distribution) Any multivariate Gaussian random vector $z$ can be partitioned into $z_1$ and $z_2$, with the mean vector and covariance matrix partitioned correspondingly:

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \qquad (14)$$

The distribution of $z_1$ conditioned on $z_2 = a$ is multivariate normal, $z_1 \mid z_2 = a \sim N(\bar\mu, \bar\Sigma)$, with

$$\bar\mu = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a - \mu_2) \qquad (15)$$

$$\bar\Sigma = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} \qquad (16)$$

If we substitute equation (12) into the partition in (14) with $\Sigma_{11} = 1$ and $\mu = 0$, we have the following expressions for the conditional distribution:

$$\mu_{n+1} = E[Z_{n+1} \mid Z_n, \dots, Z_0] = c(n)'\,\Gamma(n)^{-1}\,(Z_n, \dots, Z_1, Z_0)' \qquad (17)$$

$$\sigma_{n+1}^2 = \mathrm{Var}[Z_{n+1} \mid Z_n, \dots, Z_0] = 1 - c(n)'\,\Gamma(n)^{-1}\,c(n) \qquad (18)$$

Starting with $Z_0 \sim N(0,1)$, the samples $Z_{n+1}$ for $n = 0, 1, \dots$ can subsequently be generated.

Taking the inverse of $\Gamma(n)$ at every step is computationally expensive; the algorithm proposed by Hosking [26] computes the inverse $\Gamma(n+1)^{-1}$ recursively. The next result is due to Dieker [25].

Proposition 3.1 (Hosking algorithm for simulating fGn) Define $d(n) = \Gamma(n)^{-1}c(n)$, and apply blockwise inversion to equation (13):

$$\Gamma(n+1)^{-1} = \frac{1}{\sigma_n^2}\begin{pmatrix} \sigma_n^2\,\Gamma(n)^{-1} + F(n)d(n)d(n)'F(n) & -F(n)d(n) \\ -d(n)'F(n) & 1 \end{pmatrix} \qquad (19)$$

where $\sigma_{n+1}^2$ satisfies the recursion

$$\sigma_{n+1}^2 = \sigma_n^2 - \frac{\big(\gamma(n+1) - \tau_{n-1}\big)^2}{\sigma_n^2} \qquad (20)$$

with $\tau_n := d(n)'F(n)c(n) = c(n)'F(n)d(n)$. Also, the recursion for $d(n+1) = \Gamma(n+1)^{-1}c(n+1)$ is obtained as

$$d(n+1) = \begin{pmatrix} d(n) - \phi_n F(n)d(n) \\ \phi_n \end{pmatrix}$$

where

$$\phi_n = \frac{\gamma(n+2) - \tau_n}{\sigma_{n+1}^2}$$

Starting with $\mu_1 = \gamma(1)Z_0$, $\sigma_1^2 = 1 - \gamma(1)^2$, $\tau_0 = \gamma(1)^2$, the quantities $\mu_{n+1}, \sigma_{n+1}^2, \tau_{n+1}$ can be readily computed, and fractional Brownian motion is recovered by cumulative summation.

This algorithm is also applicable to non-stationary processes (see [27] for details). Even though the algorithm is very simple and easy to understand, and sample paths can be generated on-the-fly, its complexity is $O(N^2)$, and its computational (as well as memory) expense grows at a prohibitive speed.
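A minimal Python sketch of the Hosking scheme (our own implementation for illustration, written in the equivalent Durbin-Levinson form; it uses the fGn autocovariance of equation (3) and recovers fBm by cumulative summation):

```python
import numpy as np

def fgn_gamma(k, H):
    """fGn autocovariance (gamma(0) = 1), from equation (3)."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))

def hosking_fbm(N, H, seed=None):
    """Exact fBm at times 1,...,N via the Hosking recursion; O(N^2)."""
    rng = np.random.default_rng(seed)
    Z = np.empty(N)                      # fGn samples
    Z[0] = rng.standard_normal()
    phi = np.array([fgn_gamma(1, H)])    # prediction coefficients
    v = 1.0 - phi[0] ** 2                # innovation variance sigma_1^2
    Z[1] = phi[0] * Z[0] + np.sqrt(v) * rng.standard_normal()
    for n in range(2, N):
        # reflection coefficient for the new conditional mean/variance
        k = (fgn_gamma(n, H) - phi @ fgn_gamma(np.arange(n - 1, 0, -1), H)) / v
        phi = np.append(phi - k * phi[::-1], k)
        v *= 1.0 - k ** 2
        Z[n] = phi @ Z[n - 1::-1] + np.sqrt(v) * rng.standard_normal()
    return np.cumsum(Z)                  # fBm by cumulative summation

path = hosking_fbm(1024, H=0.75, seed=42)
```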

3.2. Cholesky method

Given that we are dealing with the covariance structure in matrix form, it is natural to go with the Cholesky decomposition: decomposing the covariance matrix into the product of a lower-triangular matrix and its conjugate transpose, $\Gamma(n) = L(n)L(n)^*$. If the covariance matrix is positive-definite (this will be addressed in the next subsection), $L(n)$ has real entries and $\Gamma(n) = L(n)L(n)'$.

Suppose that, in matrix form, the $(n+1)\times(n+1)$ product is given by

$$\begin{pmatrix} \gamma(0) & \gamma(1) & \gamma(2) & \cdots & \gamma(n) \\ \gamma(1) & \gamma(0) & \gamma(1) & \cdots & \gamma(n-1) \\ \gamma(2) & \gamma(1) & \gamma(0) & \cdots & \gamma(n-2) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma(n) & \gamma(n-1) & \gamma(n-2) & \cdots & \gamma(0) \end{pmatrix} = \begin{pmatrix} l_{00} & 0 & \cdots & 0 \\ l_{10} & l_{11} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n0} & l_{n1} & \cdots & l_{nn} \end{pmatrix} \begin{pmatrix} l_{00} & l_{10} & \cdots & l_{n0} \\ 0 & l_{11} & \cdots & l_{n1} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & l_{nn} \end{pmatrix}$$

It is easy to see that $l_{00}^2 = \gamma(0)$, and that $l_{10}l_{00} = \gamma(1)$ and $l_{10}^2 + l_{11}^2 = \gamma(0)$ on the row $i = 1$. For $i \ge 1$, the entries of the lower-triangular matrix are determined by

$$l_{i,0} = \frac{\gamma(i)}{l_{0,0}}$$

$$l_{i,j} = \frac{1}{l_{j,j}}\left(\gamma(i-j) - \sum_{k=0}^{j-1} l_{i,k}\,l_{j,k}\right), \qquad 0 < j < i$$

$$l_{i,i}^2 = \gamma(0) - \sum_{k=0}^{i-1} l_{i,k}^2$$

Given independent, identically distributed (i.i.d.) standard normal random variables $V_i$, $i = 0, \dots, n+1$, the fGn sequence is generated by

$$Z_{n+1} = \sum_{k=0}^{n+1} l_{n+1,k}\,V_k$$

Or, in matrix form, we have $Z(n) = L(n)V(n)$. If $\Gamma(n)$ is positive-definite, the non-negativity of $l_{i,i}^2$ is guaranteed and $L(n)$ is guaranteed to be real. The covariance structure of the process is captured, since

$$\mathrm{Cov}(Z(n)) = \mathrm{Cov}(L(n)V(n)) = L(n)\,\mathrm{Cov}(V(n))\,L(n)' = L(n)L(n)' = \Gamma(n) \qquad (21)$$

Even though the Cholesky method is easy to understand and implement, its computation time is $O(N^3)$, which renders this scheme extremely uneconomical in practice. To resolve this problem, we proceed to another exact method: the idea is to retain the same relation as in equation (21), but with a different decomposition.
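For concreteness, a short Python sketch of the Cholesky scheme (our own; it uses numpy's built-in factorization, which is equivalent to the $l_{i,j}$ recursions above):

```python
import numpy as np
from scipy.linalg import toeplitz

def cholesky_fbm(N, H, seed=None):
    """Exact fBm via Cholesky factorization of the fGn covariance; O(N^3)."""
    rng = np.random.default_rng(seed)
    k = np.arange(N)
    gamma = 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))
    L = np.linalg.cholesky(toeplitz(gamma))   # Gamma = L L'
    Z = L @ rng.standard_normal(N)            # fGn with covariance Gamma
    return np.cumsum(Z)                       # fBm by cumulative summation

path = cholesky_fbm(512, H=0.75, seed=1)
```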

3.3. Fast Fourier transform method

As we have seen in the last section, using the Cholesky decomposition seems the most straightforward way to simulate a Gaussian process with a given covariance structure; but it is also the most rudimentary, and thus slow. To improve the speed, the idea of utilizing the fast Fourier transform (FFT) was proposed by Davies and Harte [28] and further generalized by Dietrich and Newsam [29].

Similar to the idea before, this method seeks a decomposition of the covariance matrix, $\Gamma = GG'$, and the sample is generated as $y = Gx$ for a given standard normal random vector $x$. Then, for the given covariance structure, we have

$$\mathrm{Cov}(y) = \mathrm{Cov}(Gx) = G\,\mathrm{Cov}(x)\,G' = GG' = \Gamma$$

The idea is to 'embed' the original covariance matrix in a circulant matrix in order to carry out the FFT. Before we outline the idea, we give some details of the linkage between the Fourier transform and the circulant matrix.

Definition 3.1 (Circulant matrix) A circulant matrix is a special case of the Toeplitz matrix in which each row vector is shifted one element to the right relative to the row above (the last element is shifted back to the beginning of the row). In matrix form, an n-by-n circulant matrix can be written as

$$C = \begin{pmatrix} c_0 & c_{n-1} & c_{n-2} & \cdots & c_1 \\ c_1 & c_0 & c_{n-1} & \cdots & c_2 \\ c_2 & c_1 & c_0 & \cdots & c_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{n-1} & c_{n-2} & c_{n-3} & \cdots & c_0 \end{pmatrix}$$

Remark: As one can see, the first row/column completely describes the whole matrix, which can be put more succinctly as

$$c_{j,k} = c_{(j-k) \bmod n}, \qquad 0 \le j, k \le n-1$$

Note that the indices range from 0 to n-1 instead of the usual convention that ranges from 1 to n.

Definition 3.2 (Generating circulant matrix) We define the n-by-n generating circulant matrix by

$$G = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$$

By a simple calculation, we can see that the 'square' of the generating circulant matrix is given by

$$G^2 = \begin{pmatrix} 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \end{pmatrix}$$

From the point of view of row and column operations, this can be seen as each row of the matrix being shifted one element forward, with the bumped element placed at the end of the row (it can also be thought of as the whole rows being shifted down, with the bumped row placed back on top, though this is irrelevant for our purposes). Arbitrary powers can be deduced accordingly; this operation has a cycle of $n$ iterations.

The generating circulant matrix serves as our building block. Looking back at our original circulant matrix, we have the corresponding polynomial

$$p(x) = c_0 + c_1 x + c_2 x^2 + \dots + c_{n-1}x^{n-1} \qquad (22)$$

Then, the original circulant matrix $C$ can be expressed as

$$C = p(G) = c_0\mathbf{1} + c_1 G + c_2 G^2 + \dots + c_{n-1}G^{n-1} \qquad (23)$$

This can be verified by performing the row operation of arbitrary powers of $G$ as shown above: each successive power places the band of ones one sub-diagonal lower than the previous one.

Definition 3.3 (Fourier matrix) The Fourier matrix is introduced as

$$F = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \xi & \xi^2 & \cdots & \xi^{n-1} \\ 1 & \xi^2 & \xi^4 & \cdots & \xi^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^{n-1} & \xi^{2(n-1)} & \cdots & \xi^{(n-1)^2} \end{pmatrix}$$

Here, we define the $n$-th root of unity $\omega = e^{2\pi i/n}$, and $\xi = \bar\omega = e^{-2\pi i/n}$ is its conjugate.

The Fourier matrix can equally be defined using the positive argument $\omega$ instead of $\xi$. Also, as we will see later, some definitions include the normalizing scalar $1/\sqrt{n}$ (or $1/n$). This is analogous to the continuous counterpart of the Fourier integral, $F(f) = \int_{-\infty}^{\infty} x(t)e^{-2\pi ift}\,dt$ or $\int_{-\infty}^{\infty} x(t)e^{2\pi ift}\,dt$: either convention works, as long as the duality is upheld by the opposite sign in the exponent of the inverse Fourier transform. This duality will be restated in the diagonalization representation of the circulant matrix later.

Proposition 3.2 With the normalization $1/\sqrt{n}$, the matrix $\frac{1}{\sqrt{n}}F$ is unitary. $F$ is symmetric (i.e., $F^T = F$), and the inverse of the Fourier matrix is given by

$$F^{-1} = \frac{1}{n}\bar F = \frac{1}{n}\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{n-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{n-1} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)^2} \end{pmatrix}$$

Proposition 3.3 Multiplying the Fourier matrix by the generating circulant matrix cyclically shifts the columns of $F$:

$$FG = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \xi & \xi^2 & \cdots & \xi^{n-1} & 1 \\ \xi^2 & \xi^4 & \cdots & \xi^{2(n-1)} & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ \xi^{n-1} & \xi^{2(n-1)} & \cdots & \xi^{(n-1)^2} & 1 \end{pmatrix}$$

This is the same as shifting (rotating) the first column to the back of the matrix, and is also equivalent to multiplying the first row by $\xi^0$, the second row by $\xi^1$, and so on. In matrix form,

$$FG = \begin{pmatrix} \xi^0 & 0 & \cdots & 0 \\ 0 & \xi^1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \xi^{n-1} \end{pmatrix} F = \Lambda F$$

where $\Lambda$ is the diagonal matrix with $k$-th diagonal entry $\Lambda_k = \xi^k$, $0 \le k \le n-1$. It follows that

$$FGF^{-1} = \Lambda \qquad (24)$$

That is, the Fourier matrix diagonalizes the generating circulant matrix, with eigenvalues $\{\xi^k\}_{0 \le k \le n-1}$.

Theorem 3.2 The circulant matrix is diagonalized by the Fourier matrix: $C = F^{-1}\Lambda F$ with eigenvalue matrix $\Lambda = \mathrm{diag}\big(p(\xi^k)\big)_{k=0}^{n-1}$. Indeed, by equation (23), the diagonalization of $C$ can be written as

$$FCF^{-1} = F\big(c_0\mathbf{1} + c_1 G + c_2 G^2 + \dots + c_{n-1}G^{n-1}\big)F^{-1} = c_0\mathbf{1} + c_1(FGF^{-1}) + c_2(FGF^{-1})^2 + \dots + c_{n-1}(FGF^{-1})^{n-1} = \begin{pmatrix} p(1) & & & \\ & p(\xi) & & \\ & & \ddots & \\ & & & p(\xi^{n-1}) \end{pmatrix}$$

Note that $(FGF^{-1})^2 = FGF^{-1}FGF^{-1} = FG^2F^{-1}$; the other powers can be deduced iteratively.

This theorem gives us the fundamental theoretical framework for the FFT-based exact simulation of fBm. The basic idea of the simulation is to embed the covariance matrix into a bigger circulant matrix and carry out the discrete Fourier transform as outlined above (via FFT). This technique is called the Circulant Embedding Method (CEM) and is outlined in Dietrich and Newsam [29] and Perrin et al. [30].

Suppose that we need a sample of size $N$ ($N$ should be a power of 2, i.e. $N = 2^g$ for some $g \in \mathbb{N}$, for convenience when applying the FFT). Generate the $N$-by-$N$ covariance matrix $\Gamma$ with entries $\Gamma_{j,k} = \gamma(j-k)$, where $\gamma$ is the covariance function given in the definition of fractional Gaussian noise (fGn):

$$\Gamma = \begin{pmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(N-1) \\ \gamma(1) & \gamma(0) & \cdots & \gamma(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma(N-1) & \gamma(N-2) & \cdots & \gamma(0) \end{pmatrix}$$

The technique to simulate fGn with the FFT, first proposed by Davies and Harte [28], consists of embedding this covariance matrix into a bigger $M$-by-$M$ (with $M = 2N$) circulant covariance matrix $C$ such that

$$C = \begin{pmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(N-1) & 0 & \gamma(N-1) & \cdots & \gamma(2) & \gamma(1) \\ \gamma(1) & \gamma(0) & \cdots & \gamma(N-2) & \gamma(N-1) & 0 & \cdots & \gamma(3) & \gamma(2) \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & \gamma(N-1) & \cdots & \gamma(1) & \gamma(0) & \gamma(1) & \cdots & \gamma(N-2) & \gamma(N-1) \\ \gamma(N-1) & 0 & \cdots & \gamma(2) & \gamma(1) & \gamma(0) & \cdots & \gamma(N-3) & \gamma(N-2) \\ \vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \gamma(1) & \gamma(2) & \cdots & 0 & \gamma(N-1) & \gamma(N-2) & \cdots & \gamma(1) & \gamma(0) \end{pmatrix}$$

where the covariance matrix is embedded in the top-left corner. It is sufficient to specify the first row:

$$C_{0,k} = \begin{cases} \gamma(k), & k = 0, \dots, N-1 \\ 0, & k = N \\ \gamma(2N-k), & k = N+1, \dots, 2N-1 \end{cases}$$

Remark: As Perrin et al. [30] have pointed out, the size $M$ can be any $M \ge 2(N-1)$, the case $M = 2(N-1)$ being the minimal embedding. For any other choice of $M$, the entries $C_{0,N}, \dots, C_{0,M-N+1}$ are arbitrary and can be chosen conveniently as long as the symmetry of the matrix is upheld; more zeros can be padded if $M$ is bigger, to make $C$ circulant. For the rest of the chapter, we concern ourselves with the case $M = 2N$.

From Theorem 3.2, we know that any circulant matrix can be decomposed as $C = Q\Lambda Q^*$, where

$$Q_{j,k} = \frac{1}{\sqrt{2N}}\exp\Big(-2\pi i\,\frac{jk}{2N}\Big), \qquad j, k = 0, \dots, 2N-1 \qquad (25)$$

The matrix $\Lambda$ is the diagonal matrix with eigenvalues

$$\lambda_k = \sum_{j=0}^{2N-1} c_{0,j}\exp\Big(2\pi i\,\frac{jk}{2N}\Big), \qquad k = 0, \dots, 2N-1 \qquad (26)$$

This differs slightly from the previous definition, but is similar to the continuous counterpart: the sign of the exponent in the Fourier transform is a matter of convention. The approach is identical as long as the duality is maintained; that is, when written in the form $C = Q\Lambda Q^*$, the exponents of the components of $Q$ and $\Lambda$ should have opposite signs. In the case of the previous theorem, where $C = F^{-1}\Lambda F$, it is easy to check that $F^{-1}$ and $\Lambda(\xi)$ indeed have opposite signs in the exponent.

It should be noted that $C$ is not guaranteed to be positive-definite. Davies and Harte [28] suggest setting to zero every negative value that may appear in $\Lambda$. Perrin et al. [30] prove that the circulant covariance matrix for fGn is always non-negative definite, so the approach is feasible without any modification. The reader is referred to Dietrich and Newsam [29] and Wood and Chan [31] for more details on dealing with this issue.

Assuming that $C$ is positive-definite and symmetric, its eigenvalues are positive and real, and the 'square root' of $C$ is readily formed: $S = Q\Lambda^{1/2}Q^*$, where $\Lambda^{1/2}$ is the diagonal matrix with entries $\sqrt{\lambda_0}, \sqrt{\lambda_1}, \dots, \sqrt{\lambda_{2N-1}}$. It is easy to check that $SS^* = SS' = C$, so $S$ has the desired properties.

Theorem 3.3 (Simulation of fGn with FFT) The simulation of a sample path of fGn, $y = SV$, consists of the following steps:

  1. Compute the eigenvalues $\{\lambda_k\}_{k=0,\dots,2N-1}$ from equation (26) by means of the FFT. This reduces the computational time from $O(N^2)$ to $O(N\log N)$.

  2. Calculate $W = Q^*V$ directly in distribution: generate two standard normal random variables $W_0 = V_0^{(1)}$ and $W_N = V_N^{(1)}$; for $1 \le j < N$, generate two independent standard normal random variables $V_j^{(1)}$ and $V_j^{(2)}$ and let

$$W_j = \frac{1}{\sqrt{2}}\big(V_j^{(1)} + iV_j^{(2)}\big), \qquad W_{2N-j} = \frac{1}{\sqrt{2}}\big(V_j^{(1)} - iV_j^{(2)}\big)$$

  3. Compute $Z = Q\Lambda^{1/2}W$. This can be seen as another Fourier transform of the vector $\Lambda^{1/2}W$:

$$Z_k = \frac{1}{\sqrt{2N}}\sum_{j=0}^{2N-1}\sqrt{\lambda_j}\,W_j\exp\Big(-2\pi i\,\frac{jk}{2N}\Big)$$

It is identical to carrying out the FFT on the following sequence:

$$w_j = \begin{cases} \sqrt{\dfrac{\lambda_0}{2N}}\,V_0^{(1)}, & j = 0 \\[4pt] \sqrt{\dfrac{\lambda_j}{4N}}\,\big(V_j^{(1)} + iV_j^{(2)}\big), & j = 1,\dots,N-1 \\[4pt] \sqrt{\dfrac{\lambda_N}{2N}}\,V_N^{(1)}, & j = N \\[4pt] \sqrt{\dfrac{\lambda_j}{4N}}\,\big(V_{2N-j}^{(1)} - iV_{2N-j}^{(2)}\big), & j = N+1,\dots,2N-1 \end{cases}$$

Due to the symmetry of the sequence, the Fourier sum $\{Z_k\}_{k=0}^{2N-1}$ of $\{w_j\}$ is real. The first $N$ samples have the desired covariance structure; since the second half of the samples ($N, \dots, 2N-1$) are not independent of the first $N$ samples, they cannot be used.

  4. Recover fBm from the recursive relationship:

$$B_H(0) = 0; \qquad B_H(i) = B_H(i-1) + Z_{i-1}, \quad 1 \le i \le N$$
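A compact Python sketch of this scheme (our own implementation of Theorem 3.3 using numpy's FFT; variable names are ours):

```python
import numpy as np

def davies_harte_fbm(N, H, seed=None):
    """Exact fBm on {1,...,N} via circulant embedding + FFT; O(N log N)."""
    rng = np.random.default_rng(seed)
    k = np.arange(N)
    gamma = 0.5 * ((k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H))
    # first row of the 2N x 2N circulant: (g0,...,g_{N-1}, 0, g_{N-1},...,g1)
    c = np.concatenate([gamma, [0.0], gamma[:0:-1]])
    lam = np.fft.fft(c).real                  # eigenvalues, equation (26)
    if np.any(lam < 0):
        raise ValueError("circulant embedding is not non-negative definite")
    M = 2 * N
    V1, V2 = rng.standard_normal(N - 1), rng.standard_normal(N - 1)
    w = np.empty(M, dtype=complex)            # the sequence w_j of Theorem 3.3
    w[0] = np.sqrt(lam[0] / M) * rng.standard_normal()
    w[1:N] = np.sqrt(lam[1:N] / (2 * M)) * (V1 + 1j * V2)
    w[N] = np.sqrt(lam[N] / M) * rng.standard_normal()
    w[N + 1:] = np.sqrt(lam[N + 1:] / (2 * M)) * (V1[::-1] - 1j * V2[::-1])
    Z = np.fft.fft(w).real[:N]                # fGn: keep the first N samples only
    return np.cumsum(Z)                       # fBm by cumulative summation

path = davies_harte_fbm(2**10, H=0.75, seed=7)
```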

4. Approximate methods

As we have seen in the previous section, the exact methods all take the covariance matrix as their starting point and try to reproduce the covariance structure by different decompositions. That can be time- and resource-consuming, so it is often preferable to have an approximation of fractional Brownian motion that permits robust simulation.

In this section, we start with the Mandelbrot representation for historical reasons and move on to several other methods that provide a better understanding of the process and increasing robustness.

4.1. Mandelbrot representation

Recalling Section 1.2, fractional Brownian motion permits a stochastic integral representation. To approximate equation (5), it is natural to truncate the lower limit of integration from negative infinity to some point, say $-b$:

$$\tilde{B}_H(n) = C_H\left(\sum_{k=-b}^{0}\Big[(n-k)^{H-\frac12} - (-k)^{H-\frac12}\Big]B_1(k) + \sum_{k=0}^{n}(n-k)^{H-\frac12}B_2(k)\right) \qquad (27)$$

Note that $C_H$ is not the same constant as in equation (5): the normalizing factor has to be recalculated due to the truncation. As pointed out in [25], the recommended choice of $b$ is $N^{3/2}$. Even though this is a straightforward way to generate fractional Brownian motion, it is rather inefficient; it is included in this section for the sake of completeness.
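A naive Python sketch of this truncated sum (for illustration only; here $B_1(k)$ and $B_2(k)$ are taken as i.i.d. standard normal weights, and the normalizing constant is left as a parameter since it must be recalibrated for the truncation):

```python
import numpy as np

def mandelbrot_fbm(N, H, C_H=1.0, seed=None):
    """Approximate fBm via the truncated Mandelbrot-Van Ness sum (27)."""
    rng = np.random.default_rng(seed)
    b = int(N ** 1.5)                           # recommended truncation point
    B1 = rng.standard_normal(b + 1)             # weights for k = -b, ..., 0
    B2 = rng.standard_normal(N + 1)             # weights for k = 0, ..., N
    k_neg = np.arange(-b, 1)
    path = np.empty(N)
    for n in range(1, N + 1):
        left = ((n - k_neg) ** (H - 0.5) - (-k_neg) ** (H - 0.5)) @ B1
        k_pos = np.arange(0, n + 1)
        right = ((n - k_pos) ** (H - 0.5)) @ B2[: n + 1]
        path[n - 1] = C_H * (left + right)
    return path
```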

4.2. Euler hypergeometric integral

fBm permits a stochastic integral form involving the Euler hypergeometric integral:

$$B_H(t) = \int_0^t K_H(t,s)\,dB(s) \qquad (28)$$

where $B(s)$ is the standard Brownian motion and

$$K_H(t,s) = \frac{(t-s)^{H-\frac12}}{\Gamma(H+\frac12)}\;{}_2F_1\!\left(H-\frac12;\ \frac12-H;\ H+\frac12;\ 1-\frac{t}{s}\right) \qquad (29)$$

where ${}_2F_1$ is the Gauss hypergeometric function, which can be readily computed by most mathematical packages. Discretizing (28), at each time index $t_j$ we have

$$B_H(t_j) = \frac{n}{T}\sum_{i=0}^{j-1}\left(\int_{t_i}^{t_{i+1}} K_H(t_j,s)\,ds\right)\delta B_i \qquad (30)$$

where $\delta B_i = \sqrt{T/n}\,\Delta B_i$ and $\Delta B_i$ is drawn from the standard normal distribution. This means that $\{\delta B_i\}_{i=1}^{n}$ are the increments of a scaled Brownian motion on $[0,T]$ with quadratic variation $T$. The inner integral can be computed efficiently by Gaussian quadrature.
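A sketch of this scheme in Python (our own, assuming scipy is available: `scipy.special.hyp2f1` for the Gauss hypergeometric function, and `scipy.integrate.quad` — adaptive quadrature — in place of a fixed Gaussian rule):

```python
import numpy as np
from scipy.special import gamma as G, hyp2f1
from scipy.integrate import quad

def K(t, s, H):
    """Kernel (29); valid for 0 < s < t."""
    return (t - s) ** (H - 0.5) / G(H + 0.5) * hyp2f1(H - 0.5, 0.5 - H, H + 0.5, 1 - t / s)

def hypergeometric_fbm(n, T, H, seed=None):
    """Approximate fBm on a grid of n steps over [0, T], via equation (30)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n + 1)
    dB = np.sqrt(T / n) * rng.standard_normal(n)        # scaled increments
    path = np.empty(n)
    for j in range(1, n + 1):
        w = [quad(lambda s: K(t[j], s, H), t[i], t[i + 1])[0] for i in range(j)]
        path[j - 1] = (n / T) * (np.array(w) @ dB[:j])
    return path
```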

4.3. Construction by correlated random walk

This algorithm, proposed in [32], constructs fBm from correlated random walks by summation over generated paths. This parallels the construction of ordinary Brownian motion through summation of sample paths of the simple random walk, which is related to the central limit theorem.

Definition 4.1 (Correlated random walk) For any $p \in [0,1]$, denote by $X_n^p$ the correlated random walk with persistence index $p$. It jumps at each time-step with jump size either $+1$ or $-1$ such that:

  • $X_0^p = 0$, $P(X_1^p = -1) = \frac12$, $P(X_1^p = +1) = \frac12$

  • $\forall n \ge 1$, $\epsilon_n^p := X_n^p - X_{n-1}^p$, which equals either $+1$ or $-1$

  • $\forall n \ge 1$, $P\big(\epsilon_{n+1}^p = \epsilon_n^p \mid \sigma(X_k^p,\ 0 \le k \le n)\big) = p$

Theorem 4.1 For any $m \ge 1$, $n \ge 0$, we have

$$E[\epsilon_m^p\,\epsilon_{m+n}^p] = (2p-1)^n \qquad (31)$$

In order to add additional randomness to the correlated random walks, we replace the constant persistence index $p$ with a random variable $\mu$, and denote the resulting correlated random walk by $X_n^\mu$. More formally, denote by $P_p$ the law of $X^p$ for a given persistence index $p$. Now consider a probability measure $\mu$ on $[0,1]$; we call the corresponding probability law $P_\mu$ the annealed law of the correlated walk associated with $\mu$. Note that $P_\mu := \int_0^1 P_p\,d\mu(p)$.

Proposition 4.1 For all $m \ge 1$, $n \ge 0$, we have

$$E[\epsilon_m^\mu\,\epsilon_{m+n}^\mu] = \int_0^1 (2p-1)^n\,d\mu(p) \qquad (32)$$

The next result is due to Enriquez [32]. The proof is based on Lemma 5.1 of Taqqu [33].

Theorem 4.2 For $\frac12 < H < 1$, denote by $\mu_H$ the probability measure on $[\frac12, 1]$ with density $(1-H)2^{3-2H}(1-p)^{1-2H}$. Let $(X^{\mu_H,i})_{i \ge 1}$ be a sequence of independent processes with probability law $P_{\mu_H}$. Then

$$\mathcal{L}^D\text{-}\lim_{N\to\infty}\ \mathcal{L}\text{-}\lim_{M\to\infty}\ c_H\,\frac{X_{[Nt]}^{\mu_H,1} + \dots + X_{[Nt]}^{\mu_H,M}}{N^H\sqrt{M}} = B_H(t) \qquad (33)$$

where $c_H = \big(H(2H-1)/\Gamma(3-2H)\big)^{1/2}$, $\mathcal{L}$ stands for convergence in law, and $\mathcal{L}^D$ means weak convergence in the Skorohod topology on $D[0,1]$, the space of cadlag functions on $[0,1]$. Here, $[\cdot]$ is the floor function (rounding its argument down to an integer), $M$ is the number of correlated random walk trajectories, and $N$ is the number of time steps.

Remark: For $H = 1/2$, there is a similar expression (see [32]). The order of the limits in equation (33) is important: if reversed, the sum would converge to 0. Theorem 4.2 is mostly of theoretical interest.

In [32], the double limit in the above theorem is further reduced to a single limit by applying the Berry-Esseen bound: as long as $M(N)$ is of order $O(N^{2-2H})$,

$$\mathcal{L}^D\text{-}\lim_{N\to\infty}\ c_H\,\frac{X_{[Nt]}^{\mu_H,1} + \dots + X_{[Nt]}^{\mu_H,M(N)}}{N^H\sqrt{M(N)}} = B_H(t) \qquad (34)$$

In practice, any probability measure whose moments are equivalent to $n^{-(2-2H)}L(n)$, where $L$ is a slowly varying function, can be used. This can be shown by Karamata's theorem, for which further elaboration is found in [33]. In [32], three families of equivalent measures are provided; the second family $(\mu'_{H,k})_{k>0}$ is the most appropriate for simulation purposes. For $H > \frac12$, $\mu'_{H,k}$ is the law of

$$1 - \frac{\big(1 - U^{1/k}\big)^{1/(2-2H)}}{2}$$

with the corresponding normalizing factor $c'_{H,k} = c_H/\sqrt{k}$, where $U$ is a standard uniform random variable. The error given by the Berry-Esseen bound for $H > \frac12$ is

$$0.65 \times D_H\,\frac{N^{1-H}}{\sqrt{kM}} \qquad (35)$$

where $D_H = \dfrac{6(2H-1)}{(H+1)(2H+1)}$. Here, $k$ serves as a control variable of order $k_N = o(N)$, and the error term contains $1/\sqrt{k}$, which can be seen as a way to restrict the error at the price of distorting the covariance relation in $X_N$; it is advisable to keep $k \ll N$. For more discussion on the choice of $k$, we refer to Section 4 of [32].

Theorem 4.3 (Simulation of fBm with correlated random walks) Simulating fBm with correlated random walks for the case $H > \frac12$ consists of the following steps:

  1. Determine $M(N)$ from the tolerable error level via equation (35). Calculate $[NT]$, where $[\cdot]$ is the floor function, and create the time-index $\{t_i\} = \{1, 2, \dots, [NT]\}$.

  2. Simulate $M$ independent copies $\{\mu_{H,k}^j\}_{j=1}^{M} = 1 - \big(1 - U^{1/k}\big)^{1/(2-2H)}/2$, one for each of the $M$ trajectories.

  3. Simulate $M$ copies of correlated random walks:

    • If $t_i = 1$: $\epsilon_1^j = 2\,\mathrm{Bernoulli}(\frac12) - 1$, $X_1^j = \epsilon_1^j$.

    • If $t_i > 1$: $\epsilon_{t_i}^j = \epsilon_{t_i-1}^j\big(2\,\mathrm{Bernoulli}(\mu_{H,k}^j) - 1\big)$, $X_{t_i}^j = X_{t_i-1}^j + \epsilon_{t_i}^j$.

  4. At each $t_j$, calculate

$$B_H(t_j) = c'_{H,k}\,\frac{X_{[Nt_j]}^{\mu_H,1} + \dots + X_{[Nt_j]}^{\mu_H,M}}{N^H\sqrt{M}}$$

Remark: For any given time-horizon $T$, it is easier to simulate the path of $B_H(1)$ with resolution $N$ and scale it to arrive at $B_H(T) = T^H B_H(1)$.
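The following Python sketch implements these steps (our own vectorized illustration; the persistence indices are drawn from the family $\mu'_{H,k}$ above, with $c'_{H,k} = c_H/\sqrt{k}$ as in the text):

```python
import numpy as np
from math import gamma, sqrt

def crw_fbm(N, H, M, k=1.0, seed=None):
    """Approximate fBm on {1,...,N}/N by averaging M correlated random walks."""
    rng = np.random.default_rng(seed)
    # persistence indices, one per trajectory, drawn from mu'_{H,k}
    U = rng.uniform(size=M)
    p = 1.0 - (1.0 - U ** (1.0 / k)) ** (1.0 / (2.0 - 2.0 * H)) / 2.0
    eps = np.where(rng.uniform(size=M) < 0.5, 1.0, -1.0)   # first jumps
    X = np.zeros((N, M))
    X[0] = eps
    for t in range(1, N):
        keep = rng.uniform(size=M) < p           # keep direction w.p. p
        eps = np.where(keep, eps, -eps)
        X[t] = X[t - 1] + eps
    c_Hk = sqrt(H * (2 * H - 1) / gamma(3 - 2 * H)) / sqrt(k)   # c'_{H,k}
    return c_Hk * X.sum(axis=1) / (N ** H * sqrt(M))

path = crw_fbm(N=1000, H=0.75, M=2000, seed=3)
```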

This algorithm is interesting from a theoretical point of view, since it constructs fractional Brownian motion in a way that mirrors its ordinary Brownian motion counterpart, with the help of the central limit theorem. In practice, however, it might not be fast enough for simulation purposes that require a large number of simulated paths.

4.4. Conditional random midpoint displacement

This algorithm, put forth by Norros et al. [34], uses an approach similar to the Hosking algorithm of Section 3.1 in that it computes the conditional distribution of the fractional Gaussian noises. The difference is that it does not completely capture the covariance of all the sample points; instead, it conditions on a fixed number of chosen sample points, in a different ordering (recall that, in the Hosking method, the conditioning is done in chronological order).

Again, we are interested in stationary fractional Gaussian noises, and we back out the fractional Brownian motion on the time interval $[0,1]$, which can be rescaled according to the self-similarity relationship. We first outline the bisection method, which will be expanded into the conditional midpoint replacement scheme later on.

4.4.1. Bisection scheme and basic algorithm

In this section, we adopt the notation of Norros et al. [34]: $Z(t)$ is the fractional Brownian motion, and $X_{i,j}$ is the fractional Gaussian noise of interval $j$ at a given level $i$.

The idea is to simulate $Z(t)$ on the interval $[0,1]$. First, given $Z(0) = 0$ and $Z(1) \sim N(0,1)$, we compute the conditional distribution $Z(\frac12) \mid Z(0), Z(1) \sim N\big(\frac12 Z(1),\ 2^{-2H} - \frac14\big)$. The bisection involves the indices $i$ and $j$, where $i$ indicates the 'level' and $j$ the 'position'. Let

$$X_{i,j} = Z\big(j\,2^{-i}\big) - Z\big((j-1)\,2^{-i}\big), \qquad i = 0, 1, 2, \dots,\ j = 1, \dots, 2^i$$

It is easy to see that, at any given level $i$, the interval $[0,1]$ is divided into $2^i$ sub-intervals.

If we denote the $(i-1)$-th level as the 'mother level', each of its intervals is divided twice finer at the next level. So, given any interval on the mother level, it is easy to observe the relationship

$$X_{i,2j-1} + X_{i,2j} = X_{i-1,j} \qquad (36)$$

Because of equation (36), it is enough to generate $X_{i,j}$ for odd $j$ only. Proceeding from left to right, assume that the sample points $X_{i,1}, \dots, X_{i,2k}$ have already been generated, $k \in \{0, 1, \dots, 2^{i-1}-1\}$. For the point $X_{i,2k+1}$, we have

$$X_{i,2k+1} = e(i,k)\big[X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}\big]' + \sqrt{v(i,k)}\,U_{i,k} \qquad (37)$$

where the $U_{i,k}$ are i.i.d. standard Gaussian random variables, $i = 0, 1, \dots$; $k = 0, 1, \dots, 2^{i-1}-1$. Equation (37) can be written out as

$$e(i,k)\big[X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}\big]' = E\big[X_{i,2k+1} \mid X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}\big] \qquad (38)$$

$$v(i,k) = \mathrm{Var}\big[X_{i,2k+1} \mid X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}\big] \qquad (39)$$

As mentioned before, this scheme conditions on a fixed number of past samples instead of the whole past; the two integers $m \ge 0$ and $n \ge 1$ indicate the numbers of intervals the expectation and variance are conditioned on, and the scheme is called RMD($m$,$n$). In (38) and (39), "$X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}$" indicates that at most $m$ neighboring increments to the left of the interval in question ($X_{i,2k+1}$) are used, and "$X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}$" indicates that at most $n$ neighboring increments to the right are used.

Denote by $\Gamma_{ik}$ the covariance matrix with $X_{i,2k+1}$ as the first entry and $X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}$ as the remaining entries:

$$\Gamma_{ik} = \mathrm{Cov}\big(X_{i,2k+1},\ X_{i,\max(2k-m+1,1)}, \dots, X_{i,2k}, X_{i-1,k+1}, \dots, X_{i-1,\min(k+n,2^{i-1})}\big)$$

where

$$\mathrm{Cov}(x_1, x_2, \dots, x_n) = \begin{pmatrix} \mathrm{Cov}(x_1,x_1) & \mathrm{Cov}(x_1,x_2) & \cdots & \mathrm{Cov}(x_1,x_n) \\ \mathrm{Cov}(x_2,x_1) & \mathrm{Cov}(x_2,x_2) & \cdots & \mathrm{Cov}(x_2,x_n) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{Cov}(x_n,x_1) & \mathrm{Cov}(x_n,x_2) & \cdots & \mathrm{Cov}(x_n,x_n) \end{pmatrix}$$

Similar to (14)-(16), we can partition $\Gamma_{ik}$ as

$$\Gamma_{ik} = \begin{pmatrix} \mathrm{Var}(X_{i,2k+1}) & \Gamma_{ik}(1,2) \\ \Gamma_{ik}(2,1) & \Gamma_{ik}(2,2) \end{pmatrix}$$

Note that $\Gamma_{ik}(1,2) = \Gamma_{ik}(2,1)'$. Hence, we have

$$e(i,k) = \Gamma_{ik}(1,2)\,\Gamma_{ik}(2,2)^{-1} \qquad (40)$$

$$v(i,k) = \mathrm{Var}(X_{i,2k+1}) - \Gamma_{ik}(1,2)\,\Gamma_{ik}(2,2)^{-1}\,\Gamma_{ik}(2,1) \qquad (41)$$

By the stationarity of the increments of $Z$ and by self-similarity, $e(i,k)$ is independent of $i$ and $k$ whenever $2k \ge m$ and $k \le 2^{i-1} - n$ (meaning that the conditioning sequence is not truncated by the max and min), and it depends on $i$ only when $2^i < m + 2n$.
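The coefficients can be computed directly from the fBm increment covariance $\mathrm{Cov}\big(Z(b)-Z(a),\,Z(d)-Z(c)\big) = \frac12\big(|b-c|^{2H} + |a-d|^{2H} - |b-d|^{2H} - |a-c|^{2H}\big)$, which follows from equation (3). A Python sketch (helper names are ours) of computing $e(i,k)$ and $v(i,k)$ from (40)-(41):

```python
import numpy as np

def inc_cov(a, b, c, d, H):
    """Cov(Z(b)-Z(a), Z(d)-Z(c)) for fBm, derived from equation (3)."""
    f = lambda x: abs(x) ** (2 * H)
    return 0.5 * (f(b - c) + f(a - d) - f(b - d) - f(a - c))

def rmd_coefficients(i, k, m, n, H):
    """e(i,k) and v(i,k) of equations (40)-(41) for the RMD(m,n) scheme."""
    d = 2.0 ** (-i)
    # target increment X_{i,2k+1} and its conditioning intervals, as (start, end)
    target = ((2 * k) * d, (2 * k + 1) * d)
    left = [((j - 1) * d, j * d) for j in range(max(2 * k - m + 1, 1), 2 * k + 1)]
    right = [((j - 1) * 2 * d, j * 2 * d)
             for j in range(k + 1, min(k + n, 2 ** (i - 1)) + 1)]
    conds = left + right
    G22 = np.array([[inc_cov(*u, *w, H) for w in conds] for u in conds])
    G12 = np.array([inc_cov(*target, *w, H) for w in conds])
    e = np.linalg.solve(G22, G12)                 # e(i,k) = G12 G22^{-1}
    v = inc_cov(*target, *target, H) - G12 @ e    # v(i,k), equation (41)
    return e, v
```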

4.4.2. On-the-fly RMD(m,n) generation

Norros et al. [34] further propose that, instead of having the previous level $(i-1)$ completely generated first, partitioning and conditioning can be done 'on-the-fly', meaning that multiple unfinished levels can be in progress at the same time. Unlike in the previous RMD($m$,$n$) scheme, the level here is defined differently.

First, define the 'resolution' $\delta$ as the smallest interval the scheme deals with. Note that this differs from the previous subsection, where the $i$-th level had $\delta = 2^{-i}$ and could be bisected ever finer indefinitely; in the on-the-fly RMD scheme, the minimum interval length is fixed at $\delta$.

At each level $i$, the interval $[0, 2^i\delta]$ is split into intervals of length $\delta$, and samples are generated until all points within the interval are filled. Then the trace is expanded to the next level $i+1$, the interval $[0, 2^{i+1}\delta]$; this procedure can be considered as 'enlargement'. So, instead of having a pre-determined time-horizon and zooming in with twice-finer resolution as in the original RMD scheme, the new RMD scheme has a pre-determined resolution and doubles the horizon at each level.

Within the same interval $[0, 2^i\delta]$, the following rules are applied to the inner intervals:

  1. The interval must have $n-1$ (or all available) right-neighboring intervals of the same length as its mother interval.

  2. There should also be $m$ (or all available) left-neighboring intervals of the same length as the interval being considered.

  3. If several intervals satisfy both conditions above, choose the leftmost one.

  4. When all intervals are filled, expand to the next level by 'enlargement'.

Here we use

$$Y_{i,j} = Z\big(j\,2^i\delta\big) - Z\big((j-1)\,2^i\delta\big), \qquad i = 0, 1, \dots,\ j = 1, 2, \dots \qquad (42)$$

instead of $X_{i,j}$, to avoid confusion with the original RMD scheme notation.

Similar to the ordinary RMD scheme, we have the following equations from the conditional distribution of multivariate Gaussian processes. The enlargement stage is defined by

$$Y_{i,1} = E[Y_{i,1} \mid Y_{i-1,1}] + \sqrt{\mathrm{Var}[Y_{i,1} \mid Y_{i-1,1}]}\;U_{i,1} = e(i,1)\,Y_{i-1,1} + \sqrt{v(i,1)}\,U_{i,1} \qquad (43)$$

Inner interval points are constructed as in the ordinary RMD scheme (except that the right-neighboring intervals are of level $i+1$ instead of $i-1$):

$$Y_{i,2k+1} = e(i,k)\big[Y_{i,\max(2k-m+1,1)}, \dots, Y_{i,2k}, Y_{i+1,k+1}, \dots, Y_{i+1,\min(k+n,N_{i+1})}\big]' + \sqrt{v(i,k)}\,U_{i,k} \qquad (44)$$

where $N_{i+1}$ is the last generated increment on the $(i+1)$-th level. The order of splitting for the on-the-fly scheme is given in Figure 1, alongside the splitting order of its ordinary RMD counterpart.

Norros et al. [34] have done an extensive comparison of the on-the-fly RMD scheme with the FFT and aggregate methods in terms of accuracy and robustness. On-the-fly RMD and FFT are significantly faster than the aggregate method, and on-the-fly RMD can generate samples with no fixed time-horizon, while for FFT the whole trace has to be generated before it can be used. So, RMD seems superior in terms of flexibility.

4.5. Spectral method

In this subsection, we look into the spectral method of approximating fractional Gaussian noises, which has its origin in spectral analysis in physics: a time-domain signal can be transformed into the frequency domain without loss of information through the Fourier transform. In a typical Fourier time series, the original input is deterministic and is transformed into a spectral density representing the magnitude of the different frequencies in the frequency domain. It is possible to extend this approach to stochastic processes: though it is impossible to study all realizations, one can analyze the process in a probabilistic/distributional sense by observing the expected frequency content in the autocovariance function.

Figure 1.

The order of splits by the ordinary RMD(m,n) scheme (top) and the on-the-fly RMD(m,n) scheme (bottom). Note that, for the on-the-fly scheme, the order changes according to the choice of n.

The spectral density is computed for frequencies $-\pi \le \lambda \le \pi$ as

$$f(\lambda) = \sum_{j=-\infty}^{\infty} \gamma(j)\,e^{ij\lambda} \qquad (45)$$

Here $\gamma(\cdot)$ is the autocovariance function, which can be recovered by the inverse formula

$$\gamma(j) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\lambda)\,e^{-ij\lambda}\,d\lambda \qquad (46)$$

The spectral density of fGn can be approximated according to [25] and [35] as

$$f(\lambda) = 2\sin(\pi H)\,\Gamma(2H+1)\,(1-\cos\lambda)\,\Big[|\lambda|^{-2H-1} + B(\lambda,H)\Big] \qquad (47)$$

where

$$B(\lambda,H) = \sum_{j=1}^{\infty}\Big[(2\pi j + \lambda)^{-2H-1} + (2\pi j - \lambda)^{-2H-1}\Big]$$

Note that the domain is restricted to $-\pi \le \lambda \le \pi$, since any higher frequency would only correspond to amplitudes between our desired sample points.

The problem with the above expression is that $B(\lambda,H)$ has no known closed form. Paxson [35] proposes the following approximation:

$$B(\lambda,H) \approx \tilde{B}_3(\lambda,H) = a_1^d + b_1^d + a_2^d + b_2^d + a_3^d + b_3^d + \frac{a_3^{d'} + b_3^{d'} + a_4^{d'} + b_4^{d'}}{8H\pi}$$

where

$$d = -2H-1, \qquad d' = -2H, \qquad a_k = 2k\pi + \lambda, \qquad b_k = 2k\pi - \lambda$$

Moreover, with the help of the Whittle estimator, Paxson [35] shows that

$$\tilde{B}_3''(\lambda,H) = (1.0002 - 0.000134\,\lambda)\,\Big(\tilde{B}_3(\lambda,H) - 2^{-7.65H-7.4}\Big)$$

gives a very robust and unbiased approximation of $B(\lambda,H)$. See Appendix A of [35] for a detailed comparison and justification of this approximation.
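A short Python sketch of the fGn spectral density with Paxson's correction (our own function names, following (47) and the formulas above):

```python
import numpy as np
from scipy.special import gamma as Gamma

def fgn_spectral_density(lam, H):
    """fGn spectral density (47), with Paxson's B''_3 approximation of B(lam, H)."""
    lam = np.asarray(lam, dtype=float)
    d, dp = -2 * H - 1, -2 * H
    a = lambda k: 2 * k * np.pi + lam
    b = lambda k: 2 * k * np.pi - lam
    B3 = (a(1) ** d + b(1) ** d + a(2) ** d + b(2) ** d + a(3) ** d + b(3) ** d
          + (a(3) ** dp + b(3) ** dp + a(4) ** dp + b(4) ** dp) / (8 * H * np.pi))
    B3 = (1.0002 - 0.000134 * lam) * (B3 - 2 ** (-7.65 * H - 7.4))
    return (2 * np.sin(np.pi * H) * Gamma(2 * H + 1) * (1 - np.cos(lam))
            * (np.abs(lam) ** d + B3))
```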

With the approximation of the spectral density at hand, we can now look at the spectral representation of a stationary discrete-time Gaussian process (fractional Gaussian noise; fGn) $X = \{X_n : n = 0, \dots, N-1\}$ in terms of the spectral density $f(\lambda)$:

$$X_n = \int_0^\pi \sqrt{\frac{f(\lambda)}{\pi}}\cos(n\lambda)\,dB_1(\lambda) - \int_0^\pi \sqrt{\frac{f(\lambda)}{\pi}}\sin(n\lambda)\,dB_2(\lambda)$$

where $B_1$ and $B_2$ are independent standard Brownian motions and the equality is understood in distribution. Define $\xi_n(\lambda) = \sqrt{f(\lambda)/\pi}\cos(n\lambda)$ and fix some integer $l$. Setting $t_k = \pi k/l$ for $k = 0, \dots, l-1$, we approximate $\xi_n$ by a simple function $\xi_n^{(l)}$ defined on $[0,\pi]$ for $0 \le n \le N-1$:

$$\xi_n^{(l)}(\lambda) = \sqrt{\frac{f(t_1)}{\pi}}\cos(n t_1)\,\mathbf{1}_{\{0\}}(\lambda) + \sum_{k=0}^{l-1}\sqrt{\frac{f(t_{k+1})}{\pi}}\cos(n t_{k+1})\,\mathbf{1}_{(t_k,\,t_{k+1}]}(\lambda) \qquad (48)$$

which is similar to the typical construction of a stochastic integral.

Define the sine counterpart $\theta_n^{(l)}(\lambda)$ analogously, and integrate $\xi_n^{(l)}(\lambda)$ and $\theta_n^{(l)}(\lambda)$ with respect to $dB_1(\lambda)$ and $dB_2(\lambda)$ on $[0,\pi]$ to approximate $X_n$. Then we have

$$\hat{X}_n^{(l)} = \sum_{k=0}^{l-1}\sqrt{\frac{f(t_{k+1})}{l}}\Big[\cos(n t_{k+1})\,U_k^{(0)} - \sin(n t_{k+1})\,U_k^{(1)}\Big] \qquad (49)$$

where the $U_k^{(\cdot)}$ are i.i.d. standard normal random variables; $U_k^{(0)}$ and $U_k^{(1)}$ are independent, as they result from integration against the two aforementioned independent Brownian motions.

Similar to the Fourier transform approach, the fGn sequence $\hat{X}_n^{(l)}$ can be recovered efficiently by applying the FFT to the following coefficients:

$$a_k = \begin{cases} 0, & k = 0 \\[2pt] \frac12\sqrt{\frac{f(t_k)}{l}}\,\big(U_{k-1}^{(0)} + iU_{k-1}^{(1)}\big), & k = 1,\dots,l-1 \\[2pt] \sqrt{\frac{f(t_l)}{l}}\,U_{l-1}^{(0)}, & k = l \\[2pt] \frac12\sqrt{\frac{f(t_{2l-k})}{l}}\,\big(U_{2l-k-1}^{(0)} - iU_{2l-k-1}^{(1)}\big), & k = l+1,\dots,2l-1 \end{cases} \qquad (50)$$

It is easy to check, with the help of the product-to-sum trigonometric identity, that the covariance structure of fGn is recovered:

$$\mathrm{Cov}\big(\hat{X}_m^{(l)}, \hat{X}_n^{(l)}\big) = \sum_{k=0}^{l-1}\frac{f(t_{k+1})}{l}\cos\big((m-n)t_{k+1}\big) \approx 2\int_0^\pi \frac{f(\lambda)}{2\pi}\cos\big((m-n)\lambda\big)\,d\lambda = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\lambda)\,e^{i(m-n)\lambda}\,d\lambda = \gamma(m-n)$$

Paxson [35] has also proposed another method for simulating fGn, which in [25] was proven to be related to (50) with the case $l = N/2$:

$$b_k = \begin{cases} 0, & k = 0 \\[2pt] \sqrt{\frac{R_k\,f(t_k)}{N}}\,\exp(i\Phi_k), & k = 1,\dots,N/2-1 \\[2pt] \sqrt{\frac{f(t_{N/2})}{2N}}\,U_{N/2}^{(0)}, & k = N/2 \\[2pt] \overline{b_{N-k}}, & k = N/2+1,\dots,N-1 \end{cases} \qquad (51)$$

Here, the $R_k$ are exponentially distributed random variables with mean 1, and the $\Phi_k$ are uniformly distributed random variables on $[0,2\pi]$, independent of $R_k$. This method is of order $N\log(N)$, and only one FFT is required instead of two in the Davies-Harte FFT method; hence, it is about 4 times faster.

Remark: The Paxson algorithm in (51) is improved by Dieker [25] to retain the normality of the sequence and its relationship with the original spectral representation.
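A brief Python sketch of Paxson's method (our own implementation of (51), reusing the `fgn_spectral_density` helper defined above; $N$ must be even):

```python
import numpy as np

def paxson_fgn(N, H, seed=None):
    """Approximate fGn of length N via Paxson's spectral method, equation (51)."""
    rng = np.random.default_rng(seed)
    t = np.pi * np.arange(N // 2 + 1) / (N // 2)        # t_k = pi*k/l with l = N/2
    f = fgn_spectral_density(t[1:], H)                  # f(t_1), ..., f(t_{N/2})
    b = np.zeros(N, dtype=complex)
    R = rng.exponential(scale=1.0, size=N // 2 - 1)
    Phi = rng.uniform(0.0, 2.0 * np.pi, size=N // 2 - 1)
    b[1:N // 2] = np.sqrt(R * f[:-1] / N) * np.exp(1j * Phi)
    b[N // 2] = np.sqrt(f[-1] / (2 * N)) * rng.standard_normal()
    b[N // 2 + 1:] = np.conj(b[1:N // 2][::-1])         # conjugate symmetry
    return np.fft.fft(b).real                           # real fGn sequence
```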

5. A numerical example: fBm volatility model

Finally, this section provides a numerical example of Monte Carlo simulation of an fBm volatility model. In Section 1.2, we briefly introduced the truncated fractional Brownian motion; this section outlines the stochastic volatility model explored by Comte and Renault [6]. We follow the example given in [6], with the following setup for the volatility process:

$$\sigma(t) = \sigma_0 e^{x(t)}, \qquad dx(t) = -kx(t)\,dt + \nu\,d\hat{B}_H(t) \qquad (52)$$

where $\nu$ is the volatility factor of the log-volatility process $x(t)$. The volatility process is the exponential of an OU process driven by the truncated fBm. We also assume that $x(0) = 0$, $k > 0$, and $\frac12 < H < 1$.

Solving the OU process with an integrating factor, we have

$$x(t) = \int_0^t \nu\,e^{-k(t-s)}\,d\hat{B}_H(s) \qquad (53)$$

By applying fractional calculus, or using the formulas provided in [5], we can write $x(t)$ in another way as

$$x(t) = \int_0^t a(t-s)\,dB(s) \qquad (54)$$

where B(t) is an ordinary standard Brownian motion and

$$a(\theta) = \frac{\nu}{\Gamma(H+\frac12)}\,\frac{d}{d\theta}\int_0^\theta e^{-ku}(\theta-u)^{H-\frac12}\,du = \frac{\nu}{\Gamma(H+\frac12)}\left(\theta^{H-\frac12} - k\,e^{-k\theta}\int_0^\theta e^{ku}u^{H-\frac12}\,du\right) \qquad (55)$$

By applying an ordinary (Euler) discretization scheme to (54) on the grid $t_j = jT/N$, we have, for $t = t_i$,

$$\tilde{x}(t_i) = \sum_{j=1}^{i} a(t_i - t_{j-1})\big(B(t_j) - B(t_{j-1})\big) \qquad (56)$$

Here, the coefficient $a$ can be calculated by symbolic packages such as Matlab and Mathematica; in our OU case, it can be expressed in terms of the gamma function and the incomplete gamma function.
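A Python sketch of this discretization (our own; it evaluates $a(\theta)$ of (55) by numerical quadrature rather than symbolically, which is equivalent up to quadrature error):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def a_coeff(theta, k, nu, H):
    """Kernel a(theta) of equation (55), via numerical quadrature."""
    if theta == 0.0:
        return 0.0
    integral = quad(lambda u: np.exp(k * u) * u ** (H - 0.5), 0.0, theta)[0]
    return nu / gamma(H + 0.5) * (theta ** (H - 0.5)
                                  - k * np.exp(-k * theta) * integral)

def fractional_ou_vol(N, T, k, nu, sigma0, H, seed=None):
    """Volatility sigma(t) = sigma0*exp(x(t)), with x from the Euler sum (56)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, N + 1)
    dB = np.sqrt(T / N) * rng.standard_normal(N)
    a = np.array([a_coeff(ti, k, nu, H) for ti in t])   # a(0), ..., a(T)
    x = np.array([a[i:0:-1] @ dB[:i] for i in range(N + 1)])
    return sigma0 * np.exp(x)

vol = fractional_ou_vol(N=500, T=2.0, k=1.0, nu=0.3, sigma0=0.1, H=0.75, seed=5)
```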

Figure 2.

A sample path of $\sigma(t) = \exp(\tilde{x}(t))$ for $k = 1$, $\sigma_0 = 0.1$, $\nu = 0.3$, $H = 0.75$, $T = 2$. For comparison, a sample path of the volatility process driven by an ordinary OU process (dotted line), $\sigma(t) = \exp(y(t))$ with $y(t) = \int_0^t \nu e^{-k(t-s)}\,dB(s)$, is shown alongside; it has the same parameters except the Hurst index.

The sample path of the fractional-OU driven volatility process shows a more persistent, more prominent trend (smoother, with fewer reversals) than the ordinary-OU driven volatility process, which is what is to be expected according to [5]. This approach only imitates fractional Brownian motion if the time-step in the discretization scheme is very small, which renders it not robust enough for practical purposes. For analytical purposes, however, it can be shown to be equivalent to the S-transform approach outlined in [36]. For more discussion of its statistical properties and a justification of its stability compared to the original stationary version, we direct the reader to [5] and [6]. We provide this approach for the sake of completeness and the reader's interest.

We have also included a fractional Brownian motion simulated by the circulant-embedding method with the same parameters as above, in Figure 3. This approach is more robust, since the covariance structure does not depend on the step-size. Figure 3 shows a sample path of $\sigma(t) = \exp(x(t))$, where $x(t)$ is driven by the fBm generated by the circulant-embedding FFT with the same parameters as Figure 2: $k = 1$, $\sigma_0 = 0.1$, $\nu = 0.3$, $H = 0.75$, $T = 2$. In these two examples, the fractional Brownian motions are scaled so that the variance of $B_H(T)$ equals that of the ordinary Brownian motion $B_{1/2}(T)$.

Figure 3.

6. Conclusion

Motivated by the inadequacy of the traditional stochastic volatility framework in capturing the long-range dependence of the volatility process, we explored the possibility of fractional Brownian motion (fBm) in financial modeling and various schemes for its Monte Carlo simulation. Starting from the general definition, fBm can be considered an extension of ordinary Brownian motion whose autocovariance function depends on both time indices instead of just the minimum of the two.

With different values of the Hurst index, we can distinguish three cases of fractional Brownian motion: $H < \frac12$, $H = \frac12$, and $H > \frac12$. Since only the case $H > \frac12$ displays long-range dependence, it is the case we are interested in. Several prominent examples of fBm in financial modeling were given.

Simulation schemes are divided into exact schemes and approximate schemes. While the former capture the complete covariance structure for the whole sample, the latter either approximate the value of the true realization or truncate the covariance structure for robustness.

We started with the exact Hosking algorithm, which utilizes the multivariate Gaussian distribution of fractional Gaussian noises and simulates each sample point conditioned on the previous ones. Alternatively, instead of conditioning on past sample points one by one, we can first construct the covariance matrix of the desired sample size, find its 'square root', and multiply it by a standard normal vector; the product is fractional Gaussian noise (fGn) with exactly the covariance structure of the covariance matrix. To find the 'square root', we first investigated the Cholesky decomposition, whose computational and memory expense is too large to be feasible in practice. In contrast, the fast Fourier transform (FFT) simulation embeds the original covariance matrix in a larger circulant matrix and simulates by diagonalizing the circulant matrix into the product of an eigenvalue matrix and a unitary matrix. The FFT method is significantly more robust than the previous schemes.

We then looked into the approximate schemes, namely the construction of fBm with correlated random walks, which can be viewed as an extension of the construction of Brownian motion from an ordinary random walk. This method gives interesting insight into the inner workings of fBm, especially the idea of long-range dependence. The approach is not only intuitive and easy to implement; its error can also be calculated explicitly. The drawback is that it slows down significantly for large sample sizes, with the tradeoff governed by the error function. The last simulation approach we examined is the conditional random midpoint displacement (RMD) scheme, which is mathematically similar to the Hosking scheme but conditions on a fixed number of past sample points. The on-the-fly version of the RMD scheme can generate sample points indefinitely at a given resolution. Finally, we included the spectral method for approximating fBm.

Comparing all the schemes, and referring also to the studies done in [34], we conclude that if the time-horizon is known beforehand, the FFT/spectral schemes are the best choice due to their high speed and accuracy. Alternatively, if samples must be generated indefinitely, the on-the-fly conditioned RMD scheme offers a similar level of accuracy and speed to the FFT scheme.

Lastly, we provided numerical examples for the truncated version of fBm proposed in [5, 6], as well as an fBm generated by FFT, for comparison in robustness.

Acknowledgments

We wish to express our gratitude to Professor Alex Novikov for sharing relevant research materials with us, and Professor Bernt Øksendal and Professor Enriquez Nathanaël for interesting discussions and remarks.
