Fractional Brownian motion (fBm) was first introduced within a Hilbert space framework by Kolmogorov , and was further studied, and given the name 'fractional Brownian motion', in the 1968 paper by Mandelbrot and Van Ness . It has been widely used in various scientific fields, most notably in hydrology, as first suggested in . It also plays an important role in communication technology, enriching queuing theory for the simulation of real network traffic.
In recent years, it has been steadily gaining attention in the area of finance, as it appears that the traditional stochastic volatility model driven by ordinary Brownian motion implies geometric time-decay of the volatility smile, which is much faster than what we see in real market data (which displays hyperbolic decay). This stylized feature is called volatility persistence; see  for a detailed discussion of the statistical test for the existence of long-range memory (the author has also proposed a robust extension of the statistic for that particular purpose). Several modeling approaches have been suggested to capture this persistence in conditional variance, either via a unit-root or a long-memory process. In order to keep the pricing framework largely intact, it is more attractive to study the long-memory process, and fBm is a particularly good match due to its similarity to ordinary Brownian motion and its Gaussian properties.
In this chapter, we will outline several approaches to simulate fractional Brownian motion with Hurst index H, including exact methods and approximate methods. The Hurst index is the parameter used in the literature to generalize Brownian motion into fractional Brownian motion, first made popular by Benoit Mandelbrot; we give a detailed definition in Definition 1.1. We also provide a brief introduction to the truncated fractional Brownian motion (a long-memory model in continuous time) as proposed in [5,6].
1.1. Financial motivation and background
Numerous empirical studies have pointed out that, in options markets, the implied volatility backed out from the Black-Scholes equation displays volatility skews or smiles; the smile effect, which is well known to practitioners, refers to the U-shaped distortion of the implied volatility surface.
Here, S_t is the asset price and σ_t the instantaneous volatility at time t, driven by ordinary standard Brownian motions. Hull and White  have shown that the price of a European option with exercise date T is the conditional expectation of the Black-Scholes option pricing formula in which the constant volatility of the original formula is replaced by the quadratic average of σ_t over the period [t, T]:
Such models are successful at capturing the symmetric smile and the skewness of the implied volatility by imposing relations between the driving Brownian motions in (1): a symmetric smile is explained by independence, while skewness can be explained by linear correlation.
Due to the temporal aggregation effect evident in (2), however, the smile effect deteriorates with time-to-maturity, since the temporal aggregation gradually erases the conditional heteroskedasticity; in the standard stochastic volatility setup, this decay is much faster than what is observed in market data. The phenomenon of a slowly decaying volatility smile is known as volatility persistence (long-range dependence of the volatility process). It is particularly pronounced in high-frequency data, for which the conditional variance process displays near unit-root behavior.
Furthermore, the existence of this phenomenon is corroborated by a large body of research pointing out that the conditional volatility of asset returns displays long-range dependence: [9-12] discuss extensively the evidence of this phenomenon in empirical data. Bayesian estimation of stochastic volatility models in  shows similar patterns of persistence.
Motivated by this inadequacy, long-memory processes were deemed a more appropriate enrichment for this purpose. Fractional Brownian motion is a prime candidate among all long-memory processes given its tractability and similarity to ordinary Brownian motion: both fractional and ordinary Brownian motion are self-similar with a similar Gaussian structure. For discussions of estimation and evidence of long-range dependence in the conditional variance of asset returns, the reader is referred to  and section 3.1 of .
Now, we provide a formal definition of fractional Brownian motion (fBm). We adopt the definition as given in .
Definition 1.1 Let H ∈ (0, 1) be a constant. A fractional Brownian motion B_H with Hurst index H is a continuous and centered Gaussian process with covariance function
E[B_H(t) B_H(s)] = ½ (t^{2H} + s^{2H} − |t − s|^{2H}).
In particular, for H = 1/2, it reduces to the ordinary Brownian motion with E[B(t) B(s)] = min(t, s).
From equation (3) we have the following properties:
B_H has stationary increments: B_H(t + s) − B_H(s) has the same distribution as B_H(t) for any s, t ≥ 0.
B_H is self-similar, meaning that B_H(at) has the same distribution law as a^H B_H(t) for any a > 0.
B_H is a Gaussian process with variance E[B_H(t)²] = t^{2H} for any t ≥ 0.
B_H has continuous trajectories.
Fractional Brownian motions fall into three very different categories: H < 1/2, H = 1/2, and H > 1/2. This division is of particular importance because the cases H < 1/2 and H > 1/2 differ fundamentally, as we see once we introduce the mathematical notion of long-range dependence.
Definition 1.2 (Long-range dependence) A stationary sequence (X_k) exhibits long-range dependence if the autocovariance function γ(n) := Cov(X_k, X_{k+n}) satisfies
lim_{n→∞} γ(n) / (c n^{−α}) = 1.
This can be written as γ(n) ∼ c n^{−α} as n → ∞.
In this case, for some constants c and α ∈ (0, 1), the dependence between X_k and X_{k+n} decays slowly as n → ∞, and
Σ_{n=1}^{∞} γ(n) = ∞.
If we set X_k := B_H(k) − B_H(k − 1) and X_{k+n} := B_H(k + n) − B_H(k + n − 1) and apply equation (3), we have
γ(n) = ½ [(n + 1)^{2H} − 2 n^{2H} + (n − 1)^{2H}],
where γ(n) = Cov(X_k, X_{k+n}). In particular,
γ(n) ∼ H (2H − 1) n^{2H−2} as n → ∞.
Therefore, we conclude that for H > 1/2, Σ_{n=1}^{∞} γ(n) = ∞, and for H < 1/2, Σ_{n=1}^{∞} |γ(n)| < ∞. Hence, only in the case H > 1/2 do fractional Brownian motions display long-range dependence.
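The two regimes can be checked numerically by evaluating the fGn autocovariance γ(n) = ½((n+1)^{2H} − 2n^{2H} + (n−1)^{2H}) directly. A minimal Python sketch (the function name is ours):

```python
import numpy as np

def fgn_autocov(n, H):
    """Autocovariance gamma(n) of unit-step fractional Gaussian noise."""
    n = np.abs(n)
    return 0.5 * ((n + 1) ** (2 * H) - 2 * n ** (2 * H) + np.abs(n - 1) ** (2 * H))

lags = np.arange(1, 10001)
g_long = fgn_autocov(lags, 0.8)   # H > 1/2: positive, non-summable covariances
g_short = fgn_autocov(lags, 0.3)  # H < 1/2: negative, summable covariances
```

For H = 1/2 the autocovariance vanishes at all positive lags (independent increments), and for H > 1/2 the values match the asymptote H(2H − 1)n^{2H−2} closely at large lags.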
As pointed out in , dependence at large lags may be difficult to estimate in practice, so models with long-range dependence are often formulated in terms of self-similarity. Self-similarity allows us to extrapolate across time scales and deduce long-time behavior from short-time behavior, which is more readily observed.
Because we are interested in capturing the long-memory phenomenon observed in financial markets, the rest of this chapter will only concern the case H > 1/2.
1.2. Stochastic integral representation
In the original paper , Mandelbrot and Van Ness represent the fBm as a stochastic integral with respect to ordinary Brownian motion:
B_H(t) = (1/Γ(H + ½)) ( ∫_{−∞}^{0} [(t − s)^{H−½} − (−s)^{H−½}] dW(s) + ∫_{0}^{t} (t − s)^{H−½} dW(s) ),
where Γ is the gamma function.
This version of fBm is 'truncated' in the sense that the integration from negative infinity to zero in equation (5) is dropped. We will refer to model (6) as the 'truncated fractional Brownian motion' in the rest of this chapter. As pointed out in , the representation (6) was first proposed by Paul Lévy, who defined fBm by the Riemann-Liouville fractional integral, while the original integral in equation (5) is the Weyl fractional integral.
The definition (6) of fBm is in general not asymptotically covariance-stationary, even though it retains self-similarity. For further technical discussion and a rigorous definition of the truncated fractional Brownian motion, we refer the reader to .
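For concreteness, the truncated (Riemann-Liouville) representation Z(t) = (1/Γ(H + ½)) ∫_0^t (t − s)^{H−½} dW(s) can be discretized with a left-point Euler sum. A Python sketch under our own naming, suitable for H > 1/2 where the kernel is bounded (the O(N²) loop favors clarity over speed):

```python
import numpy as np
from math import gamma as Gamma

def riemann_liouville_fbm(N, H, T=1.0, seed=None):
    """Euler discretization of the truncated (Riemann-Liouville) fBm
    Z(t) = 1/Gamma(H + 1/2) * integral_0^t (t - s)^(H - 1/2) dW(s)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    dW = rng.standard_normal(N) * np.sqrt(dt)   # Brownian increments
    t = np.arange(1, N + 1) * dt
    Z = np.empty(N)
    for n in range(1, N + 1):
        # left-endpoint kernel values (t_n - t_j)^(H - 1/2), j = 0..n-1
        kernel = (t[n - 1] - t[:n] + dt) ** (H - 0.5)
        Z[n - 1] = kernel @ dW[:n] / Gamma(H + 0.5)
    return Z
```

A quick sanity check is the variance E[Z(T)²] = T^{2H} / (2H Γ(H + ½)²), which the discretization should reproduce approximately.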
Given the difference, analytical tools such as Malliavin calculus should be applied with care, noting the distinctions between the two versions of fBm.  offers an in-depth discussion of the theoretical differences.
2. Fractional Brownian motions in financial models
We first look at several examples that utilize fractional Brownian motion in the realm of financial modeling.
2.1. Asset price model
In the previous section, we mentioned that the motivation for fBm in financial models is to capture the long-range dependence in the volatility process. However, it is worth discussing the possibility of applying it to the asset process itself.
In practice, it is considered that an asset process driven by fBm will result in arbitrage. The idea is that, since B_H is not a semimartingale in continuous time for H ≠ 1/2, the asset process it drives violates NFLVR (no free lunch with vanishing risk), a weaker form of arbitrage, and thus does not admit an equivalent martingale measure, according to Theorem 7.2 of . Such findings and explicit constructions of arbitrage can be found in Rogers .
In contrast, Cheridito  has given multiple classes of trading strategies that allow various levels of arbitrage (NFLVR, arbitrage in the classical sense, and strong arbitrage) under fBm-driven assets, and has shown that the arbitrages are all eliminated if the inter-transaction time is non-zero, i.e., the classes of strategies become arbitrage-free. Such an assumption is reasonable in practice, given physical limits and non-zero transaction costs. For more information on arbitrage theory and its generalization, the reader is referred to [18-20].
2.2. Stochastic volatility model
As we have mentioned in the introductory section, the main motivation for fBm is to capture volatility persistence, a stylized feature observed in empirical data. There are several prominent models involving a volatility process driven by fBm; here we outline a few of them.
2.2.1. Long-memory continuous model
where the log-volatility term follows a fractional Ornstein-Uhlenbeck process driven by the truncated fBm (6), and the asset price process follows a geometric Brownian motion with volatility persistence.
Although Mandelbrot  deemed it to emphasize the origin too heavily, the model (7) is easier and more robust to simulate than the ordinary fBm. A simulation example is explored in Section 5.
2.2.2. Affine fractional stochastic volatility model
In , Comte et al. assume that the squared-volatility process follows
where is defined by the fractional integral:
The affine structure is adopted for extra tractability, making the model better suited for practical option pricing and hedging than the original idea (7). In fact, Comte et al.  have shown that this model can better depict the difference between the short- and long-memory properties in the resulting option prices.
2.2.3. Martingale expansion
Fukasawa  adopts and expands the asymptotic expansion technique of European option prices around the Black-Scholes equation, first proposed by Yoshida , by means of perturbation techniques and partial Malliavin calculus. It is shown that the logarithm of the asset price can be expressed as
Here, r is the riskless rate, y ∈ ℝ is a constant, the driving noise is a 2-dimensional standard (independent) Brownian motion, the coefficients form a deterministic sequence, and the stochastic volatility process is adapted to the minimal filtration.
Note that the resulting process is a fractional Brownian motion with Hurst index H, where K_H is the kernel of the stochastic integral representation of fBm over a finite interval of Brownian motion. According to , for the case H > 1/2, which pertains to our interest, the kernel has the expression
K_H(t, s) = c_H s^{½−H} ∫_s^t (u − s)^{H−3/2} u^{H−½} du,
where c_H is a normalizing constant depending only on H.
Then, according to Corollary (2.6) of Fukasawa , the implied volatility can be expressed as
where d is the typical argument appearing in the Black-Scholes formula, and
Equation (10) can be seen as an approximation whose model parameters are calibrated from market data. The shape of this approximation remains largely stationary; to check its accuracy, we have no means other than Monte Carlo simulation of fBm.
3. Simulation with exact methods
First, we look at methods that completely capture the covariance structure and produce true realizations of fractional Brownian motion (fBm) or fractional Gaussian noise (fGn). Every method described in this section takes the covariance matrix as its starting point. The approximate schemes we see later are merely numerically close to fBm (or fGn), or coincide with it asymptotically. The collection of algorithms in this section is not meant to be exhaustive; for more algorithms and discussion, see .
3.1. Hosking method
The Hosking method utilizes the well-known conditional distribution of the multivariate Gaussian distribution in a recursive scheme to generate samples based on the explicit covariance structure. The method generates a general stationary Gaussian process with a given covariance structure; it is not limited to generating fBm.
More specifically, this algorithm generates an fGn sequence, and fBm is recovered by cumulative summation; the distribution of the next sample conditioned on the previous realizations can be explicitly computed.
Denote by γ(k) := E[X_n X_{n+k}] the autocovariance function of the zero-mean stationary process (X_n):
where we assume for convenience that γ(0) = 1. For n ≥ 0, we have the following recursive relationship for the autocovariance matrix Γ_n := (γ(i − j))_{i,j=0,…,n}:
where c_n is the (n + 1)-dimensional column vector with elements c_n(k) = γ(k + 1), and F(n) is the 'mirrored' identity matrix
Theorem 3.1 (Multivariate Gaussian distribution) Any multivariate Gaussian random vector z can be partitioned into (z_1, z_2) with mean vector (μ_1, μ_2) and covariance matrix Σ with the corresponding partition:
Σ = [[Σ_11, Σ_12], [Σ_21, Σ_22]].
The distribution of z_1 conditioned on z_2 = a is multivariate normal with
mean μ_1 + Σ_12 Σ_22^{−1} (a − μ_2) and covariance Σ_11 − Σ_12 Σ_22^{−1} Σ_21.
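Theorem 3.1 translates directly into code. A small Python sketch (the helper name and the index-based partition are our own) that computes the conditional mean and covariance, which is also the computational core of the Hosking and RMD methods discussed later:

```python
import numpy as np

def conditional_gaussian(mu, Sigma, idx1, idx2, a):
    """Mean and covariance of x[idx1] given x[idx2] = a, for x ~ N(mu, Sigma)."""
    mu1, mu2 = mu[idx1], mu[idx2]
    S11 = Sigma[np.ix_(idx1, idx1)]
    S12 = Sigma[np.ix_(idx1, idx2)]
    S22 = Sigma[np.ix_(idx2, idx2)]
    # mu1 + S12 S22^{-1} (a - mu2); solve() avoids forming the inverse
    cond_mean = mu1 + S12 @ np.linalg.solve(S22, a - mu2)
    cond_cov = S11 - S12 @ np.linalg.solve(S22, S12.T)
    return cond_mean, cond_cov
```

For a bivariate normal with unit variances and correlation ρ, conditioning the first coordinate on the second equal to a gives mean ρa and variance 1 − ρ², which the helper reproduces.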
If we substitute equation (12) into the partition in (14), we have the following expression for the conditional distribution:
With the first sample drawn from the standard normal distribution, the subsequent samples can be generated recursively.
Proposition 3.1 (Hosking algorithm for simulating fGn) Define , and apply the blockwise method of inversion to equation (13):
where satisfies the recursion
with . Also, the recursion for is obtained as
With these recursions, the conditional mean and variance can be readily computed, and fractional Brownian motion is recovered by cumulative summation.
This algorithm is also applicable to non-stationary processes (see  for details). Even though the algorithm is simple, easy to understand, and sample paths can be generated on the fly, its complexity is O(N²), so its computational (as well as memory) expense grows at a prohibitive speed.
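The recursions above amount to the Durbin-Levinson algorithm. A compact Python sketch (our own function name; O(N²) time, O(N) memory) might look like this:

```python
import numpy as np

def hosking_fgn(N, H, seed=None):
    """Exact fGn of length N via the Hosking (Durbin-Levinson) recursion.
    np.cumsum of the result gives an fBm path on the integer grid."""
    rng = np.random.default_rng(seed)
    k = np.arange(N, dtype=float)
    gamma = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    X = np.empty(N)
    X[0] = rng.standard_normal()
    phi = np.zeros(N)     # phi[:n] holds the prediction coefficients at step n
    sigma2 = 1.0          # innovation variance
    for n in range(1, N):
        if n == 1:
            kappa = gamma[1] / sigma2
        else:
            # partial correlation, then Durbin-Levinson coefficient update
            kappa = (gamma[n] - phi[:n - 1] @ gamma[n - 1:0:-1]) / sigma2
            prev = phi[:n - 1].copy()
            phi[:n - 1] = prev - kappa * prev[::-1]
        phi[n - 1] = kappa
        sigma2 *= 1.0 - kappa ** 2
        # conditional mean plus innovation
        X[n] = phi[:n] @ X[n - 1::-1] + np.sqrt(sigma2) * rng.standard_normal()
    return X
```

For H = 1/2 all partial correlations vanish and the output is i.i.d. standard normal, as expected.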
3.2. Cholesky method
Given that we are dealing with the covariance structure in matrix form, it is natural to use the Cholesky decomposition: decomposing the covariance matrix into the product L L^T of a lower triangular matrix L and its conjugate transpose. If the covariance matrix is positive-definite (the situation will be addressed in the next subsection), L will have real entries.
Suppose that, in matrix form, the product Γ = L L^T is given by
It is easy to see that l_{0,0}² = γ(0), and that l_{1,0} l_{0,0} = γ(1) and l_{1,0}² + l_{1,1}² = γ(0) (second row). For i ≥ 1, the entries of the lower triangular matrix can be determined by
Given independent, identically distributed (i.i.d.) standard normal random variables V_0, …, V_n, the fGn sequence is generated by
X_n = Σ_{k=0}^{n} l_{n,k} V_k.
Or, in matrix form, X = L V. If Γ is assumed to be positive-definite, the non-negativity of the diagonal terms is guaranteed and L is guaranteed to be real. The covariance structure of the process is captured, since
Cov(X) = L E[V V^T] L^T = L L^T = Γ.
Even though the Cholesky method is easy to understand and implement, its computation time is O(N³), which renders the scheme extremely uneconomical in practice. To resolve this problem, we will proceed to another exact method. The idea is similar, retaining the same relation as equation (21), but with a different decomposition.
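A minimal Python sketch of the Cholesky scheme (our own naming; numpy's built-in factorization does the O(N³) work):

```python
import numpy as np

def cholesky_fgn(N, H, seed=None):
    """Exact fGn sample via Cholesky factorization of the covariance matrix."""
    rng = np.random.default_rng(seed)
    # Gamma[i, j] = gamma(|i - j|), the fGn autocovariance matrix
    k = np.abs(np.subtract.outer(np.arange(N), np.arange(N))).astype(float)
    Gamma_mat = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    L = np.linalg.cholesky(Gamma_mat)      # Gamma = L @ L.T
    return L @ rng.standard_normal(N)      # X = L V; np.cumsum(X) gives fBm
```

Once L is computed, each additional path costs only a matrix-vector product, i.e. O(N²).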
3.3. Fast Fourier transform method
As we have seen in the last subsection, the Cholesky decomposition seems to be the most straightforward way to simulate a Gaussian process with a given covariance structure, but it is also the most rudimentary and thus slow. In order to improve the speed, the idea of utilizing the fast Fourier transform (FFT) was proposed by Davies and Harte  and further generalized by Dietrich and Newsam .
Similar to the idea before, this method tries to find a decomposition of the covariance matrix Γ = S S^T, with the sample generated as S x for a given standard normal vector x. Then, given the covariance structure, we have
Cov(S x) = S E[x x^T] S^T = S S^T = Γ.
The idea is to 'embed' the original covariance matrix into a circulant matrix in order to carry out the FFT. Before we outline the procedure, we give some details of the link between the Fourier transform and circulant matrices.
Definition 3.1 (Circulant matrix) A circulant matrix is a special case of the Toeplitz matrix in which each row vector is shifted one element to the right relative to the previous row (the last element wraps around to the beginning of the row). In matrix form, an n-by-n circulant matrix can be written as
Remark: As one can see, the first row/column completely describes the whole matrix, which can be put more succinctly in the following form:
Note that the indices range from 0 to n − 1 instead of the usual convention of 1 to n.
Definition 3.2 (Generating circulant matrix) We define an n-by-n generating circulant matrix by
By a simple calculation, we can see that the ‘square’ of the generating circulant matrix is given by
From the point of view of row and column operations, this can be seen as each row of the matrix being shifted one element forward, with the bumped element moved to the end of the row (it can also be thought of as the whole row block shifting down, with the bumped row placed back on top, but this is irrelevant to our interest). Arbitrary powers can be deduced accordingly; this operation has a cycle of n iterations.
The generating circulant matrix serves as our building block. Looking back at our original circulant matrix, we have a corresponding polynomial
Then, the original circulant matrix C can be expressed as
This can be verified by applying the row operation of arbitrary power on G as shown above. It can be shown that each successive power shifts the band of ones one sub-diagonal further than the previous power.
Definition 3.3 (Fourier matrix) The Fourier matrix is introduced as
Here, we define the n-th root of unity as ω = e^{2πi/n}, and ω̄ is its conjugate.
The Fourier matrix can also be defined using the positive exponent instead of the negative one. Also, as we will see later, some definitions include a normalizing scalar 1/√n (or 1/n). This is analogous to the continuous counterpart of the Fourier integral definition, as long as the duality is upheld by the opposite sign of the exponent in the inverse Fourier transform. This duality will be restated in the diagonalization representation of the circulant matrix later.
Proposition 3.2 If normalized by 1/√n, the Fourier matrix is a unitary matrix. It is symmetric (F^T = F), and the inverse of the Fourier matrix is given by
Proposition 3.3 If we multiply the Fourier matrix with the generating circulant matrix, we have
This is the same as shifting (rotating) the first column to the back of the matrix, which is also equivalent to multiplying each row by the corresponding power of the root of unity. In matrix operations, it can be seen as
where Λ is the diagonal matrix whose k-th diagonal entry is the k-th power of the root of unity. It follows that
That is, the Fourier matrix diagonalizes the generating circulant matrix, with the powers of the root of unity as eigenvalues.
Theorem 3.2 The circulant matrix is decomposable by the Fourier matrix, i.e. it is diagonalized by F with eigenvalue matrix Λ. Also, with equation (23), the diagonalization of C can be written as
Note that . The other powers can be deduced iteratively.
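Theorem 3.2 can be checked numerically: the eigenvalues of a circulant matrix are exactly the discrete Fourier transform of its first row. A small numpy demonstration (the helper name is ours):

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix: row k is the first row rotated right k times."""
    c = np.asarray(first_row, dtype=float)
    return np.array([np.roll(c, k) for k in range(len(c))])

row = [4.0, 1.0, 2.0, 3.0]
C = circulant(row)
eig_fft = np.fft.fft(row)          # eigenvalues via the DFT of the first row
eig_dir = np.linalg.eigvals(C)     # eigenvalues computed directly
```

The two eigenvalue sets agree up to ordering (for a real first row they come in conjugate pairs, so either sign convention of the DFT yields the same set).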
This theorem gives us the fundamental theoretical framework for the FFT-based exact simulation of fBm. The basic idea of the simulation is to embed the covariance matrix into a bigger circulant matrix and carry out the discrete Fourier transform as outlined above (via the FFT). This technique is called the Circulant Embedding Method (CEM) and is outlined in Dietrich and Newsam  and Perrin et al. .
Suppose that we need a sample of size N (for convenience when applying the FFT, N should be a power of 2, i.e. N = 2^g for some g ∈ ℕ). Generate the N-by-N covariance matrix with entries Γ_{ij} = γ(|i − j|), where γ is the autocovariance function given in the definition of fractional Gaussian noise (fGn):
The technique of simulating fGn with the FFT was generalized by Davies and Harte , and consists of embedding this covariance matrix into a bigger M-by-M (with M = 2N) circulant covariance matrix C, with first row (γ(0), γ(1), …, γ(N), γ(N − 1), …, γ(1)), such that the covariance matrix Γ is embedded in the top left-hand corner. It is sufficient to point out that
Remark: As Perrin et al.  have pointed out, the size M can be as small as 2(N − 1), which is the minimal embedding. For any larger choice of M, the additional entries are arbitrary and can be conveniently chosen as long as the symmetry of the matrix is upheld; more zeros can be padded if M is bigger, to keep C circulant. For the rest of the chapter, we concern ourselves with the case M = 2N.
From Theorem 3.2, we know that, given any circulant matrix, it can be decomposed as , where
The matrix is the diagonal matrix with eigenvalues
This differs slightly from the previous definition but, as with the continuous counterpart, the sign of the exponent in the Fourier transform is merely a convention; the approach is identical as long as the duality is maintained. That is, if written in the form C = F Λ F^{−1}, the exponents of the components in F and F^{−1} should have opposite signs. In the case of the previous theorem, it is easy to check that the two matrices indeed have opposite signs in the exponent.
It should be noted that C is not guaranteed to be positive-definite. Davies and Harte  suggest setting to zero every negative eigenvalue that may appear in Λ. Perrin et al.  prove that the circulant covariance matrix for fGn is always non-negative definite, so the approach is feasible without any modification. The reader is referred to Dietrich and Newsam  and Wood and Chan  for more detail on dealing with this issue.
Assuming that C is positive definite and symmetric, its eigenvalues are positive and real. The 'square root' of C is readily formed as S = F Λ^{1/2} F*, where Λ^{1/2} is the diagonal matrix whose entries are the square roots of the eigenvalues. It is easy to check that S S* = C, so S has the desired properties.
Theorem 3.3 (Simulation of fGn with FFT) The simulation of a sample path of fGn consists of the following steps:
Compute the eigenvalues from equation (26) by means of the FFT. This reduces the computational time from O(N²) to O(N log N).
Generate two standard normal random variables W_0 and W_N.
For 0 < j < N, generate two independent standard normal random variables U_j and V_j and let W_j = (U_j + iV_j)/√2 and W_{2N−j} = (U_j − iV_j)/√2.
Compute S W. This can be seen as another Fourier transform of the vector Λ^{1/2} W:
It is identical to carry out FFT on the following sequence:
Due to the symmetric nature of the sequence, the Fourier sum is real-valued. The first N samples have the desired covariance structure. However, since the second half of the samples (indices N, …, 2N − 1) is not independent of the first half, it cannot be used.
Recover fBm from the recursive relationship B_H(i) = B_H(i − 1) + X_i, i.e. by cumulative summation.
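Putting the steps of Theorem 3.3 together, a minimal Python sketch of the circulant embedding simulation might look as follows (the function name and the clipping of numerically tiny negative eigenvalues are our own choices):

```python
import numpy as np

def davies_harte_fgn(N, H, seed=None):
    """Exact fGn of length N by circulant embedding, O(N log N) via the FFT."""
    rng = np.random.default_rng(seed)
    k = np.arange(N + 1, dtype=float)
    gamma = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    # first row of the 2N x 2N circulant embedding:
    # gamma(0), ..., gamma(N), gamma(N-1), ..., gamma(1)
    row = np.concatenate([gamma, gamma[-2:0:-1]])
    lam = np.fft.fft(row).real          # eigenvalues; non-negative for fGn
    lam = np.clip(lam, 0.0, None)       # guard against round-off noise
    M = 2 * N
    # Hermitian-symmetric complex Gaussian weights W
    W = np.empty(M, dtype=complex)
    W[0] = rng.standard_normal()
    W[N] = rng.standard_normal()
    U = rng.standard_normal(N - 1)
    V = rng.standard_normal(N - 1)
    W[1:N] = (U + 1j * V) / np.sqrt(2.0)
    W[N + 1:] = np.conj(W[N - 1:0:-1])
    # transform of sqrt(lam/M) * W is real with covariance gamma
    fgn = np.fft.fft(np.sqrt(lam / M) * W).real[:N]
    return fgn
```

The first N entries form an exact fGn sample; `np.cumsum` of the output yields the fBm path.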
4. Approximate methods
As we have seen in the previous section, the exact methods all take the covariance matrix as their starting point and reproduce the covariance structure through different decompositions. That can be time- and resource-consuming, so it is often preferable to have an approximation of fractional Brownian motion that permits robust simulation.
In this section, we start with the Mandelbrot representation, for historical reasons, and move on to several other methods that provide a better understanding of the process and increasing robustness.
4.1. Mandelbrot representation
Recalling from Section 1.2, fractional Brownian motion permits a stochastic integral representation. To approximate equation (5), it is natural to truncate the lower limit of integration from negative infinity to some finite point, say −b:
Note that the normalizing constant is not the same as in equation (5), because one has to recompute it due to the truncation. As pointed out in , the recommended choice of the truncation point is . Even though this is a straightforward way to generate fractional Brownian motion, it is rather inefficient; it is included in this section for the sake of completeness.
4.2. Euler hypergeometric integral
fBm permits the stochastic integral form involving the Euler hypergeometric integral:
where is the standard Brownian motion and
is the Gauss hypergeometric function, which can be readily computed by most mathematical packages. Discretizing (28), at each time index t_k we have
where the random variable is drawn according to the standard normal distribution; this represents the increments of a scaled Brownian motion with the corresponding quadratic variation. The inner integral can be computed efficiently by Gaussian quadrature.
4.3. Construction by correlated random walk
This algorithm, proposed in , constructs fBm from correlated random walks and a summation over generated paths. It parallels the construction of ordinary Brownian motion through summation of the sample paths of ordinary random walks, which is related to the central limit theorem.
Definition 4.1 (Correlated random walk) For any p ∈ [0, 1], denote by X^p the correlated random walk with persistence index p. It consists of a jump at each time step with jump size +1 or −1 such that the first jump is +1 or −1 with equal probability, and each subsequent jump equals the previous jump with probability p, or its opposite with probability 1 − p.
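Definition 4.1 can be implemented directly. In this Python sketch (the function name is ours), the lag-one correlation of the jumps is 2p − 1:

```python
import numpy as np

def correlated_walk(n_steps, p, seed=None):
    """Correlated random walk: each +/-1 jump repeats the previous jump
    with probability p and flips it with probability 1 - p."""
    rng = np.random.default_rng(seed)
    jumps = np.empty(n_steps)
    jumps[0] = rng.choice([-1.0, 1.0])        # symmetric first jump
    repeat = rng.random(n_steps - 1) < p      # persistence decisions
    for k in range(1, n_steps):
        jumps[k] = jumps[k - 1] if repeat[k - 1] else -jumps[k - 1]
    return np.cumsum(jumps)
```

With p = 1/2 this reduces to the ordinary symmetric random walk; p > 1/2 produces the persistent paths used in the construction below.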
Theorem 4.1 For any , we have
In order to add additional randomness to the correlated random walks, we replace the constant persistence index with a random variable, and denote the resulting correlated random walk accordingly. More formally, denote by the law of the walk for a given persistence index; given a probability measure on the set of persistence indices, the corresponding mixture is called the annealed law of the correlated walk associated with that measure.
Proposition 4.1 For all , we have
Theorem 4.2 For , denote by the probability on with density . Let be a sequence of independent processes with probability law . Then,
where stands for convergence in law, meaning weak convergence in the Skorohod topology on the space of càdlàg functions. Here, ⌊·⌋ is the floor function rounding its argument down to the nearest integer, M is the number of trajectories of correlated random walks, and N is the number of time steps.
Remark: For H = 1/2, there is a similar expression (see ). The order of the limits in equation (33) is important: if reversed, the sum would converge to 0. Theorem 4.2 is mostly of use for theoretical construction.
In , the above theorem is further simplified from a double summation into a single summation by applying the Berry-Esseen bound: as long as is of order ,
In practice, any probability measure with moments equivalent to , where is a slowly varying function, can be used. This can be shown by Karamata's theorem, for which further elaboration is found in . In , three families of equivalent measures are provided; the second family of measures is the most appropriate for simulation purposes: for , it has the density with the corresponding normalizing factor , where is a standard uniform random variable. The error given by the Berry-Esseen bound for is given by
where . Here, k serves as a control variable of order , and the error term contains , which can be seen as a way to restrict the error at the price of distorting the covariance relation in , though it is advisable to keep it. For more discussion on the choice of k, we refer to Section 4 of .
Theorem 4.3 (Simulation of fBm with correlated random walks) Simulating fBm with correlated random walks for the case H > 1/2 consists of the following steps:
Calculate M(N) from the tolerable error level via equation (35). Calculate , where is the floor function, and create the time index : .
Simulate M independent copies of the persistence index, one for each of the M trajectories.
Simulate M copies of correlated random walks:
At each , calculate
Remark: For any given time horizon T, it is easier to simulate the path with a given resolution N on the unit interval and rescale it by self-similarity to arrive at the path on [0, T].
This algorithm is interesting from a theoretical point of view, since it gives us a construction of fractional Brownian motion mirroring its ordinary Brownian motion counterpart, with the help of the central limit theorem. In practice, however, it might not be fast enough for applications that require a large number of simulated paths.
4.4. Conditional random midpoint displacement
This algorithm was put forth by Norros et al.  and uses a similar approach of computing the conditional distribution of fractional Gaussian noise as the Hosking algorithm of Section 3.1. The difference is that this algorithm does not completely capture the covariance of all the sample points; instead, it conditions on a fixed number of chosen points, in a different ordering (recall that, in the Hosking method, the conditioning is done in chronological order).
Again, we are interested in stationary fractional Gaussian noise, from which fractional Brownian motion on the time interval [0, 1] is backed out and can be rescaled according to the self-similarity relationship. We first outline the idea of the bisection method, which will be expanded into the conditional midpoint replacement scheme later on.
4.4.1. Bisection scheme and basic algorithm
In this section we adopt the notation of Norros et al. : Z denotes the fractional Brownian motion, and X_{i,j} the fractional Gaussian noise of interval j at a given level i.
The idea is to simulate Z on the interval [0, 1]. First, given Z(0) = 0 and Z(1) drawn from the standard normal distribution, we compute the conditional distribution of Z(1/2). The bisection involves the indices i and j, where i indicates the 'level' and j the 'position'. Let
X_{i,j} := Z(j 2^{−i}) − Z((j − 1) 2^{−i}).
It is easy to see that, at any given level i, the interval [0, 1] is divided into 2^i subintervals.
If we call the (i − 1)-th level the 'mother level', it is divided twice as finely on the next level. So, for any interval on the mother level, it is easy to observe the relationship
X_{i−1,j} = X_{i,2j−1} + X_{i,2j}.
Because of equation (36), it is enough to generate X_{i,j} for odd j. So, let us proceed from left to right, assuming that the earlier sample points on the current level and all points on coarser levels have already been generated. For the next point X_{i,j} with odd j, we have
where the U_{i,j} are i.i.d. standard Gaussian random variables. Equation (37) can be rewritten as
As mentioned before, this scheme conditions on a fixed number of past samples instead of the whole past; the two integers m and n indicate the numbers of intervals that the conditional expectation and variance are computed from, and the scheme is called RMD(m,n). Here 'm' indicates that at most m neighboring increments to the left of the interval in question are used, and 'n' indicates that at most n neighboring increments to the right are used.
Denote by the covariance matrix with as the first entry, and as the rest of the entries. Then, we have
Similar to (14)-(16), we can partition as
Note that . Hence, we have
By the stationarity of the increments of Z and by self-similarity, the conditioning coefficients are independent of i and j whenever the conditioning sequence is not truncated by the endpoints 0 and 1, and they depend on j only when it is.
4.4.2. On-the-fly RMD(m,n) generation
Norros et al.  further propose that, instead of completely generating the previous level (i − 1) first, the partitioning and conditioning can be done 'on-the-fly', meaning that multiple unfinished levels may be processed at the same time. Unlike the previous RMD(m,n) scheme, the level here is defined differently.
First define the ‘resolution’ by , as the smallest interval that we will be dealing with in this scheme. Note that this is different from the previous sub-section where at the i-th level , which can be bisected finer indefinitely. In the on-the-fly RMD scheme, the minimum interval length is defined as .
At each level i, the interval is split into subintervals of length δ, and samples are generated at each point until all points within the interval are filled. Then, the trace is expanded to the next level i + 1 and the interval is doubled; this procedure is called 'enlargement'. So, instead of having a predetermined time horizon and zooming in with twice-finer resolution as in the original RMD scheme, the new RMD scheme has a predetermined resolution and doubles the horizon at each level.
Within the same interval , the following rules are applied for the inner intervals:
Must have n − 1 (or all available) right-neighboring intervals, of the same length as the mother interval.
Also, there should be m (or all available) left-neighboring intervals, of the same length as the interval being considered.
If several intervals satisfy both conditions above, choose the leftmost one.
When all intervals are filled, expand to the next level by 'enlargement'.
Here we use
instead of to avoid confusion with the original RMD scheme notation.
Similar to the ordinary RMD, we have the following equations from the conditional distribution of multivariate Gaussian processes: The enlargement stage is defined by
Inner interval points are constructed similarly to the ordinary RMD scheme (except that the right-neighboring intervals are of level i+1 instead of i-1) as
where is the last generated increment on the (i+1)-th level. The order of splitting for the on-the-fly scheme is given in Figure 1, where its ordinary RMD counterpart’s splitting order is also given.
Norros et al.  have made an extensive comparison of the on-the-fly RMD schemes against the FFT and aggregate methods in terms of accuracy and robustness. On-the-fly RMD and FFT are significantly faster than the aggregate method, and on-the-fly RMD can generate samples with no fixed time horizon, whereas for FFT the whole trace has to be generated before it can be used. So RMD seems superior in terms of flexibility.
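To make the midpoint-displacement idea concrete, the following is a minimal Python sketch (function and variable names are my own) of the plain, non-conditional recursion, which conditions each midpoint only on its two enclosing endpoints, effectively an RMD(1,1) approximation rather than the full RMD(m,n) scheme with m left and n right neighbors:

```python
import numpy as np

def rmd_fbm(H, levels, rng=None):
    """Plain midpoint displacement for fBm on [0, 1].

    Each midpoint is conditioned only on the two enclosing endpoints
    (an RMD(1,1)-style approximation; exact only for H = 1/2).
    Returns fBm sampled at 2**levels + 1 equidistant points.
    """
    rng = np.random.default_rng(rng)
    x = np.zeros(2)                  # X(0) = 0, X(1) ~ N(0, 1)
    x[1] = rng.standard_normal()
    for i in range(1, levels + 1):
        # Conditional variance of a midpoint given its two endpoints:
        # L^{2H} * 2^{-2H} * (1 - 2^{2H-2}) with interval length
        # L = 2^{-(i-1)}, which simplifies to 2^{-2iH} (1 - 2^{2H-2}).
        var = 2.0 ** (-2.0 * i * H) * (1.0 - 2.0 ** (2.0 * H - 2.0))
        mid = (0.5 * (x[:-1] + x[1:])
               + np.sqrt(var) * rng.standard_normal(len(x) - 1))
        new = np.empty(2 * len(x) - 1)
        new[::2] = x                 # keep existing points
        new[1::2] = mid              # interleave the new midpoints
        x = new
    return x
```

For H = 1/2 the displacement variance reduces to the familiar Brownian-bridge value L/4, which is one quick consistency check on the formula.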
4.5. Spectral method
In this subsection, we will look into the spectral method of approximating fractional Gaussian noises, which has its origin in spectral analysis in physics: a time-domain signal can be transformed into the frequency domain without loss of information through the Fourier transform. For a typical Fourier time series, the original input is deterministic and is transformed into a spectral density that represents the magnitude of the different frequencies in the frequency domain. It is possible to extend this approach to the analysis of stochastic processes. Though it is impossible to study every realization, we can still analyze the process in a probabilistic/distributional sense by observing the expected frequency information contained in the autocovariance function.
Spectral density is computed for frequencies, , as
The here is the autocovariance function, which can be recovered by the inverse formula
Note that the domain is only , since any higher frequency would only correspond to oscillations between our desired sample points.
The problem with the above expression is that there is no known closed form for . Paxson  proposes the following approximation scheme for :
$\tilde{B}_3(\lambda,H)=\sum_{j=1}^{3}\left(a_j^{d}+b_j^{d}\right)+\frac{a_3^{d'}+b_3^{d'}+a_4^{d'}+b_4^{d'}}{8H\pi}$, with $a_j=2\pi j+\lambda$, $b_j=2\pi j-\lambda$, $d=-2H-1$, $d'=-2H$.
Moreover, with the help of the Whittle estimator, Paxson  shows that
gives a very robust and unbiased approximation for the . See Appendix A of  for a detailed comparison and justification of this approximation.
With the approximation scheme for the spectral density at hand, we can now look at the spectral analysis of a stationary discrete-time Gaussian process (fractional Gaussian noise; fGn) , which can be represented in terms of the spectral density as
where and are independent standard Brownian motions and the equality is understood in terms of distribution. Define and fix some integer . After setting for , we can approximate it by a simple function defined on for by
which is similar to the typical construction of stochastic integral.
Define the sine counterpart as , and then integrate both and with respect to and on to approximate . Then, we have
where are i.i.d. standard normal random variables. and are independent, as they result from integration against the two aforementioned independent Brownian motions.
Similar to the Fourier transform approach, the fGns can be recovered efficiently by applying the FFT to the following coefficients:
It is easy to check that the covariance structure of fGns can be recovered, with the help of product-to-sum trigonometric identity, as
Here, is a vector of exponentially distributed random variables with mean 1, and are uniformly distributed random variables on independent of . This method is of order , and only one FFT is required instead of the two required by the Davies-Harte FFT method. Hence, it is about four times faster.
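A minimal sketch of the resulting one-FFT simulation follows. The normalization constants are my own, chosen so that the sample variance matches the unit autocovariance at lag zero, and a simple truncated aliasing sum stands in for the spectral density; both differ in detail across write-ups of this method:

```python
import math
import numpy as np

def fgn_spec(lam, H, J=50):
    """fGn spectral density via a truncated aliasing sum (J terms)."""
    d = -2.0 * H - 1.0
    s = abs(lam) ** d + sum((2.0 * math.pi * j + lam) ** d
                            + (2.0 * math.pi * j - lam) ** d
                            for j in range(1, J + 1))
    return (2.0 * math.sin(math.pi * H) * math.gamma(2.0 * H + 1.0)
            * (1.0 - math.cos(lam)) * s)

def paxson_fgn(n, H, rng=None):
    """Approximate fGn of length n (n even) with a single inverse FFT.

    Squared coefficient moduli are the spectral density scaled by
    i.i.d. mean-1 exponentials; phases are i.i.d. uniform on [0, 2pi).
    """
    rng = np.random.default_rng(rng)
    lam = 2.0 * math.pi * np.arange(1, n // 2 + 1) / n
    f = np.array([fgn_spec(x, H) for x in lam])
    e = rng.exponential(1.0, n // 2)
    phi = rng.uniform(0.0, 2.0 * math.pi, n // 2)
    z = np.zeros(n, dtype=complex)
    z[1:n // 2 + 1] = np.sqrt(n * f * e) * np.exp(1j * phi)
    z[n // 2] = np.sqrt(n * f[-1] * e[-1])       # keep Nyquist term real
    z[n // 2 + 1:] = np.conj(z[1:n // 2][::-1])  # Hermitian symmetry
    return np.fft.ifft(z).real
```

The Hermitian symmetry of the coefficient vector guarantees a real output sequence, so one inverse FFT suffices.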
Remark: The Paxson algorithm in (54) is improved by Dieker  to retain the normality of the sequence and its relationship with the original spectral representation.
5. A numerical example: fBm volatility model
Finally, this section provides a numerical example of Monte Carlo simulation of the fBm volatility model. In Section 2.3, we briefly mentioned the truncated fractional Brownian motion. This section outlines the stochastic volatility model explored by Comte and Renault . We follow the example given in , with the following setup to simulate the volatility process:
where is the volatility factor of the log-volatility process . The volatility process is the exponential of an OU-process driven by the truncated fBm. Also, we assume that .
Solving the OU-process with integrating factor, we have
By applying the fractional calculus or using the formulas provided in , we can formulate in another way as
where B(t) is an ordinary standard Brownian motion and
By applying the ordinary discretization scheme to (54), we have
Here, the coefficient can be calculated by symbolic packages such as MATLAB and Mathematica. In our case of the OU-process, it is a sum of a constant, an incomplete gamma function, and a gamma function.
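Because the symbols above are model-specific, the following Python sketch uses hypothetical parameter names (kappa, nu, y0) and a Riemann-Liouville kernel (t - s)^(H - 1/2) with its normalization constant omitted; it illustrates the ordinary discretization idea rather than reproducing the chapter's exact scheme:

```python
import numpy as np

def truncated_fbm(n, dt, H, rng=None):
    """Riemann-Liouville (truncated) fBm on a grid via a left-point
    Riemann sum: W_H(t_k) ~ sum_{j<k} (t_k - t_j)^(H-1/2) dB_j.
    The kernel normalization constant is omitted for simplicity."""
    rng = np.random.default_rng(rng)
    db = rng.standard_normal(n) * np.sqrt(dt)   # Brownian increments
    t = np.arange(1, n + 1) * dt
    w = np.empty(n)
    for k in range(n):
        w[k] = ((t[k] - t[:k]) ** (H - 0.5) @ db[:k]) if k else 0.0
    return w

def log_vol_path(n, dt, H, kappa, nu, y0, rng=None):
    """Euler scheme for an OU log-volatility dY = -kappa*Y dt + nu*dW_H
    (parameter names are illustrative, not the chapter's notation)."""
    w = truncated_fbm(n, dt, H, rng)
    dw = np.diff(np.concatenate(([0.0], w)))
    y = np.empty(n + 1)
    y[0] = y0
    for k in range(n):
        y[k + 1] = y[k] - kappa * y[k] * dt + nu * dw[k]
    return np.exp(y)    # volatility is the exponential of log-volatility
```

As noted below, the left-point Riemann sum only imitates the truncated fBm when dt is very small, which is precisely the robustness caveat discussed in the text.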
The sample path of the fractional-OU driven volatility process shows a more persistent, more prominent trend (smoother, with fewer reversals) than the ordinary-OU driven volatility process, which is what we expect according to . This approach only imitates fractional Brownian motion if the time-step in the discretization scheme is very small, which renders it not robust enough for practical purposes. For analytical purposes, however, it can be shown to be equivalent to the S-transform approach outlined in . For more discussion of its statistical properties and justification of its stability compared to the original stationary version, we direct the reader to  and . We provide this approach for the sake of completeness and the readers’ interest.
We have also included fractional Brownian motion simulated by the circulant-embedding method, with the same parameters as above, in Figure 3. This approach is more robust since the covariance structure does not depend on the step-size. Figure 3 shows a sample path of , where is the fBm generated by the circulant-embedding FFT with the same parameters as Figure 2: . In these two examples, the fractional Brownian motions are scaled so that the variance of equals that of the ordinary Brownian motion .
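For reference, a compact sketch of the circulant-embedding generator (the Davies-Harte-type construction; function and variable names are my own) with one FFT to diagonalize the circulant matrix and one to color the noise:

```python
import numpy as np

def fbm_circulant(n, H, rng=None):
    """Exact fGn/fBm of length n via circulant embedding.

    Embeds the (n+1)-point fGn autocovariance in a circulant matrix of
    size 2n, diagonalizes it with one FFT, and colors complex Gaussian
    noise by the square-root eigenvalues.
    """
    rng = np.random.default_rng(rng)
    k = np.arange(n + 1)
    # fGn autocovariance: 0.5(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
                   + np.abs(k - 1.0) ** (2 * H))
    row = np.concatenate([gamma, gamma[-2:0:-1]])  # circulant row, len 2n
    lam = np.fft.fft(row).real                     # its eigenvalues
    assert lam.min() > -1e-10, "embedding not nonnegative definite"
    m = 2 * n
    w = np.zeros(m, dtype=complex)
    w[0] = np.sqrt(lam[0] / m) * rng.standard_normal()
    w[n] = np.sqrt(lam[n] / m) * rng.standard_normal()
    u = rng.standard_normal(n - 1)
    v = rng.standard_normal(n - 1)
    w[1:n] = np.sqrt(lam[1:n] / (2 * m)) * (u + 1j * v)
    w[n + 1:] = np.conj(w[1:n][::-1])              # Hermitian symmetry
    fgn = np.fft.fft(w).real[:n]   # exact fractional Gaussian noise
    return np.cumsum(fgn)          # fBm at integer times 1..n
```

Because the covariance embedded in the circulant row is exact, the output does not degrade with the step-size, in contrast to the truncated discretization above.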
6. Conclusion
Motivated by the inadequacy of the traditional stochastic volatility framework in capturing the long-range dependence of the volatility process, we have explored the possibilities of fractional Brownian motion (fBm) in financial modeling and various schemes for its Monte Carlo simulation. Starting from the general definition, fBm can be considered an extension of ordinary Brownian motion with an autocovariance function that depends on both time indices instead of just the minimum of the two.
With different values of the Hurst index, we can distinguish fractional Brownian motion into three different cases: and . Since only the case of displays long-range dependence, that is the case we are interested in. Several prominent examples of fBm in financial modeling are given.
Simulation schemes are divided into exact schemes and approximate schemes. While the former capture the complete covariance structure over the whole sample length, the latter either approximate the values of a true realization or truncate the covariance structure for robustness.
We start with the exact scheme of the Hosking algorithm, which utilizes the multivariate Gaussian distribution of fractional Gaussian noises and simulates each sample point conditioned on the previous samples. Alternatively, instead of simulating each point conditioned on the past sample points, we can first construct the covariance matrix of the desired sample size, find a ‘square root’ of this covariance matrix, and multiply it with a standard normal vector; the product vector is then fractional Gaussian noise (fGn) with exactly the covariance structure of that matrix. To find the ‘square root’, we first investigate the Cholesky decomposition, but its computational and memory expense is too large to be feasible in practice. In contrast, the fast Fourier transform (FFT) simulation embeds the original covariance matrix in a larger circulant matrix and simulates by diagonalizing the circulant matrix into the product of an eigenvalue matrix and a unitary matrix. The FFT method is significantly more robust than the previous schemes.
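To recap the Hosking scheme concretely, the following sketch draws each point from its exact conditional distribution given all previous points, updating the conditional mean and variance with the Durbin-Levinson recursion (function and variable names are my own):

```python
import numpy as np

def hosking_fgn(n, H, rng=None):
    """fGn by Hosking's method: each sample is drawn from its exact
    Gaussian conditional distribution given all previous samples,
    with the conditional mean and variance updated by the
    Durbin-Levinson recursion (O(n^2) time, O(n) memory)."""
    rng = np.random.default_rng(rng)
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
                   + np.abs(k - 1.0) ** (2 * H))
    x = np.empty(n)
    x[0] = rng.standard_normal()      # gamma(0) = 1
    phi = np.empty(n)                 # current prediction coefficients
    sigma2 = 1.0                      # current prediction variance
    for t in range(1, n):
        # new reflection coefficient phi_{t,t}
        phi_t = (gamma[t] - phi[:t - 1] @ gamma[t - 1:0:-1]) / sigma2
        # update the older coefficients: phi_{t,j} = phi_{t-1,j}
        #                                - phi_{t,t} phi_{t-1,t-j}
        phi[:t - 1] = phi[:t - 1] - phi_t * phi[:t - 1][::-1]
        phi[t - 1] = phi_t
        sigma2 *= 1.0 - phi_t ** 2
        mu = phi[:t] @ x[t - 1::-1]   # conditional mean given the past
        x[t] = mu + np.sqrt(sigma2) * rng.standard_normal()
    return x
```

The recursion avoids ever forming or factorizing the full covariance matrix, which is what distinguishes Hosking's method from the Cholesky approach despite both being exact.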
We then look into the approximate schemes, namely the construction of fBm with correlated random walks, which can be viewed as an extension of the construction of Brownian motion with the ordinary random walk. This method gives us interesting insight into the inner workings of fBm, especially the idea of long-range dependence. This approach is not only interesting and easy to implement, but its error can also be calculated explicitly. The drawback of this approach is that it slows down significantly for large sample sizes, and the tradeoff is made based on the error function. The last simulation approach we look at is the conditional random midpoint displacement (RMD) scheme, which is mathematically similar to the Hosking scheme but conditions on a fixed number of past sample points. The on-the-fly version of the RMD scheme can generate sample points indefinitely at a given resolution. Finally, we include the spectral method for approximating fBm.
Comparing all the schemes, and also referring to the studies done in , we conclude that if the time horizon is known beforehand, the FFT/spectral schemes would be the best choice due to their high speed and accuracy. Alternatively, if samples must be generated indefinitely, the on-the-fly conditional RMD scheme offers a similar level of accuracy and speed to the FFT scheme.
We wish to express our gratitude to Professor Alex Novikov for sharing relevant research materials with us, and Professor Bernt Øksendal and Professor Enriquez Nathanaël for interesting discussions and remarks.