Fractional Brownian Motions in Financial Models and Their Monte Carlo Simulation

Fractional Brownian motion (fBm) was first introduced within a Hilbert space framework by Kolmogorov [1], and was further studied, and given the name 'fractional Brownian motion', in the 1968 paper by Mandelbrot and Van Ness [2]. It has been widely used in various scientific fields, most notably in hydrology, as first suggested in [3]. It also plays an important role in communication technology by enriching queueing theory with models of real network traffic.


Introduction
Fractional Brownian motion (fBm) was first introduced within a Hilbert space framework by Kolmogorov [1], and was further studied, and given the name 'fractional Brownian motion', in the 1968 paper by Mandelbrot and Van Ness [2]. It has been widely used in various scientific fields, most notably in hydrology, as first suggested in [3]. It also plays an important role in communication technology by enriching queueing theory with models of real network traffic.
In recent years, it has been steadily gaining attention in the area of finance, as it appears that the traditional stochastic volatility model driven by ordinary Brownian motion implies geometric time-decay of the volatility smile, which is much faster than what we see in real market data (where the decay is hyperbolic). This stylized feature is called volatility persistence; see [4] for a detailed discussion of statistical tests for the existence of long-range memory (the author has also proposed a robust extension of the R/S statistic for that particular purpose). Several modeling approaches have been suggested to capture this persistence in the conditional variance, either via a unit-root or a long-memory process. In order to keep the pricing framework largely intact, it is more interesting to study the long-memory process, and fBm is a particularly good match due to its similarity to ordinary Brownian motion and its Gaussian properties.
In this chapter, we will outline several approaches to simulate fractional Brownian motion with H > 1/2, including exact methods and approximate methods. The Hurst index H, first made popular by Benoit Mandelbrot, is the parameter used in the literature to generalize Brownian motion into fractional Brownian motion; we give a detailed definition in Definition 1.1. We also provide a brief introduction to the truncated fractional Brownian motion (a long-memory model in continuous time) as proposed in [5,6].

Financial motivation and backgrounds
Numerous empirical studies have pointed out that, in options markets, the implied volatility backed out from the Black-Scholes equation displays volatility skews or smiles; the smile effect, which is well known to practitioners, refers to the U-shaped price distortion on the implied volatility surface.
In Hull and White [7] and Scott [8], this feature of volatility is captured by a stochastic regime, known as the stochastic volatility model:

dS(t) = μ S(t) dt + σ(t) S(t) dB_1(t),   dσ(t) = a(σ(t)) dt + b(σ(t)) dB_2(t),   (1)

where S(t) is the asset price, σ(t) is the instantaneous volatility at time t, and {B_1(t), B_2(t)} are ordinary standard Brownian motions. Hull and White [7] have shown that the price of a European option at time t with exercise date T is the conditional expectation of the Black-Scholes option pricing formula, where the constant volatility of the original formula is replaced by the quadratic average over the period [t, T]:

σ̄²(t, T) = (1 / (T − t)) ∫_t^T σ²(s) ds.   (2)

Such models are successful at capturing the symmetric smile and the skewness of the implied volatility by imposing relations between the driving Brownian motions in (1): a symmetric smile is explained by independence, while skewness can be explained by linear correlation.
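As an illustration, the conditional-expectation formula above can be turned into a small Monte Carlo routine that averages Black-Scholes prices over simulated volatility paths. The lognormal volatility dynamics, the function names (`bs_call`, `hull_white_price`), and all parameter values below are illustrative assumptions for this sketch, not the calibrated models of this chapter (NumPy assumed):

```python
import numpy as np
from math import log, sqrt, exp, erf

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call with constant volatility."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * Phi(d1) - K * exp(-r * tau) * Phi(d2)

def hull_white_price(S0, K, r, sigma0, xi, T, n_paths=5000, n_steps=100, seed=0):
    """Monte Carlo version of the Hull-White formula: average the Black-Scholes
    price evaluated at the root of the quadratic average (2) of each vol path.
    Volatility is assumed lognormal and independent of the asset (illustrative)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dB2 = rng.standard_normal((n_paths, n_steps)) * sqrt(dt)
    # d(log sigma) = -0.5*xi^2 dt + xi dB2  (driftless vol-of-vol xi)
    log_sigma = np.log(sigma0) + np.cumsum(-0.5 * xi**2 * dt + xi * dB2, axis=1)
    sigma_bar = np.sqrt(np.exp(2.0 * log_sigma).mean(axis=1))   # quadratic average
    return float(np.mean([bs_call(S0, K, r, s, T) for s in sigma_bar]))
```

With independent driving Brownian motions, as here, the model produces the symmetric smile discussed above; correlating B_1 and B_2 would introduce skew.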
Due to the temporal aggregation effect evident in (2), however, the smile effect deteriorates with time-to-maturity, since temporal aggregation gradually erases the conditional heteroskedasticity; in the standard stochastic volatility setup, this decay is much faster than what is observed in market data. The phenomenon of a slowly decaying volatility smile is known as volatility persistence (long-range dependence of the volatility process). It is particularly pronounced in high-frequency data, for which the conditional variance process displays near unit-root behavior.
Furthermore, the existence of this phenomenon is corroborated by a large body of research pointing out that the conditional volatility of asset returns also displays long-range dependence: [9][10][11][12] have discussed the evidence of this phenomenon in empirical data extensively. Bayesian estimation of stochastic volatility models in [13] shows similar patterns of persistence.
Motivated by this inadequacy, the long-memory process was deemed a more appropriate enrichment for this purpose. Fractional Brownian motion is a prime candidate among all long-memory processes given its tractability and similarity to ordinary Brownian motion: both are self-similar with a similar Gaussian structure. For discussions of estimation and evidence of the long-range dependence in the conditional variance of asset returns, the reader is referred to [10] and Section 3.1 of [14]. Now we provide a formal definition of fractional Brownian motion (fBm), adopting the definition given in [15].
Definition 1.1 A fractional Brownian motion B_H(⋅) with Hurst index H ∈ (0, 1) is a continuous and centered Gaussian process with covariance function

E[B_H(t) B_H(s)] = (1/2) (t^{2H} + s^{2H} − |t − s|^{2H}).   (3)

In particular, for H = 1/2, it reduces to the ordinary Brownian motion, with E[B(t)B(s)] = min(t, s). From equation (3) we have the following properties:

1. B_H(0) = 0 and E[B_H(t)] = 0, ∀ t ≥ 0.
2. B_H(⋅) is a Gaussian process with variance E[B_H(t)²] = t^{2H}, ∀ t ≥ 0.
3. B_H(⋅) is self-similar, meaning that B_H(Tt) has the same distribution law as T^H B_H(t).
4. B_H(⋅) has continuous trajectories.
Fractional Brownian motions are divided into three very different categories: H < 1/2, H = 1/2, and H > 1/2. This is of particular importance because there is a deterministic difference between the cases H < 1/2 and H > 1/2, namely long-range dependence. A stationary sequence {X_k} exhibits long-range dependence if, for some constants c and α ∈ (0, 1), its autocovariance ρ(n) satisfies ρ(n) ~ c|n|^{−α} as n → ∞. In this case, the dependence between X_k and X_{k+n} decays slowly as n → ∞ and Σ_{n=1}^∞ ρ(n) = ∞. If we set X_k = B_H(k) − B_H(k − 1) and apply equation (3), we have

γ_H(n) = E[X_k X_{k+n}] = (1/2)(|n + 1|^{2H} − 2|n|^{2H} + |n − 1|^{2H}) ~ H(2H − 1)|n|^{2H−2} as n → ∞.

Therefore, we conclude that: for H > 1/2, Σ_{n=1}^∞ γ_H(n) = ∞, while for H < 1/2, Σ_{n=1}^∞ |γ_H(n)| < ∞. Hence, only in the case of H > 1/2 do fractional Brownian motions display long-memory dependence.
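This dichotomy is easy to verify numerically. A minimal sketch (function name ours, NumPy assumed) that sums the fGn autocovariances for a short- and a long-memory Hurst index:

```python
import numpy as np

def fgn_autocov(n, H):
    """Autocovariance gamma_H(n) of the fGn increments, from equation (3)."""
    n = np.abs(np.asarray(n, dtype=float))
    return 0.5 * ((n + 1)**(2*H) - 2*n**(2*H) + np.abs(n - 1)**(2*H))

lags = np.arange(1, 100_001)
partial_short = fgn_autocov(lags, 0.30).sum()   # summable: partial sums stay bounded
partial_long = fgn_autocov(lags, 0.75).sum()    # keeps growing like N^(2H-1)
```

The partial sums telescope, so for H < 1/2 they converge (to −γ(0)/2), while for H > 1/2 they diverge as N^{2H−1}.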
As pointed out in [16], the covariance γ(⋅) at large lags may be difficult to estimate in practice, so models with long-range dependence are often formulated in terms of self-similarity. Self-similarity allows us to extrapolate across time scales and deduce long-time behavior from short-time behavior, which is more readily observed.
Because we are interested in capturing the long-memory phenomenon observed in financial markets, the rest of this chapter will only concern the case of H > 1 / 2 .

Stochastic integral representation
In the original paper [2], Mandelbrot and Van Ness represent the fBm as a stochastic integral with respect to ordinary Brownian motion:

B_H(t) = (1 / Γ(H + 1/2)) ( ∫_{−∞}^0 [(t − s)^{H−1/2} − (−s)^{H−1/2}] dB(s) + ∫_0^t (t − s)^{H−1/2} dB(s) ),   (5)

where Γ(⋅) is the gamma function.
They have also included an alternative representation of the fBm, which is the basis of the model in [5,6]:

B̃_H(t) = (1 / Γ(H + 1/2)) ∫_0^t (t − s)^{H−1/2} dB(s).   (6)

This version of fBm is 'truncated' in the sense that the integration from negative infinity to zero in equation (5) is dropped. We will refer to the model (6) as the 'truncated fractional Brownian motion' in the rest of this chapter. As pointed out in [2], the representation (6) was first proposed by Paul Lévy, who defined the fBm by the Riemann-Liouville fractional integral, while the original integral in equation (5) is the Weyl fractional integral.
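The truncated representation (6) lends itself to a direct Riemann-sum discretization. The sketch below (function name ours; the midpoint rule is our choice, to sidestep the kernel singularity at s = t) generates sample paths on a uniform grid:

```python
import numpy as np
from math import gamma

def truncated_fbm_paths(H, T, n, n_paths, seed=0):
    """Riemann-sum discretization of the truncated (Riemann-Liouville) fBm (6):
    B~_H(t_j) ~ (1/Gamma(H+1/2)) * sum_k (t_j - s_k)^(H-1/2) * dB_k,
    with s_k taken at increment midpoints to avoid the kernel singularity."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.standard_normal((n_paths, n)) * np.sqrt(dt)   # Brownian increments
    t = np.arange(1, n + 1) * dt                           # grid t_1, ..., t_n
    s = (np.arange(n) + 0.5) * dt                          # midpoints s_k
    diff = t[:, None] - s[None, :]
    K = np.where(diff > 0, np.abs(diff)**(H - 0.5), 0.0) / gamma(H + 0.5)
    paths = dB @ K.T                                       # shape (n_paths, n)
    return np.hstack([np.zeros((n_paths, 1)), paths])      # B~_H(0) = 0
```

As a sanity check, Var[B̃_H(t)] = t^{2H} / (2H Γ(H + 1/2)²) follows directly from the Itô isometry applied to (6), and the sample variance of the simulated endpoint should match it.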

Theory and Applications of Monte Carlo Simulations
The definition (6) of fBm is in general not asymptotically covariance-stationary, even though it retains self-similarity.For further technical discussion and rigorous definition of the truncated fractional Brownian motion, we refer the reader to [5].
Given this difference, analytical tools such as Malliavin calculus should be applied with care, keeping in mind the distinction between the two versions of fBm. [17] offers an in-depth discussion of the theoretical differences.

Fractional Brownian motions in financial models
We first look at several examples that utilize fractional Brownian motions in the realm of financial modeling.

Asset price model
In the previous section, we mentioned that the motivation for fBms in financial models is to capture the long-range dependence of the volatility process. However, it is worth discussing the possibility of applying it to the asset process itself.
In practice, it is considered that an asset process driven by fBm will result in arbitrage. The idea is that, since B_H(t) is not a semimartingale in continuous time for H ≠ 1/2, the asset process described by B_H(t) violates NFLVR (no free lunch with vanishing risk), a weaker version of no-arbitrage, and thus does not admit an equivalent martingale measure according to Theorem 7.2 of [18]. Such findings and constructions of arbitrage can be found in Rogers [19].
In contrast, Cheridito [20] has given multiple classes of trading strategies that allow various levels of arbitrage (NFLVR, arbitrage in the classical sense, and strong arbitrage) under fBm-driven assets, and has shown that the arbitrages are all eliminated if the inter-transaction time is not zero, i.e., the classes of strategies become arbitrage-free. This assumption is reasonable in practice, given physical limits and non-zero transaction costs. For more information on arbitrage theory and its generalizations, the reader is referred to [18][19][20].

Stochastic volatility model
As we have mentioned in the introductory section, the main motivation for fBm is to capture volatility persistence, the stylized feature observed in empirical data. There are several prominent models involving a volatility process driven by fBm; here we outline a few of them.

Long-memory continuous model
In [5,6], the asset price process follows a geometric Brownian motion with volatility persistence, where the log-volatility term x(t) follows a fractional-OU process driven by the truncated fBm (6):

dx(t) = −k x(t) dt + ν dB̃_H(t).   (7)
Although Mandelbrot [2] deemed this representation as weighting the origin too heavily, the model (7) is easier and more robust to simulate than models driven by the ordinary fBm. A simulation example is explored in Section 5.

Affine fractional stochastic volatility model
In [21], Comte et al. assume that the squared-volatility process follows an affine model (8), in which the long-memory component X^(α)(t) is defined by a fractional integral. A similar treatment of the fractional integration is outlined in [5]. The affine structure in (8) is similar to the one originally studied by Duffie et al. [22].
The affine structure is adopted for its extra tractability, and the model is thus better suited for practical option pricing and hedging than the original idea (7). In fact, Comte et al. [21] have shown that this model can better depict the difference between the short- and long-memory properties in the resulting option prices.

Martingale expansion
Fukasawa [23] adopts and expands the asymptotic expansion technique of European option prices around the Black-Scholes equation, first proposed by Yoshida [24], by means of a perturbation technique and partial Malliavin calculus. It is shown that the logarithm of the asset process can be expressed as

Here, r is the riskless rate, θ ∈ (−1, 1), y ∈ ℝ is a constant, (W, W′) is a 2-dimensional standard (independent) Brownian motion, ϵ_n → 0 is a deterministic sequence as n → ∞, and g(⋅) is the stochastic volatility process, which is adapted to the minimal filtration.
Note that W_t^H is a fractional Brownian motion with Hurst index H, where K_H(t, s) is the kernel of the stochastic integral representation of Brownian motion over a finite interval. According to [15], for the case of H > 1/2 pertaining to our interest, the kernel has the expression

K_H(t, s) = c_H s^{1/2−H} ∫_s^t (u − s)^{H−3/2} u^{H−1/2} du,   where   c_H = ( H(2H − 1) / β(2 − 2H, H − 1/2) )^{1/2}

and β(⋅, ⋅) is the beta function. Then, according to Corollary 2.6 of Fukasawa [23], the implied volatility can be expressed as in equation (10), where d_2 is the typical argument of N(d_2) in the Black-Scholes formula. Equation (10) can be seen as an approximation in which ϵ_n is the model parameter calibrated from market data. The shape of this approximation remains largely stationary; to check its accuracy, we have no means other than Monte Carlo simulation of fBm.

Simulation with exact methods
First, we look at methods that completely capture the covariance structure and produce true realizations of fractional Brownian motion (fBm) or fractional Gaussian noise (fGn). Every method described in this section has the covariance matrix as its starting point. The approximate schemes we see later are merely numerically close to the value of fBm (or fGn), or coincide with it asymptotically. The collection of algorithms in this section is not meant to be exhaustive; for more algorithms and discussion, see [25].

Hosking method
The Hosking method utilizes the well-known conditional distribution of the multivariate Gaussian distribution in a recursive scheme to generate samples based on the explicit covariance structure. The method generates a general stationary Gaussian process with a given covariance structure; it is not limited to generating fBms.
More specifically, this algorithm generates an fGn sequence {Z_k}, and the fBm is recovered by cumulative summation; the distribution of Z_{n+1} is conditioned on the previous realizations Z_0, …, Z_n. Denote by γ(k) the autocovariance function of the zero-mean process, where we assume for convenience that γ(0) = 1. For n, k = 0, 1, 2, …, we have the following recursive relationship for the (n + 1) × (n + 1) autocovariance matrix Γ(n) = {γ(i − j)}_{i,j=0,…,n}:

Γ(n + 1) = [ 1, c(n)ᵀ ; c(n), Γ(n) ] = [ Γ(n), F(n)c(n) ; c(n)ᵀF(n), 1 ],   (12)

where c(n) is the (n + 1)-column vector with elements c(n)_k = γ(k + 1), k = 0, …, n, and F(n) is the (n + 1) × (n + 1) 'mirrored' identity matrix with ones on the anti-diagonal.

Theorem 3.1 (Multivariate Gaussian distribution) Any multivariate Gaussian random vector z can be partitioned into (z_1, z_2) with mean vector (μ_1, μ_2) and covariance matrix with the corresponding partition Σ = [Σ_11, Σ_12; Σ_21, Σ_22]. The distribution of z_2 conditional on z_1 is multivariate Gaussian with

μ_{2|1} = μ_2 + Σ_21 Σ_11^{-1} (z_1 − μ_1),   Σ_{2|1} = Σ_22 − Σ_21 Σ_11^{-1} Σ_12.   (14)

If we substitute equation (12) into the partition in (14) with Σ_11 = 1, μ = 0, we have the following expression for the conditional distribution:

E[Z_{n+1} | Z_0, …, Z_n] = c(n)ᵀ Γ(n)^{-1} (Z_n, …, Z_0)ᵀ,   Var[Z_{n+1} | Z_0, …, Z_n] = 1 − c(n)ᵀ Γ(n)^{-1} c(n).

With Z_0 ~ N(0, 1), Z_{n+1} for n = 0, 1, … can subsequently be generated.
Taking the inverse of Γ(⋅) at every step is computationally expensive; the algorithm proposed by Hosking [26] computes the inverse Γ(n + 1)^{-1} recursively. The next result is due to Dieker [25].
This algorithm is also applicable to non-stationary processes (see [27] for details). Even though it is very simple and easy to understand, and sample paths can be generated on the fly, its complexity is O(N²), and its computational (as well as memory) expense grows at a prohibitive speed.
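A compact sketch of the recursion, written in the equivalent Durbin-Levinson form that updates the prediction coefficients without ever inverting Γ(n) (function name ours, NumPy assumed):

```python
import numpy as np

def hosking_fgn(n, H, seed=0):
    """Exact fGn of length n via Hosking's recursion (Durbin-Levinson form)."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    g = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))  # gamma(0..n-1)
    X = np.empty(n)
    X[0] = rng.standard_normal()
    phi = np.zeros(n)          # phi[1..m] holds the current prediction coefficients
    sigma2 = 1.0               # current conditional variance
    for m in range(1, n):
        refl = (g[m] - phi[1:m] @ g[1:m][::-1]) / sigma2   # reflection coefficient
        phi[1:m] = phi[1:m] - refl * phi[1:m][::-1]
        phi[m] = refl
        sigma2 *= (1.0 - refl**2)
        mu = phi[1:m+1] @ X[:m][::-1]                      # conditional mean
        X[m] = mu + np.sqrt(sigma2) * rng.standard_normal()
    return X
```

The O(N²) cost is visible in the loop: step m touches O(m) coefficients, so the whole trace costs roughly N²/2 operations.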

Cholesky method
Given that we are dealing with the covariance structure in matrix form, it is natural to go with the Cholesky decomposition: decomposing the covariance matrix into the product of a lower triangular matrix and its conjugate transpose, Γ(n) = L(n)L(n)*. If the covariance matrix is positive-definite (this will be addressed in the next subsection), L(n) has real entries and Γ(n) = L(n)L(n)ᵀ. Writing out the (n + 1) × (n + 1) product element-wise, it is easy to see that l_00² = γ(0), and that l_10 l_00 = γ(1) and l_10² + l_11² = γ(0) on i = 1 (the 2nd row). For i ≥ 1, the entries of the lower triangular matrix are determined by

l_{i,0} = γ(i) / l_{0,0},
l_{i,j} = (1 / l_{j,j}) ( γ(i − j) − Σ_{k=0}^{j−1} l_{i,k} l_{j,k} ),   0 < j ≤ i,
l_{i,i}² = γ(0) − Σ_{k=0}^{i−1} l_{i,k}².

Given independent, identically distributed (i.i.d.) standard normal random variables (V_i)_{i=0,…,n+1}, the fGn sequence is generated by

Z_i = Σ_{k=0}^{i} l_{i,k} V_k,

or, in matrix form, Z(n) = L(n)V(n). If Γ(n) is positive-definite, the non-negativity of l_{i,i}² is guaranteed and L(n) is real. The covariance structure of the process is captured, since

Cov(Z(n)) = Cov(L(n)V(n)) = L(n) Cov(V(n)) L(n)ᵀ = Γ(n).

Even though the Cholesky method is easy to understand and implement, the computation time is O(N³), which renders this scheme extremely uneconomical in practice. To resolve this problem, we proceed to another exact method; the idea is to retain the same relation as equation (21), but with a different decomposition.
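A minimal sketch of the Cholesky scheme (function name ours; `numpy.linalg.cholesky` performs the factorization, and the fBm is recovered by cumulative summation):

```python
import numpy as np

def cholesky_fbm(n, H, seed=0):
    """Exact fBm on an integer grid: factor the fGn covariance matrix
    Gamma = L L^T (O(n^3)), draw Z = L V, and cumulatively sum."""
    k = np.arange(n)
    g = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
    Gamma = g[np.abs(k[:, None] - k[None, :])]   # Toeplitz: Gamma_ij = gamma(|i-j|)
    L = np.linalg.cholesky(Gamma)                # lower triangular, real entries
    rng = np.random.default_rng(seed)
    Z = L @ rng.standard_normal(n)               # exact fGn sample, Z = L V
    return np.cumsum(Z)                          # fBm = cumulative sum of fGn
```

Since the covariance structure is exact, the simulated B_H(n) has variance n^{2H}, which a batch of independent paths will reproduce.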

Fast Fourier transform method
As we have seen in the last section, using the Cholesky decomposition seems to be the most straightforward way to simulate a Gaussian process with a given covariance structure; but it is also the most rudimentary, and thus slow. To improve upon the speed, the idea of utilizing the fast Fourier transform (FFT) was proposed by Davies and Harte [28] and further generalized by Dietrich and Newsam [29].
Similar to the idea before, this method tries to find a decomposition of the covariance matrix Γ = GGᵀ, and a sample is generated as y = Gx for a standard normal random vector x; then y has the given covariance structure, since Cov(y) = G Cov(x) Gᵀ = Γ. The idea is to 'embed' the original covariance matrix in a circulant matrix in order to carry out the FFT. Before we outline the idea, we shall give some details of the linkage between the Fourier transform and the circulant matrix.
Definition 3.1 (Circulant matrix) A circulant matrix is a special case of the Toeplitz matrix in which each row vector is the previous row shifted one element to the right (the last element wraps around to the beginning of the row). In matrix form, an n-by-n circulant matrix can be written as

C = [ c_0, c_{n−1}, …, c_1 ; c_1, c_0, …, c_2 ; ⋯ ; c_{n−1}, c_{n−2}, …, c_0 ].

Remark: As one can see, the first row/column completely describes the whole matrix, so it can be put more succinctly as C_{j,k} = c_{(j−k) mod n}. Note that the indices range from 0 to n − 1 instead of the usual convention of 1 to n.

Definition 3.2 (Generating circulant matrix) We define the n-by-n generating circulant matrix G as the circulant matrix whose first column is (0, 1, 0, …, 0)ᵀ, i.e., the permutation matrix with ones on the sub-diagonal and a single one in the top-right corner.
From the point of view of row and column operations, this can be seen as each row of the identity matrix being shifted one element forward, with the bumped element placed at the end of the row (it can also be thought of as the whole row being shifted down and the bumped row placed back on top, but this is irrelevant to our interest). Arbitrary powers can be deduced accordingly; this operation has a cycle of n iterations.
The generating circulant matrix serves as our building block. Looking back at our original circulant matrix, we have the corresponding polynomial

p(x) = c_0 + c_1 x + ⋯ + c_{n−1} x^{n−1}.

Then, the original circulant matrix C can be expressed as C = p(G). This can be verified by carrying out the row operation of arbitrary powers of G as shown above; each power is one element sub-diagonal to the previous power.

Definition 3.3 (Fourier matrix)
The Fourier matrix is introduced as

F = (ξ^{jk})_{j,k=0,…,n−1},

where we define the n-th root of unity ω = e^{2πi/n} and ξ = ω^{−1} = e^{−2πi/n}. The Fourier matrix can also be defined using the positive argument ω instead of ξ, and some definitions include the normalizing scalar 1/√n.

Proposition 3.3 If we multiply the Fourier matrix with the generating circulant matrix, the product FG is the same as shifting (rotating) the first column of F to the back of the matrix, which is also equivalent to multiplying the first row of F by ξ^0, the 2nd row by ξ^1, etc. In matrix operations, this can be seen as

FG = ΛF,

where Λ is the diagonal matrix with k-th diagonal entry Λ_k = ξ^k, for 0 ≤ k ≤ n − 1. It follows that

FGF^{−1} = Λ.

That is, the Fourier matrix diagonalizes the generating circulant matrix, with eigenvalues {ξ^k}_{0≤k≤n−1}.
Theorem 3.2 The circulant matrix is decomposable by the Fourier matrix, i.e., C = F^{−1}ΛF with eigenvalue matrix Λ = diag{p(ξ^k)}_{k=0,…,n−1}. Indeed, with equation (23), the diagonalization of C follows by applying FGF^{−1} = Λ to each power of G in C = p(G); the other powers can be deduced iteratively.
This theorem gives us the fundamental theoretical framework for FFT-based exact simulation of fBms. The basic idea of the simulation is to embed the covariance matrix into a bigger circulant matrix and carry out the discrete Fourier transform as outlined above (with the FFT technique). This technique is called the Circulant Embedding Method (CEM), and is outlined in Dietrich and Newsam [29] and Perrin et al. [30].
Suppose that we need a sample of size N (N should be a power of 2, i.e., N = 2^g for some g ∈ ℕ, for convenience when applying the FFT). Generate the N-by-N covariance matrix Γ with entries Γ_{j,k} = γ(|j − k|), where γ is the covariance function given in the definition of fractional Gaussian noise (fGn). The CEM consists of embedding this covariance matrix into a bigger M-by-M (with M = 2N) circulant covariance matrix C with first row

(γ(0), γ(1), …, γ(N − 1), γ(N), γ(N − 1), …, γ(1)),

where the covariance matrix Γ occupies the top left-hand corner.

Remark: As Perrin et al. [30] have pointed out, the size M can be any M ≥ 2(N − 1), and the case M = 2(N − 1) is the minimal embedding. For any other choice of M, the entries C_{0,N}, …, C_{0,M−N+1} are arbitrary and can be conveniently chosen as long as the symmetry of the matrix is upheld; more zeros can be padded if M is bigger, to make C circulant. For the rest of the chapter, we will concern ourselves with the case M = 2N.
From Theorem 3.2, we know that any circulant matrix can be decomposed as C = QΛQ*, where Λ is the diagonal matrix of eigenvalues

λ_k = Σ_{j=0}^{M−1} C_{0,j} e^{2πi jk/M},   k = 0, …, M − 1,

and Q_{j,k} = (1/√M) e^{−2πi jk/M}. This differs slightly from the previous definition, but it is similar to the continuous counterpart; the sign of the exponential power in the Fourier transform is just a difference of convention. The approach is identical as long as duality is maintained: written in the form C = QΛQ*, the signs of the exponential powers of the components in Q and Λ should be opposite. In the case of the previous theorem, where C = F^{−1}ΛF, it is easy to check that F^{−1} and Λ(ξ) indeed have opposite signs in the exponential power.
It should be noted that C is not guaranteed to be positive-definite in general. Davies and Harte [28] suggest setting to zero every negative value that may appear in Λ. Perrin et al. [30] prove that the circulant covariance matrix for fGn is always non-negative definite, so the approach is feasible without any modification. The reader is referred to Dietrich and Newsam [29] and Wood and Chan [31] for more detail on dealing with this issue.
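A sketch of the circulant embedding scheme with M = 2N (function name ours; the complex-Gaussian trick used here produces two independent exact samples at once, of which we return the real part):

```python
import numpy as np

def davies_harte_fgn(N, H, seed=0):
    """Exact fGn of length N by circulant embedding (M = 2N) and the FFT."""
    k = np.arange(N + 1)
    g = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
    row = np.concatenate([g, g[-2:0:-1]])        # (gamma(0..N), gamma(N-1..1))
    M = 2 * N
    eig = np.fft.fft(row).real                   # eigenvalues of the circulant
    eig = np.maximum(eig, 0.0)                   # >= 0 for fGn (Perrin et al. [30])
    rng = np.random.default_rng(seed)
    W = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    Z = np.fft.fft(np.sqrt(eig / M) * W)
    return Z[:N].real                            # Im(Z[:N]) is an independent sample
```

With the scaling sqrt(λ_k / M), Cov(Re Z_n, Re Z_m) = (1/M) Σ_k λ_k cos(2π(n − m)k/M) = γ(n − m) for |n − m| < N, so the first N entries carry exactly the fGn covariance; the cost is two FFTs of length 2N, i.e., O(N log N).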

Approximate methods
As we have seen in the previous section, the exact methods all take the covariance matrix as their starting point and reproduce the covariance structure through different decompositions. That can be time- and resource-consuming, so it is often preferable to have an approximation of fractional Brownian motion that permits robust simulation.
In this section, we start with the Mandelbrot representation, for historical reasons, and move on to several other methods that provide better understanding of the process and increasing robustness.

Mandelbrot representation
Recalling from Section 2.3, fractional Brownian motion permits a stochastic integral representation. To approximate equation (5), it is natural to truncate the lower limit from negative infinity to some point, say −b:

B̂_H(t) = C_H ( ∫_{−b}^0 [(t − s)^{H−1/2} − (−s)^{H−1/2}] dB(s) + ∫_0^t (t − s)^{H−1/2} dB(s) ).

Note that C_H is not the same constant as in equation (5), because one has to recalculate the normalizing factor due to the truncation. As pointed out in [25], the recommended choice of b is N^{3/2}. Even though this is a straightforward way to generate fractional Brownian motion, it is rather inefficient; it is included in this section for the sake of completeness.

Euler hypergeometric integral
fBm permits a stochastic integral form involving the Euler hypergeometric integral:

B_H(t) = ∫_0^t K_H(t, s) dB(s),

where B(s) is the standard Brownian motion and the kernel involves the Gauss hypergeometric function ₂F₁, which can be readily computed by most mathematical packages. By discretizing (28) at each time index t_j, we obtain a direct approximation scheme.

Construction by correlated random walks

Theorem 4.2 For 1/2 < H < 1, denote by μ_H the probability on (1/2, 1) with density (1 − H) 2^{3−2H} (1 − p)^{1−2H}. Let (X^{μ_H,i})_{i≥1} be a sequence of independent processes with probability law P^{μ_H}. Then

B_H(t) = lim_{N→∞} lim_{M→∞} c_H N^{−H} (X^{μ_H,1}_{⌊Nt⌋} + ⋯ + X^{μ_H,M}_{⌊Nt⌋}) / √M,   (33)

where c_H is a constant depending only on H, L stands for convergence in law, and L_D means weak convergence in the Skorohod topology on D[0, 1], the space of càdlàg functions on [0, 1]. Here, ⌊⋅⌋ is the floor function, which rounds the argument down to the nearest integer, M is the number of trajectories of correlated random walks, and N is the number of time steps.
Remark: For H = 1/2, there is a similar expression (see [32]). The order of limits in equation (33) is important: if reversed, the sum would result in 0. Theorem 4.2 is mostly a theoretical construction.
In [32], the above theorem is further simplified from double summations into a single summation by applying the Berry-Esseen bound. In practice, any probability measure with moments equivalent to those of μ_H, up to a slowly varying function L, can be used; this follows from Karamata's theorem, for which further elaboration is found in [33]. In [32], three families of equivalent measures are provided; the 2nd family of measures (μ'_{H,k})_{k>0}, built from a standard uniform random variable U, is the most appropriate for simulation purposes, with the approximation error for H > 1/2 controlled by the Berry-Esseen bound.

Conditional random midpoint displacement

The on-the-fly conditional RMD scheme fills out the sample points level by level; the interval to be split next is chosen by the following rules:

1. It must have n − 1 (or all) right-neighboring intervals, which have the same length as the mother interval.
2. It should also have m (or all) left-neighboring intervals of the same length as the interval being considered.
3. If several intervals satisfy both conditions above, choose the leftmost one.
4. When all intervals on a level are filled out, expand to the next level by 'enlargement'.
Here we use a different notation from X_{i,j} to avoid confusion with the original RMD scheme notation. Similar to the ordinary RMD, we obtain the corresponding equations from the conditional distribution of multivariate Gaussian processes. The enlargement stage generates the last increment on the next level, and inner interval points are constructed similarly to the ordinary RMD scheme (the right-neighboring intervals are of level i + 1 instead of i − 1), where N_{i+1} is the last generated increment on the (i + 1)-th level. The order of splitting for the on-the-fly scheme is given in Figure 1, where the splitting order of its ordinary RMD counterpart is also given.
Norros et al. [34] have made an extensive comparison of the on-the-fly RMD scheme with the FFT and aggregate methods in terms of accuracy and robustness. On-the-fly RMD and FFT are significantly faster than the aggregate method, and on-the-fly RMD can generate samples with no fixed time horizon, while for FFT the whole trace has to be generated before it can be used. So RMD seems superior in terms of flexibility.

Spectral method
In this subsection, we look into the spectral method of approximating fractional Gaussian noise, which has its origin in spectral analysis in physics: a time-domain signal can be transformed into a frequency-domain representation without loss of information through the Fourier transform. In a typical Fourier time series, the original input is deterministic and is transformed into the spectral density that represents the magnitude of the different frequencies in the frequency domain. It is possible to extend this approach to the analysis of stochastic processes.
Though it is impossible to study every realization, it is possible to analyze the process in a probabilistic/distributional sense by observing the expected frequency information contained in the autocovariance function.
The spectral density is computed for frequencies −π ≤ λ ≤ π as

f(λ) = Σ_{n=−∞}^{∞} γ(n) e^{−inλ},

where γ(⋅) is the autocovariance function, which can be recovered by the inversion formula

γ(n) = (1/2π) ∫_{−π}^{π} f(λ) e^{inλ} dλ.

The spectral density of the fGn can be written, according to [25] and [35], as

f(λ) = 2 sin(πH) Γ(2H + 1) (1 − cos λ) ( |λ|^{−2H−1} + B(λ, H) ),

where

B(λ, H) = Σ_{j=1}^{∞} ( (2πj + λ)^{−2H−1} + (2πj − λ)^{−2H−1} ).

Note that the domain is only −π ≤ λ ≤ π, since any higher frequency would only correspond to amplitudes between our desired sample points.
The problem with the above expression is that there is no closed form for B(λ, H). Paxson [35] proposes an approximation scheme B̃₃(λ, H) that keeps the first three terms of the series and corrects for the tail. Moreover, with the help of the Whittle estimator, Paxson [35] shows that

B̃₃''(λ, H) = (1.0002 − 0.000134 λ) ( B̃₃(λ, H) − 2^{−7.65H−7.4} )

gives a very robust and unbiased approximation of B(λ, H). See Appendix A of [35] for a detailed comparison and justification of this approximation.
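The quality of this correction can be checked against a brute-force truncation of the series B(λ, H). The sketch below (function names ours) implements B̃₃ with a tail term (a₃^{−2H} + b₃^{−2H} + a₄^{−2H} + b₄^{−2H})/(8Hπ), where a_j = 2πj + λ and b_j = 2πj − λ, following the tail correction used in [35]:

```python
import numpy as np

def B_exact(lam, H, terms=100_000):
    """Brute-force truncation of the series B(lambda, H)."""
    j = np.arange(1, terms + 1)
    d = -2.0 * H - 1.0
    return float(np.sum((2*np.pi*j + lam)**d + (2*np.pi*j - lam)**d))

def B3_tilde(lam, H):
    """Three-term approximation of B(lambda, H) plus a tail correction."""
    d, dp = -2.0*H - 1.0, -2.0*H
    a = lambda j: 2*np.pi*j + lam
    b = lambda j: 2*np.pi*j - lam
    head = sum(a(j)**d + b(j)**d for j in (1, 2, 3))
    tail = (a(3)**dp + b(3)**dp + a(4)**dp + b(4)**dp) / (8*np.pi*H)
    return head + tail

def B3_corrected(lam, H):
    """Whittle-estimator-corrected version B3''(lambda, H)."""
    return (1.0002 - 0.000134*lam) * (B3_tilde(lam, H) - 2**(-7.65*H - 7.4))
```

For H in (1/2, 1) and λ in (0, π], the relative error of `B3_corrected` against the truncated series is small, which is what makes the three-term scheme practical inside a sampling loop.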
With the approximation scheme for the spectral density at hand, we can now look at the spectral analysis of a stationary discrete-time Gaussian process (fractional Gaussian noise) X = {X_n : n = 0, …, N − 1}, which can be represented in terms of the spectral density f(λ) as a stochastic integral against two independent standard Brownian motions B_1 and B_2, where the equality is understood in distribution. Define ξ_n(λ) = √(f(λ)/π) cos(nλ) and fix some integer l. After setting t_k = πk/l for k = 0, …, l − 1, we can approximate ξ_n by a simple function ξ_n^{(l)} defined on [0, π] for 0 ≤ n ≤ N − 1, which is similar to the typical construction of the stochastic integral.
Define the sine counterpart θ_n^{(l)}(λ), and then integrate both ξ_n^{(l)}(λ) and θ_n^{(l)}(λ) with respect to dB_1(λ) and dB_2(λ) on [0, π] to approximate X_n. We then obtain

X̂_n^{(l)} = Σ_{k=0}^{l−1} √( f(t_{k+1}) / l ) ( cos(n t_{k+1}) U_k^{(0)} + sin(n t_{k+1}) U_k^{(1)} ),

where the U_k^{(⋅)} are i.i.d. standard normal random variables; U_k^{(0)} and U_k^{(1)} are independent, as they result from integration against the two aforementioned independent Brownian motions.
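A sketch of this discretized spectral representation (function names ours; the series B(λ, H) is truncated at a finite number of terms, and the sign convention on the sine term is immaterial in distribution):

```python
import numpy as np
from math import gamma, sin, pi

def f_fgn(lam, H, terms=200):
    """fGn spectral density f(lambda), with the series B(lambda, H) truncated."""
    lam = np.asarray(lam, dtype=float)
    j = np.arange(1, terms + 1)[:, None]
    B = np.sum((2*pi*j + lam)**(-2*H - 1) + (2*pi*j - lam)**(-2*H - 1), axis=0)
    return 2*sin(pi*H)*gamma(2*H + 1)*(1 - np.cos(lam)) * (np.abs(lam)**(-2*H - 1) + B)

def spectral_fgn(N, H, l=256, seed=0):
    """Approximate fGn of length N via the discretized spectral representation."""
    rng = np.random.default_rng(seed)
    t = pi * np.arange(1, l + 1) / l                 # grid t_k in (0, pi]
    w = np.sqrt(f_fgn(t, H) / l)                     # weights sqrt(f(t_k)/l)
    U0, U1 = rng.standard_normal(l), rng.standard_normal(l)
    n = np.arange(N)[:, None]
    return (np.cos(n * t) * (w * U0) + np.sin(n * t) * (w * U1)).sum(axis=1)
```

Since Var(X̂_n) = Σ_k f(t_k)/l ≈ (1/π)∫₀^π f(λ) dλ = γ(0) = 1, the marginal variance is recovered up to the Riemann-sum discretization error, which shrinks as l grows.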
Similar to the Fourier transform approach, the fGn can be recovered efficiently by applying the FFT to the sequence of coefficients defining X̂_n^{(l)}. It is easy to check, with the help of the product-to-sum trigonometric identity, that the covariance structure of the fGn is recovered. Paxson [35] has also proposed another method for simulating fGn, which in [25] was proven to be related to (50) with l = N/2. Here, R_k is a vector of exponentially distributed random variables with mean 1, and the Φ_k are uniformly distributed random variables on [0, 2π], independent of R_k. This method is of order N log(N), and only one FFT is required instead of two for the Davies-Harte FFT method; hence, it is about 4 times faster.

The sample path of the fractional-OU driven volatility process shows a more persistent trend, i.e., a more prominent trend (smoother and with fewer reversals) than the ordinary-OU driven volatility process, which is what is to be expected according to [5]. This approach only imitates fractional Brownian motion if the time step in the discretization scheme is very small, which renders it not robust enough for practical purposes; but for analytical purposes, it can be shown to be equivalent to the S-transform approach outlined in [36]. For more discussion of its statistical properties and justification of its stability compared with the original stationary version, we direct the reader to [5] and [6]. We provide this approach for the sake of completeness and the reader's interest.
We have also included a fractional Brownian motion simulated by the circulant-embedding method, with the same parameters as above, in Figure 3. This approach is more robust, since the covariance structure does not depend on the step size. Figure 3 shows a sample path of σ(t) = exp(x(t)), where x(t) is the fBm generated by the circulant-embedding FFT with the same parameters as Figure 2: k = 1, σ_0 = 0.1, ν = 0.3, H = 0.75, T = 2. In these two examples, the fractional Brownian motions are scaled so that the variance of B_H(T) equals that of the ordinary Brownian motion B_{1/2}(T).

Simulation schemes are divided into exact schemes and approximate schemes. While the former capture the complete covariance structure for the whole length of the sample, the latter either approximate the value of the true realization or truncate the covariance structure for robustness.
We start with the exact scheme of the Hosking algorithm, which utilizes the multivariate Gaussian distribution of fractional Gaussian noises and simulates each sample point conditioned on the previous samples. Alternatively, instead of simulating each point conditioned on the past sample points, we can first construct the covariance matrix of the desired sample size, find the 'square root' of the covariance matrix, and multiply it with a standard normal vector; the product vector is then fractional Gaussian noise (fGn) with exactly the covariance structure of that matrix. To find the 'square root', we first investigate the Cholesky decomposition, but its computational and memory expense is too large to be feasible in practice. In contrast, the fast Fourier transform (FFT) simulation embeds the original covariance matrix in a larger circulant matrix and simulates by diagonalizing the circulant matrix into the product of an eigenvalue matrix and a unitary matrix. The FFT method is significantly more robust than the previous schemes.
We then look into the approximate schemes, namely the construction of fBm with correlated random walks, which can be viewed as an extension of the construction of Brownian motion from an ordinary random walk. This method gives us interesting insight into the true workings of fBm, especially the idea of long-range dependence. The approach is not only interesting and easy to implement, but its error can also be calculated explicitly. The drawback of this approach is that the speed slows down significantly for large numbers of sample points, and the tradeoff is made based on the error function. The last simulation approach we look at is the conditional random midpoint displacement (RMD) scheme, which is mathematically similar to the Hosking scheme but conditions on only a fixed number of past sample points. The on-the-fly version of the RMD scheme can generate sample points indefinitely at a given resolution. Finally, we include the spectral method for approximating fBm.
Comparing all the schemes, and also referring to the studies done in [34], we conclude that if the time-horizon is known beforehand, the FFT/spectral schemes are the best choice due to their high speed and accuracy. Alternatively, if samples must be generated indefinitely, the on-the-fly conditioned RMD scheme seems to offer a similar level of accuracy and speed to the FFT scheme.
Finally, we provide numerical examples for the truncated version of fBm proposed in [5,6], as well as fBm generated by FFT, for a comparison of robustness.

We distinguish between the cases H < 1/2 and H > 1/2 as we introduce the mathematical notion of long-range dependence.
Definition 1.2 (Long-range dependence). A stationary sequence {X_n}_{n∈ℕ} exhibits long-range dependence if the autocovariance function γ(n) := Cov(X_k, X_{k+n}) satisfies

∑_{n=1}^{∞} |γ(n)| = ∞.
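For fractional Gaussian noise this definition bites exactly when H > 1/2: the autocovariance γ(n) = ½(|n+1|^{2H} − 2|n|^{2H} + |n−1|^{2H}) decays like the power law H(2H−1)n^{2H−2}, which is not summable for H > 1/2. A quick numerical check of this standard fact (the helper name is ours):

```python
import numpy as np

H = 0.75

def gamma(n, H=H):
    # autocovariance of fractional Gaussian noise at lag n
    return 0.5 * ((n + 1) ** (2 * H) - 2 * n ** (2 * H) + abs(n - 1) ** (2 * H))

# gamma(n) behaves like H*(2H-1)*n**(2H-2): hyperbolic, not geometric, decay
n = 1000.0
asymptotic = H * (2 * H - 1) * n ** (2 * H - 2)
print(gamma(n) / asymptotic)   # close to 1

# and the partial sums of gamma keep growing, i.e. they diverge for H > 1/2
partial = [sum(gamma(float(k)) for k in range(1, m)) for m in (10, 100, 1000)]
```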

where ω = e^{2πi/n} is the n-th root of unity, and ξ = ω̄ = e^{-2πi/n} is the conjugate of the root of unity.

The Fourier matrix F is, up to the normalization factor 1/√n, a unitary matrix. It is symmetric (i.e., F^T = F), and the inverse of the Fourier matrix is given by F^{-1} = (1/n) F̄. This is analogous to the continuous counterpart of the Fourier integral definition F(f) = ∫_{-∞}^{∞} x(t) e^{-2πift} dt or ∫_{-∞}^{∞} x(t) e^{2πift} dt, as long as the duality is upheld by the opposite sign in the exponential of the inverse Fourier transform. This duality will be restated in the diagonalization representation of the circulant matrix later.
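These two properties are easy to verify numerically. The sketch below builds the unnormalized Fourier matrix with entries ω^{jk}; the sign convention e^{-2πi jk/n} is an assumption on our part (either sign works, as the duality above states), and the checks confirm symmetry and F^{-1} = (1/n) F̄.

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n)              # unnormalized Fourier matrix

assert np.allclose(F, F.T)                       # symmetry: F^T = F
assert np.allclose(F @ F.conj() / n, np.eye(n))  # inverse: F^{-1} = conj(F)/n
```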

Here, k serves as a control variable of order k(N) = o(N), and the error term contains a factor of order 1/k, which can be seen as a way to restrict the error at the price of distortion.
Fractional Brownian Motions in Financial Models and Their Monte Carlo Simulation http://dx.doi.org/10.5772/53568

Figure 1. The order of splits by the ordinary RMD(m,n) scheme (top) and the on-the-fly RMD(m,n) scheme (bottom). Note that, for the on-the-fly scheme, the order changes according to the choice of n.
Theory and Applications of Monte Carlo Simulations

Motivated by the inadequacy of the traditional stochastic volatility framework in capturing the long-range dependence of the volatility process, we explore the possibility of fractional Brownian motion (fBm) in financial modeling and various schemes for its Monte Carlo simulation. Starting from the general definition, fBm can be considered an extension of ordinary Brownian motion with an autocovariance function that depends on both time indices instead of just the minimum of the two. With different values of the Hurst index, we can distinguish fractional Brownian motion into three different cases: H < 1/2, H = 1/2 and H > 1/2. Since only the case H > 1/2 displays long-range dependence behavior, that is the case we are interested in. Several prominent examples of fBm in financial modeling are given.
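The dependence on both time indices can be seen directly from the standard fBm autocovariance Cov(B_H(s), B_H(t)) = ½(s^{2H} + t^{2H} − |t − s|^{2H}). A short check (helper name ours) confirms that it collapses to min(s, t) for H = 1/2, recovering ordinary Brownian motion, while for H > 1/2 increments over disjoint intervals are positively correlated:

```python
import numpy as np

def fbm_cov(s, t, H):
    # autocovariance of fractional Brownian motion at times s and t
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

s, t = 1.3, 2.7
assert np.isclose(fbm_cov(s, t, 0.5), min(s, t))   # H = 1/2: ordinary Bm

# H > 1/2: Cov(B(2) - B(1), B(3) - B(2)) > 0, i.e. persistent increments
inc_cov = (fbm_cov(2.0, 3.0, 0.75) - fbm_cov(2.0, 2.0, 0.75)
           - fbm_cov(1.0, 3.0, 0.75) + fbm_cov(1.0, 2.0, 0.75))
assert inc_cov > 0
```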

By a simple calculation, we can see that the 'square' of the generating circulant matrix is given by