Next Generation Geological Modeling for Hydrocarbon Reservoir Characterization



Introduction
Hydrocarbon reservoir characterization is a process for quantitatively assigning reservoir properties, recognizing geological and geophysical information, and quantifying uncertainties in spatial variability (Fowler et al., 1999). It represents an indispensable tool for optimizing costly reservoir management decisions in hydrocarbon field development. In fact, reservoir characterization is the first step in the reservoir development program, taking into account structural and depositional architecture, pore systems, reservoir mineralogy, post-depositional diagenesis, and the distribution and nature of reservoir fluids. The technologies and tools for reservoir characterization continue to develop and expand, particularly with the aggressive proliferation of three-dimensional (3D) and, lately, time-lapse 3D (4D) data. However, one of the most critical areas of inquiry remains the development of workflows that best capture and represent geological uncertainty and apply it in an integrated environment to optimize reservoir development and production planning (Goins, 2000; Kramers, 1994). Although the challenges to find, develop and produce new hydrocarbon resources are numerous, the ability of the petroleum industry to increase recovery from existing resources has become a global endeavor. It is generally accepted that conventional oil production practices recover, on average, approximately one third of the original oil in place (Kramers, 1994), where the estimated remaining unrecovered mobile oil varies with depositional environment. For example, depositional systems with more complicated stratigraphy and facies architecture, such as fluvial systems or deep-sea fans, may leave even larger amounts of unrecovered mobile oil, ranging from 40-80% (Tyler and Finley, 1991; Larue and Yue, 2003). 
This common industry knowledge represents a great incentive to increase overall production, geared by large capital investments in Smart Reservoir Management workflows and Enhanced Oil Recovery (EOR) operations (Alvarado and Manrique, 2010). Still, the use of oversimplified and uncertain geological models, based on sparse data from a limited number of widely-spaced wells, can render hydrocarbon recovery forecasting a daunting task. Moreover, inaccurate description of reservoir heterogeneities is probably one of the outstanding reasons for erroneous description of reservoir connectivity, leading to failure in predicting field performance (Damsleth et al., 1992). Overestimating performance could lead to investment disasters, whereas underestimating it could lead to under-designed production facilities that restrict hydrocarbon recovery.
To mitigate and manage such scenarios, modern workflows optimize field recovery performance by combining the major disciplines of reservoir geosciences, e.g. geology, geophysics and petrophysics, with numerical simulations into integrated flow models for a variety of field development operations. To maximize reservoir profitability, quantification of the effect of stratigraphic and structural uncertainties on its dynamic performance becomes of principal importance (Charles et al., 2001; Seiler et al., 2009). However, it continues to be an issue in geological modeling that the underlying structural frameworks do not correctly portray the true reservoir structure configuration or size, and this uncertainty impacts an accurate evaluation of hydrocarbon gross volumes. It is therefore essential to validate the role of individual components of a high-resolution geological model (HRGM) and rank their impact on the uncertainty of the HRGM at various levels (Fig. 1). This paper focuses on recent advances in the technology for building high-resolution, geocellular models and its role in state-of-the-art and future EOR workflows. We first introduce some very basic tools and concepts of geostatistical spatial analysis and modeling, relevant for further understanding of the subject. Furthermore, we highlight what are perceived as some of the outstanding capabilities that differentiate the DecisionSpace® Desktop Earth Modeling, as a next-generation geological modeling tool, from standard industry approaches and workflows. Current geomodeling practice uses grids to represent 3D reservoir volumes. Estimating gridding parameters is a difficult task and commonly results in artifacts, due to topological constraints and misrepresentation of important aspects of the structural framework, which may introduce substantial difficulties for the dynamic reservoir simulator later in the workflow. 
We describe a fundamentally novel method that has the potential to resolve most of the common geocellular modeling issues by implementing the concept of interpolation or simulation of reservoir properties using Local Continuity Directions (Yarus et al., 2009). Finally, we address some of the latest developments in the integration of next-generation geological modeling into advanced workflows for quantitative uncertainty assessment and risk management, combining two critical steps of reservoir characterization: a) the reconciliation of geomodels with well-production and seismic data, referred to as history-matching (HM) (Oliver and Chen, 2011), and b) dynamic ranking and selection of representative model realizations for reservoir production forecasting.

Highlights of geostatistical analysis and modeling
To capture the diversity of geologically-complex reservoirs, the next generation of geological modeling tools relies increasingly on geostatistical methodologies (Isaaks and Srivastava, 1989; Yarus and Chambers, 1994; Chambers et al., 2000a; Chambers et al., 2000b; Deutsch, 2002; Yarus and Chambers, 2010). An in-depth explanation of geostatistical terminology is available in Olea (1991). Tailored to identify data limitations and provide a better representation of reservoir heterogeneity, geostatistical tools are particularly effective when dealing with data sets with vastly different degrees of spatial density and diverse vertical and horizontal resolution. Classical statistical methods are based on an underlying assumption of data independence in randomly sampled measurements. These assumptions do not hold for geoscience data sets, where the gathered data are regionalized (map-able) and demonstrate strong dependence on distance and orientation. Geostatistical tools provide the unique ability to integrate different types of data with pronounced variation in scales and direction of continuity. One of the fundamental tools in geostatistics is the variogram, a measure of statistical dissimilarity between pairs of data measurements. It represents a model of spatial correlation and continuity that quantifies the directions and scales of continuity. Variogram analysis can be applied to any regionalized variable X and computes the average squared differences between data measurements for different separation intervals h, known as the lag interval:

γ(h) = (1/2n) Σᵢ (Xᵢ − Xᵢ₊ₕ)²   (Eq. 1)

where γ(h) corresponds to the semivariogram (note that the denominator 2n reflects the symmetry relation between data points Xᵢ and Xᵢ₊ₕ) and the index i runs over the number of data pairs, n. In this document we refer to the semivariogram simply as a variogram. A visualization of a generic variogram γ(h) is given in Fig. 2. 
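As an illustrative sketch, the experimental semivariogram computation described above can be coded for a regularly sampled 1D transect. The synthetic "porosity" signal, the function name and the lag handling are assumptions made for illustration only, not part of any cited implementation:

```python
import numpy as np

def experimental_variogram(x, max_lag):
    """Half the average squared difference of all pairs separated by lag h."""
    gamma = []
    for h in range(1, max_lag + 1):
        d = x[h:] - x[:-h]              # all pairs (X_i, X_{i+h})
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(gamma)

# Synthetic, spatially correlated property along a transect (illustrative)
rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=200)) * 0.01 + 0.2
gamma = experimental_variogram(signal, max_lag=20)
```

For correlated data like this, the resulting curve rises with lag and levels off near the sill, as in the generic variogram of Fig. 2.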
The left-hand panel corresponds to three distinctive cases of geological continuity: the red curve describes the omnidirectional or isotropic variogram, which assumes a single, average characteristic direction of continuity, while the blue and green curves correspond to the minimum and maximum directions of continuity, respectively; an anisotropic variogram. The omnidirectional variogram curve converges to the black line, which represents the true variance of the data and is usually referred to as the sill. One additional variogram parameter, not depicted in Fig. 2, is the nugget effect, which corresponds to a discontinuity along the Y-axis resulting in a vertical shift of the variogram curve at the origin. The distance at which the variogram curve levels out at the sill corresponds to the data correlation range. The right-hand panel of Fig. 2 is an example of the variogram polar plot, with the major ellipse axis corresponding to the maximum, and the (perpendicular) minor ellipse axis to the minimum, direction of continuity.
Fig. 2. Generic variogram γ(h): three distinctive cases of geological continuity (left) and the variogram polar plot (right); see text.
Fig. 3. Principles of kriging, the geostatistical interpolation method. The value at the unsampled location Z₀ is estimated from a linear combination of measurements at locations Z₁ to Z₃, where the weights λᵢ at locations Zᵢ are calculated from the variogram model.
Among the numerous interpolation methods, the geostatistical kriging algorithm is commonly used in the geosciences. Kriging is an unbiased, linear, least-squares regression technique that automatically "de-clusters" data to produce the best local or block estimates with minimized error variance. Figure 3 depicts the principle of linear, weighted estimation of the value at location Z₀, based on measured values at locations Z₁ to Z₃:

Z₀* = Σᵢ λᵢ Zᵢ   (Eq. 2)

where the weights λᵢ at locations Zᵢ are calculated from the variogram model. Unlike more conventional linear weighting estimators, the kriging weights λᵢ account for both distance and orientation. The constraint for an unbiased estimator is satisfied by maintaining Σᵢ λᵢ = 1.
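A minimal sketch of how the ordinary kriging weights λᵢ can be solved for three sample locations, as in Fig. 3. The isotropic spherical covariance model and all names are illustrative assumptions, not the software's actual implementation; the unbiasedness constraint Σλᵢ = 1 is enforced with a Lagrange multiplier:

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=10.0):
    """Covariance = sill - spherical variogram with range a."""
    h = np.asarray(h, dtype=float)
    gam = np.where(h < a, sill * (1.5 * h / a - 0.5 * (h / a) ** 3), sill)
    return sill - gam

def ordinary_kriging_weights(sample_xy, target_xy):
    n = len(sample_xy)
    # Sample-to-sample covariance matrix
    d = np.linalg.norm(sample_xy[:, None, :] - sample_xy[None, :, :], axis=-1)
    C = spherical_cov(d)
    # Augment the system with the unbiasedness constraint sum(lambda) = 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_cov(np.linalg.norm(sample_xy - target_xy, axis=-1))
    w = np.linalg.solve(A, b)
    return w[:n]        # last entry of w is the Lagrange multiplier

samples = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 4.0]])   # Z1..Z3 locations
w = ordinary_kriging_weights(samples, np.array([2.0, 1.0]))  # target Z0
```

By construction the weights sum to one, so the estimator Z₀* = Σ λᵢ Zᵢ is unbiased; closer samples receive larger weights through the covariance model.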

Definition of lithofacies and mapping of lithotype proportions
High-resolution geomodeling of lithofacies and reservoir properties begins by creating a properly sealed structural framework and then integrating available data from cores, well logs and seismic surveys. Ideally, the lithology model is based on the interpretation of depositional facies from core descriptions. However, for siliciclastic environments, the lithologies are typically electro-facies derived from wireline logs calibrated to a few core descriptions. Finally, petro-facies are often used for carbonate reservoirs, as the primary depositional facies are destroyed by post-depositional diagenetic processes. Once the lithology model is created, facies are populated with characteristic petrophysical properties. Micro-logs can identify very thin impermeable shale layers, as thin as 1-2 ft. In order to maintain vertical communication and small-scale heterogeneity in the model, particularly when assessing the sweep efficiency of EOR applications (Maučec and Chambers, 2009), e.g. CO₂ flooding (Culham et al., 2010), it is critical for well blocking to preserve small-scale facies heterogeneities in each interval, assign them correctly to common lithotypes, and avoid inadvertently eliminating essential geological information. The integrity of the blocking and the preservation of small-scale features depend directly on the vertical cell size. If small-scale, thin bedding is critical to sweep efficiency, then a small vertical cell is required in the model, often resulting in a large, multi-million-cell geological model, which may be used without upscaling in the dynamic simulator.
Modeling complex geologic environments (e.g. fluvial, deltaic) requires the ability to control vertical and lateral relationships between the facies. In stratigraphic modeling, which is done on an interval-by-interval basis, the task is to identify the depositional environment and primary depositional facies. Each depositional environment is controlled by physical processes of sedimentation and erosion, which requires the creation of internal bedding geometry, e.g. layers representative of the depositional system. These layers act as lines of internal correlation that affect the gathering of statistical information in variogram computation and the distribution of properties in subsequent modeling steps. Once the layering styles are specified for each interval, the well data are re-sampled (coarsened or blocked) at the scale of the layers, and a single property value is assigned to each layer along the wellbore. For continuous properties, DecisionSpace® Desktop Earth Modeling uses standard averaging methods to assign a value to the gravity center of each grid cell (layer) along the wellbore, biased to the lithology code. For discrete properties, coded by integers (e.g. facies), the most commonly occurring facies code is chosen. Conventional modeling approaches use the average or global facies proportions per interval of interest, which implicitly assumes that the facies proportions are, unrealistically, the same everywhere throughout the interval; applying a constant lithotype proportion curve (LPC) to the entire interval is therefore inaccurate. Traditional techniques to introduce a geological trend in the data usually require laborious creation of pseudo-wells or application of a generic trend map. The use of a generalized trend map implies that the geological continuity throughout the interval would be fairly similar. In other words, the interpretation of the underlying statistics (e.g. histograms, variograms) and of characteristics such as anisotropy and correlation length would, in mathematical terms, assume the condition of stationarity (Caers, 2005). However, most reservoirs are non-stationary, and the introduction of a simple trend (vertical or horizontal) could "de-trend" the data, resulting in inadequate solutions and further complicating the computation and interpretation of variograms. The next-generation earth modeling reintroduces a graphical method that generates individual LPCs for each well in the field, with the ability to create pseudo-wells to invoke a geological trend that honors the conceptual depositional model. When displayed in map view (Fig. 4a), the LPCs allow for a quick QC of the vertical variation of the facies distribution and proportions, even before creating the facies model. Editing and copy/move functionality allows the modeler to impose an interpretation to better control trends. The lithotype proportion map (LPM), created from these per-well curves, literally consists of hundreds of high-resolution trend maps accounting for vertical and lateral non-stationarity (Fig. 4b).
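To make the per-layer proportion idea concrete, the following sketch computes a vertical proportion curve from blocked well facies logs. The data, the function name and the assumption that all wells are blocked to the same number of layers are illustrative only:

```python
import numpy as np

def vertical_proportion_curve(wells, n_facies):
    """wells: list of 1-D integer facies logs, all blocked to the same
    number of layers. Returns a (layers x facies) proportion matrix."""
    logs = np.stack(wells)                      # shape (n_wells, n_layers)
    n_layers = logs.shape[1]
    vpc = np.zeros((n_layers, n_facies))
    for k in range(n_layers):
        counts = np.bincount(logs[:, k], minlength=n_facies)
        vpc[k] = counts / counts.sum()          # facies proportions in layer k
    return vpc

# Three blocked wells, 4 layers, facies codes 0..2 (illustrative)
wells = [np.array([0, 0, 1, 2]), np.array([0, 1, 1, 2]), np.array([0, 1, 2, 2])]
vpc = vertical_proportion_curve(wells, n_facies=3)
```

Each row of `vpc` sums to one; interpolating such curves laterally between wells is, in essence, what builds the hundreds of trend maps of an LPM.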

Geologically-driven facies modeling
Facies simulation algorithms commonly used in earth modeling suffer from a variety of challenges when trying to generate models from often sparse real data, particularly when attempting to honor depositional facies boundary conditions and proportions, capture depositional overprinting, or account for geological non-stationarity. For example, Sequential Indicator Simulation (SIS) (Caers, 2005) lacks the ability to control facies boundary conditions; Truncated Gaussian Simulation (TGS) (Caers, 2005) provides only simple facies transitional boundaries; and while Object (also termed Boolean) Simulation (OS) can manage most non-overprinted complex facies sets, it is unstable in the presence of high-density, closely spaced wells, highly computationally intensive, and reliant on (3D) training image models (Caers, 2005). Multi-Point GeoStatistics (MPS) (Strebelle and Journel, 2001) is rapidly growing in popularity, offering the modeler the ability to create geological models with complex geometries while conditioning to large amounts of well and seismic data. However, as pointed out by Daly and Mariethoz (2011), it is still a relatively new topic, which has had a long academic history and is only now finding its way into commercial software. They also pointed out several deficiencies in current implementations related to 1) performance, 2) training image generation, and 3) non-stationarity. The DecisionSpace® Desktop Earth Modeling approaches facies modeling very differently from most current software offerings. The facies simulation workflow utilizes a powerful combination of describing the geological trends with LPMs created from multiple per-well proportion curves, as described previously, integrated with Plurigaussian simulation (PGS), a robust and well-tested algorithm with a long industrial history. PGS is an expansion of the TGS method and uses two Gaussian variograms simultaneously. 
There are numerous advantages of PGS over other methods:
- As trends for each facies within each layer and every reservoir interval in the model are calculated based on the LPMs that account for spatial non-stationarity, the PGS methodology captures most inter- and intra-facies relationships, including post-depositional overprinting such as diagenesis.
- As a pixel-based method, PGS works easily with closely spaced or sparse well control.
- PGS has been available in two commercial software offerings (HERESIM™ and ISATIS™) since the early 1990s; however, its use is not necessarily intuitive.
- While it is possible to overcome some of the challenges presented by traditional algorithms through intervention by experts, the implementation of an LPM-with-PGS workflow can be presented simply and intuitively.
The next-generation earth model approach to facies modeling introduces a set of facies templates based on the understanding of realistic depositional environments (Walker and James, 1992). The DecisionSpace® Desktop Earth Modeling implements a library of more than forty standard depositional systems, presented with maps and cross-sectional views (see Fig. 5). PGS facies modeling requires a set of rules to establish lithotype relationships, where a lithotype is a group of facies sharing common depositional and petrophysical properties. Knowledge of the proportions is not sufficient for accurate modeling of the lithotypes; the depositional system templates help modelers visualize relationships between the facies and provide an associated lithotype "rule box" (see upper left schematics, Fig. 5 expanded view) that specifies their mathematical relationship. 
Rules are based on simple or complex lithotype transitions: a) simple transitions correspond to a strict transition from one lithotype to another and are modeled with one variogram; b) complex transitions require the definition of two lithotype sets, each controlled by its own variogram, which can have different anisotropy directions. A lithotype set is a set of lithotypes that share a common spatial model, such as a channel and its associated levee. An example of modeling a complex transition of lithotypes, represented by two lithotype sets and associated variogram models, is given in Fig. 6. In this figure the vertical and horizontal sets are schematic representations depicting which variogram controls a lithotype set; in reality the two variogram models interact to create the final facies model. Because PGS uses lithology proportions, rather than indicators as in SIS, a wider variety of variogram model types is available to the modeler. It also provides the ability to model two lithotype sets, each of which can exhibit different anisotropic conditions, and these can be used with the calculated LPM to produce realistic models of geological facies distributions. Separate lithology rules are used for each depositional sequence, with up to two variograms controlling different directions and scales.
Petrophysical property modeling populates facies models with petrophysical properties (porosity, permeability, water saturation, etc.) by interval, constrained by facies. The DecisionSpace® Desktop Earth Modeling uses the Turning Bands algorithm as the simulation method, with Collocated CoSimulation as one of the options. Kriging and Collocated CoKriging are additional geostatistical algorithms available. Collocated CoKriging is a variation of classical kriging interpolation in which the variogram is computed from a secondary variable that serves as an additional spatial constraint. Fig. 7 gives an example of a facies model realization and a facies-constrained model of porosity distribution.
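The truncation idea underlying PGS can be sketched as follows. This is a deliberately simplified stand-in, not the actual Plurigaussian algorithm: the "Gaussian fields" are just smoothed noise rather than variogram-based simulations, and the fixed thresholds play the role that LPM-derived, spatially varying thresholds play in the real workflow:

```python
import numpy as np

def smooth(field, passes=8):
    """Cheap isotropic smoothing by repeated 4-neighbour averaging,
    standing in for a proper variogram-based Gaussian simulation."""
    f = field.copy()
    for _ in range(passes):
        f = 0.2 * (f + np.roll(f, 1, 0) + np.roll(f, -1, 0)
                   + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    return (f - f.mean()) / f.std()

rng = np.random.default_rng(1)
g1 = smooth(rng.normal(size=(64, 64)))   # first Gaussian field
g2 = smooth(rng.normal(size=(64, 64)))   # second, independent field

# "Rule box": thresholds on the pair (g1, g2) map to lithotypes.
facies = np.zeros((64, 64), dtype=int)       # 0 = shale (background)
facies[g1 > 0.5] = 1                         # 1 = channel sand
facies[(g1 > 0.5) & (g2 > 0.8)] = 2          # 2 = overprint within sand only
```

Because facies 2 can only occur where g1 already exceeds its threshold, the rule enforces a strict transition: the overprint lithotype is always embedded in sand, never in direct contact with shale, which is the kind of boundary control SIS cannot provide.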

Novel concepts in local continuity modeling
Traditional reservoir modeling techniques use simplified two-point statistics to describe the pattern of spatial variation in geological properties. Such techniques implement a variogram model that quantifies the average expected variability as a function of distance and direction. In reservoirs where the geological characteristics are very continuous and easily correlated from well to well, the range (or scale) of correlation will be large, while in reservoirs where the geological characteristics change quickly over short distances, the correlation scales will be shorter. The latter phenomenon is very common in sedimentary environments where the primary mechanism of transport during deposition is water, resulting in highly channelized structures (e.g. deltaic channels, fluvial deposits, turbidites). These environments usually demonstrate a large degree of local anisotropy, with correlation varying strongly between the directions along and perpendicular to the channel axis. Because the principles of conventional (i.e. two-point) geostatistical practice still require the nomination of a single (average) direction of maximum continuity, its use for modeling complex sedimentary environments becomes highly challenging, if not impossible. Recently, a technology for 3D volumetric modeling of geological properties using a Maximum Continuity Field (MCF) (Yarus et al., 2009) has been proposed. The new method represents geological properties within a volume of the subsurface by distributing a plurality of data points in the absence of a grid, with the notion of geological continuity and directionality represented by the MCF; hence the name Point-Vector (PV) method. It introduces several game-changing components to the area of geomodeling:


- Direct control over local continuity directions, using a predefined azimuth map and the local dip (an angle from the horizontal/azimuth plane) of the horizons. The Fault Displacement Field (FDF), annotated in Fig. 8 with the symbol FT (fault throw), can, for example, be calculated from the underlying seismic amplitude data (Liang et al., 2010).
- Interactive operation with "geologically intuitive" datasets, such as layering intervals, projection maps and hand drawings, via the notion of the MCF.
- Retention of the maximum fidelity of the geological model by postponing the creation of the grid/mesh until the final stage of (static) model building, immediately before integration into the dynamic model. The reservoir property modeling does not need a standard grid, only the "correct" distances between the points used to estimate/simulate the property and the data around them.
The key to implementing these ideas emerges from the interpretation of MCF concepts and their incorporation into the kriging equations (Eq. 2) for geostatistical estimation. Almost all available geostatistical software restricts the user to certain types of variogram model functions (e.g. spherical, exponential, Gaussian, etc.) to ensure that a unique set of kriging weights can always be found, and "forces" a single direction of maximum continuity. However, it is very rare in geology for a single direction of maximum continuity to be representative everywhere. Instead, the PV method defines the attributes of the Maximum Continuity Vector (MCV) as location, magnitude, direction and length, the latter representing the correlation length (see insert of Fig. 8) along which the magnitude of the geological property remains "substantially the same", e.g. within 10%. This allows flexibility in specifying local directions of maximum continuity and interpolating properties in 3D geological models aligned with the underlying geological structure. The PV property interpolation workflow follows this methodology:
1. Define the structural model and the 3D grids where the MCF is stored and the property values are interpolated.
2. Define the MCF, using a predefined azimuth map (see Fig. 9b) and the local dip (an angle from the horizontal/azimuth plane) of the horizons.
3. Pre-process all the fault displacements in the 3D grid (see Fig. 9c).
4. Add known data points (i.e. spatially located known reservoir property values) to the model, and create the covariance neighborhood and the variogram. The covariance calculations take local continuity into account by aligning the axes of the variogram with the local continuity direction.
5. Run the ordinary kriging estimator (see Eq. 2), using the created covariance neighborhood.
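The covariance alignment in step 4 amounts to measuring separation in a coordinate frame rotated to the local azimuth and scaled by the anisotropy factor: a point far away along the continuity direction is treated as "close", while the same distance across it is not. A minimal 2D sketch, where the function name, the rotation convention and the 10:1 ratio are illustrative assumptions:

```python
import numpy as np

def anisotropic_distance(p, q, azimuth_deg, ratio):
    """Effective separation between points p and q when the local maximum
    continuity direction is rotated by azimuth_deg and the search ellipse
    has a major/minor axis ratio of `ratio`."""
    theta = np.radians(azimuth_deg)
    # Rotate the separation vector into the local continuity frame
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    d = R @ (np.asarray(q, float) - np.asarray(p, float))
    # Shrink the along-continuity component by the anisotropy ratio
    return np.hypot(d[0] / ratio, d[1])

# With azimuth 0 and a 10:1 ellipse, 10 units along continuity look like
# an effective distance of 1, while 10 units across remain 10.
along = anisotropic_distance((0, 0), (10, 0), azimuth_deg=0.0, ratio=10.0)
across = anisotropic_distance((0, 0), (0, 10), azimuth_deg=0.0, ratio=10.0)
```

Feeding such effective distances into the covariance model of the kriging system is what lets the interpolation follow a locally varying azimuth field instead of one global anisotropy direction.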
For each point to estimate, the kriging finds the nearest set of known data along the shortest geometrical (or Euclidean) distances. The 2D implementation of property interpolation is schematically depicted in Fig. 8. The center of the search ellipse (or ellipsoid in 3D) is associated with the location of the MCV, denoted by V1 and V2. Data points detected inside the search ellipse (blue triangles) are considered in the interpolation along the MCV, while data outside the ellipse (red squares) are not included. The relative dimensions of the search ellipsoid, i.e. the ratios between the major (M), intermediate (I) and minor (m) axis lengths, correspond to the "local" anisotropy factor. In a faulted reservoir, the property associated with MCV V2 is interpolated across the fault line, following the fault throw vector FT. To validate the PV method, a sealed structural framework, containing top and bottom horizons with a single internal fault, was built from a fluvial reservoir of the Brugge synthetic model (Peters et al., 2009). Additional details on the validation of the PV method are available in the literature. In Fig. 9a, an example of a facies realization for the Brugge fluvial reservoir zone is given. The blue area represents the sand body (pay zone) distributed in shale (non-pay zone, in red). Using the facies distribution as a basis or constraint for the generation of a vector field emulates a certain "pre-knowledge" of the geological structure but is in no way a required step. Fig. 9b represents the MCF defined on a virtually regular mesh of points. No particular continuity information was assumed for the shale zone, only for the discrete sand bodies. Only the azimuth is used from the MCF; the dip angle is calculated as the normal direction relative to the local curvature of the horizon. Fig. 9c visualizes the fault displacement vector field with a constant throw of ~50 m. An example permeability distribution, generated with the MCF-based method of reservoir property interpolation, is shown in Fig. 9d, depicting the flattened 2D map view of the major-minor plane and the fault location. The PV method allows the user to define variable sizes of search ellipsoids throughout the model volume of interest (VOI). By this notion, a single-size search sphere was defined for the shale facies, where no particular continuity information was assumed (see Fig. 9b). A variable-sized search ellipsoid and different anisotropy factors (i.e. ratios between the lengths (in ft) of the major and minor directions of the search ellipsoid in the M-m plane) were considered for the sand zone, to validate the impact on the interpolation. Qualitatively, the anisotropy ratio of 10:1 (major/minor = 10000/1000) can be interpreted, for example, as the case with less uncertain (and more trusted) MCF data, used to obtain the permeability distribution depicted in Fig. 9d. The corner insert of Fig. 9d represents the M-I plane cross-section, indicating the fault location. The permeability model effectively represents a VOI with no imposed stratigraphic grid and as such retains the maximum available resolution and information density, limited only by the resolution of the underlying data and the MCF. Permeability maps are smoothed with a recursive Gaussian filter (two-samples half-width).

Conditioning reservoir models to production data
Accurate modeling of hydrocarbon reservoir production behavior is of fundamental importance in reservoir engineering workflows for optimization of reservoir performance, operational activities, and development plans. A realistic description of geological formations and their fluid-flow-related properties, such as lithofacies distribution and permeability, represents a crucial part of any modeling activity. To quantify and reduce the uncertainty in the description of hydrocarbon reservoirs, the parameters of the geological model are usually adjusted and reconciled with pressure and multi-phase production data by history-matching (HM). With the advent of computing capabilities in recent decades, classical (i.e. manual) HM has evolved into so-called computer-assisted (or automated) HM (AHM) technology. When we refer hereafter to the history-matching process, the AHM workflow is assumed. As an inverse problem, HM is highly non-linear and ill-posed by nature, which means that, depending on the prior information, one can obtain a set of non-unique solutions that honor both the prior constraints and the conditioned data, with associated uncertainty. To assess the uncertainty in estimated reservoir parameters, one must sample from the posterior distribution, and Bayesian methods (Lee, 1997) provide a very efficient framework for this operation. Using Bayes' formula, the posterior distribution (i.e., the probability of occurrence of the model parameters, m, given the measured data values, d) is represented as proportional to the product of the prior and the likelihood probability distributions of the reservoir model:

p(m|d) ∝ p(d|m) p(m)   (Eq. 3)

where the likelihood is p(d|m) ∝ exp[−(1/2)(g(m) − d)ᵀ C_D⁻¹ (g(m) − d)], with C_D as the data covariance matrix. 
The relationship between the data and the model parameters is expressed as a non-linear function that maps the model parameters into the data space, d = g(m), where d is the data vector with N observations representing the output of the model, m is a vector of model parameters, and g is the forward model operator that maps the model parameters into the data domain. For history-matching problems, g represents the reservoir simulator. Using Bayes' theorem (Eq. 3), both prior and likelihood pdfs are combined to define the posterior pdf as:

p(m|d) ∝ exp[-O(m)], with O(m) = (1/2) (g(m) - d)^T C_D^(-1) (g(m) - d) + (1/2) (m - m_prior)^T C_M^(-1) (m - m_prior),

where the minimizer of the objective function O(m) is the most probable estimate. The HM minimization algorithm renders multiple plausible model realizations, and the consequence of non-linearity is that it requires an iterative solution. When considering realistic field conditions, the number of parameters of the prior model expands dramatically (i.e., to the order of 10^6), and computation of the prior term of the objective function becomes highly demanding and time-consuming. A variety of model parameterization and reduction techniques have been implemented in HM workflows, ranging from methods based on linear expansion of weighted eigenvectors of the block covariance matrix C_M (Rodriguez et al., 2007; Jafarpour and McLaughlin, 2009; Le Ravalec Dupin, 2005) to methods where expensive covariance matrix computations are avoided by generating model updates in the wave-number domain (Maučec et al., 2007; Jafarpour and McLaughlin, 2009), which do not require specifying the model covariance matrix C_M or performing expensive inversions.
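Under the Gaussian assumptions above, the objective function O(m) can be sketched as follows. This is a minimal illustration, not the AHM code; the symbols g, C_D, C_M and m_prior follow the text, and the inverse covariance matrices are assumed to be precomputed.

```python
import numpy as np

def objective(m, d_obs, g, C_D_inv, C_M_inv, m_prior):
    """O(m) = 1/2 (g(m)-d)^T C_D^{-1} (g(m)-d)
            + 1/2 (m-m_prior)^T C_M^{-1} (m-m_prior).
    The posterior is proportional to exp(-O(m)), so minimizing O(m)
    yields the most probable (MAP) estimate."""
    r_d = g(m) - d_obs          # data misfit residual
    r_m = m - m_prior           # deviation from the prior mean
    return 0.5 * r_d @ C_D_inv @ r_d + 0.5 * r_m @ C_M_inv @ r_m
```

The prior term is the one that becomes expensive at ~10^6 parameters, which motivates the parameterization and wave-number-domain techniques cited above.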

Sampling from the posterior distribution
Two classes of methods have been proposed for sampling the posterior distribution: sequential Markov chain Monte Carlo (MCMC) algorithms (Neal, 1993) and approximate sampling methods, such as Randomized Maximum Likelihood (RML) (Kitanidis, 1995), both with some inherent deficiencies. Traditional MCMC methods attempt to simulate direct draws from some complex statistical distribution of interest. MCMC techniques use the previous sample values to randomly generate the next sample value in the form of a chain, where the transition probabilities between sample values are only a function of the most recent sample value. The MCMC methods arguably provide, statistically, the most rigorous and accurate basis for sampling the posterior distribution and quantifying uncertainty, but they come at a high computational cost. On the other hand, the approximate, but faster, RML methods are, in practice, applicable mostly to linear problems.
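A minimal random-walk Metropolis-Hastings chain illustrates the Markov property described above: each proposal depends only on the current state. The Gaussian proposal and step size are illustrative choices, not the AHM implementation.

```python
import numpy as np

def metropolis_hastings(log_post, m0, n_samples, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings sampler: propose a Gaussian
    perturbation of the current state, accept with probability
    min(1, posterior ratio); the chain's transition depends only on
    the most recent sample (Markov property)."""
    rng = rng or np.random.default_rng(0)
    m = np.asarray(m0, float)
    lp = log_post(m)
    chain = [m.copy()]
    for _ in range(n_samples):
        prop = m + step * rng.standard_normal(m.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject test
            m, lp = prop, lp_prop
        chain.append(m.copy())
    return np.array(chain)
```

For HM problems, `log_post` would be -O(m) with O(m) the objective function defined earlier; each evaluation then costs a reservoir simulation, which is exactly what makes plain MCMC expensive.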
In an attempt to improve computational efficiency and mixing of the MCMC algorithm, Oliver et al., 1996, proposed a two-step approach in which (1) model and data variables were jointly sampled from the prior distribution and (2) the sampled model variables were calibrated to the sampled data variables, with the Metropolis-Hastings sampler (Hastings, 1970) used as the acceptance test. The method works well for linear problems, though it does not hold for non-linear problems, such as the HM studied here. To improve on that, Efendiev et al., 2005, proposed a rigorous two-step MCMC approach to increase the acceptance rate and reduce the computational effort by using the sensitivities calculated from tracing streamlines (Datta-Gupta and King, 2007). When the sensitivities are known, the solution of the HM inverse problem is greatly simplified. One of the most important advantages of the streamline approach is the ability to analytically compute the sensitivity of the streamline Generalized-Travel-Time (GTT) with respect to reservoir parameters, e.g., porosity or permeability. The GTT is defined as the optimal time-shift Δt at each well that minimizes the production data misfit function J:

J(Δt) = Σ_{i=1}^{N_dj} [ y_j^obs(t_i) - y_j^cal(t_i + Δt) ]^2,   (8)

where N_dj stands for the number of observed data (d) at well j, and y_j^obs and y_j^cal correspond to the observed and calculated production data, respectively, at well j.

Fig. 10. Illustration of Generalized-Travel-Time (GTT) inversion by systematically shifting the calculated fractional flow curve f_w to the observed history (modified from Datta-Gupta and King, 2007). Red and magenta symbols correspond to the initially-calculated and shifted curves, respectively, while the black line represents the observed curve.

This is illustrated in Fig. 10, where the calculated fractional flow response is systematically shifted in small time increments towards the observed response; every data point in the fractional-flow curve has the same shift time, Δt_1 = Δt_2 = ... = Δt, and the data misfit is computed for each time increment. The misfit function J directly corresponds to the term d - g(m), given in Eqs. 5 and 7, that defines the misfit between the observed data and the simulated response. The objective of the HM inversion workflow is to minimize the misfit in production response by reconciling the geological model with the observed (measured) dynamic production data. The two-step MCMC algorithm (Efendiev et al., 2005) uses an approximate likelihood calculation to improve on the (low) acceptance rate of the one-step algorithm (Ma et al., 2008). This approach does not compromise the rigor of traditional MCMC sampling, as it adequately samples from the posterior distribution and obeys detailed balance (Maučec et al., 2007), a sufficient condition for a unique stationary distribution. The main steps of the streamline-based, two-step MCMC algorithm are depicted in the flowchart in Fig. 11. A pre-screening based on approximate likelihood calculations eliminates most of the rejected samples, and the exact MCMC step is performed only on the accepted proposals, with a higher acceptance rate. The approximate likelihood calculations are fast and typically involve a linearized approximation around an already-accepted state, using the streamline sensitivities calculated per geomodel grid block and per production well, rather than an expensive computation such as a flow simulation.

Fig. 11. Flowchart of the streamline-based, two-step MCMC algorithm as implemented in the AHM workflow (Maučec et al. 2007; Maučec et al. 2011a; Maučec et al. 2011b).
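The time-shift search of the GTT misfit function can be sketched as a simple scan over candidate shifts. This is an illustrative implementation only: the shift grid and the linear interpolation of the calculated curve are assumptions, not the production code, and edge points are clamped by the interpolation.

```python
import numpy as np

def generalized_travel_time(t, y_obs, y_cal, shifts):
    """Scan candidate time-shifts dt and return the one minimizing
    J(dt) = sum_i [y_obs(t_i) - y_cal(t_i + dt)]^2, applying the SAME
    shift to every data point, as in the GTT definition."""
    best_dt, best_J = 0.0, np.inf
    for dt in shifts:
        y_shift = np.interp(t + dt, t, y_cal)   # calculated curve at t_i + dt
        J = float(np.sum((y_obs - y_shift) ** 2))
        if J < best_J:
            best_dt, best_J = dt, J
    return best_dt, best_J
```

In the streamline workflow the payoff is that the sensitivity of this optimal shift to permeability or porosity is available analytically along the streamlines, so the scan never requires extra flow simulations.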
The streamline-based, two-step MCMC history-matching workflow represents an integral component of the new prototype technology for Quantitative Uncertainty Management (QuantUM) that seamlessly integrates the next-generation Earth Modeling API, the VIP® and/or Nexus® reservoir simulators, and the code DESTINY (MCERI Research Group, 2008), used as a generator of streamline-based sensitivities. The workflow was validated by reconciling dynamic well production data with a completely synthetic model of the Brugge field (Maučec et al. 2011a). The stratigraphy of the Brugge field combines four different depositional environments and one internal fault with a modest throw. The dimensions of the field are roughly 10x3 km. In the original high-resolution model of 20 million grid cells (referred to as the "truth-case"), essential reservoir properties, including sedimentary facies, porosity and permeability, Net-To-Gross (NTG), and water saturation, were created for the purpose of generating well-log curves in the 30 wells (Peters et al. 2009). Multiple realizations of a high-resolution (211x76x56, i.e., approximately 900k grid cells) facies-constrained permeability model (referred to as "initial") were generated using information on the structural model, well properties and depositional environments based on the "truth-case" model. In general, real-field HM workflows can be highly computationally demanding, mostly on account of time-consuming forward reservoir simulations. Furthermore, when the full-fledged uncertainty analysis of the high-resolution geological model is addressed through the generation of multiple (sometimes on the order of 100s) static model realizations, the multi-iteration AHM workflows may become prohibitively expensive.
The QuantUM AHM module addresses the issue of computational efficiency in two ways: a) it takes full advantage of parallel execution of the VIP® and/or Nexus® reservoir simulator wherever multi-CPU cores are available, and b) it uses the option of computational load distribution via a standard submission protocol wherever multi-node computational resources are available. The proposals generated by the Metropolis-Hastings sampler of the two-step MCMC-based inversion workflow are very likely positively correlated; therefore, the convergence diagnostics ought to be governed by estimators averaged over the ensemble of realizations. The QuantUM AHM workflow implements the maximum entropy test (Full et al., 1983); details of the estimators implemented as convergence measures in the AHM workflow are given in Maučec et al. 2007 and Maučec et al. 2011a. Selected results of the QuantUM AHM workflow validation are given in Fig. 12, with additional information available in (Maučec et al. 2011a; Maučec et al. 2011b):
- The behavior of the (negative) entropy, S (Fig. 12a), and the objective function, defined as the logarithm of the transition probability of the two-step Metropolis-Hastings sampler (Fig. 12b), demonstrates the convergence rate of the MCMC sequence, with a burn-in period of approximately 750 samples (the total number of processed samples is 1500, a product of 50 model realizations and 30 MCMC iterations).
- Comparison of dynamic well production responses, here defined as the water-cut curves, calculated with prior (Fig. 12c) and posterior (Fig. 12d) model realizations demonstrates the efficiency of water-cut misfit reduction between the simulated and observed data. To demonstrate the case, the production response of one of the wells with the most pronounced production dynamics over the 10-year period is depicted; such non-monotonic behavior is usually the most challenging to match.
- The HM workflow demonstrates a significant reduction in the discrepancy of the mean dynamic response (Fig. 12e), calculated over the ensemble of 50 history-matched models with respect to the observed well water-cut curve, as well as an impressive reduction of the ensemble-averaged variance (Fig. 12f) of the history-matched responses with respect to the observed production curves.
- Figs. 12g and 12h depict log-permeability maps for one realization of the top layer of the Brugge fluvial reservoir, corresponding to the prior (i.e., initial, not history-matched) and posterior (i.e., history-matched) models, respectively. Emphasized are the areas where the history-matching algorithm reconciles the static model with the dynamic data by connecting spatially-separated high-permeability areas to facilitate fluid flow, as governed by the calculated streamline sensitivities.

Quantification of reservoir production forecast uncertainty
By their nature, probabilistic history-matching workflows use multiple equally probable but non-unique realizations of geological models that honor prior spatial constraints, are conditioned to the data, and approximate the forecast uncertainty.

Fig. 12. Selected results of QuantUM AHM workflow validation (Peters et al. 2009): a) and b) algorithm convergence diagnostics ((negative) entropy and objective function, respectively); c) and d) well water-cut response curves, obtained with ensembles of prior and posterior models, respectively; e) and f) ensemble-averaged statistical estimators (mean and variance, respectively); and g) and h) prior and posterior log-permeability distribution maps of the model top layer, respectively.

But the crux of the matter here is two-fold: a) throughout the inversion, some model realizations may have developed geologically unrealistic features, and b) many of the underlying geological parameters may have an insignificant effect on recovery performance. In addition to traditional, single-parameter sensitivity studies to identify the important and geologically relevant parameters, more sophisticated approaches use streamline simulation as a tool for pre-screening geological models (Gilman et al., 2002) or Experimental Design (ED) techniques that optimally identify uncertain parameters and their variation ranges (Prada et al., 2005). Unfortunately, in realistic reservoir forecasting projects, not all flow simulations may be necessary, but the clear distinction between important and unimportant parameters is unknown a priori. A workflow aiming to identify and isolate the model realizations relevant to reservoir forecast analysis, out of a full spectrum of statistically probable models, has recently been proposed by Scheidt and Caers, 2009, combining the following steps (Fig. 13):
- Computation of a single parameter, the pattern-dissimilarity distance, used to distinguish two individual model realizations in terms of dynamic performance. The objective is to identify a set of representative reservoir models through pattern-dissimilarity distance analysis, focusing on the dynamic properties of the realizations.
- Evaluation of the pattern-dissimilarity distances via rapid streamline simulations carried out for each ensemble member. Analysis of the pattern distances gives rise to a set of representative models, which are then simulated using a full-physics finite-difference simulator.
- Derivation of the forecast uncertainty from the outcome of these few intelligently selected full-physics simulations. Further details are available in (Scheidt and Caers, 2009; Maučec et al., 2011b).

The concept of dynamic data (dis)similarity
To describe the degree of (dis)similarity between reservoir model realizations in an ensemble, it is not necessary to identify the individual reservoir characteristics and corresponding dynamic responses of each ensemble member; knowledge of a representative "difference" (hereafter referred to as "distance") between any two realizations is sufficient. The pattern-dissimilarity distance θ_kl is defined as a Euclidean measure that describes the degree of (dis)similarity between any two reservoir realizations m, indexed k and l within an ensemble of size I, in terms of their pertinent dynamic responses r_k and r_l (e.g., recovery factor or oil production rate), for example

θ_kl = sqrt( Σ_{t∈T} [ r_k(t) - r_l(t) ]^2 ),

where, by definition, the pattern-dissimilarity distance honors self-similarity (θ_kk = 0) and symmetry (θ_kl = θ_lk). Pattern-dissimilarity distances are evaluated using rapid streamline simulation and assembled into a pattern-dissimilarity matrix.
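For an ensemble of dynamic response curves sampled at common times, the pattern-dissimilarity matrix can be assembled as follows. This is a sketch under that common-time assumption; the names are illustrative.

```python
import numpy as np

def dissimilarity_matrix(responses):
    """theta[k, l] = Euclidean distance between the dynamic responses
    (e.g. recovery-factor curves) of realizations k and l. The result is
    symmetric (theta_kl = theta_lk) with a zero diagonal (theta_kk = 0)."""
    R = np.asarray(responses, float)            # shape (I, n_times)
    diff = R[:, None, :] - R[None, :, :]        # pairwise curve differences
    return np.sqrt(np.sum(diff ** 2, axis=-1))  # shape (I, I)
```

Because each entry needs only the response curves, a fast streamline proxy suffices to populate the whole matrix; no full-physics run is required at this stage.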

Multi-dimensional scaling and cluster analysis
Multi-dimensional scaling (MDS) is used to translate the pattern-dissimilarity matrix into a p-dimensional Euclidean space (Borg and Groenen, 2005), where each realization is represented by a unique point. Hereafter, this Euclidean space will simply be referred to as the E space, with individual points arranged in such a way that their distances correspond, in a least-squares sense, to the dissimilarities of the individual realizations. Euclidean distances tend to exhibit strong correlation with pattern-dissimilarity distances.
The workflow of Scheidt and Caers, 2009 incorporates a classical variant of metric MDS, in which intervals and ratios between points are preserved with the highest fidelity and the pattern-dissimilarity distances are assumed to correspond directly to distances in the E space. MDS finds the appropriate coordinates consistent with the Euclidean distance measure, and the resulting map is governed exclusively by the pattern-dissimilarity distances. Provided that the distances are strongly correlated with the dynamic response r, points within close proximity of each other exhibit similar recovery characteristics and, hence, are expected to contain similar geologic features. A clustering algorithm is used to compartmentalize the Euclidean space into a few distinct clusters and to identify the realizations within the closest proximity of the cluster centroids. The characteristic nonlinear structure of points in the E space calls for kernel techniques, such as kernel Principal Component Analysis (k-PCA) (Schölkopf et al., 1996), where nonlinear domains are mapped into a linear domain prior to clustering. When the MDS data are mapped into a linear, high-dimensional feature domain F using an appropriate kernel method, e.g., k-PCA, cluster analysis such as k-means clustering (Tabachnick and Fidell, 2006) may be applied to further compartmentalize the feature space F and identify cluster centroids, where the centroid of a cluster corresponds to a point whose parameter values are the mean of the parameter values of all points in the cluster. The pattern-dissimilarity distances θ closest to the cluster centroids in the Euclidean space are defined with respect to the recovery factor (RF) responses of two individual realizations k and l at times t ∈ T.
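The MDS-plus-clustering step can be sketched with a classical MDS embedding followed by a plain k-means pass that also reports the ensemble member closest to each centroid. This is a simplified, numpy-only illustration: the kernel (k-PCA) mapping is omitted, and all names are hypothetical.

```python
import numpy as np

def classical_mds(theta, p=2):
    """Classical (metric) MDS: embed a dissimilarity matrix into
    p-dimensional Euclidean space via double-centering of the squared
    distances and an eigendecomposition."""
    D2 = np.asarray(theta, float) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ D2 @ J
    w, V = np.linalg.eigh(B)                   # ascending eigenvalues
    idx = np.argsort(w)[::-1][:p]              # keep the p largest
    L = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * L                       # coordinates, shape (n, p)

def kmeans_medoids(X, k, n_iter=50, rng=None):
    """Plain k-means; returns cluster labels and, per cluster, the index
    of the member closest to the centroid (the 'representative' model)."""
    rng = rng or np.random.default_rng(0)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    reps = [int(np.argmin(((X - C[j]) ** 2).sum(-1)
                          + np.where(labels == j, 0, np.inf)))
            for j in range(k)]
    return labels, reps
```

The realizations indexed by `reps` play the role of the cluster-centroid models that are subsequently passed to the full-physics simulator.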
The geological realizations corresponding to the identified cluster centroids are selected as the "representative samples" of the entire uncertainty space and simulated with the full-physics simulator as the reference case. The simulation outcome is post-processed to compute the distribution of the ultimate recovery factor (URF) after a lengthy period of production, usually selected as the target forecast uncertainty for quantification. The cumulative distribution functions (CDFs) are finally constructed for the selected model realizations, with weights assigned to the URFs based on the number of models in each particular cluster, and quantitatively compared to the reference CDF derived from full-physics simulations.
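The weighted CDF construction can be sketched as follows; this is illustrative only, with cluster sizes acting as the weights on the representative URF values.

```python
import numpy as np

def weighted_cdf(urf_values, cluster_sizes):
    """Empirical CDF of ultimate recovery factor (URF) where each
    representative model is weighted by the number of realizations in
    its cluster, so a large cluster contributes proportionally more."""
    urf = np.asarray(urf_values, float)
    w = np.asarray(cluster_sizes, float)
    order = np.argsort(urf)                 # sort by URF value
    urf, w = urf[order], w[order]
    cdf = np.cumsum(w) / w.sum()            # normalized cumulative weights
    return urf, cdf
```

Comparing this curve against the CDF built from exhaustive full-physics runs is the quantitative test of how well the few representative models span the forecast uncertainty.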

Discussion and conclusions
Reservoir characterization encompasses techniques and methods that improve the understanding of geological and petrophysical controls on reservoir fluid flow. The presence of a large number of geological uncertainties and limited well data often renders recovery forecasting a difficult task in typical appraisal and early development settings. Moreover, in geologically complex, heavily faulted reservoirs, quantification of the effect of stratigraphic and structural uncertainties on the dynamic performance, fluid mobility and in-situ hydrocarbons is of principal importance. Although the generation of a sound structural framework is one of the major contributors to uncertainty in hydrocarbon volumes, and therefore risk, in reservoir characterization it often represents a compromise between the actual structure and what modern modeling technology allows (Hoffman et al., 2007). In this paper we focus on some of the outstanding features that have the potential to significantly differentiate DecisionSpace® Desktop Earth Modeling, as the next-generation geological modeling technology, from standard industrial approaches and workflows, mainly in the areas of geologically-driven facies modeling, reservoir property modeling in a grid-less modality, and state-of-the-art workflows for dynamic quantitative uncertainty and risk management throughout the asset lifecycle. The facies simulation workflow utilizes a powerful combination of describing the spatial geological trends with lithotype proportion curves and matrices, integrated with Plurigaussian simulation (PGS), a robust and widely-tested algorithm with a long industrial history. While the implementation of PGS is not unique to DecisionSpace® Desktop Earth Modeling, its geologically highly intuitive approach to modeling, based on an understanding of realistic depositional environments, puts the geologist back into the driver's seat.
However, even the most recent advances in geomodeling practice, which represent 3D reservoir volumes with high-resolution geocellular grids, may only mitigate but not eliminate the fact that estimating gridding parameters commonly results in artifacts, due to topological constraints and misrepresentation of important aspects of the structural framework, which may introduce substantial difficulties for the dynamic reservoir simulator later in the workflow. Hence we look into the future of building geological models and present and validate the evolving technology, with truly game-changing industrial potential, to utilize Maximum Continuity Fields for 3D reservoir property interpolation performed in the absence of a geocellular grid. The selection of optimal geocellular parameters, with attributes like cell size, number of cells and layering, is postponed throughout the process and rendered at the user's discretion only before incorporation into the reservoir simulator. Models generated in such fashion retain the maximum available resolution and information density, limited only by the resolution of the underlying data and structural continuity. Regardless of the modeling approach, geoscientists and engineers often select diverse geomodel realizations such that the reservoir simulation outcome will cover a sufficiently large range of uncertainty to approximate the reservoir recovery forecast statistics throughout the asset lifecycle. One of the differentiating attributes of next-generation reservoir characterization is to integrate the reconciliation of geomodels with well-production and seismic data, with dynamic ranking and selection of representative model realizations for reservoir production forecasting. We outline and validate the evolving technology for Quantitative Uncertainty Management that seamlessly interfaces DecisionSpace® Desktop with the VIP® and/or Nexus® reservoir simulators.
The highest available adherence to geological detail with respect to structural features that control depositional continuity (e.g., facies) is maintained through the implementation of advanced model parameterization based on the Discrete Cosine Transform (Rao and Yip, 1990), an industrial standard in image compression. The AHM algorithm utilizes a highly efficient derivative of sequential (Markov chain) Monte Carlo sampling, where the acceptance rate is increased, and the computational effort reduced, by the utilization of streamline-based sensitivities. By their nature, probabilistic HM workflows render multiple equally probable but non-unique realizations of geological models that honor both prior constraints and production data, with associated uncertainty. Throughout the inversion, however, some model realizations may have developed geologically unrealistic features, and these models are inadequate for forecasting recovery performance. We outline the workflow for dynamic quantification of production uncertainty that utilizes rapid streamline simulations to calculate data-pattern dissimilarities, Multi-Dimensional Scaling to correlate dynamic model responses with pattern dissimilarities, and kernel-based clustering methods to intelligently identify and rank the geo-models representative for forecasting decisions. When integrated into the DecisionSpace® Desktop suite of reservoir characterization tools, such technology will assist Smart Reservoir Management and decision-making by combining multiple types and scales of data, honoring most first-order effects, capturing a full range of outcomes, and reducing analysis and decision time.