The migration of magma within a volcano produces a deformation signature at the Earth’s surface. Inverse models of geodetic data estimate parameters that characterize the magma migration. These characterizations are tied to the specific model that relates migration to the observed deformation. A model is a simplified representation of a natural system. A modeler is tasked with the challenge of designing a model that represents the system, in the context of the available data and purpose of the model. This chapter presents a systematic approach to quantitatively simulate geodetic data with finite element models (FEMs) in the framework of a deformation modeling protocol. This chapter will (1) address the design and execution of FEMs that can account for the geophysical complexity of a volcano deformational system and (2) define techniques for including FEMs in both linear and nonlinear inverse methods to characterize a magmatic system based on observed geodetic data. With these techniques, researchers can estimate magmatic migration within active volcanoes and understand how uncertainties in the data propagate into predictions. These estimates comprise some measure of central tendency, a sense of uncertainty, and a quantification of biases.
The upward migration of magma controls the eruption cycle of an active volcano. The specific characteristics of this migration (the impulse) and surrounding structure of the volcano produce a specific deformation signature (the response) at the Earth’s surface. Forward models of such a system, with given magma migration characteristics and host domain configurations, predict the surface deformation. However, in practice we are faced with the much more challenging problem of using inverse models of geodetic data to estimate key controlling parameters that characterize the inaccessible magma migration at depth. The resulting estimates are strongly controlled by the geometric configuration and distribution of materials that host the migration and modulate how the magma migration impulse translates to the deformational response of the Earth’s surface. This impulse-response perspective underpins the concepts presented in this chapter.
A model is a simplified representation of a natural system. The degree of simplification depends on the desired accuracy of predictions, the available constraining information and its uncertainty, and the limitations of the modeling device and methods. Model design is an assembly of decisions that a modeler makes to represent the natural system. Each decision must be justified with sensitivity analyses: some may be formal quantitative assessments, while others may be simple Boolean logic. Ultimately, the reliability of the model depends on how well model predictions compare to observations of the natural system. This chapter focuses on applications of finite element models (FEMs) for simulating quasi-static volcano deformation. Such models are useful for explaining how magma flux, the migration and storage of magma at depth, translates to geodetic observations. With FEMs, this translation can simultaneously account for the complex geometric configurations and distributions of material properties of an active volcano. Thus, FEMs are key to advancing multidisciplinary understanding of active volcanoes.
A given model prediction may be thought of as an estimate, which requires two components: (1) a measure of central tendency and (2) an expression of uncertainty. Quantitative confrontations between model predictions and observations must account for both central tendency and uncertainty. Because model predictions are a function of the model design components, a modeler must examine how variations in model design propagate into model predictions. It is important to include the uncertainties in the constraining information that guides the model design. With experience, a modeler will begin to intuitively identify and understand which model design component variations most strongly influence or bias prediction variations.
The volcano deformation protocol described in this chapter is a formalized process developed from general methods for mathematical modeling; protocol methods for other Earth science fields; and the authors’ experience specific to applications of FEMs for simulating quasi-static deformation to explain geodetic observations. The protocol includes a systematic guide to multidisciplinary quantitative assessments and interpretations for volcano deformation.
Concepts are presented in this chapter using a case study of Okmok volcano, Alaska (Figure 1), because this volcano has been the target of the most sophisticated FEM analyses to date. The remainder of this chapter is divided into three sections. Section 2 summarizes different types of geodetic data and introduces other types of data that influence the translation between magma flux (impulse) and deformation (response). Section 3 describes the volcano deformation modeling protocol. This protocol may be viewed as a checklist for conducting analyses of volcano deformation. Section 4 describes the necessary components for reporting quantitative numerical modeling analyses of volcano deformation.
2.1. Geodetic data
Deformation of the Earth’s surface is driven by an array of processes that span deep-seated mantle convection, lithospheric stresses, magma migration and storage, surface loading processes, and anthropogenic activities, such as reservoir impoundment and fluid extraction and injection. This chapter focuses on deformation associated with earthquakes and active volcanoes. Geodetic data precisely define point positions at the Earth’s surface, as described in the following subsections.
2.1.1. GPS data
Position time series are collected for thousands of global positioning system (GPS) stations using data from the constellations of GPS satellites. These data provide three-component position estimates with uncertainties approaching 1 mm. Some GPS stations provide continuously sampled position records, while others are re-occupied periodically for campaign-style measurements. Continuously sampled sites provide systematic position samplings for time intervals that are limited only by the instrument or infrastructure, but have relatively expensive power and telemetry or data logging requirements. Alternatively, campaign-style measurements rely on less expensive, but more labor-intensive, re-occupation strategies having relatively long time intervals separating sequential occupations. The technical aspects of GPS data processing are far beyond the scope of this chapter. For our purposes, however, it suffices to say that GPS data provide three-component displacements with known uncertainties at discrete times.
2.1.2. InSAR data
Interferometric synthetic aperture radar (InSAR) imagery characterizes changes in the phase distribution of radar scenes acquired for a given target from two separate satellite passes. Because the radar wavelength is constant and the initial phase of the radar wave and the positioning of the satellite are known for both passes, the distribution of phase changes can be related to displacements parallel to the look direction of the satellite in unwrapped InSAR images. The technical aspects of InSAR processing are far beyond the scope of this chapter, but aspects relevant to modeling are given here. Unwrapping refers to the process of spatially integrating the phase data to achieve a map of line-of-sight (LOS) displacements; for our purposes, InSAR implies the unwrapped image. We note, however, that techniques exist to directly analyze the wrapped phase data. LOS displacements may be cast as relative to either the satellite or the ground position. For the remainder of this chapter, LOS displacement refers to ground displacement in the direction of the satellite. An InSAR image provides a map of incremental LOS displacement that represents the difference between the positions of the pixels at the times of the second and first satellite passes, respectively. That is, an InSAR image provides net incremental displacement and no information about how the displacement evolved between the image acquisition times. InSAR images are susceptible to artifacts caused by spatial variations in atmospheric moisture and orbital uncertainties. The former can be identified via pair-wise inspection or stacking techniques, and the latter can be accounted for with linear inverse methods, as discussed below. The pixels of an InSAR image are not independent for a variety of reasons. An InSAR image may be processed with quadtree or other reduction techniques to provide an LOS deformation map with corresponding uncertainties.
Alternatively, one may examine model residuals to estimate uncertainties [9, 10, 11].
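As a concrete illustration of such a reduction, the sketch below (our own minimal implementation, assuming a square, power-of-two LOS array; function and parameter names are our choices) recursively subdivides the image until each block is nearly uniform and reports the block means:

```python
import numpy as np

def quadtree_reduce(los, r0=0, c0=0, size=None, var_tol=1e-6, min_size=2):
    """Recursively subdivide a square, power-of-two LOS displacement map
    into blocks whose internal variance is below var_tol; return a list of
    (row, col, size, mean) cells representing the reduced data set."""
    if size is None:
        size = los.shape[0]
    block = los[r0:r0 + size, c0:c0 + size]
    if size <= min_size or np.var(block) <= var_tol:
        return [(r0, c0, size, float(np.mean(block)))]
    half = size // 2
    cells = []
    for dr in (0, half):          # visit the four quadrants
        for dc in (0, half):
            cells += quadtree_reduce(los, r0 + dr, c0 + dc, half,
                                     var_tol, min_size)
    return cells
```

A smooth deformation field collapses to a few large cells, while steep gradients near a source retain fine cells; each cell’s mean (and, in practice, its variance) then serves as one datum in the inversion.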
2.2. Seismic data
Energy from a seismic source propagates through the Earth and is recorded as motion by seismometers or geophones at or near the Earth’s surface. The resulting seismic data may be used to estimate the distribution of internal rock properties, based on well-established relationships between seismic velocities and elastic moduli, as well as the propagation and storage characteristics of magma, all of which are generally time-dependent within an active volcano.
Seismic wave propagation is a function of the specific material properties of the rock that hosts the waves. More specifically, these material properties determine the P- and S-wave propagation velocities.
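Those relationships can be stated compactly. With density ρ, shear modulus G, and bulk modulus K, the body-wave velocities are:

```latex
V_p = \sqrt{\frac{K + \tfrac{4}{3}G}{\rho}}, \qquad V_s = \sqrt{\frac{G}{\rho}}
```

Inverting these standard relations, a tomography model of \(V_p\), \(V_s\), and ρ yields the distribution of elastic moduli needed to parameterize an FEM domain.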
Magma processes may serve as a source of seismic energy. The propagation of fluid-filled cracks perturbs the local stress field and produces microseismicity, which can be used to track magma migration. The case of a propagating dike is precisely analogous to tracking hydrofracture propagation in energy applications. Alternatively, the viscous flow of magma produces VLF seismic signals that reveal episodic dike breakout.
2.3. Auxiliary data
2.3.1. Topography and bathymetry
For deformation studies, the land surface or seafloor of a volcano is represented as a stress-free surface because we assume that the shear resistance of the low-viscosity atmosphere or water column is zero and that the normal stress variations are also small. This latter assumption may require consideration for seafloor deformation studies having substantial overlying water columns. Not surprisingly, the shape of the free surface will influence deformation predictions. We can visualize this effect by considering how the deformation field would be influenced by the extreme case of a vertical cliff. In this case, the true free surface would be orthogonal to the customary flat surface of a homogeneous elastic half-space (HEHS) domain. Thus, an irregular free surface estimated from topography and bathymetry data, both of which are readily available for the entire surface of the Earth, will influence deformation predictions. Cayol and Cornet demonstrated that topographic gradients less than 20° are well approximated by a flat free surface. Because the nonuniform internal structure of an active volcano implies a nonuniform land surface, deformation models should generally address the irregular free surface of topography or bathymetry.
2.3.2. Thermal data
Rock rheology is strongly temperature-dependent. Not surprisingly, heat from an active magma chamber propagates into and weakens the surrounding country rock. Temperature data thus help constrain the rheologic structure of the model domain.
2.3.3. Petrologic data
Petrologic analyses of frozen lava sampled at or near the land surface reveal the pressure and temperature history of the material when it was at depth. For example, the presence of volatiles can be studied to infer the pressure of crystallization, and hence the depth of a magma system. Alternatively, plagioclase zoning records episodic perturbations to the pressure or temperature conditions within a magma chamber. Such episodes can be interpreted as variations in magma flux, which, in turn, can be used to account for episodic deformation.
The deformation modeling protocol provides a framework that subjects the modeling effort to a rigorous system of checks to ensure self-consistency, repeatability, and reliability. An innovative aspect of the protocol is that it requires a dynamic modeling process that iteratively updates and improves model configurations with new information and evaluations of prediction misfit. Adherence to such a protocol maximizes the effectiveness of FEM applications and is essential for quantitative analyses, parameter estimations, and interpretations of geodetic data for active volcanoes.
The first step in the protocol is to identify the purpose of the modeling, which includes a precise definition of the scientific problem and a statement of objectives in mathematical terms. This step includes a literature review and the assembly and visual (qualitative) inspection of geodetic and auxiliary data. Based on this information, the modeler formulates scientific questions regarding the relationship between magma flux (what we cannot directly observe) and the observed geodetic data, and identifies goals that will answer these questions. The modeler then devises strategic testable hypotheses to address these goals. These tests use statistical hypothesis testing (SHyT) to ensure that quantitative estimates and interpretations include the propagation of prediction uncertainties. A typical SHyT analysis comprises competing null (H0) and alternative (H1) hypotheses.
Based on an F-ratio (F) test, which explicitly accounts for the uncertainty in both data and model predictions, and the critical value appropriate for the 95% confidence level (Fcrit), we would accept the null hypothesis if F < Fcrit and reject it otherwise.
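A minimal sketch of such a test (function names are ours; in practice Fcrit would be taken from F-distribution tables or a routine such as scipy.stats.f.ppf) compares the reduced chi-square misfits of the competing models:

```python
def f_ratio(chi2_null, dof_null, chi2_alt, dof_alt):
    """Ratio of the reduced chi-square misfits of two competing models."""
    return (chi2_null / dof_null) / (chi2_alt / dof_alt)

def shyt_decision(chi2_null, dof_null, chi2_alt, dof_alt, f_crit):
    """Reject H0 only if the alternative model's misfit improvement is
    significant, i.e., F exceeds the critical value for the chosen
    confidence level (e.g., 95%)."""
    f = f_ratio(chi2_null, dof_null, chi2_alt, dof_alt)
    return "reject H0" if f > f_crit else "accept H0"
```

This framing ensures that a more complicated model is only preferred when its improved fit is statistically significant, not merely numerically smaller.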
3.2. Conceptual model
The conceptual model is a qualitative representation of the deformational system. A suitable conceptual model of a volcano deformational system requires a domain configuration comprising the geometry, governing equations, boundary conditions, initial conditions, and loading conditions or mechanisms. Some or all of these components may be a function of time.
3.2.1. Domain configuration
The initial decisions of the conceptual model include the choice of units and coordinate system. For scientific investigations, the units will be Système International (SI); modelers are strongly cautioned against using any other system, in the interest of consistency. The modeler must then establish a global coordinate system, which generally approximates the local volcanic system in Cartesian coordinates [east, north, up]. This approximation, which neglects the curvature of the geoid, is justified for the relatively small near-field regions of an active volcano. The UTM coordinate system is a particularly convenient choice for a global coordinate system. Some models may require local coordinate systems to formulate directional aspects, such as directionally dependent material properties, fault-slip, or dike and sill openings. The modeler will document all coordinate system transformations. The geometry of the model domain is a representation of the model space. The top of the domain represents the land surface or seafloor. The near-field geometry may include local geometric complexities, such as a high-resolution representation of the topography. The far-field represents how the domain extends beyond the near-field region in both horizontal and depth dimensions.
While the actual spatial system of an active volcano is a 3D entity, the modeler is tasked with choosing the simplest dimensional configuration that adequately represents the system. While lower-dimensionality configurations are computationally simpler, the modeler must remember that this simplicity carries precise meanings for the higher dimensions. For example, a 1D domain implies that the system is constant and extends to infinity in the two dimensions that are not explicitly simulated. Likewise, a 2D domain implies that the system is constant and extends to infinity in the third dimension. A full 3D domain may be recovered by sweeping a 2D axisymmetric domain about an axis of symmetry. Sensitivity analyses are necessary to justify the choice of dimensionality; for example, a 2D domain is justified if results from a 3D domain are sufficiently similar. The free surface should generally reflect topography or bathymetry, unless the goals of the modeling are qualitative. This implies the general need for 3D systems, except for special cases where topography and bathymetry are effectively flat. However, because deformation becomes less sensitive to topography and bathymetry with distance, a modeler may design a domain whose fidelity to topography and bathymetry decays with distance from the near-field. Such a strategy is aligned with the mesh design considerations discussed later.
3.2.2. Governing equations
The governing equations specify the physical behavior of the system. The equations of static elasticity are expressed here in index notation for a heterogeneous and isotropic material, using Einstein summation:
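A standard form of these equilibrium equations for a heterogeneous, isotropic, linear elastic medium (the notation below is ours) is:

```latex
\frac{\partial}{\partial x_j}\left[\lambda(\mathbf{x})\,\frac{\partial u_k}{\partial x_k}\,\delta_{ij}
+ G(\mathbf{x})\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right]
+ F_i = 0
```

where \(u_i\) is displacement, \(\lambda\) and \(G\) are the spatially variable Lamé parameters, \(\delta_{ij}\) is the Kronecker delta, \(F_i\) is the body force, and repeated indices are summed.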
If the elastic properties are taken outside of the spatial derivatives, then Eq. (2) reduces to the Navier formulation of Sadd and, with the appropriate loading, boundary, and initial conditions, describes the behavior of an HEHS domain that is commonly assumed in point (Figure 2) and dislocation deformation source models. However, the fundamental existence of an active volcano requires localized complexity, or a distribution of material properties, that an HEHS domain cannot represent.
These governing equations can be adapted to account for more complicated transient viscoelastic quasi-static behavior, in which the strain rate includes both elastic and viscous contributions.
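As one common example of such behavior, a Maxwell viscoelastic material sums elastic and viscous contributions to the deviatoric strain rate:

```latex
\dot{\varepsilon}_{ij} = \frac{\dot{\sigma}_{ij}}{2G} + \frac{\sigma_{ij}}{2\eta}
```

where \(\sigma_{ij}\) is the deviatoric stress, \(G\) is the shear modulus, and \(\eta\) is the viscosity; the characteristic relaxation time is \(\tau = \eta / G\).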
3.2.3. Boundary conditions
Boundary conditions constrain the constants of integration resulting from solutions to the governing equations (Eq. (2)) and define how the domain behaves along its boundaries. HEHS domains require a flat, stress-free surface and lateral and depth boundaries extending to infinity and having zero displacement (Figure 2). FEMs, on the other hand, allow great flexibility in boundary specifications. All external FEM boundaries (and internal boundaries, if they exist) require boundary condition specifications to constrain the displacement solution over the domain. These specifications include displacements, displacement derivatives, stress, or stress-dependent displacements. The default boundary condition of the FEM formulation is stress-free, which is appropriate for the land surface or seafloor. The lateral and depth extents of the domain are generally represented as zero-displacement boundaries placed sufficiently far from the deformation sources.
3.2.4. Initial conditions
The governing equations express displacement with respect to the initial conditions. For volcano deformation, the initial conditions are usually steady state. These conditions should not be interpreted as zero stress, strain, or displacement; instead, the steady-state condition implies time independence. The initial conditions for elastic systems are always steady state, as is self-evident from the absence of time in Eq. (2).
3.2.5. Loading conditions
The loading conditions can be interpreted as the impulse that causes the model to respond, or deform. Without loading, the model will remain in the state defined by the initial conditions. Loading may take the form of elastic dislocations, which can be implemented in FEMs via kinematic constraint equations. These dislocations are useful for simulating fault-parallel slip during earthquakes or opening modes of dikes and sills. Pressurized magma chambers may be simulated with a pressure applied normal to the chamber walls.
3.2.6. Calibration targets
Some data are used to constrain the model configuration. For example, a distribution of material properties may be estimated from a seismic tomography model of the volcano. However, other data, such as geodetic data that sample the surface deformation of a volcano, may be used to quantify some process of interest, such as the location and pressurization caused by an episode of magma intrusion. The magma intrusion is characterized by calibration parameters that are adjusted in a way that produces predictions that best account for the calibration targets. Rather than use all relevant data for calibration, it is important that the modeler reserve some of the data for the verification analyses discussed below. A pitfall of many inverse (calibration) analyses of geodetic data is that all the geodetic data are used in the inversion to estimate the calibration parameters. Consequently, modelers may report that the model with the calibrated parameters well-predicts these data. However, this method is faulty because the inverse analysis forces the suite of parameters that best fit the data. Thus it should be no surprise that the model well-predicts the data. However, the reliability of such a model is unknown because the model is not tested against information that is independent of the inverted data. Verification data are reserved for this independent test and are necessary to demonstrate a model’s reliability.
3.2.7. Calibration parameters
Calibration (adjustable) parameters are identified as part of the conceptual model. These parameters will be calibrated to the data that are identified as calibration targets. Decisions regarding the choice of calibration parameters are strongly controlled by the availability of the constraining information. Well-studied volcanoes having more data warrant more calibration parameters, whereas volcanoes having relatively sparse information are best simulated with fewer parameters.
An equally important consideration for defining the number of parameters is whether the model response is a linear or nonlinear response to a given parameter. A model having numerous linear calibration parameters may be much easier to calibrate via matrix inversion methods than a model having a few nonlinear calibration parameters that require sampling of a multidimensional nonlinear parameter space. For example, only a single execution of an FEM is necessary to characterize the model response to each linear parameter. For linear parameters, the resulting response is a linear function of the parameter impulse: doubling the impulse doubles the response. This is a convenient property of linear systems. However, numerous samples are required to adequately characterize the model response to a single nonlinear parameter. For example, doubling the depth of a magma chamber having a given pressurization does not double the resulting surface deformation. The number of samples required to characterize nonlinear parameters depends on the degree of nonlinearity, which is generally poorly known for sophisticated FEM domains. While experience plays a role in deciding on the specific suite of calibration parameters, the calibration process itself will quantitatively reveal the resolution and precision of the calibration parameters. Furthermore, testing and sensitivity analyses may be needed to guide the identification of calibration parameters.
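The contrast can be demonstrated with a point-source approximation of vertical surface displacement (our illustration; all parameter values below are arbitrary):

```python
def uz_point_source(r, d, dP, a, G, nu=0.25):
    """Vertical surface displacement above a pressurized point source in a
    homogeneous elastic half-space (Mogi-style approximation): radial
    distance r, depth d, pressure change dP, radius a, shear modulus G."""
    return (1 - nu) * dP * a**3 / G * d / (r**2 + d**2) ** 1.5

base = uz_point_source(r=1000.0, d=3000.0, dP=10e6, a=500.0, G=30e9)
# Linear parameter: doubling the pressure exactly doubles the response.
double_pressure = uz_point_source(1000.0, 3000.0, 20e6, 500.0, 30e9)
# Nonlinear parameter: doubling the depth does not scale the response simply.
double_depth = uz_point_source(1000.0, 6000.0, 10e6, 500.0, 30e9)
```

The pressure term can therefore be estimated with one forward run per parameter, whereas the depth axis must be sampled at many values.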
3.2.8. Sensitivity analyses
Some conceptual model components are specified a priori and justified based on analogous published studies or on the modeler’s experience. Other components, however, may have multiple options whose implications are not clearly understood. The modeler identifies sensitivity analyses to test how the model predictions respond to variations in model design. Almost any question regarding the model design, construction, execution, and interpretation posed during any phase of the protocol may be addressed with sensitivity analyses. Modelers should expect to conduct an exhaustive array of sensitivity analyses over the course of the protocol. Alternatively, if all the answers are known, then modeling is not necessary and the purpose must be re-defined.
3.3. Model assembly and execution
3.3.1. Solution method
The modeler identifies the specific mathematical method and precisely explains its components: governing equations, boundary conditions, loading specifications, and initial conditions. The mathematical methods used for analyses of deformation fall into two distinct classes. First, analytical solutions are closed-form equations that solve boundary value problems. A rich panoply of analytical solutions is available to simulate a variety of volcano deformation scenarios for the Earth’s lithosphere. Likewise, analytical solutions are available for static deformation due to surface loads applied to lithospheric plates [16, 33, 34]. However, these analytical solutions require simplifying assumptions that cannot account for known complexities of an active volcano. These assumptions may include some combination of half-space domains having either homogeneous or layered isotropic material properties. While such analytical solutions are attractive because of their low computational requirements, their true cost lies in their inability to account for the known complexity of an actively deforming volcano. It is well known that such complexities strongly influence predictions of volcano deformation (for example, see and references therein).
This chapter is concerned with FEMs, which allow broad flexibility in model design and simulation. A vast body of literature documents the capabilities and limitations of the finite element method [35, 36], which was first proposed in the mid-twentieth century. Accordingly, the formulation of FEMs is far beyond the scope of this chapter, and we refer the reader to thorough textbook treatments of FEM formulation. Our focus is instead on how to implement FEMs rather than formulate them. While there are initiatives to provide finite element code for deformation studies in Earth science (e.g., Computational Infrastructure for Geodynamics, https://geodynamics.org), there are also excellent proprietary general-purpose FEM codes, such as Abaqus and COMSOL. Such codes may be used to build FEMs that honor the conceptual model devised as described above in Section 3.2.
The finite element method divides the geometry of the simulated domain into an assembly of discrete elements. The mechanical behavior is defined piecewise over each element using constitutive mechanical relationships, and the resulting assembly is simultaneously integrated over the domain to minimize the global energy in the context of specified initial, boundary, and loading conditions. It is this piecewise formulation that bestows FEMs with powerful capabilities to simulate deformational behavior (satisfying Eq. (2)) over distributions of material properties in arbitrary geometric domains that represent the complex systems of active volcanoes (e.g., Figure 3). The solution of an FEM is inherently a matrix problem whose size depends on the number of elements that comprise the domain mesh: a greater number of elements translates to a larger matrix and, consequently, greater computing requirements. The quality of the FEM solution is a function of the quality of the mesh, which in turn is a function of the shape and size of the elements. The FEM approximation converges to the exact solution as the characteristic element size converges to zero.
This leads the modeler to a fundamental trade-off between the quality of the FEM approximation and the limitations of the available computing resources. For a given domain space, a larger number of smaller elements translates to a larger matrix problem that may become infeasible. Alternatively, a smaller number of large elements yields a smaller matrix problem that requires nominal computing resources, but at the cost of a relatively poor approximation to a solution of Eq. (2). The design of a suitable mesh is a fundamental challenge in designing FEM simulations. A common approach is to tessellate the near-field region of the domain with relatively small elements that become larger toward the far-field boundaries. This strategy accounts for the need for a more refined mesh in areas having relatively high strain gradients (cf. Eq. (2)), such as near the deformation source, compared to the far-field boundaries, which experience relatively low strain gradients.
Validation ensures that a numerical model is working properly and that the model design functions as intended. We can validate FEM configurations (e.g., mesh, boundary conditions, and loading conditions) by comparing model predictions to known benchmarks, which ideally come from exact analytical solutions. Available analytical solutions for displacement due to various loading strategies are limited to very specialized configurations, such as dislocations and pressurized cavities embedded in HEHS domains [13, 30]. It is because of these limitations that we turn to FEMs in the first place.
However, once we define our working FEM configuration, we can often simplify that configuration so that it remains close to the more complicated working FEM but is sufficiently similar to an HEHS configuration. For example, consider an FEM that simulates a pressurized sphere embedded in a domain having a distribution of material properties, a free surface having the geometric irregularity of topography, far-field boundary conditions of zero displacement, and initial conditions of equilibrium. We can modify the domain to have spatially constant elastic properties and a flat free surface. Such a model reasonably approximates the configuration of Mogi, and if FEM prediction errors are small, say <5%, then the FEM may be considered validated. If prediction errors are too large, then the FEM configuration is problematic. Ideally, any FEM of volcano deformation should be validated.
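A validation check of this kind can be sketched as follows (function names, the 5% tolerance, and the point-source benchmark form are our assumptions):

```python
import numpy as np

def mogi_uz(r, d, dP, a, G, nu=0.25):
    """Analytical vertical surface displacement for a pressurized point
    source embedded in an HEHS (Mogi-style benchmark): radial distance r,
    depth d, pressure change dP, radius a, shear modulus G."""
    r = np.asarray(r, dtype=float)
    return (1 - nu) * dP * a**3 / G * d / (r**2 + d**2) ** 1.5

def validated(fem_uz, r, d, dP, a, G, tol=0.05):
    """True if FEM predictions fall within a fractional tolerance of the
    analytical benchmark at every output point."""
    bench = mogi_uz(r, d, dP, a, G)
    return bool(np.all(np.abs(np.asarray(fem_uz) - bench)
                       <= tol * np.abs(bench)))
```

For instance, simplified-FEM predictions that sit 2% above the benchmark would pass, while predictions 10% off would flag a problematic configuration.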
For cases where the FEM configuration is not amenable to reconfigurations that adequately resemble the configuration of an analytical solution, the modeler must rely on sensitivity analyses to demonstrate the adequacy of the FEM approximation. For example, how do we know if a mesh is adequately refined for a model that cannot be validated against an analytical solution? First, we must benchmark the FEM formulation, if this has not already been done. Second, we can compare predictions from models having increasingly refined mesh configurations. Once the modeler determines that increased mesh refinement does not change the predictions, the mesh is adequate. We can devise such strategies of sensitivity analyses to examine the influence of a wide range of FEM configuration aspects, with the goal of demonstrating that the model is performing correctly. Predictions, estimations, and interpretations based on FEMs that are either not validated or lack sufficient sensitivity analyses of the domain configuration are suspect.
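The refinement test can be illustrated with a one-dimensional analog (our toy problem, not a volcano model): a linear-element FEM for -u'' = 1 on (0, 1) with fixed ends, whose error against the exact solution u(x) = x(1-x)/2 shrinks roughly fourfold each time the element size is halved:

```python
import numpy as np

def solve_bar(n):
    """Linear-element FEM for -u'' = 1 on (0, 1) with u(0) = u(1) = 0,
    using n uniform elements; returns node coordinates and displacements."""
    h = 1.0 / n
    K = np.zeros((n - 1, n - 1))          # stiffness matrix (interior nodes)
    for i in range(n - 1):
        K[i, i] = 2.0 / h
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -1.0 / h
    f = np.full(n - 1, h)                 # consistent load vector for unit forcing
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, f)
    return np.linspace(0.0, 1.0, n + 1), u

def midpoint_error(n):
    """Max error of the FEM interpolant at element midpoints versus the
    exact solution u(x) = x(1-x)/2."""
    x, u = solve_bar(n)
    xm = 0.5 * (x[:-1] + x[1:])
    um = 0.5 * (u[:-1] + u[1:])
    return float(np.max(np.abs(um - xm * (1 - xm) / 2)))
```

Comparing `midpoint_error(4)` and `midpoint_error(8)` mimics the refinement sensitivity test: when further refinement no longer changes predictions appreciably, the mesh is adequate.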
3.4.1. Forward model
The predicted three-component displacement, u = [ue, un, uu], maps to the predicted LOS displacement via dLOS = L · u, where L is the LOS unit vector.
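This projection is simple to implement; the sketch below (function name and frame ordering are our choices) assumes the displacement and look vectors are expressed in a local [east, north, up] frame:

```python
import numpy as np

def los_displacement(u_enu, look_enu):
    """Project a three-component [east, north, up] displacement vector onto
    the unit look vector L pointing from ground to satellite."""
    L = np.asarray(look_enu, dtype=float)
    L = L / np.linalg.norm(L)   # normalize in case L is not already a unit vector
    return float(np.dot(np.asarray(u_enu, dtype=float), L))
```

With this convention, pure uplift viewed straight down the look vector maps entirely into LOS displacement, while motion orthogonal to the look vector contributes nothing.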
3.4.2. Inverse model
The inverse model estimates the calibration parameters based on the data. These estimates include both a central tendency and a sense of variability or uncertainty for each calibration parameter (Figure 3). The matrix formulation that relates the calibration parameters to the predicted data is developed below for both linear and nonlinear parameters.
3.4.2.1. Linear calibration parameters
Linear calibration parameters comprise a vector that maps linearly to the predicted data, so the model response to each parameter can be characterized with a single FEM execution.
Eqs. (8) and (9) provide a mechanism for providing estimates of central tendency and uncertainties for linear calibration parameters, in a way that accounts for the uncertainties of the data. For linear inverse methods, the parameter PDFs are Gaussian distributions that are conveniently characterized with 95% confidence intervals.
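In matrix form, the weighted least squares estimate and its 95% intervals can be sketched as follows (notation is ours; G here denotes the design matrix of linear responses, not the shear modulus):

```python
import numpy as np

def linear_calibrate(G, d, sigma):
    """Weighted least squares m = (G^T W G)^-1 G^T W d with W = diag(1/sigma^2).
    Returns the parameter estimates (central tendency) and 95% intervals
    (1.96 times the parameter standard deviations from the covariance)."""
    G = np.asarray(G, dtype=float)
    d = np.asarray(d, dtype=float)
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    cov = np.linalg.inv(G.T @ W @ G)     # posterior parameter covariance
    m = cov @ (G.T @ W @ d)              # central-tendency estimates
    return m, 1.96 * np.sqrt(np.diag(cov))
```

Each column of the design matrix would, in practice, be generated by one FEM execution with a unit impulse of the corresponding linear parameter.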
3.4.2.2. Nonlinear calibration parameters
Nonlinear calibration parameters are embedded in the structure of the forward model, so characterizing them requires systematic sampling of the nonlinear parameter space.
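A minimal grid search over a toy point-source model (all names and values are our illustrations) shows the sampling burden that nonlinear parameters impose:

```python
import numpy as np

def uz(r, d, s):
    """Toy surface displacement: linear in source strength s, nonlinear in depth d."""
    return s * d / (r**2 + d**2) ** 1.5

def grid_search(r_obs, d_obs, sigma, depths, strengths):
    """Exhaustively sample the nonlinear (depth) and linear (strength) axes,
    keeping the minimum chi-square combination."""
    best_chi2, best_d, best_s = np.inf, None, None
    for dep in depths:
        for s in strengths:
            chi2 = float(np.sum(((d_obs - uz(r_obs, dep, s)) / sigma) ** 2))
            if chi2 < best_chi2:
                best_chi2, best_d, best_s = chi2, dep, s
    return best_chi2, best_d, best_s

r_obs = np.array([0.0, 1000.0, 2000.0, 4000.0])
truth = uz(r_obs, 3000.0, 1e12)          # synthetic "observations"
chi2, dep, s = grid_search(r_obs, truth, 1.0,
                           [2000.0, 3000.0, 4000.0],
                           [5e11, 1e12, 2e12])
```

Note that the cost grows multiplicatively with each added nonlinear axis, which is why hybrid schemes that solve the linear parameters directly for each nonlinear sample are attractive.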
The calibration process identifies the model and suite of calibrated adjustable parameters that best predict the calibration data. Thus, it is a circular argument to suggest a model is reliable if the calibrated model well-predicts the data, because the calibration process forces the model to do so. Instead, verification addresses the reliability of a model by confronting predictions with data that are independent of the calibration process. For example, an active volcano may have both InSAR and GPS data that characterize the deformation of the volcano. Instead of using both InSAR and GPS in the calibration process, one might calibrate to the InSAR and then quantitatively assess the calibrated model’s ability to predict the GPS data. A validated, calibrated, and verified model is necessary to provide reliable predictions.
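The calibration/verification split can be sketched with a toy linear model (our illustration; a real analysis would calibrate an FEM to InSAR data and verify against GPS data):

```python
import numpy as np

def calibrate_and_verify(x_cal, d_cal, x_ver, d_ver):
    """Calibrate a line to the calibration data, then score the calibrated
    model against independent verification data via RMS misfit."""
    x_cal = np.asarray(x_cal, dtype=float)
    Gc = np.column_stack([x_cal, np.ones_like(x_cal)])   # design matrix
    m, *_ = np.linalg.lstsq(Gc, np.asarray(d_cal, dtype=float), rcond=None)
    pred = m[0] * np.asarray(x_ver, dtype=float) + m[1]  # predict held-out data
    rms = float(np.sqrt(np.mean((np.asarray(d_ver, dtype=float) - pred) ** 2)))
    return m, rms
```

A small verification RMS supports the model's reliability; a large one signals that the calibrated model does not generalize beyond the data it was fit to.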
A well-crafted hypothesis provides an explanation of some phenomenon, predicts the outcome of future observations, and must include a chance of showing the hypothesis to be wrong. We give an analogous treatment to a model of volcano deformation: a reliable model provides an explanation of volcano deformation, predicts the outcome of future observations, and must include a chance of showing the model predictions to be wrong. The post-audit addresses the latter two requirements. A deformation model should be confronted with new geodetic or other relevant information as it becomes available. This confrontation should take the form of SHyT analyses. If the model fails, the modeler must revisit an earlier stage of the protocol, as appropriate. The post-audit requirement recognizes that all models are wrong and require continuous evaluation in an effort to converge on the truth.
The protocol discussed above outlines the components that should be included in a written modeling report. Section 3 is intended for use as a checklist for both authors and reviewers of modeling analyses for volcano deformation. While all components of the protocol are necessary, the validation step is particularly important because it is a mechanism that demonstrates the adequacy of numerous model design aspects. Unfortunately, validation is often missing from written reports of FEM-based analyses of volcano deformation. This oversight undermines the reliability of any estimates, results, interpretations, discussions, and conclusions based on the FEM analyses. Furthermore, few FEMs of volcano deformation include verification, which likewise undermines the reliability of model predictions. The general lack of validation and verification in reported FEM-based analyses of volcano deformation is sufficiently severe that we recommend that authors not submit manuscripts lacking validation, and that editors and reviewers reject any manuscript presenting FEM analyses of volcano deformation that lacks validation, verification, and a mechanism to confront the model with a post-audit.
This work is funded, in part, by NSF EAR 1316082 and NASA ESI 16-ESI16-0037.