Introductory Chapter: Challenges of Uncertainty Quantification

Written By

Jan Peter Hessling

Published: 05 July 2017

DOI: 10.5772/intechopen.69517

From the Edited Volume

Uncertainty Quantification and Model Calibration

Edited by Jan Peter Hessling

1. Preamble

Uncertainty is, often beyond our awareness, our indisputable decision-maker. A meeting announced to start at 12:00 may implicitly be understood to start in the time interval 12:00–12:01. Hence, we should arrive by 12:01 at the latest. Alternatively, the interval could be 12:00–12:05. The communicated uncertainty of the start of the meeting is clearly ambiguous: to someone accustomed to analog clocks read in 5-minute steps, the latter is plausible, whereas to someone used to digital clocks the former makes more sense. A meeting scheduled at 12, however, means something quite different to most of us. In that case, it may start as late as 12:30. The invisible practice in everyday life is to communicate uncertainty through a vaguely perceived precision, suggesting random variability. Precision is more often than not confused with accuracy, or systematic deviation (see Figure 1).

Figure 1.

Illustrations [1] of precision (left) and accuracy (middle) of four samples (●), and corresponding schematic probability density for the population of all possible outcomes (right), often utilized in uncertainty quantification.

Results repeated within ±1% variation tell nothing about the range of possible errors, or uncertainty. An entirely deterministic algorithm has perfect precision. This is normally the situation in scientific modeling, before uncertainties are considered. The precision usually thought of as random variability for any given setup is then often re-interpreted as the total variability between known, different situations. That is a dubious strategy for assigning numbers to uncertainty: without extensive consideration, it is generally impossible to assess whether or not the considered history is representative of the current problem. For instance, errors in modeling fluid flow velocities and electromagnetic fields at nearly singular points in space or time, such as sharp corners, or deficiencies in describing collective phenomena like resonances, are usually far too complex to be understood by studying examples only. An extensive analysis based on a large, or even infinite, set of hypothetical variations is required. The widely practiced intuitive assessment of uncertainty exemplified above, based on experience and communicated as precision, jeopardizes decision-making: uncertainties of this kind are subjective and invite different interpretations. Invalid uncertainty assessment is also a major cause of false rejection of modeling as a general tool, depriving us of all means of making educated guesses, through scientific model prediction, about important matters like future weather conditions and the risk of major nuclear power accidents.
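To make the distinction illustrated in Figure 1 concrete, the following minimal sketch (a purely hypothetical numerical illustration, not taken from this chapter; the offset and spreads are arbitrary) contrasts a result that is precise but inaccurate with one that is accurate but imprecise. The observed spread says nothing about the systematic offset.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

# Precise but inaccurate: small spread around a systematic offset (bias)
precise_biased = true_value + 0.5 + rng.normal(0.0, 0.01, size=1000)

# Accurate but imprecise: no systematic offset, but a large spread
accurate_noisy = true_value + rng.normal(0.0, 0.5, size=1000)

for name, x in [("precise but biased   ", precise_biased),
                ("accurate but imprecise", accurate_noisy)]:
    print(f"{name}: mean error = {x.mean() - true_value:+.3f}, "
          f"spread (std) = {x.std(ddof=1):.3f}")
```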

1.1. The goal

Uncertainty quantification aims at the objective association of quantitative, traceable numbers representing uncertainty with modeling, simulation, and calculation results. Over the last 20–30 years or so, such a methodology, applying well-documented and widely accepted methods with known performance, has been established and widely recognized for measurement models, to the extent that a quantitative assessment of uncertainty is now almost always required for measurement apparatus. It is not yet so for scientific modeling, as the advanced computations in modern science and technology generally are far more difficult to analyze than measurement models. The uncertainty should predict the range of possible modeling errors, but without exaggeration. If so, modeling results and observations are consistent, which means no more than that they are not contradictory. Expressed in terms of conventional mathematical statistics as developed by Fisher [2] and Popper [3], the hypothesis that the model accurately reproduces observations cannot be falsified. These perspectives, outlined in the early 20th century while studying, e.g., crop growth in agriculture and demography, still hold well for modern uncertainty quantification addressing complex applications, such as nuclear power generation, fatigue testing, etc. Mathematical statistics is indeed the genesis of most uncertainty quantification approaches and techniques utilized today.

The mere evaluation of uncertainty is, however, not automatically of any value. Unwarranted assumptions about the uncertainties entering the evaluation are deceptive. Respecting what is not known is usually far more important than accurately describing what is known. Lack of knowledge tends to increase the uncertainty and often leads to ambiguity, an important ingredient in qualitative science. In the quantitative science addressed here, though, any lack of well-defined information is normally countered with bold simplifying assumptions, simply because current methodologies require complete knowledge. Closing the gap of ambiguity in this way reflects willful ignorance [4]. Therefore, it is important to consider alternative hypotheses of uncertainty. For instance, parameter correlations are very rarely known, but nevertheless have a major influence on the evaluated uncertainty. In this respect, it is important to view the model with all of its parameters as one composite unit. The hypothesis touched upon above, stating that the model reproduces observations, implies that propagated parameter errors combine coherently, according to the behavior of the deterministic model equations. Correlations are thus essential components of uncertainty, as they may attenuate or amplify contributions from different uncertain parameters by means of destructive or constructive interference. If such effects are not taken into account, uncertainty quantification may evolve into con artistry.
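The interference effect can be made explicit with the standard formula for the variance of a sum of two correlated contributions. The sketch below uses hypothetical sensitivities and standard uncertainties (not taken from this chapter) to show how the correlation coefficient switches the combination between destructive and constructive.

```python
import numpy as np

def combined_std(a, b, sx, sy, rho):
    """Standard uncertainty of u = a*x + b*y for correlated parameters x and y."""
    var = (a * sx) ** 2 + (b * sy) ** 2 + 2 * a * b * rho * sx * sy
    return np.sqrt(var)

sx = sy = 1.0      # standard uncertainties of the two parameters (hypothetical)
a, b = 1.0, 1.0    # sensitivities of the model output (hypothetical)

for rho in (-1.0, 0.0, 1.0):
    print(f"rho = {rho:+.1f}: combined std = {combined_std(a, b, sx, sy, rho):.2f}")
# rho = -1 -> 0.00 (destructive), rho = 0 -> 1.41, rho = +1 -> 2.00 (constructive)
```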

1.2. The preparation

In many respects and for good reasons, methods of uncertainty quantification (UQ) [5] are in their infancy. The need for viable and credible UQ methods is rapidly increasing with the growing utilization of advanced computations. The excess computational power at our disposal for UQ is unfortunately not increasing nearly as rapidly as the total resources. The reason is simple. Most computational models are discretized in space and time, truncated, or simplified by neglecting minor but complicated contributions. Such approximations cannot be traced to lack of knowledge or ability, but are often required to enable computation at all. As soon as the resources increase, eliminating these model reductions as far as possible is the most logical and desirable step. Weather forecasting [6] illustrates the principle. Proper propagation of disturbances requires comparable resolution in space and time. Reducing the unit cell of analysis from 10 km × 10 km down to 5 km × 5 km to render more detailed forecasts increases the computational load no less than 2⁴ = 16 times. Even so, the unit cell will still be larger than desired. Additional resources will therefore mainly be spent on improvements of the deterministic model formulation in the future, leaving a relatively small fraction to be spent on improved UQ. However, with model samples that can be evaluated independently on different computer kernels, the challenge of improving UQ by additional sampling translates into an economic issue. It then does not compete with the advancement of computer architecture required to solve the dependent deterministic equations.
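The factor of 16 quoted above is consistent with the following rough accounting (an assumption for illustration, not spelled out in this chapter): halving the grid spacing in both horizontal directions and in the vertical, and halving the time step to keep the numerical scheme stable,

2 (east-west) × 2 (north-south) × 2 (vertical) × 2 (time step) = 2⁴ = 16.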

UQ combines several advanced mathematical disciplines and can be applied to a plethora of disparate applications, not only in technology and science but also in econometrics and risk assessment. This makes the subject exceedingly difficult to master, but also hard to understand and learn by studying examples. Physical modeling usually provides the basis for setting up the underlying deterministic model. Major simplifications as well as coarse assumptions are common. For instance, the Navier-Stokes equations of fluid flow may require both physical and mathematical idealizations, like continuous media and differentiability, as well as the neglect of higher-order turbulence contributions. Already at this first stage, contributions to uncertainty are building up. Finite element methods (FEMs) discretize, in space and time, the physical fields generated by fixed (solids) or moving (fluids) matter. Signal processing techniques such as temporal sampling, digital filtering, and state space formulations for Kalman filtering and model predictive control convert infinite-dimensional continuous physical differential models into finite systems of difference equations, suitable for computers. Numerical methods then provide the means for solving these equations with maximum efficiency and minimum error, preferably with known error estimates, which may be re-phrased in terms of uncertainty in the subsequent UQ. Knowledge of computer science is needed for efficient programming and for maintaining numerical precision throughout the calculation, but also for managing large, complex software modules. The studied system may also exhibit critical properties. The chaotic nature of weather forecast models is one example. More than 50 years ago, Lorenz assessed an absolute upper prediction horizon of about two weeks [6]. Explained by “the butterfly effect” [6, p. 206], this limit is still believed to be accurate: even the slightest possible change in initial conditions may render a monumental change in the forecast after some time, which clearly is a major complication for credible UQ. Understanding these preparatory stages is crucial, as they accommodate many sources of uncertainty.
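As a small, self-contained illustration of this sensitivity to initial conditions, the sketch below integrates the classical Lorenz-63 system (used here as a stand-in for a full forecast model; the parameter values, step size, and perturbation are textbook choices, not taken from this chapter) from two initial conditions differing by one part in 10⁸ and tracks their separation.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # minute perturbation of the initial condition

for i in range(1, n_steps + 1):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if i % 500 == 0:
        print(f"t = {i * dt:5.1f}: separation = {np.linalg.norm(a - b):.2e}")
# The separation grows roughly exponentially until it saturates at the size of the attractor,
# after which the two "forecasts" are effectively unrelated.
```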

1.3. Overview

Uncertainty quantification can now be addressed. Statistics of all kinds of uncertain quantities are then propagated in two possible directions, as explained in Figure 2 (adapted from Ref. [7]).

Figure 2.

Uncertainty quantification (UQ) and model calibration, or inverse UQ. Identifying or matching the model against identification data often requires simplified surrogate models. The model should be checked or validated before it is utilized for prediction comprising a best estimate and its uncertainty.

Fundamentally, statistics of populations, rather than of finite samples drawn from them, are propagated, which avoids sampling variance, the principal complication addressed in mathematical statistics through statistical inference [2]. There are thus two generic types of uncertainty¹, to some extent corresponding to accuracy and precision, respectively:

  • Epistemic uncertainty, i.e., unknown and unpredictable systematic but repeatable errors due to lack of knowledge and imperfect simplifications.

  • Aleatoric uncertainty, i.e., non-repeatable errors of a statistical nature. Typically, the variable outcome of finite random draws (sampling variance).

Applications of UQ are typically concerned with epistemic uncertainty due to imperfect modeling, calculation, and signal processing, finite discretization (FEM), and inaccurate boundary and initial conditions, etc. Mathematical statistics, on the other hand, focuses on aleatoric uncertainty due to finite statistical sampling. In the latter case, modeling has an entirely different character. The quantities of interest are usually not the result of a complex model implemented in a large computer program but rather directly observable, like the mean and variance of some measure of performance, frequency, length, or response time. In that case, the uncertainty due to the variability of small observation sets presumably dominates over model errors.
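A minimal numerical sketch of the two types (hypothetical numbers, not from this chapter): the aleatoric standard error of a sample mean shrinks as more draws are collected, whereas a fixed, repeatable model offset, standing in here for an epistemic error, does not.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 5.0
model_bias = 0.2          # hypothetical epistemic error: a fixed, repeatable offset

for n in (10, 100, 10_000):
    draws = rng.normal(true_mean, 1.0, size=n)     # noisy observations (aleatoric)
    estimate = draws.mean() + model_bias           # estimate carrying the fixed offset
    aleatoric = draws.std(ddof=1) / np.sqrt(n)     # standard error of the mean
    print(f"n = {n:6d}: aleatoric std. error = {aleatoric:.3f}, "
          f"epistemic offset = {model_bias:.3f}, total error = {estimate - true_mean:+.3f}")
# More sampling suppresses the aleatoric part only; the epistemic offset remains.
```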

1.4. Some common tools

Bayesian approaches [8] make the difference between epistemic and aleatoric uncertainties almost invisible. Generalizing observed frequencies to also include other kinds of knowledge requires a shift of perspective from experimental testing to the observer and his or her degree of belief. Since our belief rarely is complete or totally absent, this still has the appearance of probability, but is conceptually different. Nevertheless, belief is the enabler for unifying epistemic and aleatoric uncertainty consistently within the same framework of UQ. Our belief often refers to a model’s track record, or how it has performed in different situations over a long period of time. That may be difficult to assess quantitatively, but could in principle be done with multimodel calibration. Only independent data sets and model results may then be included, as dependencies will severely underestimate the uncertainty. Worth emphasizing is also that any piece of prior information available before the uncertainty is quantified must reflect some kind of knowledge or experience. Any reduction of uncertainty due to a guessed prior is purely hypothetical and deceptive.
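A minimal conjugate (beta-binomial) sketch of Bayesian updating, with hypothetical data and priors not taken from this chapter, illustrates both the mechanism and the warning above: an overly sharp guessed prior dominates the data and deceptively shrinks the posterior uncertainty.

```python
from math import sqrt

def beta_summary(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# Hypothetical setting: belief about a failure probability, updated by test outcomes.
failures, successes = 3, 17    # assumed observations, for illustration only

for label, (a0, b0) in [("vague prior Beta(1, 1)             ", (1, 1)),
                        ("sharp guessed prior Beta(2, 200)   ", (2, 200))]:
    mean, std = beta_summary(a0 + failures, b0 + successes)
    print(f"{label}: posterior mean = {mean:.3f}, posterior std = {std:.3f}")
# The sharp, merely guessed prior overrides the data and understates the uncertainty.
```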

Random sampling reduces the difference between the practices of UQ and mathematical statistics even further by introducing sampling variance of finite random ensembles, making it a primary target to control in both fields. The basic motivation for random sampling is its simplicity, while a severe drawback is the added sampling variance. Much larger ensembles than the computational power allows for may be required. The obvious work-around is to substitute the full model with a much less demanding approximate surrogate model, which allows for excessive sampling. The surrogate is often affine, i.e., linear in the uncertain parameters and obtained with traditional linear regression. Aleatoric sampling errors are then exchanged for presumably smaller epistemic ones. Alternatively, the sampling variance may be reduced by imposing deterministic components on the random sampling methodology, like stratified sampling, perhaps combined with Latin hypercube [9] or orthogonal sampling exclusion rules. It is indeed possible to extend these amendments of determinism into entirely deterministic sampling, as in the unscented Kalman filter [10]. The sampling variance is then completely(!) exchanged for sampling errors due to imperfections of the reproducible sampling rule [11]. Just knowing that the modeling error is entirely reproducible is of great value when differential changes are of primary interest, as in product development.
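The variance-reducing effect of stratification can be illustrated with a minimal sketch (the two-parameter model and all numbers are hypothetical, not from this chapter), comparing the run-to-run spread of a plain Monte Carlo mean estimate with that of a Latin hypercube estimate using the same ensemble size.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Hypothetical, mildly nonlinear model of two uncertain parameters in [0, 1)."""
    return np.exp(0.3 * x[:, 0]) + x[:, 1] ** 2

def latin_hypercube(n, d):
    """n samples in d dimensions: one random draw per stratum, strata permuted per dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

n, repeats = 50, 500
plain = [model(rng.random((n, 2))).mean() for _ in range(repeats)]
lhs = [model(latin_hypercube(n, 2)).mean() for _ in range(repeats)]

print(f"plain Monte Carlo: spread of the estimated mean = {np.std(plain):.4f}")
print(f"Latin hypercube:   spread of the estimated mean = {np.std(lhs):.4f}")
# Stratification typically reduces the sampling variance for the same number of model runs.
```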

Model calibration, or inverse UQ, is an inverse problem usually requiring an implicit solution. The high complexity of the full model normally prohibits ubiquitous trial-and-error searches and iterative descent methods like the Newton-Raphson method [12] for minimizing the model prediction error. Just as for excessive random sampling, surrogate models are often utilized. In this case, though, the iterative character of many inverse solutions requires even higher computational efficiency. The maximum likelihood method is perhaps the most common approach to inverse propagation of uncertainty. Virtually all methods require complete statistical information. That is a major issue, since available information normally is incomplete. Just as Bayesian estimation can be invalidated by faulty prior distributions, inappropriate assumptions about unknown calibration data statistics may invest far too much credibility in the calibrated model, making it likely to fail any validation test. Particularly detrimental is the ubiquitous assumption of uncorrelated calibration errors. Allowing for incomplete statistical information is therefore one of the most urgent tasks to address in the future development of model calibration, to remedy overconfident, faulty model predictions.
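The effect of wrongly assuming uncorrelated calibration errors can be sketched for the simplest possible case, a straight-line model fitted by least squares (a hypothetical setting, not this chapter's method): the parameter covariance claimed under independence is compared with the covariance of the same estimator when the errors actually follow an AR(1)-like correlation.

```python
import numpy as np

# Hypothetical calibration setting: y = a + b*x fitted to data whose errors are
# strongly, positively correlated in the ordering of the data points.
n = 30
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])                 # design matrix

sigma, rho = 0.1, 0.9
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = sigma ** 2 * rho ** lags                         # AR(1)-like error covariance

XtX_inv = np.linalg.inv(X.T @ X)
cov_claimed = sigma ** 2 * XtX_inv                   # parameter covariance if errors were independent
cov_actual = XtX_inv @ X.T @ C @ X @ XtX_inv         # covariance of the same estimator under correlation

print(f"claimed slope std (independence assumed): {np.sqrt(cov_claimed[1, 1]):.3f}")
print(f"actual slope std (correlated errors):     {np.sqrt(cov_actual[1, 1]):.3f}")
# Ignoring the correlations makes the calibrated slope look several times more certain than it is.
```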

References

  1. Images retrieved from Wikimedia Commons, the Free Media Repository [Internet]. Available from: https://commons.wikimedia.org [Accessed: 27-04-2017]
  2. Fisher RA. Statistical Methods, Experimental Design, and Scientific Inference. Oxford: Oxford University Press; 1990
  3. Popper K. The Logic of Scientific Discovery. London and New York: Routledge Classics; 2002
  4. Weisberg HI. Willful Ignorance: The Mismeasure of Uncertainty. Hoboken, NJ: John Wiley & Sons; 2014
  5. Smith RC. Uncertainty Quantification: Theory, Implementation, and Applications. Vol. 12. SIAM - Society for Industrial and Applied Mathematics; 2013
  6. Kalnay E. Atmospheric Modeling, Data Assimilation and Predictability. Cambridge: Cambridge University Press; 2003
  7. Hessling JP. Identification of complex models. SIAM/ASA Journal on Uncertainty Quantification. 2014;2(1):717-744
  8. Sivia D, Skilling J. Data Analysis: A Bayesian Tutorial. Oxford: Oxford University Press; 2006
  9. Helton JC, Davis FJ. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering and System Safety. 2003;81(1):23-69
  10. Julier SJ, Uhlmann JK. Unscented filtering and nonlinear estimation. Proceedings of the IEEE. 2004;92(3):401-422
  11. Hessling JP. Deterministic sampling for propagating model covariance. SIAM/ASA Journal on Uncertainty Quantification. 2013;1(1):297-318
  12. Ypma TJ. Historical development of the Newton–Raphson method. SIAM Review. 1995;37(4):531-551

Notes

  1. Errors are realized uncertainty. The uncertainty predicts the range of possible errors. Such errors are unknown, otherwise we would eliminate them. Their analysis requires a concept like uncertainty.
