Uncertainty is, beyond our awareness, the indisputable decision-maker. A meeting announced to start at 12:00 may implicitly be understood to start in the time interval 12:00–12:01. Hence, we should have arrived at 12:01, at the latest. Alternatively, the interval could be 12:00–12:05. The communicated uncertainty of the start of the meeting clearly governs our decision of when to arrive.
Results repeated within ±1% variation tell us nothing about the range of possible errors, or the uncertainty. An entirely deterministic algorithm has perfect precision. This is normally the situation in scientific modeling, before uncertainties are considered. The precision, usually thought of as the random variability for any given setup, is often re-interpreted as the total variability between known, different situations. That is a dubious strategy for assigning numbers to uncertainty. Without extensive consideration, it is generally impossible to assess whether or not the considered history is representative of the situation at hand.
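As a minimal sketch of this distinction (the model and the assumed true value below are hypothetical, not taken from the source), repeated evaluations of a deterministic algorithm show zero spread, yet reveal nothing about the actual error:

```python
import numpy as np

# Hypothetical deterministic model, evaluated repeatedly with the same input.
def model(x):
    return 2.0 * x + 1.0

runs = np.array([model(3.0) for _ in range(10)])
print(runs.std())                 # 0.0 -- perfect precision (repeatability)

# Perfect precision reveals nothing about the unknown systematic error:
true_value = 7.4                  # assumed truth, normally unknown
print(abs(runs[0] - true_value))  # ~0.4 -- the actual error stays hidden in repetition
```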
1.1. The goal
Uncertainty quantification targets the objective association of quantitative, traceable numbers representing uncertainty with the results of modeling, simulation, and calculation, by applying a well-documented and widely accepted method with known performance. Over the last 20–30 years or so, such a methodology has been established and widely recognized for measurement models, to the extent that a quantitative assessment of uncertainty is now almost always required for measurement apparatus. It is not yet so for scientific modeling, as the advanced computations in modern science and technology are generally far more difficult to analyze than measurement models. The uncertainty should predict the range of possible modeling errors, but without exaggeration. If so, modeling results and observations are directly comparable.
The mere evaluation of uncertainty is, however, not automatically of any value. Unwarranted assumptions about the uncertainties entering the evaluation are deceiving. Respecting what is not known is usually far more important than accurately describing what is known. Lack of knowledge tends to increase the uncertainty and often leads to
1.2. The preparation
In many respects, and for good reasons, methods of uncertainty quantification (UQ) are in their infancy. The need for viable and credible UQ methods is rapidly increasing with the growing utilization of advanced computations. The excess computational power at the disposal of UQ is unfortunately not increasing nearly as rapidly as the total resources. The reason is simple. Most computational models are discretized in space and time, truncated, or simplified by neglecting minor but complicated contributions. Such approximations cannot be traced to lack of knowledge or ability, but are often required to enable computation at all. As soon as the resources increase, eliminating these model reductions as far as possible is the most logical and desirable step. Weather forecasting illustrates the principle. Proper propagation of disturbances requires comparable resolution in space and time. Reducing the unit cell of analysis from 10 km × 10 km down to 5 km × 5 km to render more detailed forecasts increases the computational load no less than 2⁴ = 16 times. Even so, the unit cell will still be larger than desired. Additional resources will therefore mainly be spent on improvements of the deterministic model formulation in the future, leaving a relatively small fraction to be spent on improved UQ. However, with model samples that can be evaluated independently in different computer kernels, the challenge of improving UQ by additional sampling translates into an economic issue. It then does not compete with the advancement of computer architecture required to solve the dependent deterministic equations.
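As a back-of-the-envelope check of this scaling (a sketch, assuming all three spatial dimensions and the time step are refined by the same factor of two):

```python
# Sketch: cost scaling under uniform space-time refinement of a discretized model.
def cost_factor(refinement, space_dims=3, time_dims=1):
    """Load grows as refinement**(space_dims + time_dims) when every spatial
    dimension and the time step are refined by the same factor."""
    return refinement ** (space_dims + time_dims)

# Halving the horizontal cell from 10 km x 10 km to 5 km x 5 km, with comparable
# refinement of the vertical and temporal resolution:
print(cost_factor(2))  # 16, i.e. 2**4
```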
UQ combines several advanced mathematical disciplines and can be applied to a plethora of disparate applications, not only in technology and science but also in econometrics and risk assessment. This makes the subject exceedingly difficult to master, but also hard to understand and learn by studying examples.
Fundamentally, the statistics of uncertainty is of two distinct kinds:
- Epistemic uncertainty, i.e., unknown and unpredictable, but repeatable, systematic errors due to lack of knowledge and imperfect simplifications.
- Aleatoric uncertainty, i.e., non-repeatable errors of a statistical nature; typically the variable outcome of finite random draws (sampling variance).
Applications of UQ are typically concerned with epistemic uncertainty due to imperfect modeling, calculation, and signal processing, finite discretization (e.g., FEM), as well as inaccurate boundary and initial conditions. Mathematical statistics, on the other hand, focuses on aleatoric uncertainty due to finite statistical sampling. In the latter case, modeling has an entirely different character. The quantities of interest are usually not the result of a complex model implemented in a large computer program, but are directly observable, like the mean and variance of some measure of performance, frequency, length, or response time. In that case, the uncertainty due to the variability of small observation sets presumably dominates over model errors.
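A minimal sketch of aleatoric uncertainty in this sense (all numbers hypothetical): the sample mean of a small random ensemble varies from draw to draw, with a spread shrinking only as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)

# Repeated finite random draws from one population: each ensemble of size N
# yields a different sample mean (sampling variance).
N, repeats = 20, 1000
means = rng.normal(loc=10.0, scale=2.0, size=(repeats, N)).mean(axis=1)

print(means.std())       # observed spread of the sample mean across ensembles
print(2.0 / np.sqrt(N))  # theoretical sigma / sqrt(N), about 0.45
```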
1.4. Some common tools
Bayesian approaches make the difference between epistemic and aleatoric uncertainties almost invisible. Generalizing observed frequencies to also include other kinds of knowledge requires a shift of perspective from experimental testing to the observer and his/her degree of belief. Since our belief rarely is complete or totally absent, this still has the appearance of probability, but is conceptually different. Nevertheless, belief is the enabler for unifying epistemic and aleatoric uncertainty consistently within the same framework of UQ. Our belief often refers to a model's track record, or how it has performed in different situations over a long period of time. That may be difficult to assess quantitatively, but could in principle be done with multimodel calibration. Only independent data sets/model results must be included, as dependencies will severely underestimate the uncertainty. Worth emphasizing is also that any piece of knowledge, and not only observed frequencies, can contribute to this belief.
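As a minimal sketch of the belief perspective (prior and data values are hypothetical), a conjugate normal update treats prior knowledge and new observations on the same footing, whether the prior encodes observed frequencies or expert judgment:

```python
from math import sqrt

# Prior belief about a model bias b: b ~ N(mu0, sigma0**2)   (hypothetical numbers)
mu0, sigma0 = 0.0, 1.0
# One observation of the bias, with known observation spread sigma_y
y, sigma_y = 0.8, 0.5

# Conjugate normal-normal update: the posterior belief is again normal.
w = sigma0**2 / (sigma0**2 + sigma_y**2)
mu_post = mu0 + w * (y - mu0)
sigma_post = sqrt(1.0 / (1.0 / sigma0**2 + 1.0 / sigma_y**2))

print(mu_post, sigma_post)  # belief sharpened by the evidence: N(0.64, 0.45**2)
```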
Random sampling reduces the difference between the practices of UQ and mathematical statistics even further by introducing the sampling variance of finite random ensembles, making it a primary target to control in both fields. The basic motivation for random sampling is its simplicity, while a severe drawback is the added sampling variance. Much larger ensembles than the computational power allows for may be required. The obvious work-around is to substitute the full model with a much less demanding approximate surrogate model, which allows for excessive sampling. The surrogate is often a simple parameterized approximation, such as a polynomial response surface, fitted to a limited number of full-model evaluations.
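A minimal sketch of this work-around (the "full" model below is a cheap stand-in for what would normally be an expensive simulation): a polynomial surrogate is fitted to a handful of full-model runs and then sampled extensively:

```python
import numpy as np

rng = np.random.default_rng(1)

def full_model(x):
    # Stand-in for an expensive simulation; hypothetical functional form.
    return np.sin(x) + 0.1 * x**2

# Fit a cheap polynomial surrogate to a handful of full-model evaluations.
x_train = np.linspace(-2.0, 2.0, 9)
surrogate = np.poly1d(np.polyfit(x_train, full_model(x_train), deg=4))

# Excessive sampling of the surrogate to propagate an assumed input uncertainty.
x_samples = rng.normal(loc=0.5, scale=0.3, size=100_000)
y_samples = surrogate(x_samples)
print(y_samples.mean(), y_samples.std())  # cheap estimate of the output statistics
```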
Model calibration, or inverse UQ, is an inverse problem usually requiring an implicit solution. The high complexity of the full model normally prohibits exhaustive trial-and-error search as well as iterative descent methods like the Newton-Raphson method for minimizing the model prediction error. Just as for excessive random sampling, surrogate models are often utilized. In this case, though, the iterative character of many inverse solutions demands even higher computational efficiency. The maximum likelihood method is perhaps the most common approach to inverse propagation of uncertainty. Virtually all methods require complete statistical information. That is a major issue, since the available information is normally incomplete. Just as Bayesian estimation can be invalidated by faulty prior distributions, inappropriate assumptions about the statistics of the calibration data may invest far too much credibility in the calibrated model, making it likely to fail any validation test. Particularly detrimental is the ubiquitous assumption of uncorrelated calibration errors. Allowing for incomplete statistical information is therefore one of the most urgent tasks in the future development of model calibration, to remedy overconfident but faulty model predictions.
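A minimal sketch of the last point (all numbers hypothetical): for a Gaussian error model, maximum-likelihood calibration reduces to generalized least squares, and assuming uncorrelated calibration errors when they in fact correlate understates the parameter uncertainty:

```python
import numpy as np

# Calibrating a single bias parameter theta from y_i = theta + e_i, where the
# calibration errors e have covariance C (all numbers hypothetical).
y = np.array([1.1, 0.9, 1.2, 1.0])

def ml_estimate(C):
    """Gaussian maximum likelihood = generalized least squares for this model."""
    ones = np.ones_like(y)
    Cinv = np.linalg.inv(C)
    var_theta = 1.0 / (ones @ Cinv @ ones)  # parameter variance
    theta = var_theta * (ones @ Cinv @ y)   # maximum-likelihood estimate
    return theta, np.sqrt(var_theta)

sigma2 = 0.04
C_uncorr = sigma2 * np.eye(4)                                # assumed independent errors
C_corr = sigma2 * (0.8 * np.ones((4, 4)) + 0.2 * np.eye(4))  # error correlation of 0.8

print(ml_estimate(C_uncorr))  # ~(1.05, 0.10): overconfident if errors actually correlate
print(ml_estimate(C_corr))    # ~(1.05, 0.18): correlated errors do not average out
```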
- Errors are realized uncertainty. The uncertainty predicts the range of possible errors. Such errors are unknown; otherwise we would eliminate them. Their analysis requires a concept like uncertainty.