Open access peer-reviewed chapter

Application of Quantum Physics Assumptions for Risk Assessment

Written By

Marek Rozycki

Submitted: 25 September 2019 Reviewed: 10 December 2019 Published: 14 January 2020

DOI: 10.5772/intechopen.90825

From the Edited Volume

Risk Management and Assessment

Edited by Jorge Rocha, Sandra Oliveira and César Capinha


Abstract

Risk assessment reflects the assumptions of the people performing it. Its usefulness may therefore be limited, because in principle it is difficult to predict events we are not aware of. A possible solution to this problem is to apply the theory of inertia and the assumptions of quantum physics to the description of future phenomena. The aim of this study is to present the experience gained from attempts at risk assessment using quantum physics assumptions. The application of these new assumptions to risk assessment for road infrastructure supports the thesis that a change in the approach to risk assessment is necessary in all areas related to human activity.

Keywords

  • risk assessment
  • quantum physics assumptions

1. Introduction

With the formulation of relativity and quantum theory, we gained new tools with which to explain the world. It appears that the laws of quantum mechanics explain the processes governing the deepest layers of reality, operating at the level of the smallest particles of our world. Through experiments and analyses, we can assume that these laws explain phenomena on both the micro- and the macroscale. Rules different from those of classical physics explain the heretofore unexplained and, crucially, allow us to design new experiments within a world entirely unavailable to our senses. Analyses of the visible outcomes of interactions between components, that is to say of events in the real world, return little information about the structure of the observed reality. By interpreting outcomes (events) instead of images of detailed relations, we project our expectations; the mathematical structures we use to explain outcomes are merely an attempt to fit our model to reality, and we can only determine the model's applicability when, and insofar as, the observable reality confirms our reasoning. Nevertheless, the fact remains that mathematics allows us to draw conclusions about the rules governing the functioning of the world. The more appropriate the model we use, the better our results will be. If reality is like music, then the tools of analysis are our score [1]. The notes we use are the substance of the music, but not the music itself. Similarly, in analysis, the model of reality we use is the score, while reality itself is created from many unspecified or loosely defined components. This may lead us to conclude that reality can only perform a score once it has been written. Such an anthropocentric approach leads us to believe that we can influence and shape events. This is especially clear in risk analysis. We usurp the right to assess risk and event probability and expect that reality will perform our freshly composed score. We must, however, allow that mathematical analysis may try to impose its assumptions on reality, which may or may not succeed (e.g. in the management methods used in banks and insurance companies).

The job of the risk assessor at such institutions is to draw conclusions about the structure of the world based on mathematical structures. In this context, the ideas of quantum physics—in particular, the concept of the state of an object in a Hilbert space [2], the phase space and the quantum system—may help our analyses, allowing a fuller understanding of reality.

In light of the present investigation, we can conclude that quantum mechanics does not apply to individual events but is a theory of interactions between groups of events (composition series) whose behaviour observably conforms to the laws of statistics. All measurement attempts made within a quantum system will, in essence, be performed on groups of identically prepared objects. These objects may realise every possible state. The results of these measurement attempts come in the form of probability distribution of all possible measurement results. In line with this interpretation, we can focus strictly on looking for the probability distribution and ignore individual events.

Risk assessors have been using this interpretation for some time now. The theory of inertia [3], as an example, asserts that the probabilities of a given state occurring and not occurring are equal: at any given moment, a given event may occur or not. If we assign a value of +1 to the state occurring and a value of −1 to the state not occurring, the resulting distribution has a mean of 0 and a standard deviation of 1. Occurrence and nonoccurrence are equally probable, which means that at any given moment we can expect a given state both to occur and not to occur, just as described by Schrödinger in his famous 1935 thought experiment [4]. Each object interacts with its environment, and at the quantum level this interaction can be described consistently with the second law of thermodynamics, i.e. with quantum decoherence, where every system moves towards increased entropy if it is deprived of the energy needed to preserve its current state. The identified risk potential is an expression of quantum entanglement and exhibits a tendency towards equalisation (entanglement reduction), towards an irreversible change of interference between the system and the environment. Risk, therefore, should be understood as the measure of the loss of information about a given system as a result of its interaction with the environment. In this context, it is imperative to attempt a quantum-mechanical analysis of risk.
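
As a quick numerical check of the coding just described (an illustrative script of my own, not part of the original study), the ±1 coding of "occurs"/"does not occur" with equal probabilities indeed yields a mean of 0 and a standard deviation of 1:

    import numpy as np

    # Hypothetical coding assumed in the text: +1 for "state occurs",
    # -1 for "state does not occur", each with probability 0.5.
    values = np.array([+1.0, -1.0])
    probabilities = np.array([0.5, 0.5])

    mean = np.sum(values * probabilities)                        # 0.0
    std = np.sqrt(np.sum(probabilities * (values - mean) ** 2))  # 1.0
    print(mean, std)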


2. Quantum mechanics in interpretation of phenomena

The desire to understand the world and to describe it in terms of mathematical formulae is as old as the human desire to dominate it. Each age has tried to explain observable correlations as causes and effects, in a manner peculiar to itself. In ancient Greece, atomistic theories of the likes of Democritus, according to which matter consisted of final, eternal, unchanging and indivisible atoms, clashed with the continuous theories of Aristotle and others, who believed that matter was fluid and ever-changing. Such theoretical clashes across the ages have always encouraged further investigations into our questions about the world, leading to new questions, new theories and new clashes. This progress of human knowledge sped up with the industrial revolution and the creation of more efficient tools of observation. When electrons were discovered, the structure of previously indivisible atoms was called into question. The discovery of the atomic nucleus led to the formulation of the planetary model of the atom, according to which nearly all the mass and positive charge are concentrated in the nucleus, around which negatively charged electrons orbit. During his work on black-body radiation, Max Planck formulated the hypothesis of quanta of energy, when existing analyses based on classical physics proved ineffectual. Further discoveries and ideas followed; one of these was wave-particle duality, which posits that the entire universe behaves consistently with laws governing either the behaviour of waves or that of particles. This idea, and the results of many experiments which it enabled (including the work of Wheeler [5]), suggests that the observed structures change as the result of observation. So, it is probable that a given event's result, which we wish to observe, will occur depending on whether and how we observe the event. The manner in which we choose to measure the “present moment” will influence that which caused the present event. Ideas like these are far removed from those of Aristotle and Democritus and require a completely fresh gaze. The premises of quantum physics appear to be verifiable on both the micro- and the macroscale, which raises the question: if quantum physics applies at the level of elementary particles, will it also apply to the modelling of real events? If the answer is yes, there should be no objection to the use of analytical tools proper to quantum physics in analysing events such as those with which risk analysts are concerned. The language of mathematics seems to be the only tool precise enough to describe the subtle and rich structures of reality. Structures transparent to human senses are revealed in mathematics, and numbers allow the identification of their states and properties (e.g. their minima, maxima or functions). Let us attempt a certain simplification. The classical, Newtonian understanding of phenomena is determined by the notions of motion, point particle and rigid body. Classical mechanics relies on the premise that there exist objective, quantifiable objects, in motion along specific trajectories, possessing other specific properties, such as position, mass or charge. Elements of a system interact, as do point particles, in strictly defined ways. Contacts and collisions occur, whether directly or indirectly through fields (e.g. of energy, temperature, etc.). These observations lead to the conclusion that the properties of a physical situation can be defined by absolute terms and numbers.
We identify laws of cause and effect and conjecture that, under identical conditions, objects will behave identically. We conclude that all objects in the world are determinate and their behaviour is strictly defined and uniform. In the standard (classical) probability measure of a given state occurring, a family of events is characterised as follows:

There exists a sample space Ω and a family Z of subsets of the sample space Ω, called events. The following premises are true:

  1. ∅ (empty set) and Ω (sample space) are events.

  2. If A is an event, then A’ = Ω - A also is an event.

  3. If A and B are events, the sum of sets A∪B is also an event.

    Probability is the function P: Z → [0, 1], where:

  4. P(Ω) = 1.

  5. If A and B are events and the intersection A ∩ B = ∅, then

P(A ∪ B) = P(A) + P(B)   (E1)

It follows from premises (1), (2) and (3) that if A and B are events, A ∩ B is also an event.
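
As an illustration only (the toy sample space and the events A and B are my own assumptions, not taken from the chapter), premises (1)-(5) can be checked mechanically on a small finite sample space:

    # Toy sample space with four equally likely outcomes (assumed purely for illustration).
    omega = frozenset({"a", "b", "c", "d"})

    def prob(event):
        # Classical probability on a finite space: relative size of the event.
        return len(event) / len(omega)

    A = frozenset({"a", "b"})
    B = frozenset({"c"})

    assert prob(omega) == 1.0                    # premise (4): P(Omega) = 1
    assert prob(omega - A) == 1.0 - prob(A)      # complement A' = Omega - A is an event
    assert A & B == frozenset()                  # here A and B are disjoint
    assert prob(A | B) == prob(A) + prob(B)      # premise (5), Eq. (E1)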

This model works in general but fails in particular, detailed analyses where it turns out that it is impossible to define generic behaviours of such elementary particles as electrons.

We would reach similar conclusions in analysing human behaviours. We cannot predict human reactions to specific stimuli; we can only predict the probability that the person will behave in this or that specific way. In creating models of reality, we try to describe complicated reactions which consist both of defined and undefined situations. In effect, individual points of such a system become a blur. They become less sharply defined as discrete entities and tend towards the state which is the result of our analyses. Whilst we can define, as an example, the length of a road tunnel using classical physics, the result of the measurement of the total velocity of the motion of vehicles in the tunnel, or the time needed to evacuate people from the tunnel in an emergency, will depend on the chosen method of measurement. Defining the method of measurement is key in assessing the safety of a tunnel, both for particular tunnels and all tunnels in general. This in turn creates the risk that, in defining an object, one can describe its properties so that the correct (i.e. well defined) response to the question about its state will be elicited only if the object is in the ground state (assumed by the analysis operator). To return to the example of a road tunnel, the very question whether the tunnel is “safe” is, in effect, a question not about the state of the object but the properties of the analysis operator. The response to such a question will be incidental and unreliable. The basic problem here appears to be expressing the measurement numerically. If we do not use a precise measurement method, the result will always be a blur.

It remains a fact that there are such values, or their relations, which can never be specified precisely (expressed as discrete numbers) at the same time, for example, the number of people at risk in a particular road tunnel emergency. We can assume the minimum, maximum or mean (expected) value, but we can only arrive at a probability of the value of our prediction.

It could be argued that an object in a particular state has the unique ability to respond to the demands of its environment and display a particular property. When its state conforms with the state expected by the operator, the object can return an unambiguous response to the operator’s “question”. In the case of the road tunnel, its state usually does not conform with the state expected by the operator (questioning the tunnel’s safety), and as a result the question generates a random response out of a set of the operator’s ground states. Of the many states identified by the analysis operator, the current state of the object may be constructed, but each of these states may reveal itself as the response to the operator’s question, without the possibility of predicting which. In other words, the complete state of the object fractures into multiple ground states of the operator, and the object picks a state haphazardly and returns it to the operator as its response.

These observations are consistent with the tenets of the Copenhagen interpretation of quantum mechanics. Accordingly, we can set forth the following theses:

  1. Every system is fully described by the wave function Ψ, which represents the observer's understanding of the system (Heisenberg).

  2. A description of nature is probabilistic. The probability of an event is the squared modulus of the wave function associated with the event (Max Born).

  3. We cannot know the values of all properties of a system at a given time; imprecise properties may be expressed as probabilities (Heisenberg’s uncertainty principle).

  4. When the size of the system approaches macroscale, the quantum-mechanical description ought to yield results consistent with the results from a classical treatment (Bohr’s and Heisenberg’s correspondence principle).

Any attempt to model future events within a given system using classical probability modelling must produce questions inadequate to the responses they are meant to elicit, as we are unable to predict all variants of events and correlations. Classical probability tends to ignore reality and proceed against logic, which obliges us to accept instances of “black swans”. Meanwhile, if we accept that the real-life result is a wave function of the relations between the preparation of the system and its measurement, and that it has an operational character inextricable from the observer, we can attempt to identify the final states and define the occurrence probability of the elements leading up to these final states.

Let us consider the following model. Let us suppose, in accordance with the above premises, that the probability p is described as the squared modulus of a certain complex number A, which is the amplitude of probability:

p = |A|²   (E2)

This pattern has been confirmed in multiple experiments, of which the most representative is the one using a quantum gun.

Let us consider an electron gun (although it could equally well be a ball launcher or traffic organisation in a tunnel). Let us launch electrons towards screen E through an obstacle with two apertures, S1 and S2, and define the variable x (position) along the screen E. The number of apertures may be greater. The screen is positioned at a distance L. The system is illustrated in Figure 1.

Figure 1.

Aperture experiment.

Let us assume that the passage of an electron through the apertures can happen in two distinct ways, S1 and S2, each of which is described by a probability amplitude, A(S1) and A(S2) respectively.

In classical reasoning, the launched electron can reach the screen either through aperture S1 (trajectory S1) or aperture S2 (trajectory S2). This will result in flares on the screen corresponding to the locations of the apertures, as illustrated below (Figure 2). The positions of the flares conform to a Gaussian (normal) distribution.

Figure 2.

Probability distribution for positions of electrons hitting the screen, consistent with the tenets of classical physics.

In practice, such an ideal model cannot occur. This is because it is also probable that the electron will not pass through the aperture, or that events will suffer from mutual interference. In effect, there will be areas of maximum likelihood of the electron hitting the screen, as well as areas which will never be hit by the electron (under constant conditions of the experiment). This situation will result in a probability amplitude which can be formulated as

A(S1 or S2) = A(S1) + A(S2)   (E3)

This observation replaces the classical practice of adding up probabilities: the final result is not a sum of the probabilities of the electron passing through each aperture. It can be illustrated as shown in Figure 3 below.
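
A short numerical illustration (the amplitude values are my own assumptions) of why amplitudes, rather than probabilities, are added: for complex A(S1) and A(S2), the combined probability |A(S1) + A(S2)|² generally differs from |A(S1)|² + |A(S2)|²:

    import numpy as np

    # Hypothetical amplitudes for the two apertures (chosen only for illustration).
    A_s1 = 0.6 * np.exp(1j * 0.0)      # amplitude for path S1
    A_s2 = 0.6 * np.exp(1j * np.pi)    # amplitude for path S2, with opposite phase

    p_classical = abs(A_s1) ** 2 + abs(A_s2) ** 2   # adding probabilities: 0.72
    p_quantum = abs(A_s1 + A_s2) ** 2               # Eq. (E3): 0.0, fully destructive interference
    print(p_classical, p_quantum)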

Figure 3.

Probability distribution for locations of electrons hitting the screen, accounting for possible interferences.

The following equation is, therefore, true:

p(S1 or S2) = |A(S1 or S2)|² = |A(S1) + A(S2)|² ≠ |A(S1)|² + |A(S2)|²   (E4)

The probability depends on the relative phase of the amplitudes A(S1) and A(S2), whilst in p(S1) and p(S2) such phases do not occur. Consequently, event probabilities transfer onto the amplitudes, and this results in the occurrence of new phenomena unaccounted for by classical physics. Let us assume, for the purposes of our argument, that a given phenomenon can occur in two distinct ways, S1 and S2, each described by the probability amplitudes A(S1) and A(S2). The influence of interference must be considered if we cannot identify the aperture through which the electron travels. We define the phenomenon occurrence probability as p(x) = p1(x) + p2(x) + I(x),

where I(x) denotes the influence of interference calculated as:

I(x) = 2 √(p1(x) p2(x)) cos(φ1(x) − φ2(x))   (E5)

and where the phase functions φ1(x) and φ2(x) must be defined. In the analysis of particle behaviour, we can assume that the phase difference is linear in x, so that the term can be expressed with the equation

I(x) = 2 √(p1(x) p2(x)) cos(αx)   (E6)

where the constant α depends on the mass and energy of the launched particles, the distance of the screen from the apertures and other conditions. To consider questions other than particles, we need to define the influence of interference as a corrective term.
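
The following sketch is purely illustrative: the Gaussian single-aperture envelopes and the value of α are assumptions of mine, not parameters taken from the chapter. It evaluates p(x) = p1(x) + p2(x) + I(x) with the linear-phase form of Eq. (E6) and shows that the interference corrective never drives the probability below zero:

    import numpy as np

    x = np.linspace(-10.0, 10.0, 1001)   # position along screen E
    alpha = 2.0                          # assumed constant from Eq. (E6)

    def gaussian(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    # Assumed single-aperture probability densities, one per aperture.
    p1 = gaussian(x, -1.0, 3.0)
    p2 = gaussian(x, +1.0, 3.0)

    # Interference corrective, Eq. (E6): I(x) = 2 * sqrt(p1(x) * p2(x)) * cos(alpha * x)
    I = 2.0 * np.sqrt(p1 * p2) * np.cos(alpha * x)

    p = p1 + p2 + I                      # p(x) = p1(x) + p2(x) + I(x)
    print(p.min())                       # >= 0, since p1 + p2 >= 2*sqrt(p1*p2) >= -I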

Probability calculations informed by quantum physics can be defined as follows. There exists a sample space Ω and a family Zk of subsets of Ω, called k-events. The following premises are true:

  1. (1) ∅ (empty set) and Ω (sample space) are k-events.

  2. (2) If A is a k-event, then A′ also is a k-event.

  3. (3K) If A and B are k-events and the intersection A ∩ B = ∅, then the union A ∪ B is also a k-event.

    Probability is the function P: Zk → [0, 1], where:

  4. (4) P(Ω) = 1.

  5. (5) If A and B are k-events and the intersection A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).

The difference between the two approaches hinges on exchanging premise (3), considered earlier, with premise (3K), which prevents us from considering alternatives for the k-events whose conjunction is not itself a k-event.

It follows from premises (1), (2) and (3K) that if A and B are k-events, the product A ∩ B is a k-event if and only if the sum A ∪ B is a k-event.

This condition is met when considering random variables which are significant in analysing possible real events.

The function X: Ω → R is a random variable if, for each interval (a, b) ⊂ R, the set {ω ∈ Ω : X(ω) ∈ (a, b)} is an event.

A random variable is continuous if there exists a nonnegative function f: R → R such that, for each interval (a, b), P({ω ∈ Ω : X(ω) ∈ (a, b)}) = ∫_a^b f(x) dx.

The function f is then the distribution density of the variable X. So if ψ(q) is the wave function of a particle and we define f(q) = |ψ(q)|², that is, the distribution density for the position of the particle along a straight line, then ∫_a^b |ψ(q)|² dq defines the probability that the particle will be found within the interval (a, b).
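
A minimal sketch, with an assumed Gaussian wave packet of my own choosing, of reading ∫_a^b |ψ(q)|² dq as the probability of finding the particle in the interval (a, b):

    import numpy as np

    q = np.linspace(-10.0, 10.0, 4001)

    # Assumed normalized Gaussian wave packet (an illustrative choice, not from the chapter).
    sigma = 1.5
    psi = (1.0 / (np.pi * sigma ** 2)) ** 0.25 * np.exp(-q ** 2 / (2.0 * sigma ** 2))

    f = np.abs(psi) ** 2                  # distribution density f(q) = |psi(q)|^2
    print(np.trapz(f, q))                 # ~1.0: the density is normalized

    a, b = -1.0, 2.0
    mask = (q >= a) & (q <= b)
    prob_ab = np.trapz(f[mask], q[mask])  # probability of finding the particle in (a, b)
    print(prob_ab)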

In probability theory, the pair of random variables (X, Y) is called a two-dimensional (bivariate) random variable. A nonnegative function h(x, y) is called the distribution density of the bivariate random variable (X, Y) if, for any numbers a < b and c < d, the following equality holds:

P({ω ∈ Ω : X(ω) ∈ (a, b) and Y(ω) ∈ (c, d)}) = ∫_a^b ∫_c^d h(x, y) dy dx   (E7)

We can also demonstrate that

f(x) = ∫_{-∞}^{+∞} h(x, y) dy   (E8)

is the distribution density of the variable X, and g(y) = ∫_{-∞}^{+∞} h(x, y) dx is the distribution density of the variable Y.
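
A small numerical check (with an assumed bivariate Gaussian density, again my own illustrative choice) that integrating h(x, y) over y recovers f(x), and integrating over x recovers g(y), as in Eq. (E8):

    import numpy as np

    x = np.linspace(-6.0, 6.0, 601)
    y = np.linspace(-6.0, 6.0, 601)
    X, Y = np.meshgrid(x, y, indexing="ij")

    # Assumed joint density: two independent standard normal variables (illustration only).
    h = np.exp(-0.5 * (X ** 2 + Y ** 2)) / (2.0 * np.pi)

    f = np.trapz(h, y, axis=1)             # f(x) = integral of h(x, y) dy, Eq. (E8)
    g = np.trapz(h, x, axis=0)             # g(y) = integral of h(x, y) dx

    print(np.trapz(f, x), np.trapz(g, y))  # both ~1.0, i.e. valid marginal densities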

If, instead of classical probability, we employ quantum-mechanical probability amplitudes, we arrive at a correct definition. This leads us to conclusively abandon the “objective realism” which determines classical probability and replace it with quantum probability. To examine our reasoning, we shall consider the following example. A pair of random variables (q, p) is given, with known distribution densities. We need to establish the distribution density of the bivariate random variable (q, p) as a nonnegative function h(q, p) such that

  1. ∫_{-∞}^{+∞} h(q, p) dp = |ψ(q)|² and ∫_{-∞}^{+∞} h(q, p) dq = |ϕ(p)|².

  2. The above relation between ϕ and ψ, as well as the uncertainty principle, is realised.

  3. The expected values of the observable quantum random variables, calculated according to the definition of expected value in probability theory, are equal to the expected values of the same variables, calculated according to quantum operation formalism.

Following Leon Cohen’s interpretation, there exist functions fulfilling conditions (1) and (2), but a function fulfilling all three conditions does not exist. Consequently, we cannot go too far in probabilistic interpretations of quantum mechanics.

Even so, we can treat both p and q as random variables, but if we consider p and q jointly, we are leaving probability theory behind: the pair (q, p) is not a random variable. Instead, we must broaden the applicability of probability theory. None of the above premises contradicts our knowledge of real-life phenomena inside road tunnels, so it seems justifiable to use quantum probability in risk assessment calculation models. Quantum physics focuses primarily on an object's state, which may, in its final manifestation, depend on the tools of analysis we use. The state of the object at a given moment is represented by a direction (ray) in a Hilbert space. A Hilbert space is a type of phase space in which a description referring to real numbers alone is incomplete. A given state should be treated as a superposition of all possible states at every moment, and quantum mechanics typically deals with complex vector spaces. The resultant model of reality is very rich, composed of many interconnected structures whose states are synchronised, and the probability of any state's occurrence at any given moment is always the same.

The conclusion to which quantum physics points us is that event occurrence or nonoccurrence probability is always the same at 50%, and the certainty of this result will be the derivative of the set probability distribution.


3. Risk identification

If we want to apply mathematical models to risk analysis, we must clarify our premises and definitions. Risk, in popular understanding, measures the possibility of loss of a given state and may be positive (profit) or negative (loss). Most commonly, “risk” is applied in the context of safety. Most people identify safety as a primary need, without which they experience anxiety and insecurity. It is psychological needs like these that cause individuals, societies, states and organisations to act on their environments in order to remove or reduce factors which increase anxiety, fear, uncertainty or insecurity. As a result, no matter how we define safety, it will ultimately remain an individual interpretation of a given phenomenon. For some people, dangerous actions which, if successful, will make them a hero seem right, and their evaluation of the possible consequences does not stop them from taking actions which would cause fear and inaction in another person. In this context, the security of larger organisations should not rely solely on such subjective assessments. State security is not the same as the sum total of the individual securities of each of the state's citizens, and the safety of an organisation is not tantamount to the safety of each of its stakeholders. In the aftermath of the financial crash which bankrupted many companies in 2008, renewed efforts were undertaken to clarify the notion of safety for use both in financial management and in other areas of life. In line with the proposed guidelines, safety must be defined as freedom from unacceptable risk. In process safety procedures used in chemical process facilities, “safety” is understood as the absence of unacceptable risk to health, life, property or environment, whereas risk is the product of the probability (frequency of occurrence) of a given phenomenon and the scale of losses (size of undesired results), formulated as

risk = probability × results   (E9)

The use of this formula is, however, vitiated by the cognitive determinism of the person identifying the probability and the results. With the 2009 ISO 31000 standard, a new understanding of risk has been proposed. Both the standard and the UN Recommendations on the Transport of Dangerous Goods [6] redefine “risk” as the “effect of uncertainty on objectives”. The standard is a collection of frameworks, processes and rules which ought to be complied with during the risk assessment process in every organisation, commercial and otherwise. Building on the earlier considerations, we can reformulate risk as follows:

Risk = uncertainty(objective)   (E10)

where “uncertainty” is defined as the blurred probability of an event which cannot be foreseen with absolute certainty, and for which all the related possibilities and probabilities remain variable.
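
For contrast with Eq. (E10), the following is a minimal sketch of the classical formulation in Eq. (E9); the scenario names, frequencies and loss sizes are purely hypothetical:

    # Hypothetical incident scenarios (all numbers invented for illustration).
    scenarios = [
        {"name": "minor collision", "probability": 0.200, "loss": 10_000},
        {"name": "major collision", "probability": 0.020, "loss": 250_000},
        {"name": "vehicle fire",    "probability": 0.005, "loss": 1_500_000},
    ]

    # Classical view, Eq. (E9): risk = probability x results, summed over scenarios.
    total_risk = sum(s["probability"] * s["loss"] for s in scenarios)
    print(total_risk)   # expected annual loss under the classical formula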

In using measurement instruments (in this context, mathematical analysis) to measure a quantum state, a certain aspect of that state must be adjusted to the state of the instrument used. This is called an observable. In accordance with the second postulate of quantum mechanics, each observable is represented by a linear map (a Hermitian operator acting on state vectors) in a Hilbert space, and the eigenvalues of this operator represent all possible results of its measurement. The third postulate proposes that the probability that a measurement of the observable A will return the eigenvalue λk of the Hermitian operator equals the squared modulus of the corresponding probability amplitude. This confirms the aptness of the risk assessment method but does not bring us any closer to calculating the actual risk in a particular situation. Real-life events, even in their vector manifestation, do not allow the prediction of the direction of change. It follows that we should limit ourselves to identifying the possibility of each state occurring, based on its probability as derived from the probability distribution.
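
A toy two-level example (the operator and the state are my own constructions) of the postulates just cited: a Hermitian operator supplies the possible measurement results as its eigenvalues, and the probability of each result is the squared modulus of the state's component along the corresponding eigenvector:

    import numpy as np

    # Assumed observable: a 2x2 Hermitian matrix (illustrative only).
    A = np.array([[1.0, 0.5 - 0.5j],
                  [0.5 + 0.5j, -1.0]])

    # Assumed system state, normalized.
    psi = np.array([0.8, 0.6 + 0.0j])
    psi = psi / np.linalg.norm(psi)

    eigenvalues, eigenvectors = np.linalg.eigh(A)   # possible measurement results
    amplitudes = eigenvectors.conj().T @ psi        # components of psi in the eigenbasis
    probabilities = np.abs(amplitudes) ** 2         # squared moduli of the amplitudes

    print(eigenvalues)      # the values a measurement of the observable can return
    print(probabilities)    # their probabilities; these sum to 1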


4. Example application

Risk analyses are typically related to incident analyses. Let us consider the risk of an incident inside a road tunnel. Real data is available: over the last few years, the number of traffic incidents (traffic jams, attempts to reverse, collisions and fires) averages 340 per annum. We can further identify the likely rate of each type of incident within the total number. Let us conduct an analysis of the risk of change to the state of safety, based on the above-discussed postulates.

  1. Step 1. Identify the objective

    Accepting the definition of risk as the effect of uncertainty on objectives, we must first identify the objectives. This will be the number of incidents acceptable under the circumstances. Let us assume that the current state is acceptable, which means that we expect 340 incidents in the coming year. Let us take the year to be 365 days. The expected value (objective) will be 381 over the course of the year. Let us now attempt to measure the influence of uncertainty (i.e. the likelihood of a particular distribution) on the objective, i.e. the given annual number of incidents.

  2. Step 2. Identify the event distribution

    Let us assume that incidents occur according to a Poisson distribution. Traffic intensity is constant. There are no planned maintenance works.

    For a given tunnel, the number of incidents (vehicles stopped) has been calculated, at certain traffic intensity, as 340 per year.

    Calculating the uncertainty of 340 incidents per year (Table 1).

Graphic representation of the uncertainty of the incident estimate is shown in Figure 4 below.

Parameter | Equation/symbol | Value | Comments
Expected mean | λ | 340.000 |
Median | Me ≈ λ + 1/3 − 0.02/λ | 340.333 |
Mode | ⌊λ⌋ | 340.000 | Equals the greatest integer not exceeding λ
Kurtosis (excess) | 1/λ | 0.003 |
Cumulative probability | P(X ≤ λ) | 0.514 |
Probability mass function for the Poisson distribution | P(X = λ) = λ^λ e^(−λ) / λ! | 0.0216 |
Standard deviation | √λ | 18.439 | Measures the spread of the distribution about its mean

Table 1.

Uncertainty parameters for the presumed outcome.

Figure 4.

Representation of uncertainty of 340 incidents occurring.
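
The entries of Table 1 are consistent with a Poisson distribution with λ = 340 and can be recomputed from it; a minimal sketch using scipy (my own script, not the author's):

    from scipy.stats import poisson

    lam = 340                               # expected number of incidents per year

    mean, var, skew, ex_kurtosis = poisson.stats(lam, moments="mvsk")
    std = poisson.std(lam)                  # sqrt(lam) = 18.439
    median_approx = lam + 1/3 - 0.02/lam    # approximation used in Table 1: 340.333
    mode = int(lam)                         # greatest integer not exceeding lam: 340

    cumulative = poisson.cdf(lam, lam)      # P(X <= 340), approximately 0.514
    pmf_at_mean = poisson.pmf(lam, lam)     # P(X = 340), approximately 0.0216

    print(mean, std, median_approx, mode, ex_kurtosis, cumulative, pmf_at_mean)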

Conclusion: For the analysed tunnel, there is a 50% uncertainty that the number of incidents (vehicles stopped) in the year will be between 319 and 375.

The analysis leads to the following conclusions:

  1. We do not know if the number of incidents will equal 0.

  2. It is likely that at least 289 incidents will occur.

  3. We must be prepared for no fewer than 319 but ideally 375 incidents.

  4. We do not know if the number of incidents will exceed 401.

The analysis allows conclusions which will enable a better preparation for incidents than would have been the case using the standard method, which assumes that the risk of incident equals the product of its value and the probability of its occurrence.
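
A sketch of how such preparation thresholds can be read off Poisson quantiles; the coverage levels below are my own assumptions, and the chapter's specific cut-offs (289, 319, 375, 401) depend on the uncertainty criterion chosen by the author:

    from scipy.stats import poisson

    lam = 340

    # Assumed coverage levels, purely for illustration.
    lower_plan = poisson.ppf(0.25, lam)      # plan for at least this many incidents
    upper_plan = poisson.ppf(0.75, lam)      # upper end of the central planning band
    extreme_low = poisson.ppf(0.001, lam)    # a count very unlikely to be undershot
    extreme_high = poisson.ppf(0.999, lam)   # a count very unlikely to be exceeded

    print(lower_plan, upper_plan)            # central band under these assumed levels
    print(extreme_low, extreme_high)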


5. Conclusion

We ought to understand the risk management process as actions coordinated towards the achievement of a predefined state of acceptability. Acceptance of a state, that is to say preparation for it, is the key process in ensuring that an organisation may continue functioning. The risk management process comprises the following stages [7]:

  • Risk recognition, including identification of objectives

  • Risk identification, including possible consequences

  • Risk analysis, or risk magnitude estimation, i.e. quantification of uncertainty in attaining objectives

  • Risk evaluation, i.e. comparison of results with objectives

Risk analysis is, in this process, just one of the stages and ought not to be conducted apart from the other elements. Aside from describing dangers and consequences, the process results in the identification of conditions for decision-making regarding actions which take into account the uncertainty of dangers and dangerous events occurring, as well as the identification of possibilities for avoiding or limiting losses.

Therefore, the full process ought to comprise calculations of the influence of uncertainty on our objectives and an evaluation of the results, including an evaluation of whether occurrences of motion deviation are counterbalanced by the solutions used.

Risk analysis will be most fully realised when we assume that to identify the level of attainment of objectives (i.e. identification of possible risk), we must first identify the probability of each possible state’s occurrence, based on probability distribution.

References

  1. Heller M. Elementy mechaniki kwantowej dla filozofów. Kraków: Copernicus Center Press; 2017. p. 17
  2. Hilbert D. Grundlagen der Geometrie. Leipzig: Teubner; 1899 (1903, 1977)
  3. Różycki M. Inertia in procurement risk management. In: Sustainability and Scalability of Business: Theory and Practice. New York: Nova Science Publishers; 2018. pp. 237-245
  4. Schrödinger E. Die gegenwärtige Situation in der Quantenmechanik. Die Naturwissenschaften. 1935;23(50):844-849
  5. Wheeler JA. Delayed choice experiments and the Bohr-Einstein dialog. The American Philosophical Society and the Royal Society: papers read at a meeting, June 5, 1980
  6. Joint ADR, ADN agreement and the RID Regulations
  7. ISO/IEC Guide 51:2014(en). Safety aspects—Guidelines for their inclusion in standards. Available at: https://www.iso.org/iso-31000-risk-management.html
