Open access peer-reviewed chapter

The Monte Carlo Techniques and the Complex Probability Paradigm

Written By

Abdo Abou Jaoude

Submitted: 27 May 2020 Reviewed: 28 May 2020 Published: 07 July 2020

DOI: 10.5772/intechopen.93048

From the Edited Volume

Forecasting in Mathematics - Recent Advances, New Perspectives and Applications

Edited by Abdo Abou Jaoude


Abstract

The concept of mathematical probability was established in 1933 by Andrey Nikolaevich Kolmogorov by defining a system of five axioms. This system can be enhanced to encompass the set of imaginary numbers after the addition of three novel axioms. As a result, any random experiment can be executed in the set C of complex probabilities, which is the sum of the set R of real probabilities and the set M of imaginary probabilities. We aim here to incorporate supplementary imaginary dimensions into the random experiment occurring in the "real" laboratory in R and therefore to compute all the probabilities in the sets R, M, and C. Accordingly, the probability in the whole set C = R + M is constantly equal to one, independently of the distribution of the input random variable in R, and consequently the output of the stochastic experiment in R can be determined absolutely in C. This is the consequence of the fact that the probability in C is computed after subtracting the chaotic factor from the degree of our knowledge of the nondeterministic experiment. We will apply this innovative paradigm to the well-known Monte Carlo techniques and to their random algorithms and procedures in a novel way.

Keywords

  • degree of our knowledge
  • chaotic factor
  • complex probability set
  • probability norm
  • complex random vector
  • convergence probability
  • divergence probability
  • simulation

1. Introduction

“Thus, joining the rigor of the demonstrations of science to the uncertainty of fate, and reconciling these two seemingly contradictory things, it can, taking its name from both, appropriately arrogate to itself this astonishing title: the geometry of chance.”

Blaise Pascal

“You believe in the God who plays dice, and I in complete law and order.”

Albert Einstein, Letter to Max Born

“Chance is the pseudonym of God when He did not want to sign.”

Anatole France

“There is a certain Eternal Law, to wit, Reason, existing in the mind of God and governing the whole universe.”

Saint Thomas Aquinas

“An equation has no meaning for me unless it expresses a thought of God.”

Srinivasa Ramanujan

Calculating probabilities is the crucial task of classical probability theory. Adding supplementary dimensions to nondeterministic experiments will yield a deterministic expression of the theory of probability. This is the novel and original idea at the foundations of my complex probability paradigm. As a matter of fact, probability theory is a stochastic system of axioms in its essence; that means that the phenomena outputs are due to randomness and chance. Adding new imaginary dimensions to the nondeterministic phenomenon happening in the set R will lead to a deterministic phenomenon, and thus a probabilistic experiment will have a certain output in the set C of complex probabilities. If the chaotic experiment becomes fully predictable, then we will be completely capable of foretelling the output of random events that occur in the real world in all probabilistic processes. Accordingly, the task achieved here was to extend the set R of random real probabilities to the deterministic set C = R + M of complex probabilities by incorporating the contributions of the set M, which is the set of imaginary probabilities complementary to the set R. Consequently, since this extension reveals itself to be successful, an innovative paradigm of stochastic sciences and prognostic was put forward in which all nondeterministic phenomena in R were expressed deterministically in C. I coined the term "the complex probability paradigm" for this novel model, which was initiated and established in my 14 earlier research works [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14].


2. The purpose and the advantages of the current chapter

The advantages and the purpose of the present chapter are to [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]:

  1. Extend the theory of classical probability to cover the complex numbers set, hence to connect the probability theory to the field of complex analysis and variables. This task was initiated and developed in my earlier 14 works.

  2. Apply the novel paradigm and its original probability axioms to Monte Carlo techniques.

  3. Prove that all phenomena that are nondeterministic can be transformed to deterministic phenomena in the complex probabilities set which is C.

  4. Compute and quantify both the chaotic factor and the degree of our knowledge of Monte Carlo procedures.

  5. Represent and show the graphs of the functions and parameters of the innovative model related to Monte Carlo algorithms.

  6. Demonstrate that the classical probability concept is permanently equal to 1 in the set of complex probabilities; thus, no chaos, no randomness, no ignorance, no uncertainty, no unpredictability, no nondeterminism, and no disorder exist in

    C (complex set) = R (real set) + M (imaginary set).

  7. Prepare to apply this inventive paradigm to other topics in prognostics and to the field of stochastic processes. These will be the goals of my future research publications.

Regarding some applications of the novel established model and as a subsequent work, it can be applied to any nondeterministic experiments using Monte Carlo algorithms whether in the continuous or in the discrete cases.

Moreover, compared with existing literature, the major contribution of the current chapter is to apply the innovative complex probability paradigm to the techniques and concepts of the probabilistic Monte Carlo simulations and algorithms.

The next figure displays the major aims and purposes of the complex probability paradigm (CPP) (Figure 1).

Figure 1.

The diagram of the major aims of the complex probability paradigm.


3. The complex probability paradigm

3.1 The original Andrey Nikolaevich Kolmogorov system of axioms

The simplicity of Kolmogorov’s system of axioms may be surprising [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Let E be a collection of elements {E1, E2, …} called elementary events and let F be a set of subsets of E called random events. The five axioms for a finite set E are:

Axiom 1: F is a field of sets.

Axiom 2: F contains the set E.

Axiom 3: A nonnegative real number Prob(A), called the probability of A, is assigned to each set A in F. We always have 0 ≤ Prob(A) ≤ 1.

Axiom 4: Prob(E) equals 1.

Axiom 5: If A and B have no elements in common, the number assigned to their union is

$$\mathrm{Prob}(A \cup B) = \mathrm{Prob}(A) + \mathrm{Prob}(B)$$

hence, we say that A and B are disjoint; otherwise, we have

$$\mathrm{Prob}(A \cup B) = \mathrm{Prob}(A) + \mathrm{Prob}(B) - \mathrm{Prob}(A \cap B)$$

We also have $\mathrm{Prob}(A \cap B) = \mathrm{Prob}(A) \times \mathrm{Prob}(B/A) = \mathrm{Prob}(B) \times \mathrm{Prob}(A/B)$, which involves the conditional probability. If A and B are independent, then $\mathrm{Prob}(A \cap B) = \mathrm{Prob}(A) \times \mathrm{Prob}(B)$.

Moreover, we can generalize and say that for $N$ disjoint (mutually exclusive) events $A_1, A_2, \ldots, A_j, \ldots, A_N$ (for $1 \le j \le N$), we have the following additivity rule:

$$\mathrm{Prob}\left(\bigcup_{j=1}^{N} A_j\right) = \sum_{j=1}^{N} \mathrm{Prob}(A_j)$$

And we say also that for $N$ independent events $A_1, A_2, \ldots, A_j, \ldots, A_N$ (for $1 \le j \le N$), we have the following product rule:

$$\mathrm{Prob}\left(\bigcap_{j=1}^{N} A_j\right) = \prod_{j=1}^{N} \mathrm{Prob}(A_j)$$
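For a quick illustration of these two rules (a worked example added here for clarity, not taken from the original chapter), consider a fair die: the additivity rule gives $\mathrm{Prob}(\{1\} \cup \{2\}) = \frac{1}{6} + \frac{1}{6} = \frac{1}{3}$ for a single roll showing 1 or 2, while the product rule gives $\mathrm{Prob}(\text{first roll shows } 1 \cap \text{second roll shows } 2) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36}$, since the two rolls are independent.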

3.2 Adding the imaginary part M

Now, we can add to this system of axioms an imaginary part such that:

Axiom 6: Let $P_m = i \times (1 - P_r)$ be the probability of an associated complementary event in M (the imaginary part) to the event A in R (the real part). It follows that $P_r + P_m/i = 1$, where $i$ is the imaginary number with $i = \sqrt{-1}$ or $i^2 = -1$.

Axiom 7: We construct the complex number or vector $Z = P_r + P_m = P_r + i(1 - P_r)$ having a norm $|Z|$ such that

$$|Z|^2 = P_r^2 + (P_m/i)^2.$$

Axiom 8: Let $P_c$ denote the probability of an event in the complex probability universe C, where C = R + M. We say that $P_c$ is the probability of an event A in R with its associated event in M such that

$$P_c^2 = (P_r + P_m/i)^2 = |Z|^2 - 2iP_rP_m \quad \text{and is always equal to } 1.$$

We can see that by taking into consideration the set of imaginary probabilities we added three new and original axioms, and consequently the system of axioms defined by Kolmogorov was expanded to encompass the set of imaginary numbers.

3.3 A brief interpretation of the novel paradigm

To summarize the novel paradigm, we state that in the real probability universe R our degree of certain knowledge is undesirably imperfect and hence unsatisfactory; thus, we extend our analysis to the set of complex numbers C, which incorporates the contributions of both the set of real probabilities R and the complementary set of imaginary probabilities M. Afterward, this will yield an absolute and perfect degree of our knowledge in the probability universe C = R + M because $P_c = 1$ constantly. As a matter of fact, the work in the universe C of complex probabilities gives way to a sure forecast of any stochastic experiment, since in C we remove and subtract the measured chaotic factor from the computed degree of our knowledge. This will generate in the universe C a probability equal to 1 ($P_c^2 = DOK - Chf = DOK + MChf = 1 = P_c$). Many applications taking into consideration numerous continuous and discrete probability distributions in my 14 previous research papers confirm this hypothesis and innovative paradigm. The extended Kolmogorov axioms (EKA) or the complex probability paradigm (CPP) can be shown and summarized in the next illustration (Figure 2).

Figure 2.

The EKA or the CPP diagram.
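A minimal numerical sketch of these relations, written here in Python for illustration (the function name and structure are mine and do not come from the chapter), shows that $P_c^2 = DOK - Chf = 1$ for any real probability $P_r$ between 0 and 1:

```python
# Illustrative sketch of Axioms 6-8 (hypothetical helper, not the author's code).

def cpp_parameters(pr: float) -> dict:
    """Given the real probability Pr in R, return the associated CPP quantities."""
    pm_over_i = 1.0 - pr            # Axiom 6: Pm = i(1 - Pr), so Pm/i = 1 - Pr
    dok = pr**2 + pm_over_i**2      # Axiom 7: DOK = |Z|^2 = Pr^2 + (Pm/i)^2
    chf = -2.0 * pr * pm_over_i     # chaotic factor: Chf = 2i Pr Pm = -2 Pr (1 - Pr)
    mchf = abs(chf)                 # magnitude of the chaotic factor
    pc_squared = dok - chf          # Axiom 8: Pc^2 = DOK - Chf = DOK + MChf
    return {"DOK": dok, "Chf": chf, "MChf": mchf, "Pc^2": pc_squared}

for pr in (0.0, 0.25, 0.5, 0.75, 1.0):
    # DOK stays in [0.5, 1], Chf in [-0.5, 0], and Pc^2 is identically 1.
    print(pr, cpp_parameters(pr))
```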

Advertisement

4. The Monte Carlo techniques and the complex probability paradigm parameters

4.1 The divergence and convergence probabilities

Let $R_E$ be the exact result of the stochastic phenomenon, or of a simple or multidimensional integral, which is not always possible to compute by the ordinary procedures of probability theory, by deterministic numerical means, or by calculus [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. And let $R_A$ be the approximate result of the phenomenon or of the integral calculated by the techniques of Monte Carlo:

The relative error in the Monte Carlo methods is $\text{Rel. Error} = \dfrac{|R_E - R_A|}{R_E} = \left|1 - \dfrac{R_A}{R_E}\right|$.

Additionally, the percent relative error is $100\% \times \frac{|R_E - R_A|}{R_E}$ and is always between 0% and 100%. Therefore, the relative error is always between 0 and 1. Hence:

$$0 \le \frac{|R_E - R_A|}{R_E} \le 1 \Leftrightarrow \begin{cases} 0 \le \dfrac{R_E - R_A}{R_E} \le 1 & \text{if } R_A \le R_E \\[2mm] 0 \le \dfrac{R_A - R_E}{R_E} \le 1 & \text{if } R_A \ge R_E \end{cases} \Leftrightarrow 0 \le R_A \le R_E \text{ or } R_E \le R_A \le 2R_E$$

Moreover, we define the real probability in the set R by

$$P_r = 1 - \frac{|R_E - R_A|}{R_E} = 1 - \left|1 - \frac{R_A}{R_E}\right| = \begin{cases} 1 - \left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] 1 + \left(1 - \dfrac{R_A}{R_E}\right) & \text{if } R_E \le R_A \le 2R_E \end{cases} = \begin{cases} \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

= 1 − the relative error in the Monte Carlo method.

= probability of Monte Carlo method convergence in R.

And therefore,

$$P_m = i(1 - P_r) = i\left[1 - \left(1 - \frac{|R_E - R_A|}{R_E}\right)\right] = i\,\frac{|R_E - R_A|}{R_E} = i\left|1 - \frac{R_A}{R_E}\right| = \begin{cases} i\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] i\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

= probability of Monte Carlo method divergence in the imaginary complementary probability set M since it is the imaginary complement of Pr.

Consequently,

$$P_m/i = 1 - P_r = \left|1 - \frac{R_A}{R_E}\right| = \begin{cases} 1 - \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] \dfrac{R_A}{R_E} - 1 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

= the relative error in the Monte Carlo method.

= probability of Monte Carlo method divergence in R since it is the real complement of Pr.

In the case where $0 \le R_A \le R_E$: $0 \le \frac{R_A}{R_E} \le 1 \Rightarrow 0 \le P_r \le 1$, and we deduce also that $0 \le 1 - \frac{R_A}{R_E} \le 1 \Rightarrow 0 \le P_m/i \le 1$ and $0 \le P_m \le i$.

And in the case where $R_E \le R_A \le 2R_E$: $1 \le \frac{R_A}{R_E} \le 2 \Rightarrow 0 \le 2 - \frac{R_A}{R_E} \le 1 \Rightarrow 0 \le P_r \le 1$, and we deduce also that $0 \le \frac{R_A}{R_E} - 1 \le 1 \Rightarrow 0 \le P_m/i \le 1$ and $0 \le P_m \le i$.

Consequently, if $R_A = 0$ or $R_A = 2R_E$, that means before the beginning of the simulation, then:

$$P_r = \mathrm{Prob}(\text{convergence in R}) = 0$$
$$P_m = \mathrm{Prob}(\text{divergence in M}) = i$$
$$P_m/i = \mathrm{Prob}(\text{divergence in R}) = 1$$

And if $R_A = R_E$, that means at the end of the Monte Carlo simulation, then:

$$P_r = \mathrm{Prob}(\text{convergence in R}) = 1$$
$$P_m = \mathrm{Prob}(\text{divergence in M}) = 0$$
$$P_m/i = \mathrm{Prob}(\text{divergence in R}) = 0$$
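The following short Python sketch (an illustration of my own; the function name and the numerical value of $R_E$ reused from Section 7.1 are assumptions, not code from the chapter) reproduces these convergence and divergence probabilities for any $R_A$ in $[0, 2R_E]$:

```python
# Hedged illustration of the probabilities Pr, Pm/i and Pm defined above.

def convergence_probabilities(ra: float, re: float):
    """Return (Pr, Pm/i, Pm) for an approximate result ra and an exact result re > 0."""
    if not 0.0 <= ra <= 2.0 * re:
        raise ValueError("RA must lie in [0, 2*RE]")
    rel_error = abs(re - ra) / re      # relative error of the Monte Carlo method
    pr = 1.0 - rel_error               # probability of convergence in R
    pm_over_i = rel_error              # probability of divergence in R (real complement)
    pm = complex(0.0, pm_over_i)       # probability of divergence in M (imaginary complement)
    return pr, pm_over_i, pm

re = 0.62429507696997411                         # exact result used in Section 7.1
print(convergence_probabilities(0.0, re))        # before the simulation: (0.0, 1.0, 1j)
print(convergence_probabilities(2.0 * re, re))   # before the simulation: (0.0, 1.0, 1j)
print(convergence_probabilities(re, re))         # at the simulation end: (1.0, 0.0, 0j)
```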

4.2 The complex random vector Z in C=R+M

We have:

$$Z = P_r + P_m = \begin{cases} \dfrac{R_A}{R_E} + i\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] \left(2 - \dfrac{R_A}{R_E}\right) + i\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases} = \mathrm{Re}(Z) + i\,\mathrm{Im}(Z)$$

where

$$\mathrm{Re}(Z) = P_r = \begin{cases} \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases} = \text{the real part of } Z$$

and

$$\mathrm{Im}(Z) = P_m/i = \begin{cases} 1 - \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] \dfrac{R_A}{R_E} - 1 & \text{if } R_E \le R_A \le 2R_E \end{cases} = \text{the imaginary part of } Z.$$

That means that the complex random vector Z is the sum in C of the real probability of convergence in R and of the imaginary probability of divergence in M.

If $R_A = 0$ or $R_A = 2R_E$ (before the simulation begins), then

$$P_r = \frac{R_A}{R_E} = \frac{0}{R_E} = 0 \quad \text{or} \quad P_r = 2 - \frac{R_A}{R_E} = 2 - \frac{2R_E}{R_E} = 2 - 2 = 0$$

and

$$P_m = i\left(1 - \frac{R_A}{R_E}\right) = i\left(1 - \frac{0}{R_E}\right) = i(1 - 0) = i \quad \text{or} \quad P_m = i\left(\frac{R_A}{R_E} - 1\right) = i\left(\frac{2R_E}{R_E} - 1\right) = i(2 - 1) = i;$$

therefore $Z = 0 + i = i$.

If $R_A = \dfrac{R_E}{2}$ or $R_A = \dfrac{3R_E}{2}$ (at the middle of the simulation), then

$$P_r = \begin{cases} \dfrac{R_A}{R_E} = \dfrac{R_E/2}{R_E} = 0.5 & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} = 2 - \dfrac{3R_E/2}{R_E} = 0.5 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Rightarrow P_r = 0.5$$

and

$$P_m = \begin{cases} i\left(1 - \dfrac{R_A}{R_E}\right) = i\left(1 - \dfrac{R_E/2}{R_E}\right) = 0.5i & \text{if } 0 \le R_A \le R_E \\[2mm] i\left(\dfrac{R_A}{R_E} - 1\right) = i\left(\dfrac{3R_E/2}{R_E} - 1\right) = 0.5i & \text{if } R_E \le R_A \le 2R_E \end{cases} \Rightarrow P_m = 0.5i;$$

therefore $Z = 0.5 + 0.5i$.

If $R_A = R_E$ (at the simulation end), then

$$P_r = \begin{cases} \dfrac{R_A}{R_E} = \dfrac{R_E}{R_E} = 1 & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} = 2 - \dfrac{R_E}{R_E} = 2 - 1 = 1 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Rightarrow P_r = 1$$

and

$$P_m = \begin{cases} i\left(1 - \dfrac{R_A}{R_E}\right) = i\left(1 - \dfrac{R_E}{R_E}\right) = 0 & \text{if } 0 \le R_A \le R_E \\[2mm] i\left(\dfrac{R_A}{R_E} - 1\right) = i\left(\dfrac{R_E}{R_E} - 1\right) = 0 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Rightarrow P_m = 0;$$

therefore $Z = 1 + 0i = 1$.

4.3 The degree of our knowledge, DOK

We have

$$DOK = |Z|^2 = P_r^2 + (P_m/i)^2 = \begin{cases} \left(\dfrac{R_A}{R_E}\right)^2 + \left(1 - \dfrac{R_A}{R_E}\right)^2 & \text{if } 0 \le R_A \le R_E \\[2mm] \left(2 - \dfrac{R_A}{R_E}\right)^2 + \left(\dfrac{R_A}{R_E} - 1\right)^2 & \text{if } R_E \le R_A \le 2R_E \end{cases} = \begin{cases} 2\left(\dfrac{R_A}{R_E}\right)^2 - 2\dfrac{R_A}{R_E} + 1 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(\dfrac{R_A}{R_E}\right)^2 - 6\dfrac{R_A}{R_E} + 5 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

From CPP we have $0.5 \le DOK \le 1$; then if $DOK = 0.5$:

$$\begin{cases} 2\left(\dfrac{R_A}{R_E}\right)^2 - 2\dfrac{R_A}{R_E} + 1 = 0.5 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(\dfrac{R_A}{R_E}\right)^2 - 6\dfrac{R_A}{R_E} + 5 = 0.5 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

then solving the second-degree equations for $\dfrac{R_A}{R_E}$ gives

$$\begin{cases} \dfrac{R_A}{R_E} = 1/2 & \text{if } 0 \le R_A \le R_E \\[2mm] \dfrac{R_A}{R_E} = 3/2 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = R_E/2 & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 3R_E/2 & \text{if } R_E \le R_A \le 2R_E \end{cases} \quad \text{and vice versa.}$$

That means that DOK is minimal when the approximate result $R_A$ is equal to half of the exact result $R_E$ (if $0 \le R_A \le R_E$) or when the approximate result is equal to three halves of the exact result (if $R_E \le R_A \le 2R_E$), which means at the middle of the simulation.

In addition, if $DOK = 1$, then

$$\begin{cases} 2\left(\dfrac{R_A}{R_E}\right)^2 - 2\dfrac{R_A}{R_E} + 1 = 1 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(\dfrac{R_A}{R_E}\right)^2 - 6\dfrac{R_A}{R_E} + 5 = 1 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} \left(\dfrac{R_A}{R_E}\right)^2 - \dfrac{R_A}{R_E} = 0 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(\dfrac{R_A}{R_E}\right)^2 - 6\dfrac{R_A}{R_E} + 4 = 0 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

$$\Leftrightarrow \begin{cases} R_A = 0 \text{ or } R_A = R_E & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 2R_E \text{ or } R_A = R_E & \text{if } R_E \le R_A \le 2R_E \end{cases} \quad \text{and vice versa.}$$

That means that DOK is maximum when the approximate result $R_A$ is equal to 0 or $2R_E$ (before the beginning of the simulation) and when it is equal to the exact result $R_E$ (at the end of the simulation). We can deduce that we have perfect and total knowledge of the stochastic experiment before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result.
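As a quick check of these extrema (a worked substitution added here for clarity), at the midpoint $R_A = R_E/2$ the first branch gives $DOK = 2(1/2)^2 - 2(1/2) + 1 = 0.5$, its minimum, while at $R_A = 0$ and at $R_A = R_E$ it gives $DOK = 0 - 0 + 1 = 1$ and $DOK = 2 - 2 + 1 = 1$, its maximum.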

4.4 The chaotic factor, Chf

We have

$$Chf = 2iP_rP_m = 2i \times \begin{cases} \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases} \times \begin{cases} i\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] i\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

Since $i^2 = -1$, then

$$Chf = \begin{cases} -2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] -2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

From CPP we have $-0.5 \le Chf \le 0$; then if $Chf = -0.5$:

$$\begin{cases} -2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) = -0.5 & \text{if } 0 \le R_A \le R_E \\[2mm] -2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) = -0.5 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = R_E/2 & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 3R_E/2 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

and vice versa.

That means that Chf is minimal when the approximate result $R_A$ is equal to half of the exact result $R_E$ (if $0 \le R_A \le R_E$) or when the approximate result is equal to three halves of the exact result (if $R_E \le R_A \le 2R_E$), which means at the middle of the simulation.

In addition, if $Chf = 0$, then

$$\begin{cases} -2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) = 0 & \text{if } 0 \le R_A \le R_E \\[2mm] -2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) = 0 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = 0 \text{ or } R_A = R_E & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 2R_E \text{ or } R_A = R_E & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

And, conversely, if $R_A = 0$ or $R_A = R_E$ (when $0 \le R_A \le R_E$), or if $R_A = 2R_E$ or $R_A = R_E$ (when $R_E \le R_A \le 2R_E$), then $Chf = 0$.

That means that Chf is equal to 0 when the approximate result $R_A$ is equal to 0 or $2R_E$ (before the beginning of the simulation) and when it is equal to the exact result $R_E$ (at the end of the simulation).

4.5 The magnitude of the chaotic factor, MChf

We have

$$MChf = |Chf| = |2iP_rP_m| = \left|2i \times \begin{cases} \dfrac{R_A}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases} \times \begin{cases} i\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] i\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}\right|$$

Since $i^2 = -1$, then

$$MChf = \begin{cases} 2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

From CPP we have $0 \le MChf \le 0.5$; then if $MChf = 0.5$:

$$\begin{cases} 2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) = 0.5 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) = 0.5 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = R_E/2 & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 3R_E/2 & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

and vice versa.

That means that MChf is maximum when the approximate result $R_A$ is equal to half of the exact result $R_E$ (if $0 \le R_A \le R_E$) or when the approximate result is equal to three halves of the exact result (if $R_E \le R_A \le 2R_E$), which means at the middle of the simulation. This implies that the magnitude of the chaos (MChf) introduced by the random variables used in the Monte Carlo method is maximal at the halfway point of the simulation.

In addition, if $MChf = 0$, then

$$\begin{cases} 2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) = 0 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) = 0 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = 0 \text{ or } R_A = R_E & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = 2R_E \text{ or } R_A = R_E & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

And, conversely, if $R_A = 0$ or $R_A = R_E$ (when $0 \le R_A \le R_E$), or if $R_A = 2R_E$ or $R_A = R_E$ (when $R_E \le R_A \le 2R_E$), then $MChf = 0$.

That means that MChf is minimal and equal to 0 when the approximate result $R_A$ is equal to 0 or $2R_E$ (before the beginning of the simulation) and when it is equal to the exact result $R_E$ (at the end of the simulation). We can deduce that the magnitude of the chaos in the stochastic experiment is null before the beginning of the Monte Carlo simulation, since no randomness has been introduced yet, as well as at the end of the simulation, after the convergence of the method to the exact result, when randomness has finished its task in the stochastic Monte Carlo method and experiment.

4.6 The probability Pc in the probability set C=R+M

We have

$$P_c^2 = DOK - Chf = DOK + MChf = \begin{cases} 2\left(\dfrac{R_A}{R_E}\right)^2 - 2\dfrac{R_A}{R_E} + 1 & \text{if } 0 \le R_A \le R_E \\[2mm] 2\left(\dfrac{R_A}{R_E}\right)^2 - 6\dfrac{R_A}{R_E} + 5 & \text{if } R_E \le R_A \le 2R_E \end{cases} - \begin{cases} -2\dfrac{R_A}{R_E}\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] -2\left(2 - \dfrac{R_A}{R_E}\right)\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

$$= \begin{cases} 1 & \text{if } 0 \le R_A \le R_E \\[1mm] 1 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Rightarrow P_c^2 = 1 \quad \text{for } 0 \le R_A \le 2R_E$$

$P_c = 1$ = probability of convergence in C; therefore,

$$P_c = \begin{cases} \dfrac{R_A}{R_E} = 1 & \text{if } 0 \le R_A \le R_E \\[2mm] 2 - \dfrac{R_A}{R_E} = 1 & \text{if } R_E \le R_A \le 2R_E \end{cases} \Leftrightarrow \begin{cases} R_A = R_E & \text{if } 0 \le R_A \le R_E \\[1mm] R_A = R_E & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

$\Rightarrow R_A = R_E$ for $0 \le R_A \le 2R_E$, continuously, in the probability set C = R + M. This is due to the fact that in C we have subtracted, in the equation above, the chaotic factor Chf from our degree of knowledge DOK, and therefore we have eliminated the chaos caused and introduced by all the random variables and stochastic fluctuations that lead to approximate results in the Monte Carlo simulation in R. Therefore, since in C we always have $R_A = R_E$, the Monte Carlo simulation, which is a stochastic method by nature in R, becomes after applying the CPP a deterministic method in C, since the probability of convergence of any random experiment in C is constantly and permanently equal to 1 for any number of iterations N.
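Writing $x = R_A/R_E$, the identity above can be verified directly in both branches (a worked check added here for clarity); subtracting the negative chaotic factor amounts to adding $2x(1 - x)$ and $2(2 - x)(x - 1)$, respectively:

$$\left(2x^2 - 2x + 1\right) + 2x(1 - x) = 2x^2 - 2x + 1 + 2x - 2x^2 = 1,$$
$$\left(2x^2 - 6x + 5\right) + 2(2 - x)(x - 1) = 2x^2 - 6x + 5 - 2x^2 + 6x - 4 = 1.$$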

4.7 The rates of change of the probabilities in R, M, and C

Since

$$Z = P_r + P_m = \begin{cases} \dfrac{R_A}{R_E} + i\left(1 - \dfrac{R_A}{R_E}\right) & \text{if } 0 \le R_A \le R_E \\[2mm] \left(2 - \dfrac{R_A}{R_E}\right) + i\left(\dfrac{R_A}{R_E} - 1\right) & \text{if } R_E \le R_A \le 2R_E \end{cases} = \mathrm{Re}(Z) + i\,\mathrm{Im}(Z)$$

Then

$$\frac{dZ}{dR_A} = \frac{dP_r}{dR_A} + \frac{dP_m}{dR_A} = \begin{cases} \dfrac{d}{dR_A}\left[\dfrac{R_A}{R_E} + i\left(1 - \dfrac{R_A}{R_E}\right)\right] & \text{if } 0 \le R_A \le R_E \\[2mm] \dfrac{d}{dR_A}\left[\left(2 - \dfrac{R_A}{R_E}\right) + i\left(\dfrac{R_A}{R_E} - 1\right)\right] & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

$$= \begin{cases} \dfrac{1}{R_E} - \dfrac{i}{R_E} = \dfrac{1}{R_E}(1 - i) & \text{if } 0 \le R_A \le R_E \\[2mm] -\dfrac{1}{R_E} + \dfrac{i}{R_E} = \dfrac{1}{R_E}(i - 1) & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

Therefore,

$$\mathrm{Re}\left(\frac{dZ}{dR_A}\right) = \frac{dP_r}{dR_A} = \begin{cases} +\dfrac{1}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] -\dfrac{1}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases} = \begin{cases} \text{constant} > 0 & \text{if } 0 \le R_A \le R_E \text{ and } R_E > 0 \\[1mm] \text{constant} < 0 & \text{if } R_E \le R_A \le 2R_E \text{ and } R_E > 0 \end{cases}$$

That means that the slope of the probability of convergence in R, or its rate of change, is constant and positive if $0 \le R_A \le R_E$, and constant and negative if $R_E \le R_A \le 2R_E$, and it depends only on $R_E$; hence, we have a constant increase in $P_r$ (the convergence probability) as a function of the number of iterations N as $R_A$ increases from 0 to $R_E$ and as $R_A$ decreases from $2R_E$ to $R_E$, until $P_r$ reaches the value 1, that is, until the random experiment converges to $R_E$:

$$\mathrm{Im}\left(\frac{dZ}{dR_A}\right) = \frac{1}{i}\frac{dP_m}{dR_A} = \frac{d(P_m/i)}{dR_A} = \begin{cases} -\dfrac{1}{R_E} & \text{if } 0 \le R_A \le R_E \\[2mm] +\dfrac{1}{R_E} & \text{if } R_E \le R_A \le 2R_E \end{cases} = \begin{cases} \text{constant} < 0 & \text{if } 0 \le R_A \le R_E \text{ and } R_E > 0 \\[1mm] \text{constant} > 0 & \text{if } R_E \le R_A \le 2R_E \text{ and } R_E > 0 \end{cases}$$

That means that the slopes of the probabilities of divergence in R and M, or their rates of change, are constant and negative if $0 \le R_A \le R_E$ and constant and positive if $R_E \le R_A \le 2R_E$, and they depend only on $R_E$; hence, we have a constant decrease in $P_m/i$ and $P_m$ (the divergence probabilities) as functions of the number of iterations N as $R_A$ increases from 0 to $R_E$ and as $R_A$ decreases from $2R_E$ to $R_E$, until $P_m/i$ and $P_m$ reach the value 0, that is, until the random experiment converges to $R_E$.

Additionally,

$$\left\|\frac{dZ}{dR_A}\right\|^2 = \left(\frac{dP_r}{dR_A}\right)^2 + \left(\frac{1}{i}\frac{dP_m}{dR_A}\right)^2 = \left(\frac{dP_r}{dR_A}\right)^2 + \left(\frac{d(P_m/i)}{dR_A}\right)^2 = \begin{cases} \dfrac{1}{R_E^2} + \dfrac{1}{R_E^2} & \text{if } 0 \le R_A \le R_E \\[2mm] \dfrac{1}{R_E^2} + \dfrac{1}{R_E^2} & \text{if } R_E \le R_A \le 2R_E \end{cases}$$

$$\Rightarrow \left\|\frac{dZ}{dR_A}\right\|^2 = \frac{1}{R_E^2} + \frac{1}{R_E^2} = \frac{2}{R_E^2} \quad \text{for } 0 \le R_A \le 2R_E$$

$$\Rightarrow \left\|\frac{dZ}{dR_A}\right\| = \frac{\sqrt{2}}{R_E} = \text{constant} > 0 \quad \text{if } R_E > 0;$$

that means that the modulus of the slope of the complex probability vector Z in C, or of its rate of change, is constant and positive and depends only on $R_E$; hence, we have a constant increase in $\mathrm{Re}(Z)$ and a constant decrease in $\mathrm{Im}(Z)$ as functions of the number of iterations N as Z goes from $(0, i)$ at $N = 0$ to $(1, 0)$ at the simulation end; hence, until $\mathrm{Re}(Z) = P_r$ reaches the value 1, that is, until the random experiment converges to $R_E$.

Furthermore, since $P_c^2 = DOK - Chf = DOK + MChf = 1$, then

$P_c = 1$ = probability of convergence in C, and consequently

$$\frac{dP_c}{dR_A} = \frac{d(1)}{dR_A} = 0,$$

which means that $P_c$ is constantly equal to 1 for every value of $R_A$, of $R_E$, and of the number of iterations N, that is, for any stochastic experiment and for any Monte Carlo simulation. So, we conclude that in C = R + M we have complete and perfect knowledge of the random experiment, which has now become a deterministic one, since the extension into the complex probability plane C defined by the CPP axioms has changed all stochastic variables to deterministic variables.


5. The new paradigm parameter evaluation

We can infer from what has been developed earlier the following:

The real probability of convergence: $P_r(N) = 1 - \dfrac{|R_E - R_A(N)|}{R_E}$.

We have $0 \le N \le N_C$, where $N = 0$ corresponds to the instant before the beginning of the random experiment, when $R_A(N = 0) = 0$ or $2R_E$, and where $N = N_C$ (the number of iterations needed for the method to converge) corresponds to the instant at the end of the random experiment and Monte Carlo method, when $R_A(N = N_C) \simeq R_E$.

The imaginary complementary probability of divergence: $P_m(N) = i\,\dfrac{|R_E - R_A(N)|}{R_E}$.

The real complementary probability of divergence: $P_m(N)/i = \dfrac{|R_E - R_A(N)|}{R_E}$.

The random vector of complex probability:

$$Z(N) = P_r(N) + P_m(N) = \left(1 - \frac{|R_E - R_A(N)|}{R_E}\right) + i\,\frac{|R_E - R_A(N)|}{R_E}$$

The degree of our knowledge:

$$DOK(N) = |Z(N)|^2 = P_r^2(N) + \left(P_m(N)/i\right)^2 = \left(1 - \frac{|R_E - R_A(N)|}{R_E}\right)^2 + \left(\frac{|R_E - R_A(N)|}{R_E}\right)^2$$
$$= 1 + 2iP_r(N)P_m(N) = 1 - 2P_r(N)\left(1 - P_r(N)\right) = 1 - 2P_r(N) + 2P_r^2(N) = 1 - 2\frac{|R_E - R_A(N)|}{R_E} + 2\left(\frac{|R_E - R_A(N)|}{R_E}\right)^2.$$

$DOK(N)$ is equal to 1 when $P_r(N) = P_r(0) = 0$ and when $P_r(N) = P_r(N_C) = 1$.

The chaotic factor:

$$Chf(N) = 2iP_r(N)P_m(N) = -2P_r(N)\left(1 - P_r(N)\right) = -2P_r(N) + 2P_r^2(N) = -2\frac{|R_E - R_A(N)|}{R_E} + 2\left(\frac{|R_E - R_A(N)|}{R_E}\right)^2$$

$Chf(N)$ is null when $P_r(N) = P_r(0) = 0$ and when $P_r(N) = P_r(N_C) = 1$.

The magnitude of the chaotic factor MChf:

$$MChf(N) = |Chf(N)| = |2iP_r(N)P_m(N)| = 2P_r(N)\left(1 - P_r(N)\right) = 2P_r(N) - 2P_r^2(N) = 2\frac{|R_E - R_A(N)|}{R_E} - 2\left(\frac{|R_E - R_A(N)|}{R_E}\right)^2$$

$MChf(N)$ is null when $P_r(N) = P_r(0) = 0$ and when $P_r(N) = P_r(N_C) = 1$.

At any iteration number N ($0 \le N \le N_C$), the probability calculated in the set C of complex probabilities is as follows:

$$P_c^2(N) = \left[P_r(N) + P_m(N)/i\right]^2 = |Z(N)|^2 - 2iP_r(N)P_m(N) = DOK(N) - Chf(N) = DOK(N) + MChf(N) = 1$$

then

$$P_c^2(N) = \left[P_r(N) + P_m(N)/i\right]^2 = \left[P_r(N) + \left(1 - P_r(N)\right)\right]^2 = 1^2 = 1 \Rightarrow P_c(N) = 1 \text{ (continuously)}.$$

Thus, the prediction in the set C of the probabilities of convergence of the random Monte Carlo methods is always certain.
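A brief Python sketch of this evaluation (my own illustration; the helper name and the toy $R_A(N)$ trajectory are assumptions, not the chapter's code) computes the parameters of this section along the iterations of a Monte Carlo run and confirms that the last column, $P_c^2(N)$, is identically 1:

```python
# Hedged illustration of the CPP parameters of Section 5 as functions of N.

def cpp_evolution(ra_values, re):
    """Yield (N, Pr, Pm/i, DOK, Chf, MChf, Pc^2) for a sequence of estimates RA(N)."""
    for n, ra in enumerate(ra_values):
        rel_err = abs(re - ra) / re        # relative error at iteration N
        pr = 1.0 - rel_err                 # probability of convergence in R
        pm_over_i = rel_err                # probability of divergence in R
        dok = pr**2 + pm_over_i**2         # degree of our knowledge
        chf = -2.0 * pr * pm_over_i        # chaotic factor
        mchf = abs(chf)                    # magnitude of the chaotic factor
        pc_squared = dok - chf             # equals 1 whatever RA(N) is
        yield n, pr, pm_over_i, dok, chf, mchf, pc_squared

# Toy trajectory of estimates that starts far from RE and converges toward it.
re = 1.0
ra_trajectory = [0.0, 0.3, 0.5, 0.8, 0.95, 1.0]
for row in cpp_evolution(ra_trajectory, re):
    print(row)
```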

Let us now consider a multidimensional integral and a stochastic experiment to simulate the Monte Carlo procedures and to quantify, draw, and visualize all the prognostic and CPP parameters.


6. The flowchart of the prognostic model of Monte Carlo techniques and CPP

The flowchart that follows illustrates all the procedures of the elaborated prognostic model of CPP.


7. Simulation of the new paradigm

Note that all the numerical values found in the simulations of the new paradigm, for any number of iteration cycles N, were computed using the 64-bit MATLAB version 2020 software and compared to the values found by Microsoft Visual C++ programs. Additionally, the reader should be mindful of truncation and rounding errors, since we represent all numerical values with at most five significant digits and since we are using Monte Carlo techniques of simulation and integration, which yield approximate results under the influence of stochastic aspects and variations. We have used for this purpose a high-capacity computer system: a workstation with parallel microprocessors, a 64-bit operating system, and 64 GB of RAM.

7.1 The continuous random case: a four-dimensional multiple integral

The Monte Carlo technique of integration can be summarized by the following equation:

$$\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n} f(x_1, x_2, \ldots, x_n)\,dx_1\,dx_2\cdots dx_n \approx \frac{(b_1 - a_1)\times(b_2 - a_2)\times\cdots\times(b_n - a_n)}{N}\sum_{j=1}^{N} f\left(x_{1j}, x_{2j}, \ldots, x_{nj}\right)$$

Let us consider here the multidimensional integral of the following function:

$$\int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3} xyzw\,dx\,dy\,dz\,dw = \int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3} \left[\frac{x^2}{2}\right]_0^{4/3} yzw\,dy\,dz\,dw = \int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3} \frac{16}{18}\,yzw\,dy\,dz\,dw$$
$$= \frac{8}{9}\int_0^{4/3}\!\!\int_0^{4/3} \left[\frac{y^2}{2}\right]_0^{4/3} zw\,dz\,dw = \frac{8}{9}\int_0^{4/3}\!\!\int_0^{4/3} \frac{16}{18}\,zw\,dz\,dw = \frac{64}{81}\int_0^{4/3} \left[\frac{z^2}{2}\right]_0^{4/3} w\,dw = \frac{64}{81}\int_0^{4/3} \frac{16}{18}\,w\,dw$$
$$= \frac{512}{729}\left[\frac{w^2}{2}\right]_0^{4/3} = \frac{512}{729}\times\frac{16}{18} = \frac{512}{729}\times\frac{8}{9} = \frac{4096}{6561} = 0.62429507696997411$$

$R_E = 0.62429507696997411$ by the deterministic methods of calculus.

$f(x, y, z, w) = xyzw$, where x, y, z, and w follow a continuous uniform distribution U such that

$$x \sim U(0, 4/3), \quad y \sim U(0, 4/3), \quad z \sim U(0, 4/3), \quad w \sim U(0, 4/3)$$
$$\int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3}\!\!\int_0^{4/3} xyzw\,dx\,dy\,dz\,dw \approx \frac{(4/3 - 0)\times(4/3 - 0)\times(4/3 - 0)\times(4/3 - 0)}{N}\sum_{j=1}^{N} x_j y_j z_j w_j = \frac{256/81}{N}\sum_{j=1}^{N} x_j y_j z_j w_j = R_A$$

with $1 \le N \le N_C$ after applying the Monte Carlo method.

Furthermore, the four figures (Figures 3–6) illustrate and prove the increasing convergence of the Monte Carlo simulation and technique to the exact result $R_E = 0.62429507696997411$ for N = 50, 100, 500, and N = NC = 100,000 iterations. Consequently, we have $\lim_{N \to +\infty} P_r(N) = \lim_{N \to +\infty}\left(1 - \frac{|R_E - R_A(N)|}{R_E}\right) = 1 - \frac{|R_E - R_E|}{R_E} = 1 - 0 = 1$, which is equal to the probability of convergence of the Monte Carlo technique as $N \to +\infty$.

Figure 3.

The increasing convergence of the Monte Carlo method up to N = 50 iterations.

Figure 4.

The increasing convergence of the Monte Carlo method up to N = 100 iterations.

Figure 5.

The increasing convergence of the Monte Carlo method up to N = 500 iterations.

Figure 6.

The increasing convergence of the Monte Carlo method up to N = 100,000 iterations.
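A compact Python sketch of this estimation (added for illustration only; it is not the author's MATLAB or Visual C++ program, and the seed and sample sizes are arbitrary) implements the general Monte Carlo integration formula above for this particular integrand:

```python
# Hedged Monte Carlo estimate of the integral of xyzw over [0, 4/3]^4.
import random

def mc_estimate_4d(n_samples: int, b: float = 4.0 / 3.0, seed: int = 1) -> float:
    """Estimate the integral with n_samples uniform points in the hypercube [0, b]^4."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x, y, z, w = (rng.uniform(0.0, b) for _ in range(4))
        total += x * y * z * w
    volume = b ** 4                       # (4/3)^4 = 256/81, the hypercube volume
    return volume * total / n_samples     # RA = volume x (sample mean of the integrand)

re_exact = 4096.0 / 6561.0                # RE = 0.62429507696997411
for n in (50, 100, 500, 100_000):
    ra = mc_estimate_4d(n)
    pr = 1.0 - abs(re_exact - ra) / re_exact
    print(n, ra, pr)                      # RA approaches RE and Pr(N) approaches 1
```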

Moreover, Figure 7 shows undoubtedly and graphically the relation of all the parameters of the complex probability paradigm (Chf, RA, Pr, MChf, RE, DOK, Pm/i, Pc) to the Monte Carlo technique after applying CPP to this four-dimensional integral.

Figure 7.

The CPP parameters and the Monte Carlo method for a multiple integral.

7.2 The discrete random case: the matching birthday problem

An interesting problem that can be solved using simulation is the famous birthday problem. Suppose that in a room of n persons, each of the 365 days of the year (not a leap year) is equally likely to be someone’s birthday. It can be proved from the theory of probability and contrary to intuition that only 23 persons need to be present for the probability to be better than fifty-fifty that at least two of them will have the same birthday.

Many people are interested in checking the theoretical proof of this statement, so we will demonstrate it briefly before doing the problem simulation. After someone is asked about his or her birthday, the probability that the next person asked will not have the same birthday is 364/365. The probability that the third person's birthday will not match those of the first two people is 363/365. It is well known that the probability of two independent and successive events happening is the product of the probabilities of the separate events. In general, the probability that the nth person asked will have a birthday different from those of everyone already asked is $\frac{365 - (n - 1)}{365}$; hence,

$$P(\text{all } n \text{ birthdays are different}) = \frac{365}{365}\times\frac{364}{365}\times\frac{363}{365}\times\cdots\times\frac{365 - (n - 1)}{365}$$

The probability that at least two of the n persons will have matching birthdays is 1 minus this value:

$$P(\text{matching birthdays}) = 1 - \frac{365}{365}\times\frac{364}{365}\times\frac{363}{365}\times\cdots\times\frac{365 - (n - 1)}{365} = 1 - \frac{365\times364\times363\times\cdots\times\left(365 - (n - 1)\right)}{365^n} = R_E$$

which shows that with 23 persons, the chances are 50.7%; with 55 persons, the chances are 98.6% or almost theoretically certain that at least two out of 55 people will have the same birthday. The table gives the theoretical probabilities of matching birthdays for a selected number of people n (Table 1).

| Number of people n | Theoretical probability = RE |
|---|---|
| n = 5 | P = 0.027135573700 |
| n = 10 | P = 0.116948177711 |
| n = 15 | P = 0.252901319764 |
| n = 20 | P = 0.411438383581 |
| n = 22 | P = 0.475695307663 |
| n = 23 | P = 0.507297234324 |
| n = 25 | P = 0.568699703969 |
| n = 30 | P = 0.706316242719 |
| n = 35 | P = 0.814383238875 |
| n = 40 | P = 0.891231809818 |
| n = 45 | P = 0.940975899466 |
| n = 50 | P = 0.970373579578 |
| n = 55 | P = 0.986262288816 |
| n = 100 | P = 0.999999692751 |
| n = 133 | P = 0.999999999999 |
| n = 365 | P = 1.000000000000 |

Table 1.

Some theoretical probabilities of matching birthdays for n people, where $1 \le n \le 365$.

Without using probability theory, we can write a routine that uses the random number generator to compute the approximate chances for groups of n persons, as sketched below. Obviously, what is needed here is to choose n random integers from the set of integers {1, 2, 3, …, 365} and to check whether there is a match. When we repeat this experiment a large number of times, we can calculate afterward the probability of at least one match in any gathering of n persons. Note that if $n \ge 366$, then $P(\text{matching birthdays}) = 1$ by the famous pigeonhole principle.
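A short Python sketch of such a routine (my own illustration, not the author's program; the function names and the trial count are arbitrary choices) computes both the theoretical value $R_E$ from the product formula and a Monte Carlo estimate $R_A$ by repeated random sampling:

```python
# Hedged illustration of the matching birthday problem: theory vs. simulation.
import random

def birthday_theoretical(n: int) -> float:
    """Exact probability that at least two of n people share a birthday."""
    p_all_different = 1.0
    for k in range(n):
        p_all_different *= (365 - k) / 365
    return 1.0 - p_all_different

def birthday_monte_carlo(n: int, trials: int, seed: int = 1) -> float:
    """Estimate the same probability by simulating `trials` rooms of n people."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(trials):
        birthdays = [rng.randint(1, 365) for _ in range(n)]
        if len(set(birthdays)) < n:       # at least one shared birthday in this room
            matches += 1
    return matches / trials

n = 30
print(birthday_theoretical(n))            # RE ≈ 0.706316242719, as in Table 1
print(birthday_monte_carlo(n, 750_000))   # RA, converging toward RE as trials grow
```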

Furthermore, the four figures (Figures 8–11) illustrate and prove the increasing convergence of the Monte Carlo simulation and technique to the exact result $R_E = 0.706316242719$ for n = 30 and for N = 50, 100, 500, and N = NC = 750,000 iterations. Consequently, we have $\lim_{N \to +\infty} P_r(N) = \lim_{N \to +\infty}\left(1 - \frac{|R_E - R_A(N)|}{R_E}\right) = 1 - \frac{|R_E - R_E|}{R_E} = 1 - 0 = 1$, which is equal to the probability of convergence of the Monte Carlo technique as $N \to +\infty$.

Figure 8.

The increasing convergence of the Monte Carlo method up to N = 50 iterations.

Figure 9.

The increasing convergence of the Monte Carlo method up to N = 100 iterations.

Figure 10.

The increasing convergence of the Monte Carlo method up to N = 500 iterations.

Figure 11.

The increasing convergence of the Monte Carlo method up to N = 750,000 iterations.

Moreover, Figure 12 shows undoubtedly and graphically the relation of all the parameters of the complex probability paradigm (Chf, RA, Pr, MChf, RE, DOK, Pm/i, Pc) to the Monte Carlo technique after applying CPP to this matching birthday problem.

Figure 12.

The CPP parameters and the Monte Carlo techniques for the matching birthday problem.

7.2.1 The cubes of complex probability

In Figure 13 and in the first cube, the simulation of Chf and DOK as functions of the iterations N and of each other is executed for the problem of matching birthday. If we project Pc2(N) = DOK(N) − Chf(N) = 1 = Pc(N) on the plane N = 0 iterations, we will get the line in cyan. The starting point of this line is point J (DOK = 1, Chf = 0) when N = 0 iterations, and then the line gets to point (DOK = 0.5, Chf = −0.5) when N = 375,000 iterations and joins finally and again point J (DOK = 1, Chf = 0) when N = NC = 750,000 iterations. The graphs of Chf(N) (pink, green, blue) in different planes and DOK(N) (red) represent the other curves. We can notice that point K (DOK = 0.5, Chf = −0.5, N = 375,000 iterations) is the minimum of all these curves. We can notice also that point L has the coordinates (DOK = 1, Chf = 0, N = NC = 750,000 iterations). Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.

Figure 13.

Chf and DOK in terms of each other and of N for the problem of matching birthday.

In Figure 14 and in the second cube, we simulate the probability of convergence Pr(N) and its complementary real probability of divergence Pm(N)/i as functions of the iterations N for the problem of matching birthday. If we project Pc2(N) = Pr(N) + Pm(N)/i = 1 = Pc(N) on the plane N = 0 iterations, we will get the line in cyan. The starting point of this line is point (Pr = 0, Pm/i = 1), and the final point is point (Pr = 1, Pm/i = 0). The graph of Pr(N) in the plane Pr(N) = Pm(N)/i is represented by the red curve. The starting point of this graph is point J (Pr = 0, Pm/i = 1, N = 0 iterations), and then it gets to point K (Pr = 0.5, Pm/i = 0.5, N = 375,000 iterations) and joins finally point L (Pr = 1, Pm/i = 0, N = NC = 750,000 iterations). The graph of Pm(N)/i in the plane Pr(N) + Pm(N)/i = 1 is represented by the blue curve. We can notice how much point K is important and which is the intersection of the blue and red graphs when Pr(N) = Pm(N)/i = 0.5 at N = 375,000 iterations. Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.

Figure 14.

Pm/i and Pr in terms of each other and of N for the problem of matching birthday.

In Figure 15 and in the third cube, we simulate the vector of complex probabilities Z(N) in C as a function of the real probability of convergence Pr(N) = Re(Z) in R and of its complementary imaginary probability of divergence Pm(N) = i × Im(Z) in M, and as a function of the iterations N for the problem of matching birthday. The graph of Pr(N) in the plane Pm(N) = 0 is represented by the red curve, and the graph of Pm(N) in the plane Pr(N) = 0 is represented by the blue curve. The graph of the vector of complex probabilities Z(N) = Pr(N) + Pm(N) = Re(Z) + i × Im(Z) in the plane Pr(N) = iPm(N) + 1 is represented by the green curve. The graph of Z(N) has point J (Pr = 0, Pm = i, N = 0 iterations) as the starting point and point L (Pr = 1, Pm = 0, N = NC = 750,000 iterations) as the end point. If we project Z(N) curve on the plane of complex probabilities whose equation is N = 0 iterations, we get the line in cyan which is Pr(0) = iPm(0) + 1. This projected line has point J (Pr = 0, Pm = i, N = 0 iterations) as the starting point and point (Pr = 1, Pm = 0, N = 0 iterations) as the end point. We can notice how much point K is important, and it corresponds to Pr = 0.5 and Pm = 0.5i when N = 375,000 iterations. Additionally, the three points J, K, and L correspond to the same points that exist in Figure 12.

Figure 15.

The vector of complex probability Z in terms of N for the problem of matching birthday.


8. Perspectives and conclusion

In the current chapter, the extended and original Kolmogorov model of eight axioms (EKA) was connected and applied to the random and classical Monte Carlo techniques. Thus, we have bonded Monte Carlo algorithms to the novel CPP paradigm. Accordingly, the paradigm of "complex probability" was expanded further beyond the scope of my 14 earlier studies on this topic.

Also, as was proved and demonstrated in the original paradigm, when N = 0 (before the beginning of the random simulation) and when N = NC (after the convergence of the Monte Carlo algorithm to the exact result), the chaotic factor (Chf and MChf) is 0 and the degree of our knowledge (DOK) is 1, since the stochastic aspects and variations have either not commenced yet or have terminated their job on the random phenomenon. During the course of the nondeterministic phenomenon (N > 0), we have 0 < MChf ≤ 0.5, 0.5 ≤ DOK < 1, and −0.5 ≤ Chf < 0, and it can be noticed that throughout this entire process we have continually and incessantly $P_c^2 = DOK - Chf = DOK + MChf = 1 = P_c$, which means that the simulation that seemed to be random and nondeterministic in the set R is now deterministic and certain in the set C = R + M, and this after adding the contributions of M to the experiment happening in R and thus after removing and subtracting the chaotic factor from the degree of our knowledge. Additionally, the probabilities of convergence and divergence of the random Monte Carlo procedure that correspond to each iteration cycle N have been determined in the three sets of probabilities, which are C, M, and R, by Pc, Pm, and Pr, respectively. Subsequently, at each instance of N, the novel Monte Carlo techniques and CPP parameters DOK, Chf, MChf, RE, RA, Pr, Pm, Pm/i, Pc, and Z are perfectly and surely predicted in the set of complex probabilities C, with Pc kept equal to 1 continuously and forever. Also, referring to all the simulations and graphs shown throughout the entire chapter, we can visualize and quantify both the system chaos and stochastic influences and aspects (expressed by Chf and MChf) and the certain knowledge (expressed by DOK and Pc) of Monte Carlo algorithms. This is definitely very wonderful, fruitful, and fascinating and demonstrates once again the advantages of extending the five axioms of probability of Kolmogorov and thus the benefits and novelty of this original theory in applied mathematics and prognostics that can verily be called: "the complex probability paradigm."

Moreover, it is important to mention here that one essential and very well-known probability distribution was taken into consideration in the current chapter, which is the uniform and discrete probability distribution, together with a specific generator of uniform random numbers, knowing that the original CPP model can be applied to any generator of uniform random numbers that exists in the literature. This will certainly yield analogous results and conclusions and will confirm without any doubt the success of my innovative theory.

As prospective and future challenges and research, we intend to develop the novel prognostic paradigm further and to apply it to a diverse set of nondeterministic events, such as other stochastic phenomena in the classical theory of probability and in stochastic processes. Additionally, we will apply CPP to the first-order reliability method (FORM) in the field of prognostics in engineering and also to random walk problems, which have huge consequences when applied to economics, chemistry, physics, and pure and applied mathematics.


Nomenclature

R: the real set of events
M: the imaginary set of events
C: the complex set of events
i: the imaginary number with $i^2 = -1$ or $i = \sqrt{-1}$
EKA: extended Kolmogorov axioms
CPP: complex probability paradigm
Prob: probability of any event
Pr: the probability in the real set R = the probability of convergence in R
Pm: the probability in the complementary imaginary set M that corresponds to the real probability set R = the probability of divergence in M
Pc: the probability of an event in R with its associated event in M = the probability in the set C = R + M of complex probabilities
RE: the exact result of the random experiment
RA: the approximate result of the random experiment
Z: complex probability number = complex random vector = sum of Pr and Pm
DOK = |Z|²: the degree of our knowledge of the stochastic experiment or system; it is the square of the norm of Z
Chf: the chaotic factor of Z
MChf: the magnitude of the chaotic factor of Z
N: the number of iteration cycles = number of random vectors
NC: the number of iteration cycles until the convergence of the Monte Carlo method to RE = the number of random vectors until convergence.

References

  1. Abou Jaoude A, El-Tawil K, Kadry S. Prediction in complex dimension using Kolmogorov's set of axioms. Journal of Mathematics and Statistics, Science Publications. 2010;6(2):116-124
  2. Abou Jaoude A. The complex statistics paradigm and the law of large numbers. Journal of Mathematics and Statistics, Science Publications. 2013;9(4):289-304
  3. Abou Jaoude A. The theory of complex probability and the first order reliability method. Journal of Mathematics and Statistics, Science Publications. 2013;9(4):310-324
  4. Abou Jaoude A. Complex probability theory and prognostic. Journal of Mathematics and Statistics, Science Publications. 2014;10(1):1-24
  5. Abou Jaoude A. The complex probability paradigm and analytic linear prognostic for vehicle suspension systems. American Journal of Engineering and Applied Sciences, Science Publications. 2015;8(1):147-175
  6. Abou Jaoude A. The paradigm of complex probability and the Brownian motion. Systems Science and Control Engineering, Taylor and Francis Publishers. 2015;3(1):478-503
  7. Abou Jaoude A. The paradigm of complex probability and Chebyshev's inequality. Systems Science and Control Engineering, Taylor and Francis Publishers. 2016;4(1):99-137
  8. Abou Jaoude A. The paradigm of complex probability and analytic nonlinear prognostic for vehicle suspension systems. Systems Science and Control Engineering, Taylor and Francis Publishers. 2016;4(1):99-137
  9. Abou Jaoude A. The paradigm of complex probability and analytic linear prognostic for unburied petrochemical pipelines. Systems Science and Control Engineering, Taylor and Francis Publishers. 2017;5(1):178-214
  10. Abou Jaoude A. The paradigm of complex probability and Claude Shannon's information theory. Systems Science and Control Engineering, Taylor and Francis Publishers. 2017;5(1):380-425
  11. Abou Jaoude A. The paradigm of complex probability and analytic nonlinear prognostic for unburied petrochemical pipelines. Systems Science and Control Engineering, Taylor and Francis Publishers. 2017;5(1):495-534
  12. Abou Jaoude A. The paradigm of complex probability and Ludwig Boltzmann's entropy. Systems Science and Control Engineering, Taylor and Francis Publishers. 2018;6(1):108-149
  13. Abou Jaoude A. The paradigm of complex probability and Monte Carlo methods. Systems Science and Control Engineering, Taylor and Francis Publishers. 2019;7(1):407-451
  14. Abou Jaoude A. Analytic prognostic in the linear damage case applied to buried petrochemical pipelines and the complex probability paradigm. In: Fault Detection, Diagnosis and Prognosis. London, UK: IntechOpen; 2020. DOI: 10.5772/intechopen.90157
  15. Abou Jaoude A. The Computer Simulation of Monté Carlo Methods and Random Phenomena. United Kingdom: Cambridge Scholars Publishing; 2019
  16. Abou Jaoude A. The Analysis of Selected Algorithms for the Stochastic Paradigm. United Kingdom: Cambridge Scholars Publishing; 2019
  17. Abou Jaoude A. Applied mathematics: Numerical methods and algorithms for applied mathematicians [PhD thesis]. Spain: Bircham International University. 2004. Available from: http://www.bircham.edu
  18. Abou Jaoude A. Computer science: Computer simulation of Monté Carlo methods and random phenomena [PhD thesis]. Spain: Bircham International University. 2005. Available from: http://www.bircham.edu
  19. Abou Jaoude A. Applied statistics and probability: Analysis and algorithms for the statistical and stochastic paradigm [PhD thesis]. Spain: Bircham International University. 2007. Available from: http://www.bircham.edu
  20. Metropolis N. The Beginning of the Monte Carlo Method. Los Alamos Science (1987 Special Issue dedicated to Stanislaw Ulam); 1987. pp. 125-130
  21. Eckhardt R. Stan Ulam, John von Neumann, and the Monte Carlo Method. Los Alamos Science, Special Issue (15); 1987. pp. 131-137
  22. Mazhdrakov M, Benov D, Valkanov N. The Monte Carlo Method. Engineering Applications. Cambridge: ACMO Academic Press; 2018. p. 250. ISBN: 978-619-90684-3-4
  23. Peragine M. The Universal Mind: The Evolution of Machine Intelligence and Human Psychology. San Diego, CA: Xiphias Press; 2013. [Retrieved: 17 December 2018]
  24. McKean HP. Propagation of chaos for a class of non-linear parabolic equations. In: Lecture Series in Differential Equations, Session 7. Arlington, VA: Catholic University; 1967. pp. 41-57. Bibcode: 1966PNAS...56.1907M. DOI: 10.1073/pnas.56.6.1907. PMC: 220210. PMID: 16591437
  25. Herman K, Theodore HE. Estimation of particle transmission by random sampling. National Bureau of Standards: Applied Mathematics Series. 1951;12:27-30
  26. Turing AM. Computing machinery and intelligence. Mind. LIX. 1950;238:433-460. DOI: 10.1093/mind/LIX.236.433
  27. Barricelli NA. Symbiogenetic evolution processes realized by artificial methods. Methodos. 1957:143-182
  28. Del Moral P. Feynman–Kac Formulae. Genealogical and Interacting Particle Approximations. Series: Probability and Applications. Berlin: Springer; 2004. p. 575
  29. Assaraf R, Caffarel M, Khelif A. Diffusion Monte Carlo methods with a fixed number of walkers. Physical Review E. 2000;61(4):4566-4575. Bibcode: 2000PhRvE...61.4566A. DOI: 10.1103/physreve.61.4566. Archived from the original (PDF) on 2014-11-07
  30. Caffarel M, Ceperley D, Kalos M. Comment on Feynman–Kac path-integral calculation of the ground-state energies of atoms. Physical Review Letters. 1993;71(13):2159. Bibcode: 1993PhRvL...71.2159C. DOI: 10.1103/physrevlett.71.2159. PMID: 10054598
  31. Hetherington JH. Observations on the statistical iteration of matrices. Physical Review A. 1984;30(2713):2713-2719. Bibcode: 1984PhRvA...30.2713H. DOI: 10.1103/PhysRevA.30.2713
  32. Fermi E, Richtmyer RD. Note on Census-Taking in Monte Carlo Calculations (PDF). LAM. 805 (A). Declassified Report Los Alamos Archive; 1948
  33. Rosenbluth MN, Rosenbluth AW. Monte-Carlo calculations of the average extension of macromolecular chains. The Journal of Chemical Physics. 1955;23(2):356-359. Bibcode: 1955JChPh...23...356R. DOI: 10.1063/1.1741967
  34. Gordon NJ, Salmond DJ, Smith AFM. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. Radar and Signal Processing, IEE Proceedings F. April 1993;140(2):107-113. DOI: 10.1049/ip-f-2.1993.0015. ISSN 0956-375X
  35. Kitagawa G. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics. 1996;5(1):1-25. DOI: 10.2307/1390750. JSTOR 1390750
  36. Carvalho H, Del Moral P, Monin A, Salut G. Optimal non-linear filtering in GPS/INS integration. IEEE Transactions on Aerospace and Electronic Systems. 1997;33(3):835-850
  37. Del Moral P, Rigal G, Salut G. Nonlinear and Non-Gaussian Particle Filters Applied to Inertial Platform Repositioning. LAAS-CNRS, Toulouse, Research Report No. 92207, STCAN/DIGILOG-LAAS/CNRS Convention STCAN No. A.91.77.013. 1991. p. 94
  38. Crisan D, Gaines J, Lyons T. Convergence of a branching particle method to the solution of the Zakai. SIAM Journal on Applied Mathematics. 1998;58(5):1568-1590. DOI: 10.1137/s0036139996307371
  39. Crisan D, Lyons T. Nonlinear filtering and measure-valued processes. Probability Theory and Related Fields. 1997;109(2):217-244. DOI: 10.1007/s004400050131
