
Variance Reduction of Monte Carlo Simulation in Nuclear Engineering Field

Written By

Pooneh Saidi, Mahdi Sadeghi and Claudio Tenreiro

Submitted: 17 April 2012 Published: 06 March 2013

DOI: 10.5772/53384

From the Edited Volume

Theory and Applications of Monte Carlo Simulations

Edited by Victor (Wai Kin) Chan


1. Introduction

The Monte Carlo method is a numerical technique that uses random numbers and probability to solve problems. It attempts to model nature through direct simulation of every possible outcome, substituting a range of values (a probability distribution) for any factor that has inherent uncertainty. The name "Monte Carlo" refers to the Monte Carlo Casino in the principality of Monaco, because of the similarity of statistical simulation to games of chance such as roulette, a simple random number generator; the name was coined by Metropolis during the Manhattan Project of World War II, and the systematic development of the method dates from about 1944 [1].

Monte Carlo is now used routinely in many fields, such as radiation transport in nuclear engineering, dosimetry in medical physics, risk analysis and economics. In all these applications the physical process of the solution is simulated directly, based on a set of major components that must be available during the simulation (a minimal sketch combining these components follows the list below). The primary components of a Monte Carlo simulation are:

Probability density functions (pdf's): the physical system must be described by a set of pdf's;

Random number generator: a source of random numbers uniformly distributed on the unit interval must be available;

Sampling rule, a prescription for sampling from the specified pdf’s;

Scoring: the outcomes must be accumulated into overall tallies or scores for the quantities of interest;

Error estimation: an estimate of the statistical error (variance) as a function of the number of trials and other quantities must be determined;

Variance reduction techniques: methods for reducing the variance in the estimated solution to reduce the computational time for Monte Carlo simulation;

Parallelization and vectorization: algorithms that allow the Monte Carlo method to be implemented efficiently on advanced computer architectures.
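As a rough illustration (not taken from the chapter), the following Python sketch puts these components together for a toy problem: estimating the mean free path of photons in a medium with an assumed total cross section. The cross-section value, the function names and the history count are hypothetical choices for illustration.

```python
import math
import random

SIGMA_T = 0.5   # assumed total macroscopic cross section (1/cm)
N = 100_000     # number of histories (arbitrary)

def sample_path_length(rng):
    """Sampling rule: draw a flight distance from the exponential pdf
    p(x) = SIGMA_T * exp(-SIGMA_T * x) by inverting its CDF."""
    return -math.log(1.0 - rng.random()) / SIGMA_T

rng = random.Random(12345)          # random number generator on (0, 1)
total, total_sq = 0.0, 0.0          # scoring: accumulate tallies
for _ in range(N):
    x = sample_path_length(rng)
    total += x
    total_sq += x * x

mean = total / N                                  # sample mean
var = (total_sq / N - mean**2) * N / (N - 1)      # sample variance
std_err = math.sqrt(var / N)                      # error estimate
print(f"mean free path ~ {mean:.4f} +/- {std_err:.4f} cm (exact {1/SIGMA_T})")
```

Every component listed above except parallelization appears: a pdf (the exponential flight-distance distribution), a random number generator, a sampling rule, scoring, and an error estimate.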

In the field of nuclear engineering, deterministic and stochastic (Monte Carlo) methods are used to solve radiation transport problems. Deterministic methods solve the transport equation for the average particle behavior; they also contain uncertainties associated with the discretization of the independent variables (space, energy and angle) of the transport equation and can admit solutions that exhibit non-physical features. Although the physics of photon and electron interactions in matter is well understood, it is in general impossible to develop an analytic expression to describe particle transport in a medium, because electrons can create both photons (e.g., as bremsstrahlung) and secondary or knock-on electrons (δ-rays) and, conversely, photons can produce both electrons and positrons [2]. The Monte Carlo (MC) method obtains results by simulating individual particles and recording some aspects of their average behavior. The average behavior of particles in the physical system is then inferred from the average behavior of the simulated particles [3]. The method also allows detailed, explicit geometric, energy and angular representations and is therefore considered the most accurate method presently available for solving complex radiation transport problems. For example, the most important role of Monte Carlo in radiotherapy is to obtain dosimetric parameters with high spatial resolution [4]. As the cost of computing has continued to decrease over the last decades, applications of Monte Carlo radiation transport techniques have proliferated dramatically. Monte Carlo techniques have also become widely used because of the availability of powerful codes such as BEAM, EGSnrc, PENELOPE and ETRAN/ITS/MCNP on personal computers. These codes are able to accommodate complex 3-D geometries, include flexible physics models that provide coupled electron-photon and neutron-photon transport, and draw on extensive continuous-energy cross section libraries derived from evaluated nuclear data files [5].

It should be noted that these codes are general purpose and are therefore not optimized for any particular application; their performance depends strongly on the problem being solved. One of the difficulties associated with Monte Carlo calculations is the amount of computer time required to achieve sufficient precision in the simulations. Despite substantial advances in computational hardware and the widespread availability of parallel computers, the computer time required for analog MC is still considered exorbitant and prohibitive for the design and analysis of many relevant real-world nuclear applications, especially for problems with complex and large geometries. But there are many ways (other than increasing the simulation time) in which users can improve the precision of a Monte Carlo calculation. These are known as variance reduction techniques, and they are required to enable the Monte Carlo calculation of the quantities of interest with the desired statistical uncertainty. Without variance reduction techniques in complex problems, a Monte Carlo code could run the problem continuously for weeks and still not obtain statistically reliable results. The goal of variance reduction techniques is to produce a more accurate and precise estimate of the expected value than could be obtained in an analog calculation with the same computational effort. Variance reduction parameters vary with problem type, so iterative steps must be repeated to determine the VR parameters for each new problem [6].


2. Conceptual role of the Monte Carlo simulation

The conceptual role of the Monte Carlo simulation is to create a model similar to the real system, based on known probabilities of occurrence, with random sampling of the PDFs.

This method is used to evaluate the average or expected behavior of a system by simulating a large number of events responsible for its behavior and observing the outcomes. Based on knowledge of the distributions of the events that occur in the system, almost any complex system can be modeled. Increasing the number of individual events (histories) improves the reported average behavior of the system.

In many applications of Monte Carlo the physical process is simulated directly, and there is no need even to write down the differential equations that describe the behavior of the system. The only requirement is that the physical or mathematical system be described by probability density functions (PDFs). Once the probability density functions are known, the Monte Carlo simulation can proceed by random sampling from them. Many trials or histories are then performed, and the desired result is taken as an average over the number of observations. In many practical applications one can predict the statistical error (the variance) in this average result, and hence estimate the number of Monte Carlo trials needed to achieve a given error.


3. Accuracy, precision and relative error in Monte Carlo simulation

The first component of a Monte Carlo calculation is the numerical sampling of random variables with specified PDFs. A random variable is defined as a real number that is assigned to an event. It is random because the event is random, and it is variable because the assignment of the value varies over the real values. In principle, a random number is simply a particular value taken on by a random variable.

When a random number generator is used on a computer, the random number sequence is not truly random; real random numbers are hard to obtain. The numbers are produced by an algorithm, and the sequence repeats itself after some period: once the whole sequence has been walked through, it starts again from the beginning. The random numbers are typically produced in the range between 0 and 1.

A sequence of real random numbers is unpredictable and therefore not reproducible. Such sequences can only be generated by a random physical process, for example radioactive decay, cosmic ray arrival times or nuclear interactions. If such a physical process is used to generate the random numbers for a Monte Carlo calculation, there is no theoretical problem. Computer-generated sequences, in contrast, are not truly random; such numbers are called pseudorandom [7]. Pseudorandom numbers look nearly random when the algorithm is not known and may be good enough for our purposes. They are generated according to a strict mathematical formula and are therefore reproducible and not at all random in the mathematical sense, but they are supposed to be indistinguishable from a sequence generated truly randomly: someone who does not know the formula should not be able to tell that a formula was used rather than a physical process. When using Monte Carlo simulation, it is desirable to obtain any variable of interest from a uniformly distributed variable, ρ. The probability, P, that a random number is smaller than a certain value, s, should be equal for both distributions [8]:

$$P(x < \rho) = P(y < s) \tag{E1}$$

The probability, which lies in the range between 0 and 1, can be rewritten as a cumulative distribution:

$$\int_{-\infty}^{\rho} g(x)\,dx = \int_{-\infty}^{s} f(y)\,dy = \rho \tag{E2}$$

The left side of Eq. (2) is the uniform distribution between 0 and 1, and f(y) is the distribution needed. In this way any distribution can be generated from a uniform distribution.
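As a concrete (hypothetical) example of Eq. (2), the sketch below samples an exponential distribution f(y) = λ e^(−λy) by inverting its cumulative distribution; the rate parameter and sample count are arbitrary choices for illustration.

```python
import math
import random

def sample_exponential(lam, rho):
    """Invert the cumulative distribution of f(y) = lam * exp(-lam * y):
    integral_0^s f(y) dy = 1 - exp(-lam * s) = rho  =>  s = -ln(1 - rho) / lam."""
    return -math.log(1.0 - rho) / lam

rng = random.Random(1)
samples = [sample_exponential(2.0, rng.random()) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to 1/lam = 0.5
```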

Monte Carlo results are obtained by simulating particle histories and assigning a score xi to each particle history. The particle histories typically produce a range of scores depending on the selected tally. Considering f(x) as the probability density function (pdf) for selecting a particle history that scores x to the tally being estimated, the true answer (or true mean) is the expected value of x [3]:

$$E(x) = \int x\, f(x)\,dx = \text{true mean} \tag{E3}$$

Assuming a scalar output for each Monte Carlo history, the Monte Carlo sample mean of the first n simulation runs is defined as follows:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{E4}$$

Where xi is the value of x selected from the probability density function f(x) for the i-th history and n is the total number of histories calculated in the problem. The sample mean x̄ is the average value of the xi over all the histories used in the problem. By itself, however, it does not indicate how accurate the estimate is, nor how much confidence can be placed in it. To evaluate the confidence in the estimate, the variance is used. The variance measures how much the individual samples are spread around the mean value and is given by:

$$\delta^2 = \int \left(x - E(x)\right)^2 f(x)\,dx = E(x^2) - \left(E(x)\right)^2 \tag{E5}$$

Where δ² is the variance, E(x) is the true mean and f(x) is the probability density function (pdf).

The standard deviation of the scores is the square root of the variance δ². In a Monte Carlo calculation it is estimated by the sample standard deviation s, which is obtained from the following equation:

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 \approx \overline{x^2} - \bar{x}^2 \tag{E6}$$

Where

$$\overline{x^2} = \frac{1}{n}\sum_{i=1}^{n} x_i^2 \tag{E7}$$

To define the confidence interval of a Monte Carlo estimate, two statistical theorems are used: the law of large numbers and the central limit theorem.

The law of large numbers states that the sample mean converges to the true mean as n tends to infinity, but it gives no indication of how large n must be in practice to reach a given uncertainty.

To define a confidence interval for the precision of a Monte Carlo result, the central limit theorem of probability is used as follows [9]:

$$\lim_{n \to \infty} \operatorname{Prob}\left\{ E(x) + \alpha\frac{\delta}{\sqrt{n}} < \bar{x} < E(x) + \beta\frac{\delta}{\sqrt{n}} \right\} = \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-t^2/2}\,dt \tag{E8}$$

Where α and β can be any arbitrary values and n is the number of histories in the simulation. According to Eq. (8) the uncertainty is proportional to 1/√n, so quadrupling the number of histories only halves the uncertainty of the estimate, which is an inherent drawback of the Monte Carlo method. For large n, Eq. (8) can be rewritten as:

$$\operatorname{Prob}\left\{ \alpha < \frac{\bar{x} - E(x)}{\delta/\sqrt{n}} < \beta \right\} \approx \frac{1}{\sqrt{2\pi}} \int_{\alpha}^{\beta} e^{-t^2/2}\,dt \tag{E9}$$

In terms of the sample standard deviation s, for large n Eq. (9) can also be written as:

$$\operatorname{Prob}\left[ \bar{x} - \lambda\frac{s}{\sqrt{n}} \le E(x) \le \bar{x} + \lambda\frac{s}{\sqrt{n}} \right] \approx \frac{1}{\sqrt{2\pi}} \int_{-\lambda}^{\lambda} e^{-t^2/2}\,dt \tag{E10}$$

Here λ is the number of standard deviations from the mean over which the unit normal distribution is integrated to obtain the confidence coefficient. Results for various values of λ are shown in Table 1. To quote a confidence level, the estimate for x is therefore generally reported as:

$$\bar{x} \pm \lambda\frac{s}{\sqrt{n}} \tag{E11}$$

For example, for λ = 1 the interval [x̄ − s/√n, x̄ + s/√n] has a 68% chance of containing the true mean (a short numerical sketch follows Table 1).

λ 0.25 0.50 1.00 1.5 2.00 3.00 4.00
Nominal Confidence Limit 20% 38% 68% 87% 95% 99% 99.99%

Table 1.

Results for various values of λ
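A short numerical sketch of Eqs. (4), (6), (11) and (12) is given below; the scores are drawn from an arbitrary distribution purely for illustration, and the variable names are not taken from any particular code.

```python
import math
import random

# Hypothetical scores x_i from some tally, drawn from an assumed distribution.
rng = random.Random(7)
scores = [rng.expovariate(1.0) for _ in range(10_000)]

n = len(scores)
x_bar = sum(scores) / n                                  # Eq. (E4): sample mean
s2 = sum((x - x_bar) ** 2 for x in scores) / (n - 1)     # Eq. (E6): sample variance
s_xbar = math.sqrt(s2 / n)                               # Eq. (E12): std. dev. of the mean

for lam, label in [(1.0, "68%"), (2.0, "95%"), (3.0, "99%")]:
    lo, hi = x_bar - lam * s_xbar, x_bar + lam * s_xbar  # Eq. (E11)
    print(f"{label} confidence interval: [{lo:.4f}, {hi:.4f}]")
```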

Eq. (8) shows that the deviation of the sample mean from the true mean approaches zero as n → ∞, and that the quantity δ/√n provides a measure of the deviation of the sample mean from the population mean when n samples are used.

To construct a confidence interval for the sample mean x̄ that has a specified probability of containing the true, unknown mean, the sample standard deviation s is used to approximate the population standard deviation δ. This requires that E(x) and δ² exist and be finite.

The sample variance of x̄ is then given by:

$$s_{\bar{x}}^2 = \frac{s^2}{n} \tag{E12}$$

It should be noted that the confidence intervals are valid only if the physical phase space is adequately sampled by the Monte Carlo calculation. The uncertainty of the Monte Carlo sampling of the physical phase space represents the precision of the simulation. Several factors can affect the precision, such as the tally type, the variance reduction techniques and the number of histories simulated. In general, the uncertainty or error caused by the statistical fluctuations of the xi refers to the precision of the results and not to their accuracy. Accuracy is a measure of how close the sample mean x̄ is to the true mean (Figure 1).

Figure 1.

Schematic diagram of the definition for accuracy and precision

On the other hand, the difference between the true mean and the sample mean is called the systematic error. The relative error R at the 1δ level, which represents the statistical precision, is estimated by Eq. (13) [10]:

$$R = \frac{s_{\bar{x}}}{\bar{x}} \tag{E13}$$

According to the central limit theorem, the estimated relative error squared, R², should be proportional to 1/n. Since each history takes, on average, the same amount of computer time, the computer time T used in a Monte Carlo calculation should be directly proportional to n (the number of histories); therefore R²T should be approximately constant. Thus, the metric of efficiency for a given tally, called the figure of merit (FOM), includes the computer time as well and is defined as [11]:

$$\mathrm{FOM} = \frac{1}{R^2\, T} \tag{E14}$$

Where R is the relative error for the sample mean, and T is the total computer time taken to simulate n histories.‎[12], ‎[13]

The FOM is also a tally reliability indicator in the sense that if the tally is well behaved, the FOM should be approximately constant (with the possible exception of statistical fluctuations early in the problem), and is thus an important and useful parameter to assess the quality (statistical behavior) of a tally bin. If the FOM is not approximately constant, the confidence intervals may not include the expected score value E(x) the expected fraction of the time.

The significance of the actual value of the FOM can be seen by rewriting the previous relation as T = 1/(R²·FOM) [14]. This expression shows a direct relationship between computer time and the value of the FOM: increasing the FOM for a given tally will reduce the amount of computer time required to reach a desired level of precision. Thus the FOM can be used to measure efficiency when variance reduction techniques are employed. The ratio of the FOMs before and after using a variance reduction technique gives the factor of improvement.

Another use of the FOM is to investigate the improvement of a new version of a Monte Carlo code.

The ratio of the FOMs for identical sample problems gives the factor of improvement. When the FOM is not constant as a function of n, the result is not statistically stable; that is, no matter how many histories have been run, the important particles show up infrequently and have not yet been sampled enough [3, 9].

An additional use of the FOM is to estimate the computer time required to reach a desired precision, using T ≈ 1/(R² × FOM).
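The following sketch illustrates Eqs. (13) and (14) and the time estimate above with made-up numbers for two hypothetical runs of the same tally; it is meant only to show how the quantities combine.

```python
import math

def relative_error(x_bar, s, n):
    """R = s_xbar / x_bar with s_xbar = s / sqrt(n), as in Eqs. (E12)-(E13)."""
    return (s / math.sqrt(n)) / x_bar

def figure_of_merit(R, T):
    """FOM = 1 / (R^2 * T), Eq. (E14); T is the computer time (e.g. minutes)."""
    return 1.0 / (R * R * T)

# Illustrative (made-up) numbers: the same tally before and after variance reduction.
R1, T1 = 0.05, 10.0
R2, T2 = 0.01, 20.0
fom1, fom2 = figure_of_merit(R1, T1), figure_of_merit(R2, T2)
print(f"FOM before = {fom1:.1f}, after = {fom2:.1f}, improvement x{fom2 / fom1:.1f}")

# Estimated time to reach a target precision R_target: T ~ 1 / (R_target^2 * FOM)
R_target = 0.005
print(f"time to reach R = {R_target}: ~{1.0 / (R_target**2 * fom2):.0f} minutes")
```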


4. Variance reduction

The uncertainty of a Monte Carlo simulation can be decreased by implementing more accurate physical models, but this leads to longer calculation times. On the other hand, the accuracy of a Monte Carlo dose calculation [15] is mainly restricted by the statistical noise, because the influence of the Monte Carlo method's approximations should be much smaller. This statistical noise can be decreased by a larger number of histories, again leading to longer calculation times. However, there are a variety of techniques to decrease the statistical fluctuations of Monte Carlo calculations without increasing the number of particle histories; these techniques are known as variance reduction [16]. Variance reduction techniques often make it possible to substantially decrease the relative error, R, by producing particles, destroying them, or both.

Decreasing the standard deviation, δ, and increasing the number of particle histories, n, for a given amount of computer time conflict with each other, because decreasing δ typically requires more computer time per history, whereas increasing n leaves less time per history [17]. In general not all techniques are appropriate for all applications, and in some cases techniques interfere with each other, so the choice of variance reduction technique depends strongly on the problem.

The main goal of all variance reduction techniques is to increase the precision and decrease the relative error. The precision of the calculation can be increased by increasing the number of particle histories, but this requires a large amount of computer running time; variance reduction techniques are therefore applied to accelerate the Monte Carlo simulation and reduce the computing time [16, 17].

Monte Carlo variance reduction techniques can be divided into four classes:

The truncation methods, such as geometry truncation and time and energy cutoffs;

The population control methods, such as Russian roulette, geometry splitting and weight windows;

The modified sampling methods (e.g. source biasing); and

The partially deterministic methods, such as point detectors.


5. Popular variance reduction techniques

Several of the more widely used variance reduction techniques are summarized as follows:

5.1. Splitting/Roulette

Geometric Splitting/Russian roulette is one of the oldest and most widely used variance reduction techniques, and when used properly, can significantly reduce the computational time of a Monte Carlo calculation.

Approximately 50% of the CPU time is consumed tracking secondary and higher-order photons and the electrons they set in motion. Part of these photons can be removed by Russian roulette [18].

Generally, when a particle moves from a region of importance Ii to a more important region Ij (Ii < Ij), the particle is split into n = Ij/Ii identical particles of weight w/n (if n is not an integer, splitting is done in a probabilistic manner so that the expected number of splits is equal to the importance ratio). This means that the number of particles is increased to provide better sampling while the total weight is preserved; for a 2-for-1 split, for example, the weight of each particle is halved. Figure 2 shows a schematic diagram of geometry splitting, when a particle moves from a lower importance region to a region with higher importance. Splitting increases the calculation time and decreases the variance, whereas Russian roulette does the complete opposite [19].

Figure 2.

The splitting process

In case of moving to a less important region, Russian roulette is played and the particle is killed with probability 1 − (Ij/Ii), or followed further with probability Ij/Ii and weight w × Ii/Ij.

This means that the particles are killed to prevent wasting time on them (Figure 3).

The objective of these techniques is to spend more time sampling important spatial cells and less time sampling unimportant spatial cells. This is done by dividing the problem geometry into cells and assigning each cell i an importance Ii.

Figure 3.

The Russian roulette process

Energy splitting/roulette is similar to geometric splitting/roulette except that it is performed on the energy domain rather than on the spatial domain.

It can be shown that with splitting/Russian roulette the weights of all particle tracks in a cell are the same, no matter which geometrical path the tracks have taken to reach the cell, assuming that no other biasing techniques (e.g. implicit capture) are used. In the simulations, if a track's energy drops below a prescribed energy level, the roulette game (based on the input value of the survival probability) is played. If the game is won, the track's history is continued, but its weight is increased by the reciprocal of the survival probability to conserve weight [20]. Russian roulette is frequently, if not always, used in radiation transport problems and can be applied at any time during the life of a particle, usually after an interaction has taken place. Russian roulette always increases the variance, since it cuts off histories that could still contribute to the detector, but it also always reduces the simulation time compared with an implicit capture scheme (explained later) without weight thresholds.

Generally, in a deep penetration shielding problem the number of particles diminishes to almost nothing in an analog simulation, but splitting helps keep the numbers built up. To obtain accurate and precise results it is recommended to keep the population of tracks traveling in the desired direction more or less constant, that is, approximately equal to the number of particles started from the source. Particles are killed immediately upon entering a zero-importance cell, which acts as a geometry cutoff. Geometry splitting/Russian roulette works well only in problems without any extreme angular dependence [21]. In the extreme case, if no particles ever enter an important cell where they could be split, splitting/Russian roulette is useless. Energy splitting and roulette are typically used together. Energy splitting/roulette is independent of the spatial cell; if the problem has space-energy dependence, the space-energy dependent weight window is normally a better choice.

Splitting and roulette are very common techniques in Monte Carlo simulation; not only because of their simplicity but also since they only deal with variance reduction via population control and do not modify pdfs, they can be used in addition to most other techniques to have more effect.
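A minimal sketch of the splitting/roulette rule described above is given below, assuming the rule stated in the text (split by the importance ratio; roulette with survival probability equal to the ratio). The function name and the handling of non-integer ratios are illustrative choices, and production codes implement additional refinements.

```python
import random

def splitting_roulette(weight, imp_from, imp_to, rng):
    """Return the list of particle weights after a particle of weight `weight`
    crosses from a cell of importance `imp_from` into one of importance `imp_to`."""
    ratio = imp_to / imp_from
    if ratio > 1.0:                          # more important region: split
        n_int = int(ratio)
        # probabilistic splitting so the expected number of splits equals the ratio
        n = n_int + (1 if rng.random() < ratio - n_int else 0)
        return [weight / ratio] * n          # expected total weight is conserved
    elif ratio < 1.0:                        # less important region: Russian roulette
        if rng.random() < ratio:
            return [weight / ratio]          # survives with increased weight w * Ii/Ij
        return []                            # killed
    return [weight]                          # same importance: unchanged

rng = random.Random(3)
print(splitting_roulette(1.0, 1.0, 4.0, rng))   # typically four particles of weight 0.25
print(splitting_roulette(1.0, 4.0, 1.0, rng))   # survives with weight 4.0 or is killed
```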

5.2. Energy cut off

A Monte Carlo simulation can be made much faster by stopping a particle once its energy drops below a certain threshold energy (cutoff energy). From the particle energy and the material the particle is travelling through, the remaining path length of the particle can be estimated. If this path length is below the required spatial resolution, the particle is terminated and its energy is assumed to be absorbed locally. This is done by the energy cutoff, which terminates tracks and thus decreases the time per history [22]. Because low-energy particles can produce high-energy particles, the energy cutoff can be used only when it is known that low-energy particles are of zero or almost zero importance in the region of interest (low-energy particles may have zero importance in some regions and high importance in others) [23]. In Monte Carlo simulations Ecut is the photon energy cutoff parameter: if a scattered photon is created with energy less than Ecut, the photon is not transported and its energy is deposited locally. From this explanation it seems that the smaller Ecut, the more accurate the results, but two criteria should be considered when selecting Ecut: (a) the mean free path (MFP) of photons with energy equal to or less than Ecut should be small compared with the voxel sizes, or (b) the energy fraction carried by photons with energy less than Ecut should be negligible compared with the energy fraction deposited. In terms of efficiency, selecting a higher Ecut decreases the CPU time, but a higher Ecut can also become a source of additional statistical fluctuations if it becomes comparable to, or even bigger than, the average energy deposited by electrons. In this case the answer will be biased (low) if the energy cutoff kills particles that might otherwise have contributed, even as N → ∞ [9].
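A trivial sketch of the cutoff test is shown below; the cutoff value, the particle representation and the local-deposit bookkeeping are assumptions made purely for illustration.

```python
E_CUT = 0.01  # MeV, assumed photon cutoff energy (illustrative value)

def apply_energy_cutoff(particle, local_deposit):
    """Terminate the particle and deposit its energy locally if it falls below
    the cutoff; otherwise it keeps being transported."""
    if particle["energy"] < E_CUT:
        cell = particle["cell"]
        local_deposit[cell] = local_deposit.get(cell, 0.0) + particle["energy"] * particle["weight"]
        particle["alive"] = False
    return particle

deposit = {}
p = {"energy": 0.004, "weight": 1.0, "cell": 5, "alive": True}
apply_energy_cutoff(p, deposit)
print(p["alive"], deposit)   # False {5: 0.004}
```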

5.3. Time cutoff

A time cutoff is like Russian roulette with zero survival probability. The time cutoff terminates tracks and thus decreases the computer time per history: particles are terminated when their time exceeds the time cutoff. The time cutoff can only be used in time-dependent problems where the last time bin ends earlier than the cutoff. The energy and time cutoffs are similar, but more caution is needed with the energy cutoff because low-energy particles can produce high-energy particles, whereas a late-time particle cannot produce an early-time particle [10].


6. Weight window technique/weight window generator

The weight window technique administers the splitting and rouletting of particles based on a space- and energy-dependent importance. It is one of the most widely used and effective variance reduction methods, because it addresses both the direct decrease of variance through a larger number of samples (splitting) and the decrease of simulation time through Russian roulette; in effect, it combines Russian roulette and splitting.

To apply this variance reduction technique, a lower weight bound and the width of the weight window must be specified for each energy interval of each spatial cell.

If a particle's weight is below the lower weight bound, Russian roulette is performed, and the particle's weight is either increased to be within the weight window or the particle is terminated. On the other hand, if the particle's weight is above the upper weight bound, the particle is split such that the split particles all have weights within the weight window. If the particle's weight falls within the weight window, no adjustment is performed.‎[24]

As shown in Figure 4, if a particle has a weight wini that is lower than wL, Russian roulette is played with survival weight ws, which is also provided by the user. Note that ws has to lie within the window defined by wL and wu. If wini is greater than wu, the particle is split into a predefined number of particles until all of them are within the defined window. If wini is within the window, the particle continues with the same weight.

One problem that may arise when using weight windows is over-splitting, which can occur when a particle enters a region, or is generated in a region, with a weight much higher than the upper limit of the weight window there. This can usually be solved by modifying some of the weight window parameters [21].

Figure 4.

Schematic of the weight window technique ‎[21]
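The following sketch implements the window logic just described (roulette below wL with survival weight ws, splitting above wu). The rule used to choose the number of split fragments is an illustrative choice; actual codes differ in this detail.

```python
import random

def apply_weight_window(w, w_low, w_up, w_survival, rng):
    """Return the list of particle weights after applying the weight window
    (w_low, w_up) with survival weight w_survival (w_low < w_survival < w_up)."""
    if w < w_low:                              # below the window: Russian roulette
        if rng.random() < w / w_survival:      # survival probability preserves expected weight
            return [w_survival]
        return []                              # particle terminated
    if w > w_up:                               # above the window: split
        n = int(w / w_up) + 1                  # enough pieces to bring each inside the window
        return [w / n] * n
    return [w]                                 # inside the window: unchanged

rng = random.Random(11)
print(apply_weight_window(0.02, 0.1, 1.0, 0.5, rng))   # [] or [0.5]
print(apply_weight_window(3.0, 0.1, 1.0, 0.5, rng))    # [0.75, 0.75, 0.75, 0.75]
```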

The weight window generator is used to determine the weight windows for the simulations. When generating weight windows, it is easy to generate unwanted zeros. Zero weight windows in a region are due either to particles not entering that region or to particles that did enter the region but did not add to the tally score. To increase the number of particles that enter or are generated in a region of the system, a uniformly distributed volumetric photon source that covers the whole system can be used [25].

The weight window generator calculates the importance of each cell in the problem. This is done by noting that the importance of a particle at a point in phase space is equal to the expected score that a unit-weight particle would generate. Thus the cell's importance can be estimated as follows:

$$\text{Importance} = \frac{\text{total score due to particles entering the cell}}{\text{total weight entering the cell}} \tag{E15}$$
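A hypothetical piece of bookkeeping for Eq. (E15) might look like the following; the recording functions and the sample data are invented purely for illustration.

```python
from collections import defaultdict

weight_entering = defaultdict(float)     # total weight entering each cell
score_from_entries = defaultdict(float)  # total score produced by particles that entered

def record_entry(cell, weight):
    weight_entering[cell] += weight

def record_score(cell, score):
    score_from_entries[cell] += score

# made-up history data for two cells
record_entry(1, 1.0); record_score(1, 0.2)
record_entry(1, 0.5); record_score(1, 0.0)
record_entry(2, 0.8); record_score(2, 0.6)

for cell in weight_entering:
    importance = score_from_entries[cell] / weight_entering[cell]   # Eq. (E15)
    print(f"cell {cell}: importance ~ {importance:.3f}")
```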

As both the weight window and geometry splitting use Russian roulette, there is a question concerning the difference between these two methods. The main differences are:

The weight windows are space–energy dependent, whereas geometry splitting depends only on space;

Geometry splitting splits particles regardless of their weight, whereas the weight window works the opposite way: before roulette is played or the particle is split, the weight of the particle is checked against the weight window;

Geometry splitting is applied only at surfaces, whereas the weight window method can be applied at surfaces, at collision sites, or both;

Geometry splitting is based on the ratio of importances across the surface, whereas weight windows use absolute weights;

As geometry splitting is weight independent, it preserves any weight fluctuations, whereas the weight window can control the weight fluctuations introduced by other variance reduction techniques by forcing all particles in a cell to have a weight within the window.

The weight windows can be generated via the weight window generator but it requires considerable user understanding and intervention to work correctly and effectively.


7. Implicit capture

Implicit capture, survival biasing and absorption by weight reduction are synonymous. Implicit capture is a very popular variance reduction technique in radiation transport MC simulations. The technique can be viewed as launching and tracing packets of particles rather than single particles: at launch, each packet is assigned an initial weight W0, and the packet is traced with a step-length distribution determined by the total attenuation coefficient δ. Implicit capture ensures that a particle always survives a collision (i.e., the particle is never absorbed).

All of the variance reduction techniques vary the physical laws of radiation transport to sample more particles in regions of the phase-space that contribute to the objective.

To compensate for this departure from the physical laws of radiation transport, the concept of particle weight, w, is introduced, where the weight can be considered as the number of particles being transported. When a variance reduction technique is applied, the weight of the particle is adjusted using the following “conservation” formula: [20]

$$w \times (\text{biased probability density function}) = w_0 \times (\text{unbiased probability density function}) \tag{E16}$$

Where w0 is the weight before the variance reduction technique is applied. In the implicit capture technique, a particle always survives a collision, but the particle emerges from the collision with a weight that has been reduced by a factor of δs / δ, which is the probability of scattering. Thus, the total particle weight is conserved.

If W0 is the initial weight of the particle and W′ is the weight the particle will have after a collision, the relationship between them can be described as follows:

$$W' = \begin{cases} W_0 & \text{with probability } p = \delta_s/\delta \\ 0 & \text{with probability } p^* = 1 - p \end{cases}, \qquad \overline{W'} = p\,W_0 + p^* \times 0 = \frac{\delta_s}{\delta}\,W_0 \tag{E17}$$

Where p is the probability that the particle is scattered at a collision, δs is the macroscopic scattering cross-section, δ is the total macroscopic cross-section and W̄′ is the expected value of the weight after the collision. With implicit capture, the particle always survives the collision with weight:

$$W' = \frac{\delta_s}{\delta}\,W_0 \tag{E18}$$

When implicit capture is used, rather than sampling absorption with probability 1 − δs/δ, the particle always survives the collision and is followed with the new, reduced weight. Implicit capture can be viewed as a splitting process in which the particle is split into an absorbed weight and a surviving weight [10]. The main advantage of implicit capture is that a particle that has reached the vicinity of the tally region is not absorbed just before a score is made. In general implicit capture always reduces the variance, but the total figure of merit may not improve, since the simulation time increases because of the longer particle histories. It is, however, widely used because of its simplicity and ease of implementation.
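A minimal sketch of the weight update in Eqs. (E17)–(E18) is shown below; the cross-section values are assumed purely for illustration.

```python
DELTA_S = 0.3   # assumed macroscopic scattering cross section (1/cm)
DELTA_T = 0.5   # assumed total macroscopic cross section (1/cm)

def collide_implicit(weight):
    """Implicit capture: the particle always survives the collision and its
    weight is multiplied by the scattering probability delta_s / delta, Eq. (E18)."""
    return weight * DELTA_S / DELTA_T

w = 1.0
for k in range(3):
    w = collide_implicit(w)
    print(f"weight after collision {k + 1}: {w:.3f}")
```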


8. Forced collisions

The forced collision method is a variance reduction scheme that increases sampling of collisions in specified regions.

If the number of mean free paths (MFP) to the next photon interaction is larger than the simulated phantom thickness, the photon will leave the region of interest without interacting or depositing energy. This event cannot be predicted in advance, and to obtain precise results the photon must be traced through the region of interest until it escapes. The computing time spent transporting escaping photons is then wasted, and if the fraction of escaping photons is large, the simulation will be very inefficient. The forced collision technique improves the efficiency by considering only the fraction of photons that interact within the phantom.

As shown in Figure 5, when a specified particle enters a region defined as the forced collision region, the incident particle splits into un-collided and collided particles. The un-collided particle passes through the current cell without collision and is stored in the bank until later when its track is continued at the cell boundary. The collided particle is forced to collide within the specified cell.

This means that both the "passing-through" case and the "collision" case are analysed for a single incident particle. The split particles are weighted according to the following equation:

$$W_{\text{uncoll}} = W_0\, e^{-\delta d}, \qquad W_{\text{coll}} = W_0\left(1 - e^{-\delta d}\right) \tag{E19}$$

Where W0 is the initial weight of the particle, Wuncoll is the weight of the un-collided particle and Wcoll is the weight of the collided particle; d is the distance to the region surface in the particle's direction, and δ is the macroscopic total cross section of the region material [26].

Figure 5.

Schematic diagram of the forced collision. ‎[26]

The probability of colliding within a distance x is given by:

$$\text{Prob} = 1 - e^{-\delta x} \tag{E20}$$

The particle’s weight is then reduced appropriately and Russian roulette is used to ensure that the calculation time is not increased significantly by the technique. ‎[27]
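The sketch below applies Eq. (E19) to split an incident particle and then samples the forced collision site inside the region. The conditional sampling formula for the collision distance is the standard inversion of Eq. (E20) restricted to (0, d); it is not given explicitly in the chapter, so treat it as an assumption of this example.

```python
import math
import random

def force_collision(w0, delta, d, rng):
    """Split an incident particle at the boundary of a forced-collision region
    into an uncollided and a collided part, Eq. (E19), and sample the forced
    collision site from the exponential pdf truncated to (0, d), Eq. (E20)."""
    w_uncoll = w0 * math.exp(-delta * d)        # passes through without colliding
    w_coll = w0 * (1.0 - math.exp(-delta * d))  # forced to collide inside the region
    # invert the conditional CDF (1 - exp(-delta*x)) / (1 - exp(-delta*d)) = u
    u = rng.random()
    x = -math.log(1.0 - u * (1.0 - math.exp(-delta * d))) / delta
    return w_uncoll, w_coll, x

rng = random.Random(2)
print(force_collision(1.0, 0.2, 5.0, rng))
```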


9. Exponential transformation

The exponential transform, also called path length stretching, is a variance reduction technique designed to enhance efficiency for deep penetration problems (e.g. shielding calculations) or surface problems (e.g. build-up in photon beams). It is often used for neutron Monte Carlo simulation and is directly applicable to photons as well.

In applying the exponential transformation a stretching parameter is used to increase distances travelled in directions of interest, while in the use of Russian roulette and splitting other parameters are introduced to increase the death probabilities in regions of low importance and the number of independent particles found in regions of high importance. ‎[7]

The exponential transformation samples the distance to collision from a non-analog probability density function. Specifically, it stretches the distance between collisions in the direction of interest and shrinks it in directions of little interest by modifying the total macroscopic cross section as follows:

$$\delta^* = \delta\left(1 - p\mu\right) \tag{E21}$$

Where δ * is the modified total cross section, δ is the true total cross section, p is the exponential transform parameter used to vary the degree of biasing, p < 1, and μ is the cosine of the angle between the preferred direction and the particle’s direction.

It should be mentioned that the exponential transformation can produce large weight fluctuations and, consequently, unreliable mean and variance estimates. The exponential transform generally decreases the variance but increases the time per history [28]. Because of these possible large weight fluctuations, the exponential transform should be used together with some form of weight control.
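A sketch of the biased distance-to-collision sampling of Eq. (E21), together with the weight correction (ratio of the true to the biased pdf) that keeps the estimate unbiased, is given below; it covers only the collision-site case and omits refinements such as surface-crossing corrections.

```python
import math
import random

def sample_stretched_distance(delta, p, mu, w, rng):
    """Sample the flight distance from the biased cross section
    delta* = delta * (1 - p * mu), Eq. (E21), and return the distance together
    with the corrected particle weight."""
    delta_star = delta * (1.0 - p * mu)
    s = -math.log(1.0 - rng.random()) / delta_star       # biased flight distance
    # weight correction: true pdf / biased pdf evaluated at the sampled distance
    w_new = w * (delta * math.exp(-delta * s)) / (delta_star * math.exp(-delta_star * s))
    return s, w_new

rng = random.Random(5)
print(sample_stretched_distance(delta=0.5, p=0.7, mu=1.0, w=1.0, rng=rng))
```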


10. Conclusion

In this chapter we discussed accuracy, precision, relative error and the figure of merit in Monte Carlo simulation. The Monte Carlo method is considered the most accurate method presently available for solving radiation transport problems in the nuclear engineering field, but it is extremely expensive computationally, because it must simulate individual particles and infer the average behaviour of the particles in the system from them. Also, for complex problems in which the probability that a particle contributes to the region of interest is small, some form of variance reduction must be applied to reduce the computer time required to obtain results with sufficient precision.

The disadvantage associated with Monte Carlo codes is that they require long calculation times to obtain well converged results, especially when dealing with complex systems. Computation time is an important and problematic parameter in Monte Carlo simulations: the statistical error decreases only as the inverse square root of the computation time, which motivates the use of variance reduction techniques.

In other words, to shorten the calculation time and decrease the error of the results obtained with Monte Carlo methods, i.e. to improve the efficiency of a Monte Carlo calculation, variance reduction techniques must be used, and this chapter therefore also discussed various variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical quality of the results. As already mentioned, it is possible to reduce the estimated variance of a sample for a given number of histories by using techniques that increase the sampling efficiency and reduce the intrinsic dispersion of the contributions to the response, favouring the occurrence of more important paths and suppressing less important, and especially 'nearly zero' importance, paths.

In this chapter, several variance reduction strategies have been described with the aim of reducing CPU time.

In general, variance reduction techniques are particularly valuable in settings such as:

Highly reliable systems

Financial engineering/quantitative finance

Simulation optimization

Lots of alternatives

Noisy gradient estimates

Metamodeling/mapping

Real-time control using simulation

The usage of variance reduction methods in Monte Carlo simulations is not straightforward. There is a risk that these methods are used as a black box, leading to results that are not analysed correctly. Note that all of the variance reduction techniques have a certain amount of overhead associated with their use, and in some situations the cost of this overhead outweighs the advantages of the technique.

References

1. Anderson H.L. Metropolis, Monte Carlo and the MANIAC. Los Alamos Science 1986; 14, 96–108.
2. Rogers D.W.O. Monte Carlo Techniques in Radiotherapy. Ionizing Radiation Standards, Institute for National Measurement Standards, Medical Physics 2002; 58(2) 63–70.
3. Carter L.L. & Cashwell E.D. Particle Transport Simulation with the Monte Carlo Method. ERDA Critical Review Series, 1975; TID-26607.
4. Sadeghi M., Saidi P. & Tenreiro C. Dosimetric Characteristics of the Brachytherapy Sources Based on Monte Carlo Method. In: Mode Ch.J. (ed.) Applications of Monte Carlo Methods in Biology, Medicine and Other Fields of Science. InTech; 2011. p155–176.
5. Rogers D.W.O. Fifty years of Monte Carlo simulations for medical physics. Phys. Med. Biol. 2006; 51(13) R287–R301.
6. Osnes H. Variance reduction techniques for Monte Carlo simulation. ISSN 0808-2839. SINTEF and University of Oslo, 1997.
7. Kahn H. Use of different Monte Carlo sampling techniques. In: Meyer H.A. (ed.) Symposium on Monte Carlo Methods. John Wiley and Sons, New York, 1956; 146–190.
8. Sjenitzer B. Variance reduction using the correction method on criticality calculations. TU Delft, 2009.
9. Kendall S.M. & Stuart A. The Advanced Theory of Statistics, vol. 1. C. Griffin & Co., London, 4th edn., 1977.
10. MCNP – A General Monte Carlo N-Particle Transport Code, Version 5, Volume I: Overview and Theory. X-5 Monte Carlo Team, Los Alamos National Laboratory, April 24, 2003 (Revised 2/1/2008).
11. Este G. & Cashwell E. MCNP1B Variance error estimator. TD-6-27-78, 1978; Los Alamos National Laboratory.
12. Wagner J.C., Peplow D.E. & Evans T.M. Automated variance reduction applied to nuclear well-logging problems. Nucl Tech 2009; 168(3) 799–809.
13. Olsher R.H. A practical look at Monte Carlo variance reduction methods in radiation shielding. Nucl Eng & Tech 2006; 38(3) 225–230.
14. Briesmeister J.F. (ed.) MCNP – A General Monte Carlo N-Particle Transport Code, Version 4A. LA-12625, 1993; Los Alamos National Laboratory.
15. Nelson W.R., Hirayama H. & Rogers D.W.O. The EGS4 code system. SLAC Report SLAC-265, 1985; Press W.H., Teukolsky S.A., Vetterling W.T. & Flannery B.P. Numerical Recipes in Fortran 77. Cambridge: Cambridge University Press, 1997.
16. Bielajew A.F. & Rogers D.W.O. PRESTA: the parameter reduced electron-step transport algorithm for electron Monte Carlo transport. Nucl. Instrum. Methods 1987; B 18 165–181.
17. Haghighat A. & Wagner J.C. Monte Carlo Variance Reduction with Deterministic Importance Functions. Prog Nucl Energy 2003; 42(1) 25–53.
18. Kawrakow I. & Fippel M. Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC. Phys. Med. Biol. 2000; 45(8) 2163–2183.
19. Booth T.E. & Hendricks J.S. Importance Estimation in Forward Monte Carlo Calculations. Nucl Tech & Fusion 1984; 5(1) 90–100.
20. Wagner J.C. & Haghighat A. Automated Variance Reduction of Monte Carlo Shielding Calculations Using the Discrete Ordinates Adjoint Function. Nucl Sci & Eng 1998; 128(2) 186–208.
21. Kawrakow I. & Fippel M. Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC. Phys. Med. Biol. 2000; 45(8) 2163–2183.
22. Noack K. Efficiency and reliability in deep-penetration Monte Carlo calculations. Ann. Nucl. Energy 1991; 18(6) 309–316.
23. Booth T.E. A Sample Problem for Variance Reduction in MCNP. LA-10363-MS, 1985; Los Alamos National Laboratory.
24. Smith H.P. & Wagner J.C. A Case Study in Manual and Automated Monte Carlo Variance Reduction with a Deep Penetration Reactor Shielding Problem. Nucl Sci & Eng 2005; 149, 23–37.
25. Rodriguez M., Sempau J. & Brualla L. A combined approach of variance-reduction techniques for the efficient Monte Carlo simulation of linacs. Phys Med Biol 2012; 57(10) 3013–3024.
26. Abe Sh., Watanabe Y., Niita K. & Sakamoto Y. Implementation of a forced collision method in the estimation of deposit energy distribution with the PHITS code. Prog Nucl Sci & Technology 2011; 2, 477–480.
27. Smith H.P. & Wagner J.C. A case study in manual and automated Monte Carlo variance reduction with a deep penetration reactor shielding problem. Nucl Mathematical & Computational Sci 2003: A Century in Review, A Century Anew, Gatlinburg, Tennessee, April 6–11, 2003, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2003).
28. Zhaohong B. & Xifan W. Studies on variance reduction technique of Monte Carlo simulation in composite system reliability evaluation. Elec Power Sys Res 2002; 63(1) 59–64.
