Open access peer-reviewed chapter

Severe Nuclear Accidents and Learning Effects

By Thomas Rose and Trevor Sweeting

Submitted: December 4th, 2017. Reviewed: March 20th, 2018. Published: November 5th, 2018.

DOI: 10.5772/intechopen.76637



Nuclear accidents with core melting, such as those in Fukushima and Chernobyl, play an important role in discussions of the risks and benefits of nuclear energy. They seem to be more frequent than anticipated. We therefore analyse the probability of severe nuclear accidents related to power generation. To look for learning effects among reactor operators, we analyse the number of all known accidents over time. We discuss problems of data acquisition, the statistical independence of accidents at the same site and whether the known accidents form a random sample. We analyse core melt accidents with Poisson statistics and derive future accident probabilities. The main part of the chapter investigates the learning effects using generalised linear models with a frequentist and a Bayesian approach and compares the results.


  • nuclear accidents
  • learning effect
  • Poisson distribution
  • generalised linear model
  • frequentist approach
  • Bayesian approach

1. Introduction

The Fukushima reactor disaster in 2011 made the question of nuclear safety relevant again. Similar accidents are known to have happened in the Soviet Union in 1986 (Chernobyl) and in the USA in 1979 (Three Mile Island). These core melt accidents are the most severe ones in nuclear reactors. When the rods containing the nuclear fuel and the fission products melt, a huge amount of radioactivity is set free within the reactor and possibly into the atmosphere.

But the rate of such accidents seemed much higher than previously claimed. So, we tried to study the probability of such events empirically by looking at the real events.

This a posteriori approach differs from the a priori approach of Probabilistic Risk Assessment (PRA) which is done during the design phase of a reactor. PRA determines failure probability prior to accidents by analysing possible paths towards a severe accident, rather than using existing data to determine probability empirically.

After an accident very often ‘learning from experience’ is claimed. The luckily low number of severe accidents does not allow for testing this claim. But reactor operators should be interested in reducing all incidents and accidents; so, their frequency should decrease with increasing operating experience. We use the total time reactors are operating, the reactor-years, as a measure of experience, analyse the accidents as a function of this experience with generalised linear models and compare a frequentist and a Bayesian approach.

Accidents can and did happen in several areas of nuclear energy, e.g. military use for weapons or submarine propulsion, medical use or fundamental research. Discussing the risks of nuclear energy involves very different arguments in all these areas. We restricted the study to accidents in nuclear reactors for power generation.

According to our analysis, we have to expect one core melt accident in 3700 reactor-years, with a 95% confidence interval from one in 1442 to one in 13,548 reactor-years. In a world with 443 reactors, with 95% confidence we have to expect between 0.82 and 7.7 core melt accidents within the next 25 years.

Analysing all known accidents, we can show a learning effect. The probability of an incident or accident per reactor-year decreased from 0.01 in 1963 to 0.004 in 2010. Furthermore, there is an indication of a slightly larger learning effect prior to 1963.

It is well known that the actual number of all incidents and accidents is much higher than the numbers published in scientific journals. Therefore, we studied whether the known incidents and accidents are distributed randomly over the reactors using countries. While the data are random for most of the countries, this is not the case for the USA. From the present data, we cannot decide whether this is due to higher incident rates or to more effective sampling.

After this introduction the second section will explain some basics of the Poisson distribution. In Section 3 we present the data acquisition and its problems. Section 4 contains the discussion of core melt accidents and predictions for future events. The learning effect analysis is presented in Section 5.

While some of the results have already been published elsewhere [1], the underlying statistical work is presented here.


2. Poisson distribution

Rare and random incidents related to a time of reference, an area of reference or similar can be described by the Poisson distribution. Examples are the number of surface defects in body part stamping in the automotive industry or the number of calls in a call centre within a given time.

If the probability of an incident per unit time is known to be $p$, then within the time interval $T$, we expect a total of $\lambda = pT$ incidents. But the actual number of incidents within $T$ will fluctuate randomly. The Poisson distribution allows us to calculate the probability of a given number $x$ of events within $T$:

$\text{Probability of } x \text{ incidents if } \lambda \text{ incidents are expected} = \dfrac{e^{-\lambda}\lambda^{x}}{x!}$ (E1)

If the time of reference, $T$, is 1 year, then $\lambda$ is the expected number of incidents within 1 year. If $\lambda$ is much smaller than one, then it is also approximately the probability of one incident within 1 year and of at least one incident per year. Analysing not only one but many reactors, the expected total number of accidents is simply the sum of the expected numbers for the single reactors, and, as long as the reactor incidents are independent of each other, the actual number of accidents is Poisson distributed.

In analysing real systems, the number of (statistically fluctuating) incidents $x$ is known, and $\lambda$ has to be determined. Then, the best estimate for $\lambda$ is simply this empirical value $x$. However, this estimate is not necessarily the true value of $\lambda$ because the incidents occur randomly. Poisson statistics allow us to compute an interval that contains the true value of $\lambda$ with a confidence level $\alpha$ (typically 90, 95 or 99%), the so-called confidence interval. This is determined by calculating two values, $\lambda_1$ and $\lambda_2$, for a given number of incidents $x$. For the 95% confidence interval, we choose $\lambda_1 < x$ such that the probability of observing $x$ or more events is 2.5% and $\lambda_2 > x$ such that the probability of observing $x$ or fewer events is 2.5%. Then, the interval $\lambda_1$ to $\lambda_2$ is a 95% confidence interval. This means that if we study many cases, then in 95% of these cases, the true value of $\lambda$ lies within this interval. The more cases we observe, the narrower the confidence interval will be and the closer the estimate of $\lambda$ will be to the true value.

As an example, suppose that the empirical number of events is $x = 4$. Then, the Poisson distribution with $\lambda$ equal to 1.090 gives the probability that the number of events is greater than or equal to 4 to be 2.5%. If $\lambda$ is 10.242, then the probability that the number of events is less than or equal to 4 is also 2.5%. Thus, for the empirical value $x = 4$, we say that the true value of $\lambda$ lies between 1.090 and 10.242 with 95% confidence.
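This exact interval can be reproduced numerically. The sketch below is a minimal pure-Python illustration (the function names are ours, not from the chapter): it finds $\lambda_1$ and $\lambda_2$ by bisection on the Poisson tail probabilities.

```python
from math import exp, factorial

def pois_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

def poisson_ci(x, conf=0.95):
    """Exact two-sided confidence interval for the Poisson mean, given x observed events."""
    tail = (1 - conf) / 2

    def bisect(f, lo, hi):
        # f must be positive at lo and negative at hi
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower limit lam1: P(X >= x; lam1) = tail, i.e. 1 - cdf(x - 1, lam1) = tail
    lam1 = 0.0 if x == 0 else bisect(lambda l: tail - (1 - pois_cdf(x - 1, l)), 0.0, float(x))
    # upper limit lam2: P(X <= x; lam2) = tail
    lam2 = bisect(lambda l: pois_cdf(x, l) - tail, float(x), 10.0 * x + 20.0)
    return lam1, lam2

l1, l2 = poisson_ci(4)   # reproduces the interval 1.090 to 10.242 quoted above
```

The same limits are used again in Section 4 to turn the four observed core melt accidents into a confidence interval for the failure rate.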

A similar measure of the probable distance between the estimated empirical value and the true value is the standard error. In large samples the probability that the distance between the estimated and the true value is less than the standard error is approximately 68%.


3. Data acquisition

3.1. How many reactors?

The International Atomic Energy Agency in Vienna publishes data on all power reactors worldwide [2]. The same and additional information about connection to the grid, shut down, operator, manufacturer and fuel supplier can be found in several Wikipedia entries [3, 4].

In 1954 the Soviet Union connected the first nuclear power reactor worldwide to the grid. Two years later the UK followed with Calder Hall. The number of reactors increased steadily until the mid-1980s; after that it grew only from 420 to about 450 in 2011 and has remained nearly constant since.

Table A1 in the Appendix shows, for all countries worldwide, the total amount of nuclear energy produced, the reactor-years and the accidents. The total energy (in TWh) is given up to 31 December 2015. The number of reactor-years has been calculated from the Wikipedia sources [3, 4] up to 31 December 2011, to be comparable with the accident data.

The total operating time of all reactors until the end of 2011 was 14,766 reactor-years.

3.2. How many accidents?

First of all, one has to define nuclear incidents and accidents. In 1990, the IAEA introduced the INES scale of incidents and accidents with seven levels [5]. A level 1 event is called an anomaly, with, e.g. ‘minor problems with safety components…’; levels 2–4 are called incidents and levels 5–7 accidents. Two of the three destroyed reactors in Fukushima and the accident in Chernobyl were classified as level 7, with ‘Major release of radioactive material with widespread health and environmental effects…’. The 1979 Three Mile Island accident in the USA was level 5, with ‘Severe damage to the reactor core…’ [6].

The USA uses a different scale to classify all accidents, not only nuclear ones. Major accidents are ‘defined as incidents that either resulted in the loss of human life or more than US$50,000 of property damage, the amount the US federal government uses to define major energy accidents that must be reported’ [7].

While the reactor data are publicly and easily available, this does not hold for the accident data.

According to the treaty of the International Atomic Energy Agency (IAEA), every member state has to inform the IAEA about events ‘at Level 2 or above’, but these data are publicly available only for 12 months. So, information about accidents in the past is not easy to get. We found two sources. One set of data has been published by the UK newspaper The Guardian [8], and another set was published by Benjamin Sovacool in two papers [7, 9] and in his book Contesting the Future of Nuclear Power [10]. The Guardian list includes INES levels where known. Sovacool lists ‘major accidents’ according to the US definition.

The Guardian lists 24 and Sovacool 99 events related to all kinds of nuclear technology. Both lists include the same core melt accidents: Windscale, UK, 1957, in a production plant for military use; Simi Valley, USA, 1959, in a research reactor; Monroe, USA, 1966, in a demonstration breeder reactor; Dumfries, UK, 1967, in a power reactor; Lucens, Switzerland, 1969, in an experimental reactor; Three Mile Island, USA, 1979, in a power reactor; Chernobyl, USSR, 1986, in a power reactor; and Fukushima, Japan, 2011, in three power reactors on the same site. The accidents in the three Fukushima reactors were caused by the same earthquake and the subsequent tsunami, so we count them as one. This leaves four core melt accidents in power reactors.

In order to analyse the learning effect, we treated The Guardian and Sovacool data separately. From The Guardian’s list of 24 incidents, we included only the ones related to power production. This left 16 accidents of INES level 2 and higher. From Sovacool’s list, we excluded five accidents not related to power generation.

3.3. Do the accident data represent a random sample?

These lists of publicly known events represent a sample of all incidents and accidents. Only random samples allow us to draw conclusions about the underlying population. But are these samples really random? The data had been published by nuclear regulating authorities or collected by scientists, journalists and interested laypeople from a multitude of sources. Depending on the duties of the regulators, the public interest in nuclear energy or the emphasis of the press towards it, events might be detected more often in some countries than in others. So, we compared the number of (known) incidents in each country with its reactor-years.

If the incident probability is the same in all countries and if the probability to detect an accident is also independent of the country, then the number of accidents in a country should be proportional to the number of reactor-years in that country. Plotting the number of accidents versus the reactor-years should result in a straight line. A plot of these data is shown in Figure 1. The rightmost point shows the USA data.

Figure 1.

Total number of accidents in several countries versus total number of reactor-years; the straight line is a linear fit through all data except the rightmost point (data from Table 1).

So, for all countries except the USA, there seems to be a linear dependence between reactor-years and number of accidents. This is supported by a linear regression for all countries except the USA which gives a slope of 0.0036781 accidents per reactor-year with a standard error of 0.0004785. For each country but the USA, the expected value calculated from the 0.0036781 accidents per reactor-year is within the 95% confidence interval of the empirical accidents. Only for the USA, the empirical accident number of 54 in 3731 reactor-years is far away from the expected number of about 15.2.

While the data for all countries except the USA are compatible with a rate of 3.678 accidents per 1000 reactor-years, the USA data correspond to 13.06 accidents per 1000 reactor-years.

So, with the exception of the USA, there is no indication from the limited available data of non-random sampling or of countries having different overall accident rates. The USA data indicate that here either sampling is not random or the accident rate is higher than in the rest of the world. The present data do not allow us to determine which of these alternatives is the more likely explanation and further studies are needed.
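The incompatibility of the US count with the pooled rate can be illustrated with a short tail-probability calculation. The sketch below uses the slope alone (the chapter's expected value of about 15.2 presumably also involves an intercept term); either way the upper-tail probability is negligible. Variable names and the 200-term cut-off are our own choices.

```python
from math import exp

# Pooled non-US rate from the chapter's regression: ~0.0036781 accidents per reactor-year
rate = 0.0036781
usa_reactor_years, usa_accidents = 3731, 54

expected = rate * usa_reactor_years        # expected US accidents under the pooled rate
# P(X >= 54) under Poisson(expected), summing the upper tail term by term;
# the pmf is built recursively to avoid overflowing expected**k
term, tail = exp(-expected), 0.0
for k in range(1, 200):
    term *= expected / k                   # term is now the Poisson pmf at k
    if k >= usa_accidents:
        tail += term
# tail is vanishingly small: 54 observed accidents are incompatible with the pooled rate
```

This is why the chapter concludes that for the USA either the sampling is not random or the underlying accident rate is genuinely higher.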


4. Statistics of severe nuclear accidents

4.1. Results of previous PRA calculations

There have been several studies on reactor safety in the past. The first was the reactor safety study or Rasmussen report published in 1975 by the US Nuclear Regulatory Commission as report WASH-1400 or NUREG-75/014 [11]. Five years later the German reactors were analysed in the Deutsche Risikostudie Kernkraftwerke [12]. In 1990 Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants [13] was published. While the first two studies analysed typical reactors in their respective countries, the last one investigated five specified reactors.

What are the results of these studies? WASH-1400 states:

‘The Reactor Safety Study carefully examined the various paths leading to core melt. Using methods developed in recent years for estimating the likelihood of such accidents, a probability of occurrence was determined for each core melt accident identified. These probabilities were combined to obtain the total probability of melting the core. The value obtained was about one in 20,000 per reactor per year. With 100 reactors operating, as is anticipated for the U.S. by about 1980, this means that the chance for one such accident is one in 200 per year’ [11].

So, the probability of a core melt accident per reactor-year is $5\times10^{-5}$.

The results of NUREG-1150 [14] can be found in Tables 3.2, 4.2, 5.2, 6.2 and 7.2 for the reactors Surry, Peach Bottom, Sequoyah, Grand Gulf and Zion, respectively. The German data are in the Deutsche Risikostudie [12]. The mean values vary between $4\times10^{-6}$ and $3.4\times10^{-4}$ accidents per reactor-year.

4.2. Empirical analysis

Based on the list and information of Sovacool, the following accidents are not included in the present study of severe accidents: Chalk River (1952) showed no core meltdown; Windscale (1957) was a military reactor used only for weapon production; Simi Valley (1959) was an experimental reactor; Monroe (1966) was an experimental reactor; and Lucens (1969) was an experimental reactor and probably showed no core meltdown. In Fukushima three of the six reactors at the site suffered severe destruction, with INES ratings of 5–7. This threefold accident is counted as one because all three were triggered by the same cause, the earthquake with subsequent tsunami.

There remain four core melt accidents in nuclear reactors for power generation.

Given the number of severe accidents, 4, and the cumulative reactor-years, 14,766, it is straightforward to calculate the probability $p$ of a core melt accident at one reactor in 1 year:

$p = \dfrac{4}{14{,}766} \approx 2.7\times10^{-4}\ \text{per reactor-year}$ (E2)
So, we expect one severe accident in 3700 reactor-years.

This simple calculation contains several uncertainties. Firstly, it is assumed that all reactors at all times have the same failure probability. Secondly, because of the small sample size of four events, it is subject to statistical fluctuations. This can be expressed through the confidence interval. At the 95% confidence level, the empirical value of four events leads to a confidence interval of 1.0899 to 10.2416 events in 14,766 reactor-years. Therefore, with a confidence of 95%, the failure rate is between one accident in 1442 and one accident in 13,548 reactor-years. Nevertheless, the most probable value is 1 in 3700 reactor-years.

Based on this value, it is possible to calculate the probability of accidents in the future. In a world with 443 reactors, we should expect 2.99 core melt accidents within the next 25 years, with a 95% confidence interval of 0.82 to 7.7 accidents. The USA, with 104 reactors, has to expect 0.7 core melt accidents within 25 years, with a 95% confidence interval of 0.2 to 1.8 accidents.
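These projections are a few lines of arithmetic. A sketch, taking the exact Poisson limits for four observed events from Section 2 as given (the chapter's figure of 2.99 comes from the rounded one-in-3700 rate; the unrounded rate gives 3.0):

```python
REACTOR_YEARS = 14766          # cumulative reactor-years to the end of 2011
EVENTS = 4                     # core melt accidents in power reactors
LAM1, LAM2 = 1.0899, 10.2416   # exact 95% Poisson limits for 4 observed events

best_rate = EVENTS / REACTOR_YEARS        # ~1 accident per 3700 reactor-years
future_ry = 443 * 25                      # reactor-years over the next 25 years
expected = future_ry * best_rate          # ~3 core melt accidents worldwide
low = future_ry * LAM1 / REACTOR_YEARS    # ~0.82 accidents (lower 95% limit)
high = future_ry * LAM2 / REACTOR_YEARS   # ~7.7 accidents (upper 95% limit)
```

The same arithmetic with 104 US reactors (2600 future reactor-years) gives the 0.7 accidents and the 0.2 to 1.8 interval quoted above.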


5. Learning effects

5.1. Introduction

Experience and learning from operating power reactors and from analysing incidents and accidents are important for further reducing accident rates. Increasing operational experience should result in decreasing accident rates. This can be tested empirically by comparing accident rates with the amount of operational experience. In a simple approach, operational experience can be measured by the cumulative number of reactor-years up to a given date.

The small number of core melt accidents makes it difficult to detect any learning effect. Therefore, for this analysis we also included minor accidents and incidents. The two different datasets, from The Guardian with 35 accidents and from Sovacool with 99 accidents, were analysed independently. The Guardian data were grouped according to INES levels, and all incidents of level 2 and above were included. One of the criteria for a level 2 incident is a ‘significant contamination within a facility into an area not expected by design’. So, these incidents must be avoided by all means. From Sovacool’s data all accidents related to nuclear power generation were included. Some of the basic results given below are summarised in [1], but the analysis here is more detailed.

5.2. Preliminary analysis

In order to analyse the rather low number of accidents, the cumulative number of accidents up to a given year was compared to the cumulative number of reactor-years up to that year. Thus, the accident rate is

$\text{accident rate} = \dfrac{\text{cumulative number of accidents}}{\text{cumulative reactor-years}}$ (E3)

Without any learning effect, the increase in accidents per reactor-year should be the same for every reactor-year; so, this accident rate should remain constant. A learning effect would decrease the accident rate.
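A minimal sketch of how the cumulative accident rate of Eq. (E3) is computed from yearly data (the reactor counts and accident counts below are invented for illustration, not the chapter's data):

```python
# Invented yearly data: reactors operating (n_t) and accidents observed (y_t)
n = [5, 8, 12, 20, 35, 50, 70, 90]
y = [1, 0, 0, 1, 0, 1, 0, 1]

cum_ry = cum_acc = 0
rates = []
for nt, yt in zip(n, y):
    cum_ry += nt                      # cumulative reactor-years
    cum_acc += yt                     # cumulative accidents
    rates.append(cum_acc / cum_ry)    # accident rate, Eq. (E3)
# without learning the rate fluctuates around a constant;
# learning shows up as a systematic downward trend
```

Plotting such a series against cumulative reactor-years, with pointwise Poisson confidence limits, yields figures of the kind shown below.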

We start by investigating The Guardian data. As discussed in Section 3, after excluding some accidents from the study, the final number of nuclear power-related incidents or accidents of level 2 and above is 16. The accident rate calculated from these data is plotted against the cumulative reactor-years in Figure 2. In order to present the data more clearly, the accident rate is displayed on a logarithmic scale. Every point represents the data of 1 year. The lines are 95% pointwise confidence intervals obtained from Poisson statistics.

Figure 2.

Accident rate = cumulative accidents/cumulative reactor-years on a log scale vs. cumulative reactor-years, each data point representing 1 year. The lines are 95% pointwise confidence limits. Source: The Guardian.

A decreasing trend in this plot would indicate the presence of a learning effect. As can be readily seen, the first accident in 1957 resulted in a relatively high accident rate of about 0.05 per reactor-year. The following years saw no (publicly known) accident, so the observed rate decreases drastically. Such decreasing behaviour would be expected if an initial learning effect exists. However, after around 500 reactor-years, the plot appears to stabilise, with the accident rate varying around a constant value of about 1 in 1000 reactor-years. The plot does not indicate a learning effect. We investigate this further using a more detailed statistical analysis in Section 5.3.

Next, the Sovacool data are considered. As discussed in Section 3, after excluding the accidents not related to power generation, the final number of incidents or accidents is 94. Figure 3 is a plot of the log accident rate against cumulative reactor-years for these data, along with 95% pointwise confidence limits.

Figure 3.

Accident rate = cumulative accidents/cumulative reactor-years on a log scale vs. cumulative reactor-years, each data point representing 1 year. The lines are 95% pointwise confidence limits. Source: Sovacool.

The slight decreasing trend in the latter portion of the graph along with the confidence limits suggests the possible presence of a small learning effect, with a larger effect apparent in the early years. We investigate this further using a more detailed statistical analysis in Section 5.3.

5.3. Formal statistical analysis

In order to investigate the possibility of a learning effect more formally, we constructed a suitable statistical model. The notation and assumptions below, summarised in the supplementary online material for [1], are common to the analyses of both The Guardian and the Sovacool data.

Let $n_t$ be the number of reactors that are operational in year $t$, coded as $t=1,\dots,T$. For $r=1,\dots,n_t$, let $Y_{tr}$ be the number of accidents at reactor $r$ in year $t$. It is assumed that accidents at a given reactor in any given year occur independently. Then, accidents at that reactor over a 1-year period will occur according to a (possibly nonhomogeneous) Poisson process, so that $Y_{tr}$ will be distributed as Poisson($\lambda_{tr}$), where $\lambda_{tr}$ is the expected number of accidents at reactor $r$ in year $t$, or approximately the probability of at least one accident at the reactor in year $t$. Assuming independence of the $Y_{tr}$ over the reactors operating in year $t$, the total number of accidents $Y_t=\sum_{r=1}^{n_t} Y_{tr}$ in year $t$ will be distributed as Poisson($\lambda_t$), where $\lambda_t=\sum_{r=1}^{n_t}\lambda_{tr}$ is the expected total number of accidents in year $t$. If we further assume that the reactors have the same probability of failure in any given year, then $\lambda_{tr}=e_t$, where $e_t$ is the expected number of accidents per reactor in year $t$, and $\lambda_t=n_t e_t$. Any variation across reactors will lead to extra-Poisson variation, which can be assessed following model fitting.

Define $N_t=\sum_{u=1}^{t} n_u$ to be the cumulative number of reactor-years at year $t$. We use $N_t$ as a measure of nuclear operational experience in year $t$ and postulate that the expected number of accidents per reactor per year, $e_t$, is some function $e_t=e(N_t)$ of $N_t$. Without any loss of generality, we may write $e(N)=\alpha\exp\left(-\int_0^N \beta(x)\,dx\right)$, where $\beta(N)$ is the (instantaneous) rate of learning when the number of reactor-years has reached $N$.

Now, let $X_t=\sum_{u=1}^{t} Y_u$ be the cumulative number of accidents up to time $t$. Assuming independence of the $Y_t$ over time, $X_t$ will be distributed as Poisson($\Lambda_t$), where $\Lambda_t=\sum_{u=1}^{t}\lambda_u=\sum_{u=1}^{t} n_u e(N_u)$. There is no learning if and only if the function $\beta$ is identically zero, in which case $e(N_t)=\alpha$, the constant expected number of accidents per reactor per year, and $\Lambda_t=N_t\alpha$. It follows that the expected cumulative accident rate $E[X_t/N_t]=\alpha$. If, however, there is learning, then $\beta>0$ and $e(N)$ will be a decreasing function of $N$, so that a plot of $X_t/N_t$ against $N_t$ will exhibit a decreasing trend, as illustrated in Figures 2 and 3.

5.3.1. Analysis of The Guardian data

For The Guardian data, we took $\beta(N)=\beta$, so that there is either no learning or a constant rate of learning. In this case the expected number of accidents per reactor per year is $e(N_t)=\alpha\exp(-\beta N_t)$, an exponentially decreasing function of the number of reactor-years. Since $\log\lambda_t=\log n_t+\log\alpha-\beta N_t$, the model is a generalised linear model (GLM) with Poisson family and log link function [15]. The analysis was implemented in the programming language R.
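The chapter's analysis was done in R; as an illustration, the same Poisson GLM with offset $\log n_t$ can be fitted in pure Python. The sketch below uses invented synthetic data (the reactor counts, $\alpha$ and $\beta$ are ours) and Newton's method with step halving; it is not the authors' code. The "observations" are set to the exact expected counts so the fit recovers the true parameters.

```python
from math import exp, log

# Synthetic data (invented for illustration): 40 years, 50 reactors each year
T = 40
n = [50] * T                               # reactors operating in year t
N = [50 * (t + 1) for t in range(T)]       # cumulative reactor-years N_t
alpha_true, beta_true = 0.02, 5e-4
y = [n[t] * alpha_true * exp(-beta_true * N[t]) for t in range(T)]

def loglik(g, b):
    """Poisson log-likelihood (up to a constant) for log(lambda_t) = log(n_t) + g - b*N_t."""
    s = 0.0
    for t in range(T):
        mu = n[t] * exp(g - b * N[t])
        s += y[t] * (log(n[t]) + g - b * N[t]) - mu
    return s

# Newton's method with step halving on g = log(alpha) and b = beta
g, b = log(sum(y) / sum(n)), 0.0
for _ in range(50):
    dg = db = gg = gb = bb = 0.0
    for t in range(T):
        mu = n[t] * exp(g - b * N[t])
        dg += y[t] - mu                    # score w.r.t. g
        db += N[t] * (mu - y[t])           # score w.r.t. b
        gg += mu                           # observed-information terms
        gb += N[t] * mu
        bb += N[t] ** 2 * mu
    det = gg * bb - gb * gb
    step_g = (bb * dg + gb * db) / det     # Newton step = -H^{-1} * score
    step_b = (gb * dg + gg * db) / det
    s, ll = 1.0, loglik(g, b)
    while s > 1e-8 and loglik(g + s * step_g, b + s * step_b) < ll:
        s *= 0.5                           # halve the step until the likelihood improves
    g, b = g + s * step_g, b + s * step_b
    if abs(step_g) + abs(step_b) < 1e-12:
        break

alpha_hat, beta_hat = exp(g), b            # recovers alpha_true and beta_true
```

With real, noisy counts the same fit yields the estimate and standard error of $\beta$ reported below; a standard GLM routine (e.g. R's `glm` with a Poisson family and an offset) performs exactly this maximisation.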

Figure 2 suggests the absence of any learning effect, but to investigate this formally, we set up and tested the null hypothesis $H_0:\beta=0$. Based on the dataset from 1956 to 2011, a likelihood analysis produced a positive estimate of $1.58\times10^{-5}$ for $\beta$, but with a standard error of $5.5\times10^{-5}$, this is far from being statistically significant (with a p-value of 0.78). If all the estimated values were the true values of the parameters, then the probability of a severe accident per reactor-year would reduce from 0.0012 to 0.0009 over the period. If, however, $\beta$ is taken to be zero, then the estimated probability of a severe accident throughout the period is 0.0010.

Given the erratic behaviour in the early years, with just one accident in 1957 followed by a run of zero accidents over the next 19 years, it is important to investigate the sensitivity of the results to the early data. For the somewhat more informative Sovacool data discussed in the next section, we will proceed more formally by elaborating the model to take into account the possibly different learning behaviour in the early years. In the case of The Guardian data, the GLM results based on the years 1958–2011 produce a negative estimate, $-8.61\times10^{-5}$, for $\beta$, indicating an increasing accident rate. However, the associated standard error of $5.7\times10^{-5}$ is large, and so again this value of $\beta$ is far from statistically significant. If $\beta$ is taken to be zero, then the estimated probability throughout this period is 0.0010, which is the same as the result based on the complete dataset.

Finally, consideration of only the more recent data from 1970 onwards produces a positive estimate of $7.29\times10^{-6}$ for $\beta$, which would give rise to a very slight decrease in the accident rate from 0.0011 to 0.0010 over this period. However, again the result is not statistically significant, with a standard error of $6.0\times10^{-5}$. If $\beta$ is taken to be zero, then the estimated probability throughout this period is again 0.0010. So, overall, there is no evidence from these data of any learning effect, at least beyond the initial few years of operation.

5.3.2. Analysis of the Sovacool data

The larger size of the Sovacool dataset allows us to elaborate the model to investigate the possibility of a learning effect more formally. To this end we choose a suitable functional form for $e(N)$. A change-point model could be used, but we preferred a smooth alternative that does not presuppose the existence of a sudden change in the accident rate. A commonly used functional form that models different rates of change in the early and late portions of a series is the biexponential function, given by

$e(N)=\alpha\,e^{-\beta N}+\alpha_0\,e^{-\beta_0 N}$ (E4)
Here, $\beta$ is the ultimate rate of learning relevant in the later years. The initial rate of learning $\beta_I$, relevant for the early years, can be obtained as a function of all the parameters in the model.

A convenient parameterisation of this function is $e(N)=\alpha\,e^{-\beta N}\left(1+e^{-\eta(N-\phi)}\right)$, where $\eta=\beta_0-\beta$ and $\phi=\log(\alpha_0/\alpha)/\eta$. With this parameterisation the instantaneous learning rate is

$\beta(N)=\beta+\dfrac{\eta}{1+e^{\eta(N-\phi)}}$ (E5)
In particular, the initial rate is $\beta_I=\beta+\eta/\left(1+e^{-\eta\phi}\right)$. If the change from the initial to the final rate is quite pronounced, then it can be shown that this model will approximate a change-point model, with the change-point at $N=\phi$. We can now set up the likelihood function $L(\theta)$, where $\theta=(\gamma,\beta,\phi,\eta)$ and $\gamma=\log\alpha$, and carry out a likelihood analysis [16]. Starting values for the computation may be obtained from graphical inspection and/or by fitting a generalised linear model to the data after 1962, using the Poisson family with a log link function.
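The equivalence of the two parameterisations, and the expression for the initial learning rate $\beta_I$, can be checked numerically. A sketch with invented parameter values (not fitted values from the chapter):

```python
from math import exp, log

# Invented parameter values for the check
alpha, alpha0 = 0.004, 0.05
beta, beta0 = 2e-4, 5e-3
eta = beta0 - beta
phi = log(alpha0 / alpha) / eta

def biexp(N):
    # original biexponential form, Eq. (E4)
    return alpha * exp(-beta * N) + alpha0 * exp(-beta0 * N)

def reparam(N):
    # reparameterised form used for the likelihood analysis
    return alpha * exp(-beta * N) * (1 + exp(-eta * (N - phi)))

for N in (0, 10, 100, 1000):
    assert abs(biexp(N) - reparam(N)) < 1e-12   # the two forms agree

# initial learning rate beta_I = -d log e(N)/dN at N = 0, checked by central difference
h = 1e-6
numeric = -(log(biexp(h)) - log(biexp(-h))) / (2 * h)
beta_I = beta + eta / (1 + exp(-eta * phi))
assert abs(numeric - beta_I) < 1e-6
```

The check also makes the change-point interpretation visible: for $N$ well below $\phi$ the second exponential dominates and the rate is close to $\beta_0$, while for $N$ well above $\phi$ it decays away and the rate approaches $\beta$.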

The main hypothesis of interest is $H_0:\beta=0$, which corresponds to no learning in the later years. Another hypothesis of interest is that there is a constant rate of learning throughout the entire period, that is, $H_1:\beta_I=\beta$. The maximum likelihood estimates and standard errors for various parameters, along with the p-values for the indicated null hypotheses, are exhibited in Table 1.

Parameter | Estimate | Standard error | Null hypothesis | p-Value

Table 1.

Likelihood results from the biexponential Poisson model for the Sovacool data.

We see that there is some evidence of a learning effect over the latter portion of the data, formally verifying what seems to be indicated in Figure 3. Moreover, the rate of learning is fairly constant throughout the period from around 1962 to 2010, as can be seen from Figures 4 and 5. In these figures the observed accident rate $Y_t/n_t$ per reactor in year $t$ is plotted against year, in contrast to Figures 2 and 3, in which the cumulative accident rate is plotted against cumulative reactor-years. The superimposed lines in Figures 4 and 5 are the estimated theoretical annual accident rates $e(N_t)$ obtained from the biexponential Poisson model. Figure 5 is the same as Figure 4, except that omitting the data before 1964 allows for a higher resolution of the $y$ axis.

Figure 4.

Observed and theoretical annual accident rate per year.

Figure 5.

Same as in Figure 4 for the years 1964–2010.

Although the data indicate a possible nonconstant learning effect over the period, with a larger effect at the beginning of the period up to about 1962, we see from Table 1 that this is not statistically significant, owing to the highly variable nature of the early data, when there were relatively few reactors and only two accidents. If the initial and final rates of learning do differ, then the best estimate of $\phi$, the effective change-point in terms of the number of reactor-years, is 43.10, which corresponds to the year 1961. This estimate is highly variable, however; a 90% confidence interval for $\phi$, constructed from the profile likelihood of $\log\phi$, gave values of $\phi$ between 3 and 221, which roughly correspond to the years 1957 and 1966, respectively. These change-point results are unreliable, however, and more reliable estimates are obtained later in this section.

The high variability in the change-point contributes to the high degree of error in the estimate of $\beta_I$ seen in Table 1. However, whether or not there is a change in the rate of learning over the period, the estimated probability of an accident or incident at a reactor in 1 year falls from 0.010 in 1963 to 0.004 in 2010.

As a diagnostic for the model, one may calculate the standardised response residuals $r_t=(y_t-\hat{\lambda}_t)/\sqrt{\hat{\lambda}_t}$ from the observed values $y_t$ of $Y_t$ and the estimated model values $\hat{\lambda}_t$. When plotted against the year, these showed no unusual pattern. Moreover, the observed standard deviation of these residuals was 0.982, indicating that our initial assumption that $\lambda_{tr}$ is constant over reactors was a reasonable one. Specifically, if we suppose that there is a positive but constant variation over reactors, so that $\mathrm{var}(\lambda_{tr})=\sigma^2$, then the theoretical variance of the $t$th residual at the true parameter values will be $1+e(N_t)^{-1}\sigma^2$. In that case the observed residuals would exhibit extra-Poisson variability, which does not appear to be the case here.
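A minimal sketch of this residual diagnostic, with invented observed counts and fitted means (not the chapter's data):

```python
from math import sqrt

# Invented observed counts y_t and fitted Poisson means lambda_t
y   = [0, 2, 1, 3, 1, 4, 2, 1]
lam = [0.5, 1.2, 1.8, 1.5, 2.6, 2.2, 3.5, 2.4]

resid = [(yt - lt) / sqrt(lt) for yt, lt in zip(y, lam)]
mean = sum(resid) / len(resid)
sd = sqrt(sum((r - mean) ** 2 for r in resid) / (len(resid) - 1))
# sd close to 1 indicates no extra-Poisson variability;
# sd well above 1 would suggest variation of the accident rate across reactors
```

For the chapter's fitted model the corresponding value was 0.982, supporting the constant-rate-across-reactors assumption.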

We further carried out a Bayesian analysis of these data. We used a noninformative prior of the form $\pi(\theta)\propto 1/\alpha$. A higher-order asymptotic approximation was computed, using the method in [17]. This was supplemented by the Monte Carlo method described in that paper. The results of the latter analysis, which may be considered exact, having negligible simulation error, are given in Table 2. These are very similar to the asymptotic results.

Parameter | Posterior mean | Posterior credible interval | Posterior probability

Table 2.

Bayes results from the biexponential Poisson model for the Sovacool data.

We see that the Bayesian credible interval for β × 10⁵ is consistent with the likelihood analysis, providing evidence of a learning effect over the latter portion of the data. The credible interval for β_I − β provides some evidence of a difference between the initial and final rates of learning, although this difference may be very small. If the initial and final rates of learning do differ, then the Bayes estimate of the change-point ϕ is 39.37, which corresponds to the year 1961, as in the likelihood analysis. However, the exact Bayesian 90% credible interval is tighter than the approximate confidence interval produced earlier and corresponds to the years 1959–1963.

Whether or not there is a change in the rate of learning over the period, the estimated probabilities of an accident or incident at a reactor in 1963 and 2010 are identical to those obtained earlier from the likelihood analysis.


6. Summary

Previous Probabilistic Risk Assessments estimated the probability of a core melt accident to be in the range of one in several 10,000 to one in several 100,000 reactor-years. The core melt accidents that actually occurred happened at a rate of one in 3700 reactor-years, much more frequently than anticipated. Thus, a world with 443 reactors has to expect 2.99 core melt accidents within the next 25 years, and a country like the USA with 103 reactors has to expect 0.7 core melt accidents.
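The expected numbers of accidents quoted above follow directly from the empirical rate: the expected count over a period is the number of reactors times the number of years times the rate per reactor-year.

```python
# Reproducing the expected-accident arithmetic from the empirical rate.
rate = 1 / 3700                   # empirical core melt rate per reactor-year

for reactors in (443, 103):       # world fleet, US fleet
    expected = reactors * 25 * rate
    print(reactors, round(expected, 2))   # 443 -> 2.99, 103 -> 0.7
```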

The Guardian data showed that incidents and accidents happen with a probability of approximately 0.001 = 1 × 10⁻³ per reactor-year. The data are consistent with no learning effect on the side of the plant operators. The second investigation, based on Sovacool’s data, shows a decrease of the accident rate from 0.010 = 10 × 10⁻³ per reactor-year in 1963 to 0.004 = 4 × 10⁻³ in 2010. There is also some indication of a stronger learning effect until the beginning of the 1960s, although this is not statistically significant. Between 1963 and 2010, the operating experience increased from 96 to 14,704 reactor-years. So, while operating experience increased by a factor of more than 150, the probability of a minor or severe accident at a reactor decreased by merely a factor of 2.5.

It might be interesting to compare the last results with the empirical core melt probability of 1/3700 = 0.27 × 10⁻³. Depending on the dataset, a core melt accident is only 3.7 times (Guardian data) or 15 times rarer than other accidents or incidents. Regarding the possible outcomes of a core melt accident, these differences seem unexpectedly low and might indicate that the datasets used do not contain all incidents and accidents that happened in the past.
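The rarity ratios in this comparison are just the quotient of each dataset's accident/incident rate and the empirical core melt rate:

```python
# Checking the rarity ratios: accident/incident rate from each dataset
# divided by the empirical core melt rate of 1/3700 per reactor-year.
core_melt = 1 / 3700

for name, rate in [("Guardian", 1e-3), ("Sovacool (2010)", 4e-3)]:
    print(name, round(rate / core_melt, 1))   # Guardian -> 3.7, Sovacool -> 14.8
```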

This conjecture finds support in an article by Phillip A. Greenberg: ‘Between 1990 and 1992 the US Nuclear Regulatory Commission received more than 6600 “Licensee Event Reports” because US nuclear plants failed to operate as designed and 107 reports because of significant events (including safety system malfunctions and unplanned and immediate reactor shutdowns)’ [18].

Our work shows the possibility of studying learning effects within the nuclear industry. More detailed results, however, require further analysis and more information from reactor operators and regulators, which is difficult to obtain on an international scale because of the restrictive information policy of the IAEA.



This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.


Conflict of interest

No potential conflict of interest was reported by the authors.


A. Appendix

Country | Total energy produced (TWh) | Reactor years | Accidents
East Germany | 143.21 | 79.9534247 | 1
South Africa | 351.43 | 54.2109589 | 0
South Korea | 3007.33 | 380.419178 | 0

Table A1.

Total nuclear energy produced in TWh [3] (until 31 Dec. 2015), reactor-years [3] (until 31 Dec. 2011) and number of accidents [7] (until 31 Dec. 2011); countries which started to build reactors that never operated are excluded.



Thomas Rose wants to thank Fachhochschule Münster for giving him a sabbatical when this work started and the Department of Science and Technology Studies, University College London for accepting him as an Honorary Senior Research Associate to pursue this work further.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Thomas Rose and Trevor Sweeting (November 5th 2018). Severe Nuclear Accidents and Learning Effects. In: Statistics - Growing Data Sets and Growing Demand for Statistics, Türkmen Göksel (Ed.), IntechOpen. DOI: 10.5772/intechopen.76637.
