Severe Nuclear Accidents and Learning Effects

Nuclear accidents involving core melting, such as those in Fukushima and Chernobyl, play an important role in discussions of the risks and benefits of nuclear energy. They seem to be more frequent than anticipated. We therefore analyse the probability of severe nuclear accidents related to power generation. To look for learning effects among reactor operators, we analyse the number of all known accidents over time. We discuss problems of data acquisition, the statistical independence of accidents at the same site and whether the known accidents form a random sample. We analyse core melt accidents with Poisson statistics and derive future accident probabilities. The main part of the chapter investigates the learning effects using generalised linear models with a frequentist and a Bayesian approach and compares the results.


Introduction
The Fukushima reactor disaster in 2011 made the question of nuclear safety relevant again. Similar accidents are known to have happened in the Soviet Union in 1986 (Chernobyl) and in the USA in 1979 (Three Mile Island). These core melt accidents are the most severe accidents in nuclear reactors: when the rods containing the nuclear fuel and the fission products melt, a large amount of radioactivity is released within the reactor and possibly into the atmosphere.
The rate of such accidents, however, seems much higher than previously claimed. We therefore study the probability of such events empirically, based on the accidents that have actually occurred.
This a posteriori approach differs from the a priori approach of Probabilistic Risk Assessment (PRA), which is carried out during the design phase of a reactor. PRA determines failure probabilities before any accident occurs by analysing possible paths towards a severe accident, rather than determining the probability empirically from existing data.
After an accident, 'learning from experience' is very often claimed. Fortunately the number of severe accidents is low, but this means the claim cannot be tested directly. Reactor operators, however, should be interested in reducing all incidents and accidents, so their frequency should decrease with increasing operating experience. We use the total operating time of all reactors, the reactor-years, as a measure of experience, analyse the accidents as a function of this experience with generalised linear models and compare a frequentist and a Bayesian approach.
Accidents can and did happen in several areas of nuclear energy, e.g. military use for weapons or submarine propulsion, medical use or fundamental research. Discussing the risks of nuclear energy involves very different arguments in all these areas. We restricted the study to accidents in nuclear reactors for power generation.
According to our analysis, we have to expect one core melt accident in 3700 reactor-years, with a 95% confidence interval ranging from one in 1442 to one in 13,548 reactor-years. In a world with 443 reactors, with 95% confidence we have to expect between 0.82 and 7.7 core melt accidents within the next 25 years.
Analysing all known accidents, we can show a learning effect. The probability of an incident or accident per reactor-year decreased from 0.01 in 1963 to 0.004 in 2010. Furthermore, there is an indication of a slightly larger learning effect prior to 1963.
It is well known that the actual number of incidents and accidents is much higher than the numbers published in scientific journals. Therefore, we studied whether the known incidents and accidents are distributed randomly over the countries operating reactors. While the data are compatible with randomness for most countries, this is not the case for the USA. From the present data, we cannot decide whether this is due to higher incident rates or to more effective sampling.
After this introduction the second section will explain some basics of the Poisson distribution. In Section 3 we present the data acquisition and its problems. Section 4 contains the discussion of core melt accidents and predictions for future events. The learning effect analysis is presented in Section 5.
While some of the results have already been published elsewhere [1], the underlying statistical work is presented here.

Poisson distribution
Rare, random events counted relative to a reference quantity, such as a period of time or an area, can be described by the Poisson distribution. Examples are the number of surface defects in body part stamping in the automotive industry or the number of calls in a call centre within a given time.
If the probability of an incident per unit time is known to be p, then within the time interval T we expect a total of λ = pT incidents. The actual number of incidents within T, however, will fluctuate randomly. The Poisson distribution gives the probability of exactly x events within T: P(x) = λ^x e^(−λ) / x!. If the time of reference T is 1 year, then λ is the expected number of incidents within 1 year. If λ is much smaller than one, then it is also approximately the probability of one incident within 1 year and of at least one incident per year. When analysing many reactors rather than a single one, the expected total number of accidents is simply the sum of the expected numbers for the single reactors, and, as long as the reactor incidents are independent of each other, the actual number of accidents is again Poisson distributed.
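For illustration, the distribution can be evaluated directly. The sketch below (Python, with a hypothetical rate λ = 0.5) computes the probability of x events when λ events are expected:

```python
import math

def poisson_pmf(x, lam):
    """Probability of exactly x events when lam events are expected."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

# Hypothetical example: lam = 0.5 expected incidents per year
lam = 0.5
p_none = poisson_pmf(0, lam)        # probability of no incident
p_one = poisson_pmf(1, lam)         # probability of exactly one incident
p_at_least_one = 1 - p_none         # probability of at least one incident
```

For small λ, both p_one and p_at_least_one are close to λ itself, which is the approximation used in the text.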
In analysing real systems, the number of (statistically fluctuating) incidents x is known, and λ has to be determined. Then, the best estimate for λ is simply this empirical value x. However, this estimate is not necessarily the true value of λ because the incidents occur randomly.
Poisson statistics allow us to compute an interval that contains the true value of λ with a given confidence level (typically 90, 95 or 99%), the so-called confidence interval. It is determined by calculating two values, λ1 and λ2, for the given number of incidents x. For the 95% confidence interval, we choose λ1 < x such that the probability of observing x or more events is 2.5% and λ2 > x such that the probability of observing x or fewer events is 2.5%. The interval from λ1 to λ2 is then a 95% confidence interval. This means that if we study many cases, then in 95% of these cases the true value of λ lies within this interval. The more cases we observe, the narrower the confidence interval becomes and the closer the estimate of λ gets to the true value.
As an example, suppose that the empirical number of events is x = 4. Then, the Poisson distribution with λ equal to 1.090 gives a 2.5% probability that the number of events is greater than or equal to 4. If λ is 10.242, then the probability that the number of events is less than or equal to 4 is also 2.5%. Thus, for the empirical value x = 4, the true value of λ lies between 1.090 and 10.242 with 95% confidence.
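These endpoints can be computed from chi-square quantiles via the standard Garwood construction, λ1 = χ²(α/2; 2x)/2 and λ2 = χ²(1 − α/2; 2x + 2)/2. A sketch in Python, assuming SciPy is available:

```python
from scipy.stats import chi2

def poisson_ci(x, conf=0.95):
    """Exact two-sided confidence interval for a Poisson mean
    after observing x events (Garwood's chi-square construction)."""
    alpha = 1 - conf
    lower = 0.0 if x == 0 else chi2.ppf(alpha / 2, 2 * x) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2
    return lower, upper

low, high = poisson_ci(4)
# low is about 1.090 and high about 10.242, reproducing the example above
```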
A similar measure of the probable distance between the estimated empirical value and the true value is the standard error. In large samples the probability that the distance between the estimated and the true value is less than the standard error is approximately 68%.

How many reactors?
The International Atomic Energy Agency in Vienna publishes data on all power reactors worldwide [2]. The same and additional information about connection to the grid, shut down, operator, manufacturer and fuel supplier can be found in several Wikipedia entries [3,4].
It was 1954 when the Soviet Union connected the first nuclear power reactor worldwide to the grid. Two years later the UK followed with Calder Hall. The number of reactors increased steadily until the mid-1980s; after that it grew only from 420 to about 450 in 2011 and has since remained nearly constant. Table A1 in the Appendix shows, for all countries worldwide, the total amount of nuclear energy produced, the reactor-years and the accidents. The total energy in TWh covers production until 31 Dec. 2015. The reactor-years have been calculated from the Wikipedia sources [3,4] until 31 Dec. 2011 to be comparable with the accident data.
The total operating time of all reactors until the end of 2011 was 14,766 reactor-years.

How many accidents?
First of all, one has to define nuclear incidents and accidents. In 1990, the IAEA introduced the INES scale of incidents and accidents with seven levels [5]. A level 1 event is called an anomaly with, e.g. 'minor problems with safety components…'; levels 2-4 are called incidents and levels 5-7 accidents. Two of the three destroyed reactors in Fukushima and the accident in Chernobyl were classified as level 7 with 'Major release of radioactive material with widespread health and environmental effects…'. The 1979 Three Mile Island accident in the USA was level 5 with 'Severe damage to the reactor core…' [6].
The USA uses a different scale, which classifies all accidents, not only nuclear ones. Major accidents are 'defined as incidents that either resulted in the loss of human life or more than US$50,000 of property damage, the amount the US federal government uses to define major energy accidents that must be reported' [7].
While the reactor data are publicly and easily available, this does not hold for the accident data.
According to the treaty of the International Atomic Energy Agency (IAEA), every member state has to inform the IAEA about events 'at Level 2 or above', but these data are publicly available for only 12 months. So, information about past accidents is not easy to obtain. We found two sources: one dataset published by the UK newspaper The Guardian [8] and another published by Benjamin Sovacool in two papers [7,9] and in his book Contesting the Future of Nuclear Power [10]. The Guardian list includes INES levels where known. Sovacool lists 'major accidents' according to the US definition. Among the listed events are the known core melt accidents in power reactors, most recently Fukushima, Japan, 2011, in three power reactors on the same site. The accidents in the three Fukushima reactors were caused by the same earthquake and the subsequent tsunami, so we count them as one. This leaves four core melt accidents in power reactors.
In order to analyse the learning effect, we treated The Guardian and Sovacool data separately.
From The Guardian's list of 24 incidents, we included only the ones related to power production.

Statistics - Growing Data Sets and Growing Demand for Statistics
This left 16 accidents of INES level 2 and higher. From Sovacool's list, we excluded five accidents not related to power generation.

Do the accident data represent a random sample?
These lists of publicly known events represent a sample of all incidents and accidents. Only random samples allow conclusions to be drawn about the underlying population. But are these samples really random? The data have been published by nuclear regulating authorities or collected by scientists, journalists and interested laypeople from a multitude of sources.
Depending on the duties of the regulators, the public interest in nuclear energy and the emphasis of the press, events might be detected more often in some countries than in others. So, we compared the number of (known) incidents in each country with its reactor-years.
If the incident probability is the same in all countries and if the probability of detecting an accident is also independent of the country, then the number of accidents in a country should be proportional to the number of reactor-years in that country. Plotting the number of accidents versus the reactor-years should then result in a straight line. A plot of these data is shown in Figure 1; the rightmost point shows the USA data.
So, for all countries except the USA, there seems to be a linear dependence between reactor-years and the number of accidents. This is supported by a linear regression over all countries except the USA, which gives a slope of 0.0036781 accidents per reactor-year with a standard error of 0.0004785. For each country but the USA, the expected value calculated from this slope lies within the 95% confidence interval of the empirical number of accidents. Only for the USA is the empirical accident number of 54 in 3731 reactor-years far away from the expected number of about 15.2.
While the data for all countries except the USA are compatible with a rate of 3.678 accidents per 1000 reactor-years, the USA data correspond to 13.06 accidents per 1000 reactor-years (Table 1). So, with the exception of the USA, the limited available data give no indication of non-random sampling or of countries having different overall accident rates. The USA data indicate that either the sampling is not random or the accident rate is higher than in the rest of the world. The present data do not allow us to determine which of these explanations is the more likely, and further studies are needed.

Results of previous PRA calculations
There have been several studies on reactor safety in the past. The first was the reactor safety study or Rasmussen report published in 1975 by the US Nuclear Regulatory Commission as report WASH-1400 or NUREG75/014 [11]. Five years later the German reactors were analysed in the Deutsche Risikostudie Kernkraftwerke [12]. In 1990 Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants [13] was published. While the first two studies analysed typical reactors in their respective countries, the last one investigated five specified reactors.
What are the results of these studies? WASH-1400 states: 'The Reactor Safety Study carefully examined the various paths leading to core melt. Using methods developed in recent years for estimating the likelihood of such accidents, a probability of occurrence was determined for each core melt accident identified. These probabilities were combined to obtain the total probability of melting the core. The value obtained was about one in 20,000 per reactor per year. With 100 reactors operating, as is anticipated for the U.S. by about 1980, this means that the chance for one such accident is one in 200 per year' [11].
So, according to WASH-1400, the probability of a core melt accident per reactor-year is 5 × 10⁻⁵.

Empirical analysis
Based on Sovacool's list and information, the following accidents are not included in the present study of severe accidents: Chalk River (1952), which showed no core meltdown; Windscale (1957), a military reactor used only for weapons production; and Simi Valley (1959), Monroe (1966) and Lucens (1969), which were experimental reactors (Lucens probably also showed no core meltdown). In Fukushima, three of the six reactors at the site suffered severe destruction, with INES ratings of 5-7. This threefold accident is counted as one because all three were triggered by the same cause, the earthquake with the subsequent tsunami.

There remain four core melt accidents in nuclear reactors for power generation.
Given the number of severe accidents, 4, and the cumulative reactor-years, 14,766, it is straightforward to calculate the probability p of a core melt accident at one reactor in 1 year: p = 4 / 14,766 ≈ 2.7 × 10⁻⁴ per reactor-year. So, we expect one severe accident in about 3700 reactor-years.
This simple calculation involves several uncertainties. First, it assumes that all reactors at all times have the same failure probability. Second, because of the small sample size of four events, it is subject to statistical fluctuations, which can be expressed through a confidence interval. At the 95% level, the empirical value of four events leads to a confidence interval from 1.0899 to 10.2416 events in 14,766 reactor-years. Therefore, with 95% confidence, the failure rate is between one accident in 1442 and one accident in 13,548 reactor-years. Nevertheless, the most probable value is one in 3700 reactor-years.
Based on this value, it is possible to calculate the probability of accidents in the future. In a world with 443 reactors, we should expect 2.99 core melt accidents within the next 25 years, with a 95% confidence interval from 0.82 to 7.7 accidents. The USA, with 104 reactors, has to expect 0.7 core melt accidents within 25 years, with a 95% confidence interval between 0.2 and 1.8 accidents.
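The arithmetic behind these projections is a direct scaling of the estimated rate and its confidence bounds by the future exposure in reactor-years; a small sketch:

```python
ACCIDENTS = 4
REACTOR_YEARS = 14766
EVENTS_LOW, EVENTS_HIGH = 1.0899, 10.2416   # 95% CI for four Poisson events

rate = ACCIDENTS / REACTOR_YEARS            # about 1 in 3700 reactor-years

def expected_accidents(reactors, years):
    """Expected core melt accidents for a reactor fleet, with 95% bounds."""
    exposure = reactors * years             # reactor-years of future operation
    return (exposure * rate,
            exposure * EVENTS_LOW / REACTOR_YEARS,
            exposure * EVENTS_HIGH / REACTOR_YEARS)

world_mean, world_low, world_high = expected_accidents(443, 25)
# world fleet: about 3.0 expected, 95% interval roughly 0.82 to 7.7
usa_mean, usa_low, usa_high = expected_accidents(104, 25)
# USA: about 0.7 expected, 95% interval roughly 0.2 to 1.8
```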

Introduction
Experience and learning from operating power reactors and from analysing incidents and accidents are important for further reducing accident rates. Increasing operational experience should result in decreasing accident rates. This can be tested empirically by comparing accident rates with the amount of operational experience. In a simple approach, operational experience can be measured by the cumulative number of reactor-years up to a given date.
The small number of core melt accidents makes it difficult to detect any learning effect. Therefore, for this analysis we also included minor accidents and incidents. The two different datasets from The Guardian with 35 accidents and from Sovacool with 99 accidents were analysed independently. The Guardian data were grouped according to INES levels, and here all incidents of level 2 and above were included. One of the criteria for a level 2 incident is a 'significant contamination within a facility into an area not expected by design'. So, these incidents must be avoided by all means. From Sovacool's data all accidents related to nuclear power generation were included. Some of the basic results given below are summarised in [1], but the analysis here is more detailed.

Preliminary analysis
In order to analyse the rather low number of accidents, the cumulative number of accidents up to a given year was compared with the cumulative number of reactor-years up to that year. Thus, the accident rate is

accident rate = cumulative number of accidents / cumulative reactor-years. (3)

Without any learning effect, the increase in accidents per reactor-year should be the same for every reactor-year, so this accident rate should remain constant. A learning effect would decrease the accident rate.
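Equation (3), evaluated after each successive year, gives the curves analysed below. A minimal sketch with hypothetical yearly data:

```python
def cumulative_rate(accidents_per_year, reactor_years_per_year):
    """Accident rate of Eq. (3) after each successive year."""
    rates, total_accidents, total_ry = [], 0, 0.0
    for accidents, ry in zip(accidents_per_year, reactor_years_per_year):
        total_accidents += accidents
        total_ry += ry
        rates.append(total_accidents / total_ry)
    return rates

# Hypothetical toy series: one accident in year 1, none in years 2-3, one in year 4
rates = cumulative_rate([1, 0, 0, 1], [10, 20, 30, 40])
# rates -> [0.1, 0.0333..., 0.0166..., 0.02]
```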
We start by investigating The Guardian data. As discussed in Section 2, after excluding some accidents from the study, the final number of nuclear power-related incidents or accidents with level 2 and above is 16. The accident rate calculated from these data is plotted against the cumulative reactor-years in Figure 2. In order to present the data more clearly, the accident rate is displayed in a logarithmic scale. Every point represents the data of 1 year. The lines are 95% pointwise confidence intervals obtained from Poisson statistics.
A decreasing trend in this plot would indicate the presence of a learning effect. As can be readily seen, the first accident in 1957 resulted in a relatively high accident rate of about 0.05 per reactor-year. The following years saw no (publicly known) accident so the observed rate decreases drastically. Such a decreasing behaviour would be expected if an initial learning effect exists. However, after around 500 reactor-years, the plot appears to stabilise, with the accident rate varying around a constant value of about 1 in 1000 reactor-years. The plot does not indicate a learning effect. We investigate this further using a more detailed statistical analysis in Section 4.2.
Next, the Sovacool data are considered. As discussed in Section 2, after excluding some accidents from the study, the final number of nuclear power-related incidents or accidents is 99. Figure 3 is a plot of the log accident rate against cumulative reactor-years for these data, along with 95% pointwise confidence limits.
The slight decreasing trend in the latter portion of the graph along with the confidence limits suggests the possible presence of a small learning effect, with a larger effect apparent in the early years. We investigate this further using a more detailed statistical analysis in Section 5.3.

Formal statistical analysis
In order to investigate the possibility of a learning effect more formally, we constructed a suitable statistical model. The notation and assumptions below, summarised in the supplementary online material for [1], are common to the analyses of both The Guardian and the Sovacool data.
Let n_t be the number of reactors operational in year t, coded as t = 1, …, T. For r = 1, …, n_t, let Y_tr be the number of accidents at reactor r in year t. It is assumed that accidents at a given reactor in any given year occur independently. Then, accidents at that reactor over a 1-year period occur according to a (possibly nonhomogeneous) Poisson process, so that Y_tr is distributed as Poisson(λ_tr), where λ_tr is the expected number of accidents at reactor r in year t, or approximately the probability of at least one accident at that reactor in year t. Assuming independence of the Y_tr over reactors, the total number of accidents Y_t = Σ_{r=1}^{n_t} Y_tr in year t is distributed as Poisson(λ_t), where λ_t = Σ_{r=1}^{n_t} λ_tr is the expected total number of accidents in year t. If we further assume that all reactors have the same probability of failure in any given year, then λ_tr = e_t, where e_t is the expected number of accidents per reactor in year t, and λ_t = n_t e_t. Any variation across reactors will lead to extra-Poisson variation, which can be assessed after model fitting. To model learning, let N_t denote the cumulative number of reactor-years up to year t and let e_t = e(N_t) depend on this operating experience. If a learning effect exists, e(N) is a decreasing function of N, so that a plot of X_t / N_t against N_t, where X_t is the cumulative number of accidents up to year t, will exhibit a decreasing trend, as illustrated in Figures 2 and 3.

Analysis of The Guardian data
For The Guardian data, we took β(N) = β, so that there is either no learning or a constant rate of learning. In this case the expected number of accidents per reactor per year is e_t(N_t) = α exp(−β N_t), an exponentially decreasing function of the number of reactor-years. Since log λ_t = log n_t + log α − β N_t, the model is a generalised linear model (GLM) with Poisson family and log link function [15]. The analysis was implemented in the programming language R. Figure 2 suggests the absence of any learning effect, but to investigate this formally, we set up and tested the null hypothesis H0: β = 0. Based on the dataset from 1956 to 2011, a likelihood analysis produced a positive estimate of 1.58 × 10⁻⁵ for β, but with a standard error of 5.5 × 10⁻⁵, this is far from statistically significant (p-value 0.78). If the estimated values were the true parameter values, the probability of a severe accident per reactor-year would fall from 0.0012 to 0.0009 over the period. If, however, β is taken to be zero, then the estimated probability of a severe accident throughout the period is 0.0010.
Given the erratic behaviour in the early years, with just one accident in 1957 followed by a run of zero accidents over the next 19 years, it is important to investigate the sensitivity of the results to the early data. For the somewhat more informative Sovacool data discussed in the next section, we will proceed more formally by elaborating the model to take the possibly different learning behaviour in the early years into account. In the case of The Guardian data, the GLM results based on the years 1958-2011 produce a negative estimate of −8.61 × 10⁻⁵ for β, indicating an increasing accident rate. However, the associated standard error of 5.7 × 10⁻⁵ is large, and so again this value of β is far from statistically significant. If β is taken to be zero, then the estimated probability throughout this period is 0.0010, the same as the result based on the complete dataset.
Finally, consideration of only the more recent data from 1970 onwards produces a positive estimate of 7.29 × 10⁻⁶ for β, which would give rise to a very slight decrease in the accident rate from 0.0011 to 0.0010 over this period. However, again the result is not statistically significant, with a standard error of 6.0 × 10⁻⁵. If β is taken to be zero, then the estimated probability throughout this period is again 0.0010. So, overall, there is no evidence from these data of any learning effect, at least beyond the initial few years of operation.

Analysis of the Sovacool data
The larger size of the Sovacool dataset allows us to elaborate the model to investigate the possibility of a learning effect more formally. To this end we choose a suitable formulation for the function e(N). A change-point model could be used, but we preferred a smooth alternative that does not presuppose the existence of a sudden change in the accident rate. A commonly used functional form that models different rates of change in the early and late portions of a series is the biexponential function, given by

log e(N) = γ − β N − η (1 − exp(−N/ϕ)), where γ = log α.

Here, β is the ultimate rate of learning, relevant in the later years. The initial rate of learning β_I, relevant for the early years, can be obtained as a function of all the parameters in the model; in particular, β_I = β + η/ϕ. If the change from the initial to the final rate is quite pronounced, it can be shown that this model approximates a change-point model with change-point at N = ϕ. We can now set up the likelihood function L(θ), where θ = (γ, β, ϕ, η), and carry out a likelihood analysis [16]. Starting values for the computation may be obtained from graphical inspection and/or by fitting a generalised linear model to the data after 1962, using the Poisson family with a log link function.
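A direct way to carry out such a likelihood analysis is numerical minimisation of the Poisson negative log-likelihood. The sketch below uses the biexponential form as rendered above, with ϕ parameterised on the log scale; the data are synthetic and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def log_e(N, gamma, beta, phi, eta):
    """Log accident rate per reactor-year: slope about beta + eta/phi for
    small N, tending to beta once N is well beyond phi."""
    return gamma - beta * N - eta * (1 - np.exp(-N / phi))

def neg_log_lik(params, y, n, N):
    """Poisson negative log-likelihood (up to an additive constant)."""
    gamma, beta, log_phi, eta = params
    lam = n * np.exp(np.clip(log_e(N, gamma, beta, np.exp(log_phi), eta),
                             -50, 50))
    return float(np.sum(lam - y * np.log(lam)))

# Synthetic data generated from the model itself
rng = np.random.default_rng(1)
n = np.linspace(5, 400, 50)                 # reactors per year
N = np.cumsum(n)                            # cumulative reactor-years
y = rng.poisson(n * np.exp(log_e(N, -4.6, 1e-4, 50.0, 1.0)))

start = np.array([-4.0, 0.0, np.log(100.0), 0.5])
fit = minimize(neg_log_lik, start, args=(y, n, N), method="Nelder-Mead")
# fit.x holds the estimates of (gamma, beta, log phi, eta)
```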
The main hypothesis of interest is H0: β = 0, which corresponds to no learning in the later years. Another hypothesis of interest is that there is a constant rate of learning throughout the entire period, that is, H1: β_I = β. The maximum likelihood estimates and standard errors for the various parameters, along with the p-values for the indicated null hypotheses, are shown in Table 1.
We see that there is some evidence of a learning effect over the latter portion of the data, formally verifying what is indicated in Figure 3. The fitted model is illustrated in Figures 4 and 5, in which the cumulative accident rate is plotted against cumulative reactor-years. The superimposed lines in Figures 4 and 5 are the estimated theoretical annual accident rates e(N_t) obtained from the biexponential Poisson model. Figure 5 is the same as Figure 4, except that omitting the data before 1964 allows a higher resolution of the y-axis.

Although the data indicate a possible nonconstant learning effect over the period, with a larger effect at the beginning up to about 1962, we see from Table 1 that this is not statistically significant, owing to the highly variable nature of the early data, when there were relatively few reactors and only two accidents. If the initial and final rates of learning do differ, then the best estimate of ϕ, the effective change-point in terms of reactor-years, is 43.10, which corresponds to the year 1961. This estimate is highly variable, however: a 90% confidence interval for ϕ, constructed from the profile likelihood of log ϕ, gave values of ϕ between 3 and 221, roughly corresponding to the years 1957 and 1966, respectively. These change-point results are unreliable, however, and more reliable estimates are obtained later in this section.
The high variability of the change-point contributes to the large error in the estimate of β_I seen in Table 1. However, whether or not there is a change in the rate of learning over the period, the estimated probability of an accident or incident at a reactor in 1 year falls from 0.010 in 1963 to 0.004 in 2010.
As a diagnostic for the model, one may calculate the standardised (Pearson) residuals r_t = (y_t − λ̂_t) / √λ̂_t, where λ̂_t = n_t e(N_t) is evaluated at the fitted parameters. When plotted against the year, these showed no unusual pattern. Moreover, the observed standard deviation of these residuals was 0.982, indicating that our initial assumption that λ_tr is constant over reactors was a reasonable one. Specifically, if we suppose that there is a positive but constant variation over reactors, so that var(λ_tr) = σ², then the theoretical variance of the t-th residual at the true parameter values will be 1 + e(N_t)σ². Thus, the observed residuals would exhibit extra-Poisson variability, which does not appear to be the case here.
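As a quick check that residuals standardised this way have unit standard deviation under a correctly specified Poisson model, one can simulate from the model; the means below are hypothetical, not the chapter's fitted values:

```python
import numpy as np

def pearson_residuals(y, lam_hat):
    """Standardised (Pearson) residuals for Poisson counts; under the
    fitted model they have mean near 0 and standard deviation near 1."""
    y = np.asarray(y, dtype=float)
    lam_hat = np.asarray(lam_hat, dtype=float)
    return (y - lam_hat) / np.sqrt(lam_hat)

rng = np.random.default_rng(2)
lam = np.linspace(0.5, 5.0, 1000)          # hypothetical fitted means
resid = pearson_residuals(rng.poisson(lam), lam)
# resid.std() is close to 1; extra-Poisson variation would inflate it
```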
We further carried out a Bayesian analysis of these data. We used a noninformative prior of the form π(θ) ∝ 1/α. A higher-order asymptotic approximation was computed using the method in [17], supplemented by the Monte Carlo method described in that paper. The results of the latter analysis, which may be considered exact up to negligible simulation error, are given in Table 2. They are very similar to the asymptotic results.
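The exact computation in [17] is specialised, but the flavour of such a Monte Carlo Bayesian analysis can be conveyed with a toy example: a single Poisson mean λ with prior ∝ 1/λ, the analogue of the π(θ) ∝ 1/α prior above, sampled by random-walk Metropolis. All numbers here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
x = 4   # observed events, as in the core melt example

def log_post(lam):
    """Log posterior for a Poisson mean with prior 1/lam:
    proportional to lam**(x - 1) * exp(-lam), i.e. a Gamma(x, 1) density."""
    return -np.inf if lam <= 0 else (x - 1) * np.log(lam) - lam

samples, lam = [], float(x)
for _ in range(20000):                       # random-walk Metropolis
    proposal = lam + rng.normal(0.0, 1.0)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(lam):
        lam = proposal
    samples.append(lam)

posterior = np.array(samples[5000:])         # drop burn-in
# posterior.mean() is close to the exact Gamma(4, 1) posterior mean of 4
```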
We see that the Bayesian credible interval for β is consistent with the likelihood analysis, providing evidence of a learning effect over the latter portion of the data. The credible interval for β_I − β provides some evidence of a difference between the initial and final rates of learning, although this difference may be very small. If the initial and final rates of learning do differ, then the Bayes estimate of the change-point ϕ is 39.37, which corresponds to the year 1961, as in the likelihood analysis, and the exact Bayesian 90% credible interval given in Table 2 is more reliable than the profile-likelihood interval reported earlier. Whether or not there is a change in the rate of learning over the period, the estimated probabilities of an accident or incident at a reactor in 1963 and 2010 are identical to those obtained earlier from the likelihood analysis.

Summary
Previous Probabilistic Risk Assessments estimated the probability of a core melt accident to be in the range of one in several 10,000 to one in several 100,000 reactor-years. The real core melt accidents happened at a rate of one in 3700 reactor-years, much more frequently than anticipated. Thus, a world with 443 reactors has to expect 2.99 core melt accidents within the next 25 years, and a country like the USA with 104 reactors 0.7 core melt accidents.
The Guardian data showed that incidents and accidents happen with a probability of approximately 0.001 = 1 × 10 −3 per reactor-year. The data are consistent with no learning effect on the side of the plant operators. The second investigation based on Sovacool's data shows a decrease of the accident rate from 0.010 = 10 × 10 −3 per reactor-year in 1963 to 0.004 = 4 × 10 −3 in 2010. There is also some indication of a stronger learning effect until the beginning of the 1960s, although this is not statistically significant. Between 1963 and 2010, the operating experience increased from 96 to 14,704 reactor-years. So, while operating experience increased by a factor of over 150, the probability of a minor or severe accident at a reactor decreased by merely a factor of 2.5.
It is interesting to compare these results with the empirical core melt probability of 1/3700 ≈ 0.27 × 10⁻³. Depending on the dataset, a core melt accident is only about 3.7 times (The Guardian data) or 15 times rarer than other accidents or incidents. Given the possible consequences of a core melt accident, these ratios seem unexpectedly small and might indicate that the datasets used do not contain all incidents and accidents that happened in the past.
This conjecture finds support in an article by Phillip A. Greenberg: 'Between 1990 and 1992 the US Nuclear Regulatory Commission received more than 6600 "Licensee Event Reports" because US nuclear plants failed to operate as designed and 107 reports because of significant events (including safety system malfunctions and unplanned and immediate reactor shutdowns)' [18].
Our work shows that learning effects within the nuclear industry can be studied empirically. More detailed results, however, require further analysis and more information from reactor operators and regulators, which is difficult to obtain on an international scale because of the restrictive information policy of the IAEA.