
Recent Developments in Count Rate Processing Associated with Radiation Monitoring Systems

By Romain Coulon and Jonathan Dumazert

Submitted: May 27th 2017. Reviewed: September 25th 2017. Published: December 20th 2017.

DOI: 10.5772/intechopen.71233


Abstract

This chapter presents some recent data processing developments associated with radiation monitoring systems. Radiation monitors have to continuously provide count rate estimations with accuracy and precision. A filtering technique based on a Centered Significance Test coupled with Brown's double exponential filter has been developed and used in compensation measurement and moving source detection schemes.

Keywords

  • data processing
  • nuclear counting
  • radiation monitor
  • signal processing
  • filtering
  • frequentist inference

1. Introduction

During the last decades, ionizing radiation detectors have grown in performance thanks to developments in digital electronics (ADCs and FPGAs), allowing for advanced processing of nuclear pulse signals. It is also noteworthy that this field has favored the development of real-time processing algorithms dealing with count rate data.

The architecture of a typical nuclear measurement system is presented in Figure 1. It can be divided into four parts:

  • Voltage supply,

  • Detector part,

  • Front-end electronics,

  • User interface.

Figure 1.

Schematic architecture of a nuclear measurement apparatus.

The detector part contains the physical sensor (noble gas, scintillation material, or semiconductor) in which radiation interacts with matter. A conversion unit (preamplifier or photo-converter) converts the induced charges or photons into amplified voltage pulses. In the case of gas or semiconductor-based sensors, a high voltage is required to polarize the medium, and a low voltage is needed to supply active components of the preamplifier. In the case of scintillators, a high voltage supplies the photomultiplier.

Front-end electronics is composed of an analog filter, an analog-to-digital converter (ADC), and a digital filter. An analog shaping filter can be used to adapt the signal before digital conversion (dynamic range and compliance with the Shannon sampling criterion), and/or to maximize the signal-to-noise ratio (SNR). The ADC digitizes the signal with a given frequency and resolution.1 This digital signal is processed in a fast electronic component, typically a microcontroller or a field-programmable gate array (FPGA). The embedded firmware has to cope with the very high frequency of the ADC output, with a processing period in the range of 1–10 ns. The algorithms implemented in the firmware perform the pulse processing, which mainly consists in triggering, first digital filtering for SNR maximization, stabilizing the baseline, estimating the dead time, and counting a number of pulse events N over a period of time ∆τ. This general description is not exhaustive, and a variety of architectures is conceivable depending on the mix between analog and digital processing. Though modern trends tend to favor digital filtering, analog filtering can still be retained to comply with cost reduction or embedded strategies. For instance, the front-end associated with a scintillator can directly digitize the output voltage of a photomultiplier using a 500 MS/s ADC and process the pulses using an FPGA (notably when pulse shape discrimination is needed). On the other hand, the initial signal can be filtered using analog components (trapezoidal filtering) before digitization with a 10 MS/s ADC and count processing with a microcontroller.

An interface is built on a computer connected to the front-end electronic card. The software reads, at each time interval ∆t, a new count value Ni according to a defined communication protocol. This second processing can be divided into two parts: filtering of the count rate signal and displaying. The period ∆t has to be chosen in compliance with continuous measurement requirements, typically close to the retinal persistence of 0.1 s. We can highlight here the speed difference between the digital pulse processing coded in VHDL or Verilog for the FPGA (very fast) and the count rate processing, which can be coded in C/C++ in a microcontroller and/or the PC interface (7–8 orders of magnitude slower). In compact systems, the count rate processing is usually incorporated into the firmware while, in larger systems, the count rate processing is remotely implemented in the PC.

This chapter will not address pulse processing techniques, for which details can be found in [1–3], but presents some recently developed techniques to process the count rate signal using frequentist inference. Bayesian inference can also be implemented to process count rates, for instance for gamma-ray spectrum unfolding or photon-limited imaging filtering [4, 5]. Such methods are very efficient at accurately processing nuclear counting data but become unsuited to online applications. After describing the theoretical model of the counting process, a smoothing technique will be presented as a fundamental building block, ensuring online and adaptive filtering of the signal. The issue of composite measurements will then be addressed with a method improving metrological reliability for particle discrimination (compensation technique). Finally, the use of detectors in a network to address moving source detection will be developed.

2. Nuclear counting model

Nuclear disintegration can occur following different processes depending on the A/Z ratio of the concerned isotope. The major disintegration processes are β−, β+, ε (electron capture), α, and spontaneous fission decay, presented in the following nuclear equations, where X is the mother nucleus and Y the daughter nucleus:

${}^{A}_{Z}X \xrightarrow{T_{1/2}} {}^{A}_{Z+1}Y + {}^{\;0}_{-1}\beta^{-} + {}^{0}_{0}\bar{\nu}_e$  (E1)

${}^{A}_{Z}X \xrightarrow{T_{1/2}} {}^{A}_{Z-1}Y + {}^{\;0}_{+1}\beta^{+} + {}^{0}_{0}\nu_e$  (E2)

${}^{A}_{Z}X + {}^{\;0}_{-1}e \xrightarrow{T_{1/2}} {}^{A}_{Z-1}Y + {}^{0}_{0}\nu_e$  (E2a)

${}^{A}_{Z}X \xrightarrow{T_{1/2}} {}^{A-4}_{Z-2}Y + {}^{4}_{2}\alpha$  (E3)

${}^{A}_{Z}X \xrightarrow{T_{1/2}} {}^{A_1}_{Z_1}Y_1 + {}^{A_2}_{Z_2}Y_2 + k\,{}^{1}_{0}n$  (E4)

Subsequently, the daughter nucleus is, most of the time, left in an excited state and usually reaches its ground state by gamma-ray emission:

${}^{A}_{Z}Y^{*} \rightarrow {}^{A}_{Z}Y + {}^{0}_{0}\gamma$  (E5)

According to the detector type, β−, β+, α, n, or γ particles are detected and counted. The time τd required for an unstable nucleus to decay is undetermined and takes its value in an exponential distribution, whatever the time lapse between its creation and the observation (memoryless phenomenon). The probability distribution p(τd = t′) of the decay instant, where t′ and λ are, respectively, the observation instant and the decay constant of the nucleus, is given by:

$p(\tau_d = t') = \lambda\,\exp(-\lambda t')$  (E6)

The observation of an unstable nucleus over a time t forms a Bernoulli trial in which two results can be observed: the nucleus has decayed or the nucleus has not decayed. A probability of p and 1 − p can be, respectively, associated with each branch of the trial for every instant as illustrated in Figure 2.

Figure 2.

Illustration of the Bernoulli trial applied to an individual nucleus disintegration.

The probability pX → Y(t) to observe a disintegration of the mother nucleus X → Y before time t (t′ = 0 being the start of the observation) is obtained as:

$p_{X \rightarrow Y}(t) = \displaystyle\int_{0}^{t} p(\tau_d = t')\,dt' = 1 - e^{-\lambda t}$  (E7)

In a radioactive source containing a population of NX unstable nuclei, the decay of an individual nucleus does not impact the decay of the others. The Bernoulli trial is therefore repeated NX times (Figure 3) during the observation time t, and the number of observed decays n is described by a Binomial law such as:

$p\!\left(N_{X \rightarrow Y}(t) = n\right) = \dfrac{N_X!}{(N_X - n)!\,n!}\;p_{X \rightarrow Y}(t)^{\,n}\left(1 - p_{X \rightarrow Y}(t)\right)^{N_X - n}$  (E8)

Figure 3.

Illustration of Bernoulli trials applied to a population of unstable nuclei.

In practice, NX is very large and pX → Y(t) is usually very small (1/λ ≫ t). In these conditions, the Binomial law converges toward a Poisson law $\mathcal{P}$ such that:

$p\!\left(N_{X \rightarrow Y}(t) = n\right) = \mathcal{P}(N_X \lambda t) = \dfrac{(N_X \lambda t)^n}{n!}\,e^{-N_X \lambda t}$  (E9)

Expectation and variance of the number of decays are equal to NXλt. The number of counts N measured before observation time t is obtained by weighting Eq. (9) with the detection efficiency ε and the probability η of the detected particle to be emitted during decay. The expected count rate ρ = εηNXλ thus becomes the parameter of the distribution of measured count values before t:

$p\!\left(N(t) = n\right) = \mathcal{P}(\rho t) = \dfrac{(\rho t)^n}{n!}\,e^{-\rho t}$  (E10)

At each time ti, sampled such that t0 = 0 and ∀ i ≥ 1, ti = ti−1 + ∆t, the raw estimation of the count rate ρ(ti) = ρi is provided by measuring Ni, which is a time-dependent random variable taking its values in a Poisson distribution such that:

$N_i \sim \mathcal{P}(\rho_i\,\Delta t)$  (E11)

A challenge in radiation monitoring is to provide, at each time ti, a count rate estimation ρi maximizing both precision (σ(ρi) → 0) and accuracy (σ(ti) → 0). Algorithmic techniques meeting this expectation are discussed in the next section.
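To make the counting model concrete, the following minimal Python sketch (our own illustration; variable names and parameter values are assumptions, not taken from the chapter) draws count samples according to Eq. (11) and forms the raw estimator ρ̂i = Ni/∆t:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

dt = 0.1           # sampling period Delta_t (s), close to retinal persistence
rho = 50.0         # true count rate (s^-1), assumed constant here
n_samples = 1000

# N_i ~ Poisson(rho_i * dt), cf. Eq. (11)
counts = rng.poisson(rho * dt, size=n_samples)

# Raw (unfiltered) estimator: rho_hat_i = N_i / dt
rho_hat = counts / dt
print(f"mean = {rho_hat.mean():.1f} s^-1, std = {rho_hat.std():.1f} s^-1")
```

With only 5 expected counts per sample here, the raw estimator fluctuates by roughly 1/√5 ≈ 45% around its mean, which motivates the smoothing techniques of the next section.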

3. Count rate smoothing

The aim of smoothing algorithms is to improve the estimation of ρi, originally defined as $\hat{\rho}_i = N_i/\Delta t$. This improvement can be achieved by using past values Ni−1, Ni−2, … recorded in a memory, according to the assumption that p(ρi|Ni, Ni−1, Ni−2, …) is more precise than p(ρi|Ni). If we consider, in a first approach, a constant count rate ρ, the estimator which maximizes the likelihood of a homogeneous Poisson process is the average [6]:

$\hat{\rho}_i = \dfrac{1}{(m+1)\Delta t}\displaystyle\sum_{j=i-m}^{i} N_j$  (E12)

where m + 1 is the temporal depth of the filter; this corresponds to a kernel function ϑ in which every weight ϑj equals one. According to the property of equality between variance and expectation, the associated variance $\sigma^2(\hat{\rho}_i)$ can be estimated as:

$\sigma^2(\hat{\rho}_i) = \dfrac{1}{\left((m+1)\Delta t\right)^2}\displaystyle\sum_{j=i-m}^{i} N_j$  (E13)

The relative stochastic uncertainty $\sigma(\hat{\rho}_i)/\hat{\rho}_i$ is inversely proportional to the square root of the temporal depth m + 1.
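As a sketch under the homogeneous assumption, Eqs. (12) and (13) translate directly into code (the helper name is ours):

```python
def moving_average_rate(counts, dt, m):
    """Maximum-likelihood rate estimate over the last m+1 samples.
    Returns (rho_hat, sigma), cf. Eqs. (12) and (13)."""
    window = counts[-(m + 1):]             # N_{i-m} ... N_i
    total = window.sum()
    rho_hat = total / ((m + 1) * dt)       # Eq. (12)
    sigma = total ** 0.5 / ((m + 1) * dt)  # Eq. (13): Poisson variance = expectation
    return rho_hat, sigma

# Example usage: rho_hat, sigma = moving_average_rate(counts, dt, m=50)
```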

In practice, counting processes are not homogeneous (ρ is not constant). In this case, it is important to provide an estimate of the time $\hat{t}_i$ effectively corresponding to the current count rate estimate $\hat{\rho}_i$. Because the sampling times ti over the temporal depth m + 1 are identically weighted, the estimate $\hat{\rho}_i$ from Eq. (12) is associated with a time estimate $\hat{t}_i = t_i - \frac{(m+1)\Delta t}{2}$, with a temporal precision $\sigma(\hat{t}_i) = \frac{(m+1)\Delta t}{2\sqrt{3}}$. We therefore see that $\sigma(\hat{\rho}_i)$ can only be minimized to the detriment of $\sigma(\hat{t}_i)$, leading to a degradation of accuracy when ρ is varying. One way to address this issue is to actualize the temporal depth mi after every count rate estimation. The optimal value for mi is a function of the temporal behavior of the rate ρ at the time ti: $\frac{d\rho}{dt}\big|_{t=t_i} = 0$ or $\left|\frac{d\rho}{dt}\big|_{t=t_i}\right| > 0$.

First approaches consist in the implementation of preset count filters providing a fixed variance $\sigma^2(\hat{\rho}_i)$, or finite impulse response (FIR) filters in which a kernel function ϑ is used to assign more weight to recent than to older count values, such that Eq. (12) becomes:

$\hat{\rho}_i = \dfrac{\displaystyle\sum_{j=i-m}^{i} \vartheta_{i-j}\,N_j}{\Delta t \displaystyle\sum_{j=0}^{m}\vartheta_j}$  (E14)

Among FIR filters, the exponential moving average (EMA) remains widespread [7, 8], but it does not fully resolve the trade-off between accuracy and precision.

The algorithmic translation of the actualization of mi is the construction of infinite impulse response (IIR) filters dedicated to nuclear counting [9]. Such nonlinear filtering requires a hypothesis test to detect changes in the count rate ρ. The null hypothesis H0 and the detection hypothesis H1 are formalized as follows:

$H_0: \forall j \in ⟦i - m_i;\, i⟧,\ \rho_j = \theta_0$  (E15)

$H_1: \exists j \in ⟦i - m_i;\, i⟧,\ \rho_j = \theta_1 \neq \theta_0$  (E16)

In a first approach [10], a sequential probability ratio test (SPRT) was assessed under the assumption that θ1 is a known value. Later, generalized tests in which θ1 is an unknown parameter were introduced, notably the generalized likelihood ratio test (GLR) [11] and the centered significance test (CST) [12]. In these change detection algorithms, several estimations of the current count rate are calculated using different temporal depths k such that:

$\hat{\rho}_i^k = \dfrac{1}{(k+1)\Delta t}\displaystyle\sum_{j=i-k}^{i} N_j$  (E17)

In the rest of the discussion, we will conventionally use the notation $\hat{\rho}_i^k$ to designate both the underlying random variable and its actual values.

In the CST test, the vector $\left(\hat{\rho}_i^k\right)_{1 \le k \le m_i}$ is scanned to find a potential change in the true rate ρ. For every temporal depth k ∈ ⟦1; mi⟧, the difference between count rate estimations $\Delta\hat{\rho}_i^k = \hat{\rho}_i^{m_i} - \hat{\rho}_i^k$ is the quantity which will be tested for significance.

The method is based on a comparison between actual and expected distributions of $\Delta\hat{\rho}_i^k$ under H0 and H1, respectively [13]. The distribution $\mathcal{D}$ of $\Delta\hat{\rho}_i^k$ is the difference between two weighted Poisson distributions:

$\Delta\hat{\rho}_i^k \sim \mathcal{D} = \dfrac{1}{(m_i+1)\Delta t}\,\mathcal{P}\!\left(\hat{\rho}_i^{m_i}(m_i+1)\Delta t\right) - \dfrac{1}{(k+1)\Delta t}\,\mathcal{P}\!\left(\hat{\rho}_i^{k}(k+1)\Delta t\right)$  (E18)

The expectation is $E(\Delta\hat{\rho}_i^k) = E(\hat{\rho}_i^{m_i}) - E(\hat{\rho}_i^k) = \theta$ between times (i − mi)∆t and i∆t. Moreover, we will make use of the assumption $\hat{\theta} = \hat{\rho}_i^{m_i} - \hat{\rho}_i^k$, as is common in nuclear counting experiments with finite statistics [1]. The variances associated with both uncorrelated random processes are summed to obtain a cumulative standard deviation for $\Delta\hat{\rho}_i^k$. According to the equality between expectation and variance, we obtain:

$\sigma(\Delta\hat{\rho}_i^k) = \sqrt{\dfrac{\sigma^2\!\left(\hat{\rho}_i^{m_i}(m_i+1)\Delta t\right)}{\left((m_i+1)\Delta t\right)^2} + \dfrac{\sigma^2\!\left(\hat{\rho}_i^{k}(k+1)\Delta t\right)}{\left((k+1)\Delta t\right)^2}} \approx \sqrt{\dfrac{\hat{\rho}_i^{m_i}}{(m_i+1)\Delta t} + \dfrac{\hat{\rho}_i^{k}}{(k+1)\Delta t}}$  (E19)

We will note $\mathcal{D}\!\left(E(\Delta\hat{\rho}_i^k),\, \sigma(\Delta\hat{\rho}_i^k)\right)$ the distribution of the difference random variable with its first and second order moments.

Under H0, $E(\Delta\hat{\rho}_i^k) = 0$ (cf. left curve in Figure 4). A decision threshold (DT) is determined in compliance with a given risk of false detection $\alpha_i^k = p(H_1|H_0)$. DT is defined in the following formula, where $Q_{1-\alpha_i^k}$ is the quantile associated with the error function (err) at confidence level $1-\alpha_i^k$:

$\alpha_i^k = \displaystyle\int_{DT}^{\infty} \mathcal{D}_{H_0}\!\left(0,\, \sigma(\Delta\hat{\rho}_i^k)\right) d\Delta\hat{\rho}_i^k \equiv \mathrm{err}\!\left(Q_{1-\alpha_i^k}\right)$  (E20)

Figure 4.

Illustration of distributions DH0 (left curve) and DH1 (right curve) and construction rules of the hypothesis test.

In practice, for embedded implementations, it is impossible to sample and interpolate distributions $Q_{1-\alpha_i^k} = f(\alpha_i^k)$ for every value of i and k. Moreover, when $E(\hat{\rho}_i^{m_i})$ and $E(\hat{\rho}_i^k)$ are large enough, the distribution $\mathcal{D}_{H_0}$ may be approximated as $\mathcal{N}\!\left(0,\, \sigma(\Delta\hat{\rho}_i^k)\right)$, with $\mathcal{N}$ the Normal law. Under this assumption, for every value of i and k, $\alpha_i^k = \alpha$ and $Q_{1-\alpha_i^k} = Q_{1-\alpha}$, where $Q_{1-\alpha}$ is a quantile of $\mathcal{N}$, and err becomes:

$\mathrm{err}(Q_{1-\alpha}) = 2 - 2\,\Phi(Q_{1-\alpha})$  (E21)

where Φ is the cumulative distribution function of the centered Normal law.

As illustrated in Figure 4, DT can be calculated thanks to the weighting of the standard deviation by Q1−α such that:

$DT_i^k = Q_{1-\alpha}\,\sigma(\Delta\hat{\rho}_i^k)$  (E22)

If $\Delta\hat{\rho}_i^k \le DT_i^k$, hypothesis H0 is accepted with a confidence level equal to 1 − α.

Under H1, $\mathcal{D}_{H_1}\!\left(E(\Delta\hat{\rho}_i^k),\, \sigma(\Delta\hat{\rho}_i^k)\right) \approx \mathcal{N}\!\left(DL_i^k,\, \sigma(\Delta\hat{\rho}_i^k + DL_i^k)\right)$ (cf. right curve in Figure 4), where DL is defined as the detection limit. DL is determined in compliance with a given risk of non-detection β = p(H0|H1). DL is obtained from the following formula:

$\beta = \displaystyle\int_{-\infty}^{DT} \mathcal{N}\!\left(DL_i^k,\, \sigma(\Delta\hat{\rho}_i^k + DL_i^k)\right) = \mathrm{err}\!\left(Q_{1-\beta}\right)$  (E23)

As illustrated in Figure 4, DL can be calculated thanks to the weighting of the associated standard deviation by the quantile Q1−β of the error function err such that:

$DL_i^k = DT_i^k + Q_{1-\beta}\,\sigma\!\left(\Delta\hat{\rho}_i^k + DL_i^k\right)$  (E24)

An equivalent confidence level 1 − α = 1 − β = γ is considered, and Eq. (23) is solved recursively such that:

$\forall y \ge 1,\quad DL_{i,y}^k = Q_\gamma\left(\sigma(\Delta\hat{\rho}_i^k) + \sigma(\Delta\hat{\rho}_i^k + DL_{i,y-1}^k)\right)$  (E25)

With

$DL_{i,0}^k = 2\,Q_\gamma\,\sigma(\Delta\hat{\rho}_i^k)$  (E26)

When y → ∞,

$DL_i^k = Q_\gamma^2 + 2\,Q_\gamma\,\sigma(\Delta\hat{\rho}_i^k)$  (E27)

If $\Delta\hat{\rho}_i^k \ge DL_i^k$, hypothesis H1 is accepted with a confidence level 1 − β = γ.
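The test of Eqs. (19), (22), and (27) can be sketched as follows for one depth pair (mi, k), under the Normal approximation (a simplified illustration; the helper name is ours and the closed-form DL follows Eq. (27) as given above):

```python
import numpy as np

def cst_significance(counts, dt, m, k, q_gamma=1.645):
    """Return True if the k-depth estimate deviates significantly from the
    m-depth estimate (k < m), cf. Eqs. (17), (19), (22) and (27)."""
    rho_m = counts[-(m + 1):].sum() / ((m + 1) * dt)   # Eq. (17), depth m
    rho_k = counts[-(k + 1):].sum() / ((k + 1) * dt)   # Eq. (17), depth k
    # Cumulative standard deviation of the difference, Eq. (19)
    sigma = np.sqrt(rho_m / ((m + 1) * dt) + rho_k / ((k + 1) * dt))
    dl = q_gamma ** 2 + 2.0 * q_gamma * sigma          # detection limit, Eq. (27)
    return abs(rho_m - rho_k) > dl                     # H1 accepted if True
```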

The number Li of significant changes recorded in the memory of estimates $\hat{\rho}_i^k$ is used to calculate the next value of the temporal depth mi+1:

$L_i = \dim\left(\underset{k,\,1\le k\le m_i}{\arg}\left(\Delta\hat{\rho}_i^k > DL_i^k\right)\right)$  (E28)

If Li = 0, the true rate ρ is considered to remain constant and the temporal depth may be extended, mi+1 = mi + 1, to the benefit of a reduction of $\sigma(\hat{\rho})$ (better precision). On the other hand, if Li > 0, the true rate ρ is considered to have changed and the temporal depth needs to be reduced, mi+1 = mi − Li, to the benefit of $\sigma(\hat{t})$ (better accuracy).

At every elementary time step ∆t, the retained count rate estimate $\hat{\rho}_i$ is therefore calculated over an adaptable temporal depth mi, Eq. (17) becoming:

$\hat{\rho}_i = \dfrac{1}{(m_i+1)\Delta t}\displaystyle\sum_{j=i-m_i}^{i} N_j$  (E29)

With

$\sigma^2(\hat{\rho}_i) = \dfrac{1}{\left((m_i+1)\Delta t\right)^2}\displaystyle\sum_{j=i-m_i}^{i} N_j$  (E30)

This nonlinear approach performs advantageously in comparison with conventional linear filters [12, 14], maintaining sufficient precision while rate changes occur in the signal.
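An adaptive-depth loop in the spirit of Eqs. (28)–(30), reusing the cst_significance helper sketched above, might read as follows (a simplified sketch, not the authors' exact implementation):

```python
def adaptive_rate_filter(counts, dt, q_gamma=1.645, m_max=500):
    """CST-style adaptive filter: count significant changes L_i (Eq. (28)),
    shrink or grow the temporal depth m_i, then re-estimate (Eqs. (29)-(30))."""
    m, estimates = 1, []
    for i in range(len(counts)):
        m = min(m, i) if i > 0 else 0      # cannot look back beyond history
        history = counts[:i + 1]
        # L_i: number of depths k showing a significant change, Eq. (28)
        changes = sum(cst_significance(history, dt, m, k, q_gamma)
                      for k in range(1, m))
        m = m - changes if changes > 0 else min(m + 1, m_max)
        window = history[-(m + 1):]
        estimates.append(window.sum() / (len(window) * dt))   # Eq. (29)
    return estimates
```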

Remaining high-frequency fluctuations can now be reduced using a second, recursive smoother, for instance Brown's double exponential filter [14]. A first exponential smoothing $\hat{\rho}_i^{(1)}$ is performed on $\hat{\rho}_i$ with a smoothing parameter δi such that:

$\hat{\rho}_i^{(1)} = \delta_i\,\hat{\rho}_i + (1-\delta_i)\,\hat{\rho}_{i-1}^{(1)}$  (E31)

With

$\sigma^2(\hat{\rho}_i^{(1)}) = \left(\delta_i\,\sigma(\hat{\rho}_i)\right)^2 + \left((1-\delta_i)\,\sigma(\hat{\rho}_{i-1}^{(1)})\right)^2$  (E32)

A last exponential smoothing $\hat{\rho}_i^{(2)}$ is eventually performed on $\hat{\rho}_i^{(1)}$ under the form:

$\hat{\rho}_i^{(2)} = \delta_i\,\hat{\rho}_i^{(1)} + (1-\delta_i)\,\hat{\rho}_{i-1}^{(2)}$  (E33)

With

$\sigma^2(\hat{\rho}_i^{(2)}) = \left(\delta_i\,\sigma(\hat{\rho}_i^{(1)})\right)^2 + \left((1-\delta_i)\,\sigma(\hat{\rho}_{i-1}^{(2)})\right)^2$  (E34)

The parameter δi changes as a function of the parameter mi and its strength is set with the parameter W:

$\delta_i = 1 - \exp\!\left(-\dfrac{1}{W\,m_i}\right)$  (E35)

Finally, Brown's estimation $\hat{\rho}_i^{(B)}$ is calculated such that:

$\hat{\rho}_i^{(B)} = 2\,\hat{\rho}_i^{(1)} - \hat{\rho}_i^{(2)}$  (E36)

With

$\sigma^2(\hat{\rho}_i^{(B)}) = 4\,\sigma^2(\hat{\rho}_i^{(1)}) + \sigma^2(\hat{\rho}_i^{(2)})$  (E37)
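A direct transcription of the Brown stage (Eqs. (31)–(37)) might look like this (a sketch; the state handling and naming are ours):

```python
import math

def brown_step(rho, var_rho, m_i, state=None, W=0.2):
    """One step of Brown's double exponential smoothing, Eqs. (31)-(37).
    `state` carries (s1, var1, s2, var2) from the previous step."""
    delta = 1.0 - math.exp(-1.0 / (W * m_i))          # Eq. (35)
    if state is None:                                  # first call: seed both stages
        state = (rho, var_rho, rho, var_rho)
    s1, v1, s2, v2 = state
    s1 = delta * rho + (1 - delta) * s1                # Eq. (31)
    v1 = delta**2 * var_rho + (1 - delta)**2 * v1      # Eq. (32)
    s2 = delta * s1 + (1 - delta) * s2                 # Eq. (33)
    v2 = delta**2 * v1 + (1 - delta)**2 * v2           # Eq. (34)
    rho_brown = 2 * s1 - s2                            # Eq. (36)
    var_brown = 4 * v1 + v2                            # Eq. (37)
    return rho_brown, var_brown, (s1, v1, s2, v2)
```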

Figures 5 and 6 illustrate the advantage of the hereby described nonlinear filter over conventional moving average filters for a 20% rate variation, respectively in a low count rate configuration (5 counts per sample) and in a higher count rate configuration (500 counts per sample). The nonlinear filter has been set with parameters Qγ = 1.645 (γ = 90%) and W = 0.2, and compared to moving average filters set with m = 50 samples (soft) and m = 500 samples (hard). Figure 5 shows that nonlinear filtering offers a better compromise between precision and accuracy, though the detection of small changes within large statistical fluctuations remains unreachable ($\Delta\hat{\rho}_i^k \le DL_i^k$). At higher count rates (Figure 6), nonlinear filtering permits the detection of the rate change ($\Delta\hat{\rho}_i^k > DL_i^k$) and ensures a significant gain, operating both faster and more precisely than both moving averages.

Figure 5.

Behavior of smoothing filters over a 20% rate variation at 5 counts per sample.

Figure 6.

Behavior of smoothing filters over a 20% rate variation at 500 counts per sample.

Such nonlinear smoothing algorithms, easily embedded into programmable components, have for instance been implemented in a Geiger-Müller dosimeter fixed on a wireless robot used for radiological threat detection [15]. This algorithmic building block plays a key role in the nuclear counting methods studied in the next sections, namely compensation measurements and sensor network processing.

4. Compensation measurement

In many cases, radiation monitoring requires the counting of a signal from a first radiation source within an interference signal induced by a second particle emitter, namely α/β vs. γ, n vs. γ, γ vs. γ, etc. The most efficient techniques consist in the recognition of the particle origin associated with each individual pulse event by coincidence/anti-coincidence, pulse height discrimination (PHD), or pulse shape discrimination (PSD) [16]. However, event-by-event discrimination techniques may prove unreliable in particular mixed-field configurations. Compensation methods are an alternative solution when addressing such limitations [17, 18]: their principle lies in measuring count rates ρA from a first detector A, sensitive to all particles, and comparing the result with count rates ρB from a second detector B, only significantly sensitive to background contributions (typically gamma rays). The estimation ρC of the count rate associated with particles of interest is obtained by subtracting ρB from ρA such that:

$\rho_C = \rho_A - \omega\,\rho_B$  (E38)

where ω is a correction factor taking into account the fact that detector B is not strictly equivalent to detector A in terms of response as a function of the energy and spatial localization of incident background particles. Three challenges are to be faced in compensation measurement:

  • increase in fluctuation level;

  • appearance of negative count rates without physical meaning;

  • loss of reliability (impact of energy and anisotropy of the background signal).

Values of $\rho_{A,i}\Delta t$ and $\rho_{B,i}\Delta t$ at the time ti are described by Poisson processes, as already stated earlier in this chapter. Therefore, if ω = 1, values of $\rho_{C,i}\Delta t$ are described by a Skellam process $Sk$ such that:

$\rho_{C,i}\Delta t \sim Sk\!\left(\rho_{A,i}\Delta t,\, \rho_{B,i}\Delta t\right)$  (E39)

The expectation and the variance of the random variable $\hat{\rho}_{C,i}\Delta t$ are, respectively, $E(\hat{\rho}_{C,i}\Delta t) = \rho_{A,i}\Delta t - \rho_{B,i}\Delta t$ and $\sigma^2(\hat{\rho}_{C,i}\Delta t) = \rho_{A,i}\Delta t + \rho_{B,i}\Delta t$, under the same assumption as in Section 3. The variance definition highlights an increase of the fluctuation level in comparison with single-channel measurement. It is therefore required to reduce this variance using a suitable smoothing filter, such as the CST nonlinear filter described in the previous section (cf. Eqs. (36) and (37)):

$\left(\hat{\rho}_{A,i},\, \sigma(\hat{\rho}_{A,i})\right) = \mathrm{CST}\!\left(\hat{\rho}_{A,i}\right)$  (E40)

$\left(\hat{\rho}_{B,i},\, \sigma(\hat{\rho}_{B,i})\right) = \mathrm{CST}\!\left(\hat{\rho}_{B,i}\right)$  (E41)

Reduced variances $\sigma^2(\hat{\rho}_{A,i})$ and $\sigma^2(\hat{\rho}_{B,i})$, calculated according to Eq. (37), are used to determine $\sigma^2(\hat{\rho}_{C,i})$ as:

$\sigma^2(\hat{\rho}_{C,i}) = \sigma^2(\hat{\rho}_{A,i}) + \sigma^2(\hat{\rho}_{B,i})$  (E42)

If the compensation factor ω remains constant but different from 1, Eq. (42) becomes:

$\sigma^2(\hat{\rho}_{C,i}) = \sigma^2(\hat{\rho}_{A,i}) + \omega^2\,\sigma^2(\hat{\rho}_{B,i})$  (E43)

In practice, the factor ω is not constant, due to the impact of energy and spatial distributions of incident particles. We then introduce the notations $\bar{\omega}$ and σ2(ω) for the expectation and variance of the variable ω. A resulting variance is therefore calculated by taking into account both statistical error and bias such that:

$\sigma^2(\hat{\rho}_{C,i}) = \sigma^2(\hat{\rho}_{A,i}) + \bar{\omega}^2\,\sigma^2(\hat{\rho}_{B,i}) + \hat{\rho}_{B,i}^2\,\sigma^2(\omega)$  (E44)

The estimation of $\bar{\omega}$ and σ2(ω) is complicated by the experimental dependence of these parameters. An approach is proposed in [19, 20], in which a database is built from measurements acquired in representative areas, in absence of the signal particles of interest (ρC = 0). In these conditions, compensation factors $\omega_q = \hat{\rho}_{A,q}/\hat{\rho}_{B,q}$ are obtained for each measurement point (1 ≤ q ≤ nq), allowing an empirical mean $\bar{\omega} = \frac{1}{n_q}\sum_{q=1}^{n_q}\omega_q$ and variance $\sigma^2(\omega) = \frac{1}{n_q}\sum_{q=1}^{n_q}\left(\omega_q - \bar{\omega}\right)^2$ to be estimated.

Based on the generalized variance expressed in Eq. (44), a hypothesis test is built to select positive and significant values of $\hat{\rho}_{C,i}$. Algorithm 1 presents the detection test, in which the presence of particles of interest is detected in compliance with a confidence level γ governing the test. Most of the time, ωq values can be considered to follow a Normal law, which allows us to apply an envelope coverage factor associated with a confidence level γ:

Algorithm 1:

If $\hat{\rho}_{C,i} > Q_\gamma\sqrt{2\,\sigma^2(\hat{\rho}_{C,i})}$,

Then $\hat{\rho}_{C,i} = \hat{\rho}_{C,i}$ (detection hypothesis H1 is accepted)

Else $\hat{\rho}_{C,i} = 0$ (detection hypothesis H1 is rejected)
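As an illustration, Eq. (38), the generalized variance of Eq. (44), and Algorithm 1 combine into a few lines (a sketch; the ωq database values below are hypothetical, and the function name is ours):

```python
import numpy as np

# Database step (Section 4): omega_q measured with no signal particles present
omegas = np.array([1.02, 0.97, 1.05, 0.99])     # hypothetical omega_q values
omega_bar, var_omega = omegas.mean(), omegas.var()

def compensate(rho_a, var_a, rho_b, var_b, q_gamma=1.645):
    """Background-compensated rate with significance censoring (Algorithm 1).
    Inputs are CST-smoothed channel estimates and variances (Eqs. (40)-(41))."""
    rho_c = rho_a - omega_bar * rho_b                            # Eq. (38)
    var_c = var_a + omega_bar**2 * var_b + rho_b**2 * var_omega  # Eq. (44)
    if rho_c > q_gamma * np.sqrt(2.0 * var_c):   # decision threshold
        return rho_c, var_c                      # H1 accepted
    return 0.0, var_c                            # H1 rejected: value censored
```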

Figure 7 synthetizes the principle, inputs, and outputs of the compensation technique.

Figure 7.

Principle of the compensation technique.

The method improves the reliability of compensation measurement with the use of a recorded database. Moreover, the accuracy and decision threshold $Q_\gamma\sqrt{2\,\sigma^2(\hat{\rho}_{C,i})}$ associated with the particles of interest are optimized using an adaptive filter, smoothing individual channels while suppressing all negative or non-significant values. The approach described in the present section has been successfully implemented in various applications, such as α/β contamination meters or gadolinium-based neutron detectors [20, 21].

As a perspective, it has been demonstrated that the multiplication of channels, as illustrated in Figure 8, allows the system to learn a prior distribution for the signal over a set of pixels as a function of incident energy and spatial origin of background particles. Dispatching $\hat{\rho}_{A,q}$ and $\hat{\rho}_{B,q}$ data along X > 1 dimensions induces a reduction of the detection threshold $Q_\gamma\sqrt{2\,\sigma^2(\hat{\rho}_{C,i})}$ and thus an improvement of measurement reliability. Compensation factors $\bar{\omega}_{j,k}$ and variances σ2(ωj,k) are determined for every A-type detector 1 ≤ j ≤ X and every B-type detector 1 ≤ k ≤ X. Resulting count rates ρC,i are estimated as:

$\hat{\rho}_{C,i} = \displaystyle\sum_{j=1}^{X}\left[\hat{\rho}_{A,i,j} - \sum_{k=1}^{X}\bar{\omega}_{j,k}\,\hat{\rho}_{B,i,k}\right]$  (E45)

$\sigma^2(\hat{\rho}_{C,i}) = \displaystyle\sum_{j=1}^{X}\left[\sigma^2(\hat{\rho}_{A,i,j}) + \sum_{k=1}^{X}\left(\left(\bar{\omega}_{j,k}\,\sigma(\hat{\rho}_{B,i,k})\right)^2 + \left(\hat{\rho}_{B,i,k}\,\sigma(\omega_{j,k})\right)^2\right)\right]$  (E46)

Figure 8.

Principle of the compensation technique for pixelated detectors.

5. Moving source detection

Radiation portal monitors (RPM) are implemented to detect radioactive sources, carried by a vehicle in motion, through the monitoring of a count rate measured by large-volume detectors. Two main issues arise in RPM development: correcting the shadow shielding effect observed when the vehicle is dense enough to impact the baseline of the signal, and improving the detection capability (increasing the difference between true detection and false alarm probabilities).

RPM detection strategy is based on a hypothesis test where the estimated signal $\hat{\rho}_i$ at the time ti is continuously compared to a threshold h, itself determined from the signal distribution under H0. Let θ0 be the expected background count rate without any vehicle in the environment surrounding the RPM. A decision threshold (DT) is set, following the same philosophy as presented previously (Eqs. (20)–(26)), as a function of the variance σ2(θ0) and a confidence level associated with a false detection risk α:

$DT = Q_{1-\alpha}\,\sigma(\hat{\theta}_0 - \theta_0) = Q_{1-\alpha}\sqrt{2\,\sigma^2(\theta_0)}$  (E47)

During the passage of a dense vehicle, θ0 will decrease due to gamma-ray attenuation, as the vehicle acts as a radiation shield. Such baseline alteration is noted ωθ0, where ω ∈ [0; 1] is the attenuation factor. An added count rate from a source with intrinsic rate θ1, put onboard the vehicle, will thus lead to a total signal θT = ωθ0 + θ1. If ω = 1 and θ1 > DT, the source is detected with a non-detection risk:

$\beta_{\omega=1} = \mathrm{err}\!\left(\dfrac{\theta_1 - DT}{\sigma(\theta_1 + \theta_0)}\right) = \mathrm{err}\!\left(\dfrac{\theta_1 - Q_{1-\alpha}\sqrt{2\,\sigma^2(\theta_0)}}{\sigma(\theta_1 + \theta_0)}\right)$  (E48)

If ω < 1, Eq. (48) becomes:

$\beta_{\omega<1} = \mathrm{err}\!\left(\dfrac{\theta_1 - Q_{1-\alpha}\sqrt{2\,\sigma^2(\omega\theta_0)}}{\sigma(\theta_1 + \omega\theta_0)}\right) > \beta_{\omega=1}$  (E49)

This so-called “shadow effect” induces a significant loss in detection capability (β increases), even for ω ≈ 1.

Much work has been done to restore the baseline (ω → 1), generally using a database recorded while a representative sample of empty vehicles passes through the RPM [22, 23]. An alternative method based on time series analysis has been developed to restore the baseline without any prior knowledge of the vehicle or the experimental conditions, in the hope of gaining flexibility [24]. The latter is described below.

In the first place, the minimization of DT requires the implementation of an efficient smoothing filter, minimizing the high-frequency variance σ2(θ0), and subsequently the β risk, while preserving the temporal shape of the signal of interest θ1(ti). The nonlinear CST filter (Eqs. (36) and (37)) has proven efficient for this purpose. The single-channel RPM estimates the random variable $\rho_i\Delta t \sim \mathcal{P}(\theta_T\Delta t)$ at each time ti such that:

$\left(\hat{\rho}_i,\, \sigma(\hat{\rho}_i)\right) = \mathrm{CST}\!\left(\hat{\rho}_i\right)$  (E50)

Estimations are continuously recorded into a historical memory of depth m, allowing the calculation, at each time step ti, of the filtered logarithmic derivative $\dot{\hat{\rho}}_i$ of the signal:

$\forall i \in ⟦1;\, m⟧,\quad \dot{\hat{\rho}}_i = (1-\alpha_1)\,\dfrac{\hat{\rho}_i - \hat{\rho}_{i-l}}{\hat{\rho}_i} + \alpha_1\,\dot{\hat{\rho}}_{i-1}$  (E51)

with α1 and l being, respectively, a smoothing parameter and the derivative depth.

The trend of the signal, which can be constant, decreasing, or increasing, is represented by a slope state Di with values in {−1, 0, 1} such that:

$\dot{\hat{\rho}}_i < -\alpha_2 \;\Rightarrow\; D_i = -1$  (E52)

$\left|\dot{\hat{\rho}}_i\right| \le \alpha_2 \;\Rightarrow\; D_i = 0$  (E53)

$\dot{\hat{\rho}}_i > \alpha_2 \;\Rightarrow\; D_i = 1$  (E54)

with α2 > 0 being a parameter setting the significance of the variation.
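Eqs. (51)–(54) amount to a smoothed relative derivative followed by a three-level quantizer; a sketch (our naming, with illustrative parameter values):

```python
def slope_state(rho, rho_lag, rho_dot_prev, alpha1=0.9, alpha2=0.01):
    """Filtered logarithmic derivative (Eq. (51)) and slope state
    D_i in {-1, 0, 1} (Eqs. (52)-(54))."""
    rho_dot = (1 - alpha1) * (rho - rho_lag) / rho + alpha1 * rho_dot_prev
    if rho_dot < -alpha2:
        d = -1        # significantly decreasing
    elif rho_dot > alpha2:
        d = 1         # significantly increasing
    else:
        d = 0         # constant within +/- alpha2
    return rho_dot, d
```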

The state of the signal Si is labeled by a number between 1 and 8, defined as illustrated in Figure 9. The first line describes the passage of a vehicle containing a source without shadow effect; the second line corresponds to the passage of a dense vehicle with no source; and the third one to the passage of a dense vehicle containing a radioactive source (shadow shielding).

Figure 9.

Schematic view of possible states of the system.

States Si can be determined with knowledge of Di and Si − 1 using a sequential logic algorithm detailed under the form of a state diagram in Figure 10. To solve the problem, states 3 and 8 automatically pass to state 1 after a preset watchdog time τw.

Figure 10.

State diagram of the state determination algorithm.

Knowing the state of the system, the baseline of the signal can be restored. The upper level (UL) (state 1) and the lower level (LL) (state 6) are first estimated in a recorded time series of depth ξ at time ti:

$z_{UL} = \underset{1\le k\le \xi}{\arg}\left(S_{i-k} = 1\right)$  (E55)

$z_{LL} = \underset{1\le k\le \xi}{\arg}\left(S_{i-k} = 6\right)$  (E56)

$UL = \dfrac{1}{\dim(z_{UL})}\displaystyle\sum_{k \in z_{UL}}\hat{\rho}_{i-k}$  (E57)

$LL = \dfrac{1}{\dim(z_{LL})}\displaystyle\sum_{k \in z_{LL}}\hat{\rho}_{i-k}$  (E58)

The baseline is restored to obtain corrected count rate estimations $\hat{\rho}'$ such that:

∀ k ∈ ⟦1; ξ⟧,

$S_{i-k} \in \{1, 2, 3, 4\} \;\Rightarrow\; \hat{\rho}'_{i-k} = \hat{\rho}_{i-k}$  (E59)

$S_{i-k} \in \{5, 6, 7, 8\} \;\Rightarrow\; \hat{\rho}'_{i-k} = \hat{\rho}_{i-k} + UL - LL$  (E60)
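Given the state labels Si, the restoration of Eqs. (55)–(60) reduces to averaging the two plateaus and lifting the shadowed segment (a sketch; the array layout and names are our assumptions):

```python
import numpy as np

def restore_baseline(rho_hist, state_hist):
    """Restore the shadow-attenuated baseline over a recorded window of
    depth xi, cf. Eqs. (55)-(60)."""
    rho = np.asarray(rho_hist, dtype=float)
    states = np.asarray(state_hist)
    ul = rho[states == 1].mean()              # upper level, Eqs. (55), (57)
    ll = rho[states == 6].mean()              # lower level, Eqs. (56), (58)
    corrected = rho.copy()
    shadowed = np.isin(states, [5, 6, 7, 8])  # states under attenuation
    corrected[shadowed] += ul - ll            # Eq. (60): lift the baseline
    return corrected                          # Eq. (59): other states unchanged
```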

Figure 11 illustrates the baseline restoration: the correction algorithm enables the detection of a source originally hidden by the shadow effect. A simulation study has shown a significant gain in detection probability while maintaining a stable false detection rate [24].

Figure 11.

Signal and state evolutions over the simulation of a source passing into a dense vehicle.

The conception of an RPM primarily consists in designing detection blocks with maximized sensitivity according to the application requirements and cost-effectiveness strategies. Signal processing is then to be implemented in the system in order to tune its detection capabilities. The improvement of RPM performance forms an active topic of research. It has notably been established that the spectral analysis of the signal, even for unresolved detectors, allows a gain in detection performance [25]. Another upgrade can be achieved by time series analysis techniques, especially when RPMs are deployed in a network, which allows the implementation of correlation methods [26]. Figure 12 presents the schematic of an RPM network implementing n channels and dedicated to moving source detection.

Figure 12.

Schematic of a system based on correlation detection.

The network configuration enables two complementary types of detection: the first one based on traditional temporal analysis of individual channels $H_1(\hat{\rho}_1,\ldots,\hat{\rho}_n)$, the second one based on frequency analysis, searching for a phase φ maximizing the correlation between channels $H_1^{\varphi}(\hat{\rho}_1,\ldots,\hat{\rho}_n)$. When the network is linear and the source carrier has a constant velocity, this phase corresponds to the periodic echo of the signal increase on the first channel as seen on the other channels. The difference in nature between both methods introduces a quantitative information gain, and thus a potential improvement in detection capability [27–29].

A correlation vector is calculated by scanning the product of all channels with phase φ, for a vehicle passing from detector 1 to detector n:

$\forall \varphi \in ⟦1;\, \frac{\xi-1}{n-1}⟧,\quad R_\varphi = \hat{\rho}_{1,\,i-1}\cdot\hat{\rho}_{2,\,i-\varphi+1}\cdots\hat{\rho}_{j,\,i-(j-1)\varphi+1}\cdots\hat{\rho}_{n,\,i-(n-1)\varphi+1} = \displaystyle\prod_{j=1}^{n}\hat{\rho}_{j,\,i-(j-1)\varphi+1}$  (E61)

The algorithm first determines a phase φ0 maximizing Rφ; then the significance of the associated temporal correlation is evaluated with a hypothesis test in which H0 is the null hypothesis (no echo detected) and H1 is the detection hypothesis (echo detected). Values of Rφ are compared to the mean and variance of their distribution. The mean $\bar{R}$ and the empirical variance σ2(Rφ) are calculated according to:

$\bar{R} = \dfrac{n-1}{\xi-1}\displaystyle\sum_{\varphi=1}^{\frac{\xi-1}{n-1}} R_\varphi$  (E62)

$\sigma^2(R_\varphi) = \dfrac{n-1}{\xi-1}\displaystyle\sum_{\varphi=1}^{\frac{\xi-1}{n-1}} \left(R_\varphi - \bar{R}\right)^2$  (E63)

The detection test reads:

Algorithm 2:

If $R_{\varphi_0} > \bar{R} + Q_{1-\alpha}\sqrt{2\,\sigma^2(R_\varphi)}$,

Then H1 is accepted

Else H0 is accepted
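A sketch of Eqs. (61)–(63) and Algorithm 2 for an n-channel memory of depth ξ (the (n, ξ) array layout and names are our assumptions; at least two detectors are assumed):

```python
import numpy as np

def correlation_test(memory, q_alpha=1.645):
    """`memory`: (n, xi) array of smoothed rates, one row per detector.
    Build R_phi (Eq. (61)), then test its maximum against the empirical
    distribution (Eqs. (62)-(63), Algorithm 2). Assumes n >= 2."""
    n, xi = memory.shape
    n_phases = (xi - 1) // (n - 1)
    r = np.array([
        np.prod([memory[j, -1 - j * phi] for j in range(n)])  # channel product
        for phi in range(1, n_phases + 1)
    ])
    r_bar, r_var = r.mean(), r.var()          # Eqs. (62)-(63)
    phi0 = int(np.argmax(r)) + 1              # phase maximizing the correlation
    detected = r[phi0 - 1] > r_bar + q_alpha * np.sqrt(2.0 * r_var)
    return detected, phi0
```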

The use of the empirical variance σ2(Rφ) ensures a significant gain in detection capability under challenging signal-to-noise ratios [27, 28]. However, an algorithmic refinement is achieved with the introduction of a Normal law prior on count rate distributions [29]. Thus, a modified variance σ2(Rφ) is obtained from the estimation of the individual variances σ2(ρj,i) provided by the nonlinear filter (cf. Eq. (37)) for every detector j and memory slot i. Its calculation is presented in the following recursive formula:

$\forall k \in ⟦2;\, n⟧$ and $\forall \varphi \in ⟦1;\, \frac{\xi-1}{n-1}⟧$,

$\sigma^2(R_\varphi^k) = \sigma^2(R_\varphi^{k-1})\left(\sigma^2(\hat{\rho}_{k,\,i-(k-1)\varphi+1}) + \hat{\rho}_{k,\,i-(k-1)\varphi+1}^2\right) + \sigma^2(\hat{\rho}_{k,\,i-(k-1)\varphi+1})\left(R_\varphi^{k-1}\right)^2$  (E64)

with

$R_\varphi^1 = \hat{\rho}_{1,\,i-1}$  (E65)

$\sigma^2(R_\varphi^1) = \sigma^2(\hat{\rho}_{1,\,i-1})$  (E66)

The detection algorithm mixes the detection according to each individual channel with the detection using the correlation factor. In both cases, a decision threshold (DT) is calculated as a function of a false detection risk α. The decision threshold DTj,φ associated with the channel j and the phase φ reads:

$\forall j \in ⟦1;\, n⟧$ and $\forall \varphi \in ⟦1;\, \frac{\xi-1}{n-1}⟧$,

$DT_{j,\varphi} = Q_{1-\alpha}\sqrt{2\,\sigma^2(\hat{\rho}_{j,\,i-(j-1)\varphi+1})}$  (E67)

And let $DT_{R_\varphi}$ be the decision threshold associated with the correlation factor such that:

$\forall \varphi \in ⟦1;\, \frac{\xi-1}{n-1}⟧,\quad DT_{R_\varphi} = Q_{1-\alpha}\sqrt{\dfrac{2\,\sigma^2(R_\varphi)}{n}}$  (E68)

Algorithm 3 presents the mechanism of the cumulative detection.

Algorithm 3:

If $\forall j \in ⟦1; n⟧$ and $\forall \varphi \in ⟦1; \frac{\xi-1}{n-1}⟧$, $DT_{j,\varphi} - \hat{\rho}_{j,\,i-(j-1)\varphi+1} \ge 0$,

And if $\forall \varphi \in ⟦1; \frac{\xi-1}{n-1}⟧$, $DT_{R_\varphi} - R_\varphi \ge 0$,

Then the null hypothesis H0 is accepted and the detection hypothesis H1 is rejected,

Else if $\exists \varphi \in ⟦1; \frac{\xi-1}{n-1}⟧$ and $\exists j \in ⟦1; n⟧$, $DT_{j,\varphi} - \hat{\rho}_{j,\,i-(j-1)\varphi+1} < 0$,

Then the detection hypothesis H1 is accepted and the hypothesis H0 is rejected,

Or if $\exists \varphi \in ⟦1; \frac{\xi-1}{n-1}⟧$, $DT_{R_\varphi} - R_\varphi < 0$,

Then the detection hypothesis H1 is accepted and the hypothesis H0 is rejected,

And the velocity of the source is equal to $\frac{L}{\varphi\Delta t}$, where L is the distance between detectors.
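A condensed sketch of Algorithm 3 with the thresholds of Eqs. (67) and (68) (our naming; the inputs are assumed to come from the CST filter and from the per-phase correlation quantities of Eqs. (61) and (64)):

```python
import numpy as np

def cumulative_detection(memory, sigma_memory, r, r_var, L, dt, q_alpha=1.645):
    """Accept H1 if any channel exceeds its own threshold (Eq. (67)) or the
    correlation factor exceeds its threshold (Eq. (68)), cf. Algorithm 3.
    `r` and `r_var` are per-phase arrays; returns (detected, velocity)."""
    n, xi = memory.shape
    for phi in range(1, (xi - 1) // (n - 1) + 1):
        for j in range(n):
            idx = -1 - j * phi
            dt_j = q_alpha * np.sqrt(2.0) * sigma_memory[j, idx]   # Eq. (67)
            if memory[j, idx] > dt_j:
                return True, None              # temporal detection on channel j
        dt_r = q_alpha * np.sqrt(2.0 * r_var[phi - 1] / n)         # Eq. (68)
        if r[phi - 1] > dt_r:
            return True, L / (phi * dt)        # echo detection; source velocity
    return False, None                         # H0 accepted
```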

It has been proven in [27–29] that the implementation of correlation-based detection permits a largely significant added value in terms of detection capability: the true detection rate is increased while a very low false alarm rate is maintained. Figure 13 presents a system realized by the CEA which implements the correlation method [30].

Figure 13.

Photograph of the RPM prototype (Katrina) developed by the CEA in the framework of the SECUR-ED project funded by the European Commission [30].

All of these algorithms will be implemented in a dedicated DSP card [3], and the compliance of the RPM system with the ANSI 42-35 standard will be tested in due course [31].

6. Conclusion

Different count rate processing methods have been presented in this chapter: an adaptive smoother, a background discrimination method, and two algorithms improving the detection of moving sources. In these algorithms, frequentist inferences are performed on the basis of measured data. Such approaches are well suited for real-time processing, allowing decisions to be taken within very few iterations, in contrast with Bayesian inferences, which are more suited to post-processing analyses.

The nonlinear smoother proves to be a key building block in radiation monitors, delivering a fine estimation of the count rate expectation with a minimized associated variance. Both expectation and variance estimates are used to apply hypothesis tests addressing many problems in radiation monitoring, such as the compensation and RPM network applications developed herein.

Notes

  • Current ADCs are available with trade-offs between resolution and sampling frequency, such as 16 bit / 100 MS/s and 8 bit / 1 GS/s. CAEN Electronic Instrumentation, 724 Digitizer Family, CAEN data sheet, 2015.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
