We present computational methods for attribute estimation with photon-counting and photon-processing detectors. We define a photon-processing detector as any imaging device that uses maximum-likelihood methods to estimate photon attributes, such as position, direction of propagation and energy. Estimated attributes are then stored at full precision in the memory of a computer. Accurate estimation of a large number of attributes for each collected photon requires considerable computational power. We show how mass-produced graphics processing units (GPUs) are viable parallel computing solutions capable of meeting the computing needs of photon-counting and photon-processing detectors, while keeping overall costs affordable.
- photon-processing detectors
- maximum-likelihood estimation
- parallel processing
- gamma-ray photons
- charged particles
In broad terms, detectors used in imaging can be classified into a small number of categories depending on their working principles. These categories include integrating detectors, pixelated photon-counting detectors as well as a new class of detectors, which we refer to as photon-processing detectors.
An integrating detector measures charges accumulated at each pixel location. These charges are induced by light impinging on the detector and are proportional to the average number of photons incident on each pixel. Dedicated circuitry reads out these charges and converts them to numbers roughly proportional to the charge accumulated at each pixel.
A photon-counting detector works by counting the number of photoelectric interactions observed during the exposure time. Count registers associated with each pixel are read at the end of the exposure time, thus making the output of photon-counting detectors a collection of pixel counts.
A photon-processing detector may use any existing detector technology to measure several “attributes” for each photon entering the detector. Attributes include the photon position, its direction of propagation and the amount of energy it deposited in the detector. This is accomplished by reengineering the detector design so that additional information can be extracted from raw unprocessed data. Important aspects of any photon-processing detector include the algorithm used to estimate photon attributes from raw data as well as how these attributes are represented and stored in the memory of a computer.
Particle-processing detectors are a variation on photon-processing detectors and are designed to detect charged particles, such as alpha and beta particles. Particle-processing detectors enable a new imaging technique—called charged-particle emission tomography (CPET)—which attains high-resolution 3D imaging in living organisms so long as accurate estimation of parameters for each charged particle is available.
This chapter is organized as follows. Section 2 provides an overview of detectors suitable for photon counting and photon processing. Maximum-likelihood estimation (MLE) and its properties are discussed in some detail in Section 3. The next section—Section 4—introduces graphics processing units (GPUs) and the compute unified device architecture (CUDA) programming model. Section 5 presents algorithms for photon-counting detectors, while Section 6 discusses photon- and particle-processing detectors and presents fast GPU-based algorithms for maximum-likelihood estimation of photon parameters. Finally, Section 7 summarizes this chapter and discusses possible applications of photon-processing detectors.
A portion of this chapter has been adapted from Y. Ding, “Charged-Particle Emission Tomography.”
2. Detectors for photon counting and photon processing
2.1. Gamma-ray cameras
Gamma-ray cameras are used in nuclear medicine to image gamma-ray photons emitted by radioactive elements. The first gamma-ray camera was developed by Hal Oscar Anger in 1957. Anger’s original design, often referred to as an “Anger camera,” is still widely used today. A diagram of an Anger camera is provided in Figure 1.
An Anger camera includes a scintillation crystal, a light guide and an array of photomultiplier tubes (PMTs). When a gamma-ray photon interacts with the scintillation crystal, a burst of visible-light photons is produced. Some of these photons propagate through the crystal and the light guide and enter one or more PMTs. When a photon enters a PMT and interacts with it, a measurable electrical signal in the form of a narrow current pulse is produced. This pulse is transmitted to amplifying electronics, so that it can be analyzed. A transimpedance amplifier converts the current pulse to voltage. A shaping amplifier further amplifies the signal and reshapes it, making it broader and smoother. A broad signal is easier to sample via an analog-to-digital converter. The output of the analog-to-digital converter can be scaled to obtain an integer number representing the number of photons entering the PMT. Digitized samples collected from each of the PMTs are then scanned for events. Detected events are stored in the memory of a computer in the form of scaled PMT samples.
Detailed analysis of the physical processes that take place inside the scintillation crystal and each PMT allows us to derive a statistical model for the scaled PMT samples produced by a gamma-ray camera with K PMTs. Because of noise, the PMT samples g_1, …, g_K can be thought of as random variables. If we normalize each PMT signal by the gain of the PMTs and we ignore the noise in the gain, the random variables g_1, …, g_K can be shown to be conditionally independent and to follow Poisson statistics with means ḡ_1(R, E), …, ḡ_K(R, E), respectively. Thus, we can write:

pr(g_1, …, g_K | R, E) = ∏_{k=1}^{K} [ḡ_k(R, E)]^{g_k} exp(−ḡ_k(R, E)) / g_k!   (1)
Functions ḡ_1(R, E), …, ḡ_K(R, E) are called mean detector response functions (MDRFs), and they describe the mean detector response upon detection of a gamma-ray photon with energy E at location R.
2.2. Semiconductor detectors for charged particles
Semiconductor pixelated detectors can be used to measure position and energy of charged particles, including alpha and beta particles. One possible detector configuration consists of a layer of semiconductor material (which we refer to as the “active volume”), a set of anodes placed on one side of the detector’s active volume, and some data-processing circuitry (such as application-specific integrated circuits or ASICs) that measures the anode signals and converts them into digital signals.
When a charged particle enters the detector’s active volume and deposits some of its energy, electron-hole pairs are produced along the particle’s track. The electrons and holes drift in opposite directions under an electric field applied throughout the detector’s active volume. This process is accompanied by the production of electrical charges, which are collected by electrodes on one side of the detector’s active volume. These charges are then converted to digital signals (e.g., number of electron-hole pairs produced) and are either sent to a computer or accumulated in count registers to form an image.
An example of a semiconductor pixelated detector for alpha and beta particles is the Medipix2 sensor (Figure 2) developed at CERN. The Medipix2 sensor features an array of 256 × 256 square pixels of 55 μm size. The counter in each pixel of a Medipix2 sensor can record the duration of an event that is above a threshold, from which the energy collected at each pixel and the particle’s residual energy can be measured.
A statistical model for the data produced by a semiconductor detector for charged particles (such as the Medipix2 sensor) must take into account the so-called charge-sharing effect as well as many variables, including the particle’s position R and energy E, its angle of incidence (denoted by the unit vector ŝ) and the bias voltage applied across the semiconductor. Some recent results for the Medipix2 sensor have been reported in [1, 7]. When a highly energetic particle enters the detector, a large number of charges will be collected at its electrodes. In such a case, the statistics of the pixel outputs g_1, …, g_M (where M denotes the number of detector pixels) conditioned on R, E and ŝ approach Gaussian statistics, and we can write:

pr(g_1, …, g_M | R, E, ŝ) = ∏_{m=1}^{M} (2π σ_m²)^{−1/2} exp(−(g_m − ḡ_m)² / (2σ_m²))   (2)
in which ḡ_m is the mean of the m-th pixel output and σ_m is the standard deviation of g_m.
2.3. Intensified charge-coupled detectors
A charge-coupled detector (CCD) is a semiconductor device that produces a pixelated image by converting incoming photons into electric charges, which are then stored at each pixel location. These charges are induced by photons with energy exceeding the semiconductor bandgap. The most general form for the mean output ḡ_m (calculated by imaging the same object over and over again) is:

ḡ_m = ∫_{t_0}^{t_1} dt ∫_S d²r ∫_0^∞ dE ∫_{2π} dŝ η_m(r, E, ŝ) L(r, E, t, ŝ)   (3)
in which m varies from 1 to the total number of pixels M, and η_m(r, E, ŝ) is the quantum efficiency at pixel m for a photon hitting point r on the detector face with energy E along direction ŝ. The function L(r, E, t, ŝ) is the spectral photon radiance at point r for photon energy E and time t along direction ŝ [8, 9]. In Eq. (3), the spatial extent of the detector was denoted as “S” and “2π” means integration over all the possible directions incident on the detector. Finally, integration over the time variable starts at time t_0 and ends at time t_1.
An intensified charge-coupled detector (ICCD) uses an image intensifier (such as a microchannel plate (MCP)) to amplify scintillation light before imaging it onto a CCD sensor. The image intensifier provides optical gain (in the range from 10^5 to 10^6 or more) so that low-energy optical photons (emitted, e.g., upon interaction of a charged particle with a scintillator) can be imaged with practically any CCD sensor. Lenses, usually placed between the image intensifier and the CCD sensor, reimage the image intensifier’s output window onto the CCD sensor. Examples of intensified charge-coupled detectors include the iQID sensor developed at the University of Arizona by Brian W. Miller.
A proper statistical model for an intensified charge-coupled detector must consider both the statistics of the output produced by the image intensifier and the statistics of the data produced by the CCD sensor. To find a model for the image intensifier, we begin by noticing that each point in the CCD sensor can be propagated back through the lenses all the way to the entrance face of the image intensifier. Therefore, we can consider the number n_m of photons impinging on the image intensifier at locations that fall within pixel m on the CCD sensor. Under broad conditions, we can show that n_m obeys Poisson statistics, and we denote the mean of n_m as n̄_m. For large enough n̄_m, the statistics of n_m are approximately Gaussian.
A general expression that relates n̄_m to the mean sensor output ḡ_m takes the form:

ḡ_m = Ḡ n̄_m   (4)
in which Ḡ denotes the mean of the image intensifier amplification (gain) G. The variance, σ_m², of g_m is related to n̄_m and the statistics of G as follows:

σ_m² = Ḡ² n̄_m + σ_G² n̄_m + σ_read²   (5)
in which σ_G² is the variance of the amplification G and σ_read² denotes the variance of the sensor’s readout noise. If the blur introduced by the image intensifier and the optics located between the image intensifier and the CCD sensor is smaller than the size of a sensor pixel, then the output g_m is independent of g_{m′} for any m′ ≠ m. If we further assume that the amplification and the readout noise also obey Gaussian statistics, we can write [1, 7]:

pr(g_1, …, g_M | n̄_1, …, n̄_M) = ∏_{m=1}^{M} (2π σ_m²)^{−1/2} exp(−(g_m − Ḡ n̄_m)² / (2σ_m²))   (6)
3. Maximum-likelihood estimation
Maximum-likelihood estimation (MLE) is a statistical method that uses observed noisy data to estimate model parameters. For a good historical treatment of the concept of maximum-likelihood estimation, the interested reader can consult the literature. Given a set of observed data and an underlying model (which depends on some unknown parameters), MLE calculates the values of the parameters that best explain the observed data. The observed data that are used for maximum-likelihood estimation are realizations of random variables. Thus, the parameters we estimate from these data are realizations of random variables as well.
Maximum-likelihood estimation can, in principle, be used with all the detectors discussed above. For example, in Section 5 we show how maximum-likelihood estimation is used to estimate the position of interaction from PMT data, and we discuss an efficient parallel algorithm for it. Moreover, as we argue in Section 6, maximum-likelihood estimation is the estimation method of choice for photon-processing detectors.
3.1. Mathematical description
Let us denote the parameters we want to estimate as the vector θ. The model itself is characterized by a probability density function (PDF), denoted as pr(x | θ). We use the vector x to refer to the complete data, while we denote the incomplete data as g [3, 8]. We stress that we do not directly observe x, but only indirectly through the vector g. Vectors x and g are statistically related via the PDF pr(g | x). Probability density functions pr(g | x) and pr(x | θ) allow us to write

pr(g | θ) = ∫ pr(g | x) pr(x | θ) dx   (7)
in which pr(g | θ) is the PDF of the observed data g given the parameter vector θ. Eq. (7) above takes into account two separate “mechanisms” that, when concatenated, produce a sample g from the value of θ. The first mechanism produces the complete data x according to pr(x | θ), while the second mechanism samples pr(g | x) to produce the incomplete data g.
MLE solves the estimation problem by finding the vector θ̂ that maximizes the likelihood for the observed data g. Mathematically, this concept is formalized as:

θ̂(g) = argmax_θ L(θ; g)   (8)
in which the “hat” symbol denotes an estimated quantity, and we have defined the likelihood as:

L(θ; g) = pr(g | θ)   (9)
The likelihood L(θ; g) has to be interpreted as a function of θ for fixed (measured) g. In Eq. (8), we used “argmax” to denote the value of θ that maximizes the likelihood. Because g is the result of a noisy measurement, the actual value of g in Eq. (8) will change if the measurement is repeated. In other words, g is a random quantity, and this implies that the ML estimate θ̂(g) is random as well.
Alternatively, θ̂(g) can be calculated by rewriting Eq. (8) as

θ̂(g) = argmax_θ λ(θ; g)   (10)
in which we have introduced the log-likelihood

λ(θ; g) = ln L(θ; g)   (11)
Because the logarithm is a strictly monotonic function, the expression in Eq. (10) is equivalent to the one in Eq. (8). Often, the log-likelihood λ(θ; g) is numerically easier to calculate with a computer than the likelihood L(θ; g).
3.2. Properties of ML estimates
Maximum-likelihood estimates have many desirable properties. Some of these properties are summarized below.
Asymptotic efficiency. If g represents a set of N repeated independent and identically distributed samples, asymptotic efficiency of MLE implies that, as N increases, the variance of each component of θ̂(g) converges to the smallest possible value, which is given by the Cramér-Rao lower bound [12, 13].
Functional invariance. Assume the ML estimate of a parameter vector θ is θ̂_ML and consider a function f(θ) of θ. We can identify f(θ) with a new parameter vector φ = f(θ), and we can consider a maximum-likelihood estimate φ̂_ML of φ. Then

φ̂_ML = f(θ̂_ML)   (12)
This equation shows that the property of being a maximum-likelihood estimate is preserved if we consider a function of the maximum-likelihood estimate itself.
Sufficiency. Any quantity calculated from the samples g and used to estimate an unknown parameter vector θ is said to be a sufficient statistic for θ if no other quantity that can be calculated from the same samples would provide additional information regarding the value of the parameter vector θ. In simple terms, a sufficient statistic is a function of the samples that “compresses” them without losing any information about θ. Sufficiency for a maximum-likelihood estimate can be stated by saying that θ̂_ML is a function of a sufficient statistic for θ.
Consistency. Consistency of an estimator regards the behavior of the estimator as the sample size increases. Consider the case in which g is a set of N repeated independent and identically distributed samples. It is possible to show that, when the range of the elements of g does not depend on the parameter vector θ, there exists a maximum-likelihood estimate that, as N increases, converges in probability to the true value of the parameter vector. A consistent maximum-likelihood estimate is unique.
Asymptotic normality. Because the ML estimate θ̂(g) of θ is a random variable, it makes sense to consider its probability density function. As the sample size N increases, the probability density function of θ̂(g) converges to the probability density function of a normally distributed random variable with mean equal to the true value of the parameter we want to estimate and covariance matrix equal to the inverse of the Fisher information matrix.
4. Graphics processing units and CUDA
Driven by the insatiable demand for real-time rendering in gaming and entertainment, graphics processing units (GPUs) have become highly parallel devices capable of running general-purpose code. Newer products that offer an ever-increasing amount of computational power are constantly introduced in the market at very competitive prices.
Programming languages have been developed to harness the parallel capabilities of GPU devices. The most widespread language for GPU programming is called compute unified device architecture (CUDA), which was introduced in 2006 by NVIDIA. Due to its similarity to C, CUDA has rapidly become the de facto programming language for GPUs.
4.1. The CUDA programming model
In CUDA, the GPU is usually referred to as the device and the computer that hosts it is referred to as the host. Many GPU devices can be installed in the same host, and it is not uncommon to have systems with more than one GPU device. Each GPU device has its own memory, which we refer to as device memory. In CUDA, it is also common to refer to the memory installed in the host as host memory. CUDA provides library functions to allocate blocks of memory in device memory and to transfer blocks of data from host memory to device memory and vice versa. As shown in Figure 3, a typical GPU device includes some GPU cores (ranging in number from a few hundreds to a few thousands) and some control logic.
Programmers access the parallel capabilities of a CUDA-enabled device by writing kernels, which are pieces of code that look very similar to regular C functions. In CUDA, a kernel is run in parallel on many different GPU cores. A kernel in execution is referred to as a thread. Threads are grouped into blocks, which can be 1D, 2D or 3D, and blocks are grouped into a grid. Grids can be 1D, 2D or 3D. The size and dimensionality of blocks and grids are decided by the programmer via an execution configuration, which is also used to call a kernel and instruct the GPU hardware to execute threads.
In a GPU device, thread scheduling is extremely efficient; it is performed by the hardware and without the intervention of the programmer. To improve performance, the hardware also suspends execution for threads that are waiting for completion of memory transfers between device memory and GPU registers. When that happens, the hardware selects for execution threads that already have data to be processed. The programmer is typically unaware of which threads are running at any given time, or of which kernel instruction is being executed by a specific thread. In other words, the programmer cannot rely on any particular scheduling order of GPU threads. There are, however, situations in which it is necessary to ensure that a block of threads has reached a certain instruction in a kernel before all the threads in the block can continue. In CUDA, this is accomplished via synchronization barriers.
Synchronization barriers are often used when threads have to exchange data with each other via shared variables. Without any synchronization mechanism, a thread will not be able to know if the content of a shared variable has already been written by a cooperating thread. Synchronization barriers solve this problem by suspending thread execution until all the threads in the same block have reached a synchronization barrier. In CUDA, synchronization barriers are allowed only among the threads in the same block.
GPU devices are equipped with different memory spaces. This includes global memory (which is used to share data between the host and the device) as well as shared memory. While global memory is rather slow and physically separated from the GPU cores, shared memory is much faster and it is built on the same chip as the GPU cores. Threads use shared memory to efficiently share data among them.
Another type of memory space available in a GPU device is texture memory. As the name suggests, texture memory has been designed to speed up and facilitate 3D rendering in computer games and computer-generated scenes. This is the reason why texture memory supports unique features including hardware-based on-the-fly interpolation of texture data.
4.2. Workflow of a CUDA application
The basic steps that are needed to execute a kernel are summarized in Figure 4.
In a typical CUDA application, one or more blocks of device memory are allocated by the host via the cudaMalloc(…) CUDA library function. The host then copies input data from host memory to device memory via one or more cudaMemcpy(…) function calls. Kernel execution is started with a call of the form my_kernel<<<N, M>>>(…), in which my_kernel is the name of the kernel, N is the grid size and M is the block size. Parameters, such as pointers to device memory, are passed to the kernel as arguments enclosed in parentheses. Once all the threads have finished executing, control returns to the CPU. Results can be copied from device memory to host memory via one or more calls to cudaMemcpy(…). Finally, device memory that was previously allocated is released via the cudaFree(…) call.
The CUDA environment automatically defines read-only built-in variables that can only be used in a kernel. These variables include blockIdx, blockDim and threadIdx. The variable threadIdx enumerates threads within each block in ascending order and starting from 0. Similarly, blockIdx enumerates blocks within the grid. The size of each block is contained in the variable blockDim. Built-in variables are used by the programmer to calculate which element(s) of the input array(s) a thread has to work on, or where in device memory a result has to be stored.
5. Algorithms for photon-counting detectors
To make our discussion more concrete, we begin this section by considering a GPU algorithm for maximum-likelihood estimation (MLE) of position of interaction of gamma-ray photons in an Anger camera. We then comment on ways to adapt our algorithm to other cases, including photon-counting and photon-processing detectors (Section 6).
We showed in Section 2.1 that the digitized PMT signals g_1, …, g_K obey Poisson statistics, and we denoted the means of g_1, …, g_K as ḡ_1(R, E), …, ḡ_K(R, E), respectively. If the photon energy E is known, the likelihood for the estimation of the position R under the assumption of Poisson noise is written as:

L(R; g) = ∏_{k=1}^{K} [ḡ_k(R, E)]^{g_k} exp(−ḡ_k(R, E)) / g_k!   (13)
Functions ḡ_1(R, E), …, ḡ_K(R, E) are called mean detector response functions (MDRFs), and they can be either measured, derived analytically or estimated via simulation codes. Using Eq. (13), an ML estimate of R can be found as:

R̂ = argmax_R L(R; g)   (14)
Equivalently, we can consider the logarithm of L(R; g) in the maximization step and write:

R̂ = argmax_R Σ_{k=1}^{K} [g_k ln ḡ_k(R, E) − ḡ_k(R, E)]   (15)
in which we omitted the term −Σ_{k=1}^{K} ln(g_k!) as it does not depend on R and, therefore, will not affect the estimate R̂.
The algorithm we present here uses the fact that, for fixed g and E, the log-likelihood is a smooth function of R. Hence, the maximum-likelihood estimate can be searched for in an iterative fashion by first evaluating the log-likelihood over a coarse n-by-n grid of points that uniformly spans the whole detector space. The point of the grid that maximizes the log-likelihood is used in the next iteration as the center of a new grid, smaller than the previous one by a factor f. This process is repeated a fixed number of times. We refer to this algorithm as the contracting grid algorithm [18, 19].
Figure 5 shows pseudocode for a possible GPU implementation of the contracting grid algorithm. We used superscripts to make it clear in which memory space a given variable is stored. Variables with no superscript denote either numerical constants (such as the contracting factor f) or local variables, typically stored in GPU registers.
The algorithm of Figure 5 assumes that an array of PMT sample vectors is available. These data are stored in device memory, and in our GPU implementation we decided to use a 1D grid with one block per PMT sample vector and 2D blocks whose size matches that of the contracting grid. This thread hierarchy follows naturally from the data we have to process and how we process them. In fact, the block index is used to index one of the PMT sample vectors, while the 2D thread index is used to identify a point of the contracting grid.
Our GPU implementation uses shared memory either to store data that are used multiple times during thread execution (this would be the case, e.g., of a PMT data vector) or to share common variables among all the threads in the same block. Each thread in a block calculates the likelihood for one of the points in the contracting grid and shares the value of the likelihood among all the threads in the same block.
MDRF data (previously estimated via simulation codes) are stored in a 2D layered texture and used during the calculation of the log-likelihood. MDRF data are transparently interpolated by the hardware during texture fetching. Moreover, we set texture boundary conditions so that, should the point R fall outside the detector’s entrance face, the MDRFs would evaluate to 0. Physically, this can be interpreted as no PMT signals being produced for a gamma-ray “interaction” outside the detector’s entrance face.
Besides code speed and clarity, a layered texture makes our code extremely flexible. By changing a few parameters, it is possible to change the size of the contracting grid, the number of iterations or the contracting factor to find the desired trade-off between speed and estimation accuracy.
5.1. Comments and applications to photon counting
The algorithm we discussed above was specifically designed for gamma-ray data, and it uses calibration data in the form of mean detector response functions (MDRFs). The output of the algorithm is a list of estimated positions R̂_1, R̂_2, …. This list can be fed directly to an algorithm for list-mode image reconstruction. Implementation details and results have been reported in the literature. Common practice, however, is to bin the list-mode data and count the number of points that fall within each bin. One drawback of this step is that it introduces some error, as all the points within each bin are represented with a single point location.
The algorithm we presented in Figure 5 is one example of a contracting grid algorithm for maximum-likelihood estimation. The main assumption we made was that the likelihood (or its logarithm) is a smooth function of θ, the vector of parameters we want to estimate. This is true for many estimation problems. Therefore, the algorithm of Figure 5 provides a general pattern for the implementation of maximum-likelihood estimation on a GPU device.
6. Photon-processing detectors and algorithms
For each photon-absorption event in a detector, there are many parameters we can consider. These parameters include photon position with respect to a plane or a reference point, direction of propagation and energy the photon deposited in the detector. Depending on the application (e.g., single-photon emission computed tomography for 4D angiography or coincidence detection in positron emission tomography), we might also need to consider the time the photon impinged on the detector. Finally, some imaging techniques (such as two-photon quantum imaging) do require measurements of quantum mechanical parameters, one example being quantum spin.
6.1. Mathematical description
We refer to a set of photon parameters as an attribute vector, and we denote it as a. Hence, depending on the application, an attribute vector might have five or more components. We denote the number of components of a as q. Because of noise, it is not possible to estimate exactly the components of a, and we use the notation â to denote an estimated attribute vector.
We define a photon-processing detector as any imaging device that:
uses a gain mechanism (such as an image intensifier) to obtain multiple measurements (e.g., multiple pixel values) for each absorbed photon;
uses these measurements to perform maximum-likelihood estimation of the photon attribute vector a_j, for j = 1, …, J, where J is the number of absorbed photons;
stores the estimated attribute vectors â_1, …, â_J at full precision as a list, without performing any binning.
Photon-processing detectors are fundamentally different from photon-counting detectors. While photon-counting detectors only consider photon position and record the number of photons (or charged particles) that fall within each bin over a predetermined amount of time, photon-processing detectors use maximum-likelihood estimation to estimate a wide range of attributes and retain all the estimated information at full precision as the list â_1, …, â_J.
The full information from a photon-processing detector is retained if we simply store the estimated attributes for each of the J photons as the list â_1, …, â_J, but an equivalent construction as a random point process in attribute space offers new theoretical insights. From â_1, …, â_J, we introduce this point process as [3, 9]

u(a) = Σ_{j=1}^{J} δ(a − â_j)   (16)
where δ is the q-dimensional Dirac delta function. The mean ū(a) of the point process u(a) is obtained by averaging over the statistics of each of the attribute vectors â_j and then over J itself for a given object f. This calculation gives a function of a for fixed f. This function can be regarded as a vector in the Hilbert space L_2(R^q), which is the vector space of square-integrable functions of q real variables. As shown in [22, 23], we can introduce the linear operator L that maps the object f (belonging to an infinite-dimensional Hilbert space) to ū. In symbols,

ū = L f   (17)
A similar expression, but for photon-counting detectors, is:

ḡ = H f   (18)
in which the vector ḡ is the mean over many realizations of the data vector g. Vector ḡ belongs to the Euclidean vector space R^M, which is the space of all M-dimensional vectors. We refer to H as a continuous-to-discrete operator as it maps the function f of continuous variables to a discrete vector with M components. On the other hand, L is a continuous-to-continuous operator as it maps f to the function ū of the continuous variable a.
The key difference between H and L is that H must necessarily have a nontrivial null space, as it maps vectors in an infinite-dimensional Hilbert space to vectors in a finite-dimensional vector space. This means that, for any imaging system that produces photon-counting data, there exist nonzero objects that, on average, do not produce any data. Equivalently, we can say that there exist two objects f_1 and f_2 with f_1 ≠ f_2 for which H f_1 = H f_2. A continuous-to-continuous operator (such as the operator L defined above) maps an object in an infinite-dimensional Hilbert space to another infinite-dimensional Hilbert space. Therefore, the same dimensionality analysis we considered for H does not apply to L. In fact, L might have a smaller null space than H for the same imaging system.
6.2. Relationship to radiometry and the Boltzmann transport equation
The word “radiometry” refers to a set of techniques used in optics to describe and calculate the distribution of light. An important concept used in radiometry is that of radiance, denoted as L(r, ŝ), which is a function that describes the radiant flux in an optical system as a function of the three-dimensional spatial position r and direction ŝ. Since the radiant flux is measured in watts, the units of L(r, ŝ) are watts per square meter per steradian, or W/(m² sr). From the radiance, other important radiometric quantities can be calculated. These include the irradiance (power per unit area), radiant intensity (power per unit solid angle) and radiant flux (power).
Spectral dependence can be introduced in the basic definition of radiance by considering the radiance per unit wavelength λ, which we denote as L_λ. The units of L_λ are W/(m² sr nm), provided that the units of wavelength are nanometers (nm). Spectral radiance can also be measured in photon units by first expressing wavelength in terms of energy (E = hc/λ, in which h is Planck’s constant and c is the speed of light) and then by dividing by the energy of a photon. We denote this new quantity as L_photon, and its units are (photons/s)/(m² sr). Finally, we can consider a time-dependent spectral photon radiance, and we denote this function as L(r, E, t, ŝ).
In radiometry, the Boltzmann transport equation (BTE) allows us to calculate L(r, E, t, ŝ) at any point inside an optical system by taking into account absorption, emission, scattering and propagation of light. In its most general form, the BTE is written as:

(1/c_m) ∂L/∂t = −ŝ · ∇L − μ_tot L + K{L} + Ξ   (19)
where $c_m(\mathbf{r})$ is the speed of light in the medium, $\mu_{\rm tot}(\mathbf{r}, E)$ is the total attenuation coefficient (with contributions from both absorption and scattering processes), $\mathcal{K}$ is an integral operator describing the angular and energy dependence of the scattering, and $\Xi(\mathbf{r}, \hat{\mathbf{s}}, E, t)$ describes any light source. In general, the function $\Xi$ depends on $\mathbf{r}$, $\hat{\mathbf{s}}$, $E$ and $t$. If the light source is isotropic (i.e., it does not depend on $\hat{\mathbf{s}}$) and independent of time, we can write
$$\Xi(\mathbf{r}, E) = \frac{f(\mathbf{r}, E)}{4\pi},$$
where the term $4\pi$ (units: sr) accounts for integration of a solid angle over a sphere. Under these hypotheses, a steady-state solution to Eq. (20) is found by setting the partial derivative $\partial w / \partial t$ to zero. The result is
$$f(\mathbf{r}, E) = 4\pi \left[ \hat{\mathbf{s}} \cdot \nabla w + \mu_{\rm tot}(\mathbf{r}, E)\, w - \mathcal{K}\{w\} \right],$$
which can be further rewritten in operator form as
$$f = \mathcal{B}\{w\}, \qquad \mathcal{B} = 4\pi \left( \hat{\mathbf{s}} \cdot \nabla + \mu_{\rm tot} - \mathcal{K} \right),$$
which describes a practical way to obtain the function $f(\mathbf{r}, E)$ from knowledge of the radiance function $w$ inside an optical system and the Boltzmann operator $\mathcal{B}$.
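As an illustration of this idea, the toy sketch below works in one dimension with a single propagation direction and no scattering (so the scattering term drops out): it integrates the steady-state transport equation for a hypothetical source and then recovers that source by applying the streaming and attenuation terms to the computed radiance. All values are made up for the example.

```python
import numpy as np

# One-dimensional toy version of the steady-state transport equation
# with a single direction of propagation (along x) and no scattering:
#     dw/dx + mu * w(x) = f(x) / (4 * pi).
# Given w, the source f is recovered by applying the streaming and
# attenuation terms:  f(x) = 4 * pi * (dw/dx + mu * w(x)).
mu = 0.5                                  # attenuation coefficient, made up
x = np.linspace(0.0, 10.0, 10001)
dx = x[1] - x[0]
f_true = np.exp(-((x - 5.0) ** 2))        # hypothetical isotropic source

# forward sweep (Euler) to integrate the transport equation
w = np.zeros_like(x)
for i in range(1, x.size):
    w[i] = w[i - 1] + dx * (-mu * w[i - 1] + f_true[i - 1] / (4 * np.pi))

# recover the source from the computed radiance
f_rec = 4 * np.pi * (np.gradient(w, dx) + mu * w)

err = np.max(np.abs(f_rec - f_true)[1:-1])
print(err)   # small discretization error
```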
6.3. Particle-processing detectors
An example of a particle-processing detector for beta particles is shown in Figure 6. This detector includes two layers of ultrathin phosphor foils separated by an air gap, an image intensifier, a high numerical aperture lens system and a light sensor [1, 7].
In Figure 6, an incoming beta particle interacts with a layer of phosphor (just a few microns thick) at location $\mathbf{r}_1$, where it deposits some of its energy, which the phosphor layer gives off as a flash of visible light. The particle propagates further and interacts at location $\mathbf{r}_2$ with a second phosphor layer, producing a flash of visible light there as well. The light flashes produced at each layer are amplified by an image intensifier and imaged onto a sensor. The flash of light generated at the first layer spreads out considerably as it propagates through the air gap, resulting in a much broader signal on the sensor than that corresponding to the flash of light generated at the second layer. This difference is used to determine at which layer each flash of light was generated. The direction of the particle and its position at either phosphor foil can be estimated from the two interaction positions $\mathbf{r}_1$ and $\mathbf{r}_2$ and the distance between the two foils (assumed known). If the particle’s residual energy is of interest, the second phosphor foil can simply be replaced by a thick scintillator, so that the particle is stopped.
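The geometry described above reduces to simple vector arithmetic; the following sketch (with hypothetical coordinates and foil spacing) estimates the unit direction of propagation from the two interaction positions:

```python
import math

# Estimate the unit direction of propagation of a beta particle from
# its interaction positions (x, y) on the two phosphor foils and the
# known gap d between the foils. All lengths are in millimeters and
# the values below are hypothetical.
def particle_direction(p1, p2, d):
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.sqrt(dx * dx + dy * dy + d * d)
    return (dx / norm, dy / norm, d / norm)

# A particle hitting the first foil at (0, 0) and the second at (1, 0)
# across a 1 mm gap travels at 45 degrees in the x-z plane.
s = particle_direction((0.0, 0.0), (1.0, 0.0), 1.0)
print(s)   # ≈ (0.7071, 0.0, 0.7071)
```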
A CUDA algorithm for maximum-likelihood estimation of position and direction of propagation for the setup of Figure 6 has been developed by Garrett Hinton at the Center for Gamma-Ray Imaging, University of Arizona. Following the same approach outlined in Section 5, the algorithm uses a four-dimensional contracting grid to simultaneously estimate the interaction locations $\mathbf{r}_1$ and $\mathbf{r}_2$ from an image frame collected with an ultrafast CMOS camera. The experimental setup and preliminary results have been presented in [1, 7].
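While the actual implementation is a four-dimensional CUDA kernel, the contracting-grid idea can be sketched in two dimensions with a stand-in log-likelihood (the peak location below is arbitrary): evaluate the likelihood on a coarse grid, recenter on the best grid point, shrink the grid, and repeat.

```python
import numpy as np

# Two-dimensional sketch of a contracting-grid maximum-likelihood
# search. A smooth stand-in log-likelihood with a peak at a known
# (arbitrary) location replaces the camera's measurement model.
def log_likelihood(x, y):
    return -((x - 3.21) ** 2 + (y - 1.57) ** 2)

def contracting_grid(ll, center, span, n=5, iterations=8):
    cx, cy = center
    for _ in range(iterations):
        xs = np.linspace(cx - span / 2, cx + span / 2, n)
        ys = np.linspace(cy - span / 2, cy + span / 2, n)
        X, Y = np.meshgrid(xs, ys, indexing="ij")
        i, j = np.unravel_index(np.argmax(ll(X, Y)), (n, n))
        cx, cy = xs[i], ys[j]     # recenter on the best grid point...
        span /= 2                 # ...and shrink the search region
    return cx, cy

est = contracting_grid(log_likelihood, center=(0.0, 0.0), span=10.0)
print(est)   # close to (3.21, 1.57)
```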
7. Summary and applications
This chapter provided a general overview of detector technology and algorithms for photon counting and photon processing. We started by describing detectors suitable for photon counting and photon processing, and we presented statistical models for the data these detectors produce. We then introduced maximum-likelihood estimation and discussed its properties. We emphasized that a maximum-likelihood estimate is any parameter value that maximizes the likelihood function for the observed detector output, and we pointed out that the likelihood function is the probability density function of the detector output conditioned on the parameter being estimated.
We then commented on graphics processing units (GPUs) and the CUDA programming environment. Through CUDA-like pseudocode, we provided a maximum-likelihood algorithm for estimation of position of interaction for gamma-ray cameras. This algorithm used a contracting-grid approach to find a maximum of the likelihood function. Our approach heavily relied upon GPU textures to quickly retrieve calibration data. The same approach is applicable to many estimation problems.
Photon-processing detectors were introduced and defined as any system that collects multiple measurements to perform maximum-likelihood estimation of multiple event parameters (such as position, direction and energy), which are stored as a list, at full precision, in the memory of a computer. The same data can also be represented as a point process, and we introduced a linear operator $\mathcal{L}$ that maps the object being imaged to the mean of this point process. We used a dimensionality analysis to describe the advantages of photon-processing detectors over photon-counting detectors.
Particle-processing detectors are a variation on photon-processing detectors. As an emerging technology, particle-processing detectors will find applications in many fields, one of them being medical imaging. In a new technique called charged-particle emission tomography (CPET), particle-processing detectors are being evaluated for 3D in vivo imaging with alpha and beta particles [1, 7]. Like photon-processing detectors, particle-processing detectors convey a larger amount of information than conventional detectors for charged particles. This enables high-resolution 3D reconstruction of the distribution of radionuclides emitting charged particles without the need to sacrifice an animal and image a collection of thinly sliced tissue sections.
Drug development will take advantage of CPET to determine drug pharmacokinetics, 3D transduction across cell membranes and targeting to tissues of interest. In the development of internal radioimmunotherapy, CPET imaging can be used to collect data on the 3D heterogeneous distributions of targeting molecules and to estimate the delivered radiation dose. Finally, CPET will likely become a valuable technique in the emerging fields of personalized medicine and theranostics, in which diagnostics and therapy are combined in an attempt to move beyond the “one-size-fits-all” approach to treatment, which works for some patients but not for others.
This chapter has been supported by the National Institutes of Health (grants R01 EB000803 and P41 EB002035).