Rationale, Instrumental Accuracy, and Challenges of PET Quantification for Tumor Segmentation in Radiation Treatment Planning

Introduction
In the past few decades, radiation therapy of cancer has reached a high level of dosimetric and spatial accuracy due to the strong efforts of both science and industry. This has led to the emergence of today's precise radiation therapy procedures, such as stereotactic radiosurgery and stereotactic body radiation therapy, whose very names indicate a resemblance to surgical precision. This resemblance stems from the high precision with which a dose can be delivered relative to the calculated value in a treatment plan, which is typically within a few percent. However, there are still limits to this precision arising from the technical limitations of the delivery machines and from the finite errors caused by patient motion and the positioning uncertainties during patient set-up for treatment (LoSasso, 2003; Palta & Mackie, 2011). In addition, one needs to consider the finite penumbra of a radiation therapy beam. Although the dose gradient is fairly steep (at a depth of 10 cm in water, the dose falls off by more than a factor of 10 at less than 5 mm distance from a 1-mm diameter 6 MV pencil beam), photon scatter in the patient and in the accelerator head leads to low-dose but long-range wings of the dose kernel (Kirov et al., 2006). Hence, despite radiation therapy's high dose delivery precision with respect to the planned dose, it is still a relatively blunt instrument when compared to surgery.
As an added difficulty, while the precision of matching the delivered to the planned dose in radiation therapy is well known, the boundaries of the tumor target are not. One of the reasons for this is that the high-resolution imaging modalities, computed tomography (CT) and magnetic resonance imaging (MRI), are unable to identify the metabolically active or molecularly relevant parts of the tumor. In contrast, positron emission tomography (PET) offers an important advantage by defining the tumor based on its molecular properties (Ling et al., 2000; Schöder & Ong, 2008). In fact, PET has one of the highest sensitivities and specificities in detecting metabolically active tumor tissue (Gambhir et al., 2001). As a result, PET is now widely used for cancer staging and to supplement the traditional imaging systems that are used to define the target volume for radiation therapy (i.e., CT and MRI) (Gregoire & Chiti, 2011; Gregoire et al., 2007; Nestle et al., 2009). However, despite PET's ability to identify metabolic and molecular activity, delineating the gross tumor volume (GTV) with PET is problematic due to uncertainties in the biological and physiological processes governing tracer uptake and to the instrumental inaccuracy of PET images. These biological and physical uncertainties lead to significant ambiguity in the position of the tumor boundary in images generated by PET in PET/CT and PET/MRI scanners, which in turn leads to uncertainty as to where to aim the precise beam of modern radiotherapy.
In this chapter, which is intended as an introductory review, we address the following questions in order:
i. What does radiation treatment planning (RTP) need from imaging?
ii. What can PET provide for RTP?
iii. What are the artifacts of PET images and how do they affect RTP?
iv. What are the primary challenges of PET-based tumor segmentation?
Within the context of these questions and the RTP process, we describe the effects of the factors that are mainly responsible for degradation of the PET image, including limited resolution, photon attenuation, scatter, noise, and image reconstruction. In addition, we specifically address the impact of potential inaccuracies of the current artifact correction strategies on segmenting lesions in PET images.

The imaging requirements of radiation treatment planning
In radiation therapy (RT), the prescribed dose can be delivered to a phantom and verified with a precision better than 5% for most points within a patient, even for very large and nonuniform intensity modulated radiation therapy (IMRT) fields. Furthermore, with the introduction of image guidance and respiratory gating techniques, both before and during treatment, similar precision can be achieved for dynamic treatments in which the target is moving. As a result, high treatment delivery precision is becoming standard today for more and more tumor types and disease sites ('A Practical Guide To Intensity-Modulated Radiation Therapy,' 2003; Palta & Mackie, 2011). This high RT dose delivery precision sets high accuracy and precision requirements in imaging, both for tumor boundary definition (segmentation) and for activity concentration determination for dose painting (Ling et al., 2000), as described in Section 3 below.
This leads to the question: how accurately are GTV contours currently drawn? At present, tumor boundaries are drawn by radiation oncologists using CT images, in which the tumor may or may not be clearly seen, in a process resembling art. Using PET images in the treatment planning process is becoming more common, since it provides additional functional information (Nestle et al., 2009). The use of PET images in treatment planning has led to better agreement among physicians on target definition (Fox et al., 2005; Steenbakkers et al., 2006) and is hypothesized to improve the outcome of therapy. However, CT and PET have vastly different resolution and noise properties. As a result, when CT images are combined with PET images, which have much poorer resolution, an additional source of uncertainty is introduced into the segmentation process. Thereby, the uncertainties in PET images are translated into uncertainty of the RTP contours. It is also worth mentioning that many studies that show large discrepancies between PET-derived GTV and CT-derived GTV were performed without intravenous (IV) contrast. The routine use of IV contrast during PET/CT is expected to help the clinician further refine the GTV (Haerle et al., 2011).
Drawing target contours with an accuracy of the order of 1 mm is desirable to match the precision of contemporary dose delivery. While this is achievable with CT and MRI, it is still a challenge for PET. Lastly, the GTV is also derived based on clinical examination. Because all imaging modalities (PET, CT, or MRI) have limitations in defining mucosal involvement, the value of a physical examination by a radiation oncologist cannot be overstated.
Currently, due to the limited resolution of PET and its poorly known quantification accuracy, defining the target for radiation therapy by segmenting PET images is problematic. For this reason, it is important to understand, and to try to minimize, the potential image-degrading factors for each patient PET scan (Kang et al., 2009).

What can PET provide to radiation treatment planning?
PET reveals functional information about elevated cell metabolic activity, including proliferation or molecular processes, that may help localize the most active or potentially radiation-resistant parts of a tumor, as well as cancerous metabolic activity not visible in CT images (Erdi et al., 2002; Zanzonico, 2006). Fig. 1 shows an example of how fluorodeoxyglucose (FDG) PET (red contour) can lead to alteration of a CT-defined GTV and the final planning treatment volume (PTV). This illustrates the substantial difference between the use of PET for diagnostic and RTP purposes. Whereas the goal of diagnostic PET is to detect the presence of a tumor by its abnormal uptake, the goal of using PET in RTP is to delineate cancerous from normal tissue with high accuracy. This poses new challenges for PET. The physical basis for these challenges is discussed in Section 5 of this chapter. This discussion cannot be considered complete without addressing the various relationships between the actual tumor volume, its estimate as outlined by the physician (GTV), the established margins for both the clinical target volume (CTV) and the PTV, the physician's prescription, and the delivered dose (Mackie & Gregoire, 2010; Palta & Mackie, 2011). Traditional imaging techniques, such as CT and x-ray, attempt to identify the volume in a binary (tumor/no tumor) fashion via anatomical abnormalities, because they do not supply any functional or metabolic information about the tumor cells or subregions. As a result, most radiation therapy planning and delivery is based on irradiating a homogeneous target and avoiding the surrounding tissue. These plans are designed to maximize the likelihood of tumor control while minimizing the likelihood of normal tissue complications, constrained by the properties of radiation, by the dose delivery precision, and by the tumor position uncertainties. The lack of in-vivo biological information about the tumor at the voxel-size scale requires assuming homogeneity of the target, which implies that the dose delivery should also be homogeneous in order to maximize the probability of tumor control.
In this case, the uncertainty in subclinical disease and delivery is dealt with by extending margins from the estimated GTV to account for the subclinical disease (CTV) and the delivery uncertainty (PTV). The beam penumbra leads to a gradual dose fall-off at the edge and outside of these margins. However, the segmentation boundary, the physician's prescription, and the treatment plans are based on a hard boundary, while the dose is effectively delivered to a soft boundary, where it may or may not fall off in proportion to the tumor cell density. Quantitatively accurate PET images using tracers that capture the appropriate tumor information have the potential to make the definition of these soft boundaries more explicit and thus more conformal to the targeted tumor function.
Currently there is an increased interest in using PET to define highly aggressive or radiation-resistant parts of the tumor and selectively treat them with a higher dose in order to increase the tumor control probability (TCP) without increasing the normal tissue complication probability (NTCP). This treatment approach is known as "dose painting" (Ling et al., 2000; Lee et al., 2008; Aerts et al., 2010; Petit et al., 2009; Bentzen & Gregoire, 2011). Fig. 2 shows how PET has been used for dose painting by increasing the dose to physician-selected regions, using information from the fused PET/CT image as well as from other sources. Due to the recently observed, highly complex spatial distribution of various tissue properties probed with different tracers on a micro-environmental scale, it was suggested that treatments can be prescribed and delivered on a voxel-by-voxel basis with a technique known as "dose painting by numbers" (Petit et al., 2009; Bentzen & Gregoire, 2011). However, although PET has the potential to provide this type of information, there are difficulties that need to be addressed before such treatments can be implemented. Among the main challenges are: the limited spatial resolution and signal-to-noise ratio of the imaging system; the dynamic nature of the tumor microenvironment, which can shift between planning and treatment; the fact that there are few tracers that can reliably provide the needed biological information about the relevant molecular processes, and those that do exist require further validation; and the finite setup and delivery uncertainties in radiation therapy.
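As a purely illustrative sketch of the "dose painting by numbers" idea, the Python fragment below maps voxel uptake linearly onto a prescription range. The dose range, the linear mapping, and the percentile cap are assumptions chosen for illustration, not a clinically validated prescription function.

```python
import numpy as np

def dose_by_numbers(pet_suv, d_min=60.0, d_max=80.0):
    """Map each voxel's uptake linearly onto a prescription between d_min and
    d_max [Gy]. Illustrative only: clinically meaningful prescription functions
    are an open research question. The cap at the 99th percentile merely keeps
    single hot voxels from dominating the scaling."""
    cap = np.percentile(pet_suv, 99.0)
    u = np.clip(pet_suv, 0.0, cap)
    return d_min + (u / cap) * (d_max - d_min)

# Toy tumor volume with heterogeneous uptake
rng = np.random.default_rng(0)
suv = rng.gamma(shape=4.0, scale=1.5, size=(8, 8, 8))
dose = dose_by_numbers(suv)
print(f"prescribed dose range: {dose.min():.1f} to {dose.max():.1f} Gy")
```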

PET uncertainties
The accuracy of the PET segmentation task is limited by the uncertainties of the PET image. As described in a paper by Boellaard (Boellaard, 2009), the factors leading to uncertainty in PET can be divided into three groups: biological (e.g., glucose levels, inflammation, patient comfort, heterogeneity of tumor composition, perfusion), physical (e.g., positron annihilation physics, detector limitations, reconstruction method), and technical (e.g., activity specification and injection errors, residuals, injection time). The uncertainty contributed by each of these factors can be quite large, up to tens of percent, and these estimates are often approximate or reflect observed maximum deviations. As an example of one technical factor, Fig. 3 illustrates the uncertainty in administered activity. Compared to the accuracy of dose delivery that is achievable in RT today (of the order of a few percent), further efforts are necessary to better quantify the uncertainty contributed by each PET image-degrading factor. This is especially true if dose painting by numbers is considered. In the next section, we address the physical factors in more detail.

Physical PET image degrading factors and their effect on segmentation
The activity distribution in a PET image differs from the actual tracer activity distribution due to physical limitations of the PET scanners (Surti et al., 2004; Cherry et al., 2003) and the physical processes summarized in Table 1. In this section we briefly describe the effects of these processes and provide some insight on how each of them may affect the accuracy of PET segmentation, even after correction. An illustration of some of the physical phenomena, based on a realistic Monte Carlo simulation of a PET scan, is presented in Fig. 4.

Table 1. Physical factors leading to degradation of the PET image (each is described in the subsections below; the biological and technical factors (Boellaard, 2009) are not the subject of the present chapter):
1. PET Resolution: Smearing of the activity distribution obtained with PET due to physical and detector phenomena (see Table 2)
2. Photon Attenuation
3. Photon Scatter
4. Random Coincidences
5. Normalization
6. Dead Time
7. Image Reconstruction and Noise
8. Misregistration and Motion

Resolution
The factors affecting PET resolution (Cherry et al., 2003; Tomic et al., 2005) are summarized in Table 2. The finite resolution of PET scanners degrades quantification accuracy and can make small objects invisible if they are comparable in size to or smaller than the full width at half maximum (FWHM) of the point spread function (PSF). It can also lead to loss of contrast for larger objects due to blurring. This effect is known as the partial volume effect (PVE); a numeric sketch of this blurring is given after Table 2 below. Various PVE correction methods exist, which apply the correction either at a region or at a voxel level (Soret et al., 2007), during or post reconstruction. Most of the post-reconstruction correction methods lead to a variance increase and often use anatomical images (CT or MRI) to control noise amplification. Since PET images are functional and therefore do not need to match the anatomical structures, PVE corrections that do not use anatomical images have also been explored (Boussion et al., 2009; Kirov et al., 2008). Fig. 5 shows PET images before and after applying one of these approaches. A straightforward and promising approach implemented by some vendors is to incorporate the point spread function in the reconstruction process. For isotopes with higher positron energy (e.g., 13N and 82Rb), the positron range contribution to the PET PSF is greater and its spatial variance becomes important (Alessio & MacDonald, 2008).

Fig. 4. Annihilation photons originating from an FDG cylinder placed next to an air cavity (short yellow lines close to center) in a water cylinder (horizontal green lines) within a model of the GE Discovery LS PET scanner (Schmidtlein et al., 2006), as simulated by the GATE Monte Carlo code (Jan et al., 2004). Many photons are seen to scatter in the phantom and one even scatters in air, producing a delta electron (red). Some photons are absorbed in the phantom or scattered outside the solid angle of the scanner. A simulation result from this arrangement is shown in Fig. 12.

Table 2. Factors affecting PET resolution:
1. Positron Range: Distance the positron travels prior to annihilation, typically up to about 3 mm for the most often used radioactive isotopes (Zanzonico, 2006)
2. Detector Size and Distance to Detector: PET spatial resolution usually cannot be less than half the size of the face of the detector elements; increasing the diameter of the detector ring decreases its efficiency and increases the effect of photon non-colinearity
3. Photon Non-colinearity: Photons are not emitted exactly back-to-back, since the positron is not fully stopped at the time of annihilation
4. Block Effect: Scattering of photons in neighboring detectors broadens resolution; the block readout scheme or detector coding can also affect resolution
5. Depth of Interaction: Photons may protrude from one detector without interaction and interact within a neighboring detector
6. Under-sampling: In addition to the finite detector size, the angular sampling interval and the selected voxel size also impose a limit on resolution
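To make the PVE concrete, here is a minimal one-dimensional numpy sketch: a lesion profile is convolved with a Gaussian PSF, and the imaged peak falls progressively below the true uptake as the lesion diameter approaches the FWHM. The 6-mm FWHM, the voxel size, and the 4:1 lesion-to-background ratio are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(fwhm_mm, dx_mm, half_width=5):
    """1-D Gaussian PSF sampled on the image grid; FWHM = 2*sqrt(2*ln2)*sigma."""
    sigma = fwhm_mm / 2.355
    x = np.arange(-half_width * fwhm_mm, half_width * fwhm_mm + dx_mm, dx_mm)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()

dx = 1.0                          # voxel size [mm], illustrative
x = np.arange(-50.0, 50.0, dx)    # 100 mm field of view
background, uptake = 1.0, 4.0     # 4:1 lesion-to-background ratio

for diameter in (30.0, 10.0, 5.0):   # lesion sizes [mm]
    lesion = np.where(np.abs(x) < diameter / 2.0, uptake, background)
    blurred = np.convolve(lesion, gaussian_psf(6.0, dx), mode="same")
    print(f"{diameter:4.0f} mm lesion: true peak {uptake:.1f}, "
          f"imaged peak {blurred.max():.2f}")
```

For the 5-mm lesion the imaged peak recovers only part of the true contrast, which is why a single intensity threshold cannot delineate small and large lesions equally well.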

Reconstruction
Smoothing functions are applied to the data during or after reconstruction to suppress noise.

Fig. 5. PVE correction (Kirov et al., 2008) as seen on a PET image before (left) and after (right) the correction. Improved contrast is observed, especially for the small lesions, but there are modifications of the image related to the Gibbs phenomenon that require further investigation.
The smearing of the tracer distribution due to the finite PET resolution affects segmentation. This is in addition to any biological effects, which may lead to variable gradients of the tumor cell density or metabolism at the tumor periphery and thereby also blur the tracer uptake distribution. PVE-related blurring leads to a size-dependent variation of the optimal segmentation threshold. This is addressed by using adaptive threshold methods, where the threshold is a function of the size of the lesion, as was shown for spherical objects (Erdi et al., 1997; Lee, 2010); a sketch of such a method is given below. The reconstruction parameters (number of iterations, filter types) affect the resolution, and their effect on the segmentation threshold has been demonstrated by several groups (Daisne et al., 2003; Ford et al., 2006). Some of the more advanced auto-segmentation approaches combine a resolution correction step within the segmentation process using either deconvolution or convolution (De Bernardi et al., 2009) operations.
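Below is a sketch of such a size-adaptive threshold in the spirit of the iterative approaches cited above. The calibration curve threshold_percent() is a hypothetical placeholder; its coefficients would have to be fit to phantom scans for a specific scanner, reconstruction, and protocol.

```python
import numpy as np

def threshold_percent(diameter_mm):
    """Hypothetical calibration: optimal threshold (% of max) versus sphere
    diameter. The coefficients are illustrative placeholders only; they must
    be measured on phantom data for each scanner and protocol."""
    return 30.0 + 120.0 / diameter_mm        # larger spheres -> lower threshold

def adaptive_threshold_volume(img, voxel_mm, n_iter=10):
    """Iterate: threshold -> volume -> equivalent-sphere diameter -> new threshold."""
    diameter = 20.0                          # initial guess [mm]
    for _ in range(n_iter):
        t = threshold_percent(diameter) / 100.0 * img.max()
        volume_mm3 = (img > t).sum() * voxel_mm ** 3
        diameter = max(2.0 * (3.0 * volume_mm3 / (4.0 * np.pi)) ** (1.0 / 3.0),
                       voxel_mm)             # guard against an empty mask
    return img > t, diameter

# Toy test: a 16-mm sphere with 4:1 contrast on a 2-mm voxel grid
z, y, x = np.mgrid[:48, :48, :48] * 2.0
r = np.sqrt((x - 48.0) ** 2 + (y - 48.0) ** 2 + (z - 48.0) ** 2)
img = np.where(r < 8.0, 4.0, 1.0)
mask, d_est = adaptive_threshold_volume(img, voxel_mm=2.0)
print(f"estimated diameter: {d_est:.1f} mm")
```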

Attenuation correction
The attenuation correction (AC) (Zaidi & Hasegawa, 2003) may cause inaccuracies (Mawlawi et al., 2006) due to:
i. incorrect attenuation caused by streaking CT artifacts in proximity to metallic implants;
ii. movement resulting in loss of registration between the CT and PET images;
iii. use of contrast media, which may cause an overestimation of the AC, especially for older bilinear AC algorithms (Nehmeh et al., 2003);
iv. truncation: for large or mis-positioned patients, part of the patient may be outside the CT field of view (FOV). Unaccounted attenuation results in underestimation of the SUV and produces streaking artifacts at the edge of the CT images, i.e., a rim of high activity at the edge of the CT FOV.
Dual-energy CT attenuation correction methods have been proposed for reducing (iii) above (Rehfeld et al., 2008).
Each of the mechanisms listed above can affect quantification within the images. As a result, they can alter the basic data that automated segmentation approaches use for determining the probability of assigning a voxel to a certain class. The effects of streaking artifacts in CT due to metallic implants are difficult to quantify due to the gross mismatch between the estimated attenuation coefficients and the actual amount of attenuation in the data. This can cause an artificial increase of uptake in the reconstructed images in some regions and a reduction in others. Because of the statistical nature of iterative reconstruction and of the irregularity of the CT artifacts, it is difficult to determine where these effects will be seen. Some vendor software smoothes or redefines regions in the images to remove these CT artifacts, which allows the reconstruction of more reliable PET images. Fig. 6 shows an artifact caused by inaccurate attenuation correction due to a CT streaking artifact; comparison with the non-AC-corrected image shows it clearly, just above the prosthesis.
PET and CT misregistration causes systematic shifts in the intensity within the PET images. This may cause similar systematic shifts in the boundaries of the various segmentation schemes. These intensity shifts are especially noticeable in the lung, where small shifts in the patient's position can shift bone into lung and vice versa. In general, regions in the patient that were dense during the CT and less dense during the PET acquisition will show artificially elevated uptake. This effect is demonstrated in Fig. 7 for Monte Carlo simulated PET images of a female thorax, for which the correct position of the AC map is known (Kang et al., 2009). In practice, most vendors use heavily smoothed CT images for AC in the reconstruction process, which smooths these effects but does not eliminate them, especially for larger misalignments.
In addition, scatter corrections rely on the attenuation map from the CT to estimate the contribution of scattered events, and can therefore be offset by attenuation correction misalignment, as discussed in more detail in the next section.
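For context on mechanism (iii): the CT is acquired at effective photon energies far below 511 keV, so the measured HU values must be converted to 511-keV linear attenuation coefficients, commonly with a bilinear mapping of the kind sketched below. The break point, slopes, and coefficients here are representative values chosen for illustration; actual mappings are vendor- and kVp-specific. IV contrast raises HU toward the bone branch without a proportional increase in attenuation at 511 keV, so such a mapping overestimates mu, which is the mechanism behind the contrast artifact.

```python
import numpy as np

# Representative linear attenuation coefficients at 511 keV [1/cm];
# the exact values, slopes, and break point differ between vendors and kVp.
MU_WATER_511 = 0.096
MU_BONE_511 = 0.172

def hu_to_mu511(hu):
    """Bilinear CT-based attenuation map: an air-to-water slope for HU <= 0
    and a shallower bone slope above 0 HU (illustrative of the commonly used
    bilinear mappings, not any vendor's exact calibration)."""
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)    # -1000 HU (air) to 0 HU (water)
    bone = MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0
    return np.where(hu <= 0.0, soft, bone).clip(min=0.0)

print(hu_to_mu511([-1000.0, 0.0, 1000.0]))   # air ~0, water ~0.096, dense bone ~0.172
```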

Scatter corrections
Photon scatter can lead to erroneous lines of response (LOR; Fig. 8). In the top right inset, one can see that photons Compton scattered by up to 50 degrees will fall inside a coincidence energy window above 375 keV. 3D PET provides much higher sensitivity but also results in a higher fraction of scattered events compared to 2D PET. Fig. 9 illustrates, using Monte Carlo simulation results, how scattered photons can contaminate the PET energy window. It should be noted that 2D PET systems are being phased out by the manufacturers; however, a large number of these scanners are currently in use.
Various methods for evaluation of scatter exist (Bailey, 1998; Surti et al., 2004; Zaidi & Koral, 2004) and can be performed at different levels of accuracy: uniform, in which the number of scattered photons across the sinogram is approximated by a smooth function, or more detailed and based on the attenuation correction (AC) image. Historically, scatter correction strategies included uniform sinogram tail-fitting methods, which are outdated, although they may perform well for uniform phantoms. Experimental approaches using dual or multiple energy windows to separate scatter events have also been investigated (Grootoonk et al., 1996; Trebossen et al., 1993).
Vendors currently use CT-based scatter correction schemes and apply the corrections so that they are invisible to the user. Calculating the scatter from the AC image was initiated by Ollinger (Ollinger, 1996) and can be performed either analytically, using the Klein-Nishina formula for single-scattered photons, which is becoming the industry standard, or using Monte Carlo. The latter is more accurate but very time consuming, and different groups are currently working on improving the efficiency of these calculations.
There are three main mechanisms through which scatter can affect the quantitative accuracy of the reconstructed images: i) inaccurate modeling of the scatter distribution; ii) an inaccurate scatter fraction used for scaling the correction in some of the correction approaches; and iii) a shift of the location of the distribution. In practice, the largest contribution to the errors seen in scatter correction is due to an inaccurate estimate of the scatter fraction. The reason is that the scatter fraction is often obtained by using a tail fit of the projection data outside the patient, which can be affected by patient motion, pulse pile-up, or spilled activity (a sketch of this tail fit is given after Fig. 9 below). This causes the entire distribution of the scatter counts to be over- or under-corrected. This uniform over- or under-correction in the projection data can, however, be redistributed unevenly by the attenuation and normalization corrections.

Fig. 9. Energy spectra of annihilation photons as obtained from a Monte Carlo simulation (Jan et al., 2004). Photons with energies below 300 keV are discarded, since the coincidence energy windows typically range from 350 or 375 keV to 650 keV. The different spectra correspond to photons from all coincidences (solid line), photons from coincidences in which at least one photon was scattered in the phantom (long dash), only single-scattered photons (short dash), and only multiple-scattered photons (dotted line).
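The tail-fit scaling just described can be sketched as a least-squares fit of a modeled scatter profile to the measured counts in the projection bins outside the patient, where ideally only scatter remains. The profile shapes and counts below are synthetic; the second call shows how contamination of the tails inflates the fitted scatter everywhere.

```python
import numpy as np

def scale_scatter_to_tails(measured, model_scatter, object_mask):
    """Least-squares scale factor for a modeled scatter profile, fit only on
    bins outside the object (object_mask == False), where all measured counts
    are assumed to be scatter (plus noise)."""
    tails = ~object_mask
    k = ((measured[tails] * model_scatter[tails]).sum()
         / (model_scatter[tails] ** 2).sum())
    return k * model_scatter

# Synthetic 1-D projection profile [cm]
r = np.linspace(-40.0, 40.0, 161)
object_mask = np.abs(r) < 20.0                 # patient occupies the central 40 cm
model = np.exp(-0.5 * (r / 25.0) ** 2)         # broad, smooth scatter shape
measured = np.where(object_mask, 400.0, 0.0) + 50.0 * model

print(f"fitted peak: {scale_scatter_to_tails(measured, model, object_mask).max():.1f}"
      " (true 50.0)")

# Spilled activity in the tails biases the fit and over-corrects everywhere:
contaminated = measured + np.where(r > 30.0, 30.0, 0.0)
print(f"with contaminated tails: "
      f"{scale_scatter_to_tails(contaminated, model, object_mask).max():.1f}")
```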
In the case of extreme over-correction, the edges of the images will show zero counts and the resulting images have a "bleached out" appearance. This is most often seen when activity is present outside the patient's body as determined from the CT mask, which alters the tail fit from which the scatter fraction is estimated. An example occurs when a patient begins a scan with arms up (CT arms up) and then partway through the scan lowers their arms. Alternatively, this can happen at high activities, when pulse pile-up can compound true counts so that they produce pulses above the energy window, and compound scattered counts so that they fall inside that window. This may increase the tail of scattered events in the sinogram and lead to scatter over-correction. This is most often seen near the bladder or heart in normal scans, and sometimes with short half-life tracers due to their high initial activity. In either case, the resulting images have a dramatic loss of contrast and appear bleached out. In the case of severe over-correction, because of the loss of contrast and quantitative accuracy, the effects on segmentation can be dramatic. However, these images are often poor enough that no usable diagnostic information remains, and it is obvious that they are not useful for segmentation.
Poor modeling of the scatter distribution can cause local regions of the projection data to be over- or under-corrected. Discrepancies in the modeled scatter distribution are further scaled by the system sensitivity and attenuation corrections during image reconstruction. The net result is that the over-corrected regions will show reduced uptake and the under-corrected regions will show increased uptake. However, because the scatter distribution, even after being scaled, varies smoothly in projection space, poor modeling of the shape of the distribution is generally difficult to identify and likely a small effect. The exception is when the entire distribution is improperly scaled due to a poor estimate of the scatter fraction.
Although the scatter distribution is a slowly varying function of position in projection space, the application of the attenuation correction scales these counts so that they concentrate in the more highly attenuated regions of the images (Fig. 10). Because of this, in some regions of the thorax the apparent number of scattered photons in the images may approach the number of true coincidences (Kang et al., 2009). Therefore, an inaccuracy of the scatter correction of about 15% (Chang et al., 2009) can result in a similar quantification error in the PET signal for these regions. This can alter the tumor-to-background ratio, which can affect most segmentation methods.

Fig. 10. (a-f) Effect and distribution of scattered and random (SR) coincidences for the NEMA-2 Image Quality phantom, as obtained from a Monte Carlo simulated scan (Jan et al., 2004; Schmidtlein et al., 2006), which allows exact separation of these events.
Finally, the scattered events can affect segmentation accuracy if the AC map is shifted with respect to the PET image due to patient movement between the CT and the subsequent PET scan. In addition to shifting the true counts image, as was shown in Section 5.2 (Fig. 7), this will cause a similar shift of the scatter counts image (Fig. 10f). The under- and over-corrections will be most pronounced at the extreme edges of the patient along the axis of the shift. However, unless the shift is substantial, the overall effect may be small if the attenuation correction is smoothed. In general, this is a second-order effect when compared to the effect of the attenuation correction misregistration on the true coincidences.

Random corrections
The number of random events increases with increasing injected activity and may exceed the number of true events by a few times. However, the random correction is very accurate. It is usually obtained in one of three ways: (i) real-time subtraction of the count rate from a delayed timing window for which no true coincidences are possible; (ii) off-line correction using a low-noise estimate of the random events rate obtained by smoothing the delayed sinogram; or (iii) random rates calculated from the single-event rates in each detector. Direct subtraction of real-time measured random coincidences increases noise in the corrected image. Brasse et al. (Brasse et al., 2005) have shown that while smoothed delayed random estimates provide the lowest-noise images, singles-based random estimates perform only marginally worse, but without the dead-time penalty and increased data bandwidth of the delayed counts approach.
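For option (iii), the expected randoms rate on a line of response between detectors i and j follows from the singles rates and the coincidence window, $R_{ij} = 2\tau r_i r_j$. A one-line sketch with illustrative numbers:

```python
# Expected randoms rate on the LOR between detectors i and j:
# R_ij = 2 * tau * r_i * r_j, where 2*tau is the coincidence window.
tau = 6e-9                  # half of a 12 ns coincidence window [s], illustrative
r_i, r_j = 2.0e5, 1.5e5     # singles rates of the two detectors [counts/s]
randoms_rate = 2.0 * tau * r_i * r_j
print(f"{randoms_rate:.0f} randoms/s on this LOR")   # 360 randoms/s
```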
In analogy with the discussion in Section 5.3, although random counts are quite smooth and uniform, any uncorrected random counts will, because of the attenuation correction, be pushed into the regions with high attenuation (Fig. 10) and may cause inaccuracies similar to those described for scattered photons. However, due to the much higher accuracy of the random events correction, these inaccuracies are expected to be smaller than those introduced by the inaccuracy of the scattered events correction.

Normalization correction
The normalization correction corrects for the "non-uniformity of detector response related to the geometry of the scanner" and for the difference in sensitivity between the different detector channels (Bailey, 1998). It is performed during routine re-calibration of the scanner by using a uniform activity cylinder or by scanning a rod source. The vendor of the scanner specifies the source and the correction procedures. For more information on different approaches to normalization correction, see (Bailey, 1998) and (Badawi, 1999). Improper or out-of-date normalization files can add artifacts to the reconstructed image. The most common artifacts are caused by unaccounted-for changes in the efficiency of some detectors. They can be seen in transverse slices as ring-like light and dark intensity patterns centered on the transverse axis. The effect on automated segmentation schemes is likely to be similar to that of CT streak artifacts, but less intense.

Dead time
Corrections are needed to account for count losses due to the electronics dead time. Since the losses are larger at high count rates, the correction can be derived from repeated scans of a decaying source. High count rates can also lead to mis-positioning and misclassification of events due to pulse pile-up in block detector systems (Badawi, 1999). Mis-positioning may cause loss of imaging accuracy if normalization is performed at count rates very different from those in clinical scans. Additionally, at high activities, pulse pile-up can push true counts out of, and scattered events into, the coincidence energy window (event misclassification). As discussed in Section 5.3, this may lead to an overestimation of the scatter correction and can therefore affect segmentation.
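A sketch of the decaying-source calibration, assuming a simple non-paralyzable dead-time model (real scanners may need paralyzable or hybrid models): the true rate is known from the decay of the source, and the dead time is recovered from the measured rates.

```python
import numpy as np

def measured_rate(true_rate, tau):
    """Non-paralyzable dead-time model: observed = true / (1 + true * tau)."""
    return true_rate / (1.0 + true_rate * tau)

# Decaying 18F source (half-life 109.77 min): the true rates are known.
t = np.linspace(0.0, 4 * 6586.2, 50)                  # four half-lives [s]
true_rates = 5.0e5 * np.exp(-np.log(2.0) * t / 6586.2)
tau_true = 2.0e-6                                     # dead time [s], illustrative
observed = measured_rate(true_rates, tau_true)

# Since 1/m = 1/n + tau for this model, tau follows directly:
tau_fit = np.mean(1.0 / observed - 1.0 / true_rates)
print(f"fitted dead time: {tau_fit * 1e6:.2f} us (true {tau_true * 1e6:.2f} us)")
```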

Image reconstruction and noise
PET image reconstruction is inherently noisy due to the Poisson processes that govern the detection and interpretation of the emitted photons (e.g., decay, detection, energy spectrum). While all modern PET scanners are supplied with, and clinics use, statistically based (iterative) image reconstruction, it is useful to first discuss the resolution and noise properties of images generated with the older, deterministic image reconstruction methods. In deterministic image reconstruction methods, such as filtered back-projection, the back-projection process mixes the Poisson-distributed projection data via the projection operator (often a uniformly spaced Radon transform) to generate images whose voxels have a Gaussian noise distribution (Alpert et al., 1982; Schmidtlein et al., 2010). The resulting images have better signal-to-noise ratios for high-contrast objects than for low-contrast objects (this is not true for OSEM, as explained below). Additionally, the projection operator and related post-filtering tend to smooth the noise and create covariance between neighboring voxels. Post-filtering greatly increases the ability to interpret the images.
The most popular iterative image reconstruction algorithm is Ordered Subsets Expectation Maximization (OSEM), which is a form of Maximum Likelihood Expectation Maximization (MLEM) with accelerated convergence achieved by iterating over smaller subsets of the data (typically an angular component of the sinogram) (Tarantola et al., 2003). This iterative reconstruction seeks the most likely image given the data and a statistical model of the system. However, like all un-regularized maximum likelihood estimates, these reconstruction methods begin to fit the noise of the data if iterated too many times. Furthermore, the signal-to-noise ratio (SNR) over the image is more uniform, and because of that, hot objects will have higher noise compared to colder objects (Schmidtlein et al., 2010). As a result, the ability of iterative statistical reconstruction to produce increased contrast in low-uptake regions is one of the primary reasons that these methods are superior to deterministic methods for diagnostic purposes.
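The MLEM update underlying OSEM fits in a few lines; OSEM simply applies the same multiplicative update cyclically to subsets of the projection data. This is the textbook update equation applied to a toy system matrix, not any vendor's implementation.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM: x <- (x / sens) * A^T (y / (A x)), with sens = A^T 1.
    A: system matrix (n_projections x n_voxels); y: measured counts."""
    x = np.ones(A.shape[1])                  # uniform initial image
    sens = A.T @ np.ones(A.shape[0])         # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy example: 3 voxels observed through 4 projection rays
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 3))
x_true = np.array([1.0, 5.0, 2.0])
y = rng.poisson(A @ x_true * 100.0) / 100.0  # Poisson-noisy projections
print(mlem(A, y))                            # approaches x_true, then fits noise
```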
Overall, the noise of voxels in OSEM-generated images is best described as log-normally or multivariate-normally distributed (Barrett et al., 1994), with a standard deviation proportional to the voxel intensity. It follows from this proportionality and the non-uniform sampling that the noise and resolution properties in these images are position dependent (Fessler & Rogers, 1996). In addition, the covariance between neighboring voxels is increased, which adds complexity when evaluating the statistical properties of regions of interest (Buvat, 2002; Schmidtlein et al., 2010). Following iterative reconstruction, the data are typically smoothed to reduce the effects of over-fitting. To avoid these effects and/or non-uniform spatial resolution, some penalized-likelihood (regularization) schemes have been developed (Fessler & Rogers, 1996), but they are rarely used. Li (Li, 2011) has most recently analyzed the noise properties of penalized-likelihood algorithms.
Two new features now available with the latest generation of PET scanners are point spread function (PSF) modeling and time-of-flight (TOF) reconstruction. Both of these modifications to the reconstruction process alter the noise in the reconstructed images by altering the projection operator. PSF reconstruction uses the measured spatial resolution of the scanner to account for blurring (Table 2). Counterintuitively (deconvolution is usually a noise-amplifying process), this results in smoother images. This behavior can be explained by realizing that, with the addition of the PSF information into the system model, the over-fitting normally seen in maximum likelihood models is constrained by the improved system model; mathematically, this can be seen through the propagation of the PSF kernel through the forward and back projectors (Rapisarda et al., 2010; Tong et al., 2010). In TOF reconstruction, the use of the timing information restricts the lines of response (LOR) in the projection matrix to a smaller portion of the object. In addition, this reduces both the random and the scattered events attributed to the object. Hence, with TOF reconstruction the contrast can be improved, though the effect is most pronounced in patients with more scatter (Chang et al., 2011).
Therefore, resolution, contrast, and signal-to-noise ratio depend on the reconstruction type and parameters. With few exceptions among the statistical auto-segmentation methods, e.g. (Yu, Caldwell, Mah, & Mozeg, 2009; Hatt et al., 2009), most auto-segmentation methods are dependent on these characteristics of the images.

Misregistration and motion
Patient motion (e.g., breathing) can lead to loss of registration between CT and PET images. Due to the much longer PET acquisition time, a PET image encompasses lesion positions over different breathing phases. This can affect the accuracy of the contours shown in Fig. 1. Breathing motion artifacts in PET/CT images can be corrected for by binning the PET data according to the breathing phase (e.g., in 10 bins) and then correcting each of those data sets for attenuation using a phase-matched CT image set deduced from 4D-CT images. This method is referred to as 4D PET/CT. Another technique is the breath-hold PET/CT acquisition, in which both PET and CT images are acquired at the same breathing amplitude, e.g., deep inspiration (Nehmeh et al., 2011). The basis of this technique, as well as a description of the different motion tracking devices, is summarized in a review (Nehmeh & Erdi, 2008).
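The phase-binning step can be sketched as below, under the simplifying assumption of a perfectly periodic breathing cycle; in practice the phase is taken from a measured respiratory trace provided by an external tracking device.

```python
import numpy as np

def bin_by_phase(event_times_s, period_s=4.0, n_bins=10):
    """Assign each coincidence event to a respiratory-phase bin, assuming an
    idealized strictly periodic breathing cycle of known period. Real gating
    derives the phase from a measured respiratory trace."""
    phase = (event_times_s % period_s) / period_s          # 0..1 within the cycle
    return np.minimum((phase * n_bins).astype(int), n_bins - 1)

events = np.random.default_rng(2).uniform(0.0, 180.0, size=100_000)  # 3-min scan
print(np.bincount(bin_by_phase(events), minlength=10))  # ~10,000 events per bin
```

Each bin's events would then be reconstructed with the CT phase that matches it.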
An interesting case of registered PET/CT images is shown in Fig. 11, in which there appears to be activity inside an air cavity. The behavior of the PET signal at tissue-air and tissue-lung interfaces was separately investigated and showed a steep PET signal drop for a cork or air cavity next to 18F activity (Fig. 12). Therefore, motion is suspected to be the reason for what seems to be activity in the air cavity.
Fig. 11. An RTP contour displayed on registered CT (left) and PET (right) images from an FDG-based PET/CT scan. The PET signal appears to originate from a ~1-cm wide air cavity inside the trachea, as seen with respect to the RTP contour.
At 3 mm and beyond into the cavity, the PET signal intensity drops to 30% below the peak value. Increased signal is observed on the opposing wall of the cavity due to positrons crossing the cavity (Fig. 12). The intensity of this peak is ~0.2% of that of the main peak. This was also confirmed by a Monte Carlo simulation, which gives the positron annihilation position.

Fig. 12. Effect of the presence of air and lung cavities positioned next to activity on the PET image intensity. Relative PET signal profiles across FDG-air (solid line) and FDG-cork (dashed line) interfaces, as obtained by OSEM reconstruction of a 2D high-sensitivity scan on the GE Discovery LS PET scanner. The profiles were summed over three neighboring slices at the center of the image to reduce noise. The positron annihilation position for this geometry, as obtained from a Monte Carlo simulation (Fig. 4), is shown with the dotted line.

Other artifacts
In addition to the physical processes described above, other sources of error in PET images may be due to improper calibration, faulty detector blocks, or other malfunctions of the scanner hardware, as well as spilled activity on the cover of the detector rings, e.g., contamination from urinal splash.

Overall PET inaccuracy
For simple phantoms, the overall quantification inaccuracy of PET scans has been measured. However, for more realistic cases, in which the tracer distribution is a complex superposition of the perturbing effects of the various phenomena discussed above, PET accuracy is unknown and not yet well investigated. A Monte Carlo based investigation showed how different corrections and reconstruction algorithms can affect accuracy for non-uniform activity and attenuation varying in one direction (Kirov et al., 2007). The development of a new class of physical phantoms capable of producing realistic activity distributions, similar to those observed in clinical scans (Kirov et al., 2011), will further aid in quantifying the overall PET inaccuracy.
The artifacts discussed in the previous sections and summarized in Table 3 ultimately contribute to the inaccuracy of the segmentation boundary in the form of offsets and global inaccuracies, as well as in uncertainty of the boundary location due to the noise of the voxel intensities. Here we present an example of a formalism that allows the overall uncertainty of the position of the segmentation contour to be explicitly represented as a function of these inaccuracies and of the uncertainty associated with a noise model, for a spherical tumor with constant uptake in an image reconstructed with OSEM. In this case, assuming that the uptake increases in the direction perpendicular to the segmentation boundary, the intensity of a voxel at position $r$ can be approximated by a truncated Taylor series expansion:

$$I(r) \approx I(r_s) + \left.\frac{dI}{dr}\right|_{r_s}(r - r_s) \qquad (1)$$

Here $I(r)$ is the image intensity at a voxel at position $r$, and $r_s$ is the position of the segmentation edge. It should be noted that this edge is a function of the segmentation method; however, once the segmentation is performed, it is at a fixed position. By using a central difference approximation of the derivative, this can be rearranged to represent an estimate of the distance from the segmentation edge, $d = r - r_s$, as

$$d \approx \frac{I - I_s}{(I_+ - I_-)/(2\Delta)} = \frac{I - I_s}{G},$$

where $I = I(r)$, $I_s = I(r_s)$, $I_+ = I(r + \Delta)$, $I_- = I(r - \Delta)$, $\Delta$ is the distance between voxels, and $G$ denotes the central-difference gradient. Now, by assuming the independence of one voxel from another (i.e., ignoring the covariance), the uncertainty of the boundary at any particular position can be estimated by

$$\sigma_d^2 = \left(\frac{\partial d}{\partial I}\right)^2 \sigma_I^2 + \left(\frac{\partial d}{\partial G}\right)^2 \sigma_G^2 = \frac{\sigma_I^2}{G^2} + \frac{d^2}{G^2}\,\sigma_G^2, \qquad (2)$$

where the first term represents the contribution of uncertainty from the voxel intensities and the second term represents the contribution due to the uncertainty of the gradient. Here $\sigma_d^2$ is the variance of the distance from the segmentation contour corresponding to threshold $T$ (so that $I_s = T$), and $\sigma_I^2$ is the variance of the voxel intensity. The above equation can be further simplified by substituting $I_+$ and $I_-$ for $I$ in (2), averaging the two results, and then using the approximation $\sigma_I \approx \sigma_{I_+} \approx \sigma_{I_-}$, which gives $\sigma_G^2 \approx \sigma_I^2/(2\Delta^2)$. Noting this and using the approximation that, for MLEM-reconstructed images, the variances of the data are proportional to the square of the mean, $\sigma_I^2 \cong (aI)^2$ (Barrett et al., 1994; Schmidtlein et al., 2010), and evaluating at $I = T$, this can be rewritten to give the effect of noise on segmentation as

$$\sigma_d^2 \cong \left(\frac{aT}{G}\right)^2\left(1 + \frac{d^2}{2\Delta^2}\right). \qquad (3)$$

The parameter $a$ can be estimated through direct measurement in a region of uniform uptake, such as the liver. The uncertainties introduced by the physical artifacts (Table 3) and their corrections discussed in this chapter, as well as other factors including biological uncertainty, can in principle be incorporated into the parameters of Eq. (3) to quantitatively model the overall uncertainty of the segmentation, provided that their covariances are not significant. The above formalism ignores the covariance between neighboring voxels. In practice, the covariance can be included and estimated as shown by Buvat (Buvat, 2002), and the variance can be estimated by using a staggered checkerboard pattern (Schmidtlein et al., 2010).
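As a numeric illustration of Eq. (3) in the form reconstructed above, the helper below returns the boundary standard deviation from the noise coefficient a, the threshold T, the edge gradient G, and the voxel spacing; all input values are illustrative.

```python
import numpy as np

def boundary_sigma(a, T, G, dx, d=0.0):
    """Eq. (3): sigma_d^2 ~ (a*T/G)^2 * (1 + d^2 / (2*dx^2)).
    a: relative noise level (sigma_I ~ a*I), T: threshold intensity,
    G: intensity gradient at the edge, dx: voxel spacing, d: distance from edge."""
    return (a * T / G) * np.sqrt(1.0 + d ** 2 / (2.0 * dx ** 2))

# Illustrative values: 10% voxel noise, threshold 2.0 (in SUV units), edge
# gradient 0.5 SUV/mm, 4-mm voxels -> sub-voxel uncertainty at the contour.
print(f"sigma_d = {boundary_sigma(a=0.10, T=2.0, G=0.5, dx=4.0):.2f} mm")
```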

Table 3. Contribution of the various physical PET artifacts to segmentation inaccuracy:
- PET Resolution: Tracer distribution blurring and variation of the segmentation threshold with object size
- Photon Attenuation: AC artifacts can strongly affect relative voxel intensities and therefore the position of the delineation contour
- Photon Scatter: Although slowly varying, photon scatter can affect local intensities due to the effect of the attenuation correction (Fig. 10); the effects of severe over-correction can confound segmentation
- Random Coincidences: Generally negligible effects due to accurate corrections
- Depth of Interaction: Decreased spatial resolution and increased edge uncertainty
- Electronics Dead Time: Can affect voxel intensities and also the scatter fraction estimate
- Image Reconstruction: Parameters of iterative reconstruction (e.g., iteration number, post-reconstruction filters) modify the smoothness of an image, to which segmentation methods are sensitive
- Registration with the Attenuation Image: Attenuation artifacts and scatter artifacts
- Motion: In addition to offsetting the lesion position, motion can cause loss of resolution, attenuation artifacts, and scatter artifacts

Challenges for PET based tumor segmentation
According to Udupa et al. (Udupa et al., 2006), the image segmentation task is a process consisting of two stages: a high-level process to recognize the rough region of interest (ROI) containing the tumor, and a lower-level process to delineate the tumor within that ROI. Although the delineation stage may be based mostly on low-level information (the image intensities), it often requires high-level information and interpretation, including knowledge from areas that are neither present nor reflected in the PET image, namely anatomy, physiology, and pathology. This imposes the need for the segmentation to be performed by a physician expert or by a team of such experts (e.g., a nuclear medicine physician and a radiation oncologist) (MacManus & Hicks, 2008). At the same time, manual-only PET segmentation may lead to large inter- and intra-observer variations, due for example to different intensity windowing during display. This has prompted the development of PET auto-segmentation methods, which are based on the intensities of the image voxels or on properties derived from these intensities. It has been shown that the use of such methods leads to a reduction of inter-observer variability in delineation (van Baardwijk et al., 2007).
A vast variety of PET auto-segmentation methods with different levels of complexity have been developed in the last 15 years. Several recently published reviews present comprehensive lists and classifications of these methods (Boudraa & Zaidi, 2006; Lee, 2010; Zaidi & El Naqa, 2010). A simpler classification can be based in part on the numerical simplicity of the approach and in part on its popularity. In order of increasing complexity (and decreasing popularity), the line-up is as follows: a) methods using a fixed threshold value in terms of intensity or standardized uptake value (SUV), e.g. (Erdi et al., 1997; Mah et al., 2002; Paulino & Johnstone, 2004); b) adaptive threshold methods, e.g. (Erdi et al., 1997; Black et al., 2004; Nehmeh et al., 2009; Nestle et al., 2005; Seuntjens et al., 2011); and c) advanced segmentation methods, which include a large variety of more complex numerical approaches using gradient, statistical (Aristophanous et al., 2007; Hatt et al., 2009; Dewalle-Vignion et al., 2011), region growing (Day et al., 2009), deformable model (Li et al., 2008), and texture analysis methods, as well as other supervised or unsupervised learning methods (Zaidi, 2006; Zaidi & El Naqa, 2010; Belhassen & Zaidi, 2010). A similar classification is adopted by the educational task group on PET auto-segmentation within the American Association of Physicists in Medicine (AAPM TG211). A minimal sketch of a class (a) method is given below.
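As referenced above, here is a minimal sketch of a class (a) fixed-threshold method: threshold at a fixed percentage of the maximum uptake and keep the connected component around a physician-chosen seed. The 40% figure is a commonly quoted choice, and the toy volume only illustrates the mechanics; as discussed next, no single fixed percentage is accurate across lesion sizes, contrasts, and scanners.

```python
import numpy as np
from scipy import ndimage

def fixed_threshold_gtv(pet, seed_ijk, percent=40.0):
    """Class (a) auto-segmentation: threshold at a fixed percentage of the
    maximum uptake, then keep only the 3-D connected component that contains
    the seed voxel chosen inside the lesion."""
    mask = pet >= percent / 100.0 * pet.max()
    labels, _ = ndimage.label(mask)
    return labels == labels[tuple(seed_ijk)]

# Toy volume: a bright cubic "lesion" on a noisy background
rng = np.random.default_rng(1)
pet = rng.normal(1.0, 0.1, size=(32, 32, 32))
pet[12:18, 12:18, 12:18] += 4.0
gtv = fixed_threshold_gtv(pet, seed_ijk=(15, 15, 15))
print(f"segmented volume: {gtv.sum()} voxels (true 216)")
```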
The advantage of the fixed threshold methods is their simplicity and ease of implementation and use. Large discrepancies between some of these methods and volumes visually determined by a physician were found for non-small cell lung cancer (NSCLC) (Nestle et al., 2005) and for head and neck cancer. By design, the adaptive threshold methods should provide better accuracy; however, it is very important to adapt the parameters of each of these methods to the scanner and protocol used in each institution (Fig. 13) and to pay special attention to the phantom data sets used for this process (Lee, 2010). It is known that the fixed and adaptive threshold methods are challenged by irregularly shaped, non-uniform activity distributions (Black et al., 2004; Hatt, Cheze le Rest, van Baardwijk, et al., 2011). Finally, the advanced segmentation tools have been demonstrated to be more accurate and robust to non-uniform activity distributions (Montgomery et al., 2007; Li et al., 2008; Hatt, Cheze Le Rest, Albarghach, et al., 2011); however, their implementation, if not part of a commercial software package, may be significantly more demanding. Although the anatomical, metabolic, and functional contours do not necessarily need to match, using images from different imaging modalities (e.g., CT, PET, and MRI) is beneficial for tumor segmentation (El Naqa et al., 2007; Yu, Caldwell, Mah, & Mozeg, 2009).

Fig. 13. PET segmentation thresholds obtained with different automatic segmentation algorithms that use a fixed threshold, displayed on top of a profile of the activity across a real lesion: FPT, 40% of peak activity; MTS, mean target SUV (Black et al., 2004); BG, background-based method (Nestle et al., 2005).
Despite these developments, the segmentation of PET images remains a challenge, since the quantitative accuracy of PET with respect to the underlying histopathology is not well known. The quantitative accuracy of the PET image is affected by how well the selected tracer identifies the biological target and by the physical factors summarized in this chapter. In addition to the use of an accurate auto-segmentation tool by an experienced physician, both of these problems need to be resolved for each patient before reliable and accurate PET-based tumor delineation can be claimed.

Disclaimer
The material in this chapter is not to be used as a substitute for medical advice, diagnosis, or treatment of any health condition or problem. Radiation therapy and planning and PET/CT imaging should only be undertaken by qualified individuals. Your facility's installation and set-up may differ from our experience, and the ideas presented in this chapter are for discussion purposes only. You should not rely on the material presented here without independent evaluation and verification. Follow safety procedures and the instructions of medical equipment manufacturers. Medical physics treatment practices change frequently, and therefore the information contained in this chapter may be outdated, incomplete, or incorrect.