Open access

Image Processing Methods for Automatic Cell Counting In Vivo or In Situ Using 3D Confocal Microscopy

Written By

Manuel G. Forero and Alicia Hidalgo

Submitted: 24 November 2010 Published: 23 August 2011

DOI: 10.5772/23147

From the Edited Volume

Advanced Biomedical Engineering

Edited by Gaetano D. Gargiulo and Alistair McEwan


1. Introduction

Image processing methods have opened the opportunity to extract quantitative information from confocal microscopy images of biological samples, dramatically increasing the range of questions that can be addressed experimentally in biology. Biologists aim to understand how cells behave and what genes do to build a normal animal, and what goes wrong in disease or upon injury. For this, they look at how alterations in gene function and application of drugs affect tissue, organ or whole body integrity, using confocal microscopy images of samples stained with cell-specific markers. Image-processing methods have enormous potential to extract information from this kind of sample, but surprisingly, they are still relatively underexploited. One useful parameter to quantify is cell number. Cell number is the balance between cell division and cell death; it is controlled tightly during growth and it can be altered in disease, most notoriously in neurodegeneration and cancer. Injury (e.g. spinal cord injury) results in an increase in cell death, plus a homeostatic regulation of cell proliferation. Thus, to understand normal animal development, injury responses and disease, it is important to find out how many cells die or divide, or how many cells of a given type there are in an organ. Generally, cells are counted using automated methods after dissociating cells from a tissue (e.g. based on fluorescence-activated cell sorting, FACS), or when they are distributed in a dish in cell culture experiments, using image processing techniques in 2D (e.g. using Metamorph software). However, these approaches alter the normal cellular contexts, and the procedures themselves can alter the relative numbers of cells. To maintain information relevant to how genes and cells behave in the organism, it is best to count cells in vivo (i.e. in the intact animal) or at least in an entire organ or tissue (i.e. in situ). Counting in vivo or in situ is generally carried out manually, or it consists of estimates of the number of cells stained with a particular cell marker or inferences from anatomical alterations. These methods can be extremely time-consuming, estimates can be inaccurate, and the questions that can be addressed using these methods are limited. Manual counting can be experimentally cumbersome, tedious, labour intensive and error prone. The advent of confocal microscopy, which allows the capture of 3D images, has enabled the development of automatic and semi-automatic image processing methods to count cells in whole tissues or entire small animals. Whereas excellent automated methods can be purchased commercially and are widely used to count cells after dissociation or in cell culture, fewer methods have been developed to count cells in situ or in vivo. Such methods are challenging, as they require large stacks of images to capture the whole sample, and can encounter greater difficulty in distinguishing labelled cells from background signal. Some automatic techniques have been developed to segment cell nuclei from mammalian tissue sections or from whole Drosophila brains in 2D and 3D images (Lin et al., 2003; Shimada et al., 2005; Wählby 2003; Wählby et al., 2004), but they are not useful to analyse large sample sizes because the intensive computation slows down the process. Identifying all the nuclei is extremely challenging from the point of view of imaging because cells can be tightly packed.
In any case, counting all nuclei is not always the most informative approach, as it does not report cell type (is the number of neurons or glia altered?) or cell state (do the changes affect dividing or dying cells?). CellProfiler (Carpenter, 2006) enables combinations of image-processing methods that can be used to count cells, but it is not very user friendly for most biologists, as it requires computational expertise.

We have developed a range of publicly available methods that can count the number of dividing or dying cells, neurons or glia, in intact specimens of fruit-fly Drosophila embryos (Forero et al, 2009, 2010, 2010a). Quantification is automatic, accurate, objective and fast, enabling reliable comparisons of multiple specimens of diverse genotypes. Additionally, results are reproducible: automatic programs perform consistently and always yield the same cell count for a given sample, regardless of the number of times it is counted. Drosophila is a powerful model organism generally used to investigate gene function, developmental processes and model human diseases. The ability to work in vivo or in situ is one of the main reasons for using Drosophila as a model organism. Using Drosophila, researchers have investigated the number of dying cells, glial cells, and progeny cells in a neuroblast lineage, or the number of cells within mosaic cell clones (Maurange et al 2008; Bello et al, 2006, 2008; Rogulja-Ortmann et al. 2007; Franzdottir et al. 2009; Ho et al. 2009). Our methods can be used to automate these quantitative analyses. Although our image processing methods were developed from Drosophila images, they can be adapted to work on other sample types (e.g. mammalian tissues).

The identification and counting of cells is a difficult task both for the human eye and for image processing: i) most often, cell visualisation with immunohistochemical markers results in background signal (i.e. spots) as well as the signal corresponding to the cells; ii) there is also natural variability within biological samples, as cell size and shape can vary; iii) if a marker detects abundant cells, they can be tightly packed and it can be difficult to determine the boundaries between adjacent cells; and iv) the properties of the detector, the fluorescence settings and the lasers can also introduce error (Dima et al., 2002). As a result, it can be difficult to decide what is a cell and what is not. Consequently, manual counting is extremely error prone. Image processing methods are ideal for objective quantifications, since once a good method has been established to identify the objects, all samples are treated in the same way, thus eliminating subjective error. When analysing cell counts in whole organisms (i.e. Drosophila embryos), tissues or organs, it is not appropriate to use projections of a stack of images into a single 2D image, since this occludes cells and forms tight clusters, rendering it impossible to separate the individual cells. In vivo quantification requires object recognition in 3D, which is achievable using confocal microscopy.

In this chapter, we review the most relevant steps to be considered in the development of automatic methods to segment and count cells in 3D for in-situ or in vivo preparations. The principles described will enable researchers of multiple disciplines to apply the logic of image processing to modify the currently available programs making them applicable to their own samples and research questions, as well as help them make further developments. For two complementary reviews of image processing techniques and a description of some of the existing software employed to analyse biology samples, please see (Meijering & Cappellen, 2007) and (Peng, 2008).

2. Methodology

Counting cells in Drosophila is a complex task, due to variability in image quality resulting from different cell markers. Cells are segmented according to their characteristics, but cell shape changes with cell state (i.e. arrest, mitosis, or apoptosis). For instance, during mitosis the shape is irregular and it can be difficult to determine when a dividing cell should be considered as two daughter cells. Nuclei and glial cells have a more regular shape, between elliptical and circular. Apoptotic cells initially have a very irregular shape, later becoming very round, and can appear subdivided into different parts depending on the timing within apoptosis. Depending on the kind of cells or cell state to be visualised, a different cell marker (i.e. antibody) is employed. As a result, different image-processing methods must be developed to quantify cells of different qualities.

2.1. Visualisation of distinct cell types and states using immunohistochemistry

Cells to be counted in Drosophila embryos were visualised with immunohistochemistry, using antibodies as follows (Figure 1). (1) Dying (apoptotic) cells were stained with anti-cleaved-Caspase-3 (hereafter called Caspase) (Figure 1a), a widely used marker for apoptotic cells. The protein Caspase-3 is evolutionarily conserved. The commercially available antibodies that we have used (Caspase-3, Cell Signalling Technology) cross-react with a wide range of species, including Drosophila. Caspase is initially cytoplasmic and, as apoptosis progresses, it reveals intense, round, shrunken cells. Organisms stained with Caspase yield images with cells of irregular shape and size, low signal intensity and high-intensity background. (2) Dividing (mitotic) cells were stained with anti-pHistone-H3 (hereafter called pH3, Figure 1b). pH3 labels the phosphorylated state of the evolutionarily conserved Histone-H3 characteristic of M-phase (mitosis) of the cell cycle. The commercially available antibodies we used (Upstate Biotechnology) work well in a wide range of species. The embryonic nuclei stained with pH3 are sparsely distributed and do not tend to overlap or form large clusters. As pH3 stains chromosomes, shape can be irregular. Nuclei can appear connected and must be separated. (3) Glial cell nuclei were stained with anti-Repo (hereafter called Repo) (Figure 1c). Repo (Developmental Studies Hybridoma Bank, Iowa) is the general nuclear marker for all glial cells, except the midline glia, in Drosophila. Nuclei stained with Repo tend to be rather regular. pH3 and Repo antibodies yield high signal intensity and low background, and stain nuclei that are relatively sparsely distributed in the organism. (4) Neuronal nuclei were stained with anti-HB9 (hereafter called HB9, gift of H. Broihier) in embryos (Figure 1d). Pan-neuronal anti-Elav does not consistently yield stainings of comparable quality, and visualising all nuclei compromises resolution during object identification. Thus, a compromise solution is to use HB9, which stains a large subset of interneurons and all motor neurons with strong signal and low background.

Figure 1.

Drosophila embryos labelled with: (a) Anti-cleaved-Caspase-3 to visualise apoptotic cells. (b) Anti-p-Histone-H3 to visualise mitotic cells. (c) Anti-Repo to visualise glial cells. (d) Anti-HB9 to visualise a subset of neuronal nuclei. A fraction of the ventral nerve cord is shown in each case; all images show single confocal optical sections.

Whole embryos were dechorionated in bleach, then fixed in 4% formaldehyde in phosphate buffer (PBS) for 20 minutes at room temperature, washed in PBS with 0.1% Triton-X100 (Sigma) and stained following standard protocols (Rothwell and Sullivan, 2000). Embryos were incubated in diluted primary antibodies overnight at 4°C and the following day in secondary antibodies for 2 hours at room temperature. Antibodies were diluted in PBS 0.1% Triton as follows: (1) rabbit anti-cleaved-Caspase-3 at 1:50; (2) guinea-pig anti-HB9 at 1:1000; (3) mouse anti-Repo at 1:100; (4) rabbit anti-phospho-Histone-H3 at 1:300. Secondary antibodies were directly conjugated to Alexa-488 and used at 1:250. Anti-Caspase had a tendency to yield high background, and different batches produced by Upstate Biotechnology had different staining qualities. Thus each new batch had to be optimised. To reduce background, embryos were first blocked in 1% Bovine Serum Albumin (BSA, Sigma) and incubated in very small volumes (10 microliters worth of embryos in a 50-100 microliter volume of diluted antibody), and the antibody was not reused. Signal amplification was not used (i.e. no avidin), since this raised the Caspase background considerably. All other antibodies were more robust and worked well using standard conditions, and antibody aliquots were reused multiple times. Samples were mounted in Vectashield (Vector Labs) or 70% glycerol. Mounted whole embryos were scanned using BioRad Radiance 2000 or Leica TCS-SP2-AOBS laser scanning confocal microscopes. The settings at the confocal microscope were fixed for all samples, and acquisition was set to ensure that the dynamic range of the histogram covered all grey values. The conditions for scanning were: 60x lens, no zoom, 0.5μm slice step, acquisition resolution of 512 x 512 pixels, no averaging. Fixed iris (pinhole = 1), laser intensity, gain and offset were maintained throughout all samples of the same experiment. Software algorithms were developed and evaluated using Java and ImageJ under an Ubuntu Linux platform on a Pentium 4 PC running at 3 GHz with 1.5 GB of RAM.

2.2. Development

Most published techniques segment and count cells in two dimensions. With the appearance of confocal microscopes, which make it possible to visualise cells plane by plane in 3D, new techniques have been developed to count them in 3D.

In general, the automatic and semiautomatic techniques developed to count cells follow these steps (a minimal pipeline sketch is given after the list):

  • Acquisition.

  • Filtering for noise reduction.

  • Segmentation.

  • Post processing, including morphological filtering and separation of cells.

  • Classification.
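
To make these steps concrete, the sketch below (in Python with NumPy/SciPy/scikit-image, whereas our own implementations are ImageJ/Java plugins) strings them together in their simplest form; the function and parameter choices (median window, Otsu threshold, min_volume) are illustrative assumptions rather than the marker-specific methods developed later in this chapter:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def count_cells(stack, min_volume):
    """stack: 3D array (z, y, x) of grey levels; returns the number of cells."""
    # Filtering: per-slice median filter to reduce Poisson noise.
    filtered = np.stack([ndi.median_filter(s, size=3) for s in stack])
    # Segmentation: one threshold per slice (Otsu as a stand-in; Section 2.5
    # develops marker-specific thresholds).
    binary = np.stack([s > threshold_otsu(s) for s in filtered])
    # Post-processing: 3D morphological clean-up (cell separation omitted here).
    binary = ndi.binary_opening(binary)
    # Classification: label 3D candidate objects and keep those large enough.
    labels, n = ndi.label(binary)
    volumes = np.bincount(labels.ravel())[1:]  # voxels per labelled object
    return int(np.sum(volumes >= min_volume))
```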

2.3. Acquisition

The acquisition protocol is a very important step. If the quality of the images is poor or changes strongly from one stack to another, the development of an automatic counting method becomes challenging. Even for a given experiment where all samples are labelled with the same cell marker and fluorophore, there can be considerable variability in the quality of the images, and if the quality is bad it can even become impossible for an experienced biologist to identify the cells reliably. Therefore, several parameters must be optimised experimentally, such as those relating to the treatment of samples (e.g. fixative, detergent, dilutions of antibodies, incubation period, etc.) and the acquisition (e.g. laser intensity, filters, gain and offset of the amplifiers, magnification, etc.). Once the best quality of images is obtained, all of these parameters must be fixed, and samples that do not produce images of adequate quality should be rejected.

3D image processing techniques can be used to improve the quality of segmentation. This is important when the signal-to-noise ratio is low, given that some spots can be considered noise in a 2D image but recognised as true particles in 3D (Gué, 2005). To work in 3D, other techniques should be considered before filtering. In fluorescence confocal microscopy, signal intensity decreases with tissue thickness. Thus, 3D techniques frequently apply an intensity correction. One of the simplest techniques employs the maxima or the average of the foreground in each image to construct a function of the intensity attenuation, and the inverse function is used to compensate for the intensity loss (Adiga, 2000; Lin, 2003; Wählby, 2004). However, the result is not always satisfactory, especially when the background or the foreground changes abruptly or the background has some complexity, making it difficult to define the foreground automatically. This is a common issue in Drosophila samples. More complex techniques can also be used, although they are time-consuming (Conchello, 1995; Guan, 2008; Kervrann, 2004; Rodenacker, 2001; Roerdink, 1993; Wu, 2005) or require complex acquisition (Can, 2003).
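
A minimal sketch of the simple attenuation correction just described, under the assumption that the foreground can be defined by a single global Otsu threshold (which, as noted, is often hard to do automatically in Drosophila samples):

```python
import numpy as np
from skimage.filters import threshold_otsu

def correct_attenuation(stack):
    """Rescale each slice by the inverse of the per-slice foreground average."""
    t = threshold_otsu(stack)
    profile = np.array([s[s > t].mean() if (s > t).any() else np.nan
                        for s in stack])
    profile = profile / np.nanmax(profile)      # attenuation relative to the brightest slice
    profile = np.nan_to_num(profile, nan=1.0)   # leave slices without foreground untouched
    return stack / profile[:, None, None]       # the inverse function compensates the loss
```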

Images are also degraded by out-of-focus blur, albeit to a lesser degree than with epi-fluorescence. The Z resolution is lower than in the X-Y plane, which affects the results of 3D segmentation techniques. De-blurring and restoration techniques, which both improve image definition and reduce noise, should be considered before applying 3D segmentation techniques. Some of these methods are based on knowledge of the Point Spread Function (PSF), or are blind when the PSF is unknown. The Richardson-Lucy (Richardson, 1972; Lucy, 1974) and the Tikhonov deconvolution methods are two of the best known; others include maximum likelihood estimation, Wiener filtering and wavelets (see review by Sarder & Nehorai, 2006). Deconvolution methods can achieve very good results, but at the expense of a very high computational cost. However, if a suitable segmentation technique is used to process each image based only on its own properties, an intensity correction procedure can be avoided. Given such complexity and pitfalls, techniques have been developed to take the alternative route of avoiding these steps. Accordingly, images are filtered and segmented in 2D, and 3D techniques are only applied once the intensity of the cells is no longer relevant, i.e. after the images have been segmented, thus gaining speed in the process.

2.4. Filtering

3D restoration methods improve the quality of the images by reducing noise. When these methods are not employed, other noise reduction techniques must be used. In confocal microscopy images, noise follows a Poisson distribution, as image acquisition is based on photon emission. Given that the number of photons produced is very small, statistical variation in the number of detected photons is the most important source of noise. Although some researchers employ linear filters like the Gaussian operator to reduce noise in confocal microscopy (Wählby, 2004; Fernandez, 2010), they are not the most appropriate for reducing Poisson noise, which is signal dependent. Additionally, the use of linear filters results in a lower definition of the cell borders, making it more difficult to distinguish cells, especially when they are tightly packed. In the Poisson distribution the mean and variance are not independent. Therefore, variance stabilising transformations (VST), like the Anscombe (Anscombe, 1948) and Freeman-Tukey (Freeman & Tukey, 1950) transforms, which approximately transform a random variable following a Poisson distribution into a Gaussian one, can be applied (Kervrann, 2004a) before the use of a linear filter.

Bar-Lev and Enis (Bar-Lev & Enis, 1988) developed a method for obtaining a class of variance stabilising transformations, which includes the Anscombe and Freeman-Tukey transforms. In this case, images are transformed, then filtered using a linear operator, and the inverse transform is applied before segmentation. However, these transforms have an important limitation, as they are not useful when the number of counts or photons per pixel is lower than about 20 (Starck, 2002). Furthermore, poor results are also associated with the inverse transformation (Makitalo & Foi, 2011). New efforts have been made to improve these two aspects (Foi, 2008, 2009; Makitalo & Foi, 2011, 2011a), but these developments have not been tested for cell counting in confocal microscopy samples. Other models based on the analysis of the acquisition system have also been proposed (Calapez & Rosa, 2010).
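
As an illustration of the transform-filter-invert scheme just described, the sketch below applies the Anscombe transform, a Gaussian filter and the naive algebraic inverse; as noted above, this inverse is biased at low counts, which is what Makitalo & Foi (2011) address:

```python
import numpy as np
from scipy import ndimage as ndi

def anscombe(x):
    # Maps Poisson data to approximately unit-variance Gaussian data.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(z):
    # Naive algebraic inverse; biased for low photon counts.
    return (z / 2.0) ** 2 - 3.0 / 8.0

def vst_gaussian_filter(image, sigma=1.0):
    stabilised = anscombe(image.astype(float))
    smoothed = ndi.gaussian_filter(stabilised, sigma)
    return inverse_anscombe(smoothed)
```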

Given the nature of the noise, non-linear filters are more appropriate. These filters in general reduce the noise and the significant intensity heterogeneity typical of confocal images without strongly affecting the signal provided by the stained cells. The median filter is one of the simplest methods, and we found that it provides good results (Forero et al, 2009, 2010, 2010a). Many other median filter variations can also be employed, although they can require a more exhaustive and time-consuming calculation, and some parameters to be fixed (Mitra, 2001; Forero & Delgado, 2003). Outlier filters can also serve to eliminate noise while keeping the edges in the image. In this kind of filter, the value of each pixel p is replaced by the mean or the median of the pixels included in a window centered on p, if the original value of p is further from the mean or the median than a threshold t defined by the user. Noise reduction techniques based on wavelets are also employed to filter confocal images; they can yield good results with an appropriate bank of filters. Other edge-preserving methods like bilateral filters (Tomasi & Manduchi, 1998) can also be employed (Shen, 2009; Rodrigues, 2008). 3D filters have also been used, but the computational cost is higher and results can be affected by the difference in resolution between the x-y plane and the z-axis. 2D versions of the 3D restoration methods mentioned above can also be employed, but unfortunately they are still time-consuming.
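
A sketch of the outlier filter just described, using the median variant; the window size and threshold t are illustrative values that the user must tune:

```python
import numpy as np
from scipy import ndimage as ndi

def outlier_filter(image, size=3, t=30):
    """Replace a pixel by the median of its window only if it deviates by more than t."""
    med = ndi.median_filter(image, size=size)
    deviates = np.abs(image.astype(int) - med.astype(int)) > t
    out = image.copy()
    out[deviates] = med[deviates]   # edges are preserved; isolated outliers are removed
    return out
```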

In addition to the Poisson noise filters, other filters may be required to eliminate noise specific to the kind of images being processed. For example, signal intensity is heterogeneous in HB9-labelled nuclei, and the image background is characterised by extremely small spots or particles of very high intensity. To eliminate these small spots and render signal intensity uniform, a grey-scale morphological opening with a circular structural element of radius r, larger than the typical radius of the spots, is applied to each slice of the stack. As a generalisation, particles of any particular size can be eliminated by morphological granulometry. In this way, the granulometry defined as

$$G = \mathrm{Open}(r_{\min}) - \mathrm{Open}(r_{\max}) \qquad (1)$$

is used to eliminate particles of radius between $r_{\min}$ and $r_{\max}$: the difference of openings extracts exactly this band of particles, which can then be subtracted from the image.
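
For instance, with scikit-image the per-slice opening and the granulometry of Equation (1) can be sketched as follows (the radii are assumptions to be set from the typical spot and cell sizes):

```python
from skimage.morphology import opening, disk

def remove_small_spots(slice_2d, r):
    # Grey-scale opening with a disc larger than the typical spot radius
    # flattens the bright spots and evens out the signal.
    return opening(slice_2d, disk(r))

def granulometry_band(slice_2d, r_min, r_max):
    # Equation (1): the difference of two openings extracts the bright
    # particles whose radius lies between r_min and r_max.
    small = opening(slice_2d, disk(r_min)).astype(int)
    large = opening(slice_2d, disk(r_max)).astype(int)
    return small - large
```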

Another morphological noise reduction technique, the alternating sequential filter (ASF), has also been used to reduce noise in confocal images (Fernandez, 2010). This filter removes particles starting from the smallest ones and moving toward the largest ones by applying an alternating succession of morphological opening and closing operations with structural elements of progressively larger size (Sternberg, 1986; Serra, 1988).

2.5. Segmentation

After filtering, segmentation is carried out. Segmentation is a procedure that subdivides the image into disjoint regions or classes in order to identify the structures or objects of interest appearing in the image. These structures can basically be identified by their similarity or their discontinuity. On the one hand, the edges or contours of the objects of interest are detected by searching for local discontinuities in the intensity of the grey levels of the image. On the other hand, the objects can be extracted by searching for homogeneous areas in the grey-level values. Thresholding techniques allow the pixels of the image to be separated into background and foreground. In the simplest case, bilevel thresholding or binarisation, the pixels take only two possible grey levels. The objects in the foreground are considered to belong to only one class and are separated from the background by choosing an optimum threshold grey level t, in the interval [0, L], where L is the maximum grey level in the image, based on certain criteria. Mathematically, binarisation is a transformation that converts an image represented by the function q(x, y) into the image r(x, y) given by:

$$r(x,y) = \begin{cases} 1 & \text{if } q(x,y) > t \\ 0 & \text{if } q(x,y) \leq t \end{cases} \qquad (2)$$

where (x, y) represent the position of each pixel in the image.

A third kind of method to segment cells in confocal microscopy consists of the use of active contour models. In their original description as snakes (Kass et al., 1988), active contours were seen as a dynamic elastic band, located outside or inside the objects to be segmented, which finds the borders of the objects by contraction or expansion. The snakes look for the borders by minimising the energy of the band, using the gradient of the image as one of the parameters to calculate the energy. This technique is very sensitive to noise and to initialisation (i.e. where the band is initially located), and several methods have been developed to overcome the limitations of finding a good initialisation and of segmenting nuclei (Clocksin, 2003; Chan et al., 2000; Chan & Vese, 2001; Osher & Sethian, 1988), using level sets (Cheng, 2009).

As cell borders are fuzzy, we preferred thresholding to edge detection methods for segmentation. Depending on the intensity variation of the cells through each image, local or global thresholding can be employed. An alternative consists of using more than one global threshold (Long et al., 2007). Long et al. calculate a first threshold, and cells detected above that threshold are segmented and counted. Then the regions where the cells have been counted are ignored and a new threshold is calculated. This second threshold is lower than the first one and allows cells of lower intensity to be detected. These new cells are then also processed and counted.

Due to fluorescence attenuation through the stack of images, cells are more clearly seen in the first slices and for this reason using only one threshold to binarise the whole stack is not appropriate. Instead, a threshold value is found for each image. The method chosen to find the threshold t is critical and varies with the marker employed to label the cells or nuclei and the characteristics of the resulting images. Thus a different binarisation method was developed for each cell marker.

2.5.1. Neuronal nuclei

The method employed to binarise images depends on the characteristics of the distribution of the intensities of the objects and background in the images, which can be studied through the histogram. One of the most popular thresholding methods, Otsu's, works especially well when the typical histogram of the images is bimodal, with Gaussian-shaped modes. It also works well in highly contrasted images, where there is a strong intensity difference between foreground and background. This was the case for nuclei labelled with HB9 antibodies, and therefore this was the method employed to binarise such images (Forero et al, 2010). A frequent case to be considered when working with stacks is that of images containing only background and no cells or nuclei. Although a very low threshold can be found for such images, it would yield false nuclei. To solve this problem, low thresholds are not taken into account when the maximum intensity of an image is lower than a quarter of the maximum grey level, or when the threshold is lower than 20, a value found empirically that corresponds to the highest typical intensity of the background. In these cases, images are binarised using the last valid threshold obtained in a previous image of the stack. If a very low threshold is found in the first image of the stack, the threshold takes the value of the maximum grey level and the binarised image becomes black. The resulting binarised images are employed as masks and combined, using a logic AND operation, with the images resulting from the opening operation, to produce images where the background becomes black (grey level zero) and the intensities of the foreground remain unmodified. For further details, see (Forero et al, 2010).
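
The per-slice thresholding with the fallback rules just described can be sketched as follows, assuming 8-bit slices (L = 255):

```python
import numpy as np
from skimage.filters import threshold_otsu

L = 255       # maximum grey level (8-bit images assumed)
T_MIN = 20    # empirical upper bound of the background intensity (see text)

def threshold_stack(stack):
    masks, last_valid = [], None
    for s in stack:
        t = threshold_otsu(s)
        if s.max() < L / 4 or t < T_MIN:   # slice holds background only
            t = last_valid if last_valid is not None else L  # L -> black mask
        else:
            last_valid = t
        masks.append(s > t)
    return np.stack(masks)

# The masks are then ANDed with the opened images, so that the background
# becomes zero while foreground intensities remain unmodified:
#   masked = np.where(threshold_stack(opened), opened, 0)
```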

2.5.2. Apoptotic cells

The typical histogram h(q), where q is the grey level intensity, of median-filtered Caspase images is composed of two modes, the first corresponding to the background and the second to the sample. There is no third mode belonging to the apoptotic cells, due to the very small number of pixels they occupy. In some Caspase images the background is so low that its mode disappears; the histogram then becomes unimodal and the images include only the sample.

The following thresholding method was developed. The shape of the second mode, corresponding to the sample, can be roughly approximated by a Gaussian function G(q), and the pixels belonging to the Caspase cells are considered outliers. The highest local maximum of the histogram serves to identify the sample mode. To identify the outliers, assuming the sample's pixel grey-level intensities are normally distributed, the Gaussian function Gb(q) that best fits the shape of the sample's mode is found. This is achieved by minimising the square error between the histogram h(q), in the interval corresponding to the mode, and G(q), that is

$$G_b(q) = \arg\min_{q_{\min} < q_c < q_{\max}} \operatorname{error}(q_c) \qquad (3)$$

where

$$\operatorname{error}(q_c) = \sum_{q=q_c}^{q_{\max}} \left[ G(q) - h(q) \right]^2 \qquad (4)$$

and

$$G(q) = e^{-\frac{\left(q - \mu(q_c)\right)^2}{2\sigma(q_c)^2}} \qquad (5)$$

$\mu(q_c)$ and $\sigma(q_c)$ are the mean and standard deviation of the mode respectively, calculated in the interval $[q_c, q_{\max}]$ and given by

$$\mu(q_c) = \frac{\sum_{q=q_c}^{q_{\max}} h(q)\, q}{\sum_{q=q_c}^{q_{\max}} h(q)}, \qquad \sigma(q_c) = \sqrt{\frac{\sum_{q=q_c}^{q_{\max}} h(q)\,(q-\mu)^2}{\sum_{q=q_c}^{q_{\max}} h(q)}} \qquad (6)$$

$q_c$ is a cut-off value given by the global minimum between the first and the second modes, if the histogram is bimodal, or by the first local minimum of the histogram, if it is unimodal; $q_{\max}$ is the maximum grey level of the histogram. The threshold is obtained from the standard score (z-score), which rejects the outliers of the Gaussian function. The z-score is given by

$$z = \frac{q - \mu_b}{\sigma_b} \qquad (7)$$

where $\mu_b$ and $\sigma_b$ are the mean and standard deviation of the best-fitting Gaussian function respectively, and q is the pixel intensity. A grey level is considered an outlier if $z \geq 3$; therefore the threshold t is given by

$$t = \mu_b + 3\sigma_b \qquad (8)$$
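
A sketch of this fitting procedure, assuming 8-bit images; the candidate cut-offs q_c are scanned between the modes, and the Gaussian amplitude is scaled to the peak of the fitted region (an assumption, since Equation (5) leaves the amplitude implicit):

```python
import numpy as np

def caspase_threshold(image, qc_candidates):
    """qc_candidates: candidate cut-off grey levels between the two modes."""
    h, _ = np.histogram(image, bins=256, range=(0, 256))
    q = np.arange(256)
    best_err, mu_b, sigma_b = np.inf, 0.0, 1.0
    for qc in qc_candidates:
        w = h[qc:]                                     # histogram over [q_c, q_max]
        if w.sum() == 0:
            continue
        mu = (w * q[qc:]).sum() / w.sum()              # Equation (6)
        sigma = np.sqrt((w * (q[qc:] - mu) ** 2).sum() / w.sum())
        if sigma == 0:
            continue
        g = w.max() * np.exp(-((q[qc:] - mu) ** 2) / (2 * sigma ** 2))
        err = ((g - w) ** 2).sum()                     # Equation (4)
        if err < best_err:
            best_err, mu_b, sigma_b = err, mu, sigma
    return mu_b + 3.0 * sigma_b                        # Equation (8): z >= 3
```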

2.5.3. Mitotic and glial cells

In images stained with either pH3 or Repo in Drosophila embryos, the mode corresponding to the cells is almost imperceptible, due to the small number of cell pixels compared to the number of background pixels. Given the low number of foreground pixels, the histogram can be considered unimodal. To binarise unimodal images, rather than using conventional thresholding techniques, we assumed that the background follows a Gaussian distribution G(q) and considered the stained cells outliers. To identify the best Gaussian function, we minimised the square error between G(q) and the histogram h(q) in the interval corresponding to the mode, and obtained the threshold

$$t = \mu_b + 3\sigma_b \qquad (9)$$

following the same procedure employed to threshold apoptotic cells explained before.

2.6. Post-processing

After segmentation, or in parallel with it, other methods can be applied to reduce remaining noise, to separate abutting cells and to recover the original shape of the objects before classification. Which method is used will depend on the objects to be discriminated.

2.6.1. Filtering

Some raw Caspase images have small spots of high intensity, which can be confused with cells in later steps of the process. To eliminate these spots without affecting the thresholding technique (if the spot filter were applied before thresholding, the histogram would be modified, affecting the result), the raw images are filtered in parallel and the result is combined with the thresholding outcome. If a square window, of side greater than the diameter of a typical spot but smaller than the diameter of a cell, is centered on a cell, the mean of the pixel intensities inside the window should be close to the value of the central pixel. If the window is centered on a spot, the mean should be considerably lower than the intensity of the central pixel. To eliminate the spots, a mobile window W is centered on each pixel. Let p(x,y) and s(x,y) be the original input image and the resulting filtered image respectively, and m(x,y) the average of the intensities inside the window centered on (x,y). If m(x,y) is lower than a certain proportion α of the central pixel's intensity, the pixel becomes black; otherwise it retains its intensity. That is

$$s(x,y) = \begin{cases} 0 & \text{if } m(x,y) < \alpha\, p(x,y) \\ p(x,y) & \text{if } m(x,y) \geq \alpha\, p(x,y) \end{cases} \qquad (10)$$

where

$$m(x,y) = \frac{1}{N_W} \sum_{(x',y') \in W} p(x',y') \qquad (11)$$

and $N_W$ is the number of pixels in the window W.

After thresholding, cells and small spots appear white, while after spot filtering the spots appear black. The result from both images is combined using the following expression:

$$q(x,y) = \begin{cases} 0 & \text{if } \min\left[t(x,y), s(x,y)\right] = 0 \\ 1 & \text{if } \min\left[t(x,y), s(x,y)\right] > 0 \end{cases} \qquad (12)$$

where q(x, y) is the resulting image and t(x, y) the image resulting from thresholding.

The combination of filtering and thresholding results in separating candidate objects (Caspase-positive cells) from background. The spot filter also separates cells that appear very close in the z-axis.
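
A sketch of this spot filter and of the combination of Equations (10)-(12); the window size and proportion α are illustrative values constrained only by the typical spot and cell diameters:

```python
import numpy as np
from scipy import ndimage as ndi

def spot_filter(p, window=5, alpha=0.5):
    m = ndi.uniform_filter(p.astype(float), size=window)   # window mean, Eq. (11)
    return np.where(m < alpha * p, 0, p)                   # Eq. (10): spots become black

def combine(t_img, s_img):
    # Eq. (12): a pixel survives only if both the thresholded image and the
    # spot-filtered image are non-zero there.
    return (np.minimum(t_img, s_img) > 0).astype(np.uint8)
```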

To render the Caspase-positive cells more similar in appearance to the original raw images, three-dimensional morphological operations are then performed throughout the whole stack. Firstly, morphological closing followed by opening is applied to further remove noise and refine the candidate structures. Secondly, holes within objects are filled with foreground colour, after verifying that each hole is completely surrounded by foreground pixels.

2.6.2. Cell separation

Cells that appear connected must be separated. This is most challenging. Several automatic and semi-automatic methods deal with the problem of how to separate cells within clusters in order to recognise each cell. Initially, some seeds or points identifying each cell are found. A seed is a small part of the cell, not connected to any other, that can be used to mark it. If more than one seed is found per cell, the cell will be subdivided (i.e. over-segmentation), but if no seed is found the cell will not be recognised. In some semiautomatic methods, seeds are marked by hand. Several methods have been proposed to identify only one seed per cell, avoiding over-segmentation. The simplest consists of a seeding procedure developed during the preparation of the samples to avoid overlaps between nuclei (Yu et al., 2009). More practical approaches involve morphological filters (Vincent, 1993) or clustering methods (Clocksin, 2003; Svensson, 2007). Watershed-based algorithms are frequently employed for contour detection and cell segmentation (Beucher & Lantuejoul, 1979; Vincent & Soille, 1991), some employing different distance functions to separate the objects (Lockett & Herman, 1994; Malpica, 1997). In this way, cells are separated by defining the watershed lines between them. Hodneland et al. (Hodneland, 2009) employed a topographical distance function, and Svensson (Svensson, 2007) presented a method to decompose 3D fuzzy objects, where the seeds are detected as the peaks of the fuzzy distance transform. These seeds are then used as references to initiate a watershed procedure. Level set functions have been combined with watershed in order to reduce over-segmentation and render the watershed lines more regular. In the method developed by Yu et al. (Yu et al., 2009), the dynamic watershed is constrained by topological dependence in order to avoid merged and split cell segments. Hodneland et al. (Hodneland, 2009) also combine level set functions and watershed segmentation in order to segment cells, with the seeds created by adaptive thresholding and iterative filling. Li et al. propose a different approach, based on gradient flow tracking (Li et al. 2007, 2008). These procedures can produce good results in 2D, although they are generally time-consuming. They do not provide good results if the resolution of the images is low and the borders between the cells are imperceptible.

Watershed and h-domes are two morphological techniques commonly used to separate cells. These two techniques are better understood if 2D images or 3D stacks are seen as a topographical relief. In the 2D case, the height at each point is given by the intensity of the pixel at that position, and the cells are viewed as light peaks or domes separated by dark valleys (Vincent, 1993). The basic idea behind the watershed consists of imagining a flooding of the image, where the water starts to flow from the lowest points of the image. The edges between the regions of the image tend to be placed on the watershed lines. Frequently, the watershed is applied to the gradient of the image, so that the watershed lines are located on the crests, i.e. at the highest values. Watershed and dome techniques are also applied to distance images. In this case, each pixel or voxel of an object takes the value of the minimum distance to the background, and the highest distance corresponds to the point furthest from the borders. The cells are again localised at the domes of the mountains, while the watershed is used to find the lowest points in the valleys, which are used to separate the mountains, i.e. the cells (Malpica et al., 1997). In this way, the watershed can be used to divide joined objects, using the inverse of the distance transformation and flooding the mountains starting from the inverted domes, which are used as seeds or points from where the flooding begins. The ultimate eroded points and the points resulting from a top-hat transformation can also be used as seeds in several watershed procedures.
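
A sketch of this distance-based separation with scikit-image: the domes of the distance map seed the flooding of its inversion, and the watershed lines split the touching cells (the dome height h is an empirical assumption):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def separate_touching(binary):
    """binary: 2D or 3D foreground mask; returns one label per separated cell."""
    dist = ndi.distance_transform_edt(binary)   # distance to the background
    seeds, _ = ndi.label(h_maxima(dist, 2))     # one seed per dome of height >= 2
    return watershed(-dist, markers=seeds, mask=binary)
```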

2.6.2.1. Apoptotic cells

The solution to the cell separation problem depends on the shape of the cells and how close they are. Apoptotic cells, for example, do not appear very close, although it is possible to find some abutting one another. They can also have a very irregular shape and can appear subdivided. Therefore, we reached a compromise when trying to separate cells. When the watershed was used in 3D, many cells were subdivided, resulting in a cell being counted as multiple cells and thus yielding false positives. On the other hand, if a technique to subdivide cells is not used, abutting cells can be counted as one, yielding false negatives. In general, if there are few abutting cells, the number of false negatives is low. A compromise solution was employed: instead of using a 3D watershed, a 2D watershed starting from the ultimate eroded points was used, thus separating objects in each plane. In this way, irregular cells that were abutting in one slice were separated, whilst they were kept connected in 3D. The number of false negatives was reduced without increasing the number of false positives. Although some cells can still be lost, this conservative solution was found to be the best compromise.

2.6.2.2. Mitotic and glial cells

Mitotic and glial cells in embryos were separated by defining the watershed lines between them. To this end, the first step consisted of marking each cell with a seed. In order to find the seeds, a 3D distance transformation was applied. To mark the cells, we applied a 3D h-dome operator based on morphological grey-scale reconstruction (Vincent, 1993). We found h = 7 to be the standard minimum distance between the centre of a cell and the surrounding voxels. This marked all the cells, even if they were closely packed. To avoid a cell having more than one seed, we computed the h-domes transform of the image q(x,y), obtained by subtracting from q(x,y) the morphological reconstruction of q(x,y) − h, where h is a positive scalar, under the original image (Vincent, 1992, 1993), that is

$$D_h(q(x,y)) = q(x,y) - \rho(q(x,y) - h) \qquad (13)$$

where the reconstruction

$$\rho(q(x,y) - h) \qquad (14)$$

is also known as the h-maxima transform. The h extended-maxima, i.e. the regional maxima of the h-maxima transform, can be employed to mark the cells (Vincent, 1993; Wählby 2003; Wählby et al. 2004). However, we found that a more reliable identification, which prevented losing cells, was achieved by thresholding the h-domes images (Vincent, 1993). Given that each seed is formed of connected voxels, 3D domes could be identified and each seed labelled using 18-connectivity.

Due to the intensity variation of the cells, several seeds can be found in one cell, resulting in over-segmentation. To prevent over-segmentation after the watershed, redundant seeds must be eliminated so that only one seed per cell remains. Wählby et al. (Wählby et al., 2004) used the gradient between seeds as a way to determine whether two seeds belong to a single cell and should be combined. However, we found that for mitotic cells a simpler solution was successful at eliminating excess seeds. Multiple seeds can appear in one cell if there are irregularities in cell shape. The resulting extra peaks tend not to be very high and, when domes are found, they tend to occupy a very small number of voxels (a maximum of 10). In contrast, true seeds are formed of a minimum of 100 voxels. Consequently, rejecting seeds of fewer than 20 voxels eliminated most redundant seeds.
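
A sketch of the 3D h-dome seeding of Equation (13) together with this size-based seed rejection; the dome binarisation cut is an assumption (the exact threshold applied to the h-domes images is not critical):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import reconstruction

def h_dome_seeds(stack, h=7, min_voxels=20):
    q = stack.astype(float)
    rec = reconstruction(q - h, q)          # rho(q - h), the h-maxima transform
    domes = q - rec                         # D_h(q), Equation (13)
    binary = domes > h / 2                  # binarise the h-domes image (assumed cut)
    conn18 = ndi.generate_binary_structure(3, 2)   # 18-connectivity
    labels, _ = ndi.label(binary, structure=conn18)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_voxels              # reject tiny redundant seeds
    keep[0] = False                         # background stays background
    return np.where(keep[labels], labels, 0)
```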

Recently, Cheng and Rajapakse (Cheng and Rajapakse, 2009) proposed an adaptive h transform in order to eliminate undesired regional minima, which can provide an alternative way of avoiding over-segmentation. Following seed identification, the 3D watershed employing the Image Foresting Transform (IFT) was applied (Lotufo & Falcao, 2000; Falcao et al., 2004), and watershed separated very close cells.

2.6.2.3. Neuronal nuclei

To identify the seeds in images of HB9-labelled cells, a 2D regional maxima detection was performed and, following the method proposed by Vincent (Vincent, 1993), an h-dome operator based on morphological grey-scale reconstruction was applied to extract and mark the cells. The choice of h is not critical, since a range of values can provide good results (Vincent, 1993). The minimum difference between the maximum grey level of the cells and the pixels surrounding the cells is 5. Thus, h = 5 results in marking the cells while distinguishing cells within clusters. Images were binarised by thresholding the h-domes images.

Some nuclei were very close. As with the mitotic cells, a 3D watershed algorithm could be employed to separate them; however, in our tests the results were not always good. We obtained better and more computationally efficient results by employing both the intensity and the distance to the borders as parameters to separate nuclei. First, a 2D watershed was applied to separate nuclei in 2D, based on the intensity of the particles. Subsequently, 3D erosion was used to increase their separation and a 3D distance transformation was applied, so that each voxel of an object takes the value of the minimum distance to the background. Then the 3D domes were found and used as seeds to mark every cell. A fuzzy distance transform (Svensson, 2007), which combines the intensity of the voxels and the distance to the borders, was also tested. Whilst this did not work well with our cells, it might be an interesting alternative when working with other kinds of cells. The images were then binarised. Once the seeds were found, they were labelled employing 18-connectivity and, from the seeds, a 3D region growing was performed to recover the original shape of each object, using as mask the stack resulting from the watershed (see Forero et al, 2010).

2.7. Classification

The final step is classification, whereby cells are identified and counted. This step is performed according to the characteristics that allow each cell type to be identified and other particles rejected. A 3D labelling method (Lumia, 1983; Thurfjell, 1992; Hu, 2005) is first employed to identify each candidate object, which is then accepted or rejected, one by one, according to the selected descriptors. To find the features that best describe the cells, a study of the best descriptors must be carried out. Several methods are commonly employed to do this. Some methods consider that the descriptors follow a Gaussian distribution and use the Fisher discriminant to separate classes (Fisher, 1938; Duda et al., 2001). Other methods select the best descriptors after a Principal Components Analysis (Pearson, 1901; Duda, 2001). In this method, a vector of descriptors is obtained for each sample and then the principal components are computed. The descriptors having the highest eigenvalues, that is, those having the highest dispersion, are selected as the best descriptors. It must be noted that this method can result in the selection of bad descriptors when the two classes have a very high dispersion along the same principal component but their distributions overlap considerably. In this case the descriptor must be rejected.

In our case, we found that dying cells stained with Caspase and mitotic cells stained with pH3 are irregular in shape. Therefore, they cannot be identified by shape, and users distinguish them from background spots of high intensity by their larger size. Thus, apoptotic and mitotic cells were selected among the candidate objects remaining from the previous steps based only on their volume. The minimum volume can be set empirically or statistically, making it higher than the volume occupied by objects produced by noise and by any remaining spots of high intensity. The remaining objects are identified as cells and counted. Using statistics, a sufficient number of cells and rejected particles can be collected to establish their means and standard deviations, thus finding the best values to separate both classes using a method like the Fisher discriminant.
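
A sketch of this volume-only classification; the threshold derived from the means and standard deviations of the two volume distributions is one simple choice for setting the minimum volume statistically (an assumption; any discriminant on the two classes would do):

```python
import numpy as np
from scipy import ndimage as ndi

def count_by_volume(binary_stack, min_volume):
    labels, n = ndi.label(binary_stack)         # 3D labelling of candidate objects
    volumes = np.bincount(labels.ravel())[1:]   # voxels per object
    return int(np.sum(volumes >= min_volume))   # objects below min_volume are rejected

def volume_threshold(noise_volumes, cell_volumes):
    # Separating value from the means and standard deviations of both classes.
    m1, s1 = np.mean(noise_volumes), np.std(noise_volumes)
    m2, s2 = np.mean(cell_volumes), np.std(cell_volumes)
    return (m1 * s2 + m2 * s1) / (s1 + s2)
```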

Nuclei have a very regular, almost spherical, shape. In this case, more descriptors can be used to describe the cells better and obtain a better identification of the objects. 2D and 3D descriptors can be employed to analyse the objects; here we present only some 2D descriptors. For a more robust identification, the representation of cells should preferably be translation, rotation and scale invariant. Compactness, eccentricity, statistical invariant moments and Fourier descriptors comply with this requirement. We did not use Fourier descriptors for our studies, given the tiny size of the cells, which made obtaining the cells' contours very sensitive to noise. Therefore, we considered only Hu's moments, compactness and eccentricity.

Compactness C is defined as

$$C = \frac{P^2}{A} \qquad (15)$$

where A and P represent the area and perimeter of the object respectively. New 2D and 3D compactness descriptors to analyse cells have been introduced by Bribiesca (2008), but have not been tested yet.

Another descriptor corresponds to the flattening or eccentricity of the ellipse whose moments of second order are equal to those of the object. In geometry texts, the eccentricity of an ellipse is defined as the ratio between the focal distance a (the distance between the foci) and the length D of the major axis of the best-fitting ellipse

$$E = \frac{a}{D} \qquad (16)$$

Its value varies between 0 and 1, the degenerate cases being 0 if the ellipse is in fact a circle and 1 if it is a line segment. The relationship between the focal distance and the major and minor axes, D and d respectively, is given by the equation

$$D^2 = d^2 + a^2 \qquad (17)$$

then,

$$E = \frac{\sqrt{D^2 - d^2}}{D} \qquad (18)$$

Nevertheless, some authors define the eccentricity of an object as the ratio between the lengths of the minor and major axes, also named the aspect ratio or elongation because it quantifies the extension of the ellipse, given by

$$e = \frac{d}{D} = \sqrt{1 - E^2} \qquad (19)$$

In this case, eccentricity also varies between 0 and 1, but it is now 0 if the object is a line segment and 1 if it is a circle.

The moment invariants are obtained from the binarised image of each cell; pixels inside the boundary contours are assigned the value 1 and pixels outside the value 0. The central moments are given by:

$$\mu_{rs} = \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} (x - \bar{x})^r \, (y - \bar{y})^s \, f(x,y), \qquad r, s = 0, 1, 2, \ldots \qquad (20)$$

where f(x,y) represents the binary image, r and s are non-negative integers, $(\bar{x},\bar{y})$ is the barycentre or centre of gravity of the object, and the order of the moment is given by r + s. From the normalised central moments $\eta_{rs} = \mu_{rs}/\mu_{00}^{1+(r+s)/2}$, Hu (Hu, 1962) defined seven rotation, scale and translation invariant moments of second and third order

$$\begin{aligned}
\phi_1 &= \eta_{20} + \eta_{02}\\
\phi_2 &= (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2\\
\phi_3 &= (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2\\
\phi_4 &= (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2\\
\phi_5 &= (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]\\
\phi_6 &= (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})\\
\phi_7 &= (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right]
\end{aligned} \qquad (21)$$

Moments $\phi_1$ to $\phi_6$ are, in addition, invariant to object reflection; only the magnitude of $\phi_7$ is constant, as its sign changes under this transformation. Therefore, $\phi_7$ can be used to recognise reflected objects. As can be seen from the equations, the first two moments are functions of the second-order moments: $\phi_1$ is a function of $\eta_{20}$ and $\eta_{02}$, the moments of inertia of the object with respect to the coordinate axes x and y, and therefore corresponds to the moment of inertia, measuring the dispersion of the pixels of the object with respect to its centre of mass in any direction; $\phi_2$ indicates how isotropic or directional the dispersion is.

One of the most common errors in the literature consists of the use of the whole set of Hu's moments to characterise objects. They must not be used simultaneously, since they are dependent (Flusser, 2000), given that

$$\phi_3 = \frac{\phi_5^2 + \phi_7^2}{\phi_4^3} \qquad (22)$$

Since Hu's moments do not form a basis (a basis being the smallest set of invariants by means of which all other invariants can be expressed), given that they are not independent and the system formed by them is incomplete, Flusser (2000) developed a general method to find bases of invariant moments of any order using complex moments. This method also allows objects to be described in 3D (Flusser et al., 2009).

As cells have a symmetrical shape, the third- and higher odd-order moments are close to zero. Therefore, the first third-order Hu moment, $\phi_3$, is enough to recognise symmetrical objects, the other third-order moments being redundant.

Moreover, eccentricity can also be derived from Hu's moments:

$$e = \sqrt{\frac{\phi_1 - \sqrt{\phi_2}}{\phi_1 + \sqrt{\phi_2}}} \qquad (23)$$

and, from Equation (19) it can be found that:

$$E = \sqrt{1 - e^2} = \sqrt{\frac{2\sqrt{\phi_2}}{\phi_1 + \sqrt{\phi_2}}} \qquad (24)$$

Therefore, eccentricity is not independent of the first two Hu’s moments and it must not be employed simultaneously with these two moments for classification.
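
The 2D descriptors discussed in this section can be sketched per labelled object with scikit-image as follows; note that the eccentricity reported by regionprops corresponds to E of Equation (16), not to the aspect ratio e:

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_descriptors(binary_2d):
    """Compactness, eccentricity and the first Hu moments of each 2D object."""
    features = []
    for r in regionprops(label(binary_2d)):
        compactness = r.perimeter ** 2 / r.area   # Equation (15)
        hu = r.moments_hu                         # phi_1 ... phi_7
        # phi_1, phi_2 and one third-order moment suffice for near-symmetric
        # cells; eccentricity must not be used together with phi_1 and phi_2.
        features.append((compactness, r.eccentricity, hu[0], hu[1], hu[2]))
    return features
```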

3. Conclusion

We have presented here an overview of image processing techniques that can be used to identify and count cells in 3D from stacks of confocal microscopy images. Unlike methods that automatically count dissociated cells or cells in culture, these 3D methods enable cell counting in vivo (i.e. in intact animals, like Drosophila embryos) and in situ (i.e. in a tissue or organ), retaining the normal cellular context within an organism. To give practical examples, we have focused on cell recognition in images from fruit-fly (Drosophila) embryos labelled with a range of cell markers, for which we have developed several image-processing methods. These were developed to count apoptotic cells stained with Caspase, mitotic cells stained with pH3, neuronal nuclei stained with HB9 and glial nuclei stained with Repo. These methods are powerful in Drosophila, as they enable quantitative analyses of gene function in vivo across many genotypes and large sample sizes. They could be adapted to work with other markers, with stainings of comparable quality used to visualise cells of comparable sizes (e.g. sparsely distributed nuclear labels like BrdU or nuclear-GFP, to count cells within a mosaic clone in the larva or adult fly).

Because automatic counting is objective, reliable and reproducible, comparison of cell number between specimens and between genotypes is considerably more accurate with automatic programs than with manual counting. While a user normally gets a different result in each measurement when counting manually, automatic programs consistently obtain a unique value. Thus, although some cells may be missed, since the same criterion is applied to all the stacks, there is no bias. Consistent and objective criteria are used to compare multiple genotypes and samples of unlimited size. Furthermore, automatic counting is considerably faster and much less labour intensive.

Following the logical steps explained in this review, the methods we describe could be adapted to work on a wide range of tissues and samples. They could also be extended and combined with the other methods described here, as well as with some of the more recent developments we have reviewed. This would enable automatic counting in vivo from mammalian samples (e.g. brain regions in the mouse), small vertebrates (e.g. zebrafish) or invertebrate models (e.g. snails), to investigate brain structure, organism growth and development, and to model human disease.

References

  1. 1. AdigaP. U.ChaudhuriB.2001Some efficient methods to correct confocal images for easy interpretation. Micron, 324June 2001), 3633700968-4328
  2. 2. AnscombeF. J.1948The transformation of Poisson, Binomial and Negative-Binomial data. Biometrika, 353December 1948), 2462540006-3444
  3. 3. Bar-LevS. K.EnisP.1988On the classical choice of variance stabilizing transformations and an application for a Poisson variate. Biometrika, 1988, 754December 1988), 8038040006-3444
  4. 4. BelloB. C.IzerginaN.CussinusE.ReichertH.2008Amplification of neural stem cell proliferation by intermediate progenitor cells in Drosophila brain development. Neural Development, 31February 2008), 51749-8104
  5. 5. BelloB.ReichertH.GirthF.2006The brain tumor gene negatively regulates neural progenitor cell proliferation in the larval central brain complex of Drosophila. Development, 13314July 2006), 263926481011-6370
  6. 6. BeucherS.LantuejoulC.1979Use of watersheds in contour detection International workshop on image processing: Real-time and motion detection/estimation. IRISA, (September 1979), 1322
  7. 7. BribiescaE.2008An easy measure of compactness for 2D and 3D shapes. Pattern Recognition. 412February 2008), 5435540031-3203
  8. 8. CalapezA.RosaA.2010A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images. IEEE Transactions on Image Processing, 199September 2010), 240824181057-7149
  9. 9. CanA.et al.2003Attenuation correction in confocal laser microscopes: A novel two-view approach. Journal of Microscopy, 2111July 2003), 67790022-2720
  10. 10. Carpenter AE et al.2006CellProfiler: image analysis software for identifying and quantifying cell phenotypes Genome Biology, 710October 2006), Article R1000, 1465-6906
  11. 11. ChanT. F.SandbergB. Y.VeseL. A.2000Active contours without edges for vector-valued images. Journal of Visual Communication and Image Representation. 112February 2000), 1301411047-3203
  12. 12. ChanT.VeseL.2001Active contours without edges.IEEE Transactions on Image Processing. 102February 2001), 2662771057-7149
  13. 13. ChengJ.RajapakseJ.2009Segmentation of clustered nuclei with shape markers and marking function, IEEE Transactions on Biomedical Engineering, 563March 2009), 7417480018-9294
  14. 14. ClocksinW.2003Automatic segmentation of overlapping nuclei with high background variation using robust estimation and flexible contour models. Proceedings 12th International Conference on Image Analysis and Processing, 6826870-76951-948-2Italy, September 17-19, 2003
  15. 15. ConchelloJ. A.1995Fluorescence photobleaching correction for expectation maximization algorithm. Three-Dimensional microscopy: image acquisition and processing. Proceedings of the 1995 SPIE symposium on electronic imaging: Science and technology. Wilson, T. & Cogswell C. J. (Eds.). 2412138146978-0-81941-759-6March 23, 1995
  16. 16. DimaA.ScholzM.ObermayerK.2002Automatic segmentation and skeletonization of neurons from confocal microscopy images based on the 3-D wavelet transform. IEEE Transactions on Image Processing, 2002, 117July 2002), 7908011057-7149
  17. 17. DudaR.HartP.StorkD.2001Pattern classification. John Wiley & sons, 2nd Ed. 978-0-47105-669-0
  18. 18. FalcaoA.StolfiJ.de AlencarLotufo. R.2004The image foresting transform: theory, algorithms, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence. 261January 2004), 19290162-8828
  19. 19. FernandezR.DasP.MirabetV.MoscardiE.TraasJ.VerdeilJ. L.MalandainG.GodinC.2010Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution. Nature Methods, 77July 2010), 5475531548-7091
  20. 20. FisherR. A.1938The use of multiple measurements in taxonomic problems. Annals of Eugenics. 7179188
  21. 21. FlusserJ.2000On the Independence of Rotation Moment Invariants. Pattern Recognition., 339September 2000), 140514100031-3203
  22. 22. FlusserJ.ZitovaB.SukT.2009Moments and Moment Invariants in Pattern Recognition. Wiley, 978-0-47069-987-4
  23. 23. FoiA.2008Direct optimization of nonparametric variance-stabilizing transformations. 8èmes Rencontres de Statistiques Mathématiques. CIRM, Luminy, December.
  24. 24. FoiA.2009Optimization of variance-stabilizing transformations. Available from http://www.cs.tut.fi/~foi/,preprint.
  25. 25. ForeroM. G.DelgadoL. J.2003Fuzzy filters for noise removal. In: Fuzzy Filters for Image Processing, Nachtegael M.; Van der Weken, D.; Van De Ville, D. & Etienne E.E, (Eds.), (July 2003), 124Springer, Berlin, Heidelberg, New York, 3-54000-465-3
26. Forero, M. G.; Pennack, J. A.; Learte, A. R. & Hidalgo, A. (2009). DeadEasy Caspase: Automatic counting of apoptotic cells in Drosophila. PLoS ONE, Vol. 4, No. 5, (May 2009), Article e5441, ISSN 1932-6203
27. Forero, M. G.; Pennack, J. A. & Hidalgo, A. (2010). DeadEasy Neurons: Automatic counting of HB9 neuronal nuclei in Drosophila. Cytometry A, Vol. 77, No. 4, (April 2010), pp. 371-378, ISSN 1552-4922
28. Forero, M. G.; Learte, A. R.; Cartwright, S. & Hidalgo, A. (2010a). DeadEasy Mito-Glia: Automatic counting of mitotic cells and glial cells in Drosophila. PLoS ONE, Vol. 5, No. 5, (May 2010), Article e10557, ISSN 1932-6203
29. Franzdóttir, S. R.; Engelen, D.; Yuva-Aydemir, Y.; Schmidt, I.; Aho, A. & Klämbt, C. (2009). Switch in FGF signalling initiates glial differentiation in the Drosophila eye. Nature, Vol. 460, No. 7256, (August 2009), pp. 758-761, ISSN 0028-0836
30. Freeman, M. F. & Tukey, J. W. (1950). Transformations related to the angular and the square root. The Annals of Mathematical Statistics, Vol. 21, No. 4, (December 1950), pp. 607-611, ISSN 0003-4851
31. Guan, Y. Q. et al. (2008). Adaptive correction technique for 3D reconstruction of fluorescence microscopy images. Microscopy Research and Technique, Vol. 71, No. 2, (February 2008), pp. 146-157, ISSN 1059-910X
32. Gué, M.; Messaoudi, C.; Sun, J. S. & Boudier, T. (2005). Smart 3D-FISH: Automation of distance analysis in nuclei of interphase cells by image processing. Cytometry A, Vol. 67, No. 1, (September 2005), pp. 18-26, ISSN 1552-4922
33. Hodneland, E.; Tai, X.-C. & Gerdes, H.-H. (2009). Four-color theorem and level set methods for watershed segmentation. International Journal of Computer Vision, Vol. 82, No. 3, (May 2009), pp. 264-283, ISSN 0920-5691
34. Ho, M. S.; Chen, M.; Jacques, C.; Giangrande, A. & Chien, C. Y. (2009). Gcm protein degradation suppresses proliferation of glial progenitors. PNAS, Vol. 106, No. 16, (April 2009), pp. 6778-6783, ISSN 0027-8424
35. Hu, M. (1962). Visual pattern recognition by moment invariants. IRE Transactions on Information Theory, Vol. 8, No. 2, (February 1962), pp. 179-187, ISSN 0096-1000
36. Hu, Q.; Qian, G. & Nowinski, W. L. (2005). Fast connected-component labelling in three-dimensional binary images based on iterative recursion. Computer Vision and Image Understanding, Vol. 99, No. 3, (September 2005), pp. 414-434, ISSN 1077-3142
37. Kass, M.; Witkin, A. & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, Vol. 1, No. 4, (January 1988), pp. 321-331, ISSN 0920-5691
38. Kervrann, C.; Legland, D. & Pardini, L. (2004). Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy. Journal of Microscopy, Vol. 214, (June 2004), pp. 297-314, ISSN 0022-2720
39. Kervrann, C. (2004a). An adaptive window approach for image smoothing and structures preserving. Proceedings of the European Conference on Computer Vision ECCV'04, Vol. 3023, pp. 132-144, ISBN 3-540-21982-X, Prague, Czech Republic, May 11-14, 2004
40. Li, G.; Liu, T.; Tarokh, A.; Nie, J.; Guo, L.; Mara, A.; Holley, S. & Wong, S. (2007). 3D cell nuclei segmentation based on gradient flow tracking. BMC Cell Biology, Vol. 8, (September 2007), Article 40, ISSN 1471-2121
41. Li, G.; Liu, T.; Nie, J.; Guo, L.; Chen, J.; Zhu, J.; Xia, W.; Mara, A.; Holley, S. & Wong, S. (2008). Segmentation of touching cell nuclei using gradient flow tracking. Journal of Microscopy, Vol. 231, No. 1, (July 2008), pp. 47-58, ISSN 0022-2720
42. Lin, G.; Adiga, U.; Olson, K.; Guzowski, J.; Barnes, C. & Roysam, B. (2003). A hybrid 3D watershed algorithm incorporating gradient cues and object models for automatic segmentation of nuclei in confocal image stacks. Cytometry A, Vol. 56, No. 1, (November 2003), pp. 23-36, ISSN 1552-4922
43. Long, F.; Peng, H. & Myers, E. (2007). Automatic segmentation of nuclei in 3D microscopy images of C. elegans. Proceedings of the 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 536-539, ISBN 1-4244-0672-2, VA, USA, April 12-15, 2007
44. Lotufo, R. & Falcao, A. (2000). The ordered queue and the optimality of the watershed approaches. In: Mathematical Morphology and its Applications to Image and Signal Processing, Kluwer Academic Publishers, (June 2000), pp. 341-350
45. Lucy, L. B. (1974). An iterative technique for the rectification of observed distributions. Astronomical Journal, Vol. 79, No. 6, (June 1974), pp. 745-754, ISSN 0004-6256
46. Lumia, R. (1983). A new three-dimensional connected components algorithm. Computer Vision, Graphics, and Image Processing, Vol. 22, No. 2, (August 1983), pp. 207-217, ISSN 0734-189X
47. Makitalo, M. & Foi, A. (2011). Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Transactions on Image Processing, Vol. 20, No. 1, (January 2011), pp. 99-109, ISSN 1057-7149
48. Makitalo, M. & Foi, A. (2011a). A closed-form approximation of the exact unbiased inverse of the Anscombe variance-stabilizing transformation. IEEE Transactions on Image Processing, accepted for publication, ISSN 1057-7149
49. Malpica, N.; de Solórzano, C. O.; Vaquero, J. J.; Santos, A.; Vallcorba, I.; Garcia-Sagredo, J. M. & del Pozo, F. (1997). Applying watershed algorithms to the segmentation of clustered nuclei. Cytometry, Vol. 28, No. 4, (August 1997), pp. 289-297, ISSN 0196-4763
50. Maurange, C.; Cheng, L. & Gould, A. P. (2008). Temporal transcription factors and their targets schedule the end of neural proliferation in Drosophila. Cell, Vol. 133, No. 5, (May 2008), pp. 891-902, ISSN 0092-8674
51. Meijering, E. & Cappellen, G. (2007). Quantitative biological image analysis. In: Imaging Cellular and Molecular Biological Functions, Shorte, S. L. & Frischknecht, F. (Eds.), pp. 45-70, Springer, Berlin Heidelberg, ISBN 978-3-540-71331-9
52. Mitra, S. K. & Sicuranza, G. L. (Eds.) (2001). Nonlinear Image Processing, Elsevier, ISBN 81-312-0844-3
53. Osher, S. & Sethian, J. A. (1988). Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics, Vol. 79, No. 1, (November 1988), pp. 12-49, ISSN 0021-9991
54. Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Philosophical Magazine, Series 6, Vol. 2, (July-December 1901), pp. 559-572, ISSN 1478-6435
55. Peng, H. (2008). Bioimage informatics: a new area of engineering biology. Bioinformatics, Vol. 24, No. 17, (September 2008), pp. 1827-1836, ISSN 1367-4803
56. Richardson, W. H. (1972). Bayesian-based iterative method of image restoration. Journal of the Optical Society of America, Vol. 62, No. 1, (January 1972), pp. 55-59, ISSN 0030-3941
57. Rodenacker, K. et al. (2001). Depth intensity correction of biofilm volume data from confocal laser scanning microscopes. Image Analysis and Stereology, Vol. 20, Suppl. 1, (September 2001), pp. 556-560, ISSN 1580-3139
58. Rodrigues, I.; Sanches, J. & Bioucas-Dias, J. (2008). Denoising of medical images corrupted by Poisson noise. Proceedings of the 15th IEEE International Conference on Image Processing, ICIP 2008, pp. 1756-1759, ISBN 978-1-4244-1765-0, October 12-15, 2008
59. Roerdink, J. B. T. M. & Bakker, M. (1993). An FFT-based method for attenuation correction in fluorescence confocal microscopy. Journal of Microscopy, Vol. 169, No. 1, (1993), pp. 3-14, ISSN 0022-2720
60. Rogulja-Ortmann, A.; Lüer, K.; Seibert, J.; Rickert, C. & Technau, G. M. (2007). Programmed cell death in the embryonic central nervous system of Drosophila melanogaster. Development, Vol. 134, No. 1, (January 2007), pp. 105-116, ISSN 0950-1991
61. Rothwell, W. F. & Sullivan, W. (2000). Fluorescent analysis of Drosophila embryos. In: Drosophila Protocols, Ashburner, M.; Sullivan, W. & Hawley, R. S. (Eds.), pp. 141-158, Cold Spring Harbor Laboratory Press, ISBN 0-87969-586-2
62. Sarder, P. & Nehorai, A. (2006). Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Processing Magazine, Vol. 23, No. 3, (May 2006), pp. 32-45, ISSN 1053-5888
63. Serra, J. (1988). Image Analysis and Mathematical Morphology, Vol. II: Theoretical Advances. Academic Press, ISBN 0-12-637241-1
64. Shen, J.; Sun, H.; Zhao, H. & Jin, X. (2009). Bilateral filtering using fuzzy-median for image manipulations. Proceedings 11th IEEE International Conference on Computer-Aided Design and Computer Graphics, pp. 158-161, ISBN 978-1-4244-3699-6, Huangshan, China, August 19-21, 2009
65. Shimada, T.; Kato, K.; Kamikouchi, A. & Ito, K. (2005). Analysis of the distribution of the brain cells of the fruit fly by an automatic cell counting algorithm. Physica A: Statistical and Theoretical Physics, Vol. 350, No. 1, (May 2005), pp. 144-149, ISSN 0378-4371
66. Starck, J. L.; Pantin, E. & Murtagh, F. (2002). Deconvolution in astronomy: A review. Publications of the Astronomical Society of the Pacific, Vol. 114, No. 800, (October 2002), pp. 1051-1069, ISSN 0004-6280
67. Sternberg, S. R. (1986). Grayscale morphology. Computer Vision, Graphics, and Image Processing, Vol. 35, No. 3, (September 1986), pp. 333-355, ISSN 0734-189X
68. Svensson, S. (2007). A decomposition scheme for 3D fuzzy objects based on fuzzy distance information. Pattern Recognition Letters, Vol. 28, No. 2, (January 2007), pp. 224-232, ISSN 0167-8655
69. Thurfjell, L.; Bengtsson, E. & Nordin, B. (1992). A new three-dimensional connected components labeling algorithm with simultaneous object feature extraction capability. CVGIP: Graphical Models and Image Processing, Vol. 54, No. 4, (July 1992), pp. 357-364, ISSN 1049-9652
70. Tomasi, C. & Manduchi, R. (1998). Bilateral filtering for gray and color images. Proceedings Sixth International Conference on Computer Vision, Chandran, S. & Desai, U. (Eds.), pp. 839-846, ISBN 81-7319-221-9, India, January 4-7, 1998
71. Vincent, L. & Soille, P. (1991). Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 6, (June 1991), pp. 583-598, ISSN 0162-8828
72. Vincent, L. (1992). Morphological grayscale reconstruction: definition, efficient algorithm and applications in image analysis. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 633-635, ISSN 1063-6919, Champaign, IL, USA, June 15-18, 1992
73. Vincent, L. (1993). Morphological grayscale reconstruction in image analysis: applications and efficient algorithms. IEEE Transactions on Image Processing, Vol. 2, No. 2, (April 1993), pp. 176-201, ISSN 1057-7149
74. Wählby, C. (2003). Algorithms for Applied Digital Image Cytometry. PhD dissertation, Uppsala University, (October 2003)
75. Wählby, C.; Sintorn, I.; Erlandsson, F.; Borgefors, G. & Bengtsson, E. (2004). Combining intensity, edge and shape information for 2D and 3D segmentation of cell nuclei in tissue sections. Journal of Microscopy, Vol. 215, No. 1, (July 2004), pp. 67-76, ISSN 0022-2720
76. Wu, H. X. & Ji, L. (2005). Fully automated intensity compensation for confocal microscopic images. Journal of Microscopy, Vol. 220, No. 1, (October 2005), pp. 9-19, ISSN 0022-2720
77. Yu, W.; Lee, H. K.; Hariharan, S.; Bu, W. & Ahmed, S. (2009). Quantitative neurite outgrowth measurement based on image segmentation with topological dependence. Cytometry A, Vol. 75A, No. 4, (April 2009), pp. 289-297, ISSN 1552-4922
