
AO-Based High Resolution Image Post-Processing

Written By

Changhui Rao, Yu Tian and Hua Bao

Submitted: 27 March 2011 Published: 20 January 2012

DOI: 10.5772/31150

From the Edited Volume

Topics in Adaptive Optics

Edited by Robert K. Tyson


1. Introduction

In this chapter, we discuss AO-based high-resolution image post-processing, as applied in astronomical observation, solar observation, retinal imaging, and related fields.


2. High-resolution restoration of adaptive optics images

Adaptive optics (AO) can be used to compensate for atmospheric wavefront aberrations in real time and thereby approach diffraction-limited resolution [1-5]. However, the correction is generally partial because of hardware restrictions, such as the finite number of degrees of freedom of the deformable mirror, the latency of the AO loop, and measurement error in the wavefront sensor (WFS). Post-processing is therefore necessary to improve the quality of AO images, which are still corrupted by residual aberrations. Generally, the corruption can be described as a convolution process:

g(x,y) = f(x,y) * h(x,y) + n(x,y)    (1)

where “*” denotes two-dimensional convolution; g(x,y) is the observed degraded image; f(x,y) is the real object, that is, the object to be restored; h(x,y) is the residual PSF of the optical system, which is convolved with the real object; and n(x,y) stands for the system noise. In the frequency domain, Eq. (1) becomes

G(u,v) = F(u,v) H(u,v) + N(u,v)    (2)

The primary goal of post-processing is to find the true object f(x,y); the process can be described as a deconvolution. In an AO system, h(x,y) can be partially estimated from the WFS. If h(x,y) is known, a deconvolution can be applied to g(x,y) to estimate f(x,y) directly. This approach is generally called deconvolution from wavefront sensing (DWFS), and it assumes that the WFS measurement is accurate. In a practical system, however, the WFS suffers from various kinds of noise, such as photon noise and CCD readout noise, and the situation is even worse in closed loop [6]. Moreover, the frame rate and exposure time of the camera that records the corrected images differ in practice from those of the WFS camera, so it is difficult to match the short-exposure images with the corresponding WFS data.

Therefore, ordinary DWFS is not suitable for the post-processing of AO images. In this chapter, three post-processing methods with different technical routes are described. In Section 2.1, a post-processing method based on self-deconvolution is introduced; the scheme follows the self-deconvolving data reconstruction algorithm (SeDDaRA) proposed by Caron [7,8], and the method has been verified on solar images. In Section 2.2, a post-processing method for partially corrected AO images is proposed, in which the WFS data are used and revised while the AO system performs closed-loop correction on the given object; the algorithm has been tested on human eye fundus images. In Section 2.3, a method based on blind deconvolution is introduced; it uses a frame-selection technique to pick the most suitable images from an AO closed-loop series and then performs blind deconvolution. This method has been verified on celestial images captured by an AO system.

2.1. A modified self-deconvolution restoration algorithm

Caron et al. [7,8] proposed the blind deconvolution method SeDDaRA. The method applies a power-law relation to the Fourier transform of the degraded data to extract a filter function and has been shown to work well on a wide variety of signal types. The key operation in SeDDaRA is to determine the transfer function (the extracted filter function) used to deconvolve the images, and the most important step is to choose the tuning parameter. In this section, a method based on SeDDaRA is proposed to deconvolve solar images corrected by a 37-element AO system. The method uses the information recorded by the WFS in closed-loop operation to estimate the deconvolution operator.

From Eq. (2), the most important step in the deconvolution is to find the deconvolution operator H(u,v). SeDDaRA gives the following estimate of H(u,v) [8]:

H_e(u,v) = [K_G S{|G(u,v) − N(u,v)|}]^α(u,v)    (3)

where α(u,v) is a tuning parameter, K_G is a real positive scalar chosen to ensure |H_e(u,v)| ≤ 1, and S{·} denotes a smoothing operation. Three conditions must be satisfied: (1) 0 ≤ α(u,v) < 1, (2) S{·} is separable, and (3) F(u,v) and N(u,v) are uncorrelated [8]. SeDDaRA chooses α(u,v) by solving

S{|G(u,v)|} / S{|F(u,v)|} = [K_G S{|G(u,v)|}]^α(u,v)    (4)

Since α(u,v) depends on the spatial frequency and is related to the unknown object F(u,v), it is not easy to determine. α(u,v) could be set to a constant value, but that causes distortion in the high-frequency band [8]. Sudo et al. applied SeDDaRA to solar images and, based on experience, set α = 0.5 as a constant [9]. In this section, in order to choose a tuning parameter that estimates the extracted filter more precisely, a method that uses long-exposure WFS data to determine α(u,v) is proposed. The method proved effective in the post-processing of AO-corrected solar images.

In adaptive optics, the WFS provides Zernike coefficients from which the PSF h(x,y) and the corresponding H(u,v) can be generated, although not accurately, because of measurement noise. Increasing the exposure time alleviates the impact of noise: the long-exposure PSF is simply the sum of multiple short-exposure PSFs.

Although the long-exposure PSF does not match the individual images taken by the AO camera, it describes the statistics of the wavefront aberrations over the interval during which it was measured. The long-exposure PSF therefore provides useful prior knowledge for SeDDaRA, i.e., it helps to determine α(u,v).

In a practical system, the frame rate of the camera that records the corrected AO images is much lower than that of the WFS camera (30 Hz vs. 1600 Hz), so the exposure time of the latter is much shorter (10 ms vs. 0.625 ms), and the AO images are long-exposure compared with the WFS data. One AO image can be decomposed into N short-exposure frames, and the right-hand side of Eq. (4) can then be written as (for simplicity, the noise is ignored)

[K_G S{|G(u,v)|}]^α(u,v) = S{|G_1(u,v)|}/S{|F(u,v)|} + S{|G_2(u,v)|}/S{|F(u,v)|} + ⋯ + S{|G_N(u,v)|}/S{|F(u,v)|}    (5)

Because the images have already been corrected once by the AO system, the residual aberration is small, so we assume that the Fourier transform of the relatively long-exposure PSF, i.e., the long-exposure optical transfer function, is real and smooth, with OTF_LE(u,v) ≈ |OTF_LE(u,v)| ≤ 1. The right-hand side of Eq. (5) is then simply the long-exposure optical transfer function OTF_LE(u,v), that is,

S{|G_1(u,v)|}/S{|F(u,v)|} + S{|G_2(u,v)|}/S{|F(u,v)|} + ⋯ + S{|G_N(u,v)|}/S{|F(u,v)|} = |OTF_LE(u,v)|    (6)

Hence,

[K_G S{|G(u,v)|}]^α(u,v) = |OTF_LE(u,v)|    (7)

which makes

α(u,v) = ln|OTF_LE(u,v)| / ln[K_G S{|G(u,v)|}]    (8)

α(u,v) is thus determined and depends on |OTF_LE(u,v)|. H_e(u,v) then follows directly from Eq. (3) and can be used in many deconvolution algorithms. The deconvolution scheme we employ is constrained least squares filtering [10,11], which is given by

F(u,v) = [ H_e*(u,v) / (|H_e(u,v)|² + μ|P(u,v)|²) ] G(u,v)    (9)

where μ is a parameter that controls the regularization strength, P(u,v) is the Fourier transform of the Laplacian operator, and “*” denotes the complex conjugate. The regularization term P(u,v) alleviates noise sensitivity during the deconvolution. Because the AO system has a finite pupil, it is also necessary to add a band-limit constraint to the deconvolution, which further prevents noise amplification.
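To make the procedure concrete, the following NumPy sketch strings together Eqs. (3), (8), and (9) under the stated assumptions (noise term ignored in Eq. (3), a separable moving-average smoothing for S{·}); the function names, the smoothing width, and the value of μ are illustrative, not the chapter's actual implementation, and `otf_le` stands for the long-exposure OTF generated from the WFS Zernike coefficients.

```python
import numpy as np

def smooth(A, k=5):
    """Separable k x k moving average: one choice of S{.} satisfying separability."""
    kern = np.ones(k) / k
    A = np.apply_along_axis(np.convolve, 0, A, kern, mode="same")
    return np.apply_along_axis(np.convolve, 1, A, kern, mode="same")

def modified_seddara(g, otf_le, mu=1e-3, eps=1e-8):
    """Restore image g using the long-exposure OTF as prior knowledge (Eqs. 3, 8, 9)."""
    G = np.fft.fft2(g)
    S_G = smooth(np.abs(G))
    K_G = 1.0 / (S_G.max() + eps)              # ensures K_G * S{|G|} <= 1
    # Eq. (8): alpha(u,v) = ln|OTF_LE| / ln(K_G * S{|G|})
    alpha = np.log(np.abs(otf_le) + eps) / np.log(K_G * S_G + eps)
    alpha = np.clip(alpha, 0.0, 0.999)         # condition (1): 0 <= alpha < 1
    He = (K_G * S_G) ** alpha                  # Eq. (3), noise term ignored
    # Eq. (9): constrained least squares filter with a Laplacian regularizer
    lap = np.zeros(g.shape)
    lap[0, 0] = 4; lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
    P = np.fft.fft2(lap)
    F = np.conj(He) / (np.abs(He) ** 2 + mu * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F))
```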

A 37-element AO system was developed and installed on the 26cm solar fine structure telescope located on Phoenix Hill, Kunming, Yunnan, China. Fig. 1 shows the optical layout of the AO system, which works in the visible spectral range. The imaging CCD's frame rate is 30 Hz and the WFS CCD's frame rate is about 1600 Hz; more details can be found in [1,2].

Figure 1.

The optical layout of the adaptive optics system

Fig. 2 shows an image of solar granulation corrected by the AO system together with the images restored by different algorithms: our method, DWFS with the long-exposure PSF, and SeDDaRA with a constant α = 0.45. The four images have been energy normalized.

Figure 2.

Image of solar granulation. (a) The image corrected by AO system; (b) The image restored by our method; (c) The image restored by long exposure PSF; (d) The image restored by SeDDaRA with a constant α=0.45.

To measure the quality of the solar granulation images, we employed the following criterion [11]:

r = (ΔI)_rms / Ī × 100%    (10)

where Ī denotes the mean of image I and (ΔI)_rms the RMS deviation of image I. The results for the four images are (a) 2.05%, (b) 4.06%, (c) 3.03%, and (d) 2.95%, which shows that our method improves the image quality efficiently.

To further illustrate the improvement in image quality, cross sections of the images are shown in Fig. 3. The contrast of the solar granulation image is clearly improved by the proposed deconvolution method.

The cross-section curve of the deconvolution result is also smooth: the noise was not amplified, thanks to the Laplacian operator introduced in the deconvolution scheme.

Figure 3.

The cross sections of Fig. 2

Fig. 4 shows an image of a sunspot and the image restored by our method. The images are also energy normalized.

Figure 4.

The sunspot images. (a) the original image corrected by AO; (b) the corresponding image restored by our method.

According to the structural characteristics of the image, a contrast criterion was defined to evaluate the improvement:

C = (I_max − I_min) / (I_max + I_min)    (11)

where I_max and I_min are the maximum and minimum values in the image. The contrasts of the original image and of the deconvolution result are 0.4491 and 0.7008, respectively. This outcome confirms the validity of our method.

In summary, the method is based on SeDDaRA and costs less time than methods based on blind deconvolution or other iterative schemes. Because the prior knowledge of the long-exposure PSF is introduced, it is simple to determine the tuning parameter α(u,v), which is crucial to the quality of the restored images. The method is particularly useful when the target is of low contrast and widely extended, as with solar images and human eye fundus images, for which blind deconvolution is more difficult. The method can be combined with an AO data-processing package to deliver higher-resolution images.

2.2. Hybrid deconvolution

The retina of the human eye is a multi-layer structure, consisting of photoreceptor cells and capillary blood vessels, located at the back of the eyeball. The unique structure of the blood vessels and cells in the retina is used for biometric identification and disease diagnosis. AO can compensate for the wave aberrations of the human eye to achieve diffraction-limited resolution in real time. However, the correction by an AO system is always partial, owing to the hardware restrictions stated before, so post-processing is necessary to improve the quality of images that are still corrupted by residual aberrations. In this section, a method for obtaining high-quality retinal images based on image post-processing, using the wavefront error measured simultaneously by a wavefront sensor, is described.

The post-processing involves an image restoration (deconvolution) technique that requires accurate knowledge of the residual aberration while the AO system is in closed-loop status, together with the simultaneously recorded, partially corrected images, to estimate the undistorted image of the human eye.

According to Eqs. (1) and (2), the deconvolution processing can be described by

o = FT⁻¹(H⁻¹G) = FT⁻¹(H⁻¹HO + H⁻¹N)    (12)

where FT denotes the Fourier transform; o represents the estimate of the undistorted image and O is its Fourier transform; and H, G, and N are the Fourier transforms of the point spread function (PSF), the degraded image, and the Gaussian white noise, respectively. H is also called the optical transfer function (OTF) and can be computed from the aberration measured by a wavefront sensor such as a Hartmann-Shack wavefront sensor (HSWFS). Because of the noise, direct inverse filtering in Eq. (12) amplifies the noise and introduces unwanted variance into the deconvolved images. The HSWFS is also contaminated by CCD random noise, so the PSF calculated from the wavefront measurements is not accurate. The key to the problem is a joint estimation of the undistorted image and the “real” PSF. The method can be formalized as the minimization of the criterion function

J(o,h) = J_1(o,h) + J_2(o) + J_3(h)    (13)

where J_1(o,h) = (1/σ²)‖FT⁻¹(HO − G)‖² is the data-fidelity term (assuming Gaussian noise), ‖·‖² is the squared norm, and σ² is the noise variance. J_2(o) = ‖∇^k o‖² is the regularization term that constrains the amplification of noise at high frequencies, where ∇^k denotes the k-th order differential operator; here k = 2 is chosen, so J_2(o) is actually a smoothness measure based on the Laplacian operator, which constrains the high-frequency content of the estimated image and alleviates the noise-sensitivity problem. J_3(h) = ‖h − h_0‖² is a penalty term that controls the estimate of the PSF, which should not differ too much from the PSF h_0 measured by the HSWFS; the rationale is that the “real” PSF cannot be far from h_0, which is considered part of the “real” PSF.

Ordinary blind deconvolution usually falls into local minima and can hardly find a unique solution, especially when only a single blurred image is to be restored. We use a joint estimation to overcome this drawback, because the estimate of the PSF is tied to the PSF measured by the HSWFS. There are two advantages: on the one hand, the PSF can be revised using the blurred image and the estimate of the undistorted image; on the other hand, the revised PSF gives a better estimate of the undistorted image. The presence of h_0 also greatly reduces the uncertainty of the blind deconvolution.
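One simple way to realize this joint estimation is to alternate closed-form updates of O and H, since each of J_1 + J_2 and J_1 + J_3 is quadratic in one variable when the other is held fixed (Parseval's theorem lets both penalties act in the Fourier domain). The sketch below follows that scheme; `h0_otf`, `sigma2`, and the iteration count are illustrative assumptions, not the chapter's actual solver.

```python
import numpy as np

def hybrid_deconvolution(g, h0_otf, sigma2=1e-2, n_iter=20):
    """Alternating minimisation of Eq. (13): J1 (fidelity) + J2 (smoothness) + J3 (PSF penalty)."""
    G = np.fft.fft2(g)
    lap = np.zeros(g.shape)
    lap[0, 0] = 4; lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
    P2 = np.abs(np.fft.fft2(lap)) ** 2         # |P(u,v)|^2 for the Laplacian term J2
    H = h0_otf.copy()                          # start from the HSWFS measurement
    for _ in range(n_iter):
        # J1 + J2 are quadratic in O -> closed-form object update
        O = (np.conj(H) * G / sigma2) / (np.abs(H) ** 2 / sigma2 + P2)
        # J1 + J3 are quadratic in H -> closed-form OTF update, pulled towards h0
        H = (np.conj(O) * G / sigma2 + h0_otf) / (np.abs(O) ** 2 / sigma2 + 1.0)
    return np.real(np.fft.ifft2(O))
```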

A small table-top AO system for in vivo high-resolution imaging of the human retina [5] was built. The system performs real-time (25 Hz) closed-loop measurement and correction of the ocular aberration. Fig. 5 shows a schematic diagram of the setup. A laser diode with a wavelength of 780 nm is used for illumination. After reflection from the retina, the light exits the pupil (6 mm) and passes through a dichroic mirror, a beam-expander telescope, a 37-element deformable mirror (DM), a mirror, and a beam splitter, and is projected onto the HSWFS (97 subapertures in an 11 × 11 array). The wavefront slope data from the HSWFS are acquired by a computer and used to control the DM.

Figure 5.

Schematic diagram of the table-top AO system for human retinal imaging

The device was tested on the eye of a volunteer in vivo. Fig. 6(a) and (b) show the original blurred image of blood vessels in the retina of a living human eye without AO correction and the image corrected by AO, respectively. The latter clearly has better resolution and visual quality than the former.

Figure 6.

Real eye experiment: (a) Original blood vessels image (400×400 pixels), (b) the image corrected by AO (400×400 pixels), (c) simultaneously recorded HS image (240×240 pixels), (d) reconstructed residual phase (PV=0.606μm), (e) revised PSF, (f) image restored by post-processing (400×400 pixels).

As explained before, the AO system cannot achieve full correction because of hardware limitations, mainly the correction capability of the DM. Fig. 6(c) shows the HS image when the AO system is in closed-loop status; the image in Fig. 6(b) is recorded simultaneously. In the post-processing stage, the residual phase is used to restore the partially corrected image. The OTF/PSF that characterizes the residual phase must be estimated first. The residual OTF of the eye can be approximated as the product of the pupil OTF and a function called the AO-OTF:

H = OTF_p × OTF_AO    (14)

The AO-OTF is related to the residual phase in the pupil plane:

OTF_AO = FT{ |FT{exp[jφ(x)]}|² }    (15)

where φ(x) represents the residual phase in the pupil plane, which can be computed from the aberration measured by the HSWFS using modal reconstruction [12]; it is shown in Fig. 6(d). The pupil OTF can be calculated easily from the pupil geometry. Fig. 6(e) shows the corresponding PSF.
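A short sketch of Eqs. (14)-(15): `phase` is assumed to be the reconstructed residual wavefront (in radians) sampled on the same grid as the binary aperture mask `pupil`; keeping the mask inside Eq. (15) is one discretisation choice, since exp(jφ) needs a finite support.

```python
import numpy as np

def residual_otf(phase, pupil):
    """Residual OTF H = OTF_p x OTF_AO (Eq. 14) from the reconstructed phase."""
    gpf = pupil * np.exp(1j * phase)           # generalised pupil function
    psf = np.abs(np.fft.fft2(gpf)) ** 2        # PSF = |FT{exp(j*phi)}|^2
    otf_ao = np.fft.fft2(psf)                  # Eq. (15): OTF_AO = FT{PSF}
    otf_ao /= otf_ao[0, 0]                     # normalise to unit zero frequency
    otf_p = np.fft.fft2(np.abs(np.fft.fft2(pupil)) ** 2)   # diffraction-limited OTF
    otf_p /= otf_p[0, 0]
    return otf_p * otf_ao                      # Eq. (14)
```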

The next step is to estimate the undistorted image from the degraded image G and the ocular OTF H with the help of Eq. (13). Fig. 6(f) shows the image restored by the post-processing.

The visual quality of Fig. 6(f) is better than that of Fig. 6(b), which is only partially corrected by the AO system. The blood vessels are more distinct, the cells in the fundus of the eye are much clearer, and more structures and small features are visible.

The contrast is defined as

C = (I_max − I_min) / (I_max + I_min)    (16)

where I_max and I_min are the maximum and minimum values in the image. The contrasts of the image without AO correction and of the image partially corrected by AO are 0.723 and 0.902, respectively, while the contrast of the post-processed image reaches 0.982. The higher contrast of the restored image helps to reveal more useful information about the vessels in the retina.

As shown in Fig. 7(d), the restored image presents more details, and the contrast improves from 0.9135 to 0.9797. The performance of the post-processing depends on the accuracy of the wavefront sensor and on the deconvolution technique, so there is still much room to improve the correction capability. In this section, we have shown that partially corrected images of retinal blood vessels, as well as of cells in the fundus of the eye, can be restored to higher visual quality, which would benefit biometric identification and disease diagnosis in clinical use.

2.3. Multi-frame blind deconvolution

Obtaining high-resolution observations is one of the important goals of astronomy, but under the influence of atmospheric turbulence, ground-based optical telescopes can hardly obtain high-quality images. DWFS provides an easy approach to wavefront correction but, as stated before, its performance is strongly limited by the detection accuracy of the WFS.

In this section, a post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by AO. The method estimates the real object from multiple short-exposure degraded images captured during AO closed-loop correction. It not only improves the image quality but also corrects the wavefront detection, which helps the engineer analyze the capability of the AO system. The algorithm needs no a priori knowledge; only a positivity constraint and a band-limit constraint are applied in the iterative process, to ensure convergence. Using multi-frame images improves the stability and convergence of the blind deconvolution. The method was applied to the restoration of images of celestial bodies observed with the 1.2m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by AO.

Figure 7.

Experiment on cells in the fundus of the eye: (a) the image corrected by AO (400×400 pixels); (b) corresponding details of (a) (100×100 pixels); (c) and (d) the image restored by post-processing and the corresponding details

The multi-frame degradation model for a given object can be written as:

g_m(x,y) = f(x,y) * h_m(x,y) + n(x,y),  1 ≤ m ≤ M    (17)

in which g_m(x,y) and h_m(x,y) represent, respectively, the m-th short-exposure degraded image of the given object f(x,y) and the corresponding PSF. The h_m(x,y) are required to be co-prime, which is generally satisfied for two-dimensional images. Comparing Eq. (17) with Eq. (1), the multi-frame joint estimation transforms the problem of “knowing one, deriving two” into the problem of “knowing M, deriving M+1.” Although the equation is still underdetermined, the degree of uncertainty is greatly reduced, and the greater M is, the greater the possibility that the estimate of f(x,y) approaches the real solution.

Atmospheric turbulence is a random process whose intensity varies with time, so when the AO system compensates for it, the quality of the correction also changes with time. Not all frames of the recorded short-exposure degraded images are therefore suitable for the blind deconvolution process, and a frame selection is necessary.

The frame selection technique introduces a standard to evaluate the quality of partially compensated AO images and picks out those of good quality.

Hence, frame selection actually picks out the best AO compensations from the series of short-exposure images. The Shannon entropy is taken as the quality-evaluation standard, and the Shannon entropy of the i-th frame is defined as

S_i = −Σ_{x,y} ψ_i(x,y) log ψ_i(x,y)    (18)

in which ψi(x, y) represents the probability density function of the i-th frame, namely,

ψ_i(x,y) = g_i(x,y) / Σ_{x,y} g_i(x,y)    (19)

The better the AO compensation, the smaller the value of S_i [13]. The M frames with the smallest S_i are therefore selected as the degraded images for the blind deconvolution processing (denoted by g_m(x,y) in Eq. (17)). Adopting frame selection as a preprocessing step improves the iterative quality and the convergence of the blind deconvolution.
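A minimal sketch of the entropy-based frame selection of Eqs. (18)-(19); `frames` is assumed to be a sequence of short-exposure images, and M is the number of frames to keep.

```python
import numpy as np

def select_frames(frames, M=5):
    """Return the M frames with the smallest Shannon entropy (best AO compensation)."""
    entropies = []
    for g in frames:
        psi = g / g.sum()                      # Eq. (19): probability density
        psi = psi[psi > 0]                     # avoid log(0)
        entropies.append(-np.sum(psi * np.log(psi)))   # Eq. (18)
    best = np.argsort(entropies)[:M]
    return [frames[i] for i in best]
```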

For convenience, omitting the coordinates, Eq. (17) can be rewritten simply as:

g_m = f * h_m + n,  1 ≤ m ≤ M    (20)

Under the condition that n is a Gaussian noise, the maximum likelihood estimate of f is:

f̃ = arg min E(‖g_m − h_m * f̃‖²)    (21)

and similarly, the maximum likelihood estimate of hm can be written as:

h̃_m = arg min E(‖g_m − h̃_m * f‖²)    (22)

in which E(·) denotes the mathematical expectation and ‖·‖ the norm. According to the maximum likelihood criterion, we can define the cost function:

J(f, h_m) = ‖h̃_m * f̃ − g_m‖²    (23)

and by minimizing J(f, hm), f and hm could be estimated.

In an incoherent imaging system the positivity constraint, which requires both f and h_m to be non-negative, is satisfied, so it is reasonable to add the positivity constraint in the iterative process:

f = p²    (24)
h_m = q_m²    (25)

Miura [14] proved that, by the change of variables in Eqs. (24)-(25), the number of unknown image elements to be estimated can be effectively reduced; even for the single-frame blind deconvolution of Eq. (17), it can turn an underdetermined system into an overdetermined one (i.e., one with more equations than unknowns). On this basis, Miura proposed a single-frame blind deconvolution method under band-limitation and canonical constraints [14]. However, this method is easily trapped in an ineffective solution when the initial guess is improper. To avoid this, multiple degraded images are used to form the first guess of the true object in the estimation of the real object and the point spread function.
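The following gradient-descent sketch illustrates the multi-frame scheme with the substitution of Eqs. (24)-(25) (f = p², h_m = q_m²), a frequency-domain band-limit constraint on the PSFs, and the frame average as the first guess of the object; the step size, cutoff, and iteration count are illustrative, not tuned values from the chapter.

```python
import numpy as np

def mfbd(frames, cutoff=0.25, step=1e-3, n_iter=50):
    """Multi-frame blind deconvolution with positivity (f = p^2, h_m = q_m^2)."""
    M = len(frames)
    p = np.sqrt(np.mean(frames, axis=0))       # first guess: average of the frames
    q = [np.full(p.shape, np.sqrt(1.0 / p.size)) for _ in range(M)]  # flat PSFs
    fy = np.fft.fftfreq(p.shape[0])[:, None]
    fx = np.fft.fftfreq(p.shape[1])[None, :]
    band = (fy ** 2 + fx ** 2) <= cutoff ** 2  # band-limit mask in frequency
    for _ in range(n_iter):
        F = np.fft.fft2(p ** 2)
        grad_p = np.zeros(p.shape)
        for m, g in enumerate(frames):
            H = np.fft.fft2(q[m] ** 2) * band  # band-limit constraint on the OTF
            r = np.real(np.fft.ifft2(F * H)) - g          # residual f*h_m - g_m
            R = np.fft.fft2(r)
            # Chain rule through f = p^2 and h_m = q_m^2
            grad_p += 2 * p * np.real(np.fft.ifft2(np.conj(H) * R))
            q[m] -= step * 2 * q[m] * np.real(np.fft.ifft2(np.conj(F) * R))
        p -= step * grad_p
    return p ** 2                              # estimated object f
```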

In 2000, we built a 61-element AO system on the Yunnan Observatory 1.2m telescope and used it for visible-light high-resolution imaging observations of astronomical objects [3,4]. In May 2004, we completed an upgrade of the system. The AO system is placed on the Coudé path of the 1.2m telescope. The corrected aperture is the full 1.06m; wavefront detection operates over 400-700 nm, and imaging is conducted over 700-900 nm. The AO correction loop consists of a high-frame-rate, low-light Hartmann-Shack wavefront sensor with a 9×9 subaperture array, a matched 61-element deformable mirror, and a real-time wavefront processor that compensates the high-order aberrations caused by atmospheric turbulence; the image detection system is responsible for the high-resolution imaging observations.

Observations of an FK5 star with magnitude 2.06 were made. From the 100 frames of degraded images recorded after the closed-loop correction of the AO system, 5 frames were picked out by frame selection as the images for blind deconvolution. These are shown in panels (a)-(e) of Fig. 8, while Fig. 8(f) is the image after post-processing. The six images are normalized to the same energy. The calculation involved 15 iterations.

The normalized Strehl ratio is defined as:

SR = (i/E) / (i_diff/E_diff) × 100%    (26)

In this formula, i and E denote the maximum value and total energy of the given image, respectively, and i_diff and E_diff the maximum value and total energy of the diffraction-limited image. The Strehl ratio of Fig. 8(f) is 69.9%, which represents improvements over the five degraded images by factors of 1.56, 1.45, 1.53, 1.49, and 1.44, respectively.
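For reference, the normalised Strehl ratio of Eq. (26) is a one-liner; `img_diff` is assumed to be the diffraction-limited image of the same source (e.g., simulated from the pupil geometry).

```python
import numpy as np

def strehl_ratio(img, img_diff):
    """Eq. (26): (peak / total energy) relative to the diffraction-limited image, in %."""
    return (img.max() / img.sum()) / (img_diff.max() / img_diff.sum()) * 100.0
```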

After the blind deconvolution, the full width at half maximum (FWHM) of the image is improved. Fig. 9 compares the cross sections of Fig. 8(a) and Fig. 8(f). After post-processing, the FWHM of Fig. 8(f) is 4.1 pixels (about 0.15 arcsec), reaching the diffraction limit of the system; this is an improvement of 1.4 pixels over Fig. 8(a), a partially corrected image from the AO system.

Figure 8.

Multi-frame blurred images and the corresponding restored image. (a)-(e) the five blurred images picked out by frame selection; (f) the restored image and (h) the corresponding 3D view.

In summary, the image quality was fairly improved by the blind deconvolution, the energy became more concentrated, and the resolution was raised.

Figure 9.

Comparison of FWHM

Observations were also made of the binary star Hei42aa, with magnitudes 2.46 and 3.76. From 100 frames of short-exposure images recorded after the closed-loop correction of the AO system, the 5 frames with the smallest Shannon entropies were selected for blind deconvolution. Fig. 10(a) shows the average of the 5 degraded images, and Fig. 10(b) the restored image after 20 iterations; the two images have the same total energy. Panels (c) and (d) are the corresponding 3D diagrams of panels (a) and (b).

Figure 10.

Blurred images of double star and the restored image. (a) the average of blurred image; (b) the restored image; (c) and (d), the 3D view of (a) and (b)

After post-processing, the image energy becomes more concentrated on the two stars, making them more easily distinguishable. The separation between the two peaks is 7.9 pixels (0.29 arcsec), consistent with the actual spacing of 0.3 arcsec in the star map.

The experimental results show that, although the AO system can provide near-diffraction-limited results in real time, hardware limitations mean that the correction is generally partial. Post-processing based on frame selection and multi-frame blind deconvolution can effectively compensate for the insufficiency of the AO system and yield a corrected image of better quality.


3. Visual enhancement of adaptive optics images

Because of pollution by the various photoelectric noises of an AO imaging system, the degraded images are mostly not very clear, and weak edges are difficult to recognize. Visual enhancement of AO-corrected images is therefore necessary, to exhibit the abundant detail buried in large-scale noise and to enhance low-contrast edges [15]. This work is divided into three parts: noise reduction, edge smoothing, and contrast enhancement. To better illustrate the effectiveness of this processing, we use AO-corrected solar and photoreceptor-cell images as examples.

3.1. Noise reduction

In this section, we discuss noise-type detection and noise-intensity estimation. This part of the work is very important in image denoising, because it directs the choice of filtering algorithms and the control of the filtering parameters.

In an AO imaging system there are many noise sources, such as thermal noise, photon noise, dark-current noise, CCD readout noise, and camera background noise. It is very important to choose suitable denoising methods according to the noise type. In the following, we discuss two typical approaches, used to process infrared images and visible images, respectively.

For infrared images, camera background noise is the major noise source. This kind of noise is usually considered additive and can be expressed as follows:

J(i,j) = I(i,j) + N(i,j)    (27)

where J(i,j) is the actually captured image, I(i,j) is the unpolluted ideal image, and N(i,j) is the additive background noise.

To filter out the background noise, dark and flat-field correction is widely used. The basic idea is to measure the additive noise term N(i,j) in Eq. (27) and the flat-field coefficient. The correction can be formulated as follows:

g(i,j) = T(i,j) × |J(i,j) − N(i,j)|    (28)

where T(i,j) is the flat-field coefficient, g(i,j) is the filtered image, and |·| is the absolute-value operator.
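The correction itself is a one-line operation once the calibration frames are available; in this sketch `dark` is the recorded background N(i,j) and `flat` the flat-field coefficient T(i,j), both assumed to be measured beforehand.

```python
import numpy as np

def dark_flat_correct(raw, dark, flat):
    """Eq. (28): g = T x |J - N| (dark subtraction followed by flat-field scaling)."""
    return flat * np.abs(raw - dark)
```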

Fig. 11(a) and (b) show a solar granulation image before and after the dark and flat-field correction, respectively, acquired by the 37-element solar AO imaging system [1,2], and Fig. 11(c) shows the background noise. This experiment shows that the dark and flat-field correction suppresses the camera background noise effectively.

Figure 11.

(a) the solar image before flat-field correction; (b) the solar image after dark and flat-field correction; (c) the camera background noise image.

Because the camera background noise can be calibrated by the imaging system, reducing it is relatively simple. Unfortunately, the other noise sources in an AO imaging system are very difficult to identify, and it seems impossible to distinguish them from each other. Because the noise model of an AO imaging system is very complex, we suggest treating the remaining noise as “combined noise” and using hybrid filtering methods to suppress it. Moreover, we treat the combined noise as additive, because if in a real application it is better described as multiplicative, taking logarithms converts it into additive form:

J(i,j) = I(i,j) × N(i,j)  ⟹  log J(i,j) = log I(i,j) + log N(i,j)    (29)

To illustrate how to reduce the combined noise in AO images, we use human photoreceptor-cell images as an example; they were acquired by the table-top 37-element AO flood-illumination retinal imaging system [5].

Because we treat the combined noise as additive, a neighborhood-averaging algorithm can be used to suppress the undesirable pixel variance in flat areas. However, it is well known that simple neighborhood averaging blurs image edges severely. To solve this problem, the bilateral filter proposed by Tomasi and Manduchi [16] is used to reduce large-scale noise in the photoreceptor-cell image. Its basic idea is that relevant pixels must be close both geometrically and photometrically, and that the combination of pixels prefers near values to distant values in both domain and range. The method can be expressed as follows:

B(x,y) = k(x,y) Σ_{(i,j)∈G} I(i,j) × exp{ −[(x−i)² + (y−j)²]/(2σ²) − [I(x,y) − I(i,j)]²/(2δ²) }    (30)

where G is the filter window located at (x,y); I(i,j) denotes the pixels in the filter window; σ controls the geometric closeness; δ is the RMS of the image noise, standing for the noise intensity; and k(x,y) is used for normalization.

In real applications, the result of the bilateral filter depends strongly on the noise-intensity parameter: if δ is too large, the bilateral filter degrades into a Gaussian filter and edges are blurred; if δ is too small, the filter cannot suppress the noise sufficiently. To overcome this difficulty, we select a noise-estimation area in the background. Fig. 12 shows a real human photoreceptor-cell image, in which a 128×128-pixel estimation area is selected in the bottom-left corner.

Figure 12.

The photoreceptor cell image with noise estimation area

Fig. 13(a) shows the noise area in the background, and Fig. 13(b) displays the gray-level histogram of this area, from which we compute the noise intensity, namely the parameter δ used in the bilateral filter:

δ = √[ (1/(m×n)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} (g(i,j) − ḡ)² ]    (31)

where m and n are the dimensions of the estimation zone, g(i,j) represents the pixels in the zone, and ḡ is their mean value.

Figure 13.

(a) the noise in the background; (b) the gray-level histogram in the noise estimation area
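A straightforward (unoptimised) sketch of the noise-controlled bilateral filter of Eqs. (30)-(31); the window radius and the geometric σ are illustrative choices, and δ is estimated from a background patch as in Fig. 12.

```python
import numpy as np

def estimate_noise_rms(patch):
    """Eq. (31): RMS of a background patch, used as the range parameter delta."""
    return np.sqrt(np.mean((patch - patch.mean()) ** 2))

def bilateral_filter(img, delta, radius=3, sigma=2.0):
    """Eq. (30): weights combine geometric closeness (sigma) and photometric similarity (delta)."""
    ny, nx = img.shape
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, radius, mode="reflect")
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dom = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))   # domain (geometric) kernel
    for y in range(ny):
        for x in range(nx):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - img[y, x]) ** 2 / (2 * delta ** 2))  # range kernel
            w = dom * rng
            out[y, x] = np.sum(w * win) / np.sum(w)         # k(x,y) normalisation
    return out

# Usage (indices illustrative): delta from a 128 x 128 background corner, as in Fig. 12
# delta = estimate_noise_rms(img[-128:, :128]); filtered = bilateral_filter(img, delta)
```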

Fig. 14(a) shows part of the photoreceptor-cell image; Fig. 14(b) and (c) show the results of the bilateral filter and the Gaussian filter, respectively. With the noise-intensity estimation, the bilateral filter suppresses the large-scale noise effectively, whereas the traditional Gaussian filter blurs the image seriously.

Figure 14.

(a) the photoreceptor cell image; (b) the result of the bilateral filter controlled by noise intensity estimation; (c) the result of the Gaussian filter

3.2. Edge smoothing

In this section, we discuss a difficult task in low-level image processing: how to smooth image edges polluted by noise without blurring them. In our experiments, we find that coherent diffusion, under delicate control, can solve this problem.

The partial differential equation (PDE) based anisotropic diffusion introduced by Perona and Malik [17] shows good noise reduction while maintaining edges. It uses an edge-stopping function to carry out thermal diffusion between adjacent pixels: diffusion proceeds within homogeneous areas and stops near edges. The discrete P-M equation can be expressed as follows:

I_{i,j}^{t+1} = I_{i,j}^{t} + λ[c_N ∇_N I + c_S ∇_S I + c_E ∇_E I + c_W ∇_W I]    (32)

where I_{i,j}^t and I_{i,j}^{t+1} are the image pixels at times t and t+1; c_α is the conduction coefficient, with α denoting the four diffusion directions; λ controls the diffusion intensity; and ∇ is the gradient operator.
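One explicit time step of Eq. (32) can be written as follows; the exponential edge-stopping function and the values of λ and κ are the classical choices of Perona and Malik [17], used here for illustration.

```python
import numpy as np

def perona_malik_step(I, lam=0.2, kappa=15.0):
    """One discrete P-M diffusion step (Eq. 32) over the four directions N, S, E, W."""
    dN = np.roll(I, 1, axis=0) - I
    dS = np.roll(I, -1, axis=0) - I
    dE = np.roll(I, -1, axis=1) - I
    dW = np.roll(I, 1, axis=1) - I
    c = lambda d: np.exp(-(d / kappa) ** 2)    # conduction: ~1 in flat areas, ~0 across edges
    return I + lam * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
```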

Generally, PDE-based algorithms can be roughly divided into two classes: those based on local pixel statistics, which balance noise smoothing against edge preservation, e.g., P-M anisotropic diffusion and speckle-reducing anisotropic diffusion [18,19]; and those based on local geometry, which control the smoothing behavior in both intensity and direction, e.g., coherence-enhancing diffusion filtering [20] and nonlinear coherent diffusion [21].

In our study, we find that if the image contains structural features, geometric anisotropic diffusion performs much better than statistics-based anisotropic diffusion: it not only avoids blurring edges but also coherently enhances jagged edges. This kind of method must be used with care, however: large-scale noise must be sufficiently suppressed beforehand, to avoid the appearance of false edges. In our filtering framework, this has already been done by the bilateral filtering of the previous section. To smooth the photoreceptor-cell edges and filter the residual noise, nonlinear coherent diffusion is used, which can be expressed as follows:

∂I(i,j,t)/∂t = div(D ∇I_{i,j})    (33)

where D is the diffusion tensor, which is usually constructed from the structure tensor C:

C = [ Σ_w I_x²   Σ_w I_x I_y ;  Σ_w I_x I_y   Σ_w I_y² ]    (34)

in which w is the detection window, used to capture local image patterns. After eigenvalue decomposition, the structure tensor C can be written as follows:

C = [U V] diag(μ1, μ2) [U V]ᵀ    (35)

where the eigenvectors U and V correspond to the edge tangent direction and the edge normal direction, and the eigenvalues μ1 and μ2 stand for the strengths of the maximum and minimum brightness variations.

As is well known, the two-dimensional structure tensor corresponds to an ellipse (see Fig. 15), whose geometric properties can be summarized as follows [22]:

  1. μ1 ≈ μ2 ≈ 0: the region is flat, with no preferred direction;
  2. μ1 ≫ μ2 ≈ 0: the region contains an edge, with strong variation only across it;
  3. μ1 ≈ μ2 ≫ 0: the region contains a corner, with strong variation in all directions.

Figure 15.

Geometric expression of structure tensor

Within the framework of structure-tensor-driven diffusion, the structure tensor C can be used to design the diffusion behavior, based on the principle that the diffusion tensor D should have the same principal directions as C but different diffusion intensities. We use this principle to design a new diffusion tensor:

D(x,y) = [U V] diag(β_min, β_max) [U V]ᵀ    (36)

where β_min and β_max are the diffusion intensities along the tangent and normal directions, which take the form:

β_min = { 0,  if λ_max − λ_min < δ_min and λ_min > δ_max;  α,  otherwise }
β_max = { α × [1 − ((λ_max − λ_min)/λ_max)²],  if β_min ≠ 0;  0,  otherwise }    (37)

In our method, the coherent diffusion exhibits four different behaviors: if the detection window lies on a corner, then λ_max ≈ λ_min > δ_max and β_max = β_min = 0, so the diffusion stops there, see Fig. 16(b-a); if the window lies on an edge, then λ_max ≫ λ_min ≈ 0 and the coherent diffusion acts along the tangent direction, see Fig. 16(b-b) and (b-c); finally, if the window lies in a flat region, then λ_max ≈ λ_min ≈ 0 and β_max ≈ β_min ≈ α, so isotropic diffusion is carried out, see Fig. 16(b-d).

Figure 16.

(a) the photoreceptor image with four structure features; (b) the four diffusion behaviors used in our method
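The construction of Eqs. (34)-(37) can be sketched per pixel as below, using the analytic eigendecomposition of the 2×2 structure tensor; since Eq. (37) is reconstructed here from the behaviours described above, the threshold logic should be read as an interpretation, and the gradient and window schemes are illustrative.

```python
import numpy as np

def box(A, w=2):
    """(2w+1) x (2w+1) window average for the structure-tensor entries (Eq. 34)."""
    kern = np.ones(2 * w + 1) / (2 * w + 1)
    A = np.apply_along_axis(np.convolve, 0, A, kern, mode="same")
    return np.apply_along_axis(np.convolve, 1, A, kern, mode="same")

def diffusion_tensor(I, alpha=1.0, d_min=10.0, d_max=200.0, eps=1e-12):
    """Diffusion tensor components (d11, d12, d22) following Eqs. (34)-(37)."""
    Iy, Ix = np.gradient(I)
    a, b, c = box(Ix * Ix), box(Ix * Iy), box(Iy * Iy)      # Eq. (34)
    root = np.sqrt(((a - c) / 2) ** 2 + b ** 2)
    lam_max, lam_min = (a + c) / 2 + root, (a + c) / 2 - root
    # Eq. (37): stop at corners; elsewhere diffuse, weakly across edges
    corner = (lam_max - lam_min < d_min) & (lam_min > d_max)
    b_min = np.where(corner, 0.0, alpha)                    # tangent direction
    b_max = np.where(corner, 0.0,
                     alpha * (1 - ((lam_max - lam_min) / (lam_max + eps)) ** 2))
    theta = 0.5 * np.arctan2(2 * b, a - c)     # orientation of the lam_max eigenvector
    cs, sn = np.cos(theta), np.sin(theta)
    # Eq. (36): D = b_min * U U^T + b_max * V V^T, assembled component-wise
    d11 = b_min * sn ** 2 + b_max * cs ** 2
    d12 = (b_max - b_min) * cs * sn
    d22 = b_min * cs ** 2 + b_max * sn ** 2
    return d11, d12, d22                       # feed these into div(D grad I), Eq. (33)
```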

In real applications, the parameters δ_min and δ_max used in the coherent diffusion were set to 10 and 200, and the iteration was stopped when the mean square error (MSE) became smaller than 0.001:

MSE = (1/(M×N)) Σ_{i,j} (I_t(i,j) − I_{t−1}(i,j))²    (38)

where M and N are the image dimensions, and I_t(i,j) and I_{t−1}(i,j) are the filtered images at times t and t−1.

Fig. 17(a) shows part of a photoreceptor-cell image; Fig. 17(b) is the result of coherent diffusion controlled by our diffusion tensor; Fig. 17(c) is the result of the traditional nonlinear coherent diffusion. Our method filters the residual noise left by the bilateral filtering and smooths the irregular photoreceptor-cell edges; most importantly, the combination of bilateral filtering and coherent diffusion effectively avoids the false edges often produced by traditional coherent diffusion methods.

Figure 17.

(a) the photoreceptor image; (b) the result of our method; (c) the result of the traditional nonlinear coherent diffusion.

3.3. Contrast enhancement

In this section, we discuss two contrast-enhancement methods, operating in the spatial and frequency domains respectively, and we show that they are more effective when combined with prior knowledge.

There are many contrast-enhancement methods in low-level image processing, such as gray-level transformations, histogram equalization, and frequency-domain high-pass filtering. However, these common methods usually improve image contrast only partially in real applications. In the following, we introduce a method designed specifically for photoreceptor-cell images, and a band-pass filtering method used in other work [23].

In our method, image edges and brightness are extracted, and both are merged into the coherent-diffusion result. The proposed algorithm takes the form:

R = a×E(I) + b×B(I) + c×I    (39)

where a, b, and c are weight coefficients; I is the coherently diffused image; E(·) is the edge image, detected by the Sobel operator; and B(·) is the brightness image, extracted by two overlapping windows (see Fig. 18), defined as follows:

B(x,y) = (1/M1) Σ_{(i,j)∈M1} I(i,j) × g(i,j) − (1/(M2−M1)) [ Σ_{(i,j)∈M2} I(i,j) × g(i,j) − Σ_{(i,j)∈M1} I(i,j) × g(i,j) ]    (40)

where g(i,j) is the normalized Gaussian function, and M1 and M2 are the detection windows.

Figure 18.

Retinal cell image brightness detecting.

In our study, we have the prior knowledge that the size of human photoreceptor cells lies in the range (3 μm, 6 μm), the magnification of the AO optical system is 11.7, and the camera pixel size is 8 μm. With this information, the cell size in the image is within the range of (5×5, 9×9) pixels, so the sizes of the two overlapping windows are set to 3×3 and 11×11 pixels, respectively.

There are three relationships between the detection windows and the photoreceptor cells to consider: if M1 lies in a bright region and M2 covers a dark region, the output of B(x,y) is positive, see Fig. 18(a); conversely, if M1 lies in a dark region and M2 covers a bright region, the output of B(x,y) is negative, see Fig. 18(b); finally, if both M1 and M2 lie in a flat region, the output of B(x,y) is close to zero.
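A compact sketch of the two-window detector of Eq. (40), with the 3×3 and 11×11 sizes derived above; for brevity, uniform window means replace the normalised Gaussian weighting g(i,j) of the chapter.

```python
import numpy as np

def box_mean(A, r):
    """Mean over a (2r+1) x (2r+1) window around each pixel."""
    kern = np.ones(2 * r + 1) / (2 * r + 1)
    A = np.apply_along_axis(np.convolve, 0, A, kern, mode="same")
    return np.apply_along_axis(np.convolve, 1, A, kern, mode="same")

def brightness_map(I, r1=1, r2=5):
    """Eq. (40): mean over M1 (3x3) minus mean over the annulus M2 - M1 (11x11 minus 3x3)."""
    inner = box_mean(I, r1)
    n1, n2 = (2 * r1 + 1) ** 2, (2 * r2 + 1) ** 2
    outer = (box_mean(I, r2) * n2 - inner * n1) / (n2 - n1)
    return inner - outer                       # positive on bright cells, negative on dark gaps
```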

To illustrate the contrast-enhancement processing more clearly, the intermediate edge image and brightness image are shown in Fig. 19(a) and (b), and Fig. 19(c) is the merged image.

Figure 19.

(a) edge image; (b) brightness image; (c) merged result of edge, brightness, and coherently diffused image.

In real applications, our method was applied to two groups of images. Group 1 was captured with a dark-adapted pupil, and group 2 with a dilated pupil (Tropicamide 1%). Fig. 20(a) and (c) present examples of the original AO photoreceptor-cell images of group 1 and group 2, respectively, and the corresponding results are shown in Fig. 20(b) and (d).

To provide a quantitative measure of our results, the contrast criterion is used:

r = (ΔI)_rms / Ī × 100%    (41)

The quality indexes defined by Eq. (41) before and after processing are 4.84% and 8.88% for Fig. 20(a) and (b), and 3.24% and 5.61% for Fig. 20(c) and (d). To further illustrate the validity of our method, Fig. 21(a) and (b) present the contrast indexes of 15 images from each group, showing that the contrast of AO photoreceptor-cell images is effectively improved by our method.

Figure 20.

Results of hybrid filtering and enhancing: (a) image captured with dilated pupil; (b) the processed result of (a); (c) image captured with dark-adapted pupil; (d) the processed result of (c).

Figure 21.

Statistical contrast improvements of both groups: (a) improvement of contrast criterions of group 1; (b) improvement of contrast criterions of group 2.

In contrast to our hybrid filtering method, frequency-domain algorithms can also be used in AO image processing. Here we introduce a frequency-domain enhancement method for photoreceptor-cell images. As is well known, frequency-domain processing tends to enhance image details and amplify noise at the same time, so it is important to estimate the spatial frequency of the imaged objects. For human photoreceptor cells, the spatial frequency can be estimated as follows:

f = 2 × f_eye × tan(0.5°) / d_cone  (cycles/degree)    (42)

where f_eye is the focal length of the tested human eye, with the empirical value 17 mm, and d_cone is the photoreceptor-cell diameter, in the range 3-6 μm. The reasonable photoreceptor-cell spatial frequency is therefore in the range of 50-100 cycles/degree. Fig. 22 gives an illustration: Fig. 22(a) shows the power spectrum of a photoreceptor-cell image, and Fig. 22(b) the corresponding spectrum curve, from which the start and stop frequencies of the photoreceptor cells can be read.

Figure 22.

Illustration of spatial frequency estimation: (a) power spectrum of photoreceptor cell image; (b) corresponding spectrum curve

With this information, Butterworth band-pass filtering can be used to suppress the high-frequency noise above the cell stop frequency and to enhance the image energy within the start-stop band. Fig. 23(a) is a typical photoreceptor-cell image captured at 1.0 degree of central eccentricity; Fig. 23(b) shows the result of the frequency-domain processing; Fig. 23(c) is the result of our hybrid filtering and enhancing method.

Figure 23.

The comparison of frequency enhancement with the hybrid filtering and enhancing method. (a) the photoreceptor cell image; (b) result of frequency processing; (c) result of our hybrid filtering and enhancing method.
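The frequency estimate of Eq. (42) and a Butterworth band-pass can be sketched as follows; the conversion from cycles/degree to the normalised image frequencies f_lo and f_hi depends on the system magnification and pixel size quoted above and is left to the caller, and the filter order is illustrative.

```python
import numpy as np

def cone_frequency(d_cone_um, f_eye_mm=17.0):
    """Eq. (42): cone spatial frequency in cycles/degree (3 um -> ~99, 6 um -> ~49)."""
    return 2 * f_eye_mm * 1e3 * np.tan(np.deg2rad(0.5)) / d_cone_um

def butterworth_bandpass(img, f_lo, f_hi, order=4):
    """Band-pass in normalised frequency: high-pass at f_lo times low-pass at f_hi."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    rho = np.sqrt(fy ** 2 + fx ** 2) + 1e-12   # radial frequency in cycles/pixel
    H = 1.0 / (1 + (f_lo / rho) ** (2 * order)) / (1 + (rho / f_hi) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```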

Our experiments show that the hybrid filtering and enhancing methods can effectively improve photoreceptor-cell image quality; owing to the complexity of the noise model of an AO imaging system, the combined denoising method performs much better than a traditional single filter.

References

  1. Changhui Rao, Lei Zhu, Xuejun Rao, et al., "Performance of the 37-element solar adaptive optics for the 26cm solar fine structure telescope at Yunnan Astronomical Observatory," Applied Optics 49(31), G129-G135 (2010).
  2. Changhui Rao, Lei Zhu, Xuejun Rao, et al., "37-element solar adaptive optics for the 26cm solar fine structure telescope at Yunnan Astronomical Observatory," Chinese Optics Letters 8(10), 966-968 (2010).
  3. Changhui Rao, Wenhan Jiang, Yudong Zhang, et al., "Upgrade on the 61-element adaptive optical system for the 1.2m telescope of Yunnan Observatory," Proc. SPIE 5490, 943 (2004).
  4. Changhui Rao, Wenhan Jiang, Yudong Zhang, et al., "Performance on the 61-element upgraded adaptive optical system for the 1.2m telescope of the Yunnan Observatory," Proc. SPIE 5639, 11 (2004).
  5. Ning Ling, Yudong Zhang, Xuejun Rao, et al., "Small table-top adaptive optical systems for human retinal imaging," Proc. SPIE 4825, 99-108 (2002).
  6. J.-P. Véran, F. Rigaut, H. Maître, and D. Rouan, "Estimation of the adaptive optics long-exposure point spread function using control loop data," J. Opt. Soc. Am. A 14, 3057-3069 (1997).
  7. J. N. Caron, N. M. Namazi, R. L. Lucke, C. J. Rollins, and P. R. Lynn Jr., "Blind data restoration with an extracted filter function," Optics Letters 26, 1164-1166 (2001).
  8. J. N. Caron, N. M. Namazi, and C. J. Rollins, "Noniterative blind data restoration by use of an extracted filter function," Applied Optics 41(32), 6884-6889 (2002).
  9. Yoshifumi Sudo, Naoshi Baba, Noriaki Miura, Satoru Ueno, and Reizaburo Kitai, "Application of self-deconvolution method to shift-and-add solar imaging," Applied Optics 45, 2707-2710 (2006).
  10. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing (2nd edition), Pearson Education North Asia Limited (2002).
  11. Tian Yu, Rao Chang-Hui, et al., "Hybrid deconvolution of adaptive optics retinal images from wavefront sensing," Chinese Physics Letters 25, 105-107 (2008).
  12. Rao Changhui, Zhang Xuejun, Jiang Wenhan, et al., "Image deconvolution from Hartmann-Shack wavefront sensing: indoor experimental results," Acta Optica Sinica 22(7), 789-793 (2002).
  13. J. J. Green and B. R. Hunt, "Improved restoration of space object imagery," J. Opt. Soc. Am. A 16(12), 2859-2865 (1999).
  14. Noriaki Miura, "Blind deconvolution under band limitation," Optics Letters 28(23), 2312-2314 (2003).
  15. Hua Bao, Changhui Rao, Yudong Zhang, Yun Dai, Xuejun Rao, and Yubo Fan, "Hybrid filtering and enhancement of high-resolution adaptive-optics retinal images," Optics Letters 34(22), 3484-3486 (2009).
  16. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," Proc. IEEE Int. Conf. on Computer Vision, Bombay, India, 839-846 (1998).
  17. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Machine Intell. 12(7), 629-639 (1990).
  18. Y. Yu and S. T. Acton, "Speckle reducing anisotropic diffusion," IEEE Trans. Image Process. 11(11), 1260-1270 (2002).
  19. Y. Yu and S. T. Acton, "Edge detection in ultrasound imagery using the instantaneous coefficient of variation," IEEE Trans. Image Process. 13(12), 1640-1655 (2004).
  20. J. Weickert, "Coherence-enhancing diffusion filtering," International Journal of Computer Vision 31(2/3), 111-127 (1999).
  21. K. Z. Abd-Elmoniem, A.-B. M. Youssef, and Y. M. Kadah, "Real-time speckle reduction and coherence enhancement in ultrasound imaging via nonlinear anisotropic diffusion," IEEE Trans. Biomedical Eng. 49(9), 997-1014 (2002).
  22. C. Harris and M. Stephens, "A combined corner and edge detector," Proc. Alvey Vision Conference, Manchester, UK, 147-151 (1988).
  23. J. Z. Liang and D. R. Williams, "Aberrations and retinal image quality of the normal human eye," J. Opt. Soc. Am. A 14(11), 2873-2883 (1997).
