
Adaptive Optics Confocal Scanning Laser Ophthalmoscope

By Jing Lu, Hao Li, Guohua Shi and Yudong Zhang

Submitted: March 26th 2011. Reviewed: September 1st 2011. Published: January 20th 2012.

DOI: 10.5772/31117


1. Introduction

The scanning laser ophthalmoscope (SLO) was first presented by R. H. Webb in 1987 (R. H. Webb, et al, 1987). It works like a scanning laser microscope, except that the human eye serves as the objective lens and the retina is usually the sample being imaged. With its high-contrast real-time imaging and axial sectioning capability, the SLO has many applications in retinal imaging (Austin Roorda, et al, 2002), eye tracking (D. X. Hammer, et al, 2005), hemodynamics (R. Daniel Ferguson, Daniel X. Hammer, 2004), tomography (Fernando Romero-Borja, et al, 2005), etc. However, ocular aberrations dramatically degrade image resolution (R. H. Webb, et al, 1987), so conventional SLOs typically have a large field of view (usually 10°~20°) and low resolution (lateral resolution > 30 μm, axial resolution > 300 μm for a 6 mm pupil).

The University of Rochester first used adaptive optics (AO) to compensate for the monochromatic aberrations of the human eye in a fundus camera (J. Liang, et al, 1997). The Institute of Optics and Electronics of China then developed a table-top adaptive optical system for human retinal imaging (Yudong Zhang, et al, 2002, Ning Ling, Yudong Zhang, et al, 2002, Ning Ling, Xuejun Rao, et al, 2002, Ning Ling, Yudong Zhang, et al, 2004) and obtained high resolution retinal images over a 6°×6° field of view (Ling Ning, et al, 2005). The first AOCSLO was reported by Austin Roorda (Austin Roorda, et al, 2002); it yielded the first real-time images of photoreceptors and blood flow in the living human retina at video rates, with lateral resolution < 2.5 μm and axial resolution < 100 μm. In 2005, Indiana University developed a tracking AOCSLO (D. X. Hammer, et al, 2005), and David Merino combined AOCSLO and AOOCT (D. Merino, Chris Dainty, 2006) to improve both lateral and axial resolution.

2. Basic concept

The schematic of the AOCSLO system is shown in Fig.1. A beam of light emitted from the SLD is collimated, passes through various optical components, and then enters the eye, which focuses it to a small spot on the retina. Because the eye's pupil plane is conjugate to the two scanning mirrors (horizontal scanner: 16 kHz resonant scanner; vertical scanner: 30 Hz galvanometric scanner), the angle of incidence of the incoming beam is continuously changed with the line of sight as a pivot, which produces a raster scan of the focused spot across the retina. Light scattered back from the eye passes through the same optical path as the incoming beam and is then collected and focused onto a pinhole that is conjugate to the focused spot on the retina. A photomultiplier tube (PMT) placed behind the pinhole records the intensity of the scattered light for each position of the spot. By synchronizing the signals from the PMT and the two scanning mirrors, the retinal image can be built up point by point.

Figure 1.

The schematic of the AOCSLO system. HS, horizontal scanner; VS, vertical scanner; WS, wavefront sensor; DM, deformable mirror; PMT, photomultiplier tube; BS1, BS2, beam splitters; CL, collecting lens; FM1, FM2, fold mirrors; SLD, 680 nm; M1~M8, spherical mirrors; PH, pinhole; TL, trial lens; FT, fixation target.

Light scattered back from the retina is split into two parts: one part enters the PMT for imaging, and the other is captured by a Shack-Hartmann wavefront sensor. This arrangement avoids the chromatic aberration that would arise from using different wavelengths for imaging and wavefront sensing. To compensate for ocular aberrations, a deformable mirror is placed at a pupil conjugate plane.
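To make the image-formation step concrete, the short Python sketch below reshapes a synchronized stream of PMT intensity samples into a two-dimensional frame, one image line per horizontal-scanner sweep. This is only an illustrative sketch: the function name, the 512×512 geometry and the random stand-in data are assumptions, not values from the actual instrument, which digitizes the PMT signal against the scanner synchronization signals.

import numpy as np

def assemble_frame(pmt_samples, samples_per_line, lines_per_frame):
    # Reshape a synchronized 1-D stream of PMT intensity samples into a
    # 2-D frame: each horizontal-scanner sweep contributes one image line,
    # each vertical-scanner sweep one frame.
    n = samples_per_line * lines_per_frame
    frame = np.asarray(pmt_samples[:n], dtype=float)
    return frame.reshape(lines_per_frame, samples_per_line)

# Example with a stand-in sample stream and a hypothetical 512 x 512 raster.
stream = np.random.rand(512 * 512)
image = assemble_frame(stream, samples_per_line=512, lines_per_frame=512)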

2.1. Optical design

The main difference between the SLO and AOCSLO optical paths is the set of conjugate planes. There are seven pupil conjugate planes in the AOCSLO optical system: the actual pupil, HS, VS, DM, the lenslet array of the WS, and two other positions. Preserving these conjugate planes is essential, because the WS must measure the exact aberrations of the eye. If the pupil were not conjugate to the scanning mirrors, the beam would wander across the pupil as it is scanned, and neither the WS nor the DM would see a stable aberration pattern.

The light delivery path is a double-pass system that includes the light source, the two scanners, the WS, the DM and the PMT. It can be thought of as a series of telescopes that relay the pupil to the various conjugate elements. Spherical mirrors are used for the pupil relays to prevent back reflections (which are particularly harmful in a double-pass system) and chromatic aberrations. The drawback of spherical mirrors is that they must be used off-axis, which generates off-axis aberrations such as astigmatism and coma. Coma can be minimized by adjusting the angles and distances between the spherical mirrors; astigmatism is compensated by a trial lens.

The beam is scanned across the retina by a horizontal scanner (HS) and a vertical scanner (VS), which continuously change the system aberration. Since the AO system cannot keep up with the speed of the scanning mirrors (16 kHz for the HS), the optical system should be designed for diffraction-limited performance over the entire field while the DM is held flat. For high resolution retinal imaging, a 1°×1° field is used; zoomed out to a larger field, the system is capable of 3°×3°. At each position of the HS and VS a new optical system configuration is encountered, yielding a different wavefront aberration, so a continuous x-y raster scan actually contains an infinite number of system configurations. To represent this realistically, the tilt positions of the scanning mirrors are divided into seventeen configuration points spanning a grid on the retina, as shown in Fig.2.

Figure 2.

Multi-configuration of raster scanning.

The tilt angles and the distances between mirrors are set as variables, and the RMS wavefront errors of the seventeen configurations are the optimization targets. The configurations of the 1°×1° field are weighted more heavily because this field of view is used for high resolution imaging, where image quality is most critical. The spot diagram of the seventeen configurations after optimization is shown in Fig.3, in which each circle is the Airy disk of the corresponding configuration. Most of the spot radii are smaller than the Airy radius. In the peripheral configurations, coma and astigmatism are much larger than in the central configuration.

Figure 3.

Spot diagram of the seventeen configurations.

2.2. Adaptive optics in AOCSLO

The AO system compensates for the aberrations of the eye's optics. A Shack-Hartmann wavefront sensor is commonly used. The lenslet array is made up of 5-mm focal length lenslets, each with an actual diameter of 200 μm. In our AOCSLO system, the wavefront sensor has square sub-apertures, 11 across the diameter and a total of 97 lenslets inside a 6-mm pupil (see Fig. 4). The centroid of the focused spot from each lenslet is estimated by calculating the first moment of intensity at every spot location.

Figure 4.

Shack-Hartmann wavefront sensor’s output image.
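The first-moment centroid estimate described above can be written compactly. The NumPy sketch below is a minimal version; the background subtraction and the synthetic test spot are illustrative assumptions rather than the actual sensor processing.

import numpy as np

def spot_centroid(subaperture):
    # First-moment (centre-of-mass) estimate of the focal-spot position
    # inside one lenslet sub-image.
    img = np.asarray(subaperture, dtype=float)
    img = img - img.min()            # simple background removal (assumption)
    total = img.sum()
    if total == 0.0:
        return np.nan, np.nan
    rows, cols = np.indices(img.shape)
    return (cols * img).sum() / total, (rows * img).sum() / total

# Example: a synthetic Gaussian spot in a 16 x 16 pixel sub-aperture.
r, c = np.mgrid[0:16, 0:16]
spot = np.exp(-((c - 9.3) ** 2 + (r - 6.8) ** 2) / 4.0)
print(spot_centroid(spot))           # approximately (9.3, 6.8)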

A 37-channel deformable mirror (DM) is placed in the optical path to compensate for the aberrations. The DM is placed before the raster scanners to minimize the size of the mirrors required for relaying the light. The clear diameter of the DM is 40 mm, so the pupil has to be magnified to fill the mirror aperture; the size of the DM is the primary reason for the large size of the instrument. Aberrations are corrected in both the in-going and the out-going light paths. Correcting aberrations on the way into the eye produces a smaller focal spot on the retina and therefore higher resolution of retinal features. Correcting the aberrations on the way out focuses the light to a more compact spot in the pinhole plane, which increases the axial resolution. Figure 5 shows the wavefront sensor geometry superimposed on the DM actuator position geometry.

Figure 5.

Centers of the lenslets of the Shack-Hartmann wavefront sensor projected onto the DM actuator array.

2.3. Sectioning analysis

The AOCSLO has a relatively high axial resolution, which makes retinal sectioning possible. An easily implemented method is to use the deformable mirror (DM), as illustrated in Fig.6. In Fig.6 (a), the beam is focused on the photoreceptor plane when the DM only corrects the eye's aberrations. Extra defocus is then added on the DM, which focuses the beam on the retina at a different depth, as shown in Fig.6 (b). Light scattered back from this plane follows the same path as the incoming beam and passes through the DM, carrying the extra defocus a second time. As a result, the focal position in the detection arm stays fixed, so there is no need to move the pinhole. This method has several advantages, including fewer moving elements and improved speed and accuracy. According to the eye model set up by Liou and Brennan in 1997, Fig.6 (c) shows the relationship between the defocus on the DM and the shift of the focal depth. The human retina is about 300 μm thick, so an extra defocus of 4 μm is enough to section the whole retina.

Figure 6.

Using the deformable mirror (DM) for retinal sectioning.

3. Result

3.1. Retina imaging

The AOCSLO system was tested on adult volunteers whose pupils were dilated to 6 mm with one drop of Mydrin-P. The laser power at the pupil plane was 100 μW, more than 10 times below the maximum permissible exposure specified by the American National Standards Institute for the safe use of lasers (Laser Institute of America, 2000). The pinhole size was 40 μm. Fig.7 shows retinal images taken from the same area before and after closing the AO loop. With AO correction, contrast and resolution were dramatically improved, revealing the cone mosaic of the human retina.

Figure 7.

Retinal image before and after closing the AO loop. Scale bar: 100 μm.

3.2. Retina sectioning

The axial resolution of the AOCSLO was measured experimentally and compared with the theoretical ideal axial resolution (see Fig.8). We used a model eye (6 mm clear aperture, 100 mm focal length doublet lens and a ground glass with 1% reflectivity) to measure the axial resolution. The ground glass was moved through different positions near focus, and at each defocus location the intensity behind the pinhole was recorded. In this way the point spread function (PSF) through focus could be measured.

Figure 8.

Axial resolution measured experimentally for different pinhole sizes. The maximum intensity of each PSF is normalized.
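Because the through-focus PSF is sampled at discrete ground-glass positions, the measured axial resolution is conveniently summarized by the full width at half maximum of the normalized profile. The Python sketch below shows one way to estimate it; the function name, the sampling grid and the synthetic Gaussian profile are illustrative assumptions, not the actual measurement data of Fig.8.

import numpy as np

def axial_fwhm(z, intensity):
    # Estimate the full width at half maximum of a measured through-focus
    # intensity profile by linear interpolation at the half-maximum level.
    # Assumes z is increasing and the peak lies well inside the scan range.
    z = np.asarray(z, dtype=float)
    i = np.asarray(intensity, dtype=float)
    i = i / i.max()                              # normalize peak to 1
    above = np.where(i >= 0.5)[0]
    lo, hi = above[0], above[-1]
    z_lo = np.interp(0.5, [i[lo - 1], i[lo]], [z[lo - 1], z[lo]])
    z_hi = np.interp(0.5, [i[hi + 1], i[hi]], [z[hi + 1], z[hi]])
    return z_hi - z_lo

# Example with a synthetic Gaussian axial response sampled every 10 um.
zs = np.arange(-200.0, 201.0, 10.0)
profile = np.exp(-zs ** 2 / (2 * 40.0 ** 2))
print(axial_fwhm(zs, profile))                   # about 94 um (2.355 * sigma)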

As discussed in Section 2.3, the DM can be used for retinal sectioning. In practice, the defocus was adjusted over a range from -1.5 to +1.5 μm, which corresponds to an axial range of 200 μm in the retina (each step of 0.375 μm corresponds to about 25 μm). Figure 9 is a montage showing an image obtained at each axial location.

Figure 9.

Different layers of the human retina. The sectioning passes through the nerve fiber layer, blood vessels and the photoreceptor layer.

4. Real-time deconvolution of AOCSLO image

The AO compensation of ocular aberrations is not perfect, and some residual aberration remains. On one hand, the point-spread function (PSF) derived from the wavefront sensor measurement suffers from various kinds of noise, and it is only the PSF of the optical system; the PSF of the total system is affected by additional factors, such as the image detector. On the other hand, wavefront measurements with a Shack-Hartmann sensor lack information on ocular scatter, which may introduce wavefront measurement errors (Fernando Díaz-Doutón, et al, 2006, Justin M. Wanek, et al, 2007). Thus, image deconvolution is necessary to improve the quality of the images. We used a blind deconvolution algorithm, the Incremental Wiener filter (Mou-yan Zou, Unbehauen Rolf, 1995), to recover the images.

In the Incremental Wiener filter, an error array is defined as

S(\mu,\nu) = Y(\mu,\nu) - \hat{X}(\mu,\nu)\,\hat{H}(\mu,\nu),    (1)

where Y(μ,ν) is the spectrum of the recorded image, and X̂(μ,ν) and Ĥ(μ,ν) are the estimates of the spectra of the undistorted image and of the PSF, respectively. The estimates given by the Incremental Wiener filter are

\hat{X}_{\mathrm{new}}(\mu,\nu) = \hat{X}_{\mathrm{old}}(\mu,\nu) + \frac{\hat{H}^{*}(\mu,\nu)\,S(\mu,\nu)}{|\hat{H}(\mu,\nu)|^{2} + \gamma_{x}},    (2)

\hat{H}_{\mathrm{new}}(\mu,\nu) = \hat{H}_{\mathrm{old}}(\mu,\nu) + \frac{\hat{X}^{*}(\mu,\nu)\,S(\mu,\nu)}{|\hat{X}(\mu,\nu)|^{2} + \gamma_{h}}.    (3)

where * denotes complex conjugation, and γ_x and γ_h are small positive constants given by

\gamma_{x} = 0.2\,|\hat{H}(0,0)|^{2},    (4)

\gamma_{h} = 0.2\,|\hat{X}(0,0)|^{2}.    (5)

Eqs.(2) and (3) are iterated to estimate the undistorted image and the PSF. The Incremental Wiener filter is often used in conjunction with object-domain constraints; the constraint that both the image and the PSF must be positive is usually applied.

The initial value of Ĥ(μ,ν) can be calculated from the wavefront measurement, and the initial value of X̂(μ,ν) can be obtained with the Wiener filter, which is defined by

\hat{X}(\mu,\nu) = \frac{\hat{H}^{*}(\mu,\nu)\,Y(\mu,\nu)}{|\hat{H}(\mu,\nu)|^{2} + \lambda},    (6)

where λ is the regularization parameter. The optimum value of λ for each frequency can be calculated if the power spectra of the undistorted image and of the noise are known, but in many practical situations this information is not available and λ is set empirically.
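The following NumPy sketch shows how Eqs. (1)-(6) fit together, including the Wiener initialization and the positivity constraints mentioned above. It is a minimal illustration under the assumption that the PSF and the image are sampled on the same grid with the PSF centred in the array; the function name, the synthetic test data and the default of twelve iterations with λ = 2 (the values reported below for our experiments) are for demonstration only, not the actual GPU implementation.

import numpy as np

def incremental_wiener(y, h0, iterations=12, lam=2.0):
    # Minimal sketch of the Incremental Wiener filter of Eqs. (1)-(6).
    # y:  recorded image; h0: initial PSF estimate (e.g. from the wavefront
    # sensor), assumed centred in an array of the same size as y.
    Y = np.fft.fft2(y)
    H = np.fft.fft2(np.fft.ifftshift(h0))              # move PSF peak to origin
    X = H.conj() * Y / (np.abs(H) ** 2 + lam)          # Wiener start, Eq. (6)
    for _ in range(iterations):
        S = Y - X * H                                  # error array, Eq. (1)
        gx = 0.2 * np.abs(H[0, 0]) ** 2                # Eq. (4)
        gh = 0.2 * np.abs(X[0, 0]) ** 2                # Eq. (5)
        X_new = X + H.conj() * S / (np.abs(H) ** 2 + gx)   # Eq. (2)
        H_new = H + X.conj() * S / (np.abs(X) ** 2 + gh)   # Eq. (3)
        # object-domain positivity constraints on both image and PSF
        x = np.maximum(np.fft.ifft2(X_new).real, 0.0)
        h = np.maximum(np.fft.ifft2(H_new).real, 0.0)
        X, H = np.fft.fft2(x), np.fft.fft2(h)
    return np.fft.ifft2(X).real, np.fft.ifft2(H).real

# Example: restore a blurred synthetic image from a rough Gaussian PSF guess.
r, c = np.mgrid[-128:128, -128:128]
truth = (np.hypot(r, c) < 40).astype(float)
psf = np.exp(-(r ** 2 + c ** 2) / (2 * 3.0 ** 2)); psf /= psf.sum()
blurred = np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))).real
restored, refined_psf = incremental_wiener(blurred, psf)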

Recent advances in graphics processing units (GPUs) enable dramatic increases in computing performance. GPUs with programmable processing units can be used to accelerate many compute-intensive applications, and NVIDIA's compute unified device architecture (CUDA) provides a standard C-like language interface that makes them much more accessible to software developers (NVIDIA, 2009). GPUs offer very high memory bandwidth and tremendous computational horsepower; for example, the NVIDIA GTX280 can sustain a memory bandwidth of 141.7 GB/s and achieve 933 billion floating-point operations per second (GFLOPS). With the help of the GPU, the Incremental Wiener filter deconvolution was realized in real time.

Experiments on retinal images of photoreceptor cells and capillary blood vessels are shown in Fig.10. Fig.10 (a) is an original photoreceptor image and Fig.10 (b) is an original capillary blood vessel image; both were taken with aberrations corrected by AO. Fig.10 (c) and (d) are the images restored by the Incremental Wiener filter. The initial value of the PSF in our deconvolution algorithm was estimated from the wavefront sensor, and λ in Eq.(6) was set to 2. Twelve iterations of the deconvolution algorithm are enough to recover the images; more iterations do not noticeably improve the image quality. The images in Fig.10 are 256×256 pixels, and the field of view is 0.5 degree (about 140 μm). We compared the calculation time of the Incremental Wiener filter programmed in CUDA and in Matlab, and the results are shown in Table 1. Our host computer's CPU is an E4600 and its GPU is an NVIDIA GTX280. As Table 1 shows, the algorithm runs in real time on the GPU, and the CUDA program is about 282 times faster than the Matlab program.

Figure 10.

Experiments on photoreceptor and capillary blood vessel images of the retina. (a) An original photoreceptor image. (b) An original capillary blood vessel image. (c) and (d) The corresponding images restored by the Incremental Wiener filter.

Image         CUDA program    Matlab program
Fig.10 (c)    5.13 ms         1444.10 ms
Fig.10 (d)    5.11 ms         1440.05 ms

Table 1.

Comparison of the calculation times of the Incremental Wiener filter programmed in CUDA and in Matlab.

To measure the quality of the retinal images, we also define the contrast of an image I as

C = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j)-\bar{I}\right]^{2},    (7)

where M and N denote the width and the height of the image I, and Ī represents the mean of I. A larger value of C indicates better image quality. The contrasts of the four images in Fig.10 are (a) 32.7, (b) 21.1, (c) 39.4 and (d) 39.6, respectively, which shows that our method improves the image quality effectively.
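For completeness, Eq. (7) amounts to a one-line computation; the short sketch below evaluates it with NumPy (the function name is ours, and it implements Eq. (7) exactly as written above).

import numpy as np

def contrast(image):
    # Contrast metric of Eq. (7): mean squared deviation from the image mean.
    img = np.asarray(image, dtype=float)
    return float(np.mean((img - img.mean()) ** 2))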

The original PSFs of Fig.10 (a) and (b) are shown in Fig.11 (a) and (b), respectively. These PSFs are measured by the wavefront sensor and serve as the initial values of the Incremental Wiener filter. The PSFs restored by the Incremental Wiener filter are shown in Fig.11 (c) and (d), respectively. As Fig.11 shows, there are significant differences between the PSFs measured by the wavefront sensor and those restored by our algorithm, which indicates that the PSF calculated from the wavefront measurement is not an accurate estimate of the PSF of the total system.

Figure 11.

a) and (b) are the original PSFs corresponding to Fig.10 (a) and (b), respectively. (c) and (d) are the corresponding PSFs restored by the Incremental Wiener filter.

5. Tracking features in retinal images of AOCSLO

The high resolution of AO retinal imaging systems is compromised by eye motion. This motion contains components at frequencies from low up to relatively high (50-100 Hz) (David W. Arathorn, et al, 2007). Even when the observed object appears fixed in the field of view, the motion sweeps any projected point of the object across many cone diameters. With a small field size (about 1 degree, 280 μm), the motion degrades image quality severely.

To remove the eye motion, a computational technique known as the Kanade-Lucas-Tomasi (KLT) tracker is applied to track features in the retinal image (Jianbo Shi and Carlo Tomasi, 1994). KLT is a tracking algorithm with low computational complexity and high accuracy. The features to be tracked are provided by the Scale-Invariant Feature Transform (SIFT). SIFT is an image registration algorithm (David G. Lowe, et al, 2004) that automatically extracts blob-like point features with subpixel resolution from two images. With the tracked points, a second-order polynomial transformation is used to remove distortions of the retinal image; if a more complex transformation such as a third-order polynomial is used, more general distortions can also be removed. The second-order polynomial transformation of a distorted image point [x y]^T to the corrected image point [x' y']^T can be written in the following form:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a_{00} & a_{10} & a_{01} & a_{11} & a_{20} & a_{02} \\ b_{00} & b_{10} & b_{01} & b_{11} & b_{20} & b_{02} \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \\ xy \\ x^{2} \\ y^{2} \end{bmatrix},    (8)

where the distortion is represented by the a and b parameters. The second-order polynomial transformation allows lines to be mapped to curves. Given a set of matched points, the parameters can be determined as follows. Assume k points match in the reference and distorted images. A matrix R is defined which contains the coordinates of the k matched points in the reference image:

R = \begin{bmatrix} x'_{1} & x'_{2} & \cdots & x'_{k} \\ y'_{1} & y'_{2} & \cdots & y'_{k} \end{bmatrix}.    (9)
The D matrix contains the corresponding points in the distorted image:
D = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_{1} & x_{2} & \cdots & x_{k} \\ y_{1} & y_{2} & \cdots & y_{k} \\ x_{1}y_{1} & x_{2}y_{2} & \cdots & x_{k}y_{k} \\ x_{1}^{2} & x_{2}^{2} & \cdots & x_{k}^{2} \\ y_{1}^{2} & y_{2}^{2} & \cdots & y_{k}^{2} \end{bmatrix}.    (10)
A matrix A is defined which contains the transformation parameters:
A = \begin{bmatrix} a_{00} & a_{10} & a_{01} & a_{11} & a_{20} & a_{02} \\ b_{00} & b_{10} & b_{01} & b_{11} & b_{20} & b_{02} \end{bmatrix}.    (11)

The linear regression estimate for A is given by:

A = R\,D^{T}\left(D\,D^{T}\right)^{-1}.    (12)

To implement the second-order polynomial transformation, at least six corresponding point pairs are needed.
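A direct NumPy transcription of Eqs. (8)-(12) is given below as a hedged sketch; the function names are ours, and in practice the matched point pairs come from the SIFT/KLT tracking described above.

import numpy as np

def fit_second_order(ref_pts, dist_pts):
    # Estimate the 2 x 6 parameter matrix A of Eq. (8) from k >= 6 matched
    # point pairs via the linear regression of Eq. (12): A = R D^T (D D^T)^-1.
    # ref_pts, dist_pts: (k, 2) arrays of (x, y) coordinates in the reference
    # and distorted images, respectively.
    R = np.asarray(ref_pts, dtype=float).T                          # Eq. (9), 2 x k
    x, y = np.asarray(dist_pts, dtype=float).T
    D = np.vstack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])   # Eq. (10), 6 x k
    return R @ D.T @ np.linalg.inv(D @ D.T)                         # Eq. (12), 2 x 6

def apply_second_order(A, pts):
    # Map distorted-image points to corrected coordinates with Eq. (8).
    x, y = np.asarray(pts, dtype=float).T
    D = np.vstack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return (A @ D).T

# Example: the fit recovers a known second-order warp exactly (no noise).
dist = np.random.rand(20, 2) * 256
A_true = np.array([[1.5, 1.01, 0.02, 1e-4, -2e-5, 3e-5],
                   [-2.0, -0.01, 0.99, -1e-4, 1e-5, 2e-5]])
ref = apply_second_order(A_true, dist)
A_est = fit_second_order(ref, dist)       # A_est ~ A_true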

The point features extracted by SIFT in the reference image are shown in Fig.12 (a), and the points matched by KLT in a distorted image are shown in Fig.12 (b). The image with distortions removed by the second-order polynomial transformation is shown in Fig.12 (c).

Figure 12.

Removing distortions with the second-order polynomial transformation. (a) Point features extracted by SIFT in the reference image. (b) Points matched by KLT in a distorted image. (c) Corrected image with distortions removed by the second-order polynomial transformation.
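For readers who want to reproduce the tracking step shown in Fig.12, the sketch below uses OpenCV's SIFT detector and pyramidal Lucas-Kanade tracker as stand-in implementations of the SIFT and KLT algorithms cited above; the detectors and parameters actually used in the chapter are not specified, so treat this purely as an illustrative pipeline (8-bit grayscale frames are assumed).

import numpy as np
import cv2  # OpenCV supplies a SIFT detector and a pyramidal KLT tracker

def track_features(reference, distorted):
    # Extract blob-like point features from the reference frame with SIFT and
    # follow them into a distorted frame with the KLT tracker; returns matched
    # (reference, distorted) point pairs for the polynomial fit of Eq. (12).
    sift = cv2.SIFT_create()
    keypoints = sift.detect(reference, None)
    ref_pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(reference, distorted,
                                                     ref_pts, None)
    ok = status.ravel() == 1
    return ref_pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)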

Figure 13.

Co-adding 10 successive frames. (a) The reference frame. (b) The average image with distortions removed.

Ten successive frames with distortions removed are co-added, and the result is shown in Fig.13 (b); the reference frame is shown in Fig.13 (a). If more frames are selected, the gain from superposition is small and the image quality is not improved noticeably. The power spectra of Figs.13 (a) and (b) are compared in Fig.14. At the high frequencies, which mainly correspond to noise, the power spectrum of Fig.13 (b) is reduced, indicating that the noise is effectively suppressed and the image quality is improved.

Figure 14.

Average power spectra of Figs.13 (a) and (b).
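A hedged sketch of the co-adding and power spectrum comparison is given below; frame registration is assumed to have been done already (for example with the polynomial correction above), and the function names are ours.

import numpy as np

def coadd(frames):
    # Average a stack of already-registered (distortion-removed) frames,
    # as done for the ten successive frames of Fig.13 (b).
    return np.mean(np.stack([np.asarray(f, dtype=float) for f in frames]), axis=0)

def power_spectrum(image):
    # 2-D power spectrum of the kind compared in Fig.14.
    img = np.asarray(image, dtype=float)
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    return np.abs(F) ** 2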

6. Conclusion

In this chapter, we discuss the design of an Adaptive Optics Confocal Scanning Laser Ophthalmoscope (AOCSLO) and present high resolution retinal images in vivo. To further improve the quality of the AOCSLO images, we then introduce real-time deconvolution and feature tracking in retinal images.

Acknowledgments

This work was supported by the Main Direction Program of Knowledge Innovation of the Chinese Academy of Sciences (No. KGCX2-Y11-920).
