
Dual Conjugate Adaptive Optics Prototype for Wide Field High Resolution Retinal Imaging

Written By

Zoran Popovic, Jörgen Thaung, Per Knutsson and Mette Owner-Petersen

Submitted: 29 May 2012 Published: 18 December 2012

DOI: 10.5772/53640

From the Edited Volume

Adaptive Optics Progress

Edited by Robert K. Tyson


1. Introduction

Retinal imaging is limited by optical aberrations caused by imperfections in the optical media of the eye; diffraction-limited retinal imaging can therefore only be achieved if these aberrations are measured and corrected. Without such correction, information about retinal pathology and structure at the cellular level is not available in a clinical setting, but only from histological studies of excised retinal tissue. In addition to limitations such as tissue shrinkage and distortion, the main drawback of histological preparations is that longitudinal studies of disease progression and/or the results of medical treatment are not possible.

Adaptive optics (AO) is the science, technology and art of capturing diffraction-limited images in adverse circumstances that would normally lead to strongly degraded image quality and loss of resolution. In non-military applications, it was first proposed and implemented in astronomy [1]. AO technology has since been applied in many disciplines, including vision science, where retinal features down to a few microns can be resolved by correcting the aberrations of ocular optics. As the focus of this chapter is on AO retinal imaging, we will restrict our description to this particular field.

The general principle of AO is to measure the aberrations introduced by the media between an object of interest and its image with a wavefront sensor, analyze the measurements, and calculate a correction with a control computer. The corrections are applied to a deformable mirror (DM) positioned in the optical path between the object and its image, thereby enabling high-resolution imaging of the object.

Modern telescopes with integrated AO systems employ the laser guide star technique [2] to create an artificial reference object above the earth’s atmosphere. Analogously, the vast majority of present-day vision research AO systems employ a single point source on the retina as a reference object for aberration measurements, consequently termed a guide star (GS). AO correction is accomplished with a single DM in a plane conjugated to the pupil plane. An AO system with one GS and one DM will henceforth be referred to as a single-conjugate AO (SCAO) system. Aberrations in such a system are measured for a single field angle and the correction is applied uniformly over the entire field of view (FOV). Since the eye’s optical aberrations depend on the field angle, this results in a small corrected FOV of approximately 2 degrees [3]. This non-uniformity is shared by most optical aberrations, e.g. the well-known primary aberrations of coma, astigmatism, field curvature and distortion.

A method to deal with this limitation of SCAO was first proposed by Dicke [4] and later developed by Beckers [5]. The method is known as multiconjugate AO (MCAO) and uses multiple DMs conjugated to separate turbulent layers of the atmosphere and several GSs to increase the corrected FOV. In theory, correcting (in reverse order) for each turbulent layer could yield diffraction-limited performance over the entire FOV. However, in both the atmosphere and the eye, aberrations do not originate solely from a discrete set of thin layers but from a distributed volume. By measuring aberrations in different angular directions using several GSs and correcting aberrations in several layers of the eye using multiple DMs (at least two), it is possible to correct aberrations over a larger FOV than with SCAO.

The concept of MCAO for astronomy has been studied extensively [6-12], a number of experimental papers have been published [13-16], and on-sky experiments have recently been launched [17]. MCAO for the eye, however, is just emerging, with only a few published theoretical papers [3, 18-21]. Our group recently published the first experimental study [21] and practical application [22] of this technique in the eye, implementing a laboratory demonstrator comprising multiple GSs and two DMs, consequently termed dual-conjugate adaptive optics (DCAO). It enables imaging of retinal features down to a few microns, such as retinal cone photoreceptors and capillaries [22], the smallest blood vessels in the retina, over an imaging area of approximately 7×7 deg2. It is unique in its ability to acquire single images over a retinal area up to 50 times larger than that of most other research-based flood-illumination AO instruments, thus potentially allowing for clinical use.

A second-generation Proof-of-Concept (PoC) prototype based on the DCAO laboratory demonstrator is currently under construction and features several improvements. The most significant of these are a change in the order in which DM corrections are imposed and a novel concept for creating multiple GSs (patent pending).


2. Brief anatomical description of the eye

The human eye can be divided into an optical part and a sensory part. Much like a photographic lens relays light to an image plane in a camera, the optics of the eye, consisting of the cornea, the pupil, and the lens, project light from the outside world onto the sensory retina (Fig. 1, left). The amount of light that enters the eye is controlled by pupil constriction and dilation. The human retina is a layered structure approximately 250 µm thick [23, 24], with a variety of neurons arranged in layers and interconnected with synapses (Fig. 1, right).

Figure 1.

Schematic drawings of the eye (left) and the layered retinal structure (right). (Webvision, http://webvision.med.utah.edu/book/part-i-foundations/simple-anatomy-of-the-retina/)

Visual input is transformed in the retina to electrical signals that are transmitted via the optic nerve to the visual cortex in the brain. This process begins with the absorption of photons in the retinal photoreceptors, situated at the back of the retina, which stimulate several interneurons that in turn relay signals to the output neurons, the retinal ganglion cells. The ganglion cell nerve fiber axons exit the eye through the optic nerve head (blind spot).

Unlike the regularly spaced pixels of equal size in a CCD chip, the retinal photoreceptor mosaic is an inhomogeneous distribution of cone and rod photoreceptors of various sizes. The central retina is cone-dominated, with a cone density peak at the fovea, the most central part of the retina responsible for sharp vision, and a decrease in density towards the rod-dominated periphery. Cones are used for color and photopic (day) vision, and rods are used for scotopic (night) vision.

Blood is supplied to the retina through the choroidal and retinal blood vessels. The choroidal vessels lie outside the retina and supply nourishment to the photoreceptors and outer retina, while the retinal vessels supply the inner retinal layers with blood. Retinal capillaries, the smallest blood vessels in the eye, branch off from retinal arteries to form an intricate network throughout the whole retina with the exception of the foveal avascular zone (FAZ). The FAZ is the capillary-free region of the fovea that contains the foveal pit, where the cones are most densely packed and completely exposed to incoming light. Capillaries form a superficial layer in the nerve fiber layer, a second layer in the ganglion cell layer, and a third layer running deeper into the retina.


3. Brief theoretical background

3.1. AO calibration procedure

The AO concept requires a procedure for calculating actuator commands based on wavefront sensor (WFS) signals relative to a defined set of zero points, so-called calibration. Both the DCAO demonstrator and the PoC prototype are calibrated using the same direct slope algorithm. The purpose is to construct an interaction matrix G by calculating the sensor response $\mathbf{s} = [s_1, s_2, \ldots, s_m]^T$ to a sequence of DM actuator commands $\mathbf{c} = [c_1, c_2, \ldots, c_n]^T$. Here $\mathbf{s}$ is a vector of measured wavefront slopes, m/2 is the number of subapertures, and n is the number of DM actuators. This relation is defined by

$$\mathbf{s} = G\mathbf{c}, \tag{1}$$

and the interaction matrix is given by

$$G = \begin{bmatrix}
\partial s_1/\partial c_1 & \partial s_1/\partial c_2 & \cdots & \partial s_1/\partial c_n \\
\partial s_2/\partial c_1 & \partial s_2/\partial c_2 & \cdots & \partial s_2/\partial c_n \\
\vdots & \vdots & \ddots & \vdots \\
\partial s_m/\partial c_1 & \partial s_m/\partial c_2 & \cdots & \partial s_m/\partial c_n
\end{bmatrix}. \tag{2}$$

The relation above has to be modified to allow for multiple GSs and DMs by concatenating multiple s and c vectors. In the case of five GSs and two DMs we obtain

$$\begin{bmatrix} \mathbf{s}_1 \\ \mathbf{s}_2 \\ \mathbf{s}_3 \\ \mathbf{s}_4 \\ \mathbf{s}_5 \end{bmatrix} = G \begin{bmatrix} \mathbf{c}_1 \\ \mathbf{c}_2 \end{bmatrix}, \tag{3}$$

where

$$G = \begin{bmatrix}
\partial \mathbf{s}_1/\partial \mathbf{c}_1 & \partial \mathbf{s}_1/\partial \mathbf{c}_2 \\
\partial \mathbf{s}_2/\partial \mathbf{c}_1 & \partial \mathbf{s}_2/\partial \mathbf{c}_2 \\
\partial \mathbf{s}_3/\partial \mathbf{c}_1 & \partial \mathbf{s}_3/\partial \mathbf{c}_2 \\
\partial \mathbf{s}_4/\partial \mathbf{c}_1 & \partial \mathbf{s}_4/\partial \mathbf{c}_2 \\
\partial \mathbf{s}_5/\partial \mathbf{c}_1 & \partial \mathbf{s}_5/\partial \mathbf{c}_2
\end{bmatrix}. \tag{4}$$

The interaction matrix G is constructed by poking each DM actuator in sequence with a positive and a negative unit poke and calculating an average response, starting with the first actuator on DM1 and ending with the last actuator on DM2. In the case of five Hartmann patterns with 129 subapertures each and two DMs with a total of 149 actuators we obtain an interaction matrix dimension of 1290×149. The reconstructor matrix G+ is calculated using singular value decomposition (SVD) [25] since
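A minimal sketch of this poke calibration is given below (Python); the hardware I/O functions `apply_commands` and `measure_slopes` are hypothetical placeholders for the DM and WFS interfaces, and the poke amplitude is in arbitrary command units.

```python
import numpy as np

def build_interaction_matrix(apply_commands, measure_slopes, n_act=149, poke=1.0):
    """Direct slope calibration as described above: each actuator is poked with a
    positive and a negative unit command, and the averaged differential slope
    response forms one column of the interaction matrix G."""
    s0 = measure_slopes()                      # concatenated slopes from all five GSs (length 1290)
    G = np.zeros((s0.size, n_act))
    for j in range(n_act):                     # DM1 actuators first, then DM2 actuators
        c = np.zeros(n_act)
        c[j] = +poke
        apply_commands(c)
        s_plus = measure_slopes()
        c[j] = -poke
        apply_commands(c)
        s_minus = measure_slopes()
        G[:, j] = (s_plus - s_minus) / (2.0 * poke)   # average response per unit poke
    apply_commands(np.zeros(n_act))            # flatten the mirrors afterwards
    return G
```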

$$G = U \Lambda V^T, \tag{5}$$

where U is an m×m unitary matrix, Λ is an m×n diagonal matrix with nonzero diagonal elements and all other elements equal to zero, and VT is the transpose of V, an n×n unitary matrix. The non-zero diagonal elements λi of Λ are the singular values of G. The pseudoinverse of G can now be computed as

$$G^+ = V \Lambda^+ U^T, \tag{6}$$

which provides the least-squares solution to Eq. (1). The diagonal values of Λ⁺ are set to λi⁻¹, or to zero if λi is less than a defined threshold value. Non-zero singular values correspond to correctable modes of the system. Noise sensitivity can be reduced by removing modes with very small singular values. DM actuator commands can then be calculated by matrix multiplication:

$$\begin{bmatrix} \mathbf{c}_1 \\ \mathbf{c}_2 \end{bmatrix} = G^+ \begin{bmatrix} \mathbf{s}_1 \\ \mathbf{s}_2 \\ \mathbf{s}_3 \\ \mathbf{s}_4 \\ \mathbf{s}_5 \end{bmatrix}. \tag{7}$$
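As an illustration, a thresholded SVD pseudoinverse along the lines of Eqs. (5)-(7) can be computed as sketched below; the relative threshold value is an assumption of the sketch, not the value used in the instruments.

```python
import numpy as np

def build_reconstructor(G, threshold=1e-3):
    """Compute the reconstructor G+ of Eq. (6) by SVD, zeroing the inverse of
    singular values below a relative threshold to remove poorly sensed modes."""
    U, lam, Vt = np.linalg.svd(G, full_matrices=False)
    lam_inv = np.where(lam > threshold * lam.max(), 1.0 / lam, 0.0)
    return Vt.T @ (lam_inv[:, None] * U.T)

# With the dimensions quoted in the text (5 patterns x 129 subapertures x 2 slopes = 1290 rows;
# 52 + 97 = 149 actuator columns):
# G_plus = build_reconstructor(G)                   # shape (149, 1290)
# commands = G_plus @ slopes                        # Eq. (7)
# c_dm1, c_dm2 = commands[:52], commands[52:]       # split into DM1 and DM2 commands
```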

However, even the most meticulous calibration of DM and WFS interaction will not yield optimal imaging performance, due to non-common path errors between the wavefront sensor and the final focal plane of the imaging channel. Reducing these effects by proper zero-point calibration is therefore crucial to achieve optimal performance of an AO system. Several methods have been proposed to improve imaging performance [26-33]. The method implemented in our system is similar to the image sharpening method [29, 30], but a novel figure of merit is used, and the inherent singular modes of the AO system are optimized (patent pending).
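As a generic illustration of image sharpening, and not of the patent-pending figure of merit used in our system, the sketch below evaluates the classic sharpness metric of [29] and scans one system mode for its sharpest setting; the image-capture function is a hypothetical placeholder.

```python
import numpy as np

def sharpness(image):
    """Classic image-sharpening figure of merit [29]: the sum of squared pixel
    intensities of the flux-normalized image, which is maximal for the sharpest image.
    This is NOT the novel merit function used in the DCAO instruments."""
    img = image.astype(float)
    img /= img.sum()
    return np.sum(img ** 2)

def optimize_mode(capture_with_mode_amplitude, amplitudes):
    """Hypothetical helper: scan one singular mode of the AO system over a set of
    candidate amplitudes, capture an image at each, and keep the sharpest setting."""
    scores = [sharpness(capture_with_mode_amplitude(a)) for a in amplitudes]
    return amplitudes[int(np.argmax(scores))]
```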

3.2. Corrected field of view

In SCAO a single GS is used to measure wavefront aberrations and a single DM is used to correct the aberrations in the pupil plane. This results in a small corrected FOV due to field-dependent aberrations in the eye. However, the corrected FOV in the eye can be increased by using several GSs distributed across the FOV and two or more DMs [3, 19-21]. A larger FOV than in SCAO can also be obtained by using several GSs and a single DM in the pupil plane, analogous to ground layer AO (GLAO) in astronomy [34], but the increase in FOV size and the magnitude of correction will be smaller than when using multiple DMs.

A relative comparison of the simulated corrected FOV for the three cases of SCAO, GLAO, and DCAO in our setup is shown in Fig. 2. The simulated FOV is approximately 7×7 degrees, with a centrally positioned GS in the SCAO simulation, and five GSs positioned in an ‘X’ formation, with the four peripheral GSs displaced from the central GS by a visual angle of 3.1 deg, in the GLAO and DCAO simulations.

Figure 2.

Zemax simulation of a corrected 7×7 deg FOV in our setup using the Liou-Brennan eye model [35] for SCAO (left), multiple GS and single DM (middle), and DCAO (right). Color bar represents simulated Strehl ratio.


4. Experimental setups

4.1. DCAO demonstrator

Only a basic description highlighting modifications to the original DCAO demonstrator will be given here. The reader is referred to [21] for a detailed description of the setup. The basic layout of the DCAO demonstrator is shown in Fig. 3.

4.1.1. DCAO demonstrator wavefront measurement and correction

Continuous, relatively broadband (to avoid speckle effects), near-infrared light (834±13 nm) from a super-luminescent diode (SLD), delivered through a 1:5 fiber splitter and five single mode fibers, is used to generate the five GS beams. The advantage of using an SLD as a source is that the short coherence length of the SLD light generates much less speckle in the Shack-Hartmann WFS spots than a coherent laser source. The end ferrules of the single mode fibers are mounted in a custom fiber holder and create an array of point sources, which are imaged via the DMs and a Badal focus corrector onto the retina. The GSs are arranged in an ‘X’ formation, with the four peripheral GSs displaced from the central GS by a visual angle of 3.1 deg, corresponding to a retinal separation of approximately 880 µm in an emmetropic eye.

Reflected light from the GSs passes through the optical media of the eye and emerges through the pupil as five aberrated wavefronts. After the Badal focus corrector and the two DMs, the light passes through a collimating lens array (CLA) consisting of five identical lenses, one for each GS. The five beams are focused by a lens (L7) to a common focal point (cf. Fig. 8), collimated by a lens (L8) and individually sampled by the WFS, an arrangement consequently termed multi-reference WFS. In addition to separating the WFS Hartmann patterns as in [36], this arrangement makes it possible to filter light from all five GSs using a single pinhole (US Patent 7,639,369).

Custom-written AO software for control of one or two DMs and one to five GSs was developed, tested, and implemented by Landell [37]. The pupil DM (DM1) applies an identical correction for all field points in the FOV. The second DM (DM2), positioned in a plane conjugated to a plane approximately 3 mm in front of the retina, contributes partially individual corrections for the five angular directions and thus compensates for non-uniform (anisoplanatic), or field-dependent, aberrations. The location of DM2 was chosen to ensure a smooth correction over the FOV by allowing sufficient overlap of the GS beam footprints.

Figure 3.

Basic layout of the DCAO demonstrator. Abbreviations: BPF – band-pass filter, BS – beamsplitter, CLA – collimating lens array, CM – cold mirror, DM1 – pupil DM, DM2 – field DM, FF – fiber ferrules, FS – field stop, FT – flash tube, LA – lenslet array, M – mirror, P – pupil conjugate plane, PL – photographic lens, PM – pupil mask, R – retinal conjugate plane, SF – spatial filter, SLD – superluminescent diode, WBS – wedge beamsplitter.

4.1.2. DCAO demonstrator retinal imaging

For imaging purposes, the retina is illuminated with a flash from a Xenon flash lamp, filtered by a 575±10 nm bandpass filter (BPF). The narrow bandwidth of the BPF is essential to minimize chromatic errors, in particular longitudinal chromatic aberration (LCA) [38], in the image plane of the retinal camera.

The illuminated field on the retina (approximately 10×10 degrees) is limited by a square field stop in a retinal conjugate plane. Visible light from the eye is reflected by a cold mirror (CM) and relayed through a pair of matched photographic lenses, chosen to minimize non-common path errors. An adjustable iris between the two photographic lenses is used to set the pupil size used for imaging, corresponding to a diameter of 6 mm at the eye.

Imaging is performed with a science-grade monochromatic CCD camera with 2048×2048 pixels and a square pixel size of 7.4 µm. The size of the CCD chip corresponds to a retinal FOV of 6.7×6.7 deg2. The full width at half maximum (FWHM) of the Airy disk in the image plane at 575 nm is 15 µm, and the image is hence sampled in accordance with the Nyquist-Shannon sampling theorem (two pixels per FWHM).

Figure 4.

Basic layout of the PoC prototype. Abbreviations: BPF – band-pass filter, BS – beamsplitter, CLA – collimating lens array, CM – cold mirror, DM1 – pupil DM, DM2 – field DM, FS – field stop, FT – flash tube, LA – lenslet array, M – mirror, P – pupil conjugate plane, PBS – pellicle beamsplitter, PFP/PFA – polarization filters, PL – photographic lens, PMF – flash pupil mask, PMGS – GS pupil mask, R – retinal conjugate plane, SF – spatial filter, SM – spherical mirror, SLD – superluminescent diode, TL – trial lens. Fixed corrective lenses are either lens pairs or single lenses.

4.2. PoC prototype

A PoC prototype (Fig. 4) has been developed to evaluate the clinical relevance of DCAO wide-field high-resolution retinal imaging. The prototype is currently under construction and features several improvements relative to the DCAO demonstrator. The most significant of these are a change in the order in which DM corrections are imposed and a novel implementation of GS creation (patent pending). Compared with the optical table design of the DCAO demonstrator, the size of the PoC prototype has been greatly reduced to a compact, joystick-operated tabletop instrument measuring 600×170×680 mm (H×W×D). The opto-mechanical layout comprises five modules: a GS generation module, a main module, a WFS module, a flash module, and an imaging module.

4.2.1. PoC GS generation module

A novel method of GS creation has been implemented in the PoC prototype, whereby the CLA that is part of the WFS is also used to create the GS beams. Collimated 835±10 nm SLD light from a single-mode fiber is polarized (PFP) and passes through a multi-aperture stop with five apertures (PMGS) that are aligned to the five CLA lenses. Since the CLA is used both for GS generation and to enable single-point spatial filtering in the multi-reference WFS, we obtain an auto-collimating arrangement that greatly reduces system complexity and simplifies alignment. The GS rays pass through standard and custom relay optics and the DMs before entering the eye, where they form five spots arranged in an ‘X’ formation. The four peripheral GSs are diagonally displaced from the central GS by a visual angle of 3.1 deg (880 µm on the retina).

4.2.2. PoC main module

In the DCAO demonstrator, residual focus and astigmatism that had not been compensated by the Badal focus corrector and trial astigmatism lenses were corrected by DM1 only after the light had passed DM2, resulting in sub-optimal DM2 performance. The PoC prototype features a corrected arrangement of the DMs in which reflected light from the eye, corrected by trial lenses, first passes the pupil mirror DM1 before passing the field mirror DM2.

DM1 is a Hi-Speed DM52-15 (ALPAO S.A.S., Grenoble, France), a 52-actuator magnetic DM with a 9 mm diameter optical surface and 1.5 mm actuator separation. The magnification relative to the pupil of the eye is 1.5, thus setting the effective pupil diameter of the instrument to 6 mm at the eye. DM2 is a Hi-Speed DM97-15 (ALPAO S.A.S., Grenoble, France), a 97-actuator magnetic DM with a 13.5 mm diameter optical surface and 1.5 mm actuator separation. GS beam footprints on DM1 and DM2 are shown in Fig. 5. The last element of the main module is a dichroic beamsplitter (CM) that reflects collimated imaging light towards the retinal camera and transmits collimated GS light towards the WFS.

As the relay optics of the main module transmit both measurement (835 nm) and imaging (575 nm) light, custom optics were designed to ensure diffraction-limited performance at both wavelengths (Fig. 6). Due to ocular chromatic aberration, the bandwidth of the flash illumination bandpass filter induces a wavelength-dependent focal shift in the instrument image plane. An evaluation of the focal shift for the 575±10 nm wavelengths transmitted by the flash illumination bandpass filter, using the Liou-Brennan Zemax eye model [35], yields a ±6.9 µm focal shift at the retina (Fig. 7).

Figure 5.

GS beam footprints on DM1 (left) and DM2 (right).

Figure 6.

RMS wavefront error of the PoC main module custom relay optics at the main module exit pupil for three retinal field positions (0, 2.5, and 3.6 deg).

4.2.3. PoC WFS module

A multi-reference WFS with spatial filtering (Fig. 8) has been implemented in both the DCAO demonstrator and the PoC prototype. The design greatly reduces system complexity by implementing a single spatial filter to reduce unwanted light from parasitic source reflections and scattered light from the retina when imaging multiple Hartmann patterns with a single WFS camera.

Figure 7.

Chromatic focal shift over flash illumination bandpass filter bandwidth (575±10 nm) at the retina calculated using the Liou-Brennan eye model [35].

Transmitted GS light from the main module passes through the CLA and is reflected by a pellicle beam splitter. A second polarizing filter (PFA) removes unwanted backscattered reflections from the GS generation, and a lens brings the five GS beams to a common focus where they are spatially filtered by a single aperture (SF). A collimating lens finally relays the five beams onto a lenslet array (LA) with a focal length of 3.45 mm and a lenslet pitch of 130 µm.

The monochromatic WFS CCD camera has 1388×1038 pixels with a square pixel size of 6.45 µm, of which a central region of interest (ROI) of 964×964 pixels is used for wavefront sensing. The diameter of the diffraction-limited focus spot of a lenslet is 2.44 λ f / d = 54 µm. Each spot will consequently be sampled by approximately 8×8 pixels, an oversampling that can be alleviated using pixel binning. The 6 mm pupil diameter of the eye is demagnified to 1.87 mm at the WFS, and each Hartmann pattern is consequently sampled by ~13 lenslets across its diameter (Fig. 9).
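The quoted sampling figures follow directly from the lenslet and camera parameters given above; a small arithmetic check (Python, all values taken from the text) is shown below.

```python
# Arithmetic check of the WFS sampling figures quoted above
wavelength_um = 0.835        # SLD wavelength (835 nm)
lenslet_focal_um = 3450.0    # lenslet focal length (3.45 mm)
lenslet_pitch_um = 130.0     # lenslet pitch
pixel_um = 6.45              # WFS camera pixel size

spot_diameter_um = 2.44 * wavelength_um * lenslet_focal_um / lenslet_pitch_um
pixels_per_spot = spot_diameter_um / pixel_um

print(f"spot diameter ~ {spot_diameter_um:.0f} um")   # ~54 um
print(f"pixels per spot ~ {pixels_per_spot:.1f}")     # ~8.4 pixels
```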

Figure 8.

Schematic drawing of the multi-reference WFS with spatial filtering.

Figure 9.

Zemax simulation of Hartmann spot image (left) and actual WFS image (right).

4.2.4. PoC flash and imaging modules

Retinal images are obtained by illuminating a 10×10 degree retinal field using a 4-6 ms spectrally filtered (575±10 nm) Xenon flash. A Canon EF 135mm f/2.0 L photographic lens is used to focus reflected light from the dichroic beamsplitter onto the science camera, a 2452×2056 pixel Stingray F-504B monochromatic CCD with a square pixel cell size of 3.45 µm (Allied Vision Technologies GmbH, Stadtroda, Germany). The physical size of the full chip corresponds to a retinal FOV of 8.28×6.94 deg with a pixel resolution of 0.059 mrad (0.974 µm on the retina).


5. Retinal imaging

AO retinal imaging reveals information about retinal structures and pathology currently not available in a clinical setting. The resolution of retinal features on a cellular level offers the possibility to reveal microscopic changes during the earliest stages of a retinal disease. One of the most important future applications of this technique is consequently in clinical practice where it will facilitate early diagnosis of retinal disease, follow-up of treatment effects, and follow-up of disease progression.

Both the DCAO demonstrator and the PoC prototype feature a narrow depth of focus, approximately 25 µm and 9 µm in the retina, respectively. This allows for imaging of different retinal layers, from the deeper photoreceptor layer to the superficial blood vessel and nerve fiber layers. Images are flat-fielded using a low-pass filtered image to reduce uneven illumination [39]. A Gaussian kernel with σ = 8-25 pixels is chosen depending on the imaged retinal layer; a smaller kernel is used for images of the photoreceptor layer and a larger kernel for images of superficial layers. Final post-processing is performed by convolving an image with a σ = 0.75 pixel Gaussian kernel to reduce shot and readout noise. As the PoC prototype is still under construction, all retinal images shown below have been acquired with the DCAO demonstrator.
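A minimal sketch of this post-processing chain is given below. The division by the low-pass filtered image is an assumption of the sketch; the text only states that a low-pass filtered image is used to reduce uneven illumination.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flat_field(image, sigma_background):
    """Reduce uneven flash illumination by dividing the raw image by a heavily
    low-pass filtered copy of itself (sigma = 8-25 px depending on retinal layer).
    Division rather than subtraction is an assumption of this sketch."""
    img = image.astype(float)
    background = gaussian_filter(img, sigma=sigma_background)
    return img / np.maximum(background, 1e-6)

def denoise(image, sigma=0.75):
    """Final step quoted in the text: convolution with a sigma = 0.75 px Gaussian
    kernel to reduce shot and readout noise."""
    return gaussian_filter(image.astype(float), sigma=sigma)

# Example: superficial (capillary) layer image with a larger background kernel
# processed = denoise(flat_field(raw_image, sigma_background=20))
```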

5.1. Cone photoreceptor imaging

Imaging of the cone photoreceptor layer (Fig. 10) is accomplished by focusing on deeper retinal layers. The variation in cone appearance from dark to bright in Fig. 10 is an effect of the directionality [40], or waveguide nature, of the cones. The retinal photoreceptor mosaic provides all information to higher visual processing stages and is often directly or indirectly affected or disrupted by retinal disease. It is therefore of interest to study various parameters, e.g. photoreceptor spacing, density, geometry, and size, to determine the structural integrity of the mosaic. An example of this is given in Fig. 11, where the cone density of the mosaic in Fig. 10 has been calculated. Cone spacing, where possible, was obtained from power spectra of 128×128 pixel sub-regions with a 64-pixel overlap. Spacing (s) was converted to density (D) using the relation D = √3 / (2s²), and the density profile was constructed by fitting a cubic spline surface to the distribution of density values.
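A simplified sketch of this spacing-to-density estimation is given below; the radial power-spectrum peak extraction and the pixel scale in the usage comment are assumptions of the sketch, not necessarily the exact procedure used for Fig. 11.

```python
import numpy as np

def density_from_spacing(s_um):
    """Convert cone spacing s (in micrometers) to density D (cells/mm^2) using the
    relation quoted in the text, D = sqrt(3) / (2 s^2)."""
    s_mm = s_um / 1000.0
    return np.sqrt(3.0) / (2.0 * s_mm ** 2)

def spacing_from_subregion(subimage, pixel_um):
    """Estimate the modal cone spacing of a 128x128 px sub-region from the radial
    peak of its power spectrum (Yellott's ring); a simplified assumption."""
    win = subimage.astype(float) - subimage.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(win))) ** 2
    n = subimage.shape[0]
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    k_peak = int(np.argmax(radial[2:n // 2])) + 2     # skip the DC peak
    return (n * pixel_um) / k_peak                    # spacing at the peak frequency, in um

# Hypothetical usage, assuming ~0.97 um per pixel on the retina:
# density = density_from_spacing(spacing_from_subregion(sub, pixel_um=0.97))
```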

5.2. Retinal capillary imaging

Retinal capillaries, the smallest blood vessels in the eye, are difficult to image because of their small size (down to 5 µm), low contrast, and arrangement in multiple retinal planes. Even good-quality conventional retinal imaging fails to capture the finest capillary details. The preferred clinical imaging method is fluorescein angiography (FA), an invasive procedure in which a contrast agent is injected into the patient’s bloodstream to enhance retinal vasculature contrast. The narrow depth of focus of both the DCAO demonstrator and the PoC prototype allows for imaging of retinal capillaries by focusing on the upper retinal layers. It is a non-invasive procedure with performance similar to FA [22]. An unfiltered camera raw image of the capillary network surrounding the fovea, the central region of the retina responsible for sharp vision, is shown in Fig. 12, and a flat-fielded image is shown in Fig. 13.

Figure 10.

DCAO image of cone photoreceptor layer. Variation in cone appearance from dark to bright is an effect of the directionality or waveguide nature of cone photoreceptors.

Figure 11.

Cone photoreceptor density profile calculated from cone distribution in Fig. 10. Color bar represents cell density in cells/mm2.

Figure 12.

Camera raw DCAO image of foveal capillaries.

5.3. Nerve fiber layer imaging

Evaluation of the retinal nerve fiber layer (RNFL) is of particular interest for detecting and managing glaucoma, an eye disease that results in nerve fiber loss. Changes in the RNFL are often not detectable using red-free fundus photography until there is more than 50% nerve fiber loss [41]. Although DCAO imaging does not yet provide information about RNFL thickness it can be used to obtain images with higher resolution and contrast than red-free fundus images (Fig. 14).

Figure 13.

Image in Fig. 12 after flat-field correction. Uneven flash illumination has been reduced and retinal vessel contrast has been improved.

Figure 14.

Montage of four DCAO images of the retinal nerve fibers and blood vessels.


6. Conclusions

In this chapter we have described the concept and practical implementation of dual-conjugate adaptive optics retinal imaging, i.e. multiconjugate adaptive optics using two deformable mirrors. Although the technique of adaptive optics is well established in the vision research community, there are only a few publications on MCAO retinal imaging.

The DCAO instruments described here allow retinal features down to 2 µm to be resolved over a 7×7 degree FOV and enable tomographic imaging of retinal structures such as cone photoreceptors and retinal capillaries. We believe that this new technique has future potential for clinical imaging at currently subclinical levels, with a particularly important impact on early diagnosis of retinal disease, follow-up of treatment effects, and follow-up of disease progression.


Acknowledgements

The authors would like to acknowledge financial support for this work from the Marcus and Amalia Wallenberg Memorial Fund (grant no. MAW 2009.0053) and from VINNOVA, the Swedish Governmental Agency for Innovation Systems (grant no. 2010-00518).

References

1. Babcock HW. The Possibility of Compensating Astronomical Seeing. Publications of the Astronomical Society of the Pacific. 1953;65(386):229.
2. Foy R, Labeyrie A. Feasibility of Adaptive Telescope with Laser Probe. Astronomy and Astrophysics. 1985;152(2):L29-L31.
3. Dubinin A, Cherezova T, Belyakov A, Kudryashov A. Human Retina Imaging: Widening of High Resolution Area. Journal of Modern Optics. 2008;55(4-5):671-681.
4. Dicke RH. Phase-Contrast Detection of Telescope Seeing Errors and Their Correction. Astrophysical Journal. 1975;198(3):605-615.
5. Beckers JM. Increasing the Size of the Isoplanatic Patch with Multiconjugate Adaptive Optics. ESO Conference and Workshop on Very Large Telescopes and their Instrumentation; 1988; Garching, Germany: European Southern Observatory (ESO). p. 69.
6. Beckers JM. Detailed Compensation of Atmospheric Seeing Using Multiconjugate Adaptive Optics. In: Roddier FJ, editor; 1989. p. 215-217.
7. Ellerbroek BL. First-Order Performance Evaluation of Adaptive-Optics Systems for Atmospheric-Turbulence Compensation in Extended-Field-of-View Astronomical Telescopes. Journal of the Optical Society of America A. 1994;11(2):783-805.
8. Fried DL, Belsher JF. Analysis of Fundamental Limits to Artificial-Guide-Star Adaptive-Optics-System Performance for Astronomical Imaging. Journal of the Optical Society of America A. 1994;11(1):277-287.
9. Fusco T, Conan JM, Michau V, Rousset G, Mugnier LM. Isoplanatic Angle and Optimal Guide Star Separation for Multiconjugate Adaptive Optics. In: Wizinowich PL, editor. Adaptive Optical Systems Technology, Pts 1 and 2; 2000. p. 1044-1055.
10. Johnston DC, Welsh BM. Analysis of Multiconjugate Adaptive Optics. Journal of the Optical Society of America A. 1994;11(1):394-408.
11. Owner-Petersen M, Goncharov A. Multiconjugate Adaptive Optics for Large Telescopes: Analytical Control of the Mirror Shapes. Journal of the Optical Society of America A. 2002;19(3):537-548.
12. Rigaut FJ, Ellerbroek BL, Flicker R. Principles, Limitations and Performance of Multi-Conjugate Adaptive Optics. Adaptive Optical Systems Technology, Pts 1 and 2. 2000;4007:1022-1031.
13. Berkefeld T, Soltau D, von der Luhe O. Multi-Conjugate Adaptive Optics at the Vacuum Tower Telescope, Tenerife. Adaptive Optical System Technologies II, Pts 1 and 2. 2003;4839:544-553.
14. Marchetti E, Hubin N, Fedrigo E, Brynnel J, Delabre B, Donaldson R, et al. MAD: The ESO Multi-Conjugate Adaptive Optics Demonstrator. Adaptive Optical System Technologies II, Pts 1 and 2. 2003;4839:317-328.
15. Rimmele T, Hegwer S, Marino J, Richards K, Schmidt D, Waldmann T, et al. Solar Multi-Conjugate Adaptive Optics at the Dunn Solar Telescope. 1st AO4ELT Conference - Adaptive Optics for Extremely Large Telescopes. 2009.
16. von der Luhe O, Berkefeld T, Soltau D. Multi-Conjugate Solar Adaptive Optics at the Vacuum Tower Telescope. Comptes Rendus Physique. 2005;6(10):1139-1147.
17. Rigaut F, Neichel B, Boccas M, d’Orgeville C, Arriagada G, Fesquet V, et al. GeMS: First On-Sky Results. Adaptive Optics Systems III; 2012: Proc. SPIE.
18. Bedggood P, Daaboul M, Ashman R, Smith G, Metha A. Characteristics of the Human Isoplanatic Patch and Implications for Adaptive Optics Retinal Imaging. J Biomed Opt. 2008;13(2):024008.
19. Bedggood P, Metha A. System Design Considerations to Improve Isoplanatism for Adaptive Optics Retinal Imaging. Journal of the Optical Society of America A. 2010;27(11):A37-A47.
20. Bedggood PA, Ashman R, Smith G, Metha AB. Multiconjugate Adaptive Optics Applied to an Anatomically Accurate Human Eye Model. Optics Express. 2006;14(18):8019-8030.
21. Thaung J, Knutsson P, Popovic Z, Owner-Petersen M. Dual-Conjugate Adaptive Optics for Wide-Field High-Resolution Retinal Imaging. Optics Express. 2009;17(6):4454-4467.
22. Popovic Z, Knutsson P, Thaung J, Owner-Petersen M, Sjostrand J. Noninvasive Imaging of Human Foveal Capillary Network Using Dual-Conjugate Adaptive Optics. Investigative Ophthalmology & Visual Science. 2011;52(5):2649-2655.
23. Chan A, Duker JS, Ko TH, Fujimoto JG, Schuman JS. Normal Macular Thickness Measurements in Healthy Eyes Using Stratus Optical Coherence Tomography. Archives of Ophthalmology. 2006;124(2):193-198.
24. Ooto S, Hangai M, Sakamoto A, Tomidokoro A, Araie M, Otani T, et al. Three-Dimensional Profile of Macular Retinal Thickness in Normal Japanese Eyes. Investigative Ophthalmology & Visual Science. 2010;51(1):465-473.
25. Barrett HH, Myers KJ. Foundations of Image Science. Hoboken, NJ: Wiley-Interscience; 2004.
26. Blanc A, Fusco T, Hartung M, Mugnier LM, Rousset G. Calibration of NAOS and CONICA Static Aberrations - Application of the Phase Diversity Technique. Astronomy & Astrophysics. 2003;399(1):373-383.
27. Carrano CJ, Olivier SS, Brase JM, Macintosh BA, An JR. Phase Retrieval Techniques for Adaptive Optics. Adaptive Optical System Technologies, Pts 1 and 2. 1998;3353:658-667.
28. Lofdahl MG, Scharmer GB, Wei W. Calibration of a Deformable Mirror and Strehl Ratio Measurements by Use of Phase Diversity. Applied Optics. 2000;39(1):94-103.
29. Muller RA, Buffington A. Real-Time Correction of Atmospherically Degraded Telescope Images through Image Sharpening. Journal of the Optical Society of America. 1974;64(9):1200-1210.
30. Murray L. Smart Optics: Wavefront Sensor-Less Adaptive Optics - Image Correction through Sharpness Maximisation. NUI Galway; 2006.
31. Ren D, Rimmele TR, Hegwer S, Murray L. A Single-Mode Fiber Interferometer for the Adaptive Optics Wave-Front Test. Publications of the Astronomical Society of the Pacific. 2003;115(805):355-361.
32. Turaga D, Holy TE. Image-Based Calibration of a Deformable Mirror in Wide-Field Microscopy. Applied Optics. 2010;49(11):2030-2040.
33. Yoon G. Wavefront Sensing and Diagnostic Uses. In: Porter J, Queener H, Lin J, Thorn K, Awwal A, editors. Adaptive Optics for Vision Science: Principles, Practices, Design and Applications. Wiley-Interscience; 2006. p. 63-81.
34. Rigaut F. Ground-Conjugate Wide Field Adaptive Optics for the ELTs. Beyond Conventional Adaptive Optics; 2001; Venice, Italy: European Southern Observatory, Garching. p. 11-16.
35. Liou HL, Brennan NA. Anatomically Accurate, Finite Model Eye for Optical Modeling. Journal of the Optical Society of America A. 1997;14(8):1684-1695.
36. Goncharov AV, Dainty JC, Esposito S. Compact Multireference Wavefront Sensor Design. Opt Lett. 2005;30(20):2721-2723.
37. Landell D. Implementation and Optimization of a Multi Conjugate Adaptive Optics Software System for Vision Research. MSc thesis. University of Gothenburg; 2005.
38. Marcos S, Moreno E, Navarro R. The Depth-of-Field of the Human Eye from Objective and Subjective Measurements. Vision Res. 1999;39(12):2039-2049.
39. Howell SB. Handbook of CCD Astronomy. Cambridge, UK; New York: Cambridge University Press; 2000.
40. Stiles WS, Crawford BH. The Luminous Efficiency of Rays Entering the Eye Pupil at Different Points. Proceedings of the Royal Society of London. 1933;112:428-450.
41. Quigley HA, Addicks EM. Quantitative Studies of Retinal Nerve Fiber Layer Defects. Arch Ophthalmol. 1982;100(5):807-814.
