Open access peer-reviewed chapter

Retina Recognition Using Crossings and Bifurcations

Written By

Lukáš Semerád and Martin Drahanský

Submitted: 02 June 2020 Reviewed: 21 January 2021 Published: 19 February 2021

DOI: 10.5772/intechopen.96142

From the Edited Volume

Applications of Pattern Recognition

Edited by Carlos M. Travieso-Gonzalez



Recognition of people on the basis of biometric characteristics has been known for many centuries. One of the most used biometric features is the fingerprint, and recently the iris pattern has also come into more frequent use. Retinal recognition offers similarly reliable mechanisms, but they are not yet well explored. Our procedure for obtaining a biometric pattern is partly based on fingerprints: instead of the bifurcations of papillary lines, the bifurcations and optical crossings of the retinal blood vessels are used. The procedure is more complicated because the blood vessels intersect in multiple layers. Our work deals with determining, for various areas of the retina, the probability that a bifurcation or crossing occurs there. It also describes how recognition can be affected by various diseases.


Keywords

  • Biometrics
  • human eye
  • human retina
  • biometric recognition
  • blood-vessel bifurcation
  • blood-vessel crossing
  • retina imaging
  • image processing
  • biometric entropy

1. Introduction

Biometric systems are increasingly used today, whether in access control systems or in person recognition. Following the widely used biometrics of fingerprints, the use of the iris of the eye is increasing. The retina of the human eye is another possibility in biometrics, but it has not yet been explored as much as the previously mentioned features.

For various applications, it is necessary to create a reliable recognition system based on the given parameters. The main goal of this work is not to create a perfect application for determining the degree of agreement of two retinas, but rather to outline the principles for the individual components of recognition.

The first chapter of this work summarizes the theoretical foundations of the eye and retina. The formation of blood vessels in the vascular bed of the retina is also described, and the entropic similarity of all retinas is indicated. Finally, tools for examining the eye and for recording a retinal image are mentioned. The second chapter describes the biometric basis for retinal feature recognition and mentions existing principles. The following chapter presents the details of the individual parts of our recognition principle, together with the databases used and the programs created. The last chapter summarizes the results of all created programs.


2. The retina

2.1 Eye anatomy

The human eye consists of many working parts, such as the sclera, cornea, pupil, lens, iris, ciliary body, retina, optic nerve, and choroid. The sclera, the white outer layer of the eye, works as its protector. The cornea, a transparent circular part, refracts the light entering the eye onto the lens. The lens is a crystalline part located directly behind the pupil; its task is to focus light onto the retina. The pupil is the dark spot at the center of a healthy iris. It acts like the shutter of a camera, since the amount of light entering the human eye is regulated by the diameter of the pupil. The iris is the colored, visible part of the eye located in front of the lens. It regulates the amount of light entering the eye by widening (dilation) and narrowing (constriction) of the pupil. The ciliary body delivers oxygen and nutrients to the lens and cornea. It contains the ciliary muscle, which changes the shape of the lens when our eyes focus on an object. The optic nerve transfers all the visual information from the retina to the brain. The choroid is a thin vascular layer between the retina and the sclera. It provides oxygen and nourishment to the outer layers of the retina and also contains a pigment that absorbs excess light (Figure 1) [2].

Figure 1.

Anatomy of an eye [1].

2.2 Anatomy of the retina

The retina is located at the back of the eye and is the only part of the central nervous system that is non-invasively observable. It is responsible for sensing light rays and thus for vision itself. It is a very thin, transparent, light-sensitive layer; retinal cameras in fact capture the deeper layers of the eye behind it. The retina is 0.2-0.4 mm thick and contains two types of light-sensitive cells, rods and cones. The rods, numbering approximately 75 to 150 million [3, 4], are used for more sensitive vision in low light; however, they only send grayscale information to the brain. The cones, on the contrary, perceive incident light rays in color, but they need more light to function than the rods. There are approximately 7 million of them in one eye, divided according to the perception of the colors red, green, and blue. The peripheral part of the retina is rod-dominated, whereas the yellow spot is cone-dominated. The low density of rods in the yellow spot makes it less sensitive to light. The retina can be compared to a 157-megapixel camera [3, 4].

The two main parts of the retina are shown in Figure 2: the blind spot (optic disk) and the macula (yellow spot). The optic disk is a circular area with an average surface of about 3 mm² where the ganglion cells (projection neurons transferring information from the retinal neurons) form the optic nerve, the central retinal artery enters the retina, and the retinal vein leaves it. The color of a normal optic disk varies from orange to pink. The optic disk is known as the “blind spot” since it does not contain photoreceptors, i.e., rods and cones; the corresponding parts of the visual field are not visible to a person. However, thanks to the ability of the brain to ignore or interpolate the missing information from the other eye, we usually do not notice the blind spot. Opposite the optic disk, the macula (yellow spot) is the area of sharpest vision, with a diameter of around 5 mm. The highest concentration of cones in the macula makes it responsible for the perception of colors. The fovea, located in the center of the macula, has the densest concentration of photoreceptors in the eye. The image of an observed object falling on the macula (yellow spot) is projected onto the fovea.

Figure 2.

Retinal image [5].

2.3 Retinal vasculature

The structure of retinal vessels is much like that of the brain and remains unchanged throughout life. The two main sources of blood supplying the retina are the central retinal artery and the choroidal vessels. The choroidal vessels nourish the retina’s outer layer, which contains the photoreceptors. Meanwhile, four major branches of the retinal artery, called terminal arterioles, provide the blood supply nourishing primarily the inner retina.

One type of retinal scanner is the fundus camera, a specialized low-power microscope with an attached camera. Figure 3 shows the fundus camera used in the biometric laboratory of the Faculty of Information Technology (FIT) at the Brno University of Technology (BUT) in the Czech Republic. No matter which kind of retinal scanner is used, retinal image acquisition is not possible without the user’s conscious cooperation.

Figure 3.

Slit lamp example (left) [6]; example of a non-mydriatic fundus camera (middle) [7]; fundus camera used in the biometric laboratory at FIT BUT (right).

Blood vessels come out from the blind spot and form a tree shape on the surface of the retina (shown in Figure 2). This tree shape hardly ever changes over the lifetime of an individual, except due to some severe eye diseases such as severe glaucoma and cataracts. It is not affected by the external environment, since the retina, unlike a fingerprint, is not an external organ [8, 9]. Moreover, it differs from person to person due to many factors, such as the thickness of the vessels, their distance from each other, and the presence of bifurcations (division points of a single vessel), crossings (intersection points of two or more vessels), and ending points of vessels, which are all in different locations and various numbers.

2.3.1 Medicine basis of vein structure

Retinal vasculature arises mainly by angiogenesis. Its formation and the factors that regulate the development of the superficial retinal vascular plexus in humans are already quite well known. Cell-cell signaling between different cellular components affects the vessel regression, sprouting angiogenesis, vascular remodeling, and vessel differentiation events that are involved. These cellular components include neurons, glia, endothelial cells, pericytes, and immune cells.

The development of the retinal vasculature is preceded by an invasion of migrating astrocytes coming from the optic nerve into the retina. They begin from the optic nerve head and extend over the retina’s inner surface in a centrifugal fashion as a proliferating cell population. They form a cellular mesh-like network, which leaves a template for the blood vessels in its wake. Astrocytes experience hypoxia and strongly express vascular endothelial growth factor (VEGF) before blood vessels cover them. VEGF is the key stimulus for angiogenesis: it induces the migration of endothelial cells and the expansion of the nascent vascular network over the retina’s inner surface. Bit by bit, the astrocytes begin to downregulate VEGF expression once a perfused vascular network has formed, and the typical stellate morphology of the retinal blood vessels emerges. Sprouting angiogenesis, in which proliferating endothelial cells form new vessel sprouts and the vascular network is extended from pre-existing vessels, is also believed to have formed the deeper networks of the retinal vasculature [1].

The presence of astrocytes is related to the blood vessels in the retina. In primates, retinal astrocytes and blood vessels cover the entire retina except for the fovea, which has neither. Taken together, these observations imply that the retinal astrocyte network and the retinal vasculature are linked both developmentally and evolutionarily.

The process of retinal angiogenesis explains why the pattern of retinal vascular network appears quite uniform in the population. Near the disc, the arterioles are more heavily concentrated in the outer choroid, especially nasal and temporal to the disc. The choroidal arteries are wavy or rippled, some like corkscrews, some with one or more 360° loops, and many with rather tightly twisted S-shaped turns.

When the outer vessels are gently removed, the smaller vessels are visible. Posteriorly, most prominently in the submuscular region, the vessels in the middle layers are highly complicated. Branches are usually not equal in length. The angles formed by two twigs range from 30° or 40° to 180°, and T-shaped branchings are common. After the bifurcation, a branch may continue in a straight path, making a sweeping C-shaped curve of 240° to almost 360° and diving inward to enter capillaries only a short distance from the parent trunk.

Most of the bifurcations of the larger choroidal arteries are dichotomous and the very acute angle formed by the offshoots points toward the disc. Second and third branchings may occur almost immediately so that a parent vessel seems to break up, fanlike, into four or six radiating branches after the first branching. Anastomoses between the larger choroidal arteries are not common but are frequent in the smaller branches [10].

2.4 Tools for medical examination and biometric scanning

An image forms on the retina much as it does in a camera: the beam first passes through the pupil of the eye and is then focused by the lens onto the retina, which acts like the film. Specialized optical devices are used in medicine for the visual examination of the retina.

We will first describe the existing medical devices used for retinal acquisition and examination, followed by biometric devices. Specialized medical devices provide high-quality scans of the retina. However, two significant disadvantages cause their failure in the biometric market. The first is the very high cost, ranging from thousands (used devices) to tens of thousands of euros. Second, medical personnel are needed for data acquisition, since medical devices have only manual or semi-automatic modes. Up to now, there is still no device on the market that can work in a fully automatic mode (without user interaction); one such device is in development, but its estimated price is already too high for the biometric market.

2.4.1 Medical devices

A frequently used ophthalmologist’s device for examining the human retina is the direct ophthalmoscope. The doctor examines the retina through the pupil from a distance of only a few centimeters. The disadvantages of this device are the relatively small observed area and the need for patient cooperation [11].

For a more thorough observation of the retina, it is appropriate to use a so-called fundus camera (as can be seen in Figure 3). After a relatively long positioning and focusing procedure, the camera takes a picture of the back of the eye in a short time, and the doctor can then examine it for any length of time or look for changes in the retina since the last examination. By rotating the eye, a large part of the retinal area can be examined.

A slit lamp allows examining the eye’s anterior segment using biomicroscopy. This, along with direct and indirect ophthalmoscopy, is one of the main ophthalmoscopic examination methods for the eye’s anterior and posterior parts, with the slit lamp being the most widely used. A fundus camera, also known as a retinal camera, is a special device for displaying the posterior segment: the optic nerve, the yellow spot, and the retina’s peripheral part. It works on the principle of indirect ophthalmoscopy, where the instrument has a built-in source of primary white light that can be modified by different types of filters.

The optical system focuses on the human eye, where the light reflects from the retina and bounces back to the lens of the fundus camera. If mydriasis has to be applied to the eye first, a mydriatic fundus camera is used. The intention is to enlarge the “inlet opening” of the pupil, which allows scanning a larger portion of the retina. Non-mydriatic fundus cameras are favored, as no procedure is performed that alters the subject’s normal sight; for some subjects, however, mydriasis is required. These devices are priced for specialized medical workplaces and cost tens of thousands of euros.

The scanning device works on the concept of medical eye-optic devices, which have a complex mechanical construction; such retinoscopes and fundus cameras are complex and also quite expensive.

The reflection of a part of the light beam that hit the retina is scanned by a CCD camera. This concept is similar to a retinoscope, where the beam of light is adjusted so that the eye’s lens concentrates it on the retina’s surface. The ophthalmic lens receives back the reflection of only a part of the transmitted light beam and readjusts it. The beam leaves the eye at an angle below the one at which it entered (return reflection). An image of the eye’s surface can be obtained at roughly 10° around the visual axis, as in Figure 4. A circular snapshot of the retina is captured by the device; the reflection of light coming from the cornea would be useless in raster scanning [13].

Figure 4.

Principle for obtaining an eye background image [12].


3. Biometric recognition and verification

A biometric recognition system (BRS) recognizes a person by analyzing the random pattern in his/her physiological or behavioral characteristics, known as biometrics, which are unique, non-transferable, unforgettable, and always carried along. Among biometric characteristics we can count voice, signature, face, gait, fingerprint, DNA, odor, vein pattern, hand geometry, iris, retina, etc. A BRS has become a usual requirement in strictly protected areas such as nuclear plants, military facilities, scientific laboratories, cash vaults, borders, airports, and government offices, as well as in day-to-day life, such as online banking, car locking, building access, and phone unlocking. Compared to other biometrics, eye biometrics, which include the iris and the retina [8], offer a higher degree of randomness. Even for identical twins, the patterns of retinal blood vessels and irises are very distinctive [14, 15]. In addition, eye biometrics remain the same for the entire lifetime of a person. Therefore, the error rate of an eye-biometrics-based BRS is very low. Even though both eye biometrics offer very high security, the probability of counterfeiting is lower in a retina-based BRS (RBRS) than in an iris-based BRS, because it is not possible to capture a retinal image without the user’s cooperation and a special camera such as a fundus camera or an ophthalmoscope. Iris images, on the other hand, can be captured by a normal camera at a distance.

In an RBRS, the unique pattern of blood vessels in the retinal image is used to recognize a person. Four kinds of approaches are generally found in the literature for capturing the uniqueness of retinal vessels, one of which is matching bifurcations and crossings of the blood vessel structure [16]. A bifurcation is the point where one blood vessel divides into two branches. A crossing is the point where blood vessels cross one another. Inspired by the idea of fingerprint minutiae [17, 18], Ortega et al. claimed in [9, 13] that using bifurcations and crossings as feature points can overcome the drawbacks of an RBRS that uses the tree-shaped pattern of blood vessels of the whole retina, as proposed by Mariño et al. [19].

The retina is suitable for biometric purposes. As already mentioned, the pattern of blood vessels is unchanged during human life. In addition, the retina is well protected from the environment. However, this is also a disadvantage, because its capture is relatively complicated. In order to uniquely identify a person by capturing the uniqueness of the eye’s tree-shaped blood vessels, four kinds of approaches have been proposed [16]: (i) using general signal and image processing techniques on the raw retinal images; (ii) matching of the branching blood vessel structure as a whole; (iii) matching bifurcations and crossings of the blood vessel structure; and (iv) matching the pattern of the vessels that are traversing a well-defined region. In all these approaches, a database is created by storing the templates made by the features in the training phase. These features are accessed in the identification phase.

The uniqueness or randomness of the tree-shaped blood vessels can be measured by biometric entropy, which is expressed in bits. The bigger the biometric entropy, the lower the chance that the retinas of two different persons will match. There are two ways to determine biometric entropy [16]. The first is to fit the distribution of the features to existing theoretical models; the second is to determine empirically the probability p that the templates of two different persons match. In the second way, the entropy is −log2 p. Arakala et al. [16] measured the biometric entropy in a ring around the blind spot. Each vessel segment was represented by a triplet: position (in degrees around the ring), width (thickness in degrees), and angle (the angle that the segment makes with a radial line from the ring passing through the segment’s centroid). The resulting biometric entropy was approximately 17 bits, i.e., about 2^17 possible combinations of retinal patterns.
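As a sketch of the second, empirical approach, the entropy computation is a one-liner; the function name and the example probability are our illustrative assumptions:

```python
import math

def biometric_entropy(p_match: float) -> float:
    """Empirical biometric entropy in bits: -log2 of the probability
    that the templates of two different persons match by chance."""
    return -math.log2(p_match)

# A chance-match probability of 2^-17 corresponds to 17 bits of entropy.
print(biometric_entropy(2 ** -17))  # → 17.0
```

The smaller the match probability between strangers, the more bits of entropy the template carries.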

3.1 History of retinal recognition

While studying eye diseases in 1935, ophthalmologists Carleton Simon and Isidore Goldstein discovered that the image of the bloodstream in the retina is unique to each individual. Later on, they released a journal article about identifying people by the unique patterns of the retinal veins [20]. Dr. Paul Tower supported this finding when, in 1955, he released an article on a study of monozygotic twins [14]. The article states that the retinal vessel patterns showed the least resemblance of all the patterns examined. The idea that the retinal vessel pattern does not change was thus established back then.

Robert Hill, who founded EyeDentify in 1975, spent most of his time and effort developing a simple, fully automated device that could capture a snapshot of the retina and use it to verify the user’s identity. These devices, however, did not emerge on the market even after several years [8, 21].

Existing fundus cameras were modified by several other companies to capture retinal images for identification. These, however, had major disadvantages, such as the complex alignment of the optical axes, the use of visible light, which causes discomfort, and the high price of the cameras.

Infrared (IR) illumination was later discovered and used. The retina is almost transparent to the IR beams hitting it, and the choroid reflects this radiation; the reflection creates an image of the blood vessels in the eye. Since IR light is not visible, the pupil diameter is not reduced even when the eye is being irradiated.

The first prototype of the IR device was released in 1981. It had an eye-optic camera illuminating the eye with IR radiation. The camera was attached to an ordinary personal computer, which analyzed the captured image using a simple correlation comparison algorithm.

EyeDentification System 7.5 was launched four years later by EyeDentify Inc. Verification was done using the retina image and a PIN entered by the user, with the user data stored in a database [8, 21].

ICAM 2001 was the last known retinal scanning device made by EyeDentify Inc. The device could store a maximum of 3,000 subjects and 3,300 history transactions [8]. Unfortunately, the product was withdrawn due to low user acceptance and high price. Other companies, such as Retica Systems Inc., worked on a prototype retinal acquisition device for biometric purposes that was intended to be more user-friendly and easier to integrate into commercial applications. Unfortunately, this device was also a failure in the market.

3.2 Limitations

The limitations of retinal biometrics discourage its further use as a biometric system. Acceptable solutions to these shortcomings have not yet been found [21].

Fear of eye damage - due to a myth that the devices damage the retina. The level of infrared illumination used by these devices is low and has proven to be completely harmless. This information must be shared with people so that they will not be afraid of using these devices.

Outdoor and indoor use - the return beam of the light passing through the pupil twice (once inward, then outward of the eye) can be greatly weakened if the subject’s pupil is too small. This can result in an increase in the false rejection rate.

Ergonomics - the subject must be near to the sensor, which may cause discomfort.

Severe astigmatism - the eye must be focused on a point. This may be difficult for those with visual impairments such as astigmatism, which can negatively affect the template generation.

High price - the cost of optical devices is always higher than the price of other biometric capture systems, such as fingerprint or voice recognition devices.

High-security areas such as nuclear and arms development, even manufacturing, government and military facilities, and other critical infrastructure can make use of retinal recognition.

3.3 Recognition schemes

Several schemes can be used for recognizing retinal images. Farzin [8] and Hill [21] segmented the blood vessels to generate features and stored at most 256 12-bit samples, which were then shrunk to a reference record of 40 bytes for each eye; the contrast information is stored in the time domain. Fuhrmann and Uhl [22] extracted the vessels to obtain the retina code, a binary code describing the vessels surrounding the optic disc.

3.4 Verification phase

In order to use the proposed algorithm universally, and therefore also for the verification phase, it is necessary to choose the parameters with regard to the verification steps. During the verification phase, when recognizing samples that should be identical, we encounter the problem of imaging inaccuracy. We practically never manage to take a picture in exactly the same way; there are small deviations such as rotation or zoom. These deviations must be tolerated to a small extent.

Another problem may be the absence of some points. Even these inaccuracies can affect the obtained similarity score, so in the verification phase they must carry a relatively low penalty. The values in the previous chapter are therefore chosen so that the same algorithm can be used for both the recognition and verification phases.


4. Our recognition method

The distribution of vascular lines in the retina of the human eye is unique (as shown in Chapter 3.1), similar to the papillary lines on human fingers. Currently, there is no single approach to retinal recognition. Our procedure follows dactyloscopy, where bifurcations, terminations, and the positions and directions of detected points are stored. We look for “anomalies” on the vessels in the retina - the places of visual crossings and bifurcations - and record their position within the retina. In images, it is not easy to recognize whether a point is a crossing or a bifurcation of a vessel, as the two phenomena often overlap. Therefore, we are only interested in the feature itself and not in its specific type. A vessel terminates by fading “until lost”, so a specific termination point cannot and will not be detected. We locate the points according to their position relative to the optical disk and the fovea. Therefore, we also store their position within the image, as will be further described in Chapter 4.2 - the coordinate system. The result is a set of vectors such that the system is not affected by changes in retinal scanning (different rotations, zooms, or chamfers).

Recognition becomes problematic in the presence of diseases that manifest themselves as a change in the retina, such as bleeding. As with other biometric features, a relatively large amount of information about human health can be read from retinal manifestations. Therefore, it is appropriate that the biometric facility manager guarantees the non-misuse and non-storage of this sensitive data, for example under the GDPR [9, 23].

4.1 Statistical evaluation of the crossings and bifurcations frequency

If we take a brief look at a few images of the human eye retina, we discover that crossings and bifurcations are not equally frequent in various areas. The probability of their occurrence is higher in some areas and almost zero in others. It should be noted at the outset that the ability to mark crossings and bifurcations strongly depends on the quality and contrast of the image. The statistically empty parts contain very small capillaries that are undetectable in the image by either automatic or manual search.

When we create the frequency map, the points can be assigned different weights for pattern recognition. A match between the two compared retinas at a rarely occurring site may score higher than a match in other areas. Therefore, we statistically evaluated several hundred retinal images and created our own frequency scheme, which we later use to adjust the evaluation when comparing two retinas.
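Such a frequency map can be sketched as a histogram over a polar grid centered on the optical disk; the grid resolution and the r_max cutoff below are our illustrative assumptions, not values from the chapter:

```python
def frequency_map(points, n_r=8, n_angle=24, r_max=4.0):
    """Build a relative-frequency map of minutiae over a polar grid.

    `points` is a list of (r, psi) pairs: r in optic-disk-to-fovea
    units, psi in degrees clockwise. Returns an n_r x n_angle grid
    whose cells sum to 1 (relative frequency of each region).
    """
    hist = [[0.0] * n_angle for _ in range(n_r)]
    total = max(len(points), 1)
    for r, psi in points:
        i = min(int(r / r_max * n_r), n_r - 1)   # radial bin
        j = int(psi % 360.0) * n_angle // 360    # angular bin
        hist[i][j] += 1.0 / total
    return hist
```

When comparing two retinas, a cell’s relative frequency can then be mapped to a weight, so that a match (or a missing match) in a rarely populated cell is scored differently from one in a dense cell.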

4.2 Coordinate system

In order to work uniformly with all retinas without major complications, we have introduced a polar coordinate system in which two values can be used to align a retinal image to the same coordinate system as the others. Our coordinate system assumes that the distance between the optical disk and the fovea is approximately similar across different retinas. We also assume that, even if the physical structure of a retina differs significantly, its development proceeded by similar rules. For example, if the distance between the optical disk and the fovea is smaller than average, the entire retinal structure will be proportionally smaller, and this will not affect our system.

The main point of the entire coordinate system is the center of the optical disk. In the records of individual retinas, its position in a particular image is stored as the distance from the left and top edges of the image. In addition, the width and height of the optical disk area are stored (1st line of the output text file). The second record is the center of the fovea (2nd line). The width and height of the fovea are not stored, because its boundaries are difficult to ascertain at a glance. The distance r between these two points is the basic unit of length for our coordinate system in each retina. This value may differ for every single image but is always valid within one retina. The second value is the angle ψ of a given point’s direction from the optical disk. An angle of 0° lies on the line to the fovea, and the value increases clockwise. This means that the fovea will have coordinates (1, 0°) in all retinas according to our coordinate system. Each bifurcation or crossing is expressed by its r and ψ and stored on the following lines of the output file.
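As an illustration, a detected point in pixel coordinates can be converted to this (r, ψ) system as follows; the function name and the assumption that image y coordinates grow downward (so screen-clockwise equals increasing atan2 angle) are ours:

```python
import math

def to_polar(point, disk_center, fovea_center):
    """Express an image-pixel point in the retina's own coordinates:
    r in units of the disk-to-fovea distance, psi in degrees measured
    clockwise from the disk-to-fovea direction."""
    dx, dy = point[0] - disk_center[0], point[1] - disk_center[1]
    fx, fy = fovea_center[0] - disk_center[0], fovea_center[1] - disk_center[1]
    base = math.hypot(fx, fy)            # disk-to-fovea distance = 1 unit
    r = math.hypot(dx, dy) / base
    psi = math.degrees(math.atan2(dy, dx) - math.atan2(fy, fx)) % 360.0
    return r, psi

# The fovea itself maps to (1, 0°) in every retina:
print(to_polar((300, 200), (100, 200), (300, 200)))  # → (1.0, 0.0)
```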

We convert the found bifurcations and crossings back from the polar to the Cartesian coordinate system when we need to display or evaluate the entered points globally. To do this, we need the positions of the center of the blind spot (CBS) and the center of the yellow spot (CYS). We then calculate their Euclidean distance d and the angle α between the two centers according to Eq. (1).


d = √((x_CYS − x_CBS)² + (y_CYS − y_CBS)²), α = atan2(y_CYS − y_CBS, x_CYS − x_CBS)  (1)

Using Eq. (2), we calculate the distance v of the bifurcation/crossing from the blind spot:

v = r · d  (2)

Then, using Eqs. (3) and (4), we calculate the coordinates dx and dy:

dx = v · cos(α + ψ)  (3)

dy = v · sin(α + ψ)  (4)

Lastly, the point resulting from the bifurcation/crossing in the Cartesian system is [dx + x_CBS; dy + y_CBS].
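A minimal sketch of this conversion in code, under the same clockwise-angle convention (the function and variable names are our illustrative choices):

```python
import math

def to_cartesian(r, psi, cbs, cys):
    """Convert a stored (r, psi) feature point back to image (Cartesian)
    coordinates. `cbs` and `cys` are the pixel centers of the blind spot
    and the yellow spot; psi is in degrees."""
    # Eq. (1): Euclidean distance d and angle alpha between the centers.
    d = math.hypot(cys[0] - cbs[0], cys[1] - cbs[1])
    alpha = math.atan2(cys[1] - cbs[1], cys[0] - cbs[0])
    # Eq. (2): the point's distance from the blind spot.
    v = r * d
    # Eqs. (3) and (4): offsets dx and dy from the blind spot center.
    dx = v * math.cos(alpha + math.radians(psi))
    dy = v * math.sin(alpha + math.radians(psi))
    # The resulting point in the Cartesian system.
    return (cbs[0] + dx, cbs[1] + dy)

# The fovea, stored as (1, 0°), decodes back to its own pixel position:
print(to_cartesian(1.0, 0.0, (100, 200), (300, 200)))  # → (300.0, 200.0)
```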

4.3 Recognition scheme

The algorithm for determining the degree of conformity of two retinas works by converting all points from the polar coordinate system (described earlier) to the Cartesian coordinate system. It is not necessary to align or rotate the two images: due to the chosen system, which is based on the positions of the optical disk and the fovea, these reference points on two retinas will always exactly overlap.

For the first crossing or bifurcation point in the first retina, the closest point in the second retina’s set is determined. Then the same procedure is repeated for the point found in the second retina - finding the closest point back in the first retina. This determines whether the two points are really each other’s closest. Without this two-way check, a point marked in only one retina could be matched to a point that actually belongs to a pair with a different point from the first retina.

Figure 5 shows a scenario without the two-way control. Green points from the first retina and blue points from the second retina are combined into one image. For the point in the red circle, the nearest point found already belongs to a pair with another point.

Figure 5.

Illustration of finding the nearest point.
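The two-way check described above can be sketched as a mutual-nearest-neighbor search (a naive O(n·m) illustration; the names are ours):

```python
def mutual_nearest_pairs(points_a, points_b):
    """Pair feature points from two retinas: (a, b) is kept only if
    b is the point of the second retina closest to a AND a is the
    point of the first retina closest to b (two-way control)."""
    def closest(p, candidates):
        return min(candidates,
                   key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

    pairs = []
    for a in points_a:
        b = closest(a, points_b)
        if closest(b, points_a) == a:
            pairs.append((a, b))
    return pairs

first = [(0, 0), (10, 10)]
second = [(1, 0), (11, 10), (50, 50)]
print(mutual_nearest_pairs(first, second))
# → [((0, 0), (1, 0)), ((10, 10), (11, 10))]
```

The unmatched point (50, 50) is simply left without a pair, as happens when a feature is detected in only one of the two images.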

If the distance between the two found points exceeds the specified limit, they are not considered close. If they are “close”, they are removed from the lists of both retinas and their distance is saved. Before saving, the distance value is converted to a percentage, where 0% means zero distance between the points and 99% is the maximum allowed distance for two points still considered close. This value is then squared to better differentiate between near and far points.

The percentage value of the distance is then adjusted according to the statistical model described above. The value is multiplied by a number from 1 to 4, where a lower number corresponds to a lower frequency in the statistical model. The reasoning is that a missing nearby point in a high-frequency region is much stronger evidence against retinal conformity than a missing point in a sparse region far from the optical disk. In addition, places with higher frequencies are usually closer to the center of the coordinate system, where the marking is more accurate.
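The scoring described in the last two paragraphs can be sketched as follows; the concrete distance limit, the normalised `region_frequency` input, and the rounding of the 1–4 weight are illustrative assumptions, not the chapter's exact values:

```python
def distance_score(dist, max_dist):
    """Map a point-pair distance to a percentage: 0% for identical
    positions, 99% at the maximum allowed distance. Pairs beyond
    max_dist are rejected (None). The score is squared to widen the
    gap between near and far pairs, as described in the text."""
    if dist > max_dist:
        return None                  # not "close" -- pair is rejected
    pct = 99.0 * dist / max_dist     # 0..99 %
    return pct ** 2

def weighted_score(score, region_frequency):
    """Scale the score by a factor from 1 to 4 taken from the statistical
    model: regions where features are frequent get a higher weight, since
    a mismatch there is stronger evidence against conformity.
    `region_frequency` in [0, 1] is an assumed normalised frequency."""
    weight = 1 + round(3 * region_frequency)   # 1 (rare) .. 4 (frequent)
    return score * weight

s = distance_score(5.0, 20.0)        # a close pair, 5 px apart
print(s, weighted_score(s, 1.0))     # 612.5625 2450.25
```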

4.4 Used databases

For testing purposes, we used several public databases and one internal database: Messidor [24], e-ophtha [10], High-Resolution Fundus (HRF) [25], and Retina EBD STRaDe (EBD).

The first of them is the publicly available Messidor database [24] from the ADCIS team. The Messidor database contains 1,200 color numerical images of the posterior pole of the eye fundus. The images were captured in three ophthalmologic departments using a color video 3CCD camera on a Topcon TRC NW6 non-mydriatic retinograph with a 45° field of view. The images use 8 bits per color plane at 1440 × 960, 2240 × 1488, or 2304 × 1536 pixels. Of these, 800 images were captured with the pupil dilated (one drop of Tropicamide at 0.5%) and 400 without dilation.

The second is the e-ophtha database, also from the ADCIS team. This database contains 47 images with exudates and 35 images without lesions. For our purposes, we used only the images of healthy retinas.

Next is the High-Resolution Fundus (HRF) image database from the German Friedrich-Alexander University. The HRF database contains 15 images of healthy patients, 15 images of patients with diabetic retinopathy, and 15 images of glaucomatous patients. Each image has a binary gold-standard vessel segmentation image, and the particular datasets are provided with masks determining the field of view (FOV). The gold-standard data were generated by a group of experts from the retinal image analysis field together with medical staff from the cooperating ophthalmology clinics.

The EBD is an internal set of iris and retina images of our research group STRaDe (Security Technology Research and Development at the Faculty of Information Technology, Brno University of Technology (CZ), focused on security in IT and biometric systems). The database contains 684 images of both retinas from 110 distinct people, i.e., 220 distinct eyes. Unfortunately, a significant part of this set consists of very low-quality pictures; on the other hand, every person in this database has several images of each eye.

For additional checking of our algorithms, we used the retinal fundus camera that we have been developing in our laboratory for the past several years. We used 30 images from students, captured during the Biometric Systems course. Some images are of poor quality, which is useful for testing the application under worse conditions, and several images are of the same eye of one person (referred to below as the “school database”).

4.5 Developed applications

We developed several application software modules to determine some properties of the retina, which will then be used to find out the degree of similarity of the two entered retina patterns.

4.5.1 Manual marking program

The first program (SW1) was developed for manual retina marking by our students. First, the edges of the optical disk are marked; the program stores the top-left position and the width and height of the ellipse around the optical disk. Then the fovea is marked. Both positions are stored in Cartesian coordinates derived from the image properties and resolution. After both main structures have been marked, each feature in the retina is marked; these points are stored in polar coordinates. The data from the images are stored as a plain text file. With this program, we marked all retinal images from the Messidor, e-ophtha, and HRF databases.

4.5.2 Automatic marking program

The second program (SW2) stores the same information about the image as SW1, except that it performs the steps automatically. Details of the overall operation of the program, its steps, and further development are summarized in [5]. The program is developed in Python and was used on the same images as SW1. An average of 48 features was found in these retinal images.

4.5.3 Compare program

SW3 compares the detection accuracy between the results marked manually by SW1 and those marked automatically by SW2. The algorithm compares the manually selected bifurcations/crossings with the automatically detected set. The paired bifurcations/crossings are found automatically through a method like that in chapter 4.3.

The converted points are stored in a list, with their positions in it used as IDs for building disjoint sets. A placeholder ID of −1 is assigned to unmatched points. The problem is converted to an integer programming problem [26] to calculate the minimum-weight pairing; edges are then determined between the individual vertices of the bipartite graph. The number of bifurcations/crossings that were manually found and paired can then be calculated.
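The minimum-weight bipartite matching referenced via [26] can be illustrated with a brute-force sketch (a stand-in for the authors' integer-programming formulation; it only pairs the first min(n, m) points and is feasible only for small sets):

```python
import itertools
import math

def min_weight_matching(manual, auto):
    """Brute-force minimum-weight bipartite matching between manually and
    automatically marked points, enumerating assignments of manual points
    to distinct auto points. Returns (total_cost, pairs); points left out
    of `pairs` would receive the placeholder ID -1 in the chapter's scheme."""
    n, m = len(manual), len(auto)
    best_cost, best_pairs = math.inf, []
    for perm in itertools.permutations(range(m), min(n, m)):
        pairs = list(zip(range(len(perm)), perm))
        cost = sum(math.dist(manual[i], auto[j]) for i, j in pairs)
        if cost < best_cost:
            best_cost, best_pairs = cost, pairs
    return best_cost, best_pairs

manual = [(0, 0), (5, 5)]
auto = [(5, 4), (1, 0)]
print(min_weight_matching(manual, auto))  # (2.0, [(0, 1), (1, 0)])
```

In practice a polynomial algorithm (e.g. the Hungarian method) replaces the enumeration; the brute force only illustrates the objective being minimized.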

4.5.4 Visualization program

SW4 combines the data marked by SW1 and SW2, stored in the previously described text-file format, into one picture. It aggregates individual points into a grid of adjustable size; for our purposes, a summary grid of 5 × 5 pixels proved most suitable. In the images, fields with a higher frequency of occurrences are colored darker.
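The aggregation into 5 × 5 pixel cells can be sketched as follows (a minimal sketch; the function name and data layout are assumptions):

```python
from collections import Counter

def frequency_grid(points, cell=5):
    """Bin marked feature points into cell x cell pixel squares and count
    occurrences per square; in the visualization, cells with higher counts
    are colored darker."""
    grid = Counter()
    for x, y in points:
        grid[(int(x) // cell, int(y) // cell)] += 1
    return grid

pts = [(1, 1), (3, 4), (12, 2)]
print(frequency_grid(pts))   # cell (0, 0) holds 2 points, cell (2, 0) holds 1
```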

4.5.5 Recognition program

SW5 works according to the recognition principle described in chapter 4.3. For this purpose, we used the EBD database, which contains multiple images of each retina. For practical tests, we also used our school database of retina images taken by students of our faculty.


5. Results

The representative sample of data was obtained by randomly choosing 460 images from Messidor, 160 images from e-ophtha, and 50 images from HRF. The chosen retinal images have both left- and right-eye images available. They were shrunk to a resolution of around 1 Mpx to fit on the screen. FIT BUT students did the manual marking of bifurcations and crossings with SW1 in 2016 and 2017.

5.1 Accuracy of manual and automatic marking

After the points were marked manually via SW1, SW2 searched for the same points automatically. With this principle, we determined the accuracy of the point positions obtained during manual marking. The resulting average deviation of a minutia was about 5 pixels [5].

The automatic algorithm was additionally tested and enhanced on the VARIA database [27], which contains 233 images from 139 individuals. A resultant image illustrating the comparison is shown in Figure 6.

Figure 6.

Comparison of manual and automatic marking. Manually marked points are displayed by green points and automatically marked points are displayed by blue points.

Conversely, we assume that manual marking highlights the blind and yellow spots accurately, so SW3 also checks the correctness of the automatic localization. The success rates were 92.97% for the blind spot and 94.05% for the yellow spot. Incorrect localization of the spots was mainly caused by excessive brightness or darkness in the image.

5.2 Frequency of features occurrences

Figure 7 shows the frequencies obtained by manual marking with SW1. The program SW4 also shows the outer circular area in which at least some points occur, the axes of the retina, and the inner ellipses indicating areas with minimal occurrences of points around the yellow spot (yellow) and the blind spot (black).

Figure 7.

Frequency visualization of bifurcations and crossings.

For SW5, we randomly selected 10 persons from the EBD database. Each has three images of the left and of the right eye, labeled, e.g., 1002-2-R for person 1002’s second image of the right eye. We marked all retinas with SW1 very carefully. The first and third images of each eye were marked by a specialist, with at least 24 hours between the markings of the retinas of the same person. The second images were marked by an ordinary computer user who is not involved in the project.

Tables 1 and 2 show the average results of the evaluation by SW5.

                                                        Avg value  Min value  Max value
Compared with the second eye of the same person           65.52%     39.69%     80.38%
Compared with a different person's eye                    67.66%     45.32%     74.21%
Compared with our school database (different persons)     57.62%     43.36%     74.58%

Table 1.

Recognition comparison results.

                                          Avg value  Min value  Max value
Same eye, marked by a different person      86.51%     78.90%     93.16%
Same eye, marked by the same person         93.54%     88.54%     95.78%

Table 2.

Verification results.

Figure 8 shows two images of the same retina taken at significantly different angles. After marking with SW1 and determining the similarity of the retinas with SW5, the result was 91.09%.

Figure 8.

Sample of two images of the same retina in the EBD database.


6. Conclusions

In this chapter, we have presented different approaches for the evaluation of individual parameters in human recognition based on the retina of the human eye. The main principle was to locate the individual bifurcations and crossings in the retinal image; the core of the method is a comparison of the locations of these points in both images. Another part of the principle was based on a set of almost 1,000 manually marked images in which all bifurcations and crossings were located. These points were tested for placement accuracy by automatic search. Depending on the frequency of occurrence of points in different parts of the retina, the points received different weights for the final correspondence. Finally, the procedure was tested on several differently photographed retinas of one person.

Our aim was not to find the best recognition algorithm but to determine the best properties of the individual parts of the procedure. The algorithms could be improved, especially in the recognition part itself; the evaluation algorithm was illustrative only, to show how the individual parts work.



Acknowledgments

This work was supported by The Ministry of Education, Youth and Sports of the Czech Republic from the National Programme of Sustainability (NPU II), project IT4Innovations excellence in science LQ1602, and by the “Secure and Reliable Computer Systems” project IGA FIT-S-17-4014.


References

1. Anatomy & Physiology, Connexions Web site.
2. M. D. Jakobiec, A. Frederic, “Ocular Anatomy, Embryology and Teratology,” Harpercollins, pp. 562-563, 1982
3. Barajas H. Atlas of the Human Eye: Anatomy & Biometrics. Palibrio, p. 74, ISBN 978-1506510330
4. Snell R.S., Lemp M.A. Clinical Anatomy of the Eye. 2nd Edition, Wiley-Blackwell, 2013, ISBN 978-0632043446
5. Pres M. Bifurcation localization in retina images. MSc. thesis, supervised by Lukas Semerad, Brno University of Technology, Faculty of Information Technology, Czech Republic, 2016
6. Optimis Fusion, accessed on 2018-06-06
7. Kowa VX-20, accessed on 2018-06-06
8. Farzin H., Abrishami-Moghaddam H., Moin M.S. A Novel Retinal Identification System. EURASIP Journal on Advances in Signal Processing, Hindawi, Vol. 2008, p. 10, DOI 10.1155/2008/280635
9. K. Acharjya, G. Sahoo and S. Sharma, “An Extensive Review on Various Fundus Databases Use for Development of Computer-Aided Diabetic Retinopathy Screening Tool”, Proceedings of ICSCSP 2018, 2019
10. Decencière E., Cazuguel G., Zhang X., Thibault G., Klein J.C., Meyer F., Marcotegui B., Quellec G., Lamard M., Danno R., Elie D., Massin P., Viktor Z., Erginay A., Laÿ B. and Chabouis A. TeleOphta: Machine learning and image processing methods for teleophthalmology. IRBM, Elsevier Masson, Vol. 34, No. 2, pp. 196-203, 2013
11. Timberlake G.T., Kennedy M. The Direct Ophthalmoscope – How it Works and How to Use it. University of Kansas, 2005, p. 39, available online
12. Hill R. Retina Identification. Michigan State University, 2005
13. E. Trucco, T. MacGillivray and Y. Xu, “Computational Retinal Image Analysis: Tools, Applications and Perspectives”, Academic Press, 2019
14. Tower P. The Fundus Oculi in Monozygotic Twins: Report of Six Pairs of Identical Twins. AMA Arch Ophthalmology, No. 54, pp. 225-239, 1955
15. J. Daugman, “How iris recognition works,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004
16. Arakala A., Culpepper J.S., Jeffers J., Turpin A., Boztas S., Horadam K.J., McKendrick A.M. Entropy of the Retina Template. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 1250-1259
17. A. Jain, L. Hong, S. Pankanti, and R. Bolle, “An identity authentication system using fingerprints,” Proceedings of the IEEE, vol. 85, no. 9, 1997
18. X. Tan and B. Bhanu, “A robust two step approach for fingerprint identification,” Pattern Recognition Letters, vol. 24, pp. 2127-2134, 2003
19. C. Mariño, M. G. Penedo, M. J. Carreira, and F. González, Retinal Angiography Based Authentication. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003, pp. 306-313
20. Goldstein I., Simon C. A New Scientific Method of Identification. New York State Journal of Medicine, Vol. 35, 1935
21. Hill R.B. Retina Identification. In: Biometrics: Personal Identification in Networked Society. New York: Springer, 1996, pp. 123-141
22. Fuhrmann T., Hämmerle-Uhl J., Uhl A. Usefulness of Retina Codes in Biometrics. Advances in Image and Video Technology, 3rd Pacific Rim Symposium, Japan, 2009, Springer LNCS 5414, ISSN 0302-9743
23. J. Hájek and M. Drahanský, “Biometric-Based Physical and Cybersecurity Systems: Recognition-Based on Eye Biometrics: Iris and Retina”, Springer International Publishing, 2019, pp. 37-102
24. Decencière E., Zhang X., Cazuguel G., Lay B., Cochener B., Trone C., Gain P., Ordonez R., Massin P., Erginay A., Charton B. and Klein J.C. Feedback on a publicly distributed image database: the Messidor database. Image Analysis & Stereology, Vol. 33, No. 3, pp. 231-234, 2014
25. Köhler T., Budai A., Kraus M., Odstrčilík J., Michelson G. and Hornegger J. Automatic no-reference quality assessment for retinal fundus images using vessel segmentation. 26th IEEE International Symposium on Computer-Based Medical Systems, pp. 95-100, 2013
26. Shih H.P. Two Algorithms for Maximum and Minimum Weighted Bipartite Matching. Department of Computer Science and Information Engineering, National Taiwan University, 2008
27. M. Ortega, M. G. Penedo, J. Rouco, N. Barreira, M. J. Carreira, “Retinal verification using a feature points-based biometric pattern”, EURASIP Journal on Advances in Signal Processing, vol. 2009, Article ID 235746, 13 pp., 2009
