It was recently proposed to use the human visual system’s ability to perform efficient photon counting in order to devise a new biometric authentication methodology. The relevant “fingerprint” is represented by the optical losses light suffers along different paths from the cornea to the retina. The “fingerprint” is accessed by interrogating a subject on perceiving or not weak light flashes, containing a few tens of photons, thus probing the subject’s visual system at the threshold of perception, a regime in which optical losses play a significant role. The name “quantum biometrics” derives from the fact that the photon statistics of the illuminating light, as well as the quantum efficiency of light detection by the rod cells, are central to the method. Here we elaborate further on this methodology, addressing several aspects such as aging effects on the method’s “fingerprint,” as well as its inter-subject variability. We then review recent progress towards the experimental realization of the method. Finally, we summarize a recent proposal to use quantum light sources, in particular a single-photon source, in order to enhance the performance of the authentication process. This further corroborates the “quantum” character of the methodology and alludes to the emerging field of quantum vision.
- photon statistics
- quantum light
- visual perception
1. Introduction

It was recently proposed to use the human visual system’s ability to perform photon counting in order to devise a new biometric authentication scheme, which was called “quantum.” The claim made in that proposal was that the scheme offers unbreakable security, not unlike the security offered by quantum cryptography [2, 3] against a potential impostor wishing to eavesdrop during the transmission of information. In our case, the “fingerprint” is a physical property of the visual system, including the eyeball, retina and brain. The “fingerprint” is registered and probed using weak-intensity light and the subject’s conscious perception thereof.
In this chapter we will further elaborate in intuitive terms on the workings of the quantum biometric methodology as originally outlined. To do so, we will summarize a recently proposed authentication algorithm, which is straightforward to understand compared to more elaborate algorithms discussed previously. We will then address some basic issues of the authentication methodology. One has to do with the very first registration of one’s “fingerprint.” Another is related to aging effects on this “fingerprint,” which have to do with visual acuity degrading with age. We will also address the central issue of the variability of the “fingerprint” among different individuals.
We will then review recent progress made towards the experimental realization of the quantum biometric methodology using laser light. Finally, we will summarize a recent proposal to use quantum light in order to enhance the method’s performance in terms of the time required to run the authentication algorithm, for given desired values of the false-negative and false-positive authentication probabilities.
2. The experiment of Hecht et al.

As a short introduction to the basics of our biometric authentication methodology, we first recapitulate the original experiment of Hecht et al., eloquently described by Bialek. In particular, Hecht et al. were the first to unambiguously demonstrate that rod cells, the scotopic photoreceptors in our retina, are efficient photon detectors. Additionally, they obtained the threshold in the number of detected photons for the perception of vision to take place. We denote this threshold by $K$, and from the work of Hecht et al. it follows that $K\approx 6$. We note that a recent psychophysical experiment performed by Vaziri and coworkers using a single-photon source for the stimulus light found evidence that even a single detected photon can be perceived with above-chance probability. We will here use the previously accepted value $K=6$, and defer to future work the analysis of our methodology’s dependence on the precise value of $K$, which is still a rather complex open problem.
In more detail, the three authors exposed their eyes to very weak-intensity light pulses, with the photon number within each pulse being so small that visual perception became a probabilistic event. Let $P_{\rm see}$ be the probability of seeing such a light pulse. An expression for $P_{\rm see}$ can be found as follows. Denote by $\bar{n}$ the mean number of photons within a light pulse of duration $\tau$ and intensity $I$ (measured in photons per unit time), that is, $\bar{n}=I\tau$. We know that coherent light has Poissonian photon statistics, that is, the probability to have exactly $n$ photons within such a pulse is $P(n)=e^{-\bar{n}}\bar{n}^n/n!$.
However, when the mean number of photons per pulse incident on the eyeball is $\bar{n}$, the mean number of photons per pulse incident on the retina is smaller by a factor smaller than unity, which describes the optical losses suffered by light along its path from the cornea to the retina. Moreover, of those photons incident on the retina, only a fraction will be detected by the illuminated rod cells, one reason being the finite quantum efficiency of the rod photoreceptors. All those factors can be lumped together in a single factor, call it $\alpha$, quantifying the various sources of optical loss. Then, the mean number per pulse of photons detected by the retina is $\alpha\bar{n}$.
Hence the probability that the number of photons detected by the illuminated patch of the retina is exactly $n$ is given by $P(n)=e^{-\alpha\bar{n}}(\alpha\bar{n})^n/n!$. If this number is at least equal to the detection threshold $K$, the perception of “seeing” a spot of light will take place. Hence,

$$P_{\rm see}=1-\sum_{n=0}^{K-1}e^{-\alpha\bar{n}}\frac{(\alpha\bar{n})^n}{n!}.\qquad(1)$$
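Eq. (1) is straightforward to evaluate numerically. The following sketch computes the probability of seeing; the loss factor $\alpha=0.1$ and the threshold $K=6$ are illustrative assumptions, not measured values.

```python
import math

def p_see(n_bar, alpha, K=6):
    """Probability of seeing, Eq. (1): at least K detected photons,
    the detected photon number being Poissonian with mean alpha * n_bar."""
    mu = alpha * n_bar
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(K))

# Illustrative scan with alpha = 0.1, K = 6
for n_bar in (20, 50, 100, 200):
    print(n_bar, round(p_see(n_bar, 0.1), 3))
```

With these assumed values, perception is nearly certain above a couple hundred photons at the cornea and very unlikely below a few tens, which is precisely the threshold regime the methodology exploits.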
As noted by Bialek, this formula expresses the (perhaps surprising) fact that the probabilistic nature of our visual perception, which is a systemic effect involving the retina and the brain, is fundamentally governed by the quantum statistical properties of the stimulus light.
To further understand the experiment of Hecht et al., we plot in Figure 1 examples of the dependence of the probability $P_{\rm see}$ on the mean number of photons per pulse incident on the cornea, $\bar{n}$. In Figure 1a we keep the threshold $K$ fixed and vary $\alpha$, whereas in Figure 1b we keep $\alpha$ constant and vary $K$. Both dependences are rather obvious to interpret. Regarding Figure 1a, it is evident that for a given perception threshold $K$, higher optical loss (small $\alpha$) requires a higher photon number for the perception of light to be highly probable. Similarly, regarding Figure 1b it is seen that for a given optical loss factor $\alpha$, the smaller the threshold $K$, the fewer photons are needed to obtain a given value of $P_{\rm see}$.
What is interesting to note is that the change of $\alpha$ (Figure 1a) leaves the overall shape of the functional dependence of $P_{\rm see}$ versus $\bar{n}$ pretty much invariant, that is, it roughly brings about a translation of the curve along the x-axis. In contrast, the change of $K$ qualitatively changes the shape of the dependence of $P_{\rm see}$ versus $\bar{n}$. Now, what Hecht et al. observed was that although each one of the three authors participating in the measurement produced a different dependence of $P_{\rm see}$ versus $\bar{n}$, all three curves could be coalesced by such a translation along the x-axis, and all could be fit with a common value of $K$. This is shown in Figure 2.2 of Bialek’s account.
The experimental apparatus used by Hecht et al. looks rather primitive from our modern technological perspective. Yet these authors managed to make a remarkable case: even though a subjective observable, the optical loss parameter $\alpha$, which changes among individuals, complicates the analysis of individuals’ responses to perceiving or not faint light pulses, two objective properties of the human visual system emerge. The first has to do with the wiring of the ganglion cells which communicate visual responses to the brain, a wiring that determines the perception threshold $K$. The second is what Hecht et al. nicely demonstrated: the retina’s photoreceptors are efficient single-photon detectors. This follows from the fact that the experimentally extracted number of detected photons is much smaller than the number of illuminated rod cells. It took several years until the photo-detection properties of rod cells were unraveled with modern quantum-optical techniques [9, 10, 11, 12, 13, 14, 15].
3. Quantum biometrics
Whereas the variability of the parameter $\alpha$ among individuals was a nuisance for Hecht et al., who wanted to demonstrate the single-photon-detection capability of rod cells, we now take this physiological capability for granted, and instead use the variability of $\alpha$ as a biometric quantifier. However, a single number cannot offer a useful biometric “fingerprint.” Hence the idea was to use a whole map of $\alpha$ values, the so-called $\alpha$-map, by considering several different paths of light towards the retina. Here we elaborate a bit more on this.
There are three ways to get several light paths to the retina. For all three we suppose that the stimulus light source consists of several distinct laser beams, which can illuminate the cornea at several different spots (as shown in Figure 2a), either one at a time or several at once. These laser beams are supposed to propagate in parallel from the light source to the cornea. Then, for an emmetropic individual (i.e., somebody not having any refractive errors) all these laser beams will be focused on the same spot on the retina. Instead, for a myopic individual these laser beams focus before the retina and thus will illuminate different spots on the retina, while for a hyperopic individual they focus behind the retina, and again illuminate different spots.
Now, as observed in Section 2, the factor $\alpha$ quantifies both the optical losses suffered by light along its path from the cornea to the retina, and the probability of photon detection once the photons reach the retina. Thus, for an emmetropic subject the difference in $\alpha$ between different laser beams stems only from the different optical losses along each path, while for myopic or hyperopic individuals the difference in $\alpha$ stems from both the different path losses and the different detection probabilities at the different illuminated retinal spots. For the authentication algorithm to work, we need the perception of different patterns of simultaneously illuminated spots on the retina. Therefore, and to have a common presentation in the following, we assume that our laser beams are focused on different spots on the retina. This can be optically designed to be the case even for emmetropic subjects.
In Figure 2 we show the crux of the matter: suppose we have an array of, for example, nine laser beams, patterned in a $3\times 3$ matrix (Figure 2a). Further suppose that these beams are all illuminated simultaneously for a given time interval (what we previously referred to as a laser pulse), and moreover, let us assume that the mean photon number per beam per pulse is very large. In such a scenario the probability of seeing is practically unity for every illuminated spot, so any subject would perceive the full pattern, irrespective of his or her $\alpha$-map. If instead the photon number is reduced towards the threshold of perception, whether each spot is perceived or not depends on the $\alpha$-value of the corresponding retinal spot, and the perceived pattern becomes subject-specific.
To describe the workings of the methodology in more detail, we first note that the prerequisite is that the $\alpha$-map of the subject to be authenticated by the biometric device has already been measured and stored. This is like taking a subject’s fingerprint and registering it in the relevant database. Now, for our biometric methodology this is the most time-consuming step, because the $\alpha$ values for several different light paths must be measured. However, apart from aging effects to be discussed in the following, this step is required only once.
3.1 First registration of the $\alpha$-map
The $\alpha$-value of a retinal spot can be estimated indirectly through Eq. (1) by measuring the fraction of times light is perceived when the spot is repeatedly illuminated. Precisely, suppose that a spot is illuminated $M$ times with coherent light pulses of mean photon number $\bar{n}$, and that light is perceived in $m$ of these times. The fraction $m/M$ is an experimental proxy for $P_{\rm see}$, and $\alpha$ can be estimated by solving the equation $m/M=1-\sum_{n=0}^{K-1}e^{-\alpha\bar{n}}(\alpha\bar{n})^n/n!$ for $\alpha$.
To avoid amplifying the error made in the estimation of $P_{\rm see}$ by $m/M$, one should choose $\bar{n}$ so that the slope of the right-hand side of Eq. (1) with respect to $\alpha$ is maximal. This is achieved when $\alpha\bar{n}=K-1$. Clearly, we cannot choose $\bar{n}$ based on this condition, since $\alpha$ is unknown. Nevertheless, this condition is equivalent to a particular value of $P_{\rm see}$. For $K=6$, this gives $P_{\rm see}\approx 0.4$. Hence, as a rule of thumb, a good practice to estimate the $\alpha$-value of a spot is to use such a laser intensity that light is perceived roughly 40% of the time.
Let us denote by $\hat{\alpha}$ the estimate of $\alpha$ derived in this way. An approximate 0.99-confidence interval for $\alpha$ can then be constructed from the binomial statistics of $m$. Since the statistical error scales as $1/\sqrt{M}$, to determine $\alpha$ with 99% confidence we would roughly need some hundreds of pulses for a 10% error tolerance, and about four times as many for a 5% error tolerance.
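As a sketch of this registration step, Eq. (1) can be inverted numerically by bisection, since $P_{\rm see}$ is monotonic in $\alpha$. The photon number and the true $\alpha$ in the usage example below are illustrative assumptions, not measured values.

```python
import math

def p_see(n_bar, alpha, K=6):
    """Probability of seeing, Eq. (1)."""
    mu = alpha * n_bar
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(K))

def estimate_alpha(n_bar, seen, trials, K=6):
    """Estimate alpha from `seen` perceptions in `trials` pulses by
    bisection on the monotonic map alpha -> p_see(n_bar, alpha)."""
    target = seen / trials
    lo, hi = 1e-6, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_see(n_bar, mid, K) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, a spot with true $\alpha=0.1$ illuminated by pulses of $\bar{n}=80$ photons is perceived about 81% of the time, and feeding that observed fraction back through `estimate_alpha` recovers $\alpha\approx 0.1$.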
This number of interrogations is clearly impractical. The first authentication algorithm proposed followed a similar route of estimating $\alpha$, making the process even more impractical, since several such time-consuming series of many interrogations would be needed: one for the registration, and one each time the subject needs to be authenticated.
This observation motivated authentication algorithms [1, 4] that, rather than using the precise $\alpha$-values of retinal spots, only require that the $\alpha$-value of the illuminated spots be above a high threshold or below a low threshold. For such algorithms, one only needs to construct a coarse $\alpha$-map, in which retinal spots are classified into high-, low-, or intermediate-$\alpha$ spots. As will be shown in a forthcoming work, a much smaller number of interrogations, typically between 10 and 40, is sufficient to classify a retinal spot. Having done so, that is, knowing the subject’s $\alpha$-map, we can then proceed with elaborating on the authentication process. Before doing so, we make some general comments.
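The coarse classification can be sketched as a simple threshold rule on the observed seen-fraction. The cutoffs of 0.75 and 0.25 below are hypothetical choices for illustration, not the values used in the actual protocols.

```python
def classify_spot(seen, trials, hi_cut=0.75, lo_cut=0.25):
    """Coarse classification of a retinal spot from `trials` yes/no answers.
    A high seen-fraction indicates a high-alpha spot; cutoffs are illustrative."""
    f = seen / trials
    if f >= hi_cut:
        return "high"
    if f <= lo_cut:
        return "low"
    return "intermediate"
```

With, say, 20 interrogations per spot, 18 “seen” answers would classify the spot as high-α, 2 as low-α, and anything in between as intermediate, to be excluded from the protocol.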
3.2 Detailed description
When the subject wishes to be authenticated, for example, in order to enter a high-security facility, the biometric device must implement a measurement protocol in order to positively authenticate the subject. As already apparent, we have restricted the discussion to authentication rather than identification. That is, we assume that when asking to be authenticated, the subject announces who he or she is. Then the device must make sure that the subject indeed is who he or she claims to be. So henceforth we suppose the biometric device is “aware” of the subject’s $\alpha$-map.
The result of the authentication protocol is either positive or negative, and two central quantifiers of its performance are the false-negative and false-positive probabilities. The former is the probability that a subject truly claiming to be who he or she is, is nevertheless negatively authenticated; the latter is the probability that an impostor is positively authenticated.
Let us call Alice the subject who appears and wishes to be positively authenticated. Eve will be an impostor who maliciously claims to be Alice. Now, the biometric device knows Alice’s high- and low-$\alpha$ spots. Hence if the former are illuminated, Alice is expected to perceive light. In contrast, if the low-$\alpha$ spots are illuminated, Alice is expected not to perceive light.
Now, we will suppose that Eve is not aware of Alice’s $\alpha$-map. Is this a fair assumption? Indeed, we can first rightfully suppose that Eve does not have access to the $\alpha$-maps stored in the biometric device; without such an assumption, pretty much all biometric identification methods are vulnerable to counterfeiting. Can Eve otherwise obtain Alice’s $\alpha$-map? The only way this seems feasible is by use of force, that is, Eve purchases the biometric device, forcefully has Alice undergo a measurement of her $\alpha$-map, and moreover has found a way to enforce Alice’s truthful answers to the device’s interrogations on light perception. However, we can again safely assume that use of force is not something any biometric device can cope with. For that matter, even quantum cryptography would be irrelevant as a technology under such an attack: if somebody enters Bob’s office while Bob is securely transmitting information to Alice via a quantum communication channel, this somebody could forcefully obtain the information Bob wants to transmit, so the quantum communication channel would obviously be of little help. So it is rightful to assume that the biometric “fingerprint” cannot be stolen from the device where it is stored, and use of force is not considered when comparing the performance of biometric authentication methodologies.
However, what should be allowed as a scenario is for the impostor to have technology that would allow her to estimate the “fingerprint” under consideration by physical means, which require neither access to the fingerprint database nor use of force. For example, in the case of face recognition, Eve could take an image of Alice’s face without Alice noticing (e.g., from a distance using a high-resolution camera) and then use this image to construct a face mask. This scenario is not prevented by physical laws. Nor is there any physical law preventing the face recognition test from being bypassed by an artificial face mask. So in comparing the security of various biometric methodologies, one should study what is in principle possible in terms of bypassing the biometric device, given the laws of physics. Based on current quantum technology, it is inconceivable how Eve would be able to infer Alice’s $\alpha$-map by physical means, although some comments were made along this line of thought in the original proposal.
In other words, it seems that even in principle, that is, based on the laws of physics and in particular the physics of quantum measurements, Eve cannot physically obtain Alice’s $\alpha$-map. This is one main advantage of this biometric methodology. In any case, the only option left to Eve when impersonating Alice is to second-guess the biometric device’s interrogations. Is this possible? Can Eve know whether the device is illuminating a low- or a high-$\alpha$ spot of Alice, and thus tune her responses accordingly? The answer is negative. The spots being illuminated are randomly chosen by the device, and as far as Eve is concerned, they could be of any kind.
A crucial detail is that the device illuminates every spot, no matter of what kind it is, with a light pulse of the same mean photon number, so that nothing in the stimulus itself betrays the kind of spot being illuminated.
We will now elucidate all of the above using a specific, recently proposed authentication protocol.
3.3 Authentication protocol
This protocol is a variant that is intuitively simpler to understand than the previously discussed protocols. We assume that the biometric device simultaneously illuminates $N$ different retinal spots, some of which are low-$\alpha$ spots, with the rest being high-$\alpha$ spots. The subject taking the test is then questioned on how many spots she perceived. Let $G$ be the random variable quantifying how many high-$\alpha$ spots were illuminated. Further, let $R$ be a random variable quantifying the number of bright spots perceived by the interrogated subject. We define as correct a response for which $R=G$. As will be shown in the following, a single interrogation is not enough to obtain the desired performance metrics, therefore multiple interrogations will be used.
Now the probability that an impostor called Eve, pretending to be Alice, correctly responds to such an interrogation is

$$p_E=\frac{1}{N+1},\qquad(2)$$

because Eve is not aware of what kind of spots are being illuminated, and $G$ can take any value between 0 and $N$, therefore Eve’s chance to guess this number correctly is $1/(N+1)$. In contrast, Alice has a much larger probability to successfully respond. To fail, Alice should perceive a low-$\alpha$ spot, or not perceive a high-$\alpha$ spot, with these two errors not canceling out. It turns out that the probability of Alice’s successful response is

$$p_A\approx(1-\epsilon)^N,\qquad(3)$$

where $\epsilon$ is an effective single-spot error probability determined by $\delta_h$, the probability that Alice fails to perceive a stimulus on a high-$\alpha$ spot, and $\delta_l$, the probability that Alice does perceive a stimulus on a low-$\alpha$ spot.
Now, as previously mentioned, one interrogation is not enough to achieve adequate performance with respect to the false-positive and false-negative probabilities. Therefore a number of sequential interrogations is used. This number is actually a random variable, coming about as follows. We define an integer success variable $s$, initialized to $s=0$ at the beginning of the authentication process. Then, if the subject responds correctly, $s$ is increased by 1, whereas it is decreased by 1 if the subject responds wrongly. Positive authentication is established when $s$ reaches a predefined positive value $s_+$, whereas negative authentication is established when $s$ reaches a predefined negative value $s_-$. The value $s_+$ is determined by the required false-positive probability, and the value $s_-$ by the desired false-negative probability. Thus, the random variable $s$ performs a random walk. If the interrogated subject is indeed Alice, then the probability for a positive step of $s$ is given by Eq. (3), and correspondingly, the probability for a negative step is $1-p_A$. Similarly, if the interrogated subject is Eve (who claims to be Alice), then the respective probabilities for a positive and a negative step are given by Eq. (2) and $1-p_E$.
For relatively small values of the parameter $\epsilon$, it is $p_A>1/2$, and Alice’s random walk drifts towards positive $s$. For a number of illuminated spots $N\geq 2$ it is $p_E<1/2$, therefore Eve’s random walk drifts towards negative $s$. The smaller the desired false-positive probability, the larger $s_+$ will be, and the more difficult it will be for Eve’s success parameter to reach the positive authentication value $s_+$. Similarly, the smaller the desired false-negative probability, the more negative $s_-$ will be, and the more difficult it will be for Alice to fail the test. Incidentally, the highest priority for the interrogation is that an impostor fail the test, that is, the highest priority is the smallness of the false-positive probability. The smallness of the false-negative probability is also important, but mostly of practical interest. This is because in the unfortunate circumstance that the true subject, Alice, fails the test, she would have to retake it. This will happen the more infrequently, the smaller the false-negative probability is.
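The random walk described above can be simulated directly. The barrier values $s_+=8$, $s_-=-4$ and the per-interrogation success probabilities below are illustrative assumptions: Alice is given $p_A=0.9$, while Eve guesses with $p_E=1/(N+1)=0.1$ for $N=9$ illuminated spots.

```python
import random

def authenticate(p_correct, s_plus=8, s_minus=-4, rng=None):
    """Random-walk authentication: s moves +1 on a correct response and
    -1 on a wrong one; accept on reaching s_plus, reject at s_minus."""
    rng = rng or random.Random()
    s = 0
    while s_minus < s < s_plus:
        s += 1 if rng.random() < p_correct else -1
    return s == s_plus

rng = random.Random(7)
# Alice: assumed p_A = 0.9; Eve: p_E = 1/(N+1) = 0.1 for N = 9 spots
alice_rate = sum(authenticate(0.9, rng=rng) for _ in range(400)) / 400
eve_rate = sum(authenticate(0.1, rng=rng) for _ in range(400)) / 400
```

This is the classic gambler’s-ruin situation: with these assumed probabilities, Alice’s upward-drifting walk almost always reaches $s_+$, while Eve’s downward-drifting walk is absorbed at $s_-$ with overwhelming probability.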
3.4 Optimal photon number
The reader might have wondered how the photon number per pulse per illuminated pixel is chosen. This is easily understood by considering the fact that the probability of Alice’s successful response, $p_A$, is higher the smaller the parameter $\epsilon$ is. Using the probability-of-seeing expression of Eq. (1), one can calculate $\epsilon$ as a function of the incident photon number $\bar{n}$. It should be clear why there is a minimum in such a dependence. For very large $\bar{n}$, the probability of seeing tends to 1, therefore Alice will for sure perceive illuminated low-$\alpha$ spots, and $\delta_l$ will tend to 1. Similarly, for too small $\bar{n}$, Alice will have a hard time perceiving even the illuminated high-$\alpha$ spots, therefore $\delta_h$ will tend to 1. In either extreme, $\epsilon$ will tend to 1, and it becomes minimal for some intermediate value of $\bar{n}$, which is about 60–80 photons per pulse.
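This trade-off can be reproduced with Eq. (1). The values $\alpha_{\rm high}=0.15$ and $\alpha_{\rm low}=0.05$ are illustrative assumptions, and the sum of the two single-spot error probabilities is used as a simple stand-in for the exact definition of $\epsilon$.

```python
import math

def p_see(n_bar, alpha, K=6):
    """Probability of seeing, Eq. (1)."""
    mu = alpha * n_bar
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n) for n in range(K))

def epsilon(n_bar, a_high=0.15, a_low=0.05):
    """Illustrative single-spot error: missing a high-alpha spot
    plus falsely perceiving a low-alpha spot."""
    return (1 - p_see(n_bar, a_high)) + p_see(n_bar, a_low)

# Scan the incident photon number; the error is minimal at intermediate n_bar
best_n = min(range(20, 201), key=epsilon)
```

With these assumed $\alpha$ values the minimum indeed falls in the several-tens-of-photons range quoted above, while the error climbs towards 1 at both extremes.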
4. Aging effects
One question recurring in presentations of the above scheme concerns the effect of aging: it is reasonable to assume that the $\alpha$-map of a subject will change with time, as visual acuity does. Thus it is expected that the $\alpha$ values will become smaller with the subject’s age. Would this affect the authentication scheme? To address this question, we will use data from visual perimetry, in particular, differential threshold perimetry. This is a technique used to measure the sensitivity of one’s visual field and to construct the so-called hill of vision. The technique is illustrated in Figure 3.
The subject fixates at the center of a half-sphere, the inner surface of which has a light background illumination (Figure 3a). Then, several spots are illuminated with varying intensity (on top of the background), and the subject reports whether he or she perceives the illuminated spot (Figure 3b), thus determining the threshold of perception. The position of each spot is defined by two angles, one accounting for the temporal vs. nasal position, and the other for the superior vs. inferior position (Figure 3c). The measured threshold as a function of these two angles defines the hill of vision (Figure 3d).
Now, as seen in Figure 4, which depicts perimetric data, the visual field sensitivity indeed appears to degrade with age. We will use such data to comment on how age can affect the $\alpha$-map used as our biometric “fingerprint.” It should first be noted, however, that such visual-field data do not exactly correspond to our case, because they are not fully scotopic. As the literature on scotopic differential perimetry is more sparse, we will use the aforementioned data on differential perimetry as indicative. In any case, there are two ways one can counter the effect of aging. A straightforward strategy is to periodically re-register the $\alpha$-map of an individual, for example, every 10 years. Another strategy would be to slowly increase with one’s age the optimal photon number per illuminated pixel per pulse, at the same rate as the measured downward rate of Figure 4. In either case, it appears that aging effects should not pose a problem for the long-term repeatability of the authentication process.
5. Variability of the $\alpha$-map
Another crucial issue is the variability of the $\alpha$-map. There are two kinds of variability of interest: intra-subject variability and inter-subject variability. By the former we mean the variability of one’s $\alpha$ values for different paths (spots) towards (on) the retina. We clearly need this variability in order to be able to define the $\alpha$-map in the first place, including high-, low- and intermediate-$\alpha$ values. The latter is the variability of the $\alpha$-map among different subjects, in particular the variability of the $\alpha$ values among individuals for geometrically similar spots on the retina. In Figure 5 we again show perimetric data accounting for both types of variability.
Figure 5a depicts the variability of the differential threshold of one particular individual for various viewing angles in the central field of view. The observed variability, from 2 dB up to 6 dB, is enough to provide for the definition of an $\alpha$-map usable for our biometric methodology. Figure 5b shows the inter-subject variability, which again ranges from 2 to 6 dB, enough to support our protocols.
Finally, related to the inter-subject variability is the question of how many different subjects our methodology would be able to authenticate without the possibility of a random coincidence of one’s $\alpha$-map with somebody else’s. In the next section we will discuss recent experimental progress towards realizing the quantum biometric methodology. There it will be shown that the laser stimulus we developed provides for a laser beam consisting of a pattern of $5\times 5$ pixels, so 25 pixels in total. Assuming that (i) we classify each pixel with three possibilities, that is, low-, intermediate- and high-$\alpha$, (ii) we use only low- and high-$\alpha$ values for our authentication protocols, (iii) each of the three possibilities for the $\alpha$-values occurs with the same probability of 1/3, and (iv) the distribution of each kind of $\alpha$-value is random across the retina, we can estimate the number of possible users of such a biometric device as $3^{25}\approx 8\times 10^{11}$. With 50 pixels this number becomes $3^{50}\approx 7\times 10^{23}$.
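The capacity estimate follows directly from assumptions (i)–(iv): with three equally likely and independent classes per pixel, the number of distinct coarse α-maps is 3 raised to the number of pixels.

```python
# Distinct coarse alpha-maps under assumptions (i)-(iv):
# three equally likely classes per pixel, independent across pixels.
n_maps_25 = 3 ** 25   # 5 x 5 pixel grid, ~8.5e11 maps
n_maps_50 = 3 ** 50   # doubling the pixel count squares the capacity
```

Note that doubling the number of pixels squares, rather than doubles, the number of distinguishable maps, which is why a modest increase in stimulus resolution dramatically increases the device’s user capacity.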
6. Spatially selective laser light stimulus
The stimulus light source required to realize an authentication algorithm such as the one described above was recently reported. It consists of two laser beams, one at 532 nm and one at 850 nm, which are combined in a fiber into a single beam. As the laser power at the exit of the fiber combiner fluctuates, a feedback loop was used to stabilize the power of the 532 nm beam, which is used as stimulus light. The infrared light is used for pointing, as will be described shortly. In order to create different patterns of pixels across the laser beam’s cross section, the laser beam was propagated through a liquid crystal display (LCD) in a multi-pass configuration. The activated dots of the LCD produced an optical loss in the laser beam, corresponding to dark pixels, whereas the inactivated dots produced the illuminated pixels. In order for the contrast between illuminated and dark pixels to be acceptable, the beam went through the same configuration of LCD dots five times, as shown in Figure 6a. Five passes were chosen because the relative optical loss obtained from one pass between an activated and an inactivated LCD dot is 0.35. Now, since we need photon numbers up to 200 photons per illuminated pixel per pulse in order to scan the probability-of-seeing curve, the number of photons leaking through the activated LCD dots should be negligible compared to 200. Since $0.35^5\approx 5\times 10^{-3}$, five passes provide for a photon background two orders of magnitude smaller than the stimulus photons. In Figure 6b and c we show examples of LCD dot patterns that produce various patterns of pixels across the laser beam. For example, a single pixel is created by a single inactivated dot in the LCD (Figure 6b), while the dot arrangement for a grid of pixels is shown in Figure 6c. For the moment we can illuminate any pixel arrangement in a $5\times 5$ grid of pixels, each about 1 mm wide.
In Figure 6d we show that the photon statistics of the stimulus light at 532 nm are indeed Poissonian. In particular, this is accomplished by the aforementioned intensity feedback, without which the photon number distribution is wider than Poissonian. In Figure 6e we show that for photon numbers up to at least 200 the variance of the photon number is equal to the mean photon number per pulse, hence our stimulus light exhibits Poissonian statistics for all photon numbers of interest for the biometric protocol. It should also be noted that the control over the number of photons, that is, the ability to change the mean number of photons per illuminated pixel per pulse, resides in the feedback system used to stabilize the stimulus light. By changing a voltage within the feedback system, we can scan the number of photons, for example, from 20 to 200 photons.
Finally, we discuss the role of the infrared light, which is used for pointing, that is, for providing information on the geometry of incidence of the stimulus light on the cornea. As can be seen in Figure 6a, the laser beam illuminates the eye through a beam splitter, so that the camera sitting behind the beam splitter can image the subject’s eye. Moreover, just before the eye we place a glass plate, so that part of the laser beam is reflected backwards into the camera, since the reflections off the spherical surface of the eye would miss the camera. However, the green stimulus light is too weak (at most 200 photons per illuminated pixel per pulse) for its reflection to be detected by the camera. This is where the infrared light comes in: it is not perceived by the visual system, thus its intensity can be high enough for its reflection to be visible in the camera. This is what is seen in Figure 6f–h, where we depict various examples of patterns of pixels incident on the eye. The large bright spot in the top left part of each image is the reflection of an infrared lamp providing ambient light for the camera. The other spots are the infrared reflections of the illuminated pixels of the laser beam. Due to the spatial overlap of the stimulus and the infrared light, these infrared reflections convey the exact position of the stimulating pixels at 532 nm.
7. Quantum advantage with quantum light
One might wonder if there is some advantage to be gained by using quantum light sources for the stimulus light instead of laser light. Indeed, it was theoretically shown that a single-photon source, for example, a heralded single-photon source [18, 19, 20, 21], can lead to a quantum advantage. In particular, it was shown that the total interrogation time is reduced by using single photons. The advantage comes about because the narrower distribution of the incident photon number affects the probabilities $\delta_h$ and $\delta_l$ introduced in Section 3, which then reduce the value of the parameter $\epsilon$. This leads to an increase of the probability, $p_A$, that Alice responds correctly in a single interrogation. Finally, this increase in $p_A$ leads to a smaller number of interrogations required to achieve the same false-negative and false-positive probabilities.
The fact that we can use a single-photon source to produce a number of, for example, 200 photons in a light pulse stimulating the visual system rests on the rather large temporal summation window of the visual system, which is the time span within which the visual system cannot temporally resolve the perceived light. Were that not the case, one would need Fock states with up to 200 photons, which so far cannot be produced. In contrast, a heralded single-photon source working at a 1 kHz rate would do.
It is interesting to note the size of the quantum advantage obtained: the required number of interrogations is reduced by slightly more than 10% compared to laser light. At first sight this figure does not seem significant, the main reason being that the statistics of the detected photons differ only slightly between quantum light and laser light, because of the high optical losses suffered by the light. It is these very losses that we take advantage of to define the fingerprint of this method. Since these losses are rather large, the photon statistics of quantum light are “degraded” towards Poissonian statistics. Yet in [4] we provided only the first such proof of principle. It is conceivable that different authentication protocols could result in a larger advantage, especially because the visual system is highly nonlinear. This nonlinearity could be exploited in different ways to amplify the small difference in the statistics of detected photons between quantum light and laser light.
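The “degradation” of quantum statistics by losses can be quantified, for instance, by the total variation distance between the detected-photon distributions for Fock-state and laser input of equal mean photon number. The sketch below (transmission values are illustrative only) shows this distance shrinking as the transmission decreases, i.e. as losses grow:

```python
from math import comb, exp

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def pois_pmf(lam, k):
    pmf = exp(-lam)
    for j in range(1, k + 1):
        pmf *= lam / j
    return pmf

def tv_distance(n, eta):
    """Total variation distance between detected-photon distributions of an
    n-photon Fock state (Binomial(n, eta)) and laser light of mean photon
    number n (Poisson(n*eta)) after transmission eta. The Poisson mass
    beyond k = n is negligible for the values used here."""
    lam = n * eta
    return 0.5 * sum(abs(binom_pmf(n, eta, k) - pois_pmf(lam, k))
                     for k in range(n + 1))

for eta in (0.5, 0.1, 0.02):
    print(f"eta = {eta}: TV distance = {tv_distance(200, eta):.4f}")
```

At transmissions of a few percent the two distributions are nearly indistinguishable, which is the quantitative content of the statement that losses push quantum light towards Poissonian statistics.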
8. Conclusions
We have elaborated on a new biometric authentication method based on the human visual system’s ability to perform photon counting. The method works with weak light, so that visual perception takes place close to the visual threshold. In this regime, the optical losses suffered by light propagating from the cornea to the retina are crucial in determining whether a weak light flash is perceived. These losses form the biometric “fingerprint” of our authentication methodology. We have described an intuitive authentication algorithm based on illuminating a number of retinal spots associated with either high or low optical losses, and used this algorithm to discuss basic features of our methodology, such as aging effects and the fingerprint’s inter-subject and intra-subject variability.
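As a concrete illustration of such an interrogation protocol, consider a toy version in which each flash targets either a low-loss spot (expected answer “seen”) or a high-loss spot (expected answer “not seen”), and the subject is accepted when enough answers are correct; an impostor who lacks the loss map can only guess. All numbers below are hypothetical, not those of the actual protocol:

```python
from math import comb

def binom_tail(n, p, t):
    """P(X >= t) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

def error_rates(n_interr, accept_at, p_correct_user):
    """Toy protocol: the subject passes if at least accept_at of n_interr
    interrogations are answered correctly. A genuine subject answers
    correctly with probability p_correct_user; an impostor guesses (p = 0.5)."""
    false_reject = 1 - binom_tail(n_interr, p_correct_user, accept_at)
    false_accept = binom_tail(n_interr, 0.5, accept_at)
    return false_reject, false_accept

# Hypothetical numbers: 30 interrogations, 24 correct answers required,
# genuine subject correct 90% of the time.
fr, fa = error_rates(n_interr=30, accept_at=24, p_correct_user=0.9)
print(f"false reject: {fr:.4f}, false accept: {fa:.6f}")
```

Even this crude threshold rule drives the false-accept probability well below the false-reject probability after a few tens of interrogations, which is the qualitative behavior the methodology relies on.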
We then reviewed recent experimental progress towards developing a laser-light stimulus source that provides light patterns with the properties required for realizing the authentication protocols. Finally, we presented recent work exploring a possible quantum advantage that could be obtained by instead using a quantum light source, such as a heralded single-photon source.
From a broader perspective, this work further demonstrates the scientific potential of the emerging field of quantum vision, that is, the possibilities for exploring the human and animal visual system using modern photonic and quantum-optical tools [23, 24, 25, 26, 27, 28].
Acknowledgments
IK and ML acknowledge co-financing of this work by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call “RESEARCH-CREATE-INNOVATE,” with project title “Photonic analysis of the retina’s biometric photo-absorption” (project code: T1EDK-04921). OEM acknowledges financial support from the Scientific and Technological Research Council of Turkey (TÜBİTAK), grant No. 120F200.
References
[1] Loulakis M, Blatsios G, Vrettou CS, Kominis IK. Quantum biometrics with retinal photon counting. Physical Review Applied. 2017; 8:044012. DOI: 10.1103/PhysRevApplied.8.044012
[2] Gisin N, Ribordy G, Tittel W, Zbinden H. Quantum cryptography. Reviews of Modern Physics. 2002; 74:145. DOI: 10.1103/RevModPhys.74.145
[3] Pirandola S et al. Advances in quantum cryptography. Advances in Optics and Photonics. 2019; 12:1012. DOI: 10.1364/AOP.361502
[4] Kominis IK, Loulakis M. Quantum advantage in biometric authentication with single photons. Journal of Applied Physics. 2022; 131:084401. DOI: 10.1063/5.0080942
[5] Margaritakis A, Anyfantaki G, Mouloudakis K, Gratsea A, Kominis IK. Spatially-selective and quantum-statistics-limited light stimulus for retina biometrics and pupillometry. Applied Physics B. 2020; 126:99. DOI: 10.1007/s00340-020-07438-z
[6] Hecht S, Shlaer S, Pirenne MH. Energy, quanta and vision. Journal of General Physiology. 1942; 25:819. DOI: 10.1085/jgp.25.6.819
[7] Bialek W. Biophysics: Searching for Principles. Princeton: Princeton University Press; 2012. p. 656
[8] Tinsley JN, Molodtsov MI, Prevedel R, Wartmann D, Espigulé-Pons J, Lauwers M, et al. Direct detection of a single photon by humans. Nature Communications. 2016; 7:12172. DOI: 10.1038/ncomms12172
[9] Baylor DA, Lamb TD, Yau KW. Responses of retinal rods to single photons. Journal of Physiology. 1979; 288:613. DOI: 10.1113/jphysiol.1979.sp012716
[10] Rieke F, Baylor DA. Origin of reproducibility in the responses of retinal rods to single photons. Biophysical Journal. 1998; 75:1836. DOI: 10.1016/S0006-3495(98)77625-8
[11] Rieke F, Baylor DA. Single-photon detection by rod cells of the retina. Reviews of Modern Physics. 1998; 70:1027. DOI: 10.1103/RevModPhys.70.1027
[12] Sim N, Bessarab D, Jones CM, Krivitsky LA. Method of targeted delivery of laser beam to isolated retinal rods by fiber optics. Biomedical Optics Express. 2011; 2:2926. DOI: 10.1364/BOE.2.002926
[13] Sim N, Cheng MF, Bessarab D, Jones CM, Krivitsky LA. Measurement of photon statistics with live photoreceptor cells. Physical Review Letters. 2012; 109:113601. DOI: 10.1103/PhysRevLett.109.113601
[14] Phan NM, Cheng MF, Bessarab DA, Krivitsky LA. Interaction of fixed number of photons with retinal rod cells. Physical Review Letters. 2014; 112:213601. DOI: 10.1103/PhysRevLett.112.213601
[15] Nelson PC. Old and new results about single-photon sensitivity in human vision. Physical Biology. 2016; 13:025001. DOI: 10.1088/1478-3975/13/2/025001
[16] Racette L, Fisher M, Bebie H, Holló G, Johnson CA, Matsumoto C. Visual Field Digest. Switzerland: Haag-Streit AG; 2019
[17] Heijl A, Lindgren G, Olsson J. Normal variability of static perimetric threshold values across the central visual field. Archives of Ophthalmology. 1987; 105:1544. DOI: 10.1001/archopht.1987.01060110090039
[18] Oxborrow M, Sinclair AG. Single-photon sources. Contemporary Physics. 2005; 46:173. DOI: 10.1080/00107510512331337936
[19] Buller GS, Collins RJ. Single-photon generation and detection. Measurement Science and Technology. 2010; 21:012002. DOI: 10.1088/0957-0233/21/1/012002
[20] Eisaman MD, Fan J, Migdall A, Polyakov SV. Invited review article: Single-photon sources and detectors. Review of Scientific Instruments. 2011; 82:071101. DOI: 10.1063/1.3610677
[21] Meyer-Scott E, Silberhorn C, Migdall A. Single-photon sources: Approaching the ideal through multiplexing. Review of Scientific Instruments. 2020; 91:041101. DOI: 10.1063/5.0003320
[22] Holmes R, Victora M, Wang RF, Kwiat PG. Measuring temporal summation in visual detection with a single-photon source. Vision Research. 2017; 140:33. DOI: 10.1016/j.visres.2017.06.011
[23] Brunner N, Branciard C, Gisin N. Possible entanglement detection with the naked eye. Physical Review A. 2008; 78:052110. DOI: 10.1103/PhysRevA.78.052110
[24] Lucas F, Hornberger K. Incoherent control of the retinal isomerization in rhodopsin. Physical Review Letters. 2014; 113:058301. DOI: 10.1103/PhysRevLett.113.058301
[25] Pizzi R, Wang R, Rossetti D. Human visual system as a double-slit single photon interference sensor: A comparison between modellistic and biophysical tests. PLoS One. 2016; 11:e0147464. DOI: 10.1371/journal.pone.0147464
[26] Dodel A, Mayinda A, Oudot E, Martin A, Sekatski P, Bancal JD, et al. Proposal for witnessing non-classical light with the human eye. Quantum. 2017; 1:7. DOI: 10.22331/q-2017-04-25-7
[27] Sarenac D, Kapahi C, Silva AE, Cory DG, Taminiau I, Thompson B, et al. Direct discrimination of structured light by humans. Proceedings of the National Academy of Sciences USA. 2020; 117:14682. DOI: 10.1073/pnas.1920226117
[28] Pedram A, Müstecaplıoğlu ÖE, Kominis IK. Using quantum states of light to probe the retinal network. arXiv:2111.03285.