These days, flying insects are seen as genuinely agile micro air vehicles fitted with smart sensors and parsimonious in their use of brain resources. They are able to navigate visually in unpredictable, GPS-denied environments. Understanding how such tiny animals work would help engineers solve several issues relating to drone miniaturization and navigation inside buildings. To turn a ~1-kg drone into a robot, miniaturized conventional avionics can be employed; however, this comes at the cost of flight autonomy. By contrast, turning a drone with a mass between ~1 g (or less) and ~500 g into a robot requires an innovative approach inspired by flying insects, with regard both to their flapping-wing propulsion system and to their sensory system, based mainly on motion vision, in order to avoid obstacles in three dimensions or to navigate on the basis of visual cues. This chapter provides a snapshot of the current state of the art in the field of bioinspired optic flow sensors and optic flow-based direct feedback loops applied to micro air vehicles flying inside buildings.
- optic flow
- sense and avoid system
- micro air vehicle (MAV)
- unmanned aerial vehicle (UAV)
- bioinspired robotics
1.1. The biorobotic approach
Fifty years ago, Karl von Frisch observed that foraging bees fly to a distance somewhat greater than 13 km from their hive in search of food sources, but honeybees could not be trained to collect a reward beyond this limit; it thus corresponded to their maximum foraging distance. The circle whose center is the hive and whose radius is the maximum foraging distance represents a huge surface area of 530 km2. Even knowing that the average length of a honeybee is about 13 mm, that the volume of its brain is less than 1 mm3 and contains around 960,000 neurons [2, 3], and that each worker honeybee’s compound eye contains ~5500 facets, each comprising nine photosensitive cells (i.e., 99,000 photosensitive cells for the whole worker bee’s visual system), it is still unknown which visual cues honeybees use during their journeys, or how these cues are used in flight to recognize a location and to navigate within a space whose dimensions are a million times larger than their bodies. Karl von Frisch was awarded the Nobel Prize in Physiology or Medicine in 1973 for his scientific achievement in describing the honeybees’ “waggle dance,” which bees use to communicate both the distance and the azimuthal orientation of a profitable nectar source. The figure-of-eight geometry of the waggle dance codes the position of a nectar source: the duration of the waggle phase is closely correlated with the distance to the nectar source, the honeybee’s odometer appears to be driven by motion vision, and the orientation of the figure of eight is highly correlated with the azimuthal orientation of the nectar source. In flight, honeybees use a kind of “solar compass” based on polarized ultraviolet light [7, 8, 9], rather than a “magnetic compass,” to maintain their heading toward the nectar source or their hive. Karl von Frisch therefore concluded that bees “recruited” by this dance use the information encoded in it to guide them directly to the remote food source.
To better understand the honeybee’s recruitment process, the Biorobotics Lab at the Freie Universität Berlin has developed a robotic honeybee mimicking the “waggle dance” using a biorobotic approach.
While the biological substrate has not yet been fully identified, the biorobotic approach is particularly useful in both neuroscience and robotics [13, 14, 15, 16, 17, 18, 19, 20], because the robotic model can be tested under experimental conditions similar to those of ethological experiments, and it can suggest new biological hypotheses (Figure 1). From these interactions between ethological experiments, computational models, and robotics (Figure 1), uncertainties can be removed by considering the minimum requirements for performing a given navigational task (e.g., [21, 22, 23, 24, 25]). Insect-sized micro air vehicles (MAVs) are increasingly becoming a reality [26, 27, 28, 29, 30, 31], and in the future they will have to be fitted with sensors and flight control devices enabling them to perform all kinds of aerial maneuvers inside buildings, including takeoff; floor, ceiling, and wall avoidance; tunnel-following; and landing.
1.2. What is optic flow?
The optic flow perceived by an agent (an animal, a robot, or a human) depends heavily on the structure of the environment [32, 33, 34, 35, 36]. Optic flow can be defined as the vector field of the apparent angular velocities of objects, surfaces, and edges in a visual scene, caused by the relative motion between the agent and the scene (Figure 2). Optic flow (Eq. (1)) is the sum of two components, a translational optic flow ω_T and a rotational optic flow ω_R:

ω = ω_T + ω_R     (1)
Flying insects like hymenopterans stabilize their heads by compensating for any body rotation. Accordingly, a robot’s visual system can be assumed to be perfectly stabilized in space, canceling all rotations due to body pitch and roll with respect to the inertial frame. Consequently, the robot’s visual system experiences a purely translational optic flow (ω = ω_T). The translational optic flow (expressed in rad/s) can be defined as follows:

ω_T = −(v − (v·d)d) / D     (2)

where d is a unit vector describing the viewing direction, v is the translational velocity vector, and D is the distance from the object seen by the photosensors.
However, in the horizontal plane, the magnitude of the translational optic flow, which describes the front-to-back motion occurring when the agent moves forward, depends only on the ratio between the relative linear speed v and the distance D from the objects providing optical contrast in the environment (the walls in Figure 2), and on the azimuth angle Φ between the gaze direction and the speed vector (Eq. (3)):

ω = (v / D) · sin(Φ)     (3)
Translational optic flow (Eq. (3)) is particularly appropriate for short-range navigation because it depends on the ratio between (i) the relative linear speed of an object in the scene with respect to the agent and (ii) the distance from obstacles in the surrounding environment: this visual angular speed cue does not require either speed or distance measurement (Eq. (3)).
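The dependence of Eq. (3) on nothing but the speed/distance ratio and the gaze azimuth can be sketched in a few lines of Python (a minimal illustration; the function name and sample values are ours, not from the chapter):

```python
import math

def translational_optic_flow(speed, distance, azimuth_deg):
    """Magnitude of the translational optic flow (rad/s), Eq. (3):
    omega = (v / D) * sin(Phi), with v the relative linear speed,
    D the distance to the contrasting object, and Phi the azimuth
    angle between the gaze direction and the speed vector."""
    return (speed / distance) * math.sin(math.radians(azimuth_deg))

# Looking sideways (90 deg) while flying at 1 m/s, 0.5 m from a wall:
omega = translational_optic_flow(1.0, 0.5, 90.0)  # 2 rad/s (~115 deg/s)
```

Neither the speed nor the distance appears alone in the output, only their ratio scaled by sin(Φ), which is why this cue is available without any speed or distance measurement.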
1.3. Why use optic flow in robotics?
Flying robots are today capable of accurately evaluating their pose in outdoor flight using conventional sensors such as the global positioning system (GPS) and an inertial measurement unit (IMU). This is very efficient for long-range navigation at hundreds of meters above the ground, without any obstacles around, for example, an airplane in cruising flight. Nevertheless, the expanding set of roles for flying robots increasingly calls for them to operate close to obstacles (<1 m) in all directions in GPS-denied or cluttered environments, including buildings, warehouses, performance halls, and urban canyons. Robots may have difficulty receiving the GPS signal, yet they have to pick up the 3D structure of the surrounding environment to avoid obstacles and accomplish their missions. At such a short distance from obstacles (<1 m), the environment is completely unpredictable, and it is obviously very difficult to map the entire environment in 3D at such a scale in real time. A more efficient approach consists of the robot continuously using local information to avoid obstacles while waiting for global information to pursue its mission. Most of the time, emissive proximity sensors such as ultrasonic or laser range finders, radar, or scanning light detection and ranging (LIDAR) have been considered for this purpose. Such emissive sensors can be bulky, stealth-compromising, power-hungry, and low-bandwidth, compromising their utility for tiny, insect-like robots [26, 27, 28, 29, 30, 31]. It is well known that flying insects are sensitive to optic flow [16, 19, 36, 38]; moreover, they are able to measure the optic flow of their surroundings irrespective of spatial texture and contrast [39, 40], and some of their neurons respond monotonically to optic flow [36, 41].
Consequently, there are considerable benefits, in terms of both pixel count and computational resources, in designing guidance systems for micro flying robots around passive sensing such as motion vision, as inspired by flying insects.
1.4. The chicken-and-egg problem of translational optic flow
A given magnitude of translational optic flow poses a kind of chicken-and-egg problem (Eq. (3)), because an infinite number of (speed, distance) couples lead to the same speed/distance ratio, in other words, the same optic flow magnitude coming from a surface (Figure 3a). For instance, an optic flow magnitude of 2 rad/s (i.e., 115°/s) can be generated by a robot flying at 1 m/s at 0.5 m above the floor, flying at 2 m/s at 1 m above the floor, and so on (Figure 3a). To get around this chicken-and-egg problem, roboticists introduced the assumption, prevailing in those days, that robots have to measure their own speed, by using a tachometer on wheels [21, 42], a GPS unit, or a custom-built Pitot tube, in order to assess the distance from obstacles and then avoid them; and conversely have to measure the distance from obstacles, by means of an ultrasonic distance sensor, in order to assess their own ground speed. However, flying insects cannot directly measure their true ground speed (only their airspeed or their airspeed rate) or their distance from obstacles, their binocular vision being too limited. Flying insects do not actually solve the optic flow chicken-and-egg problem to cross tunnels; instead, they use visuomotor feedback loops directly based on optic flow, perceiving it over a wide field of view with their compound eyes, which see the floor and the ceiling (Figure 3b) as well as the walls [48, 49].
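The ambiguity can be made concrete in a few lines (the numerical values are illustrative, matching the example above):

```python
# Chicken-and-egg problem of translational optic flow: an infinite
# number of (speed, distance) couples yield the same speed/distance
# ratio, i.e., the same optic flow magnitude coming from a surface.
pairs = [(1.0, 0.5), (2.0, 1.0), (4.0, 2.0)]  # (m/s, m), as in Figure 3a
flows = [speed / distance for speed, distance in pairs]
print(flows)  # every couple yields 2.0 rad/s (~115 deg/s)
```

A single optic flow reading therefore cannot disambiguate speed from distance, which is precisely why insects rely on feedback loops acting directly on the optic flow instead of estimating either quantity.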
2. Bioinspired optic flow sensors
The criteria for evaluating the potential of optic flow sensors for MAV applications include:
- Dynamic range of lighting: visual sensors must be able to deal with the large dynamic range of natural irradiance levels, which can cover up to 9 decades during the course of the day.
- Range of optic flow covered (i.e., the angular speed magnitude), defined by the number of optic flow decades. There is now evidence that flying insects are able to measure the optic flow over a range of more than 1.4 decades [41, 50].
- Accuracy and precision, defined by systematic errors and coefficients of variation.
- Output refresh rate, defined by the instantaneous output frequency (>10 Hz).
2.1. Taking inspiration from the compound eye
The structure of a compound eye is based on a large number of repeating units called ommatidia (Figure 4). Each ommatidium is composed of a facet (a hexagonal lens, ~30 to ~60 μm in diameter) which focuses the incoming light onto the photosensitive cells [19, 51]. The optical axes of neighboring ommatidia are separated by an interommatidial angle ΔΦ, which defines the spatial acuity of the visual system. The interommatidial angles are smaller in the frontal part of the visual field than in the lateral, dorsal, or ventral parts, as observed in any compound eye [52, 53, 54]. In bioinspired robots, a sine-law gradient (Eq. (4)) is generally used in the horizontal plane in artificial compound eye design:

ΔΦ(Ψ) = ΔΦ_lateral · sin(Ψ)     (4)

where Ψ is the azimuth of the viewing direction with respect to the heading: with this gradient, a contrast edge takes the same time to cross any pair of adjacent visual axes during pure forward translation.
Once the spatial sampling has been carried out, the narrowness of the ommatidia performs a kind of automatic low-pass filtering of the visual signals reaching the photosensitive cells. The diffraction of light through the lens leads to a Gaussian angular sensitivity [52, 55, 56], which acts as a blurring effect. This angular sensitivity is described by its width at half height, called the acceptance angle Δρ (Eq. (5)):

Δρ = sqrt( (λ/D)² + (d/f)² )     (5)

where D is the lens diameter, λ is the wavelength of light, d is the rhabdom diameter (i.e., the pixel diameter in an artificial design), and f is the ommatidium focal length. The acceptance angle and the interommatidial angle are roughly equal (Δρ ≈ ΔΦ) for each ommatidium in diurnal insects, which ensures continuity in the visual signals (low aliasing) while avoiding oversampling of the environment. Moreover, the photosensitive cells’ dynamics can reach a temporal frequency of up to 100 Hz in dipterans, well beyond human vision (central vision up to 50 Hz).
In artificial designs, most of the time, light diffraction is negligible and therefore not considered; Eq. (5) then reduces to Δρ ≈ d/f, with Δρ ≈ ΔΦ.
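Equation (5) can be checked numerically; the optical parameters below are illustrative values within the ranges quoted above, not measurements from any particular species:

```python
import math

def acceptance_angle(lens_diameter_um, wavelength_um, pixel_diameter_um,
                     focal_length_um):
    """Acceptance angle (rad), Eq. (5): quadratic sum of the diffraction
    term (lambda / D) and the geometrical term (d / f)."""
    return math.hypot(wavelength_um / lens_diameter_um,
                      pixel_diameter_um / focal_length_um)

# Illustrative ommatidium: 30-um facet, 0.5-um (green) light,
# 2-um rhabdom, 60-um focal length:
rho = acceptance_angle(30.0, 0.5, 2.0, 60.0)  # ~0.037 rad (~2.1 deg)

# Artificial design, where diffraction is neglected (lambda -> 0),
# leaving only the geometrical term d / f:
rho_artificial = acceptance_angle(30.0, 0.0, 2.0, 60.0)
```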
2.2. Taking inspiration from biological signal processing
Thirty years ago, a bioinspired local motion sensor (LMS) was designed [60, 61], whose signal processing scheme was based on a time-of-travel algorithm directly inspired by the fly’s motion-sensitive neurons [19, 51, 62]. A time-of-travel algorithm directly measures the delay Δt taken by a contrast edge to travel between the visual axes of two adjacent pixels separated by an inter-pixel angle Δφ. The optic flow is therefore naturally an inverse function of this delay (Eq. (6)), and the optic flow measurement range depends jointly on the choice of the inter-pixel angle and on the timespan, which was reported to range from 10 to 230 ms in the fly’s motion-sensitive neurons [19, 51, 62] and was adopted in the LMS signal processing as well:

ω = Δφ / Δt     (6)
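A minimal sketch of the time-of-travel measurement of Eq. (6) (the inter-pixel angle below is an illustrative value):

```python
def time_of_travel_optic_flow(inter_pixel_angle_deg, delay_s):
    """Eq. (6): the optic flow equals the inter-pixel angle divided by
    the delay taken by a contrast edge to travel between the visual
    axes of two adjacent pixels; it is an inverse function of that delay."""
    return inter_pixel_angle_deg / delay_s  # deg/s

# With an illustrative 4-deg inter-pixel angle and the 10-230 ms
# timespan reported in the fly's motion-sensitive neurons:
fast = time_of_travel_optic_flow(4.0, 0.010)  # 400 deg/s
slow = time_of_travel_optic_flow(4.0, 0.230)  # ~17 deg/s
```

The ratio fast/slow (230/10 = 23, i.e., ~1.4 decades) shows how the measurable optic flow range follows directly from the chosen inter-pixel angle and timespan.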
The signal processing scheme of the bioinspired LMS is depicted in Figure 5a and can be broken down into the following eight steps:
Spatial low-pass filtering performed by the Gaussian angular sensitivity function of the defocused lens (corresponding to a blurring effect) and characterized by the angular width at half height, called the acceptance angle Δρ. Unlike natural compound eyes, in which spatial low-pass filtering results automatically from diffraction, the cutoff spatial frequency here depends on the amount of defocusing [60, 61].
Phototransduction: a logarithmic amplifier was originally used (five decades of lighting) [60, 61], then a linear amplifier (three decades of lighting), and more recently auto-adaptive photodetectors were designed as artificial retinas [58, 65], consisting of a logarithmic amplifier associated with a high-gain negative feedback loop (seven decades of lighting). Results with auto-adaptive pixels are reminiscent of analogous experiments carried out on single vertebrate and invertebrate photodetectors.
Temporal high-pass filtering of the photoreceptor signals in each channel, not only to cancel the DC component but also to accentuate the transient signals created by contrasting edges (derivative effect).
Temporal low-pass filtering to reduce noise (e.g., 100 Hz interference originating from artificial lighting). Such temporal band-pass filtering was identified in the two large monopolar cells L1 and L2 inside the fly’s lamina [68, 69, 70]. However, two discrepancies can be noted in the technological design: the high-pass and low-pass filters are switched with respect to the biological model due to electronic constraints, and the low-pass filter is fourth order in the bioinspired LMS rather than third order as in the biological model.
Hysteresis thresholding performed on the signals to detect contrast transitions and to discriminate between ON and OFF transitions. There is now strong evidence that the fly’s motion pathway is fed by separate ON and OFF pathways [51, 71, 72].
Pulse generation on the first channel through a low-pass filter, which then generates a long-lived decaying signal (first-order unit impulse response) approximating a mathematical inverse function [60, 61].
Pulse generation on the second channel, sampling the long-lived decaying signal coming from the first channel through a diode minimum detector circuit [60, 61]. The LMS output is therefore a pulse-shaped signal whose magnitude represents the local optic flow magnitude.
Accordingly, the optic flow measurement (here in volts) results from sampling the long-lived exponentially decaying function (with a time constant τ); it varies inversely with the delay Δt and hence increases with the true optic flow according to the following equation:

V_out = V_0 · exp(−Δt / τ)     (7)

with V_0 representing the power supply voltage. This original analog functional scheme can measure the optic flow over a range corresponding to 1.2 decades.
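Steps 7 and 8 can be sketched as follows (the time constant and supply voltage below are illustrative, not the values of the original circuit):

```python
import math

def lms_output_volts(delay_s, tau_s=0.05, v_supply=5.0):
    """Eq. (7) sketch: the second channel samples the first channel's
    long-lived exponentially decaying signal V(t) = V0 * exp(-t / tau)
    at the measured delay, so the sampled voltage decreases with the
    delay and hence increases monotonically with the true optic flow."""
    return v_supply * math.exp(-delay_s / tau_s)

# A short delay (fast optic flow) gives a higher output voltage than
# a long delay (slow optic flow):
v_fast = lms_output_volts(0.010)
v_slow = lms_output_volts(0.230)
```

The monotonic mapping from delay to voltage is what makes the LMS output usable directly as a local optic flow measurement.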
This bioinspired LMS (Figure 5) features three major benefits: firstly, the LMS output responds monotonically to the magnitude of the optic flow and therefore acts as a local optic flow sensor, which is vital to obtain unequivocal information about the distance from surrounding obstacles; secondly, the refresh rate of the LMS output is asynchronous and relatively high (>10 Hz depending on lighting conditions), which is suitable for indoor navigation; and, thirdly, the thresholding stage makes the LMS output virtually independent of texture and contrast.
The bioinspired LMS scheme based on a time-of-travel algorithm (Figure 5) is not a “correlator scheme” or “Hassenstein-Reichardt detector” (HR detector) [73, 74], whose output depends on texture and contrast. Another variant of the bioinspired LMS based on a time-of-travel algorithm, known as the “facilitate-and-sample” algorithm, has been proposed over the past 20 years [75, 76].
2.3. Local motion sensors and artificial retinas
Since the original analog design depicted in Figure 5a [60, 61], various versions of the bioinspired LMS have been built, including the analog-digital implementations depicted in Figure 5b (an FPAA implementation, an 8-bit microcontroller implementation running at 1 kHz, an FPGA implementation running at 2.5 or 5 kHz, an LTCC implementation running at 1 kHz [25, 79, 80], and a 16-bit microcontroller implementation running at 2 kHz). These various versions of the bioinspired LMS have been built in order to reduce size, mass, or power consumption while benefiting from computational resources to increase the number of LMSs.
An original LMS version was developed using an iC-Haus™ LSC 12-channel photosensor array forming six pixels and five LMSs. The outputs of the five LMSs were merged by a median filter, which greatly improved both the precision and the accuracy of the optic flow measurement. Moreover, to increase by at least four times the number of LMSs that can be embedded in a 16-bit microcontroller, a linear interpolation applied to the photosensor signals was used to reduce the sampling rate and thus save computational resources. The best trade-off between computational load and accuracy was found at a sampling rate of 200 Hz.
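The median-based merging step can be illustrated as follows (the readings are made-up numbers):

```python
import statistics

def merge_lms_outputs(readings_deg_s):
    """Fuse several local motion sensor outputs with a median filter:
    unlike the mean, the median rejects an outlier reading (e.g., a
    spurious edge detection on one LMS), improving both precision and
    accuracy of the fused optic flow measurement."""
    return statistics.median(readings_deg_s)

# Five LMS readings of the same local optic flow, one of them spurious:
fused = merge_lms_outputs([118.0, 121.0, 119.5, 350.0, 120.0])
print(fused)  # 120.0 deg/s: the 350 deg/s outlier is rejected
```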
In the framework of a European project called CurvACE (2009–2013; www.curvace.org), aiming at mimicking the curved compound eye of flying insects, the first functional curved artificial compound eye was designed and prototyped.
A recent optic flow sensor based on the M2APix retina (M2APix stands for Michaelis–Menten auto-adaptive pixel) can auto-adapt over a 7-decade lighting range and responds appropriately to step changes of up to ±3 decades. The pixels do not saturate, thanks to the normalization performed by the very large-scale integration (VLSI) transistors; this results from the intrinsic properties of the Michaelis–Menten equation. A comparison of the characteristics of auto-adaptive Michaelis–Menten and Delbrück pixels under identical lighting conditions (i.e., integrated in the same retina) demonstrated the better performance of the Michaelis–Menten pixels in terms of dynamic sensitivity and minimum detectable contrast.
Different kinds of algorithms have been developed to compute local motion; these have produced different hardware implementations including templates, time of travel, feature tracking, edge counting, edge correlation, and the Hassenstein-Reichardt correlator [30, 84, 85] and also different software implementations . However, analog VLSI motion sensors provide significant reductions in power consumption and payload while increasing bandwidth, improving both precision and accuracy in optic flow measurement for MAV applications.
3. Optic flow-based navigation inside buildings
3.1. Obstacle avoidance in the horizontal plane
3.1.1. Keeping the bilateral optic flow constant: a speed control system
The idea of a speed control system based on optic flow was first developed by Coombs and Roberts. Their Bee-Bot adjusted its forward speed to keep the optic flow within a measurable range, using a bilateral optic flow criterion to control the robot’s speed. The bilateral optic flow criterion (the sum of the left and right optic flows) used as a feedback signal to directly control the robot’s speed was first introduced by Santos-Victor and colleagues onboard the Robee robot. Qualitatively, the robot’s speed was scaled by the level of visual clutter in the environment. Since then, the bilateral optic flow criterion as a feedback signal directly controlling the robot’s forward speed has been tested on many robots in both straight and tapered corridors [25, 87, 88, 89, 90, 91, 92, 93, 94]. The desired bilateral optic flow was ~12°/s for the Bee-Bot robot, ~19°/s, ~46°/s, or ~21°/s in later robots, and 190 or 250°/s in more recent studies. The higher the desired bilateral optic flow, the faster the robot advances while moving close to the walls.
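The bilateral optic flow criterion can be sketched as a simple proportional feedback loop (the gain, set point, geometry, and first-order robot model are all illustrative; a real controller would include the robot’s dynamics):

```python
def simulate_centered_corridor(set_point_rad_s, corridor_width_m,
                               gain=0.5, dt=0.01, steps=5000):
    """Forward speed control holding the bilateral optic flow (sum of
    the left and right optic flows) at a set point, for a robot
    centered in a straight corridor."""
    v = 0.1  # initial forward speed (m/s)
    for _ in range(steps):
        # Centered: each lateral optic flow is v / (width / 2).
        bilateral = 2.0 * v / (corridor_width_m / 2.0)
        v += gain * (set_point_rad_s - bilateral) * dt
    return v

# The steady-state speed scales with the corridor width, i.e., with
# the visual clutter of the environment:
v_wide = simulate_centered_corridor(2.0, corridor_width_m=2.0)    # -> 1.0 m/s
v_narrow = simulate_centered_corridor(2.0, corridor_width_m=1.0)  # -> 0.5 m/s
```

Raising the set point speeds the robot up; narrowing the corridor automatically slows it down, with no explicit speed or distance measurement anywhere in the loop.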
3.1.2. Dual optic flow regulation
The first optic flow regulator was originally developed for ground avoidance during terrain-following [25, 95]. An optic flow set point is compared to a measured optic flow to provide an error signal, the latter feeding a regulator that controls a force orthogonal to the direction of motion. The combination of a unilateral optic flow regulator controlling the lateral positioning on either side and a bilateral optic flow regulator controlling the forward speed has been called a dual optic flow regulator. The dual optic flow regulator concept was originally developed for aerial vehicles endowed with natural roll and pitch stabilization abilities, for which planar flight control systems can be conveniently developed, in order to mimic honeybees’ abilities in the horizontal plane [16, 39, 97, 98] and to avoid the weaknesses of the optic flow balance strategy in the presence of lateral openings (see review). The dual optic flow regulator was implemented for the first time onboard an 878-gram fully actuated hovercraft called LORA (Lateral Optic flow Regulation Autopilot), which combines:
A unilateral optic flow regulator (Figure 6b) that adjusts the hovercraft’s lateral thrust so as to keep the higher of the two perceived lateral optic flows (left or right) equal to a sideways optic flow set point (noted ω_setSide). The outcome is that the distance to the nearest wall becomes proportional to the hovercraft’s forward speed, as determined in (ii).
A bilateral optic flow regulator (Figure 6c) that adjusts the hovercraft’s forward thrust so as to keep the sum of the two lateral optic flows (right and left) equal to a forward optic flow set point (noted ω_setFwd).
In a steady state, with a given corridor width D, the two optic flow set points fully determine both the robot’s forward speed and its distance to the followed wall (Eqs. (9) and (10)).
As a consequence, the robot’s speed is asymptotically and automatically scaled by the corridor width, or even by the clutter of the environment (Figure 7b). By increasing the forward optic flow set point ω_setFwd at a given sideways optic flow set point ω_setSide, one can increase the robot’s forward speed. By reducing the sideways optic flow set point at a given forward optic flow set point, one can induce a graceful shift from “wall-following behavior” to “centering behavior.” “Centering behavior” occurs as a particular case of “wall-following behavior” whenever ω_setFwd = 2·ω_setSide. In addition, the dual optic flow regulator requires a third feedback loop to stabilize the robot about its vertical axis, which makes the robot experience a purely translational optic flow. The robot’s heading is maintained by a heading-lock system (based on a micro-compass enhanced by a micro-gyrometer) controlling the rear thrusters differentially in closed-loop mode (Figure 6a).
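Under the steady-state assumptions above (nearest-wall optic flow held at the sideways set point, sum of both lateral optic flows held at the forward set point), the operating point has a closed-form solution. The sketch below is ours, with illustrative set points:

```python
def dual_of_operating_point(corridor_width, of_set_fwd, of_set_side):
    """Steady state of a dual optic flow regulator in a straight corridor:
        v / y = of_set_side               (unilateral loop, nearest wall)
        v / y + v / (D - y) = of_set_fwd  (bilateral loop)
    Solving gives the forward speed v and the nearest-wall distance y."""
    v = corridor_width * of_set_side * (of_set_fwd - of_set_side) / of_set_fwd
    y = v / of_set_side
    return v, y

# Centering occurs whenever the forward set point is twice the
# sideways set point: the robot settles mid-corridor.
v, y = dual_of_operating_point(1.0, of_set_fwd=4.0, of_set_side=2.0)
print(v, y)  # 1.0 m/s at 0.5 m, i.e., the middle of a 1-m corridor
```

Raising ω_setSide above half of ω_setFwd moves the equilibrium closer to one wall, reproducing the graceful shift from centering to wall-following.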
3.1.3. Bioinspired visuomotor convergence
Humbert put forward the bioinspired visuomotor convergence concept during his PhD work (PhD thesis; later applied to obstacle avoidance and speed control [101, 102], terrain-following, corridor-following [92, 93, 104], and urban canyon-following) to control both land-based and flying robots solely on the basis of optic flow. This theory is based on the spatial decompositions performed by neurons in the insect visuomotor system [106, 107, 108], which extract relative velocity and proximity information from patterns of optic flow. Advantages of bioinspired visuomotor convergence include:
Significant improvements in the signal-to-noise ratio of the relative velocity and proximity information, since each feedback signal is derived from many local optic flow estimates.
Through proper choice of weighting functions, the rotational and translational components can be separated automatically and do not require any derotation procedure .
Compared with the “optic flow balance strategy,” which frequently fails in one-sided corridors or corridors with openings in a wall (see review), and with the switching-mode strategy employed in such environments [87, 88], the bioinspired visuomotor convergence in [109, 110] retains the strategy of balancing lateral optic flows but leverages the stability and performance guarantees of the closed loop to achieve stable quadrotor flight in corridor-like environments that include large openings in a wall or additional structures such as small poles.
3.1.4. Image expansion to avoid frontal obstacles
The “optic flow balance strategy” was originally suggested to explain the centering behavior observed along a straight corridor (see review). However, it turned out that this strategy, when used alone, does not allow an agent to avoid frontal obstacles, e.g., to follow a corridor that includes L-junctions or T-junctions, without using the frontal visual field. The frontal image expansion can therefore be used to estimate the time to contact by means of the optic flow divergence [113, 114] and to trigger a prespecified rotation angle around the robot’s vertical axis. A simulated small helicopter could thus trigger U-turns when encountering frontal obstacles, a wheeled robot could trigger a rotation of 90° or of 110° in front of an obstacle, or a robot could stop and rotate on the spot until the frontal range once again became large enough. Other robots use a series of open-loop commands, called body saccades, to avoid a frontal obstacle. The saccade duration has either been set to a constant prespecified value [116, 117], drawn from a Gaussian distribution, or modulated using optic flow [119, 120, 121, 122, 123]. Recently, an optic flow-based algorithm was developed to compute a quantified saccade angle; it allowed a simulated fully actuated hovercraft to negotiate tight bends by triggering body saccades on the basis of a time-to-contact criterion and to realign its trajectory parallel to the wall along a corridor including sharp turns.
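A minimal sketch of a time-to-contact saccade trigger (the threshold is an illustrative value; a real system would estimate the divergence from frontal optic flow measurements):

```python
def time_to_contact_s(frontal_of_divergence_s):
    """The time to contact is approximated by the inverse of the frontal
    optic flow divergence (tau = 1 / div): no distance or speed
    measurement is needed."""
    return 1.0 / frontal_of_divergence_s

def should_trigger_saccade(frontal_of_divergence_s, tau_threshold_s=1.0):
    """Trigger an open-loop body saccade when the estimated time to
    contact drops below a threshold (illustrative value)."""
    return time_to_contact_s(frontal_of_divergence_s) < tau_threshold_s

print(should_trigger_saccade(2.0))   # True: contact expected in 0.5 s
print(should_trigger_saccade(0.25))  # False: contact is still 4 s away
```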
3.2. Obstacle avoidance in the vertical plane
Ventral optic flow can be used by aerial robots [125, 126, 127] to achieve various maneuvers (takeoff, terrain-following, nap-of-the-earth flight, landing, and deck landing) in the same way as honeybees do [5, 97]. Ventral optic flow has also been employed for ground avoidance onboard MAVs by maintaining the ventral optic flow at a given set point using a ventral optic flow regulator. Another control algorithm, based on a “bang-bang” method, was used onboard MAVs to control their lift: if a certain threshold of ventral optic flow was exceeded, the MAV’s elevator was moved to a preset deflection [120, 121, 128].
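A ventral optic flow regulator leads to a simple steady-state relation between ground height and forward speed (a sketch under the assumption of level flight over flat ground; function names are ours):

```python
def steady_state_height_m(forward_speed_m_s, ventral_of_set_point_rad_s):
    """A ventral optic flow regulator holds omega = v / h at its set
    point, so in steady state the ground height tracks the forward
    speed: h = v / omega_set. Slowing down therefore produces an
    automatic, proportional descent, ending in a smooth touchdown."""
    return forward_speed_m_s / ventral_of_set_point_rad_s

# Deceleration at a constant 2 rad/s ventral optic flow set point:
heights = [steady_state_height_m(v, 2.0) for v in (2.0, 1.0, 0.5)]
print(heights)  # [1.0, 0.5, 0.25]: the robot descends as it slows down
```

This coupling between speed and height is what allows optic flow-based landing without any distance sensor: as the forward speed tends to zero, so does the regulated height.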
Recently, the dual optic flow regulation principle applied in the vertical plane was tested onboard an 80-gram rotorcraft called BeeRotor. As a third feedback loop, an active reorientation system based on a quasi-panoramic eye constantly realigned the gaze parallel to the nearest followed surface: BeeRotor thus achieved automatic terrain-following despite steep reliefs (Figure 8), without any inertial reference frame to sense verticality, just as flying insects do. Indeed, behavioral experiments performed 35 years ago on flying insects in zero gravity, as well as recent behavioral experiments with hymenopterans and dipterans, demonstrated that flying insects do not sense verticality in flight by means of gravity perception as vertebrates do. The eye reorientation enables BeeRotor to detect at an earlier stage the increase in optic flow due to steep relief, in order to properly avoid the obstacle. Additionally, in the framework of the “Green Brain” project led by James Marshall, a dual optic flow regulator for both speed control and lateral positioning and a ventral optic flow regulator for altitude control were implemented onboard a small quadrotor.
3.3. Obstacle avoidance inside a maze
In silico maze experiments were mainly carried out in urban-like environments with a flying robot moving at relatively high speed through relatively wide urban canyons (forward speed v = 1 m/s with a minimum canyon width D = 4 m; v = 13 m/s with D = 40 m; v = 14 m/s with D = 50 m; v = 2 m/s with D = 10 m), hence generating an optic flow from the walls lower than 45°/s. On the other hand, navigating inside a building requires measuring not only the optic flow at a high refresh rate (>10 Hz) but also high optic flow magnitudes in the range of those shown in Figure 3 (i.e., >100°/s). To achieve this, the LORA robot is driven by a body saccadic system (see Section 3.1.) and a dual optic flow regulator-based intersaccadic system (see Section 3.1.). In Figure 9, with fixed forward and sideways optic flow set points, the LORA robot is seen to explore the inside of the building and to adopt two possible routes along straight sections (following either the right wall or the left wall) according to Eqs. (9) and (10), leading to an operating point of v∞ = 0.48 m/s and y∞ = 0.31 m in a steady state. Except for three lateral contacts with the walls (red crosses in Figure 9), occurring at either a salient angle (90°) or a reentrant angle (270°), the LORA robot is able to explore the inside of the building for ~23 minutes even though it is fitted with a minimalistic visual system (16 pixels forming eight LMSs).
4. Drones in the field of architecture
Over the last decade, camera-equipped unmanned aerial vehicles (UAVs) have increasingly been used in the field of architecture for visually monitoring construction, building operation, and failures of superstructures (skyscrapers, stadiums, chimneys, nuclear facilities, bridges, etc.). UAVs can frequently survey construction sites, monitor work in progress, create safety documentation, and inspect existing structures, particularly hard-to-reach areas. UAVs are used not only for 3D modeling and building reconstruction but also for photogrammetric applications. These UAVs evaluate their pose (i.e., their position and orientation) in outdoor flight with a GPS unit delivering an output signal at about 7 Hz (see Section 1.3.), forcing drones to work away from structures, which is a drawback when taking high-resolution pictures. In indoor flight, drones operate in GPS-denied environments. Consequently, the use of active proximity sensors such as ultrasonic sensors, laser range finders, radar, or scanning light detection and ranging (LIDAR) has most of the time been considered for this purpose. However, such active sensors are bulky and power-hungry, with low bandwidth and low output refresh rates (2 Hz–5 Hz), compromising their utility for fast UAV maneuvers close to obstacles or walls. Recently, a lightweight sensor composed of four stereo heads and an inertial measurement unit (IMU) was developed to perform FPGA-based dense reconstruction for obstacle detection in all directions at a 7 Hz output refresh rate. In another application, a drone with a mass of less than 500 g was developed for photogrammetric 3D reconstruction of a cultural heritage object, but it required a motion capture system to accurately determine its pose at a frequency of 500 Hz. A better understanding of how flying insects work will therefore help future drones to operate inside buildings where obstacles are very close.
The expanding set of roles for flying robots increasingly calls for them to operate at relatively high speed close to obstacles (<1 m) in all directions in GPS-denied or cluttered environments, including buildings, warehouses, performance halls, and urban canyons. Tiny flying robots cannot rely on GPS signals in such complex environments; instead, they have to pick up the 3D structure of the surrounding environment in real time to avoid collisions. At short distances from obstacles (<1 m), the environment is completely unpredictable. Emissive proximity sensors have most often been considered for this purpose, but optic flow sensing is now truly becoming part of MAV avionics at several companies (e.g., senseFly in Switzerland, Qualcomm and Centeye in the USA), and it can also be used as a direct feedback signal in MAV automatic guidance, in the same way as flying insects use it.
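The idea of using optic flow as a direct feedback signal can be sketched as a simple regulator: the forward speed is adjusted so that the measured lateral optic flow V/D tracks a set point, which makes the robot slow down automatically as the clearance D shrinks. The sketch below is illustrative only (the gains, set point, and first-order dynamics are assumptions, not any specific robot's controller):

```python
import math

def optic_flow_regulator(of_setpoint_deg, distance_to_wall_m, v0=1.0,
                         gain=0.5, dt=0.05, steps=400):
    """Toy proportional optic flow regulator: integrate a speed command
    that drives the measured lateral optic flow v / D toward a set point.
    Gains and dynamics are illustrative assumptions."""
    of_set = math.radians(of_setpoint_deg)       # set point in rad/s
    v = v0
    for _ in range(steps):
        of_measured = v / distance_to_wall_m     # translational optic flow
        v += gain * (of_set - of_measured) * dt  # proportional feedback
    return v

# The steady-state speed scales with clearance: halving the distance to
# the wall halves the speed reached for the same optic flow set point.
v_wide = optic_flow_regulator(90.0, distance_to_wall_m=1.0)
v_narrow = optic_flow_regulator(90.0, distance_to_wall_m=0.5)
print(v_wide, v_narrow)
```

This clearance-proportional slowdown is precisely the behavior that makes optic flow regulation attractive for flight close to obstacles: no distance measurement is ever needed, yet speed degrades gracefully as space tightens.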
Von Frisch K. The Dance Language and Orientation of Bees. Cambridge, Massachusetts: Harvard University Press; 1967
Menzel R, Giurfa M. Cognitive architecture of a mini-brain: The honeybee. Trends in Cognitive Sciences. 2001; 5(2):62-71
Haddad D, Schaupp F, Brandt R, Manz G, Menzel R, Haase A. NMR imaging of the honeybee brain. Journal of Insect Science. 2004; 4(1):7
Seidl R, Kaiser W. Visual field size, binocular domain and the ommatidial array of the compound eyes in worker honey bees. Journal of Comparative Physiology. 1981; 143(1):17-26
Srinivasan MV, Zhang S, Altwein M, Tautz J. Honeybee navigation: Nature and calibration of the “odometer”. Science. 2000; 287(5454):851-853
Srinivasan MV. Going with the flow: A brief history of the study of the honeybee’s navigational ‘odometer’. Journal of Comparative Physiology A. 2014; 200(6):563-573
von Frisch K. Gelöste und ungelöste Rätsel der Bienensprache. Die Naturwissenschaften. 1948; 35(1):12-23
von Helversen O, Edrich W. Der polarisationsempfänger im Bienenauge: ein Ultraviolettrezeptor. Journal of Comparative Physiology. 1974; 94(1):33-47
Ogawa Y, Ribi W, Zeil J, Hemmi JM. Regional differences in the preferred e-vector orientation of honeybee ocellar photoreceptors. Journal of Experimental Biology. 2017
Landgraf T, Oertel M, Rhiel D, Rojas R. A biomimetic honeybee robot for the analysis of the honeybee dance communication system. IROS. 2010:3097-3102
Expert F, Ruffier F. Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. Bioinspiration & Biomimetics. 2015; 10(2):026003
Webb B, Wystrach A. Neural mechanisms of insect navigation. Current Opinion in Insect Science. 2016; 15:27-39
Franz MO, Mallot HA. Biomimetic robot navigation. Robotics and Autonomous Systems. 2000; 30:133-153
Webb B. Can robots make good models of biological behaviour? Behavioral and Brain Sciences. 2001; 24(06):1033-1050
Webb B. Validating biorobotic models. Journal of Neural Engineering. 2006; 3(3):R25
Srinivasan MV. Visual control of navigation in insects and its relevance for robotics. Current Opinion in Neurobiology. 2011; 21(4):535-543
Floreano D, Ijspeert AJ, Schaal S. Robotics and neuroscience. Current Biology. 2014; 24(18):R910-R920
Ijspeert AJ. Biorobotics: Using robots to emulate and investigate agile locomotion. Science. 2014; 346(6206):196-203
Franceschini N. Small brains, smart machines: From fly vision to robot vision and back again. Proceedings of the IEEE. 2014; 102(5):751-781
Raharijaona T, Kerhuel L, Serres J, Roubieu F, Expert F, Viollet S, et al. Insect inspired visual motion sensing and flying robots. In: Handbook of Biomimetics and Bioinspiration: 2 Electromechanical Systems. World Scientific; 2014. pp. 565-611
Franceschini N, Pichon JM, Blanes C. From insect vision to robot vision. Philosophical Transaction: Biological Sciences. 1992; 337:283-294
Franceschini N, Ruffier F, Serres J. A bio-inspired flying robot sheds light on insect piloting abilities. Current Biology. 2007; 17(4):329-335
Lambrinos D, Möller R, Labhart T, Pfeifer R, Wehner R. A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems. 2000; 30(1):39-64
Horchler AD, Reeve RE, Webb B, Quinn RD. Robot phonotaxis in the wild: A biologically inspired approach to outdoor sound localization. Advanced Robotics. 2004; 18(8):801-816
Roubieu FL, Serres JR, Colonnier F, Franceschini N, Viollet S, Ruffier F. A biomimetic vision-based hovercraft accounts for bees’ complex behaviour in various corridors. Bioinspiration & Biomimetics. 2014; 9(3):036003
Duhamel PEJ, Pérez-Arancibia NO, Barrows GL, Wood RJ. Altitude feedback control of a flapping-wing microrobot using an on-board biologically inspired optical flow sensor. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE; 2012. p. 4228-4235
Kushleyev A, Mellinger D, Powers C, Kumar V. Towards a swarm of agile micro quadrotors. Autonomous Robots. 2013; 35(4):287-300
Ma KY, Chirarattananon P, Fuller SB, Wood RJ. Controlled flight of a biologically inspired, insect-scale robot. Science. 2013; 340(6132):603-607
Dunkley O, Engel J, Sturm J, Cremers D. Visual-inertial navigation for a camera-equipped 25g nano-quadrotor. In: IROS2014 aerial open source robotics workshop; 2014
Moore RJ, Dantu K, Barrows GL, Nagpal R. Autonomous MAV Guidance with a lightweight omnidirectional vision sensor. In: 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE; 2014. p. 3856-3861
Floreano D, Wood RJ. Science, technology and the future of small autonomous drones. Nature. 2015; 521(7553):460-466
Gibson JJ. The Perception of the Visual World. Boston: Houghton Mifflin; 1950
Whiteside TC, Samuel G. Blur zone. Nature. 1970; 225:94-95
Nakayama K, Loomis J. Optical velocity patterns, velocity-sensitive neurons, and space perception: A hypothesis. Perception. 1974; 3(1):63-80
Koenderink JJ, van Doorn AJ. Facts on optic flow. Biological Cybernetics. 1987; 56(4):247-254
Krapp HG, Hengstenberg R, et al. Estimation of self-motion by optic flow processing in single visual interneurons. Nature. 1996; 384(6608):463-466
Viollet S, Zeil J. Feed-forward and visual feedback control of head roll orientation in wasps ( Polistes humilis, Vespidae, hymenoptera). Journal of Experimental Biology. 2013; 216(7):1280-1291
Taylor GK, Krapp HG. Sensory systems and flight stability: What do insects measure and why? Advances in Insect Physiology. 2007; 34:231-316
Srinivasan M, Lehrer M, Kirchner W, Zhang S. Range perception through apparent image speed in freely flying honeybees. Visual Neuroscience. 1991; 6(05):519-535
Baird E, Srinivasan MV, Zhang S, Cowling A. Visual control of flight speed in honeybees. Journal of Experimental Biology. 2005; 208(20):3895-3905
Ibbotson M. Evidence for velocity–tuned motion-sensitive descending neurons in the honeybee. Proceedings of the Royal Society of London B: Biological Sciences. 2001; 268(1482):2195-2201
van der Zwaan S, Santos-Victor J. An insect inspired visual sensor for the autonomous navigation of a mobile robot. In: Proc. of the Seventh International Symposium on Intelligent Robotic Systems (SIRS). 1999
Griffiths S, Saunders J, Curtis A, Barber B, McLain T, Beard R. Obstacle and terrain avoidance for miniature aerial vehicles. Advances in Unmanned Aerial Vehicles. Springer. 2007:213-244
Beyeler A, Zufferey JC, Floreano D. Vision-based control of near-obstacle flight. Autonomous Robots. 2009; 27(3):201-219
Honegger D, Meier L, Tanskanen P, Pollefeys M. An open source and open hardware embedded metric optical flow cmos camera for indoor and outdoor applications. In: Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE; 2013. pp. 1736-1741
Burkhardt D, Gewecke M. Mechanoreception in Arthropoda: The chain from stimulus to behavioral pattern. In: Cold Spring Harbor symposia on quantitative biology. vol. 30. Cold Spring Harbor Laboratory Press; 1965. pp. 601-614
Srinivasan MV. How insects infer range from visual motion. In: Miles FA, Wallman J, editors. Visual Motion and Its Role in the Stabilization of Gaze. Elsevier Science Ltd; 1993
Portelli G, Serres J, Ruffier F, Franceschini N. Modelling honeybee visual guidance in a 3-D environment. Journal of Physiology-Paris. 2010; 104(1):27-39
Portelli G, Ruffier F, Roubieu FL, Franceschini N. Honeybees’ speed depends on dorsal as well as lateral, ventral and frontal optic flows. PLoS One. 2011; 6(5):e19486
O'Carroll D, Bidwell N, Laughlin S, Warrant E. Insect motion detectors matched to visual ecology. Nature. 1996; 382(6586):63
Franceschini N, Riehle A, Le Nestour A. Directionally selective motion detection by insect neurons. Facets of Vision. Springer. 1989:360-390
Land MF. Visual acuity in insects. Annual Review of Entomology. 1997; 42(1):147-177
Rossel S. Regional differences in photoreceptor performance in the eye of the praying mantis. Journal of Comparative Physiology. 1979; 131(2):95-112
Land MF. Optics and vision in invertebrates. In: Autrum H, editor. Handbook of Sensory Physiology. Berlin, Heidelberg, New York: Springer; 1981
Götz KG. Optomotorische untersuchung des visuellen systems einiger augenmutanten der fruchtfliege Drosophila. Kybernetik. 1964; 2(2):77-92
Horridge GA. The compound eye of insects. Scientific American. 1977; 237:108-120
Laughlin S, Weckström M. Fast and slow photoreceptors—A comparative study of the functional diversity of coding and conductances in the Diptera. Journal of Comparative Physiology A. 1993; 172(5):593-609
Floreano D, Pericet-Camara R, Viollet S, Ruffier F, Brückner A, Leitel R, et al. Miniature curved artificial compound eyes. Proceedings of the National Academy of Sciences. 2013; 110(23):9267-9272
Song YM, Xie Y, Malyarchuk V, Xiao J, Jung I, Choi KJ, et al. Digital cameras with designs inspired by the arthropod eye. Nature. 2013; 497(7447):95-99
Blanes C. Appareil visuel élémentaire pour la navigation à vue d’un robot mobile autonome. DEA thesis (Neurosciences), Univ Aix- Marseille. 1986
Blanes C. Guidage visuel d’un robot mobile autonome d’inspiration bionique. PhD thesis, Institut National Polytechnique de Grenoble; 1991
Franceschini N. Early processing of colour and motion in a mosaic visual system. Neuroscience Research Supplements. 1985; 2:S17-S49
Roubieu FL, Expert F, Boyron M, Fuschlock BJ, Viollet S, Ruffier F. A novel 1-gram insect based device measuring visual motion along 5 optical directions. In: Sensors, 2011 IEEE. IEEE. 2011. pp. 687-690
Ruffier F, Viollet S, Amic S, Franceschini N. Bio-inspired optical flow circuits for the visual guidance of micro air vehicles. In: Circuits and Systems, 2003. ISCAS’03. Proceedings of the 2003 International Symposium on. vol. 3. IEEE; 2003. p. III-846
Mafrica S, Godiot S, Menouni M, Boyron M, Expert F, Juston R, et al. A bio-inspired analog silicon retina with Michaelis-Menten auto-adaptive pixels sensitive to small and large changes in light. Optics Express. 2015; 23(5):5614-5635
Normann RA, Perlman I. The effects of background illumination on the photoresponses of red and green cones. The Journal of Physiology. 1979; 286:491
Matic T, Laughlin S. Changes in the intensity-response function of an insect’s photoreceptors due to light adaptation. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology. 1981; 145(2):169-177
Laughlin S. The roles of parallel channels in early visual processing by the arthropod compound eye. Photoreception and Vision in Invertebrates. Springer. 1984:457-481
Laughlin SB. Coding efficiency and design in visual processing. Facets of Vision. Springer. 1989:213-234
Juusola M, French AS. Visual acuity for moving objects in first-and second-order neurons of the fly compound eye. Journal of Neurophysiology. 1997; 77(3):1487-1495
Riehle A, Franceschini N. Motion detection in flies: Parametric control over ON-OFF pathways. Experimental Brain Research. 1984; 54(2):390-394
Harris R, O’Carroll D. Afterimages in fly motion vision. Vision Research. 2002; 42(14):1701-1714
Hassenstein B, Reichardt W. Systemtheoretische analyse der zeit-, reihenfolgen-und vorzeichenauswertung bei der bewegungsperzeption des rüsselkäfers chlorophanus. Zeitschrift für Naturforschung B. 1956; 11(9–10):513-524
Reichardt W. Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. Sensory Communication. 1961:303-317
Kramer J, Sarpeshkar R, Koch C. An analog VLSI velocity sensor. In: Circuits and Systems, 1995. ISCAS’95, 1995 IEEE International Symposium on. vol. 1. IEEE; 1995. p. 413-416
Moeckel R, Liu SC. Motion detection circuits for a time-to-travel algorithm. In: Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on. IEEE. 2007. pp. 3079-3082
Aubépart F, Franceschini N. Bio-inspired optic flow sensors based on FPGA: Application to micro-air-vehicles. Microprocessors and Microsystems. 2007; 31(6):408-419
Pudas M, Viollet S, Ruffier F, Kruusing A, Amic S, Leppävuori S, et al. A miniature bio-inspired optic flow sensor based on low temperature cofired ceramics (LTCC) technology. Sensors and Actuators A: Physical. 2007; 133(1):88-95
Expert F, Viollet S, Ruffier F. Outdoor field performances of insect-based visual motion sensors. Journal of Field Robotics. 2011; 28(4):529-541
Sabiron G, Chavent P, Raharijaona T, Fabiani P, Ruffier F. Low-speed optic-flow sensor onboard an unmanned helicopter flying outside over fields. Robotics and Automation (ICRA), 2013 IEEE International Conference on. IEEE. 2013:1742-1749
Expert F, Roubieu FL, Ruffier F. Interpolation based “time of travel” scheme in a visual motion sensor using a small 2D retina. In: Sensors, 2012 IEEE. IEEE. 2012. pp. 1-4
Mafrica S, Servel A, Ruffier F. Minimalistic optic flow sensors applied to indoor and outdoor visual guidance and odometry on a car-like robot. Bioinspiration & Biomimetics. 2016; 11(6):066007
Delbruck T, Mead CA. Adaptive photoreceptor with wide dynamic range. Circuits and Systems, 1994. ISCAS’94., 1994 IEEE International Symposium on. vol. 4. IEEE. 1994:339-342
Xu P, Humbert JS, Abshire P. Analog VLSI implementation of wide-field integration methods. Journal of Intelligent & Robotic Systems. 2011; 64(3–4):465-487
Chao H, Gu Y, Napolitano M. A survey of optical flow techniques for robotics navigation applications. Journal of Intelligent & Robotic Systems. 2014; 73(1–4):361
Coombs D, Roberts K. Bee-bot: Using peripheral optical flow to avoid obstacles. In SPIE: Vol. 1825. Intelligent robots and computer vision XI. 1992:714-721
Santos-Victor J, Sandini G, Curotto F, Garibaldi S. Divergent stereo in autonomous navigation: From bees to robots. International Journal of Computer Vision. 1995; 14(2):159-177
Weber K, Venkatesh S, Srinivasan MV. Insect inspired behaviors for the autonomous robots. In: Srinivasan MV, Venkatesh S, editors. From Living Eyes to Seeing Machines. 11. Oxford, UK: Oxford University Press; 1997. pp. 226-248
Srinivasan MV, Chahl JS, Weber K, Venkatesh S, Nagle MG, Zhang SW. Robot navigation inspired by principles of insect vision. Robotics and Autonomous Systems. 1999; 26(2):203-216
Baratoff G, Toepfer C, Neumann H. Combined space-variant maps for optical flow navigation. Biological Cybernetics. 2000; 83(3):199-209
Argyros AA, Tsakiris DP, Groyer C. Biomimetic centering behavior for mobile robots with panoramic sensors. IEEE Robotics and Automation Magazine, Special issue on Mobile robots with panoramic sensors, Daniilides K, Papakolopoulos N, eds. 2004; 11:21-30
Humbert JS, Hyslop A, Chinn M. Experimental validation of wide-field integration methods for autonomous navigation. In: Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on. San Diego, CA: IEEE; 2007. p. 2144-2149
Humbert JS, Hyslop AM. Bioinspired visuomotor convergence. Robotics, IEEE Transactions on. 2010; 26(1):121-130
Roubieu FL, Serres J, Franceschini N, Ruffier F, Viollet S. A fully- autonomous hovercraft inspired by bees: wall following and speed control in straight and tapered corridors. In: Robotics and Biomimetics (ROBIO), 2012 IEEE International Conference on. IEEE; 2012. p. 1311-1318
Ruffier F, Franceschini N. Optic flow regulation: The key to aircraft automatic guidance. Robotics and Autonomous Systems. 2005; 50(4):177-194
Serres J, Dray D, Ruffier F, Franceschini N. A vision-based autopilot for a miniature air vehicle: Joint speed control and lateral obstacle avoidance. Autonomous Robots. 2008; 25(1–2):103-122
Srinivasan M, Zhang S, Lehrer M, Collett T. Honeybee navigation en route to the goal: Visual flight control and odometry. The Journal of Experimental Biology. 1996; 199(1):237-244
Serres J, Masson GP, Ruffier F, Franceschini N. A bee in the corridor: Centering and wall-following. Die Naturwissenschaften. 2008; 95:1181-1187
Serres J, Ruffier F. Optic Flow-Based Robotics. In: Wiley Encyclopedia of Electrical and Electronics Engineering. John Wiley & Sons, Inc.; 2016. p. 1-14
Humbert JS. Bio-inspired visuomotor convergence in navigation and flight control systems. California Institute of Technology; 2005
Humbert JS, Murray RM, Dickinson MH. Sensorimotor convergence in visual navigation and flight control systems. In: Proceedings of the 16th IFAC World Congress. Praha, Czech Republic; 2005
Humbert JS, Murray RM, Dickinson MH. A control-oriented analysis of bio-inspired visuomotor convergence. In: Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC’05. 44th IEEE Conference on. IEEE; 2005. p. 245-250
Humbert JS, Murray RM, Dickinson MH. Pitch-altitude control and terrain following based on bio-inspired visuomotor convergence. In: AIAA Conference on Guidance, Navigation and Control. vol. AIAA 2005–6280. San Francisco, CA; 2005
Conroy J, Gremillion G, Ranganathan B, Humbert JS. Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous Robots. 2009; 27(3):189-198
Hyslop A, Krapp HG, Humbert JS. Control theoretic interpretation of directional motion preferences in optic flow processing interneurons. Biological Cybernetics. 2010; 103(5):353-364
Hausen K. Motion sensitive interneurons in the optomotor system of the fly. Biological Cybernetics. 1982; 45(2):143-156
Krapp HG, Hengstenberg B, Hengstenberg R. Dendritic structure and receptive-field organization of optic flow processing interneurons in the fly. Journal of Neurophysiology. 1998; 79(4):1902-1917
Borst A, Haag J. Neural networks in the cockpit of the fly. Journal of Comparative Physiology A. 2002; 188(6):419-437
Keshavan J, Gremillion G, Escobar-Alvarez H, Humbert J. A μ analysis-based, controller-synthesis framework for robust bioinspired visual navigation in less-structured environments. Bioinspiration & Biomimetics. 2014; 9(2):025011
Keshavan J, Gremillion G, Alvarez-Escobar H, Humbert JS. Autonomous vision-based navigation of a Quadrotor in corridor-like environments. International Journal of Micro Air Vehicles. 2015; 7(2):111-124
Duchon AP, Warren WH. Robot navigation from a Gibsonian viewpoint. In: Systems, Man, and Cybernetics, 1994. Humans, Information and Technology., 1994 IEEE International Conference on. Vol. 3. IEEE. 1994. pp. 2272-2277
Lee DN. A theory of visual control of braking based on information about time-to-collision. Perception. 1976; 5(4):437-459
Nelson R, Aloimonos J. Using flow field divergence for obstacle avoidance in visual navigation. In: Science Applications International Corp, Proceedings: Image Understanding Workshop. vol. 2; 1988
Ancona N, Poggio T. Optical Flow From 1D Correlation: Application to a Simple Time-To-Crash Detector. In: 4th International Conference on Computer Vision, Proceedings of the. Berlin, Germany; 1993. p. 209-214
Muratet L, Doncieux S, Briere Y, Meyer JA. A contribution to vision- based autonomous helicopter flight in urban environments. Robotics and Autonomous Systems. 2005; 50(4):195-209
Bermúdez i Badia S, Bernardet U, Verschure PF. Non-linear neuronal responses as an emergent property of afferent networks: A case study of the locust lobula giant movement detector. PLoS Computational Biology. 2010; 6(3):e1000701
Zufferey JC, Floreano D. Fly-inspired visual steering of an ultralight indoor aircraft. IEEE Transactions on Robotics. 2006; 22(1):137-146
Reiser MB, Dickinson MH. A test bed for insect-inspired robotic control. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. 2003; 361(1811):2267-2285
Beyeler A, Zufferey JC, Floreano D. 3D vision-based navigation for indoor microflyers. In: Robotics and Automation, 2007 IEEE International Conference on. IEEE; 2007. p. 1336-1341
Barrows GL, Neely C, Miller KT. Optic flow sensors for MAV navigation. In: Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications. Bellingham, U.S.A.: Progress in Astronautics and Aeronautics, AIAA, Vol. 195. 2001. pp. 557-574
Green WE, Oh PY, Barrows G. Flying insect inspired vision for autonomous aerial robot maneuvers in near-earth environments. In: Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 IEEE International Conference on. vol. 3. IEEE; 2004. p. 2347-2352
Lindemann JP, Weiss H, Möller R, Egelhaaf M. Saccadic flight strategy facilitates collision avoidance: Closed-loop performance of a cyberfly. Biological cybernetics. 2008; 98(3):213-227
Rezaei M, Saghafi F. Optical flow-based obstacle avoidance of a fixed-wing MAV. Aircraft Engineering and Aerospace Technology. 2011; 83(2):85-93
Serres JR, Ruffier F. Biomimetic autopilot based on minimalistic motion vision for navigating along corridors comprising U-shaped and S-shaped turns. Journal of Bionic Engineering. 2015; 12(1):47-60
Chahl JS, Srinivasan MV, Zhang SW. Landing strategies in honeybees and applications to uninhabited airborne vehicles. International Journal of Robotics Research. 2004; 23(2):101-110
Garratt MA, Chahl JS. Vision-based terrain following for an unmanned rotorcraft. Journal of Field Robotics. 2008; 25(4–5):284-301
Herissé B, Hamel T, Mahony R, Russotto FX. Landing a VTOL unmanned aerial vehicle on a moving platform using optical flow. IEEE Transactions on Robotics. 2012; 28(1):77-89
Green WE, Oh PY. Optic-flow-based collision avoidance. IEEE Robotics & Automation Magazine. 2008; 15(1)
Nelson TE, Peterson JR. NSTA-NASA Shuttle Student Involvement Project. Experiment Results: Insect Flight Observation at Zero Gravity. 1982
Goulard R, Vercher JL, Viollet S. To crash or not to crash: How do hoverflies cope with free-fall situations and weightlessness? Journal of Experimental Biology. 2016; 219(16):2497-2503
Sabo C, Cope A, Gurny K, Vasilaki E, Marshall JA. Bio-inspired visual navigation for a Quadcopter using optic flow. In: AIAA Infotech @ Aerospace; 2016. p. 0404
Garratt MA, Cheung A. Obstacle Avoidance in Cluttered Environments Using Optic Flow. In: Australian Conference on Robotics and Automation; 2009
Ham Y, Han KK, Lin JJ, Golparvar-Fard M. Visual monitoring of civil infrastructure systems via camera-equipped Unmanned Aerial Vehicles (UAVs): a review of related works. Visualization in Engineering. 2016; 4(1):1
Küng O, Strecha C, Fua P, Gurdan D, Achtelik M, Doth KM, et al. Simplified building models extraction from ultra-light UAV imagery. ISPRS-international archives of the photogrammetry. Remote Sensing and Spatial Information Sciences. 2011; 3822:217-222
Holz D, Nieuwenhuisen M, Droeschel D, Schreiber M, Behnke S. Towards multimodal omnidirectional obstacle detection for autonomous unmanned aerial vehicles. Int Arch Photogramm Remote Sens Spatial Inf Sci(ISPRS). 2013; 1:W2
Gohl P, Honegger D, Omari S, Achtelik M, Pollefeys M, Siegwart R. Omnidirectional visual obstacle detection using embedded FPGA. In: Intelligent robots and systems (IROS), 2015 IEEE/RSJ International conference on. IEEE; 2015. p. 3938-3943
Louiset T, Pamart A, Gattet E, Raharijaona T, De Luca L, Ruffier F. A shape-adjusted tridimensional reconstruction of cultural heritage artifacts using a miniature Quadrotor. Remote Sensing. 2016; 8(10):858