Open access peer-reviewed chapter

Taking Inspiration from Flying Insects to Navigate inside Buildings

By Julien R. Serres

Submitted: November 9th 2016. Reviewed: December 5th 2017. Published: March 28th 2018.

DOI: 10.5772/intechopen.72918


Abstract

These days, flying insects are seen as genuinely agile micro air vehicles fitted with smart sensors and parsimonious in their use of brain resources. They are able to navigate visually in unpredictable and GPS-denied environments. Understanding how such tiny animals work would help engineers to address various issues relating to drone miniaturization and navigation inside buildings. To turn a drone of ~1 kg into a robot, miniaturized conventional avionics can be employed; however, this results in a loss of flight autonomy. By contrast, turning a drone with a mass between ~1 g (or less) and ~500 g into a robot requires an innovative approach that takes inspiration from flying insects, both with regard to their flapping-wing propulsion system and to their sensory system, which is based mainly on motion vision, in order to avoid obstacles in three dimensions or to navigate on the basis of visual cues. This chapter provides a snapshot of the current state of the art in the field of bioinspired optic flow sensors and optic flow-based direct feedback loops applied to micro air vehicles flying inside buildings.

Keywords

  • optic flow
  • sense and avoid system
  • micro air vehicle (MAV)
  • unmanned aerial vehicle (UAV)
  • bionics
  • bioinspired robotics

1. Introduction

1.1. The biorobotic approach

Fifty years ago, Karl von Frisch observed that foraging bees fly to a distance somewhat greater than 13 km from their hive in search of food sources [1]; honeybees could not be trained to collect a reward beyond this limit, which therefore corresponds to their maximum foraging distance. The circle whose center is the hive and whose radius is this maximum foraging distance covers a huge surface area of 530 km². The average length of a honeybee is about 13 mm, its brain volume is less than 1 mm³ and contains around 960,000 neurons [2, 3], and each worker honeybee’s compound eye contains ~5500 facets, each comprising nine photosensitive cells (i.e., 99,000 photosensitive cells for the whole worker bee’s visual system) [4]. Yet it is still unknown which visual cues honeybees use during their journeys, or how these cues are used in flight to recognize their location and to navigate within a space whose dimensions are a million times larger than their bodies. Karl von Frisch was awarded the 1973 Nobel Prize in Physiology or Medicine for his scientific achievement in describing the honeybees’ “waggle dance,” which bees use to communicate both the distance and the azimuthal orientation of a profitable nectar source. The 8-shaped geometry of the waggle dance encodes the position of a nectar source: the duration of the waggle is closely correlated with the distance to the nectar source [5], and the honeybee’s odometer appears to be driven by motion vision [6], while the orientation of the “8” is highly correlated with the azimuthal orientation of the nectar source [1]. In flight, honeybees use a kind of “solar compass” based on polarized ultraviolet light [7, 8, 9], rather than a “magnetic compass,” to maintain their heading toward the nectar source or their hive. Karl von Frisch therefore concluded that bees “recruited” by this dance use the information encoded in it to guide them directly to the remote food source. To better understand the honeybee’s recruitment process, the Biorobotics Lab at the Freie Universität Berlin has developed a robotic honeybee mimicking the “waggle dance” using a biorobotic approach [10].

While the biological substrate has not yet been fully identified [12], the biorobotic approach is particularly useful in both neuroscience and robotics [13, 14, 15, 16, 17, 18, 19, 20], because the robotic model can be tested under experimental conditions similar to those of ethological experiments and can suggest new biological hypotheses (Figure 1). Through these interactions between ethological experiments, computational models, and robotics (Figure 1), uncertainties can be removed by considering the minimum requirements needed to perform a given navigational task (e.g., [21, 22, 23, 24, 25]). Insect-sized micro air vehicles (MAVs) are increasingly becoming a reality [26, 27, 28, 29, 30, 31], and in the future they will have to be fitted with sensors and flight control devices enabling them to perform all kinds of aerial maneuvers inside buildings, including takeoff, floor, ceiling and wall avoidance, tunnel-following, and landing.

Figure 1.

Description of the biorobotic approach using successive interactions between animal behavior, computational models, and robotics. Picture of a honeybee landing on a milk thistle flower from Wikimedia Commons (picture taken by Fir0002/Flagstaffotos under CC-BY license). Picture of the BeeRotor robot fitted with a twin CurvACE artificial compound eye from [11] under CC-BY license.

1.2. What is optic flow?

The optic flow perceived by an agent (an animal, a robot, or a human) depends strongly on the structure of the environment [32, 33, 34, 35, 36]. Optic flow can be defined as a vector field of the apparent angular velocities of objects, surfaces, and edges in a visual scene, caused by the relative motion between an agent and the scene (Figure 2). The optic flow ω (Eq. (1)) is the combination of two components: a translational optic flow ω_T and a rotational optic flow ω_R [35]:

Figure 2.

The LORA robot fitted with a heading-lock system travels straight ahead through a tapered corridor, perceiving a purely translational optic flow. The optic flow experienced by the robot is represented by the white arrows (i.e., angular velocity vectors) on the walls (adapted from [25] under CC-BY license).

ω = ω_T + ω_R    (1)

Flying insects such as hymenopterans stabilize their heads by compensating for any body rotations [37]. Accordingly, the robot’s visual system is assumed here to be perfectly stabilized in space, canceling all rotations due to body pitch and roll with respect to the inertial frame. Consequently, the visual system receives a purely translational optic flow (ω_R = 0). The translational optic flow (expressed in rad/s) can be defined as follows:

ω_T = (V − (V · d) d) / D_d    (2)

where d is a unit vector describing the viewing direction, V is the translational velocity vector, and D_d is the distance from the object seen by the photosensors in that direction.

In the horizontal plane, the magnitude of the translational optic flow, which describes the front-to-back motion perceived when the agent moves forward, depends only on the ratio between the relative linear speed V and the distance D_φ from the objects providing an optical contrast in the environment (the walls in Figure 2), and on the azimuth angle φ between the gaze direction and the speed vector (Eq. (3)):

ω_T = (V / D_φ) · sin φ    (3)

Translational optic flow (Eq. (3)) is particularly appropriate for short-range navigation because it depends only on the ratio between (i) the relative linear speed of the agent with respect to an object in the scene and (ii) the distance from that obstacle: this visual angular speed cue requires neither a speed measurement nor a distance measurement (Eq. (3)).
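To make Eqs. (2) and (3) concrete, here is a minimal Python sketch; the function names and numerical values are illustrative and are not taken from the cited works:

```python
import numpy as np

def translational_optic_flow_vector(V, d, D):
    """Eq. (2): apparent angular velocity (rad/s, up to sign convention) along the
    unit viewing direction d, for an agent translating at velocity V (m/s) with an
    object at distance D (m) along d."""
    d = d / np.linalg.norm(d)          # ensure d is a unit vector
    V_perp = V - np.dot(V, d) * d      # component of V orthogonal to the gaze direction
    return V_perp / D                  # angular velocity vector (rad/s)

def translational_optic_flow_magnitude(V, D_phi, phi_deg):
    """Eq. (3), magnitude form in the horizontal plane: omega_T = (V / D_phi) * sin(phi)."""
    return (V / D_phi) * np.sin(np.radians(phi_deg))

# Example: flying at 1 m/s with a wall 0.5 m away, gazing at 90 deg from the speed vector.
print(translational_optic_flow_magnitude(1.0, 0.5, 90.0))            # 2.0 rad/s (~115 deg/s)
V = np.array([1.0, 0.0, 0.0])                                        # forward flight at 1 m/s
d = np.array([0.0, 1.0, 0.0])                                        # gaze 90 deg to the side
print(np.linalg.norm(translational_optic_flow_vector(V, d, 0.5)))    # 2.0 rad/s as well
```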

1.3. Why use optic flow in robotics?

Flying robots are today capable of accurately estimating their pose in outdoor flight using conventional sensors such as the global positioning system (GPS) and an inertial measurement unit (IMU). This is very efficient for long-range navigation hundreds of meters above the ground, without any obstacles around, for example, an airplane in cruising flight. Nevertheless, the expanding set of roles for flying robots increasingly calls for them to operate close to obstacles (<1 m) in all directions in GPS-denied or cluttered environments, including buildings, warehouses, performance halls, and urban canyons. In such places robots may have difficulty receiving the GPS signal, yet they have to pick up the 3D structure of the surrounding environment to avoid obstacles and accomplish their missions. At such a short distance from obstacles (<1 m), the environment is completely unpredictable, and it is very difficult to map the entire environment in 3D at such a scale in real time. A more efficient approach consists of the robot continuously using local information to avoid obstacles while waiting for global information to pursue its mission. Most of the time, emissive proximity sensors such as ultrasonic or laser range finders, radar, or scanning light detection and ranging (LIDAR) have been considered for this purpose. Such emissive sensors can be bulky, stealth-compromising, power-hungry, and low-bandwidth, compromising their utility for tiny, insect-like robots [26, 27, 28, 29, 30, 31]. It is well known that flying insects are sensitive to optic flow [16, 19, 36, 38]; moreover, they are able to measure the optic flow of their surroundings irrespective of spatial texture and contrast [39, 40], and some of their neurons respond monotonically to optic flow [36, 41]. Consequently, there are considerable benefits, in terms of both pixel count and computational resources, in designing guidance systems for micro flying robots around passive sensing, such as the motion vision of flying insects.

1.4. The chicken-and-egg problem of translational optic flow

A given magnitude of translational optic flow poses a kind of chicken-and-egg problem (Eq. (3)), because an infinite number of (speed; distance) pairs lead to the same speed/distance ratio, in other words to the same optic flow magnitude, coming from a surface (Figure 3a). For instance, an optic flow magnitude of 2 rad/s (i.e., 115°/s) can be generated by a robot flying at 1 m/s at 0.5 m above the floor, by one flying at 2 m/s at 1 m above the floor, and so on (Figure 3a). To get around this chicken-and-egg problem, roboticists have traditionally assumed that robots must measure their own speed, using a wheel tachometer [21, 42], a GPS unit [43], or a custom-built Pitot tube [44], in order to assess the distance from obstacles and then avoid them, or, conversely, must measure the distance from obstacles by means of an ultrasonic distance sensor [45] in order to assess their own ground speed. However, flying insects cannot directly measure their true ground speed (only their airspeed [46] or their airspeed rate [38]), nor their distance from obstacles by means of their binocular vision, which is too limited [47]. Flying insects do not actually solve the optic flow chicken-and-egg problem to cross tunnels; instead, they use visuomotor feedback loops directly based on optic flow, perceiving it over a wide field of view with compound eyes that see both the floor and the ceiling (Figure 3b) as well as the walls [48, 49].

Figure 3.

(a) The chicken-and-egg problem of translational optic flow: an infinite number of (speed; distance) pairs lead to the same speed/distance ratio ω_ground (the same angular velocity, i.e., optic flow magnitude). In this illustration, an optic flow magnitude of ω_ground = 2 rad/s (or 115°/s) is coming from the ground. (b) To get around the chicken-and-egg problem shown in (a), the flying agent has to pick up optic flow over a wide field of view, or in different parts of the field of view, here by jointly measuring the optic flow coming from the ceiling (ω_ceiling) and from the floor (ω_floor).
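The ambiguity of Figure 3a, and the way a second viewing direction removes it (Figure 3b), can be illustrated numerically; the tunnel height and optic flow values below are illustrative only, and flying insects are not assumed to perform this explicit computation:

```python
# Several (speed, distance) pairs produce the same ground optic flow of 2 rad/s,
# so a single translational optic flow measurement cannot disambiguate them.
pairs = [(1.0, 0.5), (2.0, 1.0), (4.0, 2.0)]   # (speed in m/s, distance to the ground in m)
for v, h in pairs:
    print(f"V = {v} m/s, h = {h} m  ->  omega_ground = {v / h} rad/s")

# If, as in Figure 3b, the tunnel height H is known and both the floor and the
# ceiling optic flows are measured, speed and position could in principle be
# disentangled (illustrative values; insects regulate optic flow directly instead):
H = 1.5                                # tunnel height (m), assumed value
omega_floor, omega_ceiling = 4.0, 2.0  # rad/s, assumed measurements
h = H * omega_ceiling / (omega_floor + omega_ceiling)   # distance to the floor
V = omega_floor * h                                     # forward speed
print(f"h = {h:.2f} m, V = {V:.2f} m/s")                # 0.50 m, 2.00 m/s
```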

2. Bioinspired optic flow sensors

The criteria for evaluating the potential of optic flow sensors for MAV applications include:

  • Visual sensors must be able to deal with the large dynamic range of natural irradiance levels, which can cover up to 9 decades during the course of the day.

  • Range of optic flow covered (i.e., the angular speed magnitude), defined by the number of optic flow decades. There is now evidence that flying insects are able to measure the optic flow over a range of more than 1.4 decades [41, 50].

  • Accuracy and precision, defined by systematic errors and by coefficients of variation CV < 0.5.

  • Output refresh rate, defined by the instantaneous output frequency (>10 Hz).

2.1. Taking inspiration from the compound eye

The structure of a compound eye is based on a large number of repeating units called ommatidia (Figure 4). Each ommatidium is composed of a facet (a hexagonal lens, ~30 to ~60 μm in diameter) which focuses the incoming light onto the photosensitive cells [19, 51]. The optical axes of adjacent ommatidia are separated by an interommatidial angle Δϕ, which defines the spatial acuity of the visual system [52]. In any compound eye, the interommatidial angles Δϕ are smaller in the frontal part of the visual field than in the lateral, dorsal, or ventral parts [52, 53, 54]. In bioinspired robots, a sine-law gradient (Eq. (4)) is generally used in the horizontal plane for artificial compound eye design [21]:

Δϕ(φ) = Δϕ(90°) · sin φ    (4)
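A minimal sketch of how Eq. (4) could be used to lay out the viewing axes of an artificial compound eye in the horizontal plane; the 4° lateral interommatidial angle and the angular limits are arbitrary design values, not taken from the chapter:

```python
import numpy as np

def interommatidial_angle(phi_deg, delta_phi_90_deg=4.0):
    """Sine-law gradient of Eq. (4): the interommatidial angle at azimuth phi equals
    the lateral (90 deg) value scaled by sin(phi). delta_phi_90_deg is an assumed
    design value."""
    return delta_phi_90_deg * np.sin(np.radians(phi_deg))

def place_optical_axes(delta_phi_90_deg=4.0, phi_start_deg=5.0, phi_end_deg=90.0):
    """Place viewing axes from near-frontal to lateral azimuths, stepping by the
    local interommatidial angle, so that frontal axes are packed more densely."""
    axes = [phi_start_deg]
    while axes[-1] < phi_end_deg:
        axes.append(axes[-1] + interommatidial_angle(axes[-1], delta_phi_90_deg))
    return axes

print([round(a, 1) for a in place_optical_axes()])   # dense near the front, sparse laterally
```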

Figure 4.

Head of a blue bottle fly Calliphora vomitoria, Austins Ferry, Tasmania, Australia (2009). Picture: J.J. Harrison under CC-BY license. There are 6000 ommatidia per compound eye in the blowfly Calliphora erythrocephala. The two compound eyes cover 85% of the visual field, representing a full panoramic view except for the blind spot caused by the fly’s own body. The minimum interommatidial angle of the Calliphora erythrocephala visual system is Δϕ = 1.1° in the frontal part, and up to 2° in the ventral part [52].

Once this spatial sampling has been carried out, the narrow aperture of each ommatidium performs a kind of automatic low-pass filtering of the visual signals reaching the photosensitive cells. Diffraction of the light through the lens leads to a Gaussian angular sensitivity [52, 55, 56], which acts as a blurring effect. This angular sensitivity is described by its width at half height, called the acceptance angle Δρ (Eq. (5)):

Δρ = λ / D_l + D_r / f    (5)

where D_l is the lens diameter, λ is the wavelength, D_r is the rhabdom diameter (i.e., the pixel diameter in an artificial design), and f is the ommatidium focal length. The acceptance angle Δρ and the interommatidial angle are roughly equal (Δρ ≈ Δϕ) in diurnal insects [52] for each ommatidium, which provides continuity in the visual signals (low aliasing) while avoiding oversampling of the environment. Moreover, the photosensitive cells’ dynamics can reach temporal frequencies of up to 100 Hz in dipterans [57], well beyond human vision (central vision up to ~50 Hz).

In artificial designs, most of the time D_l ≫ λ, so light diffraction is not considered; thus Δϕ = d/f, with d the pixel pitch and f the focal length, and Δρ ≈ Δϕ can be obtained by slightly defocusing the lens (see Section 2.2). Two artificial compound eyes have already been built, one with inorganic semiconductor photoreceptors comprising 630 pixels with Δϕ = Δρ = 4.2° [58] and one with organic photodiodes comprising 256 pixels with Δϕ = 11° and Δρ = 9.7° [59].
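The following sketch simply evaluates Eq. (5) and the artificial-design approximation Δϕ = d/f, assuming the additive form of Eq. (5) given above; all numerical values are illustrative:

```python
import math

def acceptance_angle_rad(wavelength_m, lens_diameter_m, rhabdom_diameter_m, focal_length_m):
    """Acceptance angle of Eq. (5): delta_rho = lambda / D_l + D_r / f, in radians."""
    return wavelength_m / lens_diameter_m + rhabdom_diameter_m / focal_length_m

def interpixel_angle_rad(pixel_pitch_m, focal_length_m):
    """Artificial-design approximation (D_l >> lambda): delta_phi = d / f, in radians."""
    return pixel_pitch_m / focal_length_m

# Illustrative values only (not from the chapter): a 30 um facet, 2 um rhabdom,
# 60 um focal length, and green light at 0.5 um.
drho = acceptance_angle_rad(0.5e-6, 30e-6, 2e-6, 60e-6)
dphi = interpixel_angle_rad(2e-6, 60e-6)
print(math.degrees(drho), math.degrees(dphi))   # ~2.9 deg and ~1.9 deg
```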

2.2. Taking inspiration from biological signal processing

Thirty years ago, a bioinspired local motion sensor (LMS) was designed [60, 61], whose signal processing scheme was based on a time-of-travel algorithm directly inspired by the fly’s motion-sensitive neurons [19, 51, 62]. A time-of-travel algorithm directly measures the delay Δt taken by a contrast edge to travel between the visual axes of two adjacent pixels separated by an inter-pixel angle Δϕ. The optic flow is therefore naturally an inverse function of this delay Δt (Eq. (6)), and the measurable optic flow range depends jointly on the choice of the inter-pixel angle Δϕ and on the timespan, which was reported to range from 10 to 230 ms in the fly’s motion-sensitive neurons [19, 51, 62] and was adopted in the LMS signal processing as well:

ω_meas = Δϕ / Δt    (6)

The signal processing scheme of the bioinspired LMS is depicted in Figure 5a and can be broken down into the following eight steps (a toy digital sketch of these steps is given after the list):

Figure 5.

(a) Original analog local motion sensor (LMS). Only the thresholding stage of the ON contrast pathway is represented here, but both ON and OFF contrast pathways were implemented in the original full analog LMS in [61] (redrawn from [19]). (b) Analog-digital implementation of a local motion sensor (redrawn from [25, 63]).

  1. Spatial sampling realized by the photoreceptors’ optical axes, separated by an angle Δϕ [52, 55, 56].

  2. Spatial low-pass filtering performed by the Gaussian angular sensitivity function of the defocused lens (corresponding to a blurring effect) and characterized by the angular width at half height, called the acceptance angle Δρ. Unlike natural compound eyes, in which spatial low-pass filtering results automatically from diffraction [52], here the cutoff spatial frequency depends on the amount of defocusing [60, 61].

  3. Phototransduction: a logarithmic amplifier was used originally (five decades of lighting) [60, 61], a linear amplifier was used later (three decades of lighting) [64], and more recently auto-adaptive photodetectors were designed as artificial retinas [58, 65], consisting of a logarithmic amplifier associated with a high-gain negative feedback loop (seven decades of lighting). Results with auto-adaptive pixels are reminiscent of analogous experiments carried out on single vertebrate [66] and invertebrate [67] photodetectors.

  4. Temporal high-pass filtering of the photoreceptor signals in each channel, not only to cancel the DC component but also to accentuate the transient signals created by contrasting edges (a derivative effect).

  5. Temporal low-pass filtering to reduce noise (e.g., 100 Hz interference originating from artificial lighting). Such temporal band-pass filtering has been identified in the two large monopolar cells L1 and L2 inside the fly’s lamina [68, 69, 70]. However, two discrepancies can be noted in the technological design: the high-pass and low-pass filters are switched with respect to the biological model because of electronic constraints, and the low-pass filter is of fourth order in the bioinspired LMS rather than of third order as in the biological model.

  6. Hysteresis thresholding performed on the signals to detect contrast transitions and to discriminate ON and OFF transitions. There is now strong evidence that the fly’s motion pathway is fed by separate ON and OFF pathways [51, 71, 72].

  7. Pulse generation on the first channel through a low-pass filter, generating a long-lived decaying signal (a first-order unit impulse response) approximating a mathematical inverse function [60, 61].

  8. Pulse generation on the second channel, sampling the long-lived decaying signal coming from the first channel through a diode minimum detector circuit [60, 61]. The LMS output is therefore a pulse-shaped signal whose magnitude represents the local optic flow magnitude.
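As announced above, here is a toy digital counterpart of steps 3 to 6 combined with Eq. (6); it is not the analog circuit of Figure 5a, and the filter cutoffs, threshold, and stimulus are arbitrary illustrative choices:

```python
import numpy as np

def first_order_filter(x, fs, fc, highpass=False):
    """Discrete first-order low-pass (or high-pass) filter with cutoff fc (Hz)."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2 * np.pi * fc))
    y = np.zeros_like(x)
    lp = x[0]
    for i in range(1, len(x)):
        lp = lp + alpha * (x[i] - lp)
        y[i] = (x[i] - lp) if highpass else lp
    return y

def local_motion_sensor(ph1, ph2, fs, delta_phi_deg, threshold=0.05):
    """Toy digital counterpart of the time-of-travel LMS: log phototransduction,
    temporal band-pass filtering, ON-threshold crossing on each channel, then
    omega = delta_phi / delta_t between the two crossings (Eq. (6))."""
    filtered = []
    for ph in (ph1, ph2):
        s = np.log(ph + 1e-6)                                     # step 3: log amplifier
        s = first_order_filter(s, fs, fc=20.0, highpass=True)     # step 4: high-pass
        s = first_order_filter(s, fs, fc=100.0)                   # step 5: low-pass
        filtered.append(s)
    t1 = np.argmax(filtered[0] > threshold)                       # step 6: ON transition, channel 1
    t2 = np.argmax(filtered[1] > threshold)                       # step 6: ON transition, channel 2
    delta_t = (t2 - t1) / fs
    return np.radians(delta_phi_deg) / delta_t if delta_t > 0 else None

# A bright edge sweeping across two photoreceptors separated by 4 deg,
# crossing them 20 ms apart (i.e., a true optic flow of ~3.5 rad/s).
fs, n = 2000, 400
t = np.arange(n) / fs
ph1 = 1.0 + 4.0 * (t > 0.050)
ph2 = 1.0 + 4.0 * (t > 0.070)
print(local_motion_sensor(ph1, ph2, fs, delta_phi_deg=4.0))   # ~3.5 rad/s
```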

Accordingly, the optic flow measurement ω_meas (here in volts) results from sampling the long-lived exponentially decaying function (with a time constant τ), varies inversely with Δt, and hence increases with the true optic flow ω according to the following equations:

ω_meas(ω) = V_cc · e^(−ω_0/ω)    (7)
ω_0 = Δϕ / τ    (8)

where V_cc is the power supply voltage. This original analog functional scheme can measure the optic flow in a range from ω_0/4 to 4·ω_0, corresponding to 1.2 decades of optic flow measurement.
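A short numerical check of Eqs. (7) and (8), with illustrative values of V_cc, Δϕ, and τ; it also verifies that a range from ω_0/4 to 4·ω_0 indeed spans log10(16) ≈ 1.2 decades:

```python
import math

V_cc = 5.0                        # supply voltage (V), assumed value
delta_phi = math.radians(4.0)     # inter-pixel angle (rad), assumed value
tau = 0.02                        # decay time constant (s), assumed value
omega_0 = delta_phi / tau         # Eq. (8)

def lms_output(omega):
    """Eq. (7): sampled value of the decaying exponential, in volts."""
    return V_cc * math.exp(-omega_0 / omega)

print(math.log10((4 * omega_0) / (omega_0 / 4)))       # log10(16) = 1.204 decades
print(lms_output(omega_0), lms_output(4 * omega_0))    # ~1.84 V and ~3.89 V
```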

This bioinspired LMS (Figure 5) features three major benefits: firstly, its output responds monotonically to the magnitude of the optic flow and therefore acts as a local optic flow sensor, which is vital to obtain unequivocal information about the distance from surrounding obstacles; secondly, the refresh rate of the LMS output is asynchronous and relatively high (>10 Hz depending on lighting conditions), which is suitable for indoor navigation; and thirdly, the thresholding stage makes the LMS output virtually independent of texture and contrast.

The bioinspired LMS scheme based on a time-of-travel algorithm (Figure 5) is not a “correlator scheme” or “Hassenstein-Reichardt detector” (HR detector) [73, 74], whose output depends on texture and contrast. Another variant of the bioinspired LMS based on a time-of-travel algorithm, known as the “facilitate-and-sample” algorithm, has been proposed over the past 20 years [75, 76].

2.3. Local motion sensors and artificial retinas

Since the original analog design depicted in Figure 5a [60, 61], various versions of the bioinspired LMS have been built, including the analog-digital implementations depicted in Figure 5b: an FPAA implementation and an 8-bit microcontroller implementation running at 1 kHz [77], an FPGA implementation running at 2.5 or 5 kHz [78], an LTCC implementation running at 1 kHz [25, 79, 80], and a 16-bit microcontroller implementation running at 2 kHz [64]. These versions were built in order to reduce size, mass, or power consumption while benefiting from the available computational resources to increase the number of LMSs.

An original LMS version was developed using an iC-Haus™ LSC 12-channel photosensor array forming six pixels and five LMSs [63]. The outputs of the five LMSs were merged by means of a median filter; both the precision and the accuracy of the optic flow measurement were thereby greatly improved [63]. Moreover, to increase by at least four times the number of LMSs that can be embedded in a 16-bit microcontroller, a linear interpolation applied to the photosensor signals was used to reduce the sampling rate and thus save computational resources [81]. The best trade-off between computational load and accuracy was found at a sampling rate of 200 Hz [81].
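The median-based merging of several LMS outputs can be sketched in a few lines (a generic median fusion, not the exact implementation of [63]):

```python
import statistics

def fuse_lms_outputs(omegas):
    """Merge the outputs of several local motion sensors with a median filter:
    the median rejects outliers and improves both precision and accuracy."""
    valid = [w for w in omegas if w is not None]   # discard sensors without a fresh measurement
    return statistics.median(valid) if valid else None

# Five LMS outputs (rad/s), one of them spurious:
print(fuse_lms_outputs([2.1, 1.9, 2.0, 6.5, 2.2]))   # 2.1
```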

In the framework of a European project called CurvACE (2009–2013; www.curvace.org), which aimed at mimicking the compound eye of Drosophila melanogaster, the first artificial compound eye was built [58]. This functional prototype, with its 630 pixels (forming 630 artificial ommatidia), offers a wide field of view (180° × 60°) over a significant range of lighting conditions and weighs ~2 g [58] (see the twin CurvACE version in Figure 8a).

A recent optic flow sensor based on the M2APix retina (M2APix stands for Michaelis-Menten auto-adaptive pixel [65]) can auto-adapt over a 7-decade lighting range and responds appropriately to step changes of up to ±3 decades [82]. The pixels do not saturate, thanks to the normalization process performed by the very large-scale integration (VLSI) transistors [65]; this is due to the intrinsic properties of the Michaelis-Menten equation [66]. A comparison of the characteristics of auto-adaptive Michaelis-Menten and Delbrück pixels [83] under identical lighting conditions (i.e., integrated in the same retina) demonstrated the better performance of the Michaelis-Menten pixels in terms of dynamic sensitivity and minimum detectable contrast [65].
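The normalization idea behind a Michaelis-Menten auto-adaptive pixel can be sketched as follows; this is a schematic model of the equation's intrinsic property, not the M2APix VLSI circuit itself, and all values are illustrative:

```python
def michaelis_menten_response(I, I_adapt, V_max=1.0):
    """Schematic Michaelis-Menten normalization: the response saturates at V_max and is
    half-maximal when the light intensity I equals the adaptation level I_adapt, so a
    pixel whose adaptation level tracks the mean lighting keeps responding to local
    contrast over many decades of irradiance."""
    return V_max * I / (I + I_adapt)

# The same 2:1 local contrast yields the same response whether the scene is dim or
# four decades brighter, provided the adaptation level follows the mean level.
for mean_intensity in (1e-2, 1e2):
    print(michaelis_menten_response(2 * mean_intensity, mean_intensity))   # 0.666...
```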

Different kinds of algorithms have been developed to compute local motion, leading to different hardware implementations, including templates, time of travel, feature tracking, edge counting, edge correlation, and the Hassenstein-Reichardt correlator [30, 84, 85], as well as different software implementations [85]. However, analog VLSI motion sensors provide significant reductions in power consumption and payload while increasing bandwidth, improving both the precision and the accuracy of optic flow measurement for MAV applications.

3. Optic flow-based navigation inside buildings

3.1. Obstacle avoidance in the horizontal plane

3.1.1. Keeping the bilateral optic flow constant: a speed control system

The idea of a speed control system based on optic flow was first developed by Coombs and Roberts [86]. Their Bee-Bot adjusted its forward speed to keep the optic flow within a measurable range, using a bilateral optic flow criterion to control the robot’s speed. The bilateral optic flow criterion (the sum of the left and right optic flows) used as a feedback signal to directly control the robot’s speed was first introduced by Santos-Victor and colleagues [87] onboard the Robee robot. Qualitatively, the robot’s speed was scaled by the level of visual clutter in the environment. Since then, the bilateral optic flow criterion as a feedback signal directly controlling the robot’s forward speed has been tested on many robots in both straight and tapered corridors [25, 87, 88, 89, 90, 91, 92, 93, 94]. The desired bilateral optic flow was ~12°/s for the Bee-Bot robot [87], ~19°/s and ~46°/s in [88], ~21°/s in [91], and 190°/s or 250°/s in [25] and [94]. The higher the desired bilateral optic flow, the faster the robot advances and the closer to the walls it moves.
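A proportional sketch of such a bilateral optic flow speed controller is given below; the gain, time step, and speed limits are illustrative and do not correspond to any specific robot cited above:

```python
def speed_control_step(v, omega_left, omega_right, omega_set_fwd,
                       k_p=0.5, dt=0.01, v_min=0.0, v_max=3.0):
    """One step of a bilateral optic flow speed controller: the forward speed is
    adjusted so that the sum of the left and right optic flows (rad/s) tracks the
    forward set point, so the speed scales with the visual clutter of the scene."""
    error = omega_set_fwd - (omega_left + omega_right)   # rad/s
    v = v + k_p * error * dt                             # speed up if the optic flow is too low
    return min(max(v, v_min), v_max)

# In a wide corridor the bilateral optic flow is low, so the speed ramps up;
# close to the walls the optic flow sum exceeds the set point and the speed drops.
v = 1.0
print(speed_control_step(v, 0.8, 0.8, omega_set_fwd=2.3))   # > 1.0
print(speed_control_step(v, 2.0, 2.0, omega_set_fwd=2.3))   # < 1.0
```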

3.1.2. Dual optic flow regulation

The first optic flow regulator was originally developed for ground avoidance when following terrain [25, 95]. An optic flow set point is compared with a measured optic flow to provide an error signal, which feeds a regulator controlling a force orthogonal to the direction of motion. The combination of a unilateral optic flow regulator for controlling lateral positioning on either side and a bilateral optic flow regulator for controlling the forward speed has been called a dual optic flow regulator [96]. The dual optic flow regulator concept was originally developed for aerial vehicles endowed with natural roll and pitch stabilization abilities, in which planar flight control systems can be conveniently developed [96], in order to mimic honeybees’ abilities in the horizontal plane [16, 39, 97, 98] and to avoid the weaknesses of the optic flow balance strategy in the presence of lateral openings (see review [99]). The dual optic flow regulator was implemented for the first time onboard an 878-gram fully actuated hovercraft called LORA, which stands for lateral optic flow regulator autopilot [25, 94] (Figure 7a). The dual optic flow regulator is based on:

  1. A unilateral optic flow regulator (Figure 6c) that adjusts the hovercraft’s lateral thrust so as to keep the higher of the two perceived lateral optic flows (left or right) equal to a sideways optic flow set point (denoted ω_setSide). The outcome is that the distance to the nearest wall becomes proportional to the hovercraft’s forward speed V_f, as determined by (2) below.

  2. A bilateral optic flow regulator (Figure 6b) that adjusts the hovercraft’s forward thrust so as to keep the sum of the two lateral optic flows (right plus left) equal to a forward optic flow set point (denoted ω_setFwd).

Figure 6.

Functional diagram of the dual optic flow regulator (OF stands for optic flow, FOV for field of view, and LMS for local motion sensor). (a) Heading-lock system canceling any robot rotation. (b) Speed control system based on bilateral optic flow regulation. (c) Lateral obstacle avoidance system based on unilateral optic flow regulation (from [25] under CC-BY license).

In the steady state, for a given corridor width D, the final operating point of the dual optic flow regulator is:

V_f = [ω_setSide (ω_setFwd − ω_setSide) / ω_setFwd] · D    (9)
y = [(ω_setFwd − ω_setSide) / ω_setFwd] · D    (10)

As a consequence, the robot’s speed is automatically and asymptotically scaled by the corridor width, or even by the clutter of the environment (Figure 7b). By increasing the forward optic flow set point ω_setFwd at a given sideways optic flow set point ω_setSide, one can change the robot’s forward speed. By reducing the sideways optic flow set point at a given forward optic flow set point, one can induce a graceful shift from “wall-following behavior” to “centering behavior” [96]. “Centering behavior” occurs as a particular case of “wall-following behavior” whenever ω_setSide ≤ ω_setFwd/2 [96]. In addition, the dual optic flow regulator requires a third feedback loop to stabilize the robot around its vertical axis, which makes the robot experience purely translational optic flow. The robot’s heading is maintained by a heading-lock system (based on a micro-compass enhanced by a micro-gyrometer) controlling the rear thrusters differentially in closed-loop mode (Figure 6a).
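The steady-state operating point of Eqs. (9) and (10) can be checked with a simplified point-mass simulation of the dual optic flow regulator; the gains, dynamics, and the 1 m corridor width are assumptions chosen to reproduce the operating point quoted in Section 3.3, not the actual LORA controller:

```python
import math

def simulate_dual_of_regulator(D=1.0, omega_set_side=math.radians(90),
                               omega_set_fwd=math.radians(130),
                               k_fwd=0.4, k_side=0.2, dt=0.005, T=60.0):
    """Simplified simulation of a dual optic flow regulator in a straight corridor of
    width D. y is the distance to the right wall and v the forward speed; the lateral
    optic flows are v/y and v/(D - y). Gains and initial conditions are illustrative."""
    y, v = 0.45 * D, 0.2      # start slightly closer to the right wall, at low speed
    for _ in range(int(T / dt)):
        of_right, of_left = v / y, v / (D - y)
        # Bilateral regulator: the forward speed tracks the sum of the lateral OFs.
        v += k_fwd * (omega_set_fwd - (of_right + of_left)) * dt
        # Unilateral regulator: the larger lateral OF is held at its set point by
        # drifting away from (or toward) the nearer wall.
        if of_right >= of_left:
            y += k_side * (of_right - omega_set_side) * dt
        else:
            y -= k_side * (of_left - omega_set_side) * dt
        y = min(max(y, 0.05 * D), 0.95 * D)
        v = max(v, 0.05)
    return v, min(y, D - y)

v, y_near = simulate_dual_of_regulator()
print(f"simulated steady state: V_f = {v:.2f} m/s, y = {y_near:.2f} m")

# Operating point predicted by Eqs. (9) and (10) with the set points used for the
# LORA robot in Figure 9 (90 deg/s and 130 deg/s) and an assumed 1 m wide corridor:
ws, wf, D = math.radians(90), math.radians(130), 1.0
print(f"predicted (Eqs. 9-10):  V_f = {ws * (wf - ws) / wf * D:.2f} m/s, "
      f"y = {(wf - ws) / wf * D:.2f} m")        # ~0.48 m/s and ~0.31 m
```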

Figure 7.

(a) A fully autonomous sighted hovercraft equipped with a minimalistic (8-pixel) compound eye. (b) Automatic wall-following behavior as a function of the initial ordinate, in both tapered and bent corridors (from [25] under CC-BY license).

3.1.3. Bioinspired visuomotor convergence

Humbert put forward the bioinspired visuomotor convergence concept during his PhD (thesis [100]; obstacle avoidance and speed control [101, 102]; terrain-following [103]; corridor-following [92, 93, 104]; urban canyon-following [105]) to control both land-based and flying robots solely on the basis of optic flow. This theory is based on the spatial decompositions performed by neurons in the insect visuomotor system [106, 107, 108], which extract relative velocity and proximity information from patterns of optic flow. The advantages of bioinspired visuomotor convergence include:

  • Significant improvements in the signal-to-noise ratio of the relative velocity and proximity information, since one feedback signal is formed from many estimates of optic flow [105].

  • Through proper choice of weighting functions, the rotational and translational components can be separated automatically and do not require any derotation procedure [93].

In contrast to the “optic flow balance strategy,” which frequently fails in one-sided corridors or corridors with openings in a wall (see review [99]), and to the switching-mode strategies employed in such environments [87, 88], the bioinspired visuomotor convergence approach in [109, 110] retains the strategy of balancing lateral optic flows while leveraging the stability and performance guarantees of the closed loop to achieve stable quadrotor flight in corridor-like environments that include large openings in a wall or additional structures such as small poles.

3.1.4. Image expansion to avoid frontal obstacles

The “optic flow balance strategy” was originally suggested to explain the centering behavior observed along a straight corridor (see review [99]). However, it turned out that this strategy, when used alone, does not allow an agent to avoid frontal obstacles, i.e., to follow a corridor that includes L-junctions or T-junctions, without using the frontal view field [111]. The frontal image expansion can therefore be used to estimate the time to contact [112] by means of the optic flow divergence [113, 114] and to trigger a prespecified rotation angle around the robot’s vertical axis. A simulated small helicopter could thus trigger U-turns when encountering frontal obstacles [115], a wheeled robot could trigger a rotation of 90° [111] or 110° [90] in front of an obstacle, or the robot could stop and rotate on the spot until the frontal range once again became large enough [88]. Other robots use a series of open-loop commands, called body saccades, to avoid frontal obstacles. The saccade duration has either been set to a constant prespecified value [116, 117], determined according to a Gaussian distribution [118], or modulated using optic flow [119, 120, 121, 122, 123]. Recently, an optic flow-based algorithm has been developed to compute a quantified saccade angle; it has allowed a simulated fully actuated hovercraft to negotiate tight bends by triggering body saccades on the basis of a time-to-contact criterion, and to realign its trajectory parallel to the wall along a corridor including sharp turns [124].
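A minimal sketch of a time-to-contact-based saccade trigger, using the classic relative-expansion estimate as a generic stand-in for the optic flow divergence criteria cited above; the threshold value is illustrative:

```python
def time_to_contact(theta, theta_dot):
    """Estimate the time to contact (s) from the angular size theta (rad) of a frontal
    object and its rate of expansion theta_dot (rad/s): tau ~ theta / theta_dot."""
    return float('inf') if theta_dot <= 0 else theta / theta_dot

def saccade_needed(theta, theta_dot, tau_threshold=1.0):
    """Trigger a body saccade (an open-loop turn) when the estimated time to contact
    with a frontal obstacle falls below tau_threshold seconds (assumed threshold)."""
    return time_to_contact(theta, theta_dot) < tau_threshold

# A wall subtending 0.4 rad and expanding at 0.5 rad/s gives tau = 0.8 s < 1 s,
# so a saccade would be triggered.
print(saccade_needed(0.4, 0.5))   # True
```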

3.2. Obstacle avoidance in the vertical plane

Ventral optic flow can be used by aerial robots [125, 126, 127] to achieve different maneuvers: takeoff, terrain-following, nap-of-the-earth flight, landing, and deck landing, in the same way as honeybees do [5, 97]. Ventral optic flow was also employed for ground avoidance onboard MAVs by maintaining the ventral optic flow at a given set point using a ventral optic flow regulator [95]. Another control algorithm, based on a “bang-bang” method, was used onboard MAVs to control their lift: if a certain threshold of ventral optic flow was exceeded, the MAV elevator angle was moved to a preset deflection [120, 121, 128].
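The “bang-bang” strategy described above can be sketched in a few lines; the threshold and deflection values are illustrative assumptions:

```python
def bang_bang_elevator(omega_ventral, omega_threshold, preset_deflection_deg=10.0):
    """Sketch of a bang-bang ventral optic flow strategy: if the ventral optic flow
    (rad/s) exceeds the threshold (the ground is getting too close for the current
    speed), the elevator is set to a preset deflection so as to gain altitude;
    otherwise it is left at neutral."""
    return preset_deflection_deg if omega_ventral > omega_threshold else 0.0

print(bang_bang_elevator(omega_ventral=3.0, omega_threshold=2.0))   # 10.0 (climb)
print(bang_bang_elevator(omega_ventral=1.0, omega_threshold=2.0))   # 0.0  (neutral)
```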

Recently, the dual optic flow regulation principle was applied in the vertical plane and tested onboard an 80-gram rotorcraft called BeeRotor [11]. As a third control feedback loop, an active eye-reorientation system based on a quasi-panoramic eye constantly realigned the robot’s gaze parallel to the nearest surface being followed: BeeRotor demonstrated its abilities by achieving automatic terrain-following despite steep relief (Figure 8), without needing an inertial reference frame to sense verticality, just as flying insects do not. Indeed, behavioral experiments performed 35 years ago on flying insects in zero gravity [129], as well as recent behavioral experiments with hymenopterans [37] and dipterans [130], have demonstrated that flying insects do not sense verticality in flight by means of gravity perception as vertebrates do. The eye reorientation therefore enables BeeRotor to detect, at an earlier stage, the increase in optic flow due to steep relief, and thus to avoid the obstacle properly [11]. Additionally, in the framework of the “Green Brain” project managed by James Marshall, a dual optic flow regulator for both speed control and lateral positioning and a ventral optic flow regulator for altitude control were implemented onboard a small quadrotor [131].

Figure 8.

(a) The BeeRotor I robot equipped with a twin CurvACE sensor [58]. (b) Experimental setup. (c) Trajectories of the BeeRotor II robot following the ground thanks to a dual optic flow regulator applied in the vertical plane, with a fixed eye (blue trajectory) or a decoupled eye (red trajectory) oriented parallel to the ground (adapted from [11] under CC-BY license).

3.3. Obstacle avoidance inside a maze

In silico maze experiments have mainly been carried out in urban-like environments, with a flying robot traveling at a relatively high speed V_f through relatively wide urban canyons ([115] with V_f = 1 m/s and a minimum urban canyon width D_min = 4 m; [43] with V_f = 13 m/s and D_min = 40 m; [44] with V_f = 14 m/s and D_min = 50 m; [132] with V_f = 2 m/s and D_min = 10 m), hence generating an optic flow from the walls lower than 45°/s. Navigating inside a building, in contrast, requires measuring the optic flow not only with a high refresh rate (>10 Hz) but also at the high magnitudes shown in Figure 3 (i.e., >100°/s). To achieve this, the LORA robot is driven by a body saccadic system (see Section 3.1) and a dual optic flow regulator-based intersaccadic system (see Section 3.1), as described in detail in [124]. In Figure 9, the optic flow set points were set at ω_setSide = 90°/s and ω_setFwd = 130°/s; the LORA robot explores the building at V_f = 0.33 ± 0.21 m/s and adopts two possible routes along straight sections (following either the right wall or the left wall), according to Eqs. (9) and (10), leading to a steady-state operating point of V_f = 0.48 m/s and y = 0.31 m. Apart from three lateral contacts with the walls (red crosses in Figure 9), occurring where there is either a salient corner (90°) or a reentrant corner (270°), the LORA robot is able to explore the building for ~23 minutes even though it is fitted with a minimalistic visual system (16 pixels forming eight LMSs).

Figure 9.

Trajectory of the LORA robot fitted with eight local motion sensors looking at ±15°, ±30°, ±45°, and ±90°, together with a heading-lock system based on both a rate gyro and a micro-compass. The LORA robot is driven by a body saccadic system (merging the two incidence angles computed by the ±30° and ±45° LMSs with Eq. (14) in [124]) and a dual optic flow regulator-based intersaccadic system, as described in detail in [124]. This maze could represent corridors inside a building. The LORA robot starts at the initial position (X0 = 0.5 m, Y0 = 4.5 m) with the initial heading Ψ0 = −5° and progresses through corridors including challenging junctions for ~23 minutes, with only three lateral contacts with the walls (red crosses).

4. Drones in the field of architecture

Over the last decade, camera-equipped unmanned aerial vehicles (UAVs) have been increasingly used in the field of architecture for visually monitoring construction, building operation, and the failure of superstructures (skyscrapers, stadiums, chimneys, nuclear facilities, bridges, etc.) [133]. UAVs can frequently survey construction sites, monitor work in progress, create documents for safety, and inspect existing structures, particularly in hard-to-reach areas. UAVs are used not only for 3D modeling for building reconstruction but also for photogrammetric applications [134]. These UAVs evaluate their pose (i.e., their position and orientation) in outdoor flight with a GPS unit delivering an output signal at about 7 Hz (see Section 1.3), which forces drones to work away from structures and is a drawback when taking high-resolution pictures. In indoor flight, drones work in GPS-denied environments. Consequently, active proximity sensors such as ultrasonic sensors, laser range finders, radar, or scanning light detection and ranging (LIDAR) have most of the time been considered for this purpose [135]. However, such active sensors are bulky and power-hungry, with low bandwidth and a low output refresh rate (2–5 Hz), compromising their utility for fast UAV maneuvers close to obstacles or walls. Recently, a lightweight sensor composed of four stereo heads and an inertial measurement unit (IMU) was developed to perform FPGA-based dense reconstruction for obstacle detection in all directions at a 7 Hz output refresh rate [136]. In another application, a drone with a mass of less than 500 g was developed for photogrammetric 3D reconstruction of a cultural heritage object [137], but it required a motion capture system to accurately determine the robot’s pose at a frequency of 500 Hz. A better understanding of how flying insects work will therefore help future drones to operate inside buildings, where obstacles are very close.

5. Conclusion

The expanding set of roles for flying robots increasingly calls for them to operate at relatively high speed close to obstacles (<1 m) in all directions in GPS-denied or cluttered environments, including buildings, warehouses, performance halls, and urban canyons. Tiny flying robots cannot rely on GPS signals in such complex environments, yet they have to pick up the 3D structure of the surrounding environment in real time to avoid collisions. At short distances from obstacles (<1 m), the environment is completely unpredictable; emissive proximity sensors have most of the time been considered for this purpose, but optic flow sensing is now truly becoming part of MAV avionics in several companies (e.g., senseFly in Switzerland, Qualcomm and Centeye in the USA), and it can also be used as a direct feedback signal for MAV automatic guidance, in the same way as flying insects use it.

Acknowledgments

The author wishes to thank N. Franceschini, S. Viollet, T. Raharijaona, and F. Ruffier for fruitful discussions, and David Wood (English at your Service, www.eays.eu) for revising the English of the manuscript.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
