
Augment-Me: An Approach for Enhancing Pilot’s Helmet-Mounted Display Visualization for Tactical Combat Effectiveness and Survivability

Written By

Angelo Compierchio, Phillip Tretten and Prasanna Illankoon

Submitted: 29 June 2023 Reviewed: 11 July 2023 Published: 11 August 2023

DOI: 10.5772/intechopen.1002356

From the Edited Volume

Applications of Augmented Reality - Current State of the Art

Pierre Boulanger


Abstract

A learning framework combining state-of-the-art augmented reality (AR) technologies and artificial intelligence (AI) for helmet-mounted display applications in combat aviation is proposed to explore perceptual and cognitive performance factors and their influence on mission needs. The analysis originated in an examination of helmet-mounted display (HMD) design features and their configurations for tactical situational awareness (SA). In accomplishing this goal, the relationship between the pilot's visual search and recent advances in AI has been assessed as a means of extending the pilot's uncued visual search limit. In this context, the Augment-Me framework is introduced with the ability to view and organize SA information in a predictive way. The provision of AI-augmented fixation maps could effectively outperform current AR-HMD capabilities, facilitating human decision-making while supporting the detection and compensation of the mechanisms of human error.

Keywords

  • augmented reality
  • helmet-mounted display
  • situational awareness
  • artificial intelligence
  • visual search
  • tactical readiness
  • fixation

1. Introduction

The dynamic evolution of global challenges has driven many air forces to seek more innovative and financially optimized solutions to enhance pilots' performance and learning across a wider combat envelope. Wearable solutions have led researchers to draw on technological advances to examine the tactical information needs of pilots. This exploratory step has led to the development of state-of-the-art devices for processing, transmitting, and displaying images onto the pilot's visor. Revolutionary interface designs have extended the helmet line of sight, from virtual images enabling the pilot to see a target through the aircraft to an uninterrupted display of navigation imagery both day and night. These developments, combined with the Distributed Aperture System (DAS) capable of warning the pilot of incoming threats, have made the F-35 stealth aircraft the most SA-capable aircraft [1]. Such an unrivaled capability accelerates decision-making with unmatched data sharing networked for a warfare operational vision [2].

Operational vision boundaries have been confronted with the paradigm of linking computation-oriented knowledge and naturalistic, recognition-based decision-making. This recognition has been evaluated from the perspective of integrating event-driven and goal-driven outcomes. AR augments cockpit information; however, for effective airpower, intelligent support is needed because human central vision is limited to roughly two degrees [3]. When the senses can provide accurate depth information, the inherent perception-action connection represents an opportunity for fast cognitive processing of maneuver-related information. Seeing is fixating: humans need to fix the gaze on a single spot to inspect the minute details of the world [4]. However, some stimuli convey distorted images or sounds that cannot be easily perceived and act as biases on sensory processing. This limitation can be overcome if a stimulus has been primed with other stimuli. In this context, the Augment-Me framework explores this discrimination with AI-augmented content delivered through a "virtual pilot" to enhance threat detection in combat conditions. There is an ongoing trend toward implementing AI in the combat domain on both sides of the Atlantic. Recent reports indicate that Su-57 fighters are to get "smarter" with AI-enabled sensor fusion and data processing, one report describing "intelligent support of the leading and slave pair of fighters in the process of conducting long-range air combat with a pair of enemy fighters" [5].

This description emphasizes two aspects: a "pair of fighters" as a two-on-two scenario and "long range" as beyond-visual-range combat.

Against potential opponents, the task of the virtual pilot is to uncover enemy threats within a reasonable amount of time from fixations recorded through an eye-tracking system. This measurement concerns recording the pilot's eye movements and is examined through eye and pupil metrics [3, 6, 7, 8, 9, 10]. An AI algorithm can be developed to estimate the position and duration of fixations, as shown in Figure 1.

Figure 1.

The virtual pilot.
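
As a rough illustration of such an algorithm, the following sketch implements dispersion-threshold (I-DT) fixation identification in the spirit of [7]. It is a minimal sketch only: gaze samples are assumed to arrive as (t, x, y) tuples, and the dispersion and duration thresholds are illustrative rather than values used in this work.

```python
def detect_fixations(samples, dispersion_thresh=1.0, min_duration=0.10):
    """Dispersion-threshold (I-DT) fixation identification.

    samples: time-ordered list of (t, x, y) gaze points, t in seconds,
    x/y in degrees of visual angle (assumed format).
    Returns fixations as (t_start, t_end, centroid_x, centroid_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow a window until it spans at least the minimum fixation duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        window = samples[i:j + 1]
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= dispersion_thresh:
            # Expand the window while dispersion stays below the threshold.
            while j + 1 < n:
                xs.append(samples[j + 1][1])
                ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_thresh:
                    xs.pop(); ys.pop()
                    break
                j += 1
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```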

The main purpose of the pilot is to rely on the eye as the primary sensor to retain critical SA elements and establish cognitive dominance, defined as "Conscious awareness of actions within two mutually embedded four-dimensional envelopes" [11]. This relationship is portrayed in Figure 2.

Figure 2.

Positioning the virtual pilot in a mixed real-virtual environment (adapted from [12]).

Therefore, the pilot's evolving mental model relative to the time, position and direction of the aircraft is represented through the inner and outer envelopes of the SA construct. The inner envelope refers to a pilot positioned through Boyd's Observe/Orient/Decide/Act (OODA) loop in an unaided sensory space, while the outer envelope refers to the pilot receiving assistive AI information. This task is implicit, as the pilot can bypass the explicit part of the loop almost simultaneously. Moreover, a pilot engaged in air-to-air combat in a challenging SA environment faces higher demands on locomotion and stimulus response. These aspects refer to the optical flow of the pilot's view, in relation to the eye's sensitivity to motion and to local motion within the field of regard (FoR), the total area that can be perceived [13]. A frame from a MotoGP event in Figure 3 shows this effect in action.

Figure 3.

Pilots’ optical flow (source adapted from public domain).

The optical flow is presented as vectors representing motion velocity on the track. In the forward-motion scenario, the arrowed objects converge toward each other even as the rider accelerates and decelerates. This sequence makes fixation on each of the paired objects relatively easier. In principle, this pairing captures the perceptual relationship of object motions relative to the background.
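
A motion-vector field such as the one overlaid in Figure 3 can be approximated from two consecutive camera frames. The sketch below uses OpenCV's Farnebäck dense optical flow purely for illustration; it assumes the frames are available as grayscale arrays, and the sampling step and parameters are generic defaults rather than part of the framework described here.

```python
import cv2

def motion_vectors(prev_frame, next_frame, step=16):
    """Estimate dense optical flow between two grayscale frames and return
    a sparse grid of (x, y, dx, dy) motion vectors suitable for display."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    h, w = prev_frame.shape
    vectors = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            dx, dy = flow[y, x]          # per-pixel displacement between frames
            vectors.append((x, y, float(dx), float(dy)))
    return vectors
```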

This approach is intended to uncover time-based fixation maps and to maximize the chances of detecting a target by extending the available search space and time, as well as to capture focused attention as expressed in SA space terms by the U.S. Department of Defense (DOD) dictionary: "The requisite foundational, current, and predictive knowledge and characterization of space objects and the operational environment upon which space operations depend" [14]. The visual engagement of a target remains the main focus in aerial victories, but humans are mostly capable of making only two to three fixations per second. In this study, an AI-augmented approach is proposed to mitigate the effects of falling acuity as the eye shifts from one fixation point to another.


2. Evolution of AR-HMDs

The pace of technological advances embedded in combat flight helmets has been relentless. Just over a decade after the Wright Brothers' first flight in 1903, Albert Bacon Pratt patented an integrated gun helmet; this conceptual helmet-mounted sight (HMS) was worn as a platform for weapon delivery and control [15]. The integrated approach was awarded US and UK patents for an "Integrated helmet mounted aiming and weapon delivery system", the first known conception of the combat flight helmet as an integrated system. From the mid-1930s to 1940 the Type B flying helmet was developed with improved and new design features such as oxygen masks, radio earphones and removable goggles. The leather helmet was later replaced by a hard helmet to add further protection for pilots forced to eject at higher speeds, while the goggles were replaced with a tinted visor to protect the eyes from harsh sunlight. In the mid-1950s Gordon Nash reportedly researched new methods to equip pilots with additional information, further envisaging the pilot's constant need for operational and tactical information for combat survival. During WWII the advent of radar had already changed warfare on a wider scale, greatly influencing the control of airspace and the role of fighters.

This revolutionary development was also paired with a fire control system to assist pilots in hitting a target by supplying both its range and direction. This device, known as a fire-control radar and designed to improve mission performance, remains a proven capability today. A fighter aircraft equipped with a fire-control radar gained tactical superiority by engaging targets practically invisible to the naked eye; to prevail, a pilot therefore had to command both flight navigation and weapon information. The value of presenting flight reference conditions to the pilot led to the development of the Head-Up Display (HUD), a fixed, transparent and localized display that presented flight data without obstructing the user's view. The HUD allowed the pilot to keep focus on the imagery and attention on the forward airspace, unlike earlier Head-Down Displays (HDD), which presented aircraft status information and required the pilot to look inside the cockpit for updates. One aircraft equipped with a true HUD was the de Havilland Mosquito, allowing the pilot to view radar information and an artificial horizon directly in front of the flight controls. The addition of the horizon was instrumental for flying, as without it the pilot could not tell the aircraft's attitude and could lose the sense of balance or position [16]. The HUD also presented some drawbacks to pilots acquiring and maintaining SA. The pilot could only receive collimated imagery, such as target information, over a small forward-looking area of the aircraft, so a viewing device was needed to provide information directly in front of the pilot's eyes at all times. The solution originated in the 1960s with the design and development of the HMD, which provided the pilot with an unobstructed visual field and unrestricted head movement with head-up capability. In physiological terms, this translates to better eye relief and lower weight.

2.1 Technological trends

The HMD enhanced the fighter's offensive capability by allowing pilots to designate a target simply by looking at a perceived threat. A common classification for HMDs refers to the type of presentation of the sensory image and symbology: monocular, where one image is presented to one eye; biocular, where the same image is presented to both eyes; and binocular, where two different images are presented, one to each eye.

The Israeli DASH became the first known HMD that measured the pilot's line of sight (LOS) and slaved missiles and sensors to the target location while integrating all aircraft operational modes, with the pilot retaining Hands On Throttle And Stick (HOTAS) control. The DASH formed the basis for the Joint Helmet Mounted Cueing System (JHMCS) program. JHMCS technology was developed through a joint venture established by Elbit Systems, Vision Systems International (VSI) and Kaiser Electronics. The JHMCS equipped the F-16, F/A-18, F-15 and F-22; it gave the pilot an advanced helmet sight despite reduced Air-To-Ground (A/G) accuracy caused by known timing-drift issues related to cockpit magnetic fields. An overview of fixed-wing HMD development programs is shown in Figure 4.

Figure 4.

Summary of HMD programs (helmet source: Public domain).

Earlier HMDs such as the JHMCS, the TopSight-I and the VIPER-IV employed a monocular visor-projected display and sensory technology to determine the pilot's line of sight. Thereafter, the trend from the TYPHOON to the F-35 shifted steadily toward binocular visor-projected displays, reflecting the potential benefits of this class: decluttered information, reduced perceived workload, improved attentional capture, shorter search times and reduced response times [17]. The projection of AR-style symbology on the F-35 Helmet Mounted Display System (HMDS) evolved from simple flight data to full-motion video, image insertion and color symbology. The HMDS used the same symbology as the JHMCS and was the first helmet to incorporate HUD functionality.

Resolving the challenges of the HMDS benefited the roll-out of the F-35 Gen III helmet, including the resolution of blooming effects that appeared especially at night: the employed active-matrix liquid-crystal display (LCD) was found to project a green haze from its backlight onto the HMD visor, making it difficult to locate targets in dark conditions. The solution focused on replacing the LCD with organic light-emitting diode (OLED) displays, which eliminated the bleed-through green glow. A change was also made to the helmet tracker to meet line-of-sight accuracy requirements for strafing [18]. The Gen III helmet became fully operational with two ocular cameras situated above the helmet's display visor. The cameras are manually aligned to the pilot's interpupillary distance (IPD) setting for each eye, a task intended to optimize each pilot's visual acuity as well as establish the proper fit of each helmet. The next generation of binocular HMDs (Zero-G HMDS+) is set to empower 4th-, 5th- and 6th-generation aircraft with improved optical acuity, SA and information management.

However, technological advances have placed additional emphasis on the physical limitations of the eye as a sensor. Despite innovative technologies, there is ongoing HMD research on F-35 topics influencing human performance: green glow, double vision, monocular symbology and canopy interaction [19]. These aspects cover the basic visual search demands that an HMD cueing system must accommodate; however, eye-tracking technology is currently not available on aircraft HMDs. Such technology would make it possible to investigate the pilot's scan patterns necessary for high maneuverability.


3. The augmented SA

Air Forces engage reconnaissance squadrons to maintain tactical readiness and to conduct close air support (CAS) operations. CAS is denoted as "air action … against hostile targets that are in close proximity to friendly forces … (requiring) detailed integration of each air mission with the fire and movement of those forces" [20].

Sustaining CAS capability requires eye-blink situational awareness supported by advanced integrated avionics. Deploying the F-35 stealth aircraft to provide CAS requires experienced pilots capable of making decisions directly at the edge. This need was already recognized by early pilots such as Oswald Boelcke, whose central theme in combat tactics rested on the basic concepts of analyzing the enemy's position, executing counteroffensive maneuvers and making continuous decisions. As recognition-based decision-making depends on knowledge structures directly recruited through time-based tactical appreciations, the complex percepts involved were codified by the OODA loop. This interaction evokes Gibson's work stating that the pilot's sensory system constructs the percept directly from the environment, a thought that led to the Theory of Direct Perception (TDP) [21]. This theory introduced the concept of affordances, relating the interaction between the world and the pilot as if the sensory system resonated with environmental features.

The effects of this interpretation were expressed as follows: "When the senses are considered as perceptual systems, all theories of perception become at one stroke unnecessary. It is no longer a question of how the mind operates on the deliverances of sense or how past experience can organize the data, or even how the brain can process the inputs of the nerves, but simply how information is picked up" [22]. Picking up maneuver task-relevant information is key to minimizing workload and increasing SA, while the symbology flow afforded to pilots on display concepts such as the HUD provided the opportunity to relieve performance constraints on cognitive tasks [23, 24]. Essentially, the HUD was a genuine precursor to AR breakthroughs in the aviation domain; the phrase "augmented reality" itself was coined by Tom Caudell and David Mizell [25]. The overlay of virtual guidance symbology on the pilot's out-the-window view provided a comprehensive SA improvement, enabling the pilot to filter a larger amount of information into relevant SA inputs. The HUD essentially provided the pilot with the means to remain "head-up" despite instrument meteorological conditions (IMC). Furthermore, the combination of real and augmented cues provided several aids to the pilot, including increased flight safety from improved warning indications, reduced pilot workload from more complete flight information, increased flight precision and direct trajectory visualization. The Augment-Me framework follows this combination of visual and control augmentation, its influence on cognitive thinking, and the role of SA. This ability responds to the different SA levels: "Level (1) the pilot's accurate perception of elements information within the cockpit, the aircraft and the external world, Level (2) the comprehension of the significance of those elements, to form a knowledge pattern with a holistic temporal picture of the external world and Level (3) the ability of the pilot to project future actions through the knowledge acquired from the projection of the elements status in the near future" [26]. This description is illustrated in Figure 5.

Figure 5.

SA interface model.


4. Augmented visual search

The prospect of superimposing and projecting virtual information with visual cues on the HMD visor empowered tactical maneuvering and limited attentional capture. The first HMD to exploit AR was the Integrated Helmet And Display Sight System (IHADSS), a monocular device displaying a virtual image to a single eye; this feature led to increased workload and stress resulting from visual disturbance projected during head movements. Such a layout triggers competition between the two eyes, affecting visual search. A similar condition, caused by the neural adaptation that switches between perceptual states, also characterizes binocular rivalry, facilitating visibility near the point of fixation rather than at the fixation itself [27, 28]. A consideration for monocular augmented reality is the competitive dominance between stimulus luminance and moving or stationary stimuli, whereas for binocular rivalry with see-through displays, experiments have shown that the influence of stimuli was less unstable than in monocular displays [29]. Notably, this can cause visual fatigue, as binocular bistability can arise from the image alternating between the two eyes [30]. The dynamics of every action start with the eye and end with the eye, and the eye reaches an object before the head motion is complete.

As such, the distinctive ontology between pilots and combat situations is that situations are structured while pilots are not. This is exemplified by the unchanged human make-up of a pilot who flew the Tornado in the late 1970s and a pilot who flies the F-35 today. Despite technological advances, the importance of the pilot's visual search has not changed, and the pilot still has the last say, whether performing active or passive air and missile defense. Active defense refers to containing the enemy through kinetic and non-kinetic actions, while passive defense includes all measures except detection and warning assets [4]. This leads to the instantaneous and continuous evolution of the pilot's mental picture, invoking the dynamics of visually guided reaching [31].

4.1 Visual search measures

Visual search is made up of a series of discrete fixations [32]. A fixation refers to the visual gaze resting on a location for a period; between fixations, saccades occur. The eye typically makes 3–5 saccades per second, with fixations lasting about 300 ms, while longer fixations are indicative of higher cognitive processing [33]. Saccadic eye movements between visual attention patterns are relatively brief, and their central function is to establish fixation. Eye-tracking devices require sufficient spatial and temporal resolution and must capture a wide range of eye movement measures in real time [34, 35, 36, 37, 38, 39, 40, 41]. Eye trackers are non-intrusive, can be monocular or binocular, and, following calibration, can measure eye movements of the line-of-sight direction along both the x-axis and y-axis ranges. It is worth noting that the line of sight is defined by the direction of the eye gaze and the head. On aircraft, video images generated by sensors mounted on the aircraft or the HMD are directly correlated with the line of sight. Sophisticated HMDs have been built with incorporated eye- and head-tracking systems. Most eye-tracking devices are video-based and are categorized as mobile or remote tracking systems. The former are head-mounted devices, less invasive and best suited for realistic simulator applications, and can be combined with complementary sensing technology such as electroencephalography (EEG) [42]. The latter are less complex, can be mounted on a computer and do not touch the user; the visual space is therefore already incorporated. The downside of such a device is that it is fixed, which affects data accuracy when the user's head moves without restriction.
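
To make the fixation/saccade distinction concrete, the sketch below shows a velocity-threshold (I-VT) classification of gaze samples, one of the algorithm families referred to in the framework components of Section 5. It is a minimal sketch only: the sampling format and the 30 deg/s threshold are assumptions chosen for illustration.

```python
import numpy as np

def classify_ivt(t, x, y, velocity_thresh=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.

    t: timestamps (s); x, y: gaze angles (deg).
    Samples slower than velocity_thresh (deg/s) are labelled fixation,
    faster samples are labelled saccade.  Returns True where fixation.
    """
    t, x, y = map(np.asarray, (t, x, y))
    dt = np.diff(t)
    speed = np.hypot(np.diff(x), np.diff(y)) / dt   # angular speed between samples
    speed = np.append(speed, speed[-1])             # pad to the original length
    return speed < velocity_thresh

def fixation_durations(t, is_fix):
    """Accumulate the durations of consecutive fixation-labelled samples."""
    durations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = t[i]
        elif not f and start is not None:
            durations.append(t[i] - start)
            start = None
    if start is not None:
        durations.append(t[-1] - start)
    return durations
```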

Furthermore, two gaze estimation methods are available to examine the geometry of the eye: the model-based method, which uses a priori knowledge to fit a 3D model to the eye image, and the appearance-based method, which trains a statistical model to map from the eye image to the gaze direction [43, 44, 45, 46, 47, 48].
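
As an illustration of the appearance-/regression-based family, the sketch below fits a second-order polynomial mapping from pupil-glint feature vectors to known calibration target positions. This is a generic calibration sketch, not the method of any specific tracker; the feature vector, polynomial form and least-squares fit are assumptions chosen for clarity.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint, targets):
    """Fit a second-order polynomial mapping from pupil-glint vectors (vx, vy)
    to gaze coordinates (gx, gy), using calibration data and least squares.

    pupil_glint: array of shape (n, 2); targets: array of shape (n, 2).
    Returns a (6, 2) coefficient matrix, one column per gaze coordinate.
    """
    vx, vy = pupil_glint[:, 0], pupil_glint[:, 1]
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def predict_gaze(coeffs, vx, vy):
    """Map a new pupil-glint vector to an estimated gaze point (gx, gy)."""
    features = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return features @ coeffs
```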

4.2 The need of a virtual pilot

In air-to-air combat, the pilot's thought process is not linear and is constrained by time-critical events. The continuous accommodation of information requires the pilot to synthesize HMD imagery with sensemaking ability, contributing toward correct decisions. In this perspective, the pilot has to act adaptively when transitioning from one maneuver to another without missing important cues. This involves multi-tasking and higher cognitive load that can affect what makes up the pilot's mental model. Cognitive psychology holds that humans can perform at most about two tasks at a time; attempting more results in reduced performance and, potentially, an accident. In a combat scenario, the building blocks of SA rely on sensors and communications. This all-encompassing information requires translating the holistic picture of detecting an adversary aircraft within a single fixation. Fixation has been raised as a human factors red flag that sometimes contributes to modern-day accidents [49]. This cause is not new: there are recorded accidents of both military and civil aircraft whose cause is directly traced to fixation on the outside environment.

The latest 5th-generation aircraft such as the F-35 have departed from the traditional mechanics of the F-15/F-16/F-21 multi-role aircraft with a new design philosophy. The pilot's Gen III HMD is essentially a component of the information fusion system, with advanced radar and infrared sensors that can pass data to a missile battery on the ground to improve its accuracy. This capability allows the F-35 to track and help destroy missiles, but to actually shoot down one just launched, the pilot would have to detect it just before it reaches 100,000 ft [50]. In this case, the pilot would have limited time to detect and intercept the missile. This limitation highlights the pilot's uncued visual search limit, which is also evident against enemy aircraft or a standard squadron formation. The systematic search in Figure 6 highlights this observation.

Figure 6.

The visual search limits: (a) cumulative probability that a pilot searching a 90° sector at 20 fixations/minute detects an approaching aircraft, by colored range band (orange, green, blue). (b) The instantaneous virtual map of the fixation points of the pilot's eye.
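
Cumulative curves of the kind shown in Figure 6a can be understood with a simple glimpse model: if each fixation is treated as an independent glimpse with a per-glimpse detection probability that decreases with range, the probability of at least one detection after n fixations is 1 − (1 − p)^n. The sketch below reproduces this reasoning; the per-glimpse probabilities are illustrative assumptions and are not taken from the figure.

```python
def cumulative_detection_probability(p_glimpse, fixation_rate_per_min, t_seconds):
    """Probability of at least one detection after t seconds of systematic search,
    treating each fixation as an independent glimpse with probability p_glimpse."""
    n_fixations = fixation_rate_per_min * t_seconds / 60.0
    return 1.0 - (1.0 - p_glimpse) ** n_fixations

# Illustrative range bands (per-glimpse probabilities are assumptions, not figure data):
for label, p in [("short range", 0.30), ("medium range", 0.10), ("long range", 0.03)]:
    p_60s = cumulative_detection_probability(p, fixation_rate_per_min=20, t_seconds=60)
    print(f"{label}: P(detect within 60 s) = {p_60s:.2f}")
```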

Figure 6 demonstrates that the pilot would have very limited time to respond to enemy attacks. This is the consequence of adversary aircraft falling just outside the pilot's central vision and the fixation location at any given point. The established collaborative setting would require AI to reproduce the pilot's fixation mechanisms generated in real time during the mission.

Figure 7.

Augment-me learning framework.


5. The augment-me framework

The learning framework provides a cohesive structure for displaying information to pilots from on-board and off-board sensory sources while harvesting the cognitive competencies of the pilot's decision-making when managing unexpected tactical maneuvers. Adopting this approach enhances precision engagements against fixed and moving targets, as shown in Figure 7.

The learning framework was designed with the following objectives:

  • improve the pilot's cueing ability

  • add a prediction layer to the SA by acquiring long-range sensor data

  • optimize the pilot's visual search in combat by monitoring eye movement and pupil measures to design fixation maps

  • enable AI technology to derive tactical search areas from fixation maps

  • combine AR and AI data to generate frozen search area sectors

  • rapidly integrate and transfer data to ground support

The components required to build the framework are described as follows:

Assigned mission, engagement, intra-communication: High-level measures to quantify the combat capability to complete the mission.

Visual search: On-board and off-board measures of visual performance and target prioritization.

Common AR engagement: Modular, distributed AR merging AI and virtual fixation environment (VFE).

Multi-intelligence fusion: Measure of intercept and detection probability quantifiable in a single variable.

Operational information: Imagery displayed on HMD and cockpit display.

Pilot HMD: Equipped with eye tracking to find fixation points and saccadic eye motions.

Eye movement measures: Measures are identified using area-based, dispersion-based or velocity-based algorithms.

Detection AI algorithm: Estimates the spatial positions of fixations.

Virtual pilot: Virtual agent or AI-powered processing for reconfigurable fixation maps.

Fixation maps: Fixation patterns for classification and maneuver profiles for ground-based tactical counteroffensives.

Since SA development requires recursive levels of abstraction and continued refinement of a combat scenario, the virtual pilot adds predictive action measures by providing virtual scan patterns; each pattern allows complete coverage of the sliced sectors in Figure 7b, thereby assuring a timely ground-based response. Efforts have also been made to enhance F-35 pilot training with eye tracking and EEG technology and to establish future applications [51].
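
To indicate how the fixation-map component listed above might be realized, the sketch below turns detected fixations into a duration-weighted fixation map (heat map) over a normalized display. It is a minimal sketch under stated assumptions: fixations are given as (x, y, duration) in normalized coordinates, and the grid size and Gaussian spread are illustrative rather than design values.

```python
import numpy as np

def fixation_map(fixations, grid=(64, 64), sigma=2.0):
    """Build a duration-weighted fixation map over a normalized display.

    fixations: iterable of (x, y, duration) with x, y in [0, 1].
    Returns a grid-shaped array where higher values mark longer or denser fixation.
    """
    h, w = grid
    heat = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    for x, y, dur in fixations:
        cx, cy = x * (w - 1), y * (h - 1)
        heat += dur * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

# Example: three fixations, the longest falling in one display sector.
example = [(0.2, 0.3, 0.25), (0.8, 0.2, 0.60), (0.5, 0.7, 0.30)]
fmap = fixation_map(example)
```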

5.1 AI augmented reality

Significant SA advantages can be gained through the implementation of AI-based solutions, especially at Level 3 SA. In this context, AI-augmented solutions provide assisted decision-making with real-time analysis and target prediction capability. The basic elements for online tracking, combining optical flow, an appearance model and a filter to determine fixation state changes, are shown in Figure 8.

Figure 8.

Schematic of eye tracking filter structure.

A technique to track eye motion is to shine infrared light on the cornea and the pupil. A camera is then positioned to capture eye images, and a processing algorithm estimates the center of the pupil and the positions of the glints. Pupil-corneal reflection measurements are performed in real time, while the image processing algorithm provides an accurate measurement of the gaze. The appearance model, once updated, functions as a classifier, while the filter is used to predict the search region; the fixation state can be modeled using Markov decision models, Cumulative Sum (CUSUM) algorithms, deep learning and machine learning (ML) processes, and edge computing [52, 53, 54]. Edge computing can promote multimodal interaction with improved pilot-cockpit integration beyond standard interfaces, with a high level of speed and accuracy [55]. The trend of applicable AI solutions could include fog/edge computing to enable on-board computation rather than relying on off-board platforms, thereby improving SA responsiveness [24]. Further benefits include low latency for data transmission from node to destination and no single point of failure. This follows the distributed approach of the edge architecture, which, in case of a source failure, instantaneously redirects data transmission to an alternative edge network [56].
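
One way to realize the filter that predicts the search region, mentioned above, is a constant-velocity Kalman filter over the gaze position; a minimal sketch is given below. The state model, sampling period and noise parameters are illustrative assumptions, not values from a fielded system.

```python
import numpy as np

class GazeKalman:
    """Constant-velocity Kalman filter over gaze state (x, y, vx, vy)."""

    def __init__(self, dt=1 / 120, process_noise=50.0, meas_noise=0.5):
        self.x = np.zeros(4)                       # position and velocity
        self.P = np.eye(4) * 1e3                   # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # only position is measured
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * meas_noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                          # predicted gaze position

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: predict the search region centre before each new gaze measurement arrives.
kf = GazeKalman()
for z in [(10.0, 5.0), (10.4, 5.1), (10.9, 5.3)]:  # illustrative gaze samples (degrees)
    centre = kf.predict()
    kf.update(z)
```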


6. Conclusion

Modern combat aircraft rely ever more on robust sensor, weapon and network communication technology to provide pilots with superior SA. While this situation is evolving, the pilot is expected to perform organized visual search patterns and avoid detection and tracking by enemy sensors. The pilot's visual search area and associated observation processes have been used to model the aerial sensor footprint for threat detection. These data, combined through an AR and AI learning framework, are set to equip the pilot with extended visual search limits and a tactical advantage.


Conflict of interest

The authors declare no conflict of interest.

References

  1. F-35A Lightning II [Internet]. 2023. Available from: https://www.af.mil/About-Us/Fact-Sheets/Display/Article/478441/f-35a-lightning-ii/ [Accessed: April 01, 2023]
  2. Unrivaled Capabilities [Internet]. 2023. Available from: https://www.f35.com/f35/about/5th-gen-capabilities.html [Accessed: April 04, 2023]
  3. Martinez-Conde S, Macknik SL, Hubel DH. The role of fixational eye movements in visual perception. Nature Reviews Neuroscience. 2004;5(3):229-240
  4. Stillion J. Trends in air-to-air combat: Implications for future air superiority. Center for Strategic and Budgetary Assessments [Internet]. 2015. Available from: https://csbaonline.org/uploads/documents/Air-to-Air-Report-.pdf [Accessed: February 16, 2023]
  5. Catching Up With F-35, Russia’s Su-57 Fighters to Get ‘Smarter’ With AI-Enabled Sensor Fusion, Data Processing [Internet]. 2023. Available from: https://eurasiantimes.com/catching-up-with-f-35-russias-su-57-fighters-to-get-smarter/ [Accessed: April 06, 2023]
  6. Niehorster DC, Zemblys R, Beelders T, et al. Characterizing gaze position signals and synthesizing noise during fixations in eye-tracking data. Behavior Research Methods. 2020;52:2515-2534. DOI: 10.3758/s13428-020-01400-9
  7. Salvucci DD, Goldberg JH. Identifying fixations and saccades in eye-tracking protocols. In: Proceedings of the 2000 ACM Symposium on Eye Tracking Research & Applications. Palm Beach Gardens, Florida, USA: ACM; 2000. pp. 71-78
  8. Dehais F, Peysakhovich V, Scannella S, Fongue J, Gateau T. "Automation surprise" in aviation: Real-time solutions. In: Proceedings of the 33rd Annual Conference on Human Factors in Computing Systems. New York, USA: ACM; 2015. pp. 2525-2534
  9. Mannaru P, Balasingam B, Pattipati K, Sibley C, Coyne J. Cognitive context detection using pupillary measurements. In: Next-Generation Analyst IV. Vol. 9851. Baltimore, MD, USA: SPIE Defense and Security; 2016. pp. 244-251
  10. Velichkovsky BB, Khromov N, Korotin A, Burnaev E, Somov A. Visual fixations duration as an indicator of skill level in esports. In: Human-Computer Interaction – INTERACT 2019: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2-6, 2019, Proceedings, Part I. Lecture Notes in Computer Science. Vol. 11746. Cham: Springer; 2019. pp. 397-405
  11. Beringer D, Hancock PA. Exploring situational awareness: A review and the effects of stress on rectilinear normalization. In: Proceedings of the Fifth International Symposium on Aviation Psychology. Columbus, Ohio, USA: Ohio State University, Department of Aviation; 1989. pp. 646-651
  12. Compierchio A, Tretten P. Human factors evaluation of shared real and virtual environments. In: Human Interaction, Emerging Technologies and Future Systems V: Proceedings of the 5th International Virtual Conference on Human Interaction and Emerging Technologies, IHIET 2021, August 27-29, 2021, and the 6th IHIET: Future Systems (IHIET-FS 2021), October 28-30, 2021, Paris, France. Cham: Springer International Publishing; 2022. pp. 745-751
  13. Cutting JE. Images, imagination, and movement: Pictorial representations and their development in the work of James Gibson. Perception. 2000;29(6):635-648
  14. Dictionary of Military and Associated Terms [Internet]. 2019. Available from: https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/dictionary.pdf [Accessed: April 16, 2023]
  15. Li H, Zhang X, Shi G, Qu H, Wu Y, Zhang J. Review and analysis of avionic helmet-mounted displays. Optical Engineering. 2013;52(11):110901
  16. Previc FH, Ercoline WR, editors. Spatial Disorientation in Aviation. Reston, Virginia, USA: AIAA; 2004
  17. Melzer JE, Moffitt K. Head Mounted Displays. New York, USA: McGraw-Hill Publishing; 1997
  18. F-35: Under the Helmet of the World’s Most Advanced Fighter [Internet]. 2018. Available from: https://www.aviationtoday.com/2018/08/24/f-35-helmet-worlds-advanced-fighter/ [Accessed: April 08, 2023]
  19. F-35: Operational Based Vision Assessment (OBVA) “Human Vision Issues, Research and Future Research of the F-35 HMD” [Internet]. 2022. Available from: https://www.sto.nato.int›STO-EN-HFM-350 [Accessed: May 11, 2023]
  20. Joint Publication 3-09.3, Close Air Support, 25 November 2014 [Internet]. 2014. Available from: https://jdeis.js.mil/jdeis/new_pubs/jp3_09_3.pdf [Accessed: April 07, 2023]
  21. van Dijk L, Kiverstein J. Direct perception in context: Radical empiricist reflections on the medium. Synthese. 2021;198:8389-8411. DOI: 10.1007/s11229-020-02578-3
  22. Gibson JJ. The Senses Considered as Perceptual Systems. Boston, USA: Houghton Mifflin Company; 1966
  23. Wickens CD. Pilot attention and perception and spatial cognition. In: Human Factors in Aviation and Aerospace. London, UK: Academic Press; 2023. pp. 141-170. DOI: 10.1016/B978-0-12-420139-2.00009-5
  24. Munir A, Aved A, Blasch E. Situational awareness: Techniques, challenges, and prospects. AI. 2022;3(1):55-77
  25. Carmigniani J, Furht B, Anisetti M, Ceravolo P, Damiani E, Ivkovic M. Augmented reality technologies, systems and applications. Multimedia Tools and Applications. 2011;51:341-377
  26. Endsley MR. Toward a theory of situation awareness in dynamic systems. Human Factors. 1995;37(1):32-64
  27. Bayle E, Guilbaud E, Hourlier S, Lelandais S, Leroy L, Plantier J, et al. Binocular rivalry in monocular augmented reality devices: A review. Situation Awareness in Degraded Environments. 2019;11019:136-149
  28. Yildirim I, Schneider KA. Neural dynamics during binocular rivalry: Indications from human lateral geniculate nucleus. eNeuro. 2023;10(1). DOI: 10.1523/ENEURO.0470-22.2022
  29. Dempo A, Kimura T, Shinohara K. Perceptual and cognitive processes in augmented reality – comparison between binocular and monocular presentations. Attention, Perception, & Psychophysics. 2022;84(2):490-508. DOI: 10.3758/s13414-021-02380-4
  30. Cao T, Wang L, Sun Z, Engel SA, He S. The independent and shared mechanisms of intrinsic brain dynamics: Insights from bistable perception. Frontiers in Psychology. 2018;9:589
  31. Wilson AD, Golonka S. Embodied cognition is not what you think it is. Frontiers in Psychology. 2013;4:58
  32. Schallhorn S, Daill K, Cushman WB, Unterreiner R, Morris A. Visual Search in Air Combat. Pensacola, FL: Naval Aerospace Medical Research Lab, NAMRL Publications; 1990
  33. Walter K, Bex P. Cognitive load influences oculomotor behavior in natural scenes. Scientific Reports. 2021;11(1):12405
  34. Klein G, Drummond T. Robust visual tracking for non-instrumental augmented reality. In: The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings. Tokyo, Japan: IEEE; 2003. pp. 113-122
  35. Corbett M, Shang J, Ji B. GazePair: Efficient pairing of augmented reality devices using gaze tracking. IEEE Transactions on Mobile Computing. 2023
  36. Stone A, Rajeev S, Rao SP, Panetta K, Agaian S, Gardony A, et al. Gaze depth estimation for eye-tracking systems. In: Multimodal Image Exploitation and Learning 2023. Vol. 12526. Orlando, Florida, USA: SPIE; 2023. pp. 143-152
  37. Shree DVJ, Murthy LR, Saluja KS, Biswas P. Operating different displays in military fast jets using eye gaze tracker. Journal of Aviation Technology and Engineering. 2018;8(1):31
  38. Lutnyk L, Rudi D, Schinazi VR, Kiefer P, Raubal M. The effect of flight phase on electrodermal activity and gaze behavior: A simulator study. Applied Ergonomics. 2023;109:103989
  39. Reis GA, Miller ME, Geiselman EE, Langhals BT, Kabban CM, Jackson JA. Effect of visual field asymmetries on performance while utilizing aircraft attitude symbology. Displays. 2023;77:102404
  40. Li W-C, Lin JJ, Braithwaite G, Greaves M. The development of eye tracking in aviation (ETA) technique to investigate pilot’s cognitive processes of attention and decision-making. In: Proceedings of the 32nd Conference of the European Association for Aviation Psychology (EAAP), Cascais, Portugal, 26-30 September 2016
  41. Dehais F, Behrend J, Peysakhovich V, Causse M, Wickens CD. Pilot flying and pilot monitoring’s aircraft state awareness during go-around execution in aviation: A behavioral and eye tracking study. The International Journal of Aerospace Psychology. 2017;27(1-2):15-28
  42. Different Kinds of Eye Tracking Devices [Internet]. 2020. Available from: https://www.bitbrain.com/blog/eye-tracking-devices [Accessed: May 21, 2023]
  43. Babu MD, JeevithaShree DV, Prabhakar G, Saluja KPS, Pashilkar A, Biswas P. Estimating pilots’ cognitive load from ocular parameters through simulation and in-flight studies. Journal of Eye Movement Research. 2019;12(3):10. DOI: 10.16910/jemr.12.3.3
  44. Klaproth OW, Halbrügge M, Krol LR, Vernaleken C, Zander TO, Russwinkel N. A neuroadaptive cognitive model for dealing with uncertainty in tracing pilots' cognitive state. Topics in Cognitive Science. 2020;12(3):1012-1029
  45. Gomolka Z, Kordos D, Zeslawska E. The application of flexible areas of interest to pilot mobile eye tracking. Sensors. 2020;20(4):986
  46. Naeeri S, Mandal S, Kang Z. Analyzing pilots’ fatigue for prolonged flight missions: Multimodal analysis approach using vigilance test and eye tracking. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 63(1). Los Angeles, CA: SAGE Publications; 2019. pp. 111-115
  47. An Eye Tracking based Aircraft Helmet Mounted Display Aiming System [Internet]. 2022. Available from: https://www.techrxiv.org/articles/preprint/An_Eye_Tracking_based_Aircraft_Helmet_Mounted_Display_Aiming_System/18093233 [Accessed: June 12, 2023]
  48. Modi N, Singh J. A review of various state of art eye gaze estimation techniques. In: Advances in Computational Intelligence and Communication Technology: Proceedings of CICT 2019. 2021. pp. 501-510
  49. Pilot Duty of Care and the Role of the Human Factors Expert [Internet]. 2014. Available from: https://www.meaforensic.com/pilot-duty-of-care-and-the-role-of-the-human-factors-expert/ [Accessed: April 06, 2023]
  50. Could the F-35 Really Shoot Down an Enemy Ballistic Missile? [Internet]. 2020. Available from: https://nationalinterest.org/blog/buzz/could-f-35-really-shoot-down-enemy-ballistic-missile-135442 [Accessed: April 20, 2023]
  51. Carroll M. Enhancing HMD-based F-35 training through integration of eye tracking and electroencephalography technology. In: Schmorrow DD, Fidopiastis CM, editors. Foundations of Augmented Cognition. AC 2013. Lecture Notes in Computer Science. Vol. 8027. Berlin, Heidelberg: Springer; 2013. DOI: 10.1007/978-3-642-39454-6_3
  52. Klaib AF, Alsrehin NO, Melhem WY, Bashtawi HO, Magableh AA. Eye tracking algorithms, techniques, tools, and applications with an emphasis on machine learning and internet of things technologies. Expert Systems with Applications. 2021;166:114037
  53. Yang T, Cappelle C, Ruichek Y, El Bagdouri M. Online multi-object tracking combining optical flow and compressive tracking in Markov decision process. Journal of Visual Communication and Image Representation. 2019;58:178-186
  54. Gunawardena N, Ginige JA, Javadi B. Eye-tracking technologies in mobile devices using edge computing: A systematic review. ACM Computing Surveys. 2022;55(8):1-33
  55. Edge Computing Poised to Give AR Wearables a Big Boost [Internet]. 2021. Available from: https://www.fierceelectronics.com/electronics/edge-computing-poised-to-give-ar-wearables-a-big-boost [Accessed: May 02, 2023]
  56. Letaief KB, Shi Y, Lu J, Lu J. Edge artificial intelligence for 6G: Vision, enabling technologies, and applications. IEEE Journal on Selected Areas in Communications. 2021;40(1):5-36
