Open access peer-reviewed chapter

Perspective Chapter: Evolution of User Interface and User Experience in Mobile Augmented and Virtual Reality Applications

Written By

Peter J. Van de Broek, Clement Onime, James O. Uhomobhi and Mattia Santachiara

Reviewed: 10 February 2022 Published: 07 April 2022

DOI: 10.5772/intechopen.103166

From the Edited Volume

Haptic Technology - Intelligent Approach to Future Man-Machine Interaction

Edited by Ahmad Hoirul Basori, Sharaf J. Malebary and Omar M. Barukab

Abstract

An end-user’s experience of any software is typically influenced by the interface the application presents to the user. For mixed reality environments such as augmented and virtual reality, the user interface is highly visual, and a poor interface can significantly degrade the user experience. Adequate attention is required when designing or creating interfaces and user experience within mixed reality environments, as traditional interface design goals and specifications often need to be adjusted. Furthermore, for mixed reality environments on mobile devices there are additional interface constraints and considerations that, when properly addressed, would considerably improve the user experience. This chapter discusses the evolution of user interfaces and user experience in augmented and virtual reality applications on mobile devices and contributes a framework for improving user interfaces and experience when using mixed reality environments.

Keywords

  • augmented reality (AR)
  • augmented virtuality (AV)
  • human computer interaction (HCI)
  • mixed reality (MR)
  • user interface (UI)
  • user experience (UX)
  • user experienced interface (UXI)
  • virtual reality (VR)

1. Introduction

User Experience (UX) and User Interface (UI) are major components of any modern software application where interaction with a human is required. While the term UI historically referred to the basic elements that provide input-output functionality [1], such as keyboards, line printers and visual display units, today it has broadened to also encompass elements of visual design, including layouts, prompts/dialog boxes, fonts and the language of text, as well as the use of colours and images. There are already voice-activated smart or intelligent systems and platforms where the user interface is based entirely on audible speech or sound for both input and output.

The UX has been viewed as distinct from the UI and refers to end-users’ perception of the attractiveness and suitability of the software for its intended purpose. In most cases, measurable UX is based on an overall aggregation of both non-abstract and abstract quantities that also include the UI, software functionality and even its response speed. Today, the UX is of paramount importance for any software or tool that requires user interaction.

Mixed Reality (MR) environments combine both real and virtual (e.g., computer-generated) objects for presentation within a single display. MR environments have been classified based on the ratio of real to virtual objects within them. Completely virtual reality (VR) environments exist at one end of the continuum, while completely real, or physical, environments are at the opposite end. In between both ends, the continuum defines arbitrary combinations of both real and virtual objects. When there are more virtual elements than real ones, the environment is classified as Augmented Virtuality (AV), while when there are more real objects than virtual ones, it is known as Augmented Reality (AR) [2].

The AR, AV and VR mixed reality environments can be implemented or displayed using a wide variety of hardware devices, including projectors, visual display units or monitors, large wall-sized displays and specialized head-mounted displays [3]. This work, however, focuses on the display and use of these MR environments on mobile devices, where mobile devices are limited to portable consumer-grade ICT devices such as smart-phones/tablets and their associated peripherals such as head or chest mounting units, glasses and watches. That is, we focus on commodity mobile devices such as smart-phones and tablets, excluding custom or otherwise expensive portable hardware and equipment [4].

The evolution of UI and UX in MR environments has been heavily influenced by available technology. For example, MR applications in the 1960s were limited to wire-frame displays [5]. UX is of particular interest in MR environments, as they can readily combine the advantages of virtual environments with seamless collaboration [6].

The rest of this chapter provides background literature pertinent to the evolution of UI and UX in mobile MR environments, UI/UX frameworks and the unique challenges of mobile MR environments.

2. Background

Figure 1 presents a redrawing of the Reality-Virtuality (RV) continuum first proposed by Milgram and Kishino.

Figure 1.

Redrawn RV continuum.

This RV continuum was formulated on a 3-dimensional (3D) taxonomy that incorporated the Extent of World Knowledge (EWK), Reproduction Fidelity (RF) and the Extent of Presence Metaphor (EPM) [7]. All three are fundamental to both UI and UX: increasing EWK translates to a better ability to model the real world (which leads to a better UI). Similarly, with increasing RF, real and virtual content become more and more indistinguishable (which could lead to a better UX), and with increasing EPM, users’ interactions become more natural or better aligned with real environments (which suggests a better UX) [8].

Skarbez et al. argue that this RV continuum is limited, as it describes content only in relation to realism and therefore lacks coherence in the end users’ experience, or UX [8]. They state that “the ‘mediating’ technology, content conveyed, and resulting impact must be considered together to adequately describe MR experiences”. Equally pertinent is that the RV continuum was formulated explicitly around visual experiences and visual hardware. Due to rapid advances in hardware and software, MR environments are no longer confined to visually synthesized displays alone: they now include haptic and auditory experiences, with at least exploratory iterations of computer-generated stimuli for all the exteroceptive senses, and it is through interactions with the five exteroceptive senses (sight, sound, touch, smell and taste) that users experience MR environments [8].

Despite this limitation, the RV continuum remains a relevant framework for MR research and development today. Indeed, for modern applications, there is a need to evolve from more passive or traditional modes of HCI to UI that facilitate multi-sensorial modalities allowing for interactions in virtual worlds, where an interaction modality can be defined as a tangible communication mode [9]. Computer UI aim to enhance interactions with computing systems through various interfaces. Historically, UI have evolved from batch interfaces (punched cards) to command-line user interfaces, graphical user interfaces (GUI), web-based user interfaces (WUI), a subclass of GUI, and recently to touch screens that accept input at the touch of a stylus or finger [10]. Further evolutions in UI can be classified under the broad category of post-WIMP UI [11, 12, 13], or next-generation user interfaces (NGUI) [2]. Such UI employ a variety of novel interaction devices and techniques targeting multi-platform and multi-modal UI that have evolved to address user interactions and experiences in 3-D MR environments, including VR and AR [14], with a need for greater responsiveness, immediacy of feedback and realism within immersive 3-D environments. Examples of NGUI include the tangible user interface (TUI), organic user interface (OUI), reality-based interface (RBI) and smart material interface (SMI) [15].

2.1 User interface

Traditional UI were predicated on the narrow scope of usability, where the aim was to reduce cognitive load, rather than on users’ overall experiences [16]. An early paradigmatic model, still ubiquitous, is the window, icon, menu, pointer (WIMP) GUI, facilitated by the introduction of the point-and-click mouse. The WIMP GUI model was developed in the 1980s using interfaces from the computer-as-tool paradigm, in which a 2-dimensional workspace is presented with direct manipulation of objects in a serial fashion [17]. Although the WIMP GUI was adapted and popularised by the Macintosh in the 1980s, it is still the most dominant type of GUI in modern desktop computers [11]. Reasons for the ubiquity of this GUI include its effectiveness in facilitating common office tasks [11]. Other advantages are its ease of use, due in part to the exploitation of muscle memory and image recognition, and its commonality across applications with widespread accessibility for a range of users, facilitating the creation of a de facto standard [12].

With the introduction of WIMP, interaction with computing hit the mainstream. Before this time, van Dam [13] argues, there were two previous generations of user interfaces, placing WIMP in the third generation of UI. UI at this time were optimised to the available hardware, although it is argued that the first generation in the 1950s and 1960s were not UI in the strict sense, as there was no interaction with users per se: computers were used in batch mode with punched-card input and line-printer output. Between the 1960s and 1980s, van Dam [13] highlights the evolution of the second generation of UI, in which for the first time users could interact with computing systems by typing parameter-defined commands on mechanical alphanumeric displays, using timesharing on mainframes and microcomputers. Such systems were founded on operating systems such as DOS and UNIX, with command-line shells and device drivers. In DOS, the device driver is responsible for input/output operations and uses blocks, each with its own address, to store information on disks [18]. In this way the user controls all system software through the DOS UI, which allows for graphical displays on the monitor. DOS was a forerunner to the GUI and is a command-line interface (CLI) system. Key considerations of earlier iterations of UI were responsiveness and immediate feedback to user inputs, increasing functionality through human-computer interaction (HCI), where functionality was the key paradigm.

However, WIMP UI have several limitations. As complexity increases, with additional icons and widgets added, the UI becomes more cumbersome and harder to use, with the serialised nature of the interface separating the user from the perception of real-time working [11] and preventing parallel input [19]. In addition, the UI is predicated on a 2-D paradigm, with 2-D input devices and the desktop metaphor, and does not innately transpose into 3-D environments [11]. Such limitations have become more pronounced. Evolutions in processing and graphical processors, leading to advancements in software and hardware, and iterations in the design and development of more appropriate UI have taken place, with developments in gaming having a major input. With the increasingly widespread proliferation of gaming, from handheld to desktop and online collaborative platforms utilising immersive 3-D worlds, HCI had to evolve, and the overall concept of UX became more of a consideration.
As Bonnardel [20] argues, as UI evolve in response to, for example, games and 3-D environments, novel techniques must be used that are future focused. Equally importantly, such UI need to go beyond functionality and elicit feelings of fun and enjoyment in users [21], highlighting the significance of the overall UX, which is enhanced through increased emotional investment, or affective perceptions [22, 23, 24]. As Tractinsky et al. [23] report, correlations exist between users’ perceptions of the aesthetics of the HCI system and its usability. Jakubowski [25] sums this up when stating that the most important aspect of HCI is the influence of a good UX on user productivity. McCarthy and Wright [26] define UX as a qualitative experience while interacting with products. The logical argument is that as users’ qualitative experiences increase, through more immersive, multi-modal and realistic UI, HCI improves, whether for purely functional, entertainment or educational purposes. Early gaming experiences, such as Pong and Space Invaders, came to the forefront in the 1970s. As Sahay et al. [27] report, although these early gaming iterations, like all games, had the ability to engage people, they lacked features such as shading, texture, realism and dimensionality, with unattractive and unrealistic graphics, due in part to a lack of processing power. With improvements in software and hardware, not only has gaming made huge strides with graphics becoming more realistic, but modern gaming also now incorporates artificial intelligence (AI). Evolutionary advances in portability, the range of consoles, including mobile, and network-based gaming [27] have culminated in modern online games, with more responsive controllers, that take place in virtual environments with the ability to compete against remote opponents [28]. This in turn has increased the appeal of gaming through immersion. As Jennett et al. [28] argue, not only does immersion transcend the ideas of flow, cognitive absorption (CA) and presence, it is a measure of engagement, engrossment and total immersion, as also reported by Brown and Cairns [29]. Csikszentmihalyi [30] argues that flow happens when individuals are completely engrossed in an activity to the detriment of other things. Thus, the concept of immersion involves losing track of time and cognisance of the real world, involvement and becoming lost in the game, or virtual environment, and is dependent on a good gaming experience [28]. Thus, the overall UX is enhanced, mediated through more intuitive, interactive, realistic, multi-modal and responsive UI. As Brown and Cairns [29] state, “engagement, and therefore enjoyment through immersion, is not possible if there are usability and control problems. Essentially there needs to be an invisibility of the controls for total immersion to take place.” In other words, for enhanced UX, UI need to evolve to become unobtrusive, intuitive to use and multi-sensorial, so that UI are subsumed within the interactive experience. Such advances in UI and increased UX are also apposite to interactions with MR environments.

2.2 Considerations for mixed reality environments

Most applications still make do with a WIMP-style user interface and two-dimensional input, and devices with multiple degrees of freedom remain rare [11]. However, with the growth in 3-D applications and MR environments, UI are evolving to meet the needs of users interacting with such environments. The overriding difference is that in MR environments the UI has to shift away from virtual interfaces designed to mediate interactions with computer systems to interfaces that combine both real and virtual environments and objects, dispersed at any point along the MR continuum. The ultimate aim is seamless interaction in the same environment. In this way, UI in MR environments need to be able to integrate with a real environment in which static and dynamic information streams are combined at runtime [2]. Billinghurst et al. [31] sum this up when stating that “AR interfaces are designed to enhance interactions in the real world.” UI designed to work within three dimensions contain greater complexity and need to be multi-modal and sensorial in nature. They require more degrees of freedom (DOF) and greater user efficiency due to the greater number of non-serial tasks, involving parallelism [11]. Additionally, due to the wide range of MR environments and possible applications, more interactions between users and the environments are needed, as is a wider range of UI. As Bowman et al. [32] state, the performance of UI in such environments is task and environment dependent, with specific UI, targeted at displays that may be fully or semi-immersive, being needed. Such UI are dependent on ergonomics and the target device, with input/output interactions in MR interfaces trending towards increasing naturalness, becoming more intuitive and seamless. Complexities in UI applicable to MR and 3-D environments are due to several factors. These include the range of applicable input devices, which may be discrete, continuous or a hybrid of both, alongside the navigational options potentially available, ranging from general exploration of such environments to searching for specific locations as well as more precise manoeuvring [32]. In addition, interfaces in such environments need to allow for the ability to interact with, and manipulate, objects in those environments. This can involve zooming and rotation with direct user control, physical control and/or virtual control [33]. One central feature of MR UI is integration with a real environment. The application requires information about objects and spaces whose geometry and behaviour are not under the control of the designer but must be acquired from the real environment. Real objects can be subject to real-world manipulation (e.g., in a maintenance task) or external forces. Therefore, it must be possible to track state changes in the environment. In practice, the “real world” model of a mixed reality application often consists of a combination of static information (e.g., geometry of the environment that is assumed to be fixed) and dynamic information (e.g., position and orientation information for the user and central objects) that is acquired by sensors at runtime. Sherman and Craig [33] describe direct user control as mimicking real-world interaction, physical control as using real devices, and virtual control as using virtual devices [11]. All these factors point to the fact that UI applicable to MR environments, unlike WIMP interfaces, need high bandwidth as well as efficient processors.
In addition, continuous sampling and processing, and probabilistic decoding and recognition that can unify input from parallel channels through multi-modal interfaces, are needed [12]. As van Dam [12] describes, UI in MR environments need to facilitate body-part tracking, gesture and speech recognition as well as haptic force input and feedback devices.
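
As a concrete illustration of the combination of static and dynamic information described above, the following minimal C# sketch (our illustration, not taken from any cited SDK; all type and member names are hypothetical) separates fixed environment geometry from sensor-driven poses that are refreshed at runtime:

using System.Collections.Generic;
using UnityEngine;

// Illustrative "real world" model: static geometry fixed at design time plus
// dynamic poses acquired from sensors at runtime (names are hypothetical).
public class RealWorldModel
{
    // Static information: environment geometry assumed not to change.
    public readonly List<Bounds> staticGeometry = new List<Bounds>();

    // Dynamic information: poses of the user and tracked objects, updated every frame.
    public readonly Dictionary<string, Pose> trackedPoses = new Dictionary<string, Pose>();

    // Called whenever a sensor reports a new position/orientation, so that
    // state changes in the real environment are reflected in the MR scene.
    public void UpdatePose(string objectId, Vector3 position, Quaternion orientation)
    {
        trackedPoses[objectId] = new Pose(position, orientation);
    }
}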

2.3 Considerations for mobile devices

Due to small screen sizes, limited memory, low to moderate processing power, smaller and fewer buttons and limited battery power, alongside the array of sensors, UI for mobile devices face numerous constraints, which can affect overall levels of UX. Subramanya and Li [33] classify these as device-related constraints, with user-related constraints including limited attention spans affected by mobility, changes in location and context, and users’ idiosyncrasies. Chong et al. [34] argue that the UI and mobile device size are among the most significant factors in mobile device design and report on the use of a single-layer touch-screen UI, as opposed to the more conventional multi-layer UI, with promising results in increasing overall UX. The use of low-level computer languages, termed code optimization [34], also helps reduce the strain on available memory, as does the use of touchscreen UI, as opposed to mouse-based and command-based UI topologies. This use of low-level languages and single-layer UI can potentially overcome issues arising from the noted complexities involved in developing applications and UI across various mobile platforms. Such mobile platforms can be incompatible, alongside the variety of programming languages and hardware differences, as reported in [35]. Touchscreen UI obviate the need for physical keyboards, thereby maximising available screen size whilst at the same time increasing mobility with concomitant reductions in device size, as argued in [34]. Touchscreen UI are also aesthetically more pleasing and intuitive to use, thereby potentially facilitating increased UX. As Dunlop and Brewster [36] report, with the increasing proliferation and popularity of mobile devices, issues of widening access to powerful computing services and resources through the UI need to be overcome when designing UI with good UX. In addition, alongside small visual displays, mobile devices have had poor interaction facilities, including audio and limited input/output (I/O) [36], which create challenges for mobile device UI and are exacerbated by network access issues. However, with advances in mobile device software and hardware leading to increased performance, effective UI designs have been, and are being, proposed and developed. As Choi et al. [37] report, such UI can be classified into hardware based and vision based, with vision-based UI receiving more focus because they do not need extra technical equipment or physical sensors. Such extra equipment may be inconvenient and relatively inaccurate [37], potentially leading to a less well-perceived UX due to the need to interact with additional layers, increasing the complexity of the HCI.

3. Evolution

Skarbez et al. [8] have proposed a revised RV continuum, as shown in Figure 2, based on the idea that MR environments that do not affect the interoceptive senses can be termed “external” MR environments. It is only when technology can also stimulate internal senses that virtual environments can be separated from the MR continuum. This revised continuum introduces a discontinuity within VR environments: External Virtual Reality (EVR) environments sit next to AV environments, with the discontinuity before a ‘Matrix-like’ VR environment at the extremity. This allows the continuum to take cognisance of VR environments that focus on stimulation of the interoceptive senses, while external virtual reality environments remain MR environments. Note that, in the revised RV continuum, the EVR is equivalent to the “Virtual Environment” extremity in Figure 1 and is still part of the MR environment.

Figure 2.

Revised RV continuum.

Skarbez et al. consider any form of technology-mediated reality as MR [8]. This is comprehensible when mediated reality encompasses users’ interactions with the world around them, through the use of technology as an extension of users’ minds and bodies [38]. Such arguments expand MR experiences to go beyond just visual interactions [7]. This is implicit when Paradiso and Landay [38] define extended reality (XR) as a MR environment that involves the union between sensor/actuator networks and shared online virtual worlds. To take account of the interaction between sensor networks and virtual worlds and how a user experiences them, Skarbez et al. have redefined MR as an environment “in which real world and virtual world objects and stimuli are presented together within a single percept,” where different senses, not just sight, may be affected.

As van Dam [12] argues, evolutions in UI need to match human perceptual, cognitive, manipulative and social abilities. At the same time, interactions need to be as seamless and natural as possible, thereby increasing overall UX. What these evolutionary advancements perhaps highlight, concomitant with advancements in gaming and 3-D MR environments, is that an overall paradigm shift has occurred, and is occurring, from UI that can be viewed as purely functional and non-interactive, such as line printers and earlier iterations of visual display units (VDU), through graphics processing units (GPU), to more encompassing, immersive and interactive UI, in which UX is of greater importance. In simple terms, non-interactive and more functional UI, which may not offer such a high degree of UX, involve using and displaying text and images as labels that provide contextual information about objects, images, etc. In contrast, interactive UI, where UX is a more important paradigm, include buttons, toggles, sliders and other components that facilitate interactions with UI tools, such as icons on touch screens. Evolutionary UI inputs have been aimed at increasing productivity and efficiency through increased interaction, starting with the mouse developed by Douglas Engelbart in 1964. The mouse allowed for greater computer screen interaction in 2-D worlds. Since then, UI input devices have evolved from the mouse, in line with advancements in gaming and 3-D MR environments, in which improvements in UX are paramount, to include game controllers, motion controllers, hand-tracking devices and, in 2018, the Litho controller [39]. According to Hillmann [39], the Litho controller is an innovative solution that may be able to address shortcomings in hand input or traditional controllers, helping with hand fatigue and increasing haptic feedback.

The beginnings of HCI in the late 1950s and early 1960s involved “batch processing”, in which programs and data were read from cards or tape (paper or magnetic) until termination with a printed output via line printers. VDUs superseded such operations, though these were still restricted to scrolling commands and responses one line at a time [40]. Research carried out by Ivan Sutherland in the early 1960s led to the development of more powerful computing systems and graphics, with developments in GPUs, or graphics cards, as well as developments and evolution in object-oriented programming concepts [41]. Object-oriented programming is the fundamental paradigm of the C# programming language and focuses on data objects instead of functions and logic; it also provided the inspiration for the development of Object-Oriented User Experience (OOUX), which classifies the objects that users interact with first, before assigning actions to those objects [41]. This seems particularly apposite for MR environments and UI. As Hillmann [41] argues, OOUX allows for better interaction with spatial 3-D worlds. GPU developments have facilitated accelerations in graphical rendering, with many pieces of data being processed simultaneously, leading to more powerful and faster computers. Such evolutionary developments led to the creation of the Xerox Alto in 1973. Although too expensive for widespread use, the Alto supported the use of a GUI, as opposed to prototypes [42], as well as being the precursor for evolutionary advancements in gaming and, ultimately, developments in 3-D immersive MR environments. The emergence of the GUI was seen as a disruptive revolution in HCI, being more advantageous and attractive in its early iterations to new users. It was not until 1985, with the release of the Apple Mac, that GUI started to be seen as successful, and only with the successful release of Windows 3.0 in 1990 were GUI more widely accepted by the government agencies and businesses that controlled research funding [40]. Also, during the late 1960s the first computer-aided design (CAD) systems were promulgated, with the development of 2-D and 3-D wireframe graphics; all CAD systems are now based on a windows-and-menus interface with 3-D models [43]. Wireframe graphics map models, images and objects in 3-D, comprising vertices and edges [44], using triaxial (x, y, z) Cartesian coordinates, where the z coordinate represents height. Wireframe graphics allow for simplicity in presentation and flexibility in the use of colour [45]. Vertices are collections of 3-D coordinates connected together into triangles, which can carry information such as colours, textures and directions [46], and are displayed through rendering and shading. Evolutions and developments in rendering and shading have further enhanced graphics and GUI. Rendering is the process of generating images, and shaders are programs that take meshes, textures, etc. as inputs to generate the output image [47]. Figure 3 illustrates how rendering works in Unity.

Figure 3.

The rendering workflow.
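
To make the vertex and triangle terminology above concrete, the short C# sketch below (our illustration, assuming a Unity scene with a GameObject carrying MeshFilter and MeshRenderer components) builds a single triangle from three (x, y, z) vertices and one index triple, the same data that rendering and shading operate on:

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class SingleTriangle : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();

        // Three vertices given as (x, y, z) Cartesian coordinates.
        mesh.vertices = new Vector3[]
        {
            new Vector3(0f, 0f, 0f),
            new Vector3(1f, 0f, 0f),
            new Vector3(0f, 1f, 0f)
        };

        // One triangle defined by indices into the vertex array.
        mesh.triangles = new int[] { 0, 1, 2 };

        mesh.RecalculateNormals();                 // per-vertex normals used by shading
        GetComponent<MeshFilter>().mesh = mesh;    // hand the mesh to the renderer
    }
}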

4. Framework

Numerous frameworks have been proposed for designing and developing UI taking UX into account. Indeed, with the advent of immersive 3-D MR environments and their concomitant UI, in which UX is an increasingly important concept, UX has in many cases subsumed UI as part of the design process and framework. In this way a more holistic approach can be taken in which UI and UX are combined into one paradigm, which for the purposes of this chapter can be termed the user experienced interface (UXI). As Hassenzahl and Tractinsky [48] argue, overall UX is influenced by end users’ internal states, including predispositions, expectations, needs, motivations and emotions; the characteristics of the designed interface, including complexity, purpose, usability and functionality; and the context within which the interaction occurs, be it the organisational or social setting or the meaningfulness of the activity.

The design of the interface, or system, can be conflated with the UI, whereas the context in which the interaction occurs can be conflated with immersive experiences in 3-D MR environments, whether or not mobile devices are involved, all encompassed within the overarching UX paradigm. Going a step further, Hassenzahl [49] put forward a model for UX design in which users perceive interactions with products along two dimensions: hedonism and pragmatism. The hedonic aspect refers to users’ interactive experiences and the ability of the system to support what have been termed “be-goals”, which correlate more with enjoyability and the emotions involved in interaction. In contrast, the pragmatic aspect refers to the perceived ability of users’ interactions with the system to support “do-goals”, which correlate more with functionality and efficiency.

Hillmann [39] argues in favour of the importance of frameworks for designing and deploying UX in regard to MR, or XR, environments and applications. Such frameworks can be enhanced by incorporating integrated development environments (IDE) with built-in presets that allow for faster prototyping and iteration. One such appropriate IDE is the Unity 3-D game engine, which provides opportunities for developing UX for MR environments. Unity is a software framework that provides a set of tools for “developers around the world to create rich, interactive, 2-D, 3-D, VR and AR experiences” (Unity public relations fact page, n.d.), negating the need to construct virtual spaces from the ground up [50]. Examples of preset built-in core functionality include the AR Foundation package, which provides presets and plugins to enable the development of immersive AR applications for mobile devices (both Android and Apple), as well as web-based platforms and wearables. This is especially useful when dealing with the mobile device considerations outlined in Section 2.3.
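
As a hedged example of what such built-in functionality provides, the C# sketch below (ours, assuming a Unity scene already configured with an AR Session, an AR Session Origin and an ARPlaneManager component; event and type names may differ slightly between AR Foundation versions) reacts to planar surfaces detected in the physical environment:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

[RequireComponent(typeof(ARPlaneManager))]
public class PlaneDetectionLogger : MonoBehaviour
{
    ARPlaneManager planeManager;

    void OnEnable()
    {
        planeManager = GetComponent<ARPlaneManager>();
        planeManager.planesChanged += OnPlanesChanged;   // raised when planes are added, updated or removed
    }

    void OnDisable()
    {
        planeManager.planesChanged -= OnPlanesChanged;
    }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // Each newly detected plane could be used to anchor virtual content in the real scene.
        foreach (var plane in args.added)
            Debug.Log($"New surface detected at {plane.transform.position}");
    }
}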

Other important considerations in UX design include the interaction between user needs, whether those of private enterprises or public organisations, and business goals, along with the fundamentals of end users’ wants and needs, ideation, prototyping, testing and implementation, with iteration [39]. Figure 4 illustrates this principle.

Figure 4.

The UX design process.

By evaluating end users’ needs and wants, the usability, usefulness and desirability of the UX in MR can be determined. Hillmann [39] classifies this as the trinity of UX design, which is illustrated in Figure 5.

Figure 5.

The Trinity of UX.

To arrive at this trinity and design UX effectively and efficiently, collaboration with end users is therefore another key facet of any framework, much like the incorporation of an IDE such as Unity. On this basis, a modified UXI framework built on the foundations outlined by Hillmann’s UX design process is proposed. This is illustrated in Figure 6.

Figure 6.

UXI framework.

In this framework, aimed at MR applications, UI and UX are merged into one holistic paradigm: UXI. In the consultation phase a collaborative approach is needed to ascertain end users’ needs and wants, be the desired goal an AR education app or a VR app aimed at private enterprise, alongside usability, more specifically ease of use. This leads to the ideation phase, in which the development of the MR application takes place using the Unity IDE, with built-in packages aimed at the development of immersive MR environments. One such example is the AR Foundation package, which contains MonoBehaviours for planar surface detection; point clouds; reference points (arbitrary positions and orientations that devices track); light estimation (estimates of average colour temperature and brightness in physical spaces); and world tracking (tracking the device’s position and orientation in physical spaces) [51]. Post development, the MR UXI is deployed, after which feedback is a key factor leading to iteration and continual deployment, with or without changes.
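
By way of illustration, the following C# sketch (ours, not from the chapter; it assumes an ARCameraManager on the AR camera and a Light assigned in the Inspector, and the availability of light-estimation data depends on the device and AR Foundation version) consumes the light-estimation values mentioned above to match a virtual light to the physical space:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class EnvironmentLightMatcher : MonoBehaviour
{
    [SerializeField] ARCameraManager cameraManager;  // AR camera's manager, assigned in the Inspector
    [SerializeField] Light sceneLight;               // virtual light to adjust

    void OnEnable()
    {
        sceneLight.useColorTemperature = true;       // allow colour-temperature control on the light
        cameraManager.frameReceived += OnFrameReceived;
    }

    void OnDisable()
    {
        cameraManager.frameReceived -= OnFrameReceived;
    }

    void OnFrameReceived(ARCameraFrameEventArgs args)
    {
        // Average brightness and colour temperature are only reported on some devices.
        if (args.lightEstimation.averageBrightness.HasValue)
            sceneLight.intensity = args.lightEstimation.averageBrightness.Value;

        if (args.lightEstimation.averageColorTemperature.HasValue)
            sceneLight.colorTemperature = args.lightEstimation.averageColorTemperature.Value;
    }
}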

5. Conclusions

Advances in computing and HCI have led to changes in how humans use and interact with computers and other devices, through evolutionary developments in UI and UX. Latterly, mobile devices such as tablets and especially smart phones have become widespread in their proliferation and use. Such mobile devices now have many of the capabilities of larger computers and laptops, albeit with limitations such as less memory and power and smaller screen sizes. Due to advancements in mobile devices, such evolutions in UI and UX have been even more pronounced, leading to a blurring between UI and UX, which can no longer be viewed as discrete and separate. UI have been subsumed into overall concepts of UX and into frameworks for development and deployment. This is even more pertinent with the advent of 3-D immersive MR environments and how users interact with them. This occurs in a variety of contexts, with interactions occurring more and more on mobile devices. This chapter discussed evolutions in UI and UX and how they merged into UXI, and proposed a framework for the development of MR UXI applicable to mobile devices, as well as other devices in general. The next steps are to use this framework in the development of a MR environment UXI for mobile devices and to analyse the results.

References

  1. Bolton W. Programmable Logic Controllers. 6th ed. USA: Newnes; 2015
  2. Stöcklein J, Geiger C, Paelke V, Pogscheba P. A Design Method for Next Generation User Interfaces Inspired by the Mixed Reality Continuum. Berlin, Heidelberg: Springer Berlin Heidelberg; 2009. pp. 244-253
  3. Onime C, Uhomobhi J, Bornschlegl MX, Engel FC, Bond R, Hemmje ML. Cost Effective Visualization of Research Data for Cognitive Development Using Mobile Augmented Reality. Cham: Springer International Publishing; 2016. pp. 35-49
  4. FitzGerald E, Adams A, Ferguson R, Gaved M, Mor Y, Thomas R. Augmented reality and mobile learning: The state of the art. In: Specht M, Sharples M, Multisilta J, editors. 11th World Conference on Mobile and Contextual Learning (mLearn 2012), Helsinki; 2012. pp. 62-69
  5. Onime C, Uhomoibhi J, Wang H, Santachiara M. A reclassification of markers for mixed reality environments. The International Journal of Information and Learning Technology. 2021;38(1):161-173
  6. Billinghurst M, Kato H. Collaborative Mixed Reality. In: Proceedings of the International Symposium on Mixed Reality. Yokohama, Japan; 1999. pp. 261-284
  7. Milgram P, Kishino F. A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems. 1994;12:1321-1329
  8. Skarbez R, Smith M, Whitton M. Revisiting Milgram and Kishino’s Reality-Virtuality Continuum. Frontiers in Virtual Reality. 2021;2:27
  9. Appert D. Conception et évaluation de techniques d’interaction non visuelle optimisées pour de la transmission d’information [Thesis]. Université Paul Sabatier - Toulouse III; 2016. Available from: https://tel.archives-ouvertes.fr/tel-01394411
  10. Barnes S. User friendly: A short history of the graphical user interface. Sacred Heart University Review. 2010;16(1):4. Available from: http://digitalcommons.sacredheart.edu/shureview/vol16/iss1/4
  11. Biström J, Cogliati A, Rouhiainen K. Post-WIMP user interface model for 3-D web applications; 2005. Available from: https://www.semanticscholar.org/paper/Post-WIMP-User-Interface-Model-for-3-D-Web-Bistr%C3%B6m-Rouhiainen/59b33d4cf54a11156a091ba98b003dfe900c98cc
  12. Van Dam A. Beyond WIMP. IEEE Computer Graphics and Applications. 2000;20(1):50-51
  13. Van Dam A. Post-WIMP user interfaces. Communications of the ACM. 1997;40(2):63-67
  14. Shaer O, Jacob R, Green M, Luyten K. User interface description languages for next generation user interfaces. In: Proceedings of the Twenty-Sixth Annual CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI ’08); 2008. pp. 3949-3952
  15. Georgakopoulou N, Zamplaras D, Kourkoulakou S, Chen CY, Garnier F. Exploring the virtuality continuum frontiers: Multisensory and magical experiences in interactive art. In: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications; 2019. pp. 175-182
  16. Bastien J, Scapin D. A validation of ergonomic criteria for the evaluation of human-computer interfaces. International Journal of Human-Computer Interaction. 1993;4(2):183-196
  17. Beaudouin-Lafon M. Designing interaction, not interfaces. In: Proceedings of the Working Conference on Advanced Visual Interfaces; 2004. pp. 15-22
  18. Imam A, Rousan T, Aljawarneh S. An expert code generator using rule-based and frames knowledge representation techniques. IEEE; 2014. pp. 1-6
  19. Odell D, Davis R, Smith A, Wright P. Toolglasses, marking menus, and hotkeys: A comparison of one and two-handed command selection techniques. In: Proceedings of Graphics Interface 2004 (GI ’04); 2004. pp. 17-24. Available from: https://ink.library.smu.edu.sg/sisresearch/815
  20. Bonnardel N. Designing future products: What difficulties do designers encounter and how can their creative process be supported? Work. 2012;41(Suppl 1):5296-5303. DOI: 10.3233/wor-2012-0020-5296
  21. Norman D. Emotion and design: Attractive things work better. Interactions Magazine. 2002
  22. Schenkman BN, Jönsson FU. Aesthetics and preferences of web pages. Behaviour & Information Technology. 2000;19(5):367-377. DOI: 10.1080/014492900750000063
  23. Tractinsky N, Katz A, Ikar D. What is beautiful is usable. Interacting with Computers. 2000;13(2):127-145
  24. Van Der Heijden H. Factors influencing the usage of websites: The case of a generic portal in The Netherlands. Information & Management. 2003;6:541-549
  25. Jakubowski M. User experience as a crucial element of future simulation and gaming design. In: Developments in Business Simulation and Experiential Learning: Proceedings of the Annual ABSEL Conference. Vol. 42; 2015
  26. McCarthy J, Wright P. Technology as Experience. Interactions. 2004;11(5):42-43
  27. Sahay A, Choudhury T, Kumar V. Hierarchical evolution of digital games. In: 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE); 2019. pp. 381-386
  28. Jennett C, Cox A, Cairns P, Dhoparee S, Epps A, Tijs T, et al. Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies. 2008;9:641-661
  29. Brown E, Cairns P. A grounded investigation of game immersion. In: CHI ’04 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’04). New York, NY, USA: Association for Computing Machinery; 2004. pp. 1297-1300. DOI: 10.1145/985921.986048
  30. Csikszentmihalyi M. Toward a Psychology of Optimal Experience. Dordrecht, Netherlands: Springer; 2014. pp. 209-226. DOI: 10.1007/978-94-017-9088-8_14
  31. Billinghurst M, Clark A, Lee G. A survey of augmented reality. Foundations and Trends in Human-Computer Interaction. 2015;8(2-3):73-100
  32. Bowman D, Kruijff E, LaViola J, Poupyrev I. An introduction to 3-D user interface design. Presence: Teleoperators and Virtual Environments. 2001;10(1):96-108
  33. Sherman W, Craig A. Understanding Virtual Reality: Interface, Application and Design. Morgan Kaufmann Series in Computer Graphics and Geometric Modeling. San Francisco, CA: Morgan Kaufmann; 2003
  34. Chong P, So P, Shum P, Li X, Goyal D. Design and implementation of user interface for mobile devices. IEEE Transactions on Consumer Electronics. 2004;50(4):1156-1161
  35. Madari I, Lengyel L, Levendovszky T. Modeling the user interface of mobile devices with DSLs. In: 8th International Symposium of Hungarian Researchers on Computational Intelligence and Informatics. Budapest, Hungary; 2007. pp. 583-589
  36. Dunlop M, Brewster S. The challenge of mobile devices for human computer interaction. Personal and Ubiquitous Computing. 2002;6(4):235-236
  37. Choi J, Park H, Park J, Park JI. Bare-hand-based augmented reality interface on mobile phone. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality; 2011. pp. 275-276
  38. Mann S, Havens J, Iorio J, Yuan Y, Furness T. All Reality: Values, taxonomy, and continuum, for Virtual, Augmented, eXtended/MiXed (X), Mediated (X,Y), and Multimediated Reality/Intelligence. In: AWE 2018, Santa Clara, California; 2018. Available from: http://genesis.eecg.toronto.edu/all.pdf
  39. Hillmann C. The Rise of UX and How It Drives XR User Adoption. Berkeley, CA: Apress; 2021. pp. 73-116. DOI: 10.1007/978-1-4842-7020-2_3
  40. Grudin J. From Tool to Partner: The Evolution of Human-Computer Interaction. Synthesis Lectures on Human-Centered Interaction. 2017;10:i-183
  41. Hillmann C. UX and Experience Design: From Screen to 3D Space. Berkeley, CA: Apress; 2021. pp. 117-155. DOI: 10.1007/978-1-4842-7020-2_4
  42. Sears A, Jacko J. The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. CRC Press; 2007
  43. Dai F. Introduction: Beyond Walkthroughs. Berlin, Heidelberg: Springer Berlin Heidelberg; 1998. pp. 1-9
  44. Cates N. A comparative survey of computer graphics applications in mechanical design. Lehigh University; 1982. Available from: https://core.ac.uk/download/pdf/228649677.pdf
  45. Kellie A. The wireframe model: Showing 3D structure with open space. In: Proceedings, 66th Midyear Conference, Engineering Design Graphics Division, ASEE; 2010. pp. 1-9
  46. Unity Learn. Rendering and Shading; 2021. Available from: https://learn.unity.com/tutorial/rendering-and-shading#5c7f8528edbc2a002053b539
  47. Hocking J. Introduction to Shaders in Unity; 2020. Available from: https://www.raywenderlich.com/5671826-introduction-to-shaders-in-unity
  48. Hassenzahl M, Tractinsky N. User experience: A research agenda. Behaviour & Information Technology. 2006;2:91-97
  49. Hassenzahl M. The hedonic/pragmatic model of user experience. In: Towards a UX Manifesto; 2007
  50. Ward J. What is a game engine? GameCareerGuide; 2008. Available from: https://www.gamecareerguide.com/features/529/whatisagame.php
  51. Unity Technologies. About AR Foundation: Package Manager UI website; 2018. Available from: https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@1.0/manual/index.html
