Open access peer-reviewed chapter

An Evaluation of Game Usability in Shared Mixed and Virtual Environments

Written By

Francesco De Pace, Federico Manuri, Andrea Sanna and Iñigo Lerga Valencia

Submitted: 09 May 2019 Reviewed: 30 July 2019 Published: 10 September 2019

DOI: 10.5772/intechopen.88922

From the Edited Volume

Game Design and Intelligent Interaction

Edited by Ioannis Deliyannis


Abstract

Augmented reality (AR) and virtual reality (VR) technologies are becoming increasingly pervasive and important in the information technology area. Thanks to technological improvements, desktop interfaces are being replaced by immersive VR devices that offer a more compelling game experience. AR games have started to draw the attention of researchers and companies for their ability to exploit both the real and virtual environments. New fascinating challenges are raised by the possibility of designing hybrid games that allow several users to access shared environments, exploiting the features of both AR and VR devices. However, the user experience and usability can be affected by several parameters, such as the field of view (FoV) of the employed devices or the realism of the scene. The work presented in this chapter aims to assess the impact of the FoV on the usability of the interfaces in a first-person shooter game. Two players, interacting with AR (first player) and VR (second player) devices, can fight each other in a large game environment. Although we cannot ascertain that the different FoVs affected the game usability, users considered the narrow-FoV interface to be less usable, even though they could freely move around the real environment.

Keywords

  • augmented reality
  • virtual reality
  • mixed reality
  • field of view
  • usability
  • shared virtual environment

1. Introduction

Figures for the video game market are impressive: it is expected to be worth over 90 billion US dollars by 2020 (BestTheNews, 2016; see Notes). A significant share relates to immersive virtual reality (VR) and augmented reality (AR) games. Moreover, technological improvements in off-the-shelf devices have opened new perspectives for researchers and developers to offer users different and intriguing game experiences.

This work presents issues related to the design and implementation of shared mixed and virtual environments (henceforth referred to as hybrid multiplayer games/environments), where players using immersive VR and AR devices can share the same environment. While many scientific works and commercial applications provide users with virtual or augmented game experiences, only a few attempts have focused on hybrid solutions, allowing augmented and virtual players to live a game experience together. A first attempt [1] proposed a hybrid environment for playing a tabletop game: chess. Users reported positive game experiences with both the AR and VR devices, but the limited field of view (FoV) of the AR device (the Microsoft HoloLens) partially limited the usability and hence the game experience.

This work aims to investigate the impact of the FoV on usability in a hybrid shooter game. In particular, the following questions will be addressed:

  1. Can the FoV limit the usability of first-person shooter games?

  2. Can the freedom of movement mitigate the limited FoV for AR players?

In order to tackle these issues, the interfaces for the VR and AR players have been implemented to be as similar as possible, thus “isolating” the effect of the FoV on the game usability.

Ten users were involved in preliminary tests; each user tested the two interfaces in turn and filled in a usability questionnaire after each test. Results show a slight preference for the VR interface, and users indicated the narrow FoV of the AR device as the most important issue, suggesting that the movement capability did not mitigate the influence of the FoV.

This chapter is organized as follows: Section 2 presents the state of the art of VR and AR games, along with previous studies on the impact of the FoV. Section 3 discusses the challenges involved in the design of hybrid shared environments, whereas Section 4 presents the proposed system and describes how the design challenges have been met. Section 5 presents the tests performed and the results obtained. Finally, Section 6 presents the conclusions and future work.


2. Related work

In the following sections, the state of the art related to AR and VR applications in the game/serious game context is presented.

2.1 VR in serious games

Since the 1990s, immersive VR headsets have been employed for gaming. One of the first examples is the Nintendo Virtual Boy, a 32-bit console capable of displaying stereoscopic content. Thanks to technological improvements, VR headsets have started to be used in contexts other than gaming (such as training, simulation, etc.), allowing users to experience new types of applications. Although these applications are still based on game mechanics, their main goal is not entertainment itself but the creation of learning environments that offer educational experiences. Analyzing the current state of the art, several examples can be found [2, 3, 4]. In [2], an immersive VR environment is employed to assess how perception affects actions in the sport domain. Stone et al. [3] illustrated how immersive VR has been employed in the aviation industry, whereas a VR headset is used in [4] for pain management. An interesting comparison between an approach based on an immersive VR environment and one based on traditional educational methods for aviation safety is proposed in [5]. Results show that the VR experience generated a knowledge gain that persists over time (1 week), whereas knowledge acquired through traditional methods is lost rapidly. Further comparisons between VR and non-VR technologies have been carried out in the game domain to analyze how an immersive VR environment differs from a non-VR one and to assess the impact of immersive VR technologies on the game experience [6, 7, 8]. In [6], a first-person shooter game is considered to evaluate the game experience using VR and non-VR technologies. Although results indicate that players preferred the non-VR environment in terms of usability, the VR environment was perceived as more compelling and attractive. Wilson and McGill [7] proposed a comparison to assess the impact of immersive VR in violent video games. The collected outcomes show that users felt a greater sense of presence and body ownership than with the non-VR solution, perceiving more violence and thus suggesting a change in the game rating. Finally, a comparison between VR and non-VR technologies in a multiplayer game is proposed in [8]. Results are in line with the outcomes of the other presented works, showing that users preferred playing multiplayer games in full VR environments rather than with non-VR systems.

2.2 AR in games

One of the first examples of collaborative gaming in an AR environment dates back to 1998, when Szalavári et al. proposed an AR system for playing multiplayer tabletop games [9]. Users had to be physically located in the same real room, and they could interact with computer-generated content using tracked probes. A few years later, Thomas et al. [10] extended the well-known desktop game Quake to an AR scenario, allowing users to play the famous game in the real environment. The original Quake virtual assets (e.g., monsters, weapons, etc.) were displayed in the real surroundings, and they could be visualized using a wearable device both indoors and outdoors.

AR technologies have also been used to improve the game experience in outdoor scenarios through wearable [11] and handheld [12] devices. Although attention must be paid to game design to avoid undesired problems [13], new devices (such as the Microsoft HoloLens) have been successfully employed to offer AR game experiences in large environments [14]. Concerning indoor environments, Nojima et al. [15] proposed an AR system to augment sports with additional computer-generated content. A modified version of dodgeball is presented in which players can fight each other using a real ball. Players can see the damage inflicted through health bars positioned above the heads of the opposing players. Regarding video games, a plane detection algorithm is used in [16] to display military virtual assets (e.g., soldiers, enemies, etc.) in an indoor environment. Using a handheld device, users can defend their environment against the assault of enemy troops.

2.3 Hybrid AR and VR games

One of the earliest attempts at creating hybrid games dates back to 2002, when Thomas et al. improved their ARQuake game, adding support for AR/VR collaborative multiplayer [17]. In addition to a custom haptic gun used to shoot at the monsters, a collaborative modality was added to the system: two different players were able to interact with the same virtual content using an AR device and a desktop interface, respectively. Some years later, Cheok et al. developed a modified version of the Pacman game, allowing different users to interact using AR and VR devices [18]. Two players, using two wearable devices, were able to interact by visualizing virtual assets superimposed on the real scenario. Furthermore, a third player, interacting in a VR environment, could help Pacman by suggesting the best path to follow. In [19], several AR and VR devices are employed to offer a cross-media experience. Each interface provides users with different functionalities, highlighting the peculiarities of each device. Clash tanks [20] proposes a slightly different approach: although users were placed in an immersive VR scenario, they could still interact with the real environment by visualizing it through a virtual monitor (an external camera is used to acquire the desired images).

Finally, another approach to using both AR and VR interfaces consists of alternating them according to the game flow [21, 22].

2.4 FoV in AR

Some studies have investigated the impact of the FoV on AR applications. In [23], an evaluation of the impact of three different FoVs (small, medium, and large) on a target-following task is proposed. Results show that although small FoVs should be avoided for following targets, no evidence was found that large FoVs improve performance with respect to medium ones. Similarly, Kishishita et al. [24] assessed the impact of the FoV of a wide AR display by comparing two different labeling techniques (in view and in situ) in a target search scenario. The main outcomes show that the discovery rate of in situ labeling increases, whereas that of the in-view method decreases, as the FoV angle approaches 100°; the performance of the two methods converges at around 130°. Ren et al. [25] developed a system to emulate different FoVs, finding that a 108 × 82° FoV allowed users to fulfill tasks more quickly than a FoV constrained to 45 × 30°. The cognitive cost of using different FoVs is evaluated in [26]: three different devices have been compared to verify whether distinct devices require different cognitive loads in a button-pressing procedural task. The main results demonstrate that spatial AR required significantly less effort than the other devices. The authors suggest that the high cognitive load of narrow-FoV devices can be reduced by using more virtual cues to drive the users’ attention.

To the best of the authors’ knowledge, no study has evaluated the impact of the FoV on the usability of a hybrid multiplayer game. Since the results of [1] showed that the FoV negatively impacted the usability of the AR interface in a tabletop game, the work presented in this chapter investigates the impact of the FoV on the usability of a game in which players can physically move in a large environment.


3. Design challenges

The realization of hybrid multiplayer environments presents several challenges. Depending on the type of game experience to be delivered, different solutions to distinct problems exist. In the following, some of the most relevant challenges are presented and discussed.

3.1 Game level design challenges

Challenges related to the game level design concern at least two different aspects:

  • The environment: how to represent the game map

  • The alter ego: how to represent the players’ characters

In the following sections, these two aspects are introduced and discussed.

3.1.1 Environment

The environment design challenge relates to how to represent and/or create the game map. For the sake of clarity, a hybrid multiplayer environment composed of one AR player and one VR player (henceforth called ARP and VRP, respectively) is considered (ARP vs. VRP). Depending on whether the ARP’s real environment can be known a priori or not, the complexity of managing the environment may vary. In the first case, if the geometry of the environment is known (e.g., from a scan or a map), a virtual representation of the environment’s shape can be shared with the VRP. Knowing the shape, the two players can effectively interact in the same environment (the ARP’s one) or in two completely different game maps that share only the same shape. In the second case, the ARP should have enough space to play in a game environment unrelated to his/her real environment (e.g., both players can interact in a virtual maze as long as the ARP has enough space to move and to visualize the virtual contents).

The complexity may increase further if two or more ARPs that are not physically in the same real environment are considered. If it is not possible to determine the environments’ shapes, all the ARPs should have enough space to play in the shared virtual world. On the other hand, if the geometry of each environment is known, the smallest one can be taken as the reference shape, or the intersection of the different geometries can be determined and used as the reference shape for the game map.

In addition to the shape of the environment, real obstacles should also be considered to properly manage the creation of the hybrid environment. Considering the ARP vs. VRP case, it should be possible to detect the positions, orientations, and types of the obstacles in the scene by scanning the real environment or by using pre-defined maps. Once the obstacles are identified, at least two different game experiences can be conveyed. In the first, there is a 1:1 match between the ARP’s obstacles and the VRP’s ones (e.g., a real chair in the ARP’s environment is recreated in the VRP’s scenario by a virtual chair with the same dimensions, position, and orientation). In the second, the dimensions, positions, and orientations of the virtual obstacles are the same as the real ones, but the virtual obstacles differ in type/shape (e.g., a real chair in the ARP’s environment is represented by a virtual rock in the VRP’s scenario); a sketch of this kind of obstacle mapping is shown below. Indeed, as the number of ARPs rises, the complexity of the obstacle matching system grows, making it hard to find an effective solution (e.g., two ARPs may have different types of obstacles in the same positions or distinct obstacles in different positions). In addition to the real obstacles, virtual objects may be added to improve the game experience, making the game more compelling and attractive.
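As an illustration of the obstacle-matching idea, the following C# (Unity-style) sketch spawns virtual stand-ins for a list of real obstacles described in a shared reference frame. It is a minimal sketch under assumed data structures; the class, field, and type names are hypothetical and not part of the original system.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: spawn virtual counterparts of real obstacles in the
// VRP's scene. A 1:1 match uses a prefab of the same type (chair -> chair);
// a themed match swaps the prefab (chair -> rock) while keeping the pose.
public class ObstacleMatcher : MonoBehaviour
{
    // Description of a real obstacle, e.g. parsed from a pre-defined room map.
    public struct RealObstacle
    {
        public string type;          // "chair", "table", ...
        public Vector3 position;     // in the shared game-map frame
        public Quaternion rotation;
        public Vector3 size;         // real-world dimensions in meters
    }

    // Maps an obstacle type to the prefab used to represent it.
    public Dictionary<string, GameObject> prefabByType = new Dictionary<string, GameObject>();

    public void Spawn(IEnumerable<RealObstacle> obstacles)
    {
        foreach (var o in obstacles)
        {
            if (!prefabByType.TryGetValue(o.type, out var prefab))
                continue;                        // unmapped types are skipped
            var go = Instantiate(prefab, o.position, o.rotation);
            go.transform.localScale = o.size;    // assumes unit-sized prefabs
        }
    }
}
```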

Finally, interaction with the real objects of the ARP’s environment should also be considered. An ARP’s environment can be considered static when no interaction with the real objects is allowed: their positions and orientations cannot change, so synchronization with the VRP’s environment is easily maintained. On the other hand, an ARP’s environment can be considered dynamic when the positions and orientations of the real objects can be modified. In order to keep the ARP and VRP environments synchronized, sensors should be added to monitor the ARP’s environment, increasing the complexity of the setup but enabling a more engaging game modality.

3.1.2 Alter ego

The alter ego design challenge refers to how to visualize and control the players’ characters. It is assumed that at least the position and orientation of every player can be determined and employed to display the virtual alter egos.

Concerning visualization, in an ARP vs. VRP scenario, it is possible to easily visualize two different virtual characters at the positions of the corresponding players by exploiting each player’s tracked data. On the other hand, if virtual characters have to be visualized by two or more ARPs sharing the same physical space, the position of the virtual alter egos with respect to the real players should be carefully planned. The virtual characters can be superimposed on the real person’s body, or they can be positioned at a pre-defined distance from the real player. In the first case, although the positions of the virtual character and the real person match, undesired occlusions can arise that may negatively affect the visualization of the virtual alter ego. In the second case, the virtual characters can be visualized better than in the first scenario, but the ARPs may not clearly understand which entity (the real person or the displaced virtual alter ego) to pay attention to.

Character control refers to how to provide input to and animate the virtual character. Both visualization and control can depend on the tracking approach used to monitor the players’ movements and actions. Since the most common AR and VR devices currently on the market usually differ in tracking capability (commonly, VR systems can track more body parts than AR devices), the input modalities and the number of recognizable actions may vary between ARPs and VRPs. As the number of tracked actions increases, the possibility of providing precise input and realistic animation grows. Actions may be tracked by using external marker/markerless systems and/or by using physical joysticks to detect the players’ input. In order to reduce the complexity of managing character control, pre-defined animations can be played every time a certain event arises (e.g., if the player’s displacement speed exceeds a pre-defined threshold, a run animation can be played), as sketched below.
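The pre-defined animation mechanism mentioned above could be sketched as follows in C# (Unity-style); the threshold value and the Animator parameter names are illustrative assumptions, not taken from the original system.

```csharp
using UnityEngine;

// Sketch: play a pre-defined "run" animation whenever the tracked
// displacement speed of the player exceeds a threshold.
[RequireComponent(typeof(Animator))]
public class SpeedDrivenAnimation : MonoBehaviour
{
    public float runThreshold = 1.2f;   // m/s, tuned per game (assumed value)
    private Animator animator;
    private Vector3 lastPosition;

    void Start()
    {
        animator = GetComponent<Animator>();
        lastPosition = transform.position;
    }

    void Update()
    {
        // Estimate the displacement speed from the tracked position.
        float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;

        // The Animator transitions between pre-defined clips (idle/run)
        // driven by these parameters (names are hypothetical).
        animator.SetFloat("Speed", speed);
        animator.SetBool("IsRunning", speed > runThreshold);
    }
}
```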

Finally, the user interface (UI) of the different devices should be carefully planned to avoid undesired effects. When immersive headsets (for VR) or optical see-through devices (for AR) are employed, the use of 2D elements for the UI should be avoided: it has been shown that visualizing virtual assets on planes placed at depths different from those of the real objects may generate focal problems and reduce the user’s attention [27, 28]. Hence, the UI should be designed using only 3D computer-generated content. On the other hand, when using smartphones or desktop interfaces, 2D UI elements can be employed without incurring undesired problems.

3.2 AR challenges

AR challenges refer to issues that are strictly related to the ARP only. They can be divided into:

  • Reference system: how to correctly align the ARPs in the virtual environment

  • Game map restriction: how to prevent ARPs from physically leaving the game environment

  • Occlusion: how to manage the occlusion between real and virtual objects

  • Spawning points: how to effectively spawn the ARPs in the game environment

These challenges are detailed in the following sections.

3.2.1 Reference system

The reference system challenge refers to how to determine the position and orientation of the ARP with respect to a known reference system. At least two different approaches exist:

  1. Marker/object tracking

  2. ARP’s pre-defined starting position

The first approach relies on the use of marker or markerless technologies. An image target can be positioned at a pre-defined distance from the ARP or at a known, specific position in the real environment. Alternatively, object recognition approaches can be employed to determine the ARP’s position and orientation with respect to a known object in the real environment. Indeed, the position of the tracked object with respect to the real environment should be known in order to correctly align the ARP in the game map. In the second approach, on the other hand, the ARP should start at a pre-defined position (it is assumed that the coordinates of this position with respect to the game environment are known) with a preset orientation.

3.2.2 Game map restriction

The game map restriction challenge concerns how to keep ARPs from leaving the game map. Whereas in the VRP’s environment this behavior is relatively simple to avoid (it can be managed by the game logic itself), it is not physically possible to prevent ARPs from entering areas outside those defined for the game. One possible approach is to place peculiar virtual assets at critical positions to limit the ARP’s movements. The design of these assets should refer either to the concept of “forbidden” or to existing real objects that are commonly used to prevent people from crossing into certain areas (such as barriers or barricades).

3.2.3 Occlusion

The occlusion challenge refers to the problems that arise when a virtual object has to cover a real one (or a part of it) in the ARP’s environment: a virtual representation of the real object should be created and superimposed on the real object. The occlusions between the two virtual objects can then be easily managed by the game logic itself. In addition, it may be desirable to make the virtual representations of the real objects invisible (while keeping the occlusions). One possible solution consists of rendering the virtual objects with special materials (e.g., although black is rendered as transparent on the HoloLens device, it still occludes any object positioned behind the black one).

3.2.4 Spawning points

The management of spawning points should be carefully considered if two or more ARPs located in different places need to be added to the same virtual environment. In fact, although the ARPs are physically in different positions, they could share the same virtual position, overlapping in the virtual environment. One possible solution consists of letting players know which positions are already occupied by virtual entities before they join the game session.


4. The proposed system

Since the players interact using different devices, the usability of the proposed system is affected by the type of interface employed to interact in the virtual environment. Hence, in order to isolate the influence of the FoV, the aim has been to offer the same game experience, overcoming the intrinsic differences of the employed devices. To achieve this goal, despite the differences in hardware and environments, some design choices have been made to make the interactions and the game maps for the AR and VR players as similar as possible.

In the following sections, the hardware and software architecture, along with the interfaces and the game level design, are discussed. Moreover, the choices employed to overcome/mitigate some of the challenges discussed in Section 3 are presented and detailed.

4.1 The system architecture

The system architecture is composed of two different entities: an Oculus Rift connected to an Intel Core i7-7700 (16.0 GB RAM, NVIDIA GeForce GTX 1070 Ti) personal computer (PC) and a Microsoft HoloLens device. The first entity corresponds to the VRP workstation, the second one to the ARP workstation. Both workstations are connected to the same local area network (LAN) using UDP socket connections. In addition to the headset, the VRP workstation is equipped with two infrared sensors and two Oculus Touch controllers. The VRP is thus capable of interacting in the virtual environment using the controllers and the headset tracked by the external sensors. The ARP workstation is equipped with a HoloLens clicker, a Bluetooth device used by the ARP to interact with the virtual assets.

The hybrid environment has been developed in C# using Unity 2018.3.4 as the development environment. The VRP workstation acts as a server (specifically as a host), whereas the ARP workstation acts as a client. In order to manage the network connection and to access the data of the two devices, the following libraries have been used (a minimal networking sketch is shown after Figure 1):

  • The MixedRealityToolkit-Unity 2017.4.3 to access the HoloLens hardware data

  • The SteamVR Plugin to access the Oculus Rift hardware data

  • The UNet Unity API (high-level API) to manage the client-server architecture

In Figure 1 the system architecture is shown.

Figure 1.

The system architecture.
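As a rough idea of how the client-server replication could look with the UNet high-level API named above, the following C# sketch replicates a drone’s health and routes the fire action through the host. It is a minimal, hypothetical sketch: the class name, damage value, and the use of a raycast are assumptions, and movement synchronization is left to a NetworkTransform component.

```csharp
using UnityEngine;
using UnityEngine.Networking; // UNet high-level API (deprecated in later Unity versions)

// Sketch: the VRP workstation runs as host, the ARP's HoloLens joins as client.
public class NetworkedDrone : NetworkBehaviour
{
    [SyncVar] public int health = 100; // automatically replicated to all peers

    void Update()
    {
        // Each machine drives only its own drone; remote copies are kept
        // in sync by a NetworkTransform component attached to the prefab.
        if (!isLocalPlayer) return;
        // ... map headset/controller input to the drone transform here ...
    }

    [Command] // invoked by the owning client, executed on the host
    public void CmdFire(Vector3 origin, Vector3 direction)
    {
        if (Physics.Raycast(origin, direction, out RaycastHit hit))
        {
            var target = hit.collider.GetComponent<NetworkedDrone>();
            if (target != null) target.health -= 10; // SyncVar propagates the change
        }
    }
}
```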

In the following sections, the AR and VR interfaces and the methodologies adopted to limit the challenges of Section 3 are presented and detailed.

4.2 The level design

In order to provide the same interaction to both the ARP and the VRP, the interfaces and the game environments have been designed to be as similar as possible. Several aspects have been considered:

  • The proposed alter ego: the player’s character and its interaction paradigm

  • The proposed UI: the graphic and sound elements that provide information to the players

  • The proposed environment: how to represent the game map

  • The AR issues

In the following, these aspects are presented and discussed.

4.2.1 The proposed alter ego

The alter ego of both players is represented by a virtual drone. It was decided not to employ a humanoid (or more articulated) character for two reasons: firstly, to limit interaction issues (e.g., how the user would control an anthropomorphic character) and, secondly, due to the narrow FoV of the employed AR device. As stated in [29], a humanoid character can be hard to view in its entirety using the HoloLens device, and users may be forced to frequently change their point of view to visualize the whole body. Hence, a simpler, smaller, and more visible alter ego has been preferred.

The drone character is capable of flying in all directions and firing from two lateral laser cannons (Figure 2).

Figure 2.

The proposed alter ego.

To make sure that both players are able to execute these actions in a similar way, the character’s inputs have been mapped to the alter ego considering the interaction mode of each interface. Concerning the ARP, the position in the world reference system and the local rotation of the HoloLens have been directly mapped to the drone’s global position and local rotation. It is worth noticing that the maximum height reachable by the drone is thus equal to the height of the ARP. The laser fire action exploits the direction of sight as a gunsight and the button of the HoloLens clicker as a fire button. The gesture recognition capability of the HoloLens has not been employed, as hands can occlude the line of sight and gestures can strain arm muscles.

The movement and shooting actions of the VRP have been designed and developed in a similar way. The left Oculus Touch controller and the rotation of the headset are employed to move the alter ego: the rotation of the headset provides the direction of translation, whereas the movement of the left joystick along its vertical axis provides the positive or negative magnitude of the translation along the view direction. To give the VRP the possibility of moving sideways, the movement of the left joystick along its horizontal axis provides both the magnitude and the direction of a lateral translation; hence, the VRP is able to move side to side independently of the rotation of the headset. Since the height of the drone controlled by the ARP is constrained by the height of the real AR player, the height of the VRP’s drone has been fixed to a pre-defined value so that the VRP cannot reach areas of the game map not accessible by the ARP (e.g., the ceiling). Finally, the shooting action has been mapped to the trigger button of the right Oculus controller, whereas the shooting direction is set by the orientation of the headset. In Figure 3 the input mapping of both players is shown.

Figure 3.

The AR input mapping (left column) and the VR input mapping (right column).
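A possible C# (Unity-style) sketch of the VRP movement mapping described above follows; the axis names, speed, and fixed height are illustrative assumptions, since the actual bindings depend on the SteamVR/Oculus input configuration.

```csharp
using UnityEngine;

// Sketch: headset rotation gives the translation direction, the left stick's
// vertical axis its signed magnitude, and the horizontal axis an additional
// sideways translation; the flight height is clamped to a preset value so the
// VRP cannot fly above what the ARP can physically reach.
public class VrpDroneController : MonoBehaviour
{
    public Transform headset;         // tracked VR camera
    public float speed = 2.0f;        // m/s (assumed value)
    public float fixedHeight = 1.7f;  // preset height matching a typical ARP (assumed)

    void Update()
    {
        float forwardInput = Input.GetAxis("LeftStickVertical");   // hypothetical axis names
        float strafeInput  = Input.GetAxis("LeftStickHorizontal");

        Vector3 move = headset.forward * forwardInput  // along the view direction
                     + headset.right * strafeInput;    // sideways, relative to the current view
        Vector3 next = transform.position + move * speed * Time.deltaTime;
        next.y = fixedHeight;                          // constrain the flight height
        transform.position = next;
    }
}
```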

4.2.2 The proposed UI

In order not to occlude the FoV of either device and to ensure that both players can pay attention to the environment and the game itself, the UI has been kept as minimal as possible. It is essentially composed of three elements that represent the player’s life, the action of being shot, and the action of firing. The player’s life is represented by a health bar positioned at a pre-defined distance from the tip of the drone (see Figure 2). To make players aware of being shot, a red panel is rendered for a few moments in front of the camera when the player is hit by a laser shot. The panel material is an RGBA color whose alpha channel is set to 0.1 (nearly transparent). As a player is hit, the alpha channel is increased to make the panel more visible, giving the player the sensation of losing life. Finally, a firing sound is played when the shooting action is executed to improve the realism and the sense of immersion of the game. The two different interfaces are depicted in Figure 4.

Figure 4.

The AR and VR interfaces.
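The hit-feedback panel could be implemented along the following lines in C# (Unity-style); the flash alpha and fade speed are assumed values, and only the resting alpha of 0.1 comes from the text above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: a red full-screen overlay whose alpha rests at 0.1 and is raised
// on each hit, then faded back, conveying the sensation of losing life.
public class DamageOverlay : MonoBehaviour
{
    public Image panel;             // full-screen red UI image
    public float baseAlpha = 0.1f;  // nearly transparent at rest (from the text)
    public float hitAlpha = 0.5f;   // flash value when hit (assumed)
    public float fadeSpeed = 1.0f;  // alpha units per second (assumed)

    private float alpha;

    void Start() { alpha = baseAlpha; }

    // Called by the game logic whenever the player is hit by a laser shot.
    public void OnHit() { alpha = hitAlpha; }

    void Update()
    {
        alpha = Mathf.MoveTowards(alpha, baseAlpha, fadeSpeed * Time.deltaTime);
        Color c = panel.color;
        panel.color = new Color(c.r, c.g, c.b, alpha);
    }
}
```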

4.2.3 The proposed environment

To provide the same game experience to both players, a virtual representation of the ARP’s environment was created by modeling it using Rhino 5 and Blender 2.79. The environment consists of two rooms of Politecnico di Torino connected by a hallway. Starting from a detailed map, which included the positions/orientations and types of the obstacles (chairs, tables, etc.), a designer modeled the corresponding virtual environment, matching 1:1 the obstacles’ positions/orientations and types in the ARP’s environment to the VRP’s ones (Figure 5).

Figure 5.

The real office and its virtual representation. Notice the 1:1 match between the real and virtual objects.

Considering the adopted client-server architecture, the proposed virtual environment has been deployed on the server (the VRP system, acting as a host); thus the virtual representations of the two players act in the same virtual scenario. The main difference is the visualization of the virtual scenario: it is displayed for the VRP but not for the ARP. Hence, the ARP sees the real environment, augmented with the virtual obstacles with which he/she can “collide.” In addition to the aforementioned virtual environment, it has been necessary to employ a scanned version of the same scenario to stabilize the HoloLens tracking; as suggested in [14], a scan of the entire real scenario has been made using the HoloLens SLAM capability. The generated model is used only in the ARP’s client application to improve the HoloLens’ spatial mapping feature, stabilizing the tracking. In Figure 6, the modeled and scanned environments and the original map of the Politecnico’s rooms are shown.

Figure 6.

Top-left image: the map used to model the virtual environment. The red zone indicates the game area. Top-right image: the scanned map obtained using the SLAM feature of the HoloLens. Bottom-left image: the virtual environment. Bottom-right image: notice the matching of the virtual environment with the scanned map.

4.2.4 The proposed solution of the AR issues

Several of the AR issues have been solved or limited. Concerning the alignment of the reference system, an approach based on a preset starting position of the ARP was preferred over one based on marker or object tracking. Hence, in order to compute the position and orientation of the HoloLens with respect to the real room, the device is placed on the floor in a preset configuration while the system boots (Figure 7 top-left). When the calibration procedure is completed, the ARP can wear the device, and his/her position and orientation are correctly determined by the system.

Figure 7.

Top-left image: the pre-defined starting position of the HoloLens. Top-right image: an example of virtual barrier employed to limit the ARP’s movements. Bottom-left image: in the background it is possible to visualize the contrast between the real environment and the black material used to make invisible the scanned environment. Bottom-right image: an example of occlusion between a real (the door) and a virtual object (the barrier).
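The preset-pose calibration can be reduced to a single rigid-transform computation, sketched below in C# (Unity-style) under the assumption that the virtual scene root is authored in room coordinates and starts at the identity pose; all names are hypothetical.

```csharp
using UnityEngine;

// Sketch: the device boots while lying at a known pose in the room, so the
// transform from the HoloLens tracking frame to the room (game map) frame
// can be computed once and applied to the whole virtual scene.
public class PresetPoseCalibration : MonoBehaviour
{
    public Transform virtualSceneRoot;   // all game-map content, initially at identity
    public Vector3 knownStartPosition;   // device boot pose in room coordinates
    public Quaternion knownStartRotation;

    public void Calibrate(Transform deviceAtBoot)
    {
        // Rotation that maps room-frame directions to tracking-frame directions.
        Quaternion delta = deviceAtBoot.rotation * Quaternion.Inverse(knownStartRotation);
        virtualSceneRoot.rotation = delta;
        // Choose the translation so the known room-frame pose lands on the
        // device's measured boot position in the tracking frame.
        virtualSceneRoot.position = deviceAtBoot.position - delta * knownStartPosition;
    }
}
```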

In order to prevent the ARP from entering areas not included in the game environment, some virtual assets evoking the concepts of “forbidden,” “work in progress,” or “prohibition” have been added to the game map (see Figure 7 top-right). They have been positioned in front of closed doors or corners through which players should not pass.

Finally, to properly manage the occlusions between real and virtual objects, the material of the scanned version of the room employed to stabilize the HoloLens tracking (see Section 4.2.3) has been modified. It has been changed to pure black, which is rendered as transparent on the HoloLens: the scanned mesh thus remains invisible, the real objects stay visible, and virtual objects behind the scanned geometry are occluded (see Figure 7 bottom-left and bottom-right).
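This trick can be applied programmatically with a few lines of C# (Unity-style); a minimal sketch, assuming the scanned room mesh is grouped under a single root object, is shown below.

```csharp
using UnityEngine;

// Sketch: on the HoloLens' additive display pure black emits no light, so a
// black unlit material makes the scanned room mesh invisible to the player
// while its geometry still hides virtual objects behind it via the depth buffer.
public static class ScannedRoomOccluder
{
    public static void MakeInvisibleOccluder(GameObject scannedRoot)
    {
        var black = new Material(Shader.Find("Unlit/Color")) { color = Color.black };
        foreach (var r in scannedRoot.GetComponentsInChildren<MeshRenderer>())
            r.sharedMaterial = black; // rendered black, i.e., transparent on the HoloLens
    }
}
```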

4.3 The game modality

The game modality consists of a first-person shooter game in which the two players (the ARP and the VRP) compete against each other, impersonating two drones. The objective is to eliminate the opponent’s drone while trying to stay alive. To make the game more engaging, some virtual traps, visible to both players, have been added to the virtual environment. The traps are placed on some areas of the floor and walls, and they are activated when a player enters their range (Figure 8). A demonstrative video can be found at the link listed in the Notes.

Figure 8.

A virtual trap positioned on the real floor.
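A range-activated trap like the one in Figure 8 could be sketched as follows in C# (Unity-style), using a trigger collider as the trap’s range; the damage, cooldown, and the reference to the NetworkedDrone class from the earlier networking sketch are all assumptions.

```csharp
using UnityEngine;

// Sketch: any drone entering the trap's trigger volume takes damage,
// with a cooldown between activations.
[RequireComponent(typeof(Collider))]
public class ProximityTrap : MonoBehaviour
{
    public int damage = 20;        // assumed value
    public float cooldown = 3.0f;  // seconds between activations (assumed)
    private float lastFired = float.NegativeInfinity;

    void OnTriggerEnter(Collider other)
    {
        if (Time.time - lastFired < cooldown) return;
        var drone = other.GetComponent<NetworkedDrone>(); // see the networking sketch above
        if (drone != null)
        {
            drone.health -= damage; // applied on the host; the SyncVar replicates it
            lastFired = Time.time;
            // trigger the trap's activation animation/effects here
        }
    }
}
```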


5. Tests and results

This section presents the results obtained in a preliminary study on the usability of the presented hybrid multiplayer game environment. These preliminary tests involved 10 people in 5 different sessions, with 2 users testing the system in each session. A test session is split into two parts. During the first part, the first user plays with the VR device and the second user plays with the AR device; after about 10 minutes, users are asked to fill in a usability questionnaire [30] (the SUS, a reliable and well-consolidated usability questionnaire). During the second part, the users exchange devices, again play for 10 minutes, and then fill in the usability questionnaire again. Two questions were added to the usability questionnaire, asking users to indicate, in their opinion, the best and the worst element of the AR/VR game. Users were PhD or MSc students of computer science courses, and all of them use AR and VR technologies at least occasionally. For this reason, the training phase at the beginning of each part was very short and was basically aimed at showing users the point-and-shoot mechanism and, only for the VR player, the commands to move the avatar. The users involved in the tests were nine males and one female, with ages ranging from 21 to 36 years. All users were volunteers and provided informed consent before the tests, in accordance with the Declaration of Helsinki.

Tables 1 and 2 list the results for the augmented and virtual applications, respectively. The mean value μ_AR and the variance σ²_AR for the AR application are 27.3 (68.25 normalized to a 0-100 scale) and 67.3, respectively, whereas the mean μ_VR and variance σ²_VR for the VR application are 30.5 (76.25 normalized to a 0-100 scale) and 23.1, respectively. Under the null hypothesis H₀ that μ_AR = μ_VR and the alternative hypothesis that the two means differ, an unpaired Student's t-test has been performed, obtaining a probability p = 0.2937, higher than 5%; therefore, the null hypothesis cannot be rejected, and the difference between the two mean usability values is not statistically significant. It cannot be claimed that one application is more usable than the other, even if a slight preference for the VR solution can be noticed.
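As a sanity check, assuming the standard pooled-variance form of the unpaired t-test with equal group sizes ($n_1 = n_2 = 10$), the reported statistic can be reproduced from the values above:

$$ s_p^2 = \frac{\sigma_{AR}^2 + \sigma_{VR}^2}{2} = \frac{67.3 + 23.1}{2} = 45.2, \qquad t = \frac{30.5 - 27.3}{\sqrt{45.2\left(\frac{1}{10} + \frac{1}{10}\right)}} \approx \frac{3.2}{3.01} \approx 1.06, $$

with $\mathrm{df} = n_1 + n_2 - 2 = 18$, which corresponds to a two-tailed $p \approx 0.29$, consistent with the reported $p = 0.2937$.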

User | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1 | 3 | 4 | 3 | 4 | 3 | 4 | 4 | 4 | 4 | 4
2 | 1 | 3 | 2 | 4 | 2 | 2 | 3 | 4 | 2 | 4
3 | 2 | 3 | 3 | 1 | 3 | 4 | 2 | 3 | 1 | 1
4 | 1 | 4 | 3 | 3 | 3 | 4 | 3 | 2 | 3 | 4
5 | 1 | 4 | 3 | 4 | 2 | 2 | 3 | 3 | 4 | 4
6 | 2 | 4 | 3 | 4 | 3 | 4 | 3 | 4 | 3 | 4
7 | 2 | 2 | 1 | 2 | 2 | 3 | 3 | 2 | 3 | 4
8 | 2 | 2 | 4 | 1 | 3 | 3 | 4 | 3 | 4 | 4
9 | 0 | 1 | 0 | 1 | 3 | 2 | 0 | 0 | 0 | 1
10 | 1 | 4 | 4 | 3 | 1 | 3 | 4 | 4 | 2 | 4

Table 1.

Results obtained by testing the augmented application.

User | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1 | 1 | 3 | 3 | 4 | 2 | 2 | 3 | 3 | 3 | 4
2 | 3 | 4 | 3 | 4 | 3 | 2 | 3 | 3 | 3 | 4
3 | 3 | 3 | 3 | 2 | 3 | 4 | 2 | 4 | 3 | 3
4 | 3 | 4 | 3 | 3 | 3 | 4 | 3 | 4 | 3 | 4
5 | 3 | 4 | 3 | 3 | 2 | 2 | 1 | 2 | 2 | 4
6 | 2 | 3 | 3 | 3 | 3 | 3 | 2 | 3 | 2 | 3
7 | 2 | 2 | 3 | 2 | 2 | 1 | 2 | 2 | 3 | 4
8 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
9 | 3 | 4 | 3 | 4 | 3 | 4 | 3 | 4 | 2 | 3
10 | 1 | 4 | 4 | 3 | 3 | 2 | 3 | 4 | 4 | 4

Table 2.

Results obtained by testing the virtual application.

Although the statistical analysis does not provide evidence of a user preference and many more users should be involved in the tests, the comments about the best and the worst elements of both applications provided very interesting food for thought. Most users indicated the FoV as the main limit of the AR interface (as in [1]): it is very difficult to identify the enemy's position, in particular when the two opponents are close. On the other hand, the possibility of moving in a real environment was the most appreciated aspect. When using the VR application, users appreciated the level of realism with which the environment had been modeled, and they found the gaze-based pointing mechanism implemented for the VR interface to be a limitation. Only one user experienced slight motion sickness using the VR application.

Finally, it is worth noticing that only two AR users left the game area, suggesting that the design of the virtual barriers effectively limited the ARPs' movements.


6. Conclusions

This chapter presents a study on the impact of the FoV on usability in a hybrid AR/VR multiplayer game. Starting from an analysis of the challenges and issues that may arise during the design of hybrid scenarios, a large environment in which two players can fight each other using AR and VR interfaces has been proposed. In order to isolate the FoV parameter, the game environments and the user interfaces have been designed and developed to be as similar as possible, overcoming their intrinsic differences. The usability of both interfaces has been evaluated in a first-person shooter game. Although it has not been possible to statistically demonstrate that the FoV impacted the usability of the interfaces, most users clearly stated that the FoV negatively impacted the AR game experience. Nonetheless, they appreciated the possibility of moving freely in the real environment.

Future work will involve more players, considering more than two players per session and different devices.

References

  1. De Pace F, Manuri F, Sanna A, Zappia D. Virtual and augmented reality interfaces in shared game environments: A novel approach. In: Intelligent Technologies for Interactive Entertainment. Cham: Springer; 2018. pp. 137-147
  2. Bideau B, Kulpa R, Vignais N, Brault S, Multon F, Craig C. Using virtual reality to analyze sports performance. IEEE Computer Graphics and Applications. 2009;30(2):14-21
  3. Stone RJ, Panfilov PB, Shukshunov VE. Evolution of aerospace simulation: From immersive Virtual Reality to serious games. In: Proceedings of the 5th International Conference on Recent Advances in Space Technologies (RAST 2011). Piscataway, New Jersey, USA: IEEE; 2011. pp. 655-662
  4. Tong X, Gromala D, Amin A, Choo A. The design of an immersive mobile virtual reality serious game in cardboard head-mounted display for pain management. In: Pervasive Computing Paradigms for Mental Health. Cham: Springer; 2015. pp. 284-293
  5. Chittaro L, Buttussi F. Assessing knowledge retention of an immersive serious game vs. a traditional education method in aviation safety. IEEE Transactions on Visualization and Computer Graphics. 2015;21(4):529-538
  6. Pallavicini F, Ferrari A, Zini A, Garcea G, Zanacchi A, Barone G, et al. What distinguishes a traditional gaming experience from one in virtual reality? An exploratory study. In: Applied Human Factors and Ergonomics. Cham: Springer; 2017. pp. 225-231
  7. Wilson G, McGill M. Violent video games in virtual reality: Re-evaluating the impact and rating of interactive experiences. In: Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. New York, NY, USA: ACM; 2018. pp. 535-548
  8. Christensen JV, Mathiesen M, Poulsen JH, Ustrup EE, Kraus M. Player experience in a VR and non-VR multiplayer game. In: Proceedings of the Virtual Reality International Conference (Laval Virtual). New York, NY, USA: ACM; 2018. p. 10
  9. Szalavári Z, Eckstein E, Gervautz M. Collaborative gaming in augmented reality. In: VRST. Vol. 98. New York, NY, USA: ACM; 1998. pp. 195-204
  10. Thomas B, Close B, Donoghue J, Squires J, De Bondi P, Morris M, et al. ARQuake: An outdoor/indoor augmented reality first person application. In: Digest of Papers, Fourth International Symposium on Wearable Computers. Piscataway, New Jersey, USA: IEEE; 2000. pp. 139-146
  11. Avery B, Piekarski W, Warren J, Thomas BH. Evaluation of user satisfaction and learnability for outdoor augmented reality gaming. In: Proceedings of the 7th Australasian User Interface Conference. Vol. 50. Sydney, Australia: Australian Computer Society, Inc.; 2006. pp. 17-24
  12. Cordeiro D, Correia N, Jesus R. ARZombie: A mobile augmented reality game with multimodal interaction. In: 2015 7th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN). Piscataway, New Jersey, USA: IEEE; 2015. pp. 22-31
  13. Bonfert M, Lehne I, Morawe R, Cahnbley M, Zachmann G, Schöning J. Augmented invaders: A mixed reality multiplayer outdoor game. In: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: ACM; 2017. p. 48
  14. Rompapas D, Sandor C, Plopski A, Saakes D, Yun DH, Taketomi T, et al. HoloRoyale: A large scale high fidelity augmented reality game. In: The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings. New York, NY, USA: ACM; 2018. pp. 163-165
  15. Nojima T, Rebane K, Shijo R, Schewe T, Azuma S, Inoue Y, et al. Designing augmented sports: Merging physical sports and virtual world game concept. In: Human Interface and the Management of Information. Cham: Springer; 2018. pp. 403-414
  16. Sung H, Wang CM. X-portal: Mixed reality for body engagement in new first person game experience. In: 2018 IEEE International Conference on Applied System Invention (ICASI). Piscataway, New Jersey, USA: IEEE; 2018. pp. 1073-1074
  17. Thomas B, Close B, Donoghue J, Squires J, De Bondi P, Piekarski W. First person indoor/outdoor augmented reality application: ARQuake. Personal and Ubiquitous Computing. 2002;6(1):75-86
  18. Cheok AD, Goh KH, Liu W, Farbiz F, Fong SW, Teo SL, et al. Human Pacman: A mobile, wide-area entertainment system based on physical, social, and ubiquitous computing. Personal and Ubiquitous Computing. 2004;8(2):71-81
  19. Lindt I, Ohlenburg J, Pankoke-Babatz U, Prinz W, Ghellal S. Combining multiple gaming interfaces in epidemic menace. In: CHI'06 Extended Abstracts on Human Factors in Computing Systems. New York, NY, USA: ACM; 2006. pp. 213-218
  20. Ranade S, Zhang M, Al-Sada M, Urbani J, Nakajima T. Clash tanks: An investigation of virtual and augmented reality gaming experience. In: 2017 Tenth International Conference on Mobile Computing and Ubiquitous Network (ICMU). Piscataway, New Jersey, USA: IEEE; 2017. pp. 1-6
  21. Vera L, Gimeno J, Casas S, García-Pereira I, Portalés C. A hybrid virtual-augmented serious game to improve driving safety awareness. In: Advances in Computer Entertainment. Cham: Springer; 2017. pp. 293-310
  22. Ferdinand P, Müller S, Ritschel T, Wechselberger U. The Eduventure—A new approach of digital game based learning combining virtual and mobile augmented reality games episodes. In: Pre-Conference Workshop Game based Learning of DeLFI 2005 and GMW 2005 Conference, Rostock. Vol. 13. 2005
  23. Ventura J, Jang M, Crain T, Höllerer T, Bowman D. Evaluating the effects of tracker reliability and field of view on a target following task in augmented reality. In: Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA: ACM; 2009. pp. 151-154
  24. Kishishita N, Kiyokawa K, Orlosky J, Mashita T, Takemura H, Kruijff E. Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks. In: 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). Piscataway, New Jersey, USA: IEEE; 2014. pp. 177-186
  25. Ren D, Goldschwendt T, Chang Y, Höllerer T. Evaluating wide-field-of-view augmented reality with mixed reality simulation. In: 2016 IEEE Virtual Reality (VR). Piscataway, New Jersey, USA: IEEE; 2016. pp. 93-102
  26. Baumeister J, Ssin SY, ElSayed NA, Dorrian J, Webb DP, Walsh JA, et al. Cognitive cost of using augmented reality displays. IEEE Transactions on Visualization and Computer Graphics. 2017;23(11):2378-2388
  27. Lamberti F, Manuri F, Paravati G, Piumatti G, Sanna A. Using semantics to automatically generate speech interfaces for wearable virtual and augmented reality applications. IEEE Transactions on Human-Machine Systems. 2016;47(1):152-164
  28. Keil J, Schmitt F, Engelke T, Graf H, Olbrich M. Augmented reality views: Discussing the utility of visual elements by mediation means in industrial AR from a design perspective. In: Virtual, Augmented and Mixed Reality. Cham: Springer; 2018. pp. 298-312
  29. De Pace F, Manuri F, Sanna A, Zappia D. A comparison between two different approaches for a collaborative mixed-virtual environment in industrial maintenance. Frontiers in Robotics and AI. 2019;6:18
  30. Brooke J, et al. SUS: A quick and dirty usability scale. Usability Evaluation in Industry. 1996;189(194):4-7

Notes

  • https://www.wepc.com/news/video-game-statistics/
  • https://www.microsoft.com/it-it/hololens
  • https://docs.microsoft.com/en-us/windows/mixed-reality/hardware-accessories
  • https://github.com/microsoft/MixedRealityToolkit-Unity/releases
  • https://assetstore.unity.com/packages/templates/systems/steamvr-plugin-32647
  • https://bitbucket.org/Unity-Technologies/networking/src/2018.3/
  • https://www.rhino3d.com/it/
  • https://www.blender.org/
  • https://drive.google.com/open?id=14vjNgLnyMWpDameigPPo9OE2eXtwKciN
  • https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
