Open access peer-reviewed chapter

Enabling a 3-D Cyberspace Experience Online

Written By

Bruce Campbell

Submitted: 10 December 2021 Reviewed: 29 December 2021 Published: 24 January 2022

DOI: 10.5772/intechopen.102416

From the Edited Volume

Computer Game Development

Edited by Branislav Sobota


Abstract

The allure of a 3-D cyberspace grows again as communication channels connect a growing number of virtual reality (VR) head-mounted displays and interaction-enabling controllers. Computer graphics enable 3-D cyberspace as much as any component critical to providing satisfactory experiences. Historically, the usefulness of 2-D interfaces became entrenched even as developers demonstrated the feasibility of 3-D cyberspace. Now a certain amount of sustained momentum seems necessary to promote 3-D graphics to the participation level of 2-D graphics. The HTML 5 stack and other technologies enable 3-D integration into our planetary shared cyberspace. This chapter provides a historical perspective on one researcher's methods for providing 3-D web experiences, the lack of critical mass for their use in the past, and how the latest virtual reality integration trajectories will bring a critical mass to the 3-D cyberspace of the future.

Keywords

  • 3-D graphics
  • cyberspace
  • virtual reality

1. Introduction

For many 1990s-era computer science students, the science fiction book Snow Crash [1] by N. Stephenson suggested a powerful use case for 3-D computer graphics: advancing an alternative online cyberspace in which large groups of human beings could meet in an audiovisual context that reduced the geographical and temporal limitations of the physical world.

Around the time I read the book, I sent 66 electronic mail messages to academic researchers at US-based universities who had publicly announced formal virtual reality research programs. Thirty-seven of those recipients replied enthusiastically with details on their research agendas, hardware and software infrastructures, and the thematic aspects of VR they intended to explore. Two suggested they had immediate funding available if I could get accepted to their campus's PhD program. Six responders suggested I pursue working with Fred Brooks at the University of North Carolina. Six suggested I pursue working with Tom Furness [5] at the University of Washington. Soon thereafter I realigned my life to take the latter recommendation seriously. I wasn't the first to imagine the potential of VR to improve quality of life for those willing to immerse themselves in 3-D cyberspace.

At the time I landed on the University of Washington campus in Seattle, the Human Interface Technology Laboratory, affectionately known as the HIT Lab, contained a remarkable amount and variety of VR-enabling software and hardware [2]. The HIT Lab also employed a highly effective cybrarian, Toni Emerson, who served as lead editor of the USENET group sci.virtual.worlds, which grew to have an avid international readership. Researchers around the world could be found posting VR-related ideas and news at any hour of any day. As a result, an endless stream of VR-based conversation and news flowed through the HIT Lab. The simple text correspondence built a collegial sense of community.

The HIT Lab worked on research grants with industrial, governmental, and academic sources of funding. The work those grants funded varied from building VR-related hardware, to building software, to performing design and development work in virtual world creation, to testing the effects of VR on human beings immersed in a wide variety of virtual experiences for extended periods. Funders seemed motivated to visit the lab often, and I understood why: the experiences provided within immersive computer graphics were thought-provoking. The human energy level in our 10,000-square-foot lab space was high 24/7.

A significant change in career path occurred for some of us when we veered away from PhD work in computer graphics hardware and algorithms to focus more on the coupling of computing platforms with human perception and cognition. Dr. Furness convinced us that the lab benefitted from being firmly located within an Industrial Engineering department, as not enough people were working on coupling the fruits of computer science research with the full capabilities of the human being.

Many of us in the lab decided to focus on VR as a tightly coupled system between man and machine. By the time we defended our doctoral theses, our department had been renamed Industrial and Systems Engineering. I had spent so much time applying VR to social spaces and investigating data streams from natural systems that the addition of "Systems" to the name alleviated my concerns about feeling like a fraud or imposter in the industrial world.

Whenever I walked into the lab to start a long work shift, I could not help but have the nagging thought that the HIT Lab experience deserved to be expanded so that more people could have it. The vision of what we were working on seemed to suggest that we ourselves should be working and playing in immersed 3-D cyberspace as we worked on VR. An undercurrent of ethical arguments surfaced from time to time, along with a burning desire to make greater access possible because it seemed technically possible, and because ever more books, movies, and radio broadcasts were telling stories within that context.

Work on my master's thesis in computer science, entitled "3D Collaborative Multiuser Worlds for the Internet" [3], provided ample opportunity to spend time in the various online VR platforms, a subset referred to as desktop VR, where I virtually walked about in curious visual spaces with like-minded researchers who felt VR should enable widespread visual 3-D human communication, unlimited by geographical location or time zone.

Because the various platforms we met in existed in the labs of research organizations, the quality of communications was high. Those of us meeting there, from all over the world, were building a 3-D cyberspace and imagining all it could become. Twenty-five years later, what we imagined is far from mainstream, but the technology to build it has continued to improve and evolve.


2. Building 3-D cyberspace in the 1990s

I had leveraged a paper I published, entitled "VRML As a Superset of HTML: An Approach to Consolidation," to first gain access to researchers at the HIT Lab [4]. That paper suggested that 3-D cyberspace might be a useful veneer for organizing all the 2-D content that was rapidly being amassed and distributed in the early years of the World Wide Web. The lab investigated that opportunity from a high-tech perspective, but many of us there shared a vision of lower-cost implementations.

In 1994, the HIT Lab collaborated with a Fujitsu research lab to build a shared 3-D cyberspace, named GreenSpace, used by participants meeting there while physically in Seattle and Tokyo [5]. A Silicon Graphics Onyx machine hosted that cyberspace's visual content and processed the various peripherals that immersed each participant in VR. Four dedicated ISDN phone lines passed voice and data packets between the two locations on opposite sides of the Pacific to create a shared presence. Open Inventor software facilitated the building and management of the cyberspace experience. The lab celebrated upon demonstrating that two groups of people could share an experience in 3-D cyberspace with a million dollars' worth of technology.

A year later, eight members of the HIT Lab got together to work on a version of GreenSpace that could run on Pentium II personal computers, using a graphics accelerator board and the emergent World Wide Web. The Industrial Technology Research Institute (ITRI) of Taiwan provided the funding, the graphics boards, and a couple of highly capable collaborators who worked with us to find a common-ground culture for the distributed cyberspace we created. Our team comprised two computer scientists, an information scientist, an artist, three virtual world designers, and an architect who eventually married an architect on the GreenSpace team.

We worked with Pentium II hardware early in the project but soon got the desktop VR experiences running on Sun Microsystems, Digital Equipment Corp, and other UNIX-flavored machines, thanks to Java 3D software and its capable Java virtual machine, which made cross-platform applications easier to develop. The first experience we created on the Virtual Playground framework was a virtual outdoor mall that included embedded web browsers streaming 2-D and 3-D content onto virtual billboards.

The billboards could render most HTML-driven web pages, as well as show VRML-based 3-D models and, on some of the underlying hardware flavors, video. We named that virtual world Netgate Mall and spent significant time discussing what culture we would like to enable for visitors arriving through Internet communication channels [6].

Upon sharing Netgate Mall with our research partners in Taiwan, an artist collaborator there built a Taiwanese version of the mall's visual architecture that shared the same bounding boxes for navigation pathways (Figure 1). As had been demonstrated with GreenSpace, 3-D cyberspace on the desktop could be configurable, and thus personalized, without degrading human communications for many applications. We used the Taiwanese version to run a shared cyberspace world for 10- to 14-year-old children in Taiwan, with a general public access point in a Kaohsiung museum for those who otherwise lacked access to the hardware and bandwidth.

Figure 1.

Netgate Mall Taiwanese version based on Virtual Playground software.

Sun Microsystems provided a phone hotline so we could call anytime with questions or feature requests related to developing and debugging Java 3D-based applications. Within 18 months, we had used Java 3D and a framework of core Virtual Playground modules to create a virtual cadaver lab [7] and a virtual watershed world [8], and to integrate our applications with an open 3-D cyberspace interaction specification developed through a California-based creative commons (Figure 2). A computer science undergraduate ported much of the Virtual Playground GUI API architecture from Java to C++ to create BlueSpace, which then let him extend the heads-up display and terrain engine.

Figure 2.

Screenshots of example 1990s-era HIT Lab experiences in 3-D cyberspace. Virtual Big Beef Creek (on left) coordinated planning and geospatial organization of physical data collection (with accessible 3-D photographic perspectives). Virtual Cadaver Lab expanded a popular 2-D interactive anatomy atlas into 3-D cyberspace where self-testing was possible for learning taxonomical awareness of gross anatomy characteristics.

Digitalspace hosted a workshop at the third annual Digital Biota conference [9] for attendees interested in standardizing the usability methods of desktop VR and the messaging that coordinated the virtual experience between 3-D cyberspace participants. We adapted our flavors of 3-D cyberspace technologies to validate the specification and to focus the creative commons on expanding the project base in which the specification could be verified. We spent a lot of time together in 3-D cyberspace envisioning possible futures.

I worked on demonstrating an in-browser solution that could provide a desktop VR entry point for participants who had never experienced 3-D content. A team in Turin, Italy created a native Java engine that supported 3-D cyberspace experiences. The whole engine could be downloaded in less than 100 kilobytes to plug into the standard Java virtual machine. Another team outside Melbourne, Australia created the 3-D models that conformed to the engine's content-loading functionality. We developed JavaScript code that drove the user experience through an HTML skin and made the skin configurable.

The first application we made available with the web-browser-based engine was a configurable classroom intended to be shared by school teachers who were thinking through ramifications of the Columbine school shooting. The funder also provided a discussion room for school teachers to discuss their retirement investments and the insurance products provided by their employer.

The lab also maintained a virtual world created with the VRML plug-in and LiveConnect facilities provided by the Netscape web browser [10]. That world ran well enough for us to run master's thesis experiments with four synchronous participants connected in 3-D cyberspace at a time, exploring hypotheses negotiated with an advisement committee. Soon after, Netscape lost significant market share as Microsoft's web browser took over as the leading browser on personal computers running the Windows operating system.


3. Building 3-D cyberspace in the 2000s

As the new millennium arrived, we had accumulated a large enough sample of 3-D cyberspace experiences to develop a preference for specialized experiences: those designed for subsets of professional roles that could benefit from participating in visual environments that facilitated their roles and built community around them. We had accumulated enough hours in 3-D cyberspace, though mostly by way of the desktop VR version, that every day our minds lingered there, as if in parallel lives. We concluded that creating a general Metaverse, as described in the book Snow Crash, required a specialty we were not comfortable acquiring.

As a result, we formed deeper bonds with other academic departments on campus. I turned my attention to designing other useful 3-D cyberspace experiences for oceanographers. I chose oceanographers because they studied one of the two large 3-D volumes in which life thrived on the planet. The atmospheric scientists who studied the spaces where birds soared seemed to be ahead technologically, thanks to popular human interest in weather and weather forecasts. The 3-D world of life inside human anatomy also seemed to be ahead technologically, thanks to the support of medical practices.

As we were contemplating 3-D virtual experiences to spur insight into the nature of tsunamis, underwater earthquakes, and the volcanoes that can produce them, Hurricane Katrina devastated the Gulf Coast of the United States [11]. A representative of the Federal Emergency Management Agency came to our lab to share stories of how the emergency response effort in New Orleans had been deemed suboptimal. Dr. Furness and I spent hours listening and then suggesting ideas on how VR could improve future hurricane response efforts. We began to attend emergency operations meetings at the campus, city, county, and regional levels and concluded that the typical resident in a community crisis lacked situation awareness more than the emergency responders did, even though those residents were physically based in that community.

One particular meeting among the many we attended drove that point home. A red-faced emergency operations center director, with over 30 years of EOC experience, railed against the audacity of his county's residents in response to a severe wind event that had blown through the county just south of Seattle's. Overnight, over 250,000 households had lost power through thousands of breach points on the power grid. Linemen were working 16-hour days to restore power, and yet hundreds of residents were calling the emergency operations center to complain that power was not being restored fast enough. He yelled to anyone listening that two-week outages should have been expected, given the number of qualified workers who could repair the electric grid and the number of homes in the dark.

Our personal experiences in 3-D cyberspace strongly suggested that emergency preparedness could benefit from a thriving VR experience for residents to explore and engage in with others, including EOC personnel when available. We put together three hypotheses to test our instincts and then worked with three other advisors to whittle our enthusiasm down to a reasonable scope of experiments to perform.

We published the results in four papers [12, 13, 14, 15] and a dissertation [16], comparing the results of a physical hospital evacuation drill with a virtual one. In the virtual drill, emergency responders explored the physical environment in which the evacuation was to take place by way of a software application. They could study the hospital's physical layout, transfer points to transportation, the county's road network, and the 23 other hospital locations that had agreed, in an official memorandum of understanding, to accept patients in an emergency.

Those in the study had access to a simulator where they could watch evacuation paths and timings for any patient in any room, based on time and motion studies done during physical evacuation efforts (Figure 3). They could also watch simulated transport vehicles carrying patients to destination hospitals.

Figure 3.

The software support services for an emergency response hospital evacuation simulation application (top). Participants preferred 2-D artifacts for exploring and interacting with the situation as it unfolded (bottom). The 2-D hospital floor layout was one used often in day-to-day operations, with the characteristics of active patients encoded in color from the data sheets also used in day-to-day operations.

A server computer communicated with the simulation clients that provided the graphical user interface for participants. The server kept track of 86 key variables covering simulated patients, simulated emergency responders, transport vehicles, and events that required dynamic diversions from evacuation plans (e.g., traffic congestion). The hospital emergency response coordinator challenged the behavior of those variables as she became familiar with the client application I iterated upon in preparation for the experiments.
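As an illustration of that client-server division of labor, the sketch below uses JavaScript and the Node.js ws WebSocket library, which is not the stack of the original project; the state slices and message shapes are hypothetical, chosen only to show a server holding tracked variables and rebroadcasting changes to graphical clients.

```javascript
// Illustrative sketch only (not the original project's code): a server
// holds the tracked simulation variables and rebroadcasts every change
// so all connected GUI clients see the same evolving scenario.
const { WebSocketServer } = require('ws');

// Hypothetical slices of the tracked state: simulated patients and
// responders, transport vehicles, and dynamic events such as congestion.
const state = { patients: {}, responders: {}, vehicles: {}, events: {} };

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // Each new client first receives the full simulation state.
  socket.send(JSON.stringify({ type: 'snapshot', state }));

  // Clients submit variable changes; the server records each change and
  // pushes it to every client, including diversions like road closures.
  socket.on('message', (raw) => {
    const { kind, id, fields } = JSON.parse(raw);
    state[kind][id] = { ...state[kind][id], ...fields };
    const update = JSON.stringify({ type: 'update', kind, id, fields });
    for (const client of wss.clients) client.send(update);
  });
});
```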

Adapting the server to feed a 3-D cyberspace version of the training and participation interface would be trivial and would add an immaterial computational demand, since every object in the simulation was tied by gravity to a ground plane. Porting the participant experience itself to 3-D cyberspace would, in comparison, require significant effort.

Earthquake, tornado, and fire events have been included in the SimCity game since the 1990s. Players can watch the effects of those community crises and gain insight into how urban planning matters in facilitating resilience and recovery. The third-person view of a comprehensive emergency response effort is useful. The first-person view, by contrast, demonstrates how limited any individual's situation awareness is, highlighting the importance of communication among all those limited views in attempting to assemble a third-person view from first-person experiences.

Dr. Furness and I came to the conclusion that one of the roles in an emergency operations center should be a dedicated situation awareness specialist who spends much of their time in 3-D cyberspace piecing together situation awareness from the first-person reports of emergency responders, coordinated with 3-D graphical assets. Modeling and computational support would be provided by way of a coupled computing environment containing heuristic-based and forecasting equations. The immersed VR role would be available to the command and control leadership in the EOC to answer questions regarding the feasibility and appropriateness of actions and to evoke insight that command and control might not be considering.

I came to the conclusion that the 3-D cyberspace developed for EOC personnel and emergency responders should also be made available to the general public by way of a 3-D cyberspace experience. Immersive VR might build compassion for the cognitive demands, under urgent conditions, of the first response effort. Immersive VR would give access to experiencing the delegation of roles involved, so as to prepare the public with reasonable expectations for interacting with those roles in an emergency. Overall, the immersive VR perspective would build up experiential knowledge to go with whatever more abstract knowledge the public has of their community.


4. Building 3-D cyberspace today

Upon revisiting those thoughts from the earlier days of building 3-D cyberspace experiences in a research lab, we consider advancements in computing, development tools, and communications protocols. We see an expansion of VR peripherals into the hands, and onto the heads, of the general public. To build a 3-D cyberspace that supports the Metaverse, or local community preparedness awareness, we have many options for advancing beyond the technologies we used previously.

A popular approach among our students, and the students of other teaching colleagues, for creating virtual reality experiences is a pipeline in which 3-D models are generated in freely downloadable modeling software (e.g., Blender), brought into other freely downloadable software (e.g., the Unity Engine) to become part of a 3-D application, and then ported to a VR environment (e.g., Oculus) to be experienced immersively.

At the time the hospital evacuation simulation software was in use, a 3-D hospital model (X3D format) was available for participants to explore within a web browser, but participants preferred the 2-D floor layouts we provided for testing the usefulness of the simulation software. Since then, the popularity of three.js, a WebGL-enabled 3-D library, has made hospital model integration easier [17]. For example, the same 3-D hospital model can be used within a Java-based application, a JavaScript-based application, and a Unity Engine-based application.

Figure 4 shows a 3-D model of one floor of the simulation software's hospital, loaded within the Blender software in which the model is iterated upon to improve the design. The model can be exported to OBJ, glTF, or any other file format for which the three.js library has developed a loader, to rapidly get it inside a JavaScript-driven 3-D experience, as the sketch following Figure 4 illustrates.

Figure 4.

One floor of the simulation’s emergency evacuation hospital as contained in a 3-D model for 3-D cyberspace use, as seen within the Blender software used to embellish and maintain it.
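To make that export pathway concrete, here is a minimal sketch of loading such an exported model into a browser scene with three.js. The asset path models/hospital_floor.glb, the camera placement, and the lighting are illustrative assumptions rather than details from the original project.

```javascript
// Minimal three.js sketch: load a glTF export of the hospital floor into
// a WebGL scene. The asset path below is hypothetical.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 15, 30);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Simple lighting so the imported geometry is visible.
scene.add(new THREE.AmbientLight(0xffffff, 0.8));

// The glTF loader parses the file and hands back a scene graph to add.
new GLTFLoader().load('models/hospital_floor.glb', (gltf) => {
  scene.add(gltf.scene);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```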

The Unity Engine-based application is readily made available for virtual reality use by way of XR plug-ins available from the online Unity Store. Android build support readily enables Oculus-flavored VR from both the Windows and Mac OS X operating systems. The Unity Engine can also import the OBJ and glTF file formats to enable interactive 3-D experiences that are OpenXR compliant. As a result, a 3-D cyberspace readily provides a virtual hospital in which to have meaningful experiences, such as preparing for an emergency evacuation (Figure 5).

Figure 5.

An immersive cyberspace perspective of the 3-D model seen in Figure 4. Detail is faithfully rendered as part of an early iteration in preparation for use. The left image provides a view into a patient room. The right image provides the view from a hallway corridor.

Because our resultant experience is OpenXR compliant, we could pursue extending its usefulness through dissemination by way of other delivery formats such as A-Frame [18] and Networked A-Frame [19]. At the time of this writing, Collabora, HTC, Microsoft, Oculus, SteamVR, and Varjo provided OpenXR runtimes in which 3-D cyberspace experiences could be delivered [20].
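A hedged sketch follows of how the same hypothetical glTF asset might be delivered as a WebXR-ready scene in a single A-Frame page; the asset path and library release number are assumptions for illustration.

```html
<!-- Minimal A-Frame sketch: serve the hypothetical hospital model as a
     WebXR-ready scene. Asset path and release version are assumed. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <a-asset-item id="hospital" src="models/hospital_floor.glb"></a-asset-item>
      </a-assets>
      <a-entity gltf-model="#hospital" position="0 0 -10"></a-entity>
      <a-sky color="#dfeff5"></a-sky>
    </a-scene>
  </body>
</html>
```

Networked A-Frame layers entity synchronization on top of such a scene so that multiple participants can occupy the same 3-D cyberspace simultaneously.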

As we watched participants perform the virtual hospital evacuation activity, we noticed them using the 2-D interface effectively, without any apparent cognitive difficulties. All participants were given versions of the software to practice with, but they reported needing at most 2 hours to feel comfortable, as the interface was built from artifacts (maps, forms, and other documents) that they used regularly in equivalent paper-based formats.

To gain confidence in suggesting that a 3-D interface would be useful, we thought about our own time spent recording time and motion data while moving about the hospital and evacuating human beings with their necessary equipment (wheelchairs, gurneys, respirators, etc.). Having spent time doing those tasks, we could readily reason about them with the 2-D interface. But we could not do so easily beforehand.

We forget that, as humans, we had to learn to use 2-D maps as abstractions of the 3-D physical world they represent. As a result, we learn differently through 2-D abstractions than we learn by interacting with the physical world. A goal in using an emergency response simulator is to improve learning with regard to the activities, context, and heuristics associated with a scenario. Some compelling evidence from neuroscience experiments suggests that 3-D adds value even when a 2-D interface is available [21]. The conclusions reached include:

“A key finding is that 2-D and 3-D neuroanatomical brain models were perceived differently. 3-D learning yielded greater object recognition, perhaps from stereopsis facilitating visual recognition. Exclusive use of 3-D models should be avoided when instead teaching that includes both 2-D and 3-D is likely the most advantageous to promote generalization of knowledge by forcing students to practice fluidly between dimensions.”

Another recent study of 3-D virtual environment interfaces concluded that "findings showed different condition correlations with the traditional tasks and the comparison between the two systems have revealed that 3-D is able to generate lower reaction times, higher correct answers, and lower perseverative responses in attentional abilities, inhibition control, and cognitive shifting than the 2-D condition" [22].

Common sense suggests that such mental fluidity is valuable under the urgent demands of a community-wide emergency response scenario.


5. Conclusion

We perform a thought experiment that evolves 3-D interfaces forward in time to a point where they are the dominant norm. We imagine that once 2-D interfaces no longer need to be learned for many day-to-day activities, participants would no longer be fluent in their use. Other large primates have an easier time using 3-D interfaces than 2-D interfaces, which seems reasonable given that they have not encountered 2-D abstractions in their day-to-day lives.

As a species, we have built a cyberspace made up primarily of one-dimensional text and two-dimensional documents. Today's cyberspace is entrenched in part by that history and by the expectation we have created that the destinations we want to visit will be at most 2-D. As a species, we have also created an extensive number of 3-D models and have developed technologies to better capture physical 3-D reality in a virtual facsimile. Google Street View technology enables 3-D exploration along street locations whereby maps of our physical world provide context. Run through a virtual reality application, that widely shared data provides a primitive 3-D cyberspace experience with the promise of maturing rapidly. Of course, large corporations continue to work on their own versions of a Metaverse using their proprietary hardware and software as well.

With a web-accessed 3-D cyberspace available, 3-D computer graphics take on additional value as a potential contribution to useful cyberspace experiences. Whether they are used for that purpose grows as a cultural question but shrinks as a technical question as a result of our progress. Exploring the cultural question rapidly expands into many sub-questions, many of which are perhaps useful to consider anyway:

  1. What are we trying to accomplish as a species?

  2. How do we experience our sense of community?

  3. How useful is it to experience each other independent of space and time?

  4. How are the physical world and virtual world co-evolving?

  5. Should we enable a 3-D cyberspace first and explore what it is good for?

  6. Do we build demonstration uses of 3-D cyberspace and watch as usefulness grows from specific use cases?

The potential use and reuse of computer graphics grows as interest in a connected 3-D cyberspace grows. When revisiting the emergence of the World Wide Web in the early 1990s, we remember computer graphics integrating to create a cyberspace that felt like a global encyclopedia of information resources. The potential exists to revisit that time with today's 3-D technologies so as to develop other trajectories of worldwide information connectivity, with computer graphics providing a sense of continuous 3-D space in which to provide meaningful experiences. The reader is encouraged to participate in its development and exploration.

References

  1. Stephenson N. Snow Crash. New York: Bantam Books; 1992. ISBN: 9780553351927
  2. Human Interface Technology Laboratory [Internet]. 1996. Available from: http://www.hitl.washington.edu [Accessed: December 09, 2021]
  3. Campbell B. 3D Collaborative Multiuser Worlds for the Internet [master's thesis]. Hartford: Rensselaer Polytechnic Institute; 1997
  4. Campbell B. VRML as a superset of HTML: An approach to consolidation. In: Proceedings of the RPI Annual Computer Science Conference. CT, USA: Rensselaer Polytechnic Institute Press; 17 December 1996
  5. Mandeville J et al. GreenSpace: Creating a distributed virtual environment for global applications. In: Proceedings of the IEEE Networked Reality Workshop. Cambridge, MA, USA: MIT Media Lab Press; 26-28 October 1995
  6. Schwartz P et al. Virtual Playground: Architectures for a shared virtual world. In: Proceedings of the ACM Virtual Reality Software and Technology Conference (VRST). New York, NY, USA: Association for Computing Machinery; 2-5 November 1998
  7. Campbell B, Rosse C, Brinkley J. The virtual anatomy lab: A hands-on anatomy learning environment. In: Proceedings of the Medicine Meets Virtual Reality Conference (MMVR). San Francisco, CA, USA: IOS Press; 22-24 January 2001
  8. Campbell B et al. Web3D in ocean science learning environments: Virtual Big Beef Creek. In: Proceedings of the Web3D Symposium. Tempe, AZ, USA: ACM Press; 24-28 February 2002
  9. Damer B et al. OWorld: Connecting 3-D cyberspace. In: Proceedings of the Digital Biota III Conference. Boulder Creek, CA, USA: Digitalspace Press; 6-7 November 1999
  10. Leonardo L. Using Netscape LiveConnect. Hoboken: Que Publishing; 1997. ISBN: 9780789711717
  11. United States Executive Office of the President, Assistant to the President for Homeland Security and Counterterrorism. The Federal Response to Hurricane Katrina: Lessons Learned. Washington, D.C.: The White House; 2006
  12. Campbell B, Weaver C. RimSim Response hospital evacuation: Improving situation awareness and insight through serious games play and analysis. International Journal of Information Systems for Crisis Response and Management. 2011;3(3):1-15
  13. Campbell B, Schroder K, Weaver C. RimSim Visualization: An interactive tool for post-event sense making of a first response effort. In: Proceedings of the 7th International Conference on Information Systems for Crisis Response and Management (ISCRAM). New York, NY, USA: Plenum Publishing; 2-5 May 2010
  14. Campbell B, Schroder K. Training for emergency response with RimSim:Response! In: Proceedings of the SPIE Defense, Security + Sensing Conference. Bellingham, WA, USA: SPIE Press; 13-17 April 2009
  15. Campbell B et al. Emergency response planning and training through interactive simulation and visualization with decision support. In: Proceedings of the IEEE International Conference on Technologies for Homeland Security. New York, NY, USA: IEEE Publishing; 22-24 May 2008
  16. Campbell B. Adapting simulation environments for emergency response planning and training [doctoral dissertation]. Seattle: University of Washington; 2010
  17. three.js [Internet]. 2020. Available from: http://threejs.org [Accessed: December 28, 2021]
  18. Marcos D. A-Frame: A web framework for building 3D/AR/VR experiences [Internet]. Available from: http://aframe.io [Accessed: December 28, 2021]
  19. Fretin V. networked-aframe GitHub repository [Internet]. Available from: https://github.com/networked-aframe/networked-aframe [Accessed: December 28, 2021]
  20. The Khronos Group. OpenXR: Unifying reality [Internet]. Available from: https://www.khronos.org/api/index_2017/openxr [Accessed: December 28, 2021]
  21. Anderson S, Jamniczky H, Krigolson O. Quantifying two-dimensional and three-dimensional stereoscopic learning in anatomy using electroencephalography. npj Science of Learning. 2019;4:10. DOI: 10.1038/s41539-019-0050-4
  22. Giglioli I et al. Are 3D virtual environments better than 2D interfaces in serious games performance? An explorative study for the assessment of executive functions. Applied Neuropsychology: Adult. 2021;28(2):148-157. DOI: 10.1080/23279095.2019.1607735
