Introductory Chapter: Virtual Reality

Written By

Dragan Cvetković

Submitted: 28 January 2020 Published: 14 January 2021

DOI: 10.5772/intechopen.91950

From the Edited Volume

Virtual Reality and Its Application in Education

Edited by Dragan Cvetković

1. Introduction

The constant development of computer and information technology allows the implementation of new methods and systems that were previously not possible. One example of such development is the technology of virtual reality (VR), also called the virtual environment (VE). With virtual reality techniques, it is possible to achieve realistic simulations that are useful in many areas of human activity. Simulations existed before, but virtual reality techniques can provide the impression of “stepping into” the apparent world [1, 2].

This impression of presence in the apparent world is made possible by advanced computer and communication devices that mediate between the human and the computer. Virtual reality techniques also use modern computer networks to achieve communication between a person and a remote environment, with the aim of enabling operation at a distance.

At the end of the twentieth century, public perception of virtual reality was quite distorted. Thanks to its presence in the media, virtual reality was expected to be a miracle. However, despite such predictions, virtual reality did not come into extensive use. As a result, part of the public and some experts changed their minds significantly and declared the technology useless [1, 3].

2. Definition and principles of virtual reality

To make the concept and principle of virtual reality clear, it is first necessary to clarify the concept of perception. Perception is the process in which a person collects and interprets information about the surrounding world. The senses and the brain participate in the process of perception.

There are two kinds of senses: external and internal. External senses detect phenomena outside the body, and internal senses detect phenomena within the organism (hunger, fatigue, pain, thirst, etc.). The external senses can be divided into remote (heat, eyesight, hearing) and contact (smell, touch, taste). When it comes to virtual reality, only systems that affect the remote senses are well developed, though systems that affect the contact senses are expected to evolve gradually.

The senses transfer information from the environment, while the brain interprets the received information. Besides the senses, perception is also influenced by experience, knowledge, emotion, and motivation. To “cheat” the system of perception, the basic idea is to replace the real stimuli received by the senses with artificially generated stimuli. In this way, the real environment can be replaced by an apparent environment, and the system of perception creates the impression that the person is present in an apparent, nonexistent environment.

Virtual reality is a computer-created sensory experience that allows the user to believe in an apparent reality. The user is either completely immersed in this virtual world or partially involved by listening to and watching virtual reality applications. Virtual reality is thus a collection of technologies that “inserts” the user into a virtual environment. Ideally, the user’s senses detect only the virtual stimuli produced by the computer, and the user’s movements are fed directly into the computer.

Virtual environments are based on objects defined in the computer’s memory in such a way that the computer can later display these objects on the screen and allow interaction with them. By combining elements of an unreal (imaginary) environment and the real environment (which can also be remote), the user develops a feeling of presence in a virtual environment. This is illustrated in Figure 1.

Figure 1.

The principle of virtual reality.

Figure 1 shows the basic principle of virtual reality. The user is in a closed loop, connected to the computer by input and output units. Input devices (1) track the movements of the user and pass them to the computer (2), which runs the simulation of the virtual environment based on these and other data. With the help of output units (3), the computer presents the virtual environment as realistically as possible. Ideally, the user’s senses should detect only artificially generated stimuli (from the computer), and the real environment would be completely shut out. In the example shown in Figure 1, the user (4) sees only the image generated by the computer. Thereby, the loop is closed, and the user directly sees and hears (and possibly feels, smells, tastes, etc.) the virtual environment together with the immediate results of his own movements.
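As a rough sketch of this closed loop in code, the fragment below runs a fixed-rate input, simulation, and output cycle. All device-level functions (read_tracker, render_stereo, play_audio) are hypothetical placeholders for illustration, not a real VR API.

```python
# Schematic sketch of the closed VR loop from Figure 1.
# read_tracker, simulate, render_stereo, and play_audio are hypothetical
# placeholders standing in for real device drivers and rendering engines.

import time

def read_tracker():
    """(1) Input devices: return the user's current head pose."""
    return {"position": (0.0, 1.7, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

def simulate(world, head_pose, dt):
    """(2) The computer updates the virtual environment from tracking and other data."""
    world["time"] += dt
    world["head"] = head_pose
    return world

def render_stereo(world):
    """(3) Output devices: draw one image per eye for the head-mounted display."""
    pass

def play_audio(world):
    """(3) Output devices: synthesize sound matching the scene."""
    pass

world = {"time": 0.0, "head": None}
dt = 1.0 / 90.0                      # a typical HMD refresh interval (~90 Hz)

while world["time"] < 1.0:           # run the loop for one simulated second
    head_pose = read_tracker()       # the user's movements enter the computer
    world = simulate(world, head_pose, dt)
    render_stereo(world)             # the user (4) sees only computer-generated images
    play_audio(world)
    time.sleep(dt)
```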

3. Virtual reality equipment

Virtual reality devices can first be divided into input and output devices, and each of these categories can be further divided into types and subtypes. The input devices include the following:

  • Position/orientation sensors—electromagnetic, acoustic, optical, mechanical, and inertial

  • Force/moment (torque) sensors—Spaceball, etc.

  • Body/arm position sensors—sensor glove (data glove) and sensor suit (bodysuit)

  • Motion sensors—treadmill, bike ergometer, rowing ergometer, etc.

  • Other sensors—control through breathing, face tracking, eye tracking, and voice recognition [1, 4]

Electromagnetic sensors use a source of an electromagnetic field whose position is fixed and known in advance. Sensors located on the head and hands of the user receive the electromagnetic signal and transmit it to a central unit. Based on the received signal, that unit computes the position and orientation of the sensors within the electromagnetic field (i.e., relative to the source) [1, 5].

Acoustic sensors work in a similar way to electromagnetic ones. The only difference is that, instead of electromagnetic waves, each transmitter sends a high-frequency sound to a receiver (a special microphone). The advantage of this system is its acceptable price, but it has drawbacks as well: there must be no obstacles between the transmitter and the receiver, the system cannot support a large number of simultaneous sensors, its accuracy is worse than that of magnetic sensors, and the dimensions of the receiver may be a problem for particular applications [6].

Optical sensors (motion tracking, optical motion capture) are part of a system that follows markers with numerous cameras and calculates the position of the markers in space by combining their positions in the field of view of each camera. These systems use different principles of monitoring, usually with the help of markers made of reflective material. Cameras sensitive to infrared light track the markers or body movements in space. The cameras must be calibrated, i.e., their relative positions and orientations must be known. By combining the 2D positions of the markers (from all the cameras) with information about the location and characteristics of each camera, the 3D position of each marker can be determined precisely. The main advantages of optical tracking are extremely high accuracy, a high sampling rate, and the possibility of using a large number of markers simultaneously [4, 6].
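As a rough illustration of this reconstruction step, the sketch below triangulates a single marker from two calibrated pinhole cameras with a linear least-squares (DLT-style) solve. The projection matrices and marker position are made-up example values, not data from any real tracking system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D marker position from its 2D image coordinates in two
    calibrated cameras (3x4 projection matrices P1, P2) via linear least squares."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Example: two toy cameras looking down the z-axis, the second shifted sideways.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera 2, 0.5 m offset

X_true = np.array([0.2, 0.1, 2.0, 1.0])        # marker 2 m in front of the cameras
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]      # its 2D position seen by camera 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]      # its 2D position seen by camera 2

print(triangulate(P1, P2, x1, x2))             # ~ [0.2, 0.1, 2.0]
```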

Mechanical sensors are conceptually simple. A mechanical structure with a number of joints measures the angle of rotation of each joint. From these angles and the known lengths of the segments between the joints, simple forward (direct) kinematics gives the position of the last segment relative to the first (fixed) segment of the structure. The so-called exoskeleton (a mechanical structure mounted on the body, Figure 2) can use mechanical tracking to monitor the positions of the joints of the body and thus the position of the entire body. The advantages of such devices are high precision and a high sampling rate, but their size makes them impractical to produce [1].

Figure 2.

Exoskeleton: mechanical construction installed on the body.
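The forward-kinematics step described above can be sketched in a few lines. The planar two-segment arm below is a simplified, made-up example rather than a model of any particular exoskeleton.

```python
import numpy as np

def forward_kinematics(joint_angles, segment_lengths):
    """Planar (2D) forward kinematics: given joint rotation angles (radians) and
    segment lengths, return the position of the end of the last segment
    relative to the fixed base."""
    x, y, total_angle = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, segment_lengths):
        total_angle += angle                  # each joint adds its rotation
        x += length * np.cos(total_angle)     # advance along the segment
        y += length * np.sin(total_angle)
    return x, y

# Example: a two-segment "arm" (0.30 m and 0.25 m) with the first joint at 45
# degrees and the second bent a further 30 degrees.
print(forward_kinematics([np.pi / 4, np.pi / 6], [0.30, 0.25]))
```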

Inertial sensors measure the acceleration and angular velocity of each segment and, by integrating these measurements, determine its position and orientation. Because integration accumulates measurement errors, it is essential to determine the starting position precisely.
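A minimal sketch of why the starting state matters: twice integrating acceleration turns even a tiny constant sensor bias into a steadily growing position error. The 0.05 m/s² bias and 100 Hz sampling rate below are illustrative assumptions.

```python
import numpy as np

def integrate_position(accelerations, dt, p0=0.0, v0=0.0):
    """1D dead reckoning: integrate measured acceleration twice to get position.
    Illustrates how small sensor errors drift when only inertial data are used."""
    p, v = p0, v0
    positions = []
    for a in accelerations:
        v += a * dt          # first integration: acceleration -> velocity
        p += v * dt          # second integration: velocity -> position
        positions.append(p)
    return np.array(positions)

dt = 0.01                                   # 100 Hz sampling
true_accel = np.zeros(500)                  # the segment is actually at rest
biased_accel = true_accel + 0.05            # tiny constant sensor bias (0.05 m/s^2)

print(integrate_position(true_accel, dt)[-1])    # stays at 0.0
print(integrate_position(biased_accel, dt)[-1])  # drifts to ~0.63 m after 5 s
```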

Force/moment sensors are devices that measure force or torque. They can be part of devices such as the Spaceball or part of a more complex system for simulating force or touch. They are also used for more intuitive manipulation of 3D objects and more natural motion through the virtual environment.

Body position sensors consist of a large number of position and orientation sensors integrated into a suit. The information emitted by the sensors, almost completely in real time, describes the movement of the body segments in space.

Arm position sensors (data gloves) are input devices in the form of a glove. A number of sensors register the position of the hand and fingers by measuring angular displacement. This type of device can be combined with a simulator of force or a touch glove, thus becoming a haptic output device.

The role of motion sensors is to register the position of objects in the real world and forward this information to the computer. The computer saves the processed information about the position of the body, displays it graphically, or assigns it to the object that interacts with the user.

A common feature of the previously mentioned systems is the ability to collect information (motion capture) about the position of reference points in space in real time. The data can be used to visualize situations in three dimensions and to analyze them. Various applications for 3D modeling and animation can use the information gathered from the sensors [1, 4, 5, 6].

The output devices include:

  • Devices for 3D display—stereo glasses, head-mounted displays, stereo screens (with alternating or superimposed images), and projection systems (stereo projection on a screen, cave automatic virtual environment (CAVE), wide-angle projection, virtual worktable)

  • Devices for 3D sound synthesis

  • Devices for synthesis of sense of touch and force—tactile output devices, devices for force feedback, and the mobile platform

  • Other devices—odor, wind, and heat

To achieve the stereoscopic effect, it is necessary to present two images, one to each eye. A head-mounted display (HMD) has a separate screen for each eye (Figure 3). Due to the small dimensions of the device, the screens are too close to the eyes to be viewed directly, so a suitable optical system that allows the user to view the screen must be placed between the eye and the screen [4].

Figure 3.

Head-mounted display.
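One simple way to produce the two per-eye viewpoints is to offset the virtual camera by half the interpupillary distance for each eye. The 65 mm distance and the parallel-camera setup in the sketch below are illustrative assumptions; real HMD runtimes supply their own per-eye view and projection matrices.

```python
import numpy as np

IPD = 0.065          # assumed interpupillary distance in meters (~65 mm)

def eye_positions(head_position, right_vector, ipd=IPD):
    """Return the positions of the left- and right-eye virtual cameras by
    shifting the head position half the IPD along the head's right vector."""
    head = np.asarray(head_position, dtype=float)
    right = np.asarray(right_vector, dtype=float)
    right = right / np.linalg.norm(right)
    left_eye = head - right * (ipd / 2.0)
    right_eye = head + right * (ipd / 2.0)
    return left_eye, right_eye

# Head 1.7 m above the floor, looking down -z, so "right" is +x.
left_cam, right_cam = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_cam, right_cam)   # two slightly offset viewpoints, one image per eye
```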

The most important characteristics of an HMD, apart from its size, weight, and comfort, are its field of view and screen resolution. HMDs come in various forms, from helmets to goggles. Today there is a tendency toward a minimalist approach and practical applicability; the aim is to create a device small enough not to interfere with free movement. Such devices can be equipped with headphones and with position and orientation sensors.

The most advanced projection system is the CAVE system. It consists of an area bounded by projection screens (forming the room in which the user is located) onto which computer-generated stereo images are projected. The user wears glasses that provide a three-dimensional experience and satisfactory peripheral vision. The experience is very realistic, and the joints between the projection canvases are almost invisible.

Sound simulation includes the reproduction or generation of sound in a virtual environment. By including three-dimensional sound, the user can get an impression of the precise location of the sound source in space. The effect can be achieved through the difference in sound volume received by the left and the right ear, through the reflection of sound waves in the ear lobe and its surroundings, and by combining the results of this reflection for the different frequencies that make up the sound.
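A toy sketch of the first of these cues, the level difference between the ears: a mono signal is split into left and right channels whose loudness depends on the horizontal angle of the source. The equal-power panning law used here is one simple choice for illustration, not necessarily how the systems described above work.

```python
import numpy as np

def pan_by_level_difference(mono_signal, azimuth_deg):
    """Split a mono signal into left/right channels whose relative loudness
    depends on the source's horizontal angle (equal-power panning).
    azimuth_deg: -90 = fully left, 0 = straight ahead, +90 = fully right."""
    theta = np.radians((azimuth_deg + 90.0) / 2.0)   # map [-90, 90] deg to [0, 90] deg
    left = np.cos(theta) * mono_signal
    right = np.sin(theta) * mono_signal
    return left, right

t = np.linspace(0.0, 1.0, 44100)                     # one second at 44.1 kHz
tone = np.sin(2 * np.pi * 440.0 * t)                 # 440 Hz test tone

left, right = pan_by_level_difference(tone, 45.0)    # source to the front right
print(np.max(np.abs(left)), np.max(np.abs(right)))   # right channel is louder
```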

Haptic devices allow the simulation of touch and/or force that can cause the sensation of contact with a virtual object. Simulation of touch (tactile feedback) is usually based on thermal or vibrating elements worn on the user’s fingers that are activated when the user “touches” a virtual object. For this to work, the position of the user, more precisely of his hands, has to be tracked accurately. Simulation of force (force feedback) combines position tracking with active elements (motors, electromagnets, servo motors) that exert force on the user’s hand, another part of the body, or a tool he handles.
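One common way to compute the force such active elements should exert is a simple spring (penalty) model driven by how far the tracked fingertip has penetrated the virtual surface. The stiffness value in the sketch below is an illustrative assumption, not a figure from the chapter.

```python
def contact_force(finger_depth_m, stiffness_n_per_m=800.0):
    """Penalty-based force feedback: push back proportionally to how deep the
    tracked fingertip has penetrated the virtual surface (Hooke's law).
    Returns the force in newtons that the actuator should exert."""
    if finger_depth_m <= 0.0:
        return 0.0                      # no contact, no force
    return stiffness_n_per_m * finger_depth_m

# Fingertip pressed 3 mm into a virtual wall -> actuator pushes back with ~2.4 N.
print(contact_force(0.003))
```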

Moving platforms are haptic systems that simulate the position of the user by moving the entire platform on which the user stands or sits. Matching the platform’s motion with the visual information increases the feeling of participation in the simulation. This type of haptic system is commonly used in complex simulators, such as car-driving or flight simulators [4, 5, 6].

4. Applications of virtual reality

Virtual reality is mostly applied in the following areas: medicine, military industry, education, entertainment, design, and marketing.

Medicine is a field where virtual reality has had enormous success and is still expanding. It is used in surgery, both for training (learning on virtual human models) and for planning operations. 3D displays can be obtained from medical images, as is the case in modern medical devices. In psychiatry, virtual reality is used to treat a variety of mental disorders, from fear of flying to posttraumatic stress disorder [1, 6].

One of the biggest investors in the field of virtual reality is the military, and many VR technologies are embedded in various military equipment simulators. Simulations of various vehicles are among the most common applications of virtual reality. Many experts are trained in different simulators, and it is particularly important that situations which rarely occur in reality (e.g., the rescue of hostages) can be rehearsed.

Virtual reality can also be used for the presentation of future projects in architecture, for the creation of product prototypes, etc. It can also serve as a successful tool for promotion and marketing at exhibitions and fairs, because 3D projection is still interesting enough to attract the curious [1, 7].

Virtual reality is ideal for the entertainment industry because of its ability to create an illusion. More and more games that use this technique are appearing in gaming lounges, and it is only a question of time before this technology becomes available to everyone who wants to use it, even at home.

Despite the numerous areas of application, there are also some limitations. Although there has been considerable progress in recent years, the equipment is still impractical, bulky, expensive, and complex. Certain types of virtual reality can cause nausea, and even when they do not cause health issues, they are too uncomfortable for long-term use [7, 8].

5. Augmented reality

Augmented reality (AR) adds elements of a virtual environment to the real world so that they look like part of it. The user’s view of the world is expanded with additional information that is directly embedded into the real world. In some applications, it is not necessary to replace reality with the virtual world; sometimes it is only necessary to complement or enhance it with some elements of virtual reality [9].

Augmented reality is a relatively new area. Although the basic idea appeared at the beginning of the twentieth century, its rapid development started at the end of that century, which is why this technology has not yet reached its full expansion. It provides direct access to information by displaying it in the user’s field of vision, intertwined with the real world. This allows faster, better, and easier access to information [8, 9, 10]. It can be applied in the following areas: medicine, manufacturing and maintenance, architecture, robotics, the military industry, and entertainment.

In medical applications, medical images are overlaid on the patient, resulting in a kind of virtual X-ray, but in real time. The effect is that the doctor can see the patient’s organs as if the body were transparent [8, 9]. For now, such systems are not widely used. In production and maintenance, visual instructions are displayed directly on the equipment or machinery, so that the operator, instead of consulting the documentation, has all the necessary information at the right time and in the right place. Augmented reality can be used in interior design and in the visualization of structures or installations; for example, virtual furniture can be placed in an actual room, giving an impression of the spatial relations and of how the room will really look with that furniture. With the help of augmented reality, military pilots can receive additional information, such as guidance, and see targets or guided missiles; the display is built into the helmet or into the cockpit [8].

Augmented reality has great potential, but today’s augmented reality systems are still quite cumbersome and imprecise and consume too much energy, so these problems remain to be solved.

References

  1. Parisi T. Learning Virtual Reality: Developing Immersive Experiences and Applications for Desktop, Web, and Mobile. 1st ed. USA: O’Reilly Media; 2015
  2. Garfinkel S, Grunspan R. The Computer Book: From the Abacus to Artificial Intelligence, 250 Milestones in the History of Computer Science. USA: Sterling; 2018
  3. Jerald J. The VR Book: Human-Centered Design for Virtual Reality. USA: Morgan & Claypool Publishers; 2015
  4. Greengard S. Virtual Reality. USA: MIT Press; 2019
  5. Linowes J. Unity Virtual Reality Projects: Learn Virtual Reality by Developing More than 10 Engaging Projects with Unity 2018. 2nd ed. UK: Packt Publishing; 2018
  6. Sherman WR, Craig AB. Understanding Virtual Reality: Interface, Application, and Design. 1st ed. USA: Morgan Kaufmann; 2002
  7. Pangilinan E, Lukas S, Mohan V. Creating Augmented and Virtual Realities: Theory and Practice for Next-Generation Spatial Computing. 1st ed. USA: O’Reilly Media; 2019
  8. Aukstakalnis S. Practical Augmented Reality: A Guide to the Technologies, Applications, and Human Factors for AR and VR (Usability). 1st ed. USA: Addison-Wesley Professional; 2016
  9. Arnaldi B, Guitton P, Moreau G. Virtual Reality and Augmented Reality: Myths and Realities. 1st ed. USA: Wiley-ISTE; 2018
  10. Prodromou T. Augmented Reality in Educational Settings. Netherlands: Brill; 2019
