
Fun Computing

Written By

Kazunori Miyata

Submitted: 01 December 2011 Published: 05 September 2012

DOI: 10.5772/51043

From the Edited Volume

Virtual Reality - Human Computer Interaction

Edited by Xin‐Xing Tang


1. Introduction

Initially, the main applications of virtual reality (VR) included various walkthroughs with real-time graphics. At present, VR applications have been extended to various fields including telecommunications, training, scientific exploration, collaborative work activities, and entertainment. In particular, in the entertainment field, not only VR but also mixed reality (MR) and augmented reality (AR) technologies are actively used for human–machine interfaces in application building. For example, a number of major amusement parks operate amazing VR rides, and numerous museum exhibits have interactive installations using VR/MR/AR technologies.

In addition, the factor of entertainment is currently adopted in education and skill development applications. LEGO® mindstorms® provides an excellent learning environment for engineering and programming subjects. It promotes self-learning of basic computing logic and machinery mechanisms while having fun. The term “edutainment” is used to indicate video games that teach players by a game-based learning approach. For example, players can learn about a historical background by playing simulation games such as Civilization and Nobunaga’s Ambition. Serious games are developed as educational tools and mainly focus on teaching rather than entertaining. Gamification is a framework used to apply game design methods to nongame matters. Foursquare, which is a location-based SNS, is an example of gamification. Concerning rehabilitation, there are many computer-based programs that encourage rehabilitation while playing games or singing songs along with hand gestures.

The common theme among such programs is "motivation." Motivation is the reason why people act or behave in a particular manner (Oxford English Dictionary). It is important to determine how to motivate people who face a tedious task or are unwilling to work. "The Fun Theory" (http://www.thefuntheory.com/) holds that something as simple as fun is the easiest way to change people's behavior for the better. For example, the "Piano Staircase" project installed a giant piano keyboard covering the stairs leading out of a subway station in Stockholm. The keyboard plays a note when someone steps on it. As a result, people soon preferred going up and down the stairs while making music rather than using the escalator. An objective test revealed that 66% more people than normal chose the stairs over the escalator. In short, the element of fun, rather than words, changed people's behavior for the better.

For its part, “Fun Computing” is a coined term that means the use of media technology to entertain people. Fun computing may motivate people to engage in more physical activity or perform burdensome work in a fun manner.


2. Media interaction

Unlike physical simulations and mathematical computations, the design of an interactive system is indirect, sensuous, and fuzzy because human behavior is involved.

2.1. Interaction

Computer-based media, also known as digital media, are characterized as interactive media. People can freely access, edit, and share digital media content through computers. According to the AIP cube model proposed by Zeltzer [1], interaction is an essential factor of VR. Interaction is a process-related communication feature. In computer science, "interaction" is interpreted as two-way communication in which a system reacts to the user's action and the user reacts to the system's response, as illustrated in Figure 1.

In computer science, interactivity can be quantitatively measured as follows:

  1. How quickly does the system respond to user input (speed)?

  2. How accurate is the system output (accuracy)?

  3. How many reaction variations does the system provide (richness)?

Figure 1.

Interaction Model

The speed of a system depends on its computational power, sensing frequency, and machinery response time and directly affects the user’s satisfaction with the system. The accuracy of system output primarily depends on the computational model and affects system reliability. In the design of an interactive system, there is often a tradeoff between speed and accuracy. Richness is the variety of system reactions and influences a user’s sense of boredom. If a system only provides simple and predictable reactions, the user would easily get bored with the system.

2.2. User experience

It is important to design an interactive entertainment system from the viewpoint of the players by focusing on the quality of the user experience. That is, we design what the player should experience, how the player might feel while using the system, and what kind of enjoyment and surprise the system provides. Although entertainment applications built on emerging technologies such as VR and AR tend to emphasize technical fascination, in fun computing we focus on designing a system that is not technology oriented: the user experience of a game matters more to a player. An application is often designed by scripting a short story to illustrate a concept and then sketching the user experience to embody that concept. Figure 2 shows the storyboard of the VR application "Landscape Bartender," which is described in Section 3.5.

Figure 2.

Example of a storyboard

2.3. Media integration

The following should be considered while implementing an attractive entertainment application using media technologies:

  1. Robustness

Robustness has two aspects. One is the ability of the software system to deal with errors and handle unusual or atypical inputs. The other is durability with repetitive use. A user, especially a child, often operates a system in a random and rough manner. Therefore, robustness is the most important feature of an interactive system.

  2. Safety

Safety is an important issue for an interactive system with mechanical force feedback because such feedback may harm the player. The visual feedback of the system should also be considered. Visual interface artifacts may contribute to player sickness such as headaches, asthenopia, and nausea; the main causes are the image resolution, field of view, refresh rate, and the time delay in updating the displayed scene.

  3. Intuitiveness

In human–computer interaction, an intuitive interface is regarded as one that is easy to use. People can intuitively use an application with little knowledge or directions on its usage. That is, the users apply prior knowledge to the new system. Therefore, to design an intuitive interface, it is essential to fill the gap between users’ current knowledge and target knowledge needed to use the new system.

  4. Immersion

Immersion is the state of being mentally involved in a game; that is, the players do not feel any physical borders around them. The sensation of immersion in a VR application can be described as presence within the virtual space surrounding a player. The naturalness of the virtual space, that is, the reality of the environment and prompt responses from the system, are the key factors for promoting the sense of immersion.


3. VR applications

In this chapter, we consider VR applications that use an intuitive interaction model to entertain people.

3.1. Ton2

3.1.1. Overview

Ton2 is a new body-sensory style VR application that is implemented using an intuitive and robust interaction model, as shown in Figure 3 [2]. This application captures the player’s motion data as displacement values by means of distance sensors and uses the data for its interaction model.

We have revived the old traditional Japanese game of Paper-Sumo. The game is normally played on a board made of paper and cardboard; here, we have redesigned it as a game played with water. Water is used not just as an element of enjoyment: the players also experience the comfort of pressing down on the water surface, and, as in the original game, they receive immediate feedback when responding to their opponents' moves. Both players push down on the water surface to control the movements of the sumo wrestlers shown in the 3D imagery projected on a screen floating on the water, which gives the sense of a sumo wrestling performance.

3.1.2. System configuration

This system consists of four modules, as depicted in Figure 3 (a): (1) a projector that projects the image onto the screen, which floats on the water, as shown in Figure 3 (b); (2) Wave Generator Cubes (WGCs) that produce waves from the pressing action of the players, with three WGCs on each side of the tank; (3) distance sensors that record displacement data, four placed at the corners of the screen and six (one for each WGC) set under the tank; and (4) a computer that controls the entire system and processes the sensor data received through an A/D conversion board.

The process flow of the system is as follows (a minimal sketch of this loop appears after the list):

(P1) The distance sensors obtain displacement data from the six WGCs and four corners of the screen.

(P2) The computer calculates the velocity of the hold-down action of the WGCs and the change in the slope of the floating screen on the basis of displacement data.

(P3) By using the results of P2, the computer determines the influence on the movement of the Paper-Sumo wrestler.

(P4) The system calculates the interference and reaction of both wrestlers. At the same time, it determines whether either of the wrestlers has won or lost.

(P5) The projector displays the 3D imagery generated for both wrestlers on the screen.
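The following is a minimal Python sketch of how steps P1-P4 might be combined into a single update cycle. The sensor layout follows the description above, but the constants, the wave-to-wrestler coupling, and the win condition are illustrative assumptions rather than the published implementation.

```python
import numpy as np

# Illustrative constants; the published implementation's values are not given here
DT = 1.0 / 60.0       # assumed sensor sampling interval [s]
RING_RADIUS = 0.5     # normalized radius of the sumo ring
WAVE_GAIN = 0.02      # coupling of WGC press velocity to wrestler push
TILT_GAIN = 0.05      # coupling of screen tilt to wrestler drift

def press_velocity(prev_disp, curr_disp, dt=DT):
    """P2: finite-difference velocity of the hold-down action of the WGCs."""
    return (np.asarray(curr_disp) - np.asarray(prev_disp)) / dt

def screen_slope(corner_disp):
    """P2: approximate screen tilt from the four corner displacements,
    ordered [front-left, front-right, back-left, back-right]."""
    fl, fr, bl, br = corner_disp
    return np.array([((fr + br) - (fl + bl)) / 2.0,    # left/right tilt
                     ((bl + br) - (fl + fr)) / 2.0])   # front/back tilt

def update_wrestler(pos, own_wave_vel, tilt, direction):
    """P3: push the wrestler toward the opponent's side in proportion to the
    press velocity on its own side, plus a drift that follows the screen tilt."""
    push = WAVE_GAIN * float(np.mean(own_wave_vel)) * np.array([direction, 0.0])
    return pos + push + TILT_GAIN * tilt

def judge(pos_a, pos_b):
    """P4: a wrestler loses when it is pushed out of the ring."""
    if np.linalg.norm(pos_a) > RING_RADIUS:
        return "B wins"
    if np.linalg.norm(pos_b) > RING_RADIUS:
        return "A wins"
    return None

# One illustrative update step (P1 is replaced here by dummy sensor readings)
vel_a = press_velocity([0, 0, 0], [0.002, 0.004, 0.001])
tilt = screen_slope([0.010, 0.012, 0.009, 0.011])
pos_a = update_wrestler(np.array([-0.2, 0.0]), vel_a, tilt, direction=+1)
print(pos_a, judge(pos_a, np.array([0.2, 0.0])))
```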

3.1.3. Result

The players experienced "Paper-Craft Sumo" on the water and enjoyed the game much as they would the original. The 3D imagery of the sumo match projected on the floating screen provides visual enjoyment, and the generated waves, which move back and forth in the tank, give the participants a rhythmical comfort while they play the wrestling match. In Ton2, the input does not directly manipulate the wrestler; instead, the wrestler is moved by the waves, so a participant must rely on his/her intuition and no special technique is required to play the game. Therefore, participants of any age can play Ton2 equally well, which was one of the fascinations of the traditional Paper-Sumo. Moreover, the physical act of influencing the wrestlers and the visual sensation of the wave movement stimulate the participants, providing both physical and visual feedback to the players.

Figure 3.

Ton2

3.2. Kyukon

3.2.1. Overview

Traditional sports video games use buttons or sticks as user interfaces, which are unnatural for playing sports. Kyukon creates a virtual pitching experience through a nonwearable interface. Our system allows the player to control the ball much as a real pitcher does. The nonwearable interface employs wireless sensing technology and a strip screen. There are no physical restrictions; hence, a user can freely pitch a ball.

The strip screen smoothly connects a player to the virtual stadium projected on the screen. The objective is to strike out the batter. As the player throws the ball toward the screen, the ball smoothly passes through the screen. At the same time, the virtual ball will be projected at the exact position the player threw the ball, which also reflects the speed and rotation of the thrown ball. The player can also pitch a miracle ball with a particular rotation and speed.

A player can attempt to control his/her arm and wrist to pitch a miracle ball. For example, a faster ball becomes a Flaming Miracle Ball that flies with a flame effect, as shown in Figure 4 (c), whereas a rapidly revolving ball becomes a Tornado Miracle Ball with a spiral effect.

3.2.2. System configuration

The strip screen is a novel display system that seamlessly connects the real and virtual worlds, as shown in Figure 4 (a). It is composed of white vinyl and is divided into many strips. A thrown ball will pass through the strip screen with a minimal distortion of the screen. The screen immediately reforms and displays the virtual ball. We developed a combination of a wireless accelerometer and optical sensors to create a nonwearable interface system for pitching. The wireless accelerometer is placed inside the ball, as depicted in Figure 4 (b2). It detects the time at which the ball is released from the hand and the rotation of the ball. Optical sensors are installed behind the strip screen, as shown in Figure 4 (b1). They detect the time when the ball reaches the screen and the position where it passes through the screen.
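As a rough illustration of how the two sensor streams could be combined, the Python sketch below estimates the ball speed from the release and screen-arrival times and classifies the pitch. The screen distance, the speed and spin thresholds, and the spin estimate are assumptions for illustration, not the actual Kyukon parameters.

```python
# Illustrative thresholds (assumptions, not the published values)
FAST_BALL_MPS = 18.0     # speed above which a "Flaming Miracle Ball" triggers
FAST_SPIN_RPS = 8.0      # spin rate above which a "Tornado Miracle Ball" triggers
SCREEN_DISTANCE_M = 3.0  # assumed distance from the release point to the strip screen

def ball_speed(t_release, t_screen, distance=SCREEN_DISTANCE_M):
    """Average speed from the time of flight between hand release (in-ball
    accelerometer) and screen arrival (optical sensors behind the screen)."""
    dt = t_screen - t_release
    if dt <= 0:
        raise ValueError("screen time must follow release time")
    return distance / dt

def spin_rate(accel_samples, dt):
    """Rough revolutions per second from the dominant oscillation of the
    in-ball accelerometer signal (a stand-in for the real spin estimate)."""
    mean = sum(accel_samples) / len(accel_samples)
    xs = [s - mean for s in accel_samples]
    crossings = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
    return (crossings / 2.0) / (len(xs) * dt)

def classify_pitch(speed, spin):
    """Map speed/spin to the visual effect of the virtual ball."""
    if speed >= FAST_BALL_MPS:
        return "flaming"   # Flaming Miracle Ball
    if spin >= FAST_SPIN_RPS:
        return "tornado"   # Tornado Miracle Ball
    return "normal"

speed = ball_speed(t_release=0.00, t_screen=0.18)     # about 16.7 m/s
print(speed, classify_pitch(speed, spin=9.0))          # spin-dominant -> "tornado"
```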

3.2.3. Result

Kyukon, which is an intuitive interface for pitching a ball, enables various controls without any physical restrictions owing to the nonwearable sensors. Moreover, we developed a method for connecting the virtual and real worlds by enabling a continuation of objects using the physical data of the player’s natural motion. This innovation enhanced the experience of the sport.

In addition, we installed Virtual Petanque, as shown in Figure 4 (d2). The player can throw a ball similar to a pitcher into a virtual field projected on the screen. The players could find a new sense of fun while playing Virtual Petanque.

3.3. Interactive fountain

3.3.1. Overview

There are various computer-controlled fountains in the world. However, these computerized fountains spray water in only some preset patterns. Dietz et al. demonstrated three interactive water displays [3], but the water performance was not very dynamic, and the displays were not too different from conventional fountains. In addition, Mann developed a fluid-based tactile user interface with an array of fluid streams that work like the keys on a keyboard [4].

This project suggests the possibility of a fountain with a novel display system that reacts to a player’s motions. Our version of the interactive fountain is so compact that it can even be installed in a living room. The system provides an ambient display with changing water jets and color illumination. The goal of this project is to change the fountain into a design element, such as the interior decoration of a room, and to demonstrate the possibilities of the fountain’s interface by providing an exciting experience of fountains that react to a player’s actions.

Figure 4.

Kyukon

3.3.2. System configuration

This system consists of a PC, a fan-type controller, a CCD camera, seven speakers, two MIDI-controlled 4ch dimmer switches, and seven fountain units. Each metallic nozzle head is surrounded by nine full-color LEDs, as shown in Figure 5 (b). The MIDI-controlled dimmer switches supply electricity to the underwater pump. Electric current varies with the MIDI signal and changes the height of the water jet. A self-illuminated fan is used as the body of the controller. A wireless three-axis accelerometer is mounted on the controller handle. Figure 5 (c) shows the difference between the acceleration measurements of “waving” and “cutting” motions. The system can distinguish between “waving” and “cutting” motions by considering the changing ratio along the y- and z-axes. The system changes the strength of the water jets and the illumination color on the basis of the measured acceleration. Moreover, it detects the position of the controller by a background image subtraction method. The detected position of light is used to select the fountain to be activated.
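A minimal sketch of this sensing chain is given below, assuming a grayscale camera frame, a simple ratio test between the y- and z-axis changes for gesture classification, and a linear mapping from acceleration to the MIDI value that drives the dimmer; the thresholds and constants are illustrative, not the system's actual parameters.

```python
import numpy as np

def gesture_from_accel(accel_y, accel_z):
    """Classify a 'waving' vs 'cutting' motion from the relative amount of
    change along the y- and z-axes; the 1.5 ratio threshold is an assumption."""
    dy = np.abs(np.diff(accel_y)).sum()
    dz = np.abs(np.diff(accel_z)).sum()
    return "cutting" if dz > 1.5 * dy else "waving"

def jet_height_to_midi(accel_magnitude, max_accel=3.0):
    """Map overall acceleration magnitude to a 0-127 MIDI value that the
    dimmer switch would turn into pump current (and thus jet height)."""
    level = min(max(accel_magnitude / max_accel, 0.0), 1.0)
    return int(round(level * 127))

def controller_position(frame, background, threshold=40):
    """Locate the self-illuminated controller by background image subtraction:
    the centroid of pixels that differ strongly from the stored background.
    `frame` and `background` are assumed to be 2D grayscale NumPy arrays."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

print(gesture_from_accel([0, 0.2, 0.1, 0.3], [0, 1.0, -0.8, 1.2]))   # -> "cutting"
print(jet_height_to_midi(2.1))                                        # -> 89
```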

Figure 5.

Interactive Fountain

3.3.3. Result

In our system, the strength of the water jets, the illumination of the water, and the sound effects are dynamically changed according to the player's motions. Therefore, our interactive fountain performs as a novel interactive water display.

The fountains instantly react to user motion, changing the water jet strength, illumination colors, and sound effects. As shown in Figure 5 (d), a player can intentionally create interactions with real water, such as the effect of water being scattered by the wind.

3.4. Witch’s cauldron

3.4.1. Overview

This project presents a novel interactive application that causes virtual objects to fracture by stirring them with a wand. This application calculates the collision force for each object, and an object breaks apart when it collides forcefully with others. The player interactively crashes the objects with haptic sensation and simultaneously watches computer-generated imagery of the fracture.

3.4.2. System configuration

Figure 6 (a) shows the system configuration. This system consists of (1) a screen that shows the virtual world, (2) a rear-screen projector, (3) an interface to sense the movement of a player and to display haptic sensations, (4) an electric circuit to control the stirring interface, and (5) a PC to control the application.

Figure 6 (b) shows the stirring interface and Figure 6 (c) shows the data flow of the system. When the player turns a wand, the universal joint turns the axis of a motor as shown in Figure 6 (c) (1). The system measures the rotation signal as shown in Figure 6 (c) (2) of the rotary encoder, which is transferred from the motor control board. Finally, the system calculates the status of the virtual wand using the measured rotation signal as shown in Figure 6 (c) (3).

The system provides haptic sensation via the wand as follows. The force information to be applied to the stirring interface is transferred to the microcomputer as shown in Figure 6 (c) (4). The microcomputer outputs a pulse width modulation (PWM) signal to the short brake circuit as shown in Figure 6 (c) (5). The rotation speed is controlled with the short brake circuit as shown in Figure 6 (c) (6), and the player experiences haptic sensation as shown in Figure 6 (c) (7).

Virtual objects are physically simulated in real time using a physics engine. When the virtual objects are stirred with the wand, they receive an impulsive force from it. The status of the wand is determined on the basis of the information transferred from the microcomputer. A breakable object is composed of fragments that are connected to each other with a joint force, as shown in Figure 6 (d). When a strong impulsive force is applied, the joint separates and the object breaks into fragments. The 3D CG imagery is generated in real time and projected on the screen.
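The joint-separation rule can be sketched as follows. The Joint and BreakableObject structures, the breaking threshold, and the apply_impulse routine are hypothetical stand-ins for whatever the physics engine actually provides; only the rule itself (a joint separates when the impulsive force exceeds its strength) follows the description above.

```python
from dataclasses import dataclass, field

BREAK_IMPULSE = 5.0   # illustrative joint strength; not the published value

@dataclass
class Joint:
    a: int                      # indices of the two connected fragments
    b: int
    strength: float = BREAK_IMPULSE
    broken: bool = False

@dataclass
class BreakableObject:
    joints: list = field(default_factory=list)

    def apply_impulse(self, fragment_id: int, impulse: float):
        """Separate every intact joint touching this fragment whose strength
        is exceeded by the impulsive force from the wand."""
        newly_broken = []
        for j in self.joints:
            if j.broken or fragment_id not in (j.a, j.b):
                continue
            if impulse > j.strength:
                j.broken = True
                newly_broken.append(j)
        return newly_broken

# Usage: a pot made of three fragments joined in a ring; a strong stir hits fragment 0
pot = BreakableObject([Joint(0, 1), Joint(1, 2), Joint(2, 0)])
broken = pot.apply_impulse(0, impulse=7.2)   # joints (0,1) and (2,0) separate
print(len(broken))                            # -> 2
```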

Figure 6.

Witch’s Cauldron

3.4.3. Result

Today, the opportunity for physical exercise has decreased, and as a result, the population is becoming overweight. It is difficult for an overweight child or a person past middle age, whose muscles are weak, to exercise vigorously without any training. There are many training and fitness programs; however, people do not like to use conventional training equipment such as dumbbells and jump ropes because of their monotony. We believe that they need exercise that uses the entire body and is fun.

The player of Witch’s Cauldron has to use his/her entire body to break the objects with the wand. As a result, a fun activity is performed while exercising the entire body. Unlike conventional VR applications, it can be effective in promoting physical health.

We believe that our technology can be applied not only to the destruction of virtual objects but also to other kinds of operations. For example, we can equip a cooking trainer or a simulator for heavy equipment with haptic sensation.

3.5. Landscape bartender

3.5.1. Overview

Some cocktail names liken the drink to a landscape. For example, the Tequila Sunrise likens orange juice and grenadine to the morning sky with the glow of the rising sun; these two ingredients generate a sunrise landscape. This project presents a system that generates landscapes using a cocktail analogy [5]. With this system, users can generate landscapes in the same manner that they make a cocktail, that is, by combining "ingredients." Each ingredient of the landscape cocktail, i.e., each landscape element, is actually water in a different bottle. The player selects a bottle containing the intended landscape element and pours a suitable amount of water into a shaker.

The system has eight elements: sand, rock, water, plants, sun, moon, stars, and clouds. The elements of landscape are categorized into two groups—a) soil: sand, rock, water, and plants; b) sky: sun, moon, stars, and clouds. The mixture ratio of these two element groups determines the ratio of the ground and sky in the image. The amount of water used from each bottle determines the ratio of the landscape elements. The ratio of sun and moon changes over time with the altitude of sun/moon. If the ratio of the sun element is high, a daytime scene is generated, as shown in Figure 8 (d1); otherwise, a nighttime scene is generated, as shown in Figure 8 (d3). If the ratio of sun and moon is equal, an evening scene is generated, as shown in Figure 8 (d2). Moreover, plants are grown only if water and sand/rock are present.
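The mapping from measured ingredient amounts to scene content might look like the following sketch. The chapter only states the qualitative behavior, so the ratio thresholds and the function names below are assumptions.

```python
def scene_type(weights):
    """Decide the time of day from the ratio of the sun and moon 'ingredients'.
    `weights` maps element name -> measured amount; thresholds are illustrative."""
    sun, moon = weights.get("sun", 0.0), weights.get("moon", 0.0)
    total = sun + moon
    if total == 0:
        return "daytime"            # arbitrary default; not specified in the chapter
    sun_ratio = sun / total
    if sun_ratio > 0.6:
        return "daytime"
    if sun_ratio < 0.4:
        return "nighttime"
    return "evening"                # roughly equal sun and moon

def plants_grow(weights):
    """Plants appear only when water and sand or rock are both present."""
    return weights.get("water", 0) > 0 and (
        weights.get("sand", 0) > 0 or weights.get("rock", 0) > 0)

def ground_sky_ratio(weights):
    """Screen split between ground and sky from the soil vs. sky element groups."""
    soil = sum(weights.get(k, 0) for k in ("sand", "rock", "water", "plants"))
    sky = sum(weights.get(k, 0) for k in ("sun", "moon", "stars", "clouds"))
    return soil / (soil + sky) if (soil + sky) > 0 else 0.5

recipe = {"sand": 40, "water": 20, "plants": 10, "sun": 15, "moon": 15}
print(scene_type(recipe), plants_grow(recipe), ground_sky_ratio(recipe))
```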

The contour of the ground and the position of each element are changed by shaking the shaker. The contour of the ground becomes rough if the controller is shaken vertically and becomes smooth if it is shaken horizontally. Moreover, the sun shifts horizontally if the controller is shaken horizontally. To give the player the feeling of generating his/her own landscape, the system displays a vague in-progress image while he/she is shaking the shaker. The in-progress image is unclear, and the contents of the shaker cannot be seen. After shaking the shaker, the player pours water into the cocktail glass. The resultant landscape can be seen clearly only when the cocktail glass is placed on the coaster.

3.5.2. System configuration

The system consists of four modules, as shown in Figure 8 (a): (1) a shaker-type controller with a three-axis wireless accelerometer hidden inside the cap of the shaker; (2) a measuring module comprising eight digital scales for sensing the volume of water: one scale senses the amount of each landscape element, and the data is used for the combining process; (3) a counter-type image display unit; and (4) a PC. Acceleration data is used to change the contour of the ground and the position of each element.

The sensing module for detecting the placement of the glass on the coaster is installed in the counter-type image display unit. A magnetic chip is placed on the base of the glass. Furthermore, a digital compass is used as the sensing module, which detects the approach of the glass. The data from each module is transmitted to the PC via a serial connection and a wireless signal.

The ground of a landscape is generated using a height map of resolution 256 × 256. Uniform balls are set out on a grid at the initial condition, as illustrated in Figure 8 (c1). The size of each ball is varied according to the strength of a shake in the vertical direction or averaged according to the strength of a shake in the horizontal direction. The final surface model of the ground is obtained from the deformed balls, as shown in Figure 8 (c2).
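A minimal sketch of this ball-based terrain model is shown below, assuming random perturbation of ball sizes for vertical shakes and 4-neighbor averaging for horizontal shakes; both deformation rules are illustrative, since the exact formulas are not given in the chapter.

```python
import numpy as np

GRID = 256   # resolution of the height map

def initial_balls(radius=1.0):
    """Uniform balls set out on a GRID x GRID lattice (Figure 8 (c1))."""
    return np.full((GRID, GRID), radius)

def vertical_shake(radii, strength, rng=np.random.default_rng(0)):
    """Roughen the terrain: perturb ball sizes in proportion to the strength
    of vertical shaking (the noise model here is an assumption)."""
    return radii + strength * rng.standard_normal(radii.shape)

def horizontal_shake(radii, strength):
    """Smooth the terrain: blend each ball with its 4-neighborhood,
    more strongly for stronger horizontal shaking."""
    neighbors = (np.roll(radii, 1, 0) + np.roll(radii, -1, 0) +
                 np.roll(radii, 1, 1) + np.roll(radii, -1, 1)) / 4.0
    w = min(max(strength, 0.0), 1.0)
    return (1 - w) * radii + w * neighbors

def height_map(radii):
    """The final ground surface is read from the deformed ball sizes (Figure 8 (c2))."""
    return radii

h = height_map(horizontal_shake(vertical_shake(initial_balls(), 0.3), 0.5))
print(h.shape, float(h.mean()))
```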

3.5.3. Result

This system provides the enjoyment of creating one’s favorite scenery. The technique of designing a landscape follows the actual procedure of making a cocktail; therefore, a player can easily understand the instructions.

Figure 7.

Recipe Book

The digital scales measure the amount of water in units of cubic centimeters; hence, the number of possible ingredient combinations is almost infinite. The contour of the ground and the position of each element are changed by shaking the shaker. Therefore, the system can generate a once-in-a-lifetime scene. In addition, we provided a "recipe" book, shown in Figure 7, as a reference for designing scenery. However, most people enjoyed designing their own scenery as if performing a chemistry experiment.

3.6. Spider Hero

3.6.1. Overview

A superhero has overwhelming speed and power, and this special ability is his most important characteristic. In this VR application, the user can jump from one building to another by using a web launched from his/her hand, similar to the famous superhero Spiderman™. The aim of this application is to give the user the enjoyment of using Spiderman's superpower [6].

The user wears the web shooter illustrated in Figure 9 (b), which is a device used to shoot a web. The user aims at a target building with this device. Then, when the user swings his/her arm forward, the web is launched, thus sticking to the target building on the screen. Next, the user’s arm is pulled in the direction of the target building by the pulling force feedback system, which can provide the feeling of being pulled directly and smoothly as if attached to an elastic string. Finally, the user moves toward the target building.

3.6.2. System configuration

This system consists of three components: a feedback system, input devices, and effects. The feedback system includes an air module that gives the user the feeling of wind and a pulling force feedback system that gives the user the feeling of being pulled. In this system, the user’s arm is connected to an elastic line, and this line gives the user the sense of being pulled. This line is attached to a rubber plug. This rubber plug is activated by a vacuum device. However, without a limit on the system, the user will be indefinitely pulled by this system. Hence, we introduce an openable cap using a servomotor. Using this openable cap, we can control the strength of the pulling force. When the cap is closed, the pulling force is the strongest. On the other hand, when the cap is open, the pulling force is zero. These feedback systems provide the user with a force feedback of flying from one building to another by means of the spider web.
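The cap-based force control can be sketched as a simple mapping from a desired pulling force to a servo cap angle, with the force fading as the swing progresses so that the pull feels elastic. The angle range, maximum force, and ease-out profile below are assumptions, not the device's actual parameters.

```python
CAP_CLOSED_DEG = 0.0     # cap fully closed: strongest pull
CAP_OPEN_DEG = 90.0      # cap fully open: no pull (both angles are illustrative)
MAX_PULL_N = 30.0        # assumed maximum pulling force of the vacuum system

def cap_angle_for_force(target_force_n):
    """Linear mapping from a desired pulling force to a servo cap angle:
    fully closed gives the maximum force, fully open gives zero."""
    f = min(max(target_force_n, 0.0), MAX_PULL_N)
    openness = 1.0 - f / MAX_PULL_N            # 0 = closed, 1 = open
    return CAP_CLOSED_DEG + openness * (CAP_OPEN_DEG - CAP_CLOSED_DEG)

def swing_force_profile(t, duration=1.5):
    """A simple ease-out profile so the pull feels like an elastic string:
    strong right after the web attaches, fading as the user nears the target."""
    if t < 0 or t > duration:
        return 0.0
    return MAX_PULL_N * (1.0 - t / duration) ** 2

# Example: servo angles sampled over one swing
angles = [cap_angle_for_force(swing_force_profile(t / 10.0)) for t in range(16)]
print([round(a, 1) for a in angles])
```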

As input devices, the user can use a pressure sensor and two web shooters. The sensor allows the user to change his view by shifting his/her weight, and the web shooters are interfaces to aim at a target building and launch the spider web.

To make this application as immersive as possible, we work on visual and sound effects. This is particularly important for enhancing the user's experience of speed and exhilaration. For visual effects, we use motion blur and focusing effects, as shown in Figure 9 (d1). Furthermore, we use sound effects of wind and of a virtual city environment. The wind effect uses a looping wind sound whose pitch is modified depending on the motion speed. For the environment sound, we use nine sound sources in the virtual city, as shown in Figure 9 (d2). When the user moves from one place to another, he/she notices a change in the environmental sound because each source plays a different sound.
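The speed-dependent wind pitch and the nine environment sources might be driven by something like the following sketch; the pitch gain, distance rolloff, and 3 × 3 source layout are illustrative assumptions.

```python
import math

BASE_PITCH = 1.0          # playback rate of the wind loop at rest
PITCH_PER_MPS = 0.05      # illustrative pitch increase per m/s of motion

def wind_pitch(speed_mps):
    """Raise the playback rate of the looping wind sound with motion speed."""
    return BASE_PITCH + PITCH_PER_MPS * max(speed_mps, 0.0)

def source_gains(listener_xy, sources, rolloff=20.0):
    """Simple inverse-distance gains for the environment sound sources placed
    around the virtual city; the rolloff constant is an assumption."""
    gains = []
    for sx, sy in sources:
        d = math.hypot(listener_xy[0] - sx, listener_xy[1] - sy)
        gains.append(1.0 / (1.0 + d / rolloff))
    return gains

# Nine illustrative source positions on a 3 x 3 grid over the city
city_sources = [(x * 100.0, y * 100.0) for x in range(3) for y in range(3)]
print(wind_pitch(12.0), [round(g, 2) for g in source_gains((50.0, 50.0), city_sources)])
```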

Figure 9.

Spider Hero

3.6.3. Result

This application has a dreamlike feel. The objective is to provide everyone with the enjoyment of using super powers. In the application, the user can feel being pulled and can feel the wind through a force feedback system. Visual and sound effects immerse the user in this VR experience. Moreover, the intuitive interface increases the operability of this VR application. However, through experimental evaluation, we confirmed that the operability is inadequate, and hence, we plan to improve the devices. On the software side, to provide a better experience, we need to speed up the motion and wind. Our current pulling force feedback system handles the pulling force only in one direction. We plan to overcome this limitation by including a few pipes with openable caps and improving the content to increase the sense of immersion.

This VR application requires the user to have a sense of balance in order to change the user’s viewpoint and to swing both arms at a sufficient rate of speed. Therefore, this application would be effective for whole-body exercise.

3.7. Extreme can crusher

3.7.1. Overview

Recently, environmental awareness has increased worldwide. One of the simplest actions that can contribute to environmental conservation is crushing a can. It reduces the bulk of cans, thus reducing recycling costs and environmental pollution. However, crushing cans is a tedious and boring task. This project proposes a sustainable method for crushing cans by applying the gamification approach.

The player places a can in the can crusher and his/her foot in the slipper. When the player makes a walking-like motion while wearing the slipper, a rocket is fueled and launched; the amount of fuel is proportional to the speed of the walking motion. The launch direction sweeps cyclically through a half circle on the screen and is determined by the timing of the foot motion. While the rocket is flying, it sows the soil with seeds, and flowers bloom. The color of the flowers is related to the dominant color of the crushed can.

3.7.2. System configuration

The system consists of three components: (1) a PC, (2) a web camera, and (3) a can crusher, as shown in Figure 10 (a). Two vibrators are installed on either side of the slipper in which the user places his/her foot. These vibrators imitate the rumbling of the ground when the rocket is launched; an eccentric motor is used as the vibrator to produce noticeably strong, long-period vibrations. An infrared distance sensor is installed in the lower part of the can crusher to detect the speed of the player's foot motion. The height of the rocket launch is determined by the speed of the foot motion; the faster the motion, the higher the rocket flies. The web camera captures the surface of the can. After an image is captured, the system extracts the image region that contains the can by background subtraction and then determines the dominant color of the can with an image histogram. The extracted dominant color is used as the color of the flowers that will bloom.
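The dominant-color step could be sketched as below, assuming an RGB camera frame stored as a NumPy array; the difference threshold, histogram bin count, and launch-height gain are illustrative, not the system's actual values.

```python
import numpy as np

def can_mask(frame_rgb, background_rgb, threshold=30):
    """Pixels that differ strongly from the stored background frame are taken
    as the can region (background subtraction)."""
    diff = np.abs(frame_rgb.astype(int) - background_rgb.astype(int)).sum(axis=2)
    return diff > threshold

def dominant_color(frame_rgb, mask, bins=8):
    """Coarse 3D RGB histogram over the masked pixels; the center of the
    fullest bin is reported as the can's dominant color."""
    pixels = frame_rgb[mask]
    if len(pixels) == 0:
        return (128, 128, 128)                      # neutral fallback
    hist, edges = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    idx = np.unravel_index(np.argmax(hist), hist.shape)
    return tuple(int((edges[c][i] + edges[c][i + 1]) / 2) for c, i in enumerate(idx))

def launch_height(foot_speed, gain=4.0):
    """The faster the walking-like foot motion, the higher the rocket flies."""
    return gain * max(foot_speed, 0.0)

# Tiny synthetic example: a reddish can against a dark background
frame = np.zeros((8, 8, 3), dtype=np.uint8); frame[2:6, 2:6] = (200, 40, 40)
bg = np.zeros((8, 8, 3), dtype=np.uint8)
print(dominant_color(frame, can_mask(frame, bg)), launch_height(1.2))
```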

Figure 10.

Extreme Can Crusher

The system uses dual screens; the upper screen displays a scene of the flying rockets with real-time graphics, as shown in Figure 10 (b), and the lower screen shows system instructions, cheering characters, and a launching guide.

3.7.3. Result

We observed that players enjoyed the ordinarily monotonous work of crushing cans because the system makes it fun. The system displays the flight distance of the rocket, and players tend to compete against one another; some players even gathered empty cans to use with the system. This suggests that the application may contribute to the cleanliness of public spaces and to eco-conscious behavior.


4. Conclusion

This chapter provided an overview of the basic concept of fun computing, a form of entertainment that uses media technology. We reviewed the interaction between humans and a VR environment and analyzed an interaction model. In addition, we outlined some VR applications that entertain people by using an intuitive interaction model and showed how fun computing helps motivate people's actions and change human behavior. Fun computing is widely applicable not only to entertainment but also to physical training, rehabilitation, and the improvement of moral consciousness. Developing fun computing applications is a comprehensive process that requires various skills: not only hardware and software knowledge but also aesthetic design and storytelling abilities. Among these, designing the user experience is the most important and most difficult skill needed to develop an attractive application.

References

  1. Zeltzer D. (1994). Autonomy, Interaction and Presence. Presence, 1(1), 127-132.
  2. Yabu H., Kamada Y., Takahashi M., Kawarazuka Y., Miyata K. (2005). Ton2: A VR Application with a Novel Interaction Method Using Displacement Data. ACM SIGGRAPH 2005, Emerging Technologies.
  3. Dietz P. H., Han J. Y., Westhues J., Barnwell J., Yerazunis W. (2006). Submerging Technologies. ACM SIGGRAPH 2006 Emerging Technologies, Article 30.
  4. Mann S. (2005). "flUId streams": Fountains that are keyboards with nozzle spray as keys that give rich tactile feedback and are more expressive and more fun than plastic keys. Proceedings of the 13th Annual ACM International Conference on Multimedia, 181-190.
  5. Noda T., Nomura K., Komuro N., Tao Z., Yang C., Miyata K. (2008). Landscape Bartender: Landscape Generation Using a Cocktail Analogy. ACM SIGGRAPH 2008, New Tech Demos.
  6. Ishibashi K., Luz T. D., Eynard R., Kita N., Jiang N., Segi H., Terada K., Fujita K., Miyata K. (2009). Spider Hero: A VR Application Using a Pulling Force Feedback System. Proceedings of VRCAI 2009, 197-202.

