Using Augmented Reality Cognitive Artifacts in Education and Virtual Rehabilitation

Written By

Claudio Kirner, Christopher Shneider Cerqueira and Tereza Gonçalves Kirner

Published: 12 September 2012

DOI: 10.5772/46416

From the Edited Volume

Virtual Reality in Psychological, Medical and Pedagogical Applications

Edited by Christiane Eichenberg

1. Introduction

The first Augmented Reality Systems (ARS) were usually designed around three main blocks, as illustrated in Figure 1: the Infrastructure Tracker Unit, the Processing Unit, and the Visual Unit. The Infrastructure Tracker Unit was responsible for collecting data from the real world and sending them to the Processing Unit, which mixed the virtual content with the real content and sent the result to the Video Out module of the Visual Unit. Some designs used a Video In module to acquire the data required by the Infrastructure Tracker Unit. The Visual Unit can be classified into two types of system, depending on the visualization technology adopted:

  1. Video see-through: it uses a Head-Mounted Display (HMD) that employs video mixing and displays the merged images on a closed-view HMD.

  2. Optical see-through: it uses an HMD that employs optical combiners to merge the images within an open-view HMD.

Figure 1.

Augmented Reality Systems (ARS) standard design.
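As a rough sketch of this three-block organization, the following C++ skeleton shows how the units could exchange data each frame; all types and functions here are hypothetical stand-ins, not from any real AR library:

    #include <cstdio>
    #include <vector>

    // Hypothetical types; a real ARS would use library-specific ones.
    struct Frame { int width = 640, height = 480; std::vector<unsigned char> pixels; };
    struct Pose  { float transform[16] = {}; bool valid = false; };

    // Stubs standing in for the three ARS blocks of Figure 1.
    Frame videoIn()                { return Frame{}; }           // Visual Unit: Video In
    Pose  trackWorld(const Frame&) { return Pose{{}, true}; }    // Infrastructure Tracker Unit
    Frame renderMixed(const Frame& f, const Pose&) { return f; } // Processing Unit: mix real + virtual
    void  videoOut(const Frame& f) { std::printf("frame %dx%d out\n", f.width, f.height); }

    int main() {
        for (int i = 0; i < 3; ++i) {            // per-frame ARS loop (normally endless)
            Frame real = videoIn();              // capture the real scene
            Pose  pose = trackWorld(real);       // register camera/marker pose
            videoOut(pose.valid ? renderMixed(real, pose) : real);
        }
    }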

HMDs are currently the dominant display technology in the AR field [5]. However, they fall short in several respects, such as ergonomics, high prices, and relatively low mobility due to their size and connectivity features. An additional problem involving HMDs is the interaction with the real environment, which places virtual interactive zones around the user, making the collision with these zones hard due to the difficulty of interacting with multiple points at different depths.

Alternative approaches to developing ARS involve the use of monitors and tablets. Monitors are used as an option for indirect view, since the user does not look directly into the mixed world. Tablets are used for direct view, since the user points the camera at the scene and looks directly into the mixed world. Both approaches still make collisions difficult.

To make collision actions easier, we developed a series of artifacts, which support the user's activity in the active zones, or at the active points, by overlapping physical objects with virtual objects. The AR collision can be performed even if the user is not looking directly into the mixed world but only at the artifact, providing another cognitive possibility, since the user can rely on touch to collide. An artifact whose cognitive function is empowered with augmented reality is called an Augmented Reality Cognitive Artifact (ARCA).

This chapter presents the concepts and technology involved, emphasizing aspects of authoring and interaction in augmented reality applications based on multiple markers and multiple points. Moreover, it discusses augmented reality applications in the education and rehabilitation areas, which use artifacts aiming to overcome the main interaction problems. Finally, the last section concludes the chapter and presents future work.

1.1. Virtual, augmented and cross reality definitions

1.1.1. Virtual Reality (VR)

Virtual reality was the first three-dimensional interface option, allowing natural interaction, using the hands, with virtual environments rendered on monitors, projections, or VR HMDs. To interact with the virtual elements, multimodal devices are necessary, such as VR gloves (with sensors and tracking capabilities), force-feedback devices, 3D mice, stereoscopic glasses, etc.

Representative definitions of virtual reality are: "virtual reality is an advanced computer interface that has real-time simulation and interactions through multi-sensor channels" [6] and "virtual reality is a computer interface that allows the user to interact, in real time, in a computer generated three-dimensional world, using his senses through special devices" [19]. In virtual reality environments, the user either sees the virtual world through a window rendered on monitor or projection screens, or is inserted into the virtual world through an HMD or projection rooms, called caves. When the user is totally inserted into the virtual world, through an HMD, caves, and multi-sensory devices, the virtual reality is called Immersive (Figure 2a). When the user is partially inserted into the virtual world, through a monitor or equivalent, the virtual reality is called Non-Immersive (Figure 2b).

1.1.2. Augmented Reality (AR)

The AR definition has evolved along with the technology. In early definitions, the real-world augmentation was obtained only through visual elements [16]; however, with the development of audio and haptic interactions associated with spatial position in real time, the AR concept has been extended.

Figure 2.

Virtual reality examples. (a) Immersive VR environment; (b) Non-Immersive VR environment.

Azuma [2] [1] defines augmented reality as a system that allows the user to see the real world, with virtual objects superimposed upon or composed with the real world. That system has the following three characteristics: it combines real and virtual elements; it is interactive in real time; and it is registered in a three-dimensional (3D) way. Figure 3 shows some results of a real world augmentation with virtual elements.

Figure 3.

Augmented reality examples. (a) Virtual objects are misplaced; (b) Virtual objects are correctly placed.

Given this development of audio and haptic interactions associated with spatial position in real time, a wider AR definition involves the real world empowered with virtual objects covering several aspects, such as visualization, audio, haptics, etc.

According to an updated definition, "Augmented Reality is an interface based on computer generated information combination (static and dynamic images, spatial sounds and haptic sensations) with the real user environment, provided by technological devices and using natural interaction in the real world" [19].

A way to bring virtual information into the physical user environment is to use a webcam, which captures a live stream of the real world and tracks some features, allowing the computer to add virtual information to the real world. The result can be seen, heard, and felt through monitors, projections, helmets, and haptic devices, depending on the interaction devices that are part of the system.

1.1.3. Cross Reality (CR)

Cross-reality involves a ubiquitous mixed reality environment that comes from the fusion of a network of sensors and actuators (which collect and send data related to the real world) with shared virtual worlds, using an augmented reality interface, where the exchange of information is bidirectional between the real and the virtual world [34] [17].

Cross-reality can be classified into two types: Non Overlapped and Overlapped.

  • Non Overlapped Cross-Reality (NonOVER-CR): an AR environment whose virtual elements are not overlapped with the real elements, using a network of sensors and actuators to achieve bidirectional communication between virtual and real objects. An example of this type of CR is a Second Life application that communicates with the real world [35].

  • Overlapped Cross-Reality (OVER-CR): an AR environment whose virtual elements are overlapped with the real elements, using a network of sensors and actuators to achieve bidirectional communication between virtual and real objects. This type of CR was discussed by [27], who explained the concept of augmented space, overlaying layers of data on the physical space using technology. Keiichi Matsuda, in his thesis [29], describes some examples of physical space overlaid with dynamically changing multimedia information, localized for each user.

Figure 4 shows an example of NonOVER-CR and OVER-CR, developed with the basAR authoring tool (Section 3.2), where there are two buttons (one virtual and one real) and an overlapped lamp (an LED and a virtual light beam). The buttons are not overlapped, which characterizes a NonOVER-CR example; additionally, the illumination characterizes an OVER-CR example. The interaction with the real or the virtual button activates or deactivates the overlapped lamp, showing the LED on with the virtual beam or the LED off without the virtual beam.

Figure 4.

Cross-Reality example.
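To make the bidirectional exchange concrete, the sketch below models the button-and-lamp example of Figure 4 in C++; all function names are invented stand-ins for the sensor/actuator network and the AR layer, not basAR code:

    #include <cstdio>

    // Shared lamp state: one boolean drives BOTH the real LED and the virtual
    // beam, so the overlapped lamp stays consistent in the two worlds (OVER-CR).
    static bool lampOn = false;

    // Invented stand-ins for sensor/actuator and AR-layer hooks.
    bool realButtonPressed()    { return false; }  // physical button via sensor network
    bool virtualButtonClicked() { return true;  }  // collision with the virtual button
    void setLed(bool on)   { std::printf("LED %s\n", on ? "on" : "off"); }
    void drawBeam(bool on) { std::printf("virtual beam %s\n", on ? "shown" : "hidden"); }

    int main() {
        // One iteration of the CR loop: either world's button toggles the shared
        // state, and both actuators (real LED, virtual beam) are updated together.
        if (realButtonPressed() || virtualButtonClicked())
            lampOn = !lampOn;
        setLed(lampOn);    // actuation in the real world
        drawBeam(lampOn);  // actuation in the virtual world
    }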

Figure 5 shows a CR representation obtained from the fusion of the augmented reality area (overlapping real and virtual environments) with ubiquitous computing, involving a network of real and virtual sensors and actuators. In this situation, interactions in the real environment reflect into the virtual environment and vice-versa. When the virtual and real sensors are overlapped, the interactions in the two worlds are concomitant. This property could be useful in several types of applications, involving telepresence, collaborative and remote work in physical installations empowered by virtual elements, etc. Cross-reality makes it possible to interact with remote equipment, as well as with complex experiments that are hard to simulate.

Figure 5.

Cross-Reality, virtual and real interfaces.

2. Augmented reality artifacts

Humans acting in the physical world frequently use artifacts as extensions of their own knowledge and reasoning systems, to support the remembering and processing of information [3] [32]. Classical examples are a shopping list and a string tied around a finger. Artifacts that are used in cognitive applications are named cognitive artifacts.

The term cognitive artifact was coined by Norman [31] and has different definitions, depending on the available technology and the type of application. According to an up-to-date definition, a cognitive artifact is a physical object or software application used to aid, enhance, or improve thinking and reasoning.

Artifacts, including cognitive ones, have significant potential to be implemented with augmented reality based on computer vision, since the prototype can have low cost and be easily distributed to interested users. Several interactive artifacts for rehabilitation are being developed; however, most of them are applied in motor rehabilitation. There are few examples related to cognitive rehabilitation [4] [37] [12].

Artifacts based on augmented reality technology should fulfill the following requirements [21]:

  • The artifact, for cognitive applications, has to involve multi-sensory perception, memory, attention, logic, and motor control, in order to allow the preparation of cognitive exercises;

  • The physical parts of the artifact have to be built with ordinary materials, involving a simple process, presenting availability and low cost;

  • For this, materials such as Styrofoam, cardboard, or wood can be adopted to implement the physical structure, fastened with glue, always accompanied by instructions and templates;

  • The logical parts of the artifact have to use augmented reality technology based on computer vision software. Authoring tools for rapid prototyping using augmented reality can ease the development of applications. A further section presents three authoring tools suitable for these purposes;

  • The user's interactive actions on the artifact must be tangible and easy (see the sketch below). This property, due to the coincident physical and virtual points, allows force-feedback interactions: when the user touches the interaction device (pointer) on the artifact, he feels the contact and the virtual action point is enabled. This characteristic is important because it gives more comfort to the user. When the points are placed in 3D space, without physical association, they demand more ability and concentration from the user to collide the pointer with the virtual points.

Augmented reality artifact applications can be visualized with a projector or an HMD; however, a computer monitor is cheaper, more available, and easier to operate. The artifact allows direct interaction with sound feedback, but the visualization will be indirect when a monitor is used.
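The coincidence of physical and virtual points mentioned above reduces, in software, to a proximity test between the tracked pointer tip and each action point fixed on the artifact. A minimal sketch, with illustrative coordinates and threshold:

    #include <cmath>
    #include <cstdio>

    struct Point3 { float x, y, z; };

    // Euclidean distance between the pointer tip and an action point,
    // both expressed in the reference-marker coordinate system.
    float dist(Point3 a, Point3 b) {
        return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
    }

    int main() {
        // Action points coincide with physical spots on the artifact, so the user
        // feels the contact (passive force feedback) at the moment of collision.
        Point3 actionPoints[] = {{0, 0, 0}, {50, 0, 0}, {0, 50, 0}};  // mm, illustrative
        Point3 pointerTip     = {49, 1, 2};           // from the pointer-marker pose
        const float kThreshold = 15.0f;               // collision radius, illustrative

        for (int i = 0; i < 3; ++i)
            if (dist(pointerTip, actionPoints[i]) < kThreshold)
                std::printf("action point %d activated\n", i);  // trigger sound/object
    }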

3. Authoring tools for AR artifact applications

In recent years, a series of AR authoring tools has been released to help users develop spatial applications mediated by computer. AR authoring tools can be classified, according to their programming and content-design characteristics, into low level and high level, considering the abstraction of concepts and the interface complexity incorporated in the tool.

Programming tools are based on basic or advanced libraries, involving computer vision, registration, three-dimensional rendering, sounds, input/output, and other functions. ARToolKit [16], MR [42], MX [11] and FLARToolKit are examples of low level programming tools. The development of applications based on programming tools can be complex. Furthermore, authoring tools, templates, and interfaces hide the development complexity and ease the steps toward the application abstraction, as illustrated in Figure 6.

Figure 6.

Complexity versus Abstraction in AR Applications Development.

ARToolKit is one of the first augmented reality programming tools that use marker registration and computer vision. With this tool, developers need C/C++ programming skills to author applications. A more recent tool, FLARToolKit, is a wrapper of ARToolKit developed in ActionScript 3, the language of Adobe Flash environments. FLARToolKit has a distinguishing feature: it enables the creation of web-based augmented reality applications.
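For a flavor of this low-level style, the fragment below condenses the marker-detection step of a typical classic ARToolKit frame loop; video capture, camera setup, and OpenGL rendering are omitted, and the threshold value and the surrounding function are illustrative:

    #include <AR/ar.h>    // classic ARToolKit header

    // One frame of a typical ARToolKit loop, abbreviated: detect markers in the
    // video image, then recover the camera-to-marker transform for one pattern.
    // patt_id comes from arLoadPatt(); setup and rendering are omitted.
    void processFrame(ARUint8* dataPtr, int patt_id, double patt_width,
                      double patt_center[2], double patt_trans[3][4]) {
        ARMarkerInfo* marker_info;
        int marker_num;

        // Find all marker squares in the captured image (threshold 100 is typical).
        if (arDetectMarker(dataPtr, 100, &marker_info, &marker_num) < 0) return;

        // Pick the best-confidence detection of our pattern.
        int k = -1;
        for (int i = 0; i < marker_num; i++)
            if (marker_info[i].id == patt_id &&
                (k == -1 || marker_info[i].cf > marker_info[k].cf))
                k = i;
        if (k == -1) return;  // pattern not visible this frame

        // 3x4 transform registering virtual content on the marker.
        arGetTransMat(&marker_info[k], patt_center, patt_width, patt_trans);
    }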

Content design tools are independent of a specific programming language, replacing it with the description of the virtual objects and their relationship with the real environment. In this context, APRIL [24] is a low level example of this type of tool, which uses XML descriptions. On the other hand, high level content design tools use graphical user interfaces to represent the descriptions and interactions, as occurs in DART [26], AMIRE [13], ECT [15], ComposAR [39] and ARSFG [43].

High level content design tools tend to be more intuitive and suitable for non-programmers. These tools can support scripting, visual interfaces, new functionalities added by the user, and real-time interpretation.

Our research differs from other AR authoring tools in that it considers the following characteristics:

  • A level of abstraction that hides the underlying framework (ARToolKit or FLARToolKit);

  • Authoring AR applications depends on editing configuration files and on tangible operations;

  • There are different authoring levels, depending on the skills of the developer;

  • Authoring can use tangible operations, the editing of configuration files, and mouse and keyboard support; however, end users can interact with the AR application using only one or two markers.

Authoring an AR application basically depends on: the structure of the AR environment; the data structure and folders that support the tool; the authoring interface; and the configuration tasks, action commands, system commands, and utilization procedures that support the end user in navigating and interacting with the augmented environment.

3.1. ARAS-NP

To ease the development of AR applications with those elements, we developed the authoring tool ARAS-NP (Augmented Reality Authoring System for Non-Programmers). It includes authoring and utilization features, besides additional features related to shared remote use, which enables user collaboration.

ARToolKit is the core of ARAS-NP, and the additional functionalities were programmed in C/C++. The software, user manual, and applications of ARAS-NP are freely distributed by the authors [18].

Augmented reality involves more than superimposing virtual objects and annotations on the real world. Thus, the augmented world (Figure 7), as considered in this work, presents real and virtual objects, such as: interactive objects, which can change in certain situations; animated objects, which can be activated; visible or invisible objects, which vanish or appear in certain cases; visible or invisible points, which can be activated or deactivated; etc.

Moreover, the augmented reality environment can be modified after the initial authoring, for example, by adding, changing and deleting points and virtual objects.

Figure 7.

Augmented World.

The data structure of the augmented reality environment to be authored comprises reference markers, which have associated virtual boards, and their respective elements (points, virtual objects and sounds) that appear on the board, according to Figure 8. These elements must be placed in folders that the developer manipulates in order to create the augmented reality environment.

Figure 8.

ARAS-NP Data Structure.
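Rendered as code, the hierarchy of Figure 8 could be modeled roughly as follows; this is a hedged C++ sketch of the data structure, with assumed field names, not actual ARAS-NP source:

    #include <string>
    #include <vector>

    // Hedged C++ rendering of the ARAS-NP hierarchy in Figure 8: reference
    // markers own virtual boards, and boards own the elements (points with
    // attached virtual objects and sounds) placed on them.
    struct VirtualObject { std::string modelFile; bool visible = true; };
    struct ActionPoint {
        float x = 0, y = 0, z = 0;        // position on the board
        VirtualObject object;             // object shown when the point is active
        std::string soundFile;            // sound played on activation
        bool enabled = true;
    };
    struct VirtualBoard    { std::vector<ActionPoint> points; };
    struct ReferenceMarker {
        std::string patternFile;          // marker pattern the tracker recognizes
        VirtualBoard board;               // board anchored to this marker
    };

    int main() {
        ReferenceMarker ref;
        ref.patternFile = "patt.ref";
        ActionPoint p;
        p.x = 50; p.y = 0; p.z = 0;
        p.object.modelFile = "lamp.wrl";
        p.soundFile = "click.wav";
        ref.board.points.push_back(p);    // authoring = editing this structure
    }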

3.2. basAR

basAR (Behavioral Authoring System for Augmented Reality) is an evolution of ARAS-NP, since it uses the same AR framework, ARToolKit, as its core. Its configuration is based on description files, and it follows the same approach of using action points, unlike other authoring tools that create behavior and interactivity based on marker position, orientation, and proximity. The software, user manual, and applications of basAR are freely distributed by the authors [8].

The basAR data structure is organized according to Figure 9.

Figure 9.

basAR Data Structure.

The basAR tool involves a multi-layer approach, with the following features:

  • Infrastructure: It defines the correlation between the real and virtual worlds, such as markers and their properties;

  • Structure: It defines the virtual points layer and where they are located;

  • Content: It defines the models, sounds, etc. that are used to create the application abstraction;

  • Behavior: It defines how the augmented layer handles the feedback from user interaction. The basAR behavior is structured by commands that describe the application dynamically; those commands are grouped in a language called basAR-AL (basAR Authoring Language) [8] (see the sketch after this list);

  • Acting: It defines how the user interacts with the structure layer;

  • Cross-Reality: It defines the keywords used by the behavior layer to communicate with the hardware.
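As a hedged illustration of how a behavior layer of this kind can connect acting events to content reactions, the C++ sketch below uses an event-to-action map; the event names and actions are invented for illustration, and this is not basAR-AL syntax:

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>

    // Hedged sketch in the spirit of basAR's multi-layer design (NOT basAR-AL):
    // user interactions reported by the acting layer are dispatched to actions
    // that change the content layer.
    int main() {
        std::map<std::string, std::function<void()>> behavior;

        // Authoring step: bind action-point events to content reactions.
        behavior["touch:point1"] = [] { std::printf("show model, play sound\n"); };
        behavior["touch:point2"] = [] { std::printf("start animation\n"); };

        // Runtime step: the acting layer reports an event; the behavior layer reacts.
        std::string event = "touch:point1";
        if (auto it = behavior.find(event); it != behavior.end())
            it->second();   // feedback to the user via the content layer
    }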

3.3. FLARAS

FLARAS (Flash Augmented Reality Authoring System) is an augmented reality authoring tool based on the same action point approach of ARAS-NP; furthermore, it represents an evolution, mainly due to its graphical interface and because it allows developing applications that can be hosted on the Internet and played on any computer that has Adobe Flash Player. This is an important advance, since most technologies are moving toward Web applications and cloud computing. The software, user manual, and applications of FLARAS are freely distributed by the authors [41].

The FLARAS data structure is organized according to Figure 10.

Figure 10.

FLARAS Data Structure.

4. Augmented reality applications for education

The main advantages of using ARS for educational purposes are [7]:

  • Students are more motivated, because they live an experience proposed by the application and use a new technology;

  • AR can illustrate processes and characteristics that are not usually viewed by the user;

  • AR allows detailed visualization and objects animation;

  • AR allows micro and macro visualizations that cannot be seen with the naked eye, as well as different view angles for understanding the subject;

  • AR allows interactive virtual learning using virtual experiments;

  • AR allows students to recreate the experiments outside the school environment;

  • The students become more active due to the interactive application characteristics;

  • AR encourages creativity, improving the experience;

  • AR provides equal opportunities to students from different cultural backgrounds;

  • AR helps to teach computational and peripheral skills.

AR technology has a strong appeal to constructivism, in which students control their own learning [10]. AR environments allow students to explore objects, perform tasks, learn concepts, and develop skills. Using AR educational applications, each student can pursue his or her own interests, at his or her own pace and according to individual needs and characteristics. For example, in a historical place, using an AR application, each student can define his or her own discovery path [14].

This section shows some educational examples developed with different authoring tools, such as:

  • ARAS-NP: AR books, Spatial Tutor, Q&A Applications, Perspective learning;

  • basAR: Geometry teaching and learning application;

  • FLARToolKit: Electromagnetism teaching and learning application.

4.1. AR books

AR books comprise applications that have been widely disseminated in recent years [14]. When a person looks at an AR book, it seems like any other book. However, when the user puts the book in front of a computer with a webcam, 3D objects, sounds, animations, extra explanations, and several interactive elements seem to jump from the pages. These resources are added to the book to motivate the student to explore the presented theme, supporting the learning process.

Some examples of AR Books are the GeoAR [36] and the SpaceAR [33].

GeoAR is an AR book for teaching geometry subjects related to the main geometric shapes. Figure 11a presents the page of the square in GeoAR, showing the marker, some explanations, and formulas. Figures 11b and 11c show the book with the AR layer on the sphere and square pages.

Figure 11.

GeoAR examples. (a) Example page, (b) Sphere page and (c) Square page.

Another example of an AR book is SpaceAR. It has information about the Solar System, and its pages guide the user to new discoveries about the objects that orbit the Sun. Figures 12a, 12b and 12c illustrate the use of the book, with the Sun, its information, and a rotating animation.

Figure 12.

SpaceAR examples. (a) How to use the book, (b) Sun information page and (c) Sun animation.

4.2. AR spatial tutor to explore multimedia and three-dimensional environments

The AR Spatial Tutor aims at creating interaction with panels and mockups using AR, to expose 3D objects, annotations, sounds and animations.

This tutor is based on the ARAS-NP tool and includes two physical artifacts to demonstrate its use. The first version is based on a photographic panel representing the Itaipu Dam, in Brazil (Figure 13a). It has some action points located on the panel which, when clicked with the interaction artifact, show annotations (Figure 13b), sounds, and explanations. These points can hold multiple information elements, allowing the expansion of contents or the serving of different types of users.

Figure 13.

AR Spatial Tutor - Multimedia. (a) Photographic Panel and (b) AR annotations.

The second version of the AR Spatial Tutor is based on a mockup of the same Itaipu Dam, made from Styrofoam. An AR layer paints the Styrofoam and places the action points. A simple look at the mockup shows a static artifact, without interaction, which might not be attractive to students or users. However, when the AR layer is placed, the mockup is empowered with dynamic content, motivating its use. Figure 14a shows the mockup without the AR layer and Figure 14b shows it with the AR layer added.

Figure 14.

AR Spatial Tutor - Mockup. (a) Mockup without AR layer and (b) with AR layer.

4.3. Q&A-AR game

The Q&A-AR educational spatial game is a multiplayer car racing game based on questions and answers, which works in an augmented reality environment [20].

The game Q&A-AR fulfills the following requirements:

  • The game must have educational potential involving several themes of study using texts, illustrations and sounds;

  • The physical parts of the game must be made with ordinary materials and processes, in order to ensure availability and low cost;

  • The logical parts of the game must use augmented reality technology based on computer vision;

  • The interactive actions to be executed on the game must be tangible and easy;

  • The information related to the game (questions, answers, instructions) must be easily customized by teachers;

  • The user interface of the game must consider usability factors, such as being easy to understand, easy to learn, and easy to use.

The game uses a series of artifacts, comprising a nonmoving artifact and moving parts. The nonmoving artifact contains two perpendicular planes that present the game information to the user. The vertical plane contains the reference marker, which is used to superimpose the virtual information on the artifact. The horizontal plane presents the race path, with ten cells, and a textual area for questions and answers.

The moving parts are composed of the player cars, the dice, and an interaction pointer with a marker.

The virtual structure is composed of virtual buttons that overlap the printed buttons and of virtual cell buttons on the path. To perform an interaction, the player only needs to touch the physical pointer to a printed button or to the top of a car placed on a path cell.

Figures 15a, 15b, 15c and 15d present the nonmoving parts, the moving parts, the activation of a question and the answer elements, and the augmented reality environment of the game, respectively.

Figure 15.

Q&A-AR Game. (a) Nonmoving parts; (b) moving parts; (c) activation of a question and the answer elements; (d) augmented reality environment of the game.

The goal of the game is to reach the end of the path first. The cars move along the path on the horizontal plane, driven by movement information involving the dice, forward and backward movements indicated by buttons or resulting from the player's performance, and the answers to the questions presented by the activation of the virtual path-cell buttons.
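One plausible reading of these movement rules, expressed as a hedged C++ sketch; the exact scoring used by Q&A-AR is not detailed in the text, so the forward/backward amounts below are assumptions:

    #include <cstdio>
    #include <cstdlib>

    // Hedged sketch of the Q&A-AR racing rule: cars move along a 10-cell path,
    // driven by the dice and by the player's answers. Answer outcomes are
    // randomized here; the real game reads them from the artifact interaction.
    int main() {
        const int kCells = 10;
        int carCell[2] = {0, 0};                     // two players start off the path

        for (int turn = 0; ; turn = 1 - turn) {
            int dice = std::rand() % 6 + 1;          // dice artifact
            bool correct = (std::rand() % 2 == 0);   // stands in for the user's answer
            carCell[turn] += correct ? dice : -1;    // forward on hit, back on miss (assumed)
            if (carCell[turn] < 0) carCell[turn] = 0;
            std::printf("player %d at cell %d\n", turn, carCell[turn]);
            if (carCell[turn] >= kCells) {           // goal: reach the end first
                std::printf("player %d wins\n", turn);
                break;
            }
        }
    }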

4.4. Perspective learning

Seeing and describing real and imaginary three-dimensional scenes from the observer's viewpoint is an intuitive activity for non-impaired people; however, it is difficult or even impossible for congenitally blind people, since it involves concepts that are abstract to them, such as perspective, depth planes, occlusion, etc. This project, supported by an augmented reality tool, helps blind people to understand, describe, and convert three-dimensional scenes into two-dimensional embossed representations, like paintings. To understand how blind people can acquire those concepts, we developed an augmented reality application, working as an audio spatial tutor, to ease the perspective learning process [23] [38].

Figure 16 presents some ARCAs developed for the perspective learning application.

Figure 16.

(a), (b), (c) and (d): ARCAs used in the perspective learning application.

4.5. Geometry learning

The development of spatial skills reaches a critical point when students start learning about three-dimensional objects. To support this, teachers usually employ woodcraft artifacts and several orthographic and axonometric projections in books [28]. A way to improve the learning of three-dimensional shapes is AR-based content. This application is used to teach the math concepts of polygon extrusion and revolution, using the authoring tool basAR to create an interactive application in which the student chooses the type of polygon and then applies the movement. Figure 17a shows the three possible polygons (circle, square and triangle). When a polygon is chosen, the application shows the two possibilities (extrusion or revolution), according to Figure 17b. Figures 17c and 17d show the extrusion and revolution results for a circle.

This application can be found on the Internet [9].

Figure 17.

(a) Polygon choices; (b) the circle selected by the user; (c) extrusion result; (d) revolution result.
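As a worked illustration of the two operations the application teaches, the sketch below generates surface points by extruding and revolving a small 2D profile; the profile and step count are arbitrary examples, not taken from the basAR application:

    #include <cmath>
    #include <cstdio>

    // Extrusion translates a 2D profile along the z-axis; revolution sweeps it
    // around the y-axis. Printing the generated vertices keeps the sketch small.
    struct P3 { float x, y, z; };

    int main() {
        const P3 profile[] = {{1, 0, 0}, {1, 1, 0}};   // 2D profile in the xy-plane
        const int steps = 8;

        // Revolution: rotate each profile vertex around the y-axis.
        for (int s = 0; s <= steps; ++s) {
            float a = 2.0f * 3.14159265f * s / steps;
            for (const P3& p : profile)
                std::printf("rev  %.2f %.2f %.2f\n",
                            p.x * std::cos(a), p.y, p.x * std::sin(a));
        }

        // Extrusion: copy the profile at increasing depth.
        for (int s = 0; s <= steps; ++s)
            for (const P3& p : profile)
                std::printf("ext  %.2f %.2f %.2f\n", p.x, p.y, s * 0.1f);
    }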

4.6. Electromagnetism with augmented reality

Some concepts of electromagnetism, being relatively abstract, require more effort from students to be understood. With the intention of offering a more interactive and dynamic alternative, the MiniLabElectroMag-AR (Mini Laboratory of Electromagnetism with Augmented Reality) was developed. The purpose of this application is to work as a simple laboratory for experiments on electromagnetism, allowing, for example, students to explore in a practical way some basic concepts, such as electric current, electric circuits, the induced magnetic field generated by the flow of electric current in a straight wire, and also the simulation of Oersted's experiment.

Figure 18a shows the artifact with the lamp, battery, and switch elements drawn. Figure 18b shows the same artifact with the AR layer, with the virtual elements superimposed.

Figure 18c shows the compass deflection due to the magnetic induction of the wire. Figure 18d shows two students collaboratively exploring the experiment.

Figure 18.

(a) Artifact with drawn elements; (b) artifact with AR layer; (c) and (d) Oersted's experiment.

This application can be found on the Internet [40].

Advertisement

5. Augmented reality applications for cognitive rehabilitation

Nowadays, with the technological evolution, cognitive rehabilitation is using interactive artifacts, such as software applications (based on multimedia and virtual reality) and physical objects controlled by computer (PDAs, tablets, cellular phones, specific devices with GPS, accelerometers, and other technological resources, etc.). Those artifacts are part of the technology for cognitive rehabilitation and can help people with traumatic brain injury, stroke, learning disabilities, and multiple sclerosis. Besides, they have some potential to aid people with dementia, autism spectrum disorders, and intellectual disability [30].

The cognitive artifacts used for retraining and development of cognitive skills explore the following aspects: temporal and spatial orientation; attention, concentration and calculation; language understanding and speaking; understanding of social cues; judgment and abstraction; immediate recall, recent and remote memory; organization; planning and problem solving; mental processing speed; multi-sensory processing (visual, auditory and motor); self-control and self-confidence.

With recent technological trends, rehabilitation patients are getting access to advanced interactive devices with interesting features: they are highly technological, highly interactive, and multi-sensory. Nevertheless, those devices present some disadvantages, such as complex use, difficulty in converting the rehabilitation training into real-life benefits, low or medium availability, medium or high cost, medium or high dexterity demands, etc. To overcome such problems, it is important to use assistive devices that present simplicity as their main feature [25].

We discuss next the development of interactive cognitive artifacts and their applications for retraining and improvement of cognitive skills, aiming at satisfying the main characteristics desired in a modern cognitive device, such as: low cost, easy customization, user-friendly interface, multi-sensory input/output, low dexterity demands, etc.

The rehabilitation examples, presented next, are:

  1. ARAS-NP: Artifact-AR

  2. basAR: dGames-VI Memory Game and dGames-inclusive AR Pong

5.1. Artifact-AR

The Artifact-AR was implemented as a 3D structure built with three perpendicular planes, each containing nine cells that can be virtually colored or can have spatial colored virtual "coins" activated (Figure 19). Besides, on the upper side, there is a plane extension used to accommodate the application marker and control buttons and to receive visual information like pictures and texts. The user interacts with the physical artifact, hears the auditory information from the computer loudspeaker, and visualizes the effects (video of the physical artifact expanded with virtual information) on a monitor placed in front of him/her.

The visualization can also be obtained with a projector or an augmented reality HMD, although a computer monitor is cheap, available, and easy to operate. The artifact allows direct interaction with sound feedback; on the other hand, if a monitor or a projector is utilized, the visualization will be indirect, whereas, if an HMD is utilized, the visualization will be direct. We initially developed two cognitive applications exploring the identification, memorization, comparison, and association of pictures, patterns, and sounds. One application considers pre-built patterns, whereas the other allows the assembly of patterns through interaction with each cell individually. Figure 19a presents examples where the user can select a picture on the right side and/or a pattern composed of virtual embossed "coins" on the perpendicular planes, comparing or associating pictures and patterns. Figure 19b presents an example of a picture containing a one-plane pattern that serves as a reference to be replicated by the user through the activation of cells on a plane.

Figure 19.

(a) Comparison and association of pictures and/or patterns; (b) replication of patterns.
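A minimal sketch of the replication exercise of Figure 19b, assuming a 3x3 cell plane; the reference and user patterns here are illustrative, not from the actual application:

    #include <cstdio>

    // The user activates cells on a 3x3 plane; the application compares the
    // result with a pre-built reference pattern and reports the differences.
    int main() {
        bool reference[3][3] = {{1, 0, 1}, {0, 1, 0}, {1, 0, 1}};  // target pattern
        bool user[3][3]      = {{1, 0, 1}, {0, 1, 0}, {1, 0, 0}};  // cells the user toggled

        int wrong = 0;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                if (reference[r][c] != user[r][c]) ++wrong;

        // Multi-sensory feedback: a sound could report success or remaining errors.
        if (wrong == 0) std::printf("pattern replicated\n");
        else            std::printf("%d cell(s) differ\n", wrong);
    }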

Although this application was originally implemented with ARAS-NP, it is being converted with FLARAS to work on the Internet. The final project using FLARAS will be available on the Internet [22].

5.2. dGames-VI memory game

This project presents a solution to exercise cognitive skills, such as association and memory, based on a classic memory card game, using a simple artifact enhanced with AR. In this application, a therapist can set up several maps, with different characteristics and levels.

This artifact (Figure 20a) was developed blending the tactile and auditory senses, allowing its use by visually impaired people. However, as it can also show images (Figure 20b), it can be used by non-visually-impaired people as a memory game, or in class activities, to teach word associations, languages, scene associations, etc.

Figure 20.

(a) Artifact design; (b) artifact with AR layer.

An example of this artifact, applied as an inclusive memory card game, is given as follows (Figure 21):

  1. The therapist builds several maps, with different characteristics, levels, etc.;

  2. The therapist sets up the environment with the artifact and the webcam/computer, adopting the required AR software;

  3. The system presents the first option, asking whether to start the activity. The user chooses this option. The system issues a start sound and shows the covered map;

  4. The user chooses a first card (hole). The system issues the card's sound and shows its image;

  5. The user chooses a second card. The system issues the card's sound and shows its image. If the pair of sounds (and/or images) related to the two cards matches, the system verifies whether all the pairs on the map have been completed. If so, the system issues a game-over sound and releases the next map, returning to step 3; otherwise, play continues. If the pair does not match, the system issues a mistake sound, closing both selected cards and sending the user back to step 4 (a sketch of this matching loop follows Figure 21).

Figure 21.

dGames-VI Memory Game Diagram.
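A hedged C++ sketch of the matching logic in steps 3-5 above; the card layout and user choices are hard-coded for illustration, whereas the real game takes them from the therapist's map and from pointer collisions on the artifact holes:

    #include <cstdio>

    // Two chosen cards are compared by their sound/image id; matched pairs are
    // counted until the map is complete, mirroring steps 3-5 of the game flow.
    int main() {
        int card[6]   = {0, 1, 2, 0, 1, 2};   // pair ids laid out on the covered map
        bool open[6]  = {};
        int pairsLeft = 3;

        int picks[][2] = {{0, 3}, {1, 2}, {1, 4}, {2, 5}};  // simulated user choices
        for (auto& p : picks) {
            int a = p[0], b = p[1];
            if (open[a] || open[b]) continue;           // already matched
            if (card[a] == card[b]) {                   // sounds/images match
                open[a] = open[b] = true;
                if (--pairsLeft == 0) { std::printf("map complete\n"); break; }
                std::printf("pair found\n");
            } else {
                std::printf("mistake sound; cards closed\n");  // no match: back to step 4
            }
        }
    }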

5.3. dGames-inclusive AR pong

This project presents a solution to exercise the spatial association of a 3D audio stimulus with its corresponding motor feedback. It was inspired by the ping-pong game, using a low cost and easily built artifact enhanced with an AR layer provided by the basAR authoring tool. In this application, a blind person can play against the computer or against another player, who is not necessarily blind as well. The game can have a therapeutic purpose and, in this case, the therapist can set exercise sequences to evaluate the patient.

Figure 22a shows the Styrofoam artifact, and Figure 22b shows it with the AR layer, from the camera view. Figure 22c shows a therapeutic setup, where, on the right artifact (the therapist's artifact), the therapist can set the sequences and the speed in the top three spaces of the artifact grid.

Figure 22.

(a) Artifact design; (b) artifact with AR layer; (c) application setup.

The AR software layer provides 3D audio placement, as shown in Figure 23a: the horizontal placement is performed by the stereo balance; the vertical placement is performed by frequency modulation, in which a higher pitch indicates a greater height and a lower pitch a lesser height; the depth placement is associated with the volume, in which higher volumes indicate that the object is nearer to the user. Figure 23b shows how the ball speed is controlled: five stages, with four time intervals, control the ball speed; that is, long time intervals decrease the speed, and the time intervals are shortened to get a fast speed.

Figure 23.

(a) Sound placement in 3D space; (b) adjusting the time intervals to control the ball speed.

The inclusion of 3D placement in the artifact enables the augmented reality properties. In this sense, each artifact cell has a depth placement (Figure 24a), to create each cell's ball movement. The vertical and horizontal placements are interlaced to create nine possible combinations of pan and pitch (Figure 24b), so that the user can find the correct cell by its sound.

Figure 24.

(a) Depth placement; (b) sound behavior.
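The mapping just described (stereo balance for horizontal position, pitch for height, volume for depth) can be made concrete with a small C++ sketch; the numeric ranges are illustrative assumptions, not values from the dGames implementation:

    #include <algorithm>
    #include <cstdio>

    // Map a normalized cell position (x, y, z in [0, 1]) to an audio cue:
    // pan encodes the horizontal position, pitch the height, gain the depth.
    struct AudioCue { float pan, pitchScale, gain; };

    AudioCue place(float x, float y, float z) {
        AudioCue c;
        c.pan        = x * 2.0f - 1.0f;           // 0..1 -> left(-1)..right(+1)
        c.pitchScale = 0.5f + y;                  // higher cell -> higher pitch
        c.gain       = 1.0f - std::min(z, 1.0f);  // nearer object -> louder
        return c;
    }

    int main() {
        // Nine cells: three pans x three pitches, as in Figure 24b.
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col) {
                AudioCue c = place(col / 2.0f, row / 2.0f, 0.3f);
                std::printf("cell(%d,%d): pan=%.1f pitch=%.2f gain=%.1f\n",
                            row, col, c.pan, c.pitchScale, c.gain);
            }
    }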

6. Conclusion

In this chapter, we presented the concepts and technology related to augmented reality applications, as well as authoring tools and developed applications in the areas of education and cognitive rehabilitation.

When using augmented reality systems based on multiple points instead of multiple markers, we noticed that the collision of the interaction device with the virtual points was hard to accomplish, due to the spatial positioning. To solve this problem, an augmented reality artifact was created, placing the virtual points over real points on a physical structure. The augmented reality artifacts solved the problem of spatial collision at multiple depths. This allowed the development of several augmented reality applications for educational and cognitive rehabilitation purposes, since an artifact empowered with smart augmented reality reactions can provide significant support to students and patients, at low cost.

The fast prototyping of the application solutions with Styrofoam, cardboard, and other easily found materials allows the creation of artifacts enriched with the augmented reality layer, which can be easily distributed and used. Even with its weaknesses, the artifact seems to be a very interesting option. Evaluation tests confirmed important strengths of the artifacts, such as low cost, availability, user-friendly interfaces, multi-sensory and tangible interaction, non-demanding dexterity, etc.

The proposed authoring tools have distinct characteristics with respect to other approaches, which are based on marker-relation behavior, as we use action point interactions. This option allows the use of a minimal number of markers, instead of a pack of markers, to drive an application.

The authors believe that augmented reality artifacts have high potential to be applied in educational and cognitive rehabilitation applications, due to the specific potentiality provided by the augmented reality and the three-dimensional artifact features.

As future work, we are evolving the authoring tools, aiming at generating more powerful and user-friendly versions and exploring the integration of web applications with online augmented reality applications implemented with the FLARAS authoring tool. Besides, we are developing cognitive and motor rehabilitation games using the basAR authoring tool, and studying the use of cross-reality in innovative applications that could effectively contribute to the educational and rehabilitation areas.

Acknowledgement

This research was partially funded by Brazilian Agencies CNPq (Grants #558842/2009-7 and #559912/2010-2) and FAPEMIG (Grant #APQ-03643-10).

References

  1. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Computer Graphics and Applications 21(6), 34-47. URL: http://dx.doi.org/10.1109/38.963459
  2. Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6(4), 355-385.
  3. Bang, M., Timpka, T. (2003). Cognitive tools in medical teamwork: the spatial arrangement of patient records. Methods of Information in Medicine 42(4), 331-336.
  4. Beato, N., Mapes, D. P., Hughes, C. E., Fidopiastis, C., Smith, E. (2009). Evaluating the potential of cognitive rehabilitation with mixed reality. Proceedings of the 3rd International Conference on Virtual and Mixed Reality (VMR '09), Springer-Verlag, Berlin, Heidelberg, 522-531.
  5. Bimber, O., Raskar, R. (2004). Spatial Augmented Reality: Merging Real and Virtual Worlds, 1st edn, A K Peters, Wellesley, Massachusetts, USA.
  6. Burdea, G. C., Coiffet, P. (2003). Virtual Reality Technology, John Wiley & Sons, Inc., New York, NY, USA.
  7. Cardoso, A., Lamonier Jr., E. (2009). AR and VR Educational and Training Applications (Aplicações de RV e RA na Educação e Treinamento), in Aplicações de Realidade Virtual e Aumentada, SBC, 29-54.
  8. Cerqueira, C. S., Kirner, C. (2012a). basAR, online. URL: http://www.cscerqueira.com/basar
  9. Cerqueira, C. S., Kirner, C. (2012b). Geometry learning, online. URL: http://www.cscerqueira.com/basar/projects/005_geometry/
  10. Chen, S. J. (2007). Instructional Design Strategies for Intensive Online Courses: An Objectivist-Constructivist Blended Approach. Journal of Interactive Online Learning 6(1).
  11. Dias, J. M. S., Monteiro, L., Santos, P., Silvestre, R., Bastos, R. (2003). Developing and authoring mixed reality with MX toolkit. IEE Review, 18-26.
  12. Grasielle, A., Correa, D., Assis, G. A. D., Nascimento, M. (2007). GenVirtual: An augmented reality musical game for cognitive and motor rehabilitation. 2007 Virtual Rehabilitation, 1-6. URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4362120
  13. Grimm, P., Haller, M., Reimann, C., Paelke, V., Zauner, J. (2002). AMIRE - authoring mixed reality. URL: http://www.amire.net/
  14. Hamilton, K. E. (2011). Augmented reality in education, online.
  15. Hampshire, A., Seichter, H., Grasset, R., Billinghurst, M. (2006). Augmented reality authoring: generic context from programmer to designer. Proceedings of the 18th Australia Conference on Computer-Human Interaction (OZCHI '06), ACM, New York, NY, USA, 409-412. URL: http://doi.acm.org/10.1145/1228175.1228259
  16. Kato, H., Billinghurst, M. (1999). Marker tracking and HMD calibration for a video-based augmented reality conferencing system. Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR '99), IEEE Computer Society, Washington, DC, USA. URL: http://dl.acm.org/citation.cfm?id=857202.858134
  17. Kim, M., Gak, H. J., Pyo, C. S. (2009). Practical RFID + sensor convergence toward context-aware X-reality. Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human (ICIS '09), ACM, New York, NY, USA, 1049-1055. URL: http://doi.acm.org/10.1145/1655925.1656115
  18. Kirner, C. (2011a). ARAS-NP: Augmented reality authoring system for non-programmers, online. URL: http://www.ckirner.com/sacra
  19. Kirner, C. (2011b). Prototipagem Rápida de Aplicações Interativas de Realidade Aumentada (Rapid Prototyping of Interactive Augmented Reality Applications), in Tendências e Técnicas em Realidade Virtual e Aumentada, SBC.
  20. Kirner, C., Kirner, T. G. (2011a). Development of an educational spatial game using an augmented reality authoring tool. International Journal of Computer Information Systems and Industrial Management Applications 3, MIR Labs, 602-611.
  21. Kirner, C., Kirner, T. G. (2011b). Development of an interactive artifact for cognitive rehabilitation based on augmented reality. 2011 International Conference on Virtual Rehabilitation (ICVR), 1-7.
  22. Kirner, C., Kirner, T. G. (2012). Artifact-AR, online. URL: http://www.ckirner.com/ar/artifact-ar/
  23. Kirner, C., Kirner, T. G., Mataya, R. S., Valente, J. A. (2010). Using augmented reality to support the understanding of three-dimensional concepts by blind people. In P. M. Sharkey (ed.), Proc. 8th Intl Conf. on Disability, Virtual Reality and Assoc. Technologies, 41-50.
  24. Ledermann, F., Schmalstieg, D. (2005). APRIL: a high-level framework for creating augmented reality presentations. Proceedings of the 2005 IEEE Conference on Virtual Reality (VR '05), IEEE Computer Society, Washington, DC, USA, 187-194. URL: http://dx.doi.org/10.1109/VR.2005.8
  25. Lopresti, E. F., Mihailidis, A., Kirsch, N. (2004). Assistive technology for cognitive rehabilitation: State of the art. Neuropsychological Rehabilitation 14, 5-39. URL: http://www.tandfonline.com/doi/abs/10.1080/09602010343000101
  26. MacIntyre, B., Gandy, M., Dow, S., Bolter, J. D. (2004). DART: a toolkit for rapid design exploration of augmented reality experiences. Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST '04), ACM, New York, NY, USA, 197-206. URL: http://doi.acm.org/10.1145/1029632.1029669
  27. Manovich, L. (2006). The poetics of augmented space. Visual Communication 5(2), 219-240. URL: http://vcj.sagepub.com/cgi/content/abstract/5/2/219
  28. Martín-Gutiérrez, J., Luís Saorín, J., Contero, M., Alcañiz, M., Pérez-López, D. C., Ortega, M. (2010). Education: Design and validation of an augmented book for spatial abilities development in engineering students. Computers & Graphics 34(1), 77-91. URL: http://dx.doi.org/10.1016/j.cag.2009.11.003
  29. Matsuda, K. (2010). Domestic City: the dislocated home in augmented space. Master's thesis, UCL - London's Global University. URL: http://www.keiichimatsuda.com/thesis.php
  30. Morganti, F. (2004). Virtual interaction in cognitive neuropsychology. Studies in Health Technology and Informatics 99, 55-70. URL: http://view.ncbi.nlm.nih.gov/pubmed/15295146
  31. Norman, D. A. (1991). Cognitive artifacts, in Designing Interaction, Cambridge University Press, New York, NY, USA, 17-38. URL: http://dl.acm.org/citation.cfm?id=120352.120354
  32. Norman, D. A. (1992). Design principles for cognitive artifacts. Research in Engineering Design 4, 43-50. URL: http://dx.doi.org/10.1007/BF02032391
  33. Okawa, E. S., Kirner, T. G., Kirner, C. (2012). SpaceAR, online. URL: http://www.ckirner.com/sacra/aplica/sol-ra/
  34. Paradiso, J. A., Landay, J. A. (2009). Guest editors' introduction: Cross-reality environments. IEEE Pervasive Computing 8(3), 14-15. URL: http://dx.doi.org/10.1109/MPRV.2009.47
  35. Reilly, D., Tang, A., Wu, A., Echenique, A., Massey, J., Mathiasen, N., Mazalek, A., Edwards, W. K. (2011). Organic UIs and cross-reality spaces. Workshop on Organic User Interfaces, 23-26.
  36. Reis, F., Kirner, T. G., Kirner, C. (2012). GeoAR, online. URL: http://www.fernandamaria.com.br/geoar/
  37. Richard, E., Billaudeau, V., Richard, P., Gaudin, G. (2007). Augmented reality for rehabilitation of cognitive disabled children: A preliminary study. 2007 Virtual Rehabilitation, 102-108.
  38. Saúde, L. M. S., Kirner, T. G., Kirner, C. (2012). Perspective learning system with augmented reality (Sistema de Aprendizagem de Perspectiva com Realidade Aumentada), online. URL: http://ckirner.com/eventos/jornada2011/lara.html
  39. Seichter, H., Looser, J., Billinghurst, M. (2008). ComposAR: An intuitive tool for authoring AR applications. Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR '08), IEEE Computer Society, Washington, DC, USA, 177-178. URL: http://dx.doi.org/10.1109/ISMAR.2008.4637354
  40. Souza, R. C., Kirner, C. (2012). MiniLabEletroMag-RA, online. URL: http://ckirner.com/apoio/eletromag/
  41. Souza, R. C., Moreira, H. C. F., Kirner, C. (2011). FLARAS: Flash augmented reality authoring system, online. URL: http://www.ckirner.com/flaras
  42. Uchiyama, S., Takemoto, K., Satoh, K., Yamamoto, H., Tamura, H. (2002). MR platform: A basic body on which mixed reality applications are built. Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR '02), IEEE Computer Society, Washington, DC, USA, 246. URL: http://dl.acm.org/citation.cfm?id=850976.854992
  43. Yao, Y., Wu, D., Liu, Y. (2009). Collaborative education UI in augmented reality: from remote to local. Proceedings of the 2009 First International Workshop on Education Technology and Computer Science (ETCS '09), Vol. 2, IEEE Computer Society, Washington, DC, USA, 670-673. URL: http://dx.doi.org/10.1109/ETCS.2009.409
