
Computer and Information Science » Human-Computer Interaction » "The Thousand Faces of Virtual Reality", book edited by Cecilia Sik Lanyi, ISBN 978-953-51-1733-9, Published: November 26, 2014 under CC BY 3.0 license. © The Author(s).

Chapter 3

Augmented Reality – Where it Started from and Where It’s Going

By Veronika Szucs, Silvia Paxian and Cecília Sik Lanyi
DOI: 10.5772/59796
An unedited version of this chapter was published on 2014-11-26. As of 2014-12-08, the corrected version is posted online instead.



1. Introduction

This study provides an overview of augmented reality (AR) and some of its important and popular areas of application. Augmented reality technology integrates 3D virtual objects into a real 3D environment, in real time. This book chapter presents the areas of everyday life where AR can be used, including, but not limited to: medical informatics, manufacturing and repair, visualization, route planning, entertainment, military applications, marketing and education. The basic characteristics of AR systems, the compromises needed for their applicability, and optical and video mixing approaches are presented in the chapter. The chapter introduces the two main areas of sensor error, which are a basic problem in the design of efficient augmented reality systems, and summarizes how current devices are able to solve these problems. The expected future directions of AR technology development, and the areas where further research is needed, are also introduced.

1.1. Aims

In the course of preparing the study, the current state of augmented reality technologies has been reviewed. Questions associated with the differing scopes of application, the design and implementation problems of augmented reality systems, and their possible solutions have been delineated during the writing process. The chapter concludes with possible compromises for the questions and approaches that arose during the problem-solving process, and presents possible directions for future development that are suitable for further research.

The present study does not provide new research results. The information has been collected from different sources: journals, periodicals, Internet media, printed books, articles, conference presentations and essays. Thus, the chapter is an up-to-date literature review.

Several expository articles have been written on the topic [5, 11, 13, 16, 18, 19, 40, 54]. This literature sample is as comprehensive and up-to-date as possible. The study serves as an appropriate starting point for the selection and design of model-creating opportunities, and for potentially applicable technologies in the field of AR (before application development), with the help of which unique AR applications can be developed, based on an independent methodology.

In the “Definitions” section, we clarify what AR is, and summarize what is motivating AR technology developments.

Most of the AR systems currently available on the market affect various parts of our everyday life. Out of these, the most distinctive, popular and interesting applications have been selected for presentation within the confines of this chapter.

Questions of the feasibility of augmented reality based systems have been emphasized, with a focus on the most important aspects that are responsible for the success and applicability of the AR system.

Finally, a brief overview has been created of the areas that need further analysis and examination, and of the specific points of AR technology that need further research and development, in order to ensure that the area remains a priority for developers. As a result, the developers of entertainment and business applications will get a complex, efficient, fail-safe, highly customizable toolkit, with the help of which a system appropriate for all target groups can be developed.

The first AR interface was developed by Sutherland in the 1960s [60]; however, the first AR conference was organized much later, in 1998. This conference was the International Workshop on Augmented Reality '98 (IWAR 98), in San Francisco. Since then, research results have been continuously presented at the conferences of the International Symposium on Mixed Reality (ISMR) and the International Symposium on Augmented Reality (ISAR). Obviously, these conferences are not the only locations for presenting results, but they are the most renowned events, the premier conferences in the field of AR. Thus, the research tendencies reviewed through these conferences show an interesting historical development. The search for potential applications, the development of AR research and the formation of trends help to identify the research and development points that may be necessary in the future.

Azuma et al. summarized the research results about AR environments in a very broad way, as can be found in their work [2], and a summary of new results can be found in a 2001 article [3].

Since that time, several new research results have been published worldwide on this topic, and new improvements, innovations and developments have been presented at conferences.

To introduce the obtained results, the chapter will deal with some of these publications to show the most important topics, e.g., tracking, interaction and display technologies, as the main problems of development.

1.2. Definitions

Augmented reality (AR) is a technology which makes it possible to place computer-generated virtual pictures and objects, in real time, in a three-dimensional real space, as if they were parts of the actual space.

By contrast, in virtual reality (VR) environments, the users are completely immersed in the virtual environment.

AR makes it possible for the user to interact with the real environment through the virtual object. The definition of AR that is still in use today was described by Azuma [2]:

AR as a technology

  1. combines real and virtual objects,

  2. is real-time and interactive, and

  3. registers the virtual objects in the real world.

The abilities of AR, as a technology appropriate to the above described aims, can be used in many areas: from engineering tasks through entertainment and education, to marketing, advertising and multimedia content.

Augmented reality (AR) is a variation on the virtual environment (VE), or virtual reality as it is more commonly called. With the use of VE technologies, the user is placed in a completely artificial environment; while sensing this environment, he/she cannot see, hear or sense the real world surrounding him/her.

On the other hand, AR technologies make it possible for the user to be surrounded by virtual objects in the real world, incorporated into the real environment. Hereby, AR technologies augment reality rather than completely replacing it with virtual elements.

Ideally, virtual and real objects placed in the same space can achieve the same effects for the users. The differences between AR (a middle ground), VE (completely synthetic) and telepresence (completely real) were adequately defined in two of Milgram’s studies in 1994 [39, 40].

Some researchers describe AR in such a way that its application requires the use of head-mounted displays (HMDs). In order to avoid limiting AR to concrete technologies, Milgram et al. [40] used a survey to define the three characteristics that AR systems need to possess.

According to the results of Milgram’s survey, the necessary characteristics of AR technologies are the same as the characteristics previously defined by Azuma [2].

This definition makes it possible for AR to keep its most essential elements without being tied to particular technologies, e.g., HMD displays.

No part of Azuma’s or Milgram’s definition makes a distinction or offers reservations in connection with the display interface: monitor-based displays, monocular systems, transparent HMD systems and other technologies can all be applied, alone or in combination.

1.3. Motivation

Why is augmented reality such an interesting field? Why is it beneficial to combine real and virtual 3D objects?

The usage of augmented reality extends the user’s senses, perception and interaction in real life. Virtual objects mediate information that the user cannot perceive directly with his/her own senses, and can make perceptible events that cannot be experienced physically, or that would be dangerous to experience directly. The information given via virtual objects helps the user to accomplish tasks in the real world.

AR is a specific example of what Fred Brooks calls IA, intelligence amplification: the computer is a device for making the solution to people’s tasks much easier [12].

1.4. AR research areas

Research experience, studies and the assessment of existing technologies [2, 3] show that AR systems are effective at meeting expectations when the following components are developed at appropriate levels:

  1. Graphic rendering hardware and software, able to overlay virtual content on the real scene

  2. Adequate tracking techniques, to appropriately mirror changes in the user’s viewpoint in the rendered graphics

  3. Accurate tracker calibration and registration

  4. Synchronization of the real and virtual views, when the user’s view is fixed

  5. Displays that sufficiently combine the pictures of virtual objects with the appearance of the real components

  6. Computer processing hardware that supports the AR simulation and its input and output devices

  7. Interaction techniques that describe how the users can manipulate the virtual contents of AR.

Several secondary topics are important, depending on which concrete AR application is being examined: applicability, the adaptability of mobile/portable devices, visualization techniques, authoring tools, the multimodality of AR inputs, rendering methods, software architecture, etc., must be evaluated. The questions of hardware integration and software realization are taken into consideration during the development of complete AR applications.
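Components 1–4 above ultimately come down to one geometric operation: projecting a virtual 3D point into the image of a tracked camera, so that it appears anchored to the real scene. The sketch below illustrates this with a deliberately simplified pinhole model (the one-axis rotation, parameter names and numeric values are our own invented assumptions, not a pipeline from the cited surveys):

```python
import math

def project_point(point_w, cam_pos, yaw, f, cx, cy):
    """Project a 3D world point into pixel coordinates for a camera
    at cam_pos, rotated by `yaw` radians about the vertical axis,
    with focal length f (in pixels) and principal point (cx, cy)."""
    # World -> camera coordinates: translate, then rotate by -yaw.
    x = point_w[0] - cam_pos[0]
    y = point_w[1] - cam_pos[1]
    z = point_w[2] - cam_pos[2]
    xc = math.cos(-yaw) * x - math.sin(-yaw) * z
    zc = math.sin(-yaw) * x + math.cos(-yaw) * z
    yc = y
    if zc <= 0:
        return None  # the point is behind the camera
    # Pinhole projection onto the image plane.
    u = cx + f * xc / zc
    v = cy + f * yc / zc
    return (u, v)

# A virtual point 2 m straight ahead of a camera at the origin lands
# at the image centre of a 640x480 camera with f = 500 px.
print(project_point((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 0.0, 500.0, 320.0, 240.0))
# -> (320.0, 240.0)
```

Real systems perform the same projection with a full 3x3 rotation matrix and lens-distortion correction; the calibration and synchronization components exist precisely to keep the pose and parameters fed into this projection accurate.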

1.5. Review of AR research results

The growing demands for AR technologies, and the development of the technology, have appeared in several related research topics recently.

The published results can be divided into two groups. The first group includes the five current main research areas:

  1. Tracking techniques

  2. User interaction techniques

  3. Calibration

  4. AR applications

  5. Display techniques.

These cover the questions that need to be solved to realize AR applications in the basic AR fields.

The second category reflects future research plans:

  1. Evaluation/investigation

  2. Mobile/portable AR

  3. AR authoring

  4. Visualization

  5. AR multimodality

  6. Rendering

2. Aspects of AR development

2.1. Tracking techniques

2.1.1. Sensor-based tracking

The sensor-based tracking techniques are based on sensors, like magnetic, acoustic, inertial, optical and/or mechanical sensors.

Each sensor type has its own pros and cons. For example, magnetic sensors have a high update frequency and are lightweight; however, the presence of any kind of metal, which can disrupt the magnetic field, can distort the signal [53]. This has been discussed by Rolland in his article about sensor-based tracking techniques.

Sensor techniques have developed substantially recently.

Only a few publications touched on the topic of tracking with non-camera-based systems at the first IWAR 98 conference. One exception is the work of Newman et al. [44], which examines how ultrasound sensors can be used for large-area indoor tracking.

The researchers also investigated how different types of sensors could be combined, so that difficult tasks can be solved by dynamic sensor handling. Klinker et al. [35] described how the use of body-worn sensors can be combined with fixed, global sensors.

In further research, Newman et al. [43] extended this approach to larger sensor networks, which support transparent tracking and dynamic data fusion.

2.1.2. Vision-based tracking

Vision-based tracking techniques use different image-processing methods to estimate the relative pose of the camera and the real world [6]. This is the most active field of research into tracking techniques: more than 80% of the publications analyse possible methods of computer vision.

Stricker et al. [59] introduced a method for detecting 3D coordinates from the four corners of a square marker, while Park et al. [47] presented an algorithm which estimates the camera position from a known environment.

Since 2002, extensive investigations have made progress in the field of marker-based tracking; Zhang et al. [64] produced a publication in which they compared the more advanced approaches. Since then, no fundamentally new marker-based tracking system has been presented; however, some researchers have been investigating LED-based tracking techniques [42].
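The geometric core of such square-marker trackers is the homography that maps the marker's known corners onto the four corners detected in the camera image; the camera pose is then recovered from it. A minimal sketch of that estimation step, using a plain direct linear transform (the corner coordinates are invented examples, not data from the cited systems):

```python
def homography_from_corners(src, dst):
    """Solve for the 3x3 homography H (with h33 = 1) that maps each
    src corner (x, y) onto the corresponding dst corner (u, v)."""
    # Each correspondence contributes two linear equations in 8 unknowns.
    M = []
    for (x, y), (u, v) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -u * x, -u * y, u])
        M.append([0, 0, 0, x, y, 1, -v * x, -v * y, v])
    # Gaussian elimination with partial pivoting on the 8x9 system.
    n = 8
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def apply_homography(H, x, y):
    """Map a marker-plane point (x, y) into image coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A unit marker seen as a 200x200-pixel square at offset (100, 100):
H = homography_from_corners(
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(100, 100), (300, 100), (300, 300), (100, 300)])
u, v = apply_homography(H, 0.5, 0.5)   # the marker centre, approx. (200.0, 200.0)
```

Given H, any point of the marker plane can be located in the image, which is exactly what is needed to draw virtual content registered to the marker.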

Others have been studying non-square visual marker-based tracking techniques. Vogt et al. [61] designed circle-shaped marker groups with different parameters, where, e.g., the number of markers and the height and radius of the marker field can be customized, and only one camera is used.

This has been the most active area of computer vision-based tracking research.

The most recent trend amongst computer vision-based tracking techniques is the examination of model-based tracking methods. In these techniques, a model is used with the characteristics of a trackable object, which can be a CAD model, or even a 2D model that possesses the characteristics of the object.

The results of the first model-based tracking were introduced by Comport [14] in 2003. Since then, model-based tracking has become a determinant for vision-based tracking techniques [23, 14].

Wuest et al. [63] presented a real-time model-based tracking technique, an adaptive system, in which the robustness and efficiency was considerably improved.

During the creation of the models, pattern recognition is another beneficial function. Reitmayr and Drummond [51] have introduced a textured 3D model-based hybrid tracking system.

Along the same lines, Pressigout and Marchand [50] suggested a model-based hybrid monocular image-processing system that combined edge enhancement and pattern analysis to achieve more robust and accurate pose estimation.

2.1.3. Hybrid tracking technologies

In some AR applications, computer vision-based techniques cannot by themselves provide robust tracking; thus, hybrid methods needed to be created which combine several sensor technologies. As an example, Azuma et al. [4] suggested that developers use GPS-based tracking combined with computer vision-based or inertial sensors for the development of outdoor AR systems.

Aside from a few exceptions (e.g., [20]), the initial hybrid methods used markers [1, 32]. Afterwards, the increasing importance of robustness and the developing consensus led to the creation of “closed-loop” tracking methods combining inertial and computer vision-based technologies. In vision-based tracking [37], the synchronization is usually satisfactory and there is no drift, but the estimation is computationally demanding and occasional gross errors may appear. Besides this, abrupt movements can frequently cause tracking errors, the processing is time-consuming and the error correction may temporarily cost the real-time character of the processing.

Lang et al. [37, 49] achieved an effective complementary method through the combination of vision-based tracking methods and inertial sensors.

This system is quick and robust; it can be used for the estimation of quick movements and changes. Moreover, the position of objects can be retrospectively evaluated from the metric acceleration and rotation data, although some drift can appear because of the accumulated noise in the inertial systems.
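The complementary character of this combination - inertial data is fast but drifts, vision is slow but absolute - can be illustrated with a one-dimensional toy filter (the rates, gains and timings below are invented for illustration; this is not Lang et al.'s actual method):

```python
def fuse_step(angle, gyro_rate, dt, vision_angle=None, gain=0.1):
    """One fusion step: integrate the (fast but drifting) gyro rate,
    then pull the estimate toward the absolute vision angle whenever
    a (slow but drift-free) vision measurement arrives."""
    angle += gyro_rate * dt
    if vision_angle is not None:
        angle += gain * (vision_angle - angle)
    return angle

# A stationary user tracked with a gyro that has a 0.05 rad/s bias,
# sampled at 100 Hz; vision delivers the true angle (0.0) ~3x per second.
est = 0.0
for step in range(1000):
    vision = 0.0 if step % 30 == 0 else None
    est = fuse_step(est, 0.05, 0.01, vision)
# Pure integration would have drifted by 0.5 rad over these 10 s;
# the fused estimate stays bounded (well under 0.2 rad).
print(abs(est) < 0.2)  # -> True
```

The gyro keeps the estimate responsive between vision updates, while each vision measurement pulls the accumulated drift back toward zero.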

Foxlin [22] used an optical and inertial hybrid tracker in which a miniature MEMS (micro-electro-mechanical systems) sensor was used in the cockpit to track the movement of the helmet. The drift of the inertial sensors was corrected by inclinometers and a compass.

Two other studies [55, 56] describe a hybrid head-tracking method using a bird’s-eye-view camera and a gyroscope, with the help of which the number of parameters to be estimated could be reduced.

2.2. Interaction techniques and user interface

It may take some more time until AR technology becomes a ‘mature’ technology; until then, lots of technical and social questions (e.g., about the handling of possible limitations of users) are waiting to be answered. One of the important aspects is to create appropriate interaction techniques for the AR applications, which can make it possible for users to interact with the virtual contents in an intuitive way.

2.2.1. Tangible AR

AR joins the real and the virtual world; thus, besides real objects, there is an opportunity to use virtual objects. These objects are the building blocks of AR, and the physical manipulation of them enables a strongly intuitive connection, interaction with the virtual environment and virtual contents.

Previously, Ishii elaborated a conception for tangible user interfaces (TUIs), through which the user is able to manipulate the digital information through the physical object [31]. The tangible interfaces are significant, because physical contact can be formed with them; the objects used have familiar physical characteristics, limits and possibilities; they are easily applicable. (The possibilities refer to the used device’s physical characteristics, shape, surface, deformability and how the object can be applied [24, 45].)

The same TUI aspect can be used for the AR interfaces, in which the intuitive usability of the physical input devices can be combined with the opportunity for a virtual display provided by the augmented display techniques. This new type of interaction received (in a metaphoric way) the name of tangible augmented reality, and became the most frequently used AR input method among the developments of the last 15 years [33].

An adequate example that presents the efficiency of the tangible AR interface is the application named VOMAR, developed by Kato et al. [33]. In the application, the user uses a paddle supplied with markers, chooses the appropriate furniture, then arranges it in a virtual dining-room.


Figure 1.

Tangible UI


Figure 2.

VOMAR interface

Gupta’s universal media book [25] is a mixed reality interface: it offers the opportunity to get access to information through the display surface, which is a real physical book; the other necessary information will be displayed on it. The pages of the book were not marked with grid points; thus, the user’s interaction experience remains based on natural vision or touch.

Tangible AR interactions typically combine real-object input with gesture or voice control, usually leading to the use of multimodal interfaces.

The recognition and use of hand gestures is the most natural way of creating user interaction with the AR environment [38]. Irawati et al. [30] introduced an upgraded version of Kato’s VOMAR interface, which combined speech-based and (on the input surface) gesture-based input techniques with the help of time-based and semantic techniques.

The universal aim of the development of the new interaction techniques is to make the manipulation of AR contents as simple as the handling of real-world objects.

2.2.2. Collaborative AR

Although single-user AR applications had been investigated and developed for decades, the first collaborative AR applications started to be developed in the middle of the 1990s.

The Studierstube [57] and Shared Space [8] projects verified that AR is able to support remote and multi-location activities, even in cases where this would not be accomplishable in real life [52].

Under the direction of Billinghurst, a 3D CSCW (computer supported collaborative work) interface has been developed [7]; this was the Shared Space application.

The most recent research deals with how mobile AR platforms might be used for collaboration among people.

The Invisible Train project by Wagner [62] made it possible for up to eight users to play an AR train game at the same time on PDAs. Henrysson et al. [26] introduced the first face-to-face collaborative AR application on a mobile phone, AR Tennis. An experimental user study proved that the users preferred the AR game to non-AR games, and that the multi-sensory feedback enhanced the game experience.

2.3. Display techniques

According to the examination of display techniques, the display devices can be classified into three main categories:

  • Transparent HMD displays,

  • Projection-based displays and

  • Handheld displays.

2.3.1. Transparent HMD devices

The most frequently used devices are the transparent HMD devices. HMD enables the user to see the virtual objects projected onto the real world, for which different optical and video technical methods are used.

Optical see-through (OST) displays permit the user to see the real world with his/her own eyes, while the virtual objects are overlaid using optical combiners, holographic optical elements and reflection.

With video see-through (VST) displays, the user cannot see the outside world directly; he/she sees the real environment through a video camera feed, onto which the pictures of the virtual objects are likewise overlaid. As an advantage, in the case of VST-HMD displays there is greater concordance between the real and virtual views, thanks to the different available image-processing techniques, which adequately correct intensity and tint [34].

Bimber and Fröhlich [9] introduced a method in which the appearance of the virtual object is made more realistic by projecting correct shadows when it is in front of a real object; for this method, a projector-based lighting technique was used.

Olwal et al. [46] presented a new auto-stereoscopic OST system, where a transparent holographic optical element (HOE) was used for the division of views projected from two digital projectors. HOE can be built into different surfaces, and the users do not need to wear glasses. Thus, it provides flexibility with minimal interference.

As a VST-HMD device, State et al. [58] developed and constructed an orthoscopic HMD prototype from commercially available components, and optimized it with the help of a simulator. This VST-HMD device is suitable for certain medical AR tasks, and is likely the most refined VST-HMD that has ever been constructed.

Head-mounted projective displays (HMPDs) [28] are alternatives to HMD devices. In these devices, a pair of miniature projectors is mounted on the head. The pictures of the virtual objects are projected onto a light-reflecting material, and the user senses the reflected picture with his/her own eyes.

The main advantage of HMPD over HMD is that it can support a wide field of view (up to 90°); it facilitates the correction of optical distortions, making it possible to project undistorted pictures onto curved surfaces. Its disadvantage is that, in HMPD, the light has to pass optically through the display, which can cause a decline in luminance. The idea of HMPD was first published by Fergason [21], and more information about related work can be found in Hua’s [29] article.

2.3.2. Projector-based displays

Projector-based displays are a good option for AR applications, because the users do not need to wear extra devices.

Other researchers saw an opportunity in operating projectors and cameras at the same time [10, 15]; however, due to the opposing lighting requirements, these systems are difficult to realize. Ehnes et al. [17] upgraded Pinhanez’s [48] previous work, in which the virtual objects were projected directly onto real objects for display.

2.3.3. Handheld displays, mobile devices

Handheld displays are adequate alternatives to HMD and HMPD devices in AR applications, because they are easily accessible and highly mobile. Recently, various handheld devices have become available which can be used for mobile AR platforms: tablet PCs, ultra-mobile PCs and phones (mobile phones, smartphones and PDAs). Most of the earlier portable prototypes, like the Touring Machine [20] and MARS [27], were based on tablet PC, notebook or custom PC hardware, but they are usually heavy and hardly portable. Although these provided higher processing performance and better input opportunities than PDAs and mobile phones, they are much bulkier and more expensive.

Möhring et al. [41] introduced the first self-contained AR system running on a commercially available mobile phone. The current smartphones (with Android, iOS, Symbian or Windows 8 systems), with their built-in cameras, GPS, fast processors, dedicated graphics hardware and wireless network interfaces, are much more useful for realizing AR applications than their contemporaries of some years before.

3. AR applications in practice

3.1. Assembly, product support

The IKEA chain of stores[1] made its user manuals more interactive with the use of AR technology for the construction of its do-it-yourself furniture. With the use of marker-based technology, users get a comprehensive guide to the products and their assembly.


Figure 3.

IKEA Do-it-yourself support

3.2. Design

AR technologies are perfectly applicable in several design applications, assisting with the spatial arrangement of future objects and proportional scale design.

3.2.1. IKEA home planner[2] application

This is an AR application popularizing IKEA’s catalogue. It can help future customers to choose appropriate furniture from the current offers; then, with a webcam and the IKEA marker, they can virtually place it in their house, and can check and decide before the actual purchase.


Figure 4.

IKEA home planner

3.3. Repair technique, how-to

BMW, besides manufacturing cars, provides comprehensive service for them. To support the professional and punctual completion of these tasks, an AR application has appeared among BMW’s developments[3], which shows the mechanic the repair process step-by-step, from dismounting, through repair, up to the last phase of reassembly.

The application uses transparent HMD glasses as a display; the virtual contents are directly projected onto the real object.


Figure 5.

BMW step-by-step repair manual

3.4. Packing technique

Priority Mail’s Virtual Box Simulator[4] helps in the choice of appropriately sized packaging in a virtual environment. The application uses marker-based technology, which gives the opportunity to compare the different-sized boxes, displayed as virtual objects, with the things to be packed, thus helping the user choose appropriately sized packaging.


Figure 6.

Priority Mail Virtual Box

3.5. Science (Popular Science)

In the augmented reality application by Popular Science[5], with the help of markers, the user is able to access the virtual contents of the magazine.


Figure 7.

Popular Science Magazine

3.6. Advertising, marketing

Ford Ka interactive advertisement

The Ford company popularized its Ford Ka city car with an interactive AR application[6]. In the advertisement published in local newspapers, they placed a marker and a web address.

With the use of the marker and the webcam, the user can test how easy parking is with a Ford Ka: the user can drive the car with the arrows on the keyboard, then he/she can park the car and check the results.


Figure 8.

Ford Ka advertisement

MAX AR poster by Ogilvy & Mather (Ford advertisement)

In the advertisement campaign for the Ford C-Max[7] in Britain, the first 3D imaging technology using an AR environment appeared; although it used markers, the user interaction was based on natural movements and gestures.


Figure 9.

Ford advertisement

3.7. Product testing and purchase

RayBan - Virtual Mirror

One of the leading manufacturers on the sunglasses market is RayBan[8]. The company provides an opportunity for potential customers, through an AR application, to try on the different types, styles and colours of sunglasses, to help them choose the item most suitable for their personalities.

To use the application, a webcam is needed. The program places pilot point pairs, with the help of which it can set the size of the face. After these steps, the customer has to choose from the sunglasses on offer, and the program projects the chosen item as a virtual object onto the face.

Hair simulator, hairstyle probe

The hair simulator by HallNeoTech[9] uses AR technology to provide the user with more than 300 stored hairstyles and hair colours, from which to choose the most suitable one. The technology is marker-free, so the appropriate points of the face have to be set on the webcam display, according to which the program will fit the scale model of the chosen hair.


Figure 10.

Hair simulator from HallNeoTech


Fashionista[10] is a new device supporting public shopping. It combines the advantages of a fitting room with the possibilities and freedom of online shopping.


Figure 11.

Fashionista - shopping assist

The users of the program are able to virtually try on their chosen clothes, and they only need a webcam. They can choose directly from the right price category, and their favourites can immediately be shared on Facebook.

3.8. Language education, e-learning


With the help of the Word Lens AR application[11], the user is able to translate foreign-language titles and text into his/her own mother tongue with a mobile phone with a built-in camera and the available built-in dictionaries. The application is available for Android-based systems and iOS.


Figure 12.

World Lens

Augmented reality - the future of education (AraPacis)

The AraPacis webpage[12] graphically introduces the prospect of changing the quality of training and education with the use of AR technologies.

In the presentation, the user digitally manipulates, for his/her own needs, the contents of a book available in the library, using a transparent HMD device in the form of glasses. Based on the contents of the recognized book, extra information becomes available (in the form of virtual objects) that is only feasible with digital techniques: interactive enlarging of the viewed picture, storing the observed picture for later use, and displaying other related information projected onto the pages of the book, as if onto a real physical object.

The possibility of arbitrary interactive operations raises a question regarding digital rights management (DRM): how can DRM requirements be respected, and what freedom to create digital copies of the medium is granted in an AR-based world? This question needs further investigation.


Figure 13.

AR book

3.9. Tourism

AR travel guide

This augmented reality application, the Wikitude mobile travel guide [13], runs on a mobile phone and displays information based on the camera picture. It shows the user's location on a map, presents information as a list, or, in AR camera view, places the information onto objects in the real environment.


Figure 14.

AR travel guide
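Placing a label onto a real-world object in AR camera view comes down to comparing the GPS bearing of the point of interest with the device's compass heading. A minimal sketch follows; the field of view and screen width are assumed example values, not Wikitude's actual parameters:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the device to a point of
    interest, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, width_px=800):
    """Horizontal pixel position of a POI label, or None when the
    POI lies outside the camera's horizontal field of view."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed offset
    if abs(offset) > fov_deg / 2:
        return None
    return width_px / 2 + offset / fov_deg * width_px
```

Vertical placement works analogously from the device's pitch, and distance (from GPS) controls label size; the horizontal case above is the core of camera-view POI overlay.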

3.10. Virtual games

Red Bull game

A trailer for this AR-technology game, developed for iPhone, is available online.

The mobile application RedBull Augmented Racing is a car racing game in which users can create their own tracks besides the built-in ones. For advertising purposes, the tops of Red Bull energy drink cans must first be shown to the application through the mobile device’s camera: to create a track, the top surfaces of the cans are captured one after another. The application stores this route, and once the race starts, users have to drive the racing car along it.
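Once the can tops have been captured, the stored route is essentially an ordered list of waypoints. A hedged sketch of how such a track might be represented (the game's internal format is not published):

```python
import math

def track_length(waypoints, closed=True):
    """Length of a track built from captured can positions, given as
    (x, y) pairs in arbitrary units. A closed track loops back from
    the last can to the first, as a racing circuit does."""
    pts = list(waypoints)
    if closed:
        pts = pts + [pts[0]]  # close the loop back to the start
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
```

A real implementation would additionally smooth the polyline between waypoints, but the waypoint list itself is all the camera capture step needs to record.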

3.11. Company introduction, company presentation

Helge Lund, the director of the London office of the oil company Statoil [14], held a 201-minute-long presentation about the company’s results achieved so far and demonstrated its future plans. For this presentation he used a high-tech AR application, illustrating his speech with spectacular projected virtual objects. His aim was to convince his colleagues to use lifelike graphical models for the design tasks needed for the operation of the company, and to use augmented reality technologies during presentations. Thus, he could look forward to more transparent and more effective synergy between different company units.


Figure 15.

Company presentation

3.12. Digital art

The rapid spread of PC use has even affected different fields of art. In the area of audio-visual media, digital creations are becoming widespread, and the penetration of AR technologies has likewise appeared in the visual arts, where expressive examples can be found online.

In the Virtual Ribbons AR application [15], the user can create virtual ribbons with his or her hands, or with a mobile phone’s light, in front of a webcam; the movement of these ribbons can be controlled arbitrarily on the display.


Figure 16.

Virtual Ribbons
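A trailing ribbon of the kind described can be kept as a fixed-length buffer of recently tracked positions, with older segments drawn more transparently. A minimal sketch, in which the buffer size and the linear fade are assumptions rather than the application's documented behaviour:

```python
from collections import deque

class Ribbon:
    """Stores the last max_len tracked positions of a hand or light
    source seen by the webcam."""

    def __init__(self, max_len=50):
        self.points = deque(maxlen=max_len)

    def add(self, x, y):
        """Append the newest tracked position; the deque silently
        drops the oldest one when full."""
        self.points.append((x, y))

    def segments(self):
        """Drawable segments as (start, end, alpha); alpha rises
        linearly so the newest segment is fully opaque (1.0)."""
        pts = list(self.points)
        n = len(pts)
        return [(pts[i], pts[i + 1], (i + 1) / (n - 1))
                for i in range(n - 1)]
```

Each video frame, the renderer would call `add` with the newly detected position and redraw all `segments`, which produces the fading-tail effect without storing any history beyond the buffer.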

3.13. SixthSense technology

The most interesting AR applications in the near future will be those based on SixthSense technology. In developing these applications, the developers are searching for solutions to the question of how our knowledge about everyday objects can be used to interact efficiently with the digital world. In this field, Indian researchers have made great progress, and their work has been presented several times. The devices they use are simple: as markers, coloured rings worn on the fingers; as video input, any kind of video camera that can be hung around the neck or fixed on a cap; and as display, a small projector that projects the virtual objects directly onto real-world objects. The advantage of SixthSense technology is that it does not restrict the user’s real sight, no extra input device is needed, and it is a mobile technology that can be used anywhere.
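Tracking the coloured finger rings comes down to finding, in each video frame, the centroid of pixels near a known marker colour. The sketch below runs on a tiny synthetic frame standing in for camera input; the colour tolerance is an assumed value:

```python
def find_marker(frame, target, tol=30):
    """Centroid (x, y) of the pixels whose RGB value is within tol
    of the marker colour target, or None if no pixel matches.
    frame is a row-major list of rows of (r, g, b) tuples."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if all(abs(c - t) <= tol for c, t in zip(pixel, target)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# 3x3 synthetic frame: two red "ring" pixels on a black background.
BLACK, RED = (0, 0, 0), (255, 0, 0)
FRAME = [[BLACK, BLACK, BLACK],
         [BLACK, RED,   RED],
         [BLACK, BLACK, BLACK]]
```

Tracking one centroid per ring colour across frames yields the fingertip trajectories on which the gesture recognition is built.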

As an example, a video presentation shows how this technology can be used to take photos while walking, and the developers demonstrate how the system can be controlled through gesture recognition.


Figure 17.

SixthSense technology

3.14. Healthcare

The use of AR applications is becoming important in the field of healthcare as well. Several health informatics applications use visualization techniques that help in diagnostics and can even model a planned surgical intervention.

Mirracle – the magic mirror

Mirracle is an AR-technology “magic mirror”, which serves for the visual display of medical data.

A short presentation of the application can be seen online.


Figure 18.

Magic Mirror demo

The application uses the Microsoft Kinect sensor to display visual information based on the physical parameters of the user standing in front of it, as though it could see through the human body with a real X-ray. Based on its camera view, the Kinect sensor models and shows the skeletal structure, or shows photorealistic virtual sectional pictures similar to CT or MR tomograms.
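Anchoring a sectional image to the body can be done by interpolating between two tracked skeleton joints. The following simplified sketch assumes neck and pelvis joints and a linear interpolation; it illustrates how such an overlay could be placed, not Mirracle's published method:

```python
def slice_anchor(neck, pelvis, frac):
    """Screen coordinates for a sectional (CT/MR-like) overlay at a
    fractional height along the torso: frac = 0 at the neck joint,
    frac = 1 at the pelvis joint, linearly interpolated between the
    two tracked Kinect skeleton joints."""
    return tuple(n + frac * (p - n) for n, p in zip(neck, pelvis))
```

Because the joint positions are re-read every frame, the overlay follows the user as they move, which is what creates the see-through "magic mirror" illusion.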

The application is intended primarily for use in healthcare education.

4. Conclusion

The technological development of the last few years has reached a level where it is impossible to avoid running into 3D visual effects created and displayed by computers. Whether it is stereoscopic display at the cinema or constantly developing real-time computer graphics, these virtual worlds have become part of our daily life.

Technological development (in both the hardware and software sectors) gives everyone the opportunity to access 3D content in readily available forms. A range of suitable programs is available to the public, but their handling and usage is by no means trivial.

A new and dynamically developing branch of informatics is augmented reality (AR), which, besides its current applications, will prospectively come to play a significant role in the fields of advertising and marketing, education, distributed labour, healthcare and leisure applications. The development of AR technologies is bringing mobile platforms to prominence. Applications with primarily descriptive aims are becoming highly interactive, and efficient multimedia devices are appearing in numerous areas of life thanks to back-end support based on underlying artificial intelligence.




[1] -

[2] -

[3] -

[4] -

[5] -

[6] -

[7] -

[8] -

[9] -

[10] -

[11] -

[12] -

[13] -

[14] -

[15] -