Using Augmented Reality to Cognitively Facilitate Product Assembly Process

Written By

Lei Hou and Xiangyu Wang

Published: 01 January 2010

DOI: 10.5772/7129

From the Edited Volume

Augmented Reality

Edited by Soha Maad

1. Introduction

An assembly task is an activity of collecting parts / components and bringing them together through assembly operations so that they perform one or more primary functions. In practice, the part drawing still serves as the main means of assembly guidance. As an emerging and powerful technology, Augmented Reality (AR) integrates images of virtual objects into the real world. Owing to its characteristic features, AR is envisaged to offer great potential as an alternative to traditional means of guiding assembly tasks. This chapter examines the issues and discrepancies involved in the present practice of assembly tasks, recommends a novel utilization of AR animation technology in this area, and discusses the potential of using AR animation to guide product assembly tasks.

2. State-of-the-art review of visualization methods for product assembly

This section presents the issues and discrepancies involved in the present practice of assembly tasks, and compares the state of the art of two advanced visualization technologies as applied to the assembly process: Virtual Reality and Augmented Reality.

2.1. Traditional method for assembly

In practice, the assembly drawing (manual) still serves as the main means of assembly guidance (Laperriere & ElMaraghy, 1992). An assembly drawing delivers the holistic constructional knowledge of a machine and its separate components / parts, and it is an essential technical document for technicians when devising the assembly process, conducting the assembly task and evaluating the assembly result. A well-formulated assembly drawing should present at least four categories of assembly information: a set of visual percepts of the product components / parts, their parameters or dimensions, technical requirements for quality, installation and testing, and other auxiliary information.

Confined to a fixed-size two-dimensional (2D) drawing, the large quantity of information concerning product parts and components can be quite redundant, cumbersome and crowded, especially for comparatively complex assembly tasks. This traditionally makes fast information orientation and the understanding of complex assembly relations difficult (He et al., 1989). From the perspective of the technicians (e.g., designers / assemblers), a large number of subjective information retrieval behaviors and related mental processes must be added to the process of understanding the assembly information context, owing to the complex information flows involved in manual-based cognition. It is widely accepted that the capacity for selective information retrieval and filtering does not form until a long-term accumulation of assembly experience and expertise; sometimes it even requires additional targeted training (Croft et al., 1991). However, such manuals are not an easy way to train expert assemblers, especially for assembly processes that require problem-solving skills (Johnson-Laird, 1983). It often takes months or even years for a novice assembler to develop expert knowledge of highly complex assembly processes (Hoffman et al., 1998). In some cases even expert assemblers must constantly refer to the instructions in the assembly manual for infrequently performed or highly difficult procedures, not to mention novice assemblers.

Previous research has indicated that assembly under drawing guidance consumes a great deal of non-productive time (behaviors not related to workpieces) and that assembly based on 2D drawings has largely failed to consider cognitive issues comprehensively (Abe & Tsuji, 1987). In practice, the implementation of an assembly task consists of workpiece activities and non-workpiece activities. In each assembly step, while conducting a series of physical workpiece operations such as observing, grasping and installing, an assembler must also carry out several mental, manual-related processes such as comprehending, translating and retrieving the information context (Neumann & Majoros, 1998). Once the present step is finished, he or she resumes these physical and mental behaviors in the next assembly step. Neumann and Majoros (1998) claimed that information-related activities tend to be cognitive, whereas workpiece activities tend to be kinesthetic and psychomotor; the time allocated to each portion was measured by Towne (1985), who concluded that information-related activities (cognitive workload) accounted for 50 percent of the total task workload. Other researchers have indicated that the large number of switchovers between the physical (workpiece) and mental (manual) processes generally results in operational suspensions and attention transitions for novice assemblers (Shalin et al., 1996). Ott (1995) concluded that 45 percent of each technician's shift is actually spent finding and reading procedural and related information when assembling repaired hardware. Neumann and Majoros (1998) also found that individual technicians differ in how much time they devote to cognitive / informational chores, but differ little in how much time they devote to manual activities. Apart from the time consumed switching between physical and cognitive processes, mental fatigue can set in after prolonged exposure to the drawings, since information retrieval difficulty generally exists in complex assembly drawings, especially for novice assemblers (Gick & Holyoak, 1980). At the same time, incorrect assemblies become likely when novice assemblers continue to bear this mental pressure and sustained focus, since a successive cognitive process of handling manual information can trigger mental fatigue and negligence, which in turn cause erroneous assembly procedures. This is supported by evidence from the Aviation Safety Reporting System (a NASA-conducted method for airline personnel to report problems anonymously), which reported that 60 percent of errors were procedural errors, most of which were due to negligence in understanding drawings (Veinott & Kanki, 1995). Last but not least, task motivation under this traditional method is to some extent suppressed by the cognitive monotony and behavioral repetition between unilateral information retrieval and the corresponding operations (Locke, 1968).

2.2. Virtual reality in assembly

With the development of computer graphics, an entirely new technology, Virtual Reality (VR), has emerged. Described as a technology for which “the excitement to accomplishment ratio remains high” (Durlach & Mavor, 1995), VR is now rapidly outgrowing its computer-games image and finding applications in a variety of contexts and in fields as diverse as engineering (Brooks, 1999), design (Sherman & Craig, 2003), architecture (Calvin et al., 1993), medicine (Satava, 1995), education (Psotka, 1993), rescue (Bliss et al., 1997), the military (Hue et al., 1997) and others (Goldberg, 1994). The kernel of VR is computer simulation, which combines three-dimensional (3D) graphics, motion tracking technology and sensory feedback (Pahl & Beitz, 1997). Accordingly, VR attempts to replace the user’s perception of the surrounding world with computer-generated artificial 3D virtual environments (VEs). The use of VEs gives the user total control of both the stimulus situation and the nature and pattern of feedback, and also allows comprehensive monitoring of performance. To date, there are already many well-known applications of VR technology. One of the most famous was in NASA’s Hubble Space Telescope repair mission, where an immersive virtual environment was created by NASA to train telescope repair personnel (Veinott & Kanki, 1995). Another one came from the U.S. military, where a networked artificial virtual environment was aggressively pursued for the distributed simulation of integrated combat operations, mixing diverse topographic and climate elements in a series of complex scenarios that included both real and autonomous agents (Mastaglio & Callahan, 1995). Rose and his colleagues (2000) also found that, in addition, VEs had a considerable skill-transfer effect in training tasks: in their three experiments, trainee performance showed an extent of skill transfer from training in virtual environments to real-world post-training tasks equivalent to that from training on the real task.

The success of a variety of VR applications has also prompted attempts in the manufacturing industry since the 1980s. An initial trial of using VR technology instead of assembly drawings to guide assembly tasks won praise and further research interest, and nowadays this technology is commonly used in the product assembly area (Ritchie et al., 1999). Through it, product designers are able to create virtual prototypes of product accessories, modules and parts in VEs, where trial assembly tasks can be performed virtually so that varying task difficulties can be evaluated on the computer to support assemblers. At the same time, VR enables simulation and evaluation in the early assembly design stage, saving a large amount of the cost of real prototyping in a real environment (Zachmann, 1996). Worldwide, commercial CAD and virtual prototyping software such as Pro/E and IDEAL has been widely used to facilitate product assembly and aid product assembly design, so that product technicians are able to develop various product accessories, modules and parts with different functions and dimensions, and conveniently conduct product assembly guidance or design on computers (Pratt, 1995).

Nevertheless, despite its considerable assembly accuracy, this software-guided method also has defects. For instance, the VR-based method cannot provide a good understanding of the diverse interferences along the assembly path or of complex real assembly environments. Issues such as assembly task difficulty and assembly workload cannot easily be evaluated either. A further shortcoming is the limited level of “realism” experienced, due to the lack of rich sensory feedback (Wang & Dunston, 2006). The computer-generated virtual components cannot convey other channels of useful feedback, such as audio, tactile and force feedback, which normally exist in the real world. Accordingly, the lack of interaction between virtual and real entities greatly hampers the further development of VR for product assembly tasks (Brooks, 1996).

2.3. Augmented reality in assembly

To resolve this trade-off, researchers have opened up another promising alternative, Augmented Reality (AR), a more expansive form of VR (Milgram & Colquhoun, 1999). As an emerging and cutting-edge technology, AR integrates images of virtual objects into the real world. By inserting virtual simulated prototypes into the real environment and creating an augmented scene, AR can enhance a person’s perception of a virtual prototype with real entities. This gives the virtual world a better connection to the real world while maintaining its flexibility (Figure 1).

Figure 1.

A novel storytelling means using an AR scene (Zhou et al., 2008)

Defined as the combination of real and virtual scenes, this technology has been explored in areas such as maintenance (Feiner et al., 1993), manufacturing (Curtis et al., 1998), training (Boud, 1999), the battlefield (Urban, 1995; Metzger, 1993), medicine (Mellor, 1995; Uenohara & Kanade, 1995), 3D video conferencing (Regenbrecht et al., 2004), computer assisted instruction (CAI) (Tang et al., 2003), computer-aided surgery (Bajura et al., 1992), entertainment (Wagner & Pintaric, 2005) and so on. Some successful AR applications in industry are highlighted as follows. One is the integration of AR with manual gas-metal-arc welding. Since in traditional manual gas-metal-arc welding the welder has a very limited field of view through the dark cartridge used to protect the eyes from dangerous UV radiation, Park et al. (2007) applied AR registration technology to the welding helmet to aid the welding process by virtually presenting the outline of the objects to be welded. On this basis, they developed an AR-based welding helmet system, TEREBES (Tragbares Erweitertes Realitäts-System zur Beobachtung von Schweißprozessen), a wearable Augmented Reality system for the observation of welding processes. Through TEREBES, the limited real vision is enlarged by virtual vision and welders can better command overall welding performance with great ease (Figure 2). Another successful application of augmentation is in supporting factory layout planning. Volkswagen (1999) developed an Augmented Reality supported manufacturing-planning system in which a physically existing production environment can be superimposed with virtual planning objects, so that planning tasks can be validated without modeling the surrounding environment of the production site. By combining and superimposing the result of the ergonomic simulation process, planners can optimize the manual workplace without actually modeling it; production personnel can participate in this process, and various rearrangements can be benchmarked at the same time (Figure 3) (Doil et al., 2003). In addition, Augmented Reality has also been combined with x-ray vision to overcome discrepancies of pure AR registration, e.g., mismatch. Through this combination, users have realized an architectural anatomy: an augmentation that shows users portions of a building hidden behind architectural or structural finishes and allows them to see additional information about the hidden objects (Webster et al., 1996).

Figure 2.

Using AR in the welding process: a user interface of the welding helmet system (Hillers et al., 2004)

Figure 3.

Visualization of virtual machinery in a real plant environment (Doil et al., 2003)

Compared with VR, which entirely separates the real environment from the virtual environment, AR maintains a sense of presence inside the real world and balances the perception of the real and virtual worlds (Raghavan et al., 1999). Through AR, a technician can manipulate virtual components directly inside the real environment and identify potential interferences between the to-be-assembled objects and the existing objects in the real assembly environment. Therefore, in an AR environment, a user can interact not only with real environments but also with Augmented Environments (AEs), which are structured to offset the partial sensory loss that VR imposes on the user. Furthermore, to better support the feedback of augmentation, additional “non-situated” augmenting elements could be added to the assembly process, such as recorded voice, animation, replayed video, short tips and arrows, all of which could simultaneously guide assemblers through the entire procedural operation, release their tension and even notify them of an erroneous assembly sequence. The reality being perceived is thereby further augmented. Owing to these characteristic features, AR is envisaged to offer great potential in product assembly tasks, which will be discussed in the following sections.

In the past few years, various research efforts have focused on the product assembly area using Augmented Reality technology. To obtain optimized assembly sequences, Raghavan, Molineros and Sharma (1999) adopted AR as an interactive technique for assembly sequence evaluation and formulated an assembly planner and liaison graph; in their work they addressed the issue of automatically generating the most optimized product assembly sequence using AR. A similar study was conducted by Liverani, Amati and Caligiana (2004), in which a binary assembly tree (BAT) algorithm was developed together with a personal active assistant (PAA) system. The BAT in the PAA replaced the function of the liaison graph and shaped their own assembly sequence optimization method to aid product assembly design; an inline assembly database was also created as an attachment to the PAA system. Salonen and his colleagues (2007) used AR technology in the area of industrial product assembly and developed a multi-modality system based on an AR facility consisting of a head-mounted display (HMD), a marker-based software toolkit (ARToolKit), an image tracking camera, a web camera and a microphone. In their system they realized a graphical user interface in which three control methods, keyboard control, gesture control and speech control, were enabled to effectively support the assembly design of industrial products (Liverani et al., 2004). In addition, considering that the utilization of AR in product assembly design was based on marker registration technology, Kutulakos and Vallino (1998) pursued markerless registration in order to overcome the inconveniences of applying markers as the carrier in the assembly design process. Although a markerless AR system, the calibration-free system, was developed, it still could not overcome technical limitations such as radial camera distortion and perspective projection, which prevented its prevailing use. Last but not least, the utilization of AR technology has extended to the assembly guidance of a wide range of products, e.g., furniture assembly design (Zauner et al., 2003), toy assembly design (Tang et al., 2003) and so on (Yamada & Takata, 2007). Many past Augmented Reality developments were based on ARToolKit, a powerful agent for object registration. Via ARToolKit, the virtual images of product components can be registered onto predefined paper-based markers and then appear in the view of displays such as an HMD or computer screen via a marker-tracking camera.
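
To make the marker-based registration idea above more concrete, the following minimal Python sketch shows only the core mapping from detected markers to overlaid virtual components. It is an illustrative sketch: the MarkerObservation structure, the marker IDs and the COMPONENT_FOR_MARKER table are hypothetical assumptions introduced here, not ARToolKit data types or calls; a real system would obtain the 4x4 camera-from-marker transforms from the tracking library.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MarkerObservation:
        marker_id: int
        pose: np.ndarray          # 4x4 camera-from-marker transform (assumed given)

    # Hypothetical lookup: which virtual component model belongs to which marker.
    COMPONENT_FOR_MARKER = {7: "gear_housing", 12: "bolt_m8"}

    def overlay_components(observations):
        """For every detected marker, compute where its virtual component
        should be drawn in camera coordinates (the essence of registration)."""
        draw_list = []
        for obs in observations:
            name = COMPONENT_FOR_MARKER.get(obs.marker_id)
            if name is None:
                continue
            # The component model is authored in the marker's coordinate frame,
            # so its camera-space pose is simply the marker pose itself.
            draw_list.append((name, obs.pose))
        return draw_list

    if __name__ == "__main__":
        # One fake observation: marker 7 seen 0.5 m in front of the camera.
        pose = np.eye(4)
        pose[2, 3] = 0.5
        print(overlay_components([MarkerObservation(7, pose)]))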

Although these trials have achieved fruitful results, some issues in the assembly area remain unresolved. For instance, current trials have not thoroughly eliminated the assemblers’ cognitive workload when AR is used as an alternative to manual or VR guidance. A reasonable explanation is that, because the virtual images of to-be-assembled objects are registered onto pre-defined markers as static, individual images under current registration technology, the augmented clues are exhibited independently of one another and cognition is still not supported to the utmost. Accordingly, to acquire the sequential information context, such as assembly paths and fixation forms of components (augmented clues), assemblers still need an active cognitive retrieval before reorganizing these augmented clues in their minds. To address this long-standing and critical cognitive problem, the rest of this chapter introduces a more expanded form of traditional AR, AR with animation (rather than abandoning the mature AR technology), and argues for its unparalleled potential as an assembly guideline compared with manuals, VR and conventional AR.

3. Potentials

The following cognitive aspects are critically involved in the product assembly process. Augmented Reality is discussed with respect to each of them, regarding its great potential to facilitate the product assembly process.

3.1. Enhancement of information retrieval capacity

Personal information retrieval capacity differs greatly, since it depends on personal expertise rather than merely on the complexity of the assembly task itself. An effective retrieval capacity refers to a series of fast mental behaviors, i.e., searching for information, analyzing information, extracting information and so on. In fact, the level of personal retrieval capacity generally handicaps the transition from information novice to information expert (Neumann & Ott, 1995). To resolve this trade-off, the revolutionary alternative of AR animation guidance brings a qualitative change in the way information is retrieved: the subjective, one-sided information pick-up behavior is replaced by a mutual interaction between computer and person.

3.1.1. Information context formed in assembly task

To accomplish an assembly task, a series of visual percepts of components / parts is the first portion of the information context that needs to be retrieved. These percepts encapsulate the components / parts’ functionality, assembled relations and main structures, and are exhibited using proper, complete, clear and concise expression. Next come the parameters of each component / part, e.g., texture, material, color and weight. Another equally important part of the information context is the technical requirements of the product or its segments, including requirements for quality, assembly, testing, use and so on. These elements fit together, coordinate with each other and shape a sequential information context in the assembly task.

3.1.2. Relations in information context

To be specific, when it comes to organizing a coherent information context, the information context consists of the sub-assembly relations among the to-be-assembled components. These ultimately reduce to the parts fitting relations, i.e., the dimensional and functional fits between the contacting surfaces of assembled and to-be-assembled components. For example, a nut matches a bolt; thus, retrieving the diameter of a bolt can lead to a successful nut-and-bolt assembly. Likewise, a concave surface matches a convex one, so picking components that share the same contacting surfaces typically leads to a successful assembly. Accumulated expertise or experience also contributes to assembly relations, e.g., a rigid component tends to brace another rigid component, and one color generally corresponds to a proximate color for aesthetic reasons.
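
As a rough illustration of how such fitting relations could be encoded for retrieval, the Python sketch below checks the nut-and-bolt example by comparing complementary thread features and nominal diameters. The Part structure, the feature names and the tolerance value are hypothetical assumptions made for illustration, not part of any system described in this chapter.

    from dataclasses import dataclass

    @dataclass
    class Part:
        name: str
        feature: str        # e.g. "external_thread" or "internal_thread"
        diameter_mm: float

    def fits(a: Part, b: Part, tol: float = 0.05) -> bool:
        """A nut fits a bolt when the thread features are complementary
        and the nominal diameters agree within a tolerance."""
        complementary = {a.feature, b.feature} == {"external_thread", "internal_thread"}
        return complementary and abs(a.diameter_mm - b.diameter_mm) <= tol

    bolt = Part("bolt_m8", "external_thread", 8.0)
    nut = Part("nut_m8", "internal_thread", 8.0)
    # Retrieving the bolt diameter identifies the matching nut.
    print(fits(bolt, nut))   # True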

3.1.3. Context information retrieved using AR animation

The information context would be interrupted if clues were carelessly overlooked or mistakenly retrieved. AR animation provides a dynamic exhibition of a consistent information context via animation segments displayed at each assembly step. Observers can detect the existing dimensions of the real, positioned components as well as the registered dimensions attached to the virtual, to-be-positioned components through the see-through HMD. At the same time, the animation dynamically demonstrates the assembly process in the HMD by moving the virtual to-be-assembled objects towards the real assembled ones settled in their ideal positions. This enables technicians to mimic each assembly step and complete the real assembly operation with great ease. By demonstrating a series of virtual animation segments registered in the real environment, AR compensates for the mental and cognitive gaps caused by individual differences in information retrieval capacity and lowers the influence that task difficulty imposes on information retrieval. Consequently, it eases information retrieval and, at the same time, provides a synchronized means of implementation.

3.2. Synchronized assembly guidance

Synchronization is another characteristic feature of AR animation when used in place of the traditional manual in assembly tasks. For each assembly step, the AR animation scenario dynamically presents the coherent spatial position changes of components within an animation segment that can be triggered by the operators themselves. When an animation segment completes, the AR animation scenario turns into an augmented presentation tool in which the previously moving virtual components, together with their attached information, are registered or mapped statically at their final positions. The scenario is then temporarily suspended to wait for the next trigger from the assembler. During each interval (after the last bout of guidance has finished), the assembler has sufficient time to pick up the real to-be-assembled component from its resting place and position it at its destination under the guidance of the previous animation segment. Accordingly, the assembly operations and the augmented guidance proceed collaboratively, which embodies the parallel task mode characteristic of AR animation.
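
The parallel task mode described above can be pictured as a small two-state machine: an animation segment plays, then the scenario waits for the assembler's next trigger. The following Python sketch is a simplified, hypothetical model of that behavior; the class and method names are assumptions made for illustration and do not correspond to an actual implementation.

    from enum import Enum, auto

    class State(Enum):
        PLAYING = auto()    # animation segment of the current step is running
        WAITING = auto()    # segment finished; static augmented cues stay registered

    class AnimationGuide:
        """Plays one animation segment per assembly step, then pauses
        until the assembler triggers the next step."""
        def __init__(self, steps):
            self.steps = steps
            self.index = 0
            self.state = State.WAITING

        def trigger(self):
            # Assembler presses the 'play' control to start the next segment.
            if self.state is State.WAITING and self.index < len(self.steps):
                self.state = State.PLAYING
                print(f"Playing segment: {self.steps[self.index]}")

        def on_segment_finished(self):
            # Virtual component stays registered at its final position; wait.
            self.state = State.WAITING
            self.index += 1

    guide = AnimationGuide(["insert shaft", "fit bearing", "tighten nut"])
    guide.trigger()
    guide.on_segment_finished()   # assembler now performs the real step
    guide.trigger()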

3.3. Reduction of assembly error

One valuable research finding by Miller and Swain (1986) concerns the influence of working stress on task implementation: novices and experts are equally likely to err under low stress, but novices are more likely to err under high stress. In practice, when first facing an unfamiliar assembly task, an assembler typically begins by understanding the sub-assembly relations from the assembly drawing. However, lacking personal expertise and practical experience, a novice may spend a lot of time on the cognitive process ahead of the operation. When driven by practical constraints such as working efficiency and qualification rate, a novice assembler will sustain high mental stress and make mistakes in the information retrieval process and the assembly operation. In addition, a repeated step of information retrieval follows each wrong retrieval as a by-product of the previous assembly error, which further heightens the tension and worsens performance. To address these problems, animation on an AR platform better supports augmentation at the virtual-real interface, lowers the cognitive workload, enables synchronized guidance and releases working stress. To make the important dimensions distinctive, the virtual components can be intentionally rendered in different striking colors. According to the exemplar principle that altering the color of target objects does not influence performance unless the task requires encoding of color (Logan et al., 1996), the irrelevant dimensions become less distinguishable and the difficulty of cognition is reduced without harming task performance. On the other hand, since improvements in performance are frequently due to reduced processing of irrelevant stimuli (Haider & Frensch, 1996), while the important dimensions are registered in the AR animation system, the less important counterparts that might interrupt perceptual attention can be omitted from registration. Last but not least, as empirical work has indicated that experts in a field have several differentiated categories where a novice has only a single one (Tanaka & Taylor, 1991), the differentiation of important and less important information context can be aided through the AR animation scenario using such selective dimension rendering. The outcome is that the mental negligence present in retrieval behaviors would seldom occur, task performance would rise, and assembly errors would likely be reduced.
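
A minimal sketch of the selective dimension rendering mentioned above is given below in Python: only the dimensions flagged as important for the current step receive a striking color, while the rest are muted so they attract less attention. The color values and dimension names are hypothetical choices for illustration.

    # Only dimensions flagged as important for the current step get a striking
    # colour; the rest are rendered in a muted tone.
    STRIKING = (255, 64, 0)
    MUTED = (128, 128, 128)

    def dimension_colours(dimensions, important):
        """Map each dimension name to the RGB colour it should be rendered in."""
        return {name: (STRIKING if name in important else MUTED)
                for name in dimensions}

    dims = ["bore_diameter", "flange_thickness", "chamfer_angle"]
    print(dimension_colours(dims, important={"bore_diameter"}))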

3.4. Stimulation of motivation

The fun of the interactive experience in AR animation might stimulate task motivation. As Chignell and Waterworth (1997) stated, multimedia can produce a rich sensory experience that not only conveys information but also increases the motivation and interest of its operator or viewer. We believe that AR animation is a good medium for increasing motivation by enriching the manifestation of an abundant information context, by offering a lifelike assembly guidance environment and by enabling interactive operation for assemblers. On one hand, the dimensions of the registered virtual components can be designed with noticeable manifestations, i.e., color and font, and graphical arrows can be added with the aim of reinforcing the assemblers’ focus and improving discrimination from the surrounding environment. On the other hand, environmental augmented elements such as lighting and object shadows can be conveniently rendered via OSGART as part of the real environment. Since improvements in interactivity contribute to the enhancement of assembly motivation (Neumann & Majoros, 1998), functional buttons can be added to the augmented interface, such as play / back buttons (to trigger or repeat an animation segment), a vocal button (to turn vocal hints on or off) and so on. This way, users will readily try such a novelty.

3.5. Improvement of spatial cognition and reduction of cognitive workload

To decrease information-related activities (cognitive workload), the relationship among a virtual object, a spatial location and spatial cognition has been investigated by numerous researchers. Anderson (1980) concluded that putting and arranging imagery-related spatial objects (positioning and changing the spatial layout of virtually rendered objects) draws on a particular human ability of spatial-physical cognition. Repetitive encounters with a space allow people, usually without any conscious effort and probably as an adjunct to attention, to build up an enduring internal representation or "cognitive map" of the space (Thorndyke, 1980). Neumann and Majoros (1998) also disclosed that people like to know where information is, and that information shown spatially underpins attention to that information patch. As far as attention is concerned, by incorporating virtual objects into real-world scenes, the objects become part of the world scene and become almost spatially defined entities just as other actual elements do. Much research has provided evidence that it is the nature of attention to work spatially. By combining virtual objects with the real context, cues concerning properties that the objects do not possess independently of the real context can also be added to the registered objects.

With an AR animation platform, the 3D components / parts become embodied in the real assembly environment. Furthermore, thanks to the feedback of other “non-situated” augmenting elements such as recorded voice, animation, replayed video, short tips and arrows, AR can simultaneously guide the assembly designers through the entire assembly operation, release their tension, even notify them of an erroneous assembly and, more importantly, ease the assemblers’ spatial cognition. This can be achieved with virtual prototyping technology (Pratt, 1995), which supports virtual construction on the computer using commercial software. Therefore, the above reviews and discussions provide a theoretically solid foundation for the conclusion that, compared with the assembly drawing and VR, AR animation could be a plausible and effective choice for enhancing spatial cognition and reducing cognitive workload in assembly task guidance. The augmentation functionality can be realized with ARToolKit and OSGART, which enable the static and dynamic registration of graphical virtual objects and their assembly paths on pre-defined markers in the real environment. In this way, an immersive augmentation between the real and virtual interface is constructed in which assemblers are able to conduct the real assembly task while observing a series of immersive virtual assembly processes. By realizing all these possibilities in 3D space, anyone is able to perform complex assembly work independently.
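
As a conceptual sketch of the dynamic registration of an assembly path on a marker, the Python fragment below interpolates a virtual component's position (expressed in the marker's coordinate frame) between its resting and assembled poses and composes it with the camera-from-marker transform, so the moving component stays locked to the marker. The function names, coordinates and linear interpolation are simplifying assumptions, not the actual ARToolKit / OSGART API.

    import numpy as np

    def interpolate_path(start, goal, t):
        """Linearly interpolate the component's position in the marker frame;
        t runs from 0 to 1 over one animation segment."""
        return (1.0 - t) * np.asarray(start) + t * np.asarray(goal)

    def animated_pose_in_camera(camera_from_marker, start, goal, t):
        # Compose the marker registration (camera_from_marker) with the
        # animated offset so the moving component stays locked to the marker.
        offset = np.eye(4)
        offset[:3, 3] = interpolate_path(start, goal, t)
        return camera_from_marker @ offset

    camera_from_marker = np.eye(4)   # placeholder transform from the tracker
    pose = animated_pose_in_camera(camera_from_marker,
                                   start=[0.20, 0.0, 0.05],  # resting position (m)
                                   goal=[0.0, 0.0, 0.0],     # assembled position
                                   t=0.5)
    print(pose[:3, 3])               # halfway along the assembly path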

4. Summary

This chapter has examined the issues and discrepancies involved in the present practice of assembly tasks, recommended a novel utilization of AR animation technology in this area, and discussed the potential of using AR animation to guide product assembly tasks. This potential includes an unparalleled information retrieval technique, just-in-time assembly guidance, reduction of assembly errors, low cognitive workload, high skill transfer, improved task motivation and so on. On the basis of this great potential, AR animation could facilitate product assembly from a cognitive perspective by lowering assemblers’ cognitive workload.

References

  1. Abe, N. & Tsuji, S. (1987). Toward understanding of an instruction manual in mechanical assemblies. Proceedings of IEEE Trans. on Robotics and Automation, 1413-1418.
  2. Anderson, J. R. (1980). Cognitive Psychology and Its Implications. San Francisco: W.H. Freeman.
  3. Bajura, M., Fuchs, H. & Ohbuchi, R. (1992). Merging virtual reality with the real world: seeing ultrasound imagery within the patient. IEEE Computer Graphics, 26(2), 203-210.
  4. Bliss, J. P., Tidwell, P. D. & Guest, M. A. (1997). The effectiveness of virtual reality for administering spatial navigation training to fire-fighters. Presence: Teleoperators and Virtual Environments, 6, 73-86.
  5. Boud, C. (1999). Virtual reality and augmented reality as a training tool for assembly tasks. Proceedings of the 1999 International Conference on Information Visualization, p. 32.
  6. Brooks, F. P., Jr. (1996). The computer scientist as toolsmith two. Communications of the ACM, 39(3), 61-68.
  7. Calvin, J., Dickens, A., Gaines, B., Metzger, P., Miller, D. & Owen, D. (1993). The SIMNET virtual world architecture. IEEE Virtual Reality Annual International Symposium, 450-455.
  8. Chignell, M. H. & Waterworth, J. A. (1997). Multimedia. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (2nd ed., pp. 1808-1861). New York: John Wiley.
  9. Croft, W., Turtle, H. R. & Lewis, D. D. (1991). The use of phrases and structured queries in information retrieval. Proceedings of the 14th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 32-45, Chicago, Illinois, United States.
  10. Curtis, D., Mizell, D., Gruenbaum, D. & Janin, A. (1998). Several devils in the detail: making an AR application work in the airplane factory. Proceedings of the International Workshop on Augmented Reality '98.
  11. Doil, F., Schreiber, W., Alt, T. & Patron, C. (2003). Augmented reality for manufacturing planning. Proceedings of the Workshop on Virtual Environments, Zurich, Switzerland, ACM, New York, 71-76.
  12. Durlach, N. I. & Mavor, A. S. (1995). Virtual Reality: Scientific and Technological Challenges. National Academy Press.
  13. Feiner, S., MacIntyre, B. & Seligmann, D. (1993). Knowledge-based augmented reality. In ACM (Ed.), Special issue on computer augmented environments: back to the real world, 53-62. NY, USA: ACM Publishing.
  14. Brooks, F. P. (1999). What is real about virtual reality. IEEE Computer Graphics and Applications, 19, 16-27.
  15. Gick, M. & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306-355.
  16. Goldberg, S. (1994). Training dismounted soldiers in a distributed interactive virtual environment. US Army Research Institute Newsletter, 14(April), 9-12.
  17. Haider, H. & Frensch, P. A. (1996). The role of information reduction in skill acquisition. Cognitive Psychology, 30, 304-337.
  18. He, S., Abe, N. & Kitahashi, T. (1989). Understanding illustrative diagrams in an assembly manual. Proceedings of the IEEE International Workshop on Industrial Applications of Machine Intelligence and Vision, 133-138, Tokyo, Japan.
  19. Hoffman, R. R., Crandall, B. & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: a case study in the methodology of cognitive task analysis. Human Factors, 254-277.
  20. Hue, P., Delannay, B. & Berland, J. C. (1997). Virtual reality training simulator for long time flight. In R. J. Seidel & P. R. Chantelier (Eds.), Virtual Reality, Training's Future?, 69-76. New York.
  21. Johnson-Laird, P. N. (1983). Mental Models. Cambridge: Cambridge University Press.
  22. Kutulakos, K. & Vallino, J. (1998). Calibration-free augmented reality. IEEE Transactions on Visualization and Computer Graphics.
  23. Laperriere, L. & ElMaraghy, H. A. (1992). Planning of products assembly and disassembly. Annals of the CIRP, 41, 5.
  24. Liverani, A., Amati, G. & Caligiana, G. (2004). A CAD-augmented reality integrated environment for assembly sequence check and interactive validation. Concurrent Engineering, 12(1), 67-77.
  25. Locke, E. A. (1968). Toward a theory of task motivation and incentives. Organizational Behavior & Human Performance, 3(2), 157-189.
  26. Logan, G. D., Taylor, S. E. & Etherton, J. L. (1996). Attention in the acquisition and expression of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 620-638.
  27. Mastaglio, T. W. & Callahan, R. (1995). A large-scale complex virtual environment for team training. Computer, 28(7), 49-56.
  28. Mellor, J. P. (1995). Enhanced reality visualization in a surgical environment (Tech. Rep.). MIT Artificial Intelligence Laboratory.
  29. Metzger, P. J. (1993). Adding reality to the virtual. Paper presented at the Virtual Reality Annual Symposium, WA, USA.
  30. Milgram, P. & Colquhoun, H. (1999). A taxonomy of real and virtual world display integration. In Y. Ohta & H. Tamura (Eds.), Mixed Reality: Merging Real and Virtual Worlds, 5-30. Ohmsha Ltd and Springer-Verlag.
  31. Miller, D. & Swain, A. (1986). Human error and human reliability. In G. Salvendy (Ed.), Handbook of Human Factors, 219-252. NY: John Wiley.
  32. Neumann, U. & Majoros, A. (1998). Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance. Proceedings of the IEEE Virtual Reality Annual International Symposium, 4-11, Los Alamitos, CA.
  33. Ott, J. (1995). Maintenance executives seek greater efficiency. Aviation Week and Space Technology, 142(20), 43-44.
  34. Pahl, G. & Beitz, W. (1997). Methoden und Anwendungen (4. Auflage). Berlin: Springer.
  35. Park, M., Schmidt, C. & Luczak, H. (2007). Design and evaluation of an augmented reality welding helmet. Human Factors and Ergonomics in Manufacturing, 17(4), 317-330.
  36. Pratt, M. J. (1995). Virtual prototypes and product models in mechanical engineering. In J. Rix, S. Haas & J. Teixeira (Eds.), Virtual Prototyping: Virtual Environments and the Product Design Process (Chapter 10, pp. 113-128).
  37. Psotka, J. (1993). Immersive training systems: virtual reality and education and training. Instructional Science, 23, 405-431.
  38. Regenbrecht, H., Lum, T., Kohler, P., Ott, C., Wagner, M., Wilke, W. & Mueller, E. (2004). Using augmented virtuality for remote collaboration. Presence: Teleoperators & Virtual Environments, 13(3), 338-354.
  39. Ritchie, J. M., Dewar, R. G. & Simmons, J. E. L. (1999). The generation and practical use of plans for manual assembly using immersive virtual reality. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 213, 461-474.
  40. Rose, F. D., Attree, E. A., Brooks, B. M., Parslow, D. M., Penn, P. R. & Ambihaipahan, N. (2000). Training in virtual environments: transfer to real world tasks and equivalence to real task training. Ergonomics, 43(4), 494-511.
  41. Salonen, T., Sääski, J. & Hakkarainen, M. (2007). Demonstration of assembly work using augmented reality. Proceedings of the 6th ACM International Conference on Image and Video Retrieval, 120-123.
  42. Satava, R. M. (1995). Medical applications of virtual reality. Journal of Medical Systems, 275-280.
  43. Shalin, V. L., Prabhu, G. V. & Helander, M. G. (1996). A cognitive perspective on manual assembly. Ergonomics, 39(1), 108-127.
  44. Sherman, W. R. & Craig, A. B. (2003). Understanding Virtual Reality: Interface, Application and Design. Morgan Kaufmann, 441-442.
  45. Tanaka, J. & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23, 457-482.
  46. Tang, A., Owen, C., Biocca, F. & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Ft. Lauderdale, Florida, USA, 73-80.
  47. Thorndyke, P. W. (1980). Performance models for spatial and locational cognition (Report R-2676-ONR). Washington, DC: Rand.
  48. Towne, D. M. (1985). Cognitive Workload in Fault Diagnosis. Los Angeles, CA: Behavioral Technology Laboratories, University of Southern California.
  49. Uenohara, M. & Kanade, T. (1995). Vision-based object registration for real-time image overlay. Computer Vision, Virtual Reality and Robotics in Medicine, 14-22.
  50. Urban, E. C. (1995). The information warrior. IEEE Spectrum, 32(11), 66-70.
  51. Veinott, E. S. & Kanki, B. G. (1995). Identifying human factors issues in aircraft maintenance operations. Poster presented at the 39th Annual Meeting of the Human Factors and Ergonomics Society, San Diego, CA.
  52. Wagner, D. & Pintaric, T. (2005). Towards massively multi-user augmented reality on handheld devices. Lecture Notes in Computer Science, Springer, Berlin, 208-219.
  53. Wang, X. & Dunston, P. S. (2006). Compatibility issues in augmented reality systems for AEC: an experimental prototype study. Automation in Construction, 314-326.
  54. Webster, A., Feiner, S., MacIntyre, B. & Massie, W. (1996). Augmented reality in architectural construction, inspection and renovation. Proceedings of the ASCE Third Congress on Computing in Civil Engineering, Anaheim, CA, 913-919.
  55. Yamada, A. & Takata, S. (2007). Reliability improvement of industrial robots by optimizing operation plans based on deterioration evaluation. Waseda University. Available online.
  56. Zachmann, G. (1996). A language for describing behavior and interaction with virtual worlds. Proceedings of the ACM VRST '96, Hong Kong.
  57. Zauner, J., Haller, M., Brandl, A. & Hartman, W. (2003). Authoring of a mixed reality assembly instructor for hierarchical structures. Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2003), 237-246.