Augmented Reality for Restoration/Reconstruction of Artefacts with Artistic or Historical Value

The artistic or historical value of a structure, such as a monument, a mosaic, a painting or, generally speaking, an artefact, arises from the novelty and the development it represents in a certain field and at a certain time of human activity. The more faithfully the structure preserves its original status, the greater its artistic and historical value. For this reason it is fundamental to preserve its original condition, keeping it as genuine as possible over time.
Nevertheless, the preservation of a structure is not always possible (traumatic events such as wars can occur), nor has it always been pursued, whether through negligence, incompetence, or even deliberate unwillingness. So, unfortunately, the present status of a non-negligible number of such structures ranges from bad to catastrophic.
In this frame, current technology furnishes fundamental help for reconstruction/restoration purposes, so as to bring a structure back to its original historical value and condition. Among modern facilities, new possibilities arise from Augmented Reality (AR) tools, which combine Virtual Reality (VR) settings with real physical materials and instruments.
The idea is to realize a virtual reconstruction/restoration before materially acting on the structure itself. In this way several advantages are obtained, among which: manpower and machine power are utilized only in the last phase of the reconstruction; potential damage/abrasion of parts of the structure is avoided during the cataloguing phase; it is possible to precisely define the forms and dimensions of any missing pieces; etc.
Actually, the virtual reconstruction/restoration can be further improved by taking advantage of AR, which furnishes many added informative parameters that can even be fundamental under specific circumstances. So we want here to detail the application of AR to restore and reconstruct structures with artistic and/or historical value.


Introduction
As outlined above, the artistic or historical value of an artefact depends on how faithfully it preserves its original status, yet its preservation is not always possible, or has not always been realized. Current technology, and in particular Augmented Reality (AR), furnishes fundamental help for reconstruction/restoration purposes, and the idea pursued here is to realize a virtual reconstruction/restoration before materially acting on the structure itself, exploiting the many added informative parameters that AR can furnish.
A useful frame of reference is the Reality-Virtuality Continuum (Milgram & Kishino, 1994). Along it we can refer to Reality or Real Environment (RE), Augmented Reality (AR), Augmented Virtuality (AV), and Virtual Environment (VE), also called Virtuality or Virtual Reality (VR). Intuitively, the RE is defined as the world as perceived by our senses, while the VE defines a scenario totally constructed or reconstructed with computers. The intermediate values of the scale are generally referred to as Mixed Reality (MR), which can be made with different "percentages" of reality vs. virtuality. So AV refers to scenarios where the virtual part is predominant, but where physical parts (real objects, real subjects) are integrated too, with the possibility for them to dynamically interact with the virtual world (preferably in real time), so that the scenarios can be considered "immersive", as, for instance, a "Cave Automatic Virtual Environment" is (Cruz-Neira et al., 1992).
On the other hand, the term AR refers to scenarios where the real part is predominant, and where artificial information about the environment and its objects is overlaid on the real world thanks to a medium such as a computer, a smartphone, or simply a TV screen, so that additional information directly related to what we are seeing is easily obtained (see Fig. 2 as an example).

The reconstruction/restoration cycle
Generally speaking, VR can be "self-consistent", in the sense that a virtual scenario remains within its boundaries and is utilized as such; think, for instance, of PlayStation games. On the contrary, when we deal with the restoration and/or reconstruction of architectural heritage or historical artefacts, a criterion is commonly adopted whereby the Virtuality-Reality Continuum is crossed (see Fig. 3). This happens in the sense that we start from the real status of matters (RE step), perform the analysis of the current status (AR step), run a virtual restoration/reconstruction of artefacts and materials (AV step) and produce a complete virtual representation of the reconstructed scenario (VE step). All of these steps are finalized to accurately describe, analyze and indicate the exact passages to be executed in reality to obtain the best possible operational results. At first glance, the concept of operating on reality by passing through virtuality seems not so practical. After all, we exchange our domain (reality) for another one (virtuality), but with the final aim of returning to the starting one (reality). Nevertheless, crossing the Virtuality-Reality Continuum offers some advantages, discussed in a while.
We can report a similar occurrence in the electronic field, in which circuit analysis, needed in the time domain, is carried out in the frequency domain before returning to time as the independent variable. This way of proceeding is adopted because of its advantages: less time-consuming procedures, algorithms of lower complexity, and filters that can be more easily implemented.
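By way of illustration of this domain-crossing strategy, the following sketch (in Python, with hypothetical signal and filter values) performs the same filtering operation directly in the time domain and, equivalently, by passing through the frequency domain and back; the two results coincide:

```python
import numpy as np

# a short "signal" and a smoothing filter (hypothetical values)
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.25])

# direct route: convolution in the time domain
direct = np.convolve(x, h)

# indirect route: transform to the frequency domain, multiply, transform back
n = len(x) + len(h) - 1          # full convolution length
freq = np.fft.rfft(x, n) * np.fft.rfft(h, n)
via_fft = np.fft.irfft(freq, n)
```

For long signals the indirect route is the faster one, which is exactly the kind of advantage that justifies crossing to another domain and back.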
The idea of exploiting the potential of the VR and AR domains in archaeology is not new. Examples come from the "Geist" (Ursula et al., 2001) and "Archeoguide" projects (Vassilios et al., 2001; Vlahakis et al., 2002). The first allows users to see the history of the places while walking in the city, and was tested at the Heidelberg castle. The second aims to create a system behaving like an electronic guide during the tours made by visitors to cultural sites, and was used in the archaeological site of Olympia in Greece. After these early examples, many other projects have come in recent years, such as one regarding the virtual exploration of underwater archaeological sites (Haydar et al., 2010).
But we want here to point out that AR can be successfully used for restoration and/or reconstruction purposes, so it can play an active role, rather than being utilized for mere tutorial reasons and thus confined to a passive part. To this aim, it is really useful to start from the mere real side of the problem, to cross the Virtuality-Reality Continuum, passing through AR and AV, till the mere virtual side, and to come back to the origin, as already stressed. This is for several reasons:
- restoration and/or reconstruction time can be reduced;
- the costs for restoration and/or reconstruction can be reduced: manpower and machinery are utilized only at the real final step, so even energy consumption is saved;
- some potential breakages or risks of destruction of the archaeological artifacts, often fragile but valuable, to be restored and/or reconstructed can be avoided;
- some potential abrasions/changes in the colors of the artifacts can be avoided;
- it is possible to establish the forms and dimensions of parts which are incomplete, so as to rebuild the artifact in an exact manner;
- it is possible to assemble the artifacts without damaging their remains or even causing damage in the excavation site where the artifact was found;
- it is possible to preview the possibilities of assembly more easily, reducing errors and the time spent on those tasks;
- the 3D scanning procedure is also useful to create a database, for cataloging reasons, for tourism promotion aims, for comparison studies, etc.;
- in cases where the structural stability of a monument is not in danger, non-intrusive visual reconstructions should be preferred to physical reconstruction; and so on.
VR played an active role in a project concerning the recovery of some artifacts that were buried at the Museum of the Terra Cotta Warriors and Horses, Lin Tong, Xi'an, China (Zheng & Li, 1999). Another example comes from a project to assemble a monument, like the Parthenon at the Acropolis of Athens (Georgios et al., 2001), for which one of the motivations for utilizing VR was the size and height of the blocks and the distance between one block and a possible match; VR thus helps archaeologists in reconstructing monuments or artifacts by avoiding the manual test of verifying whether one fragment matches another.
Archaeologists can spend several weeks drawing plans and maps, and taking notes and pictures of the archaeological findings. But VR offers systems to create a 3D reconstruction by simply taking several pictures, from which it is possible to get a 3D model of the artifacts (Pollefeys et al., 2003).
Among all the possibilities, we want here to point out that VR and, especially, AR can furnish further meaningful help if joined with Human-Computer Interaction (HCI) possibilities. To do so, we will further detail new acquisition systems capable of measuring human movements and translating them into actions, useful for the user to virtually interact with an AR scenario where archaeological artifacts are visualized (see paragraphs 4.3 and 4.4).

The models
In addition to the previous flow diagram regarding the restoration cycle (Fig. 3), it makes sense also to define the evolving situation of the artefacts to be restored during that cycle. So we can distinguish: original, state, restoration, and reconstruction models.
The original model concerns the parts of the monument, mosaic, painting, ancient structure or, generally speaking, artefact, which survive intact today without having been subjected to tampering, just as they were in the past.
The state model regards the current situation of the artefact: its original model after being integrated with "additions".
The restoration model consists of the original model with manual interventions of additions of what has been destroyed over time, so as to bring the artefact back to its native status.
The reconstruction model is defined when we are not limited to "simply" manual interventions of "additions", because so little remains that even an original model is difficult to define. The interventions are rather addressed to building something almost from the beginning, taking account only of the really scarce original parts of the artefact. So, a restoration model can be visualized for the Colosseum (also known as the Coliseum, originally the Flavian Amphitheatre) in the centre of the city of Rome (Italy), while a reconstruction model suits the Jewish Second Temple, practically destroyed by the Roman legions under Titus.
The restoration and reconstruction models can be realized by means of mixed reality in the V-R continuum.
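For cataloguing purposes, the four models distinguished above could be encoded as a simple enumeration; the following is a hypothetical Python sketch (names and catalogue entries are ours, used only for illustration):

```python
from enum import Enum

class ArtefactModel(Enum):
    """The four model types of the restoration cycle, as they might be
    recorded in a cataloguing database (hypothetical sketch)."""
    ORIGINAL = "parts surviving intact today, never tampered with"
    STATE = "current situation: original model plus later additions"
    RESTORATION = "original model plus additions restoring what was lost"
    RECONSTRUCTION = "rebuilt almost from scratch from scarce remains"

# hypothetical catalogue entries reflecting the examples in the text
catalogue = {
    "Colosseum": ArtefactModel.RESTORATION,
    "Second Temple": ArtefactModel.RECONSTRUCTION,
}
```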

AR applications
We know pretty well that the term AR refers to the fact that a viewer observes a view of the real world upon which computer-generated graphics are superimposed. The viewer can have a direct view of the real world, can experience a mediated observation of reality via video coupling or, finally, can experience an observation of post-processed real images. We will refer to this latter case. Being not completely virtual and not fully real, AR has quite extreme requirements to be suitably adopted. But its great potential makes AR both an interesting and challenging subject from scientific and business perspectives.
On one side we have reality, or its digital representation (which we understand as the real data), and on the other we have a system of representation of an informative multimedia database (the digital information). The connection between these two worlds is the geo-referencing, understood in a broad sense, that is, the need to place the information coherently upon the data element in three-dimensional space.
The information can be represented in various shapes: written text, floating windows with images, graphics, videos or other multimedia items, or renderings of the 3D virtual reconstruction, mono- or stereoscopic, generated in real time through a virtual model.
The contributions can come from specific files prepared "ad hoc", structured databases, search engines or blogs generated by users. The databases can be off-line or on-line. AR applications can restrict the user to exploring the informative setting by using eye movement (directly or through a device), or can offer different interactions, some of which we later detail according to our experiences, that allow choosing among the informative set available.
The geo-referencing can be accomplished by GPS detectors (outdoor) and position/motion tracking systems (indoor), down to the simplest shape-recognition systems based on graphic markers framed by a webcam (AR desktop).
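As a minimal sketch of outdoor geo-referencing (Python, with hypothetical coordinates), the following converts GPS positions to local metric offsets via an equirectangular approximation, and derives the compass bearing at which an AR tag should be drawn relative to the viewer:

```python
import math

def gps_to_local_enu(lat, lon, ref_lat, ref_lon):
    """Approximate east/north offsets (metres) of a point with respect to a
    reference position, using a local equirectangular projection."""
    R = 6371000.0  # mean Earth radius in metres
    east = math.radians(lon - ref_lon) * R * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * R
    return east, north

def bearing_to(east, north):
    """Compass bearing (degrees, clockwise from north) of the offset."""
    return math.degrees(math.atan2(east, north)) % 360.0

# hypothetical example: a monument seen from a nearby viewpoint
east, north = gps_to_local_enu(41.8902, 12.4922, 41.8920, 12.4900)
bearing = bearing_to(east, north)
```

Comparing this bearing with the device heading from the gyroscope/compass tells the application where on screen the informative tag belongs.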
Although AR is usually used to identify a strongly image-oriented technology, in its general sense it is the benchmark also for the description of audio or tactile/haptic AR experiences.
The generation of virtual audio panoramas, geo-placed audio notices and the virtual strengthening of touch have to be included in the general AR discipline; these aspects play an important and innovative role for the CH improvement. Therefore, the overall added value of this technology is the contextualization of real data and virtual information, and that value increases because of two main factors: real-time interaction and multi-dimensionality.
By real-time interaction we understand both the need to have visual rendering at 25 fps, to ensure a good visual exploration of the real/virtual space, and the ability to make queries and take actions in the informative database that determine changes in the status of the system (human in the loop).
For example, if an architect could choose the maximum and minimum values of a map of the static deformations of the facade of a historic building under restoration (derived from a real-time simulation), and if the results of this query were visible superimposed on the facade, there would certainly be a better understanding of the variables to control in order to make decisions.
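A minimal sketch of such a query, assuming a hypothetical 2-D displacement field in metres and a user-chosen value range, could look like this in Python:

```python
import numpy as np

def deformation_overlay(displacement, vmin, vmax):
    """Map a 2-D displacement field (metres) to 8-bit intensities for an AR
    colour overlay, clipping to the user-chosen [vmin, vmax] range so the
    architect controls which deformation band is highlighted."""
    clipped = np.clip(displacement, vmin, vmax)
    norm = (clipped - vmin) / (vmax - vmin)
    return (norm * 255).astype(np.uint8)

# hypothetical simulated displacements of four facade patches
field = np.array([[0.000, 0.005],
                  [0.012, 0.020]])
overlay = deformation_overlay(field, vmin=0.0, vmax=0.01)
```

Everything above `vmax` saturates at full intensity, so the regions exceeding the chosen threshold stand out immediately on the superimposed facade.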
Within this frame, multi-dimensionality furnishes an improvement, leading to the possibility of using images in stereoscopy (or stereophony/holophony, the third dimension anyway) and of scrolling the displayed information along time (the fourth dimension). In paragraph 4.2 we will furnish some new elements that we experienced in our labs.
These technological extensions become strategic in every operative field dealing with space and matter, like architecture, cultural heritage, engineering, design, etc. In fact, thanks to stereoscopic vision, it is possible to create an effective spatial representation of depth, which is fundamental in an AR application that allows wandering in the ruins of an archaeological site while visualising the 3D reconstruction hypothesis upon the real volumes, as already highlighted in this article with the "Archeoguide" project.
If we could choose among a series of virtual anastyloses from different historical periods, we would obtain the best available representation of a time machine, in a very natural way.
This type of experience allows coming into contact with the information linked to a place, architecture, ruin or object in a way completely different from the past, thanks to the possibility of exploring the real-virtual space at real scale (immersion) and to the high level of embodiment that is produced, i.e. of being in the space, actively, offering the moving body to the cognitive process (in fact "embodied" or "embodiment" are the concepts that identify the role of the body and of immersion in a subject's perceptual and experiential information).
It is the paradigm of enaction, i.e. enactive knowledge, enactive interfaces, enactive didactics, terms coined by Bruner and later by H. Maturana and F. Varela to identify interactive processes put in place between the subject and the significance of the subject, in which action is fundamental to the process of learning (Maturana & Varela, 1980; Varela et al., 1991; Bruner, 1996). Enaction can be proposed as a theoretical model to understand the way knowledge develops starting from the perceptual-motor interaction with the environment. This neuro-physiological approach is based on the idea that cognitive activity is embodied, that it is not separable from corporeal perception, and that it can come out only in a well-defined context through the direct action of the user with the context and with the other users. Interactivity, immersion, embodiment and enactivity are the key words of the new VR paradigm that, in the case of AR, shows all its power.

Critical issues
The represented technology is not immune from critical issues anyway:
- the need for computational power and speed of data transmission, to grant effective performance in the interaction;
- the need for rendering software able to reach a high level of photorealism in real time;
- the need to create virtual light, audio and haptic conditions as close as possible to reality, all calculated in real time. The trend is to measure the lighting conditions through the construction of an HDRI map (High Dynamic Range Images) by means of web-cam images, immediately applied to the virtual set;
- the need to increase the effectiveness of the tracking, in different environmental conditions;
- the adoption of video, audio and haptic hardware that can be worn easily even by people with physical or cognitive limitations;
- the need to propose usable interfaces when the state of the application changes (language, detail of the elaboration, historic period, etc.).
It is certainly not an exhaustive list, but it is meant to be a significant analysis of the importance of the current state of the art when designing an AR application.
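As a toy illustration of light estimation from web-cam images, the following Python sketch computes the mean relative luminance of a frame; it is only a crude stand-in for full HDRI capture (which requires multiple exposures), usable to roughly scale the virtual lights:

```python
import numpy as np

def estimate_scene_luminance(frame_rgb):
    """Crude light estimation from a webcam frame (H x W x 3, values 0-255):
    returns the mean relative luminance in [0, 1], usable to scale virtual
    lights so they roughly match the real scene."""
    rgb = frame_rgb.astype(np.float64) / 255.0
    # Rec. 709 luma weights for R, G, B
    luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return float(luma.mean())

# synthetic frames standing in for webcam captures
dark = np.zeros((4, 4, 3), dtype=np.uint8)
bright = np.full((4, 4, 3), 255, dtype=np.uint8)
```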

Evolution of AR techniques in computer graphics
Apart from the well-known computer graphics techniques that fall into the AR family, like bi-dimensional superimposing, the green/blue screen used in cinema (augmented reality), the television virtual set (augmented virtuality) or the sophisticated technologies used by James Cameron for the well-known "Avatar" (mixed reality), we will try to offer a point of view about AR used for general purposes or in the specific VCH (Virtual Cultural Heritage) field (meaning the discipline that studies and proposes the application of digital technologies to the virtual cultural experience understood in a broad sense, including photos, architectural walkthroughs, video anastylosis, virtual interactive objects, 3D websites, mobile applications, virtual reality and, of course, mixed reality), detailing:
1. AR desktop marker-based
2. AR desktop marker-less
3. AR freehand marker-less
4. AR by mobile
5. AR by projection
6. Collaborative AR
7. Stereo & Auto-Stereoscopic AR
8. Physics AR
9. Robotic AR

AR desktop marker-based
It is the best-known AR application, as it is the predecessor of the following evolutions. It is basically based upon tracking the spatial pose of a graphical marker, usually a black-and-white geometric symbol called a "confidence marker", on which a bi/three-dimensional input can be hooked in real time. The tracking is possible using a common web-cam with simple shape-recognition software working by color contrast. The marker can be printed on a rigid support to allow easy handling, but some more flexible supports are already available.
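The shape-recognition step can be illustrated with a toy Python sketch (a deliberately crude stand-in for the detection pipeline of toolkits such as ARToolKit): binarise the frame by colour contrast and return the bounding box of the dark, marker-like region:

```python
import numpy as np

def locate_marker(gray, threshold=128):
    """Crude marker localisation by colour contrast: binarise a grayscale
    frame and return the bounding box (top, left, bottom, right) of the
    dark pixels, or None if no marker-like region is found. Real toolkits
    additionally identify the symbol and estimate its 3-D pose."""
    mask = gray < threshold
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

# synthetic 8x8 frame with a black square standing in for the marker
frame = np.full((8, 8), 255, dtype=np.uint8)
frame[2:5, 3:6] = 0
box = locate_marker(frame)
```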
The most common applications published so far allow the user to explore a 3D model of an object, experiencing 360-degree vision, as if holding it in his hand (see Fig. 4). The model can be static, animated or interactive. The application can have many markers associated with different objects, displayed one by one or all at the same time.
In the VCH area, the best-known applications have been realized by flanking the archaeological piece shown in the display case with both the marker (printed on the catalogue, on the invitation card or on a special rigid support) and the multimedia totem that hosts the AR application. In this way the user can simulate the extraction of the object from the case and see it from every point of view. He can also use many markers at the same time, to compare the archaeological evolution of a building over time.
The NoReal.it company we are dealing with showed many experiences during scientific exhibits like the "XXI Rassegna del Cinema Archeologico di Rovereto" (TN, Italy) and the "Archeovirtual" stand at the "Borsa Mediterranea del Turismo Archeologico di Paestum" (SA, Italy). The technology is useful to solve a very important problem of the new archaeological museums: how to carry on the emotion even after the visit has finished. Thanks to marker-based AR, the user can connect to the museum's website, switch on the web-cam, print the graphic markers and start interacting with the AR applications, realizing a very effective home edutainment session and the customer loyalty that the latest museum-marketing techniques require. Some examples come from the Paul Getty Museum in Los Angeles or are cited in the "Augmented Reality Encyclopaedia".
Another advantage of marker-based AR is the possibility to clearly highlight the contributions in AR by printing the graphic symbol anywhere. This freedom is very useful in technical papers, interactive catalogues, virtual pop-up books and interactive books that link the traditional information to the new one. "ARSights" is a software that allows associating a 3D SketchUp model with an AR marker, so that almost all the Google Earth buildings can be visualized, working with a big worldwide CH database.

AR desktop marker-less
AR applications can be realized even without a geometric marker, through a process that graphically recognizes a generic image, or a portion of it, after which the tracking system is able to recognize its identity and orientation. In this way a newspaper page, or part of it, becomes the marker, and the AR contribution is able to integrate the virtual content directly into the graphic context of the paper, without resorting to a specific symbol.
On the other hand, you need to make people understand that that page has been created to receive an AR contribution, as there is no symbol to signal it. The interaction is the same as in marker-based applications, with one or multiple AR objects linked to one or more parts of the printed image. There are many applications for the VCH: see, for example, the "AR-Museum" realized in 2007 by a small Norwegian company (bought a year later by Metaio), where you can visualize virtual characters interacting with the real furniture in the real space of the museum room.

AR freehand marker-less
We can include in the "AR freehand marker-less" category:
- applications that trace the position, orientation and direction of the user's gaze, using tracking systems and various movement sensors, associated with gesture-recognition software;
- applications that recognize the spatial orientation of the visual device.
The main difference between the two previous categories is the user's role: in the first case the user interacts with his own body, in the second he activates a bound visualizing device that is spatially tracked. In both cases, application-control experiences through natural interaction systems are becoming very common thanks to the Microsoft "Kinect" or the "Wavi Xtion" by PrimeSense with Asus, or even the Nintendo "Wii-mote" and the Sony "PlayStation Move", which still require a pointing device.
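The gesture-to-action mapping can be sketched very simply: assuming a hypothetical tracker that reports hand displacement in pixels, the following Python function (our own toy example, not any sensor's actual API) converts it into yaw/pitch angles for rotating a virtual artefact:

```python
def hand_delta_to_rotation(dx_px, dy_px, frame_w, frame_h, max_deg=90.0):
    """Map a tracked hand displacement in pixels to yaw/pitch angles for
    rotating a virtual artefact: a full-frame sweep spans +/- max_deg.
    A toy stand-in for the gesture-recognition layer of sensors such as
    the Kinect."""
    yaw = (dx_px / frame_w) * 2.0 * max_deg
    pitch = (dy_px / frame_h) * 2.0 * max_deg
    return yaw, pitch

# sweeping half the frame width rotates the model by max_deg
yaw, pitch = hand_delta_to_rotation(320, -120, 640, 480)
```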
But the most extraordinary applications, even in the VCH, are the ones where the user can walk in an archaeological site, visualizing the reconstruction hypothesis superimposed on the real ruins. The first experience came from the European project "Archeoguide", already mentioned. Another field of application could be calculating the structural integrity of architectural buildings with a sort of wearable AR device that permits an X-ray view of the inside of the surrounding object. This kind of application was presented at IEEE VR 2009 by Avery et al. (2009) (IEEE VR is the event devoted to Virtual Reality, organized by the IEEE Computer Society, a professional organization, founded in the mid-twentieth century, with the aim of enhancing the advancement of new technologies).

AR by mobile
The latest frontier of AR technology involves personal mobile devices, like mobile phones. Using the GPS, the gyroscope and the standard web-cam hosted in the mobile devices, you can create the optimal conditions for the geo-referencing of the information, comparing it with the maps and the satellite orthophotos available in the most common geo-browsers (Google Maps, Microsoft Bing, etc.).
The possibility of being connected from every place in the world permits choosing the type of information to superimpose. In the last five years the first tests were born for Apple mobile devices (iPhone, iPad, iPod), Android and Windows CE devices. The applications available today tend to exploit the Google Maps geo-referencing to suggest informative tags located in the real space, following the mobile camera. See, for example, "NearestWiki" for iPhone or "Wikitude" for iPhone, Android and Symbian OS, but many other applications are coming out. A particular VCH application is "Tuscany+", which publishes informative tags in AR, specific for a virtual tour of the Tuscan cities, with various types of 2D and 3D contributions.
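The selection step behind such AR browsers can be sketched in a few lines of Python: given the device position and a list of geo-tagged points of interest (hypothetical names and coordinates below), keep the ones within a chosen radius, nearest first:

```python
import math

def nearby_tags(lat, lon, pois, radius_m=500.0):
    """Return (name, distance_m) pairs for the points of interest within
    radius_m of the device position, nearest first. Coordinates are in
    degrees; distances use the haversine formula."""
    def haversine(lat1, lon1, lat2, lon2):
        R = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    hits = [(name, haversine(lat, lon, plat, plon)) for name, plat, plon in pois]
    return sorted([(n, d) for n, d in hits if d <= radius_m], key=lambda t: t[1])

# hypothetical points of interest near a viewer in Florence
pois = [("Duomo", 43.7731, 11.2560), ("Uffizi", 43.7678, 11.2553)]
tags = nearby_tags(43.7730, 11.2560, pois)
```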
During "Siggraph 2008", the IGD Fraunhofer with ZGDV presented the first version of "InstantReality", a framework for VR and AR that used historical and archaeological images as markers to apply the 3D model of the reconstruction hypothesis. Another example of a VCH AR application, made for interactive didactics, has been carried out in Austria (see www.youtube.com/watch?v=denVteXjHlc).

AR by projection
The 3D video-mapping techniques have to be included among the AR applications. They are known for their application to entertainment, projected on the facades of historical and recent buildings, but they are also used to augment physical prototypes used the same way as three-dimensional displays.
The geo-referencing is not obtained by a tracking technique, as in the previous cases, but through the perfect superimposition of the light projection on the building's wall with the virtual projection on the same wall, virtually duplicated. The video is not planar but is distorted in compliance with the 3D volumes of the wall. The "degree of freedom" of the visitor is total, but in some cases the projection can be of lower quality because of the unique direction of the light beam.
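At the core of this alignment is a planar warp. The following Python sketch (with hypothetical corner coordinates) computes, by the direct linear transform, the 3x3 homography that maps the projector frame corners onto the measured wall corners; applying it pre-distorts the video so it lands correctly on the facade:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 homography mapping four 2-D src points
    to the four dst points, with h33 fixed to 1. This is the planar warp
    used to pre-distort a video for projection mapping."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# hypothetical alignment: projector frame corners -> measured wall corners
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.0), (0.9, 0.1), (1.0, 1.0), (0.0, 0.9)]
H = homography(src, dst)

# warp one corner through the homography (homogeneous coordinates)
p = H @ np.array([0.0, 0.0, 1.0])
corner = p[:2] / p[2]
```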
Today we do not know of many 3D video-mapping applications for edutainment, but many entertainment experiences can be reported. 3D video-mapping can be used to show different hypothetical solutions for a building facade in different historical ages, or to simulate a virtual restoration, as was done for the original-color simulation of the famous "Ara Pacis" in Rome (Fig. 6).

Collaborative AR
The "Second Life" explosion, during the years 2008-2009, brought to light the metaverses as 3D collaborative on-line engines, with many usable capabilities, a high level of production of new 3D interactive contents and avatar customization. The aesthetic results, and the character animation, reached a very high technical level.
Other useful aspects of the generic MMORPG (Massively Multiplayer Online Role-Playing Game, a type of computer role-playing game set within a virtual world) allow developing and gathering together big content-making communities and content generators, in particular script generators that extend the applicative possibilities and the usable devices. "Second Life" and "Open Sim", for instance, have been used by T. Lang & B. Macintyre, two researchers of the Georgia Institute of Technology in Atlanta and of the Ludwig-Maximilians Universitat in Munich, as a rendering engine able to publish avatars and objects in AR (see http://arsecondlife.gvu.gatech.edu). This approach makes the AR experience not only interactive, but also shareable by the users pursuant to the 2.0 protocol. We can state that it is the first home low-cost tele-presence experience.
Fig. 7. The SL avatar in AR, in real scale, and controlled by a brain interface.
An effective use of the system was obtained in the movie "Machinima Futurista" by J. Vandagriff, who reinvented the Italian movie "Vita Futurista" in AR. Another recent experiment, by D. Carnovale, links Second Life, AR and a Brain Control Interface to interact at home with one's own avatar, in real scale, using nothing but one's own thoughts (Fig. 7).
We do not yet know of a specific use in the VCH area, but we can surely think about an avatar in AR for school experiences, a museum guide, an experimental-archaeology co-worker, a cultural entertainer for kids, a reproduction of ancient characters and so on.

Stereo & auto-stereoscopic AR
Stereoscopic/auto-stereoscopic visualization is used in VCH as an instrument to improve the spatial exploration of objects and environments, addressed both to entertainment and to experimental research, and it draws on industrial research, applied for some ten years now, that considers the stereoscopic use of AR as fundamental.
The ASTOR project was presented at Siggraph 2004 by some teaching fellows of the Royal Institute of Technology of Stockholm, Sweden (Olwal et al., 2005), but thanks to the new auto-stereoscopic devices its possibilities will increase in the near future. Currently, the only suite that includes stereoscopy and auto-stereoscopy in marker-based/marker-less AR applications is Linceo VR by Seac02, which manages the real-time binocular rendering using a prismatic lens over ordinary monitors.
Mobile devices like the "Nintendo 3DS" and the "LG Optimus 3D" will positively influence its use for museum interactive didactics, in both indoor and outdoor experiences.

Physics AR
Recently, AR techniques have been applied to interactive simulation processes, responding to the user's actions or reactions, as happens, for instance, with the simulation of the physical behaviour of objects (such as DNA helices). The AR simulation of physical apparatus can be used in the VCH to visualise tissues, elastic elements, ancient earthquakes or tsunamis, fleeing crowds, and so on.
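The per-frame update that an AR physics layer runs to animate an elastic element can be sketched as a damped spring integrated with semi-implicit Euler steps (a minimal Python sketch with hypothetical stiffness and damping values, unit mass implied):

```python
def spring_step(pos, vel, dt=0.01, k=40.0, damping=0.8, rest=1.0):
    """One semi-implicit Euler step of a damped spring (unit mass): the kind
    of update an AR physics layer runs each frame so an elastic element
    reacts to the user and settles back to its rest length."""
    force = -k * (pos - rest) - damping * vel
    vel += force * dt
    pos += vel * dt
    return pos, vel

# the user stretches the element to 1.5, then releases it
pos, vel = 1.5, 0.0
for _ in range(2000):          # 20 simulated seconds at 100 steps/s
    pos, vel = spring_step(pos, vel)
```

After the transient oscillation decays, the element settles at its rest length, which is what the viewer perceives as a physically plausible reaction.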

Robotic AR
So far, Robotics and AR have been matched together in experiments that mount a web-cam on different drones and then create AR sessions with and without markers.
Examples of such experiments have produced hexacopters and land vehicles. "AR Drone" is the first UAV (Unmanned Aerial Vehicle) controlled via Wi-Fi through an iPhone or iPad, transmitting real-time images of the flight together with augmented information (see www.parrot.com). "Linceo VR" includes an SDK to control the WowWee Rovio. With the "Lego Mindstorm NXT Robot" and the "Flex HR" software, you can build robotic domestic experiences.
These open new ways for an AR virtual visit, from high-level points of view, of places dangerous for a real visit, or underwater. The greatest difficulties are the limited Wi-Fi connection range, the public security laws (because of flying objects), and the flight autonomy and/or robot movement. Fig. 8 shows a frame of an AR Drone in action: on the iPhone screen the camera frames can be seen, in real time, with superimposed information. Currently it is used as a game device; in the future it will be usable for VCH aerial explorations.

Commercial software and open-source
We can list the most popular AR software currently in use. The commercial companies are the German Metaio (Munich, San Francisco), the Italian SEAC02 (Turin), the Italian Inglobe Technologies (Ceccano, FR), and the French Total Immersion (Los Angeles, Paris, London, Hong Kong). Each software house proposes different solutions in terms of rendering performance, 3D editing tools, format import from external 3D modelling software, type and quantity of control devices, etc. The commercial offers are based on different user licences. Some open/free suites are available under different GNU licences; the best known is "ARToolKit", usable in its original format, Flash-included ("FLARToolKit"), or with the more user-friendly interface of "FLARManager". More and more other solutions are coming.
The natural mission of VCH is spatial exploration, seen as an open environment (the ground, the archaeological site, the ancient city), as a closed environment (the museum or a building), and as a simple object (the ruin, the ancient object). The user can live this experience in real time, perceiving himself as a unit and exploiting all his/her senses and movement to complete and improve the comprehension of the formal meaning he/she is surrounded by. Mobile technologies allow the acquisition of information consistent with the spatial position, while information technologies make the 3D virtual stereoscopic rendering of the reconstruction very realistic. The VCH will pass through MR technologies, intended as an informative cloud that involves, completes and deepens knowledge, and as the possibility to share a virtual visit with people online.

New materials and methods
For the aim of the restoration/reconstruction of structures with artistic or historical value, architectural heritage, cultural artefacts and archaeological materials, three steps have been fundamental until now: acquiring the real images; representing the real (or even virtualized) images; and superimposing the virtual information on the represented real images, keeping the virtual scene in sync with reality. With the latest enhanced technologies, each of these steps can find interesting improvements, and even a fourth step can be implemented. We refer to the possibility that the images can be acquired and represented with autostereoscopic technologies, that a user can see his/her gestures mapped onto the represented images, and that these gestures can be used to virtually interact with the real or modelled objects. So, after a brief description of the standard techniques of image acquisition (paragraph 4.1), we will discuss, on the basis of our experience, the new autostereoscopy possibilities (paragraph 4.2), the new low-cost systems to record human gestures (paragraph 4.3), and how to convert them into actions useful to modify the represented scenario (paragraph 4.4), for an immersive human-machine interaction.
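The superimposition step can be reduced, at its core, to a per-pixel alpha blend of a rendered virtual layer over the camera frame. The following is a minimal illustration of that principle only; the array names and toy sizes are hypothetical:

```python
import numpy as np

def overlay(real, virtual, alpha_mask):
    """Blend a rendered virtual layer over a camera frame.

    real, virtual : HxWx3 float arrays in [0, 1]
    alpha_mask    : HxW float array in [0, 1]; 1 = fully virtual pixel
    """
    a = alpha_mask[..., None]              # broadcast over colour channels
    return (1.0 - a) * real + a * virtual

# toy 2x2 frames: the virtual layer only covers the top-left pixel
real = np.zeros((2, 2, 3))
virtual = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
out = overlay(real, virtual, mask)
# out[0, 0] == [1, 1, 1]; every other pixel stays [0, 0, 0]
```

Keeping the virtual scene in sync with reality then amounts to recomputing `virtual` and `alpha_mask` from the tracked camera pose at every frame.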

3D Image acquisition
Nowadays the possibility to obtain 3D image data of an object's appearance and to convert it into useful data comes mainly from manual measuring (Stojakovic and Tepavcevica, 2009), stereo-photogrammetry surveying, 3D laser-scanner apparatuses, or a mixture of them (Mancera-Taboada et al., 2010).
Stereo-photogrammetry allows the determination of the geometric properties of objects from photographic images. Image-based measurements have been carried out especially for huge architectural heritage (Jun et al., 2008) and with the application of spatial information technology (Feng et al., 2008), for instance by means of balloon images assisted by terrestrial laser scanning (Tsingas et al., 2008), ad-hoc payload model helicopters (Scaioni et al., 2009) or so-called Unmanned Aerial Vehicles or UAVs (van Blyenburg, 1999), capable of allowing high-resolution image acquisition. With the laser-scanner the spatial coordinates of the surface points of the objects under investigation can be obtained (Mancera-Taboada et al., 2010; Costantino et al., 2010), as, for example, we did for the pieces of an ancient column of the archaeological site of Pompeii (see Fig. 9). Anyway, according to our experience, the data acquired with a laser-scanner are not so useful when it is necessary to share them on the web, because of the huge amount of data generated, especially for large objects with many parts to be detailed. In addition, laser-scanning measurements can often be affected by errors of different nature, so analytical models must be applied to estimate the differential terms necessary to compute the object's curvature measures; statistical analyses are generally adopted to overcome the problem (Crosilla et al., 2009). In any case, the 3D image data can be usefully adopted both for VR and for AR: in the first case the data are used to build virtual environments, more or less detailed and even linked to a cartographic model (Arriaga & Lozano, 2009); in the second the data are used to superimpose useful information (dimensions, distances, virtual "ghost" representations of hidden parts, ...) over the real scene, beyond mere presentation purposes, towards being a tool for analytical work.
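One common way to reduce a dense laser-scan cloud before web sharing is a voxel-grid average: this is a generic illustration, not the specific pipeline used for the Pompeii column, and the voxel size is a hypothetical value:

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.05):
    """Collapse a point cloud to one averaged point per cubic voxel.

    points : iterable of (x, y, z) tuples in metres
    voxel  : cell edge length in metres
    """
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)     # integer voxel index
        cells[key].append(p)
    # average the points that fell into each occupied voxel
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in cells.values()]

# three nearly coincident points collapse into one; the far point survives
cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.0, 0.01, 0.0), (1.0, 1.0, 1.0)]
print(len(voxel_downsample(cloud, voxel=0.05)))     # 2 voxels survive
```

Larger voxels trade geometric detail for file size, which is exactly the compromise needed when publishing large scans online.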
The superimposed information must be deduced from the analysis of the building materials, structural engineering criteria and architectural aspects.
The latest technological possibilities allow online image acquisition for auto-stereoscopic effects. These present the fundamental advantage that 3D vision can be realized without the need for the user to wear special glasses, as is currently done. In particular we refer to a system we have adopted and improved, detailed in the following paragraph.

Auto-stereoscopy
The "feeling of presence" in an AR scene is a fundamental requirement. The movements, the virtual interaction with the represented environment and the use of some interfaces are possible only if the user "feels the space" and understands where all the virtual objects are located. But the level of immersion in AR highly depends on the display devices used. Strictly regarding the criteria of representation, the general approach of scenario visualization helps to understand the dynamic behaviour of a system better and faster. But the real boost in representation has come, in the latest years, from a 3D approach, which also helps in communicating and discussing decisions with non-experts. The creation of 3D visual information, or the representation of an "illusion" of depth in a real or virtual image, is generally referred to as Stereoscopy. A strategy to obtain this is through eyeglasses, worn by the viewer, used to combine separate images from two offset sources or to filter offset images from a single source, separating them to each eye. But eyeglass-based systems can suffer from uncomfortable eyewear, control wires, cross-talk levels up to 10% (Bos, 1993), image flickering and reduction in brightness.
On the other hand, AutoStereoscopy is the technique of displaying stereoscopic images without the use of special headgear or glasses on the part of the viewer. Viewing freedom can be enhanced by: presenting a large number of views so that, as the observer moves, a different pair of views is seen for each new position; or tracking the position of the observer and updating the display optics so that the observer is maintained in the AutoStereoscopic condition (Woodgate et al., 1998). Since AutoStereoscopic displays require no viewing aids, they seem to be a more natural long-term route to 3D display products, even if they can present loss of image (typically caused by inadequate display bandwidth) and cross-talk between image channels (due to scattering and aberrations of the optical system). In any case we want here to focus on AutoStereoscopy for realizing what we believe to be, at the moment, the most interesting 3D representations for AR. Current AutoStereoscopic systems are based on different technologies, which include lenticular lenses (arrays of magnifying lenses), parallax barriers (alternating points of view), volumetric displays (via the emission, scattering, or relaying of illumination from well-defined regions in space), electro-holography (holographic optical images are projected for the two eyes and reflected by a convex mirror on a screen), and light field displays (consisting of two layered parallax barriers).
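The offset between two such views of the same point follows the standard pinhole-parallax relation, disparity = focal × baseline / depth; the baseline and focal values in this sketch are hypothetical:

```python
def disparity_px(depth_m, baseline_m=0.065, focal_px=1000.0):
    """Horizontal pixel offset between two offset views of a point at a
    given depth, under a simple pinhole camera model."""
    return focal_px * baseline_m / depth_m

# nearer points shift more between the two eyes; distant points shift less,
# which is what produces the perceived depth
print(disparity_px(1.0), disparity_px(2.0))   # roughly 65 px vs 32.5 px
```

The same relation explains why cross-talk is most visible on near, high-disparity objects: their left and right images are farthest apart on the panel.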
Our efforts are currently devoted to four main aspects: user comfort, amount of data to process, image realism, and dealing with both real objects and graphical models. In such a view, our collaboration involves the Alioscopy company (www.alioscopy.com) regarding their 3D AutoStereoscopic visualization system which, despite not completely satisfying all the requirements, remains one of the most affordable systems in terms of cost and time effort. The 3D monitor is based on a standard Full HD LCD, and its capability to render 8 points of view is called "multiscope". Each pixel of the panel combines three colour sub-pixels (red, green and blue), and the array of lenticular lenses casts different images onto each eye, since it magnifies a different point of view for each eye, viewed from slightly different angles (see Fig. 10). This results in a state-of-the-art visual stereo effect rendered with typical 3D software such as 3ds Max, Maya, Lightwave, and XSI. The display uses 8 interleaved images to produce the AutoStereoscopic 3D effect with multiple viewpoints. We realized 3D images and videos, adopting two different approaches for graphical and real models. The graphical model is easily managed thanks to the 3D Studio Max Alioscopy plug-in, which is not usable for real images, for which a multi-camera set is necessary to recover the 8 view-points. The virtual or real captured images are then mixed, by means of OpenGL tools, in groups of eight to realize AutoStereoscopic 3D scenes. We paid special attention to positioning the cameras to obtain a correct motion capture of a model or a real image: in particular the cameras must be the same distance apart (6.5 cm is the optimal distance) and each camera must "see" the same scene but from a different angle.
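The 8-view interleaving principle can be sketched as follows; note that real lenticular panels such as Alioscopy's use a slanted sub-pixel mapping, so this column-wise version is a simplifying assumption for illustration only:

```python
import numpy as np

def interleave_views(views):
    """Simplified multi-view interleave: pixel column x of the output takes
    its colour from view (x mod N). Real lenticular panels interleave at
    sub-pixel level along a slanted lens axis, but the cycling is the same.
    """
    n = len(views)
    h, w, _ = views[0].shape
    out = np.empty_like(views[0])
    for x in range(w):
        out[:, x, :] = views[x % n][:, x, :]
    return out

# 8 flat-colour test views, 2x16 pixels each: view i is filled with i/8
views = [np.full((2, 16, 3), i / 8.0) for i in range(8)]
frame = interleave_views(views)
# columns cycle through the values 0/8, 1/8, ..., 7/8, then repeat
```

Under the lens array, each cycled column is magnified towards a different viewing direction, so each eye picks up only its own subset of columns.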
The great advantage of AutoStereoscopic systems consists of an "immersive" experience for the user, with unmatched 3D pop-out and depth effects on video screens, and this without the discomfort of any kind of glasses. On the other hand, an important amount of data must be processed, since every frame is formed by eight images at the same time. Current personal computers are, in any case, capable of dealing with this amount of data, thanks to the powerful graphics cards available today.

Input devices
Human-machine interaction has historically been realized by means of conventional input devices, namely keyboard, mouse, touch-screen panel, graphic tablet, trackball and pen-based input in 2D environments, and three-dimensional mouse, joystick and joypad in 3D space. But new advanced user interfaces can be much more user friendly, can ensure higher user mobility and allow new possibilities of interaction. The new input devices for advanced interactions take advantage of the possibility of measuring human static postures and body motions, translating them into actions in an AR scenario. But there are so many different human static and dynamic posture measurement systems that a classification can be helpful. For this, a suggestion comes from one of our works (Saggio & Sbernini, 2011), completing a previous proposal (Wang, 2005), which refers to a schematization based on the position of the sensors and the sources (see Fig. 14). Specifically: the Outside-In Systems typically involve optical techniques with markers, which are the sources, strategically placed on the wearer's body parts to be tracked. Cameras, which are the sensors, capture the wearer's movement, and the motion of those markers can be tracked and analyzed. An example of application can be found in the "Lord of the Rings" movie productions, to track movements for the CGI "Gollum" character. This kind of system is widely adopted (Gavrila, 1999), since it is probably the oldest and most perfected one, but the accuracy and robustness of the AR overlay process can be greatly influenced by the quality of the calibration obtained between camera and camera-mounted tracking markers (Bianchi et al., 2005). The Inside-Out Systems deal with sensors attached to the body, while sources are located somewhere else in the world.
Examples are the systems based on accelerometers (Fiorentino et al., 2011; Mostarac et al., 2011; Silva et al., 2011), MEMS (Bifulco et al., 2011), ensembles of inertial sensors such as accelerometers, gyroscopes and magnetometers (Benedetti, Manca et al., 2011), RFID, or IMUs, which we applied to successfully measure movements of the human trunk (Saggio & Sbernini, 2011). Within this frame, some research groups and commercial companies have developed sensorized garments for all the parts of the body over the past 10-15 years, obtaining interesting results (Giorgino et al., 2009; Lorussi, Tognetti et al., 2005; Post et al., 2000).
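A common way such inertial ensembles are fused is a complementary filter, which trusts the gyroscope short-term and the accelerometer long-term. The following sketch is a generic illustration (not the cited trunk-measurement system; sampling rate and blend factor are hypothetical):

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro rate (rad/s) with accelerometer components (ax, az, in g)
    into a pitch-angle estimate, in radians."""
    angle = 0.0
    for gyro_rate, ax, az in samples:
        accel_angle = math.atan2(ax, az)    # tilt inferred from gravity
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
    return angle

# stationary sensor held at ~30 degrees: gyro reads 0, accel sees the tilt
tilt = math.radians(30)
samples = [(0.0, math.sin(tilt), math.cos(tilt))] * 2000
est = complementary_filter(samples)
# the estimate converges to ~0.52 rad (30 degrees)
```

The gyro term keeps the estimate responsive during fast trunk movements, while the slow accelerometer correction cancels the gyro's drift.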
The Inside-In Systems are particularly used to track body-part movements and/or relative movements between specific parts of the body, having no knowledge of the 3D world the user is in. In such systems the sensors and sources are for the most part realized within the same device and are placed directly on the body segment to be measured, or even sewn inside the user's garment. The design and implementation of sensors that are minimally obtrusive, have low power consumption, and can be attached to the body or be part of clothes, together with the employment of wireless technology, allows data to be obtained over an extended period of time and without significant discomfort. Examples of Inside-In Systems come from the application of strain gauges for stress measurements (Ming et al., 2009), conductive-ink-based materials (Koehly et al., 2006), by which it is possible to realize bend/touch/force/pressure sensors, and piezoelectric materials or PEDOT:PSS basic elements for realizing bend sensors (Latessa et al., 2008), and so on.
The Outside-Out Systems consider both sensors and sources not directly placed on the user's body but in the surrounding world. Let's consider, for instance, the new Wireless Embedded Sensor Networks, which consist of sensors embedded in an object such as an armchair. The sensors detect the human postures and, on the basis of the recorded measures, furnish information to modify the shape of the armchair to best fit the user's body (even taking into account the environment changes). Another application is the tracking of the hand's motions used as a pointing device in a 3D environment (Colombo et al., 2003).
In the near future it is probable that the winning role will be played by a technology that takes advantage of mixed systems, i.e. including only the most relevant advantages of the Outside-In and/or Inside-Out and/or Inside-In and/or Outside-Out Systems. In this sense an interesting application comes from the Fraunhofer IPMS, where researchers have developed a bidirectional micro-display that could be used in Head-Mounted Displays (HMDs) for gaze-triggered AR applications. The chips contain both an active OLED matrix and integrated photo-detectors, with a front brightness higher than 1500 cd/m². The combination of both matrices in one chip is an essential possibility for system integrators to design smaller, lightweight and portable systems with both functionalities.

Human-computer interaction
As stated at the beginning of paragraph 4, we want here to point out the utilization of the measurements of human postures and kinematics in order to convert them into actions useful to somehow virtually interact with the represented real or virtual scenario. We are representing here the concept of Human-Computer Interaction (HCI), with functionality and usability as its major issues (Te'eni et al., 2007). In the literature there are several examples of HCI (Karray et al., 2008).
In the latest years our research group has realized systems capable of measuring human postures and movements and of converting them into actions or commands for PC-based applications. This can be intended as an example of Human-Computer Interaction (HCI) which, more specifically, can be defined as the study, planning and design of the interaction between users and computers. We designed and realized different systems which can be framed into both the Inside-Out and Inside-In categories. An example comes from our "data glove", named Hiteg glove after our acronym (Health Involved Technical Engineering Group). The glove is capable of measuring all the degrees of freedom of the human hand, and the recorded movements can then simply be represented on a screen (Fig. 15a) or adopted to virtually interact with, manipulate and handle objects in a VR/AR scenario (Fig. 15b). On the basis of the Hiteg glove and of home-made software, we realized a project for the virtual restoration/reconstruction of historical artifacts. It starts from acquiring the spatial coordinates of each part of the artifact to be restored/reconstructed by means of laser-scanner facilities. Each of these pieces is then virtually represented in a VR scenario, and the user can manipulate them to virtually represent the possibilities of the restoration/reconstruction steps, detailing each single maneuver (see Fig. 16 and Fig. 17).
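The internal calibration of the Hiteg glove is not detailed here; as a generic illustration, a flex-sensor reading can be mapped to a joint angle from two recorded calibration poses (open hand and closed fist). All ADC values below are hypothetical:

```python
def calibrate(raw_flat, raw_fist, angle_max=90.0):
    """Return a function mapping a raw flex-sensor ADC reading to a joint
    angle in degrees, using a linear two-pose calibration."""
    span = raw_fist - raw_flat
    def to_angle(raw):
        t = (raw - raw_flat) / span
        t = min(max(t, 0.0), 1.0)     # clamp outside the calibrated range
        return t * angle_max
    return to_angle

# hypothetical 10-bit ADC values recorded during the two calibration poses
index_pip = calibrate(raw_flat=210, raw_fist=760)
print(index_pip(210), index_pip(485), index_pip(760))   # 0.0 45.0 90.0
```

Repeating the calibration per sensor compensates for the spread between individual flex sensors and between users' hand sizes; the resulting angles then drive the virtual hand that manipulates the artifact pieces.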
We developed noninvasive systems to measure human trunk static and dynamic postures too. The results are illustrated in Fig. 18a,b, where the sensors are applied to a home-made dummy, capable of performing the real trunk movements of a person, and the measured positions are replicated by an avatar on a computer screen. Thanks to the measured movements, one can see him/herself directly immersed in an AR scenario, and his/her gestures can virtually manipulate the virtually represented objects. But the mixed reality can be addressed to even more sophisticated possibilities. In fact, adopting systems which furnish feedback to the user, it is possible to perform the action of virtually touching an object seen on a PC screen, while having the sensation of the touch as being real. So the usual one-way communication between man and computer can now become a bidirectional information transfer, providing a user-interface return channel (see Fig. 19).

Fig. 19. Exchange of information between user and computer
We can deliver sensations to the skin and muscles, through touch, of the weight and relative rigidity of non-existing objects. Generally speaking we can treat force or haptic feedback, properties which can be integrated into VR and AR scenarios. The force feedback consists of equipment to furnish the physical sensation of resistance, to trigger kinetic stimuli in the user, while the haptic feedback consists of apparatus that interfaces with the user through the sense of touch (for its calibration, registration, and synchronization problems see Harders et al., 2009). In particular, the so-called affective haptics involves the study and design of devices and systems that can elicit, enhance, or influence the emotional state of a human just by means of the sense of touch. Four basic haptic (tactile) channels governing our emotions can be distinguished: physiological changes (e.g., heart beat rate, body temperature, etc.), physical stimulation (e.g., tickling), social touch (e.g., hug, handshake), and emotional haptic design (e.g., shape of device, material, texture) (Wikipedia, 2011). In our frame of reference we refer only to the physical stimulation part, since it is the least emotional but the only one that makes sense for our purposes. Within it, the tactile sensation is the most relevant and includes pressure, texture, puncture, thermal properties, softness, wetness, friction-induced phenomena such as slip, adhesion, and micro failures, as well as local features of objects such as shape, edges, embossing and recessed features (Hayward et al., 2004). But also vibro-tactile sensations, in the sense of the perception of oscillating objects in contact with the skin, can be relevant for HCI aspects.
A possibility to simulate the grasping of virtual objects can be realized by means of small pneumatic pistons in a hand-worn solution, which makes it possible to achieve a low-weight and hence portable device (an example is the force-feedback glove from the HMI Laboratory at Rutgers University; Burdea et al., 1992).
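A common way to derive the command for such an actuator is a virtual-spring model clamped to the actuator's force limit; the following sketch is a generic illustration (stiffness, positions and limit are hypothetical, not the Rutgers glove's actual parameters):

```python
def feedback_force(finger_pos, surface_pos, k=300.0, f_max=5.0):
    """Force (N) to command a glove actuator when the fingertip penetrates
    a virtual surface: a clamped virtual spring, zero outside the object.

    finger_pos, surface_pos : positions along the contact normal, in metres
    """
    penetration = surface_pos - finger_pos    # > 0 once inside the surface
    if penetration <= 0.0:
        return 0.0
    return min(k * penetration, f_max)        # never exceed actuator limit

# shallow contact yields a gentle force; deep penetration saturates at f_max
print(feedback_force(0.095, 0.100), feedback_force(0.050, 0.100))
```

Clamping matters for safety and stability: without `f_max`, a fast hand motion deep into the virtual object would command a force spike the pneumatic piston cannot, and should not, deliver.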

Conclusion
Given the importance of the restoration and/or reconstruction of structures with artistic or historical value, architectural heritage, cultural artefacts and archaeological materials, this chapter discussed the meaning and importance of AR applications within this frame. So, after a brief overview, we focused on the restoration cycle, underlining the cross-relations between Reality and Virtuality and how they support AR scenarios. An entire paragraph was devoted to AR applications, their critical aspects, developments, and related software. Particular attention was paid to new materials and methods, to discover (more or less) future possibilities which can considerably improve the restoration/reconstruction processes of artefacts, in terms of time, effort and cost reductions.