8 The Hidden World: Reality Through Laser Scanner Technologies – A Critical Approach to Documentation and Interpretation



Introduction
Our experience with laser scanner systems began in 2003, when the Leica company put some of their new devices at our disposal and gave us the opportunity to include them in our 3D modelling work (Figure 1). As we indicated in the press cutting (Farjas and Sardiña, 2003), this new equipment was able to model practically continuous surfaces without the need to discretise the data capture, and it ruled out subjectivity in choosing the singular points to represent the model. These were surprising times, in which we let ourselves be carried away not only by the spectacular results but also by the capacity for evolution of this new technology: longer-range devices appeared alongside short-range precision models; they could be controlled by a computer or left to work independently; and they could use external or compact batteries, which made them independent of external power sources. Although it would be true to say we were highly impressed by this new digital tool, it should also be said that it was not all pure enjoyment; we also had our moments of suffering.
As researchers in the field of 3D modelling, we believe it is now time to pause and take stock of the system of automatic processes in which we now find ourselves working with this technology. Perhaps we should think about recovering the power to take decisions, to look back to the time of traditional survey methods and in this light to analyse the tools that technological advances have put into our hands. Again and again, the conclusions of studies carried out with laser scanners state that in order to surpass the present limits we need more powerful computers, faster information processing, more highly automated tasks, the bypassing of operator intervention, and so on. Our question is: do we really mean this, or are we putting these words into the mouths of the computers? Is this not what computers want, or would request if they could speak for themselves? Have computers not already practically taken over the data acquisition and information treatment processes? What role do we play in all of this? We have been reduced to merely obeying the orders given to us by the different programs; we labour at boring tasks and produce strange results that do not seem consistent with the day-to-day work of a scientist.
We consider that now is the time to think again about our real needs and objectives and for each researcher to adapt both processes and results to the context of his/her individual work. Now is the time to escape from the tyranny of 3D technology and to adopt a critical approach to laser scanner systems (Figure 2: data capture, treatment and visualisation). We are, so to speak, engineers, and the new technologies are our raw materials, but we are slowly but surely turning ourselves into mere robots. 3D laser scanner technology has practically taken over the field and by itself directs entire projects, leaving us to carry out its orders from a pre-programmed protocol. We first suggested this idea in the workshops on "Graphic Documentation of the National Heritage: Present and Future" organised in Madrid by the Instituto de Patrimonio Cultural de España in November 2010, in a paper entitled Cartografía en Patrimonio: la métrica en la documentación. ¿Una realidad pendiente? (Farjas et alii, 2010 b). In this paper we compared a survey of the Cloister of the Cathedral of Sigüenza, 3D-modelled by traditional methods by Silvia Peces Rata, with a survey of the Escalada Monastery by María Expósito using laser scanner technology. The former used a Leica TCR 705 total station, which measures distances by means of visible laser beams without the need for a reflecting prism. A total of 1524 points were captured and subsequently edited in AutoCad 2005 to obtain the plans and elevations of the cloister on a scale of 1:200 from the point-cloud graphics file. A 3D model was finally produced, as well as metric documents containing a description of the cloister. The general procedure was similar to that of previous studies using the same equipment (Alonso et alii, 2002). The millions of points of the monastery captured in the laser scanner survey were stored in computer programs but, as far as we are aware, did not make any valuable contribution to the study itself.
Topography consists of the representation of a surface by means of points located in a system of coordinates. Developments in this field have been closely associated with technological advances, mainly in instrumentation. The appearance of computer programs, advances in applied electronics, the invention of photogrammetry, global positioning systems and electromagnetic distance measurement have all played a part in this development process.
Terrestrial laser scanner equipment is one of the results of this evolution. This device carries out sweeps of the survey zone by laser beam to generate a 3D point cloud. The x, y and z coordinates of each point are stored in a reference system together with the reflected beam intensity, and these provide us with a massive automated 3D metric data acquisition system. Data treatment is divided into different stages and includes work planning, locating work stations, the data capture itself, image taking, digital data treatment to eliminate unwanted points, joining the sweeps together, transforming coordinates, a sampling process to determine spatial resolution, the construction of a plane triangle network to define the object's geometry and the creation of the solid model. Finally, the model is textured from different digital images in order to give it a realistic appearance. Descriptions of the foregoing processes can be found in the proceedings of different congresses, such as the International Conference CAA (Computer Applications in Archaeology), VAST (Symposium on Virtual Reality, Archaeology and Cultural Heritage), VSMM (Conference on Virtual Systems and MultiMedia Dedicated to Digital Heritage) and Arqueológica 2.0. Specific examples can be found in the authors' previous references (Farjas, 2007; Farjas and García-Lázaro, 2010).
In this chapter we provide examples of applications from a critical viewpoint in order to give ourselves the opportunity to question our achievements in both technical aspects and results. We have at the present time technologies, equipment and procedures that are used to a greater or lesser degree, but it is by no means clear to what extent they are used by the specialists in the field, or even whether they are of any real assistance. We therefore propose going back to the general perspective of applying equipment, instruments and methods as appropriate. Our aim is that when what we really need is an image, we will feel able to use a simple camera instead of the most advanced laser scanner system, or that when we need to film an object, we can opt for traditional video footage instead of presenting, as a great achievement, badly textured hyperrealistic fly-overs that add nothing to the quality of the model. We should be able to freely choose whatever system of representation best suits our purpose and not feel obliged to follow an automated information processing chain, rejecting intermediate laser scanner process points and products for no reason at all, with the single-minded purpose of obtaining an empty 3D model.
As we have already mentioned, this chapter is divided into three sections. The first deals with an introduction to 3D laser scanner modelling methodology in a typical survey, indicating certain points that could usefully be subjected to analysis. The second describes traditional surveying methods and tries to determine to what extent they can be used to improve laser scanner modelling. Finally, the third section will take a look at future lines of research.

General description
These topographical and map-making techniques have traditionally been linked to the representation of terrain in all kinds of states and situations and provide digital results in two or three dimensions. Data capture systems are in constant evolution; an important benchmark was the incorporation of 3D laser scanner systems. In Spain they appeared around 2003, when they started to be used experimentally in large-scale point cloud 3D modelling in archaeology and the national heritage.
A laser device emits a laser beam to obtain measurements. The word is an acronym of Light Amplification by Stimulated Emission of Radiation. The first laser was built in 1960 by Theodore H. Maiman (1927-2007), a physicist trained at the University of Colorado who had obtained a Master's Degree in Electrical Engineering and a Ph.D. in Physics from Stanford University.
A 3D laser scanner captures the position of different points of an object with reference to its own position and gives each one a set of coordinates. The data captured are coordinates (x, y and z) and luminance (I), or RGB values if photographs are acquired simultaneously.
Terrestrial laser scanners can be classified according to their measuring system or their sweep system. There are three types of measuring systems: time of flight, phase shift and optical triangulation. Sweep systems can be divided into camera, panoramic and hybrid (Farjas et alii, 2010 a).

Time of flight
This type of scanner measures the time delay between the emission and return of a laser beam to define a vector that is completed with data from the orthogonal angles that define the point's position.The beam covers the entire study area and records a measurement for each point, according to the mesh definition selected by the user.This type of equipment is suitable for outside work and can be used to model objects at medium and long distances (up to a kilometre or more) with an accuracy to within one centimetre.
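The underlying calculation can be sketched in a few lines: the range follows from the round-trip travel time of the pulse, and the two orthogonal angles convert that range into instrument coordinates. The pulse time and angles below are hypothetical values, not measurements from any particular device.

```python
# Sketch of the time-of-flight principle (hypothetical values).
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Range from the round-trip travel time of the laser pulse."""
    return C * round_trip_s / 2.0

def polar_to_xyz(r, horizontal_rad, vertical_rad):
    """Convert range and the two scan angles to instrument XYZ."""
    x = r * math.cos(vertical_rad) * math.cos(horizontal_rad)
    y = r * math.cos(vertical_rad) * math.sin(horizontal_rad)
    z = r * math.sin(vertical_rad)
    return x, y, z

# A pulse returning after ~6.67 microseconds corresponds to roughly 1 km.
r = tof_range(6.671e-6)
```

The centimetre-level accuracy quoted above corresponds to timing the pulse to a few tens of picoseconds, which is why the electronics, not the optics, set the limit for this class of scanner.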

Phase shift
In this method, the distance between point and scanner is calculated from the phase shift between the emitted and received waves. A number n, which is the total number of complete wavelengths covered, is added to the measured shift. To determine the exact value of n, various wavelengths are emitted at different frequencies. Both orthogonal angles are also recorded in order to precisely locate the point.
These devices can be used for medium distances, usually less than a hectometre, in both interiors and exteriors and are accurate to within less than one centimetre.
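A minimal numerical sketch of this ambiguity resolution follows, with hypothetical modulation wavelengths of 100 m and 1 m: the coarse wavelength fixes the integer number of cycles n, and the fine wavelength supplies the precise fractional part.

```python
# Sketch of phase-shift ranging with two modulation wavelengths
# (all values hypothetical).
import math

def phase_distance(phase_rad, wavelength):
    """One-way distance fraction encoded by a measured phase shift
    (the round trip covers twice the distance)."""
    return (phase_rad / (2 * math.pi)) * wavelength / 2.0

def resolve_range(coarse_phase, coarse_wl, fine_phase, fine_wl):
    """Combine a coarse unambiguous estimate with a fine measurement."""
    rough = phase_distance(coarse_phase, coarse_wl)      # unambiguous
    fine_fraction = phase_distance(fine_phase, fine_wl)  # ambiguous
    half_wl = fine_wl / 2.0
    n = round((rough - fine_fraction) / half_wl)         # integer cycles
    return n * half_wl + fine_fraction

# Simulate a true distance of 12.34 m, then recover it.
true_d = 12.34
coarse = (2 * true_d % 100.0) / 100.0 * 2 * math.pi
fine = (2 * true_d % 1.0) / 1.0 * 2 * math.pi
d = resolve_range(coarse, 100.0, fine, 1.0)
```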

Optical triangulation
The position of each point of the object is obtained by the principle of laser triangulation. A highly collimated laser beam is emitted while a camera receives the light reflected from the object. As the values of the base of the triangle formed by the light emitter and the camera and both angles are known, the position of each of the points can be obtained.
These devices are used for high-precision short-distance measurements and are accurate to within less than one millimetre.
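The geometry can be illustrated with a small sketch (baseline and angles hypothetical): knowing the baseline b and the two interior angles at its ends fixes the triangle, and the law of sines gives the emitter-to-point distance.

```python
# Sketch of the laser-triangulation geometry (hypothetical values).
import math

def triangulate(b, alpha, beta):
    """Distance from the emitter to the point, given the baseline b
    and the interior angles (radians) at emitter and camera."""
    return b * math.sin(beta) / math.sin(alpha + beta)

# Equilateral check: b = 0.2 m and two 60-degree angles give d = b.
d = triangulate(0.2, math.radians(60), math.radians(60))
```

The short baselines that make these devices compact are also what limits them to short ranges: as the distance grows, the angles change ever more slowly and the millimetre precision is quickly lost.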

Camera
A laser beam is focused onto each of the points of the object by synchronised vertical and horizontal mirrors, so that the sweep creates a 60º scan window in both directions.

Panoramic
In these devices, the laser beam generally moves around the vertical axis, thus carrying out a horizontal movement. A vertical sweep is performed for each horizontal position, with the same movement as that in a total station. A 360º horizontal scan window and 300º vertical scan window can be obtained, limited only by the shadows produced by the shape of the device and its support.

Hybrid
The laser beam makes a horizontal sweep around the vertical axis, as in the panoramic method. It also makes a vertical movement through a mirror, as in the camera method, to record the points. A 360º horizontal and 60º vertical scan area can be obtained.
A laser scanner usually contains a data capture system together with a computerised control program. A camera is a required extra in optical triangulation but is also increasingly used with other systems to obtain radiometric and colour information for the model. GNSS receivers are now being incorporated into laser scanner technology as spatial positioning systems to provide geo-referencing.
When data acquisition is complete, the next stage is the processing of the information obtained. Most laser scanner devices include a data treatment and visualisation computer program designed to deal with the large number of points obtained from each sweep, which can be big enough to swamp traditional CAD systems. A 3D laser scanner project usually follows the workflow design shown in Figure 3, which begins with data acquisition, continues with data processing and finally provides usable 2D or 3D models in the form of point clouds. Close attention should be paid to three fundamental aspects of data acquisition: the number of work stations for the equipment and their locations, the point density registered by the device, and the systems of coordinates that can be used. With regard to the first of these points, enough work stations should be planned to ensure complete digitalisation of the zone and to minimise the number of gaps or zones in shadow. Another consideration is the way in which the different reference systems are to be linked. For stitching sweeps together (each of which is stored in the appropriate work station's instrument reference system), neighbouring sweeps must contain common points. Work station planning should establish whether reference signals or auxiliary elements are to be used to join sweeps, and whether it is necessary to provide these points with coordinates for subsequent referencing of the model. If a camera or panoramic type scanner is to be used, the device's capture window should be taken into account during project design.
The point density requirements depend on the following parameters: the precision required by the project, the distance separating the device from the object, the resolution selected for the work, and the capture time.
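As an illustration, these parameters can be related in a back-of-the-envelope planning sketch. The façade dimensions and the measuring rate below are hypothetical figures chosen for the example, not values from any survey or device.

```python
# Planning sketch relating resolution, distance, point count and
# capture time (all figures hypothetical).
import math

def angular_step(resolution_m, distance_m):
    """Angular increment needed for a given point spacing at a distance."""
    return 2 * math.atan(resolution_m / (2 * distance_m))

def points_for_area(width_m, height_m, resolution_m):
    """Approximate number of points covering a rectangular area."""
    return math.ceil(width_m / resolution_m) * math.ceil(height_m / resolution_m)

def capture_time_s(n_points, points_per_second):
    return n_points / points_per_second

# A 20 m x 15 m facade at 1 cm resolution, at a hypothetical 5000 points/s.
n = points_for_area(20, 15, 0.01)
t = capture_time_s(n, 5000)
```

Even this crude estimate makes one trade-off visible: halving the resolution quadruples both the point count and the capture time, which is exactly why density decisions should be taken in the design phase rather than improvised in the field.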
With respect to reference systems, it is important to remember that laser scanners register point coordinates in their own instrument reference systems, which differ from station to station. It will therefore be necessary to reference all captures to a single global or local reference system. For this purpose, specifically designed targets can be used to link, register or georeference the different systems of coordinates: points given coordinates by topographical measurements, the specific signals provided by the device manufacturer, (flat) reflecting signals, or (flat and curved) target signals that can be automatically recognised by the specific algorithms of the data editing program. If photogrammetric methods are to be used simultaneously with laser scanners (if the device is equipped with its own camera or images are obtained independently), the relationship between the camera's system of coordinates and that of the scanner should also be considered.
After the data has been acquired it must be processed. Most scanners incorporate a data processing and viewing program in an appropriate format. In this respect, progress is being made in the direction of interoperability of devices, and standardised formats are becoming available to make working with 3D point clouds more flexible. In April 2011, the American Society for Testing and Materials (ASTM) approved the E2807 Data Exchange Standard, developed by its E57 committee, which is considered the first specification for the interchange of 3D data. The Association's webpage recently published a message from Gene V. Roe, chairman of the ASTM's E57.04 Committee: "The new ASTM data exchange standard will allow the hardware vendors to export their data in a neutral, binary format that can be imported by the design software," and stresses, "This has the potential to move the entire industry along the technology adoption curve by allowing the mass market access to laser scanned data." The ASTM International E2807 (E57) file interchange format has begun to filter through to commercial software. For example, since June 2011 it has been included in Trimble's RealWorks version 7.0. In this way 3D laser scanner professionals can use their design software of choice, depending on their applications and workflow.
Whatever the software selected, the general treatment of 3D scanner point clouds usually includes the following phases:


Pre-editing of sweeps. If the sweep is too dense it can be subjected to re-sampling or segmentation.


Registering each point cloud in the chosen project reference system, generally either local or global.


Elimination of unnecessary and erroneous points, as for example those obtained from the unexpected appearance of people or vehicles in the survey zone.

3D modelling.

3D segmentation of point clouds.
Most of these processes are carried out interactively, although research is now being carried out on the creation of automatic or semi-automatic algorithms to optimise and simplify them. The opacity of the algorithms and the recent implementation of 3D laser scanner data acquisition technology mean that in certain cases these processes have to be performed manually using CAD tools.

3D Laser scanner monologue: The methodology presents one of its applications and asks some questions about the process
In 2007 I was asked to make a precise 3D model of the façade of the Church of Santa Teresa in Ávila, Spain, for use in a possible restoration project (Prieto, 2008). The study zone considered for this model consisted of the façade of the church itself, the front of the adjacent building and the square in which both buildings were located. For this work I opted for a Trimble GS 200. What was this choice based on? Different technical criteria can be considered, but in practice the most important is price, so that normally a research team will make do with whatever equipment they already have; the question has really been decided beforehand. Different scanners have very different measuring characteristics and are thus designed for different applications, but at the time of purchase few people in my field are sufficiently familiar with the range of scanners to be able to distinguish between them. Their experience of the world of 3D is basically limited to the question of long, middle or short distance. In Ávila, data capture was possible with phase measuring equipment, so we used one of this type. Data were acquired from different positions and we obtained the complete model by joining the sweeps in a transformation process. The model of the façade was then geo-referenced in an official system of coordinates, using laser scanner capture points which had also been given coordinates in the official system by the radiation method with a Leica 705 total station. Geo-referencing models is worth doing, since they can then be related spatially, but you should not forget to indicate the precision obtained in the results, as this parameter could be an important criterion in the final model files. The complete survey process is described below.

Planning
Prior to the data capture process, the entire zone was studied in order to define the location of the stations. The number of stations should be kept to a minimum in order to produce as few errors as possible when sweeps are joined together. We have to take into account the number of stations and joins that can be performed, how precision is transferred from one sweep to another, and how the joining of sweeps from different stations can affect the accuracy of the final model. Not enough attention is yet given to the initial quantification of units of measurement and the control of rotations in this process.
The field equipment consisted of the following:
- Accessories: batteries, cables, generator, etc.
- Laptop with the PointScape 3.1 program.
In the laser scanner method, the common areas between stations must be surveyed in order to define common points in different sweeps and convert reference systems. I still have no precise overlap values, as these have never been experimentally defined or considered in theoretical studies. In photogrammetry these values are indicated in the initial conditions to ensure that the quality of the results is acceptable.
When considering the location of the stations we also tried to keep the hidden zones of the object to a minimum. Some zones would of course be impossible to scan due to their height or the presence of nearby buildings. We finally decided to do the survey from a total of seven stations. This aspect must be given careful consideration in the planning stage, since in spite of the care we took during the field survey we continually found gaps in the takes during the editing process. Complementary methods can be used to analyse these shadows in the design phase, according to their relative importance in the project objectives. Alternative measuring methods can be established to supplement data acquisition, or approximate laser scanner surface models can be generated after editing by algorithms that are not very faithful to the reality they are required to represent. Is a visual inspection of the target zone enough to form the basis of these decisions?
The laser scanner registers the points of the model in the instrument's system of coordinates, which differs from station to station. To unite the sweeps, as we have said before, well-defined points are scanned in common zones. During editing, homologous points are identified in each point cloud in order to convert the coordinates of all the points into a single instrument system. The algorithms that integrate the sweeps into a common reference system are encoded, and operators do not normally know their mathematical formulation. We obtain the results and the fit precision parameters, but nothing else. Are these algorithms the most appropriate for the geometry of our points? Will it be necessary to use some kind of constraint in the fit that considers the location of the stations? In the future we may be able to analyse these questions, but at the moment the answers are hidden from us. In fact, these mathematical formulations and the way they are processed are the key differences between the different instrument manufacturers.
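Although the commercial formulations remain hidden, the kind of least-squares fit involved can be sketched for a simplified case: levelled scans, where the transformation between two stations reduces to a rotation about the vertical axis plus a translation. The coordinates below are invented for illustration, and the closed-form estimator shown is only one possible formulation, not the one any particular vendor uses.

```python
# Least-squares registration sketch for levelled scans:
# one rotation (about Z) and one 3D translation, from homologous points.
import math

def fit_levelled_registration(src, dst):
    """Estimate kappa (rotation about Z) and translation mapping
    src points onto dst points. src, dst: lists of (x, y, z)."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in range(3)]   # centroids
    cd = [sum(p[i] for p in dst) / n for i in range(3)]
    num = den = 0.0
    for (xs, ys, _), (xd, yd, _) in zip(src, dst):
        ax, ay = xs - cs[0], ys - cs[1]
        bx, by = xd - cd[0], yd - cd[1]
        num += ax * by - ay * bx        # cross terms -> sin(kappa)
        den += ax * bx + ay * by        # dot terms   -> cos(kappa)
    kappa = math.atan2(num, den)
    c, s = math.cos(kappa), math.sin(kappa)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    tz = cd[2] - cs[2]
    return kappa, (tx, ty, tz)

def apply(kappa, t, p):
    """Apply the estimated rotation and translation to a point."""
    c, s = math.cos(kappa), math.sin(kappa)
    return (c * p[0] - s * p[1] + t[0],
            s * p[0] + c * p[1] + t[1],
            p[2] + t[2])

# Synthetic check: transform invented points by known parameters,
# then recover those parameters from the correspondences.
k_true, t_true = 0.3, (1.0, 2.0, 3.0)
src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (2.0, 1.0, 1.0)]
dst = [apply(k_true, t_true, p) for p in src]
k_est, t_est = fit_levelled_registration(src, dst)
```

The general six-parameter case (three rotations, three translations) follows the same logic with a full 3D rotation estimate; what the sketch makes visible is that the fit depends entirely on the geometry of the homologous points, which is precisely the question raised above.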

Laser scanner stations
Prior to data acquisition the scanner is placed at a point in the selected zone. In this study we did not wish to locate this point precisely, as we intended to work with free stations. Are there any comments to make on this? The exact definition of the coordinates of the station could change the least squares adjustment of the observations, reducing the number of unknowns or degrees of uncertainty, or allowing new surfaces to be adapted to those already existing. Decisions can be taken here that could have a marked influence on the final precision obtained. At the present time scanners are not able to carry out forced centring and some do not even have plumb lines or other auxiliary systems. Let us leave these questions to future researchers, if they should think them worthy of study.

The data capture process
The device can be controlled by means of a laptop computer with an Internet connection, a control terminal or its own screen. In this study we used a laptop and PointScape 3.1. A window on the screen showed an image of the zone under study and indicated the area that could be scanned by the device from its present position at the chosen resolution. The precise scanning zone was then selected from the image in the window, after which the following parameters were set up:


The first parameter that must be set is the scan resolution, i.e. the distance between the registered points.


The second is the approximate distance to the target. This value is vital if the device is to register the points at the desired resolution. From the average distance entered and the scan resolution, the scanner computes the laser beam's angle of inclination and programs the sweep.


Finally, the number of times each point is measured must be entered. As in the case of measuring distances with a total station, each point is measured a certain number of times and the value of the observation distance is the average of the measurements performed.
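This third parameter can be sketched numerically: averaging repeated shots yields not only the adopted range but also a dispersion estimate that could usefully be carried into the final documentation. The range values below are hypothetical.

```python
# Averaging of repeated range measurements for one point
# (values hypothetical).
import math

def average_range(measurements):
    """Mean range and sample standard deviation of repeated shots."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((m - mean) ** 2 for m in measurements) / (n - 1)
    return mean, math.sqrt(var)

shots = [25.003, 24.998, 25.001, 25.000, 24.998]  # metres
mean, sd = average_range(shots)
```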
In our project we varied measurement resolution due to the irregular outline of the church we were modelling. These values were established bearing in mind that the principal objective was to map the façade of the Santa Teresa Church and that the other buildings were included merely to give the church spatial continuity. The façade was scanned at a resolution of approximately 1 cm. Details of the façade, such as statues, shields and reliefs, were taken at a resolution of 0.5 cm; the other buildings at between 2 and 3 cm, with details between 0.5 and 1 cm; and the ground was given a resolution of 10 cm. How was all this information indicated in the final model? How can the design precision affect decisions on the resolution of data acquisition? Is the target faithfully reproduced? Scanning is carried out automatically and all the points in the corresponding interval are captured. Surfaces are discretised at a certain interval that does not convey much information about the details, which could imply a smoothing out of the real object. It would appear that we need to design scans according to the surfaces and level of detail required, so as to eliminate improvisation in the field and ensure not only the resolution but also the acquisition of the level of metric detail required by the project.
Table 1 shows the sweeps carried out by the laser scanner, with the number of sweeps performed from each station, the number of points included in each one, and the total project points, which came to more than 15 million. After data capture and during the construction of the solid body, a triangular mesh is obtained to define the surface geometry of the objects represented, using the Mesh Creation Tool to generate and edit the triangular mesh. Although a projection can be used in editing the mesh, in our case we decided against it. The editing options include eliminating vertices, triangles and rough edges and smoothing out the mesh itself.
To speed up the process, the triangular mesh is initially generated separately for individual part-elements; however, problems may then arise when joining meshes together (blank spaces and overlaps) and when placing photographs on the surface. We decided to try generating the triangular mesh for the entire surface in a single process, and even though it took an hour and a half, better results were obtained.

Data treatment
For this process we used Trimble's RealWorks Survey 6.0 (Zazo et alii, 2011). The treatment was divided into the following steps:
1. Data cleaning
2. Joining sweeps
3. Geo-referencing
4. Data filtering
5. Generation and clean-up of the model
6. Texturing the model

Data cleaning
This step consists of removing foreign noise from the model (Figure 4), that is, eliminating all points that do not strictly form part of the target object. These include people, vehicles or animals that somehow intrude on the scene during data acquisition. Can other types of object come between the scanning device and the target? How can we control or quantify the accuracy of the measurements with respect to those of the actual object? What type of inaccuracies can be obtained in the details? Do the automatic processes introduce into the model a variable that can only be detected at certain points? All these factors must be given a great deal of thought.
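A crude stand-in for this interactive cleaning can be sketched as a bounding-box crop followed by the removal of isolated points; the thresholds and coordinates below are hypothetical, and real editing programs apply far more elaborate (and, as noted, opaque) criteria.

```python
# Minimal noise-removal sketch: crop to the volume of interest,
# then drop points with no neighbour within a radius
# (thresholds and coordinates hypothetical).

def crop(points, lo, hi):
    """Keep points inside the axis-aligned box [lo, hi]."""
    return [p for p in points
            if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

def drop_isolated(points, radius):
    """Drop points with no other point within `radius` (O(n^2) demo)."""
    r2 = radius ** 2
    def has_neighbour(p):
        return any(p is not q and
                   sum((p[i] - q[i]) ** 2 for i in range(3)) <= r2
                   for q in points)
    return [p for p in points if has_neighbour(p)]

cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0),
         (5.0, 5.0, 5.0),          # e.g. a passer-by far from the facade
         (0.0, 0.1, 0.05)]
cleaned = drop_isolated(crop(cloud, (-1, -1, -1), (2, 2, 2)), 0.3)
```

Even this toy version makes the question above concrete: any threshold that removes a passer-by will also remove legitimate detail points that happen to be isolated, and the model never records which was which.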

Joining sweeps
The laser scanner uses a different reference system at each station. In order to give all the point clouds a common system of coordinates, we can choose between two options:
- Use aiming points such as cards or spheres arranged around the scanning zone.
- Use common scanning zones and locate common points between different stations.
Both options are solved in the same way, by means of a 3D transformation with six unknowns: three rotations and three translations. We chose the second option, since the model involved a building and common points could easily be identified in the different sweeps to an accuracy of around 1 cm. For this we used the Cloud-Based Registration Tool, which allowed three homologous points to be selected and, after identification, showed the standard deviation of the fit on the computer screen. With the Refine option, a further fitting together of the point clouds can be performed, showing the standard deviation in each case together with an image of the model (Figure 5). With this program the process can be repeated as many times as required. It would be interesting to have more information on this process, including maps of the differences obtained from the homologous points used to join the sweeps, to learn how different joining methods can deform the geometry of the rest of the model. At the moment, the operator can do little more.
Table 2 shows the standard deviation of the joining together of the sweeps from the different stations.

Table 2. Standard deviation of the joining together of scans from different stations (columns: Stations; Standard Deviation, mm).

Georeferencing
In order to convert the instrument coordinate system to the official Spanish cartographic system at that time (ED-50), points had to be referenced in both systems of coordinates. Seventy-five points within the working zone were measured in the official system by a total station. From this total, eleven were selected to calculate the transformation parameters. The dynamics of the georeferencing process were as follows:
- Points that had coordinates in the ED-50 system were identified in the point cloud.
- The coordinates of these points were then manually entered in the ED-50 system.
- When three points had been registered, the standard deviation of the adjustment was shown on the screen. As new points were entered, the calculation of this parameter was updated.
- Any existing points that caused excessive discrepancies in relation to previous adjustments could be eliminated, so that they were left out of the final transformation.
The geometric distribution of the points used, with coordinates in both systems, must be borne in mind. It is not advisable to have all these points on the same plane, and they should be evenly distributed throughout the modelled zone to avoid extrapolations. The location of the points we used is shown below (Figure 6). The standard deviation of the georeferencing adjustment was 19 mm.
Fig. 6. Distribution of the points used in the georeferencing transformation.
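The discrepancy check described above can be sketched numerically: given the control points transformed into the target system and their official coordinates, the per-point residuals, the standard deviation of the adjustment, and the candidates for elimination all fall out of a few lines. The coordinates below are invented for illustration; they are not the control points of the Ávila survey.

```python
# Residual analysis sketch for the georeferencing adjustment
# (coordinates hypothetical).
import math

def residuals(transformed, official):
    """3D residual length for each control point pair."""
    return [math.dist(t, o) for t, o in zip(transformed, official)]

def adjustment_sd(res):
    """RMS of the residuals, reported as the adjustment's sigma."""
    return math.sqrt(sum(r * r for r in res) / len(res))

def flag_outliers(res, tolerance):
    """Indices of control points exceeding the tolerance."""
    return [i for i, r in enumerate(res) if r > tolerance]

transformed = [(10.000, 20.000, 5.000), (30.010, 40.000, 6.000),
               (50.000, 60.000, 7.120)]
official    = [(10.005, 20.000, 5.000), (30.000, 40.005, 6.000),
               (50.000, 60.000, 7.000)]
res = residuals(transformed, official)
sigma = adjustment_sd(res)
bad = flag_outliers(res, 0.02)   # 2 cm tolerance: third point flagged
```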

Data filtering
Even with the most powerful computers it is usually considered necessary to reduce the size of the file. In our project there came a time when we found ourselves with a file of around five GB, so that it took between five and ten minutes simply to save the changes. Fearing that the computer would not be able to deal with all this information, we decided to drastically reduce the number of points stored, from over 15 million to around 5 million, by sampling the point cloud with the Spatial Sampling filter and entering the distance required between the different points. For the sampling, we separated out the planes in which a sampling of 100 mm could be used, e.g. the outside of buildings. In detail zones such as shields, statues and reliefs, the sampling was carried out at a distance of 5 mm (Figure 7). When we acquired the data with the laser scanner we captured a real surface on a mesh at a pre-defined interval. This surface, which could be considered a first-order approximation to the object, has been reduced to only about a third of the original. What part of its identity has disappeared? How much detail has been lost? If we aim for an Xth-order approximation to reality, should we not lay down certain criteria during the design phase?
Should we not define experimental control methods for data processing? We should devote a lot of time to deciding what we really wish to represent. For the present, we show the image of the façade obtained in our project after processing the data (Figure 8).
As we have said, after filtering, the initial 15,066,854 points became 4,185,284. This volume made it possible to continue data processing on the computers available, which were of the latest generation at the time. What had we left out? How did the model obtained relate to the real object? What had remained hidden? What would happen if we processed the same data with the programs available today? Would there be any differences in the final models?
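A spatial-sampling filter of the kind described, driven by a single distance parameter, can be approximated with a simple voxel grid that keeps one point per cell. This is a sketch of the principle only (the commercial filter's exact algorithm is not documented here), and it makes visible how indiscriminate the reduction is:

```python
import numpy as np

def spatial_sample(points, min_dist):
    """Keep roughly one point per cube of side min_dist -- a crude
    voxel-grid stand-in for a distance-based spatial-sampling filter.
    Every other point in the cube is discarded, regardless of what
    detail it carried."""
    points = np.asarray(points, float)
    keys = np.floor(points / min_dist).astype(np.int64)   # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)   # first point per voxel
    return points[np.sort(idx)]
```

Applied with a 100 mm distance to a façade scan, a filter like this keeps file sizes manageable but, as argued above, removes detail with no regard to the project objective.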

Construction and cleaning of the model
This phase consists of constructing a triangular mesh to define the geometry of the object.
After deconstructing what has been constructed (i.e. breaking the monument up into a multitude of points with the laser scanner) we have to think about its spatial reconstruction. This is a long process, but to what end? Is it justified for a possible restoration? Should we apply any additional objectives?
We used the Mesh Creation Tool to create the solid body, after which we looked for faults in the surface, most of which we could fix by eliminating triangles. Others had to be corrected by reconstructing the solid body in stages after cleaning up the point cloud (Figure 9). The model was divided into three zones: ground, church and museum, and each of these was split into objects. It would have been impossible to construct the solid body in a single stage due to the large volume of information. When we created the solid body we found certain problems with hidden zones. As we mentioned before, some zones simply could not be scanned because they were out of the line of sight. Some of these cases were solved by generating a solid body from the existing points (even if they did not comply with the regularity of the spatial resolution) so that the texturing process could be carried out (if there are gaps, the model cannot be textured). This criterion was adopted for purely aesthetic reasons. Today we could question these reasons: like the points that brought us closer to modelling the real object (in our case, the façade), they lie hidden beneath a regular, smooth and uniform model. After generating the solid bodies we had surfaces that represented the Church of Santa Teresa and its surroundings (Figure 10). We had created a scenario. Would it be of any interest to lay down an experimental solid model control system to suit the sampling conditions? How could difference maps contribute at this point to validating the representation of the real object? Reality has been covered with a curtain, and we still have to discover to what extent it can stand up to metric analysis. We may say there is precision in the point cloud, in the joining together of the models and in the georeferencing. But where is the real object? Where did it get lost? We have a representation of the tree, but can we discover the size of its leaves from the digital model we have created? And can we do the same for the details of the façade?
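The triangulation-and-cleaning step can be sketched for the 2.5D case (a surface such as a façade projected onto a plane), using SciPy's Delaunay routine rather than the Mesh Creation Tool, with the manual "eliminate bad triangles" pass replaced by a simple maximum-edge-length criterion; all of this is an illustration, not the tool's actual behaviour:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_clean(points, max_edge):
    """Build a 2.5D triangular mesh over the (x, y) coordinates and
    drop triangles with any edge longer than max_edge -- a crude
    automated stand-in for manually deleting faulty triangles."""
    points = np.asarray(points, float)
    tri = Delaunay(points[:, :2])
    keep = []
    for simplex in tri.simplices:
        p = points[simplex]
        edges = [np.linalg.norm(p[i] - p[j])
                 for i, j in ((0, 1), (1, 2), (2, 0))]
        if max(edges) <= max_edge:
            keep.append(simplex)
    return np.array(keep)
```

Note that a threshold like `max_edge` makes an aesthetic decision explicit and repeatable, which is precisely the kind of experimental control the text asks for.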

Texturing the model
To give the model a more realistic appearance, textures were applied from digital images. As the camera incorporated in the laser scanner was only of 1 megapixel, we used a Konica Minolta DiMAGE E50 with 5.2 megapixels and a maximum resolution of 2560 x 1920 pixels. The resulting images were digitally treated with Adobe Photoshop CS2 as follows:
- Elimination of objects between the façade and the camera, such as lamp-posts, benches, waste paper baskets, etc.
- Treatment of brightness and contrast to give all the images similar lighting.
The next step was to apply these images to the model, which involved finding corresponding points in the model and the photograph (Figure 11). The greater the number of points selected, the better the image fitted the model. No criteria have yet been established for selecting homologous points, for the minimum number needed to achieve acceptable quality or for their distribution. It should not be forgotten that the images are not calibrated and are really a kind of optical effect. The process is usually automatic, and we should be highly sceptical of these graphical methods. This was one of the most complicated stages of the project. The photos to be used in texturing had to be selected and given the appropriate treatment to make tonality variations invisible. The image treatment results are shown in Figure 12.
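Fitting an uncalibrated image to a planar part of the model from homologous points can be illustrated with a direct linear transform (DLT) homography. Commercial texturing tools use fuller camera models, so this is only a sketch of why more (and better distributed) point pairs improve the fit:

```python
import numpy as np

def fit_homography(model_pts, image_pts):
    """Direct Linear Transform: homography H mapping planar model
    coordinates (X, Y) to image pixels (u, v), from >= 4 homologous
    point pairs, via the SVD null space."""
    A = []
    for (X, Y), (u, v) in zip(model_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)      # singular vector of smallest singular value
    return H / H[2, 2]

def project(H, X, Y):
    """Apply the homography to a model point."""
    p = H @ np.array([X, Y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With more than four pairs the system is over-determined and the SVD gives a least-squares solution, which is why adding well-spread homologous points tightens the fit of the texture.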

Results and conclusions
After all the data had been processed, a model of the Santa Teresa Church Square with the required precision was obtained, in compliance with the general project objectives (Figure 13). Let us leave the discussion of these points for now and continue with our attempt to define a future laser scanner project with a design phase that asks the questions Why? and What for? Also to be established are the objectives by which the project is guided, a decision as to when each laser scanner product is required, for what purpose and with what characteristics, plus consideration of the definition of images, resolutions, pixels, model precision required, etc.
In the following section we will look back in history at the way in which modelling and survey projects were carried out in the past. For these, a process did exist, but the important thing was that people were given the training to enable them to take decisions, in the knowledge that every decision involved a certain loss, even though it might open up new possibilities. You could not have everything, but at least you were aware of the data that had been acquired and you were thus equipped to take the appropriate actions.
Nothing was lost in the intermediate processes and the iconography was human.

Dialogue with traditional surveys:
The laser scanner method listens to the traditional method in its search for answers.
I was invited to take part in the work on the Casa de Taracena at the Clunia archaeological site in Burgos, Spain (Juez and Martín, 2006). My contribution would be to generate a series of documents to geometrically describe this ancient Roman settlement. La Casa de Taracena (Figure 14) is a large rectangular Roman villa that took up a large part of the settlement and had few open patios, due to the cold winters experienced in the region. In this project I used traditional topographical methods assisted by GPS receivers. In the past, a survey of the site would have been a fundamental step, as would the design of a basic network. The existing official 1/25,000 scale maps published by the National Spanish Geographic Institute were used to study the site topography. The definitive signal locations were later selected in the field, the vertices were identified by pre-fabricated markers and note was taken of their locations.
The system of GPS observations was then designed, including reference observations from the National Geodesic Network. When the observations of the basic network were complete, the radiation was carried out to gather information on the details of the survey zone. These details were so diverse that a Code Table had to be compiled. The codes were assigned to points as radiation progressed and provided a relationship between the model and the actual site in order to identify its different features. Point radiation was carried out entirely by GPS in real time, with no problems due to screening. With the laser scanner system it is also possible to lose signals and acquire false data, for example due to rain or when representing corners. How can these situations be overcome?
All observations were then processed and coordinates were obtained for all the points making up the site. DXF files of points were generated that enabled us later to contour and shape all the details in the cartography. The design of the site map was performed with digital cartography methods in AutoCAD 2004. The site cartography consisted of a series of personalized sheets with symbols for points, lines and surfaces. Relief was shown by contour lines with 0.5 m equidistance for the 1/500 scale maps and 0.2 m equidistance for the 1/200, all of which were obtained by MDT triangulation. In order to obtain reliable contours, break lines and inclusion lines were entered, as well as all detail points. The general process is described below.

Data capture
As we have seen, data capture started with the design of a basic network. What are the conditions for the design of laser scanner stations? Perhaps specific requisites should be laid down, or a set of conditions that seek to achieve something more than general coverage of the object. Sweep overlaps could be defined (from geometric or other criteria) that make it possible to carry out partial internal control tests. For the survey of the Casa de Taracena we located stations in the places with the best visibility and no obstacles to GPS observations. These stations were placed on a map of the site at a scale of 1/25,000 with 5 m equidistance.
We then made a survey of the site to ensure that the chosen stations were the best available for placing the receivers. How is a survey performed with a scanner? How is coverage analysed? Can it be quantified? If we follow the traditional method, this can be done according to geometries, intersections, visibility and homogeneity, among others.
For the observations we used GPS300 receivers with dual satellite reception frequency and SR9500 sensors. We bore in mind the following criteria when selecting network points:
- The point network had to completely cover the work zone.
- Points had to be evenly distributed throughout the site, with the highest concentration in the survey zone.
- The points had to be visible from other points, or at least from the neighbouring points, for orientation in case of the possible future use of a total station.
- Good inter-point geometry should be achieved.
A basic network was designed in accordance with the above criteria that covered almost the complete site and took full advantage of its topography. It was composed of 7 vertices, with an additional one in the geometrical centre. Laser scanners make it possible to design special controls that ensure a certain degree of redundancy by capturing additional point clouds that can be used for model control. The network inter-vertex distance was between 100 and 200 m. Have you made sure that a similar distance range is used for the entire laser scanner survey? Have you analysed any discrepancies in model homogeneity when performing scans limited only by the device's data capture capacity?
To calculate the network we used three receivers, one of which was fixed and the others mobile. Only one scanning device is used in laser scanner projects. The quality of the results therefore depends on the status of this device, and no experimental control or verification is carried out. Calibration is normally performed once a year. For all topographical equipment, by contrast, a process control system is defined for the observation measurements after the survey has been completed. Do we have to define a laser scanner verification procedure? For example, direct field observations could be taken by another method, prior to carrying out the calculations, to validate the field data. Is there a control program for the device's systems? Topographical observation methods are designed to be used with a working verification system (direct circle/inverse circle, Bessel's Rule, Moinot's Method, double calculation of baselines from different antennas, etc.).
The basic network of the Casa Taracena was constructed from a static, simultaneous, long-duration observation (approximately one hour) in order to define the vertex coordinates with high precision. Are the coordinates of laser scanner points guaranteed to be accurate? Are all captures the product of a non-verified radiation?
The equipment set-up phase began with a project configuration that specified the way in which measurements would be carried out. The following parameters were defined:
- Type of operation: Relative Static.
- Satellite tracking parameter: 15° elevation mask. This is the minimum elevation at which a satellite is considered when calculating GDOP (the parameter that estimates the goodness of the observations according to satellite-receiver geometry).
The laser scanner method also defines initial parameters, such as resolution, scan distance, measurement repetitions, etc., but there are no criteria expressly linked to modelling. It all depends on the operator's skill and experience. Should we establish an elevation mask for laser scanners to avoid angles of incidence greater or less than 20 gon?
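An incidence-angle mask of the kind suggested could be computed directly from the point cloud, given the scanner position and estimated surface normals. This is a hypothetical sketch (the function and its threshold are ours, not any vendor's), analogous to the GPS elevation mask:

```python
import numpy as np

def incidence_mask(points, normals, scanner_pos, max_angle_deg):
    """True for points whose laser beam meets the surface within
    max_angle_deg of the local normal; grazing hits, which tend to
    be metrically unreliable, are flagged False."""
    points = np.asarray(points, float)
    beams = points - np.asarray(scanner_pos, float)
    beams /= np.linalg.norm(beams, axis=1, keepdims=True)   # unit beam directions
    n = np.asarray(normals, float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)           # unit normals
    cos_i = np.abs((beams * n).sum(axis=1))
    angles = np.degrees(np.arccos(np.clip(cos_i, -1.0, 1.0)))
    return angles <= max_angle_deg
```

A filter like this would make the operator's implicit judgement about grazing geometry an explicit, documented parameter of the survey design.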

Data storage features
The three receivers were set up in the relative static method. A copy of all the parameters was made for inclusion in the Clunia Red Mission, in which we participate.
The points that defined the 2D and 3D cartography were then obtained by the GPS radiation method in real time from WGS84 coordinates. The baselines were calculated to an accuracy of 10 mm ± 1 ppm. The points representing the terrain morphology were then converted to the ED50 reference system and the 3D model was created.
The coordinates are obtained directly by laser scanners, but are we really aware of what is going on inside the devices? Which mathematical formulae are processed? What is the order of redundancy? Is there any residual treatment or error detection system? Are any visual images eliminated during the least squares adjustments? Laser scanners are equipped with associated software in which we blindly trust. If we try to process data with different software we may find that the information is not complete and so the process cannot be carried out. For example, having information on x, y, z, I, we have sometimes found that for texturing we need the normals from the triangulated model, which we had failed to import. It would be a good idea to identify the data necessary for each of the processes and import these from the corresponding software. In other words, we should identify the information transmitted with each file extension and each data format.

Cartographic design
When considering present-day cartography, we should be aware that maps now consist of 3D documents that metrically reproduce a site or an artefact. In traditional mapping, symbols were classified according to the nature of the objects (points, lines and surfaces), roughly corresponding to those defined in the sketches made in the field during the survey.
The data capture and processing stages for the geometric definition of the Casa Taracena produced a point cloud distributed in three-dimensional space, which was used as the base material for making maps and generating the digital model of the terrain (Figure 15). Maps were produced from the point cloud files. Other indispensable elements were the field sketches (Figure 16), which not only represented the survey zone but also indicated the correspondence between specific elements of the terrain and buildings and the points captured in the field. Laser scanner data processing starts with the point clouds. This does not consist of a single visual image but of a set of images, each of which could be considered an analogue beam containing all the points of which it is formed. Must each of the beams be characterized in some way? What about maximum and minimum angles and distances, or the surface covering the target? With GPS, the parameter that characterizes the geometry of the observation is known as Geometric Dilution of Precision (GDOP). Would it be any advantage to have something similar in laser scanners? Not all the beams emitted by a device are identical with respect to the data captured in the field. Some will always be more important for creating the model (normal versus convergent, near versus far, angle of incidence on the surface, signal repeatability, the size of the laser beam when it reaches the target, etc.). If, for a given device, these questions (after analysis) ever come to be trivial, we will then have to train ourselves to join point clouds from other devices (e.g. beams acquired from time-of-flight equipment at a distance of 1 km, or phase-measuring equipment at a distance of 40 m).
In the traditional method, each point has a number (in present-day registering devices it also has a code) and the structural lines of the model are edited. In laser scanners, the point clouds are cold; the only values are illuminance or RGB values. Could we consider some type of radiometric treatment or selection that applies image treatment technologies or algorithms, pattern recognition, or others pertaining to photogrammetry or remote sensing?
The software used in modelling the Casa Taracena was AutoCAD 2004 for cartographic editing (Figure 17), MDT for creating the 3D model and Protopo for altimetric contouring. With the help of identification points and field sketches, the traditional method made it possible to reconstruct reality in accordance with a certain iconography, represented by the symbols generally used for the corresponding scale. What happens with laser scanners? Can illuminance or RGB values be used as internal codes for each point? Could an active parallel system be included containing a new parameter?
In this phase, cartographic editing began with assigning symbols, which consisted of obtaining a simplified and/or exaggerated representation, at the appropriate scale, of the elements that could not be represented at the scale used but were important enough to be included. These symbols aided the reading of the plans by being as clear and intuitive as possible. How could this concept be incorporated into 3D laser scanner models? Do we require reality to be the result of being captured at a certain resolution and point density?
The symbols used were of three types (Figure 18), referring to points, lines and surfaces. Point symbols were used to indicate elements considered to have position but no extension and included complementary (or filling-in) points and different cells (e.g. trees, waste baskets and reference network points). Linear symbols were one-dimensional and were used for main contour and depression contour lines and also for the handrail. Surface symbols were used for small elements that could be represented by lines on the surface of the map and were applied in zones with lines or points sharing a common property. The following were considered to be surface elements and were given two-dimensional representation at this scale: walls or structures defined by contour lines and textures, raised structures and mosaics, streets, roads, benches and steps. When designing the symbols we considered the different cartographic variables of 1/200 and 1/500 scale urban maps, such as colour and dimensions, in order to make them as conventional as possible.
After the structural details had been entered in the model, we dealt with contouring the topography, taking into account the results of the previous phases and applying modifications as required. Laser scanners carry out this process with cold information. How can errors be detected in the capture or in the information itself? Why are certain points and triangles eliminated? Are they removed to improve surface uniformity? Are there no parallel field notes that refer to specific areas of the model? What should our limits be? Why are we using laser scanner modelling?

Digital terrain model
We created a digital terrain model after editing the maps. This file allows contour lines to be drawn and generates a 3D model to which shading can be added to facilitate terrain interpretation, and hypsometric tints to indicate height.
We used the three coordinates of all the points obtained by GPS as basic project data and could choose the points that best represented the terrain: break lines, dividing lines and depressions. The quality of these lines was fundamental to obtaining an acceptable mathematical terrain model with an even distribution of points, increasing point density where necessary. The software used was MDT. A triangular network was generated from the original irregular point cloud (Figure 19). At this point there only remained the labelling, which was done in accordance with our field notes. We used Arial font for names and contour lines. We paid special attention to the contour lines in the area of the Casa Taracena in order to provide the greatest possible amount of height information in this zone.
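Contour lines are extracted from a triangulated irregular network by linear interpolation along triangle edges. The following is a minimal sketch of the principle only (MDT's actual implementation, and its handling of degenerate cases, will differ):

```python
def edge_crossing(p1, p2, level):
    """If the contour level crosses segment p1-p2, return the linearly
    interpolated (x, y) crossing point, else None."""
    z1, z2 = p1[2], p2[2]
    if z1 == z2 or (z1 - level) * (z2 - level) > 0:
        return None
    t = (level - z1) / (z2 - z1)
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))

def contour_segments(points, triangles, level):
    """For each TIN triangle, collect the segment where the plane
    z = level cuts it (zero or one segment per triangle)."""
    segs = []
    for tri in triangles:
        p = [points[i] for i in tri]
        xs = [c for a, b in ((0, 1), (1, 2), (2, 0))
              if (c := edge_crossing(p[a], p[b], level)) is not None]
        if len(xs) == 2:
            segs.append(xs)
    return segs
```

The quality of the resulting contours depends directly on where the break lines and detail points sit in the triangulation, which is why their selection was fundamental.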
The maps were then divided into various sheets for easier handling. Laser scanner work is often limited by the available capacity of the computers used by the research team. If file size is reduced, this is done either in the form of a pyramid (details are lost at a certain scale, and thus also the capacity to analyse them) or by compression, which reduces the quality of the information. We could extend the traditional idea of map sheets to construct a more versatile type of digital file for the final products of modelling processes.
The design of the grid, framework and legend was dictated by the size of the sheet. In this case we opted for the A1 sheet size, with two sheets for the 1/500 scale (Figure 20) and one for the 1/200 scale. Before approving the final product, the maps were edited once more in search of any remaining errors. When working with laser scanners, can we incorporate metric control protocols? Studies are at present being carried out on the repeatability of surface representation and modelling according to the perspective of the scans in relation to the object. In our own work, we try to maintain control of the model precision and verify results by obtaining points by other methods before we are satisfied with the final product.

Conclusions
The question that gave rise to this work sought to establish the real objectives of 3D modelling as to What? Why? How? and How much?, and how these criteria should never be lost sight of in projects involving laser scanner data acquisition and processing. The enormous quantity of data acquired and the excessive amount of time required for its processing have led to a situation in which a large part of our research effort now goes into finding new algorithms and improving automatic processes, while ignoring the risk of the arbitrary loss of field data and the fact that the final representation, in terms of quality and quantity, may not be all that it should be.
Faced with the challenge of putting human intervention back in control of the representation of monuments, artefacts and spaces, and daring to question the power of machines, we wish to propose the Theory of the Hidden Reality. We are technicians and engineers and we handle points (hundreds with traditional surveying methods and millions with the laser scanner). Following the scientific method of traditional topographic data treatment, our aim is to open a line of analysis that leaves to one side the automated protocols of laser scanner systems and gives way to a new approach: point clouds that give a partial view of the reality we want to represent, or at least that make us aware of how it is being represented and of its limitations. For too long we have considered laser scanner systems as a new technique that can convert traditional topography into an automatic process, closely related to photogrammetry, producing analogue representations of the surfaces that make up the object we wish to reproduce. At the present time there are two issues we want the reader to think about: firstly, we must question the reality of the object to be represented, and question also the capacity of laser scanners to faithfully capture this reality, an underlying hidden reality, rather like a geoid of the surface of the Earth. Laser scanner technology tells us the role we have to play: collecting data, joining sweeps, cleaning up point clouds, triangulation, creating a rigid body, obtaining a realistic image (Figure 21). Specialists in this technology can take part in all these stages, considered as windows open to other products or models of representation. There is no need to follow the entire process; it is enough to analyse to what extent each stage can help us to obtain valid results.
We propose a search for new analytic methods and procedures with the data obtained, new documents for scientific interpretation that create new knowledge from the point clouds acquired in the field.
In traditional surveys, points were captured, sketches were drawn, coordinates were calculated and files of drawings were then edited. Later, digital models allowed triangulated meshes and more or less hyper-realistic 3D or flat 2D textured models to be obtained (Figure 22). However, there is no reason why we should forget one of these techniques when we are using the other. Anna Maria Marras, in her doctoral thesis entitled Topographia e topothèsia: dialogo diacronico su uno rurale nella campagna dell'alto Tell tunisino tra geografia e archeologia for the Università degli Studi di Siena, introduces the term topothesia in contrast to topography. The latter is involved with the detailed description of a place; the former is the description of something fictional or imaginary. We may believe we can achieve absolute reality, but perhaps laser scanner technology is only giving us a topothesia of reality.

Fig. 3. Typical work flow of a laser scanner modelling project.

Fig. 4. Example of eliminating noise from a model.

Fig. 7. General scan of the façade before and after applying a 100 mm spatial sampling. In spatial sampling, points are discarded on the basis of a single variable, Distance to the Nearest Point, so that the metric detail of the model loses clarity. A massive and indiscriminate amount of information is thus lost, which may well have an effect on the objective of the project. So then, what is actually being represented? Why were so many points registered? Does the need for spatial sampling indicate that parameters were incorrectly entered prior to data acquisition? Can the application be used simply to reduce the size of files?

Fig. 10. Solid model of the door of the Church of the Convent of Santa Teresa.

Fig. 11. Application of an image to the model by identifying homologous points.

Fig. 12. Final image of the façade obtained from three photographs.

Fig. 13. Model of the Santa Teresa Church Square with and without texturing.


The cartographic products obtained were:
- A topographical map at a scale of 1/500 of Area 3 of the site, covering an area of around 15 hectares.
- A topographical 1/200 scale map of the centre of the settlement known as the Casa Taracena.

Fig. 15. Point cloud of Area 3 with details of the zone known as Casa Taracena.

Fig. 16. A sketch of part of the Casa Taracena.

Fig. 17. Different stages of the wall editing process in the Casa Taracena.

Fig. 18. Table of symbols used in the project.

Fig. 21. Process products and stages of a laser scanner modelling project.

Fig. 22. Virtual representation of the Mleiha archaeological site in the Sharjah Emirate on a scale of 1/500 obtained from GPS receivers.

Table 1. Points registered by station and total points.

As specific proposals in laser scanner methodology we would like to put forward the following lines of research and critical points to be taken into consideration:
- Equipment check: introduce a systems control to determine to what extent the device's systems affect all aspects of the point cloud.
- Control of variables that affect results: e.g. topographic phase-measuring equipment incorporates a system of corrections for temperature and pressure that is not included in laser scanners of this type.
- Identification and control of variables that could affect the measurement process: dust in suspension, relative humidity, etc.
- Control of the modelling target zone: surface materials of the object (up to now this problem has not been quantified), identification of structure lines, and points or surfaces of interest that will not be captured in sufficient detail for work requirements (e.g. corners or edges of windows).
- Combinations with other technologies that complement the capture in accordance with the project objectives.
- Analysis of the influence of the angle of incidence on laser scanner data capture.
- Establishing limits in point cloud captures in accordance with geometries.
- Parameterised identification of the characteristics of the capture or point cloud: angles, apertures, distances, etc.
- Analysis of overlap indices between sweeps from different stations and a study of the possibility of performing additional complementary sweeps to improve fits and coverage (as in convergent sweeps for a photogrammetric model).
- Control of data processing: control of algorithms, redundancy analysis, control of rotations or translations in fitting sweeps, control of results in cleaning point clouds, etc. Results will vary according to the software used, and "distortion" of the model will depend on the cleaning processes used and the degree to which points are removed.
- Control of texturing processes and their metric quality. Analysis of point distribution in the model.
- Study of the final product obtained from the entire process or from intermediate stages, according to the documentation project requirements.
- Identification of the parameters that characterise a laser scanner project: in the survey (number of stations, resolution, distance, etc.), in processing (programs used and parameters introduced in the different options) and in the final products (accuracy, image resolution, etc.).

Our aim in the two projects discussed in this chapter was to obtain a complete, detailed geometry of the object. Both are examples of the application of metric documentation. We would like to leave to the reader the question of what products or examples are best suited to his/her work. Not only should the model requirements be considered but also one's personal situation, including familiarity with different technologies, the budget available for the project and the training received in the processing of georeferenced digital data. New documentation systems are appearing that make digital reality possible, but what we do not know is just how far they take us away from what we really need, or how well they can represent this reality.