Open access peer-reviewed chapter

Geometric Accuracy of Digital Twins for Structural Health Monitoring

Written By

Ruodan Lu, Chris Rausch, Marzia Bolpagni, Ioannis Brilakis and Carl T. Haas

Submitted: 22 October 2019 Reviewed: 08 May 2020 Published: 10 June 2020

DOI: 10.5772/intechopen.92775

From the Edited Volume

Structural Integrity and Failure

Edited by Resat Oyguc and Faham Tahmasebinia



We present an exploratory analysis of the geometric accuracy of digital twins generated for existing infrastructure using point clouds. The Level of Geometric Accuracy is a vital specification for measuring the quality of the resulting twins. However, it lacks a clear definition for twins generated in the operation and maintenance stage, especially for structural health monitoring purposes. We critically review existing industry applications and twinning methods. To highlight the technical challenges of creating high-fidelity digital replicas, we present a case study of twinning a bridge using real-world point clouds. We do not provide conclusive methods or results but envisage potential twinning strategies to achieve the desired geometric accuracy. This chapter aims to inform the future development of a geometric-accuracy-based evaluation system for use in twinning and updating processes. Since a major barrier to a fully automated twinning workflow is the lack of a rigorous interpretation of ‘geometric accuracy’ outside design environments, it is imperative to develop comprehensive standards that guide practitioners and researchers towards model certainty. As such, this chapter also aims to educate all stakeholders so as to minimise risk when drafting contracts and exchanging digital deliverables.


  • digital twin
  • geometric accuracy
  • point clouds
  • bridge
  • structural health monitoring

1. Introduction

In the wake of the Notre Dame Cathedral fire, the digital scans collected by Dr. Andrew Tallon [1] offer hope for future restoration. One question raised is: what Level of Geometric Accuracy (LOGA) can the reconstructed digital replica achieve with respect to the physical asset? In the Architecture, Engineering and Construction (AEC) sector, operation and maintenance (O&M) costs can range between 60 and 80% of total life cycle costs, three times greater than the cost of design and construction [2]. This demonstrates the significance of implementing intelligent asset documentation and structural health monitoring (SHM) approaches for existing built assets. Laser scanning has been widely used to document and monitor the existing conditions of real-world assets in the form of point clouds [3, 4]. A point cloud is an unstructured, low-level digital representation, which by itself does not contain any meaningful information about the documented asset. A ‘twinning’ process is used to convert the low-level data into a high-level digital representation in a structured format, namely a geometric Digital Twin (gDT) [5]. The gDT can be further enriched with other information, such as semantic meaning, texture, materials, damage, energy use and maintenance data, drawn from its physical twin using IoT technologies [6], to form an information-enriched model over time, namely a ‘digital twin’ (DT). ‘Geometric accuracy’ is a vital indicator that guides and describes the degree of spatial accuracy of the resulting twin. It is conventionally deemed the Represented Accuracy [7], which denotes the standard deviation range to be achieved once the point cloud is twinned into a geometric model. Twinning a real-world asset is an interpretive process, where geometric accuracy largely depends on a modeller’s experience and discretion [5]. In their unstructured state, point clouds contain more geometric detail than a gDT created from them.
Therefore, the resulting ‘best-fit’ gDTs are highly unlikely to be as accurate as the measured data (e.g., a point cloud) at the end of the twinning process [8]. This is also true for automated methods, since there is a trade-off between the achieved geometric accuracy and the quantity of information used to describe existing constructive objects of arbitrary shape [9]. This occurs because twinning involves simplifications to create polygon- or mesh-based primitives, which ‘smooths’ discontinuities and gaps in point clouds [10]. Almost every object is therefore approximated in order to transform point-cloud-based descriptors (in non-parametric formats) into parametric primitives [11]. Figure 1 illustrates a series of components for a bridge asset where the point cloud is converted into bespoke gDT elements. However, since point clouds often contain defects, such as varying point density [12] and occlusions [13], it is difficult or often not feasible to achieve a desired LOGA for the resulting gDTs [5]. When these conditions occur, what are realistic expectations for a modeller or an automated method with regard to representing reality and meeting the required accuracies for SHM?

Figure 1.

Customising shapes of bridge components and fitting them to point clusters.

Numerous specifications termed LOX (e.g., Level of Development and Level of Detail) have been developed to guide practitioners and researchers when creating digital models [14]. What do these LOX mean? How can one measure whether the specifications have been met? What is the best-practice approach when the employer requires ‘1 cm accuracy’ or ‘every element to be within half a centimetre’? This chapter explores these questions, aiming (1) to provide a critical review of existing specifications and twinning implementations, (2) to identify technical twinning challenges, and (3) to inform the establishment of a geometric-accuracy-based evaluation system for twinning and updating.


2. Background

2.1 Existing LOX

The term ‘LOD’ was initially introduced by Vico Software [15]. Ambiguity in defining LOD stems largely from the fact that the American Institute of Architects (AIA) later adopted this concept, keeping the acronym LOD but changing its meaning to ‘Level of Development’ rather than ‘Level of Detail’ [16]. The AIA protocol was then superseded by the document AIA G202™ [17], which defines five progressively detailed levels of completeness: LOD100–LOD500. Based on the AIA protocols, the BIMForum [18] released another LOD specification, identical to those published in the AIA’s Digital Practice Documents [19] but with two exceptions. First, a new LOD was designated as LOD350. Second, LOD500 was removed from the specification. The geometric requirements of gDT elements at LOD300, LOD350, and LOD400 are defined in the same way in terms of accuracy. However, the BIMForum document does not elaborate on what is implied by ‘accurate’ or how to measure it. Bolpagni [20, 21, 22] summarised the history of the LOX classification system, as shown in Table 1. Various new classification systems have been developed to accompany and complement the BIMForum’s LOD specification. For example, New Zealand proposed a LOD specification that contains five maturity levels [23], each of which is a sum of different aspects defining the geometry and information of gDT elements. Among these, Level of Detail (LOD) and Level of Accuracy (LOA) do not specify any quantitative standards. The Royal Institution of Chartered Surveyors [24] proposed a concept of building survey detail accuracy banding, which defines the accuracies to be achieved for different surveyed features when an employer requires a customised geometric accuracy and confidence level. This banding, however, is tailored to building design settings consisting of cuboids defined by length, width, and height.
Similarly, Abualdenien and Borrmann [25] introduced a multi-LOD meta scheme that takes geometric uncertainties into account by assigning quantitative fuzziness in centimetres. Again, the usefulness of this scheme for describing twinning quality is unknown. To this end, Banfi [26] and Banfi et al. [27] proposed a new Grades of Generation (GoG) protocol for twinning highly complex historic structures from point clouds. LOGA was defined as the error between the reconstructed objects and the point clouds, using metrics such as the mean distance, median distance, and standard deviation. The USIBD specification [7] was the first to provide the means to report twinning results of existing building conditions (from point clouds) based on standard deviation (stdev). It articulates ‘accuracy’ as well as five different LOAs (Figure 2) by which to represent real-world out-of-plumb geometries. Specifically, the Measured Accuracy represents the stdev range to be achieved when acquiring a point cloud, regardless of the method used. In contrast, the Represented Accuracy represents the stdev range to be achieved when a point cloud is twinned. This guideline, however, does not indicate how to achieve or how to measure the Measured Accuracy and the Represented Accuracy. As shown, various acronyms are used across countries and organisations. These acronyms are either identical or interchangeable, making them difficult to understand and adopt.

Country/region | Document | Year | LOX term(s)
Denmark | BIPS | 2007 | Information Level
Australia | CRC | 2009 | Object Data Levels / Level of Detail
USA | Department of VA | 2010 | Level of Development
USA | Vico Software | 2011 | Level of Detail
Australia | NATSPEC | 2011 | Level of Development
Hong Kong | HKIBIM | 2011 | Level of Detail
USA | NYC DDC | 2012 | Model Level of Development / Level of Development; Model Granularity
USA | Penn State University | 2012 | Level of Development
USA | USC | 2012 | Level of Detail
USA | US Army Corps of Engineers (USACE) | 2012 | Level of Development
Singapore | BCA | 2013 | Level of Detail
UK | PAS 1192-2 | 2013 | Level of Model Definition; Level of Model Detail; Level of Model Information
UK | CIC BIM Protocol | 2013 | Level of Detail
Germany | BMVBS | 2013 | Level of Development
Netherlands | BIM | 2014 | Information Level
Canada | AEC | 2014 | Level of Development
France | Le Moniteur | 2014 | Level of Detail / Level of Development
Australia | BCPP | 2014 | Level of Development; Level of Detail; Level of Accuracy; Level of Information; Level of Coordination
China | CBC | 2014 | Level of Detail
Belgium | ABEB-VBA | 2015 | Level of Development
Germany | D&R | 2015 | Level of Development
USA | BIMForum | 2015 | Level of Development; Element Geometry; Associated Attribute Information
UK | NBS BIM Toolkit | 2015 | Level of Detail; Level of Information
UK | AEC (UK) | 2015 | Level of Definition; Level of Information; Grade / Level of Detail
China | SZGWS (Shenzhen) | 2015 | LOD

Table 1.

Comparison of the LOX classification system across countries.

Figure 2.

Measured and represented accuracy [7].

2.2 Industry applications

Leading software vendors provide advanced commercial twinning solutions, which are currently semi-automated processes at best. ClearEdge3D EdgeWise software can automatically extract geometric features for industrial constructive elements and basic architectural elements using cross-sections in user-cropped regions, followed by fitting 3D shapes from a library of preloaded features [28, 29]. This means that current practice can achieve a high degree of twinning automation if the resulting geometries are assumed to be generic or pre-defined. In the context of SHM, however, this assumption is unrealistic if millimetre twinning accuracy is required. Twinning arbitrary geometries using point clouds is quite challenging [30]. Most authoring tools are designed to model orthogonally, or along local coordinate axes. They employ rigid-body parameters to design construction elements by defining cross-sectional shapes and length, width, and height parameters, whereas in the real world, as-is components are often warped, off-plumb, or deflected [31]. While finite element analysis and multi-physics engines can be used to predict elastic and plastic distortions in materials [32], current digitisation workflows that produce parametric objects cannot capture distortions such as bowing in a beam or welding distortion in steel frames. Errors are introduced when as-is geometries are twinned as being plumb and subject to rigid-body physics [33]. Geometry deviation analysis is therefore important, because unfitted geometries would reduce the reliability of a gDT used for structural analysis and defect detection for SHM purposes. Current authoring applications are not capable of carrying out geometry deviation analysis for point clouds; such analysis requires third-party middleware to interpret and investigate.
FARO BuildIT Construction [34] is a recent verification software package for the dimensional quality control (QC) process. Measured data collected from laser scanners can be compared against a gDT to analyse geometric deviations (Figure 3). However, it is worth noting that the nature and origin of a deviation are not identified directly in the analysis. The analysis itself often takes the form of a ‘heat map’, where deviations are plotted in colours that correspond to a specific magnitude and direction from a perfect state (i.e. 0 mm deviation). However, point clouds contain voids and sparse measurements, which are directly classified as deviations. These false positive measurements make it difficult to interpret the deviation analysis results. Users must manually inspect datasets to observe and detect gross errors or missing components. Currently, there are no automated solutions for this in existing middleware. In addition, once deviations are identified through deviation analysis and manual interpretation, users must also manually apply changes to update the authoring gDTs. This remains a large challenge, since there is very little research into the automated updating of gDTs from point clouds [35, 36].
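To make the false-positive problem concrete, the sketch below flags scan points that sit in sparse regions, whose apparent deviations are more likely artefacts (voids, stray returns) than true geometric deviations. This is an illustrative sketch on synthetic data, not the workflow of any tool mentioned above; the thresholds `min_neighbours` and `radius` are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(scan, model, min_neighbours=5, radius=0.08):
    """Distance from each scan point to the nearest model point, plus a
    flag marking points in sparse regions whose apparent deviations are
    likely artefacts rather than true deviations."""
    dev, _ = cKDTree(model).query(scan)
    counts = cKDTree(scan).query_ball_point(scan, r=radius, return_length=True)
    reliable = counts >= min_neighbours
    return dev, reliable

rng = np.random.default_rng(0)
# flat reference surface and a dense, slightly noisy scan of half of it
model = np.column_stack([rng.uniform(0, 1, (2000, 2)), np.zeros(2000)])
scan = model[:1000] + rng.normal(0, 0.002, (1000, 3))
scan = np.vstack([scan, [[0.5, 0.5, 0.2]]])  # one isolated stray point
dev, reliable = deviation_map(scan, model)
print(reliable[-1])  # stray point flagged as unreliable -> False
```

In a real QC workflow the unreliable points would be excluded from the heat map (or rendered in a neutral colour) instead of being reported as deviations.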

Figure 3.

Deviation analysis of a pipe assembly (point cloud data courtesy of FARO Technologies, Inc.).

2.3 Existing research methods

Automated methods have been proposed to streamline the twinning process (Table 2) [37, 38], although user intervention is still required for some crucial steps [44]. Zhang et al. [40] and Laefer and Truong-Hong [47] produced gDTs for bridges and industrial plants, but without a geometric deviation assessment. Anil et al. [39] were among the pioneers who discussed the problem of geometric deviation in depth. They suggested using minimum Euclidean distance and thresholding [49] as metrics to evaluate the fitting quality (CAD model against point clouds). Their deviation analysis at the macro level (for the whole structure) was performed using a commercial software application (PolyWorks v9). Bonduel et al. [46] suggested assessing twinning results at both macro and micro levels. They used CloudCompare to analyse the deviations between a point cloud and a manually generated building floor gDT, and also discussed the achieved Represented Accuracy using the LOAs provided by USIBD. The Hausdorff distance was then proposed to measure the fitting deviation of a mesh-based building gDT reconstructed from a synthetic point cloud [41]. Thomson and Boehm [42] suggested using centroid Euclidean distance, area difference (based on width and length), and angular difference to measure the fitting quality of walls. Although these measurements can assess element-wise quality, they are tailored to generic building walls of cuboid shape. Similarly, Valero et al. [43] assessed the fitting deviations of individual furniture objects and walls using orientation, dimension, positioning, and sizing metrics, assuming these objects consist of planar surfaces. Lu et al. [5] proposed an automated fitting method to twin bridge components. They gauged the fitting accuracy using the Cloud-to-Cloud (C2C) distance metric; a similar metric was used by Shirowzhan et al. [51]. However, their geometric deviation evaluation was performed only at the macro level.
NURBS-based methods [27, 44, 45, 48] have been employed to reconstruct geometric surfaces for building, industrial plant, and historic building elements. Note that the generation of compound pipes requires user intervention to group a set of cylindrical segments before surfaces are fitted automatically [44]. Likewise, highly complex historic structures require manual surface generation, although extremely high twinning accuracy was reported [27]. Point-to-surface distance metrics were used to evaluate the fitting quality [44, 48]. In contrast, Barazzetti [45] used the commercial package Geomagic Studio to evaluate the fitting accuracy of NURBS curves through a progressive densification (i.e. multi-resolution) approach. As shown, there is no fully automatic method for producing geometrically accurate twins of existing assets, and more comprehensive evaluation metrics need to be established for assessing twinning quality.
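Among the metrics reviewed above, the Hausdorff distance is easy to illustrate: it is the largest of all nearest-neighbour distances between two point sets, so a single badly fitted region dominates the score. Below is a minimal sketch on synthetic stand-in clouds, not data from any of the cited studies.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
gdt_sample = rng.uniform(0, 1, (500, 3))            # points sampled from a gDT surface
cloud = gdt_sample + rng.normal(0, 0.01, (500, 3))  # corresponding scan points

# the symmetric Hausdorff distance is the larger of the two directed distances
h = max(directed_hausdorff(cloud, gdt_sample)[0],
        directed_hausdorff(gdt_sample, cloud)[0])
print(f"Hausdorff distance: {h:.4f}")
```

Because it reports a worst case rather than an average, the Hausdorff distance complements mean-distance metrics such as C2C when judging element-wise fitting quality.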

Ref | Point cloud: real (R)/synthetic (S) | Manual (×)/automated (√) process markers | Level: micro (I)/macro (A) | LOGA assessment method/metrics
[39] | R | × × × | A | Minimum Euclidean distance and thresholding
[41] | S | | A | Hausdorff distance
[42] | R | × | I | Centroid Euclidean distance, area difference (width and length), angular difference
[38] | R | × | A | Visual assessment
[43] | R | | A/I | Plane fitting/orientation/dimensional error, positioning/sizing error
[44] | R | × × | A | Mean point-to-surface distance
[45] | R | × × | A | Progressive densification
[37] | R | × × | I | Cylinder radius and orientation
[11] | R | ×/√ | A/I | Control points and point-to-gDT distance
[27] | R | × × | A | Standard deviation
[48] | R | × × | A | Standard deviation
[5] | R | | A | Cloud-to-cloud distance

Table 2.

Twinning methods and evaluation metrics.


3. Case study

Previous sections have discussed that twinning existing assets using point clouds is restricted by current software tools which are limited in their ability to represent out-of-plumb conditions and non-rigid formations. It is also restricted by the limits of the data itself. This section discusses this problem in detail through a case study.

Laser scanning can sample an object’s surface as it exists, producing highly accurate spatial measurements in the form of 3D points. If the documented object is not straight or plumb, the scanner captures its geometric state. Theoretically, a terrestrial laser scanner such as the FARO Focus 3D X330 [52] has a ranging error of ±2 mm at 10 m, equating to a systematic measurement error of about ±2 mm (1σ) at 10 m. However, the measured accuracy is affected by many factors, including the standard deviation of the sensor, the registration method, the material being scanned, low temperature, bad weather, and strong sunlight [53]. The overall twinning error E_T can be expressed as a combination of three primary sources of error:

E_T = E_RA + E_M + E_R,  (E1)

where E_RA is the ranging error associated with the laser scanner, E_M is the measured error introduced during scanning and registration, and E_R is the represented error resulting from the scan-to-gDT process. It is important to specify the error associated with each source independently, since the sources are assumed to be mutually exclusive. In this chapter, we focus only on the represented error E_R, which is independent of the sensor, the parameters of the documented object, and the scanning and registration methods. It relates to the manner in which the measured point cloud is transformed into the outcome, i.e. a gDT, and describes the extent to which the gDT matches the acquired points.

As mentioned earlier, existing authoring software packages are by nature orthographic modelling tools. The challenge in using them is how to represent a structure’s up-to-date conditions. To complicate matters, the as-weathered, as-damaged, or as-deviated information of existing assets further increases the difficulty of representation. Fitting deviations are generated and propagated if these conditions are represented in an over-simplified fashion. In addition, sparseness and hidden or concealed conditions are often encountered in point clouds, making it difficult or impossible to twin constructive objects with certainty. Thus, E_R is the accumulated error from geometric deviations and the propagation of data uncertainty.

Figure 4 demonstrates current efforts on parametric bridge design [54]. The essential feature for bridges is the horizontal and vertical alignments, which control the parametric relationships and dependencies between assembly systems and all components. The deck cross-sections are then driven by the bridge alignment curves. They are profiles that are used in conjunction with the alignment to derive the overall 3D shape of the bridge deck.

Figure 4.

Parametric cross section design of a slab-beam bridge with user-defined geometric constraints [54].

When SHM and retrofit planning are performed, accurate as-is condition data are required regardless of the availability of the as-designed parametric information. Point clouds can depict the as-is geometries of an asset using thousands of data points. However, maintaining the dimensional accuracy and geometric fidelity of a given bridge point cloud is challenging, because the usefulness of topological and geometric constraints is limited to very simple geometric shapes and spatial relationships. As-is geometries do not exhibit a parametric pattern with respect to the initial primitives used to create the as-designed model. Figure 5 illustrates that the non-orthogonal geometries of a real-world bridge point cloud cannot be fitted using generic shapes, such as cuboids, in an orthogonal fashion. The modelled slabs do not follow the point cloud and produce fitting deviations where they are joined at sharp angles (Figure 5a). These deviations become smaller if the cross-sections are outlined with as-is 2D shapes. However, the bridge gDT does not necessarily become better closed and manifold, as the fitting quality is improved at the expense of broken or clashing connections (Figure 5b). This is especially true when twinning point clouds of pipes with sags, beams and columns with welding distortion, or walls that are skewed. Adjacent components do not form properly watertight connections unless they are joined at right angles. For example, Figure 6 illustrates part of a piping system generated from point clouds: the local deviation is reduced from 30 to 1 mm when watertight connections are not enforced. Given the challenge of mediating non-parametric real-world deviations into parametric model primitives, modellers are often forced to leave objects ‘slightly off-axis’ or to perform ‘unnatural shape editing’, eliminating or ignoring as many overlap and joint warnings as possible in order to match the points.

Figure 5.

Fitting geometric shapes to bridge point clouds. (a) Point clouds fitted by cuboids; (b) point clouds fitted by best-fit shapes.

Figure 6.

Fitting cylinders to piping point clouds (point cloud data courtesy of FARO Technologies, Inc.).

When facing occlusions and damage, geometric accuracy relies on human perception, followed by inference of the hidden information based on assumptions. For example, a bearing plays an important role in a bridge, but its surface is less than 1% of that of the deck slab and it has a complex composition. These characteristics make it difficult to capture fully with a laser sensor (Figure 7a). In addition, point clouds need to be down-sampled before being fed into in-memory authoring tools or automated algorithms that cannot handle huge datasets. Down-sampling is often performed with a third-party processing application, which applies generic filters to down-sample the points evenly without considering local geometric context. While this creates beneficial data compression, the resulting datasets often lose information along the way (i.e. sparse areas or smaller objects retain little to no measurements). Thus, only a few points are retained for the bearing surface, which does not provide enough information to support the twinning task and results in geometric uncertainty (Figure 7b). The interpretation of bearing shapes largely depends on the modeller’s knowledge and discretion, which can introduce connection problems (e.g., clashes or gaps) (Figure 7c). Uncertainty increases when working with point clouds containing skewness and noise (Figure 7d and e). Although methods have been suggested to work under occlusions and sparseness [5, 55], the certainty of the resulting models is rarely investigated.
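The effect of generic, context-blind down-sampling can be sketched as follows: a uniform voxel filter keeps roughly one point per occupied voxel, so a small component such as a bearing is reduced to a handful of points while the deck slab remains densely covered. This is an illustrative sketch with synthetic geometry and an arbitrary 0.2 m voxel size, not the pipeline used in the case study.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    inverse = np.unique(keys, axis=0, return_inverse=True)[1].reshape(-1)
    counts = np.bincount(inverse)
    out = np.empty((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(2)
slab = rng.uniform([0.0, 0.0, 0.0], [20.0, 10.0, 0.5], (50000, 3))  # large deck slab
bearing = rng.uniform([5.0, 5.0, -0.3], [5.3, 5.3, 0.0], (500, 3))  # small bearing
cloud = np.vstack([slab, bearing])

ds = voxel_downsample(cloud, voxel=0.2)
# retained points whose centroid lies in the bearing's bounding box
in_bearing = ((ds >= [5.0, 5.0, -0.3]) & (ds <= [5.3, 5.3, 0.0])).all(axis=1)
print(f"bearing: 500 -> {in_bearing.sum()} points; cloud: {len(cloud)} -> {len(ds)}")
```

A 0.3 m bearing spans only a few 0.2 m voxels, so its 500 measurements collapse to under a dozen points; a curvature- or density-aware resampling scheme would preserve more detail on small components.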

Figure 7.

Bearing gDT generation under uncertainty. (a) Original point cloud; (b) down-sampled point cloud; (c) bearing shapes and connection problem; (d) (e) geometry uncertainty in point clouds.

Figure 8 shows an example of a bridge where little-to-no measurements were captured in the girder areas due to a limited line of sight [56]. Like many existing works, both the manual and the automated method inferred specific girder profiles and produced gDTs with detailed dimensions using engineering knowledge. Then, Cloud-to-Cloud (C2C) distance could be used [5] to compute the deviation between the point clouds sampled from the manually generated gDTs (Manual) and the automated ones (Auto), and the real point clouds (Real):

Figure 8.

Geometric deviation with complete girder profiles in occluded areas.

C2C = max( dist̄_{Manual or Auto / Real}, dist̄_{Real / Manual or Auto} ),  (E2)

where dist̄ is the estimated distance between a compared point cloud (i.e. Manual or Auto) and a reference point cloud (i.e. Real). Non-trivial fitting deviations occurred and raised the overall macro-level deviation (C2C_Auto = 12.5 cm; C2C_Manual = 5.7 cm) [5]. These significant fitting deviations were due to the occluded areas, for which no measurements were available to compare against, resulting in an incorrect gDT from a geometric accuracy standpoint. This solution is simplistic, since it does not take modelling uncertainties into account; it simply treats uncertain areas as errors. Figure 9 illustrates that the fitting deviation is drastically reduced, by approximately 70% (C2C_Auto = 4.2 cm), if the complete girder profiles are replaced with unclosed mesh-based gDTs while the other parts remain unchanged. Even so, the improved accuracy only aligns with USIBD’s LOA 20 (lower range: 15 mm; upper range: 5 cm, at 2σ) [7], a relatively low accuracy standard. USIBD provides different Represented Accuracy levels, but it does not specify how to measure them. For example, one can only use a couple of reference points to estimate the accuracy, taken as the averaged ratio between pair reference-point distances in the registered scan data and the corresponding pair on-site or gDT point distances:

Figure 9.

Geometric deviation with incomplete girder profiles in occluded areas.

acc̄ = (1/M) Σ_{i=1}^{M} ( pair reference-point distance / pair on-site or gDT point distance )_i ,  (E3)

where M is the number of investigated pair-wise distances. It is then possible to obtain an acc̄ that aligns with a higher LOA in USIBD. By contrast, C2C is an estimate computed over thousands of points. The resulting C2C-based accuracy is therefore almost surely not going to achieve an expected ‘high accuracy’ level (e.g., ±10 mm, or USIBD’s LOA 30 onwards). The C2C comparison between Auto and Real revealed that points sampled from the bottom flanges of the girders matched the original points well, while the mismatched points came mainly from the central part of the deck slab, where points were not evenly distributed. This is attributed to the undulating surfaces of the gDT generated using the proposed ConcaveHull alpha-shape algorithm (Figure 9). Local indentations or bumps are generated when the alpha value is too small to smooth out a surface affected by unavoidable noise, raising the fitting deviations. However, optimising the alpha value is difficult, because an indentation, for instance, could be due to a defect or a hole but could also be due to localised sparse and unevenly distributed points. In addition, although the ConcaveHull alpha-shape algorithm can describe slab geometries in a 2D space, it oversimplifies a 3D space.
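The two evaluation metrics above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: it assumes dist̄ in Eq. (E2) is a mean nearest-neighbour distance, uses synthetic stand-in clouds, and feeds Eq. (E3) with hypothetical reference-point distances.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c(compared, reference):
    """Symmetric cloud-to-cloud distance, cf. Eq. (E2): the larger of the
    two mean nearest-neighbour distances between the clouds."""
    d_cr = cKDTree(reference).query(compared)[0].mean()  # compared -> reference
    d_rc = cKDTree(compared).query(reference)[0].mean()  # reference -> compared
    return max(d_cr, d_rc)

def represented_accuracy(scan_pair_dists, gdt_pair_dists):
    """Mean ratio of pair reference-point distances, cf. Eq. (E3)."""
    ratios = np.asarray(scan_pair_dists) / np.asarray(gdt_pair_dists)
    return ratios.mean()

rng = np.random.default_rng(3)
real = rng.uniform(0, 1, (3000, 3))            # stand-in for the real point cloud
auto = real + rng.normal(0, 0.005, (3000, 3))  # stand-in for points sampled from a gDT

print(f"C2C = {c2c(auto, real):.4f}")
# hypothetical pairwise distances (m) between a handful of reference targets
print(f"acc = {represented_accuracy([12.403, 8.117, 25.662], [12.400, 8.120, 25.650]):.5f}")
```

The contrast in sample size is visible in the code: acc̄ is averaged over a handful of reference pairs, while C2C aggregates thousands of nearest-neighbour distances, which is one reason the two can suggest very different LOA bands for the same gDT.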


4. Prospective twinning methods and deviation analysis

The analysis in the previous section demonstrates that real-world conditions are seldom orthogonal and perfect, rendering it extremely difficult to perform high-fidelity twinning with geometric accuracy on the millimetre scale. Commonly used representation models include, but are not limited to: implicit representations such as mathematical formula-based methods [57], Boundary Representation such as polygon- and mesh-based methods [41], Constructive Solid Geometry [58], Swept Solid Representation [47], and NURBS representation [45, 48]. Depending on the nature of the defects, as-damaged geometries may be represented in different ways. Figure 10 illustrates the vision of an as-damaged bridge gDT implemented for inspection work. The method proposed by Hüthwohl et al. [59] can be used to integrate superficial defects such as cracks, efflorescence, corrosion, and slight spalling [Figure 10a, (3) and (4)] into the affected element using back-projection [59] (Figure 10b). In contrast, major defects, such as severe spalling, cavities, and potholes [Figure 10a, (1) and (2)], differ significantly in geometry from their surrounding healthy (i.e. good condition) surfaces. The method proposed by Lu et al. [5] can be used to represent healthy elements; however, it cannot describe unhealthy areas precisely, owing to its extrusion-based nature. Finer representations, such as mesh-based and NURBS-based twinning techniques [50], can be employed to handle the geometric complexity of significant defects precisely (Figure 10b). The more variable the defect, the more the geometric twinning needs to rely on non-parametric representations such as mesh formats. One promising solution for producing a gDT that takes the as-damaged information into account is first to detect unhealthy areas [60] and then to twin those areas using finer twinning techniques chosen according to their type and size. However, the mesh polygon resolution should not degrade the rendered presentation. This requires an intelligent a priori scheme to resample the point clouds based on the geometric complexity of the sampled surface [61, 62].

Figure 10.

Vision of the concept of an as-damaged bridge gDT applied for inspection. (a) actual damages or defects; (b) digital representations.

Construction elements at different scales may require different twinning techniques. For example, extrusions can be efficient for twinning slab segments, but they cannot be applied directly to bearings. This means a gDT is highly likely to contain more than one data representation type in order to balance resolution against LOGA, which very few works have covered in depth. In addition, as previously mentioned, occlusions and sparseness increase the uncertainty of the resulting gDT. These problems call for a more intuitive geometric deviation analysis system. Macro-level deviation analysis provides an overview of the twinning quality, but it does not reflect a detailed comparison at the component or feature level. Therefore, a dimensional QC system for geometric deviation analysis should comprise both macro- and micro-level analysis. The former can be used to quickly localise uncertain areas or areas with major deviations (Figures 8 and 9), while the latter provides detailed deviation analysis at the component level, indicating a more meaningful LOGA for specific elements. Table 3 shows an example of C2C-based geometric deviation analysis of five bridge gDTs produced with an automated twinning method. The micro-level figures show that the deck slab accounts for the larger share of the overall deviation, whereas the other components, such as pier caps, piers, and girders, account for smaller shares. Specifically, for all these bridges except Bridge 7, the deviations stemming from the deck slabs are 2.9, 3.2, 2.1, and 1.5 times larger than the averaged value for the remaining components, respectively. Bridge 7 initially appears misleading, since its slab deviations are only 48.8% of those of its girders; these abnormal deviations are due to significant occlusions in the raw data. The distribution of the deviations is not necessarily proportional to the LOGA, as can be demonstrated through the coverage area of the components. The deck slab takes up most of the sampled surface, compared with the pier caps and piers, which are much smaller in size and covered area. Specifically, the remaining components (pier caps, piers, and girders) take up 12, 10.4, 7.2, 31.7, and 15.6% of the overall sampled surface of each bridge, respectively. This means that although the absolute twinning accuracy of smaller components is higher than that of larger ones, their relative accuracy is not necessarily better. A deviation analysis system that combines both macro- and micro-level information can better interpret the twinning accuracy.

C2C (cm): macro [5] (bridge-wise) deviation, with micro (element-wise) share of the sampled surface area

Bridge 1 — macro C2C: 4.3 cm
Components: deck slab; pier caps 1–3; piers 11–13, 21–23, 31–33
Area: deck slab 88.0%; pier caps 2.2% × 3; piers 0.6% × 9

Bridge 4 — macro C2C: 9.4 cm
Components: deck slab; piers 11–16, 21–26
Area: deck slab 89.6%; piers 11, 16, 21, 26 at 1% each; piers 12–15 and 22–25 at 0.8% each

Bridge 6 — macro C2C: 4.6 cm
Components: deck slab; piers 11, 12, 21, 22, 31, 32
Area: deck slab 92.8%; piers 1.2% × 6

Bridge 7 — macro C2C: 12.5 cm
Components: deck slab; pier; girders 11–19, 21–29
Area: deck slab 54.2%; pier 6.2%; girders 11–19 at 2.1% × 9; girders 21–29 at 2.3% × 9

Bridge 9 — macro C2C: 5.6 cm
Components: deck slab; piers 11, 12, 21, 22, 31, 32
Area: deck slab 84.4%; piers 2.6% × 6

Table 3.

Macro- and micro-level C2C geometric deviation analysis.
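The macro- and micro-level C2C analysis summarised in Table 3 can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch, not the chapter's actual implementation: the function name `c2c_deviation`, its point-list inputs, and the component labels are all assumptions. Mean nearest-neighbour distance stands in for the C2C metric, and a brute-force search replaces the optimised spatial indices that production tools would use.

```python
# Illustrative sketch (not the chapter's implementation) of macro- and
# micro-level C2C deviation analysis: each point sampled from the gDT
# surface is compared against its nearest neighbour in the raw scan,
# then deviations are aggregated bridge-wise (macro) and per labelled
# component (micro), together with each component's coverage share.
import math
from collections import defaultdict

def c2c_deviation(cloud_pts, model_pts, labels):
    """Brute-force nearest-neighbour C2C analysis.

    cloud_pts : list of (x, y, z) raw scan points
    model_pts : list of (x, y, z) points sampled from gDT surfaces
    labels    : component label per model point, e.g. 'deck slab', 'pier 11'
    Returns (macro_mean, micro), where micro maps each label to
    (mean deviation, share of the sampled surface).
    """
    def nn_dist(p):
        # Distance from one gDT sample to its closest raw scan point.
        return min(math.dist(p, q) for q in cloud_pts)

    dists = [nn_dist(p) for p in model_pts]
    macro = sum(dists) / len(dists)            # bridge-wise figure

    per_label = defaultdict(list)
    for d, lab in zip(dists, labels):
        per_label[lab].append(d)
    micro = {lab: (sum(ds) / len(ds),          # element-wise deviation
                   len(ds) / len(dists))       # coverage share of element
             for lab, ds in per_label.items()}
    return macro, micro
```

As the chapter notes, the macro figure alone can hide component-level problems: a heavily occluded girder may dominate the element-wise numbers while barely moving the bridge-wise mean.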


5. Conclusions

This chapter presents an exploratory analysis of the LOGA of geometric twinning for existing assets using point clouds. Twinning existing assets for structural health monitoring is a daunting task, since the as-is geometric conditions can differ from the designed status owing to geometric anomalies, physical damage, deflections, and the complexity, ambiguities, and defects of the measured point cloud data. Section 2.1 reviews existing LOX systems, which lack a clear elaboration of geometric accuracy. They share the same acronym but do not necessarily carry the same meaning. They are tailored to basic assumptions made in the design phase or at the beginning of a generative process, making them unsuitable for interpreting the as-is geometries of gDTs delivered for SHM purposes. Section 2.2 reviews industry applications and reveals that a gap remains between the accuracy requirements placed on gDTs and the capabilities of the underlying twinning processes. Specifically, authoring tools have practical limitations with respect to orthogonal representations (i.e. idealised parametric primitives) versus real-world deviations (i.e. non-parametric data formats such as point clouds and meshes); their ability to twin or capture non-rigid-body deformations is extremely limited. Likewise, limitations are revealed for deviation evaluation tools with respect to geometric accuracy interpretations. Despite the growing state of the art (Section 2.3), a fully automated twinning and updating process is still in its infancy. A major bottleneck for complete automation of the workflow is a definition of the LOGA of the documented asset that covers all geometric deviations and data uncertainties. This requires the development of comprehensive LOGA-based evaluation metrics for gDTs generated in the post-construction stage. The case study (Section 3) demonstrates the technical challenges of the twinning process.
High-fidelity twinning within millimetre-level geometric accuracy is challenging to achieve because each step introduces errors; this calls for in-depth research on the level of model certainty. The LOGA is closely related to the tools, techniques, and processes used to represent the specific object being documented. Ultimately, the twinning method and LOGA depend highly on what the gDT will be used for (Section 4), on the specific needs and goals of the project, and on what kind of metadata is required when providing information about the geometric accuracy.

Parameterising point cloud data results in a loss of geometric accuracy along with a decrease in model certainty. Practitioners and researchers therefore need to communicate the LOGA effectively, through a universal consensus, before developing, evaluating, and using gDTs. Until there is such a consensus and a universal system for describing the geometric accuracy of gDTs, the following recommendations are offered. Where geometric accuracy requirements are very strict, such as in the O&M stage, it may be useful to store and link the initial as-is captured data alongside the resulting gDT. The purpose of this is two-fold. First, it allows end-users to view the initial dataset that was used to create the gDT and to conduct their own accuracy or structural analyses. Storing the initial raw point cloud data gives end-users a level of confidence when they use the geometric information from a gDT, and it alleviates some of the burden on those who create the gDT to provide a subjective global accuracy figure (which can have legal implications depending on the end-use of such gDTs). Secondly, linking the initially captured data avoids loss of geometric information: since point cloud data contains much rawer geometric detail than a resulting surface-based or solid-based gDT, data fidelity can be preserved. As twinning processes and algorithms continue to improve, in both accuracy and computational efficiency, it will become possible to build, update, manage, and exploit gDTs in a progressive manner.
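The recommendation to store and link the initial capture alongside the gDT could take the form of a simple per-element manifest recording which raw scan file and which point subset support each twinned element, together with its element-wise deviation. The sketch below is a hypothetical schema for illustration only; `TwinManifest`, `ElementRecord`, and every field name are assumptions, not an existing standard or exchange format.

```python
# Hypothetical manifest linking gDT elements back to the raw capture data
# they were twinned from, so end-users can re-derive their own accuracy
# figures instead of relying on a single subjective global number.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ElementRecord:
    element_id: str          # gDT element identifier, e.g. an IFC GlobalId
    scan_file: str           # raw point cloud archive the element came from
    point_indices: list      # indices of the scan points supporting it
    c2c_mean_cm: float       # element-wise (micro) deviation, if computed
    notes: str = ""          # e.g. 'heavy occlusion on soffit'

@dataclass
class TwinManifest:
    bridge_id: str
    macro_c2c_cm: float      # bridge-wise (macro) deviation
    elements: list = field(default_factory=list)

    def to_json(self):
        # Serialise the whole manifest (including nested records) for
        # storage next to the gDT and the raw scan archives.
        return json.dumps(asdict(self), indent=2)

# Usage: record one element of a bridge twin and export the manifest.
manifest = TwinManifest("Bridge 1", 4.3)
manifest.elements.append(
    ElementRecord("deck-slab", "bridge1_scan.e57", [0, 1, 2], 5.1))
print(manifest.to_json())
```

Keeping such a record alongside the gDT preserves the link to the rawer geometric detail of the point cloud, in line with the two-fold purpose described above.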



Acknowledgments

This research work is supported by the National Sciences and Engineering Research Council (NSERC), Mitacs and Edge Architects Ltd. and Cambridge Trimble Fund. We would like to thank them for their support. We also acknowledge Faro Technologies for their in-kind support, provision of sample point cloud data and access to BuildIT Construction software. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect the views of the stakeholders who have supported this research.


References

1. Shea RH. Historian Uses Lasers to Unlock Mysteries of Gothic Cathedrals. National Geographic Magazine. 2019. Available from: [Accessed: 17 March 2020]
2. Aziz ND, Nawawi AH, Ariff NRM. Building information modelling (BIM) in facilities management: Opportunities to be considered by facility managers. Procedia - Social and Behavioral Sciences. 31 October 2016;234:353-362. DOI: 10.1016/j.sbspro.2016.10.252
3. Wang Q, Kim MK. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Advanced Engineering Informatics. January 2019;39:306-319. DOI: 10.1016/j.aei.2019.02.007
4. Kim MK, Wang Q, Li H. Non-contact sensing based geometric quality assessment of buildings and civil structures: A review. Automation in Construction. April 2019;100:163-179. DOI: 10.1016/j.autcon.2019.01.002
5. Lu R, Brilakis I. Digital twinning of existing reinforced concrete bridges from labelled point clusters. Automation in Construction. September 2019;105:102837. DOI: 10.1016/j.autcon.2019.102837
6. Davila Delgado JM, Butler LJ, Gibbons N, Brilakis I, Elshafie MZEB, Middleton C. Management of structural monitoring data of bridges using BIM. Proceedings of the Institution of Civil Engineers - Bridge Engineering. September 2017;170(3):204-218. DOI: 10.1680/jbren.16.00013
7. USIBD. Level of Accuracy (LOA) Specification Version 2.0. U.S. Institute of Building Documentation; 2016. Available from: [Accessed: 17 March 2020]
8. Qu T, Sun W. Usage of 3D point cloud data in BIM (building information modelling): Current applications and challenges. Journal of Civil Engineering and Architecture. 2015;9(11):1269-1278
9. Weisstein EW. Torus. MathWorld—A Wolfram Web Resource. 2018. Available from: [Accessed: 17 March 2020]
10. Lu R. Automated generation of geometric digital twins of existing reinforced concrete bridges [doctoral thesis]. 2019. DOI: 10.17863/CAM.36680
11. Macher H, Landes T, Grussenmeyer P. From point clouds to building information models: 3D semi-automatic reconstruction of indoors of existing buildings. Applied Sciences. 2017;7(10):1030
12. Truong-Hong L, Laefer DF. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Computers and Graphics. 2015;49:82-91
13. Dimitrov A, Golparvar-Fard M. Segmentation of building point cloud models including detailed architectural/structural features and MEP systems. Automation in Construction. 2015;51(C):32-45
14. Succar B. Level of X (LoX), BIM Dictionary. Available from: [Accessed: 17 March 2020]
15. Trimble. VICO Software and Trimble to Integrate Workflows and Use Building Information Models to Construct Better Buildings. 2008. Available from:
16. AIA. Building Information Modeling Protocol Exhibit. AIA Doc. E202TM; 2008. Available from: [Accessed: 03 June 2020]
17. AIA. AIA Document E203—2013 Building Information Modeling and Digital Data Exhibit; 2013. Available from: [Accessed: 03 June 2020]
18. Reinhardt J et al. Level of Development Specification for Building Information Models. 2013. Available from: [Accessed: 03 June 2020]
19. AIA. Digital Practice Documents—Guide, Instructions and Commentary to the 2013 AIA Digital Practice Documents. AIA Document E202TM; 2013. Available from: [Accessed: 03 June 2020]
20. Bolpagni M, Ciribini ALC. The information modeling and the progression of data-driven projects. In: Proceedings of the CIB World Building Congress 2016. Vol. 3. Building up Business Operations and Their Logic. Shaping Materials and Technologies; 2016
21. Bolpagni M. The Implementation of BIM within the Public Procurement: A Model-Based Approach for the Construction Industry. 2013. Available from: [Accessed: 03 June 2020]
22. Bolpagni M. Digitalisation of tendering and awarding processes: A Building Information Modelling (BIM)-based approach to public procurement routes [doctoral dissertation]. Politecnico di Milano; 2018. Available from:
23. Reding A, Williams J. Appendix C—Levels of Development Definitions. New Zealand BIM Handbook. 2014. Available from:
24. RICS. Measured Surveys of Land, Buildings and Utilities. 3rd ed. RICS Professional Guidance, Global. Royal Institution of Chartered Surveyors (RICS); 2014. Available from:
25. Abualdenien J, Borrmann A. Multi-LOD model for describing uncertainty and checking requirements in different design stages. In: eWork and eBusiness in Architecture, Engineering and Construction. 2019. DOI: 10.1201/9780429506215-24
26. Banfi F. BIM orientation: Grades of generation and information for different type of analysis and management process. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives. 2017;XLII-2/W5:57-64. DOI: 10.5194/isprs-archives-XLII-2-W5-57-2017
27. Banfi F, Fai S, Brumana R. BIM automation: Advanced modeling generative process for complex structures. In: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2017;IV-2/W2:9-16. DOI: 10.5194/isprs-annals-IV-2-W2-9-2017
28. ClearEdge3D. Structure Modelling Tools. 2019. Available from: [Accessed: 17 March 2020]
29. ClearEdge3D. EdgeWise. 2019. Available from: [Accessed: 17 March 2020]
30. Wang C, Cho YK, Kim C. Automatic BIM component extraction from point clouds of existing buildings for sustainability applications. Automation in Construction. 2015;56:1-13
31. Russo JM. What is meant by ‘level of accuracy?’. LIDAR Magazine. 2014;4(3). Available from: [Accessed: 17 March 2020]
32. Castellazzi G, D’Altri AM, Bitelli G, Selvaggi I, Lambertini A. From laser scanning to finite element analysis of complex buildings by using a semi-automatic procedure. Sensors (Switzerland). 2015;15(8):18360-18380. DOI: 10.3390/s150818360
33. Rausch C, Nahangi M, Perreault M, Haas CT, West J. Optimum assembly planning for modular construction components. Journal of Computing in Civil Engineering. January 2017;31(1). DOI: 10.1061/(ASCE)CP.1943-5487.0000605
34. Faro BuildIT Construction. 2019. Available from: [Accessed: 17 March 2020]
35. Lin YC, Lin CP, Hu HT, Su YC. Developing final as-built BIM model management system for owners during project closeout: A case study. Advanced Engineering Informatics. April 2018;36:178-193. DOI: 10.1016/j.aei.2018.04.001
36. Hamledari H, Rezazadeh Azar E, McCabe B. IFC-based development of as-built and as-is BIMs using construction and facility inspection data: Site-to-BIM data transfer automation. Journal of Computing in Civil Engineering. March 2018;32(2). DOI: 10.1061/(ASCE)CP.1943-5487.0000727
37. Patil AK, Holi P, Lee SK, Chai YH. An adaptive approach for the reconstruction and modeling of as-built 3D pipelines from point clouds. Automation in Construction. 2017;75:65-78
38. Ochmann S, Vock R, Wessel R, Klein R. Automatic reconstruction of parametric building models from indoor point clouds. Computers and Graphics. 2016;54:94-103
39. Anil EB, Tang P, Akinci B, Huber D. Deviation analysis method for the assessment of the quality of the as-is building information models generated from point cloud data. Automation in Construction. 2013;35:507-516
40. Zhang G, Vela PA, Brilakis I. Automatic generation of as-built geometric civil infrastructure models from point cloud data. In: Computing in Civil and Building Engineering 2014. 2014. pp. 406-413
41. Oesau S, Lafarge F, Alliez P. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut. ISPRS Journal of Photogrammetry and Remote Sensing. 2014;90:68-82
42. Thomson C, Boehm J. Automatic geometry generation from point clouds for BIM. Remote Sensing. 2015;7(9):11753-11775. DOI: 10.3390/rs70911753
43. Valero E, Adán A, Bosché F. Semantic 3D reconstruction of furnished interiors using laser scanning and RFID technology. Journal of Computing in Civil Engineering. 2016;30(4):04015053
44. Dimitrov A, Gu R, Golparvar-Fard M. Non-uniform B-spline surface fitting from unordered 3D point clouds for as-built modeling. Computer-Aided Civil and Infrastructure Engineering. 2016;31(7):483-498
45. Barazzetti L. Parametric as-built model generation of complex shapes from point clouds. Advanced Engineering Informatics. August 2016;30(3):298-311. DOI: 10.1016/j.aei.2016.03.005
46. Bonduel M, Bassier M, Vergauwen M, Pauwels P, Klein R. Scan-to-BIM output validation: Towards a standardized geometric quality assessment of building information models based on point clouds. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives. 2017
47. Laefer DF, Truong-Hong L. Toward automatic generation of 3D steel structures for building information modelling. Automation in Construction. 2017;74:66-77
48. Brumana R et al. Generative HBIM modelling to embody complexity (LOD, LOG, LOA, LOI): Surveying, preservation, site intervention—The Basilica di Collemaggio (L’Aquila). Applied Geomatics. 2018;10:545-567. DOI: 10.1007/s12518-018-0233-3
49. Stojanovic V, Richter R, Döllner J, Trapp M. Comparative visualization of BIM geometry and corresponding point clouds. International Journal of Sustainable Development and Planning. 2018;13(1):12-23. DOI: 10.2495/SDP-V13-N1-12-23
50. Ma L, Sacks R, Kattel U, Bloch T. 3D object classification using geometric features and pairwise relationships. Computer-Aided Civil and Infrastructure Engineering. 2018;33(2):152-164
51. Shirowzhan S, Sepasgozar SME, Li H, Trinder J, Tang P. Comparative analysis of machine learning and point-based algorithms for detecting 3D changes in buildings over time using bi-temporal lidar data. Automation in Construction. September 2019;105:102841. DOI: 10.1016/j.autcon.2019.102841
52. FARO. FARO® Laser Scanner Focus3D X 330 Manual. 2015. Available from: [Accessed: 17 March 2020]
53. Lichti DD, Harvey BR. The effects of reflecting surface material properties on time-of-flight laser scanner measurements. In: Geospatial Theory, Processing and Applications. 2002. Available from: Corpus ID: 9926219 [Accessed: 03 June 2020]
54. Ji Y, Borrmann A, Beetz J, Obergrießer M. Exchange of parametric bridge models using a neutral data format. Journal of Computing in Civil Engineering. 2013;27(6):593-606
55. Adan A, Huber D. 3D reconstruction of interior wall surfaces under occlusion and clutter. In: Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT). 2011. pp. 275-281
56. Lu R, Brilakis I, Middleton CR. Detection of structural components in point clouds of existing RC bridges. Computer-Aided Civil and Infrastructure Engineering. March 2019;34(3):191-212. DOI: 10.1111/mice.12407
57. Weisstein EW. Ellipsoid. MathWorld—A Wolfram Web Resource. 2018. Available from: [Accessed: 17 March 2020]
58. Deng Y, Cheng JCP, Anumba C. Mapping between BIM and 3D GIS in different levels of detail using schema mediation and instance comparison. Automation in Construction. 2016;67:1-21
59. Hüthwohl P, Brilakis I, Borrmann A, Sacks R. Integrating RC bridge defect information into BIM models. Journal of Computing in Civil Engineering. 2018;32(3):04018013
60. Xu Z, Li S, Li H, Li Q. Modeling and problem solving of building defects using point clouds and enhanced case-based reasoning. Automation in Construction. December 2018;96:40-54. DOI: 10.1016/j.autcon.2018.09.003
61. Chen J, Zhang C, Tang P. Geometry-based optimized point cloud compression methodology for construction and infrastructure management. In: Computing in Civil Engineering 2017: Smart Safety, Sustainability and Resilience—Selected Papers from the ASCE International Workshop on Computing in Civil Engineering. American Society of Civil Engineers (ASCE); 2017. pp. 377-385
62. Zhang C, Tang P. Visual complexity analysis of sparse imageries for automatic laser scan planning in dynamic environments. In: Computing in Civil Engineering 2015. 2015. pp. 271-279
