Surface reconstruction is an interesting and challenging task with extensive applications in rapid prototype manufacturing, computer vision, virtual reality and computer-aided design (CAD). A typical reconstruction procedure begins with scanning, in which point data are sampled from physical objects by digitizing measurement systems (such as laser-range scanners and hand-held digitizers). A suitable reconstruction method then turns the point data into a smooth, water-tight and proper resulting surface. In industry, the main difficulty comes from defective samples that suffer from noise, holes and overlapping regions. Defective samples are often unavoidable due to sampling inaccuracy, scan mis-registration and accessibility constraints of the scanning device. They make most existing reconstruction methods impractical for engineering application, because the orientation or neighbour information of the points, on which most methods heavily rely, is hard to evaluate. For instance, many methods rely on consistent normals, or demand triangular meshes generated from the point data. However, holes and overlapping samples confuse the neighbour relationships of the points, so jagged, self-intersecting regions can appear in the corresponding triangular mesh, or the estimation of consistent normals becomes an ill-posed problem. Only a few methods need no such specific information, but they resort to complex or time-consuming steps, like re-sampling, distance computation, mesh smoothing or deformable models. Even if these methods can generate a water-tight resulting surface, a reasonable fitting of overlapping samples and holes is not guaranteed. In fact, such issues, especially "bad-scanning" data, often lead to long scanning time, massive manual work and poor model quality.
Given these challenges, this paper proposes a novel surface reconstruction method that takes defective point clouds without any specific information as input and outputs a smooth and water-tight surface. The main idea is that (1) the technique is based on an implicit function, because implicit reconstruction conveniently guarantees a water-tight result; (2) the approach is indirect: two off-set surfaces are generated to best fit the point clouds instead of a direct approximation. As shown in Fig. 1 (a 1D situation for simple expression), the point clouds are represented at the origin of coordinates (Fig. 1 (a)). The space is divided into an inside part (positive axis in Fig. 1 (a)) and an outside part (negative axis in Fig. 1 (a)). If the point clouds are defective, it is very hard to reconstruct the final implicit function (solid line in Fig. 1 (b)) directly, including reducing noise, filling holes and merging overlapping samples. Therefore, this method constructs dual off-set functions to approximate the inner and outer level sets of the final implicit surface (solid lines in Fig. 1 (c)). The dual water-tight surfaces form a minimal crust surrounding the point data. Based on the dual relative functions, a novel energy function is defined. By minimizing the energy function, the resulting surface (dash-dot line in Fig. 1 (c)) is finally obtained and visualized.
The dual relative functions are built on volumetric grids by extending some sophisticated 2D grey-image processing algorithms into 3D space, including morphology operations and the weighted vector median filter algorithm. The method needs no specific information and also has the following advantages: (1) the implementation needs no time-consuming steps, such as computing the distances between every pair of points, which most existing methods perform; (2) the dual gradient functions provide global constraints on the resulting surface, so holes can be filled smoothly and overlapping samples can be fitted much more reasonably; (3) the method can successfully reconstruct "bad-scanning" point data which many methods cannot handle. The remainder of the paper is organized as follows: after previous works are reviewed and compared in section 2, the process and details of this method are described in section 3. To demonstrate the effectiveness, extensive numerical implementations are discussed in section 4. Finally, the conclusion and future work are summarized in section 5.
2. Related work
The previous algorithms of surface reconstruction can be generally classified into two categories: explicit methods and implicit methods. Most explicit methods employ Delaunay triangulation or Voronoi diagrams, like alpha shapes (Edelsbrunner & Mücke 1994), the crust method (Nina et al. 1998), triangular sculpting (Jean-Daniel 1984), mesh growing (Li et al. 2009) and their developed versions (Veltkamp 1995; Baining et al. 1997; Attali 1998; Amenta et al. 2000; Amenta et al. 2001; Yang et al. 2009). But noise and overlapping samples can make the resulting surface jagged. Subsequent processing such as smoothing (Ravikrishna et al. 2004), refitting (e.g. Chandrajit et al. 1995; Shen et al. 2005; Shen et al. 2009) or blending (LA Piegl 1997) is then required.
In contrast, implicit methods are much more effective at inferring the topology of points, blending surface primitives, tolerating noise, and filling holes automatically. A popular family of algorithms is based on blending locally fitted implicit primitives, such as the radial basis function (RBF) method (Carr et al. 2001; Greg & James 2002), the multi-level partition of unity (MPU) method (Yutaka et al. 2003), products of univariate B-splines (Song & Jüttler 2009) and tri-cubic B-spline basis functions (Esteve et al. 2008) on voxelization. However, they need either consistent normals as auxiliary information or point clouds with few defective samples. The local primitives also include polynomials of point set surfaces (Alexa et al. 2001; Gael & Markus 2007) with moving least squares (MLS) approximation. MLS methods have to employ normal computation and projection operators, which can lead to low efficiency and need extra procedures to improve (Anders & Marc 2003; Marc & Anders 2004). Some methods need pre- or post-processing, like orientation estimation (Vanco & Brunnett 2004; David & Guido 2005), smoothing operations (e.g. Yukie et al. 2009) or hole filling. For instance, the method proposed in (Davis et al. 2002) uses diffusion to fill holes on reconstructed surfaces. The approach essentially solves the homogeneous partial differential equation (PDE) given boundary conditions to create an implicit surface spanning the boundaries. The Poisson method (Michael et al. 2006) is also a PDE-based reconstruction algorithm using oriented point clouds. Several approaches use combinatorial structures, such as the signed distance function and the Voronoi diagram as in (Boissonnat & Cazals 2002) (hybrid method). But the normal information of the point clouds is still required.
Only a few methods demand little restriction on the point data. Hoppe's method (Hoppe et al. 1992) is a typical method of this category. It creates the object surface by locally fitting planes to generate a signed distance function and triangulating its zero-level set. The signed distance function can also be accumulated into volumetric grids as proposed in (Brian & Marc 1996). But both methods are troubled by noisy or sparse data, which make the connection relationships of these regions hard to determine. The method proposed in (Alliez et al. 2007) employs the Voronoi diagram to estimate inconsistent normals and solves a generalized eigenvalue problem to construct the resulting surface. However, it suffers from low computational efficiency. The level set method (Zhao et al. 2000), a typical deformable model, reconstructs the surface by solving the corresponding level set equation defined on the point data. It is a time-consuming method since it requires a re-initialization process and needs to update all nodes of the compute grids at every time step. Another reconstruction method employs a voting algorithm (Xie et al. 2004) to cluster points into local groups, which are then blended to produce a signed distance field using a modified Shepard's method. But it needs to compute the medial axis transformation and perform an active contour growing process, like deformable models. The method in (Esteve et al. 2005) proposes a DMS operation on volumetric grids to fill holes by detecting incursions into the interior of the surface and approximating them with a bounded maximum distance. It is an improvement of (Song & Jüttler 2009), but a post-process has to be introduced for low-density data zones.
The typical implicit methods mentioned above are summarized in Table 1 in four respects: whether the methods need specific information, and whether they are effective at reducing noise, filling holes and merging overlapping samples. Although all the implicit methods guarantee water-tight results so that holes are filled, some methods (like the level set method) cannot fill holes smoothly. Many methods have certain low-efficiency steps, like solving a dense matrix equation (e.g. RBF), projection procedures (e.g. MLS) or computing the distance function among all the points for neighbour information. All the methods can reduce noise, but their effectiveness differs. For instance, MPU and Hoppe's method are influenced by noise more than others; if there is too much noise, the resulting surface is still not smooth. None of the methods addresses how to merge overlapping samples, especially "bad-scanning" points, which is a common problem in practice.
| Method name | Specific information | Reduce noise | Fill holes | Merge overlapping samples |
| RBF (Carr et al. 2001; Greg & James 2002) | Yes | Yes | Yes | — |
| MPU (Yutaka et al. 2003) | Yes | Yes | Yes | — |
| MLS (Alexa et al. 2001; Gael & Markus 2007) | No | Yes | Yes | — |
| Poisson method (Michael et al. 2006) | Yes | Yes | Yes | — |
| Hybrid method (Boissonnat & Cazals 2002) | Yes | Yes | Yes | — |
| Hoppe's method (Hoppe et al. 1992) | No | Yes | Yes | — |
| Voronoi method (Alliez et al. 2007) | No | Yes | Yes | — |
| Level set method (Zhao et al. 2000) | No | Yes | Yes | — |
| Voting algorithm (Xie et al. 2004) | No | Yes | Yes | — |
| DMS method (Esteve et al. 2005) | No | Yes | Yes | — |
Rather than constructing the final surface directly, it is much easier to determine the off-set surfaces of the point clouds. Some methods have focused on this idea, like the duplex fitting method (Liu & Wang 2009) and the dual-RBF method (Lin et al. 2009). But they need consistent normals for accurate fitting. Recently, some robust and efficient methods from other research areas (like image processing (Peng & Loftus 1998; Peng & Loftus 2001) and statistics (Roca-Pardinas et al. 2008)) have been introduced into reverse engineering. Inspired by these two ideas, this paper describes a novel reconstruction method using dual off-set surfaces by extending morphology operations and weighted vector median filter algorithms. The comparison between this method and typical implicit methods is added in Table 1. This method provides a convenient and efficient manner of reconstruction and addresses the issues of overlapping points and "bad-scanning" samples.
3. Method description
The main process of the proposed method is illustrated in Fig. 2. Let P represent the defective point clouds (black points in Fig. 2) sampled from a real object surface S (black line in Fig. 2). The goal of this method is to reconstruct an implicit function f whose zero level set approximates S reasonably. The general intermediate step is to approximate f_in and f_out, which denote the off-set surfaces of S at a slight distance. Finally, f is generated by blending f_in and f_out according to a minimal model. The mathematical description can be defined as follows: given the position coordinates of the discrete point set P around a 3D local region Ω, find an implicit function f that satisfies

f(x) > 0 for x inside Ω,  f(x) = 0 for x on ∂Ω,  f(x) < 0 for x outside Ω,   (1)

by approximating the off-set functions f_in and f_out. ∂Ω in Eq. (1) represents the interface of the region.
In the first step, f_out and f_in are constructed on volumetric grids (represented as two red lines in Fig. 2 (b)). The dual functions are water-tight and guarantee a minimal narrowband surrounding the point clouds. This step is accomplished by morphology operations, a powerful theory in image processing (Maragos et al. 1996). In the second step, by blurring the boundaries of f_out and f_in, two monotonic functions are obtained and transformed into gradient fields (arrows in Fig. 2 (c)). This paper extends the weighted vector median filter algorithm to reduce the noise influence. In the third step, the surface reconstruction problem is formulated as solving a minimal energy model that blends the dual gradient fields (arrows in Fig. 2 (d)). By deriving and solving the corresponding Euler-Lagrange equation, the resulting implicit function is obtained. Finally, the reconstructed surface is extracted by the marching cubes method (William & Harvey 1987) and visualized (dashed line in Fig. 2 (e)).
3.1. Generate off-set functions
The point data first need to be divided into volumetric grids (voxelization). The best voxelization is one whose voxel size matches the data density, so that each grid cell contains only one point. As shown in Fig. 3, the black points represent defective points and the solid line represents the reasonable resulting surface. If the point clouds are uniform, the voxel-building step is immediate. In industry, uniform samples are common because the original scanning data are numerous, and a regular re-sampling for reduction is often performed before reconstruction. If the point data have a very irregular sampling density, the ratio of the number of voxels containing two or more points to the total number of occupied voxels should be calculated. If this value is too high, the volumetric grid is re-calculated by decreasing the grid length. The main difficulty is how to guarantee that the off-set surfaces, f_in and f_out, are water-tight. This requires determining the inside/outside state of the grids near defective samples, especially holes and overlapping regions (circles in Fig. 3 (a)). This method extends the basic operations of mathematical morphology, dilation and erosion, to determine the signs of f_in and f_out near the holes and overlapping samples (dash-dot line in Fig. 3 (b)).
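The voxelization check above can be sketched as follows. This is a minimal 2D illustration under our own naming (`voxelize`, the test points, and the threshold-free ratio are all ours), not the paper's implementation:

```python
import numpy as np

def voxelize(points, h):
    """Map points to voxel indices of edge length h and report the fraction
    of occupied voxels that hold two or more points."""
    idx = np.floor(np.asarray(points) / h).astype(int)
    _, counts = np.unique(idx, axis=0, return_counts=True)
    ratio = np.count_nonzero(counts >= 2) / len(counts)
    return idx, ratio

# three occupied voxels; voxel (0, 0) holds two points
pts = np.array([[0.10, 0.12], [0.30, 0.25], [1.50, 1.40], [2.70, 0.20]])
idx, ratio = voxelize(pts, 1.0)
```

If the returned ratio is too high, the caller would shrink `h` and repeat, as the text describes.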
For simple expression, 2D uniform point clouds of an arbitrary shape are designed (Fig. 4 (a)). This shape contains lines, fillets and free curves, in accordance with practical products. Some random noise, overlapping regions and holes are added. Fig. 4 (b) shows the voxelization results. The nodes (white rectangles) of the grids containing points (black points) are labeled with value 1 and the other nodes are labeled with value -1. The dilation operation of morphology is then used to construct a rough crust (as shown in Fig. 4 (d)). Let I denote the point image of voxelization and B denote the structuring element. The dilation of the image I by the element B is denoted by

I ⊕ B = ∪_{b∈B} I_b,   (2)

where I_b is the translation of I by the offset b.
The best choice of B should preserve the shape of the point clouds and perform the dilation in fewer passes. According to the research in (Maragos et al. 1996), a disk shape (as shown in Fig. 4 (c)) in 3D space is a good choice. If B is a rectangle or sphere shape, some shape details may be blurred. The structuring element size should be a little larger than the noise distribution. If the error of the noise is defined as σ, the length l_B of B should be set as

l_B = 2⌊σ/h⌋ + 1,

where l_B represents the length of B, h denotes the grid size chosen according to the density of the points, and ⌊·⌋ is the round-down function. For instance, if random noise on the order of the grid size is added to the points, the structuring element size should be more than 2. If the size of B is too large, although fewer dilation passes are needed, the shape of the points may be blurred. A suitable selection is therefore a trade-off between these two effects. In this paper, B is generally set to a medium size (Fig. 4 (c)); if the point clouds are very dense with little noise, the size can be smaller.
With the closed crust, the inside and outside of the point data can be roughly separated. The inside part is then filled (see Fig. 4 (e)) by a simple flood-fill algorithm. It starts at a node (e.g. the middle grid node) known to be inside; the nodes accessible from the initial node are labeled "inside", and the remaining nodes are labeled "outside". Each node of the resulting image is thereby classified as lying inside the object (value 1) or outside/on the object (value -1). When sparse samples or large holes exist, the dilation should be executed several times until a water-tight crust is constructed. The flood-fill step can check whether the crust is water-tight: if the crust is not closed, the flood-fill operation covers the whole space of grids. Let I_f denote the image after the flood-fill step and V denote the whole volumetric image; if I_f = V, the crust is not closed, so the image needs to be dilated and checked again. Due to the voxelization and the choice of B, the dilation for the examples in this paper is performed fewer than 3 times (except for Fig. 18). The resulting image in Fig. 4 (e) is obtained by iteratively performing the operation. Then the erosion operation is used to restore the image, expressed by

I ⊖ B = { z | B_z ⊆ I }.

The structuring element and the number of erosion passes must be the same as in the dilation step, otherwise the image cannot be recovered. The restored image is treated as the outside function f_out (Fig. 4 (f)).
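The dilate, flood-fill, and erode sequence of this subsection can be sketched on a 2D binary grid as follows. This is a minimal illustration under stated assumptions: a disk structuring element, a seed node known to be inside, and a sample circle with one missing point standing in for a hole. All names are ours, and the wrap-around of `np.roll` is harmless here only because the shape stays away from the grid border:

```python
import numpy as np
from collections import deque

def disk_offsets(r):
    """Offsets of a disk-shaped structuring element of radius r."""
    return [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]

def dilate(img, offs):
    """Binary dilation: a node is set if any structuring offset hits a set node."""
    out = np.zeros_like(img)
    for dy, dx in offs:
        out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def erode(img, offs):
    """Binary erosion: a node stays set only if all structuring offsets are set."""
    out = np.ones_like(img)
    for dy, dx in offs:
        out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def flood_fill(blocked, seed):
    """Label all nodes reachable from `seed` without crossing `blocked` nodes."""
    filled = np.zeros_like(blocked)
    filled[seed] = True
    q = deque([seed])
    h, w = blocked.shape
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not blocked[ny, nx] and not filled[ny, nx]:
                filled[ny, nx] = True
                q.append((ny, nx))
    return filled

# a circle of samples with one point removed, i.e. a small hole
pimg = np.zeros((41, 41), dtype=bool)
for t in range(0, 360, 5):
    if t == 0:
        continue  # the hole
    a = np.deg2rad(t)
    pimg[20 + int(round(12 * np.sin(a))), 20 + int(round(12 * np.cos(a)))] = True

offs = disk_offsets(2)
crust = dilate(pimg, offs)                   # dilation closes the hole
inside = flood_fill(crust, (20, 20))         # fill from a seed known to be inside
f_out_region = erode(inside | crust, offs)   # erode back: support of f_out
```

If the crust were not closed, `flood_fill` would leak to the border and cover the whole grid, which is exactly the water-tightness check described in the text.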
The inside function f_in is generated by dilation from the inside part, where the intermediate flood-filled result is adopted. The structuring element B is the same as in the process of constructing f_out. The result of f_in is shown in Fig. 4 (g).
As shown in Fig. 3 (b), the relative functions f_in and f_out construct a narrowband. This process needs no complex computation, only set operations. Theoretically, this step could also be used in the situation of a non-manifold surface, which divides the space into more than two pieces. But it then needs the seed points for the flood-fill operation to be specified manually, because without any prior information the topology of the point clouds is not unique and may be confused. In industrial applications, this kind of product is a rare case.
Actually, if many small details of the points need to be preserved, f_in and f_out can be updated on non-uniform grids by a subdividing process with two general steps (Fig. 5). In Fig. 5 (a), the dotted lines represent the dual functions on rough uniform grids, and the solid line represents the old resulting surface. First, each grid cell containing more than two points is subdivided into the next level of 8 neighbour grids (see Fig. 5 (a)) and the points within are inserted into the new subdivided grids. The iterative subdividing process stops when no grid contains two or more points. Then, a local flood-fill process is performed in the new subdivided grids. For instance, for generating the new inside function, the filling operation starts at a node of the last level known to be inside (solid rounds at nodes in Fig. 5 (b)). It is performed on each subdivided level hierarchically and stops when the inside/outside states of all new subdivided grids are determined. Since f_in and f_out are digitally based, they can easily be updated. With non-uniform grids, the new resulting surface (red dotted line in Fig. 5 (b)) can preserve more details. However, the subdividing process is often unnecessary because, like other reconstruction methods based on fitting local primitives, the details of the corresponding resulting surface are deeply influenced by the noise. So it should be used when the point clouds are dense enough and carry little noise.
3.2. Construct weighted gradient fields
f_in and f_out generated in the last step are rough approximations of the off-set surfaces of S due to the noise influence. This step reduces that effect and reconstructs smooth and reasonable off-set surfaces. The weighted vector median filter from 2D grey-image processing is extended into 3D space. Since f_in and f_out are Heaviside-like functions, they need to be transformed into gradient fields by two sub-steps. First, they are transformed into monotonic functions by blurring their boundaries within a neighbour space. Second, the corresponding gradient functions are computed. In this paper, a kernel of the Gaussian function G with standard deviation σ_g is employed to produce non-integer node values,

G(x) = exp(−|x|² / (2σ_g²)) / Z,

where Z normalizes the kernel to unit sum.
f_in and f_out are thus transformed as

v_in = ∇(G ∗ f_in),  v_out = ∇(G ∗ f_out).   (3)
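This two sub-step transformation can be sketched as follows on a 2D Heaviside-like labelling (+1 inside, -1 outside). The kernel radius and σ are illustrative choices of ours, not the paper's values, and the separable blur uses plain zero-padded "same" convolutions for brevity:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel sampled on integer offsets."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur2d(img, sigma=1.0, radius=2):
    """Separable Gaussian blur with 'same'-size 1D convolutions."""
    k = gaussian_kernel1d(sigma, radius)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

# Heaviside-like inside/outside labelling: +1 inside a square, -1 outside
f_label = -np.ones((32, 32))
f_label[8:24, 8:24] = 1.0

smooth = blur2d(f_label)          # monotonic across the blurred boundary
gy, gx = np.gradient(smooth)      # gradient field of the blurred function
mag = np.hypot(gy, gx)            # concentrated near the boundary, ~0 elsewhere
```

The gradient magnitude is zero deep inside and outside the labelled region and peaks across the blurred boundary, which is the field the next step filters.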
Although the gradient computation of Eq. (3) amplifies the noise influence, it is well suited to the weighted vector median filter for reducing the noise in v_in and v_out, an effective and robust denoising tool in image processing (Barner et al. 2001). The algorithm needs a metric and a relationship between the elements in a neighbour grid region. Let v_i represent a vector in a neighbour region W, which contains n vectors. The metric of two vectors v_i and v_j can be defined as the norm of the difference between them,

d(v_i, v_j) = ||v_i − v_j||.
The relationship between each pair of vectors, represented as a weight w_ij, should change according to the metric. According to (Nie & Barner 2006), it can be expressed by the constraint that w_ij is positive and decreases monotonically as d(v_i, v_j) increases.
Usually, w_ij can adopt a Gaussian function coupled with the metric,

w_ij = exp(−d(v_i, v_j)² / (2σ_w²)).
The weight w_ij is adopted to modify the median vector in the local region W so as to impose local geometric constraints. Therefore, the output of the modified vector is defined as

v* = arg min_{v_j ∈ W} Σ_{i=1}^{n} w_ij d(v_j, v_i),

where the (unweighted) median vector is defined by

v_med = arg min_{v_j ∈ W} Σ_{i=1}^{n} d(v_j, v_i).
The neighbour region W is set as a neighbour space a little larger than the structuring element of the last step. If it is smaller than B, the noise cannot be reduced; if it is too large, the computation is inefficient, since each vector v_j has to be compared with all other vectors in the neighbour region to find the median vector. The L2 norm is adopted as the metric in the rest of the paper. After the filtering and convolution, the functions f_in and f_out are redefined from the filtered gradient fields.
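A simplified sketch of the vector median inside one filter window follows: the output is the input vector minimising the (weighted) sum of L2 distances to all vectors in the window, so an outlying noisy vector is rejected rather than averaged in. Uniform weights are the default here; the weighted variant passes per-neighbour weights. Names and test data are ours:

```python
import numpy as np

def vector_median(vectors, weights=None):
    """Return the input vector minimising the (weighted) sum of
    L2 distances to all vectors in the window."""
    v = np.asarray(vectors, dtype=float)
    w = np.ones(len(v)) if weights is None else np.asarray(weights, dtype=float)
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2)  # pairwise L2 metric
    cost = d @ w                     # aggregate distance of each candidate
    return v[np.argmin(cost)]

# a window of gradient vectors: a tight cluster plus one noisy outlier
window = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1], [1.0, 0.05], [-5.0, 4.0]])
med = vector_median(window)          # a cluster member; the outlier is rejected
```

Unlike a mean filter, the output is always one of the input vectors, which is why the median preserves sharp field transitions while discarding impulsive noise.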
f_in and f_out are visualized in Fig. 7 by extracting the zero-level lines from their integrated functions. The details of the results demonstrate that the noise influence is effectively rejected, as the comparison of one noisy region shows (on the right in Fig. 7 (a) and Fig. 7 (b)). The overlapping regions do not lead to any jagged errors or self-intersections. Actually, only one of the dual functions, either f_in or f_out, could be used for the surface reconstruction by gradient computation with a large kernel size. But then the holes cannot be filled flatly, and some concave parts remain (at the bottom in Fig. 7 (a) and Fig. 7 (b)). Since the outside function f_out and the inside function f_in are obtained by erosion and dilation respectively, which are opposite to each other, a reasonable result can be obtained by blending the dual functions.
3.3. Formulate and solve PDE
Based on the dual gradient functions, a minimal energy model is proposed. The gradient of the resulting implicit function f should best approximate the combined field generated in the last step. The difference between them is defined as the square of the L2 norm. The objective function is expressed as

E(f) = ∫ (α ||∇f − v_in||² + β ||∇f − v_out||²) dx,

where α and β are positive constants for adjusting the influence of v_in and v_out on f. Thus, the corresponding Euler-Lagrange equation is derived as

(α + β) Δf = ∇ · (α v_in + β v_out).
The boundary condition is imposed on the border of the volumetric grid. Actually, this method treats the two gradient functions v_in and v_out equally; thus, the positive parameters are set as α = β. The PDE is a typical Poisson equation, and there are many methods to solve this classic equation. This paper adopts the Fast Fourier Transform (FFT) method, since it only needs to transform the equation into a Fourier series and solve a linear equation.
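The FFT solve can be sketched as follows for a 2D grid. Periodic boundaries are an assumption made here for brevity (the paper imposes its condition on the grid border); under them the 5-point discrete Laplacian is diagonalized by the DFT, so each Fourier mode is solved by one division and the result is exact up to an additive constant:

```python
import numpy as np

def solve_poisson_fft(rhs):
    """Solve lap(f) = rhs on a periodic 2D grid (unit spacing) via FFT.
    The zero mode is set to 0, fixing the free additive constant."""
    n, m = rhs.shape
    wy = 2.0 * np.pi * np.fft.fftfreq(n)
    wx = 2.0 * np.pi * np.fft.fftfreq(m)
    # eigenvalues of the 5-point discrete Laplacian under the DFT
    lam = (2.0 * np.cos(wy)[:, None] - 2.0) + (2.0 * np.cos(wx)[None, :] - 2.0)
    rhs_hat = np.fft.fft2(rhs)
    f_hat = np.zeros_like(rhs_hat)
    nz = lam != 0.0
    f_hat[nz] = rhs_hat[nz] / lam[nz]
    return np.real(np.fft.ifft2(f_hat))

# verify on a smooth periodic field: apply the discrete Laplacian, then invert
x = np.arange(32)
f_true = np.sin(2 * np.pi * x / 32)[:, None] + np.cos(2 * np.pi * x / 32)[None, :]
lap = (np.roll(f_true, 1, 0) + np.roll(f_true, -1, 0) +
       np.roll(f_true, 1, 1) + np.roll(f_true, -1, 1) - 4 * f_true)
f_rec = solve_poisson_fft(lap)
```

In the reconstruction, `rhs` would be the discrete divergence of the blended gradient field, and the extension to 3D replaces `fft2` with `fftn` and adds a third eigenvalue term.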
To visualize the resulting surface, the level-set value of f has to be determined. Theoretically, the zero-level set is the object surface; however, due to the weighted combination, the value of the object level set deviates slightly from zero. This paper adopts an approximate value derived from the positions of the input samples. It is determined by evaluating f at the sample positions and using the average of the values for iso-surface extraction,

c = (1/N) Σ_{i=1}^{N} f(p_i).
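The iso-value averaging can be sketched as follows; the synthetic field and the sample voxel indices are ours, standing in for the solved f and the voxelized input points:

```python
import numpy as np

# a synthetic solved field: positive inside a circle of radius 5, zero on it
f = np.fromfunction(lambda y, x: 25.0 - (x - 8.0)**2 - (y - 8.0)**2, (17, 17))

# voxel indices of the input samples, assumed known from the voxelization step
samples = np.array([[8, 3], [8, 13], [3, 8], [13, 8]])

# iso-value = average of f at the sample positions
iso = f[samples[:, 0], samples[:, 1]].mean()
```

The extracted surface is then the `iso`-level set of f rather than its zero-level set.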
The dual functions f_in and f_out give reasonable constraints that guarantee the global shape of the resulting surface. The blended gradient fields make the overlapping regions and holes naturally adopt a flat result. Compared with the results in Fig. 7, besides the reduced noise influence, the holes are filled flatly and the surface patches over the overlapping samples are fitted much more reasonably.
4. Numerical examples and analysis
In this paper, the implementation employs a PC with a 2 GHz CPU and 1 GB of main RAM, using Matlab coupled with a C++ API as the software platform. To demonstrate the effectiveness and robustness, this paper takes a series of point clouds sampled from a fan disk model (Fig. 9 (a)) as examples. The point clouds (Fig. 9 (b)) are sampled with uniform density, so the voxel length is set according to the sampling density.
This paper considers all the defective-sample situations, as shown in the left column of Fig. 10: sparse point clouds (Fig. 10 (a)), point clouds with random noise (Fig. 10 (c)), point clouds with holes (Fig. 10 (e)), point clouds with overlapping samples (Fig. 10 (g)) and hybrid point clouds (Fig. 10 (i)) which contain all the defective situations. The point clouds with holes are generated by removing some random places from the original samples (Fig. 9 (b)). The overlapping samples are generated by mis-registration, which often makes the resulting surface show scallops. The right column shows the corresponding resulting surfaces and details (from Fig. 10 (b) to Fig. 10 (j)). The proposed method is convenient to implement: the hybrid defective samples (Fig. 10 (i)) need no extra steps, the holes of the resulting surface (Fig. 10 (j)) are filled smoothly, and no self-intersections exist at the overlapping samples.
The numerical details of all the fan disk examples are shown in Table 2. The time complexity of the method generally includes three main components: dilation-erosion, the weighted vector median filter, and the FFT. As the weighted vector median filter adopts a fixed window of neighbour space, its time complexity depends only on the point number, which is far less than the grid number. Therefore, the computing time mainly depends on the resolution of the space grids, which is related to the point density. Since the point clouds are all generated from the same original model, the computing times do not differ much except for the sparse points. The whole time for every example is within 80 seconds. This paper adopts the average error (between the resulting surface and the point clouds) as the main accuracy standard for evaluation. The average errors of all examples are low enough to satisfy practical applications. The results also demonstrate that noise and overlapping regions cause more errors than the other defective samples since they often influence sharp corners of the resulting surface. Besides the average errors against the point clouds, this paper compares the resulting surface with the original model (Fig. 9 (a)) using an error-distribution colour map (Fig. 11). Fig. 11 (a) is the result for the point clouds with holes: the holes are filled smoothly and reasonably, so the errors at the holes are nearly the same as in their neighbouring regions. Fig. 11 (b) is the result for the point clouds with overlapping samples. Of course, the errors are larger than in other regions due to the mis-registration, but the overlapping samples are merged reasonably and do not cause any impulsive changes in the resulting surface, so their errors change smoothly.
| Point clouds of fan disk | Number of points | Grid resolutions | Compute time (s) | Average errors (mm) |
| Points with holes | 99155 | | 75.6654 | 0.016 |
| Points with overlapping | 125567 | | 76.1203 | 0.035 |
| Hybrid defective samples | 124221 | | 75.2368 | 0.046 |
In fact, the voxel length cannot strictly follow the density of the point clouds, but if it is not set suitably, the resulting surface becomes over-fitted or over-smoothed. Two examples with "bad" grid sizes are shown in Fig. 12, both resulting surfaces of the noisy point clouds (Fig. 10 (c)). Fig. 12 (a) is the over-fitted example with a small grid. Fig. 12 (b) is the over-smoothed case with a larger grid size. This paper suggests a suitable grid size proportional to the average density of the point clouds.
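One way to estimate an average point spacing for choosing the grid size is the mean nearest-neighbour distance; tying the voxel edge to this value is our reading of the suggestion above, and the brute-force O(n²) computation is only for illustration:

```python
import numpy as np

def average_spacing(points):
    """Mean nearest-neighbour distance of a point set (brute force, O(n^2))."""
    p = np.asarray(points, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).mean()

# unit grid of four points: every nearest neighbour is 1.0 away
grid_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = average_spacing(grid_pts)            # candidate voxel edge length
```

A grid edge near this value keeps roughly one point per voxel, avoiding both the over-fit (grid too small) and over-smooth (grid too large) cases shown in Fig. 12.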
Besides the theoretical fan disk model, this paper also adopts some practical examples, since in real cases the overlapping regions and holes are complicated. The following practical point clouds were scanned by a hand-held digitizer (type number: Cimcore Infinite Sc2.4). Fig. 13 (a) shows the point clouds of a mechanical part. It is an example containing many overlapping samples (details labeled in circles). The resulting surface (Fig. 13 (b)) demonstrates that the overlapping regions can be reasonably fitted and smoothed. The next example is the point clouds of a piston rod. At the middle bottom of Fig. 14 (a) are the points within a section plane, where overlapping samples exist. The detail of a hole is shown at the lower right (Fig. 14 (b)). In practice, sparse points often exist in non-uniform point data. Fig. 15 (a), the point clouds of engine outtake ports from an automobile in a real case, shows this situation. Because the density is not uniform, this paper adopts the average density to decide the grid size. The resulting smooth and water-tight surface is shown in Fig. 15 (b).
Fig. 16 shows the example of an ancient cup which contains many free-form details to preserve. Since the point clouds have no holes and little noise, but many overlapping samples due to multi-scanning, the non-uniform grids are employed. Detail 3 in Fig. 16 is the slightly noisy part; details 1 and 2 are the overlapping regions, because the cup bottom is hard to scan in only one pass. The point clouds are dense enough that the structuring element B can be chosen with a small size. In this example, the structuring element size is set small, just for merging the overlapping samples. The resulting surface and details are shown in Fig. 16 (b). The resulting surface is smooth, and the noise influence and overlapping regions are reduced. Although a few small details are blurred due to the dilation-erosion operation, most shape features are preserved.
Because this method needs no triangulation, its efficiency is much higher than that of methods which require triangulation. The mechanical part (Fig. 13 (a)) is taken as an example for comparison. This paper uses Geomagic (version 8.0), a widely used commercial software utility in the reverse engineering domain. It performs reconstruction based on building triangular meshes, as most conventional methods do. The result is shown in Fig. 17 (a). Owing to the influence of the noise and the non-uniform regions in the point clouds, it is difficult to construct a water-tight triangular mesh (Fig. 17 (a)). Some holes and rough places exist (Fig. 17 (b)). Thus, manual hole-filling work has to be performed (Fig. 17 (c)). The times of triangulation and of reconstruction by this method are given in Table 3, where the Geomagic time includes no manual filling time. The comparison demonstrates that as the number of points increases, or when much noise exists, the triangulation time grows faster than the surface reconstruction time of this method.
|Number of points||Time of triangulation (s)||Reconstruction time of this method (s)|
The proposed method is especially useful for dealing with "bad-scanning" point clouds like the example shown in Fig. 18. It is a resin mould of engine intake ports from an automobile in a real case (Fig. 18 (a)), and the original CAD model is shown in Fig. 18 (b). Because the resin mould is too soft to fix, the point clouds from multi-scanning have larger mis-registration errors (e.g. labeled by circle 1 in Fig. 18 (c)) than all the examples above. The holes (e.g. labeled by circle 2 in Fig. 18 (c)) are very large and noise appears everywhere. These points are very disadvantageous for obtaining specific information, especially a triangular mesh of high quality. This paper gives the results of two typical methods based on Delaunay triangulation. Fig. 18 (d) is the triangular mesh reconstructed by Raindrop Geomagic. The defective samples obviously influence the result: the holes need to be filled by additional manual work and the overlapping regions lead to some jagged triangles. Fig. 18 (e) is the resulting surface by the power crust method (Amenta et al. 2001). Although the resulting surface is water-tight, the surface is rugged and the overlapping regions self-intersect. Both results need some complex post-processing. Fig. 18 (f) shows the resulting surface by the level set method (Zhao et al. 2000), which needs no specific information. The resulting surface is much better, but it is not smooth enough, especially in the regions of holes. The final surface by this method is shown in Fig. 18 (g). This method guarantees a smooth and water-tight surface; the holes are filled flatly and there are no self-intersections at the overlapping samples. The "bad-scanning" samples need more dilation-erosion operations, but the resulting surface is acceptable and convenient for further post-processing. However, if the input data have very large holes or serious overlapping samples, the details of the resulting surface may be blurred due to too many dilation-erosion passes.
The numerical details of the practical examples are shown in Table 4. Since the point data of the engine intake ports contain so many defective samples, the resulting surface has larger average errors than the other examples. Even though the ancient cup example has nearly 1 million points, the computing time is only 90 seconds.
| Name of point clouds | Number of points | Grid resolution | Average errors (mm) |
| Engine outtake ports | 98175 | | 0.035 |
| Engine intake ports | 154890 | | 0.087 |
5. Conclusions and future work
This paper presents a novel implicit surface reconstruction method based on dual off-set gradient functions. Its core idea is to construct the dual off-set functions and generate the resulting surface indirectly. Extensive examples indicate that the proposed method is robust for reconstructing discrete point sets, especially practical point data or "bad-scanning" data. The method needs no specific information, so it can skip complex pre-processing and makes the reconstruction process much more efficient. The morphology operation is based on set calculation, so it is very fast at making the off-set surfaces water-tight. The weighted vector median filter algorithm is extended into 3D space to reduce the noise influence and make the final surface much smoother. The dual relative functions construct a minimal crust surrounding the points from both sides, which guarantees that the holes and overlapping samples are fitted reasonably.
In future research, the problem of preserving details from "bad-scanning" points will be studied further. Some advanced hierarchical data structures will be discussed for a more efficient implementation. The choice of structuring elements in morphology is a classic problem in the field of image processing; some improved structuring elements, such as combinational shapes, will be discussed in future work. The method will also be improved to handle some complex non-manifold point clouds. Some other image processing methods, like adaptive filter algorithms, will also be extended to surface reconstruction. To generate a suitable surface from defective point data, it is very important to employ prior industrial design knowledge. With such prior but general knowledge, the resulting surface could be much more reasonable than one reconstructed only from geometric information. The research will also focus on this respect.
The research work is supported by the Dongfang Electric Corporation Research & Development Centre, the Intelligent Equipment & Control Technology Institute, and the Natural Science Foundation of China (NSFC) (Projects No. 50835004 and 50625516).