
Surface Approximation from Rapidly Varying Data: Applications to Geophysical Surfaces and Seafloor Surfaces

By Apprato Dominique, Gout Christian and Le Guyader Carole

Published: October 1st 2009

DOI: 10.5772/8309


1. Introduction

We propose an approximation method for surfaces exhibiting faults and/or large variations. The method combines image segmentation tools, meshing constraints, finite element methods, and spline theory.

Curve and surface fitting using spline functions from rapidly varying data is a difficult problem (see Salkauskas, 1974, Franke and Nielson, 1984, or Franke, 1982). In the bivariate case, and without information about the location of large variations, usual approximation methods lead to instability phenomena or undesirable oscillations that can locally, and even globally, hinder the approximation (Gibbs phenomenon).

We therefore propose a new method based on scale transformations (see Apprato and Gout, 2000). Its originality lies in a pre-processing and a post-processing of the data: instead of seeking an approximant directly, we first apply a scale transformation to the z-values of the function. In the particular case of surface approximation, this pre-processing removes the large variations of the unknown function, so that the transformed data no longer exhibit steep gradients and a usual approximant can be applied without creating oscillations.
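To fix ideas, here is a minimal sketch of the pipeline in Python, under simplifying assumptions: a 1D dataset, monotone piecewise-linear stand-ins for the scale transformations, and scipy's generic UnivariateSpline standing in for the spline operator T_d (the chapter itself uses a D^m-spline, described in subsection 2.2):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic rapidly varying data: a smooth ramp with a sharp step.
x = np.linspace(0.0, 1.0, 200)
z = np.where(x < 0.5, 0.1 * x, 1.0 + 0.1 * x)

# --- Pre-processing (stand-in for phi_d): map the sorted distinct
# z-values onto an evenly spaced grid u_i in [0, 1], then transform
# every sample by monotone piecewise-linear interpolation.
z_sorted = np.unique(z)                        # ordered distinct values
u_grid = np.linspace(0.0, 1.0, z_sorted.size)  # regular subdivision of the target interval
u = np.interp(z, z_sorted, u_grid)             # u_i = phi_d(z_i): large jumps flattened

# --- Approximation (stand-in for T_d): a usual smoothing spline on the
# transformed data, which no longer exhibit large local variations.
spline = UnivariateSpline(x, u, s=1e-4)
u_fit = spline(x)

# --- Post-processing (stand-in for psi_d): map spline values back to z.
z_fit = np.interp(u_fit, u_grid, z_sorted)

print("max abs error:", np.max(np.abs(z_fit - z)))
```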

In the case of vertical faults, we also propose an algorithm to find the location of large variations: the right approach to obtain a good approximant consists in first applying a segmentation process to precisely define the locations of large variations and faults, and then exploiting a discrete approximation technique. To perform the segmentation step, we propose a quasi-automatic algorithm that uses a level set method to obtain, from the given (gridded or scattered) Lagrange data, several patches delimited by large gradients (or faults). Then, with the knowledge of the location of the discontinuities of the surface, we generate a triangular mesh (which takes into account the identified set of discontinuities) on which a D^m-spline approximant (see Manzanilla, 1986, Apprato et al., 1987, Arcangéli et al., 1997, or Arcangéli et al., 2004) is constructed.

We apply our method to different datasets (bathymetry, Lagrange data, …): the Piton de la Fournaise volcano on Réunion Island (see Gout and Komatitsch, 2000), the Pyrénées mountains in France (see Apprato et al., 2000), and the Marianas trench in the Pacific (see Apprato et al., 2002). We also give an example around the Hawaiian hot spot. The topography and bathymetry of the Hawaiian Islands in the Pacific Ocean result from the activity of a huge hot spot combined with the effect of erosion. This hot spot has been more or less active since the Late Cretaceous; as a result, the Big Island continues to grow, and to the east a new island is being formed.

2. Mathematical Modelling for surface approximation from Lagrange dataset

Unfortunately, when applied to the approximation of surfaces from rapidly varying data, usual methods like splines lead to strong oscillations near steep gradients, as illustrated in Figure 1. When the location of the large variations in the dataset is known, Salkauskas (1974) has proposed methods that use a spline under tension with a nonconstant smoothing parameter, and Hsieh and Chang (1994) have proposed a concept of virtual nodes inserted at the level of the large variations, in the context of computer-aided geometric design. In the more general context where the location of the large variations in the dataset is not known a priori, Franke (1982) and Bouhamidi and Le Méhauté (1999) have proposed splines under tension belonging to more general spaces. These methods give good results in the case of curve fitting, but less accurate results in the case of surface fitting.

Figure 1.

When classical splines (here, a C^1 spline) are used to interpolate data points (x_i, f(x_i)) with large local variations, strong spurious oscillations are generated near steep gradients.

The new method we introduce here uses scale transformations and is applied without any particular a priori knowledge of the data. The philosophy of the method is similar to interpolation methods based upon anamorphosed data commonly used in geostatistics (see, for instance, Isaaks and Srivastava, 1989). In the first part of this chapter, a construction of the scale transformation families is presented, and results concerning the convergence of the approximation are given. We also show the efficiency of this approach by applying it to the topography of the summit of the Piton de la Fournaise volcano, located on Réunion Island (Indian Ocean, France). This volcano exhibits large and rapid variations in steep river valleys in its southwestern part, as well as in a caldera, where the behavior of the method is tested.

The method we propose uses two scale transformations, namely φ_d for the preprocessing and ψ_d for the postprocessing. The first one, φ_d, is used to transform the z-values representing the height of the unknown surface f into values (u_i) regularly distributed in an interval chosen by the user, as illustrated in Figures 2A and 2B. The preprocessing function φ_d is such that the transformed data do not exhibit large local variations, and therefore a usual spline operator T_d can subsequently be applied without generating significant oscillations, as shown in Figure 2C. The second scale transformation ψ_d is then applied to the approximated values to map them back and obtain the approximated values of z (Figure 2D). It is important to underline that the proposed scale transformations do not create spurious oscillations. Moreover, this method is applied without any particular knowledge of the location of the large variations in the dataset. Let us consider a dataset (x_i^d, z_i^d)_{i=1,…,N(d)} indexed by a real d, such that when d tends to 0 the number of data points N(d) tends to infinity. For the purpose of a theoretical study of the approximation convergence, we introduce a function f : Ω → [a,b] such that the dataset becomes (x_i^d, z_i^d = f(x_i^d))_{i=1,…,N(d)}.

The functions introduced above have the following expressions, for m ∈ N:

φ_d : [a,b] → [α,β] ⊂ R,   T_d : (φ_d ∘ f) ∈ H^m(Ω, [α,β]) ↦ T_d(φ_d ∘ f) ∈ H^m(Ω, [α,β]),   ψ_d ∘ T_d(φ_d ∘ f) ∈ H^m(Ω, [a,b])   (1)

where the preprocessing φ_d and the postprocessing ψ_d are continuous scale transformation families, where T_d is an approximation operator, for instance a spline, and where H^m(Ω, ·) denotes the usual Sobolev space. More precisely, we introduce a nonempty bounded connected open set Ω of R² with Lipschitz boundary, and an unknown function f ∈ H^{m'}(Ω, [a,b]) that we want to approximate; this hypothesis allows (φ_d ∘ f) to be bounded in C^m(Ω̄) (with m' ≥ m + 1), a property used to establish the convergence of the approximation (Apprato and Gout, 2000). We also consider a subset A^d of N = N(d) distinct points of Ω̄ such that

sup_{x∈Ω̄} δ(x, A^d) = d   (2)

where δ is the Euclidean distance in R²; the index d represents the radius of the biggest sphere included in Ω that contains no point of A^d, and thus, when d tends to 0, the number of data points tends to infinity. We also introduce the set Z_1^d of N = N(d) real numbers such that

∀ x_i^d ∈ A^d, f(x_i^d) ∈ Z_1^d   (3)

and the sequence Z_2^d of p(d) distinct z-values obtained by ordering Z_1^d, with z̃_i^d ∈ Z_2^d, i = 1,…,p(d):

a = z̃_1^d < z̃_2^d < z̃_3^d < … < z̃_{p(d)−1}^d < z̃_{p(d)}^d = b   (4)

where [a,b] = Im(f). The sequence Z_2^d will be used for the construction of the scale transformation families in the following section. In what follows, for convenience, we also write z_i^d instead of z̃_i^d.
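For intuition, both the index d of Eq. (2) (the fill distance of A^d) and the sets Z_1^d and Z_2^d of (3)-(4) are easy to compute; a small numpy/scipy sketch with synthetic data sites and a synthetic f (an illustration, not part of the method itself):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
A_d = rng.random((500, 2))                       # data sites x_i^d in Omega = (0,1)^2
f = lambda p: np.sin(3 * p[:, 0]) + p[:, 1]**2   # stand-in for the unknown f

# Fill distance d = sup_{x in closure(Omega)} min_{a in A_d} |x - a|, Eq. (2),
# estimated on a dense background grid.
g = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(g, g)
grid = np.column_stack([X.ravel(), Y.ravel()])
d = cKDTree(A_d).query(grid)[0].max()

z1 = f(A_d)           # Z_1^d: the sampled values f(x_i^d), Eq. (3)
z2 = np.unique(z1)    # Z_2^d: the p(d) distinct values, increasingly ordered, Eq. (4)
a, b = z2[0], z2[-1]  # [a, b] = Im(f) as sampled
print(d, len(z2), a, b)
```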

Figure 2.

The preprocessing phase, (A) and (B), transforms the values f(x_i) using a scale transformation φ_d. After preprocessing (B), the local variations in the data have been drastically reduced. Therefore, it is possible to obtain a regular approximant T_d with no significant oscillations using a usual C^1 spline operator (see subsection 2.2), as shown in (C). A second scale transformation ψ_d is subsequently applied to the values of the approximant in a postprocessing phase, (D), to map them back and obtain the final approximant. It is important to mention that the scale transformations used do not create spurious oscillations, as illustrated in (D).

2.1. Scale transformations

In this section, we give a construction of the scale transformation families by generalizing the technique of Torrens, 1991. These scale transformations are realistic in the sense that, like classical transformations, they are monotone.

Preprocessing of the Data: Family φ_d of Scale Transformations

The goal of the scale transformation φ_d is to reduce the variations in the dataset. We first construct φ_d and then, in order to study the convergence of the approximation, we establish the convergence of φ_d to a function φ when the number of data points tends to infinity (i.e., d → 0). Let [α,β] be an interval of R and {u_i}_{i=1,…,p(d)} the following regular subdivision, for i = 1,…,p(d):

α = u_1 < u_2 < u_3 < … < u_{p(d)−1} < u_{p(d)} = β and u_{i+1} − u_i = (β − α)/(p(d) − 1)   (5)

This interval and subdivision are chosen by the user. When dealing with surface approximation from rapidly varying data, we choose the interval to be [0,1] and an even subdivision {u_i}, which is used to reduce the local variations of the (z_i). After applying φ_d, we obtain a new dataset (x_i, u_i) related to the initial data by u_i = φ_d(z_i). When this technique is applied to other problems, however, for instance in some imaging applications where one has an image with homogeneous regions, it can be of interest to increase the variations between pixel values (the (z_i)); in such a case, Apprato and Gout, 2000, showed that it is possible to choose a nonregular distribution in the interval [α,β] to generate variations, and therefore to enhance some features present in the image to facilitate its segmentation.

We introduce φ : [a,b] → [α,β], the C^∞ diffeomorphism that transforms [a,b] into [α,β] (such families of transformations are usually called anamorphoses in the geostatistics literature):

φ(z) = ((β − α)/(b − a)) (z − a) + α   (6)

We also introduce the function φ_d, for i = 1,…,p(d)−1 and for any z ∈ [z_i, z_{i+1}]:

φ_d(z) = u_i q^m_{0,0}(t) + u_{i+1} q^m_{0,1}(t) + α_1(z_i)(z_{i+1} − z_i) q^m_{1,0}(t) + α_1(z_{i+1})(z_{i+1} − z_i) q^m_{1,1}(t), with t = (z − z_i)/(z_{i+1} − z_i)   (7)

where the q^m_{l,i}, for i ∈ {0,1} and l ∈ {0,1}, are the basis functions of the finite element of class C^m on [0,1] (see Ciarlet, 1978), and where, for any i = 1,…,p(d)−1,

α_1(z_i) = (u_{i+1} − u_i)/(z_{i+1} − z_i) and α_1(z_{p(d)}) = α_1(z_{p(d)−1})   (8)

Using relations (5)–(8), we obtain the following results: φ_d interpolates the (u_i), and φ_d belongs to C^m([a,b]) (a concrete sketch for m = 1 is given after the list below):

  1. φ_d(z_i) = u_i, for i = 1,…,p(d);

  2. φ_d ∈ C^m([a,b]).
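To make (7)-(8) concrete, here is a hedged implementation for m = 1, in which case the basis functions of the C^1 finite element on [0,1] are the four cubic Hermite polynomials; the slopes α_1(z_i) come from the order-1 finite differences of (8). This is an illustrative sketch, not the authors' code:

```python
import numpy as np

# Cubic Hermite basis on [0,1] (the C^1 finite element basis for m = 1):
def q00(t): return 2*t**3 - 3*t**2 + 1   # value at the left node
def q01(t): return -2*t**3 + 3*t**2      # value at the right node
def q10(t): return t**3 - 2*t**2 + t     # slope at the left node
def q11(t): return t**3 - t**2           # slope at the right node

def phi_d(z, zi, ui):
    """Piecewise C^1 scale transformation of Eq. (7), m = 1 case.

    zi : increasing distinct data values (Z_2^d); ui : target grid in [alpha, beta].
    """
    z, zi, ui = np.asarray(z, float), np.asarray(zi, float), np.asarray(ui, float)
    alpha1 = np.empty(zi.size)
    alpha1[:-1] = np.diff(ui) / np.diff(zi)  # alpha_1(z_i), Eq. (8)
    alpha1[-1] = alpha1[-2]                  # alpha_1(z_p(d)) = alpha_1(z_{p(d)-1})
    k = np.clip(np.searchsorted(zi, z, side="right") - 1, 0, zi.size - 2)
    h = zi[k + 1] - zi[k]
    t = (z - zi[k]) / h
    return (ui[k] * q00(t) + ui[k + 1] * q01(t)
            + alpha1[k] * h * q10(t) + alpha1[k + 1] * h * q11(t))

# Interpolation property 1: phi_d(z_i) = u_i, even for rapidly varying z_i.
zi = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
ui = np.linspace(0.0, 1.0, zi.size)
print(phi_d(zi, zi, ui))  # -> [0.  0.25 0.5  0.75 1. ]
```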

We now consider a sufficient convergence hypothesis, which implies that the distribution of the data (z_i) has an asymptotic regularity in the interval [a,b] when d tends to 0, and which is used to establish the convergence of the approximation. This hypothesis is that there exist C > 0 and an integer m'' ≥ m ≥ 2 such that, for d small enough and for any i = 1,…,p(d)−2, we have

|1 − (z_{i+1} − z_i)/(z_{i+2} − z_{i+1})| ≤ C ((b − a)/(p(d) − 1))^{m''}   (9)

We also suppose that the set A^d introduced above is such that there exists C' > 0 with

p(d) ≤ C' d^{−2}   (10)

Equation (10), introduced by Arcangéli (1989), expresses a property of asymptotic regularity of the distribution of the dataset A^d in Ω̄. Using a compactness argument, Gout, 2002, established that hypotheses (9) and (10) imply that there exists C'' > 0 such that ‖φ_d‖_{C^m([a,b])} ≤ C'' and

lim_{d→0} φ_d = φ in C^0([a,b])   (11)

where φ_d is defined by (7) and φ is defined by (6).

One can notice that the construction of the scale transformations φ_d made in (7) uses a finite difference scheme of order 1 to construct, from the u_i, the first derivatives of φ_d at the points z̃_i, i = 1,…,p(d). Moreover, the option retained in (7), which is to cancel the derivatives of order l of φ_d at the points z̃_i for any l = 2,…,m, could be replaced by the use of a finite difference scheme of order l to define these derivatives. Let us also mention that we have chosen to construct the scale transformations on a finite element basis in order to be able to study the convergence of the approximation.

Postprocessing of the Data: Family ψ_d of Scale Transformations

Similarly to the way we constructed the scale transformations φ_d, we now define a scale transformation family ψ_d that implements the postprocessing of the calculation. We recall that, after the preprocessing, the large local variations in the dataset have been drastically reduced; it is therefore possible to approximate the data using a usual spline operator T_d without generating significant oscillations. To map these values back and obtain the approximated values of z, we need a postprocessing step, and therefore a family ψ_d which is almost the inverse of φ_d: as φ_d converges to φ, we construct ψ_d so that ψ_d converges to φ^{−1}. To do so, we define the C^∞ diffeomorphism φ^{−1} : [α,β] → [a,b], inverse of the function φ defined in Equation (6):

φ^{−1}(u) = ((u − α)(b − a))/(β − α) + a   (12)

We also define the function ψ_d, for i = 1,…,p(d)−1 and for any u ∈ [u_i, u_{i+1}]:

ψ_d(u) = z_i q^m_{0,0}(s) + z_{i+1} q^m_{0,1}(s) + β_1(u_i)(u_{i+1} − u_i) q^m_{1,0}(s) + β_1(u_{i+1})(u_{i+1} − u_i) q^m_{1,1}(s), with s = (u − u_i)/(u_{i+1} − u_i)   (13)

where the q^m_{l,i}, for i ∈ {0,1} and l ∈ {0,1}, are the basis functions of the finite element of class C^m on [0,1] (Ciarlet, 1978), and where, for any i = 1,…,p(d)−1:

β_1(u_i) = (z_{i+1} − z_i)/(u_{i+1} − u_i) and β_1(u_{p(d)}) = β_1(u_{p(d)−1})   (14)

Under hypotheses (9) and (10), Apprato and Gout, 2000, established the following relations:

(i) ψ_d(u_i) = z_i, i = 1,…,p(d);
(ii) ψ_d ∈ C^m([α,β]);
(iii) there exists C ≥ 0 such that ‖ψ_d‖_{C^m([α,β])} ≤ C;
(iv) lim_{d→0} ψ_d = φ^{−1} in C^0([α,β]).   (15)

It is important to mention that (15-i) is one of the key points of the algorithm, that (15-ii) enables us to obtain approximants with high regularity, and that (15-iii) and (15-iv) are used to establish the convergence of the approximation.
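Because (13)-(14) mirror (7)-(8) with the roles of the z_i and u_i exchanged, the phi_d sketch given after (8) can be reused directly (this snippet assumes phi_d, zi and ui from that sketch are in scope), and property (15-i) can be checked as a round trip:

```python
import numpy as np  # phi_d, zi, ui come from the phi_d sketch above

# psi_d is the same Hermite construction with the node arrays swapped
# (Eq. (13)-(14)): the slopes beta_1(u_i) = (z_{i+1}-z_i)/(u_{i+1}-u_i)
# then fall out of the same finite-difference formula.
def psi_d(u, ui, zi):
    return phi_d(u, ui, zi)

# Property (15-i): psi_d(u_i) = z_i, so the round trip is exact at the nodes.
print(np.allclose(psi_d(phi_d(zi, zi, ui), ui, zi), zi))  # -> True
```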

2.2. The spline operator

Given a Lagrange dataset (x_i, (φ_d ∘ f)(x_i) = φ_d(z_i)), we have to solve the classical problem of constructing an approximant T_d of class C^k (with k = 1 or 2 in practice). In this work, we use a smoothing D^m-spline, as defined in Arcangéli et al., 2004, which has many advantages: it allows local refinement, the matrix of the linear system to solve is banded, and the convergence of the approximation can be studied. We have chosen a smoothing D^m-spline rather than an interpolation spline because we want to be able to work with large datasets of up to several hundreds of thousands of points; in that case, a smoothing spline is far less expensive than an interpolation spline. We consider the functional, for any Φ belonging to H^m(Ω),

J_ε^d(Φ) = ‖ρ^d(Φ − φ_d ∘ f)‖²_{p(d)} + ε |Φ|²_{m,Ω}   (16)

where ρ^d ∈ L(H^m(Ω), R^{p(d)}) is defined by ρ^d f = (f(a))_{a∈A^d} ∈ R^{p(d)}, |·|_{m,Ω} is the usual semi-norm on H^m(Ω), ‖·‖_{p(d)} is the Euclidean norm in R^{p(d)}, and ε > 0 is a smoothing parameter. We call σ_ε^d the D^m smoothing spline on Ω relative to φ_d ∘ f; it is the unique solution of the minimization problem: find σ_ε^d ∈ H^m(Ω) such that, for any Φ belonging to H^m(Ω):

J_ε^d(σ_ε^d) ≤ J_ε^d(Φ)   (17)

The solution σ_ε^d to this problem is also the unique solution of the variational problem: find σ_ε^d ∈ H^m(Ω) such that, for any Φ belonging to H^m(Ω):

⟨ρ^d σ_ε^d, ρ^d Φ⟩_{p(d)} + ε (σ_ε^d, Φ)_{m,Ω} = ⟨ρ^d(φ_d ∘ f), ρ^d Φ⟩_{p(d)}   (18)

Uniqueness of the solution can be proved using the Lax–Milgram lemma and results by Necas, 1967 to establish an equivalence of norms.

In order to compute σ_ε^d, we choose to discretize it in a finite element basis, which enables us to obtain a small sparse linear system. We choose the generic Bogner–Fox–Schmit (BFS) rectangular finite element (see Ciarlet, 1978). In what follows, we use the BFS element of class C^0 or of class C^1 in order to obtain a C^0 or C^1 approximant. In the following, we write σ_ε^d instead of T_d; a toy version of the resulting linear system is sketched below.
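In any finite-dimensional basis {w_k}, minimizing (16) reduces to a linear system (AᵀA + εR)c = Aᵀu, where A_{ik} = w_k(x_i) evaluates the basis at the data sites and R is the Gram matrix of the semi-norm. Here is a toy 1D sketch with P1 hat functions and a squared-second-difference penalty standing in for the D^m semi-norm (an assumption for illustration; the chapter uses BFS rectangles and the exact |·|_{m,Ω} semi-norm instead):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.random(300))                                    # data sites in [0,1]
u = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # pre-processed data

# P1 "hat" basis on a uniform grid (a crude stand-in for BFS elements).
nodes = np.linspace(0.0, 1.0, 30)
def hats(pts, nodes):
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(pts[:, None] - nodes[None, :]) / h)

A = hats(x, nodes)                               # A[i, k] = w_k(x_i)

# Squared second differences of the coefficients: a discrete stand-in
# for the semi-norm |.|_{m,Omega} appearing in the functional (16).
D2 = np.diff(np.eye(nodes.size), n=2, axis=0)
R = D2.T @ D2

eps = 1e-4                                       # smoothing parameter epsilon
c = np.linalg.solve(A.T @ A + eps * R, A.T @ u)  # normal equations of (17)/(18)

fit = hats(np.linspace(0.0, 1.0, 400), nodes) @ c  # evaluate the fitted spline
print(float(np.abs(fit).max()))
```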

2.3. Convergence results

We first give the convergence of the D^m spline operator σ_ε^d related to the transformed data (φ_d ∘ f) to the function (φ ∘ f) when d tends to 0. This result follows from the convergence of φ_d to φ, combined with the fact, shown by Apprato et al., 1987, that for any function g we have lim_{d→0} σ_ε^d(g) = g.

Keeping the notation of the previous sections, and since (φ_d ∘ f) is bounded in C^m(Ω̄), Apprato and Gout, 2000, proved that

lim_{d→0} σ_ε^d(φ_d ∘ f) = φ ∘ f in C^0(Ω̄)   (19)

From this result, using a compactness argument, Apprato and Gout, 2000 established a theoretical result concerning the convergence of the approximation:

lim_{d→0} (ψ_d ∘ σ_ε^d(φ_d ∘ f)) = φ^{−1} ∘ φ ∘ f = f in H^{m−θ}(Ω)   (20)

for any θ > 0 such that θ < m − 1 (continuous embedding of H^{m−θ}(Ω) into C^0(Ω̄)). Note that if we take n = 2 and m = 3, the convergence takes place in H^{2+θ}(Ω) for any θ ∈ ]0,1[.

2.4. Numerical examples

The Piton de la Fournaise is a volcano located on Réunion Island, in the Indian Ocean, France. This volcano exhibits strong topographic variations near its summit, due to the presence of a caldera and of two steep river valleys in its southwestern part, as can be seen on the picture of the volcano presented in Figure 3. The maximum height of the volcano is 2.6 km, and the depth of the valleys reaches more than 1000 m in several places. Being able to describe the topography of such regions exhibiting rapid local variations with at least C^0 regularity, or even C^1 regularity, is important in many fields in geophysics. For example, this description of the topography can be an input to numerical modeling codes that study the propagation of pyroclastic flows or lava flows, and related hazards; other examples are seismic site effects and ground motion amplification due to topographic features. In both cases, to avoid creating numerical artefacts, it is important not to introduce spurious oscillations in the description of the model itself; it is well known, for instance, that in the context of curvilinear spectral-element modeling of elastic wave propagation, artificial diffraction points appear at the edges between elements, which significantly affects the behavior of surface waves. To demonstrate the efficiency of our method, we create C^0 and C^1 approximants from a set of 8208 data points taken from a DEM of the summit. The data points in the DEM have been obtained by digitizing a map of the area. In this DEM, the height is given on an evenly spaced grid of 76 × 108 points, with a grid spacing of 200 m; the considered region therefore has a dimension of 15 km in the East–West direction and 21.4 km in the North–South direction. This DEM is shown in Figure 5 (left) using a top view with isocontours representing the height of the topography every 0.2 km.

Figure 3.

Image of the Piton de la Fournaise volcano, Réunion Island, Indian Ocean, France. One can clearly see the summit caldera and the two steep valleys in the southwest. The size of the region represented is approximately 40 × 35 km. The height of the volcano is 2.6 km. Image taken as part of the Space Shuttle SIR-C/X-SAR radar missions, courtesy of Pete Mouginis-Mark, University of Hawaii.

In the preprocessing step, we choose a regular distribution of the u_i in [α,β] = [0,1] in order to reduce the large variations in the dataset. The approximants are subsequently obtained by discretizing the D^m spline in a finite-element space. In the case of the C^0 approximant, we use 30 × 40 rectangular C^0 BFS finite elements, each having four degrees of freedom. In the case of the C^1 approximant, we use 15 × 20 rectangular C^1 BFS finite elements, each having sixteen degrees of freedom. In both cases, the smoothing parameter is taken to be 10⁻⁶.

In Figure 4, we show a three-dimensional representation of the C^1 approximant after postprocessing, evaluated on an evenly spaced grid comprising 200 × 200 points. The grid spacing is therefore 75.37 m in the East–West direction and 107.54 m in the North–South direction. From the figure it is clear that the results do not exhibit strong oscillations, even though the use of such a dense evaluation grid is expected to enhance any artefacts generated by the approximation method. To compare this approximant to the original dataset more precisely, in Figure 5 we present a top view of the approximated values, with isocontours representing the height every 0.2 km, next to the same plot for the original dataset. It is clear from these plots that the approximant is very close to the original data, with local variations smoothed as expected. One can notice that the approximant does not exhibit significant oscillations even in the difficult regions of the model, particularly the two valleys. To demonstrate this more quantitatively, we evaluate the quadratic error for the two approximants: it is 4.96 × 10⁻⁴ for the C^0 approximant and 4.01 × 10⁻⁴ for the C^1 approximant. Such values are considered very good in the context of surface approximation, and they show the efficiency of the proposed approach on rapidly varying data. Over the entire dataset, the maximum error measured is 5.5%, corresponding to an absolute error of 56 m; this maximum occurs in a region located on the edge of the steep valleys, where the local variations are the strongest, as expected. More detailed studies of the approximation error, and evidence that the rate of convergence is higher for this method than for usual approaches with no preprocessing, such as thin plate splines or splines under tension, can be found in Schoenberg, 1960.
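The chapter does not spell out the normalization behind these "quadratic error" figures; one common convention (an assumption, not the authors' exact formula) is the mean squared misfit normalized by the data range:

```python
import numpy as np

def quadratic_error(z_data, z_approx):
    """Normalized quadratic misfit. The exact normalization used in the
    chapter is not stated; this is one common convention (assumption)."""
    z_data, z_approx = np.asarray(z_data, float), np.asarray(z_approx, float)
    return np.mean(((z_approx - z_data) / (z_data.max() - z_data.min())) ** 2)

# Example with synthetic values:
z = np.linspace(0.0, 2600.0, 8208)           # heights in metres (stand-in for the DEM)
z_hat = z + 5.0 * np.sin(np.arange(z.size))  # perturbed "approximant"
print(quadratic_error(z, z_hat))
```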

We have presented a new method to fit rapidly varying geophysical data. The ability to suppress, or at least significantly reduce, oscillations of the surface near steep gradients has been demonstrated. The scale transformation families introduced provide more control on the behavior of the approximant, without any particular a priori knowledge of the location of the large variations in the dataset. The regularity obtained, which can be C0, C1, or higher, enables us to describe the topography of real geophysical surfaces accurately. We have shown the good properties of this approach by applying it to the real case of the Piton de la Fournaise volcano.

Figure 4.

Three-dimensional view of the C^1 approximant, after postprocessing, obtained for the Piton de la Fournaise volcano from the Digital Elevation Model. The scale represents the height of the topography, from 0 to 2.6 km. The image has been generated with no vertical exaggeration. The approximant has been evaluated on an evenly spaced grid comprising 200 × 200 points. No significant oscillations can be observed, even in the difficult regions of the model, mainly the two valleys and the caldera. In this example, we have discretized the spline using 15 × 20 BFS finite elements, each having sixteen degrees of freedom.

Figure 5.

Comparison between the isocontours obtained from the original DEM dataset and those of the C^1 approximant. Left: isocontours of the DEM of the Piton de la Fournaise volcano; the DEM is given on a grid of 76 × 108 points, with a uniform grid spacing of 200 m, the height of the summit is 2.6 km, and one can clearly observe the slopes of the two steep valleys. Right: isocontours of the C^1 approximant after postprocessing, as in the three-dimensional view of Figure 4.

The general agreement is excellent, and it is important to notice that no significant oscillations can be observed, even in the two steep valleys. Isocontours represent the height of the topography every 0.2 km. The gray scale also indicates the height of the topography, from 0 to 2.6 km.

3. Seafloor surface approximation from bathymetric dataset

3.1. Modelling

The problem of surface approximation from a given set of curves can be formulated as follows: from a finite set of curves F_j, j = 1,…,N (the bathymetry ship-track curves in our case) in the closure of a bounded nonempty open set Ω ⊂ R², and from a function f defined on F = ∪_{j=1,…,N} F_j, construct a regular function Φ on Ω approximating f on F, i.e.:

Φ|_F ≈ f|_F   (21)

We can assume that Ω is a connected set with a Lipschitz-continuous boundary (following the definition of Necas, 1967), that for any integer j, j = 1,…,N, F_j is a nonempty connected subset of F, and that, for simplicity, f is the restriction to F of a function, still denoted by f, that belongs to the usual Sobolev space H^m(Ω), with m > 1 an integer. We also assume that the approximant Φ belongs to H^m(Ω) ∩ C^k(Ω̄), with k = 1 or 2, where Ω̄ denotes the closure of Ω. The main interest of such a regularity for Φ is that it allows one to obtain a final surface that can later be used directly as an input model in a different application, such as ray tracing, image synthesis, or numerical simulation.

Let us define, for any v belonging to H^m(Ω), ρv = v|_F, where ρ is a linear operator, and let us introduce the convex set K = {v ∈ H^m(Ω), ρv = ρf}. We then consider the minimization problem of finding σ ∈ K such that, for any v ∈ K,

|σ|_{m,Ω} ≤ |v|_{m,Ω}   (22)

where |·|_{m,Ω} is the usual semi-norm on H^m(Ω). If L²(F) is equipped with the usual norm

‖v‖_{0,F} = (Σ_{j=1,…,N} ∫_{F_j} v² ds)^{1/2}   (23)

and under the hypothesis that, for any p ∈ P_{m−1}(F̄), p|_F = 0 implies p ≡ 0, we know, based upon a compactness argument (Necas, 1967), that the function |||·||| defined by

|||u||| = (‖ρu‖²_{0,F} + |u|²_{m,Ω})^{1/2}   (24)

is a norm on H^m(Ω), equivalent to the usual norm ‖·‖_{m,Ω} on H^m(Ω). The solution σ of the interpolation problem (22) is then the unique element of minimal norm |||·||| in K, which is convex, nonempty, and closed in H^m(Ω). Hence we could take the solution Φ = σ when m > k + 1. Unfortunately, it is often impossible to compute σ using a discretization of problem (22), because in a finite-dimensional space it is generally not possible to satisfy an infinity of interpolation conditions. Therefore, to take into account the continuous aspect of the data on F, we instead choose to define the approximant Φ as a fitting surface on the set

{(x₁, x₂, x₃) ∈ R³ : x₃ = f(x₁, x₂), (x₁, x₂) ∈ F_j, j = 1,…,N}   (25)

In this work, we propose to construct a variant of the “smoothing Dm-spline,” seen in previous sections, that will be discretized in a suitable piecewise-polynomial space. The use of such spline functions has been shown to be efficient in the context of geophysical applications such as Ground Penetrating Radar data analysis (Apprato et al., 2000) or the creation of Digital Elevation Models describing topography (Gout and Komatitsch, 2000).

Let us present in this section the theoretical aspects of the method. We first introduce the functional J_ε that we shall minimize, defined on H^m(Ω) by

J_ε(v) = ‖v − f‖²_{0,F} + ε |v|²_{m,Ω}   (26)

where ε|v|²_{m,Ω} is a smoothing term, ε > 0 being a classical smoothing parameter. The key idea here is that the fidelity criterion to the data, ‖v − f‖²_{0,F}, honors their continuous aspect. We now need to estimate this L²-norm numerically, which is done using a quadrature formula; in this respect, the approach is quite different from more classical techniques, which usually simply take a large number of data points on F to solve the approximation problem. For any integer j, j = 1,…,N, and any η > 0, let {ς_i}_{1≤i≤L} be a set of L = L(j) distinct points ς_i = ς_i(j) ∈ F̄_j such that max_{1≤i≤L−1} δ(ς_i, ς_{i+1}) ≤ η, where δ is the Euclidean distance in R². This relation implies that the distance between two consecutive ς_i is bounded by η; it also enables one to study the convergence of the approximation when η tends to 0. The {ς_i} will also be the nodes of a numerical integration formula. Let us also introduce a set (λ_i)_{1≤i≤L} of real numbers (the weights of the quadrature formula) such that λ_i = λ_i(j) > 0, and let us define, for any v ∈ C^0(F̄_j) and η > 0,

l_j^η(v) = Σ_{i=1,…,L} λ_i v(ς_i)   (27)

and, for any v ∈ C^0(F̄),

l^η(v) = Σ_{j=1,…,N} l_j^η(v)   (28)

In what follows, we will suppose that, for any v ∈ H^m(Ω) and any η > 0, there exists C > 0 such that

|l^η(v²) − ‖v‖²_{0,F}| ≤ C η ‖v‖²_{m,Ω}   (29)

When this hypothesis is satisfied, one can consider l^η as a theoretical quadrature formula for ‖v‖²_{0,F}. Note that in some cases (e.g., N = 1), l^η(v) is a quadrature formula for the curvilinear integral ∫_F v ds. Note also that in most applications the F_j are polygonal curves, so one can use a classical quadrature formula (e.g., Arcangéli and Gout, 1976, or Gout and Guessab, 2001).
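For a polygonal track given as a point sequence, one admissible choice of nodes ς_i and weights λ_i > 0 is the composite trapezoidal rule with respect to arclength; a sketch (the track data are synthetic):

```python
import numpy as np

def track_quadrature(track):
    """Trapezoidal nodes/weights along a polygonal curve F_j.

    track : (L, 2) array of consecutive points (the zeta_i of Eq. (27)).
    Returns weights lambda_i > 0 such that sum(lambda_i * v(zeta_i))
    approximates the curvilinear integral of v over F_j.
    """
    seg = np.linalg.norm(np.diff(track, axis=0), axis=1)  # segment lengths
    lam = np.zeros(len(track))
    lam[:-1] += 0.5 * seg
    lam[1:] += 0.5 * seg
    return lam

def l_eta(v, tracks):
    """l^eta(v) = sum over j of l_j^eta(v), as in Eq. (28)."""
    return sum(track_quadrature(t) @ v(t) for t in tracks)

# Example: integrating v = 1 over one straight track of length 5.
track = np.column_stack([np.linspace(0, 3, 50), np.linspace(0, 4, 50)])
print(l_eta(lambda p: np.ones(len(p)), [track]))  # ~ 5.0
```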

For the discretization, we proceed as in Section 2, using a finite element space.

3.2. Application to surface reconstruction from bathymetry ship track data in the Marianas trench

Detailed bathymetry maps are essential in several fields of geophysics, such as oceanography and marine geophysics. Over the past decades, research vessels have collected a large number of depth echo soundings, also called SONAR (for "SOund NAvigation And Ranging") bathymetry ship-track data. Many of these measurements have been compiled to produce global bathymetry maps (e.g., by the Canadian Hydrographic Office in 1981). In recent years, tremendous advances in satellite altimetry have enabled researchers to produce very detailed bathymetry maps derived from satellite gravity field measurements. However, long-wavelength variations of the depth of the ocean floor are difficult to constrain using satellite altimetry, and ship-track data are still often used instead for that purpose. It is, therefore, of interest to address the issue of producing a bathymetry map from a given set of SONAR bathymetry ship tracks. Let us mention that SONAR ship tracks are typically acquired as a discrete set of measurement points, as opposed to a continuous recording; however, the typical horizontal interval between measurement points is always small compared to expected bathymetry variations, and therefore, in the context of this study, the dataset can be considered as consisting of smooth continuous lines.

As presented in Url, 2009, in order to understand the shape of the seafloor, oceanographers go out on ships and collect sonar data. Sonar data are collected using echosounders and side-scan sonar systems. The digital data are then converted into maps and images.

How does this work?

Echosounders: Since World War II echosounders have been used to determine water depths of the oceans. Echosounders are usually attached to the hull of a ship. The echosounder sends an outgoing sound pulse into the water. The sound energy travels through the water to the ocean bottom where it is reflected back towards the source, received, and recorded.

Figure 6.

Sound travels from the ship to the seafloor and is reflected back. The time it takes is converted into distance yielding water depth. (Credit: www.womenoceanographers.org).

The time that it takes for the sound to make the round trip to the seafloor is accurately measured. Water depth is determined from the travel time and the speed of sound in water. Water depth can be estimated simply by using an average sound speed and the following relationship: distance = speed × time / 2 (the time is divided by 2 to account for the round trip from the echosounder to the seafloor). The main drawback of such an echosounder is that it gives only one depth at a time; that is why multibeam echosounders were created.
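The round-trip relation in code, assuming an average sound speed of about 1500 m/s in seawater:

```python
def depth_from_echo(travel_time_s, sound_speed_m_s=1500.0):
    """Water depth from a two-way echo travel time: distance = speed * time / 2."""
    return sound_speed_m_s * travel_time_s / 2.0

print(depth_from_echo(14.6))  # ~10950 m: roughly a Challenger Deep round trip
```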

How are water depths turned into a map? As a ship steams ahead through the water, multibeam echosounders provide water depths for a swath of the seafloor. The water depths are located in space using satellite navigation. From these data, oceanographers can make maps of the seafloor that resemble topographic maps of land areas. In the early days, bathymetry maps were drawn by hand: contours (lines) of equal water depth were drawn through a grid of numbers that had been plotted on a sheet of paper, and colors, put on by hand, indicated regions of equivalent water depth. Eventually computers took over and produced paper charts of the data, contoured and automatically colored. Now computer software enables individual scientists to process the data and display them on their own monitors. Maps can be imported into graphics applications, and annotations and other marks can be added.

"Side-scan"sonars: Similar to the multibeam echosounder, the sound transmitted by a side scan sonar instrument travels to the seafloor, bounces off the seafloor, returns to the instrument, and is recorded In the case of a side-scan sonar, it is the intensity or strength of the returning acoustic signal that is recorded. This is controlled primarily by the slope of the seafloor and by what the seafloor is made of. A stronger return is received if the seafloor slopes down to the instrument. Also, the return is stronger if the seafloor is made of bare rocks. The strength of the return is much lower if the seafloor is covered by mud or sand. Volcanoes and other features that stick up above the surrounding seafloor will cast acoustic shadows. These shadows are just like the shadow behind a person when a flashlight is shone on him.

Figure 7.

Cartoon from a NOAA web site showing the swath of seafloor insonified by the multibeam echosounder. (Credit: NOAA and GSC).

Converting intensity into an image: The strength of the sound recorded by the side-scan sonar instrument is converted into shades of gray. A very strong return, say from bare rock, is white; a very weak return is black. The echo strengths that fall between these two extremes are converted into different shades of gray. Historically, side-scan sonar data have been displayed on a hard-copy paper recorder, and the paper chart used to be the most convenient method for displaying and storing these data (as well as bathymetry data). Since the 1980s or so, software and hardware have been developed to process side-scan data using computers and display them on computer screens.

Figure 8.

Left: hull-mounted multibeam sonar and towed side-scan sonar (credit: NOAA). Right: the side-scan sonar instrument is towed by the ship; sound is transmitted into the water, and images are made based on the strength of the recorded return (credit: NOAA).

We select the region of the Marianas trench (Figure 9). The trench is located in the North Pacific Ocean, east of the South Honshu ridge, parallel to the Mariana Islands. It corresponds to the subduction zone where the fast-moving Pacific plate converges against the slower-moving Philippine plate. It is also the place on Earth where the oceans are the deepest, reaching a maximum depth of slightly more than 11 km in the so-called "Challenger Deep" area (Figure 9). This region is ideal to test our surface approximation technique because it has been thoroughly studied, so many ship-track datasets are available. We select a 45 × 45 km area, corresponding to latitudes between 11.2° and 11.6° North and longitudes between 142° and 142.4° East. We use 16 tracks from the database assembled by David T. Sandwell and coworkers at the University of California, San Diego (http://topex.ucsd.edu). Each individual track contains between 62 and 152 points giving depth for a given latitude and longitude. The total number of points in the whole dataset is 1576. The depth varies between 6779 and 10952 m. As can be seen in Figure 9, the ship-track coverage of the area is nonuniform; note in particular the lack of data in the north-east and south-east corners. Fortunately, data coverage is much better near the center, in the deepest part of the trench. We create an approximant using 169 quadrangular Bogner–Fox–Schmit finite elements defined on a regular 13 × 13 grid in the horizontal plane in the area under study. As underlined in the previous section, these elements enable us to obtain an approximant with C^1 regularity. Figure 9 shows a 3D view of the final surface obtained, as well as the original set of ship tracks. For display purposes, the approximant has been evaluated on a regular 200 × 200 grid of points, and a vertical exaggeration factor of 3 has been applied. By comparing with the ship tracks, one can see that the smooth surface obtained correctly reproduces the general characteristics of the bathymetry of the region and behaves satisfactorily even in the areas where the data coverage is sparse.
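As a rough stand-in for the C^1 BFS D^m-spline fit used here, one can get the flavour of fitting scattered ship-track soundings with a generic smoothing bivariate spline; the sketch below uses scipy's SmoothBivariateSpline on synthetic tracks (it is not the authors' finite-element approximant and is less robust where coverage is sparse):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(3)

# Synthetic "ship tracks": 16 straight lines across a 45 x 45 km box,
# each with 100 soundings (stand-in for the UTM-projected dataset).
tracks = []
for _ in range(16):
    p0, p1 = rng.random(2) * 45e3, rng.random(2) * 45e3
    t = np.linspace(0.0, 1.0, 100)[:, None]
    tracks.append(p0 + t * (p1 - p0))
xy = np.vstack(tracks)
depth = -8000.0 - 2000.0 * np.exp(-((xy[:, 0] - 22e3) ** 2
                                    + (xy[:, 1] - 22e3) ** 2) / (2 * 8e3 ** 2))

# Smoothing bivariate spline fit (smooth piecewise polynomials).
spl = SmoothBivariateSpline(xy[:, 0], xy[:, 1], depth, kx=3, ky=3, s=len(depth))

# Evaluate on a regular display grid, as done for Figure 9 (right).
g = np.linspace(0.0, 45e3, 200)
grid = spl(g, g)  # (200, 200) array of approximated depths
print(grid.min(), grid.max())
```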

Figure 9.

LEFT: We focus on a 45 × 45 km region in the south-west of the Marianas trench. We use 16 bathymetry ship tracks, each containing between 62 and 152 points; the entire set of curves contains 1567 points, each giving depth for a given latitude and longitude. On this top view the coordinates have been mapped using the Universal Transverse Mercator (UTM) projection. The depth in the dataset varies between 6779 and 10952 m. One can see that the ship-track coverage is nonuniform; for instance, we have little information in the north-east and south-east corners of the area. RIGHT: We construct a bathymetry map from the set of 16 ship-track data curves using a regular grid of 13 × 13 quadrangular Bogner–Fox–Schmit finite elements of class C^1. For display purposes, the approximant obtained has been evaluated on a regular 200 × 200 grid of points, and a vertical exaggeration factor of 3 has been applied. The original 16 ship tracks are also shown (dashed lines) to illustrate the quality of the obtained surface. The isolines represent bathymetry every 500 m. By comparing with the left panel, one can see that we are correctly reproducing the general trends of the bathymetry of the area.

The quadratic error is equal to 3.29 × 10⁻⁵, which is a very satisfactory result, unusually low in the context of surface approximation (e.g., Apprato et al., 2002); as a comparison, a usual D^m-spline (Arcangéli, 1989) applied to the same dataset on the same finite-element grid gave an error of 6.4 × 10⁻⁴.

3.3. Application to surface reconstruction from bathymetry ship track data and Lagrange data around Hawaiian hot spot

To demonstrate the efficiency of our method, we also give a numerical example from a set of 7049 data points (both bathymetry and Lagrange data) around the Big Island in Hawaii.

Figure 10.

Hawaiian Islands. From lower right to upper left, the Big Island (Hawaii), Maui, Kahoolawe, Lanai, Molokai, Oahu, Kauai, and Niihau islands make up the state of Hawaii, which lies more than 2,000 miles from any other part of the United States. The small red dot on the Big Island's southeastern side denotes a hot spot on Kilauea Volcano's southern flank. Kilauea has been erupting almost continuously since January 1983 and is one of the world's best-studied volcanoes.

The maximum height of the Big Island is 4.7 km, and the depth of the seafloor reaches more than 4 km in several places. Seafloor data are obtained with sonar systems such as those described in the previous section. From the dataset, with the knowledge of the large variations, we have built a triangulation of each region using the software Mefisto. We give the C^1 approximant (see below). We also evaluate the approximant obtained at the 7049 data points of the dataset. To estimate the error quantitatively, we then evaluate the quadratic error on the dataset: we obtain a value of 5.65 × 10⁻⁵, which is a satisfactory result.

Figure 11.

A 3D view of the C1 approximant of the Big Island, Hawaii.

Figure 12.

A 3D view of the entire zone of the Hawaiian islands from a large dataset (one million data points).

4. Surface approximation from surface patches

The problem of constructing a surface from given patches on this surface appears, for instance, in geophysics or geology, in processes like migration of time-maps or depth-maps. The problem of surface approximation from surface patches can be posed as follows: from a finite set of open subsets ω_j, j = 1,…,N (surface patches in our case) in the closure of a bounded nonempty open set Ω ⊂ R², and from a function f defined on Ξ = ∪_{j=1,…,N} ω_j, construct a regular function Φ on Ω approximating f on Ξ, i.e., Φ|_Ξ ≈ f|_Ξ.

The modelling parallels that of the bathymetry dataset case; the only difference lies in the fidelity criterion (see (23) for the bathymetry case), which becomes:

‖v‖_{0,Ξ} = (Σ_{j=1,…,N} ∫_{ω_j} v²(x) dx)^{1/2}   (30)

The functional (26) becomes J_ε(v) = ‖v − f‖²_{0,Ξ} + ε |v|²_{m,Ω}, and the resulting minimization problem has a unique solution (by the Lax–Milgram lemma; see Gout (2002) or Apprato et al. (2000)). The main difficulty consists in approximating ‖v − f‖²_{0,Ξ}; this is done using quadrature formulae. The discretization is done using the finite element method. Convergence results and numerical examples are given in Apprato et al. (2000) and in Gout (2002).
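A patch fidelity term like (30) can be estimated by an area quadrature; the simplest sketch is a masked Riemann sum on a background grid, assuming the patches ω_j are supplied as boolean masks (an illustration, not the quadrature formulae of the cited references):

```python
import numpy as np

def patch_fidelity(v_vals, f_vals, masks, cell_area):
    """Estimate ||v - f||^2_{0,Xi} of Eq. (30) by a Riemann sum.

    v_vals, f_vals : values on a regular background grid over Omega,
    masks          : list of boolean arrays, one per patch omega_j,
    cell_area      : area of one grid cell.
    """
    total = 0.0
    for m in masks:  # sum over the patches omega_j
        total += np.sum((v_vals[m] - f_vals[m]) ** 2) * cell_area
    return total

# Example on (0,1)^2 with two rectangular patches:
g = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(g, g)
f_vals = np.sin(X) * Y
v_vals = f_vals + 0.01
masks = [(X < 0.4), (X > 0.7) & (Y > 0.5)]
print(patch_fidelity(v_vals, f_vals, masks, cell_area=0.01 ** 2))
```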

5. Nonregular surface approximation: application in Geosciences

5.1. Modelling

The right approach to obtain a good approximant of a nonregular surface consists in first applying a segmentation process to precisely define the locations of large variations and faults, and then exploiting a discrete approximation technique. To perform the segmentation step, we propose a quasi-automatic algorithm that uses a level set method to obtain, from the given (gridded or scattered) Lagrange data, several patches delimited by large gradients (or faults). Other approaches for fault detection can be found in Gutzmer and Iske, 1997, or in Parra et al., 1996.

Then, with the knowledge of the location of the discontinuities of the surface, we generate a triangular mesh (which takes into account the identified set of discontinuities) on which a Dm-spline approximant is constructed. To show the efficiency of this technique, we will present the results obtained by its application to a dataset in Geosciences.

The main goal of this work is thus to give a quasi-automatic algorithm to determine the location of the large variations and faults of the surface, in order to use specific methods that rely on splines under tension with a nonconstant smoothing parameter near the identified set of discontinuities. Likewise, if one wants to use finite element methods in the discretization step, it is well known that, to correctly reproduce the set of surface discontinuities (faults of the function and/or creases of its derivatives), there are constraints on the triangulation of the domain of definition of the function: in particular, as shown by Arcangéli et al., 1997, the edges of the triangles of the triangulation should not intersect the set of discontinuities, here denoted by D (Figure 13).

As a consequence, it is generally necessary to consider a number of different connected open subsets F_i, commonly called patches (Figure 13), and to mesh them as done in Figure 13. Let us note that, to improve the results, an adaptive mesh refinement can be made near the set D.

Then it would be possible to use a finite element approximant. Unfortunately, because of difficulties linked to the geometry and the number of data points, finite element methods turn out to be hard to use. In this work we therefore use a D^m-spline approximant whose definition takes into account the particular structure of the surface domain, as introduced in Arcangéli et al., 1997. The algorithm is summarized in Diagram 1. For more details about the segmentation method exploited to locate the set of discontinuities D of the surface, see Gout et al., 2008. For the segmentation process, the input surface is converted into a grayscale image composed of pixels whose brightness values are given by the z-coordinate of the data points on each node of a regular grid (a sketch of this conversion is given below); it is then easy to apply segmentation tools developed in image processing (see Le Guyader et al., 2005, Caselles et al., 1997, Gout and Le Guyader, 2006 and 2008, Gout et al., 2005, or Forcadel et al., 2008) to surface approximation applications. As mentioned in Diagram 1, it would also be possible to work with random datasets on a surface (see Gout et al., 2008), but this chapter primarily aims at showing the efficiency of the proposed strategy in the case of regularly distributed points. We then use a finite element method to mesh the surface taking into account the identified set of discontinuities D. The approximation operator and the convergence of the method when the number of data points tends to infinity are discussed in Arcangéli et al., 1997.
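The conversion step described above, together with a simple gradient-magnitude map that flags candidate fault locations for the segmentation, can be sketched as follows (the actual chapter uses a level-set segmentation, see Gout et al., 2008; this only prepares its input):

```python
import numpy as np

def surface_to_image(z_grid):
    """Map gridded heights to 8-bit grayscale brightness values."""
    z = np.asarray(z_grid, dtype=float)
    img = 255.0 * (z - z.min()) / (z.max() - z.min())
    return img.astype(np.uint8)

def gradient_magnitude(z_grid, dx=1.0, dy=1.0):
    """Finite-difference gradient magnitude; large values flag steep
    variations or faults that the segmentation should follow."""
    gy, gx = np.gradient(np.asarray(z_grid, dtype=float), dy, dx)
    return np.hypot(gx, gy)

# Example: a step "fault" across a 100 x 100 grid.
z = np.fromfunction(lambda i, j: np.where(j > 50, 10.0, 0.0), (100, 100))
img = surface_to_image(z)
print(img.min(), img.max(), gradient_magnitude(z).max())
```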

Figure 13.

Top: Examples of prohibited and allowed triangles in the domain triangulation: the identified set of discontinuities D must be taken into account. Middle: Example of a set of discontinuities D and the three different subsets F1, F2, F3 it delineates. Bottom: Following the set of discontinuities D, a triangulation is made: no triangle intersects the set of discontinuities.

Diagram 1.

Presentation of the modelling.

5.2. Numerical examples

In this section we give an example based on real data values from the Vallée d'Ossau, Pyrénées mountains, France. The ground-penetrating radar (GPR) technique is based on the principle that high-frequency electromagnetic waves may be reflected at boundaries separating heterogeneous regions of the subsurface. This technique is a very high resolution geophysical tool, with a penetration depth of a few tens of meters (100 m in the best conditions), depending on the underground physical properties and on the radar wave frequency used. Usually, GPR surveys are conducted with a constant offset between transmitter and receiver and a single-fold coverage. The time unit is the nanosecond, and the frequency range is between 10 MHz and 1 GHz. The propagation velocity ranges between 0.01 and 0.3 m/ns, for example 0.12 m/ns in limestone and 0.07 m/ns in silts. In this study, we have used a frequency of 100 MHz and an offset of 1 m, and we have obtained a penetration depth of about 10 m.

Figure 14.

View of the three-dimensional data block. A and B correspond to two horizons. In this work we use the dataset corresponding to Horizon A.

The studied area is located in the Western Pyrénées (30 km south of Pau, Béarn, France), in the Vallée d'Ossau, an old glacial valley. 2D experiments were conducted and gave information on the sub-surface structures in the area. The interface between the limestone bedrock and the fluvio-glacial deposits has a depth varying between 2–3 and 12–13 m, with a weak general dip from north to south (about 2°). Above this interface, the fluvio-glacial deposits show several sedimentary figures, at meter or decameter scale, which could correspond to old fluvial channels. However, this 2D acquisition could not give us enough information to precisely describe the structures present on the site; the solution was to conduct a 3D GPR experiment on part of the area. The single-fold GPR data were acquired along north–south profiles. The acquisition area corresponds to a rectangle of 38 m × 35 m. Throughout the whole acquisition work, a constant distance of 1 m was maintained between the transmitter and receiver antennae, and both antennae were oriented perpendicularly to the profile direction. Each trace was vertically stacked 256 times in the field. The sampling rate was 0.676 ns and the trace length 300 ns. Figure 14 shows a 3D view of the data cuboid. Continuity of reflectors is better in the inline direction because the number of traces is higher (141 for inline sections instead of 77 for crossline sections). A strong and continuous reflector (called Horizon A) appears at about 250 ns and is present on all the traces. According to the field observations and a first interpretation of the 2D profiles, this lowermost reflector can be interpreted as the interface between the limestone bedrock and the fluvio-glacial deposits. In order to test our algorithm, we have modified the dataset to create a fault. 2D and 3D views of the considered dataset are given in Figures 15 and 16, respectively.

Figure 15.

Left: Two-dimensional view of the geological dataset (8949 data points), containing a fault located along the boundaries of the white regions. Right: Two-dimensional view of the locally C^1 approximant of the geological surface with the fault.

Figure 16.

Three-dimensional view of the dataset corresponding to the geological surface with the fault (8949 data points).

Figure 17.

Three-dimensional view of the locally C^1 approximant of the geological surface with the fault. The approximant has been evaluated on an evenly spaced grid made of 150 × 150 points. The same scale and colormap are used for each surface. The general agreement is excellent, and no oscillations are present.

Here are the parameters used when running the proposed algorithm on this real-world example:

  1. The triangulation is made of 400 triangles;

  2. The adopted generic finite element is the Argyris triangle (class C1);

  3. The smoothing parameter is chosen equal to 10⁻⁶;

  4. The evaluation grid is 150 × 150.

The quadratic error on the 8949 data points turns out to be equal to 2.332 × 10⁻³. The obtained approximant is depicted in Figures 15 (right) and 17, where 2D and 3D views are given, respectively. As can be seen, no oscillations are present and excellent reconstruction results are achieved.

6. Conclusion

We have developed a novel and efficient algorithm for approximating nonregular gridded data exhibiting large and rapid variations and/or complex fault structures. The main steps of the proposed strategy are: (1) a pre-processing phase that identifies the set of discontinuities of the surface; (2) the generation of a triangular mesh that takes these discontinuities into account; (3) the application of a finite element domain decomposition method and of a spline operator relying on scale transformations (which proves very useful in controlling the behavior of the surface near steep gradients not detected by the segmentation process) to produce the final approximating surface. Compared with conventional data fitting methods in the literature, the proposed algorithm suppresses, or at least decreases, the undesired oscillations that generally arise near steep gradients, thus ensuring a faithful and accurate representation. The presented numerical examples illustrate the efficacy of the method.

© 2009 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike-3.0 License, which permits use, distribution and reproduction for non-commercial purposes, provided the original is properly cited and derivative works building on this content are distributed under the same license.
