## 1. Introduction

We propose an approximation method for surfaces with faults and/or large variations. We use image segmentation tools, meshing constraints, finite element methods, and spline theory.

Curve and surface fitting from rapidly varying data using spline functions is a difficult problem (see Salkauskas, 1974, Franke and Nielson, 1984, or Franke, 1982). In the bivariate case, and without information about the location of the large variations, usual approximation methods lead to instability phenomena or undesirable oscillations that can locally, and even globally, hinder the approximation (Gibbs phenomenon).

We therefore propose a new method that uses scale transformations (see Apprato and Gout, 2000). The originality of the method lies in a pre-processing and a post-processing of the data. Instead of trying to find an approximant directly, we first apply a scale transformation to the z-values of the function. In the particular case of surface approximation, this pre-processing removes the large variations of the unknown function, so that the pre-processed data no longer exhibit them; a usual approximant can then be used without creating oscillations.
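The pre/post-processing idea can be sketched in a few lines. The following is an illustrative monotone piecewise-linear variant (the function names and the even subdivision of [0, 1] are assumptions of this sketch, not the authors' C^m construction described later):

```python
import numpy as np

def build_scale_transform(z):
    """Return a monotone map phi with phi(z_i) evenly spread in [0, 1],
    together with its inverse (the postprocessing map)."""
    zs = np.unique(z)                        # sorted distinct z-values
    u = np.linspace(0.0, 1.0, len(zs))       # even subdivision of [0, 1]
    phi = lambda v: np.interp(v, zs, u)      # preprocessing
    phi_inv = lambda w: np.interp(w, u, zs)  # postprocessing (inverse)
    return phi, phi_inv

z = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # data with a large jump
phi, phi_inv = build_scale_transform(z)
u = phi(z)                                    # evenly spread in [0, 1]
assert np.allclose(phi_inv(u), z)             # the jump is recovered exactly
```

After the transform, the values u no longer exhibit the large jump between 0.2 and 5.0, so a standard smooth approximant can be fitted to them; the fitted values are then mapped back through `phi_inv`.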

In the case of vertical faults, we also propose an algorithm to find the location of the large variations: the right approach to get a good approximant consists in first applying a segmentation process to precisely define the locations of large variations and faults, and then exploiting a discrete approximation technique. To perform the segmentation step, we propose a quasi-automatic algorithm that uses a level set method to obtain, from the given (gridded or scattered) Lagrange data, several patches delimited by large gradients (or faults). Then, with the knowledge of the location of the discontinuities of the surface, we generate a triangular mesh (which takes into account the identified set of discontinuities) on which a D^{m}-spline approximant (see Manzanilla, 1986, Apprato et al., 1987, Arcangéli et al., 1997, or Arcangéli et al., 2004) is constructed.

We apply our method to different datasets (bathymetry, Lagrange data, etc.): the Piton de la Fournaise volcano on Réunion Island (see Gout and Komatitsch, 2000), the Pyrénées mountains in France (see Apprato et al., 2000), and the Marianas trench in the Pacific (see Apprato et al., 2002). We also give an example around the Hawaiian hot spot. The topography and bathymetry of the Hawaiian Islands in the Pacific Ocean result from the activity of a huge hot spot combined with the effect of erosion. This hot spot has been more or less active since the Late Cretaceous; as a result, the Big Island continues to grow, and to the east a new island is being formed.

## 2. Mathematical Modelling for surface approximation from Lagrange dataset

Unfortunately, when applied to the approximation of surfaces from rapidly varying data, usual methods like splines lead to strong oscillations near steep gradients, as illustrated in Figure 1. When the location of the large variations in the dataset is known, Salkauskas (1974) proposed methods that use a spline under tension with a nonconstant smoothing parameter, and Hsieh and Chang (1994) proposed a concept of virtual nodes inserted at the level of the large variations, in the context of computer-aided geometric design. In the more general case where the location of the large variations in the dataset is not known *a priori*, Franke (1982) and Bouhamidi and Le Méhauté (1999) proposed splines under tension belonging to more general spaces. These methods give good results for curve fitting, but less accurate results for surface fitting.

The new method we introduce here uses scale transformations, and is applied without any particular *a priori* knowledge of the data. The philosophy of the method is similar to interpolation methods based upon anamorphosed data commonly used in geostatistics (see, for instance, Isaaks and Srivastava, 1989). In the first part of this article, a construction of the scale transformation families is presented. Results concerning the convergence of the approximation are given. We also show the efficiency of this innovative approach by applying it to the topography of the summit of the Piton de la Fournaise volcano, located on Réunion Island (Indian Ocean, France). This volcano exhibits large and rapid variations in steep river valleys in its southwestern part, as well as in a caldera, where the behavior of the method is tested.

The method we propose uses two scale transformations. The first (preprocessing) transformation maps the data values (z_{i}) onto values (u_{i}) regularly distributed in an interval chosen by the user, as illustrated in Figures 2A and 2B. The large variations having been removed, a usual approximant of the transformed values can subsequently be applied without generating significant oscillations, as shown in Figure 2C. The second scale transformation (postprocessing) maps the fitted values back to the original range.

The functions introduced above have explicit expressions given in Apprato and Gout (2000). The setting is the following: we work on a bounded nonempty open set of R^{2} with Lipschitz boundary, and consider an unknown function f sampled at a set A^{d} of N = N(d) distinct points of the closure of the domain. Here, with distances measured by the Euclidean distance in R^{2}, the index d represents the radius of the biggest sphere included in the domain that contains no point of A^{d}; thus, when d tends to 0, the number of data points tends to infinity. We also introduce the set of data values (z_{i} = f(a_{i})) and the sequence of subdivision points (u_{i}) of [a, b] = Im(f).

### 2.1. Scale transformations

In this section, we give a construction of the scale transformation families by generalizing the technique of Torrens (1991). These scale transformations are realistic in the sense that, like classical transformations, they are monotone.

*Preprocessing of the data: the first scale transformation family*

The goal of the first scale transformation is to map the data values (z_{i}) onto values (u_{i}) regularly distributed in a given interval, following a given subdivision. This interval and subdivision are chosen by the user. When dealing with surface approximation from rapidly varying data, we choose the interval to be [0,1], and an even subdivision {u_{i}}, which reduces the local variations of the (z_{i}). After applying the transformation, we obtain data (a_{i}, u_{i}) related to the initial data by taking u_{i} as the transform of z_{i}; in such a case, Apprato and Gout (2000) showed that it is also possible to choose a nonregular distribution in the interval.

We introduce a subdivision of [0,1] (relations (5) and (6)), and the transformation function itself, given by equation (7), where the one-dimensional functions involved are of class C^{m} on [0,1] (see Ciarlet, 1978) and where, for any i = 1, …, p(d)-1, the interpolation condition (8) holds. Using relations (5)–(8), we obtain the properties of the transformation at the points (u_{i}) and between them.

We now consider a sufficient convergence hypothesis, which implies that the distribution of the data (z_{i}) has an asymptotic regularity in the interval [a, b] when d tends to 0, and which is used to establish the convergence of the approximation. This hypothesis is that there exist C > 0 and an integer m'' verifying (9).

We also suppose that the set A^{d} introduced above is such that there exists C' > 0 verifying condition (10). This condition, introduced by Arcangéli (1989), expresses a property of asymptotic regularity of the distribution of the data set A^{d} in the domain.

One can notice that the construction of the scale transformations only requires prescribing, at the points u_{i}, the first derivatives of the transformation.

*Postprocessing of the data: the second scale transformation family*

Similarly to the way we constructed the preprocessing scale transformations, we can fit the transformed values on A^{d} without generating significant oscillations. To map these values back and obtain the approximated values of z, we need a postprocessing step, and therefore introduce a second family of scale transformations.

We also define the postprocessing function, given by equation (13), where the one-dimensional functions involved are of class C^{m} on [0,1] (Ciarlet, 1978) and where, for any i = 1, …, p(d)-1, condition (14) holds.

Under hypotheses (9) and (10), Apprato and Gout (2000) established the following relations:

(15)

It is important to mention that (15-i) is one of the key points of the algorithm, that (15-ii) enables us to obtain approximants with high regularity, and that (15-iii) and (15-iv) are used to establish the convergence of the approximation.

### 2.2. The spline operator

Given a Lagrange dataset on A^{d}, we construct an approximant of class C^{k} (with k = 1 or 2 in practice). In this work, we use a smoothing D^{m}-spline, as defined in Arcangéli et al. (2004), which has many advantages: it is possible to implement a local refinement, the matrix of the linear system to solve is banded, and it is possible to study the convergence of the approximation. We have chosen a smoothing D^{m}-spline rather than an interpolation spline because we want to be able to work with large datasets of up to several hundred thousand points, and in that case a smoothing spline is far less expensive than an interpolation spline. We consider the functional, for any v belonging to H^{m}(Ω),

where the first term measures the fidelity to the data through the vector of values of v at the points of A^{d} in R^{p(d)}, the second term is the semi-norm of order m on H^{m}(Ω) measuring the smoothness of v, and the positive factor is a smoothing parameter. We call the minimizer the D^{m}-smoothing spline relative to the data; it is the solution of the following minimization problem in H^{m}(Ω):

The solution is characterized by the following variational equation in H^{m}(Ω):

Uniqueness of the solution can be proved using the Lax–Milgram lemma and results by Necas, 1967 to establish an equivalence of norms.

In order to compute the solution, we discretize the minimization problem in a finite element space, using elements of class C^{0} or of class C^{1} in order to obtain a C^{0} or C^{1} approximant. In the following, we write the resulting discrete smoothing spline associated with the dataset A^{d}.
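As a rough illustration of the smoothing-spline idea (not the authors' finite element implementation), a one-dimensional analogue replaces the D^{m} semi-norm by a discrete second-difference penalty, which leads to a small linear system; the grid size, step data, and parameter value below are illustrative:

```python
import numpy as np

# Minimize  sum_i (v_i - z_i)^2 + eps * |D2 v|^2  over grid values v:
# the normal equations give (I + eps * D2^T D2) v = z.

n = 50
x = np.linspace(0.0, 1.0, n)
z = np.where(x < 0.5, 0.0, 1.0)          # rapidly varying data (a step)

D2 = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator (n-2, n)
eps = 1e-6                               # smoothing parameter, as in Sec. 2.4
v = np.linalg.solve(np.eye(n) + eps * D2.T @ D2, z)

assert np.max(np.abs(v - z)) < 0.05      # close fit with mild smoothing
```

With a small smoothing parameter the fit stays close to the data; increasing `eps` trades fidelity for smoothness, which is exactly the role the smoothing parameter plays in the functional above.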

### 2.3. Convergence results

We first give the convergence of the D^{m}-spline operator. Keeping the notation of the previous sections, and under hypotheses (9) and (10), the discrete spline converges to the approximated function when d tends to 0. From this result, using a compactness argument, Apprato and Gout (2000) established a theoretical result concerning the convergence of the whole approximation scheme, including the pre- and postprocessing steps.

### 2.4. Numerical examples

The Piton de la Fournaise is a volcano located in the Indian Ocean, on Réunion Island, France. This volcano exhibits strong topographic variations near its summit, due to the presence of a caldera and of two steep river valleys in its southwestern part, as can be seen in the picture of the volcano presented in Figure 3. The maximum height of the volcano is 2.6 km, and the depth of the valleys reaches more than 1000 m in several places. Being able to describe the topography of such regions exhibiting rapid local variations with at least C^{0} regularity, or even C^{1} regularity, is important in many fields in geophysics. For example, this description of the topography can be an input to numerical modeling codes that study the propagation of pyroclastic flows or lava flows, and related hazards; other examples are seismic site effects and ground motion amplification due to topographic features. In both cases, to avoid creating numerical artefacts, it is important not to introduce spurious oscillations in the description of the model itself: it is well known, for instance, that in the context of curvilinear spectral element modeling of elastic wave propagation, artificial diffraction points appear at the edges between elements, which significantly affects the behavior of surface waves. To demonstrate the efficiency of our method, we create C^{0} and C^{1} approximants from a set of 8208 data points taken from a digital elevation model (DEM) of the summit, obtained by digitizing a map of the area. In this DEM, the height is given on an evenly spaced grid of 76 x 108 points, with a grid spacing of 200 m; the considered region therefore has a dimension of 15 km in the East–West direction and 21.4 km in the North–South direction. This DEM is shown in Figure 4 using a top view, with isocontours representing the height of the topography every 0.2 km.

In the preprocessing step, we choose a regular distribution of the u_{i} in [0,1] in order to reduce the large variations in the dataset. The approximants are subsequently obtained by discretizing the D^{m} spline in a finite element space. In the case of the C^{0} approximant, we use 30 x 40 rectangular C^{0} BFS finite elements, each having four degrees of freedom. In the case of the C^{1} approximant, we use 15 x 20 rectangular C^{1} BFS finite elements, each having sixteen degrees of freedom. In both cases, the smoothing parameter is taken to be 10^{-6}.

In Figure 4, we show a three-dimensional representation of the C^{1} approximant after postprocessing, evaluated on an evenly spaced grid comprising 200 x 200 points. The grid spacing is therefore 75.37 m in the East–West direction and 107.54 m in the North–South direction. From the figure it is clear that the results do not exhibit strong oscillations, even though the use of such a dense evaluation grid is expected to enhance any artefacts generated by the approximation method. To compare this approximant to the original dataset more precisely, in Figure 5 we present a top view of the approximated values, with isocontours representing the height every 0.2 km, in addition to the same plot for the original dataset. It is clear from these plots that the approximant is very close to the original data, with local variations smoothed as expected. One can notice that the approximant does not exhibit significant oscillations even in the difficult regions of the model, particularly the two valleys. To demonstrate this more quantitatively, we evaluate the quadratic error for the two approximants: in the case of the C^{0} approximant, the error is 4.96 x 10^{-4}; in the case of the C^{1} approximant, it is 4.01 x 10^{-4}. Such values are considered very good in the context of surface approximation, and show the efficiency of the proposed approach for this case with rapidly varying data. Over the entire dataset, the maximum error measured is 5.5%, corresponding to an absolute error of 56 m. This maximum error occurs in a region located on the edge of the steep valleys, where the local variations are the strongest, as expected. More detailed studies of the approximation error, and evidence that the rate of convergence is higher for this method than for usual approaches with no preprocessing, such as thin plate splines or splines under tension, can be found in Schoenberg, 1960.
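The text does not spell out the normalization behind the quadratic error; a common choice (assumed in this sketch, with an illustrative function name) divides the sum of squared residuals by the sum of squared data values:

```python
def quadratic_error(z_data, z_approx):
    """Normalized quadratic error between data values and approximated values.
    (Assumed normalization: sum of squared residuals / sum of squared data.)"""
    num = sum((za - zd) ** 2 for zd, za in zip(z_data, z_approx))
    den = sum(zd ** 2 for zd in z_data)
    return num / den

z = [100.0, 200.0, 300.0]
assert quadratic_error(z, z) == 0.0   # a perfect fit gives zero error
```

Under this convention the error is dimensionless, so values like 4.01 x 10^{-4} can be compared across datasets with very different height ranges.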

We have presented a new method to fit rapidly varying geophysical data. The ability to suppress, or at least significantly reduce, oscillations of the surface near steep gradients has been demonstrated. The scale transformation families introduced provide more control on the behavior of the approximant, without any particular a priori knowledge of the location of the large variations in the dataset. The regularity obtained, which can be C^{0}, C^{1}, or higher, enables us to describe the topography of real geophysical surfaces accurately. We have shown the good properties of this approach by applying it to the real case of the Piton de la Fournaise volcano.

*Figure caption:* The general agreement is excellent, and it is important to notice that no significant oscillations can be observed, even in the two steep valleys. Isocontours represent the height of the topography every 0.2 km. The gray scale also indicates the height of the topography, from 0 to 2.6 km.

## 3. Seafloor surface approximation from bathymetric dataset

### 3.1. Modelling

The problem of surface approximation from a given set of curves can be formulated as follows: from a finite set of curves F_{j}, j = 1,…, N (the bathymetry ship track curves in our case) in the closure of a bounded nonempty open set Ω of R^{2}, and from a function f defined on the union F of these curves, construct an approximant of f on the whole domain.

We can assume that Ω is a connected set with a Lipschitz-continuous boundary (following the definition of Necas, 1967), that for any integer j, with j = 1,…,N, F_{j} is a nonempty connected subset of F, and that, for simplicity, f is the restriction to F of a function, still denoted by f, that belongs to the usual Sobolev space H^{m}(Ω), with the integer m > 1. We also assume that the approximant belongs to H^{m}(Ω).

Let us define, for any v belonging to H^{m}(Ω), the interpolation problem (22) of finding an element of H^{m}(Ω) that coincides with f on F. If L^{2}(F) is equipped with the usual norm, and under the hypothesis that the corresponding quantity is a norm on H^{m}(Ω) which is equivalent to the usual norm of H^{m}(Ω), then the solution of the interpolation problem (22) is the unique interpolating element of minimal norm in H^{m}(Ω). Hence we could take this solution as the approximant when m > k + 1. Unfortunately, it is often impossible to compute it using a discretization of problem (22), because in a finite-dimensional space it is generally not possible to satisfy an infinity of interpolation conditions. Therefore, to take into account the continuous aspect of the data on F, we instead choose to define the approximant as a fitting surface with respect to the set:

In this work, we propose to construct a variant of the “smoothing D^{m}-spline,” seen in previous sections, that will be discretized in a suitable piecewise-polynomial space. The use of such spline functions has been shown to be efficient in the context of geophysical applications such as Ground Penetrating Radar data analysis (Apprato et al., 2000) or the creation of Digital Elevation Models describing topography (Gout and Komatitsch, 2000).

Let us present in this section the theoretical aspects of the method. We first introduce a functional that we shall minimize, defined on H^{m}(Ω), whose fidelity term measures the discrepancy to f on F in the L^{2}-norm; this norm is evaluated using a quadrature formula. In this respect, the approach is quite different from more classical techniques that usually simply make use of a large number of data points on F in order to solve the approximation problem. For any integer j, j = 1,…,N, we introduce a subdivision of the curve F_{j} by points of R^{2}, chosen so that the distance between two consecutive points tends to 0 as the subdivision is refined.

In what follows, we will suppose that, for any v in H^{m}(Ω), the quadrature sums converge to the corresponding integral over F when the subdivision is refined. When this hypothesis is satisfied, one can consider the quadrature sum as a theoretical quadrature formula for the L^{2}-norm on F. In practice, the curves F_{j} are polygonal curves, and one can, therefore, use a classical quadrature formula (e.g., Arcangéli and Gout, 1976, or Gout and Guessab, 2001).
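For polygonal tracks, such a classical quadrature can be sketched as a composite trapezoidal rule weighted by segment lengths (an illustrative choice with a hypothetical function name; the cited formulas may differ):

```python
import math

def polyline_l2_quadrature(points, values):
    """Trapezoidal approximation of the integral of values^2 along a polyline.
    `points` are (x, y) vertices; `values` are function values at the vertices."""
    total = 0.0
    for (x0, y0), (x1, y1), v0, v1 in zip(points, points[1:], values, values[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)        # length of this segment
        total += seg * 0.5 * (v0 ** 2 + v1 ** 2)  # trapezoid rule on v^2
    return total

# Straight track of length 2 with v = 1 everywhere: the integral is 2.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
assert math.isclose(polyline_l2_quadrature(pts, [1.0, 1.0, 1.0]), 2.0)
```

Refining the subdivision of each F_{j} makes the segment lengths tend to 0, which is exactly the convergence hypothesis stated above.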

For the discretization, we proceed as in Section 2, using a finite element space.

### 3.2. Application to surface reconstruction from bathymetry ship track data in the Marianas trench

Detailed bathymetry maps are essential in several fields in geophysics, such as oceanography and marine geophysics. Historically, over the past decades, research vessels have collected a large number of depth echo soundings, also called SONAR ("SOund NAvigation and Ranging") bathymetry ship track data. Many of these measurements have been compiled to produce global bathymetry maps (e.g., Canadian Hydrographic Office in 1981). In recent years, tremendous advances in satellite altimetry have enabled researchers to produce very detailed bathymetry maps from satellite gravity field measurements, independently of ship soundings. However, long-wavelength variations of the depth of the ocean floor are difficult to constrain using satellite altimetry, and ship track data are still often used instead for that purpose. It is, therefore, of interest to address the issue of producing a bathymetry map from a given set of SONAR bathymetry ship tracks. Let us mention that SONAR ship tracks are typically acquired as a discrete set of measurement points, as opposed to continuous recording. However, the typical horizontal interval between measurement points is always small compared to expected bathymetry variations; therefore, in the context of this study the dataset can be considered as consisting of smooth continuous lines.

As presented in Url, 2009, in order to understand the shape of the seafloor, oceanographers go out on ships and collect sonar data. Sonar data are collected using echosounders and side-scan sonar systems. The digital data are then converted into maps and images.

*How does this work?*

*Echosounders:* Since World War II, echosounders have been used to determine water depths of the oceans. Echosounders are usually attached to the hull of a ship. The echosounder sends an outgoing sound pulse into the water. The sound energy travels through the water to the ocean bottom, where it is reflected back towards the source, received, and recorded.

The time that it takes for the sound to make the round trip to the seafloor is accurately measured. Water depth is determined from the travel time and the speed of sound in water, and can be estimated simply by using an average sound speed and the relationship: distance = speed × time / 2 (the time is divided by 2 to account for the round trip from the echosounder to the seafloor). The main drawback of such an echosounder is that it only gives one depth at a time. That is why multibeam echosounders have been created.
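The depth relationship above is straightforward to encode; in this sketch the 1500 m/s default is an assumed average sound speed in seawater, whereas real processing uses measured sound-speed profiles:

```python
def depth_from_travel_time(two_way_time_s, sound_speed_m_s=1500.0):
    """Water depth from the two-way echo travel time: distance = speed * time / 2."""
    return sound_speed_m_s * two_way_time_s / 2.0

# A 4 s round trip at an average 1500 m/s corresponds to 3000 m of water.
assert depth_from_travel_time(4.0) == 3000.0
```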

*How are water depths turned into a map?:* As a ship steams ahead through the water, multibeam echosounders provide water depths for a swath of the seafloor. The water depths are located in space using satellite navigation. From these data, oceanographers can make maps of the seafloor that resemble topographic maps of land areas. In the early days, bathymetry maps were drawn by hand: contours (lines) of equal water depth were drawn through a grid of numbers that had been plotted on a sheet of paper, and colors, put on by hand, indicated regions of equivalent water depth. Eventually computers took over and produced paper charts of the data, contoured and automatically colored. Now computer software enables individual scientists to process the data and display them on their own computer monitors. Maps can be imported into graphics applications, and annotations and other marks can be added.

*"Side-scan"sonars:* Similar to the multibeam echosounder, the sound transmitted by a side scan sonar instrument travels to the seafloor, bounces off the seafloor, returns to the instrument, and is recorded In the case of a side-scan sonar, it is the intensity or strength of the returning acoustic signal that is recorded. This is controlled primarily by the slope of the seafloor and by what the seafloor is made of. A stronger return is received if the seafloor slopes down to the instrument. Also, the return is stronger if the seafloor is made of bare rocks. The strength of the return is much lower if the seafloor is covered by mud or sand. Volcanoes and other features that stick up above the surrounding seafloor will cast acoustic shadows. These shadows are just like the shadow behind a person when a flashlight is shone on him.

*Converting intensity into an image:* The strength of the sound recorded by the side-scan sonar instrument is converted into shades of gray. A very strong return, say from bare rock, is white; a very weak return is black. The echo strengths that fall between these two extremes are converted into different shades of gray. Historically, side-scan sonar data have been displayed on a hard-copy paper recorder; the paper chart used to be the most convenient method for displaying and storing these data (as well as bathymetry data). Since the 1980s or so, software and hardware have been developed to process side-scan data using computers and display the data on computer screens.

We select the region of the Marianas trench (Figure 9). The trench is located in the North Pacific Ocean, east of the South Honshu ridge, parallel to the Mariana Islands. It corresponds to the subduction zone where the fast-moving Pacific plate converges against the slower-moving Philippine plate. It is also the place on Earth where the oceans are the deepest, reaching a maximum depth of slightly more than 11 km in the so-called "Challenger Deep" area (Figure 9). This region is ideal to test our surface approximation technique because it has been thoroughly studied; therefore, many ship track datasets are available. We select a 45 x 45 km area, corresponding to latitudes between 11.2° and 11.6° North, and longitudes between 142° and 142.4° East. We use 16 tracks from the database assembled by David T. Sandwell and coworkers at the University of California, San Diego (http://topex.ucsd.edu). Each individual track contains between 62 and 152 points giving depth for a given latitude and longitude. The total number of points in the whole dataset is 1576. The depth varies between 6779 and 10952 m. As can be seen in Figure 9, the ship track coverage of the area is nonuniform; note in particular the lack of data in the north-east and south-east corners. Fortunately, data coverage is much better near the center, in the deepest part of the trench. We create an approximant using 169 quadrangular Bogner–Fox–Schmit finite elements defined on a regular 13 x 13 grid in the horizontal plane in the area under study. As underlined in the previous section, these elements enable us to obtain an approximant with C^{1} regularity. Figure 9 shows a 3D view of the final surface obtained, as well as the original set of ship tracks. For display purposes, the approximant has been evaluated on a regular 200 x 200 grid of points, and a vertical exaggeration factor of 3 has been applied.
By comparing with Figure 9 and with the ship tracks, one can see that the smooth surface obtained correctly reproduces the general characteristics of the bathymetry of the region, and behaves satisfactorily even in the areas where the data coverage is sparse.

The quadratic error is equal to 3.29 x 10^{-5}, which is a very satisfactory result (unusually low in the context of surface approximation, e.g., Apprato et al. (2002); as a comparison, a usual D^{m}-spline (Arcangéli, 1989) applied to the same data set using the same finite-element grid gave an error of 6.4 x 10^{-4}).

### 3.3. Application to surface reconstruction from bathymetry ship track data and Lagrange data around Hawaiian hot spot

To demonstrate the efficiency of our method, we also give a numerical example from a set of 7049 data points (both bathymetry and Lagrange data) around the Big Island in Hawaii.

The maximum height of the Big Island is 4.7 km, and the depth of the seafloor reaches more than 4 km in several places. To get seafloor data, side-scan sonar systems are used. From the dataset, with the knowledge of the large variations, we have made a triangulation on each region using the software Mefisto. We give the C^{1} approximant (see below). We also evaluate the approximant obtained at the 7049 data points of the dataset. To estimate the error quantitatively, we then evaluate the quadratic error on the dataset: we obtain a value of 5.65 x 10^{-5}, which is a satisfactory result.

## 4. Surface approximation from surface patches

The problem of constructing a surface from given patches on this surface appears, for instance, in geophysics and geology, in processes like migration of time-maps or depth-maps. The problem of surface approximation from surface patches can be posed as follows: from a finite set of open subsets w_{j}, j = 1,…, N (surface patches in our case) in the closure of a bounded nonempty open set Ω of R^{2}, and from a function f defined on the union of these patches, construct an approximant of f on the whole domain.

The modelling corresponds to that of the bathymetry case. The only difference lies in the fidelity criterion (see (23) for the bathymetry case), which here becomes:

The functional (26) becomes:

## 5. Non-regular surface approximation: application in Geosciences

### 5.1. Modelling

The right approach to get a good approximant of a non regular surface consists in applying first a segmentation process to precisely define the locations of large variations and faults, and exploiting then a discrete approximation technique. To perform the segmentation step, we propose a quasi-automatic algorithm that uses a level set method to obtain from the given (gridded or scattered) Lagrange data several patches delimited by large gradients (or faults). Other approaches for fault detection can be found in Gutzmer and Iske 1997, or in Parra et al. 1996.

Then, with the knowledge of the location of the discontinuities of the surface, we generate a triangular mesh (which takes into account the identified set of discontinuities) on which a D^{m}-spline approximant is constructed. To show the efficiency of this technique, we will present the results obtained by its application to a dataset in Geosciences.

The main goal of this work is thus to give a quasi-automatic algorithm to determine the location of the large variations and the faults of the surface in order to use specific methods that rely on splines under tension with a nonconstant smoothing parameter near the identified set of discontinuities. Likewise, if one wants to use finite element methods in the discretization step, it is well known that to correctly reproduce the set of surface discontinuities (both of the function—faults—and/or its derivatives—creases), there are some constraints regarding the triangulation of the domain of definition of the function: in particular, as shown by Arcangéli et al., 1997, the edges of the triangles of the triangulation should not intersect the set of discontinuities, here denoted by D (Figure 13).

As a consequence, it is generally necessary to consider a number of different connected open subsets F_{i}, commonly called patches (Figure 13), and mesh them as done in Figure 13. Let us note that, to improve the results, an adaptive mesh refinement can be made near the set D.

Then, it would be possible to use a finite element approximant. Unfortunately, because of difficulties linked to the geometry and the number of data points, finite element methods turn out to be hard to use directly. In this work we will therefore use a D^{m}-spline approximant whose definition takes into account the particular structure of the surface domain, as introduced in Arcangéli et al., 1997. The algorithm is summarized in Diagram 1. For more details about the segmentation method exploited to locate the set of discontinuities D of the surface, see Gout et al., 2008. To perform the segmentation process, the input surface is converted into a grayscale image composed of pixels whose brightness values are given by the z-coordinate of the data points at each node of a regular grid. It is then easy to apply segmentation tools developed in image processing (see Le Guyader et al., 2005, Caselles et al., 1997, Gout and Le Guyader, 2006 and 2008, Gout et al., 2005, or Forcadel et al., 2008) to surface approximation applications. As mentioned in Diagram 1, it would also be possible to work with random datasets on a surface (see Gout et al., 2008), but this paper primarily aims at showing the efficiency of the proposed strategy in the case of regularly distributed points. We then use a finite element method to mesh the surface taking into account the identified set of discontinuities D. The approximation operator and the convergence of the method when the number of data points tends to infinity are discussed in Arcangéli et al., 1997.
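The conversion step can be sketched as follows; the normalization to 8-bit brightness and the gradient threshold are illustrative assumptions, not the level-set segmentation of the cited works:

```python
import numpy as np

def to_grayscale(z_grid):
    """Map gridded z-values to an 8-bit grayscale image (brightness ~ height)."""
    z = np.asarray(z_grid, dtype=float)
    g = (z - z.min()) / (z.max() - z.min())    # normalize heights to [0, 1]
    return np.round(255 * g).astype(np.uint8)  # 8-bit brightness

def large_gradient_mask(z_grid, threshold):
    """Flag pixels whose gradient magnitude exceeds a threshold (fault candidates)."""
    gy, gx = np.gradient(np.asarray(z_grid, dtype=float))
    return np.hypot(gx, gy) > threshold

z = np.zeros((8, 8))
z[:, 4:] = 100.0                               # synthetic vertical fault
img = to_grayscale(z)
mask = large_gradient_mask(z, threshold=10.0)
assert img.max() == 255 and mask.any()         # fault pixels are flagged
```

On real data, the flagged pixels would then seed the segmentation that delimits the patches on which the triangulation of Figure 13 is built.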

### 5.2. Numerical examples

In this section we give an example based on real data values which come from the Vallée d'Ossau, Pyrénées mountains, France. The ground penetrating radar (GPR) technique is based on the principle that high-frequency electromagnetic waves may be reflected at boundaries separating heterogeneous regions of the subsurface. This technique is a very high resolution geophysical tool, with a penetration depth of a few tens of meters (100 m in the best conditions), depending on the underground physical properties and on the radar wave frequency used. Usually, GPR surveys are conducted with a constant offset between transmitter and receiver and a single-fold coverage. The time unit is the nanosecond; the frequency range is between 10 MHz and 1 GHz. The propagation velocity range is between 0.01 and 0.3 m/ns, for example 0.12 m/ns in limestone and 0.07 m/ns in silts. In this study, we have used a frequency of 100 MHz and an offset of 1 m, and we have obtained a penetration depth of about 10 m.

The studied area is located in the Western Pyrénées (30 km south of Pau, Béarn, France) in the Vallée d'Ossau, which is an old glacial valley. 2D experiments were conducted and gave information on the sub-surface structures in the area. The interface between the limestone bedrock and the fluvio-glacial deposits has a depth varying between 2–3 and 12–13 m, with a weak general dip from the north to the south (about 2°). Above this interface, the fluvio-glacial deposits show several sedimentary figures, with meter or decameter scale, which could correspond to old fluvial channels. However, this 2D acquisition could not give us enough information to precisely describe the structures present on the site. The solution was to conduct a 3D GPR experiment on a part of the area, in order to obtain 3D information. So, a 3D GPR data acquisition on the area described above has been conducted. The single-fold GPR data were acquired along north-south profiles. The acquisition area corresponds to a rectangle of 38 m × 35 m. Throughout the whole acquisition work, a constant distance of 1 m was maintained between the transmitter and receiver antennae, and both antennae were oriented perpendicularly to the profile direction. Each trace was vertically stacked 256 times in the field. The sampling rate was 0.676 ns and the trace length 300 ns. Figure 14 shows the 3D view of the cuboid. Continuity of reflectors is better in the inline direction because the number of traces is higher (141 for inline sections instead of 77 for crossline sections). A strong and continuous reflector (called Horizon A) appears at about 250 ns and is present on all the traces. According to the field observations and the first interpretation with the 2D profiles, this lowermost reflector can be interpreted as the interface between the limestone bedrock and the fluvio-glacial deposits. In order to test our algorithm, we have modified the dataset to create a fault. 2D and 3D views of the considered dataset are given in Figures 15 and 16, respectively.

Here are the parameters used when running the proposed algorithm on this second real-world example:

- the triangulation is made of 400 triangles;
- the adopted generic finite element is the Argyris triangle (class C^{1});
- the smoothing parameter is chosen equal to 10^{-6};
- the evaluation grid is 150 × 150.

The quadratic error on the 8949 data points turns out to be equal to 2.332 x 10^{-3}. The obtained approximant is depicted in Figures 15 (right) and 17, where a 2D and a 3D view are given, respectively. As can be seen, no oscillations are present, and excellent reconstruction results are achieved.

## 6. Conclusion

We have developed a novel and efficient algorithm for approximating non-regular gridded data exhibiting large and rapid variations and/or complex fault structures. The main steps of the proposed strategy consist in (1) performing a pre-processing phase to define the set of discontinuities of the surface, (2) generating a triangular mesh taking these discontinuities into account, and (3) applying a specific (known) finite element domain decomposition method and using a spline operator that relies on scale transformations (which prove very useful in controlling the behavior of the surface near steep gradients not found by the segmentation process) to produce the final approximating surface. Compared with conventional data-fitting methods in the literature, the proposed algorithm is able to suppress, or at least decrease, the undesired oscillations that generally arise near steep gradients, thus ensuring a faithful and accurate representation. The presented numerical examples illustrate the efficacy of the method.