## 1. Introduction

In recent years, sustainable materials have been widely studied and developed, especially for construction, highway, and civil engineering. Controlled low-strength material (CLSM) is commonly used as a backfill material. It is an environmentally friendly, low-cost material and typically consists of a small amount of cement, supplementary cementitious materials, fine aggregates, and a large amount of mixing water. Self-compacting/self-leveling behavior, significantly low strength, and almost no measurable settlement after hardening are remarkable characteristics of CLSM.

CLSM can be defined as a kind of self-compacting cementitious material that is in a flowable state at the initial period of placement and has a specified compressive strength of 1200 *psi* (8.27 *MPa*) or less at 28 days, or is defined as excavatable if the compressive strength is 300 *psi* (2.07 *MPa*) or less at 28 days [1]. Recent studies have reported that a maximum CLSM strength of approximately 1.4 *MPa* is suitable for most backfilling applications in which re-excavation is required [2, 3]. It is also recommended that, depending upon availability and project requirements, any recycled material may be acceptable in the production of CLSM, provided its feasibility is tested prior to use [4]. The special features of CLSM can be summarized as follows: durable, excavatable, erosion-resistant, self-leveling, rapid curing, flowable around confined spaces, able to consume waste materials, eliminating compaction labor and equipment, and so on.

There are several studies on the engineering properties of CLSMs by laboratory experiments [5, 6, 7, 8, 9, 10], and numerical analyses of applications of CLSM in civil engineering, such as excavation and backfill behind retaining walls [11, 12, 13], bridge abutments [14, 15, 16, 17], pipeline and trench ducts [18], pavement bases [19, 20, 21, 22, 23, 24], and so on. All these studies reflect the need to identify the mechanical constants of CLSMs. Though it is known that the Young’s modulus of CLSMs lies between those of soil and commonly used concrete, precise determination of the engineering material properties of CLSMs (and even of soils and concretes) is a difficult and open problem. For example, the modulus of elasticity is evaluated experimentally as the secant modulus of the stress-strain curve, or estimated from empirical formulas relating the Young’s modulus to the 28-day compressive strength or the unit weight of concrete.

Besides, artificial neural networks have been widely applied to various engineering fields [25], especially civil and construction engineering [26, 27]. Alternatively, several studies have been conducted on the application of inverse problems to structural and geotechnical problems [28, 29, 30, 31, 32].

## 2. Problem definition and data preparation

### 2.1. Parameter recognition considered as inverse problem

In a classical mechanical analysis of civil engineering problems, we determine the displacements and/or stresses of a structure from known engineering constants, such as the Young’s modulus, *E* (in *GPa*), and the Poisson’s ratio, *ν*, which have been identified from laboratory or in-situ experiments. This kind of problem can be termed a *forward problem*: we evaluate the unknown dependent variables (physical quantities) from prescribed parameters and dimensions, either analytically if closed-form relationships exist, or numerically using computational schemes (such as the finite element method, FEM) if the domain and/or boundary conditions are complicated.

On the other hand, an *inverse problem* in this case is to identify the engineering constants of a structure from evaluated or measured displacements and/or stresses. For a simple problem whose solution can be expressed as a closed-form mathematical relationship, the inverse problem can be solved easily by mathematical manipulation. However, for a practical large structure of complicated shape, a closed-form relationship cannot be obtained. Recently, many schemes have been developed for parameter recognition (classification and regression) in the fields of machine learning and artificial intelligence, among which neural networks such as the back-propagation artificial neural network (BPANN) and the radial basis function neural network (RBFNN) have proven to be powerful and efficient if well designed, trained, and tested.

The two problems are compared and illustrated in **Table 1**, where problems with and without a closed-form relationship between parameters and physical quantities are shown, respectively. **Table 1** also gives a well-known example in civil engineering, the displacement at the end of a cantilevered beam subjected to a concentrated load, in which *Δ*, *P*, *L*, *E*, *I* denote the end displacement, loading, length, Young’s modulus, and moment of inertia, respectively.
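For the cantilevered-beam example in **Table 1**, both directions of the problem can be written in closed form. A minimal Python sketch, assuming the standard tip-load formula *Δ* = *PL*³/(3*EI*) (the numerical values below are illustrative, not from the study):

```python
def forward_deflection(P, L, E, I):
    """Forward problem: end deflection of a cantilever under tip load P."""
    return P * L**3 / (3 * E * I)

def inverse_modulus(delta, P, L, I):
    """Inverse problem: identify Young's modulus E from a measured deflection."""
    return P * L**3 / (3 * delta * I)

# Illustrative values (SI units): P = 10 kN, L = 2 m, E = 30 GPa, I = 1e-4 m^4
P, L, E, I = 10e3, 2.0, 30e9, 1e-4
delta = forward_deflection(P, L, E, I)
E_identified = inverse_modulus(delta, P, L, I)
assert abs(E_identified - E) / E < 1e-12  # the inverse recovers E exactly
```

For a complicated domain such as the backfilled region studied here, no such closed-form inversion exists, which motivates the neural network approach.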

This research aims to treat the identification of the engineering constants of a typical CLSM-backfilled region as an inverse problem. The processes employed are as follows: (1) preparation of training and testing data through numerical analysis (using ANSYS); (2) preparation of verification data (from experiments on CLSMs); (3) normalization of the training, testing, and verification data; (4) prediction using the BPANN and RBFNN neural networks, along with a comparative study of the network parameters involved; and (5) selection of a useful network topology for parameter recognition.

In the following sections, the procedures are explained with numerical results.

### 2.2. Typical problem employed for the recognition of engineering constants

A typical problem of a backfilled region behind a retaining wall is considered (**Figure 1**). The length and height of the backfilled region are *L* = 20 *m* and *H* = 5 *m*, respectively. For simplicity, the backfilled materials are considered to be linearly elastic and to undergo small deformation under external loadings; therefore, a linear elastic analysis can be performed using only two engineering constants, that is, the Young’s modulus, *E* (in *GPa*), and the Poisson’s ratio, *ν*. Furthermore, since the width of the backfilled zone is considered infinitely long, a two-dimensional analysis can be used. In order to evaluate the engineering constants through displacements and stresses, the backfilled region is assumed to be subjected to a concentrated vertical loading (surcharge) *Q*_{0} = 72.5 *kN* acting at a distance *a* = 0.5*H* behind the retaining wall.

In order to provide training data and testing data in parameter recognition using neural networks, the following four quantities are defined:

- *x*_{1} = *U*_{y1}: vertical displacement (settlement) at the upper surface (*x* = *L*/4) (positive downward, *m*).
- *x*_{2} = *U*_{y2}: vertical displacement (settlement) at the upper surface (*x* = *L*/2) (positive downward, *m*).
- *x*_{3} = *S*_{x1}: horizontal normal stress (lateral earth pressure) on the wall (*y* = 3*H*/4) (tensile positive, *Pa*).
- *x*_{4} = *S*_{x2}: horizontal normal stress (lateral earth pressure) on the wall (*y* = *H*/2) (tensile positive, *Pa*).

These sampling points are illustrated in **Figure 1**.

### 2.3. FEM analysis using ANSYS

The finite element equation for the elastic analysis of displacements and stresses of a plane deformation problem can be expressed in matrix form [33]:

[*K*]{*X*} = {*F*}

where [*K*], {*X*}, {*F*} denote the global stiffness matrix, the global degrees-of-freedom vector, and the global load vector, respectively, defined as follows:

If a plane 4-node isoparametric element is used, the shape functions in Eq. (2) can be expressed as

*N*_{i}(*ξ*, *η*) = (1/4)(1 + *ξ*_{i}*ξ*)(1 + *η*_{i}*η*), *i* = 1, 2, 3, 4

where *ξ*, *η* are local coordinates and (*ξ*_{i}, *η*_{i}) are the corner coordinates of the element. A typical PLANE42 element in ANSYS is shown in **Figure 2**.
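The bilinear shape functions above can be evaluated with a few lines of Python; this sketch assumes the usual counter-clockwise corner ordering, which may differ from the node numbering in the ANSYS manual:

```python
# Bilinear shape functions N_i(xi, eta) = 1/4 (1 + xi_i*xi)(1 + eta_i*eta)
# for a 4-node isoparametric element; corners ordered counter-clockwise.
CORNERS = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

def shape_functions(xi, eta):
    """Evaluate the four shape functions at local coordinates (xi, eta)."""
    return [0.25 * (1 + xi_i * xi) * (1 + eta_i * eta) for xi_i, eta_i in CORNERS]

# Partition of unity: the four functions sum to 1 anywhere in the element.
N = shape_functions(0.3, -0.7)
assert abs(sum(N) - 1.0) < 1e-12
# Each shape function equals 1 at its own corner and 0 at the others.
assert shape_functions(-1.0, -1.0) == [1.0, 0.0, 0.0, 0.0]
```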

### 2.4. Numerical experiments for different combination of engineering constants

A finite element mesh with 40 × 10 = 400 elements, 41 × 11 = 451 nodes, and 451 × 2 = 902 degrees of freedom (displacements) is employed for the numerical analysis of the backfilled zone (**Figure 3**). The left-hand and right-hand sides can only move freely in the vertical direction, while the bottom of the backfilled zone is considered fixed in both directions. In order to provide enough sampling data for the later training and testing with supervised neural networks, the Young’s modulus (*E*) ranges from 0.02 to 3 *GPa* (covering the general values of soils and CLSMs), while the Poisson’s ratio (*ν*) is selected to be 0.1, 0.2, 0.25, 0.3, and 0.4. A total of 270 data samples with different combinations of *E* and *ν* are used.
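The 270 (*E*, *ν*) combinations amount to a simple grid, which can be generated as below; the 54 evenly spaced values of *E* are an assumption for illustration, since the chapter does not list the individual moduli:

```python
# Assumed sampling: 54 evenly spaced E values (GPa) x 5 Poisson's ratios = 270 cases.
n_E = 54
E_values = [0.02 + i * (3.0 - 0.02) / (n_E - 1) for i in range(n_E)]
nu_values = [0.1, 0.2, 0.25, 0.3, 0.4]

# Each (E, nu) pair defines one ANSYS run producing (U_y1, U_y2, S_x1, S_x2).
samples = [(E, nu) for E in E_values for nu in nu_values]
assert len(samples) == 270
```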

### 2.5. Data preparation

#### 2.5.1. Data collection from FEM analysis using ANSYS

**Table 2** shows part of the 270 numerical results of computed displacements and stresses for different Young’s moduli and Poisson’s ratios using the ANSYS PLANE42 model.

#### 2.5.2. Normalization of data

Since the output range of the sigmoid function is [0, 1], and in order to obtain better training performance of the neural network, the input data should be normalized to a uniform range.

The relationship between the normalized and original data can be expressed as follows:

*x̄* = *a*_{x}*x* + *b*_{x}, *ȳ* = *a*_{y}*y* + *b*_{y}

where the scaling constants *a*_{x}, *b*_{x} for the input data and *a*_{y}, *b*_{y} for the output data are selected such that the normalized values fall within a suitable range.

Part of the typical normalized FEM-computed displacements and stresses at the sampling points is shown in **Table 3**. Among these normalized data, 216 sets (216/270 ≈ 80%) are selected randomly for training and 54 sets (54/270 ≈ 20%) for testing in the later neural network analyses.
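A minimal min-max normalization consistent with the linear mapping above; mapping each column onto [0, 1] is an assumption matching the sigmoid output range, since the chapter does not state the exact target interval:

```python
def minmax_normalize(column):
    """Map a list of values linearly onto [0, 1]: x_norm = a*x + b."""
    lo, hi = min(column), max(column)
    a = 1.0 / (hi - lo)   # scale a_x
    b = -lo * a           # shift b_x
    return [a * x + b for x in column]

settlements = [0.002, 0.010, 0.004, 0.008]  # illustrative U_y values, m
norm = minmax_normalize(settlements)
assert min(norm) == 0.0
assert abs(max(norm) - 1.0) < 1e-12
```

In practice the same (*a*, *b*) computed from the training data would also be applied to the testing and verification sets, so that all data share one scale.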

#### 2.5.3. Data for verification

In order to verify the trained and tested neural networks, three data sets of backfilled materials are used: (1) a commonly used soil (*E* = 0.1 *GPa*, *ν* = 0.30); (2) CLSM-B80/30% (*E* = 0.27 *GPa*, *ν* = 0.25); and (3) CLSM-B130/30% (*E* = 0.87 *GPa*, *ν* = 0.25). CLSM-B80/30% and CLSM-B130/30% denote CLSMs with binder contents of 80 and 130 *kg/m*^{3}, respectively, with 30% of the cement replaced by fly ash. The CLSM mixtures in this study consisted of fine aggregate, type I Portland cement, stainless steel reducing slag (SSRS), and water. The fine aggregate for the CLSM was formed by well blending river sand and residual soil in a given proportion (e.g., 6:4, by volume) for grading improvement; the soil was obtained from a construction site. The experimental work was conducted on two binder content levels (i.e., 80 and 130 *kg/m*^{3}). The water-to-binder ratio was selected via a few trial mixes until an acceptable CLSM flowability of 150–300 *mm* was achieved. Detailed information can be found in [8].

**Table 4** summarizes the computed displacements and stresses for the verification soil and the two CLSMs using the ANSYS PLANE42 model, while the normalized data for verification are tabulated in **Table 5**.

## 3. Parameter recognition using BPANN

### 3.1. Application of BPANN for parameter recognition

**Figure 4** demonstrates a typical BPANN for the identification of the engineering constants of CLSM. The BPANN shown in **Figure 4** contains two outputs, that is, the Young’s modulus and the Poisson’s ratio (*y*_{1} = *E*, *y*_{2} = *ν*), *L* input neurons (*x*_{i}, *i* = 1, 2, ⋯, *L*), and *M* hidden neurons (*h*_{j}, *j* = 1, 2, ⋯, *M*). The predicted outputs can be expressed as [35, 36, 37]:

*y*_{k} = *f*(∑_{j} *w*_{kj}*h*_{j} + *b*_{k}), *h*_{j} = *f*(∑_{i} *w*_{ji}*x*_{i} + *b*_{j})

where the activation function (the sigmoid function) and its derivative can be expressed as

*f*(*x*) = 1/(1 + *e*^{−x}), *f*′(*x*) = *f*(*x*)[1 − *f*(*x*)]

### 3.2. Learning algorithms of BPANN for parameter recognition

There exist many approaches for the determination of the network parameters in BPANN (*w*_{kj}, *b*_{k}, *w*_{ji}, *b*_{j}). The most basic and popular one, the generalized delta rule based on the method of steepest descent with two learning parameters (the learning rate *η* and the momentum factor *μ*), is employed, which can be expressed as

Δ*w*(*t* + 1) = −*η* ∂*e*/∂*w* + *μ* Δ*w*(*t*) (8)

where *e* denotes the error between the targets and the trained output results.

The MATLAB toolbox *nntool* provides many training methods for BPANN [38], among which four training algorithms were tested in the current research:

1. Generalized steepest descent (GD): the learning rule is Eq. (8) with *η* ≠ 0, *μ* = 0.

2. Generalized steepest descent including momentum (GDM): the learning rule is Eq. (8) with *η* ≠ 0, *μ* ≠ 0.

3. Generalized steepest descent with adjustable learning rate (GDA): the basic learning rule is the same as Eq. (6), but with a conditional judgment added. When learning remains stable under the current learning rate, the rate is increased; otherwise it is decreased. The learning-rate increment and decrement are denoted *ζ*_{inc} and *ζ*_{dec} [35].

4. Levenberg-Marquardt (LM): the learning rule can be written as

Δ*w* = −(**J**^{T}**J** + *λ***I**)^{−1}**J**^{T}*e*

where *λ* denotes a constant that assures the invertibility of the matrix; the learning rule becomes the Gauss-Newton algorithm when *λ* = 0, while it approaches GD with a small learning rate for large *λ* [35].
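The generalized delta rule of Eq. (8) amounts to the simple weight update sketched below; the learning parameters are illustrative, and GD versus GDM differ only in whether *μ* is zero:

```python
def delta_rule_step(w, grad, prev_update, eta=0.1, mu=0.9):
    """One generalized-delta-rule step: update = -eta*grad + mu*prev_update."""
    update = [-eta * g + mu * d for g, d in zip(grad, prev_update)]
    w_new = [wi + ui for wi, ui in zip(w, update)]
    return w_new, update

# GD corresponds to mu = 0; GDM keeps a nonzero momentum factor mu.
w, upd = delta_rule_step([1.0, -0.5], [0.2, -0.4], [0.0, 0.0], eta=0.1, mu=0.0)
assert abs(w[0] - 0.98) < 1e-12 and abs(w[1] + 0.46) < 1e-12
```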

### 3.3. Numerical results of parameter recognition using BPANN

The first supervised learning scheme employed for the recognition of the engineering constants is the BPANN, which can be easily implemented for multiple inputs and multiple outputs. Since there are various design parameters in the construction of a BPANN, we first study some influencing factors (such as different training algorithms, different combinations of input variables, and the number of hidden-layer neurons), and then propose and analyze an appropriate BPANN topology. The MATLAB *nntool* is used for network simulation.

#### 3.3.1. Comparison of different algorithms of learning rules

The convergence of the RMSE with iterations of the BPANN using different training algorithms, under a 4–6–2 topology (NI = 4, NH = 6, NO = 2) for the 216 training sets, is shown in **Figure 5**. The corresponding training functions in MATLAB are: (1) GD (*traingd*); (2) GDM (*traingdm*); (3) GDA (*traingda*); and (4) LM (*trainlm*). Some parameters employed in the four BPANN algorithms are also shown in **Figure 5**. It can be observed that, except for LM (*trainlm*), none of the other three algorithms allows the root mean square error (RMSE) to approach zero quickly. (The results were verified with a self-developed BPANN using the GD learning rule programmed in Python.) This reflects a special feature of the current problem: first-order steepest-descent methods cannot find the global minimum quickly, while the Levenberg-Marquardt (LM) algorithm, based on the Hessian matrix containing second-order derivatives of the cost function, works well.
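A minimal forward pass of the 4–6–2 topology discussed above, in the spirit of the self-developed Python BPANN mentioned in the text; the weights here are random (untrained), and applying the sigmoid on both the hidden and output layers is an assumption:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    """Forward pass of a BPANN with sigmoid hidden and output layers."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = [sigmoid(sum(w * hj for w, hj in zip(row, h)) + b)
         for row, b in zip(W2, b2)]
    return y

random.seed(0)
NI, NH, NO = 4, 6, 2  # inputs (U_y1, U_y2, S_x1, S_x2) -> outputs (E, nu)
W1 = [[random.uniform(-1, 1) for _ in range(NI)] for _ in range(NH)]
b1 = [0.0] * NH
W2 = [[random.uniform(-1, 1) for _ in range(NH)] for _ in range(NO)]
b2 = [0.0] * NO

y = forward([0.5, 0.2, 0.8, 0.1], W1, b1, W2, b2)
assert len(y) == 2 and all(0.0 < v < 1.0 for v in y)
```

Training would then repeatedly apply Eq. (8) (or the LM update) to (W1, b1, W2, b2) to drive the RMSE over the 216 training sets toward zero.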

**Table 6** summarizes the effect of the different BPANN training algorithms on the prediction accuracy. It can be seen that LM (*trainlm*) gives accurate predictions of both the Young’s modulus (error within 6% for the CLSMs) and the Poisson’s ratio (error within 1% for all) for the three kinds of backfilled materials.

#### 3.3.2. Effects of input variable combinations

In a practical engineering situation, we are interested in the question: which physical variables should we measure? To answer this, five cases of different combinations of input variables are investigated:

- Case 1: (*U*_{y1}, *U*_{y2}, *S*_{x1}, *S*_{x2}), NI = 4 (two displacements, two stresses).
- Case 2: (*U*_{y1}, *U*_{y2}, *S*_{x1}), NI = 3 (two displacements, one stress).
- Case 3: (*U*_{y1}, *U*_{y2}), NI = 2 (two displacements).
- Case 4: (*U*_{y1}, *S*_{x1}), NI = 2 (one displacement, one stress).
- Case 5: (*S*_{x1}, *S*_{x2}), NI = 2 (two stresses).

The convergence of the RMSE with iterations of the BPANN for the different combinations of input variables, using the LM training algorithm (NH = 6, NO = 2) for the 216 training sets, is shown in **Figure 6**. Again, LM works well, but the prediction accuracy shown in **Table 7** indicates that only cases 1, 2, and 3 are appropriate for use. The result of case 5 reflects the fact that stress information alone cannot predict the material constants. Therefore, in the following analysis we basically consider the input variables of case 1; this means two displacements and two stresses need to be measured.

#### 3.3.3. Effect of number of hidden neurons

The BPANN results using different numbers of hidden-layer neurons (NH) are shown in **Figure 7** and **Table 8**. From the results, NH > 2 is recommended for accurate prediction of the engineering constants when the LM training algorithm (*trainlm*) is employed.

#### 3.3.4. Selection of a BPANN topology

After some parametric studies, a BPANN with a 4–6–2 topology using the LM training algorithm is proposed for the current parameter recognition problem. The convergence of the RMSE, shown in **Figure 8**, exhibits very fast decay. **Figure 9** shows the QQ plot of the tested targets versus the predicted results during the testing stage after training. The *R*^{2} values for the Young’s modulus and the Poisson’s ratio are 0.99454 and 0.99864, respectively, which indicates that the trained BPANN works well on the testing data. **Table 9** shows the predicted results using the final BPANN design for the recognition of the engineering constants of CLSM.
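The *R*² values reported above can be computed from the tested targets and predictions as follows; the data points in this sketch are illustrative, not the chapter's testing set:

```python
def r_squared(targets, predictions):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_t = sum(targets) / len(targets)
    ss_res = sum((t - p) ** 2 for t, p in zip(targets, predictions))
    ss_tot = sum((t - mean_t) ** 2 for t in targets)
    return 1.0 - ss_res / ss_tot

targets = [0.10, 0.27, 0.87, 1.50]      # illustrative E values, GPa
predictions = [0.11, 0.26, 0.88, 1.49]  # illustrative network outputs
assert r_squared(targets, predictions) > 0.99
```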

## 4. Parameter recognition using RBFNN

### 4.1. Application of RBFNN for parameter recognition

**Figure 10** demonstrates a typical RBFNN for the identification of the engineering constants of CLSM. The RBFNN shown in **Figure 10** contains two outputs, that is, the Young’s modulus and the Poisson’s ratio (*y*_{1} = *E*, *y*_{2} = *ν*), *L* input neurons (*x*_{i}, *i* = 1, 2, ⋯, *L*), and *M* hidden neurons (*h*_{j}, *j* = 1, 2, ⋯, *M*). Supposing we have *S* samples for training, we can select *M* ≤ *S*. The predicted output can be expressed as [35, 36, 37]:

*y*_{k} = ∑_{j} *w*_{kj}*h*_{j}(**x**)

where the kernel function *h*_{j} is defined in terms of the centers *C*_{j} and widths *σ*_{j} of the hidden neurons.
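Assuming the commonly used Gaussian kernel for the hidden neurons (the chapter's exact kernel is cited from [35, 36, 37]), the RBFNN response can be sketched as:

```python
import math

def gaussian_rbf(x, center, sigma):
    """Gaussian kernel h_j(x) = exp(-||x - C_j||^2 / (2 sigma_j^2))."""
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist2 / (2.0 * sigma ** 2))

def rbf_output(x, centers, sigmas, W):
    """Linear output layer: y_k = sum_j w_kj * h_j(x)."""
    h = [gaussian_rbf(x, c, s) for c, s in zip(centers, sigmas)]
    return [sum(w * hj for w, hj in zip(row, h)) for row in W]

centers = [[0.0, 0.0], [1.0, 1.0]]  # illustrative centers C_j
sigmas = [0.5, 0.5]                 # illustrative spreads sigma_j
W = [[0.3, 0.7]]                    # one output neuron, two hidden neurons
y = rbf_output([0.0, 0.0], centers, sigmas, W)
assert abs(y[0] - (0.3 + 0.7 * math.exp(-4.0))) < 1e-12
```

The spread *σ*_{j} (the "spread constant" tuned in Section 4.3) controls how localized each hidden neuron's response is around its center.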

### 4.2. Learning algorithms for RBFNN for parameter recognition

There exist many approaches for the determination of the network parameters in RBFNN (*C*_{j}, *σ*_{j}, *w*_{kj}). For a parallel comparison, the supervised learning algorithm, that is, the generalized delta rule based on the method of steepest descent, is used here, which can be expressed, for each parameter *p* ∈ {*C*_{j}, *σ*_{j}, *w*_{kj}}, as [35, 36, 37]:

Δ*p* = −*η* ∂*e*/∂*p*

where *e* denotes the error between the targets and the trained output results.

There also exist many other algorithms for learning an RBFNN, some of which are unsupervised, such as using kNN for the determination of *C*_{j} and using the pseudo-inversion method to evaluate *w*_{kj} [35, 36, 37].
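The pseudo-inversion method mentioned above solves the output weights in one step once the centers and spreads are fixed; a sketch using NumPy, assuming a purely linear output layer (the matrix values here are illustrative):

```python
import numpy as np

# H: S x M matrix of hidden-layer responses h_j for each of S training samples;
# T: S x K matrix of targets. The output weights solve H @ W = T in least squares.
H = np.array([[1.0, 0.1],
              [0.2, 1.0],
              [0.5, 0.5]])
T = np.array([[0.9], [0.3], [0.6]])

W = np.linalg.pinv(H) @ T            # W = H^+ T, minimum-norm least-squares fit
residual = np.linalg.norm(H @ W - T)
assert residual < np.linalg.norm(T)  # fit beats the trivial zero-weight solution
```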

### 4.3. Numerical results of parameter recognition using RBFNN

**Table 10** summarizes the accuracy of the results predicted by the RBFNN with different combinations of input variables; the MATLAB *nntool* function (*newrb*) is used for the numerical experiments [38]. Unfortunately, the results are not as good as those obtained using the BPANN with the LM training method. **Figure 11** reflects the same finding: the Young’s modulus cannot be predicted well using the RBFNN (even a large number of trials on the selection of the spread constant did not yield satisfactory results).

## 5. Conclusions

This chapter presents a trial application of two supervised-learning artificial neural networks (BPANN and RBFNN) to predict the engineering material constants, Young’s modulus and Poisson’s ratio, of backfilled materials (soil and CLSMs). The training and testing data are obtained from numerical experiments using ANSYS. The concluding remarks can be summarized as follows:

1. BPANN with the LM training method (e.g., *trainlm* in MATLAB) works well for the parameter recognition of engineering constants. For example, a well-designed BPANN with LM (*trainlm*) gives accurate predictions of both the Young’s modulus (error within 6% for the CLSMs) and the Poisson’s ratio (error within 1% for all) for the three kinds of backfilled materials.

2. In the BPANN framework, at least two displacements at different points should be measured, together with 0, 1, or 2 optional stress measurements.

3. In the BPANN structure, a small number of hidden-layer neurons is sufficient for parameter recognition.

4. RBFNN is not appropriate for the parameter recognition of engineering constants, as compared with BPANN.

In the future, other neural networks appropriate for regression, such as probabilistic neural networks (PNN) and support vector machines (SVM), may be used to study the parameter recognition of engineering constants in civil engineering problems.