Open access peer-reviewed chapter

Parameter Recognition of Engineering Constants of CLSMs in Civil Engineering Using Artificial Neural Networks

By Li-Jeng Huang

Submitted: April 15th, 2017 | Reviewed: October 6th, 2017 | Published: December 20th, 2017

DOI: 10.5772/intechopen.71538

Abstract

Controlled low-strength materials (CLSMs) have been widely applied to excavation and backfill in civil engineering. However, the engineering properties of CLSM embankments vary dramatically with the mixture contents. This study employs the ANSYS software and two artificial neural networks (ANNs), the back-propagation artificial neural network (BPANN) and the radial basis function neural network (RBFNN), to determine the engineering properties of CLSM by treating the task as an inverse problem in which the elastic modulus and the Poisson's ratio are identified from displacement and stress measurements. The PLANE42 element of ANSYS was first used to analyze a 2D problem of a retaining wall with embankment, with E = 0.02~3 GPa and ν = 0.1~0.4, to obtain 270 samples of two earth pressures and two top-surface settlements of the embankment. These data are randomly divided into training and testing sets for the ANNs. Practical cases of three backfill materials, soil and two CLSMs (CLSM-B80/30% and CLSM-B130/30%), are used to check the validity of the ANN predictions. Results show that the maximal errors of CLSM elastic parameters identified by well-trained ANNs are within 6%.

Keywords

  • ANNs
  • ANSYS
  • CLSM
  • FEM
  • parameter recognition

1. Introduction

In recent years, sustainable materials have been widely studied and developed, especially for construction, highway, and civil engineering. Controlled low-strength material (CLSM) is commonly used as a backfill material. It is an environmentally friendly, inexpensive material and typically consists of a small amount of cement, supplementary cementitious materials, fine aggregates, and a large amount of mixing water. Self-compacting/self-leveling behavior, significantly low strength, and almost no measurable settlement after hardening are remarkable characteristics of CLSM.

CLSM can be defined as a self-compacting cementitious material that is in a flowable state at the initial period of placement and has a specified compressive strength of 1200 psi (8.27 MPa) or less at 28 days, or is defined as excavatable if the compressive strength is 300 psi (2.07 MPa) or less at 28 days [1]. Recent studies have reported that a maximum CLSM strength of approximately 1.4 MPa is suitable for most backfilling applications when re-excavation is required [2, 3]. It is also recommended that, depending upon availability and project requirements, any recycled material is acceptable in the production of CLSM provided its feasibility is tested prior to use [4]. The special features of CLSM can be summarized as follows: durable, excavatable, erosion-resistant, self-leveling, rapid curing, flowable around confined spaces, use of waste materials, elimination of compaction labor and equipment, and so on.

There are several studies on the engineering properties of CLSMs by laboratory experiments [5, 6, 7, 8, 9, 10], and numerical analyses of applications of CLSM to civil engineering, such as excavation and backfill behind retaining walls [11, 12, 13], bridge abutments [14, 15, 16, 17], pipelines and trench ducts [18], pavement bases [19, 20, 21, 22, 23, 24], and so on. All these studies reflect the requirement of identifying the mechanical constants of CLSMs. Though it is known that the Young's modulus of CLSMs lies between that of soil and that of commonly used concrete, precise determination of the engineering material properties of CLSMs (even of soils and concretes) is a difficult problem. For example, the modulus of elasticity is evaluated experimentally as the secant modulus of the stress-strain curve, or estimated from empirical formulas relating Young's modulus to the 28-day compressive strength or unit weight of concrete.

Besides, artificial neural networks have been widely applied to various engineering fields [25], especially to civil and construction engineering [26, 27]. In addition, several studies have been conducted on the application of inverse problems to structural and geotechnical problems [28, 29, 30, 31, 32].

2. Problem definition and data preparation

2.1. Parameter recognition considered as inverse problem

In a classical mechanical analysis of civil engineering problems, we determine displacements and/or stresses of a structure with known engineering constants, such as the Young's modulus, E (in GPa), and the Poisson's ratio, ν, which have been identified from laboratory or in-situ experiments. This kind of problem can be termed a forward problem; that is, we evaluate the unknown dependent variables (physical quantities) from prescribed parameters and dimensions, either analytically if closed-form relationships exist, or numerically using computational schemes (such as finite element methods, FEMs) if the domain and/or boundary conditions are complicated.

On the other hand, the inverse problem in this case is to identify the engineering constants of a structure from evaluated or measured displacements and/or stresses. For a simple problem whose analysis results can be expressed as a closed-form mathematical relationship, the inverse problem can be solved directly by mathematical manipulation. However, for a practical large structure of complicated shape, no closed-form relationship can be obtained. Recently, many schemes have been developed for parameter recognition (classification and regression) in the field of machine learning and artificial intelligence, among which neural networks such as the back-propagation artificial neural network (BPANN) and the radial basis function neural network (RBFNN) have proven powerful and efficient if well designed, trained, and tested.

The two problems are compared and illustrated in Table 1, where problems with and without a closed-form relationship between parameters and physical quantities are shown, respectively. In Table 1, a well-known example in civil engineering, the displacement at the end of a cantilevered beam subjected to a concentrated load, is shown, in which Δ, P, L, E, I denote the end displacement, loading, length, Young's modulus, and moment of inertia, respectively.

| Problem | Process | With closed-form expression | Without closed-form expression |
|---|---|---|---|
| Forward | model parameters → model → prediction of data | y = f(p_1) (e.g., Δ = PL³/3EI) | parameters → numerical model (FEM) → displacements, stresses |
| Inverse | data → model → estimation of model parameters | p_1 = f⁻¹(y) (e.g., E = PL³/3IΔ) | displacements, stresses → well-trained ANNs → parameters |

Table 1.

Comparison of forward problem and inverse problem.
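The closed-form pair in Table 1 can be run directly. A minimal Python sketch of the forward and inverse cantilever relations, with illustrative values that are not from the chapter:

```python
# Forward vs. inverse problem for the cantilever example of Table 1.

def forward_deflection(P, L, E, I):
    """Forward problem: end deflection, delta = P*L^3 / (3*E*I)."""
    return P * L**3 / (3 * E * I)

def inverse_modulus(P, L, I, delta):
    """Inverse problem: Young's modulus from a measured deflection,
    E = P*L^3 / (3*I*delta)."""
    return P * L**3 / (3 * I * delta)

# Round-trip check with illustrative values (N, m, Pa, m^4).
P, L, E, I = 10e3, 2.0, 30e9, 1e-4
delta = forward_deflection(P, L, E, I)
E_recovered = inverse_modulus(P, L, I, delta)
```

For this simple problem the inverse map is exact; the chapter's point is that no such closed form exists for the backfilled-wall problem, which is why ANNs are used instead.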

This research study considers the identification of the engineering constants of a typical CLSM-backfilled region as an inverse problem. The processes employed are as follows: (1) preparation of training and testing data through numerical analysis (using ANSYS); (2) preparation of verification data (from experiments on CLSMs); (3) normalization of the training, testing, and verification data; (4) prediction using the BPANN and RBFNN neural networks, along with a comparison study of the parameters involved in the networks; and (5) selection of a useful network topology for parameter recognition.

In the following sections, the procedures are explained with numerical results.

2.2. Typical problem employed for the recognition of engineering constants

A typical problem of a backfilled region behind a retaining wall is considered ( Figure 1 ). The length and height of the backfilled region are L = 20 m and H = 5 m, respectively. For simplicity, the backfill materials are considered linearly elastic and within small deformation under external loading; therefore, a linear elastic analysis can be performed using only two engineering constants, the Young's modulus, E (in GPa), and the Poisson's ratio, ν. Furthermore, since the backfilled zone is considered infinitely long in width, a two-dimensional analysis can be used. In order to evaluate the engineering constants through displacements and stresses, the backfilled region is assumed to be subjected to a concentrated vertical load (surcharge) Q0 = 72.5 kN acting at a distance a = 0.5H behind the retaining wall.

Figure 1.

Schematic of a typical CLSM-backfilled region for the analysis of parameter recognition.

In order to provide training data and testing data in parameter recognition using neural networks, the following four quantities are defined:

  1. x1 = Uy1: vertical displacement (settlement) at the upper surface (x = L/4) (positive downward, m).

  2. x2 = Uy2: vertical displacement (settlement) at the upper surface (x = L/2) (positive downward, m).

  3. x3 = Sx1: horizontal normal stress (lateral earth pressure) on the wall (y = 3H/4) (tension positive, Pa).

  4. x4 = Sx2: horizontal normal stress (lateral earth pressure) on the wall (y = H/2) (tension positive, Pa).

These sampling points are illustrated in Figure 1 .

2.3. FEM analysis using ANSYS

The finite element equation for the elastic analysis of displacements and stresses of a plane deformation problem can be expressed in matrix form [33]:

[K]{X} = {F}    (1)

where [K], {X}, {F} denote the global stiffness matrix, the global degrees-of-freedom vector, and the global load vector, respectively, defined as follows:

[K] = ∑_{e=1}^{NE} [k]_e = ∑_{e=1}^{NE} t ∫_{A_e} [B]ᵀ[D][B] dx dy,  {F} = ∑_{e=1}^{NE} {f}_e = ∑_{e=1}^{NE} t ∫_{S_e} [N]ᵀ{q} dS    (2)

If a plane 4-node isoparametric element is used, the shape functions in Eq. (2) can be expressed as

N₁(ξ, η) = (1 − ξ)(1 − η)/4
N₂(ξ, η) = (1 + ξ)(1 − η)/4
N₃(ξ, η) = (1 + ξ)(1 + η)/4
N₄(ξ, η) = (1 − ξ)(1 + η)/4    (3)

where ξ, η are local coordinates. A typical PLANE42 element in ANSYS is shown in Figure 2 .
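The bilinear shape functions of Eq. (3) can be sketched and sanity-checked in a few lines; this is an illustrative helper, not code from the chapter:

```python
import numpy as np

# 4-node bilinear shape functions of Eq. (3) in local coordinates (xi, eta),
# the interpolation used by a PLANE42-type element.
def shape_functions(xi, eta):
    return np.array([
        (1 - xi) * (1 - eta) / 4.0,   # N1
        (1 + xi) * (1 - eta) / 4.0,   # N2
        (1 + xi) * (1 + eta) / 4.0,   # N3
        (1 - xi) * (1 + eta) / 4.0,   # N4
    ])

# Partition of unity: the four functions sum to 1 anywhere in the element,
# and each function equals 1 at its own node and 0 at the other three.
N = shape_functions(0.3, -0.7)
```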

Figure 2.

Definition of PLANE42 element of ANSYS [34].

2.4. Numerical experiments for different combinations of engineering constants

A finite element mesh with 40 × 10 = 400 elements, 41 × 11 = 451 nodes, and 451 × 2 = 902 degrees of freedom (displacements) is employed for the numerical analysis of the backfilled zone ( Figure 3 ). The left-hand and right-hand sides can move freely only in the vertical direction, while the bottom of the backfilled zone is fixed in both directions. In order to provide enough sampling data for the later training and testing of the supervised neural networks, the Young's modulus (E) ranges from 0.02 to 3 GPa (covering the typical values of soils and CLSMs), while the Poisson's ratio (ν) takes the values 0.1, 0.2, 0.25, 0.3, and 0.4. A total of 270 data samples with different combinations of E and ν are used.
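The sampling grid can be sketched as follows. The chapter reports 270 (E, ν) combinations; with five Poisson's ratios this implies 54 Young's modulus values. The exact E spacing is not stated in the chapter, so an even spacing over 0.02 to 3 GPa is assumed here purely for illustration:

```python
import itertools
import numpy as np

# Assumed evenly spaced E values (the chapter does not state the spacing);
# 54 E values x 5 nu values = 270 combinations, one FEM run (ANSYS
# PLANE42) per combination.
E_values = np.linspace(0.02, 3.0, 54)          # GPa, assumed spacing
nu_values = [0.1, 0.2, 0.25, 0.3, 0.4]

samples = list(itertools.product(E_values, nu_values))
```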

Figure 3.

FEM mesh of the typical example of CLSM-backfilled region using ANSYS of PLANE42 elements.

2.5. Data preparation

2.5.1. Data collection from FEM analysis using ANSYS

Table 2 shows part of the 270 numerical results of computed displacements and stresses for different Young's moduli and Poisson's ratios using the ANSYS PLANE42 model.

| Data | x1, Uy1 (m) | x2, Uy2 (m) | x3, Sx1 (Pa) | x4, Sx2 (Pa) | y1, E (GPa) | y2, ν |
|---|---|---|---|---|---|---|
| 1 | −0.00098 | 1.9447E-05 | −4452.6 | −4029.1 | 0.02 | 0.1 |
| 2 | −4.91E-04 | 9.72E-06 | −4452.6 | −4029.1 | 0.04 | 0.1 |
| 55 | −0.0008607 | 2.6951E-05 | −5434.05 | −5196 | 0.02 | 0.2 |
| 56 | −0.0004303 | 1.3476E-05 | −5434.05 | −5196 | 0.04 | 0.2 |
| 109 | −7.61E-04 | 4.36E-05 | −5979.35 | −5862.9 | 0.02 | 0.25 |
| 110 | −3.80E-04 | 2.18E-05 | −5979.35 | −5862.9 | 0.04 | 0.25 |
| 163 | −6.2826E-04 | 7.2850E-05 | −6567.6 | −6599 | 0.02 | 0.3 |
| 164 | −3.14E-04 | 3.64E-05 | −6567.6 | −6599 | 0.04 | 0.3 |
| 269 | −1.88E-06 | 1.50E-06 | −7896.55 | −8337.1 | 2.5 | 0.4 |
| 270 | −1.56E-06 | 1.25E-06 | −7896.55 | −8337.1 | 3 | 0.4 |

Table 2.

Computed displacements and stresses for different Young's moduli and Poisson's ratios using the ANSYS PLANE42 model.

2.5.2. Normalization of data

Since the output range of the sigmoid function is [0, 1], and in order to achieve better training performance of the neural network, the input data should be normalized to a uniform range.

The relationship between normalized and original data can be expressed as follows:

x̃ = a_x x + b_x,  ỹ = a_y y + b_y    (4)

where a_x, b_x are selected such that |x̃| ≤ 1, and a_y, b_y are chosen such that 0 ≤ ỹ ≤ 1. In this case, the relations are

x̃1 = x1/0.001,  x̃2 = x2/0.001,  x̃3 = x3/9000,  x̃4 = x4/9000,  ỹ1 = 3y1/10¹⁰ (y1 in Pa),  ỹ2 = y2    (5)
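The normalization of Eq. (5) can be sketched as a small helper. The E scaling assumes y1 is expressed in Pa, which matches the sample values in Tables 2 and 3 (e.g., E = 0.02 GPa maps to 0.006):

```python
# Normalization of Eq. (5): displacements scaled by 0.001 m, wall pressures
# by 9000 Pa, Young's modulus (in Pa) by 3/10^10, Poisson's ratio unchanged.

def normalize(x1, x2, x3, x4, E_pa, nu):
    return (x1 / 0.001, x2 / 0.001, x3 / 9000.0, x4 / 9000.0,
            3.0 * E_pa / 1e10, nu)

# Example with the first sampling row of Table 2 (E = 0.02 GPa, nu = 0.1).
row = normalize(-0.00098, 1.9447e-05, -4452.6, -4029.1, 0.02e9, 0.1)
```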

Part of the normalized FEM-computed displacements and stresses at the sampling points is shown in Table 3. Among these normalized data, 216 sets (216/270 ≒ 80%) are selected randomly for training and 54 sets (54/270 ≒ 20%) for testing in the later neural network analyses.

| Data | x1, Uy1 | x2, Uy2 | x3, Sx1 | x4, Sx2 | y1, E | y2, ν |
|---|---|---|---|---|---|---|
| 1 | 0.980000 | 0.019447 | −0.494733 | −0.447678 | 0.006000 | 0.100000 |
| 2 | 0.491200 | 0.009721 | −0.494733 | −0.447678 | 0.012000 | 0.100000 |
| 55 | 0.860680 | 0.026951 | −0.603783 | −0.577333 | 0.006000 | 0.200000 |
| 56 | 0.430340 | 0.013476 | −0.603783 | −0.577333 | 0.012000 | 0.200000 |
| 109 | 0.760910 | 0.043562 | −0.664372 | −0.651433 | 0.006000 | 0.250000 |
| 110 | 0.380450 | 0.021781 | −0.664372 | −0.651433 | 0.012000 | 0.250000 |
| 163 | 0.628260 | 0.072850 | −0.729733 | −0.733222 | 0.006000 | 0.300000 |
| 164 | 0.314130 | 0.036425 | −0.729733 | −0.733222 | 0.012000 | 0.300000 |
| 269 | 0.001876 | 0.001505 | −0.877394 | −0.926344 | 0.750000 | 0.400000 |
| 270 | 0.001564 | 0.001254 | −0.877394 | −0.926344 | 0.900000 | 0.400000 |

Table 3.

Typical normalized FEM-computed displacements and stresses at the sampling points.

2.5.3. Data for verification

In order to verify the trained and tested neural networks, three data sets of backfill materials are used: (1) a commonly used soil (E = 0.1 GPa, ν = 0.30); (2) CLSM-B80/30% (E = 0.27 GPa, ν = 0.25); and (3) CLSM-B130/30% (E = 0.87 GPa, ν = 0.25). CLSM-B80/30% and CLSM-B130/30% denote CLSMs with binder contents of 80 and 130 kg/m³, respectively, with 30% of the cement replaced by fly ash. The materials for the CLSM mixtures in this study consisted of fine aggregate, type I Portland cement, stainless steel reducing slag (SSRS), and water. The fine aggregate for the CLSM was formed by well blending river sand and residual soil in a given proportion (e.g., 6:4 by volume) for grading improvement. The soil was obtained from a construction site. The experimental work was conducted on two binder content levels (i.e., 80 and 130 kg/m³). The water-to-binder ratio was selected via a few trial mixes until an acceptable CLSM flowability of 150−300 mm was achieved. Detailed information can be found in [8].

In Table 4, the computed displacements and stresses for the verification soil and the two CLSMs using the ANSYS PLANE42 model are summarized, while the normalized verification data are tabulated in Table 5.

| Data | x1, Uy1 (m) | x2, Uy2 (m) | x3, Sx1 (Pa) | x4, Sx2 (Pa) | y1, E (GPa) | y2, ν |
|---|---|---|---|---|---|---|
| Soil | −1.26E-04 | 1.46E-05 | −6567.6 | −6599 | 0.1 | 0.3 |
| CLSM-B80/30% | −5.64E-05 | 3.23E-06 | −5979.35 | −5862.9 | 0.27 | 0.25 |
| CLSM-B130/30% | −1.75E-05 | 1.00E-06 | −5979.35 | −5862.9 | 0.87 | 0.25 |

Table 4.

Computed displacements and stresses for the verification soil and two CLSMs using the ANSYS PLANE42 model.

| Data | x1, Uy1 | x2, Uy2 | x3, Sx1 | x4, Sx2 | y1, E | y2, ν |
|---|---|---|---|---|---|---|
| Soil | −0.126 | 0.0146 | −0.72973 | −0.73322 | 0.003 | 0.3 |
| CLSM-B80/30% | −0.0564 | 0.00323 | −0.66437 | −0.65143 | 0.0081 | 0.25 |
| CLSM-B130/30% | −0.0175 | 0.001 | −0.66437 | −0.65143 | 0.0261 | 0.25 |

Table 5.

Normalized FEM-computed displacements and stresses for the verification soil and two CLSMs using the ANSYS PLANE42 model.

3. Parameter recognition using BPANN

3.1. Application of BPANN for parameter recognition

Figure 4 demonstrates a typical BPANN for the identification of the engineering constants of CLSM. The BPANN shown in Figure 4 contains two outputs, the Young's modulus and the Poisson's ratio (y1 = E, y2 = ν), L input neurons (xi, i = 1, 2, ⋯, L), and M hidden neurons (hj, j = 1, 2, ⋯, M). The predicted outputs can be expressed as [35, 36, 37]:

1. From the input layer (IL) to the hidden layer (HL):

h_j = f(n_j) = f(∑_{i=1}^{L} w_{ji} x_i − b_j)    (6a)

2. From the hidden layer (HL) to the output layer (OL):

y_k = f(n_k) = f(∑_{j=1}^{M} w_{kj} h_j − b_k)    (6b)

Figure 4.

Schematic of a BPANN topology for single parameter recognition.

where the activation function (the sigmoid function) and its derivative can be expressed as

f(n) = 1/(1 + e^(−n)),  f′(n) = f(n)(1 − f(n))    (7)
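The forward pass of Eqs. (6a), (6b), and (7) can be sketched in NumPy for a 4-6-2 topology; the weights and biases below are random placeholders, not trained values from the chapter:

```python
import numpy as np

# Sigmoid activation of Eq. (7).
def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

# Random placeholder parameters for a 4-6-2 BPANN (L=4, M=6, 2 outputs).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
W2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)

def predict(x):
    h = sigmoid(W1 @ x - b1)   # Eq. (6a): input -> hidden
    return sigmoid(W2 @ h - b2)  # Eq. (6b): hidden -> output

# Normalized inputs (Uy1, Uy2, Sx1, Sx2); outputs are (E, nu) in [0, 1].
y = predict(np.array([0.98, 0.019, -0.49, -0.45]))
```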

3.2. Learning algorithms of BPANN for parameter recognition

There exist many approaches for the determination of the network parameters of a BPANN (w_kj, b_k, w_ji, b_j). The most basic and popular generalized delta rule, based on the method of steepest descent along with two learning parameters (the learning rate η and the momentum factor μ), is employed, which can be expressed as

w_{kj}(p+1) = w_{kj}(p) − η ∂E/∂w_{kj}(p) + μ Δw_{kj}(p)
b_k(p+1) = b_k(p) − η ∂E/∂b_k(p) + μ Δb_k(p)
w_{ji}(p+1) = w_{ji}(p) − η ∂E/∂w_{ji}(p) + μ Δw_{ji}(p)
b_j(p+1) = b_j(p) − η ∂E/∂b_j(p) + μ Δb_j(p)    (8)

where

E(p) = (1/2) ∑_{k=1}^{N} e_k²(p) = ∑_{k=1}^{N} (1/2) (d_k(p) − y_k(p))²    (9)

denotes the error between the targets and the trained outputs, and the gradients are

∂E/∂w_{kj}(p) = −e_k(p) f′(n_k) h_j
∂E/∂b_k(p) = e_k(p) f′(n_k)
∂E/∂w_{ji}(p) = −∑_{k=1}^{N} e_k(p) f′(n_k) w_{kj} f′(n_j) x_i
∂E/∂b_j(p) = ∑_{k=1}^{N} e_k(p) f′(n_k) w_{kj} f′(n_j)    (10)
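One plain gradient-descent training cycle of Eqs. (8)-(10) (with μ = 0 for brevity) can be sketched as follows; the weights and the single training pair are illustrative placeholders, not the chapter's data:

```python
import numpy as np

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

# Random placeholder parameters for a 4-6-2 BPANN and a single
# illustrative (input, target) pair in normalized units.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
W2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)
eta = 0.5                                   # learning rate (mu = 0)

x = np.array([0.98, 0.019, -0.49, -0.45])   # normalized inputs
d = np.array([0.006, 0.1])                  # normalized targets (E, nu)

costs = []
for _ in range(200):
    h = sigmoid(W1 @ x - b1)                # Eq. (6a)
    y = sigmoid(W2 @ h - b2)                # Eq. (6b)
    e = d - y
    costs.append(0.5 * np.sum(e ** 2))      # cost, Eq. (9)
    delta_out = e * y * (1 - y)             # e_k f'(n_k)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 += eta * np.outer(delta_out, h)      # Eq. (8), using Eq. (10)
    b2 -= eta * delta_out                   # sign follows n_k = sum - b_k
    W1 += eta * np.outer(delta_hid, x)
    b1 -= eta * delta_hid
```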

The MATLAB toolbox nntool provides many training methods for BPANN [38], among which four training algorithms were tested in the current research:

1. Generalized steepest descent (GD):

The learning rule is Eq. (8) with η ≠ 0, μ = 0.

2. Generalized steepest descent with momentum (GDM):

The learning rule is Eq. (8) with η ≠ 0, μ ≠ 0.

3. Generalized steepest descent with adjustable learning rate (GDA):

In this algorithm, the basic learning rule is the same as Eq. (8) but with a conditional judgment added: when learning remains stable under the current learning rate, the rate is increased; otherwise it is decreased. The learning-rate increment and decrement factors are denoted ζinc and ζdec [35].

4. Levenberg-Marquardt (LM):

The learning rule can be written as

Δw_{ij}(t+1) = w_{ij}(t+1) − w_{ij}(t) = −(JᵀJ + λI)⁻¹ Jᵀ e    (11)

where λ denotes a constant that ensures the matrix is invertible; the learning rule becomes the Gauss-Newton algorithm when λ = 0, while it approaches GD with a small learning rate for large λ [35].
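The LM update of Eq. (11) can be sketched on a generic least-squares residual; J is the Jacobian of the residual vector e. In the tiny linear example below, the residual is e(w) = Aw − d, so J = A and a single step with λ = 0 (the Gauss-Newton limit noted above) lands on the exact least-squares solution. The data are illustrative placeholders:

```python
import numpy as np

# One Levenberg-Marquardt step of Eq. (11): dw = -(J^T J + lambda I)^-1 J^T e.
def lm_step(J, e, lam):
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ e)

# Toy linear residual e(w) = A w - d, so the Jacobian is simply A.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
d = np.array([1.0, 2.0, 3.0])

w = np.zeros(2)
w = w + lm_step(A, A @ w - d, lam=0.0)   # one step, lambda = 0
```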

3.3. Numerical results of parameter recognition using BPANN

The first supervised learning scheme employed for the recognition of engineering constants is the BPANN, which can be easily implemented for multiple inputs and multiple outputs. Since there are various design parameters in the construction of a BPANN, we study some influencing factors (such as different training algorithms, different combinations of input variables, and the number of hidden neurons), and then propose and analyze an appropriate BPANN topology. The MATLAB nntool is used for network simulation.

3.3.1. Comparison of different training algorithms

The convergence of RMSE with iteration of the BPANN using different training algorithms, for a 4-6-2 topology (NI = 4, NH = 6, NO = 2) and the 216 training sets, is shown in Figure 5. The corresponding MATLAB training functions are: (1) GD (traingd); (2) GDM (traingdm); (3) GDA (traingda); and (4) LM (trainlm). Some parameters employed in the four BPANN algorithms are also shown in Figure 5. It can be observed that, except for LM (trainlm), none of the other three algorithms can drive the root mean square error (RMSE) toward zero quickly. (The results were verified by a self-developed BPANN using the GD learning rule programmed in Python.) This reflects a special feature of the current problem: first-order steepest-descent methods cannot find the global minimum quickly, while the Levenberg-Marquardt (LM) algorithm, based on the Hessian matrix containing second-order derivatives of the cost function, works well.

Figure 5.

Convergence of RMSE with iteration of BPANN using different training algorithms (NI = 4, NH = 6, NO = 2).

Table 6 summarizes the effect of the different BPANN training algorithms on the prediction accuracy. It can be seen that LM (trainlm) gives accurate predictions of both the Young's modulus (error within 6% for the CLSMs) and the Poisson's ratio (error within 1% for all) for the three backfill materials.

| Case | Training method | RMSE | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|---|---|---|---|---|---|---|---|---|
| 1 | traingd | 0.02 | 0.5855 (485.5%) | 0.4109 (36.95%) | 0.6875 (154.6%) | 0.2769 (10.75%) | 0.7356 (−15.45%) | 0.2717 (8.69%) |
| 2 | traingdm | 0.02 | 0.5234 (423.4%) | 0.3307 (10.24%) | 0.6700 (148.2%) | 0.2628 (5.11%) | 0.6875 (−20.97%) | 0.2581 (3.25%) |
| 3 | traingda | 0.02 | 0.5148 (414.8%) | 0.2181 (−27.30%) | 0.5695 (110.92%) | 0.1999 (−20.06%) | 0.5766 (−33.72%) | 0.2049 (−18.06%) |
| 4 | trainlm | 0.00 | 0.0729 (−27.11%) | 0.2991 (−0.31%) | 0.2862 (5.99%) | 0.2512 (0.49%) | 0.8561 (−1.60%) | 0.2510 (0.40%) |
| Experimental value | | | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 6.

Effect of different training algorithms of BPANN on the predicted accuracy.

3.3.2. Effects of input variable combinations

In a practical engineering situation, we are interested in the question: which physical variables should be measured? To answer this, five cases of different combinations of input variables are investigated:

1. Case 1: (Uy1, Uy2, Sx1, Sx2) (NI = 4) (two displacements, two stresses).

2. Case 2: (Uy1, Uy2, Sx1) (NI = 3) (two displacements, one stress).

3. Case 3: (Uy1, Uy2) (NI = 2) (two displacements).

4. Case 4: (Uy1, Sx1) (NI = 2) (one displacement, one stress).

5. Case 5: (Sx1, Sx2) (NI = 2) (two stresses).

The convergence of RMSE with iteration of the BPANN using different combinations of input variables with the LM training algorithm (NH = 6, NO = 2) for the 216 training sets is shown in Figure 6. Again, LM works well, but the prediction accuracy shown in Table 7 indicates that only cases 1, 2, and 3 are appropriate for use. The result of case 5 reflects the fact that stress information alone cannot predict the material constants. Therefore, in the following analyses, we consider the input variables of case 1; this means two displacements and two stresses need to be measured.

Figure 6.

Convergence of RMSE with iteration of BPANN using different combinations of input variables (trainlm, NH = 6, NO = 2).

| Case | NI | Input variables | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|---|---|---|---|---|---|---|---|---|
| 1 | 4 | Uy1, Uy2, Sx1, Sx2 | 0.1363 (36.28%) | 0.3003 (0.11%) | 0.2368 (−12.30%) | 0.2470 (−0.86%) | 0.8636 (−0.73%) | 0.2480 (−0.79%) |
| 2 | 3 | Uy1, Uy2, Sx1 | 0.0895 (−10.50%) | 0.2997 (−0.09%) | 0.2795 (3.50%) | 0.2518 (0.71%) | 0.8618 (−0.95%) | 0.2516 (0.65%) |
| 3 | 2 | Uy1, Uy2 | 0.0747 (−25.31%) | 0.2997 (−0.10%) | 0.2555 (−5.37%) | 0.2501 (0.04%) | 0.8485 (−2.48%) | 0.2505 (0.21%) |
| 4 | 2 | Uy1, Sx1 | 7.5206 (7420%) | −0.1752 (−158.4%) | 7.5706 (2704%) | −0.1768 (−170.7%) | 7.6544 (779.8%) | −0.1724 (−168.9%) |
| 5 | 2 | Sx1, Sx2 | 1.3530 (1253%) | 1.6716 (457.2%) | 1.3079 (384.4%) | 1.6449 (557.9%) | 1.2985 (49.25%) | 1.6331 (553.3%) |
| Experimental values | | | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 7.

Effect of input variables on the prediction accuracy of engineering constants using BPANN.

3.3.3. Effect of the number of hidden neurons

The results of the BPANN using different numbers of hidden-layer neurons (NH) are shown in Figure 7 and Table 8. From the results, NH > 2 is recommended for accurate prediction of the engineering constants when the LM (trainlm) training algorithm is employed.

Figure 7.

Convergence of RMSE with iteration of BPANN using different numbers of hidden-layer neurons (trainlm, NI = 4, NO = 2).

| Case | NH | NI-NH-NO | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|---|---|---|---|---|---|---|---|---|
| 1 | 2 | 4-2-2 | 0.2088 (108.80%) | 0.3007 (0.22%) | 0.2327 (−13.80%) | 0.2498 (−0.08%) | 0.9413 (8.20%) | 0.2499 (−0.04%) |
| 2 | 4 | 4-4-2 | 0.1562 (56.19%) | 0.2878 (−4.06%) | 0.3097 (14.69%) | 0.2453 (−1.87%) | 0.8903 (2.33%) | 0.2498 (−0.09%) |
| 3 | 6 | 4-6-2 | 0.1653 (65.35%) | 0.2985 (−0.50%) | 0.3238 (19.94%) | 0.2491 (−0.36%) | 0.8859 (1.83%) | 0.2492 (−0.31%) |
| 4 | 8 | 4-8-2 | 0.0811 (−11.91%) | 0.3015 (0.51%) | 0.2392 (−11.43%) | 0.2507 (0.29%) | 0.8566 (−1.54%) | 0.2508 (0.34%) |
| 5 | 10 | 4-10-2 | 0.1174 (17.39%) | 0.2992 (−0.28%) | 0.2777 (2.86%) | 0.2531 (1.23%) | 0.8833 (1.53%) | 0.2542 (1.69%) |
| Experimental values | | | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 8.

Effect of the number of hidden-layer neurons on the prediction accuracy of engineering constants using BPANN.

3.3.4. Selection of a BPANN topology

After some parametric studies, a BPANN with a 4-6-2 topology using the LM training algorithm is proposed for the current parameter recognition problem. The convergence history in Figure 8 depicts a very fast decay of the RMSE. Figure 9 shows the QQ plot of tested versus predicted results during the testing stage after training. The R² values for the Young's modulus and the Poisson's ratio are 0.99454 and 0.99864, respectively, reflecting that the trained BPANN works well on the testing data. Table 9 shows the predicted results using the final BPANN design for the recognition of the engineering constants of CLSM.
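The R² agreement measure behind the QQ-plot comparison can be sketched as follows. For brevity, the example evaluates only the experimental E values against the trainlm predictions of Table 9, not the full set of 54 testing samples used in the chapter:

```python
import numpy as np

# Coefficient of determination R^2 between target and predicted values.
def r_squared(target, predicted):
    ss_res = np.sum((target - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((target - np.mean(target)) ** 2)
    return 1.0 - ss_res / ss_tot

# Experimental E values (GPa) vs. trainlm predictions from Table 9.
target = np.array([0.1, 0.27, 0.87])
predicted = np.array([0.0729, 0.2862, 0.8561])
r2 = r_squared(target, predicted)
```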

Figure 8.

Convergence of RMSE with iteration of the finally selected BPANN (trainlm, NI = 4, NH = 6, NO = 2).

Figure 9.

QQ plot of testing data of the finally selected BPANN (trainlm, NI = 4, NH = 6, NO = 2).

| Training method | Input variables | NI-NH-NO | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|---|---|---|---|---|---|---|---|---|
| trainlm | Uy1, Uy2, Sx1, Sx2 | 4-6-2 | 0.0729 (−27.11%) | 0.2991 (0.31%) | 0.2862 (5.99%) | 0.2512 (0.49%) | 0.8561 (−1.60%) | 0.2510 (0.40%) |
| Experimental values | | | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 9.

Predicted results using the final design of a BPANN for the recognition of engineering constants of CLSM.

4. Parameter recognition using RBFNN

4.1. Application of RBFNN for parameter recognition

Figure 10 demonstrates a typical RBFNN for the identification of the engineering constants of CLSM. The RBFNN shown in Figure 10 contains two outputs, the Young's modulus and the Poisson's ratio (y1 = E, y2 = ν), L input neurons (xi, i = 1, 2, ⋯, L), and M hidden neurons (hj, j = 1, 2, ⋯, M). Given S training samples, we can select M ≤ S. The predicted outputs can be expressed as [35, 36, 37]:

y_k = F_k(x_i) = ∑_{j=1}^{M} w_{kj} K(x_i, C_j) = ∑_{j=1}^{M} w_{kj} exp(−‖x_i − C_j‖² / (2σ_j²))    (12)

where the kernel function K(x_i, C_j) = exp(−‖x_i − C_j‖² / (2σ_j²)) is the Gaussian basis function.
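The RBFNN forward pass of Eq. (12) can be sketched in NumPy; the centers, width, and weights below are placeholders, not trained values from the chapter:

```python
import numpy as np

# Gaussian kernel of Eq. (12): K(x, C_j) = exp(-||x - C_j||^2 / (2 sigma^2)).
def gaussian_kernel(x, C, sigma):
    return np.exp(-np.sum((x - C) ** 2, axis=-1) / (2.0 * sigma ** 2))

# Placeholder network: M = 3 centers, a common width, one output's weights.
C = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])
sigma = 0.8
w = np.array([0.2, -0.5, 0.7])

def rbf_predict(x):
    return np.dot(w, gaussian_kernel(x, C, sigma))   # Eq. (12)

y = rbf_predict(np.array([0.5, 0.5]))
```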

Figure 10.

Schematic of a RBFNN topology for single parameter recognition.

4.2. Learning algorithms of RBFNN for parameter recognition

There exist many approaches for the determination of the network parameters of an RBFNN (C_j, σ_j, w_kj). For a parallel comparison, the supervised learning algorithm, the generalized delta rule based on the method of steepest descent, is used here, which can be expressed as [35, 36, 37]:

w_{kj}(p+1) = w_{kj}(p) − η ∂E/∂w_{kj}(p) + μ Δw_{kj}(p)
C_j(p+1) = C_j(p) − η ∂E/∂C_j(p) + μ ΔC_j(p)
σ_j(p+1) = σ_j(p) − η ∂E/∂σ_j(p) + μ Δσ_j(p)    (13)

where

E(p) = (1/2) ∑_{k=1}^{N} e_k²(p) = (1/2) ∑_{k=1}^{N} (d_k(p) − y_k(p))²    (14)

denotes the error between the targets and the trained outputs, and the gradients are

∂E/∂w_{kj}(p) = −e_k(p) K(x_i(p), C_j(p))
∂E/∂C_j(p) = −e_k(p) w_{kj}(p) (1/σ_j²(p)) K(x_i(p), C_j(p)) (x_i(p) − C_j(p))
∂E/∂σ_j(p) = −e_k(p) w_{kj}(p) (1/σ_j³(p)) K(x_i(p), C_j(p)) ‖x_i(p) − C_j(p)‖²    (15)

There also exist many other algorithms for training an RBFNN; among these, some are unsupervised, such as using kNN clustering to determine the centers C_j and the pseudo-inverse method to evaluate the weights w_kj [35, 36, 37].
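The pseudo-inverse route mentioned above can be sketched as follows: with the centers C_j and widths σ_j fixed, the output weights solve a linear least-squares problem, w = pinv(Φ) d, where Φ[s, j] = K(x_s, C_j). The toy data below are placeholders, not the chapter's samples:

```python
import numpy as np

# Toy training set: 20 two-dimensional samples and a smooth target.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
d = np.sin(X[:, 0]) + X[:, 1] ** 2

# Fix centers (every 4th sample) and a common width, then solve for the
# output weights by the pseudo-inverse of the kernel matrix Phi.
C = X[::4]
sigma = 0.7
Phi = np.exp(-((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
w = np.linalg.pinv(Phi) @ d     # least-squares output weights

pred = Phi @ w                  # fitted values on the training set
```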

4.3. Numerical results of parameter recognition using RBFNN

Table 10 summarizes the accuracy of the predicted results using RBFNN with different combinations of input variables. The MATLAB nntool (newrb) is used for the numerical experiments [38]. Unfortunately, the results are not as good as those obtained using BPANN with the LM training method. Figure 11 reflects the same finding: the Young's modulus cannot be predicted well using RBFNN (many trials on the selection of spread constants also failed to yield satisfying results).

| Case | NI | Input variables | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|---|---|---|---|---|---|---|---|---|
| 1 | 4 | Uy1, Uy2, Sx1, Sx2 | 0.1877 (87.73%) | 0.3009 (0.30%) | 0.5786 (114.29%) | 0.2498 (−0.08%) | 0.8085 (−7.07%) | 0.2498 (−0.08%) |
| 2 | 3 | Uy1, Uy2, Sx1 | 0.1649 (64.85%) | 0.300 (0.00%) | 0.5561 (105.95%) | 0.2486 (−0.55%) | 0.7832 (−9.98%) | 0.2487 (−0.53%) |
| 3 | 2 | Uy1, Uy2 | 0.2124 (112.44%) | 0.2151 (−28.31%) | 0.5809 (115.15%) | 0.2309 (−7.63%) | 0.7983 (−8.25%) | 0.2549 (1.98%) |
| 4 | 2 | Uy1, Sx1 | 0.1774 (77.44%) | 0.3001 (0.02%) | 0.5072 (87.85%) | 0.2450 (−2.01%) | 0.7017 (−19.34%) | 0.2425 (−3.00%) |
| 5 | 2 | Sx1, Sx2 | 0.6023 (502.27%) | 0.300 (0.00%) | 0.6365 (135.74%) | 0.2500 (0.01%) | 0.6365 (−26.84%) | 0.2500 (0.01%) |
| Experimental values | | | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 10.

Effect of input variables on the prediction accuracy of engineering constants using RBFNN.

Figure 11.

QQ plot of predicted and tested engineering constants of CLSM using RBFNN (NI = 4).

5. Conclusions

This chapter presents an application of two supervised-learning artificial neural networks (BPANN and RBFNN) to predict the engineering material constants, Young's modulus and Poisson's ratio, of backfill materials (soil and CLSMs). The training and testing data are obtained from numerical experiments using ANSYS. The concluding remarks can be summarized as follows:

1. BPANN with the LM training method (e.g., trainlm in MATLAB) works well for parameter recognition of engineering constants. A well-designed BPANN with LM (trainlm) gives accurate predictions of both the Young's modulus (error within 6% for CLSMs) and the Poisson's ratio (error within 1% for all) for the three backfill materials.

2. In the BPANN framework, at least two displacements at different points should be measured, optionally supplemented with one or two stress measurements.

3. In the BPANN structure, a small number of hidden-layer neurons (e.g., NH = 6) is enough for parameter recognition.

4. RBFNN is not as appropriate as BPANN for the parameter recognition of these engineering constants.

In the future, other neural networks appropriate for regression, such as probabilistic neural networks (PNNs) and support vector machines (SVMs), may be used to study the parameter recognition of engineering constants in civil engineering problems.

How to cite this chapter:

Li-Jeng Huang (December 20th, 2017). Parameter Recognition of Engineering Constants of CLSMs in Civil Engineering Using Artificial Neural Networks. In: Adel El-Shahat (Ed.), Advanced Applications for Artificial Neural Networks. IntechOpen. DOI: 10.5772/intechopen.71538.
