Open access peer-reviewed chapter

Determination of the Elastic Constants of a Metal-Laminated Composite Material Using Artificial Neural Networks

Written By

Marta Eraña-Díaz and Mario Acosta-Flores

Submitted: 10 September 2022 Reviewed: 13 October 2022 Published: 15 November 2022

DOI: 10.5772/intechopen.108601

From the Edited Volume

Artificial Neural Networks - Recent Advances, New Perspectives and Applications

Edited by Patrick Chi Leung Hui


Abstract

This chapter explores the use of an artificial neural network (ANN) to obtain the elastic constants of the components of a metal-laminated composite material (MLCM). The dataset for training and validating the ANN was obtained by applying an analytical model developed for the study of stresses in MLCM. The dataset combines MLCM configurations with data generated by varying the structural presentation of the inputs and outputs. The best configuration found for the generation of the ANN models yielded an average relative error of less than 4% with respect to the constant values evaluated and published in a previous article. As this research shows, a clear definition of the problem, an effective selection and preparation of the training data, and the correct application of the ANN are all important in the constitutive modeling of composite materials.

Keywords

  • elastic constants of laminated composite materials
  • artificial neural networks
  • composite materials
  • constitutive model of composite materials
  • training dataset

1. Introduction

Artificial neural networks (ANN) are an efficient artificial intelligence (AI) technique applied in several areas, such as bioinformatics [1], for classification, function approximation, and knowledge discovery, as well as for data visualization in medical diagnosis [2].

Various numerical models and experimental techniques have been applied together with ANN to investigate and obtain the mechanical properties of composite materials, such as Young's modulus (E), rigidity modulus (G), elastic limit, and maximum tensile stress [3, 4]. In [5], the elastic constants of face-centered cubic austenitic stainless steel are determined. In [6], the elastic parameters of an orthotropic material are obtained from experimental data using the finite element method (FEM) combined with ANN. The method described in [7] combines the FEM and deep neural networks to obtain constitutive relationships from indirect observations. Acosta et al. [8] use a linear constitutive analytical model proposed in [9] to analyze and obtain the elastic constants of laminated composite materials with metallic layers; the elastic constants of a laminate component are obtained through an axial-load experimental test.

Regarding the state of the art of ANN applications in the constitutive modeling of composite materials, [10] describes the obstacles encountered owing to the difficulty of obtaining large amounts of experimental constitutive training data.

This research presents a method to obtain the elastic constants of one of the components of an MLCM using ANN. The data needed for training were generated using the constitutive models of composite materials proposed in [8].


2. Artificial neural networks (ANN)

ANNs are models inspired by the functioning of the human brain. They are made up of a set of connected nodes (artificial neurons) that transmit signals to one another from an input stage to generate an output, improving their learning process by automatically adjusting their connections. There are several types of neural networks [11, 12], including recurrent neural networks (RNN) and feed-forward neural networks. The latter is an artificial neural network in which the connections between units do not form a cycle and information moves only forward.

This research used feed-forward ANNs made up of neurons grouped in layers: an input layer, one or more hidden layers, and an output layer. Each connection in the network has a weight, a numerical value that modifies the received input. The new modified values are output from the neurons; if the output of an individual neuron is above the specified threshold value, the neuron fires and sends data to the next layer of the network; otherwise, the data do not go through. This operation is illustrated in Figure 1.

Figure 1.

Example of a feed-forward ANN configuration with i-inputs (X1… Xi) in the input layer, bias1 and j neurons in hidden layer one (B1, H1, …, Hj), bias2 and k neurons in hidden layer two (B2, H1, …, Hk), and bias3 in the output layer (B3, Y1, Y2) for the two outputs.

Neuron Hj has a weight assigned to each of its inputs (Eq. (1)); the weight assigned from Hi to Hj is represented as wij. The neuron's activation, which represents its degree of inhibition, is denoted ai(t) and is calculated (Eq. (2)) with an activation function f(t), which can be linear (Eq. (3)), logistic (Eq. (4)), or hyperbolic tangent (Eq. (5)).

$$H_j=\sum_{i=1}^{n} w_{ij}\,a_i + B_j\qquad(1)$$
$$a_i(t)=f_i(H_i)\qquad(2)$$
$$f(t)=t\qquad(3)$$
$$f(t)=\frac{1}{1+e^{-t}}\qquad(4)$$
$$f(t)=\frac{e^{t}-e^{-t}}{e^{t}+e^{-t}}\qquad(5)$$

Thus, each ANN neuron, except those in the input layer and the bias neurons, processes all its inputs and provides its own activation as an output.
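As an illustration, the following minimal R sketch (R being the software used later in this chapter) reproduces the forward pass of Eqs. (1)-(5) for a single neuron; the input, weight, and bias values are hypothetical, chosen only for the example.

```r
# Forward pass of one neuron Hj, Eqs. (1)-(5); all values are illustrative.
logistic <- function(t) 1 / (1 + exp(-t))                        # Eq. (4)
tanh_act <- function(t) (exp(t) - exp(-t)) / (exp(t) + exp(-t))  # Eq. (5), equals base R tanh()

x <- c(0.5, -1.2, 0.8)   # inputs X1..X3 (hypothetical)
w <- c(0.4, 0.1, -0.3)   # weights wij assigned to neuron Hj (hypothetical)
B <- 0.2                 # bias

H_j <- sum(w * x) + B    # weighted sum, Eq. (1)
a_j <- tanh_act(H_j)     # activation, Eq. (2) with f = tanh
```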

Once the ANN has been designed, the training process begins, with the aim of setting the weights wij of each neuron correctly so that the entire network provides an acceptable output.

During this process, the neural network stores knowledge from a subset of data containing information on both the inputs and their corresponding outputs, known as the "desired outputs." The outputs obtained by the network are compared with the desired outputs, and the synaptic weights (wij) are updated so as to reduce the margin of error in the network results. This procedure is repeated until the network reaches a satisfactory performance. One of the methods used to train the ANN is backpropagation [13, 14], where the wij update is done by gradient descent, minimizing the mean squared error (MSE) (Eq. (6)).

$$\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left(y_{\mathrm{pred},i}-y_{\mathrm{act},i}\right)^{2}\qquad(6)$$
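In R, Eq. (6) is a one-liner; the prediction and target vectors below are placeholders for the example.

```r
# MSE of Eq. (6); y_pred and y_act are illustrative vectors.
mse <- function(y_pred, y_act) mean((y_pred - y_act)^2)
mse(c(1.02, 0.98, 1.05), c(1, 1, 1))   # 0.0011
```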

Overfitting, an ANN flaw [15, 16, 17], prevents the network from producing acceptable outputs for unobserved data, that is, data not used in training. Ying [18] proposes the following strategies to minimize the effects of overfitting: (1) stop training before reaching the optimal MSE; (2) exclude any noise from the training set; and (3) expand the training data.


3. Methodology for determining the elastic constants of a metal-laminated composite material using artificial neural networks

Obtaining efficient and consistent results when calculating the elastic constants of an MLCM using an ANN with a constitutive model of composite materials requires a clear and complete understanding of the analytical model presented in [8], an efficient preparation of the training data, and the correct application of the ANN. The methodology used in this work is as follows:

  1. Physical description of the linear analytical model of the axial load of composite laminated materials identifying the role of the implicit variables and parameters present in the model.

  2. Definition of the objectives to be solved and identification of the sufficient and necessary parameters to be used during the training phase. The composition of the composite (position and dimension of the components), the boundary conditions, the geometry, and the dimensions are defined at this stage. The values of the strains are obtained from those parameters by applying the analytical models.

  3. Application of the ANN, with a description of its operation and processes, as well as of the characteristics of the training and test data corresponding to the different selected laminated composite materials. The quantitative extent of the dataset is delimited depending on the model, the use of the ANN, and the final application of obtaining the elastic constants of one of the MLCM components. The ANNs were trained using the R library "neuralnet."

  4. The process of applying neural networks to determine elastic constants is carried out using the trained ANN and the data presented in [8].

  5. Finally, the analysis of results is carried out by means of the relative error (RE) percentages of each of the configurations obtained from the ANN with respect to the data of [8].


4. Linear analytical model of axial load of composite laminated materials

This study uses a linear analytical model of a composite laminated material made up of layers of metallic material. It is assumed that the laminate components are relatively thin, homogeneous, with linear elastic properties, and that the bond between them is perfect.

For the global uniaxial stress problem, a homogeneous state of strain is considered throughout the laminate as well as in the layers, and each point of the laminate presents a state of plane stress.

At the local level, each layer is under a biaxial state of stress, and the normal stresses have a constant average distribution through the thickness of the layers. The state of plane stress generated at the internal points of each layer (local analysis) will be referred to as the intralaminar state of stress, while the stress components of layer i in directions 1 and 2 will be called intralaminar stresses (σxi and σyi).

The linear analytical model allows the application of the superposition principle (SP), considering the general problem as a set of individual problems. Therefore, for each load condition, the global (average or total) stresses σGx and σGy are the sum of the individual (local) stress states in each layer; see Figure 2. The analytical model's global-local equation (Eq. (7)) is as follows:

Figure 2.

Representation of the stress state, global, and local models [8].

$$\sigma_{Gx}=n_{I}\sigma_{x}^{I}+n_{II}\sigma_{x}^{II}+n_{III}\sigma_{x}^{III}+\cdots+n_{i}\sigma_{x}^{i}$$
$$\sigma_{Gy}=n_{I}\sigma_{y}^{I}+n_{II}\sigma_{y}^{II}+n_{III}\sigma_{y}^{III}+\cdots+n_{i}\sigma_{y}^{i}\qquad(7)$$

σxi and σyi represent intralaminar stresses, and σGx and σGy are the global averages of the stresses in the x and y directions. ni represents the volumetric fraction of the material layers, $n_i = h_i/h$, where

$$1=n_{I}+n_{II}+n_{III}+\cdots+n_{i}\qquad(8)$$

The values of ni are the volumetric fractions of material with different properties; h and hi are the total thickness and the thickness of the layers or layer groups, respectively.
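As a minimal sketch, Eqs. (7) and (8) for a hypothetical two-layer laminate can be written in R as follows; the thicknesses and intralaminar stresses are invented for the example.

```r
# Rule-of-mixtures global stress, Eqs. (7) and (8); values are illustrative.
h_i <- c(2, 1)          # layer thicknesses (mm), hypothetical
n   <- h_i / sum(h_i)   # volumetric fractions n_i = h_i / h, sum(n) == 1 (Eq. (8))
sx  <- c(30, 80)        # intralaminar stresses per layer (MPa), hypothetical
sGx <- sum(n * sx)      # global stress in x, Eq. (7): ~46.7 MPa
```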

4.1 Definition of the experimental and illustrative example problem and identification of parameters to consider

The application of the ANN technique requires a dataset that helps the network learn the patterns of the analyzed problem. The variables and parameters considered as input and output data during the numerical application of the ANN must be those necessary and sufficient for the problem to be representative. If key parameters are left out, the study becomes an incomplete, poorly formulated problem, implying a deficient solution.

In a mechanical problem, the state of stress is a function of the position, geometry, boundary conditions, and material. For the problem discussed here, the stresses σGx and σGy applied at the boundaries were uniformly distributed. Since the strain state was homogeneous, the state of plane stress at a point was independent of its position within each component.

For the case analyzed in [8], which uses a laminated composite material consisting of metallic layers of two different materials (isotropic, homogeneous, and linear elastic), Eqs. (7) and (8) take the following global (Eq. (9)) and local (Eq. (10)) forms:

$$\sigma_{Gx}=n_1\sigma_{x}^{M1}+n_2\sigma_{x}^{M2}$$
$$\sigma_{Gy}=n_1\sigma_{y}^{M1}+n_2\sigma_{y}^{M2}\qquad(9)$$

$$\sigma_{x}^{M1}=Q_{11}^{M1}\varepsilon_{x1}+Q_{12}^{M1}\varepsilon_{y1}$$
$$\sigma_{y}^{M1}=Q_{21}^{M1}\varepsilon_{x1}+Q_{22}^{M1}\varepsilon_{y1}$$
$$\sigma_{x}^{M2}=Q_{11}^{M2}\varepsilon_{x2}+Q_{12}^{M2}\varepsilon_{y2}$$
$$\sigma_{y}^{M2}=Q_{21}^{M2}\varepsilon_{x2}+Q_{22}^{M2}\varepsilon_{y2}\qquad(10)$$

And considering the engineering constants:

$$Q_{11}^{M1}=Q_{22}^{M1}=\frac{E_{M1}}{1-\nu_{M1}^{2}},\qquad Q_{12}^{M1}=Q_{21}^{M1}=\frac{\nu_{M1}E_{M1}}{1-\nu_{M1}^{2}}$$
$$Q_{11}^{M2}=Q_{22}^{M2}=\frac{E_{M2}}{1-\nu_{M2}^{2}},\qquad Q_{12}^{M2}=Q_{21}^{M2}=\frac{\nu_{M2}E_{M2}}{1-\nu_{M2}^{2}}\qquad(11)$$

Eq. (10) can be represented as follows:

$$\sigma_{x}^{M1}=\frac{E_{M1}}{1-\nu_{M1}^{2}}\,\varepsilon_{x1}+\frac{\nu_{M1}E_{M1}}{1-\nu_{M1}^{2}}\,\varepsilon_{y1}$$
$$\sigma_{y}^{M1}=\frac{\nu_{M1}E_{M1}}{1-\nu_{M1}^{2}}\,\varepsilon_{x1}+\frac{E_{M1}}{1-\nu_{M1}^{2}}\,\varepsilon_{y1}$$
$$\sigma_{x}^{M2}=\frac{E_{M2}}{1-\nu_{M2}^{2}}\,\varepsilon_{x2}+\frac{\nu_{M2}E_{M2}}{1-\nu_{M2}^{2}}\,\varepsilon_{y2}$$
$$\sigma_{y}^{M2}=\frac{\nu_{M2}E_{M2}}{1-\nu_{M2}^{2}}\,\varepsilon_{x2}+\frac{E_{M2}}{1-\nu_{M2}^{2}}\,\varepsilon_{y2}\qquad(12)$$

Here, Q11M1, Q12M1, Q22M1, Q11M2, Q12M2, and Q22M2 represent the material’s stiffness constants. The engineering constants for each layer were Young’s moduli (EM1 and EM2) and Poisson’s ratios (νM1 and νM2). The deformation states were defined for each layer through their longitudinal strains: εx1, εy1 and εx2, εy2.
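A short R sketch of Eq. (11) follows; using E = 67 GPa and ν = 0.33 (the aluminum values appearing in the training data) reproduces the Q11 = 75.19 and Q12 = 24.81 GPa listed in the Appendix B tables.

```r
# Stiffness constants of an isotropic layer, Eq. (11).
stiffness <- function(E, v) c(Q11 = E / (1 - v^2),
                              Q12 = v * E / (1 - v^2))
stiffness(E = 67, v = 0.33)   # Q11 ~ 75.19, Q12 ~ 24.81 GPa, as in Table B1
```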


5. Neural network training process

5.1 ANN arguments

As mentioned in Sections 2 and 3, the software used to train the ANNs was R [19] with the "neuralnet" library [20]; the parameters used are shown in Table 1. The learning algorithm was resilient backpropagation [21, 22], which adapts the update value of each weight wij according to the sign behavior of the partial derivatives in each dimension of the weight space; this reduces the number of steps compared with the original gradient-descent backpropagation procedure.

Argument | Description
formula | Description of the model
data | Dataset of the variables specified in formula
hidden | Number of hidden layers and number of neurons
stepmax | Maximum steps for the training
threshold | Value of the error function used as stopping criterion
rep | Number of repetitions for the training
startweights | Starting values for the weights
learningrate | Lowest and highest limits for the learning rate
algorithm | Name of the algorithm type used to calculate the ANN
err.fct | Function used for the error
act.fct | Name of the activation function
linear.output | Boolean value for the output layer
constant.weights | Weights that are excluded from training

Table 1.

Arguments for the neuralnet function.

The procedure to obtain a good ANN begins with the generation of the dataset through a normalization process that scales the data values to improve learning. The process scales each input over its maximum value, as seen in Eq. (13).

$$x_i^{*}=\frac{x_i}{\max(x_1,\ldots,x_n)}\qquad(13)$$
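A possible R implementation of this scaling, applied column-wise to an illustrative data frame (the column names follow Table A1; the values are invented for the example):

```r
# Max-scaling of Eq. (13), applied to each input column.
scale_max <- function(x) x / max(x)
raw    <- data.frame(SGX = c(1, 2, 4, 22),
                     EX  = c(0.009, 0.012, 0.036, 0.129))  # illustrative values
scaled <- as.data.frame(lapply(raw, scale_max))
```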

The best and final dataset built for this study consisted of 253 records: 76% were used for training (data1, 192 records), 14% for testing (data2, 36 records), and the remaining 10% for model validation (data3, 25 records). A final dataset (data4) was built for the ANN application, as indicated in Section 7.3, with the results published in [8].

The variants in the arguments for ANN generation in this study were 2 or 3 hidden layers, with either the hyperbolic tangent (tanh) activation function (Eq. (5)) or the logistic function (Eq. (4)). The number of neurons in each hidden layer was chosen to obtain the lowest RE in both the training dataset and the test dataset. All this is depicted in Figure 3.

Figure 3.

Procedure diagram for the ANN training process.

It should be noted that some variations in the structural presentation of the inputs and outputs were made during the elaboration of the dataset; this was necessary because high MSE values were obtained during the ANN training.

5.2 Generation of training data from the analytical model for the ANN

As seen in Eqs. (10) and (12), the necessary and sufficient variables that define the plane stress models, based on the stiffness constants and the engineering constants, are:

  1. the material’s volumetric fraction of the components in the laminate (ni).

  2. the stress components of the global stress state (σGx and σGy).

  3. the local stress state in each component (σxi and σyi).

  4. the elastic constants of the known components (EM1, EM2, νM1, and νM2 or Q11M1, Q12M1, Q11M2, and Q12M2).

  5. the strains state equal for all points of the laminate: εx1, εy1, εx2, and εy2.


When the objective of the ANN is to determine directly the engineering elastic constants of one of the laminate components, the input parameters, from Eqs. (9) and (12), are n1, n2, σGx, σGy, εx1, εy1, εx2, εy2, EM1, and νM1, and the outputs are EM2 and νM2; this network is called EvANN. A QANN was constructed from Eqs. (9) and (10), with input parameters n1, n2, σGx, σGy, εx1, εy1, εx2, εy2, Q11M1, and Q12M1 and outputs Q11M2 and Q12M2.
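In R's formula notation (variable names per Table A1, as listed in Tables 2 and 4), the two output requests can be sketched as follows; this mapping is an assumption about how the chapter's variables translate to code.

```r
# Output requests for the two networks, as "neuralnet" model formulas.
f_evann <- EM2 + vM2     ~ SGX + SGY + CON1 + CON2 + EX + EY + EM1 + vM1
f_qann  <- Q11M2 + Q12M2 ~ SGX + SGY + CON1 + CON2 + EX + EY + Q11M1 + Q12M1
```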

5.3 Specification of quantitative ranges of input data

As described in the methodology, the input data must establish:

  1. The parameters that define the problem, identifying inputs and outputs.

  2. The quantitative ranges in the boundary conditions, global stresses (σGx and σGy).

  3. The quantitative ranges in the volumetric fractions of each of the components in the laminate (n1 and n2).

  4. The elastics constants (E and ν) of the MLCM component materials.

  5. The global strains (εxM1, εyM1 and εxM2, εyM2) for a simple tension problem. These dependent and necessary training data were obtained from the analytical model: Eqs. (10) and (12) were solved for the strains using MAPLE 2018 [23]; see Eqs. (14) and (15) and the sketch after the equations.

$$\varepsilon_x=\frac{\sigma_{Gx}\,\left(Q_{11}^{M1}n_1+Q_{11}^{M2}n_2\right)}{\left(Q_{11}^{M1}\right)^2 n_1^2+2Q_{11}^{M1}Q_{11}^{M2}n_1 n_2+\left(Q_{11}^{M2}\right)^2 n_2^2-\left(Q_{12}^{M1}\right)^2 n_1^2-2Q_{12}^{M1}Q_{12}^{M2}n_1 n_2-\left(Q_{12}^{M2}\right)^2 n_2^2}$$
$$\varepsilon_y=\frac{-\sigma_{Gx}\,\left(Q_{12}^{M1}n_1+Q_{12}^{M2}n_2\right)}{\left(Q_{11}^{M1}\right)^2 n_1^2+2Q_{11}^{M1}Q_{11}^{M2}n_1 n_2+\left(Q_{11}^{M2}\right)^2 n_2^2-\left(Q_{12}^{M1}\right)^2 n_1^2-2Q_{12}^{M1}Q_{12}^{M2}n_1 n_2-\left(Q_{12}^{M2}\right)^2 n_2^2}\qquad(14)$$

and

$$\varepsilon_x=\frac{\sigma_{Gx}\,\left(E_{M1}n_1\nu_{M2}^2+E_{M2}n_2\nu_{M1}^2-E_{M1}n_1-E_{M2}n_2\right)}{E_{M1}^2 n_1^2\nu_{M2}^2+2E_{M1}E_{M2}n_1 n_2\nu_{M1}\nu_{M2}+E_{M2}^2 n_2^2\nu_{M1}^2-E_{M1}^2 n_1^2-2E_{M1}E_{M2}n_1 n_2-E_{M2}^2 n_2^2}$$
$$\varepsilon_y=\frac{\sigma_{Gx}\,\left(\nu_{M1}E_{M1}n_1\nu_{M2}^2+\nu_{M2}E_{M2}n_2\nu_{M1}^2-\nu_{M1}E_{M1}n_1-\nu_{M2}E_{M2}n_2\right)}{E_{M1}^2 n_1^2\nu_{M2}^2+2E_{M1}E_{M2}n_1 n_2\nu_{M1}\nu_{M2}+E_{M2}^2 n_2^2\nu_{M1}^2-E_{M1}^2 n_1^2-2E_{M1}E_{M2}n_1 n_2-E_{M2}^2 n_2^2}\qquad(15)$$

The maximum and minimum quantitative values of the boundary conditions in the training data were established using the values found in [8] as a reference: between 1 and 22 MPa for the global input stress. The component concentrations in the MLCM were bounded between 0 and 1 for 2, 3, 4, 5, and 6 layers of two metallic components assumed to have the same thickness. Tables B1-B4 in Appendix B show various scenarios evaluated during the study.

The considered scenarios were:

  1. Different MLCM configurations with different concentrations and different components.

  2. Different global stress values (σGx and σGy).

  3. Two ANN targets for each configuration, one to obtain the elastic constants of one of two MLCM components M1, and another to obtain the constants of M2.

The training data obtained from the model were adjusted so that there was not much difference in the order of magnitude of the values: the stresses were given in MPa, the Q's and E's in GPa, and the strains in με.

An EvANN model to determine the engineering constants and another to determine the stiffness coefficients, the QANN model, were presented to contextualize the effect that occurs when an ANN model is trained on simple versus general knowledge; the implications can be seen in Eqs. (14) and (15).

The nomenclature used in the analytical model and the ANN network formulas is shown in Appendix A Table A1.

5.4 EvANN

As mentioned above, this ANN was trained on the engineering constants using the R software. The settings for the "neuralnet" function are given in Table 2, where the output variables are the constants of the second material.

Argument | Value
formula | EM2 + vM2 ~ SGX + SGY + CON1 + CON2 + EX + EY + EM1 + vM1
data | dataset (192 training, 36 test, 25 validation)
hidden | c(a,b) or c(a,b,c), where a, b, c are the numbers of neurons
stepmax | 1.00E+07
threshold | 0.01
rep | 1
startweights | NULL
learningrate | 0.0001
algorithm | rprop (resilient backpropagation)
err.fct | sse (sum of squared errors)
act.fct | tanh (hyperbolic tangent) or logistic (logistic function)
linear.output | TRUE
constant.weights | TRUE

Table 2.

“Neuralnet” Argument functions for training EvANN.
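A sketch of the corresponding training call is shown below; `train` stands for the scaled 192-record training set (an assumed variable name), and "rprop+" is neuralnet's name for resilient backpropagation.

```r
# Training the EvANN with the Table 2 arguments (configuration 3 of Table 3).
library(neuralnet)

set.seed(1)   # reproducibility; a seed is not specified in the chapter
evann <- neuralnet(
  EM2 + vM2 ~ SGX + SGY + CON1 + CON2 + EX + EY + EM1 + vM1,
  data          = train,       # assumed: scaled 192-record training set
  hidden        = c(12, 6),    # two hidden layers, 12 and 6 neurons
  stepmax       = 1e7,
  threshold     = 0.01,
  rep           = 1,
  algorithm    = "rprop+",     # resilient backpropagation
  err.fct       = "sse",
  act.fct       = "tanh",
  linear.output = TRUE
)
# Note: the learningrate argument only applies to plain "backprop";
# rprop adapts its own update values, so it is omitted here.
```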

Starting from the first dataset, training was carried out, obtaining an MSE of 1.186e+09 for unnormalized data and 4.391 for normalized data. Because of this, the dataset was extended to consider a larger number of MLCM configurations with variations in the concentrations n and in the global stress ranges; in addition, for the same mechanical problem, the request was inverted so that the elastic constants of component M1 were the target in one case and those of M2 in another.

Table 3 shows the configured EvANNs and specifies the activation function, the number of hidden layers, the number of neurons in each layer, the MSE, the threshold reached, and the number of steps performed. The dataset used can be found in Appendix B.

ID | Activation function | Hidden layers | Neurons per layer | MSE ANN | Reached threshold | Steps
1 | tanh | 2 | 12,4 | 0.04742882 | 0.00801973 | 5612
2 | logistic | 2 | 12,4 | 0.01112887 | 0.009550541 | 15,577
3 | tanh | 2 | 12,6 | 0.02183281 | 0.009768165 | 30,691
4 | logistic | 2 | 12,6 | 0.03428463 | 0.009515263 | 7269
5 | tanh | 2 | 12,8 | 0.05968034 | 0.009165002 | 29,721
6 | logistic | 2 | 12,8 | 0.02881464 | 0.008727336 | 4276
7 | tanh | 2 | 14,8 | 0.04617885 | 0.008214281 | 37,203
8 | logistic | 2 | 14,8 | 0.03225161 | 0.009175114 | 4013
9 | tanh | 3 | 14,6,4 | 0.01818992 | 0.009329059 | 3394
10 | tanh | 3 | 12,8,4 | 0.01749427 | 0.009955628 | 5562
11 | tanh | 2 | 18,8 | 0.04391595 | 0.00908179 | 12,742

Table 3.

EvANN Configurations with different activation function, hidden layer, and number of neurons.

The third EvANN configuration and its graph are shown in Figure 4, along with the relative error percentages in Figures 5 and 6; the RE was calculated with Eq. (16).

Figure 4.

EvANN topology: input layer, with (EM2) and (vM2) in the output layer.

Figure 5.

% RE training EvANN Configuration 3.

Figure 6.

%RE, test dataset ANN Configuration 3.

$$\mathrm{RE}=\left|\frac{\text{Real Value}-\text{ANN Value}}{\text{Real Value}}\right|\times 100\qquad(16)$$
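A sketch of Eq. (16) applied to a fitted network; `evann` is the network from the training sketch above, and `test` an assumed data frame holding the 36 test records with the same (scaled) columns.

```r
# % relative error of the network outputs on the test set, Eq. (16).
inputs <- c("SGX", "SGY", "CON1", "CON2", "EX", "EY", "EM1", "vM1")
pred   <- neuralnet::compute(evann, test[, inputs])$net.result
re     <- abs(test[, c("EM2", "vM2")] - pred) /
          abs(test[, c("EM2", "vM2")]) * 100
colMeans(re)   # average % RE per output
```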

A second ANN was generated for the same problem now defined in terms of Q11 and Q12.

5.5 QANN

A second model based on the stiffness coefficients was then developed, with the outputs being the coefficients of the second material, Q11M2 and Q12M2 (Figure 7 and Table 4). The settings for the "neuralnet" function are given in Table 4.

Figure 7.

QANN topology (configuration 2): input layer, with Q11 material 2 (Q11M2) and Q12 material 2 (Q12M2) in the output layer.

Argument | Value
formula | Q11M2 + Q12M2 ~ SGX + SGY + CON1 + CON2 + EX + EY + Q11M1 + Q12M1
data | dataset (192 training, 36 test, 25 validation)
hidden | c(a,b) or c(a,b,c), where a, b, c are the numbers of neurons
stepmax | 1.00E+07
threshold | 0.01
rep | 1
startweights | NULL
learningrate | 0.0001
algorithm | rprop (resilient backpropagation)
err.fct | sse (sum of squared errors)
act.fct | tanh (hyperbolic tangent) or logistic (logistic function)
linear.output | TRUE
constant.weights | TRUE

Table 4.

“Neuralnet” argument functions for training QANN.

The generated configurations with their respective achieved values are shown in Table 5.

ID | Activation function | Hidden layers | Neurons | MSE ANN | Reached threshold | Steps
1 | tanh | 2 | 12,6 | 3.09E-02 | 9.93E-03 | 4.49E+04
2 | tanh | 2 | 12,8 | 2.03E-01 | 8.57E-03 | 5.50E+04
3 | logistic | 2 | 12,8 | 5.02E-02 | 9.41E-03 | 2.15E+04
4 | tanh | 2 | 16,6 | 2.01E-01 | 7.51E-03 | 2.08E+04
5 | logistic | 3 | 16,6 | 5.02E-02 | 9.90E-03 | 2.67E+04
6 | tanh | 3 | 16,6,4 | 3.43E-03 | 7.57E-03 | 1.17E+04
7 | logistic | 3 | 16,6,4 | 5.68E-03 | 9.70E-03 | 5.52E+03

Table 5.

QANN Configurations with different activation function, hidden layer, and number of neurons.

The second QANN configuration and its graph are shown in Figure 7, with the relative error percentages in Figures 8 and 9; the RE was computed with Eq. (16).

Figure 8.

% RE training QANN Configuration 2.

Figure 9.

% RE, test dataset QANN Configuration 2.


6. Neural network validation process

Once an ANN has been trained and tested, it is evaluated by applying it to an equivalent problem with structural values different from those used in training. The results for the different scenarios are given in Table 6 and Figures 10 and 11.

ANN | Activation function | Hidden layers | Neurons | MSE ANN | Reached threshold | Steps
EvANN | tanh | 2 | 12,6 | 2.18E-02 | 9.83E-03 | 3.07E+04
QANN | tanh | 2 | 12,8 | 2.03E-01 | 8.57E-03 | 5.50E+04

Table 6.

Configurations selected for EvANN and QANN.

Figure 10.

% RE validation dataset EvANN Model.

Figure 11.

% RE QANN validation dataset.

Configurations 3 and 2 were selected for the evaluation of EvANN and QANN, respectively, based on their performance on the test dataset.

Table 6 and Figures 10 and 11 depict the configuration and attributes selected for each ANN.

The RE for each ANN is shown in the plots in Figures 10 and 11.

As can be seen in Figure 10, the maximum RE obtained for the EvANN was up to 58.5% for the EM2 output and up to 7.5% for vM2.

For the QANN, Figure 11 shows that the maximum RE obtained was up to 4.48% for the Q11M2 output and up to 3.71% for Q12M2.


7. Results of application

This section presents the contrasting RE results of EvANN and QANN for data3, and their computation against the results published in [8] (data4).

7.1 Validation process (Data3)

The training data were expanded, and a configuration not too close to the optimal MSE value was selected to avoid overfitting. For the EvANN, this was configuration 3, which has a higher MSE than configurations 2, 9, and 10, as shown in Table 3. For the QANN, configuration 2 was selected, which has two hidden layers with 12 and 8 neurons and the tanh (hyperbolic tangent) activation function, as shown in Table 5. The selection criterion was to obtain the lowest average RE in the two output variables.

The RE of EvANN and QANN outputs is shown in Table 7, where the QANN outputs were converted to engineering constants (Eqs. (17) and (18)).

ANN | E, model constant (GPa) | E, evaluated | % RE | ν, model constant | ν, evaluated | % RE
EvANN | 67 | 65.45 | 2.31 | 0.33 | 0.3293 | 0.22
QANN | 67 | 67.08 | 0.12 | 0.33 | 0.3298 | 0.05
EvANN | 100 | 61.76 | 38.24 | 0.31 | 0.3266 | 5.37
QANN | 100 | 101.02 | 1.02 | 0.31 | 0.3102 | 0.05
EvANN | 200 | 178.04 | 10.98 | 0.29 | 0.2952 | 1.80
QANN | 200 | 202.25 | 1.12 | 0.29 | 0.2902 | 0.06

Table 7.

Comparison of results for EvANN and QANN for three materials in dataset 3 for the engineering constants.

$$E_{M2}=\frac{\left(Q_{11}^{M2}\right)^{2}-\left(Q_{12}^{M2}\right)^{2}}{Q_{11}^{M2}}\qquad(17)$$
$$\nu_{M2}=\frac{Q_{12}^{M2}}{Q_{11}^{M2}}\qquad(18)$$

7.2 Real case data (Data4)

The different MLCMs used in this stage (Figure 12) are (1) Aluminum-Brass-Aluminum (A-B-A), (2) Brass-Aluminum-Brass (B-A-B), and (3) Copper-Aluminum (C-A). The properties and volumetric fractions of the materials in the MLCMs are given in Tables 8 and 9.

Figure 12.

MLCM test specimens used in the final ANN application.

MLCM | Aluminum volumetric fraction, n | Brass volumetric fraction, n | Copper volumetric fraction, n
Aluminum-brass-aluminum | 0.671 | 0.329 | —
Brass-aluminum-brass | 0.338 | 0.662 | —
Aluminum-copper | 0.516 | — | 0.484

Table 8.

Volumetric fractions of materials in the MLCMs.

Material | Young's Modulus (E), GPa | Poisson's Ratio (ν)
Aluminum | 67 | 0.345
Brass | 101 | 0.313
Copper | 109 | 0.33

Table 9.

Elastic constants of the MLCM components.

7.3 Real case data compute in QANN and EvANN

Continuing with the real case, the contrasted RE of the EvANN and QANN results is presented in Figure 13. The QANN model shows better performance.

Figure 13.

Comparison of EvANN and QANN results based on the engineering constants, EM2 and vM2.

Table 10 shows the averages of the RE obtained for each output when checking the application of the QANN model against the results in [8].

ID | Activation function | Hidden layers | Neurons | Q11M2 RE mean | Q11M2 SD | Q11M2 min | Q11M2 max | Q12M2 RE mean | Q12M2 SD | Q12M2 min | Q12M2 max
1 | tanh | 2 | 12,6 | 8.72 | 9.86 | 0.72 | 50.00 | 37.20 | 80.76 | 0.84 | 301.50
2 | tanh | 2 | 12,8 | 5.46 | 4.37 | 0.34 | 21.87 | 4.63 | 3.89 | 0.18 | 18.82
3 | logistic | 2 | 12,8 | 12.17 | 8.15 | 0.91 | 49.29 | 11.45 | 6.07 | 0.88 | 33.17
4 | tanh | 2 | 16,6 | 5.33 | 6.68 | 0.37 | 36.97 | 5.30 | 5.63 | 0.72 | 31.28
5 | logistic | 3 | 16,6 | 7.28 | 10.18 | 0.22 | 55.73 | 6.35 | 7.01 | 0.13 | 36.56
6 | tanh | 3 | 16,6,4 | 5.64 | 12.40 | 0.06 | 71.24 | 5.40 | 9.57 | 0.10 | 51.09
7 | logistic | 3 | 16,6,4 | 6.63 | 14.73 | 0.23 | 84.72 | 6.31 | 10.71 | 0.37 | 57.61

Table 10.

Means and standard deviation of RE for different configurations of the QANN application with the article results.

Configuration 2, which has two hidden layers with 12 and 8 neurons and the tanh (hyperbolic tangent) activation function, was selected. The selection criterion was to obtain the smallest average percentage error in the two output variables, Q11M2 and Q12M2; as in Section 7.1, the QANN obtained smaller RE for data3.

Table 11 shows the final results and contrasts the average RE percentages of all the considered outputs obtained by the ANN for the stiffness constants (QANN) and for the engineering constants (EvANN).

ANN applied | Average RE %, constants EM2 | Average RE %, constants vM2
EvANN | 12.54 | 3.15
QANN | 6.18 | 3.58

Table 11.

Average percentages RE for the stiffness and engineering constants.

The values of the engineering constants as a function of Q were determined from the identity equations (Eqs. (17) and (18)), and the average obtained for each of the specimens is shown in Table 12. The table also shows that the results for each configuration line should be very close to one another, since they were acquired from a linear multiplication, yet they present variations.

Configuration line (Brass-Aluminum) | EM2 Brass (ANN), GPa | Expt Brass EM2, GPa | RE % | EM2 Aluminum (ANN), GPa | Expt Aluminum EM2, GPa | RE % | vM2 Brass (ANN) | Expt Brass vM2 | RE % | vM2 Aluminum (ANN) | Expt Aluminum vM2 | RE %
1 | 89.77 | 101 | 12.5 | 64.74 | 67 | 3.5 | 0.315 | 0.313 | 0.6 | 0.332 | 0.345 | 4.0
2 | 83.72 | 101 | 20.6 | 64.64 | 67 | 3.6 | 0.318 | 0.313 | 1.6 | 0.332 | 0.345 | 3.9
3 | 77.47 | 101 | 30.4 | 65.37 | 67 | 2.5 | 0.322 | 0.313 | 2.8 | 0.331 | 0.345 | 4.1
4 | 96.21 | 101 | 5.0 | 65.70 | 67 | 2.0 | 0.312 | 0.313 | 0.4 | 0.331 | 0.345 | 4.3
Average | | | 17.1 | | | 2.9 | | | 1.4 | | | 4.1

Configuration line (Aluminum-Brass-Aluminum) | EM2 Brass (ANN), GPa | Expt Brass EM2, GPa | RE % | EM2 Aluminum (ANN), GPa | Expt Aluminum EM2, GPa | RE % | vM2 Brass (ANN) | Expt Brass vM2 | RE % | vM2 Aluminum (ANN) | Expt Aluminum vM2 | RE %
1 | 106.61 | 101 | 5.3 | 65.64 | 67 | 2.1 | 0.311 | 0.313 | 0.6 | 0.331 | 0.345 | 4.2
2 | 106.89 | 101 | 5.5 | 65.86 | 67 | 1.7 | 0.310 | 0.313 | 0.9 | 0.331 | 0.345 | 4.4
3 | 106.44 | 101 | 5.1 | 65.85 | 67 | 1.8 | 0.310 | 0.313 | 1.1 | 0.330 | 0.345 | 4.5
4 | 105.39 | 101 | 4.2 | 65.90 | 67 | 1.7 | 0.312 | 0.313 | 0.3 | 0.332 | 0.345 | 4.0
Average | | | 5.0 | | | 1.8 | | | 0.7 | | | 4.3

Configuration line (Aluminum-Copper) | EM2 Aluminum (ANN), GPa | Expt Aluminum EM2, GPa | RE % | EM2 Copper (ANN), GPa | Expt Copper EM2, GPa | RE % | vM2 Aluminum (ANN) | Expt Aluminum vM2 | RE % | vM2 Copper (ANN) | Expt Copper vM2 | RE %
1 | 67.25 | 67 | 0.4 | 118.50 | 109 | 8.0 | 0.329 | 0.313 | 5.0 | 0.304 | 0.330 | 8.4
2 | 64.25 | 67 | 4.3 | 115.88 | 109 | 5.9 | 0.332 | 0.313 | 5.8 | 0.305 | 0.330 | 8.1
3 | 62.47 | 67 | 7.2 | 112.54 | 109 | 3.1 | 0.334 | 0.313 | 6.3 | 0.306 | 0.330 | 7.8
4 | 71.67 | 67 | 6.5 | 120.21 | 109 | 9.3 | 0.326 | 0.313 | 3.9 | 0.304 | 0.330 | 8.6
Average | | | 4.6 | | | 6.6 | | | 5.3 | | | 8.2

Table 12.

Values of the constants of the analytical model [8] and each output for each configuration line.

Finally, the QANN configuration that showed the best results during validation was applied to the MLCMs analyzed in [8]. The final average constants are presented in Table 13.

Material | Expt E (GPa) | Article E | RE % | Evaluated E | RE % | Expt ν | Article ν | RE % | Evaluated ν | RE %
Aluminum-Brass-Aluminum
Aluminum | 67 | 72 | 7.5 | 65.11 | 2.8 | 0.345 | 0.34 | 1.4 | 0.34 | 1.4
Brass | 101 | 97.6 | 3.4 | 86.81 | 14.1 | 0.313 | 0.318 | 1.6 | 0.3148 | 0.6
Brass-Aluminum-Brass
Aluminum | 67 | 72 | 7.5 | 65.81 | 1.8 | 0.345 | 0.34 | 1.4 | 0.33 | 4.3
Brass | 101 | 97.6 | 3.4 | 106.33 | 5.3 | 0.313 | 0.318 | 1.6 | 0.311 | 0.6
Aluminum-Copper
Aluminum | 67 | 64.4 | 3.9 | 66.41 | 0.9 | 0.345 | 0.33 | 4.3 | 0.34 | 1.4
Copper | 109 | 106.2 | 2.6 | 116.8 | 7.2 | 0.33 | 0.32 | 3.0 | 0.3148 | 4.6

Table 13.

Average final elastic constants obtained with QANN.


8. Discussion

The different conditions described in Section 5 were evaluated in order to obtain a trained and efficient ANN; the most important were the following:

  1. The values of all data used in training were selected and restricted so that the values corresponding to the MLCMs of [8] (Aluminum-brass-aluminum (A-B-A), Brass-aluminum-brass (B-A-B), and Copper-aluminum (C-A)) were located around the mean. One case that showed the need for this step arose during validation: when the input stress values (σGx and σGy) were out of range compared with those used in training, the results differed by 20 to 80% more than when the values were close to the mean.

  2. The materials selected for training were taken randomly from the literature, since only values approximating those needed for the final ANN application were required (see Appendix B, Table B1).

  3. The homogeneous unit strains were obtained from the model (Eqs. (10) and (12)), the MLCM configuration, the elastic constants of its components, and the different values of global stresses applied at the boundaries. In this way, the training and validation tables (Appendix B, Table B2) were generated.

  4. The data for the same MLCM and the same boundary conditions were duplicated for training purposes by inverting the requested outputs: in one case, the target elastic properties were those of one MLCM component and, in the other, those of the second component. This is depicted in Appendix B, Table B1. It is important to mention that this step was relevant because it improved the ANN results by 4.36% in the MSE, with the average of the EvANN configurations being 0.03283643 (Table 3).

  5. Regarding the real case, only one datum was available for each MLCM; but since it was a linear mechanical problem, the boundary-condition values were scaled, and four input configuration lines were obtained from the same piece of data. The results presented in Appendix B, Table B4 show the importance of this step: although this was done only four times, for the same MLCM case (A-B-A) the ANN results differed by up to 30%, when they should have been the same. However, uncertainty risks are reduced by averaging the values obtained for each output, as shown in Table 13.

  6. A further point, showing the value of simplicity in the training setup, was found when evaluating two cases: one requesting the engineering constants as outputs and the other requesting the stiffness constants. In the first case, average RE of 12.54% for E and 3.15% for υ were obtained; in the second, the RE for E and υ were 6.18% and 3.57%, respectively. From the above, and observing Eqs. (14) and (15), it follows that the analytical model in terms of the Q's is simpler than the model in terms of the engineering constants.


9. Conclusions

Using ANN, this chapter establishes a method to determine the engineering constants of the layers of a metal-laminated composite material and, based on the results obtained, shows the importance of adequately defining the problem to be solved, analyzing concepts, establishing scope and constraints, and selecting sufficient and necessary training parameters. By evaluating several scenarios for generating the ANN dataset, the importance of the following was identified: (a) the quantitative ranges of the parameters in the input data, recommending that the values of the application data lie near the mean of the training data; (b) variations in the structure of the dataset (different outputs for the same MLCM problem); and (c) simplicity in the dataset: the ANN showed better results when stiffness constants were requested as output data, and the analytical solution is simpler in terms of stiffness constants than in terms of engineering constants.

Several configurations with different activation functions, numbers of layers, and numbers of neurons per layer were tested in the study; better results were found for this problem with a configuration of medium MSE than with the lowest trained MSE. This may be because the medium-MSE configuration avoids overfitting.

Based on this research, it is recommended to use the analytical model applied here to generate an ANN dataset for the study of the constitutive modeling of composite materials in plane stress problems.


Appendix A

Parameter | Name in model | Name in ANN formula
Global stress in x (MPa) | σGx | SGX
Global stress in y (MPa) | σGy | SGY
Material concentration coefficient 1 | n1 | CON1
Material concentration coefficient 2 | n2 | CON2
Strain in x | εx | EX
Strain in y | εy | EY
Young's Modulus, Material 1 (GPa) | EM1 | EM1
Young's Modulus, Material 2 (GPa) | EM2 | EM2
Poisson's Ratio, Material 1 | νM1 | vM1
Poisson's Ratio, Material 2 | νM2 | vM2
Stiffness constants, Material 1 | Q11M1, Q12M1 | Q11M1, Q12M1
Stiffness constants, Material 2 | Q11M2, Q12M2 | Q11M2, Q12M2

Table A1.

Nomenclature used for the analytical model and ANN formula.

Appendix B

Inputs: global stresses σx and σy (MPa), volumetric fractions n1 and n2, strains εx and εy, and Q11M1 and Q12M1 (GPa). Expected outputs: Q11M2 and Q12M2 (GPa).

σx | σy | n1 | n2 | εx | εy | Q11M1 | Q12M1 | Q11M2 | Q12M2
1 | 0 | 0.666 | 0.333 | 0.0090 | −0.0028 | 75.19 | 24.81 | 218.36 | 63.33
1 | 0 | 0.333 | 0.666 | 0.0090 | −0.0028 | 218.36 | 63.33 | 75.19 | 24.81
2 | 0 | 0.666 | 0.333 | 0.0120 | −0.0035 | 218.36 | 63.33 | 110.63 | 34.30
2 | 0 | 0.333 | 0.666 | 0.0120 | −0.0035 | 110.63 | 34.30 | 218.36 | 63.33
2.5 | 0 | 0.666 | 0.333 | 0.0225 | −0.0069 | 75.19 | 24.81 | 218.36 | 63.33
2.5 | 0 | 0.333 | 0.666 | 0.0225 | −0.0069 | 218.36 | 63.33 | 75.19 | 24.81
3 | 0 | 0.666 | 0.333 | 0.0337 | −0.0106 | 110.63 | 34.30 | 75.19 | 24.81
3 | 0 | 0.333 | 0.666 | 0.0337 | −0.0106 | 75.19 | 24.81 | 110.63 | 34.30
4 | 0 | 0.666 | 0.333 | 0.0359 | −0.0110 | 75.19 | 24.81 | 218.36 | 63.33
4 | 0 | 0.333 | 0.666 | 0.0359 | −0.0110 | 218.36 | 63.33 | 75.19 | 24.81
4.5 | 0 | 0.666 | 0.333 | 0.0506 | −0.0159 | 110.63 | 34.30 | 75.19 | 24.81
4.5 | 0 | 0.333 | 0.666 | 0.0506 | −0.0159 | 75.19 | 24.81 | 110.63 | 34.30
5 | 0 | 0.666 | 0.333 | 0.0300 | −0.0088 | 218.36 | 63.33 | 110.63 | 34.30
5 | 0 | 0.333 | 0.666 | 0.0300 | −0.0088 | 110.63 | 34.30 | 218.36 | 63.33

Table B1.

Part of the data used in training (data1).

σx | σy | n1 | n2 | εx | εy | Q11M1 | Q12M1 | Q11M2 | Q12M2
3.5 | 0 | 0.666 | 0.333 | 0.0210 | −0.0062 | 218.36 | 63.33 | 110.63 | 34.30
5.5 | 0 | 0.666 | 0.333 | 0.0494 | −0.0151 | 75.19 | 24.81 | 218.36 | 63.33
8 | 0 | 0.666 | 0.333 | 0.0480 | −0.0141 | 218.36 | 63.33 | 110.63 | 34.30
9.5 | 0 | 0.666 | 0.333 | 0.0571 | −0.0168 | 218.36 | 63.33 | 110.63 | 34.30
11 | 0 | 0.666 | 0.333 | 0.0661 | −0.0194 | 218.36 | 63.33 | 110.63 | 34.30
13.5 | 0 | 0.666 | 0.333 | 0.1518 | −0.0478 | 110.63 | 34.30 | 75.19 | 24.81
15.5 | 0 | 0.666 | 0.333 | 0.0931 | −0.0274 | 218.36 | 63.33 | 110.63 | 34.30
16.5 | 0 | 0.666 | 0.333 | 0.1856 | −0.0585 | 110.63 | 34.30 | 75.19 | 24.81
18.5 | 0 | 0.666 | 0.333 | 0.1111 | −0.0327 | 218.36 | 63.33 | 110.63 | 34.30
20 | 0 | 0.666 | 0.333 | 0.1201 | −0.0353 | 218.36 | 63.33 | 110.63 | 34.30
21.5 | 0 | 0.666 | 0.333 | 0.1291 | −0.0380 | 218.36 | 63.33 | 110.63 | 34.30
2 | 0 | 0.5 | 0.5 | 0.0133 | −0.0040 | 218.36 | 63.33 | 110.63 | 34.30
4 | 0 | 0.5 | 0.5 | 0.0300 | −0.0090 | 75.19 | 24.81 | 218.36 | 63.33
5.5 | 0 | 0.5 | 0.5 | 0.0412 | −0.0124 | 75.19 | 24.81 | 218.36 | 63.33
7 | 0 | 0.5 | 0.5 | 0.0524 | −0.0157 | 75.19 | 24.81 | 218.36 | 63.33

Table B2.

Part of the data used in testing (data2).

Inputs: global stresses σx and σy (MPa), volumetric fractions n1 and n2, strains εx and εy, and Q11M1 and Q12M1 (GPa). Expected outputs: Q11M2 and Q12M2 (GPa).

σx | σy | n1 | n2 | εx | εy | Q11M1 | Q12M1 | Q11M2 | Q12M2
1.84 | 0 | 0.429 | 0.571 | 0.0227 | −0.0072 | 110.63 | 34.30 | 75.19 | 24.81
2.68 | 0 | 0.600 | 0.400 | 0.0167 | −0.0049 | 218.36 | 63.33 | 110.63 | 34.30
3.52 | 0 | 0.429 | 0.571 | 0.0246 | −0.0073 | 75.19 | 24.81 | 218.36 | 63.33
4.36 | 0 | 0.600 | 0.400 | 0.0502 | −0.0159 | 110.63 | 34.30 | 75.19 | 24.81
5.20 | 0 | 0.429 | 0.571 | 0.0364 | −0.0108 | 218.36 | 63.33 | 110.63 | 34.30
6.04 | 0 | 0.600 | 0.400 | 0.0502 | −0.0153 | 75.19 | 24.81 | 218.36 | 63.33
6.88 | 0 | 0.429 | 0.571 | 0.0848 | −0.0271 | 110.63 | 34.30 | 75.19 | 24.81
7.72 | 0 | 0.600 | 0.400 | 0.0482 | −0.0142 | 218.36 | 63.33 | 110.63 | 34.30
8.56 | 0 | 0.429 | 0.571 | 0.0598 | −0.0178 | 75.19 | 24.81 | 218.36 | 63.33
9.40 | 0 | 0.600 | 0.400 | 0.1083 | −0.0342 | 110.63 | 34.30 | 75.19 | 24.81
10.24 | 0 | 0.429 | 0.571 | 0.0717 | −0.0214 | 218.36 | 63.33 | 110.63 | 34.30
11.08 | 0 | 0.600 | 0.400 | 0.0921 | −0.0280 | 75.19 | 24.81 | 218.36 | 63.33
11.92 | 0 | 0.429 | 0.571 | 0.1469 | −0.0469 | 110.63 | 34.30 | 75.19 | 24.81
12.76 | 0 | 0.600 | 0.400 | 0.0797 | −0.0235 | 218.36 | 63.33 | 110.63 | 34.30
13.60 | 0 | 0.429 | 0.571 | 0.0951 | −0.0284 | 75.19 | 24.81 | 218.36 | 63.33

Table B3.

Part of the data used in validation (data3).

Inputs: global stresses σx and σy (MPa), volumetric fractions n1 and n2, strains εx and εy, and Q11M1 and Q12M1 (GPa). Expected outputs: Q11M2 and Q12M2 (GPa).

σx | σy | n1 | n2 | εx | εy | Q11M1 | Q12M1 | Q11M2 | Q12M2
8.731 | 0.000 | 0.671 | 0.329 | 0.1135 | −0.0380 | 75.19 | 24.81 | 110.63 | 34.30
1.867 | 0.000 | 0.662 | 0.338 | 0.0205 | −0.0066 | 110.63 | 34.30 | 75.19 | 24.81
8.093 | 0.000 | 0.516 | 0.484 | 0.0940 | −0.0310 | 75.19 | 24.81 | 122.32 | 40.37
8.731 | 0.000 | 0.329 | 0.671 | 0.1135 | −0.0380 | 110.63 | 34.30 | 75.19 | 24.81
1.867 | 0.000 | 0.338 | 0.662 | 0.0205 | −0.0066 | 75.19 | 24.81 | 110.63 | 34.30
8.093 | 0.000 | 0.484 | 0.516 | 0.0940 | −0.0310 | 122.32 | 40.37 | 75.19 | 24.81
8.731 | 0.000 | 0.671 | 0.329 | 0.1135 | −0.0380 | 75.19 | 24.81 | 110.63 | 34.30
1.867 | 0.000 | 0.662 | 0.338 | 0.0205 | −0.0066 | 110.63 | 34.30 | 75.19 | 24.81
10.477 | 0.000 | 0.671 | 0.329 | 0.1362 | −0.0456 | 75.19 | 24.81 | 110.63 | 34.30
2.240 | 0.000 | 0.662 | 0.338 | 0.0246 | −0.0079 | 110.63 | 34.30 | 75.19 | 24.81
9.712 | 0.000 | 0.516 | 0.484 | 0.1128 | −0.0372 | 75.19 | 24.81 | 122.32 | 40.37
10.477 | 0.000 | 0.329 | 0.671 | 0.1362 | −0.0456 | 110.63 | 34.30 | 75.19 | 24.81
2.240 | 0.000 | 0.338 | 0.662 | 0.0246 | −0.0079 | 75.19 | 24.81 | 110.63 | 34.30
9.712 | 0.000 | 0.484 | 0.516 | 0.1128 | −0.0372 | 122.32 | 40.37 | 75.19 | 24.81
9.712 | 0.000 | 0.516 | 0.484 | 0.1128 | −0.0372 | 75.19 | 24.81 | 122.32 | 40.37

Table B4.

Data used in the ANN application (data4).

Nomenclature

ANN: artificial neural network

MLCM: metal-laminated composite material

MCL: metallic composite

E: Young's modulus

G: rigidity modulus

MSE: mean squared error

EvANN: ANN that directly determines the engineering elastic constants of one of the components of the laminate

QANN: ANN that determines the stiffness coefficients of one of the components of the laminate

References

  1. Yang ZR. Machine Learning Approaches to Bioinformatics. Exeter, UK: World Scientific; 2010. 336 p
  2. Al-shayea QK. Artificial neural networks in medical diagnosis. International Journal of Computer Science Issues. 2011;8(2):150-154
  3. D'Antino T, Papanicolaou C. Mechanical characterization of textile reinforced inorganic-matrix composites. Composites Part B: Engineering. 2017;127:78-91
  4. Abbud LH, Al-Masoudy MMM, Hussien Omran S, Abed AM. Experimental study of the mechanical properties of nano composite materials by using multi-metallic nano powder/epoxy. Materials Today: Proceedings. 2021. DOI: 10.1016/j.matpr.2021.06.395. ISSN 2214-7853
  5. Benyelloul K, Aourag H. Elastic constants of austenitic stainless steel: Investigation by the first-principles calculations and the artificial neural network approach. Computational Materials Science. 2013;67:353-358
  6. Shin HS, Lee SW, Kim CY, Bae GJ. Neural network based identification of nine elastic constants of an orthotropic material from a single structural test. In: Proceedings of the 21st ISARC; Jeju, South Korea; 2004
  7. Huang DZ, Xu K, Farhat C, Darve E. Learning constitutive relations from indirect observations using deep neural networks. Journal of Computational Physics. 2020;416:1-28
  8. Acosta-Flores M, Jiménez-López E, Chávez-Castillo M, Molina-Ocampo A, Delfín-Vázquez JJ, Rodríguez-Ramírez JA. Experimental method for obtaining the elastic properties of components of a laminated composite. Results in Physics. 2019;12:1500-1505
  9. Acosta-Flores M, Jiménez-López E, Rodríguez-Ramírez JA. Modelo para el análisis experimental de esfuerzos intralaminares en materiales compuestos laminados sujetos a carga axial [Model for the experimental analysis of intralaminar stresses in laminated composite materials subjected to axial load]. DYNA-Ingeniería e Industria. 2016;91:216-222
  10. Liu X, Tian S, Tao F, Yu W. A review of artificial neural networks in the constitutive modeling of composite materials. Composites Part B: Engineering. 2021;224:1-15
  11. Chen M, Challita U, Saad W, Yin C, Debbah M. Artificial neural networks-based machine learning for wireless networks: A tutorial. IEEE Communications Surveys and Tutorials. 2019;21(4):3039-3071
  12. Zhang Z. Artificial neural network. In: Multivariate Time Series Analysis in Climate and Environmental Research. Cham: Springer International Publishing; 2018. pp. 1-35
  13. Li X, Cheng X, Wu W, Wang Q, Tong Z, Zhang X, et al. Forecasting of bioaerosol concentration by a back propagation neural network model. Science of the Total Environment. 2020;698:134315
  14. Ye F, Wheeler C, Chen B, Hu J, Chen K, Chen W. Calibration and verification of DEM parameters for dynamic particle flow conditions using a backpropagation neural network. Advanced Powder Technology. 2019;30(2):292-301
  15. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research. 2014;15(1):1929-1958
  16. Li Z, Kamnitsas K, Glocker B. Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation. In: Shen D, Liu T, Peters TM, Staib LH, Essert C, Zhou S, et al., editors. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Cham: Springer International Publishing; 2019. pp. 402-410
  17. Frei S, Chatterji NS, Bartlett P. Benign overfitting without linearity: Neural network classifiers trained by gradient descent for noisy linear data. In: Proceedings of the Thirty Fifth Conference on Learning Theory (PMLR); 2022. pp. 2668-2703
  18. Ying X. An overview of overfitting and its solutions. Journal of Physics: Conference Series. 2019;1168(2)
  19. Storey MA, Singer L, Cleary B, Figueira Filho F, Zagalsky A. The (r)evolution of social media in software engineering. In: Future of Software Engineering Proceedings. 2014. pp. 100-116. DOI: 10.1145/2593882.2593887
  20. Fritsch S, Guenther F. neuralnet: Training of neural networks. The R Journal. 2010;2(1):30-38
  21. Riedmiller M, Braun H. Rprop: A fast adaptive learning algorithm. In: Proceedings of the International Symposium on Computer and Information Science VII; 1992
  22. Riedmiller M, Braun H. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In: IEEE International Conference on Neural Networks; 1993. pp. 586-591
  23. Maplesoft. Maple (Release 18). Waterloo, ON: Maplesoft, a division of Waterloo Maple Inc.; 2014
