Inverse Analysis in Civil Engineering: Applications to Identification of Parameters and Design of Structural Material Using Mono or Multi-Objective Particle Swarm Optimization

The field of research that studies the emergent collective intelligence of self-organized, decentralized simple agents is referred to as Swarm Intelligence. It is based on social behaviour observed in nature, such as bird flocks, fish schools and bee hives, where a number of individuals with limited capabilities come to intelligent solutions for complex problems. The computer science community has long recognized the importance of emergent behaviours for complex problem solving. Hence, this book presents some recent advances in Swarm Intelligence, especially new swarm-based optimization methods and hybrid algorithms for several applications. The content of this book allows the reader to learn more about both the theoretical and technical aspects and the applications of Swarm Intelligence.


Introduction
Many engineering applications suffer from ignorance of mechanical parameters. This is particularly true when a soil model is necessary to assess soil behaviour [Meier et al., 2008]. Nevertheless, it is not always possible to directly measure the values of all the parameters in soil mechanics. Considering structural mechanics, [Li et al., 2007] also worked on the optimal design of a truss pylon respecting the stress constraints of the elements, which is not an easy task given the number of elements and the loading of the structure. Inverse analysis is an efficient way to reach these aims. This technique becomes more and more popular thanks to the increase in computing capabilities. Computing costs have decreased and now allow complex optimization problems to be handled through meta-heuristic methods, for example to identify the solution of a problem such as the mechanical parameters of a soil behaviour model [Fontan et al., 2011, Levasseur et al., 2008], to define the best sections of the beams composing a truss structure, or to optimize the mechanical properties of a wood-plastic composite designed for decking while taking into account the environmental impact during the life cycle of the product [Ndiaye et al., 2009]. The literature on inverse analysis is very rich and covers many application fields, from management to mechanical science, as attested by Table 1 in [Fontan et al., 2011], which presents several civil engineering applications (that table is not reproduced here). Most of the authors mentioned in this paper used the concept of inverse analysis to identify parameters either in structural mechanics [Li et al., 2007, Fontan 2011] or in soil mechanics [Meier et al., 2008, Levasseur et al., 2008]. They simply used different mechanical models (analytical or numerical) or different algorithms to solve their problem (PSO, gradient descent, ant colony, genetic algorithm, etc.). Inverse analysis is based on the simple concept of solving an equation to find the n values X_n respecting Equation (1),

M(X_1, ..., X_n) = Y_m (1)

with M the mechanical model corresponding to the real behaviour of the structure under analysis and Y_m the m measurements carried out on site.
Among the meta-heuristic algorithms available to solve such problems, the PSO is regarded as being one of the most efficient in terms of accuracy and computing time [Fan 2006, Hammouche 2010].

Particle swarm optimization (PSO)
Particle swarm optimization (PSO) is a swarm intelligence technique developed by Kennedy and Eberhart (1995). This technique, inspired by flocks of birds and shoals of fish, has proved to be very efficient on hard optimization problems. The swarm is composed of particles, a number of simple entities randomly placed in the search space of the objective function. Each particle can interact with the members of the swarm that form its social neighbourhood. It can evaluate the fitness at its current location in the search space, and it knows both its best position ever visited and the best position of its social neighbourhood. It determines its movement through the search space by combining this information and moving with the corresponding instantaneous velocity. A particle position is better than another one if its objective function value is better (better meaning smaller for a minimization problem and greater for a maximization problem).
The social neighbourhood of a given particle influences its trajectory in the search space. The two most commonly used neighbourhood topologies are the fully connected topology, named gbest topology, and the ring topology, named lbest topology [Kennedy and Mendes, 2002]. In the fully connected topology the trajectory of each particle is influenced by the best position found by any particle of the swarm as well as by its own past experience. Usually the ring topology neighbourhood comprises exactly two neighbours: every particle is connected to its two immediate neighbours, one on each side, with toroidal wrapping. With a fully connected topology the swarm converges quickly on the problem solution but is vulnerable to the attraction of local optima, while with the ring topology it explores the search space better and is less vulnerable to the attraction of local optima. Various neighbourhood topologies have been investigated in [Kennedy, 1999; Kennedy and Mendes, 2002; Mendes et al., 2004] (fig. 1). The main conclusion was that the difference in performance depends on the topology implemented for a given objective function, with nothing suggesting that any topology is generally better than any other [Poli et al., 2007].
If the objective function is n-dimensional, the position and velocity of any particle can be represented as vectors with n components. Starting with the velocity vector, v_p = (v_p,1, ..., v_p,n), each component v_p,i is given by Equation (4). For the position vector x_p = (x_p,1, ..., x_p,n), each component x_p,i is given by Equation (5):

v_p,i(t+1) = w v_p,i(t) + c_1 r_1 [p_p,i(t) - x_p,i(t)] + c_2 r_2 [g_p,i(t) - x_p,i(t)] (4)

x_p,i(t+1) = x_p,i(t) + v_p,i(t+1) (5)

where x_p,i(t) is the ith component of the position of particle p and v_p,i(t) the ith component of its velocity; p_p,i is the ith component of the best position ever visited by the particle; g_p,i is the ith component of the best position ever visited by the neighbourhood of the particle; w is called the inertia weight and is used to control the impact of the previous velocity on the current one; r_1 and r_2 are uniformly distributed random numbers between 0 and 1; c_1 and c_2 are positive acceleration constants. Formula (4) is applied for each dimension of the objective function, for each particle, and synchronously at each time step for all the particles of the swarm.
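As a concrete illustration, the synchronous update of Equations (4) and (5) for one particle can be sketched as follows (a minimal sketch; the inertia weight and acceleration constants shown are illustrative values, not those used in the chapter):

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a single particle.

    x, v      : current position and velocity (lists of floats)
    p_best    : best position ever visited by this particle
    g_best    : best position found by its social neighbourhood
    w, c1, c2 : inertia weight and acceleration constants
    """
    new_v, new_x = [], []
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()   # uniform in [0, 1]
        vi = (w * v[i]
              + c1 * r1 * (p_best[i] - x[i])        # cognitive term
              + c2 * r2 * (g_best[i] - x[i]))       # social term
        new_v.append(vi)                            # velocity update (4)
        new_x.append(x[i] + vi)                     # position update (5)
    return new_x, new_v
```

In a full swarm, this step is applied to every particle at each iteration, after which p_best and g_best are refreshed from the new objective function values.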

Discrete binary Particle Swarm Optimization (DPSO)

Kennedy and Eberhart (1997) introduced a discrete binary version of PSO (DPSO) that operates on binary variables (bit, symbol or string) rather than real numbers. The difference between the PSO and DPSO definitions is in the updating rules: the position updating rule for x_p,i(t+1) (7) is based on a logistic function (6). The introduction of DPSO extends the use of PSO to the optimization of discrete binary functions as well as functions of continuous and discrete binary variables at the same time.

Where  is an uniformly distributed random number between 0 and 1 Michaud et al. (2009), to be able to handle the optimization of functions including more than two discrete variables, have generalised the discrete binary version of PSO to a discrete nary version of PSO (8).
x p,i (t  1) where  1 , …  k-1 are strictly ordered uniformly distributed random numbers between 0 and 1

Application to structural problems
This section presents the results of the work carried out on a continuous beam lying on three elastic supports. A numerical code was developed using real data (synthetic data in the case of the numerical analysis), a FE model of the structure as the mechanical model, and the PSO. The flowchart of the code is presented in Figure 2. As explained above, the code combines (a) a mechanical model of the structure (numerical or analytical), (b) a field data generator and (c) a particle swarm optimization algorithm (PSO) to iteratively minimize the distance between field data and predicted data. This work was carried out on both a numerical case and a real-scale case. The influence of the metrology was studied by changing either the number of measurement data used to identify the three stiffnesses, the level of noise of the sensors, or the localization of the sensors on the beam. The developed code using the PSO succeeded in estimating the stiffnesses with an accuracy consistent with the different sources of error taken into account during the experiments. Further synthetic experiments were carried out with this code to identify the different sources of error that can impact the accuracy of the identification process:
- error from the accuracy of the sensors,
- error from the sensor placements,
- error from the optimization algorithm used during the identification process,
- the sensitivity of the unknown parameters to the field data.
Both numerical and real experiments were carried out to validate the methodology and to highlight the influence of the input data (here displacement data) on the quality of the identification. A general numerical framework was developed, combining different tools and methods (inverse analysis, FEM, PSO). The efficiency of the PSO, in terms of CPU time, in converging towards the solution of the problem allows a FE model of the structure to be integrated without any difficulty. A second part of this work focused on the different sources of error that may alter the accuracy of the parameter identification process. It is shown on two structures, a continuous beam bearing on three elastic supports, cf. fig. 3, and a half frame structure, cf. fig. 4, that four points strongly impact the parameter identification. Several experiments were carried out considering different metrology sets, i.e. by modifying either the number of sensors, their accuracy or their location on the structure. Several recommendations are given to help engineers prepare their metrology set as well as possible in order to perform a parameter identification using the inverse analysis concept with the PSO.
Fig. 2. Flowchart of code based on the concept of the inverse analysis.

Framework and objectives of both synthetic and real experiments
Concerning the numerical experiments, the following work relies on either a numerical model using the finite element (FE) software Castem©, or an analytical model. This means that the "field data" are also fully synthetic. In order to reproduce what happens with real sensors on site, some noise is introduced to disturb the original "true" values that are first generated, using a controlled random process. The result is synthetic "noised" data at each location where a sensor can be placed. It is from these "noised" data that the inversion process is carried out. The main advantage of the synthetic simulation is that, the "true" values being known, it is always possible to quantify the quality of the estimation (i.e. the distance between "true" and estimated values), making a detailed analysis of the error sources possible. The synthetic field data u_insitu are generated from the exact data u_exact obtained from the mechanical model and a random coefficient, cf. Equation (10). This coefficient models the magnitude of the measurement error, which depends on the accuracy of the sensors.
It is assumed to be normally distributed with a zero mean and a given standard error ε (the values of ε considered are 0%, 1%, 3% and 5%), cf. Equation (9), which simulates sensors of different quality. These errors are intended to cover all the sources of error and uncertainty concerning the measurement process, whether due to the device or to other causes (environmental conditions, electronic noise, etc.). The errors arising on different sensors are assumed to be uncorrelated. As soon as ε exists, it is impossible for F_obj to converge towards zero [Fontan et al., 2011].
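The noising scheme of Equations (9) and (10) can be sketched as follows (a hypothetical implementation assuming a multiplicative perturbation u_insitu = u_exact · (1 + a), with a drawn from N(0, ε); the exact form used in the chapter may differ slightly):

```python
import random

def make_noised_data(u_exact, eps, rng=random):
    """Perturb each exact displacement by an independent Gaussian
    coefficient with zero mean and standard deviation eps
    (0.00, 0.01, 0.03 or 0.05), simulating sensors of different
    quality; the errors on different sensors are uncorrelated."""
    return [u * (1.0 + rng.gauss(0.0, eps)) for u in u_exact]
```

With ε = 0 the "noised" data coincide with the exact data, which is the only case in which F_obj can reach zero.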
The real experiment was carried out on a structure quite similar to the numerical model of the beam bearing on three elastic supports. The main difference is due to the integration of several components so as to model the effect of the soil-structure interaction. These structures are described in Section 3.1.1.

Presentation of the studied structures
The first numerical example is that of a continuous beam bearing on three elastic supports, cf. fig. 3, named STR1. It models a wooden beam bearing on three elastic supports, with two equal spans L_i = 1.35 m, which was also the support of a "physical experiment" in the same research program, not detailed here [Hasançebi et al., 2009 and Li et al., 2009]. The section is a 7.50 x 7.50 cm² square. The beam is assumed to be homogeneous and the Young's modulus is equal to 10 GPa. A 50 daN/m load is uniformly distributed all along the beam.
The parameters that must be identified from the measurements are the stiffnesses of the three elastic bearings (modelled as Winkler springs), whose true values are known in this synthetic model. The true values of the support stiffnesses result in a large settlement on the third bearing (bearing 1 is the stiffest and bearing 3 the softest). Ten measurements of displacement were extracted to generate the synthetic field data. The abscissas of those ten displacements are given in Table 1. Four metrology sets, called CM_i, are also given in Table 1. Those metrology sets were created to stress either the number of sensors, or their localisation on the beam for the same number of sensors. This first example is used to study the influence of the number, accuracy and localisation of the sensors on the accuracy of the parameter identification.
The second numerical structure is a half frame, cf. fig. 4, named STR2. The column is fixed at its foot whereas the beam end is pinned. The beam is 4.00 m long (L) and the column is 5.00 m high (H). The section of the beam is an IPE270 (inertia I_beam = 5790 cm^4) and the column is a HEA340 (inertia I_column = 27700 cm^4). The beam and the column are made of standard steel (Young's modulus E = 210 GPa). A distributed load q = 500 daN/m is applied vertically on the beam, whereas a horizontal concentrated load F_lat = 1000 daN is applied on the column at two thirds of its height. The parameters to identify are the flexural stiffnesses of the beam, EI_beam, and of the column, EI_column. The metrology set is made of six displacement sensors: three sensors are evenly distributed on the beam and the others on the column, cf. fig. 4. The analytical relationships giving the displacements at each sensor have been expressed as functions of E, I_beam, I_column, q, F_lat, L and H using beam theory.
Concerning the real experiment, cf. fig. 5, a continuous wooden beam 3.00 m long bears on three different supports (Pinus pinaster, square section 7.50 x 7.50 cm², Young's modulus equal to 10 GPa). This structure is named STR3. The distance between supports is 1.35 m. Each support is made of a transverse beam, or Secondary Beam, SB. Varying the span of a SB amounts to varying the support stiffness. Each SB lies on a wooden plate, which rests on its four sides on a fully rigid concrete support. This physical model reproduces the main features of a bridge deck (the continuous beam) bearing on foundations (the SB beams) lying on a deformable soil mass (here modelled by the wooden plate). This three-component system has some complexity, typical of soil-structure interaction.

Presentation of the objectives of the experiments
Two kinds of numerical structures and one real structure were studied to reach several objectives and to highlight several points: firstly, the feasibility of the identification process using the PSO as an efficient tool and, secondly, the clear identification of the sources of error which occur during an identification process.
The real experiment, applied on STR3, focused on the identification of mechanical parameters and studied the impact of the localisation of the sensors used. The goal of the numerical experiments was to study the influence of:
- the noise induced by the meta-heuristic algorithm, applied on STR1,
- the measurement noise, applied on STR1,
- the interaction of the parameters to identify, applied on STR2.
For each numerical experiment, the identification process is repeated 20 times. Those simulations use 20 sets of noised data, as explained in the following section. The average of the identified parameters (20 values per parameter per experiment), their standard deviation and their coefficient of variation (CV) are calculated. The ending conditions of the identification process are either (a) the maximum number of iterations, fixed at 35 (it has been shown in [Fontan et al., 2011] that increasing the number of iterations is not efficient in terms of gain on F_obj in this case), or (b) the threshold on F_obj, fixed at 10^-5. As soon as the F_obj value is below this threshold, the identification process automatically stops.
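The repetition protocol and the two ending conditions can be sketched as follows (a minimal sketch; the function and variable names are hypothetical, not taken from the chapter's IdP code):

```python
import statistics

def should_stop(iteration, f_obj, max_iter=35, threshold=1e-5):
    """Ending conditions: (a) the iteration budget is exhausted, or
    (b) the objective function F_obj falls below the threshold."""
    return iteration >= max_iter or f_obj < threshold

def run_statistics(values):
    """Average, standard deviation and coefficient of variation (CV, %)
    of the identified values of one parameter over the repeated runs."""
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return mean, std, 100.0 * std / mean
```

The CV normalizes the scatter by the average, which is what makes the comparison between parameters of very different magnitudes (here the three support stiffnesses) meaningful.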

Results of the identification process: real experiment
The real experimental tests use three different support sets, so that the lengths of the SB are the following in the three configurations studied:
- configuration 1: l_SB1 = 0.50 m, l_SB2 = 0.50 m and l_SB3 = 0.50 m,
- configuration 2: l_SB1 = 0.30 m, l_SB2 = 0.90 m and l_SB3 = 1.30 m,
- configuration 3: l_SB1 = 0.20 m, l_SB2 = 0.50 m and l_SB3 = 1.30 m.
A 3D finite element model, 3DFEM, represents the global experiment, fig. 6. This 3DFEM helps to estimate the equivalent stiffness of each elastic support of the main beam by considering the association of the stiffnesses of the SB and the plate as a Winkler spring whose stiffness is unknown for each support. Fig. 7 gives the displacements measured during the experimental tests for each support set (illustrated by the points), whereas the displacements obtained with the 3DFEM for each support set are illustrated by the continuous curves. The good correlation between measurements and simulations confirms the good quality of the 3DFEM model and justifies both the a priori estimation of the equivalent stiffness of each support and the limits of the search space used for the PSO. It was then possible to constrain the search domain for the equivalent stiffnesses k_i between 0 and 2 MN/m. The physical model is used with the distributed load and for the three configurations of support sets defined above. The vertical displacements are measured on all sensors. The IdP software is then used, where the PSO is combined with a 2DFEM mechanical model presented in fig. 8. So as to analyse the influence of the number and location of the sensors, the efficiency of the identification process is compared for three possible sensor sets:
- Set A: three sensors (n°1, n°5 and n°9, cf. Table 2), located on the three supports,
- Set B: ten sensors (n°1 to n°10, cf. Table 2), regularly spaced all along the beam,
- Set C: three sensors (n°3, n°5 and n°7, cf. Table 2), concentrated in the left span.
The value of the objective function at convergence (well above 10^-12) is due to the measurement noise. The stiffnesses presented in Tables 3-5 are those identified by the software for the three respective sensor sets (A, B and C). For each support set and each sensor set, the identification process was repeated 10 times, keeping the same input data (measurements). Since the PSO has a random dimension, the values obtained as the final solution differ from one simulation to another.
The tables provide the average value and the standard deviation calculated from these 10 simulations. Let us first consider the results obtained for sensor set B (using data from all 10 sensors for the inversion). All simulations converge towards similar values, leading to a small standard deviation. In addition, the identified values are close to the "reference values", which confirms the ability of the process to correctly identify the unknown parameters. The small difference between reference and identified values is not a problem when one recalls that the former cannot be considered as the "true" solution (it is only a good indicator of the range of the true solution). These results confirm the efficiency of the identification process.
When comparing the results of Table 4 with those of Table 5, it can be seen that sensor sets A and B lead to almost the same results. This shows that using three well-located sensors can be sufficient. It is not the case for set C, which shows some limits for identifying the stiffnesses of the external supports 1 and 3. This confirms, on a practical application, that the location of the sensors has a strong influence on the quality of the identification.

Sources of errors impacting the accuracy of the identification process: synthetic experiments
Concerning the synthetic experiments, the following work relies on either a numerical model using the finite element (FE) software Castem©, or an analytical model.

Error from the meta-heuristic algorithm
Twenty identification processes were carried out without any noise applied to the field data (i.e. considering perfect measurements). These tests were applied on structure 1, STR1, using the CM_1 metrology set, cf. Table 1. This case corresponds to a perfect case with a high number of sensors, evenly distributed, and no measurement error. The convergence curve of the best particle of the swarm is presented in Figure 9 for one simulation. The three elastic stiffnesses are identified; the results are summarized in Table 6.
Table 6. Statistical analysis of the identified parameters using exact field data, structure 1, metrology set CM_1.
Fig. 9. Convergence curve of F_obj during an IdP process using exact field data.
The average value of each unknown parameter is very close to the reference but the standard deviation is not zero, which means that not all solutions are identical, even in this perfect case. Some scatter due to the meta-heuristic algorithm affects the identification process. However this scatter remains small. In real cases, it will be overshadowed by the other error sources that are studied next.

Sensors with measurement noise
In this section, the three elastic stiffnesses are identified on structure 1, STR1, using the metrology set CM_1. The objective is to show how noisy data impact the accuracy of the predicted parameters. Several values of ε (1%, 3% and 5%) were used to noise the field data. Twenty identification processes, with a different noise realization for each identification, were carried out. The results are given in Table 7. The average, standard deviation and coefficient of variation (C.V.) illustrate the impact of the noise on the accuracy of the process. The larger the ε coefficient, the wider the scatter, which is coherent. It can also be noticed that a random noise from a normal distribution with a zero mean and a standard error up to ε = 5% does not greatly impact the prediction of the identified parameters: the error on the average of the 20 simulations is about 1% and the C.V. is between 1 and 7% for this metrology set made of 10 evenly distributed sensors. The loss of accuracy is linear with the standard deviation of the random error, cf. Table 7 and Figure 10: the accuracy of the predicted parameters is linearly correlated with the accuracy of the sensors.
Table 7. Statistical analysis of the identified parameters using noised field data, structure 1, metrology set CM_1.

Dependence between unknown parameters
The structure studied here is structure 2, STR2, i.e. the half frame structure presented in Section 3.1.1. Let us assume that one must identify the Young's modulus E, the inertia of the column I_column and the inertia of the beam I_beam. Four different levels of noise on the field data (ε = 0% (perfect data), then ε = 1%, 3% and 5%) are considered and the simulations are repeated 20 times. The results are presented in Figures 11 and 12 and Table 8. The first result is that one obtains a front of solutions, since it is not possible to uncouple the weight of E from that of the inertia: for the same product EI_i, there exists an infinite number of acceptable pairs {E, I_i = (EI_i)/E = k/E} satisfying the same criteria. In order to estimate the sensitivity of the identified parameters to the field data, the sensitivity of the displacements to the stiffnesses was calculated. EI_column and EI_beam are varied in the [-50%; +50%] range and the displacement is calculated at three points of the beam and three points of the column, cf. fig. 4. The results confirm that only some field data are sensitive to the stiffness variations. The stiffness variation of the column (respectively of the beam) has only a negligible influence on the beam (respectively column) displacements. Thus, during the identification process, the magnitude of the errors on the sensors located on the beam will not impact the column, because of this lack of sensitivity, and conversely. A more detailed analysis, Table 9, shows that the sensitivity of the column displacements to the column parameters is slightly larger than the corresponding sensitivity for the beam. The sensitivity has been calculated as the ratio between the variation of the displacement at the studied point and the variation of the stiffness. These results show that the displacements of the column are more sensitive to a variation of the stiffness of the column than the beam displacements are to the beam stiffness, and explain why the scatter of the identified stiffness EI_column, fig. 12, is larger than the scatter of the identified stiffness EI_beam. Indeed, when the field data on the column are noised, the identified inertia of the column deviates from the reference value according to the magnitude of the noise. This result highlights how important the choice of both the nature and the localisation of the field data, with regard to the parameters to be identified, is for the accuracy of the identification process.
Table 9. Sensitivity of the displacement at several points of the structure to the variation of the stiffnesses.
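The sensitivity ratio described above can be sketched as a finite-difference computation (a hypothetical sketch: `model` stands for any function mapping a stiffness value to the displacement at the studied point, and the ±50% range follows the text):

```python
def sensitivity(model, k_ref, delta=0.50):
    """Relative sensitivity of a displacement to a stiffness parameter:
    ratio between the relative variation of the displacement at the
    studied point and the relative variation of the stiffness, the
    stiffness being varied over the [-delta, +delta] range."""
    u_minus = model(k_ref * (1.0 - delta))  # displacement at -50 %
    u_plus = model(k_ref * (1.0 + delta))   # displacement at +50 %
    u_ref = model(k_ref)                    # reference displacement
    return abs(u_plus - u_minus) / abs(u_ref) / (2.0 * delta)
```

A displacement that does not depend on the stiffness at all gives a ratio of zero, which is the "lack of sensitivity" invoked above for beam sensors versus the column inertia.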

Application to eco-design
Taking into account environmental impact criteria in the preliminary eco-design of semi-products or of full functional units is becoming more and more of an issue for industry. It implies going through a life cycle analysis (LCA), which is now the international standard for evaluating such impacts. It is in fact the only way to compare the environmental impact of different products that fulfil the same function, from the production of the raw materials to the final destination. The fact that it is necessary to know the whole life cycle of a product makes it difficult to use the LCA during the preliminary eco-design stage. One way to tackle the problem is to focus on one of the stages of the life cycle of the product and to consider it as independent from the other stages.
The design process will be different if we are trying to: i) improve the environmental characteristics of a product while disturbing its production process as little as possible; ii) optimize the environmental impact of a product defined by its end-use performances without restricting oneself to a particular process. The first case, frequent with manufacturers, being guided by the manufacturing process, can make it impossible to meet both the technical and the environmental requirements within a given manufacturing scheme. The second approach, more prospective and open, is guided by the required end-use properties, and can therefore be tackled either by seeking an environmental optimum in a search space constrained by the functional specifications, or through a multi-objective optimization.
The second approach is closer to conventional preliminary design. However, as multi-objective optimization does not provide a single solution but a set of possible solutions satisfying the design criteria, among which the designer will be able to choose according to additional constraints, both approaches will be considered as preliminary (eco-)design.
The example presented here concerns the preliminary design of an outdoor decking taking into account its environmental profile (first approach). The initial choice was a wood-plastic composite, a choice allowing the use of industrial by-products in a constrained search space. The optimum of the required properties will be obtained by multi-objective optimization.

A multi-objective optimization problem
Design by multi-objective optimization implies the simultaneous optimization of various contradictory objectives, as illustrated below.
Take a simple example consisting in minimizing simultaneously the two following functions: f_1(x) = x_1 and f_2(x) = x_2/(a·x_1). The improvement of the first objective, f_1(x), comes with a degradation of the second objective, f_2(x). This contradiction expresses the fact that there does not exist an optimal solution with regard to both objectives; there are only optimal compromises.
With this example we see that for a minimal f_1, and thus x_1 as low as possible, we need x_2 as low as possible to minimize f_2. In addition, the absolute minimum of f_2 is obtained with x_1 as high as possible and x_2 as low as possible. It is the taking into account of this contradiction between the minimization of f_1 and the minimization of f_2 that introduces the notion of compromise, whether one favours f_1 or f_2. We also see that, from a purely algebraic point of view, x_1 cannot be null (division by zero). This observation introduces the fact that there is often a certain amount of constraints that must be met by the objective functions and/or their variables. The latter are also called parameters, optimization variables or design variables. The constraints, which are specifications of the problem, limit the search spaces of the parameters, for example by setting bottom or top values. A general multi-objective optimization problem includes a set of k objective functions of n decision variables (parameters) constrained by a set of m inequality and p equality constraint functions. It can be defined as below:

minimize f_i(x), i = 1, ..., k
subject to g_i(x) <= 0, i = 1, ..., m and h_j(x) = 0, j = 1, ..., p

where x = (x_1, ..., x_n) is the vector of decision variables, f_i: R^n -> R for i = 1, ..., k are the objective functions and g_i, h_j: R^n -> R for i = 1, ..., m and j = 1, ..., p are the constraint functions of the problem. A compromise is said to be optimal if every improvement of one objective induces a degradation of another objective. A compromise whose objectives can all be improved is not optimal: it is said to be dominated by at least one other compromise, the one obtained after improvement of its objective functions. The optimal compromises are located on a front, named the Pareto front (fig. 13). Pareto dominance can be defined as follows: a solution is Pareto optimal if and only if it is not dominated by any other solution [Van Veldhuizen et al., 2000; Reyes-Sierra et al., 2006; Zitzler et al., 2000]. A Pareto optimal solution, a vector of decision variables x*, can be defined as below [Castéra et al., 2010]: there exists no feasible x such that f_i(x) <= f_i(x*) for all i = 1, ..., k with f_j(x) < f_j(x*) for at least one j. The presence of a Pareto front, thus of a set of optimal non-equivalent solutions, allows the choice of an optimal solution with regard to economic or functional criteria, which are external to the solved multi-objective optimization problem.
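Pareto dominance and the extraction of the front from a set of compromises can be sketched as follows (for the minimization of all objectives, as in the definition above):

```python
def dominates(a, b):
    """True if compromise a dominates compromise b: a is no worse than b
    in every objective and strictly better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated compromises, i.e. the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note that a point never dominates itself (the strict inequality fails), so each member of the front survives the filter.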
Fig. 13. The Pareto front is constituted by the plain dots; the objective functions f_1 and f_2 at a hollow point can still be improved to reach a point of the front, therefore such a point is dominated by at least one point of the front.
We will illustrate multi-objective particle swarm optimization on the design of a wood-plastic composite decking with three objectives [Michaud et al., 2009]. In this example, the optimization focuses on the creep, swelling and exhaustion of abiotic resources functions.
The design variables are mainly characteristics of the raw materials, such as timber particle sizes and chemical or thermal modifications of the timber.

The wood-plastic composite preliminary eco-design problem
The wood-plastic composites (WPC) were initially developed in North America for recycling materials (plastics and papers); they also enable a significant reduction of the amount of plastic coming from the petrochemical industry. There is thus in their development both a definite economic advantage and a potential environmental interest. Nevertheless, when the decking is used outdoors, these products exhibit a certain number of weak points and contradictions: in order to allow a homogeneous extrusion and to prevent the material from becoming too fragile, a minimal quantity of thermoplastic (about 30 percent in the case of a PEHD/wood composite) is necessary. In addition, chemical additives are included in the formula in order to improve the compatibility between the two components, one being polar and the other apolar.
The WPC preliminary eco-design first requires the designer to solve a multi-objective optimization problem. Usually one of the three strategies below is used:
- optimization of one objective with constraints from the others, which leads to a single solution;
- optimization of a weighted function including the different objectives, which leads to a single solution;
- Pareto optimization, which leads to a set of optimal compromises between the objectives that is well distributed in the space of solutions.
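As a generic illustration of the second strategy (not the chapter's own formulation), a weighted aggregation collapses the objectives into a single scalar whose minimization yields one compromise:

```python
def weighted_objective(objective_values, weights):
    """Aggregate several objective values into one scalar via a weighted sum;
    minimizing this scalar yields a single compromise solution whose position
    on the Pareto front depends on the chosen weights."""
    return sum(w * f for w, f in zip(weights, objective_values))
```

Different weight vectors pick out different compromises, which is why this strategy yields only one solution per run, in contrast to Pareto optimization.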
The population-based search approaches, such as the genetic algorithm (GA), ant colony (AC) and particle swarm optimization (PSO), are well adapted to Pareto optimization with more or less efficiency. The PSO technique, like other evolutionary techniques, finds optima in complex optimization problems. Like GA, the system is initialized with a population and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. PSO, while traversing the search space, is focused on the optimum, whereas GA explores the search space and thus takes more time to find the optimum. In the WPC preliminary eco-design the main objective is to find the relevant optima so as to be able to choose an optimum with regard to economical or functional criteria, knowing that completely different composite formulations lead to equivalent composites with reference to the objective functions. The multi-objective PSO technique is particularly well suited to this problem.

The wood-plastic composite preliminary eco-design modelling
The modelling of the preliminary eco-design of WPC for decking applications has required a multidisciplinary team (physicists and computer scientists). The modelling process consisted in generating knowledge through experiments, collecting the knowledge generated together with that from the literature, and building up the influence graphs of the relationships between the problem variables (fig. 14). The three objectives considered in the preliminary eco-design of the wood-plastic composite (creep, swelling and exhaustion of fossil resources functions) have been identified as critical weak points of the product [Michaud et al., 2009]. From an environmental point of view, exhaustion of fossil resources is, with the greenhouse effect, the weak point of this material. We will recall their definitions in order to highlight the algorithmic nature of these functions.

The creep function (def)
The creep function, def(t_ref), is an empirical non-linear power function that has been fitted to bending experimental results. The magnitude of creep deformation is related to the elastic compliance 1/E. The kinetics of creep deformation is related to the viscosity of the composite, ν. The fibre size distribution parameter k_GRAN used in equation (13) is a discrete variable that can take three different values between 0.3 (random) and 1 (unidirectional), with an intermediate value calculated at 0.69 (partially oriented), see Michaud et al., op. cit., whereas the other variables used in equations (12), (13) and (14) are continuous. In fact the def function (equation 12), in its developed formula, has an algorithmic form due to the conditions on the discrete variable k_GRAN.

Fig. 14. The influence graph of relationships between the decision variables and the objectives.
where A and N are fitted parameters of the creep function model, σ0 is the applied stress, σMOR is the modulus of rupture of the composite material, t_ref is the time to reach a limit state deflection, E is the modulus of elasticity and ν is the apparent viscosity of the composite at room temperature. E and ν are calculated through a simple mixture law, as shown in equations (13) and (14). These equations reveal the main optimization variables, i.e. material properties, volume fractions and fibre orientation.

Table 10. Variables X = {x1, x2, …, x12} related to the composite formulation:
- x1: fiber ratio in composite formulation; 0 ≤ x1 ≤ 1 and x1 = x1(x4 + x5 + x6)
- x2 = λ_add: additives ratio in composite formulation; 0 ≤ x2 ≤ 1
- x3 = λ_m: matrix ratio in composite formulation; 0 ≤ x3 ≤ 1, x3 = 1 − x1 − x2 and x3 = x3(x7 + x8 + x9)
- x4 = f: fiber ratio in fiber component; 0 ≤ x4 ≤ 1 and x4 + x5 + x6 = 1
- x5 = frec: recycled fiber ratio in fiber component; 0 ≤ x5 ≤ 1
- x6 = reinf: other reinforcement ratio in fiber component; 0 ≤ x6 ≤ 1
- x7 = m: thermoplastic ratio in matrix component; 0 ≤ x7 ≤ 1 and x7 + x8 + x9 = 1
- x8 = bio: biopolymer ratio in matrix component; 0 ≤ x8 ≤ 1
- x9 = trec: recycled thermoplastic ratio in matrix component; 0 ≤ x9 ≤ 1
- x10 = gran: fiber size distribution factor; discrete variable, x10 ∈ {1, 2, 3}
- x11 = k_t: fiber treatment factor; discrete variable, x11 ∈ {0, 1, 2, 3}
- x12: viscoelastic properties of constituents E, ν
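Equations (13) and (14) are not reproduced in this extract. As an illustration of a simple mixture law of the kind described, one could compute an effective property from the volume fractions of Table 10; the weighting of the fiber term by k_GRAN and the exact functional form here are assumptions for illustration, not the chapter's equations:

```python
def mixture_modulus(lam_f, lam_m, E_f, E_m, k_gran=1.0):
    """Hypothetical simple mixture law: effective modulus as a volume-fraction
    weighted average, with the fiber contribution scaled by an orientation
    factor k_gran (0.3 random, 0.69 partially oriented, 1.0 unidirectional)."""
    return k_gran * lam_f * E_f + lam_m * E_m

# Example: 60 % fiber (E_f = 10 GPa), 40 % matrix (E_m = 1 GPa), random orientation.
E_eff = mixture_modulus(0.6, 0.4, 10.0, 1.0, k_gran=0.3)
```

The same weighted-average pattern would apply to the apparent viscosity ν; only the constituent properties change.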

Water swelling function (SW)
The swelling function due to water absorption, SW, is defined by equation (15). It expresses the fact that the swelling of the composite is the sum of the swelling deformations of all hygroscopic components present in the composite and accessible to water, e.g. wood, biopolymers… The part representing the swelling of the fibres vanishes when the fibres are not accessible to water (below a given percolation threshold λ0). In addition, the swelling capacity of wood fibres can be changed by thermal or chemical wood modification, which is expressed in equation (15) by the discrete variable k_t that can take three different values (low, medium or high effect). The SW function is also an algorithm: there are conditions on the discrete variables (k_t, m and ω) and on the threshold variable λ0.
where λ0 is the percolation threshold; k_fr is the user-defined coefficient for the influence of recycled fiber on swelling; k_t is the user-defined coefficient for the influence of treatment on swelling; and m, ω, SW_f and SW_m are swelling function parameters.
See table 10 for the meaning of other variables.
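Equation (15) itself is not reproduced in this extract, but its algorithmic character (the condition on the threshold λ0 and the discrete coefficient k_t) can be sketched as follows; the functional form below is an assumption for illustration only:

```python
def swelling(lam_f, lam_m, SW_f, SW_m, lam_0=0.3, k_t=1.0, k_fr=1.0, frec=0.0):
    """Hypothetical sketch of the SW algorithm: the composite swelling is the
    sum of the swelling of its hygroscopic components; the fiber term vanishes
    below the percolation threshold lam_0 and is scaled by the treatment
    coefficient k_t and the recycled-fiber coefficient k_fr."""
    fiber_term = 0.0
    if lam_f > lam_0:  # fibers are accessible to water only above the threshold
        fiber_term = k_t * (1.0 + k_fr * frec) * lam_f * SW_f
    return fiber_term + lam_m * SW_m
```

The branch on lam_0 is what makes SW an algorithm rather than a single closed-form expression: below the percolation threshold only the matrix contributes.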

Exhaustion of fossil resources function (efr)
The exhaustion of fossil resources function, efr, is defined as the sum of two terms (equation 16): one for the fibres used and one for the non-renewable part of the polymer if the polymer is a blend.
where the coefficient a1 represents the impact of fiber processing and treatment on the exhaustion of fossil resources, and the coefficient a2 reflects the impact of non-renewable thermoplastic and additives production and processing. Other factors have an impact on efr, such as the consumption of non-renewable energy during composite assembly, the production of additives… For simplification they have not been considered. Normally a2 is expected to be higher than a1. The balance between the two coefficients influences the environmental optimization.
See table 10 for the meaning of other variables.
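Equation (16) is not reproduced in this extract; a minimal sketch of a two-term efr function of the kind described, where the exact form and the default values of a1 and a2 are assumptions, could read:

```python
def efr(lam_f, lam_m, bio, a1=0.2, a2=1.0):
    """Hypothetical sketch of the efr function as the sum of two terms:
    a1 weights the fiber processing/treatment impact, a2 weights the
    non-renewable part of the polymer blend (fraction 1 - bio of the matrix)."""
    return a1 * lam_f + a2 * lam_m * (1.0 - bio)
```

Consistent with the text, a2 > a1 here, so increasing the biopolymer fraction of the matrix reduces efr faster than reducing the fiber fraction does.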

Application of the MOPSO algorithm
In the design of the wood-plastic composite (WPC), the creep and swelling functions are conflicting: the swelling of the composite grows when the creep decreases with the rate of fibers (wood). The MOPSO deals with such conflicting objectives, even if the representation of each objective is an algorithm and thus involves a high number of functions. In our WPC preliminary design we have three objective functions, two of them each represented by an algorithm utilizing several variables.

Dealing with continuous and discrete variables
Equations (5) and (8) are used as the position updating rules of the real and discrete variables respectively. Equation (4) is used as the velocity updating rule for all variables. During the optimization process, the real variables converge to their optima according to the objective functions, whereas each discrete variable randomly traverses its space of definition and consequently its best solution is identified. Due to the discrete variables, the solution space of the multi-objective optimization problem is discontinuous (fig. 15).
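Equations (4), (5) and (8) are not reproduced in this extract. Assuming the standard PSO velocity update for equation (4), the two position-updating rules can be sketched as follows; the discrete rule, drawing a random value from the definition set, is an assumption based on the description above:

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.63, c1=1.45, c2=1.45):
    """Standard PSO velocity update (assumed form of equation 4): inertia term
    plus cognitive (personal best) and social (global best) attraction terms."""
    return (w * v
            + c1 * random.random() * (pbest - x)
            + c2 * random.random() * (gbest - x))

def update_real(x, v):
    """Position update for a continuous variable (role of equation 5)."""
    return x + v

def update_discrete(domain):
    """A discrete variable randomly traverses its space of definition
    (role of equation 8): draw a value from the finite domain."""
    return random.choice(domain)
```

In this sketch the continuous variables drift toward good regions of the search space, while each discrete variable keeps sampling its whole domain, so its best value is found by the dominance bookkeeping rather than by convergence.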

Multi-objective optimization
In this work we have applied the MOPSO method described in [Alvarez-Benitez et al., 2005].
In this method, only the fully connected topology is used to calculate the position of each particle for each objective function, and then the Pareto dominance test is applied to each particle with regard to the particle positions stored in the extended memory. If the position of a particle dominates some positions in the extended memory, the position of the particle is stored in the extended memory and the dominated ones are discarded from it. We used, as the end condition of the optimization process, a given maximum number of iterations. Of course the swarm is randomly initialized and the number of its particles is given. The Pareto front is constituted by the particle positions held in the extended memory at the end of the optimization process.
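The extended-memory update described above can be sketched as follows; this is a generic illustration of the archive mechanism, not the authors' code:

```python
def dominates(a, b):
    """a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert the candidate position into the extended memory if it is not
    dominated; discard every archived position the candidate dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated: archive unchanged
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

Applying this update to every particle at every iteration leaves exactly the non-dominated positions in the extended memory, which form the Pareto front when the iteration limit is reached.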
The efficiency of the optimization is strongly influenced by the constant parameters ω, c1 and c2 in equation (4). These parameters have to be experimentally adapted to each optimization problem. For our problem the parameters ω, c1 and c2 have been set to 0.63, 1.45 and 1.45 respectively (fig. 16).

Stability of the Pareto front
The Pareto front is stable with regard to the swarm size and the number of generations of particles (the number of iterations used as the end condition of the optimization process) [Ndiaye et al., 2009]. For a given swarm size, the number of particles in the Pareto front increases with the number of generations of particles according to an affine law, but the shape of the front remains the same (fig. 17a); and for a given number of generations of particles, the number of particles in the Pareto front increases with the swarm size (fig. 17b). The size of the Pareto front can be rather large and therefore the swarm size and the number of iterations should be fitted in order to obtain a reasonable front size.

Fig. 4. Half truss structure with its loads; EI beam and EI column are the unknown parameters.

Fig. 6. FE model of the physical model with the main beam bearing on SB and the wood plate.

Fig. 10. Illustration of the noise on field data and the dispersion of the identified parameters k_i, Structure 1, CM 1.

Fig. 15. Solution space of the multi-objective (def, SW and efr) optimization problem determined from a MOPSO of 1000 generations of 30 particles.

Fig. 16. The Pareto front of the multi-objective (def, SW and efr) optimization problem determined from a MOPSO of 1000 generations of 30 particles.

Table 1. Continuous beams bearing on three elastic supports with a distributed load; k_i and E_i are the unknown parameters. Positions of the sensors used during the identification process according to the metrology set.
The table compares, for the three support sets, the identified values with ''reference values'' obtained with the 3D FEM (Av. means average and s.d. means standard deviation). The identification process (PSO combined with the mechanical model) was repeated ten times for each case (support set x

Table 2. Abscissae of the sensors used during the real experiment.
Fig. 8. Location of the sensors used in both the numerical and physical models.

Table 3. Identified equivalent stiffnesses using sensors from set C.

Table 4. Identified equivalent stiffnesses using sensors from set A.

Table 5. Identified equivalent stiffnesses using sensors from set B.
The stiffnesses were correctly identified, but the identified parameters show slight variations for each of the 20 simulations. The results are presented in Table 6.

Table 8. Statistical analysis of the identified parameters using noised field data, Structure 3. Only displacements perpendicular to the main axis of the element are calculated.