Methodology for Optimization of Polymer Blends Composition

Alessandra Martins Coelho1, Vania Vieira Estrela2, Joaquim Teixeira de Assis3 and Gil de Carvalho3 1Instituto Federal de Educacao, Ciencia e Tecnologia do Sudeste de Minas Gerais (IF SUDESTE MG), Rio Pomba, MG, 2Departamento de Telecomunicacoes, Universidade Federal Fluminense (UFF), Niteroi, RJ, 3Instituto Politecnico (IPRJ), Universidade Estadual do Rio de Janeiro (UERJ), Nova Friburgo, RJ, Brazil


Introduction
The research of polymer blends, or alloys, has experienced enormous growth in size and sophistication in terms of its scientific base, technology and commercial development (Paul & Bucknall, 2000). As a consequence, two very important issues arise: the increased availability of new materials and the need for materials with better performance.
Polymer blends are polymer systems originating from the physical mixture of two or more polymers and/or copolymers, without a significant degree of chemical reaction between them. To be considered a blend, the compound should have a concentration above 2% in mass of the second component (Hage & Pessan, 2001; Ihm & White, 1996). However, the commercial viability of new polymers has become increasingly difficult to achieve, due to several factors.
The advantages of polymer blends lie in the ability to combine existing polymers into new compositions, obtaining in this way materials with specific properties. This strategy allows for savings in the research and development of new materials with equivalent properties, as well as versatility, simplicity, relatively low cost (Koning et al., 1998) and faster development of new materials (Silva, 2011).

Rossini (2005) mentions that, economically and environmentally, a very viable alternative is to replace the recycling of pure polymers by mixtures of discarded materials. Mechanical recycling causes the breakdown of polymer chains, which impairs the properties of the polymers, and this degradation is directly proportional to the number of recycling cycles. Therefore, the blend of two or more discarded polymers can be a realistic alternative, since it can result in materials with very interesting properties at a low cost. Besides its inexpensiveness, this choice is also a smart solution for the reutilization of garbage. Post-consumption package disposal always occurs in a disorderly manner and without regard for the environment, so the recycling process becomes increasingly important and necessary to remediate environmental impact.

Pawlak et al. (2002) pointed out that the elongation at break and impact strength of recycled HDPE/PET blends increased with the addition of EGMA or maleic anhydride grafted styrene-ethylene butylene-styrene (SEBS-g-MA). The best results were obtained for PET/HDPE/EGMA at 75%/25%/4 pph and PET/HDPE/SEBS-g-MA at 75%/25%/10 pph. The mechanical properties of the blends were related to the phase dispersion, and an increase in the viscosities of the compatibilized blends was observed due to the reaction during blending. Carvalho et al. (2003) considered blend composition complexity as a function of the ideal percentage of each of its components in their computational study on the optimization of polymeric blends.
With the objective of analyzing the mechanical behavior of the blend in relation to PET and PP, the same test speed was adopted for the three tested materials. The results are presented in Table 1.

Introduction to design of experiments
One of the most common and challenging problems in experimentation concerns determining the influence that one or more variables have on the variable of interest. Designed experiments address these problems and also have extensive application in the development of new processes and the design of new products. Some of their applications are:
- Characterization of a process (screening experiment): aims to determine which factors affect the response;
- Optimization of an experiment: aims to determine the important factors in the region leading to an optimal response; and
- Product planning: tries to determine the factors that most influence the verification effort.
A DOE is the pre-requisite for a successful experimental study (Tang et al., 2010). Assuming that the goal of experimentation is to find a function, or at least a satisfactory approximation of it, which acts on k factors producing observed responses (as outlined in Figure 1), the system acts like an initially unknown transfer (or modifying) function, which operates on the factors, producing as output the observed responses. Thus, a better understanding of the nature of the reaction under study is needed in order to choose the best system operating conditions (Silva, 2011).

Factorials design
In a designed experiment, the data-producing process is actively manipulated to improve the data quality and to eliminate redundancy. A common goal of all experimental designs is to collect data as parsimoniously as possible while providing sufficient information to accurately estimate model parameters. By factorial experiment we mean that in each replication of the experiment, all possible combinations of levels are investigated.
Multilevel designs are used to systematically vary experimental factors, assigning each factor a discrete set of levels. Full factorial designs (FD) measure response variables using every treatment (combination of the factor levels).
Plackett-Burman designs are used when only main effects are considered significant. They require a number of experimental runs that are a multiple of 4 rather than a power of 2.
Binary factor levels are indicated by ±1. The design is for eight runs manipulating seven two-level factors. The number of runs is a fraction 8/2^7 = 0.0625 of the runs required by a full factorial design. Economy is achieved at the expense of confounding main effects with two-way interactions.
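The eight-run construction described above can be sketched in Python. This is a minimal illustration, not code from this chapter: it uses the standard cyclic first row for the N = 8 Plackett-Burman design (rows 2 to 7 are cyclic shifts of the generator row; the last row is all low levels).

```python
import numpy as np

def plackett_burman_8():
    """Build the 8-run Plackett-Burman design for 7 two-level factors.

    Rows 1..7 are cyclic shifts of the standard generator row;
    the last row sets every factor to its low level (-1).
    """
    generator = np.array([1, 1, 1, -1, 1, -1, -1])
    rows = [np.roll(generator, i) for i in range(7)]
    rows.append(-np.ones(7, dtype=int))
    return np.array(rows, dtype=int)

D = plackett_burman_8()
# Each column is balanced (four +1 and four -1) and all columns are
# mutually orthogonal, so the seven main effects are cleanly estimable
# from only 8 of the 2^7 = 128 full factorial runs.
```

The orthogonality of the columns (D.T @ D = 8·I) is exactly what allows main effects to be estimated independently despite the drastic reduction in runs.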

Two-level designs
Two-level designs are often used in experiments involving several factors, in which it is necessary to study the combined effect of the factors on a response. Several special cases of the general factorial design are important because they are widely used in research and form the basis for other designs of considerable practical value. The most important of these special cases is that of k factors, each at only two levels.
When planning an experiment, one should first determine the factors and the answers adequate to the system under study. The factors, that is, the variables controlled by the experimenter, can be both quantitative (such as values of temperature, pressure or time) and qualitative (such as two machines, two operators, levels "up" and "down" of a factor, or perhaps the presence or absence of a factor). Depending on the problem, there may be more than a response of interest and, eventually, these responses can also be qualitative.
After determining the factors to be observed, it is necessary to implement the factorial design, i.e., to choose the values of the factors that will be used in the experiment. All possible combinations of factors are investigated. Among the many advantages of factorial design, the following (Button, 2005) can be named:
a. The number of trials can be reduced without jeopardizing the quality of information;
b. It permits the simultaneous study of several variables while separating their effects;
c. It assesses the reliability of the results;
d. It allows stepwise research realization, in which new tests are added iteratively; and
e. It selects the variables that influence a process with a minimum number of tests.

Response
In factorial design, the factors and levels are pre-determined, corresponding to a fixed-effects model. This type of planning is normally used in the early stages of research. Since there are only two levels for each factor, the analysis assumes that the response variable behaves linearly between these levels (Button, 2005). An effect is defined as the change in response when moving from the low level (-) to the high level (+), and effects can be classified into two categories: main effects (the effect of changing the level of a single factor) and interaction effects (the effect of changing the levels of two or more factors at the same time).

2^2 factorial design
Geometrically, the 2^2 design can be represented by a square where each vertex corresponds to an experiment. Figure 2 shows, geometrically, the 2^2 factorial design and its planning matrix. The letters A and B represent the factors. The levels are represented by - and +, which correspond to the low and high levels of the factors. The combination with both factors at the low level is represented by the number 1. The effects of interest in the 2^2 factorial design are the main effects A (represented by the number 2) and B (represented by the number 3). The interaction effect AB, also called a contrast (represented by the number 4), is generated from the product of the signs of the columns of the main effects A and B. The main effect of A is, by definition, the average of the effects of A at the two levels of B. The same holds for the main effect of B, as seen in (1).
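As a worked sketch of these definitions (using made-up response averages, not data from this chapter), the main effects A and B and the interaction AB of a 2^2 design can be computed directly from the four run averages:

```python
# Runs of a 2^2 design in standard order: (1), a, b, ab
# Levels:  A: -, +, -, +    B: -, -, +, +
y1, ya, yb, yab = 20.0, 30.0, 25.0, 45.0   # hypothetical response averages

# Main effect of A: the average of the effects of A at the two levels of B
effect_A = ((ya - y1) + (yab - yb)) / 2
# Main effect of B: the average of the effects of B at the two levels of A
effect_B = ((yb - y1) + (yab - ya)) / 2
# Interaction AB: half the difference between the effect of A at the
# high level of B and the effect of A at the low level of B
effect_AB = ((yab - yb) - (ya - y1)) / 2
```

With these illustrative numbers, effect_A = 15, effect_B = 10 and effect_AB = 5, i.e., the effect of A depends on the level of B, which is exactly what a non-zero interaction contrast signals.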

2^3 factorial design
2^3 factorial designs have three factors at two levels each, requiring eight experimental trials (in each of these trials the system is subjected to a defined set of levels). Based on the factors to be studied and their levels, it is possible to build a planning matrix as shown in Table 2. The first column of effects (factor A) is filled by alternating the levels one by one (- + - + ...), the second column (factor B) by alternating the levels two by two (- - + + ...) and, finally, in the third column (factor C) the first four experiments are filled with the low level and the last four with the high level (- - - - + + + +). The combination with all factors at the low level (-) is again represented by the number 1.
Based on the planning matrix (Table 2) it is possible to generate the table of contrast coefficients. This matrix is composed of the three main effects (A, B and C) and the four interaction effects (AB, AC, BC and ABC). Table 3 shows the signs of the effects for the 2^3 factorial design. In conformity with Neto et al. (2003), the effects in the 2^3 factorial design can also be interpreted as geometric contrasts, whose representation is a cube in which the eight trials of the planning matrix correspond to its vertices. The main effects and two-factor interactions are contrasts between two planes, which can be identified by examining the contrast coefficients. In general, a main effect in the 2^3 design is a contrast between opposite faces perpendicular to the axis of the corresponding variable. The interactions between two factors, in turn, are contrasts between two diagonal planes; these planes are perpendicular to a third plane, defined by the axes of the two variables involved in the interaction.
Table 3. Signs of effects for the 2^3 factorial design.

Run   I   A   B   C   AB  AC  BC  ABC
1     +   -   -   -   +   +   +   -
2     +   +   -   -   -   -   +   +
3     +   -   +   -   -   +   -   +
4     +   +   +   -   +   -   -   -
5     +   -   -   +   +   -   -   +
6     +   +   -   +   -   +   -   -
7     +   -   +   +   -   -   +   -
8     +   +   +   +   +   +   +   +
If k is the number of factors, then a general form for the effects can be given by

Effect = Contrast / 2^(k-1),

i.e., each effect is the signed sum of the 2^k (average) responses, with the signs taken from the corresponding column of the contrast table, divided by half the number of runs.
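That general form can be sketched as a small helper. The sign columns below follow the standard filling order of the planning matrix described above; the response values are hypothetical, generated from known coefficients so that the recovered effects can be checked.

```python
import numpy as np

# 2^3 design: column A alternates one by one, B two by two, C four by four
A = np.tile([-1, 1], 4)
B = np.tile([-1, -1, 1, 1], 2)
C = np.repeat([-1, 1], 4)

def effect(sign_column, y):
    """Effect = contrast / 2^(k-1), for a design with 2^k runs."""
    k = int(np.log2(len(y)))
    return sign_column @ y / 2 ** (k - 1)

# Hypothetical averaged responses from y = 10 + 3A + 2B - 1C + 0.5AB,
# so each true effect equals twice its regression coefficient.
y = 10 + 3 * A + 2 * B - 1 * C + 0.5 * A * B

effects = {"A": effect(A, y), "B": effect(B, y),
           "C": effect(C, y), "AB": effect(A * B, y)}
```

Interaction columns are obtained simply by elementwise products of the main-effect columns, mirroring how the contrast table itself is built.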

Fractional designs
For experiments with many factors, a two-level full FD can lead to large amounts of data. For example, a two-level full factorial design with 11 factors requires 2^11 = 2048 runs. Often, however, individual factors or their interactions have no distinguishable effects on a response; this is especially true of higher-order interactions. As a result, a well-designed experiment can use fewer runs to estimate the model parameters.
Fractional FD use a fraction of the runs required by full FD. A subset of experimental treatments is selected based on an evaluation (or assumption) of which factors and interactions have the most significant effects. Once this selection is made, the experimental design must separate these effects. In particular, significant effects should not be confounded, that is, the measurement of one should not depend on the measurement of another. The challenge is to choose basic factors and generators so that the design achieves a specified resolution in a specified number of runs. The confounding pattern shows that main effects are effectively separated by the design, but two-way interactions are confounded with various other two-way interactions.
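The confounding mechanism can be made concrete with a tiny sketch (hypothetical, for illustration only): in a 2^(3-1) half fraction built with the generator C = AB, the contrast column of the main effect C is identical to that of the AB interaction, so the two effects cannot be separated.

```python
import numpy as np

# Base 2^2 full factorial in A and B (4 runs)
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])

# Half fraction of a 2^3 design: factor C is assigned by the
# generator C = AB, halving the run count from 8 to 4.
C = A * B

# The contrast columns for C and for AB coincide, so any response
# contrast estimates the sum of the two effects, not each separately.
alias = bool((C == A * B).all())
```

This is the trade-off stated above: the economy of the fraction is paid for by deliberately chosen aliases, and the design's resolution describes which effect orders are left confounded with each other.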

Response Surface Methodology (RSM)
RSM is defined as a collection of mathematical and statistical techniques useful for the modeling and analysis of problems in which a response of interest is influenced by several process variables (termed factors), with the objective of optimizing this response (Montgomery, 2005; Box & Draper, 1987; Myers & Montgomery, 1995 apud Tang et al., 2010). Box & Draper (1987) define RSM as a collection of statistical techniques useful in research, whose purpose is to determine the best conditions and give greater insight into the nature of certain phenomena. It comprises the following three main components (Tang et al., 2010):
a. Experimental design, to determine the process factor values at which the experiments are conducted and data are collected;
b. Empirical modeling, to approximate the relationship (i.e., the response surface) between responses and factors; and
c. Optimization, to find the best response value based on the empirical model.
It can be assumed that the system under study is governed by a function of the experimental variables. Normally this function can be approximated by a polynomial, which provides a good description of the relationship between factors and response. The order of the polynomial is limited by the type of planning used. Two-level FD, fractional or complete, can only estimate main effects and interactions; a factorial design with three levels (central point) can, moreover, estimate the degree of curvature in the response. In general, the relationship is

y = f(x_1, x_2, ..., x_k) + ε,

where the true response function f is unknown and sometimes very complicated, and ε represents disturbances in f, such as measurement error on the response, background noise, the effect of other variables, and so on. In any planned experiment, there is a strong relationship between the analysis of a designed experiment and a regression analysis, which can be used for predictions in a 2^k experiment.
Because f is unknown, we must approximate it. In fact, successful use of RSM is critically dependent upon the experimenter's ability to develop a suitable approximation for f. Usually, a low-order polynomial is sought.
The first-order model is likely to be appropriate when the experimenter is interested in approximating the true response surface over a relatively small region of the independent variable space in a location where there is little curvature in f.
To describe these models in a screening study, simple polynomials are used, i.e., those containing only linear terms. A simple model for a response y in an experiment with two controlled factors x_1 and x_2 is

y = β_0 + β_1·x_1 + β_2·x_2 + β_12·x_1·x_2 + ε,

where x_1 and x_2 are main effects; x_1·x_2 is a two-way interaction effect; β_0 is the average value of all responses; ε includes both experimental error and the effects of any uncontrolled factors in the experiment; and β_1, β_2 and β_12 are, respectively, the coefficients of the main variables x_1 and x_2 and the coefficient of the interaction between x_1 and x_2. Thus, x_1 and x_2 should be manipulated while measuring y, with the objective of accurately estimating β_0, β_1 and β_2. Equations (7) and (8) can be combined, and the resulting model is given by

ŷ = X·β,

where ŷ is the vector of responses estimated by the model, X is the contrast coefficient matrix, and β is the vector of model (regression) coefficients. In RSM designs, there should be at least three levels for each factor; in this way, responses at factor values that are not actually tested can be estimated from fewer experimental combinations (Neseli et al., 2011). The effect of a factor is defined as the variation in the response produced by the change in the factor level.
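A minimal sketch of fitting such a first-order model with an interaction term by least squares (the coefficients and noise-free responses below are hypothetical, chosen only so the fit can be verified):

```python
import numpy as np

# Coded factor settings of a 2^2 design
x1 = np.array([-1.0, 1.0, -1.0, 1.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0])

# Contrast coefficient matrix X: intercept, main effects, interaction
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])

# Hypothetical noise-free responses from y = 5 + 2*x1 - 3*x2 + 0.5*x1*x2
y = 5 + 2 * x1 - 3 * x2 + 0.5 * x1 * x2

# Least-squares estimate of beta in y_hat = X @ beta
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the contrast columns of a two-level factorial are orthogonal, the least-squares solution here recovers the generating coefficients exactly; with real, noisy data the same call returns the best-fitting approximation.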

Development and discussion
Factorial DOE has been used to measure the influence of the following input variables on the response variables: amount of polypropylene, additive type and amount of additive. The relevant mechanical properties for PET/PP polymeric blends are the modulus of elasticity (ME), elongation at rupture and tensile strength (TS) at rupture. The experiments were accomplished with a 2^3 factorial design. Their specifications are presented in Table 4 (specifications supplied by the manufacturers of PET and PP; Carvalho et al., 2003).
The factors are analyzed at two levels (high and low) according to the data presented in Table 5.

Main Effects  Factor                    Level (-)            Level (+)
A             Amount of polypropylene   5%                   25%
B             Additive type             C2 (acrylic acid)    C1 (maleic anhydride)
C             Amount of additive        1%                   5%
The mechanical properties ME, elongation at rupture and TS were evaluated in ten executions for each test.
The tables of contrast coefficients for ME (Table 6), for the study of strain at break (Table 7) and for TS (Table 8) were obtained from Tables 3 and 4. All tables comprise the three main effects A (amount of polypropylene), B (additive type) and C (amount of additive), and the four interaction effects AB, AC, BC and ABC. The last column of each table contains the values of Y_n (n = 1, 2 and 3 for ME, elongation at rupture and TS at rupture, respectively), which correspond to the averages of the experimental results found for each test over the 10 executions.


Calculation of effects and results interpretation
The 8 x 8 factorial design matrix X contains the columns I, A, B, C, AB, AC, BC and ABC. Returning to the contrast tables (Tables 6, 7 and 8), it can be seen that all columns except the first have four positive and four negative signs. To find the global average, the first element of each of the vectors X^T·Y_1, X^T·Y_2 and X^T·Y_3 is divided by 8. The remaining elements of the vectors correspond to the effects and are divided by 4, resulting in (13).
The Gauss elimination method, a direct method for solving linear systems, can be used to solve the resulting system. In this case, the columns of the matrix X in (10) corresponding to the effects are divided by 2, as shown in (14), so that each regression coefficient equals half of the corresponding effect. The vectors y_1, y_2 and y_3 are the independent terms of the linear system. The results are the same as described in (13).
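Both routes just described, dividing the elements of X^T·Y and solving the linear system directly, can be sketched and checked against each other. The response averages below are hypothetical stand-ins; the actual values live in Tables 6 to 8.

```python
import numpy as np

# Contrast columns of the 2^3 design in standard order
A = np.tile([-1, 1], 4)
B = np.tile([-1, -1, 1, 1], 2)
C = np.repeat([-1, 1], 4)
I = np.ones(8, dtype=int)

# 8 x 8 factorial design matrix: I, A, B, C, AB, AC, BC, ABC
X = np.column_stack([I, A, B, C, A * B, A * C, B * C, A * B * C])

y = np.array([45., 55., 50., 70., 40., 60., 48., 75.])  # hypothetical averages

# Route 1: first element of X^T y divided by 8 (global average),
# remaining elements divided by 4 (effects).
v = X.T @ y
average = v[0] / 8
effects = v[1:] / 4

# Route 2: solve the linear system X b = y; the regression
# coefficients b[1:] equal half of the corresponding effects.
b = np.linalg.solve(X, y)
```

Since the columns of X are orthogonal (X^T·X = 8·I), both routes must agree: b[0] is the grand average and 2·b[1:] reproduces the effect vector.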
The three tables below show the data contained in the vectors (13), enabling analysis of the influence of each factor individually, and of the interactions among factors, on the ME, strain at break and tensile strength (TS). Table 9 shows that the three main effects, amount of polypropylene, additive type and amount of additive, reduce the ME; the amount of polypropylene is the factor that contributes most to the reduction of elasticity. The model obtained for the ME is presented in (15). Figure 3 represents the response surface for the ME as a function of B and C. The interaction between additive type and amount of additive increases the ME; hence, this interaction can improve the interaction between molecules and the compatibility of the mixture. In Table 10, the main effect C, amount of additive, increases the strain at rupture, as do the two-factor interactions AC and BC. The obtained model appears in (16) and (17).

Average
Figure 4 represents the graph of the response surface of elongation at rupture as a function of the factors A and C. Note that the additive type and amount of additive increase the elongation at rupture, a fact already observed in Table 10. Figure 4 shows that this factor has a significant effect on elongation at rupture, and it is evident in Figures 4 and 5 that the amount of additive is more significant than the additive types analyzed. In Table 11, the main effect C has no significant value for TS, while the main effects A and B show a reduction. The interaction BC shows an increase in TS, while the interaction among the three factors (ABC) reduces TS. The model obtained for TS is presented in (17).

Main effects:
  A (amount of polypropylene)   -8
  B (additive type)             -3
  C (amount of additive)         0
Interactions between two factors:
  AB  -0.5
  AC  -0.5
  BC   2.5
Interaction between three factors:
  ABC -2
Table 11. Effects calculated for tensile strength.
Figure 6 represents the graph of the response surface for TS as a function of the factors B and C. Note that the additive type and amount of additive increase the TS.

Geometrical interpretation of effects
The eight trials of each of the three planning matrices correspond to the vertices of the cube, and the effects can be identified by examining the contrast coefficients. Figure 5 reveals that the trials are all negative on one face of the cube, which is perpendicular to the axis of factor 1 (amount of polypropylene) and located at the lower level of this factor; the other trials are on the opposite face, which corresponds to the upper level. The effect of factor 1 can therefore be considered as the contrast between these two faces of the cube. The effects 2 and 3 are likewise contrasts between opposite faces perpendicular to the axis of the corresponding variable. The interactions between two factors appear as contrasts between two diagonal planes; these planes are perpendicular to a third plane, defined by the axes of the two variables involved in the interaction. Figure 7 presents the geometric interpretation of the effects. For instance, vertex 1 has the following coordinates: 5% polypropylene and 1% additive, the additive being acrylic acid.

Model-based DOE (PCA-based DOE)
Nowadays, the design, monitoring and optimization of applications by means of mathematical models are very advantageous in process control. Nevertheless, a trustworthy model that complies with operation constraints is, as a rule, difficult to develop. According to Asprey & Macchietto (2000), a wide-ranging modeling method comprises:
- An initial analysis and structural modeling of the system based on process knowledge;
- Designing optimal experiments according to the planned model;
- Performing the experiments; and
- Using the experimental information to estimate model parameters and accomplish model validation by probing the estimated parameters against the available data.
This chapter deals with experiments designed for a specific algebraic equation (AE) system, called model-based DOE (MBDOE), whereas factorial analysis based on DOE uses empirical models. Numerical models are often nonlinear algebraic equations (NAE), dynamic algebraic equations (DAE) or partial differential equations (PDE). MBDOE is done before any real experiment in order to support model structure selection, model parameter estimation, and so forth. Pragmatically speaking, MBDOE sets up a DOE objective function. From an algorithmic point of view, DOE has long been combined with AE systems and was applied to DAE systems by Zullo (1991) and Asprey & Macchietto (2002). Several optimal design criteria (ODC) have been suggested and examined in different case studies; Walter & Pronzato (1990) gave a detailed discussion of the available ODC and their geometric interpretations. Lately, Atkinson (2003) used DOE for non-constant measurement variance cases and Galvanin et al. (2007) extended the DOE territory to parallel experiment designs.
This work focuses on a global DOE methodology relying on PCA, so that a large system can be separated into small pieces and a sequence of experiments can be designed while avoiding numerical problems. Moreover, the problem can be transformed into familiar ODC under certain assumptions, and a subset of the model parameters can be chosen to boost estimation precision without changing the form of the objective function.

Parameter estimation
Parameter estimation can be generalized into the following optimization problem:

min_θ Σ_{i=1..n} [y_m,i - y_i(θ)]^T V_m^{-1} [y_m,i - y_i(θ)],    (19)

subject to

H·dx/dt = f(x, u, θ, t),   y = x,   x(t_0) = x_0,

where n is the number of experiments and q is the number of equations; y stands for the measured variables and the subscript m indicates a measurement; x contains the state variables of the DAE system (for simplicity, the variables x are assumed to be measurable, thus y = x); f represents the DAE equations and H is used to discriminate algebraic and dynamic equations (the rows corresponding to AEs are zero); θ stands for the model parameters and u contains the controlled variables. Assume the control profile u is known over a predefined time interval. In parameter estimation, the only unknown in integrating f is θ, and the bounds on θ are normally defined according to the nature of the process to be modeled. The measurement noise is assumed to follow a multivariate normal distribution N(0, V_m); otherwise, Eq. (19) needs to be rebuilt from maximum likelihood estimation according to the specific noise distribution function. In most cases, normally distributed noise is a safe assumption. Eq. (19) is similar to the classical optimal control problem, in which the objective function usually is min_u Z(x(t_f)).
This dynamic system optimization problem can be solved by sequential and simultaneous methods.
In sequential approaches, only the unknown variables (e.g., θ for parameter estimation, u for optimal control) are discretized and manipulated directly by the non-linear programming (NLP) solver. After the unknown variables are updated, the DAE is integrated given the initial condition x_0 and the integration interval [t_0, t_f].
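A toy sketch of the sequential approach (the model, parameter values and sampling times are all hypothetical): only the parameter is handled by the optimizer, here a brute-force grid in place of an NLP solver, and the dynamic model is re-integrated for every candidate value before evaluating the least-squares objective.

```python
import numpy as np

def integrate(theta, t_grid, x0=1.0, h=1e-3):
    """Explicit-Euler integration of the toy model dx/dt = -theta * x,
    returning the state at the requested sampling times."""
    out, x, t = [], x0, 0.0
    for t_sample in t_grid:
        while t < t_sample - 1e-12:
            x += h * (-theta * x)
            t += h
        out.append(x)
    return np.array(out)

t_m = np.array([0.5, 1.0, 1.5])
y_m = np.exp(-0.8 * t_m)   # noise-free "measurements"; true theta = 0.8

# Sequential scheme: for each candidate theta, integrate the DAE,
# then evaluate the sum-of-squares objective of Eq. (19).
grid = np.arange(0.5, 1.1001, 0.001)
sse = [np.sum((y_m - integrate(th, t_m)) ** 2) for th in grid]
theta_hat = float(grid[int(np.argmin(sse))])
```

The repeated integration inside the objective is exactly the cost that the simultaneous approach below avoids by discretizing the states as well.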
For simultaneous methods, the entries of x are discretized along t and approximated by polynomials between neighboring discretization grid points. Thus, the integration step is avoided, and both state and unknown variables are manipulated directly by the NLP solver under certain constraints. A review of these methods can be found in Espie & Macchietto (1989). After the NLP solver converges, the corresponding θ is our best estimate θ̂ based on the measurements at hand. To evaluate the accuracy of the estimation, the posterior covariance matrix (parameter covariance matrix) is defined by

V(θ̂, φ) = M^{-1}(θ̂, φ) = [Σ_{r=1..q} Σ_{s=1..q} v_m,rs · J_r^T J_s]^{-1},    (20)

where φ is the design vector, which typically contains the sampling times, initial state conditions, control variables, etc., and v_m,rs is the rs-th element of V_m^{-1}, which can be estimated from the measurement residuals. For AEs, the sensitivity matrix is J = ∂y/∂θ, evaluated at the n experimental points (sampling times). For DAEs, V can be treated as a sequential experimental design result according to Zullo (1991). In MBDOE, the information matrix M helps design a series of experiments based on the model structure; by carrying out these experiments, the model parameters can be estimated with the best accuracy. The unknown design vector φ contains measurement times, initial conditions, control variables, and so on. Minimizing V corresponds to maximizing the magnitude of M with respect to φ. For a single-parameter model, J is n x 1 and V is a scalar. Parameter estimation and DOE rely on the maximization of M(θ, φ) with respect to θ and φ, respectively. The design that achieves the required discrimination with the smallest number of experiments is sought; this corresponds to the objective function suggested by Espie & Macchietto (1989).
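The covariance computation of Eq. (20) can be sketched for a toy single-output model y = θ_1·(1 - exp(-θ_2·t)), chosen here purely for illustration, with unit measurement variance: the sensitivity matrix J is evaluated at the sampling times and V = M^{-1} = (J^T·J)^{-1}.

```python
import numpy as np

theta1, theta2 = 2.0, 0.5                 # nominal parameter values
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # hypothetical sampling times (phi)

# Sensitivity matrix J = dy/dtheta, one row per sampling time:
# dy/dtheta1 = 1 - exp(-theta2 t),  dy/dtheta2 = theta1 * t * exp(-theta2 t)
J = np.column_stack([1 - np.exp(-theta2 * t),
                     theta1 * t * np.exp(-theta2 * t)])

M = J.T @ J            # information matrix (q = 1, unit noise variance)
V = np.linalg.inv(M)   # posterior parameter covariance matrix

# The worst-estimated parameter direction: the largest eigenvalue of V
# is the reciprocal of the smallest eigenvalue of M.
eigvals_M = np.linalg.eigvalsh(M)
```

Moving the sampling times in t changes J, hence M and V; an optimal design criterion picks the t (and other design variables) that makes V small in the chosen sense.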
N_M is the number of candidate model structures. As continuous sampling is not feasible, the integration is replaced by a sum over the sampling times, Σ_tk. Eq. (24) gives the φ that maximizes the differences among the candidate models f_i. Thus, after obtaining the real experiment profile y_m, the best candidate model is the one that predicts y_m most faithfully.
MBDOE still has some drawbacks that require further study:
1. It occasionally fails to find the optimal experiment for medium and large-scale DAE systems, and it generally takes a long time even for small-scale systems;
2. There is no trivial/automatic way to classify model parameters sensibly;
3. All criteria depend on optimizing the prediction error variance V, or M, in some sense. When M is ill-conditioned, V cannot be numerically calculated, because M cannot be inverted; a possible solution is to work with M instead of V; and
4. It is difficult to handle models for DAE systems.

Principal Component Analysis (PCA)
PCA decomposes the data matrix X from the experiments by the expression

X = T·P^T + E,

according to Coelho et al. (2009), where X is m x n, T (m x n_pc) holds the scores, P (n x n_pc) holds the loadings, E is the residual and n_pc is the number of principal components (PCs). Useful PCA features are:
1. If b_i is the i-th eigenvalue of the covariance matrix X^T·X/(n-1), in descending order, then the columns t_i of T are orthogonal and explain the relationships among the rows;
2. The columns p_i of P are orthonormal (P^T·P = I) and capture the relationships among the columns of X. Because X^T·X is symmetric, its eigenvalues and eigenvectors are real.
The first few columns of T and P explain most of the variance in X. When n_pc = min(m, n), E = 0. The Cumulative Percent Variance (CPV) is one method of obtaining the optimal n_pc that separates useful information from noise; the threshold for this method can be set to 90% (Qin & Dunia, 1998; Zhang & Edgar, 2007). Since X^T·X is a real symmetric matrix, the singular value decomposition X = W·L·C^T can be used: W (m x m) contains the left singular vectors, C (n x n) the right singular vectors, and P = C; the related eigenvalues are obtained from the singular values in L.
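A compact sketch of this decomposition and the CPV rule via SVD, applied to a small synthetic data matrix constructed so that two latent directions carry essentially all the variance (everything here is illustrative):

```python
import numpy as np

# Synthetic m x n data: four variables driven by two latent signals
m = 20
t = np.linspace(0.0, 1.0, m)
f1, f2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
X = np.column_stack([f1, f2, f1 + f2, f1 - f2]) + 1e-4 * np.sin(17 * t)[:, None]

Xc = X - X.mean(axis=0)            # mean-center each column

# SVD route: Xc = W L C^T; scores T = W L, loadings P = C
W, L, Ct = np.linalg.svd(Xc, full_matrices=False)
T, P = W * L, Ct.T
eig = L ** 2 / (m - 1)             # eigenvalues of the covariance matrix

# Cumulative Percent Variance rule with a 90% threshold
cpv = np.cumsum(eig) / eig.sum()
n_pc = int(np.searchsorted(cpv, 0.90) + 1)

E = Xc - T[:, :n_pc] @ P[:, :n_pc].T   # residual after keeping n_pc PCs
```

With this construction the CPV rule retains two components and the residual E is negligible, which is the behavior the 90% threshold is meant to capture: keep the directions that carry the information, discard the ones that only carry noise.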

PCA and Information matrix combined criterion for DOE (P-optimality)
For the sake of simplicity, assume there is only one measured output (q = 1) and unit measurement error (v_m,rs = 1), so that Eq. (20) becomes

V(θ, φ) = M^{-1} = (J^T·J)^{-1}.

The sensitivity matrix J can be viewed as the X of the PCA equations above, with M playing the role of X^T·X (the scaling factor 1/(n-1) of the covariance is contained in v_m,rs). Since P^T·P = I = P^{-1}·P and P^T = P^{-1}, then

V(θ, φ) = (P·Λ·P^T)^{-1} = P·Λ^{-1}·P^T,

so V follows from M by means of SVD or NIPALS. If the smallest eigenvalue of M is λ_m, then λ_m^{-1} is the largest eigenvalue of V, which indicates the largest variance in the prediction error covariance matrix; the corresponding eigenvector p_m gives the direction of largest variance in the parameter space. Figure 8 shows the covariance matrix of a two-parameter system: the eigenvectors p_2 and p_1 indicate the directions of largest and second largest variance. The projections of the long axis (p_2 direction) on θ_1 and of the short axis (p_1 direction) on θ_2 are proportional to the confidence regions of θ_1 and θ_2, respectively. In Figure 8, when one eigenvalue is much larger than the other, the ellipsoid degenerates into a line and it is reasonable to consider that direction alone. Conversely, when θ_2 is already well known, one can focus on shrinking the projections of both ellipsoid axes on the θ_1 direction. In order to eliminate the absolute value and take advantage of the unit length of the p_i, the expression in Eq. (30) is used, where the b_i are the eigenvalues of V in ascending order (b_i = 1/λ_i, with the λ_i in descending order) and P is the corresponding eigenvector matrix. The advantage of storing the eigenvalues of V in ascending order is that P can be used directly without transformation; otherwise, P for V needs to be reordered accordingly. After obtaining the eigenvalues b_i of the V matrix, a series of experiments is designed to minimize b_1, b_2, ..., b_m, respectively. In general, by minimizing some of the eigenvalues, the estimation of certain parameters will improve.
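The identity V = M^{-1} = P·Λ^{-1}·P^T, which follows from the orthonormality of P, can be checked numerically on a small, hypothetical information matrix:

```python
import numpy as np

# Hypothetical sensitivity matrix (5 sampling times, 2 parameters)
J = np.array([[0.2, 0.8], [0.4, 1.2], [0.6, 1.5], [0.9, 1.1], [1.0, 0.3]])

M = J.T @ J                       # information matrix (q = 1, unit variance)
lam, P = np.linalg.eigh(M)        # M = P diag(lam) P^T, lam in ascending order

# Because P is orthonormal (P^T = P^-1), inverting M only inverts
# the eigenvalues: V = M^-1 = P diag(1/lam) P^T.
V = P @ np.diag(1.0 / lam) @ P.T

# The largest variance in V is the reciprocal of the smallest
# eigenvalue of M, along the same eigenvector direction.
b = np.sort(np.linalg.eigvalsh(V))   # eigenvalues of V, ascending
```

This is why an ill-conditioned M is so damaging: a near-zero eigenvalue of M maps to an enormous eigenvalue of V, i.e., an essentially unestimable parameter combination, and it is also why the criterion can work with the eigenvalues of M directly when inverting it is numerically unsafe.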
When calculating n_pc, the eigenvalues of either M or V can be chosen. If V is used, then the last n_pc eigenvalues (kept in ascending order so that P does not need to be transformed) and the corresponding eigenvectors should be used to characterize the objective function. When using M, if the first k eigenvalues sum to 90%, then the remaining m - k eigenvalues (n_pc = m - k) and eigenvectors are used in Eq. (30). In general, a single eigenvalue cannot comprise information on most of the model parameters (some elements of p_i are close to zero), so retaining more eigenvalues in the objective function for the first few runs is better. Generally speaking, the new criterion has the following advantages:
- For medium and large-scale DAE systems, it is easier to shrink the scale of the DOE problem by choosing certain parameters out of the entire set to be the focus;
- By introducing PCA to carry out both the eigenvalue calculation and the selection of the optimal number of eigenvalues to evaluate, the ill-conditioning of M is avoided; PCA automatically chooses the optimal number of eigenvalues to be investigated and reduces the problem scale; and
- P gives a clue for grouping the estimated parameters, so it is easy to design an experiment for improving the estimation of specific parameters, compared with conventional methods.

Summary
It should be noted that factorial design does not determine the optimal values in a single step; rather, the procedure suitably indicates the path toward a good experimental design.
Main effects and interaction effects are calculated using all the observed responses: half of the observations contribute to one mean, while the remaining half contribute to the other. There is, therefore, no idle information in the planning, which is an important characteristic of two-level factorial designs.
Using factorial design, the calculation of the effects becomes an easy task. The formulation can be extended to any two-level factorial design. The system generated can be solved with the aid of a computer program for solving linear systems.
Modeling focuses on mathematical equations that try to reproduce real-world behavior accurately over a wide range. Regardless of the modeling approach chosen, the resulting mathematical models are frequently nonlinear algebraic equations (AE), dynamic algebraic equations (DAE) or partial differential equations (PDE). AE and DAE systems are the most frequently used modeling techniques. Model parameters are generally used to describe special properties such as reaction orders, adsorption kinetics, etc. Hence, factorial designs may not be satisfactory for intricate systems.
As a rule, model parameters are not known a priori and have to be estimated from measurements. Moreover, disturbing the system under study very often leads to repetitive measurements and does not produce new data. This leads to the problem of designing experiments prudently to maximize information for specific modeling purposes. An alternative to DOE relying on the previous assumptions is MBDOE.
This work introduces a PCA-based optimality criterion (P-optimal) for model-based DOE that combines PCA with the information matrix analysis proposed by Zhang et al. (2007). The main advantages of P-optimal DOE include the ease of reducing the scale of the optimization problem by choosing parameter subsets, increasing the estimation accuracy of specific parameters, and avoiding an ill-conditioned information matrix.
Countless products are produced with the aid of a large number of sensors whose data are mined for analysis. In such cases, the available data may be correlated, and PCA, in addition to other multivariate methods, is normally used. PCA is a multivariate technique in which a large number of related variables is transformed into a smaller number of uncorrelated variables (dimensionality reduction).