Open access

Concurrent Subspace Optimization for Aircraft System Design

Written By

Ke-shi Zhang

Submitted: 02 November 2010 Published: 12 September 2011

DOI: 10.5772/19143

From the Edited Volume

Aeronautics and Astronautics

Edited by Max Mulder


1. Introduction

Concurrent Subspace Optimization (CSSO) is one of the main decomposition approaches in Multidisciplinary Design Optimization (MDO). It supports a collaborative and distributed multidisciplinary design optimization environment among different disciplinary groups. Sobieski first proposed the subspace optimization method (Sobieszczanski-Sobieski, 1988), and Sobieski's blueprint was further developed by Bloebaum and subsequently named the concurrent subspace optimization method (Bloebaum, 1991). Renaud developed a second-order variant of the Global Sensitivity Equation (GSE) method and an alternative coordination procedure for the CSSO method (Renaud & Gabriele, 1993a, 1993b, 1994). Sellar proposed replacing the GSE with a neural-network based response surface method (Sellar et al., 1996).

The CSSO method allows a complex coupled system to be decomposed into smaller, temporarily decoupled subsystems, each corresponding to a different discipline (subspace). Each subspace optimization minimizes the system objective function subject to its own constraints as well as constraints contributed by the other subspaces. Each subspace optimization uses its own high-fidelity analysis tools, together with surrogate models or low-fidelity analysis tools provided by the other subspaces, so the subspace optimizations can be performed concurrently. The system-level coordination optimization is implemented entirely with approximate analysis tools. The subspace optimizations and the coordination optimization are performed alternately until convergence, with the final result determined by the coordination optimization. The CSSO method is therefore particularly suited to design organizations in which tasks are distributed among different design groups.

The CSSO method was initially developed for single-objective MDO problems. However, most MDO problems are essentially multi-objective. In recent years, much work (Aute & Azarm, 2006; Huang & Bloebaum, 2004; McAllister et al., 2000; McAllister et al., 2004; Orr & Hajela, 2005; Parashar & Bloebaum, 2006; Tappeta & Renaud, 1997; Zhang et al., 2008) has focused on extending existing MDO methods to handle such multi-objective MDO problems by integrating a multi-objective optimization method within the MDO framework. Such methods can be called multi-objective MDO methods.

Integrating a multi-objective optimization method within the CSSO framework is an effective way to develop a multi-objective MDO method. CSSO has been extended to solve multi-objective MDO problems, yielding the Multi-objective Pareto CSSO (MOPCSSO) method, the Multi-objective Range CSSO (MORCSSO) method, the Multi-objective Target CSSO (MOTCSSO) method, the Multi-objective Genetic Algorithm CSSO (MOGACSSO) method and the Adaptive Weighted Sum based CSSO (AWSCSSO) method. In MOPCSSO the constraint method is integrated within the CSSO framework (Huang & Bloebaum, 2007). In MORCSSO and MOTCSSO the concept of designer preference is introduced (Huang & Bloebaum, 2004). In MOGACSSO a genetic algorithm is combined with CSSO in the hope of improving computational efficiency (Parashar & Bloebaum, 2006). In AWSCSSO the Adaptive Weighted Sum method is introduced into CSSO (Zhang et al., 2008).


2. General description of MDO problem for aircraft system design

An aircraft is a complex engineering system involving multiple disciplines, such as aerodynamics, structures, propulsion, noise, electronics and cost. These disciplines are not independent of each other. For example, deformation of a wing structure affects the aerodynamic lift distribution on the wing, which in turn causes a new deformation; this is the well-known aeroelastic problem. The designers of each discipline cannot work without considering the other disciplines, which complicates aircraft system design. Especially in aircraft preliminary design, specialists from different disciplines often have to work together and consult each other when deciding many design variables so that higher performance can be achieved. Aircraft system design is thus a typical multidisciplinary design problem.

In recent years, industry has paid increasing attention to improving efficiency in the design of complex systems such as aircraft. MDO has emerged as an engineering discipline that focuses on the development of new design and optimization strategies for complex systems. MDO researchers strive to reduce the time and cost associated with the coupling interaction among several disciplines. “Decomposition approaches provide many advantages for the solution of complex MDO problems, as they enable a partitioning of a large coupled problem into smaller, more manageable sub-problems. The resulting computational benefits, besides the obvious one associated with the solution of smaller problems, include creating a potential distributed processing environment. The primary benefit, however, pertains to the savings in personal hours, because groups are no longer required to wait around for other groups in the process to complete their design tasks.” (Huang & Bloebaum, 2007)

Mathematically for minimization problems the general form for MDO can be represented as follows:

$$
\begin{aligned}
\min\quad & F(X, Y_1, \ldots, Y_N) \\
\text{s.t.}\quad & G_i(X, Y_1, \ldots, Y_N) \le 0 \\
& Y_i = f_i(X, Y_1, \ldots, Y_{i-1}, Y_{i+1}, \ldots, Y_N), \quad i = 1, 2, \ldots, N \\
& X = \{x_1, x_2, \ldots, x_{N_V}\}
\end{aligned}
\tag{1}
$$

Where F is the objective function vector, composed of one or more objectives, G_i is the constraint vector provided by subsystem i, Y_i is the coupling vector of subsystem i, and X is the design vector. The objective function and the constraints can be expressed as functions of X and Y_i. In this chapter the MDO problem expressed in Eq. (1) will be taken as the example to discuss the CSSO methods.


3. Different frameworks for concurrent subspace optimization

Global Sensitivity Equation based CSSO and Response Surface based CSSO will be discussed in this chapter as the typical CSSO methods.

3.1. Global Sensitivity Equation based CSSO (GSECSSO)

GSECSSO is a bi-level optimization method. As an example, the GSECSSO method for a problem with subsystems 1 and 2 (Eq. (1) with one objective and two coupled subsystems) is stated in the following paragraphs.

The mathematical models of subspace optimizations of GSECSSO can be written as

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F(X_1, Y_1, \hat{Y}_2) & \min\ \hat{F}(X_2) \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\left(s_1(1-r_{11}) + (1-s_1)t_{11}\right) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\left(s_1(1-r_{21}) + (1-s_1)t_{21}\right) \\
\hat{C}_2(X_1) \le C_2^0\left(s_2(1-r_{12}) + (1-s_2)t_{12}\right) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\left(s_2(1-r_{22}) + (1-s_2)t_{22}\right) \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{2}
$$

Where X_i, Y_i and C_i are the design variable vector, state variable vector and cumulative constraint of subspace i, respectively. C_i^0 is the value of C_i at the starting point X^0, and values marked with the symbol ‘^’ are linear approximations. r_kp, t_kp and s_p are the responsibility, trade-off and switch coefficients, respectively. In the first iteration, the responsibility coefficients are initialized using the sensitivities of the cumulative constraints with respect to the design variables (Bloebaum, 1991) and the switch coefficients are set to one. In the following iterations the coordination coefficients are determined by the coordination optimization.

By means of the Kreisselmeier–Steinhauser (KS) function (Kreisselmeier & Steinhauser, 1979), the cumulative constraint C_i, which represents the constraints of subsystem i, is expressed as

$$
C_i = g_i^{\max} + \frac{1}{\rho} \ln\!\left( \sum_{k=1}^{m} \exp\!\big(\rho\,(g_{ik}(X) - g_i^{\max})\big) \right)
\tag{3}
$$

Where [g_{i1}, g_{i2}, …, g_{im}] = G_i and ρ is a positive user-prescribed value.
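The KS aggregation above is straightforward to implement. The sketch below is a minimal, numerically safe version (the max-shift avoids overflow in the exponentials); the sample constraint values and the default ρ are illustrative, not taken from the chapter.

```python
import math

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregation of a constraint vector g.

    Returns a smooth, conservative envelope of max(g); a larger rho tracks
    the true maximum more closely but makes the function harder to optimize.
    """
    g_max = max(g)
    # Subtracting g_max before exponentiating keeps the arguments bounded.
    return g_max + math.log(sum(math.exp(rho * (gi - g_max)) for gi in g)) / rho
```

Because the KS value always lies slightly above the true maximum, driving the cumulative constraint below zero guarantees all underlying constraints are satisfied.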

The Global Sensitivity Equation (GSE) (Sobieszczanski-Sobieski, 1990) is a method for computing sensitivity derivatives of state (output) variables with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The global sensitivities are calculated from the GSE:

$$
\begin{bmatrix}
I & -\dfrac{\partial Y_1}{\partial Y_2} \\[6pt]
-\dfrac{\partial Y_2}{\partial Y_1} & I
\end{bmatrix}
\begin{bmatrix}
\dfrac{dY_1}{dX} \\[6pt]
\dfrac{dY_2}{dX}
\end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial Y_1}{\partial X} \\[6pt]
\dfrac{\partial Y_2}{\partial X}
\end{bmatrix}
\tag{4}
$$
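The GSE is simply a linear system in the total derivatives, so any linear solver applies. The sketch below uses the coupling of the analytical example in Sub-section 3.4.1 (y1 = x1² + x2 + x3 − 0.2 y2, y2 = √y1 + x1 + x3), with the local partial derivatives worked out by hand near its reported optimum; the numerical point is illustrative.

```python
import numpy as np

# Local partial derivatives of the two-discipline example, evaluated at
# the converged point x1 = 3.028427, y1 = 8.0 (illustrative values).
x1, y1 = 3.028427, 8.0
dY1_dY2 = -0.2                        # coupling partial of discipline 1
dY2_dY1 = 1.0 / (2.0 * np.sqrt(y1))   # coupling partial of discipline 2
dY1_dx1 = 2.0 * x1                    # local partial of y1 w.r.t. x1
dY2_dx1 = 1.0                         # local partial of y2 w.r.t. x1

# Assemble the GSE matrix and right-hand side, then solve for the
# total derivatives [dy1/dx1, dy2/dx1].
A = np.array([[1.0,      -dY1_dY2],
              [-dY2_dY1,  1.0     ]])
b = np.array([dY1_dx1, dY2_dx1])
dy_dx1 = np.linalg.solve(A, b)
```

One factorization of the (small) coupling matrix yields total derivatives with respect to every design variable, which is the efficiency argument for the GSE over system-level finite differencing.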

The responsibility coefficients divide the responsibility for satisfying the constraints among all participating subspaces: r_kp represents the responsibility allocated to the k-th subspace for satisfying the p-th cumulative constraint. In the first iteration, they are initialized with the sensitivities of the cumulative constraints with respect to the design variables (Bloebaum, 1991). The trade-off coefficients allow the violation of a constraint in one subspace optimization in order to gain a large reduction of the objective function. The switch coefficients enable or disable the responsibility and trade-off coefficients depending on whether the cumulative constraints are satisfied.

The mathematical model of coordination optimization of GSECSSO can be written as

$$
\begin{aligned}
\min\quad & F(r, t) = F^* + \sum_{p=1}^{2}\sum_{k=1}^{2} \left.\frac{dF}{dr_{kp}}\right|_{X=X^*} \Delta r_{kp} + \sum_{p=1}^{2}\sum_{k=1}^{2} \left.\frac{dF}{dt_{kp}}\right|_{X=X^*} \Delta t_{kp} \\
\text{s.t.}\quad & \sum_{k=1}^{2} r_{kp} = 1, \quad \sum_{k=1}^{2} t_{kp} = 0 \\
& 0 \le r_{kp} \le 1, \quad -1 \le t_{kp} \le 1, \quad p, k = 1, 2
\end{aligned}
\tag{5}
$$

Where F^* is the objective value at X^* (the value of the design variable vector after the subspace optimizations), and dF/dr_kp|_{X=X^*} and dF/dt_kp|_{X=X^*} are the optimum sensitivities of F with respect to the responsibility and trade-off coefficients.

Figure 1.

Flowchart of GSECSSO method

The flowchart of GSECSSO is shown in Fig. 1. The original problem is decomposed, and the mathematical models of the subspace optimizations and the coordination optimization are established as Eq. (2) and Eq. (5). Based on the initial design X^0, the system analysis is performed and the GSE is used to find the system derivatives. In the first iteration, the design variables are allocated to the appropriate subspaces, and the responsibility, trade-off and switch coefficients r, t, s are initialized. Subsequently, each subspace optimization minimizes the system objective function subject to its own constraints as well as constraints contributed by the other subspaces. The subspace optimizations are performed concurrently with respect to disjoint subsets of the design variables, and the updated design vector is the simple combination of the local optimal design sub-vectors. After that, the global sensitivities and optimum sensitivities are updated, and the system-level coordination optimization is performed to optimize the coordination parameters.

In GSECSSO, the subspace optimizations are implemented concurrently with respect to disjoint subsets of the design variables, which substantially reduces the complexity of the optimization problem within each disciplinary group. The updated design vector is the simple combination of the local optimal design sub-vectors. This offers designers a significant potential saving in computational effort. However, the convergence of GSECSSO is sometimes oscillatory and premature, mainly because inappropriate values of the trade-off coefficients and the KS parameter ρ can lead to poor convergence (Huang & Bloebaum, 2004).

3.2. A variant of GSECSSO

The variant of GSECSSO (Huang & Bloebaum, 2004) is much more efficient than the original GSECSSO. It has been adopted in several multi-objective CSSO methods (Huang & Bloebaum, 2004; Parashar & Bloebaum, 2006; Zhang et al., 2008).

In the variant of GSECSSO, several modifications are made: (1) the trade-off coefficients are abandoned, since all trade-offs occur directly through minimization of the objective function within each subspace; (2) the KS parameter ρ is increased gradually from a small starting value; (3) an infeasibility minimization is appended to move the design point into the feasible region; (4) the coordination optimization is abandoned and the responsibility coefficients are calculated directly via their initialization strategy.

Taking the MDO problem in Eq. (1) (one objective, two coupled subsystems, i.e. N=2) as an example, the mathematical models of subspace optimizations for the modified CSSO can be written as

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F(X_1, Y_1, \hat{Y}_2) & \min\ \hat{F}(X_2) \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{6}
$$

Where X_i, Y_i and C_i are the design variable vector, state variable vector and cumulative constraint of subspace i, respectively. C_i^0 is the value of C_i at the starting point X^0, values marked with the symbol ‘^’ are linear approximations, and r_kp are the responsibility coefficients.

The flowchart of the modified CSSO method is shown in Fig. 2. In the first stage, the system analysis is performed for the initial design variables. If the initial design does not satisfy the constraints, a minimization is carried out to reduce the initial infeasibility of the constraints as much as possible. In the second stage, all the sensitivities are computed using the GSE. Based on this sensitivity information, the impact of each design variable on each subspace is analyzed and each design variable is allocated to the subspace on which it has the greatest impact. In the third stage, the subspace optimizations are performed concurrently. There is no system optimization in this method; the updated design vector is the simple combination of the local optimal design sub-vectors.

Figure 2.

Flowchart of the modified CSSO method

The variant of GSECSSO behaves better than GSECSSO in terms of convergence. Nevertheless, how to choose the starting value and the increment of the KS parameter remains unclear; these two values affect convergence to some extent and need further investigation. The constraint minimization also brings extra computational cost. Furthermore, the linear approximation may sometimes cause oscillatory and premature convergence.

3.3. Response Surface based CSSO (RSCSSO)

The RSCSSO method performs optimization in a bi-level framework. Taking the MDO problem in Eq. (1) (one objective, two coupled subsystems, i.e. N = 2) as an example, the mathematical models of the subspace optimizations for RSCSSO can be written as

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F(X_1, Y_1, \hat{Y}_2) & \min\ F(X_2, \hat{Y}_1, Y_2) \\
\text{s.t.}\ G_1(X_1, Y_1, \hat{Y}_2) \le 0 & \text{s.t.}\ G_1(X_2, \hat{Y}_1, Y_2) \le 0 \\
G_2(X_1, Y_1, \hat{Y}_2) \le 0 & G_2(X_2, \hat{Y}_1, Y_2) \le 0 \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{7}
$$

Where X_i and Y_i are the design variable vector and state variable vector of subspace i, respectively. In subspace i, Y_i is calculated using the high-fidelity analysis tool, while the other state variables, marked with the symbol ‘^’, are approximated by a quadratic response surface. Other surrogate modeling techniques, such as kriging or radial basis functions, can be used for the approximation as well.
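A quadratic response surface can be fitted by ordinary least squares on a basis of constant, linear, and second-order terms. The sketch below is a minimal generic implementation, not the chapter's own; the demo data are synthetic.

```python
import numpy as np

def quad_features(X):
    """Design matrix with constant, linear, and quadratic/cross terms."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_quadratic_rs(X, y):
    """Least-squares fit of a full quadratic response surface."""
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return coef

def predict(coef, X):
    return quad_features(X) @ coef

# Demo: recover an exact quadratic from scattered samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
y = 2 + X[:, 0] - 3 * X[:, 1] + 0.5 * X[:, 0] ** 2 + X[:, 0] * X[:, 1]
coef = fit_quadratic_rs(X, y)
```

Note that a full quadratic in d variables needs 1 + d + d(d+1)/2 coefficients, which is why the sample count required by RSCSSO grows quickly with the number of design variables, as discussed below.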

The mathematical model of the coordination optimization of RSCSSO can be written as

$$
\begin{aligned}
\min\quad & F(X, \hat{Y}_1, \hat{Y}_2) \\
\text{s.t.}\quad & G_1(X, \hat{Y}_1, \hat{Y}_2) \le 0 \\
& G_2(X, \hat{Y}_1, \hat{Y}_2) \le 0
\end{aligned}
\tag{8}
$$

In the system-level optimization, all the state variables are approximately calculated and the design variables of all subspaces are optimized.

The flowchart of RSCSSO is shown in Fig. 3. First, the original problem is decomposed into the subspace optimizations in Eq. (7) and the coordination optimization in Eq. (8). Then a set of sample points of the design variable vector is generated via a Design of Experiments (DOE) method, such as orthogonal experimental design. The system analysis is performed at these points and the results are stored in the database.

Figure 3.

Flowchart of RSCSSO method

Subsequently these data are used to build the response surface models of the state variables. The subspace optimizations are then performed simultaneously; their results are analyzed and added to the database for updating the response surface models, as flow (1) in Fig. 3. Next, all of the design variables are optimized in the coordination optimization, and the result is likewise used to update the response surface models, as flow (2) in Fig. 3. The subspace optimizations and the coordination optimization are performed alternately until stable convergence is achieved.

In the RSCSSO method, the purpose of the subspace optimizations is to generate sufficient design information for approximation. After the concurrent subspace optimizations, a global approximation problem is formulated around the current design vector using the information stored in the design database. It is the coordination procedure on this global approximation that drives constraint satisfaction and the overall system optimization. RSCSSO exhibits fast and robust convergence. However, as the number of design variables increases, the number of sample points needed to create the response surface models grows greatly. Furthermore, augmenting the database with optimal points is not an ideal way to improve the response surface models. Since the final result is completely decided by the coordination optimization, the subspace optimizations have little impact on the results and could even be abandoned.

3.4. Examples

3.4.1. Example 1: An analytical example with coupling relationship

The mathematical model for an analytical MDO example considering two coupling disciplines is of the form

$$
\begin{aligned}
\min\quad & f = x_2^2 + x_3 + y_1 + e^{-y_2} \\
\text{s.t.}\quad & g_1 = 8 - y_1 \le 0 \\
& g_2 = y_2 - 10 \le 0 \\
& y_1 = x_1^2 + x_2 + x_3 - 0.2\,y_2 \\
& y_2 = \sqrt{y_1} + x_1 + x_3 \\
& x_1 \in [-10, 10], \quad x_2, x_3 \in [0, 10]
\end{aligned}
\tag{9}
$$

Where y_1 and y_2 are state variables belonging to subspace 1 and subspace 2, respectively, and g_1 and g_2 are constraints provided by discipline 1 and discipline 2, respectively. The coupling relationship between discipline 1 and discipline 2 is depicted in Fig. 4.
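Because y_1 depends on y_2 and vice versa, evaluating the objective at a given design requires resolving the coupling. A minimal system-analysis sketch using fixed-point (Gauss-Seidel-style) iteration follows; the sweep order, initial guesses and tolerance are implementation choices, not from the chapter.

```python
import math

def system_analysis(x1, x2, x3, tol=1e-10, max_iter=200):
    """Resolve the y1/y2 coupling of the analytical example by fixed-point iteration."""
    y1, y2 = 1.0, 0.0  # initial guesses
    for _ in range(max_iter):
        y1_new = x1 ** 2 + x2 + x3 - 0.2 * y2
        y2_new = math.sqrt(y1_new) + x1 + x3
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1_new, y2_new

# Evaluate the objective at the SQP optimum reported in Table 1.
y1, y2 = system_analysis(3.028427, 0.0, 0.0)
f = 0.0 ** 2 + 0.0 + y1 + math.exp(-y2)
```

At the reported optimum the iteration converges to y_1 ≈ 8 (so g_1 is active) and reproduces the Table 1 objective value f ≈ 8.00286.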

Figure 4.

Coupling relationship between discipline 1 and discipline 2 for a numerical example

The comparison of the results obtained by the different methods is listed in Table 1. The Sequential Quadratic Programming (SQP) method is taken as a reference for this comparative study. From Table 1, several conclusions can be drawn: (1) the number of system-level analyses can be remarkably reduced by the CSSO methods, which shows that they are well suited to MDO problems; (2) the variant of GSECSSO outperforms the RSCSSO method in terms of both the optimization results and the required number of system- and disciplinary-level analyses.

Variable                        | SQP      | Variant of GSECSSO | RSCSSO
f                               | 8.002860 | 8.002943           | 8.031944
x1                              | 3.028427 | 3.028442           | 2.993963
x2                              | 0.000000 | 0.000000           | 0.192359
x3                              | 0.000000 | 0.000000           | 0.000000
Number of system analyses       | 12       | 27                 | 38
Number of discipline 1 analyses | 0        | 72                 | 179
Number of discipline 2 analyses | 0        | 64                 | 144

Table 1.

Comparison of different CSSO methods with direct SQP for an analytical optimization problem

3.4.2. Example 2: Gear reducer optimization

This example is taken from the reference (Azarm & Li, 1989). The objective is to minimize the overall weight, subject to constraints on the bending and torque stresses. The mathematical model of the optimization problem is as follows:

$$
\begin{aligned}
\min\quad f ={}& 0.7854\,x_1 x_2^2 \left(3.3333\,x_3^2 + 14.9334\,x_3 - 43.0934\right) - 1.508\,x_1 \left(x_6^2 + x_7^2\right) \\
& + 7.477\left(x_6^3 + x_7^3\right) + 0.7854\left(x_4 x_6^2 + x_5 x_7^2\right) \\
\text{s.t.}\quad & g_1 = 27\,x_1^{-1} x_2^{-2} x_3^{-1} - 1 \le 0 \\
& g_2 = 397.5\,x_1^{-1} x_2^{-2} x_3^{-2} - 1 \le 0 \\
& g_3 = 1.93\,x_2^{-1} x_3^{-1} x_4^{3} x_6^{-4} - 1 \le 0 \\
& g_4 = 1.93\,x_2^{-1} x_3^{-1} x_5^{3} x_7^{-4} - 1 \le 0 \\
& g_5 = 10\,x_6^{-3} \sqrt{\left(745\,x_2^{-1} x_3^{-1} x_4\right)^2 + 1.69 \times 10^7} - 1100 \le 0 \\
& g_6 = 10\,x_7^{-3} \sqrt{\left(745\,x_2^{-1} x_3^{-1} x_5\right)^2 + 1.575 \times 10^8} - 850 \le 0 \\
& g_7 = x_4^{-1}\left(1.5\,x_6 + 1.9\right) - 1 \le 0 \\
& g_8 = x_5^{-1}\left(1.1\,x_7 + 1.9\right) - 1 \le 0 \\
& g_9 = x_2 x_3 - 40 \le 0 \\
& g_{10} = 5 - x_1 x_2^{-1} \le 0 \\
& g_{11} = x_1 x_2^{-1} - 12 \le 0 \\
& x_1 \in [2.6, 3.6], \quad x_2 \in [0.7, 0.8], \quad x_3 \in [17, 28] \\
& x_4, x_5 \in [7.3, 8.3], \quad x_6 \in [2.9, 3.9], \quad x_7 \in [5.0, 5.5]
\end{aligned}
\tag{10}
$$

The optimization problem is classified into two disciplines: bearing discipline and shaft discipline, without any coupling relationship.
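This is the well-known Golinski speed-reducer problem, so the model can be checked directly at the reported optimum. The sketch below transcribes the objective and the eleven constraints; it should reproduce the SQP objective value in Table 2 to within rounding, and the stress constraints g5 and g6 come out essentially active at that point, consistent with x6 and x7 being driven by them.

```python
import math

def gear_reducer_f(x):
    """Overall weight objective of the gear reducer problem."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.477 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def gear_reducer_g(x):
    """Constraint values g1..g11, all in <= 0 form."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2 ** 2 * x3) - 1.0,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,
        1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1.0,
        1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1.0,
        10.0 / x6 ** 3 * math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 1.69e7) - 1100.0,
        10.0 / x7 ** 3 * math.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 1.575e8) - 850.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
        x2 * x3 - 40.0,
        5.0 - x1 / x2,
        x1 / x2 - 12.0,
    ]

# SQP optimum reported in Table 2 (rounded to six decimals).
x_sqp = (3.5, 0.7, 17.0, 7.3, 7.715320, 3.350215, 5.286654)
f = gear_reducer_f(x_sqp)
```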

The optimization problem is solved by SQP and the different CSSO methods. The results are listed in Table 2 and the convergence histories are shown in Fig. 5. From this comparative study, some conclusions can be drawn: (1) both GSECSSO and its variant reduce the number of system analyses, which in turn improves efficiency; (2) no reduction of the system analyses is achieved by RSCSSO; nevertheless, its optimization procedure and data-flow interface are much simpler, and it could potentially be accelerated by parallel system analysis of the sample points and parallel subspace optimization; (3) the variant of GSECSSO outperforms the original GSECSSO.

Variable                              | SQP         | GSECSSO     | Variant of GSECSSO | RSCSSO
f                                     | 2994.341316 | 2995.606759 | 2995.607422        | 2994.193811
x1                                    | 3.5         | 3.500000    | 3.500000           | 3.500021
x2                                    | 0.7         | 0.7         | 0.7                | 0.700000
x3                                    | 17          | 17          | 17                 | 17.000000
x4                                    | 7.3         | 7.300000    | 7.3                | 7.300000
x5                                    | 7.715320    | 7.733301    | 7.733330           | 7.715316
x6                                    | 3.350215    | 3.352131    | 3.352156           | 3.349631
x7                                    | 5.286654    | 5.287256    | 5.287246           | 5.286643
Number of system analyses             | 40          | 15          | 11                 | 51
Number of bearing-discipline analyses | 0           | 326         | 188                | 280
Number of shaft-discipline analyses   | 0           | 2986        | 1818               | 288

Table 2.

Comparison of different CSSO methods with direct SQP for gear-box optimization

Figure 5.

Convergence history of CSSO optimization for gear reducer


4. Different frameworks for multi-objective concurrent subspace optimization

The MOPCSSO, MORCSSO and AWSCSSO methods will be discussed in this chapter as the typical multi-objective CSSO methods.

4.1. Multi-objective Pareto CSSO (MOPCSSO)

The constraint method is an effective multi-objective optimization approach that optimizes a preferred objective while treating the others as constraints. MOPCSSO is developed by introducing the constraint method into the variant of GSECSSO.

Taking the MDO problem in Eq. (1) (two objectives, two coupled subsystems, i.e. N=2) as an example, the mathematical models of subspace optimizations of MOPCSSO can be written as

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F_1(X_1, Y_1, \hat{Y}_2) & \min\ F_2(X_2, \hat{Y}_1, Y_2) \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
\hat{F}_2 \le F_2^0 & \hat{F}_1 \le F_1^0 \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{11}
$$

The variant of GSECSSO described in Sub-section 3.2 is adopted for the single-objective optimization of each objective function. The flowchart of MOPCSSO is the same as that in Fig. 2. During the optimization, all objective functions are improved simultaneously, or individual objective functions are improved without worsening the others. The optimization continues until no further improvement can be made, so that a Pareto optimum is finally obtained.

4.2. Multi-objective Range CSSO (MORCSSO)

The goal programming method is one of the most popular multi-objective optimization techniques that take the designer's preference into account. MORCSSO and Multi-objective Target CSSO (MOTCSSO) are both developed by combining the idea of the goal programming method with MOPCSSO.

Suppose [F_2^min, F_2^max] is the preferred range of the objective function F_2. The MDO problem in Eq. (1) (two objectives, two coupled subsystems, i.e. N = 2) is taken as an example. The framework of MORCSSO is shown in Fig. 6. The variant of GSECSSO described in Sub-section 3.2 is adopted for the sub-optimizations and the system-level coordination. The optimization is performed in two stages to obtain a preferred Pareto point.

If the design point does not meet the preference at the beginning, the first-stage optimizations are performed to drive the design from the starting point into the preferred range.

The mathematical models of subspace optimizations of MORCSSO can be written as

If $F_2^0 < F_2^{\min}$, then

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F_1(X_1, Y_1, \hat{Y}_2) & \min\ \left|F_2(X_2, \hat{Y}_1, Y_2) - F_2^{\min}\right| \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
\hat{F}_2 \ge F_2^0 & \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{12}
$$

and if $F_2^0 > F_2^{\max}$, then

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F_1(X_1, Y_1, \hat{Y}_2) & \min\ \left|F_2(X_2, \hat{Y}_1, Y_2) - F_2^{\max}\right| \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
\hat{F}_2 \le F_2^0 & \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{13}
$$

In the second stage the design point is gradually optimized toward the Pareto frontier within the preferred range.

After the first-stage optimization, the design lies within the designer's preferred objective range. The mathematical models of the subspace optimizations of MORCSSO are then the same as those of MOPCSSO in Eq. (11).

In the course of the second-stage optimization, the design point may move out of the preferred objective range again. In such a case, the optimizations below should be performed.

If $F_2^0 < F_2^{\min}$, then

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F_1(X_1, Y_1, \hat{Y}_2) & \min\ \left|F_2(X_2, \hat{Y}_1, Y_2) - F_2^{\min}\right| \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
\hat{F}_2 \ge F_2^0 & \hat{F}_1 \le F_1^0 \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{14}
$$

and if $F_2^0 > F_2^{\max}$, then

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ F_1(X_1, Y_1, \hat{Y}_2) & \min\ \left|F_2(X_2, \hat{Y}_1, Y_2) - F_2^{\max}\right| \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
\hat{F}_2 \le F_2^0 & \hat{F}_1 \le F_1^0 \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{15}
$$

Figure 6.

Framework of MORCSSO method

For the MORCSSO method, in the case of F_2^0 > F_2^max in the second-stage optimizations, the optimization may converge to the Pareto frontier above the preferred objective range, i.e. a Pareto point with F_2 > F_2^max is obtained.

Why does the optimization fail in this case? It can be analyzed from Eq. (15) in the objective space shown in Fig. 7. For Eq. (15): (1) X^0 should move closer to the line F_2 = F_2^max, according to Min |F_2 − F_2^max|; (2) F_2(X^0) should be no more than F_2^0, according to F̂_2 ≤ F_2^0; (3) X^0 should be optimized to decrease F_1^0, according to F̂_1 ≤ F_1^0 and Min F_1; (4) X^0 should remain in the feasible region. As shown in Fig. 7, conditions (2) and (3) force X^0 to move along a direction n_1 in the lower-left shaded region, in which direction the design point cannot move into the preferred objective range. Only along a direction n_2 in the lower-right shaded region can the preferred range be reached. In such a case X^0 may move straight down along the direction n and arrive at the Pareto front. The semi-infinite region between F_1 = F_1^max and the Pareto front is named the Blind Region in this chapter, meaning that a point falling into this region will no longer converge to the Pareto front in the preferred range. This failure can also occur with three or more objectives, as shown in Fig. 8.

Figure 7.

The analysis of Eq. (15) in bi-objective space

Figure 8.

The analysis of Eq. (15) in three-objective space

How can this problem be solved? From Fig. 7, we only need to move the line F_1 = F_1^0 slightly to the right, so that the two shaded regions intersect. Mathematically, F_1 ≤ F_1^0 is relaxed to F_1 ≤ F_1^0 + ω|F_1^0| in Eq. (15). This strategy has proven effective.

4.3. Adaptive Weighted Sum based CSSO (AWSCSSO)

The procedure for solving the Pareto front by AWSCSSO is similar to the Adaptive Weighted Sum (AWS) method (Kim & de Weck, 2004, 2005). As an example, the AWSCSSO method for a generic bi-objective problem with subsystems 1 and 2 (Eq. (1): two objectives and two coupled subsystems) is stated in the following paragraphs. In the same way, AWSCSSO can also be applied to multi-objective problems with three or more subsystems.

In the first stage a rough profile of the Pareto front is determined.

The variant of GSECSSO described in Sub-section 3.2 is adopted for the single-objective optimization of each objective function, and each objective function is normalized as follows:

$$
\bar{J}_i = \frac{J_i - J_i^{Nadir}}{J_i^{Utopia} - J_i^{Nadir}}
\tag{16}
$$

Where X_i^* is the optimal solution vector of the single-objective optimization of the i-th objective function J_i, the utopia point and pseudo nadir point are defined as

$$
J^{Utopia} = \left[J_1(X_1^*), J_2(X_2^*), \ldots, J_m(X_m^*)\right]
\tag{17}
$$

$$
J^{Nadir} = \left[J_1^{Nadir}, J_2^{Nadir}, \ldots, J_m^{Nadir}\right]
\tag{18}
$$

Where $J_i^{Nadir} = \max\left[J_i(X_1^*), J_i(X_2^*), \ldots, J_i(X_m^*)\right]$ and m is the number of objective functions.
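With hypothetical numbers, the utopia and pseudo nadir points can be assembled from the single-objective optima and used for the normalization above; the objective values below are made up for illustration.

```python
import numpy as np

# Hypothetical matrix: row i holds the full objective vector evaluated
# at the single-objective optimum X_i* of objective i.
J_at_optima = np.array([[1.0, 9.0],    # objectives at X_1*
                        [6.0, 2.0]])   # objectives at X_2*

utopia = np.diag(J_at_optima)          # [J_1(X_1*), J_2(X_2*)]
nadir = J_at_optima.max(axis=0)        # componentwise max over the optima

def normalize(J):
    """Normalize an objective vector between the pseudo nadir and utopia points."""
    return (J - nadir) / (utopia - nadir)
```

Under this scaling the utopia point maps to a vector of ones and the pseudo nadir point to zeros, so every attainable design lies in a comparable, dimensionless objective box.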

Then, with a large step size of the weighting factor, the usual weighted sum method is used within the variant of GSECSSO to approximate the Pareto front quickly. The subspace optimizations for AWSCSSO can be expressed as

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ W_1 F_1(X_1, Y_1, \hat{Y}_2) + W_2 \hat{F}_2(X_1) & \min\ W_1 \hat{F}_1(X_2) + W_2 F_2(X_2, \hat{Y}_1, Y_2) \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1)
\end{array}
\tag{19}
$$

Where the values marked with the symbol ‘^’ are linear approximations, C_1 and C_2 are the cumulative constraints of G_1 and G_2, respectively, and r_kp represents the responsibility assigned to the k-th subsystem for reducing the violation of C_p. Values with the superscript ‘0’ correspond to the starting point X^0. W_1 and W_2 are the weighting factors of the objective functions F_1 and F_2, respectively.
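The first stage is the classical weighted-sum scalarization. The toy sketch below, two convex one-variable objectives with a closed-form weighted-sum minimizer, is unrelated to the chapter's problems but shows how sweeping the weights traces out a convex front.

```python
def weighted_sum_front(steps=4):
    """Sweep the weights of W1*F1 + W2*F2 for F1 = x^2, F2 = (x - 2)^2.

    The weighted sum is minimized in closed form at x* = 2*W2 / (W1 + W2),
    so each weight pair contributes one Pareto point (F1(x*), F2(x*)).
    """
    points = []
    for k in range(steps + 1):
        w1 = k / steps
        w2 = 1.0 - w1
        x = 2.0 * w2 / (w1 + w2)  # analytic minimizer of the weighted sum
        points.append((x ** 2, (x - 2.0) ** 2))
    return points
```

On convex fronts, uniform weight steps generally produce non-uniformly spaced Pareto points, which is exactly the deficiency the adaptive refinement of the second stage addresses.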

By estimating the size of each Pareto patch, the regions to be refined in the objective space are determined. An example of mesh refinement in AWSCSSO is shown in Fig. 9, where hollow points represent the newly refined nodes PE (expected solutions) and solid points represent the initial four nodes that define the patch. Taking the quadrilateral patch as an example, if the line segment connecting two neighboring nodes of the patch is too long, it is divided into two equal segments and the central point becomes the new refined node. These refined nodes are connected to form a refined mesh. The sub-optimizations in Eq. (19) are then performed with different additional constraints for the different refined nodes, and new Pareto points are obtained. Next, according to the prescribed density of Pareto points, any Pareto-front patch that is still too large is refined again in the same way. The refinement and sub-optimizations are repeated until the number of Pareto points no longer increases.

Figure 9.

Refined patches of the AWSCSSO method

In the subsequent stage, only these regions are specified as feasible domains for the sub-optimization problems with additional constraints. Each Pareto front patch is refined by imposing additional constraints that connect the pseudo nadir point (PN) and the expected Pareto optimal solutions (PE) on a piecewise planar surface in the objective space (as shown in Fig. 10).

Figure 10.

AWSCSSO method for multidimensional problems

The sub-optimizations are defined by imposing an additional constraint H on Eq. (19):

$$
\begin{array}{l|l}
\text{Sub-optimization 1} & \text{Sub-optimization 2} \\
\min\ W_1 F_1(X_1, Y_1, \hat{Y}_2) + W_2 \hat{F}_2(X_1) & \min\ W_1 \hat{F}_1(X_2) + W_2 F_2(X_2, \hat{Y}_1, Y_2) \\
\text{s.t.}\ C_1(X_1, Y_1, \hat{Y}_2) \le C_1^0\,(1-r_{11}) & \text{s.t.}\ \hat{C}_1(X_2) \le C_1^0\,(1-r_{21}) \\
\hat{C}_2(X_1) \le C_2^0\,(1-r_{12}) & C_2(X_2, \hat{Y}_1, Y_2) \le C_2^0\,(1-r_{22}) \\
Y_1 = f_1(X_1, \hat{Y}_2) & Y_2 = f_2(X_2, \hat{Y}_1) \\
H(X_1, Y_1, \hat{Y}_2) \le 0 & H(X_2, \hat{Y}_1, Y_2) \le 0
\end{array}
\tag{20}
$$

The additional inequality constraint is

$$
H = -\frac{(\bar{F}^E - \bar{F}^N) \cdot (\bar{F}(X) - \bar{F}^N)}{\left|\bar{F}^E - \bar{F}^N\right| \left|\bar{F}(X) - \bar{F}^N\right|} + L \le 0
\tag{21}
$$

Where L is an adaptive relaxation factor less than 1, and F̄^E, F̄^N and F̄(X) are the normalized position vectors of the node PE, the node PN and the current design point X, respectively. In AWSCSSO, L is increased as the distribution density of the Pareto points rises.
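Reading the constraint as a requirement that the current normalized point stay within an angular cone about the ray from PN toward PE (cosine of the angle at least L), it can be sketched as below; this geometric reading is an interpretation of the formula, and the sample points and default L are illustrative.

```python
import numpy as np

def alignment_constraint(F_bar, F_E, F_N, L=0.9):
    """H <= 0 when the normalized point F_bar lies within the angular cone
    (cosine at least L) around the ray from the pseudo nadir point F_N
    toward the expected solution F_E."""
    a = np.asarray(F_E, float) - np.asarray(F_N, float)
    b = np.asarray(F_bar, float) - np.asarray(F_N, float)
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return -cos_theta + L
```

A point exactly on the PN-PE ray gives cos θ = 1 and hence H = L − 1 < 0 (feasible), while a point far off the ray violates the constraint; raising L toward 1 shrinks the cone and pins each new Pareto point more tightly to its target direction.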

Fig. 11 shows the framework of AWSCSSO, in which W_1^i and W_2^i are the weighting factors in stages 1 and 2, respectively, and H is the additional constraint. The optimization is performed in two stages. In the first stage, the Pareto front is approximated quickly with a large step size of the weighting factors; the optimization problems of this stage are defined in Eq. (19). In the subsequent stage, by calculating the distances between neighboring solutions on the front in the objective space, the regions to be refined are identified and the refined mesh is formulated. Only these regions then become the feasible regions for optimization, by imposing additional constraints in the objective space; the optimization problems of this stage are defined in Eq. (20). The locations of the new Pareto points are set by the different additional constraints. Optimization is performed in each of the regions and the new solution set is acquired. Since this is an MDO problem, the optimization is performed by the variant of GSECSSO.

Figure 11.

Framework of AWSCSSO method

4.4. Examples

4.4.1. Example 1: Convex Pareto front

This problem is taken from a test problem (Huang, 2003) available in the NASA Langley Research Center MDO Test Suite. It has two objectives, F_1 and F_2, to be minimized, and consists of ten inequality constraints, four coupled state variables and ten design variables in two coupled subsystems. The mathematical model is not listed here for brevity; we refer the reader to test problem 1 in the corresponding references.

The comparison of the solutions obtained by MOPCSSO and AWSCSSO is shown in Fig. 12. It can be concluded that, for a problem with a convex Pareto front, a uniformly spaced, widely distributed and smooth Pareto front can be obtained by the AWSCSSO method, whereas MOPCSSO does not capture the whole range of the front.

Figure 12.

Comparison of Pareto front obtained by using AWSCSSO and MOPCSSO

4.4.2. Example 2: Non-convex Pareto front

This problem consists of two objective functions, F1 and F2, both to be minimized, six design variables and six constraints. The model problem is defined as

\[
\begin{aligned}
\min\ & F_1(\mathbf{x}) = -\bigl(25(x_1-2)^2+(x_2-2)^2+(x_3-1)^2+(x_4-4)^2+(x_5-1)^2\bigr)\\
\min\ & F_2(\mathbf{x}) = x_1^2+x_2^2+x_3^2+x_4^2+x_5^2+x_6^2\\
\text{s.t.}\ & c_1(\mathbf{x}) = x_1+x_2-2 \ge 0\\
& c_2(\mathbf{x}) = 6-x_1-x_2 \ge 0\\
& c_3(\mathbf{x}) = 2+x_1-x_2 \ge 0\\
& c_4(\mathbf{x}) = 2-x_1+3x_2 \ge 0\\
& c_5(\mathbf{x}) = 4-(x_3-3)^2-x_4 \ge 0\\
& c_6(\mathbf{x}) = (x_5-3)^2+x_6-4 \ge 0\\
& 0 \le x_1, x_2, x_6 \le 10,\quad 1 \le x_3, x_5 \le 5,\quad 0 \le x_4 \le 6
\end{aligned}
\tag{26}
\]
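The model above is the well-known Osyczka–Kundu (OSY) benchmark. A direct Python encoding of its objectives, constraints and bounds (the function names are mine) is:

```python
def objectives(x):
    """Two objectives of the Example 2 model (both minimized)."""
    x1, x2, x3, x4, x5, x6 = x
    f1 = -(25*(x1-2)**2 + (x2-2)**2 + (x3-1)**2 + (x4-4)**2 + (x5-1)**2)
    f2 = sum(v*v for v in x)
    return f1, f2

def constraints(x):
    """Six inequality constraints, feasible when every value is >= 0."""
    x1, x2, x3, x4, x5, x6 = x
    return (x1 + x2 - 2,
            6 - x1 - x2,
            2 + x1 - x2,
            2 - x1 + 3*x2,
            4 - (x3 - 3)**2 - x4,
            (x5 - 3)**2 + x6 - 4)

def feasible(x):
    """Constraint and side-bound check for a candidate design x."""
    x1, x2, x3, x4, x5, x6 = x
    in_bounds = (all(0 <= v <= 10 for v in (x1, x2, x6))
                 and all(1 <= v <= 5 for v in (x3, x5))
                 and 0 <= x4 <= 6)
    return in_bounds and all(c >= 0 for c in constraints(x))
```

For example, the design x = (5, 1, 2, 0, 5, 0) is feasible and gives F1 = -259, F2 = 55.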

Figure 13.

Comparison of Pareto fronts obtained by using AWSCSSO and MOPCSSO methods

The comparison of the Pareto fronts obtained by AWSCSSO and MOPCSSO is shown in Fig. 13. It can be concluded that, for a problem with a non-convex Pareto front, a more uniformly spaced, more widely distributed and smoother Pareto front is again obtained by the AWSCSSO method.

4.4.3. Example 3: Conceptual design of a subsonic passenger aircraft

The mathematical model of this problem is defined as

\[
\begin{aligned}
\max\ & U\\
\max\ & L/D_C\\
\text{s.t.}\ & C_{d0,L} \le 0.2,\quad C_{d0,C} \le 0.02\\
& R_f \ge 1\\
& q_{To} \ge 0.027,\quad q_L \ge 0.024\\
& D_{To} \le 1981,\quad D_L \le 1371
\end{aligned}
\tag{27}
\]

The objective functions in Eq. (27) are the useful load fraction (U) and the lift-to-drag ratio in the cruising condition (L/DC), both to be maximized. The constraints in Eq. (27) are as follows. (1) The zero-lift drag coefficient for the take-off and landing conditions (Cd0L) is no more than 0.2, and that for the cruising condition (Cd0C) is no more than 0.02. (2) The overall fuel balance coefficient (Rf) is no less than 1. (3) The achievable climb gradient for the take-off condition (qTo) is no less than 0.027, and that for the landing condition (qL) is no less than 0.024. (4) The take-off field length (DTo) is no more than 1981 m and the landing field length (DL) is no more than 1371 m. The overall fuel balance coefficient is defined as the ratio of the fuel weight available for the mission to that required for the mission. The design variables are listed in Table 3.

Design variable / unit          Symbol   Lower limit   Upper limit
Wing area / m2                  S        111.48        232.26
Aspect ratio                    AR       9.5           10.5
Design gross weight / 10^3 kg   Wdg      63.504        113.400
Installed thrust / 10^3 kg      Ti       12.587        24.948

Table 3.

List of design variables
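As a quick illustration, the constraint set of this problem can be encoded as a feasibility check; the dictionary keys are illustrative names for the performance quantities, not symbols from the chapter.

```python
def aircraft_feasible(perf):
    """Check the constraints of the subsonic-aircraft design problem:
    zero-lift drag limits, fuel balance, climb gradients, and field
    lengths (in meters).  `perf` maps quantity names to values."""
    return (perf["Cd0_L"] <= 0.2 and perf["Cd0_C"] <= 0.02   # drag limits
            and perf["Rf"] >= 1.0                            # fuel balance
            and perf["q_To"] >= 0.027 and perf["q_L"] >= 0.024  # climb
            and perf["D_To"] <= 1981 and perf["D_L"] <= 1371)   # field length
```

For instance, the optimal design reported later in this section (Cd0C = 0.01777, qTo = 0.03303, qL = 0.08804, DTo = 1823 m, DL = 1086 m) passes this check for any admissible Cd0L and Rf.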

Two disciplines, aerodynamics and weight, are considered in this problem. The dataflow between and within the subsystems is shown in Fig. 14, where L/DTo, L/DL and L/DC are the lift-to-drag ratios for the take-off, landing and cruising conditions respectively, Vbr is the cruise velocity with the longest range, Rfr is the fuel weight fraction required for the mission, and Cd0C is the zero-lift drag coefficient for the cruising condition.

The two disciplines, aerodynamics and weight, are coupled. To compute the aerodynamic state variables, such as the cruise velocity with the longest range, the lift coefficients, the zero-lift and skin-friction drag coefficients and the lift-to-drag ratios, some state variables from the weight discipline, such as Rfr, must be known. Similarly, to compute the weight state variables, such as the useful load fraction, the overall fuel balance coefficient, the achievable climb gradients at take-off and landing, and the take-off and landing field lengths, some state variables from the aerodynamics discipline, such as L/DTo, L/DL, L/DC and Vbr, must be provided. Within the aerodynamics discipline, Vbr is coupled with Cd0C. The dataflow between state variables and design variables is shown in Fig. 15. Details of the equations of the aerodynamics and weight discipline models are given in (Zhang et al., 2008); their full description can be found in (Lewis & Mistree, 1995; Lewis, 1997).

Figure 14.

Dataflow between and in subspaces

Figure 15.

Dataflow between state variables and design variables
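The mutual dependence just described (aerodynamics needs Rfr from weight; weight needs L/D and Vbr from aerodynamics) is what the system analysis must resolve, typically by fixed-point iteration between the disciplines. A minimal Gauss-Seidel sketch follows; the linear relations inside the loop are toy stand-ins for the actual discipline models, and all coefficients are invented for illustration.

```python
def coupled_analysis(x, tol=1e-8, max_iter=100):
    """Fixed-point (Gauss-Seidel) iteration between two coupled
    disciplines.  'Aerodynamics' computes L/D from the design x and
    the fuel fraction Rfr; 'weight' computes Rfr from L/D.  Iterate
    until the coupling variables stop changing."""
    l_over_d, rfr = 15.0, 0.3                  # initial coupling guesses
    for _ in range(max_iter):
        # toy "aerodynamics": L/D depends on the design and on Rfr
        new_ld = 18.0 + 2.0 * x - 3.0 * rfr
        # toy "weight": required fuel fraction falls as L/D rises
        new_rfr = 0.5 - 0.01 * new_ld
        if abs(new_ld - l_over_d) < tol and abs(new_rfr - rfr) < tol:
            return new_ld, new_rfr             # mutually consistent state
        l_over_d, rfr = new_ld, new_rfr
    raise RuntimeError("coupled analysis did not converge")
```

Because the composed update is a contraction here, the loop converges in a handful of iterations; in CSSO, repeating such a system analysis for every candidate design is exactly the cost the subspace decomposition tries to avoid.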

Figure 16.

Pareto front obtained by using AWSCSSO

The Pareto front obtained using AWSCSSO is shown in Fig. 16. Each solution on the Pareto front is obtained by CSSO with iterative subspace optimizations. Taking one of the optimal designs as an example, the design variables are S = 232.3 m2, AR = 10.5, Wdg = 113.4×10^3 kg and Ti = 16.75×10^3 kg, and the corresponding performance parameters are Cd0C = 0.01777, L/DC = 21.05, Vbr = 183.43 m/s, qTo = 0.03303, qL = 0.08804, DTo = 1823 m and DL = 1086 m. Several conclusions can be drawn from these results. (1) The AWSCSSO method is preliminarily shown to be applicable to aircraft conceptual design. (2) The distribution of the Pareto points is not as uniform as expected, although the results are still encouraging in general. The non-uniformity may be caused by the additional constraints, which can shift a solution away from its expected location. Further study is needed on how to balance uniformity against convergence.

5. Conclusion

The CSSO method is one of the main bi-level MDO methods. Several CSSO methods for single- and multi-objective MDO problems have been discussed in this chapter. It can be concluded that (1) the number of system analyses can be greatly reduced by the CSSO methods, which improves computational efficiency; (2) the CSSO methods enable concurrent design and optimization by different design groups, which shortens the design cycle; and (3) the CSSO methods are effective and applicable to both single-objective and multi-objective MDO problems.

Among the single-objective CSSO methods, although the RSCSSO method is more robust, it effectively reduces to a single-level surrogate-model-based MDO method, since the subspace optimizations have little impact on the results. The GSECSSO method is therefore more promising as a bi-level method and is worth further study; future work on it will focus on improving its robustness and efficiency. Among the multi-objective CSSO methods, the AWSCSSO method performs better at obtaining widely distributed Pareto points; future work will focus on improving the solution quality and on testing the method on more realistic engineering design problems.

References

  1. Aute, V. & Azarm, S. (2006). "A Genetic Algorithms Based Approach for Multidisciplinary Multiobjective Collaborative Optimization," 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Virginia, AIAA 2006-6953.
  2. Azarm, S. & Li, W. C. (1989). "Multi-level Design Optimization Using Global Monotonicity Analysis," ASME Journal of Mechanisms, Transmissions, and Automation in Design, Vol. 111, No. 2, pp. 259-263.
  3. Bloebaum, C. L. (1991). "Formal and Heuristic System Decomposition in Structural Optimization," NASA-CR-4413.
  4. Huang, C. H. (2003). "Development of Multi-Objective Concurrent Subspace Optimization and Visualization Methods for Multidisciplinary Design," Ph.D. Dissertation, The State University of New York, New York.
  5. Huang, C. H. & Bloebaum, C. L. (2004). "Incorporation of Preferences in Multi-Objective Concurrent Subspace Optimization for Multidisciplinary Design," 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, New York, AIAA 2004-4548.
  6. Huang, C. H. & Bloebaum, C. L. (2007). "Multi-Objective Pareto Concurrent Subspace Optimization for Multidisciplinary Design," AIAA Journal, Vol. 45, No. 8, pp. 1894-1906.
  7. Kreisselmeier, G. & Steinhauser, R. (1979). "Systematic Control Design by Optimizing a Vector Performance Index," IFAC Symposium on Computer Aided Design of Control Systems, Zurich, Switzerland.
  8. Kim, I. Y. & de Weck, O. L. (2004). "Adaptive Weighted Sum Method for Multiobjective Optimization," 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, New York, AIAA 2004-4322.
  9. Kim, I. Y. & de Weck, O. L. (2005). "Adaptive Weighted-sum Method for Bi-objective Optimization: Pareto Front Generation," Structural and Multidisciplinary Optimization, Vol. 29, pp. 149-158.
  10. Lewis, K. & Mistree, F. (1995). "Designing Top-level Aircraft Specifications: A Decision-based Approach to a Multiobjective, Highly Constrained Problem," 6th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Bellevue, WA, AIAA 1431.
  11. Lewis, K. (1997). "An Algorithm for Integrated Subsystem Embodiment and System Synthesis," Ph.D. Dissertation, Georgia Institute of Technology, Atlanta, Georgia, August.
  12. McAllister, C. D., Simpson, T. W. & Yukesh, M. (2000). "Goal Programming Applications in Multidisciplinary Design Optimization," 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, CA, AIAA 2000-4717.
  13. McAllister, C. D., Simpson, T. W., Lewis, K. & Messac, A. (2004). "Robust Multiobjective Optimization through Collaborative Optimization and Linear Physical Programming," 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, New York, AIAA 2004-4549.
  14. Orr, S. A. & Hajela, P. (2005). "Genetic Algorithm Based Collaborative Optimization of a Tiltrotor Configuration," 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Texas, AIAA 2005-2285.
  15. Parashar, S. & Bloebaum, C. L. (2006). "Multi-objective Genetic Algorithm Concurrent Subspace Optimization (MOGACSSO) for Multidisciplinary Design," 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Rhode Island, AIAA 2006-2047.
  16. Renaud, J. E. & Gabriele, G. A. (1993a). "Second Order Based Multidisciplinary Design Optimization Algorithm Development," Advances in Design Automation, Vol. 65, No. 2, pp. 347-357.
  17. Renaud, J. E. & Gabriele, G. A. (1993b). "Improved Coordination in Non-hierarchic System Optimization," AIAA Journal, Vol. 31, No. 12, pp. 2367-2373.
  18. Renaud, J. E. & Gabriele, G. A. (1994). "Approximation in Non-hierarchic System Optimization," AIAA Journal, Vol. 32, No. 1, pp. 198-205.
  19. Sellar, R. S., Batill, S. M. & Renaud, J. E. (1996). "Response Surface Based Concurrent Subspace Optimization for Multidisciplinary System Design," 34th AIAA Aerospace Sciences Meeting, AIAA 96-0714.
  20. Sobieszczanski-Sobieski, J. (1988). "Optimization by Decomposition: A Step from Hierarchic to Non-hierarchic Systems," Recent Advances in Multidisciplinary Analysis and Optimization, NASA CP-3031, Hampton.
  21. Sobieszczanski-Sobieski, J. (1990). "Sensitivity of Complex, Internally Coupled Systems," AIAA Journal, Vol. 28, No. 1, pp. 153-160.
  22. Tappeta, R. V. & Renaud, J. E. (1997). "Multiobjective Collaborative Optimization," ASME Journal of Mechanical Design, Vol. 119, pp. 403-411.
  23. Zhang, K. S., Han, Z. H., Li, W. J. & Song, W. P. (2008). "Bilevel Adaptive Weighted Sum Method for Multidisciplinary Multi-Objective Optimization," AIAA Journal, Vol. 46, No. 10, pp. 2611-2622.
