Open access peer-reviewed chapter

Parameter Optimization for Spacecraft Attitude Stabilization Using Magnetorquers

Written By

Renato Bruni and Fabio Celani

Submitted: 19 February 2019 Reviewed: 16 August 2019 Published: 15 January 2020

DOI: 10.5772/intechopen.89197

From the Edited Volume

Advances in Spacecraft Attitude Control

Edited by Timothy Sands


Abstract

The attitude stabilization of a spacecraft that uses magnetorquers as torque actuators is a very important task. Depending on the availability of angular rate sensors on the spacecraft, control laws can be designed either by using measurements of both attitude and attitude rate or by using measurements of attitude only. The parameters of both types of control laws are usually determined by a simple trial-and-error approach. Evidently, such an approach has several drawbacks. This chapter describes recently developed systematic approaches for determining the parameters using derivative-free optimization techniques. These approaches find the parameter values that minimize the settling time of the attitude error, or an indirect measure of this error. However, such cost indices also depend on the initial conditions of the spacecraft, which are not known in advance. Thus, a min-max optimization problem is formulated, whose solution provides the values of the parameters that minimize the chosen cost index under the worst initial conditions. The chapter also provides numerical results showing the effectiveness of the described approaches.

Keywords

  • derivative-free optimization
  • spacecraft attitude control
  • robust optimization
  • min-max problems
  • magnetic actuators

1. Introduction

A magnetorquer (or magnetic torquer) is a torque actuator widely used for attitude control in satellites, especially those flying in low Earth orbits. The magnetorquer generates a magnetic dipole that interacts with the Earth magnetic field, thus generating a torque used to control the spacecraft attitude. The control torque generated by magnetorquers is constrained to belong to a plane orthogonal to the Earth magnetic field; hence, magnetorquers may be supported by additional torque actuators to achieve full three-axis control (see [1], Chapter 7). However, attitude control systems using only magnetorquers represent a feasible option especially for low-cost satellites or for satellites with a fault in the main attitude control system. Therefore, attitude control of spacecraft using magnetorquers is a very important topic in aerospace engineering.

Many control laws have been designed in such a setting, and an overview can be found in [2]. In particular, Celani [3] shows that attitude stabilization using only magnetorquers can be achieved by a proportional derivative-like (PD-like) control, which requires measurements of both attitude and attitude rate. The same work shows that attitude stabilization can also be obtained by using attitude-only feedback, which has the advantage of not requiring the installation of angular rate sensors (rate gyros), thus saving cost, volume, and weight. Those control laws contain parameters, and numerical values must be assigned to those parameters to practically implement the laws. A common way to find those values is the so-called trial-and-error approach, which consists of trying several values, rejecting those which are not acceptable, and eventually choosing the best among the acceptable ones. This approach has two important drawbacks: (a) it is often very time-consuming, and (b) it is not systematic. Thus, even if an obtained solution is satisfactory, it is not known whether protracting the search could lead to better solutions, nor what the magnitude of the possible improvement would be.

On the other hand, a systematic approach for determining the parameters should aim at finding the values which minimize the settling time of the attitude error. Such an approach has been recently proposed in [4, 5, 6]. However, this is not an easy objective to pursue: the settling time depends not only on the parameters but also on the initial conditions of the spacecraft. To overcome this issue, the above works propose to compute the values of the parameters that minimize the settling time obtained under the worst initial conditions, so as to provide good average performance and to bound the worst-case behavior. Hence, in the above works, the problem is modeled as a min-max problem, and the obtained parameter values are called robust optimal values.

This min-max problem is considerably challenging, since solving the main minimization problem (upper level) requires solving a maximization problem (lower level) at every evaluation of its objective function. A decomposition is not possible because the worst initial conditions are not known in general: they, in turn, depend on the adopted parameters. Optimizing this min-max problem involves two other mathematical issues:

  1. Settling time cannot be expressed as an analytical function of parameters and initial conditions; therefore the specification of an explicit optimization model is not possible.

  2. Settling time is not even continuous with respect to the parameters and the initial conditions, in the sense that small variations of either may produce a jump in the settling time.

To overcome these issues, the approach proposed in [4, 5, 6] relies on the use of derivative-free optimization algorithms as building blocks. These techniques do not need first-order information on the objective function nor do they need its analytical expression. They only need to compute the objective function over a number of points by means of simulations.

For the design of the specific optimization algorithm, a distinction must be made between spacecraft equipped with angular rate sensors, studied in [5], and spacecraft not equipped with those sensors, studied in [6]. This is because, in the first case, the control is based on the availability of measurements of both attitude and attitude rate and is realized through a PD-like control, while in the second case, the control can rely only on attitude feedback, and it is called attitude-only feedback control.

The first case, though presenting the mentioned difficulties, turns out to be the easier of the two, since the PD-like control law has only two parameters, and their robust optimal values can be successfully determined by means of a global search optimization procedure of the DIRECT type [7], as proposed in [5]. In the second case, the control law contains four parameters, and their range of variation is wider than in the first case. Thus, the determination of the robust optimal parameters becomes even harder, and a very complex derivative-free optimization algorithm based on a combination of both local and global search had to be developed. Moreover, numerical experience has shown that dealing with a discontinuous objective function makes the determination of the optimal solution very challenging even when using derivative-free algorithms. Thus, in Refs. [4, 6], the objective function is changed to the so-called integral time absolute error (ITAE), which is continuous with respect to both the parameters and the initial conditions. Such a change is acceptable since it has been shown that minimizing the ITAE is approximately equivalent to minimizing the settling time (see [8]).

This chapter describes in detail the abovementioned approaches to the determination of robust optimal values for the parameters in order to minimize the settling time or the ITAE obtained under the worst initial conditions. The exposition will highlight in particular the following main contributions: (i) the definition of a new systematic approach for the determination of the parameters for the PD-like control algorithm and for the attitude-only feedback, (ii) the formulation of a min-max optimization model to find a robust optimal solution to both problems, and (iii) the development of derivative-free optimization strategies to tackle the min-max problems.

The chapter is organized as follows: Section 2 defines the spacecraft model and the control algorithms for the two cases mentioned above; Section 3 describes the optimization model and the solution algorithm for the case of PD-like control; Section 4 provides some computational experience for this first case; Section 5 describes the optimization model and the solution algorithm for the case of attitude-only feedback control; Section 6 provides again computational experience for this second case.


2. Spacecraft model and control algorithms

The following coordinate frames are introduced to describe the attitude dynamics of an Earth-orbiting spacecraft and the Earth magnetic field:

Earth-centered inertial frame $F_i$. The origin of the frame is the center of the Earth, the $x_i$ axis coincides with the vernal equinox direction, the $z_i$ axis is the axis of rotation of the Earth and points northward, and the $y_i$ axis completes the frame (see [1], Chapter 2.6.1).

Spacecraft body frame $F_b$. Its origin is in the spacecraft mass center. The axes are attached to the spacecraft and are selected so that the (inertial) pointing objective is having $F_b$ aligned with $F_i$.

Since the objective is having $F_b$ aligned with $F_i$, consider the relative kinematics and dynamics of the satellite with respect to the inertial frame. Let the attitude of $F_b$ with respect to $F_i$ be represented by the quaternion $q = [q_1\ q_2\ q_3\ q_4]^T = [q_v^T\ \ q_4]^T$. The corresponding direction cosine matrix is equal to

$$ C(q) = \left(q_4^2 - q_v^T q_v\right) I + 2\, q_v q_v^T - 2\, q_4\, q_v^{\times} \tag{E1} $$

where $I$ is the identity matrix (see [9], Section 5.4). Moreover, given $a \in \mathbb{R}^3$, the symbol $a^{\times}$ denotes the skew-symmetric matrix

$$ a^{\times} \triangleq \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} \tag{E2} $$

so that the cross product $a \times b$ can be expressed as the matrix multiplication $a^{\times} b$. The attitude kinematics equation is $\dot q = W(q)\,\omega$ (see [9], Section 5.5.3), where $\omega \in \mathbb{R}^3$ is the spacecraft angular velocity resolved in $F_b$ and

$$ W(q) \triangleq \frac{1}{2}\begin{bmatrix} q_4 I + q_v^{\times} \\ -\,q_v^T \end{bmatrix} \tag{E3} $$

The attitude dynamics equation resolved in the body frame is $J\dot\omega = -\omega^{\times} J\omega + T$, where $J \in \mathbb{R}^{3\times 3}$ is the inertia matrix of the spacecraft and $T$ is the control torque resolved in $F_b$ (see [9]). Three magnetic coils aligned with the $F_b$ axes equip the spacecraft. Thus, the following magnetic attitude control torque is created

$$ T = m_{coils} \times B^b = -\,B^b \times m_{coils} \tag{E4} $$

In the previous expression, $m_{coils} \in \mathbb{R}^3$ is the column matrix of the magnetic dipole moments of the three coils, and $B^b$ is the Earth magnetic field at the spacecraft resolved in the body frame $F_b$ (see [1], Section 7.4.1). Let $B^i$ be the Earth magnetic field at the spacecraft expressed in the inertial frame $F_i$. Observe that $B^i$ changes in time, at least because of the motion of the spacecraft along the orbit. Hence,

$$ B^b(q,t) = C(q)\, B^i(t) \tag{E5} $$

showing the explicit dependence of $B^b$ on both $q$ and $t$. The previous equations grouped together lead to the following system

$$ \dot q = W(q)\,\omega, \qquad J\dot\omega = -\,\omega^{\times} J\omega - B^b(q,t)^{\times}\, m_{coils} \tag{E6} $$

where $m_{coils}$ is the control variable. Let us characterize the time dependence of $B^b(q,t)$, which is equivalent to characterizing the time dependence of $B^i(t)$. Assume a circular orbit of radius $R$. By adopting the so-called dipole model of the Earth magnetic field (see [10], Appendix H), we obtain:

$$ B^i(t) = \frac{\mu_m}{R^3}\left[\,3\left(\hat m^i(t)^T \hat r^i(t)\right)\hat r^i(t) - \hat m^i(t)\,\right] \tag{E7} $$

In the previous equation, $\mu_m$ is the total dipole strength, $r^i(t)$ is the position of the spacecraft resolved in $F_i$, and $\hat r^i(t)$ is the column matrix of the direction cosines of $r^i(t)$. The coordinates of the vector $\hat m^i(t)$ are the direction cosines of the Earth magnetic dipole expressed in $F_i$, which are written as follows:

$$ \hat m^i(t) = \begin{bmatrix} \sin\theta_m \cos(\omega_e t + \alpha_0) \\ \sin\theta_m \sin(\omega_e t + \alpha_0) \\ \cos\theta_m \end{bmatrix} \tag{E8} $$

where $\theta_m$ is the coelevation of the dipole, $\omega_e = 360.99$ deg/day is the average angular velocity of the Earth, and $\alpha_0$ is the dipole right ascension at $t = 0$. Set $\mu_m = 7.746 \times 10^{15}$ Wb·m and $\theta_m = 170.0°$ as reported in [11].

Eq. (7) shows that to characterize the time dependence of $B^i(t)$, one needs to determine an expression for $r^i(t)$, the spacecraft position vector resolved in $F_i$. Define the orbital-plane coordinate system $(a_p, b_p)$ whose origin is at the Earth center and whose $a_p$ axis coincides with the line of nodes. Then, the center of mass of the satellite is positioned at

$$ a_p(t) = R\cos(nt + \psi), \qquad b_p(t) = R\sin(nt + \psi) \tag{E9} $$

where $n$ is the orbital rate and $\psi$ is the spacecraft argument at time $t = 0$. It is possible to determine the mass center coordinates in the inertial frame $F_i$ from (9) by means of a rotation matrix which depends on the inclination incl of the orbit and on the right ascension of the ascending node (RAAN) $\Omega$ (see [1], Section 2.6.2). By inserting the expressions of the latter coordinates into (7), we obtain an explicit expression for $B^i(t)$.
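As an illustration, the following minimal Python sketch (not the authors' code) evaluates the dipole-model field (7)–(8) at the spacecraft position given by the circular orbit (9). The constants are those stated above; the helper `r_hat_i` and the specific rotation from the orbital plane to $F_i$ through the inclination and the RAAN are assumptions consistent with the construction described in ([1], Section 2.6.2).

```python
import numpy as np

# Illustrative sketch of Eqs. (7)-(9): Earth magnetic field at the spacecraft,
# expressed in the inertial frame F_i, under the dipole model.
MU_M = 7.746e15                            # total dipole strength [Wb*m]
THETA_M = np.deg2rad(170.0)                # dipole coelevation [rad]
OMEGA_E = np.deg2rad(360.99) / 86400.0     # Earth average angular velocity [rad/s]

def m_hat_i(t, alpha0):
    """Direction cosines of the Earth magnetic dipole in F_i, Eq. (8)."""
    a = OMEGA_E * t + alpha0
    return np.array([np.sin(THETA_M) * np.cos(a),
                     np.sin(THETA_M) * np.sin(a),
                     np.cos(THETA_M)])

def r_hat_i(t, n, psi, incl, raan):
    """Unit position vector in F_i: orbital-plane coordinates (9) rotated by the
    inclination and the RAAN (assumed rotation convention, see [1], Sec. 2.6.2)."""
    ap, bp = np.cos(n * t + psi), np.sin(n * t + psi)
    ci, si = np.cos(incl), np.sin(incl)
    cO, sO = np.cos(raan), np.sin(raan)
    return np.array([cO * ap - sO * ci * bp,
                     sO * ap + cO * ci * bp,
                     si * bp])

def B_i(t, R, n, psi, incl, raan, alpha0):
    """Dipole-model geomagnetic field at the spacecraft in F_i, Eq. (7)."""
    m, r = m_hat_i(t, alpha0), r_hat_i(t, n, psi, incl, raan)
    return MU_M / R**3 * (3.0 * np.dot(m, r) * r - m)
```

For the 450 km orbit considered later ($R \approx 6.828 \times 10^6$ m), this model gives field magnitudes between roughly $2.4 \times 10^{-5}$ and $5 \times 10^{-5}$ T, consistent with the bounds used in Sections 4 and 6.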

Since $C(q) = I$ for $q = [q_v^T\ q_4]^T = \pm\bar q$, where $\bar q = [0\ 0\ 0\ 1]^T$ (see (1)), the goal is to design control strategies for $m_{coils}$ so that $q_v \to 0$ and $\omega \to 0$. Reference [3] proposes the following stabilizing proportional derivative (PD)-like control law, which is a modification of those in [12, 13]:

$$ m_{coils} = m^{\star}_{coils}\ \mathrm{sat}\!\left(\frac{1}{m^{\star}_{coils}}\, B^b(q,t)^{\times}\big(\kappa_p\, q_v + \kappa_d\, \omega\big)\right) \tag{E10} $$

In the previous equation, $m^{\star}_{coils}$ is the saturation limit on each magnetic dipole moment, "sat" denotes the standard (componentwise) saturation function, and $\kappa_p$ and $\kappa_d$ are parameters. As shown in [3], if the orbit's inclination incl is not too low, there are large ranges of values $\kappa_p > 0$ and $\kappa_d > 0$ which lead to local exponential stability of the equilibrium $(q, \omega) = (\bar q, 0)$ for the closed-loop system (6) and (10).
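A minimal Python sketch of the dipole-moment command (10) follows; it assumes the componentwise saturation described above (with the per-axis limit $m^{\star}_{coils}$ passed as `m_star`) and is an illustration, not the authors' implementation.

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix a^x of Eq. (2)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def pd_like_dipole(q, omega, B_b, kp, kd, m_star=10.0):
    """PD-like law (10); m_star is the per-axis saturation level m*_coils."""
    qv = q[:3]
    u = skew(B_b) @ (kp * qv + kd * omega)
    return m_star * np.clip(u / m_star, -1.0, 1.0)   # componentwise saturation
```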

The following attitude-only feedback, obtained as a modification of the one described in [12], is also proposed in [3]:

$$ \dot\delta = \alpha\,(q - \beta\delta), \qquad m_{coils} = m^{\star}_{coils}\ \mathrm{sat}\!\left(\frac{1}{m^{\star}_{coils}}\, B^b(q,t)^{\times}\big(\kappa_1\, q_v + \kappa_2\, \alpha\beta\, W(q)^T (q - \beta\delta)\big)\right) \tag{E11} $$

In (11), $\delta \in \mathbb{R}^4$ is an internal state of the controller, $\kappa_1$, $\kappa_2$, $\alpha$, $\beta$ are parameters, and $W(q)$ was introduced in (3). Note that the previous equation describes an attitude-only feedback, since it requires only the measurement of the attitude $q$ and not of the attitude rate $\omega$. It thus has the advantage of not requiring the installation of rate gyros, with savings in cost, volume, and weight. As shown in [3], if the orbit's inclination incl is not too low, there are large ranges of values $\kappa_1 > 0$, $\kappa_2 > 0$, $\alpha > 0$, $\beta > 0$ which lead to local exponential stability of the equilibrium $(q, \omega, \delta) = (\bar q, 0, \tfrac{1}{\beta}\bar q)$ for the closed-loop system (6) and (11).
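The attitude-only law (11) can be sketched in the same style: the controller carries the internal state $\delta \in \mathbb{R}^4$ and uses only the quaternion measurement. Again, this is a hedged illustration under the same assumptions as the previous sketch, not the authors' code.

```python
import numpy as np

def skew(a):  # same helper as in the previous sketch, Eq. (2)
    return np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])

def W(q):
    """4x3 kinematics matrix of Eq. (3), so that qdot = W(q) @ omega."""
    qv, q4 = q[:3], q[3]
    return 0.5 * np.vstack([q4 * np.eye(3) + skew(qv), -qv])

def attitude_only_dipole(q, delta, B_b, k1, k2, alpha, beta, m_star=10.0):
    """Dipole command of Eq. (11); delta is the controller internal state."""
    qv = q[:3]
    u = skew(B_b) @ (k1 * qv + k2 * alpha * beta * (W(q).T @ (q - beta * delta)))
    return m_star * np.clip(u / m_star, -1.0, 1.0)

def delta_dot(q, delta, alpha, beta):
    """Controller state dynamics in Eq. (11): delta_dot = alpha * (q - beta * delta)."""
    return alpha * (q - beta * delta)
```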

For both control laws, there are no precise indications for choosing the parameters. In practice, they are usually computed by resorting to a trial-and-error approach. This causes two main limitations: (1) the computation is quite time-consuming, and (2) it is not systematic. This means that, when a satisfactory set of parameter values is finally obtained, it is unknown whether continuing the search could lead to new parameter values producing a better performance of the closed-loop system. And, if the search is continued, it is unknown when it should be stopped and what improvements could be obtained with this additional search. In any case, values providing an overall better performance can easily be overlooked, unless all possible values are tried. However, such an exhaustive search is generally impracticable, because the search space is far too vast to be completely explored. Therefore, we describe here a different approach to find the values of the feedback parameters. Since the desired attitude is obtained when $q_v = 0$, we define the settling time $t_{si}$ for each component $q_i$, with $i \in \{1,2,3\}$, as:

$$ t_{si} \triangleq \min\left\{\, \bar t \ge 0 \ :\ |q_i(t)| \le \nu \quad \forall\, t \ge \bar t \,\right\} \tag{E12} $$

In other words, this is the time needed for $|q_i|$ to permanently remain below $\nu$. The value $0 < \nu < 1$ can be set depending on the desired final accuracy on $q_i$. Finally, we define the settling time $t_s$ for the whole quaternion $q$ as the time required by the slowest component of $q_v$, hence

$$ t_s \triangleq \max_{i = 1,2,3}\ t_{si} \tag{E13} $$

Now, having set the spacecraft initial conditions to specific values, one can determine the values of the parameters that minimize the settling time $t_s$ for each of the control laws (10) and (11).
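In practice, $t_s$ is evaluated from a simulated trajectory. The sketch below computes (12)–(13) from sampled data; the default threshold $\nu = 0.05$ is only an illustrative choice.

```python
import numpy as np

def settling_time(t, qv, nu=0.05):
    """Settling time (12)-(13) from sample times t (shape (N,)) and the vector
    part of the quaternion qv (shape (N, 3))."""
    ts_i = []
    for i in range(3):
        above = np.flatnonzero(np.abs(qv[:, i]) > nu)   # samples violating |q_i| <= nu
        if above.size == 0:
            ts_i.append(t[0])                # settled from the start
        elif above[-1] + 1 < len(t):
            ts_i.append(t[above[-1] + 1])    # first instant after the last violation
        else:
            ts_i.append(np.inf)              # never settles within the simulated horizon
    return max(ts_i)                         # Eq. (13): slowest component
```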


3. Determination of optimal parameters for the case of PD-like control

In the case of PD-like control (10), the minimization of the settling time can be formulated as follows

$$ \min_{\kappa_p \ge 0,\ \kappa_d \ge 0}\ t_s \tag{E14} $$

In order to practically solve the problem, the feasible set should be reasonably bounded and not infinite. Hence, we define two upper bounds $\hat\kappa_p$ and $\hat\kappa_d$ for the gains $\kappa_p$ and $\kappa_d$. These values can usually be determined for the specific problem; we do this for our case study in Section 4. Thus, the problem becomes as follows, where the dependence of $t_s$ on the initial conditions $q(0) = q_0$, $\omega(0) = \omega_0$, and $\psi$ (see (9)) is explicitly indicated,

$$ \min_{(\kappa_p,\, \kappa_d) \in K}\ t_s(\kappa_p, \kappa_d, q_0, \omega_0, \psi) \tag{E15} $$

and the feasible set is $K = \{(\kappa_p, \kappa_d) : 0 \le \kappa_p \le \hat\kappa_p,\ 0 < \kappa_d \le \hat\kappa_d\}$. Even though $t_s$ obviously depends on $\kappa_p$ and $\kappa_d$ and on the initial conditions, it is not possible in practice to express this relation in analytical form. Moreover, $t_s$ is discontinuous with respect to $\kappa_p$, $\kappa_d$, and the initial conditions; see, for example, ([14], p. 233) for a proof that the settling time of a step response is not continuous with respect to the system parameters.

Now, problem (15) has the following features: (i) an analytic expression of the objective function cannot be given, and (ii) the objective function is not continuous with respect to the decision variables. Therefore, we cannot use standard optimization techniques. Instead, we need to use derivative-free optimization.

3.1 From the simple min to the min-max problem

Given the initial conditions $(q_0, \omega_0, \psi)$, problem (15) can be solved to optimality by using the global derivative-free optimization technique described in Section 3.2. However, when the initial conditions change, that solution may no longer be optimal. An example of this is given in Section 4.1. Therefore, a robust approach consists in searching for the optimal solution to (15) under the worst initial conditions. Such a worst-case approach is commonly used in similar situations. In other words, we accept paying a price, in terms of objective function value, in the easy cases, but we obtain in return advantages in the more difficult situations. However, the worst initial conditions are not known a priori, since they depend on the chosen values of $\kappa_p$ and $\kappa_d$. Therefore, the problem cannot be decomposed and must be solved as a whole.

The set of values describing the initial conditions is given by:

$$ S = \big\{(q_0, \omega_0, \psi) :\ \lVert q_{0v} \rVert \le 1,\ q_{04} = \left(1 - q_{0v}^T q_{0v}\right)^{1/2},\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi \big\} \tag{E16} $$

Set $S$ includes any possible initial attitude and any possible initial argument $\psi$ for the spacecraft; only the magnitude of the initial angular rate is limited. The worst-case minimization of $t_s$ can be modeled as follows:

$$ \min_{(\kappa_p,\, \kappa_d) \in K}\ \max_{(q_0, \omega_0, \psi) \in S}\ t_s(\kappa_p, \kappa_d, q_0, \omega_0, \psi) \tag{E17} $$

To use the derivative-free algorithm explained in Section 3.2, the feasible set of each optimization problem must now be converted into a hyperrectangle. This can be obtained by expressing the ball $\lVert q_{0v} \rVert \le 1$ in spherical coordinates $(\rho, \phi, \theta)$:

$$ S = \big\{(q_0, \omega_0, \psi) :\ q_{01} = \rho\sin\theta\cos\phi,\ q_{02} = \rho\sin\theta\sin\phi,\ q_{03} = \rho\cos\theta,\ q_{04} = \left(1 - q_{0v}^T q_{0v}\right)^{1/2},\ 0 \le \rho \le 1,\ 0 \le \phi < 2\pi,\ 0 \le \theta \le \pi,\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi \big\} \tag{E18} $$

The dependence of $t_s$ on $q_0$ can thus be expressed as a dependence on the variables $(\rho, \phi, \theta)$. We introduce the hyperrectangle:

$$ H = \big\{(\rho, \phi, \theta, \omega_0, \psi) :\ 0 \le \rho \le 1,\ 0 \le \phi < 2\pi,\ 0 \le \theta \le \pi,\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi \big\} \tag{E19} $$

Now, the min-max problem (17) can be equivalently reformulated as follows:

$$ \min_{(\kappa_p,\, \kappa_d) \in K}\ \max_{(\rho, \phi, \theta, \omega_0, \psi) \in H}\ t_s(\kappa_p, \kappa_d, \rho, \phi, \theta, \omega_0, \psi) \tag{E20} $$
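The change of variables in (18) can be coded directly; the short sketch below maps a point of the hyperrectangle $H$ to an admissible initial quaternion (illustrative only).

```python
import numpy as np

def initial_quaternion(rho, phi, theta):
    """Map spherical coordinates (rho, phi, theta) in H to q0, as in (18)."""
    q0v = rho * np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    q04 = np.sqrt(max(0.0, 1.0 - q0v @ q0v))   # q04 = (1 - q0v^T q0v)^(1/2)
    return np.append(q0v, q04)
```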

3.2 The whole derivative-free optimization approach

This section explains the min-max procedure to solve problem (20). Its building blocks are given by the following derivative-free algorithms:

3.2.1 The global strategy

Lipschitzian methods constitute a main approach in non-differentiable optimization. However, they are limited by their requirement of knowing the value of the Lipschitz constant. On the other hand, the search may be conducted without knowing the Lipschitz constant if we use DIRECT-type algorithms [15]. These techniques are based on partitioning the feasible region into hyperrectangles and on examining them in a specific order. The feasible region starts as a single hyperrectangle that is internally normalized to a unit hyperrectangle. At each iteration $k$, the algorithm partitions the hyperrectangles identified in the previous, $(k-1)$-th, iteration, obtaining a collection $\mathcal{H}_k$ of smaller hyperrectangles, and evaluates the objective function at their central points. The computation of the objective values allows the identification of the potentially optimal hyperrectangles within $\mathcal{H}_k$. These hyperrectangles are further partitioned and investigated in the next, $(k+1)$-th, iteration of the algorithm, while the rest are simply discarded. The algorithm stops when the hyperrectangle size becomes very small or when the maximum number of iterations is reached. This algorithm is guaranteed to converge to the global optimum of the function if the sampling is dense enough. However, a large number of function evaluations might be needed to obtain dense sampling, and consequently large computational times could be required.
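As a toy illustration of a DIRECT-type search over a box, recent SciPy releases ship an implementation; the snippet below assumes a SciPy version that provides scipy.optimize.direct and uses an arbitrary smooth test function, unrelated to the spacecraft problem.

```python
from scipy.optimize import direct  # assumes a SciPy release that includes DIRECT

# Minimize a simple test function over a 2-D box with a budget of 500 evaluations.
res = direct(lambda z: (z[0] - 0.3) ** 2 + (z[1] + 0.6) ** 2,
             bounds=[(-1.0, 1.0), (-1.0, 1.0)], maxfun=500)
print(res.x, res.fun)   # approximately (0.3, -0.6) and a value close to 0
```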

3.2.2 The local strategy

When the function under optimization is sufficiently regular, it can be optimized by using gradient-based methods, which require the computation of the first-order derivatives of the objective function. However, the basic strategy underlying these methods can be replicated even if no information on the derivatives is available. This is the basic idea of the SDBOX algorithm [16], which can be described as follows. After the selection of a starting point, it cyclically determines a feasible and good descent direction for the objective function, and then it performs a sufficiently large step along such a direction. To find the good feasible descent direction, at each iteration $k$, the algorithm examines the local behavior of the objective function along the coordinate direction $d_i$ associated with the $i$-th variable. If a move of length $\alpha$ along $d_i$ gives a feasible point where the function is reduced enough, the algorithm applies a linesearch technique along $d_i$ to find the stepsize $\alpha_k$. Otherwise, the algorithm tries the same operations in the opposite direction $-d_i$. If neither $d_i$ nor $-d_i$ yields a sufficient decrease, the stepsize $\alpha$ is decreased. The linesearch technique does not require information on the slope of the objective function. Note that the convergence of the algorithm was proved in [16] for the minimization of a continuously differentiable function; however, the same technique is often used to optimize other types of functions. This approach is considerably faster than the global strategy; however, it is well known that a poor choice of the initial point leads to poor solutions.
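The following is a simplified coordinate-direction local search in the spirit of SDBOX [16]: it uses a sufficient-decrease test, a crude step expansion in place of the linesearch, and step reduction when neither direction improves. It is an illustrative sketch, not the library implementation.

```python
import numpy as np

def coordinate_search(f, x0, lb, ub, alpha0=0.1, maxeval=200, gamma=1e-6, theta=0.5):
    """Derivative-free local search on the box [lb, ub] starting from x0."""
    x = np.clip(np.asarray(x0, float), lb, ub)
    fx = f(x)
    alpha = np.full(x.size, alpha0)
    evals = 1
    while evals < maxeval and alpha.max() > 1e-12:
        for i in range(x.size):
            moved = False
            for sign in (+1.0, -1.0):                    # try d_i, then -d_i
                if evals >= maxeval:
                    break
                xt = x.copy()
                xt[i] = np.clip(x[i] + sign * alpha[i], lb[i], ub[i])
                ft = f(xt)
                evals += 1
                if ft <= fx - gamma * alpha[i] ** 2:     # sufficient decrease
                    x, fx, moved = xt, ft, True
                    alpha[i] /= theta                    # enlarge the step (crude linesearch)
                    break
            if not moved:
                alpha[i] *= theta                        # reduce the step along this axis
        if evals >= maxeval:
            break
    return x, fx
```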

3.2.3 The whole procedure

To simplify the description of the robust optimization approach, we now rename the set of variables $(\kappa_p, \kappa_d)$ as $x$, belonging to a feasible set $F_x = \{x \in \mathbb{R}^n : lb_i \le x_i \le ub_i,\ i = 1, \dots, n\}$ (in our case $F_x = K$ and $n = 2$), and the set of initial conditions $(\rho, \phi, \theta, \omega_0, \psi)$ as $y$, belonging to a feasible set $F_y = \{y \in \mathbb{R}^m : lb_j \le y_j \le ub_j,\ j = 1, \dots, m\}$ (in our case $F_y = H$ and $m = 7$, since $\omega_0$ has three components). Moreover, we now use $g(x,y)$ to denote the objective function (in our case $t_s$). Its analytical expression is not known; however, its value can be computed via a software simulation.

The upper-level minimization problem $\min_{x \in F_x} g(x,y)$ must be solved by means of a global strategy applied once: no starting points can be considered here; hence, there is no ground for a local strategy. Thus, the procedure will contain an external loop implementing the global strategy. This loop essentially computes the value of $g$ obtained for a number maxeval_ext of points $x$. Let $\bar x$ be one of them. The evaluation of $\bar x$, however, requires solving one lower-level maximization problem $\max_{y \in F_y} g(\bar x, y)$. Hence, the lower-level problem must be solved many times, and its computation time must be reduced to a few seconds. We can solve it in three ways: (1) with the global strategy, but using a necessarily small maximum number of function evaluations maxeval_int; (2) with the local strategy, allowing the same number of function evaluations from a single start; and (3) with the local strategy, using a multistart from different starting points, still allowing maxeval_int total function evaluations. This last option allows exploiting any information that happens to be available when choosing the starting points. In our case, from the physics of the problem, we expect good solutions to lie in the vicinity of extremal values of the angular velocity, so we take these as starting points. We report the whole procedure below (a code sketch of the nested scheme follows it):

Procedure 1: Solve min-max

  1. Input: A function $g(x,y)$ computable by means of a software simulation for $x_i \in [lb_i, ub_i]$, $i = 1, \dots, n$, and $y_j \in [lb_j, ub_j]$, $j = 1, \dots, m$.

  2. Output: A robust optimum value $x^{\ast} = \arg\min_{x \in F_x} \max_{y \in F_y} g(x,y)$.

  3. External loop:

  4. Solve the upper-level problem $\min_{x \in F_x} f(x)$, with $f(x) = \max_{y \in F_y} g(x,y)$, by using DIRECT for maxeval_ext evaluations of $f$, and return $x^{\ast}$. The $k$-th evaluation works with point $x_k$.

  5.   Internal loop:

  Solve the lower-level problem $\max_{y \in F_y} g(x_k, y)$ by using one of the following:

    • DIRECT, performing maxeval_int evaluations of $g$;
    • SDBOX single start, performing maxeval_int evaluations of $g$;
    • SDBOX multistart with $s$ starting points, performing maxeval_int/$s$ evaluations of $g$ for each of them.
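A sketch of the nested scheme (multistart variant) is given below. It assumes a SciPy version providing scipy.optimize.direct, uses the bound-constrained Powell method as a stand-in for the SDBOX local search, and calls a hypothetical simulator `settling_time_sim(x, y)` returning $t_s$ for gains `x` and initial conditions `y`.

```python
import numpy as np
from scipy.optimize import direct, minimize  # assumes SciPy with DIRECT available

def worst_case_ts(x, y_bounds, y_starts, maxeval_int=2200):
    """Internal loop: multistart local maximization of t_s over the initial conditions."""
    worst = -np.inf
    for y0 in y_starts:
        res = minimize(lambda y: -settling_time_sim(x, y),   # hypothetical simulator
                       np.asarray(y0, float), method="Powell", bounds=y_bounds,
                       options={"maxfev": maxeval_int // len(y_starts)})
        worst = max(worst, -res.fun)
    return worst

def solve_min_max(x_bounds, y_bounds, y_starts, maxeval_ext=10000):
    """External loop: DIRECT over the gains, each evaluated at its worst case."""
    res = direct(lambda x: worst_case_ts(x, y_bounds, y_starts),
                 x_bounds, maxfun=maxeval_ext)
    return res.x, res.fun
```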

4. Computational results for the case of PD-like control

We apply the above approach to solve the numerical example studied in [3]. The inertia matrix of the spacecraft is $J = \mathrm{diag}(27, 17, 25)$ kg·m²; the saturation level for each magnetic dipole moment is $m^{\star}_{coils} = 10$ A·m². The orbit has an inclination incl = 87° and an altitude of 450 km. The orbital period in these conditions is about 5609 s. The value $\Omega$ of the RAAN is 0. Given these values, we can compute an upper bound $\hat\kappa_p$ for $\kappa_p$. Simulation shows that $\lVert B^i(t) \rVert \ge B_{min} = 2.4 \times 10^{-5}$ T for the considered orbit; since $\lVert B^b(q,t) \rVert = \lVert B^i(t) \rVert$, it follows that $\lVert B^b(q,t) \rVert \ge B_{min}$. Assume that the resolution of the attitude sensor is $q_r = 0.004$ in terms of quaternion components. In this case, an upper bound for $\kappa_p$ can be computed by enforcing that each component of the term $B^b(q,t)^{\times}\kappa_p q_v$ (see (10)) does not exceed the saturation limit $m^{\star}_{coils} = 10$ A·m² when $B^b(q,t)$ and $q_v$ are orthogonal, $\lVert B^b(q,t) \rVert = B_{min}$, and $q_i = q_r$ for one index $i \in \{1,2,3\}$ and $q_i = 0$ for the other ones. This is achieved if $\kappa_p$ is smaller than $m^{\star}_{coils}/(B_{min}\, q_r) = 1.0417 \times 10^{8}$; thus, we set $\hat\kappa_p = 10^8$. Assuming that the attitude rate sensor has a resolution of $\omega_r = 4 \times 10^{-4}$ rad/s, a parallel argument shows that an upper bound for $\kappa_d$ is given by $m^{\star}_{coils}/(B_{min}\, \omega_r) = 1.0417 \times 10^{9}$; thus, we set $\hat\kappa_d = 10^9$.
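A quick back-of-the-envelope check of these bounds, using the sensor resolutions stated above (illustrative only):

```python
m_star, B_min = 10.0, 2.4e-5          # saturation level [A*m^2], minimum field [T]
q_r, omega_r = 0.004, 4e-4            # attitude and rate sensor resolutions
kappa_p_hat = m_star / (B_min * q_r)      # ~1.0417e8, rounded down to 1e8
kappa_d_hat = m_star / (B_min * omega_r)  # ~1.0417e9, rounded down to 1e9
print(kappa_p_hat, kappa_d_hat)
```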

4.1 Fixed initial conditions

Consider an initial state characterized by an attitude equal to the target attitude, $q_0 = \bar q$ (which corresponds to having $\rho = 0$ and any value for $\phi$ and $\theta$), and by the following initial angular rate:

$$ \omega_0 = [\,0.02\ \ 0.02\ \ 0.03\,]^T\ \mathrm{rad/s} \tag{E21} $$

This example corresponds to a spacecraft possessing the desired attitude and no angular momentum, which received an impact from a small object leading to an instantaneous change in the spacecraft angular rate. We assign to $\psi$ a random value over the interval $[0, 2\pi)$ by setting $\psi = 0.332$ rad.

Gains $\kappa_p$ and $\kappa_d$ were found by trial and error in [3] as $\kappa_p = 2 \times 10^5$ and $\kappa_d = 3 \times 10^8$. The settling time corresponding to these values of the gains is $t_s = 4.002$ orbital periods, while the vast majority of possible gain values would produce settling times larger than 10 orbital periods. On the contrary, if we use the optimization algorithm described in Section 3.2, we obtain the following fixed-initial-condition optimal gains after only 32,821 iterations (i.e., 251 s of computation):

$$ \kappa_p = 205761.316872, \qquad \kappa_d = 99382716.049383 \tag{E22} $$

The corresponding optimal settling time is $t_s = 1.775$ orbital periods. Hence, the optimization algorithm can provide a significant improvement in convergence speed.

However, the above gains might no longer be optimal if we assume different initial conditions. Consider, for example, the case of initial conditions with the same attitude $q_0 = \bar q$ (i.e., $\rho = 0$ and any $\phi$ and $\theta$), the same argument $\psi = 0.332$ rad, but a different initial angular rate

$$ \omega_0 = [\,0.1\ \ 0.1\ \ 0.1\,]^T\ \mathrm{rad/s} \tag{E23} $$

The above gains yield a settling time $t_s = 3.562$ orbital periods, and they are no longer optimal, as shown in Section 4.2. Indeed, the optimal values of the gains would need to be determined for every possible initial condition, which is clearly impossible in practice. As seen in Section 3.1, we can instead search for the gain values providing the best performance under the worst initial condition. It has been found that the worst initial conditions corresponding to gains (22) are

$$ \rho = 0.5,\quad \phi = 0.0,\quad \theta = 2.356194490192345,\quad \psi = 1.570796326794897,\quad \omega_0 = [\,0.1\ \ 0.1\ \ 0.1\,]^T \tag{E24} $$

Under conditions (24), the above values of $\kappa_p$ and $\kappa_d$ yield a settling time $t_s = 4.338$ orbital periods.

4.2 Variable initial conditions

We now search for $\kappa_p$ and $\kappa_d$ using the robust optimization approach. We allow the initial conditions to vary as follows: $0 \le \rho \le 1$, $0 \le \phi < 2\pi$, $0 \le \theta \le \pi$, $|\omega_{01}| \le 0.1$, $|\omega_{02}| \le 0.1$, $|\omega_{03}| \le 0.1$, and $0 \le \psi < 2\pi$. We allow a maximum of 2,200 function evaluations in each internal loop, in order to practically solve the problem. Table 1 reports the results of Procedure 1 (Solve min-max) with maxeval_ext = 10,000 and maxeval_int = 2200.

               DIRECT + DIRECT         DIRECT + SDBOX single start   DIRECT + SDBOX multistart
κ_p            205493.82716049382      203930.04115226338            209581.61865569273
κ_d            117283950.61728397      154320987.65432101            117283950.61728397
t_s            3.359                   3.142                         3.742
ext. eval.     10,641                  11,127                        10,011
Time (s)       172,242.6               57,255.1                      89,640.8

Table 1.

Results of Procedure 1 (Solve min-max) with maxeval_ext = 10,000 and maxeval_int = 2200.

When using SDBOX multistart, the starting points are $\rho = 0.5$, $\phi = \pi$, $\theta = \pi/2$, $\psi = \pi$, and all eight combinations of extreme angular velocities

$$ [\,0.1\ \ 0.1\ \ 0.1\,]^T,\ [\,0.1\ \ 0.1\ \ {-0.1}\,]^T,\ \dots,\ [\,{-0.1}\ \ {-0.1}\ \ {-0.1}\,]^T \tag{E25} $$

These results show that (i) SDBOX multistart is able to reach the highest value of the objective $t_s$ even with the small number of evaluations allowed in the internal loop and (ii) the different results in the solution of the lower-level problems cause a different evolution of the upper-level search. To understand which among the three gain pairs above is the best choice, we continue the analysis in Table 2. We fix the parameters $\kappa_p$ and $\kappa_d$, and we solve the maximization problem with increased accuracy by allowing more function evaluations. We do this by means of the global strategy, the local multistart strategy (which proved to dominate the single-start one), and an exhaustive grid search, which is much slower but is used here as a reference.

Gains                                                 DIRECT            SDBOX multistart           Grid search
                                                      eval = 40,000     eval = 40,000 (5000 × 8)   eval = 78,125 (5^7)
κ_p = 205493.82716049382, κ_d = 117283950.61728397    3.876             3.872                      3.990
κ_p = 203930.04115226338, κ_d = 154320987.65432101    4.223             4.149                      4.362
κ_p = 209581.61865569273, κ_d = 117283950.61728397    3.895             3.744                      4.016

Table 2.

Accurate evaluation of the previously obtained gains.

This analysis shows that the first gain pair has the smallest worst-case settling time. In conclusion, the robust optimal solution is:

$$ \kappa_p = 205493.82716049382, \qquad \kappa_d = 117283950.61728397 \tag{E26} $$

To evaluate the performance of the above solution (26), we try it with the three different initial conditions reported above. If the initial state is (21), the settling time becomes $t_s = 1.945$ (instead of 1.775). Hence, there is a small worsening. Clearly, no improvement was possible because gains (22) are optimal for state (21). If the initial state is given by (23), we obtain $t_s = 2.958$ (instead of 3.562). Hence, there is an improvement. This could be expected, though not guaranteed, since gains (22) have a higher worst-case result than gains (26).

Finally, if the initial state is given by (24), then we obtain $t_s = 3.370$ (instead of 4.338). Hence, there is a substantial improvement. Indeed, this was certain, since gains (26) have a worst-case result considerably better than gains (22). In conclusion, the gains computed with the proposed robust optimization approach offer improvements in the difficult situations. Moreover, even if they may be suboptimal in the easier cases, the balance appears beneficial.


5. Determination of optimal parameters for the case of attitude-only feedback control

As observed, the objective function $t_s$ is not continuous with respect to the parameters, and this introduces numerical difficulties in solving the optimization problem. Thus, one can consider an alternative objective function, the integral time absolute error (ITAE) [8], denoted by $\Gamma$:

$$ \Gamma = \int_0^{T_f} t\, \lVert q_v(t) \rVert\, dt \tag{E27} $$

where $\lVert \cdot \rVert$ denotes the Euclidean norm and $T_f$ is a time selected large enough. The ITAE has the benefit of being continuous with respect to the design parameters. Continuous differentiability cannot be guaranteed: $\Gamma$ would fail to be continuously differentiable with respect to the design parameters if $q_v$ became $[0\ 0\ 0]^T$ at some finite time, because $\lVert q_v \rVert$ is not continuously differentiable at $q_v = [0\ 0\ 0]^T$. However, since $(q, \omega, \delta) = (\pm\bar q, 0, \pm\tfrac{1}{\beta}\bar q)$ are equilibria of the closed-loop system (6) and (11), it is very unlikely that $q_v$ becomes exactly $[0\ 0\ 0]^T$ at some finite time.
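In practice, $\Gamma$ is also evaluated from a simulated trajectory; a minimal sketch using the trapezoidal rule over sampled data follows.

```python
import numpy as np

def itae(t, qv):
    """ITAE (27) from sample times t (shape (N,)) and q_v samples (shape (N, 3))."""
    g = t * np.linalg.norm(qv, axis=1)                         # t * ||q_v(t)||
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))  # trapezoidal rule
```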

Minimizing the ITAE is known to lead to nearly optimal solutions with regard to the settling time. In fact, in very simple situations, it is shown analytically that minimizing the ITAE leads to solutions that minimize the settling time. In more complex scenarios, it was numerically shown that minimizing the ITAE gives solutions that are very close to the optimal ones in terms of settling time (see [8]).

By introducing physically reasonable upper bounds $\hat\kappa_1$, $\hat\kappa_2$, $\hat\alpha$, $\hat\beta$ for the design parameters, we obtain the feasible set $K = \{(\kappa_1, \kappa_2, \alpha, \beta) : 0 \le \kappa_1 \le \hat\kappa_1,\ 0 \le \kappa_2 \le \hat\kappa_2,\ 0 \le \alpha \le \hat\alpha,\ 0 \le \beta \le \hat\beta\}$. Now, our optimization problem is:

$$ \min_{(\kappa_1, \kappa_2, \alpha, \beta) \in K} \Gamma \tag{E28} $$

Given specific initial conditions of the spacecraft, problem (28) can be solved by a suitable use of derivative-free techniques. However, if the initial conditions of the spacecraft change, that solution might no longer be optimal. Since several different initial conditions for the spacecraft are possible in practical situations, a robust solution is an optimal solution to problem (28) under the worst spacecraft initial conditions. Such a worst-case optimization is widely employed in such scenarios, because by adopting this approach we can guarantee a bound on the objective value in spite of the uncertainty on the spacecraft initial conditions. However, the worst initial conditions for the spacecraft cannot be determined a priori, since they depend on the selected values of $\kappa_1$, $\kappa_2$, $\alpha$, $\beta$. The initial conditions of the spacecraft are given by $q(0) = q_0$, $\omega(0) = \omega_0$, $0 \le \psi < 2\pi$, and $0 \le \alpha_0 < 2\pi$. We choose the set of their possible values as:

$$ S = \big\{(q_0, \omega_0, \psi, \alpha_0) :\ \lVert q_{0v} \rVert \le 1,\ q_{04} = \left(1 - q_{0v}^T q_{0v}\right)^{1/2},\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi,\ 0 \le \alpha_0 < 2\pi \big\} \tag{E29} $$

Note that S includes all possible initial attitudes, all possible right ascensions of the Earth magnetic dipole at time t = 0 , and all possible initial arguments ψ for the spacecraft. It only constrains the amplitude of the initial angular velocity. The minimization of Γ under the worst spacecraft initial conditions is equivalent to formulating the following min-max problem:

$$ \min_{(\kappa_1, \kappa_2, \alpha, \beta) \in K}\ \max_{(q_0, \omega_0, \psi, \alpha_0) \in S}\ \Gamma \tag{E30} $$

To apply the optimization techniques, we convert the feasible set of each optimization problem into a hyperrectangle by expressing the ball $\lVert q_{0v} \rVert \le 1$ in spherical coordinates $(\rho, \phi, \theta)$:

$$ S = \big\{(q_0, \omega_0, \psi, \alpha_0) :\ q_{01} = \rho\sin\theta\cos\phi,\ q_{02} = \rho\sin\theta\sin\phi,\ q_{03} = \rho\cos\theta,\ q_{04} = \left(1 - q_{0v}^T q_{0v}\right)^{1/2},\ 0 \le \rho \le 1,\ 0 \le \phi < 2\pi,\ 0 \le \theta \le \pi,\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi,\ 0 \le \alpha_0 < 2\pi \big\} \tag{E31} $$

The dependence of $\Gamma$ on $q_0$ can now be expressed as a dependence on the variables $(\rho, \phi, \theta)$. Consequently, after having introduced the hyperrectangle

$$ H = \big\{(\rho, \phi, \theta, \omega_0, \psi, \alpha_0) :\ 0 \le \rho \le 1,\ 0 \le \phi < 2\pi,\ 0 \le \theta \le \pi,\ |\omega_{01}| \le \hat\omega_{01},\ |\omega_{02}| \le \hat\omega_{02},\ |\omega_{03}| \le \hat\omega_{03},\ 0 \le \psi < 2\pi,\ 0 \le \alpha_0 < 2\pi \big\} \tag{E32} $$

the min-max problem (30) can be equivalently reformulated as follows:

$$ \min_{(\kappa_1, \kappa_2, \alpha, \beta) \in K}\ \max_{(\rho, \phi, \theta, \omega_0, \psi, \alpha_0) \in H}\ \Gamma \tag{E33} $$

5.1 Combining global and local search

To simplify the description of the proposed approach, we now rename the set of design parameters $(\kappa_1, \kappa_2, \alpha, \beta)$ as $x$, belonging to a feasible set $F_x = \{x \in \mathbb{R}^n : lbx_i \le x_i \le ubx_i,\ i = 1, \dots, n\}$ (in our case $F_x = K$ and $n = 4$), and the set of initial conditions $(\rho, \phi, \theta, \omega_0, \psi, \alpha_0)$ as $y$, belonging to a feasible set $F_y = \{y \in \mathbb{R}^m : lby_j \le y_j \le uby_j,\ j = 1, \dots, m\}$ (in our case $F_y = H$ and $m = 8$, since $\omega_0$ has three components). We indicate by $f(x,y)$ the function giving the objective value (in our case $\Gamma$). When $y$ is fixed (the initial conditions are assigned), we simply write $\bar y$ in its place; when $x$ is fixed (the design parameters are assigned), we write $\bar x$. Problem (28) can be written as

$$ \min_{x \in F_x} f(x, \bar y) \tag{E34} $$

and may be tackled by a global derivative-free optimization algorithm of the DIRECT type [7], already described in Section 3. Such methods work without the need for writing the objective function analytically; they only need to compute it at a number of points by using simulations. Due to the so-called everywhere-dense property, such an algorithm reaches a global optimum if the sampling is dense enough. However, a dense search may require that the function be evaluated many times.

In our case, all $lbx_i = 0$, since the design parameters have to be greater than or equal to zero. The values $ubx_i$ can typically be set to very large values based on some physical considerations. For instance, $\hat\kappa_1$ can easily be equal to $10^9$, since its maximum feasible value can be determined knowing the saturation level of the coil moments, the minimum amplitude of the geomagnetic field, and the attitude sensor resolution. However, with these values, a sufficiently dense exploration of the feasible set $F_x$ requires a number of function evaluations such that the corresponding run time is impracticable. On the other hand, a sampling that uses a practically sustainable number of function evaluations does not produce solutions substantially better than random solutions. The presence of a large number of local minima makes the optimization task particularly difficult. In such conditions, the evaluation of the generic $H_h$ using only one point may be very inaccurate during the first iterations of the algorithm, because the dimensions of the hyperrectangles are too large. As the iterations proceed, the hyperrectangles become smaller, but their number, and consequently the run time, increases exceedingly.

On the other hand, if a fast but effective probing technique to identify early the "promising" $H_h$ were available, then one could densely explore only such promising regions in reasonable time. Hence, a probing technique based on the use of the local derivative-free optimization algorithm SDBOX was proposed in [6]. This algorithm was originally presented in [16] as a globally convergent algorithm for the minimization of a continuously differentiable function, but in practice it can be employed to optimize other types of functions as a good trade-off between efficiency and convergence properties. Most interestingly for our case, given an initial guess $(\kappa_1, \kappa_2, \alpha, \beta) = x$, this algorithm should be able to find good solutions in short times in the neighborhood of $x$. We describe below a solution approach combining these global and local strategies to solve problem (28).

Procedure 2: Solve min combining global and local search

  1. Input: A vector $\bar y \in F_y$ and a function $f(x, \bar y)$ computable by means of a software simulation for any $x \in F_x$. Values for the parameters $p$, maxeval, maxsubsets, maxiter, maxpost.

  2. Output: A solution $x^{\ast}$ approximating one vector in $\arg\min_{x \in F_x} f(x, \bar y)$.

    1. Normalize and grid partition the whole feasible set $F_x$ into a collection of hyperrectangles $\mathcal{H}_1 = \{H_1, \dots, H_p\}$, similarly to the initial phase of DIRECT.

    2. For each $H_h \in \mathcal{H}_1$, compute the value $f_h$ of the solution obtained by maxeval iterations of SDBOX in $H_h$, starting from its central point. This is an upper bound on the value of the best solution in $H_h$ and constitutes our "evaluation" of $H_h$.

    3. Take a number maxsubsets of hyperrectangles corresponding to the smallest of the above $f_h$ values.

    4. Take the region given by the union of those subsets, and "convexify" it by including also the additional subsets required to convert it into a hyperrectangle $F_x^{\ast}$.

    5. Switch to the DIRECT algorithm to continue the search in $F_x^{\ast}$, allowing maxiter function evaluations. This search can now be dense in reasonable time, and it gives a solution $x'$.

    6. Try to improve $x'$ by using maxpost iterations of a local search method, finally obtaining a solution $x^{\ast}$ to problem (28).

Depending on the practical case, the number of hyperrectangles $p$ in the partitioning phase (step 1) and the number of iterations maxeval in the evaluation phase (step 2) must be selected in order to allot a fixed computational time to the evaluation phase. Indeed, we search for a compromise between speed and effectiveness at this stage. Those parameters must be tuned by also considering that the accuracy of the evaluation of each $H_h$ depends not only on maxeval but also on the span of each $H_h$, which in turn depends on $p$. However, there is no need for all the hyperrectangles in the initial partition $\mathcal{H}_1$ to have the same size. Their size can grow with the absolute values of the coordinates, so that $p$ is kept smaller. The number maxsubsets in the selection phase (step 3) must be chosen so that $F_x^{\ast}$ remains much smaller than $F_x$; otherwise the benefits of the proposed procedure are reduced. The number of function evaluations maxiter in the standard DIRECT phase (step 5) is chosen so that the search in $F_x^{\ast}$ is dense enough. This is now possible because of the reduction in size from $F_x$ to $F_x^{\ast}$. The post-optimization phase (step 6) can be executed with SDBOX or with CS-DFN [17], which is another linesearch-based method, one that uses a dense set of search directions and not only the coordinate ones. CS-DFN does not require $f$ to be continuously differentiable; however, it could be more computationally expensive on smooth problems. For example, in our case, $\Gamma$ is at least continuous, and even if there are no theoretical arguments to ensure its continuous differentiability, in practice this property may often occur. In conclusion, we can choose between SDBOX and CS-DFN depending on the presence or absence of the continuous differentiability of $f$, and if we set maxpost to be sufficiently large, we obtain a solution $x^{\ast}$ which satisfies necessary conditions for a local optimum. Moreover, $x^{\ast}$ should approximate one of the global optima because, given the dense search in $F_x^{\ast}$, it should provide one of the global minima over $F_x^{\ast}$, and the regions of $F_x \setminus F_x^{\ast}$, which were less "promising," should not contain better solutions. Note that the accuracy and the properties of the evaluation phase can be modified, depending on the computational requirements of the practical case, either by modifying the number of iterations or even by employing an alternative evaluation algorithm, still keeping the same algorithmic framework.
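Steps 1–4 of Procedure 2 can be sketched as follows. The snippet grid-partitions the box with uniform cells for simplicity (the chapter uses cells that grow with the coordinate values), probes each cell with a few iterations of a bound-constrained derivative-free method (Powell is used here as a stand-in for SDBOX), and returns the bounding box of the most promising cells. It is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

def probe_and_reduce(f, lb, ub, parts, maxeval=10, maxsubsets=20):
    """Grid-partition [lb, ub], probe each cell from its center, keep the best
    'maxsubsets' cells, and convexify their union into a smaller box F_x*."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    edges = [np.linspace(lb[i], ub[i], parts[i] + 1) for i in range(lb.size)]
    cells, scores = [], []
    for idx in product(*(range(p) for p in parts)):
        c_lb = np.array([edges[i][k] for i, k in enumerate(idx)])
        c_ub = np.array([edges[i][k + 1] for i, k in enumerate(idx)])
        res = minimize(f, 0.5 * (c_lb + c_ub), method="Powell",
                       bounds=list(zip(c_lb, c_ub)), options={"maxfev": maxeval})
        cells.append((c_lb, c_ub))
        scores.append(res.fun)
    best = np.argsort(scores)[:maxsubsets]                   # most promising cells
    new_lb = np.min([cells[k][0] for k in best], axis=0)     # bounding box of the
    new_ub = np.max([cells[k][1] for k in best], axis=0)     # selected cells
    return new_lb, new_ub
```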

On the other hand, problem (33) is made of an upper-level minimization problem and a lower-level maximization one. By changing the names of the set of variables as explained at the beginning of the section, the problem becomes

$$ \min_{x \in F_x}\ \max_{y \in F_y} f(x,y) = \min_{x \in F_x} g(x) \tag{E35} $$

with the function $g$ such that its value at a generic point $\bar x$ is given by the solution of the lower-level problem:

$$ g(\bar x) = \max_{y \in F_y} f(\bar x, y) \tag{E36} $$

The upper-level problem is solved through an external loop applying the combination of local and global search described as Procedure 2. This loop computes the value of $g$ at different points of $F_x$, using a parameter maxeval_ext to bound the overall number of evaluations of $g$. Let $\bar x$ be one of these points; the evaluation of $g(\bar x)$ then requires the solution of one lower-level maximization problem $\max_{y \in F_y} f(\bar x, y)$. Thus, the lower-level problem must be solved up to maxeval_ext times. Consequently, solving it in a few seconds is crucial.

For this problem, the global strategy either would perform a very poor search or would need excessive time. Therefore, the local strategy appears to be the only feasible choice for the lower-level problem. However, taking as initial guess the center of the feasible set Fy does not lead to good solutions of the maximization problem within the limited available time.

In this case, based on the problem physics, it can be assumed that good solutions of the maximization problem lie in the neighborhood of extreme values of the angular velocity. Therefore, we solve the lower-level problem in a nested loop by using a local search with multistart. We take as initial guesses the eight combinations of extreme values for the three components of the angular velocity, $\pm\hat\omega_{01}$, $\pm\hat\omega_{02}$, and $\pm\hat\omega_{03}$. This can be done with SDBOX or CS-DFN, depending on the presence or absence of the continuous differentiability of $f$. In our case, $\Gamma$ is at least continuous also with respect to the initial conditions of the spacecraft. Again, there are no theoretical arguments to ensure its continuous differentiability; however, in practice this may often happen. For each solution of the lower-level problem, we allow a necessarily small maximum number of function evaluations maxeval_int. In conclusion, this produces a solution $x_R$ for problem (33). We report here the whole procedure:

Procedure 3: Solve min-max combining global and local search

  1. Input: A function $f(x,y)$ computable by means of a software simulation for any $x \in F_x$ and $y \in F_y$. Values for the parameters maxeval_ext, maxeval_int, $s$.

  2. Output: A robust solution $x_R$ approximating one vector in $\arg\min_{x \in F_x} \max_{y \in F_y} f(x,y)$.

  3. External loop:

  Solve the upper-level problem $\min_{x \in F_x} g(x)$, with $g(\bar x) = \max_{y \in F_y} f(\bar x, y)$ for $\bar x \in F_x$, by using Procedure 2 with maxeval_ext total evaluations of $g$, and return $x_R$. Given $\bar x$, the evaluation of $g(\bar x)$ is performed by the internal loop.

  4.   Internal loop:

  Take $\bar x$, and solve the lower-level problem $\max_{y \in F_y} f(\bar x, y)$ by using a multistart local search with $s$ starting points, performing maxeval_int/$s$ evaluations of $f$ for each of them.


6. Computational results for the case of attitude-only feedback control

We apply the above method to solve the numerical example presented in [3]. The spacecraft inertia matrix is $J = \mathrm{diag}(27, 17, 25)$ kg·m², and the saturation level for each dipole moment is $m^{\star}_{coils} = 10$ A·m². The orbit inclination is incl = 87°, and the orbit altitude is 450 km; the right ascension of the ascending node $\Omega$ is equal to 0. The upper bounds $\hat\kappa_1$, $\hat\kappa_2$, $\hat\alpha$, $\hat\beta$ are selected as $10^9$, $10^9$, $10^4$, and $10^{-3}$, respectively.

At the beginning we consider the easier situation of known spacecraft initial conditions, and we present, in Section 6.1, the results of Procedure 2 in solving this problem. We report also a comparison with the classical DIRECT method. Subsequently, in Section 6.2, the more realistic case of a spacecraft having variable initial conditions is considered.

6.1 Fixed initial conditions

We consider here the case of the above described spacecraft with known fixed initial conditions; thus we deal with problem (28) using the following values:

$$ (\rho,\ \phi,\ \theta,\ \omega_0,\ \psi,\ \alpha_0) = \big(0,\ 0,\ 0,\ [\,0.02\ \ 0.02\ \ 0.03\,]^T,\ 0.9416,\ 4.5392\big) \tag{E37} $$

The best solution obtained in [3] by a trial-and-error search achieves a value of ITAE $= 3.7 \times 10^7$, while the vast majority of the solutions have the ITAE limit value of about $1.2 \times 10^9$. This upper limit is related to the value of $T_f$ in the definition of the ITAE (27), which is chosen equal to 56,090 s, corresponding to 10 orbital periods. Practically, this means that when we reach the limit value of the ITAE, the corresponding settling time would be roughly larger than 10 orbital periods. Such a solution is not an attractive one, and we are not interested in determining it with further precision. Even if this choice causes a flattening in the values of the ITAE, the use of such a finite $T_f$ is necessary to run the simulations that compute the ITAE in practice.

Table 3 reports two solution attempts carried out with the standard DIRECT algorithm on the whole feasible set K with, respectively, 50,000 and 100,000 iterations, followed by 1000 iterations of local search refinement using CS-DFN. In spite of the significant computational effort (the running times of these experiments correspond, respectively, to about 3 days and 1 week), the obtained solutions have values of ITAE greater than $1.1 \times 10^9$, which is not very different from the ITAE value of a random solution. Evidently, the search was not dense enough to explore the feasible set as would be needed. Other similar attempts with standard DIRECT do not achieve better results. The same table also reports the result of Procedure 2 of Section 5.1. As can be observed, this procedure is able to reach a much better solution, with a value of ITAE of about $8 \times 10^6$. Note that this solution is also considerably better than the best solution obtained by trial and error in days of work.

Algorithm                                   Solution                   Obj. value           Time
DIRECT 50,000 + CS-DFN 1000                 κ_1 = 913405022.139        1,142,470,478.101    262,000 + 5460 s
                                            κ_2 = 195426826.870
                                            α   = 9794.170752422
                                            β   = 0.000000000000
DIRECT 100,000 + CS-DFN 1000                κ_1 = 500798437.500        1,129,234,873.703    565,200 + 5040 s
                                            κ_2 = 159788790.177
                                            α   = 9996.679486501
                                            β   = 0.000000000003
Procedure 2: combining global and local     κ_1 = 246494.579020        8,021,573.4077       201,500 + 15 + 4 s
strategies                                  κ_2 = 233333315.349
                                            α   = 92.5925925927
                                            β   = 0.000129629629

Table 3.

Comparison of Procedure 2 and standard DIRECT with 50,000 or 100,000 iterations.

The promising region of the feasible set K has been identified with the previously described probing technique. The values of maxeval and $p$ are selected so that this step is executed in reasonable time, according to the following considerations. A single function evaluation takes a very variable time, going from fractions of a second to several tens of seconds, but a very rough average function evaluation time can be assessed as 1 s. By selecting maxeval = 10, the evaluation of each hyperrectangle requires roughly 10 s. Thus, to finish within 2 or 3 days of computation, we select $p = 40{,}960$, which is obtained by making 64 partitions on the domain of $\kappa_1$, 64 partitions on the domain of $\kappa_2$, and 10 partitions on the domain of $\alpha$. The intervals corresponding to these partitions do not have the same size; they increase with the absolute value of the coordinates. The following hyperrectangle is obtained as the convexification of the collection of the most promising regions, as in Step 4 of Procedure 2:

$$ K^{\ast} = \big\{(\kappa_1, \kappa_2, \alpha, \beta) :\ 200000 \le \kappa_1 \le 260000,\ 200000000 \le \kappa_2 \le 310000000,\ 0 \le \alpha \le 100,\ 0 \le \beta \le 0.001 \big\} \tag{E38} $$

The determination of $K^{\ast}$ actually requires 409,600 iterations of SDBOX and 201,500 s of computation (about 56 hours). Note that each function evaluation is generally much faster in $K^{\ast}$, where it can take less than 0.01 s, than in the rest of K. Now, by applying the standard DIRECT strategy over the feasible set $K^{\ast}$, as in Step 5 of Procedure 2, after 3007 iterations and only 15 s, we obtain the solution:

$$ \kappa_1 = 200781.893004,\quad \kappa_2 = 226268861.454,\quad \alpha = 39.3004115226,\quad \beta = 0.0005 \tag{E39} $$

whose value is ITAE $= 9.072 \times 10^6$. Note that the volume of the set $K^{\ast}$ is considerably smaller than that of K: it is only $1/15{,}151{,}515.15$ of the volume of K. To better appreciate the difference, consider that an exploration of K with the same degree of density used on $K^{\ast}$ would require $15 \times 15{,}151{,}515.15 = 227{,}272{,}727.25$ s, which roughly corresponds to more than 7 years, if the simulation times on K were the same as on $K^{\ast}$. Since they are often much slower, the time needed would be even greater.

Solution (39) can be further refined by using the local search strategy, as in Step 6 of Procedure 2. By performing 1000 iterations of the local search CS-DFN, which can move along a dense set of directions [17], solution (39) is improved in 4 s to ITAE $= 8.021 \times 10^6$ with the solution:

$$ \kappa_1 = 246494.579020,\quad \kappa_2 = 233333315.349,\quad \alpha = 92.5925925927,\quad \beta = 0.000129629629 \tag{E40} $$

We also tested in this post-optimization phase the local search SDBOX, which moves only along the coordinate directions [16]. With 5000 additional iterations of SDBOX, which require 11 s, solution (39) is improved to the following new solution, which has ITAE $= 9.062 \times 10^6$:

$$ \kappa_1 = 200781.893004,\quad \kappa_2 = 226268812.597,\quad \alpha = 39.3004115226,\quad \beta = 0.000500000024 \tag{E41} $$

Solution (40) corresponds to $x^{\ast}$ of Procedure 2. Note that this is actually an approximation of a theoretical optimal solution. Since no optimal solution is available for comparison, we evaluate it by using the following considerations.

The settling time (see (12) and (13)) corresponding to (40) is computed as $t_s = 6280$ s. We then search for a lower bound $\bar t_s$ on the minimum settling time of the considered case, through physical considerations. The initial conditions (37) correspond to having the spacecraft at the desired attitude but with a nonzero initial angular velocity $\omega_0 = [\,0.02\ \ 0.02\ \ 0.03\,]^T$. Consider now simple rotations about each single body axis, and for each simple rotation, compute lower bounds $\bar t_{sx}$, $\bar t_{sy}$, $\bar t_{sz}$ on the times necessary to move to the desired attitude with final zero angular rate. Then, a rough lower bound for the settling time is given by $\bar t_s = \max\{\bar t_{sx}, \bar t_{sy}, \bar t_{sz}\}$. The value $\bar t_{sx}$ can be computed using the equation which describes the rotation about the $x$ body axis, which is given by

$$ \ddot\phi = \frac{T_x}{J_x} \tag{E42} $$

where $\phi$ is the roll angle. The amplitude of the torque $T_x$ is limited by an upper limit $\bar T$, which can be found using (4). Indeed, numerical simulations show that $\lVert B^i \rVert \le \bar B = 5 \times 10^{-5}$ T. Since $\lVert B^b \rVert = \lVert B^i \rVert$ (see (5)), then $\lVert B^b \rVert \le \bar B$. Moreover, each component of $m_{coils}$ is bounded by $m^{\star}_{coils} = 10$ A·m², and then $\lVert m_{coils} \rVert \le \sqrt{3}\, m^{\star}_{coils}$. Thus, $\bar T = \sqrt{3}\, \bar B\, m^{\star}_{coils} = 5\sqrt{3} \times 10^{-4}$ N·m. Next, the minimum time to bring the state of system (42), subject to the constraint $|T_x| \le \bar T$, from the initial state $(\phi, \dot\phi) = (0, \omega_{x0})$ to the final state $(\phi, \dot\phi) = (0, 0)$ is given by (see [18], Section 7.2)

$$ \bar t_{sx} = \frac{J_x}{\bar T}\left(1 + \sqrt{2}\right)\left|\omega_{x0}\right| = 1505\ \mathrm{s} \tag{E43} $$

Similar considerations hold for rotations about y and z axes leading to

$$ \bar t_{sy} = \frac{J_y}{\bar T}\left(1 + \sqrt{2}\right)\left|\omega_{y0}\right| = 948\ \mathrm{s}, \qquad \bar t_{sz} = \frac{J_z}{\bar T}\left(1 + \sqrt{2}\right)\left|\omega_{z0}\right| = 2091\ \mathrm{s} \tag{E44} $$

Then, $\bar t_s = \max\{\bar t_{sx}, \bar t_{sy}, \bar t_{sz}\} = 2091$ s. Therefore, solution (40) takes just about 70 min more than the minimum time necessary to rotate the spacecraft about a single body axis, at the maximum speed allowed by the available magnetorquers, so that it reaches the desired rest position. Thus, solution (40) does not appear to be too far from an optimal solution. Note also that the above computed bound is very conservative: the minimum spacecraft maneuver time surely cannot be smaller, but it could very easily be larger.
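A quick numerical check of the bounds (43)–(44) with the values used above (illustrative only):

```python
import numpy as np

# Minimum time for the double integrator (42) with |Tx| <= Tbar, from
# (phi, phidot) = (0, w0) back to (0, 0), for each body axis.
Tbar = np.sqrt(3) * 5e-5 * 10.0                    # = 5*sqrt(3)*1e-4 N*m
for J, w0, axis in [(27, 0.02, "x"), (17, 0.02, "y"), (25, 0.03, "z")]:
    t_min = (J / Tbar) * (1.0 + np.sqrt(2.0)) * abs(w0)
    print(f"t_s{axis} >= {t_min:.0f} s")           # ~1505, ~948, ~2091 s
```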

6.2 Variable initial conditions

Now, problem (33) is solved by employing the above-described Procedure 3. We select 20 partitions on the domain of $\kappa_1$, 20 on that of $\kappa_2$, and 10 on that of $\alpha$. We therefore perform 40,000 evaluations of $g(x)$ for the identification of the new $K^{\ast}$ and then another 3000 + 1000 evaluations of $g(x)$ to solve the problem on this $K^{\ast}$, for a total of maxeval_ext = 44,000.

For the internal loop, as described in Procedure 3, we employ as initial guesses the eight combinations of extreme values for the three components of the angular velocity, $\pm\hat\omega_{01}$, $\pm\hat\omega_{02}$, and $\pm\hat\omega_{03}$. We allow 64 iterations per point, for a total of 512 iterations per single lower-level problem, which requires slightly less than 3 s on average within $K^{\ast}$. Run times are greater in the rest of K; however, we keep them under control by allowing an early exit from the internal loop when the value of $f(\bar x, y)$ is large enough to reach the ITAE limit value of about $1.2 \times 10^9$. The whole nested-loop procedure then provides the following solution in about 342,000 s (about 95 h):

$$ \kappa_1 = 227777.777778,\quad \kappa_2 = 294444444.444,\quad \alpha = 83.3333333333,\quad \beta = 0.00061095869532 \tag{E45} $$

The ITAE value of this solution is $2.176 \times 10^7$ for the initial conditions (37), instead of the ITAE $= 8.021 \times 10^6$ of solution (40). However, this is a robust solution: by varying the initial conditions in H, the worst value that can be obtained is ITAE $= 1.357 \times 10^8$, which is still considerably better than average solutions, the vast majority of which has the ITAE limit value of $1.2 \times 10^9$. As a comparison, the worst value achievable by varying the initial conditions in H for solution (40) is ITAE $= 1.103 \times 10^9$, which is very close to the ITAE limit value. Indeed, the above limit value for the ITAE is very easily obtained for almost any generic tuple $(\bar\kappa_1, \bar\kappa_2, \bar\alpha, \bar\beta)$ by simply searching for difficult initial conditions. Note also that this value of ITAE is obtained in some attempts at solving problem (33) using the standard DIRECT algorithm on the whole feasible set K while allowing no more than 1 day of computation. More time-consuming attempts at solving problem (33) using the standard DIRECT algorithm cannot be practically accomplished, for the following reasons. In the case of fixed initial conditions (Section 6.1), 1 week of computation was not enough to explore the search space. In the present case, there is an internal loop requiring 512 function evaluations instead of a single function evaluation. Hence, any serious attempt would need to allocate months of computation and would probably obtain results similar to those of Section 6.1.

Advertisement

7. Conclusions

The attitude of a spacecraft can be controlled using only magnetorquers by means of a PD-like control or an attitude-only feedback. However, for both control laws, design parameters must be assigned. These parameters may be conveniently selected so that they minimize the spacecraft settling time, or an indirect measure of it, either for fixed initial conditions of the spacecraft or under the worst initial conditions. The latter choice guarantees a bound on the objective value that holds for all admissible initial conditions. This chapter has described solution approaches based on an innovative use of global and local derivative-free optimization techniques to practically solve these computationally demanding problems. This approach is able to provide robust solutions to the considered application in reasonable times.

References

  1. Sidi MJ. Spacecraft Dynamics and Control. New York: Cambridge University Press; 1997
  2. Silani E, Lovera M. Magnetic spacecraft attitude control: A survey and some new results. Control Engineering Practice. 2005;13(3):357-371
  3. Celani F. Robust three-axis attitude stabilization for inertial pointing spacecraft using magnetorquers. Acta Astronautica. 2015;107:87-96
  4. Bruni R, Celani F. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback. AIP Conference Proceedings. 2016;1776:090032
  5. Bruni R, Celani F. A robust optimization approach for magnetic spacecraft attitude stabilization. Journal of Optimization Theory and Applications. 2017;173:994-1012
  6. Bruni R, Celani F. Combining global and local strategies to optimize parameters in magnetic spacecraft control via attitude feedback. Journal of Optimization Theory and Applications. 2019;181:997-1014
  7. Jones DR, Perttunen CD, Stuckman BE. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications. 1993;79(1):157-181
  8. Graham D, Lathrop RC. The synthesis of optimum response: Criteria and standard forms, part 2. Transactions of the AIEE. 1953;72:273-288
  9. Wie B. Space Vehicle Dynamics and Control. Reston, VA: AIAA; 2008
  10. Wertz JR, editor. Spacecraft Attitude Determination and Control. Norwell, MA: Kluwer Academic; 1978
  11. Rodriguez-Vazquez AL, Martin-Prats MA, Bernelli-Zazzera F. Full magnetic satellite attitude control using ASRE method. In: 1st IAA Conference on Dynamics and Control of Space Systems. 2012
  12. Lovera M, Astolfi A. Spacecraft attitude control using magnetic actuators. Automatica. 2004;40(8):1405-1414
  13. Lovera M, Astolfi A. Global magnetic attitude control of inertially pointing spacecraft. Journal of Guidance, Control, and Dynamics. 2005;28(5):1065-1067
  14. Ogata K. Modern Control Engineering. 4th ed. Upper Saddle River, NJ: Prentice Hall; 2002
  15. Jones DR. DIRECT global optimization. In: Floudas CA, Pardalos PM, editors. Encyclopedia of Optimization. Berlin: Springer; 2009. pp. 725-735
  16. Lucidi S, Sciandrone M. A derivative-free algorithm for bound constrained optimization. Computational Optimization and Applications. 2002;21:119-142
  17. Fasano G, Liuzzi G, Lucidi S, Rinaldi F. A linesearch-based derivative-free approach for nonsmooth constrained optimization. SIAM Journal on Optimization. 2014;24(3):959-992
  18. Athans M, Falb PL. Optimal Control: An Introduction to the Theory and Its Applications. New York: Dover; 2007
