Research Article - (2018) Volume 7, Issue 3

Multiobjective Imperialist Competitive Algorithm for Solving Nonlinear Constrained Optimization Problems

Chun An Liu1* and Huamin Jia2
1School of Mathematics and Information Science, Baoji University of Arts and Sciences, Baoji, China, E-mail: huaminj@gmail.com
2School of Engineering, Cranfield University, England, UK, E-mail: huaminj@gmail.com
*Corresponding Author: Chun An Liu, School of Mathematics and Information Science, Baoji University of Arts and Sciences, Baoji, China, Tel: +08959927

Abstract

Nonlinear Constrained Optimization Problems (NCOPs) arise in a diverse range of fields such as portfolio selection, economic management, aerospace engineering, and intelligent systems. In this paper, a new multiobjective imperialist competitive algorithm for solving NCOPs is proposed. First, after reviewing some existing excellent algorithms for solving NCOPs, the nonlinear constrained optimization problem is transformed into a bi-objective optimization problem. Second, in order to improve the diversity of the evolution country swarm and help it approach or land in the feasible region of the search space, three different methods of colonies moving toward their relevant imperialist are given. Third, a new operator for exchanging the positions of the imperialist and a colony, similar to a recombination operator in genetic algorithms, is given to enrich the exploration and exploitation abilities of the proposed algorithm. Fourth, a local search method is presented in order to accelerate the convergence speed. Finally, the new approach is tested on thirteen well-known NP-hard nonlinear constrained optimization functions, and the experimental evidence suggests that the proposed method is robust, efficient, and generic when solving nonlinear constrained optimization problems. Compared with some other state-of-the-art algorithms, the proposed algorithm has remarkable advantages in terms of the best, mean, and worst objective function values and the standard deviations.

Keywords: Optimization; Algorithm; Intelligence; Distribution

Introduction

In science and engineering, many complex optimization problems involve constraint conditions [1-3]. That is to say, the optimal solutions of these practical problems are restricted by the problems' constraints. Valves in chemical process control need a maximum and a minimum displacement. Also, for safety or other operational reasons, it is usual to impose limits on allowable temperatures, levels, and pressures [4]. When solving such optimization problems, it is difficult to handle the constraints and find the optimal solution of the nonlinear constrained problem.

Most often, the constraint-handling techniques used in classical optimization methods can be classified into two types. One type is generic methods that do not exploit the mathematical structure of the constraints, such as the penalty function method [5], the Lagrange multiplier method [6], and some intelligent heuristic search algorithms, e.g., the enhanced grey wolf optimization algorithm [7], the surrogate-assisted evolutionary optimization method [8], the modified butterfly optimization algorithm [9], the chaotic grey wolf optimization algorithm [10], and the enhanced grey wolf optimisation algorithm [11]. The other type is special methods used to solve problems with specific types of constraints, such as the cutting plane method [12], the gradient projection method [13], the quasi-Newton method [14], and the steepest descent method [15].

As far as generic methods are concerned, precisely because they are generic, their performance in some cases cannot be fully satisfactory. The special methods, on the other hand, are applicable only to optimization problems whose search region is convex or whose objective and constraint functions are differentiable. Among the generic methods, the most popular approach for handling the constraints of an optimization problem is the penalty function method, which involves a number of penalty parameters that must be set correctly in order to obtain the optimal solution; this sensitivity to the penalty parameters has led many researchers to devise more sophisticated penalty function methods (a simple illustration is sketched after the list below). These methods can mainly be divided into three categories:

a) Multi-level penalty functions [16]

b) Dynamic penalty functions based on adaptive and coevolutionary penalty approaches [17]; and

c) Hybrid penalty functions combined with the advantages of evolutionary computation, such as [18,19].
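As a brief illustration of the generic penalty idea (not the method proposed in this paper), a quadratic penalty folds the constraints into a single unconstrained objective whose quality depends strongly on the penalty parameter R; the function names below are illustrative.

def penalized_objective(f, ineq_cons, eq_cons, R):
    """Build f(x) + R * (sum of squared constraint violations) for g_i(x) <= 0 and h_j(x) = 0."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in ineq_cons) + sum(h(x) ** 2 for h in eq_cons)
        return f(x) + R * violation
    return F

# A too-small R under-penalizes infeasible points, while a too-large R distorts the search
# landscape; this sensitivity is what motivates the penalty-free transformation used later.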

Evolutionary algorithms are generally inspired by modelling natural processes, especially natural evolution; the genetic algorithm lies in this category. The Imperialist Competitive Algorithm (ICA), proposed by Atashpaz-Gargari et al. [20], instead uses the socio-political evolution of human societies as a source of inspiration for developing a strong optimization strategy. ICA has been applied successfully to many real-world optimization problems in recent years: e.g., Mahdi et al. [21] introduced an imperialist competitive algorithm for solving systems of nonlinear equations; Mohammadi et al. [22] designed a multi-objective imperialist competitive algorithm to solve a capacitated hub covering location problem; and Shokrollahpour et al. [23] proposed a novel imperialist competitive algorithm for solving bi-criteria scheduling of the assembly flow-shop problem.

Moreover, finding a balance between exploration and exploitation is very important for an excellent generic algorithm. In Joshi et al. [11], the authors proposed an Enhanced Grey Wolf Optimization (EGWO) algorithm with a better hunting mechanism, which focuses on a proper balance between exploration and exploitation, leading to optimal performance of the algorithm and to promising candidate solutions; and in Long et al. [24], the authors introduced a nonlinear control parameter strategy and a new position-update equation in order to balance the exploration and exploitation of the algorithm.

In this paper, we propose a new multiobjective optimization method based on ICA to solve nonlinear constrained optimization problems. Firstly, the nonlinear constrained optimization problem is transformed into a bi-objective unconstrained optimization problem, so that no penalty function or other mechanism for dealing with the constraints is needed. Then, in order to improve the diversity of the evolution country swarm and help it approach or land in the feasible region, three different methods of colonies moving toward their relevant imperialist are presented. Also, a new operator for exchanging the positions of the imperialist and a colony is given, acting like a recombination operator in genetic algorithms, to achieve a better balance of the exploration and exploitation behaviours of the proposed algorithm. Moreover, a new local search method is integrated in order to increase the convergence speed of the proposed algorithm. Finally, the new method is tested on 13 well-known NP-hard nonlinear constrained optimization functions, and the experimental results suggest that the proposed algorithm is robust, efficient, and generic when solving nonlinear constrained optimization problems. Compared with some other state-of-the-art algorithms, the proposed algorithm has remarkable advantages in terms of the best, mean, and worst objective function values and the standard deviation, i.e., it can effectively solve the nonlinear constrained optimization problem. The paper is organized as follows. In Section 2, the related concepts of the nonlinear constrained optimization problem are given. The main operators of the proposed imperialist competitive algorithm are designed in Section 3. The flowchart of the proposed algorithm is described in Section 4. After the simulation results are presented in Section 5, the conclusions and acknowledgments are given in Sections 6 and 7, respectively.

Related Concepts Of NCOP

Without loss of generality, the general Nonlinear Constrained Optimization Problem (NCOP) that we are interested in can be formulated as

min f(x)                                                              (1)
s.t. g_i(x) ≤ 0,  i = 1, 2, ..., p,
     h_j(x) = 0,  j = p + 1, ..., q,

where x = (x_1, x_2, ..., x_n) ∈ [L, U] ⊂ R^n is the n-dimensional decision vector, g_i(x) ≤ 0 are the inequality constraints, and h_j(x) = 0 are the equality constraints (in both cases, the constraints may be linear or nonlinear). The feasible region is

D = { x ∈ [L, U] | g_i(x) ≤ 0, i = 1, ..., p;  h_j(x) = 0, j = p + 1, ..., q }     (3)

in the search space.

Definition 1

If for every point x ∈ D the inequality f(x*) ≤ f(x) holds, then the point x* ∈ D is called the optimal solution, and f(x*) is the optimal value of problem (1). Let

f1(x) = f(x),   f2(x) = Σ_{i=1}^{p} max{0, g_i(x)} + Σ_{j=p+1}^{q} |h_j(x)|

where f(x) is the objective function of problem (1) and f2(x) is the constraint-violation function defined by the constraint conditions of problem (1). We can then transform the nonlinear constrained optimization problem (1) into the following bi-objective optimization problem:

min F(x) = (f1(x), f2(x)),   x ∈ [L, U]                               (4)

For the bi-objective optimization problem (4), minimizing the first objective function f1(x) means finding a point that becomes the optimal solution of problem (1), while minimizing the second objective function f2(x) means searching for a point that meets all the constraints of problem (1). Therefore, minimizing the two objective functions of problem (4) simultaneously means searching for a point that satisfies all the constraints and makes the objective function of problem (1) reach its optimum.
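A minimal sketch of this transformation, assuming the violation-sum definition of f2 given above; the function names and the toy constraint are illustrative, not taken from the paper.

import numpy as np

def bi_objective(f, ineq_cons, eq_cons, x):
    """Map a constrained problem to the pair (f1, f2): original objective and total constraint violation."""
    f1 = f(x)
    f2 = sum(max(0.0, g(x)) for g in ineq_cons) + sum(abs(h(x)) for h in eq_cons)
    return f1, f2

# Toy usage: min x0^2 + x1^2 subject to x0 + x1 >= 1, rewritten as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0] - x[1]]
print(bi_objective(f, g, [], np.array([0.5, 0.5])))  # feasible point, so f2 == 0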

Definition 2

A two-dimensional vector u = (u1, u2) is said to weakly dominate another two-dimensional vector v = (v1, v2) if ui ≤ vi for i = 1, 2.

Definition 3

A point x ∈ [L, U] is said to be a weakly Pareto optimal solution of problem (4) if there does not exist another point y ∈ [L, U] such that F(y) weakly dominates F(x). The set of all weakly Pareto optimal solutions is called the weakly Pareto optimal set, and the set of the objective vectors of all weakly Pareto optimal solutions is called the weakly Pareto front. Suppose that Ew(F, x) is the weakly Pareto optimal solution set of problem (4). Then the optimal solution of problem (1) and the weakly Pareto optimal solutions of problem (4) have the following relation:
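The following helpers correspond to Definitions 2 and 3; excluding candidates with an identical objective vector in the second function is an interpretation added here, not a rule stated in the paper.

def weakly_dominates(u, v):
    """Definition 2: u weakly dominates v if every component of u is <= the matching component of v."""
    return all(ui <= vi for ui, vi in zip(u, v))

def is_weakly_pareto_optimal(x, candidates, F):
    """Definition 3 (sketch): no other candidate's objective vector weakly dominates F(x)."""
    fx = F(x)
    return not any(weakly_dominates(F(y), fx) and F(y) != fx for y in candidates)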

Theorem 1

A solution x* is the optimal solution of problem (1) if and only if x* ∈ D ∩ Ew(F, x) and f1(x*) = min{ f1(x) : x ∈ D ∩ Ew(F, x) }.

Proof. Sufficiency is obvious. The necessity is proved as follows. Since x* is the optimal solution of problem (1), x* is feasible, so x* ∈ D and f2(x*) = 0; furthermore, f1(x*) = f(x*) ≤ f(x) = f1(x) holds for every x ∈ D.

If x* ∉ Ew(F, x), then there exists at least one other solution y ∈ [L, U] such that fi(y) ≤ fi(x*) for i = 1, 2, i.e., F(y) weakly dominates F(x*); this contradicts the definitions of the functions f1 and f2 above, so x* ∈ Ew(F, x).

The conclusion of Theorem 1 demonstrates that the optimal solution of problem (1) can be obtained from the intersection of the feasible region of problem (1) and the weakly Pareto optimal solution set of problem (4), and that the optimal solution is the point of this intersection that minimizes the first objective function.

The Design Of The Main Operators For The Proposed Algorithm

In order to solve the Nonlinear Constrained Optimization Problem (NCOP) described in Section 2, a new imperialist competitive algorithm is designed in Sections 3 and 4. Firstly, we briefly introduce the idea of the Imperialist Competitive Algorithm (ICA) proposed by Atashpaz-Gargari et al. [20]. ICA, like evolutionary algorithms, particle swarm optimization, and so on, is a kind of swarm intelligence algorithm. ICA is inspired by imperialistic competition. All countries are divided into two types: imperialist states and colonies. Imperialistic competition is the main evolution operator and hopefully causes the colonies to converge to the global optimal solution. Based on this idea, we design the main operators of the proposed algorithm as follows.

The creation of initial empires

At the beginning of ICA, the initial evolution country swarm should be generated. Among the initial countries, some of the best ones are selected to form the initial imperialists, and the rest of the countries are divided among these imperialists as colonies. In this section, we randomly generate a number of initial countries in the search space [L, U], denote them as countries, and define the cost of each country as follows:

equation

where f1(x) and f2(x) are the first and second objective functions of problem (4), respectively. Select the N most powerful countries to form empires, where the most powerful countries are those whose costs are relatively small. The rest of the initial countries become colonies of the empires according to the empires' power, so each empire receives several colonies. This process is presented in Figure 1, where the more powerful empires have a greater number of colonies and weaker empires have fewer. Thus the initial countries are divided into two groups: imperialists and colonies (denoted imperialist i and colony j, respectively). In order to form the initial empires, we distribute the colonies among the N imperialists based on their power, using the proportional (roulette wheel) selection employed in genetic algorithms, as follows (a compact sketch of Steps 1-3 is given after Step 3):


Figure 1: Generation of the initial empires and their initial colonies in the search space [L, U] ⊂ R².

Step 1: Suppose the normalized power of each imperialist is defined by

p_j = | C_j / Σ_{i=1}^{N} C_i |,   with   C_j = c_j − max_i { c_i }

where p_j is the normalized power of the j-th imperialist, C_j is the normalized cost of the j-th imperialist, and c_j is the cost of the j-th imperialist, for j = 1, 2, ..., N.

Step 2: Generate the initial number of the colonies belonging to each empire based on the following formula

N.C._j = round( p_j · N_col )

where N.C._j is the number of initial colonies of the j-th empire, and N_col is the total number of all initial colonies.

Step 3: Select N.C.j colonies according to the roulette wheel selection and join them to the j-th imperialist. These colonies along with the imperialist together will form the j-th empire (denote empire j,j=1,2,…,N).
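A compact sketch of Steps 1-3, assuming the standard normalized-cost and normalized-power definitions of the original ICA [20]; costs is assumed to be a NumPy array holding the cost of every country, and the random assignment at the end stands in for the roulette wheel.

import numpy as np

def create_initial_empires(costs, n_imp, rng=np.random.default_rng()):
    """Split countries into imperialists and colonies, then share colonies out by power."""
    order = np.argsort(costs)                   # smaller cost = more powerful country
    imps, cols = order[:n_imp], order[n_imp:].copy()
    C = costs[imps] - costs[imps].max()         # normalized cost C_j = c_j - max_i c_i (<= 0)
    p = np.abs(C / C.sum()) if C.sum() != 0 else np.full(n_imp, 1.0 / n_imp)
    n_c = np.round(p * len(cols)).astype(int)   # N.C._j = round(p_j * N_col)
    n_c[-1] = len(cols) - n_c[:-1].sum()        # make the counts add up to N_col
    rng.shuffle(cols)                           # random split standing in for roulette wheel selection
    empires, start = {}, 0
    for j in range(n_imp):
        empires[j] = list(cols[start:start + n_c[j]])
        start += n_c[j]
    return imps, empires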

Method Of Colonies Moving Toward Their Relevant Imperialist

In Atashpaz-Gargari et al. [20], each colony moves toward its imperialist by x units along the direction given by the vector from the colony to the imperialist, where x is a random variable with a uniform distribution, i.e.,

x ∼ U(0, β × d)                                                       (8)

where β > 1 and d is the distance between the colony and the imperialist; the parameter β causes the colony to get closer to the imperialist from both sides. However, for a constrained optimization problem, the algorithm needs not only to make infeasible solutions approach the feasible region and satisfy the constraint conditions, but also to minimize the objective function. Based on this, we propose a new method of colonies moving toward their relevant imperialist as follows (a baseline sketch of the original move is given below). Suppose that we move colony j toward imperialist i (i = 1, 2, ..., N); then:
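For reference, the baseline assimilation move of [20], x ~ U(0, beta*d), can be sketched as follows; this is the original rule, not one of the new movement cases introduced below, and beta = 2 is just an illustrative choice.

import numpy as np

def ica_move(colony, imperialist, beta=2.0, rng=np.random.default_rng()):
    """Original ICA assimilation: move the colony x ~ U(0, beta*d) units toward the imperialist."""
    direction = imperialist - colony
    d = np.linalg.norm(direction)
    if d == 0.0:
        return colony.copy()
    x = rng.uniform(0.0, beta * d)
    return colony + x * direction / d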

Case 1

If both imperialist i and colony j are feasible, i.e., imperialist i, colony j ∈ D, we generate a circle whose diameter is the straight-line segment d joining imperialist i and colony j; the new position (denoted colony k) of the j-th colony moved toward its imperialist is shown in grey in Figure 2, where A and r are two random numbers with uniform distributions, i.e.,

equation (9)

equation (10)


Figure 2: The method of a colony moving toward its imperialist when both the j-th colony and the i-th imperialist are feasible in the search space [L, U] ⊂ R².

The parameters r and A cause colony j to get closer to imperialist i within its neighbourhood rather than move far away from the imperialist.

Case 2

If both imperialist i and colony j are infeasible, i.e., imperialist i, colony j ∈ [L,U]\D, we randomly select a colony s from the feasible region D, compute the barycenter of the three countries colony j, imperialist i, and colony s, and regard this barycenter (denoted colony k in Figure 3) as the new position to which colony j moves toward imperialist i. Using this method, the colony not only moves toward the imperialist but also approaches the feasible region.


Figure 3: The method of a colony moving toward its imperialist when both colony j and imperialist i are infeasible in the search space [L, U] ⊂ R².

Case 3

If exactly one of colony j and imperialist i is a feasible country (suppose colony j is the feasible country and imperialist i is the infeasible one, or vice versa), i.e., colony j ∈ D and imperialist i ∈ [L,U]\D, then we generate a circle whose centre is colony j and whose radius is the length of the straight-line segment L joining colony j and imperialist i; the new position colony k of colony j moved toward imperialist i is shown in grey in Figure 4, where B is a random number with uniform distribution, i.e.,

equation (11)


Figure 4: The method of a colony moving toward its imperialist when colony j is feasible and imperialist i is infeasible in the search space [L, U] ⊂ R².

where τ is a parameter that adjusts the deviation from the direction given by the vector from colony j to imperialist i, B is a random number with uniform distribution, and the values of τ and B are arbitrary. In most of our implementations, the chosen values give good convergence to the global minimum and keep the feasible colony j from moving far away from the feasible region.
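A rough sketch of the three movement cases; the exact sampling rules of formulas (9)-(11) are not reproduced above, so the circle-based steps in Cases 1 and 3 below are an illustrative interpretation rather than the paper's precise rule, and a 2-D search space is assumed for the Case 1 offset.

import numpy as np

def move_colony(colony, imperialist, is_feasible, feasible_pool, rng=np.random.default_rng()):
    """Move a colony toward its imperialist according to the three feasibility cases of Section 3.2."""
    col_ok, imp_ok = is_feasible(colony), is_feasible(imperialist)
    if col_ok and imp_ok:
        # Case 1: sample a point inside the circle whose diameter joins colony and imperialist.
        centre = 0.5 * (colony + imperialist)
        radius = 0.5 * np.linalg.norm(imperialist - colony)
        A, r = rng.uniform(0.0, 2.0 * np.pi), rng.uniform(0.0, 1.0)
        return centre + r * radius * np.array([np.cos(A), np.sin(A)])
    if not col_ok and not imp_ok:
        # Case 2: barycenter of the colony, its imperialist, and a randomly chosen feasible country s.
        s = feasible_pool[rng.integers(len(feasible_pool))]
        return (colony + imperialist + s) / 3.0
    # Case 3: move the feasible country a random fraction of the distance toward the infeasible one,
    # staying inside the circle centred at the feasible country with radius equal to that distance.
    feas, other = (colony, imperialist) if col_ok else (imperialist, colony)
    B = rng.uniform(0.0, 1.0)
    return feas + B * (other - feas)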

Exchanging Position Of The Imperialist And Colony

Based on the method of colonies moving toward their relevant imperialist in subsection 3.2, the operator for exchanging the positions of the imperialist and the colony can be described as follows (a short sketch is given after the list below):

(1) If both colony j and imperialist i are feasible and the cost of colony j is lower than that of the imperialist, i.e., f1(colony j) < f1(imperialist i), then we use colony j to replace imperialist i and form the new imperialist, and vice versa.

(2) If both colony j and imperialist i are infeasible, then we always choose the one with the smaller cost as the new imperialist, i.e., if f2(colony j) > f2(imperialist i), then keep imperialist i unchanged; otherwise, if f2(colony j) < f2(imperialist i), then use colony j to replace imperialist i and form the new imperialist.

(3) If exactly one of colony j and imperialist i is feasible, we always use the feasible country as the new imperialist in order to make the evolution country swarm approach the feasible region and converge quickly to the minimum.
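A minimal sketch of the exchange rule above; f1 and f2 are the two objectives of problem (4), and is_feasible is assumed to test membership in the feasible region D.

def better_as_imperialist(colony, imperialist, f1, f2, is_feasible):
    """Return whichever country should act as imperialist under rules (1)-(3)."""
    col_ok, imp_ok = is_feasible(colony), is_feasible(imperialist)
    if col_ok and imp_ok:                        # rule (1): compare the original objective values
        return colony if f1(colony) < f1(imperialist) else imperialist
    if not col_ok and not imp_ok:                # rule (2): compare the constraint violations
        return colony if f2(colony) < f2(imperialist) else imperialist
    return colony if col_ok else imperialist     # rule (3): the feasible country always wins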

Local Search Operator

In order to accelerate the convergence speed, we add the following local search operator to the proposed algorithm. Suppose imperialist 1, imperialist 2, ..., imperialist k are the k imperialists obtained by the proposed algorithm in the current evolution country swarm, where k is the number of imperialists, which decreases as the imperialist competition proceeds.

Compute the approximate value of the gradient for each imperialist:

equation (12)

for j = 1, 2, ..., n, where e_j is an n-dimensional unit vector whose j-th component is 1 and whose remaining components are 0.

Case 1: If the approximate gradient is nonzero, carry out a one-dimensional search along the corresponding descent direction, obtain a new candidate point, and check whether the following formulas

equation (13)

equation (14)

hold; if they do, we use the new candidate point to replace imperialist i and form the new imperialist.

Case 2: If the approximate gradient is zero, keep imperialist i unchanged.
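A sketch of the local-search idea, assuming a forward-difference estimate of the gradient of f1 and a simple backtracking step along the negative gradient; the acceptance conditions (13)-(14) are not reproduced above, so this sketch accepts a step only when neither objective gets worse.

import numpy as np

def local_search(imperialist, F, delta=1e-6, step=0.1, shrink=0.5, max_tries=10):
    """One local-search attempt around an imperialist; F(x) returns the pair (f1(x), f2(x))."""
    x = np.asarray(imperialist, dtype=float)
    f1x, f2x = F(x)
    grad = np.array([(F(x + delta * e)[0] - f1x) / delta for e in np.eye(len(x))])
    if np.allclose(grad, 0.0):
        return x                                  # Case 2: no descent direction, keep the imperialist
    for _ in range(max_tries):                    # Case 1: shrink the step until the point improves
        y = x - step * grad
        f1y, f2y = F(y)
        if f1y <= f1x and f2y <= f2x:             # assumed acceptance rule standing in for (13)-(14)
            return y
        step *= shrink
    return x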

Imperialistic Competition

As mentioned in reference [20], during the evolution of countries all the empires try to possess the colonies of the other empires and control them. As a result, the power of the weaker empires gradually decreases while the power of the more powerful empires increases. This process can be described as follows:

• Compute the total power of the j-th empire depending on its own imperialist and colonies as follows:

equation(15)

where the weighting coefficient in (15) is positive, imperialist j is the imperialist of the j-th empire, N.C._j is the number of colonies of the j-th empire, and cost(·) is the normalized cost function defined in formula (5). Use the following formula (16) to compute the possession probability:

equation (16)

where the two quantities are the normalized total power and the total power of the j-th empire, respectively. Randomly select some colonies from the current evolution country swarm (e.g., select only one and denote it colony), form the vector P of the possession probabilities of all empires, and also generate an N-dimensional vector V with uniformly distributed elements, i.e.,

V = (v_1, v_2, ..., v_N),   v_i ∼ U(0, 1)                             (17)

Furthermore, let

θ = P − V                                                             (18)

Then we assign the colony to the empire whose index corresponds to the maximum component of the vector θ.

Through imperialistic competition, powerless empires gradually collapse and the number of their colonies becomes smaller and smaller. When an empire loses all of its colonies, we consider that the empire has collapsed, and its imperialist becomes one of the remaining colonies. A sketch of one competition step is given below.
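This sketch follows the standard ICA competition mechanism of [20], which formulas (15)-(18) mirror; the coefficient xi, the data layout (a dict of empires with an imperialist index and a list of colony indices), and the choice of the weakest colony are assumptions.

import numpy as np

def competition_step(empires, costs, xi=0.1, rng=np.random.default_rng()):
    """Hand the weakest empire's weakest colony to the empire that wins theta = P - V."""
    idx = list(empires)
    tc = np.array([costs[empires[j]["imp"]] +                       # total power, formula (15)
                   xi * np.mean([costs[c] for c in empires[j]["cols"]] or [0.0]) for j in idx])
    ntc = tc - tc.max()                                              # normalized total power
    P = np.abs(ntc / ntc.sum()) if ntc.sum() != 0 else np.full(len(idx), 1.0 / len(idx))  # (16)
    V = rng.uniform(size=len(idx))                                   # (17)
    theta = P - V                                                    # (18)
    winner, loser = idx[int(np.argmax(theta))], idx[int(np.argmax(tc))]
    if loser != winner and empires[loser]["cols"]:
        weakest = max(empires[loser]["cols"], key=lambda c: costs[c])
        empires[loser]["cols"].remove(weakest)
        empires[winner]["cols"].append(weakest)
        if not empires[loser]["cols"]:                               # collapsed: imperialist becomes a colony
            empires[winner]["cols"].append(empires.pop(loser)["imp"])
    return empires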

The Flowchart Of The Proposed Algorithm

The main difference between the proposed Multi-objective Imperialist Competitive Algorithm (denoted MICA) and the original imperialist competitive algorithm is the method of colonies moving toward their relevant imperialist, illustrated in Figures 2-4. In order to make the evolution country swarm approach or enter the feasible region, three different methods of colonies moving toward their relevant imperialist are given. In addition, a new operator for exchanging the positions of the imperialist and colony is designed to achieve a better balance between the exploration and exploitation behaviours of MICA, and a new local search method is embedded in order to increase the convergence speed of the proposed algorithm. The steps of the proposed algorithm are as follows:

Step 1: Choose the proper parameters and the initial country swarm size; randomly generate the initial country swarm in the search space [L, U] and denote it as the set pop(0); find the weakly Pareto optimal countries (i.e., weakly Pareto optimal solutions) in pop(0) according to Definition 3 and denote them as an external set C(0); let t = 0.

Step 2: Generate the initial empires, i.e., select the N most powerful countries from pop(t) and divide the rest of the countries among them.

Step 3: Make each colony move toward its relevant imperialist based on the method of subsection 3.2, and exchange the positions of the imperialist and the colony according to subsection 3.3.

Step 4: Carry out the local search operator and imperialistic competition, and form the next evolution country swarm pop(t+1).

Step 5: Find the weakly Pareto optimal countries in the union of pop(t+1) and the external set C(t), and use them to replace the countries in C(t) to form the new external set C(t+1).

Step 6: If the maximum number of cycles has been reached, the algorithm stops and outputs the optimal solution of problem (1); otherwise, let t = t + 1 and go to Step 2.

Experimental Results And Discussions

To evaluate the efficiency of the proposed algorithm, thirteen nonlinear constrained optimization test problems g01-g13 were solved by six evolutionary optimization algorithms: OICA [20], SAEA [25], SMEA [26], RCEA [27], ISEA [28], and the proposed algorithm MICA. These benchmark functions are described in [27]; a few remarks on them are repeated after Table 3, and the original sources of the functions are also cited. Test functions g02, g03, and g12 are maximization problems; they were transformed into minimization problems using the objective function min(−f(x)).

In order to estimate how difficult it is to generate feasible countries through a purely random process, we use the ρ-metric [29], which measures the ratio between the size of the feasible search space and that of the entire search space, i.e.,

ρ = |F| / |S|                                                         (19)

where |S| is the number of countries randomly generated from the search space [L, U] and |F| is the number of feasible countries found among these |S| countries; in our algorithm, |S| is the initial country swarm size. Each algorithm was implemented in MATLAB 7.0 on an Intel Pentium IV 2.8-GHz personal computer and was executed for 30 independent runs on each test problem. In the simulations, the initial country swarm size and the ratio of the most powerful countries are kept fixed, and the maximum number of cycles is 1500 [30-33].
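A small snippet for estimating the ρ-metric of (19) by uniform random sampling over [L, U]; the names and the sample size are illustrative.

import numpy as np

def rho_metric(is_feasible, lower, upper, n_samples=1_000_000, rng=np.random.default_rng()):
    """Estimate rho = |F| / |S| as the fraction of uniformly sampled points that are feasible."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    samples = rng.uniform(lower, upper, size=(n_samples, lower.size))
    return sum(bool(is_feasible(x)) for x in samples) / n_samples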


Test problem g01 g02 g03 g04 g05 g06 g07 g08 g09 g10 g11 g12 g13
Average percentage 100 95 59 89 100 62 78 92 85 78 86 98 57

Table 1: Average percentage of feasible countries in the final country swarm with 30 independent runs.

Table 1 summarizes the average percentage of feasible countries in the final evolution country swarm over 30 independent runs for each test problem. Moreover, in order to illustrate the convergence rate of the proposed algorithm, we record the average distance from the best individual of the imperialist swarm to the boundary of the feasible region every 1500 generations over the 30 runs; the results are presented in Table 2. We also list, in Table 3, the known optimal value and the best, mean, and worst objective function values, together with the standard deviation (std.), after 30 independent runs of MICA and of the Original Imperialist Competitive Algorithm (denoted OICA). The results of the four compared algorithms SAEA, SMEA, RCEA, and ISEA were taken from their original references. In Table 2, "I.N." represents the iteration number, and in Table 3, "NA" denotes that no result is reported in the reference.

I.N.
Function 1500 3000 4500 6000 7500 9000 10500 12000 13500 15000
g01 0 0 0 0 0 0 0 0 0 0
g02 0 0 0 0 0 0 0 0 0 0
g03 6.21E-03 4.42E-01 5.92E-13 0 0 0 0 0 0 0
g04 0 0 0 0 0 0 0 0 0 0
g05 3.28E-06 1.12E-04 0 0 0 0 0 0 0 0
g06 0 0 0 0 0 0 0 0 0 0
g07 0 0 0 0 0 0 0 0 0 0
g08 0 0 0 0 0 0 0 0 0 0
g09 0 0 0 0 0 0 0 0 0 0
g10 0 0 0 0 0 0 0 0 0 0
g11 2.86E-14 1.85E-05 2.72E-17 0 0 0 0 0 0 0
g12 0 0 0 0 0 0 0 0 0 0
g13 3.24E-12 5.32E-11 1.54E-07 2.34E-14 3.35E-06 0 0 0 0 0

Table 2: Average distance from the best individual of the imperialist swarm to the boundaries of the feasible region at every 1500 generations for the 30 runs.

Methods
Function Optimal Status OICA [19] SAEA [9] SMEA [16] RCEA [17] ISEA [26] MICA
g01 -15 best mean worst std.  -15.000  -15.000  -15.000  -15.000  -15.000  -15.000
-15 -15 -15 -15 -14.494 -15
-15 -15 -15 -15 -12.446 -15
-15 0 0  0.0E+00 9.30E-01 1.30E-11
g02 -0.80362 best mean worst std.  -0.80342  -0.80297  -0.803601  -0.803515  -0.803376  -0.803619
-0.79212 -0.7901 -0.785238 -0.781975 -0.798231 -0.793421
-0.76213 -0.76043 -0.751322 -0.726288 -0.768291 -0.783461
1.50E-03 1.20E-02 1.70E-02 2.00E-02 9.00E-03 2.50E-02
g03 -1 best mean worst std.  -1.0000 -1  -1.0000  -1.0000  -1.0000  -1.0000
-1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1
6.40E-03 7.5E-05  2.10E-04 1.90E-04 9.70E-05 2.30E-12
g04 -30665.5 best mean worst std.  -30665.405  -30665.500  -30665.539  -30665.539  - 30665.539 - 30665.539
-30665.531 -30665.2 -30665.539 -30665.539 -30665.539 -30665.539
-30665.523 -30665.3 -30665.539 -30665.539 -30665.539 -30665.539
0 4.90E-01 0 2.00E-05 0 7.20E-10
g05 5126.498 best mean worst std.  5126.964  5126.989  5126.599  5126.497  NA  5126.4981
5432.08 5432.08 5174.492 5128.881 NA 5126.4981
5883.95 6089.43 5304.167 5142.472 NA 5126.4981
3.30E+05 3.90E+03 5.00E+01 3.50E+00 NA 1.51E-10
g06 -6961.81 best mean worst std.  -6961.800  -6961.800  -6961.814  -6961.814  -6961.814  -6961.814
-6961.8 -6961.8 -6961.284 -6875.94 -6961.813 -6961.814
-6961.8 -6961.8 -6952.482 -6350.262 -6961.812 -6961.814
0 0 1.90E+00 1.60E+02 8.50E-05 1.21E-10
g07 24.30621 best mean worst std.  24.47  24.48  24.327  24.307  24.338  24.3062
25.38 26.58 24.475 24.374 24.527 24.3457
28.32 28.4 24.843 24.642 24.995 24.3812
1.20E+01 1.10E+00 1.30E-01 6.60E-02 1.70E-01 2.53E-09
g08 0.095825 best mean worst std.  0.095825  0.095825  0.095825  0.095825  0.095825  0.095825
0.095825 0.095825 0.095825 0.095825 0.095825 0.095825
0.095825 0.095825 0.095825 0.095825 0.095825 0.095825
0 0 0 2.60E-17 0 3.21E-14
g09 680.6301 best mean worst std.  680.64  680.64  680.632  680.630  680.630  680.630
680.7 680.72 680.643 680.656 680.631 680.63
680.83 680.87 680.719 680.763 680.634 680.63
5.30E+00 5.90E-02 1.60E-02 3.40E-02 8.10E-04 4.20E-09
g10 7049.331 best mean worst std.  7051.31  7061.34  7051.903  7054.316  7062.019  7049.330
7625.87 7627.89 7253.047 7559.192 7342.944 7049.33
8187.54 8288.79 7638.366 8835.655 7588.054 7049.33
3.40E+01 3.70E+02 7638.366 5.30E+02 1.40E+02 1.10E-09
g11 0.75 best mean worst std.  0.750  0.750  0.750  0.750  0.750  0.750
0.75 0.75 0.75 0.75 0.75 0.75
0.75 0.75 0.75 0.75 0.751 0.75
0 0 1.50E-04 8.00E-05 2.60E-04 5.31E-08
g12 -1 best mean worst std.  -1.0000  -1.0000  -1.0000  -1.0000  NA  -1.0000
-1 -1 -1 -1 NA -1
-1 -1 -1 -1 NA -1
0 0 0 0.00E+00 NA 6.80E-12
g13 0.05395 best mean worst std.  0.053997  NA  0.053986  0.053957  0.05517  0.053949
0.066531 NA 0.166385 0.067543 0.28184 0.025432
0.097569 NA 0.468294 0.216915 0.471 0.043957
1.60E-02 NA 1.80E-01 3.10E-02 1.80E-01 1.50E-01

Table 3: Comparison of the proposed algorithm MICA with OICA [19], SAEA [9], SMEA [16], RCEA [17], and ISEA [26] on 13 benchmark functions. "NA" denotes that no result is reported.

The complete definitions of the thirteen benchmark functions g01-g13 are given in [27] and in their original sources (e.g., [32] for g03 and [33] for g05); only the remarks that survive from the problem descriptions are repeated here. For the first test problem, several of the constraints are active at the global minimum. For g02, the global maximum is unknown; the best value we found is better than any reported value, up to the best of our knowledge, and the constraint g1 is active. For g12, the feasible region of the search space consists of 9^3 disjoint spheres, and a point is feasible if the defining inequality holds for at least one of the spheres; the optimum is located at x* = (5, 5, 5) within the feasible region, and f(x*) = 1. For g13, −2.3 ≤ xi ≤ 2.3 for i = 1, 2 and −3.2 ≤ xi ≤ 3.2 for i = 3, 4, 5; the optimum is located at x* = (−1.717143, 1.595709, 1.827247, 0.7636413, 0.763645) and f(x*) = 0.0539498.

As can be seen from Table 2, for the test problems without equality constraints (g01, g02, g04, g06, g07, g08, g09, g10, and g12), the proposed algorithm MICA can enter the feasible region within 1500 generations; for test functions g03 and g11, MICA can enter the feasible region within 6000 generations, and for functions g05 and g13, within 4500 and 9000 generations, respectively. Moreover, after 3000 generations, the best individual of the imperialist swarm is already very close to the boundary of the feasible region.

It can be seen from Table 3 that our algorithm MICA finds a better "best" result than the other five algorithms OICA, SAEA, SMEA, RCEA, and ISEA on four functions: g02, g07, g10, and g13. In addition, MICA finds a similar best solution on six problems: g01, g03, g06, g08, g11, and g12 (ISEA does not report results for g12). Slightly better best results are found by MICA on the remaining functions g04, g05, and g09 (in fact, on g04 and g06 our algorithm obtains best solutions similar to those of the three compared algorithms SMEA, RCEA, and ISEA). Our approach finds better mean and worst results on seven test functions, g02, g05, g06, g07, g09, g10, and g13, except that the compared algorithm ISEA obtains a better mean on g02 [34]. It also provides similar mean and worst results on six functions: g01, g03, g04, g08, g11, and g12. The proposed algorithm obtains a slightly worse standard deviation than RCEA on the test functions g01, g08, g12, and g13, and than SMEA and ISEA on g02.

The compared results in Table 2 verify that MICA has a good convergence rate, and the compared results in Table 3 reflect the fact that our algorithm is capable of performing a robust and stable search. Furthermore, feasible solutions are consistently found for all test problems. The above observations validate that the proposed algorithm MICA has substantial potential in coping with various nonlinear constrained optimization problems.

Conclusions

This paper introduces a new multiobjective imperialist competitive algorithm (MICA) for solving nonlinear constrained optimization problems. The proposed algorithm has three important characteristics:

1) Combining multi-objective optimization with local search models;

2) Achieving a better balance of exploration and exploitation through the method of exchanging the positions of the imperialist and colony;

3) Speeding up the convergence by taking advantage of a new local search method. Based on the comparison between the proposed algorithm and the five compared algorithms, it is concluded that our algorithm MICA has shown its potential to handle various nonlinear constrained optimization problems, and its performance is better than that of the other state-of-the-art evolutionary algorithms referred to in this paper in terms of the selected performance metrics.

An important subject of ongoing work is applying our approach to real-world optimization problems. Additionally, we will try to design different global and local search models, since a suitable search model can improve the capability of the algorithm remarkably. Finally, we aim to explore the possibility of decreasing the computational cost after reaching the feasible region.

Acknowledgments

This work is partially supported by the Planning Fund for the Humanities and Social Sciences of the Ministry of Education (No. 18YJA790053). The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have greatly improved the presentation.

References

Citation: Liu CA and Jia H (2018) Multiobjective Imperialist Competitive Algorithm for Solving Nonlinear Constrained Optimization Problems. J Swarm Intel Evol Comput 7: 172.

Copyright: © 2018 Liu CA, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.