Research Article - (2016) Volume 5, Issue 3

A Hybrid Simplex Non-dominated Sorting Genetic Algorithm for Multi-Objective Optimization

Seid H Pourtakdoust* and Seid M Zandavi
Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran, E-mail: [email protected]
*Corresponding Author: Seid H Pourtakdoust, Department of Aerospace Engineering, Sharif University of Technology, Tehran, Iran, Tel: 982166164610

Abstract

This paper introduces a hybrid scheme for multi-objective optimization problems that utilizes two established heuristic algorithms. The proposed hybrid scheme consists of two parts: the Nelder-Mead simplex algorithm (SA) and the Non-dominated Sorting Genetic Algorithm II (NSGA II). In this respect, subsequent to NSGA II sorting for the optimum points, the SA searches the optimum set to find the local optimal points and thus localize a promising area that is likely to contain the global minimum. This is especially helpful since SA is an efficient algorithm that can accurately and quickly exploit the promising area for the optimum point. The proposed hybrid scheme is applied to multi-objective optimization of some benchmark functions and its performance is compared against those of the classical NSGA II as well as the Multi-Objective Particle Swarm Optimization (MOPSO). The numerical results show that the proposed hybrid scheme provides competitive results and often outperforms the existing algorithms.

Keywords: Heuristic hybrid algorithm; Multi-objective optimization problem; Simplex algorithm; Genetic algorithm

Introduction

In many engineering applications it is necessary to find the best approximate solution of multi-objective optimization problems quickly and with good accuracy. Multi-objective optimization problems are common and important in real-world applications, and many researchers are still devising novel heuristic and mathematical approaches for their solution. Mathematical techniques were more common in earlier times, but since many conventional, and especially deterministic, pure mathematical optimization schemes suffer from stagnation in local optima, there is a demand for innovative heuristic methods [1]. The history of heuristic algorithms as a revolutionary idea for stochastic optimization techniques goes back to 1984 [1,2]. Further, stochastic optimization techniques are readily applicable to real problems because of their gradient-free mechanism and their avoidance of local optima traps. Stochastic and multi-objective optimization is in use in different fields of study such as mechanical engineering [3], civil engineering [4], chemistry [5,6] and others [7]. There are also heuristic algorithms suggested for solving challenging problems in power systems, such as the cuckoo search algorithm [8], Pseudo-Gradient based Particle Swarm Optimization with Constriction Factor (PG-PSOCF) [9] and Hybrid Particle Swarm Optimization (HPSO) [10].

Within the past decade, a number of multi-objective heuristic algorithms have been suggested [11-14] that represent various trade-offs between the objectives, including optimal solutions of a multi-objective problem [15]. In contrast to classical optimization methods, which convert the multi-objective optimization problem to a single-objective one by emphasizing one particular Pareto optimal solution at a time, multi-objective heuristic algorithms give a set of optimal solutions called Pareto optimal solutions (PAS). A key point in this regard is to find an accurate approximation of the true Pareto optimal solutions with the highest diversity [16]. PAS help decision makers select certain optimal points from a diverse range of design options based on their experience and design requirements. Among the most well-known stochastic optimization techniques are the Non-dominated Sorting Genetic Algorithm (NSGA) [17], Strength-Pareto Evolutionary Algorithm (SPEA) [18,19], Pareto Archived Evolution Strategy (PAES) [20], Pareto frontier Differential Evolution (PDE) [21], Efficient Ant Colony Optimization [22], Hybridization of the Fruit Fly Optimization Algorithm and Firefly Algorithm (FOA-FA) [23], Non-dominated Sorting Genetic Algorithm II (NSGA II) [24] and Multi-Objective Particle Swarm Optimization (MOPSO) [25]. MOPSO is one of the most popular multi-objective meta-heuristic algorithms and is conceptually similar to PSO. In more recent MOPSO studies [26-28], extra conditions such as diversity estimation are utilized to reach better exploration characteristics. In this sense, MOPSO utilizes the concept of Pareto dominance to select leaders from a non-dominated external archive; the swarm leaders that guide the particles to the Pareto front are selected from the top portion of the archive at each iteration. A dynamic fitness inheritance technique [29] has also been applied to reduce the computational cost of the optimization process. Another well-known multi-objective meta-heuristic is NSGA II [24], a multi-objective version of the well-regarded Genetic Algorithm (GA) [30,31]. NSGA II was proposed to remove several deficiencies of its first version [17], namely the high computational cost of non-dominated sorting, the lack of elitism and the need for a sharing parameter. In the second version, the first population is generated randomly, each individual is ranked by the non-dominated sorting method, and a new population is generated by genetic operators such as selection, crossover and mutation. The new population is then sorted again and the new elements of the final population are selected according to their non-dominated level. The process of selecting non-dominated individuals is repeated until a population of the same size as the initial population is reached.

More recently, over the past three years, a number of multi-objective optimization algorithms have also been proposed. These new studies include the Multi-objective Grey Wolf Optimizer [1], Multi-objective Cat Swarm Optimization [32], Multi-objective Differential Evolution [33], Multi-objective Gravitational Search Algorithm [34], Multi-objective Ant Colony Optimization [35], Multi-objective Artificial Bee Colony algorithm [36] and the Multi-objective Teaching-Learning-Based Optimization algorithm [37]. These studies show the ability of meta-heuristics to handle multiple objectives. Although all of these algorithms are able to approximate the true Pareto optimal front of a given problem, none can solve all optimization problems accurately. Therefore, it remains very likely that new hybrid algorithms will be proposed that solve problems not solved before, or solve them with better accuracy than the existing techniques.

In this respect, a new hybrid heuristic optimization algorithm for multi-objective problems is proposed in the current work. The proposed algorithm combines the Nelder-Mead simplex algorithm with the Non-dominated Sorting Genetic Algorithm II (NSGA II) to find the best global point. The performance of the new algorithm is demonstrated on several complex benchmark functions.

The organization of this paper is as follows: Section 2 deals with the mathematical representation of multi-objective optimization. Section 3 describes the simplex method. Section 4 is devoted to the non-dominated sorting genetic algorithm II. The proposed hybrid algorithm is introduced in Section 5. Section 6 provides the numerical simulation results. Finally, a conclusion is drawn in Section 7.

Multi-Objective Optimization

The problem is to minimize the fitness function of a multi-objective problem. The model is described as follows:

$$
\begin{aligned}
\min_{X}\;\; & F(X) = \big[f_1(X),\, f_2(X),\, \ldots,\, f_M(X)\big] \\
\text{subject to}\;\; & g_j(X) \le 0, \quad j = 1, \ldots, J \\
& h_k(X) = 0, \quad k = 1, \ldots, K
\end{aligned}
$$

where $X = (x_1, x_2, \ldots, x_N)$ is the N-dimensional vector of decision variables that represents the possible solutions, M is the number of objectives, $f_m$ is the m-th objective function, and $g_j$ and $h_k$ are the inequality and equality constraint functions, respectively.

To clarify the concept of Pareto optimality, consider two objective vectors $F^a, F^b \in \mathbb{R}^M$. $F^a$ dominates $F^b$ (written $F^a \prec F^b$) if and only if $f_m^a \le f_m^b$ for all $m \in \{1, \ldots, M\}$ and $f_m^a < f_m^b$ for at least one $m$. The dominance relations for a two-objective problem are indicated by labelled circles in Figure 1. Hence, a vector of decision variables $X^a$ is a non-dominated solution if and only if there is no other solution $X^b$ such that $F(X^b) \prec F(X^a)$. The non-dominated solution is also known as the Pareto optimal solution. The set of non-dominated solutions of a multi-objective optimization problem is known as the Pareto optimal set (P). The set of objective vectors corresponding to (P), $PF = \{F(X) \mid X \in P\}$, is known as the Pareto front. The PF for a two-objective problem is illustrated by the black circles in Figure 1.

Figure 1: Dominance relation for a two-objective problem.

It is desirable to find as many non-dominated solutions as possible, given the different and conflicting objective functions and constraints of the multi-objective optimization problem. The Pareto front corresponding to the non-dominated set should be as close to, and as well distributed over, the true Pareto front as possible. However, it is possible for different solutions to map to the same fitness value in objective space.
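For illustration, the dominance test described above can be expressed in a few lines of code. The following is a minimal Python sketch (the function names are ours, not from the paper): `dominates` implements the componentwise test for minimization, and `non_dominated` filters a set of objective vectors down to its Pareto front.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa dominates fb under minimization:
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def non_dominated(F):
    """Indices of the non-dominated rows of an (N, M) matrix of objective values."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```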

Simplex Method

Simplex is a heuristic optimization algorithm and a direct search method that does not use the derivatives of the objective function. A simplex is a geometrical object produced by n+1 points (X0,..., Xn) in an n-dimensional space [38,39]; for example, the simplex in two-dimensional space is a triangle. The basic idea of the simplex algorithm is to compare the value of the objective function at the n+1 vertices of the simplex and move it gradually toward the optimum point via an iterative process. The following equations can be used to generate the vertices of a regular simplex of size a (an equilateral triangle in two-dimensional space) within the n-dimensional space [39]:

$$
X_i = X_0 + p\,u_i + \sum_{\substack{s=1 \\ s \ne i}}^{n} q\,u_s, \qquad i = 1, \ldots, n
$$

where $X_0$ is the initial base point, $u_s$ is the unit vector along coordinate axis s, and p and q are defined as follows:

$$
p = \frac{a}{n\sqrt{2}}\left(\sqrt{n+1} + n - 1\right), \qquad
q = \frac{a}{n\sqrt{2}}\left(\sqrt{n+1} - 1\right)
$$
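As a concrete illustration of this construction, the sketch below generates the n+1 vertices from a base point X0 and side length a using the formulas above (a minimal Python rendering; names are ours). For n = 2 the result is the equilateral triangle of side a mentioned in the text.

```python
import numpy as np

def regular_simplex(x0, a):
    """Vertices of a regular simplex of side a around base point x0:
    X_i = X_0 + p*u_i + q along every other coordinate axis."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    p = a / (n * np.sqrt(2.0)) * (np.sqrt(n + 1.0) + n - 1.0)
    q = a / (n * np.sqrt(2.0)) * (np.sqrt(n + 1.0) - 1.0)
    vertices = [x0]
    for i in range(n):
        v = x0 + q * np.ones(n)   # offset q along every axis ...
        v[i] = x0[i] + p          # ... except p along axis i
        vertices.append(v)
    return np.array(vertices)     # shape (n+1, n)
```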

Through a sequence of elementary geometric transformations (reflection, contraction and expansion), the simplex, shown in Figure 2, moves toward the optimum point where after each transformation, the current worst vertex is replaced by a better one.

The reflected, expanded and contracted points, denoted $X_r$, $X_e$ and $X_c$ respectively, are given by:

$$
\begin{aligned}
X_r &= (1+\alpha)\,\bar{X} - \alpha X_h \\
X_e &= \gamma X_r + (1-\gamma)\,\bar{X} \\
X_c &= \beta X_h + (1-\beta)\,\bar{X}
\end{aligned}
$$

Here, $\bar{X} = \frac{1}{n}\sum_{i \ne h} X_i$ is the centroid of all vertices except the worst one, where h is the index of the worst point. The parameters α, β and γ are called the reflection, expansion and contraction coefficients, respectively.

Reflection is the operation by which the worst vertex, called high, is reflected with respect to the centroid $\bar{X}$. If the reflected point is better than all other points, the method expands the simplex in the reflection direction; otherwise, if it is at least better than the worst, the algorithm performs the reflection again with the new worst point [39]. The contraction is performed when the worst point is at least as good as the reflected point.
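The following minimal Python sketch performs one such step on a single-objective function f, following the acceptance logic just described (the coefficient defaults are the values from Table 2; this is our illustrative rendering, not the authors' code):

```python
import numpy as np

def simplex_step(vertices, f, alpha=0.9, gamma=2.0, beta=0.005):
    """One reflection/expansion/contraction step that replaces the worst vertex."""
    vals = np.array([f(v) for v in vertices])
    h = int(np.argmax(vals))                                  # index of the worst vertex
    centroid = (vertices.sum(axis=0) - vertices[h]) / (len(vertices) - 1)
    xr = (1 + alpha) * centroid - alpha * vertices[h]         # reflection
    if f(xr) < vals.min():                                    # better than all: try expansion
        xe = gamma * xr + (1 - gamma) * centroid
        vertices[h] = xe if f(xe) < f(xr) else xr
    elif f(xr) < vals[h]:                                     # better than the worst: accept
        vertices[h] = xr
    else:                                                     # otherwise: contraction
        vertices[h] = beta * vertices[h] + (1 - beta) * centroid
    return vertices
```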

Non-dominated Sorting Genetic Algorithm II (NSGA II)

The NSGA II [24] algorithm is one of the most efficient and famous algorithms for multi-objective optimization. It uses fast non-dominated sorting to rank the population into fronts, and a parameter called the crowding distance is calculated within each front. Subsequently, a tournament selection is made between two individuals that are randomly selected from the parent population: the individual with the lower front number is selected if the two individuals come from different fronts, and the individual with the higher crowding distance is selected if they are from the same front. Next, the crossover and mutation operators are used to generate a new offspring population. Finally, the parent and offspring populations are combined, a fast non-dominated sorting and crowding distance assignment procedure is used to rank the combined population, and only the best N individuals are selected as the new parent population [40].

NSGA II was proposed on the basis of the non-dominated sorting genetic algorithm (NSGA) [17], and its main advantages compared with NSGA are as follows: (1) it uses a fast non-dominated sorting approach, (2) it has no sharing parameter and (3) it uses an elitist strategy [40]. NSGA II has good global search ability and well-distributed non-dominated solutions in the Pareto front. Nevertheless, the convergence capability of NSGA II is limited and its elite strategy may result in the loss of non-dominated solutions from the Pareto front [40].
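To make these operators concrete, the sketch below gives a standard rendering of the crowding-distance assignment and the crowded binary tournament described above (a minimal Python sketch of the textbook NSGA II operators, not the authors' implementation):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of each individual within one front.
    F is an (N, M) array of objective values for that front."""
    N, M = F.shape
    dist = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        dist[order[0]] = dist[order[-1]] = np.inf   # boundary solutions are always kept
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            continue
        for k in range(1, N - 1):
            dist[order[k]] += (F[order[k + 1], m] - F[order[k - 1], m]) / span
    return dist

def crowded_tournament(rank, dist, i, j):
    """Binary tournament: the lower front rank wins; ties go to the larger distance."""
    if rank[i] != rank[j]:
        return i if rank[i] < rank[j] else j
    return i if dist[i] >= dist[j] else j
```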

Simplex Non-Dominated Sorting Genetic Algorithm II

In this section, the proposed hybrid algorithm is introduced as a tool for multi-objective optimization problems. First, the basic structure of the proposed algorithm is presented, followed by a detailed description of its constituent modules.

General setting out of the algorithm

The block diagram of Simplex-NSGA II is shown in Figure 3. The parameters of Simplex-NSGA II are set during initialization. Simplex-NSGA II utilizes two heuristic algorithms, the simplex optimization method and NSGA II, to find the optimal solution. In this approach, simplex optimization is utilized for exploration, to create better diversity within the search space, while NSGA II is used for exploitation, to arrange the members of the Pareto optimal set that form the main output of a multi-objective optimization algorithm. The proposed Simplex-NSGA II consists of two loops. The outer loop generates a new population using the simplex method at each generation. The inner loop propagates the vertices of the simplex using the reflection, expansion and contraction operations, to reshape and move the simplex toward promising regions of the search space, and is terminated when the maximum number of iterations (imax) is reached. Table 1 presents the pseudo-code of the Simplex-NSGA II; a simplified illustrative sketch is given below.
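As a rough, runnable illustration of this two-loop structure only (not the authors' algorithm), the toy sketch below replaces the full NSGA II ranking and crowding machinery with plain non-dominated filtering, and exploits a random weighted sum of the objectives in the inner simplex loop; both simplifications are ours, made for brevity.

```python
import numpy as np

def dominates(fa, fb):
    return np.all(fa <= fb) and np.any(fa < fb)   # minimization

def simplex_nsga2_sketch(F, lo, hi, n_pop=50, n_gen=100, i_max=60,
                         alpha=0.9, gamma=2.0, beta=0.005, seed=0):
    """Toy two-loop sketch: the outer loop keeps a non-dominated archive and breeds
    a new population; the inner loop moves a simplex built from archive members."""
    rng = np.random.default_rng(seed)
    n = lo.size
    pop = lo + rng.random((n_pop, n)) * (hi - lo)
    for _ in range(n_gen):                                    # outer loop: generations
        objs = np.array([F(x) for x in pop])
        keep = [i for i in range(n_pop)
                if not any(dominates(objs[j], objs[i]) for j in range(n_pop) if j != i)]
        archive = pop[keep]
        w = rng.random(objs.shape[1]); w /= w.sum()           # random scalarization
        f = lambda x: float(w @ np.asarray(F(x)))
        simplex = archive[rng.integers(len(archive), size=n + 1)].astype(float)
        for _ in range(i_max):                                # inner loop: simplex moves
            vals = np.array([f(v) for v in simplex])
            h = int(np.argmax(vals))
            c = (simplex.sum(axis=0) - simplex[h]) / n        # centroid of the others
            xr = np.clip((1 + alpha) * c - alpha * simplex[h], lo, hi)
            if f(xr) < vals.min():                            # expansion
                xe = np.clip(gamma * xr + (1 - gamma) * c, lo, hi)
                simplex[h] = xe if f(xe) < f(xr) else xr
            elif f(xr) < vals[h]:                             # accept reflection
                simplex[h] = xr
            else:                                             # contraction
                simplex[h] = beta * simplex[h] + (1 - beta) * c
        parents = archive[rng.integers(len(archive), size=n_pop)]
        pop = np.clip(parents + 0.05 * (hi - lo) * rng.standard_normal((n_pop, n)), lo, hi)
        pop[: n + 1] = simplex                                # simplex vertices seed next gen
    return archive
```

For instance, on the toy bi-objective problem F(x) = (x^2, (x - 2)^2) with lo = np.array([-5.0]) and hi = np.array([5.0]), the returned archive approximates the Pareto set [0, 2].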

Figure 2: Available moves in the simplex method: (a) initial simplex; (b) reflection; (c) expansion; (d) contraction.

Figure 3: Simplex-NSGA II flowchart.

Initialization

Simplex-NSGA II has nine parameters that must be set before the execution of the algorithm. The parameters are listed in Table 2 and are introduced in the following subsections.

Generation of the initial population

In the proposed method, the initial parent population (P0, of size Npop) is generated randomly with a uniform distribution. The population is sorted based on non-domination and each solution is assigned a fitness value equal to its non-domination level. An initial offspring population (Q0) of the same size as the parent population is created through recombination based on binary tournament selection, with variation induced via mutation operators.

Population update

From the first generation onward, a combined population ($R_t = P_t \cup Q_t$) is formed and sorted according to the fast non-dominated sorting procedure. The new parent population is formed by the inner loop. Therefore, during any iteration of the inner loop, the population is updated using the reflection, expansion and contraction operators for the next generation. In this respect, the individuals generated by genetic operators such as selection, crossover and mutation serve as the vertices of the initial simplex used to create the new population.

Table 1: Simplex-NSGA II pseudo-code.

Stop condition

The inner loop stops when the maximum number of iterations (imax) is reached. The outer loop stops when the maximum number of generations (Ngen) is reached.

Numerical Results

In this section, the performance of Simplex-NSGA II is evaluated using ten benchmarks proposed in CEC 2009 [41] and listed in Tables 3 and 4. The results are compared to the well-known Non-dominated Sorting Genetic Algorithm II (NSGA II) [24] and Multi-Objective Particle Swarm Optimization (MOPSO) [28]. As performance metrics [32], the Inverted Generational Distance (IGD) [42], Spacing (SP) [43] and Maximum Spread (MS) [18] criteria are employed to measure convergence, uniformity of distribution, and coverage, respectively. The mathematical formulation of IGD is as follows:

$$
\mathrm{IGD} = \frac{\sqrt{\sum_{i=1}^{N} d_i^2}}{N}
$$

where N is the number of true Pareto optimal solutions and $d_i$ indicates the Euclidean distance between the i-th true Pareto optimal solution and the closest obtained Pareto optimal solution in the reference set. It should be noted that IGD = 0 indicates that all members of the non-dominated solutions are on the true Pareto front.
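A minimal Python sketch of this metric follows; note that the normalization sqrt(sum of d_i^2)/N is one common convention, and the paper's exact form may differ:

```python
import numpy as np

def igd(true_front, obtained_front):
    """Inverted Generational Distance: distance from each true Pareto point to the
    closest obtained point, combined as sqrt(sum of d_i^2) / N."""
    T = np.asarray(true_front, dtype=float)       # (N, M) true Pareto optimal points
    O = np.asarray(obtained_front, dtype=float)   # (K, M) obtained points
    d = np.sqrt(((T[:, None, :] - O[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    return float(np.sqrt((d ** 2).sum()) / len(T))
```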

The mathematical formulation of the SP and MS are as follows:

$$
\mathrm{SP} = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\bar{d} - d_i\right)^2}
$$

where $\bar{d}$ is the average of all $d_i$, N is the number of Pareto optimal solutions obtained, and $d_i = \min_{j}\sum_{m=1}^{M}\big|f_m(X_i) - f_m(X_j)\big|$ for all $j \ne i$.

Parameter Description Value
Npop Population size 50
Ngen Maximum generation 100
Pcrs Probability of crossover 0.80
Pmut Probability of mutation 0.01
a Side of simplex 22
α Reflection coefficient 0.9
γ Expansion coefficient 2
β Contraction coefficient 0.005
imax Maximum iteration of simplex method 60

Table 2: Parameters of simplex-NSGA II.

Name Mathematic Formulation
UF1 image
UF2 image
UF3 image
UF4 image
UF5 image
UF6 image
UF7 image

Table 3: Bi-objective benchmark functions.

$$
\mathrm{MS} = \sqrt{\sum_{i=1}^{M}\big[d(a_i, b_i)\big]^2}
$$

where d is a function to calculate the Euclidean distance, $a_i$ is the maximum value in the i-th objective, $b_i$ is the minimum value in the i-th objective, and M is the number of objectives.
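The sketch below gives a minimal Python rendering of both metrics under the forms given above (the nearest-neighbour distance in SP follows Schott's sum-of-absolute-differences convention, and MS is computed from the per-objective extents; both are common renderings rather than the authors' exact code):

```python
import numpy as np

def spacing(F):
    """SP: standard deviation of the nearest-neighbour distances d_i over the front."""
    F = np.asarray(F, dtype=float)                  # (N, M) obtained front
    D = np.abs(F[:, None, :] - F[None, :, :]).sum(axis=2)
    np.fill_diagonal(D, np.inf)                     # exclude self-distance
    d = D.min(axis=1)
    return float(np.sqrt(((d - d.mean()) ** 2).sum() / (len(F) - 1)))

def maximum_spread(F):
    """MS: Euclidean norm of the per-objective extents max f_m - min f_m;
    larger values indicate better coverage of the front."""
    F = np.asarray(F, dtype=float)
    return float(np.sqrt(((F.max(axis=0) - F.min(axis=0)) ** 2).sum()))
```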

In addition to the performance metrics, the best set of Pareto optimal solutions obtained by Simplex-NSGA II in both the parameter space and the objective space is shown in Figures 4-6. These figures show the performance of Simplex-NSGA II in comparison with the true Pareto front. For a comparative evaluation, all the algorithms are run 30 times on the test problems and the statistical results of the 30 runs are provided in Tables 5-7. Note that 900,000 function evaluations are used for each algorithm and the number of parameters for each benchmark is 30.

Statistical results of the algorithms for IGD, SP and MS are provided in Tables 5-7, respectively. The IGD results show that the proposed hybrid algorithm (Simplex-NSGA II) provides the best results on all statistical measures for the bi-objective test problems. IGD is the performance metric that reflects the accuracy and convergence of an algorithm; thus it can be stated that the proposed Simplex-NSGA II algorithm provides superior convergence on the bi-objective benchmarks.

The resulting Pareto optimal solutions of Simplex-NSGA II on each benchmark are also depicted in Figures 4 and 5. These figures demonstrate that the convergence and coverage characteristics (Tables 6 and 7) of Simplex-NSGA II are also better than those of the other algorithms. In addition, the Simplex-NSGA II Pareto optimal fronts are closer to the true Pareto front and highly distributed along both objectives. The numerical results show that the performance of the Simplex-NSGA II algorithm is more stable than that of MOPSO and NSGA II.

Name Mathematic Formulation
UF8 image
UF9 image
UF10 image

Table 4: Tri-objective benchmark functions.

Figure 4: Pareto optimal solutions for UF1 to UF6.

Problems UF8, UF9 and UF10 are three-objective problems, which makes them more challenging than the two-objective ones. The results of the first three-objective problem show that Simplex-NSGA II has competitive and even better performance in solving these problems as compared to NSGA II.

The Pareto optimal fronts obtained by the proposed algorithm for UF8 are shown in Figure 5. The Pareto optimal solutions have converged towards the true Pareto optimal front despite their low diversity. Therefore, the proposed Simplex-NSGA II algorithm shows high convergence but low coverage for the UF8 in the best case.

The UF9 test problem has a separated Pareto front, and the pertinent results indicate that the proposed algorithm has been effective on this challenging test function. The statistical results, both quantitatively and qualitatively, indicate that the coverage of the proposed Simplex-NSGA II is very good for all objectives. Finally, the results of the last test problem, UF10, are similar to those of UF8, in which NSGA II provides the better average, median, standard deviation and worst-case results for IGD. However, the proposed Simplex-NSGA II provides the best coverage among the 30 independent runs.

To get a better picture of the Pareto optimal sets obtained via the proposed Simplex-NSGA II, the first three variables of the true Pareto optimal set and the obtained sets are illustrated in Figure 6, showing very good convergence and coverage characteristics.

Figure 5: Pareto optimal solutions for UF7 to UF10.

Figure 6: Pareto optimal sets of Simplex-NSGA II on UF2, UF7 and UF9.

IGD UF1   UF2   UF3
Simplex-NSGAII MOPSO MOEA/D   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II
Average 0.04695 0.137005 0.18713   0.01359 0.06040 0.12224   0.07361 0.31399 0.28875
Median 0.03886 0.13174 0.18285   0.01225 0.04835 0.1231   0.07265 0.30802 0.28989
STD. Dev. 0.03319 0.04407 0.05073   0.00289 0.02762 0.01040   0.01108 0.04473 0.015931
Worst 0.09301 0.22786 0.24642   0.01845 0.13051 0.14389   0.09167 0.37773 0.31284
Best 0.00674 0.08990 0.12652   0.01060 0.03699 0.10456   0.05764 0.25648 0.26322
  UF4   UF5   UF6
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II
Average 0.02434 0.13604 0.06813   0.45289 2.20238 1.29155   0.15713 0.64752 0.688412
Median 0.02415 0.13432 0.06846   0.45486 2.12574 1.33731   0.13523 0.55073 0.69811
STD. Dev. 0.00138 0.00739 0.00214   0.08449 0.55304 0.13479   0.09019 0.26612 0.05543
Worst 0.02760 0.15189 0.07037   0.53701 3.03836 1.46726   0.35279 1.24281 0.74031
Best 0.02295 0.12733 0.06466   0.25741 1.46479 1.12326   0.07954 0.37933 0.55255
  UF7   UF8   UF9   UF10
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII NSGA II   Simplex-NSGAII NSGA II   Simplex-NSGAII NSGA II
Average 0.02669 0.35395 0.45534   0.19992 0.53681   0.31183 0.48840   1.70149 1.63729
Median 0.02641 0.38730 0.43776   0.20740 0.53650   0.32593 0.41441   1.54063 1.59123
STD. Dev. 0.01986 0.20442 0.18993   0.06217 0.18267   0.08044 0.14459   0.55163 0.29849
Worst 0.06600 0.61512 0.67711   0.33433 0.79647   0.48522 0.72230   3.02835 2.16250
Best 0.00558 0.05402 0.02910   0.12553 0.24530   0.22217 0.33365   1.12806 1.22028

Table 5: Statistical results for IGD on UF1 to UF10.

SP UF1   UF2   UF3
Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII   MOPSO NSGA II
Average 0.03811 0.00898 0.00384   0.02770 0.00829 0.00876   0.01991   0.00699 0.28875
Median 0.03761 0.00855 0.00382   0.02654 0.00814 0.00859   0.02007   0.00677 0.28939
STD. Dev. 0.01516 0.00247 0.00151   0.01289 0.00168 0.00076   0.00276   0.00170 0.01582
Worst 0.06146 0.01464 0.00665   0.06199 0.01245 0.01042   0.02369   0.01007 0.31294
Best 0.01399 0.00670 0.00213   0.01640 0.00624 0.00797   0.01461   0.00476 0.26342
  UF4   UF5   UF6
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II
Average 0.02543 0.00666 0.00730   0.12049 0.00479 0.00278   0.06150 0.02084 0.00640
Median 0.02386 0.00662 0.00728   0.12758 0.00487 0.00007   0.05482 0.01235 0.00000
STD. Dev. 0.00475 0.00091 0.00059   0.02505 0.00408 0.00553   0.04126 0.03258 0.01227
Worst 0.03384 0.00809 0.00836   0.15826 0.01206 0.01615   0.13353 0.11140 0.03010
Best 0.02055 0.00546 0.00610   0.07457 0.00006 0.00000   0.01222 0.00215 0.00000
  UF7   UF8   UF9   UF10
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII NSGA II   Simplex-NSGAII NSGA II   Simplex-NSGAII NSGA II
Average 0.02541 0.00670 0.00540   0.23364 0.02682   0.25104 0.02343   1.06788 0.01994
Median 0.02379 0.00655 0.00441   0.23699 0.02639   0.24302 0.02349   0.89524 0.02066
STD. Dev. 0.01286 0.00285 0.00301   0.03859 0.00827   0.03695 0.00405   0.41661 0.00348
Worst 0.05257 0.01240 0.01168   0.27492 0.04473   0.30891 0.03087   1.81766 0.02665
Best 0.00351 0.00325 0.00084   0.16389 0.01531   0.19916 0.01716   0.67219 0.01536

Table 6: Statistical results for SP on UF1 to UF10.

In summary, the numerical results demonstrate that the proposed Simplex-NSGA II has better performance on the bi-objective benchmark functions with regard to convergence and coverage. However, for some tri-objective benchmark functions, even though the proposed algorithm shows high convergence, it has low coverage. Thus the key advantages of the proposed Simplex-NSGA II algorithm compared to NSGA II and MOPSO are its high convergence and coverage characteristics. In addition, the results of the proposed Simplex-NSGA II were in almost all cases better than those of MOPSO. The proposed Simplex-NSGA II outperforms MOPSO because it utilizes simplex operators to generate new populations from the non-dominated solutions. Moreover, MOPSO updates gBest in each iteration, so all particles are attracted by the same (or a similar set of) gBest(s) in each iteration [44], while the individuals of Simplex-NSGA II are updated in each generation, assisting the search agents to explore the search space more extensively.

Conclusion

MS UF1   UF2   UF3
Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII   MOPSO NSGA II
Average 0.96922 0.64538 0.51774   0.89222 0.91205 0.87201   0.99659   0.61030 0.23994
Median 0.91629 0.66322 0.59541   0.88328 0.91636 0.87437   1.00000   0.61612 0.22943
STD. Dev. 0.13919 0.19292 0.16609   0.05448 0.02560 0.00560   0.08165   0.10575 0.12129
Worst 0.79407 0.26592 0.31478   0.82385 0.86654 0.85986   0.87763   0.38172 0.08975
Best 0.99536 0.95226 0.74128   0.93883 0.95301 0.87794   1.00000   0.77145 0.47863
  UF4   UF5   UF6
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII MOPSO NSGA II
Average 0.95759 0.81275 0.88320   0.83746 0.27926 0.29215   0.98393 0.27435 0.09677
Median 0.96323 0.81321 0.88131   0.83556 0.28654 0.29165   1.00000 0.22917 0.00005
STD. Dev. 0.01762 0.01367 0.01812   0.06986 0.09575 0.03470   0.21202 0.11285 0.20715
Worst 0.92252 0.79441 0.85324   0.69914 0.15574 0.23834   0.79702 0.15436 0.00000
Best 0.97665 0.83449 0.91394   0.95996 0.43827 0.34380   1.00000 0.52516 0.59484
  UF7   UF8   UF9   UF10
  Simplex-NSGAII MOPSO NSGA II   Simplex-NSGAII NSGAII   Simplex-NSGAII NSGA II   Simplex-NSGAII NSGA II
Average 0.98737 0.42928 0.56317   0.35198 0.50810   0.87641 0.19160   0.27643 0.13025
Median 1.00000 0.29520 0.63267   0.34883 0.50601   0.94795 0.16566   0.14774 0.10933
STD. Dev. 0.09838 0.27553 0.24209   0.44928 0.16136   0.39021 0.16351   0.38569 0.06243
Worst 0.80289 0.14458 0.14963   0.27879 0.22723   0.30638 0.06771   0.03183 0.06479
Best 1.00000 0.87714 0.99152   0.73412 0.71476   0.96641 0.64242   0.93133 0.25414

Table 7: Statistical results for MS on UF1 to UF10.

A new hybrid heuristic optimization algorithm has been proposed for multi-objective optimization problems. The proposed hybrid algorithm formulates the optimization problem via a stochastic search method. The new hybrid algorithm is based on the Nelder-Mead simplex and the Non-dominated Sorting Genetic Algorithm II (NSGA II), developed to find the global optimum. The proposed hybrid scheme, denoted Simplex-NSGA II, is evaluated using a series of bi-objective and tri-objective benchmark test functions, and the results are compared with those of other multi-objective optimization methods, namely MOPSO and NSGA II. The numerical results demonstrate that the proposed hybrid algorithm performs better in solving multi-objective optimization problems, with much better convergence and coverage capability.

References

1. Mirjalili S, Saremi S, Mirjalili SM, Coelho LD (2016) Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst Appl 47: 106-119.
2. Coello CAC (2006) Evolutionary multi-objective optimization: A historical view of the field. Comput Intell Mag IEEE 1: 28-36.
3. Kipouros T, Jaeggi DM, Dawes WN, Parks GT, Savill M, et al. (2008) Biobjective design optimization for axial compressors using tabu search. AIAA J 46: 701-711.
4. Luh GC, Chueh CH (2004) Multi-objective optimal design of truss structure with immune algorithm. Comput Struct 82: 829-844.
5. Cunha AG, Covas JA (2002) RPSGAe - A multiobjective genetic algorithm with elitism: Application to polymer extrusion. Work Mult Object Metaheuristics.
6. Rangaiah GP (2008) Multi-objective optimization: Techniques and applications in chemical engineering. World Scientific 1: 1.
7. Coello CAC, Lamont GB (2004) Applications of multi-objective evolutionary algorithms. World Scientific p: 1.
8. Le DA, Vo DN (2016) Cuckoo search algorithm for minimization of power loss and voltage deviation. Int J Energy Optim Eng 5: 23-34.
9. Vo DN (2015) Pseudo-gradient based particle swarm optimization with constriction factor for multi-objective optimal power flow. Glob J Technol Optim.
10. AlRashidi MR, El-Hawary ME (2007) Hybrid particle swarm optimization approach for solving the discrete OPF problem considering the valve loading effects. IEEE Trans Power Syst 22: 2030-2038.
11. Deb K (2001) Multi-objective optimization using evolutionary algorithms. John Wiley & Sons p: 16.
12. Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. ICGA Citeseer 93: 416-423.
13. Horn J, Nafploitis N, Goldberg D (1994) A niched Pareto genetic algorithm for multiobjective optimization. Proc 1st IEEE Conf Evol Comput, IEEE pp: 82-87.
14. Zitzler E, Thiele L (1998) Multiobjective optimization using evolutionary algorithms - A comparative case study. Lect Notes Comput Sci, Parallel Probl Solving from Nature - PPSN V, Springer 1498: 292-301.
15. Coello CC, Lamont GB, Van Veldhuizen DA (2007) Evolutionary algorithms for solving multi-objective problems. Springer Science & Business Media.
16. Zhou A, Qu BY, Li H, Zhao SZ, Suganthan PN, et al. (2011) Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol Comput 1: 32-49.
17. Srinivas N, Deb K (1994) Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2: 221-248.
18. Zitzler E, Thiele L (1999) Multi-objective evolutionary algorithms: A comparative case study and the strength Pareto approach. Evol Comput IEEE Trans 3: 257-271.
19. Zitzler E (1999) Evolutionary algorithms for multi-objective optimization: Methods and applications. Citeseer p: 63.
20. Knowles JD, Corne DW (2000) Approximating the non-dominated front using the Pareto archived evolution strategy. Evol Comput 8: 149-172.
21. Abbass HA, Sarker R, Newton C (2001) PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems. Proc Congr Evol Comput 2: 971-978.
22. Gebreslassie UMDBH (2016) Efficient ant colony optimization (EACO) algorithm for deterministic optimization. Int J Swarm Intell Evol Comput.
23. Allah RMR (2016) Hybridization of fruit fly optimization algorithm and firefly algorithm for solving nonlinear programming problems. Int J Swarm Intell Evol Comput.
24. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6: 182-197.
25. Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. Evol Comput IEEE Trans 8: 256-279.
26. Zhang Q, Li H (2007) MOEA/D: A multi-objective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11: 712-731.
27. Coello A, Lechuga M (2002) MOPSO: A proposal for multiple objective particle swarm optimization. Evol Comput 2: 1051-1056.
28. Coello CA, Pulido GT, Lechuga M (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8: 256-279.
29. Reyes-Sierra M, Coello CAC (2005) Fitness inheritance in multi-objective particle swarm optimization. Swarm Intell Symp pp: 116-123.
30. Goldberg D (1989) Genetic algorithms in search, optimization and machine learning. Addison Wesley 905: 205-211.
31. Goldberg D, Holland J (1988) Genetic algorithms and machine learning. Mach Learn 3: 95-99.
32. Pradhan PM, Panda G (2012) Solving multi-objective problems using cat swarm optimization. Expert Syst Appl 39: 2956-2964.
33. Velazquez OJ, Coello CC, Arias-Montano A (2014) Multi-objective compact differential evolution. Differ Evol IEEE Symp pp: 1-8.
34. Hemmatian H, Fereidoon A, Assareh E (2014) Optimization of hybrid laminated composites using the multi-objective gravitational search algorithm (MOGSA). Eng Optim 46: 1169-1182.
35. Shi X, Kong D (2015) A multi-objective ant colony optimization algorithm based on elitist selection strategy.
36. Hancer E, Xue B, Zhang M, Karaboga D, Akay B (2015) A multi-objective artificial bee colony approach to feature selection using fuzzy mutual information. Evol Comput (CEC) 2015 IEEE Congr pp: 2420-2427.
37. Lin W, Yu DY, Zhang C, Liu X, Zhang S, et al. (2015) A multi-objective teaching-learning-based optimization algorithm to scheduling in turning processes for minimizing makespan and carbon footprint. J Clean Prod 101: 337-347.
38. Kitagawa G (1998) A self-organizing state-space model. J Am Stat Assoc 93: 1203-1215.
39. Rao SS (1996) Engineering optimization: Theory and practice (3rd edn).
40. Zhang C, Ma X (2015) NSGA-II algorithm with a local search strategy for multiobjective optimal design of dry-type air-core reactor. Math Probl Eng p: 839035.
41. Zhang Q, Zhou A, Zhao S (2008) Multi-objective optimization test instances for the CEC 2009 special session and competition. Univ Essex pp: 1-30.
42. Sierra MR, Coello CCA (2005) Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. Evol Multi-Criterion Optim pp: 505-519.
43. Schott JR (1995) Fault tolerant design using single and multi-criteria genetic algorithm optimization. DTIC Document.
44. Nebro AJ, Durillo JJ, Coello CCA (2013) Analysis of leader selection strategies in a multi-objective particle swarm optimizer. 2013 IEEE Congr Evol Comput pp: 3153-3160.
Citation: Pourtakdoust SH, Zandavi SM (2016) A Hybrid Simplex Non-dominated Sorting Genetic Algorithm for Multi-Objective Optimization. Int J Swarm Intel Evol Comput 5:138.

Copyright: ©2016 Pourtakdoust SH, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.