ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING
Asia-Pac. J. Chem. Eng. 2009; 4: 437–442
Published online 17 April 2009 in Wiley InterScience
(www.interscience.wiley.com) DOI:10.1002/apj.203
Research Article
Particle swarm optimization algorithm for constrained
problems
Jian-Ming Zhang and Lei Xie*
National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou, P. R. China
Received 25 April 2007; Revised 15 July 2008; Accepted 25 July 2008
ABSTRACT: A novel particle swarm optimization (PSO) algorithm with the evaluation of infeasibility degree (IFD) of
constraints is presented for nonlinear programming (NLP) problems with equality and inequality constraints. The IFD
of constraints is defined as the sum of the squared values of the constraint violations. The proposed novel PSO updates
the local best position and global best position according to the objective value and the value of IFD simultaneously.
The results of several numerical tests and one real engineering optimization problem show that the proposed approach
is efficient. © 2009 Curtin University of Technology and John Wiley & Sons, Ltd.
KEYWORDS: particle swarm optimization; constrained problem; infeasibility degree
INTRODUCTION
In the industrial engineering area, there are many scientific computation and optimization problems under constraints, which can be formulated as nonlinear programming (NLP) problems with both inequality and equality constraints:
Minimize f(x)    (1)

subject to:

gj(x) ≤ 0,  j = 1, 2, ..., m
hk(x) = 0,  k = m + 1, m + 2, ..., p    (2)
where f(x) is the objective function to be minimized, x = [x1, x2, ..., xn]T is the vector of decision variables, m is the number of inequality constraints and p is the total number of constraints. The objective function f(x) and the constraints g(x), h(x) are nonlinear functions.
The presence of constraints may significantly degrade the performance of optimization algorithms designed for unconstrained problems. Traditional methods for solving these problems are gradient-based algorithms and assume that the objective and constraints are differentiable. Recently, a number of methods based on
*Correspondence to: Lei Xie, National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang
University, Hangzhou, 310027, P. R. China.
E-mail: leix@iipc.zju.edu.cn
evolutionary computation have been proposed for solving constrained optimization problems, such as the genetic algorithm (GA), simulated annealing and the ant colony algorithm. They can be divided into the following four categories:[1]
1. Methods based on preserving feasibility of solutions.
2. Methods based on penalty functions.
3. Methods based on a search for feasible solutions.
4. Other hybrid methods.
Among these approaches, the penalty strategy, which incorporates constraint violations into the fitness function, is commonly used to evaluate infeasible points. In general, penalty function methods augment the objective function with the squared or absolute values of the constraint violations multiplied by penalty factors. However, it is often not easy to set suitable penalty factors: small penalty coefficients drive the search outside the feasible region and often produce infeasible solutions, while very severe penalties make it difficult to drive the population to the optimum.[2] Therefore, the penalty strategy weakens the ability to find better candidates for global optimization.
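As context, the penalty construction criticized here can be sketched as follows. This is a generic illustration, not the method of this paper, and the penalty factor value is an arbitrary assumption.

```python
# Generic quadratic-penalty fitness for: min f(x) s.t. g_j(x) <= 0, h_k(x) = 0.
# The penalty factor is an arbitrary illustrative choice.
def penalized_fitness(f, gs, hs, x, factor=1e3):
    violation = sum(max(0.0, g(x)) ** 2 for g in gs)   # inequality violations
    violation += sum(h(x) ** 2 for h in hs)            # equality violations
    return f(x) + factor * violation

# Example: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
infeasible = penalized_fitness(lambda x: x ** 2, [lambda x: 1.0 - x], [], 0.5)
feasible = penalized_fitness(lambda x: x ** 2, [lambda x: 1.0 - x], [], 2.0)
```

A small `factor` lets the search settle on infeasible points, while a huge one distorts the landscape around the feasible boundary, which is exactly the tuning difficulty described above.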
Particle swarm optimization (PSO) is a stochastic method proposed initially by Kennedy and Eberhart.[3] It is similar to evolutionary computation techniques in that a population of potential solutions to the optimization problem under consideration is used to probe the search space. Owing to its simple concept, easy implementation and quick convergence, PSO has been applied successfully to various computer science and engineering problems,[4] especially in dealing with unconstrained
optimization problems. However, because of the difficulties in dealing with system constraints, there is relatively little work on constrained optimization with PSO. The most important reason is that when PSO is applied to NLP problems, particles generated randomly at the beginning and/or by position updates during the evolutionary process usually violate the system constraints, resulting in infeasible particles. Therefore, the handling of system constraints, particularly nonlinear equality constraints, and the evaluation of infeasible particles are the main difficulties.
Recently, several methods have been developed to deal with system constraints. Parsopoulos and Vrahatis proposed a PSO method with a penalty strategy for constrained optimization problems.[5] Dong et al. proposed a PSO algorithm embedded with a constraint-fitness priority-based ranking method for NLP.[6] Similar to the proposal by Dong et al.,[6] Zahara and Kao[7] proposed a hybrid Nelder–Mead constraint-handling scheme, which also included gradient repair and constraint-fitness priority ranking methods. Cagnina et al.[8] incorporated a shake mechanism into a dual-population PSO algorithm to solve constrained problems, which alleviates premature convergence. Coevolutionary and hybrid PSO approaches were also proposed recently by He and Wang[9,10] to address constrained engineering design problems. In spite of the recent progress in PSO for constrained problems, few of these methods are of practical use, either because of slow convergence rates or because of complex algorithm implementation. This paper presents a new method to deal with constraints using the infeasibility degree (IFD), such that PSO can be applied directly and easily to constrained optimization problems.
The rest of this paper is organized as follows. Section
on Basics of Particle Swarm Optimization introduces
some basics for PSO. In the Section on Particle Swarm
Optimization Algorithm for Constrained Optimization
Problems, a new constraints-handling mechanism with
IFD for PSO is presented. Simulation results of several
examples and an engineering design problem are presented and comparisons with previously reported results
are given in the Section on Numerical Simulation and
Results. Finally, the conclusions are given in Section
on Conclusions.
BASICS OF PARTICLE SWARM
OPTIMIZATION
PSO is a population-based optimization technique first proposed for unconstrained optimization problems. In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each candidate solution, called a 'particle', flies in the problem search
space looking for the optimal position to land. A particle adjusts its position according to its own 'experience' as well as the experience of neighboring particles. Tracking and memorizing the best position encountered builds the particle's experience. For that reason, PSO possesses a memory (i.e. every particle remembers the best position it reached in the past). A PSO system combines local search (through self experience) with global search (through neighboring experience), attempting to balance exploration and exploitation.
Let D be the dimension of the search space. Suppose that xi = [xi1, xi2, ..., xiD]T denotes the current position of the ith particle, where xid ∈ [ld, ud], d ∈ [1, D], ld and ud are the lower and upper bounds of the dth dimension, respectively, and i = 1, 2, ..., N, where N is the number of particles in the population. The current velocity of the ith particle is vi = [vi1, vi2, ..., viD]T, which represents the distance to be traveled by this particle from its current position. The best position that the ith particle has ever visited is pi = [pi1, pi2, ..., piD]T, and the global best location that the population has achieved is pg = [pg1, pg2, ..., pgD]T. In a PSO system, a particle in the search space is characterized by two factors, its position and velocity, which are updated with the following equations:
vid(t + 1) = w·vid(t) + c1r1(pid(t) − xid(t)) + c2r2(pgd(t) − xid(t))    (3)
xid(t + 1) = xid(t) + vid(t + 1)    (4)
where t denotes the iteration number, c1 and c2 are positive constant parameters called acceleration coefficients, r1 and r2 are two independent uniformly distributed random variables on the range [0, 1], and w is the inertia weight, which controls the impact of the previous velocity of a particle on its current one. The parameter
w regulates the trade-off between the global and local
exploration abilities of the swarm. A large inertia weight
facilitates a global search while a small inertia weight
facilitates a local search. By linearly decreasing the
inertia weight from a relatively large value to a small
one through the course of the PSO run, PSO tends to
have more global search ability at the beginning and
have more local search ability near the end of the run.
In the standard PSO, Eqn 3 is used to calculate the new velocity according to the particle's previous velocity and the distances of its current position from both its own best historical position and its neighbors' best position. Generally, the value of each component in vi can be clamped to the range [vmin, vmax] to control excessive roaming of particles outside the search space. Then the particle flies toward a new position according to Eqn 4. This process is repeated until a user-defined stopping criterion is reached.
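A single particle's update (Eqns 3 and 4, with velocity clamping) can be sketched as follows; the parameter values are illustrative, not prescribed by the paper.

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """One velocity/position update (Eqns 3 and 4) for a single particle.

    x, v, p_best are this particle's position, velocity and best position;
    g_best is the swarm's best position. Parameter values are illustrative.
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (p_best[d] - x[d]) + c2 * r2 * (g_best[d] - x[d])
        vd = max(-v_max, min(v_max, vd))   # clamp to [v_min, v_max]
        new_v.append(vd)
        new_x.append(x[d] + vd)            # Eqn 4: fly to the new position
    return new_x, new_v
```

Clamping keeps each velocity component inside [−v_max, v_max] so particles do not roam far outside the search space in a single step.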
PARTICLE SWARM OPTIMIZATION ALGORITHM FOR CONSTRAINED OPTIMIZATION PROBLEMS

The infeasibility degree for handling constraints
For the constrained optimization problem, in order to guide the search toward the optimum solution, it is necessary to measure the IFD of a candidate solution quantitatively. The IFD of a solution can be regarded as the distance between the solution and the feasible region. For a feasible solution, the IFD is zero; for an infeasible solution it is greater than zero, and the more severely the solution violates the constraints, the larger its IFD.
The IFD of a candidate solution used in this paper is defined as follows:
ϕ(xi) = Σj=1..m [min{0, gj(xi)}]² + Σk=m+1..p [hk(xi)]²    (5)
where gj(xi) and hk(xi) are the inequality and equality constraints, respectively, and m and p are the number of inequality constraints and the total number of constraints, respectively.
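Eqn 5 translates directly into code. Following the paper's test problems, the inequality constraints are taken here in the gj(x) ≥ 0 form, so min{0, gj} measures the violation; the check against Test P2's constraints is illustrative.

```python
def infeasibility_degree(x, ineq, eq):
    """IFD of Eqn 5: sum of squared violations of g_j(x) >= 0 and h_k(x) = 0."""
    phi = sum(min(0.0, g(x)) ** 2 for g in ineq)
    phi += sum(h(x) ** 2 for h in eq)
    return phi

# Constraints of Test P2: g1(x) = x1^2 + x2^2 - 5 >= 0, h1(x) = x1 + 2*x2 - 4 = 0.
g1 = lambda x: x[0] ** 2 + x[1] ** 2 - 5.0
h1 = lambda x: x[0] + 2.0 * x[1] - 4.0
phi_feasible = infeasibility_degree([2.4, 0.8], [g1], [h1])    # exact P2 solution: ~0
phi_infeasible = infeasibility_degree([0.0, 0.0], [g1], [h1])  # 25 + 16 = 41
```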
As the PSO run progresses, the evaluation criterion for infeasible solutions should become more and more rigorous. A threshold value ϕcrit, which decreases with the iteration number, is adopted for this purpose. If particle xi satisfies ϕ(xi) ≤ ϕcrit, the particle i is accepted as a feasible solution; otherwise, it is classified as an infeasible solution.
In each loop of the standard PSO algorithm, the local best position and the global best position are updated on the basis of the objective function. In this article, in order to handle the constraints and converge to the optimal solution from both the infeasible and the feasible regions, a new selection method for pi and pg is proposed. First, each particle is determined to be feasible or infeasible on the basis of its IFD and the current IFD threshold value; pi and pg are then determined according to the fitness value. If two particles are both feasible, the one with the higher fitness is selected. If one particle is infeasible and the other is feasible, the feasible particle is chosen as the better one. If both particles are infeasible, the particle with the lower IFD value wins.
PSO algorithm for constrained optimization

On the basis of the analysis above, the procedure of the proposed PSO with IFD evaluation is described as follows:
Step 1: Select the relevant parameters, including the swarm size N, the inertia weight w, the acceleration constants c1 and c2, the maximum velocity vmax, the stopping criterion and the initial threshold value ϕcrit.
Step 2 (initialization): For each particle i in the population:
Step 2.1: Initialize xi randomly within a proper range.
Step 2.2: Initialize vi randomly.
Step 2.3: Evaluate the fitness using the objective function and calculate the IFD ϕ(xi) of each particle.
Step 2.4: Find the feasible and infeasible particles by comparing ϕcrit with the current IFD value ϕ(xi).
Step 2.5: Initialize pg with the index of the particle with the best fitness among the feasible particles.
Step 2.6: Initialize pi with a copy of xi.
Step 3: Repeat until a stopping criterion is satisfied:
Step 3.1: For each particle i , update vi and xi
according to Eqns 3 and 4.
Step 3.2: Update the IFD threshold value so that ϕcrit decreases with the iteration number.
Step 3.3: Evaluate the fitness using the objective function and calculate the IFD ϕ(xi) of each particle.
Step 3.4: Find the feasible and infeasible particles by comparing ϕcrit with the current IFD value ϕ(xi).
Step 3.5: Find pg according to the fitness and the IFD
of the particle.
Step 3.6: For each particle, find pi by comparing its
fitness and feasibility with its memory value.
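The steps above can be condensed into a sketch. The linear ϕcrit schedule, the inertia schedule and all parameter values here are illustrative assumptions, and fitness is taken as the objective value to be minimized.

```python
import random

def pso_ifd(f, ineq, eq, bounds, n=20, iters=200, c1=2.0, c2=2.0,
            phi0=1.0, seed=1):
    """Steps 1-3 in sketch form: minimize f subject to g_j(x) >= 0 (ineq)
    and h_k(x) = 0 (eq), guided by the infeasibility degree (IFD)."""
    rng = random.Random(seed)
    D = len(bounds)

    def ifd(x):                                      # Eqn 5
        return (sum(min(0.0, g(x)) ** 2 for g in ineq)
                + sum(h(x) ** 2 for h in eq))

    def better(a, b, phi_crit):                      # selection rule of the text
        fa, fb = ifd(a) <= phi_crit, ifd(b) <= phi_crit
        if fa and fb:                                # both feasible: best objective
            return a if f(a) < f(b) else b
        if fa != fb:                                 # feasible beats infeasible
            return a if fa else b
        return a if ifd(a) < ifd(b) else b           # both infeasible: lower IFD

    # Step 2: random positions, zero velocities, initial bests.
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * D for _ in range(n)]
    ps = [list(x) for x in xs]
    pg = list(min(xs, key=lambda x: (ifd(x), f(x))))

    # Step 3: main loop with decreasing phi_crit and inertia weight.
    for t in range(iters):
        phi_crit = phi0 * (1.0 - t / iters)
        w = 1.2 - t / iters                          # 1.2 -> 0.2 over the run
        for i in range(n):
            for d in range(D):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (ps[i][d] - xs[i][d])
                            + c2 * r2 * (pg[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            ps[i] = list(better(xs[i], ps[i], phi_crit))
            pg = list(better(ps[i], pg, phi_crit))
    return pg, f(pg), ifd(pg)
```

On a toy problem such as minimizing x² subject to x ≥ 1, this sketch settles near the boundary optimum x = 1, with the shrinking threshold squeezing the swarm into the feasible region.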
NUMERICAL SIMULATION AND RESULTS
To test the performance of the proposed algorithm, several well-known benchmark optimization problems and an engineering design problem are given.
During the simulations, the population size is 20, c1 and c2 are set to 2.0, and vmax is clamped to 15% of the search space. An inertia weight varying linearly over the generations is used, from 1.2 at the beginning of the search to 0.2 at the end. The maximum number of generations is set to 1000. All examples are tested over 30 runs. The results are given as follows.
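Expressed as code, and assuming a simple linear schedule, these settings amount to:

```python
# Linearly decreasing inertia weight (1.2 -> 0.2) and a velocity cap of 15%
# of each dimension's range, matching the settings quoted above.
def inertia(t, t_max, w_start=1.2, w_end=0.2):
    return w_start - (w_start - w_end) * t / t_max

def v_max(bounds, fraction=0.15):
    return [fraction * (hi - lo) for lo, hi in bounds]
```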
Test P1
Minimize f(x) = (x1² + x2 − 11)² + (x1 + x2² − 7)²
Subject to:
g1(x) = 4.84 − (x1 − 0.05)² − (x2 − 2.5)² ≥ 0
g2(x) = x1² + (x2 − 2.5)² − 4.84 ≥ 0    (6)
This is a two-dimensional minimization problem,[11] subject to two nonlinear inequality constraints. The
search space is bounded by 0 ≤ x1 , x2 ≤ 6, and the
known optimum solution is [x1 , x2 ; f ∗ ] = [2.246826,
2.381865; 13.59085]. Using the proposed method for 30 independent runs, the best solution found was [2.24682698, 2.381905; 13.59087464]. The worst one was [2.19973359, 2.0330672; 17.48601099], and the average value over 30 independent runs was f(x) = 13.84873821.
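The reported P1 optimum can be checked directly against the problem data. This is only a sanity check with illustrative tolerances; g1 is active at the optimum, hence the small slack allowed.

```python
# Test P1 objective and constraints, evaluated at the known optimum.
def f(x1, x2):
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

def g1(x1, x2):
    return 4.84 - (x1 - 0.05) ** 2 - (x2 - 2.5) ** 2

def g2(x1, x2):
    return x1 ** 2 + (x2 - 2.5) ** 2 - 4.84

x1, x2 = 2.246826, 2.381865
assert abs(f(x1, x2) - 13.59085) < 1e-3       # matches the reported f*
assert g1(x1, x2) > -1e-4                     # active constraint, tiny slack
assert g2(x1, x2) > 0.0                       # strictly satisfied
```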
Test P2

Maximize f(x) = −(x1 − 3)² − (x2 − 2)²

Subject to:

g1(x) = x1² + x2² − 5 ≥ 0
h1(x) = x1 + 2x2 − 4 = 0    (7)

This is a two-dimensional maximization problem,[12] subject to one nonlinear inequality constraint and one nonlinear equality constraint. The search space is bounded by 0 ≤ x1, x2 ≤ 5, and the exact solution is [x1, x2; f*] = [2.4, 0.8; −1.8]. Using the proposed method for 30 independent runs, the best solution found was [2.43300024, 0.78405866; −1.80000208]. The worst one was [3.27911077, 0.35469110; −2.78494418], and the average value was f(x) = −1.866233879. In contrast, the best solution given by Fung et al.[12] is [x1, x2; f*] = [2.4150, 0.7925; −1.80028].

Test P3

Maximize f(x) = (x1²x2x3²)/(2x1³x3² + 3x1²x2² + 2x2²x3³ + x1³x2²x3²)

Subject to:

g1(x) = x1² + x2² + x3² − 1 ≥ 0
g2(x) = x1² + x2² + x3² − 4 ≤ 0
x1, x2, x3 > 0    (8)

This test problem comes from Ref. [13]; the known best solution is [x1, x2, x3; f*] = [0.8692552, 0.5345225, 1.313627; 0.15373]. Using the proposed method, the best solution found was [0.86775197, 0.53366133, 1.31299666; 0.15372990]. The worst one was [0.87325072, 0.53668951, 1.32108921; 0.15372900], and the average value of 30 independent runs was f(x) = 0.153729357.

Test P4

Minimize f(x) = 6.5x1 − 0.5x1² − x2 − 2x3 − 3x4 − 2x5 − x6

Subject to:

g1(x) = x1 + 2x2 + 8x3 + x4 + 3x5 + 5x6 ≤ 16
g2(x) = −8x1 − 4x2 − 2x3 + 2x4 + 4x5 − x6 ≤ −1
g3(x) = 2x1 + 0.5x2 + 0.2x3 − 3x4 − x5 − 4x6 ≤ 24
g4(x) = 0.2x1 + 2x2 + 0.1x3 − 4x4 + 2x5 + 2x6 ≤ 12
g5(x) = −0.1x1 − 0.5x2 + 2x3 + 5x4 − 5x5 + 3x6 ≤ 3
0 ≤ x1 ≤ 2, 0 ≤ x2 ≤ 8, 0 ≤ x3 ≤ 2, 0 ≤ x4 ≤ 1, 0 ≤ x5 ≤ 1, 0 ≤ x6 ≤ 2    (9)

This test problem comes from Ref. [14]; the known best solution is [x1, x2, x3, x4, x5, x6; f*] = [0, 6, 0, 1, 1, 0; −11.005]. Using the proposed method for 30 independent runs, the best solution found was [0, 6.0049095, 0, 1, 1, 0; −11.0049095]. The worst one was [0, 6.0158104, 0, 1, 1, 0; −11.01581042], and the average value was f(x) = −11.00862647. In contrast, the best value given by Rao[14] is f(x) = −10.97192883 with a constraint violation error of 3E-3, so a better solution is found with the proposed approach.

Test P5 Welded beam design problem

The welded beam design problem is taken from Ref. [14], in which a welded beam is designed for minimum cost subject to constraints on shear stress (τ), bending stress in the beam (σ), buckling load on the bar (Pc), end deflection of the beam (δ), and side constraints. There are four design variables as depicted in Fig. 1, i.e. h(x1), l(x2), t(x3), and b(x4).
The problem can be formulated as follows:

Minimize f(x) = 1.10471x1²x2 + 0.04811x3x4(14.0 + x2)
Figure 1. Welded beam design problem.
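As a numerical cross-check, the welded beam cost function evaluated at the PSO design of Table 1 reproduces a value close to the tabulated optimum; the 1.10471 coefficient is the commonly cited form of the objective and is assumed here.

```python
# Welded beam cost at the Table 1 PSO design (coefficient 1.10471 assumed).
def beam_cost(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

c = beam_cost(0.205730, 3.470489, 9.036624, 0.205730)  # ~1.7248
```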
Table 1. Comparison of the best solutions for the welded beam design problem.

Design variables   This article (PSO)   GP           CPSO       HPSO       NM-PSO
x1 (h)             0.205730             0.245500     0.202369   0.205730   0.205830
x2 (l)             3.470489             6.196000     3.544214   3.470489   3.468338
x3 (t)             9.036624             8.273000     9.028210   9.032264   9.036624
x4 (b)             0.205730             0.245500     0.205723   0.205730   0.205730
f(x)               1.724752             2.38593732   1.728024   1.724852   1.724717
subject to:

g1(x) = τ(x) − 13 600 ≤ 0
g2(x) = σ(x) − 30 000 ≤ 0
g3(x) = x1 − x4 ≤ 0
g4(x) = 0.10471x1² + 0.04811x3x4(14.0 + x2) − 5.0 ≤ 0
g5(x) = 0.125 − x1 ≤ 0
g6(x) = δ(x) − 0.25 ≤ 0
g7(x) = 6000 − Pc(x) ≤ 0

where

τ(x) = √[(τ′)² + 2τ′τ′′x2/(2R) + (τ′′)²]    (10)
τ′ = 6000/(√2·x1x2)
τ′′ = MR/J
M = 6000(14.0 + x2/2)
R = √[x2²/4 + ((x1 + x3)/2)²]
J = 2{√2·x1x2[x2²/12 + ((x1 + x3)/2)²]}
σ(x) = 504 000/(x4x3²)
δ(x) = 2.1952/(x3³x4)
Pc(x) = 102 372.448980(1 − 0.0282346x3)x3x4³    (11)

The approaches applied to this problem include geometric programming (GP),[15] coevolutionary particle swarm optimization (CPSO),[9] hybrid particle swarm optimization (HPSO)[10] and Nelder–Mead particle swarm optimization (NM-PSO).[7] In this article, the PSO was run 30 times independently with the following variable ranges: 0.1 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.1 ≤ x4 ≤ 2. The best solutions obtained by the abovementioned approaches are listed in Table 1, and their statistical simulation results are shown in Table 2. From Table 1, it can be seen that the best feasible solution found by the proposed PSO is better than the best solutions of all other techniques except NM-PSO. However, from Table 2, it can be seen that the average search quality of the proposed PSO is better than that of all other methods, including NM-PSO. The standard deviation of the PSO results over 30 independent runs is very small, which demonstrates that the proposed constraint-handling approach is more reliable.

Table 2. Statistical results of different methods for the welded beam design problem.

Method             Best       Mean       Worst      Std Dev
This paper (PSO)   1.724752   1.725268   1.729946   0.001074
GP                 2.385937   N/A        N/A        N/A
CPSO               1.728042   1.748831   1.782143   0.012926
HPSO               1.724852   1.749040   1.814295   0.040049
NM-PSO             1.724717   1.726373   1.733393   0.003497

CONCLUSIONS
A new efficient PSO algorithm is proposed for handling constrained optimization problems without a penalty function term. The IFD of a candidate solution is defined as the sum of the squared values of the constraint violations, and the IFD is applied to guide the search toward the feasible region. The proposed optimization methodology was tested on various problems from the literature and proved to be efficient with respect to accuracy and convergence. It can be used for solving nonconvex constrained optimization problems, such as engineering design problems, which are difficult to solve with traditional optimization algorithms.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No. 60421002, No. 70471052).
REFERENCES
[1] Z. Michalewicz, M. Schoenauer. Evol. Comput., 1996; 4(1),
1–32.
[2] D.W. Coit, A.E. Smith, D.M. Tate. INFORMS J. Comput.,
1996; 8(2), 173–182.
[3] J. Kennedy, R.C. Eberhart. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, 1995; 1942–1948.
[4] K.E. Parsopoulos, M.N. Vrahatis. Nat. Comput., 2002; 1,
235–306.
[5] K.E. Parsopoulos, M.N. Vrahatis. Particle swarm optimization method for constrained optimization problems. In Intelligent Technologies – Theory and Application: New Trends in Intelligent Technologies, 2002; 214–220.
[6] Y. Dong, J. Tang, B. Xu, D. Wang. Comput. Math. Appl.,
2005; 49, 1655–1668.
[7] E. Zahara, Y.T. Kao. Expert Syst. Appl., 2009; 36, 3880–3886.
[8] L.C. Cagnina, S.C. Esquivel, C.A. Coello. A bi-population PSO with a shake-mechanism for solving constrained numerical optimization. In IEEE Congress on Evolutionary Computation (CEC 2007), 25–28 Sept. 2007; 670–676.
[9] Q. He, L. Wang. Eng. Appl. Artif. Intell., 2007; 20, 89–99.
[10] Q. He, L. Wang. Appl. Math. Comput., 2007; 186, 1407–1422.
[11] K. Deb. Comput. Methods Appl. Mech. Eng., 2000; 186(2–4),
311–338.
[12] R.Y.K. Fung, J. Tang, D. Wang. Comput. Oper. Res., 2002;
29, 261–274.
[13] D. Bunnag, M. Sun. Appl. Math. Comput., 2005; 171,
604–636.
[14] S.S. Rao. Engineering Optimization, Wiley: New York, 1996.
[15] K.M. Ragsdell, D.T. Phillips. ASME J. Eng. Ind., 1976; 98(3),
1021–1025.