The Grey Wolf Optimizer (GWO) is a metaheuristic algorithm proposed by Seyedali Mirjalili in 2014, inspired by the predation strategy of grey wolves. It mainly simulates three stages of the hunt: searching for prey, encircling prey, and attacking prey.
Inspiration
Grey wolves belong to the Canidae family and are apex predators, sitting at the top of the food chain. They mostly prefer to live in packs of 5 to 12 individuals on average. Particularly interesting is that they observe a very strict social hierarchy, as shown in the figure below.
Figure 1 The grey wolf social hierarchy
The leaders are a male and a female, called alphas (my understanding is that the name comes from alpha being the first letter of the Greek alphabet, hence denoting the one in front). The alphas are mainly responsible for decisions about hunting, where to sleep, when to wake, and so on, which are then passed down to the whole pack. If the wolves lower their tails, they all agree. However, some democratic behavior also appears in the pack, and the alphas will listen to the other wolves as well. Interestingly, the alpha is not necessarily the strongest member, but the one best at managing the team. This shows that the organization and discipline of a pack matter far more than its strength.
The second level is the beta. Beta wolves are subordinate wolves, male or female, that assist the alphas in decision-making and other pack activities. When one of the alphas dies or grows old, the beta is the best candidate to take its place. The beta must obey the alphas but commands the lower-ranking wolves, acting as a bridge between the levels above and below.
At the bottom is the omega. Omegas play the role of scapegoat (a miserable one), must submit to all the dominant wolves, and are the last allowed to eat. The omega may not look like an important individual, but once it is lost, the whole pack faces internal fighting and trouble.
A wolf that is neither alpha, beta, nor omega is called a subordinate, or delta, wolf. Delta wolves must obey the alphas and betas but dominate the omegas. Scouts, sentinels, elders, hunters, and caretakers all belong to this category. Scouts watch the boundaries of the territory and warn the pack of danger; sentinels protect and guarantee the safety of the pack; elders are experienced wolves who used to be alpha or beta; hunters help the alphas and betas hunt and provide food for the pack; caretakers look after the old, weak, sick, and injured wolves.
In addition, group hunting is another interesting social behavior of gray wolves, which mainly includes the following stages:
 Tracking, chasing and approaching prey;
 Pursue, surround and harass the prey until it stops moving;
 Attack the prey.
Figure 2 Predatory behavior of grey wolves: (A) chasing, approaching, and tracking prey; (B-D) pursuing, harassing, and encircling prey; (E) stationary situation and attack
Mathematical models and algorithms
First, the mathematical models of the social hierarchy and of tracking, encircling, and attacking prey are given; the complete GWO algorithm is then provided.
Social hierarchy
To model the wolves' social hierarchy mathematically when designing GWO, the fittest solution is regarded as the alpha ($\alpha$); the second- and third-best solutions are denoted beta ($\beta$) and delta ($\delta$); and the remaining solutions are assumed to be omega ($\omega$). In GWO, $\alpha$, $\beta$, and $\delta$ guide the hunt (the optimization), and the $\omega$ wolves follow these three.
Encircling prey
Grey wolves encircle their prey during the hunt. This behavior is expressed by the following equations:
$$\vec{D} = \left|\vec{C}\cdot\vec{X}_p(t) - \vec{X}(t)\right| \tag{1}$$
$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A}\cdot\vec{D} \tag{2}$$
where $t$ is the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ is the position vector of a grey wolf.
The vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:
$$\vec{A} = 2\vec{a}\cdot\vec{r}_1 - \vec{a} \tag{3}$$
$$\vec{C} = 2\cdot\vec{r}_2 \tag{4}$$
where the components of $\vec{a}$ decrease linearly from 2 to 0 over the iterations, and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in $[0, 1]$.
To clearly illustrate the effect of equations (1) and (2), Figure 3(a) shows the two-dimensional position vectors and some possible neighbors. A grey wolf at position $(X, Y)$ can update its position according to the position of the prey $(X^*, Y^*)$; by adjusting the values of $\vec{A}$ and $\vec{C}$, it can reach different places around the best agent relative to its current position. For example, when $\vec{A} = (0, 1)$ and $\vec{C} = (1, 1)$, the new position of the grey wolf is $(X^* - X, Y^*)$. The same holds in three-dimensional space. Note that the two figures only show the case $\vec{A} = (0, 1)$ and $\vec{C} = (1, 1)$; when the random vectors $r_1$ and $r_2$ take other values, the grey wolf can reach any position between the points shown. Also note that the magnitude of $A$ determines whether the wolf moves toward or away from the prey, as explained in detail later.
Figure 3 2D and 3D position vectors and their possible next positions
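The encircling behavior of equations (1)-(4) can be sketched in a few lines of Python (an illustrative NumPy sketch; the function name `encircle` and its signature are my own, not from the original code):

```python
import numpy as np

def encircle(prey_pos, wolf_pos, a):
    """One encircling update, following Eqs. (1)-(4).

    prey_pos, wolf_pos : 1-D arrays for X_p(t) and X(t)
    a : scalar decreased linearly from 2 to 0 over the iterations
    """
    r1 = np.random.rand(*wolf_pos.shape)   # random vector in [0, 1]
    r2 = np.random.rand(*wolf_pos.shape)   # random vector in [0, 1]
    A = 2 * a * r1 - a                     # Eq. (3): each component in [-a, a]
    C = 2 * r2                             # Eq. (4): each component in [0, 2]
    D = np.abs(C * prey_pos - wolf_pos)    # Eq. (1): distance to the prey
    return prey_pos - A * D                # Eq. (2): new wolf position
```

When $a = 0$, $\vec{A}$ vanishes and the wolf lands exactly on the prey; a larger $a$ lets it land anywhere in a widening neighborhood around it.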
Hunting
Grey wolves can recognize the location of prey and encircle it. The hunt is usually led by the alpha; the beta and delta occasionally participate. However, in an abstract search space we do not know the location of the optimum (the prey). To simulate the hunting behavior mathematically, we assume that the alpha (the best candidate solution), beta, and delta have better knowledge of the potential location of the prey. We therefore save the three best solutions obtained so far and force the other search agents (including the omegas) to update their positions according to the positions of these best search agents. The following formulas are proposed for this:
$$\vec{D}_\alpha = \left|\vec{C}_1\cdot\vec{X}_\alpha - \vec{X}\right|,\quad \vec{D}_\beta = \left|\vec{C}_2\cdot\vec{X}_\beta - \vec{X}\right|,\quad \vec{D}_\delta = \left|\vec{C}_3\cdot\vec{X}_\delta - \vec{X}\right| \tag{5}$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1\cdot\vec{D}_\alpha,\quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2\cdot\vec{D}_\beta,\quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3\cdot\vec{D}_\delta \tag{6}$$
$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \tag{7}$$
Figure 4 shows how a search agent's position is updated according to the alpha, beta, and delta in a 2D search space. The final position lands at a random place inside a circle defined by the positions of alpha, beta, and delta. In other words, alpha, beta, and delta estimate the position of the prey, while the other wolves update their positions randomly around it.
Figure 4 Schematic diagram of location update in GWO
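Equations (5)-(7) can likewise be sketched in Python (illustrative only; `hunt_step` and its argument names are assumptions, not the paper's code):

```python
import numpy as np

def hunt_step(wolf, alpha, beta, delta, a):
    """Update one wolf from the three leaders, per Eqs. (5)-(7)."""
    def pull_toward(leader):
        r1 = np.random.rand(*wolf.shape)
        r2 = np.random.rand(*wolf.shape)
        A = 2 * a * r1 - a              # Eq. (3)
        C = 2 * r2                      # Eq. (4)
        D = np.abs(C * leader - wolf)   # Eq. (5): distance to this leader
        return leader - A * D           # Eq. (6): pull toward this leader
    X1, X2, X3 = pull_toward(alpha), pull_toward(beta), pull_toward(delta)
    return (X1 + X2 + X3) / 3.0         # Eq. (7): average of the three pulls
```

With $a = 0$ the update reduces to the centroid of the three leaders, i.e. the center of the circle sketched in Figure 4.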
Attacking prey (exploitation)
As mentioned above, a grey wolf finishes the hunt by attacking the prey once it stops moving. To model approaching the prey, the value of $\vec{a}$ is decreased continuously, which also shrinks the fluctuation range of $\vec{A}$. When $\vec{A} \in [-1, 1]$, the next position of a search agent can be anywhere between its current position and the position of the prey. Figure 5(a) shows that grey wolves attack the prey when $|A| < 1$.
Figure 5 Prey attack vs. search for prey
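The switch between exploration and exploitation is driven entirely by the linear decay of $a$. A small hypothetical helper makes the schedule explicit (implementations usually compute it inline):

```python
def a_schedule(t, max_iter):
    """a falls linearly from 2 (at t = 0) to 0 (at t = max_iter).

    Since A = 2*a*r1 - a lies in [-a, a], once a drops below 1 every
    component of A satisfies |A| < 1, so the pack can only exploit.
    """
    return 2.0 - t * (2.0 / max_iter)
```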
Searching for prey (exploration)
Grey wolves usually search according to the positions of the alpha, beta, and delta wolves. They diverge from one another to search for prey and converge to attack it. To model this divergence mathematically, $\vec{A}$ is used with random values greater than 1 or less than $-1$ to force the search agents to diverge from the prey, which guarantees exploration. Figure 5(b) shows that when $|A| > 1$ the grey wolves are forced to leave the prey, hoping to find a fitter one. Another component that favors exploration is $\vec{C}$. As equation (4) shows, $\vec{C}$ takes random values in $[0, 2]$. This component provides random weights for the prey, stochastically emphasizing ($C > 1$) or de-emphasizing ($C < 1$) the effect of the prey in defining the distance in equation (1). This helps GWO behave more randomly throughout the optimization, favoring exploration and the avoidance of local optima. Note that $C$ does not decrease linearly like $A$: $C$ is deliberately required to provide random values at all times, so that exploration is emphasized not only during the initial iterations but also during the final ones.
The vector $\vec{C}$ can also be viewed as the effect of obstacles that hinder approaching prey in nature. Generally, obstacles along the hunting path keep the wolves from approaching prey quickly and conveniently. This is exactly what $\vec{C}$ does: depending on a wolf's position, it randomly assigns the prey a weight, making it harder and farther for the wolf to reach, or, conversely, easier and closer.
GWO algorithm
In short, in the GWO algorithm the search begins by creating a random population of grey wolves (candidate solutions). Over the iterations, the alpha, beta, and delta wolves estimate the probable position of the prey, and each candidate solution updates its distance from it. To emphasize exploration and exploitation respectively, the parameter $a$ is decreased from 2 to 0. Candidate solutions tend to diverge from the prey when $|\vec{A}| > 1$ and converge toward it when $|\vec{A}| < 1$. Finally, the GWO algorithm terminates when an end criterion is satisfied. The pseudocode of GWO follows.
Initialize the grey wolf population $X_i\ (i = 1, 2, \ldots, n)$
Initialize $a$, $A$, and $C$
Calculate the fitness of each search agent
$X_\alpha$ = the best search agent
$X_\beta$ = the second-best search agent
$X_\delta$ = the third-best search agent
while ($t$ < max number of iterations)
    for each search agent
        Update the position of the current agent by equation (7)
    end for
    Update $a$, $A$, and $C$
    Calculate the fitness of all search agents
    Update $X_\alpha$, $X_\beta$, $X_\delta$
    $t = t + 1$
end while
return $X_\alpha$
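The pseudocode can be assembled into a minimal, runnable single-objective GWO in Python (a sketch under my own naming and defaults; the sphere-function demo at the end is illustrative and not part of the original post):

```python
import numpy as np

def gwo(fobj, lb, ub, dim, n_wolves=30, max_iter=200, seed=None):
    """Minimal single-objective Grey Wolf Optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))   # random initial pack
    lead_pos = np.zeros((3, dim))                   # alpha, beta, delta positions
    lead_fit = np.full(3, np.inf)                   # their fitness values
    for t in range(max_iter):
        # Rank every wolf against the three best solutions found so far
        for i in range(n_wolves):
            f = fobj(X[i])
            for k in range(3):
                if f < lead_fit[k]:
                    lead_fit[k + 1:] = lead_fit[k:-1].copy()   # shift ranking down
                    lead_pos[k + 1:] = lead_pos[k:-1].copy()
                    lead_fit[k], lead_pos[k] = f, X[i]
                    break
        a = 2.0 - t * (2.0 / max_iter)              # a decays linearly from 2 to 0
        # Move every wolf to the average of the three leader pulls
        for i in range(n_wolves):
            pulls = []
            for leader in lead_pos:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a                  # Eq. (3)
                C = 2 * r2                          # Eq. (4)
                D = np.abs(C * leader - X[i])       # Eq. (5)
                pulls.append(leader - A * D)        # Eq. (6)
            X[i] = np.clip(np.mean(pulls, axis=0), lb, ub)  # Eq. (7) + bounds
    return lead_pos[0], lead_fit[0]

# Demo: minimize the 5-D sphere function on [-10, 10]^5
best, best_f = gwo(lambda x: float(np.sum(x ** 2)), -10.0, 10.0, dim=5, seed=0)
```

Keeping the alpha/beta/delta as the best-so-far solutions (rather than recomputing them from the current pack) is what gives GWO its elitism: the guiding positions can only improve over time.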
The following points explain how GWO can, in theory, solve optimization problems:
- The proposed social hierarchy helps GWO preserve the best solutions obtained so far over the course of the iterations;
- The proposed encircling mechanism defines a circle-shaped neighborhood around the solutions, which extends to higher dimensions as a hypersphere;
- The random parameters $A$ and $C$ assist candidate solutions in covering hyperspheres with different random radii;
- The proposed hunting method allows candidate solutions to locate the probable position of the prey;
- The adaptation of $a$ and $A$ guarantees exploration and exploitation;
- The adaptive values of the parameters $a$ and $A$ let GWO transition smoothly between exploration and exploitation;
- As $A$ declines, half of the iterations are devoted to exploration ($|A| \geq 1$) and the other half to exploitation ($|A| < 1$);
- GWO has only two main parameters to adjust ($a$ and $C$).
The MATLAB source (the multi-objective variant, MOGWO) follows:

clear all
clc
drawing_flag = 1;

TestProblem='UF1';
nVar=10;
fobj = cec09(TestProblem);

xrange = xboundary(TestProblem, nVar);

% Lower bound and upper bound
lb=xrange(:,1)';
ub=xrange(:,2)';

VarSize=[1 nVar];

GreyWolves_num=100;
MaxIt=1000;         % Maximum Number of Iterations
Archive_size=100;   % Repository Size

alpha=0.1;  % Grid Inflation Parameter
nGrid=10;   % Number of Grids per each Dimension
beta=4;     % Leader Selection Pressure Parameter
gamma=2;    % Extra (to be deleted) Repository Member Selection Pressure

% Initialization
GreyWolves=CreateEmptyParticle(GreyWolves_num);

for i=1:GreyWolves_num
    GreyWolves(i).Velocity=0;
    GreyWolves(i).Position=zeros(1,nVar);
    for j=1:nVar
        GreyWolves(i).Position(1,j)=unifrnd(lb(j),ub(j),1);
    end
    GreyWolves(i).Cost=fobj(GreyWolves(i).Position')';
    GreyWolves(i).Best.Position=GreyWolves(i).Position;
    GreyWolves(i).Best.Cost=GreyWolves(i).Cost;
end

GreyWolves=DetermineDomination(GreyWolves);
Archive=GetNonDominatedParticles(GreyWolves);

Archive_costs=GetCosts(Archive);
G=CreateHypercubes(Archive_costs,nGrid,alpha);

for i=1:numel(Archive)
    [Archive(i).GridIndex Archive(i).GridSubIndex]=GetGridIndex(Archive(i),G);
end

% MOGWO main loop
for it=1:MaxIt
    a=2-it*((2)/MaxIt);
    for i=1:GreyWolves_num
        clear rep2
        clear rep3

        % Choose the alpha, beta, and delta grey wolves
        Delta=SelectLeader(Archive,beta);
        Beta=SelectLeader(Archive,beta);
        Alpha=SelectLeader(Archive,beta);

        % If there are less than three solutions in the least crowded
        % hypercube, the second least crowded hypercube is also found
        % to choose other leaders from.
        if size(Archive,1)>1
            counter=0;
            for newi=1:size(Archive,1)
                if sum(Delta.Position~=Archive(newi).Position)~=0
                    counter=counter+1;
                    rep2(counter,1)=Archive(newi);
                end
            end
            Beta=SelectLeader(rep2,beta);
        end

        % This scenario is the same if the second least crowded hypercube
        % has one solution, so the delta leader should be chosen from the
        % third least crowded hypercube.
        if size(Archive,1)>2
            counter=0;
            for newi=1:size(rep2,1)
                if sum(Beta.Position~=rep2(newi).Position)~=0
                    counter=counter+1;
                    rep3(counter,1)=rep2(newi);
                end
            end
            Alpha=SelectLeader(rep3,beta);
        end

        % Eq.(3.4) in the paper
        c=2.*rand(1, nVar);
        % Eq.(3.1) in the paper
        D=abs(c.*Delta.Position-GreyWolves(i).Position);
        % Eq.(3.3) in the paper
        A=2.*a.*rand(1, nVar)-a;
        % Eq.(3.8) in the paper
        X1=Delta.Position-A.*abs(D);

        % Eq.(3.4) in the paper
        c=2.*rand(1, nVar);
        % Eq.(3.1) in the paper
        D=abs(c.*Beta.Position-GreyWolves(i).Position);
        % Eq.(3.3) in the paper
        A=2.*a.*rand()-a;
        % Eq.(3.9) in the paper
        X2=Beta.Position-A.*abs(D);

        % Eq.(3.4) in the paper
        c=2.*rand(1, nVar);
        % Eq.(3.1) in the paper
        D=abs(c.*Alpha.Position-GreyWolves(i).Position);
        % Eq.(3.3) in the paper
        A=2.*a.*rand()-a;
        % Eq.(3.10) in the paper
        X3=Alpha.Position-A.*abs(D);

        % Eq.(3.11) in the paper
        GreyWolves(i).Position=(X1+X2+X3)./3;

        % Boundary checking
        GreyWolves(i).Position=min(max(GreyWolves(i).Position,lb),ub);

        GreyWolves(i).Cost=fobj(GreyWolves(i).Position')';
    end

    GreyWolves=DetermineDomination(GreyWolves);
    non_dominated_wolves=GetNonDominatedParticles(GreyWolves);

    Archive=[Archive; non_dominated_wolves];

    Archive=DetermineDomination(Archive);
    Archive=GetNonDominatedParticles(Archive);

    for i=1:numel(Archive)
        [Archive(i).GridIndex Archive(i).GridSubIndex]=GetGridIndex(Archive(i),G);
    end

    if numel(Archive)>Archive_size
        EXTRA=numel(Archive)-Archive_size;
        Archive=DeleteFromRep(Archive,EXTRA,gamma);

        Archive_costs=GetCosts(Archive);
        G=CreateHypercubes(Archive_costs,nGrid,alpha);
    end

    disp(['In iteration ' num2str(it) ': Number of solutions in the archive = ' num2str(numel(Archive))]);
    save results

    % Results
    costs=GetCosts(GreyWolves);
    Archive_costs=GetCosts(Archive);

    if drawing_flag==1
        hold off
        plot(costs(1,:),costs(2,:),'k.');
        hold on
        plot(Archive_costs(1,:),Archive_costs(2,:),'rd');
        legend('Grey wolves','Non-dominated solutions');
        drawnow
    end
end