
[Computational Intelligence] - Swarm Intelligence Algorithms (Ant Colony Optimization Algorithm ACO, Particle Swarm Optimization Algorithm PSO)

Swarm intelligence algorithms

Unlike most gradient-based optimization algorithms, swarm intelligence algorithms rely on probabilistic search.

Advantages over gradient algorithms and traditional evolutionary algorithms:

  1. There is no centralized control, so the failure of an individual does not affect the solution of the whole problem, which makes the system more robust.
  2. Indirect (non-direct) information exchange ensures the scalability of the system.
  3. The parallel, distributed model of the algorithm makes full use of multiple processors.
  4. There are no special requirements for continuity of problem definition.
  5. The algorithm is simple to implement.
Ant Colony Optimization Algorithm ACO

The ACO algorithm is a bionic method derived from modeling the way ants forage and find paths in nature.

As they forage, ants deposit a volatile secretion, the pheromone, along the paths they travel; they can sense its presence and intensity and tend to move in the direction of higher pheromone concentration. The amount of pheromone deposited is related to the length of the path: the longer the path, the lower the pheromone concentration left on it. The more ants choose a path, the higher the pheromone concentration on that path becomes, which in turn attracts more ants, creating positive feedback. In the end, the ant colony finds the optimal path between the food source and the nest: the pheromone concentration on the optimal path keeps growing, while the pheromone on the other paths evaporates over time.
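To make the mechanism concrete, the sketch below applies pheromone-based path choice and pheromone update to a tiny TSP-style instance. This is a minimal illustration, not the canonical ACO formulation: the distance matrix, the parameter values (alpha, beta, the evaporation rate rho, the deposit constant Q) and the number of ants are all illustrative assumptions.

```python
import numpy as np

# Minimal ACO sketch on a 4-city symmetric TSP instance.
# All numeric values below (distances, alpha, beta, rho, Q, ant count) are assumptions.
dist = np.array([[0, 2, 9, 10],
                 [2, 0, 6, 4],
                 [9, 6, 0, 3],
                 [10, 4, 3, 0]], dtype=float)
n = len(dist)
pheromone = np.ones((n, n))            # equal pheromone on every edge at the start
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0
rng = np.random.default_rng(0)

def build_tour():
    """One ant builds a tour, choosing the next city with probability
    proportional to pheromone^alpha * (1/distance)^beta."""
    tour = [int(rng.integers(n))]
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tour]
        weights = np.array([pheromone[i, j] ** alpha * (1.0 / dist[i, j]) ** beta
                            for j in choices])
        tour.append(int(rng.choice(choices, p=weights / weights.sum())))
    return tour

def tour_length(tour):
    return sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))

best_tour, best_len = None, float("inf")
for _ in range(50):
    tours = [build_tour() for _ in range(8)]       # 8 ants per iteration
    pheromone *= (1 - rho)                         # evaporation weakens all paths over time
    for t in tours:
        length = tour_length(t)
        if length < best_len:
            best_tour, best_len = t, length
        for k in range(n):                         # shorter tours deposit more pheromone
            i, j = t[k], t[(k + 1) % n]
            pheromone[i, j] += Q / length
            pheromone[j, i] += Q / length

print(best_tour, best_len)
```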

Particle Swarm Optimization Algorithm PSO

Imagine a flock of birds searching for food, and assume there is only one piece of food in the region (the optimal solution of the optimization problem); the flock's task is to find this food source. Throughout the search, each bird lets the others know its position by exchanging information, and through this collaboration the flock determines whether the optimal solution has been found; information about the best position found so far is passed on to the entire flock. Eventually the whole flock gathers around the food source, that is, the optimal solution has been found and the problem converges.
PSO takes its cue from this model and uses it to solve optimization problems.

  • In PSO, each potential solution to the optimization problem is a bird in the search space, called a particle. All particles have a fitness value determined by the function being optimized, and each particle has a velocity that determines the direction and distance it flies. The particles then follow the current optimal particle to search through the solution space.
  • PSO is initialized with a population of random particles (random solutions), and the optimal solution is then found by iteration.
    In each iteration, each particle updates itself by tracking two "extreme" values:
    the first is the best solution found by the particle itself, called the individual extreme value;
    the other is the best solution found so far by the whole population, called the global extreme value.
    Alternatively, instead of the whole population, only a part of it can be used as the neighborhood of a particle; the best values among all neighbors are then local extreme values.

Process:
Suppose a swarm of N particles searches a D-dimensional target space, where:
the i-th particle is represented as a D-dimensional vector: $x_i = (x_{i1}, x_{i2}, \dots, x_{iD})$, $1 \le i \le N$;
the "flight" velocity of the i-th particle is also a D-dimensional vector: $v_i = (v_{i1}, v_{i2}, \dots, v_{iD})$;
the best position found so far by particle i is $p_i = (p_{i1}, p_{i2}, \dots, p_{iD})$;
the best position found so far by the whole population (or neighborhood) is $p_g = (p_{g1}, p_{g2}, \dots, p_{gD})$.
Given these two best values, each particle updates its velocity and position in every dimension d ($1 \le d \le D$) according to the following equations:
$$v_{id} = w \cdot v_{id} + c_1 \cdot r_1 (p_{id} - x_{id}) + c_2 \cdot r_2 (p_{gd} - x_{id})$$
$$x_{id} = x_{id} + v_{id}$$
where $c_1$ and $c_2$ are learning factors, $w$ is the inertia weight, and $r_1$ and $r_2$ are uniform random numbers in $[0, 1]$.
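Read literally, the two equations amount to a couple of lines of code per particle and per dimension. A one-dimensional, single-particle update step might look like this (the values of w, c1, c2 and the particle state are illustrative assumptions):

```python
import random

# One velocity/position update step for a single particle in one dimension.
w, c1, c2 = 0.7, 1.5, 1.5      # inertia weight and learning factors (assumed values)
x, v = 2.0, 0.1                # current position and velocity of the particle
pbest, gbest = 1.2, 0.4        # individual and global best positions found so far

r1, r2 = random.random(), random.random()                    # uniform random numbers in [0, 1]
v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)    # velocity update
x = x + v                                                    # position update
```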

Algorithmic Processes:

  1. Initialization, set particle swarm size, initial position, initial velocity, etc.
  2. Evaluate the fitness of each particle and update the individual best and the global best solutions.
  3. Update the velocity and position of each particle.
  4. If the maximum number of iterations has been reached, stop; otherwise return to step 2.
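Putting the update equations and the four steps together, a compact sketch of the whole loop could look like the following; the objective function (a simple sphere function), the swarm size, the iteration limit and the parameter values are illustrative assumptions.

```python
import numpy as np

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Minimal global-best PSO sketch; all defaults are illustrative assumptions."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # step 1: random initial positions ...
    v = np.zeros((n_particles, dim))                 # ... and velocities
    pbest = x.copy()                                 # individual best positions
    pbest_val = np.apply_along_axis(f, 1, x)         # their fitness values
    gbest = pbest[np.argmin(pbest_val)].copy()       # global best position

    for _ in range(iters):                           # step 4: loop until the iteration limit
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # step 3: velocity update
        x = np.clip(x + v, lo, hi)                                   #         position update
        vals = np.apply_along_axis(f, 1, x)                          # step 2: evaluate fitness
        improved = vals < pbest_val                                  # update individual bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()                   # update global best
    return gbest, pbest_val.min()

# Usage example: minimize the sphere function sum(x_d^2); the optimum is the zero vector.
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)))
print(best_x, best_val)
```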

Pros:

  1. Easy to implement, with good accuracy, fast convergence, and few parameters to adjust.
  2. Not very sensitive to population size.
  3. It maintains a balance between diversification (exploring the search space) and intensification (refining good solutions).

Drawbacks:

  1. Poor handling of discrete optimization problems.
  2. It easily falls into local optima.