Particle Swarm Optimizer


The Particle Swarm Optimizer was developed by Leonard Mozeko (LenMoz).

Benefits of Wealth-Lab's Particle Swarm Optimizer


In a nutshell

Particle swarm algorithms solve an optimization problem by evolving a population of particles (candidate solutions) toward better solutions. A "particle" represents a combination of Strategy Parameters that defines one of the many possible solutions to the optimization problem. The initial parameters of a particle are chosen randomly between each strategy parameter's Start and Stop values, as seen on the Optimization Control screen.

Particle swarm optimization finds good parameter combinations by moving the particles toward good parameter sets already seen. It does this by remembering the parameter combinations that produced the best results, both at the particle level and globally. The next set of parameters for each particle is chosen by a function that considers, for each parameter, the current value, the velocity from the prior position, the distance and direction to the best position seen so far by the particle, and the distance and direction to the best position seen so far by the entire population of particles.
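
To make the moving parts concrete, here is a minimal, self-contained Python sketch of the idea (an illustration only, not the extension's actual code; the toy fitness function and all names are invented for the example):

```python
import random

def pso(fitness, lo, hi, n_particles=20, n_iters=10,
        w=0.729, c1=1.49445, c2=1.49445):
    """Maximize fitness(params) over the box defined by lo/hi per parameter."""
    dim = len(lo)
    # Initial particles: random positions within each parameter's Start/Stop range
    pos = [[random.uniform(lo[d], hi[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # best position each particle has seen
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=pbest_fit.__getitem__)
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]  # best position any particle has seen

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                          # keep some momentum
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # pull toward own best
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # pull toward global best
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            f = fitness(pos[i])
            if f > pbest_fit[i]:                  # remember improvements
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy run: maximize -(x-3)^2 - (y+1)^2, whose optimum is at (3, -1)
best, score = pso(lambda p: -(p[0] - 3) ** 2 - (p[1] + 1) ** 2,
                  lo=[-10, -10], hi=[10, 10])
```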

Is the Particle Swarm algorithm any different from Monte Carlo?

Yes. Although both the Particle Swarm Optimizer and Monte Carlo are fast and use randomness, Particle Swarm, unlike Monte Carlo, searches purposefully: it moves particles toward previously seen good solutions, looking for parameter combinations that provide an optimal value of the fitness function ("Net Profit", for instance).

How does it work?

After installing the extension and restarting Wealth-Lab, Particle Swarm appears in your list of installed optimizers, next to "Exhaustive" and "Monte Carlo". After choosing "Particle Swarm" as the Optimization Method and selecting the Scorecard that contains the metric you wish to optimize, click on Settings to customize the Particle Swarm Optimizer (PSO) run.


See also: Stochastic (random) Optimizer Hints


Running the Particle Swarm Optimizer

1. Choose a fitness function. What are we optimizing?

The metric to optimize is called the fitness function: a measure of a particle's optimality within the population. To choose one, you can keep the default "Net Profit" option or select any performance metric available from the Scorecards installed in Wealth-Lab, such as the Basic, Extended, or MS123 Scorecards (if installed). It is suggested that you start with "Net Profit". Not all metrics have been tested, and some may not work with PSO.

2. Review Settings and Launch Optimization

Note: A brief description of each algorithm can be seen by selecting it from the dropdown.

[Image: Particle Swarm Optimizer settings]

Select the metric and direction of optimization

Make sure you’ve selected the correct Scorecard for the metric you want to optimize, then choose the metric from the right-side Metric dropdown; it lists all choices from the currently selected Scorecard (“Select Scorecard” on the main optimizer screen). Set “Highest” or “Lowest” to indicate whether you want to find the highest or lowest value of the metric. Note: Walk-Forward Optimization (WFO) also has a “Highest/Lowest” option; when using Particle Swarm Optimization, the WFO option should always be set to “Highest.”

Specify which algorithm to use

Specify the Algorithm used to move the particles in the solution space. Particle Swarm Optimization (PSO) has many variations; many articles and scholarly studies have contributed to this diversity. A number of variations are implemented and can be chosen from the Algorithm dropdown. A brief description is displayed as each algorithm is selected. See "Available Algorithms" in this document for more details. Author favorites are "GPAC PSO" and "Clerc Tribes."

Check or uncheck “Plus Genetic Crossover”

This option borrows from Genetic Algorithm techniques. Checking it may improve results for specific algorithms on a given strategy, so try an optimization with and without it. When checked, at the start of each iteration the one third of particles with the worst fitness (metric value) are replaced by a random crossover of two better particles (see the sketch below). This is a trade-off: presumably the replaced particles were far from a solution and this action improves their positions, but genetic crossover causes a loss of diversity. If a solution lies near a replaced particle’s prior location, the odds of finding it have been reduced. This option is not available when using the “Clerc Tribes” algorithm.
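
The extension's exact crossover mechanics are not documented on this page, so treat the following as an assumption-laden sketch; the per-parameter uniform crossover is a guess made for illustration:

```python
import random

def replace_worst_third(particles, fitnesses, maximize=True):
    """Replace the worst third of particles with crossovers of two better ones.

    particles: list of parameter-value lists; fitnesses: parallel metric values.
    The per-parameter uniform crossover below is an assumption for illustration.
    """
    n_replace = len(particles) // 3
    if n_replace == 0:
        return
    # Sort best-first so the worst third sits at the end
    order = sorted(range(len(particles)),
                   key=fitnesses.__getitem__, reverse=maximize)
    survivors, losers = order[:-n_replace], order[-n_replace:]
    for i in losers:
        a, b = random.sample(survivors, 2)     # pick two better particles
        particles[i] = [random.choice(pair)    # take each parameter from a or b
                        for pair in zip(particles[a], particles[b])]
```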

Specify Particles

A “Particle” is a combination of parameter values, originally chosen within the parameter Start and Stop ranges. So “Particles” indicates the number of parameter combinations used in each iteration. A larger population provides better diversity (especially in the first generations), helping ensure that the algorithm won't get stuck at a local extreme by mischance; however, it also increases run time, so it’s a trade-off. “Particles” is greyed out when using “Clerc Tribes” because that algorithm automatically increases or decreases the number of particles based on progress toward a solution. Before release 2015.7.1 the particles were positioned randomly. Now, for most algorithms (except Tribes), the Hammersley algorithm is used to produce particles that are uniformly, rather than randomly, distributed within the solution space.
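
For the curious, a minimal Hammersley point set can be generated as below (a sketch, not the extension's code); the points spread evenly over the unit cube and are then scaled to each parameter's Start/Stop range:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def hammersley(n_points, dim, primes=(2, 3, 5, 7, 11, 13)):
    """n_points evenly spread points in the dim-dimensional unit cube."""
    pts = []
    for i in range(n_points):
        p = [i / n_points]  # first coordinate is simply i/n
        p += [radical_inverse(i, primes[d]) for d in range(dim - 1)]
        pts.append(p)
    return pts

# Scale unit-cube points to parameter Start/Stop ranges, e.g. two parameters:
start, stop = [2, 10], [20, 50]
particles = [[s + u * (e - s) for u, s, e in zip(pt, start, stop)]
             for pt in hammersley(20, 2)]
```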

Specify iteration count

The optimization process stops after reaching the number of iterations specified as Iterations.

No formal rule exists for finding the best combination of particles and iterations. A suggested starting place is 10 to 30 Particles and 8 to 12 iterations.

Close “Settings” and click “Begin Optimization”

Note that, as with all optimization methods, you can end the optimization at any time by clicking “Cancel Optimization”.

3. Monitoring progress

You can observe Particle Swarm Optimization progress by clicking on the “Progress Log” and “Progress Graph” tabs.

Progress Tab

Note that the progress tab has a “Copy To Clipboard” button.

[Image: Progress Log tab]

Fitness Graph Tab

A graphical representation showing the maximum and average values of the function being optimized.

[Image: Fitness Graph tab]

The basic PSO equation

The various algorithms are extensions to the basic PSO equation. The general form of the PSO equation is:

v' = w*v + c1*r1*(pbest − x) + c2*r2*(gbest − x)
x' = x + v'

(applied to each parameter separately, with fresh uniform random numbers r1 and r2 in [0, 1])

where w is the inertia weight, c1 is the attraction weight toward the best position the particle has seen (pbest), and c2 is the attraction weight toward the best position any particle has seen (gbest). The different algorithms below differ from each other in how they assign values to w, c1, and c2. Some also differ in how data is selected and/or in additional terms added to the basic formula.
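
In code, one particle's move under this equation might read as follows (a sketch, not the extension's source); the default constants are the "Basic PSO" values listed below, and a variation such as "IWPSO" would simply pass a different w each iteration:

```python
import random

def move(pos, vel, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One particle's move under the general PSO equation."""
    new_pos, new_vel = [], []
    for d in range(len(pos)):
        r1, r2 = random.random(), random.random()  # fresh randomness per parameter
        v = (w * vel[d]
             + c1 * r1 * (pbest[d] - pos[d])
             + c2 * r2 * (gbest[d] - pos[d]))
        new_vel.append(v)
        new_pos.append(pos[d] + v)
    return new_pos, new_vel

# An IWPSO-style schedule would vary w linearly across iterations, e.g.:
# w = 0.9 - 0.5 * iteration / (total_iterations - 1)
```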

Available Algorithms

Twelve different PSO algorithms are implemented. Choose which algorithm to run from Settings. Experiment with the different algorithms. A “Genetic Crossover” function can be added to all of the algorithms except “Clerc Tribes”.

"Basic PSO" The original PSO formula. w, c1, and c2 are constant throughout the optimization (0.729, 1.49445, and 1.49445, respectively).

"Clerc Basic(OEP 0)" Moves each particle based on attraction toward the best of randomly chosen particles. Based on the algorithm described in the “First Results” chapter of Maurice Clerc’s book "Particle Swarm Optimization."

"Clerc Tribes" This is an adaptive algorithm, quite different from the others. The number of particles is not specified by the user, but changes dynamically after each iteration of the optimization. Particles are organized into tribes which share results within the tribe. Only the best particle of a tribe communicates with the other tribes. While it starts with a few particles in a few “tribes”, additional tribes are added as the optimization progresses, using knowledge from earlier tribes. Finally, particles may be added to or dropped from a tribe based on the tribe’s improvement (or deterioration) from one iteration to the next. The theory of this algorithm can be found in the “Tribes” chapter of Maurice Clerc’s book, "Particle Swarm Optimization". Due to its dynamic nature, time projections to the end of the optimization cannot be determined, but progress can be monitored on the Progress and Graph tabs.

"Coordinated Aggregation (CAPSO)" Based on Vlachogiannis/Lee paper(IEEE, 2006), in this unique approach, particles are attracted to all better particles, where the attraction strength is proportional to the improvement in fitness. Use of “Genetic Crossover” is not recommended.

"Comprehensive (CLPSO)" When moving a particle, this variation finds the two nearest other particles, choosing the one with better fitness. Then, it may randomly (50-50) use that particle’s best position rather than the current particle’s best as the “local attraction” position.

"Custom" This algorithm features descending inertia weight and decreasing velocity factor. The idea is to initially cover as much solution space as possible, then later slow the particle movement as the solution is approached. Varies the inertial weight (w) and overall velocity with each iteration.

"Dynamic" This algorithm holds the sum of w, c1, and c2 equal to a linearly adjusted total velocity... In early iterations, it emphasizes inertia, then local best, then global best, with the goal to find a high quality local best. In later iterations, emphasis is on global best, then local best, then inertia, because some particle likely has found a quality best.

"Fully Informed (FIPS)" This is Mendes' fully informed particle swarm - Particles are attracted to all particle bests, not just their own. Research states this tends to overemphasize the center of the solution space.

"GPAC PSO" This algorithm is highly recommended. Based on Vlachogiannis/Lee paper (November, 2006), particles are attracted to their own best, global best, and passive congregational particle (a random particle, "Passive Congregation" PAC). Velocity is dampened with a constriction factor. Use of “Genetic Crossover” is not recommended.

"IWPSO w/Constraint" Varies the inertia weight (w) from 0.9 to 0.4 linearly through the iterations and applies a constraint factor (f) to the inertia weight.

"LPAC PSO" Based on Vlachogiannis/Lee paper (November, 2006),, particles are attracted to their best, the closest particle with a better best position, and passive congregational particle (a random particle, "Passive Congregation" PAC). Use of “Genetic Crossover” is not recommended.

"Simulated Annealing" Based on Shieh/Kuo/Chiang paper(2011) Uses descending inertia(w), and fixed values for local(c1), and global(c2) attraction. Increasingly rejects movement to positions of lower fitness, emulating “cooling”.