The Particle Swarm Optimizer was developed by Leonard Mozeko (LenMoz).
Benefits of Wealth-Lab's Particle Swarm Optimizer
- Speeds up optimizations. Optimal parameters are usually discovered in a fraction of the time it takes when using exhaustive search.
- Works where the exhaustive method stumbles: when the number of parameter combinations is too large. It takes only a subset of the total runs to find the "Top-10" values using Particle Swarm Optimization (PSO). In this document, "PSO" is used as shorthand for "Particle Swarm Optimization."
- Aims at a near-optimal solution, which is likely to be a robust solution. The PSO algorithms do not seek the absolute optimum, which could easily turn out to be an unreliable "peak" value.
- Like user-guided optimization, PSO focuses on important regions of solution space, having the benefit of a selective search – yet without the need for human interaction.
- Particle Swarm optimization can be used by itself or to pave the way for exhaustive optimization – with a much tighter range of possible optimization values.
In a nutshell
Particle swarm algorithms solve an optimization problem by evolving a population of particles (candidate solutions) toward better solutions. A "particle" represents a combination of Strategy Parameters that defines one of the many possible solutions to the optimization problem. The initial parameters of a particle are chosen randomly between each strategy parameter's Start and Stop values, as seen on the Optimization Control screen.
Particle swarm optimization finds good parameter combinations by moving the particles toward good parameter sets already seen. It does this by remembering the parameter combinations that produced the best results, both at the particle and global levels. The next set of parameters for each particle is chosen by a function that considers, for each parameter, the current value, the velocity from the prior position, the distance and direction from the best position seen so far by the particle, and the distance and direction from the best position seen so far by the entire population of particles.
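For illustration only, here is a minimal Python sketch (not the extension's actual code) of what such a particle might look like; all names are hypothetical:

```python
import random

# Minimal sketch of a PSO particle (illustrative; not the extension's code).
# Each strategy parameter has a Start and Stop value, as on the
# Optimization Control screen.
class Particle:
    def __init__(self, starts, stops):
        # Position: one random value per parameter, within [Start, Stop]
        self.position = [random.uniform(lo, hi) for lo, hi in zip(starts, stops)]
        # Velocity: a small random initial movement per parameter
        self.velocity = [0.1 * random.uniform(lo - hi, hi - lo)
                         for lo, hi in zip(starts, stops)]
        # Best position and fitness this particle has seen ("local best")
        self.best_position = list(self.position)
        self.best_fitness = float("-inf")
```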
Is the Particle Swarm algorithm any different from Monte Carlo?
Yes. Although both the Particle Swarm Optimizer and Monte Carlo are fast and use randomness, Particle Swarm, unlike Monte Carlo, searches purposefully. Particle Swarm algorithms move particles toward previously seen good solutions, searching for parameter combinations that provide an optimal value of the fitness function, "Net Profit", for instance.
How does it work?
After installing the extension and restarting Wealth-Lab, Particle Swarm appears in your list of installed optimizers, next to "Exhaustive" and "Monte Carlo". After choosing “Particle Swarm” as the Optimization Method and selecting the Scorecard having the metric you wish to optimize, click Settings to customize the Particle Swarm Optimizer (PSO) run.
- A user-specified number of particles is generated within the solution space, i.e. having parameter values between Start and Stop.
- The "particle movement" happens in cycles, also known as "iterations". The first cycle starts from a population of random particles having random velocity.
- The strategy under test is run for each particle (i.e. set of parameters) in every iteration. This evaluates the particle's fitness, based on the chosen fitness function, and allows it to be ranked against the population.
- Based on fitness, particles are moved toward parameter combinations that have shown the best “fitness” (the highest value for the chosen metric), giving each particle a new position within the solution space. The basic rules for moving particles consist of three components: inertia, local best, and global best. The “inertia” component serves to keep the particle moving in the same direction; the current velocity is multiplied by an “inertia weight” to give one part of the new velocity. Local Best is the parameter set with the best fitness seen by the current particle. Global Best is the parameter set with the best fitness seen by any particle. There are variations in how these basic influences (plus others) are combined and how much weight is applied to each factor. The goal is to cover as much solution space as possible with the fewest calculations, i.e. in the shortest time.
- After particles are “moved” (receive a new set of parameters), the fitness function is recalculated (i.e. the strategy under test is run against the symbol or DataSet) for each particle at its new position. A speedup heuristic watches for parameter combinations that have already been seen, and simply reuses the fitness/metric value in that case.
- This process repeats until the number of iterations specified in Settings has been completed or the optimization is cancelled. A satisfactory result may not be achieved if the number of particles or the number of iterations is too small. (Particle count and iterations are user Settings.) A sketch of this loop appears after this list.
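Here is a hedged Python sketch of that loop, including the reuse heuristic mentioned above. The helper names are hypothetical: run_strategy stands in for a full backtest returning the chosen metric, and move applies the particle-update rule shown later under "The basic PSO equation".

```python
# Illustrative sketch of the iteration loop, including the speedup heuristic
# that reuses results for parameter sets that have already been calculated.
def optimize(particles, iterations, run_strategy, move):
    seen = {}                                  # cache: parameter tuple -> fitness
    global_best, global_best_fit = None, float("-inf")
    for _ in range(iterations):
        for p in particles:
            key = tuple(p.position)
            if key in seen:
                fitness = seen[key]            # already calculated: reuse, skip backtest
            else:
                fitness = run_strategy(p.position)
                seen[key] = fitness
            if fitness > p.best_fitness:       # update this particle's local best
                p.best_fitness, p.best_position = fitness, list(p.position)
            if fitness > global_best_fit:      # update the population's global best
                global_best_fit, global_best = fitness, list(p.position)
        for p in particles:
            move(p, global_best)               # give each particle a new position
    return global_best, global_best_fit
```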
Stochastic (Random) Optimizer Hints
- Consider using “Priority”: If the selected PosSizer can reject trades and the strategy does not set priority on entry signals, optimization may fail to produce good results. This is because the same parameter values will produce different outcomes on successive runs due to the random choice of trades.
- Consider Size of the Solution Space: The Particle Swarm Optimizer may end earlier than the progress bar indicates. This is because a heuristic allows a set of parameters to be calculated only once. If a second particle uses an already calculated parameter set, the prior result will be used, saving time. Finishing early can be a sign that your Start, Stop, and Step values result in too few unique values.
- Optimizing Towards Zero: Be aware that metrics that optimize toward zero (Ulcer Index, Luck Coefficient, etc.) can achieve a near-zero result by making no trades. The optimizer will migrate the particles to that useless result. Setting appropriate parameter Start and Stop values can avoid this.
Running the Particle Swarm Optimizer
1. Choose a fitness function. What are we optimizing?
The metric to optimize is called the fitness function – a measure of a particle's optimality among the population. To choose one, you can leave the default "Net Profit" option or select any performance metric available from Scorecards installed in Wealth-Lab – such as the Basic, Extended, or MS123 Scorecards (if installed).
It is suggested that you start with “Net Profit”. Not all metrics have been tested; not all metrics may work with PSO.
2. Review Settings and Launch Optimization
Note: A brief description of each algorithm can be seen by selecting it from the dropdown.
Select the metric and direction of optimization
Make sure you’ve selected the correct Scorecard for the metric you want to optimize. Set “Highest” or “Lowest” to indicate whether you want to find the highest or lowest value of the metric. Choose the metric you want to optimize from the right-side Metric dropdown. The dropdown shows all choices from the currently selected Scorecard (“Select Scorecard” on the main optimizer screen).
Note: Walk-Forward Optimization (WFO) also has a “Highest/Lowest” option. When using Particle Swarm Optimization, the WFO option should always be set to “Highest.”
Specify which algorithm to use
Specify the Algorithm to be used to move the particles in the solution space. Particle Swarm Optimization (PSO) has many variations; many articles and scholarly studies have contributed to this diversity. A number of variations are implemented and can be chosen from the Algorithm dropdown. A brief description is displayed as each algorithm is selected. See "Available Algorithms" in this document for more details. Author favorites are "GPAC PSO" and "Clerc Tribes."
Check or uncheck “Plus Genetic Crossover”
This option borrows from Genetic Algorithm techniques. Checking it may improve specific algorithms for a given strategy; your experience may depend on your strategy, so try an optimization with and without this option. When checked, at the start of each iteration the one third of particles having the worst fitness (metric value) are replaced by a random crossover of two better particles. This is a trade-off: presumably the replaced particles were far from a solution and this action improves their position, but genetic crossover causes a loss of diversity. If there is a solution near a replaced particle’s prior location, the odds of finding it have been reduced. This option is not available when using the “Clerc Tribes” algorithm.
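A hedged sketch of that replacement step (the exact crossover rule is an assumption for illustration; the extension's implementation may differ):

```python
import random

# Illustrative sketch: replace the worst third of particles with a random
# crossover of two better particles. Crossing over the parents' *best*
# positions parameter-by-parameter is an assumption, not the extension's
# documented rule.
def genetic_crossover(particles):
    ranked = sorted(particles, key=lambda p: p.best_fitness, reverse=True)
    cutoff = 2 * len(ranked) // 3
    survivors, worst = ranked[:cutoff], ranked[cutoff:]
    for victim in worst:
        mom, dad = random.sample(survivors, 2)
        # Each parameter is inherited from one parent, chosen at random
        victim.position = [random.choice(pair)
                           for pair in zip(mom.best_position, dad.best_position)]
```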
Specify Particles
A “Particle” is a combination of parameter values, originally chosen randomly within the parameter Start and Stop ranges. So, “Particles” indicates the number of parameter combinations to be used for each iteration. A larger population provides better diversity (especially in the first generations), helping ensure that the algorithm won’t hit a local extreme by mischance. A larger population increases run time, however, so it’s a trade-off. “Particles” is greyed out when using “Clerc Tribes” because in this case the number of particles is automatically increased or decreased based on progress towards a solution. Before release 2015.7.1, the particles were positioned randomly. Now, for most algorithms (except Tribes), the Hammersley algorithm is used to produce particles that are uniformly, rather than randomly, distributed within the solution space.
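For the curious, a sketch of a standard Hammersley point set (the extension's exact construction is not documented here, so treat the details as assumptions). Each coordinate lands in [0, 1) and would then be scaled to the corresponding Start..Stop range:

```python
# Sketch of Hammersley low-discrepancy initialization. The first coordinate
# of point i is i/n; remaining coordinates use the radical inverse of i in
# successive prime bases.
def radical_inverse(i, base):
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def hammersley(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    points = []
    for i in range(n):
        coords = [i / n] + [radical_inverse(i, primes[d]) for d in range(dims - 1)]
        points.append(coords)   # each coordinate is in [0, 1); scale to Start..Stop
    return points
```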
Specify iteration count
The optimization process stops after reaching the number of iterations specified as Iterations.
No formal rule exists for finding the best combination of particles and iterations. A suggested starting place is 10 to 30 Particles and 8 to 12 iterations.
Close “Settings” and click “Begin Optimization”
Note that, like all optimization methods, you can end the optimization at any time by clicking “Cancel Optimization”.
3. Monitoring progress
You can observe Particle Swarm Optimization progress by clicking on the “Progress Log” and “Progress Graph” tabs.
Progress Tab
Note that the progress tab has a “Copy To Clipboard” button.
Fitness Graph Tab
A graphical representation showing the maximum and average values of the function being optimized.
The basic PSO equation
The various algorithms are extensions of the basic PSO equation. The general form of the PSO equation is:

v = w*v + c1*r1*(localBest - x) + c2*r2*(globalBest - x)
x = x + v

where x is a particle's position (its parameter set), v is its velocity, r1 and r2 are random numbers between 0 and 1 drawn for each update, w is the inertia weight, c1 is the attraction weight toward the best position the particle has seen, and c2 is the attraction weight toward the best position any particle has seen. The different algorithms below differ from each other in how they assign values to w, c1, and c2. Some also differ in how data is selected and/or in additional terms added to the basic formula.
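In code form, the update might be sketched as follows (illustrative only; the constants shown match the "Basic PSO" algorithm described below, and other algorithms vary them or add terms):

```python
import random

# Sketch of the basic PSO update for one particle. W, C1, and C2 are the
# "Basic PSO" constants (0.729, 1.49445, 1.49445).
W, C1, C2 = 0.729, 1.49445, 1.49445

def move(p, global_best):
    for d in range(len(p.position)):
        r1, r2 = random.random(), random.random()
        p.velocity[d] = (W * p.velocity[d]
                         + C1 * r1 * (p.best_position[d] - p.position[d])
                         + C2 * r2 * (global_best[d] - p.position[d]))
        p.position[d] += p.velocity[d]
```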
Available Algorithms
Twelve different PSO algorithms are implemented. Choose which algorithm to run from Settings. Experiment with the different algorithms. A “Genetic Crossover” function can be added to all of the algorithms except “Clerc Tribes”.
"Basic PSO"
The original PSO formula. w, c1, and c2 are constant throughout the optimization (0.729, 1.49445, and 1.49445, respectively).
"Clerc Basic(OEP 0)"
Moves each particle based on attraction toward the best of randomly chosen particles. Based on the algorithm described in the “First Results” chapter of Maurice Clerc’s book "Particle Swarm Optimization."
"Clerc Tribes"
This is an adaptive algorithm, quite different from the others. The number of particles is not specified by the user, but changes dynamically after each iteration of the optimization. Particles are organized into tribes which share results within the tribe. Only the best particle of a tribe communicates with the other tribes. While it starts with a few particles in a few “tribes”, additional tribes are added as the optimization progresses, using knowledge from earlier tribes. Finally, particles may be added to or dropped from a tribe based on the tribe’s improvement (or deterioration) from one iteration to the next. The theory of this algorithm can be found in the “Tribes” chapter of Maurice Clerc’s book, "Particle Swarm Optimization". Due to its dynamic nature, time projections to the end of the optimization cannot be determined, but progress can be monitored on the Progress and Graph tabs.
"Coordinated Aggregation (CAPSO)"
Based on a Vlachogiannis/Lee paper (IEEE, 2006). In this unique approach, particles are attracted to all better particles, where the attraction strength is proportional to the improvement in fitness. Use of “Genetic Crossover” is not recommended.
"Comprehensive (CLPSO)"
When moving a particle, this variation finds the two nearest other particles, choosing the one with better fitness. Then, it may randomly (50-50) use that particle’s best position rather than the current particle’s best as the “local attraction” position.
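A hedged sketch of that local-attraction choice (the Euclidean distance measure and the names are assumptions for illustration):

```python
import math
import random

# Illustrative sketch of the CLPSO local-attraction choice: find the two
# nearest other particles, keep the fitter one, then use its best position
# 50% of the time instead of the current particle's own best.
def clpso_local_best(p, particles):
    others = sorted((q for q in particles if q is not p),
                    key=lambda q: math.dist(p.position, q.position))
    neighbor = max(others[:2], key=lambda q: q.best_fitness)
    if random.random() < 0.5:
        return neighbor.best_position   # borrow the fitter neighbor's best
    return p.best_position              # keep the particle's own best
```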
"Custom"
This algorithm features a descending inertia weight and a decreasing velocity factor, varying the inertia weight (w) and the overall velocity with each iteration. The idea is to initially cover as much solution space as possible, then slow the particle movement as the solution is approached.
"Dynamic"
This algorithm holds the sum of w, c1, and c2 equal to a linearly adjusted total velocity. In early iterations, it emphasizes inertia, then local best, then global best, with the goal of finding a high-quality local best. In later iterations, emphasis is on global best, then local best, then inertia, because some particle has likely found a quality best.
"Fully Informed (FIPS)"
This is Mendes' fully informed particle swarm: particles are attracted to all particle bests, not just their own. Research states this tends to overemphasize the center of the solution space.
"GPAC PSO"
This algorithm is highly recommended. Based on a Vlachogiannis/Lee paper (November 2006), particles are attracted to their own best, the global best, and a passive congregational particle (a random particle; “Passive Congregation”, PAC). Velocity is dampened with a constriction factor. Use of “Genetic Crossover” is not recommended.
"IWPSO w/Constraint"
Varies the inertia weight (w) from 0.9 to 0.4 linearly through the iterations and applies a constraint factor (f) to the inertia weight.
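As a sketch, such a linear schedule might look like this (illustrative only; the constraint factor is omitted):

```python
# Sketch of a linearly descending inertia weight: w falls from 0.9 to 0.4
# across the iterations. The constraint factor (f) applied by the extension
# is not shown here.
def inertia_weight(iteration, total_iterations, w_start=0.9, w_end=0.4):
    frac = iteration / max(total_iterations - 1, 1)
    return w_start - (w_start - w_end) * frac
```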
"LPAC PSO"
Based on a Vlachogiannis/Lee paper (November 2006), particles are attracted to their best, the closest particle with a better best position, and a passive congregational particle (a random particle; “Passive Congregation”, PAC). Use of “Genetic Crossover” is not recommended.
"Simulated Annealing"
Based on a Shieh/Kuo/Chiang paper (2011). Uses descending inertia (w) and fixed values for local (c1) and global (c2) attraction. Increasingly rejects movement to positions of lower fitness, emulating “cooling”.
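A hedged sketch of such an acceptance rule, following standard simulated annealing (an assumption; the paper's exact formulation may differ):

```python
import math
import random

# Illustrative "cooling" acceptance rule: improvements are always kept,
# while moves to lower fitness are accepted with a probability that
# shrinks as the temperature falls across iterations.
def accept_move(old_fitness, new_fitness, temperature):
    if new_fitness >= old_fitness:
        return True
    return random.random() < math.exp((new_fitness - old_fitness) / temperature)
```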