Follow the Leader + Random Walk (FLW) Swarm Algorithm


Tags: metaheuristic, optimisation, swarm artificial intelligence (tagged by Sergio Rojas-Galeano about 2 years ago)

Model was written in NetLogo 6.1.0 • Viewed 661 times • Downloaded 189 times • Run 0 times


WHAT IS IT?

This is an agent-based model that implements a swarm search algorithm for the numerical optimisation of real-valued, bound-constrained functions, based mainly on two search operators: Follow the Leader and Random Walk (FLW). The goal of the swarm is to discover the best patch within a discretised landscape for a given optimisation problem, that is, to find the optimal (minimum) value of the discretised cost function for that problem more efficiently than traversing the entire landscape (exhaustive search).

The complete set of operations performed by the algorithm is:

  • Follow the leader: there are groups of agents that follow a leader.

  • Random walk: there are scout agents moving on their own.

  • Single- or multi-leader mode: there can be one or several leaders in the swarm.

  • Leadership relay: leadership is constantly updated based on fitness.

  • Clash safeguard: leader agents must stay away from each other to avoid collisions.

  • Monitor premature convergence: when cohesion is too low, agents disperse.

  • Enable elitism: an agent stays near the best patch discovered so far.

HOW IT WORKS

The optimisation problem defined by the LANDSCAPE chooser (from SPHERE, SPHERE-OFFSET, RASTRIGIN, ROSENBROCK, HIMMELBLAU, EGGHOLDER, or RANDOM) is discretised and projected onto the 2D grid of cells (patches) that make up the view area of the simulation. The swarm of agents therefore aims to discover the optimal patch in this projected landscape, that is, the coordinates where the cost function is lowest.

Each agent is characterised by a location (x, y) in the landscape and by the value of the cost function evaluated at that patch, which also indicates its fitness as a candidate solution for the chosen problem. Each agent can take one of three possible roles (or breeds): LEADER (an agent located at a promising patch), FOLLOWER (an agent following a leader), or WALKER (a scout agent).

In the simulation, LEADERs are shown as a "bird" shape (reminiscent of earlier swarm intelligence models inspired by the behavior of flocks of birds), each with a distinctive color, while FOLLOWERs are shown as triangles in the color of their LEADER. WALKERs, in turn, are shown as gray triangles. As the final solution to the problem, the algorithm maintains the FLW-BEST-PATCH ever discovered during the simulation run. Similarly, the TOP-LEADER represents the best solution in a given iteration.

LEADERs and WALKERs move randomly around their current positions; however, LEADERs move within a small range of distance (local exploitation) while WALKERs move within a larger range (global exploration). When a WALKER accidentally discovers a more promising patch than those of the current leaders, the TOP-LEADER is asked to move to that patch.

Furthermore, every LEADER has an equally sized group of FOLLOWERs. FOLLOWERs move in pursuit of their leader: they face their corresponding leader and jump in that direction with a random step length. When a FOLLOWER accidentally discovers a more promising patch, it switches roles to become the LEADER of its group (in practice, since all agents are homogeneous, they simply trade places).
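As an aside, the follower jump can be sketched in plain Python (the model itself is written in NetLogo; the function name and coordinate representation here are illustrative, but the move mirrors the model's follow-move procedure: jump a random multiple of the distance toward the leader, then take a small perturbed step):

```python
import math
import random

def follow_move(follower, leader, rng=random):
    """Sketch of the model's follow-move: face the leader, jump a random
    fraction (0..2) of the distance, then take a small perturbed step."""
    fx, fy = follower
    lx, ly = leader
    angle = math.atan2(ly - fy, lx - fx)      # direction toward the leader
    dist = math.hypot(lx - fx, ly - fy)
    t = rng.uniform(0, 2)                     # may overshoot the leader
    fx, fy = fx + t * dist * math.cos(angle), fy + t * dist * math.sin(angle)
    turn = math.radians(rng.gauss(0, 60))     # small local perturbation
    step = rng.uniform(0, 2)
    return fx + step * math.cos(angle + turn), fy + step * math.sin(angle + turn)
```

Note that because the jump lands at most the original distance away from the leader and the perturbation step is under 2 units, a follower never ends up more than 2 units farther from its leader than where it started.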

In each iteration of the algorithm, all the agents of the swarm traverse the landscape moving according to their roles. Additionally, three operators were introduced to improve search efficacy:

  • Monitor premature convergence: When the average cohesion of groups within the swarm falls below a certain level, agents disperse by performing a warm restart to random locations.

  • Clash safeguard: Prevents two leaders with their groups from colliding in the same landscape region.

  • Elitism: Recalls the best discovered patch (the best solution found so far) by moving a WALKER to that position.

The simulation ends after a maximum number of steps (MAX-TICKS) is reached, or when the GROUND-TRUTH-SOLUTION is discovered first.
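For readers skimming the NetLogo source further down, the order in which one tick applies the search operators can be summarised as follows (a plain-Python paraphrase of the model's go procedure; the handler names mirror the NetLogo procedure names):

```python
# Operator order of one FLW iteration, mirroring the model's `go` procedure.
PIPELINE = ["do-elitism", "follow-leaders", "random-walks",
            "track-clashes", "track-cohesion", "track-leadership",
            "update-best", "update-runtime", "show-spotlight"]

def run_iteration(handlers):
    """Apply each operator in order; `handlers` maps names to callables."""
    for name in PIPELINE:
        handlers[name]()
```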

HOW TO USE IT

First, choose an optimisation problem to solve from the LANDSCAPE dropdown list and select the corresponding XY-BOUNDS (recommended values are [-6, 6] for SPHERE, RASTRIGIN, ROSENBROCK, and HIMMELBLAU, and [-512, 512] for EGGHOLDER and RANDOM). The GRID-SIZE chooser sets the sampling resolution of the 2D grid of patches where the cost function will be evaluated (the larger this size, the harder it is to discover the best patch).

Then choose the number of agents to simulate with the SWARM-SIZE slider, the percentage of walkers in the swarm with the WALKERS-RATE slider, and the number of N-LEADERS within the swarm. Also, choose the termination condition with the MAX-TICKS slider and use the ELITISM? and COHESION? switches to enable or disable the corresponding operators. A typical configuration of these parameters would be (to simulate a population of 20 agents, including 4 leaders with 3 followers each, plus 4 walkers):

  • SWARM-SIZE: 20

  • N-LEADERS: 4

  • WALKERS-RATE: 0.2

  • ELITISM?: On

  • COHESION?: On

  • MAX-TICKS: 1000
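The way SETUP splits the swarm under these parameters can be checked with a quick sketch (plain Python; the function name is ours, but the arithmetic follows the model's create-walkers and hatch-followers calls):

```python
def swarm_composition(swarm_size, n_leaders, walkers_rate):
    """Split a swarm as the model's setup does: walkers first, then the
    remaining agents divided evenly among the leaders' groups."""
    walkers = int(swarm_size * walkers_rate)
    followers_per_leader = (swarm_size - walkers - n_leaders) // n_leaders
    return walkers, n_leaders, followers_per_leader
```

With the typical configuration above, this yields 4 walkers and 4 leaders with 3 followers each.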

Now you can run the simulation using the buttons in the control panel:

  • SETUP: Computes the problem LANDSCAPE and displays it in the view grid as a 2D discretised surface plot. In addition, it creates the swarm, scatters the agents randomly within the boundaries of the landscape, and initialises the algorithm's global variables (depending on the GRID-SIZE, this setup may take a while).

  • RESET: Relocates the agents randomly in the 2D view and initialises the algorithm's global variables (this is much faster than SETUP, as it doesn't recalculate the problem landscape, which might be useful for multiple simulation runs).

  • GO: Iteratively executes the main procedure of the algorithm until the optimum is found or MAX-TICKS is reached.

  • STEP: Executes a single iteration (one tick) of the main procedure of the algorithm.

  • SPOTLIGHT: A chooser to highlight the optimal solution, the best solution found so far, or the current top leader within the display area.

Also remember that the SPEED slider at the top of the window can be used to control how fast the simulation runs in the view grid.

Once you start running the simulation, the swarm behavior will emerge in the view grid as the agents try to discover the promising regions within the landscape and, hopefully, find the optimal patch before MAX-TICKS. In addition, the output monitors allow you to inspect this emerging behaviour as the model evolves:

  • GROUND-TRUTH-SOLUTION: Coordinates and cost function value of the optimum.

  • BEST-SOLUTION-FOUND: Coordinates and cost function value of the FLW-BEST-PATCH discovered so far.

  • CURRENT-TOP-LEADER: Coordinates and cost function value of the leader at the best patch in the current iteration.

  • TOP-LEADER-FITNESS: Historical plot of the leader with best fitness (value of cost function) throughout the simulation.

  • BEST-SOLUTION-FITNESS: Historical plot of best solutions discovered throughout the simulation.

  • AVG-COHESION: The average measure of cohesion of the groups within the swarm (only if COHESION? is activated).

  • AVERAGE-GROUP-COHESION: Historical plot of the average cohesion measurement throughout the simulation (only if COHESION? is enabled).

  • RUNTIME: Total execution time of the simulation run.

  • OPTIMUM-TICK: The tick number at which the optimum was discovered (if it was). Notice that if the algorithm succeeds, these last two monitors will display a "!!!" sign in their text boxes.

THINGS TO NOTICE

Each LANDSCAPE may exhibit different properties: multimodality, convexity, separability, multiple global or local minima, plateaus, ridges, valleys, etc. The advantage of swarm optimisation methods is their ability to explore several regions of the search space simultaneously, so they can discover these properties without getting stuck in local minima, as opposed to single-point metaheuristics. However, it is also known that these swarm-based methods may suffer from premature convergence, where the entire population collapses into a single suboptimal patch or small region. For reference, each landscape exhibits the following properties (the actual mathematical expressions and 2D landscapes can be seen in the Appendix of [1]):

  • SPHERE, SPHERE-OFFSET: unimodal, convex.

  • RASTRIGIN: multimodal, multiple local minima, ridges.

  • ROSENBROCK: multiple local minima, valleys.

  • HIMMELBLAU: multimodal, multiple global minima, valleys.

  • EGGHOLDER: multimodal, multiple local minima, valleys.

  • RANDOM: multimodal, multiple local minima, sampled from a scaled normal distribution.
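For reference, the standard two-dimensional forms of these benchmarks can be written as follows (a plain-Python transcription matching the expressions evaluated in the model code, except that NetLogo's trigonometric primitives work in degrees while Python's work in radians):

```python
import math

def sphere(x, y):
    return x ** 2 + y ** 2

def rastrigin(x, y):  # 2D Rastrigin with A = 10
    return 20 + (x ** 2 - 10 * math.cos(2 * math.pi * x)) \
              + (y ** 2 - 10 * math.cos(2 * math.pi * y))

def rosenbrock(x, y):
    return 100 * (y - x ** 2) ** 2 + (1 - x) ** 2

def himmelblau(x, y):
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

def eggholder(x, y):
    return -(y + 47) * math.sin(math.sqrt(abs(x / 2 + y + 47))) \
           - x * math.sin(math.sqrt(abs(x - (y + 47))))
```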

The FLW algorithm attempts to circumvent such swarm-based drawbacks by means of two features. First, it allows for multiple leaders, meaning you can choose between a single swarm of agents with a unique leader, or multiple leaders with different subgroups of followers (distinguished by different colors). When choosing the multi-leader mode, notice how the subgroups spread out looking for minima in separate regions of the LANDSCAPE, which is partly due to the anti-clash mechanism that the algorithm implements to avoid collisions between leaders. You can test this emergent behaviour (leaders dominating separate regions of the search space) by experimenting with different values for the N-LEADERS and SWARM-SIZE sliders.

The second feature is intended to prevent premature convergence by dispersing agents through a warm restart of agent locations whenever AVG-COHESION drops below a certain threshold (if COHESION? is enabled), or periodically after a predetermined number of ticks (if COHESION? is disabled) during the execution of the GO loop. Since relocated leaders can start exploring new, unknown regions, an ELITISM-like learning mechanism allows the swarm to keep track of the most promising solution found in previous restarts by moving an arbitrary WALKER to the BEST-SOLUTION-FOUND patch; this way, a leader will eventually be attracted to that patch ("the elite") if no better solutions are found after the restart.
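A minimal sketch of the cohesion check (plain Python; the names are ours, but the measure follows the model's track-cohesion procedure, which divides the summed follower-to-leader distances by the full SWARM-SIZE):

```python
def avg_cohesion(groups, swarm_size):
    """groups: list of (leader_xy, [follower_xy, ...]) pairs.
    Returns the summed follower-to-leader distances over the swarm size."""
    total = 0.0
    for (lx, ly), followers in groups:
        total += sum(((fx - lx) ** 2 + (fy - ly) ** 2) ** 0.5
                     for fx, fy in followers)
    return total / swarm_size

def needs_restart(cohesion, threshold=0.75):
    """True when the swarm has collapsed and a warm restart should fire."""
    return cohesion < threshold
```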

You can test the above behaviour by experimenting with the COHESION? and ELITISM? switches, whose effect is depicted in the view area and in the periodic saw-like patterns that appear in the AVERAGE-GROUP-COHESION and TOP-LEADER-FITNESS plots. Use the SPEED slider at the top to slow down the display of the warm restart mechanism. You can use the SPOTLIGHT chooser to highlight in the view area the location of the LANDSCAPE's GROUND-TRUTH-SOLUTION, the current BEST-SOLUTION-FOUND patch, or the current TOP-LEADER.

THINGS TO TRY

The following are interesting simulation scenarios:

  • Solve each LANDSCAPE with a single-leader swarm. Repeat the experiment many times (use the SETUP button once, then RESET and GO as many times as you like). What is the success rate for each problem? (i.e. what percentage of the total number of repetitions was the FLW algorithm able to discover the ground-truth optimum?)

  • Is the multi-leader strategy better than the single-leader strategy? Which one achieves a higher success rate? Do you get better results as the number of leaders increases? You can test the last hypothesis by trying different combinations of pair values (N-LEADERS, SWARM-SIZE): (1, 5), (2, 10), (4, 20), (8, 40).

  • Try to solve each LANDSCAPE with different resolutions (GRID-SIZE): 100x100, 200x200, 400x400, 800x800. This parameter affects the sampling rate of the problem cost function during quantisation and thus changes the size of the search space (the larger this size, the more difficult the search for the best patch). How does this affect the success rate of the algorithm? How does this affect the speed of convergence (that is, the average tick where the optimum is discovered)?

  • Notice that unlike the other problems, the RANDOM problem produces a different landscape each time you press SETUP. It would be interesting to investigate if nonetheless the FLW algorithm is able to solve it or if some instances present more difficulty than others.

As a side note, we remark that the resolution level can induce quantisation errors during the cost function sampling, so the optimum patch coordinates of a given LANDSCAPE can differ depending on the GRID-SIZE. For example, the optimum of the ROSENBROCK problem for the 100x100 and 200x200 resolutions is at f(1.2, 1.44) = 0.04, for the 400x400 resolution it is at f(0.93, 0.87) = 0.007501, whereas for the 800x800 resolution it is at f(1.005, 1.005) ≈ 0.00255.
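This quantisation effect is easy to reproduce outside NetLogo. The sketch below (plain Python; the function name is ours) discretises ROSENBROCK the same way the model does — grid coordinates are multiples of XY-BOUNDS / MAX-PXCOR — and returns the best patch an exhaustive search would find:

```python
def grid_minimum(step, bound=6.0):
    """Evaluate Rosenbrock at grid points i*step in [-bound, bound]
    and return the minimum as (value, x, y), i.e. the best patch."""
    def rosenbrock(x, y):
        return 100 * (y - x * x) ** 2 + (1 - x) ** 2
    n = int(round(bound / step))
    return min((rosenbrock(i * step, j * step), i * step, j * step)
               for i in range(-n, n + 1) for j in range(-n, n + 1))

# step 6/50  = 0.12  (100x100 grid) -> optimum at (1.2, 1.44), value 0.04
# step 6/400 = 0.015 (800x800 grid) -> optimum near (1, 1), value ~0.00255
```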

Another interesting idea to experiment with is to assess the performance of the model under different swarm configurations, in terms of efficiency. Since the worst case for a single-agent algorithm would be an exhaustive search of the entire LANDSCAPE, i.e., evaluating the cost function on the entire grid of NxN patches (where N depends on the GRID-SIZE), the efficiency of each configuration can be measured as the percentage of cost-function evaluations needed to find the optimum with respect to NxN, in other words, the average rate OPTIMUM-TICK / NxN. The following swarm configurations can be tested:

  • Simultaneous Global Random Search: A swarm of WALKERs (SWARM-SIZE: 5, N-LEADERS: 1, WALKERS-RATE: 0.8, COHESION?: OFF, ELITISM?: OFF)

  • Simultaneous Local Random Search: A swarm of singleton LEADERs (SWARM-SIZE: 5, N-LEADERS: 5, WALKERS-RATE: 0.2, COHESION?: OFF, ELITISM?: OFF)

  • Single-leader Swarm Search: A swarm of agents with a unique LEADER, FOLLOWERs and WALKERs (SWARM-SIZE: 5, N-LEADERS: 1, WALKERS-RATE: 0.2, COHESION?: OFF, ELITISM?: OFF).

  • Multi-leader Swarm Search: A swarm of agents with multiple LEADERs, FOLLOWERs and WALKERs (SWARM-SIZE: 20, N-LEADERS: 4, WALKERS-RATE: 0.2, COHESION?: OFF, ELITISM?: OFF)

  • FLW swarm search: A swarm of agents with multiple LEADERs, FOLLOWERs and WALKERs, plus cohesion tracking and elitism (SWARM-SIZE: 20, N-LEADERS: 4, WALKERS-RATE: 0.2, COHESION?: ON, ELITISM?: ON)

Can you identify which configuration (hence which search strategy/operators) improves the optimisation effectiveness/efficiency of the FLW algorithm? Is there a minimum follower base size for the multi-leader setup to work properly?
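The efficiency bookkeeping for these experiments is straightforward (plain Python; the helper name is ours, and note that multiplying the rate by SWARM-SIZE would instead give the fraction of actual cost-function evaluations, since every tick evaluates the whole swarm):

```python
def search_efficiency(optimum_tick, grid_size):
    """OPTIMUM-TICK / (N*N): fraction of the exhaustive-search budget
    spent before the optimum was discovered."""
    n = int(grid_size.split("x")[0])   # e.g. "100x100" -> 100
    return optimum_tick / (n * n)
```

For instance, discovering the optimum at tick 500 on a 100x100 grid gives an efficiency rate of 0.05, i.e. 5% of the exhaustive-search budget.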

EXTENDING THE MODEL

Some possible paths for model extensions are:

  • Expand the landscape repertoire with additional problem cost functions.

  • Extend the model to solve problems in higher dimensions (not necessarily using a 2D grid view).

  • Extend the model to discrete (binary) problem domains.

  • Extend the model to solve non-stationary problems, that is, problems where the landscape can vary with time during the course of a run. In this sense, we believe that the mechanisms to prevent premature convergence implemented in the FLW algorithm can be useful to also adapt to this type of dynamic changes in the landscape.

RELATED MODELS

Modeling Commons -> Urban Pigeon-inspired Model for Unconstraint Optimisation, see [1].

(http://modelingcommons.org/browse/one_model/6390)

NetLogo Models Library -> Particle Swarm Optimization (PSO), see [2].

(https://ccl.northwestern.edu/netlogo/models/ParticleSwarmOptimization)

CREDITS AND REFERENCES

Authors:

Sergio Rojas-Galeano

Copyright (c) April 2022

email: srojas@udistrital.edu.co

Version 1.5

Licenses: GNU General Public License v3 (GPLv3); see the license notice in the model code.

References:

[1] Garzon, M., and Rojas-Galeano, S. (2019, November). An Agent-Based Model of Urban Pigeon Swarm Optimisation. In 2019 IEEE Latin American Conference on Computational Intelligence (LA-CCI) (pp. 1-6). IEEE. doi: 10.1109/LA-CCI47412.2019.9036758. https://ieeexplore.ieee.org/document/9036758

[2] Stonedahl, F. and Wilensky, U. (2008). NetLogo Particle Swarm Optimization model. http://ccl.northwestern.edu/netlogo/models/ParticleSwarmOptimization. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.


;; --------------------------------------------------------------------------
;; A multi-leader swarm model for bound-constrained numerical optimization
;; based on follow-the-leader and random walk (FLW) operators.
;;
;; This agent-based model aims to find the patch with the global minimum cost
;; value within the search space (landscape) of the chosen problem.
;;
;; A model by Sergio Rojas-Galeano
;; v1.5, April 2022 Copyright (c) The author
;; Correspondence email: srojas@udistrital.edu.co
;; Universidad Distrital Francisco Jose de Caldas, Bogota, Colombia
;;
;; This program is free software: you can redistribute it and/or modify
;; it under the terms of the GNU General Public License (GPLv3)
;; (see license at: https://www.gnu.org/licenses/gpl-3.0.txt)
;;
;; The model is made publicly available in the hope that it will be useful
;; to modelers, but WITHOUT ANY WARRANTY whatsoever (see license for details).
;; --------------------------------------------------------------------------

globals [
  ;; FLW algorithm globals
  top-leader          ; best agent in current iteration
  top-leader-fitness  ; fitness value of current top leader
  flw-best-patch      ; best solution (patch) discovered by FLW
  flw-runtime         ; runtime (ms) until optimum discovery or max. ticks
  flw-optimum-tick    ; tick where optimum was discovered (if found)
  avg-cohesion        ; average cohesion of followers to their leaders

  ;; Problem globals
  true-best-patch     ; ground truth optimum patch for a given landscape
  agent-size          ; agent size depending on landscape's grid size
]

patches-own [
  x                   ; simulated pxcor, depending on bound constraints
  y                   ; simulated pycor, depending on bound constraints
  value               ; value of cost function at these simulated coordinates
]

turtles-own [
  my-leader           ; leader that this agent is following
]

;; FLW breeds
breed [leaders leader]
breed [followers follower]
breed [walkers walker]

;; Initialise problem landscape and swarm of agents

to setup

  ;; compute problem landscape (cost function) and set initial best patch randomly
  setup-search-landscape
  set flw-best-patch one-of patches
  reset-ticks

  ;; create walker agents
  create-walkers swarm-size * walkers-rate [
    set color gray set size agent-size                    ; assign walker color & size
    move-to one-of patches                                ; assign initial random location
  ]

  ;; create leader agents and their followers
  let colors shuffle [ 15 25 65 75 95 115 125 135 ]        ; colors to distinguish leaders
  create-leaders n-leaders [
    set color first colors set colors remove color colors ; assign random color
    set shape "bird" set size agent-size                  ; assign shape and size
    set my-leader self
    move-to one-of patches                                ; assign initial random location

    ;; create an equal number of followers for this leader and spread them around
    hatch-followers (swarm-size - count walkers - n-leaders) / n-leaders [
      rt random-normal 0 360 fd random world-width
    ]
  ]

  ;; initialise FLW globals
  update-best
end 

;; Execute one iteration of FLW algorithm

to go
  reset-timer

  ;; apply search operators
  do-elitism
  follow-leaders
  random-walks
  track-clashes
  track-cohesion
  track-leadership


  ;; update FLW globals
  update-best
  update-runtime
  show-spotlight
  tick

  ;; termination condition: max ticks, or optimum discovered
  if (ticks > max-ticks) or ((flw-optimum-tick > 0) ) [ stop ]
end 

;; Exploitation operator: follow-the-leader

to follow-leaders
  ask leaders [ rt random-normal 0 30 fd random-float 1 ]
  ask followers [ follow-move ]
end 

;; Move a follower towards its leader

to follow-move
  face my-leader fd (distance my-leader) * (random-float 2) ; move towards leader...
  rt random-normal 0 60 fd random-float 2  ; ... with a small local perturbation
end 

;; Exploration operator: random walks

to random-walks
  ask walkers [
    walk-move
    if value < top-leader-fitness [        ; if walker is better, drag top-leader
      ask top-leader [ move-to myself ]
    ]
  ]
end 

;; Move a walker around

to walk-move
  rt random-normal 0 30 jump (random-float 20)
end 

;; Move walker to best solution so far

to do-elitism
  ask walker 0 [
    if elitism? [ move-to flw-best-patch walk-move ]
    set shape ifelse-value elitism? [ "star" ][ "default" ]
  ]
end 

;; Spread away clashing leaders and their followers

to track-clashes
  ask leaders [
    ask other leaders with [ distance myself < (world-width - 1) / 25 ] [
      move-to one-of patches
      let me my-leader
      ask followers with [ my-leader = me ] [ follow-move ]
    ]
  ]
end 

;; Verify early convergence to local minima (low cohesion)

to track-cohesion
  if cohesion? [
    ;; compute average cohesion measure
    let total 0
    ask leaders [
      set total total + sum [distance myself] of followers with [ my-leader = myself ]
    ]
    set avg-cohesion total / swarm-size
  ]

  ;; spread out if early convergence to local minima
  if (cohesion? and avg-cohesion < .75) or (not cohesion? and ticks mod 30 = 0)[
    ask turtles [ rt random 360 fd random 200 ]  ; radial dispersion
  ]
end 

;; Allow best followers to claim leadership

to track-leadership
  ask leaders [
    ;; swap places with best follower (including itself)
    let my-old-patch patch-here
    let my-best-follower min-one-of turtles with [ my-leader = myself ] [value]
    move-to my-best-follower
    ask my-best-follower [ move-to my-old-patch ]
  ]
end 

;; Update best solution discovered so far

to update-best
  ;; best solution so far would be top-leader
  ask min-one-of leaders [value] [
    set top-leader self
    set top-leader-fitness value
    if top-leader-fitness < [value] of flw-best-patch [
        set flw-best-patch patch-here

        ;; check if optimum found (termination condition)
        if flw-best-patch = true-best-patch [ set flw-optimum-tick ticks ]
    ]
  ]
end 

;; Update current algorithm runtime

to update-runtime
  set flw-runtime flw-runtime + timer
end 

;; Reset simulation (without setting up world and landscape from scratch)

to reset
  ask turtles [ move-to one-of patches ]
  set flw-best-patch one-of patches
  set top-leader one-of leaders
  set flw-optimum-tick 0
  set flw-runtime 0
  set avg-cohesion 0
  update-best
  show-spotlight
  clear-all-plots
  reset-ticks
end 

;; Create the fitness landscape depending on optimisation problem

to setup-search-landscape
  clear-all

  ;; set view (world) size and cell (patch) size
  (ifelse
    grid-size = "100x100" [ resize-world -50 50 -50 50  set agent-size 4 ]
    grid-size = "200x200" [ resize-world -100 100 -100 100 set agent-size 8 ]
    grid-size = "400x400" [ resize-world -200 200 -200 200 set agent-size 16 ]
    grid-size = "800x800" [ resize-world -400 400 -400 400 set agent-size 18 ]
  )
  set-patch-size  ifelse-value grid-size = "800x800" [ 1 ] [ 400 / (world-width - 1) ]

  ;; make a landscape with hills and valleys according to chosen cost function
  ask patches [
    set x pxcor * (xy-bounds / max-pxcor)
    set y pycor * (xy-bounds / max-pycor)

    set value (ifelse-value
      landscape = "Sphere" [
        x ^ 2 + y ^ 2
      ]
    	landscape = "Sphere-offset" [
        (x - 50 * (xy-bounds / max-pxcor) ) ^ 2  + (y + 50 * (xy-bounds / max-pxcor) ) ^ 2
      ]
      landscape = "Rastrigin" [ ; note that degrees, not radians, are needed for cos function
        20 + ((x ^ 2) -  10 * cos ( (180 / pi) * (2 * pi) * x )) + ((y ^ 2) -  10 * cos ( (180 / pi) * (2 * pi) * y ))
    	]
      landscape = "Rosenbrock" [
        100 * (y - (x ^ 2))^ 2 + (1 - x)^ 2
      ]
      landscape = "Himmelblau" [
        ((x ^ 2) + y - 11) ^ 2 + (x + (y ^ 2) - 7)^ 2
      ]
      landscape = "Eggholder" [ ; note that degrees, not radians, are needed for sin function
        ( (- x) * sin ( (180 / pi) * sqrt (abs (x - (y + 47))))) - (y + 47) * sin ( (180 / pi) * sqrt (abs ((x / 2) + (y + 47))))
      ]
      [ random-normal 0 500  ] ; the last case is a random landscape
;      [ random-float 500  ] ; the last case is a random landscape
    )
  ]

  if landscape = "Random" [
    ask min-n-of 4 patches [value] [ ask patches in-radius 30 [ set value value - random-float 300 ] ]
    repeat 20 [ diffuse value 1 ]
  ]

  ;; find the true best value
  ask min-one-of patches [value][ set true-best-patch self ]

	;; scale patches color within values limits
  let min-val min [value] of patches
  let max-val max [value] of patches
	ask patches [ set pcolor scale-color yellow value min-val log abs max-val 1.05 ]
end 

;; Turn on the spotlight on the chosen agent

to show-spotlight
  ifelse spotlight = "Best patch found"
  [ watch flw-best-patch]
  [ ifelse spotlight = "Ground truth patch"
    [  watch true-best-patch ]
    [ ifelse spotlight = "Top leader now"
      [  watch top-leader ]
      [  reset-perspective ]
    ]
  ]
end 

There are 80 versions of this model.

Uploaded by When Description Download
Sergio Rojas-Galeano about 2 years ago Updated Info tab, minor errors Download this version
Sergio Rojas-Galeano about 2 years ago Initial upload Download this version

Attached files

File Type Description Last updated
FLW_UserGuide.pdf pdf User guide about 2 years ago, by Sergio Rojas-Galeano Download
Follow the Leader + Random Walk (FLW) Swarm Algorithm.png preview Cropped preview about 2 years ago, by Sergio Rojas-Galeano Download
