Date on Master's Thesis/Doctoral Dissertation

5-2018

Document Type

Doctoral Dissertation

Degree Name

Ph.D.

Department

Industrial Engineering

Degree Program

Industrial Engineering, PhD

Committee Chair

Depuy, Gail

Committee Co-Chair (if applicable)

Alexander, Suraj

Committee Member

Usher, John

Committee Member

Yampolskiy, Roman

Author's Keywords

particle swarm; self-adaptive; evolutionary algorithm; convergence analysis; escape-local-minima strategy

Abstract

The performance and stability of the Particle Swarm Optimization (PSO) algorithm depend on parameters that are typically tuned manually or adapted based on knowledge from empirical parameter studies. Such parameter selection is ineffectual when faced with a broad range of problem types, which often hinders the adoption of PSO for real-world problems. This dissertation develops a dynamic self-optimization approach for the respective parameters (inertia weight, social coefficient, and cognitive coefficient). The effect of self-adaptation on the optimal balance between superior performance (convergence) and robustness (divergence) of the algorithm is investigated on both simple and complex benchmark functions. This work creates a parameter-less swarm variant, meaning it is virtually independent of the underlying problem type being examined. Because PSO variants are always at risk of becoming stuck in local optima, the MSAPSO algorithm, as a second main contribution, embeds a highly flexible escape-local-minima strategy that works independently of the problem dimension. With these two major algorithmic elements (the parameter-less approach and the dimension-independent escape-local-minima strategy), MSAPSO outperforms other PSO variants as well as other swarm-inspired approaches such as the Memetic Firefly algorithm. The average performance increase in two dimensions is at least fifteen percent relative to the compared swarm variants. In higher dimensions (≥ 250) the performance gain accumulates to about fifty percent on average. At the same time, the error-proneness of MSAPSO is on average similar to, or even significantly better than, that of the compared variants when converging to the respective global optima.
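
For readers unfamiliar with the baseline algorithm, the following is a minimal generic PSO sketch in Python (not the dissertation's MSAPSO). The parameters w, c1, and c2 correspond to the inertia weight and the cognitive and social coefficients named above, which MSAPSO adapts dynamically; the self-adaptation rule and the escape-local-minima strategy are specific to the dissertation and are not reproduced here.

    import numpy as np

    def pso(objective, bounds, n_particles=30, n_iters=200,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Generic PSO with fixed parameters; MSAPSO instead adapts
        w, c1, c2 dynamically (adaptation rule not shown)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, size=(n_particles, lo.shape[0]))  # positions
        v = np.zeros_like(x)                                      # velocities
        pbest = x.copy()                                          # personal bests
        pbest_val = np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pbest_val)]                           # global best
        for _ in range(n_iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            # Standard velocity update: inertia + cognitive + social terms.
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            vals = np.apply_along_axis(objective, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)]
        return g, pbest_val.min()

    # Example: minimize the sphere benchmark in 5 dimensions.
    best_x, best_f = pso(lambda z: np.sum(z**2),
                         (np.full(5, -5.0), np.full(5, 5.0)))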
