Optimization algorithms

Below you can find a number of optimization algorithms implemented in MATLAB. All implementations are inspired by material found on the internet. I have to admit that I am not an expert in the field of optimization: I am an engineer, and I apply these algorithms to the problems I face. I am sure the implementations can be improved, and I would be happy to learn from your suggestions, so please contact me if you have any.

Get more information on these and many other (global) optimization algorithms in these books/links...

Simplex-Simulated annealing (SIMPSA)

Optimization algorithms based on simulated annealing employ a stochastic generation of solution vectors and exploit similarities between the physical process of annealing (i.e. melting a solid by heating it, followed by slow cooling and crystallization into a minimum free energy state) and a minimization problem. During the cooling process, transitions from a low to a high energy level are accepted with a probability given by the Boltzmann distribution. For an optimization problem, this corresponds to accepting occasional "wrong-way" (uphill) movements, which allows the search to escape local minima. This MATLAB implementation contains an algorithm that was described in Cardoso et al. (1996) and is based on the combination of the non-linear simplex and simulated annealing algorithms (the SIMPSA algorithm).
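To make the Boltzmann acceptance rule concrete, here is a minimal Python sketch of plain simulated annealing. This is not the SIMPSA code offered for download (which combines annealing with the simplex method); the test function, parameter names and all parameter values are illustrative assumptions.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, t_min=1e-3,
                        cooling=0.95, iters_per_temp=50, seed=0):
    """Minimise a 1-D function f by simulated annealing (illustrative sketch)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = x + rng.uniform(-step, step)  # stochastic candidate move
            fc = f(cand)
            # Downhill moves are always accepted; uphill ("wrong-way") moves
            # are accepted with Boltzmann probability exp(-(fc - fx) / t).
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < best_fx:
                    best_x, best_fx = x, fx
        t *= cooling  # slow cooling schedule
    return best_x, best_fx

# A 1-D function with a local minimum near x = 1.35 and the global
# minimum near x = -1.47; starting on the "wrong" side at x = 2.
f = lambda x: x**4 - 4 * x**2 + x
xmin, fmin = simulated_annealing(f, x0=2.0)
```

At high temperature the walker readily climbs out of the basin it started in; as the temperature drops the search becomes effectively greedy and refines the best point found.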

Download here.

Particle swarm optimization

Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. It is a general-purpose optimization algorithm with a simple underlying idea: a swarm of particles moves around in the search space, and the movement of each particle is influenced by the improvements discovered by the others in the swarm. As the optimization progresses, the swarm tends to converge toward the optimum.
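The idea can be sketched in a few lines. The following is a generic global-best PSO in Python, not the downloadable MATLAB implementation; the inertia weight `w`, the acceleration coefficients `c1` and `c2`, and the other parameter values are common defaults chosen for illustration.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimise f over dim dimensions with a global-best PSO (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's own best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best discovery of the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity blends inertia, attraction to the particle's own
                # best, and attraction to the swarm's best discovery.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimise the sphere function, whose optimum is at the origin.
sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = pso(sphere, dim=3)
```

The two attraction terms are what makes the search "social": every particle is continually pulled both toward its own best experience and toward the best experience of the whole swarm.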

Download here.

Shuffled complex evolution

The shuffled complex evolution optimization algorithm (SCE) begins with an "initial population" of points sampled randomly from the feasible search space. The population is then partitioned into several complexes, each containing a fixed number of points. During the optimization, each complex evolves based on a statistical "reproduction" process that uses the "simplex" geometric shape to direct the search in the correct direction. After a defined number of iterations, the complexes are merged, shuffled, and the points are reassigned to a new set of complexes to ensure information sharing. As the optimization progresses, the entire population tends to converge toward the neighborhood of the global optimum, provided the initial population size and the number of complexes are sufficiently large.
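A much-simplified Python sketch of the partition, evolve and shuffle loop is shown below. It is not the full SCE algorithm and not the downloadable MATLAB code: the real competitive complex evolution step is more elaborate, and the complex evolution here is reduced to a bare simplex-style reflection/contraction of each complex's worst point. All names and parameter values are illustrative assumptions.

```python
import random

def sce_sketch(f, dim, n_complexes=4, pts_per_complex=8, n_shuffles=30,
               evolve_steps=10, bounds=(-5.0, 5.0), seed=0):
    """Minimise f with a simplified shuffled-complex loop (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    n = n_complexes * pts_per_complex
    # Initial population sampled randomly from the feasible space.
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]

    def clip(x):
        return [min(hi, max(lo, v)) for v in x]

    for _ in range(n_shuffles):
        # Sort the population by objective value, then deal the points into
        # complexes in an interleaved fashion (point i goes to complex i mod k),
        # so each complex spans the whole quality range.
        pop.sort(key=f)
        complexes = [pop[k::n_complexes] for k in range(n_complexes)]
        for cx in complexes:
            for _ in range(evolve_steps):
                cx.sort(key=f)
                worst = cx[-1]
                # Centroid of all points in the complex except the worst.
                centroid = [sum(p[d] for p in cx[:-1]) / (len(cx) - 1)
                            for d in range(dim)]
                # Simplex-style reflection of the worst point through the centroid.
                refl = clip([2 * centroid[d] - worst[d] for d in range(dim)])
                if f(refl) < f(worst):
                    cx[-1] = refl
                    continue
                # Otherwise contract toward the centroid.
                cont = [(centroid[d] + worst[d]) / 2 for d in range(dim)]
                if f(cont) < f(worst):
                    cx[-1] = cont
                else:
                    # Last resort: a random point, which keeps the complex diverse.
                    cx[-1] = [rng.uniform(lo, hi) for _ in range(dim)]
        # Merge the evolved complexes back into one population and repeat;
        # the next sort-and-deal reshuffles the points across complexes.
        pop = [p for cx in complexes for p in cx]
    best = min(pop, key=f)
    return best, f(best)

sphere = lambda x: sum(xi * xi for xi in x)
best, best_val = sce_sketch(sphere, dim=2)
```

The key structural idea survives even in this stripped-down version: the complexes search independently between shuffles, and the periodic merge-and-redeal spreads what each complex has learned to all the others.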

Download here.


UPDATE: To illustrate how (well) the algorithms work, I have worked out an example in which the parameters of a dynamic model are estimated from experimental data. A zip file containing the .m files can be found here!

Get even more information on these and many other (global) optimization algorithms in these books...