The W-Model follows a layered approach, where each layer can either be omitted or introduce a different characteristic feature, such as neutrality via redundancy, ruggedness and deceptiveness, epistasis, and multi-objectivity, in a tunable way. The model problem is defined over bit-string representations, which makes it possible to extract some of its layers and stack them on top of existing problems that use this representation, such as OneMax, Maximum Satisfiability, Set Covering, and the NK landscape. The ruggedness and deceptiveness layer can be stacked on top of any problem with integer-valued objectives.
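To illustrate the stacking idea, the following sketch shows a neutrality-via-redundancy layer placed on top of OneMax: each block of mu bits of the candidate string is collapsed to a single bit by majority vote before the inner objective is evaluated. The class and method names, and the tie-breaking rule for even mu, are illustrative assumptions and not the actual repository API.

```java
// Hypothetical sketch of stacking a W-Model layer on OneMax.
// Names and details are illustrative, not the repository's API.
public class WModelSketch {

  // OneMax as a minimization problem: the objective is the number
  // of 0-bits, so 0 is the optimum (all bits set to 1).
  static int oneMax(boolean[] x) {
    int ones = 0;
    for (boolean b : x) {
      if (b) {
        ones++;
      }
    }
    return x.length - ones;
  }

  // Neutrality via redundancy: collapse each block of mu bits to a
  // single bit by majority vote (ties, for even mu, map to 0 here).
  static boolean[] removeNeutrality(boolean[] x, int mu) {
    boolean[] y = new boolean[x.length / mu];
    for (int i = 0; i < y.length; i++) {
      int ones = 0;
      for (int j = 0; j < mu; j++) {
        if (x[(i * mu) + j]) {
          ones++;
        }
      }
      y[i] = (2 * ones) > mu;
    }
    return y;
  }

  // The stacked problem: strip the neutrality layer, then evaluate
  // the inner problem on the reduced string.
  static int evaluate(boolean[] x, int mu) {
    return oneMax(removeNeutrality(x, mu));
  }
}
```

With mu = 1 the layer is the identity; larger mu values add tunable redundancy without changing the location of the inner optimum.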
Here we provide experimental results obtained with several simple algorithms on the W-Model, along with the implementation of the algorithms and the experiment executor. The most recent version of the code can be found at http://www.github.com/thomasWeise/BBDOB_W_Model, while the programs used in the experiments are attached to this dataset. The implemented algorithms are
These algorithms are applied to a range of different parametric setups of the W-Model for single-objective optimization and fixed-length bit string representations.
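As an example of the kind of simple algorithm applicable in this setting, a minimal (1+1) EA for fixed-length bit strings could be sketched as follows. This is an illustrative sketch, not the repository's implementation; it assumes a nonnegative objective to be minimized, with 0 as the optimum.

```java
import java.util.Random;
import java.util.function.ToIntFunction;

// Hypothetical sketch of a (1+1) EA on fixed-length bit strings,
// for illustration only; not the code attached to this dataset.
public class OnePlusOneEA {

  // Minimize f over bit strings of length n: flip each bit
  // independently with probability 1/n, keep the offspring if it is
  // no worse, and stop after maxFEs function evaluations or when the
  // assumed optimum value 0 is reached.
  static boolean[] run(ToIntFunction<boolean[]> f, int n, int maxFEs,
      long seed) {
    Random rnd = new Random(seed);
    boolean[] x = new boolean[n];
    for (int i = 0; i < n; i++) {
      x[i] = rnd.nextBoolean(); // random initial solution
    }
    int fx = f.applyAsInt(x);
    for (int fe = 1; (fe < maxFEs) && (fx > 0); fe++) {
      boolean[] y = x.clone();
      boolean changed = false;
      for (int i = 0; i < n; i++) {
        if (rnd.nextInt(n) == 0) { // flip with probability 1/n
          y[i] = !y[i];
          changed = true;
        }
      }
      if (!changed) { // enforce at least one flipped bit
        y[rnd.nextInt(n)] ^= true;
      }
      int fy = f.applyAsInt(y);
      if (fy <= fx) { // accept if no worse
        x = y;
        fx = fy;
      }
    }
    return x;
  }
}
```

The same loop can be pointed at any of the W-Model setups by swapping in the corresponding objective function.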
The experiments were executed on an HP Z640 workstation with 32 GB of DDR4-2400 RAM and an Intel Xeon E5-2609 v4 CPU under Ubuntu Server Linux 16.04 (kernel 4.4.0-116-generic) with Java 1.8.0_151. Each run was granted at most 1048576 (= 2^20) function evaluations (FEs). The total amount of data collected is about 45 GB, which tar.xz compression reduces to about 1.2 GB. The text files containing the algorithm traces are largely self-describing.
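For readers unfamiliar with the tar.xz format used for the attached archives, the round trip below shows how such an archive is created and unpacked with standard `tar` (the `-J` flag selects xz compression); the file and directory names are hypothetical, not those of the dataset.

```shell
# Hypothetical round trip with tar.xz; names are illustrative.
mkdir -p results
echo "trace data" > results/run01.txt
tar -cJf results.tar.xz results   # create an xz-compressed archive
rm -r results                     # discard the originals
tar -xJf results.tar.xz           # unpack: restores results/run01.txt
cat results/run01.txt
```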
Publications that describe the W-Model: