Thursday, January 14, 2021

[DMANET] CfP: IEEE TEC, Benchmarking Sampling-Based Optimization Heuristics: Methodology and Software (BENCH)

===================================
CALL FOR PAPERS

The IEEE Transactions on Evolutionary Computation Special Issue on

Benchmarking Sampling-Based Optimization Heuristics: Methodology and Software (BENCH)
http://www-ia.lip6.fr/~doerr/BENCH-TEVC-SI-2021

welcomes submissions of original research articles.

Submission deadline: May 31, 2021
Important: Make sure to select the BENCH special issue when submitting!


_MOTIVATION_
Benchmarking plays an important role in the study of evolutionary computation methods and other optimization algorithms. Among other benefits, benchmarking helps us analyze the strengths and weaknesses of different techniques -- knowledge that can be used to design more efficient optimization approaches. Core to benchmarking is a well-designed experimental setup, which ranges from the selection of algorithms, problem instances, and performance metrics, through efficient experimentation, to a sound evaluation of the benchmark data. To assist researchers and users of evolutionary computation methods, a number of tools addressing these various aspects of benchmarking are available. However, most of them have been developed in isolation, without integration with existing software in mind. This hinders knowledge transfer between research groups and between academic and industrial practitioners of evolutionary computation methods.

The goal of this special issue is to provide an overview of state-of-the-art software packages, methods, and data sets that facilitate sound benchmarking of evolutionary algorithms and other optimization techniques. Such an overview of today's benchmarking landscape will reveal new synergies, helping the community converge towards higher compatibility between tools, better reproducibility and replicability of our research, better use of resources, and, ultimately, higher standards in our benchmarking practices.

_TOPICS_
We welcome submissions on the following topics:

- Generation, selection, and analysis of problems and problem instances;
- benchmark-driven algorithm design, selection, and analysis;
- experimental design;
- benchmark data collections and their organization;
- performance analysis and visualization, including statistical evaluation;
- holistic performance analysis in the context of real-world optimization problems, e.g., algorithm robustness, problem class coverage, implementation complexity;
- other aspects of benchmarking optimization algorithms.

We are particularly interested in submissions that present methods, software, and data collections that are applicable beyond a specific use case and that relate to problem classes of wider impact and interest. Releasing code, software, and data under an open-source license is strongly encouraged, but not formally required. In any case, code, software, and data must be well documented and made available for public download or, in special cases (in particular for real-world problems), upon request. We are also particularly interested in submissions that discuss how the contributed techniques fit into the existing benchmarking landscape; this includes a discussion of interfaces with other existing software and/or the challenges in implementing such interfaces.

No restriction is made on the type of optimization problems being analyzed; contributions on constrained/unconstrained, noisy/non-noisy, static/dynamic, single-/multi-objective, combinatorial/continuous/mixed-integer problems, etc., are equally welcome. Algorithm comparisons are as much in scope as methods for analyzing problem characteristics or any other component of algorithm benchmarking. We also welcome submissions that propose comparisons between sampling-based heuristics and other optimization techniques.

NOT in the scope of this special issue are reports that focus mostly on technical aspects such as descriptions of software architecture or user manuals. Submissions should instead focus on the contribution that the proposed method, data collection, or software makes towards better benchmarking practices in our community.

_SUBMISSION_
Manuscripts should be prepared according to the "Information for Authors" section of the journal, available at https://cis.ieee.org/publications/t-evolutionary-computation/tevc-information-for-authors. Submissions should be made through the journal submission website at https://mc.manuscriptcentral.com/tevc-ieee, selecting the Manuscript Type "BENCH Special Issue Papers" and clearly adding "Benchmarking Special Issue Paper" to the comments to the Editor-in-Chief. Submission of a manuscript implies that it is the authors' original unpublished work and is not being submitted for possible publication elsewhere.

_IMPORTANT DATES_
- Submission opens: January 1, 2021
- Submission deadline: May 31, 2021
- Tentative publication date: Spring 2022

_GUEST EDITORS_
Thomas Bäck, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands
Carola Doerr, CNRS researcher at LIP6, Sorbonne University, Paris, France
Bernhard Sendhoff, Honda Research Institute Japan Co., Ltd. Japan
Thomas Stützle, IRIDIA laboratory, Université libre de Bruxelles (ULB), Belgium

**********************************************************
*
* Contributions to be spread via DMANET are submitted to
*
* DMANET@zpr.uni-koeln.de
*
* Replies to a message carried on DMANET should NOT be
* addressed to DMANET but to the original sender. The
* original sender, however, is invited to prepare an
* update of the replies received and to communicate it
* via DMANET.
*
* DISCRETE MATHEMATICS AND ALGORITHMS NETWORK (DMANET)
* http://www.zaik.uni-koeln.de/AFS/publications/dmanet/
*
**********************************************************