Wednesday, March 25, 2020

[DMANET] [Extended Deadline] GECCO 2020 Workshop: Good Benchmarking Practices for Evolutionary Computation

[Apologies for multiple emails]
Please find below a Call for Contributions for the following GECCO 2020
workshop.

*Note that GECCO 2020 will be an electronic-only conference due to
COVID-19. All accepted papers must be presented in the form of a
pre-recorded talk. More details about this (including how workshop
discussions will take place) will be provided soon.*

Extended Submission Deadline: April 17, 2020

Good Benchmarking Practices for Evolutionary Computation

BENCHMARK @ GECCO

https://sites.google.com/view/benchmarking-network/home/activities/GECCO20

(Please copy and paste the above link in your browser if it does not work)

see also http://iao.hfuu.edu.cn/benchmark-gecco20

PDF Flyer:
http://www.gm.fh-koeln.de/~naujoks/gecco/benchmark_gecco20_cfp.pdf

Extended Submission Deadline: April 17, 2020
to be held as part of the Genetic and Evolutionary Computation Conference
(GECCO 2020) July 8-12, 2020, Cancún, Quintana Roo, Mexico, organized by
ACM SIGEVO https://gecco-2020.sigevo.org

SCOPE AND OBJECTIVES

Benchmarking aims to illuminate the strengths and weaknesses of algorithms
regarding different problem characteristics. To this end, several
benchmarking suites have been designed which target different types of
characteristics.

Gaining insight into the behavior of algorithms on a wide array of problems
has benefits for different stakeholders. It helps engineers new to the
field of optimization find an algorithm suitable for their problem. It also
allows experts in optimization to develop new algorithms and improve
existing ones.

Even though benchmarking is a highly researched topic within the
evolutionary computation community, a number of open questions and
challenges remain to be explored:

(i) most commonly used benchmarks are small and do not cover the space
of meaningful problems,

(ii) benchmarking suites lack the complexity of real-world problems,

(iii) proper statistical analysis techniques that can easily be applied
depending on the nature of the data are lacking or seldom used, and

(iv) user-friendly, openly accessible benchmarking techniques and
software need to be developed and spread.

We wish to enable a culture of sharing to ensure direct access to resources
as well as reproducibility. This helps to avoid common pitfalls in
benchmarking such as overfitting to specific test cases. We aim to
establish new standards for benchmarking in evolutionary computation
research so we can objectively compare novel algorithms and fully
demonstrate where they excel and where they can be improved.

WORKSHOP DESCRIPTION

As the goal of the workshop is to discuss, develop and improve benchmarking
practices in evolutionary computation, we particularly welcome informal
position statements addressing or identifying open challenges in
benchmarking, as well as all other suggestions and contributions for a
discussion. Possible contributions include, but are not limited to:

- lists of open questions/issues in benchmarking,

- examples of good benchmarking, and

- descriptions of common pitfalls in benchmarking and how to avoid them.

For all other information about the workshop, please contact Thomas Weise
at tweise@ustc.edu.cn with CC to p.oliveto@sheffield.ac.uk,
lacava@upenn.edu, boris.naujoks@th-koeln.de, and vanessa@modl.ai.

We also welcome the submission of workshop papers to be published in the
GECCO companion proceedings. The Workshop Call for Papers (CfP) can be
downloaded in PDF format or as plain text file here:
https://sites.google.com/view/benchmarking-network/home/activities/GECCO20

Our goal for the workshop is to collaboratively produce output that
improves the state of the art of benchmarking in evolutionary computation,
not to organize yet another mini-conference!

TOPICS

The topics of interest for this workshop include, but are not limited to:

- the selection of meaningful (real-world) benchmark problems,

- performance measures for comparing algorithm behavior,

- novel statistical approaches for analyzing empirical data,

- landscape analysis,

- data mining approaches for understanding algorithm behavior,

- transfer learning from benchmark experiences to real-world problems, and

- benchmarking tools for executing experiments and analysis of experimental
results.

IMPORTANT DATES

Paper Submission Opening: 27 February 2020

Extended Submission Deadline: 17 April 2020

Decisions Due: 1 May 2020

Camera-Ready Material Due: 8 May 2020

Author Registration Deadline: 27 April 2020 (TBC)

Conference Presentation: 8-9 July 2020

SUBMISSION INSTRUCTIONS

All relevant instructions regarding paper submission are available at
https://gecco-2020.sigevo.org/index.html/tiki-index.php?page=Workshops.


RELATED EVENT

A similar benchmarking best practices workshop will be held at PPSN 2020,
which takes place from September 5-9, 2020, in Leiden, The Netherlands:
https://sites.google.com/view/benchmarking-network/home/activities/PPSN20

Contributions to this workshop are welcome in any format until June 8, 2020.

LIST OF ORGANIZERS (alphabetical order)

 Thomas Bäck, Leiden University, Leiden, The Netherlands
 Carola Doerr, CNRS and Sorbonne University, Paris, France
 Tome Eftimov, Jožef Stefan Institute, Ljubljana, Slovenia
 Pascal Kerschke, University of Münster, Münster, Germany
 William La Cava, University of Pennsylvania, Philadelphia, PA, USA
 Manuel López-Ibáñez, University of Manchester, Manchester, UK
 Boris Naujoks, Technical University of Cologne, Köln (Cologne), Germany
 Pietro S. Oliveto, University of Sheffield, Sheffield, UK
 Patryk Orzechowski, University of Pennsylvania, Philadelphia, PA, USA
 Mike Preuss, LIACS, Leiden University, Leiden, The Netherlands
 Jérémy Rapin, Facebook AI Research, Paris, France
 Ofer M. Shir, Tel-Hai College and Migal Institute, Israel
 Olivier Teytaud, Facebook AI Research, Paris, France
 Heike Trautmann, University of Münster, Münster, Germany
 Ryan J. Urbanowicz, University of Pennsylvania, Philadelphia, PA, USA
 Vanessa Volz, modl.ai, Copenhagen, Denmark
 Markus Wagner, The University of Adelaide, Adelaide, Australia
 Hao Wang, Sorbonne University, Paris, France
 Thomas Weise, Institute of Applied Optimization, Hefei University, Hefei, China
 Borys Wróbel, Adam Mickiewicz University, Poland
 Aleš Zamuda, University of Maribor, Maribor, Slovenia

HOSTING EVENT
The Genetic and Evolutionary Computation Conference (GECCO 2020), July 8-12,
2020, Cancún, Quintana Roo, Mexico: http://gecco-2020.sigevo.org

The Genetic and Evolutionary Computation Conference (GECCO 2020) will
present the latest high-quality results in genetic and evolutionary
computation. Topics include genetic algorithms, genetic programming,
evolution strategies, evolutionary programming, memetic algorithms,
hyper-heuristics, real-world applications, evolutionary machine learning,
evolvable hardware, artificial life, adaptive behavior, ant colony
optimization, swarm intelligence, biological applications, evolutionary
robotics, coevolution, artificial immune systems, and more. The full list
of tracks is available at:
https://gecco-2020.sigevo.org/index.html/Program+Tracks

This workshop is organized as part of the ImAppNIO COST Action 15140.

--
Pietro S. Oliveto
Senior Lecturer,
EPSRC Early Career Fellow,
Department of Computer Science,
The University of Sheffield, Sheffield, UK.
www.dcs.shef.ac.uk/people/P.Oliveto/rig/

*Fully funded PhD studentships available now* in time complexity analysis
of bio-inspired computation. Excellent candidates are invited to send
enquiries to me by email. Applications will be accepted until the posts
are filled. Further details are available at
http://staffwww.dcs.shef.ac.uk/people/P.Oliveto/PhDStudentships.html.
Applicants should apply using the online application form at
http://www.shef.ac.uk/postgraduate/online.
