NASO 2022 - New Architectures for Search and Optimization
A single-day workshop at IJCAI-ECAI 2022
Messe Wien, Vienna, Austria, TBA (around July 23-25, 2022)
Important Dates
Paper submission: May 13, 2022
Notification: June 3, 2022
Camera ready papers: June 17, 2022
Workshop: TBA (around July 23-25, 2022)
NASO is a one-day workshop at IJCAI-ECAI 2022 (https://ijcai-22.org/) featuring both paper presentations and open discussions on common topics. It brings together researchers from various backgrounds and aims at maximal interaction between participants, rather than a sequence of sharply focused formal talks with little interaction with the audience.
For more information on NASO: https://sites.google.com/view/naso-2022
Aims and Scope
With the multiplication and increased availability of specialized hardware and supercomputers for AI applications, the idea of using dedicated architectures (not only hybrid GPU-enhanced parallel platforms but also systems based on quantum annealing) for hard search and optimization problems has developed rapidly. As fundamental techniques widely used in AI, search and optimization methods (e.g. constraint programming, SAT solving or metaheuristics) share the key concern of making efficient use of the computing power at hand, since greater computing power means the ability to attack more complex combinatorial problems.
After a decade of experiments aimed at efficiently parallelizing different types of methods, the challenge is now to devise efficient techniques and algorithms for exascale computing, that is, for massively parallel computers with hundreds of thousands of cores built as heterogeneous hybrid systems combining multi-core processors and GPUs.
Orthogonal to the deployment of supercomputing hardware, a series of exotic architectures has appeared in the last few years, based on quantum annealing (D-Wave systems) or quantum-inspired annealing (Fujitsu's Digital Annealing Unit, Hitachi's CMOS Annealing Machine, Toshiba's Simulated Bifurcation Machine, Fixstars Amplify Annealer Engine). Their main target is to solve combinatorial optimization problems efficiently with dedicated methods on specialized hardware, and clear advances can be expected in that area over the next decade. Even more exotic are the very recent experiments with optical computers tackling combinatorial problems formulated as Ising models.
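To make the problem encoding concrete, the short Python sketch below (an illustrative example under our own assumptions, not the input format or API of any particular machine) formulates Max-Cut on a tiny graph as a QUBO matrix Q, so that minimizing x^T Q x over binary vectors x yields a maximum cut; the equivalent Ising model is obtained by the change of variables s_i = 2 x_i - 1. In principle the same matrix could be handed to an annealer, quantum or quantum-inspired, in place of the brute-force check used here.

# Minimal illustrative sketch (our own assumptions, not tied to any vendor API):
# Max-Cut on a small graph encoded as a QUBO, i.e. minimize x^T Q x, x in {0,1}^n.
import itertools
import numpy as np

def maxcut_qubo(edges, n):
    # Cut size = sum over edges (x_i + x_j - 2*x_i*x_j); maximizing it is
    # equivalent to minimizing sum over edges (2*x_i*x_j - x_i - x_j).
    Q = np.zeros((n, n))
    for i, j in edges:
        Q[i, j] += 1.0   # off-diagonal entries give the 2*x_i*x_j term
        Q[j, i] += 1.0
        Q[i, i] -= 1.0   # diagonal entries give the -x_i and -x_j terms
        Q[j, j] -= 1.0
    return Q

def brute_force_minimum(Q):
    # Exhaustive search over all 2^n assignments; only viable for tiny instances.
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# 4-cycle graph: the optimal cut separates alternating vertices (cut size 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x, e = brute_force_minimum(maxcut_qubo(edges, 4))
print("partition:", x, "cut size:", int(-e))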
All these new research directions can pave the way for the development of new search algorithms and combinatorial optimization methods, or for the redesign of well-known techniques, boosting this growing area of research through cross-fertilization. This workshop thus aims to be a forum for researchers working on new architectures for search and optimization in diverse fields (AI, HPC, algorithms, hardware, quantum computing) who are willing to share their experience and exchange ideas. We would like to provide a cross-community forum for researchers working on any kind of novel architecture, and therefore solicit papers on the following topics, including reports on work in progress as well as position papers.
Topics include, but are not limited to:
- Parallel and distributed search algorithms for problem solving (search algorithms, constraint solving, SAT solving, SMT, logic programming, planning, etc)
- Parallel metaheuristics (local search, evolutionary algorithms, ant colony optimization, particle swarm optimization, etc)
- Quantum and quantum-inspired annealing for combinatorial problems
- Problem representation in the QUBO and Ising models
- Relations between statistical physics models and combinatorial problems
- Optical computing for combinatorial problems (e.g. based on Ising model)
- Benchmarks and performance comparison between different architectures
Workshop Topics and Paper Submission
We would like to provide a cross-community forum for researchers working on search methods (constraint solving, logic programming, SAT solving, artificial intelligence, etc), on combinatorial optimization methods (metaheuristics, local search, tabu search, evolutionary algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and other types of algorithms), and for users of alternative computing architectures (grids, large PC clusters, massively parallel computers, GPGPUs, edge computing, heterogeneous multicores, quantum annealers, quantum-inspired digital annealers, etc), in order to tackle the challenge of efficiently implementing search and optimization methods on all kinds of exotic hardware.
We thus solicit papers on the above topics, including reports on work in progress, as well as position papers.
Papers must not exceed 10 pages in the IJCAI-ECAI style (https://www.ijcai.org/authors_kit) and should be submitted through EasyChair at https://easychair.org/conferences/?conf=naso2022.
Organizers
- Philippe Codognet, JFLI - CNRS / Sorbonne University / the University of Tokyo, Japan (Chair)
- Salvador Abreu, NOVA LINCS / University of Évora, Portugal
- Daniel Diaz, CRI / University of Paris-1, France
Program Committee
- Salvador Abreu (NOVA LINCS / University of Évora, Portugal)
- Alejandro Arbelaez (Autonomous University of Madrid, Spain)
- Philippe Codognet (JFLI – CNRS / Sorbonne University / University of Tokyo, Japan)
- Daniel Diaz (CRI / University of Paris-1, France)
- Inês Dutra (University of Porto, Portugal)
- Youssef Hamadi (Tempero.tech, Paris, France)
- Jin-Kao Hao (LERIA / University of Angers, France)
- Arnaud Lallouet (Huawei Technologies, Paris, France)
- Inês Lynce (INESC-ID, University of Lisbon, Portugal)
- Danny Múnera (University of Antioquia, Colombia)
- Matthieu Parizy (Fujitsu, Japan)
- Enrico Pontelli (New Mexico State University, Las Cruces, USA)
- Lakhdar Sais (CRIL / Université d'Artois, Lens, France)
- Vijay Saraswat (Goldman Sachs R&D, NY, USA)
- Meinolf Sellman (Shopify, Ottawa, Canada)