Friday, December 29, 2023

[DMANET] [CFP - Reminder] ESSA Workshop at IEEE IPDPS 2024 - Submissions due January 25, 2024

**[Apologies if you receive multiple copies of this email]**

*********************************************************************

Call for Papers

ESSA 2024: The 5th Workshop on Extreme-Scale Storage and Analysis
(Formerly High Performance Storage (HPS))

Held in conjunction with IEEE IPDPS 2024, San Francisco, CA, USA.

Workshop Date: May 2024

(https://sites.google.com/view/essa-2024)

*********************************************************************

We are organizing the fifth edition of the International Workshop on
Extreme-Scale Storage and Analysis (ESSA), held in conjunction with
IPDPS since 2020 (formerly High Performance Storage (HPS) in 2020 and
2021). Advances in storage are becoming increasingly critical because
workloads on high performance computing (HPC) and cloud systems are
producing and consuming more data than ever before, and this trend
promises only to intensify in the coming years. Additionally, recent
decades have seen relatively few changes in the structure of parallel
file systems (e.g., Lustre, GPFS) and limited interaction between
their evolution and that of I/O support systems that take advantage of
hierarchical storage layers (e.g., node-local burst buffers). Recently,
however, the community has seen a large uptick in innovation in data
storage and processing systems, as well as in I/O support software,
for several reasons:

(1) Technology: The growing availability of persistent solid-state
storage and storage-class memory technologies that can replace either
memory or disk is creating new opportunities for the structure of
storage systems.

(2) Performance requirements: Disk-based parallel file systems cannot
satisfy the performance needs of high-end systems. However, it is not
yet clear how solid-state storage and storage-class memory can best be
used to achieve the needed performance, so new approaches for deploying
these technologies in HPC systems are being designed and evaluated.

(3) Application evolution: Data analysis applications, including graph
analytics and machine learning, are becoming increasingly important for
both scientific and commercial computing. I/O is often a major
bottleneck for such applications in both cloud and HPC environments,
especially when fast turnaround or tight integration of heavy
computation and analysis is required. Consequently, data storage, I/O,
and processing requirements are evolving as complex workflows involving
computation, analytics, and learning emerge.

(4) Infrastructure evolution: In the future, HPC technology will not be
deployed only in dedicated supercomputing centers. "Embedded HPC", "HPC
in the box", "HPC in the loop", "HPC in the cloud", "HPC as a service",
and "near-to-real-time simulation" are concepts requiring new
small-scale deployment environments for HPC. Creating what is called a
"computing continuum" will require a federation of systems and functions
with consistent mechanisms for managing I/O, storage, and data
processing across all participating systems.

(5) Virtualization and disaggregation: As virtualization and
disaggregation become widely used in cloud and HPC environments,
virtualized storage is growing in importance, and effort will be needed
to understand its performance implications.

Our goal in the ESSA Workshop is to bring together expert researchers
and developers in data-related areas such as storage, I/O, processing,
and analytics on extreme-scale infrastructures, including HPC systems,
clouds, edge systems, and hybrid combinations of these, to discuss
advances and possible solutions to the new challenges we face. We
expect the workshop to generate lively interaction over a wide range of
interesting topics.

Topics of interest include, but are not limited to:
 - Extreme-scale storage systems (on high-end HPC infrastructures,
clouds, or hybrid combinations of them)
 - Extreme-scale parallel and distributed storage architectures
 - The synergy between different storage models (POSIX file system,
object storage, key-value, row-oriented, and column-oriented databases)
 - Structures and interfaces for leveraging persistent solid-state
storage and storage-class memory
 - High-performance I/O libraries and services
 - I/O performance in extreme-scale systems and applications
(HPC/clouds/edge)
 - Storage and data processing architectures and systems for hybrid
HPC/cloud/edge infrastructures, in support of complex workflows
potentially combining simulation and analytics
 - Integrating computation into the memory and storage hierarchy to
facilitate in-situ and in-transit data processing
 - I/O characterization and data processing techniques for application
workloads relying on extreme-scale parallel/distributed
machine-learning/deep learning
 - Tools and techniques for managing data movement among compute and
data intensive components
 - Data reduction and compression
 - Failure and recovery of extreme-scale storage systems
 - Benchmarks and performance tools for extreme-scale I/O
 - Language and library support for data-centric computing
 - Storage virtualization and disaggregation
 - Ephemeral storage media and consistency optimizations
 - Storage architectures and systems for scalable stream-based processing
 - Case studies of I/O services and data processing architectures in
support of various application domains (bioinformatics, scientific
simulations, large observatories, experimental facilities, etc.)

================================
Submission Guidelines
================================
The workshop will accept traditional research papers (page limit: 8
pages) treating in-depth topics and short papers (page limit: 5 pages)
presenting work in progress on hot topics. Papers should present
original research and provide sufficient background material to make
them accessible to the broader community. Papers of 5 pages or fewer
are reviewed as short/work-in-progress papers. Short papers must be at
least 4 pages long to be published in the IEEE Digital Library.

Formatting:
Single-spaced double-column pages using a 10-point font on 8.5x11
inch pages (IEEE conference style), including figures, tables, and
references. The submitted manuscripts should include author names and
affiliations. The IEEE conference style templates for MS Word and LaTeX
provided by IEEE eXpress Conference Publishing are available here:
https://www.ieee.org/conferences/publishing/templates.html .
All papers must be in English. We use a single-blind reviewing process,
so please keep the authors' names, publications, etc., in the text. Papers
will be peer-reviewed, and accepted papers will be published in the
workshop proceedings as part of the IEEE Digital Library.

Submission link (login required, opening soon):
https://ssl.linklings.net/conferences/ipdps/?page=Submit&id=ESSAWorkshopFullSubmission&site=ipdps2024

================================
Important Dates
================================
Please note: All deadlines are midnight Anywhere on Earth
- Abstract submission (optional) deadline: January 18, 2024
- Paper submission deadline: January 25, 2024
- Acceptance notification: February 15, 2024
- Camera-ready deadline: February 29, 2024
- Workshop date: May 2024 (day TBA)

================================
Workshop Committees
================================
Workshop Chairs:
- Chair: François Tessier, Inria, France
- Co-Chair: Weikuan Yu, Florida State University, USA

Program Chairs:
- Chair: Sarah Neuwirth, Johannes Gutenberg University Mainz, Germany
- Co-Chair: Arnab K. Paul, BITS Pilani, K K Birla Goa Campus, India

Web Chair:
- Chair: Lenny Guo, Pacific Northwest National Laboratory, Richland, USA

Publicity Chair:
- Chair: Chen Wang, Lawrence Livermore National Laboratory, Livermore, USA

Steering Committee:
- Gabriel Antoniu, Inria Rennes
- Franck Cappello, Argonne National Laboratory
- Tony Cortes, Barcelona Supercomputing Center
- Kathryn Mohror, Lawrence Livermore National Laboratory
- Kento Sato, RIKEN
- Marc Snir, University of Illinois at Urbana-Champaign
- Weikuan Yu, Florida State University

Program Committee:
- Tyler Allen, University of North Carolina at Charlotte, USA
- Oceane Bel, Pacific Northwest National Laboratory (PNNL), USA
- Sajal Dash, Oak Ridge National Laboratory (ORNL), USA
- Matthieu Dorier, Argonne National Laboratory (ANL), USA
- Anna Fuchs, University of Hamburg, Germany
- Hariharan Devarajan, Lawrence Livermore National Laboratory (LLNL), USA
- Adrian Jackson, The University of Edinburgh, UK
- Hideyuki Kawashima, Keio University, Japan
- Radita Liem, RWTH Aachen University, Germany
- Glenn K. Lockwood, Microsoft, USA
- Xiaoyi Lu, University of California Merced, USA
- Osamu Tatebe, University of Tsukuba, Japan
- Luan Teylo, National Institute for Research in Digital Science and
Technology (Inria), France
- Marc-André Vef, Johannes Gutenberg University Mainz, Germany
- Lipeng Wan, Georgia State University, USA
- Dongfang Zhao, University of Washington, USA

--
-----------------------------------------------------------------------
Prof. Dr. Sarah M. Neuwirth
Johannes Gutenberg University Mainz
High Performance Computing and its Applications
Anselm-Franz-von-Bentzel-Weg 12
55099 Mainz | Germany
Phone: +49 6131 39 23643
Email: neuwirth@uni-mainz.de
Website: https://www.hpca-group.de/
-----------------------------------------------------------------------

**********************************************************
*
* Contributions to be spread via DMANET are submitted to
*
* DMANET@zpr.uni-koeln.de
*
* Replies to a message carried on DMANET should NOT be
* addressed to DMANET but to the original sender. The
* original sender, however, is invited to prepare an
* update of the replies received and to communicate it
* via DMANET.
*
* DISCRETE MATHEMATICS AND ALGORITHMS NETWORK (DMANET)
* http://www.zaik.uni-koeln.de/AFS/publications/dmanet/
*
**********************************************************