Tuesday, November 12, 2024

[DMANET] [CFP] ESSA Workshop at IEEE IPDPS 2025 - Submissions due January 26, 2025

**[Apologies if you receive multiple copies of this email]**

###############################################################

ESSA 2025: 6th Workshop on Extreme-Scale Storage and Analysis
Held in conjunction with IEEE IPDPS 2025 - June 4th, 2025

https://sites.google.com/view/essa-2025/

###############################################################

==========
OVERVIEW:
==========

Advances in storage are becoming crucial as HPC and cloud systems handle ever-increasing data, a trend expected to accelerate. For decades, parallel file systems like Lustre and GPFS have seen few structural changes, with limited integration of I/O support technologies that leverage hierarchical storage layers, such as node-local burst buffers. Recently, however, there has been a surge in innovations in data storage, processing systems, and I/O support software for several key reasons:

- Technology: The growing availability of persistent solid-state storage and storage-class memory technologies, capable of replacing both memory and disk, is opening up new possibilities for the design of storage systems.

- Performance requirements: Disk-based parallel file systems are no longer sufficient to meet the performance demands of high-end systems. However, the optimal use of solid-state storage and storage-class memory to achieve the required performance remains unclear. As a result, new approaches for integrating these technologies into HPC systems are being actively developed and evaluated.

- Application evolution: Data analysis applications, such as graph analytics and machine learning, are becoming increasingly critical in both scientific and commercial computing. I/O often presents a major bottleneck for these applications, particularly in cloud and HPC environments, where rapid turnaround or the integration of intensive computation and analysis is required. As a result, data storage, I/O, and processing demands are evolving, driven by the emergence of complex workflows that integrate computation, analytics, and learning.

- Infrastructure evolution: In the future, HPC technology will extend beyond dedicated supercomputing centers. Concepts such as "Embedded HPC," "HPC in a Box," "HPC in the Loop," "HPC in the Cloud," "HPC as a Service," and "near-real-time simulation" will drive the need for new, small-scale HPC deployment environments. To enable a seamless computing continuum, a federation of systems and functions will be needed, with unified mechanisms for managing I/O, storage, and data processing across all participating systems.

- Virtualization and disaggregation: As virtualization and disaggregation become more widely adopted in cloud and HPC computing, the importance of virtualized storage is growing. Increased efforts will be required to understand its impact on performance.

Our goal for the ESSA Workshop is to bring together leading researchers and developers in data-related fields (such as storage, I/O, processing, and analytics) working on extreme-scale infrastructures, including HPC systems, clouds, edge systems, and hybrid combinations of these, to discuss advances and potential solutions to the new challenges we face.

Topics of interest include, but are not limited to:

- Extreme-scale storage systems for high-end HPC infrastructures, clouds, or hybrid environments
- Extreme-scale parallel and distributed storage architectures
- Synergies between different storage models, including POSIX file systems, object storage, key-value stores, and row- and column-oriented databases
- Structures and interfaces for leveraging persistent solid-state storage and storage-class memory
- High-performance I/O libraries and services
- I/O performance in extreme-scale systems and applications (HPC, clouds, edge)
- Storage and data processing architectures for hybrid HPC/cloud/edge infrastructures supporting complex workflows that integrate simulation and analytics
- Integrating computation within the memory and storage hierarchy to facilitate in-situ & in-transit data processing
- I/O characterization and data processing techniques for application workloads in extreme-scale parallel and distributed machine learning and deep learning
- Tools and techniques for managing data movement among compute- and data-intensive components
- Data reduction and compression methods
- Failure management and recovery strategies for extreme-scale storage systems
- Benchmarks and performance tools for extreme-scale I/O
- Language and library support for data-centric computing
- Storage virtualization and disaggregation
- Ephemeral storage media and consistency optimizations
- Storage architectures and systems for scalable stream-based processing
- Case studies of I/O services and data processing architectures across various application domains (e.g., scientific simulations, experimental facilities, large observatories, bioinformatics)

============
SUBMISSIONS:
============

The workshop will accept traditional research papers (8 pages) for in-depth topics and short papers (5 pages) for work in progress on hot topics. Papers should present original research and provide sufficient background material to make them accessible to the broader community.

Paper format: single-spaced, double-column text in a 10-point font on 8.5 x 11 inch pages (IEEE conference style); page limits include figures, tables, and references. Submitted manuscripts should include author names and affiliations. The IEEE conference style templates for MS Word and LaTeX provided by IEEE eXpress Conference Publishing are available here: https://www.ieee.org/conferences/publishing/templates.html
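
For LaTeX submissions, a minimal skeleton consistent with this format might look like the sketch below (an illustration only, assuming the standard IEEEtran conference class from the templates linked above; the title, author, and section contents are placeholders):

    % Sketch assuming the standard IEEEtran class; its [conference] option
    % produces double-column, 10-point text on 8.5 x 11 inch pages.
    \documentclass[conference]{IEEEtran}

    \begin{document}

    \title{Placeholder Title}

    % Per the CFP, submissions include author names and affiliations.
    \author{\IEEEauthorblockN{First Author}
    \IEEEauthorblockA{Placeholder Affiliation\\first.author@example.org}}

    \maketitle

    \begin{abstract}
    Placeholder abstract.
    \end{abstract}

    \section{Introduction}
    Placeholder body text. Note that the page limit includes figures,
    tables, and references.

    \end{document}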

Submission site:
https://ssl.linklings.net/conferences/ipdps/

================
IMPORTANT DATES:
================

- Paper submission deadline: January 26th, 2025
- Acceptance notification: February 21st, 2025
- Camera-ready deadline: March 6th, 2025
- Workshop date: June 4th, 2025

=================
WORKSHOP CHAIRS:
=================

Sarah Neuwirth, Johannes Gutenberg University Mainz, Germany - Chair - neuwirth@uni-mainz.de
Francois Tessier, Inria, France - Co-Chair - francois.tessier@inria.fr

================
PROGRAM CHAIRS:
================

Chen Wang, Lawrence Livermore National Laboratory, USA - Chair - wang116@llnl.gov
Lipeng Wan, Georgia State University, USA - Co-Chair - lwan@gsu.edu

======================
WEB & PUBLICITY CHAIR:
======================

Radita Liem, RWTH Aachen University, Germany - Chair - liem@itc.rwth-aachen.de

====================
STEERING COMMITTEE:
====================

Gabriel Antoniu, Inria, Rennes, France
Franck Cappello, Argonne National Laboratory, USA
Toni Cortés, Barcelona Supercomputing Center, Spain
Kathryn Mohror, Lawrence Livermore National Laboratory, USA
Kento Sato, RIKEN, Japan
Marc Snir, University of Illinois at Urbana-Champaign, USA
Weikuan Yu, Florida State University, USA


--------------------------------------------------------------------------------
Prof. Dr. Sarah M. Neuwirth
Co-Director, NHR South-West HPC Center
Research Group Head, High Performance Computing and its Applications

Johannes Gutenberg University Mainz
Anselm-Franz-von-Bentzelweg 12
55099 Mainz | Germany

Office: ZDV, Room 01-339
Phone: +49 6131 39 23643
Email: neuwirth@uni-mainz.de
Website: https://www.hpca-group.de/
--------------------------------------------------------------------------------
