==============================================================================
CALL FOR PAPERS
AHPC3: The 1st Workshop on Accelerated HPC
in the Cloud-Edge Continuum
Affiliated with the 33rd Euromicro International Conference on Parallel,
Distributed and Network-Based Processing (PDP 2025)
12-14 March 2025, Turin (Italy)
IMPORTANT DATES
Abstracts submission: November 18, 2024 (extended from October 30, 2024; firm)
Papers submission: November 27, 2024 (extended from November 11, 2024; firm)
Authors notification: December 16, 2024
Camera-Ready submission: January 27, 2025
SPECIAL ISSUE
Selected papers will be invited to submit extended versions for a special issue
of the International Journal of Networked and Distributed Computing,
published by Springer.
WEBSITE: http://ahpc3.di.unipi.it/
==============================================================================
Today, we are witnessing an increasing demand for high-performance computing
infrastructures, as modern applications need to process computationally
intensive workloads. Unlike in the past, when only a handful of application
domains relied on HPC infrastructures, they are now requested by a plethora of
domains and applications, mainly owing to the wide availability of large
amounts of data. Traditionally, HPC infrastructures were sharply distinguished
from Cloud infrastructures by their unique software and hardware requirements
and their on-premises nature.
In recent times, however, this distinction has become increasingly blurry,
driven by the proliferation of applications such as Big Data and AI/ML: modern
Cloud infrastructures are getting closer to HPC systems in terms of performance
capabilities and hardware specifications.
This workshop aims to explore the intersection of high-performance computing
and modern Cloud-Edge continuum architectures, focusing on achieving HPC by
relying on Cloud-Edge architectures. The workshop will investigate how
technologies typically exploited in Cloud and Edge environments, including
serverless computing, microservices, and load balancing, must be adapted,
tailored, and managed to achieve efficient and scalable solutions that support
the execution of HPC applications.
Key topics include lightweight virtualisation, dynamic execution environments,
and advanced scheduling technologies, which are crucial to deploying
high-performance workflows in Cloud environments but are not limited to them.
In addition, the workshop will focus on orchestration and deployment techniques.
The workshop aims to attract submissions on innovative programming paradigms
for high-performance Cloud-Edge computing, including network communication,
data management, fault tolerance, reliability, and security strategies. It
seeks insights on managing data-intensive workloads, heterogeneous resource
management tools, and HPC application monitoring in Cloud-Edge environments.
Emphasis will be placed on sustainability and efficient, green practices.
In addition, contributions on FPGA/GPU acceleration architectures for data-flow
processing and on joint resource-sharing mechanisms in hybrid HPC environments
are welcome. Similarly, we aim to explore the latest innovations, challenges,
and applications in the Cloud-Edge continuum and hybrid Cloud HPC, focusing on
accelerated environments such as FPGA and GPU architectures.
Improvement and innovation opportunities like these call for new solutions and
theoretical frameworks. The 1st International Workshop on Accelerated HPC in
the Cloud-Edge Continuum (AHPC3) aims to bring together Cloud, Edge computing,
and HPC experts from academia and industry to identify new challenges, discuss
novel systems, methods, and approaches for hybrid and accelerated HPC
Cloud-Edge infrastructures and architectures, and promote this vision among
academia and industry stakeholders.
==============================================================================
TOPICS OF INTEREST
Topics of interest for the workshop include, but are not limited to, the
following:
- Adaptation of Cloud-Edge technologies and methodologies for HPC (e.g.,
serverless, microservices, task offloading)
- Cloud-Edge computing architectures for HPC (e.g., resource federation)
- Lightweight virtualisation tools, execution environments and scheduling techniques
- Orchestration, deployment techniques and algorithms for high-performance
workflows in Cloud-Edge environments
- Programming paradigms for high-performance Cloud-Edge computing
- Communication and Data management for Cloud-Edge computing
- Fault tolerance, reliability and security in the Cloud-Edge continuum
- Data-intensive workloads and tools
- Methodologies and tools for heterogeneous resource management
- Tools and techniques for monitoring HPC Cloud-Edge applications
- Sustainability for HPC Cloud-Edge computing
- Accelerated FPGA/GPU architectures for Cloud-Edge computing
- Data stream processing with FPGA/GPU in Cloud-Edge computing
- Federated resource-sharing mechanisms for hybrid HPC
==============================================================================
SUBMISSIONS AND ATTENDANCE
Accepted papers will be published in the conference proceedings. Submitted
papers must be original work that has not appeared in, and is not under
consideration for, another conference or journal. Every submitted paper will
be reviewed by at least three members of the Program Committee. Reviewing will
be single-blind. Authors are invited to submit papers of the following types
and lengths, in the IEEE conference proceedings format:
- Regular papers (maximum 8 pages) should present innovative works whose claims
are supported by solid justifications.
- Short papers (maximum 4 pages) are intended for position papers that
articulate a high-level vision or describe challenging future directions.
Please note that registering on the submission site with a title and a
meaningful abstract by the abstract submission deadline is required to enable
the actual paper submission.
The authors must be prepared to sign a copyright transfer statement. At least
one author of each accepted paper must register for the workshop by the early
registration date, to be indicated by the organisers, and *must* present the
paper.
SUBMISSION LINK: http://ahpc3.di.unipi.it/submissions.html
==============================================================================
ORGANISERS
- Luca Ferrucci, University of Pisa, luca.ferrucci@unipi.it, General Chair
- Stefano Forti, University of Pisa, stefano.forti@unipi.it, General Chair
- Valerio Besozzi, University of Pisa, valerio.besozzi@phd.unipi.it, Program Chair
- Alberto Ottimo, University of Pisa, alberto.ottimo@phd.unipi.it, Program Chair
- Jacopo Massa, University of Pisa, jacopo.massa@phd.unipi.it, Program Chair
==============================================================================
PROGRAM COMMITTEE
- Jorn Altmann, Seoul National University
- Hojjat Baghban, Chang Gung University
- Roberto Casadei, University of Bologna
- Emanuele Carlini, ISTI-CNR
- Marcin Copik, ETH Zürich
- Massimo Coppola, ISTI-CNR
- Patrizio Dazzi, University of Pisa
- Maria Fazio, University of Messina
- Carlos Guerrero, University of Balearic Islands
- SongHee Kang, Seoul National University
- Hanna Kavalionak, ISTI-CNR
- Isaac Lera, University of Balearic Islands
- Matteo Mordacchini, IIT-CNR
- Paolo Palazzari, ENEA
- Paul Rourab, Siksha 'O' Anusandhan University
- Jocelyn Sérot, Université Clermont Auvergne
- Konstantinos Tserpes, NTUA
- Paolo Trunfio, University of Calabria