Networks are evolving into interconnected systems of computing facilities
that manage network operations and provide end-user services. These
facilities vary in size and distance from users: Clouds are large and
distant from end users, Edges are smaller and closer, and Fogs are small
facilities near or within user devices.
The ensemble of computing facilities within a network has recently been
termed the Computing Continuum, or the Edge-to-Cloud Continuum. The
emergence of 5G and Beyond 5G (B5G) technologies has significantly advanced
this continuum, enabling ultra-low latency, high bandwidth, and reliable
connectivity. These advancements enhance the potential for distributed
services, allowing seamless deployment of chains of microservices and
network functions across diverse computing elements within the continuum.
The availability of a number of alternative computing elements, and the
possibility of implementing services in a distributed fashion as chains of
microservices and network functions residing on different computing
elements, raise a number of interesting research questions in which
performance plays a key role. Some of these problems have been
investigated for many years, most notably the server selection problem
(where a system user must decide which computing element is the best
choice for her computation, according to some performance metric of
interest) and the server placement problem (where a network operator must
decide the best locations at which to distribute its computing facilities
over the network). Other problems are new and not yet well investigated.
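To give a concrete flavor of the server selection problem, the minimal
Python sketch below picks the computing element that minimizes an
estimated completion time combining network latency and a simple
queueing-based processing delay; all names, parameters, and numbers are
illustrative assumptions and not part of this call.

# Hypothetical sketch (not part of the call): latency-aware server selection.
# A task goes to the computing element with the smallest estimated completion
# time, i.e., round-trip network delay plus an M/M/1-style processing-delay
# estimate. All values below are illustrative.
from dataclasses import dataclass

@dataclass
class ComputingElement:
    name: str            # e.g., "cloud", "edge", "fog"
    rtt_ms: float        # round-trip latency between user and element
    service_rate: float  # tasks served per second when idle
    load: float          # current utilization, in [0, 1)

def estimated_completion_ms(task_units: float, ce: ComputingElement) -> float:
    # Mean response time of an M/M/1 queue is 1 / (mu * (1 - rho)) per task.
    per_task_s = 1.0 / (ce.service_rate * (1.0 - ce.load))
    return ce.rtt_ms + 1000.0 * task_units * per_task_s

def select_server(task_units: float, candidates: list) -> ComputingElement:
    # Server selection: choose the element with the smallest estimated delay.
    return min(candidates, key=lambda ce: estimated_completion_ms(task_units, ce))

if __name__ == "__main__":
    continuum = [
        ComputingElement("cloud", rtt_ms=60.0, service_rate=200.0, load=0.3),
        ComputingElement("edge",  rtt_ms=10.0, service_rate=50.0,  load=0.5),
        ComputingElement("fog",   rtt_ms=2.0,  service_rate=10.0,  load=0.2),
    ]
    print("Selected:", select_server(1.0, continuum).name)

The server placement problem mentioned above is the operator-side
counterpart: deciding where in the network such computing elements should
be deployed in the first place.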
This special issue solicits unpublished works on performance measurements,
analysis and evaluation of the Edge-to-Cloud Continuum, as well as of the
applications running over the available computing facilities.
This special issue is intended for researchers, engineers, and
practitioners who study and work on Cloud and Edge Systems, as well as
those interested in performance measurement, analysis and modeling in
general. We welcome submissions that study performance and resource
management in the Edge-to-Cloud Continuum or those that present novel
algorithms, techniques, or solutions to improve efficiency and
sustainability, noting that sustainability is central to the operation of
these systems. Both theoretical and experimental approaches are welcome,
and measurements of existing systems are of special interest; in the
review process, particular attention will be paid to rigor and
quantitative analysis.
Topics of interest for this special issue include, but are not limited to,
the following:
Techniques employed:
- Performance modeling, including probabilistic techniques, queueing
models, and simulation
- AI/ML for the optimization of the Edge-to-Cloud Continuum
- Measurements of existing Cloud and Edge system performance
Application domains:
- Robustness, reliability, and availability in the Edge-to-Cloud Continuum
- AI/ML execution over Cloud, Edge and Fog infrastructures
- Integrated inference over Cloud, Edge and Fog
- Edge-based networked games and augmented/extended reality
- Real-time services over Cloud, Edge and Fog infrastructures
- Edge-based video distribution
Specific examples of submissions include, but are not limited to, the
following research themes in the Edge-to-Cloud Continuum:
Resource Management and Scheduling:
- Server selection and placement algorithms
- Performance analysis of microservice orchestration frameworks
- Energy-efficient resource management techniques
- Adaptive scheduling for AI/ML tasks across Edge-to-Cloud
infrastructures
- Optimal offloading strategies
- Memory-computation-network tradeoffs for edge-based resource allocation
Data and Traffic Management:
- Network-aware routing, data compression and deduplication techniques
- Network-aware and similarity caching techniques
- Network-aware AI/ML tasks, e.g., early-exit deep neural networks,
  federated learning, and distributed AI
- Age of Information (AoI) in dynamic Edge-to-Cloud systems
Security and Resilience:
- Performance impact of security protocols in Edge-to-Cloud systems
- Fault-tolerance mechanisms for the Edge-to-Cloud Continuum
Emerging Domains:
- OpenRAN and federated Edge-to-Cloud systems
- Performance of 5G and beyond-5G (B5G) technologies in the
Edge-to-Cloud Continuum
- Performance of edge-based augmented and virtual reality applications
- Edge-based blockchain technologies
- Performance of edge-based applications in vertical domains, e.g.,
  healthcare, Industry 4.0, smart cities
Submissions focusing on real-world measurements, comprehensive
simulations, and/or novel theoretical approaches are particularly
encouraged. Researchers are also invited to explore interdisciplinary
solutions that span network performance, application design,
sustainability, security, and energy efficiency. Note that surveys are
out of the scope of this special issue; only original full-length papers
will be considered.
Manuscript submission information:
Expected timeline:
- Submission deadline: 18 March 2025
- First reviews: 8 August 2025
- First revision due: 15 November 2025
- Second reviews: 15 January 2026
- Publication: 18 March 2026
Manuscript format:
Manuscripts should follow the Performance Evaluation template, as described
in the Guide for Authors:
<https://www.sciencedirect.com/journal/performance-evaluation/publish/guide-for-authors>.
All manuscripts should be submitted via the Editorial Manager submission
site for Performance Evaluation:
<https://www.editorialmanager.com/peva/default.aspx?pg=mainpage.html>.
Guest editors:
Daniel Sadoc Menasché, UFRJ, Brazil
Francesco de Pellegrini, University of Avignon, France
Marco Ajmone Marsan, IMDEA Networks Institute, Spain
Special issue information:
https://www.sciencedirect.com/special-issue/317472/performance-in-the-edge-to-cloud-continuum