Call For Online Participation
ScaDL 2021: Third IPDPS Workshop on Scalable Deep Learning over Parallel And Distributed Infrastructures
https://2021.scadl.org/home
Welcome to ScaDL 2021 - the Third IPDPS Workshop on Scalable Deep Learning over Parallel and Distributed Infrastructures,
held virtually in conjunction with the 35th IEEE International Parallel & Distributed Processing Symposium. The workshop will
foster collaboration among researchers from the distributed/parallel computing and deep learning communities, who will share
relevant topics and results of current approaches at the intersection of the two fields.
We have strong participation from both academia and industry. Our six keynote speakers include eminent technical leaders from
industry (IBM Research, Google Research, Anyscale), academia (RPI, ETH Zurich, Harvard University), and national labs (BSC).
ScaDL 2021 will be held online (virtually) on May 21 from 7:00 to 14:00 PDT, according to the following program.
Registration is required for participation.
The logistics for attending the conference (URL, etc.) will be emailed to registered participants.
Register for FREE at https://docs.google.com/forms/d/e/1FAIpQLSci3uKc5EXuQ2YcgX1IdfUVONfz66oPA_Oy39o7zwNhPU3IgQ/viewform
All times are in Pacific Daylight Time (PDT)
-------------------------------------------------
7:00 - 7:10 : Welcome from the organizers
7:10 - 7:50 : Dynamic and Intelligent Workflows with eFlows4HPC
Invited Talk by Prof. Rosa Badia, Barcelona Supercomputing Center, Spain
7:50 - 8:15 : A Distributed Multi-GPU System for Large-Scale Node Embedding at Tencent
Wanjing Wei, Yangzihao Wang, Pin Gao, Shijie Sun, Donghai Yu, Tencent Ltd., China
8:15 - 8:55 : The Three Pillars of Large-scale Deep Learning
Invited Talk by Prof. Torsten Hoefler, ETH Zurich, Switzerland
8:55 - 9:20 : Scaling Single-Image Super-Resolution Training on Modern HPC Clusters: Early Experiences
Quentin Anthony, Lang Xu, Hari Subramoni, Dhabaleswar Panda, Ohio State University, USA
-------------------------------------------------
9:20 - 9:40 : Break
-------------------------------------------------
9:40 - 10:20 : AI for Social Impact: Results from Multi-agent Reasoning and Learning in the Real World
Invited talk by Prof. Milind Tambe, Harvard University, USA and Google Research, India
10:20 - 10:45 : Distributed Deep Learning Using Volunteer Computing-Like Paradigm
Medha Atre, Birendra Jha, Ashwini Rao, Eydle Inc., USA
10:45 - 11:25 : Riding the Composable Systems Wave to Improve DNN Distributed Training Performance
Invited talk by Prof. Christopher Carothers, RPI, USA
11:25 - 11:40 : Ex-NNQMD: Extreme-Scale Neural Network Quantum Molecular Dynamics
Pankaj Rajak (Argonne National Laboratory, USA), Thomas Linker, Ken-ichi Nomura, Anikeya Aditya, Kuang Liu
(University of Southern California, USA), Kohei Shimamura, Shogo Fukushima (Kumamoto University, Japan),
Ye Luo (Argonne National Laboratory, USA), Fuyuki Shimojo (Kumamoto University, Japan), Rajiv K. Kalia, Aiichiro
Nakano and Priya Vashishta (University of Southern California, USA).
-------------------------------------------------
11:40 - 12:00 : Break
-------------------------------------------------
12:00 - 12:40 : Innovating across the AI stack for scale and speed
Invited talk by Dr. Rania Khalaf, IBM Research AI, USA
12:40 - 13:20 : Ray as the Unified Compute Substrate for Machine Learning Applications
Invited Talk by Dr. Zhe Zhang, Anyscale Inc., USA
13:20 - 13:35 : Training EfficientNets at Supercomputer Scale: 83% ImageNet Top-1 Accuracy in One Hour
Arissa Wongpanich (UC Berkeley, USA), Hieu Pham (Google Research, USA), James Demmel (UC Berkeley, USA),
Mingxing Tan, Quoc Le, Yang You and Sameer Kumar (Google Research, USA)
13:35 - 13:50 : Performance Analysis of Deep Learning Workloads on a Composable System
Kaoutar El Maghraoui, Lorraine Herger, Chekuri Choudary, Kim Tran, Todd Deshane, David Hanson, IBM, USA
13:50 - 14:00 : Closing Remarks by the organizers
Please visit https://2021.scadl.org/program for the entire program. Please feel free to contact us with any questions.
Sincerely,
Anirban Das and Federica Filippini
Publicity Chairs