Tuesday, April 30, 2019

[DMANET] CFP: ParLearning 2019 in conjunction with KDD 2019

*****************************************************************************************
* The 8th International Workshop on Parallel and Distributed Computing for
* Large-Scale Machine Learning and Big Data Analytics (ParLearning 2019)
* https://parlearning.github.io
* August 5, 2019
* Anchorage, Alaska, USA
*
* Co-located with
* The 25th ACM SIGKDD International Conference on
* Knowledge Discovery and Data Mining (KDD 2019)
* https://www.kdd.org/kdd2019/
* August 4 - August 8, 2019
* Dena'ina Convention Center and William Egan Convention Center
* Anchorage, Alaska, USA
*****************************************************************************************

Call for Papers

Scaling up machine learning (ML), data mining (DM), and reasoning algorithms
from Artificial Intelligence (AI) to massive datasets is a major technical
challenge in the era of "Big Data". The past ten years have seen the rise
of multi-core and GPU-based computing. In parallel and distributed
computing, frameworks such as OpenMP, OpenCL, and Spark continue to
facilitate scaling up ML/DM/AI algorithms through higher levels of
abstraction. We invite novel work that advances these three fields through
the development of scalable algorithms or computing frameworks. Ideal
submissions describe methods for scaling up X using Y on Z, where potential
choices for X, Y, and Z are listed below.

Scaling up

o Recommender systems
o Optimization algorithms (gradient descent, Newton methods)
o Deep learning
o Distributed algorithms and AI for Blockchain
o Clustering (agglomerative techniques, graph clustering, clustering
heterogeneous data)
o Probabilistic inference (Bayesian networks)
o Graph algorithms, graph mining and knowledge graphs
o Graph neural networks
o Autoencoders and variational autoencoders
o Generative adversarial networks
o Generative models
o Deep reinforcement learning

Using

o Parallel architectures/frameworks (OpenMP, CUDA etc.)
o Distributed systems/frameworks (MPI, Spark, etc.)
o Machine learning frameworks (TensorFlow, PyTorch etc.)

On

o Various infrastructures, such as cloud, commodity clusters, GPUs, and
emerging AI chips.

Workshop Proceedings

Accepted papers will be published by ACM in the conference proceedings and
will also appear in the ACM Digital Library.

Awards

Best Paper Award: The program committee will nominate one paper for the
Best Paper Award. In past years, the award included a cash prize; stay
tuned for details this year.
Travel Awards: Students with accepted papers may apply for a travel award.
Please see the ACM KDD 2019 web page for details.

Important Dates

o Paper submission: May 5, 2019 (Anywhere on Earth)
o Author notification: June 1, 2019
o Camera-ready version: June 8, 2019

Paper Guidelines

All submissions are limited to a total of 6 pages, including all content
and references, must be in PDF format, and must be formatted according to
the current standard ACM Conference Proceedings Template. Additional
information about formatting and style files is available at:
https://www.acm.org/publications/proceedings-template. Papers that do not
meet the formatting requirements will be rejected without review.

All submissions must be uploaded electronically at
https://www.easychair.org/conferences/?conf=parlearning2019.

Special Issue

We plan to publish a special issue of a journal consisting of the best
papers of ParLearning 2019. A special issue of the Elsevier journal Future
Generation Computer Systems, containing selected papers from ParLearning
2017, is forthcoming.

Keynote Speakers

o Professor V.S. Subrahmanian (Dartmouth College, Hanover, NH, USA)
o Dr. Lifeng Nai (Google, Mountain View, CA, USA)

Organizing Committee

o General Chairs: Arindam Pal (TCS Research and Innovation, Kolkata, India)
and Henri Bal (Vrije Universiteit, Amsterdam, Netherlands)
o Program Chairs: Azalia Mirhoseini (Google AI, Mountain View, CA, USA),
Thomas Parnell (IBM Research, Zurich, Switzerland)
o Publicity Chair: Anand Panangadan (California State University,
Fullerton, USA)
o Steering Committee Chairs: Sutanay Choudhury (Pacific Northwest National
Laboratory, Richland, WA, USA) and Yinglong Xia (Huawei Research America,
Santa Clara, CA, USA)

Technical Program Committee

o Vito Giovanni Castellana, PNNL, USA
o Daniel Gerardo Chavarria, PNNL, USA
o Jianting Zhang, City College of New York, USA
o Farinaz Koushanfar, UCSD, USA
o Erich Elsen, Google Brain, USA
o Kazuaki Ishizaki, IBM Research, Tokyo, Japan
o Zhihui Du, Tsinghua University, China
o Anand Eldawy, University of Minnesota, USA
o Carson Leung, University of Manitoba, Canada
o Lingfei Wu, IBM Watson Research Center, USA
o Ananth Kalyanaraman, Washington State University, Pullman, USA
o Animesh Mukherjee, IIT Kharagpur, India
o Arnab Bhattacharya, IIT Kanpur, India
o Dinesh Garg, IBM Research, India
o Francesco Parisi, University of Calabria, Italy
o Himadri Sekhar Paul, TCS Research and Innovation, India
o Kripabandhu Ghosh, IIT Kanpur, India
o Mayank Singh, IIT Gandhinagar, India
o Nirmalya Roy, University of Maryland, Baltimore County, USA
o Partha Basuchowdhuri, Heritage Institute of Technology, Kolkata, India
o Sanjukta Bhowmick, University of North Texas, USA
o Saptarshi Ghosh, IIT Kharagpur, India
o Saurabh Paul, Kohl's, USA
o Sourangshu Bhattacharya, IIT Kharagpur, India
o Tanmoy Chakraborty, IIIT Delhi, India

Past Workshops

The first seven editions of ParLearning were organized in conjunction with
the International Parallel and Distributed Processing Symposium (IPDPS).
Details of the past workshops can be found at
http://parlearning.ecs.fullerton.edu. Starting in 2019, the organizers have
decided to hold the workshop in conjunction with KDD.

Regards,
Arindam Pal, Ph.D.
Research Scientist
TCS Research and Innovation
http://www.cse.iitd.ac.in/~arindamp/

**********************************************************
*
* Contributions to be spread via DMANET are submitted to
*
* DMANET@zpr.uni-koeln.de
*
* Replies to a message carried on DMANET should NOT be
* addressed to DMANET but to the original sender. The
* original sender, however, is invited to prepare an
* update of the replies received and to communicate it
* via DMANET.
*
* DISCRETE MATHEMATICS AND ALGORITHMS NETWORK (DMANET)
* http://www.zaik.uni-koeln.de/AFS/publications/dmanet/
*
**********************************************************