We are organizing a workshop at NeurIPS on optimization algorithms and GPU
acceleration. We are accepting contributions! Details below.
Best regards,
Tobia Marcucci
############
*TLDR: Contribute to a NeurIPS workshop on the use of GPUs and Learning for
Optimization.*
Website: https://www.cvxgrp.org/scaleopt/
Important Dates:
* Submissions Open: July 25, 2025
* Submission Deadline: August 22, 2025
Organizers: Parth Nobel, Fangzhao Zhang, Maximilian Schaller, Alexandre
Amice, Tobia Marcucci, Tetiana Parshakova, Stephen Boyd
*Overview*
Recent advancements in GPU-based large-scale optimization have been
remarkable. Recognizing the revolution in optimizing neural network weights
via large-scale GPU-accelerated algorithms, the optimization community has
been interested in developing general-purpose GPU-accelerated optimizers
for various families of classic optimization problems, including linear
programming, general conic optimization, combinatorial optimization, and
more specific problem families such as flow optimization and optimal
transport. This workshop welcomes submissions on deploying GPUs for
optimization; relevant works range from theoretical contributions, such as
designing novel highly parallel optimization algorithms suited to modern
GPU implementation, to engineering contributions, such as specialized
implementations with custom CUDA kernels that accelerate the numerical
computations.
Beyond deploying GPUs directly on classical problems, current frontier AI
tools, including large language models (LLMs), are being used to solve
optimization problems. For example, researchers have applied LLMs to
linear programming and combinatorial optimization. Various works have used
neural networks to solve mixed-integer problems, linear and quadratic
programs, general combinatorial optimization problems, and more specific
optimization problems such as LASSO and robust PCA. This workshop welcomes
work related to Learning to Optimize (L2O), that is, using neural networks
to learn to solve classic optimization problems, as well as other modern
techniques deployed to solve classic optimization problems.
In this workshop, we aim to provide a platform for interested researchers
to engage with one another on recent breakthroughs and current bottlenecks
in designing large-scale GPU-based optimizers and in combining AI systems
with optimization solvers.
*Keywords*
We welcome submissions in, but not limited to, the following areas:
* Parallel algorithms
* Learning to optimize
* Optimization software package development
* Accelerated optimization methods
* Advancements in GPU-based optimization
* Large-scale distributed optimization
* Robust and adaptive optimization
* Randomized numerical linear algebra for optimization
**********************************************************
*
* Contributions to be spread via DMANET are submitted to
*
* DMANET@zpr.uni-koeln.de
*
* Replies to a message carried on DMANET should NOT be
* addressed to DMANET but to the original sender. The
* original sender, however, is invited to prepare an
* update of the replies received and to communicate it
* via DMANET.
*
* DISCRETE MATHEMATICS AND ALGORITHMS NETWORK (DMANET)
* http://www.zaik.uni-koeln.de/AFS/publications/dmanet/
*
**********************************************************