======================================================================
Special Track on Integration of Logical Constraints in Deep Learning
----------------------------------------------------------------------
Journal of Artificial Intelligence Research (JAIR)
Deadline: May 31, 2025
Info: https://www.jair.org/index.php/jair/SpecialTrack-LogicDL
======================================================================
Track Editors:
----------------------------------------------------------------------
Alessandro Abate, University of Oxford, U.K.
Eleonora Giunchiglia, Imperial College London, U.K.
Bettina Könighofer, Graz University of Technology, Austria
Luca Pasa, University of Padova, Italy
Matteo Zavatteri, University of Padova, Italy
======================================================================
Overview:
Over the last few years, the integration of logical constraints into Deep
Learning models has gained significant attention from the research community
for its potential to enhance the interpretability, robustness, safety, and
generalization capabilities of these models. This integration opens the
possibility of incorporating prior knowledge, handling incomplete data, and
combining symbolic and subsymbolic reasoning. Moreover, logical constraints
facilitate formal verification and support ethical decision-making. Their
versatility spans diverse domains, presenting both research challenges and
opportunities, and they are increasingly being adopted in deep learning
models for safety-critical applications. Looking ahead, a key challenge is
the development of Machine Learning models that not only incorporate
logical constraints but also provide robust assurances, i.e., guarantees
that AI systems adhere to specific (temporal) logical or ethical
constraints.
Thus, this special track seeks submissions on the integration of logical
constraints into deep learning approaches. We are particularly interested
in the following broad content areas.
- Formal verification of neural networks is an active area of research that
has produced methods, tools, specification languages (e.g., VNNLIB), and
annual competitions (e.g., VNN-COMP) devoted to verifying that a neural
network satisfies a given property, typically expressed in (a fragment of)
first-order logic.
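To give a flavor of such verification procedures, the following is a minimal sketch (not tied to any particular tool from the competitions above) of interval bound propagation through a single affine layer: for any input in the box [lo, hi], the returned interval soundly over-approximates the layer's outputs, so a property such as "the output never exceeds a threshold" can be checked on the bounds. The weights and bounds are illustrative assumptions.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    # Interval bound propagation through an affine layer y = W x + b:
    # split W into its positive and negative parts so that each output
    # bound is computed from the worst-case corner of the input box.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = Wp @ lo + Wn @ hi + b
    out_hi = Wp @ hi + Wn @ lo + b
    return out_lo, out_hi

# Toy layer: y = x1 - x2, inputs ranging over the unit box [0,1]^2.
W = np.array([[1.0, -1.0]])
b = np.array([0.0])
lo, hi = interval_bounds(W, b, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
# Any property of the form "y <= c" with c >= hi is verified for the box.
```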
- Synthesis aims at constructing neural networks that are compliant with a
given constraint. Approaches to this end range from modifying the loss
function during training (i.e., soft constraint injection) to exploiting
counterexample-guided inductive synthesis (CEGIS).
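Soft constraint injection, as mentioned above, can be sketched as follows; the constraint used here (non-negative outputs) and the weighting scheme are illustrative assumptions, not a specific published formulation. The constraint is turned into a differentiable penalty added to the task loss, so training trades off accuracy against violations rather than enforcing the constraint exactly.

```python
import numpy as np

def task_loss(pred, target):
    # Standard task objective: mean squared error.
    return np.mean((pred - target) ** 2)

def constraint_penalty(pred):
    # Illustrative constraint: every output must be non-negative.
    # Violations contribute their squared magnitude, yielding a
    # differentiable "soft" term rather than a hard restriction.
    violation = np.maximum(0.0, -pred)
    return np.mean(violation ** 2)

def soft_constrained_loss(pred, target, lam=1.0):
    # Total loss = task loss + weighted constraint-violation penalty;
    # lam controls how strongly the constraint is enforced.
    return task_loss(pred, target) + lam * constraint_penalty(pred)
```

During training, gradients of this combined loss push the network toward constraint satisfaction, but satisfaction is not guaranteed, which is precisely what distinguishes soft injection from synthesis with hard guarantees.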
- Monitoring: Logical constraints can be used to mitigate and/or neutralize
constraint violations of machine learning systems when formal verification
and synthesis are not feasible. Shielding techniques intervene by overriding
the output of the network when a constraint would otherwise be violated.
Runtime monitoring can be used to anticipate failures of AI systems without
modifying them.
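A shield of the kind described above can be sketched as a wrapper around the network's output; the safety predicate and fallback policy below are illustrative placeholders, not part of any specific shielding framework.

```python
def shield(proposed_action, state, is_safe, fallback):
    # A shield intercepts the action proposed by the network: if it
    # would violate the safety constraint in the current state, it is
    # replaced by a known-safe fallback action; otherwise it passes
    # through unchanged.
    if is_safe(state, proposed_action):
        return proposed_action
    return fallback(state)

# Toy example: the state is a position that must never become negative.
is_safe = lambda state, action: state + action >= 0
fallback = lambda state: 0  # doing nothing is assumed safe here
```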
- Explainability: Automated learning of formulae and logical constraints
from past executions of a system provides natural explanations for neural
network predictions and opens another avenue for future research. Formulae
and constraints offer a high degree of explainability: since they carry a
precise syntax and semantics, they can be "read" by humans more easily than
the outputs of many other explainability methods.
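In its simplest form, learning constraints from past executions can be sketched as mining candidate predicates that hold on every recorded trace; the candidate set below is an assumed toy example, far simpler than the temporal-logic learning this track has in mind.

```python
def mine_invariants(traces, candidates):
    # Keep only the candidate predicates that hold at every step of
    # every recorded execution trace -- a minimal form of
    # specification mining over past system behavior.
    return [name for name, pred in candidates.items()
            if all(pred(step) for trace in traces for step in trace)]

# Toy traces of a scalar observable, and two candidate constraints.
traces = [[1, 2, 3], [0, 5]]
candidates = {
    "always non-negative": lambda x: x >= 0,
    "always positive": lambda x: x > 0,
}
```

Each surviving predicate is a human-readable statement about the system, which is what gives this style of explanation its appeal.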
This special track aims to explore and showcase recent advancements in the
integration of logical constraints within deep learning models, spanning
the spectrum of verification, synthesis, monitoring, and explainability,
and considering exact and approximate solutions as well as online and
offline approaches. The focus will also extend to innovative approaches
that address the challenges of handling logical constraints in neural
networks.
Submissions:
This special track seeks contributions that delve into various aspects of
logical constraint integration in deep learning, including, but not limited
to:
- Learning with logical constraints
- Enhancing neural network expressiveness for logical constraints
- Formal verification (certification) of neural networks
- Automated synthesis of certified neural networks, or of AI systems with
neural nets
- Decision making: Strategy/policy synthesis for AI systems with neural
networks
- Runtime monitoring of AI systems
- Learning of (temporal) logic formulae for explainable and interpretable AI
- Scalability challenges in neural networks with logical constraints
- Real-world applications of neural networks with logical constraints
- Enhancing model explainability via logical constraints
- Design of neural networks under temporal logical requirements
Pertinent review papers of exceptional quality may also be considered.
**********************************************************
*
* Contributions to be spread via DMANET are submitted to
*
* DMANET@zpr.uni-koeln.de
*
* Replies to a message carried on DMANET should NOT be
* addressed to DMANET but to the original sender. The
* original sender, however, is invited to prepare an
* update of the replies received and to communicate it
* via DMANET.
*
* DISCRETE MATHEMATICS AND ALGORITHMS NETWORK (DMANET)
* http://www.zaik.uni-koeln.de/AFS/publications/dmanet/
*
**********************************************************