- The paper introduces Semantic Probabilistic Layers (SPLs), which enforce symbolic constraints via probabilistic circuits, guaranteeing that structured-output predictions are consistent by construction.
- SPL is a modular layer that combines probabilistic reasoning with logical constraints to define a normalized distribution over valid label configurations, outperforming existing neuro-symbolic methods.
- Empirical evaluations on hierarchical multi-label classification and Warcraft pathfinding show that SPL achieves higher accuracy with perfect constraint satisfaction while remaining computationally efficient.
Semantic Probabilistic Layers for Neuro-Symbolic Learning
Introduction
The paper introduces Semantic Probabilistic Layers (SPL), a novel approach designed to enable structured-output prediction (SOP) models to maintain consistency with predefined symbolic constraints. SPL is developed to model complex correlations and hard constraints within a structured output space, facilitating end-to-end learning via maximum likelihood. Existing methodologies often fail to consistently and efficiently model such tasks, but SPL combines probabilistic inference with logical reasoning to overcome these challenges.
Neural networks excel in flexibility but struggle to ensure that predictions satisfy the logical constraints inherent in SOP tasks. This is particularly problematic in tasks like hierarchical multi-label classification (HMLC) and pathfinding: exploiting domain constraints can improve accuracy, but softly encouraging them does not guarantee that predictions are valid. SPL bridges this gap by enforcing strict adherence to constraints during both training and prediction.
Methodology
SPL integrates logical constraints into neural network architectures as a modular probabilistic predictive layer. Its design targets six desiderata that current methods rarely satisfy at once: probabilistic semantics, expressiveness, consistency, generality, modularity, and efficiency.
Key Features of SPL:
- Probabilistic Semantics: Outputs a properly normalized distribution over label configurations.
- Expressiveness: Captures complex correlations among output labels.
- Consistency: Guarantees that every prediction satisfies the predefined constraints.
- Generality: Supports constraints expressed as general logical formulas.
- Modularity: Drops into existing neural architectures as a final layer and trains end-to-end.
- Efficiency: Keeps prediction and exact inference computationally tractable.
Together, these properties allow SPL to outperform existing neuro-symbolic methods on challenging SOP tasks; the sketch below illustrates the modularity point.
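As a minimal sketch of that modularity (not the paper's implementation; the class name `BruteForceSPL` and its interface are invented here), the usual per-label sigmoid head can be replaced by a layer that renormalizes a factorized distribution over only the configurations a user-supplied constraint admits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BruteForceSPL(nn.Module):
    """Illustrative stand-in for an SPL head; enumerates all 2^n configs."""
    def __init__(self, in_features, num_labels, constraint):
        super().__init__()
        self.linear = nn.Linear(in_features, num_labels)
        # All 2^n label configurations, shape (2^n, num_labels).
        configs = torch.cartesian_prod(
            *[torch.tensor([0.0, 1.0])] * num_labels)
        self.register_buffer("configs", configs)
        # c(y): 1.0 for configurations the constraint admits, else 0.0.
        valid = torch.tensor(
            [float(bool(constraint(c.tolist()))) for c in configs])
        self.register_buffer("log_valid", valid.log())  # -inf on invalid y

    def forward(self, g):
        logits = self.linear(g)                           # (batch, n)
        # log q(y | g) under a factorized Bernoulli head, per configuration:
        # log p(y_i) = y_i * logit_i - softplus(logit_i).
        ll = (self.configs * logits.unsqueeze(1)
              - F.softplus(logits).unsqueeze(1)).sum(-1)  # (batch, 2^n)
        ll = ll + self.log_valid                          # mask invalid y
        return ll - torch.logsumexp(ll, -1, keepdim=True) # renormalize
```

Trained by minimizing the negative log-probability of the gold configuration, this head assigns exactly zero probability to invalid outputs; only the circuit-based construction in the paper makes the idea scale past a handful of labels.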
SPL Architecture
An SPL replaces a standard predictive layer (e.g., a final sigmoid layer) with a probabilistic circuit that accounts for the constraints. The probability of a label configuration is the product of a probabilistic reasoning module and a consistency-enforcing logic module, normalized by a partition function so that probability mass falls only on valid configurations.
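In symbols (notation adapted for this summary: g is the network's embedding function, q_Θ the conditional probabilistic circuit over labels, and c the constraint circuit), the layer defines

```latex
p_\Theta(\mathbf{y} \mid \mathbf{x})
  = \frac{q_\Theta\!\left(\mathbf{y} \mid g(\mathbf{x})\right)\, c(\mathbf{x}, \mathbf{y})}
         {\sum_{\mathbf{y}'} q_\Theta\!\left(\mathbf{y}' \mid g(\mathbf{x})\right)\, c(\mathbf{x}, \mathbf{y}')}
```

where c(x, y) ∈ {0, 1} indicates whether y satisfies the constraint, so every inconsistent configuration receives probability exactly zero, and the denominator is the partition function Z(x).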
A notable aspect of SPL is the implementation of conditional probabilistic circuits and constraint circuits, leveraging advances in circuit-based representations to achieve expressiveness and consistency simultaneously. By incorporating these circuits, SPL can efficiently model joint distributions of neural network outputs while ensuring adherence to logical constraints.
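To see why circuits make the normalization tractable, consider a toy chain hierarchy y_n ⇒ y_{n-1} ⇒ … ⇒ y_1 under a factorized q with per-label marginals p_i: the only valid configurations are "first k labels on, rest off", so the partition function has n + 1 terms rather than 2^n. The sketch below (a hand-rolled special case written for this summary, not the paper's circuit compilation) computes it in linear time:

```python
def chain_partition_function(p):
    """Z for the chain constraint y_n => ... => y_1 under factorized
    marginals p[i]; valid configs are the n+1 'prefixes of ones'."""
    n = len(p)
    # suffix[k] = probability that labels k..n-1 are all off.
    suffix = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = (1.0 - p[i]) * suffix[i + 1]
    z, prefix_on = 0.0, 1.0  # prefix_on = P(labels 0..k-1 all on)
    for k in range(n + 1):
        z += prefix_on * suffix[k]      # config: first k on, rest off
        if k < n:
            prefix_on *= p[k]
    return z
```

Constraint circuits generalize this effect: for formulas compiled into suitably structured circuits, Z follows from a single bottom-up pass over the circuit rather than an enumeration of configurations.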
The SPL approach was empirically evaluated on several SOP tasks, including HMLC and pathfinding in Warcraft maps. In these tasks, SPL demonstrated notable improvements over state-of-the-art methods by achieving higher accuracy and perfect constraint satisfaction.
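For the pathfinding task, "valid" means the predicted set of grid cells actually forms a connected path between the two corners. A minimal checker (written for this summary, using 4-neighborhood for simplicity; the benchmark's exact convention may differ) makes the notion concrete:

```python
from collections import deque

def is_valid_path(mask):
    """True iff the 1-cells of `mask` connect the top-left and
    bottom-right corners (4-neighborhood)."""
    n, m = len(mask), len(mask[0])
    if not (mask[0][0] and mask[n - 1][m - 1]):
        return False
    seen, frontier = {(0, 0)}, deque([(0, 0)])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < m and mask[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return (n - 1, m - 1) in seen
```

Baselines that predict each cell independently can fail this check even when most cells are correct; SPL's constraint circuit rules such outputs out by construction.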
Figure 1: Neural nets struggle with satisfying validity constraints in complex semantic SOP tasks, such as predicting the lowest-cost path on a Warcraft map. SPL guarantees validity while retaining modularity and efficiency.
Figure 2: Example shortest-path predictions show that SPL reliably produces valid paths whose costs closely match the ground truth. Competing methods can score well on per-pixel (Hamming) accuracy yet still output invalid paths.
Conclusion
SPL advances the field of neuro-symbolic learning by ensuring that SOP predictions adhere to logical constraints, offering a practical way to integrate probabilistic reasoning with symbolic logic in neural architectures. The significant performance improvements on tasks like HMLC and pathfinding underscore its potential for broader application. Future research could extend SPL's capabilities, including support for first-order logical formulas, improved computational efficiency for large language models, and integration into interactive machine learning systems.