Releases: Shivvrat/NeuPI
v1.0.2: Update version numbers across project files to 1.0.1 and 1.0.2
NeuPI v1.0.2 Release Notes
This is a maintenance release with minor bug fixes and version-number updates across project files.
For the full feature overview of the 1.x series, see the NeuPI v1.0.0 release notes below.
NeuPI v1.0.1
NeuPI v1.0.1 Release Notes
This is a maintenance release with minor bug fixes.
For the full feature overview of the 1.x series, see the NeuPI v1.0.0 release notes below.
NeuPI v1.0.0
NeuPI v1.0.0 Release Notes
We are thrilled to release NeuPI v1.0.0. This version marks a major milestone, transitioning the library from a collection of core evaluators into a full-fledged framework for advanced neural probabilistic inference. This release introduces powerful new inference schemes, sophisticated discretization methods, and significant improvements to the library's core architecture, making it more modular, extensible, and robust.
✨ New Features
Advanced Inference Schemes
ITSELF_Engine: The highlight of this release is the introduction of the Inference Time Self-Supervised Learning Fine-tuning (ITSELF) engine. This advanced inference module performs on-the-fly optimization for each test instance or batch, refining the neural model's parameters to significantly improve solution quality over a standard single-pass approach.
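The core idea of test-time refinement can be sketched in a few lines. This is an illustrative toy, not NeuPI's implementation: the `predict`, `itself_refine`, and `score_fn` names are hypothetical stand-ins, the "network" is a per-variable sigmoid, and the gradient is estimated by finite differences to keep the sketch dependency-free (the real engine backpropagates through the neural model).

```python
import math

def predict(weights, features):
    """Toy 'neural' output: one sigmoid marginal per query variable."""
    return [1.0 / (1.0 + math.exp(-(w * x)))
            for w, x in zip(weights, features)]

def itself_refine(weights, features, score_fn, steps=50, lr=0.5, eps=1e-4):
    """Per-instance refinement: gradient-ascend score_fn on the relaxed
    solution for this one test input, then re-predict. score_fn plays the
    role of a differentiable PGM evaluator."""
    w = list(weights)
    for _ in range(steps):
        base = score_fn(predict(w, features))
        grad = []
        for i in range(len(w)):
            w_probe = list(w)
            w_probe[i] += eps
            # finite-difference estimate of d(score)/d(w_i)
            grad.append((score_fn(predict(w_probe, features)) - base) / eps)
        w = [wi + lr * gi for wi, gi in zip(w, grad)]  # ascent step
    return predict(w, features)
```

The key design point is that the optimization target is the instance's own (self-supervised) score, so no labels are needed at test time.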
Sophisticated Discretization Methods
The discretize module has been expanded with powerful methods that go far beyond simple thresholding:
- KNearestDiscretizer: Implements a high-performance beam search to find the k nearest binary vectors to the network's continuous output. It uses a PGM evaluator as a scoring function to identify high-quality discrete solutions, backed by an optimized Cython helper for performance.
- HighUncertaintyDiscretizer: A new heuristic search method that focuses computational effort where it matters most. It identifies the k query variables with probabilities closest to 0.5 (the highest uncertainty) and performs an exhaustive search over this reduced space to find the best assignment.
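The high-uncertainty search described above can be sketched as follows. This is a minimal illustration of the technique, not the library's API: `high_uncertainty_discretize` and `score_fn` are hypothetical names, and `score_fn` stands in for a PGM evaluator.

```python
from itertools import product

def high_uncertainty_discretize(probs, score_fn, k=2):
    """Threshold the confident variables, then exhaustively search the
    k variables whose probabilities lie closest to 0.5."""
    base = [1 if p >= 0.5 else 0 for p in probs]
    # indices of the k most uncertain variables
    uncertain = sorted(range(len(probs)),
                       key=lambda i: abs(probs[i] - 0.5))[:k]
    best, best_score = base, score_fn(base)
    for bits in product((0, 1), repeat=k):  # 2**k candidate completions
        cand = list(base)
        for i, b in zip(uncertain, bits):
            cand[i] = b
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best
```

The cost is 2^k evaluator calls regardless of the total number of variables, which is what makes the reduced-space search tractable.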
Modular Preprocessing
preprocess Module: Introduced a dedicated module for feature engineering, separating the creation of model inputs from the model architecture itself.
This release lays a robust foundation for future research and development. We look forward to expanding the library's capabilities in upcoming versions.
NeuPI v0.1.0: Neural Solvers for Probabilistic Inference
We are excited to announce the initial release of NeuPI (Neural Probabilistic Inference), a PyTorch-based framework for developing and evaluating neural network-based solvers for complex inference tasks over probabilistic models.
This library is the culmination of research into leveraging deep learning for fast and accurate MPE and MAP inference. It provides a modular, extensible, and high-performance toolkit for researchers and practitioners in the field.
Highlights of this Release
This first version establishes the core architecture and features of the NeuPI library.
- Modular Architecture: The library is organized into distinct, reusable components reflecting the end-to-end workflow:
  - pm: Evaluators for probabilistic models, including Markov Networks (pairwise and higher-order) and Sum-Product Networks.
  - preprocess: Tools for feature engineering, such as the BucketEmbedder for creating rich input representations.
  - models: Flexible neural network architectures, starting with a customizable MLP.
  - training: A powerful SelfSupervisedTrainer to handle training loops.
  - inference: Multiple inference strategies, including a standard SinglePassInferenceEngine and the novel ITSELF_Engine for test-time refinement.
  - discretize: Utilities like ThresholdDiscretizer to convert network outputs into discrete solutions.
- Advanced Inference with ITSELF: This release introduces the Inference Time Self-Supervised Training (ITSELF) engine, an advanced inference scheme that fine-tunes a model on a per-instance basis to significantly improve solution quality.
- High-Performance Parsing: Includes a highly optimized .uai file parser written in Cython, enabling fast loading of Markov Network models.
- Robust and Tested: The entire library is supported by a comprehensive test suite using pytest to ensure correctness and reliability.
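For context, .uai is the plain-text model format used in the UAI inference competitions: a network type, variable cardinalities, and factor scopes in the preamble, followed by the factor tables. A pure-Python sketch of preamble parsing (the library's parser is Cython-optimized; `parse_uai_preamble` is a hypothetical helper shown only to illustrate the format):

```python
def parse_uai_preamble(text):
    """Parse the preamble of a .uai file: network type, per-variable
    cardinalities, and the scope of each factor (tables not parsed)."""
    tokens = text.split()
    net_type = tokens[0]                      # e.g. "MARKOV"
    n_vars = int(tokens[1])
    cards = [int(t) for t in tokens[2:2 + n_vars]]
    pos = 2 + n_vars
    n_factors = int(tokens[pos]); pos += 1
    scopes = []
    for _ in range(n_factors):
        size = int(tokens[pos]); pos += 1     # scope size, then variable ids
        scopes.append([int(t) for t in tokens[pos:pos + size]])
        pos += size
    return net_type, cards, scopes
```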
Looking Ahead
This is just the beginning for NeuPI. Future releases will focus on:
- Expanding support for more probabilistic models, including MADE and other Autoregressive Models.
- Introducing more sophisticated discretization and inference techniques.
- Adding more advanced neural architectures from our research.
We welcome feedback, contributions, and collaborations from the community!