Nautilus is an MIT-licensed, pure-Python package for Bayesian posterior and evidence estimation. It combines importance sampling with efficient exploration of the parameter space using neural networks. Compared to traditional MCMC and nested sampling codes, it often requires fewer likelihood evaluations and produces substantially larger posterior sample sizes. It is also highly accurate, yielding Bayesian evidence estimates with percent-level precision. I collaborated with Dr. Lange on this project in an attempt to improve the accuracy of the neural network by leveraging TensorFlow. This version of the project contains my TensorFlow code in nautilus/neural.py and showed a roughly 10% increase in accuracy when tested on the loggamma problem with 15+ dimensions.
These were my findings, presented at the University of Michigan UROP symposium.
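As context for the loggamma benchmark mentioned above, a high-dimensional loggamma test likelihood can be sketched as follows. The shape, location, and scale parameters here are illustrative assumptions, not the exact configuration used in the benchmark.

```python
import numpy as np
from scipy.stats import loggamma

# Illustrative 15-dimensional loggamma test problem. The parameters
# (c=1.0, loc=0.5, scale=1/30) are assumptions; the actual benchmark
# configuration may differ.
ndim = 15

def log_likelihood(x):
    # Sum of independent loggamma log-densities, one per dimension,
    # centered at 0.5 inside a unit hypercube.
    return np.sum(loggamma.logpdf(x, c=1.0, loc=0.5, scale=1.0 / 30.0))

print(log_likelihood(np.full(ndim, 0.5)))
```

The heavy skew of the loggamma distribution makes this a common stress test for samplers in many dimensions.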
This example, sampling a 3-dimensional Gaussian, illustrates how to use nautilus.
```python
import corner
import numpy as np
from nautilus import Prior, Sampler
from scipy.stats import multivariate_normal

# Define a flat prior over three parameters, 'a', 'b', and 'c'.
prior = Prior()
for key in 'abc':
    prior.add_parameter(key)

# The likelihood receives a dictionary mapping parameter names to values
# and returns the log-likelihood.
def likelihood(param_dict):
    x = [param_dict[key] for key in 'abc']
    return multivariate_normal.logpdf(x, mean=[0.4, 0.5, 0.6], cov=0.01)

sampler = Sampler(prior, likelihood)
sampler.run(verbose=True)

# Retrieve the weighted posterior samples and plot them.
points, log_w, log_l = sampler.posterior()
corner.corner(points, weights=np.exp(log_w), labels='abc')
```
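Because posterior() returns weighted samples, summary statistics must account for the log-weights rather than treating the points as equally probable. A minimal sketch, using synthetic stand-in arrays in place of a real sampler run:

```python
import numpy as np

# Synthetic stand-ins for the arrays returned by sampler.posterior();
# in practice these come from a finished nautilus run.
rng = np.random.default_rng(0)
points = rng.normal(loc=[0.4, 0.5, 0.6], scale=0.1, size=(1000, 3))
log_w = rng.normal(size=1000)

# Normalize the importance weights, subtracting the maximum log-weight
# first for numerical stability.
w = np.exp(log_w - np.max(log_w))
w /= w.sum()

# Weighted posterior mean and standard deviation for each parameter.
mean = np.average(points, weights=w, axis=0)
std = np.sqrt(np.average((points - mean) ** 2, weights=w, axis=0))
print(mean, std)
```

The same normalized weights are what the corner plot above uses via the weights keyword.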
The most recent stable version of nautilus is listed in the Python Package Index (PyPI) and can be installed via pip:

```shell
pip install nautilus-sampler
```
Nautilus is also available on conda-forge. To install via conda, use the following command:

```shell
conda install -c conda-forge nautilus-sampler
```
You can find the documentation at nautilus-sampler.readthedocs.io.
A paper describing nautilus's underlying methods and performance has been accepted for publication. A draft of the paper is available on arXiv. Please cite the paper if you find nautilus helpful in your research.
Nautilus is licensed under the MIT License. The logo uses an image from the Illustris Collaboration.