Stars
Keras implementation of the Information Dropout (arXiv:1611.01353) paper
🤖 Implementation of Self Normalizing Networks (SNN) in PyTorch.
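The core of Self-Normalizing Networks is the SELU activation, whose fixed constants push activations toward zero mean and unit variance. A minimal NumPy sketch (constants from the SNN paper; not code from the repo above):

```python
import numpy as np

# SELU constants from "Self-Normalizing Neural Networks"
# (Klambauer et al., 2017); they are derived, not tuned.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """SELU: scaled ELU with fixed alpha/scale."""
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

print(selu(np.array([-1.0, 0.0, 1.0])))
```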
📓 Notes from The Neural Perspective (discontinued) blog.
Magenta: Music and Art Generation with Machine Intelligence
Hyperparameter-free weight decay implemented in TensorFlow.
TensorFlow implementation of the method from Variational Dropout Sparsifies Deep Neural Networks, Molchanov et al. (2017)
Sparsifying Variational Dropout in TensorFlow
A Python package to manage extremely large amounts of data
Caffe implementation for dynamic network surgery.
Directory of tutorials and open-source code repositories for working with Keras, the Python deep learning library
Deep learning architecture and hyperparameter search with genetic algorithms
Genetic algorithm to optimize Keras Sequential model
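The genetic-search idea behind the two entries above can be sketched in a few lines: keep a population of hyperparameter genomes, select the fittest, and mutate survivors. Everything here (the genome layout and the stand-in fitness function) is illustrative, not the API of either repo:

```python
import random

random.seed(0)

def fitness(genome):
    # Stand-in for validation accuracy: peaks near lr=0.01, units=64.
    lr, units = genome
    return -((lr - 0.01) ** 2 * 1e4 + ((units - 64) / 64) ** 2)

def mutate(genome):
    # Perturb learning rate, step hidden-unit count by 16.
    lr, units = genome
    return (abs(lr + random.gauss(0, 0.005)),
            max(1, units + random.choice([-16, 0, 16])))

def evolve(pop_size=20, generations=30):
    pop = [(random.uniform(1e-4, 0.1), random.choice([16, 32, 64, 128]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the best half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]    # refill with mutants
    return max(pop, key=fitness)

best = evolve()
```

In the real repos, `fitness` would train and evaluate an actual Keras model, which is why these searches are expensive.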
Reference caffe implementation of LSUV initialization
Simple implementation of the LSUV initialization in keras
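LSUV (layer-sequential unit-variance) initialization, as in the two repos above, rescales each layer's weights in turn until its pre-activation variance is close to 1. A hypothetical NumPy sketch for a plain ReLU MLP (not code from either repo):

```python
import numpy as np

rng = np.random.default_rng(0)

def lsuv_init(weights, x, tol=0.01, max_iter=10):
    """Rescale each weight matrix (in place) so that, fed with the
    previous layer's output, its pre-activations have ~unit variance."""
    for W in weights:
        for _ in range(max_iter):
            var = (x @ W).var()
            if abs(var - 1.0) < tol:
                break
            W /= np.sqrt(var)       # shrink/grow toward unit variance
        x = np.maximum(x @ W, 0.0)  # ReLU output feeds the next layer
    return weights

x = rng.standard_normal((256, 32))
weights = [rng.standard_normal((32, 32)) for _ in range(2)]
weights = lsuv_init(weights, x)
```

The paper additionally starts from orthonormal weights; the rescaling loop above is the layer-sequential part.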
Auto-tuning momentum SGD optimizer
Code to reproduce some of the figures in the paper "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima"
Training deep neural networks with low precision multiplications
MIT Deep Learning Book in PDF format (complete and parts) by Ian Goodfellow, Yoshua Bengio and Aaron Courville
TensorFlow implementation of a fully connected Highway Network, based on this paper: https://arxiv.org/pdf/1505.00387.pdf
Some handy utility libraries and tools for the Caffe deep learning framework.
OptNet - Reducing memory usage in torch neural nets
The original code from the DeepMind article + my tweaks
Official implementation for the paper: "Shallow Updates for Deep Reinforcement Learning"