Probabilistic Inference for Learning Control (PILCO)


A modern & clean implementation of the PILCO Algorithm in TensorFlow.

Unlike PILCO's original implementation, which was written as a self-contained MATLAB package, this repository aims to provide a clean implementation by making heavy use of modern machine learning libraries.

In particular, we use TensorFlow to avoid the need for hardcoded gradients and scale to GPU architectures. Moreover, we use GPflow for Gaussian Process Regression.

The core functionality is tested against the original MATLAB implementation.
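As a rough illustration of the GPflow side, the sketch below fits a GP regression model to toy one-dimensional data. It assumes the GPflow 1.x API (the generation compatible with the pinned TensorFlow 1.13.1); the data and kernel choice are placeholders, not the models PILCO actually builds.

import numpy as np
import gpflow

# Toy one-dimensional regression data standing in for (state, action) -> next-state pairs.
X = np.random.rand(50, 1)
Y = np.sin(6 * X) + 0.1 * np.random.randn(50, 1)

# Exact GP regression with an RBF kernel, using the GPflow 1.x interface.
kernel = gpflow.kernels.RBF(input_dim=1)
model = gpflow.models.GPR(X, Y, kern=kernel)

# Fit the kernel and noise hyperparameters by maximising the marginal likelihood.
gpflow.train.ScipyOptimizer().minimize(model)

# Predictive mean and variance at new inputs.
mean, var = model.predict_y(np.linspace(0, 1, 10)[:, None])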

Installation

  1. Create a virtual environment and activate it:
virtualenv -p python3 venv
source venv/bin/activate
  2. Install the requirements:
pip install -r requirements.txt
python setup.py develop
  3. You might also need to install OpenAI gym:
pip install gym
  4. You might also need to install MuJoCo (click here).
  5. If mujoco and mujoco-py fail to build on macOS, change the macOS SDK to 10.14.sdk.

Example of usage

Before installing or using PILCO, you need to have TensorFlow 1.13.1 installed (either the GPU or the CPU version). It is recommended to install everything in a fresh conda environment with Python >= 3.7. With TensorFlow in place, PILCO can be installed as follows:

git clone https://github.com/nrontsis/PILCO && cd PILCO
python setup.py develop

The examples included in this repo use OpenAI gym 0.15.3 and mujoco-py 2.0.2.7. Once these dependencies are installed, you can run one of the examples as follows:

python examples/inverted_pendulum.py
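
For orientation, the snippet below shows the kind of gym/mujoco-py interaction such an example script drives. The environment name is an assumption used purely for illustration; the actual example scripts set up their own environments and learn a controller rather than acting randomly.

import gym

# Roll out a random policy for a few steps in a MuJoCo-backed gym environment.
# 'InvertedPendulum-v2' is a placeholder choice, not necessarily what the examples use.
env = gym.make('InvertedPendulum-v2')
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()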

While running an example, TensorFlow might print a lot of warnings, many of them about deprecated functionality. If necessary, you can suppress them by running

tf.logging.set_verbosity(tf.logging.ERROR)

right after importing TensorFlow in Python.
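
In context, that means placing the call at the top of a script, for example (assuming TensorFlow 1.x, where the tf.logging module is available):

import tensorflow as tf

# Silence TensorFlow 1.x warning and info messages for the rest of the script.
tf.logging.set_verbosity(tf.logging.ERROR)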

Credits:

The following people have been involved in the development of this package:

References

See the following publications for a description of the algorithm: 1, 2, 3
