Code for some of the experiments I did with variational autoencoders on multi-modality and atari video prediction. Atari video prediction is work-in-progress.


RuiShu/vae-experiments

vae-experiments

Here is the code for some of the experiments I did with variational autoencoders on multi-modality and Atari video prediction. The code uses Torch7, which can be installed here.

System requirements

  • All experiments were run on a GPU with the following libraries:
    • cuda/7.5
    • cuDNN/v4
    • hdf5
    • nccl

Required Torch7 libraries

  • nn. Building neural networks.
  • nngraph. Building graph-based neural networks.
  • optim. Various gradient-descent parameter update methods.
  • cunn. Provides CUDA support for nn.
  • cudnn. Provides cuDNN support for nn.
  • torch-hdf5. HDF5 interface for Torch.
  • lfs. LuaFileSystem, for file manipulation.
  • penlight. Command-line argument parsing.
  • image. Provides support for reading images.
  • threads. For multi-threaded data loading.
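Assuming a standard Torch7 installation with LuaRocks on the path, the dependencies above can be installed roughly as follows. This is a best-effort sketch, not taken from the repository: the rock names (in particular `luafilesystem` for the `lfs` module) and the torch-hdf5 rockspec filename may differ on your system.

```shell
# Install the Torch7 libraries listed above via LuaRocks.
luarocks install nn
luarocks install nngraph
luarocks install optim
luarocks install cunn
luarocks install cudnn
luarocks install luafilesystem   # provides the lfs module
luarocks install penlight
luarocks install image
luarocks install threads

# torch-hdf5 is installed from its repository rather than the rock server.
# The rockspec filename below is an assumption; check the checkout for the
# actual file.
git clone https://github.com/deepmind/torch-hdf5
cd torch-hdf5 && luarocks make hdf5-0-0.rockspec
```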
