PlaNet: A Deep Planning Network for Reinforcement Learning.

Supports symbolic/visual observation spaces. Supports some Gym environments, including classic control (non-MuJoCo) environments, so DeepMind Control Suite/MuJoCo are optional dependencies.
Run with `python main.py`. For best performance with DeepMind Control Suite, try setting the environment variable MUJOCO_GL=egl (see instructions and details here).
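A minimal sketch of that invocation, assuming a POSIX shell (the variable can also be prefixed inline as `MUJOCO_GL=egl python main.py`):

```shell
# Select EGL for headless, hardware-accelerated MuJoCo rendering.
# The variable only needs to be set for the training process.
export MUJOCO_GL=egl
echo "MUJOCO_GL=$MUJOCO_GL"
# python main.py   # launch training with EGL rendering enabled
```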
Results and pretrained models can be found in the releases.
To install all dependencies with Anaconda, run `conda env create -f environment.yml`, then activate the environment with `source activate planet`.