

lagom


lagom is a light PyTorch infrastructure for quickly prototyping reinforcement learning algorithms. Lagom is a 'magic' word in Swedish: "inte för mycket och inte för lite, enkelhet är bäst" ("not too much and not too little; simplicity is often best"). This is the philosophy on which the library was designed.

Contents of this document

Basics

lagom balances flexibility and usability when developing reinforcement learning (RL) algorithms. The library is built on top of PyTorch and provides modular tools for quickly prototyping RL algorithms. It deliberately avoids going too far in either direction: too low-level an API is time consuming and prone to bugs, while too high-level an API sacrifices the flexibility needed to try out unconventional ideas quickly.

We are continuously making lagom more 'self-contained' so that experiments can be set up and run quickly. It provides base classes for multiprocessing (a master-worker framework) to parallelize work such as experiments and evolution strategies. It also supports hyperparameter search, with configurations defined as either grid search or random search.
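To illustrate the grid-search idea, here is a minimal sketch of expanding a hyperparameter grid into individual configurations. The `make_configs` function and the `grid` dict are illustrative only, not lagom's actual API; they just show what defining configurations as a grid search amounts to.

```python
# Illustrative sketch only -- NOT lagom's real API.
from itertools import product

def make_configs(grid):
    """Expand a dict mapping hyperparameter names to candidate values
    into one flat config dict per combination (Cartesian product)."""
    keys = list(grid)
    return [dict(zip(keys, values))
            for values in product(*(grid[k] for k in keys))]

grid = {'lr': [1e-3, 1e-4], 'batch_size': [32, 64]}
configs = make_configs(grid)
# 2 learning rates x 2 batch sizes -> 4 configurations
```

A random search would instead sample each hyperparameter independently for a fixed number of trials rather than enumerating the full product.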

A common pipeline for using lagom is as follows:

  1. Define your RL agent.
  2. Define your environment.
  3. Define your engine for training and evaluating the agent in the environment.
  4. Define your configurations for hyperparameter search.
  5. Define run(config, seed, device) for your experiment pipeline.
  6. Call run_experiment(run, config, seeds, num_worker) to parallelize your experiments.
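Steps 5 and 6 above can be sketched as follows. The `run` and `run_experiment` definitions here are simplified stand-ins, not lagom's real implementations: the actual library manages worker processes, while this sketch uses a thread pool to stay self-contained.

```python
# Hedged sketch of the run / run_experiment pattern -- NOT lagom's real API.
from multiprocessing.dummy import Pool  # thread-based pool, keeps the sketch simple

def run(config, seed, device):
    # Placeholder experiment body: training and evaluation would go here.
    # We just return a tag identifying the finished job.
    return (config['lr'], seed, device)

def run_experiment(run, config, seeds, num_worker):
    # Master-worker fan-out: one job per seed, executed by a pool of workers.
    jobs = [(config, seed, 'cpu') for seed in seeds]
    with Pool(num_worker) as pool:
        return pool.starmap(run, jobs)

results = run_experiment(run, {'lr': 1e-3}, seeds=[0, 1, 2], num_worker=2)
```

Running each seed as an independent job is what makes it cheap to average results over seeds, a standard practice for reporting RL performance.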

A graphical illustration is coming soon.

Installation

Install dependencies

Run the following command to install all required dependencies:

pip install -r requirements.txt

Note that it is highly recommended to use a Miniconda environment:

conda create -n lagom python=3.7

We also provide bash scripts in the scripts/ directory that automatically set up the conda environment and install the dependencies.

Install lagom

Run the following commands to install lagom from source:

git clone https://github.com/zuoxingdong/lagom.git
cd lagom
pip install -e .

Installing from source lets you flexibly modify and adapt the code as you please, which is very convenient for research, where fast prototyping is often needed.

Getting Started

Detailed tutorials are coming soon. For now, it is recommended to look at the examples/ directory or the source code.

Documentation

The documentation, hosted by ReadTheDocs, is available online at http://lagom.readthedocs.io.

Baselines

We provide a set of reinforcement learning algorithm implementations built with lagom at baselines.

Test

We use pytest for tests. Feel free to run them via:

pytest test -v

What's new

  • 2019-03-04 (v0.0.3)

    • Much easier and cleaner APIs
  • 2018-11-04 (v0.0.2)

    • More high-level API designs
    • More unit tests
  • 2018-09-20 (v0.0.1)

    • Initial release

Reference

This repo is inspired by OpenAI Gym, OpenAI Baselines, and OpenAI Spinning Up.

Please use the following BibTeX entry if you want to cite this repository in your publications:

@misc{lagom,
  author = {Zuo, Xingdong},
  title = {lagom: A light PyTorch infrastructure to quickly prototype reinforcement learning algorithms},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/zuoxingdong/lagom}},
}
