
This project is experimental. Feedback and testing are welcome.



OpenNMT-tf is a general-purpose sequence modeling tool built on TensorFlow with production in mind. While neural machine translation is the main target task, it is designed to more generally support:

  • sequence to sequence mapping
  • sequence tagging
  • sequence classification
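To make the three task families concrete, here is a toy illustration (the data is invented for this example, not taken from the toolkit); the tasks differ in the shape of their outputs:

```python
# Toy, hand-made examples illustrating the three task families.
# (Data invented for illustration; not part of OpenNMT-tf.)

# Sequence to sequence: input and output are both token sequences,
# possibly of different lengths.
seq2seq_in, seq2seq_out = ["Hello", "world", "!"], ["Hallo", "Welt", "!"]

# Sequence tagging: exactly one output tag per input token.
tagging_in = ["John", "lives", "in", "Berlin"]
tagging_out = ["B-PER", "O", "O", "B-LOC"]

# Sequence classification: a single label for the whole sequence.
classification_in, classification_out = ["great", "movie"], "positive"

assert len(tagging_in) == len(tagging_out)
print(classification_out)
```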

Key features

OpenNMT-tf focuses on modularity and extensibility using standard TensorFlow modules and practices to support advanced modeling capability:

  • arbitrarily complex encoder architectures
    e.g. mixing RNNs, CNNs, self-attention, etc. in parallel or in sequence.
  • hybrid encoder-decoder models
    e.g. self-attention encoder and RNN decoder or vice versa.
  • multi-source training
    e.g. source text and Moses translation as inputs for machine translation.
  • multiple input formats
    text with support for mixed word/character embeddings, or real vectors serialized in TFRecord files.

and all of the above can be used simultaneously to train novel and complex architectures. See the predefined models to discover how they are defined.

OpenNMT-tf is also compatible with some of the best TensorFlow features.

Requirements

  • tensorflow (1.4)
  • pyyaml


A minimal OpenNMT-tf run consists of 3 elements:

  • a Python file describing the model
  • a YAML file describing the parameters
  • a run type


python -m bin.main <run_type> --model <model_file.py> --config <config_file.yml>

When loading an existing checkpoint, the --model option is optional.

  • For more information about configuration files, see the documentation.
  • For more information about command line options, run python -m bin.main -h.
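For illustration, a dataset configuration in the spirit of the toy example below might look like the following sketch. The key names here are assumptions based on the toy example's file layout, not an authoritative schema; consult the documentation for the actual keys. Note that the example commands in the next section pass two YAML files to --config, combining default parameters with dataset-specific settings.

```yaml
# Illustrative sketch only; key names are assumptions, see the documentation.
model_dir: toy-ende   # where checkpoints and logs would be written

data:
  train_features_file: data/toy-ende/src-train.txt
  train_labels_file: data/toy-ende/tgt-train.txt
  source_words_vocabulary: data/toy-ende/src-vocab.txt
  target_words_vocabulary: data/toy-ende/tgt-vocab.txt

train:
  batch_size: 64
```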


Here is a minimal workflow to get started with OpenNMT-tf. This example uses a toy English-German dataset for machine translation.

1. Build the word vocabularies:

python -m bin.build_vocab --size 50000 --save_vocab data/toy-ende/src-vocab.txt data/toy-ende/src-train.txt
python -m bin.build_vocab --size 50000 --save_vocab data/toy-ende/tgt-vocab.txt data/toy-ende/tgt-train.txt
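Conceptually, vocabulary building amounts to counting token frequencies over the training file and keeping the most frequent entries. A standalone sketch of the idea (not the actual bin.build_vocab implementation) might look like:

```python
from collections import Counter

def build_vocab(lines, size):
    """Return the `size` most frequent whitespace tokens, most frequent first."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return [token for token, _ in counts.most_common(size)]

# Example with an invented two-sentence corpus.
corpus = ["the cat sat", "the dog sat down"]
vocab = build_vocab(corpus, size=3)
print(vocab)  # most frequent tokens first
```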

2. Train with preset parameters:

python -m bin.main train --model config/models/ --config config/opennmt-defaults.yml config/data/toy-ende.yml

3. Translate a test file with the latest checkpoint:

python -m bin.main infer --config config/opennmt-defaults.yml config/data/toy-ende.yml --features_file data/toy-ende/src-test.txt

Note: do not expect any good translation results with this toy example. Consider training on larger parallel datasets instead.

Compatibility with {Lua,Py}Torch implementations

OpenNMT-tf has been designed from scratch, and compatibility with the {Lua,Py}Torch implementations in terms of usage, design, and features is not a priority. Please submit a feature request for any missing feature or behavior that you found useful in those implementations.


The implementation is inspired by the following:

Additional resources