
Generating Words from Embeddings

This is the code for my blog post on Generating Words from Embeddings. It uses a character-level decoder RNN to convert a word embedding (which represents a meaning) into a word by sampling one character at a time.

To get straight into sampling words, run these commands:

conda create -y -n word_generator python=3.6  
source activate word_generator
git clone  
cd WordGenerator  
pip install -r requirements.txt   
python with word=musical sigma=0.2

These commands work only on Linux. On Windows, you have to manually install PyTorch 0.4.1 before running the scripts. Use this command:

pip3 install


Requires:

python 3.6
pytorch 0.4.1

It also needs the following packages:

  • pytorch-nlp - for getting word embeddings
  • sacred - keeping track of the configs of training runs and easily writing scripts
  • visdom - live, dynamic loss plots
  • pytorch-utils - for easily writing training code in PyTorch
  • visdom-observer - an interface between sacred and visdom

Install these using:

pip install -r requirements.txt

All the scripts are sacred experiments, so they can be run as

python <script>.py with <config updates>
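Conceptually, sacred parses each `key=value` token after `with` and merges it into the experiment's default config. The following is a rough, illustrative sketch of that override logic in plain Python, not sacred's actual implementation (which also handles nested keys, named configs, and config files):

```python
import ast

def apply_config_updates(defaults, tokens):
    """Merge sacred-style `key=value` command-line tokens into a config dict.

    Illustrative only: real sacred also supports nested dotted keys and
    config files (e.g. `with trained_model/config.json`).
    """
    config = dict(defaults)
    for token in tokens:
        key, _, raw = token.partition("=")
        try:
            # Interpret numbers, booleans, lists, etc.; fall back to string.
            value = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            value = raw
        config[key] = value
    return config

# `with word=musical sigma=0.2` becomes {'word': 'musical', 'sigma': 0.2}
print(apply_config_updates({"word": "test", "sigma": 0.1},
                           ["word=musical", "sigma=0.2"]))
```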

Word embeddings

First, get the GloVe vectors and preprocess them by running


This will download the GloVe word vectors and pickle them to be used for training and inference.
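The preprocessing amounts to parsing GloVe's plain-text format (one word followed by its vector per line) into a word-to-vector mapping and pickling it. A minimal sketch of that idea, with illustrative file names; the actual script's handling may differ:

```python
import pickle

def parse_glove(lines):
    """Parse GloVe's plain-text format into {word: vector} pairs."""
    embeddings = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = [float(x) for x in parts[1:]]
    return embeddings

# Tiny GloVe-style snippet standing in for the real downloaded file.
sample = ["the 0.1 0.2 0.3", "music 0.4 0.5 0.6"]
vectors = parse_glove(sample)

# Pickle the mapping for later use in training and inference.
with open("glove.pkl", "wb") as f:
    pickle.dump(vectors, f)
```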


If you want to directly sample words from a pretrained network, just go ahead and run

python with word=musical sigma=0.2

You can change word and sigma to sample from different embeddings. The sampling script also exposes other parameters, such as the start characters and the beam size.
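Here, sigma sets the scale of Gaussian noise added to the word's embedding before decoding, so different noise draws produce different words for the same input. A small sketch of that perturbation step (the names are illustrative, not the repo's actual API):

```python
import random

def perturb(embedding, sigma, rng=random):
    """Add isotropic Gaussian noise with standard deviation sigma."""
    return [x + rng.gauss(0.0, sigma) for x in embedding]

embedding = [0.3, -1.2, 0.7]          # stand-in for a GloVe vector
noisy = perturb(embedding, sigma=0.2) # this is what the decoder receives
```

With sigma=0 the original embedding is decoded unchanged; larger values explore the embedding's neighborhood and yield more varied words.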

The script is used to generate words from a trained model. A pretrained set of weights is present in the trained_model/ directory, along with the config used to train it.

Run python print_config to see the different sampling parameters.

Below are examples of generated words. The embedding of the input word, plus Gaussian noise, is passed into the GRU model to generate each word.

Input word     Generated words
musical        melodynamic, melodimentary, songrishment
war            demutualization, armision
intelligence   technicativeness, intelimetry
intensity      miltrality, amphasticity
harmony        symphthism, ordenity, whistlery, hightonial
conceptual     stemanological, mathedrophobic
mathematics    tempologistics, mathdom
research       scienting
befuddled      badmanished, stummied, stumpingly
dogmatic       doctivistic, ordionic, prescribitious, prefactional, pastological
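The decoding loop behind these samples runs character by character: conditioned on the noisy embedding, the GRU emits a distribution over characters, one is sampled and fed back in, and the loop stops at an end-of-word token. A toy sketch of that loop, with a deterministic stand-in for the trained GRU (the real model lives in the repo's scripts):

```python
import random

END = "<eos>"
ALPHABET = list("abcdefghijklmnopqrstuvwxyz") + [END]

def dummy_step(hidden, char):
    """Stand-in for one GRU step: returns (new_hidden, char probabilities).

    The real model conditions on the noisy word embedding via its hidden state.
    """
    hidden = (hidden * 31 + ord(char[0])) % 997
    weights = [((hidden + i) % 7) + 1 for i in range(len(ALPHABET))]
    total = sum(weights)
    return hidden, [w / total for w in weights]

def sample_word(seed_hidden, start_char="a", max_len=12, rng=random):
    """Sample one character at a time until END or max_len is reached."""
    chars, hidden, char = [], seed_hidden, start_char
    for _ in range(max_len):
        hidden, probs = dummy_step(hidden, char)
        char = rng.choices(ALPHABET, weights=probs)[0]
        if char == END:
            break
        chars.append(char)
    return start_char + "".join(chars)

print(sample_word(seed_hidden=42))
```

Beam search (the sampling script's beam size parameter) extends this loop by keeping the k most likely partial words at each step instead of a single sampled one.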

Train your own model

Run python print_config to get a list of config options for training your own model.

To train your own model, make sure to have a visdom server running in the background at port 8097. Just run visdom in a separate terminal before running the train script to start the server.

To train the same model I used for generating the words in the post, run this command:

python with trained_model/config.json