This is a tiny experiment to visualize the activations of each unit of a neural-network-based image classifier as graphical plots.
The image classifier in this experiment is based on a deep neural network that has 3 hidden layers with 10 units each and a single output layer. The hidden layers use the ReLU activation function and the output layer uses the sigmoid activation function.
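As a rough sketch (not the project's actual code), the forward pass of the network described above might look like the following; the weight initialization, shapes, and the flattened 64x64 RGB input size are assumptions:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass: 3 ReLU hidden layers of 10 units, sigmoid output."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)                        # hidden layers use ReLU
    return sigmoid(weights[-1] @ a + biases[-1])   # output layer uses sigmoid

# Example with random weights; the input is a flattened 64x64 RGB image
# (64 * 64 * 3 = 12288 features; the flattening layout is an assumption).
rng = np.random.default_rng(0)
sizes = [12288, 10, 10, 10, 1]
weights = [rng.standard_normal((m, n)) * 0.01 for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
y = forward(rng.random(12288), weights, biases)    # a value in (0, 1)
```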
Although the model trained in this experiment achieves only about 80% accuracy, accuracy is not the primary concern here. The primary concern is to visualize the activations in each unit of a trained model.
The development steps below are written for a Linux or macOS system. They assume that Python 3 is installed and that you are at the top-level directory of this project.
Enter the following command to create a Python 3 virtual environment.
Enter the following command to activate the virtual environment.
Enter the following command to train a model, test it, and write the trained model to a file.
To alter the learning parameters, look for the `train()` function in this file, edit the values of the `alpha` variables, and run this script again.
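For context, `alpha` is presumably the learning rate (the conventional meaning of the symbol; an assumption, since the project's code is not shown here). In plain gradient descent it scales each parameter update, as in this toy sketch:

```python
# Illustrative gradient-descent step (a sketch, not this project's code):
# a larger alpha takes bigger steps and may overshoot the minimum; a
# smaller alpha converges more slowly.
def update(w, grad, alpha):
    return w - alpha * grad

w = 5.0
for _ in range(100):
    grad = 2 * w                      # gradient of the toy loss w**2
    w = update(w, grad, alpha=0.1)    # w shrinks toward the minimum at 0
```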
Classify arbitrary 64x64 PNG images in the `extra-set` directory with the following command. You can copy any image into this directory, as long as it is a 64x64 PNG, before running the command.
To generate graphical plots of the learned model, enter the following command.
This generates activation plots for each unit in the neural network. This is explained further in the next section.
Here are the graphical plots of the activations of each unit in each layer for every pixel component (i.e., the R, G, and B components). Each image visualizes the activations of a specific unit. For example, the first image for layer 1 visualizes the activations of the first unit in the first hidden layer.
Each pixel in an image below represents the activation of a specific unit for the corresponding pixel in the input image. The activation for each component (red, green, and blue) of each pixel is computed separately for each unit. The activations of the red, green, and blue components of each pixel are then combined and shown as a single pixel in the image.
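The per-channel procedure described above could be sketched roughly as follows; how a first-layer unit's flattened weights map back to pixel positions, and the normalization used for display, are assumptions:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def unit_activation_image(w, side=64):
    """Visualize one first-layer unit: compute the ReLU activation for each
    pixel's R, G, and B components separately, then combine the three channel
    maps into one RGB image (a simplified sketch, not the project's code)."""
    per_channel = relu(w.reshape(side, side, 3))    # one map per component
    return per_channel / (per_channel.max() + 1e-8) # normalize for display

# Example: visualize a random "unit" over a 64x64 RGB input.
w = np.random.default_rng(1).standard_normal(64 * 64 * 3)
img = unit_activation_image(w)   # shape (64, 64, 3), values in [0, 1)
```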
Layer 1 Activations
Layer 2 Activations
Layer 3 Activations
Layer 4 Activations
Out of 50 test samples, 41 were correctly classified.
The test accuracy is 82.00%.
Alter the learning parameters, e.g., the `alpha` variables in the `train()` and `backward()` functions of `model.py`, to change the test accuracy.
Training and Test Sets
The training and test data were obtained from a few HDF5 files shared by
Andrew Ng. The original H5 files are present in this project.