# Countering Adversarial Images Using Input Transformations
This package implements the experiments described in the paper Countering Adversarial Images Using Input Transformations. It contains implementations of adversarial attacks, defenses based on image transformations, and routines for training and testing convolutional networks under adversarial attacks using our defenses. We also provide pre-trained models.
If you use this code, please cite our paper:
- Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering Adversarial Images using Input Transformations. arXiv 1711.00117, 2017. [PDF]
The code implements the following four defenses against adversarial images, all of which are based on image transformations:
- Image quilting
- Total variation minimization
- JPEG compression
- Pixel quantization
Please refer to the paper for details on these defenses. A detailed description of the original image quilting algorithm can be found here; a detailed description of our solver for total variation minimization can be found here.
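For intuition, the two simplest of these defenses fit in a few lines of code. The sketch below is an illustration rather than the package's implementation; it assumes images given as NumPy float arrays in [0, 1] and shows pixel quantization and JPEG round-tripping:

```python
import io

import numpy as np
from PIL import Image


def quantize(image, depth=8):
    # Reduce each color channel to 2**depth discrete levels.
    levels = 2 ** depth - 1
    return np.round(image * levels) / levels


def jpeg_compress(image, quality=75):
    # Round-trip the image through an in-memory JPEG file; the lossy
    # encoding discards much of the adversarial perturbation.
    buffer = io.BytesIO()
    Image.fromarray((image * 255).astype(np.uint8)).save(
        buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer), dtype=np.float32) / 255.0
```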
The code implements the following four approaches to generating adversarial images:
- Fast gradient sign method (FGSM)
- Iterative FGSM
- DeepFool
- Carlini-Wagner (L2)

Please refer to the paper for details on these attacks.
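As a reference for the simplest of these attacks, here is a minimal FGSM sketch. It uses the current PyTorch autograd API rather than the v0.2.0 Variable API this package was written against, and it stands in for, rather than reproduces, the implementation in `gen_adversarial_images.py`:

```python
import torch
import torch.nn.functional as F


def fgsm(model, x, y, epsilon):
    # Take one step of size epsilon in the direction of the sign of
    # the loss gradient with respect to the input pixels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```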
## Installation

To use this code, first install Python, PyTorch, and Faiss (the latter is needed to perform image quilting). We tested the code using Python 2.7, PyTorch v0.2.0, and scikit-image 0.11; your mileage may vary when using other versions.
PyTorch can be installed using the instructions here. Faiss is required to run the image quilting algorithm; it is not installed automatically because it has no pip support and because it requires configuring BLAS and LAPACK flags, as described here. Please install Faiss using the instructions given here.
The code uses several other external dependencies (for training Inception models, performing Bregman iteration, etc.). These dependencies are automatically downloaded and installed when you install this package via:

```
# Install from source
cd adversarial_image_defenses
pip install .
```
## Usage

To import the package in Python:
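```python
import adversarial
```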
The functionality implemented in this package is demonstrated in this example.
The full functionality of the package is exposed via several runnable Python scripts. All of these scripts require the user to specify the path to the ImageNet dataset, the path to pre-trained models, and the path to quilted images (once they are computed) in `lib/path_config.json`. Alternatively, the paths can be passed as input arguments to the scripts.
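The key names below are hypothetical (chosen to mirror the command-line arguments listed under the common arguments further down) and the paths are placeholders; consult the `lib/path_config.json` shipped with the package for the actual keys:

```json
{
    "data_root": "/path/to/imagenet",
    "models_root": "/path/to/models",
    "quilting_patch_root": "/path/to/quilting_patches"
}
```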
## Generate quilting patches
`index_patches.py` creates a faiss index of image patches. This index can be used to perform quilting of images.
```python
import adversarial
from index_patches import create_faiss_patches, parse_args

args = parse_args()
# Update args if needed
args.patch_size = 5
create_faiss_patches(args)
```
Alternatively, run `python index_patches.py`. The following arguments are supported:
- `--patch_size` Patch size (square) that will be used in quilting (default: 5).
- `--num_patches` Number of patches to generate (default: 1000000).
- `--pca_dims` PCA dimension for faiss (default: 64).
- `--patches_file` File in which patches are saved.
- `--index_file` File in which the faiss index of patches is saved.
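For example, a run with placeholder paths might look like:

```
python index_patches.py --patch_size 5 --num_patches 1000000 \
    --patches_file /path/to/patches.pkl --index_file /path/to/patches.index
```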
## Generate transformed images

`gen_transformed_images.py` applies an image transformation to (adversarial or non-adversarial) ImageNet images and saves the transformed images to disk. Image transformations such as image quilting are too computationally intensive to be performed on-the-fly during network training, which is why we precompute the transformed images.
```python
import adversarial
from gen_transformed_images import generate_transformed_images
from lib import opts

# load default args for transformation functions
args = opts.parse_args(opts.OptType.TRANSFORMATION)
args.operation = "transformation_on_raw"
args.defenses = ["tvm"]
args.partition_size = 1  # number of samples to generate
generate_transformed_images(args)
```
Alternatively, run `python gen_transformed_images.py`. The following arguments are supported:
- `--operation` Operation to run. Supported operations are:
  - `transformation_on_raw`: Apply transformations on raw images.
  - `transformation_on_adv`: Apply transformations on adversarial images.
  - `cat_data`: Concatenate the output from distributed runs.
- `--data_type` Data type (`train` or `val`).
- `--out_dir` Directory path for the output of `cat_data`.
- `--partition_dir` Directory path to output transformed data.
- `--data_batches` Number of data batches to generate. Used for random crops for ensembling.
- `--partition` Distributed data partition (default: 0).
- `--partition_size` The size of each data partition. For `transformation_on_raw`, `partition_size` represents the number of classes for each process; for `transformation_on_adv`, it represents the number of images for each process.
- `--n_threads` Number of threads for `transformation_on_raw`.
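For example, a distributed run on raw training images (all values are placeholders) might look like:

```
python gen_transformed_images.py --operation transformation_on_raw \
    --defenses tvm --data_type train --partition 0 --partition_size 10
```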
## Generate TAR data index
Many file systems perform poorly when dealing with millions of small files (such as images). Therefore, we generally TAR our image datasets (obtained by running `generate_transformed_images`). Next, we use `gen_tar_index.py` to generate a file index for the TAR file. The file index facilitates fast, random-access reading of the TAR file; it is much faster and requires less memory than untarring the data or using Python's `tarfile` module.
```python
import adversarial
from gen_tar_index import generate_tar_index, parse_args

args = parse_args()
generate_tar_index(args)
```
Alternatively, run `python gen_tar_index.py`. The following arguments are supported:
- `--tar_path` Path for TAR file or directory.
- `--index_root` Directory in which to store the TAR index file.
- `--path_prefix` Prefix to identify TAR member names to be indexed.
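For example, with placeholder paths:

```
python gen_tar_index.py --tar_path /path/to/train.tar --index_root /path/to/tar_index
```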
## Generate adversarial images

`gen_adversarial_images.py` implements the generation of adversarial images for the ImageNet dataset.
```python
import adversarial
from gen_adversarial_images import generate_adversarial_images
from lib import opts

# load default args for adversary functions
args = opts.parse_args(opts.OptType.ADVERSARIAL)
args.model = "resnet50"
args.adversary_to_generate = "fgs"
args.partition_size = 1   # number of samples to generate
args.data_type = "val"    # input dataset type
args.normalize = True     # apply normalization on input data
args.attack_type = "blackbox"  # for whitebox attacks, use transformed models
args.pretrained = True    # use pretrained model from the model zoo
generate_adversarial_images(args)
```
## Train convolutional networks

`train_model.py` implements the training of convolutional networks on (transformed or non-transformed) ImageNet images.
```python
import adversarial
from train_model import train_model
from lib import opts

# load default args
args = opts.parse_args(opts.OptType.TRAIN)
args.defenses = None  # defenses: raw, tvm, quilting, jpeg, quantization
args.model = "resnet50"
args.normalize = True  # apply normalization on input data
train_model(args)
```
Alternatively, run `python train_model.py`. In addition to the common arguments, the following arguments are supported:
- `--resume` Resume training from a checkpoint (if available).
- `--lr` Initial learning rate, defined in `constants.py` (0.045 for Inception-v4, 0.1 for other models).
- `--lr_decay` Exponential learning rate decay, defined in `constants.py` (0.94 for Inception-v4, 0.1 for other models).
- `--lr_decay_stepsize` Decay the learning rate after every `stepsize` epochs, defined in `constants.py`.
- `--momentum` Momentum (default: 0.9).
- `--weight_decay` Amount of weight decay (default: 1e-4).
- `--start_epoch` Index of the first epoch (default: 0).
- `--end_epoch` Index of the last epoch (default: 90).
- `--preprocessed_epoch_data` Augmented and transformed data for each epoch is pre-generated (default: `False`).
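For example, a training run on quilted images (flag values are placeholders) might look like:

```
python train_model.py --model resnet50 --defenses quilting \
    --lr 0.1 --end_epoch 90
```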
## Classify images

`classify_images.py` implements the testing of a trained convolutional network on a dataset of (adversarial or non-adversarial, transformed or non-transformed) ImageNet images.
```python
import adversarial
from classify_images import classify_images
from lib import opts

# load default args
args = opts.parse_args(opts.OptType.CLASSIFY)
classify_images(args)
```
Alternatively, run `python classify_images.py`. In addition to the common arguments, the following arguments are supported:
- `--ncrops` List of the number of crops for each defense to use for ensembling.
- `--crop_frac` List of the crop fraction for each defense to use for ensembling.
- `--crop_type` List of the crop type for each defense to use for ensembling (`sliding` is hardset to 9 crops).
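For example, a ten-crop ensembling run on quilted images (flag values are placeholders) might look like:

```
python classify_images.py --model resnet50 --defenses quilting \
    --ncrops 10 --crop_frac 0.9 --crop_type random
```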
## Pre-trained models

We provide pre-trained models that were trained on ImageNet images processed using total variation minimization (TVM) or image quilting. They can be downloaded from the following links (set the `models_root` argument to the path that contains these model files):
- ResNet-50 model trained on quilted images
- ResNet-50 model trained on TVM images
- ResNet-101 model trained on quilted images
- ResNet-101 model trained on TVM images
- DenseNet-169 model trained on quilted images
- DenseNet-169 model trained on TVM images
- Inception-v4 model trained on quilted images
- Inception-v4 model trained on TVM images
## Common arguments

The following arguments are used by multiple scripts:
- `--data_root` Main data directory to save and read data.
- `--models_root` Directory path to store/load models.
- `--tar_dir` Directory path for transformed images (train/val) stored in TAR files.
- `--tar_index_dir` Directory path for index files for transformed images in TAR files.
- `--quilting_index_root` Directory path for quilting index files.
- `--quilting_patch_root` Directory path for quilting patch files.
- `--model` Model to use (default: `resnet50`).
- `--device` Device to use: `cpu` or `gpu`.
- `--normalize` Normalize image data.
- `--batchsize` Batch size for training and testing (default: 256).
- `--preprocessed_data` Transformations/defenses are already applied to the saved images (default: `False`).
- `--defenses` List of defenses to apply (`tvm`, `quilting`, `jpeg`, `quantization`).
- `--pretrained` Use a pretrained model from the PyTorch model zoo.
- `--tvm_weight` Regularization weight for total variation minimization (TVM).
- `--pixel_drop_rate` Pixel drop rate to use in TVM.
- `--tvm_method` Reconstruction method to use in TVM (default: `bregman`).
- `--quilting_patch_size` Patch size to use in image quilting.
- `--quilting_neighbors` Number of nearest patches to sample from in image quilting (default: 1).
- `--quantize_depth` Bit depth for the quantization defense (default: 8).
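To give intuition for the three TVM arguments above: the defense drops a random subset of pixels and then reconstructs a smooth image by total variation minimization. The loose sketch below substitutes scikit-image's TV denoiser for the package's masked Bregman solver, so it only approximates the actual defense:

```python
import numpy as np
from skimage.restoration import denoise_tv_bregman


def tvm_defense(image, pixel_drop_rate=0.5, tvm_weight=0.03):
    # image: 2-D grayscale float array in [0, 1].
    mask = np.random.rand(*image.shape) > pixel_drop_rate
    # In denoise_tv_bregman, smaller weights smooth more strongly,
    # so the TV weight enters inversely.
    return denoise_tv_bregman(image * mask, weight=1.0 / tvm_weight)
```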
The following arguments are used when generating adversarial images with `gen_adversarial_images.py`:
- `--n_samples` Maximum number of samples to test on.
- `--adversary` Adversary to use (e.g., `fgs` for the fast gradient sign method).
- `--adversary_model` Model to use for generating adversarial images (default: `resnet50`).
- `--learning_rate` Learning rate for iterative adversarial attacks (default: read from constants).
- `--adv_strength` Adversarial strength for non-iterative adversarial attacks (default: read from constants).
- `--adversarial_root` Path containing adversarial images.
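For example, a black-box FGSM run (values are placeholders) might look like:

```
python gen_adversarial_images.py --adversary fgs \
    --adversary_model resnet50 --n_samples 100
```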