Perlin Adversarial Examples

This repository contains sample code and an interactive Jupyter notebook for the paper "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks".

Procedural noise functions are parametrized and used to generate textures in computer graphics. In this work we use Perlin noise, a type of procedural noise, to create adversarial perturbations against popular deep neural network architectures trained on the ImageNet image classification task.

The results show that adversarial examples can be generated with Perlin noise without any knowledge of the target classifier, demonstrating the vulnerability of current deep neural networks to procedural noise patterns.
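As a rough illustration, the sketch below uses the noise package to sample a 2-D Perlin noise pattern and adds it to an image as a bounded perturbation. This is only a minimal approximation of the idea: the function and parameter names (perlin_perturbation, period, eps) are illustrative, not the paper's, and the paper's exact colouring and black-box optimization steps are omitted.

import numpy as np
from noise import pnoise2

def perlin_perturbation(size=224, period=60.0, octaves=4, eps=16.0):
    # Sample Perlin noise on a size x size grid; period controls the spatial
    # frequency and octaves the level of detail (names are illustrative).
    grid = np.zeros((size, size), dtype=np.float32)
    for i in range(size):
        for j in range(size):
            grid[i, j] = pnoise2(i / period, j / period, octaves=octaves)
    grid /= np.abs(grid).max() + 1e-8  # normalize to roughly [-1, 1]
    return np.repeat(grid[:, :, None], 3, axis=2) * eps  # replicate over RGB

# image: float32 array of shape (224, 224, 3) with values in [0, 255]
# adv_image = np.clip(image + perlin_perturbation(), 0, 255)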

You can play with the noise function parameters to make your own adversarial examples with our interactive widget in the Jupyter notebook.

[GIF: slider widget adjusting the noise parameters in the notebook]
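A widget along these lines can be built with ipywidgets. The sketch below is a hypothetical reconstruction, not the notebook's actual code; it reuses the illustrative perlin_perturbation function from above, and the slider ranges are assumptions.

import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider, IntSlider

def show_pattern(period=60.0, octaves=4):
    # Render the noise pattern for the chosen parameters.
    pattern = perlin_perturbation(period=period, octaves=octaves, eps=1.0)
    plt.imshow((pattern + 1) / 2)  # map [-1, 1] to [0, 1] for display
    plt.axis('off')
    plt.show()

interact(show_pattern,
         period=FloatSlider(min=10.0, max=200.0, step=5.0, value=60.0),
         octaves=IntSlider(min=1, max=8, value=4))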

Please see our paper for more details: "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks." Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu. arXiv preprint arXiv:1810.00470, 2018.

Python Dependencies

  • ipywidgets
  • jupyter
  • keras
  • matplotlib >= 2.0.2
  • noise
  • numpy
  • opensimplex
  • tensorflow
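On a typical setup these can be installed with pip, for example (versions other than matplotlib are not pinned here):

pip install ipywidgets jupyter keras "matplotlib>=2.0.2" noise numpy opensimplex tensorflow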

Acknowledgments

Learn more about the Resilient Information Systems Security (RISS) group at Imperial College London. The main author is a PhD student supported by DataSpartan. DataSpartan is not affiliated with the university.

Please cite this paper if you use the code in this repository as part of a published research project.

@article{co2018procedural,
  title={Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks},
  author={Co, Kenneth T and Mu{\~n}oz-Gonz{\'a}lez, Luis and Lupu, Emil C},
  journal={arXiv preprint arXiv:1810.00470},
  year={2018}
}

This project is licensed under the MIT License; see the LICENSE.md file for details.