Darkon: Toolkit to Hack Your Deep Learning Models
Darkon is an open-source toolkit for understanding deep learning models better. Deep learning is often described as a black box that is difficult to understand,
yet accountability and controllability can be critical when commercializing deep learning models. People often assume that high accuracy on a prepared dataset
is enough to ship a model in a commercial product. However, models that perform well on a prepared dataset often fail in real-world use and produce corner cases
that must be fixed. Moreover, in applications such as medical diagnosis and financial decisions, results must be explainable before the system can be trusted. We hope
Darkon helps you understand your trained models, whether to debug failures, interpret decisions, or more.
In this first release, we provide influence score calculation that is easily applicable to any TensorFlow model (support for other frameworks will follow). The scores can be used to filter out bad training samples that hurt test performance, to prioritize potentially mislabeled examples for fixing, and to debug distribution mismatch between training and test samples.
We will gradually add technologies for analyzing deep learning models that are easy to apply to your existing projects. More features will be released soon. Feedback and feature requests are always welcome; they help us set priorities. Please keep your eyes on Darkon.
```
pip install darkon
```
```python
inspector = darkon.Influence(workspace_path,
                             YourDataFeeder(),
                             loss_op_train,
                             loss_op_test,
                             x_placeholder,
                             y_placeholder)
scores = inspector.upweighting_influence_batch(sess,
                                               test_indices,
                                               test_batch_size,
                                               approx_params,
                                               train_batch_size,
                                               train_iterations)
```
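Once you have the scores, a common next step is to rank the checked training samples by influence. A minimal sketch, assuming `scores` is a plain list of per-sample influence values (with the convention that negative scores mark samples that hurt test loss; the example values are made up):

```python
# Hypothetical per-sample influence scores, e.g. as returned by
# inspector.upweighting_influence_batch for the checked training indices.
scores = [0.8, -2.5, 0.1, -0.3, 1.2]

# Rank training-sample indices by score, most negative first. Assuming the
# convention that negative influence means the sample hurts test loss, these
# are the first candidates to inspect for mislabeling or bad data.
ranking = sorted(range(len(scores)), key=lambda i: scores[i])
suspects = ranking[:2]  # inspect the most harmful samples first
print(suspects)  # [1, 3]
```

Mapping the suspect indices back to your training data lets you manually review the examples most likely to be mislabeled or out of distribution.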
- Issues: report issues, bugs, and request new features
- Pull requests: contributions are welcome
- Discuss: Gitter
- Email: [email protected]
Apache License 2.0
- Cook, R. D. and Weisberg, S. "Residuals and Influence in Regression", Chapman and Hall, New York, 1982
- Koh, P. W. and Liang, P. "Understanding Black-box Predictions via Influence Functions", ICML 2017
- Pearlmutter, B. A. "Fast Exact Multiplication by the Hessian", Neural Computation, 1994
- Agarwal, N., Bullins, B., and Hazan, E. "Second Order Stochastic Optimization in Linear Time", arXiv preprint arXiv:1602.03943