Darkon: Toolkit to Hack Your Deep Learning Models

Darkon is an open source toolkit for understanding deep learning models. Deep learning is often described as a black box that is difficult to interpret, yet accountability and controllability can be critical when commercializing deep learning models. High accuracy on a prepared dataset is often assumed to be enough for a commercial product, but models that perform well on prepared datasets frequently fail in real-world use and leave corner cases to be fixed. Moreover, in applications such as medical diagnosis and financial decisions, the results must be explainable before the system can be trusted. We hope Darkon helps you understand your trained models so that you can debug failures, interpret decisions, and more.

This first release provides influence score calculation that is easily applicable to any TensorFlow model (support for other frameworks will follow). The score can be used to filter out bad training samples that hurt test performance, to prioritize potentially mislabeled examples for correction, and to debug distribution mismatch between training and test samples.
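
As an illustration of that workflow, here is a minimal sketch of ranking training samples by their upweighting influence scores [2] so they can be reviewed first. The score array below is random stand-in data; in practice it would come from the Usage snippet further down, and the sign convention of the scores should be checked against the API documentation.

import numpy as np

# Stand-in for per-training-sample influence scores; in practice this is the
# array returned by darkon (see the Usage section below).
scores = np.random.randn(10000)

# Samples at the extremes of the ranking are the first candidates to inspect
# for label noise or train/test distribution mismatch. Which end is "harmful"
# depends on the sign convention, so review both.
ranked = np.argsort(scores)
candidates = list(ranked[:20]) + list(ranked[-20:])
print("training sample indices to review:", candidates)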

We will gradually add techniques for analyzing deep learning models that are easy to apply to your existing projects. More features will be released soon. Feedback and feature requests are always welcome and help us set priorities. Please keep your eyes on Darkon.

Dependencies

TensorFlow (the current release targets TensorFlow models)

Installation

pip install darkon

Usage

import darkon

# Create an influence inspector for a trained TensorFlow model.
# YourDataFeeder() is your own data feeder that serves training and test samples;
# loss_op_train / loss_op_test are the model's loss tensors, and
# x_placeholder / y_placeholder are its input and label placeholders.
inspector = darkon.Influence(workspace_path,
                             YourDataFeeder(),
                             loss_op_train,
                             loss_op_test,
                             x_placeholder,
                             y_placeholder)

# Compute upweighting influence scores of training samples on the test samples
# selected by test_indices.
scores = inspector.upweighting_influence_batch(sess,
                                               test_indices,
                                               test_batch_size,
                                               approx_params,
                                               train_batch_size,
                                               train_iterations)
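
For context, the session, placeholders, and loss tensors passed above come from your own trained model. A sketch of restoring them from a TensorFlow 1.x checkpoint might look like the following; the checkpoint path and tensor names are hypothetical and not part of darkon's API.

import tensorflow as tf

# Restore a previously trained model; the checkpoint path and tensor names are
# placeholders for your own graph.
sess = tf.Session()
saver = tf.train.import_meta_graph('checkpoints/model.ckpt.meta')
saver.restore(sess, 'checkpoints/model.ckpt')

graph = tf.get_default_graph()
x_placeholder = graph.get_tensor_by_name('x:0')
y_placeholder = graph.get_tensor_by_name('y:0')
loss_op_train = graph.get_tensor_by_name('loss_train:0')
loss_op_test = graph.get_tensor_by_name('loss_test:0')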

Examples

API Documentation

Communication

Authors

Neosapience, Inc.

License

Apache License 2.0

References

[1] Cook, R. D. and Weisberg, S. "Residuals and Influence in Regression", New York: Chapman and Hall, 1982

[2] Koh, P. W. and Liang, P. "Understanding Black-box Predictions via Influence Functions", ICML 2017

[3] Pearlmutter, B. A. "Fast Exact Multiplication by the Hessian", Neural Computation, 1994

[4] Agarwal, N., Bullins, B., and Hazan, E. "Second-Order Stochastic Optimization in Linear Time", arXiv preprint arXiv:1602.03943
