A comprehensive, cross-framework solution to convert, visualize, and diagnose deep neural network models. The "MM" in MMdnn stands for model management, and "dnn" is an acronym for deep neural network.
In short, it converts DNN models trained in one framework into formats usable by others. The major features include:
- Model File Converter: converting DNN models between frameworks
- Model Code Snippet Generator: generating training or inference code snippets for frameworks
- Model Visualization: visualizing DNN network architectures and parameters for frameworks
- Model compatibility testing (ongoing)
This project is designed and developed by Microsoft Research (MSR). We encourage researchers and students to leverage this project to analyze DNN models, and we welcome any new ideas to extend it.
You can get the stable version of MMdnn with:
pip install mmdnn
or try the latest version with:
pip install -U git+https://github.com/Microsoft/[email protected]
Across industry and academia, a number of frameworks are available for developers and researchers to design models, and each framework has its own network structure definition and model saving format. The gaps between frameworks impede the interoperation of models.
We provide a model converter to help developers convert models between frameworks, through an intermediate representation format.
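The motivation for a shared intermediate representation can be sketched with a quick count: direct conversion between every ordered pair of N frameworks would need on the order of N×(N−1) dedicated converters, while a hub-and-spoke design needs only one parser (framework → IR) and one emitter (IR → framework) per framework. A minimal illustration (the framework names are just labels):

```python
# Compare the number of components needed for direct pairwise
# conversion versus a hub-and-spoke design with a shared IR.
frameworks = ["caffe", "keras", "tensorflow", "cntk", "mxnet", "pytorch"]
n = len(frameworks)

pairwise = n * (n - 1)   # one dedicated converter per ordered pair
hub_and_spoke = 2 * n    # one parser (to IR) + one emitter (from IR) each

print(f"{n} frameworks: {pairwise} pairwise converters "
      f"vs {hub_and_spoke} IR components")
# 6 frameworks: 30 pairwise converters vs 12 IR components
```

Adding support for a new framework then costs two components rather than one per existing framework.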
[Note] You can click the links below for the detailed README of each framework.
- Microsoft Cognitive Toolkit (CNTK)
- ONNX (destination only, early stage)
- PyTorch (Destination only)
- TensorFlow (experimental; we highly recommend reading the TensorFlow README first)
- DarkNet (source only, experimental)
The model conversion between currently supported frameworks is tested on some ImageNet models.
| Inception V1 | √ | √ | √ | √ | √ | x (no LRN) | √ |
A single command achieves the conversion. As an example, convert a TensorFlow ResNet V2 152 model to PyTorch:
$ mmdownload -f tensorflow -n resnet_v2_152 -o ./
$ mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNode MMdnn_Output -df pytorch -om tf_resnet_to_pth.pth
- PyTorch (Source)
- Torch7 (Source)
- Chainer (help wanted)
- Semantic Segmentation
- Image Style Transfer
- Object Detection
You can use the MMdnn model visualizer and submit your IR JSON file to visualize your model. In order to run the commands below, you will need to install requests, Keras, and TensorFlow using your favorite package manager.
Use the Keras "inception_v3" model as an example again.
- Download the pre-trained models
$ mmdownload -f keras -n inception_v3
- Convert the pre-trained model files into intermediate representation
$ mmtoir -f keras -w imagenet_inception_v3.h5 -o keras_inception_v3
- Open the MMdnn model visualizer and choose file keras_inception_v3.json
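Besides feeding the JSON file to the visualizer, you can inspect it directly: it is a JSON rendering of the IR graph. The schema assumed below (a top-level "node" list with "name" and "op" fields, mirroring the IR protobuf) is an assumption for illustration; inspect your own keras_inception_v3.json and adjust accordingly. The sample file written here is a hypothetical stand-in:

```python
import json

# Hypothetical stand-in for an IR JSON file such as keras_inception_v3.json;
# the real file is produced by mmtoir and will contain many more nodes.
sample = {
    "node": [
        {"name": "input_1", "op": "DataInput"},
        {"name": "conv2d_1", "op": "Conv"},
        {"name": "predictions", "op": "Softmax"},
    ]
}
with open("ir_sample.json", "w") as f:
    json.dump(sample, f)

# List the operator and name of every node recorded in the graph.
with open("ir_sample.json") as f:
    graph = json.load(f)

for node in graph.get("node", []):
    print(f"{node['op']:>10}  {node['name']}")
```

This kind of quick listing is handy for checking that a conversion preserved the layers you expect before loading the model into the visualizer.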
The intermediate representation stores the network architecture in protobuf binary and pre-trained weights in NumPy native format.
[Note] Currently the IR weight data is in NHWC (channels-last) format.
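As a sketch of what the note above means in practice, here is how a channels-last (NHWC) array can be transposed into the channels-first (NCHW) layout used by frameworks such as PyTorch. The shapes are made up for illustration; this is not MMdnn's own conversion code:

```python
import numpy as np

# A channels-last activation tensor: (batch, height, width, channels).
x_nhwc = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Reorder axes to channels-first: (batch, channels, height, width).
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))

print(x_nhwc.shape)  # (1, 224, 224, 3)
print(x_nchw.shape)  # (1, 3, 224, 224)
```

The same axis-permutation idea applies to convolution kernels, though their axis order differs from activations, so check each tensor's layout rather than assuming one rule fits all.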
We are working on conversion and visualization support for other frameworks, such as PyTorch and CoreML, and more RNN-related operators are under investigation. Any contributions and suggestions are welcome! See the Contribution Guideline for details.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.