Welcome to the Anakin GitHub.
Anakin is a cross-platform, high-performance inference engine originally developed by Baidu engineers and deployed at large scale in industrial products.
Please refer to our release announcement to track the latest features of Anakin.
Anakin supports a wide range of neural network architectures and hardware platforms, and it is easy to run on GPU, x86, and ARM.
To make full use of the hardware, we optimize forward prediction at several levels:
Automatic graph fusion. Under a given algorithm, the goal of every performance optimization is to keep the ALU as busy as possible. Fusing adjacent operators effectively reduces memory access and keeps the ALU busy.
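Conceptually, fusion trades a round trip through memory for extra work inside one kernel. The C++ sketch below is only an illustration, not Anakin's fusion pass; the operator pair and function names are made up for the example. It contrasts an unfused bias-add followed by ReLU with a fused kernel that applies the activation while the value is still in a register:

```cpp
#include <algorithm>
#include <vector>

// Unfused: two passes over memory and one intermediate buffer.
void bias_then_relu(const std::vector<float>& in, float bias,
                    std::vector<float>& tmp, std::vector<float>& out) {
    for (size_t i = 0; i < in.size(); ++i) tmp[i] = in[i] + bias;            // write tmp
    for (size_t i = 0; i < tmp.size(); ++i) out[i] = std::max(tmp[i], 0.f);  // read tmp again
}

// Fused: one pass, no intermediate buffer, the activation is applied in-register.
void fused_bias_relu(const std::vector<float>& in, float bias,
                     std::vector<float>& out) {
    for (size_t i = 0; i < in.size(); ++i) out[i] = std::max(in[i] + bias, 0.f);
}

int main() {
    std::vector<float> in = {-1.f, 0.5f, 2.f}, tmp(3), a(3), b(3);
    bias_then_relu(in, 0.25f, tmp, a);
    fused_bias_relu(in, 0.25f, b);
    return a == b ? 0 : 1;  // both paths produce the same result
}
```

The fused version touches memory twice per element (read input, write output) instead of four times, which is exactly the traffic that graph fusion removes.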
Memory reuse. Forward prediction is a one-pass computation, so we reuse memory between the inputs and outputs of different operators, reducing the overall memory overhead.
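As a rough illustration (the names and pool structure below are assumptions for the example, not Anakin's actual memory planner): because forward prediction runs each operator exactly once, a tensor's buffer can be recycled as soon as its last reader has executed, and a later operator's output can take over that storage instead of triggering a new allocation:

```cpp
#include <vector>

struct BufferPool {
    std::vector<std::vector<float>> free_list;  // buffers no longer referenced

    std::vector<float> acquire(size_t n) {
        if (!free_list.empty()) {                        // reuse a freed buffer if possible
            std::vector<float> buf = std::move(free_list.back());
            free_list.pop_back();
            buf.resize(n);
            return buf;
        }
        return std::vector<float>(n);                    // otherwise allocate fresh
    }
    void release(std::vector<float>&& buf) { free_list.push_back(std::move(buf)); }
};

int main() {
    BufferPool pool;
    std::vector<float> a = pool.acquire(1024);  // op1 output (fresh allocation)
    std::vector<float> b = pool.acquire(1024);  // op2 output (fresh allocation)
    pool.release(std::move(a));                 // op2 was the last reader of a
    std::vector<float> c = pool.acquire(1024);  // op3 output reuses a's storage
    (void)b; (void)c;
    return 0;
}
```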
Assembly-level optimization. Saber is the underlying DNN library of Anakin and is deeply optimized at the assembly level; a small intrinsics sketch in that spirit follows below. For a performance comparison between Anakin, TensorRT, and TensorFlow Lite, please refer to the benchmark tests.
For ARM, please refer to run on arm.
We recommend checking out the benchmark README.
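To make the assembly-level point concrete, here is a minimal example of the kind of hand-vectorized kernel such tuning produces. It is not taken from Saber; it is just an SSE ReLU that processes four floats per instruction, with a scalar loop for the tail:

```cpp
#include <immintrin.h>
#include <vector>

// In-place ReLU: vectorized body (4 floats per SSE instruction) plus scalar tail.
void relu_sse(float* data, size_t n) {
    const __m128 zero = _mm_setzero_ps();
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(data + i);
        _mm_storeu_ps(data + i, _mm_max_ps(v, zero));
    }
    for (; i < n; ++i) data[i] = data[i] > 0.f ? data[i] : 0.f;
}

int main() {
    std::vector<float> x = {-2.f, -1.f, 0.f, 1.f, 2.f};
    relu_sse(x.data(), x.size());
    return (x[0] == 0.f && x[4] == 2.f) ? 0 : 1;
}
```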
We appreciate your contributions!
You are welcome to submit questions and bug reports as GitHub Issues.
Copyright and License
Anakin is provided under the Apache-2.0 license.