Synchronized Batch Normalization implementation in PyTorch.
This module differs from the built-in PyTorch BatchNorm in that the mean and standard deviation are reduced across all devices during training.
For example, when one uses nn.DataParallel to wrap the network during training, PyTorch's implementation normalizes the tensor on each device using the statistics only on that device. This speeds up the computation and is easy to implement, but the statistics might be inaccurate.
Instead, in this synchronized version, the statistics will be computed
over all training samples distributed on multiple devices.
Note that, for the one-GPU or CPU-only case, this module behaves exactly the same as the built-in PyTorch implementation.
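To make the idea concrete, here is a minimal, illustrative sketch (not the library's actual code; the function name global_batchnorm_stats is hypothetical): each device contributes its sample count, sum, and sum of squares, and these are reduced into one global mean and variance used for normalization.

    import torch

    def global_batchnorm_stats(per_device_inputs):
        """Illustrative only: combine per-device sums into global mean/var.

        per_device_inputs: list of tensors of shape (N_i, C), one per device.
        """
        total_count = 0
        total_sum = 0.0
        total_ssum = 0.0
        for x in per_device_inputs:
            # Each device reports its sample count, sum, and sum of squares.
            total_count += x.shape[0]
            total_sum = total_sum + x.sum(dim=0)
            total_ssum = total_ssum + (x * x).sum(dim=0)

        # Reduce: global (biased) mean and variance over all samples on all devices.
        mean = total_sum / total_count
        var = total_ssum / total_count - mean * mean
        return mean, var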
This module is currently only a prototype version for research use. As mentioned below, it has its limitations and may even suffer from some design problems. If you have any questions or suggestions, please feel free to open an issue or submit a pull request.
Why Synchronized BatchNorm?
Although the typical implementation of BatchNorm working on multiple devices (GPUs) is fast (with no communication overhead), it inevitably reduces the batch size per device, which can degrade performance. This is not a significant issue in some standard vision tasks such as ImageNet classification (as the batch size per device is usually large enough to obtain good statistics). However, it hurts performance in tasks where the batch size per device is usually very small (e.g., 1 per GPU).
For example, the importance of synchronized batch normalization in object detection has recently been demonstrated with an extensive analysis in the paper MegDet: A Large Mini-Batch Object Detector.
To use Synchronized Batch Normalization, we add a data parallel replication callback. This introduces a slight difference from the typical usage of nn.DataParallel.
Use it with the provided, customized data parallel wrapper:
    from sync_batchnorm import SynchronizedBatchNorm1d, DataParallelWithCallback

    sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
    sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1])
Or, if you are using a customized data parallel module, you can use this library via monkey patching.
    from torch.nn import DataParallel  # or your customized DataParallel module
    from sync_batchnorm import SynchronizedBatchNorm1d, patch_replication_callback

    sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False)
    sync_bn = DataParallel(sync_bn, device_ids=[0, 1])
    patch_replication_callback(sync_bn)  # monkey-patching
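As a slightly larger, hypothetical usage sketch (assuming a SynchronizedBatchNorm2d variant analogous to the 1d module shown above; everything else is ordinary PyTorch), a small model can be wrapped and run like this:

    import torch
    import torch.nn as nn
    from sync_batchnorm import SynchronizedBatchNorm2d, DataParallelWithCallback

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        SynchronizedBatchNorm2d(16),   # statistics reduced across GPUs 0 and 1
        nn.ReLU(inplace=True),
    )
    model = DataParallelWithCallback(model, device_ids=[0, 1]).cuda()

    x = torch.randn(8, 3, 32, 32).cuda()  # the batch is split across the two GPUs
    y = model(x)
    print(y.shape)  # torch.Size([8, 16, 32, 32])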
See tests/test_sync_batchnorm.py for a numeric result comparison.
Implementation details and highlights
If you are interested in how the batch statistics are reduced and broadcast among multiple devices, please take a look at the code with detailed comments. Here we only emphasize some highlights of the implementation:
- This implementation is in pure Python. No extra C++ extension libraries are needed.
- Easy to use as demonstrated above.
- It is completely compatible with PyTorch's implementation. Specifically, it uses unbiased variance to update the moving average, and uses sqrt(max(var, eps)) instead of sqrt(var + eps) (see the sketch after this list).
- The implementation requires that the modules on different devices invoke batchnorm exactly the SAME number of times in each forward pass. For example, you cannot call batchnorm on GPU0 but skip it on GPU1. The i-th (i = 1, 2, 3, ...) calls of batchnorm on each device will be viewed as a whole and their statistics will be reduced. This is tricky but is a good way to handle PyTorch's dynamic computation graph. Although it sounds complicated, this will usually not be an issue for most models.
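To make the compatibility details above concrete, here is a hedged per-feature sketch of one training-time step (the function name sync_bn_step is illustrative and not part of the library): normalization divides by sqrt(max(var, eps)) rather than sqrt(var + eps), while the running average is updated with the unbiased variance estimate.

    import torch

    def sync_bn_step(x, running_mean, running_var, eps=1e-5, momentum=0.1):
        # x: (N, C), where the statistics are assumed to already be the
        # globally reduced ones; assumes N > 1.
        n = x.shape[0]
        mean = x.mean(dim=0)
        var = x.var(dim=0, unbiased=False)  # biased variance for normalization

        # Normalize with sqrt(max(var, eps)) instead of sqrt(var + eps).
        x_hat = (x - mean) / torch.sqrt(torch.clamp(var, min=eps))

        # Update running statistics with the unbiased variance (n / (n - 1)).
        unbiased_var = var * n / (n - 1)
        running_mean.mul_(1 - momentum).add_(momentum * mean)
        running_var.mul_(1 - momentum).add_(momentum * unbiased_var)
        return x_hat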
Runtime error on backward pass.
Due to a PyTorch bug, using old PyTorch libraries will trigger a RuntimeError with a message like:
Assertion `pos >= 0 && pos < buffer.size()` failed.
This has already been fixed in the newest PyTorch repo, which, unfortunately, has not yet been pushed to the official or Anaconda binary releases. Thus, you are required to build the PyTorch package from source according to the instructions here.
Because this library does not fuse the normalization and statistics operations in C++ (or CUDA), it is less numerically stable than the original PyTorch implementation. Detailed analysis can be found in
Authors and License:
Copyright (c) 2018-, Jiayuan Mao.
Contributors: Tete Xiao.
Distributed under the MIT License (see LICENSE).