Apache MXNet 1.4.1 compatibility, improvements to coordination performance at ultra-large scale, stall message improvements, bugfixes
Add horovodrun
PySpark, Apache MXNet, autotuning, TensorFlow eager execution, mixed-precision & embedding improvements
Force allreduce of all gradients in step(), bugfixes
FP16 hierarchical allreduce
PyTorch 1.0, TF-Keras, FP16 ops on GPU
Parallelized hierarchical allreduce
Support for the upcoming PyTorch release, bugfixes
Add compatibility with PyTorch 0.4.1
Support for IBM PowerAI DDL & APIs to restore optimizer state