This repository has been archived by the owner on Dec 26, 2024. It is now read-only.
Tags: bytedance/fairseq
0.6.1 -> 0.6.2 (facebookresearch#577)

Summary:
Changelog:
- 998ba4f: Add language models from Baevski & Auli (2018)
- 4294c4f: Add mixture-of-experts code from Shen et al. (2019)
- 0049349: Add an example for multilingual training
- 48d9afb: Speed improvements, including fused operators from apex
- 44d27e6: Add TensorBoard support
- d17fa85: Add the Adadelta optimizer
- 9e1c880: Add `FairseqEncoderModel`
- b65c579: Add `FairseqTask.inference_step` to modularize generate.py
- 2ad1178: Add back `--curriculum`
- Misc bug fixes and other features

Pull Request resolved: facebookresearch#577
Differential Revision: D14481233
Pulled By: myleott
fbshipit-source-id: 4ff8625ef1c0b24273fc65df7c5658e3c932e8b7
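The `FairseqTask.inference_step` entry above refers to moving decoding behind a task-level method so that generate.py stays task-agnostic. The sketch below is not fairseq's actual API; `Task`, `TranslationTask`, and `generate` are hypothetical names illustrating the pattern, assuming a generator callable that maps (model, sample) to a list of hypothesis strings.

```python
# Hypothetical sketch of the `inference_step` pattern: each task owns its
# decoding logic, so the generation driver needs no task-specific branches.

class Task:
    """Base task: delegate decoding to the generator unchanged."""
    def inference_step(self, generator, model, sample):
        return generator(model, sample)

class TranslationTask(Task):
    """A task may customise decoding, e.g. post-process hypotheses."""
    def inference_step(self, generator, model, sample):
        hypos = generator(model, sample)
        return [h.strip() for h in hypos]  # strip whitespace as a toy example

def generate(task, generator, model, samples):
    # The driver loop stays generic across tasks.
    return [task.inference_step(generator, model, s) for s in samples]

# Toy generator: "decodes" a sample by tagging it with the model name.
echo = lambda model, sample: [f"{model}:{sample} "]
out = generate(TranslationTask(), echo, "m", ["a", "b"])
# out == [["m:a"], ["m:b"]] -- trailing spaces removed by the task
```

Swapping in a `LanguageModelingTask` with its own `inference_step` would change decoding behaviour without touching `generate`.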
Add fairseq to PyPI (facebookresearch#495)

Summary:
- fairseq can now be installed via pip: `pip install fairseq`
- command-line tools are globally accessible: `fairseq-preprocess`, `fairseq-train`, `fairseq-generate`, etc.

Pull Request resolved: facebookresearch#495
Differential Revision: D14017761
Pulled By: myleott
fbshipit-source-id: 10c9f6634a3056074eac2f33324b4f1f404d4235
Online backtranslation module
Co-authored-by: liezl200 <[email protected]>
0.4.0 -> 0.5.0

Changelog:
- 97b58b4: add Transformer model from Vaswani et al. (2017)
- b2374e5: faster Transformer inference with improved caching
- 2d27ae0: simulate large mini-batch training with delayed updates (`--update-freq`)
- 7ee1d28: add FP16 training support (`--fp16`)
- 2a84f46: faster inference by removing completed sentences from the batch
- 663fd80: batched interactive generation
- 4c2ef2d: add language modeling / gated convolutional model from Dauphin et al. (2017)
- b59815b: add Hierarchical Neural Story Generation model from Fan et al. (2018)
- ff68a9e: add FairseqTask to modularize task definitions (e.g., translation, language modeling)
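The `--update-freq` entry above describes delayed updates: accumulating gradients over N micro-batches before taking one optimizer step, which simulates an N-times larger mini-batch. This is a minimal dependency-free sketch of that idea, not fairseq's implementation; the function name and scalar-SGD setup are illustrative assumptions.

```python
# Minimal sketch (not fairseq code) of delayed updates a la `--update-freq`:
# accumulate `update_freq` micro-batch gradients, then apply one averaged
# SGD step, emulating a mini-batch `update_freq` times larger.

def sgd_with_update_freq(param, grads, lr=0.1, update_freq=4):
    """Run scalar SGD over a stream of micro-batch gradients."""
    accum = 0.0
    for i, g in enumerate(grads, start=1):
        accum += g                               # delay the update...
        if i % update_freq == 0:                 # ...until N grads arrive
            param -= lr * (accum / update_freq)  # one averaged step
            accum = 0.0
    return param

# Eight micro-batch gradients with update_freq=4 -> only two parameter
# updates, each equivalent to a step on a 4x larger batch.
w = sgd_with_update_freq(1.0, [0.5] * 8)
# w == 1.0 - 0.1*0.5 - 0.1*0.5 == 0.9
```

The same trick is why delayed updates trade wall-clock time per step for larger effective batches without extra GPU memory.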