Lua-based toolkit for sequence-to-sequence learning
This repository provides the Lua-based fairseq toolkit for sequence-to-sequence learning, specifically tailored for Neural Machine Translation (NMT). It implements convolutional and LSTM-based models, offering multi-GPU training and fast beam search generation. The toolkit is intended for researchers and practitioners in NLP and NMT.
How It Works
Fairseq utilizes Torch (Lua) for its backend, enabling efficient tensor operations and GPU acceleration. It implements state-of-the-art NMT architectures, including convolutional sequence-to-sequence models and standard LSTM-based models. The toolkit supports multi-GPU training for faster model development and includes optimized generation routines for both CPU and GPU, facilitating efficient inference.
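The toolkit's fast generation relies on beam search, which keeps only the top-scoring partial translations at each decoding step instead of exploring every possible output. The following is a minimal, generic beam-search sketch in Python to illustrate the idea; it is not fairseq's actual (Lua/Torch) implementation, and the `step_fn`/token names are illustrative assumptions.

```python
from math import log

def beam_search(step_fn, bos, eos, beam_size=4, max_len=10):
    """Generic beam search over a token-scoring function.

    step_fn(prefix) -> dict mapping next token -> probability.
    Illustrative sketch only; fairseq's real decoder operates on
    batched model scores and runs on CPU or GPU.
    """
    beams = [([bos], 0.0)]  # (token sequence, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, p in step_fn(seq).items():
                candidates.append((seq + [tok], score + log(p)))
        # Keep only the best `beam_size` partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:  # every surviving hypothesis has ended
            break
    finished.extend(beams)
    return max(finished, key=lambda c: c[1])[0]

# Toy next-token distribution standing in for a trained NMT model.
def toy_model(prefix):
    if len(prefix) >= 3:
        return {"</s>": 1.0}
    return {"a": 0.6, "b": 0.3, "</s>": 0.1}

print(beam_search(toy_model, "<s>", "</s>", beam_size=2, max_len=5))
# → ['<s>', 'a', 'a', '</s>']
```

The key trade-off is that a larger `beam_size` explores more hypotheses (often better translations) at a higher compute cost; fairseq exposes this as a generation-time setting.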
Quick Start & Requirements
Install with luarocks make rocks/fairseq-scm-1.rockspec. For CPU-only translation, use luarocks make rocks/fairseq-cpu-scm-1.rockspec instead. Torch with the nn package (from May 5th, 2017 or later) is required; training additionally requires an NVIDIA GPU and NCCL.
Highlighted Details
Maintenance & Community
This Lua version is preserved but unsupported; new development focuses on the PyTorch version. Community links include a Facebook group and a Google group.
Licensing & Compatibility
BSD-licensed, including pre-trained models, with an additional patent grant. Compatible with commercial use.
Limitations & Caveats
This Lua version is no longer actively developed or supported, with all new efforts directed towards the PyTorch implementation. Users should be aware of potential compatibility issues with modern hardware or operating systems due to its age.