PyTorch library for composable Transformer models
A PyTorch library of state-of-the-art Transformer models and reusable components for researchers and developers who need flexible, efficient, and well-annotated implementations of LLMs such as Falcon and Llama. It simplifies model customization and integration, with support for 4/8-bit inference and the PyTorch meta device.
How It Works
Models are assembled from a set of reusable building blocks. Because all models share these blocks, cross-cutting features come for free: 4/8-bit inference via bitsandbytes, and loading through the PyTorch meta device for memory-efficient initialization. The modular design also makes new architectures easy to add, and a bug fix or feature implemented in one block benefits every model that uses it. Public APIs carry consistent type annotations for better IDE support and code maintainability.
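Because quantized loading is part of the shared loading path rather than per-model code, enabling it is a constructor argument. A minimal sketch, assuming the library's documented `AutoGenerator.from_hf_hub` entry point and its `quantization_config` parameter; treat the `BitsAndBytesConfig.for_8bit()` helper and its import path as assumptions to verify against the current API reference:

```python
import torch
from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig
# Import path of the bitsandbytes config helper is an assumption.
from curated_transformers.quantization import BitsAndBytesConfig

# Load Falcon with 8-bit bitsandbytes quantization. Parameters start on
# the meta device and are materialized as checkpoint shards are
# quantized, keeping peak memory low.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
    quantization_config=BitsAndBytesConfig.for_8bit(),  # assumed helper name
)
print(generator(["Explain 8-bit inference briefly."], GreedyGeneratorConfig()))
```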
Quick Start & Requirements
```bash
# Core library
pip install curated-transformers

# PyTorch with CUDA 11.8 wheels
pip install torch --index-url https://download.pytorch.org/whl/cu118

# Optional extras for 4/8-bit inference via bitsandbytes
pip install "curated-transformers[quantization]"
```
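With the packages installed, text generation goes through the auto classes. A minimal sketch following the generation API shown in the project's documentation; the model name and CUDA device are illustrative:

```python
import torch
from curated_transformers.generation import AutoGenerator, GreedyGeneratorConfig

# Download the model from the Hugging Face Hub and place it on the GPU.
generator = AutoGenerator.from_hf_hub(
    name="tiiuae/falcon-7b-instruct",
    device=torch.device("cuda", index=0),
)

# Greedy decoding of a batch of prompts; returns one string per prompt.
answers = generator(["What is Python in one sentence?"], GreedyGeneratorConfig())
print(answers)
```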
Limitations & Caveats
The README does not detail specific limitations, unsupported platforms, or known bugs. The library is presented as production-tested, implying a degree of stability.