SDK for fine-tuning and customizing open-source LLMs
xTuring is an open-source library for efficient, accessible fine-tuning of large language models (LLMs). It lets researchers and developers personalize models such as LLaMA, Mistral, and GPT-J with their own data, and it preserves data privacy by supporting execution locally or in a private cloud.
How It Works
xTuring uses memory-efficient fine-tuning techniques such as LoRA (Low-Rank Adaptation) and INT8/INT4 quantization to reduce hardware requirements and costs, by up to 90% according to the project. LoRA freezes the base model and trains only small low-rank adapter matrices, while quantization stores the frozen weights at reduced precision, so fine-tuning fits on far less powerful hardware and finishes faster. The library also scales across multiple GPUs for accelerated training and includes data preprocessing, model evaluation with metrics such as perplexity, and inference on both GPU and CPU.
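As a concrete illustration, here is a minimal sketch of a LoRA + INT4 fine-tuning run, assuming the BaseModel-style API from the project's documentation; the model key "llama_lora_int4" and the dataset path are assumptions you would swap for your own setup and installed version.

```python
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

# Instruction-tuning data; the directory is a placeholder for your own dataset.
dataset = InstructionDataset("./alpaca_data")

# "llama_lora_int4" selects a LLaMA backbone with INT4-quantized frozen weights
# and trainable LoRA adapters, which is what shrinks the memory footprint.
model = BaseModel.create("llama_lora_int4")

# Only the low-rank adapter matrices are updated during fine-tuning.
model.finetune(dataset=dataset)

# Persist the adapted model for later inference.
model.save("./llama_lora_int4_finetuned")
```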
Quick Start & Requirements
pip install xturing
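After installing, a small end-to-end run might look like the sketch below; the model key, prompt, and dataset path are placeholders, and the generate and evaluate calls assume the library's documented text-generation and perplexity-evaluation features.

```python
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")  # placeholder path
model = BaseModel.create("gpt2_lora")          # small model for a quick smoke test

model.finetune(dataset=dataset)

# Inference runs on GPU when available, otherwise on CPU.
output = model.generate(texts=["What is quantization in deep learning?"])
print(output)

# Perplexity of the fine-tuned model on a dataset (lower is better);
# assumes the library's evaluate() perplexity feature.
print(model.evaluate(dataset))
```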
Licensing & Compatibility
xTuring is distributed under the Apache 2.0 license, which permits commercial use and modification.
Limitations & Caveats
The roadmap indicates future support for INT3, INT2, and INT1 low-precision fine-tuning, suggesting these are not yet implemented. Stable Diffusion support is also listed as a future item.