Showing 1 - 25 of 157 repos
| # | Repository | Description | Stars | Stars 7d Δ | Stars 7d % | PRs 7d Δ | Created | Response rate | Last active |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Plachtaa/VITS-fast-fine-tuning | Fine-tune VITS models for voice conversion & TTS with custom characters. Supports cloning voices from audio/video and inference via CLI/GUI. | 5k (Top 25%) | 7 | 0.1% | 0 | 2y ago | Inactive | 6mo ago |
| 2 | | Curated resources for fine-tuning LLMs like GPT, BERT, & RoBERTa. Includes tutorials, papers, tools, & best practices for adapting models. | 445 | 1 | 0.2% | 0 | 1y ago | Inactive | 8mo ago |
| 3 | youssefHosni/Hands-On-LLM-Fine-Tuning | Tutorials for fine-tuning LLMs using techniques like full fine-tuning, PEFT (LoRA), and instruction fine-tuning. Includes reasoning fine-tun... | 266 | 11 | 4.2% | 0 | 9mo ago | Inactive | 1mo ago |
| 4 | | Fine-tunes GPT models using Node.js. Uploads datasets, creates fine-tunes, lists fine-tunes, and creates completions using the fine-tuned mo... | 255 | 0 | 0% | 0 | 2y ago | Inactive | 2y ago |
| 5 | | Platform for fine-tuning 100+ large language models with CLI and Web UI. Supports LoRA, QLoRA, DPO, PPO, and many more training approaches. | 55k (Top 1%) | 445 | 0.8% | 9 | 2y ago | 1 day | 3d ago |
| 6 | jianzhnie/LLamaTuner | Toolkit for fine-tuning LLMs (Llama3, Phi3, Qwen, Mistral, ...). Supports LoRA, QLoRA and full-parameter fine-tuning. Compatible with DeepSpe... | 608 | 1 | 0.2% | 0 | 2y ago | 1 day | 6mo ago |
| 7 | | Improves LLMs using self-play fine-tuning. The LLM generates its own training data from previous iterations to refine its policy. | 1k (Top 50%) | 4 | 0.3% | 0 | 1y ago | 1 week | 1y ago |
| 8 | chaoyi-wu/Finetune_LLAMA | Fine-tunes the LLaMA model for Chinese, integrating frameworks like Minimal LLaMA and Alpaca. Supports FSDP and DeepSpeed for multi-GPU. | 401 | 0 | 0% | 0 | 2y ago | Inactive | 2y ago |
| 9 | | CLI tool to generate synthetic datasets for LLM fine-tuning. It supports data ingestion, CoT generation, curation, and format conversion. | 1k (Top 50%) | 17 | 1.6% | 0 | 4mo ago | Inactive | 1w ago |
| 10 | yanqiangmiffy/InstructGLM | Fine-tunes the ChatGLM-6B model using LoRA on instruction datasets like Alpaca and BELLE. Includes scripts for data preprocessing and traini... | 652 | 0 | 0% | 0 | 2y ago | 1 week | 2y ago |
| 11 | | Python package for text generation and LLM fine-tuning on Apple silicon with MLX. Supports quantization, LoRA, and distributed inference. | 1k (Top 50%) | 121 | 8.7% | 11 | 4mo ago | 1 day | 11h ago |
| 12 | | Toolkit for efficient fine-tuning of LLMs and VLMs. Supports QLoRA, LoRA, and full-parameter fine-tuning. Integrates with LMDeploy. | 5k (Top 25%) | 8 | 0.2% | 0 | 2y ago | 1 day | 3w ago |
| 13 | | Enables efficient adaptation of large pretrained models to downstream applications by fine-tuning a small number of parameters. Integrated w... | 19k (Top 5%) | 73 | 0.4% | 23 | 2y ago | 1 day | 23h ago |
| 14 | princeton-nlp/MeZO | Memory-efficient zeroth-order optimizer for fine-tuning language models with the same memory footprint as inference. Compatible with LoRA. | 1k (Top 50%) | 2 | 0.2% | 0 | 2y ago | 1 week | 1y ago |
| 15 | | Code for running and fine-tuning LLaMA. Includes PEFT, LoRA, and pipeline parallel implementations. Also includes tokenization scripts. | 458 | 0 | 0% | 0 | 2y ago | Inactive | 1y ago |
| 16 | 27182812/ChatGLM-LLaMA-chinese-insturct | Fine-tuning of ChatGLM and LLaMA models on Chinese instruction data, using PEFT to reduce resource requirements. Includes fine-tuned weights... | 390 | 0 | 0% | 0 | 2y ago | 1 day | 2y ago |
| 17 | | Parameter-efficient fine-tuning for instruction-following and multi-modal LLaMA models. Introduces adapters with minimal parameters. | 6k (Top 10%) | 2 | 0.0% | 0 | 2y ago | 1 day | 1y ago |
| 18 | | JAX library for using and fine-tuning Gemma, a family of open-weights large language models (LLMs). Includes examples for multi-modal and LoR... | 4k (Top 25%) | 18 | 0.5% | 5 | 1y ago | 1 day | 1d ago |
| 19 | SmartFlowAI/EmoLLM | Fine-tuned LLMs for mental health support, focusing on understanding and aiding users. Models support instruction fine-tuning and RAG. | 2k (Top 50%) | 6 | 0.4% | 0 | 1y ago | 1 day | 2mo ago |
| 20 | | Fine-tunes LLaMA with 52K instruction-following data. Includes data generation & fine-tuning code. Recovers Alpaca-7B weights. Research only... | 30k (Top 5%) | 16 | 0.1% | 0 | 2y ago | 1 day | 1y ago |
| 21 | thunlp/DeltaPapers | Curated list of research papers on parameter-efficient tuning methods (Delta Tuning) for pre-trained models, facilitating model adaptation. | 285 | 0 | 0% | 0 | 3y ago | Inactive | 2y ago |
| 22 | Cohere-Labs-Community/parameter-efficient-moe | Parameter-efficient fine-tuning (PEFT) via Mixture of Experts (MoE). Includes MoV and MoLoRA implementations, built on T5X, Flaxformer, and ... | 269 | 1 | 0.4% | 0 | 1y ago | Inactive | 1y ago |
| 23 | longyuewangdcu/Chinese-Llama-2 | Enhances Llama-2's Chinese language capabilities using LoRA fine-tuning, full-parameter instruction fine-tuning, and secondary pre-training ... | 448 | 0 | 0% | 0 | 2y ago | 1 day | 1y ago |
| 24 | Facico/Chinese-Vicuna | Fine-tunes LLaMA for Chinese instruction following, even on a single RTX 2080Ti. Includes code for fine-tuning, generation, and CPU inferenc... | 4k (Top 25%) | 0 | 0% | 0 | 2y ago | 1 day | 3mo ago |
| 25 | | Enables QLoRA fine-tuning of LLMs. Includes training scripts, configurations, and inference examples. Supports model conversion to GGUF. | 260 | 0 | 0% | 0 | 2y ago | 1 day | 1y ago |
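Most entries above advertise LoRA support. As a rough, repo-agnostic illustration of the idea they share, the sketch below shows the low-rank update in plain NumPy; the dimensions, scaling, and function names are illustrative assumptions, not code from any listed project:

```python
import numpy as np

# LoRA sketch: keep the pretrained weight W (d_out x d_in) frozen and train
# only two small factors, B (d_out x r) and A (r x d_in), with r << d_out, d_in.
d_out, d_in, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # zero-init: adapter starts as a no-op
alpha = 16                                  # common scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without ever
    # materializing the full d_out x d_in update matrix.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size            # 768 * 768 = 589,824
lora_params = A.size + B.size   # 8 * 768 * 2 = 12,288 (~2% of the full matrix)
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.2%}")
```

Because `B` starts at zero, the adapted model is exactly the pretrained model before training; QLoRA repos in the list apply the same update on top of a quantized `W`.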