PhoebusSi: IFT platform for instruction collection, parameter-efficient methods, and LLMs
Top 17.1% on SourcePulse
This repository provides a unified platform for instruction fine-tuning (IFT) of large language models (LLMs), focusing on instruction collection, parameter-efficient methods, and multi-LLM integration. It aims to lower the barrier for NLP researchers to experiment with and deploy LLMs, particularly for enhancing Chain-of-Thought (CoT) reasoning and Chinese instruction following.
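To illustrate the kind of data the instruction collection contains, here is a minimal sketch of an Alpaca-style record, a common IFT format; the exact schema used by this collection may differ.

```python
# A minimal, hypothetical Alpaca-style instruction record; CoT datasets add the
# intermediate reasoning steps to the "output" field before the final answer.
example_record = {
    "instruction": "Explain why the sky appears blue.",
    "input": "",  # optional extra context; empty for context-free instructions
    "output": (
        "Sunlight is scattered by air molecules. Shorter (blue) wavelengths "
        "scatter more strongly, so the sky looks blue."
    ),
}
```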
How It Works
The platform unifies various LLMs (LLaMA, ChatGLM, Bloom, MOSS, InternLM) and parameter-efficient fine-tuning (PEFT) techniques (LoRA, P-tuning, AdaLoRA, Prefix Tuning) under a single interface. It leverages a comprehensive collection of instruction-tuning datasets, including English, Chinese, and CoT data, to improve model capabilities. The core advantage lies in its modular design, allowing researchers to easily mix and match LLMs, PEFT methods, and datasets for systematic empirical studies.
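To make the "mix and match" idea concrete, the sketch below attaches a LoRA adapter to a Hugging Face causal LM using the peft library. It is not the project's own entry point; the model name, rank, and other hyperparameters are placeholders.

```python
# Minimal LoRA sketch with Hugging Face transformers + peft (placeholder values).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "bigscience/bloom-560m"  # assumption: any causal LM supported by peft
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Only the low-rank adapter weights are trained; the base model stays frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update matrices
    lora_alpha=16,   # scaling applied to the adapter output
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Swapping the base model or the PEFT configuration is the kind of variation the platform's unified interface automates across LLaMA, ChatGLM, Bloom, MOSS, and InternLM.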
Quick Start & Requirements
Install dependencies with pip install -r requirements.txt (Python >= 3.9 is required for ChatGLM). For PEFT methods other than LoRA, install the bundled package with pip install -e ./peft.
Highlighted Details
Maintenance & Community
The last recorded update was about a year ago, and the project appears inactive.
Licensing & Compatibility
Limitations & Caveats
Loading models with load_in_8bit is reported to be incompatible.
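For reference, this caveat concerns 8-bit loading as exposed by transformers and bitsandbytes; the snippet below shows what that code path usually looks like (the model name is a placeholder), so falling back to full precision may be the safer default here.

```python
# Illustrative 8-bit load via transformers + bitsandbytes; this is the feature
# the caveat refers to, not a recommended configuration for this project.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # placeholder model name
    load_in_8bit=True,        # requires the bitsandbytes package
    device_map="auto",
)
```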