SuperAdapters by cckuailong

CLI tool for finetuning and inference of LLMs using adapters

created 2 years ago
325 stars

Top 85.0% on sourcepulse

View on GitHub
Project Summary

SuperAdapters provides a unified framework for fine-tuning a wide range of Large Language Models (LLMs) using various adapter techniques. It aims to simplify the process of adapting LLMs for diverse tasks and hardware, catering to researchers and developers working with LLMs.

How It Works

The library supports multiple adapter methods including LoRA, QLoRA, AdaLoRA, Prefix Tuning, P-Tuning, and Prompt Tuning. It offers a flexible architecture that allows users to select specific LLM architectures (e.g., LLaMA, ChatGLM, Qwen, Mixtral) and apply different fine-tuning strategies. The framework handles data loading from files or databases and supports both sequence-to-sequence and classification tasks.
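
The core idea behind adapter methods such as LoRA can be sketched in a few lines: the frozen base weight W is augmented with a trainable low-rank product A·B, scaled by alpha/r, so only a small number of parameters are trained. The plain-Python illustration below is conceptual only and does not reflect SuperAdapters' actual implementation:

```python
# Conceptual sketch of the LoRA update that adapter methods build on:
# the frozen weight W is augmented by a trainable low-rank product A.B,
# scaled by alpha/r. Plain-Python illustration, not SuperAdapters' code.

def matmul(X, Y):
    """Naive matrix multiply for small demo matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_forward(x, W, A, B, alpha, r):
    """Compute x.W + (alpha / r) * x.A.B  (LoRA-adapted linear layer)."""
    base = matmul(x, W)                  # frozen base projection
    delta = matmul(matmul(x, A), B)      # low-rank trainable update
    scale = alpha / r
    return [[base[i][j] + scale * delta[i][j]
             for j in range(len(base[0]))] for i in range(len(base))]

# Tiny example: 1x2 input, 2x2 frozen weight, rank-1 adapter (r=1).
x = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity here)
A = [[1.0], [1.0]]             # 2x1 down-projection (trainable)
B = [[0.5, 0.5]]               # 1x2 up-projection (trainable)
print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # → [[2.5, 3.5]]
```

Methods like QLoRA and AdaLoRA vary this recipe (quantized base weights, adaptive rank allocation), while Prefix/P-/Prompt Tuning instead train virtual tokens; the framework exposes all of these behind one interface.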

Quick Start & Requirements

  • Install: pip install -r requirements.txt
  • Prerequisites: Python 3.10.0 is recommended; on macOS, Python must be built with XZ support. GPU acceleration on macOS requires specific PyTorch nightly builds.
  • Data: Supports data from files or MySQL databases.
  • Models: Links to Hugging Face model repositories are provided for various LLMs.
  • Usage: Examples for fine-tuning and inference with different models and adapters are available. See Usage Examples.

Highlighted Details

  • Supports fine-tuning on Windows, Linux, and Mac M1/2.
  • Offers a unified interface for multiple adapter types and LLM architectures.
  • Includes tools for combining base models with adapter weights for easier deployment.
  • Provides options for web-based demos and API endpoints for inference.
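
Merging a base model with its adapter weights, as the deployment tooling above describes, amounts to folding the low-rank update into the frozen weight matrix so inference needs no adapter code at all. A minimal sketch of that fold (plain Python for illustration, not SuperAdapters' actual merge routine):

```python
def merge_lora(W, A, B, alpha, r):
    """Fold a LoRA update into the base weight: W' = W + (alpha / r) * A.B.
    After merging, the layer runs as a plain dense weight with no adapter."""
    scale = alpha / r
    rows, cols, rank = len(W), len(W[0]), len(B)
    delta = [[sum(A[i][t] * B[t][j] for t in range(rank)) for j in range(cols)]
             for i in range(rows)]
    return [[W[i][j] + scale * delta[i][j] for j in range(cols)]
            for i in range(rows)]

# Rank-1 example: 2x2 base weight, 2x1 and 1x2 adapter factors.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [1.0]]
B = [[0.5, 0.5]]
print(merge_lora(W, A, B, alpha=1.0, r=1))  # → [[1.5, 0.5], [0.5, 1.5]]
```

The merged matrix is a drop-in replacement for the original weight, which is why merging simplifies deployment: the serving stack sees an ordinary checkpoint.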

Maintenance & Community

The project references other repositories like LLM-Adapters and Alpaca-CoT, suggesting community engagement and potential for shared development. No specific community channels or active maintainer information is detailed in the README.

Licensing & Compatibility

The README does not explicitly state a license. Given the references to other open-source projects, it is likely to be permissively licensed, but this requires verification. Compatibility for commercial use is not specified.

Limitations & Caveats

QLoRA is not supported on Mac. Users on macOS may need to recompile Python with XZ support for full functionality. The README does not detail performance benchmarks or specific hardware requirements beyond general OS and GPU considerations.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 8 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems).

JittorLLMs by Jittor

2k stars
Low-resource LLM inference library
created 2 years ago, updated 5 months ago
Starred by Patrick von Platen (Core Contributor to Hugging Face Transformers and Diffusers), Michael Han (Cofounder of Unsloth), and 1 more.

ktransformers by kvcache-ai

15k stars
Framework for LLM inference optimization experimentation
created 1 year ago, updated 2 days ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Nat Friedman (Former CEO of GitHub), and 32 more.

llama.cpp by ggml-org

84k stars
C/C++ library for local LLM inference
created 2 years ago, updated 16 hours ago