Adapt LLMs instantly with textual descriptions
This repository provides a reference implementation for Text-to-LoRA (T2L), a method for adapting large language models (LLMs) to specific tasks using only textual descriptions. It targets researchers and practitioners seeking efficient LLM customization without extensive fine-tuning.
How It Works
T2L generates LoRA (Low-Rank Adaptation) adapters for LLMs from natural language task descriptions. A hypernetwork predicts the adapter weights by conditioning on the semantics of the description, so a model can be specialized rapidly without task-specific training data. This makes LLM specialization both more efficient and more flexible.
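To make the idea concrete, here is a minimal sketch (not the T2L implementation) of a hypernetwork that maps a task-description embedding to the A and B matrices of a LoRA adapter for a single linear layer. The dimensions, the two-layer MLP, and all variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not T2L's actual configuration)
d_model, rank, d_task = 768, 8, 384

class LoRAHyperNet(nn.Module):
    """Maps a task embedding to flattened LoRA weights A and B."""
    def __init__(self):
        super().__init__()
        n_params = 2 * d_model * rank  # A: (rank, d_model), B: (d_model, rank)
        self.mlp = nn.Sequential(
            nn.Linear(d_task, 512), nn.ReLU(), nn.Linear(512, n_params)
        )

    def forward(self, task_emb):
        flat = self.mlp(task_emb)
        A = flat[: d_model * rank].view(rank, d_model)
        B = flat[d_model * rank:].view(d_model, rank)
        return A, B

hypernet = LoRAHyperNet()
task_emb = torch.randn(d_task)      # stand-in for an encoded task description
A, B = hypernet(task_emb)

W = torch.randn(d_model, d_model)   # frozen base-model weight
W_adapted = W + B @ A               # LoRA update: W' = W + BA
print(W_adapted.shape)              # torch.Size([768, 768])
```

In the real method the hypernetwork is trained so that the predicted adapters match (or outperform) task-specific LoRAs; the sketch only shows the data flow from description embedding to adapter weights.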
Quick Start & Requirements
Installation uses uv and pip, including a specific flash-attention wheel:
git clone https://github.com/SakanaAI/text-to-lora.git
cd text-to-lora
# Install uv if not present
uv venv --python 3.10 --seed
uv sync
uv pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.3cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
uv pip install src/fishfarm
uv run huggingface-cli login
# Download the trained T2L checkpoints
uv run huggingface-cli download SakanaAI/text-to-lora --local-dir . --include "trained_t2l/*"
# Launch the demo web UI
uv run python webui/app.py
# Generate a LoRA adapter from a task description
uv run python scripts/generate_lora.py {T2l_DIRECTORY} {TASK_DESCRIPTION}
# Evaluate a base model with one or more generated adapters
uv run python scripts/run_eval.py --model-dir {base_model_dir} --lora-dirs {lora_dirs} --save-results --tasks {tasks}
Maintenance & Community
Last updated 3 months ago; the repository is currently listed as inactive.