Framework for self-adapting language models
Top 44.5% on SourcePulse
SEAL (Self-Adapting LLMs) is a framework for training language models to generate self-edits, such as finetuning data or update directives, in response to new inputs. It targets researchers and practitioners who want LLMs to continually learn and adapt to new information and tasks without manual intervention, and is demonstrated on general-knowledge incorporation and few-shot task adaptation.
How It Works
SEAL uses reinforcement learning (RL) to train language models to produce self-editing actions. The model learns a policy for generating updates from new data, effectively creating a self-improving loop. The framework is designed to be flexible, supporting adaptation for both factual-knowledge integration and few-shot learning scenarios.
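As a rough illustration of that loop, the sketch below is purely schematic and not the repository's actual training code; every name in it (generate_self_edit, apply_self_edit, evaluate, rl_step) is a hypothetical stand-in. The assumed shape is: the model proposes a self-edit for a new input, the edit is applied as an inner weight update, and the change in downstream performance acts as the reward signal for the edit-generation policy.

# Schematic sketch of a SEAL-style outer loop. All names are hypothetical
# stand-ins; real inner updates would be finetuning steps on model weights.
from dataclasses import dataclass
import random

@dataclass
class SelfEdit:
    text: str  # e.g. generated finetuning data or an update directive

def generate_self_edit(model_state: dict, new_input: str) -> SelfEdit:
    # Stand-in for sampling a self-edit from the current policy.
    return SelfEdit(text=f"finetune on: {new_input}")

def apply_self_edit(model_state: dict, edit: SelfEdit) -> dict:
    # Stand-in for the inner update (e.g. briefly finetuning on the self-edit).
    updated = dict(model_state)
    updated["edits"] = model_state.get("edits", []) + [edit.text]
    return updated

def evaluate(model_state: dict, task: str) -> float:
    # Stand-in for downstream evaluation (knowledge QA or a few-shot task).
    return random.random()

def rl_step(model_state: dict, new_input: str, task: str) -> float:
    baseline = evaluate(model_state, task)
    edit = generate_self_edit(model_state, new_input)
    adapted = apply_self_edit(model_state, edit)
    # Reward: how much the self-edit improved downstream performance.
    reward = evaluate(adapted, task) - baseline
    # A real RL step would now update the edit-generation policy with this reward.
    return reward

if __name__ == "__main__":
    state = {"edits": []}
    print(rl_step(state, "a new factual passage", "downstream QA"))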
Quick Start & Requirements
Install dependencies with:

pip install -r requirements.txt

Credentials, including the OpenAI API key noted under Limitations & Caveats, are supplied via a .env file.
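As a minimal configuration sketch, assuming the scripts read OPENAI_API_KEY from the environment and that python-dotenv is available (both are assumptions, not details confirmed by the repository):

# Minimal sketch: load the OpenAI API key from a local .env file before running
# any experiment scripts. OPENAI_API_KEY and python-dotenv are assumptions here.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")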
Highlighted Details
Maintenance & Community
The project is associated with MIT CSAIL and lists authors Adam Zweiger, Jyothish Pari, Han Guo, Ekin Akyürek, Yoon Kim, and Pulkit Agrawal.
Licensing & Compatibility
The repository does not explicitly state a license.
Limitations & Caveats
The setup and experimental configurations are tuned for specific hardware (2x A100/H100 GPUs) and may require significant refactoring for other setups. An OpenAI API key is required to run the framework.
Last updated 1 month ago; currently marked Inactive.