ICLR 2023 research paper code for binding language models in symbolic languages
Binder addresses the challenge of grounding large language models (LLMs) in symbolic reasoning and execution environments, enabling them to interact with and leverage external tools and knowledge bases. It targets researchers and developers working on LLM agents, program synthesis, and knowledge-grounded AI, offering a method to improve LLM performance with minimal symbolic annotations.
How It Works
Binder is a training-free neural-symbolic framework that binds LLMs to symbolic languages (e.g. SQL or Python). Given only a handful of in-context examples, the LLM maps a task input to a symbolic program in which sub-tasks that are hard to express symbolically are left as API calls back to the LLM itself. A deterministic interpreter then executes the program, effectively grounding the model's output in a verifiable and actionable format. The advantage lies in achieving state-of-the-art or comparable performance with significantly fewer annotations than traditional methods.
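The parse-then-execute loop above can be sketched in miniature. Everything here is illustrative: the `QA("question"; column)` syntax is modeled on the paper's embedded neural calls, but `answer_via_llm` is a toy rule-based stand-in for a real LLM API call, and `execute_binder_program` handles only a single filter, not the project's actual SQL/Python interpreters.

```python
import re

def answer_via_llm(question, values):
    """Hypothetical stand-in for an LLM API call that labels each value
    for `question` (here a toy rule so the sketch runs offline)."""
    return ["yes" if "north" in v.lower() else "no" for v in values]

def execute_binder_program(program, table):
    """Resolve one embedded QA() call, then apply it as a symbolic
    filter over a list-of-dicts table."""
    # Find an embedded neural call of the form QA("question"; column).
    match = re.search(r'QA\("([^"]+)";\s*(\w+)\)', program)
    question, column = match.group(1), match.group(2)
    labels = answer_via_llm(question, [row[column] for row in table])
    # Keep only the rows the "LLM" labeled yes.
    return [row for row, lab in zip(table, labels) if lab == "yes"]

table = [{"country": "North America/USA"}, {"country": "Europe/France"}]
program = 'SELECT * WHERE QA("is this in North America?"; country)'
print(execute_binder_program(program, table))
# → [{'country': 'North America/USA'}]
```

The point of the design is that the symbolic part (the filter) is verifiable and deterministic, while only the genuinely fuzzy sub-task is delegated to the model.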
Quick Start & Requirements
Create and activate the conda environment, then install the remaining dependency:

conda env create -f py3.7binder.yaml
conda activate binder
pip install records==0.5.3

Binder requires access to the OpenAI API (e.g. code-davinci-002, gpt-3.5-turbo, gpt-4-xxx); place your API key in key.txt.
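A minimal sketch of the key setup described above, assuming the README's key.txt layout. The `load_api_key` helper is hypothetical, and the commented usage assumes the pre-1.0 `openai` Python client; adjust for the client version you actually install.

```python
from pathlib import Path

def load_api_key(path="key.txt"):
    """Read the OpenAI API key from key.txt, stripping trailing whitespace."""
    return Path(path).read_text().strip()

# Hypothetical usage with the pre-1.0 openai client (not executed here):
# import openai
# openai.api_key = load_api_key()
# resp = openai.Completion.create(model="code-davinci-002", prompt="...", max_tokens=64)
```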
Maintenance & Community
The project was accepted to ICLR 2023. The primary contributors are listed in the citation. Further updates are planned to refactor code and support more models.
Licensing & Compatibility
The repository does not explicitly state a license. The code is provided for research purposes, and compatibility with commercial or closed-source applications is not specified.
Limitations & Caveats
The project's reliance on OpenAI's API means it is subject to their rate limits and model availability. The README notes past issues with OpenAI's request limitations and the deprecation of Codex models, indicating potential instability or the need for ongoing adaptation to API changes.