Research paper demo code for zero-shot planning with LLMs
Top 95.2% on sourcepulse
This project provides official code for "Language Models as Zero-Shot Planners," enabling Large Language Models (LLMs) like GPT-3 and Codex to generate action plans for complex tasks without fine-tuning. It targets researchers and developers working with embodied agents or AI planning, offering a method to extract actionable knowledge for task execution.
How It Works
The approach leverages LLMs as zero-shot planners by prompting them with a task description alongside an example task and its plan. The LLM generates a sequence of free-form steps, which are then mapped to the environment's admissible actions by semantic similarity. This method is advantageous because it requires no task-specific training data, relying solely on the knowledge and planning ability the LLM acquired during pre-training.
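The step-to-action mapping can be sketched as follows. This is a minimal illustration, not the repo's implementation: the official code scores candidates with a Sentence-BERT-style embedding model, while here a bag-of-words cosine similarity stands in so the example stays self-contained, and the toy action list is invented for the demo.

```python
import math
from collections import Counter

# Toy admissible-action set; the actual repo loads these from a JSON file.
ADMISSIBLE_ACTIONS = [
    "walk to kitchen",
    "open fridge",
    "grab milk",
    "close fridge",
]

def bow_vector(text):
    """Lowercased bag-of-words counts for a phrase."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def translate_step(free_form_step, actions=ADMISSIBLE_ACTIONS):
    """Map a free-form generated step to the most similar admissible action."""
    vec = bow_vector(free_form_step)
    return max(actions, key=lambda a: cosine(vec, bow_vector(a)))

# A raw LLM output like "go over to the kitchen" snaps to "walk to kitchen".
print(translate_step("go over to the kitchen"))
```

Swapping the bag-of-words scorer for learned sentence embeddings is what makes the matching robust to paraphrases in practice.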
Quick Start & Requirements
Create a conda environment (conda create --name language-planner-env python=3.6.13), activate it, and install dependencies (pip install -r requirements.txt). Open demo.ipynb for a walkthrough. The demo generates plans using the admissible actions in available_actions.json and the prompt examples in available_examples.json.
Highlighted Details
Maintenance & Community
The project is associated with authors from UC Berkeley, Carnegie Mellon University, and Google Brain. No specific community channels or roadmap are mentioned in the README.
Licensing & Compatibility
The README does not explicitly state a license. The code is provided for research purposes. Compatibility with commercial or closed-source applications is not specified.
Limitations & Caveats
The project requires specific Python (3.6.13) and CUDA (11.3) versions, which may be outdated. Performance is highly dependent on the chosen LLM and careful tuning of sampling hyperparameters. The action set is limited to what is defined in available_actions.json, so new task domains require manual updates.
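Extending the action set for a new domain can be as simple as appending phrases to the action file. The snippet below is a hedged sketch that assumes available_actions.json is a flat JSON list of action phrases (the repo's exact schema may differ); it demos against a temporary stand-in file rather than the real one.

```python
import json
import os
import tempfile

def add_actions(path, new_actions):
    """Append new action phrases to a JSON action list, skipping duplicates."""
    with open(path) as f:
        actions = json.load(f)
    for act in new_actions:
        if act not in actions:
            actions.append(act)
    with open(path, "w") as f:
        json.dump(actions, f, indent=2)
    return actions

# Demo against a temporary stand-in for available_actions.json.
tmp = os.path.join(tempfile.mkdtemp(), "available_actions.json")
with open(tmp, "w") as f:
    json.dump(["walk to kitchen", "open fridge"], f)

updated = add_actions(tmp, ["water the plants", "open fridge"])
print(updated)
```

Because generated steps are matched to this list by semantic similarity, any new action only needs a reasonably descriptive phrase, not an exhaustive set of paraphrases.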