AI agent playground using LangChain with Llama-based models
Top 94.9% on sourcepulse
This repository provides a playground for building AI agents using LangChain and Vicuna, a Llama-based LLM. It focuses on implementing zero-shot/few-shot prompts via the ReAct framework, enabling users to experiment with conversational AI and task execution.
How It Works
The project leverages the ReAct (Reasoning and Acting) framework, which combines language models with external tools. Agents can reason about a task, decide on an action (e.g., using a Python REPL, search engine), execute it, and then use the observation to refine their next step. It supports various Vicuna models, including quantized versions (4-bit GPTQ), and offers two backend options: oobabooga's Text Generation WebUI or a custom server with prompt logging.
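The reason/act/observe cycle described above can be sketched in plain Python. This is a minimal illustration, not this repository's implementation: the stub model (fake_llm) and the REPL-style tool (run_python_tool) are hypothetical stand-ins for a real Vicuna backend and LangChain tools.

```python
# Minimal sketch of the ReAct loop: the model proposes an action, the agent
# executes it, and the observation is fed back so the model can refine its
# next step. fake_llm and run_python_tool are illustrative stand-ins.

def fake_llm(prompt: str) -> str:
    # A real agent would call Vicuna here; this stub asks for the tool once,
    # then answers as soon as an observation appears in the prompt.
    if "Observation:" in prompt:
        return "Final Answer: 4"
    return "Action: python_repl\nAction Input: 2 + 2"

def run_python_tool(code: str) -> str:
    # Evaluate a Python expression and return the result as the observation.
    return str(eval(code))

def react_agent(task: str, max_steps: int = 3) -> str:
    prompt = f"Question: {task}"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse the requested action and run the chosen tool.
        action_input = reply.split("Action Input:")[1].strip()
        observation = run_python_tool(action_input)
        # Append the observation so the next model call can use it.
        prompt += f"\n{reply}\nObservation: {observation}"
    return "No answer within step limit"

print(react_agent("What is 2 + 2?"))  # prints 4
```

A real LangChain agent wires a model and tools into this same loop; the project's backends (Text Generation WebUI or the custom server) supply the model call.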
Quick Start & Requirements
chmod +x ./install_on_virtualenv_and_pip.sh && ./install_on_virtualenv_and_pip.sh

Or install manually:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip3 install -r requirements.txt
Maintenance & Community
The project is maintained by paolorechia. Further community engagement details are not explicitly provided in the README.
Licensing & Compatibility
The repository itself does not specify a license. However, it utilizes and acknowledges other projects, including GPTQ-for-LLaMa and FastChat, which have their own licenses. Compatibility for commercial use or closed-source linking would depend on the licenses of these underlying components and the models used.
Limitations & Caveats
The README notes that coding prompts are currently unreliable and model-dependent. The project's custom web server option is not recommended due to open bugs. Windows installation instructions are basic, and some environment variable handling may require adaptation. The "Code Editor Tool / Code-it task executor" is experimental.