SlashGPT by receptron

LLM agent prototyping playground

created 2 years ago
275 stars

Top 94.9% on sourcepulse

Project Summary

SlashGPT is a framework for rapidly prototyping LLM agents and applications with natural language UIs. It allows developers to easily create new agents by defining manifest files, switch between them seamlessly, and integrate functionalities like ChatGPT plugins and code execution without writing extensive code.
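
To make the manifest idea concrete, here is a minimal sketch of what an agent definition might look like, assuming a YAML manifest; the file name and field names (title, description, model, temperature, prompt) are illustrative guesses based on the summary above, not the project's verified schema.

    # manifests/weather.yml -- hypothetical example; field names are assumptions
    title: Weather Bot
    description: Answers questions about current weather conditions
    model: gpt-3.5-turbo        # assumed default; PaLM 2 and Llama are also reported as supported
    temperature: 0.7
    prompt:
      - You are a helpful assistant that reports the current weather for a given city.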

How It Works

SlashGPT utilizes a "dispatcher" agent that routes user messages to appropriate specialized agents based on manifest files (JSON or YAML). Agents can be extended with custom logic via Python modules or declarative "actions," which support message templates, REST calls, GraphQL queries, data URLs, and event emission. This approach simplifies agent creation and integration, enabling complex workflows through configuration.
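
As a sketch of the declarative "actions" mechanism, a REST call and a message template might be attached to an agent roughly as follows; the keys shown (actions, type, method, url, message) are assumptions for illustration and may differ from the actual manifest schema.

    # hypothetical continuation of manifests/weather.yml -- structure assumed, not verified
    actions:
      get_weather:                # expected to match a function the LLM is allowed to call
        type: rest                # the README also lists GraphQL, data URLs, and event emission
        method: GET
        url: https://api.openweathermap.org/data/2.5/weather?q={city}&appid={OPENWEATHER_API_KEY}
      confirm:
        type: message_template    # fills a template with arguments returned by the LLM
        message: The weather in {city} is {description}.

A setup like this would let the dispatcher route a weather question to this agent and satisfy the function call entirely through configuration, with no custom Python module.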

Quick Start & Requirements

  • Install: pip install -r requirements/full.txt or pip install -r requirements.txt
  • Dependencies: OpenAI API key (required). Optional keys for PaLM, Replicate (Llama), CodeBox, Wolfram, OpenWeather, Noteable, Webpilot, Alchemy.
  • Execution: ./SlashGPT.py or via Docker (make docker).
  • Documentation: API docs can be generated with pdoc src/slashgpt or browsed on GitHub.

Highlighted Details

  • Supports multiple LLMs, including GPT-3.5, PaLM 2, and Llama (via Replicate), each configured with its own API token.
  • Features "Code Interpreter" agents capable of executing Python code and generating Jupyter notebooks.
  • Extensible "actions" system for integrating REST, GraphQL, data URLs, and message templates without Python coding.
  • Manifest files define agent behavior, prompts, models, and integrations.

Maintenance & Community

  • Maintenance appears to have slowed: the Health Check below shows the last commit was about a year ago, with no pull requests or issues in the past 30 days.
  • The README does not link to community resources (Discord/Slack) or a roadmap.

Licensing & Compatibility

  • The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

  • Some code interpreter agents (PaLM 2, Llama) cannot reliably execute the code they generate and require manual user intervention.
  • IPython's image display behavior differs from CodeBox when generating plots.
  • Streaming output is noted as "not yet implemented."

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

  • 2 stars in the last 90 days
