code-interpreter by haseeb-heaven

An open-source code interpreter alternative: a CLI tool for code generation and execution

created 1 year ago
269 stars

Top 96.2% on sourcepulse

Project Summary

This project provides a free, open-source code interpreter that turns natural language instructions into executable code across multiple operating systems. It supports a wide array of large language models (LLMs), including GPT, Gemini, Claude, LLaMA, and HuggingFace models, enabling tasks from file manipulation and image processing to data analysis and graph creation.

How It Works

The interpreter leverages LLMs via the LiteLLM library to understand user instructions and generate code. It supports several modes (code, script, command, vision, chat) and programming languages (Python, JavaScript). Users can select specific models, execute generated code directly, save and edit code, and integrate local models via LM Studio or Ollama. Vision model support enables image and video processing.
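The core loop of a tool like this is: send the user's instruction to a model, extract the fenced code block from the reply, and execute it. Below is a minimal sketch of the extract-and-execute step, with the model call stubbed out (a real run would obtain `reply` from a LiteLLM `completion()` call); the function names are illustrative, not the project's actual API.

```python
import contextlib
import io
import re

def extract_code(llm_response: str) -> str:
    """Pull the first fenced code block out of an LLM reply."""
    match = re.search(r"```(?:python)?\n(.*?)```", llm_response, re.DOTALL)
    return match.group(1) if match else llm_response

def run_code(code: str) -> str:
    """Execute generated Python and capture its stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue()

# Stubbed model reply -- in the real tool this would come from the LLM.
reply = "Here is the code:\n```python\nprint(sum(range(5)))\n```"
print(run_code(extract_code(reply)))  # -> 10
```

The actual project adds safeguards around this step (saving, editing, and confirming code before execution), which a bare `exec()` sketch omits.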

Quick Start & Requirements

  • Install via pip: pip install open-code-interpreter
  • Requires API keys for supported LLM providers (OpenAI, Google AI Studio, HuggingFace, Groq AI, Anthropic AI).
  • API keys should be configured in a .env file.
  • Optional: LM Studio or Ollama for local model execution.
  • Official documentation: https://github.com/haseeb-heaven/code-interpreter
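The install and key setup steps above might look like the following. The environment variable names are illustrative assumptions; check the project README for the exact names it expects.

```shell
# Install the interpreter (assumes Python and pip are on PATH)
pip install open-code-interpreter

# Create a .env file with keys for the providers you plan to use.
# Variable names below are assumed examples, not confirmed by this summary.
cat > .env <<'EOF'
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
ANTHROPIC_API_KEY=...
EOF
```

Only the providers you actually invoke need keys; local models via LM Studio or Ollama do not require any.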

Highlighted Details

  • Supports 20+ LLMs, including GPT-3.5/4, Gemini Pro, Claude 3, Mixtral, and Code Llama.
  • Vision models for image and video processing.
  • Command history, editing, and direct execution features.
  • Cross-platform compatibility (Windows, macOS, Linux).
  • Ability to integrate custom API bases and add new models.
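Custom API bases typically work through LiteLLM's routing, which accepts a `provider/model` string and an `api_base` override. A sketch of pointing a request at a local Ollama server (model name and endpoint are illustrative; the request is built but not sent here):

```python
# Sketch: routing a request to a local Ollama server through LiteLLM.
# Model name and api_base are illustrative; adjust to your local setup.
request = {
    "model": "ollama/llama3",              # LiteLLM's provider/model syntax
    "api_base": "http://localhost:11434",  # Ollama's default endpoint
    "messages": [{"role": "user", "content": "Write a Python one-liner"}],
}
# A real call would be: litellm.completion(**request)
print(request["model"])
```

Swapping `api_base` is also how an OpenAI-compatible server such as LM Studio can be plugged in without code changes.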

Maintenance & Community

  • Actively maintained by Haseeb-Heaven.
  • Version history indicates frequent updates and feature additions.
  • Open to contributions via pull requests.

Licensing & Compatibility

  • Licensed under the MIT License.
  • Use of specific LLM models is subject to their respective providers' terms of service (OpenAI, Google, HuggingFace, Anthropic).

Limitations & Caveats

The project relies on external API keys for most models, and usage may incur costs depending on the provider. While it supports local models, setup requires additional configuration. The "free" aspect primarily refers to the interpreter software itself, not the underlying LLM API calls.

Health Check

  • Last commit: 2 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 90 days
