llm by simonw

CLI tool and Python library for LLM interaction

created 2 years ago
9,168 stars

Top 5.6% on sourcepulse

Project Summary

This project provides a command-line interface (CLI) and Python library for interacting with large language models (LLMs). It allows users to run prompts against both remote APIs and locally hosted models, store results, generate embeddings, and process various data types directly from the terminal. The primary audience is developers and power users who need a flexible tool for LLM experimentation and integration.

How It Works

The tool uses a plugin architecture to support a wide range of LLMs, including remote APIs such as OpenAI's and self-hosted models like Mistral. Plugins are installed with the llm install command (a wrapper around pip) to extend functionality. Prompts can be executed directly from the terminal, with results optionally logged to a SQLite database. The library also supports system prompts for instruction-following and, via specialized plugins, processing of multimedia content.
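The workflow above can be sketched as a short terminal session (the plugin and model names are illustrative; running this requires the plugin's model to be downloaded first):

```shell
# Install a plugin that adds local model support
llm install llm-gpt4all

# List the models now available, including those added by plugins
llm models

# Run a prompt; the prompt and response are logged to SQLite
llm -m mistral-7b-instruct-v0 "Summarize plugin architectures in one sentence"

# Review recently logged prompts and responses
llm logs
```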

Quick Start & Requirements

  • Install via pip: pip install llm or Homebrew: brew install llm.
  • An OpenAI API key can be set with llm keys set openai.
  • Local models require installing specific plugins (e.g., llm install llm-gpt4all).
  • Full documentation: llm.datasette.io
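Put together, a minimal first session looks like this (all commands are from the steps above; the final prompt requires the stored API key and network access):

```shell
# Install the CLI (Homebrew alternative: brew install llm)
pip install llm

# Store an OpenAI API key; this prompts for the key interactively
llm keys set openai

# Run a first prompt against the default model
llm "Ten fun names for a pet pelican"
```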

Highlighted Details

  • Supports remote APIs (e.g., OpenAI) and local models (e.g., Mistral 7B via plugins).
  • Enables running prompts against images, audio, and video using specific plugins.
  • Stores prompt results in SQLite for easy querying and analysis.
  • Offers chat sessions and system prompt capabilities for guided interactions.
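The same capabilities are exposed through the Python library. A hedged sketch of a system-prompted request (the model ID is illustrative, and an OpenAI key is assumed to be configured):

```python
import llm

# Fetch a model by ID; plugins can register additional models
model = llm.get_model("gpt-4o-mini")

# A system prompt guides the model's behavior for this request
response = model.prompt(
    "Describe SQLite in one sentence.",
    system="You are a terse technical writer.",
)
print(response.text())
```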

Maintenance & Community

The project is actively maintained by Simon Willison, with contributions from a community of developers. Further details on community engagement and roadmap can be found via links on the project's documentation site.

Licensing & Compatibility

The project is released under the MIT license, permitting commercial use and integration with closed-source applications.

Limitations & Caveats

The tool itself is versatile, but the performance and capabilities of any given LLM depend on the underlying model and, for local execution, on the available hardware. Some advanced features, such as multimedia processing, require installing specific plugins, which can be large.

Health Check

  • Last commit: 1 month ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 9
  • Issues (30d): 20
  • Star History: 1,929 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Joe Walnes (Head of Experimental Projects at Stripe), and 2 more.

prompttools by hegelai

Open-source tools for prompt testing and experimentation
3k stars · Top 0.3% · created 2 years ago · updated 11 months ago