plock by jasonjmcghee

CLI tool for streaming LLM output or script results directly into any text field

created 1 year ago
496 stars

Top 63.4% on sourcepulse

Project Summary

Plock enables users to interact with LLMs and other command-line scripts directly from any text input field, replacing selected text with streaming output. It targets developers and power users seeking a seamless, context-aware interface for AI-driven tasks, offering local-first operation and extensive customization.

How It Works

Plock utilizes a system of configurable "triggers" that map keyboard shortcuts to specific processes and prompts. These prompts can leverage environment variables, clipboard content ($CLIPBOARD), and selected text ($SELECTION). Processes can be local LLM engines like Ollama, custom shell scripts, or API calls. The output can be streamed directly to the screen, stored, or used to trigger subsequent actions, creating flexible automation workflows.
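The prompt-expansion step described above can be sketched in a few lines. This is an illustrative model only, not plock's actual implementation (plock is a compiled tool, and `expand_prompt` is a hypothetical name): it shows how a trigger's prompt template could pull in environment variables plus the `$SELECTION` and `$CLIPBOARD` placeholders before the resulting prompt is handed to a process.

```python
import os
import string


def expand_prompt(template: str, selection: str, clipboard: str) -> str:
    """Illustrative sketch of trigger prompt expansion.

    Expands $NAME-style placeholders in a prompt template using the
    environment, the current text selection, and the clipboard, as
    described in plock's docs. Names here are assumptions, not plock's
    actual internals.
    """
    context = dict(os.environ)          # environment variables are available
    context["SELECTION"] = selection    # text highlighted by the user
    context["CLIPBOARD"] = clipboard    # current clipboard contents
    # string.Template handles $NAME placeholders; safe_substitute leaves
    # unknown placeholders untouched instead of raising.
    return string.Template(template).safe_substitute(context)


# Example: a trigger prompt that rewrites the selected text
prompt = expand_prompt(
    "Improve the following text: $SELECTION",
    selection="teh quick brown fox",
    clipboard="",
)
# → "Improve the following text: teh quick brown fox"
```

The expanded prompt would then be piped to whatever process the trigger names (an Ollama model, a shell script, or an API call), with the streamed output replacing the selection.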

Quick Start & Requirements

  • Install Ollama and pull a model (e.g., ollama pull openhermes2.5-mistral).
  • Run plock (binary or built from source).
  • Requires Node.js (v14+) and Rust (v1.41+) for building.
  • macOS requires keyboard accessibility permissions. Linux may need X11 libs for clipboard/key simulation. Windows support is experimental and requires alternative LLM backends.
  • Official Demos: GPT-3.5/GPT-4, Ollama

Highlighted Details

  • Seamlessly integrates LLM output into any application via configurable global hotkeys.
  • Supports local LLMs (Ollama) and custom shell scripts for maximum flexibility.
  • Clipboard context integration allows for rich, context-aware prompting.
  • Highly customizable settings.json for shortcuts, models, prompts, and chained triggers.
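As a rough illustration of what such a configuration could look like, a trigger entry might resemble the sketch below. The field names (`shortcut`, `process`, `prompt`, `output`) are hypothetical placeholders, not plock's actual schema; the authoritative key names are documented in the settings.json reference in plock's README.

```json
{
  "triggers": [
    {
      "shortcut": "Cmd+Shift+Period",
      "process": ["ollama", "run", "openhermes2.5-mistral", "$PROMPT"],
      "prompt": "Continue this text: $SELECTION",
      "output": "stream-to-screen"
    }
  ]
}
```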

Maintenance & Community

The project is actively seeking contributions. Users are encouraged to report issues, suggest features, or submit pull requests. The primary developer is Jason McGhee.

Licensing & Compatibility

The repository does not explicitly state a license in the README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

Windows support is experimental and requires workarounds for LLM backends. Linux support is untested. OCR integration was explored, but results with rusty-tesseract were disappointing.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 5 stars in the last 90 days

Starred by Carol Willing (Core Contributor to CPython, Jupyter), Georgios Konstantopoulos (CTO, General Partner at Paradigm), and 13 more.
