CLI tool for streaming LLM output or script results directly into any text field
Plock enables users to interact with LLMs and other command-line scripts directly from any text input field, replacing selected text with streaming output. It targets developers and power users seeking a seamless, context-aware interface for AI-driven tasks, offering local-first operation and extensive customization.
How It Works
Plock uses a system of configurable "triggers" that map keyboard shortcuts to specific processes and prompts. These prompts can leverage environment variables, clipboard content (`$CLIPBOARD`), and selected text (`$SELECTION`). Processes can be local LLM engines like Ollama, custom shell scripts, or API calls. The output can be streamed directly to the screen, stored, or used to trigger subsequent actions, creating flexible automation workflows.
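As a sketch of how a custom script process might plug into this flow, the example below reads the selected text on stdin and streams a transformed version back chunk by chunk, flushing as it goes so a caller can replace the selection incrementally. The stdin/stdout contract here is an assumption for illustration, not plock's documented interface.

```python
import sys


def stream_response(selection: str):
    """Yield output chunks one at a time; a plock-style tool replaces
    the selected text with whatever the process streams to stdout."""
    for word in selection.upper().split():
        yield word + " "


if __name__ == "__main__":
    # Assumption: the selected text arrives on stdin; plock's actual
    # contract may differ (e.g. an argument or environment variable).
    text = sys.stdin.read()
    for chunk in stream_response(text):
        sys.stdout.write(chunk)
        sys.stdout.flush()  # flush per chunk so output streams incrementally
```

Flushing after every chunk matters for the streaming effect: without it, buffered stdout would deliver the whole result at once.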
Quick Start & Requirements
- Pull a local model via Ollama (e.g. `ollama pull openhermes2.5-mistral`).
- Install plock as a prebuilt binary or build it from source.

Highlighted Details
- Behavior is configured in `settings.json`, covering shortcuts, models, prompts, and chained triggers.

Maintenance & Community
The project is actively seeking contributions. Users are encouraged to report issues, suggest features, or submit pull requests. The primary developer is Jason McGhee.
Licensing & Compatibility
The repository does not explicitly state a license in the README. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
Windows support is experimental and requires workarounds for LLM backends. Linux support is untested. OCR integration was explored, but results with `rusty-tesseract` were disappointing.