Local typing assistant powered by Ollama
This project provides a lightweight, AI-powered local typing assistant designed to correct typos and improve text quality using a local Large Language Model (LLM) via Ollama. It's ideal for users who want to enhance their typing speed and accuracy by offloading text correction to an LLM without relying on cloud services.
How It Works
The assistant runs as a background script, monitoring global hotkeys. Upon activation, it captures the current line or selected text, sends it to a locally running Ollama model (e.g., Mistral 7B Instruct) with a specific prompt for text correction, and then replaces the original text with the LLM's output. This approach leverages the LLM's natural language understanding for efficient and context-aware text refinement.
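The correction step described above can be sketched as follows, assuming Ollama is serving its default HTTP API on localhost:11434. The project itself uses httpx, but this self-contained sketch uses the standard library instead; the prompt wording and the `fix_text` helper name are illustrative, not the project's exact code.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "mistral:7b-instruct-v0.2-q4_K_S"           # model from the Quick Start

def build_prompt(text: str) -> str:
    """Wrap the captured text in a typo-correction instruction for the LLM."""
    return (
        "Fix all typos, casing, and punctuation in this text, "
        "but preserve all newline characters. "
        "Return only the corrected text, without a preamble.\n\n"
        f"{text}"
    )

def fix_text(text: str) -> str:
    """Send the captured text to the local Ollama model and return its correction."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_prompt(text),
        "stream": False,  # request one complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    # Requires a running Ollama instance with the model pulled.
    print(fix_text("thsi is a tset sentnce"))
```

In the full assistant, the hotkey handler selects the current line or selection, copies it via pyperclip, passes it through `fix_text`, and pastes the result back over the original text.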
Quick Start & Requirements
Pull and start the model: ollama run mistral:7b-instruct-v0.2-q4_K_S
Install the Python dependencies: pip install pynput pyperclip httpx
Run the assistant: python main.py
The hotkeys are configured for macOS; on Linux and Windows, change Key.cmd to Key.ctrl in the script.
Highlighted Details
Maintenance & Community
This is a personal project by Patrick Loeber, with a demo and explanation available on his YouTube channel. No community interaction channels are specified.
Licensing & Compatibility
The repository does not explicitly state a license. Compatibility for commercial use or linking with closed-source projects is not specified.
Limitations & Caveats
macOS-specific hotkey configurations might require manual adjustments for Linux and Windows. Initial setup on macOS may require granting accessibility and input monitoring permissions to the script's execution environment.
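One way to make the hotkey portable across platforms is to select the modifier from `sys.platform`, sketched here with pynput-style hotkey strings; the helper name and the `j` trigger key are illustrative assumptions, not the project's actual binding.

```python
import sys

def hotkey_for(platform: str) -> str:
    """Return a pynput GlobalHotKeys combination string for the given sys.platform."""
    # macOS ("darwin") conventionally uses the Command key;
    # Linux and Windows use Ctrl instead.
    modifier = "<cmd>" if platform == "darwin" else "<ctrl>"
    return f"{modifier}+j"

if __name__ == "__main__":
    print(hotkey_for(sys.platform))
    # Registering the hotkey would then look like:
    # from pynput import keyboard
    # with keyboard.GlobalHotKeys({hotkey_for(sys.platform): on_activate}) as h:
    #     h.join()
```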