Local CLI copilot using Ollama for command-line assistance
Top 29.0% on sourcepulse
tlm provides local, offline command-line assistance powered by open-source LLMs via Ollama. It targets developers and power users seeking an alternative to cloud-based AI assistants, offering features like command suggestion, explanation, and context-aware Q&A without requiring API keys or internet connectivity.
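As a rough sketch of what that assistance looks like in practice, the commands below assume suggest and explain subcommands matching the features named above; only the ask command is explicitly confirmed later in this summary, so treat the exact names as assumptions and check tlm --help.

```sh
# Hypothetical usage sketch: subcommand names are assumed, not confirmed by this summary.
# Ask for a shell one-liner instead of searching the web:
tlm suggest "compress all .log files in this directory into one tar.gz"

# Ask for a plain-language explanation of an unfamiliar command before running it:
tlm explain "find . -type f -mtime +30 -delete"
```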
How It Works
tlm integrates with Ollama to leverage various open-source models (e.g., Llama 3, Phi-4, DeepSeek-R1, Qwen) directly on the user's machine. It supports automatic shell detection for seamless integration and offers a Retrieval Augmented Generation (RAG) capability for context-aware queries, allowing users to provide local file paths or patterns for more relevant responses.
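A minimal sketch of that RAG flow follows; the ask subcommand is confirmed (as beta) later in this summary, but the --context and --include flags are assumptions about how file paths and patterns are supplied, so verify them with tlm ask --help.

```sh
# Context-aware Q&A over local files (flag names are assumed, not confirmed):
# --context points tlm at a directory, --include narrows it to matching files.
tlm ask --context . --include "*.go" "where is the Ollama client configured?"
```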
Quick Start & Requirements
Installation is via a one-line script (curl ... | sudo -E bash for Linux/macOS, Invoke-RestMethod ... | Invoke-Expression for Windows PowerShell) or via go install github.com/yusufcanb/tlm@1.2 if Go 1.22+ is installed.
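Because tlm relies on models served through Ollama, a local Ollama instance must be available before tlm can respond; the snippet below is a minimal sketch of that prerequisite, and the llama3 model tag is only an example since tlm may guide model selection on first run.

```sh
# Prerequisite: a running Ollama server with at least one model pulled.
ollama serve &          # skip if Ollama already runs as a background service
ollama pull llama3      # example tag; the summary above also lists Phi-4, DeepSeek-R1, and Qwen
```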
Highlighted Details
Maintenance & Community
The project is maintained by yusufcanb. No specific community channels or roadmap links are provided in the README.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The ask command is marked as beta. The README does not include model performance benchmarks, and compatibility details for specific hardware configurations are not spelled out beyond general support for macOS, Linux, and Windows.