iagooar/qqqa: LLM-powered CLI for shell assistance
Top 59.0% on SourcePulse
Summary
qqqa is a pair of fast, stateless CLI tools (qq for questions, qa for agent tasks) designed to integrate LLM assistance directly into the shell. It offers a ceremony-free, shell-friendly workflow for developers and power users, enabling quick LLM-backed queries and single-step command execution.
How It Works
The project provides two binaries: qq for stateless, read-only questions, and qa for single-step agent tasks that involve file I/O or command execution and always require user confirmation. Its core philosophy is statelessness, favoring simple, reproducible, pipe-friendly interactions in line with Unix principles. qqqa supports multiple LLM providers (OpenRouter, OpenAI, Groq, Ollama) via configuration, with safety rails around file access and command execution to prevent unintended actions.
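A minimal usage sketch of the two binaries, based on the behavior described above. The prompts are illustrative, and piping context into qq is an assumption suggested by the stateless, pipe-friendly design rather than a documented interface:

    # Ask a read-only question; qq answers and exits, keeping no session state.
    qq "how do I find the largest files in this directory?"

    # Assumed: pipe context in, Unix-style, since invocations are stateless and pipe-friendly.
    cat build.log | qq "why did this build fail?"

    # Run a single-step agent task; qa performs at most one tool step and asks for
    # confirmation before touching files or executing a command.
    qa "rename all .jpeg files in this directory to .jpg"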
Quick Start & Requirements
Installation is straightforward: macOS users can brew tap iagooar/qqqa && brew install qqqa, while Linux users download prebuilt archives from GitHub Releases. Configuration is handled interactively via qq --init or qa --init, which sets up ~/.qq/config.json, prompts for provider selection (OpenRouter, Groq, OpenAI, Ollama), and optionally stores API keys. Local Ollama integration is supported, requiring the Ollama runtime.
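For reference, the setup steps described above as commands (the Homebrew tap and the --init flow come from the project documentation; Linux users instead download a prebuilt archive from GitHub Releases):

    # macOS (Homebrew)
    brew tap iagooar/qqqa && brew install qqqa

    # Interactive setup: pick a provider (OpenRouter, Groq, OpenAI, Ollama),
    # optionally store an API key, and write ~/.qq/config.json
    qq --init    # qa --init works the same way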
Maintenance & Community
The README does not detail specific contributors, sponsorships, or community channels like Discord/Slack. Development guidelines are available in CONTRIBUTING.md.
Licensing & Compatibility
qqqa is licensed under the MIT license, permitting broad use, modification, and distribution, including for commercial purposes.
Limitations & Caveats
Anthropic provider integration is currently a stub and not functional. The qa agent's command execution is restricted by a default allowlist and requires explicit confirmation for potentially risky operations, blocking pipelines and redirection by default. Local Ollama execution is noted as slower than hosted cloud providers. Each qa invocation performs at most one tool step.