shell-ai by ibigio

AI shell assistant for generating commands and code snippets

created 3 years ago
422 stars

Top 70.8% on sourcepulse

View on GitHub
1 Expert Loves This Project
Project Summary

ShellAI is an AI-powered command-line assistant designed to help developers and power users quickly find shell commands, code snippets, and explanations without leaving the terminal. It aims to significantly reduce the time spent searching for information online, offering a minimal and convenient user experience.

How It Works

ShellAI leverages large language models (LLMs) to interpret natural language queries and generate relevant shell commands or code snippets. It features a fast, syntax-highlighted interface, automatically extracts generated code and copies it to the clipboard, and allows follow-up questions to refine results. The system supports OpenAI's GPT models and can be extended to other providers and local LLMs via a config.yaml file.
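
A hedged sketch of that workflow, assuming the q binary referenced in the configuration notes below; the query and suggested command are illustrative, not captured output:

    # Ask for a command in natural language; shell-ai proposes candidates,
    # syntax-highlights them, and copies the extracted snippet to the clipboard.
    q "compress the logs directory into a tar.gz archive"
    # Illustrative suggestion: tar -czvf logs.tar.gz logs/
    # A follow-up question can then refine the result.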

Quick Start & Requirements

  • Install: Homebrew (brew install shell-ai) or script (curl ... | bash).
  • Prerequisites: OpenAI API key (or other LLM endpoint configuration).
  • Configuration: Set the OPENAI_API_KEY environment variable (see the setup sketch after this list). Advanced configuration for local models or Azure OpenAI is available via q config and direct file editing.
  • Docs: Custom Model Configuration
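
A minimal setup sketch based on the items above (the install-script URL is elided in the source and omitted here; the API key value is a placeholder):

    # Install via Homebrew
    brew install shell-ai

    # Provide an OpenAI API key (or configure another LLM endpoint)
    export OPENAI_API_KEY="sk-..."

    # Ask a first question
    q "list the 10 largest files under the current directory"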

Highlighted Details

  • Supports GPT-3.5 and GPT-4, with extensibility for local OSS models (e.g., via llama.cpp).
  • Features auto-extraction and clipboard copying of generated code.
  • Includes a q config revert command to restore previous configurations.
  • Allows customization of prompts and model endpoints in ~/.shell-ai/config.yaml (see the example after this list).
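
A hedged example of the configuration workflow above; q config and q config revert come from this summary, while the direct-edit step simply opens the stated file in a text editor:

    # Adjust settings via the built-in config command
    q config

    # Or edit prompts and model endpoints directly
    vi ~/.shell-ai/config.yaml

    # Restore the previous configuration if a change misbehaves
    q config revert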

Maintenance & Community

The project is maintained by @ilanbigio. Future development focuses on building a comprehensive configuration TUI and setting up model install templates.

Licensing & Compatibility

The repository does not explicitly state a license in the provided README.

Limitations & Caveats

Configuration for local models requires manual setup of LLM inference servers (e.g., llama.cpp) and careful prompt engineering. The configuration TUI is still under development, necessitating direct file editing for advanced setups.

Health Check

  • Last commit: 3 months ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 34 stars in the last 90 days

Explore Similar Projects

codex by openai (Top 0.8%, 32k stars)
Coding agent CLI tool for terminal-based chat-driven development
Created 3 months ago, updated 21 hours ago
Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Jeff Hammerbacher (cofounder of Cloudera), and 9 more.