LSP-AI provides a unified, open-source language server backend for AI-powered coding assistance, targeting software engineers who want to integrate LLM features into their existing LSP-compatible editors. It aims to democratize advanced AI coding tools by abstracting complex LLM integrations, enabling broader editor support and simplified plugin development.
How It Works
LSP-AI acts as a central hub for AI functionalities, abstracting LLM backend complexities and prompt engineering. It communicates with editors via the Language Server Protocol (LSP), allowing any LSP-supported editor to leverage its features. This approach centralizes development effort, enabling a single backend to power AI features across multiple editors, simplifying plugin creation and fostering community collaboration.
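Concretely, the editor launches lsp-ai as an ordinary language server process and hands it a JSON configuration, typically through LSP initialization options. The snippet below is a minimal hypothetical sketch assuming a local Ollama backend; the memory/models/completion structure reflects the general shape of the project's documented configuration, but exact key names and defaults may differ by version (inline comments are annotation only, not valid JSON):

{
  "memory": { "file_store": {} },          // built-in file-based context store
  "models": {
    "model1": { "type": "ollama", "model": "llama3" }  // a named local model
  },
  "completion": {
    "model": "model1",                     // route completion requests to that model
    "parameters": { "max_context": 2048 }
  }
}

Once configured, any LSP-capable editor requests completions or chat through standard protocol messages, with lsp-ai handling prompt construction and backend calls.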
Quick Start & Requirements
- Installation:
cargo install lsp-ai
- Prerequisites: a Rust toolchain (for cargo install) and an LSP-compatible editor. A configured LLM backend (e.g., Ollama, llama.cpp, or an OpenAI API key) is required for functionality; see the sketch after this list.
- Documentation: https://github.com/SilasMarvin/lsp-ai/wiki
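For hosted backends, API credentials are typically read from environment variables rather than stored in the configuration file. Below is a hypothetical OpenAI-backed configuration; the open_ai type and auth_token_env_var_name key follow the pattern of the project's documented examples, but exact names should be verified against the wiki (comments are annotation only):

{
  "memory": { "file_store": {} },
  "models": {
    "model1": {
      "type": "open_ai",
      "chat_endpoint": "https://api.openai.com/v1/chat/completions",
      "model": "gpt-4o",
      "auth_token_env_var_name": "OPENAI_API_KEY"  // key read from the environment, not the file
    }
  },
  "completion": {
    "model": "model1",
    "parameters": { "max_tokens": 64 }
  }
}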
Highlighted Details
- Supports in-editor chatting with LLMs and AI-powered code completions.
- Offers custom actions for code refactoring and other tasks using LLMs.
- Compatible with many editors, including VS Code, Neovim, Emacs, Helix, and Sublime Text.
- Flexible backend support: llama.cpp, Ollama, and the OpenAI, Anthropic, Gemini, and Mistral AI APIs (see the backend sketch below).
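Because each backend is declared as a named entry in the models map, switching providers is a configuration change rather than an editor-plugin change. The entries below are illustrative sketches only; the type values match the backends listed above, but the per-backend keys (file_path, n_ctx, and so on) are assumptions to check against the wiki:

{
  "models": {
    "local": {                             // local inference via llama.cpp
      "type": "llama_cpp",
      "file_path": "/path/to/model.gguf",
      "n_ctx": 2048
    },
    "hosted": {                            // hosted inference via Anthropic's API
      "type": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "auth_token_env_var_name": "ANTHROPIC_API_KEY"
    }
  }
}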
Maintenance & Community
- The project is in active daily use, but the author considers it feature-complete for their needs; no new features are currently in development.
- Community engagement via Discord: https://discord.gg/9p7h7f6x
- The stated roadmap lists semantic-search-based context gathering, Tree-sitter integration, and additional backend support.
Licensing & Compatibility
- License: MIT, a permissive license that allows commercial use and integration into closed-source projects.
Limitations & Caveats
- Development is in maintenance mode: the primary author is not adding new features, though community contributions are welcomed.
- Performance of the AI features depends on the chosen LLM backend and the hardware it runs on.