VSCode extension for LLM-powered code development
This VSCode extension provides LLM-powered code completion and generation for developers who want to integrate large language models into their workflow. Completions appear as inline "ghost text", aiming to enhance productivity by leveraging AI models.
How It Works
The extension uses llm-ls as its backend Language Server Protocol (LSP) implementation. It sends code context and prompts to configurable backends, including the Hugging Face Inference API, Ollama, OpenAI-compatible endpoints, and Text Generation Inference (TGI). Prompts are dynamically sized to fit the model's context window using tokenizers, and a code attribution feature checks generated code against "The Stack" dataset for potential licensing issues.
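To make the prompt-sizing step concrete, here is a minimal TypeScript sketch of trimming editor context to fit a token budget. It is an illustration only: it approximates tokens by character count and borrows StarCoder's fill-in-the-middle markers, whereas llm-ls performs this with real model tokenizers.

```typescript
// Illustrative sketch, not llm-ls's implementation.
const CHARS_PER_TOKEN = 4; // crude approximation in place of a real tokenizer

function buildPrompt(
  prefix: string,           // code before the cursor
  suffix: string,           // code after the cursor (for fill-in-the-middle models)
  contextWindow: number,    // model context size, in tokens
  reserveForOutput: number  // tokens left free for the completion itself
): string {
  const budgetChars = (contextWindow - reserveForOutput) * CHARS_PER_TOKEN;
  // Keep the text nearest the cursor: the tail of the prefix, the head of the suffix.
  const prefixBudget = Math.floor(budgetChars * 0.75);
  const suffixBudget = budgetChars - prefixBudget;
  const trimmedPrefix = prefix.slice(-prefixBudget);
  const trimmedSuffix = suffix.slice(0, suffixBudget);
  // StarCoder-style fill-in-the-middle markers, shown for illustration.
  return `<fim_prefix>${trimmedPrefix}<fim_suffix>${trimmedSuffix}<fim_middle>`;
}

console.log(buildPrompt("function add(a: number, b: number) {", "}", 8192, 256));
```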
Quick Start & Requirements
Install the extension from the VS Code marketplace. The default configuration uses the bigcode/starcoder model with the Hugging Face Inference API; sign in with your Hugging Face account via the command palette (Llm: Login).
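For context, a raw completion request against the Hugging Face Inference API's text-generation endpoint looks roughly like the sketch below. This is illustrative, not the extension's internal code; it assumes an HF_TOKEN environment variable and Node 18+ for the built-in fetch.

```typescript
// Illustrative sketch of a text-generation call to the HF Inference API.
const HF_TOKEN = process.env.HF_TOKEN; // your Hugging Face API token (assumption)

async function complete(prompt: string): Promise<string> {
  const res = await fetch(
    "https://api-inference.huggingface.co/models/bigcode/starcoder",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${HF_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        inputs: prompt,
        parameters: { max_new_tokens: 60, temperature: 0.2 },
      }),
    }
  );
  // e.g. 429 here when the free tier rate-limits you
  if (!res.ok) throw new Error(`Inference API error: ${res.status}`);
  const data = (await res.json()) as Array<{ generated_text: string }>;
  return data[0].generated_text;
}

complete("def fibonacci(n):").then(console.log);
```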
Maintenance & Community
The project is maintained by Hugging Face. Related repositories include huggingface-vscode-endpoint-server and llm-vscode-inference-server.
Licensing & Compatibility
The extension itself is likely under a permissive license, but usage of specific models via the Hugging Face Inference API may be subject to Hugging Face's terms of service and pricing tiers (e.g., PRO plan for higher rate limits).
Limitations & Caveats
Using the Hugging Face Inference API's free tier may result in rate limiting. The code attribution feature uses a Bloom filter, which can produce false positives; a second pass with a dedicated search tool is recommended for complete accuracy. Developing locally requires building the llm-ls binary, with potential platform-support issues if pre-built binaries are unavailable.
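To see why Bloom-filter-based attribution can flag code that is not actually in The Stack, consider this simplified TypeScript sketch (not llm-ls's implementation): distinct strings can hash to the same bit positions, so a "maybe" answer can be wrong, while a negative answer is always definitive.

```typescript
// Simplified Bloom filter illustrating false positives from hash collisions.
class BloomFilter {
  private bits: Uint8Array;
  constructor(private size: number, private hashes: number) {
    this.bits = new Uint8Array(size);
  }
  // Seeded FNV-1a-style string hash, for illustration only.
  private hash(value: string, seed: number): number {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return Math.abs(h) % this.size;
  }
  add(value: string): void {
    for (let s = 0; s < this.hashes; s++) this.bits[this.hash(value, s)] = 1;
  }
  mightContain(value: string): boolean {
    // true means "possibly present" (may be a false positive);
    // false is always definitive.
    for (let s = 0; s < this.hashes; s++) {
      if (this.bits[this.hash(value, s)] === 0) return false;
    }
    return true;
  }
}

const filter = new BloomFilter(1024, 3);
filter.add("console.log('hello')");
console.log(filter.mightContain("console.log('hello')")); // true (real match)
console.log(filter.mightContain("unrelated snippet"));    // usually false, but can collide
```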