bernardo-bruning/ollama-copilot: Proxy for local code completion, like GitHub Copilot
This project provides a proxy server that enables the use of Ollama models as a backend for GitHub Copilot-like code completion in IDEs. It targets developers seeking to leverage local, open-source LLMs for coding assistance, offering a cost-effective and privacy-preserving alternative to proprietary solutions.
How It Works
The proxy acts as an intermediary, intercepting requests from IDE plugins (such as copilot.vim, copilot.el, or VS Code's Copilot extension) and forwarding them to a running Ollama instance. It translates the Copilot API format into Ollama's expected input, allowing local models such as codellama:code to serve code suggestions while keeping all inference on your own machine.
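To make the translation concrete, here is a minimal sketch of the idea, not the project's actual implementation: the request shape and the endpoint path are illustrative assumptions, while the /api/generate body fields (model, prompt, stream) and default port 11434 match Ollama's real API.

```go
// Minimal sketch of the proxy idea, not the project's actual code.
// A Copilot-style completion request is accepted, translated into
// Ollama's /api/generate format, and relayed to a local Ollama server.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
)

// completionRequest is a simplified, hypothetical request shape;
// real Copilot plugins send a richer payload.
type completionRequest struct {
	Prompt string `json:"prompt"`
}

// ollamaRequest mirrors the body expected by Ollama's /api/generate.
type ollamaRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

func handleCompletion(w http.ResponseWriter, r *http.Request) {
	var req completionRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Translate the request and forward it to Ollama's default port.
	body, err := json.Marshal(ollamaRequest{
		Model:  "codellama:code",
		Prompt: req.Prompt,
		Stream: false,
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// A real proxy would reshape Ollama's JSON into the response format
	// the editor plugin expects; this sketch simply relays it.
	w.Header().Set("Content-Type", "application/json")
	io.Copy(w, resp.Body)
}

func main() {
	// The endpoint path is illustrative; actual plugins use their own routes.
	http.HandleFunc("/v1/completions", handleCompletion)
	log.Fatal(http.ListenAndServe(":11435", nil))
}
```

In practice the proxy must also handle streaming partial completions and response reshaping, which the sketch omits for brevity.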
Quick Start & Requirements
1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
2. Pull a code model: ollama pull codellama:code
3. Install the proxy: go install github.com/bernardo-bruning/ollama-copilot@latest, and make sure $HOME/go/bin is in your $PATH.
4. Start it with ollama-copilot, or OLLAMA_HOST="<ollama_host>" ollama-copilot to target a non-default Ollama host.
5. Point your editor's Copilot plugin at the proxy (http://localhost:11435, or 11437 for VS Code).
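As a configuration illustration (an assumption based on copilot.vim's documented proxy options rather than this project's README): in Vim, pointing copilot.vim at the proxy typically amounts to setting let g:copilot_proxy = 'http://localhost:11435' and let g:copilot_proxy_strict_ssl = v:false, since the proxy serves plain HTTP locally.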
Highlighted Details
Supported integrations: copilot.vim, copilot.el (experimental), and the VS Code Copilot extension.
Maintenance & Community
The project is maintained by Bernardo Bruning. Further community engagement channels are not explicitly listed in the README.
Licensing & Compatibility
The repository does not explicitly state a license, so suitability for commercial use or linking with closed-source projects is unspecified.
Limitations & Caveats
Emacs integration (copilot.el) is experimental. Windows setup and more comprehensive documentation are listed as roadmap items, so users on that platform may face setup hurdles.