Emacs extension for local LLM code completion
This project provides Emacs Lisp code that integrates large language models (LLMs) for code completion, enabling pair programming with a locally hosted model. It targets Emacs users who want capable, free, local code generation, offering more freedom and control than cloud-based services.
How It Works
Emacs Copilot runs an LLM as a subprocess, feeding it the current buffer's content and local editing history, and streams the generated tokens directly into the Emacs buffer, so completions appear in real time and can be interrupted. To keep the context small, it purges deleted code from the history whenever the deletion matches verbatim. It supports various LLMs via the llamafile executable, which bundles the model weights and runtime into a single file.
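The subprocess-and-streaming loop can be sketched in a few lines of Emacs Lisp. This is a simplified illustration only: the function name, command-line flags, and model path are assumptions, not the project's actual code.

```elisp
(defun my-llm-complete ()
  "Stream a completion from a local llamafile into the current buffer.
Sketch only: the name, flags, and model path are illustrative,
not Emacs Copilot's real API."
  (interactive)
  (let ((prompt (buffer-substring-no-properties (point-min) (point)))
        (insert-at (point-marker)))
    (make-process
     :name "llm-completion"
     ;; Hypothetical path; any llamafile works, since it bundles
     ;; the model weights and runtime in one executable.
     :command (list (expand-file-name "~/wizardcoder.llamafile")
                    "--temp" "0" "-p" prompt)
     ;; Each chunk of output is inserted as soon as it arrives,
     ;; which is what makes the completion stream in real time.
     :filter (lambda (_proc chunk)
               (with-current-buffer (marker-buffer insert-at)
                 (save-excursion
                   (goto-char insert-at)
                   (insert chunk)
                   (set-marker insert-at (point))))))))
```

Interruption falls out naturally from this design: killing the subprocess simply stops the filter from receiving further chunks.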
Quick Start & Requirements
- Download a llamafile (e.g., WizardCoder-Python-34b, 23.9 GB, LLaMA 2 license).
- Make the llamafile executable (chmod +x).
- Load the Emacs Lisp code in Emacs (M-x eval-buffer).
- Trigger a completion with C-c C-k.
- Some platforms may require .ape registration; Windows users might need to rename llamafile to llamafile.exe.
- CPU inference requires SSSE3 (Intel Core 2006+, AMD Bulldozer 2011+); ARM64 requires ARMv8a+.
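Assuming a POSIX shell, the setup steps boil down to roughly the following. A tiny placeholder script stands in for the multi-gigabyte model download, and the file name is illustrative:

```shell
# Stand-in for step 1: in reality you would download a llamafile such as
# WizardCoder-Python-34b (~23.9 GB); here a stub script takes its place.
printf '#!/bin/sh\necho "model loaded"\n' > wizardcoder.llamafile

# Step 2: the downloaded file must be marked executable before first use.
chmod +x wizardcoder.llamafile

# Run it; on Windows you may need to rename it to llamafile.exe instead.
./wizardcoder.llamafile   # prints: model loaded
```

With the real llamafile in place, the remaining steps happen inside Emacs (M-x eval-buffer to load the code, then C-c C-k to complete).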
Maintenance & Community
No specific community links (Discord/Slack) or roadmap are mentioned in the README. The project is maintained by jart.
Licensing & Compatibility
The Emacs Lisp code itself is not explicitly licensed. The llamafile executables are distributed under various licenses, including LLaMA 2 and the Microsoft Research License, which may restrict commercial use. Compatibility with closed-source linking depends on the specific LLM's license.
Limitations & Caveats
Requires significant disk space and computational resources for larger LLMs. Performance is hardware-dependent, with CPU-only inference being slower. Potential compatibility issues exist on certain OS configurations or with security software like CrowdStrike.