emacs-copilot by jart

Emacs extension for local LLM code completion

created 1 year ago
747 stars

Top 47.4% on sourcepulse

View on GitHub
Project Summary

This project provides Emacs Lisp code that integrates large language models (LLMs) into Emacs for code completion, enabling pair programming with locally run models. It targets Emacs users who want advanced code generation that is free and runs entirely on their own machine, which the project presents as offering greater quality and freedom than cloud-based services.

How It Works

Emacs Copilot runs an LLM as a subprocess, feeding it the current buffer's contents along with a local editing history. Generated tokens are streamed directly into the Emacs buffer, so completions appear in real time and can be interrupted. To keep the context relevant, code the user deletes is purged from the history when it matches verbatim. Any model packaged as a llamafile executable, which bundles the model weights and runtime into a single file, can be used.
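
A minimal sketch of this subprocess-plus-streaming pattern, assuming a llamafile in the working directory; the function name, path, and flags below are illustrative, not the extension's actual code:

```elisp
;; Minimal sketch of the approach described above, assuming a llamafile
;; in the working directory. Names and flags are illustrative, not the
;; extension's actual code.
(defun my/llm-complete ()
  "Send the buffer up to point to a local llamafile; stream tokens back."
  (interactive)
  (let* ((prompt (buffer-substring-no-properties (point-min) (point)))
         (buf (current-buffer))
         (proc (start-process
                "llm" nil "./wizardcoder-python-34b.llamafile"
                "--temp" "0" "--silent-prompt" "-p" prompt)))
    ;; Insert each chunk of stdout at point as it arrives, so the
    ;; completion streams in token by token; `kill-process' interrupts it.
    (set-process-filter
     proc (lambda (_proc chunk)
            (with-current-buffer buf (insert chunk))))))
```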

Quick Start & Requirements

  • Install Emacs (a single-file build is provided).
  • Download a compatible llamafile (e.g., WizardCoder-Python-34b, 23.9 GB, LLaMA 2 license).
  • Make the llamafile executable (chmod +x).
  • Evaluate the provided Emacs Lisp code (M-x eval-buffer); see the init-file sketch after this list.
  • Trigger completion with C-c C-k.
  • Prerequisites: macOS requires Xcode. Some Linux systems may need the APE binary format registered (e.g., with binfmt_misc). Windows users might need to rename the llamafile to llamafile.exe. x86 CPUs require SSSE3 (Intel Core 2006+, AMD Bulldozer 2011+); ARM64 requires ARMv8-a or later.
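
As a rough illustration of the last two steps, the extension could be loaded from init.el and invoked with the documented keybinding. The file path and the command name `copilot-complete` are assumptions for illustration, not taken verbatim from the README:

```elisp
;; Hypothetical init.el setup; adjust the path, and check the README for
;; the real command name (`copilot-complete' is assumed here).
(load "~/src/emacs-copilot/copilot.el")
(global-set-key (kbd "C-c C-k") 'copilot-complete)  ; trigger completion
```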

Highlighted Details

  • LLM context includes local editing history, enabling recall of previously generated code.
  • Code generation is tuned to stop at function completion, avoiding extraneous commentary.
  • Supports streaming token output for low-latency completions.
  • Language agnostic: the target language is determined by the file extension (one illustrative approach is sketched after this list).
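
The README does not say how the file extension is used internally; the sketch below shows one plausible way to turn an extension into a language hint for the prompt, with a hypothetical helper name:

```elisp
;; Purely illustrative: map the buffer's file extension to a language
;; name that could be included in the LLM prompt. Not the extension's
;; actual mechanism; the helper name is hypothetical.
(defun my/guess-language ()
  "Guess the programming language of the current buffer from its extension."
  (let ((ext (file-name-extension (or (buffer-file-name) ""))))
    (cond ((equal ext "py") "Python")
          ((member ext '("c" "h")) "C")
          ((equal ext "el") "Emacs Lisp")
          (ext ext)             ; fall back to the raw extension
          (t "plain text"))))   ; e.g., buffers not visiting a file
```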

Maintenance & Community

No specific community links (Discord/Slack) or roadmap are mentioned in the README. The project is maintained by jart.

Licensing & Compatibility

The Emacs Lisp code itself is not explicitly licensed. The llamafile executables are distributed under various licenses, including LLaMA 2 and Microsoft Research License, which may have restrictions on commercial use. Compatibility with closed-source linking depends on the specific LLM's license.

Limitations & Caveats

Larger LLMs require significant disk space and computational resources. Performance is hardware-dependent, and CPU-only inference is slower than GPU-accelerated inference. Potential compatibility issues exist on certain OS configurations or with security software such as CrowdStrike.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
Star History

9 stars in the last 90 days

Explore Similar Projects

Starred by Andrej Karpathy (Founder of Eureka Labs; Formerly at Tesla, OpenAI; Author of CS 231n), Anil Dash (Former CEO of Glitch), and 15 more.

llamafile by Mozilla-Ocho

Single-file LLM distribution and runtime via `llama.cpp` and Cosmopolitan Libc

created 1 year ago
updated 1 month ago
23k stars

Top 0.2% on sourcepulse