Vim plugin for LLM-assisted code/text completion
llama.vim provides local LLM-assisted code completion within Vim, targeting developers who want intelligent suggestions without relying on cloud services. It leverages the llama.cpp backend for efficient inference, enabling powerful text generation and fill-in-the-middle (FIM) capabilities directly within the editor, even on less powerful hardware.
How It Works
The plugin integrates with a running llama.cpp server, which handles the heavy lifting of LLM inference. It employs a "ring context" mechanism to manage and reuse context from open files, edited buffers, and yanked text, allowing for very large effective contexts on resource-constrained systems. Speculative decoding and FIM support are core features, enabling faster and more accurate code completion.
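As an illustrative sketch, the amount of surrounding context gathered into the ring can be tuned through the plugin's `g:llama_config` dictionary; the option names and values below (`ring_n_chunks`, `ring_chunk_size`) are assumptions to be verified against `:help llama_config`.

```vim
" Illustrative .vimrc snippet: tune how much extra context is collected.
" Option names are assumed from :help llama_config -- verify in your install.
let g:llama_config = {
    \ 'ring_n_chunks':   16,
    \ 'ring_chunk_size': 64,
    \ }
```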
Quick Start & Requirements
- Install with vim-plug (`Plug 'ggml-org/llama.vim'`) or lazy.nvim.
- Requires a running llama.cpp server instance. Install llama.cpp via `brew install llama.cpp` on macOS, or build from source / use prebuilt binaries on other OSes.
- Configure through the `g:llama_config` variable in `.vimrc` (see `:help llama_config` and the sketch after this list).
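A minimal `.vimrc` sketch of the steps above, assuming vim-plug is already installed and a llama.cpp server is running locally; the `endpoint` value is an assumption and should match wherever your server actually listens.

```vim
" Minimal setup sketch (vim-plug). Assumes a llama.cpp server is already
" running locally; adjust the endpoint to match your server.
call plug#begin()
Plug 'ggml-org/llama.vim'
call plug#end()

" Optional overrides -- see :help llama_config for the full option list.
let g:llama_config = {
    \ 'endpoint': 'http://127.0.0.1:8012/infill',
    \ }
```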
Highlighted Details
Accept a suggestion with `Tab` and only its first line with `Shift+Tab`.
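If those defaults clash with other plugins, the keys can likely be remapped through `g:llama_config`; the option names below (`keymap_accept_full`, `keymap_accept_line`) are assumptions, not confirmed by the README, so check `:help llama_config` before relying on them.

```vim
" Hypothetical remapping sketch -- option names are assumptions.
let g:llama_config = {
    \ 'keymap_accept_full': '<Tab>',
    \ 'keymap_accept_line': '<S-Tab>',
    \ }
```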
Maintenance & Community
The project is associated with the ggml-org and llama.cpp ecosystems. Further details on community or roadmap are not explicitly provided in the README.
Licensing & Compatibility
The README does not explicitly state a license for llama.vim. llama.cpp itself is MIT-licensed, which is permissive for commercial use.
Limitations & Caveats
The plugin requires a separate llama.cpp server to be running and configured. Performance and suggestion quality are dependent on the chosen LLM and hardware capabilities.