Neovim plugin for local LLM access
Top 94.1% on SourcePulse
nvim-llama provides Neovim interfaces for interacting with Large Language Models (LLMs) locally via Ollama. It targets Neovim users who want to integrate LLM capabilities directly into their development workflow, offering a convenient way to chat with and utilize AI models within the editor.
How It Works
The plugin leverages Docker to manage and run LLM models and clients, abstracting away complex setup and dependencies. This Docker-centric approach ensures cross-platform compatibility (macOS, Linux, Windows) and simplifies model management, as specified models are automatically downloaded and managed by Ollama.
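In concrete terms, the plugin's Docker management amounts to something like the sketch below. This is an illustration only, based on the official ollama/ollama image and its default port; the container name and volume mount are assumptions, not the plugin's documented internals.

-- Illustrative sketch: roughly the Docker invocation nvim-llama automates.
-- The image and port 11434 come from the official Ollama Docker docs;
-- the container name and volume mount are assumptions.
vim.fn.system({
  'docker', 'run', '-d',
  '--name', 'ollama',            -- assumed container name
  '-v', 'ollama:/root/.ollama',  -- persist downloaded models across restarts
  '-p', '11434:11434',           -- Ollama's default API port
  'ollama/ollama',
})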
Quick Start & Requirements
Install with your plugin manager of choice, for example with packer:

use 'jpmcb/nvim-llama'

Then initialize the plugin:

require('nvim-llama').setup({})

setup() accepts optional model and debug parameters. Supported models include llama2, mistral, and codellama. RAM requirements range from 8GB for 3B models to 64GB+ for 70B models. The :Llama command opens a terminal for chatting with the LLM.
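A minimal configuration sketch follows, assuming the model and debug parameters described above; the chosen values and the keymap are illustrative, not documented defaults.

-- Minimal configuration sketch. 'model' and 'debug' are the optional
-- setup parameters named above; the values shown here are assumptions.
require('nvim-llama').setup({
  model = 'llama2',  -- any Ollama-supported model, e.g. mistral or codellama
  debug = false,     -- disable verbose logging
})

-- Optional convenience mapping for the :Llama chat terminal
-- (the keymap is an assumption; only the :Llama command is documented).
vim.keymap.set('n', '<leader>lc', '<cmd>Llama<CR>', { desc = 'Chat with local LLM' })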
Highlighted Details
Maintenance & Community
The project is marked as "Under active development." No community links or contributor information is provided in the README.
Licensing & Compatibility
The README does not explicitly state a license.
Limitations & Caveats
The project is under active development and may contain bugs or undergo breaking changes. Docker is a mandatory requirement. Significant RAM is needed for larger models.
Last commit: 6 months ago. Repository status: inactive.