LLM plugin for accessing models running on an Ollama server
This plugin integrates the llm CLI tool with Ollama, enabling users to run local LLM models for prompting, chatting, embeddings, and structured output generation. It targets developers and power users who want to use Ollama's extensive model library within the llm ecosystem.
How It Works
The plugin acts as a bridge, querying the Ollama server for available models and registering them with the llm CLI. It supports various Ollama features, including multi-modal image inputs, embedding generation, and structured JSON output via schemas. For asynchronous operations, it provides access to async models for use with Python's asyncio.
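A minimal CLI sketch of these features, assuming a reasonably recent llm release; the model names (llama3.2, llava, nomic-embed-text) and the image path are examples and must already be available on the Ollama server:

```bash
# Prompt a model served by the local Ollama instance
llm -m llama3.2 "Three short facts about otters"

# Pass an image to a vision model (photo.jpg is a placeholder path)
llm -m llava "Describe this image" -a photo.jpg

# Request structured JSON output using llm's concise schema syntax
llm -m llama3.2 --schema "name, age int, bio" "Invent a fictional character"

# Generate an embedding with an embedding model
llm embed -m nomic-embed-text -c "Hello, world"
```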
Quick Start & Requirements
- Install the plugin: llm install llm-ollama
- Requires a running Ollama server with the desired models pulled; supports multi-modal vision models (e.g., llava) and embedding models.
- A remote Ollama instance can be used by setting the OLLAMA_HOST environment variable (see the example below).
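A short sketch of pointing the plugin at a remote Ollama server; the host address is a placeholder:

```bash
# Target a remote Ollama server instead of the default local instance
export OLLAMA_HOST=http://192.168.1.50:11434
llm -m llama3.2 "Hello from a remote Ollama server"
```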
Highlighted Details
- Models can be referenced without their :latest tags.
- Async model variants are exposed for asyncio integration.
- Model options can be passed on the command line (e.g., -o temperature 0.8; see the example below).
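For example, the sampling temperature mentioned above can be set per invocation (the model name and value are illustrative):

```bash
# Raise the sampling temperature for a more varied response
llm -m llama3.2 -o temperature 0.8 "Write a haiku about autumn"
```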
Maintenance & Community
No specific contributors, sponsorships, or community links (Discord/Slack) are mentioned in the README.
Licensing & Compatibility
The README does not explicitly state the license type. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The README does not detail any specific limitations, known bugs, or deprecation notices. The project appears to be actively maintained for integration with the llm CLI.