# Docker Compose setup for Ollama deployment
This project provides a streamlined Docker Compose setup for deploying Ollama, a tool for running large language models locally. It targets developers and users who want an easy, containerized way to access and experiment with LLMs, offering GPU acceleration and integration with tools like Langchain.
## How It Works
The setup utilizes Docker Compose to orchestrate multiple containers, including Ollama itself and a web UI. For GPU acceleration, it integrates with the NVIDIA Container Toolkit, allowing Ollama to leverage host GPU resources efficiently. An additional `app` container is provided for Langchain examples and serves as a devcontainer for development.
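A minimal sketch of what the GPU-enabled service definition looks like (the service name, image, and port mapping here are assumptions; `docker-compose-ollama-gpu.yaml` in the repository is the authoritative version):

```yaml
services:
  ollama:
    image: ollama/ollama        # official Ollama image
    ports:
      - "11434:11434"           # default Ollama API port
    volumes:
      - ollama:/root/.ollama    # persist downloaded models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all        # expose all host GPUs via the NVIDIA Container Toolkit
              capabilities: [gpu]
volumes:
  ollama:
```

The `deploy.resources.reservations.devices` block is Docker Compose's standard mechanism for requesting GPUs; it only works when the NVIDIA Container Toolkit is installed on the host.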
## Quick Start & Requirements
Requires Docker with the Compose plugin; GPU acceleration additionally requires the NVIDIA Container Toolkit on the host.

```shell
# Clone the repository and start the stack (CPU-only)
git clone https://github.com/mythrantic/ollama-docker.git && cd ollama-docker
docker compose up -d

# Or start with NVIDIA GPU acceleration
docker compose -f docker-compose-ollama-gpu.yaml up -d
```
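Once the containers are up, the Ollama API should be reachable on its default port (11434, assuming the compose file maps it unchanged); the service name `ollama` and model name below are examples:

```shell
# Smoke-test the API by listing installed models
curl http://localhost:11434/api/tags

# Pull a model inside the running container
docker compose exec ollama ollama pull llama3
```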
## Highlighted Details
- `app` container for Langchain examples and devcontainer functionality.
- `run.sh` script for setting up a local virtual environment as an alternative to Docker.

## Maintenance & Community
## Licensing & Compatibility
## Limitations & Caveats
The project is primarily focused on Docker deployment; alternative development environments are supported via the `run.sh` script but may require manual setup.
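Whether Ollama runs in Docker or a local virtual environment, clients talk to it over the same HTTP API. A minimal sketch of calling the `/api/generate` endpoint from Python, using only the standard library (the URL assumes the default port mapping; the model name is an example):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port; adjust if remapped


def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}


def generate(model: str, prompt: str) -> str:
    """POST a prompt to a running Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to `False`, the server returns a single JSON object whose `response` field holds the full completion; streaming mode instead returns one JSON object per line.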