Ollama integration for Intel ARC GPUs
This repository provides a Docker-based solution for running Ollama with Intel ARC GPU acceleration on Linux. It targets users who want to run local LLMs, such as deepseek-r1, on Intel ARC hardware, and it simplifies the otherwise manual setup process.
How It Works
The project uses Docker Compose to build a custom Ollama image with IPEX-LLM support (Intel's LLM acceleration library built on the Intel Extension for PyTorch), specifically leveraging the IPEX-LLM portable ZIP distribution. This lets Ollama call into Intel's optimized libraries for GPU inference, enabling local execution of large language models on compatible Intel hardware.
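The exact service definition lives in the repository's docker-compose.yml; as a minimal sketch of the pattern (the service name, build context, and port below are assumptions, not copied from the repo), it amounts to building the image and passing the Intel GPU device node through to the container:

    services:
      ollama:
        build: .                   # Dockerfile unpacks the IPEX-LLM portable ZIP
        devices:
          - /dev/dri:/dev/dri      # expose the Intel ARC GPU to the container
        ports:
          - "11434:11434"          # Ollama API
        restart: unless-stopped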
Quick Start & Requirements
docker compose up

The web interface is then available at http://localhost:3000.
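Once the stack is running, models can be pulled and run inside the container. Assuming the Compose service is named ollama (an assumption; check the repo's docker-compose.yml for the actual name):

    docker compose exec ollama ollama run deepseek-r1    # pulls the model on first run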
Highlighted Details
GPU device selection is controlled via the ONEAPI_DEVICE_SELECTOR environment variable.
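The variable uses oneAPI's backend:device syntax. For example, to restrict the container to the first Level Zero (Intel GPU) device (the compose keys shown are illustrative, not taken from the repo):

    environment:
      - ONEAPI_DEVICE_SELECTOR=level_zero:0    # use only the first Intel GPU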
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is described as an illustration and may not be production-ready. It is tailored specifically to Linux and Intel ARC GPUs, limiting its applicability to other operating systems or GPU vendors. Updating IPEX-LLM requires manually editing the docker-compose.yml file.
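How the IPEX-LLM version is wired in is repo-specific; one common pattern (hypothetical here, not confirmed against the repo) is a pinned build argument that must be bumped by hand when a new release ships:

    services:
      ollama:
        build:
          context: .
          args:
            IPEX_LLM_VERSION: "2.2.0"    # hypothetical pin; edit manually to update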