Simplify Ollama setup and management for AMD GPUs
This project provides a user-friendly installer and manager for Ollama installations optimized for AMD GPUs, leveraging the likelovewant/ollama-for-amd library. It targets users seeking a simplified setup and maintenance process for running large language models on AMD hardware, offering automated installation, ROCm library updates, and common error fixes.
How It Works
The tool provides a graphical interface that automates installing and updating Ollama with AMD-specific ROCm libraries. It detects existing Ollama installations, lets users select their GPU model so the matching optimized library version is applied, and includes a one-click fix for the common 0xc0000005 runtime error. Proxy support is integrated for users facing network restrictions.
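Below is a minimal, hypothetical sketch of that flow in Python, not the project's actual code: it assumes Ollama's default per-user install path on Windows, uses an illustrative GPU-model-to-library mapping (the real variants come from likelovewant/ollama-for-amd releases), and only reads the standard proxy environment variables.

    # Hypothetical sketch of the installer's flow -- not the project's actual code.
    import os
    import shutil
    from pathlib import Path

    # Illustrative mapping only; real library variants are published in
    # the likelovewant/ollama-for-amd releases.
    GPU_LIBRARY_VARIANTS = {
        "RX 6800": "rocm-gfx1030",
        "RX 7900 XTX": "rocm-gfx1100",
        "680M/780M APU": "rocm-gfx1103",
    }

    def find_ollama() -> Path | None:
        """Look for ollama.exe on PATH, then in the default per-user install dir."""
        on_path = shutil.which("ollama")
        if on_path:
            return Path(on_path)
        default = Path(os.environ.get("LOCALAPPDATA", "")) / "Programs" / "Ollama" / "ollama.exe"
        return default if default.is_file() else None

    def proxy_settings() -> dict[str, str]:
        """Collect standard proxy variables so downloads work behind restricted networks."""
        names = ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")
        return {n: os.environ[n] for n in names if n in os.environ}

    if __name__ == "__main__":
        exe = find_ollama()
        print("Ollama install:", exe or "not detected")
        print("Selected GPU -> library:", GPU_LIBRARY_VARIANTS.get("RX 6800"))
        print("Proxy settings:", proxy_settings() or "none")

In the actual tool these steps are driven from the GUI rather than a script.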
Quick Start & Requirements
Installation is straightforward: either download Ollama-For-AMD-Installer.exe from the Releases page, or build from source with git clone followed by pip install -r requirements.txt.
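For the build-from-source route, the commands quoted above amount to the following sequence; the repository URL and entry-point script name are placeholders, so check the project's Releases page and README for the exact values.

    git clone <repository-url>
    cd <repository-folder>
    pip install -r requirements.txt
    python <entry-point-script>   # launches the GUI; script name not specified here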
Highlighted Details
Automated installation and ROCm library updates matched to the selected AMD GPU model, a one-click fix for the common 0xc0000005 runtime error, and built-in proxy support for restricted networks.
Maintenance & Community
The author is committed to maintaining the project but has transitioned to NVIDIA hardware. Development and testing now rely on an AMD APU (6800H) and community feedback, particularly for discrete GPU (dGPU) related issues. Contributions via issues or pull requests are actively welcomed.
Licensing & Compatibility
This project is licensed under the permissive MIT License, allowing for broad use, modification, and distribution, including within commercial or closed-source applications.
Limitations & Caveats
The author's primary development hardware is now NVIDIA, so testing on dedicated AMD GPUs is limited. Continued support for dGPU users therefore depends on community feedback and contributions to report and resolve issues.