Awesome-Ollama-Server by forrany

Ollama service monitor for availability/performance, with web UI

created 5 months ago
342 stars

Top 81.9% on sourcepulse

Project Summary

This project provides a system for monitoring the availability and performance of Ollama services, targeting users who manage or interact with multiple Ollama instances. It offers a modern, multilingual web interface with real-time detection and data visualization, simplifying the management of LLM model deployments.

How It Works

The system is built with Next.js 14, using React Server Components for efficient rendering and TypeScript for type safety. Styling is handled by Tailwind CSS, and next-intl provides the dual-language (Chinese/English) support. The core functionality is real-time detection of Ollama service status, response time, and TPS (tokens per second), with data export and advanced filtering capabilities.
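The detection loop described above can be sketched as follows. The endpoint paths and response fields follow Ollama's public HTTP API, but the function names and overall shape are illustrative, not taken from this project's source.

```typescript
// Minimal sketch of an Ollama availability/performance probe.
// Assumes Ollama's documented HTTP endpoints (/api/tags, /api/generate).

// TPS from Ollama's /api/generate metrics: eval_count tokens generated
// over eval_duration nanoseconds.
function tokensPerSecond(evalCount: number, evalDurationNs: number): number {
  if (evalDurationNs <= 0) return 0;
  return evalCount / (evalDurationNs / 1e9);
}

// Probe one service: availability + response time via /api/tags,
// throughput via a short non-streaming /api/generate call.
async function probeOllama(baseUrl: string, model: string) {
  const start = Date.now();
  const tags = await fetch(`${baseUrl}/api/tags`); // lists installed models
  const responseMs = Date.now() - start;
  if (!tags.ok) return { available: false, responseMs, tps: 0 };

  const gen = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: "ping", stream: false }),
  });
  const body = await gen.json(); // includes eval_count / eval_duration
  return {
    available: true,
    responseMs,
    tps: tokensPerSecond(body.eval_count ?? 0, body.eval_duration ?? 0),
  };
}
```

Running `probeOllama("http://localhost:11434", "llama3")` against a local instance would return an object like `{ available, responseMs, tps }` suitable for the dashboard's status table.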

Quick Start & Requirements

  • Install: Clone the repository, then run npm install or yarn install.
  • Prerequisites: Node.js 18.0+ and npm or yarn.
  • Development: Run npm run dev or yarn dev to start the development server.
  • Production: Run npm run build then npm start (or yarn build then yarn start).
  • Docker: docker-compose up -d for full deployment, or docker run for the web app only.
  • Docs: https://github.com/forrany/Awesome-Ollama-Server
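For the Docker route, a compose file along these lines is typical for a Next.js app; the service name, build context, and port mapping below are assumptions for illustration, not copied from the repository, whose own docker-compose.yml is authoritative.

```yaml
# Hypothetical docker-compose.yml sketch for a Next.js app on its
# default port 3000; consult the repository's shipped file for the
# real service definitions and environment variables.
services:
  web:
    build: .
    ports:
      - "3000:3000"
    restart: unless-stopped
```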

Highlighted Details

  • Real-time detection and status display for multiple Ollama services.
  • Performance monitoring, including response time and TPS (tokens per second).
  • Multilingual support (Chinese/English) with easy switching.
  • Optional Redis integration for data storage (Upstash).
  • FOFA scanning capabilities for discovering Ollama services.
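The optional Redis storage might look like the following. The key scheme (`ollama:status:<host>`) and record fields are hypothetical, and the REST call shown uses Upstash's standard REST API rather than code from this project.

```typescript
// Sketch of persisting a probe result to Upstash Redis over its REST API.
// Key scheme and record fields are hypothetical, for illustration only.

interface ProbeResult {
  host: string;
  available: boolean;
  responseMs: number;
  tps: number;
  checkedAt: string; // ISO timestamp
}

// Pure helper: build the Redis key for a service host.
function statusKey(host: string): string {
  return `ollama:status:${host}`;
}

// Store a result; assumes the UPSTASH_REDIS_REST_URL and
// UPSTASH_REDIS_REST_TOKEN env vars from Upstash's standard setup.
async function saveResult(r: ProbeResult): Promise<void> {
  const url = process.env.UPSTASH_REDIS_REST_URL!;
  const token = process.env.UPSTASH_REDIS_REST_TOKEN!;
  // Upstash REST: POST /set/<key> stores the request body as the value.
  await fetch(`${url}/set/${encodeURIComponent(statusKey(r.host))}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: JSON.stringify(r),
  });
}
```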

Maintenance & Community

The project is maintained by VincentKo (@forrany). Contribution guidelines are provided, encouraging forks and pull requests.

Licensing & Compatibility

Licensed under the MIT License, allowing for commercial use and integration with closed-source projects.

Limitations & Caveats

The project is focused on monitoring Ollama services and may not support other LLM serving frameworks. Advanced features such as FOFA scanning and Redis integration are optional and require specific environment variables to be configured.

Health Check

  • Last commit: 5 hours ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star history: 41 stars in the last 90 days
