GoModel by ENTERPILOT

Unified AI gateway for diverse LLM providers

Created 4 months ago
602 stars

Top 54.0% on SourcePulse

View on GitHub
Project Summary

GoModel is a high-performance AI gateway written in Go, providing a unified, OpenAI-compatible API for integrating numerous LLM providers like OpenAI, Anthropic, Gemini, Groq, and Ollama. It targets developers seeking simplified multi-provider LLM access, enhanced observability, guardrails, and response caching to reduce complexity and operational overhead.

How It Works

This gateway acts as a reverse proxy, abstracting diverse LLM provider APIs behind a single, consistent OpenAI-compatible interface. Built in Go for efficiency, it routes requests to configured providers using environment variables for credentials. Key features include support for chat completions, embeddings, file operations, and batch processing, alongside observability (logging, metrics), guardrails, and a two-layer response cache (exact-match and semantic) for performance and cost optimization.
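The two-layer cache mentioned above can be sketched with a toy exact-match layer: identical request bodies hash to the same key, so a repeated request is served without touching a provider. This is an illustration of the technique only, not GoModel's actual implementation, and the semantic layer is omitted.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// exactCache is a toy exact-match response cache keyed on a hash of
// the raw request body.
type exactCache struct {
	entries map[string]string
}

func newExactCache() *exactCache {
	return &exactCache{entries: make(map[string]string)}
}

func key(body []byte) string {
	sum := sha256.Sum256(body)
	return hex.EncodeToString(sum[:])
}

func (c *exactCache) lookup(body []byte) (string, bool) {
	resp, ok := c.entries[key(body)]
	return resp, ok
}

func (c *exactCache) store(body []byte, resp string) {
	c.entries[key(body)] = resp
}

func main() {
	c := newExactCache()
	req := []byte(`{"model":"m","messages":[{"role":"user","content":"hi"}]}`)

	if _, ok := c.lookup(req); !ok {
		// Cache miss: in a real gateway this is where the request
		// would be forwarded to the configured provider.
		c.store(req, "hello from provider")
	}

	resp, hit := c.lookup(req)
	fmt.Println(hit, resp) // the repeat lookup is served from the cache
}
```

A semantic layer would sit behind this one, matching requests by embedding similarity rather than byte equality, which is why the two layers trade off precision against hit rate.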

Quick Start & Requirements

The simplest deployment is via Docker:

docker run --rm -p 8080:8080 -e OPENAI_API_KEY="your-key" enterpilot/gomodel

Configuration relies on environment variables for provider API keys and settings. Building from source requires Go 1.26.2+. Production deployments should use .env files for secrets.

Highlighted Details

  • Broad Provider Support: Integrates with OpenAI, Anthropic, Gemini, Groq, xAI, Azure OpenAI, Oracle, Ollama, vLLM, and others via environment variable configuration.
  • OpenAI Compatibility: Exposes standard OpenAI API endpoints (/v1/chat/completions, /v1/embeddings, etc.), enabling easy integration with existing applications.
  • Observability & Control: Features configurable logging, Prometheus metrics, guardrails, and a two-layer response cache (exact-match and semantic) to reduce latency and costs.
  • Streaming & Passthrough: Supports streaming responses and offers provider-native passthrough routes (/p/{provider}/...) for advanced use cases.
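The difference between the unified surface and the passthrough routes can be shown with a small path helper. The /p/{provider}/... prefix comes from the feature list above; the helper function itself is hypothetical.

```go
package main

import "fmt"

// passthroughPath prefixes a provider-native path with the gateway's
// passthrough route, bypassing the unified OpenAI-compatible surface.
func passthroughPath(provider, nativePath string) string {
	return "/p/" + provider + nativePath
}

func main() {
	// Unified route: one path, any configured provider.
	fmt.Println("/v1/chat/completions")
	// Passthrough route: the provider's own API, proxied as-is
	// (Anthropic's native messages path used as an example).
	fmt.Println(passthroughPath("anthropic", "/v1/messages"))
}
```

Passthrough is useful when a provider exposes features that have no OpenAI-compatible equivalent, at the cost of losing the uniform request shape.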

Maintenance & Community

The project is actively developed, with a roadmap targeting version 0.2.0 that includes features like intelligent routing and budget management. Community engagement is facilitated via a Discord server.

Licensing & Compatibility

The project's license is not explicitly stated in the README. This lack of clear licensing information presents a significant barrier for adoption, particularly in commercial or closed-source environments where license compatibility is critical.

Limitations & Caveats

The project is under active development; several "Must Have" features for the upcoming 0.2.0 release (e.g., intelligent routing, budget management) are not yet implemented. Certain LLM providers have incomplete feature support within the gateway. The absence of a specified license is a critical adoption blocker.

Health Check
Last Commit

15 hours ago

Responsiveness

Inactive

Pull Requests (30d)
85
Issues (30d)
3
Star History
580 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems") and David Cramer (cofounder of Sentry).

llmgateway by theopenco

2.9%
1k
LLM API gateway for unified provider access
Created 1 year ago
Updated 3 hours ago