Lynkr by Fast-Editor

Universal LLM proxy for AI coding tools

Created 2 months ago
305 stars

Top 88.1% on SourcePulse

Summary

Lynkr is a self-hosted HTTP proxy that unifies AI coding tools like Cursor and Claude Code CLI with diverse LLM providers. It targets developers and enterprises seeking flexible, cost-effective (a claimed 60-80% cost reduction), and private AI interactions, acting as a universal LLM interface.

How It Works

Lynkr functions as a drop-in backend replacement, intercepting AI tool requests and routing them to multiple local and cloud LLM providers. Its architecture emphasizes efficiency through token optimization, prompt caching, and memory deduplication, enabling significant cost savings and allowing 100% local/private execution via options like Ollama and llama.cpp.
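The routing idea can be sketched as a small dispatch table that maps a provider-prefixed model name to a backend endpoint. The provider names come from this summary, but the prefixes, endpoints, and function below are illustrative assumptions, not Lynkr's actual implementation:

```python
# Illustrative sketch of prefix-based provider routing; not Lynkr's real code.
# Model-name prefixes and the llama.cpp port are assumptions for illustration.

PROVIDER_ROUTES = {
    "ollama/": "http://localhost:11434/v1",        # local Ollama server (default port)
    "llamacpp/": "http://localhost:8080/v1",       # local llama.cpp server (assumed port)
    "openrouter/": "https://openrouter.ai/api/v1",  # cloud provider
}

def route(model: str) -> tuple[str, str]:
    """Map a prefixed model name to (provider base URL, bare model name)."""
    for prefix, base_url in PROVIDER_ROUTES.items():
        if model.startswith(prefix):
            return base_url, model[len(prefix):]
    raise ValueError(f"no provider configured for model {model!r}")

base_url, model_name = route("ollama/qwen2.5-coder")
```

A real proxy would also translate request/response schemas per provider; the table above only captures the dispatch step.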

Quick Start & Requirements

Installation is recommended via NPM (npm install -g lynkr or npx lynkr). Alternatives include cloning the repo (npm install, npm start) or using Docker (docker-compose up -d). Prerequisites include Node.js for NPM installations. Detailed guides for setup and provider configuration are available.
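The documented install paths, collected as shell commands (running `lynkr` after a global install, and the presence of a `docker-compose.yml` in the cloned repo, are assumptions based on the description above):

```shell
# Global install via NPM (requires Node.js), then run:
npm install -g lynkr
lynkr

# Or run once without installing:
npx lynkr

# Or from a clone of the repository:
npm install
npm start

# Or via Docker, from the repository root:
docker-compose up -d
```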

Highlighted Details

  • Multi-Provider Support: Integrates 9+ LLM providers (local: Ollama, llama.cpp; cloud: Bedrock, OpenRouter, Azure OpenAI).
  • Cost Reduction: Claims 60-80% savings via token optimization, caching, and deduplication.
  • Local/Private Execution: Enables 100% offline operation with Ollama/llama.cpp.
  • OpenAI Compatibility: Seamlessly integrates with Cursor IDE, Continue.dev, and other OpenAI-compatible clients.
  • Embeddings Support: Options for @Codebase search via Ollama, llama.cpp, OpenRouter, OpenAI.
  • Enterprise Features: Circuit breakers, load shedding, Prometheus metrics, K8s health checks.
  • Advanced Features: Streaming support, Titans-inspired memory system, full tool calling.
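Because the proxy exposes an OpenAI-compatible API, any OpenAI-style client can target it by overriding the base URL. A minimal sketch using only the standard library; the port, endpoint path, and provider-prefixed model name are assumptions, not documented Lynkr defaults:

```python
import json
import urllib.request

# Assumed local proxy address; Lynkr's actual default port may differ.
BASE_URL = "http://localhost:3000/v1"

# Standard OpenAI chat-completions payload shape.
payload = {
    "model": "ollama/qwen2.5-coder",  # hypothetical provider-prefixed model name
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
    "stream": False,
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would return the completion once the proxy is running.
```

The same base-URL override works in clients like Cursor or Continue.dev that accept a custom OpenAI-compatible endpoint.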

Maintenance & Community

The project describes itself as built "by developers, for developers"; community support is available via GitHub Discussions and the Issues tracker. No notable contributors, sponsorships, or partnerships are highlighted.

Licensing & Compatibility

Lynkr is distributed under the Apache 2.0 license, which permits commercial use and integration into closed-source projects.

Limitations & Caveats

The project focuses on proxying and optimization; specific performance benchmarks beyond the claimed cost reductions are not detailed. MLX integration is limited to Apple Silicon hardware. The project is roughly two months old and under active development, so behavior and configuration may change.

Health Check
Last Commit

16 hours ago

Responsiveness

Inactive

Pull Requests (30d)
30
Issues (30d)
11
Star History
77 stars in the last 30 days
