Augmented LLM for RAG and MCP agents
Top 77.7% on SourcePulse
This project provides an augmented Large Language Model (LLM) agent that integrates with the Model Context Protocol (MCP) and Retrieval Augmented Generation (RAG) without relying on popular frameworks like LangChain or LlamaIndex. It's designed for users who need a simplified, self-contained solution for building LLM agents capable of interacting with external tools and retrieving information from knowledge bases.
How It Works
The core architecture features an Agent class that orchestrates interactions between an LLM (specifically OpenAI's models) and multiple MCPClient instances. The MCPClient facilitates communication with MCP services, allowing the agent to discover and invoke tools. RAG is implemented via an EmbeddingRetriever that embeds documents and queries, storing them in a VectorStore for efficient similarity search. Retrieved information is then injected into the LLM's context to enhance responses.
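To make that flow concrete, here is a minimal sketch of the orchestration in TypeScript. The names (Agent, MCPClient, EmbeddingRetriever) mirror the description above, but the interfaces, method signatures, and the OpenAI model are assumptions for illustration, not the project's actual API:

```typescript
// Illustrative sketch only: class/interface shapes are assumptions based on the
// README's description, not the project's real code.
import OpenAI from "openai";

// Minimal stand-in for the project's MCPClient: wraps one MCP service.
interface MCPClient {
  listTools(): Promise<{ name: string; description: string }[]>;
  callTool(name: string, args: Record<string, unknown>): Promise<{ content: string }>;
}

// Minimal stand-in for the RAG side (a possible implementation is sketched below).
interface EmbeddingRetriever {
  retrieve(query: string, topK: number): Promise<string[]>;
}

class Agent {
  constructor(
    private llm: OpenAI,
    private clients: MCPClient[],
    private retriever: EmbeddingRetriever,
  ) {}

  async run(userQuery: string): Promise<string> {
    // 1. RAG: fetch relevant documents and inject them into the context.
    const docs = await this.retriever.retrieve(userQuery, 3);
    const context = docs.join("\n---\n");

    // 2. Advertise the tools exposed by every connected MCP service.
    const toolLists = await Promise.all(this.clients.map((c) => c.listTools()));
    const toolSummary = toolLists
      .flat()
      .map((t) => `${t.name}: ${t.description}`)
      .join("\n");

    // 3. Ask the LLM, grounding it in the retrieved context and the tool list.
    const completion = await this.llm.chat.completions.create({
      model: "gpt-4o-mini", // model choice is an assumption
      messages: [
        { role: "system", content: `Context:\n${context}\n\nAvailable tools:\n${toolSummary}` },
        { role: "user", content: userQuery },
      ],
    });
    return completion.choices[0].message.content ?? "";
  }
}
```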
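The retrieval side can be sketched the same way: an in-memory VectorStore that ranks documents by cosine similarity, and an EmbeddingRetriever that embeds text with OpenAI's embeddings API. Again, these are illustrative stand-ins for the names the README describes; the embedding model is an assumption:

```typescript
// Illustrative RAG sketch under the same assumptions; not the project's real classes.
import OpenAI from "openai";

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

class VectorStore {
  private items: { text: string; embedding: number[] }[] = [];

  add(text: string, embedding: number[]): void {
    this.items.push({ text, embedding });
  }

  // Return the topK stored texts most similar to the query embedding.
  search(queryEmbedding: number[], topK: number): string[] {
    return [...this.items]
      .sort(
        (x, y) =>
          cosineSimilarity(y.embedding, queryEmbedding) -
          cosineSimilarity(x.embedding, queryEmbedding),
      )
      .slice(0, topK)
      .map((item) => item.text);
  }
}

class EmbeddingRetriever {
  private store = new VectorStore();
  constructor(private openai: OpenAI, private model = "text-embedding-3-small") {}

  private async embed(text: string): Promise<number[]> {
    const res = await this.openai.embeddings.create({ model: this.model, input: text });
    return res.data[0].embedding;
  }

  async addDocument(doc: string): Promise<void> {
    this.store.add(doc, await this.embed(doc));
  }

  async retrieve(query: string, topK: number): Promise<string[]> {
    return this.store.search(await this.embed(query), topK);
  }
}
```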
Quick Start & Requirements
Clone the template repository (git clone git@github.com:KelvinQiu802/ts-node-esm-template.git), then run pnpm install followed by pnpm add dotenv openai @modelcontextprotocol/sdk chalk.
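As a quick sanity check after installation, a minimal entry point might look like the sketch below. The MCP server command, the model name, and the reliance on an OPENAI_API_KEY entry in .env are assumptions for illustration, not steps from the README:

```typescript
// Hypothetical smoke test using the installed dependencies (dotenv, openai, MCP SDK).
import "dotenv/config";
import OpenAI from "openai";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // The OpenAI client reads OPENAI_API_KEY from the environment loaded by dotenv.
  const openai = new OpenAI();

  // Connect to an example MCP server over stdio and list its tools.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."],
  });
  const mcp = new Client({ name: "smoke-test", version: "0.0.1" }, { capabilities: {} });
  await mcp.connect(transport);
  console.log(await mcp.listTools());

  // Verify the LLM connection works end to end.
  const reply = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Say hello." }],
  });
  console.log(reply.choices[0].message.content);
}

main().catch(console.error);
```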
Highlighted Details
Maintenance & Community
No specific details on contributors, community channels, or roadmap are provided in the README.
Licensing & Compatibility
The project's licensing is not explicitly stated in the README. Compatibility for commercial use or closed-source linking is therefore undetermined.
Limitations & Caveats
The project appears to be a simplified implementation, and its robustness for complex, production-level agent orchestration is not detailed. The lack of explicit licensing information may pose a barrier to commercial adoption.
Last updated 5 months ago; the repository is marked inactive.