Serverless AI chat app with RAG using LangChain.js and Azure
This project provides a serverless AI chat application that uses Retrieval-Augmented Generation (RAG) with LangChain.js, TypeScript, and Azure services. It is aimed at developers building enterprise-grade chatbots that answer questions over custom document sets, offering a scalable and cost-effective solution.
How It Works
The application utilizes a serverless architecture with Azure Static Web Apps for the frontend and Azure Functions for the backend API. LangChain.js orchestrates the RAG process, interacting with Azure Cosmos DB for NoSQL to store document embeddings and chat history. Azure Blob Storage is used for source document storage. This approach simplifies AI application development by abstracting infrastructure management and providing a robust framework for integrating language models with external data.
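As a rough sketch of this flow, the TypeScript below composes retrieval and generation with LangChain.js, standing in the quick start's local Ollama models for Azure OpenAI and an in-memory vector store for Azure Cosmos DB for NoSQL. The sample document, function name, and wiring are illustrative assumptions, not the app's actual code.

```typescript
// Minimal RAG sketch with LangChain.js. Assumptions: local Ollama models
// stand in for Azure OpenAI, and an in-memory store stands in for the
// Azure Cosmos DB for NoSQL vector store used by the real app.
import { ChatOllama, OllamaEmbeddings } from "@langchain/ollama";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

async function answer(question: string): Promise<string> {
  // Embed and index the source documents (hypothetical sample content).
  const embeddings = new OllamaEmbeddings({ model: "nomic-embed-text:latest" });
  const store = await MemoryVectorStore.fromDocuments(
    [new Document({ pageContent: "Contoso support hours are 9am to 5pm." })],
    embeddings,
  );

  // Retrieve the chunks most similar to the question.
  const docs = await store.similaritySearch(question, 3);
  const context = docs.map((d) => d.pageContent).join("\n");

  // Ground the model's answer in the retrieved context.
  const model = new ChatOllama({ model: "llama3.1:latest" });
  const response = await model.invoke(
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
  );
  return String(response.content);
}

answer("When is support available?").then(console.log);
```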
Quick Start & Requirements
- For local development, install Ollama and pull the llama3.1:latest and nomic-embed-text:latest models.
- Run npm install to install dependencies, then npm start to launch the app.
- Run npm run upload:docs to ingest the source documents.
- Open http://localhost:8000 to use the web app; a request against the local API is sketched after this list.
- To deploy to Azure, run azd auth login and azd up to provision resources and deploy the application.
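Once the local server is running, you can exercise the backend directly. The route and payload shape below are assumptions (the sample exposes its API via Azure Functions); check the repo's documentation for the actual endpoint.

```typescript
// Hypothetical smoke test against the local dev server; the /api/chat route
// and the message payload shape are assumed, not taken from the repo.
const response = await fetch("http://localhost:8000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Summarize the uploaded documents." }],
  }),
});
console.log(await response.json());
```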
Maintenance & Community
Last updated 2 weeks ago; activity status: Inactive.
Limitations & Caveats
Local models may not perfectly follow advanced formatting instructions for citations and follow-up questions; this is an expected limitation of smaller models.