Production-ready Q&A chatbot using AWS and LangChain
This project provides a production-ready, AWS-native, LangChain-based chatbot for knowledge Q&A, designed for easy integration with IM tools like Lark. It offers flexible configuration of vector stores and large language models, supports various data formats, and is optimized for efficient knowledge retrieval.
How It Works
The system employs a decoupled front-end and back-end architecture, leveraging AWS serverless services. Key components include Lambda functions for intention detection, query rewriting, and core agent logic. It supports offline knowledge ingestion and processing, with a workflow that typically involves three LLM calls for comprehensive Q&A. The design emphasizes modularity, allowing for plug-and-play replacement of vector and LLM models.
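The three-LLM-call flow described above can be sketched as a simple pipeline. This is a hedged illustration, not the project's actual code: the function names (`detect_intent`, `rewrite_query`, `retrieve`, `answer`) and the canned `call_llm` stub are assumptions; in the real system each step runs in its own Lambda, the LLM is invoked through a configured model endpoint, and retrieval hits a vector store.

```python
# Illustrative sketch of the three-LLM-call Q&A workflow.
# All names are hypothetical; the stubbed call_llm keeps it runnable offline.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM invocation (e.g. via Amazon Bedrock).
    Returns canned responses so the sketch is self-contained."""
    if "Classify" in prompt:
        return "knowledge_qa"
    if "Rewrite" in prompt:
        return "What is the refund policy for enterprise plans?"
    return "Refunds for enterprise plans are handled case by case."

def detect_intent(question: str) -> str:
    # LLM call 1: route the question (chit-chat vs. knowledge Q&A).
    return call_llm(f"Classify the intent of: {question}")

def rewrite_query(question: str, history: list[str]) -> str:
    # LLM call 2: rewrite the question into a standalone search query
    # using the conversation history.
    return call_llm(f"Rewrite with context {history}: {question}")

def retrieve(query: str) -> list[str]:
    # Vector search (e.g. OpenSearch k-NN), stubbed with a static doc.
    return ["Enterprise refunds are reviewed case by case."]

def answer(question: str, history: list[str]) -> str:
    if detect_intent(question) != "knowledge_qa":
        return call_llm(question)  # fall through to plain chat
    query = rewrite_query(question, history)
    docs = retrieve(query)
    # LLM call 3: generate the final answer grounded in retrieved docs.
    return call_llm(f"Answer using {docs}: {question}")

print(answer("What about refunds?", ["We discussed enterprise plans."]))
```

Splitting the flow into three calls trades latency for quality: intent routing avoids unnecessary retrieval for chit-chat, and query rewriting makes vector search robust to conversational follow-ups.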
Quick Start & Requirements
Highlighted Details
Maintenance & Community
The project is part of the aws-samples GitHub organization. Community support and help are available via WeChat groups and Bilibili demo videos.
Licensing & Compatibility
The repository is licensed under the Apache-2.0 license, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
SageMaker hosting for rerank models is optional and incurs significant cost; it can be omitted if reranking is not needed. OpenSearch costs depend on instance type and data volume. The workshop deployment branch may lag behind the latest code.