LLM platform for efficient, environment-specific content generation
This project provides a platform for efficient content generation tailored for specific environments, addressing the computational resource limitations, knowledge security, and privacy concerns of individuals and small to medium-sized businesses. It integrates various large language models, local and online knowledge bases, and an extensible "Auto" scripting system for custom workflows.
How It Works
The platform supports a wide array of LLMs, including offline options like ChatGLM, RWKV, Llama, Baichuan, Aquila, and InternLM, as well as online APIs from OpenAI and ChatGLM. It features a robust knowledge base system that can connect to local offline vector databases, local search engines, and online search engines. The "Auto" scripting system, written in JavaScript, allows users to extend functionality by creating plugins for custom dialogue flows, external API access, and dynamic model switching.
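To make the Auto plugin pattern concrete, here is a minimal JavaScript sketch of a custom dialogue flow that enriches a prompt with an external API call and switches models dynamically. The README does not document the actual Auto hooks, so every name below (fetchWeather, callModel, handlePrompt) is a hypothetical stand-in illustrating the pattern, not the project's real API.

```javascript
// All names here are hypothetical stand-ins for the project's Auto hooks.

// Stub standing in for an external API call.
async function fetchWeather() {
  return { summary: "sunny, 22°C" };
}

// Stub standing in for model dispatch; a real Auto script would switch
// between backends such as a local ChatGLM and the OpenAI API here.
async function callModel(model, prompt) {
  return `[${model}] reply to: ${prompt}`;
}

// Custom dialogue flow: enrich weather questions with live data and route
// them to an online model; everything else goes to a local model.
async function handlePrompt(prompt) {
  if (/weather/i.test(prompt)) {
    const data = await fetchWeather();
    return callModel("openai", `Current weather: ${data.summary}\n\n${prompt}`);
  }
  return callModel("chatglm", prompt);
}

handlePrompt("What's the weather like for a picnic?").then(console.log);
```

The same interception point would be where a plugin injects knowledge-base context or rewrites the conversation before it reaches the model.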
Quick Start & Requirements
Install dependencies with:
pip install -r requirements/requirements.txt
(or use the provided "lazy packages" for Windows). Runtime settings are configured in config.yml.
Highlighted Details
Maintenance & Community
The project maintains active community engagement through QQ groups dedicated to general discussion, knowledge base usage, and Auto development. The README also credits individual users for contributed fine-tuned models.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
Some models are flagged as not recommended for Chinese users (e.g., Llama, Moss), and others require specific configurations (e.g., Baichuan with LoRA). Knowledge base data insertion has length and quantity limits, which can be worked around with Auto scripts (see the chunking sketch below). Pre-building RTST indexes requires CUDA, while building the index at runtime can run on CPU for lower-VRAM systems.
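The chunking workaround mentioned above amounts to splitting long text into pieces that fit the per-insert cap and inserting each piece in turn. The sketch below shows that pattern in JavaScript; MAX_LEN and insertIntoKb are assumptions for illustration and do not reflect the platform's actual limits or insertion API.

```javascript
// MAX_LEN and insertIntoKb are illustrative assumptions, not the real API.
const MAX_LEN = 500; // assumed per-insert length cap

// Split text into pieces no longer than `size` characters.
function chunk(text, size) {
  const parts = [];
  for (let i = 0; i < text.length; i += size) {
    parts.push(text.slice(i, i + size));
  }
  return parts;
}

// Stub: a real Auto script would call the platform's insertion hook here.
async function insertIntoKb(piece) {
  console.log(`inserted ${piece.length} chars`);
}

// Insert a long document piece by piece to stay under the length limit.
async function insertLongText(text) {
  for (const piece of chunk(text, MAX_LEN)) {
    await insertIntoKb(piece);
  }
}

insertLongText("a paragraph much longer than the insertion limit ".repeat(50));
```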
Last updated about six months ago; the repository is currently marked inactive.