Web app for LLM-powered chat, self-deployable via Docker
This project provides a self-hostable, Dockerized web interface for interacting with large language models, primarily targeting users who want a customizable and private chat experience. It offers features like custom API keys, model selection, chat history management, and proxy support, aiming to provide a more controlled and personalized alternative to online services.
How It Works
The project is a static web application that can be deployed via Docker or by simply unzipping a build archive. It communicates with LLM APIs (like OpenAI) and allows users to configure various parameters, including API keys, model endpoints, system prompts, and character avatars. The Docker version includes a built-in proxy for handling API requests, simplifying network configuration for users.
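The request flow described above follows the common OpenAI-compatible chat completions convention. As a minimal sketch (the base URL, key, and model below are illustrative placeholders, not values from this project's docs), building such a request looks like:

```python
import json
from urllib import request


def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-style chat completion request.

    base_url is whatever endpoint the app is configured with,
    e.g. the Docker build's built-in proxy or api.openai.com.
    """
    payload = {"model": model, "messages": messages}
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Hypothetical values; in the app these come from the user's settings.
req = build_chat_request(
    "https://api.openai.com",
    "sk-...",
    "gpt-3.5-turbo",
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Hello!"}],
)
print(req.full_url)  # https://api.openai.com/v1/chat/completions
```

The web app performs the equivalent of this from the browser; the Docker proxy exists so such calls do not have to leave the user's network unmodified.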
Quick Start & Requirements
docker run -d -p 9000:9000 easychen/chatchan:latest
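Settings live in default.json. The file's exact schema is not documented here, so the key names in this sketch are hypothetical placeholders, shown only to indicate the kind of values it holds:

```json
{
  "apiKey": "sk-your-key-here",
  "model": "gpt-3.5-turbo",
  "apiBase": "https://api.openai.com",
  "systemPrompt": "You are a helpful assistant."
}
```

Consult the repository's shipped default.json for the real field names before editing.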
Configure default.json for API keys, models, and other settings. If not using Docker, a web server or a browser capable of loading local files is required.

Highlighted Details
Maintenance & Community
The project is actively maintained, with recent updates adding support for new models and features. Links to browser extensions and an online version are provided.
Licensing & Compatibility
The repository does not explicitly state a license. The project is designed for self-hosting and local deployment, making it compatible with private or commercial use cases as long as API usage terms are met.
Limitations & Caveats
Older versions (pre-v1.0.8) may have issues with long content due to WASM token calculation limitations in certain hosting environments, requiring manual MIME type configuration. Some features, like voice interaction, are specific to API2D keys.
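For those older deployments, the usual fix is to make the host serve .wasm files with the standard application/wasm MIME type, which WebAssembly streaming compilation expects. A hypothetical nginx snippet (assuming .wasm is missing from the server's mime.types):

```nginx
# Serve .wasm with the standard WebAssembly MIME type.
# default_type applies only when the extension is not already mapped.
location ~ \.wasm$ {
    default_type application/wasm;
}
```

Newer nginx releases already map .wasm correctly, in which case no change is needed.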