fingerthief/minimal-chat: LLM chat interface for local and remote inference
Top 95.9% on SourcePulse
MinimalChat is a lightweight, open-source chat client designed for interacting with various large language models, including OpenAI, DeepSeek, and custom local endpoints. It targets users seeking a simple, feature-rich, and responsive LLM interface, offering full mobile PWA support for accessibility. The application streamlines LLM interaction by providing a unified client for diverse models, enhancing user productivity and privacy, especially when self-hosting or using local models.
How It Works
MinimalChat functions as a versatile front-end client, supporting any API endpoint that adheres to OpenAI's response format. A key feature is its integration with WebLLM, which allows popular models such as Llama-3-8B-Instruct to be downloaded and cached directly in the user's browser. This approach offers significant advantages in privacy, offline usability, and reduced reliance on external servers for model inference.
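As a concrete sketch, "adhering to OpenAI's response format" means the backend accepts and returns the same JSON shape as OpenAI's chat completions API. The host, port (11434, typical of a local Ollama-style server), route, and model name below are illustrative assumptions, not values shipped with MinimalChat:

```sh
# Minimal OpenAI-format chat completion request against a local server.
# Host, port, and model name are assumptions for illustration only.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": false
      }'
# A compatible server replies with JSON whose answer text sits at
# choices[0].message.content, which is what the client parses.
```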
Quick Start & Requirements
Pull the prebuilt Docker image with docker pull tannermiddleton/minimal-chat:latest. To run from source instead, install dependencies with npm install, build with npm run build, and serve with npm run preview (production) or npm run dev (development).
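Assuming the image serves the app over HTTP, a container can be started as below; the host-to-container port mapping is a guess for illustration, since the port the image exposes is not stated here:

```sh
# Start MinimalChat from the prebuilt image.
# NOTE: the container port (8080 here) is an assumption; check the
# project README for the port the image actually exposes.
docker run -d --name minimal-chat -p 3000:8080 tannermiddleton/minimal-chat:latest
# Then open http://localhost:3000 in a browser.
```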
Maintenance & Community
Contributions are welcomed via GitHub Issues and pull requests. Community support and feedback can be directed to GitHub Issues or the project's Discord (fingerthief#0453). As of this listing, the repository was last updated about 10 months ago and is marked inactive.
Licensing & Compatibility
The project is licensed under the MIT License, which permits broad usage, including commercial applications, with minimal restrictions.
Limitations & Caveats
Interaction with certain language models requires valid API keys. Users hosting local API servers must ensure correct CORS configuration to avoid communication issues. The DALL-E 3 integration is described as basic, so advanced image-generation features should not be expected.
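A quick way to check a local server's CORS setup is to replay the browser's preflight request by hand; the endpoint, ports, and origin below are assumptions for a typical local setup:

```sh
# Simulate the browser's CORS preflight against a locally hosted API;
# the URL and Origin value are illustrative assumptions.
curl -si -X OPTIONS http://localhost:8080/v1/chat/completions \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: content-type, authorization" \
| grep -i "access-control-"
# If no Access-Control-Allow-Origin header appears in the response,
# the browser will block MinimalChat's requests to this server.
```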