Gemini chatbot web app with one-click deployment
This project provides a free, one-click deployable private Gemini chatbot, aimed at users who want a user-friendly interface to Google's Gemini models (1.5 Pro, 1.5 Flash, Pro, Pro Vision). It offers a web application, a cross-platform desktop client, and supports multimodal capabilities, plugins, and extensive Markdown rendering.
How It Works
The application is built with Next.js, Tailwind CSS, and shadcn/ui, providing a responsive and feature-rich user experience. It leverages the Gemini API for natural language processing and supports multimodal inputs like images and videos. Function calling is integrated for plugin support, enabling features like web search, reading, and more. Data is stored locally in the browser for privacy.
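The app's internals are not reproduced here, but a minimal sketch of the kind of multimodal Gemini call such an application makes, using the official @google/generative-ai SDK, could look like the following (the model name, environment variable, and helper function are assumptions for illustration, not the project's actual code):

    import { GoogleGenerativeAI } from "@google/generative-ai";

    // Assumes the API key is available as an environment variable;
    // the app itself lets users supply a key through its settings.
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY ?? "");
    const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

    // Illustrative multimodal request: one text part plus one inline image part.
    async function describeImage(base64Png: string): Promise<string> {
      const result = await model.generateContent([
        { text: "Describe this image in one sentence." },
        { inlineData: { mimeType: "image/png", data: base64Png } },
      ]);
      return result.response.text();
    }

The same SDK also accepts tool (function) declarations on the model, which is the mechanism behind the plugin features such as web search mentioned above.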
Quick Start & Requirements
A Gemini API key is required. To deploy with Docker, pull and run the image:

    docker pull xiangfa/talk-with-gemini:latest
    docker run -d --name talk-with-gemini -p 5481:3000 xiangfa/talk-with-gemini
Highlighted Details
Maintenance & Community
The project is actively maintained, with recent releases and a roadmap indicating ongoing development. Community contributions are welcomed via pull requests and issue reports.
Licensing & Compatibility
Licensed under the MIT License, allowing for commercial use and integration with closed-source projects.
Limitations & Caveats
The Multimodal Live API currently only supports the Gemini 2.0 Flash model and may require a Cloudflare Worker proxy for access in certain regions (e.g., China). Chinese voice output is not yet supported for the Multimodal Live API.
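The proxy itself is not part of this summary, but a minimal sketch of a Cloudflare Worker that passes requests through to Google's Gemini API host might look like the following. This only covers plain HTTP pass-through; the Multimodal Live API's WebSocket traffic may need additional handling, and the route binding and any access control are omitted:

    // Minimal reverse proxy: rewrite the hostname and forward the request unchanged.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        url.hostname = "generativelanguage.googleapis.com";
        // Forward method, headers, and body as-is to the Gemini API.
        return fetch(new Request(url.toString(), request));
      },
    };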