sebastianvkl / pizero-openclaw: Embedded voice AI assistant
Top 66.6% on SourcePulse
A voice-controlled AI assistant built on a Raspberry Pi Zero W, pizero-openclaw targets hobbyists and power users seeking a dedicated, low-power conversational AI device. It offers real-time LLM interaction streamed directly to a small LCD, enhancing user experience with immediate visual feedback and optional spoken responses.
How It Works
The project orchestrates a button-press-to-response workflow. Upon activation, audio is recorded via ALSA, transcribed by OpenAI's speech-to-text models, and streamed to a local OpenClaw gateway. The LLM response is received and rendered in real time on the PiSugar WhisPlay LCD with precise word wrapping. Optionally, OpenAI's TTS can vocalize responses as sentences complete, and the device maintains conversation history for context.
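The loop described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the gateway URL, token handling, LCD width, and helper names are assumptions, while the ALSA capture via arecord and the OpenAI transcription endpoint are standard.

```python
import os
import subprocess
import textwrap

import requests

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
# Hypothetical gateway endpoint; the real URL depends on your OpenClaw setup.
GATEWAY_URL = os.environ.get("OPENCLAW_URL", "http://localhost:8080/chat")

def record_clip(path="clip.wav", seconds=5):
    # ALSA capture via arecord; 16 kHz mono is a typical speech-model format.
    subprocess.run(["arecord", "-f", "S16_LE", "-r", "16000", "-c", "1",
                    "-d", str(seconds), path], check=True)
    return path

def transcribe(path):
    # OpenAI speech-to-text REST endpoint (multipart upload).
    with open(path, "rb") as f:
        r = requests.post(
            "https://api.openai.com/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            files={"file": f},
            data={"model": "whisper-1"},
        )
    r.raise_for_status()
    return r.json()["text"]

def stream_reply(prompt):
    # Hypothetical streaming request to the local OpenClaw gateway.
    with requests.post(
        GATEWAY_URL,
        json={"message": prompt},
        headers={"Authorization": f"Bearer {os.environ.get('OPENCLAW_TOKEN', '')}"},
        stream=True,
    ) as r:
        r.raise_for_status()
        yield from r.iter_content(chunk_size=None, decode_unicode=True)

def wrap_for_lcd(text, width=20):
    # Break the response into LCD-sized rows; the 20-char width is assumed.
    return textwrap.wrap(text, width=width)

if __name__ == "__main__" and OPENAI_API_KEY:
    prompt = transcribe(record_clip())
    for chunk in stream_reply(prompt):
        for row in wrap_for_lcd(chunk):
            print(row)
```

The real project adds button handling, conversation history, and optional TTS on top of a loop like this.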
Quick Start & Requirements
- Install dependencies: sudo apt install python3-numpy python3-pil, then pip install requests python-dotenv (or pip install -r requirements.txt).
- Install the WhisPlay hardware driver separately, per its setup guide.
- Copy .env.example to .env and fill in OPENAI_API_KEY and OPENCLAW_TOKEN.
- Run python3 main.py, or use the included ./sync.sh script for systemd deployment.

Highlighted Details
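The environment file referenced in the setup steps might look like this; only the two variable names come from the project, and the values are placeholders:

```
# .env (copied from .env.example)
OPENAI_API_KEY=sk-your-key-here
OPENCLAW_TOKEN=your-gateway-token
```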
Maintenance & Community
No specific details on contributors, sponsorships, community channels (Discord/Slack), or roadmap were provided in the README.
Licensing & Compatibility
Limitations & Caveats
Requires a separate, accessible OpenClaw gateway to function. Relies on external OpenAI API services for core functionality (transcription, TTS), incurring associated costs and requiring internet connectivity. Specific hardware dependencies limit portability to the specified Raspberry Pi and PiSugar components.
Last updated 2 weeks ago. Status: Inactive.