brenpoly: Customizable offline AI agent for embedded systems
Top 64.0% on SourcePulse
This project provides a framework for building a fully local, offline-first conversational AI agent on a Raspberry Pi. It targets hobbyists and engineers seeking a private, customizable, and low-latency AI assistant without cloud dependencies or API fees. The core benefit is enabling a personal AI companion that runs entirely on edge hardware.
How It Works
The agent integrates multiple open-source components for local processing: Ollama serves Large Language Models (LLMs) like Gemma and Moondream, Whisper.cpp handles Speech-to-Text, OpenWakeWord detects custom wake words, and Piper TTS generates low-latency neural voices. It features reactive GUI faces, hardware-aware audio processing, and optional web search via DuckDuckGo for real-time information. This approach ensures data privacy and eliminates recurring costs.
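The control flow described above can be sketched as a single conversational turn: wake-word detection hands audio to speech-to-text, the transcript goes to a local LLM, and the reply is synthesized back to audio. The sketch below is illustrative only; `run_turn` and the stub lambdas are hypothetical stand-ins, not the project's actual API, and the real project plugs in Whisper.cpp, Ollama, and Piper where the callables are.

```python
from typing import Callable

def run_turn(
    audio: bytes,
    transcribe: Callable[[bytes], str],   # e.g. a Whisper.cpp wrapper
    generate: Callable[[str], str],       # e.g. a call to a local Ollama model
    speak: Callable[[str], bytes],        # e.g. Piper TTS synthesis
) -> bytes:
    """One conversational turn, entirely on-device: STT -> LLM -> TTS."""
    text = transcribe(audio)      # speech to text
    reply = generate(text)        # local LLM response
    return speak(reply)           # text back to audio samples

# Stub components to show the flow; real models replace these lambdas.
if __name__ == "__main__":
    out = run_turn(
        b"\x00\x01",                        # fake microphone capture
        transcribe=lambda a: "hello agent",
        generate=lambda t: f"you said: {t}",
        speak=lambda r: r.encode(),
    )
    print(out)  # b'you said: hello agent'
```

Because every stage is a local callable, no network round-trip occurs anywhere in the loop, which is what keeps latency low and data private.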
Quick Start & Requirements
Run ./setup.sh (installs dependencies, a Python venv, and Piper TTS), activate the virtual environment (source venv/bin/activate), then start the agent with python agent.py. Ollama installation is handled by a provided script (curl -fsSL https://ollama.com/install.sh | sh).
Highlighted Details
Users can supply their own .wav files to create unique agent personalities.
Licensing & Compatibility
Licensed under the MIT License. This project is a fan creation for educational and hobbyist purposes, not affiliated with or endorsed by Cartoon Network. Users are responsible for the assets they integrate.
Limitations & Caveats
Requires specific Raspberry Pi hardware and peripherals. Users may encounter ALSA errors upon script exit, noted as normal but indicative of audio stream interruption. Audio speed issues can arise if voice model sample rates are misconfigured. Custom wake word implementation requires training a new .onnx model.
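The audio-speed caveat above usually means a clip's sample rate differs from what the playback path expects. A minimal diagnostic sketch using only the standard-library wave module follows; the 22050 Hz default is an assumed example value, not a project setting (check your Piper voice's config for the real figure), and both function names are hypothetical.

```python
import wave

def clip_sample_rate(path: str) -> int:
    """Read the sample rate recorded in a .wav file's header."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate()

def matches_voice(path: str, voice_rate: int = 22050) -> bool:
    """True if the clip will play at the intended speed when the audio
    pipeline is configured for voice_rate Hz. 22050 is an assumed
    default for illustration only."""
    return clip_sample_rate(path) == voice_rate
```

A clip that fails this check will sound sped up or slowed down rather than erroring out, which makes a quick header check worthwhile before blaming the TTS model.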
Last updated: 1 month ago (marked Inactive)