Offline voice assistant for macOS
This project provides a completely offline voice assistant for macOS, using Ollama for LLM inference (Mistral 7B) and Whisper for speech recognition. It's designed for users who want a private, local AI assistant.
How It Works
The assistant integrates Ollama's Mistral 7B model for natural language understanding and response generation, paired with OpenAI's Whisper for accurate speech-to-text transcription. It captures the user's voice input, transcribes it, sends the text to the LLM via Ollama, and then speaks the response using macOS's built-in system voices.
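The round trip described above can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual code: it assumes a local Ollama server on its default port and uses macOS's built-in say command for speech; the function names and the "Samantha" voice are illustrative choices, and the microphone-capture and Whisper-transcription steps are omitted.

```python
import json
import subprocess
import urllib.request

# Ollama's default local endpoint for non-streaming generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "mistral") -> bytes:
    # Request body for Ollama's /api/generate endpoint (stream disabled)
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_llm(prompt: str) -> str:
    # Send the transcribed text to the local Ollama server, return its reply
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def speak(text: str, voice: str = "Samantha") -> None:
    # macOS's built-in `say` command performs the text-to-speech step
    subprocess.run(["say", "-v", voice, text], check=True)

# With `ollama serve` running and the mistral model pulled, a spoken
# exchange would look like:
#   speak(ask_llm("In one sentence, what can you do?"))
```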
Quick Start & Requirements
1. Pull the mistral model: ollama pull mistral.
2. Download a Whisper model (base.en) and place it in a /whisper directory.
3. Run brew install portaudio for PyAudio support on Apple Silicon.
4. Install the Python dependencies: pip install -r requirements.txt.
5. Start the assistant: python assistant.py.
Highlighted Details
Configuration options are set in assistant.yaml.
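The README does not document the available options, but a configuration file for this setup might look like the following sketch. Every key name and value here is a hypothetical illustration, not taken from the repository:

```yaml
# Hypothetical assistant.yaml — all keys are illustrative assumptions
model: mistral          # Ollama model used for responses
whisper_model: base.en  # Whisper model placed in the /whisper directory
voice: Samantha         # macOS system voice for spoken replies
```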
Maintenance & Community
This project builds upon the work of maudoin. Further community or maintenance details are not specified in the README.
Licensing & Compatibility
The repository's license is not explicitly stated in the README. Compatibility for commercial use or closed-source linking is not detailed.
Limitations & Caveats
The project is specifically for macOS. While it mentions improvements over a previous version, specific performance benchmarks or known issues are not detailed. The README implies that higher quality TTS requires downloading premium macOS system voices.