Desktop AI assistant for real-time context understanding
Top 9.9% on SourcePulse
Glass is a desktop application designed to act as a "digital mind extension" by capturing screen activity and audio in real-time to generate structured knowledge. It targets users who want to proactively extract information, summaries, and action items from their digital interactions, particularly during meetings, while maintaining privacy.
How It Works
Glass operates by observing screen content and audio input, processing this data with Large Language Models (LLMs) and Speech-to-Text (STT) engines to understand context. It supports multiple LLM providers, including OpenAI, Gemini, and Claude, as well as local models via Ollama. The system aims to provide instant answers and insights based on captured user activity, with an emphasis on unobtrusiveness and privacy.
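The multi-provider support described above implies a common abstraction over hosted and local backends. The sketch below is hypothetical (the README does not document Glass's internal interfaces); it only illustrates how OpenAI, Gemini, Claude, and a local Ollama endpoint can sit behind one interface, with the local case needing a base URL instead of an API key.

```typescript
// Hypothetical sketch: Glass's real provider layer is not documented in the README.
// Shows one way to select among hosted LLMs and a local Ollama endpoint.

type Provider = "openai" | "gemini" | "claude" | "ollama";

interface LLMClient {
  complete(prompt: string): Promise<string>;
}

// Stub clients; real implementations would call each vendor's API.
function createClient(provider: Provider, baseUrl?: string): LLMClient {
  switch (provider) {
    case "ollama":
      // Local model: no API key, talks to a local HTTP endpoint.
      const url = baseUrl ?? "http://localhost:11434";
      return { complete: async (p) => `[ollama@${url}] ${p}` };
    default:
      // Hosted providers share the same call shape here.
      return { complete: async (p) => `[${provider}] ${p}` };
  }
}

async function main() {
  const client = createClient("ollama");
  console.log(await client.complete("Summarize the last meeting"));
}
main();
```

The point of the design is that screen/audio pipelines only depend on `LLMClient`, so swapping a hosted provider for a local one is a configuration change rather than a code change.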
Quick Start & Requirements
Setup is a single command:

npm run setup
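A typical end-to-end flow around that command might look as follows. This is a sketch under assumptions: the repository URL, the Node.js/npm prerequisite, and the `npm start` launch script are not stated in the README.

```shell
# Assumed prerequisites: Node.js and npm installed.
# Replace <repo-url> with the actual Glass repository URL.
git clone <repo-url> glass
cd glass
npm run setup   # installs dependencies and performs first-time setup (from the README)
npm start       # launch the app (assumed script name, not confirmed by the README)
```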
Maintenance & Community
The project is actively developed with a recent full code refactoring. Contributions are welcomed via issues and pull requests. A Discord server is available for community engagement.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
The Windows version is currently in beta. The project is undergoing a full code refactor, which may introduce breaking changes. Specific details on data privacy and storage are not elaborated in the README.