lifemate-ai: AI embodiment project granting LLMs physical senses and agency
Top 99.3% on SourcePulse
Summary
This project, lifemate-ai/embodied-claude, equips LLMs like Claude with physical embodiment—enabling perception (sight, hearing), voice, and mobility using affordable hardware. It transforms AI from passive text processors into active agents that interact with and remember the physical world, fostering novel human-AI interactions and advancing embodied AI research.
How It Works
A modular system of "MCP Servers" provides distinct physical capabilities. Components like wifi-cam-mcp (vision/neck), tts-mcp (voice), and mobility-mcp (robot vacuum integration) leverage inexpensive hardware (e.g., ~$30 Wi-Fi cameras). This design prioritizes essential functions, allowing AI to "see for itself" and actively engage with its environment, rather than passively receiving data.
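The modular design described above can be sketched as follows. This is an illustrative stand-in, not the project's actual code: each MCP server exposes named tools, and a client dispatches tool calls to the server that owns the capability. The class and tool names (`MockMCPServer`, `speak`) are assumptions for demonstration.

```python
from typing import Callable, Dict


class MockMCPServer:
    """Stand-in for one MCP server process (e.g. tts-mcp)."""

    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable[..., str]] = {}

    def tool(self, fn: Callable[..., str]) -> Callable[..., str]:
        # Register a function as a named tool on this server.
        self.tools[fn.__name__] = fn
        return fn

    def call(self, tool: str, **kwargs) -> str:
        # Dispatch a tool call by name, as an MCP client would.
        return self.tools[tool](**kwargs)


tts = MockMCPServer("tts-mcp")


@tts.tool
def speak(text: str) -> str:
    # A real tts-mcp server would call a TTS engine here.
    return f"[{tts.name}] speaking: {text}"


print(tts.call("speak", text="hello"))  # → [tts-mcp] speaking: hello
```

Because each capability lives in its own server, a camera, a voice, or a vacuum can be added or removed without touching the others.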
Quick Start & Requirements
Clone the repository (git clone https://github.com/kmizu/embodied-claude.git), then set up each MCP server by running uv sync within its directory. Configuration is managed via .mcp.json and .env files.
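A .mcp.json entry along the lines of the sketch below registers one of the servers with the MCP client. The server name, entry point, directory layout, and environment variable here are illustrative assumptions, not the project's documented config.

```json
{
  "mcpServers": {
    "wifi-cam-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "wifi-cam-mcp", "wifi-cam-mcp"],
      "env": { "CAMERA_HOST": "192.168.1.50" }
    }
  }
}
```

Launching each server through uv run --directory lets the client start it from its own subdirectory, matching the per-server uv sync setup described above.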
Maintenance & Community
Notable contributions are acknowledged from Rumia-Channel (ONVIF support) and fruitriin (interoception hook). claude-code-webui by sugyan is used for remote operation. No explicit community channels or detailed roadmap are provided.
Licensing & Compatibility
Limitations & Caveats
The system-temperature-mcp module is non-functional within WSL2 environments due to hardware access limitations. Autonomous action features require careful privacy considerations and manual cron job setup. Robot vacuum integration depends on Tuya device compatibility and 2.4GHz Wi-Fi.
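The manual cron job setup mentioned above might look like the following sketch. The script path and name are hypothetical; the actual autonomous-action entry point is not documented here.

```
# Illustrative crontab entry (hypothetical entry point): trigger an
# autonomous action once per hour.
0 * * * * uv run --directory /path/to/embodied-claude autonomous-action
```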