bagh2178/UniGoal: Zero-shot navigation to any goal
Top 98.1% on SourcePulse
Summary
UniGoal enables universal zero-shot goal-oriented navigation, allowing agents to reach arbitrary goals in diverse environments without task-specific training. It targets embodied AI and robotics researchers and developers, offering flexible navigation via a unified graph representation.
How It Works
The core innovation is a unified graph representation for navigation goals and environments, enabling direct generalization to unseen scenes and goal types without retraining. The method integrates visual perception, language understanding, and spatial reasoning for goal decomposition and path planning.
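To make the idea concrete, the sketch below shows one way a goal and a partially observed scene can share a single graph representation, so that localizing the goal reduces to subgraph matching. This is an illustrative assumption, not UniGoal's actual API: every function and variable name is hypothetical, and networkx stands in for whatever matching machinery the method really uses.

```python
# Hypothetical sketch of the unified-graph idea (names are illustrative, not
# the project's API): both the goal and the observed scene are expressed as
# graphs of object nodes with relation edges, so checking whether the goal is
# present becomes a subgraph search instead of a task-specific model.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def build_graph(objects, relations):
    """Build a graph with object-category nodes and relation-labeled edges."""
    g = nx.Graph()
    for name, category in objects:
        g.add_node(name, category=category)
    for a, b, rel in relations:
        g.add_edge(a, b, relation=rel)
    return g

# Goal: "a cup on a table", expressed as a two-node graph.
goal = build_graph(
    objects=[("cup", "cup"), ("table", "table")],
    relations=[("cup", "table", "on")],
)

# Scene graph accumulated from visual perception during exploration.
scene = build_graph(
    objects=[("cup_1", "cup"), ("table_3", "table"), ("chair_2", "chair")],
    relations=[("cup_1", "table_3", "on"), ("chair_2", "table_3", "near")],
)

# The goal is localized when the goal graph embeds into the scene graph.
matcher = GraphMatcher(
    scene, goal,
    node_match=lambda s, g: s["category"] == g["category"],
    edge_match=lambda s, g: s["relation"] == g["relation"],
)
print(matcher.subgraph_is_monomorphic())  # True -> goal found in the scene
```

Because both unseen goal types and unseen scenes map into the same representation, no retraining is needed when either one changes.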
Quick Start & Requirements
- Create a `unigoal` Conda environment and install the dependencies (habitat-sim, habitat-lab, pytorch3d, detectron2, LightGlue, Grounded-Segment-Anything, GroundingDINO, faiss-gpu).
- Download the SAM and GroundingDINO model checkpoints.
- Set up a local vision-language model (e.g., `ollama pull llama3.2-vision`) or a custom API configuration.
- Run inference with `python main.py --goal_type <ins-image|text>`, supplying the goal details (image path or text).
- Real-world deployment requires implementing environment-specific functions in `src/envs/real_world_env.py`; a minimal stub sketch follows this list.
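For reference, here is a hypothetical skeleton of what `src/envs/real_world_env.py` might need to provide. The actual interface is defined by the repository, so every method name and signature below is an assumption to be replaced with the real ones.

```python
# Hypothetical skeleton for src/envs/real_world_env.py. The method names and
# signatures are assumptions for illustration; replace them with the interface
# the repository actually expects and wire in your own robot stack.
class RealWorldEnv:
    """Illustrative adapter between UniGoal's planner and a physical robot."""

    def reset(self):
        # Initialize sensors/odometry and return the first observation.
        raise NotImplementedError("Connect camera and pose sources here.")

    def get_observation(self):
        # Return sensor data, e.g. {"rgb": ..., "depth": ..., "pose": (x, y, yaw)}.
        raise NotImplementedError

    def step(self, action):
        # Execute a discrete action (e.g., "move_forward", "turn_left") on the
        # robot base, then return the resulting observation.
        raise NotImplementedError
```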
Maintenance & Community
Active development is indicated by recent updates and conference acceptances (CVPR 2025, CoRL 2025). The README lacks links to community channels, a roadmap, and detailed contributor information.
Licensing & Compatibility
The repository's license is not specified in the README. This omission is a significant adoption blocker, especially for commercial use or integration into proprietary systems.
Limitations & Caveats
Object-goal navigation support is pending. Real-world integration requires manual implementation of environment-specific functions. As a research project, stability and long-term maintenance are TBD. The unspecified license is the primary barrier to assessing usage rights.