visionOS examples for spatial computing
This repository provides a collection of example applications and resources for developing on Apple's visionOS platform. It targets developers looking to explore spatial computing capabilities, offering practical implementations of features like hand tracking, plane detection, and integration with AI models and external APIs. The examples serve as accelerators for building immersive experiences on the Apple Vision Pro.
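To make the tracking features concrete, here is a minimal sketch, not taken from this repository, of how a visionOS app typically starts an ARKit session with hand-tracking and plane-detection data providers; the `SpatialTrackingModel` name and the logging are illustrative only, and the actual examples may structure this differently.

```swift
import SwiftUI
import ARKit

// Illustrative helper: owns the ARKit session and its data providers.
@MainActor
final class SpatialTrackingModel: ObservableObject {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    func start() async {
        do {
            // Requests authorization and begins streaming anchor updates.
            // (Hand tracking also needs a usage-description entry in Info.plist.)
            try await session.run([handTracking, planeDetection])
        } catch {
            print("ARKit session failed to start: \(error)")
            return
        }

        // Consume hand anchors as they arrive (left/right chirality, joint poses, ...).
        Task {
            for await update in handTracking.anchorUpdates {
                print("Hand update:", update.anchor.chirality)
            }
        }

        // Consume plane anchors for surfaces detected in the room.
        Task {
            for await update in planeDetection.anchorUpdates {
                print("Plane \(update.event):", update.anchor.classification)
            }
        }
    }
}
```

Each provider streams anchors as an async sequence, which an example app can feed into RealityKit entities rendered from a SwiftUI RealityView.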
How It Works
The project showcases various visionOS features through distinct example applications. These examples build on Apple's native frameworks: SwiftUI for the UI, ARKit for spatial awareness and tracking, and RealityKit for 3D rendering and scene management. Some examples also demonstrate integration with external services, such as OpenAI for chat functionality and LM Studio for local large language models, highlighting the platform's extensibility.
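The OpenAI and LM Studio examples can share essentially the same client code, since LM Studio's local server exposes an OpenAI-compatible chat-completions endpoint (typically at http://localhost:1234). The sketch below is an illustration rather than code from the repository: the `sendChat` helper, the default model name, and the request/response types are all hypothetical.

```swift
import Foundation

// Minimal OpenAI-style chat-completions payloads (illustrative subset).
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

struct ChatResponse: Codable {
    struct Choice: Codable {
        let message: ChatMessage
    }
    let choices: [Choice]
}

// Sends one prompt and returns the model's reply.
// Defaults assume LM Studio's local server; swap in https://api.openai.com/v1
// and an API key when targeting OpenAI.
func sendChat(prompt: String,
              baseURL: URL = URL(string: "http://localhost:1234/v1")!,
              apiKey: String? = nil,
              model: String = "gpt-4o-mini") async throws -> String {
    var request = URLRequest(url: baseURL.appendingPathComponent("chat/completions"))
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    if let apiKey {
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    }
    let body = ChatRequest(model: model,
                           messages: [ChatMessage(role: "user", content: prompt)])
    request.httpBody = try JSONEncoder().encode(body)

    let (data, _) = try await URLSession.shared.data(for: request)
    let decoded = try JSONDecoder().decode(ChatResponse.self, from: data)
    return decoded.choices.first?.message.content ?? ""
}
```

A chat example would presumably call something like this from a SwiftUI action and render the returned string in the window or immersive space.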
Quick Start & Requirements
Highlighted Details
Maintenance & Community
The repository is maintained by IvanCampos. Learning resources and community links are provided, including Apple's official documentation, other community repositories, and relevant subreddits such as r/visionosdev. Activity appears limited: at the time of writing, the last update was roughly two months ago and the project is listed as inactive.
Licensing & Compatibility
The repository's license is not explicitly stated in the provided README. The examples target visionOS, so development requires Apple hardware and tooling: a Mac running Xcode with the visionOS SDK, and an Apple Vision Pro (or the visionOS simulator) to run the apps.
Limitations & Caveats
The examples are presented as "accelerators" and may require further development before production use. Some are noted as beta versions, indicating potential instability or ongoing changes. The README also mentions a "visionOS Dev Bot" that requires ChatGPT Plus, introducing a proprietary dependency.