Framework for point cloud sequence segmentation via vision foundation model distillation
This repository provides Seal, a framework for segmenting automotive point cloud sequences by distilling knowledge from Vision Foundation Models (VFMs). It targets researchers and engineers working with 3D perception, offering a self-supervised approach that leverages spatial and temporal consistency without requiring manual annotations during pretraining.
How It Works
Seal distills knowledge from VFMs into point clouds by generating semantic superpixels and superpoints. It enforces spatial consistency between LiDAR and camera features and temporal consistency between segments across frames. This cross-modal learning approach enables effective knowledge transfer to diverse point cloud datasets.
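The pooling-and-matching idea above can be sketched in a few lines: per-point and per-pixel features are average-pooled into superpoint and superpixel embeddings, and corresponding segments are pulled together with an InfoNCE-style objective. This is a minimal NumPy illustration of the general technique, not Seal's actual API; the function names, shapes, and the specific loss form are assumptions.

```python
import numpy as np

def pool_by_segment(feats, seg_ids, num_segments):
    """Average-pool per-element features (points or pixels) into
    per-segment embeddings, indexed by segment id."""
    dim = feats.shape[1]
    pooled = np.zeros((num_segments, dim))
    counts = np.zeros(num_segments)
    for f, s in zip(feats, seg_ids):
        pooled[s] += f
        counts[s] += 1
    return pooled / np.maximum(counts, 1)[:, None]

def spatial_consistency_loss(point_emb, pixel_emb, temperature=0.07):
    """InfoNCE-style loss: superpoint i should match superpixel i
    (the segment it projects onto) and repel all other segments."""
    p = point_emb / np.linalg.norm(point_emb, axis=1, keepdims=True)
    q = pixel_emb / np.linalg.norm(pixel_emb, axis=1, keepdims=True)
    logits = p @ q.T / temperature
    # Cross-entropy with the diagonal as the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: 100 points / 200 pixels, 8-dim features, 3 shared segments.
rng = np.random.default_rng(0)
point_feats = rng.standard_normal((100, 8))
pixel_feats = rng.standard_normal((200, 8))
superpoints = pool_by_segment(point_feats, rng.integers(0, 3, 100), 3)
superpixels = pool_by_segment(pixel_feats, rng.integers(0, 3, 200), 3)
loss = spatial_consistency_loss(superpoints, superpixels)
```

The same pooling can be applied to segments tracked across frames to form the temporal consistency term described above.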
Quick Start & Requirements
Setup and usage are documented in the repository:
- INSTALL.md — installation
- DATA_PREPARE.md — dataset preparation
- SUPERPOINT.md — superpixel/superpoint generation
- GET_STARTED.md — getting started
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The current release targets automotive point clouds and may require adaptation for other domains. While pretraining is self-supervised, downstream tasks still require fine-tuning with labeled data. The repository's "TODO List" notes that evaluation and training details are still to be added.