PointGST by jerryfeng2003

Efficiently fine-tune 3D point cloud models in the spectral domain

Created 1 year ago
375 stars

Top 75.7% on SourcePulse

View on GitHub
Project Summary

PointGST offers parameter-efficient fine-tuning (PEFT) for point cloud learning, addressing the high computational and storage costs of fully fine-tuning pre-trained models. It targets researchers and practitioners who need to adapt 3D models efficiently, delivering state-of-the-art results with significantly fewer trainable parameters.

How It Works

The core innovation is the Point Cloud Spectral Adapter (PCSA), a lightweight trainable module inserted into a frozen pre-trained backbone. Rather than updating features in the spatial domain, PCSA projects intermediate features into the spectral domain, where updates can concentrate on a compact set of components that reflect the point cloud's global geometric structure. This keeps the adapter small while preserving fine-tuning performance.
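To make this concrete, below is a minimal, hypothetical sketch of spectral-domain adaptation in PyTorch. It is not the authors' PCSA implementation: the class name SpectralAdapter, the kNN-graph Laplacian construction, and the bottleneck size are illustrative assumptions.

import torch
import torch.nn as nn

class SpectralAdapter(nn.Module):
    """Toy adapter: project features onto a spectral basis built from the
    point cloud's kNN graph Laplacian, adapt them with a tiny trainable
    bottleneck, then project back and add a residual connection."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # trainable, small
        self.up = nn.Linear(bottleneck, dim)     # trainable, small
        nn.init.zeros_(self.up.weight)           # adapter starts as a no-op
        nn.init.zeros_(self.up.bias)

    @staticmethod
    def spectral_basis(xyz: torch.Tensor, k: int = 8) -> torch.Tensor:
        # xyz: (N, 3) coordinates of one point cloud.
        dist = torch.cdist(xyz, xyz)                          # pairwise distances
        knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop self-match
        adj = torch.zeros_like(dist).scatter_(1, knn, 1.0)
        adj = torch.maximum(adj, adj.T)                       # symmetrize
        lap = torch.diag(adj.sum(dim=1)) - adj                # graph Laplacian
        return torch.linalg.eigh(lap).eigenvectors            # (N, N) basis U

    def forward(self, x: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) token features from a frozen backbone block.
        spectral = basis.T @ x                   # into the spectral domain
        spectral = self.up(self.down(spectral))  # cheap trainable update
        return x + basis @ spectral              # back to spatial, residual add

During fine-tuning, only the adapter's parameters would receive gradients; the pre-trained backbone stays frozen.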

Quick Start & Requirements

  • Installation: Clone the repository (git clone https://github.com/jerryfeng2003/PointGST.git) and cd into it. Anaconda is recommended.
  • Environment: Create and activate a Conda environment (conda create -y -n pgst python=3.9, conda activate pgst).
  • Dependencies: Install PyTorch 2.0.0 with CUDA 11.8 (pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118) and the remaining requirements (pip install -r requirements.txt); a quick version check follows this list.
  • Custom Extensions: Requires compiling CUDA extensions for Chamfer Distance, EMD, and GPU kNN.
  • Hardware: Experiments were conducted on a single NVIDIA 3090 GPU.
  • Datasets: Details are in DATASET.md.
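After installation, a quick check along these lines (a generic PyTorch snippet, not part of the repository) confirms the pinned build and CUDA support:

import torch

print(torch.__version__)          # expect 2.0.0+cu118
print(torch.version.cuda)         # expect 11.8
print(torch.cuda.is_available())  # expect True on a working CUDA 11.8 setup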

Highlighted Details

  • Achieves state-of-the-art (SOTA) accuracy on ScanObjectNN: 99.48% (OBJ_BG), 97.76% (OBJ_ONLY), and 96.18% (PB_T50_RS).
  • Reaches SOTA performance with only 0.67% of parameters trainable (a toy illustration of this ratio follows the list).
  • Outperforms full fine-tuning by up to 2.78% accuracy.
  • Leverages spectral domain tuning for efficient point cloud model adaptation.
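As a rough illustration of what a sub-1% trainable ratio means in practice, the sketch below freezes a placeholder backbone and counts parameters. The backbone and adapter shapes are arbitrary stand-ins, not the repository's models, so the printed fraction will differ from 0.67%.

import torch.nn as nn

# Placeholder frozen backbone: a generic 12-block transformer encoder.
layer = nn.TransformerEncoderLayer(d_model=384, nhead=6)
backbone = nn.TransformerEncoder(layer, num_layers=12)
for p in backbone.parameters():
    p.requires_grad = False                   # freeze pre-trained weights

# Placeholder lightweight adapter: the only trainable parameters.
adapter = nn.Sequential(nn.Linear(384, 8), nn.Linear(8, 384))

trainable = sum(p.numel() for p in adapter.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {100 * trainable / total:.4f}%")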

Maintenance & Community

The project is authored by researchers from Huazhong University of Science and Technology and Baidu Inc.; the accompanying paper has been accepted to IEEE TPAMI. No community channels or roadmap are specified in the README.

Licensing & Compatibility

Licensed under Apache 2.0, permitting commercial use and integration with closed-source projects.

Limitations & Caveats

Experiments are documented on a single NVIDIA 3090 GPU; compatibility and performance on other hardware configurations are not detailed. The project also depends on several external codebases and inherits their maintenance status.

Health Check

  • Last Commit: 1 month ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 14 stars in the last 30 days

Explore Similar Projects

Starred by Tri Dao (Chief Scientist at Together AI), Stas Bekman (Author of "Machine Learning Engineering Open Book"; Research Engineer at Snowflake), and 1 more.

oslo by tunib-ai

309 stars
Framework for large-scale transformer optimization
Created 3 years ago
Updated 3 years ago
Starred by Tobi Lutke (Cofounder of Shopify), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 6 more.

xTuring by stochasticai

3k stars
SDK for fine-tuning and customizing open-source LLMs
Created 2 years ago
Updated 1 day ago