Self-supervised masked graph autoencoder implementation for node/graph classification and molecular property prediction
GraphMAE is a self-supervised learning framework for graphs, offering a generative alternative to contrastive methods. It targets researchers and practitioners in graph representation learning, aiming to achieve competitive or superior performance on node classification, graph classification, and molecular property prediction tasks.
How It Works
GraphMAE employs a masked autoencoder approach, similar to BERT for text. It masks a portion of the input graph's structural or feature information and trains an autoencoder to reconstruct the original graph. This generative pre-training allows the model to learn robust graph representations without relying on negative sampling or manual augmentation strategies common in contrastive methods.
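The masking-and-reconstruction loop described above can be sketched in a few lines. This is a minimal toy illustration, not the repository's implementation: the adjacency matrix, mean-aggregation "encoder", identity decoder, and zero mask token are all simplifying assumptions made for brevity. The loss follows the scaled cosine error that the GraphMAE paper uses in place of plain MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, adjacency with self-loops (hypothetical example graph)
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=float)
X = rng.normal(size=(5, 8))          # node features

# 1. Mask a subset of nodes: replace their features with a [MASK] token
#    (a learnable vector in practice; zeros here for simplicity)
mask = np.array([True, False, True, False, False])
X_in = np.where(mask[:, None], np.zeros(8), X)

# 2. "Encode": one round of mean aggregation over neighbors
#    (a stand-in for a real GNN encoder such as GAT)
deg = A.sum(axis=1, keepdims=True)
H = (A @ X_in) / deg

# 3. "Decode": reconstruct the original features (identity decoder here)
X_rec = H

# 4. Loss on the masked nodes only: scaled cosine error
def sce_loss(x, y, gamma=2.0):
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return np.mean((1.0 - (x * y).sum(axis=1)) ** gamma)

loss = sce_loss(X_rec[mask], X[mask])
print(f"reconstruction loss on masked nodes: {loss:.4f}")
```

Training then backpropagates this loss through the encoder and decoder; because the target is the graph's own features, no negative samples or hand-tuned augmentations are needed.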
Quick Start & Requirements
pip install torch dgl pyyaml
(ensure PyTorch >= 1.9.0 and DGL >= 0.7.2)

Run experiments via the provided shell scripts (e.g., sh scripts/run_transductive.sh) or the Python entry points (e.g., python main_transductive.py).

Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats