Graph-Bert by jwzhanggy

Research paper for graph representation learning using attention

Created 5 years ago · 499 stars · Top 63.1% on sourcepulse

Project Summary

This repository provides the implementation of Graph-Bert, a model that learns graph representations using only attention mechanisms, without graph convolution or aggregation operators. It is aimed at researchers and practitioners in graph neural networks and representation learning who want a simpler alternative to conventional GNN architectures.

How It Works

Graph-Bert leverages a self-attention mechanism, inspired by the Transformer architecture, to capture relationships within graph data. Instead of relying on traditional graph convolutional layers, it uses attention to weigh the importance of neighboring nodes and their attributes. This approach aims to achieve competitive performance with a potentially simpler and more scalable architecture. The model utilizes node attributes, subgraph batching, and hop distances as prior inputs.
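A minimal sketch of this idea, using PyTorch's built-in Transformer layers rather than the repository's own classes (the names, dimensions, and hop-embedding scheme below are illustrative assumptions, not the actual Graph-Bert implementation):

```python
import torch
import torch.nn as nn

# Sketch: node attributes plus hop-distance priors for a sampled subgraph are
# embedded, summed, and passed through plain self-attention layers.
class AttentionOnlyGraphEncoder(nn.Module):
    def __init__(self, feat_dim, hidden_dim=32, n_heads=2, n_layers=2, max_hop=5):
        super().__init__()
        self.feat_embed = nn.Linear(feat_dim, hidden_dim)        # raw node attributes
        self.hop_embed = nn.Embedding(max_hop + 1, hidden_dim)   # hop distance to the target node
        layer = nn.TransformerEncoderLayer(hidden_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, node_feats, hop_dists):
        # node_feats: (batch, subgraph_size, feat_dim); hop_dists: (batch, subgraph_size)
        h = self.feat_embed(node_feats) + self.hop_embed(hop_dists)
        h = self.encoder(h)      # attention over the subgraph, no graph convolutions
        return h[:, 0]           # representation of the target (first) node
```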

Quick Start & Requirements

  • Install dependencies: pytorch, sklearn, transformers, networkx.
  • Run scripts via python3 script_name.py.
  • For node classification, execute python3 script_3_fine_tuning.py.
  • Note: Transformer import statements may require adjustment based on your transformers toolkit version (e.g., from transformers.models.bert.modeling_bert import ...); a hedged version guard is sketched after this list.
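For example, a guard like the following can smooth over the package-layout change between transformers releases (BertPreTrainedModel is just one of the classes the scripts import; adapt the list to what your version of the code actually needs):

```python
# Older transformers releases expose modeling_bert at the package root;
# newer (>= 4.x) releases nest it under transformers.models.bert.
try:
    from transformers.models.bert.modeling_bert import BertPreTrainedModel
except ImportError:
    from transformers.modeling_bert import BertPreTrainedModel
```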

Highlighted Details

  • Implements Graph-Bert, a model focusing on attention for graph representation learning.
  • Includes scripts for data preprocessing (WL code, subgraph batching, hop distance), pre-training (node attribute reconstruction, graph structure recovery), fine-tuning (node classification, graph clustering), and evaluation.
  • Provides a structured codebase with base classes for data loading, methods, results, evaluation, and settings.

Maintenance & Community

  • The project originates from IFM Lab.
  • Links to related papers and other GNN projects from IFM Lab are provided.

Licensing & Compatibility

  • The repository does not explicitly state a license in the provided README.

Limitations & Caveats

  • The README notes that random seed control for parameter initialization in transformers may be unreliable, and suggests running the scripts multiple times to obtain the best results.
  • Users may need to manually adjust import statements for the transformers library depending on their installed version.
  • The README mentions that saving/loading the entire PyTorch model works, but saving/loading the state_dict might not function correctly; a short save/load sketch follows this list.
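Given that caveat, a minimal sketch of the full-model save/load path (the file name and model variable are illustrative):

```python
import torch

# Persist the whole model object, which the README reports as working,
# rather than relying on state_dict round-trips.
torch.save(model, 'graph_bert_model.pt')
model = torch.load('graph_bert_model.pt')   # requires the model class definitions on the import path
model.eval()
```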
Health Check

  • Last commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 9 stars in the last 90 days
