Graph-Bert: implementation of the research paper on graph representation learning using only attention
This repository provides the implementation for Graph-Bert, a model designed for learning graph representations using only attention mechanisms. It is targeted at researchers and practitioners in graph neural networks and representation learning, offering a novel approach that simplifies GNN architectures by relying solely on attention.
How It Works
Graph-Bert leverages a self-attention mechanism, inspired by the Transformer architecture, to capture relationships within graph data. Instead of relying on traditional graph convolutional layers, it uses attention to weigh the importance of neighboring nodes and their attributes. This approach aims to achieve competitive performance with a potentially simpler and more scalable architecture. The model utilizes node attributes, subgraph batching, and hop distances as prior inputs.
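To make the mechanism concrete, below is a minimal, hypothetical sketch of the idea: node attributes plus a hop-distance embedding feed a standard multi-head self-attention layer over a sampled subgraph. The class, dimensions, and tensor layout are illustrative assumptions, not the repository's actual API, and the real model includes additional embedding components omitted here.

```python
import torch
import torch.nn as nn

class AttentionOverSubgraph(nn.Module):
    """Illustrative sketch (not the repository's API): combine node
    attributes with a hop-distance embedding, then let multi-head
    self-attention weigh the sampled context nodes."""

    def __init__(self, attr_dim, hidden_dim=64, num_heads=4, max_hops=8):
        super().__init__()
        self.attr_proj = nn.Linear(attr_dim, hidden_dim)      # raw node attributes
        self.hop_embed = nn.Embedding(max_hops, hidden_dim)   # hop distance to the target node
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, node_attrs, hop_dists):
        # node_attrs: (batch, k, attr_dim) attributes of k sampled subgraph nodes
        # hop_dists:  (batch, k) integer hop distances from the target node
        h = self.attr_proj(node_attrs) + self.hop_embed(hop_dists)
        out, _ = self.attn(h, h, h)  # attention weighs the importance of each context node
        return out[:, 0]             # position 0 holds the target node

# Toy usage: a batch of 2 target nodes, each with a 5-node sampled subgraph.
x = torch.randn(2, 5, 16)
d = torch.randint(0, 8, (2, 5))
print(AttentionOverSubgraph(attr_dim=16)(x, d).shape)  # torch.Size([2, 64])
```

Replacing graph convolutions with attention over a fixed-size sampled subgraph is what allows subgraph batching: every training example becomes a fixed-length node sequence, as in a standard Transformer.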
Quick Start & Requirements
Requirements: pytorch, sklearn, transformers, networkx.

Each stage of the pipeline is run as a numbered script with python3 script_name.py, e.g. python3 script_3_fine_tuning.py for fine-tuning.

Note: import statements may need adjustment to match the installed transformers toolkit version (e.g., from transformers.models.bert.modeling_bert import ...).
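One way to absorb that version difference is a small compatibility shim; the sketch below uses BertPreTrainedModel as an illustrative class (the repository's actual imports may differ), relying on the fact that transformers v4 moved model code under transformers.models.*.

```python
# Minimal sketch: resolve a BERT building block across transformers versions.
# BertPreTrainedModel is illustrative; substitute whichever classes the
# repository actually imports.
try:
    # transformers >= 4.x: model code lives under transformers.models.*
    from transformers.models.bert.modeling_bert import BertPreTrainedModel
except ImportError:
    # transformers 3.x and earlier used a flat module layout
    from transformers.modeling_bert import BertPreTrainedModel
```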
Highlighted Details
Maintenance & Community
The last commit was about 2 years ago, and the repository appears inactive.

Licensing & Compatibility

Limitations & Caveats
Users may need to adjust imports for the transformers library depending on their installed version (see the shim sketch above). In addition, loading saved weights via state_dict might not function correctly; a tolerant-loading sketch follows.
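If a state_dict load fails, a non-strict load can at least surface which keys mismatch instead of raising immediately. A minimal sketch, assuming a hypothetical checkpoint file name and a stand-in module in place of the actual Graph-Bert model:

```python
import torch
import torch.nn as nn

# Hypothetical workaround sketch: load a checkpoint saved under a different
# library version non-strictly and inspect the mismatched keys.
model = nn.Linear(16, 64)  # stand-in for the actual Graph-Bert model
state = torch.load("graph_bert_checkpoint.pt", map_location="cpu")  # hypothetical path
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)        # parameters the model expects but the file lacks
print("unexpected keys:", unexpected)  # parameters in the file the model does not use
```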