XiaoxinHe: Enhanced graph representation learning via LLM-to-LM interpretation
This repository provides the official implementation for the ICLR 2024 paper "Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning." It enables researchers and practitioners to enhance graph representation learning by leveraging explanations generated by Large Language Models (LLMs) as interpreters for text attributes. The project offers a framework for fine-tuning language models and training Graph Neural Networks (GNNs) with these enriched features, aiming to improve performance on text-attributed graph tasks.
How It Works
The core approach involves an LLM-to-LM interpreter that processes text attributes associated with graph nodes. This interpreter generates explanations, which are then used to fine-tune language models. These fine-tuned models produce enriched text representations that are integrated into GNN architectures. This method aims to capture deeper semantic understanding from text, leading to more effective graph representation learning compared to using raw text or standard embeddings alone.
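The pipeline above can be sketched in miniature. This is a hypothetical illustration, not the repository's actual API: the LLM call and the language-model encoder are stand-in placeholders (here a hash-based pseudo-embedding replaces a fine-tuned LM's representation), and only the data flow — raw text, LLM explanation, enriched node features — mirrors the described method.

```python
# Sketch of the TAPE-style LLM-to-LM pipeline (hypothetical names and
# placeholder implementations; the real repo uses an LLM API, a fine-tuned
# language model, and a GNN).
import hashlib

def llm_explain(text: str) -> str:
    # Placeholder for an LLM call that returns a free-text explanation
    # of a node's text attribute.
    return f"Explanation of: {text}"

def lm_encode(text: str, dim: int = 8) -> list:
    # Placeholder encoder: a deterministic hash-based pseudo-embedding
    # standing in for a fine-tuned LM's sentence representation.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def node_features(raw_text: str) -> list:
    # Enriched features = encoding of the raw text concatenated with an
    # encoding of the LLM-generated explanation; these would then feed a GNN.
    return lm_encode(raw_text) + lm_encode(llm_explain(raw_text))

feats = node_features("Paper title: Graph neural networks for citation data")
print(len(feats))  # 16: two 8-dim encodings concatenated
```

In the actual method, `node_features` would be produced by a language model fine-tuned on the LLM explanations, and the resulting vectors become node inputs to a standard GNN.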
Quick Start & Requirements
Highlighted Details
Maintenance & Community
Information regarding specific maintainers, community channels (like Discord/Slack), or a public roadmap is not explicitly detailed in the provided README. The project is associated with the ICLR 2024 paper.
Licensing & Compatibility
The license type is not specified in the provided README content. Compatibility for commercial use or closed-source linking cannot be determined without explicit licensing information.
Limitations & Caveats
The setup pins specific PyTorch and CUDA versions (CUDA 11.3), which may pose compatibility challenges on newer hardware or in existing environments. The README does not mention any known bugs, alpha status, or unsupported platforms.
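A pinned environment reflecting the CUDA 11.3 constraint might look like the following; these commands are an assumption for illustration, not taken from the repository's README, and the exact Python and PyTorch versions should be checked against it.

```shell
# Hypothetical environment setup for the CUDA 11.3 requirement.
conda create -n tape python=3.8 -y
conda activate tape
# Install a PyTorch build compiled against CUDA 11.3:
pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```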