alphafold2 by lucidrains

PyTorch implementation for protein structure prediction

Created 4 years ago
1,620 stars

Top 25.9% on SourcePulse

Project Summary

This repository provides an unofficial PyTorch implementation of AlphaFold2, targeting researchers and developers interested in protein structure prediction. It aims to replicate DeepMind's AlphaFold2 architecture, offering flexibility in predicting distograms, angles, and 3D coordinates, with a focus on integrating various attention mechanisms and structural refinement techniques.

How It Works

The core of the implementation is a modular Transformer architecture that processes sequence and Multiple Sequence Alignment (MSA) data. It incorporates axial attention for MSA processing and offers options for SE(3) Transformers, E(n)-Transformers, or EGNNs for iterative coordinate refinement. The design allows for customization of attention types (sparse, linear, Kronecker), convolutional blocks, and atom representations, enabling exploration of different architectural choices for improved accuracy and efficiency.
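
The basic interface follows this design: a sequence and its MSA go in, and a distogram comes out. Below is a minimal usage sketch adapted from the repository's documented examples; the model dimensions are arbitrary toy values, and exact constructor arguments may differ between versions of the package.

```python
import torch
from alphafold2_pytorch import Alphafold2

# small toy configuration for illustration only
model = Alphafold2(
    dim = 256,
    depth = 2,
    heads = 8,
    dim_head = 64
).cuda()

# one sequence of length 128 and an MSA of 5 aligned sequences,
# encoded as integer amino-acid tokens
seq = torch.randint(0, 21, (1, 128)).cuda()
msa = torch.randint(0, 21, (1, 5, 128)).cuda()
mask = torch.ones_like(seq).bool()
msa_mask = torch.ones_like(msa).bool()

# the forward pass returns a distogram: binned pairwise-distance logits
distogram = model(
    seq,
    msa,
    mask = mask,
    msa_mask = msa_mask
)
```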

Quick Start & Requirements

  • Install via pip: pip install alphafold2-pytorch
  • Requires PyTorch and CUDA.
  • Optional: NVIDIA Apex library for pre-trained embeddings (ESM, MSA Transformers, Protein Transformer).
  • See the coordinate-prediction sketch below and the repository's usage examples for detailed code snippets.
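
The sketch below shows the coordinate-prediction mode. It assumes the predict_coords flag and the structure-module keyword names used in the repository's examples; those names (e.g. structure_module_depth) are assumptions here and should be verified against the installed release.

```python
import torch
from alphafold2_pytorch import Alphafold2

# With predict_coords = True the forward pass returns refined 3D coordinates
# instead of a distogram. The structure_module_* keyword names below are
# assumptions based on the repository's examples and may differ by version.
model = Alphafold2(
    dim = 256,
    depth = 2,
    heads = 8,
    dim_head = 64,
    predict_coords = True,
    structure_module_depth = 1,
    structure_module_heads = 1,
    structure_module_dim_head = 16
).cuda()

seq = torch.randint(0, 21, (1, 64)).cuda()
msa = torch.randint(0, 21, (1, 5, 64)).cuda()
mask = torch.ones_like(seq).bool()
msa_mask = torch.ones_like(msa).bool()

coords = model(seq, msa, mask = mask, msa_mask = msa_mask)  # (batch, atoms, 3)
```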

Highlighted Details

  • Supports prediction of distograms, angles, and 3D coordinates.
  • Integrates various attention mechanisms: sparse, linear, Kronecker, and memory-compressed attention.
  • Offers multiple structure module options for coordinate refinement: SE(3) Transformer, E(n)-Transformer, EGNN.
  • Allows customization of convolutional kernels, dilations, and block ordering.
  • Can incorporate pre-trained embeddings from ESM, MSA Transformers, or Protein Transformer (see the sketch after this list).
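
As a sketch of the pre-trained embedding option, the snippet below wraps the base model with an embedding wrapper from the package's embeds module. The class name MSAEmbedWrapper and its alphafold2 keyword are assumptions based on the repository's documentation and may differ by version; this path also relies on the optional dependencies noted under Quick Start & Requirements.

```python
import torch
from alphafold2_pytorch import Alphafold2
# assumed wrapper name from the package's embeds module; verify before use
from alphafold2_pytorch.embeds import MSAEmbedWrapper

alphafold2 = Alphafold2(
    dim = 256,
    depth = 2,
    heads = 8,
    dim_head = 64
)

# the wrapper loads a pre-trained MSA Transformer and feeds its residue
# embeddings into the underlying Alphafold2 model before the usual forward pass
model = MSAEmbedWrapper(alphafold2 = alphafold2).cuda()

seq = torch.randint(0, 21, (1, 128)).cuda()
msa = torch.randint(0, 21, (1, 5, 128)).cuda()
mask = torch.ones_like(seq).bool()
msa_mask = torch.ones_like(msa).bool()

distogram = model(seq, msa, mask = mask, msa_mask = msa_mask)
```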

Maintenance & Community

  • Developed by lucidrains; the last commit was 2 years ago (see Health Check).
  • Discussion channel: #alphafold on Discord.
  • The README references numerous research papers and datasets underpinning the implementation.

Licensing & Compatibility

  • MIT License.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

  • This is an unofficial implementation and may not perfectly match the official AlphaFold2.
  • Sparse attention is currently only supported for self-attention, not cross-attention.
  • Installing DeepSpeed with sparse attention support and its Triton dependency may require additional, version-specific steps.
Health Check

  • Last Commit: 2 years ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

7 stars in the last 30 days

Explore Similar Projects

Starred by Aravind Srinivas (Cofounder of Perplexity), Li Jiang (Coauthor of AutoGen; Engineer at Microsoft), and 6 more.

numpy-ml by ddbourgin

ML algorithms implemented in NumPy
16k stars · Top 0.1%
Created 6 years ago · Updated 1 year ago