RobustBench
Standardized benchmark for adversarial robustness research
RobustBench provides a standardized benchmark and a model zoo for evaluating the adversarial robustness of deep learning models. To track real progress in the field, it accepts only models with deterministic forward passes and non-zero gradients, excluding defenses whose optimization loops or randomness can inflate reported robustness. The project is useful for researchers and practitioners who want to understand and use state-of-the-art robust models.
How It Works
RobustBench establishes a benchmark by evaluating models against standardized attacks, primarily AutoAttack, and also welcomes adaptive attack evaluations. It imposes restrictions on accepted defenses to ensure reliable comparisons, focusing on models with deterministic forward passes and non-zero gradients. The project maintains a "Model Zoo" of pre-trained robust models that can be easily loaded and used for downstream tasks or further evaluation.
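As an illustration of this workflow, here is a minimal sketch that loads a Model Zoo entry and runs AutoAttack on a few CIFAR-10 test points. The helper names and the 'Carmon2019Unlabeled' entry follow the project's README examples; treat the exact signatures as assumptions rather than a definitive recipe.

```python
from autoattack import AutoAttack  # separate package from https://github.com/fra31/auto-attack
from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Download a pre-trained robust model from the Model Zoo (cached locally).
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10',
                   threat_model='Linf')
model.eval()

# A small batch of CIFAR-10 test points keeps the demo fast.
x_test, y_test = load_cifar10(n_examples=50)

# AutoAttack is the standardized attack behind the leaderboard numbers.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255)
x_adv = adversary.run_standard_evaluation(x_test, y_test)
```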
Quick Start & Requirements
pip install git+https://github.com/RobustBench/robustbench.git
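Once installed, a quick smoke test might look like the sketch below; 'Standard' is one Model Zoo entry name, and the helpers are assumed to behave as described in the project README.

```python
import torch

from robustbench.data import load_cifar10
from robustbench.utils import load_model

# The first call downloads and caches the checkpoint.
model = load_model(model_name='Standard', dataset='cifar10', threat_model='Linf')
model.eval()

# Quick clean-accuracy check on a handful of test images.
x_test, y_test = load_cifar10(n_examples=50)
with torch.no_grad():
    preds = model(x_test).argmax(dim=1)
print(f'clean accuracy: {(preds == y_test).float().mean().item():.2%}')
```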
Maintenance & Community
The maintainers can be reached at adversarial.benchmark@gmail.com or via GitHub issues and pull requests.
Limitations & Caveats
By design, the benchmark excludes defenses that rely on randomness or inner optimization loops, so such approaches are not represented on the leaderboard. The repository was last updated roughly seven months ago and is currently flagged as inactive, so responses to issues and new submissions may be slow.