Standardized benchmark for adversarial robustness research
RobustBench provides a standardized benchmark and a model zoo for evaluating adversarial robustness in deep learning models. It aims to track real progress in the field by focusing on models with deterministic forward passes and non-zero gradients, excluding those with optimization loops or randomness that can inflate robustness metrics. The project is valuable for researchers and practitioners seeking to understand and utilize state-of-the-art robust models.
How It Works
RobustBench establishes a benchmark by evaluating models against standardized attacks, primarily AutoAttack, and also welcomes adaptive attack evaluations. It imposes restrictions on accepted defenses to ensure reliable comparisons, focusing on models with deterministic forward passes and non-zero gradients. The project maintains a "Model Zoo" of pre-trained robust models that can be easily loaded and used for downstream tasks or further evaluation.
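The two acceptance criteria above can be sketched in plain Python. This is a hedged illustration on a toy scalar "model", not the official RobustBench checker: a defense qualifies only if its forward pass is deterministic (the same input always yields the same output) and its gradients with respect to the input are not identically zero (otherwise gradient-based attacks like AutoAttack are silently defeated, inflating robustness).

```python
def toy_model(x):
    # Stand-in for a network's forward pass: a smooth scalar function.
    return 3.0 * x * x + 2.0 * x + 1.0

def is_deterministic(f, x):
    # Deterministic forward pass: repeated calls on the same input agree.
    return f(x) == f(x)

def input_gradient(f, x, eps=1e-6):
    # Central finite difference as a cheap probe of the input gradient.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x0 = 1.0
deterministic = is_deterministic(toy_model, x0)
grad = input_gradient(toy_model, x0)   # analytic gradient 6*x0 + 2 = 8.0
nonzero_grad = abs(grad) > 1e-8
```

A randomized or gradient-masking defense would fail one of these probes, which is exactly why RobustBench excludes such models from the leaderboard.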
Quick Start & Requirements
pip install git+https://github.com/RobustBench/robustbench.git
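After installing, models from the Model Zoo can be loaded by name via `robustbench.utils.load_model`. A minimal sketch, assuming the package is available (the model name here is one leaderboard entry chosen for illustration; weights are downloaded on first use):

```python
import importlib.util

# Arguments for load_model; threat_model is one of 'Linf', 'L2', 'corruptions'.
ZOO_ARGS = dict(model_name="Carmon2019Unlabeled",
                dataset="cifar10",
                threat_model="Linf")

# Guarded so the sketch degrades gracefully when robustbench is not installed.
if importlib.util.find_spec("robustbench") is not None:
    from robustbench.utils import load_model
    model = load_model(**ZOO_ARGS)  # downloads the checkpoint on first use
    model.eval()                    # ready for evaluation or downstream tasks
```

The returned model is an ordinary PyTorch module, so it can be plugged directly into further evaluation or fine-tuning code.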
Highlighted Details
Maintenance & Community
Questions and contributions are handled via adversarial.benchmark@gmail.com or through GitHub issues and pull requests.
Licensing & Compatibility
Limitations & Caveats