robustbench by RobustBench

Standardized benchmark for adversarial robustness research

created 5 years ago
727 stars

Top 48.5% on sourcepulse

Project Summary

RobustBench provides a standardized benchmark and a model zoo for evaluating adversarial robustness in deep learning models. It aims to track real progress in the field by focusing on models with deterministic forward passes and non-zero gradients, excluding those with optimization loops or randomness that can inflate robustness metrics. The project is valuable for researchers and practitioners seeking to understand and utilize state-of-the-art robust models.

How It Works

RobustBench evaluates models against standardized attacks, primarily AutoAttack, and also welcomes adaptive attack evaluations. To keep comparisons reliable, it restricts accepted defenses to models with deterministic forward passes and non-zero gradients. The project also maintains a Model Zoo of pre-trained robust models that can be loaded in one line and used for downstream tasks or further evaluation.

Quick Start & Requirements

  • Install: pip install git+https://github.com/RobustBench/robustbench.git
  • Requirements: Python and PyTorch; a GPU is recommended for evaluation. CIFAR-10 and CIFAR-100 are downloaded automatically; ImageNet must be downloaded manually due to licensing.
  • Links: Quick start Colab notebook

Highlighted Details

  • Comprehensive leaderboards for CIFAR-10, CIFAR-100, and ImageNet across Linf, L2, and common corruptions.
  • Model Zoo provides easy access to over 70 robust models with one-line loading.
  • Supports evaluation against AutoAttack and other standardized attacks.
  • Includes benchmarks for common corruptions like ImageNet-C and ImageNet-3DCC.

Maintenance & Community

  • Actively maintained with regular updates to leaderboards and model zoo.
  • Open for contributions: adding new models, evaluations, or improving the codebase.
  • Contact: adversarial.benchmark@gmail.com or via GitHub issues/pull requests.

Licensing & Compatibility

  • Models in the Model Zoo are typically released under the MIT license.
  • Users can specify custom licenses for their submitted models.
  • Compatible with PyTorch and standard Python environments.

Limitations & Caveats

  • Robustness against common corruptions can sometimes be inversely correlated with adversarial robustness, as noted in the README.
  • ImageNet dataset requires manual download due to licensing.
  • While AutoAttack is used for standardization, the project acknowledges the value of adaptive attacks for flagging potential overestimations of robustness.
Health Check

  • Last commit: 4 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 23 stars in the last 90 days
