advmlthreatmatrix by MITRE

Adversarial ML threat matrix for security analysts

Created 4 years ago
1,089 stars

Top 34.9% on SourcePulse

Project Summary

This project, now branded as ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), provides a structured framework for understanding and mitigating threats to machine learning systems. It targets security analysts and researchers by mapping ML vulnerabilities and adversary behaviors in a MITRE ATT&CK-style matrix, enabling better defense strategies against emerging AI-related cyberattacks.

How It Works

ATLAS organizes adversarial ML techniques into a matrix modeled on the well-established MITRE ATT&CK framework. This leverages security analysts' existing familiarity with the ATT&CK structure, making ML-specific threats easier to understand and act on. The matrix is populated with vetted vulnerabilities and adversary behaviors that are effective against production ML systems, drawn from industry and academic partnerships.
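
Conceptually, each column of the matrix is a tactic (an adversary goal) and each cell beneath it is a technique (a way of achieving that goal). The short Python sketch below models that structure for illustration only; the class names, identifiers, and example entries are assumptions, not the official ATLAS data, which is maintained at https://atlas.mitre.org.

    # Minimal model of an ATT&CK-style matrix: tactics map to the techniques
    # that serve them. Entries are placeholders, not official ATLAS content.
    from dataclasses import dataclass, field

    @dataclass
    class Technique:
        technique_id: str   # placeholder IDs; the real ATLAS identifiers differ
        name: str

    @dataclass
    class Tactic:
        tactic_id: str
        name: str
        techniques: list = field(default_factory=list)

    # Two illustrative "columns" of the matrix.
    matrix = [
        Tactic("TA-EXAMPLE-1", "Reconnaissance", [
            Technique("T-EXAMPLE-1", "Search Public ML Research"),
        ]),
        Tactic("TA-EXAMPLE-2", "ML Attack Staging", [
            Technique("T-EXAMPLE-2", "Craft Adversarial Data"),
        ]),
    ]

    # Walk the matrix the way an analyst reads it: tactic, then its techniques.
    for tactic in matrix:
        print(tactic.name)
        for technique in tactic.techniques:
            print(f"  {technique.technique_id}  {technique.name}")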

Quick Start & Requirements

The project's primary output is an interactive website and matrix available at https://atlas.mitre.org. No specific installation commands or dependencies are listed for the core matrix itself, as it is presented as a knowledge base and framework.

Highlighted Details

  • Positions ML attacks within an ATT&CK-style framework for security analysts.
  • Includes a curated set of vulnerabilities and adversary behaviors vetted against production ML systems.
  • Features numerous case studies demonstrating real-world ML system compromises.
  • Offers an "ATLAS Navigator" for tailored exploration of the threat landscape; a sketch of a possible layer file follows this list.
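
Assuming the ATLAS Navigator accepts ATT&CK Navigator-style layer JSON (a list of technique IDs with colors or comments), an analyst could generate a tailored highlight layer like the one below. The field names, filename, and technique IDs are illustrative assumptions, not a documented ATLAS schema.

    # Hedged sketch of a Navigator-style "layer" that highlights techniques of
    # interest. Field names follow the ATT&CK Navigator layer convention; confirm
    # the ATLAS Navigator's actual format before relying on this.
    import json

    layer = {
        "name": "Techniques observed in a recent assessment",
        "description": "Hypothetical highlight layer for the ATLAS matrix",
        "techniques": [
            {"techniqueID": "T-EXAMPLE-1", "color": "#ff6666", "comment": "reproduced by red team"},
            {"techniqueID": "T-EXAMPLE-2", "color": "#ffcc66", "comment": "partially mitigated"},
        ],
    }

    # Write the layer so it could be loaded into a Navigator-style viewer.
    with open("atlas_layer.json", "w") as f:
        json.dump(layer, f, indent=2)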

Maintenance & Community

The project is a collaboration between MITRE and Microsoft, with significant contributions from numerous industry and academic partners including Bosch, IBM, NVIDIA, and the University of Toronto. Community engagement is encouraged through pull requests for corrections, a mailing list (Google Group), and potential workshops. Contact information for contributions and general inquiries is provided.

Licensing & Compatibility

The README does not explicitly state a license. The project is positioned as a research and community resource, but without a stated license the reuse terms are unclear; commercial use in particular would require explicit confirmation from the maintainers.

Limitations & Caveats

The project is described as a "first-cut attempt" and actively seeks community contributions to fill gaps. While case studies are based on production systems, the framework itself may not be exhaustive and requires ongoing updates to reflect the rapidly evolving adversarial ML landscape.

Health Check

  • Last commit: 2 years ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 0

Star History

  • 4 stars in the last 30 days
