Threat-Actors-use-of-Artifical-Intelligence by cybershujin

AI use by cyber threat actors

Created 1 year ago · 258 stars · Top 98.2% on SourcePulse

Project Summary

This repository organizes and classifies the confirmed use of artificial intelligence (AI) and large language models (LLMs) by cyber threat actors, focusing on AI-enhanced cyberattacks rather than influence campaigns. It aims to map these activities to MITRE ATT&CK TTPs and LLM-specific classifications, providing a valuable resource for cybersecurity professionals and researchers tracking evolving threat landscapes.

How It Works

The project compiles and analyzes publicly reported instances of threat actors leveraging AI/LLMs. It categorizes these uses into specific TTPs, such as LLM-informed reconnaissance, LLM-enhanced scripting, LLM-supported social engineering, and LLM-assisted vulnerability research. The data is drawn from various cybersecurity vendor reports and analyses, with an emphasis on confirmed threat actor usage rather than researcher-discovered potential.
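The classification described above can be sketched as a small data structure. This is a minimal illustration only, assuming a hypothetical record shape: the repository itself is a curated README, and the field names, actor/TTP pairings, and ATT&CK IDs below are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One confirmed, publicly reported instance of threat-actor LLM use.

    Hypothetical schema for illustration; not the repository's own format.
    """
    actor: str             # e.g. "Fancy Bear", "APT43"
    llm_ttp: str           # LLM-specific TTP label from the taxonomy
    attack_technique: str  # mapped MITRE ATT&CK technique ID
    source: str            # vendor report the observation was drawn from

def group_by_ttp(observations):
    """Group confirmed observations by their LLM-specific TTP."""
    grouped = {}
    for obs in observations:
        grouped.setdefault(obs.llm_ttp, []).append(obs.actor)
    return grouped

# Illustrative records; the actor-to-technique pairings are examples,
# not claims taken from the repository.
reports = [
    Observation("Fancy Bear", "LLM-informed reconnaissance",
                "T1595", "vendor report A"),
    Observation("APT43", "LLM-supported social engineering",
                "T1566", "vendor report B"),
    Observation("Scattered Spider", "LLM-supported social engineering",
                "T1566", "vendor report C"),
]

print(group_by_ttp(reports))
```

Grouping by LLM-specific TTP rather than by actor mirrors the repository's emphasis: the same technique (e.g. LLM-supported social engineering) recurs across otherwise unrelated groups.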

Quick Start & Requirements

This repository is a curated collection of information and does not require installation or execution. Users can directly browse the documented threat actor activities and their associated TTPs.

Highlighted Details

  • Documents specific threat actor groups (e.g., Fancy Bear, APT43, Scattered Spider) and their AI/LLM usage.
  • Maps observed AI/LLM techniques to MITRE ATT&CK framework and LLM-specific TTPs.
  • Includes analysis of trends, such as the rise in AI-generated phishing emails and criminals' shift from training their own LLMs to jailbreaking existing ones.
  • Details the use of deepfakes for impersonation, financial fraud, and influence operations.

Maintenance & Community

The repository is maintained by cybershujin. Updates are indicated by dates in the README, suggesting ongoing curation. Community engagement is encouraged via comments for TTP mapping suggestions.

Licensing & Compatibility

The repository itself does not specify a license. Content is derived from various sources, and users should consult the original sources for licensing and usage terms.

Limitations & Caveats

The project focuses solely on confirmed reports of threat actor AI/LLM use, acknowledging that many observed increases (e.g., in phishing) may be indirect effects. It excludes research on actors attacking AI systems or misinformation campaigns using deepfakes, directing users to other repositories for these topics. The "still under construction" note indicates ongoing development.

Health Check

Last Commit: 2 weeks ago
Responsiveness: Inactive
Pull Requests (30d): 0
Issues (30d): 0
Star History: 1 star in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Carol Willing (Core Contributor to CPython, Jupyter), and 3 more.

llm-security by greshake

0.1% · 2k stars
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
Created 2 years ago · Updated 2 months ago
Starred by Dan Guido (Cofounder of Trail of Bits), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 5 more.

PurpleLlama by meta-llama

0.6% · 4k stars
LLM security toolkit for assessing/improving generative AI models
Created 1 year ago · Updated 1 day ago