awesome-llm-cybersecurity-tools by tenable

Cybersecurity research tools using large language models

created 2 years ago
468 stars

Top 65.9% on sourcepulse

Project Summary

This repository curates Large Language Model (LLM) tools specifically for cybersecurity research, offering solutions for reverse engineering, network analysis, cloud security, and malware development. It targets security researchers and developers seeking to leverage AI for tasks like code annotation, vulnerability discovery, and threat analysis.

How It Works

The tools leverage LLMs, primarily from OpenAI (GPT-3, GPT-3.5, GPT-4) and Anthropic, to process and analyze cybersecurity-related data. Approaches include querying LLMs for explanations of decompiled code (G-3PO, Gepetto), analyzing network traffic (Burp Extension for GPT), identifying privilege escalation in cloud IAM policies (EscalateGPT), and generating metamorphic malware (LLMorphism, Darwin-GPT). This automates analysis and insight generation that would otherwise be time-consuming to perform manually.
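
To make the pattern concrete, here is a minimal sketch of the kind of round trip tools like G-3PO and Gepetto perform: pass decompiler output to a chat model and ask for a plain-English explanation. The helper name, prompt, and model choice are illustrative assumptions, not the tools' actual code.

```python
# Hypothetical sketch of the query pattern behind tools like G-3PO and
# Gepetto: send decompiler output to a chat model and ask for a plain-English
# explanation. Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY
# environment variable; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_decompiled(pseudocode: str) -> str:
    """Ask the model to summarize a decompiled function and suggest a name."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a reverse engineer. Explain what this decompiled "
                    "C pseudocode does, concisely, and suggest a descriptive "
                    "function name."
                ),
            },
            {"role": "user", "content": pseudocode},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_decompiled(
        "int sub_401000(char *a1) { return strlen(a1) ^ 0x5A; }"
    ))
```

The actual plugins layer decompiler integration on top of this round trip, for example writing the model's explanation back into Ghidra or IDA Pro as comments or renamed symbols.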

Highlighted Details

  • Reverse Engineering: G-3PO (Ghidra), Gepetto (IDA Pro), and GPT-WPRE assist in annotating decompiled code and summarizing binaries.
  • Debugging: "ai" for Pwndbg and "ai" for GEF act as AI-powered debugging sidekicks inside GDB.
  • Network & Cloud Security: Burp Extension for GPT analyzes HTTP traffic, and EscalateGPT hunts for privilege escalation paths in AWS IAM policies (a sketch follows this list).
  • Malware & Exploitation: Tools demonstrate LLM-driven malware (LLMorphism, Darwin-GPT) and indirect prompt injection attacks.
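
As referenced above, here is an illustrative sketch of the EscalateGPT idea: hand an IAM policy document to a chat model and ask it to flag privilege escalation paths. The prompt, model name, and sample policy are assumptions for illustration; EscalateGPT's real prompts and output format may differ.

```python
# Illustrative sketch of the EscalateGPT idea: give an AWS IAM policy to a
# chat model and ask whether it enables privilege escalation. The prompt,
# model name, and sample policy are assumptions, not EscalateGPT's own code.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# iam:PutUserPolicy on every user is a classic escalation primitive: an
# attacker can attach an admin inline policy to their own user.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iam:PutUserPolicy"],
            "Resource": "*",
        }
    ],
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute any capable chat model
    messages=[
        {
            "role": "system",
            "content": (
                "You audit AWS IAM policies. List any privilege escalation "
                "paths this policy enables, and explain why."
            ),
        },
        {"role": "user", "content": json.dumps(policy, indent=2)},
    ],
)
print(response.choices[0].message.content)
```

Because iam:PutUserPolicy on Resource "*" is a well-known escalation primitive, it makes a reasonable smoke test for whether the model catches the obvious case before trusting it with subtler policies.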

Maintenance & Community

This is a curated list, with many tools developed by researchers at Tenable and elsewhere (Olivia Lucca Fraser, Yossi Nisani, and Ivan Kwiatkowski, among others). The README provides no direct links to community channels or a roadmap.

Licensing & Compatibility

The repository itself is a curated list rather than a software package, and is likely under a permissive license. The individual tools listed carry their own licenses, which the README does not specify. Most entries require access to specific LLM providers (OpenAI, Anthropic) and host tools (Ghidra, IDA Pro, Pwndbg, GEF, Burp Suite).

Limitations & Caveats

Many of the listed tools are prototypes or research projects, so instability and incomplete features should be expected. Most require LLM API keys, and some need significant computational resources. Tools that demonstrate LLM-driven malware generation also carry inherent security and ethical risks.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 17 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering, Designing Machine Learning Systems), Carol Willing (core contributor to CPython and Jupyter), and 2 more.

llm-security by greshake

Research paper on indirect prompt injection attacks targeting app-integrated LLMs

created 2 years ago, updated 2 weeks ago
2k stars

Top 0.2% on sourcepulse