stride-gpt by mrwadams

AI-powered tool for automated threat modeling using LLMs

created 2 years ago
794 stars

Top 45.1% on sourcepulse

Project Summary

STRIDE GPT is an AI-powered threat modeling tool that automates the generation of threat models, attack trees, and mitigations using LLMs. It targets security engineers and developers seeking to integrate threat modeling into their application development lifecycle, offering a structured approach based on the STRIDE methodology.

How It Works

The tool works with multiple LLM providers, including OpenAI, Azure OpenAI, Google AI, Mistral, and local models served via Ollama or LM Studio. Users describe their application, and the LLM analyzes that input to produce a threat model, map potential attack paths as attack trees, suggest mitigations, and optionally score risks using DREAD. It also accepts multi-modal input, so architecture diagrams can be analyzed by vision-capable models.

Quick Start & Requirements

  • Install: pip install -r requirements.txt or docker pull mrwadams/stridegpt:latest.
  • Prerequisites: Python 3.x, API keys for chosen LLM providers (OpenAI, Azure, Google, Mistral), or local LLM setup (Ollama/LM Studio).
  • Setup: Requires creating a .env file for API keys and configuration.
  • Run: streamlit run main.py or docker run -p 8501:8501 --env-file .env mrwadams/stridegpt.
  • Docs: https://github.com/mrwadams/stride-gpt
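The steps above amount to writing a .env file and launching the app. A minimal sketch is below; the environment variable names are assumptions based on typical provider SDKs, so check the repo README for the exact keys STRIDE GPT expects.

```shell
# Hypothetical .env — provide a key only for the provider(s) you plan to use.
cat > .env <<'EOF'
OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-key
EOF

# Run locally:
#   pip install -r requirements.txt
#   streamlit run main.py
# Or with Docker, passing the same file:
#   docker pull mrwadams/stridegpt:latest
#   docker run -p 8501:8501 --env-file .env mrwadams/stridegpt
```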

Highlighted Details

  • Supports multi-modal threat modeling with vision-capable LLMs (e.g., GPT-4o, Gemini 2.5, Claude 4).
  • Integrates with GitHub repository analysis for comprehensive threat modeling.
  • Generates Gherkin test cases based on identified threats.
  • Offers local LLM hosting support via Ollama and LM Studio for enhanced privacy.

Maintenance & Community

The project is actively maintained with frequent updates, including support for the latest LLM models (e.g., GPT-4o, Claude 4, Gemini 2.5). A public roadmap is available.

Licensing & Compatibility

MIT License. Permissive for commercial use and integration with closed-source projects.

Limitations & Caveats

Google Gemini models may not consistently generate JSON output, potentially requiring retries. Attack tree generation is not supported with Google AI models due to safety restrictions.
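A common workaround for models that intermittently emit invalid JSON is to validate the response and retry. The sketch below illustrates the idea with a hypothetical `generate()` callable that returns the model's raw text; it is not STRIDE GPT's actual code.

```python
import json


def parse_json_with_retries(generate, max_attempts=3):
    """Call `generate` (a callable returning raw LLM text) and retry
    until the response parses as JSON or attempts are exhausted."""
    for _ in range(max_attempts):
        text = generate()
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            continue  # malformed output — ask the model again
    raise ValueError(f"no valid JSON after {max_attempts} attempts")
```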

Health Check

  • Last commit: 1 month ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star history: 81 stars in the last 90 days
