AI-ML-Free-Resources-for-Security-and-Prompt-Injection  by anmolksachan

AI/ML security and LLM penetration testing roadmap

Created 1 year ago
376 stars

Top 75.7% on SourcePulse

GitHubView on GitHub
Project Summary

Summary

This repository provides a structured, comprehensive roadmap for learning AI/ML security and penetration testing, with a strong focus on prompt injection and LLM attacks. Aimed at beginners to practitioners, it offers a curated path through free resources to become proficient in securing AI systems.

How It Works

The project outlines a phased learning journey, progressing from foundational security and ML concepts to advanced exploitation techniques and real-world research. It meticulously curates a vast array of free online courses, videos, guides, tools, and practical exercises, enabling a self-directed learning experience. The approach prioritizes a logical skill progression, making complex AI security topics accessible.

Quick Start & Requirements

As a resource guide, there's no direct installation. Prerequisites include general security basics (e.g., PortSwigger Web Security Academy, TryHackMe), Python programming proficiency, and an understanding of APIs/HTTP. Learners will need access to the internet and various free online platforms like Coursera, edX, fast.ai, Hugging Face, and specific interactive labs.

Highlighted Details

  • Covers key frameworks like OWASP LLM Top 10 and MITRE ATLAS.
  • Features hands-on practice platforms including Gandalf, Prompt Airlines, and Crucible.
  • Lists essential offensive tools such as Garak and PyRIT, alongside defensive tools like Rebuff and NeMo Guardrails.
  • Details bug bounty programs for major AI providers (OpenAI, Google, Meta) and AI-focused platforms.
  • Offers tailored learning paths for Beginner, Intermediate, and Advanced experience levels.

Maintenance & Community

Last updated in 2025, the project welcomes contributions via pull requests. It highlights active communities like AI Village (DEF CON) and OWASP AI Exchange, alongside leading blogs and research outlets in AI security.

Licensing & Compatibility

The roadmap itself does not specify a license. The linked resources are predominantly free or offer audit-free access, but users must verify the individual licenses of each external resource for compatibility, especially for commercial use.

Limitations & Caveats

Being a curated list of free resources, it may not cover all commercial or advanced proprietary tools. The rapidly evolving AI/ML security landscape necessitates continuous updates beyond its last update in 2025. It serves as a learning guide, requiring users to independently set up and utilize separate tools and environments for practical application.

Health Check
Last Commit

1 week ago

Responsiveness

Inactive

Pull Requests (30d)
3
Issues (30d)
0
Star History
282 stars in the last 30 days

Explore Similar Projects

Starred by Dan Guido Dan Guido(Cofounder of Trail of Bits), Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), and
5 more.

PurpleLlama by meta-llama

0.3%
4k
LLM security toolkit for assessing/improving generative AI models
Created 2 years ago
Updated 3 days ago
Feedback? Help us improve.