llm-guard by protectai

Security toolkit for LLM interactions

Created 2 years ago
2,573 stars

Top 17.8% on SourcePulse

Project Summary

LLM Guard is a security toolkit designed to protect Large Language Model (LLM) interactions from various threats, including prompt injection, data leakage, and harmful language. It offers sanitization and detection capabilities, making it suitable for developers and organizations deploying LLMs in production environments.

How It Works

LLM Guard employs a modular design with "scanners" that analyze both prompts and LLM outputs. It supports a wide range of predefined scanners for tasks like anonymization, toxicity detection, and identifying sensitive information, alongside custom regex-based checks. This approach allows for flexible and extensible security policies tailored to specific LLM applications.
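The scanner pattern described above can be sketched in plain Python. The `RegexScanner` class and `scan` signature below are a hypothetical illustration of the concept, not LLM Guard's actual API:

```python
import re
from typing import List, Tuple

class RegexScanner:
    """Hypothetical regex-based scanner, in the spirit of LLM Guard's
    custom regex checks. Names and signature are illustrative only."""

    def __init__(self, name: str, patterns: List[str], redact: str = "[REDACTED]"):
        self.name = name
        self.patterns = [re.compile(p) for p in patterns]
        self.redact = redact

    def scan(self, text: str) -> Tuple[str, bool, float]:
        """Return (sanitized_text, is_valid, risk_score)."""
        hits = 0
        sanitized = text
        for pattern in self.patterns:
            # Replace every match with the redaction token and count hits.
            sanitized, n = pattern.subn(self.redact, sanitized)
            hits += n
        is_valid = hits == 0
        risk_score = 1.0 if hits else 0.0
        return sanitized, is_valid, risk_score

# Chain several scanners over a prompt, as a pipeline would:
scanners = [
    RegexScanner("email", [r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"]),
    RegexScanner("api_key", [r"\bsk-[A-Za-z0-9]{16,}\b"]),
]

prompt = "Contact admin@example.com with key sk-abcdef1234567890"
valid = True
for scanner in scanners:
    prompt, ok, score = scanner.scan(prompt)
    valid = valid and ok

print(prompt)  # both the email and the key are redacted
print(valid)   # False
```

Because each scanner exposes the same `scan` interface, the pipeline stays extensible: a new policy is just another object appended to the list, which mirrors how modular scanner designs keep security rules composable.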

Quick Start & Requirements

Install from PyPI with `pip install llm-guard`. Individual scanners download their underlying models on first use; see the repository README for full requirements and configuration.
Highlighted Details

  • Offers both prompt and output scanning capabilities.
  • Supports a variety of built-in scanners (e.g., Toxicity, Secrets, PromptInjection, FactualConsistency).
  • Allows for custom scanner creation using regular expressions.
  • Designed for easy integration into existing LLM workflows and production environments.

Maintenance & Community

Health Check

  • Last Commit: 2 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 3
  • Star History: 123 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Michele Catasta (President of Replit), and 3 more.

rebuff by protectai

0.3%
1k
SDK for LLM prompt injection detection
Created 2 years ago
Updated 1 year ago
Starred by Dan Guido (Cofounder of Trail of Bits), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 5 more.

PurpleLlama by meta-llama

0.3%
4k
LLM security toolkit for assessing/improving generative AI models
Created 2 years ago
Updated 6 days ago