llm-guard by protectai

Security toolkit for LLM interactions

created 2 years ago
1,901 stars

Top 23.4% on sourcepulse

Project Summary

LLM Guard is a security toolkit designed to protect Large Language Model (LLM) interactions from various threats, including prompt injection, data leakage, and harmful language. It offers sanitization and detection capabilities, making it suitable for developers and organizations deploying LLMs in production environments.

How It Works

LLM Guard employs a modular design built around "scanners" that analyze both prompts and LLM outputs. It ships a wide range of predefined scanners for tasks such as anonymization, toxicity detection, and sensitive-information detection, alongside custom regex-based checks. This approach allows security policies to stay flexible, extensible, and tailored to specific LLM applications.

Quick Start & Requirements
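LLM Guard is distributed on PyPI (pip install llm-guard); most ML-based scanners download their models on first use. The sketch below is adapted from the project's documented scan pipeline; the prompt text, the scanner selection, and the call_llm placeholder are illustrative, not prescriptive.

```python
# pip install llm-guard
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # holds anonymized entities so Deanonymize can restore them
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Sensitive()]

prompt = "Summarize the support ticket from john.doe@example.com."  # illustrative

# Sanitize and validate the prompt before it reaches the model.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt failed scanning, scores: {results_score}")

response = call_llm(sanitized_prompt)  # placeholder for your LLM call

# Scan the model's output before returning it to the user.
sanitized_response, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response
)
```

Each scanner returns a per-scanner validity flag and risk score, so the same pipeline can either hard-block a request or merely log and redact, depending on policy.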

Highlighted Details

  • Offers both prompt and output scanning capabilities.
  • Supports a variety of built-in scanners (e.g., Toxicity, Secrets, PromptInjection, FactualConsistency).
  • Allows custom scanner creation using regular expressions (see the regex sketch after this list).
  • Designed for easy integration into existing LLM workflows and production environments.
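
For the regex route, the sketch below assumes the Regex input scanner's documented parameters (patterns, is_blocked, match_type, redact); the bearer-token pattern itself is illustrative.

```python
from llm_guard.input_scanners import Regex
from llm_guard.input_scanners.regex import MatchType

# Flag (and redact) prompts containing bearer-token-like strings.
scanner = Regex(
    patterns=[r"Bearer [A-Za-z0-9-._~+/]+"],
    is_blocked=True,              # a match marks the prompt as invalid
    match_type=MatchType.SEARCH,  # match anywhere in the prompt
    redact=True,                  # mask matched spans in the sanitized prompt
)

sanitized_prompt, is_valid, risk_score = scanner.scan(
    "Call the API with header 'Authorization: Bearer abc123.def456'"
)
```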

Maintenance & Community

Health Check

  • Last commit: 2 days ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 11
  • Issues (30d): 6
  • Star history: 272 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Carol Willing (Core Contributor to CPython, Jupyter), and 2 more.

llm-security by greshake

0.2%
2k stars
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
created 2 years ago
updated 2 weeks ago
Starred by Omar Sanseviero (DevRel at Google DeepMind), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 10 more.

guardrails by guardrails-ai

0.7%
5k stars
Python framework for adding guardrails to LLMs
created 2 years ago
updated 2 weeks ago
Starred by Dan Guido (Cofounder of Trail of Bits), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 3 more.

PurpleLlama by meta-llama

0.5%
4k stars
LLM security toolkit for assessing/improving generative AI models
created 1 year ago
updated 1 week ago