guardrails by guardrails-ai

Python framework for adding guardrails to LLMs

created 2 years ago
5,354 stars

Top 9.6% on sourcepulse

Project Summary

Guardrails is a Python framework that improves the reliability of AI applications by validating LLM inputs and outputs and by facilitating structured data generation. It targets developers building LLM-powered applications who need to ensure data quality, mitigate risks, and extract structured information from LLM responses.

How It Works

Guardrails employs "validators" sourced from the Guardrails Hub: pre-built checks for specific risks or data structures. Validators are combined into "Guards" that intercept LLM inputs and outputs and run each check before the result reaches the application. For structured data generation, Guardrails leverages LLM function calling or prompt optimization to enforce output schemas, such as Pydantic models.
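The validator/Guard pattern above can be sketched conceptually in plain Python. Note this is an illustrative sketch, not the guardrails-ai API: the `Guard`, `regex_match`, and `ValidationError` names here are stand-ins for the library's own abstractions.

```python
# Conceptual sketch of the validator/Guard pattern (illustrative, not the
# actual guardrails-ai API).
import re
from typing import Callable, List

class ValidationError(Exception):
    pass

def regex_match(pattern: str) -> Callable[[str], str]:
    """Return a validator that passes text through only if it matches `pattern`."""
    def validate(text: str) -> str:
        if not re.fullmatch(pattern, text):
            raise ValidationError(f"output {text!r} does not match {pattern!r}")
        return text
    return validate

class Guard:
    """Combines validators and intercepts an LLM call's output."""
    def __init__(self, validators: List[Callable[[str], str]]):
        self.validators = validators

    def __call__(self, llm: Callable[[str], str], prompt: str) -> str:
        output = llm(prompt)
        for v in self.validators:
            output = v(output)  # each validator checks (or could fix) the output
        return output

# Usage with a stubbed "LLM" that returns a fixed string:
guard = Guard([regex_match(r"\d{3}-\d{3}-\d{4}")])
phone = guard(lambda p: "123-456-7890", "What is the support phone number?")
```

A real Guard in the library additionally supports corrective actions on failure (e.g. re-asking the LLM), rather than only raising.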

Quick Start & Requirements

  • Install: pip install guardrails-ai
  • Requirements: Python 3.7+
  • Setup: run guardrails configure to set up the CLI, then guardrails hub install <validator> to install validators from the Hub.
  • Docs: https://www.guardrailsai.com/docs
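Putting the setup steps together; the hub://guardrails/regex_match validator name is an illustrative example, not a required install:

```shell
pip install guardrails-ai                              # installs the library and the guardrails CLI
guardrails configure                                   # one-time CLI / Hub access setup
guardrails hub install hub://guardrails/regex_match    # example validator from the Hub
```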

Highlighted Details

  • Guardrails Hub offers a growing collection of pre-built validators for various risks (e.g., regex matching, competitor checks, toxic language).
  • Supports generating structured data via Pydantic models, utilizing function calling or prompt engineering.
  • Can be deployed as a standalone Flask service for REST API interaction.
  • Offers a Python client and can integrate with the OpenAI SDK by setting a custom base_url.
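The structured-data idea from the list above can be sketched with only the standard library. Guardrails itself uses Pydantic models plus function calling or prompt engineering; the `Person` dataclass and `parse_structured` helper below are hypothetical stand-ins for that mechanism.

```python
# Conceptual sketch of schema-enforced structured output (stdlib only;
# the real library validates against Pydantic models).
import json
from dataclasses import dataclass, fields

@dataclass
class Person:
    name: str
    age: int

def parse_structured(raw: str, schema=Person):
    """Parse an LLM's raw JSON response into the schema, checking field types."""
    data = json.loads(raw)
    for f in fields(schema):
        if not isinstance(data.get(f.name), f.type):
            raise ValueError(f"field {f.name!r} missing or not {f.type.__name__}")
    return schema(**data)

# An LLM asked to "return JSON with name and age" might produce:
person = parse_structured('{"name": "Ada", "age": 36}')
```

On a type mismatch or missing field, the library can re-ask the LLM with the validation error appended to the prompt; this sketch simply raises.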

Maintenance & Community

  • Active development with CI/CD pipelines and code coverage reporting.
  • Community support available via Discord and Twitter.
  • Blog and news updates are regularly published.

Licensing & Compatibility

  • Licensed under Apache 2.0.
  • Compatible with proprietary and open-source LLMs.
  • Offers JavaScript support and is working on other languages.

Limitations & Caveats

The framework is primarily Python-centric, though JavaScript support is available. Users need to explicitly install validators from Guardrails Hub for specific validation tasks.

Health Check

  • Last commit: 2 weeks ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 20
  • Star History: 500 stars in the last 90 days

Explore Similar Projects

Starred by Peter Norvig (Author of Artificial Intelligence: A Modern Approach; Research Director at Google), Michael Han (Cofounder of Unsloth), and 15 more.

open-interpreter by openinterpreter

Top 0.1% · 60k stars
Natural language interface for computers
created 2 years ago · updated 4 days ago