NeMo-Guardrails by NVIDIA

Open-source toolkit for adding programmable guardrails to LLM-based conversational systems

created 2 years ago
4,938 stars

Top 10.2% on sourcepulse

Project Summary

NeMo Guardrails is an open-source toolkit for adding programmable guardrails to LLM-based conversational applications. It lets developers control LLM output for safety, security, and adherence to predefined conversational flows, and is aimed at teams building trustworthy, secure LLM applications.

How It Works

NeMo Guardrails acts as an intermediary between application code and the LLM, enforcing rules through five rail types: input, dialog, retrieval, execution, and output. The rules themselves are written in Colang, a domain-specific modeling language, which allows precise control over conversational paths and LLM behavior. This design gives fine-grained control over when specific guardrails are applied and combines multiple safety and moderation mechanisms into a single, cohesive layer.
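How this looks in practice: the sketch below, modeled on the toolkit's documented "hello world" pattern, defines a simple dialog rail in Colang and loads it through the Python API. The specific flow, the model entry, and an available OpenAI API key are assumptions for illustration.

from nemoguardrails import LLMRails, RailsConfig

# Illustrative Colang rules: a dialog rail that deflects political questions.
colang_content = """
define user ask about politics
  "who should I vote for?"
  "what do you think about the president?"

define bot refuse politics
  "I'm sorry, I can't comment on political topics."

define flow politics
  user ask about politics
  bot refuse politics
"""

# Minimal model configuration (assumed engine/model; adjust to your setup).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}])
print(response["content"])  # the bot's canned refusal, per the flow above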

Quick Start & Requirements

  • Install via pip: pip install nemoguardrails
  • Requires Python 3.9-3.12 and a C++ compiler with development tools.
  • Official documentation: docs.nvidia.com/nemo/guardrails
  • Examples are available in the repository; a minimal usage sketch follows below.
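After installing, a minimal smoke test looks roughly like this (a sketch: the ./config directory is assumed to contain a valid config.yml plus any .co Colang files):

from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration from a local directory
# (assumed to contain config.yml and optional .co Colang files).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# Chat-style generation; input/output rails run around the LLM call.
response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])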

Highlighted Details

  • Supports multiple LLMs, including OpenAI models, Llama 2, Falcon, and Vicuna.
  • Integrates with LangChain, so existing chains can be wrapped with guardrails (see the sketch after this list).
  • Offers built-in guardrails for jailbreak detection, moderation, fact-checking, and hallucination detection.
  • Includes a CLI (nemoguardrails) with commands for starting a server, interactive chat, and evaluation tasks.
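The LangChain integration mentioned above can be sketched as follows, assuming the RunnableRails wrapper shipped with recent releases and the standard LangChain OpenAI packages; the prompt and config path are illustrative:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

# Load the guardrails configuration (directory contents assumed as above).
config = RailsConfig.from_path("./config")
guardrails = RunnableRails(config)

# Wrap the LLM step of an ordinary LCEL chain so rails run around it.
prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
chain = prompt | (guardrails | ChatOpenAI()) | StrOutputParser()

print(chain.invoke({"question": "What is NeMo Guardrails?"}))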

Maintenance & Community

  • Actively developed by NVIDIA.
  • Community contributions are invited; see the contributing guidelines in the repository.

Licensing & Compatibility

  • Licensed under the Apache License, Version 2.0.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The project is in beta, with the main branch tracking the 0.13.0 beta release. The developers advise against deploying this beta version in production because of potential instability and unexpected behavior, and the bundled examples are for educational purposes rather than production use.

Health Check

  • Last commit: 23 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 52
  • Issues (30d): 12

Star History

  • 268 stars in the last 90 days

Explore Similar Projects

Starred by Omar Sanseviero (DevRel at Google DeepMind), Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), and 10 more.

guardrails by guardrails-ai

Python framework for adding guardrails to LLMs

  • Top 0.7% on sourcepulse
  • 5k stars
  • Created 2 years ago; updated 2 weeks ago