NeMo-Guardrails by NVIDIA

Open-source toolkit for adding programmable guardrails to LLM-based conversational systems

Created 2 years ago
5,074 stars

Top 9.8% on SourcePulse

View on GitHub
Project Summary

NeMo Guardrails is an open-source toolkit for adding programmable guardrails to LLM-based conversational applications. It lets developers control LLM output for safety, security, and adherence to predefined conversational flows, and is aimed at anyone building trustworthy, secure LLM applications.

How It Works

NeMo Guardrails acts as an intermediary between application code and LLMs, enforcing rules through various rail types: input, dialog, retrieval, execution, and output. It utilizes Colang, a domain-specific language, to define these rules, allowing for precise control over conversational paths and LLM behavior. This approach enables fine-grained control over when specific guardrails are applied, integrating multiple safety and moderation mechanisms into a cohesive layer.

Quick Start & Requirements

  • Install via pip: pip install nemoguardrails
  • Requires Python 3.9-3.12 and a C++ compiler with development tools.
  • Official documentation: docs.nvidia.com/nemo/guardrails
  • Examples available in the repository.
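Rules are written in Colang, the project's domain-specific language. A dialog rail might look like the following sketch (the flow and message names are illustrative, in Colang 1.0 style):

```colang
define user ask about politics
  "What do you think about the election?"

define bot refuse to answer politics
  "I'm sorry, I can't discuss political topics."

define flow politics
  user ask about politics
  bot refuse to answer politics
```

A config like this pairs the Colang file with a YAML file naming the underlying LLM; see the official documentation for the full configuration layout.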

Highlighted Details

  • Supports multiple LLMs, including OpenAI models, Llama 2, Falcon, and Vicuna.
  • Integrates with LangChain for seamless wrapping of chains.
  • Offers built-in guardrails for jailbreak detection, moderation, fact-checking, and hallucination detection.
  • Includes a CLI for starting servers, interactive chat, and evaluation tasks.
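Execution rails call out to application code, typically by binding named actions to Python functions that flows can invoke. The registration pattern can be sketched like this (hypothetical names, not the library's API):

```python
# Sketch of an action registry for execution rails.
# Names are illustrative, not the NeMo Guardrails API.

actions = {}

def action(name):
    """Decorator registering a function under a flow-visible name."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@action("check_facts")
def check_facts(claim: str) -> bool:
    # Placeholder check: a real fact-checking rail would consult
    # retrieved documents or a knowledge base.
    return "the moon is made of cheese" not in claim.lower()

def run_action(name, *args):
    """Invoked by the dialog engine when a flow references an action."""
    return actions[name](*args)

print(run_action("check_facts", "Paris is the capital of France."))
```

In the real toolkit, such actions back the built-in fact-checking and moderation rails; custom actions let flows trigger arbitrary application logic.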

Maintenance & Community

  • Actively developed by NVIDIA.
  • Community contributions are invited; see contributing guidelines.

Licensing & Compatibility

  • Licensed under the Apache License, Version 2.0.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The project is currently in beta, with the main branch tracking beta release 0.13.0. The developers advise against deploying this beta version in production due to potential instability and unexpected behavior. Examples are for educational purposes and not production-ready.

Health Check

  • Last Commit: 12 hours ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 66
  • Issues (30d): 2
  • Star History: 80 stars in the last 30 days

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Jeff Hammerbacher (cofounder of Cloudera), and 6 more.

Explore Similar Projects

prompt-engine by microsoft

  • NPM library for LLM prompt engineering
  • 0.1% · 3k stars
  • Created 3 years ago · Updated 2 years ago