nah by manuelschipper

Context-aware safety guard for LLM tool execution

Created 1 month ago
412 stars

Top 70.7% on SourcePulse

Project Summary

nah is a context-aware safety guard for Claude Code that provides a granular permission system going beyond simple allow/deny. It classifies tool calls by their actual function and context, preventing dangerous operations such as arbitrary file deletion or sensitive data exfiltration. By managing tool execution according to risk, it gives developers safer interactions with their code environments.

How It Works

nah functions as a PreToolUse hook, intercepting all tool calls before execution. It first applies a fast, deterministic structural classifier to categorize actions (e.g., filesystem_delete, git_history_rewrite). For ambiguous calls, an optional LLM layer can provide further analysis. This hybrid approach ensures rapid blocking of known threats while offering flexibility for complex scenarios, with all decisions logged for auditability.
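
The following Python sketch illustrates the kind of two-stage flow described above. It is not nah's actual code: the rule patterns, function names, and policy shape are assumptions, and the allow/deny/ask outcomes are a simplification; only the action-type names (filesystem_delete, git_history_rewrite, filesystem_read, network_outbound) come from the project description.

```python
# Illustrative two-stage guard: fast structural rules first, with an LLM
# fallback for ambiguous calls. Names and patterns are assumptions.
import re

# Stage 1: deterministic structural classifier (pattern -> action type).
STRUCTURAL_RULES = [
    (re.compile(r"^rm\b"), "filesystem_delete"),
    (re.compile(r"^git\s+(rebase|filter-branch|push\s+--force)\b"), "git_history_rewrite"),
    (re.compile(r"^(cat|head|tail|less)\b"), "filesystem_read"),
    (re.compile(r"^(curl|wget)\b"), "network_outbound"),
]

def classify(command: str) -> str | None:
    """Return an action type for recognizable commands, None if ambiguous."""
    cmd = command.strip()
    for pattern, action_type in STRUCTURAL_RULES:
        if pattern.match(cmd):
            return action_type
    return None

def decide(command: str, policy: dict[str, str]) -> str:
    """Resolve a tool call to 'allow', 'deny', or 'ask'.

    Ambiguous calls fall through to 'ask' here; in the hybrid design this
    is where the optional LLM layer would analyze the call instead.
    """
    action_type = classify(command)
    if action_type is None:
        return "ask"
    return policy.get(action_type, "ask")

policy = {
    "filesystem_read": "allow",
    "filesystem_delete": "ask",       # context decides: prompt the user
    "git_history_rewrite": "deny",
    "network_outbound": "ask",
}
print(decide("cat README.md", policy))                  # allow
print(decide("git push --force origin main", policy))   # deny
```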

Quick Start & Requirements

  • Install: pip install nah
  • Setup: Run nah install to set up permissions.
  • Prerequisites: Requires bash.
  • Demo: A live security demo is available via /nah-demo within Claude Code, covering approximately 25 cases across 8 threat categories.
  • Docs: The README links to "Docs", "Install", "What it guards", "How it works", "Configure", and "CLI".

Highlighted Details

  • Context-Aware Decisions: Policies adapt dynamically; rm dist/bundle.js might be allowed within a project, while rm ~/.bashrc would be flagged.
  • Action Type Classification: Commands are mapped to over 20 built-in action types (e.g., filesystem_read, lang_exec, network_outbound) for policy enforcement, not just command names.
  • Supply-Chain Safety: Project-specific configurations (.nah.yaml) can only tighten security policies, preventing malicious repositories from disabling safety measures (see the sketch after this list).
  • LLM Integration: Supports multiple LLM providers (Ollama, OpenAI, etc.) for enhanced decision-making on ambiguous tool calls, with configurable confidence levels.
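
To make the tighten-only guarantee concrete, here is a minimal sketch of how such a merge could work, assuming a strictness ordering of allow < ask < deny. The ordering, helper names, and policy shape are all illustrative assumptions; nah's actual .nah.yaml semantics may differ.

```python
# Hypothetical tighten-only merge for project-level config (.nah.yaml).
# The strictness ordering below is an assumption, not nah's documented rule.
STRICTNESS = {"allow": 0, "ask": 1, "deny": 2}

def merge_policies(base: dict[str, str], project: dict[str, str]) -> dict[str, str]:
    """Apply project overrides, accepting only those that raise strictness.

    A malicious repository shipping a permissive .nah.yaml therefore
    cannot loosen any rule the user's base policy already enforces.
    """
    merged = dict(base)
    for action_type, decision in project.items():
        current = merged.get(action_type, "ask")
        if STRICTNESS.get(decision, 0) > STRICTNESS[current]:
            merged[action_type] = decision  # tightening: accepted
        # loosening attempts are ignored
    return merged

base = {"filesystem_delete": "ask", "network_outbound": "deny"}
project = {"network_outbound": "allow",    # loosening attempt: ignored
           "filesystem_delete": "deny"}    # tightening: accepted
print(merge_policies(base, project))
# {'filesystem_delete': 'deny', 'network_outbound': 'deny'}
```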

Maintenance & Community

No details on maintainers, community channels (e.g., Discord, Slack), or a project roadmap appear in the README.

Licensing & Compatibility

  • License: MIT.
  • Compatibility: Designed specifically as a safety guard for Claude Code. The MIT license is permissive for commercial use; integration with environments other than Claude Code is not documented.

Limitations & Caveats

The --dangerously-skip-permissions bypass mode is a significant risk: because hooks execute asynchronously, commands can run before nah has a chance to intervene. The effectiveness of the LLM layer also depends on the chosen provider and configuration.

Health Check

  • Last Commit: 5 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 7
  • Issues (30d): 10
  • Star History: 33 stars in the last 30 days

Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Michele Catasta (President of Replit), and 3 more.

Explore Similar Projects

rebuff by protectai

  • Top 0.3% · 1k stars
  • SDK for LLM prompt injection detection
  • Created 3 years ago · Updated 1 year ago
  • Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Elie Bursztein (Cybersecurity Lead at Google DeepMind), and 3 more.

llm-guard by protectai

  • Top 0.8% · 3k stars
  • Security toolkit for LLM interactions
  • Created 2 years ago · Updated 4 months ago
  • Starred by Dan Guido (Cofounder of Trail of Bits), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 5 more.

PurpleLlama by meta-llama

  • Top 0.4% · 4k stars
  • LLM security toolkit for assessing/improving generative AI models
  • Created 2 years ago · Updated 4 days ago