unofficial-claude-code-prompt-playbook by kropdx

Production-grade LLM system prompt architecture

Created 1 week ago


254 stars

Top 99.1% on SourcePulse

View on GitHub
Project Summary

This repository provides an unofficial playbook for building production-grade LLM system prompts, derived from analyzing Anthropic's Claude Code prompt architecture. It targets engineers and power users, offering a practical manual to construct robust, modular, and reliable agent systems by treating prompts as engineered infrastructure rather than simple text.

How It Works

The core approach emphasizes layered instruction architectures over monolithic prompts. It advocates for a strict separation between static policy and dynamic runtime context, enabling modularity, cacheability, and cleaner instruction precedence. Key techniques include encoding workflows as explicit procedures, implementing adversarial verification patterns, and incorporating anti-rationalization rules to mitigate common LLM shortcuts.
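The static/dynamic split described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the playbook: the section names, tool names, and helper functions are invented for the example. The key properties it demonstrates are that the static layers join in a fixed order (so the prompt prefix is byte-stable and cacheable) and that runtime content never gets spliced into that prefix.

```python
# Hypothetical sketch of a layered system prompt: static policy layers are
# assembled once into a stable prefix, while runtime context rides in the
# message list. All names here are illustrative, not the playbook's exact ones.

STATIC_LAYERS = {
    "role": "You are a coding agent operating inside a sandboxed repository.",
    "operating_policy": "Prefer small, verifiable edits. Never fabricate file contents.",
    "tool_policy": "Read a file with the read tool before editing it; quote tool output verbatim.",
    "format_contract": "Reply with a unified diff only, no commentary.",
}

def build_system_prompt(layers: dict) -> str:
    """Join static layers in a fixed order so the prefix is byte-stable (cacheable)."""
    order = ["role", "operating_policy", "tool_policy", "format_contract"]
    return "\n\n".join(f"<{name}>\n{layers[name]}\n</{name}>" for name in order)

def build_messages(system_prompt: str, runtime_context: str, user_request: str) -> list:
    """Dynamic content goes in the message list, never into the static prefix."""
    return [
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": f"<runtime_context>\n{runtime_context}\n</runtime_context>\n\n{user_request}",
        },
    ]

prompt = build_system_prompt(STATIC_LAYERS)
msgs = build_messages(prompt, "branch: main, dirty files: 2", "Fix the failing test.")
```

Because the system prompt is identical across requests, providers that cache by prompt prefix can reuse it, and instruction precedence stays clean: anything in the static layers outranks anything arriving in runtime context.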

Quick Start & Requirements

This repository serves as a technical guide and playbook, not a runnable application. It provides patterns, templates, and architectural blueprints for prompt engineering. No specific installation commands or runtime requirements are detailed, as it is intended for conceptual understanding and implementation within user projects.

Highlighted Details

  • Layered Prompt Architecture: System prompts are structured into distinct sections (role, operating policy, tool policy, format contract, etc.) for better organization and reliability.
  • Policy-Dynamic Data Separation: Stable instructions are kept separate from runtime content to facilitate prompt caching and prevent accidental mutations.
  • Explicit Interface Contracts: The playbook details teaching the model the system's UI semantics and how to interpret tool outputs.
  • Procedure-Based Workflows: Important workflows are encoded as explicit standard operating procedures rather than vague directives.
  • Fault Containment: Safety and policy constraints are repeated at relevant points of failure, such as within tool-specific policies.
  • Anti-Rationalization Rules: Explicit language is used to block common LLM shortcuts and prevent undesirable reasoning patterns.
  • Adversarial Verification: A strong emphasis is placed on using separate verifier agents for evidence-based validation, including adversarial probes.
  • Narrow Memory Design: Durable memory is designed to be focused on specific, non-derivable facts rather than storing noisy interaction logs.
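The adversarial-verification and anti-rationalization items above can be combined into one small sketch. This is an assumed illustration, not the playbook's implementation: `build_verifier_prompt` and `naive_check` are invented names, and `naive_check` is only a toy stand-in for a real verifier model call, passing a claim solely when its quoted excerpts appear verbatim in the evidence.

```python
# Hypothetical sketch of the verifier pattern: a separate agent receives the
# worker's claim plus raw evidence and must reject anything unsupported.
# The policy text bakes in anti-rationalization language ("probably fine" is
# a rejection) so the verifier cannot shortcut its way to a PASS.
import re

VERIFIER_POLICY = (
    "You are an adversarial verifier. Accept a claim ONLY if every statement "
    "is directly supported by the evidence below. Do not give the benefit of "
    "the doubt; 'probably fine' is a rejection. Reply PASS or FAIL with a reason."
)

def build_verifier_prompt(claim: str, evidence: str) -> str:
    """Assemble the verifier's input: policy first, then claim and evidence."""
    return (
        f"{VERIFIER_POLICY}\n\n"
        f"<claim>\n{claim}\n</claim>\n\n"
        f"<evidence>\n{evidence}\n</evidence>"
    )

def naive_check(claim: str, evidence: str) -> str:
    """Toy stand-in for the verifier model: PASS only if every quoted excerpt
    in the claim appears verbatim in the evidence."""
    quoted = re.findall(r'"([^"]*)"', claim)
    return "PASS" if quoted and all(q in evidence for q in quoted) else "FAIL"
```

In a real system the worker and verifier would be separate model calls with separate prompts, so the verifier has no incentive to rationalize the worker's conclusion; the evidence-quoting requirement is what makes the validation evidence-based rather than vibes-based.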

Maintenance & Community

As an unofficial resource, this repository does not list specific maintainers, community channels (such as Discord or Slack), or a formal roadmap. It is presented as an independent analysis derived from locally extracted source material.

Licensing & Compatibility

The provided README content does not specify a software license. Users should assume all rights are reserved or consult the repository owner directly for licensing information.

Limitations & Caveats

This playbook is based on the static analysis of an unofficial extraction of Anthropic's Claude Code prompt architecture and is not an official Anthropic document. Its insights are derived from observed patterns and generalized for practical application.

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 256 stars in the last 11 days

Explore Similar Projects

Starred by Shizhe Diao (Author of LMFlow; Research Scientist at NVIDIA), Pawel Garbacki (Cofounder of Fireworks AI), and 3 more.

  • promptbench by microsoftarchive — LLM evaluation framework (0.1%, 3k stars; created 2 years ago, updated 1 month ago)