vscode-prompt-tsx by microsoft

Declarative prompt engineering for LLM extensions

Created 1 year ago
258 stars

Top 98.1% on SourcePulse

Project Summary

@vscode/prompt-tsx addresses the challenges of programmatically composing LLM prompts by introducing a declarative TSX-based component model. It targets VS Code extension developers, offering a more maintainable, flexible, and context-aware approach to prompt engineering, especially for complex interactions like those in Copilot Chat. The library enables prompts to dynamically adapt to model context window constraints, improving developer experience and prompt robustness.

How It Works

Prompts are structured as a tree of TSX components, which are then flattened into ChatMessage objects. Each component has an associated priority. When the total token count exceeds the model's context window, the renderer prunes lower-priority messages, preserving essential information. This system allows developers to safely include large contextual data, such as conversation history or codebase snippets, by managing their priority.
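A minimal sketch of this model (PromptElement, the message components, and the priority prop follow the library's documented API; the specific components and priority values here are illustrative):

```tsx
import {
  BasePromptElementProps,
  PromptElement,
  SystemMessage,
  UserMessage,
} from '@vscode/prompt-tsx';

interface MyPromptProps extends BasePromptElementProps {
  userQuery: string;
  history: string[];
}

class MyPrompt extends PromptElement<MyPromptProps> {
  render() {
    return (
      <>
        {/* Highest priority: instructions are pruned last. */}
        <SystemMessage priority={100}>
          You are a helpful VS Code assistant.
        </SystemMessage>
        {/* Lowest priority: history is pruned first when over budget. */}
        <UserMessage priority={10}>
          {this.props.history.join(' ')}
        </UserMessage>
        {/* The current question outranks history. */}
        <UserMessage priority={90}>{this.props.userQuery}</UserMessage>
      </>
    );
  }
}
```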

Quick Start & Requirements

  • Installation: npm install --save @vscode/prompt-tsx
  • Configuration: Requires tsconfig.json updates for JSX: "jsx": "react", "jsxFactory": "vscpp", "jsxFragmentFactory": "vscppf".
  • Dependencies: JSX compilation can conflict with other JSX-based libraries (e.g., React) in the same project; an explicit types entry in tsconfig.json may be needed to resolve this.
  • Resources: The README references documentation and a quickstart sample, but links were not included in the provided text; a minimal render sketch follows this list.
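Once installed and configured, rendering follows the pattern in the project README; a sketch, assuming a vscode.LanguageModelChat is available to act as the tokenizer and reusing the illustrative MyPrompt component from above:

```tsx
import { renderPrompt } from '@vscode/prompt-tsx';
import type { LanguageModelChat } from 'vscode';

async function buildMessages(model: LanguageModelChat, userQuery: string) {
  const { messages } = await renderPrompt(
    MyPrompt,
    { userQuery, history: [] },                     // component props
    { modelMaxPromptTokens: model.maxInputTokens }, // budget to prune against
    model                                           // counts tokens during pruning
  );
  return messages; // ready to pass to model.sendRequest
}
```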

Highlighted Details

  • Context Window Management: Advanced sizing props (flexGrow, flexReserve, flexBasis) enable fine-grained token budget allocation; see the sketch after this list.
  • Pruning Logic: TokenLimit enforces hard token caps, while Expandable fills remaining budget.
  • Conditional Rendering: useKeepWith ensures related prompt elements are kept or pruned together; IfEmpty provides fallback content.
  • Debugging: An HTMLTracer visualizes prompt rendering and token budget distribution.
  • Tool Integration: Supports serializing prompt elements for tool invocation (renderElementJSON) and consuming tool results (ToolResult).
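A sketch combining two of the sizing tools above (TextChunk, TokenLimit with its max prop, and flexGrow are documented library exports; the surrounding props and content are illustrative):

```tsx
import {
  BasePromptElementProps,
  PromptElement,
  TextChunk,
  TokenLimit,
  UserMessage,
} from '@vscode/prompt-tsx';

interface ContextPromptProps extends BasePromptElementProps {
  userQuery: string;
  fileContents: string;
  logOutput: string;
}

class ContextPrompt extends PromptElement<ContextPromptProps> {
  render() {
    return (
      <UserMessage>
        {/* The question itself: high priority, pruned last. */}
        <TextChunk priority={100}>{this.props.userQuery}</TextChunk>
        {/* flexGrow: laid out after fixed-size siblings, so it can
            expand into whatever token budget remains. */}
        <TextChunk flexGrow={1} priority={50}>
          {this.props.fileContents}
        </TextChunk>
        {/* TokenLimit: a hard cap of 500 tokens, even if budget remains. */}
        <TokenLimit max={500} priority={40}>
          {this.props.logOutput}
        </TokenLimit>
      </UserMessage>
    );
  }
}
```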

Maintenance & Community

No specific details on maintainers, community channels (like Discord/Slack), or roadmap were found in the provided text.

Licensing & Compatibility

The specific license is not stated in the provided text. As a Microsoft project, it is likely a permissive open-source license (e.g., MIT), but this should be verified. Compatibility for commercial use would depend on the final license.

Limitations & Caveats

In monorepos, JSX configuration conflicts can cause TypeScript compilation errors that require manual tsconfig.json adjustments. Newlines within JSX text are not preserved and must be added explicitly with <br />. The priority- and flex-based token management system introduces a learning curve for complex prompt designs. Because render is synchronous, asynchronous operations within prompt elements must be performed in the prepare method (a sketch follows below).
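For that last caveat, a sketch of the prepare/render split (the two-phase PromptElement lifecycle is part of the library's API; the file-reading state here is illustrative):

```tsx
import { BasePromptElementProps, PromptElement, UserMessage } from '@vscode/prompt-tsx';
import * as vscode from 'vscode';

interface FileContextProps extends BasePromptElementProps {
  uri: vscode.Uri;
}

interface FileContextState {
  text: string;
}

class FileContext extends PromptElement<FileContextProps, FileContextState> {
  // All async work happens here, before rendering begins.
  async prepare(): Promise<FileContextState> {
    const doc = await vscode.workspace.openTextDocument(this.props.uri);
    return { text: doc.getText() };
  }

  // render() is synchronous; it may only consume the prepared state.
  render(state: FileContextState) {
    return <UserMessage>{state.text}</UserMessage>;
  }
}
```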

Health Check

  • Last Commit: 5 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 0
  • Star History: 6 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Jeff Hammerbacher (cofounder of Cloudera), and 6 more.

prompt-engine by microsoft

0.1% · 3k stars
NPM library for LLM prompt engineering
Created 3 years ago · Updated 2 years ago