This repository hosts "ai-that-works," a weekly live coding and Q&A session focused on practical AI application development. It targets developers and engineers who want to move AI applications from demo to production, offering advanced techniques for working with large language models (LLMs).
How It Works
The sessions feature live coding demonstrations and discussions on topics such as context engineering, prompt evaluation, memory implementation, and agentic workflows. The approach emphasizes practical application and problem-solving, often using Python, TypeScript, or Go, with a focus on tools like Cursor and the BAML framework.
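To make the agentic-workflow and interruptibility topics concrete, here is a minimal, hypothetical sketch of an agent loop that checks a stop flag between steps. The names `call_llm` and `run_tool` are stand-ins, not APIs from the repository; a real session would wire these to an actual model and tools.

```python
import threading

def call_llm(history):
    # Stand-in for a model call that proposes the next tool step or finishes.
    return {"done": len(history) >= 3, "tool": "echo", "args": {"text": "step"}}

def run_tool(tool, args):
    # Stand-in for tool execution.
    return f"{tool}: {args['text']}"

def agent_loop(task, stop_event, max_steps=10):
    history = [task]
    for _ in range(max_steps):
        if stop_event.is_set():
            # Fine-grained user control: the flag is checked between every step,
            # so a user can interrupt without killing the process.
            return history + ["interrupted"]
        step = call_llm(history)
        if step["done"]:
            break
        history.append(run_tool(step["tool"], step["args"]))
    return history

stop = threading.Event()
result = agent_loop("summarize the repo", stop)
```

Because the interruption check sits inside the loop rather than around it, the agent can stop cleanly at a step boundary with its partial history intact.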
Quick Start & Requirements
- Prerequisites: Familiarity with Zoom, Cursor (or VS Code), Python/TypeScript/Go, and BAML is recommended. Specific sessions may also require package managers such as uv (Python) or pnpm (TypeScript).
- Resources: Sessions are live, typically one hour, and most include links to YouTube recordings and code. An event calendar is available at https://lu.ma/baml.
Highlighted Details
- Covers advanced context engineering for coding agents and non-coding tasks.
- Explores techniques for interruptible agents and fine-grained user control.
- Dives into model inference, prompt evaluation across multiple models, and PDF processing for AI.
- Demonstrates building AI with memory, decaying-resolution memory, and content pipelines.
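The "decaying-resolution memory" idea above can be sketched as a buffer that keeps recent events verbatim and compresses older ones as they age out. This is an illustrative assumption about the technique, not the session's actual implementation; `_summarize` stands in for an LLM summarization call.

```python
class DecayingMemory:
    """Recent events stay at full resolution; older events decay to summaries."""

    def __init__(self, recent_limit=5):
        self.recent_limit = recent_limit  # how many events stay verbatim
        self.recent = []                  # newest events, full detail
        self.summaries = []               # older events, compressed

    def add(self, event: str):
        self.recent.append(event)
        # Once the verbatim buffer overflows, decay the oldest event.
        if len(self.recent) > self.recent_limit:
            oldest = self.recent.pop(0)
            self.summaries.append(self._summarize(oldest))

    def _summarize(self, event: str) -> str:
        # Stand-in for an LLM summarization call: truncate to a headline.
        return event[:30] + ("…" if len(event) > 30 else "")

    def recall(self):
        # Coarse history first, then full-resolution recent events.
        return self.summaries + self.recent
```

A real version might add multiple decay tiers (hour, day, week) so resolution drops gradually with age rather than in a single step.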
Maintenance & Community
Licensing & Compatibility
- The repository itself is likely open-source, but individual session content and the tools used may carry their own licenses. The README does not explicitly state a license for the core "ai-that-works" project.
Limitations & Caveats
- The content is delivered live, so it may not suit those who prefer self-paced learning without direct interaction. Some advanced topics assume prior knowledge or require specific tooling setup.