textgrad by zou-group

Autograd engine for textual gradients, enabling LLM-driven optimization

created 1 year ago
2,793 stars

Top 17.4% on sourcepulse

Project Summary

TextGrad enables automatic differentiation for text-based tasks by leveraging Large Language Models (LLMs) to provide gradient feedback. This framework allows users to define loss functions and optimize textual outputs, such as reasoning steps, code snippets, or prompts, using a PyTorch-like API. It's designed for researchers and developers working with LLMs who need to fine-tune or improve the quality of generated text through an iterative optimization process.

How It Works

TextGrad implements a novel "textual gradient" concept, where LLMs act as differentiators. Instead of numerical gradients, LLMs provide textual feedback on the quality or correctness of an output. This feedback is then used by a Textual Gradient Descent (TGD) optimizer to iteratively refine the textual variable, guided by a natural-language loss function. This approach allows optimization of complex, unstructured data like natural language, code, or even multimodal inputs.
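
Concretely, the loop mirrors PyTorch's variable/loss/optimizer pattern: `backward()` asks an LLM for textual feedback, and `step()` rewrites the variable using that feedback. The sketch below uses the library's documented `tg.Variable`, `tg.TextLoss`, and `tg.TGD` names; the evaluation instruction and answer text are illustrative placeholders, and exact signatures may differ across versions.

```python
import textgrad as tg

# The backward engine is the LLM that produces textual "gradients".
tg.set_backward_engine("gpt-4o", override=True)

# The textual variable to optimize; requires_grad=True marks it for feedback.
answer = tg.Variable(
    "It takes 72 minutes to dry 30 shirts.",
    role_description="concise answer to a reasoning question",
    requires_grad=True,
)

# A natural-language loss: an LLM critiques the variable against this instruction.
loss_fn = tg.TextLoss("Evaluate the answer for logical correctness. Be critical and concise.")

# Textual Gradient Descent over the variable.
optimizer = tg.TGD(parameters=[answer])
loss = loss_fn(answer)   # forward: critique the current answer
loss.backward()          # backward: attach textual feedback to `answer`
optimizer.step()         # update: rewrite `answer` guided by the feedback
print(answer.value)
```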

Quick Start & Requirements
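
TextGrad is distributed on PyPI: install it with `pip install textgrad` and export an API key for your chosen backend (e.g. `OPENAI_API_KEY` for OpenAI models). The snippet below is a minimal forward-pass sketch in the style of the project's PyTorch-like API; the model name and question are illustrative.

```python
import textgrad as tg

# Wrap an LLM as a callable model; "gpt-4o" is an illustrative engine name.
model = tg.BlackboxLLM("gpt-4o")

# Inputs are Variables; requires_grad=False means this one is never rewritten.
question = tg.Variable(
    "If it takes 1 hour to dry 25 shirts under the sun, how long for 30 shirts?",
    role_description="question to the LLM",
    requires_grad=False,
)

answer = model(question)  # forward pass: a single LLM call
print(answer.value)       # the generated answer text
```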

Highlighted Details

  • Published in Nature (March 2025).
  • Supports multiple LLM backends via litellm, including Bedrock, Together, and Gemini (a backend-selection sketch follows this list).
  • Enables optimization of text, code, prompts, and multimodal inputs.
  • Features a PyTorch-like API for intuitive usage.
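
Swapping backends comes down to constructing a different engine. A hedged sketch follows: `tg.get_engine` is part of the public API, and recent releases route litellm-backed models through an `experimental:` prefix, but exact engine strings vary by version, so treat the model names below as assumptions and check the README.

```python
import textgrad as tg

# Classic engine lookup by model name (illustrative model strings).
engine = tg.get_engine("gpt-4o-mini")

# Newer litellm-based engines are reached via the "experimental:" prefix,
# which lets litellm route to providers such as Bedrock, Together, or Gemini.
gemini = tg.get_engine("experimental:gemini/gemini-1.5-pro")

# An engine can drive both the loss evaluation and the backward pass.
tg.set_backward_engine(engine, override=True)
loss_fn = tg.TextLoss("Critique the text for clarity.", engine=gemini)
```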

Maintenance & Community

  • Active development with recent updates introducing new litellm-based engines.
  • Key contributors include Federico Bianchi and Mert Yuksekgonul.
  • Inspiration drawn from PyTorch, DSPy, Micrograd, ProTeGi, and Reflexion.

Licensing & Compatibility

  • The repository does not explicitly state a license in the provided README; check the repository itself for licensing details and commercial-use compatibility.

Limitations & Caveats

  • The new litellm engines are experimental and may have issues.
  • The effectiveness of optimization is highly dependent on the quality of the LLM feedback and the defined loss function.
  • Requires access to LLM APIs, which may incur costs.

Health Check

  • Last commit: 1 week ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 5
  • Issues (30d): 10

Star History

  • 319 stars in the last 90 days
