hatchet by hatchet-dev

Background task platform for scalable workflows

created 1 year ago
5,869 stars

Top 8.9% on sourcepulse

Project Summary

Hatchet is a platform for running background tasks at scale, designed to replace traditional task queues like Celery or BullMQ. It offers robust workflow orchestration, durable task execution, and flow control, targeting developers who need to offload work from their primary applications while ensuring reliability and observability.

How It Works

Hatchet leverages a durable task queue built on PostgreSQL, ensuring tasks are reliably delivered and tracked. It supports complex workflow definitions using Directed Acyclic Graphs (DAGs), allowing for intricate task dependencies and data flow. The system provides built-in features for concurrency control, rate limiting, scheduling, and event-driven triggers, all managed through a unified platform.
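
For a concrete picture, a two-step DAG can be declared with the Python SDK roughly as sketched below. This is a minimal sketch assuming the decorator-style API (hatchet.workflow, hatchet.step, parents=, context.step_output) seen in older SDK examples; exact names and signatures vary between SDK versions, so check the docs before copying.

    from hatchet_sdk import Hatchet, Context

    hatchet = Hatchet()  # reads connection settings and API token from the environment

    # A two-step DAG: "process" runs only after "validate" completes and can
    # read the upstream step's output.
    @hatchet.workflow(on_events=["order:created"])
    class OrderWorkflow:
        @hatchet.step()
        def validate(self, context: Context):
            order = context.workflow_input()
            return {"valid": True, "order_id": order.get("id")}

        @hatchet.step(parents=["validate"])
        def process(self, context: Context):
            upstream = context.step_output("validate")
            # ... do the actual work here ...
            return {"processed": upstream["order_id"]}

    # A worker polls the Hatchet engine (backed by PostgreSQL) for runnable steps.
    worker = hatchet.worker("order-worker")
    worker.register_workflow(OrderWorkflow())
    worker.start()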

Quick Start & Requirements

  • Install/Run: Examples provided for Python, TypeScript, and Go SDKs (see the sketch after this list).
  • Prerequisites: PostgreSQL.
  • Resources: Self-hosted deployment involves running the Hatchet engine and a PostgreSQL database. Cloud version available.
  • Docs: https://docs.hatchet.run
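
On the application side, queuing work is typically just pushing an event to the engine; the SDK client authenticates with an API token read from the environment. The variable name HATCHET_CLIENT_TOKEN and the event.push call below are assumptions based on SDK examples and may differ by version.

    import os
    from hatchet_sdk import Hatchet

    # The client authenticates against the engine with an API token;
    # HATCHET_CLIENT_TOKEN is the conventional variable name (assumption).
    assert os.environ.get("HATCHET_CLIENT_TOKEN"), "generate a token from the Hatchet dashboard first"

    hatchet = Hatchet()

    # Push an event; any workflow subscribed to "order:created" is triggered.
    # (The `event.push` method path is taken from SDK examples and may differ by version.)
    hatchet.event.push("order:created", {"id": "order-123", "amount": 4200})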

Highlighted Details

  • Durable Task Queue: Persists execution history for monitoring and debugging, unlike many standard task queues.
  • Workflow Orchestration: Supports DAGs, durable tasks, and conditional execution for complex logic.
  • Flow Control: Granular control over concurrency and rate limiting per user or tenant (see the sketch after this list).
  • Real-time UI: Bundled web UI for monitoring tasks, workflows, and queues with live updates and logging.
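
Per-tenant flow control is usually expressed by keying concurrency on a value from the workflow input, roughly as sketched below. The concurrency decorator, the ConcurrencyLimitStrategy enum, and the parameter names here are assumptions based on older Python SDK examples; consult the SDK reference for the current API.

    from hatchet_sdk import Hatchet, Context, ConcurrencyLimitStrategy

    hatchet = Hatchet()

    @hatchet.workflow(on_events=["report:requested"])
    class ReportWorkflow:
        # Limit each tenant to one concurrent run while other tenants keep flowing.
        # (Decorator and enum names are assumptions; check the SDK reference.)
        @hatchet.concurrency(max_runs=1, limit_strategy=ConcurrencyLimitStrategy.GROUP_ROUND_ROBIN)
        def concurrency(self, context: Context) -> str:
            return context.workflow_input()["tenant_id"]

        @hatchet.step()
        def generate(self, context: Context):
            return {"ok": True}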


Licensing & Compatibility

  • License: MIT License.
  • Compatibility: Permissive license suitable for commercial use and integration with closed-source applications.

Limitations & Caveats

Hatchet is positioned as a more comprehensive alternative to both simple task queues and specialized orchestration tools, so it can introduce more operational overhead than a library-based solution: you run the Hatchet engine and PostgreSQL in addition to your workers. While it supports high throughput, workloads at extreme scale (more than roughly 10k tasks/s) that do not need execution-history retention may be better served by a simpler, dedicated queue library.

Health Check

  • Last commit: 22 hours ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 120
  • Issues (30d): 13
  • Star History: 362 stars in the last 90 days
