github-action-benchmark by benchmark-action

GitHub Action for continuous benchmarking to track performance

Created 5 years ago · 1,129 stars · Top 34.7% on sourcepulse

Project Summary

This action provides continuous benchmarking for GitHub Actions workflows, enabling users to monitor performance, detect regressions, and visualize results. It supports a wide range of languages and benchmarking tools, making it suitable for developers and teams focused on performance-critical projects.

How It Works

The action parses benchmark output from various tools (e.g., cargo bench, go test -bench, pytest-benchmark) and stores the results. It can automatically push these results to a GitHub Pages branch for visualization via time-series charts. Performance regressions are detected by comparing current results against previous runs, with configurable thresholds for triggering alerts via commit comments or workflow failures.
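That flow maps directly onto the action's inputs. Below is a minimal sketch of the comparison step, assuming an earlier step has written cargo bench output to output.txt; the 150% threshold is an illustrative choice (alert-threshold, comment-on-alert, and fail-on-alert are documented inputs of the action):

```yaml
# Compare the current run against stored results and alert on regression.
- name: Detect performance regressions
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo'                 # parser for `cargo bench` output
    output-file-path: output.txt  # benchmark output captured earlier
    github-token: ${{ secrets.GITHUB_TOKEN }}
    alert-threshold: '150%'       # alert when a benchmark is 1.5x slower
    comment-on-alert: true        # post a commit comment on regression
    fail-on-alert: true           # also fail the workflow run
```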

Quick Start & Requirements

  • Install/Run: Use as a GitHub Action step: uses: benchmark-action/github-action-benchmark@v1.
  • Prerequisites: Requires benchmark output from supported tools. For commit comments or auto-pushing to GitHub Pages, secrets.GITHUB_TOKEN is needed.
  • Setup: Minimal setup involves checking out code, setting up the language environment, running the benchmark, and then invoking this action; a minimal workflow sketch follows this list. See examples/ for language-specific configurations.
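Putting those steps together, a minimal Rust workflow might look like the following sketch (the trigger branch and the auto-push choice are illustrative assumptions):

```yaml
name: Benchmark
on:
  push:
    branches: [main]   # push-only trigger; see Limitations & Caveats

permissions:
  contents: write      # required to push results to the gh-pages branch

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run the benchmark suite and capture its output.
      - run: cargo bench | tee output.txt
      # Parse, store, and publish the results as time-series charts.
      - uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true   # push chart data to the gh-pages branch
```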

Highlighted Details

  • Supports Rust, Go, JavaScript, Python, C++, Julia, .NET, Java, and Luau.
  • Can automatically push benchmark results to a gh-pages branch for visualization.
  • Alerts on performance regressions via commit comments or workflow failures.
  • Integrates with GitHub Actions Job Summaries.
  • Allows custom benchmark data with name, unit, and value fields (see the sketch after this list).
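For tools without a built-in parser, results can be supplied as JSON in that name/unit/value shape. A sketch, assuming latency-style metrics where smaller is better (the file name and values are illustrative; customSmallerIsBetter is one of the action's custom tool modes):

```yaml
# Emit results in the action's custom JSON format, then parse them.
- name: Write custom benchmark data
  run: |
    cat > bench.json <<'EOF'
    [
      { "name": "request latency", "unit": "ms", "value": 12.4 },
      { "name": "peak memory", "unit": "MB", "value": 301 }
    ]
    EOF
- uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'customSmallerIsBetter'  # lower values count as improvements
    output-file-path: bench.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
```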

Maintenance & Community

The project is actively maintained by the benchmark-action organization. Community support channels are not explicitly listed, but release updates can be tracked by watching the repository with "Releases only" notifications.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: Permissive MIT license allows for commercial use and integration with closed-source projects.

Limitations & Caveats

Workflows that publish results should not run on pull_request events; otherwise an untrusted pull request could modify the GitHub Pages branch. Benchmark stability can be affected by noise on shared virtual runners, so self-hosted runners may be necessary for highly sensitive benchmarks. Customizing the generated benchmark dashboard requires manually modifying its HTML.
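One way to honor the pull-request caveat is to gate the publishing job on the event type, as in this sketch (the job name and branch are assumptions):

```yaml
jobs:
  benchmark:
    # Publish only on pushes to the default branch so untrusted
    # pull requests can never write to the gh-pages branch.
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
```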

Health Check

  • Last commit: 2 months ago
  • Responsiveness: 1+ week
  • Pull requests (30d): 2
  • Issues (30d): 2
  • Star history: 40 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Jeff Hammerbacher (Cofounder of Cloudera), and 3 more.

AutoPR by irgolic

0.1% · 1k stars
AI-powered workflows for codebase automation
Created 2 years ago · updated 1 year ago
Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems) and Travis Fischer (Founder of Agentic).

LiveCodeBench by LiveCodeBench

0.8% · 606 stars
Benchmark for holistic LLM code evaluation
Created 1 year ago · updated 2 weeks ago
Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Simon Willison (Author of Django), and 1 more.

tau-bench by sierra-research

2.6% · 709 stars
Benchmark for tool-agent-user interaction research
Created 1 year ago · updated 3 weeks ago