github-action-benchmark by benchmark-action

GitHub Action for continuous benchmarking to track performance

Created 5 years ago
1,137 stars

Top 33.8% on SourcePulse

Project Summary

This action provides continuous benchmarking for GitHub Actions workflows, enabling users to monitor performance, detect regressions, and visualize results. It supports a wide range of languages and benchmarking tools, making it suitable for developers and teams focused on performance-critical projects.

How It Works

The action parses benchmark output from various tools (e.g., cargo bench, go test -bench, pytest-benchmark) and stores the results. It can automatically push these results to a GitHub Pages branch for visualization via time-series charts. Performance regressions are detected by comparing current results against previous runs, with configurable thresholds for triggering alerts via commit comments or workflow failures.
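For concreteness, here is a hedged sketch of the inputs that drive comparison and alerting, assuming a prior step has written cargo bench output to output.txt. The input names (tool, output-file-path, alert-threshold, comment-on-alert, fail-on-alert, auto-push, summary-always) follow the action's README, but verify them against the current documentation:

```yaml
# Sketch: store results, publish charts, and alert on regressions.
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo'                  # which tool's output format to parse
    output-file-path: output.txt   # benchmark output produced by an earlier step
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: true                # push rendered charts to the gh-pages branch
    alert-threshold: '150%'        # alert when a result is >1.5x slower than the last run
    comment-on-alert: true         # post a commit comment when the threshold trips
    fail-on-alert: true            # optionally fail the workflow as well
    summary-always: true           # also write a Job Summary for every run
```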

Quick Start & Requirements

  • Install/Run: Use as a GitHub Action step: uses: benchmark-action/github-action-benchmark@v1.
  • Prerequisites: Requires benchmark output from supported tools. For commit comments or auto-pushing to GitHub Pages, secrets.GITHUB_TOKEN is needed.
  • Setup: Minimal setup involves checking out code, setting up the language environment, running the benchmark, and then using this action. See examples/ for language-specific configurations; a minimal workflow sketch follows this list.
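Putting those steps together, a minimal end-to-end sketch for a Rust project (other languages swap the benchmark command and tool value; see examples/):

```yaml
name: Benchmark
on:
  push:
    branches: [main]   # avoid pull_request triggers; see Limitations below

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Rust is preinstalled on ubuntu-latest; other languages need a setup step here.
      - name: Run benchmarks
        run: cargo bench | tee output.txt
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
```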

Highlighted Details

  • Supports Rust, Go, JavaScript, Python, C++, Julia, .NET, Java, and Luau.
  • Can automatically push benchmark results to a gh-pages branch for visualization.
  • Alerts on performance regressions via commit comments or workflow failures.
  • Integrates with GitHub Actions Job Summaries.
  • Allows custom benchmark data with name, unit, and value fields (see the sketch after this list).
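For the custom-data case, the action reads a JSON array of entries with name, unit, and value fields. A hedged sketch, assuming the documented customBiggerIsBetter tool value (use customSmallerIsBetter for metrics where lower is better):

```yaml
# Sketch: write a custom-format results file, then feed it to the action.
- name: Produce custom benchmark data
  run: |
    cat > output.json <<'EOF'
    [
      { "name": "Encode throughput", "unit": "ops/sec", "value": 10240 },
      { "name": "Decode throughput", "unit": "ops/sec", "value": 8192 }
    ]
    EOF
- name: Store custom benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'customBiggerIsBetter'
    output-file-path: output.json
```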

Maintenance & Community

The project is actively maintained by the benchmark-action organization. Community support channels are not explicitly listed, but release updates can be tracked by watching the repository with "Releases only" notifications.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: Permissive MIT license allows for commercial use and integration with closed-source projects.

Limitations & Caveats

Workflows that publish results should not be triggered by pull requests, since pull requests from forks could otherwise modify the GitHub Pages branch. Benchmark stability can be affected by the shared virtual environment; self-hosted runners may be necessary for highly sensitive benchmarks. Customizing the generated benchmark dashboard requires manually editing the generated HTML.
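One way to honor the first caveat is to trigger the workflow only on pushes to the default branch, as in this sketch:

```yaml
# Never trigger on pull_request events, so forks cannot write to gh-pages.
on:
  push:
    branches: [main]
```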

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: 1+ week
  • Pull Requests (30d): 5
  • Issues (30d): 3
  • Star History: 8 stars in the last 30 days

Explore Similar Projects

Starred by Morgan Funtowicz (Head of ML Optimizations at Hugging Face), Luis Capelo (Cofounder of Lightning AI), and 7 more.

lighteval by huggingface
2.6% · 2k stars
LLM evaluation toolkit for multiple backends
Created 1 year ago · Updated 1 day ago

Starred by Pawel Garbacki (Cofounder of Fireworks AI), Shizhe Diao (Author of LMFlow; Research Scientist at NVIDIA), and 14 more.

SWE-bench by SWE-bench
2.3% · 4k stars
Benchmark for evaluating LLMs on real-world GitHub issues
Created 1 year ago · Updated 20 hours ago