Reference implementations for MLPerf training benchmarks
This repository provides reference implementations for MLPerf™ training benchmarks, targeting ML engineers and researchers seeking to understand or implement standardized machine learning performance tests. It offers a starting point for benchmark implementations, enabling users to evaluate model training performance across various frameworks and hardware.
How It Works
The project offers code for MLPerf training benchmarks, including model implementations in at least one framework, Dockerfiles for containerized execution, dataset download scripts, and timing scripts. This approach standardizes the benchmarking process, allowing for reproducible performance comparisons across different hardware and software stacks.
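The role of the timing scripts can be illustrated with a minimal sketch. This is hypothetical code, not the repo's actual scripts (which also check quality targets and emit MLPerf-compliant logs): wrap the training command, measure wall-clock duration, and report the result.

```python
import subprocess
import sys
import time

def timed_run(cmd):
    """Run a training command and report its wall-clock duration.

    Illustrative only: real MLPerf timing scripts also verify that the
    model reached its quality target; this sketch measures end-to-end
    runtime of an arbitrary command.
    """
    start = time.monotonic()
    result = subprocess.run(cmd)
    elapsed = time.monotonic() - start
    status = "success" if result.returncode == 0 else "failure"
    print(f"run_status={status} seconds={elapsed:.1f}")
    return elapsed

# Example: time a trivial command in place of a real training run.
timed_run([sys.executable, "-c", "pass"])
```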
Quick Start & Requirements
Setup typically involves installing CUDA and Docker (install_cuda_docker.sh), downloading datasets (./download_dataset.sh), and building/running the Docker image.
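The setup steps can be sketched as a dry run. The benchmark name and mount paths below are illustrative; consult each benchmark's README for the exact commands, and drop the `run` wrapper to execute for real.

```shell
# Dry-run sketch of the typical benchmark workflow (hypothetical paths).
run() { echo "+ $*"; }   # print each step instead of executing it

BENCHMARK=image_classification           # hypothetical benchmark directory

run ./install_cuda_docker.sh             # install CUDA + Docker on the host
run ./download_dataset.sh                # fetch the benchmark's dataset
run docker build -t "mlperf/$BENCHMARK" .
run docker run --gpus all -v "$PWD/data:/data" "mlperf/$BENCHMARK"
```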
Requirements include CUDA and Docker (install_cuda_docker.sh), specific framework dependencies (PyTorch, TensorFlow, NeMo, TorchRec, GLT), and large datasets (e.g., LAION-400M-filtered, C4, OpenImages).

Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats