Evaluation benchmark suite for semi-supervised learning algorithms
This repository provides the evaluation benchmark suite for deep semi-supervised learning (SSL) algorithms, as detailed in the paper "Realistic Evaluation of Deep Semi-Supervised Learning Algorithms." It is intended for researchers and practitioners in machine learning who need to rigorously evaluate and compare SSL methods.
How It Works
The suite automates the download, preprocessing, and splitting of datasets (CIFAR-10, SVHN, ImageNet 32x32) into labeled and unlabeled subsets using configurable label maps. It then runs and evaluates various SSL algorithms, including VAT (virtual adversarial training), using tmuxp to manage the training and evaluation processes, which keeps the experimental setup structured and reproducible.
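The labeled/unlabeled split described above can be sketched as follows. This is an illustrative example only, not the repository's actual label-map format: `split_labeled_unlabeled` and its parameters are assumptions for the sketch, which keeps a fixed number of labeled examples per class and treats the rest as unlabeled.

```python
import numpy as np

def split_labeled_unlabeled(labels, num_labeled_per_class, seed=0):
    """Split example indices into a small labeled subset and an unlabeled rest.

    Hypothetical sketch: `labels` is a 1-D array of integer class labels;
    `num_labeled_per_class` caps how many labeled examples each class keeps.
    """
    rng = np.random.RandomState(seed)
    labeled_idx = []
    for cls in np.unique(labels):
        # Shuffle this class's indices, then keep the first few as labeled.
        cls_idx = np.where(labels == cls)[0]
        rng.shuffle(cls_idx)
        labeled_idx.extend(cls_idx[:num_labeled_per_class])
    labeled_idx = np.sort(np.array(labeled_idx))
    # Everything not selected as labeled becomes the unlabeled pool.
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx

# Example: 10 examples over 2 classes, keeping 2 labeled per class.
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
lab, unlab = split_labeled_unlabeled(labels, num_labeled_per_class=2)
```

Fixing the random seed makes the split deterministic, which matters for comparing SSL methods on identical labeled subsets.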
Quick Start & Requirements
pip3 install -r requirements.txt
tmuxp load <.yml file>
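The session file passed to tmuxp load might look like the following hypothetical sketch; the window names and the train/eval script paths and flags are assumptions for illustration, not the repository's actual files.

```yaml
# Hypothetical tmuxp session: one window trains, another evaluates.
session_name: ssl-experiment
windows:
  - window_name: train
    panes:
      - python3 train.py --dataset=cifar10 --method=vat
  - window_name: eval
    panes:
      - python3 evaluate.py --dataset=cifar10
```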
Alternatively, run the provided Python scripts directly without tmuxp.
Highlighted Details
The repository includes tmuxp .yml configuration files for running the experiments described in the paper.
Maintenance & Community
This project is an open-source release associated with a specific research paper. No active community channels or ongoing maintenance efforts are indicated.
Licensing & Compatibility
The repository does not explicitly state a license. It is presented as an open-source release for research purposes. Commercial use or linking with closed-source projects may require clarification.
Limitations & Caveats
Exact reproducibility of paper results may be affected by subtle differences in TensorFlow versions and random seeds. The project is not an official Google product.