PyTorch implementation of a federated learning research paper
This repository provides a PyTorch implementation of federated learning, specifically targeting the "Communication-Efficient Learning of Deep Networks from Decentralized Data" paper. It's designed for researchers and practitioners interested in understanding and experimenting with federated learning on common datasets like MNIST, Fashion MNIST, and CIFAR10, supporting both IID and non-IID data distributions.
How It Works
The project implements federated learning by training local models on decentralized data held by simulated users, then averaging their weight updates into a global model (the FedAvg algorithm from the paper). Because only model updates are exchanged, raw data never leaves the clients, which reduces communication overhead compared to centralizing the data. Simple MLP and CNN models are used to keep the federated learning paradigm easy to follow.
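The aggregation step described above amounts to an element-wise average of the clients' model weights. A minimal sketch, using plain Python dicts in place of PyTorch state_dicts (the repository's actual averaging helper may differ; the names here are illustrative):

```python
import copy

def average_weights(local_weights):
    """FedAvg aggregation: element-wise average of client weights.

    `local_weights` is a list of state-dict-like mappings. Here they are
    plain dicts of floats for illustration; in the repo they would be
    PyTorch state_dicts of tensors, averaged the same way.
    """
    avg = copy.deepcopy(local_weights[0])
    for key in avg:
        for w in local_weights[1:]:
            avg[key] += w[key]
        avg[key] /= len(local_weights)
    return avg

# Two simulated clients whose local training diverged.
clients = [
    {"layer.weight": 1.0, "layer.bias": 0.5},
    {"layer.weight": 3.0, "layer.bias": 1.5},
]
global_weights = average_weights(clients)
# → {"layer.weight": 2.0, "layer.bias": 1.0}
```

In the full loop, the server sends `global_weights` back to a fresh sample of clients each round, which is what cuts communication relative to per-step gradient exchange.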
Quick Start & Requirements
Install dependencies:

pip install -r requirements.txt

Place the datasets in the data directory. GPU support is optional but recommended.

python src/baseline_main.py --model=mlp --dataset=mnist --epochs=10
python src/federated_main.py --model=cnn --dataset=cifar --gpu=0 --iid=1 --epochs=10
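The --iid flag controls how the dataset is split across simulated users. A minimal sketch of both schemes, assuming the shard-based non-IID sampling described in the paper; function names are illustrative and the repository's own sampling utilities may differ:

```python
import random

def iid_partition(num_items, num_users, seed=0):
    """IID: each user gets a uniformly random, disjoint slice of indices."""
    rng = random.Random(seed)
    idxs = list(range(num_items))
    rng.shuffle(idxs)
    per_user = num_items // num_users
    return {u: set(idxs[u * per_user:(u + 1) * per_user])
            for u in range(num_users)}

def noniid_partition(labels, num_users, shards_per_user=2, seed=0):
    """Non-IID (shard-based): sort indices by label, cut them into shards,
    and hand each user a few shards, so each user sees only a couple of
    classes -- the pathological split used in the FedAvg paper."""
    rng = random.Random(seed)
    idxs = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_users * shards_per_user
    shard_size = len(labels) // num_shards
    shards = [idxs[s * shard_size:(s + 1) * shard_size]
              for s in range(num_shards)]
    rng.shuffle(shards)
    return {u: set().union(*shards[u * shards_per_user:(u + 1) * shards_per_user])
            for u in range(num_users)}

# Toy example: 1000 samples, 10 balanced classes, 10 users.
labels = [i % 10 for i in range(1000)]
noniid = noniid_partition(labels, num_users=10)
iid = iid_partition(1000, num_users=10)
```

Under the non-IID split each user holds samples from at most `shards_per_user` classes, which is what makes averaging the resulting local models genuinely hard.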
Maintenance & Community
The last commit was about a year ago and the project is marked inactive.

Limitations & Caveats
The implementation focuses on simple MLP and CNN models; the README does not document support for more complex architectures or for federated learning techniques beyond the vanilla FedAvg approach.