Research paper code for federated learning in heterogeneous networks
This repository provides the implementation for FedProx, a federated learning optimization framework designed to address heterogeneity in distributed networks. It targets researchers and practitioners in federated learning, offering more robust convergence than FedAvg, particularly in statistically heterogeneous environments, with a reported average improvement of 22% in test accuracy.
How It Works
FedProx introduces a proximal term to the local client objective function. This term penalizes deviations from the global model, effectively regularizing local training and mitigating the divergence caused by non-identically distributed data across clients. This approach enhances convergence stability and accuracy in heterogeneous settings.
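The sketch below illustrates the idea in PyTorch. It is a minimal illustration only, not the repository's implementation, and names such as fedprox_local_update and global_model are hypothetical:

import torch

def fedprox_local_update(model, global_model, data_loader, loss_fn, mu, lr, epochs):
    # Client-side training that minimizes F_k(w) + (mu/2) * ||w - w_global||^2,
    # where w_global are the frozen global weights received this round.
    global_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Proximal term penalizing drift from the global model.
            prox = sum((p - g).pow(2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            opt.step()
    return model.state_dict()

Setting mu = 0 recovers plain FedAvg-style local training; larger values keep client updates closer to the global model.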
Quick Start & Requirements
Install dependencies:

pip3 install -r requirements.txt

Run an example experiment on CPU (the empty CUDA_VISIBLE_DEVICES deliberately hides all GPUs):

export CUDA_VISIBLE_DEVICES=
bash run_fedprox.sh synthetic_iid 0 1 | tee log_synthetic/synthetic_iid_client10_epoch20_mu1
To run on a GPU instead, set

export CUDA_VISIBLE_DEVICES=available_gpu_id

and modify run_fedavg.sh / run_fedprox.sh with dataset-specific models and hyperparameters.

Highlighted Details
A key hyperparameter is the proximal coefficient mu, which requires adjustment based on the dataset and its degree of heterogeneity.
Maintenance & Community
The project accompanies the MLSys 2020 paper "Federated Optimization in Heterogeneous Networks". Further community engagement or maintenance status is not detailed in the README.
Licensing & Compatibility
The repository does not explicitly state a license. Users should verify compatibility for commercial or closed-source use.
Limitations & Caveats
Running experiments on real-world federated datasets can be time-consuming due to dataset size and model complexity. Hyperparameter tuning, especially for the mu parameter, is critical and dataset-dependent.
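As a practical starting point, a small sweep over candidate mu values can be scripted around the provided run script. This is a hedged sketch: the positional argument order for run_fedprox.sh (dataset, drop fraction, mu) is inferred from the quick-start example and the mu1 log name, and should be verified against the script itself:

import subprocess

# Candidate proximal coefficients; the useful range is dataset-dependent.
for mu in [0.001, 0.01, 0.1, 1]:
    log_path = f"log_synthetic/synthetic_iid_client10_epoch20_mu{mu}"
    with open(log_path, "w") as f:
        # Assumed argument order: <dataset> <drop fraction> <mu>.
        subprocess.run(["bash", "run_fedprox.sh", "synthetic_iid", "0", str(mu)],
                       stdout=f, check=True)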