FDU-INC: Framework for efficient, private foundation model fine-tuning and inference
Top 88.2% on SourcePulse
Summary
SplitFM is an open-source framework for parameter-efficient fine-tuning (SplitLoRA) and inference (SplitInfer) of foundation models. It addresses edge deployment challenges, enabling use in resource-constrained, data-sensitive environments through techniques such as Federated Learning, Split Learning, and cloud offloading, thereby improving both privacy and efficiency.
How It Works
SplitLoRA builds on the LoRA technique, combining Federated Learning (FL) for data privacy with Split Learning (SL) for computational offloading. Fine-tuning updates only a small set of low-rank parameters, which reduces device load and preserves privacy. SplitInfer enables inference of large foundation models on edge devices by leveraging cloud resources: the model is partitioned between device and cloud, so inference proceeds without transmitting sensitive raw data to external servers, preserving privacy and enabling deployment on low-resource devices.
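The split-inference idea above can be sketched in a few lines. This is a hypothetical toy model (a small ReLU MLP), not SplitFM's actual code: the point is that the device runs the layers before the cut point and ships only an intermediate activation, never the raw input, to the cloud, and the partitioned computation matches full on-device inference exactly.

```python
import numpy as np

# Hypothetical sketch of split inference (not SplitFM's actual code).
# A toy 4-layer ReLU MLP is partitioned at a cut layer: the edge device
# runs the early layers; the cloud runs the rest on the activation it
# receives. The raw input never leaves the device.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 16)) for _ in range(4)]

def device_part(x, cut=2):
    # Runs on the edge device: the first `cut` layers only.
    for w in layers[:cut]:
        x = np.maximum(x @ w, 0.0)
    return x  # intermediate activation sent over the network

def cloud_part(h, cut=2):
    # Runs in the cloud: the remaining layers on the received activation.
    for w in layers[cut:]:
        h = np.maximum(h @ w, 0.0)
    return h

x = rng.standard_normal((1, 16))
split_out = cloud_part(device_part(x))

# Running all layers on the device gives the same result, so partitioning
# changes where computation happens but not what is computed.
full = x
for w in layers:
    full = np.maximum(full @ w, 0.0)
assert np.allclose(split_out, full)
```

The cut point is a deployment knob: an earlier cut offloads more compute to the cloud, while a later cut keeps more of the model (and more of the representation) on the device.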
Quick Start & Requirements
Install: pip install loralib
Dependencies: pip install -r requirement.txt
Docker image: nvcr.io/nvidia/pytorch:20.03-py3
Supported layer types: nn.Linear, nn.Embedding, nn.Conv2d
Requires the original pre-trained checkpoints.
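To make the "minimal trainable parameters" claim concrete, here is a toy numpy sketch of the LoRA decomposition that loralib applies to layers such as nn.Linear. All names and sizes here are illustrative, not taken from the repository: the frozen weight W stays fixed, and only two small low-rank factors A and B are trained.

```python
import numpy as np

# Toy sketch of the LoRA idea (illustrative, not loralib's internals).
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2  # rank r is much smaller than the layer dims

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B would receive gradients.
    return x @ (W + B @ A).T

x = rng.standard_normal((1, d_in))
# With B initialized to zero, the adapted layer reproduces the frozen
# layer exactly, so fine-tuning starts from the pre-trained behavior.
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameter count: r*(d_in + d_out) instead of d_in*d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

This is why the pre-trained checkpoints are required: LoRA stores only the small A and B factors, and the full frozen weights must be loaded alongside them at fine-tuning and inference time.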