Open-source tool for simplifying GPU allocation and AI workload orchestration
Top 23.9% on sourcepulse
dstack provides an open-source platform for orchestrating AI workloads and managing GPU resources, serving as an alternative to Kubernetes and Slurm for ML teams. It simplifies the allocation and deployment of jobs, services, and development environments across diverse hardware, including NVIDIA, AMD, Google TPUs, and Intel Gaudi accelerators, on cloud and on-premise infrastructure.
How It Works
dstack operates by defining infrastructure and workload configurations in YAML files, covering environments, tasks, services, fleets, volumes, and gateways. Users apply these configurations via a CLI or API, enabling dstack to automate provisioning, job queuing, scaling, networking, and failure management across heterogeneous compute resources. This declarative approach abstracts away the complexities of distributed systems and cloud provider specifics.
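The declarative flow described above can be illustrated with a minimal task configuration. This is a sketch, not a complete reference: the `type: task` schema with `commands` and `resources` follows recent dstack releases, and names like `train-example` and `train.py` are placeholders.

```yaml
type: task
# Hypothetical name; any identifier works
name: train-example

# Commands run inside the provisioned environment
commands:
  - pip install -r requirements.txt
  - python train.py

# Declarative resource request; dstack matches it against
# available hardware across the configured backends
resources:
  gpu: 24GB
```

Applying such a file through the CLI is what triggers the provisioning, queuing, and networking automation described above.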
Quick Start & Requirements
Install the server and CLI with `pip install "dstack[all]"` (or `uv tool install "dstack[all]"`). Then optionally configure backends in `~/.dstack/server/config.yml`, start the server (`dstack server`), and point the CLI at it (`dstack config --url ... --project ... --token ...`).
Highlighted Details
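Once the server is running and the CLI is configured, workloads are defined in YAML and applied through the CLI. A minimal dev environment sketch, assuming the `dev-environment` configuration type and fields from recent dstack releases:

```yaml
type: dev-environment
# IDE to attach to the remote environment (assumed value)
ide: vscode
resources:
  gpu: 1  # request a single GPU of any kind
```

Saved as e.g. `.dstack.yml` and applied via the CLI, this would provision a GPU-backed environment on whichever configured backend can satisfy the request.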
Maintenance & Community
The project is actively maintained with frequent updates. A Discord community is available for support and discussion.
Licensing & Compatibility
Limitations & Caveats
As a newer alternative to established orchestrators like Kubernetes and Slurm, dstack likely has a smaller ecosystem and narrower community support than those mature platforms. The README provides no performance benchmarks or detailed feature comparisons.