ray by ray-project

AI compute engine for scaling Python and AI applications

Created 9 years ago
40,702 stars

Top 0.7% on SourcePulse

View on GitHub
Project Summary

Ray is a unified framework for scaling AI and Python applications, designed for developers and researchers needing to move workloads from a laptop to a cluster. It provides a core distributed runtime and a suite of AI libraries (Data, Train, Tune, RLlib, Serve) to simplify and accelerate machine learning compute, enabling seamless scaling of Python code without requiring additional infrastructure.

How It Works

Ray's core is a distributed runtime built on key abstractions: Tasks (stateless functions), Actors (stateful processes), and Objects (immutable distributed values). This allows for flexible parallel and distributed execution of Python code. The AI libraries leverage this core to provide specialized, scalable functionalities for data processing, hyperparameter tuning, distributed training, reinforcement learning, and model serving.

Quick Start & Requirements

  • Install with: pip install ray (use pip install "ray[default]" for the dashboard and cluster launcher)
  • Requirements: recent releases require Python 3.9+ (older releases supported 3.7+). GPU/CUDA acceleration is available for the ML libraries.
  • Documentation: https://docs.ray.io/en/master/

Highlighted Details

  • Unified framework for scaling Python and AI applications.
  • Comprehensive suite of AI libraries: Data, Train, Tune, RLlib, Serve.
  • General-purpose runtime for any Python workload.
  • Includes Ray Dashboard for monitoring and a Distributed Debugger.

Maintenance & Community

  • Active community with Discourse forum, GitHub Issues, and Slack channel.
  • Regular updates and a growing ecosystem of integrations.
  • Twitter: @raydistributed
  • Slack: https://www.ray.io/join-slack

Licensing & Compatibility

  • Apache License 2.0. Permissive for commercial use and closed-source linking.

Limitations & Caveats

While Ray aims for seamless scaling, distributed execution complicates debugging (the bundled Dashboard and Distributed Debugger help here), and performance tuning for specific workloads may require understanding Ray's scheduling and object-store internals and best practices.

Health Check

  • Last Commit: 14 hours ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 663
  • Issues (30d): 224
  • Star History: 471 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Jiayi Pan (author of SWE-Gym; MTS at xAI), and 20 more.

alpa by alpa-projects

Auto-parallelization framework for large-scale neural network training and serving

  • 3k stars (top 0.0% on SourcePulse)
  • Created 4 years ago
  • Updated 2 years ago