PaddleX by PaddlePaddle

All-in-one toolkit for PaddlePaddle-based AI development

created 5 years ago
5,661 stars

Top 9.2% on sourcepulse

Project Summary

PaddleX is an all-in-one, low-code development tool built on PaddlePaddle, designed to streamline the entire AI model lifecycle from training to deployment. It offers a vast collection of over 200 pre-trained models across 33 "pipeline" categories, covering areas like OCR, object detection, image classification, and time series analysis, making advanced AI capabilities accessible to developers for industrial applications.

How It Works

PaddleX provides a unified command-line interface and Python API for seamless model integration and execution. It abstracts complex model architectures and training pipelines into easy-to-use "pipelines," allowing users to invoke pre-trained models with minimal code. The framework emphasizes efficiency and ease of use, supporting features like model fusion, semi-supervised learning, and flexible deployment options (high-performance inference, service, and edge).
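The pipeline abstraction described above can be sketched as a registry that maps pipeline names to ready-to-use predictors. The following is an illustrative sketch only, not PaddleX's actual implementation: the `PIPELINES` registry and the stub predict functions are hypothetical stand-ins for the real model-loading machinery.

```python
# Conceptual sketch of a low-code pipeline factory (NOT the real PaddleX code).
# A registry maps pipeline names to predictor functions; create_pipeline hides
# model loading behind a single call, mirroring the shape of the PaddleX API.
from typing import Callable, Dict, List


class Pipeline:
    """Wraps a loaded model behind a uniform predict() interface."""

    def __init__(self, name: str, predict_fn: Callable[[str], dict]):
        self.name = name
        self._predict_fn = predict_fn

    def predict(self, inputs: List[str]) -> List[dict]:
        # Run the underlying predictor on each input and collect results.
        return [self._predict_fn(path) for path in inputs]


# Hypothetical registry; the real framework resolves names to model configs.
PIPELINES: Dict[str, Callable[[str], dict]] = {
    "image_classification": lambda path: {"input": path, "label": "stub"},
    "OCR": lambda path: {"input": path, "text": "stub"},
}


def create_pipeline(pipeline: str) -> Pipeline:
    """Look up a pipeline by name and return a ready-to-call predictor."""
    if pipeline not in PIPELINES:
        raise ValueError(f"unknown pipeline: {pipeline}")
    return Pipeline(pipeline, PIPELINES[pipeline])


ocr = create_pipeline("OCR")
results = ocr.predict(["demo.png"])
print(results[0]["input"])  # demo.png
```

The point of the design is that callers never touch model architectures directly; swapping one pipeline for another changes only the name passed to the factory.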

Quick Start & Requirements

  • Installation:
    • Install PaddlePaddle: pip install paddlepaddle==3.0.0 (CPU) or pip install paddlepaddle-gpu==3.0.0 (GPU, requires driver >= 450.80.02/452.39 or >= 550.54.14).
    • Install PaddleX: pip install paddlex==3.0rc1
  • Prerequisites: Python 3.8-3.12. GPU support requires NVIDIA drivers.
  • Usage:
    • CLI: paddlex --pipeline [pipeline_name] --input [input_path] --device [device]
    • Python API: from paddlex import create_pipeline; pipeline = create_pipeline(pipeline=[pipeline_name]); output = pipeline.predict([input_path])
  • Documentation: PaddleX Documentation
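The CLI invocation above follows a simple three-flag pattern. A minimal `argparse` sketch of that interface (illustrative only, not the actual PaddleX entry point) looks like:

```python
# Illustrative argparse sketch of the CLI shape shown above
# (NOT the actual PaddleX command-line implementation).
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="paddlex")
    parser.add_argument("--pipeline", required=True,
                        help="pipeline name, e.g. OCR or object_detection")
    parser.add_argument("--input", required=True,
                        help="path or URL of the input to run inference on")
    parser.add_argument("--device", default="cpu",
                        help="execution device, e.g. cpu or gpu:0")
    return parser


# Parse an example invocation equivalent to:
#   paddlex --pipeline OCR --input demo.png --device gpu:0
args = build_parser().parse_args(
    ["--pipeline", "OCR", "--input", "demo.png", "--device", "gpu:0"]
)
print(args.pipeline, args.device)  # OCR gpu:0
```

Both the CLI and the Python API resolve the same pipeline names, so a command-line experiment translates directly into a `create_pipeline` call in application code.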

Highlighted Details

  • Supports over 200 models across 33 pipelines, with 38 single-function modules for custom combinations.
  • Recent updates (v3.0.0rc1) include full compatibility with PaddlePaddle 3.0, enabling compiler training for up to 30% speedup, and integration of the PP-DocBee multimodal large model for document understanding.
  • Extensive hardware support, including NVIDIA GPUs, Kunlun Xin, Ascend, and Cambricon, with specific optimizations for NPU inference speedups (113.8%-226.4%).
  • Offers both zero-code cloud-based development via AI Studio and local low-code development.

Maintenance & Community

Licensing & Compatibility

  • Licensed under the Apache 2.0 license.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

  • Although PaddleX supports multiple hardware backends, model availability and performance vary across platforms. Some pipelines are marked as "under development" (🚧) for certain deployment methods.

Health Check

  • Last commit: 3 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 48
  • Issues (30d): 88

Star History

  • 263 stars in the last 90 days
