tensorRT_Pro-YOLOv8  by Melody-Zhou

TensorRT SDK for high-performance inference of various YOLO models

Created 1 year ago
372 stars

Top 76.1% on SourcePulse

Project Summary

This repository provides a C++ inference engine for various object detection and vision models, optimized for TensorRT. It aims to offer high-performance, server- and embedded-friendly deployment solutions for a wide range of models including YOLOv8 variants, RT-DETR, YOLOv9, YOLOv10, and more.

How It Works

The project leverages TensorRT 8.x and provides a C++ API for efficient model inference. It includes custom plugins (e.g., for LayerNorm) and detailed instructions for exporting models from various frameworks (like Ultralytics YOLO, YOLOX, MMPose) to ONNX format, followed by TensorRT engine generation. The core advantage lies in its unified C++ interface for diverse models, simplifying deployment pipelines.

Quick Start & Requirements

  • Installation: Clone the repository and compile using make.
  • Prerequisites: CUDA >= 10.2, cuDNN >= 8.x, TensorRT >= 8.x, OpenCV, Protobuf. Specific versions are recommended in the README.
  • Setup: Requires manual configuration of library paths in CMakeLists.txt or Makefile. Compilation can take time depending on system resources.
  • Resources: Links to CSDN articles provide detailed explanations and deployment guides for each supported model.

Highlighted Details

  • Supports a broad spectrum of models: YOLOv8 (detection, classification, segmentation, OBB, pose), RT-DETR, YOLOv9, YOLOv10, RTMO, PP-OCRv4, LaneATT, CLRNet, CLRerNet, Depth-Anything, YOLOv11, YOLOv12.
  • Offers C++ inference interfaces for server and embedded deployment.
  • Includes instructions for ONNX export and TensorRT engine generation for each model.
  • Provides a basic ByteTrack implementation for object tracking.

Maintenance & Community

The repository is actively updated with support for new models and features. Accompanying CSDN articles provide detailed explanations and deployment guides for each model, indicating ongoing development and community engagement.

Licensing & Compatibility

The repository's license is not explicitly stated in the provided README snippet. Compatibility for commercial use would depend on the underlying licenses of the models and libraries used.

Limitations & Caveats

  • The README indicates that some functionalities, like the LayerNorm plugin, are not currently used in inference due to identified issues.
  • Specific model export and engine generation steps often require manual code modifications within the respective model repositories (e.g., Ultralytics, MMPose) before integration.
  • The project relies heavily on manual path configuration for dependencies, which can be error-prone.
Health Check

  • Last Commit: 2 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 3
  • Star History: 7 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Soumith Chintala (co-author of PyTorch), and 1 more.

jetson-inference by dusty-nv

  • Vision DNN library for NVIDIA Jetson devices
  • Top 0.1% on SourcePulse
  • 9k stars
  • Created 9 years ago, updated 11 months ago