TensorRT SDK for high-performance inference of various YOLO models
This repository provides a C++ inference engine for various object detection and vision models, optimized for TensorRT. It aims to offer high-performance, server- and embedded-friendly deployment solutions for a wide range of models including YOLOv8 variants, RT-DETR, YOLOv9, YOLOv10, and more.
How It Works
The project leverages TensorRT 8.x and provides a C++ API for efficient model inference. It includes custom plugins (e.g., for LayerNorm) and detailed instructions for exporting models from various frameworks (like Ultralytics YOLO, YOLOX, MMPose) to ONNX format, followed by TensorRT engine generation. The core advantage lies in its unified C++ interface for diverse models, simplifying deployment pipelines.
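As a hedged illustration of this workflow at the C++ level, the sketch below deserializes a prebuilt engine and runs one inference with the standard TensorRT 8.x runtime API. It reflects the general TensorRT pattern rather than this repository's actual interface; the engine filename, binding order, and tensor shapes are placeholder assumptions for a 640x640 YOLO-style model.

```cpp
// Minimal TensorRT 8.x runtime flow (illustrative; not this repo's actual API).
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;
    // Read a serialized engine produced offline (placeholder filename).
    std::ifstream file("yolov8n.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Assumed shapes for a 640x640 YOLO-style model: 1x3x640x640 in, 1x84x8400 out.
    size_t inBytes  = 1 * 3 * 640 * 640 * sizeof(float);
    size_t outBytes = 1 * 84 * 8400 * sizeof(float);
    void* bindings[2];                 // assumes binding 0 = input, 1 = output
    cudaMalloc(&bindings[0], inBytes);
    cudaMalloc(&bindings[1], outBytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    // A preprocessed image would be copied into bindings[0] here.
    context->enqueueV2(bindings, stream, nullptr);   // asynchronous inference
    cudaStreamSynchronize(stream);
    // Postprocessing (box decoding, NMS) would read back bindings[1].

    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    cudaStreamDestroy(stream);
    return 0;
}
```

The repository's unified C++ interface wraps steps like these (preprocessing, enqueue, postprocessing) behind a common API across model families.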
Quick Start & Requirements
The project requires TensorRT 8.x and a CUDA-capable GPU. Build with make, using the provided CMakeLists.txt or Makefile. Compilation can take time depending on system resources.
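As a sketch of the engine-generation step, the following shows how an exported ONNX model can be converted into a serialized TensorRT engine with the TensorRT 8.x builder API. It illustrates the general workflow, not the repository's own export scripts; the file paths and the FP16 flag are placeholder assumptions.

```cpp
// Building a serialized TensorRT engine from an ONNX export (illustrative sketch).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);
    auto* network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto* parser = nvonnxparser::createParser(*network, logger);

    // "model.onnx" is a placeholder for a model exported from e.g. Ultralytics.
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto* config = builder->createBuilderConfig();
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // optional: FP16 if the GPU supports it

    // Optimize, serialize, and write the engine to disk for deployment.
    auto* serialized = builder->buildSerializedNetwork(*network, *config);
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

An engine produced this way can then be loaded by the runtime flow sketched above.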
Maintenance & Community
The repository is actively updated with support for new models and features. Linked CSDN articles provide detailed write-ups, indicating ongoing development and community engagement.
Licensing & Compatibility
The repository's license is not explicitly stated in the provided README snippet. Compatibility for commercial use would depend on the underlying licenses of the models and libraries used.
Limitations & Caveats
The license is unspecified, which may complicate commercial adoption. TensorRT engines are also specific to the TensorRT version and target GPU, so engines must be regenerated for each deployment environment.