jetson-inference by dusty-nv

Vision DNN library for NVIDIA Jetson devices

created 9 years ago
8,419 stars

Top 6.2% on sourcepulse

Project Summary

This project provides a comprehensive guide and library for deploying deep learning inference networks and real-time vision primitives on NVIDIA Jetson devices. It targets developers and researchers working with embedded AI, offering optimized inference via TensorRT and training capabilities with PyTorch, enabling applications from image classification to action recognition.

How It Works

The library leverages NVIDIA's TensorRT for highly optimized deep learning inference on Jetson GPUs, supporting FP16 precision for maximum throughput. It provides pre-trained models and APIs for common vision tasks like classification (ImageNet), object detection (SSD, TAO), segmentation (SegNet), pose estimation, and action recognition. The project also includes utilities for camera streaming, CUDA manipulation, and ROS integration.
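As a minimal sketch of the workflow described above, the library's Python bindings load a TensorRT-optimized network and run inference in a few lines (this assumes a Jetson device with JetPack installed; `my_image.jpg` is a placeholder path):

```python
# Sketch: image classification with jetson-inference's Python bindings.
# Requires a Jetson device; the network is optimized by TensorRT on first load.
import jetson_inference
import jetson_utils

# Load a pre-trained ImageNet classification model
net = jetson_inference.imageNet("googlenet")

# Load an image into shared CPU/GPU memory ("my_image.jpg" is a placeholder)
img = jetson_utils.loadImage("my_image.jpg")

# Classify the image; returns the top class index and its confidence
class_id, confidence = net.Classify(img)
print(net.GetClassDesc(class_id), confidence)
```

The first run of a given model takes longer while TensorRT builds and caches the optimized engine; subsequent loads reuse the cached engine.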

Quick Start & Requirements

  • Installation: Primarily via Docker containers or building from source. Detailed setup instructions are available in the README and linked documentation.
  • Prerequisites: NVIDIA Jetson Developer Kit (Nano, Xavier, Orin series) with compatible JetPack versions (e.g., JetPack 4.2+ for Nano, JetPack 5.0+ for Orin). CUDA and TensorRT are core dependencies.
  • Resources: Setup time varies; running pre-trained models is generally lightweight, but training requires significant GPU resources.

Highlighted Details

  • Supports a wide range of DNN architectures including ResNet, VGG, SSD, and custom TAO models.
  • Offers end-to-end capabilities: data collection, PyTorch-based transfer learning on Jetson, and TensorRT deployment.
  • Includes examples for live camera feeds, WebRTC streaming, and ROS/ROS2 integration.
  • Provides performance benchmarks for various segmentation models across different Jetson platforms.
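The live-camera capability noted above follows the library's `videoSource`/`videoOutput` streaming pattern. A hedged sketch, assuming a Jetson with a CSI camera at `csi://0` (V4L2 devices like `/dev/video0` or RTSP/WebRTC URLs also work):

```python
# Sketch: live object detection on a camera stream with jetson-inference.
# Requires Jetson hardware; stream URIs below are example placeholders.
import jetson_inference
import jetson_utils

# Load a pre-trained SSD-Mobilenet detector with a 50% confidence threshold
net = jetson_inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = jetson_utils.videoSource("csi://0")       # or "/dev/video0", "rtsp://..."
display = jetson_utils.videoOutput("display://0")  # or a WebRTC/RTSP output URI

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)  # overlays boxes/labels on img by default
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))
```

The same pattern underpins the ROS/ROS2 nodes and WebRTC demos: swap the input/output URIs and the processing loop stays unchanged.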

Maintenance & Community

The project is maintained by dusty-nv of NVIDIA, though recent activity has slowed (see Health Check below). Community support channels are not explicitly listed in the repository, but NVIDIA's Jetson AI Lab offers additional tutorials and resources.

Licensing & Compatibility

The project appears to be primarily licensed under a permissive BSD 3-Clause license, allowing for commercial use and integration into closed-source projects.

Limitations & Caveats

While supporting a broad range of Jetson hardware, performance benchmarks are often tied to specific JetPack versions and hardware configurations (e.g., JetPack 4.2.1, nvpmodel 0). Some older tutorials (e.g., DIGITS/Caffe) are marked as deprecated.

Health Check
  • Last commit: 9 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 2
  • Star History: 184 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Omar Sanseviero (DevRel at Google DeepMind), and 5 more.

TensorRT-LLM by NVIDIA
  • LLM inference optimization SDK for NVIDIA GPUs
  • Top 0.6% · 11k stars
  • Created 1 year ago, updated 17 hours ago