openvino2tensorflow by PINTO0309

Model conversion tool for AI inference

Created 5 years ago
343 stars

Top 80.5% on SourcePulse

Project Summary

This repository provides a comprehensive tool for converting deep learning models between various formats, primarily focusing on bridging ONNX/OpenVINO IR to TensorFlow and its ecosystem (SavedModel, TFLite, TFJS, etc.). It aims to simplify the complex model conversion process for users, especially those encountering difficulties with standard ONNX-to-TensorFlow tools, and specifically addresses issues with Transpose operations.

How It Works

The core functionality revolves around a Python script that orchestrates a multi-step conversion pipeline. It leverages TensorFlow and OpenVINO libraries alongside auxiliary toolchains such as TensorRT, CoreML, and the EdgeTPU compiler. The process typically converts from PyTorch (NCHW) to ONNX (NCHW), then to OpenVINO IR (NCHW), and finally to TensorFlow formats (NHWC/NCHW), with extensive support for intermediate and final format conversions. A key advantage is its ability to handle complex layer transformations and shape manipulations, offering workarounds for common issues with Transpose and Reshape operations through configuration files.
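The pipeline above can be sketched as a sequence of CLI steps. This is an illustrative sketch, not taken verbatim from the README: the model, file paths, and the converter's flag names (`--model_path`, `--output_saved_model`, `--output_no_quant_float32_tflite`) are assumptions, so consult the project's usage examples for the authoritative invocation.

```shell
# 1. Export a PyTorch model to ONNX (NCHW). MobileNetV2 is a placeholder model.
python3 -c "
import torch, torchvision
m = torchvision.models.mobilenet_v2(pretrained=True).eval()
torch.onnx.export(m, torch.zeros(1, 3, 224, 224), 'model.onnx', opset_version=11)
"

# 2. Convert ONNX to OpenVINO IR (NCHW) with OpenVINO's Model Optimizer.
mo --input_model model.onnx --data_type FP32 --output_dir openvino/FP32

# 3. Convert the OpenVINO IR to TensorFlow formats (NHWC).
#    Flag names below are assumed from the feature list.
openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model \
  --output_no_quant_float32_tflite
```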

Quick Start & Requirements

  • Primary install / run command: Docker is strongly recommended.
    • docker pull ghcr.io/pinto0309/openvino2tensorflow:latest
    • docker run -it --rm -v $(pwd):/home/user/workdir ghcr.io/pinto0309/openvino2tensorflow:latest
  • Non-default prerequisites: Python 3.8+, TensorFlow v2.10.0+, PyTorch v1.12.1+, OpenVINO 2022.1.0, TensorRT 8.4.0+. NVIDIA GPU (CUDA) and Intel iHD GPU (OpenCL) are supported. Docker installation is required for the recommended setup.
  • Estimated setup time: Docker image pull can take time depending on network speed. Building from source or setting up the host environment requires installing numerous dependencies.
  • Links: Official Quick Start, Supported Layers, Usage Examples
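The two Docker commands above can be combined into a typical containerized run. The mount path comes from the README's run command; the converter invocation inside the container is an illustrative assumption (flag names may differ):

```shell
# Start the container with the current directory mounted as the workdir
docker run -it --rm -v $(pwd):/home/user/workdir \
  ghcr.io/pinto0309/openvino2tensorflow:latest

# Inside the container: convert an OpenVINO IR model to a TensorFlow SavedModel.
# --model_path / --model_output_path / --output_saved_model are assumed flag names.
openvino2tensorflow \
  --model_path workdir/openvino/FP32/model.xml \
  --model_output_path workdir/saved_model \
  --output_saved_model
```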

Highlighted Details

  • Supports conversion to SavedModel, TFLite (multiple quantization types), TFJS, TensorRT, CoreML, EdgeTPU, ONNX, and Protocol Buffer (.pb).
  • Offers advanced features like weight replacement via JSON configuration and layer-specific output debugging.
  • Includes specific optimizations and workarounds for common issues like Transpose, Reshape, and Swish/HardSwish operations.
  • Provides Dockerfiles for various hardware acceleration setups (NVIDIA GPU, Intel iGPU) and GUI/camera access.
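The weight replacement feature mentioned above is driven by a JSON configuration file. The sketch below is a guess at the shape of such a file based on the feature description; the field names (`format_version`, `layer_id`, `replace_mode`) and the `--weight_replacement_config` flag are assumptions, so verify against the repository's documentation before use.

```shell
# Write an illustrative weight-replacement config; all field names are assumptions.
cat > replace_weights.json <<'EOF'
{
  "format_version": 2,
  "layers": [
    {
      "layer_id": "123",
      "type": "Const",
      "replace_mode": "direct",
      "values": [0.0, 1.0, 2.0, 3.0]
    }
  ]
}
EOF

# Pass the config to the converter (flag name assumed)
openvino2tensorflow \
  --model_path openvino/FP32/model.xml \
  --output_saved_model \
  --weight_replacement_config replace_weights.json
```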

Maintenance & Community

  • The project is maintained by PINTO0309, though the health metrics below show the last commit was 3 years ago.
  • Links to community resources are not explicitly provided in the README.

Licensing & Compatibility

  • The README does not explicitly state a license. Users should verify licensing for commercial use.

Limitations & Caveats

  • The project is described as "a tool in the making, so there are lots of bugs."
  • Specific operations like 2D/3D/5D Tensor Reshape and Transpose have known issues, requiring manual configuration via JSON for workarounds.
  • Some operations like Conv3D, Unsqueeze, Range, NonMaxSuppression, and GatherElements are marked as Work In Progress (WIP) or have limited support.
Health Check

  • Last Commit: 3 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 0 stars in the last 30 days

Explore Similar Projects


neural-compressor by intel

  • Python library for model compression (quantization, pruning, distillation, NAS)
  • 3k stars; created 5 years ago, updated 10 hours ago

tutorials by onnx

  • ONNX model tutorials and examples
  • 4k stars; created 8 years ago, updated 1 year ago