Nvidia TensorRT detector (#4718)

* Initial WIP Dockerfile and scripts to add TensorRT support

* Add TensorRT detector

* WIP attempt to install TensorRT 8.5

* Update the detector to use the cuda-python library

* TensorRT CUDA library rework (WIP; does not run yet)

* Fixes from rebasing onto the detector factory

* Fix parsing of the output memory pointer

* Handle TensorRT logs with the Python logger (see the logging sketch after this list)

* Use the non-async interface and convert input data to float32; detection now runs without error (see the synchronous inference sketch after this list)

* Make TensorRT a separate build target from the base Frigate image

* Add script and documentation for generating TRT models

* Add support for TensorRT devcontainer

* Add labelmap to the TRT model script and docs. Clean up old scripts.

* Update detect to normalize the input tensor based on the model input type (see the normalization sketch after this list)

* Add config for selecting the GPU (see the config example after this list). Fix async inference. Update documentation.

* Update some CUDA libraries to clean up a version warning

* Add a CI stage to build the TensorRT tag (workflow diff below)

* Add a note in the docs about the image tag and model support
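
A minimal sketch of the logging bridge mentioned above, assuming a module-level Python logger; it illustrates the idea rather than reproducing the commit's exact code. TensorRT calls log() on the object, and the bridge translates TensorRT severities into logging levels:

    import logging

    import tensorrt as trt

    logger = logging.getLogger(__name__)

    class TrtLogger(trt.ILogger):
        """Forward TensorRT's internal log messages to Python logging."""

        def __init__(self):
            trt.ILogger.__init__(self)

        def log(self, severity, msg):
            logger.log(self.get_severity(severity), msg)

        def get_severity(self, sev):
            # Map TensorRT severities onto Python logging levels.
            if sev == trt.ILogger.Severity.VERBOSE:
                return logging.DEBUG
            if sev == trt.ILogger.Severity.INFO:
                return logging.INFO
            if sev == trt.ILogger.Severity.WARNING:
                return logging.WARNING
            if sev == trt.ILogger.Severity.ERROR:
                return logging.ERROR
            # INTERNAL_ERROR and anything unexpected
            return logging.CRITICAL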
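The non-async path can be sketched with the cuda-python driver API; the buffer names (d_input, d_output, h_output) and the function shape are illustrative assumptions, not the detector's actual code:

    import numpy as np
    from cuda import cuda

    def infer_sync(context, bindings, d_input, d_output, h_output, frame):
        # The engine expects float32 input; camera frames arrive as uint8.
        tensor_input = np.ascontiguousarray(frame.astype(np.float32))

        # Host -> device copy, synchronous execution, device -> host copy.
        cuda.cuMemcpyHtoD(d_input, tensor_input.ctypes.data, tensor_input.nbytes)
        context.execute_v2(bindings)  # the non-async TensorRT interface
        cuda.cuMemcpyDtoH(h_output.ctypes.data, d_output, h_output.nbytes)
        return h_output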
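For the dtype-based normalization, a hedged sketch (the exact scaling Frigate applies may differ; this only shows branching on the model's declared input tensor type):

    import numpy as np

    def normalize(frame, input_dtype):
        # Quantized engines consume raw uint8 pixels; float engines get
        # the same pixels converted to float32.
        if input_dtype == np.uint8:
            return np.ascontiguousarray(frame)
        return np.ascontiguousarray(frame.astype(np.float32))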
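GPU selection is exposed in the detector config, something like the following, where device is the index of the GPU to use (0 on a single-GPU host):

    detectors:
      tensorrt:
        type: tensorrt
        device: 0  # index of the GPU to run inference on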

Author:    Nate Meyer
Date:      2022-12-30 11:53:17 -05:00
Committed: GitHub
Parent:    e3ec292528
Commit:    3f05f74ecb
9 changed files with 515 additions and 16 deletions


@@ -36,7 +36,19 @@ jobs:
           context: .
           push: true
           platforms: linux/amd64,linux/arm64,linux/arm/v7
+          target: frigate
           tags: |
             ghcr.io/blakeblackshear/frigate:${{ github.ref_name }}-${{ env.SHORT_SHA }}
           cache-from: type=gha
           cache-to: type=gha,mode=max
+      - name: Build and push TensorRT
+        uses: docker/build-push-action@v3
+        with:
+          context: .
+          push: true
+          platforms: linux/amd64
+          target: frigate-tensorrt
+          tags: |
+            ghcr.io/blakeblackshear/frigate:${{ github.ref_name }}-${{ env.SHORT_SHA }}-tensorrt
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
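
With this step in place, CI publishes a second, amd64-only image alongside the base one: the same tag with a -tensorrt suffix (per the tags line above, ghcr.io/blakeblackshear/frigate:<ref>-<sha>-tensorrt). This is the image tag the docs note tells TensorRT users to run.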