Initial implementation of D-FINE model via ONNX (#16772)
* initial implementation of D-FINE model
* revert docker-compose
* add docs for D-FINE
* remove weird auto-format issue
@@ -10,25 +10,31 @@ title: Object Detectors

Frigate supports multiple different detectors that work on different types of hardware (a minimal configuration sketch follows this overview):

**Most Hardware**

- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8l): The Hailo8 AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.

**AMD**

- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.

**Intel**

- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.

**Nvidia**

- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.

**Rockchip**

- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

**For Testing**

- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.

:::
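Whichever backend you choose, it is selected by the `type` key of a detector entry in the Frigate config. A minimal sketch, assuming a USB-attached Coral (the instance name `coral` is arbitrary; the sections linked above cover the options each `type` accepts):

```yaml
detectors:
  coral:          # arbitrary name for this detector instance
    type: edgetpu # selects the Coral EdgeTPU backend
    device: usb   # assumption: a USB-attached Coral
```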
@@ -147,7 +153,6 @@ model:
  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```

## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
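For example, a minimal OpenVINO detector entry might look like the sketch below; the instance name `ov` and `device: AUTO` (letting OpenVINO pick the device) are illustrative assumptions:

```yaml
detectors:
  ov:
    type: openvino # selects the OpenVINO backend
    device: AUTO   # assumption: let OpenVINO choose the available hardware
```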
@@ -412,7 +417,7 @@ When using docker compose:

```yaml
services:
  frigate:
    ...

    environment:
      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```
@@ -555,6 +560,50 @@ model:

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

#### D-FINE

[D-FINE](https://github.com/Peterande/D-FINE) is the [current state of the art](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=d-fine-redefine-regression-task-in-detrs-as) at the time of writing. ONNX-exported models are supported, but not included by default.

To export as ONNX:

1. Clone https://github.com/Peterande/D-FINE and install all dependencies.
2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
3. Modify line 58 of `tools/deployment/export_onnx.py` and change the batch size to 1: `data = torch.rand(1, 3, 640, 640)`
4. Run the export, making sure you select the right config for your checkpoint.

Example:

```bash
python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
```

:::tip

Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.

Make sure you change the batch size to 1 before exporting.

:::

After placing the exported ONNX model in your config folder, you can use the following configuration:

```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/dfine_m_obj2coco.onnx
  labelmap_path: /labelmap/coco-80.txt
```

Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.

## CPU Detector (not recommended)

The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
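A minimal sketch of such an entry, assuming the instance name `cpu1`; the `num_threads` value is illustrative:

```yaml
detectors:
  cpu1:
    type: cpu       # runs the TensorFlow Lite model on the CPU
    num_threads: 3  # assumption: CPU threads dedicated to this detector
```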
@@ -704,7 +753,7 @@ To convert a onnx model to the rknn format using the [rknn-toolkit2](https://git

This is an example configuration file that you need to adjust to your specific onnx model:

```yaml
soc: ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
quantization: false

output_name: "{input_basename}"