Include libraries and .rknn models for other Rockchip SoCs (#8649)

* support for other yolov models and config checks

* apply code formatting

* Information about core mask and inference speed

* update rknn postprocess and remove params

* update model selection

* Apply suggestions from code review

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* support rknn on all socs

* apply changes from review and fix post process bug

* apply code formatting

* update tip in object_detectors docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Commit c6208b266b by Marc Altmann, 2023-11-18 14:53:49 +01:00, committed by GitHub
Parent 2da99c2308
5 changed files with 72 additions and 29 deletions


@@ -295,16 +295,16 @@ To verify that the integration is working correctly, start Frigate and observe t
## Rockchip RKNN-Toolkit-Lite2
This detector is only available if one of the following Rockchip SoCs is used:
- RK3566/RK3568
- RK3588/RK3588S
- RV1103/RV1106
- RK3562
These SoCs come with an NPU that significantly speeds up detection.
### Setup
Use a Frigate docker image with the `-rk` suffix and enable privileged mode by adding the `--privileged` flag to your `docker run` command or `privileged: true` to your `docker-compose.yml` file.
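As a sketch, a minimal `docker-compose.yml` for the `-rk` image could look like the following (the image tag, ports, and volume paths are illustrative assumptions, not taken from this commit):

```yaml
services:
  frigate:
    # The "-rk" suffix selects the Rockchip-enabled build (tag is an assumed example)
    image: ghcr.io/blakeblackshear/frigate:stable-rk
    privileged: true  # required for NPU access
    volumes:
      - ./config:/config
    ports:
      - "5000:5000"
```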
### Configuration
@@ -376,3 +376,16 @@ $ cat /sys/kernel/debug/rknpu/load
```yaml
model:
  path: /config/model_cache/rknn/my-rknn-model.rknn
```
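A complete configuration combining the `rknn` detector with a custom model might look like this sketch (the model path is the example value from the docs above):

```yaml
detectors:
  rknn:
    type: rknn

model:
  path: /config/model_cache/rknn/my-rknn-model.rknn
```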
:::tip
If your NPU has multiple cores, you can enable all of them to reduce inference times. Consider enabling all cores if you use a larger model like yolov8l. If your NPU has 3 cores (as on RK3588/RK3588S SoCs), you can enable all 3 cores using:
```yaml
detectors:
  rknn:
    type: rknn
    core_mask: 0b111
```
:::
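The `core_mask` value is a plain bitmask with one bit per NPU core, so `0b111` enables cores 0, 1, and 2. A small illustrative sketch of how such a mask maps to core indices (this helper is hypothetical, not part of Frigate):

```python
def enabled_cores(core_mask: int) -> list[int]:
    """Return the NPU core indices enabled by a core_mask bitmask."""
    return [i for i in range(core_mask.bit_length()) if core_mask & (1 << i)]

# 0b111 enables all three cores of an RK3588/RK3588S NPU
print(enabled_cores(0b111))  # [0, 1, 2]
# 0b001 would restrict inference to core 0 only
print(enabled_cores(0b001))  # [0]
```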


@@ -103,7 +103,7 @@ Frigate supports SBCs with the following Rockchip SoCs:
- RV1103/RV1106
- RK3562
Using the yolov8n model on an Orange Pi 5 Plus with RK3588 SoC, inference speeds vary between 20 and 25 ms.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)