Quick fixes (#17639)
* Use mobile drawer for face selection
* Convert face selection to separate component
* Cleanup dialogs
* Add FAQ for record resolution
* Update image name
* Remove unused
* Cleanup
@@ -129,3 +129,10 @@ Frigate considers the recognition scores across all recognition attempts for

### Can I use other face recognition software like DoubleTake at the same time as the built-in face recognition?

No, using another face recognition service will interfere with Frigate's built-in face recognition. When using Double Take, its sub_label feature must be disabled if the built-in face recognition is also desired.
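For reference, turning on the built-in recognizer is a single config switch. A minimal sketch, assuming the top-level `face_recognition` section available in recent Frigate releases:

```
# Minimal sketch; assumes the face_recognition config section
# from recent Frigate releases.
face_recognition:
  enabled: true
```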

### Does face recognition run on the recording stream?

Face recognition does not run on the recording stream; this would be suboptimal for several reasons (see the config sketch after this list):

1. The latency of accessing recordings means notifications would not include the names of recognized people, because recognition would not complete until afterward.
2. The embedding models run on a fixed input size, so larger images are scaled down to match it anyway.
3. Motion clarity is much more important than extra pixels; over-compression and motion blur are far more detrimental to results than resolution.
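To make the stream split concrete, here is a minimal camera sketch (the camera name and RTSP URLs are placeholders) showing that recognition works from the `detect`-role stream while the `record`-role stream is only written to disk:

```
cameras:
  front_door: # hypothetical camera name
    ffmpeg:
      inputs:
        - path: rtsp://camera-ip:554/main # high-resolution stream, used for recording only
          roles:
            - record
        - path: rtsp://camera-ip:554/sub # lower-resolution stream that detection (and face crops) run on
          roles:
            - detect
```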
@@ -295,8 +295,7 @@ These instructions were originally based on the [Jellyfin documentation](https:/
## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)
A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
-with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 5.0+ use the `stable-tensorrt-jp5`
-tagged image, or if your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
+with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so Frigate will use software encoding on this platform, but the image will still allow hardware decoding and TensorRT object detection.
You will need to use the image with the nvidia container runtime:
@@ -306,7 +305,7 @@ You will need to use the image with the nvidia container runtime:
```
docker run -d \
...
--runtime nvidia
-ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
+ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
```
### Docker Compose - Jetson
@@ -315,7 +314,7 @@ docker run -d \
```
services:
  frigate:
    ...
-    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5
+    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
    runtime: nvidia # Add this
```
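For context, a fuller Jetson service definition might look like the sketch below; the volume paths, port mapping, and `shm_size` are illustrative placeholders rather than values from this diff:

```
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
    runtime: nvidia # exposes the GPU and media engine to the container
    restart: unless-stopped
    shm_size: "128mb" # placeholder; required size depends on camera count and resolution
    volumes:
      - /path/to/your/config:/config # placeholder host paths
      - /path/to/your/storage:/media/frigate
    ports:
      - "8971:8971" # Frigate web UI
```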
@@ -27,7 +27,7 @@ Frigate supports multiple different detectors that work on different types of ha
**Nvidia**
- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
-- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.
+- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp6` Frigate images when a supported ONNX model is configured.
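As a concrete illustration of the detector side, a minimal sketch of selecting the ONNX detector in the Frigate config; the model path is a placeholder and model-specific options are omitted:

```
detectors:
  onnx:
    type: onnx

model:
  path: /config/model_cache/yolo.onnx # placeholder model path
```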
**Rockchip**