Compare commits

...

36 Commits

Author SHA1 Message Date
dependabot[bot]
b48b749c4e Bump @docusaurus/plugin-content-docs from 3.8.1 to 3.9.1 in /docs
Bumps [@docusaurus/plugin-content-docs](https://github.com/facebook/docusaurus/tree/HEAD/packages/docusaurus-plugin-content-docs) from 3.8.1 to 3.9.1.
- [Release notes](https://github.com/facebook/docusaurus/releases)
- [Changelog](https://github.com/facebook/docusaurus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/facebook/docusaurus/commits/v3.9.1/packages/docusaurus-plugin-content-docs)

---
updated-dependencies:
- dependency-name: "@docusaurus/plugin-content-docs"
  dependency-version: 3.9.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-29 14:37:55 +00:00
Nicolas Mowen
9fdce80729 Handle case when no classification model exists (#20257) 2025-09-28 16:03:44 -05:00
Josh Hawkins
12f8c3feac Watchdog enhancements (#20237)
* refactor get_video_properties and use json output from ffprobe

* add zmq topic

* publish valid segment data in recording maintainer

* check for valid video data

- restart separate record ffmpeg process if no video data has been received in 120s
- refactor datetime import

* listen to correct topic in embeddings maintainer

* refactor to move get_latest_segment_datetime logic to recordings maintainer

* debug logging

* cleanup
2025-09-28 10:52:14 -06:00
Josh Hawkins
b6552987b0 Fixes (#20254)
* fix api async/await functions

* fix synaptics detector from throwing error when unused

* clean up
2025-09-28 07:08:52 -06:00
Nicolas Mowen
c207009d8a Refactor AMD GPU support (#20239)
* Update ROCm to 7.0.1

* Update ONNXRuntime

* Add back in

* Get basic detection working

* Use env vars

* Handle complex migraphx models

* Enable model caching

* Remove unused

* Add tip to docs
2025-09-27 14:43:11 -05:00
Nicolas Mowen
e6cbc93703 More stationary cleanup (#20229)
* Always return false for active objects

* Cleanup
2025-09-26 07:23:29 -06:00
GaryHuang-ASUS
b8b07ee6e1 [Init] Initial commit for Synaptics SL1680 NPU (#19680)
* [Init] Initial commit for Synaptics SL1680 NPU

* add a rough detector which is testing with yolov8 tflite model.

* [Feat] Add dependencies installation in docker build

- Add runtime library and wheels installation in main/Dockerfile
- Add model.synap(default model, transfer from mobilenet_224full80) in docker/synap1680

* [Update] Remove dependencies installation from main Dockerfile

- remove deps installation from Dockerfile
- add dependencies installation and split wheels, deps stage in synap1680 Dockerfile

* Refactor synap detector to more closely match other implementations

* [Update] Add model path configuration check

* [Update] update ModelType to ssd

* [Update] Remove unuse script

- install_deps.sh is already executed in the deps download stage
- Dockerfile.toolchain is only used for testing extraction of runtime libraries from the Synaptics toolchain

* [Update] update Synaptics SL1680 setup description

* [Update] remove install_synap1680

- The deps download and installation already exist in synap1680

* [Fix] update document content

* [Update] Update detector from synap1680 to synaptics

This update is in order to make the synaptics SL-series NPU detector more general.

- Fix detector `os` module not import bug
- Update detector type `synap1680` to `synaptics`
- Update document description `SL1680` to `Synaptics` only
- Update docker build content `synap1680` to `synaptics`

* [Fix] Update configuration document

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* [Update] Update document content and detector default layout

- Update object_detectors document
- Update detector's default layout
- Update default model name

* [Update] Update object detector document content

* [Fix] Fix InputTensorEnum not defined error

- import InputTensorEnum from detector_config

* [Update] Update detector script coding format

* [Update] Update synaptics detector coding format

* [Update] Add synaptics ci workflow

* [Update] update synaptics runtime libs download path

- Fork Synaptics astra sdk repo and put the runtime lib package on it
- Frigate team can update this download path later

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-09-26 07:07:12 -05:00
Nicolas Mowen
082867447b Stationary bug fixes (#20225)
* Correctly only enable for car

* Fix limiting stationary objects history
2025-09-26 07:03:59 -05:00
Nicolas Mowen
8b293449f9 Improve review summary (#20216)
* Add debug logging for review summaries report

* Improve debug logging

* Improve review report prompt

* Cleanup

* Add date to report
2025-09-25 21:05:22 -05:00
Nicolas Mowen
2f209b2cf4 Implement stationary car classifier to improve parked car management (#20206)
* Implement stationary car classifier to base stationary state on visual changes and not just bounding box stability

* Cleanup

* Fix mypy

* Move to new file and add config to disable if needed

* Cleanup

* Undo
2025-09-25 10:18:45 -05:00
Nicolas Mowen
9a22404015 Use devcontainer build to run tests (#20212)
* Use devcontainer build to run tests

* Make ignored github changes more restrictive
2025-09-25 09:59:18 -05:00
Nicolas Mowen
2c4a043dbb Update go2rtc to 1.9.10 (#20202) 2025-09-25 06:15:04 -05:00
Nicolas Mowen
b23355da53 Update apple silicon docs (#20204) 2025-09-25 06:12:35 -05:00
Nicolas Mowen
90db2d57b3 Update Ollama docs (#20201) 2025-09-24 08:17:20 -05:00
Blake Blackshear
652fdc6a38 Merge remote-tracking branch 'origin/master' into dev 2025-09-24 06:57:50 -05:00
Nicolas Mowen
7e2f5a3017 Improve 640x640 model detection of small objects (#20190)
* Allow larger models to have smaller regions

* remove unnecessary hailo resize

* Update benchmark

* Fix table

* Update nvidia specs
2025-09-23 15:49:54 -05:00
Nicolas Mowen
2f99a17e64 Add docs for classification models (#20188) 2025-09-23 08:29:16 -06:00
Nicolas Mowen
2bc92cce81 Update model explanation for genai (#20186) 2025-09-23 07:30:42 -06:00
Josh Hawkins
7f7eefef7f Live view improvements (#20177) 2025-09-22 21:21:51 -05:00
Josh Hawkins
4914029a50 Add average_estimated_speed to mqtt docs (#20101) 2025-09-16 11:03:36 -06:00
GuoQing Liu
bafdab9d67 feat: add robots.txt (#20093) 2025-09-16 06:14:27 -06:00
GuoQing Liu
b08db4913f feat: add github mirror download endpoint (#20007)
* feat: add github mirror download endpoint

* fix: fix face_embedding endpoint line

* fix: fix github raw endpoint

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-14 06:51:56 -06:00
Nicolas Mowen
7c7ff49b90 Improve d-fine model export docs (#20020) 2025-09-11 06:17:08 -05:00
Nicolas Mowen
037c4d1cc0 Don't block UI while pulling the stream live info (#19998) 2025-09-09 17:53:26 -05:00
laviddichterman
1613499218 Update object_detectors.md to document configuring image size in YOLO 9 (#19951)
* Update object_detectors.md for v16

* add configurability to IMG_SIZE for YOLOv9 export
* remove TensorRT detector as it's no longer supported in v16

* Revert removing NVIDIA TensorRT detector docs

Added documentation for NVidia TensorRT Detector, including model generation, configuration parameters, and example usage.

* Dumb copy/paste

* Enhance YOLOv9 export instructions in documentation

Updated YOLOv9 export command to include IMG_SIZE parameter and clarified model size options.
2025-09-09 14:27:30 -06:00
Nicolas Mowen
205fdf3ae3 Fixes (#19984)
* Always handle RKNN as NHWC in Frigate+ model loading

* Correct Intel stats

* Update inference time docs

* Update version

* Adjust inference speeds
2025-09-09 06:17:56 -06:00
Nicolas Mowen
f46f8a2160 More inference speed updates (#19974) 2025-09-08 10:39:33 -06:00
Josh Hawkins
880902cdd7 Add specific notes for frigate+ models in object detector docs (#19971) 2025-09-08 09:29:03 -05:00
Nicolas Mowen
c5ed95ec52 More inference speed updates (#19947)
* More inference speed updates

* Update hardware.md

* Update hardware.md

* Update index.md

* More inference speeds

* Update home-assistant.md

* Update object_detectors.md

* Update first_model.md
2025-09-08 07:43:04 -05:00
Josh Hawkins
751de141d5 Fix model selection type in Frigate+ settings pane (#19952)
* model type does not need to match config model type

As long as a model is supported by a detector, it should be available in the list

* fix missing semicolon

the web linter was complaining
2025-09-07 19:19:40 -06:00
Nicolas Mowen
0eb441fe50 Update inference times for yolov9 (#19946) 2025-09-07 14:59:48 -05:00
Josh Hawkins
7566aecb0b Add note about Apple Silicon support in 0.17 (#19944) 2025-09-07 14:12:49 -05:00
Blake Blackshear
60714a733e update docs for Frigate+ yolov9 (#19938)
* update docs for Frigate+ yolov9

* footnote memryx suport

* tweaks
2025-09-07 06:01:10 -05:00
Josh Hawkins
d7f7cd7be1 best thumbnail endpoint should pass correct extension param (#19930) 2025-09-05 06:33:57 -05:00
GuoQing Liu
6591210050 docs: fix reolink camera table display (#19926) 2025-09-05 06:01:26 -05:00
Nicolas Mowen
7e7b3288a8 Update live FAQ for camera distortion (#19907)
* Add item to FAQ about stream distortion

* Update updating docs

* Update link
2025-09-04 07:44:33 -05:00
61 changed files with 2758 additions and 676 deletions

View File

@@ -173,6 +173,31 @@ jobs:
          set: |
            rk.tags=${{ steps.setup.outputs.image-name }}-rk
            *.cache-from=type=gha
  synaptics_build:
    runs-on: ubuntu-22.04-arm
    name: Synaptics Build
    needs:
      - arm64_build
    steps:
      - name: Check out code
        uses: actions/checkout@v5
        with:
          persist-credentials: false
      - name: Set up QEMU and Buildx
        id: setup
        uses: ./.github/actions/setup
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Synaptics build
        uses: docker/bake-action@v6
        with:
          source: .
          push: true
          targets: synaptics
          files: docker/synaptics/synaptics.hcl
          set: |
            synaptics.tags=${{ steps.setup.outputs.image-name }}-synaptics
            *.cache-from=type=gha
  # The majority of users running arm64 are rpi users, so the rpi
  # build should be the primary arm64 image
  assemble_default_build:

View File

@@ -4,38 +4,14 @@ on:
  pull_request:
    paths-ignore:
      - "docs/**"
      - ".github/**"
      - ".github/*.yml"
      - ".github/DISCUSSION_TEMPLATE/**"
      - ".github/ISSUE_TEMPLATE/**"
env:
  DEFAULT_PYTHON: 3.11
jobs:
  build_devcontainer:
    runs-on: ubuntu-latest
    name: Build Devcontainer
    # The Dockerfile contains features that requires buildkit, and since the
    # devcontainer cli uses docker-compose to build the image, the only way to
    # ensure docker-compose uses buildkit is to explicitly enable it.
    env:
      DOCKER_BUILDKIT: "1"
    steps:
      - uses: actions/checkout@v5
        with:
          persist-credentials: false
      - uses: actions/setup-node@master
        with:
          node-version: 20.x
      - name: Install devcontainer cli
        run: npm install --global @devcontainers/cli
      - name: Build devcontainer
        run: devcontainer build --workspace-folder .
      # It would be nice to also test the following commands, but for some
      # reason they don't work even though in VS Code devcontainer works.
      # - name: Start devcontainer
      #   run: devcontainer up --workspace-folder .
      # - name: Run devcontainer scripts
      #   run: devcontainer run-user-commands --workspace-folder .
  web_lint:
    name: Web - Lint
    runs-on: ubuntu-latest
@@ -102,13 +78,18 @@ jobs:
        uses: actions/checkout@v5
        with:
          persist-credentials: false
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build
        run: make debug
      - name: Run mypy
        run: docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
      - name: Run tests
        run: docker run --rm --entrypoint=python3 frigate:latest -u -m unittest
      - uses: actions/setup-node@master
        with:
          node-version: 20.x
      - name: Install devcontainer cli
        run: npm install --global @devcontainers/cli
      - name: Build devcontainer
        env:
          DOCKER_BUILDKIT: "1"
        run: devcontainer build --workspace-folder .
      - name: Start devcontainer
        run: devcontainer up --workspace-folder .
      - name: Run mypy in devcontainer
        run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m mypy --config-file frigate/mypy.ini frigate"
      - name: Run unit tests in devcontainer
        run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m unittest"

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.9/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH

View File

@@ -15,14 +15,14 @@ ARG AMDGPU
RUN apt update -qq && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/6.4.1/ubuntu/jammy/amdgpu-install_6.4.60401-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.0.1/ubuntu/jammy/amdgpu-install_7.0.1.70001-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -qq -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* librocroller.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
@@ -64,11 +64,10 @@ COPY --from=rocm /opt/rocm-dist/ /
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
ENV TF_ROCM_USE_IMMEDIATE_MODE=1
# avoid kernel crashes
ENV HIP_FORCE_DEV_KERNARG=1
ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
COPY --from=rocm-dist / /

View File

@@ -1 +1 @@
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.4.1/onnxruntime_rocm-1.21.1-cp311-cp311-linux_x86_64.whl
onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.0.1/onnxruntime_migraphx-1.23.0-cp311-cp311-linux_x86_64.whl

View File

@@ -2,7 +2,7 @@ variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "6.4.1"
default = "7.0.1"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""

View File

@@ -0,0 +1,28 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS synap1680-wheels
ARG TARGETARCH
# Install dependencies
RUN wget -qO- "https://github.com/GaryHuang-ASUS/synaptics_astra_sdk/releases/download/v1.5.0/Synaptics-SL1680-v1.5.0-rt.tar" | tar -C / -xzf -
RUN wget -P /wheels/ "https://github.com/synaptics-synap/synap-python/releases/download/v0.0.4-preview/synap_python-0.0.4-cp311-cp311-manylinux_2_35_aarch64.whl"
FROM deps AS synap1680-deps
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=synap1680-wheels,source=/wheels,target=/deps/synap-wheels \
pip3 install --no-deps -U /deps/synap-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -0,0 +1,27 @@
target wheels {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "wheels"
}

target deps {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "deps"
}

target rootfs {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "rootfs"
}

target synaptics {
  dockerfile = "docker/synaptics/Dockerfile"
  contexts = {
    wheels = "target:wheels",
    deps = "target:deps",
    rootfs = "target:rootfs"
  }
  platforms = ["linux/arm64"]
}

View File

@@ -0,0 +1,15 @@
BOARDS += synaptics
local-synaptics: version
	docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
		--set synaptics.tags=frigate:latest-synaptics \
		--load

build-synaptics: version
	docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
		--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics

push-synaptics: build-synaptics
	docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
		--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics \
		--push

View File

@@ -177,9 +177,11 @@ listen [::]:5000 ipv6only=off;
By default, Frigate runs at the root path (`/`). However, some setups require running Frigate under a custom path prefix (e.g. `/frigate`), especially when Frigate is located behind a reverse proxy that requires path-based routing.
### Set Base Path via HTTP Header
The preferred way to configure the base path is through the `X-Ingress-Path` HTTP header, which needs to be set to the desired base path in an upstream reverse proxy.
For example, in Nginx:
```
location /frigate {
    proxy_set_header X-Ingress-Path /frigate;
@@ -188,9 +190,11 @@ location /frigate {
```
### Set Base Path via Environment Variable
When it is not feasible to set the base path via an HTTP header, it can also be set via the `FRIGATE_BASE_PATH` environment variable in the Docker Compose file.
For example:
```
services:
frigate:
@@ -200,6 +204,7 @@ services:
```
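Put together, a minimal Compose sketch could look like the following; the image tag and path value are illustrative:
```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # Serve the Frigate UI and API under /frigate instead of the root path
      FRIGATE_BASE_PATH: /frigate
```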
This can be used, for example, to access Frigate via a Tailscale agent (HTTPS) by simply forwarding all requests to the base path (HTTP):
```
tailscale serve --https=443 --bg --set-path /frigate http://localhost:5000/frigate
```
@@ -218,7 +223,7 @@ To do this:
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.9, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.10, but there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -147,7 +147,7 @@ WEB Digest Algorithm - MD5
Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
|-------------------|---------------------------|-----------------------------------|-------------------------------------------------------------------------|
| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
| 5MP or lower | All | http-flv | Stream is h264 |
| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
@@ -231,7 +231,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.
@@ -250,6 +250,7 @@ TP-Link VIGI cameras need some adjustments to the main stream settings on the ca
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
- Preparation outside of Frigate:
- Get USB camera path. Run `v4l2-ctl --list-devices` to get a listing of locally-connected cameras available. (You may need to install `v4l-utils` in a way appropriate for your Linux distribution). In the sample configuration below, we use `video=0` to correlate with a detected device path of `/dev/video0`
- Get USB camera formats & resolutions. Run `ffmpeg -f v4l2 -list_formats all -i /dev/video0` to get an idea of what formats and resolutions the USB Camera supports. In the sample configuration below, we use a width of 1024 and height of 576 in the stream and detection settings based on what was reported back.
- If using Frigate in a container (e.g. Docker on TrueNAS), ensure you have USB Passthrough support enabled, along with a specific Host Device (`/dev/video0`) + Container Device (`/dev/video0`) listed.
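As a rough sketch, a go2rtc stream entry for such a USB camera could look like the example below, using the `video=0` device index and the 1024x576 size found in the steps above. The exact `ffmpeg:device` source string is an assumption here and should be confirmed against the go2rtc FFmpeg Device documentation linked above:
```yaml
go2rtc:
  streams:
    usb_camera:
      # device index 0 maps to /dev/video0; size matches the values reported by ffmpeg above
      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
```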
@@ -277,5 +278,3 @@ cameras:
width: 1024
height: 576
```

View File

@@ -0,0 +1,73 @@
---
id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model briefly uses a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
### Sub label vs Attribute
- **Sub label**:
- Applied to the object's `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Example use cases
### Sub label
- **Known pet vs unknown**: For `dog` objects, set the sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
### Attributes
- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```
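For an attribute-style model (such as the helmet example above), the same structure applies; the model name and threshold below are illustrative:
```yaml
classification:
  custom:
    helmet:
      threshold: 0.8
      object_config:
        objects: [person] # object labels to classify
        classification_type: attribute
```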
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.

View File

@@ -0,0 +1,52 @@
---
id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model briefly uses a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
## Example use cases
- **Door state**: Detect if a garage or front door is open vs closed.
- **Gate state**: Track if a driveway gate is open or closed.
- **Trash day**: Bins at curb vs no bins present.
- **Pool cover**: Cover on vs off.
## Configuration
State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.
```yaml
classification:
  custom:
    front_door:
      threshold: 0.8
      state_config:
        motion: true # run when motion overlaps the crop
        interval: 10 # also run every N seconds (optional)
        cameras:
          front:
            crop: [0, 180, 220, 400]
```
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
// TODO add this section once UI is implemented. Explain process of selecting a crop.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.

View File

@@ -27,13 +27,26 @@ Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_
You must use a vision-capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; Ollama will try to download it, but this may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. The model specified in Frigate's config must match the downloaded model tag.
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and understanding of situations, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::tip
If you are trying to use a single model for both Frigate and Home Assistant, it will need to support vision and tool calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
:::
The following models are recommended:
| Model | Size | Notes |
| ----------------- | ------ | ----------------------------------------------------------- |
| `gemma3:4b` | 3.3 GB | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5vl:3b` | 3.2 GB | Fast but capable model with good vision comprehension |
| `llava-phi3:3.8b` | 2.9 GB | Lightweight and fast model with vision comprehension |
| Model | Notes |
| ----------------- | ----------------------------------------------------------- |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5vl` | Fast but capable model with good vision comprehension |
| `llava-phi3` | Lightweight and fast model with vision comprehension |
:::note
@@ -50,6 +63,8 @@ genai:
  model: minicpm-v:8b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
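For reference, a fuller sketch of the Ollama provider block might look like the following; the `base_url` shown is an assumption for a local Ollama install and should point at your own server:
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: minicpm-v:8b
  provider_options:
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```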
## Google Gemini
@@ -124,4 +139,4 @@ genai:
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
```

View File

@@ -427,3 +427,29 @@ cameras:
```
:::
## Synaptics
Hardware accelerated video de-/encoding is supported on Synaptics SL-series SoCs.
### Prerequisites
Make sure to follow the [Synaptics specific installation instructions](/frigate/installation#synaptics).
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
ffmpeg:
  hwaccel_args: -c:v h264_v4l2m2m
  input_args: preset-rtsp-restream
  output_args:
    record: preset-record-generic-audio-aac
```
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your input stream is H.264 encoded. For example, if your camera streams with H.264 encoding, your SoC must be able to decode and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::

View File

@@ -176,7 +176,7 @@ For devices that support two way talk, Frigate can be configured to use the feat
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` *usually* supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.
### Streaming options on camera group dashboards
@@ -230,7 +230,26 @@ Note that disabling a camera through the config file (`enabled: False`) removes
If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the [recommendations above](#camera_settings_recommendations).
Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. These errors may include:
- Network issues (e.g., MSE or WebRTC network connection problems).
- Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers).
- Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg.
- Browser compatibility problems (e.g., iOS Safari limitations with MSE).
To view browser console logs:
1. Open the Frigate Live View in your browser.
2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab).
3. Reproduce the error (e.g., load a problematic stream or simulate network issues).
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured).
- For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**
@@ -253,3 +272,7 @@ Note that disabling a camera through the config file (`enabled: False`) removes
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.
7. **My camera streams have lots of visual artifacts / distortion.**
Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.

View File

@@ -35,6 +35,7 @@ Frigate supports multiple different detectors that work on different types of ha
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
**Nvidia Jetson**
- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.
@@ -42,6 +43,10 @@ Frigate supports multiple different detectors that work on different types of ha
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**Synaptics**
- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
@@ -331,6 +336,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -442,12 +453,13 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
When Frigate is started with the following config it will connect to the detector client and transfer the model automatically:
```yaml
detectors:
  onnx:
    type: onnx
  apple-silicon:
    type: zmq
    endpoint: tcp://host.docker.internal:5555
model:
  model_type: yolo-generic
@@ -543,6 +555,17 @@ $ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/
### ROCm Supported Models
:::tip
The AMD GPU kernel is known to be problematic, especially when converting models to mxr format. The recommended approach is:
1. Disable object detection in the config.
2. Start Frigate with the onnx detector configured; the main object detection model will be converted to mxr format and cached in the config directory.
3. Once this is finished, as indicated by the logs, enable object detection in the UI and confirm that it is working correctly.
4. Re-enable object detection in the config.
:::
See [ONNX supported models](#supported-models) for supported models; there are some caveats:
- D-FINE models are not supported
@@ -592,6 +615,12 @@ There is no default model provided, the following formats are supported:
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
:::warning
If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -619,6 +648,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -757,19 +792,19 @@ To verify that the integration is working correctly, start Frigate and observe t
# Community Supported Detectors
## MemryX MX3
This detector is available for use with the MemryX MX3 accelerator M.2 module. Frigate supports the MX3 on compatible hardware platforms, providing efficient and high-performance object detection.
See the [installation docs](../frigate/installation.md#memryx-mx3) for information on configuring the MemryX hardware.
To configure a MemryX detector, simply set the `type` attribute to `memryx` and follow the configuration guide below.
### Configuration
To configure the MemryX detector, use the following example configuration:
#### Single PCIe MemryX MX3
```yaml
detectors:
@@ -795,7 +830,7 @@ detectors:
device: PCIe:2
```
### Supported Models
MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the container at `/memryx_models/model_folder/`.
@@ -809,9 +844,9 @@ The input size for **YOLO-NAS** can be set to either **320x320** (default) or **
- The default size of **320x320** is optimized for lower CPU usage and faster inference times.
##### Configuration
Below is the recommended configuration for using the **YOLO-NAS** (small) model with the MemryX detector:
```yaml
detectors:
@@ -833,13 +868,13 @@ model:
# └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOv9
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
##### Configuration
Below is the recommended configuration for using the **YOLOv9** (small) model with the MemryX detector:
```yaml
detectors:
@@ -848,7 +883,7 @@ detectors:
device: PCIe:0
model:
model_type: yolo-generic
width: 320 # (Can be set to 640 for higher resolution)
height: 320 # (Can be set to 640 for higher resolution)
input_tensor: nchw
@@ -861,13 +896,13 @@ model:
# └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOX
The model is sourced from the [OpenCV Model Zoo](https://github.com/opencv/opencv_zoo) and precompiled to DFP.
##### Configuration
Below is the recommended configuration for using the **YOLOX** (small) model with the MemryX detector:
```yaml
detectors:
@@ -888,13 +923,13 @@ model:
# ├── yolox.dfp (a file ending with .dfp)
```
#### SSDLite MobileNet v2
The model is sourced from the [OpenMMLab Model Zoo](https://mmdeploy-oss.openmmlab.com/model/mmdet-det/ssdlite-e8679f.onnx) and has been converted to DFP.
##### Configuration
Below is the recommended configuration for using the **SSDLite MobileNet v2** model with the MemryX detector:
```yaml
detectors:
@@ -1029,6 +1064,41 @@ model:
height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
```
## Synaptics
Hardware accelerated object detection is supported on the following SoCs:
- SL1680
This implementation uses the [Synaptics model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion), version v3.1.0.
This implementation is based on sdk `v1.5.0`.
See the [installation docs](../frigate/installation.md#synaptics) for information on configuring the SL-series NPU hardware.
### Configuration
When configuring the Synap detector, you must specify the model as a local **path**.
#### SSD Mobilenet
A synap model is provided in the container at `/synaptics/mobilenet.synap` and is used by this detector type by default. The model comes from the [synap-release GitHub](https://github.com/synaptics-astra/synap-release/tree/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80).
Use the model configuration shown below when using the synaptics detector with the default synap model:
```yaml
detectors: # required
  synap_npu: # required
    type: synaptics # required
model: # required
  path: /synaptics/mobilenet.synap # required
  width: 224 # required
  height: 224 # required
  tensor_format: nhwc # default value (optional. If you change the model, it is required)
  labelmap_path: /labelmap/coco-80.txt # required
```
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:
@@ -1303,26 +1373,29 @@ Here are some tips for getting different model types
### Downloading D-FINE Model
To export as ONNX:
1. Clone: https://github.com/Peterande/D-FINE and install all dependencies.
2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
3. Modify line 58 of `tools/deployment/export_onnx.py` and change batch size to 1: `data = torch.rand(1, 3, 640, 640)`
4. Run the export, making sure you select the right config, for your checkpoint.
Example:
D-FINE can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=s` in the first line to `s`, `m`, or `l` size.
```sh
docker build . --build-arg MODEL_SIZE=s --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim
# Create output directory and download checkpoint
RUN mkdir -p output
ARG MODEL_SIZE
RUN wget https://github.com/Peterande/storage/releases/download/dfinev1.0/dfine_${MODEL_SIZE}_obj2coco.pth -O output/dfine_${MODEL_SIZE}_obj2coco.pth
# Modify line 58 of export_onnx.py to change batch size to 1
RUN sed -i '58s/data = torch.rand(.*)/data = torch.rand(1, 3, 640, 640)/' tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_${MODEL_SIZE}_obj2coco.yml -r output/dfine_${MODEL_SIZE}_obj2coco.pth
FROM scratch
ARG MODEL_SIZE
COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL_SIZE}.onnx
EOF
```
python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
```
:::tip
Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.
Make sure you change the batch size to 1 before exporting.
:::
### Download RF-DETR Model
@@ -1374,23 +1447,25 @@ python3 yolo_to_onnx.py -m yolov7-320
#### YOLOv9
YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available sizes are `t`, `s`, `m`, `c`, and `e`).
YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).
```sh
docker build . --build-arg MODEL_SIZE=t --output . -f- <<'EOF'
docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz 320 --simplify --include onnx
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
FROM scratch
ARG MODEL_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /
ARG IMG_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
EOF
```

View File

@@ -287,6 +287,9 @@ detect:
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Stationary classifier that uses visual characteristics to determine if an object
    # is stationary even if the box changes enough to be considered motion (default: shown below).
    classifier: True
    # Optional: Frequency for confirming stationary objects (default: same as threshold)
    # When set to 1, object detection will run to confirm the object still exists on every frame.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
@@ -697,7 +700,7 @@ audio_transcription:
language: en
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.9)
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.9) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration) for more advanced configurations and features.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` section of the config; see the [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
:::note
@@ -156,7 +156,7 @@ See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-22
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`
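A rough sketch of such an exec source is shown below; the input URL and ffmpeg arguments are illustrative:
```yaml
go2rtc:
  streams:
    custom_stream:
      # re-publish an RTSP source through a custom ffmpeg command
      - "exec:ffmpeg -hide_banner -i rtsp://192.168.1.10:554/stream -c copy -f rtsp {{output}}"
```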

View File

@@ -56,6 +56,7 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
@@ -94,8 +95,21 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
**Synaptics**
- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.
:::
### Synaptics
- **Synaptics**: Default model is **mobilenet**
| Name | Synaptics SL1680 Inference Time |
| ---------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@@ -110,6 +124,7 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
| Name | Hailo8 Inference Time | Hailo8L Inference Time |
| ---------------- | ---------------------- | ----------------------- |
| ssd mobilenet v1 | ~ 6 ms | ~ 10 ms |
| yolov9-tiny | | 320: 18ms |
| yolov6n | ~ 7 ms | ~ 11 ms |
### Google Coral TPU
@@ -142,17 +157,19 @@ More information is available [in the detector docs](/configuration/object_detec
Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
| Name | MobileNetV2 Inference Time | YOLOv9 | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
### TensorRT - Nvidia GPU
@@ -160,7 +177,7 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA
#### Minimum Hardware Support
12.x series of CUDA libraries are used which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
@@ -180,12 +197,13 @@ Inference speeds will vary greatly depending on the GPU and the model used.
✅ - Accelerated with CUDA Graphs
❌ - Not accelerated with CUDA Graphs
| Name | ✅ YOLOv9 Inference Time | ✅ RF-DETR Inference Time | ❌ YOLO-NAS Inference Time
| --------------- | ------------------------ | ------------------------- | -------------------------- |
| RTX 3050 | t-320: 8 ms s-320: 10 ms | Nano-320: ~ 12 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 | t-320: 6 ms s-320: 8 ms | Nano-320: ~ 9 ms | 320: ~ 8 ms 640: ~ 14 ms |
| RTX A4000 | | | 320: ~ 15 ms |
| Tesla P40 | | | 320: ~ 105 ms |
| Name | ✅ YOLOv9 Inference Time | ✅ RF-DETR Inference Time | ❌ YOLO-NAS Inference Time |
| --------- | ------------------------------------- | ------------------------- | -------------------------- |
| GTX 1070 | s-320: 16 ms | | 320: 14 ms |
| RTX 3050 | t-320: 8 ms s-320: 10 ms s-640: 28 ms | Nano-320: ~ 12 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 | t-320: 6 ms s-320: 8 ms s-640: 25 ms | Nano-320: ~ 9 ms | 320: ~ 8 ms 640: ~ 14 ms |
| RTX A4000 | | | 320: ~ 15 ms |
| Tesla P40 | | | 320: ~ 105 ms |
### Apple Silicon
@@ -197,18 +215,20 @@ Apple Silicon can not run within a container, so a ZMQ proxy is utilized to comm
:::
| Name | YOLOv9 Inference Time |
| --------- | ---------------------- |
| M3 Pro | t-320: 6 ms s-320: 8ms |
| M1 | s-320: 9ms |
| Name | YOLOv9 Inference Time |
| ------ | ------------------------------------ |
| M4 | s-320: 10 ms |
| M3 Pro | t-320: 6 ms s-320: 8 ms s-640: 20 ms |
| M1     | s-320: 9 ms                          |
### ROCm - AMD GPU
With the [ROCm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | ------------------------- | ------------------------- |
| AMD 780M | t-320: 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------------- | ------------------------- |
| AMD 780M | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
## Community Supported Detectors
@@ -227,14 +247,14 @@ Detailed information is available [in the detector docs](/configuration/object_d
The MX3 is a pipelined architecture, so the maximum frames per second it supports (and thus the number of cameras it can handle) cannot be calculated as `1/latency` (1/"Inference Time") and is measured separately. When estimating how many camera streams you may support with your configuration, use the **MX3 Total FPS** column to approximate the detector's limit, not the Inference Time.
| Model | Input Size | MX3 Inference Time | MX3 Total FPS |
|----------------------|------------|--------------------|---------------|
| -------------------- | ---------- | ------------------ | ------------- |
| YOLO-NAS-Small | 320 | ~ 9 ms | ~ 378 |
| YOLO-NAS-Small | 640 | ~ 21 ms | ~ 138 |
| YOLOv9s | 320 | ~ 16 ms | ~ 382 |
| YOLOv9s | 640 | ~ 41 ms | ~ 110 |
| YOLOX-Small | 640 | ~ 16 ms | ~ 263 |
| SSDlite MobileNet v2 | 320 | ~ 5 ms | ~ 1056 |
Inference speeds may vary depending on the host platform. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which may limit total FPS.
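As a rough worked example (the per-camera detect rate is an assumption, not a measurement): for YOLOv9s at 320 the table above lists ~16 ms inference time, so `1/latency` would suggest only ~62 FPS, yet the measured pipelined throughput is ~382 total FPS. If each camera were configured for 5 detect FPS, that throughput would correspond to roughly 382 / 5 ≈ 75 camera streams before the detector itself becomes the bottleneck.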
### Nvidia Jetson

View File

@@ -256,6 +256,37 @@ or add these options to your `docker run` command:
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
### Synaptics
- SL1680
#### Setup
Follow Frigate's default installation instructions, but use a docker image with the `-synaptics` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-synaptics`.
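For example, a minimal Docker Compose sketch using that image tag (all other settings omitted; adjust for your own setup):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-synaptics
```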
Next, you need to grant docker permissions to access your hardware:
- During the configuration process, you should run docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your docker run command.
```yaml
devices:
- /dev/synap
- /dev/video0
- /dev/video1
```
or add these options to your `docker run` command:
```
--device /dev/synap \
--device /dev/video0 \
--device /dev/video1
```
#### Configuration
Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).
## Docker
Running through Docker with Docker Compose is the recommended install method.

View File

@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.16.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.0).
The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.0` instead of `0.15.2`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.16.0
image: ghcr.io/blakeblackshear/frigate:0.16.1
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.16.0`, `0.16.0-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
```
3. **Start the Container**:

View File

@@ -13,7 +13,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect
# Setup a go2rtc stream
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#module-streams), not just rtsp.
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#module-streams), not just rtsp.
:::tip
@@ -49,8 +49,8 @@ After adding this to the config, restart Frigate and try to watch the live strea
- Check Video Codec:
- If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
```yaml
go2rtc:
streams:

View File

@@ -185,6 +185,26 @@ For clips to be castable to media devices, audio is required and may need to be
<a name="api"></a>
## Camera API
To disable a camera dynamically:
```yaml
action: camera.turn_off
data: {}
target:
entity_id: camera.back_deck_cam # your Frigate camera entity ID
```
To enable a camera that has been disabled dynamically:
```yaml
action: camera.turn_on
data: {}
target:
entity_id: camera.back_deck_cam # your Frigate camera entity ID
```
## Notification API
Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.

View File

@@ -29,12 +29,12 @@ Message published for each changed tracked object. The first message is publishe
"camera": "front_door",
"frame_time": 1607123961.837752,
"snapshot": {
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": [],
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": []
},
"label": "person",
"sub_label": null,
@@ -61,6 +61,7 @@ Message published for each changed tracked object. The first message is publishe
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [], // detailed data about the current attributes in this frame
"current_estimated_speed": 0.71, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"average_estimated_speed": 14.3, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
"recognized_license_plate": "ABC12345", // a recognized license plate for car objects
"recognized_license_plate_score": 0.933451
@@ -70,12 +71,12 @@ Message published for each changed tracked object. The first message is publishe
"camera": "front_door",
"frame_time": 1607123962.082975,
"snapshot": {
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": [],
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": []
},
"label": "person",
"sub_label": ["John Smith", 0.79],
@@ -109,6 +110,7 @@ Message published for each changed tracked object. The first message is publishe
}
],
"current_estimated_speed": 0.77, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"average_estimated_speed": 14.31, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
"recognized_license_plate": "ABC12345", // a recognized license plate for car objects
"recognized_license_plate_score": 0.933451

View File

@@ -34,6 +34,12 @@ Model IDs are not secret values and can be shared freely. Access to your model i
:::
:::tip
When setting the plus model id, all other `model` fields should be removed, as these are configured automatically from the Frigate+ model config
:::
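For instance, a minimal sketch of what the resulting config might look like (the placeholder model id is illustrative and assumes the `plus://` path syntax):

```yaml
model:
  path: plus://<your_model_id>
```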
## Step 4: Adjust your object filters for higher scores
Frigate+ models generally have much higher scores than the default model provided in Frigate. You will likely need to increase your `threshold` and `min_score` values. Here is an example of how these values can be refined, but you should expect these to evolve as your model improves. For more information about how `threshold` and `min_score` are related, see the docs on [object filters](../configuration/object_filters.md#object-scores).
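As a hedged illustration only (the values below are placeholders to show the shape of the config, not recommendations), per-object filters can be refined like this:

```yaml
objects:
  filters:
    person:
      min_score: 0.6
      threshold: 0.85
```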

View File

@@ -11,34 +11,51 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ
## Available model types
There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yolov9`. All of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| Model Type | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
_\* Support coming in 0.17_
### YOLOv9 Details
YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
:::info
When switching to YOLOv9, you may need to adjust your thresholds for some objects.
:::
#### Hailo Support
If you have a Hailo device, you will need to specify the hardware you have when submitting a model request because they are not cross compatible. Please test using the available base models before submitting your model request.
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), and ONNX (`onnx`) detectors.
:::warning
Using Frigate+ models with `onnx` is only available with Frigate 0.15 and later.
:::
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.
| Hardware | Recommended Detector Type | Recommended Model Type |
| -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
| [NVidia GPU](/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
| [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires Frigate 0.15_
_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
## Improving your model

docs/package-lock.json (generated, 997 changes): file diff suppressed because it is too large.

View File

@@ -18,7 +18,7 @@
},
"dependencies": {
"@docusaurus/core": "^3.7.0",
"@docusaurus/plugin-content-docs": "^3.6.3",
"@docusaurus/plugin-content-docs": "^3.9.1",
"@docusaurus/preset-classic": "^3.7.0",
"@docusaurus/theme-mermaid": "^3.6.3",
"@inkeep/docusaurus": "^2.0.16",

View File

@@ -5,14 +5,14 @@ import frigateHttpApiSidebar from "./docs/integrations/api/sidebar";
const sidebars: SidebarsConfig = {
docs: {
Frigate: [
'frigate/index',
'frigate/hardware',
'frigate/planning_setup',
'frigate/installation',
'frigate/updating',
'frigate/camera_setup',
'frigate/video_pipeline',
'frigate/glossary',
"frigate/index",
"frigate/hardware",
"frigate/planning_setup",
"frigate/installation",
"frigate/updating",
"frigate/camera_setup",
"frigate/video_pipeline",
"frigate/glossary",
],
Guides: [
"guides/getting_started",
@@ -28,7 +28,7 @@ const sidebars: SidebarsConfig = {
{
type: "link",
label: "Go2RTC Configuration Reference",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration",
} as PropSidebarItemLink,
],
Detectors: [
@@ -40,6 +40,19 @@ const sidebars: SidebarsConfig = {
"configuration/face_recognition",
"configuration/license_plate_recognition",
"configuration/bird_classification",
{
type: "category",
label: "Custom Classification",
link: {
type: "generated-index",
title: "Custom Classification",
description: "Configuration for custom classification models",
},
items: [
"configuration/custom_classification/state_classification",
"configuration/custom_classification/object_classification",
],
},
{
type: "category",
label: "Generative AI",
@@ -106,11 +119,11 @@ const sidebars: SidebarsConfig = {
"configuration/metrics",
"integrations/third_party_extensions",
],
'Frigate+': [
'plus/index',
'plus/annotating',
'plus/first_model',
'plus/faq',
"Frigate+": [
"plus/index",
"plus/annotating",
"plus/first_model",
"plus/faq",
],
Troubleshooting: [
"troubleshooting/faqs",

View File

@@ -822,9 +822,9 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
async def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
"""VOD for specific hour. Uses the default timezone (UTC)."""
return vod_hour(
return await vod_hour(
year_month, day, hour, camera_name, get_localzone_name().replace("/", ",")
)
@@ -834,7 +834,9 @@ def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str)
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time (with timezone) on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
async def vod_hour(
year_month: str, day: int, hour: int, camera_name: str, tz_name: str
):
parts = year_month.split("-")
start_date = (
datetime(int(parts[0]), int(parts[1]), day, hour, tzinfo=timezone.utc)
@@ -844,7 +846,7 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
start_ts = start_date.timestamp()
end_ts = end_date.timestamp()
return vod_ts(camera_name, start_ts, end_ts)
return await vod_ts(camera_name, start_ts, end_ts)
@router.get(
@@ -875,7 +877,7 @@ async def vod_event(
if event.end_time is None
else (event.end_time + padding)
)
vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
vod_response = await vod_ts(event.camera, event.start_time - padding, end_ts)
# If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
if (
@@ -1248,7 +1250,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get("/events/{event_id}/clip.mp4")
def event_clip(
async def event_clip(
request: Request,
event_id: str,
padding: int = Query(0, description="Padding to apply to clip."),
@@ -1270,7 +1272,9 @@ def event_clip(
if event.end_time is None
else event.end_time + padding
)
return recording_clip(request, event.camera, event.start_time - padding, end_ts)
return await recording_clip(
request, event.camera, event.start_time - padding, end_ts
)
@router.get("/events/{event_id}/preview.gif")
@@ -1698,7 +1702,7 @@ def preview_thumbnail(file_name: str):
"/{camera_name}/{label}/thumbnail.jpg",
dependencies=[Depends(require_camera_access)],
)
def label_thumbnail(request: Request, camera_name: str, label: str):
async def label_thumbnail(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(Event.camera == camera_name)
if label != "any":
@@ -1707,7 +1711,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
try:
event_id = event_query.scalar()
return event_thumbnail(request, event_id, 60)
return await event_thumbnail(request, event_id, Extension.jpg, 60)
except DoesNotExist:
frame = np.zeros((175, 175, 3), np.uint8)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
@@ -1722,7 +1726,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
@router.get(
"/{camera_name}/{label}/clip.mp4", dependencies=[Depends(require_camera_access)]
)
def label_clip(request: Request, camera_name: str, label: str):
async def label_clip(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(
Event.camera == camera_name, Event.has_clip == True
@@ -1733,7 +1737,7 @@ def label_clip(request: Request, camera_name: str, label: str):
try:
event = event_query.get()
return event_clip(request, event.id)
return await event_clip(request, event.id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Event not found"}, status_code=404
@@ -1743,7 +1747,7 @@ def label_clip(request: Request, camera_name: str, label: str):
@router.get(
"/{camera_name}/{label}/snapshot.jpg", dependencies=[Depends(require_camera_access)]
)
def label_snapshot(request: Request, camera_name: str, label: str):
async def label_snapshot(request: Request, camera_name: str, label: str):
"""Returns the snapshot image from the latest event for the given camera and label combo"""
label = unquote(label)
if label == "any":
@@ -1764,7 +1768,7 @@ def label_snapshot(request: Request, camera_name: str, label: str):
try:
event: Event = event_query.get()
return event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
return await event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
except DoesNotExist:
frame = np.zeros((720, 1280, 3), np.uint8)
_, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])

View File

@@ -2,6 +2,7 @@
import logging
from enum import Enum
from typing import Any
from .zmq_proxy import Publisher, Subscriber
@@ -10,18 +11,21 @@ logger = logging.getLogger(__name__)
class RecordingsDataTypeEnum(str, Enum):
all = ""
recordings_available_through = "recordings_available_through"
saved = "saved" # segment has been saved to db
latest = "latest" # segment is in cache
valid = "valid" # segment is valid
invalid = "invalid" # segment is invalid
class RecordingsDataPublisher(Publisher[tuple[str, float]]):
class RecordingsDataPublisher(Publisher[Any]):
"""Publishes latest recording data."""
topic_base = "recordings/"
def __init__(self, topic: RecordingsDataTypeEnum) -> None:
super().__init__(topic.value)
def __init__(self) -> None:
super().__init__()
def publish(self, payload: tuple[str, float], sub_topic: str = "") -> None:
def publish(self, payload: Any, sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
@@ -32,3 +36,11 @@ class RecordingsDataSubscriber(Subscriber):
def __init__(self, topic: RecordingsDataTypeEnum) -> None:
super().__init__(topic.value)
def _return_object(
self, topic: str, payload: tuple | None
) -> tuple[str, Any] | tuple[None, None]:
if payload is None:
return (None, None)
return (topic, payload)

View File

@@ -29,6 +29,10 @@ class StationaryConfig(FrigateBaseModel):
default_factory=StationaryMaxFramesConfig,
title="Max frames for stationary objects.",
)
classifier: bool = Field(
default=True,
title="Enable visual classifier for determining if objects with jittery bounding boxes are stationary.",
)
class DetectConfig(FrigateBaseModel):

View File

@@ -93,7 +93,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if camera_config.review.genai.debug_save_thumbnails:
id = data["after"]["id"]
Path(os.path.join(CLIPS_DIR, f"genai-requests/{id}")).mkdir(
Path(os.path.join(CLIPS_DIR, "genai-requests", f"{id}")).mkdir(
parents=True, exist_ok=True
)
shutil.copy(
@@ -124,6 +124,9 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if topic == EmbeddingsRequestEnum.summarize_review.value:
start_ts = request_data["start_ts"]
end_ts = request_data["end_ts"]
logger.debug(
f"Found GenAI Review Summary request for {start_ts} to {end_ts}"
)
items: list[dict[str, Any]] = [
r["data"]["metadata"]
for r in (
@@ -141,7 +144,7 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if len(items) == 0:
logger.debug("No review items with metadata found during time period")
return None
return "No activity was found during this time."
important_items = list(
filter(
@@ -154,8 +157,16 @@ class ReviewDescriptionProcessor(PostProcessorApi):
if not important_items:
return "No concerns were found during this time period."
if self.config.review.genai.debug_save_thumbnails:
Path(
os.path.join(CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}")
).mkdir(parents=True, exist_ok=True)
return self.genai_client.generate_review_summary(
start_ts, end_ts, important_items
start_ts,
end_ts,
important_items,
self.config.review.genai.debug_save_thumbnails,
)
else:
return None

View File

@@ -19,3 +19,4 @@ class ReviewMetadata(BaseModel):
default=None,
description="Other concerns highlighted by the user that are observed.",
)
time: str | None = Field(default=None, description="Time of activity.")

View File

@@ -42,10 +42,13 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
self.detected_birds: dict[str, float] = {}
self.labelmap: dict[int, str] = {}
GITHUB_RAW_ENDPOINT = os.environ.get(
"GITHUB_RAW_ENDPOINT", "https://raw.githubusercontent.com"
)
download_path = os.path.join(MODEL_CACHE_DIR, "bird")
self.model_files = {
"bird.tflite": "https://raw.githubusercontent.com/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
"birdmap.txt": "https://raw.githubusercontent.com/google-coral/test_data/master/inat_bird_labels.txt",
"bird.tflite": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
"birdmap.txt": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/inat_bird_labels.txt",
}
if not all(

View File

@@ -48,9 +48,9 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
self.requestor = requestor
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.tensor_input_details: dict[str, Any] = None
self.tensor_output_details: dict[str, Any] = None
self.interpreter: Interpreter | None = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
self.inference_speed = InferenceSpeed(
@@ -61,17 +61,24 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
if not os.path.exists(model_path) or not os.path.exists(labelmap_path):
self.interpreter = None
self.tensor_input_details = None
self.tensor_output_details = None
self.labelmap = {}
return
self.interpreter = Interpreter(
model_path=os.path.join(self.model_dir, "model.tflite"),
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(
os.path.join(self.model_dir, "labelmap.txt"),
prefill=0,
)
self.labelmap = load_labels(labelmap_path, prefill=0)
self.classifications_per_second.start()
def __update_metrics(self, duration: float) -> None:
@@ -140,6 +147,16 @@ class CustomStateClassificationProcessor(RealTimeProcessorApi):
logger.warning("Failed to resize image for state classification")
return
if self.interpreter is None:
write_classification_attempt(
self.train_dir,
cv2.cvtColor(frame, cv2.COLOR_RGB2BGR),
now,
"unknown",
0.0,
)
return
input = np.expand_dims(frame, axis=0)
self.interpreter.set_tensor(self.tensor_input_details[0]["index"], input)
self.interpreter.invoke()
@@ -197,10 +214,10 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
self.model_config = model_config
self.model_dir = os.path.join(MODEL_CACHE_DIR, self.model_config.name)
self.train_dir = os.path.join(CLIPS_DIR, self.model_config.name, "train")
self.interpreter: Interpreter = None
self.interpreter: Interpreter | None = None
self.sub_label_publisher = sub_label_publisher
self.tensor_input_details: dict[str, Any] = None
self.tensor_output_details: dict[str, Any] = None
self.tensor_input_details: dict[str, Any] | None = None
self.tensor_output_details: dict[str, Any] | None = None
self.detected_objects: dict[str, float] = {}
self.labelmap: dict[int, str] = {}
self.classifications_per_second = EventsPerSecond()
@@ -211,17 +228,24 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
@redirect_output_to_logger(logger, logging.DEBUG)
def __build_detector(self) -> None:
model_path = os.path.join(self.model_dir, "model.tflite")
labelmap_path = os.path.join(self.model_dir, "labelmap.txt")
if not os.path.exists(model_path) or not os.path.exists(labelmap_path):
self.interpreter = None
self.tensor_input_details = None
self.tensor_output_details = None
self.labelmap = {}
return
self.interpreter = Interpreter(
model_path=os.path.join(self.model_dir, "model.tflite"),
model_path=model_path,
num_threads=2,
)
self.interpreter.allocate_tensors()
self.tensor_input_details = self.interpreter.get_input_details()
self.tensor_output_details = self.interpreter.get_output_details()
self.labelmap = load_labels(
os.path.join(self.model_dir, "labelmap.txt"),
prefill=0,
)
self.labelmap = load_labels(labelmap_path, prefill=0)
def __update_metrics(self, duration: float) -> None:
self.classifications_per_second.update()
@@ -265,6 +289,16 @@ class CustomObjectClassificationProcessor(RealTimeProcessorApi):
logger.warning("Failed to resize image for state classification")
return
if self.interpreter is None:
write_classification_attempt(
self.train_dir,
cv2.cvtColor(crop, cv2.COLOR_RGB2BGR),
now,
"unknown",
0.0,
)
return
input = np.expand_dims(crop, axis=0)
self.interpreter.set_tensor(self.tensor_input_details[0]["index"], input)
self.interpreter.invoke()

View File

@@ -60,10 +60,12 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
self.faces_per_second = EventsPerSecond()
self.inference_speed = InferenceSpeed(self.metrics.face_rec_speed)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
download_path = os.path.join(MODEL_CACHE_DIR, "facedet")
self.model_files = {
"facedet.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
"landmarkdet.yaml": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
"facedet.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
"landmarkdet.yaml": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
}
if not all(

View File

@@ -78,6 +78,21 @@ class BaseModelRunner(ABC):
class ONNXModelRunner(BaseModelRunner):
"""Run ONNX models using ONNX Runtime."""
@staticmethod
def is_migraphx_complex_model(model_type: str) -> bool:
# Import here to avoid circular imports
from frigate.detectors.detector_config import ModelTypeEnum
from frigate.embeddings.types import EnrichmentModelTypeEnum
return model_type in [
EnrichmentModelTypeEnum.paddleocr.value,
EnrichmentModelTypeEnum.jina_v1.value,
EnrichmentModelTypeEnum.jina_v2.value,
EnrichmentModelTypeEnum.facenet.value,
ModelTypeEnum.rfdetr.value,
ModelTypeEnum.dfine.value,
]
def __init__(self, ort: ort.InferenceSession):
self.ort = ort
@@ -441,6 +456,15 @@ def get_optimized_runner(
options[0]["device_id"],
)
if (
providers
and providers[0] == "MIGraphXExecutionProvider"
and ONNXModelRunner.is_migraphx_complex_model(model_type)
):
# Don't use MIGraphX for models that are not supported
providers.pop(0)
options.pop(0)
return ONNXModelRunner(
ort.InferenceSession(
model_path,

View File

@@ -161,6 +161,10 @@ class ModelConfig(BaseModel):
if model_info.get("inputDataType"):
self.input_dtype = InputDTypeEnum(model_info["inputDataType"])
# RKNN always uses NHWC
if detector == "rknn":
self.input_tensor = InputTensorEnum.nhwc
# generate list of attribute labels
self.attributes_map = {
**model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),

View File

@@ -33,10 +33,6 @@ def preprocess_tensor(image: np.ndarray, model_w: int, model_h: int) -> np.ndarr
image = image[0]
h, w = image.shape[:2]
if (w, h) == (320, 320) and (model_w, model_h) == (640, 640):
return cv2.resize(image, (model_w, model_h), interpolation=cv2.INTER_LINEAR)
scale = min(model_w / w, model_h / h)
new_w, new_h = int(w * scale), int(h * scale)
resized_image = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_CUBIC)

View File

@@ -165,8 +165,9 @@ class Rknn(DetectionApi):
if not os.path.isdir(model_cache_dir):
os.mkdir(model_cache_dir)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
urllib.request.urlretrieve(
f"https://github.com/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
f"{GITHUB_ENDPOINT}/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
model_cache_dir + filename,
)

View File

@@ -0,0 +1,103 @@
import logging
import os
import numpy as np
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import (
BaseDetectorConfig,
InputTensorEnum,
ModelTypeEnum,
)
try:
from synap import Network
from synap.postprocessor import Detector
from synap.preprocessor import Preprocessor
from synap.types import Layout, Shape
SYNAP_SUPPORT = True
except ImportError:
SYNAP_SUPPORT = False
logger = logging.getLogger(__name__)
DETECTOR_KEY = "synaptics"
class SynapDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
class SynapDetector(DetectionApi):
type_key = DETECTOR_KEY
def __init__(self, detector_config: SynapDetectorConfig):
if not SYNAP_SUPPORT:
logger.error(
"Error importing Synaptics SDK modules. You must use the -synaptics Docker image variant for Synaptics detector support."
)
return
try:
_, ext = os.path.splitext(detector_config.model.path)
if ext and ext != ".synap":
raise ValueError("Model path config for Synap1680 is incorrect.")
synap_network = Network(detector_config.model.path)
logger.info(f"Synap NPU loaded model: {detector_config.model.path}")
except ValueError as ve:
logger.error(f"Synap1680 setup has failed: {ve}")
raise
except Exception as e:
logger.error(f"Failed to init Synap NPU: {e}")
raise
self.width = detector_config.model.width
self.height = detector_config.model.height
self.model_type = detector_config.model.model_type
self.network = synap_network
self.network_input_details = self.network.inputs[0]
self.input_tensor_layout = detector_config.model.input_tensor
# Create Inference Engine
self.preprocessor = Preprocessor()
self.detector = Detector(score_threshold=0.4, iou_threshold=0.4)
def detect_raw(self, tensor_input: np.ndarray):
# It has currently only been tested with a pre-converted mobilenet80 .tflite -> .synap model
layout = Layout.nhwc # default layout
detections = np.zeros((20, 6), np.float32)
if self.input_tensor_layout == InputTensorEnum.nhwc:
layout = Layout.nhwc
postprocess_data = self.preprocessor.assign(
self.network.inputs, tensor_input, Shape(tensor_input.shape), layout
)
output_tensor_obj = self.network.predict()
output = self.detector.process(output_tensor_obj, postprocess_data)
if self.model_type == ModelTypeEnum.ssd:
for i, item in enumerate(output.items):
if i == 20:
break
bb = item.bounding_box
# Convert corner coordinates to normalized [0,1] range
x1 = bb.origin.x / self.width # Top-left X
y1 = bb.origin.y / self.height # Top-left Y
x2 = (bb.origin.x + bb.size.x) / self.width # Bottom-right X
y2 = (bb.origin.y + bb.size.y) / self.height # Bottom-right Y
detections[i] = [
item.class_index,
float(item.confidence),
y1,
x1,
y2,
x2,
]
else:
logger.error(f"Unsupported model type: {self.model_type}")
return detections

View File

@@ -144,7 +144,7 @@ class EmbeddingMaintainer(threading.Thread):
EventMetadataTypeEnum.regenerate_description
)
self.recordings_subscriber = RecordingsDataSubscriber(
RecordingsDataTypeEnum.recordings_available_through
RecordingsDataTypeEnum.saved
)
self.review_subscriber = ReviewDataSubscriber("")
self.detection_subscriber = DetectionSubscriber(DetectionTypeEnum.video.value)
@@ -313,6 +313,7 @@ class EmbeddingMaintainer(threading.Thread):
if resp is not None:
return resp
logger.error(f"No processor handled the topic {topic}")
return None
except Exception as e:
logger.error(f"Unable to handle embeddings request {e}", exc_info=True)
@@ -524,20 +525,28 @@ class EmbeddingMaintainer(threading.Thread):
def _process_recordings_updates(self) -> None:
"""Process recordings updates."""
while True:
recordings_data = self.recordings_subscriber.check_for_update()
update = self.recordings_subscriber.check_for_update()
if recordings_data == None:
if not update:
break
camera, recordings_available_through_timestamp = recordings_data
(raw_topic, payload) = update
self.recordings_available_through[camera] = (
recordings_available_through_timestamp
)
if not raw_topic or not payload:
break
logger.debug(
f"{camera} now has recordings available through {recordings_available_through_timestamp}"
)
topic = str(raw_topic)
if topic.endswith(RecordingsDataTypeEnum.saved.value):
camera, recordings_available_through_timestamp, _ = payload
self.recordings_available_through[camera] = (
recordings_available_through_timestamp
)
logger.debug(
f"{camera} now has recordings available through {recordings_available_through_timestamp}"
)
def _process_review_updates(self) -> None:
"""Process review updates."""

View File

@@ -27,11 +27,12 @@ FACENET_INPUT_SIZE = 160
class FaceNetEmbedding(BaseEmbedding):
def __init__(self):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="facedet",
model_file="facenet.tflite",
download_urls={
"facenet.tflite": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
"facenet.tflite": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
},
)
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
@@ -114,11 +115,12 @@ class FaceNetEmbedding(BaseEmbedding):
class ArcfaceEmbedding(BaseEmbedding):
def __init__(self, config: FaceRecognitionConfig):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="facedet",
model_file="arcface.onnx",
download_urls={
"arcface.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
"arcface.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
},
)
self.config = config

View File

@@ -37,11 +37,12 @@ class PaddleOCRDetection(BaseEmbedding):
if model_size == "large"
else "detection_v5-small.onnx"
)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file=model_file,
download_urls={
model_file: f"https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{'v3' if model_size == 'large' else 'v5'}/{model_file}"
model_file: f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{'v3' if model_size == 'large' else 'v5'}/{model_file}"
},
)
self.requestor = requestor
@@ -97,11 +98,12 @@ class PaddleOCRClassification(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file="classification.onnx",
download_urls={
"classification.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
"classification.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
},
)
self.requestor = requestor
@@ -157,12 +159,13 @@ class PaddleOCRRecognition(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file="recognition_v4.onnx",
download_urls={
"recognition_v4.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/v4/recognition_v4.onnx",
"ppocr_keys_v1.txt": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/v4/ppocr_keys_v1.txt",
"recognition_v4.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/v4/recognition_v4.onnx",
"ppocr_keys_v1.txt": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/v4/ppocr_keys_v1.txt",
},
)
self.requestor = requestor
@@ -218,11 +221,12 @@ class LicensePlateDetector(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="yolov9_license_plate",
model_file="yolov9-256-license-plates.onnx",
download_urls={
"yolov9-256-license-plates.onnx": "https://github.com/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
"yolov9-256-license-plates.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
},
)

View File

@@ -73,7 +73,7 @@ Your task is to provide a clear, security-focused description of the scene that:
Facts come first, but identifying security risks is the primary goal.
When forming your description:
- Describe the time, people, and objects exactly as seen. Include any observable environmental changes (e.g., lighting changes triggered by activity).
- Describe the people and objects exactly as seen. Include any observable environmental changes (e.g., lighting changes triggered by activity).
- Time of day should **increase suspicion only when paired with unusual or security-relevant behaviors**. Do not raise the threat level for common residential activities (e.g., residents walking pets, retrieving mail, gardening, playing with pets, supervising children) even at unusual hours, unless other suspicious indicators are present.
- Focus on behaviors that are uncharacteristic of innocent activity: loitering without clear purpose, avoiding cameras, inspecting vehicles/doors, changing behavior when lights activate, scanning surroundings without an apparent benign reason.
- **Benign context override**: If scanning or looking around is clearly part of an innocent activity (such as playing with a dog, gardening, supervising children, or watching for a pet), do not treat it as suspicious.
@@ -99,7 +99,7 @@ Sequence details:
**IMPORTANT:**
- Values must be plain strings, floats, or integers — no nested objects, no extra commentary.
{get_language_prompt()}
"""
"""
logger.debug(
f"Sending {len(thumbnails)} images to create review description on {review_data['camera']}"
)
@@ -135,6 +135,7 @@ Sequence details:
if review_data["recognized_objects"]:
metadata.potential_threat_level = 0
metadata.time = review_data["start"]
return metadata
except Exception as e:
# rarely LLMs can fail to follow directions on output format
@@ -146,34 +147,75 @@ Sequence details:
return None
def generate_review_summary(
self, start_ts: float, end_ts: float, segments: list[dict[str, Any]]
self,
start_ts: float,
end_ts: float,
segments: list[dict[str, Any]],
debug_save: bool,
) -> str | None:
"""Generate a summary of review item descriptions over a period of time."""
time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%I:%M %p')}"
time_range = f"{datetime.datetime.fromtimestamp(start_ts).strftime('%B %d, %Y at %I:%M %p')} to {datetime.datetime.fromtimestamp(end_ts).strftime('%B %d, %Y at %I:%M %p')}"
timeline_summary_prompt = f"""
You are a security officer. Time range: {time_range}.
You are a security officer.
Time range: {time_range}.
Input: JSON list with "scene", "confidence", "potential_threat_level" (1-2), "other_concerns".
Write a report:
Security Summary - {time_range}
[One-sentence overview of activity]
[Chronological bullet list of events with timestamps if in scene]
[Final threat assessment]
Task: Write a concise, human-presentable security report in markdown format.
Rules:
- List events in order.
- Highlight potential_threat_level ≥ 1 with exact times.
- Note any of the additional concerns which are present.
- Note unusual activity even if not threats.
- If no threats: "Final assessment: Only normal activity observed during this period."
- No commentary, questions, or recommendations.
- Output only the report.
"""
Rules for the report:
- Title & overview
- Start with:
# Security Summary - {time_range}
- Write a 1-2 sentence situational overview capturing the general pattern of the period.
- Event details
- Present events in chronological order as a bullet list.
- **If multiple events occur within the same minute or overlapping time range, COMBINE them into a single bullet.**
- Summarize the distinct activities as sub-points under the shared timestamp.
- If no timestamp is given, preserve order but label as “Time not specified.”
- Use bold timestamps for clarity.
- Group bullets under subheadings when multiple events fall into the same category (e.g., Vehicle Activity, Porch Activity, Unusual Behavior).
- Threat levels
- Always show (threat level: X) for each event.
- If multiple events at the same time share the same threat level, only state it once.
- Final assessment
- End with a Final Assessment section.
- If all events are threat level 1 with no escalation:
Final assessment: Only normal residential activity observed during this period.
- If threat level 2+ events are present, clearly summarize them as Potential concerns requiring review.
- Conciseness
- Do not repeat benign clothing/appearance details unless they distinguish individuals.
- Summarize similar routine events instead of restating full scene descriptions.
"""
for item in segments:
timeline_summary_prompt += f"\n{item}"
return self._send(timeline_summary_prompt, [])
if debug_save:
with open(
os.path.join(
CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}", "prompt.txt"
),
"w",
) as f:
f.write(timeline_summary_prompt)
response = self._send(timeline_summary_prompt, [])
if debug_save and response:
with open(
os.path.join(
CLIPS_DIR, "genai-requests", f"{start_ts}-{end_ts}", "response.txt"
),
"w",
) as f:
f.write(response)
return response
def generate_object_description(
self,

View File

@@ -80,9 +80,7 @@ class RecordingMaintainer(threading.Thread):
[CameraConfigUpdateEnum.add, CameraConfigUpdateEnum.record],
)
self.detection_subscriber = DetectionSubscriber(DetectionTypeEnum.all.value)
self.recordings_publisher = RecordingsDataPublisher(
RecordingsDataTypeEnum.recordings_available_through
)
self.recordings_publisher = RecordingsDataPublisher()
self.stop_event = stop_event
self.object_recordings_info: dict[str, list] = defaultdict(list)
@@ -98,6 +96,41 @@ class RecordingMaintainer(threading.Thread):
and not d.startswith("preview_")
]
# publish newest cached segment per camera (including in use files)
newest_cache_segments: dict[str, dict[str, Any]] = {}
for cache in cache_files:
cache_path = os.path.join(CACHE_DIR, cache)
basename = os.path.splitext(cache)[0]
camera, date = basename.rsplit("@", maxsplit=1)
start_time = datetime.datetime.strptime(
date, CACHE_SEGMENT_FORMAT
).astimezone(datetime.timezone.utc)
if (
camera not in newest_cache_segments
or start_time > newest_cache_segments[camera]["start_time"]
):
newest_cache_segments[camera] = {
"start_time": start_time,
"cache_path": cache_path,
}
for camera, newest in newest_cache_segments.items():
self.recordings_publisher.publish(
(
camera,
newest["start_time"].timestamp(),
newest["cache_path"],
),
RecordingsDataTypeEnum.latest.value,
)
# publish None for cameras with no cache files (but only if we know the camera exists)
for camera_name in self.config.cameras:
if camera_name not in newest_cache_segments:
self.recordings_publisher.publish(
(camera_name, None, None),
RecordingsDataTypeEnum.latest.value,
)
files_in_use = []
for process in psutil.process_iter():
try:
@@ -111,7 +144,7 @@ class RecordingMaintainer(threading.Thread):
except psutil.Error:
continue
# group recordings by camera
# group recordings by camera (skip in-use for validation/moving)
grouped_recordings: defaultdict[str, list[dict[str, Any]]] = defaultdict(list)
for cache in cache_files:
# Skip files currently in use
@@ -233,7 +266,9 @@ class RecordingMaintainer(threading.Thread):
recordings[0]["start_time"].timestamp()
if self.config.cameras[camera].record.enabled
else None,
)
None,
),
RecordingsDataTypeEnum.saved.value,
)
recordings_to_insert: list[Optional[Recordings]] = await asyncio.gather(*tasks)
@@ -250,7 +285,7 @@ class RecordingMaintainer(threading.Thread):
async def validate_and_move_segment(
self, camera: str, reviews: list[ReviewSegment], recording: dict[str, Any]
) -> None:
) -> Optional[Recordings]:
cache_path: str = recording["cache_path"]
start_time: datetime.datetime = recording["start_time"]
record_config = self.config.cameras[camera].record
@@ -261,7 +296,7 @@ class RecordingMaintainer(threading.Thread):
or not self.config.cameras[camera].record.enabled
):
self.drop_segment(cache_path)
return
return None
if cache_path in self.end_time_cache:
end_time, duration = self.end_time_cache[cache_path]
@@ -270,10 +305,18 @@ class RecordingMaintainer(threading.Thread):
self.config.ffmpeg, cache_path, get_duration=True
)
if segment_info["duration"]:
duration = float(segment_info["duration"])
else:
duration = -1
if not segment_info.get("has_valid_video", False):
logger.warning(
f"Invalid or missing video stream in segment {cache_path}. Discarding."
)
self.recordings_publisher.publish(
(camera, start_time.timestamp(), cache_path),
RecordingsDataTypeEnum.invalid.value,
)
self.drop_segment(cache_path)
return None
duration = float(segment_info.get("duration", -1))
# ensure duration is within expected length
if 0 < duration < MAX_SEGMENT_DURATION:
@@ -284,8 +327,18 @@ class RecordingMaintainer(threading.Thread):
logger.warning(f"Failed to probe corrupt segment {cache_path}")
logger.warning(f"Discarding a corrupt recording segment: {cache_path}")
Path(cache_path).unlink(missing_ok=True)
return
self.recordings_publisher.publish(
(camera, start_time.timestamp(), cache_path),
RecordingsDataTypeEnum.invalid.value,
)
self.drop_segment(cache_path)
return None
# this segment has a valid duration and has video data, so publish an update
self.recordings_publisher.publish(
(camera, start_time.timestamp(), cache_path),
RecordingsDataTypeEnum.valid.value,
)
record_config = self.config.cameras[camera].record
highest = None

View File

@@ -1,7 +1,7 @@
import logging
import random
import string
from typing import Any, Sequence
from typing import Any, Sequence, cast
import cv2
import numpy as np
@@ -17,6 +17,7 @@ from frigate.camera import PTZMetrics
from frigate.config import CameraConfig
from frigate.ptz.autotrack import PtzMotionEstimator
from frigate.track import ObjectTracker
from frigate.track.stationary_classifier import StationaryMotionClassifier
from frigate.util.image import (
SharedMemoryFrameManager,
get_histogram,
@@ -119,6 +120,7 @@ class NorfairTracker(ObjectTracker):
self.ptz_motion_estimator: PtzMotionEstimator | None = None
self.camera_name = config.name
self.track_id_map: dict[str, str] = {}
self.stationary_classifier = StationaryMotionClassifier()
# Define tracker configurations for static camera
self.object_type_configs = {
@@ -321,23 +323,14 @@ class NorfairTracker(ObjectTracker):
# tracks the current position of the object based on the last N bounding boxes
# returns False if the object has moved outside its previous position
def update_position(self, id: str, box: list[int], stationary: bool) -> bool:
xmin, ymin, xmax, ymax = box
position = self.positions[id]
self.stationary_box_history[id].append(box)
if len(self.stationary_box_history[id]) > MAX_STATIONARY_HISTORY:
self.stationary_box_history[id] = self.stationary_box_history[id][
-MAX_STATIONARY_HISTORY:
]
avg_iou = intersection_over_union(
box, average_boxes(self.stationary_box_history[id])
)
# object has minimal or zero iou
# assume object is active
if avg_iou < THRESHOLD_KNOWN_ACTIVE_IOU:
def update_position(
self,
id: str,
box: list[int],
stationary: bool,
yuv_frame: np.ndarray | None,
) -> bool:
def reset_position(xmin: int, ymin: int, xmax: int, ymax: int) -> None:
self.positions[id] = {
"xmins": [xmin],
"ymins": [ymin],
@@ -348,13 +341,48 @@ class NorfairTracker(ObjectTracker):
"xmax": xmax,
"ymax": ymax,
}
return False
xmin, ymin, xmax, ymax = box
position = self.positions[id]
self.stationary_box_history[id].append(box)
if len(self.stationary_box_history[id]) > MAX_STATIONARY_HISTORY:
self.stationary_box_history[id] = self.stationary_box_history[id][
-MAX_STATIONARY_HISTORY:
]
avg_box = average_boxes(self.stationary_box_history[id])
avg_iou = intersection_over_union(box, avg_box)
median_box = median_of_boxes(self.stationary_box_history[id])
# Establish anchor early when stationary and stable
if stationary and yuv_frame is not None:
history = self.stationary_box_history[id]
if id not in self.stationary_classifier.anchor_crops and len(history) >= 5:
stability_iou = intersection_over_union(avg_box, median_box)
if stability_iou >= 0.7:
self.stationary_classifier.ensure_anchor(
id, yuv_frame, cast(tuple[int, int, int, int], median_box)
)
# object has minimal or zero iou
# assume object is active
if avg_iou < THRESHOLD_KNOWN_ACTIVE_IOU:
if stationary and yuv_frame is not None:
if not self.stationary_classifier.evaluate(
id, yuv_frame, cast(tuple[int, int, int, int], tuple(box))
):
reset_position(xmin, ymin, xmax, ymax)
return False
else:
reset_position(xmin, ymin, xmax, ymax)
return False
threshold = (
THRESHOLD_STATIONARY_CHECK_IOU if stationary else THRESHOLD_ACTIVE_CHECK_IOU
)
# object has iou below threshold, check median to reduce outliers
# object has iou below threshold, check median and optionally crop similarity
if avg_iou < threshold:
median_iou = intersection_over_union(
(
@@ -363,27 +391,26 @@ class NorfairTracker(ObjectTracker):
position["xmax"],
position["ymax"],
),
median_of_boxes(self.stationary_box_history[id]),
median_box,
)
# if the median iou drops below the threshold
# assume object is no longer stationary
if median_iou < threshold:
self.positions[id] = {
"xmins": [xmin],
"ymins": [ymin],
"xmaxs": [xmax],
"ymaxs": [ymax],
"xmin": xmin,
"ymin": ymin,
"xmax": xmax,
"ymax": ymax,
}
return False
# Before flipping to active, check with the classifier if a YUV frame is available
if stationary and yuv_frame is not None:
if not self.stationary_classifier.evaluate(
id, yuv_frame, cast(tuple[int, int, int, int], tuple(box))
):
reset_position(xmin, ymin, xmax, ymax)
return False
else:
reset_position(xmin, ymin, xmax, ymax)
return False
# if there are fewer than 10 entries for the position, add the bounding box
# and recompute the position box
if 5 <= len(position["xmins"]) < 10:
if len(position["xmins"]) < 10:
position["xmins"].append(xmin)
position["ymins"].append(ymin)
position["xmaxs"].append(xmax)
@@ -416,7 +443,12 @@ class NorfairTracker(ObjectTracker):
return False
def update(self, track_id: str, obj: dict[str, Any]) -> None:
def update(
self,
track_id: str,
obj: dict[str, Any],
yuv_frame: np.ndarray | None,
) -> None:
id = self.track_id_map[track_id]
self.disappeared[id] = 0
stationary = (
@@ -424,7 +456,7 @@ class NorfairTracker(ObjectTracker):
>= self.detect_config.stationary.threshold
)
# update the motionless count if the object has not moved to a new position
if self.update_position(id, obj["box"], stationary):
if self.update_position(id, obj["box"], stationary, yuv_frame):
self.tracked_objects[id]["motionless_count"] += 1
if self.is_expired(id):
self.deregister(id, track_id)
@@ -440,6 +472,7 @@ class NorfairTracker(ObjectTracker):
self.tracked_objects[id]["position_changes"] += 1
self.tracked_objects[id]["motionless_count"] = 0
self.stationary_box_history[id] = []
self.stationary_classifier.on_active(id)
self.tracked_objects[id].update(obj)
@@ -467,6 +500,15 @@ class NorfairTracker(ObjectTracker):
) -> None:
# Group detections by object type
detections_by_type: dict[str, list[Detection]] = {}
yuv_frame: np.ndarray | None = None
if self.ptz_metrics.autotracker_enabled.value or (
self.detect_config.stationary.classifier
and any(obj[0] == "car" for obj in detections)
):
yuv_frame = self.frame_manager.get(
frame_name, self.camera_config.frame_shape_yuv
)
for obj in detections:
label = obj[0]
if label not in detections_by_type:
@@ -481,9 +523,6 @@ class NorfairTracker(ObjectTracker):
embedding = None
if self.ptz_metrics.autotracker_enabled.value:
yuv_frame = self.frame_manager.get(
frame_name, self.camera_config.frame_shape_yuv
)
embedding = get_histogram(
yuv_frame, obj[2][0], obj[2][1], obj[2][2], obj[2][3]
)
@@ -575,7 +614,11 @@ class NorfairTracker(ObjectTracker):
self.tracked_objects[id]["estimate"] = new_obj["estimate"]
# else update it
else:
self.update(str(t.global_id), new_obj)
self.update(
str(t.global_id),
new_obj,
yuv_frame if new_obj["label"] == "car" else None,
)
# clear expired tracks
expired_ids = [k for k in self.track_id_map.keys() if k not in active_ids]
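
Note on the tracker change above: an appearance anchor is only established once the recent box history looks stable, i.e. there are at least 5 boxes and the IoU between their average and median reaches 0.7 (see the ensure_anchor call in update_position). A minimal standalone sketch of that stability test, using made-up boxes and local stand-ins for Frigate's average_boxes / median_of_boxes / intersection_over_union helpers:

import numpy as np

def average_boxes(history):  # stand-in for Frigate's helper
    return tuple(int(v) for v in np.mean(np.array(history), axis=0))

def median_of_boxes(history):  # stand-in for Frigate's helper
    return tuple(int(v) for v in np.median(np.array(history), axis=0))

def intersection_over_union(a, b):  # stand-in for Frigate's helper
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# five nearly identical boxes -> stable history, so an anchor would be established
history = [(100, 100, 200, 200), (101, 99, 201, 199), (99, 101, 199, 201),
           (100, 100, 200, 200), (102, 100, 202, 200)]
stability_iou = intersection_over_union(average_boxes(history), median_of_boxes(history))
print(len(history) >= 5 and stability_iou >= 0.7)  # True -> ensure_anchor() would run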


@@ -0,0 +1,202 @@
"""Tools for determining if an object is stationary."""
import logging
from typing import Any, cast
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter
logger = logging.getLogger(__name__)
THRESHOLD_KNOWN_ACTIVE_IOU = 0.2
THRESHOLD_STATIONARY_CHECK_IOU = 0.6
THRESHOLD_ACTIVE_CHECK_IOU = 0.9
MAX_STATIONARY_HISTORY = 10
class StationaryMotionClassifier:
"""Fallback classifier to prevent false flips from stationary to active.
Uses appearance consistency on a fixed spatial region (historical median box)
to detect actual movement, ignoring bounding box detection variations.
"""
CROP_SIZE = 96
NCC_KEEP_THRESHOLD = 0.90 # High correlation = keep stationary
NCC_ACTIVE_THRESHOLD = 0.85 # Low correlation = consider active
SHIFT_KEEP_THRESHOLD = 0.02 # Small shift = keep stationary
SHIFT_ACTIVE_THRESHOLD = 0.04 # Large shift = consider active
DRIFT_ACTIVE_THRESHOLD = 0.12 # Cumulative drift over 5 frames
CHANGED_FRAMES_TO_FLIP = 2
def __init__(self) -> None:
self.anchor_crops: dict[str, np.ndarray] = {}
self.anchor_boxes: dict[str, tuple[int, int, int, int]] = {}
self.changed_counts: dict[str, int] = {}
self.shift_histories: dict[str, list[float]] = {}
# Pre-compute Hanning window for phase correlation
hann = np.hanning(self.CROP_SIZE).astype(np.float64)
self._hann2d = np.outer(hann, hann)
def reset(self, id: str) -> None:
logger.debug("StationaryMotionClassifier.reset: id=%s", id)
if id in self.anchor_crops:
del self.anchor_crops[id]
if id in self.anchor_boxes:
del self.anchor_boxes[id]
self.changed_counts[id] = 0
self.shift_histories[id] = []
def _extract_y_crop(
self, yuv_frame: np.ndarray, box: tuple[int, int, int, int]
) -> np.ndarray:
"""Extract and normalize Y-plane crop from bounding box."""
y_height = yuv_frame.shape[0] // 3 * 2
width = yuv_frame.shape[1]
x1 = max(0, min(width - 1, box[0]))
y1 = max(0, min(y_height - 1, box[1]))
x2 = max(0, min(width - 1, box[2]))
y2 = max(0, min(y_height - 1, box[3]))
if x2 <= x1:
x2 = min(width - 1, x1 + 1)
if y2 <= y1:
y2 = min(y_height - 1, y1 + 1)
# Extract Y-plane crop, resize, and blur
y_plane = yuv_frame[0:y_height, 0:width]
crop = y_plane[y1:y2, x1:x2]
crop_resized = cv2.resize(
crop, (self.CROP_SIZE, self.CROP_SIZE), interpolation=cv2.INTER_AREA
)
result = cast(np.ndarray[Any, Any], gaussian_filter(crop_resized, sigma=0.5))
logger.debug(
"_extract_y_crop: box=%s clamped=(%d,%d,%d,%d) crop_shape=%s",
box,
x1,
y1,
x2,
y2,
crop.shape if "crop" in locals() else None,
)
return result
def ensure_anchor(
self, id: str, yuv_frame: np.ndarray, median_box: tuple[int, int, int, int]
) -> None:
"""Initialize anchor crop from stable median box when object becomes stationary."""
if id not in self.anchor_crops:
self.anchor_boxes[id] = median_box
self.anchor_crops[id] = self._extract_y_crop(yuv_frame, median_box)
self.changed_counts[id] = 0
self.shift_histories[id] = []
logger.debug(
"ensure_anchor: initialized id=%s median_box=%s crop_shape=%s",
id,
median_box,
self.anchor_crops[id].shape,
)
def on_active(self, id: str) -> None:
"""Reset state when object becomes active to allow re-anchoring."""
logger.debug("on_active: id=%s became active; resetting state", id)
self.reset(id)
def evaluate(
self, id: str, yuv_frame: np.ndarray, current_box: tuple[int, int, int, int]
) -> bool:
"""Return True to keep stationary, False to flip to active.
Compares the same spatial region (historical median box) across frames
to detect actual movement, ignoring bounding box variations.
"""
if id not in self.anchor_crops or id not in self.anchor_boxes:
logger.debug("evaluate: id=%s has no anchor; default keep stationary", id)
return True
# Compare same spatial region across frames
anchor_box = self.anchor_boxes[id]
anchor_crop = self.anchor_crops[id]
curr_crop = self._extract_y_crop(yuv_frame, anchor_box)
# Compute appearance and motion metrics
ncc = cv2.matchTemplate(curr_crop, anchor_crop, cv2.TM_CCOEFF_NORMED)[0, 0]
a64 = anchor_crop.astype(np.float64) * self._hann2d
c64 = curr_crop.astype(np.float64) * self._hann2d
(shift_x, shift_y), _ = cv2.phaseCorrelate(a64, c64)
shift_norm = float(np.hypot(shift_x, shift_y)) / float(self.CROP_SIZE)
logger.debug(
"evaluate: id=%s metrics ncc=%.4f shift_norm=%.4f (shift_x=%.3f, shift_y=%.3f)",
id,
float(ncc),
shift_norm,
float(shift_x),
float(shift_y),
)
# Update rolling shift history
history = self.shift_histories.get(id, [])
history.append(shift_norm)
if len(history) > 5:
history = history[-5:]
self.shift_histories[id] = history
drift_sum = float(sum(history))
logger.debug(
"evaluate: id=%s history_len=%d last_shift=%.4f drift_sum=%.4f",
id,
len(history),
history[-1] if history else -1.0,
drift_sum,
)
# Early exit for clear stationary case
if ncc >= self.NCC_KEEP_THRESHOLD and shift_norm < self.SHIFT_KEEP_THRESHOLD:
self.changed_counts[id] = 0
logger.debug(
"evaluate: id=%s early-stationary keep=True (ncc>=%.2f and shift<%.2f)",
id,
self.NCC_KEEP_THRESHOLD,
self.SHIFT_KEEP_THRESHOLD,
)
return True
# Check for movement indicators
movement_detected = (
ncc < self.NCC_ACTIVE_THRESHOLD
or shift_norm >= self.SHIFT_ACTIVE_THRESHOLD
or drift_sum >= self.DRIFT_ACTIVE_THRESHOLD
)
if movement_detected:
cnt = self.changed_counts.get(id, 0) + 1
self.changed_counts[id] = cnt
if (
cnt >= self.CHANGED_FRAMES_TO_FLIP
or drift_sum >= self.DRIFT_ACTIVE_THRESHOLD
):
logger.debug(
"evaluate: id=%s flip_to_active=True cnt=%d drift_sum=%.4f thresholds(changed>=%d drift>=%.2f)",
id,
cnt,
drift_sum,
self.CHANGED_FRAMES_TO_FLIP,
self.DRIFT_ACTIVE_THRESHOLD,
)
return False
logger.debug(
"evaluate: id=%s movement_detected cnt=%d keep_until_cnt>=%d",
id,
cnt,
self.CHANGED_FRAMES_TO_FLIP,
)
else:
self.changed_counts[id] = 0
logger.debug("evaluate: id=%s no_movement keep=True", id)
return True
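
For reference, a self-contained sketch (not Frigate code) of the two metrics evaluate() combines: normalized cross-correlation of the current crop against the anchor crop, and the normalized translation from windowed phase correlation. Smoothed random noise stands in for a real Y-plane crop:

import cv2
import numpy as np

CROP_SIZE = 96
rng = np.random.default_rng(0)
base = rng.integers(0, 255, (CROP_SIZE, CROP_SIZE), dtype=np.uint8)
anchor = cv2.GaussianBlur(base, (0, 0), 3)   # stands in for the anchor Y crop
current = np.roll(anchor, shift=3, axis=1)   # same content shifted 3 px horizontally

# appearance similarity; the result is 1x1 because image and template are the same size
ncc = float(cv2.matchTemplate(current, anchor, cv2.TM_CCOEFF_NORMED)[0, 0])

# translation estimate, windowed the same way as in the classifier
hann = np.hanning(CROP_SIZE)
window = np.outer(hann, hann)
(shift_x, shift_y), _ = cv2.phaseCorrelate(
    anchor.astype(np.float64) * window, current.astype(np.float64) * window
)
shift_norm = float(np.hypot(shift_x, shift_y)) / CROP_SIZE  # ~3/96 ≈ 0.031

print(f"ncc={ncc:.3f} shift_norm={shift_norm:.3f}")

A normalized shift of roughly 0.031 sits between SHIFT_KEEP_THRESHOLD (0.02) and SHIFT_ACTIVE_THRESHOLD (0.04), so the decision would come down to the NCC value and the rolling drift sum over the last 5 frames.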


@@ -284,7 +284,9 @@ def post_process_yolox(
def get_ort_providers(
force_cpu: bool = False, device: str | None = "AUTO", requires_fp16: bool = False
force_cpu: bool = False,
device: str | None = "AUTO",
requires_fp16: bool = False,
) -> tuple[list[str], list[dict[str, Any]]]:
if force_cpu:
return (
@@ -351,12 +353,15 @@ def get_ort_providers(
}
)
elif provider == "MIGraphXExecutionProvider":
# MIGraphX uses more CPU than ROCM, while also being the same speed
if device == "MIGraphX":
providers.append(provider)
options.append({})
else:
continue
migraphx_cache_dir = os.path.join(MODEL_CACHE_DIR, "migraphx")
os.makedirs(migraphx_cache_dir, exist_ok=True)
providers.append(provider)
options.append(
{
"migraphx_model_cache_dir": migraphx_cache_dir,
}
)
elif provider == "CPUExecutionProvider":
providers.append(provider)
options.append(

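The MIGraphX branch above now registers the provider unconditionally and points it at a cache directory under the model cache. A hedged sketch of how such a providers/options pair could be handed to onnxruntime; the model path and cache location below are placeholders, not values taken from this change:

import onnxruntime as ort

providers = ["MIGraphXExecutionProvider", "CPUExecutionProvider"]
provider_options = [{"migraphx_model_cache_dir": "/config/model_cache/migraphx"}, {}]
session = ort.InferenceSession(
    "/config/model_cache/example.onnx",  # placeholder model path
    providers=list(zip(providers, provider_options)),
)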

@@ -269,7 +269,20 @@ def is_object_filtered(obj, objects_to_track, object_filters):
def get_min_region_size(model_config: ModelConfig) -> int:
"""Get the min region size."""
return max(model_config.height, model_config.width)
largest_dimension = max(model_config.height, model_config.width)
if largest_dimension > 320:
# We originally tested allowing any model to have a region down to half of the model size
# but this led to many false positives. In this case we specifically target larger models
# which can benefit from a smaller region in some cases to detect smaller objects.
half = int(largest_dimension / 2)
if half % 4 == 0:
return half
return int((half + 3) / 4) * 4
return largest_dimension
def create_tensor_input(frame, model_config: ModelConfig, region):

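The new get_min_region_size keeps the old behavior for models up to 320 px but lets larger models use half-size regions, rounded up to a multiple of 4. A worked standalone re-implementation of that rounding, for illustration only:

def _min_region(largest: int) -> int:  # illustrative copy of the logic above
    if largest > 320:
        half = largest // 2
        return half if half % 4 == 0 else ((half + 3) // 4) * 4
    return largest

assert _min_region(320) == 320  # small models keep the full model dimension
assert _min_region(640) == 320  # larger models allow half-size regions
assert _min_region(334) == 168  # 167 is rounded up to the next multiple of 4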

@@ -303,7 +303,7 @@ def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, s
"-o",
"-",
"-s",
"1",
"1000", # Intel changed this from seconds to milliseconds in 2024+ versions
]
if intel_gpu_device:
@@ -603,87 +603,87 @@ def auto_detect_hwaccel() -> str:
async def get_video_properties(
ffmpeg, url: str, get_duration: bool = False
) -> dict[str, Any]:
async def calculate_duration(video: Optional[Any]) -> float:
duration = None
if video is not None:
# Get the frames per second (fps) of the video stream
fps = video.get(cv2.CAP_PROP_FPS)
total_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
if fps and total_frames:
duration = total_frames / fps
# if cv2 failed need to use ffprobe
if duration is None:
p = await asyncio.create_subprocess_exec(
ffmpeg.ffprobe_path,
"-v",
"error",
"-show_entries",
"format=duration",
"-of",
"default=noprint_wrappers=1:nokey=1",
f"{url}",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
async def probe_with_ffprobe(
url: str,
) -> tuple[bool, int, int, Optional[str], float]:
"""Fallback using ffprobe: returns (valid, width, height, codec, duration)."""
cmd = [
ffmpeg.ffprobe_path,
"-v",
"quiet",
"-print_format",
"json",
"-show_format",
"-show_streams",
url,
]
try:
proc = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
await p.wait()
stdout, _ = await proc.communicate()
if proc.returncode != 0:
return False, 0, 0, None, -1
if p.returncode == 0:
result = (await p.stdout.read()).decode()
else:
result = None
data = json.loads(stdout.decode())
video_streams = [
s for s in data.get("streams", []) if s.get("codec_type") == "video"
]
if not video_streams:
return False, 0, 0, None, -1
if result:
try:
duration = float(result.strip())
except ValueError:
duration = -1
else:
duration = -1
v = video_streams[0]
width = int(v.get("width", 0))
height = int(v.get("height", 0))
codec = v.get("codec_name")
return duration
duration_str = data.get("format", {}).get("duration")
duration = float(duration_str) if duration_str else -1.0
width = height = 0
return True, width, height, codec, duration
except (json.JSONDecodeError, ValueError, KeyError, asyncio.SubprocessError):
return False, 0, 0, None, -1
try:
# Open the video stream using OpenCV
video = cv2.VideoCapture(url)
def probe_with_cv2(url: str) -> tuple[bool, int, int, Optional[str], float]:
"""Primary attempt using cv2: returns (valid, width, height, fourcc, duration)."""
cap = cv2.VideoCapture(url)
if not cap.isOpened():
cap.release()
return False, 0, 0, None, -1
# Check if the video stream was opened successfully
if not video.isOpened():
video = None
except Exception:
video = None
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
valid = width > 0 and height > 0
fourcc = None
duration = -1.0
result = {}
if valid:
fourcc_int = int(cap.get(cv2.CAP_PROP_FOURCC))
fourcc = fourcc_int.to_bytes(4, "little").decode("latin-1").strip()
if get_duration:
fps = cap.get(cv2.CAP_PROP_FPS)
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
if fps > 0 and total_frames > 0:
duration = total_frames / fps
cap.release()
return valid, width, height, fourcc, duration
# try cv2 first
has_video, width, height, fourcc, duration = probe_with_cv2(url)
# fallback to ffprobe if needed
if not has_video or (get_duration and duration < 0):
has_video, width, height, fourcc, duration = await probe_with_ffprobe(url)
result: dict[str, Any] = {"has_valid_video": has_video}
if has_video:
result.update({"width": width, "height": height})
if fourcc:
result["fourcc"] = fourcc
if get_duration:
result["duration"] = await calculate_duration(video)
if video is not None:
# Get the width of frames in the video stream
width = video.get(cv2.CAP_PROP_FRAME_WIDTH)
# Get the height of frames in the video stream
height = video.get(cv2.CAP_PROP_FRAME_HEIGHT)
# Get the stream encoding
fourcc_int = int(video.get(cv2.CAP_PROP_FOURCC))
fourcc = (
chr((fourcc_int >> 0) & 255)
+ chr((fourcc_int >> 8) & 255)
+ chr((fourcc_int >> 16) & 255)
+ chr((fourcc_int >> 24) & 255)
)
# Release the video stream
video.release()
result["width"] = round(width)
result["height"] = round(height)
result["fourcc"] = fourcc
result["duration"] = duration
return result
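
After this refactor, get_video_properties tries cv2 first and only falls back to ffprobe when cv2 cannot open the stream (or a duration is requested but unavailable), returning a dict keyed on has_valid_video with width, height, fourcc, and duration filled in when known. One small detail worth calling out is the FOURCC decoding, which turns the integer reported by cv2 back into its codec tag; a quick standalone check (the "h264" tag is just an example):

import cv2

fourcc_int = int(cv2.VideoWriter_fourcc(*"h264"))
print(fourcc_int.to_bytes(4, "little").decode("latin-1"))  # -> "h264"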


@@ -1,10 +1,9 @@
import datetime
import logging
import os
import queue
import subprocess as sp
import threading
import time
from datetime import datetime, timedelta, timezone
from multiprocessing import Queue, Value
from multiprocessing.synchronize import Event as MpEvent
from typing import Any
@@ -13,6 +12,10 @@ import cv2
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.comms.inter_process import InterProcessRequestor
from frigate.comms.recordings_updater import (
RecordingsDataSubscriber,
RecordingsDataTypeEnum,
)
from frigate.config import CameraConfig, DetectConfig, ModelConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.config.camera.updater import (
@@ -20,8 +23,6 @@ from frigate.config.camera.updater import (
CameraConfigUpdateSubscriber,
)
from frigate.const import (
CACHE_DIR,
CACHE_SEGMENT_FORMAT,
PROCESS_PRIORITY_HIGH,
REQUEST_REGION_GRID,
)
@@ -129,7 +130,7 @@ def capture_frames(
fps.value = frame_rate.eps()
skipped_fps.value = skipped_eps.eps()
current_frame.value = datetime.datetime.now().timestamp()
current_frame.value = datetime.now().timestamp()
frame_name = f"{config.name}_frame{frame_index}"
frame_buffer = frame_manager.write(frame_name)
try:
@@ -199,6 +200,11 @@ class CameraWatchdog(threading.Thread):
self.requestor = InterProcessRequestor()
self.was_enabled = self.config.enabled
self.segment_subscriber = RecordingsDataSubscriber(RecordingsDataTypeEnum.all)
self.latest_valid_segment_time: float = 0
self.latest_invalid_segment_time: float = 0
self.latest_cache_segment_time: float = 0
def _update_enabled_state(self) -> bool:
"""Fetch the latest config and update enabled state."""
self.config_subscriber.check_for_updates()
@@ -243,6 +249,11 @@ class CameraWatchdog(threading.Thread):
if enabled:
self.logger.debug(f"Enabling camera {self.config.name}")
self.start_all_ffmpeg()
# reset all timestamps
self.latest_valid_segment_time = 0
self.latest_invalid_segment_time = 0
self.latest_cache_segment_time = 0
else:
self.logger.debug(f"Disabling camera {self.config.name}")
self.stop_all_ffmpeg()
@@ -260,7 +271,37 @@ class CameraWatchdog(threading.Thread):
if not enabled:
continue
now = datetime.datetime.now().timestamp()
while True:
update = self.segment_subscriber.check_for_update(timeout=0)
if update == (None, None):
break
raw_topic, payload = update
if raw_topic and payload:
topic = str(raw_topic)
camera, segment_time, _ = payload
if camera != self.config.name:
continue
if topic.endswith(RecordingsDataTypeEnum.valid.value):
self.logger.debug(
f"Latest valid recording segment time on {camera}: {segment_time}"
)
self.latest_valid_segment_time = segment_time
elif topic.endswith(RecordingsDataTypeEnum.invalid.value):
self.logger.warning(
f"Invalid recording segment detected for {camera} at {segment_time}"
)
self.latest_invalid_segment_time = segment_time
elif topic.endswith(RecordingsDataTypeEnum.latest.value):
if segment_time is not None:
self.latest_cache_segment_time = segment_time
else:
self.latest_cache_segment_time = 0
now = datetime.now().timestamp()
if not self.capture_thread.is_alive():
self.requestor.send_data(f"{self.config.name}/status/detect", "offline")
@@ -298,18 +339,55 @@ class CameraWatchdog(threading.Thread):
poll = p["process"].poll()
if self.config.record.enabled and "record" in p["roles"]:
latest_segment_time = self.get_latest_segment_datetime(
p.get(
"latest_segment_time",
datetime.datetime.now().astimezone(datetime.timezone.utc),
now_utc = datetime.now().astimezone(timezone.utc)
latest_cache_dt = (
datetime.fromtimestamp(
self.latest_cache_segment_time, tz=timezone.utc
)
if self.latest_cache_segment_time > 0
else now_utc - timedelta(seconds=1)
)
if datetime.datetime.now().astimezone(datetime.timezone.utc) > (
latest_segment_time + datetime.timedelta(seconds=120)
):
latest_valid_dt = (
datetime.fromtimestamp(
self.latest_valid_segment_time, tz=timezone.utc
)
if self.latest_valid_segment_time > 0
else now_utc - timedelta(seconds=1)
)
latest_invalid_dt = (
datetime.fromtimestamp(
self.latest_invalid_segment_time, tz=timezone.utc
)
if self.latest_invalid_segment_time > 0
else now_utc - timedelta(seconds=1)
)
# ensure segments are still being created and that they have valid video data
cache_stale = now_utc > (latest_cache_dt + timedelta(seconds=120))
valid_stale = now_utc > (latest_valid_dt + timedelta(seconds=120))
invalid_stale_condition = (
self.latest_invalid_segment_time > 0
and now_utc > (latest_invalid_dt + timedelta(seconds=120))
and self.latest_valid_segment_time
<= self.latest_invalid_segment_time
)
invalid_stale = invalid_stale_condition
if cache_stale or valid_stale or invalid_stale:
if cache_stale:
reason = "No new recording segments were created"
elif valid_stale:
reason = "No new valid recording segments were created"
else: # invalid_stale
reason = (
"No valid segments created since last invalid segment"
)
self.logger.error(
f"No new recording segments were created for {self.config.name} in the last 120s. restarting the ffmpeg record process..."
f"{reason} for {self.config.name} in the last 120s. Restarting the ffmpeg record process..."
)
p["process"] = start_or_restart_ffmpeg(
p["cmd"],
@@ -328,7 +406,7 @@ class CameraWatchdog(threading.Thread):
self.requestor.send_data(
f"{self.config.name}/status/record", "online"
)
p["latest_segment_time"] = latest_segment_time
p["latest_segment_time"] = self.latest_cache_segment_time
if poll is None:
continue
@@ -346,6 +424,7 @@ class CameraWatchdog(threading.Thread):
self.stop_all_ffmpeg()
self.logpipe.close()
self.config_subscriber.stop()
self.segment_subscriber.stop()
def start_ffmpeg_detect(self):
ffmpeg_cmd = [
@@ -405,33 +484,6 @@ class CameraWatchdog(threading.Thread):
p["logpipe"].close()
self.ffmpeg_other_processes.clear()
def get_latest_segment_datetime(
self, latest_segment: datetime.datetime
) -> datetime.datetime:
"""Checks if ffmpeg is still writing recording segments to cache."""
cache_files = sorted(
[
d
for d in os.listdir(CACHE_DIR)
if os.path.isfile(os.path.join(CACHE_DIR, d))
and d.endswith(".mp4")
and not d.startswith("preview_")
]
)
newest_segment_time = latest_segment
for file in cache_files:
if self.config.name in file:
basename = os.path.splitext(file)[0]
_, date = basename.rsplit("@", maxsplit=1)
segment_time = datetime.datetime.strptime(
date, CACHE_SEGMENT_FORMAT
).astimezone(datetime.timezone.utc)
if segment_time > newest_segment_time:
newest_segment_time = segment_time
return newest_segment_time
class CameraCaptureRunner(threading.Thread):
def __init__(
@@ -727,10 +779,7 @@ def process_frames(
time.sleep(0.1)
continue
if (
datetime.datetime.now().astimezone(datetime.timezone.utc)
> next_region_update
):
if datetime.now().astimezone(timezone.utc) > next_region_update:
region_grid = requestor.send_data(REQUEST_REGION_GRID, camera_config.name)
next_region_update = get_tomorrow_at_time(2)
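
The watchdog above now decides whether to restart the record ffmpeg process from three ZMQ-fed timestamps rather than by scanning the cache directory: the latest cached segment, the latest segment with valid video data, and the latest invalid segment. A hedged reduction of that check to a pure function, with made-up timestamps in the example call:

from datetime import datetime, timedelta, timezone

def should_restart_record(
    now: datetime,
    latest_cache: float,
    latest_valid: float,
    latest_invalid: float,
    window: timedelta = timedelta(seconds=120),
) -> bool:
    def to_dt(ts: float) -> datetime:
        # unknown timestamps are treated as "just now", mirroring the watchdog defaults
        return (
            datetime.fromtimestamp(ts, tz=timezone.utc)
            if ts > 0
            else now - timedelta(seconds=1)
        )

    cache_stale = now > to_dt(latest_cache) + window
    valid_stale = now > to_dt(latest_valid) + window
    invalid_stale = (
        latest_invalid > 0
        and now > to_dt(latest_invalid) + window
        and latest_valid <= latest_invalid
    )
    return cache_stale or valid_stale or invalid_stale

now = datetime.now(timezone.utc)
# segments are still being cached, but nothing valid has arrived for over 120s
print(should_restart_record(now, now.timestamp() - 30, now.timestamp() - 200, 0))  # True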

web/public/robots.txt

@@ -0,0 +1,2 @@
User-agent: *
Disallow: /


@@ -139,7 +139,7 @@ export default function HlsVideoPlayer({
if (hlsRef.current) {
hlsRef.current.destroy();
}
}
};
}, [videoRef, hlsRef, useHlsCompat, currentSource]);
// state handling


@@ -84,6 +84,17 @@ function MSEPlayer({
return `${baseUrl.replace(/^http/, "ws")}live/mse/api/ws?src=${camera}`;
}, [camera]);
const handleError = useCallback(
(error: LivePlayerError, description: string = "Unknown error") => {
// eslint-disable-next-line no-console
console.error(
`${camera} - MSE error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live`,
);
onError?.(error);
},
[camera, onError],
);
const handleLoadedMetadata = useCallback(() => {
if (videoRef.current && setFullResolution) {
setFullResolution({
@@ -237,9 +248,9 @@ function MSEPlayer({
onDisconnect();
}
if (isIOS || isSafari) {
onError?.("mse-decode");
handleError("mse-decode", "Safari cannot open MediaSource.");
} else {
onError?.("startup");
handleError("startup", "Error opening MediaSource.");
}
});
},
@@ -267,9 +278,9 @@ function MSEPlayer({
onDisconnect();
}
if (isIOS || isSafari) {
onError?.("mse-decode");
handleError("mse-decode", "Safari cannot open MediaSource.");
} else {
onError?.("startup");
handleError("startup", "Error opening MediaSource.");
}
});
},
@@ -297,7 +308,7 @@ function MSEPlayer({
if (wsRef.current) {
onDisconnect();
}
onError?.("mse-decode");
handleError("mse-decode", "Safari reported InvalidStateError.");
return;
} else {
throw e; // Re-throw if it's not the error we're handling
@@ -424,7 +435,10 @@ function MSEPlayer({
(bufferThreshold > 10 || bufferTime > 10)
) {
onDisconnect();
onError?.("stalled");
handleError(
"stalled",
"Buffer time (10 seconds) exceeded, browser may not be playing media correctly.",
);
}
const playbackRate = calculateAdaptivePlaybackRate(
@@ -470,7 +484,7 @@ function MSEPlayer({
videoRef.current
) {
onDisconnect();
onError("stalled");
handleError("stalled", "Media playback has stalled.");
}
}, timeoutDuration),
);
@@ -479,6 +493,7 @@ function MSEPlayer({
bufferTimeout,
isPlaying,
onDisconnect,
handleError,
onError,
onPlaying,
playbackEnabled,
@@ -663,7 +678,7 @@ function MSEPlayer({
if (wsRef.current) {
onDisconnect();
}
onError?.("startup");
handleError("startup", "Browser reported a network error.");
}
if (
@@ -674,7 +689,7 @@ function MSEPlayer({
if (wsRef.current) {
onDisconnect();
}
onError?.("mse-decode");
handleError("mse-decode", "Safari reported decoding errors.");
}
setErrorCount((prevCount) => prevCount + 1);
@@ -683,7 +698,7 @@ function MSEPlayer({
onDisconnect();
if (errorCount >= 3) {
// too many mse errors, try jsmpeg
onError?.("startup");
handleError("startup", `Max error count ${errorCount} exceeded.`);
} else {
reconnect(5000);
}


@@ -37,6 +37,18 @@ export default function WebRtcPlayer({
return `${baseUrl.replace(/^http/, "ws")}live/webrtc/api/ws?src=${camera}`;
}, [camera]);
// error handler
const handleError = useCallback(
(error: LivePlayerError, description: string = "Unknown error") => {
// eslint-disable-next-line no-console
console.error(
`${camera} - WebRTC error '${error}': ${description} See the documentation: https://docs.frigate.video/configuration/live`,
);
onError?.(error);
},
[camera, onError],
);
// camera states
const pcRef = useRef<RTCPeerConnection | undefined>();
@@ -212,7 +224,7 @@ export default function WebRtcPlayer({
useEffect(() => {
videoLoadTimeoutRef.current = setTimeout(() => {
onError?.("stalled");
handleError("stalled", "WebRTC connection timed out.");
}, 5000);
return () => {
@@ -327,7 +339,7 @@ export default function WebRtcPlayer({
document.visibilityState === "visible" &&
pcRef.current != undefined
) {
onError("stalled");
handleError("stalled", "WebRTC connection stalled.");
}
}, 3000),
);
@@ -344,7 +356,7 @@ export default function WebRtcPlayer({
// @ts-expect-error code does exist
e.target.error.code == MediaError.MEDIA_ERR_NETWORK
) {
onError?.("startup");
handleError("startup", "Browser reported a network error.");
}
}}
/>


@@ -33,29 +33,43 @@ export default function useCameraLiveMode(
const streamsFetcher = useCallback(async (key: string) => {
const streamNames = key.split(",");
const metadata: { [key: string]: LiveStreamMetadata } = {};
await Promise.all(
streamNames.map(async (streamName) => {
try {
const response = await fetch(`/api/go2rtc/streams/${streamName}`);
if (response.ok) {
const data = await response.json();
metadata[streamName] = data;
}
} catch (error) {
// eslint-disable-next-line no-console
console.error(`Failed to fetch metadata for ${streamName}:`, error);
const metadataPromises = streamNames.map(async (streamName) => {
try {
const response = await fetch(`/api/go2rtc/streams/${streamName}`, {
priority: "low",
});
if (response.ok) {
const data = await response.json();
return { streamName, data };
}
}),
);
return { streamName, data: null };
} catch (error) {
// eslint-disable-next-line no-console
console.error(`Failed to fetch metadata for ${streamName}:`, error);
return { streamName, data: null };
}
});
const results = await Promise.allSettled(metadataPromises);
const metadata: { [key: string]: LiveStreamMetadata } = {};
results.forEach((result) => {
if (result.status === "fulfilled" && result.value.data) {
metadata[result.value.streamName] = result.value.data;
}
});
return metadata;
}, []);
const { data: allStreamMetadata = {} } = useSWR<{
[key: string]: LiveStreamMetadata;
}>(restreamedStreamsKey, streamsFetcher, { revalidateOnFocus: false });
}>(restreamedStreamsKey, streamsFetcher, {
revalidateOnFocus: false,
dedupingInterval: 10000,
});
const [preferredLiveModes, setPreferredLiveModes] = useState<{
[key: string]: LivePlayerMode;


@@ -28,7 +28,6 @@ import {
import {
Tooltip,
TooltipContent,
TooltipProvider,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { useResizeObserver } from "@/hooks/resize-observer";
@@ -116,6 +115,7 @@ import {
SelectGroup,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { usePersistence } from "@/hooks/use-persistence";
import { Label } from "@/components/ui/label";
@@ -499,122 +499,118 @@ export default function LiveCameraView({
) : (
<div />
)}
<TooltipProvider>
<div
className={`flex flex-row items-center gap-2 *:rounded-lg ${isMobile ? "landscape:flex-col" : ""}`}
>
{fullscreen && (
<Button
className="bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 text-primary"
aria-label={t("label.back", { ns: "common" })}
size="sm"
onClick={() => navigate(-1)}
>
<IoMdArrowRoundBack className="size-5 text-secondary-foreground" />
{isDesktop && (
<div className="text-secondary-foreground">
{t("button.back", { ns: "common" })}
</div>
)}
</Button>
)}
{supportsFullscreen && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={fullscreen ? FaCompress : FaExpand}
isActive={fullscreen}
title={
fullscreen
? t("button.close", { ns: "common" })
: t("button.fullscreen", { ns: "common" })
}
onClick={toggleFullscreen}
/>
)}
{!isIOS && !isFirefox && preferredLiveMode != "jsmpeg" && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={LuPictureInPicture}
isActive={pip}
title={
pip
? t("button.close", { ns: "common" })
: t("button.pictureInPicture", { ns: "common" })
}
onClick={() => {
if (!pip) {
setPip(true);
} else {
document.exitPictureInPicture();
setPip(false);
}
}}
disabled={!cameraEnabled}
/>
)}
{supports2WayTalk && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={mic ? FaMicrophone : FaMicrophoneSlash}
isActive={mic}
title={
mic
? t("twoWayTalk.disable", { ns: "views/live" })
: t("twoWayTalk.enable", { ns: "views/live" })
}
onClick={() => {
setMic(!mic);
if (!mic && !audio) {
setAudio(true);
}
}}
disabled={!cameraEnabled}
/>
)}
{supportsAudioOutput && preferredLiveMode != "jsmpeg" && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={audio ? GiSpeaker : GiSpeakerOff}
isActive={audio ?? false}
title={
audio
? t("cameraAudio.disable", { ns: "views/live" })
: t("cameraAudio.enable", { ns: "views/live" })
}
onClick={() => setAudio(!audio)}
disabled={!cameraEnabled}
/>
)}
<FrigateCameraFeatures
camera={camera}
recordingEnabled={camera.record.enabled_in_config}
audioDetectEnabled={camera.audio.enabled_in_config}
autotrackingEnabled={
camera.onvif.autotracking.enabled_in_config
<div
className={`flex flex-row items-center gap-2 *:rounded-lg ${isMobile ? "landscape:flex-col" : ""}`}
>
{fullscreen && (
<Button
className="bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 text-primary"
aria-label={t("label.back", { ns: "common" })}
size="sm"
onClick={() => navigate(-1)}
>
<IoMdArrowRoundBack className="size-5 text-secondary-foreground" />
{isDesktop && (
<div className="text-secondary-foreground">
{t("button.back", { ns: "common" })}
</div>
)}
</Button>
)}
{supportsFullscreen && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={fullscreen ? FaCompress : FaExpand}
isActive={fullscreen}
title={
fullscreen
? t("button.close", { ns: "common" })
: t("button.fullscreen", { ns: "common" })
}
transcriptionEnabled={
camera.audio_transcription.enabled_in_config
}
fullscreen={fullscreen}
streamName={streamName ?? ""}
setStreamName={setStreamName}
preferredLiveMode={preferredLiveMode}
playInBackground={playInBackground ?? false}
setPlayInBackground={setPlayInBackground}
showStats={showStats}
setShowStats={setShowStats}
isRestreamed={isRestreamed ?? false}
setLowBandwidth={setLowBandwidth}
supportsAudioOutput={supportsAudioOutput}
supports2WayTalk={supports2WayTalk}
cameraEnabled={cameraEnabled}
onClick={toggleFullscreen}
/>
</div>
</TooltipProvider>
)}
{!isIOS && !isFirefox && preferredLiveMode != "jsmpeg" && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={LuPictureInPicture}
isActive={pip}
title={
pip
? t("button.close", { ns: "common" })
: t("button.pictureInPicture", { ns: "common" })
}
onClick={() => {
if (!pip) {
setPip(true);
} else {
document.exitPictureInPicture();
setPip(false);
}
}}
disabled={!cameraEnabled}
/>
)}
{supports2WayTalk && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={mic ? FaMicrophone : FaMicrophoneSlash}
isActive={mic}
title={
mic
? t("twoWayTalk.disable", { ns: "views/live" })
: t("twoWayTalk.enable", { ns: "views/live" })
}
onClick={() => {
setMic(!mic);
if (!mic && !audio) {
setAudio(true);
}
}}
disabled={!cameraEnabled}
/>
)}
{supportsAudioOutput && preferredLiveMode != "jsmpeg" && (
<CameraFeatureToggle
className="p-2 md:p-0"
variant={fullscreen ? "overlay" : "primary"}
Icon={audio ? GiSpeaker : GiSpeakerOff}
isActive={audio ?? false}
title={
audio
? t("cameraAudio.disable", { ns: "views/live" })
: t("cameraAudio.enable", { ns: "views/live" })
}
onClick={() => setAudio(!audio)}
disabled={!cameraEnabled}
/>
)}
<FrigateCameraFeatures
camera={camera}
recordingEnabled={camera.record.enabled_in_config}
audioDetectEnabled={camera.audio.enabled_in_config}
autotrackingEnabled={camera.onvif.autotracking.enabled_in_config}
transcriptionEnabled={
camera.audio_transcription.enabled_in_config
}
fullscreen={fullscreen}
streamName={streamName ?? ""}
setStreamName={setStreamName}
preferredLiveMode={preferredLiveMode}
playInBackground={playInBackground ?? false}
setPlayInBackground={setPlayInBackground}
showStats={showStats}
setShowStats={setShowStats}
isRestreamed={isRestreamed ?? false}
setLowBandwidth={setLowBandwidth}
supportsAudioOutput={supportsAudioOutput}
supports2WayTalk={supports2WayTalk}
cameraEnabled={cameraEnabled}
/>
</div>
</div>
<div id="player-container" className="size-full" ref={containerRef}>
<TransformComponent
@@ -707,27 +703,25 @@ function TooltipButton({
...props
}: TooltipButtonProps) {
return (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Button
aria-label={label}
onClick={onClick}
onMouseDown={onMouseDown}
onMouseUp={onMouseUp}
onTouchStart={onTouchStart}
onTouchEnd={onTouchEnd}
className={className}
{...props}
>
{children}
</Button>
</TooltipTrigger>
<TooltipContent>
<p>{label}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Button
aria-label={label}
onClick={onClick}
onMouseDown={onMouseDown}
onMouseUp={onMouseUp}
onTouchStart={onTouchStart}
onTouchEnd={onTouchEnd}
className={className}
{...props}
>
{children}
</Button>
</TooltipTrigger>
<TooltipContent>
<p>{label}</p>
</TooltipContent>
</Tooltip>
);
}
@@ -961,59 +955,56 @@ function PtzControlPanel({
)}
{ptz?.features?.includes("pt-r-fov") && (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Button
className={`${clickOverlay ? "text-selected" : "text-primary"}`}
aria-label={t("ptz.move.clickMove.label")}
onClick={() => setClickOverlay(!clickOverlay)}
>
<TbViewfinder />
</Button>
</TooltipTrigger>
<TooltipContent>
<p>
{clickOverlay
? t("ptz.move.clickMove.disable")
: t("ptz.move.clickMove.enable")}
</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Button
className={`${clickOverlay ? "text-selected" : "text-primary"}`}
aria-label={t("ptz.move.clickMove.label")}
onClick={() => setClickOverlay(!clickOverlay)}
>
<TbViewfinder />
</Button>
</TooltipTrigger>
<TooltipContent>
<p>
{clickOverlay
? t("ptz.move.clickMove.disable")
: t("ptz.move.clickMove.enable")}
</p>
</TooltipContent>
</Tooltip>
)}
{(ptz?.presets?.length ?? 0) > 0 && (
<TooltipProvider>
<DropdownMenu modal={!isDesktop}>
<Tooltip>
<TooltipTrigger asChild>
<DropdownMenu modal={!isDesktop}>
<DropdownMenuTrigger asChild>
<Button aria-label={t("ptz.presets")}>
<BsThreeDotsVertical />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent
className="scrollbar-container max-h-[40dvh] overflow-y-auto"
onCloseAutoFocus={(e) => e.preventDefault()}
>
{ptz?.presets.map((preset) => (
<DropdownMenuItem
key={preset}
aria-label={preset}
className="cursor-pointer"
onSelect={() => sendPtz(`preset_${preset}`)}
>
{preset}
</DropdownMenuItem>
))}
</DropdownMenuContent>
</DropdownMenu>
<DropdownMenuTrigger asChild>
<Button aria-label={t("ptz.presets")}>
<BsThreeDotsVertical />
</Button>
</DropdownMenuTrigger>
</TooltipTrigger>
<TooltipContent>
<p>{t("ptz.presets")}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
<DropdownMenuContent
className="scrollbar-container max-h-[40dvh] overflow-y-auto"
onCloseAutoFocus={(e) => e.preventDefault()}
>
{ptz?.presets.map((preset) => (
<DropdownMenuItem
key={preset}
aria-label={preset}
className="cursor-pointer"
onSelect={() => sendPtz(`preset_${preset}`)}
>
{preset}
</DropdownMenuItem>
))}
</DropdownMenuContent>
</DropdownMenu>
)}
</div>
);
@@ -1401,9 +1392,11 @@ function FrigateCameraFeatures({
}}
>
<SelectTrigger className="w-full">
{Object.keys(camera.live.streams).find(
(key) => camera.live.streams[key] === streamName,
)}
<SelectValue>
{Object.keys(camera.live.streams).find(
(key) => camera.live.streams[key] === streamName,
)}
</SelectValue>
</SelectTrigger>
<SelectContent>
@@ -1733,9 +1726,11 @@ function FrigateCameraFeatures({
}}
>
<SelectTrigger className="w-full">
{Object.keys(camera.live.streams).find(
(key) => camera.live.streams[key] === streamName,
)}
<SelectValue>
{Object.keys(camera.live.streams).find(
(key) => camera.live.streams[key] === streamName,
)}
</SelectValue>
</SelectTrigger>
<SelectContent>


@@ -391,7 +391,6 @@ export default function FrigatePlusSettingsView({
className="cursor-pointer"
value={id}
disabled={
model.type != config.model.model_type ||
!model.supportedDetectors.includes(
Object.values(config.detectors)[0]
.type,