Compare commits


30 Commits

Author SHA1 Message Date
Nicolas Mowen
1d5c2466a8 Update HIKVISION camera link in hardware documentation (#21256) 2025-12-12 14:25:22 -06:00
GuoQing Liu
0a293aebab docs: update OpenVINO D-FINE configuration default device (#21231)
* docs: remove OpenVINO D-FINE configuration device

* docs: change D-FINE model detectors default device
2025-12-11 06:31:52 -07:00
User873902
1de7519d1a Update camera_specific.md for Wyze Cameras (Thingino) (#21221)
* Update camera_specific.md

Wyze Cameras alternative firmware considerations.

* Update docs/docs/configuration/camera_specific.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/configuration/camera_specific.md

* Update camera_specific.md

Moved Wyze Camera section

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-10 10:33:10 -07:00
GuoQing Liu
c3f596327e docs: fix the missing quotes in the Reolink example within the documentation (#21178) 2025-12-07 07:38:41 -07:00
Nicolas Mowen
90344540b3 Fix jetson build (#21173) 2025-12-06 09:16:23 -06:00
Josh Hawkins
7167cf57c5 pin cryptography version to fix vapid issues (#21126) 2025-12-02 07:20:50 -07:00
Josh Hawkins
e47e82f4be Pin onnx in rfdetr model generation command (#21127)
* pin onnx in rfdetr model generation command

* Apply suggestion from @NickM-27

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-02 08:15:12 -06:00
munit85
a43d294bd1 Add Axis Q-6155E camera configuration details (#21105)
* Add Axis Q-6155E camera configuration details

Added Axis Q-6155E camera details with ONVIF service port information.

* Update Axis Q-6155E ONVIF autotracking support details

Added the reason for autotracking not working
2025-12-01 10:47:01 -07:00
Josh Hawkins
9f95a5f31f version bump in docs (#21111) 2025-12-01 07:21:27 -07:00
Josh Hawkins
592c245dcd Fixes (#21061)
* require admin role to delete users

* explicitly prevent deletion of admin user

* Recordings playback fixes

* Remove nvidia pyindex

* Update version

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-26 07:27:16 -06:00
h-leth
914ff4f1e5 add comment about unifi g5 and newer cams (#21003) 2025-11-22 12:41:13 -06:00
Josh Hawkins
9589c5fc24 Fix rf-detr heading (#20963)
The link earlier in the file was referencing "#downloading-rf-detr-model"
2025-11-18 18:15:38 -07:00
Nicolas Mowen
3620ef27db Update hailo installation instructions (#20847)
* Update hailo docs installation

* Adjust section separation
2025-11-08 13:21:15 -06:00
GuoQing Liu
5cf2ae0121 docs: remove webrtc not support H.265 tips (#20769) 2025-11-05 06:23:45 -06:00
Nicolas Mowen
17d2bc240a Update recommended hardware to list more models (#20777)
* Update recommended hardware to list more models

* Update hardware.md with new Intel models and links
2025-11-04 10:56:28 -06:00
Nicolas Mowen
6fd7f862f5 Update coral docs / links (#20674)
* Revise GPU and AI accelerator recommendations

Updated hardware recommendations for AI acceleration.

* Revise PCIe Coral driver installation instructions

Updated instructions for PCIe Coral driver installation.

* Revise Coral driver installation instructions

Updated driver installation instructions for PCIe and M.2 versions of Google Coral.

* Change PCIe Coral driver link in getting_started.md

Updated the link for PCIe Coral driver instructions.

* Change PCIe Coral driver link in installation guide

Updated the link for PCIe Coral driver instructions.

* Update Coral TPU recommendation in hardware documentation

Added a warning about the Coral TPU's recommendation status for new Frigate installations and suggested alternatives.
2025-10-26 06:56:01 -05:00
Nicolas Mowen
5d038b5c75 Update PWA requirements and add usage section (#20562)
Added VPN as a secure context option for PWA installation and included a usage section.
2025-10-26 05:39:09 -06:00
Nicolas Mowen
c5fe354552 Improve Reolink Camera Documentation (#20605)
* Improve Reolink Camera Documentation

* Update Reolink configuration link in live.md
2025-10-21 16:20:41 -06:00
Josh Hawkins
5dc8a85f2f Update Azure OpenAI genai docs (#20549)
* Update azure openai genai docs

* tweak url
2025-10-18 06:44:26 -06:00
Nicolas Mowen
0302db1c43 Fix model exports (#20540) 2025-10-17 07:16:30 -05:00
Nicolas Mowen
a4764563a5 Fix YOLOv9 export script (#20514) 2025-10-16 07:56:37 -05:00
Josh Hawkins
942a61ddfb version bump in docs (#20501) 2025-10-15 05:53:31 -06:00
Nicolas Mowen
4d582062fb Ensure that a user must provide an image in an expected location (#20491)
* Ensure that a user must provide an image in an expected location

* Use const
2025-10-14 16:29:20 -05:00
Nicolas Mowen
e0a8445bac Improve rf-detr export (#20485) 2025-10-14 08:32:44 -05:00
Josh Hawkins
2a271c0f5e Update GenAI docs for Gemini model deprecation (#20462) 2025-10-13 10:00:21 -06:00
Nicolas Mowen
925bf78811 Update review topic description (#20445) 2025-10-12 07:28:08 -05:00
Sean Kelly
59102794e8 Add keyboard shortcut for switching to previous label (#20426)
* Add keyboard shortcut for switching to previous label

* Update docs/docs/plus/annotating.md

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>

---------

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
2025-10-11 10:43:41 -06:00
mpking828
20e5e3bdc0 Update camera_specific.md to fix 2 way audio example for Reolink (#20343)
Update camera_specific.md to fix 2 way audio example for Reolink
2025-10-03 08:49:51 -06:00
AmirHossein_Omidi
b94ebda9e5 Update license_plate_recognition.md (#20306)
* Update license_plate_recognition.md

Add PaddleOCR description for license plate recognition in Frigate docs

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-10-01 08:18:47 -05:00
Nicolas Mowen
8cdaef307a Update face rec docs (#20256)
* Update face rec docs

* clarify

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-28 11:31:59 -05:00
745 changed files with 8031 additions and 36272 deletions

View File

@@ -23,7 +23,7 @@ jobs:
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -47,7 +47,7 @@ jobs:
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -77,12 +77,42 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
jetson_jp5_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 5)
env:
ARCH: arm64
BASE_IMAGE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
SLIM_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
TRT_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
jetson_jp6_build:
runs-on: ubuntu-22.04-arm
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -113,7 +143,7 @@ jobs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -155,7 +185,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -173,31 +203,6 @@ jobs:
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
synaptics_build:
runs-on: ubuntu-22.04-arm
name: Synaptics Build
needs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Synaptics build
uses: docker/bake-action@v6
with:
source: .
push: true
targets: synaptics
files: docker/synaptics/synaptics.hcl
set: |
synaptics.tags=${{ steps.setup.outputs.image-name }}-synaptics
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:
@@ -212,7 +217,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -4,19 +4,43 @@ on:
pull_request:
paths-ignore:
- "docs/**"
- ".github/*.yml"
- ".github/DISCUSSION_TEMPLATE/**"
- ".github/ISSUE_TEMPLATE/**"
- ".github/**"
env:
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:
runs-on: ubuntu-latest
name: Build Devcontainer
# The Dockerfile contains features that requires buildkit, and since the
# devcontainer cli uses docker-compose to build the image, the only way to
# ensure docker-compose uses buildkit is to explicitly enable it.
env:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
run: devcontainer build --workspace-folder .
# It would be nice to also test the following commands, but for some
# reason they don't work even though in VS Code devcontainer works.
# - name: Start devcontainer
# run: devcontainer up --workspace-folder .
# - name: Run devcontainer scripts
# run: devcontainer run-user-commands --workspace-folder .
web_lint:
name: Web - Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
@@ -32,7 +56,7 @@ jobs:
name: Web - Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
@@ -52,7 +76,7 @@ jobs:
name: Python Checks
steps:
- name: Check out the repository
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
@@ -75,21 +99,16 @@ jobs:
name: Python Tests
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
env:
DOCKER_BUILDKIT: "1"
run: devcontainer build --workspace-folder .
- name: Start devcontainer
run: devcontainer up --workspace-folder .
- name: Run mypy in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m mypy --config-file frigate/mypy.ini frigate"
- name: Run unit tests in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m unittest"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build
run: make
- name: Run mypy
run: docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
- name: Run tests
run: docker run --rm --entrypoint=python3 frigate:latest -u -m unittest

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
with:
persist-credentials: false
- id: lowercaseRepo
@@ -18,7 +18,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.17.0
VERSION = 0.16.3
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
@@ -20,12 +20,6 @@ local: version
--tag frigate:latest \
--load
debug: version
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--build-arg DEBUG=true \
--tag frigate:latest \
--load
amd64:
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \

View File

@@ -12,7 +12,7 @@
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
Use of a GPU, Integrated GPU, or AI accelerator such as a [Hailo](https://hailo.ai/) is highly recommended. Dedicated hardware will outperform even the best CPUs with very little overhead.
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary

View File

@@ -4,13 +4,13 @@ from statistics import mean
import numpy as np
import frigate.util as util
from frigate.config import DetectorTypeEnum
from frigate.object_detection.base import (
ObjectDetectProcess,
RemoteObjectDetector,
load_labels,
)
from frigate.util.process import FrigateProcess
my_frame = np.expand_dims(np.full((300, 300, 3), 1, np.uint8), axis=0)
labels = load_labels("/labelmap.txt")
@@ -91,7 +91,7 @@ edgetpu_process_2 = ObjectDetectProcess(
)
for x in range(0, 10):
camera_process = FrigateProcess(
camera_process = util.Process(
target=start, args=(x, 300, detection_queue, events[str(x)])
)
camera_process.daemon = True

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.9/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH
@@ -148,7 +148,6 @@ RUN --mount=type=bind,source=docker/main/install_s6_overlay.sh,target=/deps/inst
FROM base AS wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
ARG DEBUG=false
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
@@ -178,8 +177,6 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
COPY docker/main/requirements.txt /requirements.txt
COPY docker/main/requirements-dev.txt /requirements-dev.txt
RUN pip3 install -r /requirements.txt
# Build pysqlite3 from source
@@ -187,10 +184,7 @@ COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt && \
if [ "$DEBUG" = "true" ]; then \
pip3 wheel --wheel-dir=/wheels -r /requirements-dev.txt; \
fi
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
@@ -212,7 +206,6 @@ COPY docker/main/rootfs/ /
# Frigate deps (ffmpeg, python, nginx, go2rtc, s6-overlay, etc)
FROM slim-base AS deps
ARG TARGETARCH
ARG BASE_IMAGE
ARG DEBIAN_FRONTEND
# http://stackoverflow.com/questions/48162574/ddg#49462622
@@ -231,15 +224,9 @@ ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1
# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8
# Set NumPy to ignore getlimits warning
ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow error only
ENV TF_CPP_MIN_LOG_LEVEL=3
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies
@@ -256,10 +243,6 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
pip3 install -U /deps/wheels/*.whl
# Install MemryX runtime (requires libgomp (OpenMP) in the final docker image)
RUN --mount=type=bind,source=docker/main/install_memryx.sh,target=/deps/install_memryx.sh \
bash -c "bash /deps/install_memryx.sh"
COPY --from=deps-rootfs / /
RUN ldconfig

View File

@@ -5,21 +5,27 @@ set -euxo pipefail
SQLITE3_VERSION="3.46.1"
PYSQLITE3_VERSION="0.5.3"
# Install libsqlite3-dev if not present (needed for some base images like NVIDIA TensorRT)
if ! dpkg -l | grep -q libsqlite3-dev; then
echo "Installing libsqlite3-dev for compilation..."
apt-get update && apt-get install -y libsqlite3-dev && rm -rf /var/lib/apt/lists/*
fi
# Fetch the pre-built sqlite amalgamation instead of building from source
if [[ ! -d "sqlite" ]]; then
mkdir sqlite
cd sqlite
# Download the pre-built amalgamation from sqlite.org
# For SQLite 3.46.1, the amalgamation version is 3460100
SQLITE_AMALGAMATION_VERSION="3460100"
wget https://www.sqlite.org/2024/sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}.zip -O sqlite-amalgamation.zip
unzip sqlite-amalgamation.zip
mv sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}/* .
rmdir sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}
rm sqlite-amalgamation.zip
cd ../
fi

View File

@@ -19,8 +19,7 @@ apt-get -qq install --no-install-recommends -y \
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0 \
libgomp1 # memryx detector
libusb-1.0.0
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
@@ -32,18 +31,6 @@ unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# install mesa-teflon-delegate from bookworm-backports
# Only available for arm64 at the moment
if [[ "${TARGETARCH}" == "arm64" ]]; then
if [[ "${BASE_IMAGE}" == *"nvcr.io/nvidia/tensorrt"* ]]; then
echo "Info: Skipping apt-get commands because BASE_IMAGE includes 'nvcr.io/nvidia/tensorrt' for arm64."
else
echo "deb http://deb.debian.org/debian bookworm-backports main" | tee /etc/apt/sources.list.d/bookworm-backbacks.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y mesa-teflon-delegate/bookworm-backports
fi
fi
# ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
@@ -91,33 +78,11 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2
apt-get -qq install -y ocl-icd-libopencl1
intel-opencl-icd=24.35.30872.31-996~22.04 intel-level-zero-gpu=1.3.29735.27-914~22.04 intel-media-va-driver-non-free=24.3.3-996~22.04 \
libmfx1=23.2.2-880~22.04 libmfxgen1=24.2.4-914~22.04 libvpl2=1:2.13.0.0-996~22.04
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
# install legacy and standard intel icd and level-zero-gpu
# see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
# needed core package
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/libigdgmm12_22.5.5_amd64.deb
dpkg -i libigdgmm12_22.5.5_amd64.deb
rm libigdgmm12_22.5.5_amd64.deb
# legacy packages
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
# standard packages
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-opencl-icd_24.52.32224.5_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
dpkg -i *.deb
rm *.deb
fi
if [[ "${TARGETARCH}" == "arm64" ]]; then

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -e
# Download the MxAccl for Frigate github release
wget https://github.com/memryx/mx_accl_frigate/archive/refs/heads/main.zip -O /tmp/mxaccl.zip
unzip /tmp/mxaccl.zip -d /tmp
mv /tmp/mx_accl_frigate-main /opt/mx_accl_frigate
rm /tmp/mxaccl.zip
# Install Python dependencies
pip3 install -r /opt/mx_accl_frigate/freeze
# Link the Python package dynamically
SITE_PACKAGES=$(python3 -c "import site; print(site.getsitepackages()[0])")
ln -s /opt/mx_accl_frigate/memryx "$SITE_PACKAGES/memryx"
# Copy architecture-specific shared libraries
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then
cp /opt/mx_accl_frigate/memryx/x86/libmemx.so* /usr/lib/x86_64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/x86/libmx_accl.so* /usr/lib/x86_64-linux-gnu/
elif [[ "$ARCH" == "aarch64" ]]; then
cp /opt/mx_accl_frigate/memryx/arm/libmemx.so* /usr/lib/aarch64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/arm/libmx_accl.so* /usr/lib/aarch64-linux-gnu/
else
echo "Unsupported architecture: $ARCH"
exit 1
fi
# Refresh linker cache
ldconfig

View File

@@ -1,4 +1 @@
ruff
# types
types-peewee == 3.17.*

View File

@@ -1,28 +1,25 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.12.*
starlette == 0.47.*
starlette-context == 0.4.*
fastapi[standard-no-fastapi-cloud-cli] == 0.116.*
uvicorn == 0.35.*
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
uvicorn == 0.30.*
slowapi == 0.1.*
joserfc == 1.2.*
joserfc == 1.0.*
cryptography == 44.0.*
pathvalidate == 3.3.*
pathvalidate == 3.2.*
markupsafe == 3.0.*
python-multipart == 0.0.20
# Classification Model Training
tensorflow == 2.19.* ; platform_machine == 'aarch64'
tensorflow-cpu == 2.19.* ; platform_machine == 'x86_64'
python-multipart == 0.0.12
# General
mypy == 1.6.1
onvif-zeep-async == 4.0.*
onvif-zeep-async == 3.1.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
psutil == 7.1.*
psutil == 6.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2025.*
@@ -31,7 +28,7 @@ ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
norfair == 2.3.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
@@ -40,15 +37,16 @@ titlecase == 2.4.*
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.16.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2025.3.*
onnxruntime == 1.22.*
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.5.*
ollama == 0.3.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
@@ -74,10 +72,3 @@ prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'
# audio transcription
sherpa-onnx==1.12.*
faster-whisper==1.1.*
librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*

View File

@@ -1,2 +1 @@
scikit-build == 0.18.*
nvidia-pyindex

View File

@@ -10,7 +10,7 @@ echo "[INFO] Starting certsync..."
lefile="/etc/letsencrypt/live/frigate/fullchain.pem"
tls_enabled=`python3 /usr/local/nginx/get_listen_settings.py | jq -r .tls.enabled`
tls_enabled=`python3 /usr/local/nginx/get_tls_settings.py | jq -r .enabled`
while true
do

View File

@@ -85,7 +85,7 @@ python3 /usr/local/nginx/get_base_path.py | \
-out /usr/local/nginx/conf/base_path.conf
# build templates for optional TLS support
python3 /usr/local/nginx/get_listen_settings.py | \
python3 /usr/local/nginx/get_tls_settings.py | \
tempio -template /usr/local/nginx/templates/listen.gotmpl \
-out /usr/local/nginx/conf/listen.conf

View File

@@ -26,10 +26,6 @@ try:
except FileNotFoundError:
config: dict[str, Any] = {}
tls_config: dict[str, any] = config.get("tls", {"enabled": True})
networking_config = config.get("networking", {})
ipv6_config = networking_config.get("ipv6", {"enabled": False})
tls_config: dict[str, Any] = config.get("tls", {"enabled": True})
output = {"tls": tls_config, "ipv6": ipv6_config}
print(json.dumps(output))
print(json.dumps(tls_config))

View File

@@ -1,45 +1,33 @@
# Internal (IPv4 always; IPv6 optional)
# intended for internal traffic, not protected by auth
listen 5000;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:5000;{{ end }}{{ end }}
{{ if not .enabled }}
# intended for external traffic, protected by auth
{{ if .tls }}
{{ if .tls.enabled }}
# external HTTPS (IPv4 always; IPv6 optional)
listen 8971 ssl;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971 ssl;{{ end }}{{ end }}
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ else }}
# external HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
{{ end }}
listen 8971;
{{ else }}
# (No tls section) default to HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
# intended for external traffic, protected by auth
listen 8971 ssl;
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ end }}

View File

@@ -1,47 +0,0 @@
#!/bin/bash
set -e # Exit immediately if any command fails
set -o pipefail
echo "Starting MemryX driver and runtime installation..."
# Detect architecture
arch=$(uname -m)
# Purge existing packages and repo
echo "Removing old MemryX installations..."
# Remove any holds on MemryX packages (if they exist)
sudo apt-mark unhold memx-* mxa-manager || true
sudo apt purge -y memx-* mxa-manager || true
sudo rm -f /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc
# Install kernel headers
echo "Installing kernel headers for: $(uname -r)"
sudo apt update
sudo apt install -y dkms linux-headers-$(uname -r)
# Add MemryX key and repo
echo "Adding MemryX GPG key and repository..."
wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
# Update and install memx-drivers
echo "Installing memx-drivers..."
sudo apt update
sudo apt install -y memx-drivers
# ARM-specific board setup
if [[ "$arch" == "aarch64" || "$arch" == "arm64" ]]; then
echo "Running ARM board setup..."
sudo mx_arm_setup
fi
echo -e "\n\n\033[1;31mYOU MUST RESTART YOUR COMPUTER NOW\033[0m\n\n"
# Install other runtime packages
packages=("memx-accl" "mxa-manager")
for pkg in "${packages[@]}"; do
echo "Installing $pkg..."
sudo apt install -y "$pkg"
done
echo "MemryX installation complete!"

View File

@@ -11,8 +11,7 @@ COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
RUN rm -rf /rk-wheels/torch-*

View File

@@ -2,7 +2,7 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=1
ARG ROCM=6.3.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
@@ -13,16 +13,16 @@ FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt update -qq && \
RUN apt update && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.0.1/ubuntu/jammy/amdgpu-install_7.0.1.70001-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -qq -y rocm
apt install -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* librocroller.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
@@ -33,10 +33,7 @@ RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM deps AS deps-prelim
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
apt-get install -y libnuma1 && \
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers
RUN apt-get update && apt-get install -y libnuma1
WORKDIR /opt/frigate
COPY --from=rootfs / /
@@ -47,7 +44,7 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
RUN python3 -m pip config set global.break-system-packages true
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN pip3 uninstall -y onnxruntime \
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 install -r /requirements.txt
#######################################################################
@@ -64,10 +61,9 @@ COPY --from=rocm /opt/rocm-dist/ /
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1
ENV TF_ROCM_USE_IMMEDIATE_MODE=1
COPY --from=rocm-dist / /

View File

@@ -1,6 +0,0 @@
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm-backports
Components: main
Enabled: yes
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

View File

@@ -1 +1 @@
onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.0.1/onnxruntime_migraphx-1.23.0-cp311-cp311-linux_x86_64.whl
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl

View File

@@ -2,7 +2,7 @@ variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "7.0.1"
default = "6.3.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""

View File

@@ -1,28 +0,0 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS synap1680-wheels
ARG TARGETARCH
# Install dependencies
RUN wget -qO- "https://github.com/GaryHuang-ASUS/synaptics_astra_sdk/releases/download/v1.5.0/Synaptics-SL1680-v1.5.0-rt.tar" | tar -C / -xzf -
RUN wget -P /wheels/ "https://github.com/synaptics-synap/synap-python/releases/download/v0.0.4-preview/synap_python-0.0.4-cp311-cp311-manylinux_2_35_aarch64.whl"
FROM deps AS synap1680-deps
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=synap1680-wheels,source=/wheels,target=/deps/synap-wheels \
pip3 install --no-deps -U /deps/synap-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -1,27 +0,0 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "rootfs"
}
target synaptics {
dockerfile = "docker/synaptics/Dockerfile"
contexts = {
wheels = "target:wheels",
deps = "target:deps",
rootfs = "target:rootfs"
}
platforms = ["linux/arm64"]
}

View File

@@ -1,15 +0,0 @@
BOARDS += synaptics
local-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=frigate:latest-synaptics \
--load
build-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics
push-synaptics: build-synaptics
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics \
--push

View File

@@ -12,16 +12,13 @@ ARG PIP_BREAK_SYSTEM_PACKAGES
# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
# remove dependencies from the requirements that have type constraints
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/trt-wheels -c /requirements-wheels.txt -r /requirements-tensorrt.txt
RUN pip3 wheel --wheel-dir=/trt-wheels -c /requirements-wheels.txt -r /requirements-tensorrt.txt
FROM deps AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 uninstall -y onnxruntime tensorflow-cpu \
pip3 uninstall -y onnxruntime-openvino tensorflow-cpu \
&& pip3 install -U /deps/trt-wheels/*.whl
COPY --from=rootfs / /

View File

@@ -112,7 +112,7 @@ RUN apt-get update \
&& apt-get install -y protobuf-compiler libprotobuf-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/requirements-models-arm64.txt,target=/requirements-tensorrt-models.txt \
pip3 wheel --wheel-dir=/trt-model-wheels -r /requirements-tensorrt-models.txt
pip3 wheel --wheel-dir=/trt-model-wheels --no-deps -r /requirements-tensorrt-models.txt
FROM wget AS jetson-ffmpeg
ARG DEBIAN_FRONTEND
@@ -145,7 +145,8 @@ COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
--mount=type=bind,from=trt-model-wheels,source=/trt-model-wheels,target=/deps/trt-model-wheels \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl /deps/trt-model-wheels/*.whl \
&& pip3 install -U /deps/trt-wheels/*.whl \
&& pip3 install -U /deps/trt-model-wheels/*.whl \
&& ldconfig
WORKDIR /opt/frigate/

View File

@@ -13,7 +13,6 @@ nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
tensorflow==2.19.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1,2 +1,3 @@
onnx == 1.14.0; platform_machine == 'aarch64'
protobuf == 3.20.3; platform_machine == 'aarch64'
numpy == 1.23.*; platform_machine == 'aarch64' # required by python-tensorrt 8.2.1 (Jetpack 4.6)

View File

@@ -177,11 +177,9 @@ listen [::]:5000 ipv6only=off;
By default, Frigate runs at the root path (`/`). However, some setups require running Frigate under a custom path prefix (e.g. `/frigate`), especially when Frigate is located behind a reverse proxy that requires path-based routing.
### Set Base Path via HTTP Header
The preferred way to configure the base path is through the `X-Ingress-Path` HTTP header, which needs to be set to the desired base path in an upstream reverse proxy.
For example, in Nginx:
```
location /frigate {
proxy_set_header X-Ingress-Path /frigate;
@@ -190,11 +188,9 @@ location /frigate {
```
### Set Base Path via Environment Variable
When it is not feasible to set the base path via an HTTP header, it can also be set via the `FRIGATE_BASE_PATH` environment variable in the Docker Compose file.
For example:
```
services:
frigate:
@@ -204,7 +200,6 @@ services:
```
This can be used, for example, to access Frigate via a Tailscale agent (https) by simply forwarding all requests to the base path (http):
```
tailscale serve --https=443 --bg --set-path /frigate http://localhost:5000/frigate
```
@@ -223,7 +218,7 @@ To do this:
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.10; there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.9; there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -50,7 +50,7 @@ cameras:
### Configuring Minimum Volume
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. MQTT Explorer can be used on the audio topic to see what volume level is being detected.
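
For reference, a minimal sketch of where `min_volume` sits in the config; the threshold value below is illustrative only and should be tuned against the levels observed for your camera:

```yaml
audio:
  enabled: True
  # Illustrative RMS threshold; audio detection only runs above this level
  min_volume: 500
```
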
:::tip
@@ -72,77 +72,3 @@ audio:
- speech
- yell
```
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. To enable transcription, it is recommended to only configure the features at the global level, and enable it at the individual camera level.
```yaml
audio_transcription:
enabled: False
device: ...
model_size: ...
```
Enable audio transcription for select cameras at the camera level:
```yaml
cameras:
back_yard:
...
audio_transcription:
enabled: True
```
:::note
Audio detection must be enabled and configured as described above in order to use audio transcription features.
:::
The optional config parameters that can be set at the global level include:
- **`enabled`**: Enable or disable the audio transcription feature.
- Default: `False`
- It is recommended to only configure the features at the global level, and enable it at the individual camera level.
- **`device`**: Device to use to run transcription and translation models.
- Default: `CPU`
- This can be `CPU` or `GPU`. The `sherpa-onnx` models are lightweight and run on the CPU only. The `whisper` models can run on GPU but are only supported on CUDA hardware.
- **`model_size`**: The size of the model used for live transcription.
- Default: `small`
- This can be `small` or `large`. The `small` setting uses `sherpa-onnx` models that are fast, lightweight, and always run on the CPU but are not as accurate as the `whisper` model.
- This config option applies to **live transcription only**. Recorded `speech` events will always use a different `whisper` model (and can be accelerated for CUDA hardware if available with `device: GPU`).
- **`language`**: Defines the language used by `whisper` to translate `speech` audio events (and live audio only if using the `large` model).
- Default: `en`
- You must use a valid [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
- Transcriptions for `speech` events are translated.
- Live audio is translated only if you are using the `large` model. The `small` `sherpa-onnx` model is English-only.
The only field that is valid at the camera level is `enabled`.
#### Live transcription
The single camera Live view in the Frigate UI supports live transcription of audio for streams defined with the `audio` role. Use the Enable/Disable Live Audio Transcription button/switch to toggle transcription processing. When speech is heard, the UI will display a black box over the top of the camera stream with text. The MQTT topic `frigate/<camera_name>/audio/transcription` will also be updated in real-time with transcribed text.
Results can be error-prone due to a number of factors, including:
- Poor quality camera microphone
- Distance of the audio source to the camera microphone
- Low audio bitrate setting in the camera
- Background noise
- Using the `small` model - it's fast, but not accurate for poor quality audio
For speech sources close to the camera with minimal background noise, use the `small` model.
If you have CUDA hardware, you can experiment with the `large` `whisper` model on GPU. Performance is not quite as fast as the `sherpa-onnx` `small` model, but live transcription is far more accurate. Using the `large` model with CPU will likely be too slow for real-time transcription.
#### Transcription and translation of `speech` audio events
Any `speech` events in Explore can be transcribed and/or translated through the Transcribe button in the Tracked Object Details pane.
In order to use transcription and translation for past events, you must enable audio detection and define `speech` as an audio type to listen for in your config. To have `speech` events translated into the language of your choice, set the `language` config parameter with the correct [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
The transcribed/translated speech will appear in the description box in the Tracked Object Details pane. If Semantic Search is enabled, embeddings are generated for the transcription text and are fully searchable using the description search type.
Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.

View File

@@ -59,7 +59,6 @@ The default session length for user authentication in Frigate is 24 hours. This
While the default provides a balance of security and convenience, you can customize this duration to suit your specific security requirements and user experience preferences. The session length is configured in seconds.
The default value of `86400` will expire the authentication session after 24 hours. Some other examples:
- `0`: Setting the session length to 0 will require a user to log in every time they access the application or after a very short, immediate timeout.
- `604800`: Setting the session length to 604800 will require a user to log in if the token is not refreshed for 7 days (see the sketch below).
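
For example, a minimal sketch of that 7-day session, assuming the `session_length` option sits under the top-level `auth` section as described above:

```yaml
auth:
  # Session length in seconds; 604800 = 7 days.
  # A token not refreshed within this window expires and requires a new login.
  session_length: 604800
```
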
@@ -81,7 +80,7 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET` (see the sketch after this list)
2. A file named `FRIGATE_JWT_SECRET` in the directory specified by the `CREDENTIALS_DIRECTORY` environment variable (defaults to the Docker Secrets directory: `/run/secrets/`)
2. A docker secret named `FRIGATE_JWT_SECRET` in `/run/secrets/`
3. A `jwt_secret` option from the Home Assistant Add-on options
4. A `.jwt_secret` file in the config directory
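
As a sketch of option 1, the secret can be supplied through Docker Compose; the value below is a placeholder, not a real secret:

```yaml
services:
  frigate:
    environment:
      # Placeholder; generate a real value with:
      #   python3 -c 'import secrets; print(secrets.token_hex(64))'
      FRIGATE_JWT_SECRET: "<generated-hex-secret>"
```
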
@@ -124,7 +123,7 @@ proxy:
role: x-forwarded-groups
```
Frigate supports `admin`, `viewer`, and custom roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
A default role can be provided. Any value in the mapped `role` header will override the default.
@@ -134,34 +133,6 @@ proxy:
default_role: viewer
```
## Role mapping
In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do not pass a Frigate-compatible role directly, but instead pass one or more group claims. To handle this, Frigate supports a `role_map` that translates upstream group names into Frigate's internal roles (`admin`, `viewer`, or custom).
```yaml
proxy:
...
header_map:
user: x-forwarded-user
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
operator: # Custom role mapping
- operators
```
In this example:
- If the proxy passes a role header containing `sysadmins` or `access-level-security`, the user is assigned the `admin` role.
- If the proxy passes a role header containing `camera-viewer`, the user is assigned the `viewer` role.
- If the proxy passes a role header containing `operators`, the user is assigned the `operator` custom role.
- If no mapping matches, Frigate falls back to `default_role` if configured.
- If `role_map` is not defined, Frigate assumes the role header directly contains `admin`, `viewer`, or a custom role name.
#### Port Considerations
**Authenticated Port (8971)**
@@ -170,7 +141,6 @@ In this example:
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- **Custom roles** → Read-only access limited to the cameras defined in `auth.roles[role]`.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.
**Unauthenticated Port (5000)**
@@ -216,41 +186,6 @@ Frigate supports user roles to control access to certain features in the UI and
- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.
- **Custom Roles**: Arbitrary role names (alphanumeric, dots/underscores) with specific camera permissions. These extend the system for granular access (e.g., "operator" for select cameras).
### Custom Roles and Camera Access
The viewer role provides read-only access to all cameras in the UI and API. Custom roles allow admins to limit read-only access to specific cameras. Each role specifies an array of allowed camera names. If a user is assigned a custom role, their account behaves like the **viewer** role, restricted to the designated cameras: they can only view Live, Review/History, Explore, and Export for those cameras. Backend API endpoints enforce this server-side (e.g., returning 403 for unauthorized cameras), and the frontend UI filters content accordingly (e.g., camera dropdowns show only permitted options).
### Role Configuration Example
```yaml
cameras:
front_door:
# ... camera config
side_yard:
# ... camera config
garage:
# ... camera config
auth:
enabled: true
roles:
operator: # Custom role
- front_door
- garage # Operator can access front and garage
neighbor:
- side_yard
```
If you want to provide access to all cameras to a specific user, just use the **viewer** role.
### Managing User Roles
1. Log in as an **admin** user via port `8971` (preferred), or unauthenticated via port `5000`.
2. Navigate to **Settings**.
3. In the **Users** section, edit a user's role by selecting from available roles (admin, viewer, or custom).
4. In the **Roles** section, add/edit/delete custom roles (select cameras via switches). Deleting a role auto-reassigns users to "viewer".
### Role Enforcement

View File

@@ -147,7 +147,7 @@ WEB Digest Algorithm - MD5
Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
| ---------------- | ------------------------- | -------------------------------- | ----------------------------------------------------------------------- |
| 5MP or lower | All | http-flv | Stream is h264 |
| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
@@ -164,13 +164,35 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream; use `channel[0..15]` in the stream url for the additional channels.
Setup of the main stream can also be done via RTSP, but this isn't always reliable on all hardware versions. The example configuration works with the oldest HW version RLN16-410 device with multiple types of cameras.
<details>
<summary>Example Config</summary>
:::tip
Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
NOTE: The RTSP stream cannot be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
:::
```yaml
go2rtc:
streams:
# example for connecting to a standard Reolink camera
your_reolink_camera:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
your_reolink_camera_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub"
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
- "ffmpeg:your_reolink_camera_via_nvr#audio=aac"
@@ -201,25 +223,16 @@ cameras:
roles:
- detect
```
#### Reolink Doorbell
The Reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
streams:
your_reolink_doorbell:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- rtsp://reolink_ip/Preview_01_sub
your_reolink_doorbell_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
</details>
### Unifi Protect Cameras
:::note
Unifi G5 and newer cameras need a Unifi Protect server to enable the rtsps stream; it's not possible to enable it in standalone mode.
:::
Unifi protect cameras require the rtspx stream to be used with go2rtc.
To utilize a Unifi protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.
@@ -231,7 +244,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-rtsp)
In the Unifi 2.0 update, Unifi Protect cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with Unifi Protect.
@@ -245,12 +258,15 @@ ffmpeg:
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example, Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
### Wyze Wireless Cameras
Some community members have found better performance on Wyze cameras by using an alternative firmware known as [Thingino](https://thingino.com/).
## USB Cameras (aka Webcams)
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
- Preparation outside of Frigate:
- Get USB camera path. Run `v4l2-ctl --list-devices` to get a listing of locally-connected cameras available. (You may need to install `v4l-utils` in a way appropriate for your Linux distribution). In the sample configuration below, we use `video=0` to correlate with a detected device path of `/dev/video0`.
- Get USB camera formats & resolutions. Run `ffmpeg -f v4l2 -list_formats all -i /dev/video0` to get an idea of what formats and resolutions the USB Camera supports. In the sample configuration below, we use a width of 1024 and height of 576 in the stream and detection settings based on what was reported back.
- If using Frigate in a container (e.g. Docker on TrueNAS), ensure you have USB Passthrough support enabled, along with a specific Host Device (`/dev/video0`) + Container Device (`/dev/video0`) listed.
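A minimal sketch of such a setup, assuming the `/dev/video0` device and 1024x576 resolution discovered above (the stream name `usb_camera` is a placeholder):
```yaml
go2rtc:
  streams:
    usb_camera:
      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"

cameras:
  usb_camera:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/usb_camera
          input_args: preset-rtsp-restream
          roles:
            - detect
    detect:
      width: 1024
      height: 576
```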


@@ -89,9 +89,7 @@ An ONVIF-capable camera that supports relative movement within the field of view
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback. If you'd like to report specific quirks or issues with a manufacturer or camera that would be helpful for other users, open a pull request to add to this list.
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`, plus `PTZAuxiliary`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.
This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -100,14 +98,13 @@ The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.or
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Axis Q-6155E | ✅ | ❌ | ONVIF service port: 80; Camera does not support MoveStatus. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, among others) have been reported to not support autotracking |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | ❌ | |
| Foscam | ✅ | ❌ | In general supports PTZ, but not relative movement. There are no official ONVIF certifications or tests available on the ONVIF Conformant Products Database. |
| Foscam R5 | ✅ | ❌ | |
| Foscam SD4 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
@@ -138,6 +135,3 @@ camera_groups:
icon: LuCar
order: 0
```
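For context, a complete group definition looks roughly like the following (the group and camera names are placeholders):
```yaml
camera_groups:
  vehicles:
    cameras:
      - driveway
      - garage
    icon: LuCar
    order: 0
```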
## Two-Way Audio
See the guide [here](/configuration/live/#two-way-talk).


@@ -1,73 +0,0 @@
---
id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
### Sub label vs Attribute
- **Sub label**:
- Applied to the object's `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Example use cases
### Sub label
- **Known pet vs unknown**: For `dog` objects, set the sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
### Attributes
- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
```yaml
classification:
  custom:
    dog:
      threshold: 0.8
      object_config:
        objects: [dog] # object labels to classify
        classification_type: sub_label # or: attribute
```
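A sketch of an attribute-type model for comparison, using the helmet example above (the model name `helmet` and threshold are illustrative):
```yaml
classification:
  custom:
    helmet:
      threshold: 0.8
      object_config:
        objects: [person] # object labels to classify
        classification_type: attribute
```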
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.


@@ -1,52 +0,0 @@
---
id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
## Example use cases
- **Door state**: Detect if a garage or front door is open vs closed.
- **Gate state**: Track if a driveway gate is open or closed.
- **Trash day**: Bins at curb vs no bins present.
- **Pool cover**: Cover on vs off.
## Configuration
State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.
```yaml
classification:
  custom:
    front_door:
      threshold: 0.8
      state_config:
        motion: true # run when motion overlaps the crop
        interval: 10 # also run every N seconds (optional)
        cameras:
          front:
            crop: [0, 180, 220, 400]
```
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
// TODO add this section once UI is implemented. Explain process of selecting a crop.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.


@@ -24,7 +24,7 @@ Frigate needs to first detect a `person` before it can detect and recognize a fa
Frigate has support for two face recognition model types:
- **small**: Frigate will run a FaceNet embedding model to recognize faces, which runs locally on the CPU. This model is optimized for efficiency and is not as accurate.
- **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU / NPU is available.
- **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU is available.
In both cases, a lightweight face landmark detection model is also used to align faces before running recognition.
@@ -34,7 +34,7 @@ All of these features run locally on your system.
The `small` model is optimized for efficiency and runs on the CPU; most CPUs should run the model efficiently.
The `large` model is optimized for accuracy; an integrated or discrete GPU / NPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
The `large` model is optimized for accuracy; an integrated or discrete GPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
## Configuration
@@ -73,9 +73,6 @@ Fine-tune face recognition with these optional parameters at the global level of
- Default: `100`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
- Default: `None`.
- Note: This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/).
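A minimal sketch combining these options (values are illustrative; the `device` syntax is an assumption based on the note above):
```yaml
face_recognition:
  enabled: True
  model_size: large
  blur_confidence_filter: True
  # device: 0 # optional; only applies to the large model on multi-GPU systems (assumed syntax)
```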
## Usage
@@ -161,6 +158,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
Accuracy will definitely improve with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera at which a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest at which face recognition will realistically work.
Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
### Why can't I bulk upload photos?
It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.


@@ -9,38 +9,35 @@ Requests for a description are sent off automatically to your AI provider at the
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
  model: gemini-2.0-flash

cameras:
  front_camera:
    objects:
      genai:
        enabled: True # <- enable GenAI for your front camera
        use_snapshot: True
        objects:
          - person
        required_zones:
          - steps
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
      objects:
        - person
      required_zones:
        - steps
  indoor_camera:
    objects:
      genai:
        enabled: False # <- disable GenAI for your indoor camera
    genai:
      enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
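For example, with the Mosquitto CLI this might look like the following (the broker host is a placeholder, and the `ON`/`OFF` payload convention is assumed from Frigate's other toggle topics):
```bash
mosquitto_pub -h mqtt.local -t "frigate/front_camera/object_descriptions/set" -m "OFF"
```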
## Ollama
:::warning
@@ -69,6 +66,7 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava:7b
@@ -80,7 +78,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
@@ -95,9 +93,10 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
```yaml
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
  model: gemini-2.0-flash
```
:::note
@@ -112,7 +111,7 @@ OpenAI does not have a free tier for their API. With the release of gpt-4o, pric
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
@@ -122,6 +121,7 @@ To start using OpenAI, you must first [create an API key](https://platform.opena
```yaml
genai:
  enabled: True
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
@@ -139,18 +139,20 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
```yaml
genai:
  enabled: True
  provider: azure_openai
  base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
  base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
  model: gpt-5-mini
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@@ -191,35 +193,32 @@ You are also able to define custom prompts in your configuration.
```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava

objects:
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
    genai:
      use_snapshot: True
      prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
      object_prompts:
        person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
        cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
      objects:
        - person
        - cat
      required_zones:
        - steps
```
### Experiment with prompts


@@ -1,142 +0,0 @@
---
id: genai_config
title: Configuring Generative AI
---
## Configuration
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
## Ollama
:::warning
Using Ollama on CPU is not recommended, high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card or an Apple Silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; Ollama will try to download the model, but it may take longer than the timeout. It is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and understanding of situations, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::tip
If you are trying to use a single model for Frigate and Home Assistant, it will need to support vision and tool calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
:::
The following models are recommended:
| Model | Notes |
| ----------------- | ----------------------------------------------------------- |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5vl` | Fast but capable model with good vision comprehension |
| `llava-phi3` | Lightweight and fast model with vision comprehension |
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: minicpm-v:8b
  provider_options: # other Ollama client options can be defined
    keep_alive: -1
    options:
      num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
```
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
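For example, in a Docker Compose file this might look like the following (the URL is a placeholder for your provider's endpoint):
```yaml
services:
  frigate:
    environment:
      OPENAI_BASE_URL: "https://api.your-provider.example/v1"
```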
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
### Configuration
```yaml
genai:
  provider: azure_openai
  base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```


@@ -1,77 +0,0 @@
---
id: genai_objects
title: Object Descriptions
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example in use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you are manually entering a description for tracked objects prior to its end, this will be overwritten by the generated response.
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects, for example using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
  provider: ollama
  base_url: http://localhost:11434
  model: llava

objects:
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
  front_door:
    objects:
      genai:
        enabled: True
        use_snapshot: True
        prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
        object_prompts:
          person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
          cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
        objects:
          - person
          - cat
        required_zones:
          - steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)


@@ -1,44 +0,0 @@
---
id: genai_review
title: Review Summaries
---
Generative AI can be used to automatically generate structured summaries of review items. These summaries will show up in Frigate's native notifications as well as in the UI. Generative AI can also be used to take a collection of summaries over a period of time and provide a report, which may be useful to get a quick report of everything that happened while out for some amount of time.
Summaries are requested automatically from your AI provider for alert review items when the activity has ended; they can optionally be enabled for detections as well.
Generative AI review summaries can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/review_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
## Review Summary Usage and Best Practices
Review summaries provide structured JSON responses that are saved for each review item:
```
- `scene` (string): A full description including setting, entities, actions, and any plausible supported inferences.
- `confidence` (float): 0-1 confidence in the analysis.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
Threat-level definitions:
- 0 — Typical or expected activity for this location/time (includes residents, guests, or known animals engaged in normal activities, even if they glance around or scan surroundings).
- 1 — Unusual or suspicious activity: At least one security-relevant behavior is present **and not explainable by a normal residential activity**.
- 2 — Active or immediate threat: Breaking in, vandalism, aggression, weapon display.
```
This will show in the UI as a list of concerns that each review item has along with the general description.
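As an illustration, a stored response for a routine delivery might look like this (all values are invented):
```json
{
  "scene": "A delivery driver walks up the front steps, leaves a package by the door, and returns to a van waiting at the curb.",
  "confidence": 0.9,
  "other_concerns": [],
  "potential_threat_level": 0
}
```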
### Additional Concerns
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
```yaml
review:
  genai:
    enabled: true
    additional_concerns:
      - animals in the garden
```
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report of a given time period. For example, while on vacation you can get a daily report of any suspicious activity or other concerns that may require review.


@@ -5,11 +5,11 @@ title: Enrichments
# Enrichments
Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
Some of Frigate's enrichments can use a discrete GPU for accelerated processing.
## Requirements
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU and configure the enrichment according to its specific documentation.
- **AMD**
@@ -23,9 +23,6 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
:::note


@@ -427,29 +427,3 @@ cameras:
```
:::
## Synaptics
Hardware accelerated video de-/encoding is supported on Synaptics SL-series SoCs.
### Prerequisites
Make sure to follow the [Synaptics specific installation instructions](/frigate/installation#synaptics).
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
ffmpeg:
  hwaccel_args: -c:v h264_v4l2m2m
  input_args: preset-rtsp-restream
  output_args:
    record: preset-record-generic-audio-aac
```
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your input stream uses h264 encoding. For example, if your camera streams with h264 encoding, your SoC must be able to decode and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::


@@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
@@ -67,15 +66,12 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- **`device`**: Device to use to run license plate recognition models.
- Default: `CPU`
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
- **`model_size`**: The size of the model used to detect text on plates.
- Default: `small`
- This can be `small` or `large`.
- The `small` model is fast and identifies groups of Latin and Chinese characters.
- The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
- For most users, the `small` model is recommended.
- This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large model does not improve _text recognition_, but it may improve _text detection_.
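A minimal sketch combining the options above (values are illustrative):
```yaml
lpr:
  enabled: True
  min_area: 2000
  device: CPU
  model_size: small
```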
### Recognition
@@ -105,32 +101,6 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- This setting is best adjusted at the camera level if running LPR on multiple cameras.
- If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at 5 and adjusting as needed. You should see how different enhancement levels affect your plates. Use the `debug_save_plates` configuration option (see below).
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially. Each rule must have a `pattern` (which can be a string or a regex, prepended by `r`) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
These rules must be defined at the global level of your `lpr` config.
```yaml
lpr:
  replace_rules:
    - pattern: r'[%#*?]' # Remove noise symbols
      replacement: ""
    - pattern: r'[= ]' # Normalize = or space to dash
      replacement: "-"
    - pattern: "O" # Swap 'O' to '0' (common OCR error)
      replacement: "0"
    - pattern: r'I' # Swap 'I' to '1'
      replacement: "1"
    - pattern: r'(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123)
      replacement: r'\1-\2'
```
- Rules fire in order: in the example above, noise is cleaned first, then separators, then swaps, then splits.
- Backrefs (`\1`, `\2`) allow dynamic replacements (e.g., capture groups).
- Any changes made by the rules are printed to the LPR debug log.
- Tip: You can test patterns with tools like regex101.com.
### Debugging
- **`debug_save_plates`**: Set to `True` to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
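Enabling it is a one-line addition (a minimal sketch):
```yaml
lpr:
  debug_save_plates: True
```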
@@ -165,9 +135,6 @@ lpr:
  recognition_threshold: 0.85
  format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
  match_distance: 1 # Allow one character variation in plate matching
  replace_rules:
    - pattern: "O"
      replacement: "0" # Replace the letter O with the number 0 in every plate
  known_plates:
    Delivery Van:
      - "RJ K5678"


@@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
@@ -127,7 +127,8 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that WebRTC does not support H.265.
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
:::tip
@@ -174,9 +175,7 @@ For devices that support two way talk, Frigate can be configured to use the feat
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras)
### Streaming options on camera group dashboards
@@ -230,26 +229,7 @@ Note that disabling a camera through the config file (`enabled: False`) removes
If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again.
Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. These errors may include:
- Network issues (e.g., MSE or WebRTC network connection problems).
- Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers).
- Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg.
- Browser compatibility problems (e.g., iOS Safari limitations with MSE).
To view browser console logs:
1. Open the Frigate Live View in your browser.
2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab).
3. Reproduce the error (e.g., load a problematic stream or simulate network issues).
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera-settings-recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured).
- For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the [recommendations above](#camera-settings-recommendations).
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**


@@ -13,18 +13,12 @@ Frigate supports multiple different detectors that work on different types of ha
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI acceleration modules are available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models are provided in the cloud on [their website](https://hub.degirum.com).
**AMD**
- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.
**Apple Silicon**
- [Apple Silicon](#apple-silicon-detector): The Apple Silicon detector runs on M1 and newer Apple devices.
**Intel**
- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
@@ -43,10 +37,6 @@ Frigate supports multiple different detectors that work on different types of ha
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**Synaptics**
- [Synaptics](#synaptics): Synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
@@ -63,7 +53,7 @@ This does not affect using hardware for accelerating other tasks such as [semant
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## Edge TPU Detector
@@ -275,7 +265,7 @@ detectors:
:::
### OpenVINO Supported Models
### Supported Models
#### SSDLite MobileNet v2
@@ -405,7 +395,7 @@ After placing the downloaded onnx model in your config/model_cache folder, you c
detectors:
  ov:
    type: openvino
    device: GPU
    device: CPU

model:
  model_type: dfine
@@ -419,60 +409,6 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## Apple Silicon detector
The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with the `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
### Setup
1. Set up the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) and run the client
2. Configure the detector in Frigate and start Frigate
### Configuration
Using the detector config below will connect to the client:
```yaml
detectors:
  apple-silicon:
    type: zmq
    endpoint: tcp://host.docker.internal:5555
```
### Apple Silicon Supported Models
There is no default model provided, the following formats are supported:
#### YOLO (v3, v4, v7, v9)
YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate.
:::
When Frigate is started with the following config it will connect to the detector client and transfer the model automatically:
```yaml
detectors:
  apple-silicon:
    type: zmq
    endpoint: tcp://host.docker.internal:5555

model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize set during model export
  height: 320 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolo.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## AMD/ROCm GPU detector
### Setup
@@ -495,10 +431,10 @@ When using Docker Compose:
```yaml
services:
  frigate:
    ---
    devices:
      - /dev/dri
      - /dev/kfd
    ...
    devices:
      - /dev/dri
      - /dev/kfd
```
For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).
@@ -526,9 +462,9 @@ When using Docker Compose:
```yaml
services:
  frigate:
    environment:
      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
    ...
    environment:
      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
```
Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
@@ -553,18 +489,7 @@ We unset the `HSA_OVERRIDE_GFX_VERSION` to prevent an existing override from mes
$ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/rocm/bin/rocminfo |grep gfx)'
```
### ROCm Supported Models
:::tip
The AMD GPU kernel is known to be problematic, especially when converting models to mxr format. The recommended approach is:
1. Disable object detection in the config.
2. Start Frigate with the onnx detector configured; the main object detection model will be converted to mxr format and cached in the config directory.
3. Once this is finished as indicated by the logs, enable object detection in the UI and confirm that it is working correctly.
4. Re-enable object detection in the config.
:::
### Supported Models
See [ONNX supported models](#supported-models) for supported models; there are some caveats:
@@ -607,7 +532,7 @@ detectors:
:::
### ONNX Supported Models
### Supported Models
There is no default model provided; the following formats are supported:
@@ -792,196 +717,6 @@ To verify that the integration is working correctly, start Frigate and observe t
# Community Supported Detectors
## MemryX MX3
This detector is available for use with the MemryX MX3 accelerator M.2 module. Frigate supports the MX3 on compatible hardware platforms, providing efficient and high-performance object detection.
See the [installation docs](../frigate/installation.md#memryx-mx3) for information on configuring the MemryX hardware.
To configure a MemryX detector, simply set the `type` attribute to `memryx` and follow the configuration guide below.
### Configuration
To configure the MemryX detector, use the following example configuration:
#### Single PCIe MemryX MX3
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
```
#### Multiple PCIe MemryX MX3 Modules
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
memx1:
type: memryx
device: PCIe:1
memx2:
type: memryx
device: PCIe:2
```
### Supported Models
MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the container at `/memryx_models/model_folder/`.
#### YOLO-NAS
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
**Note:** The default model for the MemryX detector is YOLO-NAS 320x320.
The input size for **YOLO-NAS** can be set to either **320x320** (default) or **640x640**.
- The default size of **320x320** is optimized for lower CPU usage and faster inference times.
##### Configuration
Below is the recommended configuration for using the **YOLO-NAS** (small) model with the MemryX detector:
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
model:
model_type: yolonas
width: 320 # (Can be set to 640 for higher resolution)
height: 320 # (Can be set to 640 for higher resolution)
input_tensor: nchw
input_dtype: float
labelmap_path: /labelmap/coco-80.txt
# Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
# path: /config/yolonas.zip
# The .zip file must contain:
# ├── yolonas.dfp (a file ending with .dfp)
# └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOv9
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9), as in the [Models Section](#yolov9-1), and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
##### Configuration
Below is the recommended configuration for using the **YOLOv9** (small) model with the MemryX detector:
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
model:
model_type: yolo-generic
width: 320 # (Can be set to 640 for higher resolution)
height: 320 # (Can be set to 640 for higher resolution)
input_tensor: nchw
input_dtype: float
labelmap_path: /labelmap/coco-80.txt
# Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
# path: /config/yolov9.zip
# The .zip file must contain:
# ├── yolov9.dfp (a file ending with .dfp)
# └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOX
The model is sourced from the [OpenCV Model Zoo](https://github.com/opencv/opencv_zoo) and precompiled to DFP.
##### Configuration
Below is the recommended configuration for using the **YOLOX** (small) model with the MemryX detector:
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
model:
model_type: yolox
width: 640
height: 640
input_tensor: nchw
input_dtype: float_denorm
labelmap_path: /labelmap/coco-80.txt
# Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
# path: /config/yolox.zip
# The .zip file must contain:
# ├── yolox.dfp (a file ending with .dfp)
```
#### SSDLite MobileNet v2
The model is sourced from the [OpenMMLab Model Zoo](https://mmdeploy-oss.openmmlab.com/model/mmdet-det/ssdlite-e8679f.onnx) and has been converted to DFP.
##### Configuration
Below is the recommended configuration for using the **SSDLite MobileNet v2** model with the MemryX detector:
```yaml
detectors:
memx0:
type: memryx
device: PCIe:0
model:
model_type: ssd
width: 320
height: 320
input_tensor: nchw
input_dtype: float
labelmap_path: /labelmap/coco-80.txt
# Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
# path: /config/ssdlite_mobilenet.zip
# The .zip file must contain:
# ├── ssdlite_mobilenet.dfp (a file ending with .dfp)
# └── ssdlite_mobilenet_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### Using a Custom Model
To use your own model:
1. Package your compiled model into a `.zip` file.
2. The `.zip` must contain the compiled `.dfp` file.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config (see the compose sketch below).
5. Update the `labelmap_path` to match your custom model's labels.
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
```yaml
# The detector automatically selects the default model if nothing is provided in the config.
#
# Optionally, you can specify a local model path as a .zip file to override the default.
# If a local path is provided and the file exists, it will be used instead of downloading.
#
# Example:
# path: /config/yolonas.zip
#
# The .zip file must contain:
# ├── yolonas.dfp (a file ending with .dfp)
# └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
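As a hedged sketch of the bind-mount from step 4 above, assuming a Docker Compose deployment with the packaged model sitting next to the compose file (the host path and file name are illustrative):
```yaml
services:
  frigate:
    volumes:
      # Make the packaged model visible inside the container so model.path can reference it
      - ./yolonas.zip:/config/yolonas.zip:ro
```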
---
## NVidia TensorRT Detector
Nvidia Jetson devices may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt-jp6` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6`. This detector is designed to work with Yolo models for object detection.
@@ -1064,41 +799,6 @@ model:
height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
```
## Synaptics
Hardware accelerated object detection is supported on the following SoCs:
- SL1680
This implementation uses the [Synaptics model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion), version v3.1.0, and is based on SDK `v1.5.0`.
See the [installation docs](../frigate/installation.md#synaptics) for information on configuring the SL-series NPU hardware.
### Configuration
When configuring the Synap detector, you must specify the model with a local **path**.
#### SSD Mobilenet
A synap model is provided in the container at `/synaptics/mobilenet.synap` and is used by this detector type by default. The model comes from the [Synap-release GitHub](https://github.com/synaptics-astra/synap-release/tree/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80).
Use the model configuration shown below when using the synaptics detector with the default synap model:
```yaml
detectors: # required
synap_npu: # required
type: synaptics # required
model: # required
path: /synaptics/mobilenet.synap # required
width: 224 # required
height: 224 # required
tensor_format: nhwc # default value (optional. If you change the model, it is required)
labelmap_path: /labelmap/coco-80.txt # required
```
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:
@@ -1142,7 +842,7 @@ $ cat /sys/kernel/debug/rknpu/load
:::
### RockChip Supported Models
### Supported Models
This `config.yml` shows all relevant options to configure the detector and explains them. All values shown are the default values (except for two). Lines that are required at least to use the detector are labeled as required, all other lines are optional.
@@ -1268,101 +968,6 @@ Explanation of the paramters:
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.2/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.2_EN.pdf).
## DeGirum
DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector *cannot* be used for commercial purposes.
### Configuration
#### AI Server Inference
Before starting with the config file for this section, you must first launch an AI server. DeGirum has an AI server ready to use as a docker container. Add this to your `docker-compose.yml` to get started:
```yaml
degirum_detector:
container_name: degirum
image: degirum/aiserver:latest
privileged: true
ports:
- "8778:8778"
```
All supported hardware will automatically be found on your AI server host as long as relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.
Once completed, changing the `config.yml` file is simple.
```yaml
degirum_detector:
type: degirum
location: degirum # Set to service name (degirum_detector), container_name (degirum), or a host:port (192.168.29.4:8778)
zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Setting up a model in the `config.yml` is similar to setting up an AI server.
The model `path` can be set to:
- A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is listed in your detector
- If this is what you choose to do, the correct model will be downloaded onto your machine before running.
- A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
- A path to a specific `model.json` file.
```yaml
model:
path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory to model .json and file
width: 300 # width is in the model name as the first number in the "int"x"int" section
height: 300 # height is in the model name as the second number in the "int"x"int" section
input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### Local Inference
It is also possible to eliminate the need for an AI server and run the hardware directly. The benefit of this approach is that you eliminate any bottlenecks that occur when transferring prediction results from the AI server docker container to the frigate one. However, the method of implementing local inference is different for every device and hardware combination, so it's usually more trouble than it's worth. A general guideline to achieve this would be:
1. Ensure that the Frigate docker container has the runtime you want to use. For instance, running `@local` for Hailo means making sure the container you're using has the Hailo runtime installed.
2. Double-check that the runtime is detected by the DeGirum detector: make sure the `degirum sys-info` command properly shows whatever runtimes you intend to use.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
type: degirum
location: "@local" # For accessing AI Hub devices and models
zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
width: 300 # width is in the model name as the first number in the "int"x"int" section
height: 300 # height is in the model name as the second number in the "int"x"int" section
input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### AI Hub Cloud Inference
If you do not possess the hardware you want to run models on, there's also the option to run cloud inferences. Do note that your detection fps might need to be lowered, as network latency significantly slows down this method of detection. For use with Frigate, we highly recommend using a local AI server as described above. To set up cloud inferences:
1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
2. Get an access token.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
type: degirum
location: "@cloud" # For accessing AI Hub devices and models
zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com).
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
width: 300 # width is in the model name as the first number in the "int"x"int" section
height: 300 # height is in the model name as the second number in the "int"x"int" section
input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
# Models
Some model types are not included in Frigate by default.
@@ -1383,7 +988,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim
RUN uv pip install --system onnx onnxruntime onnxsim onnxscript
# Create output directory and download checkpoint
RUN mkdir -p output
ARG MODEL_SIZE
@@ -1397,19 +1002,19 @@ COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL
EOF
```
### Download RF-DETR Model
### Downloading RF-DETR Model
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing into your terminal and execute it, changing `MODEL_SIZE=Nano` in the first line to `Nano`, `Small`, or `Medium`.
```sh
docker build . --build-arg MODEL_SIZE=Nano --output . -f- <<'EOF'
docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@@ -1457,7 +1062,7 @@ COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt

View File

@@ -11,7 +11,7 @@ This adds features including the ability to deep link directly into the app.
In order to install Frigate as a PWA, the following requirements must be met:
- Frigate must be accessed via a secure context (localhost, secure https, etc.)
- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.
@@ -22,3 +22,7 @@ Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found in the right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu
## Usage
Once set up, the Frigate app can be used wherever it has access to Frigate. This means it can be set up as local-only, VPN-only, or fully accessible depending on your needs.

View File

@@ -13,15 +13,14 @@ H265 recordings can be viewed in Chrome 108+, Edge and Safari only. All other br
### Most conservative: Ensure all video is saved
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion will be saved for 7 days. After 7 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
```yaml
record:
enabled: True
continuous:
retain:
days: 3
motion:
days: 7
mode: all
alerts:
retain:
days: 30
@@ -39,8 +38,9 @@ In order to reduce storage requirements, you can adjust your config to only reta
```yaml
record:
enabled: True
motion:
retain:
days: 3
mode: motion
alerts:
retain:
days: 30
@@ -58,7 +58,7 @@ If you only want to retain video that occurs during a tracked object, this confi
```yaml
record:
enabled: True
continuous:
retain:
days: 0
alerts:
retain:
@@ -80,17 +80,15 @@ Retention configs support decimals meaning they can be configured to retain `0.5
:::
### Continuous and Motion Recording
### Continuous Recording
The number of days to retain continuous and motion recordings can be set via the following config where X is a number; by default, continuous recording is disabled.
The number of days to retain continuous recordings can be set via the following config where X is a number; by default, continuous recording is disabled.
```yaml
record:
enabled: True
continuous:
retain:
days: 1 # <- number of days to keep continuous recordings
motion:
days: 2 # <- number of days to keep motion recordings
```
Continuous recording supports different retention modes, [which are described below](#what-do-the-different-retain-modes-mean).
@@ -114,6 +112,38 @@ This configuration will retain recording segments that overlap with alerts and d
**WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.
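For example, recordings can be enabled per camera in the config; a minimal sketch (the camera name is illustrative):
```yaml
cameras:
  doorbell:
    record:
      enabled: True
```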
## What do the different retain modes mean?
Frigate saves video from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).
Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.
- With the `all` option all 48 hours of those two days would be kept and viewable.
- With the `motion` option, the only parts of those 48 hours that would be kept are the segments in which Frigate detected motion. This is the middle ground option that won't keep all 48 hours, but will likely keep all segments of interest along with the potential for some extra segments.
- With the `active_objects` option the only segments that would be kept are those where there was a true positive object that was not considered stationary.
The same options are available with alerts and detections, except recordings will only be saved when they overlap with a review item of that type.
A configuration example of the above retain modes where all `motion` segments are stored for 7 days and `active objects` are stored for 14 days would be as follows:
```yaml
record:
enabled: True
retain:
days: 7
mode: motion
alerts:
retain:
days: 14
mode: active_objects
detections:
retain:
days: 14
mode: active_objects
```
The above configuration example can be added globally or on a per camera basis.
## Can I have "continuous" recordings, but only at certain times?
Using Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.
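As a hedged sketch (not taken from the Frigate docs), a Home Assistant automation could toggle the recording switch that the Frigate integration exposes; the `switch.doorbell_recordings` entity id is an assumption based on a camera named `doorbell`:
```yaml
# Home Assistant configuration.yaml sketch: record at night only
automation:
  - alias: "Enable doorbell recording at night"
    trigger:
      - platform: time
        at: "22:00:00"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.doorbell_recordings # assumed entity id from the Frigate integration
  - alias: "Disable doorbell recording in the morning"
    trigger:
      - platform: time
        at: "07:00:00"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.doorbell_recordings
```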

View File

@@ -73,12 +73,6 @@ tls:
# Optional: Enable TLS for port 8971 (default: shown below)
enabled: True
# Optional: IPv6 configuration
networking:
# Optional: Enable IPv6 on 5000, and 8971 if tls is configured (default: shown below)
ipv6:
enabled: False
# Optional: Proxy configuration
proxy:
# Optional: Mapping for headers from upstream proxies. Only used if Frigate's auth
@@ -88,13 +82,7 @@ proxy:
# See the docs for more info.
header_map:
user: x-forwarded-user
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
role: x-forwarded-role
# Optional: Url for logging out a user. This sets the location of the logout url in
# the UI.
logout_url: /api/logout
@@ -287,9 +275,6 @@ detect:
max_disappeared: 25
# Optional: Configuration for stationary object tracking
stationary:
# Optional: Stationary classifier that uses visual characteristics to determine if an object
# is stationary even if the box changes enough to be considered motion (default: shown below).
classifier: True
# Optional: Frequency for confirming stationary objects (default: same as threshold)
# When set to 1, object detection will run to confirm the object still exists on every frame.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
@@ -354,33 +339,6 @@ objects:
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI object description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional: Review configuration
# NOTE: Can be overridden at the camera level
@@ -393,8 +351,6 @@ review:
labels:
- car
- person
# Time to cutoff alerts after no alert-causing activity has occurred (default: shown below)
cutoff_time: 40
# Optional: required zones for an object to be marked as an alert (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
@@ -409,27 +365,12 @@ review:
labels:
- car
- person
# Time to cutoff detections after no detection-causing activity has occurred (default: shown below)
cutoff_time: 30
# Optional: required zones for an object to be marked as a detection (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
# should be configured at the camera level.
required_zones:
- driveway
# Optional: GenAI Review Summary Configuration
genai:
# Optional: Enable the GenAI review summary feature (default: shown below)
enabled: False
# Optional: Enable GenAI review summaries for alerts (default: shown below)
alerts: True
# Optional: Enable GenAI review summaries for detections (default: shown below)
detections: False
# Optional: Additional concerns that the GenAI should make note of (default: None)
additional_concerns:
- Animals in the garden
# Optional: Preferred response language (default: English)
preferred_language: English
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
@@ -499,18 +440,18 @@ record:
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Continuous retention settings
continuous:
# Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in alerts and detections section below
# if you only want to retain recordings of alerts and detections.
days: 0
# Optional: Motion retention settings
motion:
# Optional: Retention settings for recording
retain:
# Optional: Number of days to retain recordings regardless of tracked objects (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in alerts and detections section below
# if you only want to retain recordings of alerts and detections.
days: 0
# Optional: Mode for retention. Available options are: all, motion, and active_objects
# all - save all recording segments regardless of activity
# motion - save all recording segments with any detected motion
# active_objects - save all recording segments with active/moving objects
# NOTE: this mode only applies when the days setting above is greater than 0
mode: all
# Optional: Recording Export Settings
export:
# Optional: Timelapse Output Args (default: shown below).
@@ -605,9 +546,6 @@ semantic_search:
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
# Optional: Configuration for face recognition capability
# NOTE: enabled, min_area can be overridden at the camera level
@@ -631,9 +569,6 @@ face_recognition:
blur_confidence_filter: True
# Optional: Set the model size used face recognition. (default: shown below)
model_size: small
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
# Optional: Configuration for license plate recognition capability
# NOTE: enabled, min_area, and enhancement can be overridden at the camera level
@@ -641,7 +576,6 @@ lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
# Optional: The device to run the models on (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: CPU
# Optional: Set the model size used for text detection. (default: shown below)
model_size: small
@@ -664,8 +598,6 @@ lpr:
enhancement: 0
# Optional: Save plate images to /media/frigate/clips/lpr for debugging purposes (default: shown below)
debug_save_plates: False
# Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {}
# Optional: Configuration for AI generated tracked object descriptions
# WARNING: Depending on the provider, this will send thumbnails over the internet
@@ -680,27 +612,16 @@ genai:
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
# Required if enabled: The model to use with the provider.
model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
keep_alive: -1
# Optional: Configuration for audio transcription
# NOTE: only the enabled option can be overridden at the camera level
audio_transcription:
# Optional: Enable audio transcription (default: shown below)
enabled: False
# Optional: The device to run the models on (default: shown below)
device: CPU
# Optional: Set the model size used for transcription. (default: shown below)
model_size: small
# Optional: Set the language used for transcription translation. (default: shown below)
# List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
language: en
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# Uses https://github.com/AlexxIT/go2rtc (v1.9.9)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
@@ -906,22 +827,33 @@ cameras:
# By default the cameras are sorted alphabetically.
order: 0
# Optional: Configuration for triggers to automate actions based on semantic search results.
triggers:
# Required: Unique identifier for the trigger (generated automatically from friendly_name if not specified).
trigger_name:
# Required: Enable or disable the trigger. (default: shown below)
enabled: true
# Type of trigger, either `thumbnail` for image-based matching or `description` for text-based matching. (default: none)
type: thumbnail
# Reference data for matching, either an event ID for `thumbnail` or a text string for `description`. (default: none)
data: 1751565549.853251-b69j73
# Similarity threshold for triggering. (default: none)
threshold: 0.7
# List of actions to perform when the trigger fires. (default: none)
# Available options: `notification` (send a webpush notification)
actions:
- notification
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional
ui:

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` key in the config; see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.9) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted under the `go2rtc` key in the config; see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration) for more advanced configurations and features.
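For example, a stream can be defined once in go2rtc and consumed by Frigate through the restream so the camera only serves a single connection; a minimal sketch (camera name, credentials, and addresses are placeholders):
```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@192.168.1.10:554/stream1

cameras:
  front_door:
    ffmpeg:
      inputs:
        # Consume the local restream instead of opening a second camera connection
        - path: rtsp://127.0.0.1:8554/front_door
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
```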
:::note
@@ -156,7 +156,7 @@ See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-22
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`

View File

@@ -39,7 +39,7 @@ If you are enabling Semantic Search for the first time, be advised that Frigate
The [V1 model from Jina](https://huggingface.co/jinaai/jina-clip-v1) has a vision model which is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the object description docs](/configuration/genai/objects.md) for more information on how to automatically generate tracked object descriptions.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted versions of the Jina models are available and can be selected by setting the `model_size` config option as `small` or `large`:
@@ -78,21 +78,17 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
```yaml
semantic_search:
enabled: True
model_size: large
# Optional, if using the 'large' model in a multi-GPU installation
device: 0
```
:::info
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
If you do not specify a device, the first available GPU will be used.
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
@@ -106,49 +102,3 @@ See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_
4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
5. Semantic search on thumbnails tends to return better results when matching large subjects that take up most of the frame. Small things like "cat" tend to not work well.
6. Experiment! Find a tracked object you want to test and start typing keywords and phrases to see what works for you.
## Triggers
Triggers utilize semantic search to automate actions when a tracked object matches a specified image or description. Triggers can be configured so that Frigate executes specific actions when a tracked object's image or description matches a predefined image or text, based on a similarity threshold. Triggers are managed per camera and can be configured via the Frigate UI in the Settings page under the Triggers tab.
### Configuration
Triggers are defined within the `semantic_search` configuration for each camera in your Frigate configuration file or through the UI. Each trigger consists of a `type` (either `thumbnail` or `description`), a `data` field (the reference image event ID or text), a `threshold` for similarity matching, and a list of `actions` to perform when the trigger fires.
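A minimal sketch of a description-based trigger using the fields above, with triggers nested under `semantic_search` as described (the camera name, trigger name, and values are illustrative):
```yaml
cameras:
  front_door:
    semantic_search:
      triggers:
        red_car_alert:
          enabled: true
          type: description
          data: "red sedan in the driveway" # reference text for matching
          threshold: 0.7
          actions:
            - notification
```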
#### Managing Triggers in the UI
1. Navigate to the **Settings** page and select the **Triggers** tab.
2. Choose a camera from the dropdown menu to view or manage its triggers.
3. Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one.
4. In the **Create Trigger** dialog:
- Enter a **Name** for the trigger (e.g., "red_car_alert").
- Select the **Type** (`Thumbnail` or `Description`).
- For `Thumbnail`, select an image to trigger this action when a similar thumbnail image is detected, based on the threshold.
- For `Description`, enter text to trigger this action when a similar tracked object description is detected.
- Set the **Threshold** for similarity matching.
- Select **Actions** to perform when the trigger fires.
5. Save the trigger to update the configuration and store the embedding in the database.
When a trigger fires, the UI highlights the trigger with a blue outline for 3 seconds for easy identification.
### Usage and Best Practices
1. **Thumbnail Triggers**: Select a representative image (event ID) from the Explore page that closely matches the object you want to detect. For best results, choose images where the object is prominent and fills most of the frame.
2. **Description Triggers**: Write concise, specific text descriptions (e.g., "Person in a red jacket") that align with the tracked object's description. Avoid vague terms to improve matching accuracy.
3. **Threshold Tuning**: Adjust the threshold to balance sensitivity and specificity. A higher threshold (e.g., 0.8) requires closer matches, reducing false positives but potentially missing similar objects. A lower threshold (e.g., 0.6) is more inclusive but may trigger more often.
4. **Using Explore**: Use the context menu or right-click / long-press on a tracked object in the Grid View in Explore to quickly add a trigger based on the tracked object's thumbnail.
5. **Editing triggers**: For the best experience, triggers should be edited via the UI. However, Frigate will ensure that triggers edited in the config are synced with triggers created and edited in the UI.
### Notes
- Triggers rely on the same Jina AI CLIP models (V1 or V2) used for semantic search. Ensure `semantic_search` is enabled and properly configured.
- Reindexing embeddings (via the UI or `reindex: True`) does not affect trigger configurations but may update the embeddings used for matching.
- For optimal performance, use a system with sufficient RAM (8GB minimum, 16GB recommended) and a GPU for `large` model configurations, as described in the Semantic Search requirements.
### FAQ
#### Why can't I create a trigger on thumbnails for some text, like "person with a blue shirt" and have it trigger when a person with a blue shirt is detected?
TL;DR: Text-to-image triggers aren't supported because CLIP can confuse similar images and give inconsistent scores, making automation unreliable.
Text-to-image triggers are not supported due to fundamental limitations of CLIP-based similarity search. While CLIP works well for exploratory, manual queries, it is unreliable for automated triggers based on a threshold. Issues include embedding drift (the same text-image pair can yield different cosine distances over time), lack of true semantic grounding (visually similar but incorrect matches), and unstable thresholding (distance distributions are dataset-dependent and often too tightly clustered to separate relevant from irrelevant results). Instead, it is recommended to set up a workflow with thumbnail triggers: first use text search to manually select 3-5 representative reference tracked objects, then configure thumbnail triggers based on that visual similarity. This provides robust automation without the semantic ambiguity of text-to-image matching.

View File

@@ -88,9 +88,7 @@ Sometimes objects are expected to be passing through a zone, but an object loite
:::note
When using loitering zones, a review item will behave in the following way:
- When a person is in a loitering zone, the review item will remain active until the person leaves the loitering zone, regardless of whether they are stationary.
- When any other object is in a loitering zone, the review item will remain active until the loitering time is met. Then, if the object is stationary, the review item will end.
When using loitering zones, a review item will remain active until the object leaves. Loitering zones are only meant to be used in areas where loitering is not expected behavior.
:::
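A hedged sketch of a loitering zone configuration (the camera name, coordinates, and loitering time are illustrative):
```yaml
cameras:
  front_door:
    zones:
      porch:
        coordinates: 0.364,0.323,0.606,0.320,0.611,0.735,0.363,0.731
        # Seconds an object must remain in the zone before it is considered to be loitering
        loitering_time: 10
```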

View File

@@ -18,7 +18,7 @@ Here are some of the cameras I recommend:
- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
- <a href="https://amzn.to/4ltOpaC" target="_blank" rel="nofollow noopener sponsored">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)
- <a href="https://www.bhphotovideo.com/c/product/1705511-REG/hikvision_colorvu_ds_2cd2387g2p_lsu_sl_8mp_network.html" target="_blank" rel="nofollow noopener">HIKVISION DS-2CD2387G2P-LSU/SL ColorVu 8MP Panoramic Turret IP Camera</a> (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
@@ -36,9 +36,11 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
:::
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Name | Capabilities | Notes |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
@@ -56,36 +58,24 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
- Runs best with tiny, small, or medium-size models
**AMD**
- [ROCm](#rocm---amd-gpu): ROCm can run on AMD Discrete GPUs to provide efficient object detection
- [Supports limited model architectures](../../configuration/object_detectors#rocm-supported-models)
- [Supports limited model architectures](../../configuration/object_detectors#supported-models-1)
- Runs best on discrete AMD GPUs
**Apple Silicon**
- [Apple Silicon](#apple-silicon): Apple Silicon is usable on all M1 and newer Apple Silicon devices to provide efficient and fast object detection
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#apple-silicon-supported-models)
- Runs well with any size models including large
- Runs via ZMQ proxy which adds some latency, only recommended for local connection
**Intel**
- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [Supports majority of model architectures](../../configuration/object_detectors#openvino-supported-models)
- [Supports majority of model architectures](../../configuration/object_detectors#supported-models)
- Runs best with tiny, small, or medium models
**Nvidia**
- [TensortRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#supported-models-2)
- Runs well with any size models including large
**Rockchip**
@@ -95,21 +85,8 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
**Synaptics**
- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs to provide efficient object detection.
:::
### Synaptics
- **Synaptics**: Default model is **mobilenet**
| Name | Synaptics SL1680 Inference Time |
| ---------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@@ -129,10 +106,16 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
### Google Coral TPU
:::warning
The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given it's still one of the most power-efficient devices for executing object detection models.
:::
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
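If a second Coral is added, each can be addressed explicitly; a minimal sketch assuming two USB Corals:
```yaml
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```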
@@ -167,7 +150,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
@@ -177,7 +160,7 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA
#### Minimum Hardware Support
12.x series of CUDA libraries are used which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
12.x series of CUDA libraries are used which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
@@ -192,71 +175,27 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
[NVIDIA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny (t)` variants are faster than the equivalent non-tiny model, some known examples are below:
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
✅ - Accelerated with CUDA Graphs
❌ - Not accelerated with CUDA Graphs
| Name | ✅ YOLOv9 Inference Time | ✅ RF-DETR Inference Time | ❌ YOLO-NAS Inference Time |
| --------- | ------------------------------------- | ------------------------- | -------------------------- |
| GTX 1070 | s-320: 16 ms | | 320: 14 ms |
| RTX 3050 | t-320: 8 ms s-320: 10 ms s-640: 28 ms | Nano-320: ~ 12 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 | t-320: 6 ms s-320: 8 ms s-640: 25 ms | Nano-320: ~ 9 ms | 320: ~ 8 ms 640: ~ 14 ms |
| RTX A4000 | | | 320: ~ 15 ms |
| Tesla P40 | | | 320: ~ 105 ms |
### Apple Silicon
With the [Apple Silicon](../configuration/object_detectors.md#apple-silicon-detector) detector Frigate can take advantage of the NPU in M1 and newer Apple Silicon.
:::warning
Apple Silicon's NPU cannot be accessed from within a container, so a ZMQ proxy is utilized to communicate with [the Apple Silicon Frigate detector](https://github.com/frigate-nvr/apple-silicon-detector), which runs on the host. This should add minimal latency when run on the same device.
:::
| Name | YOLOv9 Inference Time |
| ------ | ------------------------------------ |
| M4 | s-320: 10 ms |
| M3 Pro | t-320: 6 ms s-320: 8 ms s-640: 20 ms |
| M1 | s-320: 9 ms |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
| --------------- | ------------------------- | ------------------------- | ---------------------- |
| GTX 1070 | s-320: 16 ms | 320: 14 ms | |
| RTX 3050 | t-320: 15 ms s-320: 17 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
| RTX 3070 | t-320: 11 ms s-320: 13 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
| RTX A4000 | | 320: ~ 15 ms | |
| Tesla P40 | | 320: ~ 105 ms | |
### ROCm - AMD GPU
With the [ROCm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------------- | ------------------------- |
| AMD 780M | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------- | ------------------------- |
| AMD 780M | 320: ~ 14 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
## Community Supported Detectors
### MemryX MX3
Frigate supports the MemryX MX3 M.2 AI Acceleration Module on compatible hardware platforms, including both x86 (Intel/AMD) and ARM-based SBCs such as Raspberry Pi 5.
A single MemryX MX3 module is capable of handling multiple camera streams using the default models, making it sufficient for most users. For larger deployments with more cameras or bigger models, multiple MX3 modules can be used. Frigate supports multi-detector configurations, allowing you to connect multiple MX3 modules to scale inference capacity.
Detailed information is available [in the detector docs](/configuration/object_detectors#memryx-mx3).
**Default Model Configuration:**
- Default model is **YOLO-NAS-Small**.
The MX3 is a pipelined architecture, so the maximum frames per second it supports (and thus the number of supported cameras) cannot be calculated as `1/latency` (1/"Inference Time") and is measured separately. When estimating how many camera streams your configuration can support, use the **MX3 Total FPS** column to approximate the detector's limit, not the Inference Time.
| Model | Input Size | MX3 Inference Time | MX3 Total FPS |
| -------------------- | ---------- | ------------------ | ------------- |
| YOLO-NAS-Small | 320 | ~ 9 ms | ~ 378 |
| YOLO-NAS-Small | 640 | ~ 21 ms | ~ 138 |
| YOLOv9s | 320 | ~ 16 ms | ~ 382 |
| YOLOv9s | 640 | ~ 41 ms | ~ 110 |
| YOLOX-Small | 640 | ~ 16 ms | ~ 263 |
| SSDlite MobileNet v2 | 320 | ~ 5 ms | ~ 1056 |
Inference speeds may vary depending on the host platform. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which may limit total FPS.
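As a rough, hypothetical sizing exercise (the per-camera detection rate below is an assumption, not a Frigate constant), the Total FPS column can be divided by the average detection rate you expect per camera:

```python
# Estimate camera capacity from the MX3 Total FPS column above.
mx3_total_fps = 378        # YOLO-NAS-Small @ 320, from the table
detect_fps_per_camera = 5  # assumed average detection rate per camera
print(f"~{mx3_total_fps // detect_fps_per_camera} cameras")  # -> ~75 cameras
```

Real-world capacity will be lower once motion patterns, host CPU, and other workloads are factored in.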
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
@@ -94,6 +94,10 @@ $ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
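For reference, the same per-camera formula shown above can be evaluated for other setups; a sketch for two cameras with a 1920x1080 detect resolution:

```python
# Per-camera shm usage using the formula from the docs above.
width, height, cameras = 1920, 1080, 2
per_camera_mb = (width * height * 1.5 * 20 + 270480) / 1048576
print(f"{per_camera_mb * cameras:.2f}MB")  # -> 119.17MB
```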
## Extra Steps for Specific Hardware
The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.
### Raspberry Pi 3/4
By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
@@ -106,14 +110,107 @@ The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form
#### Installation
For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
:::warning
For other installations, follow these steps:
The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
:::
1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
:::note
If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
:::
If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo rmmod hailo_pci
```
Now blacklist the driver to prevent it from loading on boot:
```bash
echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
```
Update initramfs to ensure the blacklist takes effect:
```bash
sudo update-initramfs -u
```
Reboot your Raspberry Pi:
```bash
sudo reboot
```
After rebooting, verify the built-in driver is not loaded:
```bash
lsmod | grep hailo
```
This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.
2. **Run the installation script**:
Download the installation script:
```bash
wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/hailo8l/user_installation.sh
```
Make it executable:
```bash
sudo chmod +x user_installation.sh
```
Run the script:
```bash
./user_installation.sh
```
The script will:
- Install necessary build dependencies
- Clone and build the Hailo driver from the official repository
- Install the driver
- Download and install the required firmware
- Set up udev rules
3. **Reboot your system**:
After the script completes successfully, reboot to load the firmware:
```bash
sudo reboot
```
4. **Verify the installation**:
After rebooting, verify that the Hailo device is available:
```bash
ls -l /dev/hailo0
```
You should see the device listed. You can also verify the driver is loaded:
```bash
lsmod | grep hailo_pci
```
#### Setup
@@ -132,77 +229,6 @@ If you are using `docker run`, add this option to your command `--device /dev/ha
Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
### MemryX MX3
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
- Multi-M.2 PCIe carrier cards
#### Configuration
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
Then follow these steps for installing the correct driver/runtime configuration:
1. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/memryx/user_installation.sh).
2. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.
#### Setup
To set up Frigate, follow the default installation instructions using, for example, the image `ghcr.io/blakeblackshear/frigate:stable`.
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
```yaml
devices:
- /dev/memx0
```
During configuration, you must run Docker in privileged mode and ensure the container can access the mxa_manager.
In your `docker-compose.yml`, also add:
```yaml
privileged: true
volumes:
  - /run/mxa_manager:/run/mxa_manager
```
If you can't use Docker Compose, you can run the container with something similar to this:
```bash
docker run -d \
--name frigate-memx \
--restart=unless-stopped \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--shm-size=256m \
-v /path/to/your/storage:/media/frigate \
-v /path/to/your/config:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /run/mxa_manager:/run/mxa_manager \
-e FRIGATE_RTSP_PASSWORD='password' \
--privileged=true \
-p 8971:8971 \
-p 8554:8554 \
-p 5000:5000 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
--device /dev/memx0 \
ghcr.io/blakeblackshear/frigate:stable
```
#### Configuration
Finally, configure [hardware object detection](/configuration/object_detectors#memryx-mx3) to complete the setup.
### Rockchip platform
Make sure that you use a Linux distribution that comes with the Rockchip BSP kernel 5.10 or 6.1 and the necessary drivers (especially rkvdec2 and rknpu). To check, enter the following commands:
@@ -256,37 +282,6 @@ or add these options to your `docker run` command:
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
### Synaptics
- SL1680
#### Setup
Follow Frigate's default installation instructions, but use a Docker image with the `-synaptics` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-synaptics`.
Next, you need to grant docker permissions to access your hardware:
- During the configuration process, you should run docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your docker run command.
```yaml
devices:
- /dev/synap
- /dev/video0
- /dev/video1
```
or add these options to your `docker run` command:
```
--device /dev/synap \
--device /dev/video0 \
--device /dev/video1
```
#### Configuration
Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).
## Docker
Running through Docker with Docker Compose is the recommended install method.
@@ -302,7 +297,7 @@ services:
shm_size: "512mb" # update for your cameras based on calculation above
devices:
- /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
- /dev/video11:/dev/video11 # For Raspberry Pi 4B
- /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
volumes:
@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).
The current stable version of Frigate is **0.16.3**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.3).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.3` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.16.1
image: ghcr.io/blakeblackshear/frigate:0.16.3
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
docker pull ghcr.io/blakeblackshear/frigate:0.16.3
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.3`, `0.16.3-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.16.1
docker pull ghcr.io/blakeblackshear/frigate:0.16.3
```
3. **Start the Container**:
@@ -13,7 +13,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect
# Setup a go2rtc stream
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#module-streams), not just rtsp.
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#module-streams), not just rtsp.
:::tip
@@ -49,8 +49,8 @@ After adding this to the config, restart Frigate and try to watch the live strea
- Check Video Codec:
- If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
```yaml
go2rtc:
streams:
@@ -202,7 +202,7 @@ services:
...
devices:
- /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
...
```
@@ -161,7 +161,14 @@ Message published for updates to tracked object metadata, for example:
### `frigate/reviews`
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish an `update` message with the same id. When the review activity has ended a final `end` message is published.
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
An `update` with the same ID will be published when:
- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
When the review activity has ended a final `end` message is published.
```json
{
@@ -208,20 +215,6 @@ Message published for each changed review item. The first message is published w
}
```
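A minimal consumer sketch for this lifecycle (the `"new"`/`"update"`/`"end"` values of the `type` field are assumed from the lifecycle described above; a real payload carries more data):

```python
import json

# Handle one frigate/reviews payload based on its assumed "type" field.
def handle_review(payload: bytes) -> None:
    msg = json.loads(payload)
    if msg.get("type") == "end":
        print("review item ended")
    elif msg.get("type") == "update":
        print("review item updated (severity, objects, or recognition)")
    else:
        print("new review item started")
```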
### `frigate/triggers`
Message published when a trigger defined in a camera's `semantic_search` configuration fires.
```json
{
"name": "car_trigger",
"camera": "driveway",
"event_id": "1751565549.853251-b69j73",
"type": "thumbnail",
"score": 0.85
}
```
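A hedged example of consuming this topic with paho-mqtt (the broker address is a placeholder, and paho-mqtt 2.x is assumed to be installed):

```python
import json

import paho.mqtt.client as mqtt  # assumes paho-mqtt 2.x


# Print each semantic search trigger published by Frigate.
def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    print(f"trigger {payload['name']} fired on {payload['camera']} "
          f"(score {payload['score']:.2f})")


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("mqtt.local", 1883)  # placeholder broker address
client.subscribe("frigate/triggers")
client.loop_forever()
```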
### `frigate/stats`
Same data available at `/api/stats` published at a configurable interval.
@@ -240,14 +233,6 @@ Topic with current state of notifications. Published values are `ON` and `OFF`.
## Frigate Camera Topics
### `frigate/<camera_name>/<role>/status`
Publishes the current health status of each role that is enabled (`audio`, `detect`, `record`). Possible values are:
- `online`: Stream is running and being processed
- `offline`: Stream is offline and is being restarted
- `disabled`: Camera is currently disabled
### `frigate/<camera_name>/<object_name>`
Publishes the count of objects for the camera for use as a sensor in Home Assistant.
@@ -281,8 +266,6 @@ The height and crop of snapshots can be configured in the config.
Publishes "ON" when a type of audio is detected and "OFF" when it is not for the camera for use as a sensor in Home Assistant.
`all` can be used as the audio_type for the status of all audio types.
### `frigate/<camera_name>/audio/dBFS`
Publishes the dBFS value for audio detected on this camera.
@@ -295,12 +278,6 @@ Publishes the rms value for audio detected on this camera.
**NOTE:** Requires audio detection to be enabled
### `frigate/<camera_name>/audio/transcription`
Publishes transcribed text for audio detected on this camera.
**NOTE:** Requires audio detection and transcription to be enabled
### `frigate/<camera_name>/enabled/set`
Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.
@@ -423,22 +400,6 @@ Topic to turn review detections for a camera on or off. Expected values are `ON`
Topic with current state of review detections for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/object_descriptions/set`
Topic to turn generative AI object descriptions for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/object_descriptions/state`
Topic with current state of generative AI object descriptions for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_descriptions/set`
Topic to turn generative AI review descriptions for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_descriptions/state`
Topic with current state of generative AI review descriptions for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/birdseye/set`
Topic to turn Birdseye for a camera on and off. Expected values are `ON` and `OFF`. Birdseye mode
@@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w` | Add box |
| `d` | Toggle difficult |
| `s` | Switch to the next label |
| `Shift + s` | Switch to the previous label |
| `tab` | Select next largest box |
| `del` | Delete current box |
| `esc` | Deselect/Cancel |
@@ -68,8 +68,7 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n
The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on what OS and kernel that is being run.
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For some newer Linux distros (for example, Ubuntu 22.04+), https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
## Attempting to load TPU as pci & Fatal Python error: Illegal instruction
docs/package-lock.json (generated, 3071 changed lines): file diff suppressed because it is too large.
@@ -5,14 +5,14 @@ import frigateHttpApiSidebar from "./docs/integrations/api/sidebar";
const sidebars: SidebarsConfig = {
docs: {
Frigate: [
"frigate/index",
"frigate/hardware",
"frigate/planning_setup",
"frigate/installation",
"frigate/updating",
"frigate/camera_setup",
"frigate/video_pipeline",
"frigate/glossary",
'frigate/index',
'frigate/hardware',
'frigate/planning_setup',
'frigate/installation',
'frigate/updating',
'frigate/camera_setup',
'frigate/video_pipeline',
'frigate/glossary',
],
Guides: [
"guides/getting_started",
@@ -28,7 +28,7 @@ const sidebars: SidebarsConfig = {
{
type: "link",
label: "Go2RTC Configuration Reference",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration",
} as PropSidebarItemLink,
],
Detectors: [
@@ -37,36 +37,10 @@ const sidebars: SidebarsConfig = {
],
Enrichments: [
"configuration/semantic_search",
"configuration/genai",
"configuration/face_recognition",
"configuration/license_plate_recognition",
"configuration/bird_classification",
{
type: "category",
label: "Custom Classification",
link: {
type: "generated-index",
title: "Custom Classification",
description: "Configuration for custom classification models",
},
items: [
"configuration/custom_classification/state_classification",
"configuration/custom_classification/object_classification",
],
},
{
type: "category",
label: "Generative AI",
link: {
type: "generated-index",
title: "Generative AI",
description: "Generative AI Features",
},
items: [
"configuration/genai/genai_config",
"configuration/genai/genai_review",
"configuration/genai/genai_objects",
],
},
],
Cameras: [
"configuration/cameras",
@@ -119,11 +93,11 @@ const sidebars: SidebarsConfig = {
"configuration/metrics",
"integrations/third_party_extensions",
],
"Frigate+": [
"plus/index",
"plus/annotating",
"plus/first_model",
"plus/faq",
'Frigate+': [
'plus/index',
'plus/annotating',
'plus/first_model',
'plus/faq',
],
Troubleshooting: [
"troubleshooting/faqs",
@@ -1,6 +1,5 @@
import argparse
import faulthandler
import multiprocessing as mp
import signal
import sys
import threading
@@ -16,17 +15,12 @@ from frigate.util.config import find_config_file
def main() -> None:
manager = mp.Manager()
faulthandler.enable()
# Setup the logging thread
setup_logging(manager)
setup_logging()
threading.current_thread().name = "frigate"
stop_event = mp.Event()
# send stop event on SIGINT
signal.signal(signal.SIGINT, lambda sig, frame: stop_event.set())
# Make sure we exit cleanly on SIGTERM.
signal.signal(signal.SIGTERM, lambda sig, frame: sys.exit())
@@ -99,14 +93,7 @@ def main() -> None:
print("*************************************************************")
print("*** End Config Validation Errors ***")
print("*************************************************************")
# attempt to start Frigate in recovery mode
try:
config = FrigateConfig.load(install=True, safe_load=True)
print("Starting Frigate in safe mode.")
except ValidationError:
print("Unable to start Frigate in safe mode.")
sys.exit(1)
sys.exit(1)
if args.validate_config:
print("*************************************************************")
print("*** Your config file is valid. ***")
@@ -114,23 +101,8 @@ def main() -> None:
sys.exit(0)
# Run the main application.
FrigateApp(config, manager, stop_event).start()
FrigateApp(config).start()
if __name__ == "__main__":
mp.set_forkserver_preload(
[
# Standard library and core dependencies
"sqlite3",
# Third-party libraries commonly used in Frigate
"numpy",
"cv2",
"peewee",
"zmq",
"ruamel.yaml",
# Frigate core modules
"frigate.camera.maintainer",
]
)
mp.set_start_method("forkserver", force=True)
main()
@@ -6,12 +6,11 @@ import json
import logging
import os
import traceback
import urllib
from datetime import datetime, timedelta
from functools import reduce
from io import StringIO
from pathlib import Path as FilePath
from typing import Any, Dict, List, Optional
from typing import Any, Optional
import aiofiles
import requests
@@ -21,7 +20,7 @@ from fastapi.encoders import jsonable_encoder
from fastapi.params import Depends
from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
from markupsafe import escape
from peewee import SQL, fn, operator
from peewee import SQL, operator
from pydantic import ValidationError
from frigate.api.auth import require_role
@@ -29,18 +28,12 @@ from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryPa
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateTopic,
)
from frigate.models import Event, Timeline
from frigate.stats.prometheus import get_metrics, update_metrics
from frigate.util.builtin import (
clean_camera_user_pass,
flatten_config_data,
get_tz_modifiers,
process_config_query_string,
update_yaml_file_bulk,
update_yaml_from_url,
)
from frigate.util.config import find_config_file
from frigate.util.services import (
@@ -130,14 +123,7 @@ def metrics(request: Request):
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
# Retrieve the latest statistics and update the Prometheus metrics
stats = request.app.stats_emitter.get_latest_stats()
# query DB for count of events by camera, label
event_counts: List[Dict[str, Any]] = (
Event.select(Event.camera, Event.label, fn.Count())
.group_by(Event.camera, Event.label)
.dicts()
)
update_metrics(stats=stats, event_counts=event_counts)
update_metrics(stats)
content, content_type = get_metrics()
return Response(content=content, media_type=content_type)
@@ -368,37 +354,14 @@ def config_set(request: Request, body: AppConfigSetBody):
with open(config_file, "r") as f:
old_raw_config = f.read()
f.close()
try:
updates = {}
# process query string parameters (takes precedence over body.config_data)
parsed_url = urllib.parse.urlparse(str(request.url))
query_string = urllib.parse.parse_qs(parsed_url.query, keep_blank_values=True)
# Filter out empty keys but keep blank values for non-empty keys
query_string = {k: v for k, v in query_string.items() if k}
if query_string:
updates = process_config_query_string(query_string)
elif body.config_data:
updates = flatten_config_data(body.config_data)
if not updates:
return JSONResponse(
content=(
{"success": False, "message": "No configuration data provided"}
),
status_code=400,
)
# apply all updates in a single operation
update_yaml_file_bulk(config_file, updates)
# validate the updated config
update_yaml_from_url(config_file, str(request.url))
with open(config_file, "r") as f:
new_raw_config = f.read()
f.close()
# Validate the config schema
try:
config = FrigateConfig.parse(new_raw_config)
except Exception:
@@ -422,25 +385,8 @@ def config_set(request: Request, body: AppConfigSetBody):
status_code=500,
)
if body.requires_restart == 0 or body.update_topic:
old_config: FrigateConfig = request.app.frigate_config
if body.requires_restart == 0:
request.app.frigate_config = config
if body.update_topic and body.update_topic.startswith("config/cameras/"):
_, _, camera, field = body.update_topic.split("/")
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
else:
settings = config.get_nested_object(body.update_topic)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum[field], camera),
settings,
)
return JSONResponse(
content=(
{
@@ -11,7 +11,7 @@ import secrets
import time
from datetime import datetime
from pathlib import Path
from typing import List, Optional
from typing import List
from fastapi import APIRouter, Depends, HTTPException, Request, Response
from fastapi.responses import JSONResponse, RedirectResponse
@@ -33,6 +33,7 @@ from frigate.models import User
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.auth])
VALID_ROLES = ["admin", "viewer"]
class RateLimiter:
@@ -203,7 +204,6 @@ async def get_current_user(request: Request):
def require_role(required_roles: List[str]):
async def role_checker(request: Request):
proxy_config: ProxyConfig = request.app.frigate_config.proxy
config_roles = list(request.app.frigate_config.auth.roles.keys())
# Get role from header (could be comma-separated)
role_header = request.headers.get("remote-role")
@@ -217,123 +217,19 @@ def require_role(required_roles: List[str]):
if not roles:
raise HTTPException(status_code=403, detail="Role not provided")
# enforce config roles
valid_roles = [r for r in roles if r in config_roles]
if not valid_roles:
# Check if any role matches required_roles
if not any(role in required_roles for role in roles):
raise HTTPException(
status_code=403,
detail=f"No valid roles found in {roles}. Required: {', '.join(required_roles)}. Available: {', '.join(config_roles)}",
detail=f"Role {', '.join(roles)} not authorized. Required: {', '.join(required_roles)}",
)
if not any(role in required_roles for role in valid_roles):
raise HTTPException(
status_code=403,
detail=f"Role {', '.join(valid_roles)} not authorized. Required: {', '.join(required_roles)}",
)
return next(
(role for role in valid_roles if role in required_roles), valid_roles[0]
)
# Return the first matching role
return next((role for role in roles if role in required_roles), roles[0])
return role_checker
def resolve_role(
headers: dict, proxy_config: ProxyConfig, config_roles: set[str]
) -> str:
"""
Determine the effective role for a request based on proxy headers and configuration.
Order of resolution:
1. If a role header is defined in proxy_config.header_map.role:
- If a role_map is configured, treat the header as group claims
(split by proxy_config.separator) and map to roles.
- If no role_map is configured, treat the header as role names directly.
2. If no valid role is found, return proxy_config.default_role if it's valid in config_roles, else 'viewer'.
Args:
headers (dict): Incoming request headers (case-insensitive).
proxy_config (ProxyConfig): Proxy configuration.
config_roles (set[str]): Set of valid roles from config.
Returns:
str: Resolved role (one of config_roles or validated default).
"""
default_role = proxy_config.default_role
role_header = proxy_config.header_map.role
# Validate default_role against config; fallback to 'viewer' if invalid
validated_default = default_role if default_role in config_roles else "viewer"
if not config_roles:
validated_default = "viewer" # Edge case: no roles defined
if not role_header:
logger.debug(
"No role header configured in proxy_config.header_map. Returning validated default role '%s'.",
validated_default,
)
return validated_default
raw_value = headers.get(role_header, "")
logger.debug("Raw role header value from '%s': %r", role_header, raw_value)
if not raw_value:
logger.debug(
"Role header missing or empty. Returning validated default role '%s'.",
validated_default,
)
return validated_default
# role_map configured, treat header as group claims
if proxy_config.header_map.role_map:
groups = [
g.strip() for g in raw_value.split(proxy_config.separator) if g.strip()
]
logger.debug("Parsed groups from role header: %s", groups)
matched_roles = {
role_name
for role_name, required_groups in proxy_config.header_map.role_map.items()
if any(group in groups for group in required_groups)
}
logger.debug("Matched roles from role_map: %s", matched_roles)
if matched_roles:
resolved = next(
(r for r in config_roles if r in matched_roles), validated_default
)
logger.debug("Resolved role (with role_map) to '%s'.", resolved)
return resolved
logger.debug(
"No role_map match for groups '%s'. Using validated default role '%s'.",
raw_value,
validated_default,
)
return validated_default
# no role_map, treat as role names directly
roles_from_header = [
r.strip().lower() for r in raw_value.split(proxy_config.separator) if r.strip()
]
logger.debug("Parsed roles directly from header: %s", roles_from_header)
resolved = next(
(r for r in config_roles if r in roles_from_header),
validated_default,
)
if resolved == validated_default and roles_from_header:
logger.debug(
"Provided proxy role header values '%s' did not contain a valid role. Using validated default role '%s'.",
raw_value,
validated_default,
)
else:
logger.debug("Resolved role (direct header) to '%s'.", resolved)
return resolved
# Endpoints
@router.get("/auth")
def auth(request: Request):
@@ -370,11 +266,22 @@ def auth(request: Request):
else "anonymous"
)
# parse header and resolve a valid role
config_roles_set = set(auth_config.roles.keys())
role = resolve_role(request.headers, proxy_config, config_roles_set)
role_header = proxy_config.header_map.role
role = (
request.headers.get(role_header, default=proxy_config.default_role)
if role_header
else proxy_config.default_role
)
# if comma-separated with "admin", use "admin",
# if comma-separated with "viewer", use "viewer",
# else use default role
roles = [r.strip() for r in role.split(proxy_config.separator)] if role else []
success_response.headers["remote-role"] = next(
(r for r in VALID_ROLES if r in roles), proxy_config.default_role
)
success_response.headers["remote-role"] = role
return success_response
# now apply authentication
@@ -466,13 +373,7 @@ def profile(request: Request):
username = request.headers.get("remote-user", "anonymous")
role = request.headers.get("remote-role", "viewer")
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
return JSONResponse(
content={"username": username, "role": role, "allowed_cameras": allowed_cameras}
)
return JSONResponse(content={"username": username, "role": role})
@router.get("/logout")
@@ -503,12 +404,8 @@ def login(request: Request, body: AppPostLoginBody):
password_hash = db_user.password_hash
if verify_password(password, password_hash):
role = getattr(db_user, "role", "viewer")
config_roles_set = set(request.app.frigate_config.auth.roles.keys())
if role not in config_roles_set:
logger.warning(
f"User {db_user.username} has an invalid role {role}, falling back to 'viewer'."
)
role = "viewer"
if role not in VALID_ROLES:
role = "viewer" # Enforce valid roles
expiration = int(time.time()) + JWT_SESSION_LENGTH
encoded_jwt = create_encoded_jwt(user, role, expiration, request.app.jwt_token)
response = Response("", 200)
@@ -533,17 +430,11 @@ def create_user(
body: AppPostUsersBody,
):
HASH_ITERATIONS = request.app.frigate_config.auth.hash_iterations
config_roles = list(request.app.frigate_config.auth.roles.keys())
if not re.match("^[A-Za-z0-9._]+$", body.username):
return JSONResponse(content={"message": "Invalid username"}, status_code=400)
if body.role not in config_roles:
return JSONResponse(
content={"message": f"Role must be one of: {', '.join(config_roles)}"},
status_code=400,
)
role = body.role or "viewer"
role = body.role if body.role in VALID_ROLES else "viewer"
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
User.insert(
{
@@ -556,8 +447,14 @@ def create_user(
return JSONResponse(content={"username": body.username})
@router.delete("/users/{username}")
def delete_user(username: str):
@router.delete("/users/{username}", dependencies=[Depends(require_role(["admin"]))])
def delete_user(request: Request, username: str):
# Prevent deletion of the built-in admin user
if username == "admin":
return JSONResponse(
content={"message": "Cannot delete admin user"}, status_code=403
)
User.delete_by_id(username)
return JSONResponse(content={"success": True})
@@ -614,52 +511,10 @@ async def update_role(
return JSONResponse(
content={"message": "Cannot modify admin user's role"}, status_code=403
)
config_roles = list(request.app.frigate_config.auth.roles.keys())
if body.role not in config_roles:
if body.role not in VALID_ROLES:
return JSONResponse(
content={"message": f"Role must be one of: {', '.join(config_roles)}"},
status_code=400,
content={"message": "Role must be 'admin' or 'viewer'"}, status_code=400
)
User.set_by_id(username, {User.role: body.role})
return JSONResponse(content={"success": True})
async def require_camera_access(
camera_name: Optional[str] = None,
request: Request = None,
):
"""Dependency to enforce camera access based on user role."""
if camera_name is None:
return # For lists, filter later
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
return current_user
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
# Admin or full access bypasses
if role == "admin" or not roles_dict.get(role):
return
if camera_name not in allowed_cameras:
raise HTTPException(
status_code=403,
detail=f"Access denied to camera '{camera_name}'. Allowed: {allowed_cameras}",
)
async def get_allowed_cameras_for_filter(request: Request):
"""Dependency to get allowed_cameras for filtering lists."""
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
return [] # Unauthorized: no cameras
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
return User.get_allowed_cameras(role, roles_dict, all_camera_names)

View File

@@ -14,14 +14,10 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import require_role
from frigate.api.defs.request.classification_body import (
AudioTranscriptionBody,
RenameFaceBody,
)
from frigate.api.defs.request.classification_body import RenameFaceBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.const import CLIPS_DIR, FACE_DIR
from frigate.const import FACE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
from frigate.util.path import get_event_snapshot
@@ -388,255 +384,3 @@ def reindex_embeddings(request: Request):
},
status_code=500,
)
@router.put("/audio/transcribe")
def transcribe_audio(request: Request, body: AudioTranscriptionBody):
event_id = body.event_id
try:
event = Event.get(Event.id == event_id)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
return JSONResponse(
content=({"success": False, "message": message}), status_code=404
)
if not request.app.frigate_config.cameras[event.camera].audio_transcription.enabled:
message = f"Audio transcription is not enabled for {event.camera}."
logger.error(message)
return JSONResponse(
content=(
{
"success": False,
"message": message,
}
),
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
response = context.transcribe_audio(model_to_dict(event))
if response == "started":
return JSONResponse(
content={
"success": True,
"message": "Audio transcription has started.",
},
status_code=202, # 202 Accepted
)
elif response == "in_progress":
return JSONResponse(
content={
"success": False,
"message": "Audio transcription for a speech event is currently in progress. Try again later.",
},
status_code=409, # 409 Conflict
)
else:
return JSONResponse(
content={
"success": False,
"message": "Failed to transcribe audio.",
},
status_code=500,
)
# custom classification training
@router.get("/classification/{name}/dataset")
def get_classification_dataset(name: str):
dataset_dict: dict[str, list[str]] = {}
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "dataset")
if not os.path.exists(dataset_dir):
return JSONResponse(status_code=200, content={})
for name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, name)
if not os.path.isdir(category_dir):
continue
dataset_dict[name] = []
for file in filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(category_dir),
):
dataset_dict[name].append(file)
return JSONResponse(status_code=200, content=dataset_dict)
@router.get("/classification/{name}/train")
def get_classification_images(name: str):
train_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "train")
if not os.path.exists(train_dir):
return JSONResponse(status_code=200, content=[])
return JSONResponse(
status_code=200,
content=list(
filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(train_dir),
)
),
)
@router.post("/classification/{name}/train")
async def train_configured_model(request: Request, name: str):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
context: EmbeddingsContext = request.app.embeddings
context.start_classification_training(name)
return JSONResponse(
content={"success": True, "message": "Started classification model training."},
status_code=200,
)
@router.post(
"/classification/{name}/dataset/{category}/delete",
dependencies=[Depends(require_role(["admin"]))],
)
def delete_classification_dataset_images(
request: Request, name: str, category: str, body: dict = None
):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
list_of_ids = json.get("ids", "")
folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
)
for id in list_of_ids:
file_path = os.path.join(folder, sanitize_filename(id))
if os.path.isfile(file_path):
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
@router.post(
"/classification/{name}/dataset/categorize",
dependencies=[Depends(require_role(["admin"]))],
)
def categorize_classification_image(request: Request, name: str, body: dict = None):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
category = sanitize_filename(json.get("category", ""))
training_file_name = sanitize_filename(json.get("training_file", ""))
training_file = os.path.join(
CLIPS_DIR, sanitize_filename(name), "train", training_file_name
)
if training_file_name and not os.path.isfile(training_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Invalid filename or no file exists: {training_file_name}",
}
),
status_code=404,
)
new_name = f"{category}-{datetime.datetime.now().timestamp()}.png"
new_file_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", category
)
if not os.path.exists(new_file_folder):
os.mkdir(new_file_folder)
# use opencv because webp images can not be used to train
img = cv2.imread(training_file)
cv2.imwrite(os.path.join(new_file_folder, new_name), img)
os.unlink(training_file)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
@router.post(
"/classification/{name}/train/delete",
dependencies=[Depends(require_role(["admin"]))],
)
def delete_classification_train_images(request: Request, name: str, body: dict = None):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
list_of_ids = json.get("ids", "")
folder = os.path.join(CLIPS_DIR, sanitize_filename(name), "train")
for id in list_of_ids:
file_path = os.path.join(folder, sanitize_filename(id))
if os.path.isfile(file_path):
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
@@ -1,8 +1,7 @@
from enum import Enum
from typing import Optional, Union
from typing import Optional
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class Extension(str, Enum):
@@ -23,7 +22,6 @@ class MediaLatestFrameQueryParams(BaseModel):
zones: Optional[int] = None
mask: Optional[int] = None
motion: Optional[int] = None
paths: Optional[int] = None
regions: Optional[int] = None
quality: Optional[int] = 70
height: Optional[int] = None
@@ -53,10 +51,3 @@ class MediaMjpegFeedQueryParams(BaseModel):
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30
@@ -1,13 +1,9 @@
from typing import Optional
from pydantic import BaseModel, Field
from pydantic import BaseModel
from frigate.events.types import RegenerateDescriptionEnum
class RegenerateQueryParameters(BaseModel):
source: Optional[RegenerateDescriptionEnum] = RegenerateDescriptionEnum.thumbnails
force: Optional[bool] = Field(
default=False,
description="Force (re)generating the description even if GenAI is disabled for this camera.",
)
@@ -1,12 +1,10 @@
from typing import Any, Dict, Optional
from typing import Optional
from pydantic import BaseModel
class AppConfigSetBody(BaseModel):
requires_restart: int = 1
update_topic: str | None = None
config_data: Optional[Dict[str, Any]] = None
class AppPutPasswordBody(BaseModel):
@@ -3,7 +3,3 @@ from pydantic import BaseModel
class RenameFaceBody(BaseModel):
new_name: str
class AudioTranscriptionBody(BaseModel):
event_id: str
@@ -2,8 +2,6 @@ from typing import List, Optional, Union
from pydantic import BaseModel, Field
from frigate.config.classification import TriggerType
class EventsSubLabelBody(BaseModel):
subLabel: str = Field(title="Sub label", max_length=100)
@@ -47,9 +45,3 @@ class EventsDeleteBody(BaseModel):
class SubmitPlusBody(BaseModel):
include_annotation: int = Field(default=1)
class TriggerEmbeddingBody(BaseModel):
type: TriggerType
data: str
threshold: float = Field(default=0.5, ge=0.0, le=1.0)
@@ -1,6 +1,5 @@
"""Event apis."""
import base64
import datetime
import logging
import os
@@ -8,23 +7,16 @@ import random
import string
from functools import reduce
from pathlib import Path
from typing import List
from urllib.parse import unquote
import cv2
import numpy as np
from fastapi import APIRouter, Request
from fastapi.params import Depends
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filename
from peewee import JOIN, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.auth import require_role
from frigate.api.defs.query.events_query_parameters import (
DEFAULT_TIME_RANGE,
EventsQueryParams,
@@ -42,7 +34,6 @@ from frigate.api.defs.request.events_body import (
EventsLPRBody,
EventsSubLabelBody,
SubmitPlusBody,
TriggerEmbeddingBody,
)
from frigate.api.defs.response.event_response import (
EventCreateResponse,
@@ -53,12 +44,11 @@ from frigate.api.defs.response.event_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.const import CLIPS_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline, Trigger
from frigate.models import Event, ReviewSegment, Timeline
from frigate.track.object_processing import TrackedObject
from frigate.util.builtin import get_tz_modifiers
from frigate.util.path import get_event_thumbnail_bytes
logger = logging.getLogger(__name__)
@@ -66,10 +56,7 @@ router = APIRouter(tags=[Tags.events])
@router.get("/events", response_model=list[EventResponse])
def events(
params: EventsQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def events(params: EventsQueryParams = Depends()):
camera = params.camera
cameras = params.cameras
@@ -143,14 +130,8 @@ def events(
clauses.append((Event.camera == camera))
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((Event.camera << camera_list))
camera_list = cameras.split(",")
clauses.append((Event.camera << camera_list))
if labels != "all":
label_list = labels.split(",")
@@ -180,32 +161,43 @@ def events(
clauses.append((sub_label_clause))
if recognized_license_plate != "all":
# use matching so joined recognized_license_plates are included
# for example a recognized license plate 'ABC123' would get events
# with recognized license plates 'ABC123' and 'ABC123, XYZ789'
recognized_license_plate_clauses = []
filtered_recognized_license_plates = recognized_license_plate.split(",")
clauses_for_plates = []
if "None" in filtered_recognized_license_plates:
filtered_recognized_license_plates.remove("None")
clauses_for_plates.append(Event.data["recognized_license_plate"].is_null())
# regex vs exact matching
normal_plates = []
for plate in filtered_recognized_license_plates:
if plate.startswith("^") or any(ch in plate for ch in ".[]?+*"):
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").regexp(plate)
)
else:
normal_plates.append(plate)
# if there are any plain string plates, match them with IN
if normal_plates:
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").in_(normal_plates)
recognized_license_plate_clauses.append(
(Event.data["recognized_license_plate"].is_null())
)
recognized_license_plate_clause = reduce(operator.or_, clauses_for_plates)
clauses.append(recognized_license_plate_clause)
for recognized_license_plate in filtered_recognized_license_plates:
# Exact matching plus list inclusion
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
== recognized_license_plate
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*{recognized_license_plate},*"
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*, {recognized_license_plate}*"
)
)
recognized_license_plate_clause = reduce(
operator.or_, recognized_license_plate_clauses
)
clauses.append((recognized_license_plate_clause))
if zones != "all":
# use matching so events with multiple zones
@@ -335,17 +327,9 @@ def events(
@router.get("/events/explore", response_model=list[EventResponse])
def events_explore(
limit: int = 10,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def events_explore(limit: int = 10):
# get distinct labels for all events
distinct_labels = (
Event.select(Event.label)
.where(Event.camera << allowed_cameras)
.distinct()
.order_by(Event.label)
)
distinct_labels = Event.select(Event.label).distinct().order_by(Event.label)
label_counts = {}
@@ -356,18 +340,14 @@ def events_explore(
# get most recent events for this label
label_events = (
Event.select()
.where((Event.label == label) & (Event.camera << allowed_cameras))
.where(Event.label == label)
.order_by(Event.start_time.desc())
.limit(limit)
.iterator()
)
# count total events for this label
label_counts[label] = (
Event.select()
.where((Event.label == label) & (Event.camera << allowed_cameras))
.count()
)
label_counts[label] = Event.select().where(Event.label == label).count()
yield from label_events
@@ -420,7 +400,7 @@ def events_explore(
@router.get("/event_ids", response_model=list[EventResponse])
async def event_ids(ids: str, request: Request):
def event_ids(ids: str):
ids = ids.split(",")
if not ids:
@@ -429,16 +409,6 @@ async def event_ids(ids: str, request: Request):
status_code=400,
)
for event_id in ids:
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found"}),
status_code=404,
)
try:
events = Event.select().where(Event.id << ids).dicts().iterator()
return JSONResponse(list(events))
@@ -449,11 +419,7 @@ async def event_ids(ids: str, request: Request):
@router.get("/events/search")
def events_search(
request: Request,
params: EventsSearchQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def events_search(request: Request, params: EventsSearchQueryParams = Depends()):
query = params.query
search_type = params.search_type
include_thumbnails = params.include_thumbnails
@@ -526,13 +492,7 @@ def events_search(
event_filters = []
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
event_filters.append((Event.camera << list(filtered)))
else:
event_filters.append((Event.camera << allowed_cameras))
event_filters.append((Event.camera << cameras.split(",")))
if labels != "all":
event_filters.append((Event.label << labels.split(",")))
@@ -551,31 +511,42 @@ def events_search(
event_filters.append((reduce(operator.or_, zone_clauses)))
if recognized_license_plate != "all":
# use matching so joined recognized_license_plates are included
# for example a recognized_license_plate 'ABC123' would get events
# with recognized_license_plates 'ABC123' and 'ABC123, XYZ789'
recognized_license_plate_clauses = []
filtered_recognized_license_plates = recognized_license_plate.split(",")
clauses_for_plates = []
if "None" in filtered_recognized_license_plates:
filtered_recognized_license_plates.remove("None")
clauses_for_plates.append(Event.data["recognized_license_plate"].is_null())
# regex vs exact matching
normal_plates = []
for plate in filtered_recognized_license_plates:
if plate.startswith("^") or any(ch in plate for ch in ".[]?+*"):
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").regexp(plate)
)
else:
normal_plates.append(plate)
# if there are any plain string plates, match them with IN
if normal_plates:
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").in_(normal_plates)
recognized_license_plate_clauses.append(
(Event.data["recognized_license_plate"].is_null())
)
recognized_license_plate_clause = reduce(operator.or_, clauses_for_plates)
for recognized_license_plate in filtered_recognized_license_plates:
# Exact matching plus list inclusion
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
== recognized_license_plate
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*{recognized_license_plate},*"
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*, {recognized_license_plate}*"
)
)
recognized_license_plate_clause = reduce(
operator.or_, recognized_license_plate_clauses
)
event_filters.append((recognized_license_plate_clause))
if after:
@@ -785,10 +756,7 @@ def events_search(
@router.get("/events/summary")
def events_summary(
params: EventsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def events_summary(params: EventsSummaryQueryParams = Depends()):
tz_name = params.timezone
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(tz_name)
has_clip = params.has_clip
@@ -820,7 +788,7 @@ def events_summary(
Event.zones,
fn.COUNT(Event.id).alias("count"),
)
.where(reduce(operator.and_, clauses) & (Event.camera << allowed_cameras))
.where(reduce(operator.and_, clauses))
.group_by(
Event.camera,
Event.label,
@@ -835,11 +803,9 @@ def events_summary(
@router.get("/events/{event_id}", response_model=EventResponse)
async def event(event_id: str, request: Request):
def event(event_id: str):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
return model_to_dict(event)
return model_to_dict(Event.get(Event.id == event_id))
except DoesNotExist:
return JSONResponse(content="Event not found", status_code=404)
@@ -868,7 +834,7 @@ def set_retain(event_id: str):
@router.post("/events/{event_id}/plus", response_model=EventUploadPlusResponse)
async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
logger.error(message)
@@ -886,7 +852,6 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
@@ -981,7 +946,7 @@ async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = N
@router.put("/events/{event_id}/false_positive", response_model=EventUploadPlusResponse)
async def false_positive(request: Request, event_id: str):
def false_positive(request: Request, event_id: str):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
logger.error(message)
@@ -997,7 +962,6 @@ async def false_positive(request: Request, event_id: str):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
@@ -1021,7 +985,7 @@ async def false_positive(request: Request, event_id: str):
)
if not event.plus_id:
plus_response = await send_to_plus(request, event_id)
plus_response = send_to_plus(request, event_id)
if plus_response.status_code != 200:
return plus_response
# need to refetch the event now that it has a plus_id
@@ -1075,10 +1039,9 @@ async def false_positive(request: Request, event_id: str):
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def delete_retain(event_id: str, request: Request):
def delete_retain(event_id: str):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1099,14 +1062,13 @@ async def delete_retain(event_id: str, request: Request):
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def set_sub_label(
def set_sub_label(
request: Request,
event_id: str,
body: EventsSubLabelBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
event = None
@@ -1137,7 +1099,7 @@ async def set_sub_label(
new_score = None
request.app.event_metadata_updater.publish(
(event_id, new_sub_label, new_score), EventMetadataTypeEnum.sub_label.value
EventMetadataTypeEnum.sub_label, (event_id, new_sub_label, new_score)
)
return JSONResponse(
@@ -1154,14 +1116,13 @@ async def set_sub_label(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def set_plate(
def set_plate(
request: Request,
event_id: str,
body: EventsLPRBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
event = None
@@ -1192,8 +1153,7 @@ async def set_plate(
new_score = None
request.app.event_metadata_updater.publish(
(event_id, "recognized_license_plate", new_plate, new_score),
EventMetadataTypeEnum.attribute.value,
EventMetadataTypeEnum.recognized_license_plate, (event_id, new_plate, new_score)
)
return JSONResponse(
@@ -1210,14 +1170,13 @@ async def set_plate(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def set_description(
def set_description(
request: Request,
event_id: str,
body: EventsDescriptionBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1262,12 +1221,11 @@ async def set_description(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def regenerate_description(
def regenerate_description(
request: Request, event_id: str, params: RegenerateQueryParameters = Depends()
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1276,10 +1234,9 @@ async def regenerate_description(
camera_config = request.app.frigate_config.cameras[event.camera]
if camera_config.objects.genai.enabled or params.force:
if camera_config.genai.enabled:
request.app.event_metadata_updater.publish(
(event.id, params.source, params.force),
EventMetadataTypeEnum.regenerate_description.value,
EventMetadataTypeEnum.regenerate_description, (event.id, params.source)
)
return JSONResponse(
@@ -1306,42 +1263,9 @@ async def regenerate_description(
)
@router.post(
"/description/generate",
response_model=GenericResponse,
# dependencies=[Depends(require_role(["admin"]))],
)
def generate_description_embedding(
request: Request,
body: EventsDescriptionBody,
):
new_description = body.description
# If semantic search is enabled, update the index
if request.app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = request.app.embeddings
if len(new_description) > 0:
result = context.generate_description_embedding(
new_description,
)
return JSONResponse(
content=(
{
"success": True,
"message": f"Embedding for description is {result}"
if result
else "Failed to generate embedding",
}
),
status_code=200,
)
async def delete_single_event(event_id: str, request: Request) -> dict:
def delete_single_event(event_id: str, request: Request) -> dict:
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return {"success": False, "message": f"Event {event_id} not found"}
@@ -1371,8 +1295,8 @@ async def delete_single_event(event_id: str, request: Request) -> dict:
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def delete_event(request: Request, event_id: str):
result = await delete_single_event(event_id, request)
def delete_event(request: Request, event_id: str):
result = delete_single_event(event_id, request)
status_code = 200 if result["success"] else 404
return JSONResponse(content=result, status_code=status_code)
@@ -1382,7 +1306,7 @@ async def delete_event(request: Request, event_id: str):
response_model=EventMultiDeleteResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def delete_events(request: Request, body: EventsDeleteBody):
def delete_events(request: Request, body: EventsDeleteBody):
if not body.event_ids:
return JSONResponse(
content=({"success": False, "message": "No event IDs provided."}),
@@ -1393,7 +1317,7 @@ async def delete_events(request: Request, body: EventsDeleteBody):
not_found_events = []
for event_id in body.event_ids:
result = await delete_single_event(event_id, request)
result = delete_single_event(event_id, request)
if result["success"]:
deleted_events.append(event_id)
else:
@@ -1437,6 +1361,7 @@ def create_event(
event_id = f"{now}-{rand_id}"
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.manual_event_create,
(
now,
camera_name,
@@ -1449,7 +1374,6 @@ def create_event(
body.source_type,
body.draw,
),
EventMetadataTypeEnum.manual_event_create.value,
)
return JSONResponse(
@@ -1469,13 +1393,11 @@ def create_event(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
async def end_event(request: Request, event_id: str, body: EventsEndBody):
def end_event(request: Request, event_id: str, body: EventsEndBody):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
end_time = body.end_time or datetime.datetime.now().timestamp()
request.app.event_metadata_updater.publish(
(event_id, end_time), EventMetadataTypeEnum.manual_event_end.value
EventMetadataTypeEnum.manual_event_end, (event_id, end_time)
)
except Exception:
return JSONResponse(
@@ -1489,430 +1411,3 @@ async def end_event(request: Request, event_id: str, body: EventsEndBody):
content=({"success": True, "message": "Event successfully ended."}),
status_code=200,
)
@router.post(
"/trigger/embedding",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def create_trigger_embedding(
request: Request,
body: TriggerEmbeddingBody,
camera_name: str,
name: str,
):
try:
if not request.app.frigate_config.semantic_search.enabled:
return JSONResponse(
content={
"success": False,
"message": "Semantic search is not enabled",
},
status_code=400,
)
# Check if trigger already exists
if (
Trigger.select()
.where(Trigger.camera == camera_name, Trigger.name == name)
.exists()
):
return JSONResponse(
content={
"success": False,
"message": f"Trigger {camera_name}:{name} already exists",
},
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
# Generate embedding based on type
embedding = None
if body.type == "description":
embedding = context.generate_description_embedding(body.data)
elif body.type == "thumbnail":
try:
event: Event = Event.get(Event.id == body.data)
except DoesNotExist:
# TODO: check triggers directory for image
return JSONResponse(
content={
"success": False,
"message": f"Failed to fetch event for {body.type} trigger",
},
status_code=400,
)
# Skip the event if not an object
if event.data.get("type") != "object":
return JSONResponse(
content={
"success": False,
"message": f"Event {body.data} is not a tracked object for {body.type} trigger",
},
status_code=400,
)
if thumbnail := get_event_thumbnail_bytes(event):
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
)
row = cursor.fetchone() if cursor else None
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
else:
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
)
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to generate embedding for {body.type} trigger",
},
status_code=400,
)
if body.type == "thumbnail":
# Save image to the triggers directory
try:
os.makedirs(
os.path.join(TRIGGER_DIR, sanitize_filename(camera_name)),
exist_ok=True,
)
with open(
os.path.join(
TRIGGER_DIR,
sanitize_filename(camera_name),
f"{sanitize_filename(body.data)}.webp",
),
"wb",
) as f:
f.write(thumbnail)
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)  # e.with_traceback() requires a traceback argument; logger.exception logs the stack
logger.error(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
Trigger.create(
camera=camera_name,
name=name,
type=body.type,
data=body.data,
threshold=body.threshold,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
triggering_event_id="",
last_triggered=None,
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger created successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error creating trigger embedding",
},
status_code=500,
)
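For reference, a hypothetical client call against this endpoint. The base URL and port are assumptions; camera_name and name are plain function parameters here, so FastAPI reads them from the query string, and the JSON field names follow the handler above:

import requests  # assumption: a reachable Frigate instance and a valid session

BASE = "http://frigate.local:5000/api"  # hypothetical host, port, and prefix

resp = requests.post(
    f"{BASE}/trigger/embedding",
    params={"camera_name": "front_door", "name": "red_car"},
    json={
        "type": "description",  # or "thumbnail" with an event id in "data"
        "data": "a red car parked in the driveway",
        "threshold": 0.7,
    },
    timeout=10,
)
print(resp.status_code, resp.json())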
@router.put(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def update_trigger_embedding(
request: Request,
camera_name: str,
name: str,
body: TriggerEmbeddingBody,
):
try:
if not request.app.frigate_config.semantic_search.enabled:
return JSONResponse(
content={
"success": False,
"message": "Semantic search is not enabled",
},
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
# Generate embedding based on type
embedding = None
if body.type == "description":
embedding = context.generate_description_embedding(body.data)
elif body.type == "thumbnail":
webp_file = sanitize_filename(body.data) + ".webp"
webp_path = os.path.join(
TRIGGER_DIR, sanitize_filename(camera_name), webp_file
)
try:
event: Event = Event.get(Event.id == body.data)
# Skip the event if not an object
if event.data.get("type") != "object":
return JSONResponse(
content={
"success": False,
"message": f"Event {body.data} is not a tracked object for {body.type} trigger",
},
status_code=400,
)
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
with open(webp_path, "wb") as f:
f.write(thumbnail)
except DoesNotExist:
# check triggers directory for image
if not os.path.exists(webp_path):
return JSONResponse(
content={
"success": False,
"message": f"Failed to fetch event for {body.type} trigger",
},
status_code=400,
)
else:
# Load the image from the triggers directory
with open(webp_path, "rb") as f:
thumbnail = f.read()
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to generate embedding for {body.type} trigger",
},
status_code=400,
)
# Check if trigger exists for upsert
trigger = Trigger.get_or_none(
Trigger.camera == camera_name, Trigger.name == name
)
if trigger:
# Update existing trigger
if trigger.data != body.data: # Delete old thumbnail only if data changes
try:
os.remove(
os.path.join(
TRIGGER_DIR,
sanitize_filename(camera_name),
f"{trigger.data}.webp",
)
)
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
Trigger.update(
data=body.data,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
threshold=body.threshold,
triggering_event_id="",
last_triggered=None,
).where(Trigger.camera == camera_name, Trigger.name == name).execute()
else:
# Create new trigger (for rename case)
Trigger.create(
camera=camera_name,
name=name,
type=body.type,
data=body.data,
threshold=body.threshold,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
triggering_event_id="",
last_triggered=None,
)
if body.type == "thumbnail":
# Save image to the triggers directory
try:
camera_path = os.path.join(TRIGGER_DIR, sanitize_filename(camera_name))
os.makedirs(camera_path, exist_ok=True)
with open(
os.path.join(camera_path, f"{sanitize_filename(body.data)}.webp"),
"wb",
) as f:
f.write(thumbnail)
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger updated successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error updating trigger embedding",
},
status_code=500,
)
@router.delete(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def delete_trigger_embedding(
request: Request,
camera_name: str,
name: str,
):
try:
trigger = Trigger.get_or_none(
Trigger.camera == camera_name, Trigger.name == name
)
if trigger is None:
return JSONResponse(
content={
"success": False,
"message": f"Trigger {camera_name}:{name} not found",
},
status_code=404,
)
deleted = (
Trigger.delete()
.where(Trigger.camera == camera_name, Trigger.name == name)
.execute()
)
if deleted == 0:
return JSONResponse(
content={
"success": False,
"message": f"Error deleting trigger {camera_name}:{name}",
},
status_code=500,
)
try:
os.remove(
os.path.join(
TRIGGER_DIR, sanitize_filename(camera_name), f"{trigger.data}.webp"
)
)
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger deleted successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error deleting trigger embedding",
},
status_code=500,
)
@router.get(
"/triggers/status/{camera_name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def get_triggers_status(
camera_name: str,
):
try:
# Fetch all triggers for the specified camera
triggers = Trigger.select().where(Trigger.camera == camera_name)
# Prepare the response with trigger status
status = {
trigger.name: {
"last_triggered": trigger.last_triggered.timestamp()
if trigger.last_triggered
else None,
"triggering_event_id": trigger.triggering_event_id
if trigger.triggering_event_id
else None,
}
for trigger in triggers
}
if not status:
return JSONResponse(
content={
"success": False,
"message": f"No triggers found for camera {camera_name}",
},
status_code=404,
)
return {"success": True, "triggers": status}
except Exception as ex:
logger.exception(ex)
return JSONResponse(
content=({"success": False, "message": "Error fetching trigger status"}),
status_code=400,
)

View File

@@ -4,23 +4,19 @@ import logging
import random
import string
from pathlib import Path
from typing import List
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.auth import require_role
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
from frigate.const import EXPORT_DIR
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
@@ -35,23 +31,12 @@ router = APIRouter(tags=[Tags.export])
@router.get("/exports")
def get_exports(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
exports = (
Export.select()
.where(Export.camera << allowed_cameras)
.order_by(Export.date.desc())
.dicts()
.iterator()
)
def get_exports():
exports = Export.select().order_by(Export.date.desc()).dicts().iterator()
return JSONResponse(content=[e for e in exports])
@router.post(
"/export/{camera_name}/start/{start_time}/end/{end_time}",
dependencies=[Depends(require_camera_access)],
)
@router.post("/export/{camera_name}/start/{start_time}/end/{end_time}")
def export_recording(
request: Request,
camera_name: str,
@@ -70,7 +55,14 @@ def export_recording(
playback_factor = body.playback
playback_source = body.source
friendly_name = body.name
existing_image = body.image_path
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
content=({"success": False, "message": "Invalid image path"}),
status_code=400,
)
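One caveat worth noting on the added check: a raw startswith() prefix test accepts sibling paths such as /media/frigate/clips_evil. A stricter containment test, sketched assuming CLIPS_DIR mirrors frigate.const.CLIPS_DIR:

import os

CLIPS_DIR = "/media/frigate/clips"  # assumption: mirrors frigate.const.CLIPS_DIR

def is_within_clips(path: str) -> bool:
    # Resolve symlinks and ".." first, then compare whole path components, so
    # a sibling like "/media/frigate/clips_evil" is rejected even though it
    # passes a raw startswith() prefix check.
    base = os.path.realpath(CLIPS_DIR)
    return os.path.commonpath([os.path.realpath(path), base]) == base

print(is_within_clips("/media/frigate/clips/front/1.jpg"))  # True
print(is_within_clips("/media/frigate/clips_evil/1.jpg"))   # False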
if playback_source == "recordings":
recordings_count = (
@@ -150,10 +142,9 @@ def export_recording(
@router.patch(
"/export/{event_id}/rename", dependencies=[Depends(require_role(["admin"]))]
)
async def export_rename(event_id: str, body: ExportRenameBody, request: Request):
def export_rename(event_id: str, body: ExportRenameBody):
try:
export: Export = Export.get(Export.id == event_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
@@ -179,10 +170,9 @@ async def export_rename(event_id: str, body: ExportRenameBody, request: Request)
@router.delete("/export/{event_id}", dependencies=[Depends(require_role(["admin"]))])
async def export_delete(event_id: str, request: Request):
def export_delete(event_id: str):
try:
export: Export = Export.get(Export.id == event_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
@@ -233,11 +223,9 @@ async def export_delete(event_id: str, request: Request):
@router.get("/exports/{export_id}")
async def get_export(export_id: str, request: Request):
def get_export(export_id: str):
try:
export = Export.get(Export.id == export_id)
await require_camera_access(export.camera, request=request)
return JSONResponse(content=model_to_dict(export))
return JSONResponse(content=model_to_dict(Export.get(Export.id == export_id)))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export not found"},

View File

@@ -1,10 +1,8 @@
import logging
import re
from typing import Optional
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from joserfc.jwk import OctKey
from playhouse.sqliteq import SqliteQueueDatabase
from slowapi import _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
@@ -28,7 +26,6 @@ from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
)
from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.embeddings import EmbeddingsContext
from frigate.ptz.onvif import OnvifController
from frigate.stats.emitter import StatsEmitter
@@ -60,7 +57,6 @@ def create_fastapi_app(
onvif: OnvifController,
stats_emitter: StatsEmitter,
event_metadata_updater: EventMetadataPublisher,
config_publisher: CameraConfigUpdatePublisher,
):
logger.info("Starting FastAPI app")
app = FastAPI(
@@ -131,27 +127,6 @@ def create_fastapi_app(
app.onvif = onvif
app.stats_emitter = stats_emitter
app.event_metadata_updater = event_metadata_updater
app.config_publisher = config_publisher
if frigate_config.auth.enabled:
secret = get_jwt_secret()
key_bytes = None
if isinstance(secret, str):
# If the secret looks like hex (e.g., generated by secrets.token_hex), use raw bytes
if len(secret) % 2 == 0 and re.fullmatch(r"[0-9a-fA-F]+", secret or ""):
try:
key_bytes = bytes.fromhex(secret)
except ValueError:
key_bytes = secret.encode("utf-8")
else:
key_bytes = secret.encode("utf-8")
elif isinstance(secret, (bytes, bytearray)):
key_bytes = bytes(secret)
else:
key_bytes = str(secret).encode("utf-8")
app.jwt_token = OctKey.import_key(key_bytes)
else:
app.jwt_token = None
app.jwt_token = get_jwt_secret() if frigate_config.auth.enabled else None
return app
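The removed block's intent, condensed: hex-looking secrets (such as secrets.token_hex output) become raw key bytes, anything else is used as UTF-8, and the result is imported as a symmetric JWK. A minimal sketch assuming joserfc's OctKey.import_key accepts raw bytes:

import re
import secrets

from joserfc.jwk import OctKey

def to_oct_key(secret: str) -> OctKey:
    # Hex-looking secrets (e.g. from secrets.token_hex) are decoded to raw
    # bytes; anything else is used as UTF-8 bytes, matching the removed block.
    if len(secret) % 2 == 0 and re.fullmatch(r"[0-9a-fA-F]+", secret):
        try:
            return OctKey.import_key(bytes.fromhex(secret))
        except ValueError:
            pass
    return OctKey.import_key(secret.encode("utf-8"))

key = to_oct_key(secrets.token_hex(32))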

View File

@@ -8,27 +8,25 @@ import os
import subprocess as sp
import time
from datetime import datetime, timedelta, timezone
from functools import reduce
from pathlib import Path as FilePath
from typing import Any, List
from typing import Any
from urllib.parse import unquote
import cv2
import numpy as np
import pytz
from fastapi import APIRouter, Depends, Path, Query, Request, Response
from fastapi import APIRouter, Path, Query, Request, Response
from fastapi.params import Depends
from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn, operator
from peewee import DoesNotExist, fn
from tzlocal import get_localzone_name
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
from frigate.api.defs.query.media_query_parameters import (
Extension,
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
MediaMjpegFeedQueryParams,
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
)
from frigate.api.defs.tags import Tags
@@ -50,11 +48,12 @@ from frigate.util.path import get_event_thumbnail_bytes
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.media])
@router.get("/{camera_name}", dependencies=[Depends(require_camera_access)])
async def mjpeg_feed(
@router.get("/{camera_name}")
def mjpeg_feed(
request: Request,
camera_name: str,
params: MediaMjpegFeedQueryParams = Depends(),
@@ -110,7 +109,7 @@ def imagestream(
)
@router.get("/{camera_name}/ptz/info", dependencies=[Depends(require_camera_access)])
@router.get("/{camera_name}/ptz/info")
async def camera_ptz_info(request: Request, camera_name: str):
if camera_name in request.app.frigate_config.cameras:
# Schedule get_camera_info in the OnvifController's event loop
@@ -126,10 +125,8 @@ async def camera_ptz_info(request: Request, camera_name: str):
)
@router.get(
"/{camera_name}/latest.{extension}", dependencies=[Depends(require_camera_access)]
)
async def latest_frame(
@router.get("/{camera_name}/latest.{extension}")
def latest_frame(
request: Request,
camera_name: str,
extension: Extension,
@@ -142,7 +139,6 @@ async def latest_frame(
"zones": params.zones,
"mask": params.mask,
"motion_boxes": params.motion,
"paths": params.paths,
"regions": params.regions,
}
quality = params.quality
@@ -237,11 +233,8 @@ async def latest_frame(
)
@router.get(
"/{camera_name}/recordings/{frame_time}/snapshot.{format}",
dependencies=[Depends(require_camera_access)],
)
async def get_snapshot_from_recording(
@router.get("/{camera_name}/recordings/{frame_time}/snapshot.{format}")
def get_snapshot_from_recording(
request: Request,
camera_name: str,
frame_time: float,
@@ -327,10 +320,8 @@ async def get_snapshot_from_recording(
)
@router.post(
"/{camera_name}/plus/{frame_time}", dependencies=[Depends(require_camera_access)]
)
async def submit_recording_snapshot_to_plus(
@router.post("/{camera_name}/plus/{frame_time}")
def submit_recording_snapshot_to_plus(
request: Request, camera_name: str, frame_time: str
):
if camera_name not in request.app.frigate_config.cameras:
@@ -418,23 +409,11 @@ def get_recordings_storage_usage(request: Request):
@router.get("/recordings/summary")
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def all_recordings_summary(params: MediaRecordingsSummaryQueryParams = Depends()):
"""Returns true/false by day indicating if recordings exist"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
query = (
Recordings.select(
@@ -462,7 +441,7 @@ def all_recordings_summary(
.order_by(Recordings.start_time.desc())
)
if params.cameras != "all":
if cameras != "all":
query = query.where(Recordings.camera << cameras.split(","))
recording_days = query.namedtuples()
@@ -471,10 +450,8 @@ def all_recordings_summary(
return JSONResponse(content=days)
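The hour/minute modifiers above shift SQLite datetimes into the requested zone. A rough stand-in for what get_tz_modifiers amounts to; this is an assumption about its contract, not its implementation:

from datetime import datetime
from zoneinfo import ZoneInfo

def tz_modifiers(tz_name: str) -> tuple[str, str, int]:
    # Current UTC offset of the zone, split into SQLite datetime() modifiers.
    seconds = int(datetime.now(ZoneInfo(tz_name)).utcoffset().total_seconds())
    hours, rem = divmod(seconds, 3600)
    return f"{hours} hour", f"{rem // 60} minute", seconds

print(tz_modifiers("America/Chicago"))  # e.g. ('-6 hour', '0 minute', -21600)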
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
@router.get("/{camera_name}/recordings/summary")
def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(timezone)
recording_groups = (
@@ -535,8 +512,8 @@ async def recordings_summary(camera_name: str, timezone: str = "utc"):
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
@router.get("/{camera_name}/recordings")
def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
@@ -565,87 +542,11 @@ async def recordings(
return JSONResponse(content=list(recordings))
@router.get("/recordings/unavailable", response_model=list[dict])
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.now().timestamp()
after = params.after or (datetime.now() - timedelta(hours=1)).timestamp()
scale = params.scale
clauses = [(Recordings.start_time > after) & (Recordings.end_time < before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording start times
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Generate all time segments
current = after
no_recording_segments = []
current_start = None
while current < before:
segment_end = current + scale
# Check if segment overlaps with any recording
has_recording = any(
start <= segment_end and end >= current for start, end in recordings
)
if not has_recording:
if current_start is None:
current_start = current # Start a new gap
else:
if current_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(current)}
)
current_start = None
current = segment_end
# Append the last gap if it exists
if current_start is not None:
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
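The core of this removed endpoint is a linear scan over fixed-size segments, coalescing uncovered segments into gaps. The same algorithm as a stand-alone, runnable function:

# Walk [after, before) in fixed-size segments and coalesce consecutive
# segments that overlap no recording into gap ranges.
def find_gaps(recordings, after, before, scale):
    gaps, gap_start, current = [], None, after
    while current < before:
        segment_end = current + scale
        covered = any(s <= segment_end and e >= current for s, e in recordings)
        if not covered and gap_start is None:
            gap_start = current  # start a new gap
        elif covered and gap_start is not None:
            gaps.append({"start_time": int(gap_start), "end_time": int(current)})
            gap_start = None
        current = segment_end
    if gap_start is not None:  # close a gap that runs to the end of the window
        gaps.append({"start_time": int(gap_start), "end_time": int(before)})
    return gaps

print(find_gaps([(100, 200)], 0, 300, 30))  # gaps before 100 and after 200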
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
dependencies=[Depends(require_camera_access)],
description="For iOS devices, use the master.m3u8 HLS link instead of clip.mp4. Safari does not reliably process progressive mp4 files.",
)
async def recording_clip(
def recording_clip(
request: Request,
camera_name: str,
start_ts: float,
@@ -741,10 +642,9 @@ async def recording_clip(
@router.get(
"/vod/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified timestamp-range on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
def vod_ts(camera_name: str, start_ts: float, end_ts: float):
recordings = (
Recordings.select(
Recordings.path,
@@ -819,24 +719,20 @@ async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
@router.get(
"/vod/{year_month}/{day}/{hour}/{camera_name}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
"""VOD for specific hour. Uses the default timezone (UTC)."""
return await vod_hour(
return vod_hour(
year_month, day, hour, camera_name, get_localzone_name().replace("/", ",")
)
@router.get(
"/vod/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time (with timezone) on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_hour(
year_month: str, day: int, hour: int, camera_name: str, tz_name: str
):
def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
parts = year_month.split("-")
start_date = (
datetime(int(parts[0]), int(parts[1]), day, hour, tzinfo=timezone.utc)
@@ -846,15 +742,14 @@ async def vod_hour(
start_ts = start_date.timestamp()
end_ts = end_date.timestamp()
return await vod_ts(camera_name, start_ts, end_ts)
return vod_ts(camera_name, start_ts, end_ts)
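A sketch of the hour-window conversion these vod routes perform; the original expression is truncated by the hunk, so the offset arithmetic here (zoneinfo-based) is an assumption, and note the routes transport tz_name with "/" replaced by ",":

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Hypothetical inputs matching the route's path segments.
year_month, day, hour, tz_name = "2025-12", 1, 14, "America/Denver"

parts = year_month.split("-")
start = datetime(int(parts[0]), int(parts[1]), day, hour, tzinfo=ZoneInfo(tz_name))
end = start + timedelta(hours=1)
print(start.timestamp(), end.timestamp())  # UTC timestamps for the recordings query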
@router.get(
"/vod/event/{event_id}",
description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
async def vod_event(
request: Request,
def vod_event(
event_id: str,
padding: int = Query(0, description="Padding to apply to the vod."),
):
@@ -870,14 +765,22 @@ async def vod_event(
status_code=404,
)
await require_camera_access(event.camera, request=request)
if not event.has_clip:
logger.error(f"Event does not have recordings: {event_id}")
return JSONResponse(
content={
"success": False,
"message": "Recordings not available.",
},
status_code=404,
)
end_ts = (
datetime.now().timestamp()
if event.end_time is None
else (event.end_time + padding)
)
vod_response = await vod_ts(event.camera, event.start_time - padding, end_ts)
vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
# If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
if (
@@ -895,7 +798,7 @@ async def vod_event(
"/events/{event_id}/snapshot.jpg",
description="Returns a snapshot image for the specified object id. NOTE: The query params only take affect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
)
async def event_snapshot(
def event_snapshot(
request: Request,
event_id: str,
params: MediaEventsSnapshotQueryParams = Depends(),
@@ -905,7 +808,6 @@ async def event_snapshot(
try:
event = Event.get(Event.id == event_id, Event.end_time != None)
event_complete = True
await require_camera_access(event.camera, request=request)
if not event.has_snapshot:
return JSONResponse(
content={"success": False, "message": "Snapshot not available"},
@@ -934,7 +836,6 @@ async def event_snapshot(
height=params.height,
quality=params.quality,
)
await require_camera_access(camera_state.name, request=request)
except Exception:
return JSONResponse(
content={"success": False, "message": "Ongoing event not found"},
@@ -968,7 +869,7 @@ async def event_snapshot(
@router.get("/events/{event_id}/thumbnail.{extension}")
async def event_thumbnail(
def event_thumbnail(
request: Request,
event_id: str,
extension: Extension,
@@ -981,7 +882,6 @@ async def event_thumbnail(
event_complete = False
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
if event.end_time is not None:
event_complete = True
@@ -1044,7 +944,7 @@ async def event_thumbnail(
)
@router.get("/{camera_name}/grid.jpg", dependencies=[Depends(require_camera_access)])
@router.get("/{camera_name}/grid.jpg")
def grid_snapshot(
request: Request, camera_name: str, color: str = "green", font_scale: float = 0.5
):
@@ -1250,7 +1150,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get("/events/{event_id}/clip.mp4")
async def event_clip(
def event_clip(
request: Request,
event_id: str,
padding: int = Query(0, description="Padding to apply to clip."),
@@ -1272,9 +1172,7 @@ async def event_clip(
if event.end_time is None
else event.end_time + padding
)
return await recording_clip(
request, event.camera, event.start_time - padding, end_ts
)
return recording_clip(request, event.camera, event.start_time - padding, end_ts)
@router.get("/events/{event_id}/preview.gif")
@@ -1293,10 +1191,7 @@ def event_preview(request: Request, event_id: str):
return preview_gif(request, event.camera, start_ts, end_ts)
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/preview.gif",
dependencies=[Depends(require_camera_access)],
)
@router.get("/{camera_name}/start/{start_ts}/end/{end_ts}/preview.gif")
def preview_gif(
request: Request,
camera_name: str,
@@ -1452,10 +1347,7 @@ def preview_gif(
)
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/preview.mp4",
dependencies=[Depends(require_camera_access)],
)
@router.get("/{camera_name}/start/{start_ts}/end/{end_ts}/preview.mp4")
def preview_mp4(
request: Request,
camera_name: str,
@@ -1695,14 +1587,9 @@ def preview_thumbnail(file_name: str):
####################### dynamic routes ###########################
@router.get(
"/{camera_name}/{label}/best.jpg", dependencies=[Depends(require_camera_access)]
)
@router.get(
"/{camera_name}/{label}/thumbnail.jpg",
dependencies=[Depends(require_camera_access)],
)
async def label_thumbnail(request: Request, camera_name: str, label: str):
@router.get("/{camera_name}/{label}/best.jpg")
@router.get("/{camera_name}/{label}/thumbnail.jpg")
def label_thumbnail(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(Event.camera == camera_name)
if label != "any":
@@ -1711,7 +1598,7 @@ async def label_thumbnail(request: Request, camera_name: str, label: str):
try:
event_id = event_query.scalar()
return await event_thumbnail(request, event_id, Extension.jpg, 60)
return event_thumbnail(request, event_id, Extension.jpg, 60)
except DoesNotExist:
frame = np.zeros((175, 175, 3), np.uint8)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
@@ -1723,10 +1610,8 @@ async def label_thumbnail(request: Request, camera_name: str, label: str):
)
@router.get(
"/{camera_name}/{label}/clip.mp4", dependencies=[Depends(require_camera_access)]
)
async def label_clip(request: Request, camera_name: str, label: str):
@router.get("/{camera_name}/{label}/clip.mp4")
def label_clip(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(
Event.camera == camera_name, Event.has_clip == True
@@ -1737,17 +1622,15 @@ async def label_clip(request: Request, camera_name: str, label: str):
try:
event = event_query.get()
return await event_clip(request, event.id)
return event_clip(request, event.id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Event not found"}, status_code=404
)
@router.get(
"/{camera_name}/{label}/snapshot.jpg", dependencies=[Depends(require_camera_access)]
)
async def label_snapshot(request: Request, camera_name: str, label: str):
@router.get("/{camera_name}/{label}/snapshot.jpg")
def label_snapshot(request: Request, camera_name: str, label: str):
"""Returns the snapshot image from the latest event for the given camera and label combo"""
label = unquote(label)
if label == "any":
@@ -1768,7 +1651,7 @@ async def label_snapshot(request: Request, camera_name: str, label: str):
try:
event: Event = event_query.get()
return await event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
return event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
except DoesNotExist:
frame = np.zeros((720, 1280, 3), np.uint8)
_, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])

View File

@@ -5,10 +5,9 @@ import os
from datetime import datetime, timedelta, timezone
import pytz
from fastapi import APIRouter, Depends
from fastapi import APIRouter
from fastapi.responses import JSONResponse
from frigate.api.auth import require_camera_access
from frigate.api.defs.tags import Tags
from frigate.const import BASE_DIR, CACHE_DIR, PREVIEW_FRAME_TYPE
from frigate.models import Previews
@@ -19,10 +18,7 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.preview])
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
)
@router.get("/preview/{camera_name}/start/{start_ts}/end/{end_ts}")
def preview_ts(camera_name: str, start_ts: float, end_ts: float):
"""Get all mp4 previews relevant for time period."""
if camera_name != "all":
@@ -75,10 +71,7 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
return JSONResponse(content=clips, status_code=200)
@router.get(
"/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
dependencies=[Depends(require_camera_access)],
)
@router.get("/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}")
def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
"""Get all mp4 previews relevant for time period given the timezone"""
parts = year_month.split("-")
@@ -93,10 +86,7 @@ def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name
return preview_ts(camera_name, start_ts, end_ts)
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}/frames",
dependencies=[Depends(require_camera_access)],
)
@router.get("/preview/{camera_name}/start/{start_ts}/end/{end_ts}/frames")
def get_preview_frames_from_cache(camera_name: str, start_ts: float, end_ts: float):
"""Get list of cached preview frames"""
preview_dir = os.path.join(CACHE_DIR, "preview_frames")

View File

@@ -4,21 +4,15 @@ import datetime
import logging
from functools import reduce
from pathlib import Path
from typing import List
import pandas as pd
from fastapi import APIRouter, Request
from fastapi import APIRouter
from fastapi.params import Depends
from fastapi.responses import JSONResponse
from peewee import Case, DoesNotExist, IntegrityError, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import (
get_allowed_cameras_for_filter,
get_current_user,
require_camera_access,
require_role,
)
from frigate.api.auth import get_current_user, require_role
from frigate.api.defs.query.review_query_parameters import (
ReviewActivityMotionQueryParams,
ReviewQueryParams,
@@ -32,8 +26,6 @@ from frigate.api.defs.response.review_response import (
ReviewSummaryResponse,
)
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.embeddings import EmbeddingsContext
from frigate.models import Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.util.builtin import get_tz_modifiers
@@ -47,7 +39,6 @@ router = APIRouter(tags=[Tags.review])
async def review(
params: ReviewQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
if isinstance(current_user, JSONResponse):
return current_user
@@ -72,14 +63,8 @@ async def review(
]
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -153,7 +138,7 @@ async def review(
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
async def review_ids(request: Request, ids: str):
def review_ids(ids: str):
ids = ids.split(",")
if not ids:
@@ -162,18 +147,6 @@ async def review_ids(request: Request, ids: str):
status_code=400,
)
for review_id in ids:
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
{"success": False, "message": f"Review {review_id} not found"}
),
status_code=404,
)
try:
reviews = (
ReviewSegment.select().where(ReviewSegment.id << ids).dicts().iterator()
@@ -190,7 +163,6 @@ async def review_ids(request: Request, ids: str):
async def review_summary(
params: ReviewSummaryQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
if isinstance(current_user, JSONResponse):
return current_user
@@ -207,14 +179,8 @@ async def review_summary(
clauses = [(ReviewSegment.start_time > day_ago)]
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -308,14 +274,8 @@ async def review_summary(
clauses = []
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -418,7 +378,6 @@ async def review_summary(
@router.post("/reviews/viewed", response_model=GenericResponse)
async def set_multiple_reviewed(
request: Request,
body: ReviewModifyMultipleBody,
current_user: dict = Depends(get_current_user),
):
@@ -429,8 +388,6 @@ async def set_multiple_reviewed(
for review_id in body.ids:
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
review_status = UserReviewStatus.get(
UserReviewStatus.user_id == user_id,
UserReviewStatus.review_segment == review_id,
@@ -512,10 +469,7 @@ def delete_reviews(body: ReviewModifyMultipleBody):
@router.get(
"/review/activity/motion", response_model=list[ReviewActivityMotionResponse]
)
def motion_activity(
params: ReviewActivityMotionQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
def motion_activity(params: ReviewActivityMotionQueryParams = Depends()):
"""Get motion and audio activity."""
cameras = params.cameras
before = params.before or datetime.datetime.now().timestamp()
@@ -530,14 +484,8 @@ def motion_activity(
clauses.append((Recordings.motion > 0))
if cameras != "all":
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
clauses.append((Recordings.camera << allowed_cameras))
data: list[Recordings] = (
Recordings.select(
@@ -595,13 +543,15 @@ def motion_activity(
@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
async def get_review_from_event(request: Request, event_id: str):
def get_review_from_event(event_id: str):
try:
review = ReviewSegment.get(
ReviewSegment.data["detections"].cast("text") % f'*"{event_id}"*'
return JSONResponse(
model_to_dict(
ReviewSegment.get(
ReviewSegment.data["detections"].cast("text") % f'*"{event_id}"*'
)
)
)
await require_camera_access(review.camera, request=request)
return JSONResponse(model_to_dict(review))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Review item not found"},
@@ -610,11 +560,11 @@ async def get_review_from_event(request: Request, event_id: str):
@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
async def get_review(request: Request, review_id: str):
def get_review(review_id: str):
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
return JSONResponse(content=model_to_dict(review))
return JSONResponse(
content=model_to_dict(ReviewSegment.get(ReviewSegment.id == review_id))
)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Review item not found"},
@@ -656,35 +606,3 @@ async def set_not_reviewed(
content=({"success": True, "message": f"Set Review {review_id} as not viewed"}),
status_code=200,
)
@router.post(
"/review/summarize/start/{start_ts}/end/{end_ts}",
description="Use GenAI to summarize review items over a period of time.",
)
def generate_review_summary(request: Request, start_ts: float, end_ts: float):
config: FrigateConfig = request.app.frigate_config
if not config.genai.provider:
return JSONResponse(
content=(
{
"success": False,
"message": "GenAI must be configured to use this feature.",
}
),
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
summary = context.generate_review_summary(start_ts, end_ts)
if summary:
return JSONResponse(
content=({"success": True, "summary": summary}), status_code=200
)
else:
return JSONResponse(
content=({"success": False, "message": "Failed to create summary."}),
status_code=500,
)

View File

@@ -5,7 +5,6 @@ import os
import secrets
import shutil
from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
@@ -15,20 +14,19 @@ import uvicorn
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
import frigate.util as util
from frigate.api.auth import hash_password
from frigate.api.fastapi_app import create_fastapi_app
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.camera.maintainer import CameraMaintainer
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.dispatcher import Dispatcher
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessCommunicator
from frigate.comms.mqtt import MqttClient
from frigate.comms.object_detector_signaler import DetectorProxy
from frigate.comms.webpush import WebPushClient
from frigate.comms.ws import WebSocketClient
from frigate.comms.zmq_proxy import ZmqProxy
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.config.config import FrigateConfig
from frigate.const import (
CACHE_DIR,
@@ -38,12 +36,12 @@ from frigate.const import (
FACE_DIR,
MODEL_CACHE_DIR,
RECORD_DIR,
SHM_FRAMES_VAR,
THUMB_DIR,
TRIGGER_DIR,
)
from frigate.data_processing.types import DataProcessorMetrics
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.embeddings import EmbeddingProcess, EmbeddingsContext
from frigate.embeddings import EmbeddingsContext, manage_embeddings
from frigate.events.audio import AudioProcessor
from frigate.events.cleanup import EventCleanup
from frigate.events.maintainer import EventProcessor
@@ -57,58 +55,56 @@ from frigate.models import (
Regions,
ReviewSegment,
Timeline,
Trigger,
User,
)
from frigate.object_detection.base import ObjectDetectProcess
from frigate.output.output import OutputProcess
from frigate.output.output import output_frames
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.ptz.onvif import OnvifController
from frigate.record.cleanup import RecordingCleanup
from frigate.record.export import migrate_exports
from frigate.record.record import RecordProcess
from frigate.review.review import ReviewProcess
from frigate.record.record import manage_recordings
from frigate.review.review import manage_review_segments
from frigate.stats.emitter import StatsEmitter
from frigate.stats.util import stats_init
from frigate.storage import StorageMaintainer
from frigate.timeline import TimelineProcessor
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import UntrackedSharedMemory
from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
from frigate.util.object import get_camera_regions_grid
from frigate.util.services import set_file_limit
from frigate.version import VERSION
from frigate.video import capture_camera, track_camera
from frigate.watchdog import FrigateWatchdog
logger = logging.getLogger(__name__)
class FrigateApp:
def __init__(
self, config: FrigateConfig, manager: SyncManager, stop_event: MpEvent
) -> None:
self.metrics_manager = manager
def __init__(self, config: FrigateConfig) -> None:
self.audio_process: Optional[mp.Process] = None
self.stop_event = stop_event
self.stop_event: MpEvent = mp.Event()
self.detection_queue: Queue = mp.Queue()
self.detectors: dict[str, ObjectDetectProcess] = {}
self.detection_out_events: dict[str, MpEvent] = {}
self.detection_shms: list[mp.shared_memory.SharedMemory] = []
self.log_queue: Queue = mp.Queue()
self.camera_metrics: DictProxy = self.metrics_manager.dict()
self.camera_metrics: dict[str, CameraMetrics] = {}
self.embeddings_metrics: DataProcessorMetrics | None = (
DataProcessorMetrics(
self.metrics_manager, list(config.classification.custom.keys())
)
DataProcessorMetrics()
if (
config.semantic_search.enabled
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0
)
else None
)
self.ptz_metrics: dict[str, PTZMetrics] = {}
self.processes: dict[str, int] = {}
self.embeddings: Optional[EmbeddingsContext] = None
self.region_grids: dict[str, list[list[dict[str, int]]]] = {}
self.frame_manager = SharedMemoryFrameManager()
self.config = config
def ensure_dirs(self) -> None:
@@ -125,9 +121,6 @@ class FrigateApp:
if self.config.face_recognition.enabled:
dirs.append(FACE_DIR)
if self.config.semantic_search.enabled:
dirs.append(TRIGGER_DIR)
for d in dirs:
if not os.path.exists(d) and not os.path.islink(d):
logger.info(f"Creating directory: {d}")
@@ -138,7 +131,7 @@ class FrigateApp:
def init_camera_metrics(self) -> None:
# create camera_metrics
for camera_name in self.config.cameras.keys():
self.camera_metrics[camera_name] = CameraMetrics(self.metrics_manager)
self.camera_metrics[camera_name] = CameraMetrics()
self.ptz_metrics[camera_name] = PTZMetrics(
autotracker_enabled=self.config.cameras[
camera_name
@@ -147,16 +140,8 @@ class FrigateApp:
def init_queues(self) -> None:
# Queue for cameras to push tracked objects to
# leaving room for 2 extra cameras to be added
self.detected_frames_queue: Queue = mp.Queue(
maxsize=(
sum(
camera.enabled_in_config == True
for camera in self.config.cameras.values()
)
+ 2
)
* 2
maxsize=sum(camera.enabled for camera in self.config.cameras.values()) * 2
)
# Queue for timeline events
@@ -232,24 +217,52 @@ class FrigateApp:
self.processes["go2rtc"] = proc.info["pid"]
def init_recording_manager(self) -> None:
recording_process = RecordProcess(self.config, self.stop_event)
recording_process = util.Process(
target=manage_recordings,
name="recording_manager",
args=(self.config,),
)
recording_process.daemon = True
self.recording_process = recording_process
recording_process.start()
self.processes["recording"] = recording_process.pid or 0
logger.info(f"Recording process started: {recording_process.pid}")
def init_review_segment_manager(self) -> None:
review_segment_process = ReviewProcess(self.config, self.stop_event)
review_segment_process = util.Process(
target=manage_review_segments,
name="review_segment_manager",
args=(self.config,),
)
review_segment_process.daemon = True
self.review_segment_process = review_segment_process
review_segment_process.start()
self.processes["review_segment"] = review_segment_process.pid or 0
logger.info(f"Review process started: {review_segment_process.pid}")
def init_embeddings_manager(self) -> None:
# always start the embeddings process
embedding_process = EmbeddingProcess(
self.config, self.embeddings_metrics, self.stop_event
genai_cameras = [
c for c in self.config.cameras.values() if c.enabled and c.genai.enabled
]
if (
not self.config.semantic_search.enabled
and not genai_cameras
and not self.config.lpr.enabled
and not self.config.face_recognition.enabled
and not self.config.classification.bird.enabled
):
return
embedding_process = util.Process(
target=manage_embeddings,
name="embeddings_manager",
args=(
self.config,
self.embeddings_metrics,
),
)
embedding_process.daemon = True
self.embedding_process = embedding_process
embedding_process.start()
self.processes["embeddings"] = embedding_process.pid or 0
@@ -266,9 +279,7 @@ class FrigateApp:
"synchronous": "NORMAL", # Safe when using WAL https://www.sqlite.org/pragma.html#pragma_synchronous
},
timeout=max(
-                60,
-                10
-                * len([c for c in self.config.cameras.values() if c.enabled_in_config]),
+                60, 10 * len([c for c in self.config.cameras.values() if c.enabled])
),
load_vec_extension=self.config.semantic_search.enabled,
)
@@ -282,7 +293,6 @@ class FrigateApp:
ReviewSegment,
Timeline,
User,
-            Trigger,
]
self.db.bind(models)
@@ -298,15 +308,24 @@ class FrigateApp:
migrate_exports(self.config.ffmpeg, list(self.config.cameras.keys()))
def init_embeddings_client(self) -> None:
-        # Create a client for other processes to use
-        self.embeddings = EmbeddingsContext(self.db)
+        genai_cameras = [
+            c for c in self.config.cameras.values() if c.enabled and c.genai.enabled
+        ]
+        if (
+            self.config.semantic_search.enabled
+            or self.config.lpr.enabled
+            or genai_cameras
+            or self.config.face_recognition.enabled
+        ):
+            # Create a client for other processes to use
+            self.embeddings = EmbeddingsContext(self.db)
def init_inter_process_communicator(self) -> None:
self.inter_process_communicator = InterProcessCommunicator()
-        self.inter_config_updater = CameraConfigUpdatePublisher()
+        self.inter_config_updater = ConfigPublisher()
self.event_metadata_updater = EventMetadataPublisher()
self.inter_zmq_proxy = ZmqProxy()
-        self.detection_proxy = DetectorProxy()
def init_onvif(self) -> None:
self.onvif_controller = OnvifController(self.config, self.ptz_metrics)
@@ -339,6 +358,8 @@ class FrigateApp:
def start_detectors(self) -> None:
for name in self.config.cameras.keys():
self.detection_out_events[name] = mp.Event()
try:
largest_frame = max(
[
@@ -370,10 +391,8 @@ class FrigateApp:
self.detectors[name] = ObjectDetectProcess(
name,
self.detection_queue,
list(self.config.cameras.keys()),
self.config,
self.detection_out_events,
detector_config,
self.stop_event,
)
def start_ptz_autotracker(self) -> None:
@@ -397,22 +416,79 @@ class FrigateApp:
self.detected_frames_processor.start()
def start_video_output_processor(self) -> None:
-        output_processor = OutputProcess(self.config, self.stop_event)
+        output_processor = util.Process(
+            target=output_frames,
+            name="output_processor",
+            args=(self.config,),
+        )
output_processor.daemon = True
self.output_processor = output_processor
output_processor.start()
logger.info(f"Output process started: {output_processor.pid}")
def start_camera_processor(self) -> None:
self.camera_maintainer = CameraMaintainer(
self.config,
self.detection_queue,
self.detected_frames_queue,
self.camera_metrics,
self.ptz_metrics,
self.stop_event,
self.metrics_manager,
)
self.camera_maintainer.start()
def init_historical_regions(self) -> None:
# delete region grids for removed or renamed cameras
cameras = list(self.config.cameras.keys())
Regions.delete().where(~(Regions.camera << cameras)).execute()
# create or update region grids for each camera
for camera in self.config.cameras.values():
assert camera.name is not None
self.region_grids[camera.name] = get_camera_regions_grid(
camera.name,
camera.detect,
max(self.config.model.width, self.config.model.height),
)
def start_camera_processors(self) -> None:
for name, config in self.config.cameras.items():
if not self.config.cameras[name].enabled_in_config:
logger.info(f"Camera processor not started for disabled camera {name}")
continue
camera_process = util.Process(
target=track_camera,
name=f"camera_processor:{name}",
args=(
name,
config,
self.config.model,
self.config.model.merged_labelmap,
self.detection_queue,
self.detection_out_events[name],
self.detected_frames_queue,
self.camera_metrics[name],
self.ptz_metrics[name],
self.region_grids[name],
),
daemon=True,
)
self.camera_metrics[name].process = camera_process
camera_process.start()
logger.info(f"Camera processor started for {name}: {camera_process.pid}")
def start_camera_capture_processes(self) -> None:
shm_frame_count = self.shm_frame_count()
for name, config in self.config.cameras.items():
if not self.config.cameras[name].enabled_in_config:
logger.info(f"Capture process not started for disabled camera {name}")
continue
# pre-create shms
for i in range(shm_frame_count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = util.Process(
target=capture_camera,
name=f"camera_capture:{name}",
args=(name, config, shm_frame_count, self.camera_metrics[name]),
)
capture_process.daemon = True
self.camera_metrics[name].capture_process = capture_process
capture_process.start()
logger.info(f"Capture process started for {name}: {capture_process.pid}")
def start_audio_processor(self) -> None:
audio_cameras = [
@@ -422,9 +498,7 @@ class FrigateApp:
]
if audio_cameras:
-            self.audio_process = AudioProcessor(
-                self.config, audio_cameras, self.camera_metrics, self.stop_event
-            )
+            self.audio_process = AudioProcessor(audio_cameras, self.camera_metrics)
self.audio_process.start()
self.processes["audio_detector"] = self.audio_process.pid or 0
@@ -472,6 +546,45 @@ class FrigateApp:
self.frigate_watchdog = FrigateWatchdog(self.detectors, self.stop_event)
self.frigate_watchdog.start()
def shm_frame_count(self) -> int:
total_shm = round(shutil.disk_usage("/dev/shm").total / pow(2, 20), 1)
# required for log files + nginx cache
min_req_shm = 40 + 10
if self.config.birdseye.restream:
min_req_shm += 8
available_shm = total_shm - min_req_shm
cam_total_frame_size = 0.0
for camera in self.config.cameras.values():
if camera.enabled and camera.detect.width and camera.detect.height:
cam_total_frame_size += round(
(camera.detect.width * camera.detect.height * 1.5 + 270480)
/ 1048576,
1,
)
if cam_total_frame_size == 0.0:
return 0
shm_frame_count = min(
int(os.environ.get(SHM_FRAMES_VAR, "50")),
int(available_shm / (cam_total_frame_size)),
)
logger.debug(
f"Calculated total camera size {available_shm} / {cam_total_frame_size} :: {shm_frame_count} frames for each camera in SHM"
)
if shm_frame_count < 20:
logger.warning(
f"The current SHM size of {total_shm}MB is too small, recommend increasing it to at least {round(min_req_shm + cam_total_frame_size * 20)}MB."
)
return shm_frame_count
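To make the arithmetic above concrete, a worked example with illustrative numbers (one 720p detect stream and a 256 MB /dev/shm):

total_shm = 256.0                  # MB reported for /dev/shm
min_req_shm = 40 + 10              # log files + nginx cache, as above
available_shm = total_shm - min_req_shm                     # 206.0 MB
frame_mb = round((1280 * 720 * 1.5 + 270480) / 1048576, 1)  # ~1.6 MB per YUV420 frame
print(min(50, int(available_shm / frame_mb)))               # 50 frames per camera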
def init_auth(self) -> None:
if self.config.auth.enabled:
if User.select().count() == 0:
@@ -532,17 +645,19 @@ class FrigateApp:
self.init_recording_manager()
self.init_review_segment_manager()
self.init_go2rtc()
-        self.start_detectors()
        self.init_embeddings_manager()
        self.bind_database()
        self.check_db_data_migrations()
        self.init_inter_process_communicator()
+        self.start_detectors()
        self.init_dispatcher()
        self.init_embeddings_client()
        self.start_video_output_processor()
        self.start_ptz_autotracker()
+        self.init_historical_regions()
        self.start_detected_frames_processor()
-        self.start_camera_processor()
+        self.start_camera_processors()
+        self.start_camera_capture_processes()
self.start_audio_processor()
self.start_storage_maintainer()
self.start_stats_emitter()
@@ -565,7 +680,6 @@ class FrigateApp:
self.onvif_controller,
self.stats_emitter,
self.event_metadata_updater,
-                self.inter_config_updater,
),
host="127.0.0.1",
port=5001,
@@ -599,6 +713,24 @@ class FrigateApp:
if self.onvif_controller:
self.onvif_controller.close()
# ensure the capture processes are done
for camera, metrics in self.camera_metrics.items():
capture_process = metrics.capture_process
if capture_process is not None:
logger.info(f"Waiting for capture process for {camera} to stop")
capture_process.terminate()
capture_process.join()
# ensure the camera processors are done
for camera, metrics in self.camera_metrics.items():
camera_process = metrics.process
if camera_process is not None:
logger.info(f"Waiting for process for {camera} to stop")
camera_process.terminate()
camera_process.join()
logger.info(f"Closing frame queue for {camera}")
empty_and_close_queue(metrics.frame_queue)
# ensure the detectors are done
for detector in self.detectors.values():
detector.stop()
@@ -642,12 +774,14 @@ class FrigateApp:
self.inter_config_updater.stop()
self.event_metadata_updater.stop()
self.inter_zmq_proxy.stop()
-        self.detection_proxy.stop()
self.frame_manager.cleanup()
while len(self.detection_shms) > 0:
shm = self.detection_shms.pop()
shm.close()
shm.unlink()
# exit the mp Manager process
_stop_logging()
self.metrics_manager.shutdown()
os._exit(os.EX_OK)
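Note: several hunks above swap dedicated process classes (RecordProcess, ReviewProcess, OutputProcess) for the target-function form; both shapes reduce to the standard multiprocessing pattern. A minimal sketch with stdlib multiprocessing and a placeholder worker (Frigate's util.Process additionally wires up process names and logging; the stub below is hypothetical):

import multiprocessing as mp

def manage_recordings_stub(config: dict) -> None:
    # stand-in for the real manage_recordings target
    print(f"recording manager started with {len(config)} cameras")

if __name__ == "__main__":
    proc = mp.Process(target=manage_recordings_stub, name="recording_manager", args=({},))
    proc.daemon = True  # exits with the parent instead of blocking shutdown
    proc.start()
    print(f"Recording process started: {proc.pid}")
    proc.join()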


@@ -1,7 +1,7 @@
import multiprocessing as mp
-from multiprocessing.managers import SyncManager
from multiprocessing.sharedctypes import Synchronized
from multiprocessing.synchronize import Event
+from typing import Optional
class CameraMetrics:
@@ -16,25 +16,25 @@ class CameraMetrics:
frame_queue: mp.Queue
-    process_pid: Synchronized
-    capture_process_pid: Synchronized
+    process: Optional[mp.Process]
+    capture_process: Optional[mp.Process]
ffmpeg_pid: Synchronized
-    def __init__(self, manager: SyncManager):
-        self.camera_fps = manager.Value("d", 0)
-        self.detection_fps = manager.Value("d", 0)
-        self.detection_frame = manager.Value("d", 0)
-        self.process_fps = manager.Value("d", 0)
-        self.skipped_fps = manager.Value("d", 0)
-        self.read_start = manager.Value("d", 0)
-        self.audio_rms = manager.Value("d", 0)
-        self.audio_dBFS = manager.Value("d", 0)
+    def __init__(self):
+        self.camera_fps = mp.Value("d", 0)
+        self.detection_fps = mp.Value("d", 0)
+        self.detection_frame = mp.Value("d", 0)
+        self.process_fps = mp.Value("d", 0)
+        self.skipped_fps = mp.Value("d", 0)
+        self.read_start = mp.Value("d", 0)
+        self.audio_rms = mp.Value("d", 0)
+        self.audio_dBFS = mp.Value("d", 0)
-        self.frame_queue = manager.Queue(maxsize=2)
+        self.frame_queue = mp.Queue(maxsize=2)
-        self.process_pid = manager.Value("i", 0)
-        self.capture_process_pid = manager.Value("i", 0)
-        self.ffmpeg_pid = manager.Value("i", 0)
+        self.process = None
+        self.capture_process = None
+        self.ffmpeg_pid = mp.Value("i", 0)
class PTZMetrics:


@@ -1,20 +1,9 @@
"""Manage camera activity and updating listeners."""
-import datetime
-import json
-import logging
-import random
-import string
from collections import Counter
from typing import Any, Callable
-from frigate.comms.event_metadata_updater import (
-    EventMetadataPublisher,
-    EventMetadataTypeEnum,
-)
-from frigate.config import CameraConfig, FrigateConfig
-logger = logging.getLogger(__name__)
+from frigate.config.config import FrigateConfig
class CameraActivityManager:
@@ -34,33 +23,26 @@ class CameraActivityManager:
if not camera_config.enabled_in_config:
continue
-            self.__init_camera(camera_config)
+            self.last_camera_activity[camera_config.name] = {}
+            self.camera_all_object_counts[camera_config.name] = Counter()
+            self.camera_active_object_counts[camera_config.name] = Counter()
-    def __init_camera(self, camera_config: CameraConfig) -> None:
-        self.last_camera_activity[camera_config.name] = {}
-        self.camera_all_object_counts[camera_config.name] = Counter()
-        self.camera_active_object_counts[camera_config.name] = Counter()
-        for zone, zone_config in camera_config.zones.items():
-            if zone not in self.all_zone_labels:
-                self.zone_all_object_counts[zone] = Counter()
-                self.zone_active_object_counts[zone] = Counter()
-                self.all_zone_labels[zone] = set()
+            for zone, zone_config in camera_config.zones.items():
+                if zone not in self.all_zone_labels:
+                    self.zone_all_object_counts[zone] = Counter()
+                    self.zone_active_object_counts[zone] = Counter()
+                    self.all_zone_labels[zone] = set()
-            self.all_zone_labels[zone].update(
-                zone_config.objects
-                if zone_config.objects
-                else camera_config.objects.track
-            )
+                self.all_zone_labels[zone].update(
+                    zone_config.objects
+                    if zone_config.objects
+                    else camera_config.objects.track
+                )
def update_activity(self, new_activity: dict[str, dict[str, Any]]) -> None:
all_objects: list[dict[str, Any]] = []
for camera in new_activity.keys():
-            # handle cameras that were added dynamically
-            if camera not in self.camera_all_object_counts:
-                self.__init_camera(self.config.cameras[camera])
new_objects = new_activity[camera].get("objects", [])
all_objects.extend(new_objects)
@@ -150,110 +132,3 @@ class CameraActivityManager:
if any_changed:
self.publish(f"{camera}/all", sum(list(all_objects.values())))
self.publish(f"{camera}/all/active", sum(list(active_objects.values())))
class AudioActivityManager:
def __init__(
self, config: FrigateConfig, publish: Callable[[str, Any], None]
) -> None:
self.config = config
self.publish = publish
self.current_audio_detections: dict[str, dict[str, dict[str, Any]]] = {}
self.event_metadata_publisher = EventMetadataPublisher()
for camera_config in config.cameras.values():
if not camera_config.audio.enabled_in_config:
continue
self.__init_camera(camera_config)
def __init_camera(self, camera_config: CameraConfig) -> None:
self.current_audio_detections[camera_config.name] = {}
def update_activity(self, new_activity: dict[str, dict[str, Any]]) -> None:
now = datetime.datetime.now().timestamp()
for camera in new_activity.keys():
# handle cameras that were added dynamically
if camera not in self.current_audio_detections:
self.__init_camera(self.config.cameras[camera])
new_detections = new_activity[camera].get("detections", [])
if self.compare_audio_activity(camera, new_detections, now):
logger.debug(f"Audio detections for {camera}: {new_activity}")
self.publish(
f"{camera}/audio/all",
"ON" if len(self.current_audio_detections[camera]) > 0 else "OFF",
)
self.publish(
"audio_detections",
json.dumps(self.current_audio_detections),
)
def compare_audio_activity(
self, camera: str, new_detections: list[tuple[str, float]], now: float
    ) -> bool:
max_not_heard = self.config.cameras[camera].audio.max_not_heard
current = self.current_audio_detections[camera]
any_changed = False
for label, score in new_detections:
any_changed = True
if label in current:
current[label]["last_detection"] = now
current[label]["score"] = score
else:
rand_id = "".join(
random.choices(string.ascii_lowercase + string.digits, k=6)
)
event_id = f"{now}-{rand_id}"
self.publish(f"{camera}/audio/{label}", "ON")
self.event_metadata_publisher.publish(
(
now,
camera,
label,
event_id,
True,
score,
None,
None,
"audio",
{},
),
EventMetadataTypeEnum.manual_event_create.value,
)
current[label] = {
"id": event_id,
"score": score,
"last_detection": now,
}
# expire detections
for label in list(current.keys()):
if now - current[label]["last_detection"] > max_not_heard:
any_changed = True
self.publish(f"{camera}/audio/{label}", "OFF")
self.event_metadata_publisher.publish(
(current[label]["id"], now),
EventMetadataTypeEnum.manual_event_end.value,
)
del current[label]
return any_changed
def expire_all(self, camera: str) -> None:
now = datetime.datetime.now().timestamp()
current = self.current_audio_detections.get(camera, {})
for label in list(current.keys()):
self.publish(f"{camera}/audio/{label}", "OFF")
self.event_metadata_publisher.publish(
(current[label]["id"], now),
EventMetadataTypeEnum.manual_event_end.value,
)
del current[label]
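The expiry logic above keeps a last_detection timestamp per label and drops anything not heard within max_not_heard seconds. Condensed to its bookkeeping, without the MQTT and event-metadata plumbing:

import time

def expire_stale(current: dict, max_not_heard: float, now: float | None = None) -> list:
    """Drop labels whose last detection is older than max_not_heard; return what expired."""
    now = time.time() if now is None else now
    expired = [
        label for label, det in current.items()
        if now - det["last_detection"] > max_not_heard
    ]
    for label in expired:
        del current[label]
    return expired

detections = {"speech": {"last_detection": time.time() - 30}}
print(expire_stale(detections, max_not_heard=10))  # ['speech']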


@@ -1,220 +0,0 @@
"""Create and maintain camera processes / management."""
import logging
import multiprocessing as mp
import threading
from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.config import FrigateConfig
from frigate.config.camera import CameraConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
)
from frigate.models import Regions
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
from frigate.util.object import get_camera_regions_grid
from frigate.util.services import calculate_shm_requirements
from frigate.video import CameraCapture, CameraTracker
logger = logging.getLogger(__name__)
class CameraMaintainer(threading.Thread):
def __init__(
self,
config: FrigateConfig,
detection_queue: Queue,
detected_frames_queue: Queue,
camera_metrics: DictProxy,
ptz_metrics: dict[str, PTZMetrics],
stop_event: MpEvent,
metrics_manager: SyncManager,
):
super().__init__(name="camera_processor")
self.config = config
self.detection_queue = detection_queue
self.detected_frames_queue = detected_frames_queue
self.stop_event = stop_event
self.camera_metrics = camera_metrics
self.ptz_metrics = ptz_metrics
self.frame_manager = SharedMemoryFrameManager()
self.region_grids: dict[str, list[list[dict[str, int]]]] = {}
self.update_subscriber = CameraConfigUpdateSubscriber(
self.config,
{},
[
CameraConfigUpdateEnum.add,
CameraConfigUpdateEnum.remove,
],
)
self.shm_count = self.__calculate_shm_frame_count()
self.camera_processes: dict[str, mp.Process] = {}
self.capture_processes: dict[str, mp.Process] = {}
self.metrics_manager = metrics_manager
def __init_historical_regions(self) -> None:
# delete region grids for removed or renamed cameras
cameras = list(self.config.cameras.keys())
Regions.delete().where(~(Regions.camera << cameras)).execute()
# create or update region grids for each camera
for camera in self.config.cameras.values():
assert camera.name is not None
self.region_grids[camera.name] = get_camera_regions_grid(
camera.name,
camera.detect,
max(self.config.model.width, self.config.model.height),
)
def __calculate_shm_frame_count(self) -> int:
shm_stats = calculate_shm_requirements(self.config)
if not shm_stats:
# /dev/shm not available
return 0
logger.debug(
f"Calculated total camera size {shm_stats['available']} / "
f"{shm_stats['camera_frame_size']} :: {shm_stats['shm_frame_count']} "
f"frames for each camera in SHM"
)
if shm_stats["shm_frame_count"] < 20:
logger.warning(
f"The current SHM size of {shm_stats['total']}MB is too small, "
f"recommend increasing it to at least {shm_stats['min_shm']}MB."
)
return shm_stats["shm_frame_count"]
def __start_camera_processor(
self, name: str, config: CameraConfig, runtime: bool = False
) -> None:
if not config.enabled_in_config:
logger.info(f"Camera processor not started for disabled camera {name}")
return
if runtime:
self.camera_metrics[name] = CameraMetrics(self.metrics_manager)
self.ptz_metrics[name] = PTZMetrics(autotracker_enabled=False)
self.region_grids[name] = get_camera_regions_grid(
name,
config.detect,
max(self.config.model.width, self.config.model.height),
)
try:
largest_frame = max(
[
det.model.height * det.model.width * 3
if det.model is not None
else 320
for det in self.config.detectors.values()
]
)
UntrackedSharedMemory(name=f"out-{name}", create=True, size=20 * 6 * 4)
UntrackedSharedMemory(
name=name,
create=True,
size=largest_frame,
)
except FileExistsError:
pass
camera_process = CameraTracker(
config,
self.config.model,
self.config.model.merged_labelmap,
self.detection_queue,
self.detected_frames_queue,
self.camera_metrics[name],
self.ptz_metrics[name],
self.region_grids[name],
self.stop_event,
)
self.camera_processes[config.name] = camera_process
camera_process.start()
self.camera_metrics[config.name].process_pid.value = camera_process.pid
logger.info(f"Camera processor started for {config.name}: {camera_process.pid}")
def __start_camera_capture(
self, name: str, config: CameraConfig, runtime: bool = False
) -> None:
if not config.enabled_in_config:
logger.info(f"Capture process not started for disabled camera {name}")
return
# pre-create shms
count = 10 if runtime else self.shm_count
for i in range(count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = CameraCapture(
config, count, self.camera_metrics[name], self.stop_event
)
capture_process.daemon = True
self.capture_processes[name] = capture_process
capture_process.start()
self.camera_metrics[name].capture_process_pid.value = capture_process.pid
logger.info(f"Capture process started for {name}: {capture_process.pid}")
def __stop_camera_capture_process(self, camera: str) -> None:
capture_process = self.capture_processes[camera]
if capture_process is not None:
logger.info(f"Waiting for capture process for {camera} to stop")
capture_process.terminate()
capture_process.join()
def __stop_camera_process(self, camera: str) -> None:
camera_process = self.camera_processes[camera]
if camera_process is not None:
logger.info(f"Waiting for process for {camera} to stop")
camera_process.terminate()
camera_process.join()
logger.info(f"Closing frame queue for {camera}")
empty_and_close_queue(self.camera_metrics[camera].frame_queue)
def run(self):
self.__init_historical_regions()
# start camera processes
for camera, config in self.config.cameras.items():
self.__start_camera_processor(camera, config)
self.__start_camera_capture(camera, config)
while not self.stop_event.wait(1):
updates = self.update_subscriber.check_for_updates()
for update_type, updated_cameras in updates.items():
if update_type == CameraConfigUpdateEnum.add.name:
for camera in updated_cameras:
self.__start_camera_processor(
camera,
self.update_subscriber.camera_configs[camera],
runtime=True,
)
self.__start_camera_capture(
camera,
self.update_subscriber.camera_configs[camera],
runtime=True,
)
elif update_type == CameraConfigUpdateEnum.remove.name:
self.__stop_camera_capture_process(camera)
self.__stop_camera_process(camera)
# ensure the capture processes are done
for camera in self.camera_processes.keys():
self.__stop_camera_capture_process(camera)
# ensure the camera processors are done
for camera in self.capture_processes.keys():
self.__stop_camera_process(camera)
self.update_subscriber.stop()
self.frame_manager.cleanup()
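The run() loop above is a one-second poll that reconciles the running process pairs against add/remove config updates. Its control flow, reduced to a standalone sketch with injected callables (the names here are illustrative, not Frigate's API):

import threading

def maintain_cameras(stop_event: threading.Event, check_for_updates, start, stop) -> None:
    """Poll for add/remove updates until stopped, mirroring the loop above."""
    while not stop_event.wait(1):         # False on timeout, True once set
        updates = check_for_updates()     # e.g. {"add": ["side_yard"], "remove": []}
        for camera in updates.get("add", []):
            start(camera)
        for camera in updates.get("remove", []):
            stop(camera)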


@@ -54,7 +54,7 @@ class CameraState:
self.ptz_autotracker_thread = ptz_autotracker_thread
self.prev_enabled = self.camera_config.enabled
-    def get_current_frame(self, draw_options: dict[str, Any] = {}) -> np.ndarray:
+    def get_current_frame(self, draw_options: dict[str, Any] = {}):
with self.current_frame_lock:
frame_copy = np.copy(self._current_frame)
frame_time = self.current_frame_time
@@ -228,51 +228,12 @@ class CameraState:
position=self.camera_config.timestamp_style.position,
)
if draw_options.get("paths"):
for obj in tracked_objects.values():
if obj["frame_time"] == frame_time and obj["path_data"]:
color = self.config.model.colormap.get(
obj["label"], (255, 255, 255)
)
path_points = [
(
int(point[0][0] * self.camera_config.detect.width),
int(point[0][1] * self.camera_config.detect.height),
)
for point in obj["path_data"]
]
for point in path_points:
cv2.circle(frame_copy, point, 5, color, -1)
for i in range(1, len(path_points)):
cv2.line(
frame_copy,
path_points[i - 1],
path_points[i],
color,
2,
)
bottom_center = (
int((obj["box"][0] + obj["box"][2]) / 2),
int(obj["box"][3]),
)
cv2.line(
frame_copy,
path_points[-1],
bottom_center,
color,
2,
)
return frame_copy
def finished(self, obj_id):
del self.tracked_objects[obj_id]
-    def on(self, event_type: str, callback: Callable):
+    def on(self, event_type: str, callback: Callable[[dict], None]):
self.callbacks[event_type].append(callback)
def update(


@@ -1,9 +1,8 @@
"""Facilitates communication between processes."""
import multiprocessing as mp
-from _pickle import UnpicklingError
from multiprocessing.synchronize import Event as MpEvent
-from typing import Any
+from typing import Any, Optional
import zmq
@@ -33,7 +32,7 @@ class ConfigPublisher:
class ConfigSubscriber:
"""Simplifies receiving an updated config."""
-    def __init__(self, topic: str, exact: bool = False) -> None:
+    def __init__(self, topic: str, exact=False) -> None:
self.topic = topic
self.exact = exact
self.context = zmq.Context()
@@ -41,7 +40,7 @@ class ConfigSubscriber:
self.socket.setsockopt_string(zmq.SUBSCRIBE, topic)
self.socket.connect(SOCKET_PUB_SUB)
-    def check_for_update(self) -> tuple[str, Any] | tuple[None, None]:
+    def check_for_update(self) -> Optional[tuple[str, Any]]:
"""Returns updated config or None if no update."""
try:
topic = self.socket.recv_string(flags=zmq.NOBLOCK)
@@ -51,7 +50,7 @@ class ConfigSubscriber:
return (topic, obj)
else:
return (None, None)
-        except (zmq.ZMQError, UnicodeDecodeError, UnpicklingError):
+        except zmq.ZMQError:
return (None, None)
def stop(self) -> None:

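ConfigSubscriber pairs a prefix subscription (topic_base-style matching) with a non-blocking receive, so callers can poll for updates without stalling their loop. A self-contained sketch of the same pattern over an inproc socket; the sleep only papers over ZMQ's slow-joiner behavior in this toy example:

import time
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://config")
sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "config/motion")  # prefix match
sub.connect("inproc://config")
time.sleep(0.1)                                        # let the subscription register

pub.send_string("config/motion/front_door", flags=zmq.SNDMORE)
pub.send_pyobj({"threshold": 30})
try:
    topic = sub.recv_string(flags=zmq.NOBLOCK)
    print(topic, sub.recv_pyobj())
except zmq.ZMQError:
    print("no update")                                 # the (None, None) path above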

@@ -1,7 +1,7 @@
"""Facilitates communication between processes."""
from enum import Enum
-from typing import Any
+from typing import Any, Optional
from .zmq_proxy import Publisher, Subscriber
@@ -19,7 +19,8 @@ class DetectionPublisher(Publisher):
topic_base = "detection/"
-    def __init__(self, topic: str) -> None:
+    def __init__(self, topic: DetectionTypeEnum) -> None:
+        topic = topic.value
        super().__init__(topic)
@@ -28,15 +29,16 @@ class DetectionSubscriber(Subscriber):
topic_base = "detection/"
-    def __init__(self, topic: str) -> None:
+    def __init__(self, topic: DetectionTypeEnum) -> None:
+        topic = topic.value
        super().__init__(topic)
def check_for_update(
-        self, timeout: float | None = None
-    ) -> tuple[str, Any] | tuple[None, None] | None:
+        self, timeout: float = None
+    ) -> Optional[tuple[DetectionTypeEnum, Any]]:
return super().check_for_update(timeout)
def _return_object(self, topic: str, payload: Any) -> Any:
if payload is None:
return (None, None)
-        return (topic[len(self.topic_base) :], payload)
+        return (DetectionTypeEnum[topic[len(self.topic_base) :]], payload)


@@ -3,32 +3,24 @@
import datetime
import json
import logging
-from typing import Any, Callable, Optional, cast
+from typing import Any, Callable, Optional
from frigate.camera import PTZMetrics
-from frigate.camera.activity_manager import AudioActivityManager, CameraActivityManager
+from frigate.camera.activity_manager import CameraActivityManager
from frigate.comms.base_communicator import Communicator
+from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.webpush import WebPushClient
from frigate.config import BirdseyeModeEnum, FrigateConfig
-from frigate.config.camera.updater import (
-    CameraConfigUpdateEnum,
-    CameraConfigUpdatePublisher,
-    CameraConfigUpdateTopic,
-)
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
-    EXPIRE_AUDIO_ACTIVITY,
INSERT_MANY_RECORDINGS,
INSERT_PREVIEW,
NOTIFICATION_TEST,
REQUEST_REGION_GRID,
-    UPDATE_AUDIO_ACTIVITY,
-    UPDATE_BIRDSEYE_LAYOUT,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_MODEL_STATE,
-    UPDATE_REVIEW_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
)
from frigate.models import Event, Previews, Recordings, ReviewSegment
@@ -46,7 +38,7 @@ class Dispatcher:
def __init__(
self,
config: FrigateConfig,
-        config_updater: CameraConfigUpdatePublisher,
+        config_updater: ConfigPublisher,
onvif: OnvifController,
ptz_metrics: dict[str, PTZMetrics],
communicators: list[Communicator],
@@ -57,13 +49,11 @@ class Dispatcher:
self.ptz_metrics = ptz_metrics
self.comms = communicators
self.camera_activity = CameraActivityManager(config, self.publish)
-        self.audio_activity = AudioActivityManager(config, self.publish)
-        self.model_state: dict[str, ModelStatusTypesEnum] = {}
-        self.embeddings_reindex: dict[str, Any] = {}
-        self.birdseye_layout: dict[str, Any] = {}
+        self.model_state = {}
+        self.embeddings_reindex = {}
self._camera_settings_handlers: dict[str, Callable] = {
"audio": self._on_audio_command,
"audio_transcription": self._on_audio_transcription_command,
"detect": self._on_detect_command,
"enabled": self._on_enabled_command,
"improve_contrast": self._on_motion_improve_contrast_command,
@@ -78,8 +68,6 @@ class Dispatcher:
"birdseye_mode": self._on_birdseye_mode_command,
"review_alerts": self._on_alerts_command,
"review_detections": self._on_detections_command,
"object_descriptions": self._on_object_description_command,
"review_descriptions": self._on_review_description_command,
}
self._global_settings_handlers: dict[str, Callable] = {
"notifications": self._on_global_notification_command,
@@ -92,12 +80,10 @@ class Dispatcher:
(comm for comm in communicators if isinstance(comm, WebPushClient)), None
)
-    def _receive(self, topic: str, payload: Any) -> Optional[Any]:
+    def _receive(self, topic: str, payload: str) -> Optional[Any]:
        """Handle receiving of payload from communicators."""
-        def handle_camera_command(
-            command_type: str, camera_name: str, command: str, payload: str
-        ) -> None:
+        def handle_camera_command(command_type, camera_name, command, payload):
try:
if command_type == "set":
self._camera_settings_handlers[command](camera_name, payload)
@@ -106,13 +92,13 @@ class Dispatcher:
except KeyError:
logger.error(f"Invalid command type or handler: {command_type}")
-        def handle_restart() -> None:
+        def handle_restart():
            restart_frigate()
-        def handle_insert_many_recordings() -> None:
+        def handle_insert_many_recordings():
            Recordings.insert_many(payload).execute()
-        def handle_request_region_grid() -> Any:
+        def handle_request_region_grid():
camera = payload
grid = get_camera_regions_grid(
camera,
@@ -121,32 +107,26 @@ class Dispatcher:
)
return grid
-        def handle_insert_preview() -> None:
+        def handle_insert_preview():
            Previews.insert(payload).execute()
-        def handle_upsert_review_segment() -> None:
+        def handle_upsert_review_segment():
ReviewSegment.insert(payload).on_conflict(
conflict_target=[ReviewSegment.id],
update=payload,
).execute()
-        def handle_clear_ongoing_review_segments() -> None:
+        def handle_clear_ongoing_review_segments():
ReviewSegment.update(end_time=datetime.datetime.now().timestamp()).where(
ReviewSegment.end_time.is_null(True)
).execute()
-        def handle_update_camera_activity() -> None:
+        def handle_update_camera_activity():
            self.camera_activity.update_activity(payload)
-        def handle_update_audio_activity() -> None:
-            self.audio_activity.update_activity(payload)
-        def handle_expire_audio_activity() -> None:
-            self.audio_activity.expire_all(payload)
-        def handle_update_event_description() -> None:
+        def handle_update_event_description():
            event: Event = Event.get(Event.id == payload["id"])
-            cast(dict, event.data)["description"] = payload["description"]
+            event.data["description"] = payload["description"]
event.save()
self.publish(
"tracked_object_update",
@@ -160,48 +140,31 @@ class Dispatcher:
),
)
-        def handle_update_review_description() -> None:
-            final_data = payload["after"]
-            ReviewSegment.insert(final_data).on_conflict(
-                conflict_target=[ReviewSegment.id],
-                update=final_data,
-            ).execute()
-            self.publish("reviews", json.dumps(payload))
-        def handle_update_model_state() -> None:
+        def handle_update_model_state():
if payload:
model = payload["model"]
state = payload["state"]
self.model_state[model] = ModelStatusTypesEnum[state]
self.publish("model_state", json.dumps(self.model_state))
-        def handle_model_state() -> None:
+        def handle_model_state():
            self.publish("model_state", json.dumps(self.model_state.copy()))
-        def handle_update_embeddings_reindex_progress() -> None:
+        def handle_update_embeddings_reindex_progress():
self.embeddings_reindex = payload
self.publish(
"embeddings_reindex_progress",
json.dumps(payload),
)
-        def handle_embeddings_reindex_progress() -> None:
+        def handle_embeddings_reindex_progress():
self.publish(
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
-        def handle_update_birdseye_layout() -> None:
-            if payload:
-                self.birdseye_layout = payload
-                self.publish("birdseye_layout", json.dumps(self.birdseye_layout))
-        def handle_birdseye_layout() -> None:
-            self.publish("birdseye_layout", json.dumps(self.birdseye_layout.copy()))
-        def handle_on_connect() -> None:
+        def handle_on_connect():
            camera_status = self.camera_activity.last_camera_activity.copy()
-            audio_detections = self.audio_activity.current_audio_detections.copy()
cameras_with_status = camera_status.keys()
for camera in self.config.cameras.keys():
@@ -214,9 +177,6 @@ class Dispatcher:
"snapshots": self.config.cameras[camera].snapshots.enabled,
"record": self.config.cameras[camera].record.enabled,
"audio": self.config.cameras[camera].audio.enabled,
"audio_transcription": self.config.cameras[
camera
].audio_transcription.live_enabled,
"notifications": self.config.cameras[camera].notifications.enabled,
"notifications_suspended": int(
self.web_push_client.suspended_cameras.get(camera, 0)
@@ -229,12 +189,6 @@ class Dispatcher:
].onvif.autotracking.enabled,
"alerts": self.config.cameras[camera].review.alerts.enabled,
"detections": self.config.cameras[camera].review.detections.enabled,
"object_descriptions": self.config.cameras[
camera
].objects.genai.enabled,
"review_descriptions": self.config.cameras[
camera
].review.genai.enabled,
}
self.publish("camera_activity", json.dumps(camera_status))
@@ -243,10 +197,8 @@ class Dispatcher:
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
self.publish("birdseye_layout", json.dumps(self.birdseye_layout.copy()))
self.publish("audio_detections", json.dumps(audio_detections))
def handle_notification_test() -> None:
def handle_notification_test():
self.publish("notification_test", "Test notification")
# Dictionary mapping topic to handlers
@@ -257,18 +209,13 @@ class Dispatcher:
UPSERT_REVIEW_SEGMENT: handle_upsert_review_segment,
CLEAR_ONGOING_REVIEW_SEGMENTS: handle_clear_ongoing_review_segments,
UPDATE_CAMERA_ACTIVITY: handle_update_camera_activity,
-            UPDATE_AUDIO_ACTIVITY: handle_update_audio_activity,
-            EXPIRE_AUDIO_ACTIVITY: handle_expire_audio_activity,
            UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
-            UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
            UPDATE_MODEL_STATE: handle_update_model_state,
            UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
-            UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
            NOTIFICATION_TEST: handle_notification_test,
            "restart": handle_restart,
            "embeddingsReindexProgress": handle_embeddings_reindex_progress,
            "modelState": handle_model_state,
-            "birdseyeLayout": handle_birdseye_layout,
"onConnect": handle_on_connect,
}
@@ -296,12 +243,11 @@ class Dispatcher:
logger.error(
f"Received invalid {topic.split('/')[-1]} command: {topic}"
)
-                    return None
+                    return
        elif topic in topic_handlers:
            return topic_handlers[topic]()
        else:
            self.publish(topic, payload, retain=False)
-            return None
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Handle publishing to communicators."""
@@ -327,11 +273,8 @@ class Dispatcher:
f"Turning on motion for {camera_name} due to detection being enabled."
)
motion_settings.enabled = True
-                    self.config_updater.publish_update(
-                        CameraConfigUpdateTopic(
-                            CameraConfigUpdateEnum.motion, camera_name
-                        ),
-                        motion_settings,
+                    self.config_updater.publish(
+                        f"config/motion/{camera_name}", motion_settings
                    )
self.publish(f"{camera_name}/motion/state", payload, retain=True)
elif payload == "OFF":
@@ -339,10 +282,7 @@ class Dispatcher:
logger.info(f"Turning off detection for {camera_name}")
detect_settings.enabled = False
-            self.config_updater.publish_update(
-                CameraConfigUpdateTopic(CameraConfigUpdateEnum.detect, camera_name),
-                detect_settings,
-            )
+            self.config_updater.publish(f"config/detect/{camera_name}", detect_settings)
self.publish(f"{camera_name}/detect/state", payload, retain=True)
def _on_enabled_command(self, camera_name: str, payload: str) -> None:
@@ -363,10 +303,7 @@ class Dispatcher:
logger.info(f"Turning off camera {camera_name}")
camera_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.enabled, camera_name),
-            camera_settings.enabled,
-        )
+        self.config_updater.publish(f"config/enabled/{camera_name}", camera_settings)
self.publish(f"{camera_name}/enabled/state", payload, retain=True)
def _on_motion_command(self, camera_name: str, payload: str) -> None:
@@ -389,10 +326,7 @@ class Dispatcher:
logger.info(f"Turning off motion for {camera_name}")
motion_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
-            motion_settings,
-        )
+        self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.publish(f"{camera_name}/motion/state", payload, retain=True)
def _on_motion_improve_contrast_command(
@@ -404,16 +338,13 @@ class Dispatcher:
if payload == "ON":
if not motion_settings.improve_contrast:
logger.info(f"Turning on improve contrast for {camera_name}")
-                motion_settings.improve_contrast = True
+                motion_settings.improve_contrast = True  # type: ignore[union-attr]
        elif payload == "OFF":
            if motion_settings.improve_contrast:
                logger.info(f"Turning off improve contrast for {camera_name}")
-                motion_settings.improve_contrast = False
+                motion_settings.improve_contrast = False  # type: ignore[union-attr]
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
-            motion_settings,
-        )
+        self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.publish(f"{camera_name}/improve_contrast/state", payload, retain=True)
def _on_ptz_autotracker_command(self, camera_name: str, payload: str) -> None:
@@ -452,11 +383,8 @@ class Dispatcher:
motion_settings = self.config.cameras[camera_name].motion
logger.info(f"Setting motion contour area for {camera_name}: {payload}")
-        motion_settings.contour_area = payload
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
-            motion_settings,
-        )
+        motion_settings.contour_area = payload  # type: ignore[union-attr]
+        self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.publish(f"{camera_name}/motion_contour_area/state", payload, retain=True)
def _on_motion_threshold_command(self, camera_name: str, payload: int) -> None:
@@ -469,11 +397,8 @@ class Dispatcher:
motion_settings = self.config.cameras[camera_name].motion
logger.info(f"Setting motion threshold for {camera_name}: {payload}")
-        motion_settings.threshold = payload
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
-            motion_settings,
-        )
+        motion_settings.threshold = payload  # type: ignore[union-attr]
+        self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.publish(f"{camera_name}/motion_threshold/state", payload, retain=True)
def _on_global_notification_command(self, payload: str) -> None:
@@ -484,9 +409,9 @@ class Dispatcher:
notification_settings = self.config.notifications
logger.info(f"Setting all notifications: {payload}")
notification_settings.enabled = payload == "ON"
self.config_updater.publisher.publish(
"config/notifications", notification_settings
notification_settings.enabled = payload == "ON" # type: ignore[union-attr]
self.config_updater.publish(
"config/notifications", {"_global_notifications": notification_settings}
)
self.publish("notifications/state", payload, retain=True)
@@ -509,43 +434,9 @@ class Dispatcher:
logger.info(f"Turning off audio detection for {camera_name}")
audio_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.audio, camera_name),
-            audio_settings,
-        )
+        self.config_updater.publish(f"config/audio/{camera_name}", audio_settings)
self.publish(f"{camera_name}/audio/state", payload, retain=True)
def _on_audio_transcription_command(self, camera_name: str, payload: str) -> None:
"""Callback for live audio transcription topic."""
audio_transcription_settings = self.config.cameras[
camera_name
].audio_transcription
if payload == "ON":
if not self.config.cameras[
camera_name
].audio_transcription.enabled_in_config:
logger.error(
"Audio transcription must be enabled in the config to be turned on via MQTT."
)
return
if not audio_transcription_settings.live_enabled:
logger.info(f"Turning on live audio transcription for {camera_name}")
audio_transcription_settings.live_enabled = True
elif payload == "OFF":
if audio_transcription_settings.live_enabled:
logger.info(f"Turning off live audio transcription for {camera_name}")
audio_transcription_settings.live_enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum.audio_transcription, camera_name
),
audio_transcription_settings,
)
self.publish(f"{camera_name}/audio_transcription/state", payload, retain=True)
def _on_recordings_command(self, camera_name: str, payload: str) -> None:
"""Callback for recordings topic."""
record_settings = self.config.cameras[camera_name].record
@@ -565,10 +456,7 @@ class Dispatcher:
logger.info(f"Turning off recordings for {camera_name}")
record_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.record, camera_name),
-            record_settings,
-        )
+        self.config_updater.publish(f"config/record/{camera_name}", record_settings)
self.publish(f"{camera_name}/recordings/state", payload, retain=True)
def _on_snapshots_command(self, camera_name: str, payload: str) -> None:
@@ -584,10 +472,6 @@ class Dispatcher:
logger.info(f"Turning off snapshots for {camera_name}")
snapshots_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.snapshots, camera_name),
-            snapshots_settings,
-        )
self.publish(f"{camera_name}/snapshots/state", payload, retain=True)
def _on_ptz_command(self, camera_name: str, payload: str) -> None:
@@ -622,10 +506,7 @@ class Dispatcher:
logger.info(f"Turning off birdseye for {camera_name}")
birdseye_settings.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.birdseye, camera_name),
-            birdseye_settings,
-        )
+        self.config_updater.publish(f"config/birdseye/{camera_name}", birdseye_settings)
self.publish(f"{camera_name}/birdseye/state", payload, retain=True)
def _on_birdseye_mode_command(self, camera_name: str, payload: str) -> None:
@@ -646,10 +527,7 @@ class Dispatcher:
f"Setting birdseye mode for {camera_name} to {birdseye_settings.mode}"
)
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.birdseye, camera_name),
-            birdseye_settings,
-        )
+        self.config_updater.publish(f"config/birdseye/{camera_name}", birdseye_settings)
self.publish(f"{camera_name}/birdseye_mode/state", payload, retain=True)
def _on_camera_notification_command(self, camera_name: str, payload: str) -> None:
@@ -681,9 +559,8 @@ class Dispatcher:
):
self.web_push_client.suspended_cameras[camera_name] = 0
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.notifications, camera_name),
-            notification_settings,
+        self.config_updater.publish(
+            "config/notifications", {camera_name: notification_settings}
        )
self.publish(f"{camera_name}/notifications/state", payload, retain=True)
self.publish(f"{camera_name}/notifications/suspended", "0", retain=True)
@@ -740,10 +617,7 @@ class Dispatcher:
logger.info(f"Turning off alerts for {camera_name}")
review_settings.alerts.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.review, camera_name),
-            review_settings,
-        )
+        self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.publish(f"{camera_name}/review_alerts/state", payload, retain=True)
def _on_detections_command(self, camera_name: str, payload: str) -> None:
@@ -765,58 +639,5 @@ class Dispatcher:
logger.info(f"Turning off detections for {camera_name}")
review_settings.detections.enabled = False
-        self.config_updater.publish_update(
-            CameraConfigUpdateTopic(CameraConfigUpdateEnum.review, camera_name),
-            review_settings,
-        )
+        self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.publish(f"{camera_name}/review_detections/state", payload, retain=True)
def _on_object_description_command(self, camera_name: str, payload: str) -> None:
"""Callback for object description topic."""
genai_settings = self.config.cameras[camera_name].objects.genai
if payload == "ON":
if not self.config.cameras[camera_name].objects.genai.enabled_in_config:
logger.error(
"GenAI must be enabled in the config to be turned on via MQTT."
)
return
if not genai_settings.enabled:
logger.info(f"Turning on object descriptions for {camera_name}")
genai_settings.enabled = True
elif payload == "OFF":
if genai_settings.enabled:
logger.info(f"Turning off object descriptions for {camera_name}")
genai_settings.enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.object_genai, camera_name),
genai_settings,
)
self.publish(f"{camera_name}/object_descriptions/state", payload, retain=True)
def _on_review_description_command(self, camera_name: str, payload: str) -> None:
"""Callback for review description topic."""
genai_settings = self.config.cameras[camera_name].review.genai
if payload == "ON":
if not self.config.cameras[camera_name].review.genai.enabled_in_config:
logger.error(
"GenAI Alerts or Detections must be enabled in the config to be turned on via MQTT."
)
return
if not genai_settings.enabled:
logger.info(f"Turning on review descriptions for {camera_name}")
genai_settings.enabled = True
elif payload == "OFF":
if genai_settings.enabled:
logger.info(f"Turning off review descriptions for {camera_name}")
genai_settings.enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.review_genai, camera_name),
genai_settings,
)
self.publish(f"{camera_name}/review_descriptions/state", payload, retain=True)

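Most of _receive above is a topic-to-closure dispatch table with a publish fallback. The skeleton of that pattern, separated from Frigate's models and communicators (names here are illustrative):

from typing import Any, Callable, Optional

def make_receiver(handlers: dict, publish: Callable[[str, Any], None]) -> Callable:
    def _receive(topic: str, payload: Any) -> Optional[Any]:
        if topic in handlers:
            return handlers[topic]()   # known control topic, may return a value
        publish(topic, payload)        # everything else fans out to communicators
        return None
    return _receive

sent = []
receive = make_receiver({"restart": lambda: "restarting"}, lambda t, p: sent.append((t, p)))
print(receive("restart", None))        # 'restarting'
receive("front_door/motion", "ON")
print(sent)                            # [('front_door/motion', 'ON')]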

@@ -1,36 +1,23 @@
"""Facilitates communication between processes."""
import logging
from enum import Enum
from typing import Any, Callable
import zmq
logger = logging.getLogger(__name__)
SOCKET_REP_REQ = "ipc:///tmp/cache/embeddings"
class EmbeddingsRequestEnum(Enum):
-    # audio
-    transcribe_audio = "transcribe_audio"
-    # custom classification
-    reload_classification_model = "reload_classification_model"
-    # face
    clear_face_classifier = "clear_face_classifier"
-    recognize_face = "recognize_face"
-    register_face = "register_face"
-    reprocess_face = "reprocess_face"
-    # semantic search
    embed_description = "embed_description"
    embed_thumbnail = "embed_thumbnail"
    generate_search = "generate_search"
-    reindex = "reindex"
-    # LPR
+    recognize_face = "recognize_face"
+    register_face = "register_face"
+    reprocess_face = "reprocess_face"
    reprocess_plate = "reprocess_plate"
-    # Review Descriptions
-    summarize_review = "summarize_review"
+    reindex = "reindex"
class EmbeddingsResponder:
@@ -47,16 +34,9 @@ class EmbeddingsResponder:
break
try:
-                raw = self.socket.recv_json(flags=zmq.NOBLOCK)
+                (topic, value) = self.socket.recv_json(flags=zmq.NOBLOCK)
-                if isinstance(raw, list):
-                    (topic, value) = raw
-                    response = process(topic, value)
-                else:
-                    logging.warning(
-                        f"Received unexpected data type in ZMQ recv_json: {type(raw)}"
-                    )
-                    response = None
+                response = process(topic, value)
if response is not None:
self.socket.send_json(response)
@@ -78,7 +58,7 @@ class EmbeddingsRequestor:
self.socket = self.context.socket(zmq.REQ)
self.socket.connect(SOCKET_REP_REQ)
-    def send_data(self, topic: str, data: Any) -> Any:
+    def send_data(self, topic: str, data: Any) -> str:
"""Sends data and then waits for reply."""
try:
self.socket.send_json((topic, data))

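EmbeddingsRequestor/EmbeddingsResponder form a strict ZMQ REQ/REP pair exchanging (topic, data) JSON tuples: the requester must send then receive, the responder must receive then send. A loopback sketch using inproc in place of the ipc path above:

import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://embeddings")
req = ctx.socket(zmq.REQ)
req.connect("inproc://embeddings")

req.send_json(("embed_description", {"id": "abc123", "text": "person at door"}))
topic, value = rep.recv_json()            # responder side: recv, process, reply
rep.send_json({"ok": True, "topic": topic})
print(req.recv_json())                    # requester blocks until the reply arrives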

@@ -15,7 +15,7 @@ class EventMetadataTypeEnum(str, Enum):
manual_event_end = "manual_event_end"
regenerate_description = "regenerate_description"
sub_label = "sub_label"
attribute = "attribute"
recognized_license_plate = "recognized_license_plate"
lpr_event_create = "lpr_event_create"
save_lpr_snapshot = "save_lpr_snapshot"
@@ -28,8 +28,8 @@ class EventMetadataPublisher(Publisher):
def __init__(self) -> None:
super().__init__()
def publish(self, payload: Any, sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
def publish(self, topic: EventMetadataTypeEnum, payload: Any) -> None:
super().publish(payload, topic.value)
class EventMetadataSubscriber(Subscriber):
@@ -40,10 +40,9 @@ class EventMetadataSubscriber(Subscriber):
def __init__(self, topic: EventMetadataTypeEnum) -> None:
super().__init__(topic.value)
-    def _return_object(
-        self, topic: str, payload: tuple | None
-    ) -> tuple[str, Any] | tuple[None, None]:
+    def _return_object(self, topic: str, payload: tuple) -> tuple:
if payload is None:
return (None, None)
topic = EventMetadataTypeEnum[topic[len(self.topic_base) :]]
return (topic, payload)


@@ -7,9 +7,7 @@ from frigate.events.types import EventStateEnum, EventTypeEnum
from .zmq_proxy import Publisher, Subscriber
-class EventUpdatePublisher(
-    Publisher[tuple[EventTypeEnum, EventStateEnum, str | None, str, dict[str, Any]]]
-):
+class EventUpdatePublisher(Publisher):
"""Publishes events (objects, audio, manual)."""
topic_base = "event/"
@@ -18,11 +16,9 @@ class EventUpdatePublisher(
super().__init__("update")
    def publish(
-        self,
-        payload: tuple[EventTypeEnum, EventStateEnum, str | None, str, dict[str, Any]],
-        sub_topic: str = "",
+        self, payload: tuple[EventTypeEnum, EventStateEnum, str, str, dict[str, Any]]
    ) -> None:
-        super().publish(payload, sub_topic)
+        super().publish(payload)
class EventUpdateSubscriber(Subscriber):
@@ -34,9 +30,7 @@ class EventUpdateSubscriber(Subscriber):
super().__init__("update")
-class EventEndPublisher(
-    Publisher[tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]]]
-):
+class EventEndPublisher(Publisher):
"""Publishes events that have ended."""
topic_base = "event/"
@@ -45,11 +39,9 @@ class EventEndPublisher(
super().__init__("finalized")
    def publish(
-        self,
-        payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]],
-        sub_topic: str = "",
+        self, payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]]
    ) -> None:
-        super().publish(payload, sub_topic)
+        super().publish(payload)
class EventEndSubscriber(Subscriber):


@@ -1,6 +1,5 @@
"""Facilitates communication between processes."""
import logging
import multiprocessing as mp
import threading
from multiprocessing.synchronize import Event as MpEvent
@@ -10,8 +9,6 @@ import zmq
from frigate.comms.base_communicator import Communicator
logger = logging.getLogger(__name__)
SOCKET_REP_REQ = "ipc:///tmp/cache/comms"
@@ -22,7 +19,7 @@ class InterProcessCommunicator(Communicator):
self.socket.bind(SOCKET_REP_REQ)
self.stop_event: MpEvent = mp.Event()
-    def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
+    def publish(self, topic: str, payload: str, retain: bool) -> None:
"""There is no communication back to the processes."""
pass
@@ -40,16 +37,9 @@ class InterProcessCommunicator(Communicator):
break
try:
-                raw = self.socket.recv_json(flags=zmq.NOBLOCK)
+                (topic, value) = self.socket.recv_json(flags=zmq.NOBLOCK)
-                if isinstance(raw, list):
-                    (topic, value) = raw
-                    response = self._dispatcher(topic, value)
-                else:
-                    logging.warning(
-                        f"Received unexpected data type in ZMQ recv_json: {type(raw)}"
-                    )
-                    response = None
+                response = self._dispatcher(topic, value)
if response is not None:
self.socket.send_json(response)


@@ -11,7 +11,7 @@ from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)
-class MqttClient(Communicator):
+class MqttClient(Communicator):  # type: ignore[misc]
"""Frigate wrapper for mqtt client."""
def __init__(self, config: FrigateConfig) -> None:
@@ -75,7 +75,7 @@ class MqttClient(Communicator):
)
self.publish(
f"{camera_name}/improve_contrast/state",
"ON" if camera.motion.improve_contrast else "OFF",
"ON" if camera.motion.improve_contrast else "OFF", # type: ignore[union-attr]
retain=True,
)
self.publish(
@@ -85,12 +85,12 @@ class MqttClient(Communicator):
)
self.publish(
f"{camera_name}/motion_threshold/state",
-                camera.motion.threshold,
+                camera.motion.threshold,  # type: ignore[union-attr]
retain=True,
)
self.publish(
f"{camera_name}/motion_contour_area/state",
-                camera.motion.contour_area,
+                camera.motion.contour_area,  # type: ignore[union-attr]
retain=True,
)
self.publish(
@@ -122,16 +122,6 @@ class MqttClient(Communicator):
"ON" if camera.review.detections.enabled_in_config else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/object_descriptions/state",
"ON" if camera.objects.genai.enabled_in_config else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/review_descriptions/state",
"ON" if camera.review.genai.enabled_in_config else "OFF",
retain=True,
)
if self.config.notifications.enabled_in_config:
self.publish(
@@ -155,7 +145,7 @@ class MqttClient(Communicator):
client: mqtt.Client,
userdata: Any,
flags: Any,
-        reason_code: mqtt.ReasonCode,  # type: ignore[name-defined]
+        reason_code: mqtt.ReasonCode,
properties: Any,
) -> None:
"""Mqtt connection callback."""
@@ -187,7 +177,7 @@ class MqttClient(Communicator):
client: mqtt.Client,
userdata: Any,
flags: Any,
-        reason_code: mqtt.ReasonCode,  # type: ignore[name-defined]
+        reason_code: mqtt.ReasonCode,
properties: Any,
) -> None:
"""Mqtt disconnection callback."""
@@ -225,7 +215,6 @@ class MqttClient(Communicator):
"birdseye_mode",
"review_alerts",
"review_detections",
"genai",
]
for name in self.config.cameras.keys():

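The truncated loop above registers one callback per camera per command name, using Frigate's {topic_prefix}/{camera}/{command}/set topic shape. Generating the subscription list is a nested comprehension; the prefix and camera names below are examples:

commands = ["detect", "recordings", "snapshots", "motion", "audio", "ptz_autotracker"]
cameras = ["front_door", "driveway"]
topic_prefix = "frigate"

topics = [f"{topic_prefix}/{cam}/{cmd}/set" for cam in cameras for cmd in commands]
print(topics[0])  # frigate/front_door/detect/set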

@@ -1,92 +0,0 @@
"""Facilitates communication between processes for object detection signals."""
import threading
import zmq
SOCKET_PUB = "ipc:///tmp/cache/detector_pub"
SOCKET_SUB = "ipc:///tmp/cache/detector_sub"
class ZmqProxyRunner(threading.Thread):
def __init__(self, context: zmq.Context[zmq.Socket]) -> None:
super().__init__(name="detector_proxy")
self.context = context
def run(self) -> None:
"""Run the proxy."""
incoming = self.context.socket(zmq.XSUB)
incoming.bind(SOCKET_PUB)
outgoing = self.context.socket(zmq.XPUB)
outgoing.bind(SOCKET_SUB)
# Blocking: This will unblock (via exception) when we destroy the context
# The incoming and outgoing sockets will be closed automatically
# when the context is destroyed as well.
try:
zmq.proxy(incoming, outgoing)
except zmq.ZMQError:
pass
class DetectorProxy:
"""Proxies object detection signals."""
def __init__(self) -> None:
self.context = zmq.Context()
self.runner = ZmqProxyRunner(self.context)
self.runner.start()
def stop(self) -> None:
# destroying the context will tell the proxy to stop
self.context.destroy()
self.runner.join()
class ObjectDetectorPublisher:
"""Publishes signal for object detection to different processes."""
topic_base = "object_detector/"
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUB)
self.socket.connect(SOCKET_PUB)
def publish(self, sub_topic: str = "") -> None:
"""Publish message."""
self.socket.send_string(f"{self.topic}{sub_topic}/")
def stop(self) -> None:
self.socket.close()
self.context.destroy()
class ObjectDetectorSubscriber:
"""Simplifies receiving a signal for object detection."""
topic_base = "object_detector/"
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}/"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.SUB)
self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
self.socket.connect(SOCKET_SUB)
def check_for_update(self, timeout: float = 5) -> str | None:
"""Returns message or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
if has_update:
return self.socket.recv_string(flags=zmq.NOBLOCK)
except zmq.ZMQError:
pass
return None
def stop(self) -> None:
self.socket.close()
self.context.destroy()

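The deleted DetectorProxy above is the textbook XSUB/XPUB forwarder: publishers connect to one bound endpoint, subscribers to the other, and zmq.proxy shuttles messages between them until the context is destroyed, which is exactly how stop() unblocks it. A standalone sketch over inproc endpoints:

import threading
import zmq

ctx = zmq.Context()

def run_proxy() -> None:
    xsub = ctx.socket(zmq.XSUB)   # publishers connect here
    xsub.bind("inproc://det_pub")
    xpub = ctx.socket(zmq.XPUB)   # subscribers connect here
    xpub.bind("inproc://det_sub")
    try:
        zmq.proxy(xsub, xpub)     # blocks until the context is destroyed
    except zmq.ZMQError:
        pass                      # raised when the context is torn down

runner = threading.Thread(target=run_proxy, name="detector_proxy")
runner.start()
# ... PUB sockets would connect to det_pub, SUB sockets to det_sub ...
ctx.destroy()                     # unblocks zmq.proxy, ending the thread
runner.join()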

@@ -2,7 +2,6 @@
import logging
from enum import Enum
from typing import Any
from .zmq_proxy import Publisher, Subscriber
@@ -11,22 +10,20 @@ logger = logging.getLogger(__name__)
class RecordingsDataTypeEnum(str, Enum):
all = ""
saved = "saved" # segment has been saved to db
latest = "latest" # segment is in cache
valid = "valid" # segment is valid
invalid = "invalid" # segment is invalid
recordings_available_through = "recordings_available_through"
-class RecordingsDataPublisher(Publisher[Any]):
+class RecordingsDataPublisher(Publisher):
    """Publishes latest recording data."""
    topic_base = "recordings/"
-    def __init__(self) -> None:
-        super().__init__()
+    def __init__(self, topic: RecordingsDataTypeEnum) -> None:
+        topic = topic.value
+        super().__init__(topic)
-    def publish(self, payload: Any, sub_topic: str = "") -> None:
-        super().publish(payload, sub_topic)
+    def publish(self, payload: tuple[str, float]) -> None:
+        super().publish(payload)
class RecordingsDataSubscriber(Subscriber):
@@ -35,12 +32,5 @@ class RecordingsDataSubscriber(Subscriber):
topic_base = "recordings/"
    def __init__(self, topic: RecordingsDataTypeEnum) -> None:
-        super().__init__(topic.value)
-    def _return_object(
-        self, topic: str, payload: tuple | None
-    ) -> tuple[str, Any] | tuple[None, None]:
-        if payload is None:
-            return (None, None)
-        return (topic, payload)
+        topic = topic.value
+        super().__init__(topic)


@@ -1,30 +0,0 @@
"""Facilitates communication between processes."""
import logging
from .zmq_proxy import Publisher, Subscriber
logger = logging.getLogger(__name__)
class ReviewDataPublisher(
Publisher
): # update when typing improvement is added Publisher[tuple[str, float]]
"""Publishes review item data."""
topic_base = "review/"
def __init__(self, topic: str) -> None:
super().__init__(topic)
def publish(self, payload: tuple[str, float], sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
class ReviewDataSubscriber(Subscriber):
"""Receives review item data."""
topic_base = "review/"
def __init__(self, topic: str) -> None:
super().__init__(topic)


@@ -17,10 +17,6 @@ from titlecase import titlecase
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigSubscriber
from frigate.config import FrigateConfig
-from frigate.config.camera.updater import (
-    CameraConfigUpdateEnum,
-    CameraConfigUpdateSubscriber,
-)
from frigate.const import CONFIG_DIR
from frigate.models import User
@@ -39,7 +35,7 @@ class PushNotification:
ttl: int = 0
-class WebPushClient(Communicator):
+class WebPushClient(Communicator):  # type: ignore[misc]
"""Frigate wrapper for webpush client."""
def __init__(self, config: FrigateConfig, stop_event: MpEvent) -> None:
@@ -50,12 +46,10 @@ class WebPushClient(Communicator):
self.web_pushers: dict[str, list[WebPusher]] = {}
self.expired_subs: dict[str, list[str]] = {}
self.suspended_cameras: dict[str, int] = {
c.name: 0 # type: ignore[misc]
for c in self.config.cameras.values()
c.name: 0 for c in self.config.cameras.values()
}
self.last_camera_notification_time: dict[str, float] = {
c.name: 0 # type: ignore[misc]
for c in self.config.cameras.values()
c.name: 0 for c in self.config.cameras.values()
}
self.last_notification_time: float = 0
self.notification_queue: queue.Queue[PushNotification] = queue.Queue()
@@ -70,7 +64,7 @@ class WebPushClient(Communicator):
# Pull keys from PEM or generate if they do not exist
self.vapid = Vapid01.from_file(os.path.join(CONFIG_DIR, "notifications.pem"))
users: list[dict[str, Any]] = (
users: list[User] = (
User.select(User.username, User.notification_tokens).dicts().iterator()
)
for user in users:
@@ -79,12 +73,7 @@ class WebPushClient(Communicator):
self.web_pushers[user["username"]].append(WebPusher(sub))
# notification config updater
self.global_config_subscriber = ConfigSubscriber(
"config/notifications", exact=True
)
self.config_subscriber = CameraConfigUpdateSubscriber(
self.config, self.config.cameras, [CameraConfigUpdateEnum.notifications]
)
self.config_subscriber = ConfigSubscriber("config/notifications")
def subscribe(self, receiver: Callable) -> None:
"""Wrapper for allowing dispatcher to subscribe."""
@@ -165,19 +154,15 @@ class WebPushClient(Communicator):
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Wrapper for publishing when client is in valid state."""
# check for updated notification config
_, updated_notification_config = (
self.global_config_subscriber.check_for_update()
)
_, updated_notification_config = self.config_subscriber.check_for_update()
if updated_notification_config:
self.config.notifications = updated_notification_config
for key, value in updated_notification_config.items():
if key == "_global_notifications":
self.config.notifications = value
updates = self.config_subscriber.check_for_updates()
if "add" in updates:
for camera in updates["add"]:
self.suspended_cameras[camera] = 0
self.last_camera_notification_time[camera] = 0
elif key in self.config.cameras:
self.config.cameras[key].notifications = value
if topic == "reviews":
decoded = json.loads(payload)
@@ -188,28 +173,6 @@ class WebPushClient(Communicator):
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_alert(decoded)
if topic == "triggers":
decoded = json.loads(payload)
camera = decoded["camera"]
name = decoded["name"]
# ensure notifications are enabled and the specific trigger has
# notification action enabled
if (
not self.config.cameras[camera].notifications.enabled
or name not in self.config.cameras[camera].semantic_search.triggers
or "notification"
not in self.config.cameras[camera]
.semantic_search.triggers[name]
.actions
):
return
if self.is_camera_suspended(camera):
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_trigger(decoded)
elif topic == "notification_test":
if not self.config.notifications.enabled and not any(
cam.notifications.enabled for cam in self.config.cameras.values()
@@ -291,23 +254,6 @@ class WebPushClient(Communicator):
except Exception as e:
logger.error(f"Error processing notification: {str(e)}")
def _within_cooldown(self, camera: str) -> bool:
now = datetime.datetime.now().timestamp()
if now - self.last_notification_time < self.config.notifications.cooldown:
logger.debug(
f"Skipping notification for {camera} - in global cooldown period"
)
return True
if (
now - self.last_camera_notification_time[camera]
< self.config.cameras[camera].notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in camera-specific cooldown period"
)
return True
return False
def send_notification_test(self) -> None:
if not self.config.notifications.email:
return
@@ -334,12 +280,26 @@ class WebPushClient(Communicator):
return
camera: str = payload["after"]["camera"]
camera_name: str = getattr(
self.config.cameras[camera], "friendly_name", None
) or titlecase(camera.replace("_", " "))
current_time = datetime.datetime.now().timestamp()
if self._within_cooldown(camera):
# Check global cooldown period
if (
current_time - self.last_notification_time
< self.config.notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in global cooldown period"
)
return
# Check camera-specific cooldown period
if (
current_time - self.last_camera_notification_time[camera]
< self.config.cameras[camera].notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in camera-specific cooldown period"
)
return
self.check_registrations()
@@ -372,22 +332,12 @@ class WebPushClient(Communicator):
sorted_objects.update(payload["after"]["data"]["sub_labels"])
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
message = f"Detected on {titlecase(camera.replace('_', ' '))}"
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
ended = state == "end" or state == "genai"
if state == "genai" and payload["after"]["data"]["metadata"]:
message = payload["after"]["data"]["metadata"]["scene"]
else:
message = f"Detected on {camera_name}"
if ended:
logger.debug(
f"Sending a notification with state {state} and message {message}"
)
# if event is ongoing open to live view otherwise open to recordings view
direct_url = f"/review?id={reviewId}" if ended else f"/#{camera}"
ttl = 3600 if ended else 0
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
ttl = 3600 if state == "end" else 0
logger.debug(f"Sending push notification for {camera}, review ID {reviewId}")
@@ -404,53 +354,6 @@ class WebPushClient(Communicator):
self.cleanup_registrations()
def send_trigger(self, payload: dict[str, Any]) -> None:
if not self.config.notifications.email:
return
camera: str = payload["camera"]
camera_name: str = getattr(
self.config.cameras[camera], "friendly_name", None
) or titlecase(camera.replace("_", " "))
current_time = datetime.datetime.now().timestamp()
if self._within_cooldown(camera):
return
self.check_registrations()
self.last_camera_notification_time[camera] = current_time
self.last_notification_time = current_time
trigger_type = payload["type"]
event_id = payload["event_id"]
name = payload["name"]
score = payload["score"]
title = f"{name.replace('_', ' ')} triggered on {camera_name}"
message = f"{titlecase(trigger_type)} trigger fired for {camera_name} with score {score:.2f}"
image = f"clips/triggers/{camera}/{event_id}.webp"
direct_url = f"/explore?event_id={event_id}"
ttl = 0
logger.debug(
f"Sending push notification for {camera_name}, trigger name {name}"
)
for user in self.web_pushers:
self.send_push_notification(
user=user,
payload=payload,
title=title,
message=message,
direct_url=direct_url,
image=image,
ttl=ttl,
)
self.cleanup_registrations()
def stop(self) -> None:
logger.info("Closing notification queue")
self.notification_thread.join()

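The cooldown logic appears twice in this diff: factored into `_within_cooldown` on the left-hand side and inlined into `send_alert` on the right. A standalone restatement of that two-tier check, with all values passed in explicitly, may make the equivalence easier to see; the function name and parameters here are illustrative, not code from the diff.

def within_cooldown(
    now: float,
    last_global: float,
    last_camera: float,
    global_cooldown: float,
    camera_cooldown: float,
) -> bool:
    """Illustrative restatement of the two-tier notification cooldown."""
    # Global cooldown applies across all cameras.
    if now - last_global < global_cooldown:
        return True
    # Camera-specific cooldown applies per camera.
    if now - last_camera < camera_cooldown:
        return True
    return False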
View File

@@ -4,7 +4,7 @@ import errno
 import json
 import logging
 import threading
-from typing import Any, Callable
+from typing import Callable
 from wsgiref.simple_server import make_server
 
 from ws4py.server.wsgirefserver import (
@@ -21,8 +21,8 @@ from frigate.config import FrigateConfig
 logger = logging.getLogger(__name__)
 
 
-class WebSocket(WebSocket_):  # type: ignore[misc]
-    def unhandled_error(self, error: Any) -> None:
+class WebSocket(WebSocket_):
+    def unhandled_error(self, error):
         """
         Handles the unfriendly socket closures on the server side
         without showing a confusing error message
@@ -33,12 +33,12 @@ class WebSocket(WebSocket_):  # type: ignore[misc]
         logging.getLogger("ws4py").exception("Failed to receive data")
 
 
-class WebSocketClient(Communicator):
+class WebSocketClient(Communicator):  # type: ignore[misc]
     """Frigate wrapper for ws client."""
 
     def __init__(self, config: FrigateConfig) -> None:
         self.config = config
-        self.websocket_server: WSGIServer | None = None
+        self.websocket_server = None
 
     def subscribe(self, receiver: Callable) -> None:
         self._dispatcher = receiver
@@ -47,10 +47,10 @@ class WebSocketClient(Communicator):
     def start(self) -> None:
         """Start the websocket client."""
 
-        class _WebSocketHandler(WebSocket):
+        class _WebSocketHandler(WebSocket):  # type: ignore[misc]
             receiver = self._dispatcher
 
-            def received_message(self, message: WebSocket.received_message) -> None:  # type: ignore[name-defined]
+            def received_message(self, message: WebSocket.received_message) -> None:
                 try:
                     json_message = json.loads(message.data.decode("utf-8"))
                     json_message = {
@@ -86,7 +86,7 @@ class WebSocketClient(Communicator):
         )
         self.websocket_thread.start()
 
-    def publish(self, topic: str, payload: Any, _: bool = False) -> None:
+    def publish(self, topic: str, payload: str, _: bool) -> None:
         try:
             ws_message = json.dumps(
                 {
@@ -109,11 +109,9 @@ class WebSocketClient(Communicator):
             pass
 
     def stop(self) -> None:
-        if self.websocket_server is not None:
-            self.websocket_server.manager.close_all()
-            self.websocket_server.manager.stop()
-            self.websocket_server.manager.join()
-            self.websocket_server.shutdown()
+        self.websocket_server.manager.close_all()
+        self.websocket_server.manager.stop()
+        self.websocket_server.manager.join()
+        self.websocket_server.shutdown()
         self.websocket_thread.join()
         logger.info("Exiting websocket client...")

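One behavioral note on the `stop()` change: the left-hand side guards against `websocket_server` still being `None` (for example if `stop()` runs before `start()` has assigned it), while the right-hand side assumes it is always set. A minimal, purely illustrative sketch of the failure mode the guard avoids:

class Example:
    """Illustrative stand-in, not code from this diff."""

    def __init__(self) -> None:
        self.websocket_server = None  # assigned for real only in start()

    def stop(self) -> None:
        # Unguarded, this would raise AttributeError when start() never ran:
        # self.websocket_server.shutdown()
        # Guarded, as on the left-hand side of the diff:
        if self.websocket_server is not None:
            self.websocket_server.shutdown()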
View File

@@ -2,7 +2,7 @@
 import json
 import threading
-from typing import Generic, TypeVar
+from typing import Any, Optional
 
 import zmq
@@ -47,10 +47,7 @@ class ZmqProxy:
         self.runner.join()
 
-T = TypeVar("T")
-
-class Publisher(Generic[T]):
+class Publisher:
     """Publishes messages."""
 
     topic_base: str = ""
@@ -61,7 +58,7 @@ class Publisher(Generic[T]):
         self.socket = self.context.socket(zmq.PUB)
         self.socket.connect(SOCKET_PUB)
 
-    def publish(self, payload: T, sub_topic: str = "") -> None:
+    def publish(self, payload: Any, sub_topic: str = "") -> None:
         """Publish message."""
         self.socket.send_string(f"{self.topic}{sub_topic} {json.dumps(payload)}")
@@ -70,7 +67,7 @@ class Publisher(Generic[T]):
         self.context.destroy()
 
-class Subscriber(Generic[T]):
+class Subscriber:
     """Receives messages."""
 
     topic_base: str = ""
@@ -82,7 +79,9 @@ class Subscriber(Generic[T]):
         self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
         self.socket.connect(SOCKET_SUB)
 
-    def check_for_update(self, timeout: float | None = FAST_QUEUE_TIMEOUT) -> T | None:
+    def check_for_update(
+        self, timeout: float = FAST_QUEUE_TIMEOUT
+    ) -> Optional[tuple[str, Any]]:
         """Returns message or None if no update."""
         try:
             has_update, _, _ = zmq.select([self.socket], [], [], timeout)
@@ -99,5 +98,5 @@ class Subscriber(Generic[T]):
         self.socket.close()
         self.context.destroy()
 
-    def _return_object(self, topic: str, payload: T | None) -> T | None:
+    def _return_object(self, topic: str, payload: Any) -> Any:
         return payload

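What the `Generic[T]` parametrization on the left-hand side buys callers, sketched with an illustrative subclass (the subclass name and payload type below are assumptions, not code from this diff): a type checker can tie `publish()` payloads to the declared payload type, which the untyped `Any` version cannot.

from typing import Generic, TypeVar

T = TypeVar("T")


class Publisher(Generic[T]):
    """Stand-in for the left-hand side's generic Publisher."""

    def publish(self, payload: T, sub_topic: str = "") -> None: ...


class RecordingsPublisher(Publisher[tuple[str, float]]):
    """Hypothetical typed specialization."""


pub = RecordingsPublisher()
pub.publish(("front_door", 1712345678.9))  # OK
pub.publish("front_door")  # a type checker flags this: str is not tuple[str, float]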
View File

@@ -1,6 +1,6 @@
-from typing import Dict, List, Optional
+from typing import Optional
 
-from pydantic import Field, field_validator, model_validator
+from pydantic import Field
 
 from .base import FrigateBaseModel
@@ -34,41 +34,3 @@ class AuthConfig(FrigateBaseModel):
     )
     # As of Feb 2023, OWASP recommends 600000 iterations for PBKDF2-SHA256
     hash_iterations: int = Field(default=600000, title="Password hash iterations")
-    roles: Dict[str, List[str]] = Field(
-        default_factory=dict,
-        title="Role to camera mappings. Empty list grants access to all cameras.",
-    )
-
-    @field_validator("roles")
-    @classmethod
-    def validate_roles(cls, v: Dict[str, List[str]]) -> Dict[str, List[str]]:
-        # Ensure role names are valid (alphanumeric with underscores)
-        for role in v.keys():
-            if not role.replace("_", "").isalnum():
-                raise ValueError(
-                    f"Invalid role name '{role}'. Must be alphanumeric with underscores."
-                )
-
-        # Ensure 'admin' and 'viewer' are not used as custom role names
-        reserved_roles = {"admin", "viewer"}
-        if v.keys() & reserved_roles:
-            raise ValueError(
-                f"Reserved roles {reserved_roles} cannot be used as custom roles."
-            )
-
-        # Ensure no role has an empty camera list
-        for role, allowed_cameras in v.items():
-            if not allowed_cameras:
-                raise ValueError(
-                    f"Role '{role}' has no cameras assigned. Custom roles must have at least one camera."
-                )
-
-        return v
-
-    @model_validator(mode="after")
-    def ensure_default_roles(self):
-        # Ensure admin and viewer are never overridden
-        self.roles["admin"] = []
-        self.roles["viewer"] = []
-        return self

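The removed validators enforce three rules: role names must be alphanumeric with underscores, `admin` and `viewer` are reserved, and a custom role must map to at least one camera. A quick sketch of their behavior, assuming the left-hand `AuthConfig` shown above is importable; the role and camera names are made up.

from pydantic import ValidationError

# Valid: custom role with at least one camera.
cfg = AuthConfig(roles={"garage_crew": ["garage_cam"]})
assert cfg.roles["admin"] == []  # ensure_default_roles always re-adds the built-ins

# Invalid: reserved role name.
try:
    AuthConfig(roles={"viewer": ["garage_cam"]})
except ValidationError:
    pass  # "Reserved roles ... cannot be used as custom roles."

# Invalid: empty camera list on a custom role.
try:
    AuthConfig(roles={"garage_crew": []})
except ValidationError:
    pass  # "Role 'garage_crew' has no cameras assigned. ..."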
View File

@@ -1,29 +1,5 @@
-from typing import Any
-
 from pydantic import BaseModel, ConfigDict
 
 
 class FrigateBaseModel(BaseModel):
     model_config = ConfigDict(extra="forbid", protected_namespaces=())
-
-    def get_nested_object(self, path: str) -> Any:
-        parts = path.split("/")
-        obj = self
-        for part in parts:
-            if part == "config":
-                continue
-            if isinstance(obj, BaseModel):
-                try:
-                    obj = getattr(obj, part)
-                except AttributeError:
-                    return None
-            elif isinstance(obj, dict):
-                try:
-                    obj = obj[part]
-                except KeyError:
-                    return None
-            else:
-                return None
-        return obj

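The removed `get_nested_object` walks a `/`-separated path through nested models and dicts, skipping the leading `config` segment and returning `None` on any miss. A small self-contained demonstration against the left-hand `FrigateBaseModel`; the `Inner`/`Outer` models are made up for illustration.

class Inner(FrigateBaseModel):
    value: int = 5


class Outer(FrigateBaseModel):
    inner: Inner = Inner()


cfg = Outer()
assert cfg.get_nested_object("config/inner/value") == 5   # walks inner -> value
assert cfg.get_nested_object("config/inner/missing") is None  # miss returns None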
View File

@@ -2,7 +2,7 @@ import os
 from enum import Enum
 from typing import Optional
 
-from pydantic import Field, PrivateAttr, model_validator
+from pydantic import Field, PrivateAttr
 
 from frigate.const import CACHE_DIR, CACHE_SEGMENT_FORMAT, REGEX_CAMERA_NAME
 from frigate.ffmpeg_presets import (
@@ -19,15 +19,14 @@ from frigate.util.builtin import (
 from ..base import FrigateBaseModel
 from ..classification import (
-    AudioTranscriptionConfig,
     CameraFaceRecognitionConfig,
     CameraLicensePlateRecognitionConfig,
-    CameraSemanticSearchConfig,
 )
 from .audio import AudioConfig
 from .birdseye import BirdseyeCameraConfig
 from .detect import DetectConfig
 from .ffmpeg import CameraFfmpegConfig, CameraInput
-from .genai import GenAICameraConfig
 from .live import CameraLiveConfig
 from .motion import MotionConfig
 from .mqtt import CameraMqttConfig
@@ -51,27 +50,12 @@ class CameraTypeEnum(str, Enum):
 class CameraConfig(FrigateBaseModel):
     name: Optional[str] = Field(None, title="Camera name.", pattern=REGEX_CAMERA_NAME)
-    friendly_name: Optional[str] = Field(
-        None, title="Camera friendly name used in the Frigate UI."
-    )
-
-    @model_validator(mode="before")
-    @classmethod
-    def handle_friendly_name(cls, values):
-        if isinstance(values, dict) and "friendly_name" in values:
-            pass
-        return values
-
     enabled: bool = Field(default=True, title="Enable camera.")
 
     # Options with global fallback
     audio: AudioConfig = Field(
         default_factory=AudioConfig, title="Audio events configuration."
     )
-    audio_transcription: AudioTranscriptionConfig = Field(
-        default_factory=AudioTranscriptionConfig, title="Audio transcription config."
-    )
     birdseye: BirdseyeCameraConfig = Field(
         default_factory=BirdseyeCameraConfig, title="Birdseye camera configuration."
     )
@@ -82,13 +66,18 @@ class CameraConfig(FrigateBaseModel):
         default_factory=CameraFaceRecognitionConfig, title="Face recognition config."
     )
     ffmpeg: CameraFfmpegConfig = Field(title="FFmpeg configuration for the camera.")
-    genai: GenAICameraConfig = Field(
-        default_factory=GenAICameraConfig, title="Generative AI configuration."
-    )
     live: CameraLiveConfig = Field(
         default_factory=CameraLiveConfig, title="Live playback settings."
     )
     lpr: CameraLicensePlateRecognitionConfig = Field(
         default_factory=CameraLicensePlateRecognitionConfig, title="LPR config."
     )
-    motion: MotionConfig = Field(None, title="Motion detection configuration.")
+    motion: Optional[MotionConfig] = Field(
+        None, title="Motion detection configuration."
+    )
     objects: ObjectConfig = Field(
         default_factory=ObjectConfig, title="Object configuration."
     )
@@ -98,10 +87,6 @@ class CameraConfig(FrigateBaseModel):
     review: ReviewConfig = Field(
         default_factory=ReviewConfig, title="Review configuration."
     )
-    semantic_search: CameraSemanticSearchConfig = Field(
-        default_factory=CameraSemanticSearchConfig,
-        title="Semantic search configuration.",
-    )
     snapshots: SnapshotsConfig = Field(
         default_factory=SnapshotsConfig, title="Snapshot configuration."
    )

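On the `motion` change: with a default of `None`, the annotation has to admit `None`, which is what the right-hand side's `Optional[MotionConfig]` does; the left-hand `motion: MotionConfig = Field(None, ...)` type-checks poorly even though pydantic tolerates it at runtime. A minimal sketch of the pattern, with illustrative model names rather than Frigate's own:

from typing import Optional

from pydantic import BaseModel, Field


class MotionConfig(BaseModel):
    threshold: int = 30


class Cam(BaseModel):
    # None until runtime fills in a resolved config; Optional makes that legal.
    motion: Optional[MotionConfig] = Field(None, title="Motion detection configuration.")


assert Cam().motion is None
assert Cam(motion=MotionConfig()).motion.threshold == 30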
Some files were not shown because too many files have changed in this diff.