Compare commits

...

88 Commits

Author SHA1 Message Date
Nicolas Mowen
90344540b3 Fix jetson build (#21173) 2025-12-06 09:16:23 -06:00
Josh Hawkins
7167cf57c5 pin cryptography version to fix vapid issues (#21126) 2025-12-02 07:20:50 -07:00
Josh Hawkins
e47e82f4be Pin onnx in rfdetr model generation command (#21127)
* pin onnx in rfdetr model generation command

* Apply suggestion from @NickM-27

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-12-02 08:15:12 -06:00
munit85
a43d294bd1 Add Axis Q-6155E camera configuration details (#21105)
* Add Axis Q-6155E camera configuration details

Added Axis Q-6155E camera details with ONVIF service port information.

* Update Axis Q-6155E ONVIF autotracking support details

Added the reason for autotracking not working
2025-12-01 10:47:01 -07:00
Josh Hawkins
9f95a5f31f version bump in docs (#21111) 2025-12-01 07:21:27 -07:00
Josh Hawkins
592c245dcd Fixes (#21061)
* require admin role to delete users

* explicitly prevent deletion of admin user

* Recordings playback fixes

* Remove nvidia pyindex

* Update version

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-11-26 07:27:16 -06:00
h-leth
914ff4f1e5 add comment about unifi g5 and newer cams (#21003) 2025-11-22 12:41:13 -06:00
Josh Hawkins
9589c5fc24 Fix rf-detr heading (#20963)
The link earlier in the file was referencing "#downloading-rf-detr-model"
2025-11-18 18:15:38 -07:00
Nicolas Mowen
3620ef27db Update hailo installation instructions (#20847)
* Update hailo docs installation

* Adjust section separation
2025-11-08 13:21:15 -06:00
GuoQing Liu
5cf2ae0121 docs: remove webrtc not support H.265 tips (#20769) 2025-11-05 06:23:45 -06:00
Nicolas Mowen
17d2bc240a Update recommended hardware to list more models (#20777)
* Update recommended hardware to list more models

* Update hardware.md with new Intel models and links
2025-11-04 10:56:28 -06:00
Nicolas Mowen
6fd7f862f5 Update coral docs / links (#20674)
* Revise GPU and AI accelerator recommendations

Updated hardware recommendations for AI acceleration.

* Revise PCIe Coral driver installation instructions

Updated instructions for PCIe Coral driver installation.

* Revise Coral driver installation instructions

Updated driver installation instructions for PCIe and M.2 versions of Google Coral.

* Change PCIe Coral driver link in getting_started.md

Updated the link for PCIe Coral driver instructions.

* Change PCIe Coral driver link in installation guide

Updated the link for PCIe Coral driver instructions.

* Update Coral TPU recommendation in hardware documentation

Added a warning about the Coral TPU's recommendation status for new Frigate installations and suggested alternatives.
2025-10-26 06:56:01 -05:00
Nicolas Mowen
5d038b5c75 Update PWA requirements and add usage section (#20562)
Added VPN as a secure context option for PWA installation and included a usage section.
2025-10-26 05:39:09 -06:00
Nicolas Mowen
c5fe354552 Improve Reolink Camera Documentation (#20605)
* Improve Reolink Camera Documentation

* Update Reolink configuration link in live.md
2025-10-21 16:20:41 -06:00
Josh Hawkins
5dc8a85f2f Update Azure OpenAI genai docs (#20549)
* Update azure openai genai docs

* tweak url
2025-10-18 06:44:26 -06:00
Nicolas Mowen
0302db1c43 Fix model exports (#20540) 2025-10-17 07:16:30 -05:00
Nicolas Mowen
a4764563a5 Fix YOLOv9 export script (#20514) 2025-10-16 07:56:37 -05:00
Josh Hawkins
942a61ddfb version bump in docs (#20501) 2025-10-15 05:53:31 -06:00
Nicolas Mowen
4d582062fb Ensure that a user must provide an image in an expected location (#20491)
* Ensure that a user must provide an image in an expected location

* Use const
2025-10-14 16:29:20 -05:00
Nicolas Mowen
e0a8445bac Improve rf-detr export (#20485) 2025-10-14 08:32:44 -05:00
Josh Hawkins
2a271c0f5e Update GenAI docs for Gemini model deprecation (#20462) 2025-10-13 10:00:21 -06:00
Nicolas Mowen
925bf78811 Update review topic description (#20445) 2025-10-12 07:28:08 -05:00
Sean Kelly
59102794e8 Add keyboard shortcut for switching to previous label (#20426)
* Add keyboard shortcut for switching to previous label

* Update docs/docs/plus/annotating.md

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>

---------

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
2025-10-11 10:43:41 -06:00
mpking828
20e5e3bdc0 Update camera_specific.md to fix 2 way audio example for Reolink (#20343)
Update camera_specific.md to fix 2 way audio example for Reolink
2025-10-03 08:49:51 -06:00
AmirHossein_Omidi
b94ebda9e5 Update license_plate_recognition.md (#20306)
* Update license_plate_recognition.md

Add PaddleOCR description for license plate recognition in Frigate docs

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update docs/docs/configuration/license_plate_recognition.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-10-01 08:18:47 -05:00
Nicolas Mowen
8cdaef307a Update face rec docs (#20256)
* Update face rec docs

* clarify

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-28 11:31:59 -05:00
Josh Hawkins
4914029a50 Add average_estimated_speed to mqtt docs (#20101) 2025-09-16 11:03:36 -06:00
GuoQing Liu
bafdab9d67 feat: add robots.txt (#20093) 2025-09-16 06:14:27 -06:00
GuoQing Liu
b08db4913f feat: add github mirror download endpoint (#20007)
* feat: add github mirror download endpoint

* fix: fix face_embedding endpoint line

* fix: fix github raw endpoint

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-09-14 06:51:56 -06:00
Nicolas Mowen
7c7ff49b90 Improve d-fine model export docs (#20020) 2025-09-11 06:17:08 -05:00
Nicolas Mowen
037c4d1cc0 Don't block UI while pulling the stream live info (#19998) 2025-09-09 17:53:26 -05:00
laviddichterman
1613499218 Update object_detectors.md to document configuring image size in YOLO 9 (#19951)
* Update object_detectors.md for v16

* add configurability to IMG_SIZE for YOLOv9 export
* remove TensorRT detector as it's no longer supported in v16

* Revert removing NVIDIA TensorRT detector docs

Added documentation for NVidia TensorRT Detector, including model generation, configuration parameters, and example usage.

* Dumb copy/paste

* Enhance YOLOv9 export instructions in documentation

Updated YOLOv9 export command to include IMG_SIZE parameter and clarified model size options.
2025-09-09 14:27:30 -06:00
Nicolas Mowen
205fdf3ae3 Fixes (#19984)
* Always handle RKNN as NHWC in Frigate+ model loading

* Correct Intel stats

* Update inference time docs

* Update version

* Adjust inference speeds
2025-09-09 06:17:56 -06:00
Nicolas Mowen
f46f8a2160 More inference speed updates (#19974) 2025-09-08 10:39:33 -06:00
Josh Hawkins
880902cdd7 Add specific notes for frigate+ models in object detector docs (#19971) 2025-09-08 09:29:03 -05:00
Nicolas Mowen
c5ed95ec52 More inference speed updates (#19947)
* More inference speed updates

* Update hardware.md

* Update hardware.md

* Update index.md

* More inference speeds

* Update home-assistant.md

* Update object_detectors.md

* Update first_model.md
2025-09-08 07:43:04 -05:00
Josh Hawkins
751de141d5 Fix model selection type in Frigate+ settings pane (#19952)
* model type does not need to match config model type

As long as a model is supported by a detector, it should be available in the list

* fix missing semicolon

the web linter was complaining
2025-09-07 19:19:40 -06:00
Nicolas Mowen
0eb441fe50 Update inference times for yolov9 (#19946) 2025-09-07 14:59:48 -05:00
Josh Hawkins
7566aecb0b Add note about Apple Silicon support in 0.17 (#19944) 2025-09-07 14:12:49 -05:00
Blake Blackshear
60714a733e update docs for Frigate+ yolov9 (#19938)
* update docs for Frigate+ yolov9

* footnote memryx support

* tweaks
2025-09-07 06:01:10 -05:00
Josh Hawkins
d7f7cd7be1 best thumbnail endpoint should pass correct extension param (#19930) 2025-09-05 06:33:57 -05:00
GuoQing Liu
6591210050 docs: fix reolink camera table display (#19926) 2025-09-05 06:01:26 -05:00
Nicolas Mowen
7e7b3288a8 Update live FAQ for camera distortion (#19907)
* Add item to FAQ about stream distortion

* Update updating docs

* Update link
2025-09-04 07:44:33 -05:00
Nicolas Mowen
fe3eb24dfe Update Reolink support docs (#19887) 2025-09-02 15:21:18 -05:00
Nicolas Mowen
e664cb2285 Set lower bound on retry interval (#19883) 2025-09-02 11:24:25 -05:00
Josh Hawkins
62047c80d5 Poll for camera status on tracking end instead of waiting (#19879) 2025-09-02 06:17:01 -06:00
Josh Hawkins
198e53bd42 Fix stream stats display (#19874)
* Fix stats calculations and labels

* fix linter from complaining

* fix mse calc

* label
2025-09-01 19:23:44 -05:00
Josh Hawkins
f7ed8b4cab Autotracking improvements (#19873)
* Use asyncio lock when checking camera status

get_camera_status() can be called during normal autotracking movement and from routine camera_maintenance(). Some cameras cause one of the status calls to hang, which subsequently hangs autotracking. A lock serializes access and prevents the hang.

* use while loop in camera_maintenance for status check

some cameras seem to take a little while to update their status; don't assume the first call shows the motor has stopped
2025-09-01 19:18:50 -05:00
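The commit above describes two generic asyncio patterns: serializing ONVIF status calls behind a lock, and polling in a loop until the camera reports the motor has stopped. Below is a minimal sketch of those patterns under assumed names and stubbed bodies; it is not Frigate's actual autotracker implementation.

```python
# Minimal sketch of the lock + polling pattern described above (illustrative
# only; the real ONVIF calls are replaced with placeholders).
import asyncio


class CameraStatusSketch:
    def __init__(self) -> None:
        # Serialize status calls so concurrent requests can't hang the camera.
        self._status_lock = asyncio.Lock()

    async def get_camera_status(self) -> str:
        async with self._status_lock:
            await asyncio.sleep(0.1)  # placeholder for the ONVIF GetStatus call
            return "IDLE"

    async def camera_maintenance(self) -> None:
        # Poll until the camera reports the motor has stopped instead of
        # assuming the first status call already reflects the stopped state.
        while await self.get_camera_status() != "IDLE":
            await asyncio.sleep(0.5)


asyncio.run(CameraStatusSketch().camera_maintenance())
```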
Nicolas Mowen
e9dc30235b Cleanup vod clip handling and add padding arg (#19813) 2025-08-28 07:09:23 -05:00
Josh Hawkins
16b7f7f6e7 Fix HLS video initial aspect on Chrome (#19805)
Explore videos are very small on Chrome specifically; this has something to do with how the latest version of Chrome loads video metadata. This change provides a default aspect ratio instead of a default height when the container ref is not defined yet.
2025-08-27 12:27:18 -06:00
Nicolas Mowen
281c461647 Add support for Frigate+ input data type (#19799) 2025-08-27 06:27:08 -06:00
Josh Hawkins
667c302a7d Allow scrolling on languages menu on mobile devices (#19797) 2025-08-27 05:44:10 -06:00
GuoQing Liu
ad694f5511 docs: rk ffmpeg preset is outdated (#19780)
* docs: rk ffmpeg preset is outdated

* Update hardware_acceleration_video.md

docs: remove video decoding page redundant titles
2025-08-26 15:24:58 -06:00
Josh Hawkins
fa6956c46e Update openapi schema with include_thumbnails deprecation comment (#19777) 2025-08-26 15:24:43 -06:00
Josh Hawkins
b5aa1b2c21 Fix autotracking calibration crash when zooming is disabled (#19776) 2025-08-26 12:39:23 -05:00
GuoQing Liu
f62feeb50c docs: update hardware acceleration video page nvidia docker compose image (#19762) 2025-08-26 06:01:34 -06:00
Josh Hawkins
0dda37ac43 fix export dialog overflowing due to i18n time lengths (#19736)
wrap the pair of custom time pickers in a flex-wrap
2025-08-25 17:11:42 -06:00
Nicolas Mowen
4fcb1ea7ac Unload HLS on unmount (#19747)
* Unload HLS player on unmount so segments don't continue to load

* Add query arg for event padding
2025-08-25 13:33:17 -05:00
Nicolas Mowen
4347402fcc Don't mention 9.0.0 GFX version (#19742) 2025-08-25 07:32:50 -05:00
Josh Hawkins
4fe246f472 Fixes (#19708)
* use custom swr fetcher to check for audio support

The go2rtc API doesn't always return stream data for anything not being actively consumed, so audio support was not always correctly deduced. We can instead use a custom swr fetcher to call the endpoint that probes the streams, which returns the correct producer data.

* return correct mime type for thumbnail and latest frame endpoints

follow up to https://github.com/blakeblackshear/frigate/pull/19555
2025-08-22 07:04:30 -05:00
Josh Hawkins
7cf439e010 remove h264 reference for webrtc (#19688) 2025-08-21 08:21:18 -06:00
Josh Hawkins
8a01643acf clarify webrtc for two way talk (#19683) 2025-08-21 04:43:07 -06:00
Josh Hawkins
664a6fd0cb remove newlines (#19671)
let mermaid format the text directly
2025-08-20 14:19:55 -06:00
Josh Hawkins
2b185a1105 Update bug report template (#19664)
* update bug report template

* remove additional field
2025-08-20 12:57:24 -06:00
Josh Hawkins
75e33d8a56 Catch invalid key in genai prompt (#19657) 2025-08-20 08:03:50 -05:00
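The one-line commit above doesn't show the mechanism, but the general pattern for catching an invalid substitution key in a prompt template looks roughly like the sketch below; the function and variable names are illustrative assumptions, not Frigate's actual code.

```python
# Illustrative sketch: guard str.format() against an unknown placeholder in a
# user-supplied GenAI prompt template instead of letting KeyError propagate.
def render_prompt(template: str, **variables: str) -> str:
    try:
        return template.format(**variables)
    except KeyError as err:
        # Fall back to the raw template and report which key was invalid.
        print(f"Invalid key {err} in GenAI prompt, using the template as-is")
        return template


print(render_prompt("Describe the {label} in these images", label="person"))
print(render_prompt("Describe the {objct} in these images", label="person"))
```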
Jan Šuklje
8f4b5b4bdb Refactored Viewer role Notifications settings (#19640)
- now each individual element is shown if allowed by role, instead of having multiple return statements for each role
2025-08-19 18:29:11 -06:00
Josh Hawkins
95cea06dd3 Revert video dimension layout fix for chrome (#19636)
originally introduced in https://github.com/blakeblackshear/frigate/pull/19414
2025-08-19 14:42:20 -05:00
Nicolas Mowen
ec2543c23f Fix hls not loading video in explore (#19625) 2025-08-19 13:14:14 -05:00
Josh Hawkins
d27e8c1bbf run autotracking setup method in asyncio coroutine (#19614) 2025-08-19 07:07:24 -05:00
Josh Hawkins
353ee1228c Return 500 from the face registration endpoint if Frigate has not yet been restarted (#19601) 2025-08-18 14:49:50 -06:00
Josh Hawkins
ba20b61c43 Deprecate API field include_thumbnails (#19584)
* Add deprecation note to API docs for include_thumbnails

* for search query params as well
2025-08-18 08:26:02 -05:00
Nicolas Mowen
b45f642868 Use sed on correct file (#19590) 2025-08-18 07:21:42 -06:00
Josh Hawkins
9ed7ccab75 Embeddings maintainer should start if bird classification is enabled (#19576) 2025-08-17 19:48:21 -06:00
harry
ceced7cc91 Install non-free i965 driver (#19571) 2025-08-17 18:45:21 -06:00
Josh Hawkins
1db26cb41e Ensure birdseye is enabled before trying to grab a frame from it (#19573) 2025-08-17 17:26:18 -06:00
Josh Hawkins
6840415b6c Fix content type for latest image API endpoint (#19555)
* Fix content type for latest image API endpoint

Extension is an enum and .value needed to be appended. Additionally, fastapi's Response() automatically sets the content type when media_type is specified, so a Content-Type in the headers was redundant.

* Remove another unneeded Content-Type
2025-08-16 21:20:21 -06:00
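The commit message above describes a FastAPI detail worth illustrating: when `media_type` is passed to `Response()`, FastAPI sets the `Content-Type` header itself, and an `Enum` member needs `.value` to build that string. A minimal sketch follows, assuming a hypothetical `Extension` enum and route rather than Frigate's actual endpoint.

```python
# Hypothetical sketch of the pattern: derive the media type from an Enum value
# and let Response() set the Content-Type header itself.
from enum import Enum

from fastapi import FastAPI, Response

app = FastAPI()


class Extension(str, Enum):
    jpg = "jpg"
    png = "png"
    webp = "webp"


@app.get("/latest_frame/{extension}")
def latest_frame(extension: Extension) -> Response:
    frame = b"..."  # placeholder for the encoded image bytes
    # extension is an Enum, so .value is needed to build the media type string;
    # no explicit "Content-Type" header entry is required.
    media_type = "image/jpeg" if extension == Extension.jpg else f"image/{extension.value}"
    return Response(content=frame, media_type=media_type)
```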
Nicolas Mowen
06539c925c Pull sqlite3 from mirror (#19540)
* Pull sqlite3 from mirror

* Remove extra wget

* Adjust folder name

* Use pre-built sqlite

* Include unzip
2025-08-16 09:30:24 -05:00
Josh Hawkins
addb4e6891 Fix percentage in recording cleanup log (#19525)
* Fix percentage in recording cleanup log

* fix

* update reference config
2025-08-16 07:10:08 -06:00
Nicolas Mowen
fb290c411b HLS Playback Startup Time Optimization (#19503)
* Include preferred startTime in source so that the playlist does not need to seek

* Compatibility

* Cleanup

* Adjust based on inpoint

* Don't set start position if it is not valid

* Handle firefox buggy behavior
2025-08-16 07:09:15 -06:00
Josh Hawkins
89db960c05 Remove score sorting constraint (#19501)
Do not require a score filter to be applied in order to sort by object score.
2025-08-16 07:08:11 -06:00
Josh Hawkins
2cde58037d Improve recognized license plate filter (#19491)
* Fetch all license plates outside of filter component

If the swr call takes a long time, the entire select component may not display. This change moves the fetch to the parent component (like sub labels).

* add loading indicator

* improve query
2025-08-16 07:05:50 -06:00
Josh Hawkins
d1be614a10 Bump makefile version (#19539) 2025-08-16 07:05:15 -06:00
Josh Hawkins
93c7c8c518 Bump version in docs (#19538) 2025-08-16 07:47:42 -05:00
Blake Blackshear
c83a35d090 Merge pull request #16390 from blakeblackshear/dev
0.16 Release
2025-08-16 07:34:45 -05:00
Blake Blackshear
d31a4e3443 Merge remote-tracking branch 'origin/master' into dev 2025-08-16 07:32:44 -05:00
Nicolas Mowen
334b6670e1 Add note for Gemini base url (#19399) 2025-08-06 07:02:40 -06:00
boc-the-git
b5067c07f8 Remove deprecated 'version' attribute (#19347) 2025-08-01 05:51:18 -06:00
Nicolas Mowen
21e9b2f2ce Add docs for planning a setup (#19326)
* Add docs for planning a setup

* Add more granularity

* Improve title

* Add storage section

* Fix level

* Change named hardware

* link to section

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-07-30 07:06:39 -06:00
76 changed files with 1333 additions and 720 deletions

View File

@@ -6,7 +6,7 @@ body:
value: |
Use this form to submit a reproducible bug in Frigate or Frigate's UI.
Before submitting your bug report, please [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
**If you are unsure if your issue is actually a bug or not, please submit a support request first.**
@@ -14,6 +14,7 @@ body:
[prs]: https://www.github.com/blakeblackshear/frigate/pulls
[docs]: https://docs.frigate.video
[faq]: https://github.com/blakeblackshear/frigate/discussions/12724
[ai]: https://docs.frigate.video
- type: checkboxes
attributes:
label: Checklist
@@ -26,6 +27,8 @@ body:
- label: I have tried a different browser to see if it is related to my browser.
required: true
- label: I have tried reproducing the issue in [incognito mode](https://www.computerworld.com/article/1719851/how-to-go-incognito-in-chrome-firefox-safari-and-edge.html) to rule out problems with any third party extensions or plugins I have installed.
- label: I have asked the AI at https://docs.frigate.video about my issue.
required: true
- type: textarea
id: description
attributes:

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.16.0
VERSION = 0.16.3
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

View File

@@ -12,7 +12,7 @@
A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
Use of a GPU or AI accelerator such as a [Google Coral](https://coral.ai/products/) or [Hailo](https://hailo.ai/) is highly recommended. AI accelerators will outperform even the best CPUs with very little overhead.
Use of a GPU, Integrated GPU, or AI accelerator such as a [Hailo](https://hailo.ai/) is highly recommended. Dedicated hardware will outperform even the best CPUs with very little overhead.
- Tight integration with Home Assistant via a [custom component](https://github.com/blakeblackshear/frigate-hass-integration)
- Designed to minimize resource use and maximize performance by only looking for objects when and where it is necessary

View File

@@ -152,7 +152,7 @@ ARG TARGETARCH
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https wget \
apt-transport-https wget unzip \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.11 \

View File

@@ -2,18 +2,31 @@
set -euxo pipefail
SQLITE3_VERSION="96c92aba00c8375bc32fafcdf12429c58bd8aabfcadab6683e35bbb9cdebf19e" # 3.46.0
SQLITE3_VERSION="3.46.1"
PYSQLITE3_VERSION="0.5.3"
# Fetch the source code for the latest release of Sqlite.
# Install libsqlite3-dev if not present (needed for some base images like NVIDIA TensorRT)
if ! dpkg -l | grep -q libsqlite3-dev; then
echo "Installing libsqlite3-dev for compilation..."
apt-get update && apt-get install -y libsqlite3-dev && rm -rf /var/lib/apt/lists/*
fi
# Fetch the pre-built sqlite amalgamation instead of building from source
if [[ ! -d "sqlite" ]]; then
wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=${SQLITE3_VERSION} -O sqlite.tar.gz
tar xzf sqlite.tar.gz
cd sqlite/
LIBS="-lm" ./configure --disable-tcl --enable-tempstore=always
make sqlite3.c
mkdir sqlite
cd sqlite
# Download the pre-built amalgamation from sqlite.org
# For SQLite 3.46.1, the amalgamation version is 3460100
SQLITE_AMALGAMATION_VERSION="3460100"
wget https://www.sqlite.org/2024/sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}.zip -O sqlite-amalgamation.zip
unzip sqlite-amalgamation.zip
mv sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}/* .
rmdir sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}
rm sqlite-amalgamation.zip
cd ../
rm sqlite.tar.gz
fi
# Grab the pysqlite3 source code.

View File

@@ -57,9 +57,16 @@ fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# Install non-free version of i965 driver
sed -i -E "/^Components: main$/s/main/main contrib non-free non-free-firmware/" "/etc/apt/sources.list.d/debian.sources" \
&& apt-get -qq update \
&& apt-get install --no-install-recommends --no-install-suggests -y i965-va-driver-shaders \
&& sed -i -E "/^Components: main contrib non-free non-free-firmware$/s/main contrib non-free non-free-firmware/main/" "/etc/apt/sources.list.d/debian.sources" \
&& apt-get update
# install amd / intel-i965 driver packages
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver intel-gpu-tools onevpl-tools \
intel-gpu-tools onevpl-tools \
libva-drm2 \
mesa-va-drivers radeontop

View File

@@ -8,6 +8,7 @@ fastapi == 0.115.*
uvicorn == 0.30.*
slowapi == 0.1.*
joserfc == 1.0.*
cryptography == 44.0.*
pathvalidate == 3.2.*
markupsafe == 3.0.*
python-multipart == 0.0.12

View File

@@ -1,2 +1 @@
scikit-build == 0.18.*
nvidia-pyindex

View File

@@ -112,7 +112,7 @@ RUN apt-get update \
&& apt-get install -y protobuf-compiler libprotobuf-dev \
&& rm -rf /var/lib/apt/lists/*
RUN --mount=type=bind,source=docker/tensorrt/requirements-models-arm64.txt,target=/requirements-tensorrt-models.txt \
pip3 wheel --wheel-dir=/trt-model-wheels -r /requirements-tensorrt-models.txt
pip3 wheel --wheel-dir=/trt-model-wheels --no-deps -r /requirements-tensorrt-models.txt
FROM wget AS jetson-ffmpeg
ARG DEBIAN_FRONTEND
@@ -145,7 +145,8 @@ COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
--mount=type=bind,from=trt-model-wheels,source=/trt-model-wheels,target=/deps/trt-model-wheels \
pip3 uninstall -y onnxruntime \
&& pip3 install -U /deps/trt-wheels/*.whl /deps/trt-model-wheels/*.whl \
&& pip3 install -U /deps/trt-wheels/*.whl \
&& pip3 install -U /deps/trt-model-wheels/*.whl \
&& ldconfig
WORKDIR /opt/frigate/

View File

@@ -144,7 +144,14 @@ WEB Digest Algorithm - MD5
### Reolink Cameras
Reolink has older cameras (ex: 410 & 520) as well as newer camera (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
| ---------------- | ------------------------- | -------------------------------- | ----------------------------------------------------------------------- |
| 5MP or lower | All | http-flv | Stream is h264 |
| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
Frigate works much better with newer reolink cameras that are setup with the below options:
If available, recommended settings are:
@@ -157,19 +164,35 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
The setup of the main stream can also be done via RTSP, but isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras.
:::warning
<details>
<summary>Example Config</summary>
The below configuration only works for reolink cameras with a stream resolution of 5MP or lower; 8MP+ cameras need to use RTSP, as http-flv is not supported in that case.
:::tip
Reolink's latest cameras support two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
NOTE: The RTSP stream can not be prefixed with `ffmpeg:`, as go2rtc needs to handle the stream to support two way audio.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
:::
```yaml
go2rtc:
streams:
# example for connecting to a standard Reolink camera
your_reolink_camera:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
your_reolink_camera_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
# example for connecting to a Reolink camera that supports two way talk
your_reolink_camera_twt:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- "rtsp://username:password@reolink_ip/Preview_01_sub
your_reolink_camera_twt_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
- "rtsp://username:password@reolink_ip/Preview_01_sub
# example for connecting to a Reolink NVR
your_reolink_camera_via_nvr:
- "ffmpeg:http://reolink_nvr_ip/flv?port=1935&app=bcs&stream=channel3_main.bcs&user=username&password=password" # channel numbers are 0-15
- "ffmpeg:your_reolink_camera_via_nvr#audio=aac"
@@ -200,25 +223,16 @@ cameras:
roles:
- detect
```
#### Reolink Doorbell
The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
streams:
your_reolink_doorbell:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
- rtsp://reolink_ip/Preview_01_sub
your_reolink_doorbell_sub:
- "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
</details>
### Unifi Protect Cameras
:::note
Unifi G5 cameras and newer need a Unifi Protect server to enable the rtsps stream; it's not possible to enable it in standalone mode.
:::
Unifi protect cameras require the rtspx stream to be used with go2rtc.
To utilize a Unifi protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.
@@ -259,7 +273,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's
go2rtc:
streams:
usb_camera:
- "ffmpeg:device?video=0&video_size=1024x576#video=h264"
- "ffmpeg:device?video=0&video_size=1024x576#video=h264"
cameras:
usb_camera:

View File

@@ -98,6 +98,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Axis Q-6155E | ✅ | ❌ | ONVIF service port: 80; Camera does not support MoveStatus.
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, among others) have been reported to not support autotracking |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
@@ -107,10 +108,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Reolink | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original and 4K models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |

View File

@@ -158,6 +158,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod
Accuracy is definitely going to be improved with higher quality cameras / streams. It is important to look at the DORI (Detection Observation Recognition Identification) range of your camera, if that specification is posted. This specification explains the distance from the camera at which a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed by the camera is the furthest at which face recognition will realistically work.
Some users have also noted that setting the stream in camera firmware to a constant bit rate (CBR) leads to better image clarity than with a variable bit rate (VBR).
### Why can't I bulk upload photos?
It is important to methodically add photos to the library; bulk importing photos (especially from a general photo library) will lead to over-fitting in that particular scenario and hurt recognition performance.

View File

@@ -21,8 +21,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
| preset-nvidia | Nvidia GPU | |
| preset-jetson-h264 | Nvidia Jetson with h264 stream | |
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
### Input Args Presets

View File

@@ -18,10 +18,10 @@ genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
cameras:
front_camera:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
@@ -30,7 +30,7 @@ cameras:
required_zones:
- steps
indoor_camera:
genai:
enabled: False # <- disable GenAI for your indoor camera
```
@@ -78,7 +78,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).
### Get API Key
@@ -96,16 +96,22 @@ genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
model: gemini-2.0-flash
```
:::note
To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
:::
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models).
### Get API Key
@@ -133,11 +139,11 @@ Microsoft offers several vision models through Azure OpenAI. A subscription is r
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key, model name, and resource URL, which must include the `api-version` parameter (see the example below).
### Configuration
@@ -145,7 +151,8 @@ To start using Azure OpenAI, you must first [create a resource](https://learn.mi
genai:
enabled: True
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
base_url: https://instance.cognitiveservices.azure.com/openai/responses?api-version=2025-04-01-preview
model: gpt-5-mini
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
@@ -196,7 +203,7 @@ genai:
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:

View File

@@ -9,7 +9,6 @@ It is highly recommended to use a GPU for hardware acceleration video decoding i
Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
# Object Detection
## Raspberry Pi 3/4
@@ -229,7 +228,7 @@ Additional configuration is needed for the Docker container to be able to access
services:
frigate:
...
image: ghcr.io/blakeblackshear/frigate:stable
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
deploy: # <------------- Add this section
resources:
reservations:
@@ -247,7 +246,7 @@ docker run -d \
--name frigate \
...
--gpus=all \
ghcr.io/blakeblackshear/frigate:stable
ghcr.io/blakeblackshear/frigate:stable-tensorrt
```
### Setup Decoder

View File

@@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:

View File

@@ -15,7 +15,7 @@ The jsmpeg live view will use more browser and client GPU resources. Using go2rt
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
@@ -127,7 +127,8 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that WebRTC does not support H.265.
- Note that some browsers may not support H.265 (HEVC). You can check your browser's current version for H.265 compatibility [here](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness).
:::tip
@@ -174,7 +175,7 @@ For devices that support two way talk, Frigate can be configured to use the feat
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](http://card.camera/#/usage/2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-cameras)
### Streaming options on camera group dashboards
@@ -251,3 +252,7 @@ Note that disabling a camera through the config file (`enabled: False`) removes
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.
7. **My camera streams have lots of visual artifacts / distortion.**
Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.

View File

@@ -29,6 +29,7 @@ Frigate supports multiple different detectors that work on different types of ha
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
**Nvidia Jetson**
- [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.
@@ -325,6 +326,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -440,14 +447,13 @@ Also AMD/ROCm does not "officially" support integrated GPUs. It still does work
For the rocm frigate build there is some automatic detection:
- gfx90c -> 9.0.0
- gfx1031 -> 10.3.0
- gfx1103 -> 11.0.0
If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:
If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `10.0.0`, then you should configure it from command line as:
```bash
$ docker run -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
$ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \
...
```
@@ -458,7 +464,7 @@ services:
frigate:
environment:
HSA_OVERRIDE_GFX_VERSION: "9.0.0"
HSA_OVERRIDE_GFX_VERSION: "10.0.0"
```
Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
@@ -534,6 +540,12 @@ There is no default model provided, the following formats are supported:
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
:::warning
If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -561,6 +573,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv
:::
:::warning
If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
@@ -960,40 +978,43 @@ Here are some tips for getting different model types
### Downloading D-FINE Model
To export as ONNX:
D-FINE can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute, setting `MODEL_SIZE` in the first line to the `s`, `m`, or `l` size you want.
1. Clone: https://github.com/Peterande/D-FINE and install all dependencies.
2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
3. Modify line 58 of `tools/deployment/export_onnx.py` and change batch size to 1: `data = torch.rand(1, 3, 640, 640)`
4. Run the export, making sure you select the right config, for your checkpoint.
Example:
```
python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
```sh
docker build . --build-arg MODEL_SIZE=s --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim onnxscript
# Create output directory and download checkpoint
RUN mkdir -p output
ARG MODEL_SIZE
RUN wget https://github.com/Peterande/storage/releases/download/dfinev1.0/dfine_${MODEL_SIZE}_obj2coco.pth -O output/dfine_${MODEL_SIZE}_obj2coco.pth
# Modify line 58 of export_onnx.py to change batch size to 1
RUN sed -i '58s/data = torch.rand(.*)/data = torch.rand(1, 3, 640, 640)/' tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_${MODEL_SIZE}_obj2coco.yml -r output/dfine_${MODEL_SIZE}_obj2coco.pth
FROM scratch
ARG MODEL_SIZE
COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL_SIZE}.onnx
EOF
```
:::tip
Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.
Make sure you change the batch size to 1 before exporting.
:::
### Download RF-DETR Model
### Downloading RF-DETR Model
RF-DETR can be exported as ONNX by running the command below. You can copy and paste the whole thing to your terminal and execute, setting `MODEL_SIZE` in the first line to the `Nano`, `Small`, or `Medium` size you want.
```sh
docker build . --build-arg MODEL_SIZE=Nano --output . -f- <<'EOF'
docker build . --build-arg MODEL_SIZE=Nano --rm --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
RUN uv pip install --system rfdetr[onnxexport] torch==2.8.0 onnx==1.19.1 onnxscript
ARG MODEL_SIZE
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@@ -1031,23 +1052,25 @@ python3 yolo_to_onnx.py -m yolov7-320
#### YOLOv9
YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available sizes are `t`, `s`, `m`, `c`, and `e`).
YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).
```sh
docker build . --build-arg MODEL_SIZE=t --output . -f- <<'EOF'
docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1 onnxscript
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz 320 --simplify --include onnx
RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
FROM scratch
ARG MODEL_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /
ARG IMG_SIZE
COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
EOF
```

View File

@@ -11,7 +11,7 @@ This adds features including the ability to deep link directly into the app.
In order to install Frigate as a PWA, the following requirements must be met:
- Frigate must be accessed via a secure context (localhost, secure https, etc.)
- Frigate must be accessed via a secure context (localhost, secure https, VPN, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.
@@ -22,3 +22,7 @@ Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found in right edge of the address bar
- Android: Use the `Install as App` button in the more options menu for Chrome, and the `Add app to Home screen` button for Firefox
- iOS: Use the `Add to Homescreen` button in the share menu
## Usage
Once setup, the Frigate app can be used wherever it has access to Frigate. This means it can be setup as local-only, VPN-only, or fully accessible depending on your needs.

View File

@@ -438,7 +438,7 @@ record:
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
expire_interval: 60
# Optional: Sync recordings with disk on startup and once a day (default: shown below).
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Retention settings for recording
retain:

View File

@@ -36,9 +36,11 @@ If the EQ13 is out of stock, the link below may take you to a suggested alternat
:::
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Name | Capabilities | Notes |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4jn2qVr" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | Can run object detection on several 1080p cameras with low-medium activity | Dual gigabit NICs for easy isolated camera network. |
| Intel i3-1220P ([Amazon](https://www.amazon.com/Beelink-i3-1220P-Computer-Display-Gigabit/dp/B0DDCKT9YP)) | Can handle a large number of 1080p cameras with high activity | |
| Intel 125H ([Amazon](https://www.amazon.com/MINISFORUM-Pro-125H-Barebone-Computer-HDMI2-1/dp/B0FH21FSZM)) | Can handle a significant number of 1080p cameras with high activity | Includes NPU for more efficient detection in 0.17+ |
## Detectors
@@ -99,14 +101,21 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
| Name | Hailo8 Inference Time | Hailo8L Inference Time |
| ---------------- | ---------------------- | ----------------------- |
| ssd mobilenet v1 | ~ 6 ms | ~ 10 ms |
| yolov9-tiny | | 320: 18ms |
| yolov6n | ~ 7 ms | ~ 11 ms |
### Google Coral TPU
:::warning
The Coral is no longer recommended for new Frigate installations, except in deployments with particularly low power requirements or hardware incapable of utilizing alternative AI accelerators for object detection. Instead, we suggest using one of the numerous other supported object detectors. Frigate will continue to provide support for the Coral TPU for as long as practicably possible, given it's still one of the most power-efficient devices for executing object detection models.
:::
Frigate supports both the USB and M.2 versions of the Google Coral.
- The USB version is compatible with the widest variety of hardware and does not require a driver on the host machine. However, it does lack the automatic throttling features of the other versions.
- The PCIe and M.2 versions require installation of a driver on the host. Follow the instructions for your version from https://coral.ai
- The PCIe and M.2 versions require installation of a driver on the host. https://github.com/jnicolson/gasket-builder should be used.
A single Coral can handle many cameras using the default model and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at `1000/10=100`, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
@@ -131,17 +140,19 @@ More information is available [in the detector docs](/configuration/object_detec
Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel Iris XE | ~ 10 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | | |
| Name | MobileNetV2 Inference Time | YOLOv9 | YOLO-NAS Inference Time | RF-DETR Inference Time | Notes |
| -------------- | -------------------------- | ------------------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
| Intel HD 530 | 15 - 35 ms | | | | Can only run one detector instance |
| Intel HD 620 | 15 - 25 ms | | 320: ~ 35 ms | | |
| Intel HD 630 | ~ 15 ms | | 320: ~ 30 ms | | |
| Intel UHD 730 | ~ 10 ms | | 320: ~ 19 ms 640: ~ 54 ms | | |
| Intel UHD 770 | ~ 15 ms | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms | 320: ~ 20 ms 640: ~ 46 ms | | |
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
### TensorRT - Nvidia GPU
@@ -166,12 +177,13 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
| --------------- | --------------------- | ------------------------- | ---------------------- |
| RTX 3050 | t-320: 15 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
| RTX 3070 | t-320: 11 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
| RTX A4000 | | 320: ~ 15 ms | |
| Tesla P40 | | 320: ~ 105 ms | |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
| --------------- | ------------------------- | ------------------------- | ---------------------- |
| GTX 1070 | s-320: 16 ms | 320: 14 ms | |
| RTX 3050 | t-320: 15 ms s-320: 17 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
| RTX 3070 | t-320: 11 ms s-320: 13 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
| RTX A4000 | | 320: ~ 15 ms | |
| Tesla P40 | | 320: ~ 105 ms | |
### ROCm - AMD GPU
@@ -179,7 +191,7 @@ With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detec
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------- | ------------------------- |
| AMD 780M | ~ 14 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 780M | 320: ~ 14 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
## Community Supported Detectors

View File

@@ -43,7 +43,7 @@ The following ports are used by Frigate and can be mapped via docker as required
| `8971` | Authenticated UI and API access without TLS. Reverse proxies should use this port. |
| `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate. |
| `8554` | RTSP restreaming. By default, these streams are unauthenticated. Authentication can be configured in the go2rtc section of the config. |
| `8555` | WebRTC connections for low latency live views. |
| `8555` | WebRTC connections for cameras with two-way talk support. |
#### Common Docker Compose storage configurations
@@ -94,6 +94,10 @@ $ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
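If you want to repeat the per-camera calculation shown above for several different detect resolutions at once, a small helper like the following can be handy. This is only a sketch that mirrors the `width * height * 1.5 * 20 + 270480` bytes formula from the calculation above:

```python
def shm_mb(width: int, height: int) -> float:
    """Approximate shared memory needed for one camera's detect stream, in MB."""
    return (width * height * 1.5 * 20 + 270480) / 1048576

# Example: two 1280x720 detect streams and one 2560x1440 detect stream
cameras = [(1280, 720), (1280, 720), (2560, 1440)]
total = sum(shm_mb(w, h) for w, h in cameras)
print(f"Estimated shm size: {total:.2f}MB")
```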
## Extra Steps for Specific Hardware
The following sections contain additional setup steps that are only required if you are using specific hardware. If you are not using any of these hardware types, you can skip to the [Docker](#docker) installation section.
### Raspberry Pi 3/4
By default, the Raspberry Pi limits the amount of memory available to the GPU. In order to use ffmpeg hardware acceleration, you must increase the available memory by setting `gpu_mem` to the maximum recommended value in `config.txt` as described in the [official docs](https://www.raspberrypi.org/documentation/computers/config_txt.html#memory-options).
@@ -106,14 +110,107 @@ The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form
#### Installation
For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
:::warning
For other installations, follow these steps:
The Raspberry Pi kernel includes an older version of the Hailo driver that is incompatible with Frigate. You **must** follow the installation steps below to install the correct driver version, and you **must** disable the built-in kernel driver as described in step 1.
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
:::
1. **Disable the built-in Hailo driver (Raspberry Pi only)**:
:::note
If you are **not** using a Raspberry Pi, skip this step and proceed directly to step 2.
:::
If you are using a Raspberry Pi, you need to blacklist the built-in kernel Hailo driver to prevent conflicts. First, check if the driver is currently loaded:
```bash
lsmod | grep hailo
```
If it shows `hailo_pci`, unload it:
```bash
sudo rmmod hailo_pci
```
Now blacklist the driver to prevent it from loading on boot:
```bash
echo "blacklist hailo_pci" | sudo tee /etc/modprobe.d/blacklist-hailo_pci.conf
```
Update initramfs to ensure the blacklist takes effect:
```bash
sudo update-initramfs -u
```
Reboot your Raspberry Pi:
```bash
sudo reboot
```
After rebooting, verify the built-in driver is not loaded:
```bash
lsmod | grep hailo
```
This command should return no results. If it still shows `hailo_pci`, the blacklist did not take effect properly and you may need to check for other Hailo packages installed via apt that are loading the driver.
2. **Run the installation script**:
Download the installation script:
```bash
wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/hailo8l/user_installation.sh
```
Make it executable:
```bash
sudo chmod +x user_installation.sh
```
Run the script:
```bash
./user_installation.sh
```
The script will:
- Install necessary build dependencies
- Clone and build the Hailo driver from the official repository
- Install the driver
- Download and install the required firmware
- Set up udev rules
3. **Reboot your system**:
After the script completes successfully, reboot to load the firmware:
```bash
sudo reboot
```
4. **Verify the installation**:
After rebooting, verify that the Hailo device is available:
```bash
ls -l /dev/hailo0
```
You should see the device listed. You can also verify the driver is loaded:
```bash
lsmod | grep hailo_pci
```
#### Setup
@@ -200,7 +297,7 @@ services:
shm_size: "512mb" # update for your cameras based on calculation above
devices:
- /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
- /dev/video11:/dev/video11 # For Raspberry Pi 4B
- /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
volumes:

View File

@@ -0,0 +1,74 @@
---
id: planning_setup
title: Planning a New Installation
---
Choosing the right hardware for your Frigate NVR setup is important for optimal performance and a smooth experience. This guide will walk you through the key considerations, focusing on the number of cameras and the hardware required for efficient object detection.
## Key Considerations
### Number of Cameras and Simultaneous Activity
The most fundamental factor in your hardware decision is the number of cameras you plan to use. However, it's not just about the raw count; it's also about how many of those cameras are likely to see activity and require object detection simultaneously.
When motion is detected in a camera's feed, regions of that frame are sent to your chosen [object detection hardware](/configuration/object_detectors).
- **Low Simultaneous Activity (1-6 cameras with occasional motion)**: If you have a few cameras in areas with infrequent activity (e.g., a seldom-used backyard, a quiet interior), the demand on your object detection hardware will be lower. A single, entry-level AI accelerator will suffice.
- **Moderate Simultaneous Activity (6-12 cameras with some overlapping motion)**: For setups with more cameras, especially in areas like a busy street or a property with multiple access points, it's more likely that several cameras will capture activity at the same time. This increases the load on your object detection hardware, requiring more processing power.
- **High Simultaneous Activity (12+ cameras or highly active zones)**: Large installations or scenarios where many cameras frequently capture activity (e.g., busy street with overview, identification, dedicated LPR cameras, etc.) will necessitate robust object detection capabilities. You'll likely need multiple entry-level AI accelerators or a more powerful single unit such as a discrete GPU.
- **Commercial Installations (40+ cameras)**: Commercial installations or scenarios where a substantial number of cameras capture activity (e.g., a commercial property, an active public space) will necessitate robust object detection capabilities. You'll likely need a modern discrete GPU.
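As a rough way to relate these categories to detector hardware, you can compare the detection throughput you expect against the theoretical ceiling of a detector (`1000 / inference time in ms`). The per-camera detection rate below is only an illustrative assumption, not a measured value:

```python
# Back-of-envelope detector sizing. The detections-per-second figure per
# simultaneously active camera is an illustrative assumption only; real
# numbers depend on scene activity, motion masks, and model resolution.

active_cameras = 4     # cameras likely to see motion at the same time
per_camera_fps = 5     # assumed detection fps per active camera (assumption)
inference_ms = 20      # inference time of the detector you are considering

required_fps = active_cameras * per_camera_fps
detector_ceiling = 1000 / inference_ms

print(f"Required: ~{required_fps} fps, detector ceiling: ~{detector_ceiling:.0f} fps")
print("Likely fine" if required_fps < detector_ceiling else "Consider faster or additional detectors")
```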
### Video Decoding
Modern CPUs with integrated GPUs (Intel Quick Sync, AMD VCN) or dedicated GPUs can significantly offload video decoding from the main CPU, freeing up resources. This is highly recommended, especially for multiple cameras.
:::tip
For commercial installations it is important to verify the number of supported concurrent streams on your GPU; many consumer GPUs max out at ~20 concurrent camera streams.
:::
## Hardware Considerations
### Object Detection
There are many different hardware options for object detection depending on priorities and available hardware. See [the recommended hardware page](./hardware.md#detectors) for more specifics on what hardware is recommended for object detection.
### Storage
Storage is an important consideration when planning a new installation. To get a more precise estimate of your storage requirements, you can use an IP camera storage calculator. Websites like [IPConfigure Storage Calculator](https://calculator.ipconfigure.com/) can help you determine the necessary disk space based on your camera settings.
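If you prefer a quick manual estimate instead of a calculator, the arithmetic is simply bitrate × time × number of cameras. The bitrate used below is an assumed example value, not a recommendation; use the actual bitrate of your record streams:

```python
# Rough storage estimate. The 4 Mbps per-camera bitrate is an assumed example;
# use the actual bitrate of your camera's record stream.

cameras = 6
bitrate_mbps = 4       # per-camera bitrate in megabits per second
retention_days = 14

gb_per_camera_per_day = bitrate_mbps / 8 * 86400 / 1000  # Mb/s -> MB/s -> GB/day
total_gb = gb_per_camera_per_day * cameras * retention_days
print(f"~{gb_per_camera_per_day:.0f} GB per camera per day, ~{total_gb:.0f} GB total")
```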
#### SSDs (Solid State Drives)
SSDs are an excellent choice for Frigate, offering high speed and responsiveness. The older concern that SSDs would quickly "wear out" from constant video recording is largely no longer valid for modern consumer and enterprise-grade SSDs.
- Longevity: Modern SSDs are designed with advanced wear-leveling algorithms and significantly higher "Terabytes Written" (TBW) ratings than earlier models. For typical home NVR use, a good quality SSD will likely outlast the useful life of your NVR hardware itself.
- Performance: SSDs excel at handling the numerous small write operations that occur during continuous video recording and can significantly improve the responsiveness of the Frigate UI and clip retrieval.
- Silence and Efficiency: SSDs produce no noise and consume less power than traditional HDDs.
#### HDDs (Hard Disk Drives)
Traditional Hard Disk Drives (HDDs) remain a great and often more cost-effective option for long-term video storage, especially for larger setups where raw capacity is prioritized.
- Cost-Effectiveness: HDDs offer the best cost per gigabyte, making them ideal for storing many days, weeks, or months of continuous footage.
- Capacity: HDDs are available in much larger capacities than most consumer SSDs, which is beneficial for extensive video archives.
- NVR-Rated Drives: If choosing an HDD, consider drives specifically designed for surveillance (NVR) use, such as Western Digital Purple or Seagate SkyHawk. These drives are engineered for 24/7 operation and continuous write workloads, offering improved reliability compared to standard desktop drives.
#### Determining Your Storage Needs
The amount of storage you need will depend on several factors:
- Number of Cameras: More cameras naturally require more space.
- Resolution and Framerate: Higher resolution (e.g., 4K) and higher framerate (e.g., 30fps) streams consume significantly more storage.
- Recording Method: Continuous recording uses the most space. Motion-only recording or object-triggered recording can save space, but may miss some footage.
- Retention Period: How many days, weeks, or months of footage do you want to keep?
#### Network Storage (NFS/SMB)
While supported, using network-attached storage (NAS) for recordings can introduce latency and network dependency considerations. For optimal performance and reliability, it is generally recommended to have local storage for your Frigate recordings. If using a NAS, ensure your network connection to it is robust and fast (Gigabit Ethernet at minimum) and that the NAS itself can handle the continuous write load.
### RAM (Memory)
- **Basic Minimum: 4GB RAM**: This is generally sufficient for a very basic Frigate setup with a few cameras and a dedicated object detection accelerator, without running any enrichments. Performance might be tight, especially with higher resolution streams or numerous detections.
- **Minimum for Enrichments: 8GB RAM**: If you plan to utilize Frigate's enrichment features (e.g., facial recognition, license plate recognition, or other AI models that run alongside standard object detection), 8GB of RAM should be considered the minimum. Enrichments require additional memory to load and process their respective models and data.
- **Recommended: 16GB RAM**: For most users, especially those with many cameras (8+) or who plan to heavily leverage enrichments, 16GB of RAM is highly recommended. This provides ample headroom for smooth operation, reduces the likelihood of swapping to disk (which can impact performance), and allows for future expansion.

View File

@@ -5,7 +5,7 @@ title: Updating
# Updating Frigate
The current stable version of Frigate is **0.15.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.15.0).
The current stable version of Frigate is **0.16.3**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.3).
Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.
@@ -33,21 +33,21 @@ If you're running Frigate via Docker (recommended method), follow these steps:
2. **Update and Pull the Latest Image**:
- If using Docker Compose:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.15.0` instead of `0.14.1`). For example:
- Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.3` instead of `0.15.2`). For example:
```yaml
services:
frigate:
image: ghcr.io/blakeblackshear/frigate:0.15.0
image: ghcr.io/blakeblackshear/frigate:0.16.3
```
- Then pull the image:
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.15.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.3
```
- **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don't need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
- If using `docker run`:
- Pull the image with the appropriate tag (e.g., `0.15.0`, `0.15.0-tensorrt`, or `stable`):
- Pull the image with the appropriate tag (e.g., `0.16.3`, `0.16.3-tensorrt`, or `stable`):
```bash
docker pull ghcr.io/blakeblackshear/frigate:0.15.0
docker pull ghcr.io/blakeblackshear/frigate:0.16.3
```
3. **Start the Container**:
@@ -105,8 +105,8 @@ If an update causes issues:
1. Stop Frigate.
2. Restore your backed-up config file and database.
3. Revert to the previous image version:
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`), and re-run `docker compose up -d`.
- For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
- For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
- For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
4. Verify the old version is running again.

View File

@@ -15,10 +15,10 @@ At a high level, there are five processing steps that could be applied to a came
%%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%
flowchart LR
Feed(Feed\nacquisition) --> Decode(Video\ndecoding)
Decode --> Motion(Motion\ndetection)
Motion --> Object(Object\ndetection)
Feed --> Recording(Recording\nand\nvisualization)
Feed(Feed acquisition) --> Decode(Video decoding)
Decode --> Motion(Motion detection)
Motion --> Object(Object detection)
Feed --> Recording(Recording and visualization)
Motion --> Recording
Object --> Recording
```

View File

@@ -114,7 +114,7 @@ section.
## Next steps
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
2. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats and may require opening ports on your router.
2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
## Important considerations

View File

@@ -202,7 +202,7 @@ services:
...
devices:
- /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://github.com/jnicolson/gasket-builder
...
```

View File

@@ -185,6 +185,26 @@ For clips to be castable to media devices, audio is required and may need to be
<a name="api"></a>
## Camera API
To disable a camera dynamically:
```yaml
action: camera.turn_off
data: {}
target:
entity_id: camera.back_deck_cam # your Frigate camera entity ID
```
To enable a camera that has been disabled dynamically:
```yaml
action: camera.turn_on
data: {}
target:
entity_id: camera.back_deck_cam # your Frigate camera entity ID
```
## Notification API
Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.

View File

@@ -29,12 +29,12 @@ Message published for each changed tracked object. The first message is publishe
"camera": "front_door",
"frame_time": 1607123961.837752,
"snapshot": {
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": [],
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": []
},
"label": "person",
"sub_label": null,
@@ -61,6 +61,7 @@ Message published for each changed tracked object. The first message is publishe
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [], // detailed data about the current attributes in this frame
"current_estimated_speed": 0.71, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"average_estimated_speed": 14.3, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
"recognized_license_plate": "ABC12345", // a recognized license plate for car objects
"recognized_license_plate_score": 0.933451
@@ -70,12 +71,12 @@ Message published for each changed tracked object. The first message is publishe
"camera": "front_door",
"frame_time": 1607123962.082975,
"snapshot": {
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": [],
"frame_time": 1607123965.975463,
"box": [415, 489, 528, 700],
"area": 12728,
"region": [260, 446, 660, 846],
"score": 0.77546,
"attributes": []
},
"label": "person",
"sub_label": ["John Smith", 0.79],
@@ -109,6 +110,7 @@ Message published for each changed tracked object. The first message is publishe
}
],
"current_estimated_speed": 0.77, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"average_estimated_speed": 14.31, // average estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180, // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
"recognized_license_plate": "ABC12345", // a recognized license plate for car objects
"recognized_license_plate_score": 0.933451
@@ -139,7 +141,7 @@ Message published for updates to tracked object metadata, for example:
"name": "John",
"score": 0.95,
"camera": "front_door_cam",
"timestamp": 1607123958.748393,
"timestamp": 1607123958.748393
}
```
@@ -153,13 +155,20 @@ Message published for updates to tracked object metadata, for example:
"plate": "123ABC",
"score": 0.95,
"camera": "driveway_cam",
"timestamp": 1607123958.748393,
"timestamp": 1607123958.748393
}
```
### `frigate/reviews`
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish an `update` message with the same id. When the review activity has ended, a final `end` message is published.
Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
An `update` with the same ID will be published when:
- The severity changes from `detection` to `alert`
- Additional objects are detected
- An object is recognized via face, lpr, etc.
When the review activity has ended, a final `end` message is published.
```json
{

View File

@@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w` | Add box |
| `d` | Toggle difficult |
| `s` | Switch to the next label |
| `Shift + s` | Switch to the previous label |
| `tab` | Select next largest box |
| `del` | Delete current box |
| `esc` | Deselect/Cancel |

View File

@@ -34,6 +34,12 @@ Model IDs are not secret values and can be shared freely. Access to your model i
:::
:::tip
When setting the plus model ID, all other fields should be removed, as these are configured automatically with the Frigate+ model config.
:::
## Step 4: Adjust your object filters for higher scores
Frigate+ models generally have much higher scores than the default model provided in Frigate. You will likely need to increase your `threshold` and `min_score` values. Here is an example of how these values can be refined, but you should expect these to evolve as your model improves. For more information about how `threshold` and `min_score` are related, see the docs on [object filters](../configuration/object_filters.md#object-scores).

View File

@@ -11,34 +11,51 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ
## Available model types
There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yolov9`. All of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.
| Model Type | Description |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| Model Type | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
| `yolov9` | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs. |
_\* Support coming in 0.17_
### YOLOv9 Details
YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
:::info
When switching to YOLOv9, you may need to adjust your thresholds for some objects.
:::
#### Hailo Support
If you have a Hailo device, you will need to specify the hardware you have when submitting a model request because they are not cross-compatible. Please test using the available base models before submitting your model request.
#### Rockchip (RKNN) Support
For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), and ONNX (`onnx`) detectors.
:::warning
Using Frigate+ models with `onnx` is only available with Frigate 0.15 and later.
:::
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.
| Hardware | Recommended Detector Type | Recommended Model Type |
| -------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
| [NVidia GPU](/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolov9` |
| [NVidia GPU](/configuration/object_detectors#onnx) | `onnx` | `yolov9` |
| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector) | `onnx` | `yolov9` |
| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8) | `hailo8l` | `yolov9` |
| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\* | `rknn` | `yolov9` |
_\* Requires Frigate 0.15_
_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._
## Improving your model

View File

@@ -68,8 +68,7 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n
The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on which OS and kernel are being run.
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For some newer Linux distros (for example, Ubuntu 22.04+), https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
- In most cases https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
## Attempting to load TPU as pci & Fatal Python error: Illegal instruction

View File

@@ -7,6 +7,7 @@ const sidebars: SidebarsConfig = {
Frigate: [
'frigate/index',
'frigate/hardware',
'frigate/planning_setup',
'frigate/installation',
'frigate/updating',
'frigate/camera_setup',

View File

@@ -1759,6 +1759,10 @@ paths:
- name: include_thumbnails
in: query
required: false
description: >
Deprecated. Thumbnail data is no longer included in the response.
Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
deprecated: true
schema:
anyOf:
- type: integer
@@ -1973,6 +1977,10 @@ paths:
- name: include_thumbnails
in: query
required: false
description: >
Deprecated. Thumbnail data is no longer included in the response.
Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
deprecated: true
schema:
anyOf:
- type: integer

View File

@@ -20,7 +20,7 @@ from fastapi.encoders import jsonable_encoder
from fastapi.params import Depends
from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
from markupsafe import escape
from peewee import operator
from peewee import SQL, operator
from pydantic import ValidationError
from frigate.api.auth import require_role
@@ -685,7 +685,14 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
@router.get("/recognized_license_plates")
def get_recognized_license_plates(split_joined: Optional[int] = None):
try:
events = Event.select(Event.data).distinct()
query = (
Event.select(
SQL("json_extract(data, '$.recognized_license_plate') AS plate")
)
.where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
.distinct()
)
recognized_license_plates = [row[0] for row in query.tuples()]
except Exception:
return JSONResponse(
content=(
@@ -694,14 +701,6 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
status_code=404,
)
recognized_license_plates = []
for e in events:
if e.data is not None and "recognized_license_plate" in e.data:
recognized_license_plates.append(e.data["recognized_license_plate"])
while None in recognized_license_plates:
recognized_license_plates.remove(None)
if split_joined:
original_recognized_license_plates = recognized_license_plates.copy()
for recognized_license_plate in original_recognized_license_plates:

View File

@@ -447,8 +447,14 @@ def create_user(
return JSONResponse(content={"username": body.username})
@router.delete("/users/{username}")
def delete_user(username: str):
@router.delete("/users/{username}", dependencies=[Depends(require_role(["admin"]))])
def delete_user(request: Request, username: str):
# Prevent deletion of the built-in admin user
if username == "admin":
return JSONResponse(
content={"message": "Cannot delete admin user"}, status_code=403
)
User.delete_by_id(username)
return JSONResponse(content={"success": True})

View File

@@ -214,7 +214,7 @@ async def register_face(request: Request, name: str, file: UploadFile):
)
context: EmbeddingsContext = request.app.embeddings
result = context.register_face(name, await file.read())
result = None if context is None else context.register_face(name, await file.read())
if not isinstance(result, dict):
return JSONResponse(

View File

@@ -1,6 +1,6 @@
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
DEFAULT_TIME_RANGE = "00:00,24:00"
@@ -21,7 +21,14 @@ class EventsQueryParams(BaseModel):
has_clip: Optional[int] = None
has_snapshot: Optional[int] = None
in_progress: Optional[int] = None
include_thumbnails: Optional[int] = 1
include_thumbnails: Optional[int] = Field(
1,
description=(
"Deprecated. Thumbnail data is no longer included in the response. "
"Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
),
deprecated=True,
)
favorites: Optional[int] = None
min_score: Optional[float] = None
max_score: Optional[float] = None
@@ -40,7 +47,14 @@ class EventsSearchQueryParams(BaseModel):
query: Optional[str] = None
event_id: Optional[str] = None
search_type: Optional[str] = "thumbnail"
include_thumbnails: Optional[int] = 1
include_thumbnails: Optional[int] = Field(
1,
description=(
"Deprecated. Thumbnail data is no longer included in the response. "
"Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
),
deprecated=True,
)
limit: Optional[int] = 50
cameras: Optional[str] = "all"
labels: Optional[str] = "all"

View File

@@ -10,6 +10,11 @@ class Extension(str, Enum):
jpg = "jpg"
jpeg = "jpeg"
def get_mime_type(self) -> str:
if self in (Extension.jpg, Extension.jpeg):
return "image/jpeg"
return f"image/{self.value}"
class MediaLatestFrameQueryParams(BaseModel):
bbox: Optional[int] = None

View File

@@ -724,15 +724,24 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
if (sort is None or sort == "relevance") and search_results:
processed_events.sort(key=lambda x: x.get("search_distance", float("inf")))
elif min_score is not None and max_score is not None and sort == "score_asc":
elif sort == "score_asc":
processed_events.sort(key=lambda x: x["data"]["score"])
elif min_score is not None and max_score is not None and sort == "score_desc":
elif sort == "score_desc":
processed_events.sort(key=lambda x: x["data"]["score"], reverse=True)
elif min_speed is not None and max_speed is not None and sort == "speed_asc":
processed_events.sort(key=lambda x: x["data"]["average_estimated_speed"])
elif min_speed is not None and max_speed is not None and sort == "speed_desc":
elif sort == "speed_asc":
processed_events.sort(
key=lambda x: x["data"]["average_estimated_speed"], reverse=True
key=lambda x: (
x["data"].get("average_estimated_speed") is None,
x["data"].get("average_estimated_speed"),
)
)
elif sort == "speed_desc":
processed_events.sort(
key=lambda x: (
x["data"].get("average_estimated_speed") is None,
x["data"].get("average_estimated_speed", float("-inf")),
),
reverse=True,
)
elif sort == "date_asc":
processed_events.sort(key=lambda x: x["start_time"])

View File

@@ -8,6 +8,7 @@ from pathlib import Path
import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
@@ -15,7 +16,7 @@ from frigate.api.auth import require_role
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
from frigate.const import EXPORT_DIR
from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
PlaybackFactorEnum,
@@ -54,7 +55,14 @@ def export_recording(
playback_factor = body.playback
playback_source = body.source
friendly_name = body.name
existing_image = body.image_path
existing_image = sanitize_filepath(body.image_path) if body.image_path else None
# Ensure that existing_image is a valid path
if existing_image and not existing_image.startswith(CLIPS_DIR):
return JSONResponse(
content=({"success": False, "message": "Invalid image path"}),
status_code=400,
)
if playback_source == "recordings":
recordings_count = (

View File

@@ -142,15 +142,13 @@ def latest_frame(
"regions": params.regions,
}
quality = params.quality
mime_type = extension
if extension == "png":
if extension == Extension.png:
quality_params = None
elif extension == "webp":
elif extension == Extension.webp:
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
else:
else: # jpg or jpeg
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
mime_type = "jpeg"
if camera_name in request.app.frigate_config.cameras:
frame = frame_processor.get_current_frame(camera_name, draw_options)
@@ -193,18 +191,21 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
_, img = cv2.imencode(f".{extension}", frame, quality_params)
_, img = cv2.imencode(f".{extension.value}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{mime_type}",
media_type=extension.get_mime_type(),
headers={
"Content-Type": f"image/{mime_type}",
"Cache-Control": "no-store"
if not params.store
else "private, max-age=60",
},
)
elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
elif (
camera_name == "birdseye"
and request.app.frigate_config.birdseye.enabled
and request.app.frigate_config.birdseye.restream
):
frame = cv2.cvtColor(
frame_processor.get_current_frame(camera_name),
cv2.COLOR_YUV2BGR_I420,
@@ -215,12 +216,11 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
_, img = cv2.imencode(f".{extension}", frame, quality_params)
_, img = cv2.imencode(f".{extension.value}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{mime_type}",
media_type=extension.get_mime_type(),
headers={
"Content-Type": f"image/{mime_type}",
"Cache-Control": "no-store"
if not params.store
else "private, max-age=60",
@@ -749,7 +749,10 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
"/vod/event/{event_id}",
description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_event(event_id: str):
def vod_event(
event_id: str,
padding: int = Query(0, description="Padding to apply to the vod."),
):
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
@@ -772,32 +775,23 @@ def vod_event(event_id: str):
status_code=404,
)
clip_path = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.mp4")
if not os.path.isfile(clip_path):
end_ts = (
datetime.now().timestamp() if event.end_time is None else event.end_time
)
vod_response = vod_ts(event.camera, event.start_time, end_ts)
# If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
if (
event.start_time < datetime.now().timestamp() - 300
and type(vod_response) is tuple
and len(vod_response) == 2
and vod_response[1] == 404
):
Event.update(has_clip=False).where(Event.id == event_id).execute()
return vod_response
duration = int((event.end_time - event.start_time) * 1000)
return JSONResponse(
content={
"cache": True,
"discontinuity": False,
"durations": [duration],
"sequences": [{"clips": [{"type": "source", "path": clip_path}]}],
}
end_ts = (
datetime.now().timestamp()
if event.end_time is None
else (event.end_time + padding)
)
vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
# If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
if (
event.start_time < datetime.now().timestamp() - 300
and type(vod_response) is tuple
and len(vod_response) == 2
and vod_response[1] == 404
):
Event.update(has_clip=False).where(Event.id == event_id).execute()
return vod_response
@router.get(
@@ -878,7 +872,7 @@ def event_snapshot(
def event_thumbnail(
request: Request,
event_id: str,
extension: str,
extension: Extension,
max_cache_age: int = Query(
2592000, description="Max cache age in seconds. Default 30 days in seconds."
),
@@ -903,7 +897,7 @@ def event_thumbnail(
if event_id in camera_state.tracked_objects:
tracked_obj = camera_state.tracked_objects.get(event_id)
if tracked_obj is not None:
thumbnail_bytes = tracked_obj.get_thumbnail(extension)
thumbnail_bytes = tracked_obj.get_thumbnail(extension.value)
except Exception:
return JSONResponse(
content={"success": False, "message": "Event not found"},
@@ -931,23 +925,21 @@ def event_thumbnail(
)
quality_params = None
if extension == "jpg" or extension == "jpeg":
if extension in (Extension.jpg, Extension.jpeg):
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
elif extension == "webp":
elif extension == Extension.webp:
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]
_, img = cv2.imencode(f".{extension}", thumbnail, quality_params)
_, img = cv2.imencode(f".{extension.value}", thumbnail, quality_params)
thumbnail_bytes = img.tobytes()
return Response(
thumbnail_bytes,
media_type=f"image/{extension}",
media_type=extension.get_mime_type(),
headers={
"Cache-Control": f"private, max-age={max_cache_age}"
if event_complete
else "no-store",
"Content-Type": f"image/{extension}",
},
)
@@ -1158,7 +1150,11 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get("/events/{event_id}/clip.mp4")
def event_clip(request: Request, event_id: str):
def event_clip(
request: Request,
event_id: str,
padding: int = Query(0, description="Padding to apply to clip."),
):
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
@@ -1171,8 +1167,12 @@ def event_clip(request: Request, event_id: str):
content={"success": False, "message": "Clip not available"}, status_code=404
)
end_ts = datetime.now().timestamp() if event.end_time is None else event.end_time
return recording_clip(request, event.camera, event.start_time, end_ts)
end_ts = (
datetime.now().timestamp()
if event.end_time is None
else event.end_time + padding
)
return recording_clip(request, event.camera, event.start_time - padding, end_ts)
@router.get("/events/{event_id}/preview.gif")
@@ -1598,7 +1598,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
try:
event_id = event_query.scalar()
return event_thumbnail(request, event_id, 60)
return event_thumbnail(request, event_id, Extension.jpg, 60)
except DoesNotExist:
frame = np.zeros((175, 175, 3), np.uint8)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])

View File

@@ -250,6 +250,7 @@ class FrigateApp:
and not genai_cameras
and not self.config.lpr.enabled
and not self.config.face_recognition.enabled
and not self.config.classification.bird.enabled
):
return

View File

@@ -61,6 +61,7 @@ class FfmpegConfig(FrigateBaseModel):
retry_interval: float = Field(
default=10.0,
title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
gt=0.0,
)
apple_compatibility: bool = Field(
default=False,

View File

@@ -41,10 +41,13 @@ class BirdRealTimeProcessor(RealTimeProcessorApi):
self.detected_birds: dict[str, float] = {}
self.labelmap: dict[int, str] = {}
GITHUB_RAW_ENDPOINT = os.environ.get(
"GITHUB_RAW_ENDPOINT", "https://raw.githubusercontent.com"
)
download_path = os.path.join(MODEL_CACHE_DIR, "bird")
self.model_files = {
"bird.tflite": "https://raw.githubusercontent.com/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
"birdmap.txt": "https://raw.githubusercontent.com/google-coral/test_data/master/inat_bird_labels.txt",
"bird.tflite": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/mobilenet_v2_1.0_224_inat_bird_quant.tflite",
"birdmap.txt": f"{GITHUB_RAW_ENDPOINT}/google-coral/test_data/master/inat_bird_labels.txt",
}
if not all(

View File

@@ -60,10 +60,12 @@ class FaceRealTimeProcessor(RealTimeProcessorApi):
self.faces_per_second = EventsPerSecond()
self.inference_speed = InferenceSpeed(self.metrics.face_rec_speed)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
download_path = os.path.join(MODEL_CACHE_DIR, "facedet")
self.model_files = {
"facedet.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
"landmarkdet.yaml": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
"facedet.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facedet.onnx",
"landmarkdet.yaml": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/landmarkdet.yaml",
}
if not all(

View File

@@ -158,6 +158,13 @@ class ModelConfig(BaseModel):
self.input_pixel_format = model_info["pixelFormat"]
self.model_type = model_info["type"]
if model_info.get("inputDataType"):
self.input_dtype = model_info["inputDataType"]
# RKNN always uses NHWC
if detector == "rknn":
self.input_tensor = InputTensorEnum.nhwc
# generate list of attribute labels
self.attributes_map = {
**model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),

View File

@@ -139,8 +139,9 @@ class Rknn(DetectionApi):
if not os.path.isdir(model_cache_dir):
os.mkdir(model_cache_dir)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
urllib.request.urlretrieve(
f"https://github.com/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
f"{GITHUB_ENDPOINT}/MarcA711/rknn-models/releases/download/v2.3.2-2/{filename}",
model_cache_dir + filename,
)

View File

@@ -24,11 +24,12 @@ FACENET_INPUT_SIZE = 160
class FaceNetEmbedding(BaseEmbedding):
def __init__(self):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="facedet",
model_file="facenet.tflite",
download_urls={
"facenet.tflite": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
"facenet.tflite": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/facenet.tflite",
},
)
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
@@ -110,11 +111,12 @@ class FaceNetEmbedding(BaseEmbedding):
class ArcfaceEmbedding(BaseEmbedding):
def __init__(self):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="facedet",
model_file="arcface.onnx",
download_urls={
"arcface.onnx": "https://github.com/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
"arcface.onnx": f"{GITHUB_ENDPOINT}/NickM-27/facenet-onnx/releases/download/v1.0/arcface.onnx",
},
)
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)

View File

@@ -34,11 +34,12 @@ class PaddleOCRDetection(BaseEmbedding):
model_file = (
"detection-large.onnx" if model_size == "large" else "detection-small.onnx"
)
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file=model_file,
download_urls={
model_file: f"https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{model_file}"
model_file: f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/{model_file}"
},
)
self.requestor = requestor
@@ -94,11 +95,12 @@ class PaddleOCRClassification(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file="classification.onnx",
download_urls={
"classification.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
"classification.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/classification.onnx"
},
)
self.requestor = requestor
@@ -154,11 +156,12 @@ class PaddleOCRRecognition(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="paddleocr-onnx",
model_file="recognition.onnx",
download_urls={
"recognition.onnx": "https://github.com/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/recognition.onnx"
"recognition.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/paddleocr-onnx/raw/refs/heads/master/models/recognition.onnx"
},
)
self.requestor = requestor
@@ -214,11 +217,12 @@ class LicensePlateDetector(BaseEmbedding):
requestor: InterProcessRequestor,
device: str = "AUTO",
):
GITHUB_ENDPOINT = os.environ.get("GITHUB_ENDPOINT", "https://github.com")
super().__init__(
model_name="yolov9_license_plate",
model_file="yolov9-256-license-plates.onnx",
download_urls={
"yolov9-256-license-plates.onnx": "https://github.com/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
"yolov9-256-license-plates.onnx": f"{GITHUB_ENDPOINT}/hawkeye217/yolov9-license-plates/raw/refs/heads/master/models/yolov9-256-license-plates.onnx"
},
)

View File

@@ -40,10 +40,15 @@ class GenAIClient:
event: Event,
) -> Optional[str]:
"""Generate a description for the frame."""
prompt = camera_config.genai.object_prompts.get(
event.label,
camera_config.genai.prompt,
).format(**model_to_dict(event))
try:
prompt = camera_config.genai.object_prompts.get(
event.label,
camera_config.genai.prompt,
).format(**model_to_dict(event))
except KeyError as e:
logger.error(f"Invalid key in GenAI prompt: {e}")
return None
logger.debug(f"Sending images to genai provider with prompt: {prompt}")
return self._send(prompt, thumbnails)

View File

@@ -369,12 +369,13 @@ class PtzAutoTracker:
logger.info(f"Camera calibration for {camera} in progress")
# zoom levels test
self.zoom_time[camera] = 0
if (
self.config.cameras[camera].onvif.autotracking.zooming
!= ZoomingModeEnum.disabled
):
logger.info(f"Calibration for {camera} in progress: 0% complete")
self.zoom_time[camera] = 0
for i in range(2):
# absolute move to 0 - fully zoomed out
@@ -1329,7 +1330,11 @@ class PtzAutoTracker:
if camera_config.onvif.autotracking.enabled:
if not self.autotracker_init[camera]:
self._autotracker_setup(camera_config, camera)
future = asyncio.run_coroutine_threadsafe(
self._autotracker_setup(camera_config, camera), self.onvif.loop
)
# Wait for the coroutine to complete
future.result()
if self.calibrating[camera]:
logger.debug(f"{camera}: Calibrating camera")
@@ -1476,7 +1481,8 @@ class PtzAutoTracker:
self.tracked_object[camera] = None
self.tracked_object_history[camera].clear()
self.ptz_metrics[camera].motor_stopped.wait()
while not self.ptz_metrics[camera].motor_stopped.is_set():
await self.onvif.get_camera_status(camera)
logger.debug(
f"{camera}: Time is {self.ptz_metrics[camera].frame_time.value}, returning to preset: {autotracker_config.return_preset}"
)
@@ -1486,7 +1492,7 @@ class PtzAutoTracker:
)
# update stored zoom level from preset
if not self.ptz_metrics[camera].motor_stopped.is_set():
while not self.ptz_metrics[camera].motor_stopped.is_set():
await self.onvif.get_camera_status(camera)
self.ptz_metrics[camera].tracking_active.clear()

View File

@@ -48,6 +48,8 @@ class OnvifController:
self.config = config
self.ptz_metrics = ptz_metrics
self.status_locks: dict[str, asyncio.Lock] = {}
# Create a dedicated event loop and run it in a separate thread
self.loop = asyncio.new_event_loop()
self.loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
@@ -59,6 +61,7 @@ class OnvifController:
continue
if cam.onvif.host:
self.camera_configs[cam_name] = cam
self.status_locks[cam_name] = asyncio.Lock()
asyncio.run_coroutine_threadsafe(self._init_cameras(), self.loop)
@@ -764,105 +767,110 @@ class OnvifController:
return False
async def get_camera_status(self, camera_name: str) -> None:
if camera_name not in self.cams.keys():
logger.error(f"ONVIF is not configured for {camera_name}")
return
if not self.cams[camera_name]["init"]:
if not await self._init_onvif(camera_name):
async with self.status_locks[camera_name]:
if camera_name not in self.cams.keys():
logger.error(f"ONVIF is not configured for {camera_name}")
return
status_request = self.cams[camera_name]["status_request"]
try:
status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
except Exception:
pass # We're unsupported, that'll be reported in the next check.
if not self.cams[camera_name]["init"]:
if not await self._init_onvif(camera_name):
return
try:
pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
zoom_status = getattr(status.MoveStatus, "Zoom", None)
status_request = self.cams[camera_name]["status_request"]
try:
status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
except Exception:
pass # We're unsupported, that'll be reported in the next check.
# if it's not an attribute, see if MoveStatus even exists in the status result
if pan_tilt_status is None:
pan_tilt_status = getattr(status, "MoveStatus", None)
try:
pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
zoom_status = getattr(status.MoveStatus, "Zoom", None)
# we're unsupported
if pan_tilt_status is None or pan_tilt_status not in [
"IDLE",
"MOVING",
]:
raise Exception
except Exception:
logger.warning(
f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
# if it's not an attribute, see if MoveStatus even exists in the status result
if pan_tilt_status is None:
pan_tilt_status = getattr(status, "MoveStatus", None)
# we're unsupported
            if pan_tilt_status is None or pan_tilt_status not in [
                "IDLE",
                "MOVING",
            ]:
                raise Exception
        except Exception:
            logger.warning(
                f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
            )
            return

        logger.debug(
            f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
        )

        if pan_tilt_status == "IDLE" and (
            zoom_status is None or zoom_status == "IDLE"
        ):
            self.cams[camera_name]["active"] = False
            if not self.ptz_metrics[camera_name].motor_stopped.is_set():
                self.ptz_metrics[camera_name].motor_stopped.set()

                logger.debug(
                    f"{camera_name}: PTZ stop time: {self.ptz_metrics[camera_name].frame_time.value}"
                )

                self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                    camera_name
                ].frame_time.value
        else:
            self.cams[camera_name]["active"] = True
            if self.ptz_metrics[camera_name].motor_stopped.is_set():
                self.ptz_metrics[camera_name].motor_stopped.clear()

                logger.debug(
                    f"{camera_name}: PTZ start time: {self.ptz_metrics[camera_name].frame_time.value}"
                )

                self.ptz_metrics[camera_name].start_time.value = self.ptz_metrics[
                    camera_name
                ].frame_time.value
                self.ptz_metrics[camera_name].stop_time.value = 0

        if (
            self.config.cameras[camera_name].onvif.autotracking.zooming
            != ZoomingModeEnum.disabled
        ):
            # store absolute zoom level as 0 to 1 interpolated from the values of the camera
            self.ptz_metrics[camera_name].zoom_level.value = numpy.interp(
                round(status.Position.Zoom.x, 2),
                [
                    self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
                    self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
                ],
                [0, 1],
            )

            logger.debug(
                f"{camera_name}: Camera zoom level: {self.ptz_metrics[camera_name].zoom_level.value}"
            )

        # some hikvision cams won't update MoveStatus, so warn if it hasn't changed
        if (
            not self.ptz_metrics[camera_name].motor_stopped.is_set()
            and not self.ptz_metrics[camera_name].reset.is_set()
            and self.ptz_metrics[camera_name].start_time.value != 0
            and self.ptz_metrics[camera_name].frame_time.value
            > (self.ptz_metrics[camera_name].start_time.value + 10)
            and self.ptz_metrics[camera_name].stop_time.value == 0
        ):
            logger.debug(
                f"Start time: {self.ptz_metrics[camera_name].start_time.value}, Stop time: {self.ptz_metrics[camera_name].stop_time.value}, Frame time: {self.ptz_metrics[camera_name].frame_time.value}"
            )

            # set the stop time so we don't come back into this again and spam the logs
            self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                camera_name
            ].frame_time.value
            logger.warning(f"Camera {camera_name} is still in ONVIF 'MOVING' status.")

    def close(self) -> None:
        """Gracefully shut down the ONVIF controller."""
        if not hasattr(self, "loop") or self.loop.is_closed():


@@ -66,7 +66,7 @@ def sync_recordings(limited: bool) -> None:
if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
logger.warning(
f"Deleting {(float(len(recordings_to_delete)) / recordings.count()):2f}% of recordings DB entries, could be due to configuration error. Aborting..."
f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
@@ -106,7 +106,7 @@ def sync_recordings(limited: bool) -> None:
if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
logger.debug(
f"Deleting {(float(len(files_to_delete)) / len(files_on_disk)):2f}% of recordings DB entries, could be due to configuration error. Aborting..."
f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
)
return False
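The hunk above fixes two issues in this guard's log message: the old format string printed the raw fraction (with a malformed `:2f` spec) rather than a percentage, and the divisor could be zero for an empty table. A standalone TypeScript sketch of the arithmetic, with invented counts, purely to illustrate the before/after output (the actual change is the Python shown above):

// Standalone illustration (not project code): 600 of 1000 entries flagged for deletion.
const toDelete = 600;
const total = 1000;

// Old style: the bare fraction was formatted, reading as "0.600000%".
const oldMessage = `Deleting ${(toDelete / total).toFixed(6)}% of recordings DB entries`;

// New style: guard against an empty table and report an actual percentage, "60.00%".
const newMessage = `Deleting ${((toDelete / Math.max(1, total)) * 100).toFixed(2)}% of recordings DB entries`;

console.log(oldMessage, newMessage);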


@@ -301,7 +301,7 @@ def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, s
"-o",
"-",
"-s",
"1",
"1000", # Intel changed this from seconds to milliseconds in 2024+ versions
]
if intel_gpu_device:

web/public/robots.txt (new file, 2 additions)

@@ -0,0 +1,2 @@
User-agent: *
Disallow: /


@@ -131,10 +131,7 @@ export default function SearchFilterGroup({
);
const availableSortTypes = useMemo(() => {
const sortTypes = ["date_asc", "date_desc"];
if (filter?.min_score || filter?.max_score) {
sortTypes.push("score_desc", "score_asc");
}
const sortTypes = ["date_asc", "date_desc", "score_desc", "score_asc"];
if (filter?.min_speed || filter?.max_speed) {
sortTypes.push("speed_desc", "speed_asc");
}


@@ -332,7 +332,9 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
<Portal>
<SubItemContent
className={
isDesktop ? "" : "w-[92%] rounded-lg md:rounded-2xl"
isDesktop
? ""
: "scrollbar-container max-h-[75dvh] w-[92%] overflow-y-scroll rounded-lg md:rounded-2xl"
}
>
<span tabIndex={0} className="sr-only" />


@@ -433,137 +433,139 @@ function CustomTimeSelector({
      className={`mt-3 flex items-center rounded-lg bg-secondary text-secondary-foreground ${isDesktop ? "mx-8 gap-2 px-2" : "pl-2"}`}
    >
      <FaCalendarAlt />
      <div className="flex flex-wrap items-center">
        <Popover
          open={startOpen}
          onOpenChange={(open) => {
            if (!open) {
              setStartOpen(false);
            }
          }}
        >
          <PopoverTrigger asChild>
            <Button
              className={`text-primary ${isDesktop ? "" : "text-xs"}`}
              aria-label={t("export.time.start.title")}
              variant={startOpen ? "select" : "default"}
              size="sm"
              onClick={() => {
                setStartOpen(true);
                setEndOpen(false);
              }}
            >
              {formattedStart}
            </Button>
          </PopoverTrigger>
          <PopoverContent className="flex flex-col items-center">
            <TimezoneAwareCalendar
              timezone={config?.ui.timezone}
              selectedDay={new Date(startTime * 1000)}
              onSelect={(day) => {
                if (!day) {
                  return;
                }
                setRange({
                  before: endTime,
                  after: day.getTime() / 1000 + 1,
                });
              }}
            />
            <SelectSeparator className="bg-secondary" />
            <input
              className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
              id="startTime"
              type="time"
              value={startClock}
              step={isIOS ? "60" : "1"}
              onChange={(e) => {
                const clock = e.target.value;
                const [hour, minute, second] = isIOS
                  ? [...clock.split(":"), "00"]
                  : clock.split(":");
                const start = new Date(startTime * 1000);
                start.setHours(
                  parseInt(hour),
                  parseInt(minute),
                  parseInt(second ?? 0),
                  0,
                );
                setRange({
                  before: endTime,
                  after: start.getTime() / 1000,
                });
              }}
            />
          </PopoverContent>
        </Popover>
        <FaArrowRight className="size-4 text-primary" />
        <Popover
          open={endOpen}
          onOpenChange={(open) => {
            if (!open) {
              setEndOpen(false);
            }
          }}
        >
          <PopoverTrigger asChild>
            <Button
              className={`text-primary ${isDesktop ? "" : "text-xs"}`}
              aria-label={t("export.time.end.title")}
              variant={endOpen ? "select" : "default"}
              size="sm"
              onClick={() => {
                setEndOpen(true);
                setStartOpen(false);
              }}
            >
              {formattedEnd}
            </Button>
          </PopoverTrigger>
          <PopoverContent className="flex flex-col items-center">
            <TimezoneAwareCalendar
              timezone={config?.ui.timezone}
              selectedDay={new Date(endTime * 1000)}
              onSelect={(day) => {
                if (!day) {
                  return;
                }
                setRange({
                  after: startTime,
                  before: day.getTime() / 1000,
                });
              }}
            />
            <SelectSeparator className="bg-secondary" />
            <input
              className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
              id="startTime"
              type="time"
              value={endClock}
              step={isIOS ? "60" : "1"}
              onChange={(e) => {
                const clock = e.target.value;
                const [hour, minute, second] = isIOS
                  ? [...clock.split(":"), "00"]
                  : clock.split(":");
                const end = new Date(endTime * 1000);
                end.setHours(
                  parseInt(hour),
                  parseInt(minute),
                  parseInt(second ?? 0),
                  0,
                );
                setRange({
                  before: end.getTime() / 1000,
                  after: startTime,
                });
              }}
            />
          </PopoverContent>
        </Popover>
      </div>
    </div>
  );
}


@@ -42,6 +42,7 @@ import {
CommandList,
} from "@/components/ui/command";
import { LuCheck } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";
type SearchFilterDialogProps = {
config?: FrigateConfig;
@@ -64,6 +65,9 @@ export default function SearchFilterDialog({
const { t } = useTranslation(["components/filter"]);
const [currentFilter, setCurrentFilter] = useState(filter ?? {});
const { data: allSubLabels } = useSWR(["sub_labels", { split_joined: 1 }]);
const { data: allRecognizedLicensePlates } = useSWR<string[]>(
"recognized_license_plates",
);
useEffect(() => {
if (filter) {
@@ -130,6 +134,7 @@ export default function SearchFilterDialog({
}
/>
<RecognizedLicensePlatesFilterContent
allRecognizedLicensePlates={allRecognizedLicensePlates}
recognizedLicensePlates={currentFilter.recognized_license_plate}
setRecognizedLicensePlates={(plate) =>
setCurrentFilter({
@@ -875,6 +880,7 @@ export function SnapshotClipFilterContent({
}
type RecognizedLicensePlatesFilterContentProps = {
allRecognizedLicensePlates: string[] | undefined;
recognizedLicensePlates: string[] | undefined;
setRecognizedLicensePlates: (
recognizedLicensePlates: string[] | undefined,
@@ -882,18 +888,12 @@ type RecognizedLicensePlatesFilterContentProps = {
};
export function RecognizedLicensePlatesFilterContent({
allRecognizedLicensePlates,
recognizedLicensePlates,
setRecognizedLicensePlates,
}: RecognizedLicensePlatesFilterContentProps) {
const { t } = useTranslation(["components/filter"]);
const { data: allRecognizedLicensePlates, error } = useSWR<string[]>(
"recognized_license_plates",
{
revalidateOnFocus: false,
},
);
const [selectedRecognizedLicensePlates, setSelectedRecognizedLicensePlates] =
useState<string[]>(recognizedLicensePlates || []);
const [inputValue, setInputValue] = useState("");
@@ -923,7 +923,7 @@ export function RecognizedLicensePlatesFilterContent({
}
};
if (!allRecognizedLicensePlates || allRecognizedLicensePlates.length === 0) {
if (allRecognizedLicensePlates && allRecognizedLicensePlates.length === 0) {
return null;
}
@@ -947,15 +947,12 @@ export function RecognizedLicensePlatesFilterContent({
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="mb-3 text-lg">{t("recognizedLicensePlates.title")}</div>
{error ? (
<p className="text-sm text-red-500">
{t("recognizedLicensePlates.loadFailed")}
</p>
) : !allRecognizedLicensePlates ? (
<p className="text-sm text-muted-foreground">
{t("recognizedLicensePlates.loading")}
</p>
) : (
{allRecognizedLicensePlates == undefined ? (
<div className="flex flex-col items-center justify-center text-sm text-muted-foreground">
<ActivityIndicator className="mb-3 mr-2 size-5" />
<p>{t("recognizedLicensePlates.loading")}</p>
</div>
) : allRecognizedLicensePlates.length == 0 ? null : (
<>
<Command
className="border border-input bg-background"
@@ -1010,11 +1007,11 @@ export function RecognizedLicensePlatesFilterContent({
))}
</div>
)}
<p className="mt-1 text-sm text-muted-foreground">
{t("recognizedLicensePlates.selectPlatesFromList")}
</p>
</>
)}
<p className="mt-1 text-sm text-muted-foreground">
{t("recognizedLicensePlates.selectPlatesFromList")}
</p>
</div>
);
}
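For context, a minimal sketch of how the reshaped props are intended to be wired (component name, prop names, and the SWR route are from the diff; the import path and host component are assumptions): the dialog owns the useSWR request and the filter content only decides between loading, empty, and list states from the prop it is handed.

import useSWR from "swr";
import { RecognizedLicensePlatesFilterContent } from "@/components/filter/SearchFilterDialog";

// Simplified host; the real dialog also passes the current filter selection through.
export function PlatesFilterExample() {
  const { data: allRecognizedLicensePlates } = useSWR<string[]>(
    "recognized_license_plates",
  );

  return (
    <RecognizedLicensePlatesFilterContent
      allRecognizedLicensePlates={allRecognizedLicensePlates} // undefined while loading
      recognizedLicensePlates={undefined}
      setRecognizedLicensePlates={() => {}}
    />
  );
}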


@@ -1,4 +1,10 @@
import React, { useState, useRef, useEffect, useCallback } from "react";
import React, {
useState,
useRef,
useEffect,
useCallback,
useMemo,
} from "react";
import { useVideoDimensions } from "@/hooks/use-video-dimensions";
import HlsVideoPlayer from "./HlsVideoPlayer";
import ActivityIndicator from "../indicators/activity-indicator";
@@ -89,6 +95,12 @@ export function GenericVideoPlayer({
},
);
const hlsSource = useMemo(() => {
return {
playlist: source,
};
}, [source]);
return (
<div ref={containerRef} className="relative flex h-full w-full flex-col">
<div className="relative flex flex-grow items-center justify-center">
@@ -107,7 +119,7 @@ export function GenericVideoPlayer({
>
<HlsVideoPlayer
videoRef={videoRef}
currentSource={source}
currentSource={hlsSource}
hotKeys
visible
frigateControls={false}
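A small sketch of the pattern introduced above (the HlsSource shape comes from this PR's HlsVideoPlayer change below; the useHlsSource helper is only illustrative): memoizing the wrapper object keeps its identity stable across renders, so the player effect keyed on currentSource only tears down and rebuilds the hls.js instance when the playlist or start position actually change.

import { useMemo } from "react";

// Shape added in this PR (see HlsVideoPlayer further down).
interface HlsSource {
  playlist: string;
  startPosition?: number;
}

// Hypothetical helper: without useMemo, a fresh object literal on every render
// would retrigger any effect that lists the source in its dependency array.
export function useHlsSource(playlist: string, startPosition?: number): HlsSource {
  return useMemo(() => ({ playlist, startPosition }), [playlist, startPosition]);
}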


@@ -6,7 +6,7 @@ import {
useState,
} from "react";
import Hls from "hls.js";
import { isAndroid, isDesktop, isMobile } from "react-device-detect";
import { isDesktop, isMobile } from "react-device-detect";
import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
import VideoControls from "./VideoControls";
import { VideoResolutionType } from "@/types/live";
@@ -21,24 +21,29 @@ import { ASPECT_VERTICAL_LAYOUT, RecordingPlayerError } from "@/types/record";
import { useTranslation } from "react-i18next";
// Android native hls does not seek correctly
const USE_NATIVE_HLS = !isAndroid;
const USE_NATIVE_HLS = false;
const HLS_MIME_TYPE = "application/vnd.apple.mpegurl" as const;
const unsupportedErrorCodes = [
MediaError.MEDIA_ERR_SRC_NOT_SUPPORTED,
MediaError.MEDIA_ERR_DECODE,
];
export interface HlsSource {
playlist: string;
startPosition?: number;
}
type HlsVideoPlayerProps = {
videoRef: MutableRefObject<HTMLVideoElement | null>;
containerRef?: React.MutableRefObject<HTMLDivElement | null>;
visible: boolean;
currentSource: string;
currentSource: HlsSource;
hotKeys: boolean;
supportsFullscreen: boolean;
fullscreen: boolean;
frigateControls?: boolean;
inpointOffset?: number;
onClipEnded?: () => void;
onClipEnded?: (currentTime: number) => void;
onPlayerLoaded?: () => void;
onTimeUpdate?: (time: number) => void;
onPlaying?: () => void;
@@ -113,18 +118,28 @@ export default function HlsVideoPlayer({
const currentPlaybackRate = videoRef.current.playbackRate;
if (!useHlsCompat) {
videoRef.current.src = currentSource;
videoRef.current.src = currentSource.playlist;
videoRef.current.load();
return;
}
if (!hlsRef.current) {
hlsRef.current = new Hls();
hlsRef.current.attachMedia(videoRef.current);
}
hlsRef.current.loadSource(currentSource);
hlsRef.current = new Hls({
maxBufferLength: 10,
maxBufferSize: 20 * 1000 * 1000,
startPosition: currentSource.startPosition,
});
hlsRef.current.attachMedia(videoRef.current);
hlsRef.current.loadSource(currentSource.playlist);
videoRef.current.playbackRate = currentPlaybackRate;
return () => {
// we must destroy the hlsRef every time the source changes
// so that we can create a new HLS instance with startPosition
// set at the optimal point in time
if (hlsRef.current) {
hlsRef.current.destroy();
}
};
}, [videoRef, hlsRef, useHlsCompat, currentSource]);
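// Reviewer note (not part of the diff): hls.js takes startPosition as a config
// option on the constructor, so honoring the computed offset means creating a
// fresh Hls instance per source instead of reusing one and calling loadSource()
// again; the cleanup returned above destroys the previous instance. Roughly:
//
//   const hls = new Hls({ startPosition: 42 }); // assumed offset in seconds
//   hls.attachMedia(videoElement);
//   hls.loadSource(playlist);
//   // later, when the source changes: hls.destroy();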
// state handling
@@ -374,7 +389,11 @@ export default function HlsVideoPlayer({
}
}
}}
onEnded={onClipEnded}
onEnded={() => {
if (onClipEnded) {
onClipEnded(getVideoTime() ?? 0);
}
}}
onError={(e) => {
if (
!hlsRef.current &&


@@ -164,7 +164,7 @@ export default function JSMpegPlayer({
statsIntervalRef.current = setInterval(() => {
const currentTimestamp = Date.now();
const timeDiff = (currentTimestamp - lastTimestampRef.current) / 1000; // in seconds
const bitrate = (bytesReceivedRef.current * 8) / timeDiff / 1000; // in kbps
const bitrate = bytesReceivedRef.current / timeDiff / 1000; // in kBps
setStats?.({
streamType: "jsmpeg",


@@ -80,7 +80,7 @@ export default function LivePlayer({
const [stats, setStats] = useState<PlayerStatsType>({
streamType: "-",
bandwidth: 0, // in kbps
bandwidth: 0, // in kBps
latency: undefined, // in seconds
totalFrames: 0,
droppedFrames: undefined,


@@ -338,7 +338,7 @@ function MSEPlayer({
// console.debug("VideoRTC.buffer", b.byteLength, bufLen);
} else {
try {
sb?.appendBuffer(data);
sb?.appendBuffer(data as ArrayBuffer);
} catch (e) {
// no-op
}
@@ -592,7 +592,7 @@ function MSEPlayer({
const now = Date.now();
const bytesLoaded = totalBytesLoaded.current;
const timeElapsed = (now - lastTimestamp) / 1000; // seconds
const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1024; // kbps
const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1000; // kBps
lastLoadedBytes = bytesLoaded;
lastTimestamp = now;
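The player stat hunks above and below switch the reported bandwidth from kilobits to kilobytes per second. A standalone sketch of the conversion with invented sample numbers:

// 512 kB received over a 2 s sampling window.
const bytesReceived = 512_000;
const timeElapsedSeconds = 2;

// New unit: kilobytes per second.
const kBps = bytesReceived / timeElapsedSeconds / 1000; // 256 kBps

// Previous unit for comparison: kilobits per second (x8 for bits).
const kbps = (bytesReceived * 8) / timeElapsedSeconds / 1000; // 2048 kbps

console.log({ kBps, kbps });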


@@ -17,7 +17,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
</p>
<p>
<span className="text-white/70">{t("stats.bandwidth.title")}</span>{" "}
<span className="text-white">{stats.bandwidth.toFixed(2)} kbps</span>
<span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
</p>
{stats.latency != undefined && (
<p>
@@ -66,7 +66,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
</div>
<div className="flex flex-col items-center gap-1">
<span className="text-white/70">{t("stats.bandwidth.short")}</span>{" "}
<span className="text-white">{stats.bandwidth.toFixed(2)} kbps</span>
<span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
</div>
{stats.latency != undefined && (
<div className="hidden flex-col items-center gap-1 md:flex">


@@ -266,7 +266,7 @@ export default function WebRtcPlayer({
const bitrate =
timeDiff > 0
? (bytesReceived - lastBytesReceived) / timeDiff / 1000
: 0; // in kbps
: 0; // in kBps
setStats?.({
streamType: "WebRTC",


@@ -2,7 +2,10 @@ import { Recording } from "@/types/record";
import { DynamicPlayback } from "@/types/playback";
import { PreviewController } from "../PreviewPlayer";
import { TimeRange, ObjectLifecycleSequence } from "@/types/timeline";
import { calculateInpointOffset } from "@/utils/videoUtil";
import {
calculateInpointOffset,
calculateSeekPosition,
} from "@/utils/videoUtil";
type PlayerMode = "playback" | "scrubbing";
@@ -68,38 +71,20 @@ export class DynamicVideoController {
return;
}
    if (this.playerMode != "playback") {
      this.playerMode = "playback";
    }

    const seekSeconds = calculateSeekPosition(
      time,
      this.recordings,
      this.inpointOffset,
    );

    if (seekSeconds === undefined) {
      this.setNoRecording(true);
      return;
    }

    if (seekSeconds != 0) {
      this.playerController.currentTime = seekSeconds;


@@ -6,14 +6,18 @@ import { Recording } from "@/types/record";
import { Preview } from "@/types/preview";
import PreviewPlayer, { PreviewController } from "../PreviewPlayer";
import { DynamicVideoController } from "./DynamicVideoController";
import HlsVideoPlayer from "../HlsVideoPlayer";
import HlsVideoPlayer, { HlsSource } from "../HlsVideoPlayer";
import { TimeRange } from "@/types/timeline";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { VideoResolutionType } from "@/types/live";
import axios from "axios";
import { cn } from "@/lib/utils";
import { useTranslation } from "react-i18next";
import { calculateInpointOffset } from "@/utils/videoUtil";
import {
calculateInpointOffset,
calculateSeekPosition,
} from "@/utils/videoUtil";
import { isFirefox } from "react-device-detect";
/**
* Dynamically switches between video playback and scrubbing preview player.
@@ -98,9 +102,10 @@ export default function DynamicVideoPlayer({
const [isLoading, setIsLoading] = useState(false);
const [isBuffering, setIsBuffering] = useState(false);
const [loadingTimeout, setLoadingTimeout] = useState<NodeJS.Timeout>();
const [source, setSource] = useState(
`${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
);
// Don't set source until recordings load - we need accurate startPosition
// to avoid hls.js clamping to video end when startPosition exceeds duration
const [source, setSource] = useState<HlsSource | undefined>(undefined);
// start at correct time
@@ -172,7 +177,7 @@ export default function DynamicVideoPlayer({
);
useEffect(() => {
if (!controller || !recordings?.length) {
if (!recordings?.length) {
if (recordings?.length == 0) {
setNoRecording(true);
}
@@ -180,13 +185,38 @@ export default function DynamicVideoPlayer({
return;
}
let startPosition = undefined;
if (startTimestamp) {
const inpointOffset = calculateInpointOffset(
recordingParams.after,
(recordings || [])[0],
);
startPosition = calculateSeekPosition(
startTimestamp,
recordings,
inpointOffset,
);
}
setSource({
playlist: `${apiHost}vod/${camera}/start/${recordingParams.after}/end/${recordingParams.before}/master.m3u8`,
startPosition,
});
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [recordings]);
useEffect(() => {
if (!controller || !recordings?.length) {
return;
}
if (playerRef.current) {
playerRef.current.autoplay = !isScrubbing;
}
setSource(
`${apiHost}vod/${camera}/start/${recordingParams.after}/end/${recordingParams.before}/master.m3u8`,
);
setLoadingTimeout(setTimeout(() => setIsLoading(true), 1000));
controller.newPlayback({
@@ -194,7 +224,7 @@ export default function DynamicVideoPlayer({
timeRange,
});
// we only want this to change when recordings update
// we only want this to change when controller or recordings update
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [controller, recordings]);
@@ -203,40 +233,69 @@ export default function DynamicVideoPlayer({
[recordingParams, recordings],
);
const onValidateClipEnd = useCallback(
(currentTime: number) => {
if (!onClipEnded || !controller || !recordings) {
return;
}
if (!isFirefox) {
onClipEnded();
}
// Firefox has a bug where clipEnded can be called prematurely due to buffering
// we need to validate if the current play-point is truly at the end of available recordings
const lastRecordingTime = recordings.at(-1)?.start_time;
if (
!lastRecordingTime ||
controller.getProgress(currentTime) < lastRecordingTime
) {
return;
}
onClipEnded();
},
[onClipEnded, controller, recordings],
);
  return (
    <>
      {source && (
        <HlsVideoPlayer
          videoRef={playerRef}
          containerRef={containerRef}
          visible={!(isScrubbing || isLoading)}
          currentSource={source}
          hotKeys={hotKeys}
          supportsFullscreen={supportsFullscreen}
          fullscreen={fullscreen}
          inpointOffset={inpointOffset}
          onTimeUpdate={onTimeUpdate}
          onPlayerLoaded={onPlayerLoaded}
          onClipEnded={onValidateClipEnd}
          onPlaying={() => {
            if (isScrubbing) {
              playerRef.current?.pause();
            }

            if (loadingTimeout) {
              clearTimeout(loadingTimeout);
            }

            setNoRecording(false);
          }}
          setFullResolution={setFullResolution}
          onUploadFrame={onUploadFrameToPlus}
          toggleFullscreen={toggleFullscreen}
          onError={(error) => {
            if (error == "stalled" && !isScrubbing) {
              setIsBuffering(true);
            }
          }}
        />
      )}
<PreviewPlayer
className={cn(
className,


@@ -1,5 +1,5 @@
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { useCallback, useEffect, useState } from "react";
import { useCallback, useEffect, useState, useMemo } from "react";
import useSWR from "swr";
import { LivePlayerMode, LiveStreamMetadata } from "@/types/live";
@@ -8,9 +8,68 @@ export default function useCameraLiveMode(
windowVisible: boolean,
) {
const { data: config } = useSWR<FrigateConfig>("config");
const { data: allStreamMetadata } = useSWR<{
// Get comma-separated list of restreamed stream names for SWR key
const restreamedStreamsKey = useMemo(() => {
if (!cameras || !config) return null;
const streamNames = new Set<string>();
cameras.forEach((camera) => {
const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
Object.values(camera.live.streams)[0],
);
if (isRestreamed) {
Object.values(camera.live.streams).forEach((streamName) => {
streamNames.add(streamName);
});
}
});
return streamNames.size > 0
? Array.from(streamNames).sort().join(",")
: null;
}, [cameras, config]);
const streamsFetcher = useCallback(async (key: string) => {
const streamNames = key.split(",");
const metadataPromises = streamNames.map(async (streamName) => {
try {
const response = await fetch(`/api/go2rtc/streams/${streamName}`, {
priority: "low",
});
if (response.ok) {
const data = await response.json();
return { streamName, data };
}
return { streamName, data: null };
} catch (error) {
// eslint-disable-next-line no-console
console.error(`Failed to fetch metadata for ${streamName}:`, error);
return { streamName, data: null };
}
});
const results = await Promise.allSettled(metadataPromises);
const metadata: { [key: string]: LiveStreamMetadata } = {};
results.forEach((result) => {
if (result.status === "fulfilled" && result.value.data) {
metadata[result.value.streamName] = result.value.data;
}
});
return metadata;
}, []);
const { data: allStreamMetadata = {} } = useSWR<{
[key: string]: LiveStreamMetadata;
}>(config ? "go2rtc/streams" : null, { revalidateOnFocus: false });
}>(restreamedStreamsKey, streamsFetcher, {
revalidateOnFocus: false,
dedupingInterval: 10000,
});
const [preferredLiveModes, setPreferredLiveModes] = useState<{
[key: string]: LivePlayerMode;


@@ -17,7 +17,7 @@ export function useVideoDimensions(
});
const videoAspectRatio = useMemo(() => {
return videoResolution.width / videoResolution.height;
return videoResolution.width / videoResolution.height || 16 / 9;
}, [videoResolution]);
const containerAspectRatio = useMemo(() => {
@@ -25,8 +25,8 @@ export function useVideoDimensions(
}, [containerWidth, containerHeight]);
const videoDimensions = useMemo(() => {
if (!containerWidth || !containerHeight || !videoAspectRatio)
return { width: "100%", height: "100%" };
if (!containerWidth || !containerHeight)
return { aspectRatio: "16 / 9", width: "100%" };
if (containerAspectRatio > videoAspectRatio) {
const height = containerHeight;
const width = height * videoAspectRatio;
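A quick check of why the `|| 16 / 9` fallback above matters (values invented): before video metadata arrives the reported resolution is 0x0, so the old ratio evaluated to NaN and the hook fell back to the generic 100%/100% sizing.

const videoResolution = { width: 0, height: 0 }; // before loadedmetadata fires

const oldRatio = videoResolution.width / videoResolution.height; // NaN
const newRatio = videoResolution.width / videoResolution.height || 16 / 9; // ~1.78

console.log(oldRatio, newRatio);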


@@ -73,7 +73,11 @@ export default function Settings() {
const isAdmin = useIsAdmin();
const allowedViewsForViewer: SettingsType[] = ["ui", "debug"];
const allowedViewsForViewer: SettingsType[] = [
"ui",
"debug",
"notifications",
];
const visibleSettingsViews = !isAdmin
? allowedViewsForViewer
: allSettingsViews;
@@ -164,7 +168,7 @@ export default function Settings() {
useSearchEffect("page", (page: string) => {
if (allSettingsViews.includes(page as SettingsType)) {
// Restrict viewer to UI settings
if (!isAdmin && !["ui", "debug"].includes(page)) {
if (!isAdmin && !allowedViewsForViewer.includes(page as SettingsType)) {
setPage("ui");
} else {
setPage(page as SettingsType);
@@ -200,7 +204,7 @@ export default function Settings() {
onValueChange={(value: SettingsType) => {
if (value) {
// Restrict viewer navigation
if (!isAdmin && !["ui", "debug"].includes(value)) {
if (!isAdmin && !allowedViewsForViewer.includes(value)) {
setPageToggle("ui");
} else {
setPageToggle(value);
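A condensed sketch of the guard logic above (the requested page value is invented): both the `?page=` deep link and the toggle navigation now consult the same allow-list, so non-admin users are bounced back to the UI settings for any other page, while `notifications` is newly permitted for viewers.

const allowedViewsForViewer = ["ui", "debug", "notifications"];

function resolvePageForViewer(requested: string, isAdmin: boolean): string {
  // Admins can open any settings page; viewers only the allow-listed ones.
  if (!isAdmin && !allowedViewsForViewer.includes(requested)) {
    return "ui";
  }
  return requested;
}

console.log(resolvePageForViewer("cameras", false)); // "ui"
console.log(resolvePageForViewer("notifications", false)); // "notifications"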


@@ -24,3 +24,57 @@ export function calculateInpointOffset(
return 0;
}
/**
* Calculates the video player time (in seconds) for a given timestamp
* by iterating through recording segments and summing their durations.
* This accounts for the fact that the video is a concatenation of segments,
* not a single continuous stream.
*
* @param timestamp - The target timestamp to seek to
* @param recordings - Array of recording segments
* @param inpointOffset - HLS inpoint offset to subtract from the result
* @returns The calculated seek position in seconds, or undefined if timestamp is out of range
*/
export function calculateSeekPosition(
timestamp: number,
recordings: Recording[],
inpointOffset: number = 0,
): number | undefined {
if (!recordings || recordings.length === 0) {
return undefined;
}
// Check if timestamp is within the recordings range
if (
timestamp < recordings[0].start_time ||
timestamp > recordings[recordings.length - 1].end_time
) {
return undefined;
}
let seekSeconds = 0;
(recordings || []).every((segment) => {
// if the next segment is past the desired time, stop calculating
if (segment.start_time > timestamp) {
return false;
}
if (segment.end_time < timestamp) {
// Add the full duration of this segment
seekSeconds += segment.end_time - segment.start_time;
return true;
}
// We're in this segment - calculate position within it
seekSeconds +=
segment.end_time - segment.start_time - (segment.end_time - timestamp);
return true;
});
// Adjust for HLS inpoint offset
seekSeconds -= inpointOffset;
return seekSeconds >= 0 ? seekSeconds : undefined;
}
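A worked example of the helper above (segment times invented, Recording objects trimmed to the two fields the function reads):

import { calculateSeekPosition } from "@/utils/videoUtil";
import type { Recording } from "@/types/record";

const recordings = [
  { start_time: 1000, end_time: 1010 }, // 10 s segment
  { start_time: 1020, end_time: 1030 }, // 10 s gap, then another 10 s segment
] as Recording[];

// 1025 falls 5 s into the second segment, so the player offset is
// 10 s (first segment) + 5 s = 15 s, minus the HLS inpoint offset (0 here).
const seek = calculateSeekPosition(1025, recordings, 0); // 15
console.log(seek);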


@@ -390,7 +390,6 @@ export default function FrigatePlusSettingsView({
className="cursor-pointer"
value={id}
disabled={
model.type != config.model.model_type ||
!model.supportedDetectors.includes(
Object.values(config.detectors)[0]
.type,


@@ -46,6 +46,8 @@ import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
import { Trans, useTranslation } from "react-i18next";
import { useDateLocale } from "@/hooks/use-date-locale";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { cn } from "@/lib/utils";
const NOTIFICATION_SERVICE_WORKER = "notifications-worker.js";
@@ -64,6 +66,10 @@ export default function NotificationView({
const { t } = useTranslation(["views/settings"]);
const { getLocaleDocUrl } = useDocDomain();
// roles
const isAdmin = useIsAdmin();
const { data: config, mutate: updateConfig } = useSWR<FrigateConfig>(
"config",
{
@@ -380,7 +386,11 @@ export default function NotificationView({
<div className="flex size-full flex-col md:flex-row">
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
<div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
<div
className={cn(
isAdmin && "grid w-full grid-cols-1 gap-4 md:grid-cols-2",
)}
>
<div className="col-span-1">
<Heading as="h3" className="my-2">
{t("notification.notificationSettings.title")}
@@ -403,138 +413,151 @@ export default function NotificationView({
</div>
</div>
              {isAdmin && (
                <Form {...form}>
                  <form
                    onSubmit={form.handleSubmit(onSubmit)}
                    className="mt-2 space-y-6"
                  >
                    <FormField
                      control={form.control}
                      name="email"
                      render={({ field }) => (
                        <FormItem>
                          <FormLabel>{t("notification.email.title")}</FormLabel>
                          <FormControl>
                            <Input
                              className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark] md:w-72"
                              placeholder={t("notification.email.placeholder")}
                              {...field}
                            />
                          </FormControl>
                          <FormDescription>
                            {t("notification.email.desc")}
                          </FormDescription>
                          <FormMessage />
                        </FormItem>
                      )}
                    />
                    <FormField
                      control={form.control}
                      name="cameras"
                      render={({ field }) => (
                        <FormItem>
                          {allCameras && allCameras?.length > 0 ? (
                            <>
                              <div className="mb-2">
                                <FormLabel className="flex flex-row items-center text-base">
                                  {t("notification.cameras.title")}
                                </FormLabel>
                              </div>
                              <div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
                                <FormField
                                  control={form.control}
                                  name="allEnabled"
                                  render={({ field }) => (
                                    <FilterSwitch
                                      label={t("cameras.all.title", {
                                        ns: "components/filter",
                                      })}
                                      isChecked={field.value}
                                      onCheckedChange={(checked) => {
                                        setChangedValue(true);
                                        if (checked) {
                                          form.setValue("cameras", []);
                                        }
                                        field.onChange(checked);
                                      }}
                                    />
                                  )}
                                />
                                {allCameras?.map((camera) => (
                                  <FilterSwitch
                                    key={camera.name}
                                    label={camera.name.replaceAll("_", " ")}
                                    isChecked={field.value?.includes(
                                      camera.name,
                                    )}
                                    onCheckedChange={(checked) => {
                                      setChangedValue(true);
                                      let newCameras;
                                      if (checked) {
                                        newCameras = [
                                          ...field.value,
                                          camera.name,
                                        ];
                                      } else {
                                        newCameras = field.value?.filter(
                                          (value) => value !== camera.name,
                                        );
                                      }
                                      field.onChange(newCameras);
                                      form.setValue("allEnabled", false);
                                    }}
                                  />
                                ))}
                              </div>
                            </>
                          ) : (
                            <div className="font-normal text-destructive">
                              {t("notification.cameras.noCameras")}
                            </div>
                          )}
                          <FormMessage />
                          <FormDescription>
                            {t("notification.cameras.desc")}
                          </FormDescription>
                        </FormItem>
                      )}
                    />
                    <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
                      <Button
                        className="flex flex-1"
                        aria-label={t("button.cancel", { ns: "common" })}
                        onClick={onCancel}
                        type="button"
                      >
                        {t("button.cancel", { ns: "common" })}
                      </Button>
                      <Button
                        variant="select"
                        disabled={isLoading}
                        className="flex flex-1"
                        aria-label={t("button.save", { ns: "common" })}
                        type="submit"
                      >
                        {isLoading ? (
                          <div className="flex flex-row items-center gap-2">
                            <ActivityIndicator />
                            <span>{t("button.saving", { ns: "common" })}</span>
                          </div>
                        ) : (
                          t("button.save", { ns: "common" })
                        )}
                      </Button>
                    </div>
                  </form>
                </Form>
              )}
</div>
<div className="col-span-1">
<div className="mt-4 gap-2 space-y-6">
<div className="flex flex-col gap-2 md:max-w-[50%]">
<Separator className="my-2 flex bg-secondary md:hidden" />
<Heading as="h4" className="my-2">
<div
className={cn(
isAdmin && "flex flex-col gap-2 md:max-w-[50%]",
)}
>
<Separator
className={cn(
"my-2 flex bg-secondary",
isAdmin && "md:hidden",
)}
/>
<Heading as="h4" className={cn(isAdmin ? "my-2" : "my-4")}>
{t("notification.deviceSpecific")}
</Heading>
<Button
@@ -580,7 +603,7 @@ export default function NotificationView({
? t("notification.unregisterDevice")
: t("notification.registerDevice")}
</Button>
{registration != null && registration.active && (
{isAdmin && registration != null && registration.active && (
<Button
aria-label={t("notification.sendTestNotification")}
onClick={() => sendTestNotification("notification_test")}
@@ -590,7 +613,7 @@ export default function NotificationView({
)}
</div>
</div>
{notificationCameras.length > 0 && (
{isAdmin && notificationCameras.length > 0 && (
<div className="mt-4 gap-2 space-y-6">
<div className="space-y-3">
<Separator className="my-2 flex bg-secondary" />