Compare commits

...

212 Commits

Author SHA1 Message Date
dependabot[bot]
0b2bfe655c Bump python-multipart from 0.0.12 to 0.0.20 in /docker/main
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.12 to 0.0.20.
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.12...0.0.20)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-11 23:48:15 +00:00
dependabot[bot]
c6bed1e108 Update markupsafe requirement from ==2.1.* to ==3.0.* in /docker/main (#14213)
Updates the requirements on [markupsafe](https://github.com/pallets/markupsafe) to permit the latest version.
- [Release notes](https://github.com/pallets/markupsafe/releases)
- [Changelog](https://github.com/pallets/markupsafe/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/markupsafe/compare/2.1.0...3.0.0)

---
updated-dependencies:
- dependency-name: markupsafe
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-03-11 17:46:47 -06:00
OmriAx
7411a8bafa Hailo Official integration (#16906)
* Adding Models

* Final Async Update

* Bug Fixing

* Fix

* Adding fixes

* Working async inference

* Final documentation and debug update

* Removing some extra prints

* Post-process correct label push

* config docs fix

* Review Fix

* Review fix 2.0

* Fix the async API, reducing inference time from 30ms to 10ms

* Fix for multi-stream async inference

* Format

* Fix #3

* Format #2

* Remove unnecessary includes

* Sort Imports
2025-03-11 13:36:07 -06:00
Nicolas Mowen
300f85720c Face blur factor (#17099)
* Add option to apply factor to face blurring

* Adjust blur factors

* Add debug log
2025-03-11 14:18:43 -05:00
Josh Hawkins
c9382f8969 Add docs for user roles (#17093)
* Add docs for user roles

* header mapping docs
2025-03-11 08:05:42 -06:00
Nicolas Mowen
c43092da9a Add self return for pydantic (#17091) 2025-03-11 07:57:00 -05:00
MkSavin
bea6cf29c2 fix(auth): Added trimming to jwt secret token read from .jwt_secret (#16467)
Added trimming of leading and trailing spaces and special characters when reading the secret token from a `.jwt_secret` file
2025-03-10 16:36:43 -06:00
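
A minimal sketch of the cleanup this commit describes, assuming the secret is read from a plain text file; the function name and path are illustrative, not Frigate's actual code:

```python
from pathlib import Path


def read_jwt_secret(secret_file: Path) -> str:
    """Read the JWT secret and strip stray whitespace.

    A trailing newline or space (easy to introduce when creating the file
    by hand) would otherwise become part of the signing key and silently
    invalidate every issued token.
    """
    raw = secret_file.read_text()
    return raw.strip()  # removes leading/trailing spaces, tabs, and newlines


# Illustrative usage; the real path depends on the deployment:
# secret = read_jwt_secret(Path("/config/.jwt_secret"))
```
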
Nicolas Mowen
0cc5d66e5b Refactor sub label api (#17079)
* Use event metadata updater to handle sub label operations

* Use event metadata publisher for sub label setting

* Formatting

* fix tests

* Cleanup
2025-03-10 16:29:29 -06:00
Nicolas Mowen
7d44970f78 Face multi select (#17068)
* Implement multi select for face library

* Clear list of selected

* Add keyboard shortcut
2025-03-10 10:01:52 -05:00
Josh Hawkins
2be5225440 More auth role fixes (#17067)
* simplify check and handle comma separated roles

* spacing
2025-03-10 10:00:35 -05:00
Josh Hawkins
cb25bd4a88 Auth role bugfixes (#17066)
* get correct role from header map

* fix profile endpoint
2025-03-10 07:59:24 -06:00
Hieu LE
b72afb6895 Fix ffmpeg cannot start because of loading shared lib (#16846)
* Fix #16845

It appears that after PR #16712, the ffmpeg build for JP6 is broken with the error `/usr/lib/ffmpeg/jetson/bin/ffmpeg: error while loading shared libraries: libavdevice.so.60: cannot open shared object file: No such file or directory`.

This PR fixes the issue

* Adding new LD entry for ffmpeg new location

* Update Dockerfile.arm64

* Move LD config to Dockerfile arm64 instead of detector
2025-03-10 06:54:55 -06:00
Josh Hawkins
1c6700c688 Ensure audio listener is defined before trying to stop ffmpeg (#17045) 2025-03-09 22:01:18 -06:00
Josh Hawkins
9f9117c318 Ensure admin is default role (#17044) 2025-03-09 21:59:07 -06:00
Josh Hawkins
1e0295fad5 LPR tweaks (#17046) 2025-03-09 21:58:31 -06:00
Josh Hawkins
95b5854449 Small UI bugfix (#17035)
* test for more HA elements

* check if mobile and iOS instead of mobilesafari

* simplify

* fix for logs view
2025-03-09 07:47:10 -06:00
Josh Hawkins
cf3c0b2eb5 Prevent settings menu scroll on iOS proxy iframe from shifting entire UI (#17024) 2025-03-08 10:13:07 -06:00
Josh Hawkins
74ca009b0b UI viewer role (#16978)
* db migration

* db model

* assign admin role on password reset

* add role to jwt and api responses

* don't restrict api access for admins yet

* use json response

* frontend auth context

* update auth form for profile endpoint

* add access denied page

* add protected routes

* auth hook

* dialogs

* user settings view

* restrict viewer access to settings

* restrict camera functions for viewer role

* add password dialog to account menu

* spacing tweak

* migrator default to admin

* escape quotes in migrator

* ui tweaks

* tweaks

* colors

* colors

* fix merge conflict

* fix icons

* add api layer enforcement

* ui tweaks

* fix error message

* debug

* clean up

* remove print

* guard apis for admin only

* fix tests

* fix review tests

* use correct error responses from api in toasts

* add role to account menu
2025-03-08 10:01:08 -06:00
Nicolas Mowen
6f9d9cd5a8 Fix yolov9 link (#17007) 2025-03-07 08:50:04 -06:00
Nicolas Mowen
09705fd1b4 Update python deps (#17006) 2025-03-07 06:54:53 -07:00
dependabot[bot]
d831fab381 Bump actions/setup-python from 5.3.0 to 5.4.0 (#16184)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.3.0...v5.4.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 06:47:15 -07:00
Josh Hawkins
0e3e2e5ccc Add cameras filter to history view (#16995) 2025-03-06 19:00:15 -07:00
Chris Oelerich
f81bab8895 video['global'] can be empty resulting in a divide by zero (#16993)
* video['global'] can be empty, resulting in a division by zero

* formatting :(
2025-03-06 16:21:16 -06:00
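
The fix amounts to guarding an average computed over `video['global']`, which can be an empty list. A hedged sketch of the pattern; the surrounding stats structure here is an assumption of this sketch:

```python
def average_value(video_stats: dict) -> float:
    """Return the mean of video_stats["global"], or 0.0 when empty.

    Dividing by len(samples) without the guard raises ZeroDivisionError
    whenever the stats source returns no samples.
    """
    samples = video_stats.get("global") or []
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```
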
Nicolas Mowen
433da8ffce Update web deps (#16983)
* Update vite

* Update LuIcons

* Update radix packages

* Fix other icons

* Use correct node version

* Remove superfluous web build on python tests

* Move web build to test
2025-03-06 10:50:37 -06:00
Nicolas Mowen
30acd26898 Disable detection by default (#16980)
* Enable detection by default

* Migrate config to have detect enabled if it is not
2025-03-06 08:00:29 -06:00
Nicolas Mowen
66d5f4f3b8 Refactor enabled camera listeners (#16979)
* Monitor if camera is disabled for review items

* Simplify multi camera disabled check

* Cleanup birdseye config handling

* Cleanup

* Remove old listeners
2025-03-06 07:59:35 -06:00
Nicolas Mowen
73c2c34127 Fix previews failing when disabled (#16962)
* Fix previews failing when offline

* Simplify frame cache handling
2025-03-05 08:07:48 -06:00
Josh Hawkins
ad0e89e147 Ensure disabling a camera also disables audio detection (#16961)
* Ensure disabling a camera also disables audio detection

* fix enabled state

* fix path
2025-03-05 06:30:23 -07:00
Josh Hawkins
73f8c97d1d Docs updates (#16949)
* live and lpr docs updates

* disabled clarity

* more disable clarity

* clarify sync_recordings
2025-03-04 17:29:11 -07:00
jdryden572
389c707ad2 Orient live camera feed for best screen fit when in fullscreen mode (#16947)
* Change orientation in fullscreen to best fit video

* Refactor effect to simplify, add more comments
2025-03-04 14:30:34 -06:00
leccelecce
c23653338f GenAI: allow configuring additional send trigger after_significant_updates as well as event_end (#16919) 2025-03-04 09:23:51 -07:00
Josh Hawkins
76c35307b2 Ensure genai thumbnails are always jpegs (#16939) 2025-03-04 07:34:19 -07:00
dxs-dev
92422d853d Fixed an issue where internal context copies occurred frequently. (#16931)
remove cache mount in nginx build

Co-authored-by: Ludis Hur <ludishur@dxsolution.kr>
2025-03-04 06:19:40 -07:00
Josh Hawkins
5210d8c0a2 Add camera enable switch to mobile drawer (#16929) 2025-03-03 17:41:28 -07:00
Nicolas Mowen
56079d080d Quick fix (#16926)
* fix

* Fix

* Fix incorrect default websocket value

* Cleanup value setting
2025-03-03 17:28:34 -06:00
Nicolas Mowen
2946c935ee Disabled camera output (#16920)
* Fix live cameras not showing on refresh

* Fix live dashboard when birdseye is added

* Handle cameras that are offline / disabled

* Use black instead of green frame

* Fix missing mqtt topics
2025-03-03 15:05:49 -06:00
D34DC3N73R
180b0af3c9 Adapt openai.py to work with xAI (#16903)
* Adapt openai.py to work with xAI

It appears xAI is a bit more strict with regard to how the prompt is sent. This changes the prompt to be a dictionary with `"type": "text"`, which works with both OpenAI and xAI.

* Adapt openai.py to work with xAI

add "detail": "low"

* Adapt openai.py to work with xAI

Apply Ruff formatting and linting fixes
2025-03-03 12:53:24 -07:00
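
A sketch of the message shape the commit describes, using the OpenAI-compatible chat format: the text prompt becomes a `{"type": "text"}` content part, and the image part carries `"detail": "low"`. Variable values are placeholders:

```python
prompt = "Describe the tracked object."  # illustrative
base64_image = "...base64-encoded JPEG..."  # illustrative placeholder

message = {
    "role": "user",
    "content": [
        # Dict form with an explicit type, accepted by both OpenAI and xAI;
        # xAI rejects the looser prompt form that OpenAI tolerates.
        {"type": "text", "text": prompt},
        {
            "type": "image_url",
            "image_url": {
                "url": f"data:image/jpeg;base64,{base64_image}",
                "detail": "low",
            },
        },
    ],
}
```
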
leccelecce
f3765bc391 GenAI minor refactor (#16916) 2025-03-03 11:01:02 -06:00
Josh Hawkins
531042467a Dynamically enable/disable cameras (#16894)
* config options

* metrics

* stop and restart ffmpeg processes

* dispatcher

* frontend websocket

* buttons for testing

* don't recreate log pipe

* add/remove cam from birdseye when enabling/disabling

* end all objects and send empty camera activity

* enable/disable switch in ui

* disable buttons when camera is disabled

* use enabled_in_config for some frontend checks

* tweaks

* handle settings pane with disabled cameras

* frontend tweaks

* change to debug log

* mqtt docs

* tweak

* ensure all ffmpeg processes are initially started

* clean up

* use zmq

* remove camera metrics

* remove camera metrics

* tweaks

* frontend tweaks
2025-03-03 08:30:52 -07:00
Nicolas Mowen
71e6e04d77 Remove rocm detector (#16913)
* Remove rocm detector plugin

* Update docs to recommend using onnx for rocm

* Formatting
2025-03-03 08:16:14 -06:00
Nicolas Mowen
0128ec2ba6 Upgrade RocM to 6.3.3 (#16900)
* Simplify rocm install and update to 6.3.1

* Build out more necessary packages

* Update to 6.3.3

* Set bake version

* Fix typo

* Ensure NHWC is used

* Reset dev changes

* Write to cache
2025-03-02 21:46:46 -06:00
Nicolas Mowen
b8f4cb5435 Fix docs (#16889) 2025-03-02 09:30:18 -07:00
Nicolas Mowen
4e03efaba9 Disable hailort log (#16888) 2025-03-02 09:26:59 -06:00
Nicolas Mowen
f56668e467 Update d-fine documentation (#16881) 2025-03-01 17:09:41 -06:00
Martin Weinelt
458134de5d Reuse constants (#16874) 2025-02-28 21:35:09 -07:00
Nicolas Mowen
06d6e21de8 Fix cuda targetarch (#16869) 2025-02-28 14:48:08 -06:00
Josh Hawkins
8d2f461350 Embeddings tweaks (#16864)
* make semantic search optional

* config

* frontend metrics

* docs

* tweak

* fixes

* also check genai cameras for embeddings context
2025-02-28 11:43:08 -07:00
Nicolas Mowen
db4152c4ca Fix jetson (#16854)
* Fix jetson build

* Update ci.yml

* Update Dockerfile.base

* Update Dockerfile.base

* Update Dockerfile.base

* Fix

* Update ci.yml
2025-02-27 17:24:03 -06:00
Jared
f221a7ae74 Quality of life documentation updates (#16852)
* Update getting_started with full host:container syntax for hwacc

* Update edgetpu.md

Add a tip that the Coral TPU does not change identification until after Frigate runs an inference on it.
2025-02-27 09:45:32 -07:00
toperichvania
2b7b5e3f08 Fix incorrect storage usage per camera (#16825) (#16851) 2025-02-27 08:28:53 -07:00
Nicolas Mowen
4f855f82ea Simplify tensorrt (#16835)
* Remove unnecessary TRT wheels build

* Cleanup

* Try without local cuda

* Keep specific cuda libs only

* Cleanup

* Add newer libcufft

* remove target

* Include more
2025-02-26 14:39:19 -06:00
Josh Hawkins
d0e9bcbfdc Add ability to use Jina CLIP V2 for semantic search (#16826)
* add wheels

* move extra index url to bottom

* config model option

* add postprocess

* fix config

* jina v2 embedding class

* use jina v2 in embeddings

* fix ov inference

* frontend

* update reference config

* revert device

* fix truncation

* return np tensors

* use correct embeddings from inference

* manual preprocess

* clean up

* docs

* lower batch size for v2 only

* docs clarity

* wording
2025-02-26 07:58:25 -07:00
Josh Hawkins
447f26e1b9 Fix lpr metrics and add yolov9 plate detection metric (#16827) 2025-02-26 07:29:34 -07:00
Nicolas Mowen
7eb3c87fa0 UI tweaks (#16813)
* Add escape to close review details

* Refresh review page automatically if there are currently no items to review
2025-02-25 19:17:39 -06:00
Nicolas Mowen
7ce1b354cc Use native arm runner for arm docker builds (#16804)
* Try building jetpack on latest ubuntu version

* Update ci.yml

* run natively on arm

* Run all arm builds using arm runner

* Update ci.yml
2025-02-25 11:02:56 -06:00
Jason Hunter
0de928703f Initial implementation of D-FINE model via ONNX (#16772)
* initial implementation of D-FINE model

* revert docker-compose

* add docs for D-FINE

* remove weird auto-format issue
2025-02-24 08:56:01 -07:00
Josh Hawkins
1d8f1bd7ae Ensure sub label is null when submitting an empty string (#16779)
* null sub_label when submitting an empty string

* prevent cancel from submitting form

* fix test
2025-02-24 07:02:36 -07:00
Josh Hawkins
9414e001f3 Edit sub labels from the UI (#16764)
* Add ability to edit sub labels from tracked object detail dialog

* add allowEmpty prop

* use TextEntryDialog

* clean up

* text consistency
2025-02-23 16:56:48 -07:00
Tibladar
04a718dda8 Docs: Fix broken shm calculation (#16755)
* Docs: Fix broken shm calculation

* Docs: Change wording of shm template
2025-02-23 11:44:41 -07:00
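
For reference, the calculation the docs describe takes the detect resolution plus a fixed per-camera overhead. A sketch with constants matching the published formula; they are version-specific and best re-checked against the current docs:

```python
def min_shm_mb(width: int, height: int, cameras: int = 1) -> float:
    """Approximate minimum /dev/shm size (MB) for Frigate's frame cache.

    Each camera buffers roughly 20 YUV420 frames at 1.5 bytes per pixel,
    plus a fixed overhead. The constants are version-specific assumptions.
    """
    bytes_per_camera = width * height * 1.5 * 20 + 270480
    return round(cameras * bytes_per_camera / 1048576, 2)


print(min_shm_mb(1280, 720))  # ~26.63 for a single 720p detect stream
```
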
Josh Hawkins
202b9d1c79 Check websocket correctly when no cameras are enabled/defined (#16762) 2025-02-23 11:11:18 -07:00
Josh Hawkins
22cbf74dc8 Fix frigate log deduplication (#16759) 2025-02-23 06:25:50 -07:00
Josh Hawkins
71f1ea86d2 Add note for notifications on iOS devices (#16744) 2025-02-22 09:19:37 -06:00
Nicolas Mowen
844ee089d8 Fix preview fetch (#16741) 2025-02-22 09:04:09 -06:00
Josh Hawkins
b3c1b21f80 Don't require model_runner for realtime processors (#16728) 2025-02-21 13:26:03 -07:00
Josh Hawkins
60b34bcfca Refactor processors and add LPR postprocessing (#16722)
* recordings data pub/sub

* function to process recording stream frames

* model runner

* lpr model runner

* refactor to mixin class and use model runner

* separate out realtime and post processors

* move model and mixin folders

* basic postprocessor

* clean up

* docs

* postprocessing logic

* clean up

* return none if recordings are disabled

* run postprocessor handle_requests too

* tweak expansion

* add put endpoint

* postprocessor tweaks with endpoint
2025-02-21 06:51:37 -07:00
Felipe Santos
e773d63c16 Improve ffmpeg versions handling (#16712)
* Improve ffmpeg versions handling

* Remove fallback from LIBAVFORMAT_VERSION_MAJOR, it should always be set

* Mention ffprobe in custom ffmpeg docs

* Fix ffmpeg extraction

* Fix go2rtc example formatting

* Add fallback back to LIBAVFORMAT_VERSION_MAJOR

* Fix linter
2025-02-20 18:07:41 -07:00
Nicolas Mowen
6e8e88692c Fix coral dep (#16713) 2025-02-20 17:50:50 -07:00
Nicolas Mowen
391ac70e60 Fix nullable thumbnail (#16706) 2025-02-20 12:19:16 -06:00
Nicolas Mowen
c736b1dae5 Refactor ONNX embedding class to use a base class and type-specific classes (#16703)
* Move onnx runner

* Build out base embedding

* Convert text embedding to separate class

* Move image embedding to separate

* Move LPR to separate class

* Remove mono embedding

* Simplify model downloading

* Reorganize jina v1 embeddings

* Cleanup

* Cleanup for review
2025-02-20 10:17:07 -06:00
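
The refactor follows a familiar shape: shared ONNX session handling lives in a base class, and each modality implements only its own input preparation. A hedged sketch of that structure; class and method names are illustrative, not Frigate's actual API:

```python
from abc import ABC, abstractmethod

import numpy as np


class BaseEmbedding(ABC):
    """Shared runner handling; subclasses define preprocessing only."""

    def __init__(self, runner) -> None:
        self.runner = runner  # e.g. wraps an onnxruntime InferenceSession

    @abstractmethod
    def preprocess(self, raw) -> dict[str, np.ndarray]:
        """Turn raw input (text, image, plate crop) into named ONNX inputs."""

    def __call__(self, raw) -> np.ndarray:
        return self.runner.run(self.preprocess(raw))


class TextEmbedding(BaseEmbedding):
    def __init__(self, runner, tokenizer) -> None:
        super().__init__(runner)
        self.tokenizer = tokenizer

    def preprocess(self, text: str) -> dict[str, np.ndarray]:
        # Illustrative: tokenizer returns a list of token ids.
        return {"input_ids": np.asarray([self.tokenizer(text)], dtype=np.int64)}
```
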
Nicolas Mowen
649e5cfda5 Fix rockchip docker build (#16705) 2025-02-20 10:04:29 -06:00
Nicolas Mowen
c7db0f479d Only match on exact config name in some cases (#16704)
* Only match on exact config name in some cases

* Cleanup
2025-02-20 10:04:06 -06:00
Lander Noterman
94a9dcede8 JetPack 6: fix BASE_HOOK (#16701) 2025-02-20 05:29:22 -07:00
Josh Hawkins
0083d09a8b Bugfix: ensure all object labels are added to zones (#16686) 2025-02-19 06:25:39 -07:00
Josh Hawkins
b34fb8bf7c Improve LPR and speed zone docs (#16685)
* Improve lpr and speed zone docs

* clarify

* clarify
2025-02-19 07:19:41 -06:00
Nicolas Mowen
0f481546cd Fix build (#16683)
* Add necessary tensorrt bindings

* Include both versions of cudnn

* Update Dockerfile
2025-02-19 05:54:02 -07:00
Lander Noterman
c240512ef4 fix trt model prepare on jp6 (#16679) 2025-02-19 04:52:35 -07:00
Josh Hawkins
2b3ab02ebf object path plotter per camera with time selection dropdown (#16676) 2025-02-18 20:55:16 -07:00
KastB
7abf28bcbc fix syntax error: missing space (#15954) 2025-02-18 20:51:08 -07:00
Josh Hawkins
2277a88f4d Ensure range is undefined when canceling an export (#16673) 2025-02-18 11:50:32 -07:00
Nicolas Mowen
ab797c95af Remove jp4 build and add notes for jp6 (#16670) 2025-02-18 12:20:35 -06:00
Nicolas Mowen
c819961cc8 Update pull request template to suggest creating a discussion (#16669)
* Update pull request template to suggest creating a discussion for large changes

* Update template

* Ignore github changes
2025-02-18 10:17:18 -06:00
Nicolas Mowen
81b873dead Quick fixes (#16668)
* Write thumbnails to disk on shutdown

* Update text on face library page
2025-02-18 10:13:28 -06:00
Josh Hawkins
b6db97d313 Add metadata title field to exports (#16664) 2025-02-18 08:48:03 -06:00
Nicolas Mowen
f49a8009ec Remove thumb from database field (#16647)
* Remove thumbnail from dict

* Create thumbnail directory

* Cleanup handling of tracked object images

* Make thumbnail optional

* Handle cases where thumbnail is used

* Expand options for thumbnail api

* Fix up the does not exist condition

* Remove absolute usages of thumbnails

* Write thumbnails for external events

* Reduce webp quality

* Use webp everywhere in frontend

* Formatting

* Always consider all events when re-indexing

* Add thumbnail deletion and cleanup path management

* Cleanup imports

* Rename def

* Don't save thumbnail for every object

* Correct event count

* Use correct function

* Include thumbnail in query

* Remove unused

* Fix requiring exception
2025-02-18 07:46:29 -07:00
Lander Noterman
5bd412071a Add support for JetPack 6 (#16571)
* it builds!

* some fixes

* use python 3.11 (rc1)

* add deadsnakes ppa for more recent python 3.11 in jetson images

* fix pip stuff

* revert to tensor 8.6

* revert changes to docker/main

* add hook to install deadsnakes ppa for tensorrt/jetson

* remove unnecessary pip break-system-packages

* move tflite_runtime to requirements-wheels.txt
2025-02-18 07:38:07 -07:00
Josh Hawkins
1d3de77f63 Reorganize Lifecycle components (#16663)
* reorganize lifecycle components

* clean up
2025-02-18 07:17:51 -07:00
Josh Hawkins
4f88a5f2ad Object Lifecycle tweaks (#16648)
* Disable object path and add warning for autotracking cameras

* clean up
2025-02-17 16:03:51 -07:00
Ashton Johnson
11b1dbf0ff Update contributing.md (#16641)
Added a note to include the buildx plugin with Docker
2025-02-17 15:04:35 -07:00
Josh Hawkins
b961235187 Tracking fixes (#16645)
* use config enabled check for ptz cam tracker

* ensure we have an object match before accessing score history

* add comment for clarity
2025-02-17 14:49:24 -07:00
Josh Hawkins
124d92daa9 Improve Object Lifecycle pane (#16635)
* add path point tracking to backend

* types

* draw paths on lifecycle pane

* make points clickable

* don't display a path if we don't have any saved path points

* only object lifecycle points should have a click handler

* change to debug log

* better debug log message
2025-02-17 09:37:17 -07:00
Josh Hawkins
3f07d2d37c Improve notifications (#16632)
* add notification cooldown

* cooldown docs

* show alert box when notifications are used in an insecure context

* add ability to suspend notifications from dashboard context menu
2025-02-17 07:19:03 -07:00
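
A minimal sketch of a per-camera cooldown like the one described, assuming a monotonic clock; the names are illustrative:

```python
import time


class NotificationCooldown:
    """Drop notifications arriving within `cooldown` seconds of the last
    one sent for the same camera."""

    def __init__(self, cooldown: float) -> None:
        self.cooldown = cooldown
        self.last_sent: dict[str, float] = {}

    def should_send(self, camera: str) -> bool:
        now = time.monotonic()
        last = self.last_sent.get(camera)
        if last is not None and now - last < self.cooldown:
            return False  # still cooling down for this camera
        self.last_sent[camera] = now
        return True
```
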
Nicolas Mowen
1e709f5b3f Update coral deps and remove unused pycoral (#16630) 2025-02-17 06:57:05 -07:00
Nicolas Mowen
7b7387a68c Update info for face recognition (#16629) 2025-02-17 07:38:29 -06:00
Mitch Ross
2020cdffd5 Fix prometheus client exporter (#16620)
* wip

* wip

* put it back

* formatter

* Delete hailort.log

* Delete hailort.log

* lint

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-17 06:17:15 -07:00
Felipe Santos
349abc8d1b Remove internal docker container IP from WebRTC candidates (#16618) 2025-02-16 21:24:13 -07:00
Lukas Wolfsteiner
92553fa666 fix: Missing link to go2rtc's logging doc (#16606)
Updated the link to the go2rtc docs for logging configuration.
2025-02-15 19:57:59 -06:00
Josh Hawkins
5264a18dfa Small fixes and docs tweaks (#16595)
* don't clear url params if we're creating a new object mask

* use correct threshold var in debug log

* docs tweak for mjpeg cameras
2025-02-15 06:56:45 -07:00
Josh Hawkins
6bb1a5dfd2 fix renaming exports with a slash (#16588) 2025-02-14 19:18:14 -07:00
Josh Hawkins
7b3556e4ad ensure we copy current frame to prevent a segfault crash (#16591) 2025-02-14 18:53:07 -06:00
Josh Hawkins
9a07505075 More LPR improvements (#16587)
* define a format option and adjust thresholds

* config updates

* docs

* docs clarity
2025-02-14 15:12:36 -07:00
Josh Hawkins
0b65137831 Return 404 for non-existent vod module media (#16586)
* Check if video source exists before showing player

* add comment

* also check 404

* language

* return 404 with vod module
2025-02-14 12:05:05 -07:00
Nicolas Mowen
761c5109dc Update nvidia driver req (#16560) 2025-02-13 17:18:38 -06:00
Josh Hawkins
729f5c0833 LPR improvements (#16559)
* use a small yolov9 model for detection

* use yolov9 for users without frigate+ and update retention algorithm

* new lpr config fields

* levenshtein distance package

* tweaks

* docs
2025-02-13 16:08:56 -07:00
Nicolas Mowen
f7199f205f Update docs for pcie coral driver (#16548) 2025-02-13 11:00:34 -06:00
Josh Hawkins
d6b5dc93cc Fix streaming dialog and use less text on register button (#16518) 2025-02-12 06:16:32 -07:00
Josh Hawkins
11baf237bc Ensure all streaming settings are saved correctly on mobile (#16511)
* Ensure streaming settings are saved correctly on mobile

* remove extra check
2025-02-11 16:49:22 -07:00
Nicolas Mowen
73fee6372b Remove obsolete event clip logic (#16504)
* Remove obsolete event clip logic

* Formatting
2025-02-11 15:16:10 -06:00
Josh Hawkins
2458f667c4 Refactor lpr into real time data processor (#16497) 2025-02-11 13:45:13 -07:00
Josh Hawkins
f3e2cf0a58 Small fix and docs update (#16499)
* Small docs tweak and bugfix

* don't remove page arg either
2025-02-11 13:23:41 -07:00
Nicolas Mowen
0f0b2687af Add support for YoloV9 to OpenVINO (#16495)
* Add support for yolov9 to OpenVINO

* Cleanup detector docs

* Fix link
2025-02-11 11:23:19 -07:00
Josh Hawkins
a3ede3cf8a Snap points to edges and create object mask from bounding box (#16488) 2025-02-11 09:08:28 -07:00
Nicolas Mowen
b594f198a9 Consolidate HailoRT into the main Docker Image (#16487)
* Simplify main build to include hailo

* Update docs

* Remove hailo docker build
2025-02-11 09:08:13 -07:00
Josh Hawkins
4ef6214029 Tracking improvements (#16484)
* norfair tracker config per object type

* change default R back to 3.4

* separate trackers for static and autotracking cameras

* tweak params and fix debug draw

* ensure all trackers are correctly updated even when there are no detections

* basic reid with histograms

* check mp value

* check mp value again

* stationary objects won't have embeddings

* don't switch trackers when autotracking is toggled after startup

* improve motion detection during autotracking

* use helper function

* get histogram in tracker instead of detect
2025-02-11 09:37:58 -06:00
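
The "basic reid with histograms" bullet maps onto a standard OpenCV pattern: compare color histograms of two object crops and treat high correlation as the same object. A sketch under that assumption; this is not Frigate's exact implementation:

```python
import cv2
import numpy as np


def hist_similarity(crop_a: np.ndarray, crop_b: np.ndarray) -> float:
    """Histogram-based re-identification score in [-1, 1]; values near
    1.0 mean the two BGR crops have very similar color distributions."""

    def hsv_hist(crop: np.ndarray) -> np.ndarray:
        hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
        # 2-D histogram over hue and saturation channels.
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    return cv2.compareHist(hsv_hist(crop_a), hsv_hist(crop_b), cv2.HISTCMP_CORREL)
```
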
Josh Hawkins
82f8694464 Toggle review alerts and detections (#16482)
* backend

* frontend

* docs

* fix topic name and initial websocket state

* update reference config

* fix mqtt docs

* fix initial topics

* don't apply max severity when alerts/detections are disabled

* fix ws merge

* tweaks
2025-02-11 07:46:25 -07:00
Josh Hawkins
c54259ecc6 use persistence for hls player muting (#16481) 2025-02-11 06:56:15 -07:00
Josh Hawkins
7e48b3514c remove extraneous print from recordings summary code (#16468) 2025-02-11 05:19:54 -07:00
Josh Hawkins
f0270c6e34 fix non-awaited onvif calls (#16469) 2025-02-11 05:19:20 -07:00
Nicolas Mowen
ac3dfbc30d Set stop event first (#16466) 2025-02-10 21:22:33 -06:00
Josh Hawkins
9a0211a71c Improve Notifications (#16453)
* backend

* frontend

* add notification config at camera level

* camera level notifications in dispatcher

* initial onconnect

* frontend

* backend for suspended notifications

* frontend

* use base communicator

* initialize all cameras in suspended array and use 0 for unsuspended

* remove switch and use select for suspending in frontend

* use timestamp instead of datetime

* frontend tweaks

* mqtt docs

* fix button width

* use grid for layout

* use thread and queue for processing notifications with 10s timeout

* clean up

* move async code to main class

* tweaks

* docs

* remove warning message
2025-02-10 19:47:15 -07:00
Nicolas Mowen
198d067e25 Implement support for YOLOv9 via ONNX (#16459)
* WIP yolov9

* Implement post processing for yolov9

* Cleanup detection

* Update docs to make note of supported yolov9

* Move post processing to separate utility

* Add note about other models
2025-02-10 15:00:12 -06:00
Josh Hawkins
72209986b6 Estimated object speed for zones (#16452)
* utility functions

* backend config

* backend object speed tracking

* draw speed on debug view

* basic frontend zone editor

* remove line sorting

* fix types

* highlight line on canvas when entering value in zone edit pane

* rename vars and add validation

* ensure speed estimation is disabled when user adds more than 4 points

* pixel velocity in debug

* unit_system in config

* ability to define unit system in config

* save max speed to db

* frontend

* docs

* clarify docs

* utility functions

* backend config

* backend object speed tracking

* draw speed on debug view

* basic frontend zone editor

* remove line sorting

* fix types

* highlight line on canvas when entering value in zone edit pane

* rename vars and add validation

* ensure speed estimation is disabled when user adds more than 4 points

* pixel velocity in debug

* unit_system in config

* ability to define unit system in config

* save max speed to db

* frontend

* docs

* clarify docs

* fix duplicates from merge

* include max_estimated_speed in api responses

* add units to zone edit pane

* catch undefined

* add average speed

* clarify docs

* only track average speed when object is active

* rename vars

* ensure points and distances are ordered clockwise

* only store the last 10 speeds like score history

* remove max estimated speed

* update docs

* update docs

* fix point ordering

* improve readability

* docs inertia recommendation

* fix point ordering

* check object frame time

* add velocity angle to frontend

* docs clarity

* add frontend speed filter

* fix mqtt docs

* fix mqtt docs

* don't try to remove distances if they weren't already defined

* don't display estimates on debug view/snapshots if object is not in a speed tracking zone

* docs

* implement speed_threshold for zone presence

* docs for threshold

* better ground plane image

* improve image zone size

* add inertia to speed threshold example
2025-02-10 13:23:42 -07:00
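
The core of the feature reduces to converting pixel displacement per frame into real-world units via a zone calibration. A hedged sketch: in Frigate the scale comes from the real-world distances assigned to a 4-point zone's edges, which `pixels_per_foot` stands in for here:

```python
import math


def estimated_speed_mph(
    prev_xy: tuple[float, float],
    cur_xy: tuple[float, float],
    dt: float,
    pixels_per_foot: float,
) -> float:
    """Convert pixel displacement over dt seconds into miles per hour."""
    pixel_velocity = math.hypot(cur_xy[0] - prev_xy[0], cur_xy[1] - prev_xy[1]) / dt
    feet_per_second = pixel_velocity / pixels_per_foot
    return feet_per_second * 3600 / 5280  # ft/s -> mph
```
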
Josh Hawkins
dd7820e4ee Improve live streaming (#16447)
* config file changes

* config migrator

* stream selection on single camera live view

* camera streaming settings dialog

* manage persistent group streaming settings

* apply streaming settings in camera groups

* add ability to clear all streaming settings from settings

* docs

* update reference config

* fixes

* clarify docs

* use first stream as default in dialog

* ensure still image is visible after switching stream type to none

* docs

* clarify docs

* add ability to continue playing stream in background

* fix props

* put stream selection inside dropdown on desktop

* add capabilities to live mode hook

* live context menu component

* resize observer: only return new dimensions if they've actually changed

* pass volume prop to players

* fix slider bug, https://github.com/shadcn-ui/ui/issues/1448

* update react-grid-layout

* prevent animated transitions on draggable grid layout

* add context menu to dashboards

* use provider

* streaming dialog from context menu

* docs

* add jsmpeg warning to context menu

* audio and two way talk indicators in single camera view

* add link to debug view

* don't use hook

* create manual events from live camera view

* maintain grow classes on grid items

* fix initial volume state on default dashboard

* fix pointer events causing context menu to end up underneath image on iOS

* mobile drawer tweaks

* stream stats

* show settings menu for non-restreamed cameras

* consistent settings icon

* tweaks

* optional stats to fix birdseye player

* add toaster to live camera view

* fix crash on initial save in streaming dialog

* don't require restreaming for context menu streaming settings

* add debug view to context menu

* stats fixes

* update docs

* always show stream info when restreamed

* update camera streaming dialog

* make note of no h265 support for webrtc

* docs clarity

* ensure docs show streams as a dict

* docs clarity

* fix css file

* tweaks
2025-02-10 09:42:35 -07:00
Josh Hawkins
2a28964e63 Improve UI logs (#16434)
* use react-logviewer and backend streaming

* layout adjustments

* readd copy handler

* reorder and fix key

* add loading state

* handle frigate log consolidation

* handle newlines in sheet

* update react-logviewer

* fix scrolling and use chunked log download

* don't combine frigate log lines with timestamp

* basic deduplication

* use react-logviewer and backend streaming

* layout adjustments

* readd copy handler

* reorder and fix key

* add loading state

* handle frigate log consolidation

* handle newlines in sheet

* update react-logviewer

* fix scrolling and use chunked log download

* don't combine frigate log lines with timestamp

* basic deduplication

* move process logs function to services util

* improve layout and scrolling behavior

* clean up
2025-02-10 08:38:56 -07:00
Josh Hawkins
e207b2f50b Refactor export filenames to include start and end date/time (#16446) 2025-02-10 08:30:23 -07:00
Josh Hawkins
d5b60237a2 Improve display of recordings data (#16436)
* backend

* frontend

* add earliest recording available to storage metrics page
2025-02-09 16:02:36 -07:00
Josh Hawkins
bc96db8612 Add ability to set mqtt qos in config (#16435) 2025-02-09 16:45:04 -06:00
Nicolas Mowen
81bd956ae8 Fix sanitized api causing issues (#16433) 2025-02-09 14:50:12 -07:00
Josh Hawkins
c8cec63cb9 Object area debugging and improvements (#16432)
* add ability to specify min and max area as percentages

* debug draw area and ratio

* docs

* update for best percentage
2025-02-09 14:48:23 -07:00
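
A sketch of the dual interpretation this commit adds, assuming values strictly between 0 and 1 mean "fraction of the frame area" while anything else keeps the legacy pixel meaning:

```python
def area_in_range(
    area_px: float, frame_w: int, frame_h: int, min_area: float, max_area: float
) -> bool:
    """Treat min/max values in (0, 1) as fractions of the frame area;
    larger values keep their original pixel interpretation."""
    frame_area = frame_w * frame_h
    lo = min_area * frame_area if 0 < min_area < 1 else min_area
    hi = max_area * frame_area if 0 < max_area < 1 else max_area
    return lo <= area_px <= hi
```
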
Josh Hawkins
83beacf84a Add autotracking calibration message (#16431)
* check autotracking calibration values before writing to config

* docs

* clarify log message
2025-02-09 14:29:08 -07:00
Josh Hawkins
cc2dbdcb44 Timeline improvements (#16429)
* virtualize event segments

* use virtual segments in event review timeline

* add segmentkey to props

* virtualize motion segments

* use virtual segments in motion review timeline

* update draggable element hook to use only math

* timeline zooming hook

* add zooming to event review timeline

* update playground

* zoomable timeline on recording view

* consolidate divs in summary timeline

* only calculate motion data for visible motion segments

* use swr loading state

* fix motion only

* keep handlebar centered when zooming

* zoom animations

* clean up

* ensure motion only checks both halves of segment

* prevent handlebar jump when using motion only mode
2025-02-09 14:13:32 -07:00
towerhand
1f89844c67 Use /api/metrics instead of /metrics (#16425) 2025-02-09 12:50:42 -07:00
Nicolas Mowen
81a56549da Face recognition UI improvements (#16422)
* Rework face recognition APIs

* Fix error message on cancel

* Add ability to create new face library
2025-02-09 13:22:25 -06:00
Nicolas Mowen
c58d2add37 Fix missing prometheus commit (#16415)
* Add prometheus metrics

* add docs for metrics

* sidebar

* lint

* lint

---------

Co-authored-by: Mitch Ross <mitchross@users.noreply.github.com>
2025-02-09 10:04:39 -07:00
Nicolas Mowen
a42ad7ead9 UI fixes (#16406)
* Fix new review item banner blocking third chip

* Fix custom export mode
2025-02-09 07:36:55 -06:00
Nicolas Mowen
973d3aed9a Disable jetson builds (#16396) 2025-02-08 17:20:58 -06:00
Nicolas Mowen
fa300742ea Fix build (#16393)
* Fix rpi build

* Attempt to fix jetson builds
2025-02-08 14:51:42 -06:00
Nicolas Mowen
15472274ee Update docs sidebar name (#16370)
* Clarify classification

* Fix face hierarchy as well
2025-02-08 12:47:01 -06:00
Nicolas Mowen
f3485bfc13 Sanitize provided name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
060ad34e1d Update cudnn and onnxruntime (#16332) 2025-02-08 12:47:01 -06:00
Josh Hawkins
ebf4403eca Add endpoint for fetching batch review items (#16254) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
fb316874ef Quick fix for face rec (#16226)
* Check both

* Fix api order
2025-02-08 12:47:01 -06:00
Nicolas Mowen
9236898a9d Sub label sensors (#16218)
* Support mqtt sensors for logo attributes

* Expose in api
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1c3527f5c4 Face recognition reprocess (#16212)
* Implement update topic

* Add API for reprocessing face

* Get reprocess working

* Fix crash when no faces exist

* Simplify
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6f4002a56f Add training face library information to docs (#16169) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f99ff65ed Face recognition improvements (#16034) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c7c8575c9b Bird classification (#15966)
* Start working on bird processor

* Initial setup for bird processing

* Improvements to handling

* Get classification working

* Cleanup classification

* Add classification config

* Update sort
2025-02-08 12:47:01 -06:00
Nicolas Mowen
63dbcd79e2 Update hailo deps (#15958) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
9dc85d4a76 Processing refactor (#15935)
* Refactor post processor to be real time processor

* Build out generic API for post processing

* Cleanup

* Fix
2025-02-08 12:47:01 -06:00
Nicolas Mowen
88686c44fe Generalize postprocessing (#15931)
* Actually send result to face registration

* Define postprocessing api and move face processing to fit

* Standardize request handling

* Standardize handling of processors

* Rename processing metrics

* Cleanup

* Standardize object end

* Update to newer formatting

* One more

* One more
2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f1d85e189 Fix onvif packages (#15906)
* Don't replace packages

* Formatting
2025-02-08 12:47:01 -06:00
Josh Hawkins
283f1b19a7 Only print line and key/value when a line number can be found (#15897) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
ab8f9e5412 Upgrade onvif-zeep dependency to use onvif-zeep-async (#15894)
* Upgrade to new dependency

* Start onvif work

* Update for async calls
2025-02-08 12:47:01 -06:00
Nicolas Mowen
4f85b18b08 Improvements to face recognition (#15854)
* Do not add margin to face images

* remove margin

* Correctly clear
2025-02-08 12:47:01 -06:00
Nicolas Mowen
a6ae208fe7 Add metrics page for embeddings and face / license plate processing times (#15818)
* Get stats for embeddings inferences

* cleanup embeddings inferences

* Enable UI for feature metrics

* Change threshold

* Fix check

* Update python for actions

* Set python version

* Ignore type for now
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0c13227f7d Fix facedet download (#15811)
* Support downloading face models

* Handle download and loading correctly

* Add face dir creation

* Fix error

* Fix

* Formatting

* Move upload to button

* Show number of faces in library for each name

* Add text color for score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1edbd2d498 Refactor camera activity processing (#15803)
* Replace object label sensors with new manager

* Implement zone topics

* remove unused
2025-02-08 12:47:01 -06:00
Marc Altmann
4c7d4e6c0a rockchip: update dependencies and add script for model conversion (#15699)
* rockchip: update dependencies and add script for model conversion

* rockchip: update docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
458ca4a983 Add support for SR-IOV GPU stats (#15796)
* Add option to treat GPU as SRIOV in order for stats to work correctly

* Add to intel docs

* fix tests
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6a83f40135 Add ffmpeg config to increase HEVC compatibility with Apple devices (#15795)
* Add config option for handling HEVC playback on Apple devices

* Update docs

* Remove unused
2025-02-08 12:47:01 -06:00
Nicolas Mowen
281407247b Implement face recognition training in UI (#15786)
* Rename debug to train

* Add api to train image as person

* Cleanup model running

* Formatting

* Fix

* Set face recognition page title
2025-02-08 12:47:01 -06:00
Nicolas Mowen
172e7d494f Add UI for managing face recognitions (#15757)
* Add ability to view attempts

* Improve UI

* Cleanup

* Correctly refresh ui when item is deleted

* Select correct library by default

* Add min score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
8763390dfe Face recognition logic improvements (#15679)
* Always initialize face model on startup

* Add ability to save face images for debugging

* Implement better face recognition reasonability
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c26144da75 Change folder 2025-02-08 12:47:01 -06:00
Nicolas Mowen
d025495374 Set model size 2025-02-08 12:47:01 -06:00
Nicolas Mowen
f58fc4c367 Improve face recognition (#15670)
* Face recognition tuning

* Support face alignment

* Cleanup

* Correctly download model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
cc6a740a0f Update TRT (#15646) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
909444dacf Make face library scrollable 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c28a0ed9a3 Update openvino (#15634) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
cd0d37ce07 Update python deps (#15618)
* Update opencv

* Update cython

* Update scikit

* Update scipy
2025-02-08 12:47:01 -06:00
Nicolas Mowen
d0ad840ef4 Enable temporary caching of camera images to improve responsiveness of UI (#15614) 2025-02-08 12:47:01 -06:00
Josh Hawkins
edab4efa42 Preserve line numbers in config validation (#15584)
* use ruamel to parse and preserve line numbers for config validation

* maintain exception for non validation errors

* fix types

* include input in log messages
2025-02-08 12:47:01 -06:00
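
ruamel.yaml's round-trip loader is what makes this possible: unlike plain PyYAML, it attaches line/column metadata to parsed nodes. A small sketch of that capability; the config layout is illustrative:

```python
from ruamel.yaml import YAML

config_text = """\
cameras:
  front_door:
    detect:
      width: 1280
"""

yaml = YAML()  # round-trip mode preserves node positions
data = yaml.load(config_text)

# .lc carries 0-based line/column info, so a validation error for
# `cameras` can point at the exact line in the user's file.
print(data.lc.key("cameras"))  # [line, column] of the `cameras` key
```
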
Nicolas Mowen
877b7b2910 Update base image (#15103)
* Change base image

* Update python

* Update coral library

* Fix source file

* Install correct apt packages

* Cleanup

* Fix installation of coral deps

* fix python installations

* Fix devcontainer build

* Get tensorrt build working

* Update other deps

* Filter out tflite log

* Get ROCm build working

* Get rockchip build working

* Get hailo build working

* Add note to comment
2025-02-08 12:47:01 -06:00
Nicolas Mowen
66675cf977 Face recognition fixes (#15222)
* Fix nginx max upload size

* Close upload dialog when done and add toasts

* Formatting

* fix ruff
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e4ff91d6b Improve face recognition (#15205)
* Validate faces using cosine distance and SVC

* Formatting

* Use opencv instead of face embedding

* Update docs for training data

* Adjust to score system

* Set bounds

* remove face embeddings

* Update writing images

* Add face library page

* Add ability to select file

* Install opencv deps

* Cleanup

* Use different deps

* Move deps

* Cleanup

* Only show face library for desktop

* Implement deleting

* Add ability to upload image

* Add support for uploading images
2025-02-08 12:47:01 -06:00
Nicolas Mowen
dd7b1be7f4 Remove standardization 2025-02-08 12:47:01 -06:00
Nicolas Mowen
102a7695a3 Fix check 2025-02-08 12:47:01 -06:00
Nicolas Mowen
755c9eea1c Remove hardcoded face name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
e5fcc50ae2 Use SVC to normalize and classify faces for recognition (#14835)
* Add margin to detected faces for embeddings

* Standardize pixel values for face input

* Use SVC to classify faces

* Clear classifier when new face is added

* Formatting

* Add dependency
2025-02-08 12:47:01 -06:00
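
A hedged sketch of the approach named in the commit, with scikit-learn standing in for the real pipeline and random vectors standing in for face embeddings:

```python
import numpy as np
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

# Random stand-ins for face embedding vectors; real inputs come from the
# face recognition model.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(40, 128))
labels = np.repeat(["alice", "bob"], 20)

# L2-normalize, then fit an SVC whose probability output gates the match.
X = Normalizer(norm="l2").fit_transform(embeddings)
clf = SVC(kernel="linear", probability=True).fit(X, labels)
# Per the commit, the classifier is cleared and refit when a new face is added.

query = Normalizer(norm="l2").transform(rng.normal(size=(1, 128)))
probs = clf.predict_proba(query)[0]
best = int(np.argmax(probs))
# Only accept the match when the classifier is confident enough;
# the 0.8 threshold is an assumption of this sketch.
name = clf.classes_[best] if probs[best] > 0.8 else "unknown"
```
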
Josh Hawkins
8bb037f82e Use regular expressions for plate matching (#14727) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
a0c35101fb Update facenet model (#14647) 2025-02-08 12:47:01 -06:00
Josh Hawkins
af1eaac5ff LPR improvements (#14641) 2025-02-08 12:47:01 -06:00
Josh Hawkins
dbbfc735f0 Prevent division by zero in lpr confidence checks (#14615) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
711575736d Fix label check (#14610)
* Create config for parsing object

* Use in maintainer
2025-02-08 12:47:01 -06:00
Josh Hawkins
c4ce7f9800 License plate recognition (ALPR) backend (#14564)
* Update version

* Face recognition backend (#14495)

* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model

* Improve face recognition (#14537)

* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting

* Fix access (#14540)

* Face detection (#14544)

* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo

* Update version

* Face recognition backend (#14495)

* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model

* Improve face recognition (#14537)

* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting

* Fix access (#14540)

* Face detection (#14544)

* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* config

* config

* lpr maintainer

* clean up

* clean up

* fix processing

* don't process for stationary cars

* fix order

* fixes

* check for known plates

* improved length and character-by-character confidence

* model fixes and small tweaks

* docs

* placeholder for non frigate+ model lp detection

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
594a4e0ba3 Face detection (#14544)
* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c1d5510428 Fix access (#14540) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
a3d6266d96 Improve face recognition (#14537)
* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition frome semantic search

* Update docs

* Formatting
2025-02-08 12:47:01 -06:00
Nicolas Mowen
aa19ec3ddb Face recognition backend (#14495)
* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e1139a7a4 Update version 2025-02-08 12:47:01 -06:00
Blake Blackshear
cc955b1e66 Merge remote-tracking branch 'origin/master' into dev 2025-02-08 10:42:48 -06:00
Josh Hawkins
da34ff964f Remove development wording (#16378) 2025-02-08 10:27:50 -06:00
Nicolas Mowen
d6a2965cb2 Update openvino hardware inference times (#16368) 2025-02-07 10:52:21 -06:00
Nicolas Mowen
4b429e440b Point to latest version of hailo script (#16351) 2025-02-06 10:31:24 -06:00
Josh Hawkins
8759b4a0d3 Clarify occupancy sensor usage in HA integration (#16333) 2025-02-05 09:56:16 -06:00
Rui Alves
df840b7cd5 Finish unit tests for review controller and started for event controller (#15955)
* Started unit tests for the review controller

* Revert "Started unit tests for the review controller"

This reverts commit 7746eb146f.

* Started unit tests for GET /review/activity/motion Endpoint

* Started unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/{event_id} Endpoint

* Continued unit tests for GET /review/{review_id} Endpoint

* Added unit tests for GET /review/{review_id}/viewed Endpoint

* Added unit tests for GET /stats Endpoint

* Added unit tests for GET /events Endpoint

* Updated unit tests for GET /events Endpoint

* Deleted unit tests for /events from test_http (updated tests are now in test_http_event.py)

* Removed duplicated test for GET /review/activity/motion Endpoint
2025-02-04 06:28:14 -07:00
Nicolas Mowen
0645dc70a5 Detector docs (#16292)
* Refactor hardware docs to show model specific speeds

* Move hailo to first party detectors

* Make note of multiple detectors

* Improve hierarchy

* Update object_detectors.md

* Update hardware.md
2025-02-03 07:57:21 -06:00
Josh Hawkins
b230b35c62 Fix genai note (#16273) 2025-02-02 07:10:37 -07:00
Josh Hawkins
31da9351f0 Clarify genai provider and openai compatible endpoints (#16267) 2025-02-01 16:49:09 -07:00
Ben Clouser
93d39370b6 update docs to be more clear regarding audio support and go2rtc requi… (#16232)
* update docs to be more clear regarding audio support and go2rtc requirement

Signed-off-by: Ben Clouser <dev@benclouser.com>

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Clarify title

* Cleanup

---------

Signed-off-by: Ben Clouser <dev@benclouser.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-01-30 11:23:38 -07:00
Nicolas Mowen
9dc4e8f290 Add frigate notify to third party extensions (#16190) 2025-01-28 08:09:59 -06:00
glossyio
12e62488c6 corrected docs for /config/save to /api/config/save (#16077) 2025-01-21 16:52:42 -07:00
Blake Blackshear
b5e5127d48 update link (#15756) 2024-12-31 12:05:55 -06:00
PrplHaz4
24f4aa79c8 Change Amcrest example to subtype=3 (#15607)
I think this was meant to be a `3`
2024-12-19 21:47:11 -06:00
Nicolas Mowen
dfc94b5ad6 Add dahua and amcrest to camera specific documentation (#15605) 2024-12-19 17:24:34 -06:00
Nicolas Mowen
5acbe37e6f Update camera specific settings to make note of hikvision authentication (#15552) 2024-12-17 11:31:59 -06:00
Blake Blackshear
2461d01329 Update hardware recs (#15254) 2024-11-29 07:20:33 -06:00
Nicolas Mowen
5cafca1be0 Add docs for go2rtc logging (#15204) 2024-11-26 09:34:40 -06:00
victpork
9c5a04f25f Added code to download weights from new host (#15087) 2024-11-20 05:06:22 -06:00
Charles Crossan
1ffdd32013 Update authentication.md (#14980)
add detail to reset_admin_password setting
2024-11-14 08:13:37 -07:00
Nicolas Mowen
99506845f7 Update edge tpu docs for RPi 5 kernel (#14946) 2024-11-12 15:48:57 -06:00
Blake Blackshear
ffd05f90f3 update hardware recommendations (#14830) 2024-11-06 05:02:42 -07:00
Blake Blackshear
3a8c290f91 update docs for new labels (#14739) 2024-11-03 06:10:38 -06:00
358 changed files with 22583 additions and 6748 deletions

View File

@@ -2,6 +2,7 @@ aarch
absdiff
airockchip
Alloc
alpr
Amcrest
amdgpu
analyzeduration
@@ -43,6 +44,7 @@ codeproject
colormap
colorspace
comms
cooldown
coro
ctypeslib
CUDA
@@ -61,6 +63,7 @@ dsize
dtype
ECONNRESET
edgetpu
facenet
fastapi
faststart
fflags
@@ -114,6 +117,8 @@ itemsize
Jellyfin
jetson
jetsons
jina
jinaai
joserfc
jsmpeg
jsonify
@@ -187,6 +192,7 @@ openai
opencv
openvino
OWASP
paddleocr
paho
passwordless
popleft
@@ -308,4 +314,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency

View File

@@ -8,9 +8,25 @@
"overrideCommand": false,
"remoteUser": "vscode",
"features": {
"ghcr.io/devcontainers/features/common-utils:1": {}
"ghcr.io/devcontainers/features/common-utils:2": {}
// Uncomment the following lines to use ONNX Runtime with CUDA support
// "ghcr.io/devcontainers/features/nvidia-cuda:1": {
// "installCudnn": true,
// "installNvtx": true,
// "installToolkit": true,
// "cudaVersion": "12.5",
// "cudnnVersion": "9.4.0.58"
// },
// "./features/onnxruntime-gpu": {}
},
"forwardPorts": [8971, 5000, 5001, 5173, 8554, 8555],
"forwardPorts": [
8971,
5000,
5001,
5173,
8554,
8555
],
"portsAttributes": {
"8971": {
"label": "External NGINX",
@@ -64,10 +80,18 @@
"editor.formatOnType": true,
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
"python.testing.unittestArgs": ["-v", "-s", "./frigate/test"],
"python.testing.unittestArgs": [
"-v",
"-s",
"./frigate/test"
],
"files.trimTrailingWhitespace": true,
"eslint.workingDirectories": ["./web"],
"isort.args": ["--settings-path=./pyproject.toml"],
"eslint.workingDirectories": [
"./web"
],
"isort.args": [
"--settings-path=./pyproject.toml"
],
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true,
@@ -86,9 +110,16 @@
],
"editor.tabSize": 2
},
"cSpell.ignoreWords": ["rtmp"],
"cSpell.words": ["preact", "astype", "hwaccel", "mqtt"]
"cSpell.ignoreWords": [
"rtmp"
],
"cSpell.words": [
"preact",
"astype",
"hwaccel",
"mqtt"
]
}
}
}
}
}

View File

@@ -0,0 +1,22 @@
{
"id": "onnxruntime-gpu",
"version": "0.0.1",
"name": "ONNX Runtime GPU (Nvidia)",
"description": "Installs ONNX Runtime for Nvidia GPUs.",
"documentationURL": "",
"options": {
"version": {
"type": "string",
"proposals": [
"latest",
"1.20.1",
"1.20.0"
],
"default": "latest",
"description": "Version of ONNX Runtime to install"
}
},
"installsAfter": [
"ghcr.io/devcontainers/features/nvidia-cuda"
]
}

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
VERSION=${VERSION}
python3 -m pip config set global.break-system-packages true
# if VERSION == "latest" or VERSION is empty, install the latest version
if [ "$VERSION" == "latest" ] || [ -z "$VERSION" ]; then
python3 -m pip install onnxruntime-gpu
else
python3 -m pip install onnxruntime-gpu==$VERSION
fi
echo "Done!"

View File

@@ -19,7 +19,7 @@ sudo chown -R "$(id -u):$(id -g)" /media/frigate
# When started as a service, LIBAVFORMAT_VERSION_MAJOR is defined in the
# s6 service file. For dev, where frigate is started from an interactive
# shell, we define it in .bashrc instead.
echo 'export LIBAVFORMAT_VERSION_MAJOR=$(/usr/lib/ffmpeg/7.0/bin/ffmpeg -version | grep -Po "libavformat\W+\K\d+")' >> $HOME/.bashrc
echo 'export LIBAVFORMAT_VERSION_MAJOR=$("$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)" -version | grep -Po "libavformat\W+\K\d+")' >> "$HOME/.bashrc"
make version

View File

@@ -1,5 +1,11 @@
## Proposed change
<!--
Thank you!
If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
Frigate's development goals.
Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc. that are
made in this pull request.

View File

@@ -42,7 +42,7 @@ jobs:
tags: ${{ steps.setup.outputs.image-name }}-amd64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
arm64_build:
runs-on: ubuntu-22.04
runs-on: ubuntu-22.04-arm
name: ARM Build
steps:
- name: Check out code
@@ -76,36 +76,8 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
jetson_jp4_build:
runs-on: ubuntu-22.04
name: Jetson Jetpack 4
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 4)
env:
ARCH: arm64
BASE_IMAGE: timongentzsch/l4t-ubuntu20-opencv:latest
SLIM_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
TRT_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp4
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4,mode=max
jetson_jp5_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
@@ -134,6 +106,35 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
jetson_jp6_build:
runs-on: ubuntu-22.04-arm
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 6)
env:
ARCH: arm64
BASE_IMAGE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
SLIM_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
TRT_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp6
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6,mode=max
amd64_extra_builds:
runs-on: ubuntu-22.04
name: AMD64 Extra Build
@@ -162,8 +163,22 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-rocm,mode=max
*.cache-from=type=gha
arm64_extra_builds:
runs-on: ubuntu-22.04
runs-on: ubuntu-22.04-arm
name: ARM Extra Build
needs:
- arm64_build
@@ -187,46 +202,6 @@ jobs:
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
combined_extra_builds:
runs-on: ubuntu-22.04
name: Combined Extra Builds
needs:
- amd64_build
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Hailo-8l build
uses: docker/bake-action@v6
with:
source: .
push: true
targets: h8l
files: docker/hailo8l/h8l.hcl
set: |
h8l.tags=${{ steps.setup.outputs.image-name }}-h8l
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:


@@ -4,9 +4,10 @@ on:
pull_request:
paths-ignore:
- "docs/**"
- ".github/**"
env:
DEFAULT_PYTHON: 3.9
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:
@@ -23,7 +24,7 @@ jobs:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
@@ -63,6 +64,9 @@ jobs:
node-version: 20.x
- run: npm install
working-directory: ./web
- name: Build web
run: npm run build
working-directory: ./web
# - name: Test
# run: npm run test
# working-directory: ./web
@@ -76,7 +80,7 @@ jobs:
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.3.0
uses: actions/setup-python@v5.4.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements
@@ -98,14 +102,6 @@ jobs:
uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
- run: npm install
working-directory: ./web
- name: Build web
run: npm run build
working-directory: ./web
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx


@@ -39,14 +39,14 @@ jobs:
STABLE_TAG=${BASE}:stable
PULL_TAG=${BASE}:${BUILD_TAG}
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk h8l rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
done
# stable tag
if [[ "${BUILD_TYPE}" == "stable" ]]; then
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk h8l rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
done
fi
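Expanded for a single variant, each skopeo copy above amounts to the following; BASE and BUILD_TAG are computed in earlier steps of the real workflow, so the values here are illustrative only:

BASE=ghcr.io/blakeblackshear/frigate
PULL_TAG=${BASE}:dev-abc1234
VERSION_TAG=${BASE}:v0.16.0
docker run --rm -v $HOME/.docker/config.json:/config.json \
  quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all \
  docker://${PULL_TAG}-tensorrt-jp6 docker://${VERSION_TAG}-tensorrt-jp6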


@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.15.0
VERSION = 0.16.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
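These variables feed the image tags used by the per-board build targets; a quick sketch of the tag they compose (repo and branch names as defined above, commit hash illustrative):

COMMIT_HASH=$(git log -1 --pretty=format:"%h" | tail -1)
GITHUB_REF_NAME=$(git rev-parse --abbrev-ref HEAD)
echo "ghcr.io/blakeblackshear/frigate:${GITHUB_REF_NAME}-${COMMIT_HASH}"  # e.g. ...:dev-0b2bfe6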


@@ -38,4 +38,4 @@ services:
container_name: mqtt
image: eclipse-mosquitto:1.6
ports:
- "1883:1883"
- "1883:1883"


@@ -1,40 +0,0 @@
# syntax=docker/dockerfile:1.6
ARG DEBIAN_FRONTEND=noninteractive
# Build Python wheels
FROM wheels AS h8l-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/hailo8l/requirements-wheels-h8l.txt /requirements-wheels-h8l.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
# Create a directory to store the built wheels
RUN mkdir /h8l-wheels
# Build the wheels
RUN pip3 wheel --wheel-dir=/h8l-wheels -c /requirements-wheels.txt -r /requirements-wheels-h8l.txt
FROM wget AS hailort
ARG TARGETARCH
RUN --mount=type=bind,source=docker/hailo8l/install_hailort.sh,target=/deps/install_hailort.sh \
/deps/install_hailort.sh
# Use deps as the base image
FROM deps AS h8l-frigate
# Copy the wheels from the wheels stage
COPY --from=h8l-wheels /h8l-wheels /deps/h8l-wheels
COPY --from=hailort /hailo-wheels /deps/hailo-wheels
COPY --from=hailort /rootfs/ /
# Install the wheels
RUN pip3 install -U /deps/h8l-wheels/*.whl
RUN pip3 install -U /deps/hailo-wheels/*.whl
# Copy base files from the rootfs stage
COPY --from=rootfs / /
# Set workdir
WORKDIR /opt/frigate/


@@ -1,34 +0,0 @@
target wget {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wget"
}
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "rootfs"
}
target h8l {
dockerfile = "docker/hailo8l/Dockerfile"
contexts = {
wget = "target:wget"
wheels = "target:wheels"
deps = "target:deps"
rootfs = "target:rootfs"
}
platforms = ["linux/arm64","linux/amd64"]
}


@@ -1,15 +0,0 @@
BOARDS += h8l
local-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=frigate:latest-h8l \
--load
build-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l
push-h8l: build-h8l
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l \
--push


@@ -1,19 +0,0 @@
#!/bin/bash
set -euxo pipefail
hailo_version="4.19.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" |
tar -C / -xzf -
mkdir -p /hailo-wheels
wget -P /hailo-wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp39-cp39-linux_${arch}.whl"


@@ -1,12 +0,0 @@
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*


@@ -4,6 +4,7 @@
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget
hailo_version="4.20.0"
arch=$(uname -m)
if [[ $arch == "x86_64" ]]; then
@@ -13,7 +14,7 @@ else
fi
# Clone the HailoRT driver repository
git clone --depth 1 --branch v4.19.0 https://github.com/hailo-ai/hailort-drivers.git
git clone --depth 1 --branch v${hailo_version} https://github.com/hailo-ai/hailort-drivers.git
# Build and install the HailoRT driver
cd hailort-drivers/linux/pcie


@@ -3,14 +3,29 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG BASE_IMAGE=debian:11
ARG SLIM_BASE=debian:11-slim
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
ARG BASE_IMAGE=debian:12
ARG SLIM_BASE=debian:12-slim
# A hook that allows us to inject commands right after the base images
ARG BASE_HOOK=
FROM ${BASE_IMAGE} AS base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK
FROM --platform=${BUILDPLATFORM} debian:11 AS base_host
RUN sh -c "$BASE_HOOK"
FROM --platform=${BUILDPLATFORM} debian:12 AS base_host
ARG PIP_BREAK_SYSTEM_PACKAGES
FROM ${SLIM_BASE} AS slim-base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK
RUN sh -c "$BASE_HOOK"
FROM slim-base AS wget
ARG DEBIAN_FRONTEND
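Because BASE_HOOK is run verbatim via sh -c in both the base and slim-base stages, extra setup can be injected at build time without touching the Dockerfile; a hedged example (the package installed is purely illustrative):

docker buildx build -f docker/main/Dockerfile \
  --build-arg BASE_HOOK='apt-get update && apt-get install -y --no-install-recommends ca-certificates' \
  --target deps .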
@@ -24,10 +39,7 @@ ARG DEBIAN_FRONTEND
ENV CCACHE_DIR /root/.ccache
ENV CCACHE_MAXSIZE 2G
# bind /var/cache/apt to tmpfs to speed up nginx build
RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
--mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
--mount=type=cache,target=/root/.ccache \
RUN --mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
/deps/build_nginx.sh
FROM wget AS sqlite-vec
@@ -139,24 +151,17 @@ ARG TARGETARCH
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https \
gnupg \
wget \
# the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
&& wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
apt-transport-https wget \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.9 \
python3.9-dev \
python3.11 \
python3.11-dev \
# opencv dependencies
build-essential cmake git pkg-config libgtk-3-dev \
libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
gfortran openexr libatlas-base-dev libssl-dev\
libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
libtbbmalloc2 libtbb-dev libdc1394-dev libopenexr-dev \
libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
# sqlite3 dependencies
tclsh \
@@ -164,8 +169,7 @@ RUN apt-get -qq update \
gcc gfortran libopenblas-dev liblapack-dev && \
rm -rf /var/lib/apt/lists/*
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
@@ -180,6 +184,9 @@ RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
/deps/install_hailort.sh
# Collect deps in a single layer
FROM scratch AS deps-rootfs
@@ -190,6 +197,7 @@ COPY --from=libusb-build /usr/local/lib /usr/local/lib
COPY --from=tempio /rootfs/ /
COPY --from=s6-overlay /rootfs/ /
COPY --from=models /rootfs/ /
COPY --from=wheels /rootfs/ /
COPY docker/main/rootfs/ /
@@ -214,14 +222,22 @@ ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1
# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies
RUN --mount=type=bind,source=docker/main/install_deps.sh,target=/deps/install_deps.sh \
/deps/install_deps.sh
ENV DEFAULT_FFMPEG_VERSION="7.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:5.0"
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
python3 -m pip install --upgrade pip && \
pip3 install -U /deps/wheels/*.whl
COPY --from=deps-rootfs / /


@@ -8,10 +8,16 @@ SECURE_TOKEN_MODULE_VERSION="1.5"
SET_MISC_MODULE_VERSION="v0.33"
NGX_DEVEL_KIT_VERSION="v0.3.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
apt-get update
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep nginx
apt-get -yqq install --no-install-recommends ca-certificates wget
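Debian 12 ships its apt sources in deb822 format, so enabling source packages means extending the Types: line instead of rewriting deb lines; the sed can be sanity-checked in isolation:

printf 'Types: deb\nURIs: http://deb.debian.org/debian\n' | sed '/^Types:/s/deb/& deb-src/'
# Types: deb deb-src
# URIs: http://deb.debian.org/debian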


@@ -4,7 +4,7 @@ from openvino.tools import mo
ov_model = mo.convert_model(
"/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb",
compress_to_fp16=True,
transformations_config="/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
transformations_config="/usr/local/lib/python3.11/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
tensorflow_object_detection_api_pipeline_config="/models/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config",
reverse_input_channels=True,
)


@@ -4,8 +4,15 @@ set -euxo pipefail
SQLITE_VEC_VERSION="0.1.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep sqlite3 gettext git


@@ -6,82 +6,66 @@ apt-get -qq update
apt-get -qq install --no-install-recommends -y \
apt-transport-https \
ca-certificates \
gnupg \
wget \
lbzip2 \
procps vainfo \
unzip locales tzdata libxml2 xz-utils \
python3.9 \
python3-pip \
python3.11 \
curl \
lsof \
jq \
nethogs
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0
# ensure python3 defaults to python3.9
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
mkdir -p -m 600 /root/.gnupg
# add coral repo
curl -fsSLo - https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/google-cloud-packages-archive-keyring.gpg
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
echo "libedgetpu1-max libedgetpu/accepted-eula select true" | debconf-set-selections
# install coral runtime
wget -q -O /tmp/libedgetpu1-max.deb "https://github.com/feranick/libedgetpu/releases/download/16.0TF2.17.1-1/libedgetpu1-max_16.0tf2.17.1-1.bookworm_${TARGETARCH}.deb"
unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# enable non-free repo in Debian
if grep -q "Debian" /etc/issue; then
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
fi
# coral drivers
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
libedgetpu1-max python3-tflite-runtime python3-pycoral
# btbn-ffmpeg -> amd64
# ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
fi
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# use debian bookworm for amd / intel-i965 driver packages
echo 'deb https://deb.debian.org/debian bookworm main contrib non-free' >/etc/apt/sources.list.d/debian-bookworm.list
apt-get -qq update
# install amd / intel-i965 driver packages
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver intel-gpu-tools onevpl-tools \
libva-drm2 \
mesa-va-drivers radeontop
# something about this dependency requires it to be installed in a separate call rather than in the line above
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver-shaders
# intel packages use zst compression so we need to update dpkg
apt-get install -y dpkg
rm -f /etc/apt/sources.list.d/debian-bookworm.list
# use the intel apt repo for intel packages
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list

docker/main/install_hailort.sh Executable file

@@ -0,0 +1,14 @@
#!/bin/bash
set -euxo pipefail
hailo_version="4.20.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"
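The main Dockerfile bind-mounts this script into the wheels stage (see the install_hailort.sh RUN above). Run standalone it needs TARGETARCH set and root, since it extracts into / and writes wheels to /wheels; a sketch for an amd64 build container:

TARGETARCH=amd64 bash docker/main/install_hailort.sh
ls /wheels/hailort-4.20.0-cp311-cp311-linux_x86_64.whl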


@@ -1,6 +1,7 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.11.2
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
@@ -9,39 +10,64 @@ slowapi == 0.1.*
imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 2.1.*
markupsafe == 3.0.*
python-multipart == 0.0.20
# General
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.9.0.*
onvif-zeep-async == 3.1.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
psutil == 6.1.*
pydantic == 2.8.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2024.*
pytz == 2025.*
pyzmq == 26.2.*
ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# Image Manipulation
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2024.3.*
onnxruntime-openvino == 1.19.* ; platform_machine == 'x86_64'
onnxruntime == 1.19.* ; platform_machine == 'aarch64'
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.3.*
openai == 1.51.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
Levenshtein==0.26.*
# HailoRT Wheels
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*
prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'


@@ -1,2 +1,2 @@
scikit-build == 0.17.*
scikit-build == 0.18.*
nvidia-pyindex


@@ -43,8 +43,10 @@ function migrate_db_path() {
}
function set_libva_version() {
local ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
export LIBAVFORMAT_VERSION_MAJOR=$($ffmpeg_path -version | grep -Po "libavformat\W+\K\d+")
local ffmpeg_path
ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
export LIBAVFORMAT_VERSION_MAJOR
}
echo "[INFO] Preparing Frigate..."


@@ -44,10 +44,14 @@ function get_ip_and_port_from_supervisor() {
}
function set_libva_version() {
local ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
export LIBAVFORMAT_VERSION_MAJOR=$($ffmpeg_path -version | grep -Po "libavformat\W+\K\d+")
local ffmpeg_path
ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
export LIBAVFORMAT_VERSION_MAJOR
}
set_libva_version
if [[ -f "/dev/shm/go2rtc.yaml" ]]; then
echo "[INFO] Removing stale config from last run..."
rm /dev/shm/go2rtc.yaml
@@ -66,8 +70,6 @@ else
echo "[WARNING] Unable to remove existing go2rtc config. Changes made to your frigate config file may not be recognized. Please remove the /dev/shm/go2rtc.yaml from your docker host manually."
fi
set_libva_version
readonly config_path="/config"
if [[ -x "${config_path}/go2rtc" ]]; then


@@ -1,6 +1,5 @@
import json
import os
import shutil
import sys
from ruamel.yaml import YAML
@@ -35,10 +34,7 @@ except FileNotFoundError:
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
if shutil.which("ffmpeg") is None:
print(f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg")
else:
print("ffmpeg")
print(f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg")
elif path in INCLUDED_FFMPEG_VERSIONS:
print(f"/usr/lib/ffmpeg/{path}/bin/ffmpeg")
else:
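With this change a configured path of "default" always resolves to the bundled ffmpeg selected by DEFAULT_FFMPEG_VERSION, rather than preferring whatever ffmpeg happens to be on PATH. Given the env defaults set in the main Dockerfile, the script would print:

python3 /usr/local/ffmpeg/get_ffmpeg_path.py
# /usr/lib/ffmpeg/7.0/bin/ffmpeg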


@@ -2,7 +2,6 @@
import json
import os
import shutil
import sys
from pathlib import Path
@@ -13,6 +12,7 @@ from frigate.const import (
BIRDSEYE_PIPE,
DEFAULT_FFMPEG_VERSION,
INCLUDED_FFMPEG_VERSIONS,
LIBAVFORMAT_VERSION_MAJOR,
)
from frigate.ffmpeg_presets import parse_preset_hardware_acceleration_encode
@@ -66,29 +66,32 @@ elif go2rtc_config["log"].get("format") is None:
go2rtc_config["log"]["format"] = "text"
# ensure there is a default webrtc config
if not go2rtc_config.get("webrtc"):
if go2rtc_config.get("webrtc") is None:
go2rtc_config["webrtc"] = {}
# go2rtc should listen on 8555 tcp & udp by default
if not go2rtc_config["webrtc"].get("listen"):
if go2rtc_config["webrtc"].get("listen") is None:
go2rtc_config["webrtc"]["listen"] = ":8555"
if not go2rtc_config["webrtc"].get("candidates", []):
if go2rtc_config["webrtc"].get("candidates") is None:
default_candidates = []
# use internal candidate if it was discovered when running through the add-on
internal_candidate = os.environ.get(
"FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL", None
)
internal_candidate = os.environ.get("FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL")
if internal_candidate is not None:
default_candidates.append(internal_candidate)
# should set default stun server so webrtc can work
default_candidates.append("stun:8555")
go2rtc_config["webrtc"] = {"candidates": default_candidates}
else:
print(
"[INFO] Not injecting WebRTC candidates into go2rtc config as it has been set manually",
)
go2rtc_config["webrtc"]["candidates"] = default_candidates
# This prevents WebRTC from attempting to establish a connection to the internal
# docker IPs which are not accessible from outside the container itself and just
# wastes time during negotiation. Note that this is only necessary because
# Frigate container doesn't run in host network mode.
if go2rtc_config["webrtc"].get("filter") is None:
go2rtc_config["webrtc"]["filter"] = {"candidates": []}
elif go2rtc_config["webrtc"]["filter"].get("candidates") is None:
go2rtc_config["webrtc"]["filter"]["candidates"] = []
# sets default RTSP response to be equivalent to ?video=h264,h265&audio=aac
# this means user does not need to specify audio codec when using restream
@@ -112,10 +115,7 @@ else:
# ensure ffmpeg path is set correctly
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
if shutil.which("ffmpeg") is None:
ffmpeg_path = f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg"
else:
ffmpeg_path = "ffmpeg"
ffmpeg_path = f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg"
elif path in INCLUDED_FFMPEG_VERSIONS:
ffmpeg_path = f"/usr/lib/ffmpeg/{path}/bin/ffmpeg"
else:
@@ -127,14 +127,12 @@ elif go2rtc_config["ffmpeg"].get("bin") is None:
go2rtc_config["ffmpeg"]["bin"] = ffmpeg_path
# need to replace ffmpeg command when using ffmpeg4
if int(os.environ.get("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") < 59:
if go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"]["rtsp"] = (
"-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
)
else:
if LIBAVFORMAT_VERSION_MAJOR < 59:
rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
if go2rtc_config.get("ffmpeg") is None:
go2rtc_config["ffmpeg"] = {"path": ""}
go2rtc_config["ffmpeg"] = {"rtsp": rtsp_args}
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"]["rtsp"] = rtsp_args
for name in go2rtc_config.get("streams", {}):
stream = go2rtc_config["streams"][name]
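The LIBAVFORMAT_VERSION_MAJOR gate matches the probe the runtime scripts export; it can be run by hand to see which rtsp argument set a given ffmpeg build gets (libavformat 59 corresponds to ffmpeg 5.x):

major=$(ffmpeg -version | grep -Po "libavformat\W+\K\d+")
if (( major < 59 )); then echo "ffmpeg 4.x: legacy -stimeout rtsp args"; fi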


@@ -1,14 +1,16 @@
## Send a subrequest to verify if the user is authenticated and has permission to access the resource.
auth_request /auth;
## Save the upstream metadata response headers from Authelia to variables.
## Save the upstream metadata response headers from the auth request to variables
auth_request_set $user $upstream_http_remote_user;
auth_request_set $role $upstream_http_remote_role;
auth_request_set $groups $upstream_http_remote_groups;
auth_request_set $name $upstream_http_remote_name;
auth_request_set $email $upstream_http_remote_email;
## Inject the metadata response headers from the variables into the request made to the backend.
proxy_set_header Remote-User $user;
proxy_set_header Remote-Role $role;
proxy_set_header Remote-Groups $groups;
proxy_set_header Remote-Email $email;
proxy_set_header Remote-Name $name;


@@ -81,6 +81,9 @@ http {
open_file_cache_errors on;
aio on;
# file upload size
client_max_body_size 10M;
# https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
vod_open_file_thread_pool default;
@@ -106,6 +109,14 @@ http {
expires off;
keepalive_disable safari;
# vod module returns 502 for non-existent media
# https://github.com/kaltura/nginx-vod-module/issues/468
error_page 502 =404 /vod-not-found;
}
location = /vod-not-found {
return 404;
}
location /stream/ {
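The error_page mapping makes missing media surface as a clean 404 instead of the vod module's 502; a hedged smoke test, where the URL path and port are illustrative of a typical install rather than taken from this diff:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5000/vod/does-not-exist/index.m3u8
# 404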


@@ -0,0 +1,20 @@
./subset/000000005001.jpg
./subset/000000038829.jpg
./subset/000000052891.jpg
./subset/000000075612.jpg
./subset/000000098261.jpg
./subset/000000181542.jpg
./subset/000000215245.jpg
./subset/000000277005.jpg
./subset/000000288685.jpg
./subset/000000301421.jpg
./subset/000000334371.jpg
./subset/000000348481.jpg
./subset/000000373353.jpg
./subset/000000397681.jpg
./subset/000000414673.jpg
./subset/000000419312.jpg
./subset/000000465822.jpg
./subset/000000475732.jpg
./subset/000000559707.jpg
./subset/000000574315.jpg

(20 binary JPEG files added under docker/rockchip/COCO/subset/: the calibration images listed above, ranging from 14 KiB to 281 KiB.)


@@ -3,25 +3,32 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels as rk-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
FROM deps AS rk-frigate
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
pip3 install -U /deps/rk-wheels/*.whl
pip3 install --no-deps -U /deps/rk-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rockchip/COCO /COCO
COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt.so /usr/lib/
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV PATH="/usr/lib/ffmpeg/6.0/bin/:${PATH}"
ENV DEFAULT_FFMPEG_VERSION="6.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"


@@ -0,0 +1,82 @@
import os
import rknn
import yaml
from rknn.api import RKNN
try:
with open(rknn.__path__[0] + "/VERSION") as file:
tk_version = file.read().strip()
except FileNotFoundError:
tk_version = "unknown"  # fall back so a missing VERSION file cannot cause a NameError in the format call below
try:
with open("/config/conv2rknn.yaml", "r") as config_file:
configuration = yaml.safe_load(config_file)
except FileNotFoundError:
raise Exception("Please place a config.yaml file in /config/conv2rknn.yaml")
if configuration["config"] != None:
rknn_config = configuration["config"]
else:
rknn_config = {}
if not os.path.isdir("/config/model_cache/rknn_cache/onnx"):
raise Exception(
"Place the onnx models you want to convert to rknn format in /config/model_cache/rknn_cache/onnx"
)
if "soc" not in configuration:
try:
with open("/proc/device-tree/compatible") as file:
soc = file.read().split(",")[-1].strip("\x00")
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
configuration["soc"] = [
soc,
]
if "quantization" not in configuration:
configuration["quantization"] = False
if "output_name" not in configuration:
configuration["output_name"] = "{{input_basename}}"
for input_filename in os.listdir("/config/model_cache/rknn_cache/onnx"):
for soc in configuration["soc"]:
quant = "i8" if configuration["quantization"] else "fp16"
input_path = "/config/model_cache/rknn_cache/onnx/" + input_filename
input_basename = input_filename[: input_filename.rfind(".")]
output_filename = (
configuration["output_name"].format(
quant=quant,
input_basename=input_basename,
soc=soc,
tk_version=tk_version,
)
+ ".rknn"
)
output_path = "/config/model_cache/rknn_cache/" + output_filename
rknn_config["target_platform"] = soc
rknn = RKNN(verbose=True)
rknn.config(**rknn_config)
if rknn.load_onnx(model=input_path) != 0:
raise Exception("Error loading model.")
if (
rknn.build(
do_quantization=configuration["quantization"],
dataset="/COCO/coco_subset_20.txt",
)
!= 0
):
raise Exception("Error building model.")
if rknn.export_rknn(output_path) != 0:
raise Exception("Error exporting rknn model.")


@@ -1 +1,2 @@
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/rknn_toolkit_lite2-2.0.0b0-cp39-cp39-linux_aarch64.whl
rknn-toolkit2 == 2.3.0
rknn-toolkit-lite2 == 2.3.0


@@ -2,77 +2,48 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=5.7.3
ARG ROCM=6.3.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
#######################################################################
FROM ubuntu:focal as rocm
FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install gnupg wget
RUN mkdir --parents --mode=0755 /etc/apt/keyrings
RUN wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | tee /etc/apt/keyrings/rocm.gpg > /dev/null
COPY docker/rocm/rocm.list /etc/apt/sources.list.d/
COPY docker/rocm/rocm-pin-600 /etc/apt/preferences.d/
RUN apt-get update
RUN apt-get -y install --no-install-recommends migraphx hipfft roctracer
RUN apt-get -y install --no-install-recommends migraphx-dev
RUN apt update && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
RUN mkdir -p /opt/rocm-dist/etc/ld.so.conf.d/
RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM --platform=linux/amd64 debian:11 as debian-base
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install --no-install-recommends libelf1 libdrm2 libdrm-amdgpu1 libnuma1 kmod
RUN apt-get -y install python3
#######################################################################
# ROCm does not come with migraphx wrappers for python 3.9, so we build it here
FROM debian-base as debian-build
ARG ROCM
COPY --from=rocm /opt/rocm-$ROCM /opt/rocm-$ROCM
RUN ln -s /opt/rocm-$ROCM /opt/rocm
RUN apt-get -y install g++ cmake
RUN apt-get -y install python3-pybind11 python3.9-distutils python3-dev
WORKDIR /opt/build
COPY docker/rocm/migraphx .
RUN mkdir build && cd build && cmake .. && make install
#######################################################################
FROM deps AS deps-prelim
# need this to install libnuma1
RUN apt-get update
# no upgrade?!?!
RUN apt-get -y install libnuma1
RUN apt-get update && apt-get install -y libnuma1
WORKDIR /opt/frigate/
WORKDIR /opt/frigate
COPY --from=rootfs / /
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip" --break-system-packages
RUN python3 -m pip config set global.break-system-packages true
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN python3 -m pip install --upgrade pip \
&& pip3 uninstall -y onnxruntime-openvino \
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 install -r /requirements.txt
#######################################################################
@@ -86,12 +57,11 @@ COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-39-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1
COPY --from=rocm-dist / /
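At runtime the kernel selection can still be steered per host: HSA_OVERRIDE_GFX_VERSION is the usual ROCm escape hatch for GPUs that report an unsupported gfx target. A sketch (tag and version purely illustrative):

docker run --rm -e HSA_OVERRIDE_GFX_VERSION=9.0.0 frigate:latest-rocm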


@@ -1,26 +0,0 @@
cmake_minimum_required(VERSION 3.1)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release)
endif()
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
project(migraphx_py)
include_directories(/opt/rocm/include)
find_package(pybind11 REQUIRED)
pybind11_add_module(migraphx migraphx_py.cpp)
target_link_libraries(migraphx PRIVATE /opt/rocm/lib/libmigraphx.so /opt/rocm/lib/libmigraphx_tf.so /opt/rocm/lib/libmigraphx_onnx.so)
install(TARGETS migraphx
COMPONENT python
LIBRARY DESTINATION /opt/rocm/lib
)


@@ -1,582 +0,0 @@
/*
* The MIT License (MIT)
*
* Copyright (c) 2015-2022 Advanced Micro Devices, Inc. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/numpy.h>
#include <migraphx/program.hpp>
#include <migraphx/instruction_ref.hpp>
#include <migraphx/operation.hpp>
#include <migraphx/quantization.hpp>
#include <migraphx/generate.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/ref/target.hpp>
#include <migraphx/stringutils.hpp>
#include <migraphx/tf.hpp>
#include <migraphx/onnx.hpp>
#include <migraphx/load_save.hpp>
#include <migraphx/register_target.hpp>
#include <migraphx/json.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/op/common.hpp>
#ifdef HAVE_GPU
#include <migraphx/gpu/hip.hpp>
#endif
using half = half_float::half;
namespace py = pybind11;
#ifdef __clang__
#define MIGRAPHX_PUSH_UNUSED_WARNING \
_Pragma("clang diagnostic push") \
_Pragma("clang diagnostic ignored \"-Wused-but-marked-unused\"")
#define MIGRAPHX_POP_WARNING _Pragma("clang diagnostic pop")
#else
#define MIGRAPHX_PUSH_UNUSED_WARNING
#define MIGRAPHX_POP_WARNING
#endif
#define MIGRAPHX_PYBIND11_MODULE(...) \
MIGRAPHX_PUSH_UNUSED_WARNING \
PYBIND11_MODULE(__VA_ARGS__) \
MIGRAPHX_POP_WARNING
#define MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM(x, t) .value(#x, migraphx::shape::type_t::x)
namespace migraphx {
migraphx::value to_value(py::kwargs kwargs);
migraphx::value to_value(py::list lst);
template <class T, class F>
void visit_py(T x, F f)
{
if(py::isinstance<py::kwargs>(x))
{
f(to_value(x.template cast<py::kwargs>()));
}
else if(py::isinstance<py::list>(x))
{
f(to_value(x.template cast<py::list>()));
}
else if(py::isinstance<py::bool_>(x))
{
f(x.template cast<bool>());
}
else if(py::isinstance<py::int_>(x) or py::hasattr(x, "__index__"))
{
f(x.template cast<int>());
}
else if(py::isinstance<py::float_>(x))
{
f(x.template cast<float>());
}
else if(py::isinstance<py::str>(x))
{
f(x.template cast<std::string>());
}
else if(py::isinstance<migraphx::shape::dynamic_dimension>(x))
{
f(migraphx::to_value(x.template cast<migraphx::shape::dynamic_dimension>()));
}
else
{
MIGRAPHX_THROW("VISIT_PY: Unsupported data type!");
}
}
migraphx::value to_value(py::list lst)
{
migraphx::value v = migraphx::value::array{};
for(auto val : lst)
{
visit_py(val, [&](auto py_val) { v.push_back(py_val); });
}
return v;
}
migraphx::value to_value(py::kwargs kwargs)
{
migraphx::value v = migraphx::value::object{};
for(auto arg : kwargs)
{
auto&& key = py::str(arg.first);
auto&& val = arg.second;
visit_py(val, [&](auto py_val) { v[key] = py_val; });
}
return v;
}
} // namespace migraphx
namespace pybind11 {
namespace detail {
template <>
struct npy_format_descriptor<half>
{
static std::string format()
{
// following: https://docs.python.org/3/library/struct.html#format-characters
return "e";
}
static constexpr auto name() { return _("half"); }
};
} // namespace detail
} // namespace pybind11
template <class F>
void visit_type(const migraphx::shape& s, F f)
{
s.visit_type(f);
}
template <class T, class F>
void visit(const migraphx::raw_data<T>& x, F f)
{
x.visit(f);
}
template <class F>
void visit_types(F f)
{
migraphx::shape::visit_types(f);
}
template <class T>
py::buffer_info to_buffer_info(T& x)
{
migraphx::shape s = x.get_shape();
assert(s.type() != migraphx::shape::tuple_type);
if(s.dynamic())
MIGRAPHX_THROW("MIGRAPHX PYTHON: dynamic shape argument passed to to_buffer_info");
auto strides = s.strides();
std::transform(
strides.begin(), strides.end(), strides.begin(), [&](auto i) { return i * s.type_size(); });
py::buffer_info b;
visit_type(s, [&](auto as) {
// migraphx use int8_t data to store bool type, we need to
// explicitly specify the data type as bool for python
if(s.type() == migraphx::shape::bool_type)
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<bool>::format(),
s.ndim(),
s.lens(),
strides);
}
else
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<decltype(as())>::format(),
s.ndim(),
s.lens(),
strides);
}
});
return b;
}
migraphx::shape to_shape(const py::buffer_info& info)
{
migraphx::shape::type_t t;
std::size_t n = 0;
visit_types([&](auto as) {
if(info.format == py::format_descriptor<decltype(as())>::format() or
(info.format == "l" and py::format_descriptor<decltype(as())>::format() == "q") or
(info.format == "L" and py::format_descriptor<decltype(as())>::format() == "Q"))
{
t = as.type_enum();
n = sizeof(as());
}
else if(info.format == "?" and py::format_descriptor<decltype(as())>::format() == "b")
{
t = migraphx::shape::bool_type;
n = sizeof(bool);
}
});
if(n == 0)
{
MIGRAPHX_THROW("MIGRAPHX PYTHON: Unsupported data type " + info.format);
}
auto strides = info.strides;
std::transform(strides.begin(), strides.end(), strides.begin(), [&](auto i) -> std::size_t {
return n > 0 ? i / n : 0;
});
// scalar support
if(info.shape.empty())
{
return migraphx::shape{t};
}
else
{
return migraphx::shape{t, info.shape, strides};
}
}
MIGRAPHX_PYBIND11_MODULE(migraphx, m)
{
py::class_<migraphx::shape> shape_cls(m, "shape");
shape_cls
.def(py::init([](py::kwargs kwargs) {
auto v = migraphx::to_value(kwargs);
auto t = migraphx::shape::parse_type(v.get("type", "float"));
if(v.contains("dyn_dims"))
{
auto dyn_dims =
migraphx::from_value<std::vector<migraphx::shape::dynamic_dimension>>(
v.at("dyn_dims"));
return migraphx::shape(t, dyn_dims);
}
auto lens = v.get<std::size_t>("lens", {1});
if(v.contains("strides"))
return migraphx::shape(t, lens, v.at("strides").to_vector<std::size_t>());
else
return migraphx::shape(t, lens);
}))
.def("type", &migraphx::shape::type)
.def("lens", &migraphx::shape::lens)
.def("strides", &migraphx::shape::strides)
.def("ndim", &migraphx::shape::ndim)
.def("elements", &migraphx::shape::elements)
.def("bytes", &migraphx::shape::bytes)
.def("type_string", &migraphx::shape::type_string)
.def("type_size", &migraphx::shape::type_size)
.def("dyn_dims", &migraphx::shape::dyn_dims)
.def("packed", &migraphx::shape::packed)
.def("transposed", &migraphx::shape::transposed)
.def("broadcasted", &migraphx::shape::broadcasted)
.def("standard", &migraphx::shape::standard)
.def("scalar", &migraphx::shape::scalar)
.def("dynamic", &migraphx::shape::dynamic)
.def("__eq__", std::equal_to<migraphx::shape>{})
.def("__ne__", std::not_equal_to<migraphx::shape>{})
.def("__repr__", [](const migraphx::shape& s) { return migraphx::to_string(s); });
py::enum_<migraphx::shape::type_t>(shape_cls, "type_t")
MIGRAPHX_SHAPE_VISIT_TYPES(MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM);
py::class_<migraphx::shape::dynamic_dimension>(shape_cls, "dynamic_dimension")
.def(py::init<>())
.def(py::init<std::size_t, std::size_t>())
.def(py::init<std::size_t, std::size_t, std::set<std::size_t>>())
.def_readwrite("min", &migraphx::shape::dynamic_dimension::min)
.def_readwrite("max", &migraphx::shape::dynamic_dimension::max)
.def_readwrite("optimals", &migraphx::shape::dynamic_dimension::optimals)
.def("is_fixed", &migraphx::shape::dynamic_dimension::is_fixed);
py::class_<migraphx::argument>(m, "argument", py::buffer_protocol())
.def_buffer([](migraphx::argument& x) -> py::buffer_info { return to_buffer_info(x); })
.def(py::init([](py::buffer b) {
py::buffer_info info = b.request();
return migraphx::argument(to_shape(info), info.ptr);
}))
.def("get_shape", &migraphx::argument::get_shape)
.def("data_ptr",
[](migraphx::argument& x) { return reinterpret_cast<std::uintptr_t>(x.data()); })
.def("tolist",
[](migraphx::argument& x) {
py::list l{x.get_shape().elements()};
visit(x, [&](auto data) { l = py::cast(data.to_vector()); });
return l;
})
.def("__eq__", std::equal_to<migraphx::argument>{})
.def("__ne__", std::not_equal_to<migraphx::argument>{})
.def("__repr__", [](const migraphx::argument& x) { return migraphx::to_string(x); });
py::class_<migraphx::target>(m, "target");
py::class_<migraphx::instruction_ref>(m, "instruction_ref")
.def("shape", [](migraphx::instruction_ref i) { return i->get_shape(); })
.def("op", [](migraphx::instruction_ref i) { return i->get_operator(); });
py::class_<migraphx::module, std::unique_ptr<migraphx::module, py::nodelete>>(m, "module")
.def("print", [](const migraphx::module& mm) { std::cout << mm << std::endl; })
.def(
"add_instruction",
[](migraphx::module& mm,
const migraphx::operation& op,
std::vector<migraphx::instruction_ref>& args,
std::vector<migraphx::module*>& mod_args) {
return mm.add_instruction(op, args, mod_args);
},
py::arg("op"),
py::arg("args"),
py::arg("mod_args") = std::vector<migraphx::module*>{})
.def(
"add_literal",
[](migraphx::module& mm, py::buffer data) {
py::buffer_info info = data.request();
auto literal_shape = to_shape(info);
return mm.add_literal(literal_shape, reinterpret_cast<char*>(info.ptr));
},
py::arg("data"))
.def(
"add_parameter",
[](migraphx::module& mm, const std::string& name, const migraphx::shape shape) {
return mm.add_parameter(name, shape);
},
py::arg("name"),
py::arg("shape"))
.def(
"add_return",
[](migraphx::module& mm, std::vector<migraphx::instruction_ref>& args) {
return mm.add_return(args);
},
py::arg("args"))
.def("__repr__", [](const migraphx::module& mm) { return migraphx::to_string(mm); });
py::class_<migraphx::program>(m, "program")
.def(py::init([]() { return migraphx::program(); }))
.def("get_parameter_names", &migraphx::program::get_parameter_names)
.def("get_parameter_shapes", &migraphx::program::get_parameter_shapes)
.def("get_output_shapes", &migraphx::program::get_output_shapes)
.def("is_compiled", &migraphx::program::is_compiled)
.def(
"compile",
[](migraphx::program& p,
const migraphx::target& t,
bool offload_copy,
bool fast_math,
bool exhaustive_tune) {
migraphx::compile_options options;
options.offload_copy = offload_copy;
options.fast_math = fast_math;
options.exhaustive_tune = exhaustive_tune;
p.compile(t, options);
},
py::arg("t"),
py::arg("offload_copy") = true,
py::arg("fast_math") = true,
py::arg("exhaustive_tune") = false)
.def("get_main_module", [](const migraphx::program& p) { return p.get_main_module(); })
.def(
"create_module",
[](migraphx::program& p, const std::string& name) { return p.create_module(name); },
py::arg("name"))
.def("run",
[](migraphx::program& p, py::dict params) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
return p.eval(pm);
})
.def("run_async",
[](migraphx::program& p,
py::dict params,
std::uintptr_t stream,
std::string stream_name) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
migraphx::execution_environment exec_env{
migraphx::any_ptr(reinterpret_cast<void*>(stream), stream_name), true};
return p.eval(pm, exec_env);
})
.def("sort", &migraphx::program::sort)
.def("print", [](const migraphx::program& p) { std::cout << p << std::endl; })
.def("__eq__", std::equal_to<migraphx::program>{})
.def("__ne__", std::not_equal_to<migraphx::program>{})
.def("__repr__", [](const migraphx::program& p) { return migraphx::to_string(p); });
py::class_<migraphx::operation> op(m, "op");
op.def(py::init([](const std::string& name, py::kwargs kwargs) {
migraphx::value v = migraphx::value::object{};
if(kwargs)
{
v = migraphx::to_value(kwargs);
}
return migraphx::make_op(name, v);
}))
.def("name", &migraphx::operation::name);
py::enum_<migraphx::op::pooling_mode>(op, "pooling_mode")
.value("average", migraphx::op::pooling_mode::average)
.value("max", migraphx::op::pooling_mode::max)
.value("lpnorm", migraphx::op::pooling_mode::lpnorm);
py::enum_<migraphx::op::rnn_direction>(op, "rnn_direction")
.value("forward", migraphx::op::rnn_direction::forward)
.value("reverse", migraphx::op::rnn_direction::reverse)
.value("bidirectional", migraphx::op::rnn_direction::bidirectional);
m.def(
"argument_from_pointer",
[](const migraphx::shape shape, const int64_t address) {
return migraphx::argument(shape, reinterpret_cast<void*>(address));
},
py::arg("shape"),
py::arg("address"));
m.def(
"parse_tf",
[](const std::string& filename,
bool is_nhwc,
unsigned int batch_size,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::vector<std::string> output_names) {
return migraphx::parse_tf(
filename, migraphx::tf_options{is_nhwc, batch_size, map_input_dims, output_names});
},
"Parse tf protobuf (default format is nhwc)",
py::arg("filename"),
py::arg("is_nhwc") = true,
py::arg("batch_size") = 1,
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("output_names") = std::vector<std::string>());
m.def(
"parse_onnx",
[](const std::string& filename,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error,
int64_t max_loop_iterations) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
options.max_loop_iterations = max_loop_iterations;
return migraphx::parse_onnx(filename, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false,
py::arg("max_loop_iterations") = 10);
m.def(
"parse_onnx_buffer",
[](const std::string& onnx_buffer,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
return migraphx::parse_onnx_buffer(onnx_buffer, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false);
m.def(
"load",
[](const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::load(name, options);
},
"Load MIGraphX program",
py::arg("filename"),
py::arg("format") = "msgpack");
m.def(
"save",
[](const migraphx::program& p, const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::save(p, name, options);
},
"Save MIGraphX program",
py::arg("p"),
py::arg("filename"),
py::arg("format") = "msgpack");
m.def("get_target", &migraphx::make_target);
m.def("create_argument", [](const migraphx::shape& s, const std::vector<double>& values) {
if(values.size() != s.elements())
MIGRAPHX_THROW("Values and shape elements do not match");
migraphx::argument a{s};
a.fill(values.begin(), values.end());
return a;
});
m.def("generate_argument", &migraphx::generate_argument, py::arg("s"), py::arg("seed") = 0);
m.def("fill_argument", &migraphx::fill_argument, py::arg("s"), py::arg("value"));
m.def("quantize_fp16",
&migraphx::quantize_fp16,
py::arg("prog"),
py::arg("ins_names") = std::vector<std::string>{"all"});
m.def("quantize_int8",
&migraphx::quantize_int8,
py::arg("prog"),
py::arg("t"),
py::arg("calibration") = std::vector<migraphx::parameter_map>{},
py::arg("ins_names") = std::vector<std::string>{"dot", "convolution"});
#ifdef HAVE_GPU
m.def("allocate_gpu", &migraphx::gpu::allocate_gpu, py::arg("s"), py::arg("host") = false);
m.def("to_gpu", &migraphx::gpu::to_gpu, py::arg("arg"), py::arg("host") = false);
m.def("from_gpu", &migraphx::gpu::from_gpu);
m.def("gpu_sync", [] { migraphx::gpu::gpu_sync(); });
#endif
#ifdef VERSION_INFO
m.attr("__version__") = VERSION_INFO;
#else
m.attr("__version__") = "dev";
#endif
}


@@ -1 +1 @@
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v1.0.0/onnxruntime_rocm-1.17.3-cp39-cp39-linux_x86_64.whl
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl

View File

@@ -1,3 +0,0 @@
Package: *
Pin: release o=repo.radeon.com
Pin-Priority: 600

View File

@@ -2,7 +2,7 @@ variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "5.7.3"
default = "6.3.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
@@ -10,6 +10,13 @@ variable "HSA_OVERRIDE_GFX_VERSION" {
variable "HSA_OVERRIDE" {
default = "1"
}
target wget {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
target = "wget"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
@@ -26,6 +33,7 @@ target rocm {
dockerfile = "docker/rocm/Dockerfile"
contexts = {
deps = "target:deps",
wget = "target:wget",
rootfs = "target:rootfs"
}
platforms = ["linux/amd64"]

View File

@@ -1 +0,0 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/5.7.3 focal main

View File

@@ -6,11 +6,12 @@ ARG DEBIAN_FRONTEND=noninteractive
FROM deps AS rpi-deps
ARG TARGETARCH
RUN rm -rf /usr/lib/btbn-ffmpeg/
# Install dependencies
RUN --mount=type=bind,source=docker/rpi/install_deps.sh,target=/deps/install_deps.sh \
/deps/install_deps.sh
ENV DEFAULT_FFMPEG_VERSION="rpi"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"
WORKDIR /opt/frigate/
COPY --from=rootfs / /

View File

@@ -18,13 +18,17 @@ apt-get -qq install --no-install-recommends -y \
mkdir -p -m 600 /root/.gnupg
# enable non-free repo
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
echo "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list
apt update
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
# add raspberry pi repo
gpg --no-default-keyring --keyring /usr/share/keyrings/raspbian.gpg --keyserver keyserver.ubuntu.com --recv-keys 82B129927FA3303E
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bullseye main" | tee /etc/apt/sources.list.d/raspi.list
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bookworm main" | tee /etc/apt/sources.list.d/raspi.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y ffmpeg
mkdir -p /usr/lib/ffmpeg/rpi/bin
ln -svf /usr/bin/ffmpeg /usr/lib/ffmpeg/rpi/bin/ffmpeg
ln -svf /usr/bin/ffprobe /usr/lib/ffmpeg/rpi/bin/ffprobe
fi

View File

@@ -3,22 +3,17 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Make this a separate target so it can be built/cached optionally
FROM wheels as trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
# Add TensorRT wheels to another folder
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM tensorrt-base AS frigate-tensorrt
ENV TRT_VER=8.5.3
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl && \
ldconfig
ARG PIP_BREAK_SYSTEM_PACKAGES
ENV TRT_VER=8.6.1
# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN pip3 install -U -r /requirements-tensorrt.txt && ldconfig
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.9/dist-packages/tensorrt:/usr/local/cuda/lib64:/usr/local/lib/python3.9/dist-packages/nvidia/cufft/lib
WORKDIR /opt/frigate/
COPY --from=rootfs / /

View File

@@ -7,20 +7,25 @@ ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS build-wheels
ARG DEBIAN_FRONTEND
# Add deadsnakes PPA for python3.11
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends \
python3.9 python3.9-dev \
python3.11 python3.11-dev \
wget build-essential cmake git \
&& rm -rf /var/lib/apt/lists/*
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
# Ensure python3 defaults to python3.11
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
FROM build-wheels AS trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
@@ -41,11 +46,12 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
&& TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh
COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
ADD https://nvidia.box.com/shared/static/9aemm4grzbbkfaesg5l7fplgjtmswhj8.whl /tmp/onnxruntime_gpu-1.15.1-cp39-cp39-linux_aarch64.whl
# See https://elinux.org/Jetson_Zoo#ONNX_Runtime
ADD https://nvidia.box.com/shared/static/9yvw05k6u343qfnkhdv2x6xhygze0aq1.whl /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt \
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.15.1-cp39-cp39-linux_aarch64.whl
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
FROM build-wheels AS trt-model-wheels
ARG DEBIAN_FRONTEND
@@ -67,11 +73,21 @@ RUN --mount=type=bind,source=docker/tensorrt/build_jetson_ffmpeg.sh,target=/deps
# Frigate w/ TensorRT for NVIDIA Jetson platforms
FROM tensorrt-base AS frigate-tensorrt
RUN apt-get update \
&& apt-get install -y python-is-python3 libprotobuf17 \
&& apt-get install -y python-is-python3 libprotobuf23 \
&& rm -rf /var/lib/apt/lists/*
RUN rm -rf /usr/lib/btbn-ffmpeg/
COPY --from=jetson-ffmpeg /rootfs /
ENV DEFAULT_FFMPEG_VERSION="jetson"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"
# ffmpeg runtime dependencies
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends \
libx264-163 libx265-199 libegl1 \
&& rm -rf /var/lib/apt/lists/*
# Fixes "Error loading shared libs"
RUN mkdir -p /etc/ld.so.conf.d && echo /usr/lib/ffmpeg/jetson/lib/ > /etc/ld.so.conf.d/ffmpeg.conf
COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
@@ -81,3 +97,6 @@ RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels
WORKDIR /opt/frigate/
COPY --from=rootfs / /
# Fixes "Error importing detector runtime: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block"
ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libstdc++.so.6

View File

@@ -3,11 +3,12 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.03-py3
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps
ARG TARGETARCH
ARG COMPUTE_LEVEL
RUN apt-get update \
@@ -16,15 +17,26 @@ RUN apt-get update \
RUN --mount=type=bind,source=docker/tensorrt/detector/tensorrt_libyolo.sh,target=/tensorrt_libyolo.sh \
/tensorrt_libyolo.sh
# COPY required individual CUDA deps
RUN mkdir -p /usr/local/cuda-deps
RUN if [ "$TARGETARCH" = "amd64" ]; then \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcurand.so.* /usr/local/cuda-deps/ && \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libnvrtc.so.* /usr/local/cuda-deps/ ; \
fi
# Frigate w/ TensorRT Support as separate image
FROM deps AS tensorrt-base
#Disable S6 Global timeout
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
# COPY TensorRT Model Generation Deps
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=trt-deps /usr/local/cuda-12.* /usr/local/cuda
# COPY Individual CUDA deps folder
COPY --from=trt-deps /usr/local/cuda-deps /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""

View File

@@ -5,7 +5,7 @@
set -euxo pipefail
INSTALL_PREFIX=/rootfs/usr/local
INSTALL_PREFIX=/rootfs/usr/lib/ffmpeg/jetson
apt-get -qq update
apt-get -qq install -y --no-install-recommends build-essential ccache clang cmake pkg-config
@@ -14,14 +14,27 @@ apt-get -qq install -y --no-install-recommends libx264-dev libx265-dev
pushd /tmp
# Install libnvmpi to enable nvmpi decoders (h264_nvmpi, hevc_nvmpi)
if [ -e /usr/local/cuda-10.2 ]; then
if [ -e /usr/local/cuda-12 ]; then
# assume Jetpack 6.2
apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
echo "deb https://repo.download.nvidia.com/jetson/common r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
echo "deb https://repo.download.nvidia.com/jetson/t234 r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
echo "deb https://repo.download.nvidia.com/jetson/ffmpeg r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
mkdir -p /opt/nvidia/l4t-packages/
touch /opt/nvidia/l4t-packages/.nv-l4t-disable-boot-fw-update-in-preinstall
apt-get update
apt-get -qq install -y --no-install-recommends -o Dpkg::Options::="--force-confold" nvidia-l4t-jetson-multimedia-api
elif [ -e /usr/local/cuda-10.2 ]; then
# assume Jetpack 4.X
wget -q https://developer.nvidia.com/embedded/L4T/r32_Release_v5.0/T186/Jetson_Multimedia_API_R32.5.0_aarch64.tbz2 -O jetson_multimedia_api.tbz2
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
else
# assume Jetpack 5.X
wget -q https://developer.nvidia.com/downloads/embedded/l4t/r35_release_v3.1/release/jetson_multimedia_api_r35.3.1_aarch64.tbz2 -O jetson_multimedia_api.tbz2
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
fi
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
wget -q https://github.com/AndBobsYourUncle/jetson-ffmpeg/archive/9c17b09.zip -O jetson-ffmpeg.zip
unzip jetson-ffmpeg.zip && rm jetson-ffmpeg.zip && mv jetson-ffmpeg-* jetson-ffmpeg && cd jetson-ffmpeg

View File

@@ -6,23 +6,23 @@ mkdir -p /trt-wheels
if [[ "${TARGETARCH}" == "arm64" ]]; then
# NVIDIA supplies python-tensorrt for python3.8, but frigate uses python3.9,
# NVIDIA supplies python-tensorrt for python3.10, but frigate uses python3.11,
# so we must build python-tensorrt ourselves.
# Get python-tensorrt source
mkdir /workspace
mkdir -p /workspace
cd /workspace
git clone -b ${TENSORRT_VER} https://github.com/NVIDIA/TensorRT.git --depth=1
git clone -b release/8.6 https://github.com/NVIDIA/TensorRT.git --depth=1
# Collect dependencies
EXT_PATH=/workspace/external && mkdir -p $EXT_PATH
pip3 install pybind11 && ln -s /usr/local/lib/python3.9/dist-packages/pybind11 $EXT_PATH/pybind11
ln -s /usr/include/python3.9 $EXT_PATH/python3.9
pip3 install pybind11 && ln -s /usr/local/lib/python3.11/dist-packages/pybind11 $EXT_PATH/pybind11
ln -s /usr/include/python3.11 $EXT_PATH/python3.11
ln -s /usr/include/aarch64-linux-gnu/NvOnnxParser.h /workspace/TensorRT/parsers/onnx/
# Build wheel
cd /workspace/TensorRT/python
EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=9 TARGET_ARCHITECTURE=aarch64 /bin/bash ./build.sh
mv build/dist/*.whl /trt-wheels/
EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=11 TARGET_ARCHITECTURE=aarch64 TENSORRT_MODULE=tensorrt /bin/bash ./build.sh
mv build/bindings_wheel/dist/*.whl /trt-wheels/
fi

View File

@@ -1,6 +1,8 @@
/usr/local/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.9/dist-packages/tensorrt
/usr/local/cuda
/usr/local/lib/python3.11/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.11/dist-packages/tensorrt
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib

View File

@@ -20,7 +20,7 @@ FIRST_MODEL=true
MODEL_DOWNLOAD=""
MODEL_CONVERT=""
if [ -z "$YOLO_MODELS"]; then
if [ -z "$YOLO_MODELS" ]; then
echo "tensorrt model preparation disabled"
exit 0
fi
@@ -64,7 +64,7 @@ fi
# order to run libyolo here.
# On Jetpack 5.0, these libraries are not mounted by the runtime and are supplied by the image.
if [[ "$(arch)" == "aarch64" ]]; then
if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra ]]; then
if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra && ! -e /usr/lib/aarch64-linux-gnu/tegra-egl ]]; then
echo "ERROR: Container must be launched with nvidia runtime"
exit 1
elif [[ ! -e /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 ||

View File

@@ -1,14 +1,17 @@
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.5.3.*; platform_machine == 'x86_64'
cuda-python == 11.8; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
tensorrt == 8.6.1; platform_machine == 'x86_64'
tensorrt_bindings == 8.6.1; platform_machine == 'x86_64'
cuda-python == 11.8.*; platform_machine == 'x86_64'
cython == 3.0.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12 == 9.5.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu11==10.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.18.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1 +1 @@
cuda-python == 11.7; platform_machine == 'aarch64'
cuda-python == 12.6.*; platform_machine == 'aarch64'

View File

@@ -13,13 +13,29 @@ variable "TRT_BASE" {
variable "COMPUTE_LEVEL" {
default = ""
}
variable "BASE_HOOK" {
# Ensure an up-to-date python 3.11 is available in jetson images
default = <<EOT
if grep -iq \"ubuntu\" /etc/os-release; then
. /etc/os-release
# Add the deadsnakes PPA repository
echo "deb https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list
echo "deb-src https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list
# Add deadsnakes signing key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F23C5A6CF475977595C89F51BA6932366A755776
fi
EOT
}
target "_build_args" {
args = {
BASE_IMAGE = BASE_IMAGE,
SLIM_BASE = SLIM_BASE,
TRT_BASE = TRT_BASE,
COMPUTE_LEVEL = COMPUTE_LEVEL
COMPUTE_LEVEL = COMPUTE_LEVEL,
BASE_HOOK = BASE_HOOK
}
platforms = ["linux/${ARCH}"]
}
@@ -79,7 +95,6 @@ target "tensorrt" {
wget = "target:wget",
tensorrt-base = "target:tensorrt-base",
rootfs = "target:rootfs"
wheels = "target:wheels"
}
target = "frigate-tensorrt"
inherits = ["_build_args"]

View File

@@ -1,41 +1,41 @@
BOARDS += trt
JETPACK4_BASE ?= timongentzsch/l4t-ubuntu20-opencv:latest # L4T 32.7.1 JetPack 4.6.1
JETPACK5_BASE ?= nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime # L4T 35.3.1 JetPack 5.1.1
JETPACK6_BASE ?= nvcr.io/nvidia/tensorrt:23.12-py3-igpu
X86_DGPU_ARGS := ARCH=amd64 COMPUTE_LEVEL="50 60 70 80 90"
JETPACK4_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK4_BASE) SLIM_BASE=$(JETPACK4_BASE) TRT_BASE=$(JETPACK4_BASE)
JETPACK5_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK5_BASE) SLIM_BASE=$(JETPACK5_BASE) TRT_BASE=$(JETPACK5_BASE)
JETPACK6_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK6_BASE) SLIM_BASE=$(JETPACK6_BASE) TRT_BASE=$(JETPACK6_BASE)
local-trt: version
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt \
--load
local-trt-jp4: version
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp4 \
--load
local-trt-jp5: version
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp5 \
--load
local-trt-jp6: version
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp6 \
--load
build-trt:
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6
push-trt: build-trt
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt \
--push
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4 \
--push
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5 \
--push
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6 \
--push

View File

@@ -4,7 +4,9 @@ title: Advanced Options
sidebar_label: Advanced Options
---
### `logger`
### Logging
#### Frigate `logger`
Change the default log level for troubleshooting purposes.
@@ -28,6 +30,18 @@ Examples of available modules are:
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
#### Go2RTC Logging
See [the go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#module-log) for logging configuration.
```yaml
go2rtc:
streams:
# ...
log:
exec: trace
```
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container (i.e., within HassOS).
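A minimal example (the variable name and value are placeholders):
```yaml
environment_vars:
  VARIABLE_NAME: variable_value
```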
@@ -162,15 +176,13 @@ listen [::]:5000 ipv6only=off;
### Custom ffmpeg build
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exists some hardware setups which have incompatibilities with the included build. In this case, statically built ffmpeg binary can be downloaded to /config and used.
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, some hardware setups have incompatibilities with the included build. In this case, statically built `ffmpeg` and `ffprobe` binaries can be placed in `/config/custom-ffmpeg/bin` for Frigate to use.
To do this:
1. Download your ffmpeg build and uncompress to the Frigate config folder.
2. Update your docker-compose or docker CLI to include `'/home/appdata/frigate/custom-ffmpeg':'/usr/lib/btbn-ffmpeg':'ro'` in the volume mappings.
3. Restart Frigate and the custom version will be used if the mapping was done correctly.
NOTE: The folder that is set for the config needs to be the folder that contains `/bin`. So if the full structure is `/home/appdata/frigate/custom-ffmpeg/bin/ffmpeg` then the `ffmpeg -> path` field should be `/config/custom-ffmpeg/bin`.
1. Download your ffmpeg build and uncompress it to the `/config/custom-ffmpeg` folder. Verify that both the `ffmpeg` and `ffprobe` binaries are located in `/config/custom-ffmpeg/bin`.
2. Update the `ffmpeg.path` in your Frigate config to `/config/custom-ffmpeg`.
3. Restart Frigate and the custom version will be used if the steps above were done correctly.
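In config form, step 2 is simply:
```yaml
ffmpeg:
  path: /config/custom-ffmpeg
```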
### Custom go2rtc version
@@ -178,7 +190,7 @@ Frigate currently includes go2rtc v1.9.2, there may be certain cases where you w
To do this:
1. Download the go2rtc build to the /config folder.
1. Download the go2rtc build to the `/config` folder.
2. Rename the build to `go2rtc`.
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used; you can verify by checking the go2rtc logs.
@@ -189,16 +201,16 @@ When frigate starts up, it checks whether your config file is valid, and if it i
### Via API
Frigate can accept a new configuration file as JSON at the `/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
Frigate can accept a new configuration file as JSON at the `/api/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
```bash
curl -X POST http://frigate_host:5000/config/save -d @config.json
curl -X POST http://frigate_host:5000/api/config/save -d @config.json
```
If you'd like, you can use your YAML config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to JSON:
```bash
yq r -j config.yml | curl -X POST http://frigate_host:5000/config/save -d @-
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```
### Via Command Line

View File

@@ -24,6 +24,11 @@ On startup, an admin user and password are generated and printed in the logs. It
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
```yaml
auth:
reset_admin_password: true
```
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
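For example, assuming the `failed_login_rate_limit` auth option available in recent versions, a multi-window limit in that string notation looks like:
```yaml
auth:
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```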
@@ -92,15 +97,35 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
### Header mapping
If you have disabled Frigate's authentication and your proxy supports passing a header with the authenticated username, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` value. Header names are not case sensitive.
If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
```yaml
proxy:
...
header_map:
user: x-forwarded-user
role: x-forwarded-role
```
Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
#### Port Considerations
**Authenticated Port (8971)**
- Header mapping is **fully supported**.
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.
**Unauthenticated Port (5000)**
- Headers are **ignored** for role enforcement.
- All requests are treated as **anonymous**.
- The `remote-role` value is **overridden** to **admin-level access**.
- This design ensures **unauthenticated internal use** within a trusted network.
Note that only the following list of headers are permitted by default:
```
@@ -121,8 +146,6 @@ X-authentik-uid
If you would like to add more options, you can overwrite the default file with a docker bind mount at `/usr/local/nginx/conf/proxy_trusted_headers.conf`. Reference the source code for the default file formatting.
Future versions of Frigate may leverage group and role headers for authorization in Frigate as well.
### Login page redirection
Frigate gracefully performs login page redirection that should work with most authentication proxies. If your reverse proxy returns a `Location` header on `401`, `302`, or `307` unauthorized responses, Frigate's frontend will automatically detect it and redirect to that URL.
@@ -130,3 +153,31 @@ Frigate gracefully performs login page redirection that should work with most au
### Custom logout url
If your reverse proxy has a dedicated logout url, you can specify using the `logout_url` config option. This will update the link for the `Logout` link in the UI.
## User Roles
Frigate supports user roles to control access to certain features in the UI and API, such as managing users or modifying configuration settings. Roles are assigned to users in the database or through proxy headers and are enforced when accessing the UI or API through the authenticated port (`8971`).
### Supported Roles
- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.
### Role Enforcement
When using the authenticated port (`8971`), roles are validated via the JWT token or proxy headers (e.g., `remote-role`).
On the internal **unauthenticated** port (`5000`), roles are **not enforced**. All requests are treated as **anonymous**, granting access equivalent to the **admin** role without restrictions.
To use role-based access control, you must connect to Frigate via the **authenticated port (`8971`)** directly or through a reverse proxy.
### Role Visibility in the UI
- When logged in via port `8971`, your **username and role** are displayed in the **account menu** (bottom corner).
- When using port `5000`, the UI will always display "anonymous" for the username and "admin" for the role.
### Managing User Roles
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.

View File

@@ -167,3 +167,7 @@ To maintain object tracking during PTZ moves, Frigate tracks the motion of your
### Calibration seems to have completed, but the camera is not actually moving to track my object. Why?
Some cameras have firmware that reports that FOV RelativeMove, the ONVIF command that Frigate uses for autotracking, is supported. However, if the camera does not pan or tilt when an object comes into the required zone, your camera's firmware does not actually support FOV RelativeMove. One such camera is the Uniview IPC672LR-AX4DUPK. It actually moves its zoom motor instead of panning and tilting and does not follow the ONVIF standard whatsoever.
### Frigate reports an error saying that calibration has failed. Why?
Calibration measures the amount of time it takes for Frigate to make a series of movements with your PTZ. This error message is recorded in the log if these values are too high for Frigate to support calibrated autotracking. This is often the case when your camera's motor or network connection is too slow or your camera's firmware doesn't report the motor status in a timely manner. You can try running without calibration (just remove the `movement_weights` line from your config and restart), but if calibration fails, this often means that autotracking will behave unpredictably.
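Concretely, running uncalibrated just means leaving `movement_weights` out of the autotracking block; a minimal sketch, with camera name, host, and credentials as placeholders:
```yaml
cameras:
  ptz_camera:
    onvif:
      host: 192.168.1.100
      port: 8000
      user: admin
      password: password
      autotracking:
        enabled: true
        # movement_weights intentionally omitted to run without calibration
```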

View File

@@ -22,7 +22,7 @@ Note that mjpeg cameras require encoding the video into h264 for recording, and
```yaml
go2rtc:
streams:
mjpeg_cam: "ffmpeg:{your_mjpeg_stream_url}#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
cameras:
...
@@ -65,19 +65,32 @@ ffmpeg:
## Model/vendor specific setup
### Amcrest & Dahua
Connect to Amcrest & Dahua cameras via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=2 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=3 # new higher end cameras support a fourth stream with another mid resolution (1280x720, 1920x1080)
```
### Annke C800
This camera is H.265 only. To be able to play clips on some devices (like MacOs or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to aac. Unfortunately direct playback of in the browser is not working (yet), but the downloaded clip can be played locally.
This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone), the H.265 stream has to be adjusted using the `apple_compatibility` config.
```yaml
cameras:
annkec800: # <------ Name the camera
ffmpeg:
apple_compatibility: true # <- Adds compatibility with macOS and iPhone
output_args:
record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac
record: preset-record-generic-audio-aac
inputs:
- path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
- path: rtsp://USERNAME:PASSWORD@CAMERA-IP/H264/ch1/main/av_stream # <----- Update for your camera
roles:
- detect
- record
@@ -95,6 +108,29 @@ ffmpeg:
input_args: preset-rtsp-blue-iris
```
### Hikvision Cameras
Connect to Hikvision cameras via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/101 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/102 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/103 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
```
:::note
[Some users have reported](https://www.reddit.com/r/frigate_nvr/comments/1hg4ze7/hikvision_security_settings) that newer Hikvision cameras require adjustments to the security settings:
```
RTSP Authentication - digest/basic
RTSP Digest Algorithm - MD5
WEB Authentication - digest/basic
WEB Digest Algorithm - MD5
```
:::
### Reolink Cameras
Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the HTTP stream is recommended.

View File

@@ -7,7 +7,7 @@ title: Camera Configuration
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing tracked objects and recordings can still be accessed. Live streams, recording and detecting are not working. Camera specific configurations will be used.
A camera is enabled by default but can be disabled by using `enabled: False`. Cameras that are disabled through the configuration file will not appear in the Frigate UI and will not consume system resources.
Each role can only be assigned to one input per camera. The options for roles are as follows:

View File

@@ -0,0 +1,56 @@
---
id: face_recognition
title: Face Recognition
---
Face recognition allows people to be assigned names, and when their face is recognized Frigate will assign the person's name as a sub label. This information is included in the UI and filters, as well as in notifications.
Frigate recognizes faces using OpenCV's (cv2) Local Binary Patterns face recognizer, which runs locally. A lightweight face landmark detection model is also used to align faces before running them through the face recognizer.
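For illustration only, a bare-bones version of this technique with OpenCV looks roughly like the sketch below. It requires the `opencv-contrib-python` package, the file names and labels are placeholders, and Frigate's actual pipeline additionally performs face detection and landmark-based alignment first.

```python
import cv2
import numpy as np

# Aligned, grayscale face crops of a consistent size (paths are placeholders)
faces = [
    cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    for path in ["alice_1.png", "alice_2.png", "bob_1.png"]
]
labels = np.array([0, 0, 1])  # 0 = alice, 1 = bob

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# For LBPH, lower confidence values indicate a closer match
label, confidence = recognizer.predict(
    cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
)
print(label, confidence)
```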
## Configuration
Face recognition is disabled by default and must be enabled in your config file before it can be used. Face recognition is a global configuration setting.
```yaml
face_recognition:
enabled: true
```
## Dataset
The number of images needed for a sufficient training set for face recognition varies depending on several factors:
- Diversity of the dataset: A dataset with diverse images, including variations in lighting, pose, and facial expressions, will require fewer images per person than a less diverse dataset.
- Desired accuracy: The higher the desired accuracy, the more images are typically needed.
However, here are some general guidelines:
- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.
## Creating a Robust Training Set
The accuracy of face recognition is heavily dependent on the quality of data given to it for training. It is recommended to build the face training library in phases.
:::tip
When choosing images to include in the face training set it is recommended to always follow these recommendations:
- If it is difficult to make out details in a person's face it will not be helpful in training.
- Avoid images with under/over-exposure.
- Avoid blurry / pixelated images.
- Be careful when uploading images of people when they are wearing clothing that covers a lot of their face as this may confuse the training.
- Do not upload too many images at the same time; it is recommended to train 4-6 images for each person each day so it is easier to know whether the previously added images helped or hurt performance.
:::
### Step 1 - Building a Strong Foundation
When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-2 photos taken by a smartphone for each person. It is important that the person's face in the photo is straight-on and not turned, which will ensure a good starting point.
Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle. Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
### Step 2 - Expanding The Dataset
Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.

View File

@@ -5,19 +5,13 @@ title: Generative AI
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle. Descriptions can also be regenerated manually via the Frigate UI.
:::info
Semantic Search must be enabled to use Generative AI.
:::
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you are manually entering a description for a tracked object prior to its end, this will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
@@ -116,7 +110,7 @@ genai:
model: gpt-4o
```
::: note
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
@@ -154,6 +148,15 @@ While generating simple descriptions of detected objects is useful, understandin
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
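As an illustrative sketch (not part of Frigate itself), a minimal paho-mqtt 2.x subscriber for this topic could look like the following; the broker address is a placeholder.

```python
import json

import paho.mqtt.client as mqtt  # pip install paho-mqtt


def on_message(client, userdata, message):
    payload = json.loads(message.payload)
    # The payload contains at least event_id and description
    print(f"{payload['event_id']}: {payload['description']}")


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("mqtt.local")  # placeholder broker address
client.subscribe("frigate/tracked_object_update")
client.loop_forever()
```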
If you are looking to get notifications earlier than when an object ceases to be tracked, an additional send trigger of `after_significant_updates` can be configured.
```yaml
genai:
send_triggers:
tracked_object_end: true # default
after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:

View File

@@ -175,6 +175,16 @@ For more information on the various values across different distributions, see h
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
#### Stats for SR-IOV devices
When using virtualized GPUs via SR-IOV, additional args are needed for GPU stats to function. This can be enabled with the following config:
```yaml
telemetry:
stats:
sriov: True
```
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
@@ -285,10 +295,8 @@ These instructions were originally based on the [Jellyfin documentation](https:/
## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)
A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 4.6, use the
`stable-tensorrt-jp4` tagged image, or if your Jetson host is running Jetpack 5.0+, use the `stable-tensorrt-jp5`
tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform,
but the image will still allow hardware decoding and tensorrt object detection.
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 5.0+, use the `stable-tensorrt-jp5`
tagged image, or if your Jetson host is running Jetpack 6.0+, use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so Frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
You will need to use the image with the nvidia container runtime:

View File

@@ -0,0 +1,152 @@
---
id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters or recognized name as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
When a plate is recognized, the detected characters or recognized name is:
- Added as a `sub_label` to the `car` tracked object.
- Viewable in the Review Item Details pane in Review and the Tracked Object Details pane in Explore.
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` for the tracked object.
## Model Requirements
Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
:::note
Frigate needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera or have a zoomed-in view, make sure the camera captures enough of the `car` for Frigate to detect it reliably.
:::
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
```yaml
lpr:
enabled: True
```
Ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune the LPR feature using these optional parameters:
### Detection
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
- Default: `0.7`
- Note: If you are using a Frigate+ model and you set the `threshold` in your objects config for `license_plate` higher than this value, recognition will never run. It's best to ensure these values match, or that this `detection_threshold` is lower than your object config `threshold`.
- **`min_area`**: Defines the minimum size (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
### Recognition
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a sub label.
- Default: `0.9`.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a sub label to an object.
- Use this to filter out short, incomplete, or incorrect detections.
- **`format`**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded.
- `"^[A-Z]{1,3} [A-Z]{1,2} [0-9]{1,4}$"` matches plates like "B AB 1234" or "M X 7"
- `"^[A-Z]{2}[0-9]{2} [A-Z]{3}$"` matches plates like "AB12 XYZ" or "XY68 ABC"
- Websites like https://regex101.com/ can help test regular expressions for your plates.
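To sanity-check an expression locally instead, a quick Python test of the second pattern above (with hypothetical plates as input) could look like:

```python
import re

plate_format = re.compile(r"^[A-Z]{2}[0-9]{2} [A-Z]{3}$")

for plate in ["AB12 XYZ", "XY68 ABC", "B AB 1234"]:
    print(plate, bool(plate_format.match(plate)))  # True, True, False
```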
### Matching
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` objects when a recognized plate matches a known value.
- These labels appear in the UI, filters, and notifications.
- **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will _not_ operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.
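Conceptually this is an edit-distance comparison. As a rough sketch of the idea (not Frigate's implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(
                min(
                    prev[j] + 1,  # deletion
                    cur[j - 1] + 1,  # insertion
                    prev[j - 1] + (ca != cb),  # substitution
                )
            )
        prev = cur
    return prev[-1]


# With match_distance: 1, "ABCBE" and "ABCD" match the known plate "ABCDE"
for detected in ["ABCBE", "ABCD", "XYZZY"]:
    print(detected, levenshtein(detected, "ABCDE") <= 1)  # True, True, False
```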
## Configuration Examples
```yaml
lpr:
enabled: True
min_area: 1500 # Ignore plates smaller than 1500 pixels
min_plate_length: 4 # Only recognize plates with 4 or more characters
known_plates:
Wife's Car:
- "ABC-1234"
- "ABC-I234" # Accounts for potential confusion between the number one (1) and capital letter I
Johnny:
- "J*N-*234" # Matches JHN-1234 and JMN-I234, but also note that "*" matches any number of characters
Sally:
- "[S5]LL 1234" # Matches both SLL 1234 and 5LL 1234
Work Trucks:
- "EMP-[0-9]{3}[A-Z]" # Matches plates like EMP-123A, EMP-456Z
```
```yaml
lpr:
enabled: True
min_area: 4000 # Run recognition on larger plates only
recognition_threshold: 0.85
format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
match_distance: 1 # Allow one character variation in plate matching
known_plates:
Delivery Van:
- "RJ K5678"
- "UP A1234"
Supervisor:
- "MN D3163"
```
## FAQ
### Why isn't my license plate being detected and recognized?
Ensure that:
- Your camera has a clear, human-readable, well-lit view of the plate. If you can't read the plate, Frigate certainly won't be able to. This may require changing video size, quality, or frame rate settings on your camera, depending on your scene and how fast the vehicles are traveling.
- The plate is large enough in the image (try adjusting `min_area`) or increasing the resolution of your camera's stream.
- A `car` is detected first, as LPR only runs on recognized vehicles.
If you are using a Frigate+ model or a custom model that detects license plates, ensure that `license_plate` is added to your list of objects to track.
If you are using the free model that ships with Frigate, you should _not_ add `license_plate` to the list of objects to track.
### Can I run LPR without detecting `car` objects?
No, Frigate requires a `car` to be detected first before recognizing a license plate.
### How can I improve detection accuracy?
- Use high-quality cameras with good resolution.
- Adjust `detection_threshold` and `recognition_threshold` values.
- Define a `format` regex to filter out invalid detections.
### Does LPR work at night?
Yes, but performance depends on camera quality, lighting, and infrared capabilities. Make sure your camera can capture clear images of plates at night.
### How can I match known plates with minor variations?
Use `match_distance` to allow small character mismatches. Alternatively, define multiple variations in `known_plates`.
### How do I debug LPR issues?
- View MQTT messages for `frigate/events` to verify detected plates.
- Adjust `detection_threshold` and `recognition_threshold` settings.
- If you are using a Frigate+ model or a model that detects license plates, watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected with a `car`.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only enable this when necessary.
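In config form, that logger addition looks like the following (using the `logger` structure from the Advanced Options docs):

```yaml
logger:
  default: info
  logs:
    frigate.data_processing.common.license_plate: debug
```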
### Will LPR slow down my system?
LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results.

View File

@@ -3,9 +3,9 @@ id: live
title: Live View
---
Frigate intelligently displays your camera streams on the Live view dashboard. Your camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion is detected, cameras seamlessly switch to a live stream.
Frigate intelligently displays your camera streams on the Live view dashboard. By default, Frigate employs "smart streaming" where camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion or active objects are detected, cameras seamlessly switch to a live stream.
## Live View technologies
### Live View technologies
Frigate intelligently uses three different streaming technologies to display your camera streams on the dashboard and the single camera view, switching between available modes based on network bandwidth, player errors, or required features like two-way talk. The highest quality and fluency of the Live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
@@ -51,19 +51,32 @@ go2rtc:
- ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
### Setting Streams For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
You can configure Frigate to allow manual selection of the stream you want to view in the Live UI. For example, you may want to view your camera's substream on mobile devices, but the full resolution stream on desktop devices. Setting the `live -> streams` list will populate a dropdown in the UI's Live view that allows you to choose between the streams. This stream setting is _per device_ and is saved in your browser's local storage.
Additionally, when creating and editing camera groups in the UI, you can choose the stream you want to use for your camera group's Live dashboard.
:::note
Frigate's default dashboard ("All Cameras") will always use the first entry you've defined in `streams:` when playing live streams from your cameras.
:::
Configure the `streams` option with a "friendly name" for your stream followed by the go2rtc stream name.
Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the `streams` configuration, only go2rtc stream names.
```yaml
go2rtc:
streams:
test_cam:
- rtsp://192.168.1.5:554/live0 # <- stream which supports video & aac audio.
- rtsp://192.168.1.5:554/live_main # <- stream which supports video & aac audio.
- "ffmpeg:test_cam#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
test_cam_sub:
- rtsp://192.168.1.5:554/substream # <- stream which supports video & aac audio.
- "ffmpeg:test_cam_sub#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
- rtsp://192.168.1.5:554/live_sub # <- stream which supports video & aac audio.
test_cam_another_sub:
- rtsp://192.168.1.5:554/live_alt # <- stream which supports video & aac audio.
cameras:
test_cam:
@@ -80,7 +93,10 @@ cameras:
roles:
- detect
live:
stream_name: test_cam_sub
streams: # <--- Multiple streams for Frigate 0.16 and later
Main Stream: test_cam # <--- Specify a "friendly name" followed by the go2rtc stream name
Sub Stream: test_cam_sub
Special Stream: test_cam_another_sub
```
### WebRTC extra configuration:
@@ -101,6 +117,7 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that WebRTC does not support H.265.
:::tip
@@ -148,3 +165,64 @@ For devices that support two way talk, Frigate can be configured to use the feat
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
- Stream selection using the `live -> streams` configuration option (see _Setting Streams For Live UI_ above)
- Streaming type:
- _No streaming_: Camera images will only update once per minute and no live streaming will occur.
- _Smart Streaming_ (default, recommended setting): Smart streaming will update your camera image once per minute when no detectable activity is occurring to conserve bandwidth and resources, since a static picture is the same as a streaming image with no motion or objects. When motion or objects are detected, the image seamlessly switches to a live stream.
- _Continuous Streaming_: Camera image will always be a live stream when visible on the dashboard, even if no activity is being detected. Continuous streaming may cause high bandwidth usage and performance issues. **Use with caution.**
- _Compatibility mode_: Enable this option only if your camera's live stream is displaying color artifacts and has a diagonal line on the right side of the image. Before enabling this, try setting your camera's `detect` width and height to a standard aspect ratio (for example: 640x352 becomes 640x360, and 800x443 becomes 800x450, 2688x1520 becomes 2688x1512, etc). Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your config fails to resolve the color artifacts and diagonal line.
:::note
The default dashboard ("All Cameras") will always use Smart Streaming and the first entry set in your `streams` configuration, if defined. Use a camera group if you want to change any of these settings from the defaults.
:::
### Disabling cameras
Cameras can be temporarily disabled through the Frigate UI and through [MQTT](/integrations/mqtt#frigatecamera_nameenabledset) to conserve system resources. When disabled, Frigate's ffmpeg processes are terminated — recording stops, object detection is paused, and the Live dashboard displays a blank image with a disabled message. Review items, tracked objects, and historical footage for disabled cameras can still be accessed via the UI.
For restreamed cameras, go2rtc remains active but does not use system resources for decoding or processing unless there are active external consumers (such as the Advanced Camera Card in Home Assistant using a go2rtc source).
Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.
## Live view FAQ
1. **Why don't I have audio in my Live view?**
You must use go2rtc to hear audio in your live streams. If you have go2rtc already configured, you need to ensure your camera is sending PCMA/PCMU or AAC audio. If you can't change your camera's audio codec, you need to [transcode the audio](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg) using go2rtc.
Note that the low bandwidth mode player is a video-only stream. You should not expect to hear audio when in low bandwidth mode, even if you've set up go2rtc.
2. **Frigate shows that my live stream is in "low bandwidth mode". What does this mean?**
Frigate intelligently selects the live streaming technology based on a number of factors (user-selected modes like two-way talk, camera settings, browser capabilities, available bandwidth) and prioritizes showing an actual up-to-date live view of your camera's stream as quickly as possible.
When you have go2rtc configured, Live view initially attempts to load and play back your stream with a clearer, fluent stream technology (MSE). An initial timeout, a low bandwidth condition that would cause buffering of the stream, or decoding errors in the stream will cause Frigate to switch to the stream defined by the `detect` role, using the jsmpeg format. This is what the UI labels as "low bandwidth mode". On Live dashboards, the mode will automatically reset when smart streaming is configured and activity stops. You can also try using the _Reset_ button to force a reload of your stream.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the recommendations above or ensure you have enough bandwidth available.
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**
On the default Live dashboard ("All Cameras"), your camera images will update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity is detected, cameras seamlessly switch to a full-resolution live stream. If you want to customize this behavior, use a camera group.
4. **I see a strange diagonal line on my live view, but my recordings look fine. How can I fix it?**
This is caused by incorrect dimensions set in your detect width or height (or incorrectly auto-detected), causing the jsmpeg player's rendering engine to display a slightly distorted image. You should enlarge the width and height of your `detect` resolution up to a standard aspect ratio (for example: 640x352 becomes 640x360, 800x443 becomes 800x450, and 2688x1520 becomes 2688x1512). If changing the resolution to match a standard (4:3, 16:9, 32:9, etc.) aspect ratio does not solve the issue, you can enable "compatibility mode" in your camera group dashboard's stream settings. Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your `detect` width and height fails to resolve the color artifacts and diagonal line.
5. **How does "smart streaming" work?**
Because a static image of a scene looks exactly the same as a live stream with no motion or activity, smart streaming updates your camera images once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity (motion or object/audio detection) occurs, cameras seamlessly switch to a live stream.
This static image is pulled from the stream defined in your config with the `detect` role. When activity is detected, images from the `detect` stream immediately begin updating at ~5 frames per second so you can see the activity until the live player is loaded and begins playing. This usually only takes a second or two. If the live player times out, buffers, or has streaming errors, the jsmpeg player is loaded and plays a video-only stream from the `detect` role. When activity ends, the players are destroyed and a static image is displayed until activity is detected again, and the process repeats.
This is Frigate's default and recommended setting because it results in a significant bandwidth savings, especially for high resolution cameras.
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.
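As referenced in the first question above, transcoding a camera's audio to AAC with go2rtc might look like this minimal sketch. The camera name and RTSP URL are placeholders.

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@192.168.1.10:554/stream # placeholder camera URL
      - "ffmpeg:front_door#audio=aac" # transcode the audio track to AAC for browser playback
```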


@@ -0,0 +1,99 @@
---
id: metrics
title: Metrics
---
# Metrics
Frigate exposes Prometheus metrics at the `/api/metrics` endpoint that can be used to monitor the performance and health of your Frigate instance.
## Available Metrics
### System Metrics
- `frigate_cpu_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process CPU usage percentage
- `frigate_mem_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process memory usage percentage
- `frigate_gpu_usage_percent{gpu_name=""}` - GPU utilization percentage
- `frigate_gpu_mem_usage_percent{gpu_name=""}` - GPU memory usage percentage
### Camera Metrics
- `frigate_camera_fps{camera_name=""}` - Frames per second being consumed from your camera
- `frigate_detection_fps{camera_name=""}` - Number of times detection is run per second
- `frigate_process_fps{camera_name=""}` - Frames per second being processed
- `frigate_skipped_fps{camera_name=""}` - Frames per second skipped for processing
- `frigate_detection_enabled{camera_name=""}` - Detection enabled status for camera
- `frigate_audio_dBFS{camera_name=""}` - Audio dBFS for camera
- `frigate_audio_rms{camera_name=""}` - Audio RMS for camera
### Detector Metrics
- `frigate_detector_inference_speed_seconds{name=""}` - Time spent running object detection in seconds
- `frigate_detection_start{name=""}` - Detector start time (unix timestamp)
### Storage Metrics
- `frigate_storage_free_bytes{storage=""}` - Storage free bytes
- `frigate_storage_total_bytes{storage=""}` - Storage total bytes
- `frigate_storage_used_bytes{storage=""}` - Storage used bytes
- `frigate_storage_mount_type{mount_type="", storage=""}` - Storage mount type info
### Service Metrics
- `frigate_service_uptime_seconds` - Uptime in seconds
- `frigate_service_last_updated_timestamp` - Stats recorded time (unix timestamp)
- `frigate_device_temperature{device=""}` - Device Temperature
### Event Metrics
- `frigate_camera_events{camera="", label=""}` - Count of camera events since exporter started
## Configuring Prometheus
To scrape metrics from Frigate, add the following to your Prometheus configuration:
```yaml
scrape_configs:
  - job_name: 'frigate'
    metrics_path: '/api/metrics'
    static_configs:
      - targets: ['frigate:5000']
    scrape_interval: 15s
```
## Example Queries
Here are some example PromQL queries that might be useful:
```promql
# Average CPU usage across all processes
avg(frigate_cpu_usage_percent)
# Total GPU memory usage
sum(frigate_gpu_mem_usage_percent)
# Detection FPS by camera
rate(frigate_detection_fps{camera_name="front_door"}[5m])
# Storage usage percentage
(frigate_storage_used_bytes / frigate_storage_total_bytes) * 100
# Event count by camera in last hour
increase(frigate_camera_events[1h])
```
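These queries can also drive Prometheus alerting rules. A minimal sketch, with an illustrative rule name and threshold:

```yaml
groups:
  - name: frigate
    rules:
      - alert: FrigateStorageAlmostFull
        # fires when recordings storage is more than 90% used for 10 minutes
        expr: (frigate_storage_used_bytes / frigate_storage_total_bytes) * 100 > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Frigate storage is more than 90% full"
```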
## Grafana Dashboard
You can use these metrics to create Grafana dashboards to monitor your Frigate instance. Here's an example of metrics you might want to track:
- CPU, Memory and GPU usage over time
- Camera FPS and detection rates
- Storage usage and trends
- Event counts by camera
- System temperatures
A sample Grafana dashboard JSON will be provided in a future update.
## Metric Types
The metrics exposed by Frigate use the following Prometheus metric types:
- **Counter**: Cumulative values that only increase (e.g., `frigate_camera_events`)
- **Gauge**: Values that can go up and down (e.g., `frigate_cpu_usage_percent`)
- **Info**: Key-value pairs for metadata (e.g., `frigate_storage_mount_type`)
For more information about Prometheus metric types, see the [Prometheus documentation](https://prometheus.io/docs/concepts/metric_types/).


@@ -11,14 +11,38 @@ Frigate offers native notifications using the [WebPush Protocol](https://web.dev
In order to use notifications the following requirements must be met:
- Frigate must be accessed via a secure `https` connection ([see the authorization docs](/configuration/authentication)).
- A supported browser must be used. Currently Chrome, Firefox, and Safari are known to be supported.
- In order for notifications to be usable externally, Frigate must be accessible externally.
- For iOS devices, some users have also indicated that the Notifications switch needs to be enabled in iOS Settings --> Apps --> Safari --> Advanced --> Features.
### Configuration
To configure notifications, go to the Frigate WebUI -> Settings -> Notifications and enable, then fill out the fields and save.
Optionally, you can change the default cooldown period for notifications through the `cooldown` parameter in your config file. This parameter can also be overridden at the camera level.
Notifications will be prevented if either:
- The global cooldown period hasn't elapsed since any camera's last notification
- The camera-specific cooldown period hasn't elapsed for the specific camera
```yaml
notifications:
  enabled: True
  email: "johndoe@gmail.com"
  cooldown: 10 # wait 10 seconds before sending another notification from any camera
```
```yaml
cameras:
  doorbell:
    ...
    notifications:
      enabled: True
      cooldown: 30 # wait 30 seconds before sending another notification from the doorbell camera
```
### Registration
Once notifications are enabled, press the `Register for Notifications` button on all devices that you would like to receive notifications on. This registers the background worker. After registering, Frigate must be restarted, and then notifications will begin to be sent.
@@ -39,4 +63,4 @@ Different platforms handle notifications differently, some settings changes may
### Android
Most Android phones have battery optimization settings. To get reliable Notification delivery the browser (Chrome, Firefox) should have battery optimizations disabled. If Frigate is running as a PWA then the Frigate app should have battery optimizations disabled as well.


@@ -10,32 +10,46 @@ title: Object Detectors
Frigate supports multiple different detectors that work on different types of hardware:
**Most Hardware**
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
**AMD**
- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.
**Intel**
- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.
**Nvidia**
- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.
**Rockchip**
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
:::
:::note
Multiple detectors can not be mixed for object detection (ex: OpenVINO and Coral EdgeTPU can not be used for object detection at the same time).
This does not affect using hardware for accelerating other tasks such as [semantic search](./semantic_search.md).
:::
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## Edge TPU Detector
@@ -115,6 +129,111 @@ detectors:
type: edgetpu
device: pci
```
---
## Hailo-8
This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the Hailo hardware.
### Configuration
When configuring the Hailo detector, you have two options to specify the model: a local **path** or a **URL**.
If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under `/config/model_cache/hailo`.
#### YOLO
Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:
- **Hailo-8 hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
- **Hailo-8L hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 320
  height: 320
  input_tensor: nhwc
  input_pixel_format: rgb
  input_dtype: int
  model_type: yolo-generic

  # The detector automatically selects the default model based on your hardware:
  # - For Hailo-8 hardware: YOLOv6n (default: yolov6n.hef)
  # - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
  #
  # Optionally, you can specify a local model path to override the default.
  # If a local path is provided and the file exists, it will be used instead of downloading.
  # Example:
  # path: /config/model_cache/hailo/yolov6n.hef
  #
  # You can also override using a custom URL:
  # path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov6n.hef
  # Just make sure to use the right configuration for the chosen model.
```
#### SSD
For SSD-based models, provide either a model path or URL to your compiled SSD model. The integration will first check the local path before downloading if necessary.
```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: rgb
  model_type: ssd

  # Specify the local model path (if available) or URL for SSD MobileNet v1.
  # Example with a local path:
  # path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
  #
  # Or override using a custom URL:
  # path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8l/ssd_mobilenet_v1.hef
```
#### Custom Models
The Hailo detector supports all YOLO models compiled for Hailo hardware that include post-processing. You can specify a custom URL or a local path to download or use your model directly. If both are provided, the detector checks the local path first.
```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 640
  height: 640
  input_tensor: nhwc
  input_pixel_format: rgb
  input_dtype: int
  model_type: yolo-generic

  # Optional: Specify a local model path.
  # path: /config/model_cache/hailo/custom_model.hef
  #
  # Alternatively, or as a fallback, provide a custom URL:
  # path: https://custom-model-url.com/path/to/model.hef
```
For additional ready-to-use models, please visit: https://github.com/hailo-ai/hailo_model_zoo
Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing. You're welcome to choose any of these pre-configured models for your implementation.
> **Note:**
> The model `path` parameter can accept either a local file path or a URL ending with `.hef`. When provided, the detector will first check whether the path is a local file. If the file exists locally, it will be used directly. If the file is not found locally, or if a URL was provided, the detector will attempt to download the model from the specified URL.
---
## OpenVINO Detector
@@ -169,15 +288,7 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
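The configuration block itself is elided by the diff at this point. A sketch assembled from the neighboring OpenVINO and ONNX examples (the `ov` detector name, 320x320 size, and model filename mirror those examples and are assumptions):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolonas
  width: 320 # <--- should match whatever was set in the export notebook
  height: 320 # <--- should match whatever was set in the export notebook
  input_pixel_format: bgr
  path: /config/model_cache/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```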
@@ -199,13 +310,43 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
#### YOLOv9
[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolov9
  width: 640 # <--- should match the imgsize set during model export
  height: 640 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
Nvidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
### Minimum Hardware Support
The TensorRT detector uses the 12.x series of CUDA libraries which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
To use the TensorRT detector, make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
@@ -233,6 +374,8 @@ If your GPU does not support FP16 operations, you can pass the environment varia
Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
<details>
<summary>Available Models</summary>
```
yolov3-288
yolov3-416
@@ -261,6 +404,7 @@ yolov7-320
yolov7x-640
yolov7x-320
```
</details>
An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yolov7x-640` models for a Pascal card would look something like this:
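The fragment is elided by the diff here; a minimal sketch, assuming the `YOLO_MODELS` variable described above and a `USE_FP16` toggle for the FP16 behavior mentioned earlier (the exact FP16 variable name is an assumption):

```yaml
services:
  frigate:
    environment:
      - YOLO_MODELS=yolov4-608,yolov7x-640
      - USE_FP16=false # assumed toggle for GPUs without FP16 support
```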
@@ -305,7 +449,7 @@ model:
### Setup
Support for AMD GPUs is provided using the [ONNX detector](#ONNX). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
### Docker settings for GPU access
@@ -355,7 +499,7 @@ When using docker compose:
```yaml
services:
  frigate:
    ...
    environment:
      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```
@@ -384,41 +528,13 @@ $ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/
### Supported Models
There is no default model provided, the following formats are supported:
See [ONNX supported models](#supported-models) for supported models; there are some caveats:
- D-FINE models are not supported
- YOLO-NAS models are known to not run well on integrated GPUs
## ONNX
ONNX is an open format for building machine learning models; Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
:::info
@@ -458,15 +574,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@@ -485,6 +593,62 @@ model:
labelmap_path: /labelmap/coco-80.txt
```
#### YOLOv9
[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolov9
  width: 640 # <--- should match the imgsize set during model export
  height: 640 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
#### D-FINE
[D-FINE](https://github.com/Peterande/D-FINE) is the [current state of the art](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=d-fine-redefine-regression-task-in-detrs-as) at the time of writing. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
:::warning
D-FINE is currently not supported on OpenVINO
:::
After placing the downloaded onnx model in your config/model_cache folder, you can use the following configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: dfine
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/dfine_m_obj2coco.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@@ -550,7 +714,7 @@ Hardware accelerated object detection is supported on the following SoCs:
- RK3576
- RK3588
This implementation uses [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.3.0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as an object detection model.
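A minimal detector entry might look like the sketch below; the `num_cores` option is an assumption and should be checked against the full RKNN configuration reference.

```yaml
detectors:
  rknn:
    type: rknn
    # assumed option: number of NPU cores to use for detection
    num_cores: 3
```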
### Prerequisites
@@ -625,25 +789,79 @@ $ cat /sys/kernel/debug/rknpu/load
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format, see the `rknn-toolkit2` instructions below (requires an x86 machine). Note that there is only post-processing for the supported models.
### Converting your own onnx model to rknn format

To convert an onnx model to the rknn format using the [rknn-toolkit2](https://github.com/airockchip/rknn-toolkit2/) you have to:

- Place one or more models in onnx format in the directory `config/model_cache/rknn_cache/onnx` on your docker host (this might require `sudo` privileges).
- Save the configuration file under `config/conv2rknn.yaml` (see below for details).
- Run `docker exec <frigate_container_id> python3 /opt/conv2rknn.py`. If the conversion was successful, the rknn models will be placed in `config/model_cache/rknn_cache`.

This is an example configuration file that you need to adjust to your specific onnx model:

```yaml
soc: ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
quantization: false

output_name: "{input_basename}"

config:
  mean_values: [[0, 0, 0]]
  std_values: [[255, 255, 255]]
  quant_img_rgb2bgr: true
```
Explanation of the parameters:
- `soc`: A list of all SoCs you want to build the rknn model for. If you don't specify this parameter, the script tries to find out your SoC and builds the rknn model for this one.
- `quantization`: true: 8 bit integer (i8) quantization, false: 16 bit float (fp16). Default: false.
- `output_name`: The output name of the model. The following variables are available:
- `quant`: "i8" or "fp16" depending on the config
- `input_basename`: the basename of the input model (e.g. "my_model" if the input model is called "my_model.onnx")
- `soc`: the SoC this model was built for (e.g. "rk3588")
- `tk_version`: Version of `rknn-toolkit2` (e.g. "2.3.0")
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
# Models
Some model types are not included in Frigate by default.
## Downloading Models
Here are some tips for obtaining the different model types:
### Downloading D-FINE Model
To export as ONNX:
1. Clone: https://github.com/Peterande/D-FINE and install all dependencies.
2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
3. Modify line 58 of `tools/deployment/export_onnx.py` and change batch size to 1: `data = torch.rand(1, 3, 640, 640)`
4. Run the export, making sure you select the right config for your checkpoint.
Example:
```
python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
```
:::tip
Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.
Make sure you change the batch size to 1 before exporting.
:::
### Downloading YOLO-NAS Model
You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.


@@ -34,7 +34,7 @@ False positives can also be reduced by filtering a detection based on its shape.
### Object Area
`min_area` and `max_area` filter on the area of an object's bounding box and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter. These values can either be in pixels or as a percentage of the frame (for example, 0.12 represents 12% of the frame).
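For example, a sketch mixing both styles (the values here are illustrative, not recommendations):

```yaml
objects:
  filters:
    person:
      min_area: 0.01 # percentage: 1% of the frame
      max_area: 0.6 # percentage: 60% of the frame
    dog:
      min_area: 2500 # pixels (width * height of the bounding box)
```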
### Object Proportions


@@ -183,6 +183,8 @@ record:
sync_recordings: True
```
This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`; just stop Frigate, delete the `frigate.db` database, and restart.
:::warning
The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary.

View File

@@ -46,6 +46,11 @@ mqtt:
tls_insecure: false
# Optional: interval in seconds for publishing stats (default: shown below)
stats_interval: 60
# Optional: QoS level for subscriptions and publishing (default: shown below)
# 0 = at most once
# 1 = at least once
# 2 = exactly once
qos: 0
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
@@ -244,10 +249,14 @@ ffmpeg:
# If set too high, then if a ffmpeg crash or camera stream timeout occurs, you could potentially lose up to a maximum of retry_interval second(s) of footage
# NOTE: this can be a useful setting for Wireless / Battery cameras to reduce how much footage is potentially lost during a connection timeout.
retry_interval: 10
# Optional: Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players. (default: shown below)
apple_compatibility: false
# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
# Optional: enables detection for the camera (default: shown below)
enabled: False
# Optional: width of the frame for the input with the detect role (default: use native stream resolution)
width: 1280
# Optional: height of the frame for the input with the detect role (default: use native stream resolution)
@@ -255,8 +264,6 @@ detect:
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 5
# Optional: Number of consecutive detection hits required for an object to be initialized in the tracker. (default: 1/2 the frame rate)
min_initialized: 2
# Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
@@ -310,9 +317,11 @@ objects:
# Optional: filters to reduce false positives for specific object types
filters:
person:
# Optional: minimum size of the bounding box for the detected object (default: 0).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
min_area: 5000
# Optional: maximum size of the bounding box for the detected object (default: 24000000).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
max_area: 100000
# Optional: minimum width/height of the bounding box for the detected object (default: 0)
min_ratio: 0.5
@@ -331,6 +340,8 @@ objects:
review:
# Optional: alerts configuration
alerts:
# Optional: enables alerts for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as an alert (default: shown below)
labels:
- car
@@ -343,6 +354,8 @@ review:
- driveway
# Optional: detections configuration
detections:
# Optional: enables detections for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as a detection (default: all labels that are tracked / listened to)
labels:
- car
@@ -400,12 +413,15 @@ motion:
mqtt_off_delay: 30
# Optional: Notification Configuration
# NOTE: Can be overridden at the camera level (except email)
notifications:
# Optional: Enable notification service (default: shown below)
enabled: False
# Optional: Email for push service to reach out to
# NOTE: This is required to use notifications
email: "admin@example.com"
# Optional: Cooldown time for notifications in seconds (default: shown below)
cooldown: 0
# Optional: Record configuration
# NOTE: Can be overridden at the camera level
@@ -520,12 +536,40 @@ semantic_search:
enabled: False
# Optional: Re-index embeddings database from historical tracked objects (default: shown below)
reindex: False
# Optional: Set the model used for embeddings. (default: shown below)
model: "jinav1"
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for face recognition capability
face_recognition:
# Optional: Enable semantic search (default: shown below)
enabled: False
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for license plate recognition capability
lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
# Optional: License plate object confidence score required to begin running recognition (default: shown below)
detection_threshold: 0.7
# Optional: Minimum area of license plate to begin running recognition (default: shown below)
min_area: 1000
# Optional: Recognition confidence score required to add the plate to the object as a sub label (default: shown below)
recognition_threshold: 0.9
# Optional: Minimum number of characters a license plate must have to be added to the object as a sub label (default: shown below)
min_plate_length: 4
# Optional: Regular expression for the expected format of a license plate (default: shown below)
format: None
# Optional: Allow this number of missing/incorrect characters to still cause a detected plate to match a known plate
match_distance: 1
# Optional: Known plates to track (strings or regular expressions) (default: shown below)
known_plates: {}
# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
# the camera level (enabled: False) to enhance privacy for indoor cameras.
@@ -549,16 +593,18 @@ genai:
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
# Optional: Live stream configuration for WebUI.
# NOTE: Can be overridden at the camera level
live:
# Optional: Set the streams configured in go2rtc
# that should be used for live view in frigate WebUI. (default: name of camera)
# NOTE: In most cases this should be set at the camera level only.
streams:
main_stream: main_stream_name
sub_stream: sub_stream_name
# Optional: Set the height of the jsmpeg stream. (default: 720)
# This must be less than or equal to the height of the detect stream. Lower resolutions
# reduce bandwidth required for viewing the jsmpeg stream. Width is computed to match known aspect ratio.
@@ -643,7 +689,10 @@ cameras:
front_steps:
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the object's bounding box.
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
# Optional: The real-world distances of a 4-sided zone used for zones with speed estimation enabled (default: none)
# List distances in order of the zone points coordinates and use the unit system defined in the ui config
distances: 10,15,12,11
# Optional: Number of consecutive frames required for object to be considered present in the zone (default: shown below).
inertia: 3
# Optional: Number of seconds that an object must loiter to be considered in the zone (default: shown below)
@@ -764,6 +813,12 @@ cameras:
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
@@ -794,6 +849,9 @@ ui:
# https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
# possible values are shown above (default: not set)
strftime_fmt: "%Y/%m/%d %H:%M"
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric
# Optional: Telemetry configuration
telemetry:
@@ -807,11 +865,13 @@ telemetry:
- lo
# Optional: Configure system stats
stats:
# Optional: Enable AMD GPU stats (default: shown below)
amd_gpu_stats: True
# Optional: Enable Intel GPU stats (default: shown below)
intel_gpu_stats: True
# Optional: Treat GPU as SR-IOV to fix GPU stats (default: shown below)
sriov: False
# Optional: Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
network_bandwidth: False
# Optional: Enable the latest version outbound check (default: shown below)


@@ -1,11 +1,11 @@
---
id: semantic_search
title: Semantic Search
---
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate uses models from [Jina AI](https://huggingface.co/jinaai) to create and save embeddings to Frigate's database. All of this runs locally.
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
@@ -35,23 +35,47 @@ If you are enabling Semantic Search for the first time, be advised that Frigate
:::
### Jina AI CLIP (version 1)
The [V1 model from Jina](https://huggingface.co/jinaai/jina-clip-v1) has a vision model which is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted versions of the Jina models are available and can be selected by setting the `model_size` config option as `small` or `large`:
```yaml
semantic_search:
  enabled: True
  model: "jinav1"
  model_size: small
```
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the Jina model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
### Jina AI CLIP (version 2)
Frigate also supports the [V2 model from Jina](https://huggingface.co/jinaai/jina-clip-v2), which introduces multilingual support (89 languages). In contrast, the V1 model only supports English.
V2 offers only a 3% performance improvement over V1 in both text-image and text-text retrieval tasks, an upgrade that is unlikely to yield noticeable real-world benefits. Additionally, V2 has _significantly_ higher RAM and GPU requirements, leading to increased inference time and memory usage. If you plan to use V2, ensure your system has ample RAM and a discrete GPU. CPU inference (with the `small` model) using V2 is not recommended.
To use the V2 model, update the `model` parameter in your config:
```yaml
semantic_search:
  enabled: True
  model: "jinav2"
  model_size: large
```
For most users, especially native English speakers, the V1 model remains the recommended choice.
:::note
Switching between V1 and V2 requires reindexing your embeddings. To do this, set `reindex: True` in your Semantic Search configuration and restart Frigate. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
:::
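For example, a one-time switch to the V2 model with reindexing might look like:

```yaml
semantic_search:
  enabled: True
  model: "jinav2"
  model_size: large
  reindex: True # set back to False (or remove) after the reindex completes
```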
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.


@@ -122,16 +122,61 @@ cameras:
- car
```
### Loitering Time

Zones support a `loitering_time` configuration which can be used to only consider an object as part of a zone if they loiter in the zone for the specified number of seconds. This can be used, for example, to create alerts for cars that stop on the street but not cars that just drive past your camera.

```yaml
cameras:
  name_of_your_camera:
    zones:
      front_yard:
        loitering_time: 5 # unit is in seconds
        objects:
          - person
```

### Speed Estimation

Frigate can be configured to estimate the speed of objects moving through a zone. This works by combining data from Frigate's object tracker and "real world" distance measurements of the edges of the zone. The recommended use case for this feature is to track the speed of vehicles on a road as they move through the zone.

Your zone must be defined with exactly 4 points and should be aligned to the ground where objects are moving.

![Ground plane 4-point zone](/img/ground-plane.jpg)

Speed estimation requires a minimum number of frames for your object to be tracked before a valid estimate can be calculated, so create your zone away from places where objects enter and exit for the best results. _Your zone should not take up the full frame._ An object's speed is tracked while it is in the zone and then saved to Frigate's database.

Accurate real-world distance measurements are required to estimate speeds. These distances can be specified in your zone config through the `distances` field.

```yaml
cameras:
  name_of_your_camera:
    zones:
      street:
        coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
        distances: 10,12,11,13.5 # in meters or feet
```
Each number in the `distances` field represents the real-world distance between the points in the `coordinates` list. So in the example above, the distance between the first two points ([0.033,0.306] and [0.324,0.138]) is 10. The distance between the second and third set of points ([0.324,0.138] and [0.439,0.185]) is 12, and so on. The fastest and most accurate way to configure this is through the Zone Editor in the Frigate UI.
The `distances` values are measured in meters (metric) or feet (imperial), depending on how `unit_system` is configured in your `ui` config:
```yaml
ui:
  # can be "metric" or "imperial", default is metric
  unit_system: metric
```
The average speed of your object as it moved through your zone is saved in Frigate's database and can be seen in the UI in the Tracked Object Details pane in Explore. Current estimated speed can also be seen on the debug view as the third value in the object label (see the caveats below). Current estimated speed, average estimated speed, and velocity angle (the angle of the direction the object is moving relative to the frame) of tracked objects are also sent through the `events` MQTT topic. See the [MQTT docs](../integrations/mqtt.md#frigateevents).
These speed values are output as a number in miles per hour (mph) or kilometers per hour (kph). For miles per hour, set `unit_system` to `imperial`. For kilometers per hour, set `unit_system` to `metric`.
#### Best practices and caveats
- Speed estimation works best with a straight road or path when your object travels in a straight line across that path. Avoid creating your zone near intersections or anywhere that objects would make a turn. If the bounding box changes shape (either because the object made a turn or became partially obscured, for example), speed estimation will not be accurate.
- Create a zone where the bottom center of your object's bounding box travels directly through it and does not become obscured at any time. See the photo example above.
- Depending on the size and location of your zone, you may want to decrease the zone's `inertia` value from the default of 3.
- The more accurate your real-world dimensions can be measured, the more accurate speed estimation will be. However, due to the way Frigate's tracking algorithm works, you may need to tweak the real-world distance values so that estimated speeds better match real-world speeds.
- Once an object leaves the zone, speed accuracy will likely decrease due to perspective distortion and misalignment with the calibrated area. Therefore, speed values will show as a zero through MQTT and will not be visible on the debug view when an object is outside of a speed tracking zone.
- The speeds are only an _estimation_ and are highly dependent on camera position, zone points, and real-world measurements. This feature should not be used for law enforcement.
### Speed Threshold
Zones can be configured with a minimum speed requirement, meaning an object must be moving at or above this speed to be considered inside the zone. Zone `distances` must be defined as described above.
```yaml
cameras:
  name_of_your_camera:
    zones:
      sidewalk:
        coordinates: ...
        distances: ...
        inertia: 1
        speed_threshold: 20 # unit is in kph or mph, depending on how unit_system is set (see above)
```

View File

@@ -34,7 +34,7 @@ Fork [blakeblackshear/frigate-hass-integration](https://github.com/blakeblackshe
### Prerequisites
- GNU make
- Docker (including buildx plugin)
- An extra detector (Coral, OpenVINO, etc.) is optional but recommended to simulate real world performance.
:::note


@@ -13,20 +13,19 @@ Many users have reported various issues with Reolink cameras, so I do not recomm
Here are some of the cameras I recommend:
- <a href="https://amzn.to/3uFLtxB" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) T5442TM-AS-LED</a> (affiliate link)
- <a href="https://amzn.to/3isJ3gU" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T5442TM-AS</a> (affiliate link)
- <a href="https://amzn.to/2ZWNWIA" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-28MM</a> (affiliate link)
- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
## Server
My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to setup a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with a M.2 or PCIe express slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4iQaBKu" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
## Detectors
@@ -52,24 +51,25 @@ The OpenVINO detector type is able to run on:
More information is available [in the detector docs](/configuration/object_detectors#openvino-detector)
Inference speeds vary greatly depending on the CPU, GPU, or VPU used; some known examples are below:
Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:
| Name | Inference Speed | Notes |
| -------------------- | --------------- | --------------------------------------------------------------------- |
| Intel NCS2 VPU | 60 - 65 ms | May vary based on host device |
| Intel Celeron J4105 | ~ 25 ms | Inference speeds on CPU were 150 - 200 ms |
| Intel Celeron N3060 | 130 - 150 ms | Inference speeds on CPU were ~ 550 ms |
| Intel Celeron N3205U | ~ 120 ms | Inference speeds on CPU were ~ 380 ms |
| Intel Celeron N4020 | 50 - 200 ms | Inference speeds on CPU were ~ 800 ms, greatly depends on other loads |
| Intel i3 6100T | 15 - 35 ms | Inference speeds on CPU were 60 - 120 ms |
| Intel i3 8100 | ~ 15 ms | Inference speeds on CPU were ~ 65 ms |
| Intel i5 4590 | ~ 20 ms | Inference speeds on CPU were ~ 230 ms |
| Intel i5 6500 | ~ 15 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7200u | 15 - 25 ms | Inference speeds on CPU were ~ 150 ms |
| Intel i5 7500 | ~ 15 ms | Inference speeds on CPU were ~ 260 ms |
| Intel i5 1135G7 | 10 - 15 ms | |
| Intel i5 12600K | ~ 15 ms | Inference speeds on CPU were ~ 35 ms |
| Intel Arc A750 | ~ 4 ms | |
| Name | MobileNetV2 Inference Time | YOLO-NAS Inference Time | Notes |
| -------------------- | -------------------------- | ------------------------- | -------------------------------------- |
| Intel Celeron J4105 | ~ 25 ms | | Can only run one detector instance |
| Intel Celeron N3060 | 130 - 150 ms | | Can only run one detector instance |
| Intel Celeron N3205U | ~ 120 ms | | Can only run one detector instance |
| Intel Celeron N4020 | 50 - 200 ms | | Inference speed depends on other loads |
| Intel i3 6100T | 15 - 35 ms | | Can only run one detector instance |
| Intel i3 8100 | ~ 15 ms | | |
| Intel i5 4590 | ~ 20 ms | | |
| Intel i5 6500 | ~ 15 ms | | |
| Intel i5 7200u | 15 - 25 ms | | |
| Intel i5 7500 | ~ 15 ms | | |
| Intel i5 1135G7 | 10 - 15 ms | | |
| Intel i3 12000 | | 320: ~ 19 ms 640: ~ 54 ms | |
| Intel i5 12600K | ~ 15 ms | 320: ~ 20 ms 640: ~ 46 ms | |
| Intel Arc A380 | ~ 6 ms | 320: ~ 10 ms | |
| Intel Arc A750 | ~ 4 ms | 320: ~ 8 ms | |
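For reference, a minimal OpenVINO detector entry in the Frigate config looks roughly like the following sketch; the detector name `ov` is arbitrary, and `device` should match your hardware:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU # or CPU; see the detector docs for supported devices
```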
### TensorRT - Nvidia GPU
@@ -78,29 +78,46 @@ The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model; some known examples are below:
| Name | Inference Speed |
| --------------- | --------------- |
| GTX 1060 6GB | ~ 7 ms |
| GTX 1070 | ~ 6 ms |
| GTX 1660 SUPER | ~ 4 ms |
| RTX 3050 | 5 - 7 ms |
| RTX 3070 Mobile | ~ 5 ms |
| Quadro P400 2GB | 20 - 25 ms |
| Quadro P2000 | ~ 12 ms |
| Name | YoloV7 Inference Time | YOLO-NAS Inference Time |
| --------------- | --------------------- | ------------------------- |
| GTX 1060 6GB | ~ 7 ms | |
| GTX 1070 | ~ 6 ms | |
| GTX 1660 SUPER | ~ 4 ms | |
| RTX 3050 | 5 - 7 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 Mobile | ~ 5 ms | |
| Quadro P400 2GB | 20 - 25 ms | |
| Quadro P2000 | ~ 12 ms | |
#### AMD GPUs
### AMD GPUs
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many AMD GPUs.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
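As a sketch, enabling the ROCm detector is a short config entry; depending on your GPU you may also need the `HSA_OVERRIDE_GFX_VERSION` override described in the detector docs:

```yaml
detectors:
  rocm:
    type: rocm
```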
### Community Supported:
### Hailo-8
#### Nvidia Jetson
| Name             | Hailo8 Inference Time | Hailo8L Inference Time |
| ---------------- | --------------------- | ---------------------- |
| ssd mobilenet v1 | ~ 6 ms                | ~ 10 ms                |
| yolov6n          | ~ 7 ms                | ~ 11 ms                |
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI Kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
**Default Model Configuration:**
- **Hailo-8L:** Default model is **YOLOv6n**.
- **Hailo-8:** Default model is **YOLOv6n**.
In real-world deployments, even with multiple cameras running concurrently, Frigate has demonstrated consistent performance. Testing on x86 platforms with dual PCIe lanes yields further improvements in FPS, throughput, and latency compared to the Raspberry Pi setup.
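A minimal detector entry might look like the following sketch; the detector name is arbitrary, `device: PCIe` is assumed for M.2/HAT installs, and the default model is selected automatically when no custom model is specified:

```yaml
detectors:
  hailo:
    type: hailo8l
    device: PCIe # assumed for M.2 and HAT installations
```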
## Community Supported Detectors
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).
Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
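A minimal TensorRT detector entry might look like the following sketch; `device` is the GPU index, and the model must be one generated for TensorRT as described in the detector docs:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index, only needed with multiple GPUs
```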
#### Rockchip platform
### Rockchip platform
Frigate supports hardware video processing on all Rockchip boards. However, hardware object detection is only supported on these boards:
@@ -112,12 +129,6 @@ Frigate supports hardware video processing on all Rockchip boards. However, hardware object detection is only supported on these boards:
The inference time of an rk3588 with all 3 cores enabled is typically 25-30 ms for YOLO-NAS S.
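As a sketch, an RKNN detector entry with all three NPU cores enabled might look like this (assuming the `rknn` detector type from the detector docs):

```yaml
detectors:
  rknn:
    type: rknn
    num_cores: 3 # the rk3588 has 3 NPU cores
```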
#### Hailo-8l PCIe
Frigate supports the Hailo-8L M.2 card on any hardware, but currently it is only tested with the Raspberry Pi 5 PCIe hat from the AI Kit.
The inference time for the Hailo-8L chip at the time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.


@@ -80,12 +80,12 @@ The Frigate container also stores logs in shm, which can take up to **40MB**, so
You can calculate the **minimum** shm size for each camera with the following formula using the resolution specified for detect:
```console
# Replace <width> and <height>
# Template for one camera without logs, replace <width> and <height>
$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 20 + 270480) / 1048576))'
# Example for 1280x720, including logs
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576)) + 40'
46.63MB
$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576 + 40))'
66.63MB
# Example for eight cameras detecting at 1280x720, including logs
$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 8 + 40))'
@@ -100,9 +100,9 @@ By default, the Raspberry Pi limits the amount of memory available to the GPU. I
Additionally, the USB Coral draws a considerable amount of power. If using any other USB devices such as an SSD, you will experience instability due to the Pi not providing enough power to USB devices. You will need to purchase an external USB hub with its own power supply. Some have reported success with <a href="https://amzn.to/3a2mH0P" target="_blank" rel="nofollow noopener sponsored">this</a> (affiliate link).
### Hailo-8L
### Hailo-8
The Hailo-8L is an M.2 card typically connected to a carrier board for PCIe, which then connects to the Raspberry Pi 5 as part of the AI Kit. However, it can also be used on other boards equipped with an M.2 M key edge connector.
The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form factors for the Raspberry Pi. The M.2 version typically connects to a carrier board for PCIe, which then interfaces with the Raspberry Pi 5 as part of the AI Kit. The HAT version can be mounted directly onto compatible Raspberry Pi models. Both form factors have been successfully tested on x86 platforms as well, making them versatile options for various computing environments.
#### Installation
@@ -111,13 +111,13 @@ For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simpl
For other installations, follow these steps for installation:
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/41c9b13d2fffce508b32dfc971fa529b49295fbd/docker/hailo8l/user_installation.sh).
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
#### Setup
To set up Frigate, follow the default installation instructions, but use a Docker image with the `-h8l` suffix, for example: `ghcr.io/blakeblackshear/frigate:stable-h8l`
To set up Frigate, follow the default installation instructions using the standard image, for example: `ghcr.io/blakeblackshear/frigate:stable`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
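The exact lines are cut off in this diff, but the device mapping typically looks like the following sketch; `/dev/hailo0` is the assumed device node created by the driver (verify with `ls /dev/hailo*`):

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/hailo0 # assumed Hailo device node, verify on your host
    ...
```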
@@ -250,7 +250,7 @@ The official docker image tags for the current stable version are:
The community supported docker image tags for the current stable version are:
- `stable-tensorrt-jp5` - Frigate build optimized for nvidia Jetson devices running Jetpack 5
- `stable-tensorrt-jp4` - Frigate build optimized for nvidia Jetson devices running Jetpack 4.6
- `stable-tensorrt-jp6` - Frigate build optimized for nvidia Jetson devices running Jetpack 6
- `stable-rk` - Frigate build for SBCs with Rockchip SoC
- `stable-rocm` - Frigate build for [AMD GPUs](../configuration/object_detectors.md#amdrocm-gpu-detector)
- `stable-h8l` - Frigate build for the Hailo-8L M.2 PCIe Raspberry Pi 5 hat


@@ -7,7 +7,7 @@ title: Configuring go2rtc
Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect directly to your cameras. However, adding go2rtc to your configuration is required for the following features:
- WebRTC or MSE for live viewing with higher resolutions and frame rates than the jsmpeg stream which is limited to the detect stream
- WebRTC or MSE for live viewing with audio, higher resolutions, and frame rates than the jsmpeg stream, which is limited to the detect stream and does not support audio
- Live stream support for cameras in Home Assistant Integration
- RTSP relay for use with other consumers to reduce the number of connections to your camera streams
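Enabling the bundled go2rtc is a matter of defining streams in the `go2rtc` section of your Frigate config and referencing them from your cameras; a sketch with placeholder names and credentials:

```yaml
go2rtc:
  streams:
    back_yard: # arbitrary stream name, relayed as rtsp://127.0.0.1:8554/back_yard
      - rtsp://user:password@10.0.10.10:554/stream
```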


@@ -151,8 +151,6 @@ cameras:
        - path: rtsp://10.0.10.10:554/rtsp # <----- The stream you want to use for detection
          roles:
            - detect
    detect:
      enabled: False # <---- disable detection until you have a working camera feed
```
### Step 2: Start Frigate
@@ -177,7 +175,7 @@ services:
  frigate:
    ...
    devices:
      - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
      - /dev/dri/renderD128:/dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    ...
```
@@ -307,7 +305,7 @@ By default, Frigate will retain video of all tracked objects for 10 days. The fu
### Step 7: Complete config
At this point you have a complete config with basic functionality.
- View [common configuration examples](../configuration/index.md#common-configuration-examples) for a list of common configuration examples.
- View [full config reference](../configuration/reference.md) for a complete list of configuration options.


@@ -97,13 +97,13 @@ services:
If you are using HassOS with the addon, the URL should be one of the following depending on which addon version you are using. Note that if you are using the Proxy Addon, you do NOT point the integration at the proxy URL. Just enter the URL used to access Frigate directly from your network.
| Addon Version | URL |
| ------------------------------ | ----------------------------------------- |
| Frigate NVR | `http://ccab4aaf-frigate:5000` |
| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |
### Frigate running on a separate machine
@@ -301,3 +301,7 @@ which server they are referring to.
#### If I am detecting multiple objects, how do I assign the correct `binary_sensor` to the camera in HomeKit?
The [HomeKit integration](https://www.home-assistant.io/integrations/homekit/) randomly links one of the binary sensors (motion sensor entities) grouped with the camera device in Home Assistant. You can specify a `linked_motion_sensor` in the Home Assistant [HomeKit configuration](https://www.home-assistant.io/integrations/homekit/#linked_motion_sensor) for each camera.
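As a sketch in Home Assistant's HomeKit configuration (entity IDs are placeholders; the `person_occupancy` sensor name assumes the Frigate integration's default naming):

```yaml
homekit:
  entity_config:
    camera.front_door:
      linked_motion_sensor: binary_sensor.front_door_person_occupancy
```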
#### I have set up automations based on the occupancy sensors. Sometimes the automation runs because the sensors are turned on, but when I look at Frigate I can't find the object that triggered the sensor. Is this a bug?
No. The occupancy sensors have fewer checks in place because they are often used for things like turning the lights on, where latency needs to be as low as possible, so false positives can sometimes trigger these sensors. If you want false positive filtering, you should use an MQTT sensor on the `frigate/events` or `frigate/reviews` topic.
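A minimal sketch of such an MQTT sensor in Home Assistant; the value template is only an example of pulling one field from the event payload:

```yaml
mqtt:
  sensor:
    - name: "Frigate last event label"
      state_topic: "frigate/events"
      value_template: "{{ value_json.after.label }}"
```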


@@ -52,7 +52,9 @@ Message published for each changed tracked object. The first message is publishe
"attributes": {
"face": 0.64
}, // attributes with top score that have been identified on the object at any point
"current_attributes": [] // detailed data about the current attributes in this frame
"current_attributes": [], // detailed data about the current attributes in this frame
"current_estimated_speed": 0.71, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180 // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
},
"after": {
"id": "1607123955.475377-mxklsc",
@@ -89,7 +91,9 @@ Message published for each changed tracked object. The first message is publishe
"box": [442, 506, 534, 524],
"score": 0.86
}
]
],
"current_estimated_speed": 0.77, // current estimated speed (mph or kph) for objects moving through zones with speed estimation enabled
"velocity_angle": 180 // direction of travel relative to the frame for objects moving through zones with speed estimation enabled
}
}
```
@@ -218,6 +222,14 @@ Publishes the rms value for audio detected on this camera.
**NOTE:** Requires audio detection to be enabled
### `frigate/<camera_name>/enabled/set`
Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/enabled/state`
Topic with current state of processing for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/detect/set`
Topic to turn object detection for a camera on and off. Expected values are `ON` and `OFF`.
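For example, a sketch of a Home Assistant automation that turns detection off on a schedule using these topics (camera name and time are placeholders):

```yaml
automation:
  - alias: "Disable object detection overnight"
    trigger:
      - platform: time
        at: "23:00:00"
    action:
      - service: mqtt.publish
        data:
          topic: frigate/back_yard/detect/set
          payload: "OFF"
```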
@@ -312,6 +324,22 @@ Topic with current state of the PTZ autotracker for a camera. Published values a
Topic to determine if PTZ autotracker is actively tracking an object. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_alerts/set`
Topic to turn review alerts for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_alerts/state`
Topic with current state of review alerts for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_detections/set`
Topic to turn review detections for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_detections/state`
Topic with current state of review detections for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/birdseye/set`
Topic to turn Birdseye for a camera on and off. Expected values are `ON` and `OFF`. Birdseye mode
@@ -337,3 +365,19 @@ the camera to be removed from the view._
### `frigate/<camera_name>/birdseye_mode/state`
Topic with current state of the Birdseye mode for a camera. Published values are `CONTINUOUS`, `MOTION`, `OBJECTS`.
### `frigate/<camera_name>/notifications/set`
Topic to turn notifications on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/notifications/state`
Topic with current state of notifications. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/notifications/suspend`
Topic to suspend notifications for a certain number of minutes. Expected value is an integer.
### `frigate/<camera_name>/notifications/suspended`
Topic with timestamp that notifications are suspended until. Published value is a UNIX timestamp, or 0 if notifications are not suspended.
