Compare commits

388 Commits

Author SHA1 Message Date
dependabot[bot]
0b2bfe655c Bump python-multipart from 0.0.12 to 0.0.20 in /docker/main
Bumps [python-multipart](https://github.com/Kludex/python-multipart) from 0.0.12 to 0.0.20.
- [Release notes](https://github.com/Kludex/python-multipart/releases)
- [Changelog](https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Kludex/python-multipart/compare/0.0.12...0.0.20)

---
updated-dependencies:
- dependency-name: python-multipart
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-11 23:48:15 +00:00
dependabot[bot]
c6bed1e108 Update markupsafe requirement from ==2.1.* to ==3.0.* in /docker/main (#14213)
Updates the requirements on [markupsafe](https://github.com/pallets/markupsafe) to permit the latest version.
- [Release notes](https://github.com/pallets/markupsafe/releases)
- [Changelog](https://github.com/pallets/markupsafe/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/markupsafe/compare/2.1.0...3.0.0)

---
updated-dependencies:
- dependency-name: markupsafe
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-03-11 17:46:47 -06:00
OmriAx
7411a8bafa Hailo Official integration (#16906)
* Adding Models

* Final Async Update

* Bug Fixing

* Fix

* Adding fixes

* Working async infer

* Final documentation and debug update

* Removing some extra prints

* Post-process correct label push

* config docs fix

* Review Fix

* Review fix 2.0

* Fix the async API to go from 30ms to 10ms

* Fix for multi-stream async inference

* Format

* Fix #3

* Format#2

* Remove unnecessary includes

* Sort Imports
2025-03-11 13:36:07 -06:00
Nicolas Mowen
300f85720c Face blur factor (#17099)
* Add option to apply factor to face blurring

* Adjust blur factors

* Add debug log
2025-03-11 14:18:43 -05:00
Josh Hawkins
c9382f8969 Add docs for user roles (#17093)
* Add docs for user roles

* header mapping docs
2025-03-11 08:05:42 -06:00
Nicolas Mowen
c43092da9a Add self return for pydantic (#17091) 2025-03-11 07:57:00 -05:00
MkSavin
bea6cf29c2 fix(auth): Added trimming to jwt secret token read from .jwt_secret (#16467)
Trim leading and trailing whitespace and special characters when reading the secret token from a `.jwt_secret` file
2025-03-10 16:36:43 -06:00
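
A minimal sketch of the trimming described above; the file location and helper name are illustrative, not Frigate's actual implementation:

```python
from pathlib import Path

def read_jwt_secret(secret_file: Path = Path("/config/.jwt_secret")) -> str:
    """Read the JWT secret and strip surrounding whitespace and stray newlines."""
    raw = secret_file.read_text(encoding="utf-8")
    # strip() drops leading/trailing spaces, tabs, and newlines that editors often leave behind
    return raw.strip()
```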
Nicolas Mowen
0cc5d66e5b Refactor sub label api (#17079)
* Use event metadata updater to handle sub label operations

* Use event metadata publisher for sub label setting

* Formatting

* fix tests

* Cleanup
2025-03-10 16:29:29 -06:00
Nicolas Mowen
7d44970f78 Face multi select (#17068)
* Implement multi select for face library

* Clear list of selected

* Add keyboard shortcut
2025-03-10 10:01:52 -05:00
Josh Hawkins
2be5225440 More auth role fixes (#17067)
* simplify check and handle comma separated roles

* spacing
2025-03-10 10:00:35 -05:00
Josh Hawkins
cb25bd4a88 Auth role bugfixes (#17066)
* get correct role from header map

* fix profile endpoint
2025-03-10 07:59:24 -06:00
Hieu LE
b72afb6895 Fix ffmpeg cannot start because of loading shared lib (#16846)
* Fix #16845

Possibly as a result of PR #16712, the JP6 ffmpeg build is broken with the error `/usr/lib/ffmpeg/jetson/bin/ffmpeg: error while loading shared libraries: libavdevice.so.60: cannot open shared object file: No such file or directory`

This PR fixes the issue

* Adding new LD entry for ffmpeg new location

* Update Dockerfile.arm64

* Move LD config to Dockerfile arm64 instead of detector
2025-03-10 06:54:55 -06:00
Josh Hawkins
1c6700c688 Ensure audio listener is defined before trying to stop ffmpeg (#17045) 2025-03-09 22:01:18 -06:00
Josh Hawkins
9f9117c318 Ensure admin is default role (#17044) 2025-03-09 21:59:07 -06:00
Josh Hawkins
1e0295fad5 LPR tweaks (#17046) 2025-03-09 21:58:31 -06:00
Josh Hawkins
95b5854449 Small UI bugfix (#17035)
* test for more HA elements

* check if mobile and iOS instead of mobilesafari

* simplify

* fix for logs view
2025-03-09 07:47:10 -06:00
Josh Hawkins
cf3c0b2eb5 Prevent settings menu scroll on iOS proxy iframe from shifting entire UI (#17024) 2025-03-08 10:13:07 -06:00
Josh Hawkins
74ca009b0b UI viewer role (#16978)
* db migration

* db model

* assign admin role on password reset

* add role to jwt and api responses

* don't restrict api access for admins yet

* use json response

* frontend auth context

* update auth form for profile endpoint

* add access denied page

* add protected routes

* auth hook

* dialogs

* user settings view

* restrict viewer access to settings

* restrict camera functions for viewer role

* add password dialog to account menu

* spacing tweak

* migrator default to admin

* escape quotes in migrator

* ui tweaks

* tweaks

* colors

* colors

* fix merge conflict

* fix icons

* add api layer enforcement

* ui tweaks

* fix error message

* debug

* clean up

* remove print

* guard apis for admin only

* fix tests

* fix review tests

* use correct error responses from api in toasts

* add role to account menu
2025-03-08 10:01:08 -06:00
Nicolas Mowen
6f9d9cd5a8 Fix yolov9 link (#17007) 2025-03-07 08:50:04 -06:00
Nicolas Mowen
09705fd1b4 Update python deps (#17006) 2025-03-07 06:54:53 -07:00
dependabot[bot]
d831fab381 Bump actions/setup-python from 5.3.0 to 5.4.0 (#16184)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.3.0 to 5.4.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.3.0...v5.4.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 06:47:15 -07:00
Josh Hawkins
0e3e2e5ccc Add cameras filter to history view (#16995) 2025-03-06 19:00:15 -07:00
Chris Oelerich
f81bab8895 video['global'] can be empty resulting in a divide by zero (#16993)
* video['global'] can be empty resulting in a divide by zero

* formatting :(
2025-03-06 16:21:16 -06:00
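
A hedged illustration of the guard such a fix implies; the dictionary shape and key name mirror the commit title but are otherwise assumptions:

```python
def average_of_global(video: dict) -> float:
    """Average the values under video['global'], avoiding a divide-by-zero when it is empty."""
    samples = video.get("global") or []
    if not samples:
        return 0.0
    return sum(samples) / len(samples)
```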
Nicolas Mowen
433da8ffce Update web deps (#16983)
* Update vite

* Update LuIcons

* Update radix packages

* Fix other icons

* Use correct node version

* Remove superfluous web build on python tests

* Move web build to test
2025-03-06 10:50:37 -06:00
Nicolas Mowen
30acd26898 Disable detection by default (#16980)
* Enable detection by default

* Migrate config to have detect enabled if it is not
2025-03-06 08:00:29 -06:00
Nicolas Mowen
66d5f4f3b8 Refactor enabled camera listeners (#16979)
* Monitor if camera is disabled for review items

* Simplify multi camera disabled check

* Cleanup birdseye config handling

* Cleanup

* Remove old listeners
2025-03-06 07:59:35 -06:00
Nicolas Mowen
73c2c34127 Fix previews failing when disabled (#16962)
* Fix previews failing when offline

* Simplify frame cache handling
2025-03-05 08:07:48 -06:00
Josh Hawkins
ad0e89e147 Ensure disabling a camera also disables audio detection (#16961)
* Ensure disabling a camera also disables audio detection

* fix enabled state

* fix path
2025-03-05 06:30:23 -07:00
Josh Hawkins
73f8c97d1d Docs updates (#16949)
* live and lpr docs updates

* disabled clarity

* more disable clarity

* clarify sync_recordings
2025-03-04 17:29:11 -07:00
jdryden572
389c707ad2 Orient live camera feed for best screen fit when in fullscreen mode (#16947)
* Change orientation in fullscreen to best fit video

* Refactor effect to simplify, add more comments
2025-03-04 14:30:34 -06:00
leccelecce
c23653338f GenAI: allow configuring additional send trigger after_significant_updates as well as event_end (#16919) 2025-03-04 09:23:51 -07:00
Josh Hawkins
76c35307b2 Ensure genai thumbnails are always jpegs (#16939) 2025-03-04 07:34:19 -07:00
dxs-dev
92422d853d Fixed the issue where internal context copy occurs frequently. (#16931)
remove cache mount in nginx build

Co-authored-by: Ludis Hur <ludishur@dxsolution.kr>
2025-03-04 06:19:40 -07:00
Josh Hawkins
5210d8c0a2 Add camera enable switch to mobile drawer (#16929) 2025-03-03 17:41:28 -07:00
Nicolas Mowen
56079d080d Quick fix (#16926)
* fix

* Fix

* Fix incorrect default websocket value

* Cleanup value setting
2025-03-03 17:28:34 -06:00
Nicolas Mowen
2946c935ee Disabled camera output (#16920)
* Fix live cameras not showing on refresh

* Fix live dashboard when birdseye is added

* Handle cameras that are offline / disabled

* Use black instead of green frame

* Fix missing mqtt topics
2025-03-03 15:05:49 -06:00
D34DC3N73R
180b0af3c9 Adapt openai.py to work with xAI (#16903)
* Adapt openai.py to work with xAI

It appears xAI is stricter about how the prompt is sent. This changes the prompt to a dictionary with `"type": "text"`, which works with both OpenAI and xAI.

* Adapt openai.py to work with xAI

add "detail": "low"

* Adapt openai.py to work with xAI

Apply Ruff formatting and linting fixes
2025-03-03 12:53:24 -07:00
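
A sketch of the stricter prompt shape the commit describes, using the OpenAI-compatible chat format with a `"type": "text"` part and a low-detail image part; the base URL, model name, and image payload are placeholders:

```python
import base64

from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_KEY")  # xAI exposes an OpenAI-compatible API
image_b64 = base64.b64encode(open("snapshot.jpg", "rb").read()).decode()

response = client.chat.completions.create(
    model="grok-2-vision",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the main subject of this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}", "detail": "low"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```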
leccelecce
f3765bc391 GenAI minor refactor (#16916) 2025-03-03 11:01:02 -06:00
Josh Hawkins
531042467a Dynamically enable/disable cameras (#16894)
* config options

* metrics

* stop and restart ffmpeg processes

* dispatcher

* frontend websocket

* buttons for testing

* don't recreate log pipe

* add/remove cam from birdseye when enabling/disabling

* end all objects and send empty camera activity

* enable/disable switch in ui

* disable buttons when camera is disabled

* use enabled_in_config for some frontend checks

* tweaks

* handle settings pane with disabled cameras

* frontend tweaks

* change to debug log

* mqtt docs

* tweak

* ensure all ffmpeg processes are initially started

* clean up

* use zmq

* remove camera metrics

* remove camera metrics

* tweaks

* frontend tweaks
2025-03-03 08:30:52 -07:00
Nicolas Mowen
71e6e04d77 Remove rocm detector (#16913)
* Remove rocm detector plugin

* Update docs to recommend using onnx for rocm

* Formatting
2025-03-03 08:16:14 -06:00
Nicolas Mowen
0128ec2ba6 Upgrade RocM to 6.3.3 (#16900)
* Simplify rocm install and update to 6.3.1

* Build out more necessary packages

* Update to 6.3.3

* Set bake version

* Fix typo

* Ensure NHWC is used

* Reset dev changes

* Write to cache
2025-03-02 21:46:46 -06:00
Nicolas Mowen
b8f4cb5435 Fix docs (#16889) 2025-03-02 09:30:18 -07:00
Nicolas Mowen
4e03efaba9 Disable hailort log (#16888) 2025-03-02 09:26:59 -06:00
Nicolas Mowen
f56668e467 Update d-fine documentation (#16881) 2025-03-01 17:09:41 -06:00
Martin Weinelt
458134de5d Reuse constants (#16874) 2025-02-28 21:35:09 -07:00
Nicolas Mowen
06d6e21de8 Fix cuda targetarch (#16869) 2025-02-28 14:48:08 -06:00
Josh Hawkins
8d2f461350 Embeddings tweaks (#16864)
* make semantic search optional

* config

* frontend metrics

* docs

* tweak

* fixes

* also check genai cameras for embeddings context
2025-02-28 11:43:08 -07:00
Nicolas Mowen
db4152c4ca Fix jetson (#16854)
* Fix jetson build

* Update ci.yml

* Update Dockerfile.base

* Update Dockerfile.base

* Update Dockerfile.base

* Fix

* Update ci.yml
2025-02-27 17:24:03 -06:00
Jared
f221a7ae74 Quality of life documentation updates (#16852)
* Update getting_started with full host:container syntax for hwacc

* Update edgetpu.md

Add a tip about the coral TPU not changing identification until after Frigate runs an inference on the TPU.
2025-02-27 09:45:32 -07:00
toperichvania
2b7b5e3f08 Fix incorrect storage usage per camera (#16825) (#16851) 2025-02-27 08:28:53 -07:00
Nicolas Mowen
4f855f82ea Simplify tensorrt (#16835)
* Remove unnecessary trt wheels build

* Cleanup

* Try without local cuda

* Keep specific cuda libs only

* Cleanup

* Add newer libcufft

* remove target

* Include more
2025-02-26 14:39:19 -06:00
Josh Hawkins
d0e9bcbfdc Add ability to use Jina CLIP V2 for semantic search (#16826)
* add wheels

* move extra index url to bottom

* config model option

* add postprocess

* fix config

* jina v2 embedding class

* use jina v2 in embeddings

* fix ov inference

* frontend

* update reference config

* revert device

* fix truncation

* return np tensors

* use correct embeddings from inference

* manual preprocess

* clean up

* docs

* lower batch size for v2 only

* docs clarity

* wording
2025-02-26 07:58:25 -07:00
Josh Hawkins
447f26e1b9 Fix lpr metrics and add yolov9 plate detection metric (#16827) 2025-02-26 07:29:34 -07:00
Nicolas Mowen
7eb3c87fa0 UI tweaks (#16813)
* Add escape to close review details

* Refresh review page automatically if there are currently no items to review
2025-02-25 19:17:39 -06:00
Nicolas Mowen
7ce1b354cc Use native arm runner for arm docker builds (#16804)
* Try building jetpack on latest ubuntu version

* Update ci.yml

* run natively on arm

* Run all arm builds using arm runner

* Update ci.yml
2025-02-25 11:02:56 -06:00
Jason Hunter
0de928703f Initial implementation of D-FINE model via ONNX (#16772)
* initial implementation of D-FINE model

* revert docker-compose

* add docs for D-FINE

* remove weird auto-format issue
2025-02-24 08:56:01 -07:00
Josh Hawkins
1d8f1bd7ae Ensure sub label is null when submitting an empty string (#16779)
* null sub_label when submitting an empty string

* prevent cancel from submitting form

* fix test
2025-02-24 07:02:36 -07:00
Josh Hawkins
9414e001f3 Edit sub labels from the UI (#16764)
* Add ability to edit sub labels from tracked object detail dialog

* add allowEmpty prop

* use TextEntryDialog

* clean up

* text consistency
2025-02-23 16:56:48 -07:00
Tibladar
04a718dda8 Docs: Fix broken shm calculation (#16755)
* Docs: Fix broken shm calculation

* Docs: Change wording of shm template
2025-02-23 11:44:41 -07:00
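
That docs fix concerns the per-camera shared-memory sizing template; a rough Python rendering of the calculation's shape (YUV420 frames take 1.5 bytes per pixel — the frame count and fixed overhead used here are assumptions, so treat the official docs as authoritative):

```python
def camera_shm_mb(width: int, height: int, frames: int = 20, overhead_bytes: int = 270480) -> float:
    """Rough per-camera shared-memory estimate in MB for YUV420 frames (1.5 bytes per pixel)."""
    return (width * height * 1.5 * frames + overhead_bytes) / 1048576

print(f"{camera_shm_mb(1280, 720):.2f}MB")  # e.g. a 1280x720 detect stream
```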
Josh Hawkins
202b9d1c79 Check websocket correctly when no cameras are enabled/defined (#16762) 2025-02-23 11:11:18 -07:00
Josh Hawkins
22cbf74dc8 Fix frigate log deduplication (#16759) 2025-02-23 06:25:50 -07:00
Josh Hawkins
71f1ea86d2 Add note for notifications on iOS devices (#16744) 2025-02-22 09:19:37 -06:00
Nicolas Mowen
844ee089d8 Fix preview fetch (#16741) 2025-02-22 09:04:09 -06:00
Josh Hawkins
b3c1b21f80 Don't require model_runner for realtime processors (#16728) 2025-02-21 13:26:03 -07:00
Josh Hawkins
60b34bcfca Refactor processors and add LPR postprocessing (#16722)
* recordings data pub/sub

* function to process recording stream frames

* model runner

* lpr model runner

* refactor to mixin class and use model runner

* separate out realtime and post processors

* move model and mixin folders

* basic postprocessor

* clean up

* docs

* postprocessing logic

* clean up

* return none if recordings are disabled

* run postprocessor handle_requests too

* tweak expansion

* add put endpoint

* postprocessor tweaks with endpoint
2025-02-21 06:51:37 -07:00
Felipe Santos
e773d63c16 Improve ffmpeg versions handling (#16712)
* Improve ffmpeg versions handling

* Remove fallback from LIBAVFORMAT_VERSION_MAJOR, it should always be set

* Mention ffprobe in custom ffmpeg docs

* Fix ffmpeg extraction

* Fix go2rtc example formatting

* Add fallback back to LIBAVFORMAT_VERSION_MAJOR

* Fix linter
2025-02-20 18:07:41 -07:00
Nicolas Mowen
6e8e88692c Fix coral dep (#16713) 2025-02-20 17:50:50 -07:00
Nicolas Mowen
391ac70e60 Fix nullable thumbnail (#16706) 2025-02-20 12:19:16 -06:00
Nicolas Mowen
c736b1dae5 Refactor ONNX embedding class to use a base class and type-specific classes (#16703)
* Move onnx runner

* Build out base embedding

* Convert text embedding to separate class

* Move image embedding to separate

* Move LPR to separate class

* Remove mono embedding

* Simplify model downloading

* Reorganize jina v1 embeddings

* Cleanup

* Cleanup for review
2025-02-20 10:17:07 -06:00
Nicolas Mowen
649e5cfda5 Fix rockchip docker build (#16705) 2025-02-20 10:04:29 -06:00
Nicolas Mowen
c7db0f479d Only match on exact config name in some cases (#16704)
* Only match on exact config name in some cases

* Cleanup
2025-02-20 10:04:06 -06:00
Lander Noterman
94a9dcede8 JetPack 6: fix BASE_HOOK (#16701) 2025-02-20 05:29:22 -07:00
Josh Hawkins
0083d09a8b Bugfix: ensure all object labels are added to zones (#16686) 2025-02-19 06:25:39 -07:00
Josh Hawkins
b34fb8bf7c Improve LPR and speed zone docs (#16685)
* Improve lpr and speed zone docs

* clarify

* clarify
2025-02-19 07:19:41 -06:00
Nicolas Mowen
0f481546cd Fix build (#16683)
* Add necessary tensorrt bindings

* Include both versions of cudnn

* Update Dockerfile
2025-02-19 05:54:02 -07:00
Lander Noterman
c240512ef4 fix trt model prepare on jp6 (#16679) 2025-02-19 04:52:35 -07:00
Josh Hawkins
2b3ab02ebf object path plotter per camera with time selection dropdown (#16676) 2025-02-18 20:55:16 -07:00
KastB
7abf28bcbc fix syntax error: missing space (#15954) 2025-02-18 20:51:08 -07:00
Josh Hawkins
2277a88f4d Ensure range is undefined when canceling an export (#16673) 2025-02-18 11:50:32 -07:00
Nicolas Mowen
ab797c95af Remove jp4 build and add notes for jp6 (#16670) 2025-02-18 12:20:35 -06:00
Nicolas Mowen
c819961cc8 Update pull request template to suggest creating a discussion (#16669)
* Update pull request template to suggest creating a discussion for large changes

* Update template

* Ignore github changes
2025-02-18 10:17:18 -06:00
Nicolas Mowen
81b873dead Quick fixes (#16668)
* Write thumbnails to disk on shutdown

* Update text on face library page
2025-02-18 10:13:28 -06:00
Josh Hawkins
b6db97d313 Add metadata title field to exports (#16664) 2025-02-18 08:48:03 -06:00
Nicolas Mowen
f49a8009ec Remove thumb from database field (#16647)
* Remove thumbnail from dict

* Create thumbnail directory

* Cleanup handling of tracked object images

* Make thumbnail optional

* Handle cases where thumbnail is used

* Expand options for thumbnail api

* Fix up the does not exist condition

* Remove absolute usages of thumbnails

* Write thumbnails for external events

* Reduce webp quality

* Use webp everywhere in frontend

* Formatting

* Always consider all events when re-indexing

* Add thumbnail deletion and cleanup path management

* Cleanup imports

* Rename def

* Don't save thumbnail for every object

* Correct event count

* Use correct function

* Include thumbnail in query

* Remove unused

* Fix requiring exception
2025-02-18 07:46:29 -07:00
Lander Noterman
5bd412071a Add support for JetPack 6 (#16571)
* it builds!

* some fixes

* use python 3.11 (rc1)

* add deadsnakes ppa for more recent python 3.11 in jetson images

* fix pip stuff

* revert to tensor 8.6

* revert changes to docker/main

* add hook to install deadsnakes ppa for tensorrt/jetson

* remove unnecessary pip break-system-packages

* move tflite_runtime to requirements-wheels.txt
2025-02-18 07:38:07 -07:00
Josh Hawkins
1d3de77f63 Reorganize Lifecycle components (#16663)
* reorganize lifecycle components

* clean up
2025-02-18 07:17:51 -07:00
Josh Hawkins
4f88a5f2ad Object Lifecycle tweaks (#16648)
* Disable object path and add warning for autotracking cameras

* clean up
2025-02-17 16:03:51 -07:00
Ashton Johnson
11b1dbf0ff Update contributing.md (#16641)
Added note to include buildx plugin with Docker
2025-02-17 15:04:35 -07:00
Josh Hawkins
b961235187 Tracking fixes (#16645)
* use config enabled check for ptz cam tracker

* ensure we have an object match before accessing score history

* add comment for clarity
2025-02-17 14:49:24 -07:00
Josh Hawkins
124d92daa9 Improve Object Lifecycle pane (#16635)
* add path point tracking to backend

* types

* draw paths on lifecycle pane

* make points clickable

* don't display a path if we don't have any saved path points

* only object lifecycle points should have a click handler

* change to debug log

* better debug log message
2025-02-17 09:37:17 -07:00
Josh Hawkins
3f07d2d37c Improve notifications (#16632)
* add notification cooldown

* cooldown docs

* show alert box when notifications are used in an insecure context

* add ability to suspend notifications from dashboard context menu
2025-02-17 07:19:03 -07:00
Nicolas Mowen
1e709f5b3f Update coral deps and remove unused pycoral (#16630) 2025-02-17 06:57:05 -07:00
Nicolas Mowen
7b7387a68c Update info for face recognition (#16629) 2025-02-17 07:38:29 -06:00
Mitch Ross
2020cdffd5 Fix prometheus client exporter (#16620)
* wip

* wip

* put it back

* formatter

* Delete hailort.log

* Delete hailort.log

* lint

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-17 06:17:15 -07:00
Felipe Santos
349abc8d1b Remove internal docker container IP from WebRTC candidates (#16618) 2025-02-16 21:24:13 -07:00
Lukas Wolfsteiner
92553fa666 fix: Missing link to go2rtc's logging doc (#16606)
Updated the link to go2rtc docs for logging configuration.
2025-02-15 19:57:59 -06:00
Josh Hawkins
5264a18dfa Small fixes and docs tweaks (#16595)
* don't clear url params if we're creating a new object mask

* use correct threshold var in debug log

* docs tweak for mjpeg cameras
2025-02-15 06:56:45 -07:00
Josh Hawkins
6bb1a5dfd2 fix renaming exports with a slash (#16588) 2025-02-14 19:18:14 -07:00
Josh Hawkins
7b3556e4ad ensure we copy current frame to prevent a segfault crash (#16591) 2025-02-14 18:53:07 -06:00
Josh Hawkins
9a07505075 More LPR improvements (#16587)
* define a format option and adjust thresholds

* config updates

* docs

* docs clarity
2025-02-14 15:12:36 -07:00
Josh Hawkins
0b65137831 Return 404 for non-existent vod module media (#16586)
* Check if video source exists before showing player

* add comment

* also check 404

* language

* return 404 with vod module
2025-02-14 12:05:05 -07:00
Nicolas Mowen
761c5109dc Update nvidia driver req (#16560) 2025-02-13 17:18:38 -06:00
Josh Hawkins
729f5c0833 LPR improvements (#16559)
* use a small yolov9 model for detection

* use yolov9 for users without frigate+ and update retention algorithm

* new lpr config fields

* levenshtein distance package

* tweaks

* docs
2025-02-13 16:08:56 -07:00
Nicolas Mowen
f7199f205f Update docs for pcie coral driver (#16548) 2025-02-13 11:00:34 -06:00
Josh Hawkins
d6b5dc93cc Fix streaming dialog and use less text on register button (#16518) 2025-02-12 06:16:32 -07:00
Josh Hawkins
11baf237bc Ensure all streaming settings are saved correctly on mobile (#16511)
* Ensure streaming settings are saved correctly on mobile

* remove extra check
2025-02-11 16:49:22 -07:00
Nicolas Mowen
73fee6372b Remove obsolete event clip logic (#16504)
* Remove obsolete event clip logic

* Formatting
2025-02-11 15:16:10 -06:00
Josh Hawkins
2458f667c4 Refactor lpr into real time data processor (#16497) 2025-02-11 13:45:13 -07:00
Josh Hawkins
f3e2cf0a58 Small fix and docs update (#16499)
* Small docs tweak and bugfix

* don't remove page arg either
2025-02-11 13:23:41 -07:00
Nicolas Mowen
0f0b2687af Add support for YoloV9 to OpenVINO (#16495)
* Add support for yolov9 to OpenVINO

* Cleanup detector docs

* Fix link
2025-02-11 11:23:19 -07:00
Josh Hawkins
a3ede3cf8a Snap points to edges and create object mask from bounding box (#16488) 2025-02-11 09:08:28 -07:00
Nicolas Mowen
b594f198a9 Consolidate HailoRT into the main Docker Image (#16487)
* Simplify main build to include hailo

* Update docs

* Remove hailo docker build
2025-02-11 09:08:13 -07:00
Josh Hawkins
4ef6214029 Tracking improvements (#16484)
* norfair tracker config per object type

* change default R back to 3.4

* separate trackers for static and autotracking cameras

* tweak params and fix debug draw

* ensure all trackers are correctly updated even when there are no detections

* basic reid with histograms

* check mp value

* check mp value again

* stationary objects won't have embeddings

* don't switch trackers when autotracking is toggled after startup

* improve motion detection during autotracking

* use helper function

* get histogram in tracker instead of detect
2025-02-11 09:37:58 -06:00
Josh Hawkins
82f8694464 Toggle review alerts and detections (#16482)
* backend

* frontend

* docs

* fix topic name and initial websocket state

* update reference config

* fix mqtt docs

* fix initial topics

* don't apply max severity when alerts/detections are disabled

* fix ws merge

* tweaks
2025-02-11 07:46:25 -07:00
Josh Hawkins
c54259ecc6 use persistence for hls player muting (#16481) 2025-02-11 06:56:15 -07:00
Josh Hawkins
7e48b3514c remove extraneous print from recordings summary code (#16468) 2025-02-11 05:19:54 -07:00
Josh Hawkins
f0270c6e34 fix non-awaited onvif calls (#16469) 2025-02-11 05:19:20 -07:00
Nicolas Mowen
ac3dfbc30d Set stop event first (#16466) 2025-02-10 21:22:33 -06:00
Josh Hawkins
9a0211a71c Improve Notifications (#16453)
* backend

* frontend

* add notification config at camera level

* camera level notifications in dispatcher

* initial onconnect

* frontend

* backend for suspended notifications

* frontend

* use base communicator

* initialize all cameras in suspended array and use 0 for unsuspended

* remove switch and use select for suspending in frontend

* use timestamp instead of datetime

* frontend tweaks

* mqtt docs

* fix button width

* use grid for layout

* use thread and queue for processing notifications with 10s timeout

* clean up

* move async code to main class

* tweaks

* docs

* remove warning message
2025-02-10 19:47:15 -07:00
Nicolas Mowen
198d067e25 Implement support for YOLOv9 via ONNX (#16459)
* WIP yolov9

* Implement post processing for yolov9

* Cleanup detection

* Update docs to make note of supported yolov9

* Move post processing to separate utility

* Add note about other models
2025-02-10 15:00:12 -06:00
Josh Hawkins
72209986b6 Estimated object speed for zones (#16452)
* utility functions

* backend config

* backend object speed tracking

* draw speed on debug view

* basic frontend zone editor

* remove line sorting

* fix types

* highlight line on canvas when entering value in zone edit pane

* rename vars and add validation

* ensure speed estimation is disabled when user adds more than 4 points

* pixel velocity in debug

* unit_system in config

* ability to define unit system in config

* save max speed to db

* frontend

* docs

* clarify docs

* fix duplicates from merge

* include max_estimated_speed in api responses

* add units to zone edit pane

* catch undefined

* add average speed

* clarify docs

* only track average speed when object is active

* rename vars

* ensure points and distances are ordered clockwise

* only store the last 10 speeds like score history

* remove max estimated speed

* update docs

* update docs

* fix point ordering

* improve readability

* docs inertia recommendation

* fix point ordering

* check object frame time

* add velocity angle to frontend

* docs clarity

* add frontend speed filter

* fix mqtt docs

* fix mqtt docs

* don't try to remove distances if they weren't already defined

* don't display estimates on debug view/snapshots if object is not in a speed tracking zone

* docs

* implement speed_threshold for zone presence

* docs for threshold

* better ground plane image

* improve image zone size

* add inertia to speed threshold example
2025-02-10 13:23:42 -07:00
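
A simplified sketch of the estimation idea behind this feature: convert a pixel displacement between frames into real-world distance using a per-zone scale, then divide by elapsed time; the scale factor, units, and function signature are illustrative assumptions:

```python
def estimate_speed(
    prev_xy: tuple,
    curr_xy: tuple,
    prev_time: float,
    curr_time: float,
    meters_per_pixel: float,
) -> float:
    """Estimate speed in km/h from two object centroids and their frame timestamps."""
    dt = curr_time - prev_time
    if dt <= 0:
        return 0.0
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    pixel_distance = (dx * dx + dy * dy) ** 0.5
    meters = pixel_distance * meters_per_pixel
    return (meters / dt) * 3.6  # m/s -> km/h
```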
Josh Hawkins
dd7820e4ee Improve live streaming (#16447)
* config file changes

* config migrator

* stream selection on single camera live view

* camera streaming settings dialog

* manage persistent group streaming settings

* apply streaming settings in camera groups

* add ability to clear all streaming settings from settings

* docs

* update reference config

* fixes

* clarify docs

* use first stream as default in dialog

* ensure still image is visible after switching stream type to none

* docs

* clarify docs

* add ability to continue playing stream in background

* fix props

* put stream selection inside dropdown on desktop

* add capabilities to live mode hook

* live context menu component

* resize observer: only return new dimensions if they've actually changed

* pass volume prop to players

* fix slider bug, https://github.com/shadcn-ui/ui/issues/1448

* update react-grid-layout

* prevent animated transitions on draggable grid layout

* add context menu to dashboards

* use provider

* streaming dialog from context menu

* docs

* add jsmpeg warning to context menu

* audio and two way talk indicators in single camera view

* add link to debug view

* don't use hook

* create manual events from live camera view

* maintain grow classes on grid items

* fix initial volume state on default dashboard

* fix pointer events causing context menu to end up underneath image on iOS

* mobile drawer tweaks

* stream stats

* show settings menu for non-restreamed cameras

* consistent settings icon

* tweaks

* optional stats to fix birdseye player

* add toaster to live camera view

* fix crash on initial save in streaming dialog

* don't require restreaming for context menu streaming settings

* add debug view to context menu

* stats fixes

* update docs

* always show stream info when restreamed

* update camera streaming dialog

* make note of no h265 support for webrtc

* docs clarity

* ensure docs show streams as a dict

* docs clarity

* fix css file

* tweaks
2025-02-10 09:42:35 -07:00
Josh Hawkins
2a28964e63 Improve UI logs (#16434)
* use react-logviewer and backend streaming

* layout adjustments

* readd copy handler

* reorder and fix key

* add loading state

* handle frigate log consolidation

* handle newlines in sheet

* update react-logviewer

* fix scrolling and use chunked log download

* don't combine frigate log lines with timestamp

* basic deduplication

* move process logs function to services util

* improve layout and scrolling behavior

* clean up
2025-02-10 08:38:56 -07:00
Josh Hawkins
e207b2f50b Refactor export filenames to include start and end date/time (#16446) 2025-02-10 08:30:23 -07:00
Josh Hawkins
d5b60237a2 Improve display of recordings data (#16436)
* backend

* frontend

* add earliest recording available to storage metrics page
2025-02-09 16:02:36 -07:00
Josh Hawkins
bc96db8612 Add ability to set mqtt qos in config (#16435) 2025-02-09 16:45:04 -06:00
Nicolas Mowen
81bd956ae8 Fix sanitized api causing issues (#16433) 2025-02-09 14:50:12 -07:00
Josh Hawkins
c8cec63cb9 Object area debugging and improvements (#16432)
* add ability to specify min and max area as percentages

* debug draw area and ratio

* docs

* update for best percentage
2025-02-09 14:48:23 -07:00
Josh Hawkins
83beacf84a Add autotracking calibration message (#16431)
* check autotracking calibration values before writing to config

* docs

* clarify log message
2025-02-09 14:29:08 -07:00
Josh Hawkins
cc2dbdcb44 Timeline improvements (#16429)
* virtualize event segments

* use virtual segments in event review timeline

* add segmentkey to props

* virtualize motion segments

* use virtual segments in motion review timeline

* update draggable element hook to use only math

* timeline zooming hook

* add zooming to event review timeline

* update playground

* zoomable timeline on recording view

* consolidate divs in summary timeline

* only calculate motion data for visible motion segments

* use swr loading state

* fix motion only

* keep handlebar centered when zooming

* zoom animations

* clean up

* ensure motion only checks both halves of segment

* prevent handlebar jump when using motion only mode
2025-02-09 14:13:32 -07:00
towerhand
1f89844c67 Use /api/metrics instead of /metrics (#16425) 2025-02-09 12:50:42 -07:00
Nicolas Mowen
81a56549da Face recognition UI improvements (#16422)
* Rework face recognition APIs

* Fix error message on cancel

* Add ability to create new face library
2025-02-09 13:22:25 -06:00
Nicolas Mowen
c58d2add37 Fix missing prometheus commit (#16415)
* Add prometheus metrics

* add docs for metrics

* sidebar

* lint

* lint

---------

Co-authored-by: Mitch Ross <mitchross@users.noreply.github.com>
2025-02-09 10:04:39 -07:00
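
These metrics commits put a Prometheus exporter behind the API; a minimal FastAPI sketch of such an endpoint (the route path matches the commits above, but the metric itself is illustrative):

```python
from fastapi import FastAPI, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = FastAPI()
detections_total = Counter("detections_total", "Detections processed", ["camera"])

@app.get("/api/metrics")
def metrics() -> Response:
    # Render the default registry in the Prometheus text exposition format
    return Response(content=generate_latest(), media_type=CONTENT_TYPE_LATEST)
```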
Nicolas Mowen
a42ad7ead9 UI fixes (#16406)
* Fix new review item banner blocking third chip

* Fix custom export mode
2025-02-09 07:36:55 -06:00
Nicolas Mowen
973d3aed9a Disable jetson builds (#16396) 2025-02-08 17:20:58 -06:00
Nicolas Mowen
fa300742ea Fix build (#16393)
* Fix rpi build

* Attempt to fix jetson builds
2025-02-08 14:51:42 -06:00
Nicolas Mowen
15472274ee Update docs sidebar name (#16370)
* Clarify classification

* Fix face hierarchy as well
2025-02-08 12:47:01 -06:00
Nicolas Mowen
f3485bfc13 Sanitize provided name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
060ad34e1d Update cudnn and onnxruntime (#16332) 2025-02-08 12:47:01 -06:00
Josh Hawkins
ebf4403eca Add endpoint for fetching batch review items (#16254) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
fb316874ef Quick fix for face rec (#16226)
* Check both

* Fix api order
2025-02-08 12:47:01 -06:00
Nicolas Mowen
9236898a9d Sub label sensors (#16218)
* Support mqtt sensors for logo attributes

* Expose in api
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1c3527f5c4 Face recognition reprocess (#16212)
* Implement update topic

* Add API for reprocessing face

* Get reprocess working

* Fix crash when no faces exist

* Simplify
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6f4002a56f Add training face library information to docs (#16169) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f99ff65ed Face recognition improvements (#16034) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c7c8575c9b Bird classification (#15966)
* Start working on bird processor

* Initial setup for bird processing

* Improvements to handling

* Get classification working

* Cleanup classification

* Add classification config

* Update sort
2025-02-08 12:47:01 -06:00
Nicolas Mowen
63dbcd79e2 Update hailo deps (#15958) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
9dc85d4a76 Processing refactor (#15935)
* Refactor post processor to be real time processor

* Build out generic API for post processing

* Cleanup

* Fix
2025-02-08 12:47:01 -06:00
Nicolas Mowen
88686c44fe Generalize postprocessing (#15931)
* Actually send result to face registration

* Define postprocessing api and move face processing to fit

* Standardize request handling

* Standardize handling of processors

* Rename processing metrics

* Cleanup

* Standardize object end

* Update to newer formatting

* One more

* One more
2025-02-08 12:47:01 -06:00
Nicolas Mowen
3f1d85e189 Fix onvif packages (#15906)
* Don't replace packages

* Formatting
2025-02-08 12:47:01 -06:00
Josh Hawkins
283f1b19a7 Only print line and key/value when a line number can be found (#15897) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
ab8f9e5412 Upgrade onvif-zeep dependency to use onvif-zeep-async (#15894)
* Upgrade to new dependency

* Start onvif work

* Update for async calls
2025-02-08 12:47:01 -06:00
Nicolas Mowen
4f85b18b08 Improvements to face recognition (#15854)
* Do not add margin to face images

* remove margin

* Correctly clear
2025-02-08 12:47:01 -06:00
Nicolas Mowen
a6ae208fe7 Add metrics page for embeddings and face / license plate processing times (#15818)
* Get stats for embeddings inferences

* cleanup embeddings inferences

* Enable UI for feature metrics

* Change threshold

* Fix check

* Update python for actions

* Set python version

* Ignore type for now
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0c13227f7d Fix facedet download (#15811)
* Support downloading face models

* Handle download and loading correctly

* Add face dir creation

* Fix error

* Fix

* Formatting

* Move upload to button

* Show number of faces in library for each name

* Add text color for score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
1edbd2d498 Refactor camera activity processing (#15803)
* Replace object label sensors with new manager

* Implement zone topics

* remove unused
2025-02-08 12:47:01 -06:00
Marc Altmann
4c7d4e6c0a rockchip: update dependencies and add script for model conversion (#15699)
* rockchip: update dependencies and add script for model conversion

* rockchip: update docs

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
458ca4a983 Add support for SR-IOV GPU stats (#15796)
* Add option to treat GPU as SRIOV in order for stats to work correctly

* Add to intel docs

* fix tests
2025-02-08 12:47:01 -06:00
Nicolas Mowen
6a83f40135 Add ffmpeg config to increase HEVC compatibility with Apple devices (#15795)
* Add config option for handling HEVC playback on Apple devices

* Update docs

* Remove unused
2025-02-08 12:47:01 -06:00
Nicolas Mowen
281407247b Implement face recognition training in UI (#15786)
* Rename debug to train

* Add api to train image as person

* Cleanup model running

* Formatting

* Fix

* Set face recognition page title
2025-02-08 12:47:01 -06:00
Nicolas Mowen
172e7d494f Add UI for managing face recognitions (#15757)
* Add ability to view attempts

* Improve UI

* Cleanup

* Correctly refresh ui when item is deleted

* Select correct library by default

* Add min score

* Cleanup
2025-02-08 12:47:01 -06:00
Nicolas Mowen
8763390dfe Face recognition logic improvements (#15679)
* Always initialize face model on startup

* Add ability to save face images for debugging

* Implement better face recognition reasonability
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c26144da75 Change folder 2025-02-08 12:47:01 -06:00
Nicolas Mowen
d025495374 Set model size 2025-02-08 12:47:01 -06:00
Nicolas Mowen
f58fc4c367 Improve face recognition (#15670)
* Face recognition tuning

* Support face alignment

* Cleanup

* Correctly download model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
cc6a740a0f Update TRT (#15646) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
909444dacf Make face library scrollable 2025-02-08 12:47:01 -06:00
Nicolas Mowen
c28a0ed9a3 Update openvino (#15634) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
cd0d37ce07 Update python deps (#15618)
* Update opencv

* Update cython

* Update scikit

* Update scipy
2025-02-08 12:47:01 -06:00
Nicolas Mowen
d0ad840ef4 Enable temporary caching of camera images to improve responsiveness of UI (#15614) 2025-02-08 12:47:01 -06:00
Josh Hawkins
edab4efa42 Preserve line numbers in config validation (#15584)
* use ruamel to parse and preserve line numbers for config validation

* maintain exception for non validation errors

* fix types

* include input in log messages
2025-02-08 12:47:01 -06:00
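
A small sketch of how ruamel.yaml surfaces line numbers, the mechanism this commit relies on for config validation messages; the config snippet and key lookup are illustrative:

```python
from ruamel.yaml import YAML

yaml = YAML()
config_text = """\
cameras:
  front:
    detect:
      width: "not-a-number"
"""
data = yaml.load(config_text)
detect = data["cameras"]["front"]["detect"]
# ruamel's round-trip loader attaches .lc with 0-based line/column info per key
line, col = detect.lc.key("width")
print(f"validation error at line {line + 1}, column {col + 1}")
```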
Nicolas Mowen
877b7b2910 Update base image (#15103)
* Change base image

* Update python

* Update coral library

* Fix source file

* Install correct apt packages

* Cleanup

* Fix installation of coral deps

* fix python installations

* Fix devcontainer build

* Get tensorrt build working

* Update other deps

* Filter out tflite log

* Get ROCm build working

* Get rockchip build working

* Get hailo build working

* Add note to comment
2025-02-08 12:47:01 -06:00
Nicolas Mowen
66675cf977 Face recognition fixes (#15222)
* Fix nginx max upload size

* Close upload dialog when done and add toasts

* Formatting

* fix ruff
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e4ff91d6b Improve face recognition (#15205)
* Validate faces using cosine distance and SVC

* Formatting

* Use opencv instead of face embedding

* Update docs for training data

* Adjust to score system

* Set bounds

* remove face embeddings

* Update writing images

* Add face library page

* Add ability to select file

* Install opencv deps

* Cleanup

* Use different deps

* Move deps

* Cleanup

* Only show face library for desktop

* Implement deleting

* Add ability to upload image

* Add support for uploading images
2025-02-08 12:47:01 -06:00
Nicolas Mowen
dd7b1be7f4 Remove standardization 2025-02-08 12:47:01 -06:00
Nicolas Mowen
102a7695a3 Fix check 2025-02-08 12:47:01 -06:00
Nicolas Mowen
755c9eea1c Remove hardcoded face name 2025-02-08 12:47:01 -06:00
Nicolas Mowen
e5fcc50ae2 Use SVC to normalize and classify faces for recognition (#14835)
* Add margin to detected faces for embeddings

* Standardize pixel values for face input

* Use SVC to classify faces

* Clear classifier when new face is added

* Formatting

* Add dependency
2025-02-08 12:47:01 -06:00
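
A hedged sketch of the SVC-based approach these face-recognition commits describe: fit a classifier on labeled face embeddings, refit when the library changes, and require a minimum probability before accepting a match (dimensions and thresholds are placeholders):

```python
from typing import Optional

import numpy as np
from sklearn.svm import SVC

class FaceClassifier:
    """Classify face embeddings with an SVC; refit whenever the labeled face library changes."""

    def __init__(self) -> None:
        self.model: Optional[SVC] = None

    def fit(self, embeddings: np.ndarray, labels: list) -> None:
        # embeddings: (n_samples, embedding_dim); labels: one person name per sample
        self.model = SVC(kernel="linear", probability=True)
        self.model.fit(embeddings, labels)

    def predict(self, embedding: np.ndarray, min_score: float = 0.8) -> Optional[str]:
        if self.model is None:
            return None
        probs = self.model.predict_proba(embedding.reshape(1, -1))[0]
        best = int(np.argmax(probs))
        return str(self.model.classes_[best]) if probs[best] >= min_score else None
```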
Josh Hawkins
8bb037f82e Use regular expressions for plate matching (#14727) 2025-02-08 12:47:01 -06:00
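
A minimal illustration of regex-based plate matching as described above; the plate names and patterns are examples only:

```python
import re
from typing import Optional

known_plates = {
    "delivery_van": [r"^AB\d{3}CD$"],
    "neighbor": [r"^7[A-Z]{3}\d{3}$"],
}

def match_plate(detected: str) -> Optional[str]:
    """Return the first known-plate name whose pattern fully matches the detected text."""
    for name, patterns in known_plates.items():
        if any(re.fullmatch(pattern, detected) for pattern in patterns):
            return name
    return None

print(match_plate("AB123CD"))  # -> "delivery_van"
```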
Nicolas Mowen
a0c35101fb Update facenet model (#14647) 2025-02-08 12:47:01 -06:00
Josh Hawkins
af1eaac5ff LPR improvements (#14641) 2025-02-08 12:47:01 -06:00
Josh Hawkins
dbbfc735f0 Prevent division by zero in lpr confidence checks (#14615) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
711575736d Fix label check (#14610)
* Create config for parsing object

* Use in maintainer
2025-02-08 12:47:01 -06:00
Josh Hawkins
c4ce7f9800 License plate recognition (ALPR) backend (#14564)
* Update version

* Face recognition backend (#14495)

* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model

* Improve face recognition (#14537)

* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting

* Fix access (#14540)

* Face detection (#14544)

* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* initial foundation for alpr with paddleocr

* config

* config

* lpr maintainer

* clean up

* clean up

* fix processing

* don't process for stationary cars

* fix order

* fixes

* check for known plates

* improved length and character by character confidence

* model fixes and small tweaks

* docs

* placeholder for non frigate+ model lp detection

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-02-08 12:47:01 -06:00
Nicolas Mowen
594a4e0ba3 Face detection (#14544)
* Add support for face detection

* Add support for detecting faces during registration

* Set body size to be larger

* Undo
2025-02-08 12:47:01 -06:00
Nicolas Mowen
c1d5510428 Fix access (#14540) 2025-02-08 12:47:01 -06:00
Nicolas Mowen
a3d6266d96 Improve face recognition (#14537)
* Increase requirements for face to be set

* Manage faces properly

* Add basic docs

* Simplify

* Separate out face recognition from semantic search

* Update docs

* Formatting
2025-02-08 12:47:01 -06:00
Nicolas Mowen
aa19ec3ddb Face recognition backend (#14495)
* Add basic config and face recognition table

* Reconfigure updates processing to handle face

* Crop frame to face box

* Implement face embedding calculation

* Get matching face embeddings

* Add support for face recognition based on existing faces

* Use arcface face embeddings instead of generic embeddings model

* Add apis for managing faces

* Implement face uploading API

* Build out more APIs

* Add min area config

* Handle larger images

* Add more debug logs

* fix calculation

* Reduce timeout

* Small tweaks

* Use webp images

* Use facenet model
2025-02-08 12:47:01 -06:00
Nicolas Mowen
0e1139a7a4 Update version 2025-02-08 12:47:01 -06:00
Blake Blackshear
cc955b1e66 Merge remote-tracking branch 'origin/master' into dev 2025-02-08 10:42:48 -06:00
Josh Hawkins
da34ff964f Remove development wording (#16378) 2025-02-08 10:27:50 -06:00
Nicolas Mowen
d6a2965cb2 Update openvino hardware inference times (#16368) 2025-02-07 10:52:21 -06:00
Nicolas Mowen
4b429e440b Point to latest version of hailo script (#16351) 2025-02-06 10:31:24 -06:00
Josh Hawkins
8759b4a0d3 Clarify occupancy sensor usage in HA integration (#16333) 2025-02-05 09:56:16 -06:00
Rui Alves
df840b7cd5 Finish unit tests for review controller and started for event controller (#15955)
* Started unit tests for the review controller

* Revert "Started unit tests for the review controller"

This reverts commit 7746eb146f.

* Started unit tests for GET /review/activity/motion Endpoint

* Started unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/event/{event_id} Endpoint

* Continued unit tests for GET /review/{event_id} Endpoint

* Continued unit tests for GET /review/{review_id} Endpoint

* Added unit tests for GET /review/{review_id}/viewed Endpoint

* Added unit tests for GET /stats Endpoint

* Added unit tests for GET /events Endpoint

* Updated unit tests for GET /events Endpoint

* Deleted unit tests for /events from test_http (updated tests are now in test_http_event.py)

* Removed duplicated test for GET /review/activity/motion Endpoint
2025-02-04 06:28:14 -07:00
Nicolas Mowen
0645dc70a5 Detector docs (#16292)
* Refactor hardware docs to show model specific speeds

* Move hailo to first party detectors

* Make note of multiple detectors

* Improve hierarchy

* Update object_detectors.md

* Update hardware.md
2025-02-03 07:57:21 -06:00
Josh Hawkins
b230b35c62 Fix genai note (#16273) 2025-02-02 07:10:37 -07:00
Josh Hawkins
31da9351f0 Clarify genai provider and openai compatible endpoints (#16267) 2025-02-01 16:49:09 -07:00
Ben Clouser
93d39370b6 update docs to be more clear regarding audio support and go2rtc requi… (#16232)
* update docs to be more clear regarding audio support and go2rtc requirement

Signed-off-by: Ben Clouser <dev@benclouser.com>

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Update docs/docs/troubleshooting/faqs.md

* Clarify title

* Cleanup

---------

Signed-off-by: Ben Clouser <dev@benclouser.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-01-30 11:23:38 -07:00
Nicolas Mowen
cea210d800 Fix csrf (#16230)
* Fix csrf check

* Simplify
2025-01-30 11:27:38 -06:00
Josh Hawkins
7b65bcf13c Fix interpolation for autotracking cameras (#16211) 2025-01-29 06:44:13 -07:00
Josh Hawkins
335b7564d5 Update plus submission docs and remove 0.14 UI image (#16199) 2025-01-28 11:39:12 -06:00
Josh Hawkins
202e9ad9ce Document OPENAI_BASE_URL env var (#16195) 2025-01-28 08:58:15 -07:00
Nicolas Mowen
9dc4e8f290 Add frigate notify to third party extensions (#16190) 2025-01-28 08:09:59 -06:00
Nicolas Mowen
99d27c154e Don't show sub labels in main label filter list (#16168) 2025-01-27 08:07:49 -06:00
Nicolas Mowen
5943fc1895 Fix h265 encoding presets (#16158) 2025-01-26 17:14:02 -07:00
Josh Hawkins
9efc20e58a Fix selection of tracked objects in Explore on desktop Safari (#16153)
* ensure meta click works on desktop safari to select objects in explore

* don't break mobile
2025-01-26 10:57:38 -07:00
Nicolas Mowen
6d8234fa27 Fix build (#16119)
* Update to bake v6

* Update setup actions

* Temp

* Use ubuntu 22.04 for build

* Remove temp
2025-01-24 09:45:46 -06:00
Nicolas Mowen
ad76c28a66 Downgrade to bake-action v5 (#16098) 2025-01-23 09:18:15 -07:00
dependabot[bot]
131d07e649 Bump docker/bake-action from 3 to 6 (#15892)
Bumps [docker/bake-action](https://github.com/docker/bake-action) from 3 to 6.
- [Release notes](https://github.com/docker/bake-action/releases)
- [Commits](https://github.com/docker/bake-action/compare/v3...v6)

---
updated-dependencies:
- dependency-name: docker/bake-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-23 07:58:42 -07:00
Nicolas Mowen
776bb79f0b Consider pre and post capture when cleaning up recordings based on review segments (#16096) 2025-01-23 08:26:53 -06:00
glossyio
12e62488c6 corrected docs for /config/save to /api/config/save (#16077) 2025-01-21 16:52:42 -07:00
Josh Hawkins
aedfaa3641 Don't prevent default when tracked object details description input is focused (#16064) 2025-01-20 06:23:22 -07:00
Nicolas Mowen
83ac42cbdc Use correct path for script (#16045) 2025-01-18 21:33:13 -07:00
Nicolas Mowen
a5ce8d0d77 Fix env variable exporting (#16043) 2025-01-18 21:31:56 -06:00
Nicolas Mowen
0ee2e404da Correctly calculate ffmpeg version based on ffmpeg path (#16041)
* Correctly calculate ffmpeg version based on ffmpeg path

* Formatting
2025-01-18 20:30:35 -06:00
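
A sketch of deriving the ffmpeg version from a configured binary path, as the commit title suggests; the default path and parsing are illustrative:

```python
import re
import subprocess
from typing import Optional

def get_ffmpeg_version(ffmpeg_path: str = "/usr/lib/ffmpeg/7.0/bin/ffmpeg") -> Optional[str]:
    """Run `ffmpeg -version` for the given binary and parse the version string."""
    result = subprocess.run([ffmpeg_path, "-version"], capture_output=True, text=True)
    match = re.search(r"ffmpeg version (\S+)", result.stdout)
    return match.group(1) if match else None
```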
Marc Altmann
3947e79086 update FFmpeg to ensure compatibility with newer kernels (#16027) 2025-01-18 05:48:28 -07:00
Nicolas Mowen
91ab1071d2 Update docs to make note of go2rtc port requirement (#16013) 2025-01-16 16:14:40 -07:00
Nicolas Mowen
409e911752 Update integration docs (#15967) 2025-01-13 08:50:44 -06:00
tpjanssen
9983bd8d92 Fix API latest image quality and API MIME types (#15964)
* Fix API latest image quality

* Fix mime types

* Code formatting + media_type fix
2025-01-13 07:46:46 -06:00
Nicolas Mowen
32c71c4108 Clean up handling of ffmpeg specific params (#15956) 2025-01-12 17:47:24 -06:00
Josh Hawkins
ef6952e3ea Fix display of save button in tracked object details pane (#15946) 2025-01-11 15:23:52 -06:00
Nicolas Mowen
173b7aa308 Handle case where user has multiple manual events on same camera (#15943) 2025-01-11 07:47:45 -07:00
Blake Blackshear
c4727f19e1 Simplify plus submit (#15941)
* remove unused annotate file

* improve plus error messages

* formatting
2025-01-11 07:04:11 -07:00
Josh Hawkins
b8a74793ca Clarify motion recording (#15917)
* Clarify motion recording

* move to troubleshooting
2025-01-09 09:55:08 -07:00
Josh Hawkins
c1dede9369 Clarify reolink doorbell two way talk requirements (#15915)
* Clarify reolink doorbell two way talk requirements

* relative paths

* move to live section

* fix link
2025-01-09 09:31:16 -07:00
Nicolas Mowen
0c4ea504d8 Update proxmox docs to align with proxmox recommendation of running in VM. (#15904) 2025-01-08 17:19:04 -06:00
Nicolas Mowen
b265b6b190 Catch case where user has multiple of the same kind of GPU (#15903) 2025-01-08 17:17:57 -06:00
Nicolas Mowen
d57a61b50f Simplify model config (#15881)
* Add migration to migrate to model_path

* Simplify model config

* Cleanup docs

* Set config version

* Formatting

* Fix tests
2025-01-07 20:59:37 -07:00
Nicolas Mowen
4fc9106c17 Update for correct audio requirements (#15882) 2025-01-07 17:02:32 -06:00
Nicolas Mowen
38e098ca31 Remove extra data except from keypackets when using qsv (#15865) 2025-01-06 17:38:46 -06:00
Nicolas Mowen
e7ad38d827 Update model docs (#15779) 2025-01-02 10:04:16 -06:00
Blake Blackshear
b5e5127d48 update link (#15756) 2024-12-31 12:05:55 -06:00
Josh Hawkins
a1ce9aacf2 Tracked object details pane bugfix (#15736)
* restore save button in tracked object details pane

* conditionally show save button
2024-12-30 08:23:25 -06:00
Nicolas Mowen
322b847356 Fix event cleanup (#15724) 2024-12-29 14:47:40 -06:00
Josh Hawkins
98338e4c7f Ensure object lifecycle ratio is re-normalized to camera aspect (#15717) 2024-12-28 13:37:39 -07:00
Josh Hawkins
171a89f37b Language consistency - use Explore instead of Search (#15709) 2024-12-27 17:38:43 -07:00
Josh Hawkins
8114b541a8 Sort camera group edit screen by ui config values (#15705) 2024-12-27 14:30:27 -06:00
Josh Hawkins
c48396c5c6 Fix crash when streams are undefined in go2rtc config password cleaning (#15695) 2024-12-27 08:36:21 -06:00
leccelecce
00371546a3 GenAI: add ability to save JPGs sent to provider (#15643)
* GenAI: add ability to save JPGs sent to provider

* Remove mention from GenAI docs

* Change config name to debug_save_thumbnails

* Change  folder structure to clips/genai-requests/{event_id}/{1.jpg}
2024-12-23 07:05:34 -07:00
Nicolas Mowen
87e7b62c85 Remove duplicated rockchip build (#15641) 2024-12-22 13:31:14 -06:00
Nicolas Mowen
15ffe5c254 Fix trt (#15640) 2024-12-22 11:56:04 -07:00
Nicolas Mowen
a767dad3a1 Simplify TensorRT image (#15638) 2024-12-22 12:13:29 -06:00
Josh Hawkins
9387246f83 Add tooltips to ptz controls (#15633) 2024-12-21 17:57:22 -06:00
Nicolas Mowen
bed20de302 Update docs deps (#15617) 2024-12-20 10:37:02 -06:00
Nicolas Mowen
70fc5393b1 Make hailo wheels support any minor version (#15616) 2024-12-20 10:36:32 -06:00
dependabot[bot]
9b80dbe014 Bump actions/setup-python from 5.1.0 to 5.3.0 (#14584)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.1.0...v5.3.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-20 09:16:21 -07:00
Josh Hawkins
78a013d63a Add "frame" to shm frame names to avoid camera name issues (#15615) 2024-12-20 08:46:40 -06:00
PrplHaz4
24f4aa79c8 Change Amcrest example to subtype=3 (#15607)
I think this was meant to be a `3`
2024-12-19 21:47:11 -06:00
Nicolas Mowen
dfc94b5ad6 Add dahua and amcrest to camera specific documentation (#15605) 2024-12-19 17:24:34 -06:00
Gabriel de Biasi
ddfe8f3921 Fix #7944: Adds tls_insecure to the onvif configuration (#15603)
* Adds tls_insecure to the onvif configuration

* reformat using ruff
2024-12-19 12:54:33 -07:00
Nicolas Mowen
4af752028f Bug Fixes (#15598)
* Catch onvif command error

* fix review item pre and post capture

* Include severity in query
2024-12-19 09:46:14 -06:00
Nicolas Mowen
b149828c9f Catch OS error (#15590) 2024-12-18 17:45:08 -06:00
Josh Hawkins
3dc26e78ef Genai descriptions are not generated until tracked objects end (#15561) 2024-12-17 17:33:04 -06:00
Nicolas Mowen
5acbe37e6f Update camera specific settings to make note of hikvision authentication (#15552) 2024-12-17 11:31:59 -06:00
Giorgio Ughini
d9ef8fa206 Fix always the same image is sent to GenAI (#15550)
* Fix always the same image is sent to GenAI

* Fix typo for bug where identical images are sent to GenAI

* Correct formatting
2024-12-17 07:44:00 -06:00
Josh Hawkins
292499aebc Improve review message again (#15538) 2024-12-16 09:18:34 -07:00
Josh Hawkins
717493e668 Improve handling of error conditions with ollama and snapshot regeneration (#15527) 2024-12-15 20:51:23 -06:00
Josh Hawkins
d49f958d4d Don't crop by region for genai snapshot for manual events (#15525) 2024-12-15 17:03:19 -06:00
Nicolas Mowen
33ee32865f Ensure that go2rtc streams are cleaned (#15524)
* Ensure that go2rtc streams are cleaned

* Formatting

* Handle go2rtc config correctly

* Set type
2024-12-15 16:56:24 -06:00
Josh Hawkins
17f8939f97 Add FAQ to explain why streams might work in VLC but not in Frigate (#15513)
* Add faq to explain why streams might work in VLC but not in Frigate

* fix go2rtc version number

* wording

* mention udp input args and preset
2024-12-14 13:58:39 -06:00
FL42
1b7fe9523d fix: use requests.Session() for DeepStack API (#15505) 2024-12-14 07:54:13 -07:00
Josh Hawkins
0763f56047 Update iframe interval recommendation (#15501)
* Update iframe interval recommendation

* clarify

* tweaks

* wording
2024-12-13 12:52:56 -07:00
Josh Hawkins
1ea282fba8 Improve the message for missing objects in review items (#15500) 2024-12-13 12:02:41 -07:00
Blake Blackshear
869fa2631e apply zizmor recommendations (#15490) 2024-12-13 07:34:09 -06:00
Nicolas Mowen
f336a91fee Cleanup handling of first object message (#15480) 2024-12-12 21:22:47 -06:00
Nicolas Mowen
d302b6e198 Cap storage bandwidth (#15473) 2024-12-12 14:46:00 -06:00
Nicolas Mowen
ed2e1f3f72 Remove debug cleanup change (#15468) 2024-12-12 07:46:06 -07:00
Nicolas Mowen
b4d82084a9 Fixes (#15465)
* Fix single event return

* Allow customizing if search is preserved for overlay state

* Remove timeout

* Cleanup

* Cleanup naming
2024-12-12 08:22:30 -06:00
Josh Hawkins
53b96dfb89 Improve semantic search docs (#15453) 2024-12-11 20:19:08 -06:00
Nicolas Mowen
0e3fb6cbdd Standardize handling of config files (#15451)
* Standardize handling of config files

* Formatting

* Remove unused
2024-12-11 18:46:42 -06:00
Blake Blackshear
6b12a45a95 return 401 for login failures (#15432)
* return 401 for login failures

* only setup the rate limiter when configured
2024-12-10 06:42:55 -07:00
Nicolas Mowen
0b9c4c18dd Refactor event cleanup to consider review severity (#15415)
* Keep track of objects max review severity

* Refactor cleanup to split snapshots and clips

* Cleanup events based on review severity

* Cleanup review imports

* Don't catch detections
2024-12-09 08:25:45 -07:00
Nicolas Mowen
d0cc8cb64b API response cleanup (#15389)
* API response cleanup

* Remove extra field definition
2024-12-06 20:07:43 -06:00
Nicolas Mowen
bb86e71e65 fix auth remote addr access (#15378) 2024-12-06 10:25:43 -06:00
Josh Hawkins
8aa6297308 Ensure label does not overlap with box or go out of frame (#15376) 2024-12-06 08:32:16 -07:00
Nicolas Mowen
d3b631a952 Api improvements (#15327)
* Organize api files

* Add more API definitions for events

* Add export select by ID

* Typing fixes

* Update openapi spec

* Change type

* Fix test

* Fix message

* Fix tests
2024-12-06 08:04:02 -06:00
Nicolas Mowen
47d495fc01 Make note of go2rtc encoded URLs (#15348)
* Make note of go2rtc encoded URLs

* clarify
2024-12-04 16:54:57 -06:00
Nicolas Mowen
32322b23b2 Update nvidia docs to reflect preset (#15347) 2024-12-04 15:43:10 -07:00
Josh Hawkins
c0ba98e26f Explore sorting (#15342)
* backend

* add type and params

* radio group in ui

* ensure search_type is cleared on reset
2024-12-04 08:54:10 -07:00
Rui Alves
a5a7cd3107 Added more unit tests for the review controller (#15162) 2024-12-04 06:52:08 -06:00
Josh Hawkins
a729408599 preserve search query in overlay state hook (#15334) 2024-12-04 06:14:53 -06:00
Josh Hawkins
4dddc53735 move label placement when overlapping small boxes (#15310) 2024-12-02 13:07:12 -06:00
Josh Hawkins
5f42caad03 Explore bulk actions (#15307)
* use id instead of index for object details and scrolling

* long press package and hook

* fix long press in review

* search action group

* multi select in explore

* add bulk deletion to backend api

* clean up

* mimic behavior of review

* don't open dialog on left click when multi selecting

* context menu on container ref

* revert long press code

* clean up
2024-12-02 11:12:55 -07:00
Jan Čermák
5475672a9d Fix extraction of Hailo userspace libs (#15187)
The archive already has everything contained in a rootfs folder, extract
it as-is to the root folder. This also reverts changes from
33957e5360 which addressed the same issue
in a less optimal way.
2024-12-02 08:35:51 -06:00
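As a rough illustration of the approach this commit describes (the archive name and exact tar flags below are assumptions made for the sketch, not values taken from the build scripts), unpacking an archive that already ships a rootfs/ layout straight onto the root filesystem can be done in a single step:

# Hedged sketch only: extract an archive whose contents live under rootfs/
# directly to /, dropping the rootfs/ prefix, instead of staging the files
# elsewhere and copying them over afterwards.
tar -xzf hailort-userspace-libs.tar.gz -C / --strip-components=1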
James Livulpi
833cdcb6d2 fix audio event create (#15299) 2024-12-01 20:07:44 -06:00
Nicolas Mowen
c95bc9fe44 Handle case where camera name ends in number (#15296) 2024-12-01 12:33:10 -07:00
Josh Hawkins
a1fa9decad Fix event cleanup debug logging crash (#15293) 2024-12-01 12:37:45 -06:00
Josh Hawkins
4a5fe4138e Explore audio event tweaks (#15291) 2024-12-01 12:08:03 -06:00
Nicolas Mowen
002fdeae67 SHM tweaks (#15274)
* Use env var to control max number of frames

* Handle type

* Fix frame_name not being sent

* Formatting
2024-12-01 10:39:35 -06:00
tpjanssen
5802a66469 Fix audio events in explore section (#15286)
* Fix audio events in explore section

Make sure that audio events are listed in the explore section

* Update audio.py

* Hide other submit options

Only allow submits for objects
2024-12-01 07:47:37 -07:00
Alessandro Genova
71e8f75a01 Let the docker container spend more time to clean up and shut down (docs) (#15275) 2024-11-30 18:27:21 -06:00
Nicolas Mowen
ee816b2251 Fix camera access and improve typing (#15272)
* Fix camera access and improve typing:

* Formatting
2024-11-30 18:22:36 -06:00
Nicolas Mowen
f094c59cd0 Fix formatting (#15271) 2024-11-30 18:21:50 -06:00
Josh Hawkins
d25ffdb292 Fix crash when consecutive underscores are used in camera name (#15257) 2024-11-29 19:44:42 -07:00
Blake Blackshear
2461d01329 Update hardware recs (#15254) 2024-11-29 07:20:33 -06:00
Nicolas Mowen
2207a91f7b Fix ruff (#15223) 2024-11-27 12:57:58 -07:00
Nicolas Mowen
5cafca1be0 Add docs for go2rtc logging (#15204) 2024-11-26 09:34:40 -06:00
Nicolas Mowen
33957e5360 Set hailo build library path (#15167) 2024-11-24 19:07:41 -07:00
Nicolas Mowen
ff92b13f35 Fix sending events (#15100) 2024-11-20 09:37:33 -07:00
victpork
9c5a04f25f Added code to download weights from new host (#15087) 2024-11-20 05:06:22 -06:00
Rui Alves
e76f4e9bd9 Started unit tests for the review controller (#15077)
* Started unit tests for the review controller

* Revert "Started unit tests for the review controller"

This reverts commit 7746eb146f.

* Started unit tests for the review controller

* First test

* Added test for review endpoint (time filter - after + before)

* Assert expected event

* Added more tests for review endpoint

* Added test for review endpoint with all filters

* Added test for review endpoint with limit

* Comment

* Renamed tests to increase readability
2024-11-19 16:35:10 -07:00
Josh Hawkins
0df091f387 Fix link to api in genai docs (#15075) 2024-11-19 14:33:01 -06:00
Nicolas Mowen
66277fbb6c Fix embeddings (#15072)
* Fix embeddings reading frames

* Fix event update reading

* Formatting

* Pin AIO http to fix build failure

* Pin starlette
2024-11-19 12:20:04 -06:00
Josh Hawkins
a67ff3843a Update genai docs (#15070) 2024-11-19 08:41:16 -07:00
Josh Hawkins
9ae839ad72 Tracked object metadata changes (#15055)
* add enum and change topic name

* frontend renaming

* docs

* only display sublabel score if it exists

* remove debug print
2024-11-18 11:26:44 -07:00
Bazyl Ichabod Horsey
66f71aecf7 fix regex for cookie_name to be general snake case (#14854)
* fix regex for cookie_name to be general snake case

* Update frigate/config/auth.py

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>

---------

Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
2024-11-18 11:26:36 -07:00
Nicolas Mowen
0b203a3673 fix writing to birdseye restream buffer (#15052) 2024-11-18 10:14:49 -06:00
Nicolas Mowen
26c3f9f914 Fix birdseye (#15051) 2024-11-18 09:38:58 -06:00
Nicolas Mowen
474c248c9d Cleanup correctly (#15043) 2024-11-17 16:57:58 -06:00
Nicolas Mowen
5b1b6b5be0 Fix round robin (#15035)
* Move camera SHM frame creation to main process

* Don't reset frame index

* Don't fail if shm exists

* Set more types
2024-11-17 11:25:49 -06:00
Nicolas Mowen
45e9030358 Round robin SHM management (#15027)
* Output frame name to frames processor

* Finish implementing round robin

* Formatting
2024-11-16 16:00:19 -07:00
Nicolas Mowen
f9c1600f0d Duplicate onnx build info (#15020) 2024-11-16 13:24:42 -06:00
Josh Hawkins
ad85f8882b Update ollama docs and add genai debug logging (#15012) 2024-11-15 15:24:17 -06:00
Nicolas Mowen
206ed06905 Make all SHM management untracked (#15011) 2024-11-15 14:14:37 -07:00
Nicolas Mowen
e407ba47c2 Increase max shm frames (#15009) 2024-11-15 14:25:57 -06:00
Nicolas Mowen
7fdf42a56f Various Fixes (#15004)
* Don't track shared memory in frame tracker

* Don't track any instance

* Don't assign sub label to objects when multiple cars are overlapping

* Formatting

* Fix assignment
2024-11-15 09:54:59 -07:00
Levi Tomes
4eea541352 Updated Documentation: Autotracking add support details for Sunba 405-D20X 4K camera. (#14352)
* Add support details for Sunba 405-D20X 4K camera.

* Update cameras.md

Updated changes to meet documentation goals of upstream project.
2024-11-15 05:35:43 -07:00
Charles Crossan
1ffdd32013 Update authentication.md (#14980)
add detail to reset_admin_password setting
2024-11-14 08:13:37 -07:00
Nicolas Mowen
99506845f7 Update edge tpu docs for RPi 5 kernel (#14946) 2024-11-12 15:48:57 -06:00
Josh Hawkins
ed9c67804a UI fixes (#14933)
* Fix plus dialog

* Remove activity indicator on review item download button

* fix explore view
2024-11-12 05:37:25 -07:00
Nicolas Mowen
9c20cd5f7b Handle in progress previews export and fix time check bug (#14930)
* Handle in progress previews and fix time check bug

* Formatting
2024-11-11 09:30:55 -06:00
Rui Alves
6c86827d3a Fix small typo (#14915) 2024-11-11 05:02:46 -07:00
Rui Alves
d2b2f3d54d Use custom body for the export recordings endpoint (#14908)
* Use custom body for the export recordings endpoint

* Fixed usage of ExportRecordingsBody

* Updated docs to reflect changes to export endpoint

* Fix friendly name and source

* Updated openAPI spec
2024-11-10 20:26:47 -07:00
Josh Hawkins
64b3397f8e Add tooltip and change default value for is_submitted (#14910) 2024-11-10 19:25:16 -06:00
Josh Hawkins
0829517b72 Add ability to filter Explore by Frigate+ submission status (#14909)
* backend

* add is_submitted to query params

* add submitted filter to dialog

* allow is_submitted filter selection with input
2024-11-10 16:57:11 -06:00
Austin Kirsch
c1bfc1df67 fix tensorrt model generation variable (#14902) 2024-11-10 16:23:32 -06:00
Nicolas Mowen
96c0c43dc8 Add support for specifying tensorrt device (#14898) 2024-11-10 08:43:24 -06:00
Nicolas Mowen
a68c7f4ef8 Pin all intel packages (#14887) 2024-11-09 11:08:25 -06:00
Nicolas Mowen
7c474e6827 Pin intel driver (#14884)
* Pin intel driver

* Use slightly older version
2024-11-09 08:09:36 -07:00
Josh Hawkins
143bab87f1 Genai bugfix (#14880)
* Fix genai init when disabled at global level

* use genai config for class init
2024-11-09 06:48:53 -07:00
Josh Hawkins
580f35112e revert changes to audio process to prevent shutdown hang (#14872) 2024-11-08 11:47:46 -07:00
Josh Hawkins
3249ffb273 Auto-unmute inbound audio when enabling two way audio (#14871)
* Automatically enable audio when initiating two way talk with mic

* remove check
2024-11-08 09:19:49 -06:00
Josh Hawkins
7bae9463b2 Small general filter bugfix (#14870) 2024-11-08 08:49:05 -06:00
Josh Hawkins
ae30ac6e3c Refactor general review filter to only call the update function once (#14866) 2024-11-08 07:45:00 -06:00
Nicolas Mowen
46ed520886 Don't generate tensorrt models by default (#14865) 2024-11-08 07:37:18 -06:00
Nicolas Mowen
ace02a6dfa Don't pass hwaccel args to preview (#14851) 2024-11-07 17:24:38 -06:00
Josh Hawkins
0d59754be2 Small genai fix (#14850)
* Ensure the regenerate button shows when genai is only enabled at the camera level

* update docs
2024-11-07 13:27:55 -07:00
Josh Hawkins
15bd26c9b1 Re-send camera states after websocket disconnects and reconnects (#14847) 2024-11-07 08:25:13 -06:00
Josh Hawkins
bc371acb3e Cleanup batching (#14836)
* Implement batching for event cleanup

* remove import

* add debug logging
2024-11-06 10:05:44 -07:00
Nicolas Mowen
2eb5fbf112 Add more debug logs for preview and output (#14833) 2024-11-06 07:59:33 -06:00
Blake Blackshear
ffd05f90f3 update hardware recommendations (#14830) 2024-11-06 05:02:42 -07:00
Josh Hawkins
fc0fb158d5 Show dialog when restarting from config editor (#14815)
* Show restart dialog when restarting from config editor

* don't save until confirmed restart
2024-11-05 09:33:41 -06:00
Nicolas Mowen
404807c697 UI tweaks (#14814)
* Tweak bird icon and fix _ in object name

* Apply to all capitalization
2024-11-05 08:29:07 -07:00
Josh Hawkins
29ea7c53f2 Bugfixes (#14813)
* fix api filter from matching event id as timestamp

* add padding on platform aware sheet for tablets

* docs tweaks

* tweaks
2024-11-05 07:22:41 -07:00
Josh Hawkins
1fc4af9c86 Optimize Explore summary database query (#14797)
* Optimize explore summary query with index

* implement rollback
2024-11-04 16:04:49 -07:00
Josh Hawkins
ac762762c3 Overwrite existing saved search (#14792)
* Overwrite existing saved search

* simplify
2024-11-04 12:50:05 -07:00
Nicolas Mowen
553676aade Fix missing tensor_input (#14790) 2024-11-04 13:04:33 -06:00
Nicolas Mowen
a13b9815f6 Various fixes (#14786)
* Catch openvino error

* Remove clip deletion

* Update deletion text

* Fix timeline not respecting timezone config

* Tweaks

* More timezone fixes

* Fix

* More timezone fixes

* Fix shm docs
2024-11-04 07:07:57 -07:00
Nicolas Mowen
156e7cc628 Clarify semantic search GPU (#14767)
* Clarify semantic search GPU

* clarity

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* fix wording

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2024-11-03 19:49:13 -06:00
Josh Hawkins
959ca0f412 Fix object processing logic for detections (#14766) 2024-11-03 17:41:31 -07:00
Felipe Santos
9755fa0537 Fix exports migration when there is none (#14761) 2024-11-03 10:00:12 -07:00
Felipe Santos
77ec86d31a Fix devcontainer when there is no ~/.ssh/known_hosts file (#14758) 2024-11-03 08:52:27 -07:00
leccelecce
189d4b459f Avoid divide by zero in shm_frame_count (#14750) 2024-11-03 08:28:19 -07:00
leccelecce
44f40966e7 Docs: correct go2rtc version used (#14753) 2024-11-03 05:16:59 -07:00
Blake Blackshear
3a8c290f91 update docs for new labels (#14739) 2024-11-03 06:10:38 -06:00
Josh Hawkins
7d3313e732 Add ability to view tracked objects in Explore from review item details pane (#14744) 2024-11-02 17:16:07 -06:00
Blake Blackshear
591b50dfa7 Merge remote-tracking branch 'origin/master' into dev 2024-11-02 08:28:34 -05:00
Blake Blackshear
27ef661fec simplify hailort (#14734) 2024-11-02 07:13:28 -05:00
joshjryan
d7935abc14 Set the loglevel for OpenCV ffmpeg messages to fatal (#14728)
* Set the loglevel for OpenCV ffmpeg messages to fatal

* Set OPENCV_FFMPEG_LOGLEVEL in Dockerfile
2024-11-01 20:01:38 -06:00
Josh Hawkins
11068aa9d0 Fix validation activity indicator (#14730)
* Don't show two spinners when loading/revalidating search results

* clarify
2024-11-01 19:52:00 -06:00
Josh Hawkins
1234003527 Fix width of object lifecycle buttons (#14729) 2024-11-01 18:30:40 -06:00
Nicolas Mowen
e5ebf938f6 Fix float input (#14720) 2024-11-01 06:55:55 -06:00
Josh Hawkins
8c2c07fd18 UI tweaks (#14719)
* Show activity indicator when search grid is revalidating

* improve frigate+ button title grammar
2024-11-01 06:37:52 -06:00
Josh Hawkins
9e1a50c3be Clean up copy output (#14705)
* Remove extra spacing for next/prev carousel buttons

* Clarify ollama genai docs

* Clean up copied gpu info output

* Clean up copied gpu info output

* Better display when manually copying/pasting log data
2024-10-31 13:48:26 -06:00
Nicolas Mowen
ac8ddada0b Various fixes (#14703)
* Fix not retaining custom events

* Fix media apis
2024-10-31 07:31:01 -05:00
Josh Hawkins
885485da70 Small tweaks (#14700)
* Remove extra spacing for next/prev carousel buttons

* Clarify ollama genai docs
2024-10-31 06:58:33 -05:00
Nicolas Mowen
bb4e863e87 Fix jetson onnxruntime (#14698)
* Fix jetson onnxruntime

* Remove comment
2024-10-30 19:16:28 -05:00
Nicolas Mowen
c7a4220d65 Jetson onnxruntime (#14688)
* Add support for using onnx runtime with jetson

* Update docs

* Clarify
2024-10-30 08:22:28 -06:00
Nicolas Mowen
03dd9b2d42 Don't open file with read permissions if there is no need to write to it (#14689) 2024-10-30 08:22:20 -06:00
Nicolas Mowen
89ca085b94 Add info about GPUs that are supported for semantic search (#14687)
* Add specific information about GPUs that are supported for semantic search

* clarity
2024-10-30 07:41:58 -05:00
Nicolas Mowen
fffd9defea Add docs update to type of change (#14686) 2024-10-30 06:30:00 -06:00
Nicolas Mowen
d10fea6012 Add specific section about GPU in semantic search (#14685) 2024-10-30 07:23:10 -05:00
Nicolas Mowen
ab26aee8b2 Fix config loading (#14684) 2024-10-30 07:16:56 -05:00
Josh Hawkins
bb80a7b2ee UI changes and bugfixes (#14669)
* Home/End buttons for search input and max 8 search columns

* Fix lifecycle label

* remove video tab if tracked object has no clip

* hide object lifecycle if there is no clip

* add test for filter value to ensure only fully numeric values are set as numbers
2024-10-30 05:54:06 -06:00
Evan Jarrett
e4a6b29279 fix string comparison on mqtt error message for Server unavailable (#14675) 2024-10-30 05:05:58 -06:00
Blake Blackshear
d12c7809dd Update Hailo Driver to 4.19 (#14674)
* update to hailo driver 4.19

* update builds for 4.19
2024-10-29 18:40:24 -05:00
Nicolas Mowen
357ce0382e Fixes (#14668)
* Fix environment vars reading

* fix yaml returning none

* Assume rocm model is onnx despite file extension
2024-10-29 15:34:07 -05:00
Josh Hawkins
73da3d9b20 Use strict equality check for annotation offset in object lifecycle settings (#14667) 2024-10-29 13:33:09 -05:00
Josh Hawkins
e67b7a6d5e Add ability to use carousel buttons to scroll through object lifecycle elements (#14662) 2024-10-29 10:28:17 -05:00
Nicolas Mowen
4e25bebdd0 Add ability to configure model input dtype (#14659)
* Add input type for dtype

* Add ability to manually enable TRT execution provider

* Formatting
2024-10-29 10:28:05 -05:00
Dan Raper
abd22d2566 Update create_config.py (#14658) 2024-10-29 08:31:03 -06:00
Josh Hawkins
8aeb597780 Fix sublabel and icon spacing (#14651) 2024-10-29 07:06:19 -06:00
Nicolas Mowen
af844ea9d5 Update coral troubleshooting docs (#14370)
* Update coral docs for latest ubuntu

* capitalization

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2024-10-15 10:39:31 -05:00
Nicolas Mowen
51509760e3 Update object docs (#14295) 2024-10-12 07:13:00 -05:00
JC
f86957e5e1 Improve docs on exports API endpoints (#14224)
* Add (optional) export name to the create-export API endpoint docs

* Add the exports list endpoint to the docs
2024-10-08 19:15:10 -05:00
Nicolas Mowen
2a15b95f18 Docs updates (#14202)
* Clarify live docs

* Link out to common config examples in getting started guide

* Add tip for go2rtc name configuration

* direct link
2024-10-07 15:28:24 -05:00
Blake Blackshear
039ab1ccd7 add docs for yolonas plus models (#14161)
* add docs for yolonas plus models

* typo
2024-10-05 14:51:05 -05:00
417 changed files with 32461 additions and 11135 deletions

View File

@@ -2,6 +2,7 @@ aarch
absdiff
airockchip
Alloc
alpr
Amcrest
amdgpu
analyzeduration
@@ -12,6 +13,7 @@ argmax
argmin
argpartition
ascontiguousarray
astype
authelia
authentik
autodetected
@@ -42,6 +44,7 @@ codeproject
colormap
colorspace
comms
cooldown
coro
ctypeslib
CUDA
@@ -60,6 +63,7 @@ dsize
dtype
ECONNRESET
edgetpu
facenet
fastapi
faststart
fflags
@@ -113,6 +117,8 @@ itemsize
Jellyfin
jetson
jetsons
jina
jinaai
joserfc
jsmpeg
jsonify
@@ -186,6 +192,7 @@ openai
opencv
openvino
OWASP
paddleocr
paho
passwordless
popleft
@@ -195,6 +202,7 @@ poweroff
preexec
probesize
protobuf
pstate
psutil
pubkey
putenv
@@ -278,6 +286,7 @@ uvicorn
vaapi
vainfo
variations
vbios
vconcat
vitb
vstream
@@ -305,4 +314,4 @@ yolo
yolonas
yolox
zeep
zerolatency
zerolatency

View File

@@ -8,9 +8,25 @@
"overrideCommand": false,
"remoteUser": "vscode",
"features": {
"ghcr.io/devcontainers/features/common-utils:1": {}
"ghcr.io/devcontainers/features/common-utils:2": {}
// Uncomment the following lines to use ONNX Runtime with CUDA support
// "ghcr.io/devcontainers/features/nvidia-cuda:1": {
// "installCudnn": true,
// "installNvtx": true,
// "installToolkit": true,
// "cudaVersion": "12.5",
// "cudnnVersion": "9.4.0.58"
// },
// "./features/onnxruntime-gpu": {}
},
"forwardPorts": [8971, 5000, 5001, 5173, 8554, 8555],
"forwardPorts": [
8971,
5000,
5001,
5173,
8554,
8555
],
"portsAttributes": {
"8971": {
"label": "External NGINX",
@@ -64,10 +80,18 @@
"editor.formatOnType": true,
"python.testing.pytestEnabled": false,
"python.testing.unittestEnabled": true,
"python.testing.unittestArgs": ["-v", "-s", "./frigate/test"],
"python.testing.unittestArgs": [
"-v",
"-s",
"./frigate/test"
],
"files.trimTrailingWhitespace": true,
"eslint.workingDirectories": ["./web"],
"isort.args": ["--settings-path=./pyproject.toml"],
"eslint.workingDirectories": [
"./web"
],
"isort.args": [
"--settings-path=./pyproject.toml"
],
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true,
@@ -86,9 +110,16 @@
],
"editor.tabSize": 2
},
"cSpell.ignoreWords": ["rtmp"],
"cSpell.words": ["preact", "astype", "hwaccel", "mqtt"]
"cSpell.ignoreWords": [
"rtmp"
],
"cSpell.words": [
"preact",
"astype",
"hwaccel",
"mqtt"
]
}
}
}
}
}

View File

@@ -0,0 +1,22 @@
{
"id": "onnxruntime-gpu",
"version": "0.0.1",
"name": "ONNX Runtime GPU (Nvidia)",
"description": "Installs ONNX Runtime for Nvidia GPUs.",
"documentationURL": "",
"options": {
"version": {
"type": "string",
"proposals": [
"latest",
"1.20.1",
"1.20.0"
],
"default": "latest",
"description": "Version of ONNX Runtime to install"
}
},
"installsAfter": [
"ghcr.io/devcontainers/features/nvidia-cuda"
]
}

View File

@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
VERSION=${VERSION}
python3 -m pip config set global.break-system-packages true
# if VERSION == "latest" or VERSION is empty, install the latest version
if [ "$VERSION" == "latest" ] || [ -z "$VERSION" ]; then
python3 -m pip install onnxruntime-gpu
else
python3 -m pip install onnxruntime-gpu==$VERSION
fi
echo "Done!"

View File

@@ -3,10 +3,12 @@
set -euxo pipefail
# Cleanup the old github host key
sed -i -e '/AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31\/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi\/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==/d' ~/.ssh/known_hosts
# Add new github host key
curl -L https://api.github.com/meta | jq -r '.ssh_keys | .[]' | \
sed -e 's/^/github.com /' >> ~/.ssh/known_hosts
if [[ -f ~/.ssh/known_hosts ]]; then
# Add new github host key
sed -i -e '/AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31\/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi\/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==/d' ~/.ssh/known_hosts
curl -L https://api.github.com/meta | jq -r '.ssh_keys | .[]' | \
sed -e 's/^/github.com /' >> ~/.ssh/known_hosts
fi
# Frigate normal container runs as root, so it has permission to create
# the folders. But the devcontainer runs as the host user, so we need to
@@ -17,7 +19,7 @@ sudo chown -R "$(id -u):$(id -g)" /media/frigate
# When started as a service, LIBAVFORMAT_VERSION_MAJOR is defined in the
# s6 service file. For dev, where frigate is started from an interactive
# shell, we define it in .bashrc instead.
echo 'export LIBAVFORMAT_VERSION_MAJOR=$(/usr/lib/ffmpeg/7.0/bin/ffmpeg -version | grep -Po "libavformat\W+\K\d+")' >> $HOME/.bashrc
echo 'export LIBAVFORMAT_VERSION_MAJOR=$("$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)" -version | grep -Po "libavformat\W+\K\d+")' >> "$HOME/.bashrc"
make version

View File

@@ -33,9 +33,9 @@ runs:
with:
string: ${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Log in to the Container registry
uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc
with:

View File

@@ -1,5 +1,11 @@
## Proposed change
<!--
Thank you!
If you're introducing a new feature or significantly refactoring existing functionality,
we encourage you to start a discussion first. This helps ensure your idea aligns with
Frigate's development goals.
Describe what this pull request does and how it will benefit users of Frigate.
Please describe in detail any considerations, breaking changes, etc. that are
made in this pull request.
@@ -13,6 +19,7 @@
- [ ] New feature
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code
- [ ] Documentation Update
## Additional information

View File

@@ -7,7 +7,7 @@ on:
- dev
- master
paths-ignore:
- 'docs/**'
- "docs/**"
# only run the latest commit to avoid cache overwrites
concurrency:
@@ -19,11 +19,13 @@ env:
jobs:
amd64_build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -40,11 +42,13 @@ jobs:
tags: ${{ steps.setup.outputs.image-name }}-amd64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
arm64_build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04-arm
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -62,8 +66,9 @@ jobs:
${{ steps.setup.outputs.image-name }}-standard-arm64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
- name: Build and push RPi build
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rpi
files: docker/rpi/rpi.hcl
@@ -71,47 +76,15 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
- name: Build and push Rockchip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
jetson_jp4_build:
runs-on: ubuntu-latest
name: Jetson Jetpack 4
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 4)
env:
ARCH: arm64
BASE_IMAGE: timongentzsch/l4t-ubuntu20-opencv:latest
SLIM_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
TRT_BASE: timongentzsch/l4t-ubuntu20-opencv:latest
uses: docker/bake-action@v4
with:
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp4
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp4,mode=max
jetson_jp5_build:
runs-on: ubuntu-latest
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -123,8 +96,9 @@ jobs:
BASE_IMAGE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
SLIM_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
TRT_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
@@ -132,14 +106,45 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
jetson_jp6_build:
runs-on: ubuntu-22.04-arm
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 6)
env:
ARCH: arm64
BASE_IMAGE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
SLIM_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
TRT_BASE: nvcr.io/nvidia/tensorrt:23.12-py3-igpu
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp6
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp6,mode=max
amd64_extra_builds:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: AMD64 Extra Build
needs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -148,8 +153,9 @@ jobs:
- name: Build and push TensorRT (x86 GPU)
env:
COMPUTE_LEVEL: "50 60 70 80 90"
uses: docker/bake-action@v4
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
@@ -157,68 +163,49 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-rocm,mode=max
*.cache-from=type=gha
arm64_extra_builds:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04-arm
name: ARM Extra Build
needs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Rockchip build
uses: docker/bake-action@v3
uses: docker/bake-action@v6
with:
source: .
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
combined_extra_builds:
runs-on: ubuntu-latest
name: Combined Extra Builds
needs:
- amd64_build
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Hailo-8l build
uses: docker/bake-action@v4
with:
push: true
targets: h8l
files: docker/hailo8l/h8l.hcl
set: |
h8l.tags=${{ steps.setup.outputs.image-name }}-h8l
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l,mode=max
- name: AMD/ROCm general build
env:
AMDGPU: gfx
HSA_OVERRIDE: 0
uses: docker/bake-action@v3
with:
push: true
targets: rocm
files: docker/rocm/rocm.hcl
set: |
rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: Assemble and push default build
needs:
- amd64_build

View File

@@ -1,24 +0,0 @@
name: dependabot-auto-merge
on: pull_request
permissions:
contents: write
jobs:
dependabot-auto-merge:
runs-on: ubuntu-latest
if: github.actor == 'dependabot[bot]'
steps:
- name: Get Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@v2
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Enable auto-merge for Dependabot PRs
if: steps.metadata.outputs.dependency-type == 'direct:development' && (steps.metadata.outputs.update-type == 'version-update:semver-minor' || steps.metadata.outputs.update-type == 'version-update:semver-patch')
run: |
gh pr review --approve "$PR_URL"
gh pr merge --auto --squash "$PR_URL"
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -3,10 +3,11 @@ name: On pull request
on:
pull_request:
paths-ignore:
- 'docs/**'
- "docs/**"
- ".github/**"
env:
DEFAULT_PYTHON: 3.9
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:
@@ -19,9 +20,11 @@ jobs:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
@@ -38,6 +41,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
@@ -52,11 +57,16 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
- run: npm install
working-directory: ./web
- name: Build web
run: npm run build
working-directory: ./web
# - name: Test
# run: npm run test
# working-directory: ./web
@@ -67,8 +77,10 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.1.0
uses: actions/setup-python@v5.4.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements
@@ -88,14 +100,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
- uses: actions/setup-node@master
with:
node-version: 16.x
- run: npm install
working-directory: ./web
- name: Build web
run: npm run build
working-directory: ./web
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx

View File

@@ -11,6 +11,8 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- id: lowercaseRepo
uses: ASzc/change-string-case-action@v6
with:
@@ -22,10 +24,13 @@ jobs:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create tag variables
env:
TAG: ${{ github.ref_name }}
LOWERCASE_REPO: ${{ steps.lowercaseRepo.outputs.lowercase }}
run: |
BUILD_TYPE=$([[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
BUILD_TYPE=$([[ "${TAG}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
echo "BUILD_TYPE=${BUILD_TYPE}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${LOWERCASE_REPO}" >> $GITHUB_ENV
echo "BUILD_TAG=${GITHUB_SHA::7}" >> $GITHUB_ENV
echo "CLEAN_VERSION=$(echo ${GITHUB_REF##*/} | tr '[:upper:]' '[:lower:]' | sed 's/^[v]//')" >> $GITHUB_ENV
- name: Tag and push the main image
@@ -34,14 +39,14 @@ jobs:
STABLE_TAG=${BASE}:stable
PULL_TAG=${BASE}:${BUILD_TAG}
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk h8l rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
done
# stable tag
if [[ "${BUILD_TYPE}" == "stable" ]]; then
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk h8l rocm; do
for variant in standard-arm64 tensorrt tensorrt-jp5 tensorrt-jp6 rk h8l rocm; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
done
fi

View File

@@ -23,7 +23,9 @@ jobs:
exempt-pr-labels: "pinned,security,dependencies"
operations-per-run: 120
- name: Print outputs
run: echo ${{ join(steps.stale.outputs.*, ',') }}
env:
STALE_OUTPUT: ${{ join(steps.stale.outputs.*, ',') }}
run: echo "$STALE_OUTPUT"
# clean_ghcr:
# name: Delete outdated dev container images
@@ -38,4 +40,3 @@ jobs:
# account-type: personal
# token: ${{ secrets.GITHUB_TOKEN }}
# token-type: github-token

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.15.0
VERSION = 0.16.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty

View File

@@ -61,7 +61,7 @@ def start(id, num_detections, detection_queue, event):
object_detector.cleanup()
print(f"{id} - Processed for {duration:.2f} seconds.")
print(f"{id} - FPS: {object_detector.fps.eps():.2f}")
print(f"{id} - Average frame processing time: {mean(frame_times)*1000:.2f}ms")
print(f"{id} - Average frame processing time: {mean(frame_times) * 1000:.2f}ms")
######

View File

@@ -23,7 +23,7 @@ services:
# count: 1
# capabilities: [gpu]
environment:
YOLO_MODELS: yolov7-320
YOLO_MODELS: ""
devices:
- /dev/bus/usb:/dev/bus/usb
# - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware
@@ -38,4 +38,4 @@ services:
container_name: mqtt
image: eclipse-mosquitto:1.6
ports:
- "1883:1883"
- "1883:1883"

View File

@@ -1,104 +0,0 @@
# syntax=docker/dockerfile:1.6
ARG DEBIAN_FRONTEND=noninteractive
# Build Python wheels
FROM wheels AS h8l-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/hailo8l/requirements-wheels-h8l.txt /requirements-wheels-h8l.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
# Create a directory to store the built wheels
RUN mkdir /h8l-wheels
# Build the wheels
RUN pip3 wheel --wheel-dir=/h8l-wheels -c /requirements-wheels.txt -r /requirements-wheels-h8l.txt
# Build HailoRT and create wheel
FROM wheels AS build-hailort
ARG TARGETARCH
SHELL ["/bin/bash", "-c"]
# Install necessary APT packages
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https \
gnupg \
wget \
# the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
&& wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.9 \
python3.9-dev \
build-essential cmake git \
&& rm -rf /var/lib/apt/lists/*
# Extract Python version and set environment variables
RUN PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}' | cut -d. -f1,2) && \
PYTHON_VERSION_NO_DOT=$(echo $PYTHON_VERSION | sed 's/\.//') && \
echo "PYTHON_VERSION=$PYTHON_VERSION" > /etc/environment && \
echo "PYTHON_VERSION_NO_DOT=$PYTHON_VERSION_NO_DOT" >> /etc/environment
# Clone and build HailoRT
RUN . /etc/environment && \
git clone https://github.com/hailo-ai/hailort.git /opt/hailort && \
cd /opt/hailort && \
git checkout v4.18.0 && \
cmake -H. -Bbuild -DCMAKE_BUILD_TYPE=Release -DHAILO_BUILD_PYBIND=1 -DPYBIND11_PYTHON_VERSION=${PYTHON_VERSION} && \
cmake --build build --config release --target libhailort && \
cmake --build build --config release --target _pyhailort && \
cp build/hailort/libhailort/bindings/python/src/_pyhailort.cpython-${PYTHON_VERSION_NO_DOT}-$(if [ $TARGETARCH == "amd64" ]; then echo 'x86_64'; else echo 'aarch64'; fi )-linux-gnu.so hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/ && \
cp build/hailort/libhailort/src/libhailort.so hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/
RUN ls -ahl /opt/hailort/build/hailort/libhailort/src/
RUN ls -ahl /opt/hailort/hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/
# Remove the existing setup.py if it exists in the target directory
RUN rm -f /opt/hailort/hailort/libhailort/bindings/python/platform/setup.py
# Copy generate_wheel_conf.py and setup.py
COPY docker/hailo8l/pyhailort_build_scripts/generate_wheel_conf.py /opt/hailort/hailort/libhailort/bindings/python/platform/generate_wheel_conf.py
COPY docker/hailo8l/pyhailort_build_scripts/setup.py /opt/hailort/hailort/libhailort/bindings/python/platform/setup.py
# Run the generate_wheel_conf.py script
RUN python3 /opt/hailort/hailort/libhailort/bindings/python/platform/generate_wheel_conf.py
# Create a wheel file using pip3 wheel
RUN cd /opt/hailort/hailort/libhailort/bindings/python/platform && \
python3 setup.py bdist_wheel --dist-dir /hailo-wheels
# Use deps as the base image
FROM deps AS h8l-frigate
# Copy the wheels from the wheels stage
COPY --from=h8l-wheels /h8l-wheels /deps/h8l-wheels
COPY --from=build-hailort /hailo-wheels /deps/hailo-wheels
COPY --from=build-hailort /etc/environment /etc/environment
RUN CC=$(python3 -c "import sysconfig; import shlex; cc = sysconfig.get_config_var('CC'); cc_cmd = shlex.split(cc)[0]; print(cc_cmd[:-4] if cc_cmd.endswith('-gcc') else cc_cmd)") && \
echo "CC=$CC" >> /etc/environment
# Install the wheels
RUN pip3 install -U /deps/h8l-wheels/*.whl
RUN pip3 install -U /deps/hailo-wheels/*.whl
RUN . /etc/environment && \
mv /usr/local/lib/python${PYTHON_VERSION}/dist-packages/hailo_platform/pyhailort/libhailort.so /usr/lib/${CC} && \
cd /usr/lib/${CC}/ && \
ln -s libhailort.so libhailort.so.4.18.0
# Copy base files from the rootfs stage
COPY --from=rootfs / /
# Set environment variables for Hailo SDK
ENV PATH="/opt/hailort/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/lib/$(if [ $TARGETARCH == "amd64" ]; then echo 'x86_64'; else echo 'aarch64'; fi )-linux-gnu:${LD_LIBRARY_PATH}"
# Set workdir
WORKDIR /opt/frigate/

View File

@@ -1,27 +0,0 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "rootfs"
}
target h8l {
dockerfile = "docker/hailo8l/Dockerfile"
contexts = {
wheels = "target:wheels"
deps = "target:deps"
rootfs = "target:rootfs"
}
platforms = ["linux/arm64","linux/amd64"]
}

View File

@@ -1,15 +0,0 @@
BOARDS += h8l
local-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=frigate:latest-h8l \
--load
build-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l
push-h8l: build-h8l
docker buildx bake --file=docker/hailo8l/h8l.hcl h8l \
--set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l \
--push

View File

@@ -1,67 +0,0 @@
import json
import os
import platform
import sys
import sysconfig
def extract_toolchain_info(compiler):
# Remove the "-gcc" or "-g++" suffix if present
if compiler.endswith("-gcc") or compiler.endswith("-g++"):
compiler = compiler.rsplit("-", 1)[0]
# Extract the toolchain and ABI part (e.g., "gnu")
toolchain_parts = compiler.split("-")
abi_conventions = next(
(part for part in toolchain_parts if part in ["gnu", "musl", "eabi", "uclibc"]),
"",
)
return abi_conventions
def generate_wheel_conf():
conf_file_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)), "wheel_conf.json"
)
# Extract current system and Python version information
py_version = f"cp{sys.version_info.major}{sys.version_info.minor}"
arch = platform.machine()
system = platform.system().lower()
libc_version = platform.libc_ver()[1]
# Get the compiler information
compiler = sysconfig.get_config_var("CC")
abi_conventions = extract_toolchain_info(compiler)
# Create the new configuration data
new_conf_data = {
"py_version": py_version,
"arch": arch,
"system": system,
"libc_version": libc_version,
"abi": abi_conventions,
"extension": {
"posix": "so",
"nt": "pyd", # Windows
}[os.name],
}
# If the file exists, load the existing data
if os.path.isfile(conf_file_path):
with open(conf_file_path, "r") as conf_file:
conf_data = json.load(conf_file)
# Update the existing data with the new data
conf_data.update(new_conf_data)
else:
# If the file does not exist, use the new data
conf_data = new_conf_data
# Write the updated data to the file
with open(conf_file_path, "w") as conf_file:
json.dump(conf_data, conf_file, indent=4)
if __name__ == "__main__":
generate_wheel_conf()

View File

@@ -1,111 +0,0 @@
import json
import os
from setuptools import find_packages, setup
from wheel.bdist_wheel import bdist_wheel as orig_bdist_wheel
class NonPurePythonBDistWheel(orig_bdist_wheel):
"""Makes the wheel platform-dependent so it can be based on the _pyhailort architecture"""
def finalize_options(self):
orig_bdist_wheel.finalize_options(self)
self.root_is_pure = False
def _get_hailort_lib_path():
lib_filename = "libhailort.so"
lib_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
f"hailo_platform/pyhailort/{lib_filename}",
)
if os.path.exists(lib_path):
print(f"Found libhailort shared library at: {lib_path}")
else:
print(f"Error: libhailort shared library not found at: {lib_path}")
raise FileNotFoundError(f"libhailort shared library not found at: {lib_path}")
return lib_path
def _get_pyhailort_lib_path():
conf_file_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)), "wheel_conf.json"
)
if not os.path.isfile(conf_file_path):
raise FileNotFoundError(f"Configuration file not found: {conf_file_path}")
with open(conf_file_path, "r") as conf_file:
content = json.load(conf_file)
py_version = content["py_version"]
arch = content["arch"]
system = content["system"]
extension = content["extension"]
abi = content["abi"]
# Construct the filename directly
lib_filename = f"_pyhailort.cpython-{py_version.split('cp')[1]}-{arch}-{system}-{abi}.{extension}"
lib_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
f"hailo_platform/pyhailort/{lib_filename}",
)
if os.path.exists(lib_path):
print(f"Found _pyhailort shared library at: {lib_path}")
else:
print(f"Error: _pyhailort shared library not found at: {lib_path}")
raise FileNotFoundError(
f"_pyhailort shared library not found at: {lib_path}"
)
return lib_path
def _get_package_paths():
packages = []
pyhailort_lib = _get_pyhailort_lib_path()
hailort_lib = _get_hailort_lib_path()
if pyhailort_lib:
packages.append(pyhailort_lib)
if hailort_lib:
packages.append(hailort_lib)
packages.append(os.path.abspath("hailo_tutorials/notebooks/*"))
packages.append(os.path.abspath("hailo_tutorials/hefs/*"))
return packages
if __name__ == "__main__":
setup(
author="Hailo team",
author_email="contact@hailo.ai",
cmdclass={
"bdist_wheel": NonPurePythonBDistWheel,
},
description="HailoRT",
entry_points={
"console_scripts": [
"hailo=hailo_platform.tools.hailocli.main:main",
]
},
install_requires=[
"argcomplete",
"contextlib2",
"future",
"netaddr",
"netifaces",
"verboselogs",
"numpy==1.23.3",
],
name="hailort",
package_data={
"hailo_platform": _get_package_paths(),
},
packages=find_packages(),
platforms=[
"linux_x86_64",
"linux_aarch64",
"win_amd64",
],
url="https://hailo.ai/",
version="4.17.0",
zip_safe=False,
)

View File

@@ -1,12 +0,0 @@
appdirs==1.4.4
argcomplete==2.0.0
contextlib2==0.6.0.post1
distlib==0.3.6
filelock==3.8.0
future==0.18.2
importlib-metadata==5.1.0
importlib-resources==5.1.2
netaddr==0.8.0
netifaces==0.10.9
verboselogs==1.7
virtualenv==20.17.0

View File

@@ -4,6 +4,7 @@
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget
hailo_version="4.20.0"
arch=$(uname -m)
if [[ $arch == "x86_64" ]]; then
@@ -13,7 +14,7 @@ else
fi
# Clone the HailoRT driver repository
git clone --depth 1 --branch v4.18.0 https://github.com/hailo-ai/hailort-drivers.git
git clone --depth 1 --branch v${hailo_version} https://github.com/hailo-ai/hailort-drivers.git
# Build and install the HailoRT driver
cd hailort-drivers/linux/pcie
@@ -38,7 +39,7 @@ cd ../../
if [ ! -d /lib/firmware/hailo ]; then
sudo mkdir /lib/firmware/hailo
fi
sudo mv hailo8_fw.4.18.0.bin /lib/firmware/hailo/hailo8_fw.bin
sudo mv hailo8_fw.*.bin /lib/firmware/hailo/hailo8_fw.bin
# Install udev rules
sudo cp ./linux/pcie/51-hailo-udev.rules /etc/udev/rules.d/

View File

@@ -3,14 +3,29 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG BASE_IMAGE=debian:11
ARG SLIM_BASE=debian:11-slim
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
ARG BASE_IMAGE=debian:12
ARG SLIM_BASE=debian:12-slim
# A hook that allows us to inject commands right after the base images
ARG BASE_HOOK=
FROM ${BASE_IMAGE} AS base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK
FROM --platform=${BUILDPLATFORM} debian:11 AS base_host
RUN sh -c "$BASE_HOOK"
FROM --platform=${BUILDPLATFORM} debian:12 AS base_host
ARG PIP_BREAK_SYSTEM_PACKAGES
FROM ${SLIM_BASE} AS slim-base
ARG PIP_BREAK_SYSTEM_PACKAGES
ARG BASE_HOOK
RUN sh -c "$BASE_HOOK"
FROM slim-base AS wget
ARG DEBIAN_FRONTEND
@@ -24,10 +39,7 @@ ARG DEBIAN_FRONTEND
ENV CCACHE_DIR /root/.ccache
ENV CCACHE_MAXSIZE 2G
# bind /var/cache/apt to tmpfs to speed up nginx build
RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
--mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
--mount=type=cache,target=/root/.ccache \
RUN --mount=type=bind,source=docker/main/build_nginx.sh,target=/deps/build_nginx.sh \
/deps/build_nginx.sh
FROM wget AS sqlite-vec
@@ -139,24 +151,17 @@ ARG TARGETARCH
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https \
gnupg \
wget \
# the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
&& wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
apt-transport-https wget \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.9 \
python3.9-dev \
python3.11 \
python3.11-dev \
# opencv dependencies
build-essential cmake git pkg-config libgtk-3-dev \
libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
gfortran openexr libatlas-base-dev libssl-dev\
libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
libtbbmalloc2 libtbb-dev libdc1394-dev libopenexr-dev \
libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
# sqlite3 dependencies
tclsh \
@@ -164,8 +169,7 @@ RUN apt-get -qq update \
gcc gfortran libopenblas-dev liblapack-dev && \
rm -rf /var/lib/apt/lists/*
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
@@ -180,6 +184,9 @@ RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
/deps/install_hailort.sh
# Collect deps in a single layer
FROM scratch AS deps-rootfs
@@ -190,6 +197,7 @@ COPY --from=libusb-build /usr/local/lib /usr/local/lib
COPY --from=tempio /rootfs/ /
COPY --from=s6-overlay /rootfs/ /
COPY --from=models /rootfs/ /
COPY --from=wheels /rootfs/ /
COPY docker/main/rootfs/ /
@@ -211,15 +219,25 @@ ENV TOKENIZERS_PARALLELISM=true
# https://github.com/huggingface/transformers/issues/27214
ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1
# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
ENV LIBAVFORMAT_VERSION_MAJOR=60
# Install dependencies
RUN --mount=type=bind,source=docker/main/install_deps.sh,target=/deps/install_deps.sh \
/deps/install_deps.sh
ENV DEFAULT_FFMPEG_VERSION="7.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:5.0"
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
python3 -m pip install --upgrade pip && \
pip3 install -U /deps/wheels/*.whl
COPY --from=deps-rootfs / /
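One part of this Dockerfile change worth calling out is the new BASE_HOOK build argument, which the comment earlier in the diff describes as a hook for injecting commands right after the base images. A possible invocation is sketched below; the hook body and image tag are illustrative assumptions, not values defined anywhere in this diff:

# Hedged sketch: pass BASE_HOOK at build time so it runs via RUN sh -c "$BASE_HOOK"
# in both the base and slim-base stages shown above.
docker buildx build \
  --build-arg BASE_HOOK="apt-get update && apt-get install -y --no-install-recommends ca-certificates" \
  -f docker/main/Dockerfile \
  -t frigate:custom-base .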

View File

@@ -8,10 +8,16 @@ SECURE_TOKEN_MODULE_VERSION="1.5"
SET_MISC_MODULE_VERSION="v0.33"
NGX_DEVEL_KIT_VERSION="v0.3.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
apt-get update
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep nginx
apt-get -yqq install --no-install-recommends ca-certificates wget

View File

@@ -4,7 +4,7 @@ from openvino.tools import mo
ov_model = mo.convert_model(
"/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb",
compress_to_fp16=True,
transformations_config="/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
transformations_config="/usr/local/lib/python3.11/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
tensorflow_object_detection_api_pipeline_config="/models/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config",
reverse_input_channels=True,
)

View File

@@ -4,8 +4,15 @@ set -euxo pipefail
SQLITE_VEC_VERSION="0.1.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
source /etc/os-release
if [[ "$VERSION_ID" == "12" ]]; then
sed -i '/^Types:/s/deb/& deb-src/' /etc/apt/sources.list.d/debian.sources
else
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
fi
apt-get update
apt-get -yqq build-dep sqlite3 gettext git

View File

@@ -6,89 +6,73 @@ apt-get -qq update
apt-get -qq install --no-install-recommends -y \
apt-transport-https \
ca-certificates \
gnupg \
wget \
lbzip2 \
procps vainfo \
unzip locales tzdata libxml2 xz-utils \
python3.9 \
python3-pip \
python3.11 \
curl \
lsof \
jq \
nethogs
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0
# ensure python3 defaults to python3.9
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
mkdir -p -m 600 /root/.gnupg
# add coral repo
curl -fsSLo - https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
gpg --dearmor -o /etc/apt/trusted.gpg.d/google-cloud-packages-archive-keyring.gpg
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
echo "libedgetpu1-max libedgetpu/accepted-eula select true" | debconf-set-selections
# install coral runtime
wget -q -O /tmp/libedgetpu1-max.deb "https://github.com/feranick/libedgetpu/releases/download/16.0TF2.17.1-1/libedgetpu1-max_16.0tf2.17.1-1.bookworm_${TARGETARCH}.deb"
unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# enable non-free repo in Debian
if grep -q "Debian" /etc/issue; then
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
fi
# coral drivers
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
libedgetpu1-max python3-tflite-runtime python3-pycoral
# btbn-ffmpeg -> amd64
# ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 amd64/bin/ffmpeg amd64/bin/ffprobe
rm -rf ffmpeg.tar.xz
fi
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
mkdir -p /usr/lib/ffmpeg/7.0
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
wget -qO ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
tar -xf ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1 arm64/bin/ffmpeg arm64/bin/ffprobe
rm -f ffmpeg.tar.xz
fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# use debian bookworm for amd / intel-i965 driver packages
echo 'deb https://deb.debian.org/debian bookworm main contrib non-free' >/etc/apt/sources.list.d/debian-bookworm.list
apt-get -qq update
# install amd / intel-i965 driver packages
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver intel-gpu-tools onevpl-tools \
libva-drm2 \
mesa-va-drivers radeontop
# something about this dependency requires it to be installed in a separate call rather than in the line above
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver-shaders
# intel packages use zst compression so we need to update dpkg
apt-get install -y dpkg
rm -f /etc/apt/sources.list.d/debian-bookworm.list
# use intel apt intel packages
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-opencl-icd intel-level-zero-gpu intel-media-va-driver-non-free \
libmfx1 libmfxgen1 libvpl2
intel-opencl-icd=24.35.30872.31-996~22.04 intel-level-zero-gpu=1.3.29735.27-914~22.04 intel-media-va-driver-non-free=24.3.3-996~22.04 \
libmfx1=23.2.2-880~22.04 libmfxgen1=24.2.4-914~22.04 libvpl2=1:2.13.0.0-996~22.04
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list

docker/main/install_hailort.sh (new executable file, 14 lines)
View File

@@ -0,0 +1,14 @@
#!/bin/bash
set -euxo pipefail
hailo_version="4.20.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="x86_64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" | tar -C / -xzf -
wget -P /wheels/ "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${hailo_version}-cp311-cp311-linux_${arch}.whl"

View File

@@ -1,5 +1,8 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
uvicorn == 0.30.*
@@ -7,39 +10,64 @@ slowapi == 0.1.*
imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 2.1.*
markupsafe == 3.0.*
python-multipart == 0.0.20
# General
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.9.0.*
onvif-zeep-async == 3.1.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
psutil == 6.1.*
pydantic == 2.8.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2024.*
pytz == 2025.*
pyzmq == 26.2.*
ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# Image Manipulation
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.14.*
# OpenVino & ONNX
openvino == 2024.3.*
onnxruntime-openvino == 1.19.* ; platform_machine == 'x86_64'
onnxruntime == 1.19.* ; platform_machine == 'aarch64'
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.3.*
openai == 1.51.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*
# alpr
pyclipper == 1.3.*
shapely == 2.0.*
Levenshtein==0.26.*
# HailoRT Wheels
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*
prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'

View File

@@ -1,2 +1,2 @@
scikit-build == 0.17.*
scikit-build == 0.18.*
nvidia-pyindex

View File

@@ -42,8 +42,16 @@ function migrate_db_path() {
fi
}
function set_libva_version() {
local ffmpeg_path
ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
export LIBAVFORMAT_VERSION_MAJOR
}
echo "[INFO] Preparing Frigate..."
migrate_db_path
set_libva_version
echo "[INFO] Starting Frigate..."
cd /opt/frigate || echo "[ERROR] Failed to change working directory to /opt/frigate"
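
For illustration only (not part of the diff), a minimal Python sketch of the same libavformat detection that set_libva_version performs with grep; the ffmpeg path below is an assumed example of what get_ffmpeg_path.py prints:

import re
import subprocess

# Assumed example path; in the entrypoint this comes from get_ffmpeg_path.py.
ffmpeg_path = "/usr/lib/ffmpeg/7.0/bin/ffmpeg"
version_output = subprocess.run(
    [ffmpeg_path, "-version"], capture_output=True, text=True, check=True
).stdout
# Mirrors: grep -Po "libavformat\W+\K\d+"
match = re.search(r"libavformat\W+(\d+)", version_output)
if match:
    print(match.group(1))  # exported by the script as LIBAVFORMAT_VERSION_MAJOR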

View File

@@ -43,6 +43,15 @@ function get_ip_and_port_from_supervisor() {
export FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL="${ip_address}:${webrtc_port}"
}
function set_libva_version() {
local ffmpeg_path
ffmpeg_path=$(python3 /usr/local/ffmpeg/get_ffmpeg_path.py)
LIBAVFORMAT_VERSION_MAJOR=$("$ffmpeg_path" -version | grep -Po "libavformat\W+\K\d+")
export LIBAVFORMAT_VERSION_MAJOR
}
set_libva_version
if [[ -f "/dev/shm/go2rtc.yaml" ]]; then
echo "[INFO] Removing stale config from last run..."
rm /dev/shm/go2rtc.yaml

View File

@@ -0,0 +1,41 @@
import json
import os
import sys
from ruamel.yaml import YAML
sys.path.insert(0, "/opt/frigate")
from frigate.const import (
DEFAULT_FFMPEG_VERSION,
INCLUDED_FFMPEG_VERSIONS,
)
sys.path.remove("/opt/frigate")
yaml = YAML()
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, any] = yaml.load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, any] = {}
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
print(f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg")
elif path in INCLUDED_FFMPEG_VERSIONS:
print(f"/usr/lib/ffmpeg/{path}/bin/ffmpeg")
else:
print(f"{path}/bin/ffmpeg")
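
A small usage sketch (not from the diff): reading this helper's stdout from Python, equivalent to the $(python3 /usr/local/ffmpeg/get_ffmpeg_path.py) substitution used in the run scripts above:

import subprocess

# The helper prints the resolved ffmpeg binary path on stdout.
ffmpeg_path = subprocess.run(
    ["python3", "/usr/local/ffmpeg/get_ffmpeg_path.py"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(ffmpeg_path)  # e.g. /usr/lib/ffmpeg/7.0/bin/ffmpeg with the default config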

View File

@@ -2,7 +2,6 @@
import json
import os
import shutil
import sys
from pathlib import Path
@@ -13,6 +12,7 @@ from frigate.const import (
BIRDSEYE_PIPE,
DEFAULT_FFMPEG_VERSION,
INCLUDED_FFMPEG_VERSIONS,
LIBAVFORMAT_VERSION_MAJOR,
)
from frigate.ffmpeg_presets import parse_preset_hardware_acceleration_encode
@@ -66,29 +66,32 @@ elif go2rtc_config["log"].get("format") is None:
go2rtc_config["log"]["format"] = "text"
# ensure there is a default webrtc config
if not go2rtc_config.get("webrtc"):
if go2rtc_config.get("webrtc") is None:
go2rtc_config["webrtc"] = {}
# go2rtc should listen on 8555 tcp & udp by default
if not go2rtc_config["webrtc"].get("listen"):
if go2rtc_config["webrtc"].get("listen") is None:
go2rtc_config["webrtc"]["listen"] = ":8555"
if not go2rtc_config["webrtc"].get("candidates", []):
if go2rtc_config["webrtc"].get("candidates") is None:
default_candidates = []
# use internal candidate if it was discovered when running through the add-on
internal_candidate = os.environ.get(
"FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL", None
)
internal_candidate = os.environ.get("FRIGATE_GO2RTC_WEBRTC_CANDIDATE_INTERNAL")
if internal_candidate is not None:
default_candidates.append(internal_candidate)
# should set default stun server so webrtc can work
default_candidates.append("stun:8555")
go2rtc_config["webrtc"] = {"candidates": default_candidates}
else:
print(
"[INFO] Not injecting WebRTC candidates into go2rtc config as it has been set manually",
)
go2rtc_config["webrtc"]["candidates"] = default_candidates
# This prevents WebRTC from attempting to establish a connection to the internal
# docker IPs which are not accessible from outside the container itself and just
# wastes time during negotiation. Note that this is only necessary because
# Frigate container doesn't run in host network mode.
if go2rtc_config["webrtc"].get("filter") is None:
go2rtc_config["webrtc"]["filter"] = {"candidates": []}
elif go2rtc_config["webrtc"]["filter"].get("candidates") is None:
go2rtc_config["webrtc"]["filter"]["candidates"] = []
# sets default RTSP response to be equivalent to ?video=h264,h265&audio=aac
# this means user does not need to specify audio codec when using restream
@@ -112,10 +115,7 @@ else:
# ensure ffmpeg path is set correctly
path = config.get("ffmpeg", {}).get("path", "default")
if path == "default":
if shutil.which("ffmpeg") is None:
ffmpeg_path = f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg"
else:
ffmpeg_path = "ffmpeg"
ffmpeg_path = f"/usr/lib/ffmpeg/{DEFAULT_FFMPEG_VERSION}/bin/ffmpeg"
elif path in INCLUDED_FFMPEG_VERSIONS:
ffmpeg_path = f"/usr/lib/ffmpeg/{path}/bin/ffmpeg"
else:
@@ -127,14 +127,12 @@ elif go2rtc_config["ffmpeg"].get("bin") is None:
go2rtc_config["ffmpeg"]["bin"] = ffmpeg_path
# need to replace ffmpeg command when using ffmpeg4
if int(os.environ.get("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") < 59:
if go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"]["rtsp"] = (
"-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
)
else:
if LIBAVFORMAT_VERSION_MAJOR < 59:
rtsp_args = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
if go2rtc_config.get("ffmpeg") is None:
go2rtc_config["ffmpeg"] = {"path": ""}
go2rtc_config["ffmpeg"] = {"rtsp": rtsp_args}
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"]["rtsp"] = rtsp_args
for name in go2rtc_config.get("streams", {}):
stream = go2rtc_config["streams"][name]
@@ -165,7 +163,7 @@ if config.get("birdseye", {}).get("restream", False):
birdseye: dict[str, any] = config.get("birdseye")
input = f"-f rawvideo -pix_fmt yuv420p -video_size {birdseye.get('width', 1280)}x{birdseye.get('height', 720)} -r 10 -i {BIRDSEYE_PIPE}"
ffmpeg_cmd = f"exec:{parse_preset_hardware_acceleration_encode(ffmpeg_path, config.get('ffmpeg', {}).get('hwaccel_args'), input, '-rtsp_transport tcp -f rtsp {output}')}"
ffmpeg_cmd = f"exec:{parse_preset_hardware_acceleration_encode(ffmpeg_path, config.get('ffmpeg', {}).get('hwaccel_args', ''), input, '-rtsp_transport tcp -f rtsp {output}')}"
if go2rtc_config.get("streams"):
go2rtc_config["streams"]["birdseye"] = ffmpeg_cmd
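
Purely to illustrate the candidate and filter defaults introduced above, a short sketch of the dict they produce, assuming no add-on internal candidate and no manual webrtc settings:

go2rtc_config = {"webrtc": {}}

# Defaults applied by the logic above when nothing was set manually.
go2rtc_config["webrtc"].setdefault("listen", ":8555")
go2rtc_config["webrtc"].setdefault("candidates", ["stun:8555"])
go2rtc_config["webrtc"].setdefault("filter", {"candidates": []})

print(go2rtc_config)
# {'webrtc': {'listen': ':8555', 'candidates': ['stun:8555'], 'filter': {'candidates': []}}}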

View File

@@ -1,14 +1,16 @@
## Send a subrequest to verify if the user is authenticated and has permission to access the resource.
auth_request /auth;
## Save the upstream metadata response headers from Authelia to variables.
## Save the upstream metadata response headers from the auth request to variables
auth_request_set $user $upstream_http_remote_user;
auth_request_set $role $upstream_http_remote_role;
auth_request_set $groups $upstream_http_remote_groups;
auth_request_set $name $upstream_http_remote_name;
auth_request_set $email $upstream_http_remote_email;
## Inject the metadata response headers from the variables into the request made to the backend.
proxy_set_header Remote-User $user;
proxy_set_header Remote-Role $role;
proxy_set_header Remote-Groups $groups;
proxy_set_header Remote-Email $email;
proxy_set_header Remote-Name $name;
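
For context only, a hypothetical FastAPI sketch (not Frigate's actual endpoint) of an auth subrequest handler that returns the upstream remote-* headers which the proxy_set_header lines above copy onto the backend request:

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/auth")
def auth(response: Response) -> str:
    # A real handler would validate the session/JWT before answering 200.
    response.headers["remote-user"] = "admin"
    response.headers["remote-role"] = "admin"
    return "ok"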

View File

@@ -81,6 +81,9 @@ http {
open_file_cache_errors on;
aio on;
# file upload size
client_max_body_size 10M;
# https://github.com/kaltura/nginx-vod-module#vod_open_file_thread_pool
vod_open_file_thread_pool default;
@@ -106,6 +109,14 @@ http {
expires off;
keepalive_disable safari;
# vod module returns 502 for non-existent media
# https://github.com/kaltura/nginx-vod-module/issues/468
error_page 502 =404 /vod-not-found;
}
location = /vod-not-found {
return 404;
}
location /stream/ {

View File

@@ -0,0 +1,20 @@
./subset/000000005001.jpg
./subset/000000038829.jpg
./subset/000000052891.jpg
./subset/000000075612.jpg
./subset/000000098261.jpg
./subset/000000181542.jpg
./subset/000000215245.jpg
./subset/000000277005.jpg
./subset/000000288685.jpg
./subset/000000301421.jpg
./subset/000000334371.jpg
./subset/000000348481.jpg
./subset/000000373353.jpg
./subset/000000397681.jpg
./subset/000000414673.jpg
./subset/000000419312.jpg
./subset/000000465822.jpg
./subset/000000475732.jpg
./subset/000000559707.jpg
./subset/000000574315.jpg

(Binary files not shown: the 20 COCO subset calibration images listed above, added with this change; sizes range from 14 KiB to 281 KiB.)

View File

@@ -3,25 +3,32 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels as rk-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
FROM deps AS rk-frigate
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
pip3 install -U /deps/rk-wheels/*.whl
pip3 install --no-deps -U /deps/rk-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rockchip/COCO /COCO
COPY docker/rockchip/conv2rknn.py /opt/conv2rknn.py
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt.so /usr/lib/
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/librknnrt.so /usr/lib/
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV PATH="/usr/lib/ffmpeg/6.0/bin/:${PATH}"
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV DEFAULT_FFMPEG_VERSION="6.0"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"

View File

@@ -0,0 +1,82 @@
import os
import rknn
import yaml
from rknn.api import RKNN
try:
with open(rknn.__path__[0] + "/VERSION") as file:
tk_version = file.read().strip()
except FileNotFoundError:
pass
try:
with open("/config/conv2rknn.yaml", "r") as config_file:
configuration = yaml.safe_load(config_file)
except FileNotFoundError:
raise Exception("Please place a config.yaml file in /config/conv2rknn.yaml")
if configuration["config"] != None:
rknn_config = configuration["config"]
else:
rknn_config = {}
if not os.path.isdir("/config/model_cache/rknn_cache/onnx"):
raise Exception(
"Place the onnx models you want to convert to rknn format in /config/model_cache/rknn_cache/onnx"
)
if "soc" not in configuration:
try:
with open("/proc/device-tree/compatible") as file:
soc = file.read().split(",")[-1].strip("\x00")
except FileNotFoundError:
raise Exception("Make sure to run docker in privileged mode.")
configuration["soc"] = [
soc,
]
if "quantization" not in configuration:
configuration["quantization"] = False
if "output_name" not in configuration:
configuration["output_name"] = "{{input_basename}}"
for input_filename in os.listdir("/config/model_cache/rknn_cache/onnx"):
for soc in configuration["soc"]:
quant = "i8" if configuration["quantization"] else "fp16"
input_path = "/config/model_cache/rknn_cache/onnx/" + input_filename
input_basename = input_filename[: input_filename.rfind(".")]
output_filename = (
configuration["output_name"].format(
quant=quant,
input_basename=input_basename,
soc=soc,
tk_version=tk_version,
)
+ ".rknn"
)
output_path = "/config/model_cache/rknn_cache/" + output_filename
rknn_config["target_platform"] = soc
rknn = RKNN(verbose=True)
rknn.config(**rknn_config)
if rknn.load_onnx(model=input_path) != 0:
raise Exception("Error loading model.")
if (
rknn.build(
do_quantization=configuration["quantization"],
dataset="/COCO/coco_subset_20.txt",
)
!= 0
):
raise Exception("Error building model.")
if rknn.export_rknn(output_path) != 0:
raise Exception("Error exporting rknn model.")

View File

@@ -1 +1,2 @@
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/rknn_toolkit_lite2-2.0.0b0-cp39-cp39-linux_aarch64.whl
rknn-toolkit2 == 2.3.0
rknn-toolkit-lite2 == 2.3.0

View File

@@ -2,77 +2,48 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=5.7.3
ARG ROCM=6.3.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
#######################################################################
FROM ubuntu:focal as rocm
FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install gnupg wget
RUN mkdir --parents --mode=0755 /etc/apt/keyrings
RUN wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | tee /etc/apt/keyrings/rocm.gpg > /dev/null
COPY docker/rocm/rocm.list /etc/apt/sources.list.d/
COPY docker/rocm/rocm-pin-600 /etc/apt/preferences.d/
RUN apt-get update
RUN apt-get -y install --no-install-recommends migraphx hipfft roctracer
RUN apt-get -y install --no-install-recommends migraphx-dev
RUN apt update && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
RUN mkdir -p /opt/rocm-dist/etc/ld.so.conf.d/
RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM --platform=linux/amd64 debian:11 as debian-base
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install --no-install-recommends libelf1 libdrm2 libdrm-amdgpu1 libnuma1 kmod
RUN apt-get -y install python3
#######################################################################
# ROCm does not come with migraphx wrappers for python 3.9, so we build it here
FROM debian-base as debian-build
ARG ROCM
COPY --from=rocm /opt/rocm-$ROCM /opt/rocm-$ROCM
RUN ln -s /opt/rocm-$ROCM /opt/rocm
RUN apt-get -y install g++ cmake
RUN apt-get -y install python3-pybind11 python3.9-distutils python3-dev
WORKDIR /opt/build
COPY docker/rocm/migraphx .
RUN mkdir build && cd build && cmake .. && make install
#######################################################################
FROM deps AS deps-prelim
# need this to install libnuma1
RUN apt-get update
# no ugprade?!?!
RUN apt-get -y install libnuma1
RUN apt-get update && apt-get install -y libnuma1
WORKDIR /opt/frigate/
WORKDIR /opt/frigate
COPY --from=rootfs / /
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip" --break-system-packages
RUN python3 -m pip config set global.break-system-packages true
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN python3 -m pip install --upgrade pip \
&& pip3 uninstall -y onnxruntime-openvino \
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 install -r /requirements.txt
#######################################################################
@@ -86,12 +57,11 @@ COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*gfx908* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-39-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1
COPY --from=rocm-dist / /

View File

@@ -1,26 +0,0 @@
cmake_minimum_required(VERSION 3.1)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release)
endif()
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
project(migraphx_py)
include_directories(/opt/rocm/include)
find_package(pybind11 REQUIRED)
pybind11_add_module(migraphx migraphx_py.cpp)
target_link_libraries(migraphx PRIVATE /opt/rocm/lib/libmigraphx.so /opt/rocm/lib/libmigraphx_tf.so /opt/rocm/lib/libmigraphx_onnx.so)
install(TARGETS migraphx
COMPONENT python
LIBRARY DESTINATION /opt/rocm/lib
)

View File

@@ -1,582 +0,0 @@
/*
* The MIT License (MIT)
*
* Copyright (c) 2015-2022 Advanced Micro Devices, Inc. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/numpy.h>
#include <migraphx/program.hpp>
#include <migraphx/instruction_ref.hpp>
#include <migraphx/operation.hpp>
#include <migraphx/quantization.hpp>
#include <migraphx/generate.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/ref/target.hpp>
#include <migraphx/stringutils.hpp>
#include <migraphx/tf.hpp>
#include <migraphx/onnx.hpp>
#include <migraphx/load_save.hpp>
#include <migraphx/register_target.hpp>
#include <migraphx/json.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/op/common.hpp>
#ifdef HAVE_GPU
#include <migraphx/gpu/hip.hpp>
#endif
using half = half_float::half;
namespace py = pybind11;
#ifdef __clang__
#define MIGRAPHX_PUSH_UNUSED_WARNING \
_Pragma("clang diagnostic push") \
_Pragma("clang diagnostic ignored \"-Wused-but-marked-unused\"")
#define MIGRAPHX_POP_WARNING _Pragma("clang diagnostic pop")
#else
#define MIGRAPHX_PUSH_UNUSED_WARNING
#define MIGRAPHX_POP_WARNING
#endif
#define MIGRAPHX_PYBIND11_MODULE(...) \
MIGRAPHX_PUSH_UNUSED_WARNING \
PYBIND11_MODULE(__VA_ARGS__) \
MIGRAPHX_POP_WARNING
#define MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM(x, t) .value(#x, migraphx::shape::type_t::x)
namespace migraphx {
migraphx::value to_value(py::kwargs kwargs);
migraphx::value to_value(py::list lst);
template <class T, class F>
void visit_py(T x, F f)
{
if(py::isinstance<py::kwargs>(x))
{
f(to_value(x.template cast<py::kwargs>()));
}
else if(py::isinstance<py::list>(x))
{
f(to_value(x.template cast<py::list>()));
}
else if(py::isinstance<py::bool_>(x))
{
f(x.template cast<bool>());
}
else if(py::isinstance<py::int_>(x) or py::hasattr(x, "__index__"))
{
f(x.template cast<int>());
}
else if(py::isinstance<py::float_>(x))
{
f(x.template cast<float>());
}
else if(py::isinstance<py::str>(x))
{
f(x.template cast<std::string>());
}
else if(py::isinstance<migraphx::shape::dynamic_dimension>(x))
{
f(migraphx::to_value(x.template cast<migraphx::shape::dynamic_dimension>()));
}
else
{
MIGRAPHX_THROW("VISIT_PY: Unsupported data type!");
}
}
migraphx::value to_value(py::list lst)
{
migraphx::value v = migraphx::value::array{};
for(auto val : lst)
{
visit_py(val, [&](auto py_val) { v.push_back(py_val); });
}
return v;
}
migraphx::value to_value(py::kwargs kwargs)
{
migraphx::value v = migraphx::value::object{};
for(auto arg : kwargs)
{
auto&& key = py::str(arg.first);
auto&& val = arg.second;
visit_py(val, [&](auto py_val) { v[key] = py_val; });
}
return v;
}
} // namespace migraphx
namespace pybind11 {
namespace detail {
template <>
struct npy_format_descriptor<half>
{
static std::string format()
{
// following: https://docs.python.org/3/library/struct.html#format-characters
return "e";
}
static constexpr auto name() { return _("half"); }
};
} // namespace detail
} // namespace pybind11
template <class F>
void visit_type(const migraphx::shape& s, F f)
{
s.visit_type(f);
}
template <class T, class F>
void visit(const migraphx::raw_data<T>& x, F f)
{
x.visit(f);
}
template <class F>
void visit_types(F f)
{
migraphx::shape::visit_types(f);
}
template <class T>
py::buffer_info to_buffer_info(T& x)
{
migraphx::shape s = x.get_shape();
assert(s.type() != migraphx::shape::tuple_type);
if(s.dynamic())
MIGRAPHX_THROW("MIGRAPHX PYTHON: dynamic shape argument passed to to_buffer_info");
auto strides = s.strides();
std::transform(
strides.begin(), strides.end(), strides.begin(), [&](auto i) { return i * s.type_size(); });
py::buffer_info b;
visit_type(s, [&](auto as) {
// migraphx use int8_t data to store bool type, we need to
// explicitly specify the data type as bool for python
if(s.type() == migraphx::shape::bool_type)
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<bool>::format(),
s.ndim(),
s.lens(),
strides);
}
else
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<decltype(as())>::format(),
s.ndim(),
s.lens(),
strides);
}
});
return b;
}
migraphx::shape to_shape(const py::buffer_info& info)
{
migraphx::shape::type_t t;
std::size_t n = 0;
visit_types([&](auto as) {
if(info.format == py::format_descriptor<decltype(as())>::format() or
(info.format == "l" and py::format_descriptor<decltype(as())>::format() == "q") or
(info.format == "L" and py::format_descriptor<decltype(as())>::format() == "Q"))
{
t = as.type_enum();
n = sizeof(as());
}
else if(info.format == "?" and py::format_descriptor<decltype(as())>::format() == "b")
{
t = migraphx::shape::bool_type;
n = sizeof(bool);
}
});
if(n == 0)
{
MIGRAPHX_THROW("MIGRAPHX PYTHON: Unsupported data type " + info.format);
}
auto strides = info.strides;
std::transform(strides.begin(), strides.end(), strides.begin(), [&](auto i) -> std::size_t {
return n > 0 ? i / n : 0;
});
// scalar support
if(info.shape.empty())
{
return migraphx::shape{t};
}
else
{
return migraphx::shape{t, info.shape, strides};
}
}
MIGRAPHX_PYBIND11_MODULE(migraphx, m)
{
py::class_<migraphx::shape> shape_cls(m, "shape");
shape_cls
.def(py::init([](py::kwargs kwargs) {
auto v = migraphx::to_value(kwargs);
auto t = migraphx::shape::parse_type(v.get("type", "float"));
if(v.contains("dyn_dims"))
{
auto dyn_dims =
migraphx::from_value<std::vector<migraphx::shape::dynamic_dimension>>(
v.at("dyn_dims"));
return migraphx::shape(t, dyn_dims);
}
auto lens = v.get<std::size_t>("lens", {1});
if(v.contains("strides"))
return migraphx::shape(t, lens, v.at("strides").to_vector<std::size_t>());
else
return migraphx::shape(t, lens);
}))
.def("type", &migraphx::shape::type)
.def("lens", &migraphx::shape::lens)
.def("strides", &migraphx::shape::strides)
.def("ndim", &migraphx::shape::ndim)
.def("elements", &migraphx::shape::elements)
.def("bytes", &migraphx::shape::bytes)
.def("type_string", &migraphx::shape::type_string)
.def("type_size", &migraphx::shape::type_size)
.def("dyn_dims", &migraphx::shape::dyn_dims)
.def("packed", &migraphx::shape::packed)
.def("transposed", &migraphx::shape::transposed)
.def("broadcasted", &migraphx::shape::broadcasted)
.def("standard", &migraphx::shape::standard)
.def("scalar", &migraphx::shape::scalar)
.def("dynamic", &migraphx::shape::dynamic)
.def("__eq__", std::equal_to<migraphx::shape>{})
.def("__ne__", std::not_equal_to<migraphx::shape>{})
.def("__repr__", [](const migraphx::shape& s) { return migraphx::to_string(s); });
py::enum_<migraphx::shape::type_t>(shape_cls, "type_t")
MIGRAPHX_SHAPE_VISIT_TYPES(MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM);
py::class_<migraphx::shape::dynamic_dimension>(shape_cls, "dynamic_dimension")
.def(py::init<>())
.def(py::init<std::size_t, std::size_t>())
.def(py::init<std::size_t, std::size_t, std::set<std::size_t>>())
.def_readwrite("min", &migraphx::shape::dynamic_dimension::min)
.def_readwrite("max", &migraphx::shape::dynamic_dimension::max)
.def_readwrite("optimals", &migraphx::shape::dynamic_dimension::optimals)
.def("is_fixed", &migraphx::shape::dynamic_dimension::is_fixed);
py::class_<migraphx::argument>(m, "argument", py::buffer_protocol())
.def_buffer([](migraphx::argument& x) -> py::buffer_info { return to_buffer_info(x); })
.def(py::init([](py::buffer b) {
py::buffer_info info = b.request();
return migraphx::argument(to_shape(info), info.ptr);
}))
.def("get_shape", &migraphx::argument::get_shape)
.def("data_ptr",
[](migraphx::argument& x) { return reinterpret_cast<std::uintptr_t>(x.data()); })
.def("tolist",
[](migraphx::argument& x) {
py::list l{x.get_shape().elements()};
visit(x, [&](auto data) { l = py::cast(data.to_vector()); });
return l;
})
.def("__eq__", std::equal_to<migraphx::argument>{})
.def("__ne__", std::not_equal_to<migraphx::argument>{})
.def("__repr__", [](const migraphx::argument& x) { return migraphx::to_string(x); });
py::class_<migraphx::target>(m, "target");
py::class_<migraphx::instruction_ref>(m, "instruction_ref")
.def("shape", [](migraphx::instruction_ref i) { return i->get_shape(); })
.def("op", [](migraphx::instruction_ref i) { return i->get_operator(); });
py::class_<migraphx::module, std::unique_ptr<migraphx::module, py::nodelete>>(m, "module")
.def("print", [](const migraphx::module& mm) { std::cout << mm << std::endl; })
.def(
"add_instruction",
[](migraphx::module& mm,
const migraphx::operation& op,
std::vector<migraphx::instruction_ref>& args,
std::vector<migraphx::module*>& mod_args) {
return mm.add_instruction(op, args, mod_args);
},
py::arg("op"),
py::arg("args"),
py::arg("mod_args") = std::vector<migraphx::module*>{})
.def(
"add_literal",
[](migraphx::module& mm, py::buffer data) {
py::buffer_info info = data.request();
auto literal_shape = to_shape(info);
return mm.add_literal(literal_shape, reinterpret_cast<char*>(info.ptr));
},
py::arg("data"))
.def(
"add_parameter",
[](migraphx::module& mm, const std::string& name, const migraphx::shape shape) {
return mm.add_parameter(name, shape);
},
py::arg("name"),
py::arg("shape"))
.def(
"add_return",
[](migraphx::module& mm, std::vector<migraphx::instruction_ref>& args) {
return mm.add_return(args);
},
py::arg("args"))
.def("__repr__", [](const migraphx::module& mm) { return migraphx::to_string(mm); });
py::class_<migraphx::program>(m, "program")
.def(py::init([]() { return migraphx::program(); }))
.def("get_parameter_names", &migraphx::program::get_parameter_names)
.def("get_parameter_shapes", &migraphx::program::get_parameter_shapes)
.def("get_output_shapes", &migraphx::program::get_output_shapes)
.def("is_compiled", &migraphx::program::is_compiled)
.def(
"compile",
[](migraphx::program& p,
const migraphx::target& t,
bool offload_copy,
bool fast_math,
bool exhaustive_tune) {
migraphx::compile_options options;
options.offload_copy = offload_copy;
options.fast_math = fast_math;
options.exhaustive_tune = exhaustive_tune;
p.compile(t, options);
},
py::arg("t"),
py::arg("offload_copy") = true,
py::arg("fast_math") = true,
py::arg("exhaustive_tune") = false)
.def("get_main_module", [](const migraphx::program& p) { return p.get_main_module(); })
.def(
"create_module",
[](migraphx::program& p, const std::string& name) { return p.create_module(name); },
py::arg("name"))
.def("run",
[](migraphx::program& p, py::dict params) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
return p.eval(pm);
})
.def("run_async",
[](migraphx::program& p,
py::dict params,
std::uintptr_t stream,
std::string stream_name) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
migraphx::execution_environment exec_env{
migraphx::any_ptr(reinterpret_cast<void*>(stream), stream_name), true};
return p.eval(pm, exec_env);
})
.def("sort", &migraphx::program::sort)
.def("print", [](const migraphx::program& p) { std::cout << p << std::endl; })
.def("__eq__", std::equal_to<migraphx::program>{})
.def("__ne__", std::not_equal_to<migraphx::program>{})
.def("__repr__", [](const migraphx::program& p) { return migraphx::to_string(p); });
py::class_<migraphx::operation> op(m, "op");
op.def(py::init([](const std::string& name, py::kwargs kwargs) {
migraphx::value v = migraphx::value::object{};
if(kwargs)
{
v = migraphx::to_value(kwargs);
}
return migraphx::make_op(name, v);
}))
.def("name", &migraphx::operation::name);
py::enum_<migraphx::op::pooling_mode>(op, "pooling_mode")
.value("average", migraphx::op::pooling_mode::average)
.value("max", migraphx::op::pooling_mode::max)
.value("lpnorm", migraphx::op::pooling_mode::lpnorm);
py::enum_<migraphx::op::rnn_direction>(op, "rnn_direction")
.value("forward", migraphx::op::rnn_direction::forward)
.value("reverse", migraphx::op::rnn_direction::reverse)
.value("bidirectional", migraphx::op::rnn_direction::bidirectional);
m.def(
"argument_from_pointer",
[](const migraphx::shape shape, const int64_t address) {
return migraphx::argument(shape, reinterpret_cast<void*>(address));
},
py::arg("shape"),
py::arg("address"));
m.def(
"parse_tf",
[](const std::string& filename,
bool is_nhwc,
unsigned int batch_size,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::vector<std::string> output_names) {
return migraphx::parse_tf(
filename, migraphx::tf_options{is_nhwc, batch_size, map_input_dims, output_names});
},
"Parse tf protobuf (default format is nhwc)",
py::arg("filename"),
py::arg("is_nhwc") = true,
py::arg("batch_size") = 1,
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("output_names") = std::vector<std::string>());
m.def(
"parse_onnx",
[](const std::string& filename,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error,
int64_t max_loop_iterations) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
options.max_loop_iterations = max_loop_iterations;
return migraphx::parse_onnx(filename, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false,
py::arg("max_loop_iterations") = 10);
m.def(
"parse_onnx_buffer",
[](const std::string& onnx_buffer,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
return migraphx::parse_onnx_buffer(onnx_buffer, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false);
m.def(
"load",
[](const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::load(name, options);
},
"Load MIGraphX program",
py::arg("filename"),
py::arg("format") = "msgpack");
m.def(
"save",
[](const migraphx::program& p, const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::save(p, name, options);
},
"Save MIGraphX program",
py::arg("p"),
py::arg("filename"),
py::arg("format") = "msgpack");
m.def("get_target", &migraphx::make_target);
m.def("create_argument", [](const migraphx::shape& s, const std::vector<double>& values) {
if(values.size() != s.elements())
MIGRAPHX_THROW("Values and shape elements do not match");
migraphx::argument a{s};
a.fill(values.begin(), values.end());
return a;
});
m.def("generate_argument", &migraphx::generate_argument, py::arg("s"), py::arg("seed") = 0);
m.def("fill_argument", &migraphx::fill_argument, py::arg("s"), py::arg("value"));
m.def("quantize_fp16",
&migraphx::quantize_fp16,
py::arg("prog"),
py::arg("ins_names") = std::vector<std::string>{"all"});
m.def("quantize_int8",
&migraphx::quantize_int8,
py::arg("prog"),
py::arg("t"),
py::arg("calibration") = std::vector<migraphx::parameter_map>{},
py::arg("ins_names") = std::vector<std::string>{"dot", "convolution"});
#ifdef HAVE_GPU
m.def("allocate_gpu", &migraphx::gpu::allocate_gpu, py::arg("s"), py::arg("host") = false);
m.def("to_gpu", &migraphx::gpu::to_gpu, py::arg("arg"), py::arg("host") = false);
m.def("from_gpu", &migraphx::gpu::from_gpu);
m.def("gpu_sync", [] { migraphx::gpu::gpu_sync(); });
#endif
#ifdef VERSION_INFO
m.attr("__version__") = VERSION_INFO;
#else
m.attr("__version__") = "dev";
#endif
}

View File

@@ -1 +1 @@
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v1.0.0/onnxruntime_rocm-1.17.3-cp39-cp39-linux_x86_64.whl
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl

View File

@@ -1,3 +0,0 @@
Package: *
Pin: release o=repo.radeon.com
Pin-Priority: 600

View File

@@ -2,7 +2,7 @@ variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "5.7.3"
default = "6.3.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
@@ -10,6 +10,13 @@ variable "HSA_OVERRIDE_GFX_VERSION" {
variable "HSA_OVERRIDE" {
default = "1"
}
target wget {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
target = "wget"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
@@ -26,6 +33,7 @@ target rocm {
dockerfile = "docker/rocm/Dockerfile"
contexts = {
deps = "target:deps",
wget = "target:wget",
rootfs = "target:rootfs"
}
platforms = ["linux/amd64"]

View File

@@ -1 +0,0 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/5.7.3 focal main

View File

@@ -6,13 +6,12 @@ ARG DEBIAN_FRONTEND=noninteractive
FROM deps AS rpi-deps
ARG TARGETARCH
RUN rm -rf /usr/lib/btbn-ffmpeg/
# Install dependencies
RUN --mount=type=bind,source=docker/rpi/install_deps.sh,target=/deps/install_deps.sh \
/deps/install_deps.sh
ENV LIBAVFORMAT_VERSION_MAJOR=58
ENV DEFAULT_FFMPEG_VERSION="rpi"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"
WORKDIR /opt/frigate/
COPY --from=rootfs / /

View File

@@ -18,13 +18,17 @@ apt-get -qq install --no-install-recommends -y \
mkdir -p -m 600 /root/.gnupg
# enable non-free repo
sed -i -e's/ main/ main contrib non-free/g' /etc/apt/sources.list
echo "deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware" | tee -a /etc/apt/sources.list
apt update
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
# add raspberry pi repo
gpg --no-default-keyring --keyring /usr/share/keyrings/raspbian.gpg --keyserver keyserver.ubuntu.com --recv-keys 82B129927FA3303E
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bullseye main" | tee /etc/apt/sources.list.d/raspi.list
echo "deb [signed-by=/usr/share/keyrings/raspbian.gpg] https://archive.raspberrypi.org/debian/ bookworm main" | tee /etc/apt/sources.list.d/raspi.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y ffmpeg
mkdir -p /usr/lib/ffmpeg/rpi/bin
ln -svf /usr/bin/ffmpeg /usr/lib/ffmpeg/rpi/bin/ffmpeg
ln -svf /usr/bin/ffprobe /usr/lib/ffmpeg/rpi/bin/ffprobe
fi

View File

@@ -3,37 +3,17 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Make this a separate target so it can be built/cached optionally
FROM wheels as trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
# Add TensorRT wheels to another folder
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# Build CuDNN
FROM wget AS cudnn-deps
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get -y install cuda-toolkit \
&& rm -rf /var/lib/apt/lists/*
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM tensorrt-base AS frigate-tensorrt
ENV TRT_VER=8.5.3
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl && \
ldconfig
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
ARG PIP_BREAK_SYSTEM_PACKAGES
ENV TRT_VER=8.6.1
# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN pip3 install -U -r /requirements-tensorrt.txt && ldconfig
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.9/dist-packages/tensorrt:/usr/local/cuda/lib64:/usr/local/lib/python3.9/dist-packages/nvidia/cufft/lib
WORKDIR /opt/frigate/
COPY --from=rootfs / /
@@ -42,7 +22,7 @@ FROM devcontainer AS devcontainer-trt
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \

View File

@@ -7,20 +7,25 @@ ARG BASE_IMAGE
FROM ${BASE_IMAGE} AS build-wheels
ARG DEBIAN_FRONTEND
# Add deadsnakes PPA for python3.11
RUN apt-get -qq update && \
apt-get -qq install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository ppa:deadsnakes/ppa
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends \
python3.9 python3.9-dev \
wget build-essential cmake git \
python3.11 python3.11-dev \
wget build-essential cmake git \
&& rm -rf /var/lib/apt/lists/*
# Ensure python3 defaults to python3.9
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
# Ensure python3 defaults to python3.11
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
FROM build-wheels AS trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
@@ -41,7 +46,12 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
&& TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh
COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
RUN pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# See https://elinux.org/Jetson_Zoo#ONNX_Runtime
ADD https://nvidia.box.com/shared/static/9yvw05k6u343qfnkhdv2x6xhygze0aq1.whl /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
RUN pip3 uninstall -y onnxruntime-openvino \
&& pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt \
&& pip3 install --no-deps /tmp/onnxruntime_gpu-1.19.0-cp311-cp311-linux_aarch64.whl
FROM build-wheels AS trt-model-wheels
ARG DEBIAN_FRONTEND
@@ -63,11 +73,21 @@ RUN --mount=type=bind,source=docker/tensorrt/build_jetson_ffmpeg.sh,target=/deps
# Frigate w/ TensorRT for NVIDIA Jetson platforms
FROM tensorrt-base AS frigate-tensorrt
RUN apt-get update \
&& apt-get install -y python-is-python3 libprotobuf17 \
&& apt-get install -y python-is-python3 libprotobuf23 \
&& rm -rf /var/lib/apt/lists/*
RUN rm -rf /usr/lib/btbn-ffmpeg/
COPY --from=jetson-ffmpeg /rootfs /
ENV DEFAULT_FFMPEG_VERSION="jetson"
ENV INCLUDED_FFMPEG_VERSIONS="${DEFAULT_FFMPEG_VERSION}:${INCLUDED_FFMPEG_VERSIONS}"
# ffmpeg runtime dependencies
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends \
libx264-163 libx265-199 libegl1 \
&& rm -rf /var/lib/apt/lists/*
# Fixes "Error loading shared libs"
RUN mkdir -p /etc/ld.so.conf.d && echo /usr/lib/ffmpeg/jetson/lib/ > /etc/ld.so.conf.d/ffmpeg.conf
COPY --from=trt-wheels /etc/TENSORRT_VER /etc/TENSORRT_VER
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
@@ -77,3 +97,6 @@ RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels
WORKDIR /opt/frigate/
COPY --from=rootfs / /
# Fixes "Error importing detector runtime: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block"
ENV LD_PRELOAD /usr/lib/aarch64-linux-gnu/libstdc++.so.6

View File

@@ -3,11 +3,12 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.03-py3
ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.12-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps
ARG TARGETARCH
ARG COMPUTE_LEVEL
RUN apt-get update \
@@ -16,16 +17,28 @@ RUN apt-get update \
RUN --mount=type=bind,source=docker/tensorrt/detector/tensorrt_libyolo.sh,target=/tensorrt_libyolo.sh \
/tensorrt_libyolo.sh
# COPY required individual CUDA deps
RUN mkdir -p /usr/local/cuda-deps
RUN if [ "$TARGETARCH" = "amd64" ]; then \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libcurand.so.* /usr/local/cuda-deps/ && \
cp /usr/local/cuda-12.3/targets/x86_64-linux/lib/libnvrtc.so.* /usr/local/cuda-deps/ ; \
fi
# Frigate w/ TensorRT Support as separate image
FROM deps AS tensorrt-base
#Disable S6 Global timeout
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
# COPY TensorRT Model Generation Deps
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
# COPY Individual CUDA deps folder
COPY --from=trt-deps /usr/local/cuda-deps /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS="yolov7-320"
ENV YOLO_MODELS=""
HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1

View File

@@ -5,7 +5,7 @@
set -euxo pipefail
INSTALL_PREFIX=/rootfs/usr/local
INSTALL_PREFIX=/rootfs/usr/lib/ffmpeg/jetson
apt-get -qq update
apt-get -qq install -y --no-install-recommends build-essential ccache clang cmake pkg-config
@@ -14,14 +14,27 @@ apt-get -qq install -y --no-install-recommends libx264-dev libx265-dev
pushd /tmp
# Install libnvmpi to enable nvmpi decoders (h264_nvmpi, hevc_nvmpi)
if [ -e /usr/local/cuda-10.2 ]; then
if [ -e /usr/local/cuda-12 ]; then
# assume Jetpack 6.2
apt-key adv --fetch-key https://repo.download.nvidia.com/jetson/jetson-ota-public.asc
echo "deb https://repo.download.nvidia.com/jetson/common r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
echo "deb https://repo.download.nvidia.com/jetson/t234 r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
echo "deb https://repo.download.nvidia.com/jetson/ffmpeg r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
mkdir -p /opt/nvidia/l4t-packages/
touch /opt/nvidia/l4t-packages/.nv-l4t-disable-boot-fw-update-in-preinstall
apt-get update
apt-get -qq install -y --no-install-recommends -o Dpkg::Options::="--force-confold" nvidia-l4t-jetson-multimedia-api
elif [ -e /usr/local/cuda-10.2 ]; then
# assume Jetpack 4.X
wget -q https://developer.nvidia.com/embedded/L4T/r32_Release_v5.0/T186/Jetson_Multimedia_API_R32.5.0_aarch64.tbz2 -O jetson_multimedia_api.tbz2
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
else
# assume Jetpack 5.X
wget -q https://developer.nvidia.com/downloads/embedded/l4t/r35_release_v3.1/release/jetson_multimedia_api_r35.3.1_aarch64.tbz2 -O jetson_multimedia_api.tbz2
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
fi
tar xaf jetson_multimedia_api.tbz2 -C / && rm jetson_multimedia_api.tbz2
wget -q https://github.com/AndBobsYourUncle/jetson-ffmpeg/archive/9c17b09.zip -O jetson-ffmpeg.zip
unzip jetson-ffmpeg.zip && rm jetson-ffmpeg.zip && mv jetson-ffmpeg-* jetson-ffmpeg && cd jetson-ffmpeg

View File

@@ -6,23 +6,23 @@ mkdir -p /trt-wheels
if [[ "${TARGETARCH}" == "arm64" ]]; then
# NVIDIA supplies python-tensorrt for python3.8, but frigate uses python3.9,
# NVIDIA supplies python-tensorrt for python3.10, but frigate uses python3.11,
# so we must build python-tensorrt ourselves.
# Get python-tensorrt source
mkdir /workspace
mkdir -p /workspace
cd /workspace
git clone -b ${TENSORRT_VER} https://github.com/NVIDIA/TensorRT.git --depth=1
git clone -b release/8.6 https://github.com/NVIDIA/TensorRT.git --depth=1
# Collect dependencies
EXT_PATH=/workspace/external && mkdir -p $EXT_PATH
pip3 install pybind11 && ln -s /usr/local/lib/python3.9/dist-packages/pybind11 $EXT_PATH/pybind11
ln -s /usr/include/python3.9 $EXT_PATH/python3.9
pip3 install pybind11 && ln -s /usr/local/lib/python3.11/dist-packages/pybind11 $EXT_PATH/pybind11
ln -s /usr/include/python3.11 $EXT_PATH/python3.11
ln -s /usr/include/aarch64-linux-gnu/NvOnnxParser.h /workspace/TensorRT/parsers/onnx/
# Build wheel
cd /workspace/TensorRT/python
EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=9 TARGET_ARCHITECTURE=aarch64 /bin/bash ./build.sh
mv build/dist/*.whl /trt-wheels/
EXT_PATH=$EXT_PATH PYTHON_MAJOR_VERSION=3 PYTHON_MINOR_VERSION=11 TARGET_ARCHITECTURE=aarch64 TENSORRT_MODULE=tensorrt /bin/bash ./build.sh
mv build/bindings_wheel/dist/*.whl /trt-wheels/
fi

View File

@@ -1,6 +1,8 @@
/usr/local/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.9/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.9/dist-packages/tensorrt
/usr/local/cuda
/usr/local/lib/python3.11/dist-packages/nvidia/cudnn/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_runtime/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cublas/lib
/usr/local/lib/python3.11/dist-packages/nvidia/cuda_nvrtc/lib
/usr/local/lib/python3.11/dist-packages/tensorrt
/usr/local/lib/python3.11/dist-packages/nvidia/cufft/lib

View File

@@ -11,6 +11,7 @@ set -o errexit -o nounset -o pipefail
MODEL_CACHE_DIR=${MODEL_CACHE_DIR:-"/config/model_cache/tensorrt"}
TRT_VER=${TRT_VER:-$(cat /etc/TENSORRT_VER)}
OUTPUT_FOLDER="${MODEL_CACHE_DIR}/${TRT_VER}"
YOLO_MODELS=${YOLO_MODELS:-""}
# Create output folder
mkdir -p ${OUTPUT_FOLDER}
@@ -19,6 +20,11 @@ FIRST_MODEL=true
MODEL_DOWNLOAD=""
MODEL_CONVERT=""
if [ -z "$YOLO_MODELS" ]; then
echo "tensorrt model preparation disabled"
exit 0
fi
for model in ${YOLO_MODELS//,/ }
do
# Remove old link in case path/version changed
@@ -58,7 +64,7 @@ fi
# order to run libyolo here.
# On Jetpack 5.0, these libraries are not mounted by the runtime and are supplied by the image.
if [[ "$(arch)" == "aarch64" ]]; then
if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra ]]; then
if [[ ! -e /usr/lib/aarch64-linux-gnu/tegra && ! -e /usr/lib/aarch64-linux-gnu/tegra-egl ]]; then
echo "ERROR: Container must be launched with nvidia runtime"
exit 1
elif [[ ! -e /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 ||

View File

@@ -1,14 +1,17 @@
# NVidia TensorRT Support (amd64 only)
--extra-index-url 'https://pypi.nvidia.com'
numpy < 1.24; platform_machine == 'x86_64'
tensorrt == 8.5.3.*; platform_machine == 'x86_64'
cuda-python == 11.8; platform_machine == 'x86_64'
cython == 0.29.*; platform_machine == 'x86_64'
tensorrt == 8.6.1; platform_machine == 'x86_64'
tensorrt_bindings == 8.6.1; platform_machine == 'x86_64'
cuda-python == 11.8.*; platform_machine == 'x86_64'
cython == 3.0.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu12 == 12.1.*; platform_machine == 'x86_64'
nvidia-cuda-runtime-cu11 == 11.8.*; platform_machine == 'x86_64'
nvidia-cublas-cu11 == 11.11.3.6; platform_machine == 'x86_64'
nvidia-cudnn-cu11 == 8.6.0.*; platform_machine == 'x86_64'
nvidia-cudnn-cu12 == 9.5.0.*; platform_machine == 'x86_64'
nvidia-cufft-cu11==10.*; platform_machine == 'x86_64'
nvidia-cufft-cu12==11.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.18.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1 +1 @@
cuda-python == 11.7; platform_machine == 'aarch64'
cuda-python == 12.6.*; platform_machine == 'aarch64'

View File

@@ -13,13 +13,29 @@ variable "TRT_BASE" {
variable "COMPUTE_LEVEL" {
default = ""
}
variable "BASE_HOOK" {
# Ensure an up-to-date python 3.11 is available in jetson images
default = <<EOT
if grep -iq \"ubuntu\" /etc/os-release; then
. /etc/os-release
# Add the deadsnakes PPA repository
echo "deb https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list
echo "deb-src https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu $VERSION_CODENAME main" >> /etc/apt/sources.list.d/deadsnakes.list
# Add deadsnakes signing key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F23C5A6CF475977595C89F51BA6932366A755776
fi
EOT
}
target "_build_args" {
args = {
BASE_IMAGE = BASE_IMAGE,
SLIM_BASE = SLIM_BASE,
TRT_BASE = TRT_BASE,
COMPUTE_LEVEL = COMPUTE_LEVEL
COMPUTE_LEVEL = COMPUTE_LEVEL,
BASE_HOOK = BASE_HOOK
}
platforms = ["linux/${ARCH}"]
}
@@ -79,7 +95,6 @@ target "tensorrt" {
wget = "target:wget",
tensorrt-base = "target:tensorrt-base",
rootfs = "target:rootfs"
wheels = "target:wheels"
}
target = "frigate-tensorrt"
inherits = ["_build_args"]

View File

@@ -1,41 +1,41 @@
BOARDS += trt
JETPACK4_BASE ?= timongentzsch/l4t-ubuntu20-opencv:latest # L4T 32.7.1 JetPack 4.6.1
JETPACK5_BASE ?= nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime # L4T 35.3.1 JetPack 5.1.1
JETPACK6_BASE ?= nvcr.io/nvidia/tensorrt:23.12-py3-igpu
X86_DGPU_ARGS := ARCH=amd64 COMPUTE_LEVEL="50 60 70 80 90"
JETPACK4_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK4_BASE) SLIM_BASE=$(JETPACK4_BASE) TRT_BASE=$(JETPACK4_BASE)
JETPACK5_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK5_BASE) SLIM_BASE=$(JETPACK5_BASE) TRT_BASE=$(JETPACK5_BASE)
JETPACK6_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK6_BASE) SLIM_BASE=$(JETPACK6_BASE) TRT_BASE=$(JETPACK6_BASE)
local-trt: version
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt \
--load
local-trt-jp4: version
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp4 \
--load
local-trt-jp5: version
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp5 \
--load
local-trt-jp6: version
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=frigate:latest-tensorrt-jp6 \
--load
build-trt:
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6
push-trt: build-trt
$(X86_DGPU_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt \
--push
$(JETPACK4_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp4 \
--push
$(JETPACK5_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp5 \
--push
$(JETPACK6_ARGS) docker buildx bake --file=docker/tensorrt/trt.hcl tensorrt \
--set tensorrt.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-tensorrt-jp6 \
--push

View File

@@ -4,7 +4,9 @@ title: Advanced Options
sidebar_label: Advanced Options
---
### `logger`
### Logging
#### Frigate `logger`
Change the default log level for troubleshooting purposes.
@@ -28,6 +30,18 @@ Examples of available modules are:
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
#### Go2RTC Logging
See [the go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#module-log) for logging configuration
```yaml
go2rtc:
streams:
# ...
log:
exec: trace
```
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container (i.e. within HassOS).
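A minimal sketch of the expected shape, with a placeholder variable name and value:
```yaml
environment_vars:
  EXAMPLE_VAR: example_value
```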
@@ -162,23 +176,21 @@ listen [::]:5000 ipv6only=off;
### Custom ffmpeg build
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exist some hardware setups that have incompatibilities with the included build. In this case, a statically built ffmpeg binary can be downloaded to /config and used.
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exist some hardware setups that have incompatibilities with the included build. In this case, statically built `ffmpeg` and `ffprobe` binaries can be placed in `/config/custom-ffmpeg/bin` for Frigate to use.
To do this:
1. Download your ffmpeg build and uncompress to the Frigate config folder.
2. Update your docker-compose or docker CLI to include `'/home/appdata/frigate/custom-ffmpeg':'/usr/lib/btbn-ffmpeg':'ro'` in the volume mappings.
3. Restart Frigate and the custom version will be used if the mapping was done correctly.
NOTE: The folder that is set for the config needs to be the folder that contains `/bin`. So if the full structure is `/home/appdata/frigate/custom-ffmpeg/bin/ffmpeg` then the `ffmpeg -> path` field should be `/config/custom-ffmpeg/bin`.
1. Download your ffmpeg build and uncompress it to the `/config/custom-ffmpeg` folder. Verify that both the `ffmpeg` and `ffprobe` binaries are located in `/config/custom-ffmpeg/bin`.
2. Update the `ffmpeg.path` in your Frigate config to `/config/custom-ffmpeg` (see the sketch below).
3. Restart Frigate and the custom version will be used if the steps above were done correctly.
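A minimal config sketch for step 2 above, assuming the binaries ended up in `/config/custom-ffmpeg/bin`:
```yaml
ffmpeg:
  path: /config/custom-ffmpeg
```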
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.4; there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.2; there may be certain cases where you want to run a different version of go2rtc.
To do this:
1. Download the go2rtc build to the /config folder.
1. Download the go2rtc build to the `/config` folder.
2. Rename the build to `go2rtc`.
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used; you can verify by checking the go2rtc logs.
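A rough shell sketch of steps 1-3, run on the host; the release URL, version, and the host path of the folder mounted at `/config` are illustrative, and you should pick the binary that matches your architecture:
```bash
# download the go2rtc binary into the Frigate config folder and name it "go2rtc"
wget -O /path/to/frigate/config/go2rtc \
  https://github.com/AlexxIT/go2rtc/releases/download/v1.9.2/go2rtc_linux_amd64
# give it execute permission, then restart Frigate and check the go2rtc logs
chmod +x /path/to/frigate/config/go2rtc
```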
@@ -189,16 +201,16 @@ When frigate starts up, it checks whether your config file is valid, and if it i
### Via API
Frigate can accept a new configuration file as JSON at the `/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
Frigate can accept a new configuration file as JSON at the `/api/config/save` endpoint. When updating the config this way, Frigate will validate the config before saving it, and return a `400` if the config is not valid.
```bash
curl -X POST http://frigate_host:5000/config/save -d @config.json
curl -X POST http://frigate_host:5000/api/config/save -d @config.json
```
If you'd like, you can use your YAML config directly by using [`yq`](https://github.com/mikefarah/yq) to convert it to JSON:
```bash
yq r -j config.yml | curl -X POST http://frigate_host:5000/config/save -d @-
yq r -j config.yml | curl -X POST http://frigate_host:5000/api/config/save -d @-
```
### Via Command Line

View File

@@ -24,6 +24,11 @@ On startup, an admin user and password are generated and printed in the logs. It
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
```yaml
auth:
reset_admin_password: true
```
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
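As a sketch, assuming the option is named `failed_login_rate_limit` under `auth` (check the full config reference for the exact key), the string notation looks like:
```yaml
auth:
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```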
@@ -92,15 +97,35 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
### Header mapping
If you have disabled Frigate's authentication and your proxy supports passing a header with the authenticated username, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` value. Header names are not case sensitive.
If you have disabled Frigate's authentication and your proxy supports passing a header with authenticated usernames and/or roles, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` and `X-Forwarded-Role` values. Header names are not case sensitive.
```yaml
proxy:
...
header_map:
user: x-forwarded-user
role: x-forwarded-role
```
Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
#### Port Considerations
**Authenticated Port (8971)**
- Header mapping is **fully supported**.
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.
**Unauthenticated Port (5000)**
- Headers are **ignored** for role enforcement.
- All requests are treated as **anonymous**.
- The `remote-role` value is **overridden** to **admin-level access**.
- This design ensures **unauthenticated internal use** within a trusted network.
Note that only the following list of headers are permitted by default:
```
@@ -121,8 +146,6 @@ X-authentik-uid
If you would like to add more options, you can overwrite the default file with a docker bind mount at `/usr/local/nginx/conf/proxy_trusted_headers.conf`. Reference the source code for the default file formatting.
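A docker compose volume sketch for that override; the host path is illustrative:
```yaml
services:
  frigate:
    volumes:
      - /path/to/your/proxy_trusted_headers.conf:/usr/local/nginx/conf/proxy_trusted_headers.conf:ro
```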
Future versions of Frigate may leverage group and role headers for authorization in Frigate as well.
### Login page redirection
Frigate gracefully performs login page redirection that should work with most authentication proxies. If your reverse proxy returns a `Location` header on `401`, `302`, or `307` unauthorized responses, Frigate's frontend will automatically detect it and redirect to that URL.
@@ -130,3 +153,31 @@ Frigate gracefully performs login page redirection that should work with most au
### Custom logout url
If your reverse proxy has a dedicated logout url, you can specify using the `logout_url` config option. This will update the link for the `Logout` link in the UI.
## User Roles
Frigate supports user roles to control access to certain features in the UI and API, such as managing users or modifying configuration settings. Roles are assigned to users in the database or through proxy headers and are enforced when accessing the UI or API through the authenticated port (`8971`).
### Supported Roles
- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.
### Role Enforcement
When using the authenticated port (`8971`), roles are validated via the JWT token or proxy headers (e.g., `remote-role`).
On the internal **unauthenticated** port (`5000`), roles are **not enforced**. All requests are treated as **anonymous**, granting access equivalent to the **admin** role without restrictions.
To use role-based access control, you must connect to Frigate via the **authenticated port (`8971`)** directly or through a reverse proxy.
### Role Visibility in the UI
- When logged in via port `8971`, your **username and role** are displayed in the **account menu** (bottom corner).
- When using port `5000`, the UI will always display "anonymous" for the username and "admin" for the role.
### Managing User Roles
1. Log in as an **admin** user via port `8971`.
2. Navigate to **Settings > Users**.
3. Edit a user's role by selecting **admin** or **viewer**.

View File

@@ -41,6 +41,7 @@ cameras:
...
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -49,6 +50,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:
@@ -164,3 +167,7 @@ To maintain object tracking during PTZ moves, Frigate tracks the motion of your
### Calibration seems to have completed, but the camera is not actually moving to track my object. Why?
Some cameras have firmware that reports that FOV RelativeMove, the ONVIF command that Frigate uses for autotracking, is supported. However, if the camera does not pan or tilt when an object comes into the required zone, your camera's firmware does not actually support FOV RelativeMove. One such camera is the Uniview IPC672LR-AX4DUPK. It actually moves its zoom motor instead of panning and tilting and does not follow the ONVIF standard whatsoever.
### Frigate reports an error saying that calibration has failed. Why?
Calibration measures the amount of time it takes for Frigate to make a series of movements with your PTZ. This error message is recorded in the log if these values are too high for Frigate to support calibrated autotracking. This is often the case when your camera's motor or network connection is too slow or your camera's firmware doesn't report the motor status in a timely manner. You can try running without calibration (just remove the `movement_weights` line from your config and restart), but if calibration fails, this often means that autotracking will behave unpredictably.

View File

@@ -22,7 +22,7 @@ Note that mjpeg cameras require encoding the video into h264 for recording, and
```yaml
go2rtc:
streams:
mjpeg_cam: "ffmpeg:{your_mjpeg_stream_url}#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
mjpeg_cam: "ffmpeg:http://your_mjpeg_stream_url#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.
cameras:
...
@@ -65,19 +65,32 @@ ffmpeg:
## Model/vendor specific setup
### Amcrest & Dahua
Amcrest & Dahua cameras should be connected to via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=0 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=1 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=2 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
rtsp://USERNAME:PASSWORD@CAMERA-IP/cam/realmonitor?channel=1&subtype=3 # new higher end cameras support a fourth stream with another mid resolution (1280x720, 1920x1080)
```
### Annke C800
This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to AAC. Unfortunately, direct playback in the browser is not working (yet), but the downloaded clip can be played locally.
This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be adjusted using the `apple_compatibility` config.
```yaml
cameras:
annkec800: # <------ Name the camera
ffmpeg:
apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
output_args:
record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac
record: preset-record-generic-audio-aac
inputs:
- path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
- path: rtsp://USERNAME:PASSWORD@CAMERA-IP/H264/ch1/main/av_stream # <----- Update for your camera
roles:
- detect
- record
@@ -95,6 +108,29 @@ ffmpeg:
input_args: preset-rtsp-blue-iris
```
### Hikvision Cameras
Hikvision cameras should be connected to via RTSP using the following format:
```
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/101 # this is the main stream
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/102 # this is the sub stream, typically supporting low resolutions only
rtsp://USERNAME:PASSWORD@CAMERA-IP/streaming/channels/103 # higher end cameras support a third stream with a mid resolution (1280x720, 1920x1080)
```
:::note
[Some users have reported](https://www.reddit.com/r/frigate_nvr/comments/1hg4ze7/hikvision_security_settings) that newer Hikvision cameras require adjustments to the security settings:
```
RTSP Authentication - digest/basic
RTSP Digest Algorithm - MD5
WEB Authentication - digest/basic
WEB Digest Algorithm - MD5
```
:::
### Reolink Cameras
Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
@@ -156,7 +192,9 @@ cameras:
#### Reolink Doorbell
The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the 2-way audio only.
The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
@@ -181,7 +219,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.

View File

@@ -7,7 +7,7 @@ title: Camera Configuration
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing tracked objects and recordings can still be accessed. Live streams, recording, and detection will not work. Camera specific configurations will be used.
A camera is enabled by default but can be disabled by using `enabled: False`. Cameras that are disabled through the configuration file will not appear in the Frigate UI and will not consume system resources.
Each role can only be assigned to one input per camera. The options for roles are as follows:
@@ -109,7 +109,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on original and 4K models. All models are suspected to be incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |

View File

@@ -0,0 +1,56 @@
---
id: face_recognition
title: Face Recognition
---
Face recognition allows people to be assigned names, and when their face is recognized Frigate will assign the person's name as a sub label. This information is included in the UI, filters, and notifications.
Frigate has support for CV2 Local Binary Pattern Face Recognizer to recognize faces, which runs locally. A lightweight face landmark detection model is also used to align faces before running them through the face recognizer.
## Configuration
Face recognition is disabled by default, face recognition must be enabled in your config file before it can be used. Face recognition is a global configuration setting.
```yaml
face_recognition:
enabled: true
```
## Dataset
The number of images needed for a sufficient training set for face recognition varies depending on several factors:
- Diversity of the dataset: A dataset with diverse images, including variations in lighting, pose, and facial expressions, will require fewer images per person than a less diverse dataset.
- Desired accuracy: The higher the desired accuracy, the more images are typically needed.
However, here are some general guidelines:
- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.
## Creating a Robust Training Set
The accuracy of face recognition is heavily dependent on the quality of data given to it for training. It is recommended to build the face training library in phases.
:::tip
When choosing images to include in the face training set it is recommended to always follow these recommendations:
- If it is difficult to make out details in a person's face it will not be helpful in training.
- Avoid images with under/over-exposure.
- Avoid blurry / pixelated images.
- Be careful when uploading images of people when they are wearing clothing that covers a lot of their face as this may confuse the training.
- Do not upload too many images at the same time; it is recommended to train 4-6 images for each person each day so it is easier to know if the previously added images helped or hurt performance.
:::
### Step 1 - Building a Strong Foundation
When first enabling face recognition it is important to build a foundation of strong images. It is recommended to start by uploading 1-2 photos taken by a smartphone for each person. It is important that the person's face in the photo is straight-on and not turned which will ensure a good starting point.
Then it is recommended to use the `Face Library` tab in Frigate to select and train images for each person as they are detected. When building a strong foundation it is strongly recommended to only train on images that are straight-on. Ignore images from cameras that recognize faces from an angle. Once a person starts to be consistently recognized correctly on images that are straight-on, it is time to move on to the next step.
### Step 2 - Expanding The Dataset
Once straight-on images are performing well, start choosing slightly off-angle images to include for training. It is important to still choose images where enough face detail is visible to recognize someone.

View File

@@ -3,15 +3,15 @@ id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects.
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you are manually entering a description for a tracked object prior to its end, it will be overwritten by the generated response.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
@@ -31,15 +31,15 @@ cameras:
:::warning
Using Ollama on CPU is not recommended; high inference times make using generative AI impractical.
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [docker container](https://hub.docker.com/r/ollama/ollama) available.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
@@ -110,6 +110,12 @@ genai:
model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
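One way to set that variable with Docker Compose; the URL is a placeholder for your provider's endpoint:
```yaml
services:
  frigate:
    environment:
      - OPENAI_BASE_URL=https://your-openai-compatible-provider.example/v1
```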
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
@@ -138,6 +144,19 @@ Frigate's thumbnail search excels at identifying specific details about tracked
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
If you want notifications earlier than when an object ceases to be tracked, an additional send trigger, `after_significant_updates`, can be configured.
```yaml
genai:
send_triggers:
tracked_object_end: true # default
after_significant_updates: 3 # how many updates to a tracked object before we should send an image
```
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
@@ -168,7 +187,7 @@ genai:
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the thumbnails collected over the object's lifetime to the model. Using a snapshot provides the AI with a higher-resolution image (typically downscaled by the AI itself), but the trade-off is that only a single image is used, which might limit the model's ability to determine object movement or direction.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
```yaml
cameras:

View File

@@ -175,6 +175,16 @@ For more information on the various values across different distributions, see h
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
#### Stats for SR-IOV devices
When using virtualized GPUs via SR-IOV, additional args are needed for GPU stats to function. This can be enabled with the following config:
```yaml
telemetry:
stats:
sriov: True
```
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
@@ -231,28 +241,11 @@ docker run -d \
### Setup Decoder
The decoder you need to pass in the `hwaccel_args` will depend on the input video.
A list of supported codecs (you can use `ffmpeg -decoders | grep cuvid` in the container to get the ones your card supports)
```
V..... h263_cuvid Nvidia CUVID H263 decoder (codec h263)
V..... h264_cuvid Nvidia CUVID H264 decoder (codec h264)
V..... hevc_cuvid Nvidia CUVID HEVC decoder (codec hevc)
V..... mjpeg_cuvid Nvidia CUVID MJPEG decoder (codec mjpeg)
V..... mpeg1_cuvid Nvidia CUVID MPEG1VIDEO decoder (codec mpeg1video)
V..... mpeg2_cuvid Nvidia CUVID MPEG2VIDEO decoder (codec mpeg2video)
V..... mpeg4_cuvid Nvidia CUVID MPEG4 decoder (codec mpeg4)
V..... vc1_cuvid Nvidia CUVID VC1 decoder (codec vc1)
V..... vp8_cuvid Nvidia CUVID VP8 decoder (codec vp8)
V..... vp9_cuvid Nvidia CUVID VP9 decoder (codec vp9)
```
For example, for H264 video, you'll select `preset-nvidia-h264`.
Using `preset-nvidia` ffmpeg will automatically select the necessary profile for the incoming video, and will log an error if the profile is not supported by your GPU.
```yaml
ffmpeg:
hwaccel_args: preset-nvidia-h264
hwaccel_args: preset-nvidia
```
If everything is working correctly, you should see a significant improvement in performance.
@@ -302,10 +295,8 @@ These instructions were originally based on the [Jellyfin documentation](https:/
## NVIDIA Jetson (Orin AGX, Orin NX, Orin Nano\*, Xavier AGX, Xavier NX, TX2, TX1, Nano)
A separate set of docker images is available that is based on Jetpack/L4T. They come with an `ffmpeg` build
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 4.6, use the
`stable-tensorrt-jp4` tagged image, or if your Jetson host is running Jetpack 5.0+, use the `stable-tensorrt-jp5`
tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform,
but the image will still allow hardware decoding and tensorrt object detection.
with codecs that use the Jetson's dedicated media engine. If your Jetson host is running Jetpack 5.0+ use the `stable-tensorrt-jp5`
tagged image, or if your Jetson host is running Jetpack 6.0+ use the `stable-tensorrt-jp6` tagged image. Note that the Orin Nano has no video encoder, so frigate will use software encoding on this platform, but the image will still allow hardware decoding and tensorrt object detection.
You will need to use the image with the nvidia container runtime:
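For example, a minimal `docker run` sketch; the image tag is the jp6 one mentioned above, and your usual volumes, ports, and devices still need to be added:
```bash
docker run -d \
  --name frigate \
  --runtime nvidia \
  ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6
```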

View File

@@ -203,14 +203,13 @@ detectors:
ov:
type: openvino
device: AUTO
model:
path: /openvino-model/ssdlite_mobilenet_v2.xml
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
record:

View File

@@ -0,0 +1,152 @@
---
id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters or recognized name as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
When a plate is recognized, the detected characters or recognized name is:
- Added as a `sub_label` to the `car` tracked object.
- Viewable in the Review Item Details pane in Review and the Tracked Object Details pane in Explore.
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` for the tracked object.
## Model Requirements
Users running a Frigate+ model (or any custom model that natively detects license plates) should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
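For those users, a minimal sketch of adding the label to the global tracked object list; the other labels shown are just common defaults:
```yaml
objects:
  track:
    - person
    - car
    - license_plate
```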
Users without a model that detects license plates can still run LPR. Frigate uses a lightweight YOLOv9 license plate detection model that runs on your CPU. In this case, you should _not_ define `license_plate` in your list of objects to track.
:::note
Frigate needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera or have a zoomed-in view, make sure the camera captures enough of the `car` for Frigate to detect it reliably.
:::
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
```yaml
lpr:
enabled: True
```
Ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
## Advanced Configuration
Fine-tune the LPR feature using these optional parameters:
### Detection
- **`detection_threshold`**: License plate object detection confidence score required before recognition runs.
- Default: `0.7`
  - Note: If you are using a Frigate+ model and you set the `threshold` in your objects config for `license_plate` higher than this value, recognition will never run. It's best to ensure these values match, or that this `detection_threshold` is lower than your object config `threshold`.
- **`min_area`**: Defines the minimum size (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
### Recognition
- **`recognition_threshold`**: Recognition confidence score required to add the plate to the object as a sub label.
- Default: `0.9`.
- **`min_plate_length`**: Specifies the minimum number of characters a detected license plate must have to be added as a sub label to an object.
- Use this to filter out short, incomplete, or incorrect detections.
- **`format`**: A regular expression defining the expected format of detected plates. Plates that do not match this format will be discarded.
- `"^[A-Z]{1,3} [A-Z]{1,2} [0-9]{1,4}$"` matches plates like "B AB 1234" or "M X 7"
- `"^[A-Z]{2}[0-9]{2} [A-Z]{3}$"` matches plates like "AB12 XYZ" or "XY68 ABC"
- Websites like https://regex101.com/ can help test regular expressions for your plates.
### Matching
- **`known_plates`**: List of strings or regular expressions that assign a custom `sub_label` to `car` objects when a recognized plate matches a known value.
- These labels appear in the UI, filters, and notifications.
- **`match_distance`**: Allows for minor variations (missing/incorrect characters) when matching a detected plate to a known plate.
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will _not_ operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.
## Configuration Examples
```yaml
lpr:
enabled: True
min_area: 1500 # Ignore plates smaller than 1500 pixels
min_plate_length: 4 # Only recognize plates with 4 or more characters
known_plates:
Wife's Car:
- "ABC-1234"
- "ABC-I234" # Accounts for potential confusion between the number one (1) and capital letter I
Johnny:
- "J*N-*234" # Matches JHN-1234 and JMN-I234, but also note that "*" matches any number of characters
Sally:
- "[S5]LL 1234" # Matches both SLL 1234 and 5LL 1234
Work Trucks:
- "EMP-[0-9]{3}[A-Z]" # Matches plates like EMP-123A, EMP-456Z
```
```yaml
lpr:
enabled: True
min_area: 4000 # Run recognition on larger plates only
recognition_threshold: 0.85
format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
match_distance: 1 # Allow one character variation in plate matching
known_plates:
Delivery Van:
- "RJ K5678"
- "UP A1234"
Supervisor:
- "MN D3163"
```
## FAQ
### Why isn't my license plate being detected and recognized?
Ensure that:
- Your camera has a clear, human-readable, well-lit view of the plate. If you can't read the plate, Frigate certainly won't be able to. This may require changing video size, quality, or frame rate settings on your camera, depending on your scene and how fast the vehicles are traveling.
- The plate is large enough in the image (try adjusting `min_area`) or increasing the resolution of your camera's stream.
- A `car` is detected first, as LPR only runs on recognized vehicles.
If you are using a Frigate+ model or a custom model that detects license plates, ensure that `license_plate` is added to your list of objects to track.
If you are using the free model that ships with Frigate, you should _not_ add `license_plate` to the list of objects to track.
### Can I run LPR without detecting `car` objects?
No, Frigate requires a `car` to be detected first before recognizing a license plate.
### How can I improve detection accuracy?
- Use high-quality cameras with good resolution.
- Adjust `detection_threshold` and `recognition_threshold` values.
- Define a `format` regex to filter out invalid detections.
### Does LPR work at night?
Yes, but performance depends on camera quality, lighting, and infrared capabilities. Make sure your camera can capture clear images of plates at night.
### How can I match known plates with minor variations?
Use `match_distance` to allow small character mismatches. Alternatively, define multiple variations in `known_plates`.
### How do I debug LPR issues?
- View MQTT messages for `frigate/events` to verify detected plates.
- Adjust `detection_threshold` and `recognition_threshold` settings.
- If you are using a Frigate+ model or a model that detects license plates, watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected with a `car`.
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration, as shown in the sketch below. These logs are _very_ verbose, so only enable this when necessary.
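A minimal logger sketch for that entry, assuming the standard `logger` layout with per-module `logs`:
```yaml
logger:
  default: info
  logs:
    frigate.data_processing.common.license_plate: debug
```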
### Will LPR slow down my system?
LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results.

View File

@@ -3,9 +3,9 @@ id: live
title: Live View
---
Frigate intelligently displays your camera streams on the Live view dashboard. Your camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion is detected, cameras seamlessly switch to a live stream.
Frigate intelligently displays your camera streams on the Live view dashboard. By default, Frigate employs "smart streaming" where camera images update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any motion or active objects are detected, cameras seamlessly switch to a live stream.
## Live View technologies
### Live View technologies
Frigate intelligently uses three different streaming technologies to display your camera streams on the dashboard and the single camera view, switching between available modes based on network bandwidth, player errors, or required features like two-way talk. The highest quality and fluency of the Live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
@@ -23,13 +23,13 @@ If you are using go2rtc, you should adjust the following settings in your camera
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_. as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes. For many users this may not be an issue, but it should be noted that a 1x i-frame interval will cause more storage utilization if you are using the stream for the `record` role as well.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
### Audio Support
MSE requires AAC audio, WebRTC requires PCMU/PCMA or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
MSE requires PCMA/PCMU or AAC audio, WebRTC requires PCMA/PCMU or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
```yaml
go2rtc:
@@ -51,19 +51,32 @@ go2rtc:
- ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
### Setting Streams For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
You can configure Frigate to allow manual selection of the stream you want to view in the Live UI. For example, you may want to view your camera's substream on mobile devices, but the full resolution stream on desktop devices. Setting the `live -> streams` list will populate a dropdown in the UI's Live view that allows you to choose between the streams. This stream setting is _per device_ and is saved in your browser's local storage.
Additionally, when creating and editing camera groups in the UI, you can choose the stream you want to use for your camera group's Live dashboard.
:::note
Frigate's default dashboard ("All Cameras") will always use the first entry you've defined in `streams:` when playing live streams from your cameras.
:::
Configure the `streams` option with a "friendly name" for your stream followed by the go2rtc stream name.
Using Frigate's internal version of go2rtc is required to use this feature. You cannot specify paths in the `streams` configuration, only go2rtc stream names.
```yaml
go2rtc:
streams:
test_cam:
- rtsp://192.168.1.5:554/live0 # <- stream which supports video & aac audio.
- rtsp://192.168.1.5:554/live_main # <- stream which supports video & aac audio.
- "ffmpeg:test_cam#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
test_cam_sub:
- rtsp://192.168.1.5:554/substream # <- stream which supports video & aac audio.
- "ffmpeg:test_cam_sub#audio=opus" # <- copy of the stream which transcodes audio to opus for webrtc
- rtsp://192.168.1.5:554/live_sub # <- stream which supports video & aac audio.
test_cam_another_sub:
- rtsp://192.168.1.5:554/live_alt # <- stream which supports video & aac audio.
cameras:
test_cam:
@@ -80,7 +93,10 @@ cameras:
roles:
- detect
live:
stream_name: test_cam_sub
streams: # <--- Multiple streams for Frigate 0.16 and later
Main Stream: test_cam # <--- Specify a "friendly name" followed by the go2rtc stream name
Sub Stream: test_cam_sub
Special Stream: test_cam_another_sub
```
### WebRTC extra configuration:
@@ -101,6 +117,7 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
- Note that WebRTC does not support H.265.
:::tip
@@ -138,3 +155,74 @@ services:
:::
See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.
### Two way talk
For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You should:
- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
- Stream selection using the `live -> streams` configuration option (see _Setting Streams For Live UI_ above)
- Streaming type:
- _No streaming_: Camera images will only update once per minute and no live streaming will occur.
- _Smart Streaming_ (default, recommended setting): Smart streaming will update your camera image once per minute when no detectable activity is occurring to conserve bandwidth and resources, since a static picture is the same as a streaming image with no motion or objects. When motion or objects are detected, the image seamlessly switches to a live stream.
- _Continuous Streaming_: Camera image will always be a live stream when visible on the dashboard, even if no activity is being detected. Continuous streaming may cause high bandwidth usage and performance issues. **Use with caution.**
- _Compatibility mode_: Enable this option only if your camera's live stream is displaying color artifacts and has a diagonal line on the right side of the image. Before enabling this, try setting your camera's `detect` width and height to a standard aspect ratio (for example: 640x352 becomes 640x360, and 800x443 becomes 800x450, 2688x1520 becomes 2688x1512, etc). Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your config fails to resolve the color artifacts and diagonal line.
:::note
The default dashboard ("All Cameras") will always use Smart Streaming and the first entry set in your `streams` configuration, if defined. Use a camera group if you want to change any of these settings from the defaults.
:::
### Disabling cameras
Cameras can be temporarily disabled through the Frigate UI and through [MQTT](/integrations/mqtt#frigatecamera_nameenabledset) to conserve system resources. When disabled, Frigate's ffmpeg processes are terminated — recording stops, object detection is paused, and the Live dashboard displays a blank image with a disabled message. Review items, tracked objects, and historical footage for disabled cameras can still be accessed via the UI.
For restreamed cameras, go2rtc remains active but does not use system resources for decoding or processing unless there are active external consumers (such as the Advanced Camera Card in Home Assistant using a go2rtc source).
Note that disabling a camera through the config file (`enabled: False`) removes all related UI elements, including historical footage access. To retain access while disabling the camera, keep it enabled in the config and use the UI or MQTT to disable it temporarily.
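For reference, the config-file form mentioned above looks like this; the camera name is illustrative:
```yaml
cameras:
  front_door:
    enabled: False
```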
## Live view FAQ
1. **Why don't I have audio in my Live view?**
You must use go2rtc to hear audio in your live streams. If you have go2rtc already configured, you need to ensure your camera is sending PCMA/PCMU or AAC audio. If you can't change your camera's audio codec, you need to [transcode the audio](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg) using go2rtc.
Note that the low bandwidth mode player is a video-only stream. You should not expect to hear audio when in low bandwidth mode, even if you've set up go2rtc.
2. **Frigate shows that my live stream is in "low bandwidth mode". What does this mean?**
Frigate intelligently selects the live streaming technology based on a number of factors (user-selected modes like two-way talk, camera settings, browser capabilities, available bandwidth) and prioritizes showing an actual up-to-date live view of your camera's stream as quickly as possible.
When you have go2rtc configured, Live view initially attempts to load and play back your stream with a clearer, fluent stream technology (MSE). An initial timeout, a low bandwidth condition that would cause buffering of the stream, or decoding errors in the stream will cause Frigate to switch to the stream defined by the `detect` role, using the jsmpeg format. This is what the UI labels as "low bandwidth mode". On Live dashboards, the mode will automatically reset when smart streaming is configured and activity stops. You can also try using the _Reset_ button to force a reload of your stream.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the recommendations above or ensure you have enough bandwidth available.
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**
On the default Live dashboard ("All Cameras"), your camera images will update once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity is detected, cameras seamlessly switch to a full-resolution live stream. If you want to customize this behavior, use a camera group.
4. **I see a strange diagonal line on my live view, but my recordings look fine. How can I fix it?**
This is caused by incorrect dimensions set in your detect width or height (or incorrectly auto-detected), causing the jsmpeg player's rendering engine to display a slightly distorted image. You should enlarge the width and height of your `detect` resolution up to a standard aspect ratio (example: 640x352 becomes 640x360, and 800x443 becomes 800x450, 2688x1520 becomes 2688x1512, etc). If changing the resolution to match a standard (4:3, 16:9, or 32:9, etc) aspect ratio does not solve the issue, you can enable "compatibility mode" in your camera group dashboard's stream settings. Depending on your browser and device, more than a few cameras in compatibility mode may not be supported, so only use this option if changing your `detect` width and height fails to resolve the color artifacts and diagonal line.
5. **How does "smart streaming" work?**
Because a static image of a scene looks exactly the same as a live stream with no motion or activity, smart streaming updates your camera images once per minute when no detectable activity is occurring to conserve bandwidth and resources. As soon as any activity (motion or object/audio detection) occurs, cameras seamlessly switch to a live stream.
This static image is pulled from the stream defined in your config with the `detect` role. When activity is detected, images from the `detect` stream immediately begin updating at ~5 frames per second so you can see the activity until the live player is loaded and begins playing. This usually only takes a second or two. If the live player times out, buffers, or has streaming errors, the jsmpeg player is loaded and plays a video-only stream from the `detect` role. When activity ends, the players are destroyed and a static image is displayed until activity is detected again, and the process repeats.
This is Frigate's default and recommended setting because it results in a significant bandwidth savings, especially for high resolution cameras.
6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**
If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.
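As referenced in question 1, audio transcoding is handled in your go2rtc stream definition. A minimal sketch, assuming a camera named `back_yard` and a placeholder RTSP URL, adds an AAC audio track using go2rtc's ffmpeg source:
```yaml
go2rtc:
  streams:
    back_yard:
      # original camera stream (video is passed through untouched)
      - rtsp://username:password@192.168.1.10:554/stream
      # ffmpeg source that transcodes the camera's audio to AAC
      - "ffmpeg:back_yard#audio=aac"
```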

View File

@@ -0,0 +1,99 @@
---
id: metrics
title: Metrics
---
# Metrics
Frigate exposes Prometheus metrics at the `/api/metrics` endpoint that can be used to monitor the performance and health of your Frigate instance.
## Available Metrics
### System Metrics
- `frigate_cpu_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process CPU usage percentage
- `frigate_mem_usage_percent{pid="", name="", process="", type="", cmdline=""}` - Process memory usage percentage
- `frigate_gpu_usage_percent{gpu_name=""}` - GPU utilization percentage
- `frigate_gpu_mem_usage_percent{gpu_name=""}` - GPU memory usage percentage
### Camera Metrics
- `frigate_camera_fps{camera_name=""}` - Frames per second being consumed from your camera
- `frigate_detection_fps{camera_name=""}` - Number of times detection is run per second
- `frigate_process_fps{camera_name=""}` - Frames per second being processed
- `frigate_skipped_fps{camera_name=""}` - Frames per second skipped for processing
- `frigate_detection_enabled{camera_name=""}` - Detection enabled status for camera
- `frigate_audio_dBFS{camera_name=""}` - Audio dBFS for camera
- `frigate_audio_rms{camera_name=""}` - Audio RMS for camera
### Detector Metrics
- `frigate_detector_inference_speed_seconds{name=""}` - Time spent running object detection in seconds
- `frigate_detection_start{name=""}` - Detector start time (unix timestamp)
### Storage Metrics
- `frigate_storage_free_bytes{storage=""}` - Storage free bytes
- `frigate_storage_total_bytes{storage=""}` - Storage total bytes
- `frigate_storage_used_bytes{storage=""}` - Storage used bytes
- `frigate_storage_mount_type{mount_type="", storage=""}` - Storage mount type info
### Service Metrics
- `frigate_service_uptime_seconds` - Uptime in seconds
- `frigate_service_last_updated_timestamp` - Stats recorded time (unix timestamp)
- `frigate_device_temperature{device=""}` - Device Temperature
### Event Metrics
- `frigate_camera_events{camera="", label=""}` - Count of camera events since exporter started
## Configuring Prometheus
To scrape metrics from Frigate, add the following to your Prometheus configuration:
```yaml
scrape_configs:
- job_name: 'frigate'
metrics_path: '/api/metrics'
static_configs:
- targets: ['frigate:5000']
scrape_interval: 15s
```
## Example Queries
Here are some example PromQL queries that might be useful:
```promql
# Average CPU usage across all processes
avg(frigate_cpu_usage_percent)
# Total GPU memory usage
sum(frigate_gpu_mem_usage_percent)
# Average detection FPS for a camera over the last 5 minutes
avg_over_time(frigate_detection_fps{camera_name="front_door"}[5m])
# Storage usage percentage
(frigate_storage_used_bytes / frigate_storage_total_bytes) * 100
# Event count by camera in last hour
increase(frigate_camera_events[1h])
```
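If you run Prometheus with an Alertmanager, these metrics can also drive alerting rules. A minimal sketch (the rule names and thresholds are illustrative, not recommendations):
```yaml
groups:
  - name: frigate
    rules:
      - alert: FrigateHighCpuUsage
        # average CPU usage across all Frigate processes
        expr: avg(frigate_cpu_usage_percent) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Frigate CPU usage has stayed above 90% for 10 minutes"
      - alert: FrigateStorageAlmostFull
        # percentage of recording storage in use
        expr: (frigate_storage_used_bytes / frigate_storage_total_bytes) * 100 > 90
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Frigate recording storage is more than 90% full"
```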
## Grafana Dashboard
You can use these metrics to create Grafana dashboards to monitor your Frigate instance. Here's an example of metrics you might want to track:
- CPU, Memory and GPU usage over time
- Camera FPS and detection rates
- Storage usage and trends
- Event counts by camera
- System temperatures
A sample Grafana dashboard JSON will be provided in a future update.
## Metric Types
The metrics exposed by Frigate use the following Prometheus metric types:
- **Counter**: Cumulative values that only increase (e.g., `frigate_camera_events`)
- **Gauge**: Values that can go up and down (e.g., `frigate_cpu_usage_percent`)
- **Info**: Key-value pairs for metadata (e.g., `frigate_storage_mount_type`)
For more information about Prometheus metric types, see the [Prometheus documentation](https://prometheus.io/docs/concepts/metric_types/).

View File

@@ -11,14 +11,38 @@ Frigate offers native notifications using the [WebPush Protocol](https://web.dev
In order to use notifications the following requirements must be met:
- Frigate must be accessed via a secure `https` connection ([see the authorization docs](/configuration/authentication)).
- A supported browser must be used. Currently Chrome, Firefox, and Safari are known to be supported.
- In order for notifications to be usable externally, Frigate must be accessible externally.
- For iOS devices, some users have also indicated that the Notifications switch needs to be enabled in iOS Settings --> Apps --> Safari --> Advanced --> Features.
### Configuration
To configure notifications, go to the Frigate WebUI -> Settings -> Notifications, enable the feature, fill out the fields, and save.
Optionally, you can change the default cooldown period for notifications through the `cooldown` parameter in your config file. This parameter can also be overridden at the camera level.
Notifications will be prevented if either:
- The global cooldown period hasn't elapsed since any camera's last notification
- The camera-specific cooldown period hasn't elapsed for the specific camera
```yaml
notifications:
enabled: True
email: "johndoe@gmail.com"
cooldown: 10 # wait 10 seconds before sending another notification from any camera
```
```yaml
cameras:
doorbell:
...
notifications:
enabled: True
cooldown: 30 # wait 30 seconds before sending another notification from the doorbell camera
```
### Registration
Once notifications are enabled, press the `Register for Notifications` button on all devices that you would like to receive notifications on. This will register the background worker. After registering, Frigate must be restarted before notifications will begin to be sent.
@@ -39,4 +63,4 @@ Different platforms handle notifications differently, some settings changes may
### Android
Most Android phones have battery optimization settings. To get reliable Notification delivery the browser (Chrome, Firefox) should have battery optimizations disabled. If Frigate is running as a PWA then the Frigate app should have battery optimizations disabled as well.

View File

@@ -10,32 +10,46 @@ title: Object Detectors
Frigate supports multiple different detectors that work on different types of hardware:
**Most Hardware**
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration modules are available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
**AMD**
- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.
**Intel**
- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.
**Nvidia**
- [TensorRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.
**Rockchip**
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
:::
:::note
Multiple detectors cannot be mixed for object detection (ex: OpenVINO and Coral EdgeTPU cannot be used for object detection at the same time).
This does not affect using hardware for accelerating other tasks such as [semantic search](./semantic_search.md).
:::
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## Edge TPU Detector
@@ -115,6 +129,111 @@ detectors:
type: edgetpu
device: pci
```
---
## Hailo-8
This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the Hailo hardware.
### Configuration
When configuring the Hailo detector, you have two options to specify the model: a local **path** or a **URL**.
If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under `/config/model_cache/hailo`.
#### YOLO
Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:
- **Hailo-8 hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
- **Hailo-8L hardware:** Uses **YOLOv6n** (default: `yolov6n.hef`)
```yaml
detectors:
hailo8l:
type: hailo8l
device: PCIe
model:
width: 320
height: 320
input_tensor: nhwc
input_pixel_format: rgb
input_dtype: int
model_type: yolo-generic
# The detector automatically selects the default model based on your hardware:
# - For Hailo-8 hardware: YOLOv6n (default: yolov6n.hef)
# - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
#
# Optionally, you can specify a local model path to override the default.
# If a local path is provided and the file exists, it will be used instead of downloading.
# Example:
# path: /config/model_cache/hailo/yolov6n.hef
#
# You can also override using a custom URL:
# path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov6n.hef
# just make sure to use the right configuration based on the model
```
#### SSD
For SSD-based models, provide either a model path or URL to your compiled SSD model. The integration will first check the local path before downloading if necessary.
```yaml
detectors:
hailo8l:
type: hailo8l
device: PCIe
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: rgb
model_type: ssd
# Specify the local model path (if available) or URL for SSD MobileNet v1.
# Example with a local path:
# path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
#
# Or override using a custom URL:
# path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8l/ssd_mobilenet_v1.hef
```
#### Custom Models
The Hailo detector supports all YOLO models compiled for Hailo hardware that include post-processing. You can specify a custom URL or a local path to download or use your model directly. If both are provided, the detector checks the local path first.
```yaml
detectors:
hailo8l:
type: hailo8l
device: PCIe
model:
width: 640
height: 640
input_tensor: nhwc
input_pixel_format: rgb
input_dtype: int
model_type: yolo-generic
# Optional: Specify a local model path.
# path: /config/model_cache/hailo/custom_model.hef
#
# Alternatively, or as a fallback, provide a custom URL:
# path: https://custom-model-url.com/path/to/model.hef
```
For additional ready-to-use models, please visit: https://github.com/hailo-ai/hailo_model_zoo
Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing. You're welcome to choose any of these pre-configured models for your implementation.
> **Note:**
> The `path` parameter can accept either a local file path or a URL ending with `.hef`. When provided, the detector will first check if the path is a local file path. If the file exists locally, it will use it directly. If the file is not found locally or if a URL was provided, it will attempt to download the model from the specified URL.
---
## OpenVINO Detector
@@ -144,7 +263,9 @@ detectors:
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
```yaml
detectors:
@@ -167,15 +288,7 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@@ -197,13 +310,43 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
#### YOLOv9
[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
ov:
type: openvino
device: GPU
model:
model_type: yolov9
width: 640 # <--- should match the imgsize set during model export
height: 640 # <--- should match the imgsize set during model export
input_tensor: nchw
input_dtype: float
path: /config/model_cache/yolov9-t.onnx
labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
Nvidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt`. This detector is designed to work with Yolo models for object detection.
### Minimum Hardware Support
The TensorRT detector uses the 12.x series of CUDA libraries which have minor version compatibility. The minimum driver version on the host system must be `>=545`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
To use the TensorRT detector, make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
@@ -223,7 +366,7 @@ The model used for TensorRT must be preprocessed on the same hardware platform t
The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
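For example, a docker-compose fragment that generates the DLA variant mentioned above might look like this (a sketch; adjust the model name to the one you actually want):
```yml
frigate:
  environment:
    # append -dla to run the generated model on DLA0
    - YOLO_MODELS=yolov7-320-dla
```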
@@ -231,6 +374,8 @@ If your GPU does not support FP16 operations, you can pass the environment varia
Specific models can be selected by passing an environment variable to the `docker run` command or in your `docker-compose.yml` file. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
<details>
<summary>Available Models</summary>
```
yolov3-288
yolov3-416
@@ -254,17 +399,19 @@ yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
```
</details>
An example `docker-compose.yml` fragment that converts the `yolov7-320` and `yolov7x-640` models for a Pascal card would look something like this:
```yml
frigate:
environment:
- YOLO_MODELS=yolov7-320,yolov7x-640
- USE_FP16=false
```
@@ -282,6 +429,8 @@ The TensorRT detector can be selected by specifying `tensorrt` as the model type
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.
Use the config below to work with generated TRT models:
```yaml
detectors:
tensorrt:
@@ -300,7 +449,7 @@ model:
### Setup
Support for AMD GPUs is provided using the [ONNX detector](#ONNX). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
### Docker settings for GPU access
@@ -350,7 +499,7 @@ When using docker compose:
```yaml
services:
frigate:
...
environment:
HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```
@@ -379,41 +528,31 @@ $ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/
### Supported Models
See [ONNX supported models](#supported-models) for the list of supported models (a minimal config sketch follows the caveats below):
- D-FINE models are not supported
- YOLO-NAS models are known to not run well on integrated GPUs
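As a rough sketch (the model file, dimensions, and labelmap are illustrative and must match the model you actually export), a YOLO-NAS setup on the `-rocm` image uses the plain ONNX detector configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```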
## ONNX
ONNX is an open format for building machine learning models, Frigate supports running ONNX models on CPU, OpenVINO, ROCm, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
:::info
If the correct build is used for your GPU then the GPU will be detected and used automatically.
- **AMD**
- ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp(4/5)` Frigate image.
:::
:::tip
@@ -435,15 +574,7 @@ There is no default model provided, the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
@@ -457,10 +588,67 @@ model:
width: 320 # <--- should match whatever was set in notebook
height: 320 # <--- should match whatever was set in notebook
input_pixel_format: bgr
input_tensor: nchw
path: /config/yolo_nas_s.onnx
labelmap_path: /labelmap/coco-80.txt
```
#### YOLOv9
[YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLOv9 detector has been designed to support YOLOv9 models, but may support other YOLO model architectures as well.
:::
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
onnx:
type: onnx
model:
model_type: yolov9
width: 640 # <--- should match the imgsize set during model export
height: 640 # <--- should match the imgsize set during model export
input_tensor: nchw
input_dtype: float
path: /config/model_cache/yolov9-t.onnx
labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
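If you need to produce the ONNX file yourself, the upstream YOLOv9 repository includes an `export.py` script; a command along the following lines is a reasonable starting point (the exact flags are an assumption and may differ between repository versions, so check `python3 export.py --help` first):
```
python3 export.py --weights yolov9-t.pt --imgsz 640 --simplify --include onnx
```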
#### D-FINE
[D-FINE](https://github.com/Peterande/D-FINE) is the [current state of the art](https://paperswithcode.com/sota/real-time-object-detection-on-coco?p=d-fine-redefine-regression-task-in-detrs-as) at the time of writing. The ONNX exported models are supported, but not included by default. See [the models section](#downloading-d-fine-model) for more information on downloading the D-FINE model for use in Frigate.
:::warning
D-FINE is currently not supported on OpenVINO
:::
After placing the downloaded onnx model in your config/model_cache folder, you can use the following configuration:
```yaml
detectors:
onnx:
type: onnx
model:
model_type: dfine
width: 640
height: 640
input_tensor: nchw
input_dtype: float
path: /config/model_cache/dfine_m_obj2coco.onnx
labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
@@ -482,11 +670,12 @@ detectors:
cpu1:
type: cpu
num_threads: 3
model:
path: "/custom_model.tflite"
cpu2:
type: cpu
num_threads: 3
model:
path: "/custom_model.tflite"
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
@@ -525,7 +714,7 @@ Hardware accelerated object detection is supported on the following SoCs:
- RK3576
- RK3588
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.0.0.beta0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as object detection model.
This implementation uses the [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.3.0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as object detection model.
### Prerequisites
@@ -600,26 +789,79 @@ $ cat /sys/kernel/debug/rknpu/load
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to the `.rknn` format with `rknn-toolkit2`, see the instructions below (conversion requires an x86 machine). Note that there is only post-processing for the supported models.
### Converting your own onnx model to rknn format
To convert an onnx model to the rknn format using the [rknn-toolkit2](https://github.com/airockchip/rknn-toolkit2/) you have to:
- Place one or more models in onnx format in the directory `config/model_cache/rknn_cache/onnx` on your docker host (this might require `sudo` privileges).
- Save the configuration file under `config/conv2rknn.yaml` (see below for details).
- Run `docker exec <frigate_container_id> python3 /opt/conv2rknn.py`. If the conversion was successful, the rknn models will be placed in `config/model_cache/rknn_cache`.
This is an example configuration file that you need to adjust to your specific onnx model:
```yaml
soc: ["rk3562", "rk3566", "rk3568", "rk3576", "rk3588"]
quantization: false

output_name: "{input_basename}"

config:
  mean_values: [[0, 0, 0]]
  std_values: [[255, 255, 255]]
  quant_img_rgb2bgr: true
```
Explanation of the parameters:
- `soc`: A list of all SoCs you want to build the rknn model for. If you don't specify this parameter, the script tries to find out your SoC and builds the rknn model for this one.
- `quantization`: true: 8 bit integer (i8) quantization, false: 16 bit float (fp16). Default: false.
- `output_name`: The output name of the model. The following variables are available:
- `quant`: "i8" or "fp16" depending on the config
- `input_basename`: the basename of the input model (e.g. "my_model" if the input model is called "my_model.onnx")
- `soc`: the SoC this model was built for (e.g. "rk3588")
- `tk_version`: Version of `rknn-toolkit2` (e.g. "2.3.0")
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.0/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.0_EN.pdf).
# Models
Some model types are not included in Frigate by default.
## Downloading Models
Here are some tips for getting different model types:
### Downloading D-FINE Model
To export as ONNX:
1. Clone: https://github.com/Peterande/D-FINE and install all dependencies.
2. Select and download a checkpoint from the [readme](https://github.com/Peterande/D-FINE).
3. Modify line 58 of `tools/deployment/export_onnx.py` and change batch size to 1: `data = torch.rand(1, 3, 640, 640)`
4. Run the export, making sure you select the right config for your checkpoint.
Example:
```
python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_m_obj2coco.yml -r output/dfine_m_obj2coco.pth
```
:::tip
Model export has only been tested on Linux (or WSL2). Not all dependencies are in `requirements.txt`. Some live in the deployment folder, and some are still missing entirely and must be installed manually.
Make sure you change the batch size to 1 before exporting.
:::
### Downloading YOLO-NAS Model
You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.

View File

@@ -34,7 +34,7 @@ False positives can also be reduced by filtering a detection based on its shape.
### Object Area
`min_area` and `max_area` filter on the area of an object's bounding box and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter. These values can either be in pixels or as a percentage of the frame (for example, 0.12 represents 12% of the frame).
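For example, a filter mixing both forms might look like this (a sketch; the thresholds are illustrative):
```yaml
objects:
  filters:
    dog:
      min_area: 2500 # ignore boxes smaller than 2500 pixels
      max_area: 0.2 # ignore boxes larger than 20% of the frame
```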
### Object Proportions

View File

@@ -5,7 +5,7 @@ title: Available Objects
import labels from "../../../labelmap.txt";
Frigate includes the object labels listed below from the Google Coral test data.
Please note:

View File

@@ -183,6 +183,8 @@ record:
sync_recordings: True
```
This feature is meant to fix variations in files, not completely delete entries in the database. If you delete all of your media, don't use `sync_recordings`, just stop Frigate, delete the `frigate.db` database, and restart.
:::warning
The sync operation uses considerable CPU resources and in most cases is not needed, only enable when necessary.

View File

@@ -46,13 +46,18 @@ mqtt:
tls_insecure: false
# Optional: interval in seconds for publishing stats (default: shown below)
stats_interval: 60
# Optional: QoS level for subscriptions and publishing (default: shown below)
# 0 = at most once
# 1 = at least once
# 2 = exactly once
qos: 0
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
# Required: name of the detector
detector_name:
# Required: type of the detector
# Frigate provides many types, see https://docs.frigate.video/configuration/object_detectors for more details (default: shown below)
# Additional detector types can also be plugged in.
# Detectors may require additional configuration.
# Refer to the Detectors configuration page for more information.
@@ -117,25 +122,27 @@ auth:
hash_iterations: 600000
# Optional: model modifications
# NOTE: The default values are for the EdgeTPU detector.
# Other detectors will require the model config to be set.
model:
# Required: path to the model (default: automatic based on detector)
path: /edgetpu_model.tflite
# Required: path to the labelmap (default: shown below)
labelmap_path: /labelmap.txt
# Required: Object detection model input width (default: shown below)
width: 320
# Required: Object detection model input height (default: shown below)
height: 320
# Required: Object detection model input colorspace
# Valid values are rgb, bgr, or yuv. (default: shown below)
input_pixel_format: rgb
# Required: Object detection model input tensor format
# Valid values are nhwc or nchw (default: shown below)
input_tensor: nhwc
# Required: Object detection model type, currently only used with the OpenVINO detector
# Valid values are ssd, yolox, yolonas (default: shown below)
model_type: ssd
# Required: Label name modifications. These are merged into the standard labelmap.
labelmap:
2: vehicle
# Optional: Map of object labels to their attribute labels (default: depends on model)
@@ -242,10 +249,14 @@ ffmpeg:
# If set too high, then if a ffmpeg crash or camera stream timeout occurs, you could potentially lose up to a maximum of retry_interval second(s) of footage
# NOTE: this can be a useful setting for Wireless / Battery cameras to reduce how much footage is potentially lost during a connection timeout.
retry_interval: 10
# Optional: Set tag on HEVC (H.265) recording stream to improve compatibility with Apple players. (default: shown below)
apple_compatibility: false
# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
# Optional: enables detection for the camera (default: shown below)
enabled: False
# Optional: width of the frame for the input with the detect role (default: use native stream resolution)
width: 1280
# Optional: height of the frame for the input with the detect role (default: use native stream resolution)
@@ -253,8 +264,6 @@ detect:
# Optional: desired fps for your camera for the input with the detect role (default: shown below)
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
fps: 5
# Optional: Number of consecutive detection hits required for an object to be initialized in the tracker. (default: 1/2 the frame rate)
min_initialized: 2
# Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
@@ -308,9 +317,11 @@ objects:
# Optional: filters to reduce false positives for specific object types
filters:
person:
# Optional: minimum size of the bounding box for the detected object (default: 0).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
min_area: 5000
# Optional: maximum size of the bounding box for the detected object (default: 24000000).
# Can be specified as an integer for width*height in pixels or as a decimal representing the percentage of the frame (0.000001 to 0.99).
max_area: 100000
# Optional: minimum width/height of the bounding box for the detected object (default: 0)
min_ratio: 0.5
@@ -329,6 +340,8 @@ objects:
review:
# Optional: alerts configuration
alerts:
# Optional: enables alerts for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as an alert (default: shown below)
labels:
- car
@@ -341,6 +354,8 @@ review:
- driveway
# Optional: detections configuration
detections:
# Optional: enables detections for the camera (default: shown below)
enabled: True
# Optional: labels that qualify as a detection (default: all labels that are tracked / listened to)
labels:
- car
@@ -398,12 +413,15 @@ motion:
mqtt_off_delay: 30
# Optional: Notification Configuration
# NOTE: Can be overridden at the camera level (except email)
notifications:
# Optional: Enable notification service (default: shown below)
enabled: False
# Optional: Email for push service to reach out to
# NOTE: This is required to use notifications
email: "admin@example.com"
# Optional: Cooldown time for notifications in seconds (default: shown below)
cooldown: 0
# Optional: Record configuration
# NOTE: Can be overridden at the camera level
@@ -518,12 +536,40 @@ semantic_search:
enabled: False
# Optional: Re-index embeddings database from historical tracked objects (default: shown below)
reindex: False
# Optional: Set the model used for embeddings. (default: shown below)
model: "jinav1"
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for face recognition capability
face_recognition:
# Optional: Enable face recognition (default: shown below)
enabled: False
# Optional: Set the model size used for face recognition. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for license plate recognition capability
lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
# Optional: License plate object confidence score required to begin running recognition (default: shown below)
detection_threshold: 0.7
# Optional: Minimum area of license plate to begin running recognition (default: shown below)
min_area: 1000
# Optional: Recognition confidence score required to add the plate to the object as a sub label (default: shown below)
recognition_threshold: 0.9
# Optional: Minimum number of characters a license plate must have to be added to the object as a sub label (default: shown below)
min_plate_length: 4
# Optional: Regular expression for the expected format of a license plate (default: shown below)
format: None
# Optional: Allow this number of missing/incorrect characters to still cause a detected plate to match a known plate
match_distance: 1
# Optional: Known plates to track (strings or regular expressions) (default: shown below)
known_plates: {}
# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
# the camera level (enabled: False) to enhance privacy for indoor cameras.
@@ -546,13 +592,19 @@ genai:
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
# Optional: Live stream configuration for WebUI.
# NOTE: Can be overridden at the camera level
live:
# Optional: Set the streams configured in go2rtc
# that should be used for live view in frigate WebUI. (default: name of camera)
# NOTE: In most cases this should be set at the camera level only.
streams:
main_stream: main_stream_name
sub_stream: sub_stream_name
# Optional: Set the height of the jsmpeg stream. (default: 720)
# This must be less than or equal to the height of the detect stream. Lower resolutions
# reduce bandwidth required for viewing the jsmpeg stream. Width is computed to match known aspect ratio.
@@ -637,7 +689,10 @@ cameras:
front_steps:
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
coordinates: 0.033,0.306,0.324,0.138,0.439,0.185,0.042,0.428
# Optional: The real-world distances of a 4-sided zone used for zones with speed estimation enabled (default: none)
# List distances in order of the zone points coordinates and use the unit system defined in the ui config
distances: 10,15,12,11
# Optional: Number of consecutive frames required for object to be considered present in the zone (default: shown below).
inertia: 3
# Optional: Number of seconds that an object must loiter to be considered in the zone (default: shown below)
@@ -684,6 +739,7 @@ cameras:
# to enable PTZ controls.
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -692,6 +748,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
ignore_time_mismatch: False
@@ -755,6 +813,14 @@ cameras:
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional
ui:
@@ -783,6 +849,9 @@ ui:
# https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
# possible values are shown above (default: not set)
strftime_fmt: "%Y/%m/%d %H:%M"
# Optional: Set the unit system to either "imperial" or "metric" (default: metric)
# Used in the UI and in MQTT topics
unit_system: metric
# Optional: Telemetry configuration
telemetry:
@@ -796,11 +865,13 @@ telemetry:
- lo
# Optional: Configure system stats
stats:
# Optional: Enable AMD GPU stats (default: shown below)
amd_gpu_stats: True
# Optional: Enable Intel GPU stats (default: shown below)
intel_gpu_stats: True
# Optional: Treat GPU as SR-IOV to fix GPU stats (default: shown below)
sriov: False
# Optional: Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
# NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
network_bandwidth: False
# Optional: Enable the latest version outbound check (default: shown below)

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.2) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#configuration) for more advanced configurations and features.
:::note
@@ -132,9 +132,31 @@ cameras:
- detect
```
## Handling Complex Passwords
go2rtc expects URL-encoded passwords in the config; [urlencoder.org](https://urlencoder.org) can be used for this purpose.
For example:
```yaml
go2rtc:
streams:
my_camera: rtsp://username:$@foo%@192.168.1.100
```
becomes
```yaml
go2rtc:
streams:
my_camera: rtsp://username:$%40foo%25@192.168.1.100
```
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
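You can also encode the password locally instead of using a website, for example with Python's standard library (a sketch; note that it also percent-encodes `$`, which works just as well):
```
python3 -c 'from urllib.parse import quote; print(quote("$@foo%", safe=""))'
```
This prints `%24%40foo%25`, which can then be pasted into the go2rtc stream URL.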
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`

View File

@@ -1,11 +1,11 @@
---
id: semantic_search
title: Semantic Search
---
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate uses models from [Jina AI](https://huggingface.co/jinaai) to create and save embeddings to Frigate's database. All of this runs locally.
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Settings page before it can be used. Semantic Search is a global configuration setting.
```yaml
semantic_search:
@@ -29,38 +29,86 @@ semantic_search:
:::tip
The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration or by toggling the switch on the Search Settings page in the UI and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to turn the UI's switch off or set the config back to `False` before restarting Frigate again.
If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
:::
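For example, a one-off reindex can be triggered with a config like the following (a sketch; remember to set `reindex` back to `False` before the next restart):
```yaml
semantic_search:
  enabled: True
  reindex: True
```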
### Jina AI CLIP (version 1)
The [V1 model from Jina](https://huggingface.co/jinaai/jina-clip-v1) has a vision model which is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted versions of the Jina models are available and can be selected by setting the `model_size` config option as `small` or `large`:
```yaml
semantic_search:
enabled: True
model: "jinav1"
model_size: small
```
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the Jina model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
### Jina AI CLIP (version 2)
Frigate also supports the [V2 model from Jina](https://huggingface.co/jinaai/jina-clip-v2), which introduces multilingual support (89 languages). In contrast, the V1 model only supports English.
V2 offers only a 3% performance improvement over V1 in both text-image and text-text retrieval tasks, an upgrade that is unlikely to yield noticeable real-world benefits. Additionally, V2 has _significantly_ higher RAM and GPU requirements, leading to increased inference time and memory usage. If you plan to use V2, ensure your system has ample RAM and a discrete GPU. CPU inference (with the `small` model) using V2 is not recommended.
To use the V2 model, update the `model` parameter in your config:
```yaml
semantic_search:
enabled: True
model: "jinav2"
model_size: large
```
For most users, especially native English speakers, the V1 model remains the recommended choice.
:::note
Switching between V1 and V2 requires reindexing your embeddings. To do this, set `reindex: True` in your Semantic Search configuration and restart Frigate. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
:::
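As a minimal sketch, a config that switches to V2 and triggers the required reindex might look like this (set `reindex` back to `False` once indexing completes):

```yaml
semantic_search:
  enabled: True
  model: "jinav2"
  model_size: large
  # Required when changing models so that existing embeddings are rebuilt.
  reindex: True
```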
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
```yaml
semantic_search:
enabled: True
model_size: large
```
:::info
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
**NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
- **AMD**
- ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
:::
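For illustration only, a Docker Compose service for an Nvidia GPU might reference the `-tensorrt` image variant; the exact image tag below is an assumption and depends on your Frigate release and hardware:

```yaml
services:
  frigate:
    # Hypothetical tag: substitute the variant that matches your GPU
    # (-tensorrt for Nvidia, -rocm for AMD, the default image for Intel/OpenVINO).
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
```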
## Usage and Best Practices
1. Semantic Search is used in conjunction with the other filters available on the Explore page. Use a combination of traditional filtering and Semantic Search for the best results.
2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding is a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
