Compare commits

294 Commits

dependabot[bot]
b652ce6bb3 Bump eslint-plugin-react-refresh from 0.4.8 to 0.4.23 in /web
Bumps [eslint-plugin-react-refresh](https://github.com/ArnaudBarre/eslint-plugin-react-refresh) from 0.4.8 to 0.4.23.
- [Release notes](https://github.com/ArnaudBarre/eslint-plugin-react-refresh/releases)
- [Changelog](https://github.com/ArnaudBarre/eslint-plugin-react-refresh/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ArnaudBarre/eslint-plugin-react-refresh/compare/v0.4.8...v0.4.23)

---
updated-dependencies:
- dependency-name: eslint-plugin-react-refresh
  dependency-version: 0.4.23
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-02 11:02:33 +00:00
Nicolas Mowen
41e5c12e5b Don't use rknn if device is CPU (#20312) 2025-10-01 19:14:04 -05:00
Josh Hawkins
8307fe31aa Add ability to paste in image dropzone (#20310)
Primarily used in the face library; users can now press Ctrl/Meta-V to paste images from the clipboard into an image entry field
2025-10-01 12:49:26 -05:00
Hosted Weblate
1f061a8e73 Translated using Weblate (Norwegian Bokmål)
Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-01 09:27:45 -06:00
Hosted Weblate
55d6383234 Translated using Weblate (Korean)
Currently translated at 2.5% (3 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 2.1% (10 of 462 strings)

Translated using Weblate (Korean)

Currently translated at 6.2% (3 of 48 strings)

Translated using Weblate (Korean)

Currently translated at 3.3% (4 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 6.5% (4 of 61 strings)

Translated using Weblate (Korean)

Currently translated at 30.0% (3 of 10 strings)

Translated using Weblate (Korean)

Currently translated at 8.6% (4 of 46 strings)

Translated using Weblate (Korean)

Currently translated at 15.3% (4 of 26 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Korean)

Currently translated at 4.8% (4 of 83 strings)

Translated using Weblate (Korean)

Currently translated at 48.0% (12 of 25 strings)

Translated using Weblate (Korean)

Currently translated at 7.6% (4 of 52 strings)

Translated using Weblate (Korean)

Currently translated at 7.8% (15 of 192 strings)

Translated using Weblate (Korean)

Currently translated at 3.0% (13 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: jjyn0215 <jjyn0215@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ko/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-01 09:27:45 -06:00
Hosted Weblate
caa187e4ed Translated using Weblate (Swedish)
Currently translated at 100.0% (84 of 84 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-live
2025-10-01 09:27:45 -06:00
Hosted Weblate
4331ed0d7b Translated using Weblate (French)
Currently translated at 100.0% (84 of 84 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fr/
Translation: Frigate NVR/views-live
2025-10-01 09:27:45 -06:00
Hosted Weblate
08309793d4 Translated using Weblate (Spanish)
Currently translated at 98.8% (83 of 84 strings)

Translated using Weblate (Spanish)

Currently translated at 93.9% (434 of 462 strings)

Translated using Weblate (Spanish)

Currently translated at 93.0% (430 of 462 strings)

Translated using Weblate (Spanish)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Spanish)

Currently translated at 86.7% (401 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Reydel Leon Machado <contact@reydelleon.me>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-10-01 09:27:45 -06:00
Hosted Weblate
c7a4e6bcc4 Translated using Weblate (Dutch)
Currently translated at 100.0% (84 of 84 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translation: Frigate NVR/views-live
2025-10-01 09:27:45 -06:00
Hosted Weblate
c94446a472 Translated using Weblate (Polish)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Polish)

Currently translated at 99.1% (117 of 118 strings)

Translated using Weblate (Polish)

Currently translated at 73.8% (341 of 462 strings)

Translated using Weblate (Polish)

Currently translated at 98.7% (82 of 83 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Polish)

Currently translated at 95.0% (116 of 122 strings)

Co-authored-by: Bartlomiej Puls <bartlomiej.puls@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/pl/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-01 09:27:45 -06:00
Hosted Weblate
17b6128314 Translated using Weblate (Hungarian)
Currently translated at 92.4% (427 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Netesfiu <r4verino@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translation: Frigate NVR/views-settings
2025-10-01 09:27:45 -06:00
Hosted Weblate
117a878533 Translated using Weblate (Croatian)
Currently translated at 0.8% (1 of 118 strings)

Translated using Weblate (Croatian)

Currently translated at 0.2% (1 of 462 strings)

Translated using Weblate (Croatian)

Currently translated at 2.0% (1 of 48 strings)

Translated using Weblate (Croatian)

Currently translated at 33.3% (2 of 6 strings)

Translated using Weblate (Croatian)

Currently translated at 1.2% (1 of 83 strings)

Translated using Weblate (Croatian)

Currently translated at 1.6% (1 of 61 strings)

Translated using Weblate (Croatian)

Currently translated at 22.2% (2 of 9 strings)

Translated using Weblate (Croatian)

Currently translated at 0.8% (1 of 122 strings)

Translated using Weblate (Croatian)

Currently translated at 3.8% (1 of 26 strings)

Translated using Weblate (Croatian)

Currently translated at 10.0% (1 of 10 strings)

Translated using Weblate (Croatian)

Currently translated at 0.8% (1 of 118 strings)

Translated using Weblate (Croatian)

Currently translated at 4.0% (1 of 25 strings)

Translated using Weblate (Croatian)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Croatian)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Croatian)

Currently translated at 2.7% (2 of 72 strings)

Translated using Weblate (Croatian)

Currently translated at 1.9% (1 of 52 strings)

Translated using Weblate (Croatian)

Currently translated at 47.8% (22 of 46 strings)

Translated using Weblate (Croatian)

Currently translated at 11.1% (1 of 9 strings)

Translated using Weblate (Croatian)

Currently translated at 0.5% (1 of 192 strings)

Translated using Weblate (Croatian)

Currently translated at 0.2% (1 of 427 strings)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Added translation using Weblate (Croatian)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: igor jukic <drj.cro@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-01 09:27:45 -06:00
Hosted Weblate
ff5ebcf94d Translated using Weblate (Czech)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Vitek <vit@vakula.cz>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translation: Frigate NVR/views-settings
2025-10-01 09:27:45 -06:00
Hosted Weblate
24c519f032 Translated using Weblate (Japanese)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Japanese)

Currently translated at 22.9% (14 of 61 strings)

Translated using Weblate (Japanese)

Currently translated at 79.1% (152 of 192 strings)

Translated using Weblate (Japanese)

Currently translated at 23.0% (6 of 26 strings)

Translated using Weblate (Japanese)

Currently translated at 22.8% (27 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Japanese)

Currently translated at 21.3% (13 of 61 strings)

Translated using Weblate (Japanese)

Currently translated at 44.4% (4 of 9 strings)

Translated using Weblate (Japanese)

Currently translated at 4.0% (5 of 122 strings)

Translated using Weblate (Japanese)

Currently translated at 19.2% (5 of 26 strings)

Translated using Weblate (Japanese)

Currently translated at 16.0% (4 of 25 strings)

Translated using Weblate (Japanese)

Currently translated at 8.3% (6 of 72 strings)

Translated using Weblate (Japanese)

Currently translated at 7.6% (4 of 52 strings)

Translated using Weblate (Japanese)

Currently translated at 10.8% (5 of 46 strings)

Translated using Weblate (Japanese)

Currently translated at 44.4% (4 of 9 strings)

Translated using Weblate (Japanese)

Currently translated at 4.2% (5 of 118 strings)

Translated using Weblate (Japanese)

Currently translated at 3.1% (6 of 192 strings)

Translated using Weblate (Japanese)

Currently translated at 83.3% (5 of 6 strings)

Translated using Weblate (Japanese)

Currently translated at 1.1% (5 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: virmaior <akomasinski@gmail.com>
Co-authored-by: yhi264 <yhiraki@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ja/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-01 09:27:45 -06:00
Hosted Weblate
90fbb77ee0 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (84 of 84 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/uk/
Translation: Frigate NVR/views-live
2025-10-01 09:27:45 -06:00
Hosted Weblate
9f1d8b0e31 Translated using Weblate (Bulgarian)
Currently translated at 44.7% (86 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Христо Христов <mr.hristo.hristov@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/bg/
Translation: Frigate NVR/common
2025-10-01 09:27:45 -06:00
Hosted Weblate
875d20b195 Translated using Weblate (Danish)
Currently translated at 26.9% (7 of 26 strings)

Translated using Weblate (Danish)

Currently translated at 13.5% (16 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 18.9% (81 of 427 strings)

Translated using Weblate (Danish)

Currently translated at 23.0% (6 of 26 strings)

Translated using Weblate (Danish)

Currently translated at 12.7% (15 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 12.0% (3 of 25 strings)

Translated using Weblate (Danish)

Currently translated at 17.0% (73 of 427 strings)

Translated using Weblate (Danish)

Currently translated at 5.0% (6 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 0.8% (4 of 462 strings)

Translated using Weblate (Danish)

Currently translated at 14.5% (7 of 48 strings)

Translated using Weblate (Danish)

Currently translated at 83.3% (5 of 6 strings)

Translated using Weblate (Danish)

Currently translated at 7.1% (6 of 84 strings)

Translated using Weblate (Danish)

Currently translated at 6.5% (4 of 61 strings)

Translated using Weblate (Danish)

Currently translated at 55.5% (5 of 9 strings)

Translated using Weblate (Danish)

Currently translated at 5.7% (7 of 122 strings)

Translated using Weblate (Danish)

Currently translated at 19.2% (5 of 26 strings)

Translated using Weblate (Danish)

Currently translated at 40.0% (4 of 10 strings)

Translated using Weblate (Danish)

Currently translated at 7.6% (9 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 9.7% (7 of 72 strings)

Translated using Weblate (Danish)

Currently translated at 5.7% (3 of 52 strings)

Translated using Weblate (Danish)

Currently translated at 15.2% (7 of 46 strings)

Translated using Weblate (Danish)

Currently translated at 77.7% (7 of 9 strings)

Translated using Weblate (Danish)

Currently translated at 3.9% (17 of 427 strings)

Translated using Weblate (Danish)

Currently translated at 1.6% (2 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 0.4% (2 of 462 strings)

Translated using Weblate (Danish)

Currently translated at 4.1% (2 of 48 strings)

Translated using Weblate (Danish)

Currently translated at 50.0% (3 of 6 strings)

Translated using Weblate (Danish)

Currently translated at 2.4% (2 of 83 strings)

Translated using Weblate (Danish)

Currently translated at 4.9% (3 of 61 strings)

Translated using Weblate (Danish)

Currently translated at 7.6% (2 of 26 strings)

Translated using Weblate (Danish)

Currently translated at 10.0% (1 of 10 strings)

Translated using Weblate (Danish)

Currently translated at 1.6% (2 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 8.0% (2 of 25 strings)

Translated using Weblate (Danish)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Danish)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Danish)

Currently translated at 1.3% (1 of 72 strings)

Translated using Weblate (Danish)

Currently translated at 3.8% (2 of 52 strings)

Translated using Weblate (Danish)

Currently translated at 1.8% (8 of 427 strings)

Co-authored-by: Alexander <ava5270@gmail.com>
Co-authored-by: Emil Friis Osmann <Emilfriisosmann@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/da/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-10-01 09:27:45 -06:00
Hosted Weblate
48056ac15c Translated using Weblate (Portuguese (Brazil))
Currently translated at 100.0% (84 of 84 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt_BR/
Translation: Frigate NVR/views-live
2025-10-01 09:27:45 -06:00
Nicolas Mowen
993459152b fix stationary runtime error (#20309) 2025-10-01 09:17:30 -06:00
Josh Hawkins
8430fbc705 Add request_time and upstream_response_time to nginx log (#20307) 2025-10-01 09:51:51 -05:00
Nicolas Mowen
f7c4ff12f7 Add script to generate english config translation file from config (#20301) 2025-10-01 07:39:43 -05:00
Nicolas Mowen
8f0be18422 Improve stationary classification (#20303)
* Improve stationary classification

* Cleanup for mypy
2025-10-01 07:39:11 -05:00
Nicolas Mowen
28e3aa39f0 Customizable GenAI Review prompt (#20296)
* Add customizable prompt

* Update docs
2025-09-30 18:07:16 -05:00
Josh Hawkins
16c88fa8ac Camera group url fixes (#20295)
* Fix group URL param where a camera group was not always loaded

We need to use the loading state from the usePersistence hook because values are loaded from IndexedDB asynchronously

* ensure group icon changes when using URL param

* clean up
2025-09-30 16:53:48 -06:00
Josh Hawkins
1b6c246a44 Add shadcn sidebar component (#20292) 2025-09-30 15:02:35 -06:00
Nicolas Mowen
e8b2828ca0 Use key to correctly reload live view when camera changes directly (#20291) 2025-09-30 14:51:47 -06:00
Nicolas Mowen
923412ec1c Improve Review Summary Prompt (#20289)
* Improve prompt to have better discernment and logic based on detected objects

* Be more specific about the time of day

* Add reinforcers for the LLM to be accurate and not invent a narrative
2025-09-30 06:52:38 -06:00
Josh Hawkins
8b85cd816e Rename conflicting bash variables (#20276)
In Bash, you cannot redeclare a readonly variable as local, even within a function scope
2025-09-29 19:51:53 -06:00
Nicolas Mowen
bebe99d9b8 Implement automatic go2rtc homekit config (#20275)
* Implement automatic go2rtc homekit config

* Update docs
2025-09-29 18:48:20 -05:00
Nicolas Mowen
a08fda62f8 Implement debug live view as part of live (#20270)
* Cleanup components

* integrate debug view

* Refactor menu handling

* Cleanup

* cleanup

* Improve ptz placement for debug view

* Cleanup

* Cleanup mobile

* Always show options

* Add info for stream picking being disabled

* Add to mobile too

* Fix ns

* Cleanup
2025-09-29 18:45:55 -05:00
Nicolas Mowen
9fdce80729 Handle case when no classification model exists (#20257) 2025-09-28 16:03:44 -05:00
Josh Hawkins
12f8c3feac Watchdog enhancements (#20237)
* refactor get_video_properties and use json output from ffprobe

* add zmq topic

* publish valid segment data in recording maintainer

* check for valid video data

- restart separate record ffmpeg process if no video data has been received in 120s
- refactor datetime import

* listen to correct topic in embeddings maintainer

* refactor to move get_latest_segment_datetime logic to recordings maintainer

* debug logging

* cleanup
2025-09-28 10:52:14 -06:00
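
As a rough illustration of the stale-video check described in the watchdog commit above (hypothetical names and structure, not Frigate's actual code), restarting the record ffmpeg process hinges on comparing the timestamp of the latest valid segment against a 120-second threshold:

```python
import datetime
import logging

logger = logging.getLogger(__name__)

# Tolerated gap without new video data before restarting the record process
# (matches the 120s mentioned in the commit; the constant name is assumed).
STALE_SEGMENT_THRESHOLD = datetime.timedelta(seconds=120)


def should_restart_record_process(latest_segment_time, now=None):
    """Return True if no valid segment has arrived within the threshold."""
    if latest_segment_time is None:
        # Never received any valid segment data; treat as stale.
        return True

    now = now or datetime.datetime.now()
    age = now - latest_segment_time
    if age > STALE_SEGMENT_THRESHOLD:
        logger.debug("Latest segment is %s old; restarting record ffmpeg", age)
        return True
    return False
```
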
Josh Hawkins
b6552987b0 Fixes (#20254)
* fix api async/await functions

* fix synaptics detector throwing an error when unused

* clean up
2025-09-28 07:08:52 -06:00
Nicolas Mowen
c207009d8a Refactor AMD GPU support (#20239)
* Update ROCm to 7.0.1

* Update ONNXRuntime

* Add back in

* Get basic detection working

* Use env vars

* Handle complex migraphx models

* Enable model caching

* Remove unused

* Add tip to docs
2025-09-27 14:43:11 -05:00
Nicolas Mowen
e6cbc93703 More stationary cleanup (#20229)
* Always return false for active objects

* Cleanup
2025-09-26 07:23:29 -06:00
GaryHuang-ASUS
b8b07ee6e1 [Init] Initial commit for Synaptics SL1680 NPU (#19680)
* [Init] Initial commit for Synaptics SL1680 NPU

* add a rough detector which is tested with a yolov8 tflite model.

* [Feat] Add dependencies installation in docker build

- Add runtime library and wheels installation in main/Dockerfile
- Add model.synap (default model, transferred from mobilenet_224full80) in docker/synap1680

* [Update] Remove dependencies installation from main Dockerfile

- remove deps installation from Dockerfile
- add dependencies installation and split wheels, deps stage in synap1680 Dockerfile

* Refactor synap detector to more closely match other implementations

* [Update] Add model path configuration check

* [Update] update ModelType to ssd

* [Update] Remove unused script

- install_deps.sh is already executed in the deps download stage
- Dockerfile.toolchain is only used in testing to extract runtime libraries from the Synaptics toolchain

* [Update] update Synaptics SL1680 setup description

* [Update] remove install_synap1680

- The deps download and installation already exist in synap1680

* [Fix] update document content

* [Update] Update detector from synap1680 to synaptics

This update makes the Synaptics SL-series NPU detector more general.

- Fix bug where the detector did not import the `os` module
- Update detector type `synap1680` to `synaptics`
- Update document description `SL1680` to `Synaptics` only
- Update docker build content `synap1680` to `synaptics`

* [Fix] Update configuration document

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* [Update] Update document content and detector default layout

- Update object_detectors document
- Update detector's default layout
- Update default model name

* [Update] Update object detector document content

* [Fix] Fix InputTensorEnum not defined error

- import InputTensorEnum from detector_config

* [Update] Update detector script coding format

* [Update] Update synaptics detector coding format

* [Update] Add synaptics ci workflow

* [Update] update synaptics runtime libs download path

- Forked the Synaptics astra sdk repo and put the runtime lib package on it
- The Frigate team can update this download path later

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-09-26 07:07:12 -05:00
Nicolas Mowen
082867447b Stationary bug fixes (#20225)
* Correctly only enable for car

* Fix limiting stationary objects history
2025-09-26 07:03:59 -05:00
Nicolas Mowen
8b293449f9 Improve review summary (#20216)
* Add debug logging for review summaries report

* Improve debug logging

* Improve review report prompt

* Cleanup

* Add date to report
2025-09-25 21:05:22 -05:00
Nicolas Mowen
2f209b2cf4 Implement stationary car classifier to improve parked car management (#20206)
* Implement stationary car classifier to base stationary state on visual changes and not just bounding box stability

* Cleanup

* Fix mypy

* Move to new file and add config to disable if needed

* Cleanup

* Undo
2025-09-25 10:18:45 -05:00
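
To illustrate the idea of basing stationary state on visual change inside the bounding box rather than on box stability alone (a simplified sketch; the actual classifier in this PR may use a different metric), one could compare successive crops of the tracked car:

```python
import numpy as np


def crops_visually_static(prev_crop: np.ndarray, curr_crop: np.ndarray,
                          threshold: float = 10.0) -> bool:
    """Crude check that two grayscale crops of the same box barely changed.

    A low mean absolute pixel difference suggests the object is visually
    stationary even if its bounding box jitters slightly. This helper is
    illustrative only, not the classifier added in this PR.
    """
    if prev_crop.shape != curr_crop.shape:
        # Crops of different sizes can't be compared directly here.
        return False

    diff = np.abs(prev_crop.astype(np.int16) - curr_crop.astype(np.int16))
    return float(diff.mean()) < threshold
```
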
Nicolas Mowen
9a22404015 Use devcontainer build to run tests (#20212)
* Use devcontainer build to run tests

* Make ignored github changes more restrictive
2025-09-25 09:59:18 -05:00
Nicolas Mowen
2c4a043dbb Update go2rtc to 1.9.10 (#20202) 2025-09-25 06:15:04 -05:00
Nicolas Mowen
b23355da53 Update apple silicon docs (#20204) 2025-09-25 06:12:35 -05:00
Nicolas Mowen
90db2d57b3 Update Ollama docs (#20201) 2025-09-24 08:17:20 -05:00
Blake Blackshear
652fdc6a38 Merge remote-tracking branch 'origin/master' into dev 2025-09-24 06:57:50 -05:00
Nicolas Mowen
7e2f5a3017 Improve 640x640 model detection of small objects (#20190)
* Allow larger models to have smaller regions

* remove unnecessary hailo resize

* Update benchmark

* Fix table

* Update nvidia specs
2025-09-23 15:49:54 -05:00
Nicolas Mowen
2f99a17e64 Add docs for classification models (#20188) 2025-09-23 08:29:16 -06:00
Nicolas Mowen
2bc92cce81 Update model explanation for genai (#20186) 2025-09-23 07:30:42 -06:00
Josh Hawkins
7f7eefef7f Live view improvements (#20177) 2025-09-22 21:21:51 -05:00
Josh Hawkins
bdb7a18602 UI tweaks (#20168)
* use mobilepage with create trigger dialog

* use mobilepage with create user dialog

* use mobilepage with create role dialog
2025-09-22 08:36:36 -06:00
Nicolas Mowen
318457113b Add ability to transfer model via ZMQ Detector (#20161)
* Add ability to transfer model via ZMQ

* Cleanup
2025-09-22 07:02:55 -05:00
Nicolas Mowen
e4d5f1f94e Tune OV for latency (#20160) 2025-09-21 18:52:04 -05:00
Nicolas Mowen
0e61d3f153 YOLOv9 LPR model is not compatible (#20159) 2025-09-21 18:51:45 -05:00
Josh Hawkins
cd519ed1ad Update triggers docs to explain why text-to-image triggers are unsupported (#20146)
Many users won't understand why CLIP models can't be magic object detectors or classifiers
2025-09-19 19:29:07 -06:00
Nicolas Mowen
2a860bd85e Update Nvidia model stats to highlight which models support CUDA Graphs (#20141) 2025-09-19 11:16:30 -05:00
Andrew Marshall
a7bbca5014 Read secrets dir from CREDENTIALS_DIRECTORY (#19327)
This supports systemd credentials, see https://systemd.io/CREDENTIALS/.
Default to `/run/secrets` (the Docker Secrets dir) for backwards
compatibility.
2025-09-19 06:34:23 -06:00
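
The lookup order this commit describes can be sketched in a few lines of Python (helper names here are illustrative, not Frigate's actual functions):

```python
import os
from pathlib import Path


def get_secrets_dir() -> Path:
    """Prefer the systemd-provided credentials directory, if any.

    CREDENTIALS_DIRECTORY is set by systemd when credentials are passed to
    the service (https://systemd.io/CREDENTIALS/); otherwise fall back to
    /run/secrets, the Docker Secrets directory, for backwards compatibility.
    """
    return Path(os.environ.get("CREDENTIALS_DIRECTORY", "/run/secrets"))


def read_secret(name: str) -> str | None:
    """Read a single secret file from the resolved directory, if it exists."""
    secret_file = get_secrets_dir() / name
    return secret_file.read_text().strip() if secret_file.is_file() else None
```
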
iesad
dc96940eb9 Pull count of detection events by label into prometheus metrics (#20119)
* pull count of detection events by label into prometheus metrics

* format changes with ruff

* remove unneeded f-string

* fix imports format

---------

Co-authored-by: iesad <iesad>
2025-09-19 06:27:20 -06:00
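
Exposing a per-label detection event count with prometheus_client generally looks like the sketch below; the metric and function names are assumptions for illustration, not necessarily the ones added in this PR:

```python
from prometheus_client import Counter

# Counter keyed by object label (e.g. "person", "car"); names are illustrative.
detection_events = Counter(
    "detection_events_total",
    "Count of detection events by object label",
    ["label"],
)


def record_detection_event(label: str) -> None:
    """Increment the per-label detection event counter."""
    detection_events.labels(label=label).inc()
```

For example, record_detection_event("person") increments detection_events_total{label="person"}.
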
dependabot[bot]
1408abb050 Bump docker/login-action from 3.3.0 to 3.5.0 (#19387)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.5.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...184bdaa072)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-19 05:45:05 -06:00
Nicolas Mowen
9c5e560668 Deps updates (#20133) 2025-09-19 06:02:02 -05:00
Nicolas Mowen
b8fd0a2b31 Fix CUDA graph config (#20135) 2025-09-19 05:59:42 -05:00
Hosted Weblate
61d3b370b1 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 99.5% (460 of 462 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
8295773146 Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 97.4% (115 of 118 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 15.6% (67 of 427 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 11.7% (50 of 427 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: Ban <jim515jim@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hant/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
8412b33468 Translated using Weblate (Slovenian)
Currently translated at 25.5% (109 of 427 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: Dejan Rožič <drozic1989@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sl/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
508a2ea4ba Translated using Weblate (Slovak)
Currently translated at 74.5% (88 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 20.3% (87 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 72.1% (88 of 122 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Slovak)

Currently translated at 77.9% (92 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Slovak)

Currently translated at 47.3% (91 of 192 strings)

Translated using Weblate (Slovak)

Currently translated at 24.5% (105 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 19.2% (82 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 22.4% (96 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
e7048feff1 Translated using Weblate (Korean)
Currently translated at 1.6% (2 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 2.1% (9 of 427 strings)

Translated using Weblate (Korean)

Currently translated at 4.1% (2 of 48 strings)

Translated using Weblate (Korean)

Currently translated at 3.2% (2 of 61 strings)

Translated using Weblate (Korean)

Currently translated at 0.8% (1 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 1.8% (8 of 427 strings)

Translated using Weblate (Korean)

Currently translated at 2.0% (1 of 48 strings)

Translated using Weblate (Korean)

Currently translated at 50.0% (3 of 6 strings)

Translated using Weblate (Korean)

Currently translated at 1.6% (2 of 122 strings)

Translated using Weblate (Korean)

Currently translated at 1.6% (2 of 118 strings)

Translated using Weblate (Korean)

Currently translated at 1.6% (1 of 61 strings)

Translated using Weblate (Korean)

Currently translated at 20.0% (2 of 10 strings)

Translated using Weblate (Korean)

Currently translated at 1.3% (1 of 72 strings)

Translated using Weblate (Korean)

Currently translated at 4.3% (2 of 46 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Korean)

Currently translated at 7.6% (2 of 26 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Korean)

Currently translated at 22.2% (2 of 9 strings)

Translated using Weblate (Korean)

Currently translated at 2.4% (2 of 83 strings)

Translated using Weblate (Korean)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Korean)

Currently translated at 40.0% (10 of 25 strings)

Translated using Weblate (Korean)

Currently translated at 3.8% (2 of 52 strings)

Translated using Weblate (Korean)

Currently translated at 6.7% (13 of 192 strings)

Translated using Weblate (Korean)

Currently translated at 2.5% (11 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: sungtak yoon <y.mami0812@gmail.com>
Co-authored-by: ysteen <littleyu@ysteen.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ko/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
2c0416348f Translated using Weblate (Persian)
Currently translated at 2.5% (3 of 118 strings)

Translated using Weblate (Persian)

Currently translated at 0.7% (3 of 427 strings)

Translated using Weblate (Persian)

Currently translated at 6.2% (3 of 48 strings)

Translated using Weblate (Persian)

Currently translated at 50.0% (3 of 6 strings)

Translated using Weblate (Persian)

Currently translated at 2.4% (2 of 83 strings)

Translated using Weblate (Persian)

Currently translated at 3.2% (2 of 61 strings)

Translated using Weblate (Persian)

Currently translated at 33.3% (3 of 9 strings)

Translated using Weblate (Persian)

Currently translated at 0.8% (1 of 122 strings)

Translated using Weblate (Persian)

Currently translated at 11.5% (3 of 26 strings)

Translated using Weblate (Persian)

Currently translated at 20.0% (2 of 10 strings)

Translated using Weblate (Persian)

Currently translated at 12.0% (3 of 25 strings)

Translated using Weblate (Persian)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Persian)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Persian)

Currently translated at 2.7% (2 of 72 strings)

Translated using Weblate (Persian)

Currently translated at 5.7% (3 of 52 strings)

Translated using Weblate (Persian)

Currently translated at 6.5% (3 of 46 strings)

Translated using Weblate (Persian)

Currently translated at 33.3% (3 of 9 strings)

Translated using Weblate (Persian)

Currently translated at 1.5% (3 of 192 strings)

Co-authored-by: Arvin Loripour <arvinlp@users.noreply.hosted.weblate.org>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fa/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fa/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
94134970df Translated using Weblate (Swedish)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Swedish)

Currently translated at 96.5% (446 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 86.1% (398 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 93.2% (398 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 82.6% (382 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 89.9% (384 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 81.8% (378 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 88.9% (380 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 78.5% (363 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 85.0% (363 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 75.9% (351 of 462 strings)

Translated using Weblate (Swedish)

Currently translated at 79.3% (339 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 74.4% (318 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 76.3% (326 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 65.8% (281 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 69.0% (295 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 47.5% (203 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 47.5% (203 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 51.2% (219 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: Martin Lindhe <martin.j.lindhe@gmail.com>
Co-authored-by: Oscar Haraldsson <oscar.haraldsson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
80dbb9e38e Translated using Weblate (French)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (French)

Currently translated at 95.2% (440 of 462 strings)

Translated using Weblate (French)

Currently translated at 100.0% (427 of 427 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
fc6446b9e6 Translated using Weblate (Spanish)
Currently translated at 99.1% (117 of 118 strings)

Translated using Weblate (Spanish)

Currently translated at 98.7% (82 of 83 strings)

Translated using Weblate (Spanish)

Currently translated at 97.5% (119 of 122 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Spanish)

Currently translated at 82.2% (380 of 462 strings)

Translated using Weblate (Spanish)

Currently translated at 99.4% (191 of 192 strings)

Translated using Weblate (Spanish)

Currently translated at 98.7% (82 of 83 strings)

Translated using Weblate (Spanish)

Currently translated at 97.4% (115 of 118 strings)

Translated using Weblate (Spanish)

Currently translated at 96.3% (80 of 83 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Spanish)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Spanish)

Currently translated at 97.2% (70 of 72 strings)

Co-authored-by: Carlos Sanchez <carlosesh@outlook.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: glafe <tazefr@gmail.com>
Co-authored-by: Álex Díaz <adiaz@okode.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/es/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
b1ec055aaa Translated using Weblate (Dutch)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
5a26d7b029 Translated using Weblate (Arabic)
Currently translated at 37.2% (44 of 118 strings)

Translated using Weblate (Arabic)

Currently translated at 45.6% (21 of 46 strings)

Translated using Weblate (Arabic)

Currently translated at 26.2% (16 of 61 strings)

Translated using Weblate (Arabic)

Currently translated at 8.3% (16 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Mohammed Alrasheed <mohdforever007@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ar/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
6f9b38819a Translated using Weblate (Italian)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
d679c42160 Translated using Weblate (Polish)
Currently translated at 94.4% (68 of 72 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (10 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Millatarra <pytlik.michal@wp.pl>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/pl/
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
2025-09-18 16:04:38 -06:00
Hosted Weblate
ad6da29ea2 Translated using Weblate (Hebrew)
Currently translated at 74.0% (342 of 462 strings)

Translated using Weblate (Hebrew)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Ronen Atsil <atsil55@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/he/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/he/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
a9852d62f4 Translated using Weblate (Hungarian)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Zsolt Fojtyik <zsozso830316@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hu/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
fa2e583fd1 Translated using Weblate (Vietnamese)
Currently translated at 100.0% (427 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jack Fish <fishappy0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/vi/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
55cc4b55fe Translated using Weblate (Portuguese)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Portuguese)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Portuguese)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (427 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Manuela Silva <mmsrs@sky.com>
Co-authored-by: ssantos <ssantos@web.de>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-input/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/pt/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-input
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
db765f2a60 Translated using Weblate (Czech)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Czech)

Currently translated at 96.7% (413 of 427 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Czech)

Currently translated at 96.3% (185 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Martin S <martin@szkandera.eu>
Co-authored-by: leroyloren <lama18@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/cs/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
e06127947d Translated using Weblate (Catalan)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
96d84ab74f Translated using Weblate (Japanese)
Currently translated at 1.6% (1 of 61 strings)

Translated using Weblate (Japanese)

Currently translated at 1.6% (2 of 122 strings)

Translated using Weblate (Japanese)

Currently translated at 90.0% (9 of 10 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: kenjiro kono <kkyenji@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ja/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ja/
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
2025-09-18 16:04:38 -06:00
Hosted Weblate
483639df25 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Ukrainian)

Currently translated at 94.3% (436 of 462 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Alex Taran <oleksii.taran@gmail.com>
Co-authored-by: Denys Dovhan <denysdovhan@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
83696c60fd Translated using Weblate (Romanian)
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
b24726bc80 Translated using Weblate (Danish)
Currently translated at 2.0% (1 of 48 strings)

Translated using Weblate (Danish)

Currently translated at 22.2% (2 of 9 strings)

Translated using Weblate (Danish)

Currently translated at 4.3% (2 of 46 strings)

Translated using Weblate (Danish)

Currently translated at 0.2% (1 of 462 strings)

Translated using Weblate (Danish)

Currently translated at 11.1% (1 of 9 strings)

Translated using Weblate (Danish)

Currently translated at 0.8% (1 of 118 strings)

Translated using Weblate (Danish)

Currently translated at 50.0% (1 of 2 strings)

Translated using Weblate (Danish)

Currently translated at 2.1% (1 of 46 strings)

Translated using Weblate (Danish)

Currently translated at 1.6% (7 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Isak <isakhhb@protonmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/da/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
892560a123 Translated using Weblate (German)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (German)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (German)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (German)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (German)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (German)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (German)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (German)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (German)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (German)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (German)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jurals <wblat@juralus.de>
Co-authored-by: Phil Jope <phil@jope.cloud>
Co-authored-by: ahgln <ahgln@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/de/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Hosted Weblate
6d9ef1b439 Translated using Weblate (Portuguese (Brazil))
Currently translated at 100.0% (462 of 462 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
b1ec1d20af Added translation using Weblate (Tamil)
Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Added translation using Weblate (Tamil)

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Added translation using Weblate (Tamil)

Update translation files

Updated by "Squash Git commits" add-on in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Languages add-on <noreply-addon-languages@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/
Translation: Frigate NVR/common
2025-09-18 16:04:38 -06:00
Hosted Weblate
9e7ed4daa8 Translated using Weblate (Lithuanian)
Currently translated at 100.0% (462 of 462 strings)

Translated using Weblate (Lithuanian)

Currently translated at 95.6% (442 of 462 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 21.3% (13 of 61 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Lithuanian)

Currently translated at 13.9% (17 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 15.2% (11 of 72 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Lithuanian)

Currently translated at 44.9% (192 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 45.9% (196 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 13.1% (16 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 13.8% (10 of 72 strings)

Translated using Weblate (Lithuanian)

Currently translated at 25.0% (13 of 52 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Co-authored-by: Ramūnas Dronga <github@ramuno.lt>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-09-18 16:04:38 -06:00
Hosted Weblate
c79d9d8d86 Translated using Weblate (Turkish)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (83 of 83 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: pcislocked <git@pcislocked.net>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/tr/
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-system
2025-09-18 16:04:38 -06:00
Josh Hawkins
251b029d6e LPR improvements (#20129)
* continue to use paddleocr v3 text detection model for large

v5 was not finding text on multi-line plates at all in testing

* implement clustering of plate variants per event

should reduce OCR inconsistencies and improve plate recognition stability by using string similarity to cluster similar variants (10 per event id) and choosing the highest confidence representative as the final plate

* pass camera

* prune number of variants based on detect fps

* implement replacement rules for cleaning up and normalizing plates

* docs

* docs
2025-09-18 15:12:17 -06:00
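A minimal sketch of the per-event plate-variant clustering described in #20129 above, assuming hypothetical names (PlateVariant, MAX_VARIANTS, SIMILARITY_THRESHOLD) and difflib as a stand-in for the string-similarity measure; this is illustrative, not Frigate's actual implementation.

```python
# Hypothetical sketch of plate-variant clustering per event id (#20129).
from dataclasses import dataclass
from difflib import SequenceMatcher

MAX_VARIANTS = 10           # variants kept per event id (pruned based on detect fps upstream)
SIMILARITY_THRESHOLD = 0.8  # ratio above which two OCR reads are treated as the same plate

@dataclass
class PlateVariant:
    text: str
    confidence: float

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two OCR reads."""
    return SequenceMatcher(None, a, b).ratio()

def add_variant(variants: list[PlateVariant], new: PlateVariant) -> list[PlateVariant]:
    """Keep a bounded list of recent variants for one event id."""
    variants.append(new)
    return variants[-MAX_VARIANTS:]

def best_plate(variants: list[PlateVariant]) -> str:
    """Cluster similar reads and return the highest-confidence representative
    from the largest cluster, reducing OCR inconsistencies across frames."""
    clusters: list[list[PlateVariant]] = []
    for v in variants:
        for cluster in clusters:
            if similarity(v.text, cluster[0].text) >= SIMILARITY_THRESHOLD:
                cluster.append(v)
                break
        else:
            clusters.append([v])
    largest = max(clusters, key=len)
    return max(largest, key=lambda v: v.confidence).text
```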
Nicolas Mowen
68f806bb61 Cleanup onnx detector (#20128)
* Cleanup onnx detector

* Fix

* Fix classification cropping

* Deprioritize openvino

* Send model type

* Use model type to decide if model can use full optimization

* Cleanup

* Cleanup
2025-09-18 15:12:09 -06:00
Nicolas Mowen
c05e260ae9 Update ROCm to not hang when running on complex RNN models (#20118)
* Update ROCm to not hang when running on complex RNN models

* Formatting
2025-09-17 19:26:32 -05:00
Nicolas Mowen
1efff67e32 Fix ov for LPR (#20117)
* Check complex model

* Reset state for complex models

* Send arg

* Fix

* Cleanup
2025-09-17 15:21:57 -06:00
Nicolas Mowen
26178444f3 Fixes (#20102)
* Catch bird classification resize error

* Improve openvino width detection

* Use auto by default

* Set type
2025-09-16 16:06:51 -06:00
Josh Hawkins
975c8485f9 Catch exception when regex in LPR format field is invalid (#20099) 2025-09-16 07:41:25 -05:00
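A minimal illustration of the fix described in #20099: matching a user-supplied LPR format regex defensively so an invalid pattern is logged rather than raising. plate_matches_format is a hypothetical helper, not the real endpoint code.

```python
import logging
import re

logger = logging.getLogger(__name__)

def plate_matches_format(plate: str, format_pattern: str) -> bool:
    """Return True if the plate matches the configured format regex.
    A broken pattern is logged and treated as a non-match instead of crashing."""
    try:
        return re.fullmatch(format_pattern, plate) is not None
    except re.error as e:
        logger.warning("Invalid LPR format regex %r: %s", format_pattern, e)
        return False
```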
Nicolas Mowen
5f34a18905 Dynamically adjust to configured attribute map for lpr (#20079) 2025-09-15 08:49:07 -05:00
Tim Wesley
6cd1d1f205 memryx: fix model download bug when using multiple detectors (#20030)
* Add locking for model download files

* ruff format

---------

Co-authored-by: Abinila Siva <abinila.siva@memryx.com>
2025-09-15 08:48:55 -05:00
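A rough sketch of the model-download locking idea from the memryx fix (#20030), assuming a POSIX advisory lock via fcntl; download_lock and ensure_model are hypothetical names.

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def download_lock(model_path: str):
    """Hold an exclusive advisory lock next to the model file so only one
    detector process performs the download; the others block until it finishes."""
    lock_path = model_path + ".lock"
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)

def ensure_model(model_path: str, download_fn) -> str:
    """Download the model once even when several detectors start concurrently."""
    with download_lock(model_path):
        if not os.path.exists(model_path):
            download_fn(model_path)
    return model_path
```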
Nicolas Mowen
03fe054078 OpenVINO Hardware Improvements (#20071)
* Use OpenVINO directly to detect if devices are available

* Cleanup

* Update OpenVINO

* Cleanup

* Don't try to use OpenVINO when CPU is set as device

* Catch case where input tensor can't be pre-defined

* Cleanup
2025-09-15 08:35:49 -05:00
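A hedged sketch of querying OpenVINO directly for available devices and skipping OpenVINO entirely when CPU is the configured device, as #20071 describes; only ov.Core().available_devices is a real API call, the surrounding helper is illustrative.

```python
import openvino as ov

def usable_openvino_device(configured_device: str) -> str | None:
    """Return the configured device if OpenVINO reports it as available,
    or None when the user explicitly configured CPU (per #20071, OpenVINO
    is not used in that case)."""
    if configured_device.upper() == "CPU":
        return None
    available = ov.Core().available_devices  # e.g. ["CPU", "GPU", "NPU"]
    return configured_device if configured_device in available else None
```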
Josh Hawkins
2d4a0cc584 false_positive endpoint needs to await send_to_plus call (#20069) 2025-09-14 11:31:07 -05:00
Nicolas Mowen
ff0430964c Correctly calculate input data type for OV (#20066)
* Correctly calculate input data type for OV

* Formatting
2025-09-14 07:15:41 -06:00
arthur simas
c74e86dff2 fix(web): handle undefined screen.orientation (#20064) 2025-09-14 06:51:39 -06:00
Nicolas Mowen
81d7c47129 Optimize OpenVINO and ONNX Model Runners (#20063)
* Use re-usable inference request to reduce CPU usage

* Share tensor

* Don't count performance

* Create openvino runner class

* Break apart onnx runner

* Add specific note about inability to use CUDA graphs for some models

* Adjust rknn to use RKNNRunner

* Use optimized runner

* Add support for non-complex models for CudaExecutionProvider

* Use core mask for rknn

* Correctly handle cuda input

* Cleanup

* Sort imports
2025-09-14 06:22:22 -06:00
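A small sketch of the re-usable inference request idea from #20063 using the OpenVINO Python API; OpenVinoRunner is a hypothetical class name, and the real runner handles more model types and tensor layouts.

```python
import numpy as np
import openvino as ov

class OpenVinoRunner:
    """Keeps one compiled model and one InferRequest alive and reuses them for
    every inference, instead of recreating request objects per frame."""

    def __init__(self, model_path: str, device: str = "GPU"):
        core = ov.Core()
        self.compiled = core.compile_model(model_path, device)
        # a single long-lived request avoids per-call allocation overhead
        self.request = self.compiled.create_infer_request()

    def infer(self, tensor: np.ndarray) -> np.ndarray:
        self.request.infer([tensor])
        return self.request.get_output_tensor().data
```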
Josh Hawkins
41ed013cc4 Check cameras param and continue to split for query (#20048) 2025-09-12 09:58:47 -06:00
Nicolas Mowen
751678c845 Fix cuda graph fallback (#20039) 2025-09-12 06:41:26 -05:00
Josh Hawkins
ed1e3a7c9a Enhance user roles to limit camera access (#20024)
* update config for roles and add validator

* ensure admin and viewer are never overridden

* add class method to user to retrieve all allowed cameras

* enforce config roles in auth api endpoints

* add camera access api dependency functions

* protect review endpoints

* protect preview endpoints

* rename param name for better fastapi injection matching

* remove unneeded

* protect export endpoints

* protect event endpoints

* protect media endpoints

* update auth hook for allowed cameras

* update default app view

* ensure anonymous user always returns all cameras

* limit cameras in explore

* cameras is already a list

* limit cameras in review/history

* limit cameras in live view

* limit cameras in camera groups

* only show face library and classification in sidebar for admin

* remove check in delete reviews

since admin role is required, no need to check camera access. fixes failing test

* pass request with camera access for tests

* more async

* camera access tests

* fix proxy auth tests

* allowed cameras for review tests

* combine event tests and refactor for camera access

* fix post validation for roles

* don't limit roles in create user dialog

* fix triggers endpoints

no need to run require camera access dep since the required role is admin

* fix type

* create and edit role dialogs

* delete role dialog

* fix role change dialog

* update settings view for roles

* i18n changes

* minor spacing tweaks

* docs

* use badges and camera name label component

* clarify docs

* display all cameras badge for admin and viewer

* i18n fix

* use validator to prevent reserved and empty roles from being assigned

* split users and roles into separate tabs in settings

* tweak docs

* clarify docs

* change icon

* don't memoize roles

always recalculate on component render
2025-09-12 05:19:29 -06:00
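A hedged sketch of the camera-access dependency pattern described in #20024, using FastAPI's dependency injection; get_allowed_cameras, require_camera_access, and request.app.frigate_config are illustrative names, not Frigate's actual identifiers.

```python
from fastapi import Depends, HTTPException, Path, Request

def get_allowed_cameras(request: Request) -> list[str]:
    """Resolve the caller's allowed cameras from their role.
    Hypothetical stand-in for the auth hook in #20024; admins (and anonymous
    users when auth is disabled) see every camera."""
    user = getattr(request.state, "user", None)
    if user is None or user.role == "admin":
        return list(request.app.frigate_config.cameras.keys())
    return user.allowed_cameras

async def require_camera_access(
    camera_name: str = Path(...),
    allowed: list[str] = Depends(get_allowed_cameras),
) -> str:
    """FastAPI dependency: reject requests for cameras outside the caller's role."""
    if camera_name not in allowed:
        raise HTTPException(status_code=403, detail="Access to this camera is forbidden")
    return camera_name

# usage on a protected endpoint (illustrative route):
# @router.get("/api/{camera_name}/latest.jpg")
# async def latest_frame(camera_name: str = Depends(require_camera_access)): ...
```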
Nicolas Mowen
ba650af6f2 Correctly cast to enum when loading plus config (#20031) 2025-09-11 16:39:34 -06:00
Nicolas Mowen
fad28a764c Use CUDA graphs for object detection on Nvidia GPUs (#20027)
* Use CUDA graphs to improve efficiency of object detection

* Cleanup comments and typing
2025-09-11 10:20:25 -06:00
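A sketch of the general mechanism behind #20027: ONNX Runtime's enable_cuda_graph provider option plus fixed-address IOBinding so the captured graph can be replayed each frame. The model path and tensor shapes are placeholders, not Frigate's actual values.

```python
import numpy as np
import onnxruntime as ort

# enable_cuda_graph requires that input/output buffers keep the same device
# addresses across runs, hence the pre-allocated OrtValues and IOBinding below.
providers = [("CUDAExecutionProvider", {"enable_cuda_graph": "1"}), "CPUExecutionProvider"]
session = ort.InferenceSession("detector.onnx", providers=providers)  # placeholder path

input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# fixed GPU-resident buffers reused for every frame (placeholder shapes)
input_gpu = ort.OrtValue.ortvalue_from_numpy(
    np.zeros((1, 3, 320, 320), dtype=np.float32), "cuda", 0
)
output_gpu = ort.OrtValue.ortvalue_from_numpy(
    np.zeros((1, 84, 2100), dtype=np.float32), "cuda", 0
)

binding = session.io_binding()
binding.bind_ortvalue_input(input_name, input_gpu)
binding.bind_ortvalue_output(output_name, output_gpu)

def detect(frame: np.ndarray) -> np.ndarray:
    """Copy the frame into the graph's fixed input buffer and replay the capture."""
    input_gpu.update_inplace(frame.astype(np.float32))
    session.run_with_iobinding(binding)
    return output_gpu.numpy()
```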
Nicolas Mowen
15729e0f19 Provide model type in header for ZMQ detector (#20000)
* Provide model type in header

* Formatting
2025-09-09 17:53:36 -05:00
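An illustrative sketch of carrying the model type in a header frame of a ZMQ multipart message, in the spirit of #20000; the endpoint, field names, and dtypes are placeholders rather than Frigate's wire format.

```python
import json
import numpy as np
import zmq

# --- sender (detection client) --------------------------------------------
ctx = zmq.Context.instance()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://127.0.0.1:5555")  # placeholder endpoint

def request_detection(tensor: np.ndarray, model_type: str) -> np.ndarray:
    """Send the input tensor with a small JSON header carrying the model type."""
    header = json.dumps(
        {"model_type": model_type, "shape": tensor.shape, "dtype": str(tensor.dtype)}
    )
    sock.send_multipart([header.encode(), tensor.tobytes()])
    reply = sock.recv()
    return np.frombuffer(reply, dtype=np.float32)

# --- receiver (detector process) -------------------------------------------
# header, payload = server_sock.recv_multipart()
# meta = json.loads(header)            # meta["model_type"] selects post-processing
# tensor = np.frombuffer(payload, dtype=meta["dtype"]).reshape(meta["shape"])
```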
Craig
fd6e7afea9 Fix markdown table in camera_specific.md (#19919) 2025-09-05 06:00:51 -05:00
Josh Hawkins
b781f06f9c Constrain width of export preview dialog (#19908)
Matches the other export preview dialog in Review
2025-09-04 08:08:07 -05:00
Blake Blackshear
a8b7e5dd24 Merge remote-tracking branch 'origin/master' into dev 2025-09-04 06:33:22 -05:00
Nicolas Mowen
6505ae5fb5 Optimize cuda execution to run in single stream (#19896) 2025-09-03 08:53:30 -05:00
Josh Hawkins
bd255362d6 Ensure proxy group claim uses the configured separator character (#19869)
* Ensure group claim uses the configured separator character

* refactor to helper function

* tests

* clean up
2025-09-01 15:30:30 -06:00
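A minimal sketch of the separator-aware group-claim helper described in #19869; parse_group_claim and the separator default are hypothetical names, not the actual proxy auth code.

```python
def parse_group_claim(header_value: str, separator: str = ",") -> list[str]:
    """Split a proxy-provided group claim using the configured separator,
    trimming whitespace and dropping empty entries."""
    return [g.strip() for g in header_value.split(separator) if g.strip()]

# e.g. with the proxy header map's group separator configured as "|":
# parse_group_claim("admins|viewers", separator="|") -> ["admins", "viewers"]
```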
Josh Hawkins
55160f9235 fix more merge conflicts 2025-09-01 13:51:29 -05:00
Sergejs Romancevičs
8e7f0cfd51 Translated using Weblate (Russian)
Currently translated at 100.0% (118 of 118 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ru/
2025-09-01 13:42:09 -05:00
Artyom Rybakov
9fd07bde48 Translated using Weblate (Russian)
Currently translated at 100.0% (118 of 118 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ru/
2025-09-01 13:42:09 -05:00
Sergejs Romancevičs
efe707510e Translated using Weblate (Russian)
Currently translated at 100.0% (427 of 427 strings)

Translation: Frigate NVR/views-settings
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
2025-09-01 13:42:09 -05:00
Artyom Rybakov
4cca09410f Translated using Weblate (Russian)
Currently translated at 100.0% (427 of 427 strings)

Translation: Frigate NVR/views-settings
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
2025-09-01 13:42:09 -05:00
Artyom Rybakov
cc270e7d1e Translated using Weblate (Russian)
Currently translated at 100.0% (72 of 72 strings)

Translation: Frigate NVR/components-filter
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ru/
2025-09-01 13:42:09 -05:00
Artyom Rybakov
bba22bd456 Translated using Weblate (Russian)
Currently translated at 100.0% (46 of 46 strings)

Translation: Frigate NVR/components-camera
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ru/
2025-09-01 13:42:09 -05:00
Hosted Weblate
ecfe4448c8 Translated using Weblate (Swedish)
Currently translated at 46.1% (197 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 49.8% (213 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 43.3% (185 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 47.3% (202 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 42.3% (181 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 42.3% (181 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Swedish)

Currently translated at 44.9% (192 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 44.9% (192 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Swedish)

Currently translated at 33.6% (143 of 425 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Swedish)

Currently translated at 40.0% (171 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 99.1% (115 of 116 strings)

Translated using Weblate (Swedish)

Currently translated at 27.3% (115 of 420 strings)

Translated using Weblate (Swedish)

Currently translated at 94.2% (115 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 95.8% (184 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: revellion <revellion@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:42:09 -05:00
Hosted Weblate
4d0982e9b0 Translated using Weblate (French)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (French)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (French)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (French)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (425 of 425 strings)

Translated using Weblate (French)

Currently translated at 100.0% (116 of 116 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fr/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:42:09 -05:00
Hosted Weblate
674abaee1f Translated using Weblate (Catalan)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ca/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:42:09 -05:00
Hosted Weblate
a9c4aff0b1 Translated using Weblate (Lithuanian)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Lithuanian)

Currently translated at 26.4% (113 of 427 strings)

Translated using Weblate (Lithuanian)

Currently translated at 15.6% (13 of 83 strings)

Translated using Weblate (Lithuanian)

Currently translated at 19.6% (12 of 61 strings)

Translated using Weblate (Lithuanian)

Currently translated at 10.6% (13 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Lithuanian)

Currently translated at 20.4% (86 of 420 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Lithuanian)

Currently translated at 18.0% (11 of 61 strings)

Translated using Weblate (Lithuanian)

Currently translated at 6.5% (8 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Lithuanian)

Currently translated at 21.1% (11 of 52 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/lt/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:42:09 -05:00
Hosted Weblate
f7ff452992 Translated using Weblate (Norwegian Bokmål)
Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 92.0% (393 of 427 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 81.6% (343 of 420 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nb_NO/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
e35a904df4 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (425 of 425 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (52 of 52 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/zh_Hans/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
364bff447b Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 14.5% (62 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: 林柏臣 <daniel.pclin@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hant/
Translation: Frigate NVR/audio
2025-09-01 13:17:43 -05:00
Hosted Weblate
d5096a6330 Translated using Weblate (Slovak)
Currently translated at 70.6% (82 of 116 strings)

Translated using Weblate (Slovak)

Currently translated at 19.2% (82 of 425 strings)

Translated using Weblate (Slovak)

Currently translated at 68.0% (83 of 122 strings)

Translated using Weblate (Slovak)

Currently translated at 98.7% (82 of 83 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Slovak)

Currently translated at 72.0% (85 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Slovak)

Currently translated at 43.7% (84 of 192 strings)

Translated using Weblate (Slovak)

Currently translated at 22.4% (96 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
1a8b2dcbe9 Translated using Weblate (Korean)
Currently translated at 1.4% (6 of 427 strings)

Translated using Weblate (Korean)

Currently translated at 32.0% (8 of 25 strings)

Translated using Weblate (Korean)

Currently translated at 0.9% (4 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: 엄두섭 <eomds1@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ko/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ko/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-settings
2025-09-01 13:17:43 -05:00
Hosted Weblate
b777c2cf73 Translated using Weblate (Finnish)
Currently translated at 59.9% (256 of 427 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Finnish)

Currently translated at 78.6% (48 of 61 strings)

Translated using Weblate (Finnish)

Currently translated at 66.3% (81 of 122 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Finnish)

Currently translated at 67.3% (35 of 52 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Finnish)

Currently translated at 38.1% (163 of 427 strings)

Translated using Weblate (Finnish)

Currently translated at 35.3% (41 of 116 strings)

Translated using Weblate (Finnish)

Currently translated at 54.3% (232 of 427 strings)

Translated using Weblate (Finnish)

Currently translated at 96.3% (80 of 83 strings)

Translated using Weblate (Finnish)

Currently translated at 58.1% (71 of 122 strings)

Translated using Weblate (Finnish)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Finnish)

Currently translated at 70.8% (51 of 72 strings)

Translated using Weblate (Finnish)

Currently translated at 61.9% (119 of 192 strings)

Translated using Weblate (Finnish)

Currently translated at 14.2% (61 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: banter <banter@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fi/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
2214bf611f Translated using Weblate (Swedish)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Swedish)

Currently translated at 43.3% (185 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 47.3% (202 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 42.3% (181 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 42.3% (181 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Swedish)

Currently translated at 44.9% (192 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 44.9% (192 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Swedish)

Currently translated at 33.6% (143 of 425 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Swedish)

Currently translated at 40.0% (171 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 99.1% (115 of 116 strings)

Translated using Weblate (Swedish)

Currently translated at 27.3% (115 of 420 strings)

Translated using Weblate (Swedish)

Currently translated at 94.2% (115 of 122 strings)

Translated using Weblate (Swedish)

Currently translated at 95.8% (184 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: revellion <revellion@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
40af34ee11 Translated using Weblate (French)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (French)

Currently translated at 100.0% (61 of 61 strings)

Translated using Weblate (French)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (French)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (French)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (French)

Currently translated at 100.0% (425 of 425 strings)

Translated using Weblate (French)

Currently translated at 100.0% (116 of 116 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/fr/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
6e4fc8611d Translated using Weblate (Dutch)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (420 of 420 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nl/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
732b1a4a6e Translated using Weblate (Arabic)
Currently translated at 4.9% (21 of 425 strings)

Translated using Weblate (Arabic)

Currently translated at 37.0% (43 of 116 strings)

Translated using Weblate (Arabic)

Currently translated at 3.3% (14 of 420 strings)

Translated using Weblate (Arabic)

Currently translated at 31.2% (15 of 48 strings)

Translated using Weblate (Arabic)

Currently translated at 26.9% (14 of 52 strings)

Translated using Weblate (Arabic)

Currently translated at 18.5% (13 of 70 strings)

Translated using Weblate (Arabic)

Currently translated at 57.6% (15 of 26 strings)

Translated using Weblate (Arabic)

Currently translated at 18.0% (15 of 83 strings)

Translated using Weblate (Arabic)

Currently translated at 13.1% (16 of 122 strings)

Translated using Weblate (Arabic)

Currently translated at 60.0% (15 of 25 strings)

Translated using Weblate (Arabic)

Currently translated at 33.3% (15 of 45 strings)

Translated using Weblate (Arabic)

Currently translated at 16.9% (20 of 118 strings)

Translated using Weblate (Arabic)

Currently translated at 24.5% (15 of 61 strings)

Translated using Weblate (Arabic)

Currently translated at 7.8% (15 of 192 strings)

Translated using Weblate (Arabic)

Currently translated at 15.2% (65 of 427 strings)

Translated using Weblate (Arabic)

Currently translated at 11.2% (13 of 116 strings)

Translated using Weblate (Arabic)

Currently translated at 2.6% (11 of 420 strings)

Translated using Weblate (Arabic)

Currently translated at 27.0% (13 of 48 strings)

Translated using Weblate (Arabic)

Currently translated at 21.1% (11 of 52 strings)

Translated using Weblate (Arabic)

Currently translated at 17.1% (12 of 70 strings)

Translated using Weblate (Arabic)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Arabic)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Arabic)

Currently translated at 53.8% (14 of 26 strings)

Translated using Weblate (Arabic)

Currently translated at 16.8% (14 of 83 strings)

Translated using Weblate (Arabic)

Currently translated at 12.2% (15 of 122 strings)

Translated using Weblate (Arabic)

Currently translated at 56.0% (14 of 25 strings)

Translated using Weblate (Arabic)

Currently translated at 28.8% (13 of 45 strings)

Translated using Weblate (Arabic)

Currently translated at 16.1% (19 of 118 strings)

Translated using Weblate (Arabic)

Currently translated at 22.9% (14 of 61 strings)

Translated using Weblate (Arabic)

Currently translated at 6.7% (13 of 192 strings)

Translated using Weblate (Arabic)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Arabic)

Currently translated at 14.9% (64 of 427 strings)

Translated using Weblate (Arabic)

Currently translated at 7.7% (9 of 116 strings)

Translated using Weblate (Arabic)

Currently translated at 1.6% (7 of 420 strings)

Translated using Weblate (Arabic)

Currently translated at 18.7% (9 of 48 strings)

Translated using Weblate (Arabic)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Arabic)

Currently translated at 15.3% (8 of 52 strings)

Translated using Weblate (Arabic)

Currently translated at 14.2% (10 of 70 strings)

Translated using Weblate (Arabic)

Currently translated at 77.7% (7 of 9 strings)

Translated using Weblate (Arabic)

Currently translated at 77.7% (7 of 9 strings)

Translated using Weblate (Arabic)

Currently translated at 42.3% (11 of 26 strings)

Translated using Weblate (Arabic)

Currently translated at 13.2% (11 of 83 strings)

Translated using Weblate (Arabic)

Currently translated at 9.8% (12 of 122 strings)

Translated using Weblate (Arabic)

Currently translated at 44.0% (11 of 25 strings)

Translated using Weblate (Arabic)

Currently translated at 22.2% (10 of 45 strings)

Translated using Weblate (Arabic)

Currently translated at 13.5% (16 of 118 strings)

Translated using Weblate (Arabic)

Currently translated at 14.7% (9 of 61 strings)

Translated using Weblate (Arabic)

Currently translated at 5.7% (11 of 192 strings)

Translated using Weblate (Arabic)

Currently translated at 80.0% (8 of 10 strings)

Translated using Weblate (Arabic)

Currently translated at 14.5% (62 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MAATECH <hmmdcool@gmail.com>
Co-authored-by: Modar Soos <modarsoos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ar/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ar/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
269ac067f5 Translated using Weblate (Italian)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Italian)

Currently translated at 99.2% (422 of 425 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (427 of 427 strings)

Co-authored-by: Andrea Stefanello <stefandre2002@hotmail.it>
Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/it/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
674c066837 Translated using Weblate (Hungarian)
Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (420 of 420 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (52 of 52 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Zsolt Fojtyik <zsozso830316@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hu/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
54afe41ed0 Translated using Weblate (Vietnamese)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Vietnamese)

Currently translated at 88.9% (380 of 427 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Tài Nguyễn <nct.thth@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/vi/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
ee626d21f8 Translated using Weblate (Czech)
Currently translated at 99.1% (116 of 117 strings)

Translated using Weblate (Czech)

Currently translated at 80.3% (343 of 427 strings)

Translated using Weblate (Czech)

Currently translated at 95.0% (116 of 122 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Czech)

Currently translated at 80.0% (342 of 427 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Czech)

Currently translated at 96.1% (25 of 26 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Czech)

Currently translated at 97.2% (70 of 72 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Czech)

Currently translated at 100.0% (427 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Co-authored-by: Martin S <martin@szkandera.eu>
Co-authored-by: dandrapela <drapela.dan@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/cs/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/cs/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
ef74605e45 Translated using Weblate (Catalan)
Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Gerard Ricart Castells <gerard.ricart@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ca/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
d4e939b663 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (117 of 117 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (425 of 425 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (26 of 26 strings)

Co-authored-by: Anatoli Skovpen <a@ask.kiev.ua>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/uk/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
1004f73493 Translated using Weblate (Romanian)
Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (425 of 425 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ro/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
e3dc53f2e6 Translated using Weblate (Russian)
Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (420 of 420 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (26 of 26 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Igor Malysh <igmalysch@yandex.ru>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ru/
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
35eb08cbb2 Translated using Weblate (Greek)
Currently translated at 18.1% (21 of 116 strings)

Translated using Weblate (Greek)

Currently translated at 5.3% (23 of 427 strings)

Translated using Weblate (Greek)

Currently translated at 43.7% (21 of 48 strings)

Translated using Weblate (Greek)

Currently translated at 33.7% (28 of 83 strings)

Translated using Weblate (Greek)

Currently translated at 39.3% (24 of 61 strings)

Translated using Weblate (Greek)

Currently translated at 18.0% (22 of 122 strings)

Translated using Weblate (Greek)

Currently translated at 76.9% (20 of 26 strings)

Translated using Weblate (Greek)

Currently translated at 40.3% (21 of 52 strings)

Translated using Weblate (Greek)

Currently translated at 29.1% (21 of 72 strings)

Translated using Weblate (Greek)

Currently translated at 14.0% (27 of 192 strings)

Translated using Weblate (Greek)

Currently translated at 18.6% (22 of 118 strings)

Translated using Weblate (Greek)

Currently translated at 14.7% (63 of 427 strings)

Translated using Weblate (Greek)

Currently translated at 16.3% (19 of 116 strings)

Translated using Weblate (Greek)

Currently translated at 5.1% (22 of 427 strings)

Translated using Weblate (Greek)

Currently translated at 39.5% (19 of 48 strings)

Translated using Weblate (Greek)

Currently translated at 100.0% (6 of 6 strings)

Translated using Weblate (Greek)

Currently translated at 24.0% (20 of 83 strings)

Translated using Weblate (Greek)

Currently translated at 36.0% (22 of 61 strings)

Translated using Weblate (Greek)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Greek)

Currently translated at 16.3% (20 of 122 strings)

Translated using Weblate (Greek)

Currently translated at 69.2% (18 of 26 strings)

Translated using Weblate (Greek)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Greek)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Greek)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Greek)

Currently translated at 36.5% (19 of 52 strings)

Translated using Weblate (Greek)

Currently translated at 43.4% (20 of 46 strings)

Translated using Weblate (Greek)

Currently translated at 27.7% (20 of 72 strings)

Translated using Weblate (Greek)

Currently translated at 13.5% (26 of 192 strings)

Translated using Weblate (Greek)

Currently translated at 17.7% (21 of 118 strings)

Translated using Weblate (Greek)

Currently translated at 14.5% (62 of 427 strings)

Co-authored-by: Alexander Lagos <kinglagos@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-recording/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/el/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-recording
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
033ec2e0e7 Translated using Weblate (Danish)
Currently translated at 4.0% (5 of 122 strings)

Translated using Weblate (Danish)

Currently translated at 55.5% (5 of 9 strings)

Translated using Weblate (Danish)

Currently translated at 1.4% (6 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Co-authored-by: Sensei Maverick <senseimaverick@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/da/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/da/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/views-explore
2025-09-01 13:17:43 -05:00
Hosted Weblate
0402efbf7c Translated using Weblate (German)
Currently translated at 98.3% (420 of 427 strings)

Translated using Weblate (German)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (German)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Viktor Stier <viktor-stier@gmx.de>
Co-authored-by: ahgln <a.hegglin@windowslive.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/de/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
88f7959d7f Translated using Weblate (Portuguese (Brazil))
Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (122 of 122 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (192 of 192 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (425 of 425 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (116 of 116 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: João Gabriel Frohlich <gabrielfrohlich14@gmail.com>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/pt_BR/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
ed86430709 Translated using Weblate (Lithuanian)
Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Lithuanian)

Currently translated at 20.4% (86 of 420 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Lithuanian)

Currently translated at 18.0% (11 of 61 strings)

Translated using Weblate (Lithuanian)

Currently translated at 6.5% (8 of 122 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Lithuanian)

Currently translated at 21.1% (11 of 52 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/lt/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Hosted Weblate
2bdfe1316c Translated using Weblate (Turkish)
Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (72 of 72 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (116 of 116 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (420 of 420 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (83 of 83 strings)

Translated using Weblate (Turkish)

Currently translated at 99.1% (121 of 122 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (52 of 52 strings)

Translated using Weblate (Turkish)

Currently translated at 100.0% (192 of 192 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: pcislocked <git@pcislocked.net>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/tr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/tr/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-09-01 13:17:43 -05:00
Grégory Marti
b86e6e484f Replace hardcoded path with constant (#19816) 2025-09-01 07:26:04 -06:00
Alone
9af7246b0b Fix typo in Apple Silicon detector (#19854) 2025-08-31 22:10:52 -05:00
Nicolas Mowen
3a1e1d0841 Improve review segmentation behavior (#19850)
* Refactor active objects to class

* Keep segment going when detection is newer than end of alert

* Cleanup logic

* Fix

* Cleanup ending

* Adjust timing

* Improve detection saving

* Don't have padding at end for in progress reviews

* Add review config for cutoff times
2025-08-31 16:36:12 -05:00
Piotr Staśkiewicz
a478c38f8a update legacy intel-compute packages (#19848) 2025-08-31 06:20:50 -06:00
Josh Hawkins
c7231648eb Add an icon and tooltip to explain detector CPU usage metric (#19825) 2025-08-28 17:15:00 -05:00
Josh Hawkins
92555eb835 Add low shm warning to bottom bar (#19824)
* Add low shm warning to bottom bar

* change relevant link
2025-08-28 14:32:05 -05:00
Janis Hutz
a2ba4e4e39 Add PTZ Camera recommendations, explanations and docs for 2-way audio (#19740)
* Add PTZ Camera recommendations, explanations and docs for 2-way audio

* Fix typos

* Change lingo as suggested

* Add issue template for camera compatibility report and ask user to fill it out in docs

* Update docs as suggested

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Remove issue template, tweak call to action

* Add suggested tweaks

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Lingo update as suggested

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Shorter link to two-way audio page

---------

Co-authored-by: Janis Hutz <info@janishutz.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-08-28 10:01:50 -06:00
Josh Hawkins
d78b6e528b Only import degirum module if using degirum detector (#19802) 2025-08-27 08:31:01 -06:00
ChirayuRai
0febc4d456 DeGirum Detector for Frigate (#19111)
* Added degirum plugin, updated documentation for degirum detector usage, updated requirements with degirum_headless

* Fixed broken link

* Made it so openvino prioritizes using GPU and NPU over CPU

* Version that detects model and can begin using @local

* Updating requirements to build dev container

* Added optimized version of degirum plugin + updated docs

* Added guard clause for empty inference response

* Updated DeGirum's docs

* Moved DeGirum section to 'Community' detectors, fixed formatting of headers to be more consistent with the rest of the page, and removed unneeded 'models' folder

* Moved DeGirum section to correct place in community models

* Update ROCm to 6.4.0 (#18264)

* Update to rocm 6.4.0

* Update URL

* Remove old env var

* Dynamic Config Updates (#18353)

* Create classes to handle publishing and subscribing config updates

* Cleanup

* Use config updater

* Update handling for enabled config

* Cleanup

* Recording config updates

* Birdseye config updates

* Handle notifications

* handle review

* Update motion

* Dynamically update masks and zones for cameras (#18359)

* Include config publisher in api

* Call update topic for passed topics

* Update zones dynamically

* Update zones internally

* Support zone and mask reset

* Handle updating objects config

* Don't put status for needing to restart Frigate

* Cleanup http tests

* Fix tests

* Initial custom classification model config support (#18362)

* Add basic config for defining a teachable machine model

* Add model type

* Add basic config for teachable machine models

* Adjust config for state and object

* Use config to process

* Correctly check for objects

* Remove debug

* Rename to not be teachable machine specific

* Cleanup

* Implement support for no recordings indicator on timeline (#18363)

* Indicate no recordings on the history timeline with gray hash marks

This commit includes a new backend API endpoint and the frontend changes needed to support this functionality

* don't show slashes for now

* Update ROCm to 6.4.1 (#18364)

* Update rocm to 6.4.1

* Quick fix

* Add ability to configure when custom classification models run (#18380)

* Add config to control when classification models are run

* Cleanup

* Add basic config editor when Frigate can't startup (#18383)

* Start Frigate in safe mode when config does not validate

* Add safe mode page that is just the config editor

* Adjust Frigate config editor when in safe mode

* Cleanup

* Improve log message

* Fix incorrectly running lpr (#18390)

* Audio transcription support (#18398)

* install new packages for transcription support

* add config options

* audio maintainer modifications to support transcription

* pass main config to audio process

* embeddings support

* api and transcription post processor

* embeddings maintainer support for post processor

* live audio transcription with sherpa and faster-whisper

* update dispatcher with live transcription topic

* frontend websocket

* frontend live transcription

* frontend changes for speech events

* i18n changes

* docs

* mqtt docs

* fix linter

* use float16 and small model on gpu for real-time

* fix return value and use requestor to embed description instead of passing embeddings

* run real-time transcription in its own thread

* tweaks

* publish live transcriptions on their own topic instead of tracked_object_update

* config validator and docs

* clarify docs

* Implement API to train classification models (#18475)

* Intel updates (#18493)

* Update openvino and onnxruntime

* Install icd and level-zero-gpu deps from intel directly

* Install

* Add dep

* Fix package install

* Tiered recordings (#18492)

* Implement tiered recording

* Add migration for record config

* Update docs

* Update reference docs

* Fix preview query

* Fix incorrect accesses

* Fix

* Fix

* Fix

* Fix

* Upgrade PaddleOCR models to v4 (rec) and v5 (det) (#18505)

The PP_OCRv5 text detection models have greatly improved over v3. The v5 recognition model makes improvements to challenging handwriting and uncommon characters, which are not necessary for LPR, so using v4 seemed like a better choice to continue to keep inference time as low as possible. Also included is the full dictionary for Chinese character support.

* Audio transcription tweaks (#18540)

* use model runner

* unload whisper model when live transcription is complete

* Classification Model UI (#18571)

* Setup basic training structure

* Build out route

* Handle model configs

* Add image fetch APIs

* Implement model training screen with dataset selection

* Implement viewing of training images

* Adjust directories

* Implement viewing of images

* Add support for deleting images

* Implement full deletion

* Implement classification model training

* Improve naming

* More renaming

* Improve layout

* Reduce logging

* Cleanup

* Live classification model training (#18583)

* Implement model training via ZMQ and add model states to represent training

* Get model updates working

* Improve toasts and model state

* Clean up logging

* Add back in

* Classification Model Metrics (#18595)

* Add speed and rate metrics for custom classification models

* Use metrics for classification models

* Use keys

* Cast to list

* Add Mesa Teflon as a TFLite detector (#18310)

* Refactor common functions for tflite detector implementations

* Add detector using mesa teflon delegate

Non-EdgeTPU TFLite can use the standard .tflite format

* Add mesa-teflon-delegate from bookworm-backports to arm64 images

* feat: enable using GenAI for cameras with GenAI disabled from the API (#18616)

* fix: Initialize GenAI client if GenAI is enabled globally (#18623)

* Make Birdseye clickable (#18628)

* keep track of layout changes and publish on change

* websocket hook

* clickable overlay div to navigate to full camera view

* Refactor TensorRT (#18643)

* Combine base and arm trt detectors

* Remove unused deps for amd64 build

* Add missing packages and cleanup ldconfig

* Expand packages for tensorflow model training

* Cleanup

* Refactor training to not reserve memory

* Dynamic Management of Cameras (#18671)

* Add base class for global config updates

* Add or remove camera states

* Move camera process management to separate thread

* Move camera management fully to separate class

* Cleanup

* Stop camera processes when stop command is sent

* Start processes dynamically when needed

* Adjust

* Leave extra room in tracked object queue for two cameras

* Dynamically set extra config pieces

* Add some TODOs

* Fix type check

* Simplify config updates

* Improve typing

* Correctly handle indexed entries

* Cleanup

* Create out SHM

* Use ZMQ for signaling object detection is completed

* Get camera correctly created

* Cleanup for updating the cameras config

* Cleanup

* Don't enable audio if no cameras have audio transcription

* Use exact string so similar camera names don't interfere

* Add ability to update config via json body to config/set endpoint

Additionally, update the config in a single call rather than in multiple calls for each updated key

* fix autotracking calibration to support new config updater function

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Use Fork-Server As Spawn Method (#18682)

* Set runtime

* Use count correctly

* Don't assume camera sizes

* Use separate zmq proxy for object detection

* Correct order

* Use forkserver

* Only store PID instead of entire process reference

* Cleanup

* Catch correct errors

* Fix typing

* Remove before_run from process util

The before_run never actually ran because:

You're right to suspect an issue with before_run not being called and a potential deadlock. The way you've implemented the run_wrapper using __getattribute__ for the run method of BaseProcess is a common pitfall in Python's multiprocessing, especially when combined with how multiprocessing.Process works internally.

Here's a breakdown of why before_run isn't being called and why you might be experiencing a deadlock:

The Problem: __getattribute__ and Process Serialization
When you create a multiprocessing.Process object and call start(), the multiprocessing module needs to serialize the process object (or at least enough of it to re-create the process in the new interpreter). It then pickles this serialized object and sends it to the newly spawned process.

The issue with your __getattribute__ implementation for run is that:

run is retrieved during serialization: When multiprocessing tries to pickle your Process object to send to the new process, it will likely access the run attribute. This triggers your __getattribute__ wrapper, which then tries to bind run_wrapper to self.
run_wrapper is bound to the parent process's self: The run_wrapper closure, when created in the parent process, captures the self (the Process instance) from the parent's memory space.
Deserialization creates a new object: In the child process, a new Process object is created by deserializing the pickled data. However, the run_wrapper method that was pickled still holds a reference to the self from the parent process. This is a subtle but critical distinction.
The child's run is not your wrapped run: When the child process starts, it internally calls its own run method. Because of the serialization and deserialization process, the run method that's ultimately executed in the child process is the original multiprocessing.Process.run or the Process.run if you had directly overridden it. Your __getattribute__ magic, which wraps run, isn't correctly applied to the Process object within the child's context.
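
To make the failure mode above concrete, here is a minimal sketch (hypothetical class and hook names, not Frigate's actual process utility) of the pattern the refactor moves toward: the hook lives inside an overridden run(), so it is looked up on the class in the child process instead of being captured as a closure bound to the parent's object.

```python
# Sketch only: override run() instead of wrapping it via __getattribute__,
# so the pre-run hook is guaranteed to execute in the child process.
import multiprocessing as mp


class HookedProcess(mp.Process):
    def before_run(self) -> None:
        # Hypothetical hook; real code would set up logging, signals, etc.
        print(f"setup in child pid={mp.current_process().pid}")

    def run(self) -> None:
        # run() is resolved on the class inside the child, so the hook
        # runs there, not in the parent that pickled this object.
        self.before_run()
        super().run()


if __name__ == "__main__":
    p = HookedProcess(target=print, args=("hello from the child",))
    p.start()
    p.join()
```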

* Cleanup

* Logging bugfix (#18465)

* use mp Manager to handle logging queues

A Python bug (https://github.com/python/cpython/issues/91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is to use the multiprocessing Manager, which better manages mp queues and causes the logging to work correctly.
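
A minimal sketch of the workaround described above, assuming a standard QueueHandler/QueueListener logging setup (the function and variable names are illustrative, not Frigate's actual helpers):

```python
# Sketch only: route child-process log records through a Manager queue,
# which sidesteps the cpython queue bug referenced above.
import logging
import logging.handlers
import multiprocessing as mp


def worker(log_queue) -> None:
    # In the child: ship records to the shared queue instead of a stream.
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(log_queue))
    root.setLevel(logging.INFO)
    logging.getLogger(__name__).info("hello from the worker process")


if __name__ == "__main__":
    manager = mp.Manager()
    log_queue = manager.Queue()

    # In the parent: drain the queue and emit records normally.
    listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
    listener.start()

    p = mp.Process(target=worker, args=(log_queue,))
    p.start()
    p.join()
    listener.stop()
```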

* consolidate

* fix typing

* Fix typing

* Use global log queue

* Move to using process for logging

* Convert camera tracking to process

* Add more processes

* Finalize process

* Cleanup

* Cleanup typing

* Formatting

* Remove daemon

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Add basic camera settings to UI for testing (#18690)

* add basic camera add/edit pane to the UI for testing

* only init model runner if transcription is enabled globally

* fix role checkboxes

* Ensure logging config is propagated to forked processes (#18704)

* Move log level initialization to log

* Use logger config

* Formatting

* Fix config order

* Set process names

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Fix go2rtc init (#18708)

* Cleanup process handling

* Adjust process name

* Reduce tf initialization

* Don't use staticmethod

* Don't fail on unicode debug for config updates

* Catch unpickling error

* Fix birdseye crash when dynamically adding a camera (#18821)

* Catch invalid character index in lpr CTC decoder (#18825)

* Classification model cover images (#18843)

* Move to separate component

* Add cover images for classification models

* Fix process name

* Handle SIGINT with forkserver (#18860)

* Pass stopevent from main start

* Share stop event across processes

* preload modules

* remove explicit os._exit call

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Don't try to close or join mp manager queues (#18866)

Multiprocessing Manager queues don't have a close() or join_thread() method, and the Manager will clean them up appropriately after we empty them. This prevents an infinite loop when an AttributeError exception fires for Manager AutoProxy queue objects.
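
A rough illustration of that shutdown behavior (drain_queue is a hypothetical helper, not Frigate's code): simply empty the Manager queue and let the Manager clean up, without calling the mp.Queue-specific methods that AutoProxy objects lack.

```python
# Sketch only: drain a Manager (AutoProxy) queue on shutdown. These proxies
# expose get_nowait() but not close()/join_thread().
import multiprocessing as mp
import queue


def drain_queue(q) -> None:
    """Empty a queue-like object without mp.Queue-specific cleanup calls."""
    while True:
        try:
            q.get_nowait()
        except queue.Empty:
            break
    # Intentionally no q.close() / q.join_thread(): those attributes do not
    # exist on a Manager queue proxy and would raise AttributeError.


if __name__ == "__main__":
    q = mp.Manager().Queue()
    q.put("leftover log record")
    drain_queue(q)
```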

* Improve logging (#18867)

* Ignore numpy get limits warning

* Add function wrapper to redirect stdout and stderr to logpipe

* Save stderr too

* Add more to catch

* run logpipe

* Use other logging redirect class

* Use other logging redirect class

* add decorator for redirecting c/c++ level output to logger

* fix typing

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Add ONVIF focus support (#18883)

* backend

* frontend and i18n

* 0.17 tweaks (#18892)

* Set version

* Cleanup more logs

* Don't log matplotlib

* Improve object classification (#18908)

* Ui improvements

* Improve image cropping and model saving

* Improve naming

* Add logs for training

* Improve model labeling

* Don't set sub label for none object classification

* Cleanup

* Remove TFLite init logs

* Improve classification UI (#18910)

* Move threshold to base model config

* Improve score handling

* Add back button

* Classification improvements (#19020)

* Move classification training to full process

* Sort class images

* Semantic Search Triggers (#18969)

* semantic trigger test

* database and model

* config

* embeddings maintainer and trigger post-processor

* api to create, edit, delete triggers

* frontend and i18n keys

* use thumbnail and description for trigger types

* image picker tweaks

* initial sync

* thumbnail file management

* clean up logs and use saved thumbnail on frontend

* publish mqtt messages

* webpush changes to enable trigger notifications

* add enabled switch

* add triggers from explore

* renaming and deletion fixes

* fix typing

* UI updates and add last triggering event time and link

* log exception instead of return in endpoint

* highlight entry in UI when triggered

* save and delete thumbnails directly

* remove alert action for now and add descriptions

* tweaks

* clean up

* fix types

* docs

* docs tweaks

* docs

* reuse enum

* Optionally show tracked object paths in debug view (#19025)

* Dynamically enable/disable GenAI (#19139)

* config

* dispatcher and mqtt

* docs

* use config updater

* add switch to frontend

* Classification train updates (#19173)

* Improve model train button

* Add filters for classification

* Cleanup

* Don't run classification on false positives

* Cleanup filter

* Fix icon color

* Object attribute classification (#19205)

* Add enum for type of classification for objects

* Update recognized license plate topic to be used as attribute updater

* Update attribute for attribute type object classification

* Cleanup

* Require setting process priority for FrigateProcess (#19207)

* Add bookworm-backports to the rocm images and upgrade mesa/vaapi to support RDNA4 GPUs (#19312)

* Improve the tablet layout (#19320)

* Improve the tablet layout

* Update imports sort

* Fix more imports

* Implement start for review item description processor (#19352)

* Add review item data transmission

* Publish review updates

* Add review item subscriber

* Basic implementation for testing review processor

* Formatting

* Cleanup

* Improve comms typing (#18599)

* Enable mypy for comms

* Make zmq data types consistent

* Cleanup inter process typing issues

* Cleanup embeddings typing

* Cleanup config updater

* Cleanup recordings updator

* Make publisher have a generic type

* Cleanup event metadata updater

* Cleanup event metadata updater

* Cleanup detections updater

* Cleanup websocket

* Cleanup mqtt

* Cleanup webpush

* Cleanup dispatcher

* Formatting

* Remove unused

* Add return type

* Fix tests

* Fix semantic triggers config typing

* Cleanup

* Ensure alertVideos persistence is loaded before displaying thumb or preview (#19432)

The default value of true would cause previews to be loaded in the background even if the local storage value was false

* Adjust loitering behavior based on object type (#19433)

* Adjust loitering behavior based on object

* Update docs

* Grammar

* Enable mypy for DB and fix types (#19434)

* Install peewee type hints

* Models now have proper types

* Fix iterator type

* Enable debug builds with dev reqs installed

* Install as wheel

* Fix cast type

* Migrate object genai configuration (#19437)

* Move genAI object to objects section

* Adjust config propagation behavior

* Refactor genai config usage

* Automatic migration

* Always start the embeddings process

* Always init embeddings

* Config fixes

* Adjust reference config

* Adjust docs

* Formatting

* Fix

* Review Item GenAI metadata (#19442)

* Rename existing function

* Keep track of thumbnail updates

* Tinkering with genai prompt

* Adjust input format

* Create model for review description output

* testing prompt changes

* Prompt improvements and image saving

* Add config for review items genai

* Use genai review config

* Actual config usage

* Adjust debug image saving

* Fix

* Fix review creation

* Adjust prompt

* Prompt adjustment

* Run genai in thread

* Fix detections block

* Adjust prompt

* Prompt changes

* Save genai response to metadata model

* Handle metadata

* Send review update to dispatcher

* Save review metadata to DB

* Send review notification updates

* Quick fix

* Fix name

* Fix update type

* Correctly dump model

* Add card

* Add card

* Remove message

* Cleanup typing and UI

* Adjust prompt

* Formatting

* Add log

* Formatting

* Add inference speed and keep alive

* Review genai updates (#19448)

* Include extra level for normal activity

* Add dynamic toggling

* Update docs

* Add different threshold for genai

* Adjust webUI for object and review description feature

* Adjust config

* Send on startup

* Cleanup config setting

* Set config

* Fix config name

* Use preview frames for Review Descriptions (#19450)

* Use preview frames for genai

* Cleanup

* Adjust

* Add config for users to define additional concerns that GenAI should make note of in review summary (#19463)

* Don't default to openai

* Improve UI

* Allow configuring additional concerns that users may want the AI to note

* Formatting

* Add preferred language config

* Remove unused

* Added total camera fps, total processed fps, and total skipped fps to stats api (#19469)

Co-authored-by: Mark Francis <markfrancisonly@gmail.com>

* Genai review summaries (#19473)

* Generate review item summaries with requests

* Adjust logic to only send important items

* Don't mention ladder

* Adjust prompt to be more specific

* Add more relaxed nature for normal activity

* Cleanup summary

* Update ollama client

* Add more directions to analyze the frames in order

* Remove environment from prompt

* Add ability to pass additional args to Ollama (#19484)

* Call out recognized objects more specifically

* Cleanup

* Make keep_alive and options configurable

* Generalize

* Use for other providers

* Update GenAI docs for new review summaries feature (#19493)

* Remove old genai docs

* Separate existing genai docs to separate sections

* Add docs for genai features

* Update reference config

* Update link

* Move to bottom

* Improve natural language of prompt (#19515)

* Make sequence details human-readable so they are used in natural language response

* Cleanup

* Improve prompt and image selection

* Adjust

* Adjust slightly

* Format time

* Adjust frame selection logic

* Debug save response

* Ignore extra fields

* Adjust docs

* Cleanup filename sanitization

* Added degirum plugin, updated documentation for degirum detector usage, updated requirements with degirum_headless

* Fixed broken link

* Made it so openvino prioritizes using GPU and NPU over CPU

* Version that detects model and can begin using @local

* Added optimized version of degirum plugin + updated docs

* Updating requirements to build dev container

* Added guard clause for empty inference response

* Updated DeGirum's docs

* Moved DeGirum section to 'Community' detectors, fixed formatting of headers to be more consistent with the rest of the page, and removed unneeded 'models' folder

* Moved DeGirum section to correct place in community models

* Reverted changes to classification and audio

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
Co-authored-by: Jimmy <honj@alum.rpi.edu>
Co-authored-by: FL42 <46161216+fl42@users.noreply.github.com>
Co-authored-by: Steve Smith <tarkasteve@gmail.com>
Co-authored-by: markfrancisonly <12145270+markfrancisonly@users.noreply.github.com>
Co-authored-by: Mark Francis <markfrancisonly@gmail.com>
2025-08-26 16:38:34 -06:00
Josh Hawkins
398a3a7b95 Rename nickname to friendly_name (#19782)
Better aligns with convention from Home Assistant since many Frigate users are also HA users
2025-08-26 15:29:52 -05:00
GuoQing Liu
d3af748366 feat: Add camera nickname (#19567)
* refactor: Refactor camera nickname

* fix: fix cameraNameLabel visually

* chore: The Explore search function also displays the Camera's nickname in English

* chore: add mobile page camera nickname

* feat: webpush support camera nickname

* fix: fix storage camera name is null

* chore: fix review detail and context menu camera nickname

* chore: fix use-stats and notification setting camera nickname

* fix: capitalize the camera name in stats when no nickname is set

* fix: fix debug page open camera web ui i18n and camera nickname support

* fix: fix camera metrics not use nickname

* refactor: refactor use-camera-nickname hook.
2025-08-26 11:15:01 -06:00
Nicolas Mowen
195f705616 Support all audio type in MQTT (#19768)
* Support all audio type in MQTT

* Formatting
2025-08-26 09:50:50 -05:00
Josh Hawkins
6c3f99150c Improve LPR regex support (#19767)
* add regex support to events api for recognized_license_plate

* frontend

add ability to use regexes in the plate search box and add select all/clear all links to quickly select all filtered plates
2025-08-26 08:11:37 -05:00
Josh Hawkins
22e981c38c Add role map support for proxy auth (#19758)
* update config

* add role map support

* docs
2025-08-25 17:58:41 -05:00
Nicolas Mowen
ed9d031e80 Add experimental support for AMD AMF decode/encode (#19745)
* Add experimental support for AMD AMF decode/encode

* Organize imports
2025-08-25 13:40:36 -05:00
Josh Hawkins
c260642604 Improve audio detection debugging (#19753)
* create audio activity manager

move publishing logic out of audio detector

* dispatcher changes

* correctly publish full array of audio detections in onConnect

* frontend websocket hooks

* line graph

* debug tab and i18n

* docs

* clean up

* fix i18n key
2025-08-25 13:40:21 -05:00
Josh Hawkins
1636fee36a Only try to import memryx SDK when memryx detector is used (#19737) 2025-08-24 18:38:30 -05:00
Nicolas Mowen
5af8fbac51 Fix tls config check (#19710)
* Correct tls settings check

* Correct jq format
2025-08-22 18:23:51 -05:00
Hosted Weblate
b036eb612a Translated using Weblate (Greek)
Currently translated at 2.6% (3 of 114 strings)

Translated using Weblate (Greek)

Currently translated at 8.3% (4 of 48 strings)

Translated using Weblate (Greek)

Currently translated at 3.5% (3 of 85 strings)

Translated using Weblate (Greek)

Currently translated at 44.4% (4 of 9 strings)

Translated using Weblate (Greek)

Currently translated at 3.2% (4 of 124 strings)

Translated using Weblate (Greek)

Currently translated at 8.6% (4 of 46 strings)

Translated using Weblate (Greek)

Currently translated at 2.6% (5 of 191 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/el/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-system
2025-08-22 18:18:38 -05:00
Josh Hawkins
8be82b63f4 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (117 of 117 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/uk/
2025-08-22 17:56:38 -05:00
lukasig
1241e27aac Translated using Weblate (Romanian)
Currently translated at 100.0% (117 of 117 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/ro/
2025-08-22 17:56:38 -05:00
Marijn
02ee37ab55 Translated using Weblate (Dutch)
Currently translated at 100.0% (117 of 117 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nl/
2025-08-22 17:56:38 -05:00
Gringo
e0f695d8b1 Translated using Weblate (Italian)
Currently translated at 100.0% (117 of 117 strings)

Translation: Frigate NVR/views-system
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/it/
2025-08-22 17:56:38 -05:00
Hosted Weblate
819e0a0bf9 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 99.1% (123 of 124 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (26 of 26 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
2025-08-22 17:56:38 -05:00
Hosted Weblate
2a3065eb67 Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 11.3% (49 of 431 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 98.8% (84 of 85 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Chinese (Traditional Han script))

Currently translated at 14.2% (61 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: retic9 <retic9@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hant/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hant/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-22 17:56:38 -05:00
Hosted Weblate
db45dd3698 Translated using Weblate (Swedish)
Currently translated at 71.9% (82 of 114 strings)

Translated using Weblate (Swedish)

Currently translated at 19.0% (82 of 431 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (62 of 62 strings)

Translated using Weblate (Swedish)

Currently translated at 69.3% (86 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (53 of 53 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-22 17:56:38 -05:00
Hosted Weblate
6136504128 Translated using Weblate (Italian)
Currently translated at 100.0% (114 of 114 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (114 of 114 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (46 of 46 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/it/
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-22 17:56:38 -05:00
Hosted Weblate
47af0b63c5 Translated using Weblate (Vietnamese)
Currently translated at 89.0% (384 of 431 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Vietnamese)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Darias <yd5uecg8e@mozmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/vi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/vi/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-22 17:56:38 -05:00
Hosted Weblate
8bfa4be42f Translated using Weblate (Greek)
Currently translated at 2.6% (3 of 114 strings)

Translated using Weblate (Greek)

Currently translated at 8.3% (4 of 48 strings)

Translated using Weblate (Greek)

Currently translated at 3.5% (3 of 85 strings)

Translated using Weblate (Greek)

Currently translated at 44.4% (4 of 9 strings)

Translated using Weblate (Greek)

Currently translated at 3.2% (4 of 124 strings)

Translated using Weblate (Greek)

Currently translated at 8.6% (4 of 46 strings)

Translated using Weblate (Greek)

Currently translated at 2.6% (5 of 191 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/el/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/el/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-system
2025-08-22 17:56:38 -05:00
Hosted Weblate
b8af90331b Translated using Weblate (German)
Currently translated at 100.0% (62 of 62 strings)

Translated using Weblate (German)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (German)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (German)

Currently translated at 100.0% (26 of 26 strings)

Co-authored-by: Christos Sidiropoulos <dev@csidirop.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Viktor Stier <viktor-stier@gmx.de>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/de/
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
2025-08-22 17:56:38 -05:00
Josh Hawkins
95131541c5 Consolidate documentation i18n keys (#19714)
* Consolidate documentation i18n keys

* individual translations
2025-08-22 17:19:00 -05:00
Josh Hawkins
a88760efa1 Display warning in frontend if shm size is too low (#19712)
* backend

refactor shm calculation to utility function so it can be used in frontend stats

* frontend

* fix check

* clean up
2025-08-22 13:48:27 -06:00
Niko
ee48d6782d add model config parameter to full reference config (#19520)
* add model config parameter to full reference config

* oversight with comment location

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-08-22 08:07:19 -06:00
Tim Wesley
dbceb4dcc7 MemryX MX3 detector integration (#17723)
* sdk_2.0_update

* memryx docs: minor reorg

* ran ruff

* whoops, more ruff fixes

* Fixes (#6)

* Fixes and custom model path updated

* ruff formatting

* removed apt install from main

* add comment about libgomp1 in install_deps

---------

Co-authored-by: Abinila Siva <abinila.siva@memryx.com>
Co-authored-by: Abinila Siva <163017635+abinila4@users.noreply.github.com>
2025-08-22 08:11:48 -05:00
Nicolas Mowen
9dd7ead462 Camera Health Status (#19709)
* Send status of camera streams to mqtt

* Update docs

* Formatting

* Fix frontend querying fps
2025-08-22 06:42:36 -06:00
Nicolas Mowen
539c760953 Don't print when not using rknn (#19698)
* Debug logs for rknn embeddings check

* Debug logs for rknn embeddings check
2025-08-21 18:00:36 -05:00
Nicolas Mowen
f39475a383 Support face recognition via RKNN (#19687)
* Add support for face recognition via RKNN

* Fix crash when adding camera via UI

* Update docs regarding support for face recognition

* Formatting
2025-08-21 06:18:55 -06:00
Nicolas Mowen
1be84d6833 Add automatic RKNN conversion and support for semantic search model (#19676)
* Create RKNN model runner and use it for jina v1 clip

* Formatting

* Handle model type inference

* Properly provide input to RKNN

* Adjust rknn conversion

* Update docs

* Formatting

* Fix path handling

* Handle inputs

* Cleanup

* Change normalization for better accuracy

* Clarify supported models

* Remove testing
2025-08-21 05:30:14 -06:00
Hosted Weblate
efeb089ff8 Translated using Weblate (Norwegian Bokmål)
Currently translated at 99.1% (123 of 124 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Norwegian Bokmål)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: OverTheHillsAndFarAway <prosjektx@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nb_NO/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nb_NO/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
2025-08-20 20:34:21 -06:00
Hosted Weblate
f83e528461 Translated using Weblate (Chinese (Simplified Han script))
Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Chinese (Simplified Han script))

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: GuoQing Liu <842607283@qq.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/zh_Hans/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/zh_Hans/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
51d1bc883b Translated using Weblate (Chinese (Traditional Han script))
Currently translated at 9.6% (41 of 427 strings)

Co-authored-by: Ethan Chen <ethan42411@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/zh_Hant/
Translation: Frigate NVR/audio
2025-08-20 20:34:21 -06:00
Hosted Weblate
af590310fc Translated using Weblate (Slovenian)
Currently translated at 94.7% (108 of 114 strings)

Translated using Weblate (Slovenian)

Currently translated at 25.9% (112 of 431 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Slovenian)

Currently translated at 95.0% (115 of 121 strings)

Translated using Weblate (Slovenian)

Currently translated at 96.3% (184 of 191 strings)

Translated using Weblate (Slovenian)

Currently translated at 33.2% (142 of 427 strings)

Translated using Weblate (Slovenian)

Currently translated at 72.8% (83 of 114 strings)

Translated using Weblate (Slovenian)

Currently translated at 19.7% (85 of 431 strings)

Translated using Weblate (Slovenian)

Currently translated at 97.6% (83 of 85 strings)

Translated using Weblate (Slovenian)

Currently translated at 98.3% (61 of 62 strings)

Translated using Weblate (Slovenian)

Currently translated at 81.8% (99 of 121 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Slovenian)

Currently translated at 48.1% (92 of 191 strings)

Translated using Weblate (Slovenian)

Currently translated at 30.9% (132 of 427 strings)

Translated using Weblate (Slovenian)

Currently translated at 56.1% (64 of 114 strings)

Translated using Weblate (Slovenian)

Currently translated at 7.4% (32 of 431 strings)

Translated using Weblate (Slovenian)

Currently translated at 27.0% (23 of 85 strings)

Translated using Weblate (Slovenian)

Currently translated at 40.3% (25 of 62 strings)

Translated using Weblate (Slovenian)

Currently translated at 40.4% (49 of 121 strings)

Translated using Weblate (Slovenian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Slovenian)

Currently translated at 48.5% (34 of 70 strings)

Translated using Weblate (Slovenian)

Currently translated at 45.2% (24 of 53 strings)

Translated using Weblate (Slovenian)

Currently translated at 27.1% (116 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jan Šuklje <sukljejan@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sl/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
8242fc0077 Translated using Weblate (Slovak)
Currently translated at 44.7% (51 of 114 strings)

Translated using Weblate (Slovak)

Currently translated at 11.8% (51 of 431 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (Slovak)

Currently translated at 42.9% (52 of 121 strings)

Translated using Weblate (Slovak)

Currently translated at 60.0% (51 of 85 strings)

Translated using Weblate (Slovak)

Currently translated at 72.8% (51 of 70 strings)

Translated using Weblate (Slovak)

Currently translated at 43.2% (51 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 96.2% (51 of 53 strings)

Translated using Weblate (Slovak)

Currently translated at 27.7% (53 of 191 strings)

Translated using Weblate (Slovak)

Currently translated at 15.2% (65 of 427 strings)

Translated using Weblate (Slovak)

Currently translated at 41.2% (47 of 114 strings)

Translated using Weblate (Slovak)

Currently translated at 10.9% (47 of 431 strings)

Translated using Weblate (Slovak)

Currently translated at 97.9% (47 of 48 strings)

Translated using Weblate (Slovak)

Currently translated at 39.6% (48 of 121 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Slovak)

Currently translated at 55.2% (47 of 85 strings)

Translated using Weblate (Slovak)

Currently translated at 67.1% (47 of 70 strings)

Translated using Weblate (Slovak)

Currently translated at 39.8% (47 of 118 strings)

Translated using Weblate (Slovak)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Slovak)

Currently translated at 90.5% (48 of 53 strings)

Translated using Weblate (Slovak)

Currently translated at 25.6% (49 of 191 strings)

Translated using Weblate (Slovak)

Currently translated at 14.2% (61 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Jakub K <klacanjakub0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sk/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
bb2f1ca41a Translated using Weblate (Serbian)
Currently translated at 7.0% (8 of 114 strings)

Translated using Weblate (Serbian)

Currently translated at 1.9% (7 of 352 strings)

Translated using Weblate (Serbian)

Currently translated at 16.6% (8 of 48 strings)

Translated using Weblate (Serbian)

Currently translated at 10.0% (8 of 80 strings)

Translated using Weblate (Serbian)

Currently translated at 12.9% (8 of 62 strings)

Translated using Weblate (Serbian)

Currently translated at 88.8% (8 of 9 strings)

Translated using Weblate (Serbian)

Currently translated at 33.3% (8 of 24 strings)

Translated using Weblate (Serbian)

Currently translated at 13.6% (9 of 66 strings)

Translated using Weblate (Serbian)

Currently translated at 16.0% (8 of 50 strings)

Translated using Weblate (Serbian)

Currently translated at 19.5% (9 of 46 strings)

Translated using Weblate (Serbian)

Currently translated at 100.0% (9 of 9 strings)

Translated using Weblate (Serbian)

Currently translated at 6.7% (8 of 118 strings)

Translated using Weblate (Serbian)

Currently translated at 3.5% (15 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Milan Pavlov <mikecitt@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sr/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
1519ad7945 Translated using Weblate (Finnish)
Currently translated at 61.7% (71 of 115 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Pasi Hakkarainen <pasi.hakkarainen@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fi/
Translation: Frigate NVR/views-explore
2025-08-20 20:34:21 -06:00
Hosted Weblate
cfff0fa3ee Translated using Weblate (Swedish)
Currently translated at 45.6% (52 of 114 strings)

Translated using Weblate (Swedish)

Currently translated at 66.1% (41 of 62 strings)

Translated using Weblate (Swedish)

Currently translated at 34.6% (43 of 124 strings)

Translated using Weblate (Swedish)

Currently translated at 81.1% (43 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 35.9% (41 of 114 strings)

Translated using Weblate (Swedish)

Currently translated at 96.4% (82 of 85 strings)

Translated using Weblate (Swedish)

Currently translated at 48.3% (30 of 62 strings)

Translated using Weblate (Swedish)

Currently translated at 26.4% (32 of 121 strings)

Translated using Weblate (Swedish)

Currently translated at 62.2% (33 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 30.7% (35 of 114 strings)

Translated using Weblate (Swedish)

Currently translated at 14.6% (63 of 431 strings)

Translated using Weblate (Swedish)

Currently translated at 94.1% (80 of 85 strings)

Translated using Weblate (Swedish)

Currently translated at 35.4% (22 of 62 strings)

Translated using Weblate (Swedish)

Currently translated at 17.3% (21 of 121 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (25 of 25 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (2 of 2 strings)

Translated using Weblate (Swedish)

Currently translated at 41.5% (22 of 53 strings)

Translated using Weblate (Swedish)

Currently translated at 37.4% (160 of 427 strings)

Translated using Weblate (Swedish)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Swedish)

Currently translated at 30.7% (35 of 114 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Kristian Johansson <knmjohansson@gmail.com>
Co-authored-by: Magnus Kvevlander <magu@me.com>
Co-authored-by: jorg-stor <jorgen.storvist@techship.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-icons/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/sv/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/sv/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-icons
Translation: Frigate NVR/components-player
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
6b8a140c89 Translated using Weblate (French)
Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (French)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (French)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (French)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (French)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (French)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (French)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (French)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (French)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Apocoloquintose <bertrand.moreux@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Wobak <wobak@wobak.fr>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/fr/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/fr/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
f97aa00733 Translated using Weblate (Spanish)
Currently translated at 89.3% (385 of 431 strings)

Translated using Weblate (Spanish)

Currently translated at 99.4% (190 of 191 strings)

Co-authored-by: Guillermo Vargas <guilleva@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: jjavin <javiernovoa@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/es/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/es/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
e0689027a1 Translated using Weblate (Dutch)
Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (114 of 114 strings)

Translated using Weblate (Dutch)

Currently translated at 82.1% (354 of 431 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Dutch)

Currently translated at 96.6% (117 of 121 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Dutch)

Currently translated at 98.1% (52 of 53 strings)

Translated using Weblate (Dutch)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: JBakers <evilspawn@gmail.com>
Co-authored-by: Marijn <168113859+Marijn0@users.noreply.github.com>
Co-authored-by: Wout Rombouts <woutrombouts18@gmail.com>
Co-authored-by: glenn schrooyen <gschrooyen@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/nl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/nl/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
747dac8f81 Translated using Weblate (Indonesian)
Currently translated at 90.0% (9 of 10 strings)

Translated using Weblate (Indonesian)

Currently translated at 23.9% (11 of 46 strings)

Translated using Weblate (Indonesian)

Currently translated at 15.2% (18 of 118 strings)

Translated using Weblate (Indonesian)

Currently translated at 19.4% (83 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Ivan D. Firmansyah <ivandhuha@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/id/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/id/
Translation: Frigate NVR/audio
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
2025-08-20 20:34:21 -06:00
Hosted Weblate
62e8260131 Translated using Weblate (Italian)
Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Italian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Gringo <ita.translations@tiscali.it>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Paraomao <mauro.ant@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/it/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/it/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
cdb8b4542a Translated using Weblate (Polish)
Currently translated at 80.9% (349 of 431 strings)

Translated using Weblate (Polish)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Patryk Smoliński <smolinski.patryk@mensa.org.pl>
Co-authored-by: karaspr <karaspr@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pl/
Translation: Frigate NVR/common
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
acab9c2e5b Translated using Weblate (Hindi)
Currently translated at 2.6% (3 of 114 strings)

Translated using Weblate (Hindi)

Currently translated at 0.8% (3 of 352 strings)

Translated using Weblate (Hindi)

Currently translated at 6.2% (3 of 48 strings)

Translated using Weblate (Hindi)

Currently translated at 3.7% (3 of 80 strings)

Translated using Weblate (Hindi)

Currently translated at 4.8% (3 of 62 strings)

Translated using Weblate (Hindi)

Currently translated at 44.4% (4 of 9 strings)

Translated using Weblate (Hindi)

Currently translated at 3.4% (4 of 115 strings)

Translated using Weblate (Hindi)

Currently translated at 16.6% (4 of 24 strings)

Translated using Weblate (Hindi)

Currently translated at 16.0% (4 of 25 strings)

Translated using Weblate (Hindi)

Currently translated at 6.0% (4 of 66 strings)

Translated using Weblate (Hindi)

Currently translated at 8.0% (4 of 50 strings)

Translated using Weblate (Hindi)

Currently translated at 13.5% (16 of 118 strings)

Translated using Weblate (Hindi)

Currently translated at 8.6% (4 of 46 strings)

Translated using Weblate (Hindi)

Currently translated at 2.0% (4 of 191 strings)

Translated using Weblate (Hindi)

Currently translated at 33.4% (143 of 427 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: PRATEEK BISHT <prateekbisht04@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-player/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-exports/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hi/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/hi/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/components-player
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-exports
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
4540165703 Translated using Weblate (Hungarian)
Currently translated at 83.7% (361 of 431 strings)

Translated using Weblate (Hungarian)

Currently translated at 96.6% (117 of 121 strings)

Translated using Weblate (Hungarian)

Currently translated at 97.6% (83 of 85 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Hungarian)

Currently translated at 98.1% (52 of 53 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (62 of 62 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (50 of 50 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (50 of 50 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Hungarian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Dávid Attila Balog <davidattilabalog@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Zsolt Fojtyik <zsozso830316@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/hu/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/hu/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
a4ebee3ead Translated using Weblate (Portuguese)
Currently translated at 99.1% (123 of 124 strings)

Translated using Weblate (Portuguese)

Currently translated at 96.1% (25 of 26 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Portuguese)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Miguel Duarte <miguelteixo@gmail.com>
Co-authored-by: The DC <thedc98@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
cbd71b25e2 Translated using Weblate (Catalan)
Currently translated at 95.8% (116 of 121 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Catalan)

Currently translated at 98.8% (84 of 85 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Catalan)

Currently translated at 100.0% (70 of 70 strings)

Co-authored-by: Eduard Frigola <eduardfrigola@yahoo.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ca/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ca/
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
2025-08-20 20:34:21 -06:00
Hosted Weblate
a0e657e207 Translated using Weblate (Ukrainian)
Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Ukrainian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Den <denis.ua22@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/uk/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/uk/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
f589ee0f20 Translated using Weblate (Romanian)
Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Romanian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: lukasig <lukasig@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/ro/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/ro/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
3a42b4439c Translated using Weblate (Russian)
Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Russian)

Currently translated at 98.5% (69 of 70 strings)

Translated using Weblate (Russian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Igor Kalinin <stigory@gmail.com>
Co-authored-by: Артём Владимиров <artyomka71@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/ru/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/ru/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
2025-08-20 20:34:21 -06:00
Hosted Weblate
8c7cf1fc7e Translated using Weblate (German)
Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (German)

Currently translated at 100.0% (48 of 48 strings)

Translated using Weblate (German)

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (German)

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (German)

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (German)

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (German)

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (German)

Currently translated at 99.4% (190 of 191 strings)

Translated using Weblate (German)

Currently translated at 100.0% (352 of 352 strings)

Translated using Weblate (German)

Currently translated at 100.0% (80 of 80 strings)

Translated using Weblate (German)

Currently translated at 100.0% (191 of 191 strings)

Translated using Weblate (German)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Export33 <jannis.riemann@web.de>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Phil Jope <phil@jope.cloud>
Co-authored-by: Viktor Stier <viktor-stier@gmx.de>
Co-authored-by: ahgln <a.hegglin@windowslive.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/de/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/de/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-settings
2025-08-20 20:34:21 -06:00
Hosted Weblate
ac159f114c Translated using Weblate (Portuguese (Brazil))
Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (124 of 124 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (26 of 26 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (191 of 191 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (431 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (427 of 427 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 99.5% (429 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 91.1% (393 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 86.5% (373 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 86.0% (371 of 431 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (85 of 85 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (121 of 121 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (10 of 10 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (70 of 70 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (53 of 53 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (62 of 62 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (114 of 114 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (115 of 115 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (66 of 66 strings)

Translated using Weblate (Portuguese (Brazil))

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: Marcelo Popper Costa <marcelo_popper@hotmail.com>
Co-authored-by: P1LH4 <joao.calby@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/audio/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-configeditor/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-explore/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-live/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-settings/pt_BR/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/pt_BR/
Translation: Frigate NVR/audio
Translation: Frigate NVR/common
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-configeditor
Translation: Frigate NVR/views-events
Translation: Frigate NVR/views-explore
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-live
Translation: Frigate NVR/views-settings
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
e256402e8c Translated using Weblate (Lithuanian)
Currently translated at 54.3% (62 of 114 strings)

Translated using Weblate (Lithuanian)

Currently translated at 95.8% (46 of 48 strings)

Translated using Weblate (Lithuanian)

Currently translated at 12.9% (8 of 62 strings)

Translated using Weblate (Lithuanian)

Currently translated at 20.0% (10 of 50 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (46 of 46 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (118 of 118 strings)

Translated using Weblate (Lithuanian)

Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: MaBeniu <runnerm@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-camera/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/objects/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-facelibrary/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-search/lt/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-system/lt/
Translation: Frigate NVR/common
Translation: Frigate NVR/components-camera
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/objects
Translation: Frigate NVR/views-facelibrary
Translation: Frigate NVR/views-search
Translation: Frigate NVR/views-system
2025-08-20 20:34:21 -06:00
Hosted Weblate
05152b0676 Translated using Weblate (Turkish)
Currently translated at 100.0% (191 of 191 strings)

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Co-authored-by: pcislocked <git@pcislocked.net>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/common/tr/
Translation: Frigate NVR/common
2025-08-20 20:34:21 -06:00
Hosted Weblate
dc20903711 Translated using Weblate (Galician)
Currently translated at 10.6% (7 of 66 strings)

Translated using Weblate (Galician)

Currently translated at 16.0% (8 of 50 strings)

Translated using Weblate (Galician)

Currently translated at 29.1% (7 of 24 strings)

Translated using Weblate (Galician)

Currently translated at 77.7% (7 of 9 strings)

Co-authored-by: Alexandre Espinosa Menor <aemenor@gmail.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-auth/gl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-dialog/gl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/components-filter/gl/
Translate-URL: https://hosted.weblate.org/projects/frigate-nvr/views-events/gl/
Translation: Frigate NVR/components-auth
Translation: Frigate NVR/components-dialog
Translation: Frigate NVR/components-filter
Translation: Frigate NVR/views-events
2025-08-20 20:34:21 -06:00
Nicolas Mowen
2236ecf23f Auto convert ONNX models to RKNN format (#19674)
* Implement base rknn conversion

* Remove unused

* Formatting

* Add model conversion lock so it doesn't break when multiple detectors are defined

* Ignore unused import
2025-08-20 15:15:57 -06:00
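The model conversion lock mentioned in #19674 is the usual way to keep several detector instances from converting the same ONNX model at once. A minimal sketch of that pattern, assuming a process-local lock and an illustrative conversion helper rather than Frigate's actual code:

```python
import os
import threading

# A single lock shared by all detector instances in this process; the real
# implementation may use a different synchronization primitive.
_conversion_lock = threading.Lock()

def _convert_onnx_to_rknn(onnx_path: str, rknn_path: str) -> None:
    # Placeholder for the actual RKNN toolkit conversion call.
    with open(rknn_path, "wb") as f:
        f.write(b"")  # stand-in artifact

def ensure_rknn_model(onnx_path: str, rknn_path: str) -> str:
    """Convert once; concurrent detectors wait instead of racing on the same file."""
    with _conversion_lock:
        if not os.path.exists(rknn_path):
            _convert_onnx_to_rknn(onnx_path, rknn_path)
    return rknn_path
```

If the detectors run in separate processes rather than threads, a file lock on the target path would be needed instead of a threading.Lock.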
Nicolas Mowen
6e3b40eaee Fix record motion config (#19672)
* fix record config

* Formatting
2025-08-20 14:45:17 -06:00
Josh Hawkins
80144fe524 update bug report discussion template (#19670) 2025-08-20 15:10:24 -05:00
Rick Sanchez
b003cecd73 Update YOLO_NAS_Pretrained_Export.ipynb (#19669)
Google Colab has changed from Python 3.11 to 3.12; the sed line needs to be updated or the notebook will fail.
2025-08-20 14:59:43 -05:00
Nicolas Mowen
361014f01c UI improvements (#19659)
* Add indicator when GenAI review infers suspicious activity

* Fix score filtering logic

* Enable mobile view for classification and optimize for mobile layout

* Add missing keys

* Don't require face rec

* fix key
2025-08-20 08:28:47 -05:00
Nicolas Mowen
9fb09408d1 Fix build (#19634)
* Don't put special constraints

* Undo joserfc install

* Fix joserfc

* Formatting
2025-08-19 13:19:31 -06:00
Nicolas Mowen
e92267d7e2 Update deps (#19617)
* Update virtualenv

* Fix device ID

* Fix dependency conflict

* Cleanup mypy
2025-08-19 11:08:34 -05:00
dependabot[bot]
8e254aa1f0 Bump actions/checkout from 4 to 5 (#19472)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 07:46:16 -06:00
Nicolas Mowen
acf32e1a1e Various Fixes (#19615)
* Don't write None response to file

* fix genai config migration

* Fix JP6 build

* Include base image ARG
2025-08-19 06:49:55 -06:00
baudneo
33f3ea3b59 Enrichments: Allow targeting a specific GPU ID (#19342) 2025-08-18 17:43:53 -06:00
scyto
83e9ae616a Enable Optional IPv6 Support for Nginx (#19602) 2025-08-18 17:39:12 -06:00
On Freund
0309090852 Fix typo in Apple Silicon detector (#19595) 2025-08-18 10:02:06 -06:00
Nicolas Mowen
152d9ed4a0 Apple Silicon / ZMQ Detector (#19592)
* Add zmq detector

* Cleanup

* Logging

* Cleanup

* Cleanup

* Add to hardware docs

* Add apple silicon to docs

* Formatting
2025-08-18 09:51:12 -05:00
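A hedged sketch of the round trip a ZMQ-based detector performs: the client serializes a frame, sends it to an out-of-process detection service, and reads back the detections. The endpoint and the pickle wire format are assumptions for illustration, not the actual protocol used in #19592:

```python
import pickle
import numpy as np
import zmq

def detect_remote(frame: np.ndarray, endpoint: str = "tcp://127.0.0.1:5555") -> list:
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    sock.send(pickle.dumps(frame))          # ship the raw tensor to the detector process
    detections = pickle.loads(sock.recv())  # expect a list of [label, score, box] entries
    sock.close()
    return detections
```

A REP socket on the other end would unpickle the frame, run inference (for example on Apple Silicon), and send the results back.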
Nicolas Mowen
5a49d1f73c Enable mypy for track and fix typing errors (#19529)
* Enable mypy for track

* WIP cleaning up tracked object

* Fix tracked object typing

* Fix typing and imports of centroid tracker

* Cleanup typing

* Cleanup

* Formatting

* Fix imports

* Don't specify callable type

* Type out json setting
2025-08-17 12:27:42 -05:00
Nicolas Mowen
856aab8e6e Cleanup filename sanitization 2025-08-16 10:20:33 -05:00
Nicolas Mowen
ccbaa74a8b Improve natural language of prompt (#19515)
* Make sequence details human-readable so they are used in natural language response

* Cleanup

* Improve prompt and image selection

* Adjust

* Adjust slightly

* Format time

* Adjust frame selection logic

* Debug save response

* Ignore extra fields

* Adjust docs
2025-08-16 10:20:33 -05:00
Nicolas Mowen
6671984e5a Update GenAI docs for new review summaries feature (#19493)
* Remove old genai docs

* Separate existing genai docs to separate sections

* Add docs for genai features

* Update reference config

* Update link

* Move to bottom
2025-08-16 10:20:33 -05:00
Nicolas Mowen
7740b08bd9 Add ability to pass additional args to Ollama (#19484)
* Call out recognized objects more specifically

* Cleanup

* Make keep_alive and options configurable

* Generalize

* Use for other providers
2025-08-16 10:20:33 -05:00
Nicolas Mowen
dace88bfce Genai review summaries (#19473)
* Generate review item summaries with requests

* Adjust logic to only send important items

* Don't mention ladder

* Adjust prompt to be more specific

* Add more relaxed nature for normal activity

* Cleanup summary

* Update ollama client

* Add more directions to analyze the frames in order

* Remove environment from prompt
2025-08-16 10:20:33 -05:00
markfrancisonly
8e663413bb Added total camera fps, total processed fps, and total skipped fps to stats api (#19469)
Co-authored-by: Mark Francis <markfrancisonly@gmail.com>
2025-08-16 10:20:33 -05:00
Nicolas Mowen
cc18d7f786 Add config for users to define additional concerns that GenAI should make note of in review summary (#19463)
* Don't default to openai

* Improve UI

* Allow configuring additional concerns that users may want the AI to note

* Formatting

* Add preferred language config

* Remove unused
2025-08-16 10:20:33 -05:00
Nicolas Mowen
3cf86767f1 Use preview frames for Review Descriptions (#19450)
* Use preview frames for genai

* Cleanup

* Adjust
2025-08-16 10:20:33 -05:00
Nicolas Mowen
92417a1b9c Review genai updates (#19448)
* Include extra level for normal activity

* Add dynamic toggling

* Update docs

* Add different threshold for genai

* Adjust webUI for object and review description feature

* Adjust config

* Send on startup

* Cleanup config setting

* Set config

* Fix config name
2025-08-16 10:20:33 -05:00
Nicolas Mowen
2cf8dd693c Review Item GenAI metadata (#19442)
* Rename existing function

* Keep track of thumbnail updates

* Tinkering with genai prompt

* Adjust input format

* Create model for review description output

* testing prompt changes

* Prompt improvements and image saving

* Add config for review items genai

* Use genai review config

* Actual config usage

* Adjust debug image saving

* Fix

* Fix review creation

* Adjust prompt

* Prompt adjustment

* Run genai in thread

* Fix detections block

* Adjust prompt

* Prompt changes

* Save genai response to metadata model

* Handle metadata

* Send review update to dispatcher

* Save review metadata to DB

* Send review notification updates

* Quick fix

* Fix name

* Fix update type

* Correctly dump model

* Add card

* Add card

* Remove message

* Cleanup typing and UI

* Adjust prompt

* Formatting

* Add log

* Formatting

* Add inference speed and keep alive
2025-08-16 10:20:33 -05:00
Nicolas Mowen
1f3755e45d Migrate object genai configuration (#19437)
* Move genAI object to objects section

* Adjust config propagation behavior

* Refactor genai config usage

* Automatic migration

* Always start the embeddings process

* Always init embeddings

* Config fixes

* Adjust reference config

* Adjust docs

* Formatting

* Fix
2025-08-16 10:20:33 -05:00
Nicolas Mowen
7c1681e344 Enable mypy for DB and fix types (#19434)
* Install peewee type hints

* Models now have proper types

* Fix iterator type

* Enable debug builds with dev reqs installed

* Install as wheel

* Fix cast type
2025-08-16 10:20:33 -05:00
Nicolas Mowen
6ecc631486 Adjust loitering behavior based on object type (#19433)
* Adjust loitering behavior based on object

* Update docs

* Grammar
2025-08-16 10:20:33 -05:00
Josh Hawkins
21ab164bfe Ensure alertVideos persistence is loaded before displaying thumb or preview (#19432)
The default value of true would cause previews to be loaded in the background even if the local storage value was false
2025-08-16 10:20:33 -05:00
Nicolas Mowen
fcf3824124 Improve comms typing (#18599)
* Enable mypy for comms

* Make zmq data types consistent

* Cleanup inter process typing issues

* Cleanup embeddings typing

* Cleanup config updater

* Cleanup recordings updator

* Make publisher have a generic type

* Cleanup event metadata updater

* Cleanup event metadata updater

* Cleanup detections updater

* Cleanup websocket

* Cleanup mqtt

* Cleanup webpush

* Cleanup dispatcher

* Formatting

* Remove unused

* Add return type

* Fix tests

* Fix semantic triggers config typing

* Cleanup
2025-08-16 10:20:33 -05:00
Nicolas Mowen
1add72884a Cleanup 2025-08-16 10:20:33 -05:00
Nicolas Mowen
e9e3c481b2 Implement start for review item description processor (#19352)
* Add review item data transmission

* Publish review updates

* Add review item subscriber

* Basic implementation for testing review processor

* Formatting
2025-08-16 10:20:33 -05:00
Nicolas Mowen
fa1b88097b Improve the tablet layout (#19320)
* Improve the tablet layout

* Update imports sort

* Fix more imports
2025-08-16 10:20:33 -05:00
Steve Smith
10160fb3b5 Add bookworm-backports to the rocm images and upgrade mesa/vaapi to support RDNA4 GPUs (#19312) 2025-08-16 10:20:33 -05:00
Nicolas Mowen
20104761e8 Require setting process priority for FrigateProcess (#19207) 2025-08-16 10:20:33 -05:00
Nicolas Mowen
d071325ca7 Object attribute classification (#19205)
* Add enum for type of classification for objects

* Update recognized license plate topic to be used as attribute updater

* Update attribute for attribute type object classification

* Cleanup
2025-08-16 10:20:33 -05:00
Nicolas Mowen
55e5a55fa2 Classification train updates (#19173)
* Improve model train button

* Add filters for classification

* Cleanup

* Don't run classification on false positives

* Cleanup filter

* Fix icon color
2025-08-16 10:20:33 -05:00
Josh Hawkins
8719216fa6 Dynamically enable/disable GenAI (#19139)
* config

* dispatcher and mqtt

* docs

* use config updater

* add switch to frontend
2025-08-16 10:20:33 -05:00
Josh Hawkins
22478df4d6 Optionally show tracked object paths in debug view (#19025) 2025-08-16 10:20:33 -05:00
Josh Hawkins
3609b41217 Semantic Search Triggers (#18969)
* semantic trigger test

* database and model

* config

* embeddings maintainer and trigger post-processor

* api to create, edit, delete triggers

* frontend and i18n keys

* use thumbnail and description for trigger types

* image picker tweaks

* initial sync

* thumbnail file management

* clean up logs and use saved thumbnail on frontend

* publish mqtt messages

* webpush changes to enable trigger notifications

* add enabled switch

* add triggers from explore

* renaming and deletion fixes

* fix typing

* UI updates and add last triggering event time and link

* log exception instead of return in endpoint

* highlight entry in UI when triggered

* save and delete thumbnails directly

* remove alert action for now and add descriptions

* tweaks

* clean up

* fix types

* docs

* docs tweaks

* docs

* reuse enum
2025-08-16 10:20:33 -05:00
Nicolas Mowen
28f816b49a Classification improvements (#19020)
* Move classification training to full process

* Sort class images
2025-08-16 10:20:33 -05:00
Nicolas Mowen
528f0d2b1f Improve classification UI (#18910)
* Move threshold to base model config

* Improve score handling

* Add back button
2025-08-16 10:20:33 -05:00
Nicolas Mowen
f925154b8a Remove TFLite init logs 2025-08-16 10:20:33 -05:00
Nicolas Mowen
13fb7bc260 Improve object classification (#18908)
* Ui improvements

* Improve image cropping and model saving

* Improve naming

* Add logs for training

* Improve model labeling

* Don't set sub label for none object classification

* Cleanup
2025-08-16 10:20:33 -05:00
Nicolas Mowen
ceeb6543f5 0.17 tweaks (#18892)
* Set version

* Cleanup more logs

* Don't log matplotlib
2025-08-16 10:20:33 -05:00
Josh Hawkins
cf62bee170 Add ONVIF focus support (#18883)
* backend

* frontend and i18n
2025-08-16 10:20:33 -05:00
Nicolas Mowen
ec6c04e49a Improve logging (#18867)
* Ignore numpy get limits warning

* Add function wrapper to redirect stdout and stderr to logpipe

* Save stderr too

* Add more to catch

* run logpipe

* Use other logging redirect class

* Use other logging redirect class

* add decorator for redirecting c/c++ level output to logger

* fix typing

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-08-16 10:20:33 -05:00
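A minimal sketch of the redirection idea in #18867: duplicate the OS-level stdout/stderr file descriptors into a pipe and pump the pipe into the Python logger, so output from C/C++ libraries is captured as well. This assumes the logger's handlers do not themselves write to stderr (for example, they forward to a logging queue); it is an illustration, not the logpipe implementation used by Frigate:

```python
import logging
import os
import threading
from contextlib import contextmanager

logger = logging.getLogger("native-output")

@contextmanager
def redirect_fds_to_logger(level: int = logging.INFO):
    # Assumes the logger's handlers do not write to stderr themselves,
    # otherwise their output would loop back into the pipe.
    read_fd, write_fd = os.pipe()
    saved_out, saved_err = os.dup(1), os.dup(2)
    os.dup2(write_fd, 1)  # fd 1 and 2 now point at the pipe, so output from
    os.dup2(write_fd, 2)  # native extensions is captured as well

    def _pump() -> None:
        with os.fdopen(read_fd, "r", errors="replace") as pipe:
            for line in pipe:
                logger.log(level, line.rstrip())

    pump = threading.Thread(target=_pump, daemon=True)
    pump.start()
    try:
        yield
    finally:
        os.dup2(saved_out, 1)  # restore the original descriptors
        os.dup2(saved_err, 2)
        os.close(write_fd)     # last write end closed -> pump sees EOF
        os.close(saved_out)
        os.close(saved_err)
        pump.join(timeout=1)
```

The commit exposes the same idea as a decorator; a context manager is shown here only for brevity.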
Josh Hawkins
da0248db15 Don't try to close or join mp manager queues (#18866)
Multiprocessing Manager queues don't have a close() or join_thread() method, and the Manager will clean them up appropriately after we empty them. This prevents an infinite loop when an AttributeError exception fires for Manager AutoProxy queue objects.
2025-08-16 10:20:33 -05:00
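A minimal sketch of the drain-without-closing pattern described in the commit above, using a helper name of our own; Manager proxy queues raise queue.Empty just like regular multiprocessing queues, so a plain get_nowait() loop is enough:

```python
import queue
from multiprocessing import Manager

def drain(q) -> None:
    """Empty a queue without calling close()/join_thread(),
    which Manager AutoProxy queues don't implement."""
    while True:
        try:
            q.get_nowait()
        except queue.Empty:
            break

if __name__ == "__main__":
    manager = Manager()
    q = manager.Queue()
    q.put("item")
    drain(q)  # works for both multiprocessing.Queue and Manager().Queue proxies
```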
Nicolas Mowen
542bf05bb8 Handle SIGINT with forkserver (#18860)
* Pass stopevent from main start

* Share stop event across processes

* preload modules

* remove explicit os._exit call

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-08-16 10:20:33 -05:00
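A hedged sketch of the pattern referenced in #18860: create a forkserver context, preload heavy modules, and hand one shared stop event to every child so SIGINT is handled once in the parent. The preloaded module list and the worker body are placeholders:

```python
import multiprocessing as mp
import signal

def worker(stop_event) -> None:
    # Children ignore SIGINT and exit when the shared event is set,
    # so Ctrl-C is handled once in the parent.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    while not stop_event.wait(timeout=1.0):
        pass  # real work would go here

if __name__ == "__main__":
    ctx = mp.get_context("forkserver")
    # Preloading heavy modules keeps them out of each child's startup cost;
    # the module list here is an assumption.
    ctx.set_forkserver_preload(["numpy"])
    stop_event = ctx.Event()
    procs = [ctx.Process(target=worker, args=(stop_event,)) for _ in range(2)]
    for p in procs:
        p.start()
    try:
        for p in procs:
            p.join()
    except KeyboardInterrupt:
        stop_event.set()
        for p in procs:
            p.join()
```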
Nicolas Mowen
e1ee6f010f Fix process name 2025-08-16 10:20:33 -05:00
Nicolas Mowen
3327be05ea Classification model cover images (#18843)
* Move to separate component

* Add cover images for classification models
2025-08-16 10:20:33 -05:00
Josh Hawkins
9c2ba152e1 Catch invalid character index in lpr CTC decoder (#18825) 2025-08-16 10:20:33 -05:00
Josh Hawkins
7c8164aa99 Fix birdseye crash when dynamically adding a camera (#18821) 2025-08-16 10:20:33 -05:00
Nicolas Mowen
847b03e71b Catch unpickling error 2025-08-16 10:20:33 -05:00
Nicolas Mowen
0d5a49ab82 Don't fail on unicode debug for config updates 2025-08-16 10:20:33 -05:00
Nicolas Mowen
2f4d7353f4 Don't use staticmethod 2025-08-16 10:20:33 -05:00
Nicolas Mowen
ef060b97ca Reduce tf initialization 2025-08-16 10:20:33 -05:00
Nicolas Mowen
e832bb4bad Fix go2rtc init (#18708)
* Cleanup process handling

* Adjust process name
2025-08-16 10:20:33 -05:00
Josh Hawkins
4deccf08a1 Ensure logging config is propagated to forked processes (#18704)
* Move log level initialization to log

* Use logger config

* Formatting

* Fix config order

* Set process names

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-08-16 10:20:33 -05:00
Josh Hawkins
a6b80c0f9c Add basic camera settings to UI for testing (#18690)
* add basic camera add/edit pane to the UI for testing

* only init model runner if transcription is enabled globally

* fix role checkboxes
2025-08-16 10:20:33 -05:00
Nicolas Mowen
1caf8b97c4 Use Fork-Server As Spawn Method (#18682)
* Set runtime

* Use count correctly

* Don't assume camera sizes

* Use separate zmq proxy for object detection

* Correct order

* Use forkserver

* Only store PID instead of entire process reference

* Cleanup

* Catch correct errors

* Fix typing

* Remove before_run from process util

The before_run never actually ran because:

Wrapping run via __getattribute__ on BaseProcess is a common multiprocessing pitfall (and can also lead to deadlocks) because of how multiprocessing.Process objects are transported to the child process.

When a multiprocessing.Process object is started, the multiprocessing module serializes enough of the object to re-create it in the new interpreter, pickles it, and sends it to the newly spawned process. The problem with the __getattribute__ wrapper for run is that:

1. run is retrieved during serialization: when multiprocessing pickles the Process object for the new process, accessing the run attribute triggers the __getattribute__ wrapper, which binds run_wrapper to self.
2. run_wrapper is bound to the parent process's self: the closure, created in the parent, captures the Process instance from the parent's memory space.
3. Deserialization creates a new object: the child process re-creates the Process object from the pickled data, while the pickled run_wrapper still holds a reference to the parent's self. This is a subtle but critical distinction.
4. The child's run is not the wrapped run: when the child starts, it calls its own run method, which ends up being the original multiprocessing.Process.run (or a directly overridden run), so the __getattribute__ wrapping is never applied to the Process object in the child's context.

* Cleanup

* Logging bugfix (#18465)

* use mp Manager to handle logging queues

A Python bug (https://github.com/python/cpython/issues/91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is to use the multiprocessing Manager, which better manages mp queues and causes the logging to work correctly.

* consolidate

* fix typing

* Fix typing

* Use global log queue

* Move to using process for logging

* Convert camera tracking to process

* Add more processes

* Finalize process

* Cleanup

* Cleanup typing

* Formatting

* Remove daemon

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-08-16 10:20:33 -05:00
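The robust alternative to the __getattribute__ wrapping discussed in the Fork-Server commit above is to put the hook call inside run() itself, since run() always executes in the child regardless of how the Process object was pickled. A minimal illustration of that general pattern (the commit itself simply removed before_run):

```python
import multiprocessing as mp

class HookedProcess(mp.Process):
    def before_run(self) -> None:
        # Runs in the child: set process priority, configure logging, etc.
        print(f"before_run in pid {mp.current_process().pid}")

    def run(self) -> None:
        self.before_run()
        super().run()  # executes the target passed to the constructor

if __name__ == "__main__":
    p = HookedProcess(target=print, args=("hello from the child",))
    p.start()
    p.join()
```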
Nicolas Mowen
faadea8e1f Dynamic Management of Cameras (#18671)
* Add base class for global config updates

* Add or remove camera states

* Move camera process management to separate thread

* Move camera management fully to separate class

* Cleanup

* Stop camera processes when stop command is sent

* Start processes dynamically when needed

* Adjust

* Leave extra room in tracked object queue for two cameras

* Dynamically set extra config pieces

* Add some TODOs

* Fix type check

* Simplify config updates

* Improve typing

* Correctly handle indexed entries

* Cleanup

* Create out SHM

* Use ZMQ for signaling object detection is completed

* Get camera correctly created

* Cleanup for updating the cameras config

* Cleanup

* Don't enable audio if no cameras have audio transcription

* Use exact string so similar camera names don't interfere

* Add ability to update config via json body to config/set endpoint

Additionally, update the config in a single call rather than with a separate call for each updated key

* fix autotracking calibration to support new config updater function

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-08-16 10:20:33 -05:00
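A hedged example of batching several changes into one request to the config/set endpoint mentioned above; the base URL, HTTP method, and payload shape are assumptions for illustration rather than documented Frigate API details:

```python
import requests

# Hypothetical nested payload: each key path mirrors the config structure.
payload = {
    "cameras": {
        "front_door": {
            "motion": {"threshold": 30},
            "zones": {"porch": {"coordinates": "0,0,0.4,0,0.4,0.4,0,0.4"}},
        }
    }
}

resp = requests.put("http://frigate.local:5000/api/config/set", json=payload, timeout=10)
resp.raise_for_status()
```

Whether the endpoint expects PUT or POST, and how nested keys are merged, would need to be checked against the actual API.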
Nicolas Mowen
4b57e5e265 Refactor TensorRT (#18643)
* Combine base and arm trt detectors

* Remove unused deps for amd64 build

* Add missing packages and cleanup ldconfig

* Expand packages for tensorflow model training

* Cleanup

* Refactor training to not reserve memory
2025-08-16 10:20:33 -05:00
Josh Hawkins
40ab7d6c38 Make Birdseye clickable (#18628)
* keep track of layout changes and publish on change

* websocket hook

* clickable overlay div to navigate to full camera view
2025-08-16 10:20:33 -05:00
FL42
937459be47 fix: Initialize GenAI client if GenAI is enabled globally (#18623) 2025-08-16 10:20:33 -05:00
FL42
13b760346a feat: enable using GenAI for cameras with GenAI disabled from the API (#18616) 2025-08-16 10:20:33 -05:00
Jimmy
7ce26087f7 Add Mesa Teflon as a TFLite detector (#18310)
* Refactor common functions for tflite detector implementations

* Add detector using mesa teflon delegate

Non-EdgeTPU TFLite can use the standard .tflite format

* Add mesa-teflon-delegate from bookworm-backports to arm64 images
2025-08-16 10:20:33 -05:00
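A minimal sketch of running a standard .tflite model through an external delegate, which is the mechanism the Teflon detector builds on; the delegate library path and model path below are assumptions:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

delegate = load_delegate("/usr/lib/teflon/libteflon.so")  # path is illustrative
interpreter = Interpreter(
    model_path="/config/model.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])  # dummy input
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
boxes = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

On arm64 images the Mesa Teflon delegate ships as a shared library (the exact path may differ); load_delegate() simply hands it to the interpreter, so the model itself stays a plain .tflite file rather than an EdgeTPU-compiled one.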
Nicolas Mowen
b1a65c88e8 Classification Model Metrics (#18595)
* Add speed and rate metrics for custom classification models

* Use metrics for classification models

* Use keys

* Cast to list
2025-08-16 10:20:33 -05:00
Nicolas Mowen
765a28d812 Live classification model training (#18583)
* Implement model training via ZMQ and add model states to represent training

* Get model updates working

* Improve toasts and model state

* Clean up logging

* Add back in
2025-08-16 10:20:33 -05:00
Nicolas Mowen
1c75ff59f1 Classification Model UI (#18571)
* Setup basic training structure

* Build out route

* Handle model configs

* Add image fetch APIs

* Implement model training screen with dataset selection

* Implement viewing of training images

* Adjust directories

* Implement viewing of images

* Add support for deleting images

* Implement full deletion

* Implement classification model training

* Improve naming

* More renaming

* Improve layout

* Reduce logging

* Cleanup
2025-08-16 10:20:33 -05:00
Josh Hawkins
ac7fb29b32 Audio transcription tweaks (#18540)
* use model runner

* unload whisper model when live transcription is complete
2025-08-16 10:20:33 -05:00
Josh Hawkins
b77e6f5ebc Upgrade PaddleOCR models to v4 (rec) and v5 (det) (#18505)
The PP-OCRv5 text detection models are greatly improved over v3. The v5 recognition model improves handling of challenging handwriting and uncommon characters, which are not needed for LPR, so v4 remains the better choice for keeping inference time as low as possible. The full dictionary for Chinese character support is also included.
2025-08-16 10:20:33 -05:00
Nicolas Mowen
3f8ec72336 Tiered recordings (#18492)
* Implement tiered recording

* Add migration for record config

* Update docs

* Update reference docs

* Fix preview query

* Fix incorrect accesses

* Fix

* Fix

* Fix

* Fix
2025-08-16 10:20:33 -05:00
Nicolas Mowen
0b9997015a Intel updates (#18493)
* Update openvino and onnxruntime

* Install icd and level-zero-gpu deps from intel directly

* Install

* Add dep

* Fix package install
2025-08-16 10:20:33 -05:00
Nicolas Mowen
2c7b71b16e Implement API to train classification models (#18475) 2025-08-16 10:20:33 -05:00
Josh Hawkins
6dc36fcbb4 Audio transcription support (#18398)
* install new packages for transcription support

* add config options

* audio maintainer modifications to support transcription

* pass main config to audio process

* embeddings support

* api and transcription post processor

* embeddings maintainer support for post processor

* live audio transcription with sherpa and faster-whisper

* update dispatcher with live transcription topic

* frontend websocket

* frontend live transcription

* frontend changes for speech events

* i18n changes

* docs

* mqtt docs

* fix linter

* use float16 and small model on gpu for real-time

* fix return value and use requestor to embed description instead of passing embeddings

* run real-time transcription in its own thread

* tweaks

* publish live transcriptions on their own topic instead of tracked_object_update

* config validator and docs

* clarify docs
2025-08-16 10:20:33 -05:00
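A hedged sketch of the faster-whisper setup implied by the bullets above (small model with float16 on GPU for real-time use); the audio file path is a placeholder:

```python
from faster_whisper import WhisperModel

# Small model on GPU with float16 keeps real-time latency low.
model = WhisperModel("small", device="cuda", compute_type="float16")

segments, info = model.transcribe("/tmp/speech.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```

For live transcription the same model object would be fed short rolling chunks from the camera's audio stream rather than a finished file.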
Nicolas Mowen
2385c403ee Fix incorrectly running lpr (#18390) 2025-08-16 10:20:33 -05:00
Nicolas Mowen
cf1d50be30 Add basic config editor when Frigate can't startup (#18383)
* Start Frigate in safe mode when config does not validate

* Add safe mode page that is just the config editor

* Adjust Frigate config editor when in safe mode

* Cleanup

* Improve log message
2025-08-16 10:20:33 -05:00
Nicolas Mowen
723553edb7 Add ability to configure when custom classification models run (#18380)
* Add config to control when classification models are run

* Cleanup
2025-08-16 10:20:33 -05:00
Nicolas Mowen
53ff33135b Update ROCm to 6.4.1 (#18364)
* Update rocm to 6.4.1

* Quick fix
2025-08-16 10:20:33 -05:00
Josh Hawkins
4ebc4f6d21 Implement support for no recordings indicator on timeline (#18363)
* Indicate no recordings on the history timeline with gray hash marks

This commit includes a new backend API endpoint and the frontend changes needed to support this functionality

* don't show slashes for now
2025-08-16 10:20:33 -05:00
Nicolas Mowen
e1340443f5 Initial custom classification model config support (#18362)
* Add basic config for defining a teachable machine model

* Add model type

* Add basic config for teachable machine models

* Adjust config for state and object

* Use config to process

* Correctly check for objects

* Remove debug

* Rename to not be teachable machine specific

* Cleanup
2025-08-16 10:20:33 -05:00
Nicolas Mowen
4dc526761c Dynamically update masks and zones for cameras (#18359)
* Include config publisher in api

* Call update topic for passed topics

* Update zones dynamically

* Update zones internally

* Support zone and mask reset

* Handle updating objects config

* Don't put status for needing to restart Frigate

* Cleanup http tests

* Fix tests
2025-08-16 10:20:33 -05:00
Nicolas Mowen
dc187eee1c Dynamic Config Updates (#18353)
* Create classes to handle publishing and subscribing config updates

* Cleanup

* Use config updater

* Update handling for enabled config

* Cleanup

* Recording config updates

* Birdseye config updates

* Handle notifications

* handle review

* Update motion
2025-08-16 10:20:33 -05:00
Nicolas Mowen
b7dbcce6e5 Update ROCm to 6.4.0 (#18264)
* Update to rocm 6.4.0

* Update URL

* Remove old env var
2025-08-16 10:20:33 -05:00
829 changed files with 43895 additions and 8184 deletions

View File

@@ -23,7 +23,7 @@ jobs:
name: AMD64 Build
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -47,7 +47,7 @@ jobs:
name: ARM Build
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -77,42 +77,12 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
jetson_jp5_build:
if: false
runs-on: ubuntu-22.04
name: Jetson Jetpack 5
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (Jetson, Jetpack 5)
env:
ARCH: arm64
BASE_IMAGE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
SLIM_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
TRT_BASE: nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime
uses: docker/bake-action@v6
with:
source: .
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
jetson_jp6_build:
runs-on: ubuntu-22.04-arm
name: Jetson Jetpack 6
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -143,7 +113,7 @@ jobs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -185,7 +155,7 @@ jobs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
@@ -203,6 +173,31 @@ jobs:
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
synaptics_build:
runs-on: ubuntu-22.04-arm
name: Synaptics Build
needs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Synaptics build
uses: docker/bake-action@v6
with:
source: .
push: true
targets: synaptics
files: docker/synaptics/synaptics.hcl
set: |
synaptics.tags=${{ steps.setup.outputs.image-name }}-synaptics
*.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:
@@ -217,7 +212,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -4,43 +4,19 @@ on:
pull_request:
paths-ignore:
- "docs/**"
- ".github/**"
- ".github/*.yml"
- ".github/DISCUSSION_TEMPLATE/**"
- ".github/ISSUE_TEMPLATE/**"
env:
DEFAULT_PYTHON: 3.11
jobs:
build_devcontainer:
runs-on: ubuntu-latest
name: Build Devcontainer
# The Dockerfile contains features that requires buildkit, and since the
# devcontainer cli uses docker-compose to build the image, the only way to
# ensure docker-compose uses buildkit is to explicitly enable it.
env:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
run: devcontainer build --workspace-folder .
# It would be nice to also test the following commands, but for some
# reason they don't work even though in VS Code devcontainer works.
# - name: Start devcontainer
# run: devcontainer up --workspace-folder .
# - name: Run devcontainer scripts
# run: devcontainer run-user-commands --workspace-folder .
web_lint:
name: Web - Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-node@master
@@ -56,7 +32,7 @@ jobs:
name: Web - Test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
persist-credentials: false
- uses: actions/setup-node@master
@@ -76,7 +52,7 @@ jobs:
name: Python Checks
steps:
- name: Check out the repository
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
@@ -99,16 +75,21 @@ jobs:
name: Python Tests
steps:
- name: Check out code
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build
run: make
- name: Run mypy
run: docker run --rm --entrypoint=python3 frigate:latest -u -m mypy --config-file frigate/mypy.ini frigate
- name: Run tests
run: docker run --rm --entrypoint=python3 frigate:latest -u -m unittest
- uses: actions/setup-node@master
with:
node-version: 20.x
- name: Install devcontainer cli
run: npm install --global @devcontainers/cli
- name: Build devcontainer
env:
DOCKER_BUILDKIT: "1"
run: devcontainer build --workspace-folder .
- name: Start devcontainer
run: devcontainer up --workspace-folder .
- name: Run mypy in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m mypy --config-file frigate/mypy.ini frigate"
- name: Run unit tests in devcontainer
run: devcontainer exec --workspace-folder . bash -lc "python3 -u -m unittest"

View File

@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
persist-credentials: false
- id: lowercaseRepo
@@ -18,7 +18,7 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
with:
registry: ghcr.io
username: ${{ github.actor }}

View File

@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.16.2
VERSION = 0.17.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
BOARDS= #Initialized empty
@@ -20,6 +20,12 @@ local: version
--tag frigate:latest \
--load
debug: version
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--build-arg DEBUG=true \
--tag frigate:latest \
--load
amd64:
docker buildx build --target=frigate --file docker/main/Dockerfile . \
--tag $(IMAGE_REPO):$(VERSION)-$(COMMIT_HASH) \

View File

@@ -4,13 +4,13 @@ from statistics import mean
import numpy as np
import frigate.util as util
from frigate.config import DetectorTypeEnum
from frigate.object_detection.base import (
ObjectDetectProcess,
RemoteObjectDetector,
load_labels,
)
from frigate.util.process import FrigateProcess
my_frame = np.expand_dims(np.full((300, 300, 3), 1, np.uint8), axis=0)
labels = load_labels("/labelmap.txt")
@@ -91,7 +91,7 @@ edgetpu_process_2 = ObjectDetectProcess(
)
for x in range(0, 10):
camera_process = util.Process(
camera_process = FrigateProcess(
target=start, args=(x, 300, detection_queue, events[str(x)])
)
camera_process.daemon = True

View File

@@ -55,7 +55,7 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.9/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.10/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH
@@ -148,6 +148,7 @@ RUN --mount=type=bind,source=docker/main/install_s6_overlay.sh,target=/deps/inst
FROM base AS wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
ARG DEBUG=false
# Use a separate container to build wheels to prevent build dependencies in final image
RUN apt-get -qq update \
@@ -177,6 +178,8 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip"
COPY docker/main/requirements.txt /requirements.txt
COPY docker/main/requirements-dev.txt /requirements-dev.txt
RUN pip3 install -r /requirements.txt
# Build pysqlite3 from source
@@ -184,7 +187,10 @@ COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt && \
if [ "$DEBUG" = "true" ]; then \
pip3 wheel --wheel-dir=/wheels -r /requirements-dev.txt; \
fi
# Install HailoRT & Wheels
RUN --mount=type=bind,source=docker/main/install_hailort.sh,target=/deps/install_hailort.sh \
@@ -206,6 +212,7 @@ COPY docker/main/rootfs/ /
# Frigate deps (ffmpeg, python, nginx, go2rtc, s6-overlay, etc)
FROM slim-base AS deps
ARG TARGETARCH
ARG BASE_IMAGE
ARG DEBIAN_FRONTEND
# http://stackoverflow.com/questions/48162574/ddg#49462622
@@ -224,9 +231,15 @@ ENV TRANSFORMERS_NO_ADVISORY_WARNINGS=1
# Set OpenCV ffmpeg loglevel to fatal: https://ffmpeg.org/doxygen/trunk/log_8h.html
ENV OPENCV_FFMPEG_LOGLEVEL=8
# Set NumPy to ignore getlimits warning
ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE
# TensorFlow error only
ENV TF_CPP_MIN_LOG_LEVEL=3
ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies
@@ -243,6 +256,10 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
pip3 install -U /deps/wheels/*.whl
# Install MemryX runtime (requires libgomp (OpenMP) in the final docker image)
RUN --mount=type=bind,source=docker/main/install_memryx.sh,target=/deps/install_memryx.sh \
bash -c "bash /deps/install_memryx.sh"
COPY --from=deps-rootfs / /
RUN ldconfig

View File

@@ -19,7 +19,8 @@ apt-get -qq install --no-install-recommends -y \
nethogs \
libgl1 \
libglib2.0-0 \
libusb-1.0.0
libusb-1.0.0 \
libgomp1 # memryx detector
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 1
@@ -31,6 +32,18 @@ unset DEBIAN_FRONTEND
yes | dpkg -i /tmp/libedgetpu1-max.deb && export DEBIAN_FRONTEND=noninteractive
rm /tmp/libedgetpu1-max.deb
# install mesa-teflon-delegate from bookworm-backports
# Only available for arm64 at the moment
if [[ "${TARGETARCH}" == "arm64" ]]; then
if [[ "${BASE_IMAGE}" == *"nvcr.io/nvidia/tensorrt"* ]]; then
echo "Info: Skipping apt-get commands because BASE_IMAGE includes 'nvcr.io/nvidia/tensorrt' for arm64."
else
echo "deb http://deb.debian.org/debian bookworm-backports main" | tee /etc/apt/sources.list.d/bookworm-backbacks.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y mesa-teflon-delegate/bookworm-backports
fi
fi
# ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/ffmpeg/5.0
@@ -78,11 +91,33 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu jammy client" | tee /etc/apt/sources.list.d/intel-gpu-jammy.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-opencl-icd=24.35.30872.31-996~22.04 intel-level-zero-gpu=1.3.29735.27-914~22.04 intel-media-va-driver-non-free=24.3.3-996~22.04 \
libmfx1=23.2.2-880~22.04 libmfxgen1=24.2.4-914~22.04 libvpl2=1:2.13.0.0-996~22.04
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2
apt-get -qq install -y ocl-icd-libopencl1
rm -f /usr/share/keyrings/intel-graphics.gpg
rm -f /etc/apt/sources.list.d/intel-gpu-jammy.list
# install legacy and standard intel icd and level-zero-gpu
# see https://github.com/intel/compute-runtime/blob/master/LEGACY_PLATFORMS.md for more info
# needed core package
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/libigdgmm12_22.5.5_amd64.deb
dpkg -i libigdgmm12_22.5.5_amd64.deb
rm libigdgmm12_22.5.5_amd64.deb
# legacy packages
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
# standard packages
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-opencl-icd_24.52.32224.5_amd64.deb
wget https://github.com/intel/compute-runtime/releases/download/24.52.32224.5/intel-level-zero-gpu_1.6.32224.5_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-opencl-2_2.5.6+18417_amd64.deb
wget https://github.com/intel/intel-graphics-compiler/releases/download/v2.5.6/intel-igc-core-2_2.5.6+18417_amd64.deb
dpkg -i *.deb
rm *.deb
fi
if [[ "${TARGETARCH}" == "arm64" ]]; then

View File

@@ -0,0 +1,31 @@
#!/bin/bash
set -e
# Download the MxAccl for Frigate github release
wget https://github.com/memryx/mx_accl_frigate/archive/refs/heads/main.zip -O /tmp/mxaccl.zip
unzip /tmp/mxaccl.zip -d /tmp
mv /tmp/mx_accl_frigate-main /opt/mx_accl_frigate
rm /tmp/mxaccl.zip
# Install Python dependencies
pip3 install -r /opt/mx_accl_frigate/freeze
# Link the Python package dynamically
SITE_PACKAGES=$(python3 -c "import site; print(site.getsitepackages()[0])")
ln -s /opt/mx_accl_frigate/memryx "$SITE_PACKAGES/memryx"
# Copy architecture-specific shared libraries
ARCH=$(uname -m)
if [[ "$ARCH" == "x86_64" ]]; then
cp /opt/mx_accl_frigate/memryx/x86/libmemx.so* /usr/lib/x86_64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/x86/libmx_accl.so* /usr/lib/x86_64-linux-gnu/
elif [[ "$ARCH" == "aarch64" ]]; then
cp /opt/mx_accl_frigate/memryx/arm/libmemx.so* /usr/lib/aarch64-linux-gnu/
cp /opt/mx_accl_frigate/memryx/arm/libmx_accl.so* /usr/lib/aarch64-linux-gnu/
else
echo "Unsupported architecture: $ARCH"
exit 1
fi
# Refresh linker cache
ldconfig

View File

@@ -1 +1,4 @@
ruff
# types
types-peewee == 3.17.*

View File

@@ -1,24 +1,28 @@
aiofiles == 24.1.*
click == 8.1.*
# FastAPI
aiohttp == 3.11.3
starlette == 0.41.2
starlette-context == 0.3.6
fastapi == 0.115.*
uvicorn == 0.30.*
aiohttp == 3.12.*
starlette == 0.47.*
starlette-context == 0.4.*
fastapi[standard-no-fastapi-cloud-cli] == 0.116.*
uvicorn == 0.35.*
slowapi == 0.1.*
joserfc == 1.0.*
pathvalidate == 3.2.*
joserfc == 1.2.*
cryptography == 44.0.*
pathvalidate == 3.3.*
markupsafe == 3.0.*
python-multipart == 0.0.12
python-multipart == 0.0.20
# Classification Model Training
tensorflow == 2.19.* ; platform_machine == 'aarch64'
tensorflow-cpu == 2.19.* ; platform_machine == 'x86_64'
# General
mypy == 1.6.1
onvif-zeep-async == 3.1.*
onvif-zeep-async == 4.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.13.*
psutil == 6.1.*
psutil == 7.1.*
pydantic == 2.10.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
pytz == 2025.*
@@ -27,7 +31,7 @@ ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
norfair == 2.2.*
norfair == 2.3.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
@@ -36,16 +40,15 @@ titlecase == 2.4.*
numpy == 1.26.*
opencv-python-headless == 4.11.0.*
opencv-contrib-python == 4.11.0.*
scipy == 1.14.*
scipy == 1.16.*
# OpenVino & ONNX
openvino == 2024.4.*
onnxruntime-openvino == 1.20.* ; platform_machine == 'x86_64'
onnxruntime == 1.20.* ; platform_machine == 'aarch64'
openvino == 2025.3.*
onnxruntime == 1.22.*
# Embeddings
transformers == 4.45.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.3.*
ollama == 0.5.*
openai == 1.65.*
# push notifications
py-vapid == 1.9.*
@@ -71,3 +74,10 @@ prometheus-client == 0.21.*
# TFLite
tflite_runtime @ https://github.com/frigate-nvr/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_x86_64.whl; platform_machine == 'x86_64'
tflite_runtime @ https://github.com/feranick/TFlite-builds/releases/download/v2.17.1/tflite_runtime-2.17.1-cp311-cp311-linux_aarch64.whl; platform_machine == 'aarch64'
# audio transcription
sherpa-onnx==1.12.*
faster-whisper==1.1.*
librosa==0.11.*
soundfile==0.13.*
# DeGirum detector
degirum == 0.16.*

View File

@@ -10,7 +10,7 @@ echo "[INFO] Starting certsync..."
lefile="/etc/letsencrypt/live/frigate/fullchain.pem"
tls_enabled=`python3 /usr/local/nginx/get_tls_settings.py | jq -r .enabled`
tls_enabled=`python3 /usr/local/nginx/get_listen_settings.py | jq -r .tls.enabled`
while true
do

View File

@@ -50,6 +50,38 @@ function set_libva_version() {
export LIBAVFORMAT_VERSION_MAJOR
}
function setup_homekit_config() {
local config_path="$1"
if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty HomeKit config file..."
echo '{}' > "${config_path}"
fi
# Convert YAML to JSON for jq processing
local temp_json="/tmp/cache/homekit_config.json"
yq eval -o=json "${config_path}" > "${temp_json}" 2>/dev/null || {
echo "[WARNING] Failed to convert HomeKit config to JSON, skipping cleanup"
return 0
}
# Use jq to filter and keep only the homekit section
local cleaned_json="/tmp/cache/homekit_cleaned.json"
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end
' "${temp_json}" > "${cleaned_json}" 2>/dev/null || echo '{"homekit": {}}' > "${cleaned_json}"
# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
echo '{"homekit": {}}' > "${config_path}"
}
# Clean up temp files
rm -f "${temp_json}" "${cleaned_json}"
}
set_libva_version
if [[ -f "/dev/shm/go2rtc.yaml" ]]; then
@@ -70,6 +102,10 @@ else
echo "[WARNING] Unable to remove existing go2rtc config. Changes made to your frigate config file may not be recognized. Please remove the /dev/shm/go2rtc.yaml from your docker host manually."
fi
# HomeKit configuration persistence setup
readonly homekit_config_path="/config/go2rtc_homekit.yml"
setup_homekit_config "${homekit_config_path}"
readonly config_path="/config"
if [[ -x "${config_path}/go2rtc" ]]; then
@@ -82,5 +118,7 @@ fi
echo "[INFO] Starting go2rtc..."
# Replace the bash process with the go2rtc process, redirecting stderr to stdout
# Use HomeKit config as the primary config so writebacks go there
# The main config from Frigate will be loaded as a secondary config
exec 2>&1
exec "${binary_path}" -config=/dev/shm/go2rtc.yaml
exec "${binary_path}" -config="${homekit_config_path}" -config=/dev/shm/go2rtc.yaml

View File

@@ -85,7 +85,7 @@ python3 /usr/local/nginx/get_base_path.py | \
-out /usr/local/nginx/conf/base_path.conf
# build templates for optional TLS support
python3 /usr/local/nginx/get_tls_settings.py | \
python3 /usr/local/nginx/get_listen_settings.py | \
tempio -template /usr/local/nginx/templates/listen.gotmpl \
-out /usr/local/nginx/conf/listen.conf

View File

@@ -17,7 +17,9 @@ http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
'"$http_user_agent" "$http_x_forwarded_for" '
'request_time="$request_time" upstream_response_time="$upstream_response_time"';
access_log /dev/stdout main;

View File

@@ -26,6 +26,10 @@ try:
except FileNotFoundError:
config: dict[str, Any] = {}
tls_config: dict[str, Any] = config.get("tls", {"enabled": True})
tls_config: dict[str, any] = config.get("tls", {"enabled": True})
networking_config = config.get("networking", {})
ipv6_config = networking_config.get("ipv6", {"enabled": False})
print(json.dumps(tls_config))
output = {"tls": tls_config, "ipv6": ipv6_config}
print(json.dumps(output))

View File

@@ -1,33 +1,45 @@
# intended for internal traffic, not protected by auth
# Internal (IPv4 always; IPv6 optional)
listen 5000;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:5000;{{ end }}{{ end }}
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen 8971;
{{ if .tls }}
{{ if .tls.enabled }}
# external HTTPS (IPv4 always; IPv6 optional)
listen 8971 ssl;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971 ssl;{{ end }}{{ end }}
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ else }}
# external HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
{{ end }}
{{ else }}
# intended for external traffic, protected by auth
listen 8971 ssl;
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
# (No tls section) default to HTTP (IPv4 always; IPv6 optional)
listen 8971;
{{ if .ipv6 }}{{ if .ipv6.enabled }}listen [::]:8971;{{ end }}{{ end }}
{{ end }}

View File

@@ -0,0 +1,47 @@
#!/bin/bash
set -e # Exit immediately if any command fails
set -o pipefail
echo "Starting MemryX driver and runtime installation..."
# Detect architecture
arch=$(uname -m)
# Purge existing packages and repo
echo "Removing old MemryX installations..."
# Remove any holds on MemryX packages (if they exist)
sudo apt-mark unhold memx-* mxa-manager || true
sudo apt purge -y memx-* mxa-manager || true
sudo rm -f /etc/apt/sources.list.d/memryx.list /etc/apt/trusted.gpg.d/memryx.asc
# Install kernel headers
echo "Installing kernel headers for: $(uname -r)"
sudo apt update
sudo apt install -y dkms linux-headers-$(uname -r)
# Add MemryX key and repo
echo "Adding MemryX GPG key and repository..."
wget -qO- https://developer.memryx.com/deb/memryx.asc | sudo tee /etc/apt/trusted.gpg.d/memryx.asc >/dev/null
echo 'deb https://developer.memryx.com/deb stable main' | sudo tee /etc/apt/sources.list.d/memryx.list >/dev/null
# Update and install memx-drivers
echo "Installing memx-drivers..."
sudo apt update
sudo apt install -y memx-drivers
# ARM-specific board setup
if [[ "$arch" == "aarch64" || "$arch" == "arm64" ]]; then
echo "Running ARM board setup..."
sudo mx_arm_setup
fi
echo -e "\n\n\033[1;31mYOU MUST RESTART YOUR COMPUTER NOW\033[0m\n\n"
# Install other runtime packages
packages=("memx-accl" "mxa-manager")
for pkg in "${packages[@]}"; do
echo "Installing $pkg..."
sudo apt install -y "$pkg"
done
echo "MemryX installation complete!"

View File

@@ -11,7 +11,8 @@ COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN sed -i "/onnxruntime/d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
RUN rm -rf /rk-wheels/opencv_python-*
RUN rm -rf /rk-wheels/torch-*

View File

@@ -2,7 +2,7 @@
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=6.3.3
ARG ROCM=1
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
@@ -13,16 +13,16 @@ FROM wget AS rocm
ARG ROCM
ARG AMDGPU
RUN apt update && \
RUN apt update -qq && \
apt install -y wget gpg && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/$ROCM/ubuntu/jammy/amdgpu-install_6.3.60303-1_all.deb && \
wget -O rocm.deb https://repo.radeon.com/amdgpu-install/7.0.1/ubuntu/jammy/amdgpu-install_7.0.1.70001-1_all.deb && \
apt install -y ./rocm.deb && \
apt update && \
apt install -y rocm
apt install -qq -y rocm
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* libroctracer*.so* librocsolver*.so* librocfft*.so* librocprofiler*.so* libroctx*.so* librocroller.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/ && \
mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib && \
cp -dpr migraphx/lib/* /opt/rocm-dist/opt/rocm-$ROCM/lib/migraphx/lib
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
@@ -33,7 +33,10 @@ RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM deps AS deps-prelim
RUN apt-get update && apt-get install -y libnuma1
COPY docker/rocm/debian-backports.sources /etc/apt/sources.list.d/debian-backports.sources
RUN apt-get update && \
apt-get install -y libnuma1 && \
apt-get install -qq -y -t bookworm-backports mesa-va-drivers mesa-vulkan-drivers
WORKDIR /opt/frigate
COPY --from=rootfs / /
@@ -44,7 +47,7 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
RUN python3 -m pip config set global.break-system-packages true
COPY docker/rocm/requirements-wheels-rocm.txt /requirements.txt
RUN pip3 uninstall -y onnxruntime-openvino \
RUN pip3 uninstall -y onnxruntime \
&& pip3 install -r /requirements.txt
#######################################################################
@@ -61,9 +64,10 @@ COPY --from=rocm /opt/rocm-dist/ /
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
ENV MIGRAPHX_ENABLE_NHWC=1
ENV TF_ROCM_USE_IMMEDIATE_MODE=1
ENV MIGRAPHX_DISABLE_MIOPEN_FUSION=1
ENV MIGRAPHX_DISABLE_SCHEDULE_PASS=1
ENV MIGRAPHX_DISABLE_REDUCE_FUSION=1
ENV MIGRAPHX_ENABLE_HIPRTC_WORKAROUNDS=1
COPY --from=rocm-dist / /

View File

@@ -0,0 +1,6 @@
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm-backports
Components: main
Enabled: yes
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg

View File

@@ -1 +1 @@
onnxruntime-rocm @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v6.3.3/onnxruntime_rocm-1.20.1-cp311-cp311-linux_x86_64.whl
onnxruntime-migraphx @ https://github.com/NickM-27/frigate-onnxruntime-rocm/releases/download/v7.0.1/onnxruntime_migraphx-1.23.0-cp311-cp311-linux_x86_64.whl

View File

@@ -2,7 +2,7 @@ variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "6.3.3"
default = "7.0.1"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""

View File

@@ -0,0 +1,28 @@
# syntax=docker/dockerfile:1.6
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
# Globally set pip break-system-packages option to avoid having to specify it every time
ARG PIP_BREAK_SYSTEM_PACKAGES=1
FROM wheels AS synap1680-wheels
ARG TARGETARCH
# Install dependencies
RUN wget -qO- "https://github.com/GaryHuang-ASUS/synaptics_astra_sdk/releases/download/v1.5.0/Synaptics-SL1680-v1.5.0-rt.tar" | tar -C / -xzf -
RUN wget -P /wheels/ "https://github.com/synaptics-synap/synap-python/releases/download/v0.0.4-preview/synap_python-0.0.4-cp311-cp311-manylinux_2_35_aarch64.whl"
FROM deps AS synap1680-deps
ARG TARGETARCH
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=synap1680-wheels,source=/wheels,target=/deps/synap-wheels \
pip3 install --no-deps -U /deps/synap-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=synap1680-wheels /rootfs/usr/local/lib/*.so /usr/lib
ADD https://raw.githubusercontent.com/synaptics-astra/synap-release/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80/model.synap /synaptics/mobilenet.synap

View File

@@ -0,0 +1,27 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "rootfs"
}
target synaptics {
dockerfile = "docker/synaptics/Dockerfile"
contexts = {
wheels = "target:wheels",
deps = "target:deps",
rootfs = "target:rootfs"
}
platforms = ["linux/arm64"]
}

View File

@@ -0,0 +1,15 @@
BOARDS += synaptics
local-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=frigate:latest-synaptics \
--load
build-synaptics: version
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics
push-synaptics: build-synaptics
docker buildx bake --file=docker/synaptics/synaptics.hcl synaptics \
--set synaptics.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-synaptics \
--push

View File

@@ -12,13 +12,16 @@ ARG PIP_BREAK_SYSTEM_PACKAGES
# Install TensorRT wheels
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/trt-wheels -c /requirements-wheels.txt -r /requirements-tensorrt.txt
# remove dependencies from the requirements that have type constraints
RUN sed -i '/\[.*\]/d' /requirements-wheels.txt \
&& pip3 wheel --wheel-dir=/trt-wheels -c /requirements-wheels.txt -r /requirements-tensorrt.txt
FROM deps AS frigate-tensorrt
ARG PIP_BREAK_SYSTEM_PACKAGES
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 uninstall -y onnxruntime-openvino tensorflow-cpu \
pip3 uninstall -y onnxruntime tensorflow-cpu \
&& pip3 install -U /deps/trt-wheels/*.whl
COPY --from=rootfs / /

View File

@@ -13,6 +13,7 @@ nvidia_cusolver_cu12==11.6.3.*; platform_machine == 'x86_64'
nvidia_cusparse_cu12==12.5.1.*; platform_machine == 'x86_64'
nvidia_nccl_cu12==2.23.4; platform_machine == 'x86_64'
nvidia_nvjitlink_cu12==12.5.82; platform_machine == 'x86_64'
tensorflow==2.19.*; platform_machine == 'x86_64'
onnx==1.16.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.20.*; platform_machine == 'x86_64'
onnxruntime-gpu==1.22.*; platform_machine == 'x86_64'
protobuf==3.20.3; platform_machine == 'x86_64'

View File

@@ -1,3 +1,2 @@
onnx == 1.14.0; platform_machine == 'aarch64'
protobuf == 3.20.3; platform_machine == 'aarch64'
numpy == 1.23.*; platform_machine == 'aarch64' # required by python-tensorrt 8.2.1 (Jetpack 4.6)

View File

@@ -177,9 +177,11 @@ listen [::]:5000 ipv6only=off;
By default, Frigate runs at the root path (`/`). However, some setups require running Frigate under a custom path prefix (e.g. `/frigate`), especially when Frigate is located behind a reverse proxy that requires path-based routing.
### Set Base Path via HTTP Header
The preferred way to configure the base path is through the `X-Ingress-Path` HTTP header, which needs to be set to the desired base path in an upstream reverse proxy.
For example, in Nginx:
```
location /frigate {
proxy_set_header X-Ingress-Path /frigate;
@@ -188,9 +190,11 @@ location /frigate {
```
### Set Base Path via Environment Variable
When it is not feasible to set the base path via an HTTP header, it can also be set via the `FRIGATE_BASE_PATH` environment variable in the Docker Compose file.
For example:
```
services:
frigate:
@@ -200,6 +204,7 @@ services:
```
This can be used, for example, to access Frigate via a Tailscale agent (https) by simply forwarding all requests to the base path (http):
```
tailscale serve --https=443 --bg --set-path /frigate http://localhost:5000/frigate
```
@@ -218,7 +223,7 @@ To do this:
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.9, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.10, there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -50,7 +50,7 @@ cameras:
### Configuring Minimum Volume
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that frigate will not run audio detection unless the audio volume is above the configured level in order to reduce resource usage. Audio levels can vary widely between camera models so it is important to run tests to see what volume levels are. MQTT explorer can be used on the audio topic to see what volume level is being detected.
The audio detector uses volume levels in the same way that motion in a camera feed is used for object detection. This means that Frigate will not run audio detection unless the audio volume is above the configured level, in order to reduce resource usage. Audio levels can vary widely between camera models, so it is important to run tests to see what the volume levels are. The Debug view in the Frigate UI has an Audio tab for cameras that have the `audio` role assigned, where a graph and the current levels are displayed. The `min_volume` parameter should be set to the minimum `RMS` level required to run audio detection.
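For example, the threshold could be raised for a camera in a noisy environment. A minimal sketch, assuming the detector is configured under the `audio` section used elsewhere on this page (the value is illustrative and should be tuned against the RMS levels shown in the Debug view):

```yaml
audio:
  enabled: True
  min_volume: 750 # only run audio detection when the RMS volume is above this level
```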
:::tip
@@ -72,3 +72,77 @@ audio:
- speech
- yell
```
### Audio Transcription
Frigate supports fully local audio transcription using either `sherpa-onnx` or OpenAI's open-source Whisper models via `faster-whisper`. To enable transcription, it is recommended to configure the feature at the global level and enable it at the individual camera level.
```yaml
audio_transcription:
enabled: False
device: ...
model_size: ...
```
Enable audio transcription for select cameras at the camera level:
```yaml
cameras:
back_yard:
...
audio_transcription:
enabled: True
```
:::note
Audio detection must be enabled and configured as described above in order to use audio transcription features.
:::
The optional config parameters that can be set at the global level include:
- **`enabled`**: Enable or disable the audio transcription feature.
- Default: `False`
- It is recommended to only configure the features at the global level, and enable it at the individual camera level.
- **`device`**: Device to use to run transcription and translation models.
- Default: `CPU`
- This can be `CPU` or `GPU`. The `sherpa-onnx` models are lightweight and run on the CPU only. The `whisper` models can run on GPU but are only supported on CUDA hardware.
- **`model_size`**: The size of the model used for live transcription.
- Default: `small`
- This can be `small` or `large`. The `small` setting uses `sherpa-onnx` models that are fast, lightweight, and always run on the CPU but are not as accurate as the `whisper` model.
- This config option applies to **live transcription only**. Recorded `speech` events will always use a different `whisper` model (and can be accelerated for CUDA hardware if available with `device: GPU`).
- **`language`**: Defines the language used by `whisper` to translate `speech` audio events (and live audio only if using the `large` model).
- Default: `en`
- You must use a valid [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
- Transcriptions for `speech` events are translated.
- Live audio is translated only if you are using the `large` model. The `small` `sherpa-onnx` model is English-only.
The only field that is valid at the camera level is `enabled`.
#### Live transcription
The single camera Live view in the Frigate UI supports live transcription of audio for streams defined with the `audio` role. Use the Enable/Disable Live Audio Transcription button/switch to toggle transcription processing. When speech is heard, the UI will display a black box over the top of the camera stream with text. The MQTT topic `frigate/<camera_name>/audio/transcription` will also be updated in real-time with transcribed text.
Results can be error-prone due to a number of factors, including:
- Poor quality camera microphone
- Distance of the audio source to the camera microphone
- Low audio bitrate setting in the camera
- Background noise
- Using the `small` model - it's fast, but not accurate for poor quality audio
For speech sources close to the camera with minimal background noise, use the `small` model.
If you have CUDA hardware, you can experiment with the `large` `whisper` model on GPU. Performance is not quite as fast as the `sherpa-onnx` `small` model, but live transcription is far more accurate. Using the `large` model with CPU will likely be too slow for real-time transcription.
#### Transcription and translation of `speech` audio events
Any `speech` events in Explore can be transcribed and/or translated through the Transcribe button in the Tracked Object Details pane.
In order to use transcription and translation for past events, you must enable audio detection and define `speech` as an audio type to listen for in your config. To have `speech` events translated into the language of your choice, set the `language` config parameter with the correct [language code](https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10).
The transcribed/translated speech will appear in the description box in the Tracked Object Details pane. If Semantic Search is enabled, embeddings are generated for the transcription text and are fully searchable using the description search type.
Recorded `speech` events will always use a `whisper` model, regardless of the `model_size` config setting. Without a GPU, generating transcriptions for longer `speech` events may take a fair amount of time, so be patient.

View File

@@ -59,6 +59,7 @@ The default session length for user authentication in Frigate is 24 hours. This
While the default provides a balance of security and convenience, you can customize this duration to suit your specific security requirements and user experience preferences. The session length is configured in seconds.
The default value of `86400` will expire the authentication session after 24 hours. Some other examples:
- `0`: Setting the session length to 0 will require a user to log in every time they access the application or after a very short, immediate timeout.
- `604800`: Setting the session length to 604800 will require a user to log in if the token is not refreshed for 7 days.
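As a sketch, extending the session to 7 days would look like the following, assuming the `session_length` option sits under the `auth` section of the config:

```yaml
auth:
  session_length: 604800 # 7 days, in seconds
```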
@@ -80,7 +81,7 @@ python3 -c 'import secrets; print(secrets.token_hex(64))'
Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET`
2. A docker secret named `FRIGATE_JWT_SECRET` in `/run/secrets/`
2. A file named `FRIGATE_JWT_SECRET` in the directory specified by the `CREDENTIALS_DIRECTORY` environment variable (defaults to the Docker Secrets directory: `/run/secrets/`)
3. A `jwt_secret` option from the Home Assistant Add-on options
4. A `.jwt_secret` file in the config directory
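For the Docker Secrets approach (item 2), a Docker Compose setup might wire the secret in like this (a sketch; the secret file name is illustrative):

```yaml
services:
  frigate:
    # ... rest of the frigate service definition
    secrets:
      - FRIGATE_JWT_SECRET
secrets:
  FRIGATE_JWT_SECRET:
    file: ./frigate_jwt_secret.txt # file containing the generated secret
```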
@@ -123,7 +124,7 @@ proxy:
role: x-forwarded-groups
```
Frigate supports both `admin` and `viewer` roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
Frigate supports `admin`, `viewer`, and custom roles (see below). When using port `8971`, Frigate validates these headers and subsequent requests use the headers `remote-user` and `remote-role` for authorization.
A default role can be provided. Any value in the mapped `role` header will override the default.
@@ -133,6 +134,34 @@ proxy:
default_role: viewer
```
## Role mapping
In some environments, upstream identity providers (OIDC, SAML, LDAP, etc.) do not pass a Frigate-compatible role directly, but instead pass one or more group claims. To handle this, Frigate supports a `role_map` that translates upstream group names into Frigate's internal roles (`admin`, `viewer`, or custom).
```yaml
proxy:
...
header_map:
user: x-forwarded-user
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
operator: # Custom role mapping
- operators
```
In this example:
- If the proxy passes a role header containing `sysadmins` or `access-level-security`, the user is assigned the `admin` role.
- If the proxy passes a role header containing `camera-viewer`, the user is assigned the `viewer` role.
- If the proxy passes a role header containing `operators`, the user is assigned the `operator` custom role.
- If no mapping matches, Frigate falls back to `default_role` if configured.
- If `role_map` is not defined, Frigate assumes the role header directly contains `admin`, `viewer`, or a custom role name.
#### Port Considerations
**Authenticated Port (8971)**
@@ -141,6 +170,7 @@ proxy:
- The `remote-role` header determines the user's privileges:
- **admin** → Full access (user management, configuration changes).
- **viewer** → Read-only access.
- **Custom roles** → Read-only access limited to the cameras defined in `auth.roles[role]`.
- Ensure your **proxy sends both user and role headers** for proper role enforcement.
**Unauthenticated Port (5000)**
@@ -186,6 +216,41 @@ Frigate supports user roles to control access to certain features in the UI and
- **admin**: Full access to all features, including user management and configuration.
- **viewer**: Read-only access to the UI and API, including viewing cameras, review items, and historical footage. Configuration editor and settings in the UI are inaccessible.
- **Custom Roles**: Arbitrary role names (alphanumeric, dots/underscores) with specific camera permissions. These extend the system for granular access (e.g., "operator" for select cameras).
### Custom Roles and Camera Access
The viewer role provides read-only access to all cameras in the UI and API. Custom roles allow admins to limit read-only access to specific cameras. Each role specifies an array of allowed camera names. If a user is assigned a custom role, their account is like the **viewer** role - they can only view Live, Review/History, Explore, and Export for the designated cameras. Backend API endpoints enforce this server-side (e.g., returning 403 for unauthorized cameras), and the frontend UI filters content accordingly (e.g., camera dropdowns show only permitted options).
### Role Configuration Example
```yaml
cameras:
front_door:
# ... camera config
side_yard:
# ... camera config
garage:
# ... camera config
auth:
enabled: true
roles:
operator: # Custom role
- front_door
- garage # Operator can access front and garage
neighbor:
- side_yard
```
If you want to provide access to all cameras to a specific user, just use the **viewer** role.
### Managing User Roles
1. Log in as an **admin** user via port `8971` (preferred), or unauthenticated via port `5000`.
2. Navigate to **Settings**.
3. In the **Users** section, edit a user's role by selecting from available roles (admin, viewer, or custom).
4. In the **Roles** section, add/edit/delete custom roles (select cameras via switches). Deleting a role auto-reassigns users to "viewer".
### Role Enforcement

View File

@@ -147,7 +147,7 @@ WEB Digest Algorithm - MD5
Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
| Camera Resolution | Camera Generation | Recommended Stream Type | Additional Notes |
| ---------------- | ------------------------- | -------------------------------- | ----------------------------------------------------------------------- |
| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
| 5MP or lower | All | http-flv | Stream is h264 |
| 6MP or higher | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
| 6MP or higher | Older (ex: RLC-8##) | rtsp | |
@@ -231,7 +231,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-rtsp)
In the Unifi 2.0 update, Unifi Protect cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with Unifi Protect.
@@ -250,6 +250,7 @@ TP-Link VIGI cameras need some adjustments to the main stream settings on the ca
To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's [FFmpeg Device](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#source-ffmpeg-device) support:
- Preparation outside of Frigate:
- Get USB camera path. Run `v4l2-ctl --list-devices` to get a listing of locally-connected cameras available. (You may need to install `v4l-utils` in a way appropriate for your Linux distribution). In the sample configuration below, we use `video=0` to correlate with a detected device path of `/dev/video0`
- Get USB camera formats & resolutions. Run `ffmpeg -f v4l2 -list_formats all -i /dev/video0` to get an idea of what formats and resolutions the USB Camera supports. In the sample configuration below, we use a width of 1024 and height of 576 in the stream and detection settings based on what was reported back.
- If using Frigate in a container (e.g. Docker on TrueNAS), ensure you have USB Passthrough support enabled, along with a specific Host Device (`/dev/video0`) + Container Device (`/dev/video0`) listed.
@@ -277,5 +278,3 @@ cameras:
width: 1024
height: 576
```

View File

@@ -89,7 +89,9 @@ An ONVIF-capable camera that supports relative movement within the field of view
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback.
This list of working and non-working PTZ cameras is based on user feedback. If you'd like to report specific quirks or issues with a manufacturer or camera that would be helpful for other users, open a pull request to add to this list.
The FeatureList on the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/) can provide a starting point to determine a camera's compatibility with Frigate's autotracking. Look to see if a camera lists `PTZRelative`, `PTZRelativePanTilt` and/or `PTZRelativeZoom`, plus `PTZAuxiliary`. These features are required for autotracking, but some cameras still fail to respond even if they claim support. If they are missing, autotracking will not work (though basic PTZ in the WebUI might). Avoid cameras with no database entry unless they are confirmed as working below.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -99,11 +101,13 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Annke CZ504 | ✅ | ✅ | Annke support provide specific firmware ([V5.7.1 build 250227](https://github.com/pierrepinon/annke_cz504/raw/refs/heads/main/digicap_V5-7-1_build_250227.dav)) to fix issue with ONVIF "TranslationSpaceFov" |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, among others) have been reported to not support autotracking |
| Dahua | ✅ | ✅ | Some low-end Dahuas (lite series, picoo series (commonly), among others) have been reported to not support autotracking. These models usually don't have a four digit model number with chassis prefix and options postfix (e.g. DH-P5AE-PV vs DH-SD49825GB-HNR). |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Dahua DH-SD49825GB-HNR | ✅ | ✅ | |
| Dahua DH-P5AE-PV | ❌ | ❌ | |
| Foscam | ✅ | ❌ | In general supports PTZ, but not relative move. There are no official ONVIF certifications or tests available in the ONVIF Conformant Products Database. |
| Foscam R5 | ✅ | ❌ | |
| Foscam SD4 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
@@ -134,3 +138,6 @@ camera_groups:
icon: LuCar
order: 0
```
## Two-Way Audio
See the guide [here](/configuration/live/#two-way-talk)

View File

@@ -0,0 +1,73 @@
---
id: object_classification
title: Object Classification
---
Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
## Minimum System Requirements
Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
### Sub label vs Attribute
- **Sub label**:
- Applied to the object's `sub_label` field.
- Ideal for a single, more specific identity or type.
- Example: `cat` → `Leo`, `Charlie`, `None`.
- **Attribute**:
- Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
- Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
## Example use cases
### Sub label
- **Known pet vs unknown**: For `dog` objects, set sub label to your pet's name (e.g., `buddy`) or `none` for others.
- **Mail truck vs normal car**: For `car`, classify as `mail_truck` vs `car` to filter important arrivals.
- **Delivery vs non-delivery person**: For `person`, classify `delivery` vs `visitor` based on uniform/props.
### Attributes
- **Backpack**: For `person`, add attribute `backpack: yes/no`.
- **Helmet**: For `person` (worksite), add `helmet: yes/no`.
- **Leash**: For `dog`, add `leash: yes/no` (useful for park or yard rules).
- **Ladder rack**: For `truck`, add `ladder_rack: yes/no` to flag service vehicles.
## Configuration
Object classification is configured as a custom classification model. Each model has its own name and settings. You must list which object labels should be classified.
```yaml
classification:
custom:
dog:
threshold: 0.8
object_config:
objects: [dog] # object labels to classify
classification_type: sub_label # or: attribute
```
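An attribute-style model (like the helmet example above) follows the same schema, just with `classification_type: attribute`. A sketch with an illustrative model name and threshold:

```yaml
classification:
  custom:
    helmet:
      threshold: 0.8
      object_config:
        objects: [person] # object labels to classify
        classification_type: attribute # adds "helmet: <predicted_value>" as object metadata
```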
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.
// TODO add this section once UI is implemented. Explain process of selecting objects and curating training examples.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate's boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.

View File

@@ -0,0 +1,52 @@
---
id: state_classification
title: State Classification
---
State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
## Minimum System Requirements
State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.
Training the model does briefly use a high amount of system resources for about 1-3 minutes per training run. On lower-power devices, training may take longer.
When running the `-tensorrt` image, Nvidia GPUs will automatically be used to accelerate training.
## Example use cases
- **Door state**: Detect if a garage or front door is open vs closed.
- **Gate state**: Track if a driveway gate is open or closed.
- **Trash day**: Bins at curb vs no bins present.
- **Pool cover**: Cover on vs off.
## Configuration
State classification is configured as a custom classification model. Each model has its own name and settings. You must provide at least one camera crop under `state_config.cameras`.
```yaml
classification:
custom:
front_door:
threshold: 0.8
state_config:
motion: true # run when motion overlaps the crop
interval: 10 # also run every N seconds (optional)
cameras:
front:
crop: [0, 180, 220, 400]
```
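A schedule-only model is also possible by disabling motion-triggered runs and relying on the interval, as in this sketch for the trash-day example above (the camera name, crop, and interval are illustrative):

```yaml
classification:
  custom:
    trash_bins:
      threshold: 0.8
      state_config:
        motion: false # do not trigger on motion
        interval: 300 # classify the crop every 5 minutes
        cameras:
          driveway:
            crop: [100, 100, 300, 300]
```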
## Training the model
Creating and training the model is done within the Frigate UI using the `Classification` page.
### Getting Started
When choosing a portion of the camera frame for state classification, it is important to make the crop tight around the area of interest to avoid extra signals unrelated to what is being classified.
// TODO add this section once UI is implemented. Explain process of selecting a crop.
### Improving the Model
- **Problem framing**: Keep classes visually distinct and state-focused (e.g., `open`, `closed`, `unknown`). Avoid combining object identity with state in a single model unless necessary.
- **Data collection**: Use the model's Train tab to gather balanced examples across times of day and weather.

View File

@@ -24,7 +24,7 @@ Frigate needs to first detect a `person` before it can detect and recognize a fa
Frigate has support for two face recognition model types:
- **small**: Frigate will run a FaceNet embedding model to recognize faces, which runs locally on the CPU. This model is optimized for efficiency and is not as accurate.
- **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU is available.
- **large**: Frigate will run a large ArcFace embedding model that is optimized for accuracy. It is only recommended to be run when an integrated or dedicated GPU / NPU is available.
In both cases, a lightweight face landmark detection model is also used to align faces before running recognition.
@@ -34,7 +34,7 @@ All of these features run locally on your system.
The `small` model is optimized for efficiency and runs on the CPU; most CPUs should run the model efficiently.
The `large` model is optimized for accuracy, an integrated or discrete GPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
The `large` model is optimized for accuracy, an integrated or discrete GPU / NPU is required. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
## Configuration
@@ -73,6 +73,9 @@ Fine-tune face recognition with these optional parameters at the global level of
- Default: `100`.
- `blur_confidence_filter`: Enables a filter that calculates how blurry the face is and adjusts the confidence based on this.
- Default: `True`.
- `device`: Target a specific device to run the face recognition model on (multi-GPU installation).
- Default: `None`.
- Note: This setting is only applicable when using the `large` model. See [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)
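A sketch combining these options for a multi-GPU system (the exact `device` value format depends on the onnxruntime execution provider in use, so treat the value below as illustrative):

```yaml
face_recognition:
  enabled: True
  model_size: large
  blur_confidence_filter: True
  device: "0" # illustrative: target a specific GPU; only applies to the large model
```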
## Usage

View File

@@ -9,35 +9,38 @@ Requests for a description are sent off automatically to your AI provider at the
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
Generative AI can be enabled for all cameras or only for specific cameras. If GenAI is disabled for a camera, you can still manually generate descriptions for events using the HTTP API. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
cameras:
front_camera:
front_camera:
objects:
genai:
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
enabled: True # <- enable GenAI for your front camera
use_snapshot: True
objects:
- person
required_zones:
- steps
indoor_camera:
genai:
enabled: False # <- disable GenAI for your indoor camera
objects:
genai:
enabled: False # <- disable GenAI for your indoor camera
```
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Ollama
:::warning
@@ -66,7 +69,6 @@ You should have at least 8 GB of RAM available (or VRAM if running on GPU) to ru
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava:7b
@@ -93,7 +95,6 @@ To start using Gemini, you must first get an API key from [Google AI Studio](htt
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
@@ -121,7 +122,6 @@ To start using OpenAI, you must first [create an API key](https://platform.opena
```yaml
genai:
enabled: True
provider: openai
api_key: "{FRIGATE_OPENAI_API_KEY}"
model: gpt-4o
@@ -149,7 +149,6 @@ To start using Azure OpenAI, you must first [create a resource](https://learn.mi
```yaml
genai:
enabled: True
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
@@ -192,32 +191,35 @@ You are also able to define custom prompts in your configuration.
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overriden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
front_door:
genai:
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
objects:
genai:
enabled: True
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
```
### Experiment with prompts

View File

@@ -0,0 +1,142 @@
---
id: genai_config
title: Configuring Generative AI
---
## Configuration
A Generative AI provider can be configured in the global config, which will make the Generative AI features available for use. There are currently 3 native providers available to integrate with Frigate. Other providers that support the OpenAI standard API can also be used. See the OpenAI section below.
To use Generative AI, you must define a single provider at the global level of your Frigate configuration. If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
## Ollama
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
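If you run Ollama via Docker Compose, these variables might be set on the Ollama service as in the sketch below (the queue size and image tag are illustrative):

```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_NUM_PARALLEL=1
      - OLLAMA_MAX_QUEUE=64 # illustrative; size to your hardware and preferences
      - OLLAMA_MAX_LOADED_MODELS=1
```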
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). Note that Frigate will not automatically download the model you specify in your config; Ollama will try to download it, but the download may take longer than the timeout, so it is recommended to pull the model beforehand by running `ollama pull your_model` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::info
Each model is available in multiple parameter sizes (3b, 4b, 8b, etc.). Larger sizes are more capable of complex tasks and understanding of situations, but require more memory and computational resources. It is recommended to try multiple models and experiment to see which performs best.
:::
:::tip
If you are trying to use a single model for Frigate and Home Assistant, it will need to support vision and tool calling. https://github.com/skye-harris/ollama-modelfiles contains optimized model configs for this task.
:::
The following models are recommended:
| Model | Notes |
| ----------------- | ----------------------------------------------------------- |
| `Intern3.5VL` | Relatively fast with good vision comprehension |
| `gemma3` | Strong frame-to-frame understanding, slower inference times |
| `qwen2.5vl` | Fast but capable model with good vision comprehension |
| `llava-phi3` | Lightweight and fast model with vision comprehension |
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: minicpm-v:8b
provider_options: # other Ollama client options can be defined
keep_alive: -1
options:
num_ctx: 8192 # make sure the context matches other services that are using ollama
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
```
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
provider: openai
api_key: "{FRIGATE_OPENAI_API_KEY}"
model: gpt-4o
```
:::note
To use a different OpenAI-compatible API endpoint, set the `OPENAI_BASE_URL` environment variable to your provider's API URL.
:::
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
### Configuration
```yaml
genai:
provider: azure_openai
base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
api_key: "{FRIGATE_OPENAI_API_KEY}"
```

View File

@@ -0,0 +1,77 @@
---
id: genai_objects
title: Object Descriptions
---
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle, or can optionally be sent earlier after a number of significantly changed frames, for example for use in more real-time notifications. Descriptions can also be regenerated manually via the Frigate UI. Note that if you manually enter a description for a tracked object before its lifecycle ends, it will be overwritten by the generated response.
By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
Generative AI object descriptions can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/object_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_nameobjectdescriptionsset).
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects. For example, an "image caption" approach can find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
provider: ollama
base_url: http://localhost:11434
model: llava
objects:
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
```yaml
cameras:
front_door:
objects:
genai:
enabled: True
use_snapshot: True
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat
required_zones:
- steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)

View File

@@ -0,0 +1,56 @@
---
id: genai_review
title: Review Summaries
---
Generative AI can be used to automatically generate structured summaries of review items. These summaries will show up in Frigate's native notifications as well as in the UI. Generative AI can also be used to take a collection of summaries over a period of time and provide a report, which may be useful to get a quick report of everything that happened while out for some amount of time.
Summaries are requested automatically from your AI provider for alert review items when the activity has ended; they can optionally be enabled for detections as well.
Generative AI review summaries can also be toggled dynamically for a camera via MQTT with the topic `frigate/<camera_name>/review_descriptions/set`. See the [MQTT documentation](/integrations/mqtt/#frigatecamera_namereviewdescriptionsset).
## Review Summary Usage and Best Practices
Review summaries provide structured JSON responses that are saved for each review item:
```
- `scene` (string): A full description including setting, entities, actions, and any plausible supported inferences.
- `confidence` (float): 0-1 confidence in the analysis.
- `other_concerns` (list): List of user-defined concerns that may need additional investigation.
- `potential_threat_level` (integer): 0, 1, or 2 as defined below.
Threat-level definitions:
- 0 — Typical or expected activity for this location/time (includes residents, guests, or known animals engaged in normal activities, even if they glance around or scan surroundings).
- 1 — Unusual or suspicious activity: At least one security-relevant behavior is present **and not explainable by a normal residential activity**.
- 2 — Active or immediate threat: Breaking in, vandalism, aggression, weapon display.
```
This will show in the UI as a list of concerns that each review item has along with the general description.
### Defining Typical Activity
Each installation and even camera can have different parameters for what is considered suspicious activity. Frigate allows the `activity_context_prompt` to be defined globally and at the camera level, which allows you to define more specifically what should be considered normal activity. It is important that this is not overly specific as it can sway the output of the response. The default `activity_context_prompt` is below:
```
- **Zone context is critical**: Private enclosed spaces (back yards, back decks, fenced areas, inside garages) are resident territory where brief transient activity, routine tasks, and pet care are expected and normal. Front yards, driveways, and porches are semi-public but still resident spaces where deliveries, parking, and coming/going are routine. Consider whether the zone and activity align with normal residential use.
- **Person + Pet = Normal Activity**: When both "Person" and "Dog" (or "Cat") are detected together in residential zones, this is routine pet care activity (walking, letting out, playing, supervising). Assign Level 0 unless there are OTHER strong suspicious behaviors present (like testing doors, taking items, etc.). A person with their pet in a residential zone is baseline normal activity.
- Brief appearances in private zones (back yards, garages) are normal residential patterns.
- Normal residential activity includes: residents, family members, guests, deliveries, services, maintenance workers, routine property use (parking, unloading, mail pickup, trash removal).
- Brief movement with legitimate items (bags, packages, tools, equipment) in appropriate zones is routine.
```
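Overriding this for a single camera might look like the sketch below, assuming `activity_context_prompt` sits under `review.genai` alongside the other options in this section (the prompt text is illustrative):

```yaml
cameras:
  back_yard:
    review:
      genai:
        activity_context_prompt: |
          - A lawn service visits on weekday mornings; people with mowers or trimmers in the back yard are expected.
```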
### Additional Concerns
Along with the concern of suspicious activity or immediate threat, you may have concerns such as animals in your garden or a gate being left open. These concerns can be configured so that the review summaries will make note of them if the activity requires additional review. For example:
```yaml
review:
genai:
enabled: true
additional_concerns:
- animals in the garden
```
## Review Reports
Along with individual review item summaries, Generative AI provides the ability to request a report for a given time period. For example, while away on vacation you can get a daily report of any suspicious activity or other concerns that may require review.

View File

@@ -5,11 +5,11 @@ title: Enrichments
# Enrichments
Some of Frigate's enrichments can use a discrete GPU for accelerated processing.
Some of Frigate's enrichments can use a discrete GPU / NPU for accelerated processing.
## Requirements
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU and configure the enrichment according to its specific documentation.
Object detection and enrichments (like Semantic Search, Face Recognition, and License Plate Recognition) are independent features. To use a GPU / NPU for object detection, see the [Object Detectors](/configuration/object_detectors.md) documentation. If you want to use your GPU for any supported enrichments, you must choose the appropriate Frigate Docker image for your GPU / NPU and configure the enrichment according to its specific documentation.
- **AMD**
@@ -23,6 +23,9 @@ Object detection and enrichments (like Semantic Search, Face Recognition, and Li
- Nvidia GPUs will automatically be detected and used for enrichments in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used for enrichments in the `-tensorrt-jp6` Frigate image.
- **RockChip**
- RockChip NPU will automatically be detected and used for semantic search v1 and face recognition in the `-rk` Frigate image.
Utilizing a GPU for enrichments does not require you to use the same GPU for object detection. For example, you can run the `tensorrt` Docker image for enrichments and still use other dedicated hardware like a Coral or Hailo for object detection. However, one combination that is not supported is TensorRT for object detection and OpenVINO for enrichments.
:::note

View File

@@ -427,3 +427,29 @@ cameras:
```
:::
## Synaptics
Hardware accelerated video de-/encoding is supported on Synaptics SL-series SoCs.
### Prerequisites
Make sure to follow the [Synaptics specific installation instructions](/frigate/installation#synaptics).
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
```yaml
ffmpeg:
  hwaccel_args: -c:v h264_v4l2m2m
  input_args: preset-rtsp-restream
  output_args:
    record: preset-record-generic-audio-aac
```
:::warning
Make sure that your SoC supports hardware acceleration for your input stream and that your input stream is h264 encoded. For example, if your camera streams with h264 encoding, your SoC must be able to decode and encode it. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::

View File

@@ -67,12 +67,15 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- **`min_area`**: Defines the minimum area (in pixels) a license plate must be before recognition runs.
- Default: `1000` pixels. Note: this is intentionally set very low as it is an _area_ measurement (length x width). For reference, 1000 pixels represents a ~32x32 pixel square in your camera image.
- Depending on the resolution of your camera's `detect` stream, you can increase this value to ignore small or distant plates.
- **`device`**: Device to use to run license plate recognition models.
- **`device`**: Device to use to run license plate detection _and_ recognition models.
- Default: `CPU`
- This can be `CPU` or `GPU`. For users without a model that detects license plates natively, using a GPU may increase performance of the models, especially the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
- **`model_size`**: The size of the model used to detect text on plates.
- This can be `CPU`, `GPU`, or the GPU's device number. For users without a model that detects license plates natively, using a GPU may increase performance of the YOLOv9 license plate detector model. See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation. However, for users who run a model that detects `license_plate` natively, there is little to no performance gain reported with running LPR on GPU compared to the CPU.
- **`model_size`**: The size of the model used to identify regions of text on plates.
- Default: `small`
- This can be `small` or `large`. The `large` model uses an enhanced text detector and is more accurate at finding text on plates but slower than the `small` model. For most users, the small model is recommended. For users in countries with multiple lines of text on plates, the large model is recommended. Note that using the large model does not improve _text recognition_, but it may improve _text detection_.
- This can be `small` or `large`.
- The `small` model is fast and identifies groups of Latin and Chinese characters.
- The `large` model identifies Latin characters only, but uses an enhanced text detector and is more capable at finding characters on multi-line plates. It is significantly slower than the `small` model. Note that using the `large` model does not improve _text recognition_, but it may improve _text detection_.
- For most users, the `small` model is recommended. A combined example of these detection options is shown after this list.
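Putting the detection options above together, a minimal sketch of a global `lpr` config might look like the following; the specific values are illustrative, not recommendations:

```yaml
lpr:
  enabled: True
  min_area: 2000 # ignore plates smaller than roughly a 45x45 pixel square
  device: GPU # run the license plate detection and recognition models on the GPU
  model_size: small # use `large` only if plates in your region have multiple lines of text
```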
### Recognition
@@ -102,6 +105,32 @@ Fine-tune the LPR feature using these optional parameters at the global level of
- This setting is best adjusted at the camera level if running LPR on multiple cameras.
- If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at 5 and adjusting as needed. To see how different enhancement levels affect your plates, use the `debug_save_plates` configuration option (see below).
### Normalization Rules
- **`replace_rules`**: List of regex replacement rules to normalize detected plates. These rules are applied sequentially. Each rule must have a `pattern` (which can be a string or a regex, prepended by `r`) and `replacement` (a string, which also supports [backrefs](https://docs.python.org/3/library/re.html#re.sub) like `\1`). These rules are useful for dealing with common OCR issues like noise characters, separators, or confusions (e.g., 'O'→'0').
These rules must be defined at the global level of your `lpr` config.
```yaml
lpr:
  replace_rules:
    - pattern: r'[%#*?]' # Remove noise symbols
      replacement: ""
    - pattern: r'[= ]' # Normalize = or space to dash
      replacement: "-"
    - pattern: "O" # Swap 'O' to '0' (common OCR error)
      replacement: "0"
    - pattern: r'I' # Swap 'I' to '1'
      replacement: "1"
    - pattern: r'(\w{3})(\w{3})' # Split 6 chars into groups (e.g., ABC123 → ABC-123)
      replacement: r'\1-\2'
```
- Rules fire in order. In the example above: clean noise first, then normalize separators, then apply character swaps, then split into groups.
- Backrefs (`\1`, `\2`) allow dynamic replacements (e.g., capture groups).
- Any changes made by the rules are printed to the LPR debug log.
- Tip: You can test patterns with tools like regex101.com.
### Debugging
- **`debug_save_plates`**: Set to `True` to save captured text on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
@@ -136,6 +165,9 @@ lpr:
  recognition_threshold: 0.85
  format: "^[A-Z]{2} [A-Z][0-9]{4}$" # Only recognize plates that are two letters, followed by a space, followed by a single letter and 4 numbers
  match_distance: 1 # Allow one character variation in plate matching
  replace_rules:
    - pattern: "O"
      replacement: "0" # Replace the letter O with the number 0 in every plate
  known_plates:
    Delivery Van:
      - "RJ K5678"

View File

@@ -176,6 +176,8 @@ For devices that support two way talk, Frigate can be configured to use the feat
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
As a starting point to check compatibility for your camera, view the list of cameras supported for two-way talk on the [go2rtc repository](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#two-way-audio). For cameras in the category `ONVIF Profile T`, you can use the [ONVIF Conformant Products Database](https://www.onvif.org/conformant-products/)'s FeatureList to check for the presence of `AudioOutput`. A camera that supports `ONVIF Profile T` _usually_ supports this, but due to inconsistent support, a camera that explicitly lists this feature may still not work. If no entry for your camera exists on the database, it is recommended not to buy it or to consult with the manufacturer's support on the feature availability.
### Streaming options on camera group dashboards
Frigate provides a dialog in the Camera Group Edit pane with several options for streaming on a camera group's dashboard. These settings are _per device_ and are saved in your device's local storage.
@@ -228,7 +230,26 @@ Note that disabling a camera through the config file (`enabled: False`) removes
If you are using continuous streaming or you are loading more than a few high resolution streams at once on the dashboard, your browser may struggle to begin playback of your streams before the timeout. Frigate always prioritizes showing a live stream as quickly as possible, even if it is a lower quality jsmpeg stream. You can use the "Reset" link/button to try loading your high resolution stream again.
If you are still experiencing Frigate falling back to low bandwidth mode, you may need to adjust your camera's settings per the [recommendations above](#camera_settings_recommendations).
Errors in stream playback (e.g., connection failures, codec issues, or buffering timeouts) that cause the fallback to low bandwidth mode (jsmpeg) are logged to the browser console for easier debugging. These errors may include:
- Network issues (e.g., MSE or WebRTC network connection problems).
- Unsupported codecs or stream formats (e.g., H.265 in WebRTC, which is not supported in some browsers).
- Buffering timeouts or low bandwidth conditions causing fallback to jsmpeg.
- Browser compatibility problems (e.g., iOS Safari limitations with MSE).
To view browser console logs:
1. Open the Frigate Live View in your browser.
2. Open the browser's Developer Tools (F12 or right-click > Inspect > Console tab).
3. Reproduce the error (e.g., load a problematic stream or simulate network issues).
4. Look for messages prefixed with the camera name.
These logs help identify if the issue is player-specific (MSE vs. WebRTC) or related to camera configuration (e.g., go2rtc streams, codecs). If you see frequent errors:
- Verify your camera's H.264/AAC settings (see [Frigate's camera settings recommendations](#camera_settings_recommendations)).
- Check go2rtc configuration for transcoding (e.g., audio to AAC/OPUS).
- Test with a different stream via the UI dropdown (if `live -> streams` is configured; a minimal sketch follows this list).
- For WebRTC-specific issues, ensure port 8555 is forwarded and candidates are set (see [WebRTC Extra Configuration](#webrtc-extra-configuration)).
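For reference, a minimal sketch of the `live -> streams` option mentioned above is shown below; the camera name and go2rtc stream names are placeholders for whatever is defined in your own config:

```yaml
cameras:
  front_door: # example camera name
    live:
      streams: # friendly name shown in the UI -> go2rtc stream name
        Main Stream: front_door
        Sub Stream: front_door_sub
```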
3. **It doesn't seem like my cameras are streaming on the Live dashboard. Why?**

View File

@@ -13,12 +13,18 @@ Frigate supports multiple different detectors that work on different types of ha
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [MemryX](#memryx-mx3): The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- [DeGirum](#degirum): Service for using hardware devices in the cloud or locally. Hardware and models are provided in the cloud on [their website](https://hub.degirum.com).
**AMD**
- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.
**Apple Silicon**
- [Apple Silicon](#apple-silicon-detector): The Apple Silicon detector runs on M1 and newer Apple Silicon devices.
**Intel**
- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
@@ -37,6 +43,10 @@ Frigate supports multiple different detectors that work on different types of ha
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**Synaptics**
- [Synaptics](#synaptics): Synap models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Uses the CPU to run a tflite model. This is not recommended; in most cases OpenVINO can be used in CPU mode with better results.
@@ -53,7 +63,7 @@ This does not affect using hardware for accelerating other tasks such as [semant
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
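As a minimal sketch of this, two detectors could be defined as follows; the detector names `coral` and `ov` are arbitrary, and each detector runs in its own process while pulling from the shared detection queue:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
  ov:
    type: openvino
    device: GPU
```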
## Edge TPU Detector
@@ -265,7 +275,7 @@ detectors:
:::
### Supported Models
### OpenVINO Supported Models
#### SSDLite MobileNet v2
@@ -409,6 +419,60 @@ model:
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## Apple Silicon detector
The NPU in Apple Silicon can't be accessed from within a container, so the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) must first be set up. It is recommended to use the Frigate docker image with the `-standard-arm64` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-standard-arm64`.
### Setup
1. Set up the [Apple Silicon detector client](https://github.com/frigate-nvr/apple-silicon-detector) and run the client
2. Configure the detector in Frigate and start Frigate
### Configuration
Using the detector config below will connect to the client:
```yaml
detectors:
  apple-silicon:
    type: zmq
    endpoint: tcp://host.docker.internal:5555
```
### Apple Silicon Supported Models
There is no default model provided; the following formats are supported:
#### YOLO (v3, v4, v7, v9)
YOLOv3, YOLOv4, YOLOv7, and [YOLOv9](https://github.com/WongKinYiu/yolov9) models are supported, but not included by default.
:::tip
The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well. See [the models section](#downloading-yolo-models) for more information on downloading YOLO models for use in Frigate.
:::
When Frigate is started with the following config it will connect to the detector client and transfer the model automatically:
```yaml
detectors:
  apple-silicon:
    type: zmq
    endpoint: tcp://host.docker.internal:5555
model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize set during model export
  height: 320 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolo.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## AMD/ROCm GPU detector
### Setup
@@ -489,7 +553,18 @@ We unset the `HSA_OVERRIDE_GFX_VERSION` to prevent an existing override from mes
$ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/rocm/bin/rocminfo |grep gfx)'
```
### Supported Models
### ROCm Supported Models
:::tip
The AMD GPU kernel is known to be problematic, especially when converting models to mxr format. The recommended approach is:
1. Disable object detection in the config.
2. Start Frigate with the onnx detector configured; the main object detection model will be converted to mxr format and cached in the config directory.
3. Once this is finished as indicated by the logs, enable object detection in the UI and confirm that it is working correctly.
4. Re-enable object detection in the config.
:::
See [ONNX supported models](#onnx-supported-models) for supported models; there are some caveats:
@@ -532,7 +607,7 @@ detectors:
:::
### Supported Models
### ONNX Supported Models
There is no default model provided; the following formats are supported:
@@ -717,6 +792,196 @@ To verify that the integration is working correctly, start Frigate and observe t
# Community Supported Detectors
## MemryX MX3
This detector is available for use with the MemryX MX3 accelerator M.2 module. Frigate supports the MX3 on compatible hardware platforms, providing efficient and high-performance object detection.
See the [installation docs](../frigate/installation.md#memryx-mx3) for information on configuring the MemryX hardware.
To configure a MemryX detector, simply set the `type` attribute to `memryx` and follow the configuration guide below.
### Configuration
To configure the MemryX detector, use the following example configuration:
#### Single PCIe MemryX MX3
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
```
#### Multiple PCIe MemryX MX3 Modules
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
  memx1:
    type: memryx
    device: PCIe:1
  memx2:
    type: memryx
    device: PCIe:2
```
### Supported Models
MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the container at `/memryx_models/model_folder/`.
#### YOLO-NAS
The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
**Note:** The default model for the MemryX detector is YOLO-NAS 320x320.
The input size for **YOLO-NAS** can be set to either **320x320** (default) or **640x640**.
- The default size of **320x320** is optimized for lower CPU usage and faster inference times.
##### Configuration
Below is the recommended configuration for using the **YOLO-NAS** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
model:
  model_type: yolonas
  width: 320 # (Can be set to 640 for higher resolution)
  height: 320 # (Can be set to 640 for higher resolution)
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolonas.zip
  # The .zip file must contain:
  # ├── yolonas.dfp (a file ending with .dfp)
  # └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOv9
The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) like in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).
##### Configuration
Below is the recommended configuration for using the **YOLOv9** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
model:
  model_type: yolo-generic
  width: 320 # (Can be set to 640 for higher resolution)
  height: 320 # (Can be set to 640 for higher resolution)
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolov9.zip
  # The .zip file must contain:
  # ├── yolov9.dfp (a file ending with .dfp)
  # └── yolov9_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### YOLOX
The model is sourced from the [OpenCV Model Zoo](https://github.com/opencv/opencv_zoo) and precompiled to DFP.
##### Configuration
Below is the recommended configuration for using the **YOLOX** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
model:
  model_type: yolox
  width: 640
  height: 640
  input_tensor: nchw
  input_dtype: float_denorm
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/yolox.zip
  # The .zip file must contain:
  # ├── yolox.dfp (a file ending with .dfp)
```
#### SSDLite MobileNet v2
The model is sourced from the [OpenMMLab Model Zoo](https://mmdeploy-oss.openmmlab.com/model/mmdet-det/ssdlite-e8679f.onnx) and has been converted to DFP.
##### Configuration
Below is the recommended configuration for using the **SSDLite MobileNet v2** model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
model:
  model_type: ssd
  width: 320
  height: 320
  input_tensor: nchw
  input_dtype: float
  labelmap_path: /labelmap/coco-80.txt
  # Optional: The model is normally fetched through the runtime, so 'path' can be omitted unless you want to use a custom or local model.
  # path: /config/ssdlite_mobilenet.zip
  # The .zip file must contain:
  # ├── ssdlite_mobilenet.dfp (a file ending with .dfp)
  # └── ssdlite_mobilenet_post.onnx (optional; only if the model includes a cropped post-processing network)
```
#### Using a Custom Model
To use your own model:
1. Package your compiled model into a `.zip` file.
2. The `.zip` must contain the compiled `.dfp` file.
3. Depending on the model, the compiler may also generate a cropped post-processing network. If present, it will be named with the suffix `_post.onnx`.
4. Bind-mount the `.zip` file into the container and specify its path using `model.path` in your config.
5. Update the `labelmap_path` to match your custom model's labels.
For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
```yaml
# The detector automatically selects the default model if nothing is provided in the config.
#
# Optionally, you can specify a local model path as a .zip file to override the default.
# If a local path is provided and the file exists, it will be used instead of downloading.
#
# Example:
# path: /config/yolonas.zip
#
# The .zip file must contain:
# ├── yolonas.dfp (a file ending with .dfp)
# └── yolonas_post.onnx (optional; only if the model includes a cropped post-processing network)
```
---
## NVidia TensorRT Detector
Nvidia Jetson devices may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt-jp6` tag suffix, e.g. `ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp6`. This detector is designed to work with Yolo models for object detection.
@@ -799,6 +1064,41 @@ model:
height: 320 # MUST match the chosen model i.e yolov7-320 -> 320 yolov4-416 -> 416
```
## Synaptics
Hardware accelerated object detection is supported on the following SoCs:
- SL1680
This implementation uses the [Synaptics model conversion](https://synaptics-synap.github.io/doc/v/latest/docs/manual/introduction.html#offline-model-conversion), version v3.1.0.
This implementation is based on sdk `v1.5.0`.
See the [installation docs](../frigate/installation.md#synaptics) for information on configuring the SL-series NPU hardware.
### Configuration
When configuring the Synap detector, you have to specify the model as a local **path**.
#### SSD Mobilenet
A synap model is provided in the container at /mobilenet.synap and is used by this detector type by default. The model comes from [Synap-release Github](https://github.com/synaptics-astra/synap-release/tree/v1.5.0/models/dolphin/object_detection/coco/model/mobilenet224_full80).
Use the model configuration shown below when using the synaptics detector with the default synap model:
```yaml
detectors: # required
  synap_npu: # required
    type: synaptics # required
model: # required
  path: /synaptics/mobilenet.synap # required
  width: 224 # required
  height: 224 # required
  tensor_format: nhwc # default value (optional. If you change the model, it is required)
  labelmap_path: /labelmap/coco-80.txt # required
```
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:
@@ -842,7 +1142,7 @@ $ cat /sys/kernel/debug/rknpu/load
:::
### Supported Models
### RockChip Supported Models
This `config.yml` shows all relevant options to configure the detector and explains them. All values shown are the default values (except for two). Lines that are required at least to use the detector are labeled as required, all other lines are optional.
@@ -968,6 +1268,101 @@ Explanation of the paramters:
- **example**: Specifying `output_name = "frigate-{quant}-{input_basename}-{soc}-v{tk_version}"` could result in a model called `frigate-i8-my_model-rk3588-v2.3.0.rknn`.
- `config`: Configuration passed to `rknn-toolkit2` for model conversion. For an explanation of all available parameters have a look at section "2.2. Model configuration" of [this manual](https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.3.2/03_Rockchip_RKNPU_API_Reference_RKNN_Toolkit2_V2.3.2_EN.pdf).
## DeGirum
DeGirum is a detector that can use any type of hardware listed on [their website](https://hub.degirum.com). DeGirum can be used with local hardware through a DeGirum AI Server, or through the use of `@local`. You can also connect directly to DeGirum's AI Hub to run inferences. **Please Note:** This detector *cannot* be used for commercial purposes.
### Configuration
#### AI Server Inference
Before starting with the config file for this section, you must first launch an AI server. DeGirum has an AI server ready to use as a docker container. Add this to your `docker-compose.yml` to get started:
```yaml
degirum_detector:
  container_name: degirum
  image: degirum/aiserver:latest
  privileged: true
  ports:
    - "8778:8778"
```
All supported hardware will automatically be found on your AI server host as long as relevant runtimes and drivers are properly installed on your machine. Refer to [DeGirum's docs site](https://docs.degirum.com/pysdk/runtimes-and-drivers) if you have any trouble.
Once completed, changing the `config.yml` file is simple.
```yaml
degirum_detector:
  type: degirum
  location: degirum # Set to service name (degirum_detector), container_name (degirum), or a host:port (192.168.29.4:8778)
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start. If you aren't pulling a model from the AI Hub, leave this and 'token' blank.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Setting up a model in the `config.yml` is similar to setting up an AI server.
You can set it to:
- A model listed on the [AI Hub](https://hub.degirum.com), given that the correct zoo name is listed in your detector
- If this is what you choose to do, the correct model will be downloaded onto your machine before running.
- A local directory acting as a zoo. See DeGirum's docs site [for more information](https://docs.degirum.com/pysdk/user-guide-pysdk/organizing-models#model-zoo-directory-structure).
- A path to some model.json.
```yaml
model:
  path: ./mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1 # directory to model .json and file
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### Local Inference
It is also possible to eliminate the need for an AI server and run the hardware directly. The benefit of this approach is that you eliminate any bottlenecks that occur when transferring prediction results from the AI server docker container to the frigate one. However, the method of implementing local inference is different for every device and hardware combination, so it's usually more trouble than it's worth. A general guideline to achieve this would be:
1. Ensure that the Frigate Docker container has the runtime you want to use. For instance, running `@local` for Hailo means making sure the container you're using has the Hailo runtime installed.
2. To double-check that the runtime is detected by the DeGirum detector, make sure the `degirum sys-info` command properly shows whatever runtimes you mean to install.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
  type: degirum
  location: "@local" # Run inferences directly on local hardware
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com). This can be left blank if you're pulling a model from the public zoo and running inferences on your local hardware using @local or a local DeGirum AI Server
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
#### AI Hub Cloud Inference
If you do not possess whatever hardware you want to run, there's also the option to run cloud inferences. Do note that your detection fps might need to be lowered as network latency does significantly slow down this method of detection. For use with Frigate, we highly recommend using a local AI server as described above. To set up cloud inferences,
1. Sign up at [DeGirum's AI Hub](https://hub.degirum.com).
2. Get an access token.
3. Create a DeGirum detector in your `config.yml` file.
```yaml
degirum_detector:
  type: degirum
  location: "@cloud" # For accessing AI Hub devices and models
  zoo: degirum/public # DeGirum's public model zoo. Zoo name should be in format "workspace/zoo_name". degirum/public is available to everyone, so feel free to use it if you don't know where to start.
  token: dg_example_token # For authentication with the AI Hub. Get this token through the "tokens" section on the main page of the [AI Hub](https://hub.degirum.com).
```
Once `degirum_detector` is set up, you can choose a model through the `model` section in the `config.yml` file.
```yaml
model:
  path: mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1
  width: 300 # width is in the model name as the first number in the "int"x"int" section
  height: 300 # height is in the model name as the second number in the "int"x"int" section
  input_pixel_format: rgb/bgr # look at the model.json to figure out which to put here
```
# Models
Some model types are not included in Frigate by default.

View File

@@ -13,14 +13,15 @@ H265 recordings can be viewed in Chrome 108+, Edge and Safari only. All other br
### Most conservative: Ensure all video is saved
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion will be saved for 7 days. After 7 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
```yaml
record:
  enabled: True
  retain:
    continuous:
      days: 3
      mode: all
    motion:
      days: 7
  alerts:
    retain:
      days: 30
@@ -38,9 +39,8 @@ In order to reduce storage requirements, you can adjust your config to only reta
```yaml
record:
  enabled: True
  retain:
    motion:
      days: 3
      mode: motion
  alerts:
    retain:
      days: 30
@@ -58,7 +58,7 @@ If you only want to retain video that occurs during a tracked object, this confi
```yaml
record:
  enabled: True
  retain:
    continuous:
      days: 0
  alerts:
    retain:
@@ -80,15 +80,17 @@ Retention configs support decimals meaning they can be configured to retain `0.5
:::
### Continuous Recording
### Continuous and Motion Recording
The number of days to retain continuous recordings can be set via the following config where X is a number, by default continuous recording is disabled.
The number of days to retain continuous and motion recordings can be set via the following config where X is a number, by default continuous recording is disabled.
```yaml
record:
  enabled: True
  retain:
    continuous:
      days: 1 # <- number of days to keep continuous recordings
    motion:
      days: 2 # <- number of days to keep motion recordings
```
Continuous recording supports different retention modes [which are described below](#what-do-the-different-retain-modes-mean)
@@ -112,38 +114,6 @@ This configuration will retain recording segments that overlap with alerts and d
**WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.
## What do the different retain modes mean?
Frigate saves video from the stream with the `record` role in 10-second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).
Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.
- With the `all` option all 48 hours of those two days would be kept and viewable.
- With the `motion` option, the only parts of those 48 hours that would be kept are the segments in which Frigate detected motion. This is the middle-ground option that won't keep all 48 hours, but will likely keep all segments of interest along with the potential for some extra segments.
- With the `active_objects` option the only segments that would be kept are those where there was a true positive object that was not considered stationary.
The same options are available with alerts and detections, except it will only save the recordings when it overlaps with a review item of that type.
A configuration example of the above retain modes where all `motion` segments are stored for 7 days and `active objects` are stored for 14 days would be as follows:
```yaml
record:
  enabled: True
  retain:
    days: 7
    mode: motion
  alerts:
    retain:
      days: 14
      mode: active_objects
  detections:
    retain:
      days: 14
      mode: active_objects
```
The above configuration example can be added globally or on a per camera basis.
## Can I have "continuous" recordings, but only at certain times?
Using Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.
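As one hedged example, a Home Assistant automation could toggle recording over MQTT using Frigate's `recordings/set` topic; the camera name, schedule, and default `frigate/` topic prefix below are assumptions to adapt to your setup:

```yaml
# Illustrative Home Assistant automations (placeholder camera name and times)
- alias: "Enable front door recording at night"
  trigger:
    - platform: time
      at: "22:00:00"
  action:
    - service: mqtt.publish
      data:
        topic: frigate/front_door/recordings/set
        payload: "ON"
- alias: "Disable front door recording in the morning"
  trigger:
    - platform: time
      at: "07:00:00"
  action:
    - service: mqtt.publish
      data:
        topic: frigate/front_door/recordings/set
        payload: "OFF"
```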

View File

@@ -73,6 +73,12 @@ tls:
# Optional: Enable TLS for port 8971 (default: shown below)
enabled: True
# Optional: IPv6 configuration
networking:
  # Optional: Enable IPv6 on 5000, and 8971 if tls is configured (default: shown below)
  ipv6:
    enabled: False
# Optional: Proxy configuration
proxy:
# Optional: Mapping for headers from upstream proxies. Only used if Frigate's auth
@@ -82,7 +88,13 @@ proxy:
# See the docs for more info.
header_map:
user: x-forwarded-user
role: x-forwarded-role
role: x-forwarded-groups
role_map:
admin:
- sysadmins
- access-level-security
viewer:
- camera-viewer
# Optional: Url for logging out a user. This sets the location of the logout url in
# the UI.
logout_url: /api/logout
@@ -275,6 +287,9 @@ detect:
max_disappeared: 25
# Optional: Configuration for stationary object tracking
stationary:
# Optional: Stationary classifier that uses visual characteristics to determine if an object
# is stationary even if the box changes enough to be considered motion (default: shown below).
classifier: True
# Optional: Frequency for confirming stationary objects (default: same as threshold)
# When set to 1, object detection will run to confirm the object still exists on every frame.
# If set to 10, object detection will run to confirm the object still exists on every 10th frame.
@@ -339,6 +354,33 @@ objects:
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI object description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional: Review configuration
# NOTE: Can be overridden at the camera level
@@ -351,6 +393,8 @@ review:
labels:
- car
- person
# Time to cutoff alerts after no alert-causing activity has occurred (default: shown below)
cutoff_time: 40
# Optional: required zones for an object to be marked as an alert (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
@@ -365,12 +409,27 @@ review:
labels:
- car
- person
# Time to cutoff detections after no detection-causing activity has occurred (default: shown below)
cutoff_time: 30
# Optional: required zones for an object to be marked as a detection (default: none)
# NOTE: when settings required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
# should be configured at the camera level.
required_zones:
- driveway
# Optional: GenAI Review Summary Configuration
genai:
# Optional: Enable the GenAI review summary feature (default: shown below)
enabled: False
# Optional: Enable GenAI review summaries for alerts (default: shown below)
alerts: True
# Optional: Enable GenAI review summaries for detections (default: shown below)
detections: False
# Optional: Additional concerns that the GenAI should make note of (default: None)
additional_concerns:
- Animals in the garden
# Optional: Preferred response language (default: English)
preferred_language: English
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
@@ -440,18 +499,18 @@ record:
expire_interval: 60
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
sync_recordings: False
# Optional: Retention settings for recording
retain:
# Optional: Continuous retention settings
continuous:
# Optional: Number of days to retain recordings regardless of tracked objects or motion (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in alerts and detections section below
# if you only want to retain recordings of alerts and detections.
days: 0
# Optional: Motion retention settings
motion:
# Optional: Number of days to retain recordings regardless of tracked objects (default: shown below)
# NOTE: This should be set to 0 and retention should be defined in alerts and detections section below
# if you only want to retain recordings of alerts and detections.
days: 0
# Optional: Mode for retention. Available options are: all, motion, and active_objects
# all - save all recording segments regardless of activity
# motion - save all recordings segments with any detected motion
# active_objects - save all recording segments with active/moving objects
# NOTE: this mode only applies when the days setting above is greater than 0
mode: all
# Optional: Recording Export Settings
export:
# Optional: Timelapse Output Args (default: shown below).
@@ -546,6 +605,9 @@ semantic_search:
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
# Optional: Configuration for face recognition capability
# NOTE: enabled, min_area can be overridden at the camera level
@@ -569,6 +631,9 @@ face_recognition:
blur_confidence_filter: True
# Optional: Set the model size used face recognition. (default: shown below)
model_size: small
# Optional: Target a specific device to run the model (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: None
# Optional: Configuration for license plate recognition capability
# NOTE: enabled, min_area, and enhancement can be overridden at the camera level
@@ -576,6 +641,7 @@ lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
# Optional: The device to run the models on (default: shown below)
# NOTE: See https://onnxruntime.ai/docs/execution-providers/ for more information
device: CPU
# Optional: Set the model size used for text detection. (default: shown below)
model_size: small
@@ -598,6 +664,8 @@ lpr:
enhancement: 0
# Optional: Save plate images to /media/frigate/clips/lpr for debugging purposes (default: shown below)
debug_save_plates: False
# Optional: List of regex replacement rules to normalize detected plates (default: shown below)
replace_rules: {}
# Optional: Configuration for AI generated tracked object descriptions
# WARNING: Depending on the provider, this will send thumbnails over the internet
@@ -612,16 +680,27 @@ genai:
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Required if enabled: The model to use with the provider.
model: gemini-1.5-flash
# Optional additional args to pass to the GenAI Provider (default: None)
provider_options:
keep_alive: -1
# Optional: Configuration for audio transcription
# NOTE: only the enabled option can be overridden at the camera level
audio_transcription:
# Optional: Enable audio transcription (default: shown below)
enabled: False
# Optional: The device to run the models on (default: shown below)
device: CPU
# Optional: Set the model size used for transcription. (default: shown below)
model_size: small
# Optional: Set the language used for transcription translation. (default: shown below)
# List of language codes: https://github.com/openai/whisper/blob/main/whisper/tokenizer.py#L10
language: en
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.9)
# Uses https://github.com/AlexxIT/go2rtc (v1.9.10)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
@@ -827,33 +906,22 @@ cameras:
# By default the cameras are sorted alphabetically.
order: 0
# Optional: Configuration for AI generated tracked object descriptions
genai:
# Optional: Enable AI description generation (default: shown below)
enabled: False
# Optional: Use the object snapshot instead of thumbnails for description generation (default: shown below)
use_snapshot: False
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: objects to generate descriptions for (default: all objects that are tracked)
objects:
- person
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: What triggers to use to send frames for a tracked object to generative AI (default: shown below)
send_triggers:
# Once the object is no longer tracked
tracked_object_end: True
# Optional: After X many significant updates are received (default: shown below)
after_significant_updates: None
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional: Configuration for triggers to automate actions based on semantic search results.
triggers:
# Required: Unique identifier for the trigger (generated automatically from friendly_name if not specified).
trigger_name:
# Required: Enable or disable the trigger. (default: shown below)
enabled: true
# Type of trigger, either `thumbnail` for image-based matching or `description` for text-based matching. (default: none)
type: thumbnail
# Reference data for matching, either an event ID for `thumbnail` or a text string for `description`. (default: none)
data: 1751565549.853251-b69j73
# Similarity threshold for triggering. (default: none)
threshold: 0.7
# List of actions to perform when the trigger fires. (default: none)
# Available options: `notification` (send a webpush notification)
actions:
- notification
# Optional
ui:

View File

@@ -7,7 +7,7 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.9) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration) for more advanced configurations and features.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.10) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration) for more advanced configurations and features.
:::note
@@ -156,7 +156,7 @@ See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-22
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`

View File

@@ -39,7 +39,7 @@ If you are enabling Semantic Search for the first time, be advised that Frigate
The [V1 model from Jina](https://huggingface.co/jinaai/jina-clip-v1) has a vision model which is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
The V1 text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on thumbnail of a tracked object. See [the object description docs](/configuration/genai/objects.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted versions of the Jina models are available and can be selected by setting the `model_size` config option as `small` or `large`:
@@ -78,17 +78,21 @@ Switching between V1 and V2 requires reindexing your embeddings. The embeddings
### GPU Acceleration
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU / NPU hardware, when available. This depends on the Docker build that is used. You can also target a specific device in a multi-GPU installation.
```yaml
semantic_search:
  enabled: True
  model_size: large
  # Optional, if using the 'large' model in a multi-GPU installation
  device: 0
```
:::info
If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
If the correct build is used for your GPU / NPU and the `large` model is configured, then the GPU / NPU will be detected and used automatically.
Specify the `device` option to target a specific GPU in a multi-GPU system (see [onnxruntime's provider options](https://onnxruntime.ai/docs/execution-providers/)).
If you do not specify a device, the first available GPU will be used.
See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_enrichments.md) documentation.
@@ -102,3 +106,49 @@ See the [Hardware Accelerated Enrichments](/configuration/hardware_acceleration_
4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
5. Semantic search on thumbnails tends to return better results when matching large subjects that take up most of the frame. Small things like "cat" tend to not work well.
6. Experiment! Find a tracked object you want to test and start typing keywords and phrases to see what works for you.
## Triggers
Triggers utilize semantic search to automate actions when a tracked object matches a specified image or description. Triggers can be configured so that Frigate executes specific actions when a tracked object's image or description matches a predefined image or text, based on a similarity threshold. Triggers are managed per camera and can be configured via the Frigate UI in the Settings page under the Triggers tab.
### Configuration
Triggers are defined within the `semantic_search` configuration for each camera in your Frigate configuration file or through the UI. Each trigger consists of a `type` (either `thumbnail` or `description`), a `data` field (the reference image event ID or text), a `threshold` for similarity matching, and a list of `actions` to perform when the trigger fires.
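Based on the fields described above, a minimal sketch of a trigger defined in the config might look like the following; the camera name, trigger name, and values are placeholders, and the placement under the camera's `semantic_search` section follows the description above:

```yaml
cameras:
  front_door:
    semantic_search:
      triggers:
        red_car_alert:
          enabled: true
          type: description
          data: "A red sedan in the driveway"
          threshold: 0.7
          actions:
            - notification
```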
#### Managing Triggers in the UI
1. Navigate to the **Settings** page and select the **Triggers** tab.
2. Choose a camera from the dropdown menu to view or manage its triggers.
3. Click **Add Trigger** to create a new trigger or use the pencil icon to edit an existing one.
4. In the **Create Trigger** dialog:
- Enter a **Name** for the trigger (e.g., "red_car_alert").
- Select the **Type** (`Thumbnail` or `Description`).
- For `Thumbnail`, select an image to trigger this action when a similar thumbnail image is detected, based on the threshold.
- For `Description`, enter text to trigger this action when a similar tracked object description is detected.
- Set the **Threshold** for similarity matching.
- Select **Actions** to perform when the trigger fires.
5. Save the trigger to update the configuration and store the embedding in the database.
When a trigger fires, the UI highlights the trigger with a blue outline for 3 seconds for easy identification.
### Usage and Best Practices
1. **Thumbnail Triggers**: Select a representative image (event ID) from the Explore page that closely matches the object you want to detect. For best results, choose images where the object is prominent and fills most of the frame.
2. **Description Triggers**: Write concise, specific text descriptions (e.g., "Person in a red jacket") that align with the tracked object's description. Avoid vague terms to improve matching accuracy.
3. **Threshold Tuning**: Adjust the threshold to balance sensitivity and specificity. A higher threshold (e.g., 0.8) requires closer matches, reducing false positives but potentially missing similar objects. A lower threshold (e.g., 0.6) is more inclusive but may trigger more often.
4. **Using Explore**: Use the context menu or right-click / long-press on a tracked object in the Grid View in Explore to quickly add a trigger based on the tracked object's thumbnail.
5. **Editing triggers**: For the best experience, triggers should be edited via the UI. However, Frigate will ensure triggers edited in the config will be synced with triggers created and edited in the UI.
### Notes
- Triggers rely on the same Jina AI CLIP models (V1 or V2) used for semantic search. Ensure `semantic_search` is enabled and properly configured.
- Reindexing embeddings (via the UI or `reindex: True`) does not affect trigger configurations but may update the embeddings used for matching.
- For optimal performance, use a system with sufficient RAM (8GB minimum, 16GB recommended) and a GPU for `large` model configurations, as described in the Semantic Search requirements.
### FAQ
#### Why can't I create a trigger on thumbnails for some text, like "person with a blue shirt" and have it trigger when a person with a blue shirt is detected?
TL;DR: Text-to-image triggers aren't supported because CLIP can confuse similar images and give inconsistent scores, making automation unreliable.
Text-to-image triggers are not supported due to fundamental limitations of CLIP-based similarity search. While CLIP works well for exploratory, manual queries, it is unreliable for automated triggers based on a threshold. Issues include embedding drift (the same text-image pair can yield different cosine distances over time), lack of true semantic grounding (visually similar but incorrect matches), and unstable thresholding (distance distributions are dataset-dependent and often too tightly clustered to separate relevant from irrelevant results). Instead, it is recommended to set up a workflow with thumbnail triggers: first use text search to manually select 3-5 representative reference tracked objects, then configure thumbnail triggers based on that visual similarity. This provides robust automation without the semantic ambiguity of text-to-image matching.

View File

@@ -88,7 +88,9 @@ Sometimes objects are expected to be passing through a zone, but an object loite
:::note
When using loitering zones, a review item will remain active until the object leaves. Loitering zones are only meant to be used in areas where loitering is not expected behavior.
When using loitering zones, a review item will behave in the following way:
- When a person is in a loitering zone, the review item will remain active until the person leaves the loitering zone, regardless of if they are stationary.
- When any other object is in a loitering zone, the review item will remain active until the loitering time is met. Then if the object is stationary the review item will end.
:::
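For context, a loitering zone is a regular zone with a non-zero `loitering_time`; a minimal sketch is shown below, with placeholder camera and zone names and illustrative coordinates:

```yaml
cameras:
  back_yard:
    zones:
      patio:
        coordinates: 0.0,0.5,1.0,0.5,1.0,1.0,0.0,1.0 # example polygon
        loitering_time: 30 # seconds an object must loiter before it is considered to be in the zone
```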

View File

@@ -56,24 +56,36 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)
- [MemryX](#memryx-mx3): The MX3 M.2 accelerator module is available in m.2 format allowing for a wide range of compatibility with devices.
- [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
- Runs best with tiny, small, or medium-size models
**AMD**
- [ROCm](#rocm---amd-gpu): ROCm can run on AMD Discrete GPUs to provide efficient object detection
- [Supports limited model architectures](../../configuration/object_detectors#supported-models-1)
- [Supports limited model architectures](../../configuration/object_detectors#rocm-supported-models)
- Runs best on discrete AMD GPUs
**Apple Silicon**
- [Apple Silicon](#apple-silicon): Apple Silicon is usable on all M1 and newer Apple Silicon devices to provide efficient and fast object detection
- [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#apple-silicon-supported-models)
- Runs well with models of any size, including large
- Runs via a ZMQ proxy, which adds some latency; only recommended for local connections
**Intel**
- [OpenVino](#openvino---intel): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [Supports majority of model architectures](../../configuration/object_detectors#supported-models)
- [Supports majority of model architectures](../../configuration/object_detectors#openvino-supported-models)
- Runs best with tiny, small, or medium models
**Nvidia**
- [TensorRT](#tensorrt---nvidia-gpu): TensorRT can run on Nvidia GPUs and Jetson devices.
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#supported-models-2)
- [Supports majority of model architectures via ONNX](../../configuration/object_detectors#onnx-supported-models)
- Runs well with models of any size, including large
**Rockchip**
@@ -83,8 +95,21 @@ Frigate supports multiple different detectors that work on different types of ha
- Runs best with tiny or small size models
- Runs efficiently on low power hardware
**Synaptics**
- [Synaptics](#synaptics): synap models can run on Synaptics devices (e.g., Astra Machina) with included NPUs to provide efficient object detection.
:::
### Synaptics
- **Synaptics**: Default model is **mobilenet**
| Name | Synaptics SL1680 Inference Time |
| ---------------- | ------------------------------- |
| ssd mobilenet | ~ 25 ms |
| yolov5m | ~ 118 ms |
### Hailo-8
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI Kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
@@ -142,7 +167,7 @@ Inference speeds vary greatly depending on the CPU or GPU used, some known examp
| Intel N100 | ~ 15 ms | s-320: 30 ms | 320: ~ 25 ms | | Can only run one detector instance |
| Intel N150 | ~ 15 ms | t-320: 16 ms s-320: 24 ms | | | |
| Intel Iris XE | ~ 10 ms | s-320: 12 ms s-640: 30 ms | 320: ~ 18 ms 640: ~ 50 ms | | |
| Intel Arc A310 | ~ 5 ms | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms | | |
| Intel Arc A380 | ~ 6 ms | | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms | |
| Intel Arc A750 | ~ 4 ms | | 320: ~ 8 ms | | |
@@ -152,7 +177,7 @@ Frigate is able to utilize an Nvidia GPU which supports the 12.x series of CUDA
#### Minimum Hardware Support
12.x series of CUDA libraries are used, which have minor version compatibility. The minimum driver version on the host system must be `>=545`. The GPU must also support a Compute Capability of `5.0` or greater, which generally correlates to a Maxwell-era GPU or newer; check the NVIDIA GPU Compute Capability table linked below.
Make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
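If you are unsure whether a given GPU and driver meet these requirements, a quick standalone check like the sketch below can help. It assumes a reasonably recent `nvidia-smi` that supports the `name`, `driver_version`, and `compute_cap` query fields; it is a convenience script, not part of Frigate.

```python
import subprocess

def nvidia_gpu_info() -> list[dict[str, str]]:
    """Query name, driver version, and compute capability for each NVIDIA GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version,compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    gpus = []
    for line in out.strip().splitlines():
        name, driver, cap = [part.strip() for part in line.split(",")]
        gpus.append({"name": name, "driver": driver, "compute_cap": cap})
    return gpus

for gpu in nvidia_gpu_info():
    # TensorRT detector requirements described above: driver >= 545, compute capability >= 5.0
    ok = float(gpu["compute_cap"]) >= 5.0 and int(gpu["driver"].split(".")[0]) >= 545
    print(f"{gpu['name']}: driver {gpu['driver']}, compute {gpu['compute_cap']} -> {'OK' if ok else 'unsupported'}")
```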
@@ -167,27 +192,71 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
[NVIDIA GPU Compute Capability](https://developer.nvidia.com/cuda-gpus)
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
`tiny (t)` variants are faster than the equivalent non-tiny model, some known examples are below:
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time | RF-DETR Inference Time |
| --------------- | ------------------------- | ------------------------- | ---------------------- |
| GTX 1070 | s-320: 16 ms | 320: 14 ms | |
| RTX 3050 | t-320: 15 ms s-320: 17 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms |
| RTX 3070 | t-320: 11 ms s-320: 13 ms | 320: ~ 8 ms 640: ~ 14 ms | Nano-320: ~ 9 ms |
| RTX A4000 | | 320: ~ 15 ms | |
| Tesla P40 | | 320: ~ 105 ms | |
✅ - Accelerated with CUDA Graphs
❌ - Not accelerated with CUDA Graphs
| Name | ✅ YOLOv9 Inference Time | ✅ RF-DETR Inference Time | ❌ YOLO-NAS Inference Time |
| --------- | ------------------------------------- | ------------------------- | -------------------------- |
| GTX 1070 | s-320: 16 ms | | 320: 14 ms |
| RTX 3050 | t-320: 8 ms s-320: 10 ms s-640: 28 ms | Nano-320: ~ 12 ms | 320: ~ 10 ms 640: ~ 16 ms |
| RTX 3070 | t-320: 6 ms s-320: 8 ms s-640: 25 ms | Nano-320: ~ 9 ms | 320: ~ 8 ms 640: ~ 14 ms |
| RTX A4000 | | | 320: ~ 15 ms |
| Tesla P40 | | | 320: ~ 105 ms |
### Apple Silicon
With the [Apple Silicon](../configuration/object_detectors.md#apple-silicon-detector) detector Frigate can take advantage of the NPU in M1 and newer Apple Silicon.
:::warning
Apple Silicon cannot run within a container, so a ZMQ proxy is utilized to communicate with [the Apple Silicon Frigate detector](https://github.com/frigate-nvr/apple-silicon-detector), which runs on the host. This should add minimal latency when run on the same device.
:::
| Name | YOLOv9 Inference Time |
| ------ | ------------------------------------ |
| M4 | s-320: 10 ms |
| M3 Pro | t-320: 6 ms s-320: 8 ms s-640: 20 ms |
| M1     | s-320: 9 ms                          |
### ROCm - AMD GPU
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
With the [ROCm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------- | ------------------------- |
| AMD 780M | 320: ~ 14 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
| Name | YOLOv9 Inference Time | YOLO-NAS Inference Time |
| --------- | --------------------------- | ------------------------- |
| AMD 780M | t-320: ~ 14 ms s-320: 20 ms | 320: ~ 25 ms 640: ~ 50 ms |
| AMD 8700G | | 320: ~ 20 ms 640: ~ 40 ms |
## Community Supported Detectors
### MemryX MX3
Frigate supports the MemryX MX3 M.2 AI Acceleration Module on compatible hardware platforms, including both x86 (Intel/AMD) and ARM-based SBCs such as Raspberry Pi 5.
A single MemryX MX3 module is capable of handling multiple camera streams using the default models, making it sufficient for most users. For larger deployments with more cameras or bigger models, multiple MX3 modules can be used. Frigate supports multi-detector configurations, allowing you to connect multiple MX3 modules to scale inference capacity.
Detailed information is available [in the detector docs](/configuration/object_detectors#memryx-mx3).
**Default Model Configuration:**
- Default model is **YOLO-NAS-Small**.
The MX3 is a pipelined architecture, so the maximum frames per second it supports (and thus the number of cameras it can serve) cannot be calculated as `1/latency` (1/"Inference Time") and is measured separately. When estimating how many camera streams your configuration may support, use the **MX3 Total FPS** column as an approximation of the detector's limit, not the Inference Time (see the rough sizing sketch below the table).
| Model | Input Size | MX3 Inference Time | MX3 Total FPS |
| -------------------- | ---------- | ------------------ | ------------- |
| YOLO-NAS-Small | 320 | ~ 9 ms | ~ 378 |
| YOLO-NAS-Small | 640 | ~ 21 ms | ~ 138 |
| YOLOv9s | 320 | ~ 16 ms | ~ 382 |
| YOLOv9s | 640 | ~ 41 ms | ~ 110 |
| YOLOX-Small | 640 | ~ 16 ms | ~ 263 |
| SSDlite MobileNet v2 | 320 | ~ 5 ms | ~ 1056 |
Inference speeds may vary depending on the host platform. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which may limit total FPS.
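As a rough, back-of-the-envelope sizing aid (an assumption for illustration, not an official formula), you can divide the MX3 Total FPS for your chosen model by the average detection rate you expect per camera:

```python
def estimate_max_cameras(total_fps: float, detect_fps_per_camera: float = 5.0, headroom: float = 0.8) -> int:
    """Rough estimate of how many camera streams a detector can serve.

    total_fps: measured detector throughput (e.g. the MX3 Total FPS column above).
    detect_fps_per_camera: average detection frames each camera actually sends
        (detection only runs when motion or objects are present, so this is
        usually well below the stream frame rate) -- an assumed value.
    headroom: keep some margin for bursts.
    """
    return int((total_fps * headroom) // detect_fps_per_camera)

# YOLO-NAS-Small @ 320 on an MX3, assuming ~5 detection fps per camera on average:
print(estimate_max_cameras(378))  # -> 60
```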
### Nvidia Jetson
Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful Jetson Orin AGX. It will [make use of the Jetson's hardware media engine](/configuration/hardware_acceleration_video#nvidia-jetson-orin-agx-orin-nx-orin-nano-xavier-agx-xavier-nx-tx2-tx1-nano) when configured with the [appropriate presets](/configuration/ffmpeg_presets#hwaccel-presets), and will make use of the Jetson's GPU and DLA for object detection when configured with the [TensorRT detector](/configuration/object_detectors#nvidia-tensorrt-detector).

View File

@@ -132,6 +132,77 @@ If you are using `docker run`, add this option to your command `--device /dev/ha
Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
### MemryX MX3
The MemryX MX3 Accelerator is available in the M.2 2280 form factor (like an NVMe SSD), and supports a variety of configurations:
- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
- Multi-M.2 PCIe carrier cards
#### Configuration
#### Installation
To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).
Then follow these steps for installing the correct driver/runtime configuration:
1. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/memryx/user_installation.sh).
2. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.
#### Setup
To set up Frigate, follow the default installation instructions and use the standard image, for example `ghcr.io/blakeblackshear/frigate:stable`.
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
```yaml
devices:
- /dev/memx0
```
During configuration, you must run Docker in privileged mode and ensure the container can access the `mxa_manager` service.
In your `docker-compose.yml`, also add:
```yaml
privileged: true
volumes:
- /run/mxa_manager:/run/mxa_manager
```
If you can't use Docker Compose, you can run the container with something similar to this:
```bash
docker run -d \
--name frigate-memx \
--restart=unless-stopped \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--shm-size=256m \
-v /path/to/your/storage:/media/frigate \
-v /path/to/your/config:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /run/mxa_manager:/run/mxa_manager \
-e FRIGATE_RTSP_PASSWORD='password' \
--privileged=true \
-p 8971:8971 \
-p 8554:8554 \
-p 5000:5000 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
--device /dev/memx0 \
ghcr.io/blakeblackshear/frigate:stable
```
#### Configuration
Finally, configure [hardware object detection](/configuration/object_detectors#memryx-mx3) to complete the setup.
### Rockchip platform
Make sure that you use a Linux distribution that comes with the Rockchip BSP kernel 5.10 or 6.1 and the necessary drivers (especially rkvdec2 and rknpu). To check, enter the following commands:
@@ -185,6 +256,37 @@ or add these options to your `docker run` command:
Next, you should configure [hardware object detection](/configuration/object_detectors#rockchip-platform) and [hardware video processing](/configuration/hardware_acceleration_video#rockchip-platform).
### Synaptics
- SL1680
#### Setup
Follow Frigate's default installation instructions, but use a Docker image with the `-synaptics` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-synaptics`.
Next, you need to grant Docker permissions to access your hardware:
- During the configuration process, you should run Docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your `docker run` command.
```yaml
devices:
- /dev/synap
- /dev/video0
- /dev/video1
```
or add these options to your `docker run` command:
```
--device /dev/synap \
--device /dev/video0 \
--device /dev/video1
```
#### Configuration
Next, you should configure [hardware object detection](/configuration/object_detectors#synaptics) and [hardware video processing](/configuration/hardware_acceleration_video#synaptics).
## Docker
Running through Docker with Docker Compose is the recommended install method.

View File

@@ -3,17 +3,15 @@ id: configuring_go2rtc
title: Configuring go2rtc
---
# Configuring go2rtc
Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect directly to your cameras. However, adding go2rtc to your configuration is required for the following features:
- WebRTC or MSE for live viewing with audio and higher resolutions and frame rates than the jsmpeg stream, which is limited to the detect stream and does not support audio
- Live stream support for cameras in the Home Assistant integration
- RTSP relay for use with other consumers to reduce the number of connections to your camera streams
# Setup a go2rtc stream
## Setup a go2rtc stream
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#module-streams), not just rtsp.
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#module-streams), not just rtsp.
:::tip
@@ -49,8 +47,8 @@ After adding this to the config, restart Frigate and try to watch the live strea
- Check Video Codec:
- If the camera stream works in go2rtc but not in your browser, the video codec might be unsupported.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.9#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
- If using H265, switch to H264. Refer to [video codec compatibility](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#codecs-madness) in go2rtc documentation.
- If unable to switch from H265 to H264, or if the stream format is different (e.g., MJPEG), re-encode the video using [FFmpeg parameters](https://github.com/AlexxIT/go2rtc/tree/v1.9.10#source-ffmpeg). It supports rotating and resizing video feeds and hardware acceleration. Keep in mind that transcoding video from one format to another is a resource intensive task and you may be better off using the built-in jsmpeg view.
```yaml
go2rtc:
streams:
@@ -111,11 +109,11 @@ section.
:::
## Next steps
### Next steps
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.
## Important considerations
## Homekit Configuration
If you are configuring go2rtc to publish HomeKit camera streams, the configuration is written to the `/dev/shm/go2rtc.yaml` file inside the container when pairing completes. These changes must be manually copied across to the `go2rtc` section of your Frigate configuration in order to persist through restarts.
To add camera streams to HomeKit, Frigate must be configured in Docker to use `host` networking mode. Once that is done, you can use the go2rtc WebUI (accessed via port 1984, which is disabled by default) to export a camera to HomeKit. Any changes made will automatically be saved to `/config/go2rtc_homekit.yml`.

View File

@@ -208,6 +208,20 @@ Message published for each changed review item. The first message is published w
}
```
### `frigate/triggers`
Message published when a trigger defined in a camera's `semantic_search` configuration fires.
```json
{
"name": "car_trigger",
"camera": "driveway",
"event_id": "1751565549.853251-b69j73",
"type": "thumbnail",
"score": 0.85
}
```
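A minimal consumer sketch for this topic, assuming the paho-mqtt 2.x client and an unauthenticated broker on `localhost:1883` (adjust host, port, and credentials for your setup):

```python
import json

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Payload matches the JSON structure shown above.
    payload = json.loads(msg.payload)
    print(f"Trigger '{payload['name']}' fired on {payload['camera']} (score {payload['score']})")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)  # assumed broker address
client.subscribe("frigate/triggers")
client.loop_forever()
```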
### `frigate/stats`
Same data available at `/api/stats` published at a configurable interval.
@@ -226,6 +240,14 @@ Topic with current state of notifications. Published values are `ON` and `OFF`.
## Frigate Camera Topics
### `frigate/<camera_name>/<role>/status`
Publishes the current health status of each role that is enabled (`audio`, `detect`, `record`). Possible values are:
- `online`: Stream is running and being processed
- `offline`: Stream is offline and is being restarted
- `disabled`: Camera is currently disabled
### `frigate/<camera_name>/<object_name>`
Publishes the count of objects for the camera for use as a sensor in Home Assistant.
@@ -259,6 +281,8 @@ The height and crop of snapshots can be configured in the config.
Publishes "ON" when a type of audio is detected and "OFF" when it is not for the camera for use as a sensor in Home Assistant.
`all` can be used as the audio_type for the status of all audio types.
### `frigate/<camera_name>/audio/dBFS`
Publishes the dBFS value for audio detected on this camera.
@@ -271,6 +295,12 @@ Publishes the rms value for audio detected on this camera.
**NOTE:** Requires audio detection to be enabled
### `frigate/<camera_name>/audio/transcription`
Publishes transcribed text for audio detected on this camera.
**NOTE:** Requires audio detection and transcription to be enabled
### `frigate/<camera_name>/enabled/set`
Topic to turn Frigate's processing of a camera on and off. Expected values are `ON` and `OFF`.
@@ -393,6 +423,22 @@ Topic to turn review detections for a camera on or off. Expected values are `ON`
Topic with current state of review detections for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/object_descriptions/set`
Topic to turn generative AI object descriptions for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/object_descriptions/state`
Topic with current state of generative AI object descriptions for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/review_descriptions/set`
Topic to turn generative AI review descriptions for a camera on or off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/review_descriptions/state`
Topic with current state of generative AI review descriptions for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/birdseye/set`
Topic to turn Birdseye for a camera on and off. Expected values are `ON` and `OFF`. Birdseye mode

docs/package-lock.json (generated)
File diff suppressed because it is too large

View File

@@ -5,14 +5,14 @@ import frigateHttpApiSidebar from "./docs/integrations/api/sidebar";
const sidebars: SidebarsConfig = {
docs: {
Frigate: [
'frigate/index',
'frigate/hardware',
'frigate/planning_setup',
'frigate/installation',
'frigate/updating',
'frigate/camera_setup',
'frigate/video_pipeline',
'frigate/glossary',
"frigate/index",
"frigate/hardware",
"frigate/planning_setup",
"frigate/installation",
"frigate/updating",
"frigate/camera_setup",
"frigate/video_pipeline",
"frigate/glossary",
],
Guides: [
"guides/getting_started",
@@ -28,7 +28,7 @@ const sidebars: SidebarsConfig = {
{
type: "link",
label: "Go2RTC Configuration Reference",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.9#configuration",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.9.10#configuration",
} as PropSidebarItemLink,
],
Detectors: [
@@ -37,10 +37,36 @@ const sidebars: SidebarsConfig = {
],
Enrichments: [
"configuration/semantic_search",
"configuration/genai",
"configuration/face_recognition",
"configuration/license_plate_recognition",
"configuration/bird_classification",
{
type: "category",
label: "Custom Classification",
link: {
type: "generated-index",
title: "Custom Classification",
description: "Configuration for custom classification models",
},
items: [
"configuration/custom_classification/state_classification",
"configuration/custom_classification/object_classification",
],
},
{
type: "category",
label: "Generative AI",
link: {
type: "generated-index",
title: "Generative AI",
description: "Generative AI Features",
},
items: [
"configuration/genai/genai_config",
"configuration/genai/genai_review",
"configuration/genai/genai_objects",
],
},
],
Cameras: [
"configuration/cameras",
@@ -93,11 +119,11 @@ const sidebars: SidebarsConfig = {
"configuration/metrics",
"integrations/third_party_extensions",
],
'Frigate+': [
'plus/index',
'plus/annotating',
'plus/first_model',
'plus/faq',
"Frigate+": [
"plus/index",
"plus/annotating",
"plus/first_model",
"plus/faq",
],
Troubleshooting: [
"troubleshooting/faqs",

View File

@@ -1,5 +1,6 @@
import argparse
import faulthandler
import multiprocessing as mp
import signal
import sys
import threading
@@ -15,12 +16,17 @@ from frigate.util.config import find_config_file
def main() -> None:
manager = mp.Manager()
faulthandler.enable()
# Setup the logging thread
setup_logging()
setup_logging(manager)
threading.current_thread().name = "frigate"
stop_event = mp.Event()
# send stop event on SIGINT
signal.signal(signal.SIGINT, lambda sig, frame: stop_event.set())
# Make sure we exit cleanly on SIGTERM.
signal.signal(signal.SIGTERM, lambda sig, frame: sys.exit())
@@ -93,7 +99,14 @@ def main() -> None:
print("*************************************************************")
print("*** End Config Validation Errors ***")
print("*************************************************************")
sys.exit(1)
# attempt to start Frigate in recovery mode
try:
config = FrigateConfig.load(install=True, safe_load=True)
print("Starting Frigate in safe mode.")
except ValidationError:
print("Unable to start Frigate in safe mode.")
sys.exit(1)
if args.validate_config:
print("*************************************************************")
print("*** Your config file is valid. ***")
@@ -101,8 +114,23 @@ def main() -> None:
sys.exit(0)
# Run the main application.
FrigateApp(config).start()
FrigateApp(config, manager, stop_event).start()
if __name__ == "__main__":
mp.set_forkserver_preload(
[
# Standard library and core dependencies
"sqlite3",
# Third-party libraries commonly used in Frigate
"numpy",
"cv2",
"peewee",
"zmq",
"ruamel.yaml",
# Frigate core modules
"frigate.camera.maintainer",
]
)
mp.set_start_method("forkserver", force=True)
main()

View File

@@ -6,11 +6,12 @@ import json
import logging
import os
import traceback
import urllib
from datetime import datetime, timedelta
from functools import reduce
from io import StringIO
from pathlib import Path as FilePath
from typing import Any, Optional
from typing import Any, Dict, List, Optional
import aiofiles
import requests
@@ -20,7 +21,7 @@ from fastapi.encoders import jsonable_encoder
from fastapi.params import Depends
from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
from markupsafe import escape
from peewee import SQL, operator
from peewee import SQL, fn, operator
from pydantic import ValidationError
from frigate.api.auth import require_role
@@ -28,12 +29,18 @@ from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryPa
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateTopic,
)
from frigate.models import Event, Timeline
from frigate.stats.prometheus import get_metrics, update_metrics
from frigate.util.builtin import (
clean_camera_user_pass,
flatten_config_data,
get_tz_modifiers,
update_yaml_from_url,
process_config_query_string,
update_yaml_file_bulk,
)
from frigate.util.config import find_config_file
from frigate.util.services import (
@@ -123,7 +130,14 @@ def metrics(request: Request):
"""Expose Prometheus metrics endpoint and update metrics with latest stats"""
# Retrieve the latest statistics and update the Prometheus metrics
stats = request.app.stats_emitter.get_latest_stats()
update_metrics(stats)
# query DB for count of events by camera, label
event_counts: List[Dict[str, Any]] = (
Event.select(Event.camera, Event.label, fn.Count())
.group_by(Event.camera, Event.label)
.dicts()
)
update_metrics(stats=stats, event_counts=event_counts)
content, content_type = get_metrics()
return Response(content=content, media_type=content_type)
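As context for how grouped event counts like this typically surface in Prometheus, here is a standalone sketch using `prometheus_client` with a hypothetical metric name; Frigate's actual metric names and `update_metrics` implementation live in `frigate.stats.prometheus` and are not shown in this diff.

```python
from prometheus_client import CollectorRegistry, Gauge, generate_latest

registry = CollectorRegistry()
# Hypothetical metric name for illustration only.
event_gauge = Gauge("camera_events", "Event count by camera and label",
                    ["camera", "label"], registry=registry)

def export_event_counts(event_counts):
    """event_counts: iterable of dicts like {'camera': 'driveway', 'label': 'car', 'count': 12}."""
    for row in event_counts:
        event_gauge.labels(camera=row["camera"], label=row["label"]).set(row["count"])

export_event_counts([{"camera": "driveway", "label": "car", "count": 12}])
print(generate_latest(registry).decode())
```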
@@ -354,14 +368,37 @@ def config_set(request: Request, body: AppConfigSetBody):
with open(config_file, "r") as f:
old_raw_config = f.read()
f.close()
try:
update_yaml_from_url(config_file, str(request.url))
updates = {}
# process query string parameters (takes precedence over body.config_data)
parsed_url = urllib.parse.urlparse(str(request.url))
query_string = urllib.parse.parse_qs(parsed_url.query, keep_blank_values=True)
# Filter out empty keys but keep blank values for non-empty keys
query_string = {k: v for k, v in query_string.items() if k}
if query_string:
updates = process_config_query_string(query_string)
elif body.config_data:
updates = flatten_config_data(body.config_data)
if not updates:
return JSONResponse(
content=(
{"success": False, "message": "No configuration data provided"}
),
status_code=400,
)
# apply all updates in a single operation
update_yaml_file_bulk(config_file, updates)
# validate the updated config
with open(config_file, "r") as f:
new_raw_config = f.read()
f.close()
# Validate the config schema
try:
config = FrigateConfig.parse(new_raw_config)
except Exception:
@@ -385,8 +422,25 @@ def config_set(request: Request, body: AppConfigSetBody):
status_code=500,
)
if body.requires_restart == 0:
if body.requires_restart == 0 or body.update_topic:
old_config: FrigateConfig = request.app.frigate_config
request.app.frigate_config = config
if body.update_topic and body.update_topic.startswith("config/cameras/"):
_, _, camera, field = body.update_topic.split("/")
if field == "add":
settings = config.cameras[camera]
elif field == "remove":
settings = old_config.cameras[camera]
else:
settings = config.get_nested_object(body.update_topic)
request.app.config_publisher.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum[field], camera),
settings,
)
return JSONResponse(
content=(
{

View File

@@ -11,7 +11,7 @@ import secrets
import time
from datetime import datetime
from pathlib import Path
from typing import List
from typing import List, Optional
from fastapi import APIRouter, Depends, HTTPException, Request, Response
from fastapi.responses import JSONResponse, RedirectResponse
@@ -33,7 +33,6 @@ from frigate.models import User
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.auth])
VALID_ROLES = ["admin", "viewer"]
class RateLimiter:
@@ -204,6 +203,7 @@ async def get_current_user(request: Request):
def require_role(required_roles: List[str]):
async def role_checker(request: Request):
proxy_config: ProxyConfig = request.app.frigate_config.proxy
config_roles = list(request.app.frigate_config.auth.roles.keys())
# Get role from header (could be comma-separated)
role_header = request.headers.get("remote-role")
@@ -217,19 +217,123 @@ def require_role(required_roles: List[str]):
if not roles:
raise HTTPException(status_code=403, detail="Role not provided")
# Check if any role matches required_roles
if not any(role in required_roles for role in roles):
# enforce config roles
valid_roles = [r for r in roles if r in config_roles]
if not valid_roles:
raise HTTPException(
status_code=403,
detail=f"Role {', '.join(roles)} not authorized. Required: {', '.join(required_roles)}",
detail=f"No valid roles found in {roles}. Required: {', '.join(required_roles)}. Available: {', '.join(config_roles)}",
)
# Return the first matching role
return next((role for role in roles if role in required_roles), roles[0])
if not any(role in required_roles for role in valid_roles):
raise HTTPException(
status_code=403,
detail=f"Role {', '.join(valid_roles)} not authorized. Required: {', '.join(required_roles)}",
)
return next(
(role for role in valid_roles if role in required_roles), valid_roles[0]
)
return role_checker
def resolve_role(
headers: dict, proxy_config: ProxyConfig, config_roles: set[str]
) -> str:
"""
Determine the effective role for a request based on proxy headers and configuration.
Order of resolution:
1. If a role header is defined in proxy_config.header_map.role:
- If a role_map is configured, treat the header as group claims
(split by proxy_config.separator) and map to roles.
- If no role_map is configured, treat the header as role names directly.
2. If no valid role is found, return proxy_config.default_role if it's valid in config_roles, else 'viewer'.
Args:
headers (dict): Incoming request headers (case-insensitive).
proxy_config (ProxyConfig): Proxy configuration.
config_roles (set[str]): Set of valid roles from config.
Returns:
str: Resolved role (one of config_roles or validated default).
"""
default_role = proxy_config.default_role
role_header = proxy_config.header_map.role
# Validate default_role against config; fallback to 'viewer' if invalid
validated_default = default_role if default_role in config_roles else "viewer"
if not config_roles:
validated_default = "viewer" # Edge case: no roles defined
if not role_header:
logger.debug(
"No role header configured in proxy_config.header_map. Returning validated default role '%s'.",
validated_default,
)
return validated_default
raw_value = headers.get(role_header, "")
logger.debug("Raw role header value from '%s': %r", role_header, raw_value)
if not raw_value:
logger.debug(
"Role header missing or empty. Returning validated default role '%s'.",
validated_default,
)
return validated_default
# role_map configured, treat header as group claims
if proxy_config.header_map.role_map:
groups = [
g.strip() for g in raw_value.split(proxy_config.separator) if g.strip()
]
logger.debug("Parsed groups from role header: %s", groups)
matched_roles = {
role_name
for role_name, required_groups in proxy_config.header_map.role_map.items()
if any(group in groups for group in required_groups)
}
logger.debug("Matched roles from role_map: %s", matched_roles)
if matched_roles:
resolved = next(
(r for r in config_roles if r in matched_roles), validated_default
)
logger.debug("Resolved role (with role_map) to '%s'.", resolved)
return resolved
logger.debug(
"No role_map match for groups '%s'. Using validated default role '%s'.",
raw_value,
validated_default,
)
return validated_default
# no role_map, treat as role names directly
roles_from_header = [
r.strip().lower() for r in raw_value.split(proxy_config.separator) if r.strip()
]
logger.debug("Parsed roles directly from header: %s", roles_from_header)
resolved = next(
(r for r in config_roles if r in roles_from_header),
validated_default,
)
if resolved == validated_default and roles_from_header:
logger.debug(
"Provided proxy role header values '%s' did not contain a valid role. Using validated default role '%s'.",
raw_value,
validated_default,
)
else:
logger.debug("Resolved role (direct header) to '%s'.", resolved)
return resolved
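A quick illustration of the resolution order described in the docstring above, using `SimpleNamespace` stand-ins for the proxy config (only the attributes `resolve_role` actually reads are faked here; the real `ProxyConfig` model has more fields):

```python
from types import SimpleNamespace

proxy_config = SimpleNamespace(
    default_role="viewer",
    separator=",",
    header_map=SimpleNamespace(
        role="x-forwarded-groups",
        role_map={"admin": ["sysadmins"], "viewer": ["household"]},
    ),
)
config_roles = {"admin", "viewer"}

# Group claim in the header maps to a role via role_map -> "admin"
print(resolve_role({"x-forwarded-groups": "sysadmins"}, proxy_config, config_roles))
# Unknown groups fall back to the validated default role -> "viewer"
print(resolve_role({"x-forwarded-groups": "guests"}, proxy_config, config_roles))
```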
# Endpoints
@router.get("/auth")
def auth(request: Request):
@@ -266,22 +370,11 @@ def auth(request: Request):
else "anonymous"
)
role_header = proxy_config.header_map.role
role = (
request.headers.get(role_header, default=proxy_config.default_role)
if role_header
else proxy_config.default_role
)
# if comma-separated with "admin", use "admin",
# if comma-separated with "viewer", use "viewer",
# else use default role
roles = [r.strip() for r in role.split(proxy_config.separator)] if role else []
success_response.headers["remote-role"] = next(
(r for r in VALID_ROLES if r in roles), proxy_config.default_role
)
# parse header and resolve a valid role
config_roles_set = set(auth_config.roles.keys())
role = resolve_role(request.headers, proxy_config, config_roles_set)
success_response.headers["remote-role"] = role
return success_response
# now apply authentication
@@ -373,7 +466,13 @@ def profile(request: Request):
username = request.headers.get("remote-user", "anonymous")
role = request.headers.get("remote-role", "viewer")
return JSONResponse(content={"username": username, "role": role})
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
return JSONResponse(
content={"username": username, "role": role, "allowed_cameras": allowed_cameras}
)
@router.get("/logout")
@@ -404,8 +503,12 @@ def login(request: Request, body: AppPostLoginBody):
password_hash = db_user.password_hash
if verify_password(password, password_hash):
role = getattr(db_user, "role", "viewer")
if role not in VALID_ROLES:
role = "viewer" # Enforce valid roles
config_roles_set = set(request.app.frigate_config.auth.roles.keys())
if role not in config_roles_set:
logger.warning(
f"User {db_user.username} has an invalid role {role}, falling back to 'viewer'."
)
role = "viewer"
expiration = int(time.time()) + JWT_SESSION_LENGTH
encoded_jwt = create_encoded_jwt(user, role, expiration, request.app.jwt_token)
response = Response("", 200)
@@ -430,11 +533,17 @@ def create_user(
body: AppPostUsersBody,
):
HASH_ITERATIONS = request.app.frigate_config.auth.hash_iterations
config_roles = list(request.app.frigate_config.auth.roles.keys())
if not re.match("^[A-Za-z0-9._]+$", body.username):
return JSONResponse(content={"message": "Invalid username"}, status_code=400)
role = body.role if body.role in VALID_ROLES else "viewer"
if body.role not in config_roles:
return JSONResponse(
content={"message": f"Role must be one of: {', '.join(config_roles)}"},
status_code=400,
)
role = body.role or "viewer"
password_hash = hash_password(body.password, iterations=HASH_ITERATIONS)
User.insert(
{
@@ -505,10 +614,52 @@ async def update_role(
return JSONResponse(
content={"message": "Cannot modify admin user's role"}, status_code=403
)
if body.role not in VALID_ROLES:
config_roles = list(request.app.frigate_config.auth.roles.keys())
if body.role not in config_roles:
return JSONResponse(
content={"message": "Role must be 'admin' or 'viewer'"}, status_code=400
content={"message": f"Role must be one of: {', '.join(config_roles)}"},
status_code=400,
)
User.set_by_id(username, {User.role: body.role})
return JSONResponse(content={"success": True})
async def require_camera_access(
camera_name: Optional[str] = None,
request: Request = None,
):
"""Dependency to enforce camera access based on user role."""
if camera_name is None:
return # For lists, filter later
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
return current_user
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
allowed_cameras = User.get_allowed_cameras(role, roles_dict, all_camera_names)
# Admin or full access bypasses
if role == "admin" or not roles_dict.get(role):
return
if camera_name not in allowed_cameras:
raise HTTPException(
status_code=403,
detail=f"Access denied to camera '{camera_name}'. Allowed: {allowed_cameras}",
)
async def get_allowed_cameras_for_filter(request: Request):
"""Dependency to get allowed_cameras for filtering lists."""
current_user = await get_current_user(request)
if isinstance(current_user, JSONResponse):
return [] # Unauthorized: no cameras
role = current_user["role"]
all_camera_names = set(request.app.frigate_config.cameras.keys())
roles_dict = request.app.frigate_config.auth.roles
return User.get_allowed_cameras(role, roles_dict, all_camera_names)

View File

@@ -14,10 +14,14 @@ from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import require_role
from frigate.api.defs.request.classification_body import RenameFaceBody
from frigate.api.defs.request.classification_body import (
AudioTranscriptionBody,
RenameFaceBody,
)
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.config.camera import DetectConfig
from frigate.const import FACE_DIR
from frigate.const import CLIPS_DIR, FACE_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event
from frigate.util.path import get_event_snapshot
@@ -384,3 +388,255 @@ def reindex_embeddings(request: Request):
},
status_code=500,
)
@router.put("/audio/transcribe")
def transcribe_audio(request: Request, body: AudioTranscriptionBody):
event_id = body.event_id
try:
event = Event.get(Event.id == event_id)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
return JSONResponse(
content=({"success": False, "message": message}), status_code=404
)
if not request.app.frigate_config.cameras[event.camera].audio_transcription.enabled:
message = f"Audio transcription is not enabled for {event.camera}."
logger.error(message)
return JSONResponse(
content=(
{
"success": False,
"message": message,
}
),
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
response = context.transcribe_audio(model_to_dict(event))
if response == "started":
return JSONResponse(
content={
"success": True,
"message": "Audio transcription has started.",
},
status_code=202, # 202 Accepted
)
elif response == "in_progress":
return JSONResponse(
content={
"success": False,
"message": "Audio transcription for a speech event is currently in progress. Try again later.",
},
status_code=409, # 409 Conflict
)
else:
return JSONResponse(
content={
"success": False,
"message": "Failed to transcribe audio.",
},
status_code=500,
)
# custom classification training
@router.get("/classification/{name}/dataset")
def get_classification_dataset(name: str):
dataset_dict: dict[str, list[str]] = {}
dataset_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "dataset")
if not os.path.exists(dataset_dir):
return JSONResponse(status_code=200, content={})
for name in os.listdir(dataset_dir):
category_dir = os.path.join(dataset_dir, name)
if not os.path.isdir(category_dir):
continue
dataset_dict[name] = []
for file in filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(category_dir),
):
dataset_dict[name].append(file)
return JSONResponse(status_code=200, content=dataset_dict)
@router.get("/classification/{name}/train")
def get_classification_images(name: str):
train_dir = os.path.join(CLIPS_DIR, sanitize_filename(name), "train")
if not os.path.exists(train_dir):
return JSONResponse(status_code=200, content=[])
return JSONResponse(
status_code=200,
content=list(
filter(
lambda f: (f.lower().endswith((".webp", ".png", ".jpg", ".jpeg"))),
os.listdir(train_dir),
)
),
)
@router.post("/classification/{name}/train")
async def train_configured_model(request: Request, name: str):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
context: EmbeddingsContext = request.app.embeddings
context.start_classification_training(name)
return JSONResponse(
content={"success": True, "message": "Started classification model training."},
status_code=200,
)
@router.post(
"/classification/{name}/dataset/{category}/delete",
dependencies=[Depends(require_role(["admin"]))],
)
def delete_classification_dataset_images(
request: Request, name: str, category: str, body: dict = None
):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
list_of_ids = json.get("ids", "")
folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", sanitize_filename(category)
)
for id in list_of_ids:
file_path = os.path.join(folder, sanitize_filename(id))
if os.path.isfile(file_path):
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
@router.post(
"/classification/{name}/dataset/categorize",
dependencies=[Depends(require_role(["admin"]))],
)
def categorize_classification_image(request: Request, name: str, body: dict = None):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
category = sanitize_filename(json.get("category", ""))
training_file_name = sanitize_filename(json.get("training_file", ""))
training_file = os.path.join(
CLIPS_DIR, sanitize_filename(name), "train", training_file_name
)
if training_file_name and not os.path.isfile(training_file):
return JSONResponse(
content=(
{
"success": False,
"message": f"Invalid filename or no file exists: {training_file_name}",
}
),
status_code=404,
)
new_name = f"{category}-{datetime.datetime.now().timestamp()}.png"
new_file_folder = os.path.join(
CLIPS_DIR, sanitize_filename(name), "dataset", category
)
if not os.path.exists(new_file_folder):
os.mkdir(new_file_folder)
# use opencv because webp images can not be used to train
img = cv2.imread(training_file)
cv2.imwrite(os.path.join(new_file_folder, new_name), img)
os.unlink(training_file)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)
@router.post(
"/classification/{name}/train/delete",
dependencies=[Depends(require_role(["admin"]))],
)
def delete_classification_train_images(request: Request, name: str, body: dict = None):
config: FrigateConfig = request.app.frigate_config
if name not in config.classification.custom:
return JSONResponse(
content=(
{
"success": False,
"message": f"{name} is not a known classification model.",
}
),
status_code=404,
)
json: dict[str, Any] = body or {}
list_of_ids = json.get("ids", "")
folder = os.path.join(CLIPS_DIR, sanitize_filename(name), "train")
for id in list_of_ids:
file_path = os.path.join(folder, sanitize_filename(id))
if os.path.isfile(file_path):
os.unlink(file_path)
return JSONResponse(
content=({"success": True, "message": "Successfully deleted faces."}),
status_code=200,
)

View File

@@ -1,7 +1,8 @@
from enum import Enum
from typing import Optional
from typing import Optional, Union
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
class Extension(str, Enum):
@@ -22,6 +23,7 @@ class MediaLatestFrameQueryParams(BaseModel):
zones: Optional[int] = None
mask: Optional[int] = None
motion: Optional[int] = None
paths: Optional[int] = None
regions: Optional[int] = None
quality: Optional[int] = 70
height: Optional[int] = None
@@ -51,3 +53,10 @@ class MediaMjpegFeedQueryParams(BaseModel):
class MediaRecordingsSummaryQueryParams(BaseModel):
timezone: str = "utc"
cameras: Optional[str] = "all"
class MediaRecordingsAvailabilityQueryParams(BaseModel):
cameras: str = "all"
before: Union[float, SkipJsonSchema[None]] = None
after: Union[float, SkipJsonSchema[None]] = None
scale: int = 30

View File

@@ -1,9 +1,13 @@
from typing import Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
from frigate.events.types import RegenerateDescriptionEnum
class RegenerateQueryParameters(BaseModel):
source: Optional[RegenerateDescriptionEnum] = RegenerateDescriptionEnum.thumbnails
force: Optional[bool] = Field(
default=False,
description="Force (re)generating the description even if GenAI is disabled for this camera.",
)

View File

@@ -1,10 +1,12 @@
from typing import Optional
from typing import Any, Dict, Optional
from pydantic import BaseModel
class AppConfigSetBody(BaseModel):
requires_restart: int = 1
update_topic: str | None = None
config_data: Optional[Dict[str, Any]] = None
class AppPutPasswordBody(BaseModel):

View File

@@ -3,3 +3,7 @@ from pydantic import BaseModel
class RenameFaceBody(BaseModel):
new_name: str
class AudioTranscriptionBody(BaseModel):
event_id: str

View File

@@ -2,6 +2,8 @@ from typing import List, Optional, Union
from pydantic import BaseModel, Field
from frigate.config.classification import TriggerType
class EventsSubLabelBody(BaseModel):
subLabel: str = Field(title="Sub label", max_length=100)
@@ -45,3 +47,9 @@ class EventsDeleteBody(BaseModel):
class SubmitPlusBody(BaseModel):
include_annotation: int = Field(default=1)
class TriggerEmbeddingBody(BaseModel):
type: TriggerType
data: str
threshold: float = Field(default=0.5, ge=0.0, le=1.0)

View File

@@ -1,5 +1,6 @@
"""Event apis."""
import base64
import datetime
import logging
import os
@@ -7,16 +8,23 @@ import random
import string
from functools import reduce
from pathlib import Path
from typing import List
from urllib.parse import unquote
import cv2
import numpy as np
from fastapi import APIRouter, Request
from fastapi.params import Depends
from fastapi.responses import JSONResponse
from pathvalidate import sanitize_filename
from peewee import JOIN, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import require_role
from frigate.api.auth import (
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.defs.query.events_query_parameters import (
DEFAULT_TIME_RANGE,
EventsQueryParams,
@@ -34,6 +42,7 @@ from frigate.api.defs.request.events_body import (
EventsLPRBody,
EventsSubLabelBody,
SubmitPlusBody,
TriggerEmbeddingBody,
)
from frigate.api.defs.response.event_response import (
EventCreateResponse,
@@ -44,11 +53,12 @@ from frigate.api.defs.response.event_response import (
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.const import CLIPS_DIR
from frigate.const import CLIPS_DIR, TRIGGER_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.models import Event, ReviewSegment, Timeline
from frigate.models import Event, ReviewSegment, Timeline, Trigger
from frigate.track.object_processing import TrackedObject
from frigate.util.builtin import get_tz_modifiers
from frigate.util.path import get_event_thumbnail_bytes
logger = logging.getLogger(__name__)
@@ -56,7 +66,10 @@ router = APIRouter(tags=[Tags.events])
@router.get("/events", response_model=list[EventResponse])
def events(params: EventsQueryParams = Depends()):
def events(
params: EventsQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
camera = params.camera
cameras = params.cameras
@@ -130,8 +143,14 @@ def events(params: EventsQueryParams = Depends()):
clauses.append((Event.camera == camera))
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Event.camera << camera_list))
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((Event.camera << camera_list))
if labels != "all":
label_list = labels.split(",")
@@ -161,43 +180,32 @@ def events(params: EventsQueryParams = Depends()):
clauses.append((sub_label_clause))
if recognized_license_plate != "all":
# use matching so joined recognized_license_plates are included
# for example a recognized license plate 'ABC123' would get events
# with recognized license plates 'ABC123' and 'ABC123, XYZ789'
recognized_license_plate_clauses = []
filtered_recognized_license_plates = recognized_license_plate.split(",")
clauses_for_plates = []
if "None" in filtered_recognized_license_plates:
filtered_recognized_license_plates.remove("None")
recognized_license_plate_clauses.append(
(Event.data["recognized_license_plate"].is_null())
clauses_for_plates.append(Event.data["recognized_license_plate"].is_null())
# regex vs exact matching
normal_plates = []
for plate in filtered_recognized_license_plates:
if plate.startswith("^") or any(ch in plate for ch in ".[]?+*"):
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").regexp(plate)
)
else:
normal_plates.append(plate)
# if there are any plain string plates, match them with IN
if normal_plates:
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").in_(normal_plates)
)
for recognized_license_plate in filtered_recognized_license_plates:
# Exact matching plus list inclusion
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
== recognized_license_plate
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*{recognized_license_plate},*"
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*, {recognized_license_plate}*"
)
)
recognized_license_plate_clause = reduce(
operator.or_, recognized_license_plate_clauses
)
clauses.append((recognized_license_plate_clause))
recognized_license_plate_clause = reduce(operator.or_, clauses_for_plates)
clauses.append(recognized_license_plate_clause)
if zones != "all":
# use matching so events with multiple zones
@@ -327,9 +335,17 @@ def events(params: EventsQueryParams = Depends()):
@router.get("/events/explore", response_model=list[EventResponse])
def events_explore(limit: int = 10):
def events_explore(
limit: int = 10,
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
# get distinct labels for all events
distinct_labels = Event.select(Event.label).distinct().order_by(Event.label)
distinct_labels = (
Event.select(Event.label)
.where(Event.camera << allowed_cameras)
.distinct()
.order_by(Event.label)
)
label_counts = {}
@@ -340,14 +356,18 @@ def events_explore(limit: int = 10):
# get most recent events for this label
label_events = (
Event.select()
.where(Event.label == label)
.where((Event.label == label) & (Event.camera << allowed_cameras))
.order_by(Event.start_time.desc())
.limit(limit)
.iterator()
)
# count total events for this label
label_counts[label] = Event.select().where(Event.label == label).count()
label_counts[label] = (
Event.select()
.where((Event.label == label) & (Event.camera << allowed_cameras))
.count()
)
yield from label_events
@@ -400,7 +420,7 @@ def events_explore(limit: int = 10):
@router.get("/event_ids", response_model=list[EventResponse])
def event_ids(ids: str):
async def event_ids(ids: str, request: Request):
ids = ids.split(",")
if not ids:
@@ -409,6 +429,16 @@ def event_ids(ids: str):
status_code=400,
)
for event_id in ids:
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": f"Event {event_id} not found"}),
status_code=404,
)
try:
events = Event.select().where(Event.id << ids).dicts().iterator()
return JSONResponse(list(events))
@@ -419,7 +449,11 @@ def event_ids(ids: str):
@router.get("/events/search")
def events_search(request: Request, params: EventsSearchQueryParams = Depends()):
def events_search(
request: Request,
params: EventsSearchQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
query = params.query
search_type = params.search_type
include_thumbnails = params.include_thumbnails
@@ -492,7 +526,13 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
event_filters = []
if cameras != "all":
event_filters.append((Event.camera << cameras.split(",")))
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
event_filters.append((Event.camera << list(filtered)))
else:
event_filters.append((Event.camera << allowed_cameras))
if labels != "all":
event_filters.append((Event.label << labels.split(",")))
@@ -511,42 +551,31 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
event_filters.append((reduce(operator.or_, zone_clauses)))
if recognized_license_plate != "all":
# use matching so joined recognized_license_plates are included
# for example an recognized_license_plate 'ABC123' would get events
# with recognized_license_plates 'ABC123' and 'ABC123, XYZ789'
recognized_license_plate_clauses = []
filtered_recognized_license_plates = recognized_license_plate.split(",")
clauses_for_plates = []
if "None" in filtered_recognized_license_plates:
filtered_recognized_license_plates.remove("None")
recognized_license_plate_clauses.append(
(Event.data["recognized_license_plate"].is_null())
clauses_for_plates.append(Event.data["recognized_license_plate"].is_null())
# regex vs exact matching
normal_plates = []
for plate in filtered_recognized_license_plates:
if plate.startswith("^") or any(ch in plate for ch in ".[]?+*"):
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").regexp(plate)
)
else:
normal_plates.append(plate)
# if there are any plain string plates, match them with IN
if normal_plates:
clauses_for_plates.append(
Event.data["recognized_license_plate"].cast("text").in_(normal_plates)
)
for recognized_license_plate in filtered_recognized_license_plates:
# Exact matching plus list inclusion
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
== recognized_license_plate
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*{recognized_license_plate},*"
)
)
recognized_license_plate_clauses.append(
(
Event.data["recognized_license_plate"].cast("text")
% f"*, {recognized_license_plate}*"
)
)
recognized_license_plate_clause = reduce(
operator.or_, recognized_license_plate_clauses
)
recognized_license_plate_clause = reduce(operator.or_, clauses_for_plates)
event_filters.append((recognized_license_plate_clause))
if after:
@@ -756,7 +785,10 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
@router.get("/events/summary")
def events_summary(params: EventsSummaryQueryParams = Depends()):
def events_summary(
params: EventsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
tz_name = params.timezone
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(tz_name)
has_clip = params.has_clip
@@ -788,7 +820,7 @@ def events_summary(params: EventsSummaryQueryParams = Depends()):
Event.zones,
fn.COUNT(Event.id).alias("count"),
)
.where(reduce(operator.and_, clauses))
.where(reduce(operator.and_, clauses) & (Event.camera << allowed_cameras))
.group_by(
Event.camera,
Event.label,
@@ -803,9 +835,11 @@ def events_summary(params: EventsSummaryQueryParams = Depends()):
@router.get("/events/{event_id}", response_model=EventResponse)
def event(event_id: str):
async def event(event_id: str, request: Request):
try:
return model_to_dict(Event.get(Event.id == event_id))
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
return model_to_dict(event)
except DoesNotExist:
return JSONResponse(content="Event not found", status_code=404)
@@ -834,7 +868,7 @@ def set_retain(event_id: str):
@router.post("/events/{event_id}/plus", response_model=EventUploadPlusResponse)
def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
async def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
logger.error(message)
@@ -852,6 +886,7 @@ def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
@@ -946,7 +981,7 @@ def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
@router.put("/events/{event_id}/false_positive", response_model=EventUploadPlusResponse)
def false_positive(request: Request, event_id: str):
async def false_positive(request: Request, event_id: str):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
logger.error(message)
@@ -962,6 +997,7 @@ def false_positive(request: Request, event_id: str):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
message = f"Event {event_id} not found"
logger.error(message)
@@ -985,7 +1021,7 @@ def false_positive(request: Request, event_id: str):
)
if not event.plus_id:
plus_response = send_to_plus(request, event_id)
plus_response = await send_to_plus(request, event_id)
if plus_response.status_code != 200:
return plus_response
# need to refetch the event now that it has a plus_id
@@ -1039,9 +1075,10 @@ def false_positive(request: Request, event_id: str):
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def delete_retain(event_id: str):
async def delete_retain(event_id: str, request: Request):
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1062,13 +1099,14 @@ def delete_retain(event_id: str):
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def set_sub_label(
async def set_sub_label(
request: Request,
event_id: str,
body: EventsSubLabelBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
event = None
@@ -1099,7 +1137,7 @@ def set_sub_label(
new_score = None
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.sub_label, (event_id, new_sub_label, new_score)
(event_id, new_sub_label, new_score), EventMetadataTypeEnum.sub_label.value
)
return JSONResponse(
@@ -1116,13 +1154,14 @@ def set_sub_label(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def set_plate(
async def set_plate(
request: Request,
event_id: str,
body: EventsLPRBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
event = None
@@ -1153,7 +1192,8 @@ def set_plate(
new_score = None
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.recognized_license_plate, (event_id, new_plate, new_score)
(event_id, "recognized_license_plate", new_plate, new_score),
EventMetadataTypeEnum.attribute.value,
)
return JSONResponse(
@@ -1170,13 +1210,14 @@ def set_plate(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def set_description(
async def set_description(
request: Request,
event_id: str,
body: EventsDescriptionBody,
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1221,11 +1262,12 @@ def set_description(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def regenerate_description(
async def regenerate_description(
request: Request, event_id: str, params: RegenerateQueryParameters = Depends()
):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
@@ -1234,9 +1276,10 @@ def regenerate_description(
camera_config = request.app.frigate_config.cameras[event.camera]
if camera_config.genai.enabled:
if camera_config.objects.genai.enabled or params.force:
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.regenerate_description, (event.id, params.source)
(event.id, params.source, params.force),
EventMetadataTypeEnum.regenerate_description.value,
)
return JSONResponse(
@@ -1263,9 +1306,42 @@ def regenerate_description(
)
def delete_single_event(event_id: str, request: Request) -> dict:
@router.post(
"/description/generate",
response_model=GenericResponse,
# dependencies=[Depends(require_role(["admin"]))],
)
def generate_description_embedding(
request: Request,
body: EventsDescriptionBody,
):
new_description = body.description
# If semantic search is enabled, update the index
if request.app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = request.app.embeddings
if len(new_description) > 0:
result = context.generate_description_embedding(
new_description,
)
return JSONResponse(
content=(
{
"success": True,
"message": f"Embedding for description is {result}"
if result
else "Failed to generate embedding",
}
),
status_code=200,
)
async def delete_single_event(event_id: str, request: Request) -> dict:
try:
event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
except DoesNotExist:
return {"success": False, "message": f"Event {event_id} not found"}
@@ -1295,8 +1371,8 @@ def delete_single_event(event_id: str, request: Request) -> dict:
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def delete_event(request: Request, event_id: str):
result = delete_single_event(event_id, request)
async def delete_event(request: Request, event_id: str):
result = await delete_single_event(event_id, request)
status_code = 200 if result["success"] else 404
return JSONResponse(content=result, status_code=status_code)
@@ -1306,7 +1382,7 @@ def delete_event(request: Request, event_id: str):
response_model=EventMultiDeleteResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def delete_events(request: Request, body: EventsDeleteBody):
async def delete_events(request: Request, body: EventsDeleteBody):
if not body.event_ids:
return JSONResponse(
content=({"success": False, "message": "No event IDs provided."}),
@@ -1317,7 +1393,7 @@ def delete_events(request: Request, body: EventsDeleteBody):
not_found_events = []
for event_id in body.event_ids:
result = delete_single_event(event_id, request)
result = await delete_single_event(event_id, request)
if result["success"]:
deleted_events.append(event_id)
else:
@@ -1361,7 +1437,6 @@ def create_event(
event_id = f"{now}-{rand_id}"
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.manual_event_create,
(
now,
camera_name,
@@ -1374,6 +1449,7 @@ def create_event(
body.source_type,
body.draw,
),
EventMetadataTypeEnum.manual_event_create.value,
)
return JSONResponse(
@@ -1393,11 +1469,13 @@ def create_event(
response_model=GenericResponse,
dependencies=[Depends(require_role(["admin"]))],
)
def end_event(request: Request, event_id: str, body: EventsEndBody):
async def end_event(request: Request, event_id: str, body: EventsEndBody):
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
end_time = body.end_time or datetime.datetime.now().timestamp()
request.app.event_metadata_updater.publish(
EventMetadataTypeEnum.manual_event_end, (event_id, end_time)
(event_id, end_time), EventMetadataTypeEnum.manual_event_end.value
)
except Exception:
return JSONResponse(
@@ -1411,3 +1489,430 @@ def end_event(request: Request, event_id: str, body: EventsEndBody):
content=({"success": True, "message": "Event successfully ended."}),
status_code=200,
)
@router.post(
"/trigger/embedding",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def create_trigger_embedding(
request: Request,
body: TriggerEmbeddingBody,
camera_name: str,
name: str,
):
try:
if not request.app.frigate_config.semantic_search.enabled:
return JSONResponse(
content={
"success": False,
"message": "Semantic search is not enabled",
},
status_code=400,
)
# Check if trigger already exists
if (
Trigger.select()
.where(Trigger.camera == camera_name, Trigger.name == name)
.exists()
):
return JSONResponse(
content={
"success": False,
"message": f"Trigger {camera_name}:{name} already exists",
},
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
# Generate embedding based on type
embedding = None
if body.type == "description":
embedding = context.generate_description_embedding(body.data)
elif body.type == "thumbnail":
try:
event: Event = Event.get(Event.id == body.data)
except DoesNotExist:
# TODO: check triggers directory for image
return JSONResponse(
content={
"success": False,
"message": f"Failed to fetch event for {body.type} trigger",
},
status_code=400,
)
# Skip the event if not an object
if event.data.get("type") != "object":
return JSONResponse(
    content={
        "success": False,
        "message": f"Event {body.data} is not a tracked object for {body.type} trigger",
    },
    status_code=400,
)
if thumbnail := get_event_thumbnail_bytes(event):
cursor = context.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[body.data],
)
row = cursor.fetchone() if cursor else None
if row:
query_embedding = row[0]
embedding = np.frombuffer(query_embedding, dtype=np.float32)
else:
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
if thumbnail is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to get thumbnail for {body.data} for {body.type} trigger",
},
status_code=400,
)
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to generate embedding for {body.type} trigger",
},
status_code=400,
)
if body.type == "thumbnail":
# Save image to the triggers directory
try:
os.makedirs(
os.path.join(TRIGGER_DIR, sanitize_filename(camera_name)),
exist_ok=True,
)
with open(
os.path.join(
TRIGGER_DIR,
sanitize_filename(camera_name),
f"{sanitize_filename(body.data)}.webp",
),
"wb",
) as f:
f.write(thumbnail)
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
Trigger.create(
camera=camera_name,
name=name,
type=body.type,
data=body.data,
threshold=body.threshold,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
triggering_event_id="",
last_triggered=None,
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger created successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error creating trigger embedding",
},
status_code=500,
)
@router.put(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def update_trigger_embedding(
request: Request,
camera_name: str,
name: str,
body: TriggerEmbeddingBody,
):
try:
if not request.app.frigate_config.semantic_search.enabled:
return JSONResponse(
content={
"success": False,
"message": "Semantic search is not enabled",
},
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
# Generate embedding based on type
embedding = None
if body.type == "description":
embedding = context.generate_description_embedding(body.data)
elif body.type == "thumbnail":
webp_file = sanitize_filename(body.data) + ".webp"
webp_path = os.path.join(
TRIGGER_DIR, sanitize_filename(camera_name), webp_file
)
try:
event: Event = Event.get(Event.id == body.data)
# Skip the event if not an object
if event.data.get("type") != "object":
return JSONResponse(
content={
"success": False,
"message": f"Event {body.data} is not a tracked object for {body.type} trigger",
},
status_code=400,
)
# Extract valid thumbnail
thumbnail = get_event_thumbnail_bytes(event)
with open(webp_path, "wb") as f:
f.write(thumbnail)
except DoesNotExist:
# check triggers directory for image
if not os.path.exists(webp_path):
return JSONResponse(
content={
"success": False,
"message": f"Failed to fetch event for {body.type} trigger",
},
status_code=400,
)
else:
# Load the image from the triggers directory
with open(webp_path, "rb") as f:
thumbnail = f.read()
embedding = context.generate_image_embedding(
body.data, (base64.b64encode(thumbnail).decode("ASCII"))
)
if embedding is None:
return JSONResponse(
content={
"success": False,
"message": f"Failed to generate embedding for {body.type} trigger",
},
status_code=400,
)
# Check if trigger exists for upsert
trigger = Trigger.get_or_none(
Trigger.camera == camera_name, Trigger.name == name
)
if trigger:
# Update existing trigger
if trigger.data != body.data: # Delete old thumbnail only if data changes
try:
os.remove(
os.path.join(
TRIGGER_DIR,
sanitize_filename(camera_name),
f"{trigger.data}.webp",
)
)
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
Trigger.update(
data=body.data,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
threshold=body.threshold,
triggering_event_id="",
last_triggered=None,
).where(Trigger.camera == camera_name, Trigger.name == name).execute()
else:
# Create new trigger (for rename case)
Trigger.create(
camera=camera_name,
name=name,
type=body.type,
data=body.data,
threshold=body.threshold,
model=request.app.frigate_config.semantic_search.model,
embedding=np.array(embedding, dtype=np.float32).tobytes(),
triggering_event_id="",
last_triggered=None,
)
if body.type == "thumbnail":
# Save image to the triggers directory
try:
camera_path = os.path.join(TRIGGER_DIR, sanitize_filename(camera_name))
os.makedirs(camera_path, exist_ok=True)
with open(
os.path.join(camera_path, f"{sanitize_filename(body.data)}.webp"),
"wb",
) as f:
f.write(thumbnail)
logger.debug(
f"Writing thumbnail for trigger with data {body.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to write thumbnail for trigger with data {body.data} in {camera_name}"
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger updated successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error updating trigger embedding",
},
status_code=500,
)
@router.delete(
"/trigger/embedding/{camera_name}/{name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def delete_trigger_embedding(
request: Request,
camera_name: str,
name: str,
):
try:
trigger = Trigger.get_or_none(
Trigger.camera == camera_name, Trigger.name == name
)
if trigger is None:
return JSONResponse(
content={
"success": False,
"message": f"Trigger {camera_name}:{name} not found",
},
status_code=500,
)
deleted = (
Trigger.delete()
.where(Trigger.camera == camera_name, Trigger.name == name)
.execute()
)
if deleted == 0:
return JSONResponse(
content={
"success": False,
"message": f"Error deleting trigger {camera_name}:{name}",
},
status_code=401,
)
try:
os.remove(
os.path.join(
TRIGGER_DIR, sanitize_filename(camera_name), f"{trigger.data}.webp"
)
)
logger.debug(
f"Deleted thumbnail for trigger with data {trigger.data} in {camera_name}."
)
except Exception as e:
logger.exception(e)
logger.error(
f"Failed to delete thumbnail for trigger with data {trigger.data} in {camera_name}"
)
return JSONResponse(
content={
"success": True,
"message": f"Trigger deleted successfully for {camera_name}:{name}",
},
status_code=200,
)
except Exception as e:
logger.exception(e)
return JSONResponse(
content={
"success": False,
"message": "Error deleting trigger embedding",
},
status_code=500,
)
@router.get(
"/triggers/status/{camera_name}",
response_model=dict,
dependencies=[Depends(require_role(["admin"]))],
)
def get_triggers_status(
camera_name: str,
):
try:
# Fetch all triggers for the specified camera
triggers = Trigger.select().where(Trigger.camera == camera_name)
# Prepare the response with trigger status
status = {
trigger.name: {
"last_triggered": trigger.last_triggered.timestamp()
if trigger.last_triggered
else None,
"triggering_event_id": trigger.triggering_event_id
if trigger.triggering_event_id
else None,
}
for trigger in triggers
}
if not status:
return JSONResponse(
content={
"success": False,
"message": f"No triggers found for camera {camera_name}",
},
status_code=404,
)
return {"success": True, "triggers": status}
except Exception as ex:
logger.exception(ex)
return JSONResponse(
content=({"success": False, "message": "Error fetching trigger status"}),
status_code=400,
)
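For reference, an illustrative response body for this status endpoint, with hypothetical trigger names, timestamp, and event id (the real values come from the Trigger rows):

{
    "success": True,
    "triggers": {
        "package_drop": {
            "last_triggered": 1727890000.0,
            "triggering_event_id": "1727889990.123456-abcd12",
        },
        "red_car": {"last_triggered": None, "triggering_event_id": None},
    },
}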

View File

@@ -4,6 +4,7 @@ import logging
import random
import string
from pathlib import Path
from typing import List
import psutil
from fastapi import APIRouter, Depends, Request
@@ -11,7 +12,11 @@ from fastapi.responses import JSONResponse
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import require_role
from frigate.api.auth import (
get_allowed_cameras_for_filter,
require_camera_access,
require_role,
)
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
@@ -30,12 +35,23 @@ router = APIRouter(tags=[Tags.export])
@router.get("/exports")
def get_exports():
exports = Export.select().order_by(Export.date.desc()).dicts().iterator()
def get_exports(
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
exports = (
Export.select()
.where(Export.camera << allowed_cameras)
.order_by(Export.date.desc())
.dicts()
.iterator()
)
return JSONResponse(content=[e for e in exports])
@router.post("/export/{camera_name}/start/{start_time}/end/{end_time}")
@router.post(
"/export/{camera_name}/start/{start_time}/end/{end_time}",
dependencies=[Depends(require_camera_access)],
)
def export_recording(
request: Request,
camera_name: str,
@@ -134,9 +150,10 @@ def export_recording(
@router.patch(
"/export/{event_id}/rename", dependencies=[Depends(require_role(["admin"]))]
)
def export_rename(event_id: str, body: ExportRenameBody):
async def export_rename(event_id: str, body: ExportRenameBody, request: Request):
try:
export: Export = Export.get(Export.id == event_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
@@ -162,9 +179,10 @@ def export_rename(event_id: str, body: ExportRenameBody):
@router.delete("/export/{event_id}", dependencies=[Depends(require_role(["admin"]))])
def export_delete(event_id: str):
async def export_delete(event_id: str, request: Request):
try:
export: Export = Export.get(Export.id == event_id)
await require_camera_access(export.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
@@ -215,9 +233,11 @@ def export_delete(event_id: str):
@router.get("/exports/{export_id}")
def get_export(export_id: str):
async def get_export(export_id: str, request: Request):
try:
return JSONResponse(content=model_to_dict(Export.get(Export.id == export_id)))
export = Export.get(Export.id == export_id)
await require_camera_access(export.camera, request=request)
return JSONResponse(content=model_to_dict(export))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export not found"},

View File

@@ -1,8 +1,10 @@
import logging
import re
from typing import Optional
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from joserfc.jwk import OctKey
from playhouse.sqliteq import SqliteQueueDatabase
from slowapi import _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
@@ -26,6 +28,7 @@ from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
)
from frigate.config import FrigateConfig
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.embeddings import EmbeddingsContext
from frigate.ptz.onvif import OnvifController
from frigate.stats.emitter import StatsEmitter
@@ -57,6 +60,7 @@ def create_fastapi_app(
onvif: OnvifController,
stats_emitter: StatsEmitter,
event_metadata_updater: EventMetadataPublisher,
config_publisher: CameraConfigUpdatePublisher,
):
logger.info("Starting FastAPI app")
app = FastAPI(
@@ -127,6 +131,27 @@ def create_fastapi_app(
app.onvif = onvif
app.stats_emitter = stats_emitter
app.event_metadata_updater = event_metadata_updater
app.jwt_token = get_jwt_secret() if frigate_config.auth.enabled else None
app.config_publisher = config_publisher
if frigate_config.auth.enabled:
secret = get_jwt_secret()
key_bytes = None
if isinstance(secret, str):
# If the secret looks like hex (e.g., generated by secrets.token_hex), use raw bytes
if len(secret) % 2 == 0 and re.fullmatch(r"[0-9a-fA-F]+", secret or ""):
try:
key_bytes = bytes.fromhex(secret)
except ValueError:
key_bytes = secret.encode("utf-8")
else:
key_bytes = secret.encode("utf-8")
elif isinstance(secret, (bytes, bytearray)):
key_bytes = bytes(secret)
else:
key_bytes = str(secret).encode("utf-8")
app.jwt_token = OctKey.import_key(key_bytes)
else:
app.jwt_token = None
return app
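A small standalone sketch of the secret-to-key normalization added here, assuming the stored secret may be either secrets.token_hex output or an arbitrary passphrase; hex-looking strings are decoded to raw bytes, everything else is UTF-8 encoded. The resulting bytes are what OctKey.import_key receives above.

import re

def secret_to_key_bytes(secret) -> bytes:
    """Normalize a JWT secret to raw bytes, mirroring the branch logic above."""
    if isinstance(secret, (bytes, bytearray)):
        return bytes(secret)
    secret = str(secret)
    # Even-length, hex-only strings are assumed to be secrets.token_hex output.
    if len(secret) % 2 == 0 and re.fullmatch(r"[0-9a-fA-F]+", secret or ""):
        try:
            return bytes.fromhex(secret)
        except ValueError:
            pass  # fall through to plain UTF-8 encoding
    return secret.encode("utf-8")

assert secret_to_key_bytes("deadbeef") == b"\xde\xad\xbe\xef"
assert secret_to_key_bytes("not-a-hex-secret") == b"not-a-hex-secret"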

View File

@@ -8,25 +8,27 @@ import os
import subprocess as sp
import time
from datetime import datetime, timedelta, timezone
from functools import reduce
from pathlib import Path as FilePath
from typing import Any
from typing import Any, List
from urllib.parse import unquote
import cv2
import numpy as np
import pytz
from fastapi import APIRouter, Path, Query, Request, Response
from fastapi.params import Depends
from fastapi import APIRouter, Depends, Path, Query, Request, Response
from fastapi.responses import FileResponse, JSONResponse, StreamingResponse
from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn
from peewee import DoesNotExist, fn, operator
from tzlocal import get_localzone_name
from frigate.api.auth import get_allowed_cameras_for_filter, require_camera_access
from frigate.api.defs.query.media_query_parameters import (
Extension,
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
MediaMjpegFeedQueryParams,
MediaRecordingsAvailabilityQueryParams,
MediaRecordingsSummaryQueryParams,
)
from frigate.api.defs.tags import Tags
@@ -48,12 +50,11 @@ from frigate.util.path import get_event_thumbnail_bytes
logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.media])
@router.get("/{camera_name}")
def mjpeg_feed(
@router.get("/{camera_name}", dependencies=[Depends(require_camera_access)])
async def mjpeg_feed(
request: Request,
camera_name: str,
params: MediaMjpegFeedQueryParams = Depends(),
@@ -109,7 +110,7 @@ def imagestream(
)
@router.get("/{camera_name}/ptz/info")
@router.get("/{camera_name}/ptz/info", dependencies=[Depends(require_camera_access)])
async def camera_ptz_info(request: Request, camera_name: str):
if camera_name in request.app.frigate_config.cameras:
# Schedule get_camera_info in the OnvifController's event loop
@@ -125,8 +126,10 @@ async def camera_ptz_info(request: Request, camera_name: str):
)
@router.get("/{camera_name}/latest.{extension}")
def latest_frame(
@router.get(
"/{camera_name}/latest.{extension}", dependencies=[Depends(require_camera_access)]
)
async def latest_frame(
request: Request,
camera_name: str,
extension: Extension,
@@ -139,6 +142,7 @@ def latest_frame(
"zones": params.zones,
"mask": params.mask,
"motion_boxes": params.motion,
"paths": params.paths,
"regions": params.regions,
}
quality = params.quality
@@ -233,8 +237,11 @@ def latest_frame(
)
@router.get("/{camera_name}/recordings/{frame_time}/snapshot.{format}")
def get_snapshot_from_recording(
@router.get(
"/{camera_name}/recordings/{frame_time}/snapshot.{format}",
dependencies=[Depends(require_camera_access)],
)
async def get_snapshot_from_recording(
request: Request,
camera_name: str,
frame_time: float,
@@ -320,8 +327,10 @@ def get_snapshot_from_recording(
)
@router.post("/{camera_name}/plus/{frame_time}")
def submit_recording_snapshot_to_plus(
@router.post(
"/{camera_name}/plus/{frame_time}", dependencies=[Depends(require_camera_access)]
)
async def submit_recording_snapshot_to_plus(
request: Request, camera_name: str, frame_time: str
):
if camera_name not in request.app.frigate_config.cameras:
@@ -409,11 +418,23 @@ def get_recordings_storage_usage(request: Request):
@router.get("/recordings/summary")
def all_recordings_summary(params: MediaRecordingsSummaryQueryParams = Depends()):
def all_recordings_summary(
request: Request,
params: MediaRecordingsSummaryQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Returns true/false by day indicating if recordings exist"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(params.timezone)
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
query = (
Recordings.select(
@@ -441,7 +462,7 @@ def all_recordings_summary(params: MediaRecordingsSummaryQueryParams = Depends()
.order_by(Recordings.start_time.desc())
)
if cameras != "all":
if params.cameras != "all":
query = query.where(Recordings.camera << cameras.split(","))
recording_days = query.namedtuples()
@@ -450,8 +471,10 @@ def all_recordings_summary(params: MediaRecordingsSummaryQueryParams = Depends()
return JSONResponse(content=days)
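Several endpoints in this changeset repeat the same requested-vs-allowed reduction; a hypothetical helper (the name resolve_camera_filter is illustrative, not from the codebase) that captures the pattern and returns None when nothing is permitted, so callers can short-circuit with an empty response:

from typing import List, Optional
from urllib.parse import unquote

def resolve_camera_filter(cameras_param: str, allowed_cameras: List[str]) -> Optional[List[str]]:
    """Reduce a ?cameras= query value to the subset the requesting user may access."""
    if cameras_param == "all":
        return list(allowed_cameras)
    requested = set(unquote(cameras_param).split(","))
    filtered = requested.intersection(allowed_cameras)
    return sorted(filtered) if filtered else None

# e.g. cameras="front%2Cback" with allowed ["front", "garage"] -> ["front"]
print(resolve_camera_filter("front%2Cback", ["front", "garage"]))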
@router.get("/{camera_name}/recordings/summary")
def recordings_summary(camera_name: str, timezone: str = "utc"):
@router.get(
"/{camera_name}/recordings/summary", dependencies=[Depends(require_camera_access)]
)
async def recordings_summary(camera_name: str, timezone: str = "utc"):
"""Returns hourly summary for recordings of given camera"""
hour_modifier, minute_modifier, seconds_offset = get_tz_modifiers(timezone)
recording_groups = (
@@ -512,8 +535,8 @@ def recordings_summary(camera_name: str, timezone: str = "utc"):
return JSONResponse(content=list(days.values()))
@router.get("/{camera_name}/recordings")
def recordings(
@router.get("/{camera_name}/recordings", dependencies=[Depends(require_camera_access)])
async def recordings(
camera_name: str,
after: float = (datetime.now() - timedelta(hours=1)).timestamp(),
before: float = datetime.now().timestamp(),
@@ -542,11 +565,87 @@ def recordings(
return JSONResponse(content=list(recordings))
@router.get("/recordings/unavailable", response_model=list[dict])
async def no_recordings(
request: Request,
params: MediaRecordingsAvailabilityQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get time ranges with no recordings."""
cameras = params.cameras
if cameras != "all":
requested = set(unquote(cameras).split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
cameras = ",".join(filtered)
else:
cameras = allowed_cameras
before = params.before or datetime.now().timestamp()
after = (
params.after
or (datetime.now() - timedelta(hours=1)).timestamp()
)
scale = params.scale
clauses = [(Recordings.start_time > after) & (Recordings.end_time < before)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((Recordings.camera << camera_list))
else:
camera_list = allowed_cameras
# Get recording time ranges
data: list[Recordings] = (
Recordings.select(Recordings.start_time, Recordings.end_time)
.where(reduce(operator.and_, clauses))
.order_by(Recordings.start_time.asc())
.dicts()
.iterator()
)
# Convert recordings to list of (start, end) tuples
recordings = [(r["start_time"], r["end_time"]) for r in data]
# Generate all time segments
current = after
no_recording_segments = []
current_start = None
while current < before:
segment_end = current + scale
# Check if segment overlaps with any recording
has_recording = any(
start <= segment_end and end >= current for start, end in recordings
)
if not has_recording:
if current_start is None:
current_start = current # Start a new gap
else:
if current_start is not None:
# End the current gap and append it
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(current)}
)
current_start = None
current = segment_end
# Append the last gap if it exists
if current_start is not None:
no_recording_segments.append(
{"start_time": int(current_start), "end_time": int(before)}
)
return JSONResponse(content=no_recording_segments)
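A condensed in-memory version of the gap scan above, assuming recordings are (start, end) timestamp tuples and scale is the step size in seconds; the window is walked in scale-sized steps and consecutive uncovered steps are merged into gaps.

def find_gaps(recordings, after, before, scale):
    """Return [(start, end)] ranges inside [after, before] not covered by any recording."""
    gaps = []
    current = after
    gap_start = None
    while current < before:
        segment_end = current + scale
        covered = any(start <= segment_end and end >= current for start, end in recordings)
        if not covered:
            if gap_start is None:
                gap_start = current  # start a new gap
        elif gap_start is not None:
            gaps.append((int(gap_start), int(current)))  # close the current gap
            gap_start = None
        current = segment_end
    if gap_start is not None:
        gaps.append((int(gap_start), int(before)))  # close a gap running to the end
    return gaps

# One recording from t=100 to t=160 inside a 0..300 window scanned in 30s steps.
print(find_gaps([(100, 160)], after=0, before=300, scale=30))
# -> [(0, 90), (180, 300)]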
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/clip.mp4",
dependencies=[Depends(require_camera_access)],
description="For iOS devices, use the master.m3u8 HLS link instead of clip.mp4. Safari does not reliably process progressive mp4 files.",
)
def recording_clip(
async def recording_clip(
request: Request,
camera_name: str,
start_ts: float,
@@ -642,9 +741,10 @@ def recording_clip(
@router.get(
"/vod/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified timestamp-range on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_ts(camera_name: str, start_ts: float, end_ts: float):
async def vod_ts(camera_name: str, start_ts: float, end_ts: float):
recordings = (
Recordings.select(
Recordings.path,
@@ -719,20 +819,24 @@ def vod_ts(camera_name: str, start_ts: float, end_ts: float):
@router.get(
"/vod/{year_month}/{day}/{hour}/{camera_name}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
async def vod_hour_no_timezone(year_month: str, day: int, hour: int, camera_name: str):
"""VOD for specific hour. Uses the default timezone (UTC)."""
return vod_hour(
return await vod_hour(
year_month, day, hour, camera_name, get_localzone_name().replace("/", ",")
)
@router.get(
"/vod/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
dependencies=[Depends(require_camera_access)],
description="Returns an HLS playlist for the specified date-time (with timezone) on the specified camera. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
async def vod_hour(
year_month: str, day: int, hour: int, camera_name: str, tz_name: str
):
parts = year_month.split("-")
start_date = (
datetime(int(parts[0]), int(parts[1]), day, hour, tzinfo=timezone.utc)
@@ -742,14 +846,15 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
start_ts = start_date.timestamp()
end_ts = end_date.timestamp()
return vod_ts(camera_name, start_ts, end_ts)
return await vod_ts(camera_name, start_ts, end_ts)
@router.get(
"/vod/event/{event_id}",
description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
)
def vod_event(
async def vod_event(
request: Request,
event_id: str,
padding: int = Query(0, description="Padding to apply to the vod."),
):
@@ -765,22 +870,14 @@ def vod_event(
status_code=404,
)
if not event.has_clip:
logger.error(f"Event does not have recordings: {event_id}")
return JSONResponse(
content={
"success": False,
"message": "Recordings not available.",
},
status_code=404,
)
await require_camera_access(event.camera, request=request)
end_ts = (
datetime.now().timestamp()
if event.end_time is None
else (event.end_time + padding)
)
vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
vod_response = await vod_ts(event.camera, event.start_time - padding, end_ts)
# If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
if (
@@ -798,7 +895,7 @@ def vod_event(
"/events/{event_id}/snapshot.jpg",
description="Returns a snapshot image for the specified object id. NOTE: The query params only take affect while the event is in-progress. Once the event has ended the snapshot configuration is used.",
)
def event_snapshot(
async def event_snapshot(
request: Request,
event_id: str,
params: MediaEventsSnapshotQueryParams = Depends(),
@@ -808,6 +905,7 @@ def event_snapshot(
try:
event = Event.get(Event.id == event_id, Event.end_time != None)
event_complete = True
await require_camera_access(event.camera, request=request)
if not event.has_snapshot:
return JSONResponse(
content={"success": False, "message": "Snapshot not available"},
@@ -836,6 +934,7 @@ def event_snapshot(
height=params.height,
quality=params.quality,
)
await require_camera_access(camera_state.name, request=request)
except Exception:
return JSONResponse(
content={"success": False, "message": "Ongoing event not found"},
@@ -869,7 +968,7 @@ def event_snapshot(
@router.get("/events/{event_id}/thumbnail.{extension}")
def event_thumbnail(
async def event_thumbnail(
request: Request,
event_id: str,
extension: Extension,
@@ -882,6 +981,7 @@ def event_thumbnail(
event_complete = False
try:
event: Event = Event.get(Event.id == event_id)
await require_camera_access(event.camera, request=request)
if event.end_time is not None:
event_complete = True
@@ -944,7 +1044,7 @@ def event_thumbnail(
)
@router.get("/{camera_name}/grid.jpg")
@router.get("/{camera_name}/grid.jpg", dependencies=[Depends(require_camera_access)])
def grid_snapshot(
request: Request, camera_name: str, color: str = "green", font_scale: float = 0.5
):
@@ -1150,7 +1250,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get("/events/{event_id}/clip.mp4")
def event_clip(
async def event_clip(
request: Request,
event_id: str,
padding: int = Query(0, description="Padding to apply to clip."),
@@ -1172,7 +1272,9 @@ def event_clip(
if event.end_time is None
else event.end_time + padding
)
return recording_clip(request, event.camera, event.start_time - padding, end_ts)
return await recording_clip(
request, event.camera, event.start_time - padding, end_ts
)
@router.get("/events/{event_id}/preview.gif")
@@ -1191,7 +1293,10 @@ def event_preview(request: Request, event_id: str):
return preview_gif(request, event.camera, start_ts, end_ts)
@router.get("/{camera_name}/start/{start_ts}/end/{end_ts}/preview.gif")
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/preview.gif",
dependencies=[Depends(require_camera_access)],
)
def preview_gif(
request: Request,
camera_name: str,
@@ -1347,7 +1452,10 @@ def preview_gif(
)
@router.get("/{camera_name}/start/{start_ts}/end/{end_ts}/preview.mp4")
@router.get(
"/{camera_name}/start/{start_ts}/end/{end_ts}/preview.mp4",
dependencies=[Depends(require_camera_access)],
)
def preview_mp4(
request: Request,
camera_name: str,
@@ -1587,9 +1695,14 @@ def preview_thumbnail(file_name: str):
####################### dynamic routes ###########################
@router.get("/{camera_name}/{label}/best.jpg")
@router.get("/{camera_name}/{label}/thumbnail.jpg")
def label_thumbnail(request: Request, camera_name: str, label: str):
@router.get(
"/{camera_name}/{label}/best.jpg", dependencies=[Depends(require_camera_access)]
)
@router.get(
"/{camera_name}/{label}/thumbnail.jpg",
dependencies=[Depends(require_camera_access)],
)
async def label_thumbnail(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(Event.camera == camera_name)
if label != "any":
@@ -1598,7 +1711,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
try:
event_id = event_query.scalar()
return event_thumbnail(request, event_id, Extension.jpg, 60)
return await event_thumbnail(request, event_id, Extension.jpg, 60)
except DoesNotExist:
frame = np.zeros((175, 175, 3), np.uint8)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
@@ -1610,8 +1723,10 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
)
@router.get("/{camera_name}/{label}/clip.mp4")
def label_clip(request: Request, camera_name: str, label: str):
@router.get(
"/{camera_name}/{label}/clip.mp4", dependencies=[Depends(require_camera_access)]
)
async def label_clip(request: Request, camera_name: str, label: str):
label = unquote(label)
event_query = Event.select(fn.MAX(Event.id)).where(
Event.camera == camera_name, Event.has_clip == True
@@ -1622,15 +1737,17 @@ def label_clip(request: Request, camera_name: str, label: str):
try:
event = event_query.get()
return event_clip(request, event.id)
return await event_clip(request, event.id)
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Event not found"}, status_code=404
)
@router.get("/{camera_name}/{label}/snapshot.jpg")
def label_snapshot(request: Request, camera_name: str, label: str):
@router.get(
"/{camera_name}/{label}/snapshot.jpg", dependencies=[Depends(require_camera_access)]
)
async def label_snapshot(request: Request, camera_name: str, label: str):
"""Returns the snapshot image from the latest event for the given camera and label combo"""
label = unquote(label)
if label == "any":
@@ -1651,7 +1768,7 @@ def label_snapshot(request: Request, camera_name: str, label: str):
try:
event: Event = event_query.get()
return event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
return await event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
except DoesNotExist:
frame = np.zeros((720, 1280, 3), np.uint8)
_, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])

View File

@@ -5,9 +5,10 @@ import os
from datetime import datetime, timedelta, timezone
import pytz
from fastapi import APIRouter
from fastapi import APIRouter, Depends
from fastapi.responses import JSONResponse
from frigate.api.auth import require_camera_access
from frigate.api.defs.tags import Tags
from frigate.const import BASE_DIR, CACHE_DIR, PREVIEW_FRAME_TYPE
from frigate.models import Previews
@@ -18,7 +19,10 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.preview])
@router.get("/preview/{camera_name}/start/{start_ts}/end/{end_ts}")
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}",
dependencies=[Depends(require_camera_access)],
)
def preview_ts(camera_name: str, start_ts: float, end_ts: float):
"""Get all mp4 previews relevant for time period."""
if camera_name != "all":
@@ -71,7 +75,10 @@ def preview_ts(camera_name: str, start_ts: float, end_ts: float):
return JSONResponse(content=clips, status_code=200)
@router.get("/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}")
@router.get(
"/preview/{year_month}/{day}/{hour}/{camera_name}/{tz_name}",
dependencies=[Depends(require_camera_access)],
)
def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: str):
"""Get all mp4 previews relevant for time period given the timezone"""
parts = year_month.split("-")
@@ -86,7 +93,10 @@ def preview_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name
return preview_ts(camera_name, start_ts, end_ts)
@router.get("/preview/{camera_name}/start/{start_ts}/end/{end_ts}/frames")
@router.get(
"/preview/{camera_name}/start/{start_ts}/end/{end_ts}/frames",
dependencies=[Depends(require_camera_access)],
)
def get_preview_frames_from_cache(camera_name: str, start_ts: float, end_ts: float):
"""Get list of cached preview frames"""
preview_dir = os.path.join(CACHE_DIR, "preview_frames")

View File

@@ -4,15 +4,21 @@ import datetime
import logging
from functools import reduce
from pathlib import Path
from typing import List
import pandas as pd
from fastapi import APIRouter
from fastapi import APIRouter, Request
from fastapi.params import Depends
from fastapi.responses import JSONResponse
from peewee import Case, DoesNotExist, IntegrityError, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.auth import get_current_user, require_role
from frigate.api.auth import (
get_allowed_cameras_for_filter,
get_current_user,
require_camera_access,
require_role,
)
from frigate.api.defs.query.review_query_parameters import (
ReviewActivityMotionQueryParams,
ReviewQueryParams,
@@ -26,6 +32,8 @@ from frigate.api.defs.response.review_response import (
ReviewSummaryResponse,
)
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.embeddings import EmbeddingsContext
from frigate.models import Recordings, ReviewSegment, UserReviewStatus
from frigate.review.types import SeverityEnum
from frigate.util.builtin import get_tz_modifiers
@@ -39,6 +47,7 @@ router = APIRouter(tags=[Tags.review])
async def review(
params: ReviewQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
if isinstance(current_user, JSONResponse):
return current_user
@@ -63,8 +72,14 @@ async def review(
]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -138,7 +153,7 @@ async def review(
@router.get("/review_ids", response_model=list[ReviewSegmentResponse])
def review_ids(ids: str):
async def review_ids(request: Request, ids: str):
ids = ids.split(",")
if not ids:
@@ -147,6 +162,18 @@ def review_ids(ids: str):
status_code=400,
)
for review_id in ids:
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
except DoesNotExist:
return JSONResponse(
content=(
{"success": False, "message": f"Review {review_id} not found"}
),
status_code=404,
)
try:
reviews = (
ReviewSegment.select().where(ReviewSegment.id << ids).dicts().iterator()
@@ -163,6 +190,7 @@ def review_ids(ids: str):
async def review_summary(
params: ReviewSummaryQueryParams = Depends(),
current_user: dict = Depends(get_current_user),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
if isinstance(current_user, JSONResponse):
return current_user
@@ -179,8 +207,14 @@ async def review_summary(
clauses = [(ReviewSegment.start_time > day_ago)]
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -274,8 +308,14 @@ async def review_summary(
clauses = []
if cameras != "all":
camera_list = cameras.split(",")
clauses.append((ReviewSegment.camera << camera_list))
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content={})
camera_list = list(filtered)
else:
camera_list = allowed_cameras
clauses.append((ReviewSegment.camera << camera_list))
if labels != "all":
# use matching so segments with multiple labels
@@ -378,6 +418,7 @@ async def review_summary(
@router.post("/reviews/viewed", response_model=GenericResponse)
async def set_multiple_reviewed(
request: Request,
body: ReviewModifyMultipleBody,
current_user: dict = Depends(get_current_user),
):
@@ -388,6 +429,8 @@ async def set_multiple_reviewed(
for review_id in body.ids:
try:
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
review_status = UserReviewStatus.get(
UserReviewStatus.user_id == user_id,
UserReviewStatus.review_segment == review_id,
@@ -469,7 +512,10 @@ def delete_reviews(body: ReviewModifyMultipleBody):
@router.get(
"/review/activity/motion", response_model=list[ReviewActivityMotionResponse]
)
def motion_activity(params: ReviewActivityMotionQueryParams = Depends()):
def motion_activity(
params: ReviewActivityMotionQueryParams = Depends(),
allowed_cameras: List[str] = Depends(get_allowed_cameras_for_filter),
):
"""Get motion and audio activity."""
cameras = params.cameras
before = params.before or datetime.datetime.now().timestamp()
@@ -484,8 +530,14 @@ def motion_activity(params: ReviewActivityMotionQueryParams = Depends()):
clauses.append((Recordings.motion > 0))
if cameras != "all":
camera_list = cameras.split(",")
requested = set(cameras.split(","))
filtered = requested.intersection(allowed_cameras)
if not filtered:
return JSONResponse(content=[])
camera_list = list(filtered)
clauses.append((Recordings.camera << camera_list))
else:
clauses.append((Recordings.camera << allowed_cameras))
data: list[Recordings] = (
Recordings.select(
@@ -543,15 +595,13 @@ def motion_activity(params: ReviewActivityMotionQueryParams = Depends()):
@router.get("/review/event/{event_id}", response_model=ReviewSegmentResponse)
def get_review_from_event(event_id: str):
async def get_review_from_event(request: Request, event_id: str):
try:
return JSONResponse(
model_to_dict(
ReviewSegment.get(
ReviewSegment.data["detections"].cast("text") % f'*"{event_id}"*'
)
)
review = ReviewSegment.get(
ReviewSegment.data["detections"].cast("text") % f'*"{event_id}"*'
)
await require_camera_access(review.camera, request=request)
return JSONResponse(model_to_dict(review))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Review item not found"},
@@ -560,11 +610,11 @@ def get_review_from_event(event_id: str):
@router.get("/review/{review_id}", response_model=ReviewSegmentResponse)
def get_review(review_id: str):
async def get_review(request: Request, review_id: str):
try:
return JSONResponse(
content=model_to_dict(ReviewSegment.get(ReviewSegment.id == review_id))
)
review = ReviewSegment.get(ReviewSegment.id == review_id)
await require_camera_access(review.camera, request=request)
return JSONResponse(content=model_to_dict(review))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Review item not found"},
@@ -606,3 +656,35 @@ async def set_not_reviewed(
content=({"success": True, "message": f"Set Review {review_id} as not viewed"}),
status_code=200,
)
@router.post(
"/review/summarize/start/{start_ts}/end/{end_ts}",
description="Use GenAI to summarize review items over a period of time.",
)
def generate_review_summary(request: Request, start_ts: float, end_ts: float):
config: FrigateConfig = request.app.frigate_config
if not config.genai.provider:
return JSONResponse(
content=(
{
"success": False,
"message": "GenAI must be configured to use this feature.",
}
),
status_code=400,
)
context: EmbeddingsContext = request.app.embeddings
summary = context.generate_review_summary(start_ts, end_ts)
if summary:
return JSONResponse(
content=({"success": True, "summary": summary}), status_code=200
)
else:
return JSONResponse(
content=({"success": False, "message": "Failed to create summary."}),
status_code=500,
)

View File

@@ -5,6 +5,7 @@ import os
import secrets
import shutil
from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
@@ -14,19 +15,20 @@ import uvicorn
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
import frigate.util as util
from frigate.api.auth import hash_password
from frigate.api.fastapi_app import create_fastapi_app
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.camera.maintainer import CameraMaintainer
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.dispatcher import Dispatcher
from frigate.comms.event_metadata_updater import EventMetadataPublisher
from frigate.comms.inter_process import InterProcessCommunicator
from frigate.comms.mqtt import MqttClient
from frigate.comms.object_detector_signaler import DetectorProxy
from frigate.comms.webpush import WebPushClient
from frigate.comms.ws import WebSocketClient
from frigate.comms.zmq_proxy import ZmqProxy
from frigate.config.camera.updater import CameraConfigUpdatePublisher
from frigate.config.config import FrigateConfig
from frigate.const import (
CACHE_DIR,
@@ -36,12 +38,12 @@ from frigate.const import (
FACE_DIR,
MODEL_CACHE_DIR,
RECORD_DIR,
SHM_FRAMES_VAR,
THUMB_DIR,
TRIGGER_DIR,
)
from frigate.data_processing.types import DataProcessorMetrics
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.embeddings import EmbeddingsContext, manage_embeddings
from frigate.embeddings import EmbeddingProcess, EmbeddingsContext
from frigate.events.audio import AudioProcessor
from frigate.events.cleanup import EventCleanup
from frigate.events.maintainer import EventProcessor
@@ -55,56 +57,58 @@ from frigate.models import (
Regions,
ReviewSegment,
Timeline,
Trigger,
User,
)
from frigate.object_detection.base import ObjectDetectProcess
from frigate.output.output import output_frames
from frigate.output.output import OutputProcess
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.ptz.onvif import OnvifController
from frigate.record.cleanup import RecordingCleanup
from frigate.record.export import migrate_exports
from frigate.record.record import manage_recordings
from frigate.review.review import manage_review_segments
from frigate.record.record import RecordProcess
from frigate.review.review import ReviewProcess
from frigate.stats.emitter import StatsEmitter
from frigate.stats.util import stats_init
from frigate.storage import StorageMaintainer
from frigate.timeline import TimelineProcessor
from frigate.track.object_processing import TrackedObjectProcessor
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
from frigate.util.object import get_camera_regions_grid
from frigate.util.image import UntrackedSharedMemory
from frigate.util.services import set_file_limit
from frigate.version import VERSION
from frigate.video import capture_camera, track_camera
from frigate.watchdog import FrigateWatchdog
logger = logging.getLogger(__name__)
class FrigateApp:
def __init__(self, config: FrigateConfig) -> None:
def __init__(
self, config: FrigateConfig, manager: SyncManager, stop_event: MpEvent
) -> None:
self.metrics_manager = manager
self.audio_process: Optional[mp.Process] = None
self.stop_event: MpEvent = mp.Event()
self.stop_event = stop_event
self.detection_queue: Queue = mp.Queue()
self.detectors: dict[str, ObjectDetectProcess] = {}
self.detection_out_events: dict[str, MpEvent] = {}
self.detection_shms: list[mp.shared_memory.SharedMemory] = []
self.log_queue: Queue = mp.Queue()
self.camera_metrics: dict[str, CameraMetrics] = {}
self.camera_metrics: DictProxy = self.metrics_manager.dict()
self.embeddings_metrics: DataProcessorMetrics | None = (
DataProcessorMetrics()
DataProcessorMetrics(
self.metrics_manager, list(config.classification.custom.keys())
)
if (
config.semantic_search.enabled
or config.lpr.enabled
or config.face_recognition.enabled
or len(config.classification.custom) > 0
)
else None
)
self.ptz_metrics: dict[str, PTZMetrics] = {}
self.processes: dict[str, int] = {}
self.embeddings: Optional[EmbeddingsContext] = None
self.region_grids: dict[str, list[list[dict[str, int]]]] = {}
self.frame_manager = SharedMemoryFrameManager()
self.config = config
def ensure_dirs(self) -> None:
@@ -121,6 +125,9 @@ class FrigateApp:
if self.config.face_recognition.enabled:
dirs.append(FACE_DIR)
if self.config.semantic_search.enabled:
dirs.append(TRIGGER_DIR)
for d in dirs:
if not os.path.exists(d) and not os.path.islink(d):
logger.info(f"Creating directory: {d}")
@@ -131,7 +138,7 @@ class FrigateApp:
def init_camera_metrics(self) -> None:
# create camera_metrics
for camera_name in self.config.cameras.keys():
self.camera_metrics[camera_name] = CameraMetrics()
self.camera_metrics[camera_name] = CameraMetrics(self.metrics_manager)
self.ptz_metrics[camera_name] = PTZMetrics(
autotracker_enabled=self.config.cameras[
camera_name
@@ -140,8 +147,16 @@ class FrigateApp:
def init_queues(self) -> None:
# Queue for cameras to push tracked objects to
# leaving room for 2 extra cameras to be added
self.detected_frames_queue: Queue = mp.Queue(
maxsize=sum(camera.enabled for camera in self.config.cameras.values()) * 2
maxsize=(
sum(
camera.enabled_in_config == True
for camera in self.config.cameras.values()
)
+ 2
)
* 2
)
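As a worked example of the new sizing: with 5 cameras defined in the config (enabled_in_config), the queue is created with maxsize = (5 + 2) * 2 = 14, whereas the old formula allotted 2 slots per currently-enabled camera only (10 here), with no headroom for cameras added later.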
# Queue for timeline events
@@ -217,52 +232,24 @@ class FrigateApp:
self.processes["go2rtc"] = proc.info["pid"]
def init_recording_manager(self) -> None:
recording_process = util.Process(
target=manage_recordings,
name="recording_manager",
args=(self.config,),
)
recording_process.daemon = True
recording_process = RecordProcess(self.config, self.stop_event)
self.recording_process = recording_process
recording_process.start()
self.processes["recording"] = recording_process.pid or 0
logger.info(f"Recording process started: {recording_process.pid}")
def init_review_segment_manager(self) -> None:
review_segment_process = util.Process(
target=manage_review_segments,
name="review_segment_manager",
args=(self.config,),
)
review_segment_process.daemon = True
review_segment_process = ReviewProcess(self.config, self.stop_event)
self.review_segment_process = review_segment_process
review_segment_process.start()
self.processes["review_segment"] = review_segment_process.pid or 0
logger.info(f"Review process started: {review_segment_process.pid}")
def init_embeddings_manager(self) -> None:
genai_cameras = [
c for c in self.config.cameras.values() if c.enabled and c.genai.enabled
]
if (
not self.config.semantic_search.enabled
and not genai_cameras
and not self.config.lpr.enabled
and not self.config.face_recognition.enabled
and not self.config.classification.bird.enabled
):
return
embedding_process = util.Process(
target=manage_embeddings,
name="embeddings_manager",
args=(
self.config,
self.embeddings_metrics,
),
# always start the embeddings process
embedding_process = EmbeddingProcess(
self.config, self.embeddings_metrics, self.stop_event
)
embedding_process.daemon = True
self.embedding_process = embedding_process
embedding_process.start()
self.processes["embeddings"] = embedding_process.pid or 0
@@ -279,7 +266,9 @@ class FrigateApp:
"synchronous": "NORMAL", # Safe when using WAL https://www.sqlite.org/pragma.html#pragma_synchronous
},
timeout=max(
60, 10 * len([c for c in self.config.cameras.values() if c.enabled])
60,
10
* len([c for c in self.config.cameras.values() if c.enabled_in_config]),
),
load_vec_extension=self.config.semantic_search.enabled,
)
@@ -293,6 +282,7 @@ class FrigateApp:
ReviewSegment,
Timeline,
User,
Trigger,
]
self.db.bind(models)
@@ -308,24 +298,15 @@ class FrigateApp:
migrate_exports(self.config.ffmpeg, list(self.config.cameras.keys()))
def init_embeddings_client(self) -> None:
genai_cameras = [
c for c in self.config.cameras.values() if c.enabled and c.genai.enabled
]
if (
self.config.semantic_search.enabled
or self.config.lpr.enabled
or genai_cameras
or self.config.face_recognition.enabled
):
# Create a client for other processes to use
self.embeddings = EmbeddingsContext(self.db)
# Create a client for other processes to use
self.embeddings = EmbeddingsContext(self.db)
def init_inter_process_communicator(self) -> None:
self.inter_process_communicator = InterProcessCommunicator()
self.inter_config_updater = ConfigPublisher()
self.inter_config_updater = CameraConfigUpdatePublisher()
self.event_metadata_updater = EventMetadataPublisher()
self.inter_zmq_proxy = ZmqProxy()
self.detection_proxy = DetectorProxy()
def init_onvif(self) -> None:
self.onvif_controller = OnvifController(self.config, self.ptz_metrics)
@@ -358,8 +339,6 @@ class FrigateApp:
def start_detectors(self) -> None:
for name in self.config.cameras.keys():
self.detection_out_events[name] = mp.Event()
try:
largest_frame = max(
[
@@ -391,8 +370,10 @@ class FrigateApp:
self.detectors[name] = ObjectDetectProcess(
name,
self.detection_queue,
self.detection_out_events,
list(self.config.cameras.keys()),
self.config,
detector_config,
self.stop_event,
)
def start_ptz_autotracker(self) -> None:
@@ -416,79 +397,22 @@ class FrigateApp:
self.detected_frames_processor.start()
def start_video_output_processor(self) -> None:
output_processor = util.Process(
target=output_frames,
name="output_processor",
args=(self.config,),
)
output_processor.daemon = True
output_processor = OutputProcess(self.config, self.stop_event)
self.output_processor = output_processor
output_processor.start()
logger.info(f"Output process started: {output_processor.pid}")
def init_historical_regions(self) -> None:
# delete region grids for removed or renamed cameras
cameras = list(self.config.cameras.keys())
Regions.delete().where(~(Regions.camera << cameras)).execute()
# create or update region grids for each camera
for camera in self.config.cameras.values():
assert camera.name is not None
self.region_grids[camera.name] = get_camera_regions_grid(
camera.name,
camera.detect,
max(self.config.model.width, self.config.model.height),
)
def start_camera_processors(self) -> None:
for name, config in self.config.cameras.items():
if not self.config.cameras[name].enabled_in_config:
logger.info(f"Camera processor not started for disabled camera {name}")
continue
camera_process = util.Process(
target=track_camera,
name=f"camera_processor:{name}",
args=(
name,
config,
self.config.model,
self.config.model.merged_labelmap,
self.detection_queue,
self.detection_out_events[name],
self.detected_frames_queue,
self.camera_metrics[name],
self.ptz_metrics[name],
self.region_grids[name],
),
daemon=True,
)
self.camera_metrics[name].process = camera_process
camera_process.start()
logger.info(f"Camera processor started for {name}: {camera_process.pid}")
def start_camera_capture_processes(self) -> None:
shm_frame_count = self.shm_frame_count()
for name, config in self.config.cameras.items():
if not self.config.cameras[name].enabled_in_config:
logger.info(f"Capture process not started for disabled camera {name}")
continue
# pre-create shms
for i in range(shm_frame_count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = util.Process(
target=capture_camera,
name=f"camera_capture:{name}",
args=(name, config, shm_frame_count, self.camera_metrics[name]),
)
capture_process.daemon = True
self.camera_metrics[name].capture_process = capture_process
capture_process.start()
logger.info(f"Capture process started for {name}: {capture_process.pid}")
def start_camera_processor(self) -> None:
self.camera_maintainer = CameraMaintainer(
self.config,
self.detection_queue,
self.detected_frames_queue,
self.camera_metrics,
self.ptz_metrics,
self.stop_event,
self.metrics_manager,
)
self.camera_maintainer.start()
def start_audio_processor(self) -> None:
audio_cameras = [
@@ -498,7 +422,9 @@ class FrigateApp:
]
if audio_cameras:
self.audio_process = AudioProcessor(audio_cameras, self.camera_metrics)
self.audio_process = AudioProcessor(
self.config, audio_cameras, self.camera_metrics, self.stop_event
)
self.audio_process.start()
self.processes["audio_detector"] = self.audio_process.pid or 0
@@ -546,45 +472,6 @@ class FrigateApp:
self.frigate_watchdog = FrigateWatchdog(self.detectors, self.stop_event)
self.frigate_watchdog.start()
def shm_frame_count(self) -> int:
total_shm = round(shutil.disk_usage("/dev/shm").total / pow(2, 20), 1)
# required for log files + nginx cache
min_req_shm = 40 + 10
if self.config.birdseye.restream:
min_req_shm += 8
available_shm = total_shm - min_req_shm
cam_total_frame_size = 0.0
for camera in self.config.cameras.values():
if camera.enabled and camera.detect.width and camera.detect.height:
cam_total_frame_size += round(
(camera.detect.width * camera.detect.height * 1.5 + 270480)
/ 1048576,
1,
)
if cam_total_frame_size == 0.0:
return 0
shm_frame_count = min(
int(os.environ.get(SHM_FRAMES_VAR, "50")),
int(available_shm / (cam_total_frame_size)),
)
logger.debug(
f"Calculated total camera size {available_shm} / {cam_total_frame_size} :: {shm_frame_count} frames for each camera in SHM"
)
if shm_frame_count < 20:
logger.warning(
f"The current SHM size of {total_shm}MB is too small, recommend increasing it to at least {round(min_req_shm + cam_total_frame_size * 20)}MB."
)
return shm_frame_count
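
As a quick sanity check on the sizing math above: a YUV420 frame for a WxH detect stream is W * H * 1.5 bytes plus a fixed overhead, and the per-camera figure is rounded to one decimal MiB before dividing the available headroom. The sketch below reproduces that arithmetic with hypothetical numbers (a 256 MB /dev/shm and two 1280x720 detect streams); it is an illustration, not Frigate code.

# Reproduction of the SHM sizing math above with hypothetical inputs.
def shm_frames(total_shm_mb: float, cameras: list[tuple[int, int]],
               birdseye_restream: bool = False, max_frames: int = 50) -> int:
    min_req_shm = 40 + 10 + (8 if birdseye_restream else 0)  # logs + nginx cache (+ restream)
    available_shm = total_shm_mb - min_req_shm
    cam_total = sum(round((w * h * 1.5 + 270480) / 1048576, 1) for w, h in cameras)
    if cam_total == 0.0:
        return 0
    return min(max_frames, int(available_shm / cam_total))

# Two 1280x720 cameras in 256 MB of /dev/shm: each frame is ~1.6 MiB,
# so ~64 frames would fit, capped at the 50-frame default.
print(shm_frames(256, [(1280, 720), (1280, 720)]))  # -> 50
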
def init_auth(self) -> None:
if self.config.auth.enabled:
if User.select().count() == 0:
@@ -645,19 +532,17 @@ class FrigateApp:
self.init_recording_manager()
self.init_review_segment_manager()
self.init_go2rtc()
self.start_detectors()
self.init_embeddings_manager()
self.bind_database()
self.check_db_data_migrations()
self.init_inter_process_communicator()
self.start_detectors()
self.init_dispatcher()
self.init_embeddings_client()
self.start_video_output_processor()
self.start_ptz_autotracker()
self.init_historical_regions()
self.start_detected_frames_processor()
self.start_camera_processors()
self.start_camera_capture_processes()
self.start_camera_processor()
self.start_audio_processor()
self.start_storage_maintainer()
self.start_stats_emitter()
@@ -680,6 +565,7 @@ class FrigateApp:
self.onvif_controller,
self.stats_emitter,
self.event_metadata_updater,
self.inter_config_updater,
),
host="127.0.0.1",
port=5001,
@@ -713,24 +599,6 @@ class FrigateApp:
if self.onvif_controller:
self.onvif_controller.close()
# ensure the capture processes are done
for camera, metrics in self.camera_metrics.items():
capture_process = metrics.capture_process
if capture_process is not None:
logger.info(f"Waiting for capture process for {camera} to stop")
capture_process.terminate()
capture_process.join()
# ensure the camera processors are done
for camera, metrics in self.camera_metrics.items():
camera_process = metrics.process
if camera_process is not None:
logger.info(f"Waiting for process for {camera} to stop")
camera_process.terminate()
camera_process.join()
logger.info(f"Closing frame queue for {camera}")
empty_and_close_queue(metrics.frame_queue)
# ensure the detectors are done
for detector in self.detectors.values():
detector.stop()
@@ -774,14 +642,12 @@ class FrigateApp:
self.inter_config_updater.stop()
self.event_metadata_updater.stop()
self.inter_zmq_proxy.stop()
self.detection_proxy.stop()
self.frame_manager.cleanup()
while len(self.detection_shms) > 0:
shm = self.detection_shms.pop()
shm.close()
shm.unlink()
# exit the mp Manager process
_stop_logging()
os._exit(os.EX_OK)
self.metrics_manager.shutdown()

View File

@@ -1,7 +1,7 @@
import multiprocessing as mp
from multiprocessing.managers import SyncManager
from multiprocessing.sharedctypes import Synchronized
from multiprocessing.synchronize import Event
from typing import Optional
class CameraMetrics:
@@ -16,25 +16,25 @@ class CameraMetrics:
frame_queue: mp.Queue
process: Optional[mp.Process]
capture_process: Optional[mp.Process]
process_pid: Synchronized
capture_process_pid: Synchronized
ffmpeg_pid: Synchronized
def __init__(self):
self.camera_fps = mp.Value("d", 0)
self.detection_fps = mp.Value("d", 0)
self.detection_frame = mp.Value("d", 0)
self.process_fps = mp.Value("d", 0)
self.skipped_fps = mp.Value("d", 0)
self.read_start = mp.Value("d", 0)
self.audio_rms = mp.Value("d", 0)
self.audio_dBFS = mp.Value("d", 0)
def __init__(self, manager: SyncManager):
self.camera_fps = manager.Value("d", 0)
self.detection_fps = manager.Value("d", 0)
self.detection_frame = manager.Value("d", 0)
self.process_fps = manager.Value("d", 0)
self.skipped_fps = manager.Value("d", 0)
self.read_start = manager.Value("d", 0)
self.audio_rms = manager.Value("d", 0)
self.audio_dBFS = manager.Value("d", 0)
self.frame_queue = mp.Queue(maxsize=2)
self.frame_queue = manager.Queue(maxsize=2)
self.process = None
self.capture_process = None
self.ffmpeg_pid = mp.Value("i", 0)
self.process_pid = manager.Value("i", 0)
self.capture_process_pid = manager.Value("i", 0)
self.ffmpeg_pid = manager.Value("i", 0)
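
The move from mp.Value/mp.Queue to SyncManager-backed proxies above is what allows metrics to be created for cameras added at runtime: manager proxies are picklable and can be handed to processes that already exist. A minimal standalone illustration (not Frigate code):

import multiprocessing as mp

def bump(counter) -> None:
    counter.value += 1  # the proxy forwards this update to the manager process

if __name__ == "__main__":
    with mp.Manager() as manager:
        fps = manager.Value("d", 0)  # picklable proxy, shareable after child creation
        worker = mp.Process(target=bump, args=(fps,))
        worker.start()
        worker.join()
        print(fps.value)  # -> 1
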
class PTZMetrics:

View File

@@ -1,9 +1,20 @@
"""Manage camera activity and updating listeners."""
import datetime
import json
import logging
import random
import string
from collections import Counter
from typing import Any, Callable
from frigate.config.config import FrigateConfig
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
EventMetadataTypeEnum,
)
from frigate.config import CameraConfig, FrigateConfig
logger = logging.getLogger(__name__)
class CameraActivityManager:
@@ -23,26 +34,33 @@ class CameraActivityManager:
if not camera_config.enabled_in_config:
continue
self.last_camera_activity[camera_config.name] = {}
self.camera_all_object_counts[camera_config.name] = Counter()
self.camera_active_object_counts[camera_config.name] = Counter()
self.__init_camera(camera_config)
for zone, zone_config in camera_config.zones.items():
if zone not in self.all_zone_labels:
self.zone_all_object_counts[zone] = Counter()
self.zone_active_object_counts[zone] = Counter()
self.all_zone_labels[zone] = set()
def __init_camera(self, camera_config: CameraConfig) -> None:
self.last_camera_activity[camera_config.name] = {}
self.camera_all_object_counts[camera_config.name] = Counter()
self.camera_active_object_counts[camera_config.name] = Counter()
self.all_zone_labels[zone].update(
zone_config.objects
if zone_config.objects
else camera_config.objects.track
)
for zone, zone_config in camera_config.zones.items():
if zone not in self.all_zone_labels:
self.zone_all_object_counts[zone] = Counter()
self.zone_active_object_counts[zone] = Counter()
self.all_zone_labels[zone] = set()
self.all_zone_labels[zone].update(
zone_config.objects
if zone_config.objects
else camera_config.objects.track
)
def update_activity(self, new_activity: dict[str, dict[str, Any]]) -> None:
all_objects: list[dict[str, Any]] = []
for camera in new_activity.keys():
# handle cameras that were added dynamically
if camera not in self.camera_all_object_counts:
self.__init_camera(self.config.cameras[camera])
new_objects = new_activity[camera].get("objects", [])
all_objects.extend(new_objects)
@@ -132,3 +150,110 @@ class CameraActivityManager:
if any_changed:
self.publish(f"{camera}/all", sum(list(all_objects.values())))
self.publish(f"{camera}/all/active", sum(list(active_objects.values())))
class AudioActivityManager:
def __init__(
self, config: FrigateConfig, publish: Callable[[str, Any], None]
) -> None:
self.config = config
self.publish = publish
self.current_audio_detections: dict[str, dict[str, dict[str, Any]]] = {}
self.event_metadata_publisher = EventMetadataPublisher()
for camera_config in config.cameras.values():
if not camera_config.audio.enabled_in_config:
continue
self.__init_camera(camera_config)
def __init_camera(self, camera_config: CameraConfig) -> None:
self.current_audio_detections[camera_config.name] = {}
def update_activity(self, new_activity: dict[str, dict[str, Any]]) -> None:
now = datetime.datetime.now().timestamp()
for camera in new_activity.keys():
# handle cameras that were added dynamically
if camera not in self.current_audio_detections:
self.__init_camera(self.config.cameras[camera])
new_detections = new_activity[camera].get("detections", [])
if self.compare_audio_activity(camera, new_detections, now):
logger.debug(f"Audio detections for {camera}: {new_activity}")
self.publish(
f"{camera}/audio/all",
"ON" if len(self.current_audio_detections[camera]) > 0 else "OFF",
)
self.publish(
"audio_detections",
json.dumps(self.current_audio_detections),
)
def compare_audio_activity(
self, camera: str, new_detections: list[tuple[str, float]], now: float
) -> bool:
max_not_heard = self.config.cameras[camera].audio.max_not_heard
current = self.current_audio_detections[camera]
any_changed = False
for label, score in new_detections:
any_changed = True
if label in current:
current[label]["last_detection"] = now
current[label]["score"] = score
else:
rand_id = "".join(
random.choices(string.ascii_lowercase + string.digits, k=6)
)
event_id = f"{now}-{rand_id}"
self.publish(f"{camera}/audio/{label}", "ON")
self.event_metadata_publisher.publish(
(
now,
camera,
label,
event_id,
True,
score,
None,
None,
"audio",
{},
),
EventMetadataTypeEnum.manual_event_create.value,
)
current[label] = {
"id": event_id,
"score": score,
"last_detection": now,
}
# expire detections
for label in list(current.keys()):
if now - current[label]["last_detection"] > max_not_heard:
any_changed = True
self.publish(f"{camera}/audio/{label}", "OFF")
self.event_metadata_publisher.publish(
(current[label]["id"], now),
EventMetadataTypeEnum.manual_event_end.value,
)
del current[label]
return any_changed
def expire_all(self, camera: str) -> None:
now = datetime.datetime.now().timestamp()
current = self.current_audio_detections.get(camera, {})
for label in list(current.keys()):
self.publish(f"{camera}/audio/{label}", "OFF")
self.event_metadata_publisher.publish(
(current[label]["id"], now),
EventMetadataTypeEnum.manual_event_end.value,
)
del current[label]
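
The expiry rule implemented by compare_audio_activity above: a label stays ON while it keeps being heard, and its manual event is ended once now - last_detection exceeds the camera's max_not_heard. A standalone illustration with made-up timings:

# Illustration of the audio expiry rule with hypothetical timings.
MAX_NOT_HEARD = 30  # seconds, mirroring audio.max_not_heard

current = {"speech": {"last_detection": 90.0}, "bark": {"last_detection": 60.0}}
now = 95.0

for label in list(current.keys()):
    if now - current[label]["last_detection"] > MAX_NOT_HEARD:
        print(f"{label} -> OFF, manual event ended")  # bark, silent for 35s
        del current[label]

print(sorted(current))  # ['speech'] is still active
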

View File

@@ -0,0 +1,220 @@
"""Create and maintain camera processes / management."""
import logging
import multiprocessing as mp
import threading
from multiprocessing import Queue
from multiprocessing.managers import DictProxy, SyncManager
from multiprocessing.synchronize import Event as MpEvent
from frigate.camera import CameraMetrics, PTZMetrics
from frigate.config import FrigateConfig
from frigate.config.camera import CameraConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
)
from frigate.models import Regions
from frigate.util.builtin import empty_and_close_queue
from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
from frigate.util.object import get_camera_regions_grid
from frigate.util.services import calculate_shm_requirements
from frigate.video import CameraCapture, CameraTracker
logger = logging.getLogger(__name__)
class CameraMaintainer(threading.Thread):
def __init__(
self,
config: FrigateConfig,
detection_queue: Queue,
detected_frames_queue: Queue,
camera_metrics: DictProxy,
ptz_metrics: dict[str, PTZMetrics],
stop_event: MpEvent,
metrics_manager: SyncManager,
):
super().__init__(name="camera_processor")
self.config = config
self.detection_queue = detection_queue
self.detected_frames_queue = detected_frames_queue
self.stop_event = stop_event
self.camera_metrics = camera_metrics
self.ptz_metrics = ptz_metrics
self.frame_manager = SharedMemoryFrameManager()
self.region_grids: dict[str, list[list[dict[str, int]]]] = {}
self.update_subscriber = CameraConfigUpdateSubscriber(
self.config,
{},
[
CameraConfigUpdateEnum.add,
CameraConfigUpdateEnum.remove,
],
)
self.shm_count = self.__calculate_shm_frame_count()
self.camera_processes: dict[str, mp.Process] = {}
self.capture_processes: dict[str, mp.Process] = {}
self.metrics_manager = metrics_manager
def __init_historical_regions(self) -> None:
# delete region grids for removed or renamed cameras
cameras = list(self.config.cameras.keys())
Regions.delete().where(~(Regions.camera << cameras)).execute()
# create or update region grids for each camera
for camera in self.config.cameras.values():
assert camera.name is not None
self.region_grids[camera.name] = get_camera_regions_grid(
camera.name,
camera.detect,
max(self.config.model.width, self.config.model.height),
)
def __calculate_shm_frame_count(self) -> int:
shm_stats = calculate_shm_requirements(self.config)
if not shm_stats:
# /dev/shm not available
return 0
logger.debug(
f"Calculated total camera size {shm_stats['available']} / "
f"{shm_stats['camera_frame_size']} :: {shm_stats['shm_frame_count']} "
f"frames for each camera in SHM"
)
if shm_stats["shm_frame_count"] < 20:
logger.warning(
f"The current SHM size of {shm_stats['total']}MB is too small, "
f"recommend increasing it to at least {shm_stats['min_shm']}MB."
)
return shm_stats["shm_frame_count"]
def __start_camera_processor(
self, name: str, config: CameraConfig, runtime: bool = False
) -> None:
if not config.enabled_in_config:
logger.info(f"Camera processor not started for disabled camera {name}")
return
if runtime:
self.camera_metrics[name] = CameraMetrics(self.metrics_manager)
self.ptz_metrics[name] = PTZMetrics(autotracker_enabled=False)
self.region_grids[name] = get_camera_regions_grid(
name,
config.detect,
max(self.config.model.width, self.config.model.height),
)
try:
largest_frame = max(
[
det.model.height * det.model.width * 3
if det.model is not None
else 320
for det in self.config.detectors.values()
]
)
UntrackedSharedMemory(name=f"out-{name}", create=True, size=20 * 6 * 4)
UntrackedSharedMemory(
name=name,
create=True,
size=largest_frame,
)
except FileExistsError:
pass
camera_process = CameraTracker(
config,
self.config.model,
self.config.model.merged_labelmap,
self.detection_queue,
self.detected_frames_queue,
self.camera_metrics[name],
self.ptz_metrics[name],
self.region_grids[name],
self.stop_event,
)
self.camera_processes[config.name] = camera_process
camera_process.start()
self.camera_metrics[config.name].process_pid.value = camera_process.pid
logger.info(f"Camera processor started for {config.name}: {camera_process.pid}")
def __start_camera_capture(
self, name: str, config: CameraConfig, runtime: bool = False
) -> None:
if not config.enabled_in_config:
logger.info(f"Capture process not started for disabled camera {name}")
return
# pre-create shms
count = 10 if runtime else self.shm_count
for i in range(count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = CameraCapture(
config, count, self.camera_metrics[name], self.stop_event
)
capture_process.daemon = True
self.capture_processes[name] = capture_process
capture_process.start()
self.camera_metrics[name].capture_process_pid.value = capture_process.pid
logger.info(f"Capture process started for {name}: {capture_process.pid}")
def __stop_camera_capture_process(self, camera: str) -> None:
capture_process = self.capture_processes[camera]
if capture_process is not None:
logger.info(f"Waiting for capture process for {camera} to stop")
capture_process.terminate()
capture_process.join()
def __stop_camera_process(self, camera: str) -> None:
camera_process = self.camera_processes[camera]
if camera_process is not None:
logger.info(f"Waiting for process for {camera} to stop")
camera_process.terminate()
camera_process.join()
logger.info(f"Closing frame queue for {camera}")
empty_and_close_queue(self.camera_metrics[camera].frame_queue)
def run(self):
self.__init_historical_regions()
# start camera processes
for camera, config in self.config.cameras.items():
self.__start_camera_processor(camera, config)
self.__start_camera_capture(camera, config)
while not self.stop_event.wait(1):
updates = self.update_subscriber.check_for_updates()
for update_type, updated_cameras in updates.items():
if update_type == CameraConfigUpdateEnum.add.name:
for camera in updated_cameras:
self.__start_camera_processor(
camera,
self.update_subscriber.camera_configs[camera],
runtime=True,
)
self.__start_camera_capture(
camera,
self.update_subscriber.camera_configs[camera],
runtime=True,
)
elif update_type == CameraConfigUpdateEnum.remove.name:
self.__stop_camera_capture_process(camera)
self.__stop_camera_process(camera)
# ensure the capture processes are done
for camera in self.camera_processes.keys():
self.__stop_camera_capture_process(camera)
# ensure the camera processors are done
for camera in self.capture_processes.keys():
self.__stop_camera_process(camera)
self.update_subscriber.stop()
self.frame_manager.cleanup()

View File

@@ -54,7 +54,7 @@ class CameraState:
self.ptz_autotracker_thread = ptz_autotracker_thread
self.prev_enabled = self.camera_config.enabled
def get_current_frame(self, draw_options: dict[str, Any] = {}):
def get_current_frame(self, draw_options: dict[str, Any] = {}) -> np.ndarray:
with self.current_frame_lock:
frame_copy = np.copy(self._current_frame)
frame_time = self.current_frame_time
@@ -228,12 +228,51 @@ class CameraState:
position=self.camera_config.timestamp_style.position,
)
if draw_options.get("paths"):
for obj in tracked_objects.values():
if obj["frame_time"] == frame_time and obj["path_data"]:
color = self.config.model.colormap.get(
obj["label"], (255, 255, 255)
)
path_points = [
(
int(point[0][0] * self.camera_config.detect.width),
int(point[0][1] * self.camera_config.detect.height),
)
for point in obj["path_data"]
]
for point in path_points:
cv2.circle(frame_copy, point, 5, color, -1)
for i in range(1, len(path_points)):
cv2.line(
frame_copy,
path_points[i - 1],
path_points[i],
color,
2,
)
bottom_center = (
int((obj["box"][0] + obj["box"][2]) / 2),
int(obj["box"][3]),
)
cv2.line(
frame_copy,
path_points[-1],
bottom_center,
color,
2,
)
return frame_copy
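
The path overlay above stores points normalized to 0..1 and scales them by the camera's detect resolution before drawing. A minimal OpenCV sketch of that conversion with made-up points (illustrative, not Frigate code):

import cv2
import numpy as np

# Normalized path points -> pixel coordinates, as in the drawing code above.
detect_w, detect_h = 1280, 720
frame = np.zeros((detect_h, detect_w, 3), dtype=np.uint8)
path_data = [((0.25, 0.50), 1000.0), ((0.40, 0.55), 1001.0)]  # ((x, y), frame_time)

points = [(int(p[0][0] * detect_w), int(p[0][1] * detect_h)) for p in path_data]
for pt in points:
    cv2.circle(frame, pt, 5, (255, 255, 255), -1)
for i in range(1, len(points)):
    cv2.line(frame, points[i - 1], points[i], (255, 255, 255), 2)
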
def finished(self, obj_id):
del self.tracked_objects[obj_id]
def on(self, event_type: str, callback: Callable[[dict], None]):
def on(self, event_type: str, callback: Callable):
self.callbacks[event_type].append(callback)
def update(

View File

@@ -1,8 +1,9 @@
"""Facilitates communication between processes."""
import multiprocessing as mp
from _pickle import UnpicklingError
from multiprocessing.synchronize import Event as MpEvent
from typing import Any, Optional
from typing import Any
import zmq
@@ -32,7 +33,7 @@ class ConfigPublisher:
class ConfigSubscriber:
"""Simplifies receiving an updated config."""
def __init__(self, topic: str, exact=False) -> None:
def __init__(self, topic: str, exact: bool = False) -> None:
self.topic = topic
self.exact = exact
self.context = zmq.Context()
@@ -40,7 +41,7 @@ class ConfigSubscriber:
self.socket.setsockopt_string(zmq.SUBSCRIBE, topic)
self.socket.connect(SOCKET_PUB_SUB)
def check_for_update(self) -> Optional[tuple[str, Any]]:
def check_for_update(self) -> tuple[str, Any] | tuple[None, None]:
"""Returns updated config or None if no update."""
try:
topic = self.socket.recv_string(flags=zmq.NOBLOCK)
@@ -50,7 +51,7 @@ class ConfigSubscriber:
return (topic, obj)
else:
return (None, None)
except zmq.ZMQError:
except (zmq.ZMQError, UnicodeDecodeError, UnpicklingError):
return (None, None)
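
With this change check_for_update() always returns a 2-tuple, so callers can unpack unconditionally and test the topic for None. A hedged usage sketch (assumes the frigate package is importable and the comms endpoints under /tmp/cache exist; the handler is hypothetical):

import time

from frigate.comms.config_updater import ConfigSubscriber

subscriber = ConfigSubscriber("config/notifications", exact=True)

def apply_update(settings) -> None:  # hypothetical handler
    print("notifications config changed:", settings)

for _ in range(50):  # poll for ~5 seconds in this sketch
    topic, updated = subscriber.check_for_update()
    if topic is not None:
        apply_update(updated)
    time.sleep(0.1)

subscriber.stop()
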
def stop(self) -> None:

View File

@@ -1,7 +1,7 @@
"""Facilitates communication between processes."""
from enum import Enum
from typing import Any, Optional
from typing import Any
from .zmq_proxy import Publisher, Subscriber
@@ -19,8 +19,7 @@ class DetectionPublisher(Publisher):
topic_base = "detection/"
def __init__(self, topic: DetectionTypeEnum) -> None:
topic = topic.value
def __init__(self, topic: str) -> None:
super().__init__(topic)
@@ -29,16 +28,15 @@ class DetectionSubscriber(Subscriber):
topic_base = "detection/"
def __init__(self, topic: DetectionTypeEnum) -> None:
topic = topic.value
def __init__(self, topic: str) -> None:
super().__init__(topic)
def check_for_update(
self, timeout: float = None
) -> Optional[tuple[DetectionTypeEnum, Any]]:
self, timeout: float | None = None
) -> tuple[str, Any] | tuple[None, None] | None:
return super().check_for_update(timeout)
def _return_object(self, topic: str, payload: Any) -> Any:
if payload is None:
return (None, None)
return (DetectionTypeEnum[topic[len(self.topic_base) :]], payload)
return (topic[len(self.topic_base) :], payload)

View File

@@ -3,24 +3,32 @@
import datetime
import json
import logging
from typing import Any, Callable, Optional
from typing import Any, Callable, Optional, cast
from frigate.camera import PTZMetrics
from frigate.camera.activity_manager import CameraActivityManager
from frigate.camera.activity_manager import AudioActivityManager, CameraActivityManager
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.webpush import WebPushClient
from frigate.config import BirdseyeModeEnum, FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdatePublisher,
CameraConfigUpdateTopic,
)
from frigate.const import (
CLEAR_ONGOING_REVIEW_SEGMENTS,
EXPIRE_AUDIO_ACTIVITY,
INSERT_MANY_RECORDINGS,
INSERT_PREVIEW,
NOTIFICATION_TEST,
REQUEST_REGION_GRID,
UPDATE_AUDIO_ACTIVITY,
UPDATE_BIRDSEYE_LAYOUT,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_MODEL_STATE,
UPDATE_REVIEW_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
)
from frigate.models import Event, Previews, Recordings, ReviewSegment
@@ -38,7 +46,7 @@ class Dispatcher:
def __init__(
self,
config: FrigateConfig,
config_updater: ConfigPublisher,
config_updater: CameraConfigUpdatePublisher,
onvif: OnvifController,
ptz_metrics: dict[str, PTZMetrics],
communicators: list[Communicator],
@@ -49,11 +57,13 @@ class Dispatcher:
self.ptz_metrics = ptz_metrics
self.comms = communicators
self.camera_activity = CameraActivityManager(config, self.publish)
self.model_state = {}
self.embeddings_reindex = {}
self.audio_activity = AudioActivityManager(config, self.publish)
self.model_state: dict[str, ModelStatusTypesEnum] = {}
self.embeddings_reindex: dict[str, Any] = {}
self.birdseye_layout: dict[str, Any] = {}
self._camera_settings_handlers: dict[str, Callable] = {
"audio": self._on_audio_command,
"audio_transcription": self._on_audio_transcription_command,
"detect": self._on_detect_command,
"enabled": self._on_enabled_command,
"improve_contrast": self._on_motion_improve_contrast_command,
@@ -68,6 +78,8 @@ class Dispatcher:
"birdseye_mode": self._on_birdseye_mode_command,
"review_alerts": self._on_alerts_command,
"review_detections": self._on_detections_command,
"object_descriptions": self._on_object_description_command,
"review_descriptions": self._on_review_description_command,
}
self._global_settings_handlers: dict[str, Callable] = {
"notifications": self._on_global_notification_command,
@@ -80,10 +92,12 @@ class Dispatcher:
(comm for comm in communicators if isinstance(comm, WebPushClient)), None
)
def _receive(self, topic: str, payload: str) -> Optional[Any]:
def _receive(self, topic: str, payload: Any) -> Optional[Any]:
"""Handle receiving of payload from communicators."""
def handle_camera_command(command_type, camera_name, command, payload):
def handle_camera_command(
command_type: str, camera_name: str, command: str, payload: str
) -> None:
try:
if command_type == "set":
self._camera_settings_handlers[command](camera_name, payload)
@@ -92,13 +106,13 @@ class Dispatcher:
except KeyError:
logger.error(f"Invalid command type or handler: {command_type}")
def handle_restart():
def handle_restart() -> None:
restart_frigate()
def handle_insert_many_recordings():
def handle_insert_many_recordings() -> None:
Recordings.insert_many(payload).execute()
def handle_request_region_grid():
def handle_request_region_grid() -> Any:
camera = payload
grid = get_camera_regions_grid(
camera,
@@ -107,26 +121,32 @@ class Dispatcher:
)
return grid
def handle_insert_preview():
def handle_insert_preview() -> None:
Previews.insert(payload).execute()
def handle_upsert_review_segment():
def handle_upsert_review_segment() -> None:
ReviewSegment.insert(payload).on_conflict(
conflict_target=[ReviewSegment.id],
update=payload,
).execute()
def handle_clear_ongoing_review_segments():
def handle_clear_ongoing_review_segments() -> None:
ReviewSegment.update(end_time=datetime.datetime.now().timestamp()).where(
ReviewSegment.end_time.is_null(True)
).execute()
def handle_update_camera_activity():
def handle_update_camera_activity() -> None:
self.camera_activity.update_activity(payload)
def handle_update_event_description():
def handle_update_audio_activity() -> None:
self.audio_activity.update_activity(payload)
def handle_expire_audio_activity() -> None:
self.audio_activity.expire_all(payload)
def handle_update_event_description() -> None:
event: Event = Event.get(Event.id == payload["id"])
event.data["description"] = payload["description"]
cast(dict, event.data)["description"] = payload["description"]
event.save()
self.publish(
"tracked_object_update",
@@ -140,31 +160,48 @@ class Dispatcher:
),
)
def handle_update_model_state():
def handle_update_review_description() -> None:
final_data = payload["after"]
ReviewSegment.insert(final_data).on_conflict(
conflict_target=[ReviewSegment.id],
update=final_data,
).execute()
self.publish("reviews", json.dumps(payload))
def handle_update_model_state() -> None:
if payload:
model = payload["model"]
state = payload["state"]
self.model_state[model] = ModelStatusTypesEnum[state]
self.publish("model_state", json.dumps(self.model_state))
def handle_model_state():
def handle_model_state() -> None:
self.publish("model_state", json.dumps(self.model_state.copy()))
def handle_update_embeddings_reindex_progress():
def handle_update_embeddings_reindex_progress() -> None:
self.embeddings_reindex = payload
self.publish(
"embeddings_reindex_progress",
json.dumps(payload),
)
def handle_embeddings_reindex_progress():
def handle_embeddings_reindex_progress() -> None:
self.publish(
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
def handle_on_connect():
def handle_update_birdseye_layout() -> None:
if payload:
self.birdseye_layout = payload
self.publish("birdseye_layout", json.dumps(self.birdseye_layout))
def handle_birdseye_layout() -> None:
self.publish("birdseye_layout", json.dumps(self.birdseye_layout.copy()))
def handle_on_connect() -> None:
camera_status = self.camera_activity.last_camera_activity.copy()
audio_detections = self.audio_activity.current_audio_detections.copy()
cameras_with_status = camera_status.keys()
for camera in self.config.cameras.keys():
@@ -177,6 +214,9 @@ class Dispatcher:
"snapshots": self.config.cameras[camera].snapshots.enabled,
"record": self.config.cameras[camera].record.enabled,
"audio": self.config.cameras[camera].audio.enabled,
"audio_transcription": self.config.cameras[
camera
].audio_transcription.live_enabled,
"notifications": self.config.cameras[camera].notifications.enabled,
"notifications_suspended": int(
self.web_push_client.suspended_cameras.get(camera, 0)
@@ -189,6 +229,12 @@ class Dispatcher:
].onvif.autotracking.enabled,
"alerts": self.config.cameras[camera].review.alerts.enabled,
"detections": self.config.cameras[camera].review.detections.enabled,
"object_descriptions": self.config.cameras[
camera
].objects.genai.enabled,
"review_descriptions": self.config.cameras[
camera
].review.genai.enabled,
}
self.publish("camera_activity", json.dumps(camera_status))
@@ -197,8 +243,10 @@ class Dispatcher:
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
self.publish("birdseye_layout", json.dumps(self.birdseye_layout.copy()))
self.publish("audio_detections", json.dumps(audio_detections))
def handle_notification_test():
def handle_notification_test() -> None:
self.publish("notification_test", "Test notification")
# Dictionary mapping topic to handlers
@@ -209,13 +257,18 @@ class Dispatcher:
UPSERT_REVIEW_SEGMENT: handle_upsert_review_segment,
CLEAR_ONGOING_REVIEW_SEGMENTS: handle_clear_ongoing_review_segments,
UPDATE_CAMERA_ACTIVITY: handle_update_camera_activity,
UPDATE_AUDIO_ACTIVITY: handle_update_audio_activity,
EXPIRE_AUDIO_ACTIVITY: handle_expire_audio_activity,
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
UPDATE_REVIEW_DESCRIPTION: handle_update_review_description,
UPDATE_MODEL_STATE: handle_update_model_state,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
UPDATE_BIRDSEYE_LAYOUT: handle_update_birdseye_layout,
NOTIFICATION_TEST: handle_notification_test,
"restart": handle_restart,
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
"modelState": handle_model_state,
"birdseyeLayout": handle_birdseye_layout,
"onConnect": handle_on_connect,
}
@@ -243,11 +296,12 @@ class Dispatcher:
logger.error(
f"Received invalid {topic.split('/')[-1]} command: {topic}"
)
return
return None
elif topic in topic_handlers:
return topic_handlers[topic]()
else:
self.publish(topic, payload, retain=False)
return None
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Handle publishing to communicators."""
@@ -273,8 +327,11 @@ class Dispatcher:
f"Turning on motion for {camera_name} due to detection being enabled."
)
motion_settings.enabled = True
self.config_updater.publish(
f"config/motion/{camera_name}", motion_settings
self.config_updater.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum.motion, camera_name
),
motion_settings,
)
self.publish(f"{camera_name}/motion/state", payload, retain=True)
elif payload == "OFF":
@@ -282,7 +339,10 @@ class Dispatcher:
logger.info(f"Turning off detection for {camera_name}")
detect_settings.enabled = False
self.config_updater.publish(f"config/detect/{camera_name}", detect_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.detect, camera_name),
detect_settings,
)
self.publish(f"{camera_name}/detect/state", payload, retain=True)
def _on_enabled_command(self, camera_name: str, payload: str) -> None:
@@ -303,7 +363,10 @@ class Dispatcher:
logger.info(f"Turning off camera {camera_name}")
camera_settings.enabled = False
self.config_updater.publish(f"config/enabled/{camera_name}", camera_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.enabled, camera_name),
camera_settings.enabled,
)
self.publish(f"{camera_name}/enabled/state", payload, retain=True)
def _on_motion_command(self, camera_name: str, payload: str) -> None:
@@ -326,7 +389,10 @@ class Dispatcher:
logger.info(f"Turning off motion for {camera_name}")
motion_settings.enabled = False
self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
motion_settings,
)
self.publish(f"{camera_name}/motion/state", payload, retain=True)
def _on_motion_improve_contrast_command(
@@ -338,13 +404,16 @@ class Dispatcher:
if payload == "ON":
if not motion_settings.improve_contrast:
logger.info(f"Turning on improve contrast for {camera_name}")
motion_settings.improve_contrast = True # type: ignore[union-attr]
motion_settings.improve_contrast = True
elif payload == "OFF":
if motion_settings.improve_contrast:
logger.info(f"Turning off improve contrast for {camera_name}")
motion_settings.improve_contrast = False # type: ignore[union-attr]
motion_settings.improve_contrast = False
self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
motion_settings,
)
self.publish(f"{camera_name}/improve_contrast/state", payload, retain=True)
def _on_ptz_autotracker_command(self, camera_name: str, payload: str) -> None:
@@ -383,8 +452,11 @@ class Dispatcher:
motion_settings = self.config.cameras[camera_name].motion
logger.info(f"Setting motion contour area for {camera_name}: {payload}")
motion_settings.contour_area = payload # type: ignore[union-attr]
self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
motion_settings.contour_area = payload
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
motion_settings,
)
self.publish(f"{camera_name}/motion_contour_area/state", payload, retain=True)
def _on_motion_threshold_command(self, camera_name: str, payload: int) -> None:
@@ -397,8 +469,11 @@ class Dispatcher:
motion_settings = self.config.cameras[camera_name].motion
logger.info(f"Setting motion threshold for {camera_name}: {payload}")
motion_settings.threshold = payload # type: ignore[union-attr]
self.config_updater.publish(f"config/motion/{camera_name}", motion_settings)
motion_settings.threshold = payload
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.motion, camera_name),
motion_settings,
)
self.publish(f"{camera_name}/motion_threshold/state", payload, retain=True)
def _on_global_notification_command(self, payload: str) -> None:
@@ -409,9 +484,9 @@ class Dispatcher:
notification_settings = self.config.notifications
logger.info(f"Setting all notifications: {payload}")
notification_settings.enabled = payload == "ON" # type: ignore[union-attr]
self.config_updater.publish(
"config/notifications", {"_global_notifications": notification_settings}
notification_settings.enabled = payload == "ON"
self.config_updater.publisher.publish(
"config/notifications", notification_settings
)
self.publish("notifications/state", payload, retain=True)
@@ -434,9 +509,43 @@ class Dispatcher:
logger.info(f"Turning off audio detection for {camera_name}")
audio_settings.enabled = False
self.config_updater.publish(f"config/audio/{camera_name}", audio_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.audio, camera_name),
audio_settings,
)
self.publish(f"{camera_name}/audio/state", payload, retain=True)
def _on_audio_transcription_command(self, camera_name: str, payload: str) -> None:
"""Callback for live audio transcription topic."""
audio_transcription_settings = self.config.cameras[
camera_name
].audio_transcription
if payload == "ON":
if not self.config.cameras[
camera_name
].audio_transcription.enabled_in_config:
logger.error(
"Audio transcription must be enabled in the config to be turned on via MQTT."
)
return
if not audio_transcription_settings.live_enabled:
logger.info(f"Turning on live audio transcription for {camera_name}")
audio_transcription_settings.live_enabled = True
elif payload == "OFF":
if audio_transcription_settings.live_enabled:
logger.info(f"Turning off live audio transcription for {camera_name}")
audio_transcription_settings.live_enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(
CameraConfigUpdateEnum.audio_transcription, camera_name
),
audio_transcription_settings,
)
self.publish(f"{camera_name}/audio_transcription/state", payload, retain=True)
def _on_recordings_command(self, camera_name: str, payload: str) -> None:
"""Callback for recordings topic."""
record_settings = self.config.cameras[camera_name].record
@@ -456,7 +565,10 @@ class Dispatcher:
logger.info(f"Turning off recordings for {camera_name}")
record_settings.enabled = False
self.config_updater.publish(f"config/record/{camera_name}", record_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.record, camera_name),
record_settings,
)
self.publish(f"{camera_name}/recordings/state", payload, retain=True)
def _on_snapshots_command(self, camera_name: str, payload: str) -> None:
@@ -472,6 +584,10 @@ class Dispatcher:
logger.info(f"Turning off snapshots for {camera_name}")
snapshots_settings.enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.snapshots, camera_name),
snapshots_settings,
)
self.publish(f"{camera_name}/snapshots/state", payload, retain=True)
def _on_ptz_command(self, camera_name: str, payload: str) -> None:
@@ -506,7 +622,10 @@ class Dispatcher:
logger.info(f"Turning off birdseye for {camera_name}")
birdseye_settings.enabled = False
self.config_updater.publish(f"config/birdseye/{camera_name}", birdseye_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.birdseye, camera_name),
birdseye_settings,
)
self.publish(f"{camera_name}/birdseye/state", payload, retain=True)
def _on_birdseye_mode_command(self, camera_name: str, payload: str) -> None:
@@ -527,7 +646,10 @@ class Dispatcher:
f"Setting birdseye mode for {camera_name} to {birdseye_settings.mode}"
)
self.config_updater.publish(f"config/birdseye/{camera_name}", birdseye_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.birdseye, camera_name),
birdseye_settings,
)
self.publish(f"{camera_name}/birdseye_mode/state", payload, retain=True)
def _on_camera_notification_command(self, camera_name: str, payload: str) -> None:
@@ -559,8 +681,9 @@ class Dispatcher:
):
self.web_push_client.suspended_cameras[camera_name] = 0
self.config_updater.publish(
"config/notifications", {camera_name: notification_settings}
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.notifications, camera_name),
notification_settings,
)
self.publish(f"{camera_name}/notifications/state", payload, retain=True)
self.publish(f"{camera_name}/notifications/suspended", "0", retain=True)
@@ -617,7 +740,10 @@ class Dispatcher:
logger.info(f"Turning off alerts for {camera_name}")
review_settings.alerts.enabled = False
self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.review, camera_name),
review_settings,
)
self.publish(f"{camera_name}/review_alerts/state", payload, retain=True)
def _on_detections_command(self, camera_name: str, payload: str) -> None:
@@ -639,5 +765,58 @@ class Dispatcher:
logger.info(f"Turning off detections for {camera_name}")
review_settings.detections.enabled = False
self.config_updater.publish(f"config/review/{camera_name}", review_settings)
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.review, camera_name),
review_settings,
)
self.publish(f"{camera_name}/review_detections/state", payload, retain=True)
def _on_object_description_command(self, camera_name: str, payload: str) -> None:
"""Callback for object description topic."""
genai_settings = self.config.cameras[camera_name].objects.genai
if payload == "ON":
if not self.config.cameras[camera_name].objects.genai.enabled_in_config:
logger.error(
"GenAI must be enabled in the config to be turned on via MQTT."
)
return
if not genai_settings.enabled:
logger.info(f"Turning on object descriptions for {camera_name}")
genai_settings.enabled = True
elif payload == "OFF":
if genai_settings.enabled:
logger.info(f"Turning off object descriptions for {camera_name}")
genai_settings.enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.object_genai, camera_name),
genai_settings,
)
self.publish(f"{camera_name}/object_descriptions/state", payload, retain=True)
def _on_review_description_command(self, camera_name: str, payload: str) -> None:
"""Callback for review description topic."""
genai_settings = self.config.cameras[camera_name].review.genai
if payload == "ON":
if not self.config.cameras[camera_name].review.genai.enabled_in_config:
logger.error(
"GenAI Alerts or Detections must be enabled in the config to be turned on via MQTT."
)
return
if not genai_settings.enabled:
logger.info(f"Turning on review descriptions for {camera_name}")
genai_settings.enabled = True
elif payload == "OFF":
if genai_settings.enabled:
logger.info(f"Turning off review descriptions for {camera_name}")
genai_settings.enabled = False
self.config_updater.publish_update(
CameraConfigUpdateTopic(CameraConfigUpdateEnum.review_genai, camera_name),
genai_settings,
)
self.publish(f"{camera_name}/review_descriptions/state", payload, retain=True)

View File

@@ -1,23 +1,36 @@
"""Facilitates communication between processes."""
import logging
from enum import Enum
from typing import Any, Callable
import zmq
logger = logging.getLogger(__name__)
SOCKET_REP_REQ = "ipc:///tmp/cache/embeddings"
class EmbeddingsRequestEnum(Enum):
# audio
transcribe_audio = "transcribe_audio"
# custom classification
reload_classification_model = "reload_classification_model"
# face
clear_face_classifier = "clear_face_classifier"
embed_description = "embed_description"
embed_thumbnail = "embed_thumbnail"
generate_search = "generate_search"
recognize_face = "recognize_face"
register_face = "register_face"
reprocess_face = "reprocess_face"
reprocess_plate = "reprocess_plate"
# semantic search
embed_description = "embed_description"
embed_thumbnail = "embed_thumbnail"
generate_search = "generate_search"
reindex = "reindex"
# LPR
reprocess_plate = "reprocess_plate"
# Review Descriptions
summarize_review = "summarize_review"
class EmbeddingsResponder:
@@ -34,9 +47,16 @@ class EmbeddingsResponder:
break
try:
(topic, value) = self.socket.recv_json(flags=zmq.NOBLOCK)
raw = self.socket.recv_json(flags=zmq.NOBLOCK)
response = process(topic, value)
if isinstance(raw, list):
(topic, value) = raw
response = process(topic, value)
else:
logger.warning(
f"Received unexpected data type in ZMQ recv_json: {type(raw)}"
)
response = None
if response is not None:
self.socket.send_json(response)
@@ -58,7 +78,7 @@ class EmbeddingsRequestor:
self.socket = self.context.socket(zmq.REQ)
self.socket.connect(SOCKET_REP_REQ)
def send_data(self, topic: str, data: Any) -> str:
def send_data(self, topic: str, data: Any) -> Any:
"""Sends data and then waits for reply."""
try:
self.socket.send_json((topic, data))
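
The responder-side guard above only unpacks payloads that arrive as a two-element JSON array, which is exactly what send_json((topic, data)) produces on the requestor side. A standalone pyzmq REQ/REP sketch of that framing on a throwaway inproc endpoint (illustrative, not the live Frigate socket):

import zmq

context = zmq.Context()
rep = context.socket(zmq.REP)
rep.bind("inproc://embeddings-example")
req = context.socket(zmq.REQ)
req.connect("inproc://embeddings-example")

req.send_json(("generate_search", {"query": "red car"}))  # tuple -> JSON array

raw = rep.recv_json()
if isinstance(raw, list):  # same guard the responder now applies
    topic, value = raw
    rep.send_json({"topic": topic, "ok": True})
print(req.recv_json())  # {'topic': 'generate_search', 'ok': True}

req.close()
rep.close()
context.destroy()
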

View File

@@ -15,7 +15,7 @@ class EventMetadataTypeEnum(str, Enum):
manual_event_end = "manual_event_end"
regenerate_description = "regenerate_description"
sub_label = "sub_label"
recognized_license_plate = "recognized_license_plate"
attribute = "attribute"
lpr_event_create = "lpr_event_create"
save_lpr_snapshot = "save_lpr_snapshot"
@@ -28,8 +28,8 @@ class EventMetadataPublisher(Publisher):
def __init__(self) -> None:
super().__init__()
def publish(self, topic: EventMetadataTypeEnum, payload: Any) -> None:
super().publish(payload, topic.value)
def publish(self, payload: Any, sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
class EventMetadataSubscriber(Subscriber):
@@ -40,9 +40,10 @@ class EventMetadataSubscriber(Subscriber):
def __init__(self, topic: EventMetadataTypeEnum) -> None:
super().__init__(topic.value)
def _return_object(self, topic: str, payload: tuple) -> tuple:
def _return_object(
self, topic: str, payload: tuple | None
) -> tuple[str, Any] | tuple[None, None]:
if payload is None:
return (None, None)
topic = EventMetadataTypeEnum[topic[len(self.topic_base) :]]
return (topic, payload)
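
After this signature change the enum value travels as the sub_topic argument rather than as the first parameter, which is the call shape the AudioActivityManager above already uses for manual audio events. A hedged sketch with placeholder values (assumes a Frigate runtime where the comms proxy endpoints under /tmp/cache exist):

from frigate.comms.event_metadata_updater import (
    EventMetadataPublisher,
    EventMetadataTypeEnum,
)

publisher = EventMetadataPublisher()
publisher.publish(
    (1735689600.0, "front_yard", "speech", "1735689600.0-ab12cd",
     True, 0.74, None, None, "audio", {}),  # same layout as the audio manual event above
    EventMetadataTypeEnum.manual_event_create.value,
)
publisher.stop()
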

View File

@@ -7,7 +7,9 @@ from frigate.events.types import EventStateEnum, EventTypeEnum
from .zmq_proxy import Publisher, Subscriber
class EventUpdatePublisher(Publisher):
class EventUpdatePublisher(
Publisher[tuple[EventTypeEnum, EventStateEnum, str | None, str, dict[str, Any]]]
):
"""Publishes events (objects, audio, manual)."""
topic_base = "event/"
@@ -16,9 +18,11 @@ class EventUpdatePublisher(Publisher):
super().__init__("update")
def publish(
self, payload: tuple[EventTypeEnum, EventStateEnum, str, str, dict[str, Any]]
self,
payload: tuple[EventTypeEnum, EventStateEnum, str | None, str, dict[str, Any]],
sub_topic: str = "",
) -> None:
super().publish(payload)
super().publish(payload, sub_topic)
class EventUpdateSubscriber(Subscriber):
@@ -30,7 +34,9 @@ class EventUpdateSubscriber(Subscriber):
super().__init__("update")
class EventEndPublisher(Publisher):
class EventEndPublisher(
Publisher[tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]]]
):
"""Publishes events that have ended."""
topic_base = "event/"
@@ -39,9 +45,11 @@ class EventEndPublisher(Publisher):
super().__init__("finalized")
def publish(
self, payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]]
self,
payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, Any]],
sub_topic: str = "",
) -> None:
super().publish(payload)
super().publish(payload, sub_topic)
class EventEndSubscriber(Subscriber):

View File

@@ -1,5 +1,6 @@
"""Facilitates communication between processes."""
import logging
import multiprocessing as mp
import threading
from multiprocessing.synchronize import Event as MpEvent
@@ -9,6 +10,8 @@ import zmq
from frigate.comms.base_communicator import Communicator
logger = logging.getLogger(__name__)
SOCKET_REP_REQ = "ipc:///tmp/cache/comms"
@@ -19,7 +22,7 @@ class InterProcessCommunicator(Communicator):
self.socket.bind(SOCKET_REP_REQ)
self.stop_event: MpEvent = mp.Event()
def publish(self, topic: str, payload: str, retain: bool) -> None:
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""There is no communication back to the processes."""
pass
@@ -37,9 +40,16 @@ class InterProcessCommunicator(Communicator):
break
try:
(topic, value) = self.socket.recv_json(flags=zmq.NOBLOCK)
raw = self.socket.recv_json(flags=zmq.NOBLOCK)
response = self._dispatcher(topic, value)
if isinstance(raw, list):
(topic, value) = raw
response = self._dispatcher(topic, value)
else:
logger.warning(
f"Received unexpected data type in ZMQ recv_json: {type(raw)}"
)
response = None
if response is not None:
self.socket.send_json(response)

View File

@@ -11,7 +11,7 @@ from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)
class MqttClient(Communicator): # type: ignore[misc]
class MqttClient(Communicator):
"""Frigate wrapper for mqtt client."""
def __init__(self, config: FrigateConfig) -> None:
@@ -75,7 +75,7 @@ class MqttClient(Communicator): # type: ignore[misc]
)
self.publish(
f"{camera_name}/improve_contrast/state",
"ON" if camera.motion.improve_contrast else "OFF", # type: ignore[union-attr]
"ON" if camera.motion.improve_contrast else "OFF",
retain=True,
)
self.publish(
@@ -85,12 +85,12 @@ class MqttClient(Communicator): # type: ignore[misc]
)
self.publish(
f"{camera_name}/motion_threshold/state",
camera.motion.threshold, # type: ignore[union-attr]
camera.motion.threshold,
retain=True,
)
self.publish(
f"{camera_name}/motion_contour_area/state",
camera.motion.contour_area, # type: ignore[union-attr]
camera.motion.contour_area,
retain=True,
)
self.publish(
@@ -122,6 +122,16 @@ class MqttClient(Communicator): # type: ignore[misc]
"ON" if camera.review.detections.enabled_in_config else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/object_descriptions/state",
"ON" if camera.objects.genai.enabled_in_config else "OFF",
retain=True,
)
self.publish(
f"{camera_name}/review_descriptions/state",
"ON" if camera.review.genai.enabled_in_config else "OFF",
retain=True,
)
if self.config.notifications.enabled_in_config:
self.publish(
@@ -145,7 +155,7 @@ class MqttClient(Communicator): # type: ignore[misc]
client: mqtt.Client,
userdata: Any,
flags: Any,
reason_code: mqtt.ReasonCode,
reason_code: mqtt.ReasonCode, # type: ignore[name-defined]
properties: Any,
) -> None:
"""Mqtt connection callback."""
@@ -177,7 +187,7 @@ class MqttClient(Communicator): # type: ignore[misc]
client: mqtt.Client,
userdata: Any,
flags: Any,
reason_code: mqtt.ReasonCode,
reason_code: mqtt.ReasonCode, # type: ignore[name-defined]
properties: Any,
) -> None:
"""Mqtt disconnection callback."""
@@ -215,6 +225,7 @@ class MqttClient(Communicator): # type: ignore[misc]
"birdseye_mode",
"review_alerts",
"review_detections",
"genai",
]
for name in self.config.cameras.keys():

View File

@@ -0,0 +1,92 @@
"""Facilitates communication between processes for object detection signals."""
import threading
import zmq
SOCKET_PUB = "ipc:///tmp/cache/detector_pub"
SOCKET_SUB = "ipc:///tmp/cache/detector_sub"
class ZmqProxyRunner(threading.Thread):
def __init__(self, context: zmq.Context[zmq.Socket]) -> None:
super().__init__(name="detector_proxy")
self.context = context
def run(self) -> None:
"""Run the proxy."""
incoming = self.context.socket(zmq.XSUB)
incoming.bind(SOCKET_PUB)
outgoing = self.context.socket(zmq.XPUB)
outgoing.bind(SOCKET_SUB)
# Blocking: This will unblock (via exception) when we destroy the context
# The incoming and outgoing sockets will be closed automatically
# when the context is destroyed as well.
try:
zmq.proxy(incoming, outgoing)
except zmq.ZMQError:
pass
class DetectorProxy:
"""Proxies object detection signals."""
def __init__(self) -> None:
self.context = zmq.Context()
self.runner = ZmqProxyRunner(self.context)
self.runner.start()
def stop(self) -> None:
# destroying the context will tell the proxy to stop
self.context.destroy()
self.runner.join()
class ObjectDetectorPublisher:
"""Publishes signal for object detection to different processes."""
topic_base = "object_detector/"
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUB)
self.socket.connect(SOCKET_PUB)
def publish(self, sub_topic: str = "") -> None:
"""Publish message."""
self.socket.send_string(f"{self.topic}{sub_topic}/")
def stop(self) -> None:
self.socket.close()
self.context.destroy()
class ObjectDetectorSubscriber:
"""Simplifies receiving a signal for object detection."""
topic_base = "object_detector/"
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}/"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.SUB)
self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
self.socket.connect(SOCKET_SUB)
def check_for_update(self, timeout: float = 5) -> str | None:
"""Returns message or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
if has_update:
return self.socket.recv_string(flags=zmq.NOBLOCK)
except zmq.ZMQError:
pass
return None
def stop(self) -> None:
self.socket.close()
self.context.destroy()
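
The DetectorProxy above is a standard ZeroMQ XSUB/XPUB forwarder: publishers connect to the XSUB side, subscribers to the XPUB side, and subscriptions flow upstream so PUB sockets filter at the sender. A self-contained pyzmq sketch of the same pattern on throwaway endpoints (illustrative; not the Frigate module itself):

import tempfile
import threading
import time

import zmq

base = tempfile.mkdtemp()
PUB_EP = f"ipc://{base}/det_pub"  # publishers connect here (XSUB side)
SUB_EP = f"ipc://{base}/det_sub"  # subscribers connect here (XPUB side)

ctx = zmq.Context()

def forward() -> None:
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(PUB_EP)
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(SUB_EP)
    try:
        zmq.proxy(xsub, xpub)  # unblocks with ZMQError once the context is destroyed
    except zmq.ZMQError:
        pass

threading.Thread(target=forward, daemon=True).start()

sub = ctx.socket(zmq.SUB)
sub.setsockopt_string(zmq.SUBSCRIBE, "object_detector/")
sub.connect(SUB_EP)

pub = ctx.socket(zmq.PUB)
pub.connect(PUB_EP)

time.sleep(0.5)  # let the proxy bind and the subscription propagate
pub.send_string("object_detector/front_yard/")
if sub.poll(2000):  # wait up to 2s for the forwarded signal
    print(sub.recv_string())  # object_detector/front_yard/

ctx.destroy()
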

View File

@@ -2,6 +2,7 @@
import logging
from enum import Enum
from typing import Any
from .zmq_proxy import Publisher, Subscriber
@@ -10,20 +11,22 @@ logger = logging.getLogger(__name__)
class RecordingsDataTypeEnum(str, Enum):
all = ""
recordings_available_through = "recordings_available_through"
saved = "saved" # segment has been saved to db
latest = "latest" # segment is in cache
valid = "valid" # segment is valid
invalid = "invalid" # segment is invalid
class RecordingsDataPublisher(Publisher):
class RecordingsDataPublisher(Publisher[Any]):
"""Publishes latest recording data."""
topic_base = "recordings/"
def __init__(self, topic: RecordingsDataTypeEnum) -> None:
topic = topic.value
super().__init__(topic)
def __init__(self) -> None:
super().__init__()
def publish(self, payload: tuple[str, float]) -> None:
super().publish(payload)
def publish(self, payload: Any, sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
class RecordingsDataSubscriber(Subscriber):
@@ -32,5 +35,12 @@ class RecordingsDataSubscriber(Subscriber):
topic_base = "recordings/"
def __init__(self, topic: RecordingsDataTypeEnum) -> None:
topic = topic.value
super().__init__(topic)
super().__init__(topic.value)
def _return_object(
self, topic: str, payload: tuple | None
) -> tuple[str, Any] | tuple[None, None]:
if payload is None:
return (None, None)
return (topic, payload)

View File

@@ -0,0 +1,30 @@
"""Facilitates communication between processes."""
import logging
from .zmq_proxy import Publisher, Subscriber
logger = logging.getLogger(__name__)
class ReviewDataPublisher(
Publisher
): # update when typing improvement is added Publisher[tuple[str, float]]
"""Publishes review item data."""
topic_base = "review/"
def __init__(self, topic: str) -> None:
super().__init__(topic)
def publish(self, payload: tuple[str, float], sub_topic: str = "") -> None:
super().publish(payload, sub_topic)
class ReviewDataSubscriber(Subscriber):
"""Receives review item data."""
topic_base = "review/"
def __init__(self, topic: str) -> None:
super().__init__(topic)

View File

@@ -17,6 +17,10 @@ from titlecase import titlecase
from frigate.comms.base_communicator import Communicator
from frigate.comms.config_updater import ConfigSubscriber
from frigate.config import FrigateConfig
from frigate.config.camera.updater import (
CameraConfigUpdateEnum,
CameraConfigUpdateSubscriber,
)
from frigate.const import CONFIG_DIR
from frigate.models import User
@@ -35,7 +39,7 @@ class PushNotification:
ttl: int = 0
class WebPushClient(Communicator): # type: ignore[misc]
class WebPushClient(Communicator):
"""Frigate wrapper for webpush client."""
def __init__(self, config: FrigateConfig, stop_event: MpEvent) -> None:
@@ -46,10 +50,12 @@ class WebPushClient(Communicator): # type: ignore[misc]
self.web_pushers: dict[str, list[WebPusher]] = {}
self.expired_subs: dict[str, list[str]] = {}
self.suspended_cameras: dict[str, int] = {
c.name: 0 for c in self.config.cameras.values()
c.name: 0 # type: ignore[misc]
for c in self.config.cameras.values()
}
self.last_camera_notification_time: dict[str, float] = {
c.name: 0 for c in self.config.cameras.values()
c.name: 0 # type: ignore[misc]
for c in self.config.cameras.values()
}
self.last_notification_time: float = 0
self.notification_queue: queue.Queue[PushNotification] = queue.Queue()
@@ -64,7 +70,7 @@ class WebPushClient(Communicator): # type: ignore[misc]
# Pull keys from PEM or generate if they do not exist
self.vapid = Vapid01.from_file(os.path.join(CONFIG_DIR, "notifications.pem"))
users: list[User] = (
users: list[dict[str, Any]] = (
User.select(User.username, User.notification_tokens).dicts().iterator()
)
for user in users:
@@ -73,7 +79,12 @@ class WebPushClient(Communicator): # type: ignore[misc]
self.web_pushers[user["username"]].append(WebPusher(sub))
# notification config updater
self.config_subscriber = ConfigSubscriber("config/notifications")
self.global_config_subscriber = ConfigSubscriber(
"config/notifications", exact=True
)
self.config_subscriber = CameraConfigUpdateSubscriber(
self.config, self.config.cameras, [CameraConfigUpdateEnum.notifications]
)
def subscribe(self, receiver: Callable) -> None:
"""Wrapper for allowing dispatcher to subscribe."""
@@ -154,15 +165,19 @@ class WebPushClient(Communicator): # type: ignore[misc]
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Wrapper for publishing when client is in valid state."""
# check for updated notification config
_, updated_notification_config = self.config_subscriber.check_for_update()
_, updated_notification_config = (
self.global_config_subscriber.check_for_update()
)
if updated_notification_config:
for key, value in updated_notification_config.items():
if key == "_global_notifications":
self.config.notifications = value
self.config.notifications = updated_notification_config
elif key in self.config.cameras:
self.config.cameras[key].notifications = value
updates = self.config_subscriber.check_for_updates()
if "add" in updates:
for camera in updates["add"]:
self.suspended_cameras[camera] = 0
self.last_camera_notification_time[camera] = 0
if topic == "reviews":
decoded = json.loads(payload)
@@ -173,6 +188,28 @@ class WebPushClient(Communicator): # type: ignore[misc]
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_alert(decoded)
if topic == "triggers":
decoded = json.loads(payload)
camera = decoded["camera"]
name = decoded["name"]
# ensure notifications are enabled and the specific trigger has
# notification action enabled
if (
not self.config.cameras[camera].notifications.enabled
or name not in self.config.cameras[camera].semantic_search.triggers
or "notification"
not in self.config.cameras[camera]
.semantic_search.triggers[name]
.actions
):
return
if self.is_camera_suspended(camera):
logger.debug(f"Notifications for {camera} are currently suspended.")
return
self.send_trigger(decoded)
elif topic == "notification_test":
if not self.config.notifications.enabled and not any(
cam.notifications.enabled for cam in self.config.cameras.values()
@@ -254,6 +291,23 @@ class WebPushClient(Communicator): # type: ignore[misc]
except Exception as e:
logger.error(f"Error processing notification: {str(e)}")
def _within_cooldown(self, camera: str) -> bool:
now = datetime.datetime.now().timestamp()
if now - self.last_notification_time < self.config.notifications.cooldown:
logger.debug(
f"Skipping notification for {camera} - in global cooldown period"
)
return True
if (
now - self.last_camera_notification_time[camera]
< self.config.cameras[camera].notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in camera-specific cooldown period"
)
return True
return False
def send_notification_test(self) -> None:
if not self.config.notifications.email:
return
@@ -280,26 +334,12 @@ class WebPushClient(Communicator): # type: ignore[misc]
return
camera: str = payload["after"]["camera"]
camera_name: str = getattr(
self.config.cameras[camera], "friendly_name", None
) or titlecase(camera.replace("_", " "))
current_time = datetime.datetime.now().timestamp()
# Check global cooldown period
if (
current_time - self.last_notification_time
< self.config.notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in global cooldown period"
)
return
# Check camera-specific cooldown period
if (
current_time - self.last_camera_notification_time[camera]
< self.config.cameras[camera].notifications.cooldown
):
logger.debug(
f"Skipping notification for {camera} - in camera-specific cooldown period"
)
if self._within_cooldown(camera):
return
self.check_registrations()
@@ -332,12 +372,22 @@ class WebPushClient(Communicator): # type: ignore[misc]
sorted_objects.update(payload["after"]["data"]["sub_labels"])
title = f"{titlecase(', '.join(sorted_objects).replace('_', ' '))}{' was' if state == 'end' else ''} detected in {titlecase(', '.join(payload['after']['data']['zones']).replace('_', ' '))}"
message = f"Detected on {titlecase(camera.replace('_', ' '))}"
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
ended = state == "end" or state == "genai"
if state == "genai" and payload["after"]["data"]["metadata"]:
message = payload["after"]["data"]["metadata"]["scene"]
else:
message = f"Detected on {camera_name}"
if ended:
logger.debug(
f"Sending a notification with state {state} and message {message}"
)
# if event is ongoing open to live view otherwise open to recordings view
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
ttl = 3600 if state == "end" else 0
direct_url = f"/review?id={reviewId}" if ended else f"/#{camera}"
ttl = 3600 if ended else 0
logger.debug(f"Sending push notification for {camera}, review ID {reviewId}")
@@ -354,6 +404,53 @@ class WebPushClient(Communicator): # type: ignore[misc]
self.cleanup_registrations()
def send_trigger(self, payload: dict[str, Any]) -> None:
if not self.config.notifications.email:
return
camera: str = payload["camera"]
camera_name: str = getattr(
self.config.cameras[camera], "friendly_name", None
) or titlecase(camera.replace("_", " "))
current_time = datetime.datetime.now().timestamp()
if self._within_cooldown(camera):
return
self.check_registrations()
self.last_camera_notification_time[camera] = current_time
self.last_notification_time = current_time
trigger_type = payload["type"]
event_id = payload["event_id"]
name = payload["name"]
score = payload["score"]
title = f"{name.replace('_', ' ')} triggered on {camera_name}"
message = f"{titlecase(trigger_type)} trigger fired for {camera_name} with score {score:.2f}"
image = f"clips/triggers/{camera}/{event_id}.webp"
direct_url = f"/explore?event_id={event_id}"
ttl = 0
logger.debug(
f"Sending push notification for {camera_name}, trigger name {name}"
)
for user in self.web_pushers:
self.send_push_notification(
user=user,
payload=payload,
title=title,
message=message,
direct_url=direct_url,
image=image,
ttl=ttl,
)
self.cleanup_registrations()
def stop(self) -> None:
logger.info("Closing notification queue")
self.notification_thread.join()
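For reference, a minimal sketch of the trigger payload shape that send_trigger consumes, based on the keys read above (all values are illustrative).
trigger_payload = {
    "camera": "front_door",
    "type": "semantic",                      # titlecased into the notification message
    "event_id": "1728000000.123456-abcdef",  # used for the thumbnail path and explore link
    "name": "red_truck",
    "score": 0.87,
}
# WebPushClient.publish("triggers", json.dumps(trigger_payload)) then checks the camera's
# notification and trigger-action config plus cooldowns before sending the push.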


@@ -4,7 +4,7 @@ import errno
import json
import logging
import threading
from typing import Callable
from typing import Any, Callable
from wsgiref.simple_server import make_server
from ws4py.server.wsgirefserver import (
@@ -21,8 +21,8 @@ from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)
class WebSocket(WebSocket_):
def unhandled_error(self, error):
class WebSocket(WebSocket_): # type: ignore[misc]
def unhandled_error(self, error: Any) -> None:
"""
Handles the unfriendly socket closures on the server side
without showing a confusing error message
@@ -33,12 +33,12 @@ class WebSocket(WebSocket_):
logging.getLogger("ws4py").exception("Failed to receive data")
class WebSocketClient(Communicator): # type: ignore[misc]
class WebSocketClient(Communicator):
"""Frigate wrapper for ws client."""
def __init__(self, config: FrigateConfig) -> None:
self.config = config
self.websocket_server = None
self.websocket_server: WSGIServer | None = None
def subscribe(self, receiver: Callable) -> None:
self._dispatcher = receiver
@@ -47,10 +47,10 @@ class WebSocketClient(Communicator): # type: ignore[misc]
def start(self) -> None:
"""Start the websocket client."""
class _WebSocketHandler(WebSocket): # type: ignore[misc]
class _WebSocketHandler(WebSocket):
receiver = self._dispatcher
def received_message(self, message: WebSocket.received_message) -> None:
def received_message(self, message: WebSocket.received_message) -> None: # type: ignore[name-defined]
try:
json_message = json.loads(message.data.decode("utf-8"))
json_message = {
@@ -86,7 +86,7 @@ class WebSocketClient(Communicator): # type: ignore[misc]
)
self.websocket_thread.start()
def publish(self, topic: str, payload: str, _: bool) -> None:
def publish(self, topic: str, payload: Any, _: bool = False) -> None:
try:
ws_message = json.dumps(
{
@@ -109,9 +109,11 @@ class WebSocketClient(Communicator): # type: ignore[misc]
pass
def stop(self) -> None:
self.websocket_server.manager.close_all()
self.websocket_server.manager.stop()
self.websocket_server.manager.join()
self.websocket_server.shutdown()
if self.websocket_server is not None:
self.websocket_server.manager.close_all()
self.websocket_server.manager.stop()
self.websocket_server.manager.join()
self.websocket_server.shutdown()
self.websocket_thread.join()
logger.info("Exiting websocket client...")


@@ -2,7 +2,7 @@
import json
import threading
from typing import Any, Optional
from typing import Generic, TypeVar
import zmq
@@ -47,7 +47,10 @@ class ZmqProxy:
self.runner.join()
class Publisher:
T = TypeVar("T")
class Publisher(Generic[T]):
"""Publishes messages."""
topic_base: str = ""
@@ -58,7 +61,7 @@ class Publisher:
self.socket = self.context.socket(zmq.PUB)
self.socket.connect(SOCKET_PUB)
def publish(self, payload: Any, sub_topic: str = "") -> None:
def publish(self, payload: T, sub_topic: str = "") -> None:
"""Publish message."""
self.socket.send_string(f"{self.topic}{sub_topic} {json.dumps(payload)}")
@@ -67,7 +70,7 @@ class Publisher:
self.context.destroy()
class Subscriber:
class Subscriber(Generic[T]):
"""Receives messages."""
topic_base: str = ""
@@ -79,9 +82,7 @@ class Subscriber:
self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
self.socket.connect(SOCKET_SUB)
def check_for_update(
self, timeout: float = FAST_QUEUE_TIMEOUT
) -> Optional[tuple[str, Any]]:
def check_for_update(self, timeout: float | None = FAST_QUEUE_TIMEOUT) -> T | None:
"""Returns message or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
@@ -98,5 +99,5 @@ class Subscriber:
self.socket.close()
self.context.destroy()
def _return_object(self, topic: str, payload: Any) -> Any:
def _return_object(self, topic: str, payload: T | None) -> T | None:
return payload
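A minimal sketch of how the new Generic[T] bases can be parameterized by subclasses (class names and the payload type are hypothetical).
class DetectionPublisher(Publisher[dict[str, float]]):
    topic_base = "detections/"

class DetectionSubscriber(Subscriber[dict[str, float]]):
    topic_base = "detections/"

pub = DetectionPublisher()                    # base Publisher defaults to an empty topic
pub.publish({"person": 0.92}, sub_topic="front_door")

sub = DetectionSubscriber("front_door")
scores = sub.check_for_update()               # typed as dict[str, float] | None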


@@ -1,6 +1,6 @@
from typing import Optional
from typing import Dict, List, Optional
from pydantic import Field
from pydantic import Field, field_validator, model_validator
from .base import FrigateBaseModel
@@ -34,3 +34,41 @@ class AuthConfig(FrigateBaseModel):
)
# As of Feb 2023, OWASP recommends 600000 iterations for PBKDF2-SHA256
hash_iterations: int = Field(default=600000, title="Password hash iterations")
roles: Dict[str, List[str]] = Field(
default_factory=dict,
title="Role to camera mappings. Empty list grants access to all cameras.",
)
@field_validator("roles")
@classmethod
def validate_roles(cls, v: Dict[str, List[str]]) -> Dict[str, List[str]]:
# Ensure role names are valid (alphanumeric with underscores)
for role in v.keys():
if not role.replace("_", "").isalnum():
raise ValueError(
f"Invalid role name '{role}'. Must be alphanumeric with underscores."
)
# Ensure 'admin' and 'viewer' are not used as custom role names
reserved_roles = {"admin", "viewer"}
if v.keys() & reserved_roles:
raise ValueError(
f"Reserved roles {reserved_roles} cannot be used as custom roles."
)
# Ensure no role has an empty camera list
for role, allowed_cameras in v.items():
if not allowed_cameras:
raise ValueError(
f"Role '{role}' has no cameras assigned. Custom roles must have at least one camera."
)
return v
@model_validator(mode="after")
def ensure_default_roles(self):
# Ensure admin and viewer are never overridden
self.roles["admin"] = []
self.roles["viewer"] = []
return self
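A minimal sketch of the new role validation behaviour (camera and role names are illustrative).
from pydantic import ValidationError

cfg = AuthConfig(roles={"garage_only": ["garage"]})
assert cfg.roles["admin"] == [] and cfg.roles["viewer"] == []  # always re-added; empty list = all cameras

for bad in ({"admin": ["garage"]},      # reserved role name
            {"side yard": ["side"]},    # not alphanumeric/underscore
            {"empty_role": []}):        # custom roles need at least one camera
    try:
        AuthConfig(roles=bad)
    except ValidationError:
        pass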


@@ -1,5 +1,29 @@
from typing import Any
from pydantic import BaseModel, ConfigDict
class FrigateBaseModel(BaseModel):
model_config = ConfigDict(extra="forbid", protected_namespaces=())
def get_nested_object(self, path: str) -> Any:
parts = path.split("/")
obj = self
for part in parts:
if part == "config":
continue
if isinstance(obj, BaseModel):
try:
obj = getattr(obj, part)
except AttributeError:
return None
elif isinstance(obj, dict):
try:
obj = obj[part]
except KeyError:
return None
else:
return None
return obj
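A minimal usage sketch (the config object and path are illustrative): get_nested_object walks a "/"-separated path through nested models and dicts, skipping the leading "config" segment and returning None for any missing part.
config = FrigateConfig(**raw_config)  # hypothetical, built elsewhere
motion = config.get_nested_object("config/cameras/front_door/motion")
assert config.get_nested_object("config/cameras/front_door/no_such_field") is None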


@@ -2,7 +2,7 @@ import os
from enum import Enum
from typing import Optional
from pydantic import Field, PrivateAttr
from pydantic import Field, PrivateAttr, model_validator
from frigate.const import CACHE_DIR, CACHE_SEGMENT_FORMAT, REGEX_CAMERA_NAME
from frigate.ffmpeg_presets import (
@@ -19,14 +19,15 @@ from frigate.util.builtin import (
from ..base import FrigateBaseModel
from ..classification import (
AudioTranscriptionConfig,
CameraFaceRecognitionConfig,
CameraLicensePlateRecognitionConfig,
CameraSemanticSearchConfig,
)
from .audio import AudioConfig
from .birdseye import BirdseyeCameraConfig
from .detect import DetectConfig
from .ffmpeg import CameraFfmpegConfig, CameraInput
from .genai import GenAICameraConfig
from .live import CameraLiveConfig
from .motion import MotionConfig
from .mqtt import CameraMqttConfig
@@ -50,12 +51,27 @@ class CameraTypeEnum(str, Enum):
class CameraConfig(FrigateBaseModel):
name: Optional[str] = Field(None, title="Camera name.", pattern=REGEX_CAMERA_NAME)
friendly_name: Optional[str] = Field(
None, title="Camera friendly name used in the Frigate UI."
)
@model_validator(mode="before")
@classmethod
def handle_friendly_name(cls, values):
if isinstance(values, dict) and "friendly_name" in values:
pass
return values
enabled: bool = Field(default=True, title="Enable camera.")
# Options with global fallback
audio: AudioConfig = Field(
default_factory=AudioConfig, title="Audio events configuration."
)
audio_transcription: AudioTranscriptionConfig = Field(
default_factory=AudioTranscriptionConfig, title="Audio transcription config."
)
birdseye: BirdseyeCameraConfig = Field(
default_factory=BirdseyeCameraConfig, title="Birdseye camera configuration."
)
@@ -66,18 +82,13 @@ class CameraConfig(FrigateBaseModel):
default_factory=CameraFaceRecognitionConfig, title="Face recognition config."
)
ffmpeg: CameraFfmpegConfig = Field(title="FFmpeg configuration for the camera.")
genai: GenAICameraConfig = Field(
default_factory=GenAICameraConfig, title="Generative AI configuration."
)
live: CameraLiveConfig = Field(
default_factory=CameraLiveConfig, title="Live playback settings."
)
lpr: CameraLicensePlateRecognitionConfig = Field(
default_factory=CameraLicensePlateRecognitionConfig, title="LPR config."
)
motion: Optional[MotionConfig] = Field(
None, title="Motion detection configuration."
)
motion: MotionConfig = Field(None, title="Motion detection configuration.")
objects: ObjectConfig = Field(
default_factory=ObjectConfig, title="Object configuration."
)
@@ -87,6 +98,10 @@ class CameraConfig(FrigateBaseModel):
review: ReviewConfig = Field(
default_factory=ReviewConfig, title="Review configuration."
)
semantic_search: CameraSemanticSearchConfig = Field(
default_factory=CameraSemanticSearchConfig,
title="Semantic search configuration.",
)
snapshots: SnapshotsConfig = Field(
default_factory=SnapshotsConfig, title="Snapshot configuration."
)


@@ -29,6 +29,10 @@ class StationaryConfig(FrigateBaseModel):
default_factory=StationaryMaxFramesConfig,
title="Max frames for stationary objects.",
)
classifier: bool = Field(
default=True,
title="Enable visual classifier for determing if objects with jittery bounding boxes are stationary.",
)
class DetectConfig(FrigateBaseModel):


@@ -1,12 +1,12 @@
from enum import Enum
from typing import Optional, Union
from typing import Any, Optional
from pydantic import BaseModel, Field, field_validator
from pydantic import Field
from ..base import FrigateBaseModel
from ..env import EnvString
__all__ = ["GenAIConfig", "GenAICameraConfig", "GenAIProviderEnum"]
__all__ = ["GenAIConfig", "GenAIProviderEnum"]
class GenAIProviderEnum(str, Enum):
@@ -16,70 +16,13 @@ class GenAIProviderEnum(str, Enum):
ollama = "ollama"
class GenAISendTriggersConfig(BaseModel):
tracked_object_end: bool = Field(
default=True, title="Send once the object is no longer tracked."
)
after_significant_updates: Optional[int] = Field(
default=None,
title="Send an early request to generative AI when X frames accumulated.",
ge=1,
)
# uses BaseModel because some global attributes are not available at the camera level
class GenAICameraConfig(BaseModel):
enabled: bool = Field(default=False, title="Enable GenAI for camera.")
use_snapshot: bool = Field(
default=False, title="Use snapshots for generating descriptions."
)
prompt: str = Field(
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(
default_factory=dict, title="Object specific prompts."
)
objects: Union[str, list[str]] = Field(
default_factory=list,
title="List of objects to run generative AI for.",
)
required_zones: Union[str, list[str]] = Field(
default_factory=list,
title="List of required zones to be entered in order to run generative AI.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails sent to generative AI for debugging purposes.",
)
send_triggers: GenAISendTriggersConfig = Field(
default_factory=GenAISendTriggersConfig,
title="What triggers to use to send frames to generative AI for a tracked object.",
)
@field_validator("required_zones", mode="before")
@classmethod
def validate_required_zones(cls, v):
if isinstance(v, str) and "," not in v:
return [v]
return v
class GenAIConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable GenAI.")
prompt: str = Field(
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(
default_factory=dict, title="Object specific prompts."
)
"""Primary GenAI Config to define GenAI Provider."""
api_key: Optional[EnvString] = Field(default=None, title="Provider API key.")
base_url: Optional[str] = Field(default=None, title="Provider base url.")
model: str = Field(default="gpt-4o", title="GenAI model.")
provider: GenAIProviderEnum = Field(
default=GenAIProviderEnum.openai, title="GenAI provider."
provider: GenAIProviderEnum | None = Field(default=None, title="GenAI provider.")
provider_options: dict[str, Any] = Field(
default={}, title="GenAI Provider extra options."
)


@@ -10,7 +10,7 @@ __all__ = ["NotificationConfig"]
class NotificationConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable notifications")
email: Optional[str] = Field(default=None, title="Email required for push.")
cooldown: Optional[int] = Field(
cooldown: int = Field(
default=0, ge=0, title="Cooldown period for notifications (time in seconds)."
)
enabled_in_config: Optional[bool] = Field(


@@ -1,10 +1,10 @@
from typing import Any, Optional, Union
from pydantic import Field, PrivateAttr, field_serializer
from pydantic import Field, PrivateAttr, field_serializer, field_validator
from ..base import FrigateBaseModel
__all__ = ["ObjectConfig", "FilterConfig"]
__all__ = ["ObjectConfig", "GenAIObjectConfig", "FilterConfig"]
DEFAULT_TRACKED_OBJECTS = ["person"]
@@ -49,12 +49,69 @@ class FilterConfig(FrigateBaseModel):
return None
class GenAIObjectTriggerConfig(FrigateBaseModel):
tracked_object_end: bool = Field(
default=True, title="Send once the object is no longer tracked."
)
after_significant_updates: Optional[int] = Field(
default=None,
title="Send an early request to generative AI when X frames accumulated.",
ge=1,
)
class GenAIObjectConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable GenAI for camera.")
use_snapshot: bool = Field(
default=False, title="Use snapshots for generating descriptions."
)
prompt: str = Field(
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(
default_factory=dict, title="Object specific prompts."
)
objects: Union[str, list[str]] = Field(
default_factory=list,
title="List of objects to run generative AI for.",
)
required_zones: Union[str, list[str]] = Field(
default_factory=list,
title="List of required zones to be entered in order to run generative AI.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails sent to generative AI for debugging purposes.",
)
send_triggers: GenAIObjectTriggerConfig = Field(
default_factory=GenAIObjectTriggerConfig,
title="What triggers to use to send frames to generative AI for a tracked object.",
)
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of generative AI."
)
@field_validator("required_zones", mode="before")
@classmethod
def validate_required_zones(cls, v):
if isinstance(v, str) and "," not in v:
return [v]
return v
class ObjectConfig(FrigateBaseModel):
track: list[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
filters: dict[str, FilterConfig] = Field(
default_factory=dict, title="Object filters."
)
mask: Union[str, list[str]] = Field(default="", title="Object mask.")
genai: GenAIObjectConfig = Field(
default_factory=GenAIObjectConfig,
title="Config for using genai to analyze objects.",
)
_all_objects: list[str] = PrivateAttr()
@property

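A minimal sketch of the per-object GenAI settings in their new home under ObjectConfig (values are illustrative).
objects = ObjectConfig(
    track=["person", "car"],
    genai=GenAIObjectConfig(
        enabled=True,
        objects=["person"],
        required_zones=["porch"],
    ),
)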

@@ -22,27 +22,31 @@ __all__ = [
DEFAULT_TIME_LAPSE_FFMPEG_ARGS = "-vf setpts=0.04*PTS -r 30"
class RecordRetainConfig(FrigateBaseModel):
days: float = Field(default=0, ge=0, title="Default retention period.")
class RetainModeEnum(str, Enum):
all = "all"
motion = "motion"
active_objects = "active_objects"
class RecordRetainConfig(FrigateBaseModel):
days: float = Field(default=0, title="Default retention period.")
mode: RetainModeEnum = Field(default=RetainModeEnum.all, title="Retain mode.")
class ReviewRetainConfig(FrigateBaseModel):
days: float = Field(default=10, title="Default retention period.")
days: float = Field(default=10, ge=0, title="Default retention period.")
mode: RetainModeEnum = Field(default=RetainModeEnum.motion, title="Retain mode.")
class EventsConfig(FrigateBaseModel):
pre_capture: int = Field(
default=5, title="Seconds to retain before event starts.", le=MAX_PRE_CAPTURE
default=5,
title="Seconds to retain before event starts.",
le=MAX_PRE_CAPTURE,
ge=0,
)
post_capture: int = Field(
default=5, ge=0, title="Seconds to retain after event ends."
)
post_capture: int = Field(default=5, title="Seconds to retain after event ends.")
retain: ReviewRetainConfig = Field(
default_factory=ReviewRetainConfig, title="Event retention settings."
)
@@ -77,8 +81,12 @@ class RecordConfig(FrigateBaseModel):
default=60,
title="Number of minutes to wait between cleanup runs.",
)
retain: RecordRetainConfig = Field(
default_factory=RecordRetainConfig, title="Record retention settings."
continuous: RecordRetainConfig = Field(
default_factory=RecordRetainConfig,
title="Continuous recording retention settings.",
)
motion: RecordRetainConfig = Field(
default_factory=RecordRetainConfig, title="Motion recording retention settings."
)
detections: EventsConfig = Field(
default_factory=EventsConfig, title="Detection specific retention settings."


@@ -26,6 +26,10 @@ class AlertsConfig(FrigateBaseModel):
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of alerts."
)
cutoff_time: int = Field(
default=40,
title="Time to cutoff alerts after no alert-causing activity has occurred.",
)
@field_validator("required_zones", mode="before")
@classmethod
@@ -48,6 +52,10 @@ class DetectionsConfig(FrigateBaseModel):
default_factory=list,
title="List of required zones to be entered in order to save the event as a detection.",
)
cutoff_time: int = Field(
default=30,
title="Time to cutoff detection after no detection-causing activity has occurred.",
)
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of detections."
@@ -62,6 +70,39 @@ class DetectionsConfig(FrigateBaseModel):
return v
class GenAIReviewConfig(FrigateBaseModel):
enabled: bool = Field(
default=False,
title="Enable GenAI descriptions for review items.",
)
alerts: bool = Field(default=True, title="Enable GenAI for alerts.")
detections: bool = Field(default=False, title="Enable GenAI for detections.")
additional_concerns: list[str] = Field(
default=[],
title="Additional concerns that GenAI should make note of on this camera.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails sent to generative AI for debugging purposes.",
)
enabled_in_config: Optional[bool] = Field(
default=None, title="Keep track of original state of generative AI."
)
preferred_language: str | None = Field(
title="Preferred language for GenAI Response",
default=None,
)
activity_context_prompt: str = Field(
default="""- **Zone context is critical**: Private enclosed spaces (back yards, back decks, fenced areas, inside garages) are resident territory where brief transient activity, routine tasks, and pet care are expected and normal. Front yards, driveways, and porches are semi-public but still resident spaces where deliveries, parking, and coming/going are routine. Consider whether the zone and activity align with normal residential use.
- **Person + Pet = Normal Activity**: When both "Person" and "Dog" (or "Cat") are detected together in residential zones, this is routine pet care activity (walking, letting out, playing, supervising). Assign Level 0 unless there are OTHER strong suspicious behaviors present (like testing doors, taking items, etc.). A person with their pet in a residential zone is baseline normal activity.
- Brief appearances in private zones (back yards, garages) are normal residential patterns.
- Normal residential activity includes: residents, family members, guests, deliveries, services, maintenance workers, routine property use (parking, unloading, mail pickup, trash removal).
- Brief movement with legitimate items (bags, packages, tools, equipment) in appropriate zones is routine.
""",
title="Custom activity context prompt defining normal activity patterns for this property.",
)
class ReviewConfig(FrigateBaseModel):
"""Configure reviews"""
@@ -71,3 +112,6 @@ class ReviewConfig(FrigateBaseModel):
detections: DetectionsConfig = Field(
default_factory=DetectionsConfig, title="Review detections config."
)
genai: GenAIReviewConfig = Field(
default_factory=GenAIReviewConfig, title="Review description genai config."
)


@@ -0,0 +1,147 @@
"""Convenience classes for updating configurations dynamically."""
from dataclasses import dataclass
from enum import Enum
from typing import Any
from frigate.comms.config_updater import ConfigPublisher, ConfigSubscriber
from frigate.config import CameraConfig, FrigateConfig
class CameraConfigUpdateEnum(str, Enum):
"""Supported camera config update types."""
add = "add" # for adding a camera
audio = "audio"
audio_transcription = "audio_transcription"
birdseye = "birdseye"
detect = "detect"
enabled = "enabled"
motion = "motion" # includes motion and motion masks
notifications = "notifications"
objects = "objects"
object_genai = "object_genai"
record = "record"
remove = "remove" # for removing a camera
review = "review"
review_genai = "review_genai"
semantic_search = "semantic_search" # for semantic search triggers
snapshots = "snapshots"
zones = "zones"
@dataclass
class CameraConfigUpdateTopic:
update_type: CameraConfigUpdateEnum
camera: str
@property
def topic(self) -> str:
return f"config/cameras/{self.camera}/{self.update_type.name}"
class CameraConfigUpdatePublisher:
def __init__(self):
self.publisher = ConfigPublisher()
def publish_update(self, topic: CameraConfigUpdateTopic, config: Any) -> None:
self.publisher.publish(topic.topic, config)
def stop(self) -> None:
self.publisher.stop()
class CameraConfigUpdateSubscriber:
def __init__(
self,
config: FrigateConfig | None,
camera_configs: dict[str, CameraConfig],
topics: list[CameraConfigUpdateEnum],
):
self.config = config
self.camera_configs = camera_configs
self.topics = topics
base_topic = "config/cameras"
if len(self.camera_configs) == 1:
base_topic += f"/{list(self.camera_configs.keys())[0]}"
self.subscriber = ConfigSubscriber(
base_topic,
exact=False,
)
def __update_config(
self, camera: str, update_type: CameraConfigUpdateEnum, updated_config: Any
) -> None:
if update_type == CameraConfigUpdateEnum.add:
self.config.cameras[camera] = updated_config
self.camera_configs[camera] = updated_config
return
elif update_type == CameraConfigUpdateEnum.remove:
self.config.cameras.pop(camera)
self.camera_configs.pop(camera)
return
config = self.camera_configs.get(camera)
if not config:
return
if update_type == CameraConfigUpdateEnum.audio:
config.audio = updated_config
elif update_type == CameraConfigUpdateEnum.audio_transcription:
config.audio_transcription = updated_config
elif update_type == CameraConfigUpdateEnum.birdseye:
config.birdseye = updated_config
elif update_type == CameraConfigUpdateEnum.detect:
config.detect = updated_config
elif update_type == CameraConfigUpdateEnum.enabled:
config.enabled = updated_config
elif update_type == CameraConfigUpdateEnum.object_genai:
config.objects.genai = updated_config
elif update_type == CameraConfigUpdateEnum.motion:
config.motion = updated_config
elif update_type == CameraConfigUpdateEnum.notifications:
config.notifications = updated_config
elif update_type == CameraConfigUpdateEnum.objects:
config.objects = updated_config
elif update_type == CameraConfigUpdateEnum.record:
config.record = updated_config
elif update_type == CameraConfigUpdateEnum.review:
config.review = updated_config
elif update_type == CameraConfigUpdateEnum.review_genai:
config.review.genai = updated_config
elif update_type == CameraConfigUpdateEnum.semantic_search:
config.semantic_search = updated_config
elif update_type == CameraConfigUpdateEnum.snapshots:
config.snapshots = updated_config
elif update_type == CameraConfigUpdateEnum.zones:
config.zones = updated_config
def check_for_updates(self) -> dict[str, list[str]]:
updated_topics: dict[str, list[str]] = {}
# get all updates available
while True:
update_topic, update_config = self.subscriber.check_for_update()
if update_topic is None or update_config is None:
break
_, _, camera, raw_type = update_topic.split("/")
update_type = CameraConfigUpdateEnum[raw_type]
if update_type in self.topics:
if update_type.name in updated_topics:
updated_topics[update_type.name].append(camera)
else:
updated_topics[update_type.name] = [camera]
self.__update_config(camera, update_type, update_config)
return updated_topics
def stop(self) -> None:
self.subscriber.stop()
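A minimal end-to-end sketch of the updater pair (the camera name and the updated config instance are illustrative).
publisher = CameraConfigUpdatePublisher()
publisher.publish_update(
    CameraConfigUpdateTopic(CameraConfigUpdateEnum.notifications, "front_door"),
    new_notifications_config,  # hypothetical NotificationConfig instance
)

subscriber = CameraConfigUpdateSubscriber(
    config, config.cameras, [CameraConfigUpdateEnum.notifications]
)
updates = subscriber.check_for_updates()
# e.g. {"notifications": ["front_door"]}; the matching CameraConfig has already been
# mutated in place with the new notifications settings.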

Some files were not shown because too many files have changed in this diff.