* Update Unifi specific configuration
Provided more specific detail on the modifications required for Unifi camera rtsps links: change to rtspx to remove authentication, and remove the ?enableSrtp suffix so the stream functions over TCP. Provided a sample configuration for a Unifi camera.
* Update docs/docs/configuration/camera_specific.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/camera_specific.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Comment out timezone as it should not be set with None if copied
* Use "" for ffmpeg: so it does not appear as comment
* Add example to timezone setting
* Fixed extension of config file
Using frigate.yml as the config file for the HA addon gives a validation error, the same contents in frigate.yaml work.
* More accurate description of config file handling.
* Update docs/docs/configuration/index.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
It supports the same entrypoints, given that tflite is a small cut-out
of the big tensorflow picture.
This patch was created for downstream usage in nixpkgs, where we don't
have the tflite python package, but do have the full tensorflow package.
* Add docs for time / date styling
* Convert 12hour time format option to enum
* Change option in web
* Add docs with examples
* Fix errors in docs
* Fix mismatched names
* Upgrade s6-overlay from 3.1.3.0 to 3.1.4.0
* Add go2rtc healthcheck service
Also don't make go2rtc exits cause the container to fail.
* Reword healthcheck message
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add timeout to go2rtc healthcheck
* Update healthcheck message
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Give additional time for go2rtc start/restart
* Fix typo
* Avoid creating go2rtc config multiple times
* Fix healthcheck not starting
* Fix sleep
* Fix more hidden logs
* Decrease time window and use curl's timeout flag
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
The Yolov7-640, Yolov7-320, Yolov7x-640, and Yolov7x-320 models were added to the download_yolo.sh script used when generating TensorRT models, so those models can now be generated
* Initial commit that adds YOLOv5 and YOLOv8 support for OpenVINO detector
* Fixed double inference bug with YOLOv5 and YOLOv8
* Modified documentation to mention YOLOv5 and YOLOv8
* Changes to pass lint checks
* Change minimum threshold to improve model performance
* Fix link
* Clean up YOLO post-processing
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add ability for GPU device to be automatically detected when multiple exist
* Add logging info
* Fix access
* Fix
* Formatting
* Fix path of device
* Use log error instead of raise
* Remove log which could apply to other cases
* Set default value
* rework logic and support auto gpu selection for encoding gpu as well
* dont wait so long for queues
* implement stop methods for comms
* set the detection events on exit and return early from processing
* handle the stop event in the broadcast threads
* short circuit the detection process exit code if it already exited
* some logging for stats thread
* just keep the log process alive 1 second after the last log message
* ensure the multiprocessing queues are emptied and closed
* Update frigate/log.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update frigate/log.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* mypy fixes
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Initial commit to enable Yolox models with OpenVINO in Frigate
* Fix ModelEnumtType import error in openvino.py
* Initial edit of the docs to include verbiage about yolox
* Initial edit of the docs to include verbiage about yolox
* Elaborate configuration and limitations in docs.
* Add capability to dynamically determine number of classes in yolox model
* Further refinements
* Removed unnecessary comments, improved documentation, addressed PR items
* Fixed lint formatting issues
* Update live.md
Placed `ffmpeg:http_cam#audio=opus` in quotes so it doesn't appear as commented out in docs.
* Update restream.md
Placed `ffmpeg:http_cam#audio=opus` in quotes so it doesn't appear as commented out in docs.
* Restart ffmpeg if process exceeds detect fps by 10
* Update frigate/video.py
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* spelling
---------
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Auto discover internal WebRTC candidate for add-on
* Write logs to stderr
* Fix port number
* Integrate with newest changes
* Update docs
* Use local variable more
* Use Python to write file, fix JSON->YAML
* Store into variable
* Update docs/docs/configuration/live.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/live.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/live.md
* Update docs/docs/configuration/live.md
* Refactor s6 scripts to the new format
* Remove unneeded workaround
* Update docker/rootfs/usr/local/go2rtc/create_config.py
* Migrate logging to new s6 format
* Remove more unnecessary s6 variables
* Fix prepare-log and when go2rtc is not present in config
* Restart the whole container if either Frigate or go2rtc fails
* D
* Fix service name in finish
* Fix nginx finish comment
* Restart improvements
* Fix devcontainer
* Fix format
* Update Dockerfile
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Improve scripts logging
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add config fields
* Clean up camera default values
* Set recordings timezone with config if available
* Adjust for timezone config
* Cleanup setting of the timezone
* Don't fail on MSE check iPad
* Fix MSE check for birdseye
* Add docs
* Fix test
* Update readme features
* Remove RTMP setup from setup guide
* Update integration for RTSP
* Remove rtmp from faq
* Remove RTMP stream from guide
* Remove rtmp from install
* Remove rtmp from dev config
* Add in_progress parameter to /api/events to filter the results.
* Change in_progress to default to no filtering, 0 means no in progress, 1 means only in progress.
* Fix code format with black.
* Clear blank line.
* System page: various minor UI tweaks
* Be consistent with capitalisation
* Change detection start epoch to running/idle
* Remove detection start column entirely
* Show jsmpeg when restream is disabled
* Fix debug button status not showing correctly when switching cameras
* Change label to make clear only motion masks are shown
The link to the home assistant integration documentation was missing the leading slash which caused the path to be appended to the `/frigate` path of this page.
* Add missing labels to default labelmap. Fill any holes with "unknown". Remove unique labelmap for tensorrt.
* Replace "truck" with "car" on Openvino labelmap
* Add tables for ffmpeg presets and how to use them
* Make it clear that ffmpeg processes may not show when nvidia-smi is run inside the container
* Add specific example of mixed input arg presets
* Update docs/docs/configuration/ffmpeg_presets.md
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* typos
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Remove branch from URL to tensorrt_models.sh
* Reword to make TensorRT model singular
* Add note about installing nvidia docker runtime and compatible drivers
* add information about frigate plus to docs
* Update docs/docs/integrations/plus.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* check stream specific hwaccel_args for gpu stats
* fix indentation
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* check special chars for linter
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add video codec to restream config
* Add handling of encode engine and video codec
* Add test for video encoding
* Set in main configuration docs as well
* Add example to restream docs
* Put back patch
* Update recommended hardware page to reflect multiple detectors are supported
* Shift numbers around slightly
* Update with specific range
* Update with new observed range
* Add i5 example
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* Should support arm32 as well
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* Add more detail to supported platforms
* Fix typo
* Format table
* Fix table header
* Add info about tensorrt detectors and link to docs
* Add info about tensorrt detectors and link to docs
Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
* [API] filter for favorite events
* Added /api/events filter for favorite (retain_indefinitely) events
* New Star button to filter for favorite events on the Events page
* fix python formatting
* keep Events favorite button to right side
* Try using RTSP for restream
* Add ability to get snapshot of birdseye when birdseye restream is enabled
* Write to pipe instead of encoding mpeg1
* Write to cache instead
* Use const for location
* Formatting
* Add hardware encoding for birdseye based on ffmpeg preset
* Provide framerate
* Adjust args
* Fix order
* Delete pipe file if it exists
* Cleanup spacing
* Fix spacing
* Initial WIP dockerfile and scripts to add tensorrt support
* Add tensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
* Add config for selecting GPU. Fix Async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support
* Add sub filter for monaco editor
* Don't include files for unused languages
* Move necessary files and cleanup build
* Update sub filter for new location
* Still need to include default editor worker
* Fix error when model already exists
* Show All and Solo selection buttons for MultiSelect.
Similar to previous behavior of viewing events for single camera with 1 click, or All.
* Fix visual bug with MultiSelect when selecting similar named options.
e.g. options like frontdoor, frontside, backdoor, etc
* fix key prop for lint
* Change MultiSelect onSolo to onSelectSingle
* cosmetic changes on MultiSelect
* Different look for SelectSingle buttons
* Show All button is aligned with the items below it, no matter the popup size
* MultiSelect ShowAll unfocused by default
* Use prewrap so vainfo output appears normalized
* Move copy button to top so user doesn't need to scroll to copy logs
* Show calculating if no value for stream bandwidth
* Write files in UTC and update folder structure to not conflict
* Add timezone arg for events summary
* Fixes for timezone in calls
* Use timezone for recording and recordings summary endpoints
* Fix sqlite parsing
* Fix sqlite parsing
* Fix recordings summary with timezone
* Fix
* Formatting
* Add pytz
* Fix default timezone
* Add note about times being displayed in localtime
* Improve timezone wording and show actual timezone
* Add alternate endpoint to cover existing usecase to avoid breaking change
* Formatting
* Catch when recording segments are not being written to cache and restart ffmpeg responsible for record
* Ensure this check is only run for role with record
* Fix formatting
* Redo recordings validator to watch segments time and restart if no segment for 30 seconds
* Formatting
* Increase wait time to 120 seconds and improve error message
* Add more config checks for record args and add test
* Formatting
* Specify output args.
* Log all services to RAM
* Gracefully handle shutdown
* Add logs route
* Remove tail
* Return logs for services
* Display log chooser with copy button
* show logs for specific services
* Clean up settings logs
* Add copy functionality to logs
* Add copy functionality to logs
* Fix merge
* Set archive count to 0
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Log all services to RAM
* Fix tests workdir
* Rotate logs when they reach 10MB and keep only 1 archive
* Gracefully handle shutdown
* Add note about gracetime not working
* Fix logs permission, create fake logs for devcontainer
* Remove empty line
* Update docker/rootfs/etc/services.d/frigate/run
* Fix fake Frigate shebang
* Add raw config endpoint
* Add config editor
* Add code editor
* Add error
* Add ability to copy config
* Only show the save button when code has been edited
* Update errors
* Remove debug config from system page
* Break out config saving steps to pinpoint where error occurred.
* Show correct config errors
* Switch to monaco editor
* Adjust UI colors and behavior
* Get yaml validation working
* Set success color
* Initial work for adding OpenVino detector. Not functional
* Load model and submit for inference.
Successfully load model and initialize OpenVino engine with either CPU or GPU as device.
Does not parse results for objects.
* Detection working with ssdlite_mobilenetv2 FP16 model
* Add OpenVIno support and model to docker image
* Add documentation for OpenVino detector configuration
* Adds support for ARM32/ARM64 and the Myriad X hardware
- Use custom-built openvino wheel for all platforms
- Add libusb build without udev for NCS2 support
* Add documentation around Intel CPU requirements and NCS2 setup
* Print all available output tensors
* Update documentation for config parameters
* Get storage output stats for each camera
* Add storage route
* Add storage route
* Add storage page
* Cleanup
* Add stats and show more storage
* Add tests for mb abbrev util fun
* Rewrite storage logic to use storage maintainer and segment sizes
* Include storage maintainer for http
* Use correct format
* Remove utils
* Fix tests
* Remove total from equation
* Multiply by 100 to get percent
* Add basic storage info
* Fix storage stats
* Fix endpoint and ui
* Fix formatting
* Add hwaccel presets
* Use hwaccel presets
* Add input arg presets
* Use input arg presets
* Make util to clean up redundant code
* Add support for output arg presets
* Add tests
* Update camera specific to use presets
* Update hwaccel to use presets
* Format files and fix tests
* Rewrite tests to test record correctly
* Move presets from string to list to avoid manually separating into a list
* Add mjpeg cuvid decoder preset
* Fix tests
* Fix comment
* Move to images specific folder
* Send error image when camera stream is not available
* Immediately set camera_fps to 0 if camera crashes
* Cache error image so it is not read from file system on each run
* Move camera fps set
* Add ffprobe endpoint
* Get ffprobe for multiple inputs
* Copy ffprobe in output
* Fix bad if statement
* Return full output of ffprobe process
* Return full output of ffprobe process
* Make ffprobe button show dialog with output and option to copy
* Add driver names to consts
* Add driver env var name
* Setup general tracking for GPU stats
* Catch RPi args as well
* Add util to get radeontop results
* Add real amd GPU stats
* Fix missed arg
* pass config
* Use only the values
* Fix vram
* Add nvidia gpu stats
* Use nvidia stats
* Add chart for gpu stats
* Format AMD with space between percent
* Get correct nvidia %
* Start to add support for intel GPU stats
* Block out RPi as util is not currently available
* Formatting
* Fix mypy
* Strip for float conversion
* Strip for float conversion
* Fix percent formatting
* Remove name from gpu map
* Add tests and fix AMD formatting
* Add nvidia gpu stats test
* Formatting
* Add intel_gpu_top for testing
* Formatting
* Handle case where hwaccel is not setup
* Formatting
* Check to remove none
* Don't use set
* Cleanup and fix types
* Handle case where args is list
* Fix mypy
* Cast to str
* Fix type checking
* Return none instead of empty
* Fix organization
* Make keys consistent
* Make gpu match style
* Get support for vainfo
* Add vainfo endpoint
* Set vainfo output in error correctly
* Remove duplicate function
* Fix errors
* Do cpu & gpu work asynchronously
* Fix async
* Fix event loop
* Fix crash
* Fix naming
* Send empty data for gpu if error occurs
* Show error if gpu stats could not be retrieved
* Fix mypy
* Fix test
* Don't use json for vainfo
* Fix cross references
* Strip unicode still
* await vainfo response
* Add gpu deps
* Formatting
* remove comments
* Use empty string
* Add vainfo back in
* Add option for mqtt config
* Setup communication layer
* Have a dispatcher which is responsible for handling and sending messages
* Move mqtt to communication
* Separate ws communications module
* Make ws client conform to communicator
* Cleanup imports
* Migrate to new dispatcher
* Clean up
* Need to set topic prefix
* Remove references to mqtt in dispatcher
* Don't start mqtt until dispatcher is subscribed
* Cleanup
* Shorten package
* Formatting
* Remove unused
* Cleanup
* Rename mqtt to ws on web
* Fix ws mypy
* Fix mypy
* Reformat
* Cleanup if/else chain
* Catch bad set commands
* fix makefile variable
* add branch for testing
* fix arm32 build
* use amd64 for web build
* install wheels in a separate layer for better parallel builds
* try build-push-action
* try using gh context
* use short sha
* cleanup
* Refactor mqtt client
* Protect callback method
* Use async to handle reconnects
* Set types and cleanup
* Don't set connected until rc code is checked
* Make it easier to run the devcontainer
* Some more improvements
* Tidy up few other things
* Better name stages
* Fix CI
* Setup everything with one click
* Allow to set IMAGE_OWNER
* Change IMAGE_OWNER to IMAGE_REPO
* Fix CI with IMAGE_REPO
* Fix nodejs installation
* Test devcontainer build as part of CI
* Build devcontainer in its own job
* Fix devcontainer cli installation
* Fix devcontainer build
* Fix devcontainer build in CI again
* Enable buildkit only
* Increase coverage of devcontainer test
* Fix devcontainer start in CI
* Ensure latest version of docker compose is used
* Fix install compose action
* Disable CI stuff which does not work until we fix them
* Typing: events.py
* Remove unused variable
* Fix return Any from return statement
Not all elements from the event dict are sure to be something that can be evaluated
See e.g.: https://github.com/python/mypy/issues/5697
* Sort out Event disambiguity
There was a name collision of multiprocessing Event type and frigate events
Co-authored-by: Sebastian Englbrecht <sebastian.englbrecht@kabelmail.de>
* Start restream before detection
* Add docs explaining how to reduce connections to the camera
* Fix typos for consistency
* Add link to other part of doc for readability
* Update go2rtc to rc3
* Simplify ffmpeg / audio conversions
* Set ffmpeg bin location
* Manually set video as copied
* Run go2rtc with env vars
* Remove manual ffmpeg declaration
* Enable force_audio by default
* Fix test
* Move each camera to a separate card and show per process info
* Install top
* Add support for cpu usage stats
* Use cpu usage stats in debug
* Increase number of runs to ensure good results
* Add ffprobe endpoint
* Get ffprobe for multiple inputs
* Copy ffprobe in output
* Add fps to camera metrics
* Fix lint errors
* Update stats config
* Add ffmpeg pid
* Use grid display so more cameras can take less vertical space
* Fix hanging characters
* Only show the current detector
* Fix bad if statement
* Return full output of ffprobe process
* Return full output of ffprobe process
* Don't specify rtsp_transport
* Make ffprobe button show dialog with output and option to copy
* Adjust ffprobe api to take paths directly
* Add docs for ffprobe api
* Move existing checks to own functions
* Add config check for zone objects that are not tracked
* Add tests for config error
* Formatting
* Catch case where field is defined multiple times and add test
* Add warning for rtmp
* try a different approach for build_web
* add automatic image builds
* build web first
* try disabling log file
* chown dir
* use volume
* set cache path
* test a push
* limit to dev/master branch commits
* Refactor EdgeTPU and CPU model handling to detector submodules.
* Fix selecting the correct detection device type from the config
* Remove detector type check when creating ObjectDetectProcess
* Fixes after rebasing to 0.11
* Add init file to detector folder
* Rename to detect_api
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add unit test for LocalObjectDetector class
* Add configuration for model inputs
Support transforming detection regions to RGB or BGR.
Support specifying the input tensor shape. The tensor shape has a standard format ["BHWC"] when handed to the detector, but can be transformed in the detector to match the model shape using the model input_tensor config.
* Add documentation for new model config parameters
* Add input tensor transpose to LocalObjectDetector
* Change the model input tensor config to use an enumeration
* Updates for model config documentation
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* ffmpeg stopped working without correct png size
When I used my own size for the picture, ffmpeg stopped working. Only with a size of 180x180 does my frigate run normally
* Adjusting language
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Consts for regex
* Add regex for camera username and password
* Redact user:pass from ffmpeg logs
* Redact ffmpeg commands
* Move common function to util
* Add tests
* Formatting
* Remove unused imports
* Fix test
* Add port to test
* Support special characters in passwords
* Add tests for special character handling
* Remove docs about not supporting special characters
* Adding clip length in s to Events View
* added function returning human readable length
* switched to date-fns functions for formatting
* fixed switched start/end time, changed length to duration
* add option enabled for each camera in config
* Simplified If-block and removed wrong Optional
* Update Docs enabling/disabling camera in config
* correct format for option
* Disabling Camera for processes, no config changes
* Describe effects of disabled cam in documentation
* change if-logic, obsolete copy, info disabled cam
* changed color to white, added top padding in disabled camera info
* changed indentation
* Pull go2rtc dependency
* Add go2rtc to local services and add to s6
* Add relay controller for go2rtc
* Add restream role
* Add restream role
* Add restream to nginx
* Add camera live source config
* Disable RTMP by default and use restream
* Use go2rtc for camera config
* Fix go2rtc move
* Start restream on frigate start
* Send restream to camera level
* Fix restream
* Make sure jsmpeg works as expected
* Make view respect live size config
* Tweak player options to fit live view
* Adjust VideoPlayer to accept live option which disables irrelevant controls
* Add multiple options from restream live view
* Add base for webrtc option
* Setup specific restream modules
* Make mp4 the default streaming for now
* Expose 8554 for rtsp relay from go2rtc
* Formatting
* Update docs to suggest new restream method.
* Update docs to reflect restream role
* Update docs to reflect restream role
* Add webrtc player
* Improvements to webRTC
* Support webrtc
* Cleanup
* Adjust rtmp test and add restream test
* Fix tests
* Add restream tests
* Add live view docs and show different options
* Small docs tweak
* Support all stream types
* Update to beta 9 of go2rtc
* Formatting
* Make jsmpeg the default
* Support wss if made from https
* Support wss if made from https
* Use onEffect
* Set url outside onEffect
* Fix passed deps
* Update docs about required host mode
* Try memo instead
* Close websocket on changing camera
* Formatting
* Close pc connection
* Set video source to null on cleanup
* Use full path since go2rtc can't see PATH var
* Adjust audio codec to enable browser audio by default
* Cleanup stream creation
* Add restream tests
* Format tests
* Mock requests
* Adjust paths
* Move stream configs to restream
* Remove live source
* Remove live config
* Use live persistence for which view to use on each camera
* Fix live sizes
* Only use jsmpeg sizes for jsmpeg live
* Set max live size
* Remove access of live config
* Add selector for live view source in web view
* Remove RTMP from default list of roles
* Update docs
* Fix tests
* Fix docs for live view modes
* make default undefined to avoid race condition
* Wait until camera source is loaded to avoid race condition
* Fix tests
* Add config to go2rtc
* Work with config
* Set full path for config
* Set to use stun
* Check for mounted file
* Look for frigate-go2rtc
* Update docs to reflect webRTC configuration.
* Add link to go2rtc config
* Update docs to be more clear
* Update docs to be more clear
* Update format
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Update live docs
* Improve bash startup script
* Add option to force audio compatibility
* Formatting
* Fix mapping
* Fix broken link
* Update go2rtc version
* Get go2rtc webui working
* Add support for mse
* Remove mp4 option
* Undo changes to video player
* Update docs for new live view options
* Make separate path for mse
* Remove unused
* Remove mp4 path
* Try to get go2rtc proxy working
* Try to get go2rtc proxy working
* Remove unused callback
* Allow websocket on restream dashboard
* Make mse default stream option
* Fix mse sizing
* don't assume roles is defined
* Remove nginx mapping to go2rtc ui
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Add field and migration for segment size
* Store the segment size in db
* Add comment
* Add default
* Fix size parsing
* Include segment size in recordings endpoint
* Start adding storage maintainer
* Add storage maintainer and calculate average sizes
* Update comment
* Store segment and hour avg sizes per camera
* Formatting
* Keep track of total segment and hour averages
* Remove unused files
* Cleanup 2 hours of recordings at a time
* Formatting
* Fix bug
* Round segment size
* Cleanup some comments
* Handle case where segments are not deleted on initial run or is only retained segments
* Improve cleanup log
* Formatting
* Fix typo and improve logging
* Catch case where no recordings exist for camera
* Specifically define sort
* Handle edge case for cameras that only record part time
* Increase definition of part time recorder
* Remove warning about not supported storage based retention
* Add note about storage based retention to recording docs
* Add tests for storage maintenance calculation and cleanup
* Format tests
* Don't run for a camera with no recording segments
* Get size of file from cache
* Rework camera stats to be more efficient
* Remove total and other inefficiencies
* Rewrite storage cleanup logic to be much more efficient
* Fix existing tests
* Fix bugs from tests
* Add another test
* Improve logging
* Formatting
* Set back correct loop time
* Update name
* Update comment
* Only include segments that have a nonzero size
* Catch case where camera has 0 nonzero segment durations
* Add test to cover zero bandwidth migration case
* Fix test
* Incorrect boolean logic
* Formatting
* Explicitly re-define iterator
* Restructured camera specific documentation
* Make room for manufacture specific docs
* Added initial (more or less) working setup for Annke C800 camera
* Update docs/docs/configuration/camera_specific.md
remove tracking settings from example
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Moved Unifi and Blue Iris cam examples
* headline cleanup
* removed doubled headline in advanced options
* changed headline level for camera specific setup to make headlines
show up in toc
* removed specific optimizations not related to cam
* more generic phrasing
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* indentation, device ID
indentation issue: "deploy" needs to be at the same level as "image"
Device ID: use "device id" instead of "count: 1", cf. https://docs.docker.com/compose/gpu-support/
* Update docs/docs/configuration/hardware_acceleration.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Set sub label on object data if event is in progress
* Include sub_label in dict
* Don't need to set and passively get
* Formatting
* Don't expect event to be valid
* Update docs to reflect that sub label is included
* -1 to ensure indexes are correct
* Catch case of zero division
* Due to the -1, it may be negative
* Ignore source of error
The error is occurring due to a detections bounding box starting beyond the frame, this should be immediately ignored
* Formatting
* Check horizontal placement as well
* Remove original frame clamping
* Add tabs to show snapshot or thumbnail as part of event details,
even if event has a clip available.
* Add ability for TextTab to render as disabled.
* Fix VOD issues with longer keyframe intervals
* Move probe function to util
Update comment
* Use recording duration for keyFrameDurations
* Remove unused early return
* Avoid clipping first clip
* Adding configuration example for retain modes
Reading the documentation on its own didn't help me but when I found https://github.com/blakeblackshear/frigate/discussions/2447 I was able to understand how to add this to my configuration. I've added the example given in that discussion to help future readers of the page.
* Update record.md
Added suggested changes and have also added wording beneath the example mentioning the configuration can be added on a per camera basis.
Have also built on the example to add object specific retention timings - Not sure if it would be preferred to have it all within one example to not complicate things?
Let me know your thoughts
* Update record.md
Created Object Specific Retention header
* Typo
Co-authored-by: Blake Blackshear <blake@frigate.video>
* Add log message when discarding recording segments in cache
Currently Frigate silently discards recording segments in cache if there's more than "keep_count" for a camera, which can happen on high load/busy/slow systems.
This results in recording segments being lost with no apparent cause in the logs (even when set to debug).
This PR adds a warning log entry when discarding segments, in the same way as discarding corrupted segments
* Add explanatory warning and properly format cache_path warning
* lint fixes
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Set up for http tests
* Setup basics for testing and first test
* Add testing consts
* Cleanup db creation
* Add one more check to test
* Get event that does not exist
* Get events working with cleaner db
* Test retain / un-retain
* Test setting and deleting sub label
* Test getting list of sub labels
* Fix bug caught in tests
* Test deleting event
* Test getting list of events
* Expand test
* Test more event filters
* Write version module so tests don't fail on version import
* Test config
* Test recordings endpoint
* Formatting
* Remove unused imports
* Test stats
* Add cleanup files in const
* Add name to match other checks
* Use default names so filters are more clear
* Add endpoint to get list of sub labels inside DB
* Fix crash on no internet
* Cleanups for sub_label http
* Add sub label selector to events UI
* Add event filtering for sub label
* Formatting files
* Reduce size of filters to fit on one line
* Add handler for tests
* Remove unused imports
* Only show the sub labels filter when there are sub labels in the DB
* Fix tests
* Use distinct instead of group_by
* Formatting
* Cleanup event logic
* Send mqtt message when motion is detected
* Use object processing instead of passing mqtt client around
* Cleanup
* Formatting
* add comment
* Make off delay configurable.
* Handle updating each camera based on config off delay
* Formatting
* Update docker-compose.yml
* Fix processing issue
* Update mqtt docs
* Update main config docs
* Make sure multiple True values aren't published for the same motion
* Make sure multiple True values aren't published for the same motion
* Update payload to fit existing HA standard values
* Update docs to fit new values
* Update docs
* Update motion topic
* Use datetime.datetime and remove unused imports
* Cast to int
* Clarify motion detector behavior in docs
* Fix typo
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Starting working on adding motion toggle
* Add all info to mqtt command
* Send motion to correct funs
* Update mqtt docs
* Fixes for contingencies
* format
* mypy
* Tweak behavior
* Fix motion breaking frames
* Fix bad logic in detect set
* Always set value for motion boxes
* __main__.py
* app.py
* models.py
* plus.py
* stats.py
In addition, a new module was introduced: types,
where all TypedDicts are included.
* Toggle improve_contrast for cameras via MQTT
* Process parameter to mqtt toggle improve_contrast
* Update mqtt docs for improve_contrast topic
* Spacing
* Add class variable and update in process_frames
* Pass to constructor
* pass by reference mistake
* remove parameter
* remove parameter
* Add options for reordering and hiding cameras selectively
* Add newline at end of camera file
* Make each camera for birdseye togglable as well
* Update names to be less ambiguous
* Update defaults
* Include sidebar change
* Remove birdseye toggle (will be added in separate PR)
* Remove birdseye toggle (will be added in separate PR)
* Remove birdseye toggle (will be added in separate PR)
* Update sidebar to only sort cameras once
* Simplify sorting logic
* Add camera level processing for birdseye
* Add camera level birdseye configuration
* Propagate birdseye from global
* Update docs to show that birdseye is overridable
* Fix incorrect default factory
* Update note to indicate values that can be overridden
* Cleanup config accessing
* Add tests for birdseye config behavior
* Fix mistake on test format
* Update tests
* Add docs to elaborate more on stationary tracking
* Add link to guide on avoiding stationary objects in driveway scenario
* Update wording in reference config
* Small cleanups
* Update with PR comments
* Add object ratio config parameters
Issue: #2948
* Add config test for object filter ratios
Issue: #2948
* Address review comments
- Accept `ratio` default
- Rename `bounds` to `box` for consistency
- Add migration for new field
Issue: #2948
* Fix logical errors
- field migrations require default values
- `clipped` referenced the wrong index for region, since it shifted
- missed an inclusion of `ratio` for detections in `process_frames`
- revert naming `o[2]` as `box` since it is out of scope!
This has now been test-run against a video, so I believe the kinks are
worked out.
Issue: #2948
* Update contributing notes for `make`
Issue: #2948
* Fix migration
- Ensure that defaults match between Event and migration script
- Deconflict migration script number (from rebase)
Issue: #2948
* Filter objects out of ratio bounds
Issue: #2948
* Update migration file to 009
Issue: #2948
* Add sub label to model and set / delete funs
* Add migrations for sub label
* Tweaks to API and model
* Show sublabel if available
* Cleanups
* Update docs
* Show person in UI title
* Fix typo and don't fail on no json
* Transfer sub labels for in progress events
* Remove sublabel reset
* Remove person only check
* Make default null
* Update docs and formatting
* Make default null
* Make nullable in migration
* Undo null
* Update model to accept null
* Update migration to accept null
* Don't set to default values
* Remove redundant defaults and update http logic
* Only need a single route
* Enforce 20 character limit in http
* Update docs to mention 20 character limit
* Cleanup
* Separate insert and update to make sure updated values are retained when event ends
* Use insert instead of replace
* Remove redundant if and have should_update_db include clip or snapshot requirement.
* Added object thumbnail def and made camera tracked objects use it.
* Add object snapshot def
* Remove documentation for best.jpg
* Update docs for label thumbnail and snapshot defs
* new datepicker
* dev
* dev
* dev
* fix for version 0.10
* added rounded corners for date range
* lint
* Commented out some Select.test.
* improved date range selection
* improved functions with useCallback
* improved Select.test.jsx
* keyboard navigation
* keyboard navigation
* added dropdown menu icon
* Hide filters on xs, Button to show
* check if too far left before right
* Filter button text
* improved local timezone
* Improve audio convert guide
* Mention faq in RTMP configuration
* Add example for audio conversion tip
* Change comma to period
* Explain why this is needed
Based on issue #1976 - specify explicitly that these fields can include environment variables to avoid interpretation that environment variables could be used anywhere.
I am participating in #hacktoberfest, so I would appreciate if you could add the 'hacktoberfest-accepted' label (or add #hacktoberfest topic to your repo). Thanks!
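As a hedged illustration of what this change documents, credentials can be pulled from the environment rather than hard-coded (a minimal sketch; the `FRIGATE_RTSP_PASSWORD` name follows Frigate's convention of substituting variables prefixed with `FRIGATE_`, and the camera details are placeholders):
```yaml
cameras:
  front:
    ffmpeg:
      inputs:
        # {FRIGATE_RTSP_PASSWORD} is replaced with the environment variable's value at runtime
        - path: rtsp://admin:{FRIGATE_RTSP_PASSWORD}@camera-ip:554/stream
          roles:
            - detect
```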
Hoped to investigate this with my dev board at some point. In the meantime, added a warning for others who may experience it when upgrading to the new stable release.
* rearrange event route and split into several components
* useIntersectionObserver
* re-arrange
* searchstring improvement
* added xs tailwind breakpoint
* useOuterClick hook
* cleaned up
* removed some video controls for mobile devices
* lint
* moved hooks to global folder
* moved buttons for small devices
* added button groups
Co-authored-by: Bernt Christian Egeland <cbegelan@gmail.com>
* 📝✅🔧 - Make RTMP config global
Fixes #1671
* 📝✅🔧 - Make timestamp style config global
Fixes #1656
* fix test function names
* formatter
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* fixed position for Dialog
* added eventId to deleted item
* removed page route redirect + New Close Button
* event component added to events list. New delete reducer
* removed event route
* moved delete reducer to event page
* removed redundant event details
* keep aspect ratio
* keep aspect ratio
* removed old buttons - repositioned to top
* removed console.log
* event view function
* removed clip header
* top position
* centered image if no clips avail
* comments
* linting
* lint
* added scrollIntoView when event has been mounted
* added Clip header
* added scrollIntoView to test
* lint
* useRef to scroll event into view
* removed unused functions
* reverted changes to event.test
* scroll into view
* moved delete reducer
* removed commented code
* styling
* moved close button to right side
* Added new close svg icon
Co-authored-by: Bernt Christian Egeland <cbegelan@gmail.com>
In the homeassistant app, the notification timestamp is generated when the push message is received by the app. Delays caused by servers, device load, or network latency/availability will delay those pushes - so in the following case:
1:00 - A dog is detected in the front
1:02 - It stops moving around or leaves view, last notification push sent
1:05 - The phone connects to the network
The user, seeing the alert at 1:05, will see that the notification occurred "a few seconds ago", since the timestamp the app sends to the OS was at 1:05. By adding the `when` parameter, it will instead correctly show that the event was triggered at 1:00.
This is exacerbated by the fact that the default behavior of android pushes won't wake the device from deep sleep - in order to receive it as a high priority notification, the additional parameters
```
data:
priority: high
ttl: 0
```
have to be added.
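As a hedged sketch of the fix described above, a Home Assistant notification action can attach the event start time via `when` (the service name and template are hypothetical; `after.start_time` is the field in Frigate's MQTT event payload):
```yaml
service: notify.mobile_app_my_phone
data:
  message: "A dog was detected in the front"
  data:
    # set the notification timestamp to when the event actually started
    when: "{{ trigger.payload_json['after']['start_time'] | int }}"
    priority: high
    ttl: 0
```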
* Add FAQ section
Add FAQ section and verbiage about a finding with camera motion sensors in HomeKit.
* Changes made based on inputs
* Fix markdown
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
Refs #1440
Indicate that width and height are only used for the detect role, so streams with other roles are passed through and resolution is not needed.
If jpg_bytes wasn't retrieved from either disk or a tracked object, respond with 404
Prevents uncaught error for unknown event ids sent to event_snapshot endpoint
Lock updates to tracked objects, current frame time, motion boxes, and
regions on `update()`.
Directly create Counters using counted values.
Don't convert removed_ids, new_ids, or updated_ids sets to lists.
Update defaultdict's to remove un-necessary lambdas when possible.
When possible, drop unnecessary list comprehensions, such as when
calling `any`.
Use set comprehension, rather than passing a list comprehension into
`set()`.
Do the slightly more pythonic `x not in y` rather than `not x in y` to
check list inclusion.
Use config data classes to eliminate some of the boilerplate associated
with setting up the configuration. In particular, using dataclasses
removes a lot of the boilerplate around assigning properties to the
object and allows these to be easily immutable by freezing them. In the
case of simple, non-nested dataclasses, this also provides more
convenient `asdict` helpers.
To set this up, where previously the objects would be parsed from the
config via the `__init__` method, create a `build` classmethod that does
this and calls the dataclass initializer.
Some of the objects are mutated at runtime, in particular some of the
zones are mutated to set the color (this might be able to be refactored
out) and some of the camera functionality can be enabled/disabled. Some
of the configs with `enabled` properties don't seem to have mqtt hooks
to be able to toggle this, in particular, the clips, snapshots, and
detect can be toggled but rtmp and record configs do not, but all of
these configs are still not frozen in case there is some other
functionality I am missing.
There are a couple other minor fixes here, one that was introduced
by me recently where `max_seconds` was not defined, the other to
properly `get()` the message payload when handling publishing mqtt
messages sent via websocket.
Use `np.unique` to determine the correct set of row/col pairs to iterate
over when doing the object matching without needing to track which rows
or columns have already been seen. Add to some of the accompanying
documentation to clarify this algorithm.
Also fix what looks to be an erroneous early return, and change this to
a continue.
Generally eliminate the `while True` loops while waiting for a stop
event and prefer to condition the loops on if the stop event is set,
blocking on that where it makes sense. This generally comes in 3
flavors. First and simplest, when there is a sleep and the stop event
is the only thing the loop blocks on, instead do a check using
`stop_event.wait(timeout)` to instead block on the stop event for the
designated amount of time. Second, when there is a different event that
is blocking in the loop, condition the loop on `stop_event.is_set()`
rather than breaking when it is set. Finally, when there is a separate
internal condition that requires a counter, have the loop iterate over
the counter and use `if stop_event.wait(timeout)` internal to the loop.
When running ffprobe, use `subprocess.run` rather than
`subprocess.Popen`. This simplifies the handling that is needed to run
and process the outputs. Here, filename parsing is also simplified by
explicitly removing the file extension with `os.path.splitext` and
forcing a single split into the camera name and the formatted date.
It is not possible--unless I'm totally overlooking something--to define the add-on's configuration in the GUI. The user must define the configuration in a frigate.yml file.
StatsEmitter thread to send stats to MQTT every 60 seconds by default, optional stats_interval config value.
New service stats attribute, containing uptime in seconds and version.
Frigate provides an HTTP server that can be used to detect if frigate is running or not. Using the docker-compose "healthcheck" feature we can set automations to restart the service if it stops working.
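A minimal docker-compose sketch of such a healthcheck, assuming the API is reachable on port 5000 inside the container (the endpoint, tooling, and interval values are illustrative, not prescriptive):
```yaml
services:
  frigate:
    # ... image, volumes, etc.
    restart: unless-stopped
    healthcheck:
      # fail the check if the HTTP server stops answering
      test: ["CMD", "curl", "--fail", "--silent", "http://localhost:5000/api/version"]
      interval: 30s
      timeout: 10s
      retries: 3
```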
I have very little experience with Docker, but it seems the command in the README has two mistakes in it:
- unknown shorthand flag: 'n' in -name
- docker: Error response from daemon: Invalid container name (blakeblackshear/frigate:stable), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed.
I am running Docker version 19.03.13-ce, build 4484c46d9d on Arch linux.
* Add object size to the bounding box
Remove script from Dockerfile
Fix framerate command
Move default value for framerate
update dockerfile
dockerfile changes
Add person_area label to surrounding box
Update dockerfile
ffmpeg config bug
Add `person_area` label to `best_person` frame
Resolve debug view showing area label for non-persons
Add object size to the bounding box
Add object size to the bounding box
* Move object area outside of conditional to work with all object types
```yaml
# Warns and then closes issues and PRs that have had no activity for a specified amount of time.
# https://github.com/actions/stale
name: "Stalebot"
on:
  schedule:
    - cron: "0 0 * * *" # run stalebot once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@main
        id: stale
        with:
          stale-issue-message: "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
```
# Frigate - Realtime Object Detection for RTSP Cameras
Uses OpenCV and Tensorflow to perform realtime object detection locally for RTSP cameras. Designed for integration with HomeAssistant or others via MQTT.
- Leverages multiprocessing and threads heavily with an emphasis on realtime over processing every frame
# Frigate - NVR With Realtime Object Detection for IP Cameras
- Allows you to define specific regions (squares) in the image to look for motion/objects
- Motion detection runs in a separate process per region and signals to object detection to avoid wasting CPU cycles looking for objects when there is no motion
- Object detection with Tensorflow runs in a separate process per region
- Detected objects are placed on a shared mp.Queue and aggregated into a list of recently detected objects in a separate thread
- A person score is calculated as the sum of all scores/5
- Motion and object info is published over MQTT for integration into HomeAssistant or others
- An endpoint is available to view an MJPEG stream for debugging

A complete and local NVR designed for [Home Assistant](https://www.home-assistant.io) with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
## Example video
Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but highly recommended. The Coral will outperform even the best CPUs and can process 100+ FPS with very little overhead.
You see multiple bounding boxes because it draws bounding boxes from all frames in the past 1 second where a person was detected. Not all of the bounding boxes were from the current frame.
echo"deb [signed-by=/usr/share/keyrings/raspbian.gpg] http://raspbian.raspberrypi.org/raspbian/ bullseye main contrib non-free rpi"| tee /etc/apt/sources.list.d/raspi.list
```yaml
# Optional: module by module log level configuration
logs:
  frigate.mqtt: error
```
Available log levels are: `debug`, `info`, `warning`, `error`, `critical`
Examples of available modules are:
- `frigate.app`
- `frigate.mqtt`
- `frigate.object_detection`
- `frigate.zeroconf`
- `detector.<detector_name>`
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container (i.e. within HassOS).
Example:
```yaml
environment_vars:
  VARIABLE_NAME: variable_value
```
### `database`
Event and recording information is managed in a sqlite database at `/media/frigate/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
This may need to be in a custom location if network storage is used for the media folder.
```yaml
database:
  path: /path/to/frigate.db
```
### `model`
If using a custom model, the width and height will need to be specified.
Custom models may also require different input tensor formats. The colorspace conversion supports RGB, BGR, or YUV frames to be sent to the object detector. The input tensor shape parameter is an enumeration to match what is specified by the model.
| Tensor Dimension | Description |
| :--------------: | -------------- |
| N | Batch Size |
| H | Model Height |
| W | Model Width |
| C | Color Channels |
| Available Input Tensor Shapes |
| :---------------------------: |
| "nhwc" |
| "nchw" |
```yaml
# Optional: model config
model:
  path: /path/to/model
  width: 320
  height: 320
  input_tensor: "nhwc"
  input_pixel_format: "bgr"
```
The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused when you don't need to be as granular such as car/truck. By default, truck is renamed to car because they are often confused. You cannot add new object types, but you can change the names of existing objects in the model.
```yaml
model:
  labelmap:
    2: vehicle
    3: vehicle
    5: vehicle
    7: vehicle
    15: animal
    16: animal
    17: animal
```
Note that if you rename objects in the labelmap, you will also need to update your `objects -> track` list as well.
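For example, with the labelmap above, the combined names must appear in the track list in place of the originals (a minimal sketch using the `vehicle`/`animal` names from the example):
```yaml
objects:
  track:
    - person
    - vehicle # renamed in the labelmap above
    - animal
```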
## Custom ffmpeg build
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exists some hardware setups which have incompatibilities with the included build. In this case, a docker volume mapping can be used to overwrite the included ffmpeg build with an ffmpeg build that works for your specific hardware setup.
To do this:
1. Download your ffmpeg build and uncompress to a folder on the host (let's use `/home/appdata/frigate/custom-ffmpeg` for this example).
2. Update your docker-compose or docker CLI to include `'/home/appdata/frigate/custom-ffmpeg':'/usr/lib/btbn-ffmpeg':'ro'` in the volume mappings (see the compose sketch after these steps).
3. Restart Frigate and the custom version will be used if the mapping was done correctly.
NOTE: The folder that is mapped from the host needs to be the folder that contains `/bin`. So if the full structure is `/home/appdata/frigate/custom-ffmpeg/bin/ffmpeg` then `/home/appdata/frigate/custom-ffmpeg` needs to be mapped to `/usr/lib/btbn-ffmpeg`.
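Steps 1 and 2 look roughly like this in docker-compose (a sketch using the example path above; only the relevant volume line is shown):
```yaml
services:
  frigate:
    # ... image, devices, other volumes, etc.
    volumes:
      # overwrite the bundled ffmpeg with the custom build (host folder must contain /bin)
      - /home/appdata/frigate/custom-ffmpeg:/usr/lib/btbn-ffmpeg:ro
```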
Birdseye provides a heads-up view of your cameras, letting you see what is going on around your property / space without watching cameras where nothing is happening. Birdseye offers specific modes that intelligently show and hide cameras based on what you care about.
### Birdseye Modes
Birdseye offers different modes to customize which cameras show under which circumstances; a minimal example follows the list below.
- **continuous:** All cameras are always included
- **motion:** Cameras that have detected motion within the last 30 seconds are included
- **objects:** Cameras that have tracked an active object within the last 30 seconds are included
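A minimal sketch selecting one of the modes above:
```yaml
birdseye:
  enabled: True
  # only include cameras with a tracked object in the last 30 seconds
  mode: objects
```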
### Custom Birdseye Icon
A custom icon can be added to the birdseye background by providing a 180x180 image named `custom.png` inside of the Frigate `media` folder. The file must be a png with a transparent background; any non-transparent pixels will be displayed as white in the birdseye view.
### Birdseye view override at camera level
If you want to include a camera in Birdseye view only for specific circumstances, or just don't include it at all, the Birdseye setting can be set at the camera level.
```yaml
# Include all cameras by default in Birdseye view
birdseye:
  enabled: True
  mode: continuous

cameras:
  front:
    # Only include the "front" camera in Birdseye view when objects are detected
    birdseye:
      mode: objects
```
:::note
This page makes use of presets of FFmpeg args. For more information on presets, see the [FFmpeg Presets](/configuration/ffmpeg_presets) page.
:::
## MJPEG Cameras
Note that MJPEG cameras require encoding the video into h264 for the record and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
```yaml
go2rtc:
  streams:
    mjpeg_cam: "ffmpeg:{your_mjpeg_stream_url}#video=h264#hardware" # <- use hardware acceleration to create an h264 stream usable for other components.

cameras:
  ...
  mjpeg_cam:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/mjpeg_cam
          roles:
            - detect
            - record
```
## JPEG Stream Cameras
Cameras using a live changing jpeg image will need input parameters as below
```yaml
input_args: preset-http-jpeg-generic
```
Outputting the stream will have the same args and caveats as per [MJPEG Cameras](#mjpeg-cameras)
## RTMP Cameras
The input parameters need to be adjusted for RTMP cameras
```yaml
ffmpeg:
  input_args: preset-rtmp-generic
```
## UDP Only Cameras
If your cameras do not support TCP connections for RTSP, you can use UDP.
```yaml
ffmpeg:
  input_args: preset-rtsp-udp
```
## Model/vendor specific setup
### Annke C800
This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to aac. Unfortunately, direct playback in the browser is not working (yet), but the downloaded clip can be played locally.
```yaml
ffmpeg:
  inputs:
    - path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
      roles:
        - detect
        - record
        - rtmp

rtmp:
  enabled: False # <-- RTMP should be disabled if your stream is not H264

detect:
  width: # <---- update for your camera's resolution
  height: # <---- update for your camera's resolution
```
### Blue Iris RTSP Cameras
You will need to remove the `nobuffer` flag for Blue Iris RTSP cameras
```yaml
ffmpeg:
  input_args: preset-rtsp-blue-iris
```
### Reolink Cameras
Reolink has older cameras (ex: 410 & 520) as well as newer cameras (ex: 520a & 511wa) which support different subsets of options. In both cases, using the http stream is recommended.
Frigate works much better with newer Reolink cameras that are set up with the below options. If available, recommended settings are:
- `On, fluency first` this sets the camera to CBR (constant bit rate)
- `Interframe Space 1x` this sets the iframe interval to the same as the frame rate
According to [this discussion](https://github.com/blakeblackshear/frigate/issues/3235#issuecomment-1135876973), the http video streams seem to be the most reliable for Reolink.
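A hedged sketch of an http-based input for a newer Reolink camera (the URL shape follows the linked discussion; IP, credentials, and stream name are placeholders):
```yaml
cameras:
  reolink:
    ffmpeg:
      inputs:
        - path: http://camera-ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password
          roles:
            - detect
            - record
```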
### Unifi Protect Cameras

Unifi Protect cameras require the rtspx stream to be used with go2rtc.
To utilize a Unifi Protect camera, modify the rtsps link to begin with rtspx.
Additionally, remove the "?enableSrtp" from the end of the Unifi link.
```yaml
go2rtc:
  streams:
    front:
      - rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.2.0#source-rtsp)
In the Unifi 2.0 update, Unifi Protect cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record and rtmp if used directly with Unifi Protect.
```yaml
ffmpeg:
  output_args:
    record: preset-record-ubiquiti
    rtmp: preset-rtmp-ubiquiti # recommend using go2rtc instead
```
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
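For example, a low resolution sub stream can feed detection while the high resolution main stream is recorded (a minimal sketch; the paths are placeholders):
```yaml
cameras:
  example_cam:
    ffmpeg:
      inputs:
        - path: rtsp://camera-ip:554/sub_stream # low resolution
          roles:
            - detect
        - path: rtsp://camera-ip:554/main_stream # high resolution
          roles:
            - record
```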
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing events and recordings can still be accessed, but live streams, recording, and detecting will not work while the camera is disabled. Camera-specific configurations will still be used.
Each role can only be assigned to one input per camera. The options for roles are as follows:
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `openvino`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.
A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
## Edge-TPU Detector
The EdgeTPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an EdgeTPU detector, set the `"type"` attribute to `"edgetpu"`.
The EdgeTPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
### Single USB Coral
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
    model:
      path: "/custom_model.tflite"
```
### Multiple USB Corals
```yaml
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```
### Native Coral (Dev Board)
_warning: may have [compatibility issues](https://github.com/blakeblackshear/frigate/issues/1706) after `v0.9.x`_
```yaml
detectors:
  coral:
    type: edgetpu
    device: ""
```
### Multiple PCIE/M.2 Corals
```yaml
detectors:
  coral1:
    type: edgetpu
    device: pci:0
  coral2:
    type: edgetpu
    device: pci:1
```
### Mixing Corals
```yaml
detectors:
  coral_usb:
    type: edgetpu
    device: usb
  coral_pci:
    type: edgetpu
    device: pci
```
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on Intel CPU, GPU and VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html). Supported devices include `AUTO`, `CPU`, `GPU`, `MYRIAD`, etc. If not specified, the default device will be selected by the `AUTO` plugin.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. A supported Intel platform is required to use the `GPU` device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
```yaml
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
This detector also supports some YOLO variants: YOLOX, YOLOv5, and YOLOv8 specifically. Other YOLO variants are not officially supported/tested. Frigate does not come with any YOLO models preloaded, so you will need to supply your own models. This detector has been verified to work with the [yolox_tiny](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny) model from Intel's Open Model Zoo. You can follow [these instructions](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny#download-a-model-and-convert-it-into-openvino-ir-format) to retrieve the OpenVINO-compatible `yolox_tiny` model. Make sure that the model input dimensions match the `width` and `height` parameters, and that `model_type` is set accordingly. See [Full Configuration Reference](/configuration/index.md#full-configuration-reference) for a list of possible `model_type` options. Below is an example of how `yolox_tiny` can be used in Frigate:
```yaml
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /path/to/yolox_tiny.xml

model:
  width: 416
  height: 416
  input_tensor: nchw
  input_pixel_format: bgr
  model_type: yolox
  labelmap_path: /path/to/coco_80cl.txt
```
### Intel NCS2 VPU and Myriad X Setup
Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2), which has been discontinued. If you intend to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device: the host needs a udev rule installed to handle the NCS2 device.
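As a minimal sketch, assuming the standard Movidius/Myriad X USB vendor ID (`03e7`), such a rule might look like the following; the file name, permissions, and exact attributes are illustrative and may need adjusting for your distribution:
```
# /etc/udev/rules.d/97-myriad-usbboot.rules (illustrative path and name)
# Grant non-root access to the Myriad X USB device (vendor id 03e7)
SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
```
After installing the rule, reload udev with `sudo udevadm control --reload-rules && sudo udevadm trigger`.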
## NVidia TensorRT Detector
NVidia GPUs may be used for object detection using the TensorRT libraries. Due to the size of the additional libraries, this detector is only provided in images with the `-tensorrt` tag suffix. This detector is designed to work with Yolo models for object detection.
### Minimum Hardware Support
The TensorRT detector uses the 11.x series of CUDA libraries which have minor version compatibility. The minimum driver version on the host system must be `>=450.80.02`. Also the GPU must support a Compute Capability of `5.0` or greater. This generally correlates to a Maxwell-era GPU or newer, check the NVIDIA GPU Compute Capability table linked below.
> **TODO:** NVidia claims support on compute 3.5 and 3.7, but marks it as deprecated. This would have some, but not all, Kepler GPUs as possibly working. This needs testing before making any claims of support.
To use the TensorRT detector, make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass through the GPU to the container and the host system has a compatible driver installed for your GPU.
There are improved capabilities in newer GPU architectures that TensorRT can benefit from, such as INT8 operations and Tensor cores. The features compatible with your hardware will be optimized when the model is converted to a trt file. Currently the script provided for generating the model provides a switch to enable/disable FP16 operations. If you wish to use newer features such as INT8 optimization, more work is required.
#### Compatibility References:
[NVIDIA TensorRT Support Matrix](https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-841/support-matrix/index.html)
[NVIDIA CUDA Compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
Models used for TensorRT must be preprocessed on the same hardware platform that they will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is provided that will build several common models.
To generate model files, create a new folder to save the models, download the script, and launch a docker container that will run the script, as sketched below.
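A sketch of those steps under stated assumptions: the script name `tensorrt_models.sh`, its download URL, and the NVIDIA TensorRT image tag below are illustrative and may differ for your Frigate version:
```bash
mkdir trt-models
wget https://github.com/blakeblackshear/frigate/raw/master/docker/tensorrt_models.sh
chmod +x tensorrt_models.sh
docker run --gpus=all --rm -it \
  -v "$(pwd)/trt-models:/tensorrt_models" \
  -v "$(pwd)/tensorrt_models.sh:/tensorrt_models.sh" \
  -e YOLO_MODELS=yolov7-tiny-416 \
  nvcr.io/nvidia/tensorrt:22.07-py3 /tensorrt_models.sh
# add -e USE_FP16=False if your GPU does not support FP16 operations
```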
The `trt-models` folder can then be mapped into your Frigate container as `trt-models` and the models referenced from the config.
If your GPU does not support FP16 operations, you can pass the environment variable `-e USE_FP16=False` to the `docker run` command to disable it.
Specific models can be selected by passing an environment variable to the `docker run` command. Use the form `-e YOLO_MODELS=yolov4-416,yolov4-tiny-416` to select one or more model names. The models available are shown below.
```
yolov3-288
yolov3-416
yolov3-608
yolov3-spp-288
yolov3-spp-416
yolov3-spp-608
yolov3-tiny-288
yolov3-tiny-416
yolov4-288
yolov4-416
yolov4-608
yolov4-csp-256
yolov4-csp-512
yolov4-p5-448
yolov4-p5-896
yolov4-tiny-288
yolov4-tiny-416
yolov4x-mish-320
yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-320
yolov7x-640
yolov7x-320
```
### Configuration Parameters
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector uses `.trt` model files that are located in `/trt-models/` by default. The model file path and dimensions to use will depend on which model you have generated; see the sketch below.
```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # This is the default, select the first GPU
```
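For example, assuming you generated the `yolov7-tiny-416` model with the script above, the model section might look like the following; the path and dimensions must match the model you actually generated:
```yaml
model:
  path: /trt-models/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416
```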
Some presets of FFmpeg args are provided by default to make the configuration easier. All presets can be seen in [this file](https://github.com/blakeblackshear/frigate/blob/master/frigate/ffmpeg_presets.py).
### Hwaccel Presets
It is highly recommended to use hwaccel presets in the config. These presets not only replace the longer args, but they also give Frigate hints about what hardware is available and allow Frigate to make other optimizations using the GPU, such as when encoding the birdseye restream or when scaling a stream that has a size different from the native stream size.
See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on how to setup hwaccel for your GPU / iGPU.
| Preset | Usage | Other Notes |
| ------ | ----- | ----------- |
| preset-http-reolink | Reolink HTTP-FLV Stream | Only for reolink http, not when restreaming as rtsp |
| preset-rtmp-generic | RTMP Stream | |
| preset-rtsp-generic | RTSP Stream | This is the default when nothing is specified |
| preset-rtsp-restream | RTSP Stream from restream | Use for rtsp restream as source for frigate |
| preset-rtsp-restream-low-latency | RTSP Stream from restream | Use for rtsp restream as source for frigate to lower latency, may cause issues with some cameras |
| preset-rtsp-udp | RTSP Stream via UDP | Use when camera is UDP only |
| preset-rtsp-blue-iris | Blue Iris RTSP Stream | Use when consuming a stream from Blue Iris |
:::caution
It is important to be mindful of input args when using restream because you can have a mix of protocols. `http` and `rtmp` presets cannot be used with `rtsp` streams. For example, when using a Reolink camera with the rtsp restream as a source for record, `preset-http-reolink` will cause a crash. In this case presets will need to be set at the stream level. See the example below.
:::
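A sketch of stream-level presets for this case; the camera name and Reolink URLs below are illustrative placeholders:
```yaml
go2rtc:
  streams:
    reolink_cam: http://192.168.0.139/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=admin&password=password

cameras:
  reolink_cam:
    ffmpeg:
      inputs:
        - path: http://192.168.0.139/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=admin&password=password
          input_args: preset-http-reolink
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/reolink_cam
          input_args: preset-rtsp-restream
          roles:
            - record
```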
It is recommended to update your configuration to enable hardware accelerated decoding in ffmpeg. Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
### Raspberry Pi 3/4
Ensure you increase the allocated RAM for your GPU to at least 128 (raspi-config > Performance Options > GPU Memory).
**NOTICE**: If you are using the addon, you may need to turn off `Protection mode` for hardware acceleration.
```yaml
ffmpeg:
  hwaccel_args: preset-rpi-64-h264
```
### Intel-based CPUs (<10th Generation) via VAAPI
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams. VAAPI is recommended for all generations of Intel-based CPUs if QSV does not work.
```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```
**NOTICE**: With some processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the frigate.yml for HA OS users](advanced.md#environment_vars), as sketched below.
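A minimal docker-compose sketch of setting that variable:
```yaml
services:
  frigate:
    ...
    environment:
      - LIBVA_DRIVER_NAME=i965 # <- force the i965 VAAPI driver
```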
### Intel-based CPUs (>=10th Generation) via Quicksync
QSV must be set specifically based on the video encoding of the stream.
#### H.264 streams
```yaml
ffmpeg:
  hwaccel_args: preset-intel-qsv-h264
```
#### H.265 streams
```yaml
ffmpeg:
  hwaccel_args: preset-intel-qsv-h265
```
### AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
**Note:** You also need to set `LIBVA_DRIVER_NAME=radeonsi` as an environment variable on the container.
```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```
### NVIDIA GPUs
While older GPUs may work, it is recommended to use modern, supported GPUs. NVIDIA provides a [matrix of supported GPUs and features](https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new). If your card is on the list and supports CUVID/NVDEC, it will most likely work with Frigate for decoding. However, you must also use [a driver version that will work with FFmpeg](https://github.com/FFmpeg/nv-codec-headers/blob/master/README). Older driver versions may be missing symbols and fail to work, and older cards are not supported by newer driver versions. The only way around this is to [provide your own FFmpeg](/configuration/advanced#custom-ffmpeg-build) that will work with your driver version, but this is unsupported and may not work well if at all.
A more complete list of cards and their compatible drivers is available in the [driver release readme](https://download.nvidia.com/XFree86/Linux-x86_64/525.85.05/README/supportedchips.html).
If your distribution does not offer NVIDIA driver packages, you can [download them here](https://www.nvidia.com/en-us/drivers/unix/).
#### Docker Configuration
Additional configuration is needed for the Docker container to be able to access the NVIDIA GPU. The supported method for this is to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) and specify the GPU to Docker. How you do this depends on how Docker is being run:
##### Docker Compose
```yaml
services:
  frigate:
    ...
    image: ghcr.io/blakeblackshear/frigate:stable
    deploy: # <------------- Add this section
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0'] # this is only needed when using multiple GPUs
              count: 1 # number of GPUs
              capabilities: [gpu]
```
##### Docker Run CLI
```bash
docker run -d \
--name frigate \
...
--gpus=all \
ghcr.io/blakeblackshear/frigate:stable
```
#### Setup Decoder
The decoder you need to pass in the `hwaccel_args` will depend on the input video.
To see which codecs your card supports, run `ffmpeg -decoders | grep cuvid` inside the container.
For example, for H264 video, you'll select `preset-nvidia-h264`.
```yaml
ffmpeg:
  hwaccel_args: preset-nvidia-h264
```
If everything is working correctly, you should see a significant improvement in performance.
Verify that hardware decoding is working by running `nvidia-smi`, which should show `ffmpeg` processes:
:::note
`nvidia-smi` may not show `ffmpeg` processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458).
:::
If you do not see these processes, check the `docker logs` for the container and look for decoding errors.
These instructions were originally based on the [Jellyfin documentation](https://jellyfin.org/docs/general/administration/hardware-acceleration.html#nvidia-hardware-acceleration-on-docker-linux).
For Home Assistant Addon installations, the config file needs to be in the root of your Home Assistant config directory (same location as `configuration.yaml`). It can be named `frigate.yml` or `frigate.yaml`, but if both files exist `frigate.yaml` will be preferred and `frigate.yml` will be ignored.
For all other installation types, the config file should be mapped to `/config/config.yml` inside the container.
It is recommended to start with a minimal configuration, as sketched below, and add to it as described in [this guide](../guides/getting_started.md):
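A sketch of a minimal configuration; the broker host, camera name, stream URL, and resolution below are placeholders for your own values:
```yaml
mqtt:
  host: mqtt.server.com # <- your MQTT broker

cameras:
  back:
    ffmpeg:
      inputs:
        - path: rtsp://user:password@camera-ip:554/stream # <- your camera's stream
          roles:
            - detect
    detect:
      width: 1280 # <- your camera's detect resolution
      height: 720
```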
VSCode (and the VSCode addon) supports JSON schemas, which will automatically validate the config. Enable this by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the top of the config file, where `frigate_host` is the IP address of Frigate, or `ccab4aaf-frigate` if running in the addon.
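For example, the first line of the config file would be:
```yaml
# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json
mqtt:
  ...
```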
### Full configuration reference:
:::caution
It is not recommended to copy this full configuration file. Only specify values that are different from the defaults. Configuration options and default values may change in future versions.
:::
**Note:** Values written as `{FRIGATE_*}` placeholders (e.g. `{FRIGATE_RTSP_PASSWORD}`) will be replaced at runtime using environment variables.
Frigate has different live view options, some of which require the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
## Live View Options
Live view options can be selected while viewing the live stream. The options are:
| Source | Latency | Frame Rate | Resolution | Audio | Requires go2rtc | Other Limitations |
| ------ | ------- | ---------- | ---------- | ----- | --------------- | ----------------- |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | same as detect | no | no | none |
| mse | low | native | native | yes (depends on audio codec) | yes | not supported on iOS, Firefox is h.264 only |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config, doesn't support h.265 |
### Audio Support
MSE requires AAC audio; WebRTC requires PCMU/PCMA or opus audio. If you want to support both MSE and WebRTC, then your restream config needs to make sure both codecs are enabled.
```yaml
go2rtc:
  streams:
    rtsp_cam: # <- for RTSP streams
      - rtsp://192.168.1.5:554/live0 # <- stream which supports video & aac audio
      - "ffmpeg:rtsp_cam#audio=opus" # <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
    http_cam: # <- for http streams
      - http://192.168.50.155/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=user&password=password # <- stream which supports video & aac audio
      - "ffmpeg:http_cam#audio=opus" # <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
```
### Setting Stream For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
```yaml
go2rtc:
  streams:
    rtsp_cam:
      - rtsp://192.168.1.5:554/live0 # <- stream which supports video & aac audio.
      - "ffmpeg:rtsp_cam#audio=opus" # <- copy of the stream which transcodes audio to opus
    rtsp_cam_sub:
      - rtsp://192.168.1.5:554/substream # <- stream which supports video & aac audio.
      - "ffmpeg:rtsp_cam_sub#audio=opus" # <- copy of the stream which transcodes audio to opus

cameras:
  test_cam:
    ffmpeg:
      output_args:
        record: preset-record-generic-audio-copy
      inputs:
        - path: rtsp://127.0.0.1:8554/test_cam # <--- the name here must match the name of the camera in restream
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/test_cam_sub # <--- the name here must match the name of the camera_sub in restream
          input_args: preset-rtsp-restream
          roles:
            - detect
    live:
      stream_name: rtsp_cam_sub
```
### WebRTC extra configuration:
WebRTC works by creating a TCP or UDP connection on port `8555`. However, it requires additional configuration:
- For external access, over the internet, setup your router to forward port `8555` to port `8555` on the Frigate device, for both TCP and UDP.
- For internal/local access, unless you are running through the add-on, you will also need to set the WebRTC candidates list in the go2rtc config. For example, if `192.168.1.10` is the local IP of the device running Frigate:
```yaml title="/config/frigate.yaml"
go2rtc:
  streams:
    test_cam: ...
  webrtc:
    candidates:
      - 192.168.1.10:8555
      - stun:8555
```
:::tip
This extra configuration may not be required if Frigate has been installed as a Home Assistant add-on, as Frigate uses the Supervisor's API to generate a WebRTC candidate.
However, if issues occur, it is recommended to define the candidates manually; you should do this if the Frigate add-on fails to generate a valid candidate. If an error occurs, you will see warnings like the following in the add-on logs during initialization:
```log
[WARN] Failed to get IP address from supervisor
[WARN] Failed to get WebRTC port from supervisor
```
:::
:::note
If you are having difficulties getting WebRTC to work and you are running Frigate with docker, you may want to try changing the container network mode:
- `network: host`, in this mode you don't need to forward any ports. The services inside of the Frigate container will have full access to the network interfaces of your host machine as if they were running natively and not in a container. Any port conflicts will need to be resolved. This network mode is recommended by go2rtc, but we recommend you only use it if necessary (see the compose sketch after this note).
- `network: bridge` creates a virtual network interface for the container, and the container will have full access to it. You also don't need to forward any ports; however, the IP for accessing Frigate locally will differ from the IP of the host machine. Your router will see Frigate as if it were a new device connected to the network.
:::
See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.2.0#module-webrtc) for more information about this.
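A minimal compose sketch of switching the container to host networking, as mentioned in the note above:
```yaml
services:
  frigate:
    ...
    network_mode: host # <- no port forwarding needed in this mode
```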
**Motion masks**: Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the debug feed with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you may want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over-masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with `Motion Boxes` enabled again.
**Object filter masks**: Object filter masks are used to filter out false positives for a given object type based on location. These should be used to filter any areas where it is not possible for an object of that type to be. The bottom center of the detected object's bounding box is evaluated against the mask. If it is in a masked area, it is assumed to be a false positive. For example, you may want to mask out rooftops, walls, the sky, treetops for people. For cars, masking locations other than the street or your driveway will tell Frigate that anything in your yard is a false positive.
To create a poly mask:
1. Visit the Web UI
1. Click the camera you wish to create a mask for
1. Select "Debug" at the top
1. Expand the "Options" below the video feed
1. Click "Mask & Zone creator"
1. Click "Add" on the type of mask or zone you would like to create
1. Click on the camera's latest image to create a masked area. The yaml representation will be updated in real-time
1. When you've finished creating your mask, click "Copy" and paste the contents into your config file and restart Frigate
Example of a finished row corresponding to the below example image:
This is a response to a [question posed on reddit](https://www.reddit.com/r/homeautomation/comments/ppxdve/replacing_my_doorbell_with_a_security_camera_a_6/hd876w4?utm_source=share&utm_medium=web2x&context=3):
It is helpful to understand a bit about how Frigate uses motion detection and object detection together.
First, Frigate uses motion detection as a first line check to see if there is anything happening in the frame worth checking with object detection.
Once motion is detected, it tries to group up nearby areas of motion together in hopes of identifying a rectangle in the image that will capture the area worth inspecting. These are the red "motion boxes" you see in the debug viewer.
After the area with motion is identified, Frigate creates a "region" (the green boxes in the debug viewer) to run object detection on. The models are trained on square images, so these regions are always squares. It adds a margin around the motion area in hopes of capturing a cropped view of the object moving that fills most of the image passed to object detection, but doesn't cut anything off. It also takes into consideration the location of the bounding box from the previous frame if it is tracking an object.
After object detection runs, if there are detected objects that seem to be cut off, Frigate reframes the region and runs object detection again on the same frame to get a better look.
All of this happens for each area of motion and tracked object.
> Are you simply saying that INITIAL triggering of any kind of detection will only happen in un-masked areas, but that once this triggering happens, the masks become irrelevant and object detection takes precedence?
Essentially, yes. I wouldn't describe it as object detection taking precedence though. The motion masks just prevent those areas from being counted as motion. Those masks do not modify the regions passed to object detection in any way, so you can absolutely detect objects in areas masked for motion.
> If so, this is completely expected and intuitive behavior for me. Because obviously if a "foot" starts motion detection the camera should be able to check if it's an entire person before it fully crosses into the zone. The docs imply this is the behavior, so I also don't understand why this would be detrimental to object detection on the whole.
When just a foot is triggering motion, Frigate will zoom in and look only at the foot. If that even qualifies as a person, it will determine the object is being cut off and look again and again until it zooms back out enough to find the whole person.
It is also detrimental to how Frigate tracks a moving object. Motion near the bounding box from the previous frame is used to intelligently determine where the region should be in the next frame. With too much masking, tracking is hampered, and if an object walks from an unmasked area into a fully masked area, they essentially disappear and will be picked up as a "new" object if they leave the masked area. This is important because Frigate uses the history of scores while tracking an object to determine if it is a false positive or not. It takes a minimum of 3 frames for Frigate to confirm that the object is the type it thinks it is, and the median score must be greater than the threshold. If a person meets this threshold while on the sidewalk before they walk onto your stoop, you will get an alert the instant they step a single foot into a zone.
> I thought the main point of this feature was to cut down on CPU use when motion is happening in unnecessary areas.
It is, but the definition of "unnecessary" varies. I want to ignore areas of motion that I know are definitely not being triggered by objects of interest. Timestamps, trees, sky, rooftops. I don't want to ignore motion from objects that I want to track and know where they go.
> For me, giving my masks ANY padding results in a lot of people detection I'm not interested in. I live in the city and catch a lot of the sidewalk on my camera. People walk by my front door all the time and the margin between the sidewalk and actually walking onto my stoop is very thin, so I basically have everything but the exact contours of my stoop masked out. This results in very tidy detections but this info keeps throwing me off. Am I just overthinking it?
This is what `required_zones` are for. You should define a zone (remember this is evaluated based on the bottom center of the bounding box) and make it required to save snapshots and clips (now events in 0.9.0). You can also use this in your conditions for a notification. A hedged config sketch follows.
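A sketch of that setup; the camera name, zone name, and coordinates below are hypothetical, so draw real coordinates with the Mask & Zone creator described earlier:
```yaml
cameras:
  front_door:
    zones:
      stoop:
        coordinates: 545,1077,747,939,788,805 # hypothetical polygon
    snapshots:
      required_zones:
        - stoop
    record:
      events:
        required_zones:
          - stoop
```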
> Maybe my specific situation just warrants this. I've just been having a hard time understanding the relevance of this information - it seems to be that it's exactly what would be expected when "masking out" an area of ANY image.
That may be the case for you. Frigate will definitely work harder tracking people on the sidewalk to make sure it doesn't miss anyone who steps foot on your stoop. The trade off with the way you have it now is slower recognition of objects and potential misses. That may be acceptable based on your needs. Also, if your resolution is low enough on the detect stream, your regions may already be so big that they grab the entire object anyway.
Frigate includes the object models listed below from the Google Coral test data.
Please note:
- `car` is listed twice because `truck` has been renamed to `car` by default. These object types are frequently confused.
- `person` is the only tracked object by default. See the [full configuration reference](index.md#full-configuration-reference) for an example of expanding the list of tracked objects.
<ul>
{labels.split("\n").map((label) => (
<li>{label.replace(/^\d+\s+/, "")}</li>
))}
</ul>
## Custom Models
Models for both CPU and EdgeTPU (Coral) are bundled in the image. You can use your own models with volume mounts:
- CPU Model: `/cpu_model.tflite`
- EdgeTPU Model: `/edgetpu_model.tflite`
- Labels: `/labelmap.txt`
You also need to update the [model config](advanced.md#model) if your models differ from the defaults. A minimal compose sketch of those mounts follows.
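A minimal docker-compose sketch of the volume mounts; the host paths are placeholders:
```yaml
services:
  frigate:
    ...
    volumes:
      - /path/to/your/model.tflite:/edgetpu_model.tflite # <- or /cpu_model.tflite for the CPU detector
      - /path/to/your/labelmap.txt:/labelmap.txt
```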