Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-10-04 15:13:22 +08:00)

Compare commits: live-view- ... v0.10.0-be (29 commits)
Commits:

e6d2df5661
a3301e0347
3d556cc2cb
585efe1a0f
c7d47439dd
19a6978228
1ebb8a54bf
ae968044d6
b912851e49
14c74e4361
51fb532e1a
3541f966e3
c7faef8faa
cdd3000315
1c1c28d0e5
4422e86907
8f43a2d109
bd7755fdd3
d554175631
ff667b019a
57dcb29f8b
9dc6c423b7
58117e2a3e
5bec438f9c
24cc63d6d3
d17bd74c9a
8f101ccca8
b63c56d810
61c62d4685
.gitignore (vendored) | 1

@@ -8,6 +8,7 @@ models
 *.mp4
 *.ts
 *.db
+*.csv
 frigate/version.py
 web/build
 web/node_modules
Makefile | 2

@@ -3,7 +3,7 @@ default_target: amd64_frigate
 COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)

 version:
-	echo "VERSION='0.9.4-$(COMMIT_HASH)'" > frigate/version.py
+	echo "VERSION='0.10.0-$(COMMIT_HASH)'" > frigate/version.py

 web:
 	docker build --tag frigate-web --file docker/Dockerfile.web web/
@@ -159,6 +159,8 @@ detect:
   enabled: True
   # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
   max_disappeared: 25
+  # Optional: Frequency for running detection on stationary objects (default: 10x the frame rate)
+  stationary_interval: 50

 # Optional: Object configuration
 # NOTE: Can be overridden at the camera level
@@ -192,10 +194,14 @@ motion:
   # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
   # The value should be between 1 and 255.
   threshold: 25
-  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: ~0.17% of the motion frame area)
-  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller
-  # moving objects.
-  contour_area: 100
+  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: 30)
+  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
+  # make motion detection more sensitive to smaller moving objects.
+  # As a rule of thumb:
+  #  - 15 - high sensitivity
+  #  - 30 - medium sensitivity
+  #  - 50 - low sensitivity
+  contour_area: 30
   # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
   # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
   # Too low and a fast moving person won't be detected as motion.
@@ -205,10 +211,10 @@ motion:
   # Low values will cause things like moving shadows to be detected as motion for longer.
   # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
   frame_alpha: 0.2
-  # Optional: Height of the resized motion frame (default: 1/6th of the original frame height, but no less than 180)
-  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage.
-  # Lower values result in less CPU, but small changes may not register as motion.
-  frame_height: 180
+  # Optional: Height of the resized motion frame (default: 50)
+  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense
+  # of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.
+  frame_height: 50
   # Optional: motion mask
   # NOTE: see docs for more detailed info on creating masks
   mask: 0,900,1080,900,1080,1920,0,1920
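To put the new default in perspective: at frame_height: 50, a 16:9 camera yields a motion frame of roughly 50x89 pixels, so contour_area: 30 is well under 1% of the frame. A quick back-of-the-envelope check in Python (the 16:9 shape is an assumption for illustration):

```
frame_height = 50
frame_width = round(frame_height * 16 / 9)   # ~89 columns for a 16:9 source
motion_pixels = frame_height * frame_width   # ~4450 pixels in the motion frame
print(f"{30 / motion_pixels:.1%}")           # contour_area 30 ~ 0.7% of the frame
```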
@@ -22,4 +22,4 @@ record:

 This configuration will retain recording segments that overlap with events for 10 days. Because multiple events can reference the same recording segments, this avoids storing duplicate footage for overlapping events and reduces overall storage needs.

-When `retain_days` is set to `0`, events will have up to `max_seconds` (defaults to 5 minutes) of recordings retained. Increasing `retain_days` to `1` will allow events to exceed the `max_seconds` limitation by up to 1 day.
+When `retain_days` is set to `0`, segments will be deleted from the cache if no events are in progress.
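The retention decision sketched in Python (a condensed reading of the logic in frigate/record.py shown later in this diff, with timestamps as plain floats; not the exact implementation):

```
import time

def keep_segment(start_ts, end_ts, retain_days, events):
    # segments newer than the retain_days window are always kept
    cutoff = time.time() - retain_days * 24 * 3600
    if start_ts > cutoff:
        return True
    # older segments survive only if they overlap an event;
    # an in-progress event has end_time == None
    return any(
        event.start_time <= end_ts
        and (event.end_time is None or event.end_time >= start_ts)
        for event in events
    )
```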
@@ -78,7 +78,7 @@ Frigate utilizes shared memory to store frames during processing. The default `s

 The default shm-size of 64m is fine for setups with 2 or fewer 1080p cameras. If frigate is exiting with "Bus error" messages, it is likely because you have too many high resolution cameras and you need to specify a higher shm size.

-You can calculate the necessary shm-size for each camera with the following formula:
+You can calculate the necessary shm-size for each camera with the following formula, using the resolution specified for detect:

```
(width * height * 1.5 * 9 + 270480)/1048576 = <shm size in mb>
```
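The formula translates directly into a helper, for example (a minimal sketch; the camera resolutions below are illustrative):

```
def shm_size_mb(width: int, height: int) -> float:
    # factors from the formula above: 1.5 bytes/pixel, 9 frames, fixed overhead
    return (width * height * 1.5 * 9 + 270480) / 1048576

# e.g. two cameras detecting at 1280x720 plus one at 1920x1080
total = 2 * shm_size_mb(1280, 720) + shm_size_mb(1920, 1080)
print(f"--shm-size={total:.0f}mb")
```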
@@ -71,6 +71,9 @@ class FrigateApp:
         self.config = user_config.runtime_config

+        for camera_name in self.config.cameras.keys():
+            # generate the ffmpeg commands
+            self.config.cameras[camera_name].create_ffmpeg_cmds()

         # create camera_metrics
         self.camera_metrics[camera_name] = {
             "camera_fps": mp.Value("d", 0.0),
@@ -12,7 +12,7 @@ import yaml
 from pydantic import BaseModel, Extra, Field, validator
 from pydantic.fields import PrivateAttr

-from frigate.const import BASE_DIR, CACHE_DIR, RECORD_DIR
+from frigate.const import BASE_DIR, CACHE_DIR
 from frigate.edgetpu import load_labels
 from frigate.util import create_mask, deep_merge

@@ -103,10 +103,10 @@ class MotionConfig(FrigateBaseModel):
         ge=1,
         le=255,
     )
-    contour_area: Optional[int] = Field(title="Contour Area")
+    contour_area: Optional[int] = Field(default=30, title="Contour Area")
     delta_alpha: float = Field(default=0.2, title="Delta Alpha")
     frame_alpha: float = Field(default=0.2, title="Frame Alpha")
-    frame_height: Optional[int] = Field(title="Frame Height")
+    frame_height: Optional[int] = Field(default=50, title="Frame Height")
     mask: Union[str, List[str]] = Field(
         default="", title="Coordinates polygon for the motion mask."
     )

@@ -119,15 +119,6 @@ class RuntimeMotionConfig(MotionConfig):
     def __init__(self, **config):
         frame_shape = config.get("frame_shape", (1, 1))

-        if "frame_height" not in config:
-            config["frame_height"] = max(frame_shape[0] // 6, 180)
-
-        if "contour_area" not in config:
-            frame_width = frame_shape[1] * config["frame_height"] / frame_shape[0]
-            config["contour_area"] = (
-                config["frame_height"] * frame_width * 0.00173611111
-            )
-
         mask = config.get("mask", "")
         config["raw_mask"] = mask

@@ -162,6 +153,9 @@ class DetectConfig(FrigateBaseModel):
     max_disappeared: Optional[int] = Field(
         title="Maximum number of frames the object can disappear before detection ends."
     )
+    stationary_interval: Optional[int] = Field(
+        title="Frame interval for checking stationary objects."
+    )


 class FilterConfig(FrigateBaseModel):

@@ -495,6 +489,7 @@ class CameraConfig(FrigateBaseModel):
     timestamp_style: TimestampStyleConfig = Field(
         default_factory=TimestampStyleConfig, title="Timestamp style configuration."
     )
+    _ffmpeg_cmds: List[Dict[str, List[str]]] = PrivateAttr()

     def __init__(self, **config):
         # Set zone colors

@@ -521,6 +516,9 @@ class CameraConfig(FrigateBaseModel):

     @property
     def ffmpeg_cmds(self) -> List[Dict[str, List[str]]]:
+        return self._ffmpeg_cmds
+
+    def create_ffmpeg_cmds(self):
         ffmpeg_cmds = []
         for ffmpeg_input in self.ffmpeg.inputs:
             ffmpeg_cmd = self._get_ffmpeg_cmd(ffmpeg_input)

@@ -528,7 +526,7 @@ class CameraConfig(FrigateBaseModel):
                 continue

             ffmpeg_cmds.append({"roles": ffmpeg_input.roles, "cmd": ffmpeg_cmd})
-        return ffmpeg_cmds
+        self._ffmpeg_cmds = ffmpeg_cmds

     def _get_ffmpeg_cmd(self, ffmpeg_input: CameraInput):
         ffmpeg_output_args = []

@@ -745,6 +743,11 @@ class FrigateConfig(FrigateBaseModel):
             if camera_config.detect.max_disappeared is None:
                 camera_config.detect.max_disappeared = max_disappeared

+            # Default stationary_interval configuration
+            stationary_interval = camera_config.detect.fps * 10
+            if camera_config.detect.stationary_interval is None:
+                camera_config.detect.stationary_interval = stationary_interval
+
             # FFMPEG input substitution
             for input in camera_config.ffmpeg.inputs:
                 input.path = input.path.format(**FRIGATE_ENV_VARS)
@@ -30,6 +30,11 @@ class EventProcessor(threading.Thread):
         self.stop_event = stop_event

     def run(self):
+        # set an end_time on events without an end_time on startup
+        Event.update(end_time=Event.start_time + 30).where(
+            Event.end_time == None
+        ).execute()
+
         while not self.stop_event.is_set():
             try:
                 event_type, camera, event_data = self.event_queue.get(timeout=10)

@@ -38,14 +43,35 @@ class EventProcessor(threading.Thread):

             logger.debug(f"Event received: {event_type} {camera} {event_data['id']}")

+            event_config: EventsConfig = self.config.cameras[camera].record.events
+
             if event_type == "start":
                 self.events_in_process[event_data["id"]] = event_data

-            if event_type == "end":
-                event_config: EventsConfig = self.config.cameras[camera].record.events
-
+            elif event_type == "update":
+                self.events_in_process[event_data["id"]] = event_data
+                # TODO: this will generate a lot of db activity possibly
                 if event_data["has_clip"] or event_data["has_snapshot"]:
-                    Event.create(
+                    Event.replace(
                         id=event_data["id"],
                         label=event_data["label"],
                         camera=camera,
+                        start_time=event_data["start_time"] - event_config.pre_capture,
+                        end_time=None,
+                        top_score=event_data["top_score"],
+                        false_positive=event_data["false_positive"],
+                        zones=list(event_data["entered_zones"]),
+                        thumbnail=event_data["thumbnail"],
+                        region=event_data["region"],
+                        box=event_data["box"],
+                        area=event_data["area"],
+                        has_clip=event_data["has_clip"],
+                        has_snapshot=event_data["has_snapshot"],
+                    ).execute()
+
+            elif event_type == "end":
+                if event_data["has_clip"] or event_data["has_snapshot"]:
+                    Event.replace(
+                        id=event_data["id"],
+                        label=event_data["label"],
+                        camera=camera,

@@ -60,11 +86,15 @@ class EventProcessor(threading.Thread):
                         area=event_data["area"],
                         has_clip=event_data["has_clip"],
                         has_snapshot=event_data["has_snapshot"],
-                    )
+                    ).execute()

                 del self.events_in_process[event_data["id"]]
                 self.event_processed_queue.put((event_data["id"], camera))

+        # set an end_time on events without an end_time before exiting
+        Event.update(end_time=datetime.datetime.now().timestamp()).where(
+            Event.end_time == None
+        ).execute()
         logger.info(f"Exiting event processor...")
@@ -1,6 +1,7 @@
 import base64
 from collections import OrderedDict
 from datetime import datetime, timedelta
+import copy
 import json
 import glob
 import logging

@@ -190,7 +191,7 @@ def event_snapshot(id):
     download = request.args.get("download", type=bool)
     jpg_bytes = None
     try:
-        event = Event.get(Event.id == id)
+        event = Event.get(Event.id == id, Event.end_time != None)
         if not event.has_snapshot:
             return "Snapshot not available", 404
         # read snapshot from disk

@@ -321,7 +322,7 @@ def config():
     # add in the ffmpeg_cmds
     for camera_name, camera in current_app.frigate_config.cameras.items():
         camera_dict = config["cameras"][camera_name]
-        camera_dict["ffmpeg_cmds"] = camera.ffmpeg_cmds
+        camera_dict["ffmpeg_cmds"] = copy.deepcopy(camera.ffmpeg_cmds)
         for cmd in camera_dict["ffmpeg_cmds"]:
             cmd["cmd"] = " ".join(cmd["cmd"])

@@ -697,7 +698,10 @@ def vod_event(id):
     clip_path = os.path.join(CLIPS_DIR, f"{event.camera}-{id}.mp4")

     if not os.path.isfile(clip_path):
-        return vod_ts(event.camera, event.start_time, event.end_time)
+        end_ts = (
+            datetime.now().timestamp() if event.end_time is None else event.end_time
+        )
+        return vod_ts(event.camera, event.start_time, end_ts)

     duration = int((event.end_time - event.start_time) * 1000)
     return jsonify(
@@ -23,6 +23,7 @@ class MotionDetector:
             interpolation=cv2.INTER_LINEAR,
         )
         self.mask = np.where(resized_mask == [0])
+        self.save_images = False

     def detect(self, frame):
         motion_boxes = []

@@ -36,10 +37,15 @@ class MotionDetector:
             interpolation=cv2.INTER_LINEAR,
         )

-        # TODO: can I improve the contrast of the grayscale image here?
-
-        # convert to grayscale
-        # resized_frame = cv2.cvtColor(resized_frame, cv2.COLOR_BGR2GRAY)
+        # Improve contrast
+        minval = np.percentile(resized_frame, 4)
+        maxval = np.percentile(resized_frame, 96)
+        # don't adjust if the image is a single color
+        if minval < maxval:
+            resized_frame = np.clip(resized_frame, minval, maxval)
+            resized_frame = (
+                ((resized_frame - minval) / (maxval - minval)) * 255
+            ).astype(np.uint8)

         # mask frame
         resized_frame[self.mask] = [255]

@@ -49,6 +55,8 @@ class MotionDetector:
         if self.frame_counter < 30:
             self.frame_counter += 1
         else:
+            if self.save_images:
+                self.frame_counter += 1
             # compare to average
             frameDelta = cv2.absdiff(resized_frame, cv2.convertScaleAbs(self.avg_frame))

@@ -58,7 +66,6 @@ class MotionDetector:
         cv2.accumulateWeighted(frameDelta, self.avg_delta, self.config.delta_alpha)

         # compute the threshold image for the current frame
-        # TODO: threshold
         current_thresh = cv2.threshold(
             frameDelta, self.config.threshold, 255, cv2.THRESH_BINARY
         )[1]

@@ -75,8 +82,10 @@ class MotionDetector:

         # dilate the thresholded image to fill in holes, then find contours
         # on thresholded image
-        thresh = cv2.dilate(thresh, None, iterations=2)
-        cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+        thresh_dilated = cv2.dilate(thresh, None, iterations=2)
+        cnts = cv2.findContours(
+            thresh_dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
+        )
         cnts = imutils.grab_contours(cnts)

         # loop over the contours

@@ -94,6 +103,35 @@ class MotionDetector:
                 )
             )

+        if self.save_images:
+            thresh_dilated = cv2.cvtColor(thresh_dilated, cv2.COLOR_GRAY2BGR)
+            # print("--------")
+            # print(self.frame_counter)
+            for c in cnts:
+                contour_area = cv2.contourArea(c)
+                # print(contour_area)
+                if contour_area > self.config.contour_area:
+                    x, y, w, h = cv2.boundingRect(c)
+                    cv2.rectangle(
+                        thresh_dilated,
+                        (x, y),
+                        (x + w, y + h),
+                        (0, 0, 255),
+                        2,
+                    )
+            # print("--------")
+            image_row_1 = cv2.hconcat(
+                [
+                    cv2.cvtColor(frameDelta, cv2.COLOR_GRAY2BGR),
+                    cv2.cvtColor(avg_delta_image, cv2.COLOR_GRAY2BGR),
+                ]
+            )
+            image_row_2 = cv2.hconcat(
+                [cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR), thresh_dilated]
+            )
+            combined_image = cv2.vconcat([image_row_1, image_row_2])
+            cv2.imwrite(f"motion/motion-{self.frame_counter}.jpg", combined_image)
+
         if len(motion_boxes) > 0:
             self.motion_frame_count += 1
             if self.motion_frame_count >= 10:
@@ -603,6 +603,8 @@ class TrackedObjectProcessor(threading.Thread):
             self.event_queue.put(("start", camera, obj.to_dict()))

         def update(camera, obj: TrackedObject, current_frame_time):
+            obj.has_snapshot = self.should_save_snapshot(camera, obj)
+            obj.has_clip = self.should_retain_recording(camera, obj)
             after = obj.to_dict()
             message = {
                 "before": obj.previous,

@@ -613,6 +615,9 @@ class TrackedObjectProcessor(threading.Thread):
                 f"{self.topic_prefix}/events", json.dumps(message), retain=False
             )
             obj.previous = after
+            self.event_queue.put(
+                ("update", camera, obj.to_dict(include_thumbnail=True))
+            )

         def end(camera, obj: TrackedObject, current_frame_time):
             # populate has_snapshot
@@ -13,7 +13,7 @@ import numpy as np
 from scipy.spatial import distance as dist

 from frigate.config import DetectConfig
-from frigate.util import draw_box_with_label
+from frigate.util import intersection_over_union


 class ObjectTracker:

@@ -27,6 +27,7 @@ class ObjectTracker:
         id = f"{obj['frame_time']}-{rand_id}"
         obj["id"] = id
         obj["start_time"] = obj["frame_time"]
+        obj["motionless_count"] = 0
         self.tracked_objects[id] = obj
         self.disappeared[id] = 0

@@ -36,6 +37,13 @@ class ObjectTracker:
     def update(self, id, new_obj):
         self.disappeared[id] = 0
+        if (
+            intersection_over_union(self.tracked_objects[id]["box"], new_obj["box"])
+            > 0.9
+        ):
+            self.tracked_objects[id]["motionless_count"] += 1
+        else:
+            self.tracked_objects[id]["motionless_count"] = 0
         self.tracked_objects[id].update(new_obj)

     def match_and_update(self, frame_time, new_objects):
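An IoU above 0.9 means the new box is almost exactly where the previous one was, which is what lets motionless_count accumulate for stationary objects. For reference, the standard intersection-over-union formulation (a standalone sketch, not frigate.util's exact implementation):

```
def iou(a, b):
    # boxes are (xmin, ymin, xmax, ymax)
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(0, w) * max(0, h)
    union = (
        (a[2] - a[0]) * (a[3] - a[1])
        + (b[2] - b[0]) * (b[3] - b[1])
        - inter
    )
    return inter / union

# a 100px box that shifted by 2px still scores above 0.9
print(iou((0, 0, 100, 100), (2, 0, 102, 100)))  # ~0.96
```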
@@ -1,4 +1,5 @@
 import datetime
+import time
 import itertools
 import logging
 import os

@@ -7,6 +8,7 @@ import shutil
 import string
 import subprocess as sp
 import threading
+from collections import defaultdict
 from pathlib import Path

 import psutil

@@ -43,9 +45,11 @@ class RecordingMaintainer(threading.Thread):
         self.name = "recording_maint"
         self.config = config
         self.stop_event = stop_event
+        self.first_pass = True
+        self.end_time_cache = {}

     def move_files(self):
-        recordings = [
+        cache_files = [
            d
            for d in os.listdir(CACHE_DIR)
            if os.path.isfile(os.path.join(CACHE_DIR, d))

@@ -66,7 +70,9 @@ class RecordingMaintainer(threading.Thread):
             except:
                 continue

-        for f in recordings:
+        # group recordings by camera
+        grouped_recordings = defaultdict(list)
+        for f in cache_files:
             # Skip files currently in use
             if f in files_in_use:
                 continue

@@ -76,45 +82,130 @@ class RecordingMaintainer(threading.Thread):
             camera, date = basename.rsplit("-", maxsplit=1)
             start_time = datetime.datetime.strptime(date, "%Y%m%d%H%M%S")

-            # Just delete files if recordings are turned off
-            if (
-                not camera in self.config.cameras
-                or not self.config.cameras[camera].record.enabled
-            ):
-                Path(cache_path).unlink(missing_ok=True)
-                continue
-
-            ffprobe_cmd = [
-                "ffprobe",
-                "-v",
-                "error",
-                "-show_entries",
-                "format=duration",
-                "-of",
-                "default=noprint_wrappers=1:nokey=1",
-                f"{cache_path}",
-            ]
-            p = sp.run(ffprobe_cmd, capture_output=True)
-            if p.returncode == 0:
-                duration = float(p.stdout.decode().strip())
-                end_time = start_time + datetime.timedelta(seconds=duration)
-            else:
-                logger.warning(f"Discarding a corrupt recording segment: {f}")
-                Path(cache_path).unlink(missing_ok=True)
-                continue
-
-            directory = os.path.join(
-                RECORD_DIR, start_time.strftime("%Y-%m/%d/%H"), camera
+            grouped_recordings[camera].append(
+                {
+                    "cache_path": cache_path,
+                    "start_time": start_time,
+                }
             )

-            if not os.path.exists(directory):
-                os.makedirs(directory)
+        # delete all cached files past the most recent 5
+        keep_count = 5
+        for camera in grouped_recordings.keys():
+            if len(grouped_recordings[camera]) > keep_count:
+                sorted_recordings = sorted(
+                    grouped_recordings[camera], key=lambda i: i["start_time"]
+                )
+                to_remove = sorted_recordings[:-keep_count]
+                for f in to_remove:
+                    Path(f["cache_path"]).unlink(missing_ok=True)
+                    self.end_time_cache.pop(f["cache_path"], None)
+                grouped_recordings[camera] = sorted_recordings[-keep_count:]

-            file_name = f"{start_time.strftime('%M.%S.mp4')}"
-            file_path = os.path.join(directory, file_name)
+        for camera, recordings in grouped_recordings.items():
+            # get all events with the end time after the start of the oldest cache file
+            # or with end_time None
+            events: Event = (
+                Event.select()
+                .where(
+                    Event.camera == camera,
+                    (Event.end_time == None)
+                    | (Event.end_time >= recordings[0]["start_time"]),
+                    Event.has_clip,
+                )
+                .order_by(Event.start_time)
+            )
+            for r in recordings:
+                cache_path = r["cache_path"]
+                start_time = r["start_time"]
+
+                # Just delete files if recordings are turned off
+                if (
+                    not camera in self.config.cameras
+                    or not self.config.cameras[camera].record.enabled
+                ):
+                    Path(cache_path).unlink(missing_ok=True)
+                    self.end_time_cache.pop(cache_path, None)
+                    continue
+
+                if cache_path in self.end_time_cache:
+                    end_time, duration = self.end_time_cache[cache_path]
+                else:
+                    ffprobe_cmd = [
+                        "ffprobe",
+                        "-v",
+                        "error",
+                        "-show_entries",
+                        "format=duration",
+                        "-of",
+                        "default=noprint_wrappers=1:nokey=1",
+                        f"{cache_path}",
+                    ]
+                    p = sp.run(ffprobe_cmd, capture_output=True)
+                    if p.returncode == 0:
+                        duration = float(p.stdout.decode().strip())
+                        end_time = start_time + datetime.timedelta(seconds=duration)
+                        self.end_time_cache[cache_path] = (end_time, duration)
+                    else:
+                        logger.warning(f"Discarding a corrupt recording segment: {f}")
+                        Path(cache_path).unlink(missing_ok=True)
+                        continue
+
+                # if cached file's start_time is earlier than the retain_days for the camera
+                if start_time <= (
+                    (
+                        datetime.datetime.now()
+                        - datetime.timedelta(
+                            days=self.config.cameras[camera].record.retain_days
+                        )
+                    )
+                ):
+                    # if the cached segment overlaps with the events:
+                    overlaps = False
+                    for event in events:
+                        # if the event starts in the future, stop checking events
+                        # and remove this segment
+                        if event.start_time > end_time.timestamp():
+                            overlaps = False
+                            break
+
+                        # if the event is in progress or ends after the recording starts, keep it
+                        # and stop looking at events
+                        if event.end_time is None or event.end_time >= start_time:
+                            overlaps = True
+                            break
+
+                    if overlaps:
+                        # move from cache to recordings immediately
+                        self.store_segment(
+                            camera,
+                            start_time,
+                            end_time,
+                            duration,
+                            cache_path,
+                        )
+                # else retain_days includes this segment
+                else:
+                    self.store_segment(
+                        camera, start_time, end_time, duration, cache_path
+                    )
+
+    def store_segment(self, camera, start_time, end_time, duration, cache_path):
+        directory = os.path.join(RECORD_DIR, start_time.strftime("%Y-%m/%d/%H"), camera)
+
+        if not os.path.exists(directory):
+            os.makedirs(directory)
+
+        file_name = f"{start_time.strftime('%M.%S.mp4')}"
+        file_path = os.path.join(directory, file_name)

         try:
             start_frame = datetime.datetime.now().timestamp()
             # copy then delete is required when recordings are stored on some network drives
             shutil.copyfile(cache_path, file_path)
             logger.debug(
                 f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
             )
             os.remove(cache_path)

             rand_id = "".join(

@@ -128,14 +219,34 @@ class RecordingMaintainer(threading.Thread):
                 end_time=end_time.timestamp(),
                 duration=duration,
             )
         except Exception as e:
             logger.error(f"Unable to store recording segment {cache_path}")
             Path(cache_path).unlink(missing_ok=True)
             logger.error(e)

+        # clear end_time cache
+        self.end_time_cache.pop(cache_path, None)
+
     def run(self):
         # Check for new files every 5 seconds
         wait_time = 5
         while not self.stop_event.wait(wait_time):
             run_start = datetime.datetime.now().timestamp()
-            self.move_files()
-            wait_time = max(0, 5 - (datetime.datetime.now().timestamp() - run_start))
+            try:
+                self.move_files()
+            except Exception as e:
+                logger.error(
+                    "Error occurred when attempting to maintain recording cache"
+                )
+                logger.error(e)
+            duration = datetime.datetime.now().timestamp() - run_start
+            wait_time = max(0, 5 - duration)
+            if wait_time == 0 and not self.first_pass:
+                logger.warning(
+                    "Cache is taking longer than 5 seconds to clear. Your recordings disk may be too slow."
+                )
+            if self.first_pass:
+                self.first_pass = False

         logger.info(f"Exiting recording maintenance...")

@@ -231,9 +342,9 @@ class RecordingCleanup(threading.Thread):
                 keep = False
                 break

-            # if the event ends after the recording starts, keep it
+            # if the event is in progress or ends after the recording starts, keep it
             # and stop looking at events
-            if event.end_time >= recording.start_time:
+            if event.end_time is None or event.end_time >= recording.start_time:
                 keep = True
                 break

@@ -280,6 +391,9 @@ class RecordingCleanup(threading.Thread):
             oldest_timestamp = p.stat().st_mtime - 1
         except DoesNotExist:
             oldest_timestamp = datetime.datetime.now().timestamp()
+        except FileNotFoundError:
+            logger.warning(f"Unable to find file from recordings database: {p}")
+            oldest_timestamp = datetime.datetime.now().timestamp()

         logger.debug(f"Oldest recording in the db: {oldest_timestamp}")
         process = sp.run(
frigate/test/test_reduce_boxes.py | 27 (new file)

@@ -0,0 +1,27 @@
+import cv2
+import numpy as np
+from unittest import TestCase, main
+from frigate.video import box_overlaps, reduce_boxes
+
+
+class TestBoxOverlaps(TestCase):
+    def test_overlap(self):
+        assert box_overlaps((100, 100, 200, 200), (50, 50, 150, 150))
+
+    def test_overlap_2(self):
+        assert box_overlaps((50, 50, 150, 150), (100, 100, 200, 200))
+
+    def test_no_overlap(self):
+        assert not box_overlaps((100, 100, 200, 200), (250, 250, 350, 350))
+
+
+class TestReduceBoxes(TestCase):
+    def test_cluster(self):
+        clusters = reduce_boxes(
+            [(144, 290, 221, 459), (225, 178, 426, 341), (343, 105, 584, 250)]
+        )
+        assert len(clusters) == 2
+
+
+if __name__ == "__main__":
+    main(verbosity=2)
@@ -191,7 +191,7 @@ def draw_box_with_label(

 def calculate_region(frame_shape, xmin, ymin, xmax, ymax, multiplier=2):
     # size is the longest edge and divisible by 4
-    size = int(max(xmax - xmin, ymax - ymin) // 4 * 4 * multiplier)
+    size = int((max(xmax - xmin, ymax - ymin) * multiplier) // 4 * 4)
     # don't go any smaller than 300
     if size < 300:
         size = 300
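The reordering matters because truncation to a multiple of 4 now happens after the multiplier is applied, so the computed region loses less of the intended size. A quick sketch of the difference (helper names are illustrative):

```
def old_size(longest_edge, multiplier=2):
    return int(longest_edge // 4 * 4 * multiplier)    # truncate, then scale

def new_size(longest_edge, multiplier=2):
    return int((longest_edge * multiplier) // 4 * 4)  # scale, then truncate

print(old_size(10), new_size(10))  # 16 vs 20 for an intended 10 * 2 = 20
```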
frigate/video.py | 140

@@ -3,18 +3,18 @@ import itertools
 import logging
 import multiprocessing as mp
 import queue
-import subprocess as sp
 import signal
+import subprocess as sp
 import threading
 import time
 from collections import defaultdict
-from setproctitle import setproctitle
 from typing import Dict, List

-from cv2 import cv2
 import numpy as np
+from cv2 import cv2, reduce
+from setproctitle import setproctitle

-from frigate.config import CameraConfig
+from frigate.config import CameraConfig, DetectConfig
 from frigate.edgetpu import RemoteObjectDetector
 from frigate.log import LogPipe
 from frigate.motion import MotionDetector
@@ -23,8 +23,11 @@ from frigate.util import (
     EventsPerSecond,
     FrameManager,
     SharedMemoryFrameManager,
+    area,
     calculate_region,
     clipped,
+    intersection,
+    intersection_over_union,
     listen,
     yuv_region_2_rgb,
 )

@@ -364,6 +367,7 @@ def track_camera(
         frame_queue,
         frame_shape,
         model_shape,
+        config.detect,
         frame_manager,
         motion_detector,
         object_detector,
@@ -379,26 +383,36 @@ def track_camera(
     logger.info(f"{name}: exiting subprocess")


-def reduce_boxes(boxes):
-    if len(boxes) == 0:
-        return []
-    reduced_boxes = cv2.groupRectangles(
-        [list(b) for b in itertools.chain(boxes, boxes)], 1, 0.2
-    )[0]
-    return [tuple(b) for b in reduced_boxes]
+def box_overlaps(b1, b2):
+    if b1[2] < b2[0] or b1[0] > b2[2] or b1[1] > b2[3] or b1[3] < b2[1]:
+        return False
+    return True
+
+
+def reduce_boxes(boxes, iou_threshold=0.0):
+    clusters = []
+
+    for box in boxes:
+        matched = 0
+        for cluster in clusters:
+            if intersection_over_union(box, cluster) > iou_threshold:
+                matched = 1
+                cluster[0] = min(cluster[0], box[0])
+                cluster[1] = min(cluster[1], box[1])
+                cluster[2] = max(cluster[2], box[2])
+                cluster[3] = max(cluster[3], box[3])
+
+        if not matched:
+            clusters.append(list(box))
+
+    return [tuple(c) for c in clusters]


 # modified from https://stackoverflow.com/a/40795835
 def intersects_any(box_a, boxes):
     for box in boxes:
-        if (
-            box_a[2] < box[0]
-            or box_a[0] > box[2]
-            or box_a[1] > box[3]
-            or box_a[3] < box[1]
-        ):
-            continue
-        return True
+        if box_overlaps(box_a, box):
+            return True
     return False


 def detect(
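A quick illustration of the new clustering (using the same boxes as the unit test in frigate/test/test_reduce_boxes.py above; assumes reduce_boxes is imported from frigate.video):

```
boxes = [(144, 290, 221, 459), (225, 178, 426, 341), (343, 105, 584, 250)]
print(reduce_boxes(boxes))       # 2 clusters: the last two boxes overlap and merge
print(reduce_boxes(boxes, 0.4))  # 3 clusters: their IoU (~0.10) is below 0.4
```

The default threshold of 0.0 merges any touching boxes, while the 0.4 threshold is what process_frames uses below to consolidate only regions with heavy overlap.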
@@ -434,6 +448,7 @@ def process_frames(
     frame_queue: mp.Queue,
     frame_shape,
     model_shape,
+    detect_config: DetectConfig,
     frame_manager: FrameManager,
     motion_detector: MotionDetector,
     object_detector: RemoteObjectDetector,
@@ -487,11 +502,28 @@ def process_frames(
         # look for motion
         motion_boxes = motion_detector.detect(frame)

-        # only get the tracked object boxes that intersect with motion
+        # get stationary object ids
+        # check every Nth frame for stationary objects
+        # disappeared objects are not stationary
+        # also check for overlapping motion boxes
+        stationary_object_ids = [
+            obj["id"]
+            for obj in object_tracker.tracked_objects.values()
+            # if there hasn't been motion for 10 frames
+            if obj["motionless_count"] >= 10
+            # and it isn't due for a periodic check
+            and obj["motionless_count"] % detect_config.stationary_interval != 0
+            # and it hasn't disappeared
+            and object_tracker.disappeared[obj["id"]] == 0
+            # and it doesn't overlap with any current motion boxes
+            and not intersects_any(obj["box"], motion_boxes)
+        ]
+
+        # get tracked object boxes that aren't stationary
         tracked_object_boxes = [
             obj["box"]
             for obj in object_tracker.tracked_objects.values()
-            if intersects_any(obj["box"], motion_boxes)
+            if not obj["id"] in stationary_object_ids
         ]

         # combine motion boxes with known locations of existing objects
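With the defaults shown in the configuration docs above (max_disappeared: 25 and stationary_interval: 50, i.e. 5x and 10x a 5 fps detect rate), a stationary object is re-examined roughly every ten seconds. A sketch of which motionless counts trigger a re-check:

```
fps = 5
stationary_interval = fps * 10  # the documented default: 10x the frame rate

# objects sitting still are skipped except on every Nth motionless frame
rechecks = [n for n in range(1, 151) if n >= 10 and n % stationary_interval == 0]
print(rechecks)  # [50, 100, 150] -> about once every 10 seconds at 5 fps
```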
@@ -503,17 +535,25 @@ def process_frames(
             for a in combined_boxes
         ]

-        # combine overlapping regions
-        combined_regions = reduce_boxes(regions)
-
-        # re-compute regions
+        # consolidate regions with heavy overlap
         regions = [
             calculate_region(frame_shape, a[0], a[1], a[2], a[3], 1.0)
-            for a in combined_regions
+            for a in reduce_boxes(regions, 0.4)
         ]

         # resize regions and detect
-        detections = []
+        # seed with stationary objects
+        detections = [
+            (
+                obj["label"],
+                obj["score"],
+                obj["box"],
+                obj["area"],
+                obj["region"],
+            )
+            for obj in object_tracker.tracked_objects.values()
+            if obj["id"] in stationary_object_ids
+        ]
         for region in regions:
             detections.extend(
                 detect(
@@ -582,14 +622,46 @@ def process_frames(
         if refining:
             refine_count += 1

-        # Limit to the detections overlapping with motion areas
-        # to avoid picking up stationary background objects
-        detections_with_motion = [
-            d for d in detections if intersects_any(d[2], motion_boxes)
-        ]
+        ## drop detections that overlap too much
+        consolidated_detections = []
+        # group by name
+        detected_object_groups = defaultdict(lambda: [])
+        for detection in detections:
+            detected_object_groups[detection[0]].append(detection)
+
+        # loop over detections grouped by label
+        for group in detected_object_groups.values():
+            # if the group only has 1 item, skip
+            if len(group) == 1:
+                consolidated_detections.append(group[0])
+                continue
+
+            # sort smallest to largest by area
+            sorted_by_area = sorted(group, key=lambda g: g[3])
+
+            for current_detection_idx in range(0, len(sorted_by_area)):
+                current_detection = sorted_by_area[current_detection_idx][2]
+                overlap = 0
+                for to_check_idx in range(
+                    min(current_detection_idx + 1, len(sorted_by_area)),
+                    len(sorted_by_area),
+                ):
+                    to_check = sorted_by_area[to_check_idx][2]
+                    # if 90% of smaller detection is inside of another detection, consolidate
+                    if (
+                        area(intersection(current_detection, to_check))
+                        / area(current_detection)
+                        > 0.9
+                    ):
+                        overlap = 1
+                        break
+                if overlap == 0:
+                    consolidated_detections.append(
+                        sorted_by_area[current_detection_idx]
+                    )

         # now that we have refined our detections, we need to track objects
-        object_tracker.match_and_update(frame_time, detections_with_motion)
+        object_tracker.match_and_update(frame_time, consolidated_detections)

         # add to the queue if not full
         if detected_objects_queue.full():
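The 90% test compares the intersection area to the smaller detection's own area, not IoU, so a small box fully inside a larger one is always consolidated. A standalone toy check (reimplementing the two frigate.util helpers for illustration):

```
def box_area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def intersection_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

small = (100, 100, 200, 200)   # area 10000
large = (90, 90, 300, 300)
print(intersection_area(small, large) / box_area(small))  # 1.0 > 0.9 -> drop small
```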
migrations/005_make_end_time_nullable.py | 43 (new file)

@@ -0,0 +1,43 @@
+"""Peewee migrations -- 005_make_end_time_nullable.py.
+
+Some examples (model - class or model name)::
+
+    > Model = migrator.orm['model_name']            # Return model in current state by name
+
+    > migrator.sql(sql)                             # Run custom SQL
+    > migrator.python(func, *args, **kwargs)        # Run python code
+    > migrator.create_model(Model)                  # Create a model (could be used as decorator)
+    > migrator.remove_model(model, cascade=True)    # Remove a model
+    > migrator.add_fields(model, **fields)          # Add fields to a model
+    > migrator.change_fields(model, **fields)       # Change fields
+    > migrator.remove_fields(model, *field_names, cascade=True)
+    > migrator.rename_field(model, old_field_name, new_field_name)
+    > migrator.rename_table(model, new_table_name)
+    > migrator.add_index(model, *col_names, unique=False)
+    > migrator.drop_index(model, *col_names)
+    > migrator.add_not_null(model, *field_names)
+    > migrator.drop_not_null(model, *field_names)
+    > migrator.add_default(model, field_name, default)
+
+"""
+
+import datetime as dt
+import peewee as pw
+from playhouse.sqlite_ext import *
+from decimal import ROUND_HALF_EVEN
+from frigate.models import Event
+
+try:
+    import playhouse.postgres_ext as pw_pext
+except ImportError:
+    pass
+
+SQL = pw.SQL
+
+
+def migrate(migrator, database, fake=False, **kwargs):
+    migrator.drop_not_null(Event, "end_time")
+
+
+def rollback(migrator, database, fake=False, **kwargs):
+    pass
@@ -1,23 +1,26 @@
 import datetime
+import sys
+from typing_extensions import runtime

 sys.path.append("/lab/frigate")

 import json
 import logging
 import multiprocessing as mp
 import os
 import subprocess as sp
-import sys
 from unittest import TestCase, main

 import click
+import csv
 import cv2
 import numpy as np

-from frigate.config import FRIGATE_CONFIG_SCHEMA, FrigateConfig
+from frigate.config import FrigateConfig
 from frigate.edgetpu import LocalObjectDetector
 from frigate.motion import MotionDetector
 from frigate.object_processing import CameraState
 from frigate.objects import ObjectTracker
 from frigate.util import (
-    DictFrameManager,
     EventsPerSecond,
     SharedMemoryFrameManager,
     draw_box_with_label,

@@ -96,20 +99,22 @@ class ProcessClip:
         ffmpeg_process.wait()
         ffmpeg_process.communicate()

-    def process_frames(self, objects_to_track=["person"], object_filters={}):
+    def process_frames(
+        self, object_detector, objects_to_track=["person"], object_filters={}
+    ):
         mask = np.zeros((self.frame_shape[0], self.frame_shape[1], 1), np.uint8)
         mask[:] = 255
-        motion_detector = MotionDetector(
-            self.frame_shape, mask, self.camera_config.motion
-        )
+        motion_detector = MotionDetector(self.frame_shape, self.camera_config.motion)
+        motion_detector.save_images = False

-        object_detector = LocalObjectDetector(labels="/labelmap.txt")
         object_tracker = ObjectTracker(self.camera_config.detect)
         process_info = {
             "process_fps": mp.Value("d", 0.0),
             "detection_fps": mp.Value("d", 0.0),
             "detection_frame": mp.Value("d", 0.0),
         }

         detection_enabled = mp.Value("d", 1)
         stop_event = mp.Event()
         model_shape = (self.config.model.height, self.config.model.width)

@@ -118,6 +123,7 @@ class ProcessClip:
             self.frame_queue,
             self.frame_shape,
             model_shape,
+            self.camera_config.detect,
             self.frame_manager,
             motion_detector,
             object_detector,

@@ -126,25 +132,16 @@ class ProcessClip:
             process_info,
             objects_to_track,
             object_filters,
-            mask,
             detection_enabled,
             stop_event,
             exit_on_empty=True,
         )

-    def top_object(self, debug_path=None):
-        obj_detected = False
-        top_computed_score = 0.0
-
-        def handle_event(name, obj, frame_time):
-            nonlocal obj_detected
-            nonlocal top_computed_score
-            if obj.computed_score > top_computed_score:
-                top_computed_score = obj.computed_score
-            if not obj.false_positive:
-                obj_detected = True
-
-        self.camera_state.on("new", handle_event)
-        self.camera_state.on("update", handle_event)
+    def stats(self, debug_path=None):
+        total_regions = 0
+        total_motion_boxes = 0
+        object_ids = set()
+        total_frames = 0

         while not self.detected_objects_queue.empty():
             (

@@ -154,7 +151,8 @@ class ProcessClip:
                 motion_boxes,
                 regions,
             ) = self.detected_objects_queue.get()
-            if not debug_path is None:
+
+            if debug_path:
                 self.save_debug_frame(
                     debug_path, frame_time, current_tracked_objects.values()
                 )

@@ -162,10 +160,22 @@ class ProcessClip:
             self.camera_state.update(
                 frame_time, current_tracked_objects, motion_boxes, regions
             )
+            total_regions += len(regions)
+            total_motion_boxes += len(motion_boxes)
+            for id, obj in self.camera_state.tracked_objects.items():
+                if not obj.false_positive:
+                    object_ids.add(id)
+
+            total_frames += 1

             self.frame_manager.delete(self.camera_state.previous_frame_id)

-        return {"object_detected": obj_detected, "top_score": top_computed_score}
+        return {
+            "total_regions": total_regions,
+            "total_motion_boxes": total_motion_boxes,
+            "true_positive_objects": len(object_ids),
+            "total_frames": total_frames,
+        }

     def save_debug_frame(self, debug_path, frame_time, tracked_objects):
         current_frame = cv2.cvtColor(

@@ -178,7 +188,6 @@ class ProcessClip:
         for obj in tracked_objects:
             thickness = 2
             color = (0, 0, 175)
-
             if obj["frame_time"] != frame_time:
                 thickness = 1
                 color = (255, 0, 0)

@@ -221,10 +230,9 @@ class ProcessClip:
 @click.command()
 @click.option("-p", "--path", required=True, help="Path to clip or directory to test.")
 @click.option("-l", "--label", default="person", help="Label name to detect.")
-@click.option("-t", "--threshold", default=0.85, help="Threshold value for objects.")
-@click.option("-s", "--scores", default=None, help="File to save csv of top scores")
+@click.option("-o", "--output", default=None, help="File to save csv of data")
 @click.option("--debug-path", default=None, help="Path to output frames for debugging.")
-def process(path, label, threshold, scores, debug_path):
+def process(path, label, output, debug_path):
     clips = []
     if os.path.isdir(path):
         files = os.listdir(path)

@@ -235,51 +243,78 @@ def process(path, label, output, debug_path):

     json_config = {
         "mqtt": {"host": "mqtt"},
+        "detectors": {"coral": {"type": "edgetpu", "device": "usb"}},
         "cameras": {
             "camera": {
                 "ffmpeg": {
                     "inputs": [
                         {
                             "path": "path.mp4",
-                            "global_args": "",
-                            "input_args": "",
+                            "global_args": "-hide_banner",
+                            "input_args": "-loglevel info",
                             "roles": ["detect"],
                         }
                     ]
                 },
-                "height": 1920,
-                "width": 1080,
+                "rtmp": {"enabled": False},
+                "record": {"enabled": False},
             }
         },
     }

+    object_detector = LocalObjectDetector(labels="/labelmap.txt")
+
     results = []
     for c in clips:
         logger.info(c)
         frame_shape = get_frame_shape(c)

-        json_config["cameras"]["camera"]["height"] = frame_shape[0]
-        json_config["cameras"]["camera"]["width"] = frame_shape[1]
+        json_config["cameras"]["camera"]["detect"] = {
+            "height": frame_shape[0],
+            "width": frame_shape[1],
+        }
         json_config["cameras"]["camera"]["ffmpeg"]["inputs"][0]["path"] = c

-        config = FrigateConfig(config=FRIGATE_CONFIG_SCHEMA(json_config))
+        frigate_config = FrigateConfig(**json_config)
+        runtime_config = frigate_config.runtime_config

-        process_clip = ProcessClip(c, frame_shape, config)
+        process_clip = ProcessClip(c, frame_shape, runtime_config)
         process_clip.load_frames()
-        process_clip.process_frames(objects_to_track=[label])
+        process_clip.process_frames(object_detector, objects_to_track=[label])

-        results.append((c, process_clip.top_object(debug_path)))
+        results.append((c, process_clip.stats(debug_path)))

-    if not scores is None:
-        with open(scores, "w") as writer:
-            for result in results:
-                writer.write(f"{result[0]},{result[1]['top_score']}\n")
-
-    positive_count = sum(1 for result in results if result[1]["object_detected"])
+    positive_count = sum(
+        1 for result in results if result[1]["true_positive_objects"] > 0
+    )
     print(
         f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s)."
     )

+    if output:
+        # now we will open a file for writing
+        data_file = open(output, "w")
+
+        # create the csv writer object
+        csv_writer = csv.writer(data_file)
+
+        # Counter variable used for writing
+        # headers to the CSV file
+        count = 0
+
+        for result in results:
+            if count == 0:
+                # Writing headers of CSV file
+                header = ["file"] + list(result[1].keys())
+                csv_writer.writerow(header)
+            count += 1
+
+            # Writing data of CSV file
+            csv_writer.writerow([result[0]] + list(result[1].values()))
+
+        data_file.close()


 if __name__ == "__main__":
     process()
@@ -121,12 +121,12 @@ describe('MqttProvider', () => {
       </MqttProvider>
     );
     await screen.findByTestId('data');
-    expect(screen.getByTestId('front/detect/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"ON"}');
-    expect(screen.getByTestId('front/recordings/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF"}');
-    expect(screen.getByTestId('front/snapshots/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"ON"}');
-    expect(screen.getByTestId('side/detect/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF"}');
-    expect(screen.getByTestId('side/recordings/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF"}');
-    expect(screen.getByTestId('side/snapshots/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF"}');
+    expect(screen.getByTestId('front/detect/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"ON","retain":true}');
+    expect(screen.getByTestId('front/recordings/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF","retain":true}');
+    expect(screen.getByTestId('front/snapshots/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"ON","retain":true}');
+    expect(screen.getByTestId('side/detect/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF","retain":true}');
+    expect(screen.getByTestId('side/recordings/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF","retain":true}');
+    expect(screen.getByTestId('side/snapshots/state')).toHaveTextContent('{"lastUpdate":123456,"payload":"OFF","retain":true}');
   });
 });
@@ -42,9 +42,9 @@ export function MqttProvider({
   useEffect(() => {
     Object.keys(config.cameras).forEach((camera) => {
       const { name, record, detect, snapshots } = config.cameras[camera];
-      dispatch({ topic: `${name}/recordings/state`, payload: record.enabled ? 'ON' : 'OFF' });
-      dispatch({ topic: `${name}/detect/state`, payload: detect.enabled ? 'ON' : 'OFF' });
-      dispatch({ topic: `${name}/snapshots/state`, payload: snapshots.enabled ? 'ON' : 'OFF' });
+      dispatch({ topic: `${name}/recordings/state`, payload: record.enabled ? 'ON' : 'OFF', retain: true });
+      dispatch({ topic: `${name}/detect/state`, payload: detect.enabled ? 'ON' : 'OFF', retain: true });
+      dispatch({ topic: `${name}/snapshots/state`, payload: snapshots.enabled ? 'ON' : 'OFF', retain: true });
     });
   }, [config]);
|
||||
}
|
||||
|
||||
const startime = new Date(data.start_time * 1000);
|
||||
const endtime = new Date(data.end_time * 1000);
|
||||
const endtime = data.end_time ? new Date(data.end_time * 1000) : null;
|
||||
return (
|
||||
<div className="space-y-4">
|
||||
<div className="flex md:flex-row justify-between flex-wrap flex-col">
|
||||
@@ -155,7 +155,7 @@ export default function Event({ eventId, close, scrollRef }) {
|
||||
<Tr index={1}>
|
||||
<Td>Timeframe</Td>
|
||||
<Td>
|
||||
{startime.toLocaleString()} – {endtime.toLocaleString()}
|
||||
{startime.toLocaleString()}{endtime === null ? ` – ${endtime.toLocaleString()}`:''}
|
||||
</Td>
|
||||
</Tr>
|
||||
<Tr>
|
||||
@@ -186,7 +186,7 @@ export default function Event({ eventId, close, scrollRef }) {
|
||||
},
|
||||
],
|
||||
poster: data.has_snapshot
|
||||
? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
|
||||
? `${apiHost}/api/events/${eventId}/snapshot.jpg`
|
||||
: `data:image/jpeg;base64,${data.thumbnail}`,
|
||||
}}
|
||||
seekOptions={{ forward: 10, back: 5 }}
|
||||
|
@@ -42,7 +42,7 @@ const EventsRow = memo(
   );

   const start = new Date(parseInt(startTime * 1000, 10));
-  const end = new Date(parseInt(endTime * 1000, 10));
+  const end = endTime ? new Date(parseInt(endTime * 1000, 10)) : null;

   return (
     <Tbody reference={innerRef}>

@@ -102,7 +102,7 @@ const EventsRow = memo(
         </Td>
         <Td>{start.toLocaleDateString()}</Td>
         <Td>{start.toLocaleTimeString()}</Td>
-        <Td>{end.toLocaleTimeString()}</Td>
+        <Td>{end === null ? 'In progress' : end.toLocaleTimeString()}</Td>
       </Tr>
       {viewEvent === id ? (
         <Tr className="border-b-1">