Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-10-29 10:12:45 +08:00)

Compare commits: dependabot...v0.16.2 (8 commits)
| SHA1 |
|---|
| 4d582062fb |
| e0a8445bac |
| 2a271c0f5e |
| 925bf78811 |
| 59102794e8 |
| 20e5e3bdc0 |
| b94ebda9e5 |
| 8cdaef307a |
@@ -213,7 +213,7 @@ go2rtc:
  streams:
    your_reolink_doorbell:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=username&password=password#video=copy#audio=copy#audio=opus"
-      - rtsp://reolink_ip/Preview_01_sub
+      - rtsp://username:password@reolink_ip/Preview_01_sub
    your_reolink_doorbell_sub:
      - "ffmpeg:http://reolink_ip/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=username&password=password"
```
@@ -158,6 +158,8 @@ Start with the [Usage](#usage) section and re-read the [Model Requirements](#mod

Accuracy will definitely improve with higher quality cameras / streams. It is important to look at the DORI (Detection, Observation, Recognition, Identification) range of your camera, if that specification is published. It describes the distance from the camera at which a person can be detected, observed, recognized, and identified. The identification range is the most relevant here, and the distance listed for the camera is the furthest at which face recognition will realistically work.

Some users have also noted that setting the stream in the camera firmware to a constant bit rate (CBR) leads to better image clarity than a variable bit rate (VBR).

### Why can't I bulk upload photos?

It is important to add photos to the library methodically; bulk importing photos (especially from a general photo library) will lead to over-fitting on that particular scenario and hurt recognition performance.
@@ -18,10 +18,10 @@ genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
-  model: gemini-1.5-flash
+  model: gemini-2.0-flash

cameras:
  front_camera:
    genai:
      enabled: True # <- enable GenAI for your front camera
      use_snapshot: True
@@ -30,7 +30,7 @@ cameras:
      required_zones:
        - steps
  indoor_camera:
    genai:
      enabled: False # <- disable GenAI for your indoor camera
```
@@ -78,7 +78,7 @@ Google Gemini has a free tier allowing [15 queries per minute](https://ai.google

### Supported Models

-You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
+You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini).

### Get API Key
@@ -96,7 +96,7 @@ genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
-  model: gemini-1.5-flash
+  model: gemini-2.0-flash
```

:::note
@@ -202,7 +202,7 @@ genai:
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```

Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.

```yaml
cameras:
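A rough sketch of what such a camera-level override can look like (the camera name and prompt text below are placeholders, and the `prompt` / `object_prompts` keys are assumed from Frigate's GenAI documentation rather than taken from this diff):

```yaml
cameras:
  front_doorbell:
    genai:
      use_snapshot: True
      # {label} is substituted with the tracked object's label when the description is generated
      prompt: "Describe the {label} in these images from the front doorbell camera, focusing on what it is doing."
      object_prompts:
        person: "Describe the person at the front door and mention anything they are carrying or delivering."
```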
@@ -30,8 +30,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` or `motorcycle`

## Minimum System Requirements

-License plate recognition works by running AI models locally on your system. The models are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.
+License plate recognition works by running AI models locally on your system. The YOLOv9 plate detector model and the OCR models ([PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)) are relatively lightweight and can run on your CPU or GPU, depending on your configuration. At least 4GB of RAM is required.

## Configuration

License plate recognition is disabled by default. Enable it in your config file:
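A minimal sketch of that enable step, assuming the top-level `lpr` key from Frigate's LPR documentation (camera-level overrides and advanced options are described there as well):

```yaml
lpr:
  enabled: True
```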
@@ -1012,9 +1012,9 @@ FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /rfdetr
-RUN uv pip install --system rfdetr onnx onnxruntime onnxsim onnx-graphsurgeon
+RUN uv pip install --system rfdetr[onnxexport]
ARG MODEL_SIZE
-RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export()"
+RUN python3 -c "from rfdetr import RFDETR${MODEL_SIZE}; x = RFDETR${MODEL_SIZE}(resolution=320); x.export(simplify=True)"
FROM scratch
ARG MODEL_SIZE
COPY --from=build /rfdetr/output/inference_model.onnx /rfdetr-${MODEL_SIZE}.onnx
@@ -161,7 +161,14 @@ Message published for updates to tracked object metadata, for example:

### `frigate/reviews`

-Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated. When additional objects are detected or when a zone change occurs, it will publish a, `update` message with the same id. When the review activity has ended a final `end` message is published.
+Message published for each changed review item. The first message is published when the `detection` or `alert` is initiated.
+
+An `update` with the same ID will be published when:
+- The severity changes from `detection` to `alert`
+- Additional objects are detected
+- An object is recognized via face, lpr, etc.
+
+When the review activity has ended a final `end` message is published.

```json
{
@@ -42,6 +42,7 @@ Misidentified objects should have a correct label added. For example, if a perso
| `w`         | Add box                      |
| `d`         | Toggle difficult             |
| `s`         | Switch to the next label     |
+| `Shift + s` | Switch to the previous label |
| `tab`       | Select next largest box      |
| `del`       | Delete current box           |
| `esc`       | Deselect/Cancel              |
@@ -8,6 +8,7 @@ from pathlib import Path

import psutil
from fastapi import APIRouter, Depends, Request
from fastapi.responses import JSONResponse
+from pathvalidate import sanitize_filepath
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
@@ -15,7 +16,7 @@ from frigate.api.auth import require_role
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.request.export_rename_body import ExportRenameBody
from frigate.api.defs.tags import Tags
-from frigate.const import EXPORT_DIR
+from frigate.const import CLIPS_DIR, EXPORT_DIR
from frigate.models import Export, Previews, Recordings
from frigate.record.export import (
    PlaybackFactorEnum,
@@ -54,7 +55,14 @@ def export_recording(
    playback_factor = body.playback
    playback_source = body.source
    friendly_name = body.name
-    existing_image = body.image_path
+    existing_image = sanitize_filepath(body.image_path) if body.image_path else None
+
+    # Ensure that existing_image is a valid path
+    if existing_image and not existing_image.startswith(CLIPS_DIR):
+        return JSONResponse(
+            content=({"success": False, "message": "Invalid image path"}),
+            status_code=400,
+        )

    if playback_source == "recordings":
        recordings_count = (