Fixes (#18139)
* Catch error and show toast when failing to delete review items
* i18n keys
* add link to speed estimation docs in zone edit pane
* Implement reset of tracked object update for each camera
* Cleanup
* register mqtt callbacks for toggling alerts and detections
* clarify snapshots docs
* clarify semantic search reindexing
* add ukrainian
* adjust date granularity for last recording time

  The api endpoint only returns granularity down to the day

* Add amd hardware
* fix crash in face library on initial start after enabling
* Fix recordings view for mobile landscape

  The events view incorrectly was displaying two columns on landscape view and it only took up 20% of the screen width. Additionally, in landscape view the timeline was too wide (especially on iPads of various screen sizes) and would overlap the main video

* face rec overfitting instructions
* Clarify
* face docs
* clarify
* clarify

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
@@ -137,6 +137,15 @@ This can happen for a few different reasons, but this is usually an indicator th
- When you provide images with different poses, lighting, and expressions, the algorithm extracts features that are consistent across those variations.
- By training on a diverse set of images, the algorithm becomes less sensitive to minor variations and noise in the input image.

Review your face collections and remove most of the unclear or low-quality images. Then, use the **Reprocess** button on each face in the **Train** tab to evaluate how the changes affect recognition scores.

Avoid training on images that already score highly, as this can lead to over-fitting. Instead, focus on relatively clear images that score lower, ideally with different lighting, angles, and conditions, to help the model generalize more effectively.

### Frigate misidentified a face. Can I tell it that a face is "not" a specific person?

No, face recognition does not support negative training (i.e., explicitly telling it who someone is _not_). Instead, the best approach is to improve the training data by using a more diverse and representative set of images for each person.
For more guidance, refer to the section above on improving recognition accuracy.

### I see scores above the threshold in the Train tab, but a sub label wasn't assigned?
Frigate considers the recognition scores across all recognition attempts for each person object. The scores are continually weighted based on the area of the face, and a sub label will only be assigned to a person if that person is consistently recognized with high confidence. This avoids cases where a single high-confidence recognition would throw off the results.
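As a rough, hypothetical sketch of where that threshold is configured (the `recognition_threshold` option name and the value shown are assumptions here; check the face recognition reference config for your Frigate version):

```yaml
face_recognition:
  enabled: True
  # Assumed option name: minimum weighted score a face must reach before a sub label is assigned.
  recognition_threshold: 0.9
```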
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
-Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Settings page before it can be used. Semantic Search is a global configuration setting.
+Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Classification Settings page before it can be used. Semantic Search is a global configuration setting.
```yaml
semantic_search:
@@ -29,9 +29,9 @@ semantic_search:
:::tip
-The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration or by toggling the switch on the Search Settings page in the UI and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to turn the UI's switch off or set the config back to `False` before restarting Frigate again.
+The embeddings database can be re-indexed from the existing tracked objects in your database by pressing the "Reindex" button in the Classification Settings in the UI or by adding `reindex: True` to your `semantic_search` configuration and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing.
-If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
+If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to reindex as described above.
:::
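For reference, a minimal sketch of the config-based reindex might look like the following, assuming `enabled` is the feature's on/off switch as shown in the configuration section above:

```yaml
semantic_search:
  enabled: True
  # Re-index all existing tracked objects on the next restart.
  reindex: True
```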
@@ -72,7 +72,7 @@ For most users, especially native English speakers, the V1 model remains the rec
:::note
-Switching between V1 and V2 requires reindexing your embeddings. To do this, set `reindex: True` in your Semantic Search configuration and restart Frigate. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
+Switching between V1 and V2 requires reindexing your embeddings. The embeddings from V1 and V2 are incompatible, and failing to reindex will result in incorrect search results.
:::
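As a hedged sketch of what a version switch might look like in config (the `model` key and the `jinav2` value are assumptions here; confirm the exact option names in the semantic search reference config for your release):

```yaml
semantic_search:
  enabled: True
  # Assumed option for selecting the model generation.
  model: "jinav2"
  # V1 and V2 embeddings are incompatible, so reindex after switching.
  reindex: True
```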
@@ -5,7 +5,7 @@ title: Snapshots
Frigate can save a snapshot image to `/media/frigate/clips` for each detected object, named `<camera>-<id>.jpg`. Snapshots are also accessible [via the api](../integrations/api/event-snapshot-events-event-id-snapshot-jpg-get.api.mdx).
-For users with Frigate+ enabled, snapshots are accessible in the UI in the Frigate+ pane to allow for quick submission to the Frigate+ service.
+Snapshots are accessible in the UI in the Explore pane. This allows for quick submission to the Frigate+ service.
To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)
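Tying these pieces together, a minimal per-camera sketch might look like the following (the camera name `front_door` and zone name `front_yard` are placeholders; `required_zones` is the zone-restriction option covered in the linked zone docs):

```yaml
cameras:
  front_door:
    snapshots:
      enabled: True
      # Only save snapshots for objects that entered this zone.
      required_zones:
        - front_yard
```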
@@ -143,9 +143,10 @@ Inference speeds will vary greatly depending on the GPU and the model used.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
-| Name     | YOLOv9 Inference Time | YOLO-NAS Inference Time   |
-| -------- | --------------------- | ------------------------- |
-| AMD 780M | ~ 14 ms               | 320: ~ 30 ms 640: ~ 60 ms |
+| Name      | YOLOv9 Inference Time | YOLO-NAS Inference Time   |
+| --------- | --------------------- | ------------------------- |
+| AMD 780M  | ~ 14 ms               | 320: ~ 30 ms 640: ~ 60 ms |
+| AMD 8700G |                       | 320: ~ 20 ms 640: ~ 40 ms |
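For context, enabling the detector referenced above is a small config addition; the sketch below assumes the detector `type` is `rocm`, per the linked object detector docs, and omits model configuration:

```yaml
detectors:
  rocm:
    type: rocm
```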
## Community Supported Detectors