Compare commits: ab8a1c82c1 ... UI (25 commits)

Commit SHA1 list:
- 51b38fe253
- 97a76c5e5b
- df72a5431b
- 9a472e2435
- a581b81bd9
- 69b7970b87
- 7fce1fd8b4
- c2918a52df
- 64c0f085b1
- 3c30ab8a5d
- acc8e778b6
- 4b6f60f59b
- 1178232268
- ad918c5523
- 7b8d6171b7
- 6b86b0a72d
- 936e78f93b
- aca0bcb7ce
- 966cb5a7df
- 1ab4bf06b1
- 6649e2a5df
- 72d4c9e9c4
- 76d65247b7
- 37f224cb47
- b58ffffd37
README.md (50 changes)

@@ -1,12 +1,12 @@
-<h1 align="center">Deep Live Cam</h1>
+<h1 align="center">Deep-Live-Cam</h1>

 <p align="center">
 Real-time face swap and video deepfake with a single click and only a single image.
 </p>

 <p align="center">
-<img src="demo.gif" alt="Demo GIF">
-<img src="avgpcperformancedemo.gif" alt="Performance Demo GIF">
+<img src="media/demo.gif" alt="Demo GIF">
+<img src="media/avgpcperformancedemo.gif" alt="Performance Demo GIF">
 </p>

 ## Disclaimer
@@ -20,11 +20,7 @@ Users are expected to use this software responsibly and legally. If using a real

 ## Quick Start (Windows / Nvidia)

 [](https://hacksider.gumroad.com/l/vccdmm)

-
-
-
-
 [Download latest pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm) - No Manual Installation/Downloading required.

@@ -159,7 +155,7 @@ python run.py --execution-provider openvino
 - Use a screen capture tool like OBS to stream.
 - To change the face, select a new source image.

-![demo-gif](demo.gif)
+![demo-gif](media/demo.gif)

 ## Features

@@ -167,28 +163,35 @@ python run.py --execution-provider openvino

 Dynamically improve performance using the `--live-resizable` parameter.

-![resizable-gif](resizable.gif)
+![resizable-gif](media/resizable.gif)

 ### Face Mapping

 Track and change faces on the fly.

-![face_mapping_source](face_mapping_source.gif)
+![face_mapping_source](media/face_mapping_source.gif)

 **Source Video:**

-![face-mapping](face_mapping.png)
+![face-mapping](media/face_mapping.png)

 **Enable Face Mapping:**

-![face-mapping2](face_mapping2.png)
+![face-mapping2](media/face_mapping2.png)

 **Map the Faces:**

-![face_mapping_result](face_mapping_result.gif)
+![face_mapping_result](media/face_mapping_result.gif)

 **See the Magic!**

+![movie](media/movie.gif)
+
+**Watch movies in realtime:**
+
+It's as simple as opening a movie on the screen, and selecting OBS as your camera!
+
+![movie_img](media/movie_img.png)

 ## Command Line Arguments

@@ -379,6 +382,10 @@ For the latest experimental builds and features, see the [experimental branch](h

 This is an open-source project developed in our free time. Updates may be delayed.

+**Tips and Links:**
+- [How to make the most of Deep-Live-Cam](https://hacksider.gumroad.com/p/how-to-make-the-most-on-deep-live-cam)
+- Face enhancer is good, but still very slow for any live streaming purpose.
+
 ## Credits

@@ -392,9 +399,14 @@ This is an open-source project developed in our free time. Updates may be delaye
 - and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
 - Foot Note: [This is originally roop-cam, see the full history of the code here.](https://github.com/hacksider/roop-cam) Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)

-## Thanks to all the contributors
-<a href="https://github.com/hacksider/Deep-Live-Cam/graphs/contributors" target="_blank">
-<img src="https://contrib.rocks/image?repo=hacksider/Deep-Live-Cam" />
-</a>
+## Contributions

+## Star History
+
+<a href="https://star-history.com/#hacksider/deep-live-cam&Date">
+ <picture>
+  <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date&theme=dark" />
+  <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
+  <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
+ </picture>
+</a>
Binary files changed:
- image: 5.2 MiB before, 5.2 MiB after
- image: 11 MiB before, 11 MiB after
- media/download.png (new file): 9.0 KiB
- image: 76 KiB before, 76 KiB after
- image: 104 KiB before, 104 KiB after
- image: 4.0 MiB before, 4.0 MiB after
- image: 8.6 MiB before, 8.6 MiB after
- image: 73 KiB before, 73 KiB after
- media/movie.gif (new file): 1.6 MiB
- media/movie_img.png (new file): 794 KiB
- image: 4.3 MiB before, 4.3 MiB after
@@ -1 +1,4 @@
-just put the models in this folder
+just put the models in this folder -
+
+https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true
+https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
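The two URLs above are all that is needed. Below is a minimal sketch of fetching them into a local `models/` directory with Python's standard library; the target directory name and the download logic are assumptions for illustration, not part of this change:

```python
import os
import urllib.request

# Model URLs copied from the readme above; the models/ path is an assumption.
MODEL_URLS = [
    "https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true",
    "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth",
]

os.makedirs("models", exist_ok=True)
for url in MODEL_URLS:
    # Derive the local filename from the URL path, dropping any query string.
    filename = os.path.basename(url.split("?")[0])
    destination = os.path.join("models", filename)
    if not os.path.exists(destination):
        print(f"Downloading {filename}...")
        urllib.request.urlretrieve(url, destination)
```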
@@ -2,11 +2,11 @@ import os
 from typing import List, Dict, Any

 ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
-WORKFLOW_DIR = os.path.join(ROOT_DIR, 'workflow')
+WORKFLOW_DIR = os.path.join(ROOT_DIR, "workflow")

 file_types = [
-    ('Image', ('*.png','*.jpg','*.jpeg','*.gif','*.bmp')),
-    ('Video', ('*.mp4','*.mkv'))
+    ("Image", ("*.png", "*.jpg", "*.jpeg", "*.gif", "*.bmp")),
+    ("Video", ("*.mp4", "*.mkv")),
 ]

 souce_target_map = []
@@ -16,23 +16,31 @@ source_path = None
 target_path = None
 output_path = None
 frame_processors: List[str] = []
-keep_fps = None
-keep_audio = None
-keep_frames = None
-many_faces = None
-map_faces = None
-color_correction = None  # New global variable for color correction toggle
-nsfw_filter = None
+keep_fps = True  # Initialize with default value
+keep_audio = True  # Initialize with default value
+keep_frames = False  # Initialize with default value
+many_faces = False  # Initialize with default value
+map_faces = False  # Initialize with default value
+color_correction = False  # Initialize with default value
+nsfw_filter = False  # Initialize with default value
 video_encoder = None
 video_quality = None
-live_mirror = None
-live_resizable = None
+live_mirror = False  # Initialize with default value
+live_resizable = False  # Initialize with default value
 max_memory = None
 execution_providers: List[str] = []
 execution_threads = None
 headless = None
-log_level = 'error'
-fp_ui: Dict[str, bool] = {}
+log_level = "error"
+fp_ui: Dict[str, bool] = {"face_enhancer": False}  # Initialize with default value
 camera_input_combobox = None
 webcam_preview_running = False
-opacity = 100
+show_fps = False  # Initialize with default value
+mouth_mask = False
+show_mouth_mask_box = False
+mask_down_size = 0.5
+mask_size = 1.0
+mask_feather_ratio = 8
+opacity_switch = False
+face_opacity = 100
+selected_camera = None
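These module-level names act as the project's runtime configuration: other modules read and write `modules.globals.<name>` directly, and the defaults above are what you get before any UI interaction. A small illustrative sketch of toggling the new mask and opacity settings from code; the specific values are examples, not taken from the diff:

```python
import modules.globals

# Enable the new mouth-mask blending and soften its edge.
modules.globals.mouth_mask = True
modules.globals.mask_feather_ratio = 8

# Blend the swapped face at 80% opacity over the original frame.
modules.globals.opacity_switch = True
modules.globals.face_opacity = 80
```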
@@ -1,3 +1,3 @@
 name = 'Deep Live Cam'
-version = '1.5.0'
+version = '1.6.0'
 edition = 'Portable'
@@ -2,35 +2,49 @@ from typing import Any, List
 import cv2
 import insightface
 import threading
+import numpy as np
 import modules.globals
 import modules.processors.frame.core
 from modules.core import update_status
 from modules.face_analyser import get_one_face, get_many_faces, default_source_face
 from modules.typing import Face, Frame
-from modules.utilities import conditional_download, resolve_relative_path, is_image, is_video
+from modules.utilities import (
+    conditional_download,
+    resolve_relative_path,
+    is_image,
+    is_video,
+)
 from modules.cluster_analysis import find_closest_centroid

 FACE_SWAPPER = None
 THREAD_LOCK = threading.Lock()
-NAME = 'DLC.FACE-SWAPPER'
+NAME = "DLC.FACE-SWAPPER"


 def pre_check() -> bool:
-    download_directory_path = resolve_relative_path('../models')
-    conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128.onnx'])
+    download_directory_path = resolve_relative_path("../models")
+    conditional_download(
+        download_directory_path,
+        [
+            "https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx"
+        ],
+    )
     return True


 def pre_start() -> bool:
     if not modules.globals.map_faces and not is_image(modules.globals.source_path):
-        update_status('Select an image for source path.', NAME)
+        update_status("Select an image for source path.", NAME)
         return False
-    elif not modules.globals.map_faces and not get_one_face(cv2.imread(modules.globals.source_path)):
-        update_status('No face in source path detected.', NAME)
+    elif not modules.globals.map_faces and not get_one_face(
+        cv2.imread(modules.globals.source_path)
+    ):
+        update_status("No face in source path detected.", NAME)
         return False
-    if not is_image(modules.globals.target_path) and not is_video(modules.globals.target_path):
-        update_status('Select an image or video for target path.', NAME)
+    if not is_image(modules.globals.target_path) and not is_video(
+        modules.globals.target_path
+    ):
+        update_status("Select an image or video for target path.", NAME)
         return False
     return True

@@ -40,20 +54,49 @@ def get_face_swapper() -> Any:

     with THREAD_LOCK:
         if FACE_SWAPPER is None:
-            model_path = resolve_relative_path('../models/inswapper_128.onnx')
-            FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
+            model_path = resolve_relative_path("../models/inswapper_128_fp16.onnx")
+            FACE_SWAPPER = insightface.model_zoo.get_model(
+                model_path, providers=modules.globals.execution_providers
+            )
     return FACE_SWAPPER


 def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
-    return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
+    swapped_frame = get_face_swapper().get(
+        temp_frame, target_face, source_face, paste_back=True
+    )
+
+    # Apply opacity if enabled
+    if modules.globals.opacity_switch:
+        opacity = modules.globals.face_opacity / 100
+        swapped_frame = cv2.addWeighted(
+            swapped_frame, opacity, temp_frame, 1 - opacity, 0
+        )
+
+    # Apply mouth mask if enabled
+    if modules.globals.mouth_mask:
+        face_mask = create_face_mask(target_face, temp_frame)
+        mouth_mask_data = create_lower_mouth_mask(target_face, temp_frame)
+        mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon = mouth_mask_data
+
+        if mouth_box is not None:
+            swapped_frame = apply_mouth_area(
+                swapped_frame, mouth_cutout, mouth_box, face_mask, lower_lip_polygon
+            )
+
+        if modules.globals.show_mouth_mask_box:
+            swapped_frame = draw_mouth_mask_visualization(
+                swapped_frame, target_face, mouth_mask_data
+            )
+
+    return swapped_frame


 def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
     # Ensure the frame is in RGB format if color correction is enabled
     if modules.globals.color_correction:
         temp_frame = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)

     if modules.globals.many_faces:
         many_faces = get_many_faces(temp_frame)
         if many_faces:
@@ -71,34 +114,42 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
         if modules.globals.many_faces:
             source_face = default_source_face()
             for map in modules.globals.souce_target_map:
-                target_face = map['target']['face']
+                target_face = map["target"]["face"]
                 temp_frame = swap_face(source_face, target_face, temp_frame)

         elif not modules.globals.many_faces:
             for map in modules.globals.souce_target_map:
                 if "source" in map:
-                    source_face = map['source']['face']
-                    target_face = map['target']['face']
+                    source_face = map["source"]["face"]
+                    target_face = map["target"]["face"]
                     temp_frame = swap_face(source_face, target_face, temp_frame)

     elif is_video(modules.globals.target_path):
         if modules.globals.many_faces:
             source_face = default_source_face()
             for map in modules.globals.souce_target_map:
-                target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
+                target_frame = [
+                    f
+                    for f in map["target_faces_in_frame"]
+                    if f["location"] == temp_frame_path
+                ]

                 for frame in target_frame:
-                    for target_face in frame['faces']:
+                    for target_face in frame["faces"]:
                         temp_frame = swap_face(source_face, target_face, temp_frame)

         elif not modules.globals.many_faces:
             for map in modules.globals.souce_target_map:
                 if "source" in map:
-                    target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
-                    source_face = map['source']['face']
+                    target_frame = [
+                        f
+                        for f in map["target_faces_in_frame"]
+                        if f["location"] == temp_frame_path
+                    ]
+                    source_face = map["source"]["face"]

                     for frame in target_frame:
-                        for target_face in frame['faces']:
+                        for target_face in frame["faces"]:
                             temp_frame = swap_face(source_face, target_face, temp_frame)
     else:
         detected_faces = get_many_faces(temp_frame)
@@ -110,25 +161,46 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:

         elif not modules.globals.many_faces:
             if detected_faces:
-                if len(detected_faces) <= len(modules.globals.simple_map['target_embeddings']):
+                if len(detected_faces) <= len(
+                    modules.globals.simple_map["target_embeddings"]
+                ):
                     for detected_face in detected_faces:
-                        closest_centroid_index, _ = find_closest_centroid(modules.globals.simple_map['target_embeddings'], detected_face.normed_embedding)
+                        closest_centroid_index, _ = find_closest_centroid(
+                            modules.globals.simple_map["target_embeddings"],
+                            detected_face.normed_embedding,
+                        )

-                        temp_frame = swap_face(modules.globals.simple_map['source_faces'][closest_centroid_index], detected_face, temp_frame)
+                        temp_frame = swap_face(
+                            modules.globals.simple_map["source_faces"][
+                                closest_centroid_index
+                            ],
+                            detected_face,
+                            temp_frame,
+                        )
                 else:
                     detected_faces_centroids = []
                     for face in detected_faces:
                         detected_faces_centroids.append(face.normed_embedding)
                     i = 0
-                    for target_embedding in modules.globals.simple_map['target_embeddings']:
-                        closest_centroid_index, _ = find_closest_centroid(detected_faces_centroids, target_embedding)
+                    for target_embedding in modules.globals.simple_map[
+                        "target_embeddings"
+                    ]:
+                        closest_centroid_index, _ = find_closest_centroid(
+                            detected_faces_centroids, target_embedding
+                        )

-                        temp_frame = swap_face(modules.globals.simple_map['source_faces'][i], detected_faces[closest_centroid_index], temp_frame)
+                        temp_frame = swap_face(
+                            modules.globals.simple_map["source_faces"][i],
+                            detected_faces[closest_centroid_index],
+                            temp_frame,
+                        )
                         i += 1
     return temp_frame


-def process_frames(source_path: str, temp_frame_paths: List[str], progress: Any = None) -> None:
+def process_frames(
+    source_path: str, temp_frame_paths: List[str], progress: Any = None
+) -> None:
     if not modules.globals.map_faces:
         source_face = get_one_face(cv2.imread(source_path))
         for temp_frame_path in temp_frame_paths:
@@ -162,7 +234,9 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:
         cv2.imwrite(output_path, result)
     else:
         if modules.globals.many_faces:
-            update_status('Many faces enabled. Using first source image. Progressing...', NAME)
+            update_status(
+                "Many faces enabled. Using first source image. Progressing...", NAME
+            )
         target_frame = cv2.imread(output_path)
         result = process_frame_v2(target_frame)
         cv2.imwrite(output_path, result)
@@ -170,5 +244,250 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:

 def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
     if modules.globals.map_faces and modules.globals.many_faces:
-        update_status('Many faces enabled. Using first source image. Progressing...', NAME)
-    modules.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
+        update_status(
+            "Many faces enabled. Using first source image. Progressing...", NAME
+        )
+    modules.processors.frame.core.process_video(
+        source_path, temp_frame_paths, process_frames
+    )
+
+
+def create_face_mask(face: Face, frame: Frame) -> np.ndarray:
+    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
+    landmarks = face.landmark_2d_106
+    if landmarks is not None:
+        landmarks = landmarks.astype(np.int32)
+
+        right_side_face = landmarks[0:16]
+        left_side_face = landmarks[17:32]
+        right_eye = landmarks[33:42]
+        right_eye_brow = landmarks[43:51]
+        left_eye = landmarks[87:96]
+        left_eye_brow = landmarks[97:105]
+
+        right_eyebrow_top = np.min(right_eye_brow[:, 1])
+        left_eyebrow_top = np.min(left_eye_brow[:, 1])
+        eyebrow_top = min(right_eyebrow_top, left_eyebrow_top)
+
+        face_top = np.min([right_side_face[0, 1], left_side_face[-1, 1]])
+        forehead_height = face_top - eyebrow_top
+        extended_forehead_height = int(forehead_height * 5.0)
+
+        forehead_left = right_side_face[0].copy()
+        forehead_right = left_side_face[-1].copy()
+        forehead_left[1] -= extended_forehead_height
+        forehead_right[1] -= extended_forehead_height
+
+        face_outline = np.vstack(
+            [
+                [forehead_left],
+                right_side_face,
+                left_side_face[::-1],
+                [forehead_right],
+            ]
+        )
+
+        padding = int(np.linalg.norm(right_side_face[0] - left_side_face[-1]) * 0.05)
+
+        hull = cv2.convexHull(face_outline)
+        hull_padded = []
+        for point in hull:
+            x, y = point[0]
+            center = np.mean(face_outline, axis=0)
+            direction = np.array([x, y]) - center
+            direction = direction / np.linalg.norm(direction)
+            padded_point = np.array([x, y]) + direction * padding
+            hull_padded.append(padded_point)
+
+        hull_padded = np.array(hull_padded, dtype=np.int32)
+        cv2.fillConvexPoly(mask, hull_padded, 255)
+        mask = cv2.GaussianBlur(mask, (5, 5), 3)
+
+    return mask
+
+
+def create_lower_mouth_mask(face: Face, frame: Frame) -> tuple:
+    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
+    mouth_cutout = None
+    landmarks = face.landmark_2d_106
+    if landmarks is not None:
+        lower_lip_order = [
+            65, 66, 62, 70, 69, 18, 19, 20, 21, 22,
+            23, 24, 0, 8, 7, 6, 5, 4, 3, 2, 65,
+        ]
+        lower_lip_landmarks = landmarks[lower_lip_order].astype(np.float32)
+
+        center = np.mean(lower_lip_landmarks, axis=0)
+        expansion_factor = 1 + modules.globals.mask_down_size
+        expanded_landmarks = (lower_lip_landmarks - center) * expansion_factor + center
+
+        toplip_indices = [20, 0, 1, 2, 3, 4, 5]
+        toplip_extension = modules.globals.mask_size * 0.5
+        for idx in toplip_indices:
+            direction = expanded_landmarks[idx] - center
+            direction = direction / np.linalg.norm(direction)
+            expanded_landmarks[idx] += direction * toplip_extension
+
+        chin_indices = [11, 12, 13, 14, 15, 16]
+        chin_extension = 2 * 0.2
+        for idx in chin_indices:
+            expanded_landmarks[idx][1] += (
+                expanded_landmarks[idx][1] - center[1]
+            ) * chin_extension
+
+        expanded_landmarks = expanded_landmarks.astype(np.int32)
+
+        min_x, min_y = np.min(expanded_landmarks, axis=0)
+        max_x, max_y = np.max(expanded_landmarks, axis=0)
+
+        padding = int((max_x - min_x) * 0.1)
+        min_x = max(0, min_x - padding)
+        min_y = max(0, min_y - padding)
+        max_x = min(frame.shape[1], max_x + padding)
+        max_y = min(frame.shape[0], max_y + padding)
+
+        if max_x <= min_x or max_y <= min_y:
+            if (max_x - min_x) <= 1:
+                max_x = min_x + 1
+            if (max_y - min_y) <= 1:
+                max_y = min_y + 1
+
+        mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
+        cv2.fillPoly(mask_roi, [expanded_landmarks - [min_x, min_y]], 255)
+        mask_roi = cv2.GaussianBlur(mask_roi, (15, 15), 5)
+        mask[min_y:max_y, min_x:max_x] = mask_roi
+        mouth_cutout = frame[min_y:max_y, min_x:max_x].copy()
+
+        return mask, mouth_cutout, (min_x, min_y, max_x, max_y), expanded_landmarks
+
+    return mask, mouth_cutout, None, None
+
+
+def apply_mouth_area(
+    frame: Frame,
+    mouth_cutout: np.ndarray,
+    mouth_box: tuple,
+    face_mask: np.ndarray,
+    mouth_polygon: np.ndarray,
+) -> Frame:
+    min_x, min_y, max_x, max_y = mouth_box
+    box_width = max_x - min_x
+    box_height = max_y - min_y
+
+    if (
+        mouth_cutout is None
+        or box_width is None
+        or box_height is None
+        or face_mask is None
+        or mouth_polygon is None
+    ):
+        return frame
+
+    try:
+        resized_mouth_cutout = cv2.resize(mouth_cutout, (box_width, box_height))
+        roi = frame[min_y:max_y, min_x:max_x]
+
+        if roi.shape != resized_mouth_cutout.shape:
+            resized_mouth_cutout = cv2.resize(
+                resized_mouth_cutout, (roi.shape[1], roi.shape[0])
+            )
+
+        color_corrected_mouth = apply_color_transfer(resized_mouth_cutout, roi)
+
+        polygon_mask = np.zeros(roi.shape[:2], dtype=np.uint8)
+        adjusted_polygon = mouth_polygon - [min_x, min_y]
+        cv2.fillPoly(polygon_mask, [adjusted_polygon], 255)
+
+        feather_amount = min(
+            30,
+            box_width // modules.globals.mask_feather_ratio,
+            box_height // modules.globals.mask_feather_ratio,
+        )
+        feathered_mask = cv2.GaussianBlur(
+            polygon_mask.astype(float), (0, 0), feather_amount
+        )
+        feathered_mask = feathered_mask / feathered_mask.max()
+
+        face_mask_roi = face_mask[min_y:max_y, min_x:max_x]
+        combined_mask = feathered_mask * (face_mask_roi / 255.0)
+
+        combined_mask = combined_mask[:, :, np.newaxis]
+        blended = (
+            color_corrected_mouth * combined_mask + roi * (1 - combined_mask)
+        ).astype(np.uint8)
+
+        face_mask_3channel = (
+            np.repeat(face_mask_roi[:, :, np.newaxis], 3, axis=2) / 255.0
+        )
+        final_blend = blended * face_mask_3channel + roi * (1 - face_mask_3channel)
+
+        frame[min_y:max_y, min_x:max_x] = final_blend.astype(np.uint8)
+    except Exception as e:
+        pass
+
+    return frame
+
+
+def apply_color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
+    source = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype("float32")
+    target = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype("float32")
+
+    source_mean, source_std = cv2.meanStdDev(source)
+    target_mean, target_std = cv2.meanStdDev(target)
+
+    source_mean = source_mean.reshape(1, 1, 3)
+    source_std = source_std.reshape(1, 1, 3)
+    target_mean = target_mean.reshape(1, 1, 3)
+    target_std = target_std.reshape(1, 1, 3)
+
+    source = (source - source_mean) * (target_std / source_std) + target_mean
+
+    return cv2.cvtColor(np.clip(source, 0, 255).astype("uint8"), cv2.COLOR_LAB2BGR)
+
+
+def draw_mouth_mask_visualization(
+    frame: Frame, face: Face, mouth_mask_data: tuple
+) -> Frame:
+    landmarks = face.landmark_2d_106
+    if landmarks is not None and mouth_mask_data is not None:
+        mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon = (
+            mouth_mask_data
+        )
+
+        vis_frame = frame.copy()
+
+        # Draw the lower lip polygon
+        cv2.polylines(vis_frame, [lower_lip_polygon], True, (0, 255, 0), 2)
+
+        # Add labels
+        cv2.putText(
+            vis_frame,
+            "Mouth Mask",
+            (min_x, min_y - 10),
+            cv2.FONT_HERSHEY_SIMPLEX,
+            0.5,
+            (255, 255, 255),
+            1,
+        )
+
+        return vis_frame
+    return frame
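Two of the helpers added above are small, self-contained image operations: the opacity path in `swap_face` is a plain alpha blend of the swapped and original frames via `cv2.addWeighted`, and `apply_color_transfer` matches LAB-space mean and standard deviation between two patches. Below is a standalone sketch of the same alpha-blend step outside the project's types; the file names and the 80% value are placeholders, not taken from the diff:

```python
import cv2

# Hypothetical input images of identical size; paths are placeholders.
swapped = cv2.imread("swapped_frame.png")
original = cv2.imread("original_frame.png")

opacity = 80 / 100  # mirrors modules.globals.face_opacity / 100 in swap_face
# Weighted sum: opacity * swapped + (1 - opacity) * original.
blended = cv2.addWeighted(swapped, opacity, original, 1 - opacity, 0)
cv2.imwrite("blended_frame.png", blended)
```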
modules/ui.py (692 changes)

@@ -7,6 +7,8 @@ from cv2_enumerate_cameras import enumerate_cameras
 from PIL import Image, ImageOps
 import tkinterdnd2 as tkdnd
 import time
+import json
+
 import modules.globals
 import modules.metadata
 from modules.face_analyser import (
@@ -26,7 +28,6 @@ from modules.utilities import (
     has_image_extension,
 )

-modules.globals.face_opacity = 100
 os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1"
 os.environ["QT_SCREEN_SCALE_FACTORS"] = "1"
 os.environ["QT_SCALE_FACTOR"] = "1"
@@ -201,8 +202,193 @@ class TargetLabel(DragDropLabel):
         target_label.configure(text="")


+class ModernOptionMenu(ctk.CTkFrame):
+    def __init__(self, master, values, command=None, **kwargs):
+        super().__init__(master, fg_color="transparent")
+
+        self.values = values
+        self.command = command
+
+        # Set initial value based on saved camera or first available
+        self.current_value = (
+            modules.globals.selected_camera
+            if modules.globals.selected_camera in values
+            else (values[0] if values else "No cameras found")
+        )
+
+        # Main button
+        self.main_button = ctk.CTkButton(
+            self,
+            text=self.current_value,
+            command=self.show_dropdown,
+            width=300,
+            height=40,
+            corner_radius=8,
+            fg_color="#1f538d",
+            hover_color="#1a4572",
+            text_color="white",
+            font=("Roboto", 13, "bold"),
+            border_width=2,
+            border_color="#3d7ab8",
+        )
+        self.main_button.pack(expand=True, fill="both")
+
+        # Dropdown frame (initially hidden)
+        self.dropdown_frame = None
+        self.is_dropdown_visible = False
+        self.click_binding = None
+
+    def show_dropdown(self):
+        if self.is_dropdown_visible:
+            self.hide_dropdown()
+            return
+
+        # Calculate position and size
+        button_width = self.main_button.winfo_width()
+        dropdown_height = min(len(self.values) * 35, 200)  # Limit max height
+
+        # Create and show dropdown with fixed size
+        self.dropdown_frame = ctk.CTkFrame(
+            self.winfo_toplevel(),
+            width=button_width,
+            height=dropdown_height,
+            fg_color="#1f538d",
+            corner_radius=8,
+            border_width=2,
+            border_color="#3d7ab8",
+        )
+
+        # Get the absolute coordinates of the button relative to the screen
+        button_x = self.winfo_rootx()
+        button_y = self.winfo_rooty()
+
+        # Position the dropdown above the button, using relative coordinates
+        relative_x = button_x - self.winfo_toplevel().winfo_rootx()
+        relative_y = button_y - self.winfo_toplevel().winfo_rooty() - dropdown_height
+
+        self.dropdown_frame.place(in_=self.winfo_toplevel(), x=relative_x, y=relative_y)
+
+        # Prevent frame from resizing
+        self.dropdown_frame.pack_propagate(False)
+
+        # Create scrollable frame if needed
+        if len(self.values) * 35 > 200:
+            scrollable_frame = ctk.CTkScrollableFrame(
+                self.dropdown_frame,
+                width=button_width - 20,
+                height=dropdown_height - 10,
+                fg_color="#1f538d",
+                scrollbar_button_color="#3d7ab8",
+                scrollbar_button_hover_color="#2b5d8b",
+            )
+            scrollable_frame.pack(expand=True, fill="both", padx=5, pady=5)
+
+            container = scrollable_frame
+        else:
+            container = self.dropdown_frame
+
+        # Add options
+        for value in self.values:
+            option_button = ctk.CTkButton(
+                container,
+                text=value,
+                fg_color="transparent",
+                hover_color="#233d54",
+                text_color="white",
+                height=35,
+                corner_radius=4,
+                font=("Roboto", 13),
+                command=lambda v=value: self.select_value(v),
+            )
+            option_button.pack(padx=2, pady=1, fill="x")
+
+        self.is_dropdown_visible = True
+        self.click_binding = self.winfo_toplevel().bind(
+            "<Button-1>", self.on_click_outside, add="+"
+        )
+
+    def on_click_outside(self, event):
+        if self.is_dropdown_visible:
+            widget_under_cursor = event.widget.winfo_containing(
+                event.x_root, event.y_root
+            )
+            if widget_under_cursor not in [self.main_button] + (
+                self.dropdown_frame.winfo_children() if self.dropdown_frame else []
+            ):
+                self.hide_dropdown()
+
+    def hide_dropdown(self):
+        if self.dropdown_frame:
+            if self.click_binding:
+                self.winfo_toplevel().unbind("<Button-1>", self.click_binding)
+                self.click_binding = None
+            self.dropdown_frame.destroy()
+            self.dropdown_frame = None
+        self.is_dropdown_visible = False
+
+    def select_value(self, value):
+        self.current_value = value
+        self.main_button.configure(text=value)
+        self.hide_dropdown()
+        if self.command:
+            self.command(value)
+
+    def get(self):
+        return self.current_value
+
+
+def save_switch_states():
+    switch_states = {
+        "keep_fps": modules.globals.keep_fps,
+        "keep_audio": modules.globals.keep_audio,
+        "keep_frames": modules.globals.keep_frames,
+        "many_faces": modules.globals.many_faces,
+        "map_faces": modules.globals.map_faces,
+        "color_correction": modules.globals.color_correction,
+        "nsfw_filter": modules.globals.nsfw_filter,
+        "live_mirror": modules.globals.live_mirror,
+        "live_resizable": modules.globals.live_resizable,
+        "fp_ui": modules.globals.fp_ui,
+        "show_fps": modules.globals.show_fps,
+        "mouth_mask": modules.globals.mouth_mask,
+        "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
+        "mask_down_size": modules.globals.mask_down_size,
+        "mask_feather_ratio": modules.globals.mask_feather_ratio,
+        "selected_camera": modules.globals.selected_camera,
+    }
+    with open("switch_states.json", "w") as f:
+        json.dump(switch_states, f)
+
+
+def load_switch_states():
+    try:
+        with open("switch_states.json", "r") as f:
+            switch_states = json.load(f)
+        modules.globals.keep_fps = switch_states.get("keep_fps", True)
+        modules.globals.keep_audio = switch_states.get("keep_audio", True)
+        modules.globals.keep_frames = switch_states.get("keep_frames", False)
+        modules.globals.many_faces = switch_states.get("many_faces", False)
+        modules.globals.map_faces = switch_states.get("map_faces", False)
+        modules.globals.color_correction = switch_states.get("color_correction", False)
+        modules.globals.nsfw_filter = switch_states.get("nsfw_filter", False)
+        modules.globals.live_mirror = switch_states.get("live_mirror", False)
+        modules.globals.live_resizable = switch_states.get("live_resizable", False)
+        modules.globals.fp_ui = switch_states.get("fp_ui", {"face_enhancer": False})
+        modules.globals.show_fps = switch_states.get("show_fps", False)
+        modules.globals.mouth_mask = switch_states.get("mouth_mask", False)
+        modules.globals.show_mouth_mask_box = switch_states.get(
+            "show_mouth_mask_box", False
+        )
+        modules.globals.mask_down_size = switch_states.get("mask_down_size", 0.5)
+        modules.globals.mask_feather_ratio = switch_states.get("mask_feather_ratio", 8)
+        modules.globals.selected_camera = switch_states.get("selected_camera", None)
+    except FileNotFoundError:
+        # If the file doesn't exist, use default values
+        pass
+
+
 def init(start: Callable[[], None], destroy: Callable[[], None]) -> tkdnd.TkinterDnD.Tk:
-    global ROOT, PREVIEW
+    global ROOT, PREVIEW, donate_frame
+
     ROOT = create_root(start, destroy)
     PREVIEW = create_preview(ROOT)
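The `save_switch_states()` / `load_switch_states()` pair added above persists the UI toggles as a flat JSON object in `switch_states.json`. A sketch of what that file holds, written with the same keys the code uses; the values shown are illustrative defaults, not from a real session:

```python
import json

# Illustrative defaults only; keys mirror the switch_states dict in save_switch_states().
example_states = {
    "keep_fps": True,
    "keep_audio": True,
    "keep_frames": False,
    "many_faces": False,
    "map_faces": False,
    "color_correction": False,
    "nsfw_filter": False,
    "live_mirror": False,
    "live_resizable": False,
    "fp_ui": {"face_enhancer": False},
    "show_fps": False,
    "mouth_mask": False,
    "show_mouth_mask_box": False,
    "mask_down_size": 0.5,
    "mask_feather_ratio": 8,
    "selected_camera": None,
}

with open("switch_states.json", "w") as f:
    json.dump(example_states, f, indent=2)
```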
@@ -213,7 +399,9 @@ def init(start: Callable[[], None], destroy: Callable[[], None]) -> tkdnd.Tkinte
 def create_root(
     start: Callable[[], None], destroy: Callable[[], None]
 ) -> tkdnd.TkinterDnD.Tk:
-    global source_label, target_label, status_label
+    global source_label, target_label, status_label, donate_frame, show_fps_switch
+
+    load_switch_states()

     ctk.set_appearance_mode("dark")
     ctk.set_default_color_theme("blue")
@@ -225,7 +413,8 @@ def create_root(
     root.configure(bg="#1a1a1a")
     root.protocol("WM_DELETE_WINDOW", lambda: destroy())
     root.resizable(True, True)
-    root.attributes("-alpha", 1.0)  # Set window opacity to fully opaque
+    root.attributes("-alpha", 1.0)
+    root.minsize(650, 870)

     main_frame = ctk.CTkFrame(root, fg_color="#1a1a1a")
     main_frame.pack(fill="both", expand=True, padx=20, pady=20)
@@ -288,83 +477,100 @@ def create_root(
     options_frame = ctk.CTkFrame(main_frame, fg_color="#2a2d2e", corner_radius=15)
     options_frame.grid(row=1, column=0, columnspan=3, padx=10, pady=10, sticky="nsew")

-    # Create a single column for options, centered
-    options_column = ctk.CTkFrame(options_frame, fg_color="#2a2d2e")
-    options_column.pack(expand=True)
+    # Create two columns for options
+    left_column = ctk.CTkFrame(options_frame, fg_color="#2a2d2e")
+    left_column.pack(side="left", padx=20, expand=True)

-    # Switches
+    right_column = ctk.CTkFrame(options_frame, fg_color="#2a2d2e")
+    right_column.pack(side="right", padx=20, expand=True)
+
+    # Left column - Video/Processing Options
     keep_fps_value = ctk.BooleanVar(value=modules.globals.keep_fps)
     keep_fps_checkbox = ctk.CTkSwitch(
-        options_column,
+        left_column,
         text="Keep fps",
         variable=keep_fps_value,
         cursor="hand2",
-        command=lambda: setattr(
-            modules.globals, "keep_fps", not modules.globals.keep_fps
+        command=lambda: (
+            setattr(modules.globals, "keep_fps", keep_fps_value.get()),
+            save_switch_states(),
         ),
         progress_color="#3a7ebf",
         font=("Roboto", 14, "bold"),
     )
     keep_fps_checkbox.pack(pady=5, anchor="w")

+    # Move many faces switch to left column
+    many_faces_value = ctk.BooleanVar(value=modules.globals.many_faces)
+    many_faces_switch = ctk.CTkSwitch(
+        left_column,
+        text="Many faces",
+        variable=many_faces_value,
+        cursor="hand2",
+        command=lambda: (
+            setattr(modules.globals, "many_faces", many_faces_value.get()),
+            save_switch_states(),
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    many_faces_switch.pack(pady=5, anchor="w")
+
+    keep_audio_value = ctk.BooleanVar(value=modules.globals.keep_audio)
+    keep_audio_switch = ctk.CTkSwitch(
+        left_column,
+        text="Keep audio",
+        variable=keep_audio_value,
+        cursor="hand2",
+        command=lambda: (
+            setattr(modules.globals, "keep_audio", keep_audio_value.get()),
+            save_switch_states(),
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    keep_audio_switch.pack(pady=5, anchor="w")
+
     keep_frames_value = ctk.BooleanVar(value=modules.globals.keep_frames)
     keep_frames_switch = ctk.CTkSwitch(
-        options_column,
+        left_column,
         text="Keep frames",
         variable=keep_frames_value,
         cursor="hand2",
-        command=lambda: setattr(
-            modules.globals, "keep_frames", keep_frames_value.get()
+        command=lambda: (
+            setattr(modules.globals, "keep_frames", keep_frames_value.get()),
+            save_switch_states(),
         ),
         progress_color="#3a7ebf",
         font=("Roboto", 14, "bold"),
     )
     keep_frames_switch.pack(pady=5, anchor="w")

+    # Face Processing Options
     enhancer_value = ctk.BooleanVar(value=modules.globals.fp_ui["face_enhancer"])
     enhancer_switch = ctk.CTkSwitch(
-        options_column,
+        left_column,
         text="Face Enhancer",
         variable=enhancer_value,
         cursor="hand2",
-        command=lambda: update_tumbler("face_enhancer", enhancer_value.get()),
+        command=lambda: (
+            update_tumbler("face_enhancer", enhancer_value.get()),
+            save_switch_states(),
+        ),
         progress_color="#3a7ebf",
         font=("Roboto", 14, "bold"),
     )
     enhancer_switch.pack(pady=5, anchor="w")

-    keep_audio_value = ctk.BooleanVar(value=modules.globals.keep_audio)
-    keep_audio_switch = ctk.CTkSwitch(
-        options_column,
-        text="Keep audio",
-        variable=keep_audio_value,
-        cursor="hand2",
-        command=lambda: setattr(modules.globals, "keep_audio", keep_audio_value.get()),
-        progress_color="#3a7ebf",
-        font=("Roboto", 14, "bold"),
-    )
-    keep_audio_switch.pack(pady=5, anchor="w")
-
-    many_faces_value = ctk.BooleanVar(value=modules.globals.many_faces)
-    many_faces_switch = ctk.CTkSwitch(
-        options_column,
-        text="Many faces",
-        variable=many_faces_value,
-        cursor="hand2",
-        command=lambda: setattr(modules.globals, "many_faces", many_faces_value.get()),
-        progress_color="#3a7ebf",
-        font=("Roboto", 14, "bold"),
-    )
-    many_faces_switch.pack(pady=5, anchor="w")
-
     color_correction_value = ctk.BooleanVar(value=modules.globals.color_correction)
     color_correction_switch = ctk.CTkSwitch(
-        options_column,
+        left_column,
         text="Fix Blueish Cam",
         variable=color_correction_value,
         cursor="hand2",
-        command=lambda: setattr(
-            modules.globals, "color_correction", color_correction_value.get()
+        command=lambda: (
+            setattr(modules.globals, "color_correction", color_correction_value.get()),
+            save_switch_states(),
         ),
         progress_color="#3a7ebf",
         font=("Roboto", 14, "bold"),
@@ -373,16 +579,137 @@ def create_root(

     map_faces = ctk.BooleanVar(value=modules.globals.map_faces)
     map_faces_switch = ctk.CTkSwitch(
-        options_column,
+        left_column,
         text="Map faces",
         variable=map_faces,
         cursor="hand2",
-        command=lambda: setattr(modules.globals, "map_faces", map_faces.get()),
+        command=lambda: (
+            setattr(modules.globals, "map_faces", map_faces.get()),
+            save_switch_states(),
+        ),
         progress_color="#3a7ebf",
         font=("Roboto", 14, "bold"),
     )
     map_faces_switch.pack(pady=5, anchor="w")

+    # Right column - Face Detection & Masking Options
+    show_fps_value = ctk.BooleanVar(value=modules.globals.show_fps)
+    show_fps_switch = ctk.CTkSwitch(
+        right_column,
+        text="Show FPS",
+        variable=show_fps_value,
+        cursor="hand2",
+        command=lambda: (
+            setattr(modules.globals, "show_fps", show_fps_value.get()),
+            save_switch_states(),
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    show_fps_switch.pack(pady=5, anchor="w")
+
+    # Mouth Mask Controls
+    mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
+    mouth_mask_switch = ctk.CTkSwitch(
+        right_column,
+        text="Mouth Mask",
+        variable=mouth_mask_var,
+        cursor="hand2",
+        command=lambda: (
+            setattr(modules.globals, "mouth_mask", mouth_mask_var.get()),
+            save_switch_states(),
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    mouth_mask_switch.pack(pady=5, anchor="w")
+
+    show_mouth_mask_box_var = ctk.BooleanVar(value=modules.globals.show_mouth_mask_box)
+    show_mouth_mask_box_switch = ctk.CTkSwitch(
+        right_column,
+        text="Show Mouth Box",
+        variable=show_mouth_mask_box_var,
+        cursor="hand2",
+        command=lambda: (
+            setattr(
+                modules.globals, "show_mouth_mask_box", show_mouth_mask_box_var.get()
+            ),
+            save_switch_states(),
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    show_mouth_mask_box_switch.pack(pady=5, anchor="w")
+
+    # Face Opacity Controls - Moved under Show Mouth Box
+    opacity_frame = ctk.CTkFrame(right_column, fg_color="#2a2d2e")
+    opacity_frame.pack(pady=5, anchor="w", fill="x")
+
+    opacity_switch = ctk.CTkSwitch(
+        opacity_frame,
+        text="Face Opacity",
+        variable=ctk.BooleanVar(value=modules.globals.opacity_switch),
+        cursor="hand2",
+        command=lambda: setattr(
+            modules.globals, "opacity_switch", not modules.globals.opacity_switch
+        ),
+        progress_color="#3a7ebf",
+        font=("Roboto", 14, "bold"),
+    )
+    opacity_switch.pack(side="left", padx=(0, 10))
+
+    opacity_slider = ctk.CTkSlider(
+        opacity_frame,
+        from_=0,
+        to=100,
+        number_of_steps=100,
+        command=update_opacity,
+        fg_color=("gray75", "gray25"),
+        progress_color="#3a7ebf",
+        button_color="#3a7ebf",
+        button_hover_color="#2b5d8b",
+    )
+    opacity_slider.pack(side="left", fill="x", expand=True)
+    opacity_slider.set(modules.globals.face_opacity)
+
+    # Mask Size Controls
+    mask_down_size_label = ctk.CTkLabel(
+        right_column, text="Mask Size:", font=("Roboto", 12)
+    )
+    mask_down_size_label.pack(pady=(5, 0), anchor="w")
+
+    mask_down_size_slider = ctk.CTkSlider(
+        right_column,
+        from_=0.1,
+        to=1.0,
+        number_of_steps=9,
+        command=lambda value: [
+            setattr(modules.globals, "mask_down_size", value),
+            save_switch_states(),
+        ],
+    )
+    mask_down_size_slider.set(modules.globals.mask_down_size)
+    mask_down_size_slider.pack(pady=(0, 5), fill="x")
+
+    # Mask Feather Controls
+    mask_feather_label = ctk.CTkLabel(
+        right_column, text="Mask Feather:", font=("Roboto", 12)
+    )
+    mask_feather_label.pack(pady=(5, 0), anchor="w")
+
+    mask_feather_slider = ctk.CTkSlider(
+        right_column,
+        from_=4,
+        to=16,
+        number_of_steps=12,
+        command=lambda value: [
+            setattr(modules.globals, "mask_feather_ratio", int(value)),
+            save_switch_states(),
+        ],
+    )
+    mask_feather_slider.set(modules.globals.mask_feather_ratio)
+    mask_feather_slider.pack(pady=(0, 5), fill="x")
+
     button_frame = ctk.CTkFrame(main_frame, fg_color="#1a1a1a")
     button_frame.grid(row=2, column=0, columnspan=3, padx=10, pady=10, sticky="nsew")

@@ -390,52 +717,24 @@ def create_root(
         button_frame,
         text="Start",
         cursor="hand2",
-        command=lambda: analyze_target(start, root),
+        command=lambda: [
+            (
+                donate_frame.destroy()
+                if "donate_frame" in globals() and donate_frame.winfo_exists()
+                else None
+            ),
+            analyze_target(start, root),
+        ],
         fg_color="#4CAF50",
         hover_color="#45a049",
     )
     start_button.pack(side="left", padx=10, expand=True)

     preview_button = ModernButton(
-        button_frame,
-        text="Preview",
-        cursor="hand2",
-        command=lambda: toggle_preview(),
+        button_frame, text="Preview", cursor="hand2", command=lambda: toggle_preview()
     )
     preview_button.pack(side="left", padx=10, expand=True)

-    # --- Camera Selection ---
-    camera_label = ctk.CTkLabel(root, text="Select Camera:")
-    camera_label.place(relx=0.4, rely=0.86, relwidth=0.2, relheight=0.05)
-    available_cameras = get_available_cameras()
-    # Convert camera indices to strings for CTkOptionMenu
-    available_camera_indices, available_camera_strings = available_cameras
-    camera_variable = ctk.StringVar(
-        value=(
-            available_camera_strings[0]
-            if available_camera_strings
-            else "No cameras found"
-        )
-    )
-    camera_optionmenu = ctk.CTkOptionMenu(
-        root, variable=camera_variable, values=available_camera_strings
-    )
-    camera_optionmenu.place(relx=0.65, rely=0.86, relwidth=0.2, relheight=0.05)
-    # --- End Camera Selection ---
-
-    live_button = ModernButton(
-        button_frame,
-        text="Live",
-        cursor="hand2",
-        command=lambda: webcam_preview(
-            root,
-            available_camera_indices[
-                available_camera_strings.index(camera_variable.get())
-            ],
-        ),
-    )
-    live_button.pack(side="left", padx=10, expand=True)
-
     stop_button = ModernButton(
         button_frame,
         text="Destroy",
@@ -446,10 +745,55 @@ def create_root(
     )
     stop_button.pack(side="left", padx=10, expand=True)

+    # Camera Selection
+    camera_frame = ctk.CTkFrame(main_frame, fg_color="#2a2d2e", corner_radius=15)
+    camera_frame.grid(row=3, column=0, columnspan=3, padx=10, pady=(0, 10), sticky="ew")
+
+    camera_label = ctk.CTkLabel(
+        camera_frame, text="Select Camera:", font=("Roboto", 14)
+    )
+    camera_label.pack(side="left", padx=(20, 10), pady=10)
+
+    available_cameras = get_available_cameras()
+    available_camera_indices, available_camera_strings = available_cameras
+    camera_variable = ctk.StringVar(
+        value=(
+            available_camera_strings[0]
+            if available_camera_strings
+            else "No cameras found"
+        )
+    )
+    camera_optionmenu = ModernOptionMenu(
+        camera_frame,
+        values=available_camera_strings,
+        command=lambda value: print(f"Selected: {value}"),  # Add your command here
+    )
+    camera_optionmenu.pack(side="left", padx=(10, 20), pady=10, fill="x", expand=True)
+
+    live_button = ModernButton(
+        camera_frame,
+        text="Live",
+        cursor="hand2",
+        command=lambda: [
+            (
+                donate_frame.destroy()
+                if "donate_frame" in globals() and donate_frame.winfo_exists()
+                else None
+            ),
+            webcam_preview(
+                root,
+                available_camera_indices[
+                    available_camera_strings.index(camera_optionmenu.get())
+                ],
+            ),
+        ],
+    )
+    live_button.pack(side="left", padx=10, pady=10)
+
     status_label = ModernLabel(
         main_frame, text=None, justify="center", fg_color="#1a1a1a"
     )
-    status_label.grid(row=3, column=0, columnspan=3, pady=10, sticky="ew")
+    status_label.grid(row=4, column=0, columnspan=3, pady=10, sticky="ew")

     donate_frame = ctk.CTkFrame(main_frame, fg_color="#1a1a1a")
     donate_frame.grid(row=4, column=0, columnspan=3, pady=5, sticky="ew")
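The Live button above resolves the human-readable camera name back to an OpenCV device index by keeping two parallel lists returned by `get_available_cameras()`. A rough standalone sketch of that idea; the probing loop and the `Camera {i}` labels are assumptions for illustration, not the project's actual enumeration code:

```python
# Sketch: enumerate capture devices into parallel (index, name) lists, then map a
# selected display name back to its OpenCV index.
import cv2


def list_cameras(max_probe: int = 5) -> tuple[list[int], list[str]]:
    indices, names = [], []
    for i in range(max_probe):
        cap = cv2.VideoCapture(i)
        if cap.isOpened():
            indices.append(i)
            names.append(f"Camera {i}")  # hypothetical display label
        cap.release()
    return indices, names


indices, names = list_cameras()
if names:
    selected = names[0]  # e.g. the value read from an option menu widget
    camera_index = indices[names.index(selected)]
    print(f"'{selected}' -> cv2 device index {camera_index}")
else:
    print("No cameras found")
```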
@@ -463,7 +807,6 @@ def create_root(
         text_color="#1870c4",
     )
     donate_label.pack(side="left", expand=True)
-
     donate_label.bind(
         "<Button>", lambda event: webbrowser.open("https://paypal.me/hacksider")
     )
@@ -708,6 +1051,10 @@ def create_preview(parent: ctk.CTkToplevel) -> ctk.CTkToplevel:
     )
     preview_slider.pack(fill="x", padx=20, pady=10)

+    # Add keyboard bindings for left and right arrow keys
+    preview.bind("<Left>", lambda event: navigate_frames(-1))
+    preview.bind("<Right>", lambda event: navigate_frames(1))
+
     return preview

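The two `bind` calls above hook the arrow keys to frame navigation. The same pattern in a minimal Tk sketch, with `render_frame` standing in for the real preview update:

```python
# Sketch: arrow keys nudge a slider and trigger a redraw callback.
import tkinter as tk

root = tk.Tk()
slider = tk.Scale(root, from_=0, to=100, orient="horizontal")
slider.pack(fill="x")


def render_frame(frame_number: int) -> None:
    print(f"render frame {frame_number}")  # placeholder for the preview update


def navigate(direction: int) -> None:
    new_value = max(0, min(int(slider.get()) + direction, 100))
    slider.set(new_value)
    render_frame(new_value)


root.bind("<Left>", lambda event: navigate(-1))
root.bind("<Right>", lambda event: navigate(1))
root.mainloop()
```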
@@ -746,7 +1093,7 @@ def select_source_path() -> None:
     else:
         modules.globals.source_path = None
         source_label.configure(image=None)
-        source_label.configure(text="Drag & Drop Source Image Here")
+        source_label.configure(text="Drag & Drop\nSource Image Here")


 def swap_faces_paths() -> None:
@@ -893,36 +1240,77 @@ def toggle_preview() -> None:
     if PREVIEW.state() == "normal":
         PREVIEW.withdraw()
     elif modules.globals.source_path and modules.globals.target_path:
-        init_preview()
-        update_preview()
+        try:
+            init_preview()
+            update_preview()
+        except Exception as e:
+            print(f"Error initializing preview: {str(e)}")
+            # Optionally, show an error message to the user
+            update_status(f"Error initializing preview: {str(e)}")


 def init_preview() -> None:
+    global preview_slider
+
     if is_image(modules.globals.target_path):
-        preview_slider.pack_forget()
-    if is_video(modules.globals.target_path):
+        if hasattr(preview_slider, "pack_forget"):
+            preview_slider.pack_forget()
+    elif is_video(modules.globals.target_path):
         video_frame_total = get_video_frame_total(modules.globals.target_path)
-        preview_slider.configure(to=video_frame_total)
-        preview_slider.pack(fill="x")
-        preview_slider.set(0)
+
+        # Check if preview_slider exists and is a valid widget
+        if hasattr(preview_slider, "winfo_exists") and preview_slider.winfo_exists():
+            try:
+                preview_slider.configure(to=video_frame_total)
+                preview_slider.pack(fill="x")
+                preview_slider.set(0)
+            except tk.TclError:
+                print("Error: Preview slider widget not available. Recreating it.")
+                create_preview_slider()
+        else:
+            print("Preview slider not found. Creating a new one.")
+            create_preview_slider()
+
+
+def create_preview_slider():
+    global preview_slider, PREVIEW
+
+    # Ensure PREVIEW window exists
+    if not hasattr(PREVIEW, "winfo_exists") or not PREVIEW.winfo_exists():
+        print("Error: Preview window does not exist.")
+        return
+
+    # Create a new slider
+    preview_slider = ctk.CTkSlider(
+        PREVIEW,
+        from_=0,
+        to=get_video_frame_total(modules.globals.target_path),
+        command=lambda frame_value: update_preview(int(frame_value)),
+        fg_color=("gray75", "gray25"),
+        progress_color=("DodgerBlue", "DodgerBlue"),
+        button_color=("DodgerBlue", "DodgerBlue"),
+        button_hover_color=("RoyalBlue", "RoyalBlue"),
+    )
+    preview_slider.pack(fill="x", padx=20, pady=10)
+
+
+def navigate_frames(direction: int) -> None:
+    current_frame = int(preview_slider.get())
+    new_frame = max(0, min(current_frame + direction, int(preview_slider.cget("to"))))
+    preview_slider.set(new_frame)
+    update_preview(new_frame)


 def update_preview(frame_number: int = 0) -> None:
     if modules.globals.source_path and modules.globals.target_path:
         update_status("Processing...")

-        # Debug: Print the target path and frame number
-        print(
-            f"Target path: {modules.globals.target_path}, Frame number: {frame_number}"
-        )
-
         temp_frame = None
         if is_video(modules.globals.target_path):
             temp_frame = get_video_frame(modules.globals.target_path, frame_number)
         elif is_image(modules.globals.target_path):
             temp_frame = cv2.imread(modules.globals.target_path)

-        # Debug: Check if temp_frame is None
         if temp_frame is None:
             print("Error: temp_frame is None")
             update_status("Error: Could not read frame from video or image.")
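The reworked `init_preview` above defends against the slider widget having been destroyed: it checks `winfo_exists()` before configuring, catches `tk.TclError`, and recreates the widget if needed. A small generic sketch of that defensive pattern, with illustrative widget names:

```python
# Sketch: configure a Tk widget only if it still exists; rebuild it on TclError.
import tkinter as tk


def configure_safely(widget, recreate):
    """Configure `widget` if it is still alive; otherwise rebuild it via `recreate`."""
    if widget is not None and hasattr(widget, "winfo_exists") and widget.winfo_exists():
        try:
            widget.configure(text="ready")
            return widget
        except tk.TclError:
            pass  # the underlying Tcl window vanished between the check and the call
    return recreate()


root = tk.Tk()
label = tk.Label(root, text="initial")
label.pack()
label.destroy()  # simulate the widget disappearing

label = configure_safely(label, lambda: tk.Label(root, text="rebuilt"))
label.pack()
root.update()
print(label.cget("text"))  # -> "rebuilt"
```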
@@ -934,19 +1322,10 @@ def update_preview(frame_number: int = 0) -> None:
         for frame_processor in get_frame_processors_modules(
             modules.globals.frame_processors
         ):
-            # Debug: Print the type of frame_processor
-            print(f"Processing frame with: {type(frame_processor).__name__}")
-
             temp_frame = frame_processor.process_frame(
                 get_one_face(cv2.imread(modules.globals.source_path)), temp_frame
             )

-        # Debug: Check if temp_frame is None after processing
-        if temp_frame is None:
-            print("Error: temp_frame is None after processing")
-            update_status("Error: Frame processing failed.")
-            return
-
         image = Image.fromarray(cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB))
         image = ImageOps.contain(
             image, (PREVIEW_MAX_WIDTH, PREVIEW_MAX_HEIGHT), Image.LANCZOS
@@ -987,25 +1366,25 @@ def update_opacity(value):
     modules.globals.face_opacity = int(value)


-# Modify the create_webcam_preview function to include the slider
-def create_webcam_preview(camera_index):
+def create_webcam_preview(camera_index: int):
     global preview_label, PREVIEW

     camera = cv2.VideoCapture(camera_index)
     if not camera.isOpened():
         update_status(f"Error: Could not open camera with index {camera_index}")
         return

     camera.set(cv2.CAP_PROP_FRAME_WIDTH, PREVIEW_DEFAULT_WIDTH)
     camera.set(cv2.CAP_PROP_FRAME_HEIGHT, PREVIEW_DEFAULT_HEIGHT)
     camera.set(cv2.CAP_PROP_FPS, 60)

     PREVIEW.deiconify()

-    # Remove any existing widgets in PREVIEW window
+    # Clear any existing widgets in the PREVIEW window
     for widget in PREVIEW.winfo_children():
         widget.destroy()

-    # Create a main frame to hold all widgets
+    # Create a main frame to contain all widgets
     main_frame = ctk.CTkFrame(PREVIEW)
     main_frame.pack(fill="both", expand=True)

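`create_webcam_preview` opens the capture device, bails out with a status message if it cannot be opened, and then requests a resolution and frame rate. A standalone sketch of that open/check/configure flow; the 960x540 defaults stand in for the project's `PREVIEW_DEFAULT_WIDTH`/`PREVIEW_DEFAULT_HEIGHT` constants, and OpenCV treats the `set()` calls as hints the driver may ignore:

```python
# Sketch: open a camera, verify it opened, and request capture properties.
import cv2


def open_camera(camera_index: int, width: int = 960, height: int = 540, fps: int = 60):
    camera = cv2.VideoCapture(camera_index)
    if not camera.isOpened():
        raise RuntimeError(f"Could not open camera with index {camera_index}")
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    camera.set(cv2.CAP_PROP_FPS, fps)
    return camera


if __name__ == "__main__":
    cam = open_camera(0)
    print("actual size:",
          cam.get(cv2.CAP_PROP_FRAME_WIDTH), "x", cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
    cam.release()
```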
@@ -1016,10 +1395,17 @@ def create_webcam_preview(camera_index):
     preview_label = ctk.CTkLabel(preview_frame, text="")
     preview_label.pack(fill="both", expand=True)

+    # Initialize frame processors
     frame_processors = get_frame_processors_modules(modules.globals.frame_processors)

+    # Variables for source image and FPS calculation
     source_image = None
+    prev_time = time.time()
+    fps_update_interval = 0.5
+    frame_count = 0
+    fps = 0

+    # Function to update frame size when the window is resized
     def update_frame_size(event):
         nonlocal temp_frame
         if modules.globals.live_resizable:
@@ -1027,44 +1413,63 @@ def create_webcam_preview(camera_index):

     preview_frame.bind("<Configure>", update_frame_size)

-    while camera:
+    # Main loop for capturing and processing frames
+    while camera.isOpened() and PREVIEW.state() != "withdrawn":
         ret, frame = camera.read()
         if not ret:
             break

-        temp_frame = frame.copy()  # Create a copy of the frame
+        temp_frame = frame.copy()

+        # Apply mirroring if enabled
         if modules.globals.live_mirror:
-            temp_frame = cv2.flip(temp_frame, 1)  # horizontal flipping
+            temp_frame = cv2.flip(temp_frame, 1)

+        # Resize frame if enabled
         if modules.globals.live_resizable:
             temp_frame = fit_image_to_size(
                 temp_frame, PREVIEW.winfo_width(), PREVIEW.winfo_height()
             )

+        # Process frame based on face mapping mode
         if not modules.globals.map_faces:
-            # Check if source_path has changed and update source_image if necessary
+            # Update source image if path has changed
             if modules.globals.source_path and (
                 source_image is None
                 or modules.globals.source_path != source_image["location"]
             ):
                 source_image = get_one_face(cv2.imread(modules.globals.source_path))
-                source_image["location"] = (
-                    modules.globals.source_path
-                )  # Store location for comparison
+                source_image["location"] = modules.globals.source_path

+            # Apply frame processors (e.g., face swapping, enhancement)
             for frame_processor in frame_processors:
                 temp_frame = frame_processor.process_frame(source_image, temp_frame)

         else:
             modules.globals.target_path = None

             for frame_processor in frame_processors:
                 temp_frame = frame_processor.process_frame_v2(temp_frame)

-        image = cv2.cvtColor(
-            temp_frame, cv2.COLOR_BGR2RGB
-        )  # Convert the image to RGB format to display it with Tkinter
+        # Calculate and display FPS
+        current_time = time.time()
+        frame_count += 1
+        if current_time - prev_time >= fps_update_interval:
+            fps = frame_count / (current_time - prev_time)
+            frame_count = 0
+            prev_time = current_time
+
+        if modules.globals.show_fps:
+            cv2.putText(
+                temp_frame,
+                f"FPS: {fps:.1f}",
+                (10, 30),
+                cv2.FONT_HERSHEY_SIMPLEX,
+                1,
+                (0, 255, 0),
+                2,
+            )
+
+        # Convert frame to RGB and display in preview label
+        image = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)
         image = Image.fromarray(image)
         image = ImageOps.contain(
             image, (temp_frame.shape[1], temp_frame.shape[0]), Image.LANCZOS
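The FPS overlay added above counts frames and converts the count into a rate every `fps_update_interval` seconds before drawing it with `cv2.putText`. The calculation itself, isolated into a small helper (no camera required):

```python
# Sketch of the interval-based FPS counter used in the loop above.
import time


class FpsCounter:
    def __init__(self, interval: float = 0.5):
        self.interval = interval
        self.prev_time = time.time()
        self.frame_count = 0
        self.fps = 0.0

    def tick(self) -> float:
        """Call once per frame; returns the latest FPS estimate."""
        self.frame_count += 1
        now = time.time()
        elapsed = now - self.prev_time
        if elapsed >= self.interval:
            self.fps = self.frame_count / elapsed
            self.frame_count = 0
            self.prev_time = now
        return self.fps


counter = FpsCounter()
for _ in range(100):
    time.sleep(0.01)  # stand-in for capturing and processing one frame
    fps = counter.tick()
print(f"FPS: {fps:.1f}")  # roughly the loop rate
```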
@@ -1073,11 +1478,9 @@ def create_webcam_preview(camera_index):
         preview_label.configure(image=image)
         ROOT.update()

-        if PREVIEW.state() == "withdrawn":
-            break
-
+    # Release camera and close preview window
     camera.release()
-    PREVIEW.withdraw()  # Close preview window when loop is finished
+    PREVIEW.withdraw()


 def create_source_target_popup_for_webcam(root: ctk.CTk, map: list) -> None:
@@ -1092,7 +1495,12 @@ def create_source_target_popup_for_webcam(root: ctk.CTk, map: list) -> None:
         if has_valid_map():
             POPUP_LIVE.destroy()
             simplify_maps()
-            create_webcam_preview()
+            # Get the selected camera index
+            selected_camera = camera_optionmenu.get()
+            camera_index = available_camera_indices[
+                available_camera_strings.index(selected_camera)
+            ]
+            create_webcam_preview(camera_index)
         else:
             update_pop_live_status("At least 1 source with target is required!")

@@ -1101,7 +1509,10 @@ def create_source_target_popup_for_webcam(root: ctk.CTk, map: list) -> None:
         refresh_data(map)
         update_pop_live_status("Please provide mapping!")

-    popup_status_label_live = ctk.CTkLabel(POPUP_LIVE, text=None, justify="center")
+    # Rest of the popup content
+    popup_status_label_live = ctk.CTkLabel(
+        POPUP_LIVE, text=None, justify="center", font=("Roboto", 14)
+    )
     popup_status_label_live.grid(row=1, column=0, pady=15)

     add_button = ctk.CTkButton(
@@ -1124,6 +1535,29 @@ def create_source_target_popup_for_webcam(root: ctk.CTk, map: list) -> None:
     )
     close_button.place(relx=0.6, rely=0.92, relwidth=0.2, relheight=0.05)

+    # Create a better styled camera selection frame
+    camera_frame = ctk.CTkFrame(POPUP_LIVE, fg_color="#2a2d2e", corner_radius=15)
+    camera_frame.grid(row=2, column=0, pady=15, padx=20, sticky="ew")
+    POPUP_LIVE.grid_columnconfigure(0, weight=1)
+
+    camera_label = ctk.CTkLabel(
+        camera_frame,
+        text="Select Camera:",
+        font=("Roboto", 14, "bold"),
+        text_color="#DCE4EE",
+    )
+    camera_label.pack(side="left", padx=20, pady=10)
+
+    available_cameras = get_available_cameras()
+    available_camera_indices, available_camera_strings = available_cameras
+
+    camera_optionmenu = ModernOptionMenu(
+        camera_frame,
+        values=available_camera_strings,
+        command=lambda value: print(f"Selected: {value}"),  # Add your command here
+    )
+    camera_optionmenu.pack(side="left", padx=(10, 20), pady=10, fill="x", expand=True)
+
     refresh_data(map)  # Initial data refresh

requirements.txt
@@ -7,7 +7,6 @@ onnx==1.16.0
 insightface==0.7.3
 psutil==5.9.8
 tk==0.1.0
-customtkinter==5.2.2
 pillow==9.5.0
 torch==2.0.1+cu118; sys_platform != 'darwin'
 torch==2.0.1; sys_platform == 'darwin'