72 Commits

Author SHA1 Message Date
Kenneth Estanislao
28c60b69d1 Merge pull request #1532 from hacksider/dependabot/pip/torch-2.7.1cu128 2025-10-12 22:53:43 +08:00
dependabot[bot]
fcf547d7d2 Bump torch from 2.5.1 to 2.7.1+cu128
Bumps torch from 2.5.1 to 2.7.1+cu128.

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1+cu128
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-12 14:34:15 +00:00
Kenneth Estanislao
ae2d21456d Version 2.0c Release!
Sharpness and some other improvements added!
2025-10-12 22:33:09 +08:00
Kenneth Estanislao
f9270c5d1c Fix installation instructions for gfpgan and basicsrs 2025-08-29 14:44:46 +08:00
Kenneth Estanislao
fdbc29c1a9 Update README.md 2025-08-11 21:37:45 +08:00
Kenneth Estanislao
87d982e6f8 Merge pull request #1435 from rugk/patch-1
Add Golem.de (German IT news magazine) article
2025-08-08 02:26:51 +08:00
rugk
cf47dabf0e Add Golem.de (German IT news magazine) article 2025-08-06 15:43:52 +02:00
Kenneth Estanislao
d0d90ecc03 Creating a fallback and switching of models
Models switch depending on the execution provider
2025-08-02 02:56:20 +08:00
Kenneth Estanislao
2b70131e6a Update requirements.txt 2025-07-09 17:19:26 +08:00
Kenneth Estanislao
fc86365a90 Delete .yml 2025-07-02 18:37:10 +08:00
Kenneth Estanislao
1dd0e8e509 Create .yml 2025-07-02 18:29:32 +08:00
Kenneth Estanislao
4e0ff540f0 Update requirements.txt
faster and better requirements
2025-07-02 04:08:26 +08:00
Kenneth Estanislao
f0fae811d8 Update requirements.txt
should improve the performance by 30%
2025-06-29 15:03:35 +08:00
Kenneth Estanislao
42687f5bd9 Update README.md 2025-06-29 14:58:13 +08:00
Kenneth Estanislao
9086072b8e Update README.md 2025-06-23 17:06:34 +08:00
KRSHH
12fda0a3ed fix formatting 2025-06-17 18:42:36 +05:30
KRSHH
d963430854 Add techlinked link 2025-06-17 18:42:10 +05:30
KRSHH
5855d15c09 Removed outdated links 2025-06-17 18:35:24 +05:30
KRSHH
fcc73d0add Update Download Button 2025-06-16 14:37:41 +05:30
KRSHH
8d4a386a27 Upgrade prebuilt to 2.1 2025-06-15 22:19:49 +05:30
Chittimalla Krish
b98c5234d8 Revert 8bdc14a 2025-06-15 20:08:43 +05:30
Chittimalla Krish
8bdc14a789 Update prebuilt version 2025-06-15 17:50:38 +05:30
Kenneth Estanislao
f121083bc8 Update README.md
RTX 50xx support
2025-06-15 02:22:00 +08:00
Kenneth Estanislao
745d449ca6 Update README.md
support for RTX 50xx
2025-06-09 00:34:26 +08:00
Kenneth Estanislao
ec6d7d2995 Merge pull request #1327 from zjy-dev/fix/add-cudnn-installation-docs
docs: add cuDNN installation guidance for CUDA
2025-06-01 12:05:04 +08:00
zjy-dev
e791f2f18a docs: add cuDNN installation guidance for CUDA 2025-06-01 00:40:29 +08:00
KRSHH
3795e41fd7 Merge pull request #1307 from Neurofix/main
ADD locale ko.json
2025-05-28 08:09:02 +05:30
KRSHH
ab8a1c82c1 Merge pull request #1310 from Jocund96/main
Add Russian locale file: ru.json
2025-05-26 02:34:03 +05:30
Jasurbek Odilov
e1842ae0ba Merge pull request #1 from Jocund96/Jocund96-patch-1
Add locale Russian
2025-05-25 21:28:57 +02:00
Jasurbek Odilov
989106e914 Add files via upload 2025-05-25 21:28:07 +02:00
Neurofix
de27fb8a81 Create ko.json
Add korean
2025-05-25 14:49:54 +09:00
KRSHH
28109e93bb Merge pull request #1297 from j-hewett/main
Add Spanish translation
2025-05-21 21:44:03 +05:30
Jonah Hewett
fc312516e3 Add Spanish translation 2025-05-21 16:35:37 +01:00
Chou Chamnan
72049f3e91 Add khmer translation (#1291)
* Add khmer language

* Fix khmer language

---------

Co-authored-by: Chamnan dev
2025-05-18 23:03:53 +05:30
inwchamp1337
6cb5de01f8 Added a Thai translation (#1284)
* Added a Thai translation

* Update th.json
2025-05-18 23:03:19 +05:30
KRSHH
0bcf340217 Merge pull request #1281 from Giovannapls/add/pt-br-translate
[Added] pt br translate
2025-05-18 23:01:00 +05:30
Giovanna
994a63c546 [Added] pt br translate 2025-05-14 19:24:13 -03:00
Kenneth Estanislao
d5a3fb0c47 Merge pull request #1268 from jiacheng-0/main
Update __init__.py
2025-05-13 00:57:09 +08:00
Teo Jia Cheng
9690070399 Update __init__.py 2025-05-13 00:14:49 +08:00
Kenneth Estanislao
f3e83b985c Merge pull request #1210 from KunjShah01/main
Update __init__.py
2025-05-12 15:14:58 +08:00
Kenneth Estanislao
e3e3638b79 Merge pull request #1232 from gboeer/patch-1
Add german localization and fix minor typos
2025-05-12 15:14:32 +08:00
VilkkuKoo
4a7874a968 Added a Finnish translation (#1255)
* Added finnish translations

* Fixed a typo
2025-05-11 03:58:53 +05:30
Gordon Böer
75122da389 Create german localization 2025-05-07 13:30:22 +02:00
Gordon Böer
7063bba4b3 fix typos in zh.json 2025-05-07 13:24:54 +02:00
Gordon Böer
bdbd7dcfbc fix typos in ui.py 2025-05-07 13:23:31 +02:00
KUNJ SHAH
a64940def7 update 2025-05-05 13:19:46 +00:00
KUNJ SHAH
fe4a87e8f2 update 2025-05-05 13:19:29 +00:00
KUNJ SHAH
9ecd2dab83 changes 2025-05-05 13:10:00 +00:00
KUNJ SHAH
c9f36eb350 Update __init__.py 2025-05-05 18:29:44 +05:30
Kenneth Estanislao
b1f610d432 Update README.md 2025-05-05 08:30:44 +08:00
KRSHH
d86c36dc47 Change Download URL 2025-05-04 23:44:01 +05:30
Kenneth Estanislao
532e7c05ee Merge pull request #1155 from killerlux/patch-1
Added commands for linux
2025-05-03 10:16:02 +08:00
KRSHH
267a273cb2 Download for windows 2025-05-01 22:12:55 +05:30
KRSHH
938aa9eaf1 Delete media/download.png 2025-05-01 22:11:21 +05:30
KRSHH
37bac27302 Add files via upload 2025-05-01 22:10:52 +05:30
killerlux
84836932e6 Added commands for linux 2025-04-30 23:09:12 +02:00
Kenneth Estanislao
e879d2ca64 Merge pull request #1094 from NeuroDonu/main
fix core.py for face_enhancer and add TRT support in face_enhancer
2025-04-30 22:28:46 +08:00
Kenneth Estanislao
181144ce33 Update requirements.txt 2025-04-20 03:02:23 +08:00
NeuroDonu
890beb0eae fix & add trt support 2025-04-19 16:03:49 +03:00
NeuroDonu
75b5b096d6 fix 2025-04-19 16:03:24 +03:00
Kenneth Estanislao
40e47a469c Update requirements.txt 2025-04-19 03:41:00 +08:00
KRSHH
874abb4e59 v2 prebuilt 2025-04-17 09:34:10 +05:30
Kenneth Estanislao
18b259da70 Update requirements.txt
improves speed by 10 to 40%
2025-04-17 02:44:24 +08:00
Kenneth Estanislao
01900dcfb5 Revert "Update metadata.py"
This reverts commit 90d5c28542.
2025-04-17 02:39:05 +08:00
Kenneth Estanislao
07e30fe781 Revert "Update face_swapper.py"
This reverts commit 104d8cf4d6.
2025-04-17 02:03:34 +08:00
Kenneth Estanislao
3dda4f2179 Update requirements.txt 2025-04-14 17:45:07 +08:00
Kenneth Estanislao
71735e4f60 Update requirements.txt
update requirements.txt
2025-04-13 03:36:51 +08:00
Kenneth Estanislao
90d5c28542 Update metadata.py
- 40% faster than 1.8
- compatible with 50xx GPU
- onnxruntime 1.21
2025-04-13 03:34:10 +08:00
Kenneth Estanislao
104d8cf4d6 Update face_swapper.py
compatibility with inswapper 1.21
2025-04-13 01:13:40 +08:00
KRSHH
ac3696b69d remove prebuilt 2025-04-04 16:02:28 +05:30
Kenneth Estanislao
76fb209e6c Update README.md 2025-03-29 03:28:22 +08:00
Kenneth Estanislao
2dcd552c4b Update README.md 2025-03-29 03:23:49 +08:00
26 changed files with 2338 additions and 579 deletions

README.md

@@ -30,12 +30,11 @@ By using this software, you agree to these terms and commit to using it in a man
Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.
## Exclusive v2.2 Quick Start - Pre-built (Windows/Mac Silicon)
## Quick Start - Pre-built (Windows / Nvidia)
<a href="https://deeplivecam.net/index.php/quickstart"> <img src="media/Download.png" width="285" height="77" />
<a href="https://hacksider.gumroad.com/l/vccdmm"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" />
##### This is the fastest build you can get if you have a discrete NVIDIA GPU.
##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU or Mac Silicon, and you'll receive special priority support.
###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
@@ -99,7 +98,7 @@ Users are expected to use this software responsibly and legally. If using a real
## Installation (Manual)
**Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the prebuilt version.**
**Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the quickstart version.**
<details>
<summary>Click to see the process</summary>
@@ -110,7 +109,7 @@ This is more likely to work on your computer but will be slower as it utilizes t
**1. Set up Your Platform**
- Python (3.10 recommended)
- Python (3.11 recommended)
- pip
- git
- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - ```iex (irm ffmpeg.tc.ht)```
@@ -134,26 +133,34 @@ Place these files in the "**models**" folder.
We highly recommend using a `venv` to avoid issues.
For Windows:
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
For Linux:
```bash
# Ensure you use the installed Python 3.10
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
**For macOS:**
Apple Silicon (M1/M2/M3) requires specific setup:
```bash
# Install Python 3.10 (specific version is important)
brew install python@3.10
# Install Python 3.11 (specific version is important)
brew install python@3.11
# Install tkinter package (required for the GUI)
brew install python-tk@3.10
# Create and activate virtual environment with Python 3.10
python3.10 -m venv venv
# Create and activate virtual environment with Python 3.11
python3.11 -m venv venv
source venv/bin/activate
# Install dependencies
@@ -172,6 +179,11 @@ source venv/bin/activate
# install the dependencies again
pip install -r requirements.txt
# gfpgan and basicsr issue fix
pip install git+https://github.com/xinntao/BasicSR.git@master
pip uninstall gfpgan -y
pip install git+https://github.com/TencentARC/GFPGAN.git@master
```
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
@@ -180,12 +192,16 @@ pip install -r requirements.txt
**CUDA Execution Provider (Nvidia)**
1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Install dependencies:
1. Install [CUDA Toolkit 12.8.0](https://developer.nvidia.com/cuda-12-8-0-download-archive)
2. Install [cuDNN v8.9.7 for CUDA 12.x](https://developer.nvidia.com/rdp/cudnn-archive) (required for onnxruntime-gpu):
- Download cuDNN v8.9.7 for CUDA 12.x
- Make sure the cuDNN bin directory is in your system PATH
3. Install dependencies:
```bash
pip install -U torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
pip install onnxruntime-gpu==1.21.0
```
3. Usage:
@@ -225,7 +241,7 @@ python3.10 run.py --execution-provider coreml
# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13
# Keep only Python 3.10
# Keep only Python 3.11
brew cleanup
```
@@ -235,7 +251,7 @@ python3.10 run.py --execution-provider coreml
```bash
pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1
pip install onnxruntime-coreml==1.21.0
```
2. Usage:
@@ -250,7 +266,7 @@ python run.py --execution-provider coreml
```bash
pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1
pip install onnxruntime-directml==1.21.0
```
2. Usage:
@@ -265,7 +281,7 @@ python run.py --execution-provider directml
```bash
pip uninstall onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino==1.15.0
pip install onnxruntime-openvino==1.21.0
```
2. Usage:
@@ -293,19 +309,6 @@ python run.py --execution-provider openvino
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
## Tips and Tricks
Check out these helpful guides to get the most out of Deep-Live-Cam:
- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator
Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.
## Command Line Arguments (Unmaintained)
```
@@ -349,6 +352,9 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
- [*"This real-time webcam deepfake tool raises alarms about the future of identity theft"*](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography
- [*"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"*](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - SomeOrdinaryGamers
- [*"Alright look look look, now look chat, we can do any face we want to look like chat"*](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - IShowSpeed
- [*"They do a pretty good job matching poses, expression and even the lighting"*](https://www.youtube.com/watch?v=wnCghLjqv3s&t=551s) - TechLinked (LTT)
- [*"Als Sean Connery an der Redaktionskonferenz teilnahm"*](https://www.golem.de/news/deepfakes-als-sean-connery-an-der-redaktionskonferenz-teilnahm-2408-188172.html) - Golem.de (German)
## Credits

locales/de.json Normal file (+46 lines)

@@ -0,0 +1,46 @@
{
"Source x Target Mapper": "Quelle x Ziel Zuordnung",
"select a source image": "Wähle ein Quellbild",
"Preview": "Vorschau",
"select a target image or video": "Wähle ein Zielbild oder Video",
"save image output file": "Bildausgabedatei speichern",
"save video output file": "Videoausgabedatei speichern",
"select a target image": "Wähle ein Zielbild",
"source": "Quelle",
"Select a target": "Wähle ein Ziel",
"Select a face": "Wähle ein Gesicht",
"Keep audio": "Audio beibehalten",
"Face Enhancer": "Gesichtsverbesserung",
"Many faces": "Mehrere Gesichter",
"Show FPS": "FPS anzeigen",
"Keep fps": "FPS beibehalten",
"Keep frames": "Frames beibehalten",
"Fix Blueish Cam": "Bläuliche Kamera korrigieren",
"Mouth Mask": "Mundmaske",
"Show Mouth Mask Box": "Mundmaskenrahmen anzeigen",
"Start": "Starten",
"Live": "Live",
"Destroy": "Beenden",
"Map faces": "Gesichter zuordnen",
"Processing...": "Verarbeitung läuft...",
"Processing succeed!": "Verarbeitung erfolgreich!",
"Processing ignored!": "Verarbeitung ignoriert!",
"Failed to start camera": "Kamera konnte nicht gestartet werden",
"Please complete pop-up or close it.": "Bitte das Pop-up komplettieren oder schließen.",
"Getting unique faces": "Einzigartige Gesichter erfassen",
"Please select a source image first": "Bitte zuerst ein Quellbild auswählen",
"No faces found in target": "Keine Gesichter im Zielbild gefunden",
"Add": "Hinzufügen",
"Clear": "Löschen",
"Submit": "Absenden",
"Select source image": "Quellbild auswählen",
"Select target image": "Zielbild auswählen",
"Please provide mapping!": "Bitte eine Zuordnung angeben!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"Face could not be detected in last upload!": "Im letzten Upload konnte kein Gesicht erkannt werden!",
"Select Camera:": "Kamera auswählen:",
"All mappings cleared!": "Alle Zuordnungen gelöscht!",
"Mappings successfully submitted!": "Zuordnungen erfolgreich übermittelt!",
"Source x Target Mapper is already open.": "Quell-zu-Ziel-Zuordnung ist bereits geöffnet."
}

locales/es.json Normal file (+46 lines)

@@ -0,0 +1,46 @@
{
"Source x Target Mapper": "Mapeador de fuente x destino",
"select a source image": "Seleccionar imagen fuente",
"Preview": "Vista previa",
"select a target image or video": "elegir un video o una imagen fuente",
"save image output file": "guardar imagen final",
"save video output file": "guardar video final",
"select a target image": "elegir una imagen objetiva",
"source": "fuente",
"Select a target": "Elegir un destino",
"Select a face": "Elegir una cara",
"Keep audio": "Mantener audio original",
"Face Enhancer": "Potenciador de caras",
"Many faces": "Varias caras",
"Show FPS": "Mostrar fps",
"Keep fps": "Mantener fps",
"Keep frames": "Mantener frames",
"Fix Blueish Cam": "Corregir tono azul de video",
"Mouth Mask": "Máscara de boca",
"Show Mouth Mask Box": "Mostrar área de la máscara de boca",
"Start": "Iniciar",
"Live": "En vivo",
"Destroy": "Borrar",
"Map faces": "Mapear caras",
"Processing...": "Procesando...",
"Processing succeed!": "¡Proceso terminado con éxito!",
"Processing ignored!": "¡Procesamiento omitido!",
"Failed to start camera": "No se pudo iniciar la cámara",
"Please complete pop-up or close it.": "Complete o cierre el pop-up",
"Getting unique faces": "Buscando caras únicas",
"Please select a source image first": "Primero, seleccione una imagen fuente",
"No faces found in target": "No se encontró una cara en el destino",
"Add": "Agregar",
"Clear": "Limpiar",
"Submit": "Enviar",
"Select source image": "Seleccionar imagen fuente",
"Select target image": "Seleccionar imagen destino",
"Please provide mapping!": "Por favor, proporcione un mapeo",
"At least 1 source with target is required!": "Se requiere al menos una fuente con un destino.",
"At least 1 source with target is required!": "Se requiere al menos una fuente con un destino.",
"Face could not be detected in last upload!": "¡No se pudo encontrar una cara en el último video o imagen!",
"Select Camera:": "Elegir cámara:",
"All mappings cleared!": "¡Todos los mapeos fueron borrados!",
"Mappings successfully submitted!": "Mapeos enviados con éxito!",
"Source x Target Mapper is already open.": "El mapeador de fuente x destino ya está abierto."
}

locales/fi.json Normal file (+46 lines)

@@ -0,0 +1,46 @@
{
"Source x Target Mapper": "Source x Target Kartoitin",
"select an source image": "Valitse lähde kuva",
"Preview": "Esikatsele",
"select an target image or video": "Valitse kohde kuva tai video",
"save image output file": "tallenna kuva",
"save video output file": "tallenna video",
"select an target image": "Valitse kohde kuva",
"source": "lähde",
"Select a target": "Valitse kohde",
"Select a face": "Valitse kasvot",
"Keep audio": "Säilytä ääni",
"Face Enhancer": "Kasvojen Parantaja",
"Many faces": "Useampia kasvoja",
"Show FPS": "Näytä FPS",
"Keep fps": "Säilytä FPS",
"Keep frames": "Säilytä ruudut",
"Fix Blueish Cam": "Korjaa Sinertävä Kamera",
"Mouth Mask": "Suu Maski",
"Show Mouth Mask Box": "Näytä Suu Maski Laatiko",
"Start": "Aloita",
"Live": "Live",
"Destroy": "Tuhoa",
"Map faces": "Kartoita kasvot",
"Processing...": "Prosessoi...",
"Processing succeed!": "Prosessointi onnistui!",
"Processing ignored!": "Prosessointi lopetettu!",
"Failed to start camera": "Kameran käynnistäminen epäonnistui",
"Please complete pop-up or close it.": "Viimeistele tai sulje ponnahdusikkuna",
"Getting unique faces": "Hankitaan uniikkeja kasvoja",
"Please select a source image first": "Valitse ensin lähde kuva",
"No faces found in target": "Kasvoja ei löydetty kohteessa",
"Add": "Lisää",
"Clear": "Tyhjennä",
"Submit": "Lähetä",
"Select source image": "Valitse lähde kuva",
"Select target image": "Valitse kohde kuva",
"Please provide mapping!": "Tarjoa kartoitus!",
"Atleast 1 source with target is required!": "Vähintään 1 lähde kohteen kanssa on vaadittu!",
"At least 1 source with target is required!": "Vähintään 1 lähde kohteen kanssa on vaadittu!",
"Face could not be detected in last upload!": "Kasvoja ei voitu tunnistaa edellisessä latauksessa!",
"Select Camera:": "Valitse Kamera:",
"All mappings cleared!": "Kaikki kartoitukset tyhjennetty!",
"Mappings successfully submitted!": "Kartoitukset lähetety onnistuneesti!",
"Source x Target Mapper is already open.": "Lähde x Kohde Kartoittaja on jo auki."
}

locales/km.json Normal file (+45 lines)

@@ -0,0 +1,45 @@
{
"Source x Target Mapper": "ប្រភប x បន្ថែម Mapper",
"select a source image": "ជ្រើសរើសប្រភពរូបភាព",
"Preview": "បង្ហាញ",
"select a target image or video": "ជ្រើសរើសគោលដៅរូបភាពឬវីដេអូ",
"save image output file": "រក្សាទុកលទ្ធផលឯកសាររូបភាព",
"save video output file": "រក្សាទុកលទ្ធផលឯកសារវីដេអូ",
"select a target image": "ជ្រើសរើសគោលដៅរូបភាព",
"source": "ប្រភព",
"Select a target": "ជ្រើសរើសគោលដៅ",
"Select a face": "ជ្រើសរើសមុខ",
"Keep audio": "រម្លងសម្លេង",
"Face Enhancer": "ឧបករណ៍ពង្រឹងមុខ",
"Many faces": "ទម្រង់មុខច្រើន",
"Show FPS": "បង្ហាញ FPS",
"Keep fps": "រម្លង fps",
"Keep frames": "រម្លងទម្រង់",
"Fix Blueish Cam": "ជួសជុល Cam Blueish",
"Mouth Mask": "របាំងមាត់",
"Show Mouth Mask Box": "បង្ហាញប្រអប់របាំងមាត់",
"Start": "ចាប់ផ្ដើម",
"Live": "ផ្សាយផ្ទាល់",
"Destroy": "លុប",
"Map faces": "ផែនទីមុខ",
"Processing...": "កំពុងដំណើរការ...",
"Processing succeed!": "ការដំណើរការទទួលបានជោគជ័យ!",
"Processing ignored!": "ការដំណើរការមិនទទួលបានជោគជ័យ!",
"Failed to start camera": "បរាជ័យដើម្បីចាប់ផ្ដើមបើកកាមេរ៉ា",
"Please complete pop-up or close it.": "សូមបញ្ចប់ផ្ទាំងផុស ឬបិទវា.",
"Getting unique faces": "ការចាប់ផ្ដើមទម្រង់មុខប្លែក",
"Please select a source image first": "សូមជ្រើសរើសប្រភពរូបភាពដំបូង",
"No faces found in target": "រកអត់ឃើញមុខនៅក្នុងគោលដៅ",
"Add": "បន្ថែម",
"Clear": "សម្អាត",
"Submit": "បញ្ចូន",
"Select source image": "ជ្រើសរើសប្រភពរូបភាព",
"Select target image": "ជ្រើសរើសគោលដៅរូបភាព",
"Please provide mapping!": "សូមផ្ដល់នៅផែនទី",
"At least 1 source with target is required!": "ត្រូវការប្រភពយ៉ាងហោចណាស់ ១ ដែលមានគោលដៅ!",
"Face could not be detected in last upload!": "មុខមិនអាចភ្ជាប់នៅក្នុងការបង្ហេាះចុងក្រោយ!",
"Select Camera:": "ជ្រើសរើសកាមេរ៉ា",
"All mappings cleared!": "ផែនទីទាំងអស់ត្រូវបានសម្អាត!",
"Mappings successfully submitted!": "ផែនទីត្រូវបានបញ្ជូនជោគជ័យ!",
"Source x Target Mapper is already open.": "ប្រភព x Target Mapper បានបើករួចហើយ។"
}

locales/ko.json Normal file (+45 lines)

@@ -0,0 +1,45 @@
{
"Source x Target Mapper": "소스 x 타겟 매퍼",
"select a source image": "소스 이미지 선택",
"Preview": "미리보기",
"select a target image or video": "타겟 이미지 또는 영상 선택",
"save image output file": "이미지 출력 파일 저장",
"save video output file": "영상 출력 파일 저장",
"select a target image": "타겟 이미지 선택",
"source": "소스",
"Select a target": "타겟 선택",
"Select a face": "얼굴 선택",
"Keep audio": "오디오 유지",
"Face Enhancer": "얼굴 향상",
"Many faces": "여러 얼굴",
"Show FPS": "FPS 표시",
"Keep fps": "FPS 유지",
"Keep frames": "프레임 유지",
"Fix Blueish Cam": "푸른빛 카메라 보정",
"Mouth Mask": "입 마스크",
"Show Mouth Mask Box": "입 마스크 박스 표시",
"Start": "시작",
"Live": "라이브",
"Destroy": "종료",
"Map faces": "얼굴 매핑",
"Processing...": "처리 중...",
"Processing succeed!": "처리 성공!",
"Processing ignored!": "처리 무시됨!",
"Failed to start camera": "카메라 시작 실패",
"Please complete pop-up or close it.": "팝업을 완료하거나 닫아주세요.",
"Getting unique faces": "고유 얼굴 가져오는 중",
"Please select a source image first": "먼저 소스 이미지를 선택해주세요",
"No faces found in target": "타겟에서 얼굴을 찾을 수 없음",
"Add": "추가",
"Clear": "지우기",
"Submit": "제출",
"Select source image": "소스 이미지 선택",
"Select target image": "타겟 이미지 선택",
"Please provide mapping!": "매핑을 입력해주세요!",
"At least 1 source with target is required!": "최소 하나의 소스와 타겟이 필요합니다!",
"Face could not be detected in last upload!": "최근 업로드에서 얼굴을 감지할 수 없습니다!",
"Select Camera:": "카메라 선택:",
"All mappings cleared!": "모든 매핑이 삭제되었습니다!",
"Mappings successfully submitted!": "매핑이 성공적으로 제출되었습니다!",
"Source x Target Mapper is already open.": "소스 x 타겟 매퍼가 이미 열려 있습니다."
}

locales/pt-br.json Normal file (+46 lines)

@@ -0,0 +1,46 @@
{
"Source x Target Mapper": "Mapeador de Origem x Destino",
"select an source image": "Escolha uma imagem de origem",
"Preview": "Prévia",
"select an target image or video": "Escolha uma imagem ou vídeo de destino",
"save image output file": "Salvar imagem final",
"save video output file": "Salvar vídeo final",
"select an target image": "Escolha uma imagem de destino",
"source": "Origem",
"Select a target": "Escolha o destino",
"Select a face": "Escolha um rosto",
"Keep audio": "Manter o áudio original",
"Face Enhancer": "Melhorar rosto",
"Many faces": "Vários rostos",
"Show FPS": "Mostrar FPS",
"Keep fps": "Manter FPS",
"Keep frames": "Manter frames",
"Fix Blueish Cam": "Corrigir tom azulado da câmera",
"Mouth Mask": "Máscara da boca",
"Show Mouth Mask Box": "Mostrar área da máscara da boca",
"Start": "Começar",
"Live": "Ao vivo",
"Destroy": "Destruir",
"Map faces": "Mapear rostos",
"Processing...": "Processando...",
"Processing succeed!": "Tudo certo!",
"Processing ignored!": "Processamento ignorado!",
"Failed to start camera": "Não foi possível iniciar a câmera",
"Please complete pop-up or close it.": "Finalize ou feche o pop-up",
"Getting unique faces": "Buscando rostos diferentes",
"Please select a source image first": "Selecione primeiro uma imagem de origem",
"No faces found in target": "Nenhum rosto encontrado na imagem de destino",
"Add": "Adicionar",
"Clear": "Limpar",
"Submit": "Enviar",
"Select source image": "Escolha a imagem de origem",
"Select target image": "Escolha a imagem de destino",
"Please provide mapping!": "Você precisa realizar o mapeamento!",
"Atleast 1 source with target is required!": "É necessária pelo menos uma origem com um destino!",
"At least 1 source with target is required!": "É necessária pelo menos uma origem com um destino!",
"Face could not be detected in last upload!": "Não conseguimos detectar o rosto na última imagem!",
"Select Camera:": "Escolher câmera:",
"All mappings cleared!": "Todos os mapeamentos foram removidos!",
"Mappings successfully submitted!": "Mapeamentos enviados com sucesso!",
"Source x Target Mapper is already open.": "O Mapeador de Origem x Destino já está aberto."
}

locales/ru.json Normal file (+45 lines)

@@ -0,0 +1,45 @@
{
"Source x Target Mapper": "Сопоставитель Источник x Цель",
"select a source image": "выберите исходное изображение",
"Preview": "Предпросмотр",
"select a target image or video": "выберите целевое изображение или видео",
"save image output file": "сохранить выходной файл изображения",
"save video output file": "сохранить выходной файл видео",
"select a target image": "выберите целевое изображение",
"source": "источник",
"Select a target": "Выберите целевое изображение",
"Select a face": "Выберите лицо",
"Keep audio": "Сохранить аудио",
"Face Enhancer": "Улучшение лица",
"Many faces": "Несколько лиц",
"Show FPS": "Показать FPS",
"Keep fps": "Сохранить FPS",
"Keep frames": "Сохранить кадры",
"Fix Blueish Cam": "Исправить синеву камеры",
"Mouth Mask": "Маска рта",
"Show Mouth Mask Box": "Показать рамку маски рта",
"Start": "Старт",
"Live": "В реальном времени",
"Destroy": "Остановить",
"Map faces": "Сопоставить лица",
"Processing...": "Обработка...",
"Processing succeed!": "Обработка успешна!",
"Processing ignored!": "Обработка проигнорирована!",
"Failed to start camera": "Не удалось запустить камеру",
"Please complete pop-up or close it.": "Пожалуйста, заполните всплывающее окно или закройте его.",
"Getting unique faces": "Получение уникальных лиц",
"Please select a source image first": "Сначала выберите исходное изображение, пожалуйста",
"No faces found in target": "В целевом изображении не найдено лиц",
"Add": "Добавить",
"Clear": "Очистить",
"Submit": "Отправить",
"Select source image": "Выбрать исходное изображение",
"Select target image": "Выбрать целевое изображение",
"Please provide mapping!": "Пожалуйста, укажите сопоставление!",
"At least 1 source with target is required!": "Требуется хотя бы 1 источник с целью!",
"Face could not be detected in last upload!": "Лицо не обнаружено в последнем загруженном изображении!",
"Select Camera:": "Выберите камеру:",
"All mappings cleared!": "Все сопоставления очищены!",
"Mappings successfully submitted!": "Сопоставления успешно отправлены!",
"Source x Target Mapper is already open.": "Сопоставитель Источник-Цель уже открыт."
}

locales/th.json Normal file (+45 lines)

@@ -0,0 +1,45 @@
{
"Source x Target Mapper": "ตัวจับคู่ต้นทาง x ปลายทาง",
"select a source image": "เลือกรูปภาพต้นฉบับ",
"Preview": "ตัวอย่าง",
"select a target image or video": "เลือกรูปภาพหรือวิดีโอเป้าหมาย",
"save image output file": "บันทึกไฟล์รูปภาพ",
"save video output file": "บันทึกไฟล์วิดีโอ",
"select a target image": "เลือกรูปภาพเป้าหมาย",
"source": "ต้นฉบับ",
"Select a target": "เลือกเป้าหมาย",
"Select a face": "เลือกใบหน้า",
"Keep audio": "เก็บเสียง",
"Face Enhancer": "ปรับปรุงใบหน้า",
"Many faces": "หลายใบหน้า",
"Show FPS": "แสดง FPS",
"Keep fps": "คงค่า FPS",
"Keep frames": "คงค่าเฟรม",
"Fix Blueish Cam": "แก้ไขภาพอมฟ้าจากกล้อง",
"Mouth Mask": "มาสก์ปาก",
"Show Mouth Mask Box": "แสดงกรอบมาสก์ปาก",
"Start": "เริ่ม",
"Live": "สด",
"Destroy": "หยุด",
"Map faces": "จับคู่ใบหน้า",
"Processing...": "กำลังประมวลผล...",
"Processing succeed!": "ประมวลผลสำเร็จแล้ว!",
"Processing ignored!": "การประมวลผลถูกละเว้น",
"Failed to start camera": "ไม่สามารถเริ่มกล้องได้",
"Please complete pop-up or close it.": "โปรดดำเนินการในป๊อปอัปให้เสร็จสิ้น หรือปิด",
"Getting unique faces": "กำลังค้นหาใบหน้าที่ไม่ซ้ำกัน",
"Please select a source image first": "โปรดเลือกภาพต้นฉบับก่อน",
"No faces found in target": "ไม่พบใบหน้าในภาพเป้าหมาย",
"Add": "เพิ่ม",
"Clear": "ล้าง",
"Submit": "ส่ง",
"Select source image": "เลือกภาพต้นฉบับ",
"Select target image": "เลือกภาพเป้าหมาย",
"Please provide mapping!": "โปรดระบุการจับคู่!",
"At least 1 source with target is required!": "ต้องมีการจับคู่ต้นฉบับกับเป้าหมายอย่างน้อย 1 คู่!",
"Face could not be detected in last upload!": "ไม่สามารถตรวจพบใบหน้าในไฟล์อัปโหลดล่าสุด!",
"Select Camera:": "เลือกกล้อง:",
"All mappings cleared!": "ล้างการจับคู่ทั้งหมดแล้ว!",
"Mappings successfully submitted!": "ส่งการจับคู่สำเร็จแล้ว!",
"Source x Target Mapper is already open.": "ตัวจับคู่ต้นทาง x ปลายทาง เปิดอยู่แล้ว"
}

locales/zh.json

@@ -1,11 +1,11 @@
{
"Source x Target Mapper": "Source x Target Mapper",
"select an source image": "选择一个源图像",
"select a source image": "选择一个源图像",
"Preview": "预览",
"select an target image or video": "选择一个目标图像或视频",
"select a target image or video": "选择一个目标图像或视频",
"save image output file": "保存图像输出文件",
"save video output file": "保存视频输出文件",
"select an target image": "选择一个目标图像",
"select a target image": "选择一个目标图像",
"source": "源",
"Select a target": "选择一个目标",
"Select a face": "选择一张脸",
@@ -36,7 +36,7 @@
"Select source image": "请选取源图像",
"Select target image": "请选取目标图像",
"Please provide mapping!": "请提供映射",
"Atleast 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"Face could not be detected in last upload!": "最近上传的图像中没有检测到人脸!",
"Select Camera:": "选择摄像头",

BIN media/Download.png (new binary file, 9.6 KiB; not shown)
BIN media/download.png (deleted binary file, 9.0 KiB; not shown)


@@ -0,0 +1,18 @@
import os
import cv2
import numpy as np
# Utility function to support unicode characters in file paths for reading
def imread_unicode(path, flags=cv2.IMREAD_COLOR):
    return cv2.imdecode(np.fromfile(path, dtype=np.uint8), flags)

# Utility function to support unicode characters in file paths for writing
def imwrite_unicode(path, img, params=None):
    root, ext = os.path.splitext(path)
    if not ext:
        ext = ".png"
    # ext already includes the leading dot, so pass it to imencode as-is
    result, encoded_img = cv2.imencode(ext, img, params if params is not None else [])
    if result:
        encoded_img.tofile(path)
        return True
    return False
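A quick round-trip sketch of the two helpers above (the file name is a made-up example; the point is that `np.fromfile`/`tofile` sidestep OpenCV's non-ASCII path limitation on Windows):

```python
import numpy as np

img = np.zeros((64, 64, 3), dtype=np.uint8)
path = "café_frame.png"  # non-ASCII name that plain cv2.imwrite can mishandle

if imwrite_unicode(path, img):
    loaded = imread_unicode(path)
    assert loaded is not None and loaded.shape == img.shape
```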

modules/custom_types.py Normal file (+7 lines)

@@ -0,0 +1,7 @@
from typing import Any
from insightface.app.common import Face
import numpy
Face = Face  # Re-export insightface's Face type so callers import it from this module
Frame = numpy.ndarray[Any, Any]  # A single image/video frame as a numpy array
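A minimal usage sketch (assumed, not from the repo) of how these aliases annotate a frame-processing function:

```python
from modules.custom_types import Face, Frame

def downscale(face: Face | None, frame: Frame) -> Frame:
    # Frame is just an alias for numpy.ndarray, so ordinary slicing applies;
    # Face carries insightface's detection data (bbox, landmark_2d_106, embedding).
    return frame[::2, ::2]
```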

modules/globals.py

@@ -1,3 +1,5 @@
# --- START OF FILE globals.py ---
import os
from typing import List, Dict, Any
@@ -9,35 +11,61 @@ file_types = [
("Video", ("*.mp4", "*.mkv")),
]
source_target_map = []
simple_map = {}
# Face Mapping Data
souce_target_map: List[Dict[str, Any]] = [] # Stores detailed map for image/video processing
simple_map: Dict[str, Any] = {} # Stores simplified map (embeddings/faces) for live/simple mode
source_path = None
target_path = None
output_path = None
# Paths
source_path: str | None = None
target_path: str | None = None
output_path: str | None = None
# Processing Options
frame_processors: List[str] = []
keep_fps = True
keep_audio = True
keep_frames = False
many_faces = False
map_faces = False
color_correction = False # New global variable for color correction toggle
nsfw_filter = False
video_encoder = None
video_quality = None
live_mirror = False
live_resizable = True
max_memory = None
execution_providers: List[str] = []
execution_threads = None
headless = None
log_level = "error"
keep_fps: bool = True
keep_audio: bool = True
keep_frames: bool = False
many_faces: bool = False # Process all detected faces with default source
map_faces: bool = False # Use souce_target_map or simple_map for specific swaps
color_correction: bool = False # Enable color correction (implementation specific)
nsfw_filter: bool = False
# Video Output Options
video_encoder: str | None = None
video_quality: int | None = None # Typically a CRF value or bitrate
# Live Mode Options
live_mirror: bool = False
live_resizable: bool = True
camera_input_combobox: Any | None = None # Placeholder for UI element if needed
webcam_preview_running: bool = False
show_fps: bool = False
# System Configuration
max_memory: int | None = None # Memory limit in GB? (Needs clarification)
execution_providers: List[str] = [] # e.g., ['CUDAExecutionProvider', 'CPUExecutionProvider']
execution_threads: int | None = None # Number of threads for CPU execution
headless: bool | None = None # Run without UI?
log_level: str = "error" # Logging level (e.g., 'debug', 'info', 'warning', 'error')
# Face Processor UI Toggles (Example)
fp_ui: Dict[str, bool] = {"face_enhancer": False}
camera_input_combobox = None
webcam_preview_running = False
show_fps = False
mouth_mask = False
show_mouth_mask_box = False
mask_feather_ratio = 8
mask_down_size = 0.50
mask_size = 1
# Face Swapper Specific Options
face_swapper_enabled: bool = True # General toggle for the swapper processor
opacity: float = 1.0 # Blend factor for the swapped face (0.0-1.0)
sharpness: float = 0.0 # Sharpness enhancement for swapped face (0.0-1.0+)
# Mouth Mask Options
mouth_mask: bool = False # Enable mouth area masking/pasting
show_mouth_mask_box: bool = False # Visualize the mouth mask area (for debugging)
mask_feather_ratio: int = 12 # Denominator for feathering calculation (higher = smaller feather)
mask_down_size: float = 0.1 # Expansion factor for lower lip mask (relative)
mask_size: float = 1.0 # Expansion factor for upper lip mask (relative)
# --- START: Added for Frame Interpolation ---
enable_interpolation: bool = True # Toggle temporal smoothing
interpolation_weight: float = 0 # Blend weight for current frame (0.0-1.0). Lower=smoother.
# --- END: Added for Frame Interpolation ---
# --- END OF FILE globals.py ---
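The two interpolation globals suggest a simple temporal blend between consecutive output frames. A minimal sketch of that idea (assumed semantics; the actual consumer of these flags is not part of this diff):

```python
import cv2
import numpy as np

_prev_frame = None  # previous blended output

def interpolate_frame(current: np.ndarray, weight: float) -> np.ndarray:
    """Blend the current frame with the previous output.

    `weight` is the share of the current frame (0.0-1.0);
    lower values lean on history, so motion looks smoother.
    """
    global _prev_frame
    if _prev_frame is None or _prev_frame.shape != current.shape:
        _prev_frame = current
        return current
    blended = cv2.addWeighted(current, weight, _prev_frame, 1.0 - weight, 0)
    _prev_frame = blended
    return blended
```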

modules/metadata.py

@@ -1,3 +1,3 @@
name = 'Deep-Live-Cam'
version = '1.8'
version = '2.0c'
edition = 'GitHub Edition'

modules/processors/frame/core.py

@@ -42,18 +42,29 @@ def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType
def set_frame_processors_modules_from_ui(frame_processors: List[str]) -> None:
global FRAME_PROCESSORS_MODULES
current_processor_names = [proc.__name__.split('.')[-1] for proc in FRAME_PROCESSORS_MODULES]
for frame_processor, state in modules.globals.fp_ui.items():
if state == True and frame_processor not in frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
FRAME_PROCESSORS_MODULES.append(frame_processor_module)
modules.globals.frame_processors.append(frame_processor)
if state == False:
if state == True and frame_processor not in current_processor_names:
try:
frame_processor_module = load_frame_processor_module(frame_processor)
FRAME_PROCESSORS_MODULES.remove(frame_processor_module)
modules.globals.frame_processors.remove(frame_processor)
except:
pass
FRAME_PROCESSORS_MODULES.append(frame_processor_module)
if frame_processor not in modules.globals.frame_processors:
modules.globals.frame_processors.append(frame_processor)
except SystemExit:
print(f"Warning: Failed to load frame processor {frame_processor} requested by UI state.")
except Exception as e:
print(f"Warning: Error loading frame processor {frame_processor} requested by UI state: {e}")
elif state == False and frame_processor in current_processor_names:
try:
module_to_remove = next((mod for mod in FRAME_PROCESSORS_MODULES if mod.__name__.endswith(f'.{frame_processor}')), None)
if module_to_remove:
FRAME_PROCESSORS_MODULES.remove(module_to_remove)
if frame_processor in modules.globals.frame_processors:
modules.globals.frame_processors.remove(frame_processor)
except Exception as e:
print(f"Warning: Error removing frame processor {frame_processor}: {e}")
def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], progress: Any = None) -> None:
with ThreadPoolExecutor(max_workers=modules.globals.execution_threads) as executor:

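`multi_process_frame` above fans the frame list out over a thread pool; a stripped-down sketch of that pattern (hypothetical `process_chunk` callable, not the module's exact body):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def run_threaded(frame_paths: List[str],
                 process_chunk: Callable[[List[str]], None],
                 threads: int = 4) -> None:
    # One chunk of frame paths per worker; result() re-raises worker exceptions.
    size = max(1, len(frame_paths) // threads)
    chunks = [frame_paths[i:i + size] for i in range(0, len(frame_paths), size)]
    with ThreadPoolExecutor(max_workers=threads) as executor:
        for future in [executor.submit(process_chunk, c) for c in chunks]:
            future.result()
```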
modules/processors/frame/face_enhancer.py

@@ -1,16 +1,18 @@
# --- START OF FILE face_enhancer.py ---
from typing import Any, List
import cv2
import threading
import gfpgan
import os
import platform
import torch # Make sure torch is imported
import modules.globals
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face
from modules.typing import Frame, Face
import platform
import torch
from modules.utilities import (
conditional_download,
is_image,
@@ -49,61 +51,156 @@ def pre_start() -> bool:
def get_face_enhancer() -> Any:
"""
Initializes and returns the GFPGAN face enhancer instance,
prioritizing CUDA, then MPS (Mac), then CPU.
"""
global FACE_ENHANCER
with THREAD_LOCK:
if FACE_ENHANCER is None:
model_path = os.path.join(models_dir, "GFPGANv1.4.pth")
device = None
try:
# Priority 1: CUDA
if torch.cuda.is_available():
device = torch.device("cuda")
print(f"{NAME}: Using CUDA device.")
# Priority 2: MPS (Mac Silicon)
elif platform.system() == "Darwin" and torch.backends.mps.is_available():
device = torch.device("mps")
print(f"{NAME}: Using MPS device.")
# Priority 3: CPU
else:
device = torch.device("cpu")
print(f"{NAME}: Using CPU device.")
match platform.system():
case "Darwin": # Mac OS
if torch.backends.mps.is_available():
mps_device = torch.device("mps")
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device) # type: ignore[attr-defined]
else:
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
case _: # Other OS
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
FACE_ENHANCER = gfpgan.GFPGANer(
model_path=model_path,
upscale=1, # upscale=1 means enhancement only, no resizing
arch='clean',
channel_multiplier=2,
bg_upsampler=None,
device=device
)
print(f"{NAME}: GFPGANer initialized successfully on {device}.")
except Exception as e:
print(f"{NAME}: Error initializing GFPGANer: {e}")
# Fallback to CPU if initialization with GPU fails for some reason
if device is not None and device.type != 'cpu':
print(f"{NAME}: Falling back to CPU due to error.")
try:
device = torch.device("cpu")
FACE_ENHANCER = gfpgan.GFPGANer(
model_path=model_path,
upscale=1,
arch='clean',
channel_multiplier=2,
bg_upsampler=None,
device=device
)
print(f"{NAME}: GFPGANer initialized successfully on CPU after fallback.")
except Exception as fallback_e:
print(f"{NAME}: FATAL: Could not initialize GFPGANer even on CPU: {fallback_e}")
FACE_ENHANCER = None # Ensure it's None if totally failed
else:
# If it failed even on the first CPU attempt or device was already CPU
print(f"{NAME}: FATAL: Could not initialize GFPGANer on CPU: {e}")
FACE_ENHANCER = None # Ensure it's None if totally failed
# Check if enhancer is still None after attempting initialization
if FACE_ENHANCER is None:
raise RuntimeError(f"{NAME}: Failed to initialize GFPGANer. Check logs for errors.")
return FACE_ENHANCER
def enhance_face(temp_frame: Frame) -> Frame:
with THREAD_SEMAPHORE:
_, _, temp_frame = get_face_enhancer().enhance(temp_frame, paste_back=True)
return temp_frame
"""Enhances faces in a single frame using the global GFPGANer instance."""
# Ensure enhancer is ready
enhancer = get_face_enhancer()
try:
with THREAD_SEMAPHORE:
# The enhance method returns: _, restored_faces, restored_img
_, _, restored_img = enhancer.enhance(
temp_frame,
has_aligned=False, # Assume faces are not pre-aligned
only_center_face=False, # Enhance all detected faces
paste_back=True # Paste enhanced faces back onto the original image
)
# GFPGAN might return None if no face is detected or an error occurs
if restored_img is None:
# print(f"{NAME}: Warning: GFPGAN enhancement returned None. Returning original frame.")
return temp_frame
return restored_img
except Exception as e:
print(f"{NAME}: Error during face enhancement: {e}")
# Return the original frame in case of error during enhancement
return temp_frame
def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
target_face = get_one_face(temp_frame)
if target_face:
temp_frame = enhance_face(temp_frame)
def process_frame(source_face: Face | None, temp_frame: Frame) -> Frame:
"""Processes a frame: enhances face if detected."""
# We don't strictly need source_face for enhancement only
# Check if any face exists to potentially save processing time, though GFPGAN also does detection.
# For simplicity and ensuring enhancement is attempted if possible, we can rely on enhance_face.
# target_face = get_one_face(temp_frame) # This gets only ONE face
# If you want to enhance ONLY if a face is detected by your *own* analyser first:
# has_face = get_one_face(temp_frame) is not None # Or use get_many_faces
# if has_face:
# temp_frame = enhance_face(temp_frame)
# else: # Enhance regardless, let GFPGAN handle detection
temp_frame = enhance_face(temp_frame)
return temp_frame
def process_frames(
source_path: str, temp_frame_paths: List[str], progress: Any = None
source_path: str | None, temp_frame_paths: List[str], progress: Any = None
) -> None:
"""Processes multiple frames from file paths."""
for temp_frame_path in temp_frame_paths:
if not os.path.exists(temp_frame_path):
print(f"{NAME}: Warning: Frame path not found {temp_frame_path}, skipping.")
if progress:
progress.update(1)
continue
temp_frame = cv2.imread(temp_frame_path)
result = process_frame(None, temp_frame)
cv2.imwrite(temp_frame_path, result)
if temp_frame is None:
print(f"{NAME}: Warning: Failed to read frame {temp_frame_path}, skipping.")
if progress:
progress.update(1)
continue
result_frame = process_frame(None, temp_frame)
cv2.imwrite(temp_frame_path, result_frame)
if progress:
progress.update(1)
def process_image(source_path: str, target_path: str, output_path: str) -> None:
def process_image(source_path: str | None, target_path: str, output_path: str) -> None:
"""Processes a single image file."""
target_frame = cv2.imread(target_path)
result = process_frame(None, target_frame)
cv2.imwrite(output_path, result)
if target_frame is None:
print(f"{NAME}: Error: Failed to read target image {target_path}")
return
result_frame = process_frame(None, target_frame)
cv2.imwrite(output_path, result_frame)
print(f"{NAME}: Enhanced image saved to {output_path}")
def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
modules.processors.frame.core.process_video(None, temp_frame_paths, process_frames)
def process_video(source_path: str | None, temp_frame_paths: List[str]) -> None:
"""Processes video frames using the frame processor core."""
# source_path might be optional depending on how process_video is called
modules.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
# Optional: Keep process_frame_v2 if it's used elsewhere, otherwise it's redundant
# def process_frame_v2(temp_frame: Frame) -> Frame:
# target_face = get_one_face(temp_frame)
# if target_face:
# temp_frame = enhance_face(temp_frame)
# return temp_frame
def process_frame_v2(temp_frame: Frame) -> Frame:
target_face = get_one_face(temp_frame)
if target_face:
temp_frame = enhance_face(temp_frame)
return temp_frame
# --- END OF FILE face_enhancer.py ---


@@ -0,0 +1,609 @@
import cv2
import numpy as np
from modules.typing import Face, Frame
import modules.globals
def apply_color_transfer(source, target):
"""
Apply color transfer from target to source image
"""
source = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype("float32")
target = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype("float32")
source_mean, source_std = cv2.meanStdDev(source)
target_mean, target_std = cv2.meanStdDev(target)
# Reshape mean and std to be broadcastable
source_mean = source_mean.reshape(1, 1, 3)
source_std = source_std.reshape(1, 1, 3)
target_mean = target_mean.reshape(1, 1, 3)
target_std = target_std.reshape(1, 1, 3)
# Perform the color transfer
source = (source - source_mean) * (target_std / source_std) + target_mean
return cv2.cvtColor(np.clip(source, 0, 255).astype("uint8"), cv2.COLOR_LAB2BGR)
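A quick sanity check for `apply_color_transfer` (made-up inputs): after the transfer, the LAB channel means of the result should land close to the target's.

```python
import cv2
import numpy as np

src = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
tgt = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)

out = apply_color_transfer(src, tgt)
out_lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB).astype("float32")
tgt_lab = cv2.cvtColor(tgt, cv2.COLOR_BGR2LAB).astype("float32")
# Channel-wise mean gap; stays small apart from clipping effects.
print(np.abs(out_lab.mean(axis=(0, 1)) - tgt_lab.mean(axis=(0, 1))))
```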
def create_face_mask(face: Face, frame: Frame) -> np.ndarray:
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
landmarks = face.landmark_2d_106
if landmarks is not None:
# Convert landmarks to int32
landmarks = landmarks.astype(np.int32)
# Extract facial features
right_side_face = landmarks[0:16]
left_side_face = landmarks[17:32]
right_eye = landmarks[33:42]
right_eye_brow = landmarks[43:51]
left_eye = landmarks[87:96]
left_eye_brow = landmarks[97:105]
# Calculate padding
padding = int(
np.linalg.norm(right_side_face[0] - left_side_face[-1]) * 0.05
) # 5% of face width
        # Build a closed face outline from the two cheek contours (left side
        # reversed so the points run continuously), then create a slightly
        # larger convex hull for padding
        face_outline = np.vstack([right_side_face, left_side_face[::-1]])
        hull = cv2.convexHull(face_outline)
hull_padded = []
for point in hull:
x, y = point[0]
center = np.mean(face_outline, axis=0)
direction = np.array([x, y]) - center
direction = direction / np.linalg.norm(direction)
padded_point = np.array([x, y]) + direction * padding
hull_padded.append(padded_point)
hull_padded = np.array(hull_padded, dtype=np.int32)
# Fill the padded convex hull
cv2.fillConvexPoly(mask, hull_padded, 255)
# Smooth the mask edges
mask = cv2.GaussianBlur(mask, (5, 5), 3)
return mask
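`create_face_mask` returns a feathered 8-bit mask; a hedged sketch of the typical consumer, alpha-blending a swapped face back over the original frame (illustrative, not this file's exact pipeline):

```python
import numpy as np

def blend_with_mask(swapped: np.ndarray, original: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Promote the single-channel mask to a float alpha in [0, 1], broadcast over BGR.
    alpha = (mask.astype(np.float32) / 255.0)[..., None]
    out = swapped.astype(np.float32) * alpha + original.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```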
def create_lower_mouth_mask(
face: Face, frame: Frame
) -> (np.ndarray, np.ndarray, tuple, np.ndarray):
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mouth_cutout = None
landmarks = face.landmark_2d_106
if landmarks is not None:
        # lower_lip_order[i] -> landmark index (106-point layout); positions 0-20
        lower_lip_order = [
            65, 66, 62, 70, 69, 18, 19, 20, 21, 22,
            23, 24, 0, 8, 7, 6, 5, 4, 3, 2, 65,
        ]
lower_lip_landmarks = landmarks[lower_lip_order].astype(
np.float32
) # Use float for precise calculations
# Calculate the center of the landmarks
center = np.mean(lower_lip_landmarks, axis=0)
# Expand the landmarks outward using the mouth_mask_size
expansion_factor = (
1 + modules.globals.mask_down_size * modules.globals.mouth_mask_size
) # Adjust expansion based on slider
expanded_landmarks = (lower_lip_landmarks - center) * expansion_factor + center
# Extend the top lip part
        toplip_indices = [20, 0, 1, 2, 3, 4, 5]  # Indices for landmarks 2, 65, 66, 62, 70, 69, 18
toplip_extension = (
modules.globals.mask_size * modules.globals.mouth_mask_size * 0.5
) # Adjust extension based on slider
for idx in toplip_indices:
direction = expanded_landmarks[idx] - center
direction = direction / np.linalg.norm(direction)
expanded_landmarks[idx] += direction * toplip_extension
# Extend the bottom part (chin area)
        chin_indices = [11, 12, 13, 14, 15, 16]  # Indices for landmarks 21, 22, 23, 24, 0, 8
chin_extension = 2 * 0.2 # Adjust this factor to control the extension
for idx in chin_indices:
expanded_landmarks[idx][1] += (
expanded_landmarks[idx][1] - center[1]
) * chin_extension
# Convert back to integer coordinates
expanded_landmarks = expanded_landmarks.astype(np.int32)
# Calculate bounding box for the expanded lower mouth
min_x, min_y = np.min(expanded_landmarks, axis=0)
max_x, max_y = np.max(expanded_landmarks, axis=0)
# Add some padding to the bounding box
padding = int((max_x - min_x) * 0.1) # 10% padding
min_x = max(0, min_x - padding)
min_y = max(0, min_y - padding)
max_x = min(frame.shape[1], max_x + padding)
max_y = min(frame.shape[0], max_y + padding)
# Ensure the bounding box dimensions are valid
if max_x <= min_x or max_y <= min_y:
if (max_x - min_x) <= 1:
max_x = min_x + 1
if (max_y - min_y) <= 1:
max_y = min_y + 1
# Create the mask
mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
cv2.fillPoly(mask_roi, [expanded_landmarks - [min_x, min_y]], 255)
# Apply Gaussian blur to soften the mask edges
mask_roi = cv2.GaussianBlur(mask_roi, (15, 15), 5)
# Place the mask ROI in the full-sized mask
mask[min_y:max_y, min_x:max_x] = mask_roi
# Extract the masked area from the frame
mouth_cutout = frame[min_y:max_y, min_x:max_x].copy()
# Return the expanded lower lip polygon in original frame coordinates
lower_lip_polygon = expanded_landmarks
return mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon
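How the four returned values are meant to be consumed (assumed wiring via `apply_mask_area`, defined further down): preserve the speaker's real mouth by cutting it from the original frame and pasting it over the swap.

```python
def mask_mouth_back(face, original_frame, swapped_frame):
    # Assumed glue code, not shown verbatim in this file.
    mask, cutout, box, polygon = create_lower_mouth_mask(face, original_frame)
    return apply_mask_area(swapped_frame, cutout, box, mask, polygon)
```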
def create_eyes_mask(face: Face, frame: Frame) -> (np.ndarray, np.ndarray, tuple, np.ndarray):
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
eyes_cutout = None
landmarks = face.landmark_2d_106
if landmarks is not None:
# Left eye landmarks (87-96) and right eye landmarks (33-42)
left_eye = landmarks[87:96]
right_eye = landmarks[33:42]
# Calculate centers and dimensions for each eye
left_eye_center = np.mean(left_eye, axis=0).astype(np.int32)
right_eye_center = np.mean(right_eye, axis=0).astype(np.int32)
# Calculate eye dimensions with size adjustment
def get_eye_dimensions(eye_points):
x_coords = eye_points[:, 0]
y_coords = eye_points[:, 1]
width = int((np.max(x_coords) - np.min(x_coords)) * (1 + modules.globals.mask_down_size * modules.globals.eyes_mask_size))
height = int((np.max(y_coords) - np.min(y_coords)) * (1 + modules.globals.mask_down_size * modules.globals.eyes_mask_size))
return width, height
left_width, left_height = get_eye_dimensions(left_eye)
right_width, right_height = get_eye_dimensions(right_eye)
# Add extra padding
padding = int(max(left_width, right_width) * 0.2)
# Calculate bounding box for both eyes
min_x = min(left_eye_center[0] - left_width//2, right_eye_center[0] - right_width//2) - padding
max_x = max(left_eye_center[0] + left_width//2, right_eye_center[0] + right_width//2) + padding
min_y = min(left_eye_center[1] - left_height//2, right_eye_center[1] - right_height//2) - padding
max_y = max(left_eye_center[1] + left_height//2, right_eye_center[1] + right_height//2) + padding
# Ensure coordinates are within frame bounds
min_x = max(0, min_x)
min_y = max(0, min_y)
max_x = min(frame.shape[1], max_x)
max_y = min(frame.shape[0], max_y)
# Create mask for the eyes region
mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
# Draw ellipses for both eyes
left_center = (left_eye_center[0] - min_x, left_eye_center[1] - min_y)
right_center = (right_eye_center[0] - min_x, right_eye_center[1] - min_y)
# Calculate axes lengths (half of width and height)
left_axes = (left_width//2, left_height//2)
right_axes = (right_width//2, right_height//2)
# Draw filled ellipses
cv2.ellipse(mask_roi, left_center, left_axes, 0, 0, 360, 255, -1)
cv2.ellipse(mask_roi, right_center, right_axes, 0, 0, 360, 255, -1)
# Apply Gaussian blur to soften mask edges
mask_roi = cv2.GaussianBlur(mask_roi, (15, 15), 5)
# Place the mask ROI in the full-sized mask
mask[min_y:max_y, min_x:max_x] = mask_roi
# Extract the masked area from the frame
eyes_cutout = frame[min_y:max_y, min_x:max_x].copy()
# Create polygon points for visualization
def create_ellipse_points(center, axes):
t = np.linspace(0, 2*np.pi, 32)
x = center[0] + axes[0] * np.cos(t)
y = center[1] + axes[1] * np.sin(t)
return np.column_stack((x, y)).astype(np.int32)
# Generate points for both ellipses
left_points = create_ellipse_points((left_eye_center[0], left_eye_center[1]), (left_width//2, left_height//2))
right_points = create_ellipse_points((right_eye_center[0], right_eye_center[1]), (right_width//2, right_height//2))
# Combine points for both eyes
eyes_polygon = np.vstack([left_points, right_points])
return mask, eyes_cutout, (min_x, min_y, max_x, max_y), eyes_polygon
def create_curved_eyebrow(points):
if len(points) >= 5:
# Sort points by x-coordinate
sorted_idx = np.argsort(points[:, 0])
sorted_points = points[sorted_idx]
# Calculate dimensions
x_min, y_min = np.min(sorted_points, axis=0)
x_max, y_max = np.max(sorted_points, axis=0)
width = x_max - x_min
height = y_max - y_min
# Create more points for smoother curve
num_points = 50
x = np.linspace(x_min, x_max, num_points)
# Fit quadratic curve through points for more natural arch
coeffs = np.polyfit(sorted_points[:, 0], sorted_points[:, 1], 2)
y = np.polyval(coeffs, x)
# Increased offsets to create more separation
top_offset = height * 0.5 # Increased from 0.3 to shift up more
bottom_offset = height * 0.2 # Increased from 0.1 to shift down more
# Create smooth curves
top_curve = y - top_offset
bottom_curve = y + bottom_offset
# Create curved endpoints with more pronounced taper
end_points = 5
start_x = np.linspace(x[0] - width * 0.15, x[0], end_points) # Increased taper
end_x = np.linspace(x[-1], x[-1] + width * 0.15, end_points) # Increased taper
# Create tapered ends
start_curve = np.column_stack((
start_x,
np.linspace(bottom_curve[0], top_curve[0], end_points)
))
end_curve = np.column_stack((
end_x,
np.linspace(bottom_curve[-1], top_curve[-1], end_points)
))
# Combine all points to form a smooth contour
contour_points = np.vstack([
start_curve,
np.column_stack((x, top_curve)),
end_curve,
np.column_stack((x[::-1], bottom_curve[::-1]))
])
# Add slight padding for better coverage
center = np.mean(contour_points, axis=0)
vectors = contour_points - center
padded_points = center + vectors * 1.2 # Increased padding slightly
return padded_points
return points
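The helper above hinges on `np.polyfit`/`np.polyval` to turn a handful of eyebrow landmarks into a smooth arch; a tiny standalone illustration of that step (made-up points):

```python
import numpy as np

pts = np.array([[0, 5], [10, 2], [20, 0], [30, 2], [40, 5]], dtype=np.float32)
coeffs = np.polyfit(pts[:, 0], pts[:, 1], 2)  # fit y = ax^2 + bx + c
xs = np.linspace(0, 40, 50)
ys = np.polyval(coeffs, xs)                   # dense, smooth arch through the points
```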
def create_eyebrows_mask(face: Face, frame: Frame) -> (np.ndarray, np.ndarray, tuple, np.ndarray):
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
eyebrows_cutout = None
landmarks = face.landmark_2d_106
if landmarks is not None:
# Left eyebrow landmarks (97-105) and right eyebrow landmarks (43-51)
left_eyebrow = landmarks[97:105].astype(np.float32)
right_eyebrow = landmarks[43:51].astype(np.float32)
# Calculate centers and dimensions for each eyebrow
left_center = np.mean(left_eyebrow, axis=0)
right_center = np.mean(right_eyebrow, axis=0)
# Calculate bounding box with padding adjusted by size
all_points = np.vstack([left_eyebrow, right_eyebrow])
padding_factor = modules.globals.eyebrows_mask_size
min_x = np.min(all_points[:, 0]) - 25 * padding_factor
max_x = np.max(all_points[:, 0]) + 25 * padding_factor
min_y = np.min(all_points[:, 1]) - 20 * padding_factor
max_y = np.max(all_points[:, 1]) + 15 * padding_factor
# Ensure coordinates are within frame bounds
min_x = max(0, int(min_x))
min_y = max(0, int(min_y))
max_x = min(frame.shape[1], int(max_x))
max_y = min(frame.shape[0], int(max_y))
# Create mask for the eyebrows region
mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
try:
# Convert points to local coordinates
left_local = left_eyebrow - [min_x, min_y]
right_local = right_eyebrow - [min_x, min_y]
            # Generate and draw eyebrow shapes
            left_shape = create_curved_eyebrow(left_local)
            right_shape = create_curved_eyebrow(right_local)

            # Fill the curved shapes into the ROI mask; without this the
            # blurred mask below would stay empty
            cv2.fillPoly(mask_roi, [left_shape.astype(np.int32)], 255)
            cv2.fillPoly(mask_roi, [right_shape.astype(np.int32)], 255)

            # Apply multi-stage blurring for natural feathering
            # First, a strong Gaussian blur for initial softening
            mask_roi = cv2.GaussianBlur(mask_roi, (21, 21), 7)
            # Second, a medium blur for transition areas
            mask_roi = cv2.GaussianBlur(mask_roi, (11, 11), 3)
            # Finally, a light blur for fine details
            mask_roi = cv2.GaussianBlur(mask_roi, (5, 5), 1)

            # Normalize mask values
            mask_roi = cv2.normalize(mask_roi, None, 0, 255, cv2.NORM_MINMAX)

            # Place the mask ROI in the full-sized mask
            mask[min_y:max_y, min_x:max_x] = mask_roi

            # Extract the masked area from the frame
            eyebrows_cutout = frame[min_y:max_y, min_x:max_x].copy()

            # Combine points for visualization
            eyebrows_polygon = np.vstack([
                left_shape + [min_x, min_y],
                right_shape + [min_x, min_y]
            ]).astype(np.int32)
        except Exception:
            # Fall back to simple polygons if curve fitting fails
            left_local = left_eyebrow - [min_x, min_y]
            right_local = right_eyebrow - [min_x, min_y]
            cv2.fillPoly(mask_roi, [left_local.astype(np.int32)], 255)
            cv2.fillPoly(mask_roi, [right_local.astype(np.int32)], 255)
            mask_roi = cv2.GaussianBlur(mask_roi, (21, 21), 7)
            mask[min_y:max_y, min_x:max_x] = mask_roi
            eyebrows_cutout = frame[min_y:max_y, min_x:max_x].copy()
            eyebrows_polygon = np.vstack([left_eyebrow, right_eyebrow]).astype(np.int32)
    return mask, eyebrows_cutout, (min_x, min_y, max_x, max_y), eyebrows_polygon
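
The curved-eyebrow helper above hinges on one idea: fit a quadratic y = a*x^2 + b*x + c through the landmark points with np.polyfit, then resample it densely before offsetting. A standalone sketch of that step, using made-up landmark coordinates rather than the project's data:

import numpy as np

# Hypothetical eyebrow landmarks (x, y); y grows downward in image coordinates.
pts = np.array([[10, 30], [20, 24], [30, 22], [40, 25], [50, 31]], dtype=np.float32)

# Fit y = a*x^2 + b*x + c through the landmarks, then resample 50 points.
coeffs = np.polyfit(pts[:, 0], pts[:, 1], 2)
x = np.linspace(pts[:, 0].min(), pts[:, 0].max(), 50)
y = np.polyval(coeffs, x)  # dense trace of the arch, ready for offsetting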
def apply_mask_area(
    frame: np.ndarray,
    cutout: np.ndarray,
    box: tuple,
    face_mask: np.ndarray,
    polygon: np.ndarray,
) -> np.ndarray:
    min_x, min_y, max_x, max_y = box
    box_width = max_x - min_x
    box_height = max_y - min_y

    # Nothing to blend if any input is missing or the box is degenerate
    if (
        cutout is None
        or face_mask is None
        or polygon is None
        or box_width <= 0
        or box_height <= 0
    ):
        return frame

    try:
        resized_cutout = cv2.resize(cutout, (box_width, box_height))
        roi = frame[min_y:max_y, min_x:max_x]
        if roi.shape != resized_cutout.shape:
            resized_cutout = cv2.resize(
                resized_cutout, (roi.shape[1], roi.shape[0])
            )
        color_corrected_area = apply_color_transfer(resized_cutout, roi)

        # Create mask for the area
        polygon_mask = np.zeros(roi.shape[:2], dtype=np.uint8)

        # Split points into left and right parts if needed
        if len(polygon) > 50:  # Heuristic threshold to detect multiple parts
            mid_point = len(polygon) // 2
            left_points = polygon[:mid_point] - [min_x, min_y]
            right_points = polygon[mid_point:] - [min_x, min_y]
            cv2.fillPoly(polygon_mask, [left_points], 255)
            cv2.fillPoly(polygon_mask, [right_points], 255)
        else:
            adjusted_polygon = polygon - [min_x, min_y]
            cv2.fillPoly(polygon_mask, [adjusted_polygon], 255)

        # Apply strong initial feathering
        polygon_mask = cv2.GaussianBlur(polygon_mask, (21, 21), 7)

        # Apply additional feathering
        feather_amount = min(
            30,
            box_width // modules.globals.mask_feather_ratio,
            box_height // modules.globals.mask_feather_ratio,
        )
        feathered_mask = cv2.GaussianBlur(
            polygon_mask.astype(float), (0, 0), feather_amount
        )
        feathered_mask = feathered_mask / feathered_mask.max()

        # Apply additional smoothing to the mask edges
        feathered_mask = cv2.GaussianBlur(feathered_mask, (5, 5), 1)

        face_mask_roi = face_mask[min_y:max_y, min_x:max_x]
        combined_mask = feathered_mask * (face_mask_roi / 255.0)
        combined_mask = combined_mask[:, :, np.newaxis]
        blended = (
            color_corrected_area * combined_mask + roi * (1 - combined_mask)
        ).astype(np.uint8)

        # Apply face mask to blended result
        face_mask_3channel = (
            np.repeat(face_mask_roi[:, :, np.newaxis], 3, axis=2) / 255.0
        )
        final_blend = blended * face_mask_3channel + roi * (1 - face_mask_3channel)

        frame[min_y:max_y, min_x:max_x] = final_blend.astype(np.uint8)
    except Exception:
        # On any failure, leave the frame unchanged
        pass

    return frame
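
The core of apply_mask_area is a per-pixel alpha blend: the feathered polygon mask multiplied by the face mask acts as alpha between the color-corrected cutout and the original ROI. A toy numeric sketch of that blend step (illustrative arrays, not the project's data):

import numpy as np

fg = np.full((2, 2, 3), 200, dtype=np.float64)  # stands in for the color-corrected cutout
bg = np.full((2, 2, 3), 50, dtype=np.float64)   # stands in for the original ROI
alpha = np.array([[1.0, 0.5], [0.25, 0.0]])[:, :, np.newaxis]  # combined mask in [0, 1]

blended = (fg * alpha + bg * (1 - alpha)).astype(np.uint8)
print(blended[:, :, 0])  # [[200 125] [ 87  50]] -> full swap, half, quarter, untouched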
def draw_mask_visualization(
    frame: Frame,
    mask_data: tuple,
    label: str,
    draw_method: str = "polygon",
) -> Frame:
    mask, cutout, (min_x, min_y, max_x, max_y), polygon = mask_data
    vis_frame = frame.copy()

    # Ensure coordinates are within frame bounds
    height, width = vis_frame.shape[:2]
    min_x, min_y = max(0, min_x), max(0, min_y)
    max_x, max_y = min(width, max_x), min(height, max_y)

    if draw_method == "ellipse" and len(polygon) > 50:  # For eyes
        # Split points into left and right parts
        mid_point = len(polygon) // 2
        left_points = polygon[:mid_point]
        right_points = polygon[mid_point:]
        try:
            # Fitting an ellipse requires at least 5 points
            if len(left_points) >= 5 and len(right_points) >= 5:
                # Convert points to the format cv2.fitEllipse expects
                left_points = left_points.astype(np.float32)
                right_points = right_points.astype(np.float32)

                # Fit and draw the ellipses
                left_ellipse = cv2.fitEllipse(left_points)
                right_ellipse = cv2.fitEllipse(right_points)
                cv2.ellipse(vis_frame, left_ellipse, (0, 255, 0), 2)
                cv2.ellipse(vis_frame, right_ellipse, (0, 255, 0), 2)
        except Exception:
            # If ellipse fitting fails, draw simple bounding rectangles instead
            left_rect = cv2.boundingRect(left_points)
            right_rect = cv2.boundingRect(right_points)
            cv2.rectangle(
                vis_frame,
                (left_rect[0], left_rect[1]),
                (left_rect[0] + left_rect[2], left_rect[1] + left_rect[3]),
                (0, 255, 0),
                2,
            )
            cv2.rectangle(
                vis_frame,
                (right_rect[0], right_rect[1]),
                (right_rect[0] + right_rect[2], right_rect[1] + right_rect[3]),
                (0, 255, 0),
                2,
            )
    else:  # For mouth and eyebrows
        # Draw the polygon outline(s)
        if len(polygon) > 50:  # Two parts (left and right)
            mid_point = len(polygon) // 2
            left_points = polygon[:mid_point]
            right_points = polygon[mid_point:]
            cv2.polylines(vis_frame, [left_points], True, (0, 255, 0), 2, cv2.LINE_AA)
            cv2.polylines(vis_frame, [right_points], True, (0, 255, 0), 2, cv2.LINE_AA)
        else:
            cv2.polylines(vis_frame, [polygon], True, (0, 255, 0), 2, cv2.LINE_AA)

    # Add label
    cv2.putText(
        vis_frame,
        label,
        (min_x, min_y - 10),
        cv2.FONT_HERSHEY_SIMPLEX,
        0.5,
        (255, 255, 255),
        1,
    )
    return vis_frame
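
Taken together, the three helpers follow one pattern: build a feathered mask plus a cutout, blend the cutout back after the swap, and optionally outline the region for debugging. A hedged wiring sketch; face, frame, and face_mask stand in for objects produced elsewhere in this module:

# Hypothetical usage; `face` is an insightface Face with landmark_2d_106 set,
# `frame` a BGR np.ndarray, `face_mask` a 0-255 single-channel face-region mask.
mask_data = create_eyebrows_mask(face, frame)
mask, cutout, box, polygon = mask_data

# Blend the original eyebrows back over the swapped frame.
frame = apply_mask_area(frame, cutout, box, face_mask, polygon)

# Optionally outline the masked region for debugging.
frame = draw_mask_visualization(frame, mask_data, "Eyebrows")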

File diff suppressed because it is too large

9  modules/run.py  Normal file

@@ -0,0 +1,9 @@
+#!/usr/bin/env python3
+
+# Import the tkinter fix to patch the ScreenChanged error
+import tkinter_fix
+
+import core
+
+if __name__ == '__main__':
+    core.run()

26  modules/tkinter_fix.py  Normal file

@@ -0,0 +1,26 @@
+import tkinter
+
+# Only needs to be imported once at the beginning of the application
+def apply_patch():
+    # Create a monkey patch for the internal _tkinter module
+    original_init = tkinter.Tk.__init__
+
+    def patched_init(self, *args, **kwargs):
+        # Call the original init
+        original_init(self, *args, **kwargs)
+        # Define the missing ::tk::ScreenChanged procedure
+        self.tk.eval("""
+            if {[info commands ::tk::ScreenChanged] == ""} {
+                proc ::tk::ScreenChanged {args} {
+                    # Do nothing
+                    return
+                }
+            }
+        """)
+
+    # Apply the monkey patch
+    tkinter.Tk.__init__ = patched_init
+
+
+# Apply the patch automatically when this module is imported
+apply_patch()
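
Because apply_patch() runs at import time, the fix only takes effect if the module is imported before the first Tk window is constructed — which is why run.py imports it first (the repo ships an identical root-level copy of this file). A minimal sketch of the required ordering:

import tkinter_fix  # must come first: patches tkinter.Tk.__init__ on import
import tkinter

root = tkinter.Tk()  # ::tk::ScreenChanged now exists as a no-op if it was missing
root.destroy()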

modules/ui.py

@@ -27,6 +27,7 @@ from modules.utilities import (
 )
 from modules.video_capture import VideoCapturer
 from modules.gettext import LanguageManager
+from modules import globals
 import platform

 if platform.system() == "Windows":

@@ -35,7 +36,7 @@ if platform.system() == "Windows":
 ROOT = None
 POPUP = None
 POPUP_LIVE = None
-ROOT_HEIGHT = 700
+ROOT_HEIGHT = 750
 ROOT_WIDTH = 600
 PREVIEW = None
@@ -152,20 +153,20 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
     root.protocol("WM_DELETE_WINDOW", lambda: destroy())

     source_label = ctk.CTkLabel(root, text=None)
-    source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25)
+    source_label.place(relx=0.1, rely=0.05, relwidth=0.275, relheight=0.225)

     target_label = ctk.CTkLabel(root, text=None)
-    target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)
+    target_label.place(relx=0.6, rely=0.05, relwidth=0.275, relheight=0.225)

     select_face_button = ctk.CTkButton(
         root, text=_("Select a face"), cursor="hand2", command=lambda: select_source_path()
     )
-    select_face_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
+    select_face_button.place(relx=0.1, rely=0.30, relwidth=0.3, relheight=0.1)

     swap_faces_button = ctk.CTkButton(
         root, text="", cursor="hand2", command=lambda: swap_faces_paths()
     )
-    swap_faces_button.place(relx=0.45, rely=0.4, relwidth=0.1, relheight=0.1)
+    swap_faces_button.place(relx=0.45, rely=0.30, relwidth=0.1, relheight=0.1)

     select_target_button = ctk.CTkButton(
         root,
@@ -173,7 +174,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
         cursor="hand2",
         command=lambda: select_target_path(),
     )
-    select_target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
+    select_target_button.place(relx=0.6, rely=0.30, relwidth=0.3, relheight=0.1)
     keep_fps_value = ctk.BooleanVar(value=modules.globals.keep_fps)
     keep_fps_checkbox = ctk.CTkSwitch(
@@ -186,7 +187,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    keep_fps_checkbox.place(relx=0.1, rely=0.6)
+    keep_fps_checkbox.place(relx=0.1, rely=0.5)

     keep_frames_value = ctk.BooleanVar(value=modules.globals.keep_frames)
     keep_frames_switch = ctk.CTkSwitch(
@@ -199,7 +200,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    keep_frames_switch.place(relx=0.1, rely=0.65)
+    keep_frames_switch.place(relx=0.1, rely=0.55)

     enhancer_value = ctk.BooleanVar(value=modules.globals.fp_ui["face_enhancer"])
     enhancer_switch = ctk.CTkSwitch(
@@ -212,7 +213,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    enhancer_switch.place(relx=0.1, rely=0.7)
+    enhancer_switch.place(relx=0.1, rely=0.6)

     keep_audio_value = ctk.BooleanVar(value=modules.globals.keep_audio)
     keep_audio_switch = ctk.CTkSwitch(
@@ -225,7 +226,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    keep_audio_switch.place(relx=0.6, rely=0.6)
+    keep_audio_switch.place(relx=0.6, rely=0.5)

     many_faces_value = ctk.BooleanVar(value=modules.globals.many_faces)
     many_faces_switch = ctk.CTkSwitch(
@@ -238,7 +239,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    many_faces_switch.place(relx=0.6, rely=0.65)
+    many_faces_switch.place(relx=0.6, rely=0.55)

     color_correction_value = ctk.BooleanVar(value=modules.globals.color_correction)
     color_correction_switch = ctk.CTkSwitch(
@@ -251,7 +252,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    color_correction_switch.place(relx=0.6, rely=0.70)
+    color_correction_switch.place(relx=0.6, rely=0.6)

     # nsfw_value = ctk.BooleanVar(value=modules.globals.nsfw_filter)
     # nsfw_switch = ctk.CTkSwitch(root, text='NSFW filter', variable=nsfw_value, cursor='hand2', command=lambda: setattr(modules.globals, 'nsfw_filter', nsfw_value.get()))
@@ -269,7 +270,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             close_mapper_window() if not map_faces.get() else None
         ),
     )
-    map_faces_switch.place(relx=0.1, rely=0.75)
+    map_faces_switch.place(relx=0.1, rely=0.65)

     show_fps_value = ctk.BooleanVar(value=modules.globals.show_fps)
     show_fps_switch = ctk.CTkSwitch(
@@ -282,7 +283,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             save_switch_states(),
         ),
     )
-    show_fps_switch.place(relx=0.6, rely=0.75)
+    show_fps_switch.place(relx=0.6, rely=0.65)

     mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
     mouth_mask_switch = ctk.CTkSwitch(
@@ -292,7 +293,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
         cursor="hand2",
         command=lambda: setattr(modules.globals, "mouth_mask", mouth_mask_var.get()),
     )
-    mouth_mask_switch.place(relx=0.1, rely=0.55)
+    mouth_mask_switch.place(relx=0.1, rely=0.45)

     show_mouth_mask_box_var = ctk.BooleanVar(value=modules.globals.show_mouth_mask_box)
     show_mouth_mask_box_switch = ctk.CTkSwitch(
@@ -304,7 +305,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
             modules.globals, "show_mouth_mask_box", show_mouth_mask_box_var.get()
         ),
     )
-    show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)
+    show_mouth_mask_box_switch.place(relx=0.6, rely=0.45)
     start_button = ctk.CTkButton(
         root, text=_("Start"), cursor="hand2", command=lambda: analyze_target(start, root)
@@ -365,6 +366,72 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
     live_button.place(relx=0.65, rely=0.86, relwidth=0.2, relheight=0.05)
     # --- End Camera Selection ---

+    # 1) Define a DoubleVar for transparency (0 = fully transparent, 1 = fully opaque)
+    transparency_var = ctk.DoubleVar(value=1.0)
+
+    def on_transparency_change(value: float):
+        # Convert slider value to float
+        val = float(value)
+        modules.globals.opacity = val  # Set global opacity
+        percentage = int(val * 100)
+        if percentage == 0:
+            modules.globals.fp_ui["face_enhancer"] = False
+            update_status("Transparency set to 0% - Face swapping disabled.")
+        elif percentage == 100:
+            modules.globals.face_swapper_enabled = True
+            update_status("Transparency set to 100%.")
+        else:
+            modules.globals.face_swapper_enabled = True
+            update_status(f"Transparency set to {percentage}%")
+
+    # 2) Transparency label and slider (placed ABOVE sharpness)
+    transparency_label = ctk.CTkLabel(root, text="Transparency:")
+    transparency_label.place(relx=0.15, rely=0.69, relwidth=0.2, relheight=0.05)
+    transparency_slider = ctk.CTkSlider(
+        root,
+        from_=0.0,
+        to=1.0,
+        variable=transparency_var,
+        command=on_transparency_change,
+        fg_color="#E0E0E0",
+        progress_color="#007BFF",
+        button_color="#FFFFFF",
+        button_hover_color="#CCCCCC",
+        height=5,
+        border_width=1,
+        corner_radius=3,
+    )
+    transparency_slider.place(relx=0.35, rely=0.71, relwidth=0.5, relheight=0.02)
+
+    # 3) Sharpness label & slider
+    sharpness_var = ctk.DoubleVar(value=0.0)  # start at 0.0
+
+    def on_sharpness_change(value: float):
+        modules.globals.sharpness = float(value)
+        update_status(f"Sharpness set to {value:.1f}")
+
+    sharpness_label = ctk.CTkLabel(root, text="Sharpness:")
+    sharpness_label.place(relx=0.15, rely=0.74, relwidth=0.2, relheight=0.05)
+    sharpness_slider = ctk.CTkSlider(
+        root,
+        from_=0.0,
+        to=5.0,
+        variable=sharpness_var,
+        command=on_sharpness_change,
+        fg_color="#E0E0E0",
+        progress_color="#007BFF",
+        button_color="#FFFFFF",
+        button_hover_color="#CCCCCC",
+        height=5,
+        border_width=1,
+        corner_radius=3,
+    )
+    sharpness_slider.place(relx=0.35, rely=0.76, relwidth=0.5, relheight=0.02)
+
     # Status and link at the bottom
     global status_label
     status_label = ctk.CTkLabel(root, text=None, justify="center")
     status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
@@ -381,6 +448,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C

     return root

 def close_mapper_window():
     global POPUP, POPUP_LIVE
     if POPUP and POPUP.winfo_exists():
@@ -397,7 +465,7 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
         return

     if modules.globals.map_faces:
-        modules.globals.source_target_map = []
+        modules.globals.souce_target_map = []

         if is_image(modules.globals.target_path):
             update_status("Getting unique faces")
@@ -406,8 +474,8 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
             update_status("Getting unique faces")
             get_unique_faces_from_target_video()

-        if len(modules.globals.source_target_map) > 0:
-            create_source_target_popup(start, root, modules.globals.source_target_map)
+        if len(modules.globals.souce_target_map) > 0:
+            create_source_target_popup(start, root, modules.globals.souce_target_map)
         else:
             update_status("No faces found in target")
     else:
@@ -696,21 +764,17 @@ def check_and_ignore_nsfw(target, destroy: Callable = None) -> bool:

 def fit_image_to_size(image, width: int, height: int):
-    if width is None or height is None or width <= 0 or height <= 0:
+    if width is None and height is None:
         return image
     h, w, _ = image.shape
     ratio_h = 0.0
     ratio_w = 0.0
-    ratio_w = width / w
-    ratio_h = height / h
-    # Use the smaller ratio to ensure the image fits within the given dimensions
-    ratio = min(ratio_w, ratio_h)
-    # Compute new dimensions, ensuring they're at least 1 pixel
-    new_width = max(1, int(ratio * w))
-    new_height = max(1, int(ratio * h))
-    new_size = (new_width, new_height)
+    if width > height:
+        ratio_h = height / h
+    else:
+        ratio_w = width / w
+    ratio = max(ratio_w, ratio_h)
+    new_size = (int(ratio * w), int(ratio * h))
     return cv2.resize(image, dsize=new_size)
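
The two versions of fit_image_to_size differ in behavior, not just style: min(ratio_w, ratio_h) guarantees the result fits inside the width x height box, while the max(...) variant scales by the larger ratio and can overflow one dimension. A quick numeric check with toy sizes:

# Fitting a 1920x1080 image into a 600x700 (w x h) box.
w, h = 1920, 1080
ratio_w, ratio_h = 600 / w, 700 / h  # 0.3125, ~0.648
print(min(ratio_w, ratio_h) * w, min(ratio_w, ratio_h) * h)  # 600.0 337.5  -> fits
print(max(ratio_w, ratio_h) * w, max(ratio_w, ratio_h) * h)  # ~1244.4 700.0 -> overflows width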
@@ -791,9 +855,9 @@ def webcam_preview(root: ctk.CTk, camera_index: int):
             return
         create_webcam_preview(camera_index)
     else:
-        modules.globals.source_target_map = []
+        modules.globals.souce_target_map = []
         create_source_target_popup_for_webcam(
-            root, modules.globals.source_target_map, camera_index
+            root, modules.globals.souce_target_map, camera_index
         )
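
The transparency slider added above only writes modules.globals.opacity; the actual blend presumably happens in the frame processor. A hypothetical sketch of how such an opacity value is typically applied with OpenCV (the function and variable names here are illustrative, not the project's):

import cv2

def apply_opacity(swapped_frame, original_frame, opacity: float):
    # opacity = 1.0 -> fully swapped result, 0.0 -> original frame untouched
    return cv2.addWeighted(swapped_frame, opacity, original_frame, 1.0 - opacity, 0)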

requirements.txt

@@ -1,25 +1,23 @@
---extra-index-url https://download.pytorch.org/whl/cu118
+--extra-index-url https://download.pytorch.org/whl/cu128
 numpy>=1.23.5,<2
 typing-extensions>=4.8.0
 opencv-python==4.10.0.84
 cv2_enumerate_cameras==1.1.15
-onnx==1.16.0
+onnx==1.18.0
 insightface==0.7.3
 psutil==5.9.8
 tk==0.1.0
 customtkinter==5.2.2
 pillow==11.1.0
-torch==2.5.1+cu118; sys_platform != 'darwin'
-torch==2.5.1; sys_platform == 'darwin'
-torchvision==0.20.1; sys_platform != 'darwin'
+torch; sys_platform != 'darwin'
+torch==2.7.1+cu128; sys_platform == 'darwin'
+torchvision; sys_platform != 'darwin'
 torchvision==0.20.1; sys_platform == 'darwin'
 onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
-onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
+onnxruntime-gpu==1.22.0; sys_platform != 'darwin'
 tensorflow; sys_platform != 'darwin'
 opennsfw2==0.10.2
-protobuf==4.23.2
-tqdm==4.66.4
-gfpgan==1.3.8
 tkinterdnd2==0.4.2
 pygrabber==0.2
+protobuf==4.25.1
+git+https://github.com/xinntao/BasicSR.git@master
+git+https://github.com/TencentARC/GFPGAN.git@master
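
After installing from this file, a quick way to confirm the cu128 wheels landed correctly is to query torch itself. A minimal sanity check (the expected strings assume the pins above):

import torch

print(torch.__version__)          # e.g. "2.7.1+cu128" on a CUDA build
print(torch.cuda.is_available())  # True when the CUDA 12.8 runtime matches the driver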

3  run.py

@@ -1,5 +1,8 @@
 #!/usr/bin/env python3
+# Import the tkinter fix to patch the ScreenChanged error
+import tkinter_fix
 from modules import core

 if __name__ == '__main__':

26  tkinter_fix.py  Normal file

@@ -0,0 +1,26 @@
+import tkinter
+
+# Only needs to be imported once at the beginning of the application
+def apply_patch():
+    # Create a monkey patch for the internal _tkinter module
+    original_init = tkinter.Tk.__init__
+
+    def patched_init(self, *args, **kwargs):
+        # Call the original init
+        original_init(self, *args, **kwargs)
+        # Define the missing ::tk::ScreenChanged procedure
+        self.tk.eval("""
+            if {[info commands ::tk::ScreenChanged] == ""} {
+                proc ::tk::ScreenChanged {args} {
+                    # Do nothing
+                    return
+                }
+            }
+        """)
+
+    # Apply the monkey patch
+    tkinter.Tk.__init__ = patched_init
+
+
+# Apply the patch automatically when this module is imported
+apply_patch()