Mirror of https://github.com/hacksider/Deep-Live-Cam.git (synced 2025-10-04 16:02:49 +08:00)

Compare commits: privacy...refactorin (38 commits)
Commit SHA1s:

1d41a20abf, df940ccc3d, 834f39ec0c, 56cddde87c, 0dbed2883a, 66248a37b4, aa9b7ed3b6, 51a4246050, 3f1c072fac, f91f9203e7, 80477676b4, c728994e6b, 65da3be2a4, 390b88216b, dabaa64695, 1fad1cd43a, 2f67e2f159, a3af249ea6, 5bc3ada632, 650e89eb21, 4d2aea37b7, 28c4b34db1, 49e8f78513, d753f5d4b0, 4fb69476d8, f3adfd194d, e5f04cf917, 67394a3157, 186d155e1b, 87081e78d0, f79373d4db, 513e413956, f82cebf86e, d45dedc9a6, 2d489b57ec, ccc04983cf, 2506c5a261, e862ff1456
.gitignore (vendored): 1 line added

```diff
@@ -25,3 +25,4 @@ models/DMDNet.pth
 faceswap/
 .vscode/
 switch_states.json
+venv.rar
```
README.md: 98 lines changed

````diff
@@ -36,12 +36,8 @@ Users are expected to use this software responsibly and legally. If using a real
 <a href="https://hacksider.gumroad.com/l/vccdmm"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" />
 
-##### This is the fastest build you can get if you have a discrete NVIDIA GPU.
-
-## Quick Start - Pre-built (Mac / Silicon)
-
 <a href="https://krshh.gumroad.com/l/Deep-Live-Cam-Mac"> <img src="https://github.com/user-attachments/assets/d5d913b5-a7de-4609-96b9-979a5749a703" width="285" height="77" />
 
-###### These Pre-builts are perfect for non-technical users or those who don’t have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
+###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
 
 ## TLDR; Live Deepfake in just 3 Clicks
 
 
@@ -49,7 +45,7 @@ Users are expected to use this software responsibly and legally. If using a real
 2. Select which camera to use
 3. Press live!
 
-## Features & Uses - Everything is real-time
+## Features & Uses - Everything is in real-time
 
 ### Mouth Mask
 
@@ -85,7 +81,7 @@ Users are expected to use this software responsibly and legally. If using a real
 ### Memes
 
-**Create Your most viral meme yet**
+**Create Your Most Viral Meme Yet**
 
 <p align="center">
   <img src="media/meme.gif" alt="show" width="450">
@@ -93,6 +89,13 @@ Users are expected to use this software responsibly and legally. If using a real
 <sub>Created using Many Faces feature in Deep-Live-Cam</sub>
 </p>
 
+### Omegle
+
+**Surprise people on Omegle**
+
+<p align="center">
+  <video src="https://github.com/user-attachments/assets/2e9b9b82-fa04-4b70-9f56-b1f68e7672d0" width="450" controls></video>
+</p>
 
 ## Installation (Manual)
 
@@ -116,7 +119,8 @@ This is more likely to work on your computer but will be slower as it utilizes t
 **2. Clone the Repository**
 
 ```bash
-https://github.com/hacksider/Deep-Live-Cam.git
+git clone https://github.com/hacksider/Deep-Live-Cam.git
+cd Deep-Live-Cam
 ```
 
 **3. Download the Models**
 
@@ -130,14 +134,44 @@ Place these files in the "**models**" folder.
 
 We highly recommend using a `venv` to avoid issues.
 
+For Windows:
 ```bash
 python -m venv venv
 venv\Scripts\activate
 pip install -r requirements.txt
 ```
 
-**For macOS:** Install or upgrade the `python-tk` package:
+**For macOS:**
+
+Apple Silicon (M1/M2/M3) requires specific setup:
+
 ```bash
+# Install Python 3.10 (specific version is important)
+brew install python@3.10
+
+# Install tkinter package (required for the GUI)
 brew install python-tk@3.10
+
+# Create and activate virtual environment with Python 3.10
+python3.10 -m venv venv
+source venv/bin/activate
+
+# Install dependencies
+pip install -r requirements.txt
 ```
+
+**In case something goes wrong and you need to reinstall the virtual environment**
+
+```bash
+# Remove the broken virtual environment
+rm -rf venv
+
+# Recreate the virtual environment
+python -m venv venv
+source venv/bin/activate
+
+# Install the dependencies again
+pip install -r requirements.txt
+```
 
 **Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
 
@@ -146,7 +180,7 @@ brew install python-tk@3.10
 
 **CUDA Execution Provider (Nvidia)**
 
-1. Install [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) or [CUDA Toolkit 12.1.1](https://developer.nvidia.com/cuda-12-1-1-download-archive)
+1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
 2. Install dependencies:
 
 ```bash
@@ -162,19 +196,39 @@ python run.py --execution-provider cuda
 
 **CoreML Execution Provider (Apple Silicon)**
 
-1. Install dependencies:
+Apple Silicon (M1/M2/M3) specific installation:
+
+1. Make sure you've completed the macOS setup above using Python 3.10.
+2. Install dependencies:
 
 ```bash
 pip uninstall onnxruntime onnxruntime-silicon
 pip install onnxruntime-silicon==1.13.1
 ```
 
-2. Usage:
+3. Usage (important: specify Python 3.10):
 
 ```bash
-python run.py --execution-provider coreml
+python3.10 run.py --execution-provider coreml
 ```
 
+**Important Notes for macOS:**
+- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
+- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
+- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
+- If you get model loading errors, check that your models are in the correct folder
+- If you encounter conflicts with other Python versions, consider uninstalling them:
+  ```bash
+  # List all installed Python versions
+  brew list | grep python
+
+  # Uninstall conflicting versions if needed
+  brew uninstall --ignore-dependencies python@3.11 python@3.13
+
+  # Keep only Python 3.10
+  brew cleanup
+  ```
+
 **CoreML Execution Provider (Apple Legacy)**
 
 1. Install dependencies:
 
@@ -219,7 +273,6 @@ pip install onnxruntime-openvino==1.15.0
 ```bash
 python run.py --execution-provider openvino
 ```
-
 </details>
 
 ## Usage
 
@@ -240,6 +293,19 @@ python run.py --execution-provider openvino
 - Use a screen capture tool like OBS to stream.
 - To change the face, select a new source image.
 
+## Tips and Tricks
+
+Check out these helpful guides to get the most out of Deep-Live-Cam:
+
+- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
+- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
+- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
+- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
+- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
+- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator
+
+Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.
+
 ## Command Line Arguments (Unmaintained)
 
 ```
@@ -312,6 +378,4 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
 <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
 <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
 </picture>
 </a>
-
-</a>
````
modules/core.py: 1034 lines changed (file diff suppressed because it is too large)
```diff
@@ -39,13 +39,13 @@ def get_many_faces(frame: Frame) -> Any:
     return None
 
 def has_valid_map() -> bool:
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
         if "source" in map and "target" in map:
             return True
     return False
 
 def default_source_face() -> Any:
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
         if "source" in map:
             return map['source']['face']
     return None
@@ -53,7 +53,7 @@ def default_source_face() -> Any:
 def simplify_maps() -> Any:
     centroids = []
     faces = []
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
         if "source" in map and "target" in map:
             centroids.append(map['target']['face'].normed_embedding)
             faces.append(map['source']['face'])
@@ -64,10 +64,10 @@ def simplify_maps() -> Any:
 def add_blank_map() -> Any:
     try:
         max_id = -1
-        if len(modules.globals.souce_target_map) > 0:
-            max_id = max(modules.globals.souce_target_map, key=lambda x: x['id'])['id']
+        if len(modules.globals.source_target_map) > 0:
+            max_id = max(modules.globals.source_target_map, key=lambda x: x['id'])['id']
 
-        modules.globals.souce_target_map.append({
+        modules.globals.source_target_map.append({
             'id' : max_id + 1
             })
     except ValueError:
@@ -75,14 +75,14 @@ def add_blank_map() -> Any:
 
 def get_unique_faces_from_target_image() -> Any:
     try:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
         target_frame = cv2.imread(modules.globals.target_path)
         many_faces = get_many_faces(target_frame)
         i = 0
 
         for face in many_faces:
             x_min, y_min, x_max, y_max = face['bbox']
-            modules.globals.souce_target_map.append({
+            modules.globals.source_target_map.append({
                 'id' : i,
                 'target' : {
                     'cv2' : target_frame[int(y_min):int(y_max), int(x_min):int(x_max)],
@@ -96,7 +96,7 @@ def get_unique_faces_from_target_image() -> Any:
 
 def get_unique_faces_from_target_video() -> Any:
     try:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
         frame_face_embeddings = []
         face_embeddings = []
 
@@ -127,7 +127,7 @@ def get_unique_faces_from_target_video() -> Any:
             face['target_centroid'] = closest_centroid_index
 
     for i in range(len(centroids)):
-        modules.globals.souce_target_map.append({
+        modules.globals.source_target_map.append({
             'id' : i
             })
 
@@ -135,7 +135,7 @@ def get_unique_faces_from_target_video() -> Any:
         for frame in tqdm(frame_face_embeddings, desc=f"Mapping frame embeddings to centroids-{i}"):
             temp.append({'frame': frame['frame'], 'faces': [face for face in frame['faces'] if face['target_centroid'] == i], 'location': frame['location']})
 
-        modules.globals.souce_target_map[i]['target_faces_in_frame'] = temp
+        modules.globals.source_target_map[i]['target_faces_in_frame'] = temp
 
     # dump_faces(centroids, frame_face_embeddings)
     default_target_face()
@@ -144,7 +144,7 @@ def get_unique_faces_from_target_video() -> Any:
 
 def default_target_face():
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
         best_face = None
         best_frame = None
         for frame in map['target_faces_in_frame']:
```
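The hunks above consistently rename `souce_target_map` to `source_target_map`. To make the rename easier to follow, here is a minimal, self-contained sketch of the structure those helpers assume, inferred from the keys used in the diff (`'id'`, `'source'`, `'target'`, each face nested under `'face'`); the string face values are stand-ins for real insightface objects:

```python
# Hypothetical sketch of source_target_map, inferred from the diff above.
# Each entry pairs one detected target face with an optional user-chosen source face.
source_target_map = [
    {"id": 0, "target": {"face": "target-face-0"}},  # no source assigned yet
    {"id": 1, "source": {"face": "source-face-1"}, "target": {"face": "target-face-1"}},
]

def has_valid_map(maps) -> bool:
    # Mirrors has_valid_map(): True once any entry has both a source and a target
    return any("source" in m and "target" in m for m in maps)

def default_source_face(maps):
    # Mirrors default_source_face(): first available source face, else None
    for m in maps:
        if "source" in m:
            return m["source"]["face"]
    return None

print(has_valid_map(source_target_map))        # → True
print(default_source_face(source_target_map))  # → source-face-1
```

The per-entry dicts explain why the real code guards every access with `"source" in map`: entries created by face analysis start with only `'id'` and `'target'` keys.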
```diff
@@ -9,7 +9,7 @@ file_types = [
     ("Video", ("*.mp4", "*.mkv")),
 ]
 
-souce_target_map = []
+source_target_map = []
 simple_map = {}
 
 source_path = None
```
```diff
@@ -4,6 +4,7 @@ import insightface
 import threading
 import numpy as np
 import modules.globals
+import logging
 import modules.processors.frame.core
 from modules.core import update_status
 from modules.face_analyser import get_one_face, get_many_faces, default_source_face
@@ -105,24 +106,30 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
     many_faces = get_many_faces(temp_frame)
     if many_faces:
         for target_face in many_faces:
-            temp_frame = swap_face(source_face, target_face, temp_frame)
+            if source_face and target_face:
+                temp_frame = swap_face(source_face, target_face, temp_frame)
+            else:
+                print("Face detection failed for target/source.")
     else:
         target_face = get_one_face(temp_frame)
-        if target_face:
+        if target_face and source_face:
             temp_frame = swap_face(source_face, target_face, temp_frame)
+        else:
+            logging.error("Face detection failed for target or source.")
     return temp_frame
 
 
 def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
     if is_image(modules.globals.target_path):
         if modules.globals.many_faces:
             source_face = default_source_face()
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                 target_face = map["target"]["face"]
                 temp_frame = swap_face(source_face, target_face, temp_frame)
 
         elif not modules.globals.many_faces:
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                 if "source" in map:
                     source_face = map["source"]["face"]
                     target_face = map["target"]["face"]
@@ -131,7 +138,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
     elif is_video(modules.globals.target_path):
         if modules.globals.many_faces:
             source_face = default_source_face()
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                 target_frame = [
                     f
                     for f in map["target_faces_in_frame"]
@@ -143,7 +150,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
                     temp_frame = swap_face(source_face, target_face, temp_frame)
 
         elif not modules.globals.many_faces:
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                 if "source" in map:
                     target_frame = [
                         f
```
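The `process_frame` change above adds None-guards so a failed face detection no longer reaches `swap_face`. The guard pattern in isolation looks like this; `swap_face` here is a stand-in stub that just records each swap, not the real inswapper call:

```python
import logging

def swap_face(source_face, target_face, frame):
    # Stand-in for the real inswapper call; records the swap instead of blending pixels.
    return frame + [(source_face, target_face)]

def process_frame(source_face, detected_faces, frame):
    # Mirrors the guarded loop from the diff: skip the swap whenever
    # detection returned a falsy (None) source or target face.
    for target_face in detected_faces:
        if source_face and target_face:
            frame = swap_face(source_face, target_face, frame)
        else:
            logging.error("Face detection failed for target or source.")
    return frame

# The None in the middle simulates one frame where detection failed.
out = process_frame("src", ["t1", None, "t2"], [])
print(out)  # → [('src', 't1'), ('src', 't2')]
```

Without the guard, passing `None` into the real swap call raises deep inside insightface; with it, a bad frame is logged and skipped while the stream keeps running.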
```diff
@@ -397,7 +397,7 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
         return
 
     if modules.globals.map_faces:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
 
         if is_image(modules.globals.target_path):
             update_status("Getting unique faces")
@@ -406,8 +406,8 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
             update_status("Getting unique faces")
             get_unique_faces_from_target_video()
 
-        if len(modules.globals.souce_target_map) > 0:
-            create_source_target_popup(start, root, modules.globals.souce_target_map)
+        if len(modules.globals.source_target_map) > 0:
+            create_source_target_popup(start, root, modules.globals.source_target_map)
         else:
             update_status("No faces found in target")
     else:
@@ -696,17 +696,21 @@ def check_and_ignore_nsfw(target, destroy: Callable = None) -> bool:
 
 
 def fit_image_to_size(image, width: int, height: int):
-    if width is None and height is None:
+    if width is None or height is None or width <= 0 or height <= 0:
         return image
     h, w, _ = image.shape
-    ratio_h = 0.0
-    ratio_w = 0.0
-    if width > height:
-        ratio_h = height / h
-    else:
-        ratio_w = width / w
-    ratio = max(ratio_w, ratio_h)
-    new_size = (int(ratio * w), int(ratio * h))
+    ratio_w = width / w
+    ratio_h = height / h
+    # Use the smaller ratio to ensure the image fits within the given dimensions
+    ratio = min(ratio_w, ratio_h)
+
+    # Compute new dimensions, ensuring they're at least 1 pixel
+    new_width = max(1, int(ratio * w))
+    new_height = max(1, int(ratio * h))
+    new_size = (new_width, new_height)
+
     return cv2.resize(image, dsize=new_size)
 
 
@@ -787,9 +791,9 @@ def webcam_preview(root: ctk.CTk, camera_index: int):
             return
         create_webcam_preview(camera_index)
     else:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
         create_source_target_popup_for_webcam(
-            root, modules.globals.souce_target_map, camera_index
+            root, modules.globals.source_target_map, camera_index
        )
 
 
@@ -1199,4 +1203,4 @@ def update_webcam_target(
         target_label_dict_live[button_num] = target_image
     else:
         update_pop_live_status("Face could not be detected in last upload!")
-        return map
+    return map
```
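The `fit_image_to_size` rewrite above switches from `max` to `min` of the two scaling ratios, which is what makes the result fit *inside* the bounding box instead of overflowing one dimension. The size computation can be checked on its own without OpenCV; this dependency-free sketch (function name `fit_size` is illustrative) reproduces just that arithmetic:

```python
def fit_size(w: int, h: int, width: int, height: int) -> tuple:
    """Compute the letterboxed target size for a w×h image, mirroring the fixed logic."""
    # Invalid bounds leave the size unchanged, as in the patched guard clause
    if width is None or height is None or width <= 0 or height <= 0:
        return (w, h)
    # The smaller ratio guarantees both dimensions end up within the box
    ratio = min(width / w, height / h)
    # Never collapse a dimension to zero pixels
    return (max(1, int(ratio * w)), max(1, int(ratio * h)))

print(fit_size(1920, 1080, 960, 960))  # → (960, 540)
```

With the old `max`-based logic the same inputs would scale by 960/1080 and produce a 1706-pixel-wide frame, overflowing the 960-pixel box; `min` keeps the aspect ratio while honoring both bounds.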
requirements.txt: 22 lines changed

```diff
@@ -1,21 +1,22 @@
---extra-index-url https://download.pytorch.org/whl/cu118
+--extra-index-url https://download.pytorch.org/whl/nightly/cu128
 
 numpy>=1.23.5,<2
-opencv-python==4.10.0.84
-cv2_enumerate_cameras==1.1.15
-onnx==1.16.0
+typing-extensions>=4.8.0
+opencv-python==4.11.0.86
+onnx==1.17.0
+cv2_enumerate_cameras==1.1.18.3
 insightface==0.7.3
 psutil==5.9.8
 tk==0.1.0
 customtkinter==5.2.2
 pillow==11.1.0
-torch==2.0.1+cu118; sys_platform != 'darwin'
-torch==2.0.1; sys_platform == 'darwin'
-torchvision==0.15.2+cu118; sys_platform != 'darwin'
-torchvision==0.15.2; sys_platform == 'darwin'
-onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
-onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
-tensorflow==2.12.1; sys_platform != 'darwin'
+torch; sys_platform != 'darwin'
+torch; sys_platform == 'darwin'
+torchvision; sys_platform != 'darwin'
+torchvision; sys_platform == 'darwin'
+onnxruntime-silicon==1.21; sys_platform == 'darwin' and platform_machine == 'arm64'
+onnxruntime-gpu==1.21; sys_platform != 'darwin'
+tensorflow; sys_platform != 'darwin'
 opennsfw2==0.10.2
 protobuf==4.23.2
 tqdm==4.66.4
```