
Take a video and replace the face in it with a face of your choice. You only need one image of the desired face. No dataset, no training.
You can watch some demos here. A StableDiffusion extension is also available, here.
Disclaimer
This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using the character as a model for clothing, etc.
The developers of this software are aware of its possible unethical applications and are committed to taking preventative measures against them. It has a built-in check which prevents the program from working on inappropriate media, including but not limited to nudity, graphic content, and sensitive material such as war footage. We will continue to develop this project in a positive direction while adhering to law and ethics. This project may be shut down or include watermarks on the output if requested by law.
Users of this software are expected to use it responsibly while abiding by local laws. If the face of a real person is being used, users are advised to obtain consent from that person and to clearly state that the content is a deepfake when posting it online. The developers of this software will not be responsible for the actions of end users.
How do I install it?
Issues regarding installation will be closed without ceremony from now on; we cannot handle the volume of requests.
There are two types of installations: basic and GPU-powered.
-
Basic: It is more likely to work on your computer, but it will also be very slow. You can follow the instructions for the basic install here.
-
GPU: If you have a good GPU and are ready to solve any software issues you may face, you can enable GPU acceleration, which is way faster. To do this, first follow the basic install instructions given above and then follow the GPU-specific instructions here.
How do I use it?
Note: When you run this program for the first time, it will download some models (~300 MB in size).
Executing python run.py will launch this window:
Choose a face (an image with the desired face) and the target image/video (the image/video in which you want to replace the face), then click on Start. Open your file explorer and navigate to the directory you selected as output. There you will find a directory named <video_title> in which you can watch the frames being swapped in real time. Once processing is done, the output file will be created. That's it.
Don't touch the FPS checkbox unless you know what you are doing.
Additional command line arguments are given below:
options:
-h, --help show this help message and exit
-s SOURCE_PATH, --source SOURCE_PATH
select a source image
-t TARGET_PATH, --target TARGET_PATH
select a target image or video
-o OUTPUT_PATH, --output OUTPUT_PATH
select output file or directory
--frame-processor {face_swapper,face_enhancer} [{face_swapper,face_enhancer} ...]
pipeline of frame processors
--keep-fps keep original fps
--keep-audio keep original audio
--keep-frames keep temporary frames
--many-faces process every face
--video-encoder {libx264,libx265,libvpx-vp9}
adjust output video encoder
--video-quality VIDEO_QUALITY
adjust output video quality
--max-memory MAX_MEMORY
maximum amount of RAM in GB
--execution-provider {cpu,...} [{cpu,...} ...]
execution provider
--execution-threads EXECUTION_THREADS
number of execution threads
-v, --version show program's version number and exit
Looking for a CLI mode? Using the -s/--source argument will run the program in CLI mode.
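As an illustration, a headless run combining several of the flags listed above might look like this (the file names are placeholders, not files shipped with the project):

```shell
# Hypothetical invocation: swap the face from face.jpg into input.mp4,
# then enhance the result, keeping the original frame rate and audio.
python run.py \
  -s face.jpg \
  -t input.mp4 \
  -o output.mp4 \
  --frame-processor face_swapper face_enhancer \
  --keep-fps --keep-audio \
  --video-encoder libx264 --video-quality 18
```

Note that --frame-processor accepts multiple values and runs them as a pipeline in the order given.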
Credits
- henryruhs: for being an irreplaceable contributor to the project
- ffmpeg: for making video-related operations easy
- deepinsight: for their insightface project, which provided a well-made library and models
- and all developers behind libraries used in this project.