Commit 6587d2d
* Simplify bbox access

* Code cleanup

* Simplify bbox access

* Move code to face helper

* Swap and paste back without insightface

* Swap and paste back without insightface

* Remove semaphore where possible

* Improve paste back performance

* Cosmetic changes

* Move the predictor to ONNX to avoid tensorflow, Use video ranges for prediction

* Make CI happy

* Move template and size to the options

* Fix different color on box

* Uniform model handling for predictor

* Uniform frame handling for predictor

* Pass kps direct to warp_face

* Fix urllib

* Analyse based on matches

* Analyse based on rate

* Fix CI

* ROCM and OpenVINO mapping for torch backends

* Fix the paste back speed

* Fix import

* Replace retinaface with yunet (facefusion#168)

* Remove insightface dependency

* Fix urllib

* Some fixes

* Analyse based on matches

* Analyse based on rate

* Fix CI

* Migrate to Yunet

* Something is off here

* We indeed need semaphore for yunet

* Normalize the normed_embedding

* Fix download of models

* Fix download of models

* Fix download of models

* Add score and improve affine_matrix

* Temp fix for bbox out of frame

* Temp fix for bbox out of frame

* ROCM and OpenVINO mapping for torch backends

* Normalize bbox

* Implement gender age

* Cosmetics on cli args

* Prevent face jumping

* Fix the paste back speed

* Fix import

* Introduce detection size

* Cosmetics on face analyser ARGS and globals

* Temp fix for shaking face

* Accurate event handling

* Accurate event handling

* Accurate event handling

* Set the reference_frame_number in face_selector component

* Simswap model (facefusion#171)

* Add simswap models

* Add ghost models

* Introduce normed template

* Conditional prepare and normalize for ghost

* Conditional prepare and normalize for ghost

* Get simswap working

* Get simswap working

* Fix refresh of swapper model

* Refine face selection and detection (facefusion#174)

* Refine face selection and detection

* Update README.md

* Fix some face analyser UI

* Fix some face analyser UI

* Introduce range handling for CLI arguments

* Introduce range handling for CLI arguments

* Fix some spacings

* Disable onnxruntime warnings

* Use cv2.blur over cv2.GaussianBlur for better performance

* Revert "Use cv2.blur over cv2.GaussianBlur for better performance"

This reverts commit bab666d.

* Prepare universal face detection

* Prepare universal face detection part2

* Reimplement retinaface

* Introduce cached anchors creation

* Restore filtering to enhance performance

* Minor changes

* Minor changes

* More code but easier to understand

* Minor changes

* Rename predictor to content analyser

* Change detection/recognition to detector/recognizer

* Fix crop frame borders

* Fix spacing

* Allow normalize output without a source

* Improve conditional set face reference

* Update dependencies

* Add timeout for get_download_size

* Fix performance due to disorder

* Move models to assets repository, Adjust namings

* Refactor face analyser

* Rename models once again

* Fix spacing

* Highres simswap (facefusion#192)

* Introduce highres simswap

* Fix simswap 256 color issue (facefusion#191)

* Fix simswap 256 color issue

* Update face_swapper.py

* Normalize models and host in our repo

* Normalize models and host in our repo

---------

Co-authored-by: Harisreedhar <[email protected]>

* Rename face analyser direction to face analyser order

* Improve the UI for face selector

* Add best-worst, worst-best detector ordering

* Clear as needed and fix zero score bug

* Fix linter

* Improve startup time by multi-threading remote download size lookups

* Just some cosmetics

* Normalize swapper source input, Add blendface_256 (unfinished)

* New paste back (facefusion#195)

* add new paste_back (facefusion#194)

* add new paste_back

* Update face_helper.py

* Update face_helper.py

* add commandline arguments and gui

* fix conflict

* Update face_mask.py

* type fix

* Clean some wording and typing

---------

Co-authored-by: Harisreedhar <[email protected]>

* Clean more names, use blur range approach

* Add blur padding range

* Change the padding order

* Fix yunet filename

* Introduce face debugger

* Use percent for mask padding

* Ignore this

* Ignore this

* Simplify debugger output

* implement blendface (facefusion#198)

* Clean up after the genius

* Add gpen_bfr_256

* Cosmetics

* Ignore face_mask_padding on face enhancer

* Update face_debugger.py (facefusion#202)

* Shrink debug_face() to a minimum

* Mark as 2.0.0 release

* remove unused (facefusion#204)

* Apply NMS (facefusion#205)

* Apply NMS

* Apply NMS part2

* Fix restoreformer url

* Add debugger cli and gui components (facefusion#206)

* Add debugger cli and gui components

* update

* Polishing the types

* Fix usage in README.md

* Update onnxruntime

* Support for webp

* Rename paste-back to face-mask

* Add license to README

* Add license to README

* Extend face selector mode by one

* Update utilities.py (facefusion#212)

* Stop inline camera on stream

* Minor webcam updates

* Gracefully start and stop webcam

* Rename capture to video_capture

* Make get webcam capture pure

* Check webcam to not be None

* Remove some is not None

* Use index 0 for webcam

* Remove memory lookup within progress bar

* Less progress bar updates

* Uniform progress bar

* Use classic progress bar

* Fix image and video validation

* Use different hash for cache

* Use best-worse order for webcam

* Normalize padding like CSS

* Update preview

* Fix max memory

* Move disclaimer and license to the docs

* Update wording in README

* Add LICENSE.md

* Fix argument in README

---------

Co-authored-by: Harisreedhar <[email protected]>
Co-authored-by: alex00ds <[email protected]>
3 people authored Nov 28, 2023
1 parent ea8ecf7 commit 6587d2d
Showing 48 changed files with 1,556 additions and 601 deletions.
Binary file modified .github/preview.png
3 changes: 3 additions & 0 deletions LICENSE.md
@@ -0,0 +1,3 @@
MIT license

Copyright (c) 2023 Henry Ruhs
98 changes: 49 additions & 49 deletions README.md
@@ -16,9 +16,9 @@ Preview
 Installation
 ------------
 
-Be aware, the installation needs technical skills and is not for beginners. Please do not open platform and installation related issues on GitHub. We have a very helpful [Discord](https://join.facefusion.io) community that will guide you to install FaceFusion.
+Be aware, the installation needs technical skills and is not for beginners. Please do not open platform and installation related issues on GitHub. We have a very helpful [Discord](https://join.facefusion.io) community that will guide you to complete the installation.
 
-Read the [installation](https://docs.facefusion.io/installation) now.
+Get started with the [installation](https://docs.facefusion.io/installation) guide.


Usage
@@ -30,68 +30,68 @@ Run the command:
 ```
 python run.py [options]
 
 options:
   -h, --help                                          show this help message and exit
   -s SOURCE_PATH, --source SOURCE_PATH                select a source image
   -t TARGET_PATH, --target TARGET_PATH                select a target image or video
   -o OUTPUT_PATH, --output OUTPUT_PATH                specify the output file or directory
   -v, --version                                       show program's version number and exit
 
 misc:
   --skip-download                                     omit automate downloads and lookups
   --headless                                          run the program in headless mode
 
 execution:
-  --execution-providers {cpu} [{cpu} ...]             choose from the available execution providers (choices: cpu, ...)
-  --execution-thread-count EXECUTION_THREAD_COUNT     specify the number of execution threads
-  --execution-queue-count EXECUTION_QUEUE_COUNT       specify the number of execution queries
-  --max-memory MAX_MEMORY                             specify the maximum amount of ram to be used (in gb)
-
-face recognition:
-  --face-recognition {reference,many}                 specify the method for face recognition
-  --face-analyser-direction {left-right,right-left,top-bottom,bottom-top,small-large,large-small}  specify the direction used for face analysis
-  --face-analyser-age {child,teen,adult,senior}       specify the age used for face analysis
-  --face-analyser-gender {male,female}                specify the gender used for face analysis
-  --reference-face-position REFERENCE_FACE_POSITION   specify the position of the reference face
-  --reference-face-distance REFERENCE_FACE_DISTANCE   specify the distance between the reference face and the target face
-  --reference-frame-number REFERENCE_FRAME_NUMBER     specify the number of the reference frame
+  --execution-providers {cpu} [{cpu} ...]             choose from the available execution providers
+  --execution-thread-count [1-128]                    specify the number of execution threads
+  --execution-queue-count [1-32]                      specify the number of execution queries
+  --max-memory [0-128]                                specify the maximum amount of ram to be used (in gb)
+
+face analyser:
+  --face-analyser-order {left-right,right-left,top-bottom,bottom-top,small-large,large-small,best-worst,worst-best}  specify the order used for the face analyser
+  --face-analyser-age {child,teen,adult,senior}       specify the age used for the face analyser
+  --face-analyser-gender {male,female}                specify the gender used for the face analyser
+  --face-detector-model {retinaface,yunet}            specify the model used for the face detector
+  --face-detector-size {160x160,320x320,480x480,512x512,640x640,768x768,960x960,1024x1024}  specify the size threshold used for the face detector
+  --face-detector-score [0.0-1.0]                     specify the score threshold used for the face detector
+
+face selector:
+  --face-selector-mode {reference,one,many}           specify the mode for the face selector
+  --reference-face-position REFERENCE_FACE_POSITION   specify the position of the reference face
+  --reference-face-distance [0.0-1.5]                 specify the distance between the reference face and the target face
+  --reference-frame-number REFERENCE_FRAME_NUMBER     specify the number of the reference frame
+
+face mask:
+  --face-mask-blur [0.0-1.0]                          specify the blur amount for face mask
+  --face-mask-padding FACE_MASK_PADDING [FACE_MASK_PADDING ...]  specify the face mask padding (top, right, bottom, left) in percent
 
 frame extraction:
   --trim-frame-start TRIM_FRAME_START                 specify the start frame for extraction
   --trim-frame-end TRIM_FRAME_END                     specify the end frame for extraction
   --temp-frame-format {jpg,png}                       specify the image format used for frame extraction
   --temp-frame-quality [0-100]                        specify the image quality used for frame extraction
   --keep-temp                                         retain temporary frames after processing
 
 output creation:
   --output-image-quality [0-100]                      specify the quality used for the output image
   --output-video-encoder {libx264,libx265,libvpx-vp9,h264_nvenc,hevc_nvenc}  specify the encoder used for the output video
   --output-video-quality [0-100]                      specify the quality used for the output video
   --keep-fps                                          preserve the frames per second (fps) of the target
   --skip-audio                                        omit audio from the target
 
 frame processors:
-  --frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...]  choose from the available frame processors (choices: face_enhancer, face_swapper, frame_enhancer, ...)
-  --face-enhancer-model {codeformer,gfpgan_1.2,gfpgan_1.3,gfpgan_1.4,gpen_bfr_512}  choose from the mode for the frame processor
-  --face-enhancer-blend [0-100]                       specify the blend factor for the frame processor
-  --face-swapper-model {inswapper_128,inswapper_128_fp16}  choose from the mode for the frame processor
-  --frame-enhancer-model {realesrgan_x2plus,realesrgan_x4plus,realesrnet_x4plus}  choose from the mode for the frame processor
-  --frame-enhancer-blend [0-100]                      specify the blend factor for the frame processor
+  --frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...]  choose from the available frame processors (choices: face_debugger, face_enhancer, face_swapper, frame_enhancer, ...)
+  --face-debugger-items {bbox,kps,face-mask,score} [{bbox,kps,face-mask,score} ...]  specify the face debugger items
+  --face-enhancer-model {codeformer,gfpgan_1.2,gfpgan_1.3,gfpgan_1.4,gpen_bfr_256,gpen_bfr_512,restoreformer}  choose the model for the frame processor
+  --face-enhancer-blend [0-100]                       specify the blend factor for the frame processor
+  --face-swapper-model {blendface_256,inswapper_128,inswapper_128_fp16,simswap_256,simswap_512_unofficial}  choose the model for the frame processor
+  --frame-enhancer-model {real_esrgan_x2plus,real_esrgan_x4plus,real_esrnet_x4plus}  choose the model for the frame processor
+  --frame-enhancer-blend [0-100]                      specify the blend factor for the frame processor
 
 uis:
   --ui-layouts UI_LAYOUTS [UI_LAYOUTS ...]            choose from the available ui layouts (choices: benchmark, webcam, default, ...)
 ```
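
For orientation, here is what a typical headless run combining several of the options above could look like; the source, target and output paths are placeholders, not files shipped with the project:

```
python run.py \
  --headless \
  --source source_face.jpg \
  --target target_video.mp4 \
  --output output.mp4 \
  --frame-processors face_swapper face_enhancer \
  --face-swapper-model inswapper_128 \
  --face-selector-mode reference \
  --reference-frame-number 0 \
  --keep-fps
```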


Disclaimer
----------

We acknowledge the unethical potential of FaceFusion and are resolutely dedicated to establishing safeguards against such misuse. This program has been engineered to abstain from processing inappropriate content such as nudity, graphic content and sensitive material.

It is important to note that we maintain a strong stance against any type of pornographic content and do not collaborate with any websites promoting the unauthorized use of our software.

Users who seek to engage in such activities will face consequences, including being banned from our community. We reserve the right to report developers on GitHub who distribute unlocked forks of our software at any time.


Documentation
-------------

22 changes: 19 additions & 3 deletions facefusion/choices.py
@@ -1,10 +1,26 @@
 from typing import List
 
-from facefusion.typing import FaceRecognition, FaceAnalyserDirection, FaceAnalyserAge, FaceAnalyserGender, TempFrameFormat, OutputVideoEncoder
+import numpy
+
+from facefusion.typing import FaceSelectorMode, FaceAnalyserOrder, FaceAnalyserAge, FaceAnalyserGender, TempFrameFormat, OutputVideoEncoder
 
-face_recognitions : List[FaceRecognition] = [ 'reference', 'many' ]
-face_analyser_directions : List[FaceAnalyserDirection] = [ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small' ]
+
+face_analyser_orders : List[FaceAnalyserOrder] = [ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small', 'best-worst', 'worst-best' ]
 face_analyser_ages : List[FaceAnalyserAge] = [ 'child', 'teen', 'adult', 'senior' ]
 face_analyser_genders : List[FaceAnalyserGender] = [ 'male', 'female' ]
+face_detector_models : List[str] = [ 'retinaface', 'yunet' ]
+face_detector_sizes : List[str] = [ '160x160', '320x320', '480x480', '512x512', '640x640', '768x768', '960x960', '1024x1024' ]
+face_selector_modes : List[FaceSelectorMode] = [ 'reference', 'one', 'many' ]
 temp_frame_formats : List[TempFrameFormat] = [ 'jpg', 'png' ]
 output_video_encoders : List[OutputVideoEncoder] = [ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc' ]
+
+execution_thread_count_range : List[int] = numpy.arange(1, 129, 1).tolist()
+execution_queue_count_range : List[int] = numpy.arange(1, 33, 1).tolist()
+max_memory_range : List[int] = numpy.arange(0, 129, 1).tolist()
+face_detector_score_range : List[float] = numpy.arange(0.0, 1.05, 0.05).tolist()
+face_mask_blur_range : List[float] = numpy.arange(0.0, 1.05, 0.05).tolist()
+face_mask_padding_range : List[float] = numpy.arange(0, 101, 1).tolist()
+reference_face_distance_range : List[float] = numpy.arange(0.0, 1.55, 0.05).tolist()
+temp_frame_quality_range : List[int] = numpy.arange(0, 101, 1).tolist()
+output_image_quality_range : List[int] = numpy.arange(0, 101, 1).tolist()
+output_video_quality_range : List[int] = numpy.arange(0, 101, 1).tolist()
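
These arange-based range lists pair naturally with argparse, where the full list backs `choices` validation while a compact `metavar` keeps the `--help` output readable, matching the `[1-128]`-style placeholders in the README above. The following is a minimal sketch of that idea, not the project's actual argument wiring:

```
from argparse import ArgumentParser

import numpy

# same construction as execution_thread_count_range in choices.py above
execution_thread_count_range = numpy.arange(1, 129, 1).tolist()

parser = ArgumentParser()
# the range list backs validation; the metavar keeps --help short
parser.add_argument('--execution-thread-count', type = int, default = 4, choices = execution_thread_count_range, metavar = '[1-128]')

args = parser.parse_args([ '--execution-thread-count', '16' ])
print(args.execution_thread_count) # 16; values outside 1-128 are rejected by argparse
```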
102 changes: 102 additions & 0 deletions facefusion/content_analyser.py
@@ -0,0 +1,102 @@
from typing import Any, Dict
from functools import lru_cache
import threading
import cv2
import numpy
import onnxruntime
from tqdm import tqdm

import facefusion.globals
from facefusion import wording
from facefusion.typing import Frame, ModelValue
from facefusion.vision import get_video_frame, count_video_frame_total, read_image, detect_fps
from facefusion.utilities import resolve_relative_path, conditional_download

CONTENT_ANALYSER = None
THREAD_LOCK : threading.Lock = threading.Lock()
MODELS : Dict[str, ModelValue] =\
{
	'open_nsfw':
	{
		'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/open_nsfw.onnx',
		'path': resolve_relative_path('../.assets/models/open_nsfw.onnx')
	}
}
MAX_PROBABILITY = 0.80
MAX_RATE = 5
STREAM_COUNTER = 0


def get_content_analyser() -> Any:
	global CONTENT_ANALYSER

	with THREAD_LOCK:
		if CONTENT_ANALYSER is None:
			model_path = MODELS.get('open_nsfw').get('path')
			CONTENT_ANALYSER = onnxruntime.InferenceSession(model_path, providers = facefusion.globals.execution_providers)
	return CONTENT_ANALYSER


def clear_content_analyser() -> None:
	global CONTENT_ANALYSER

	CONTENT_ANALYSER = None


def pre_check() -> bool:
	if not facefusion.globals.skip_download:
		download_directory_path = resolve_relative_path('../.assets/models')
		model_url = MODELS.get('open_nsfw').get('url')
		conditional_download(download_directory_path, [ model_url ])
	return True


def analyse_stream(frame : Frame, fps : float) -> bool:
	global STREAM_COUNTER

	STREAM_COUNTER = STREAM_COUNTER + 1
	if STREAM_COUNTER % int(fps) == 0:
		return analyse_frame(frame)
	return False


def prepare_frame(frame : Frame) -> Frame:
	frame = cv2.resize(frame, (224, 224)).astype(numpy.float32)
	frame -= numpy.array([ 104, 117, 123 ]).astype(numpy.float32)
	frame = numpy.expand_dims(frame, axis = 0)
	return frame


def analyse_frame(frame : Frame) -> bool:
	content_analyser = get_content_analyser()
	frame = prepare_frame(frame)
	probability = content_analyser.run(None,
	{
		'input:0': frame
	})[0][0][1]
	return probability > MAX_PROBABILITY


@lru_cache(maxsize = None)
def analyse_image(image_path : str) -> bool:
	frame = read_image(image_path)
	return analyse_frame(frame)


@lru_cache(maxsize = None)
def analyse_video(video_path : str, start_frame : int, end_frame : int) -> bool:
	video_frame_total = count_video_frame_total(video_path)
	fps = detect_fps(video_path)
	frame_range = range(start_frame or 0, end_frame or video_frame_total)
	rate = 0.0
	counter = 0
	with tqdm(total = len(frame_range), desc = wording.get('analysing'), unit = 'frame', ascii = ' =') as progress:
		for frame_number in frame_range:
			if frame_number % int(fps) == 0:
				frame = get_video_frame(video_path, frame_number)
				if analyse_frame(frame):
					counter += 1
					rate = counter * int(fps) / len(frame_range) * 100
			progress.update()
			progress.set_postfix(rate = rate)
	return rate > MAX_RATE
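
To make the sampling behaviour concrete: analyse_video() inspects roughly one frame per second (frame_number % int(fps) == 0), counts the flagged samples, extrapolates that count back over the full frame range, and flags the clip once the estimated percentage exceeds MAX_RATE. A small self-contained sketch of the same arithmetic, with the model call stubbed out and illustrative numbers:

```
MAX_RATE = 5

def estimate_rate(video_frame_total : int, fps : int, flagged_samples : int) -> float:
	# each sampled frame stands in for `fps` consecutive frames,
	# mirroring: rate = counter * int(fps) / len(frame_range) * 100
	return flagged_samples * fps / video_frame_total * 100

# 3000 frames at 30 fps -> 100 sampled frames; 20 hits -> 20.0 percent
rate = estimate_rate(3000, 30, 20)
print(rate, rate > MAX_RATE) # 20.0 True
```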