Stars
AlphaPlayer is a video animation engine.
✨ Local and fast AI assistant. Supports: Web | iOS | macOS | Android | Linux | Windows
This repository contains the code for "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild", published at ACM Multimedia 2020. For the HD commercial model, please try out Sync Labs.
Real-time image and video processing library similar to GPUImage, with built-in beauty filters, written in C++11 and based on OpenGL/ES.
MeiHu beauty SDK: supports beauty filters (Beauty Filter), mask special effects, stickers (Software/Hardware Encoder), and filters (LUTs)
[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Industry-leading face manipulation platform
DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration. CVPR 2023.
GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.
MuseTalk: Real-Time High-Quality Lip Synchronization with Latent Space Inpainting
Real-time face swap and one-click video deepfake with only a single image
Source code for 'Pro HTML5 Games' by Aditya Ravi Shankar
Emscripten: An LLVM-to-WebAssembly Compiler
🎰 Circular slot machine mobile-first SPA built using JavaScript, CSS variables and Emojis!
Modern casino slot machine game using only plain JavaScript (Web Animations API)
🐍🎮 pygame (the library) is a free and open-source Python library for making multimedia applications like games, built on top of the excellent SDL library. C, Python, Native, Ope…
Turn your Python application into an Android APK
Toolchain for compiling Python / Kivy / other libraries for iOS
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
YOLO5Face: Why Reinventing a Face Detector (https://arxiv.org/abs/2105.12931), ECCV Workshops 2022
Visualizer for neural network, deep learning and machine learning models
Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
Implementation of paper - YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information
[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.