Starred repositories
CLIP (Contrastive Language-Image Pre-Training): predict the most relevant text snippet given an image
This repository contains the source code for the paper "First Order Motion Model for Image Animation"
Implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors"
This repository contains implementations and illustrative code to accompany DeepMind publications
LAVIS - A One-stop Library for Language-Vision Intelligence
Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
Reference models and tools for Cloud TPUs.
Silero Models: pre-trained speech-to-text, text-to-speech and text-enhancement models made embarrassingly simple
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
PyTorch 0.4.1 code for InsightFace
Unofficial implementation of the ImageNet, CIFAR-10, and SVHN augmentation policies learned by AutoAugment, using Pillow
Starter code to solve real world text data problems. Includes: Gensim Word2Vec, phrase embeddings, Text Classification with Logistic Regression, word count with pyspark, simple text preprocessing, …
FSGAN - Official PyTorch Implementation
A high-fidelity 3D face reconstruction library from monocular RGB image(s)
Efficient face emotion recognition in photos and videos
Training and experimentation code used for "Stacked Hourglass Networks for Human Pose Estimation"
[CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
AraVec is a pre-trained distributed word representation (word embedding) open source project which aims to provide the Arabic NLP research community with free to use and powerful word embedding mod…
SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation (ECCV 2020)
Code for the paper "Audio-Driven Emotional Video Portraits"
Open-source person re-identification in PyTorch
A NumPy-based implementation of PointRend
A PyTorch3D walkthrough and a Medium article 👋 on how to render 3D .obj meshes from various viewpoints to create 2D images.
Experiment with diffusion models that you can run on your local Jupyter instance
A simple implementation of classifier-free guidance DDIM on MNIST