Stars
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
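A minimal usage sketch of the 🤗 Transformers `pipeline` API; the task string is real, but which default checkpoint gets downloaded depends on the installed version:

```python
from transformers import pipeline

# Builds a ready-to-use inference pipeline; a default sentiment checkpoint
# is downloaded on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art models easy to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```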
High-Resolution Image Synthesis with Latent Diffusion Models
Official Code for DragGAN (SIGGRAPH 2023)
The largest collection of PyTorch image encoders / backbones, including train, eval, inference, and export scripts, and pretrained weights -- ResNet, ResNeXt, EfficientNet, NFNet, Vision Transformer (V…
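A minimal sketch of how timm exposes its backbones; `resnet50` is just one of the bundled model names:

```python
import timm
import torch

print(timm.list_models("*resnet*")[:5])                  # browse available backbone names
model = timm.create_model("resnet50", pretrained=True)   # downloads ImageNet weights
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))          # shape (1, 1000)
```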
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
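A small sketch of the composable transformations JAX advertises (`grad`, `jit`, `vmap`), using a toy loss function:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.jit(jax.grad(loss))               # differentiate w.r.t. w, then JIT-compile
batched_loss = jax.vmap(loss, in_axes=(None, 0))  # vectorize over a leading batch axis of x

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)
print(grad_loss(w, xs))     # gradient, shape (3,)
print(batched_loss(w, xs))  # per-example losses, shape (4,)
```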
OpenMMLab Detection Toolbox and Benchmark
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
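A minimal text-to-image sketch with 🤗 Diffusers; the Stable Diffusion checkpoint name and the CUDA device are assumptions about the local setup:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one public checkpoint; any compatible model id works
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```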
SoftVC VITS Singing Voice Conversion
Image-to-Image Translation in PyTorch
Open-Sora: Democratizing Efficient Video Production for All
Graph Neural Network Library for PyTorch
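A minimal two-layer GCN sketch with PyTorch Geometric, run on a tiny random graph; the sizes are arbitrary:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

x = torch.randn(4, 16)                                   # 4 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])  # edges in COO format
out = GCN(16, 32, 3)(x, edge_index)                      # per-node class logits, shape (4, 3)
```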
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
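A minimal sketch of instantiating the ViT from vit-pytorch; the hyperparameters are illustrative, not tuned:

```python
import torch
from vit_pytorch import ViT

v = ViT(
    image_size=256, patch_size=32, num_classes=1000,
    dim=1024, depth=6, heads=16, mlp_dim=2048,
)
preds = v(torch.randn(1, 3, 256, 256))  # class logits, shape (1, 1000)
```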
Xiaohongshu note and comment crawler | Douyin video and comment crawler | Kuaishou video and comment crawler | Bilibili video and comment crawler | Weibo post and comment crawler | Baidu Tieba post crawler | Baidu Tieba comment and reply crawler | Zhihu Q&A, article, and comment crawler
Open standard for machine learning interoperability
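A short sketch of the interoperability the ONNX standard enables, exporting a torchvision model and validating the file with the onnx checker; the model choice is arbitrary:

```python
import torch
import torchvision
import onnx

model = torchvision.models.resnet18(weights=None).eval()
torch.onnx.export(
    model, torch.randn(1, 3, 224, 224), "resnet18.onnx",
    input_names=["input"], output_names=["logits"],
)
onnx.checker.check_model(onnx.load("resnet18.onnx"))  # verifies the exported graph is well-formed
```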
PyTorch implementations of Generative Adversarial Networks.
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
A computer algebra system written in pure Python
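A small SymPy sketch: symbolic differentiation, integration, and equation solving on an arbitrary expression:

```python
import sympy as sp

x = sp.symbols("x")
expr = sp.sin(x) * sp.exp(x)
print(sp.diff(expr, x))       # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(expr, x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.solve(x**2 - 2, x))  # [-sqrt(2), sqrt(2)]
```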
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024]
🐍 Geometric Computer Vision Library for Spatial AI
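A minimal sketch of Kornia's differentiable image ops on a random batched tensor; the kernel size and sigma are arbitrary:

```python
import torch
import kornia

img = torch.rand(1, 3, 256, 256)  # batched image tensor (B, C, H, W)
blurred = kornia.filters.gaussian_blur2d(img, kernel_size=(5, 5), sigma=(1.5, 1.5))
edges = kornia.filters.sobel(img)  # operations stay differentiable end to end
```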
StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation
PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion.
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
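A minimal training-loop sketch with 🤗 Accelerate on a toy model and dataset; run it via `accelerate launch` to pick up the device, precision, and distributed configuration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # detects device, precision, and distributed setup

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataloader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))), batch_size=8
)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for x, y in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```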