Starred repositories
Official PyTorch implementation of the SIGGRAPH Asia 2024 paper: DrawingSpinUp: 3D Animation from Single Character Drawings
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
GLM-4 series: Open Multilingual Multimodal Chat LMs
Demos for the xr-frame system in WeChat mini-programs.
OpenXRLab Multi-view Motion Capture Toolbox and Benchmark
OpenXRLab Visual-inertial SLAM Toolbox and Benchmark
A curated collection of open-source Chinese large language models, focusing on smaller-scale models that can be privately deployed at low training cost, covering base models, domain-specific fine-tuning and applications, datasets, and tutorials.
InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
Open-Sora: Democratizing Efficient Video Production for All
Official release of InternLM2.5 base and chat models, with 1M-token context support.
Official implementation of OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis
Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs)
ModelScope-Agent: An agent framework connecting models in ModelScope with the world
Official Implementation for "Only a Matter of Style: Age Transformation Using a Style-Based Regression Model" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02754
Official implementation of the paper: AnyDoor: Zero-shot Object-level Image Customization
Official code for paper "PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns"
Speech-to-text, text-to-speech, speaker recognition, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Android, iOS, Raspberry Pi, RISC-V, x86_64 …
Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On
Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person
A Unity package that allows using an LED screen as a backdrop with camera frustum projection
App showcasing multiple real-time diffusion models pipelines with Diffusers
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
FLUX, Stable Diffusion, SDXL, SD3, LoRA, Fine Tuning, DreamBooth, Training, Automatic1111, Forge WebUI, SwarmUI, DeepFake, TTS, Animation, Text To Video, Tutorials, Guides, Lectures, Courses, Comfy…
The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt.
Unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style