Stars
Stable Diffusion web UI
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Robust Speech Recognition via Large-Scale Weak Supervision
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
A Gradio web UI for Large Language Models with support for multiple inference backends.
A high-throughput and memory-efficient inference and serving engine for LLMs
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
Open-source code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
WebUI extension for ControlNet
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
An open source implementation of CLIP.
Hackable and optimized Transformers building blocks, supporting a composable construction.
Implementation of Denoising Diffusion Probabilistic Model in PyTorch
Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TensorFlow, and others); see the minimal sketch after this list
Implementation of Imagen, Google's Text-to-Image Neural Network, in PyTorch
A state-of-the-art open visual language model | multimodal pretrained model
Model-parallel transformers in JAX and Haiku
NanoDet-Plus ⚡ Super fast and lightweight anchor-free object detection model. 🔥 Only 980 KB (int8) / 1.8 MB (fp16), running at 97 FPS on a cellphone 🔥
Emulator for rapid prototyping of Software Defined Networks
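The einops entry above advertises "flexible and powerful tensor operations for readable and reliable code"; as a rough illustration of what that looks like in practice, here is a minimal sketch of its `rearrange`/`reduce` pattern syntax applied to NumPy arrays. The array names and shapes are assumptions made up for this example, not taken from any repository listed above.

```python
# Minimal sketch of einops-style tensor manipulation (illustrative only).
import numpy as np
from einops import rearrange, reduce

# A hypothetical batch of 8 RGB images, 32x32, in channels-last layout.
images = np.random.rand(8, 32, 32, 3)

# Move channels first: (batch, height, width, channel) -> (batch, channel, height, width).
chw = rearrange(images, "b h w c -> b c h w")

# Flatten each image into a single feature vector per batch element.
flat = rearrange(images, "b h w c -> b (h w c)")

# Global average pooling over the spatial axes.
pooled = reduce(images, "b h w c -> b c", "mean")

print(chw.shape, flat.shape, pooled.shape)  # (8, 3, 32, 32) (8, 3072) (8, 3)
```

The same patterns work unchanged on PyTorch, JAX, and TensorFlow tensors, which is the portability the description refers to.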