AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
Stable Diffusion web UI
The simplest, fastest repository for training/finetuning medium-sized GPTs.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
OpenMMLab Detection Toolbox and Benchmark
Code and documentation to train Stanford's Alpaca models, and generate the data.
Generate high-definition short videos with one click using large AI models.
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
WebUI extension for ControlNet
[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
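The LoRA technique named above replaces full fine-tuning of a frozen weight matrix W with a trainable low-rank update B·A, scaled by alpha/r. A minimal NumPy sketch of the forward pass (the function name, shapes, and default hyperparameters here are illustrative assumptions, not loralib's API):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass of a linear layer with a LoRA update.

    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in) and B: (d_out, r) are the only trainable parameters;
    the effective weight is W + (alpha / r) * B @ A.
    """
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T) @ B.T

# As in the paper, B starts at zero, so at initialization the adapted
# layer computes exactly what the frozen layer computes.
rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))
x = rng.standard_normal((3, d_in))
assert np.allclose(lora_forward(x, W, A, B, r=r), x @ W.T)
```

Because only A and B (2·r·d parameters instead of d_out·d_in) receive gradients, the memory and storage cost of adapting a large model drops sharply.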
Refine high-quality datasets and visual AI models
Wan: Open and Advanced Large-Scale Video Generative Models
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
PointNet and PointNet++ implemented in PyTorch (pure Python), with experiments on ModelNet, ShapeNet, and S3DIS.
Understand Human Behavior to Align True Needs
Simplest working implementation of StyleGAN2, a state-of-the-art generative adversarial network, in PyTorch. Enabling everyone to experience disentanglement.
A PyTorch native library for large model training
Implementation of AudioLM, a SOTA Language Modeling Approach to Audio Generation out of Google Research, in Pytorch
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework