Stars
A general fine-tuning kit geared toward diffusion models.
Google Research
A benchmark of SOTA text-to-image diffusion models with a new MiniGPT-4-based evaluation strategy, X-IQE.
FlagEval is an evaluation toolkit for AI large foundation models.
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Generative Models by Stability AI
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and production deployment).
Diffusion model papers, survey, and taxonomy
Simple samples for TensorRT programming
Stable Diffusion web UI
High-Resolution Image Synthesis with Latent Diffusion Models
Firefly: a training tool for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
A high-throughput and memory-efficient inference and serving engine for LLMs
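As a rough illustration of how such an engine is typically driven for offline inference (a minimal sketch in the style of vLLM's quickstart; the model id and prompt are arbitrary placeholders):

```python
# Minimal offline-inference sketch with vLLM; model id and prompt are placeholders.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")               # load the model into the engine
outputs = llm.generate(prompts, sampling_params)   # batched generation over the prompts

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```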
Accessible large language models via k-bit quantization for PyTorch.
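For context, k-bit loading of a causal LM usually goes through the transformers integration; a minimal sketch, assuming the bitsandbytes backend is installed and using a placeholder model id:

```python
# 4-bit (NF4) model loading via transformers + bitsandbytes; model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```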
Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
Transformer related optimization, including BERT, GPT
4-bit quantization of LLaMA using GPTQ
YOLOv5 series with multiple backbones (TPH-YOLOv5, GhostNet, ShuffleNetv2, MobileNetv3Small, EfficientNetLite, PP-LCNet, SwinTransformer YOLO), modules (CBAM, DCN), pruning (EagleEye, Network Slimming), quanti…
Implementation of "Distilling Object Detectors with Fine-grained Feature Imitation" on YOLOv5
Unofficial implementation of LSQ-Net, a neural network quantization framework
YOLOv6: a single-stage object detection framework dedicated to industrial applications.
OpenMMLab Model Compression Toolbox and Benchmark.
The RM operation can equivalently convert a ResNet into a VGG-style network, which is better suited for pruning, and can help RepVGG perform better at large depths.
Source code for the paper "Robust Quantization: One Model to Rule Them All"
Two simple and effective vision transformer designs that are on par with the Swin Transformer