- Harbin Institute of Technology, Shenzhen
- Shenzhen, Guangdong, China
- [email protected]
Stars
FFCV: Fast Forward Computer Vision (and other ML workloads!)
Fast and differentiable MS-SSIM and SSIM for PyTorch.
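The SSIM index this library computes can be illustrated with a minimal NumPy sketch. The library evaluates the formula per Gaussian-weighted local window (and MS-SSIM across scales); the hypothetical `global_ssim` below applies it once over the whole image for brevity, so it is a simplification, not the library's API.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    # Single-window SSIM over two images (the windowed version averages
    # this quantity over many local patches).
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score exactly 1.0; any mismatch in mean, variance, or covariance pulls the score below 1.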
[CVPR 2023] DepGraph: Towards Any Structural Pruning
Open standard for machine learning interoperability
micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / Quantization and Training of Neural Networks for Efficient Integer-…
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Bjontegaard metric calculation. Include BD-PSNR and BD-rate
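The Bjontegaard delta rate such a repo computes can be sketched in a few lines of NumPy: fit a cubic of log-rate against PSNR for each rate-distortion curve, then average the gap between the fits over the overlapping PSNR range (the function and argument names here are illustrative, not the repo's API).

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    # Fit cubic polynomials log10(rate) = f(PSNR) for each curve.
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    # Average log-rate difference, converted back to a percentage.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0
```

As a sanity check, a test curve whose bitrates are uniformly 10% above the anchor at every PSNR point yields a BD-rate of +10%. BD-PSNR is the same construction with the roles of rate and PSNR swapped, integrating PSNR as a function of log-rate.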
My best practices for training on large datasets with PyTorch.
Count the MACs / FLOPs of your PyTorch model.
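What a MAC/FLOP counter tallies can be reproduced by hand for the common layer types; the helpers below are illustrative names implementing the standard per-layer formulas, not the repo's API.

```python
def conv2d_macs(c_in, c_out, k, h_out, w_out, groups=1):
    # Each output element needs (c_in/groups) * k * k multiply-accumulates,
    # and there are c_out * h_out * w_out output elements.
    return c_out * (c_in // groups) * k * k * h_out * w_out

def linear_macs(in_features, out_features):
    # One multiply-accumulate per weight.
    return in_features * out_features
```

For example, the 7x7 stem convolution of a ResNet (3 -> 64 channels, 112x112 output) costs 64 * 3 * 49 * 112 * 112 = 118,013,952 MACs, about 118M; FLOPs are conventionally reported as roughly 2x the MAC count.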
🌍 Algorithm training for beginners | Four parts: ① interview write-ups from major tech companies ② illustrated LeetCode solutions ③ a thousand open-source e-books ④ a hundred technical mind maps (the project took over a hundred hours; a star in support would be appreciated, 🌹 thanks~). Also recommends free ChatGPT sites.
Pytorch implementation of the paper "The Devil Is in the Details: Window-based Attention for Image Compression".
Real-ESRGAN aims at developing Practical Algorithms for General Image/Video Restoration.
A PyTorch implementation of "Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules".
Repository of the paper "Learned Image Compression with Discretized Gaussian Mixture Likelihoods and Attention Modules"
👾 Fast and simple video download library and CLI tool written in Go
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
fire-smoke-detect-yolov4-yolov5 and fire-smoke-detection-dataset (fire detection, smoke detection)
[BMVC 2021] GhostShiftAddNet: More Features from Energy-Efficient Operations.
🍀 PyTorch implementations of various attention mechanisms, MLPs, re-parameterization, and convolution variants, helpful for further understanding the papers. ⭐⭐⭐
Implementations of various lightweight networks in PyTorch, such as MobileNetV2, MobileNeXt, GhostNet, ParNet, MobileViT, AdderNet, ShuffleNetV1-V2, LCNet, ConvNeXt, etc. ⭐⭐⭐⭐⭐
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".