Stars
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Supports most operators for converting MXNet models to Caffe.
Multi Model Server is a tool for serving neural net models for inference
MMdnn is a set of tools to help users interoperate among different deep learning frameworks, e.g. model conversion and visualization. Converts models between Caffe, Keras, MXNet, TensorFlow, CNTK, …
TensorFlow Code for paper "Efficient Neural Architecture Search via Parameter Sharing"
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
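A minimal sketch of the hyper-parameter tuning side of such a toolkit, written as an NNI-style trial script; the search-space keys, the placeholder train_and_evaluate function, and its metric are assumptions for illustration.

```python
# Minimal sketch of an NNI trial script for hyper-parameter tuning.
# The hyper-parameter names and the training function are placeholders.
import nni


def train_and_evaluate(lr, batch_size):
    # Placeholder: train a model with the given hyper-parameters and
    # return a validation metric (higher is better).
    return 1.0 - abs(lr - 0.01) - abs(batch_size - 64) / 1000.0


if __name__ == "__main__":
    # The tuner injects one point from the search space per trial.
    params = nni.get_next_parameter()
    lr = params.get("lr", 0.01)
    batch_size = params.get("batch_size", 64)

    accuracy = train_and_evaluate(lr, batch_size)

    # Report the final metric so the tuner can propose the next trial.
    nni.report_final_result(accuracy)
```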
Code for 'Learning Efficient Single-stage Pedestrian Detectors by Asymptotic Localization Fitting' in ECCV2018
ethanhe42/softer-NMS (forked from facebookresearch/Detectron): Bounding Box Regression with Uncertainty for Accurate Object Detection (CVPR'19)
Non-maximum suppression for object detection in a neural network
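For reference, a minimal NumPy sketch of greedy non-maximum suppression; the [x1, y1, x2, y2] box format and the IoU threshold are assumptions for illustration, not the specific repo's API.

```python
# Minimal sketch of greedy non-maximum suppression over [x1, y1, x2, y2] boxes.
import numpy as np


def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes kept after greedy NMS, highest score first."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box above the threshold.
        order = order[1:][iou <= iou_threshold]
    return keep
```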
Learning Lightweight Lane Detection CNNs by Self Attention Distillation (ICCV 2019)
Open-source code for paper "Dataset Distillation"
Official PyTorch implementation of Relational Knowledge Distillation, CVPR 2019
Codebase of the paper "Feature Intertwiner for Object Detection", ICLR 2019
Implementation of Data-free Knowledge Distillation for Deep Neural Networks (on arxiv!)
NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection.
Implementations of CVPR 2019 paper Distilling Object Detectors with Fine-grained Feature Imitation
Implementation of CVPR 2019 paper: Distilling Object Detectors with Fine-grained Feature Imitation
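The feature-imitation approach behind these detector-distillation repos trains the student to match teacher feature maps near object regions. A generic PyTorch sketch of such a masked imitation loss follows; the foreground mask and the 1x1 adaptation layer are assumptions (the paper derives its mask from anchor/ground-truth overlap).

```python
# Generic sketch of a masked feature-imitation loss for detector distillation.
import torch
import torch.nn as nn


class FeatureImitationLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # Project student features to the teacher's channel dimension.
        self.adapt = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat, fg_mask):
        # student_feat: (N, Cs, H, W), teacher_feat: (N, Ct, H, W)
        # fg_mask: (N, 1, H, W) with 1s on locations near ground-truth objects.
        diff = (self.adapt(student_feat) - teacher_feat) ** 2
        masked = diff * fg_mask
        # Average the squared error over the foreground locations only.
        return masked.sum() / fg_mask.sum().clamp(min=1.0)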
Distillation for Faster R-CNN at the classification, regression, and feature levels (feature level + mask)
Semi-supervised Adaptive Distillation is a model compression method for object detection.
The official code for the paper 'Structured Knowledge Distillation for Semantic Segmentation' (CVPR 2019 oral) and its extension to other tasks.
PyTorch implementation of "Distilling the Knowledge in a Neural Network" for model compression
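The soft-target loss from that paper combines a temperature-softened KL term with the usual cross-entropy on hard labels. A minimal PyTorch sketch follows; the specific temperature T and mixing weight alpha are assumptions.

```python
# Minimal sketch of the soft-target distillation loss from
# "Distilling the Knowledge in a Neural Network".
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 to keep gradient magnitudes comparable (per the paper).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Standard cross-entropy against the hard labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```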
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"