-
Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey [IEEE '20]
compact model \ tensor decomposition \ data quantization \ network sparsification (pruning)
-
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey [arXiv '20]
-
Binary neural networks: A survey
-
Sparse low rank factorization for deep neural network compression
-
Taxonomy and evaluation of structured compression of convolutional neural networks [arXiv '19]
-
A Survey of Methods for Low-Power Deep Learning and Computer Vision [arXiv '20]
-
Recent Advances in Efficient Computation of Deep Convolutional Neural Networks [arXiv '18]
network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design and hardware accelerators
-
Survey of model compression method for neural networks [arXiv '18]
-
Survey of neural network model compression method [arXiv '18]
-
A Survey of Model Compression and Acceleration for Deep Neural Networks [arXiv '17]
-
Model compression as constrained optimization, with application to neural nets. Part I: general framework [arXiv '17]
-
Model compression as constrained optimization, with application to neural nets. Part II: quantization [arXiv '17]
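Two techniques that recur throughout the surveys above are magnitude pruning and uniform quantization. As a minimal illustration only (not taken from any of the listed papers), a NumPy sketch of both on a random weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)  # toy weight matrix

def magnitude_prune(w, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with smallest |value|."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    mask = np.abs(w) > threshold  # keep only weights strictly above threshold
    return w * mask

def quantize_int8(w):
    """Symmetric uniform quantization to int8; returns codes and scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

Wp = magnitude_prune(W, sparsity=0.5)          # half the entries become zero
q, scale = quantize_int8(W)
W_deq = q.astype(np.float32) * scale           # dequantize for inference
```

The pruned matrix keeps only the largest-magnitude half of the weights, and the dequantized matrix differs from the original by at most half a quantization step (`scale / 2`), which is the basic accuracy/footprint trade-off these surveys analyze.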