List of papers I have read.
- An image signature for any kind of image (H. Chi Wong, Marshall Bern, David Goldberg) - pHash
- Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278–2324, Nov 1998.
- D. Lu and Q. Weng, “A survey of image classification methods and techniques for improving classification performance,” International Journal of Remote Sensing, vol. 28, no. 5, pp. 823–870, 2007.
- S. Wu, L. Dong, H. Chen, and J. Zhan, “Application of image classification technology in mangrove information extraction,” in 2015 11th International Conference on Computational Intelligence and Security (CIS), pp. 167–170, Dec 2015.
- Q. Li, W. Cai, X. Wang, Y. Zhou, D. D. Feng, and M. Chen, “Medical image classification with convolutional neural network,” in 2014 13th International Conference on Control Automation Robotics Vision (ICARCV), pp. 844–848, Dec 2014.
- J. Zhang, Y. Xia, Q. Wu, and Y. Xie, “Classification of medical images and illustrations in the biomedical literature using synergic deep learning,” CoRR, vol. abs/1706.09092, 2017.
- L. J. Li and L. Fei-Fei, “What, where and who? classifying events by scene and object recognition,” in 2007 IEEE 11th International Conference on Computer Vision, pp. 1–8, Oct 2007.
- An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data (Julian D. Olden, Michael K. Joy, Russell G. Death)
- (AlexNet) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, (USA), pp. 1097–1105, Curran Associates Inc., 2012.
- (VGG) K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
- (Inception) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, June 2015.
- (ResNet) K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, June 2016.
- S. Akay, M. E. Kundegorski, M. Devereux, and T. P. Breckon, “Transfer learning using convolutional neural networks for object classification within x-ray baggage security imagery,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 1057–1061, Sept 2016.
- J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, (Cambridge, MA, USA), pp. 3320–3328, MIT Press, 2014.
- L. Torrey and J. W. Shavlik, “Transfer learning,” 2009.
- S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 1345–1359, Oct 2010.
- A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” CoRR, vol. abs/1403.6382, 2014.
- M. Huh, P. Agrawal, and A. A. Efros, “What makes imagenet good for transfer learning?,” CoRR, vol. abs/1608.08614, 2016.
- S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” CoRR, vol. abs/1502.03167, 2015.
- H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, pp. 1929–1958, Jan. 2014.
- M. D. Zeiler, “ADADELTA: an adaptive learning rate method,” CoRR, vol. abs/1212.5701, 2012.
- T. Tieleman and G. Hinton, “Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude.” COURSERA: Neural Networks for Machine Learning, 2012.
- D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Neurocomputing: Foundations of research,” ch. Learning Representations by Back-propagating Errors, pp. 696–699, Cambridge, MA, USA: MIT Press, 1988.
- G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” in NIPS Deep Learning and Representation Learning Workshop, 2015.
- Generative Adversarial Networks (https://arxiv.org/abs/1406.2661) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
- A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576) Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
- Perceptual Losses for Real-Time Style Transfer and Super-Resolution (https://arxiv.org/abs/1603.08155) Justin Johnson, Alexandre Alahi, Li Fei-Fei
- Deep Supervised Hashing for Fast Image Retrieval (http://openaccess.thecvf.com/content_cvpr_2016/papers/Liu_Deep_Supervised_Hashing_CVPR_2016_paper.pdf)
- You Only Look Once: Unified, Real-Time Object Detection (https://arxiv.org/abs/1506.02640)
- YOLO9000: Better, Faster, Stronger (https://arxiv.org/pdf/1612.08242.pdf)
- YOLOv3: An Incremental Improvement (https://arxiv.org/pdf/1804.02767.pdf)