akbargumbira/ml_papers
List of papers I have read.

Image Processing and Computer Vision

  • H. C. Wong, M. Bern, and D. Goldberg, “An image signature for any kind of image.” (pHash)
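The signature in the Wong et al. paper is grid-based; the related pHash family of perceptual hashes can be illustrated with the simplest variant, a block-mean ("average") hash. This is a simplified sketch in that spirit, not the paper's exact algorithm:

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Simplified perceptual hash: block-average down to
    # hash_size x hash_size, then set each bit by comparing the
    # block mean against the global mean.
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + rng.normal(scale=0.01, size=img.shape)
# Near-duplicate images should land at a small Hamming distance
dist = int(np.count_nonzero(average_hash(img) != average_hash(noisy)))
```

Because the bits depend only on coarse block means, small perturbations flip few bits, which is what makes such signatures useful for near-duplicate detection.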

Image Classification

  • Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278–2324, Nov 1998
  • D. Lu and Q. Weng, “A survey of image classification methods and techniques for improving classification performance,” International Journal of Remote Sensing, vol. 28, no. 5, pp. 823–870, 2007.
  • S. Wu, L. Dong, H. Chen, and J. Zhan, “Application of image classification technology in mangrove information extraction,” in 2015 11th International Conference on Computational Intelligence and Security (CIS), pp. 167–170, Dec 2015.
  • Q. Li, W. Cai, X. Wang, Y. Zhou, D. D. Feng, and M. Chen, “Medical image classification with convolutional neural network,” in 2014 13th International Conference on Control Automation Robotics Vision (ICARCV), pp. 844–848, Dec 2014.
  • J. Zhang, Y. Xia, Q. Wu, and Y. Xie, “Classification of medical images and illustrations in the biomedical literature using synergic deep learning,” CoRR, vol. abs/1706.09092, 2017.
  • L. J. Li and L. Fei-Fei, “What, where and who? classifying events by scene and object recognition,” in 2007 IEEE 11th International Conference on Computer Vision, pp. 1–8, Oct 2007.

Neural Networks

  • J. D. Olden, M. K. Joy, and R. G. Death, “An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data,” Ecological Modelling, vol. 178, 2004.

Network Architecture

  • (AlexNet) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, (USA), pp. 1097–1105, Curran Associates Inc., 2012.
  • (VGG) K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.
  • (Inception) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, June 2015.
  • (ResNet) K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, June 2016.
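The key idea of the ResNet paper above is the identity shortcut: layers learn a residual F(x) rather than the full mapping, so y = F(x) + x. A minimal NumPy sketch (weights and shapes are illustrative):

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = ReLU(F(x) + x): the skip connection carries x around
    # two weight layers, so the layers only model the residual.
    out = np.maximum(0, x @ W1)     # first layer + ReLU
    out = out @ W2                  # second layer
    return np.maximum(0, out + x)   # identity shortcut, then ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
# With zero weights the block reduces to ReLU(identity) -- this is
# why residual blocks are easy to optimize: identity is the default.
y = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))
```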

Transfer Learning

  • S. Akay, M. E. Kundegorski, M. Devereux, and T. P. Breckon, “Transfer learning using convolutional neural networks for object classification within x-ray baggage security imagery,” in 2016 IEEE International Conference on Image Processing (ICIP), pp. 1057–1061, Sept 2016.
  • J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, (Cambridge, MA, USA), pp. 3320–3328, MIT Press, 2014.
  • L. Torrey and J. W. Shavlik, “Transfer learning,” 2009.
  • S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 1345–1359, Oct 2010.
  • A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” CoRR, vol. abs/1403.6382, 2014.
  • M. Huh, P. Agrawal, and A. A. Efros, “What makes imagenet good for transfer learning?,” CoRR, vol. abs/1608.08614, 2016.
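The common recipe in the papers above (especially Razavian et al.) is to freeze a pretrained feature extractor and train only a small classifier on its "off-the-shelf" features. A synthetic sketch, where `W_pre` and the task are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W_pre = rng.normal(size=(10, 16))   # stands in for frozen pretrained layers

def features(x):
    # Frozen feature extractor: W_pre is never updated below.
    return np.maximum(0, x @ W_pre)

# Tiny synthetic task whose labels are a linear function of the features
X = rng.normal(size=(200, 10))
F = features(X)
w_true = rng.normal(size=16)
y = (F @ w_true > np.median(F @ w_true)).astype(float)

# Train only a logistic-regression head on top of the frozen features
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.1 * F.T @ g / len(X)
    b -= 0.1 * g.mean()
acc = float(((F @ w + b > 0) == (y > 0.5)).mean())
```

Only `w` and `b` are trained; in a real setting `features` would be a pretrained network up to some layer, per the fine-tuning/feature-extraction trade-offs studied by Yosinski et al.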

Normalization

  • S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” CoRR, vol. abs/1502.03167, 2015.
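The batch-norm transform from the Ioffe & Szegedy paper normalizes each feature over the mini-batch, then applies a learned scale and shift. A minimal training-time sketch (the running statistics used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension,
    # then scale by gamma and shift by beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

With gamma = 1 and beta = 0, each output feature has mean 0 and variance ~1 regardless of the input distribution, which is the normalization the paper credits with stabilizing training.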

Regularization

  • H. Zou and T. Hastie, “Regularization and variable selection via the elastic net,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 67, no. 2, pp. 301–320, 2005.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., vol. 15, pp. 1929–1958, Jan. 2014.
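The dropout technique from Srivastava et al. can be sketched in its common "inverted" form, which rescales kept units at train time so the test-time forward pass is unchanged:

```python
import numpy as np

def dropout(x, p_drop, rng, train=True):
    # Inverted dropout: zero each unit with probability p_drop and
    # scale survivors by 1/(1 - p_drop), so the expected activation
    # is unchanged and no rescaling is needed at test time.
    if not train:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)

rng = np.random.default_rng(42)
x = np.ones((1000, 100))
y = dropout(x, p_drop=0.5, rng=rng)
```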

Optimizer

  • M. D. Zeiler, “ADADELTA: an adaptive learning rate method,” CoRR, vol. abs/1212.5701, 2012.
  • T. Tieleman and G. Hinton, “Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude.” COURSERA: Neural Networks for Machine Learning, 2012.
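The RMSProp update from the Tieleman & Hinton lecture can be sketched in a few lines: keep a decaying average of squared gradients and divide each update by its root (hyperparameter values here are illustrative defaults):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # Running average of the squared gradient; dividing by its root
    # adapts the effective step size per parameter.
    cache = decay * cache + (1.0 - decay) * grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize f(w) = w^2 starting from w = 5
w, cache = np.array([5.0]), np.zeros(1)
for _ in range(1000):
    grad = 2.0 * w
    w, cache = rmsprop_step(w, grad, cache)
```

Adadelta (Zeiler, above) extends the same idea by also tracking a running average of squared updates, removing the global learning rate.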

Backpropagation

  • D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Neurocomputing: Foundations of research,” ch. Learning Representations by Back-propagating Errors, pp. 696–699, Cambridge, MA, USA: MIT Press, 1988
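The Rumelhart et al. procedure is the chain rule applied layer by layer, propagating the error signal from the output back toward the input. A minimal sketch for a two-layer network with squared loss, checked against a numerical gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))          # inputs
t = rng.normal(size=(8, 1))          # targets
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))

# Forward pass
h = np.tanh(x @ W1)
y = x_out = h @ W2
loss = 0.5 * np.mean((y - t) ** 2)

# Backward pass: chain rule from the loss back to each weight matrix
dy = (y - t) / len(x)                # d loss / d y
dW2 = h.T @ dy
dh = dy @ W2.T
dW1 = x.T @ (dh * (1 - h ** 2))      # tanh'(a) = 1 - tanh(a)^2

# Finite-difference check of one entry of dW1
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.mean((np.tanh(x @ W1p) @ W2 - t) ** 2)
num = (loss_p - loss) / eps
```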

Smart Tricks

  • G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” in NIPS Deep Learning and Representation Learning Workshop, 2015.
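The core trick in the distillation paper is a temperature-scaled softmax: raising T softens the teacher's distribution so the student can learn from the relative probabilities of non-argmax classes. A small sketch with illustrative logits:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the
    # "dark knowledge" in the teacher's near-zero probabilities.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

teacher_logits = np.array([5.0, 2.0, 1.0])
hard = softmax(teacher_logits, T=1.0)   # sharply peaked
soft = softmax(teacher_logits, T=4.0)   # distillation targets
```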

Generative Adversarial Networks

  • I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” CoRR, vol. abs/1406.2661, 2014. (https://arxiv.org/abs/1406.2661)
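The minimax objective from the GAN paper can be sketched directly: the discriminator maximizes log D(x) + log(1 − D(G(z))), while the generator, in the non-saturating variant the paper recommends in practice, maximizes log D(G(z)). The scores below are illustrative stand-ins for network outputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative discriminator logits on real and generated samples
d_real = sigmoid(np.array([2.0, 1.5]))    # D(x) on real data
d_fake = sigmoid(np.array([-1.0, -0.5]))  # D(G(z)) on fakes

# Discriminator loss: -[log D(x) + log(1 - D(G(z)))]
d_loss = -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())
# Non-saturating generator loss: -log D(G(z))
g_loss = -np.log(d_fake).mean()
```

Training alternates gradient steps on these two losses; here the fakes score low, so the generator loss is large, which is exactly the pressure that drives G to improve.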

Style Transfer

Image Retrieval

Object Detection
