A fresh approach to deep learning written in MATLAB
We are just getting started, so please be patient with us. If you find a bug, please report it by opening an issue or emailing [email protected]. In either case, include a small example that helps us reproduce the error. We will work on it as quickly as possible.
- Clone or download the code
- Add the folder to your MATLAB path (see the sketch after this list)
- (optional) run KernelTypes/mexcuda/make_cuda.m for fast CNNs using CuDNN
- (optional) download the benchmark datasets and compiled binaries described below
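
A minimal setup sketch for the first three steps (the folder name used here is an assumption; adjust it to wherever you cloned or extracted the code):

```matlab
% add the toolbox and all of its subfolders to the MATLAB path
addpath(genpath('Meganet.m'));   % replace with your local folder name
savepath;                        % optional: keep the path across sessions

% optional: build the CUDA/CuDNN kernels for fast CNNs
% (requires a CUDA-capable GPU and a working mexcuda toolchain)
run(fullfile('KernelTypes', 'mexcuda', 'make_cuda.m'));
```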
The convMCN kernel type and the average pooling require compiled binaries from the MatConvNet package. Please follow these instructions and add the files for vl_nnconv, vl_nnconvt, and vl_nnpool to your MATLAB path. For best performance, these files can be compiled with GPU or CuDNN support.
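
A quick way to check that the MatConvNet binaries are visible to MATLAB (a sketch; the function names come from the paragraph above, the check itself is plain MATLAB):

```matlab
% verify that the compiled MatConvNet MEX files are on the path;
% exist(...,'file') returns 3 when a MEX file is found
needed = {'vl_nnconv', 'vl_nnconvt', 'vl_nnpool'};
for k = 1:numel(needed)
    if exist(needed{k}, 'file') ~= 3
        warning('%s not found as a MEX file -- convMCN kernels and average pooling will not work.', needed{k});
    end
end
```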
Some of the examples use the following benchmark datasets:
- MNIST
- CIFAR10
- STL-10
The implementation is based on the ideas presented in:
- Haber E, Ruthotto L: Stable Architectures for Deep Neural Networks, Inverse Problems, 2017
- Chang B, Meng L, Haber E, Ruthotto L, Begert D, Holtham E: Reversible Architectures for Arbitrarily Deep Residual Neural Networks, AAAI Conference on Artificial Intelligence, 2018
- Haber E, Ruthotto L, Holtham E, Jun SH: Learning across scales - A multiscale method for Convolution Neural Networks, AAAI Conference on Artificial Intelligence, 2018