A collection of optimization techniques I have implemented in JAX and Flux, with particular effort put into readability and reproducibility.
- Python >= 3.8
- jax
$ git clone https://github.com/BeeGass/Readable-Optimization.git
$ cd Readable-Optimization/vae-jax
$ python main.py
- TODO
- TODO
$ cd Readable-Optimization/vae-flux
$ # TBA
Config File Template
TBA
Optimizer | JAX/Flax | Flux | Config | Paper | Animations | Samples |
---|---|---|---|---|---|---|
Gradient Descent | ☐ | ☐ | ☐ | Link | TBA | TBA |
Stochastic Gradient Descent | ☐ | ☐ | ☐ | Link | TBA | TBA |
Batch Gradient Descent | ☐ | ☐ | ☐ | Link | TBA | TBA |
Mini-Batch Gradient Descent | ☐ | ☐ | ☐ | Link | TBA | TBA |
SGD w/ Momentum | ☐ | ☐ | ☐ | Link | TBA | TBA |
Nesterov's Gradient Acceleration | ☐ | ☐ | ☐ | Link | TBA | TBA |
AdaGrad (Adaptive Gradient) | ☐ | ☐ | ☐ | Link | TBA | TBA |
AdaDelta | ☐ | ☐ | ☐ | Link | TBA | TBA |
RMSProp (Root Mean Square Propagation) | ☐ | ☐ | ☐ | Link | TBA | TBA |
Adam (Adaptive Moment Estimation) | ☐ | ☐ | ☐ | Link | TBA | TBA |
AdamW | ☐ | ☐ | ☐ | Link | TBA | TBA |
Optimizer | JAX/Flax | Flux | Config | Paper | Animations | Samples |
---|---|---|---|---|---|---|
Newton's Method | ☐ | ☐ | ☐ | Link | TBA | TBA |
Secant Method | ☐ | ☐ | ☐ | Link | TBA | TBA |
Davidon-Fletcher-Powell (DFP) | ☐ | ☐ | ☐ | Link | TBA | TBA |
Broyden-Fletcher-Goldfarb-Shanno (BFGS) | ☐ | ☐ | ☐ | Link | TBA | TBA |
Limited-memory BFGS (L-BFGS) | ☐ | ☐ | ☐ | Link | TBA | TBA |
Newton-Raphson | ☐ | ☐ | ☐ | Link | TBA | TBA |
Levenberg-Marquardt | ☐ | ☐ | ☐ | Link | TBA | TBA |
Powell's Method | ☐ | ☐ | ☐ | Link | TBA | TBA |
Steepest Descent | ☐ | ☐ | ☐ | Link | TBA | TBA |
Truncated Newton | ☐ | ☐ | ☐ | Link | TBA | TBA |
Fletcher-Reeves | ☐ | ☐ | ☐ | Link | TBA | TBA |
@software{B_Gass_Optimization_2022,
author = {Gass, B},
doi = {10.5281/zenodo.1234},
month = {1},
title = {{Readable-Optimization}},
url = {https://github.com/BeeGass/Readable-Optimization},
version = {1.0.0},
year = {2022}
}