BOML is a modularized optimization library that unifies several ML algorithms into a common bilevel optimization framework. It provides interfaces for implementing popular bilevel optimization algorithms, so that you can quickly build your own meta-learning neural network and test its performance.
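Concretely, these methods can all be cast as instances of the generic bilevel program below, where the upper-level (meta) objective F is minimized over meta-parameters while the task-specific parameters solve a lower-level problem (the notation here is generic and not taken from any single paper):

```latex
\min_{\theta} \; F\bigl(\theta, \omega^{*}(\theta)\bigr)
\quad \text{s.t.} \quad
\omega^{*}(\theta) \in \operatorname*{arg\,min}_{\omega} \; f(\theta, \omega)
```

In meta-initialization methods such as MAML, the meta-parameters are the shared initialization and the lower-level variables are the task-adapted weights; in hyperparameter optimization, the meta-parameters are the hyperparameters and the lower-level variables are the model weights.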
This ReadMe.md contains recommended instructions for training MAML-based and meta-representation models for few-shot learning. You can build your own networks or use the provided structures; both are covered in the attached documentation.
Meta-learning handles new incoming tasks well by learning an initialization with favorable generalization capability, and it performs well even when only a small amount of training data is available, which makes it a natural source of new solutions to the few-shot learning problem.
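To make "learning an initialization" concrete, below is a minimal, self-contained NumPy sketch of one such scheme, using the first-order MAML approximation on toy linear-regression tasks. It is a conceptual illustration only, not BOML's API; all names (`make_task`, `inner_lr`, and so on) are made up for this example.

```python
# First-order MAML-style sketch on toy 1-D linear regression tasks.
# Conceptual illustration only -- not BOML's API; all names are made up.
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Sample a toy task: fit y = a * x for a random slope a (5 train / 5 val points)."""
    a = rng.uniform(0.5, 2.5)
    x_tr, x_va = rng.normal(size=5), rng.normal(size=5)
    return (x_tr, a * x_tr), (x_va, a * x_va)

def mse(w, x, y):
    """Mean squared error of the linear model y_hat = w * x."""
    return np.mean((w * x - y) ** 2)

def mse_grad(w, x, y):
    """Gradient of the mean squared error with respect to w."""
    return np.mean(2.0 * (w * x - y) * x)

w_meta = 0.0                          # the meta-learned initialization
inner_lr, meta_lr = 0.05, 0.1
for step in range(201):
    tasks = [make_task() for _ in range(8)]              # meta-batch of tasks
    meta_grad = meta_loss = 0.0
    for (x_tr, y_tr), (x_va, y_va) in tasks:
        w_task = w_meta - inner_lr * mse_grad(w_meta, x_tr, y_tr)  # inner adaptation step
        meta_grad += mse_grad(w_task, x_va, y_va)                  # first-order meta-gradient
        meta_loss += mse(w_task, x_va, y_va)
    w_meta -= meta_lr * meta_grad / len(tasks)           # outer (meta) update of the initialization
    if step % 50 == 0:
        print(f"step {step:3d}   post-adaptation val loss {meta_loss / len(tasks):.3f}")
```

Each meta-step adapts the current initialization on a batch of tasks' training sets and then updates it with the gradients of the corresponding validation losses, so the learned initialization is one from which a single inner step already fits a new task well.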
- Hyperparameter Optimization with Approximate Gradient (Implicit HG); a toy sketch of the implicit hypergradient follows this list
- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (MAML)
- On First-Order Meta-Learning Algorithms (FOMAML)
- Bilevel Programming for Hyperparameter Optimization and Meta-Learning (Reverse HG)
- Truncated Back-propagation for Bilevel Optimization (Truncated Reverse HG)
- Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace (MTNet)
- Meta-Learning with Warped Gradient Descent (Warp-Grad)
- DARTS: Differentiable Architecture Search (DARTS)
- A Generic First-Order Algorithmic Framework for Bi-Level Programming Beyond Lower-Level Singleton (BDA)
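The methods above differ mainly in how they obtain the hypergradient, i.e., the gradient of the upper-level objective with respect to the meta-parameters: by back-propagating through the unrolled inner loop (Reverse HG, MAML), by truncating that unroll (Truncated Reverse HG), by first-order approximations (FOMAML), or by implicit differentiation (Implicit HG). The toy example below (plain NumPy, not BOML code, with made-up names) computes the implicit hypergradient of a validation loss with respect to an L2-regularization weight and checks it against a finite-difference estimate.

```python
# Toy implicit hypergradient: d(validation loss)/d(lambda) for ridge regression.
# Conceptual illustration only -- not BOML's API; all names are made up.
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 5
X_tr, X_va = rng.normal(size=(n, d)), rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=n)
y_va = X_va @ w_true + 0.1 * rng.normal(size=n)

def inner_solution(lam):
    """Lower level: w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2 (closed form)."""
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def val_loss(lam):
    """Upper level: validation loss evaluated at the lower-level solution."""
    r = X_va @ inner_solution(lam) - y_va
    return r @ r

def implicit_hypergrad(lam):
    """dL/dlam via the implicit function theorem: dw*/dlam = -(X'X + lam I)^{-1} w*."""
    w_star = inner_solution(lam)
    dw_dlam = -np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), w_star)
    grad_w_L = 2.0 * X_va.T @ (X_va @ w_star - y_va)   # gradient of the outer loss in w
    return grad_w_L @ dw_dlam

lam, eps = 0.5, 1e-5
fd = (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps)   # finite-difference check
print(f"implicit hypergradient: {implicit_hypergrad(lam):+.6f}")
print(f"finite difference     : {fd:+.6f}")
```

Because the lower-level problem here has a closed-form solution, the implicit-function-theorem formula can be verified directly; in large-scale settings the linear solve is typically replaced by an approximate iterative solver, which is where the "approximate gradient" in Implicit HG comes from.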
For more detailed information on basic functions and the construction process, please refer to our help page: Help Documentation
You can build your own networks or use the structures provided in py_bm.networks. The scripts in the train_script directory cover the basic training process.
Here we give recommended settings for specific hyperparameters so that you can quickly test the performance of popular algorithms.