This repository is maintained by Massimo Caccia and Timothée Lesort. Don't hesitate to email us to collaborate or to fix entries ({massimo.p.caccia, t.lesort} at gmail.com). The automation script of this repo is adapted from Automatic_Awesome_Bibliography.
To contribute to the repository, please follow the process described here
You can use our bib.tex directly in Overleaf via this link
- Classics
- Empirical Study
- Surveys
- Influentials
- New Settings or Metrics
- Regularization Methods
- Distillation Methods
- Rehearsal Methods
- Generative Replay Methods
- Dynamic Architectures or Routing Methods
- Hybrid Methods
- Continual Few-Shot Learning
- Meta-Continual Learning
- Lifelong Reinforcement Learning
- Continual Generative Modeling
- Applications
- Thesis
- Libraries
- Workshops
## Classics

- Catastrophic forgetting in connectionist networks , (1999) by French, Robert M. [bib]
- Lifelong robot learning , (1995) by Thrun, Sebastian and Mitchell, Tom M [bib]
Argues knowledge transfer is essential if robots are to learn control with moderate learning times
- Catastrophic Forgetting, Rehearsal and Pseudorehearsal , (1995) by Anthony Robins [bib]
- Catastrophic interference in connectionist networks: The sequential learning problem , (1989) by McCloskey, Michael and Cohen, Neal J [bib]
Introduces CL and reveals the catastrophic forgetting problem
## Empirical Study

- Rethinking Experience Replay: a Bag of Tricks for Continual Learning , (2021) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo and Calderara, Simone [bib]
- A comprehensive study of class incremental learning algorithms for visual tasks , (2021) by Eden Belouadah, Adrian Popescu and Ioannis Kanellos [bib]
- Online Continual Learning in Image Classification: An Empirical Survey, (2021) by Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim and Scott Sanner [bib]
- GDumb: A simple approach that questions our progress in continual learning, (2020) by Prabhu, Ameya, Torr, Philip HS and Dokania, Puneet K [bib]
Introduces a very simple method that outperforms almost all methods on the standard CL benchmarks, suggesting that we need new and better benchmarks
- Continual learning: A comparative study on how to defy forgetting in classification tasks , (2019) by Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars [bib]
Extensive empirical study of CL methods (in the multi-head setting)
- Three scenarios for continual learning , (2019) by van de Ven, Gido M and Tolias, Andreas S [bib]
An extensive review of CL methods in three different scenarios (task-, domain-, and class-incremental learning)
- Continuous learning in single-incremental-task scenarios, (2019) by Maltoni, Davide and Lomonaco, Vincenzo [bib]
- Towards Robust Evaluations of Continual Learning , (2018) by Farquhar, Sebastian and Gal, Yarin [bib]
Proposes desiderata and reexamines the evaluation protocol
- Catastrophic forgetting: still a problem for DNNs, (2018) by Pfülb, B., Gepperth, A., Abdullah, S. and Krawczyk, A. [bib]
- Measuring Catastrophic Forgetting in Neural Networks, (2017) by Kemker, R., McClure, M., Abitino, A., Hayes, T. and Kanan, C. [bib]
- CORe50: a New Dataset and Benchmark for Continuous Object Recognition , (2017) by Vincenzo Lomonaco and Davide Maltoni [bib]
- An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks , (2013) by Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A. and Bengio, Y. [bib]
Investigates CF in neural networks
## Surveys

- Towards Continual Reinforcement Learning: A Review and Perspectives, (2020) by Khimya Khetarpal, Matthew Riemer, Irina Rish and Doina Precup [bib]
A review on continual reinforcement learning
- Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges , (2020) by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez [bib]
- A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning , (2020) by Mundt, Martin, Hong, Yong Won, Pliushch, Iuliia and Ramesh, Visvanathan [bib]
propose a consolidated view to bridge continual learning, active learning and open set recognition in DNNs
- Continual Lifelong Learning in Natural Language Processing: A Survey , (2020) by Magdalena Biesialska, Katarzyna Biesialska and Marta R. Costa-jussà [bib]
An extensive review of CL in Natural Language Processing (NLP)
- Continual lifelong learning with neural networks: A review , (2019) by German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan and Stefan Wermter [bib]
An extensive review of CL
- Incremental learning algorithms and applications , (2016) by Gepperth, Alexander and Hammer, Barbara [bib]
A survey on incremental learning and the various applications fields
## Influentials

- Efficient Lifelong Learning with A-GEM , (2019) by Chaudhry, Arslan, Ranzato, Marc’Aurelio, Rohrbach, Marcus and Elhoseiny, Mohamed [bib]
A more efficient version of GEM; introduces online continual learning (see the gradient-projection sketch at the end of this section)
- Towards Robust Evaluations of Continual Learning , (2018) by Farquhar, Sebastian and Gal, Yarin [bib]
Proposes desiderata and reexamines the evaluation protocol
- Continual Learning in Practice , (2018) by Diethe, Tom, Borchert, Tom, Thereska, Eno, Pigem, Borja de Balle and Lawrence, Neil [bib]
Proposes a reference architecture for a continual learning system
- Overcoming catastrophic forgetting in neural networks , (2017) by Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka and others [bib]
- Gradient Episodic Memory for Continual Learning , (2017) by Lopez-Paz, David and Ranzato, Marc-Aurelio [bib]
A model that alleviates CF via constrained optimization
- Continual learning with deep generative replay , (2017) by Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong and Kim, Jiwon [bib]
Introduces generative replay
- An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks , (2013) by Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A. and Bengio, Y. [bib]
Investigates CF in neural networks
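The projection at the heart of A-GEM, referenced above, fits in a few lines. This is an illustrative sketch, not the authors' code: `grad` and `grad_ref` are assumed to be flattened gradient vectors, with `grad_ref` computed on a batch drawn from episodic memory.

```python
import torch

def agem_project(grad: torch.Tensor, grad_ref: torch.Tensor) -> torch.Tensor:
    """Project `grad` so it no longer conflicts with the memory gradient.

    If the dot product with `grad_ref` is non-negative, the gradient is
    left untouched; otherwise its conflicting component is removed
    (A-GEM, Chaudhry et al., 2019).
    """
    dot = torch.dot(grad, grad_ref)
    if dot < 0:
        grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
    return grad
```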
## New Settings or Metrics

- IIRC: Incremental Implicitly-Refined Classification , (2021) by Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani and Sarath Chandar [bib]
A setup and benchmark to evaluate lifelong learning models in more real-life aligned scenarios.
- Wandering Within a World: Online Contextualized Few-Shot Learning , (2020) by Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer and Richard S. Zemel [bib]
Proposes a new continual few-shot setting where spatial and temporal context can be leveraged and unseen classes need to be predicted
- Defining Benchmarks for Continual Few-Shot Learning , (2020) by Antoniou, Antreas, Patacchiola, Massimiliano, Ochal, Mateusz and Storkey, Amos [bib]
(title is a good enough summary)
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning , (2020) by Caccia, Massimo, Rodriguez, Pau, Ostapenko, Oleksiy, Normandin, Fabrice, Lin, Min, Caccia, Lucas, Laradji, Issam, Rish, Irina, Lacoste, Alexandre, Vazquez, David and Charlin, Laurent [bib]
Proposes a new approach to CL evaluation more aligned with real-life applications, bringing CL closer to Online Learning and Open-World learning
- Compositional Language Continual Learning , (2020) by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny [bib]
method for compositional continual learning of sequence-to-sequence models
- A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning , (2020) by Mundt, Martin, Hong, Yong Won, Pliushch, Iuliia and Ramesh, Visvanathan [bib]
propose a consolidated view to bridge continual learning, active learning and open set recognition in DNNs
- Don't forget, there is more than forgetting: new metrics for Continual Learning, (2018) by Díaz-Rodríguez, Natalia, Lomonaco, Vincenzo, Filliat, David and Maltoni, Davide [bib]
Introduces a CL score that takes more than just forgetting into account (the ACC/BWT sketch below shows the baseline metrics such scores extend)
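For reference, the metric papers above build on the accuracy matrix popularized by GEM (Lopez-Paz & Ranzato, 2017): `R[i, j]` is the test accuracy on task `j` after training up to task `i`. A minimal sketch of the two standard aggregates, average accuracy (ACC) and backward transfer (BWT):

```python
import numpy as np

def acc_bwt(R: np.ndarray):
    """R[i, j]: test accuracy on task j after training on tasks 0..i.

    ACC averages the final row; BWT measures how much training on later
    tasks changed earlier-task accuracy (negative BWT = forgetting).
    """
    T = R.shape[0]
    acc = float(R[-1].mean())
    bwt = float(np.mean([R[-1, j] - R[j, j] for j in range(T - 1)]))
    return acc, bwt
```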
## Regularization Methods

- Continual Learning with Bayesian Neural Networks for Non-Stationary Data , (2020) by Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt and Stephan Günnemann [bib]
continual learning for non-stationary data using Bayesian neural networks and memory-based online variational Bayes
- Improving and Understanding Variational Continual Learning , (2019) by Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui and Richard E. Turner [bib]
Improved results and interpretation of VCL.
- Uncertainty-based Continual Learning with Adaptive Regularization , (2019) by Ahn, Hongjoon, Cha, Sungmin, Lee, Donggyu and Moon, Taesup [bib]
Introduces VCL with uncertainty measured for neurons instead of weights.
- Functional Regularisation for Continual Learning with Gaussian Processes , (2019) by Titsias, Michalis K, Schwarz, Jonathan, Matthews, Alexander G de G, Pascanu, Razvan and Teh, Yee Whye [bib]
functional regularisation for Continual Learning: avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function
- Task Agnostic Continual Learning Using Online Variational Bayes , (2018) by Chen Zeno, Itay Golan, Elad Hoffer and Daniel Soudry [bib]
Introduces an optimizer for CL that relies on closed-form updates of the mean and variance of a Bayesian neural network; introduces the label trick for class learning (single-head)
- Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation , (2018) by Xu He and Herbert Jaeger [bib]
Conceptor-Aided Backprop (CAB): gradients are shielded by conceptors against degradation of previously learned tasks
- Overcoming Catastrophic Forgetting with Hard Attention to the Task , (2018) by Serra, Joan, Suris, Didac, Miron, Marius and Karatzoglou, Alexandros [bib]
Introducing a hard attention idea with binary masks
- Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence , (2018) by Chaudhry, Arslan, Dokania, Puneet K, Ajanthan, Thalaiyasingam and Torr, Philip HS [bib]
Formalizes the shortcomings of multi-head evaluation, as well as the importance of replay in the single-head setup; presents an improved version of EWC
- Variational Continual Learning , (2018) by Cuong V. Nguyen, Yingzhen Li, Thang D. Bui and Richard E. Turner [bib]
- Progress & compress: A scalable framework for continual learning , (2018) by Schwarz, Jonathan, Luketina, Jelena, Czarnecki, Wojciech M, Grabska-Barwinska, Agnieszka, Teh, Yee Whye, Pascanu, Razvan and Hadsell, Raia [bib]
A new P&C architecture; online EWC retains knowledge of previous tasks, while knowledge distillation retains knowledge of the current task (multi-head setting, RL)
- Online structured Laplace approximations for overcoming catastrophic forgetting, (2018) by Ritter, Hippolyt, Botev, Aleksandar and Barber, David [bib]
- Facilitating Bayesian Continual Learning by Natural Gradients and Stein Gradients , (2018) by Chen, Yu, Diethe, Tom and Lawrence, Neil [bib]
Improves on VCL
- Overcoming catastrophic forgetting in neural networks , (2017) by Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka and others [bib]
Introduces Elastic Weight Consolidation (EWC): a quadratic penalty anchoring parameters that were important for previous tasks, weighted by a diagonal Fisher estimate (see the sketch at the end of this section)
- Memory Aware Synapses: Learning what (not) to forget , (2017) by Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach and Tinne Tuytelaars [bib]
Importance of parameter measured based on their contribution to change in the learned prediction function
- Continual Learning Through Synaptic Intelligence , (2017) by Zenke, Friedemann, Poole, Ben and Ganguli, Surya [bib]
Synaptic Intelligence (SI). Importance of parameter measured based on their contribution to change in the loss.
- Overcoming catastrophic forgetting by incremental moment matching, (2017) by Lee, Sang-Woo, Kim, Jin-Hwa, Jun, Jaehyun, Ha, Jung-Woo and Zhang, Byoung-Tak [bib]
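As a concrete anchor for the penalty-based methods in this section (see the EWC entry above), here is a minimal sketch of the EWC regularizer. It is an illustration under common assumptions: `fisher` and `old_params` are dicts keyed by parameter name, holding a diagonal Fisher estimate and a detached parameter copy from the end of the previous task; SI and MAS differ mainly in how the importance weights are estimated.

```python
import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module, fisher: dict, old_params: dict,
                lam: float = 1000.0) -> torch.Tensor:
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    Added to the new task's loss, it pulls parameters back toward their
    previous values in proportion to their estimated importance.
    """
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty
```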
## Distillation Methods

- Dark Experience for General Continual Learning: a Strong, Simple Baseline , (2020) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo, Abati, Davide and Calderara, Simone [bib]
- Online Continual Learning under Extreme Memory Constraints , (2020) by Fini, Enrico, Lathuilière, Stèphane, Sangineto, Enver, Nabi, Moin and Ricci, Elisa [bib]
Introduces Memory-Constrained Online Continual Learning, a setting where no information can be transferred between tasks, and proposes a distillation-based solution (Batch-level Distillation)
- PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning , (2020) by Douillard, Arthur, Cord, Matthieu, Ollion, Charles, Robert, Thomas and Valle, Eduardo [bib]
Novel knowledge distillation that efficiently trades off rigidity and plasticity to learn a large number of small tasks
- Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild , (2019) by Lee, Kibok, Lee, Kimin, Shin, Jinwoo and Lee, Honglak [bib]
Introducing global distillation loss and balanced finetuning; leveraging unlabeled data in the open world setting (Single-head setting)
- Large scale incremental learning , (2019) by Wu, Yue, Chen, Yinpeng, Wang, Lijuan, Ye, Yuancheng, Liu, Zicheng, Guo, Yandong and Fu, Yun [bib]
Introducing bias parameters to the last fully connected layer to resolve the data imbalance issue (Single-head setting)
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Lifelong learning via progressive distillation and retrospection , (2018) by Hou, Saihui, Pan, Xinyu, Change Loy, Chen, Wang, Zilei and Lin, Dahua [bib]
Introducing an expert of the current task in the knowledge distillation method (Multi-head setting)
- End-to-end incremental learning , (2018) by Castro, Francisco M, Marin-Jimenez, Manuel J, Guil, Nicolas, Schmid, Cordelia and Alahari, Karteek [bib]
Finetuning the last fully connected layer with a balanced dataset to resolve the data imbalance issue (Single-head setting)
- Learning without forgetting , (2017) by Li, Zhizhong and Hoiem, Derek [bib]
Functional regularization through distillation (keeping the output of the updated network on the new data close to the output of the old network on the new data); a minimal sketch of this loss closes this section
- iCaRL: Incremental classifier and representation learning , (2017) by Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, Sperl, Georg and Lampert, Christoph H [bib]
Binary cross-entropy loss for representation learning & exemplar memory (or coreset) for replay (Single-head setting)
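A minimal sketch of the temperature-scaled distillation term used, in one form or another, by most methods in this section (illustrative, not any specific paper's code): the old network's logits on the new data serve as soft targets for the updated network.

```python
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T: float = 2.0):
    """Keep the updated network's softened outputs on new data close to
    the frozen old network's outputs on the same data (LwF-style).

    Gradients flow only through `new_logits`; `old_logits` are detached.
    """
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits.detach() / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```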
## Rehearsal Methods

- Rethinking Experience Replay: a Bag of Tricks for Continual Learning , (2021) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo and Calderara, Simone [bib]
- Graph-Based Continual Learning , (2021) by Binh Tang and David S. Matteson [bib]
Use graphs to link saved samples and improve the memory quality.
- Online Class-Incremental Continual Learning with Adversarial Shapley Value , (2021) by Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim and Jongseong Jang [bib]
Uses Shapley value adversarially to select which samples to replay
- Dark Experience for General Continual Learning: a Strong, Simple Baseline , (2020) by Buzzega, Pietro, Boschini, Matteo, Porrello, Angelo, Abati, Davide and Calderara, Simone [bib]
- GDumb: A simple approach that questions our progress in continual learning, (2020) by Prabhu, Ameya, Torr, Philip HS and Dokania, Puneet K [bib]
Introduces a very simple method that outperforms almost all methods on the standard CL benchmarks, suggesting that we need new and better benchmarks
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes , (2020) by Timothée Lesort [bib]
- Imbalanced Continual Learning with Partitioning Reservoir Sampling , (2020) by Kim, Chris Dongjoo, Jeong, Jinseo and Kim, Gunhee [bib]
Proposes Partitioning Reservoir Sampling to maintain a balanced replay memory under imbalanced (multi-label) streams (a plain reservoir-sampling sketch closes this section)
- PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning , (2020) by Douillard, Arthur, Cord, Matthieu, Ollion, Charles, Robert, Thomas and Valle, Eduardo [bib]
Novel knowledge distillation that efficiently trades off rigidity and plasticity to learn a large number of small tasks
- REMIND Your Neural Network to Prevent Catastrophic Forgetting , (2020) by Hayes, Tyler L., Kafle, Kushal, Shrestha, Robik, Acharya, Manoj and Kanan, Christopher [bib]
- Efficient Lifelong Learning with A-GEM , (2019) by Chaudhry, Arslan, Ranzato, Marc’Aurelio, Rohrbach, Marcus and Elhoseiny, Mohamed [bib]
A more efficient version of GEM; introduces online continual learning
- Orthogonal Gradient Descent for Continual Learning , (2019) by Mehrdad Farajtabar, Navid Azizan, Alex Mott and Ang Li [bib]
Projects the gradients from new tasks onto a subspace in which the network output on previous tasks does not change, while keeping the projected gradient in a useful direction for learning the new task
- Gradient based sample selection for online continual learning , (2019) by Aljundi, Rahaf, Lin, Min, Goujaud, Baptiste and Bengio, Yoshua [bib]
sample selection as a constraint reduction problem based on the constrained optimization view of continual learning
- Online Continual Learning with Maximal Interfered Retrieval , (2019) by Aljundi, Rahaf, Caccia, Lucas, Belilovsky, Eugene, Caccia, Massimo, Lin, Min, Charlin, Laurent and Tuytelaars, Tinne [bib]
Controlled sampling of memories for replay to automatically rehearse on tasks currently undergoing the most forgetting
- Online Learned Continual Compression with Adaptive Quantization Module , (2019) by Caccia, Lucas, Belilovsky, Eugene, Caccia, Massimo and Pineau, Joelle [bib]
Uses stacks of VQ-VAE modules to progressively compress the data stream, enabling better rehearsal
- Large scale incremental learning , (2019) by Wu, Yue, Chen, Yinpeng, Wang, Lijuan, Ye, Yuancheng, Liu, Zicheng, Guo, Yandong and Fu, Yun [bib]
Introducing bias parameters to the last fully connected layer to resolve the data imbalance issue (Single-head setting)
- Learning a Unified Classifier Incrementally via Rebalancing, (2019) by Hou, Saihui, Pan, Xinyu, Loy, Chen Change, Wang, Zilei and Lin, Dahua [bib]
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Experience replay for continual learning , (2019) by Rolnick, David, Ahuja, Arun, Schwarz, Jonathan, Lillicrap, Timothy and Wayne, Gregory [bib]
- Gradient Episodic Memory for Continual Learning , (2017) by Lopez-Paz, David and Ranzato, Marc-Aurelio [bib]
A model that alleviates CF via constrained optimization
- iCaRL: Incremental classifier and representation learning , (2017) by Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, Sperl, Georg and Lampert, Christoph H [bib]
Binary cross-entropy loss for representation learning & exemplar memory (or coreset) for replay (Single-head setting)
- Catastrophic Forgetting, Rehearsal and Pseudorehearsal , (1995) by Anthony Robins [bib]
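Many of the buffers used by the methods above are maintained with reservoir sampling, which keeps a uniform random subsample of the stream without knowing its length in advance. A plain sketch (real implementations store tensors and often add class balancing, as in Partitioning Reservoir Sampling):

```python
import random

class ReservoirBuffer:
    """Fixed-capacity memory holding a uniform sample of everything seen."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))
```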
## Generative Replay Methods

- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes , (2020) by Timothée Lesort [bib]
- Brain-Like Replay For Continual Learning With Artificial Neural Networks , (2020) by van de Ven, Gido M, Siegelmann, Hava T and Tolias, Andreas S [bib]
- Learning to remember: A synaptic plasticity driven framework for continual learning , (2019) by Ostapenko, Oleksiy, Puscas, Mihai, Klein, Tassilo, Jahnichen, Patrick and Nabi, Moin [bib]
Introduces Dynamic Generative Memory (DGM), which relies on conditional generative adversarial networks with learnable connection plasticity realized with neural masking
- Generative Models from the perspective of Continual Learning , (2019) by Lesort, Timothée, Caselles-Dupré, Hugo, Garcia-Ortiz, Michael, Goudou, Jean-François and Filliat, David [bib]
Extensive evaluation of CL methods for generative modeling
- Closed-loop Memory GAN for Continual Learning , (2019) by Rios, Amanda and Itti, Laurent [bib]
- Marginal replay vs conditional replay for continual learning , (2019) by Lesort, Timothée, Gepperth, Alexander, Stoian, Andrei and Filliat, David [bib]
Extensive evaluation of generative replay methods
- Generative replay with feedback connections as a general strategy for continual learning , (2018) by Gido M. van de Ven and Andreas S. Tolias [bib]
smarter Generative Replay
- Continual learning with deep generative replay , (2017) by Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong and Kim, Jiwon [bib]
Introduces generative replay (a schematic of the replay step follows below)
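The replay step of Shin et al.'s scholar model can be sketched schematically: a frozen copy of the previous generator produces pseudo-inputs, and the frozen previous solver labels them; these pseudo-batches are mixed with real data when training on a new task. The interfaces below (`old_generator.sample`, a callable `old_solver`) are hypothetical names, not a real library API.

```python
import torch

@torch.no_grad()
def replay_batch(old_generator, old_solver, batch_size: int):
    """Draw a pseudo-batch from the previous 'scholar': the frozen
    generator produces inputs, the frozen solver pseudo-labels them."""
    x_replay = old_generator.sample(batch_size)    # assumed interface
    y_replay = old_solver(x_replay).argmax(dim=1)  # pseudo-labels
    return x_replay, y_replay
```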
## Dynamic Architectures or Routing Methods

- ORACLE: Order Robust Adaptive Continual Learning , (2019) by Jaehong Yoon and Saehoon Kim and Eunho Yang and Sung Ju Hwang [bib]
- Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting , (2019) by Xilai Li and Yingbo Zhou and Tianfu Wu and Richard Socher and Caiming Xiong [bib]
- Incremental Learning through Deep Adaptation , (2018) by Amir Rosenfeld and John K. Tsotsos [bib]
- PackNet: Adding multiple tasks to a single network by iterative pruning, (2018) by Mallya, Arun and Lazebnik, Svetlana [bib]
- Piggyback: Adapting a single network to multiple tasks by learning to mask weights, (2018) by Mallya, Arun, Davis, Dillon and Lazebnik, Svetlana [bib]
Learns a per-task binary mask over a frozen backbone network (see the masking sketch at the end of this section)
- Continual Learning in Practice , (2018) by Diethe, Tom, Borchert, Tom, Thereska, Eno, Pigem, Borja de Balle and Lawrence, Neil [bib]
Proposes a reference architecture for a continual learning system
- Growing a brain: Fine-tuning by increasing model capacity, (2017) by Wang, Yu-Xiong, Ramanan, Deva and Hebert, Martial [bib]
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks , (2017) by Chrisantha Fernando and Dylan Banarse and Charles Blundell and Yori Zwols and David Ha and Andrei A. Rusu and Alexander Pritzel and Daan Wierstra [bib]
- Lifelong learning with dynamically expandable networks, (2017) by Yoon, Jaehong, Yang, Eunho, Lee, Jeongtae and Hwang, Sung Ju [bib]
- Progressive Neural Networks , (2016) by Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R. and Hadsell, R. [bib]
Each task has its own column of parameters, with lateral connections to the columns of previous tasks
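To make the masking idea behind PackNet and Piggyback concrete, here is a rough Piggyback-style layer (a simplified illustration, not the paper's implementation): the backbone weight is frozen, each task learns real-valued scores, and a straight-through estimator lets gradients reach the scores through the hard thresholding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackLinear(nn.Module):
    """Frozen backbone weight modulated by a learned binary mask."""

    def __init__(self, in_features: int, out_features: int,
                 threshold: float = 5e-3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01,
                                   requires_grad=False)  # frozen backbone
        # Real-valued scores; a full implementation keeps one set per task.
        self.scores = nn.Parameter(torch.full((out_features, in_features), 1e-2))
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hard_mask = (self.scores > self.threshold).float()
        # Straight-through estimator: forward uses the hard mask,
        # backward passes gradients to the underlying scores.
        mask = hard_mask + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask)
```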
## Hybrid Methods

- Continual learning with hypernetworks , (2020) by Johannes von Oswald, Christian Henning, João Sacramento and Benjamin F. Grewe [bib]
Learns task-conditioned hypernetworks and task embeddings for continual learning; hypernetworks offer good model compression.
- Compacting, Picking and Growing for Unforgetting Continual Learning , (2019) by Hung, Ching-Yi, Tu, Cheng-Hao, Wu, Cheng-En, Chen, Chien-Hung, Chan, Yi-Ming and Chen, Chu-Song [bib]
Approach leverages the principles of deep model compression, critical weights selection, and progressive networks expansion. All enforced in an iterative manner
- A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning , (2019) by Lee, Soochan, Ha, Junsoo, Zhang, Dongsu and Kim, Gunhee [bib]
This paper introduces an expansion-based approach for task-free continual learning
## Continual Few-Shot Learning

- Wandering Within a World: Online Contextualized Few-Shot Learning , (2020) by Mengye Ren, Michael L. Iuzzolino, Michael C. Mozer and Richard S. Zemel [bib]
Proposes a new continual few-shot setting where spatial and temporal context can be leveraged and unseen classes need to be predicted
- Defining Benchmarks for Continual Few-Shot Learning , (2020) by Antoniou, Antreas, Patacchiola, Massimiliano, Ochal, Mateusz and Storkey, Amos [bib]
(title is a good enough summary)
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning , (2020) by Caccia, Massimo, Rodriguez, Pau, Ostapenko, Oleksiy, Normandin, Fabrice, Lin, Min, Caccia, Lucas, Laradji, Issam, Rish, Irina, Lacoste, Alexandre, Vazquez, David and Charlin, Laurent [bib]
Proposes a new approach to CL evaluation more aligned with real-life applications, bringing CL closer to Online Learning and Open-World learning
## Meta-Continual Learning

- Learning from the Past: Continual Meta-Learning via Bayesian Graph Modeling , (2019) by Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh and Yang Yang [bib]
- Online Meta-Learning , (2019) by Finn, Chelsea, Rajeswaran, Aravind, Kakade, Sham and Levine, Sergey [bib]
Defines online meta-learning; proposes Follow the Meta Leader (FTML), roughly an online MAML (an inner/outer-loop sketch closes this section)
- Reconciling meta-learning and continual learning with online mixtures of tasks , (2019) by Jerfel, Ghassen, Grant, Erin, Griffiths, Tom and Heller, Katherine A [bib]
Meta-learns a tasks structure; continual adaptation via non-parametric prior
- Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL , (2019) by Anusha Nagabandi, Chelsea Finn and Sergey Levine [bib]
Formulates an online learning procedure that uses SGD to update model parameters and an EM algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models for a non-stationary task distribution
- Task Agnostic Continual Learning via Meta Learning , (2019) by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A. Rusu, Yee Whye Teh and Razvan Pascanu [bib]
Introduces the What & How framework; enables task-agnostic CL with meta-learned task inference
- La-MAML: Look-ahead Meta Learning for Continual Learning , (2020) by Gunshi Gupta, Karmesh Yadav and Liam Paull [bib]
Proposes an online replay-based meta-continual learning algorithm with learning-rate modulation to mitigate catastrophic forgetting
- Learning to Continually Learn , (2020) by Beaulieu, Shawn, Frati, Lapo, Miconi, Thomas, Lehman, Joel, Stanley, Kenneth O, Clune, Jeff and Cheney, Nick [bib]
Follow-up of OML. Meta-learns an activation-gating function instead.
- Meta-Learning Representations for Continual Learning , (2019) by Javed, Khurram and White, Martha [bib]
Introduces OML (Online-aware Meta-Learning), i.e., learns how to do online updates without forgetting
- Meta-learnt priors slow down catastrophic forgetting in neural networks , (2019) by Spigler, Giacomo [bib]
Shows that priors meta-learnt with MAML slow down catastrophic forgetting in neural networks
- Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference , (2019) by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and Gerald Tesauro [bib]
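The inner/outer structure shared by these methods fits in a few lines. Below is a toy MAML-style meta step on a linear regressor; online variants such as FTML, OML, or La-MAML wrap this pattern in a streaming loop and add replay or learning-rate modulation. All names here are illustrative.

```python
import torch

def maml_step(w: torch.Tensor, support, query, inner_lr: float = 0.1):
    """One meta step: adapt on the support set, evaluate on the query set.

    `w` is a weight vector with requires_grad=True; `support` and `query`
    are (X, y) pairs. Backpropagating through the returned loss updates
    the initialization so that one inner SGD step generalizes well.
    """
    xs, ys = support
    xq, yq = query
    inner_loss = ((xs @ w - ys) ** 2).mean()
    (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_fast = w - inner_lr * g                      # inner adaptation
    outer_loss = ((xq @ w_fast - yq) ** 2).mean()  # meta objective
    return outer_loss
```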
## Lifelong Reinforcement Learning

- Reset-Free Lifelong Learning with Skill-Space Planning , (2021) by Kevin Lu, Aditya Grover, Pieter Abbeel and Igor Mordatch [bib]
- Towards Continual Reinforcement Learning: A Review and Perspectives, (2020) by Khimya Khetarpal, Matthew Riemer, Irina Rish and Doina Precup [bib]
A review on continual reinforcement learning
- Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges , (2020) by Timothée Lesort, Vincenzo Lomonaco, Andrei Stoian, Davide Maltoni, David Filliat and Natalia Díaz-Rodríguez [bib]
- Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL , (2019) by Anusha Nagabandi, Chelsea Finn and Sergey Levine [bib]
Formulates an online learning procedure that uses SGD to update model parameters and an EM algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models for a non-stationary task distribution
- Continual Reinforcement Learning deployed in Real-life using Policy Distillation and Sim2Real Transfer, (2019) by Kalifou, René Traoré, Caselles-Dupré, Hugo, Lesort, Timothée, Sun, Te, Diaz-Rodriguez, Natalia and Filliat, David [bib]
- Experience replay for continual learning , (2019) by Rolnick, David, Ahuja, Arun, Schwarz, Jonathan, Lillicrap, Timothy and Wayne, Gregory [bib]
- PathNet: Evolution Channels Gradient Descent in Super Neural Networks , (2017) by Chrisantha Fernando and Dylan Banarse and Charles Blundell and Yori Zwols and David Ha and Andrei A. Rusu and Alexander Pritzel and Daan Wierstra [bib]
## Continual Generative Modeling

- Continual Unsupervised Representation Learning , (2019) by Dushyant Rao, Francesco Visin, Andrei A. Rusu, Yee Whye Teh, Razvan Pascanu and Raia Hadsell [bib]
Introduces unsupervised continual learning (no task labels and no task boundaries)
- Generative Models from the perspective of Continual Learning , (2019) by Lesort, Timothée, Caselles-Dupré, Hugo, Garcia-Ortiz, Michael, Goudou, Jean-François and Filliat, David [bib]
Extensive evaluation of CL methods for generative modeling
- Closed-loop Memory GAN for Continual Learning , (2019) by Rios, Amanda and Itti, Laurent [bib]
- Lifelong Generative Modeling , (2017) by Ramapuram, Jason, Gregorova, Magda and Kalousis, Alexandros [bib]
## Applications

- CLOPS: Continual Learning of Physiological Signals , (2020) by Kiyasseh, Dani, Zhu, Tingting and Clifton, David A [bib]
a healthcare-specific replay-based method to mitigate destructive interference during continual learning
- LAMAL: LAnguage Modeling Is All You Need for Lifelong Language Learning , (2020) by Fan-Keng Sun, Cheng-Hao Ho and Hung-Yi Lee [bib]
- Compositional Language Continual Learning , (2020) by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny [bib]
method for compositional continual learning of sequence-to-sequence models
- Unsupervised real-time anomaly detection for streaming data , (2017) by Ahmad, Subutai, Lavin, Alexander, Purdy, Scott and Agha, Zuha [bib]
HTM applied to a real-world anomaly detection problem
- Continuous online sequence learning with an unsupervised neural network model , (2016) by Cui, Yuwei, Ahmad, Subutai and Hawkins, Jeff [bib]
HTM applied to a prediction problem of taxi passenger demand
## Thesis

- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes , (2020) by Timothée Lesort [bib]
- Continual Learning with Deep Architectures , (2019) by Vincenzo Lomonaco [bib]
- Continual Learning in Neural Networks , (2019) by Aljundi, Rahaf [bib]
- Continual learning in reinforcement environments , (1994) by Ring, Mark Bishop [bib]
## Libraries

- Sequoia - Towards a Systematic Organization of Continual Learning Research , (2021) by Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Matthew Riemer, Pau Rodriguez, Julio Hurtado, Khimya Khetarpal, Timothée Lesort, Laurent Charlin, Irina Rish and Massimo Caccia [bib]
A library that unifies Continual Supervised and Continual Reinforcement Learning research
- Avalanche: an End-to-End Library for Continual Learning , (2021) by Vincenzo Lomonaco, Lorenzo Pellegrini, Andrea Cossu, Gabriele Graffieti and Antonio Carta [bib]
A library for Continual Supervised Learning
- Continuous Coordination As a Realistic Scenario for Lifelong Learning , (2021) by Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville and Sarath Chandar [bib]
a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings.
- River: machine learning for streaming data in Python, (2020) by Jacob Montiel, Max Halford, Saulo Martiello Mastelini, Geoffrey Bolmier, Raphael Sourty, Robin Vaysse, Adil Zouitine, Heitor Murilo Gomes, Jesse Read, Talel Abdessalem and Albert Bifet [bib]
A library for online learning.
- Continuum, Data Loaders for Continual Learning, (2020) by Douillard, Arthur and Lesort, Timothée [bib]
A library providing continual learning scenarios, data loaders and metrics (see the example at the end of this section).
- Framework for Analysis of Class-Incremental Learning , (2020) by Masana, Marc, Liu, Xialei, Twardowski, Bartlomiej, Menta, Mikel, Bagdanov, Andrew D and van de Weijer, Joost [bib]
A library for Continual Class-Incremental Learning
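As a taste of what these libraries provide, here is a short Continuum example that builds a class-incremental MNIST stream. It follows the library's documented API at the time of writing; check the project page for the current interface.

```python
from torch.utils.data import DataLoader
from continuum import ClassIncremental
from continuum.datasets import MNIST

# Five tasks of two classes each, built from MNIST.
dataset = MNIST("data", download=True, train=True)
scenario = ClassIncremental(dataset, increment=2)

for task_id, taskset in enumerate(scenario):
    loader = DataLoader(taskset, batch_size=32, shuffle=True)
    for x, y, t in loader:  # Continuum yields (input, target, task id)
        pass                # train your model here
```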
## Workshops

- Workshop on Continual Learning at ICML 2020 , (2020) by Rahaf Aljundi, Haytham Fayek, Eugene Belilovsky, David Lopez-Paz, Arslan Chaudhry, Marc Pickett, Puneet Dokania, Jonathan Schwarz and Sayna Ebrahimi [bib]
- 4th Lifelong Machine Learning Workshop at ICML 2020 , (2020) by Shagun Sodhani, Sarath Chandar, Balaraman Ravindran and Doina Precup [bib]
- CVPR 2020 Continual Learning in Computer Vision Competition: Approaches, Results, Current Challenges and Future Directions, (2020) by Lomonaco, Vincenzo, Pellegrini, Lorenzo, Rodriguez, Pau, Caccia, Massimo, She, Qi, Chen, Yu, Jodelet, Quentin, Wang, Ruiping, Mai, Zheda, Vazquez, David and others [bib]
surveys the results of the first CL competition at CVPR
- 1st Lifelong Learning for Machine Translation Shared Task at WMT20 (EMNLP 2020) , (2020) by Loïc Barrault, Magdalena Biesialska, Marta R. Costa-jussà, Fethi Bougares and Olivier Galibert [bib]