
Incremental Learning

Introduction

Incremental Learning (IL) is also referred to as Continual Learning (CL), Lifelong Learning (LLL), Online Learning (OL), or Never-Ending Learning (NEL).

Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving the knowledge acquired on previous tasks.1)

Key Challenge

Catastrophic Forgetting (CF): learning multiple tasks in sequence remains a substantial challenge for deep learning. When trained on a new task, standard neural networks forget most of the information related to previously learned tasks, a phenomenon referred to as “catastrophic forgetting”2).
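The effect is easy to reproduce with plain fine-tuning. The sketch below is a minimal, self-contained illustration assuming PyTorch; the two toy tasks (`task_a`, `task_b`) with disjoint labels are synthetic placeholders standing in for a real benchmark such as Split MNIST.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

def make_task(classes, n_per_class=400):
    """Toy task: each class is a Gaussian blob around a class-specific centre."""
    xs, ys = [], []
    for c in classes:
        centre = torch.zeros(784)
        centre[c * 70:(c + 1) * 70] = 3.0          # class-specific active features
        xs.append(centre + torch.randn(n_per_class, 784))
        ys.append(torch.full((n_per_class,), c, dtype=torch.long))
    return DataLoader(TensorDataset(torch.cat(xs), torch.cat(ys)),
                      batch_size=64, shuffle=True)

task_a = make_task(range(0, 5))                    # first task: classes 0-4
task_b = make_task(range(5, 10))                   # second task: classes 5-9

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def train(loader, epochs=3):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

@torch.no_grad()
def accuracy(loader):
    model.eval()
    hits = total = 0
    for x, y in loader:
        hits += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return hits / total

train(task_a)
print("task A accuracy after training on A:", accuracy(task_a))
train(task_b)                                      # plain fine-tuning, no IL method
print("task A accuracy after training on B:", accuracy(task_a))  # typically collapses
```

After the second training phase, accuracy on the first task typically collapses because the shared weights are overwritten to fit the new label set.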

Scenarios

There are three scenarios of incremental learning3); the sketch after this list illustrates how they differ at test time:

  • Task Incremental Learning (Task-IL): the task identity is provided at test time.
  • Domain Incremental Learning (Domain-IL): the task identity is not provided, but it does not need to be inferred; only the within-task label is required.
  • Class Incremental Learning (Class-IL): the task identity is not provided and must be inferred, so all classes seen so far compete.
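As a rough illustration of how the scenarios differ at test time, the sketch below assumes a Split MNIST-style protocol in which task t covers digit classes 2t and 2t+1; the `predict` helper and its masking scheme are illustrative assumptions, not code from the cited paper.

```python
import torch

def predict(logits: torch.Tensor, scenario: str, task_id: int = None) -> int:
    """Prediction rule per scenario; `logits` has one entry per digit class (10 here)."""
    if scenario == "task":
        # Task-IL: the task identity is given, so only that task's two classes compete.
        classes = [2 * task_id, 2 * task_id + 1]
        return classes[int(logits[classes].argmax())]
    if scenario == "domain":
        # Domain-IL: the task identity is unknown, but only the within-task
        # label (first vs. second class of whichever task) is required.
        return int(logits[1::2].max() > logits[0::2].max())
    # Class-IL: the task identity is unknown and all classes seen so far compete.
    return int(logits.argmax())

logits = torch.randn(10)
print(predict(logits, "task", task_id=2))   # one of {4, 5}
print(predict(logits, "domain"))            # 0 or 1
print(predict(logits, "class"))             # one of 0..9
```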

Methods

Methods for incremental learning can be grouped into four categories4); a minimal sketch of one representative method (EWC) follows the list.

  • Task-specific components (a sub-network, or gated subset of units, per task): XdG (Context-dependent Gating)
  • Regularized optimization (penalizing changes to parameters that were important for previous tasks): EWC (Elastic Weight Consolidation), SI (Synaptic Intelligence)
  • Modifying training data (replaying pseudo-data or generated samples): LwF (Learning without Forgetting), DGR (Deep Generative Replay)
  • Using exemplars (storing a small set of data from previous tasks): iCaRL
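As an example of the regularized-optimization category, the following is a minimal sketch of the EWC penalty, (lambda / 2) * sum_i F_i * (theta_i - theta_i_star)^2, assuming a diagonal Fisher estimate `fisher` and a parameter snapshot `old_params` computed after the previous task (both hypothetical inputs here).

```python
import torch
import torch.nn as nn

def ewc_penalty(model: nn.Module, fisher: dict, old_params: dict, lam: float = 1000.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_i_star)^2.

    `fisher` maps parameter names to a diagonal Fisher-information estimate and
    `old_params` to a snapshot of the parameters taken after the previous task;
    both are assumed to be computed elsewhere (hypothetical inputs in this sketch).
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Usage sketch while training on a new task:
#   loss = task_loss + ewc_penalty(model, fisher, old_params)
#   loss.backward(); optimizer.step()
```

The quadratic pull toward the old parameters is weighted by the Fisher estimate, so parameters that mattered little for previous tasks remain free to change.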

Resources

Course

Papers
