  
Catastrophic Forgetting (CF): Learning multiple tasks in sequence remains a substantial challenge for deep learning. When trained on a new task, standard neural networks forget most of the information related to previously learned tasks, a phenomenon referred to as “catastrophic forgetting”(([[https://arxiv.org/abs/1904.07734v1|Three scenarios for continual learning]])).
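
The effect can be reproduced in a few lines of PyTorch. The following is a minimal, illustrative sketch: the synthetic two-blob tasks, the network size, and all hyperparameters are assumptions made for this example, not a setup taken from the cited paper.

<code python>
# Minimal demonstration of catastrophic forgetting on two synthetic tasks.
# All data, model sizes, and hyperparameters here are illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset, flip=False):
    """Two Gaussian blobs per task; ``flip`` reverses the labels for task B."""
    x0 = torch.randn(200, 2) + torch.tensor([offset, 0.0])
    x1 = torch.randn(200, 2) + torch.tensor([offset, 4.0])
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(200), torch.ones(200)]).long()
    return x, (1 - y if flip else y)

def train(model, x, y, steps=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(offset=0.0)              # task A
xb, yb = make_task(offset=8.0, flip=True)   # task B: shifted inputs, flipped labels

train(model, xa, ya)
print("task A accuracy after training on A:", accuracy(model, xa, ya))  # near 1.0
train(model, xb, yb)   # sequential training on B only, no access to task A data
print("task A accuracy after training on B:", accuracy(model, xa, ya))  # typically collapses
</code>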

===== Scenarios =====

There are three scenarios of incremental learning(([[https://arxiv.org/abs/1904/07734v1|Three scenarios for continual learning]])), distinguished by whether task identity is available at test time (see the sketch after this list):

  * Task Incremental Learning: task identity is given at test time
  * Domain Incremental Learning: task identity is not given, but also not needed
  * Class Incremental Learning: task identity is not given and must be inferred
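
The three scenarios differ mainly in what the model is asked at test time. The sketch below contrasts them using a shared trunk with one output head per task; the class ''MultiHeadNet'' and the head-merging rule shown for Domain-IL are illustrative assumptions (in practice, Domain-IL models often use a single shared head instead).

<code python>
# Contrasting the three scenarios at test time on a multi-head network.
# Layout and helper names are illustrative assumptions.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared trunk with one output head per task."""
    def __init__(self, in_dim=784, hidden=256, classes_per_task=2, n_tasks=5):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden, classes_per_task) for _ in range(n_tasks)
        )

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]  # one logit tensor per task

net = MultiHeadNet()
x = torch.randn(1, 784)  # a dummy input
logits = net(x)

# Task-IL: task identity (say, task 3) is given; use only that task's head.
task_il_pred = logits[3].argmax(dim=1)

# Domain-IL: task identity is unknown but not needed; all tasks share one
# label space, so the heads' logits can be merged (here: averaged).
domain_il_pred = torch.stack(logits).mean(dim=0).argmax(dim=1)

# Class-IL: task identity is unknown and must be inferred; predict over the
# concatenation of all heads, i.e. over every class seen so far.
class_il_pred = torch.cat(logits, dim=1).argmax(dim=1)
</code>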

===== Methods =====

All methods of incremental learning can be grouped into four categories(([[https://arxiv.org/abs/1904.07734v1|Three scenarios for continual learning]])); a sketch of the regularized-optimization idea follows the list.

  * Task-specific components (a sub-network per task): **XdG** (Context-dependent Gating)
  * Regularized optimization (regularizing parameters selectively): **[[deep_learning:incremental_learning:EWC]]** (Elastic Weight Consolidation), **SI** (Synaptic Intelligence)
  * Modifying training data (pseudo-data, generated samples): **[[deep_learning:incremental_learning:LwF]]** (Learning without Forgetting), **DGR** (Deep Generative Replay)
  * Using exemplars (storing data from previous tasks): **[[deep_learning:incremental_learning:iCaRL]]**
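
As one concrete example of the regularized-optimization category, the sketch below implements the quadratic penalty of EWC (Kirkpatrick et al., 2017): after finishing task A, changes to each parameter are penalized in proportion to a diagonal Fisher information estimate. Function names, the Fisher estimation loop, and the default ''lam'' are illustrative assumptions, not a reference implementation.

<code python>
# A minimal sketch of EWC's quadratic penalty. ``data`` is a tensor of
# inputs and ``labels`` a tensor of integer targets from the previous task.
import torch
import torch.nn as nn

def diag_fisher(model, data, labels):
    """Diagonal Fisher estimate from squared log-likelihood gradients."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in zip(data, labels):
        model.zero_grad()
        log_probs = nn.functional.log_softmax(model(x.unsqueeze(0)), dim=1)
        (-log_probs[0, y]).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty anchoring parameters to their post-task-A values."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Usage: after training on task A, snapshot the Fisher and the parameters:
#   fisher = diag_fisher(model, task_a_inputs, task_a_labels)
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# Then, while training on task B, add the penalty to the task loss:
#   loss = task_loss + ewc_penalty(model, fisher, old_params)
</code>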
===== Resources =====
  