Incremental Learning (IL) is also referred to as Continual Learning (CL), Lifelong Learning (LLL), Online Learning (OL), or Never-Ending Learning (NEL).
  
Incremental learning aims to develop artificially intelligent systems that can continuously learn new tasks from new data while preserving the knowledge learned from previous tasks.(([[https://arxiv.org/abs/2010.15277|Class-incremental learning: survey and performance evaluation on image classification]]))
  
===== Key Challenge =====
  
Catastrophic Forgetting (CF): learning multiple tasks in sequence remains a substantial challenge for deep learning. When trained on a new task, standard neural networks forget most of the information related to previously learned tasks, a phenomenon referred to as “catastrophic forgetting”(([[https://arxiv.org/abs/1904.07734v1|Three scenarios for continual learning]])).
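Catastrophic forgetting can be illustrated with a hypothetical toy model (not from the cited paper): a single scalar weight trained by gradient descent first on task A, then on task B. After training on task B, the task-A loss has grown back, because the same weight was overwritten.

```python
# Toy illustration of catastrophic forgetting (assumed setup):
# one scalar weight w, task loss (w - target)^2, trained sequentially.

def task_loss(w, target):
    return (w - target) ** 2

def train(w, target, lr=0.1, steps=100):
    # plain gradient descent on a single task
    for _ in range(steps):
        grad = 2 * (w - target)
        w -= lr * grad
    return w

w = 0.0
w = train(w, target=2.0)              # task A: w converges to ~2.0
loss_a_before = task_loss(w, 2.0)
w = train(w, target=-1.0)             # task B: w is overwritten toward -1.0
loss_a_after = task_loss(w, 2.0)
print(loss_a_before < 1e-6, loss_a_after > 1.0)  # → True True (task A is forgotten)
```

With shared parameters and no countermeasure, optimizing the new task simply moves the weights to the new optimum, destroying the old solution.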

===== Scenarios =====

There are three scenarios of incremental learning(([[https://arxiv.org/abs/1904.07734v1|Three scenarios for continual learning]])):

  * Task Incremental Learning
  * Domain Incremental Learning
  * Class Incremental Learning
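The scenarios differ in what the model must predict at test time: in Task-IL the task identity is given, in Class-IL it is not and the model must distinguish all classes seen so far, and in Domain-IL only the within-task label (shared across tasks) is predicted. A hypothetical toy sketch, with two tasks of two classes each and invented logit values:

```python
# Assumed toy setup: task 0 = {cat, dog}, task 1 = {car, truck};
# logits are made-up scores over all classes seen so far.
all_logits = {"cat": 0.9, "dog": 0.2, "car": 1.5, "truck": 0.1}
task_of = {"cat": 0, "dog": 0, "car": 1, "truck": 1}
position_of = {"cat": 0, "dog": 1, "car": 0, "truck": 1}  # within-task label

def predict(scenario, task_id=None):
    if scenario == "task":
        # Task-IL: task identity is given; choose only among that task's classes
        pool = [c for c, t in task_of.items() if t == task_id]
        return max(pool, key=all_logits.get)
    if scenario == "class":
        # Class-IL: task unknown; choose among all classes ever seen
        return max(all_logits, key=all_logits.get)
    if scenario == "domain":
        # Domain-IL: predict only the shared within-task position
        # (one simple merge: sum the scores of classes sharing a position)
        scores = {}
        for c, p in position_of.items():
            scores[p] = scores.get(p, 0.0) + all_logits[c]
        return max(scores, key=scores.get)

print(predict("task", task_id=0))  # → cat
print(predict("class"))            # → car
print(predict("domain"))           # → 0
```

Class-IL is generally the hardest of the three, since classes from different tasks must be discriminated without ever being seen together.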

===== Methods =====

All methods of incremental learning can be grouped into four categories(([[https://arxiv.org/abs/1904.07734v1|Three scenarios for continual learning]])).

  * Task-specific components (a sub-network per task): **XdG** (Context-dependent Gating)
  * Regularized optimization (regularizing parameters differently by importance): **[[deep_learning:incremental_learning:EWC]]** (Elastic Weight Consolidation), **SI** (Synaptic Intelligence)
  * Modifying training data (pseudo-data, generated samples): **[[deep_learning:incremental_learning:LwF]]** (Learning without Forgetting), **DGR** (Deep Generative Replay)
  * Using exemplars (storing data from previous tasks): **[[deep_learning:incremental_learning:iCaRL]]**
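The regularized-optimization idea can be sketched on the same toy scalar model as above. This is a simplified, hypothetical sketch in the spirit of EWC, not the paper's implementation: after task A, movement of the weight away from the task-A solution ''w_star'' is penalized, weighted by an importance value ''F'' (standing in for the diagonal Fisher information) and a strength ''lam''.

```python
# Toy sketch of EWC-style regularization (assumed, heavily simplified):
# total gradient = task-B gradient + lam * F * (w - w_star).

def grad_task(w, target):
    return 2 * (w - target)          # gradient of the task loss (w - target)^2

def train_ewc(w, target, w_star, F, lam=10.0, lr=0.05, steps=200):
    for _ in range(steps):
        g = grad_task(w, target) + lam * F * (w - w_star)  # task loss + penalty
        w -= lr * g
    return w

w = 0.0
for _ in range(200):                 # task A: plain training toward target 2.0
    w -= 0.05 * grad_task(w, 2.0)
w_star, F = w, 1.0                   # anchor + toy importance weight (assumed)
w = train_ewc(w, target=-1.0, w_star=w_star, F=F)
print(round(w, 2))  # → 1.5, a compromise between task A (2.0) and task B (-1.0)
```

Instead of jumping to the task-B optimum (and forgetting task A, as in the earlier sketch), the penalized weight settles at a trade-off between the two tasks, controlled by ''lam''.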
===== Resources =====
  
deep_learning/incremental_learning.1627811109.txt.gz · Last modified: 2021/08/01 17:45 by jordan