# Deep Generative Dual Memory Network for Continual Learning

Nitin Kamra, Umang Gupta, Yan Liu (cs.LG; published 2017-10-28, updated 2018-05-25; http://arxiv.org/pdf/1710.10368)

Abstract: Despite advances in deep learning, neural networks can only learn multiple tasks when trained on them jointly. When tasks arrive sequentially, they lose performance on previously learnt tasks. This phenomenon, called catastrophic forgetting, is a fundamental challenge to overcome before neural networks can learn continually from incoming data. In this work, we derive inspiration from human memory to develop an architecture capable of learning continuously from sequentially incoming tasks, while averting catastrophic forgetting. Specifically, our contributions are: (i) a dual memory architecture emulating the complementary learning systems (hippocampus and the neocortex) in the human brain, (ii) memory consolidation via generative replay of past experiences, (iii) demonstrating advantages of generative replay and dual memories via experiments, and (iv) improved performance retention on challenging tasks even for low capacity models. Our architecture displays many characteristics of the mammalian memory and provides insights on the connection between sleep and learning.

…and PathNets (Fernando et al., 2017) directly freeze important pathways in neural networks, which eliminates forgetting altogether but requires growing the network after each task and can cause the architecture complexity to grow with the number of tasks. Li & Hoiem (2017) have evaluated freezing weights in earlier layers of a network and fine-tuning the rest for multiple tasks. These methods outperform sparse representations but may not be explicitly targeting the cause of catastrophic forgetting.
# 1. Introduction
Many machine learning models, when trained sequentially on tasks, forget how to perform previously learnt tasks. This phenomenon, called catastrophic forgetting, is an important challenge to overcome in order to enable systems to learn continuously. In the early stages of investigation, McCloskey & Cohen (1989) suggested the underlying cause of forgetting to be the distributed shared representation of tasks via network weights. Subsequent works attempted to reduce representational overlap between input representations via activation sharpening algorithms (Kortge, 1990), orthogonal recoding of inputs (Lewandowsky, 1991) and orthogonal activations at all hidden layers (McRae & Hetherington, 1993).
¹Department of Computer Science, University of Southern California, Los Angeles, CA, USA. Correspondence to: Nitin Kamra <[email protected]>.
An important assumption for successful gradient-based learning is to observe iid samples from the joint distribution of all tasks to be learnt. Since sequential learning systems violate this assumption, catastrophic forgetting is inevitable. A direct approach would therefore be to store previously seen samples and replay them along with new samples in appropriate proportions to restore the iid sampling assumption (Lopez-Paz et al., 2017). This experience replay approach has been adopted by maintaining a fixed-size episodic memory of exemplars which are either directly replayed while learning, e.g. in iCaRL (Rebuffi et al., 2017), or indirectly used to modify future gradient updates to the system, e.g. in GEM (Lopez-Paz et al., 2017), to mitigate forgetting on previously seen tasks. However, choosing to store samples from previous tasks is challenging since it requires determining how many samples need to be stored, which samples are most representative of a task, and which
samples to discard as new tasks arrive (Lucic et al., 2017). We propose that this problem can be solved by maintaining a generative model over samples, which would automatically provide the most frequently encountered samples from the distribution learnt so far. This is also feasible with limited total memory and avoids explicitly determining which and how many samples should be stored and/or discarded per task. Previous non-generative approaches to experience replay, e.g. pseudo-pattern rehearsal (Robins, 2004), have proposed to preserve neural networks' learnt mappings by uniformly sampling random inputs and their corresponding outputs from networks and replaying them along with new task samples. These approaches have only been tested in small binary input spaces, and our experiments show that sampling random inputs in high-dimensional spaces (e.g. images) does not preserve the learnt mappings.
Neuroscientific evidence suggests that experience replay of patterns has also been observed in the human brain during sleep and waking rest (McClelland et al., 1995; O'Neill et al., 2010). Further, humans have evolved mechanisms to separately learn new incoming tasks and consolidate them with previous knowledge to avert catastrophic forgetting (McClelland et al., 1995; French, 1999). The widely acknowledged complementary learning systems theory (McClelland et al., 1995; Kumaran et al., 2016) suggests that this separation has been achieved in the human brain via the evolution of two separate areas: (a) the neocortex, a long-term memory specializing in consolidating new information with previous knowledge to gradually learn the joint structure of all tasks, and (b) the hippocampus, which acts as a temporary memory to rapidly learn new tasks and then slowly transfers the knowledge to the neocortex after acquisition.
In this paper, we propose a dual-memory architecture for learning tasks sequentially while averting catastrophic forgetting. Our model comprises two generative models: a short-term memory (STM) to emulate the human hippocampal system and a long-term memory (LTM) to emulate the neocortical learning system. The STM learns new tasks without interfering with previously learnt tasks in the LTM. The LTM stores all previously learnt tasks and aids the STM in learning tasks similar to previously seen tasks. During sleep/down-time, the STM generates and transfers samples of learnt tasks to the LTM. These are gradually consolidated with the LTM's knowledge base of previous tasks via generative replay. Our model exploits the strengths of the deep generative models, experience replay and complementary learning systems literature. We demonstrate its performance experimentally in averting catastrophic forgetting by sequentially learning multiple tasks. Moreover, our experiments shed light on some characteristics of human memory as observed in the psychology and neuroscience literature.
# 2. Problem Description
Formally, our problem setting is characterized by a set of tasks T to be learnt by a parameterized model. Note that we use the phrase model and neural network architecture interchangeably. In this work, we mainly consider supervised learning tasks, i.e. task t ∈ T has training samples {X_t, Y_t} = {(x_i^t, y_i^t)}_{i=1}^{N_t} with x_i^t ∈ X and y_i^t ∈ Y, but our model easily generalizes to unsupervised learning settings. Samples for each task are drawn iid from an (unknown) data generating distribution P_t associated with the task, i.e. (x_i^t, y_i^t) ∼ P_t for all i ∈ [N_t], but the distributions {P_t}_{t∈T} can be completely different from each other. The tasks arrive sequentially and the total number of tasks T = |T| is not known a priori. Note that the full sequence of samples seen by the architecture is not sampled iid from the joint distribution of all samples. The architecture observes the task descriptor and the data {t, X_t, Y_t} for each task while training sequentially. It can be evaluated at any time on a test sample {t, x^t} to predict its label y^t, where (x^t, y^t) ∼ P_t.
Finite memory: We allow a limited storage for algorithms to store or generate samples while learning. The storage size is limited to N_max and is usually smaller than the total number of samples Σ_t N_t. Hence, just storing all training samples and reusing them is infeasible.
Evaluation metrics: After training on each task, we evaluate models on separate test sets for each task. This gives us a matrix A ∈ R^{T×T}, with A_{i,j} being the test accuracy on task j after training on task i. Following Lopez-Paz et al. (2017), we evaluate algorithms on the following metrics: the average accuracy (ACC) achieved across all tasks and the backward transfer (BWT):
$$\mathrm{ACC} = \frac{1}{T}\sum_{i=1}^{T} A_{T,i}, \qquad \mathrm{BWT} = \frac{1}{T-1}\sum_{i=1}^{T-1}\left(A_{T,i} - A_{i,i}\right)$$
Backward transfer (BWT) measures the influence of task t on a previously learnt task τ. This is generally negative, since learning new tasks sequentially causes the model to lose performance on previous tasks. A large negative BWT represents catastrophic forgetting. An ideal continual learning algorithm should achieve maximum ACC while having the least negative (or a positive) BWT.
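For concreteness, a minimal NumPy sketch of how these two metrics can be computed from the accuracy matrix A; the function and variable names are ours, not from the paper:

```python
import numpy as np

def continual_learning_metrics(A: np.ndarray):
    """Compute ACC and BWT from a T x T accuracy matrix.

    A[i, j] is the test accuracy on task j after training on task i
    (0-indexed here, while the paper uses 1-indexed tasks).
    """
    T = A.shape[0]
    acc = A[T - 1, :].mean()  # average accuracy after training on the last task
    bwt = (A[T - 1, :T - 1] - np.diag(A)[:T - 1]).mean()  # drop relative to just-learnt accuracy
    return acc, bwt

# Toy example: 3 tasks with mild forgetting of earlier tasks.
A = np.array([[0.95, 0.10, 0.10],
              [0.80, 0.93, 0.10],
              [0.70, 0.85, 0.94]])
print(continual_learning_metrics(A))  # ACC = 0.83, BWT = -0.165
```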
# 3. Deep Generative Dual Memory Network
# 3.1. Deep Generative Replay
We present a generative experience replay algorithm to learn from sequentially arriving samples.
Figure 1: Deep Generative Replay to train a Deep Generative Memory (panels: DGM training, where the generator learns from new samples and generative replay and the learner learns from reconstructed samples and labels, and DGM testing).
We first introduce a sub-model called the Deep Generative Memory (DGM)¹ with three elements: (i) a generative model (the generator G), (ii) a feedforward network (the learner L), and (iii) a dictionary (D_dgm) with task descriptors of learnt tasks and the number of times they were encountered. Though most previous works (Kirkpatrick et al., 2017; Lopez-Paz et al., 2017; Zenke et al., 2017) and our algorithm involve usage of task descriptors t in some form, our architecture also works when they are either unavailable, non-integral or just an inseparable part of the input x^t (see Appendix A). We choose a variational autoencoder (VAE) (Kingma & Welling, 2014) for the generator, since our generative model requires reconstruction capabilities (see section 3.2), but the architecture can also work with other kinds of generative models (see section 5).
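As an illustration only, a DGM can be represented by a small container holding these three elements plus an age counter; the class and field names below are our own, since the paper does not prescribe an implementation:

```python
from dataclasses import dataclass, field
from collections import Counter
from typing import Any

@dataclass
class DeepGenerativeMemory:
    """A DGM: a generator, a learner, and a dictionary of seen task descriptors."""
    generator: Any            # e.g. a VAE exposing sample() and reconstruct()
    learner: Any              # a feedforward classifier exposing fit() and predict()
    task_counts: Counter = field(default_factory=Counter)  # D_dgm: task descriptor -> times seen
    age: int = 0              # number of samples consolidated so far

    def knows(self, task_id) -> bool:
        return task_id in self.task_counts
```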
We update a DGM with samples from (potentially multiple) new tasks using our algorithm Deep Generative Replay (DGR). The pseudocode is shown in Algorithm 1 and visualized in figure 1. DGR essentially combines the new incoming samples (X, Y) with its own generated samples from previous tasks and relearns jointly on these samples. Given new incoming samples (X, Y), DGR computes the fraction of samples to use from the incoming samples (η_tasks) and the fraction to preserve from previous tasks (η_gen) according to the number of samples seen so far (i.e. the age of the DGM). If needed, the incoming samples are downsampled while still allocating at least a minimum fraction κ of the memory to them (lines 3–16). This ensures that as the DGM saturates with tasks over time, new tasks are still learnt at the cost of gradually losing performance on the least recent previous tasks. This is analogous to how learning slows down in humans as they age, yet they continue to learn while forgetting old things gradually (French, 1999). Next, DGR generates samples of previously learnt tasks (X_gen, Y_gen) using the generator and learner, transfers the
task descriptors of samples in (X, Y) to its own dictionary D_dgm, and updates its age (lines 17–21).
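A minimal Python sketch of this sample-balancing step (mirroring lines 3–16 of Algorithm 1, shown below); the function name and the integer rounding are our assumptions:

```python
def balance_samples(n_new: int, age: int, kappa: float, n_max: int):
    """Split the replay budget between new-task samples and generated samples."""
    n_tasks, n_gen = n_new, age
    if n_new + age > n_max:
        # Memory is saturated: keep at least a kappa fraction for the new task.
        eta_tasks = max(kappa, n_new / (n_new + age))
        n_tasks = int(eta_tasks * n_max)
        n_gen = n_max - n_tasks
    n_total = n_tasks + n_gen
    if n_tasks >= n_new:
        # No subsampling of new data needed; give any slack back to generated samples.
        n_tasks, n_gen = n_new, n_total - n_new
    return n_tasks, n_gen

# Example: 10k new samples, a DGM that has already seen 200k, kappa=0.05, capacity 120k.
print(balance_samples(10_000, 200_000, 0.05, 120_000))  # -> (6000, 114000): kappa floors the new-task share at 5%
```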
Algorithm 1: Deep Generative Replay
1: Input: current parameters and age of the DGM, new samples (X, Y), dictionary for the new samples D_tasks, minimum fraction κ, memory capacity N_max
2: Output: new parameters of the DGM
{Compute number of samples}
3: N_tasks = |X|
4: N_gen = age
5: if |X| + age > N_max then
6:   η_tasks = max(κ, |X| / (|X| + age))
7:   N_tasks = η_tasks × N_max
8:   N_gen = N_max − N_tasks
9: end if
10: N_total = N_tasks + N_gen
{Subsample X, Y if needed}
11: if N_tasks < |X| then
12:   X_tasks, Y_tasks = draw N_tasks samples from X, Y
13: else
14:   N_tasks, N_gen = |X|, N_total − |X|
15:   X_tasks, Y_tasks = X, Y
16: end if
{Generate samples from previous tasks}
17: X_gen = draw N_gen samples from G
18: Y_gen = L(X_gen)
19: X_tr, Y_tr = concat(X_tasks, X_gen), concat(Y_tasks, Y_gen)
20: Add task descriptors from D_tasks to D_dgm
21: age = age + N_total
22: Train generator G on X_tr
23: X_recon = reconstruct X_tasks with generator G
24: X_tr = concat(X_recon, X_gen)
25: Train learner L on (X_tr, Y_tr)
¹We call this a memory because of its weights and learning capacity, not due to any recurrent connections.
It then trains the generator on the total training samples X_tr, reconstructs the new samples via the trained generator as X_recon (hence we use a VAE), and then trains the learner on the resulting samples X_tr = concat(X_recon, X_gen) and their labels Y_tr (lines 22–25). Doing this final reconstruction provides robustness to noise and occlusion (section 5).
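Putting the pieces together, a rough sketch of one DGR consolidation step might look as follows; it reuses the balance_samples helper from above and assumes duck-typed generator/learner objects with sample, reconstruct, fit and predict methods (none of these interfaces are prescribed by the paper):

```python
import numpy as np

def dgr_consolidate(dgm, X_new, Y_new, task_ids, kappa=0.05, n_max=120_000):
    """One consolidation step of Deep Generative Replay (sketch of Algorithm 1, lines 11-25).

    `dgm` follows the DeepGenerativeMemory sketch above; X_new, Y_new are NumPy arrays.
    """
    n_tasks, n_gen = balance_samples(len(X_new), dgm.age, kappa, n_max)
    idx = np.random.choice(len(X_new), size=min(n_tasks, len(X_new)), replace=False)
    X_tasks, Y_tasks = X_new[idx], Y_new[idx]                    # lines 11-16: subsample if needed
    X_gen = dgm.generator.sample(n_gen)                          # line 17: replay previous tasks
    Y_gen = dgm.learner.predict(X_gen)                           # line 18: label replayed samples
    X_tr = np.concatenate([X_tasks, X_gen])                      # line 19
    dgm.task_counts.update(task_ids)                             # line 20
    dgm.age += n_tasks + n_gen                                   # line 21
    dgm.generator.fit(X_tr)                                      # line 22: train the VAE
    X_recon = dgm.generator.reconstruct(X_tasks)                 # line 23: reconstruct new samples
    dgm.learner.fit(np.concatenate([X_recon, X_gen]),            # lines 24-25: train the learner
                    np.concatenate([Y_tasks, Y_gen]))
    return dgm
```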
Note that DGR keeps track of task descriptors in dictionaries but does not use them for learning. DGDMN only uses task descriptors to recognize whether a task has been previously observed and/or the memory in which a task currently resides. This can be relaxed by using the reconstruction error from the generators as a proxy for recognition (see appendix A). Hence DGDMN still works in the absence of task descriptors.
Ideas similar to DGR have recently been proposed independently by Mocanu et al. (2016) and Shin et al. (2017), but they do not describe balancing new and generated samples and cannot recognize repeated tasks (section 7.1 in appendix A). Moreover, generative replay without a dual memory architecture is costly to train (section 4.2), and the lack of reconstruction for new samples makes their representations less robust to noise and occlusions (section 5).
# 3.2. Dual memory networks
Though DGR is a continual learning algorithm on its own, our preliminary experiments showed that it is slow and inaccurate. To balance the conflicting requirements of quick acquisition of new tasks and performance retention on previously learnt tasks, we propose a dual memory network to combat forgetting. Our architecture (DGDMN), shown in figure 2, comprises a large DGM called the long-term memory (LTM), which stores information of all previously learnt tasks like the neocortex, and a short-term memory (STM), which behaves similar to the hippocampus and learns new incoming tasks quickly without interference from previous tasks. The STM is a collection of n_STM small, dedicated deep generative memories (called short-term task memories, STTMs), each of which can learn one unique task.
While training on an incoming task, if it is already in an STTM, the same STTM is retrained on it; otherwise a fresh STTM is allocated to the task. Additionally, if the task has been previously seen and consolidated into the LTM, then the LTM reconstructs the incoming samples for that task using the generator (hence we use a VAE), predicts labels for the reconstructions using its learner, and sends these newly generated samples to the STTM allocated to this task. This provides extra samples on tasks which have been learnt previously and helps to learn them better, while also preserving the previous performance on that task to some extent. Once all n_STM STTMs are exhausted, the architecture sleeps (like humans) to consolidate all tasks into the LTM and free up the STTMs for new tasks. While asleep, the STM generates and sends samples of learnt tasks to the LTM, where these are consolidated via deep generative replay (see figure 2).
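The routing and sleep logic described above could be sketched as follows; new_dgm_fn, the per-task replay budget and the other names are our assumptions, and dgr_consolidate refers to the sketch in section 3.1:

```python
import numpy as np

def dgdmn_train_task(ltm, sttms, task_id, X, Y, new_dgm_fn, n_stm=5, replay_per_task=10_000):
    """Route one incoming task through DGDMN (a sketch; names and defaults are ours).

    `ltm` is a large DGM, `sttms` maps task_id -> small DGM (an STTM), and `new_dgm_fn`
    builds a fresh small DGM. Interfaces follow the DGM sketch above.
    """
    if task_id not in sttms and len(sttms) >= n_stm:
        dgdmn_sleep(ltm, sttms, replay_per_task)           # all STTMs busy: consolidate first
    if ltm.knows(task_id):
        # Previously consolidated task: the LTM supplies reconstructed samples and labels.
        X_extra = ltm.generator.reconstruct(X)
        Y_extra = ltm.learner.predict(X_extra)
        X, Y = np.concatenate([X, X_extra]), np.concatenate([Y, Y_extra])
    stm = sttms.setdefault(task_id, new_dgm_fn())          # reuse or allocate an STTM
    dgr_consolidate(stm, X, Y, [task_id])

def dgdmn_sleep(ltm, sttms, replay_per_task):
    """Sleep phase: each STTM replays its task into the LTM, then the STTMs are freed."""
    for task_id, stm in sttms.items():
        X_gen = stm.generator.sample(replay_per_task)
        dgr_consolidate(ltm, X_gen, stm.learner.predict(X_gen), [task_id])
    sttms.clear()
```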
# 4. Experiments
We perform experiments to demonstrate forgetting on sequential image classification tasks. We briefly describe our datasets here (details in appendix B): (a) Permnist is a catastrophic forgetting benchmark (Kirkpatrick et al., 2017) in which each task contains a fixed permutation of pixels on MNIST images, (b) the Digits dataset involves classifying a single MNIST digit per task, (c) TDigits is a transformed variant of MNIST similar to Digits but with 40 tasks, for long task sequences, (d) Shapes contains several geometric shape classification tasks, and (e) Hindi contains a sequence of 8 tasks with Hindi-language consonant recognition.
We compare DGDMN with several baselines for catastrophic forgetting, choosing at least one from each category: representational overlap, learning slowdown and experience replay. These are briefly described here (implementation and hyperparameter details in appendix B): (a) Feedforward neural networks (NN): to characterize forgetting in the absence of any prevention mechanism and as a reference for other approaches, (b) Neural nets with dropout (DropNN): Goodfellow et al. (2013) suggested using dropout as a means to prevent representational overlaps and pacify catastrophic forgetting, (c) Pseudopattern Rehearsal (PPR): a non-generative approach to experience replay (Robins, 2004), (d) Elastic Weight Consolidation (EWC): Kirkpatrick et al. (2017) proposed using the Fisher information matrix for task-specific learning slowdown of weights in a neural network, and (e) Deep Generative Replay (DGR): we train only the LTM from DGDMN to separate the effects of deep generative replay and the dual memory architecture; this is partly similar to Shin et al. (2017).
In our preliminary experiments, we observed that large overparameterized networks can more easily adapt to sequentially incoming tasks, thereby partly mitigating catastrophic forgetting. So we have chosen network architectures which have to share all their parameters appropriately amongst the various tasks in a dataset to achieve reasonable joint accuracy. This allows us to evaluate algorithms carefully while ignoring the benefits provided by overparameterization.
While testing on task t (even intermittently between tasks), if any STTM currently contains task t, it is used to predict the labels; otherwise the prediction is deferred to the LTM. This allows predicting on all tasks seen up to now (including the most recent ones) without sleeping.
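A sketch of this test-time routing rule, continuing the hypothetical interfaces used earlier:

```python
def dgdmn_predict(ltm, sttms, task_id, x):
    """Test-time routing: use the STTM that currently holds the task, otherwise the LTM."""
    model = sttms[task_id] if task_id in sttms else ltm
    return model.learner.predict(x)
```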
# 4.1. Accuracy and Forgetting curves
We trained DGDMN and all baselines sequentially on the image classification tasks of the Permnist, Digits, Shapes and Hindi datasets (separately).
Figure 2: Deep Generative Dual Memory Network (DGDMN). During training, the LTM provides reconstructed samples to aid the STM and consolidates the STM's tasks via deep generative replay; at test time, predictions come from the STM or the LTM.
Due to space constraints, we show results on the Shapes and Hindi datasets in appendix A. The classification accuracy on a held-out test set for each task, after training on the t-th task, is shown in figures 3 and 4. We used the same network architecture for NN, PPR, EWC, the learner in DGR, and the learner in the LTM of DGDMN for a given dataset. DropNN had intermediate dropouts after hidden layers (details in appendix B).
We observe from figures 3 and 4 that NN and DropNN forget catastrophically while learning and perform similarly. We verified the same on other datasets in appendix A. EWC performs better than NN and DropNN, but rapidly slows down learning on many weights and effectively stagnates after Task 3 (e.g. see Tasks 5 and 6 in figure 3d). The learning slowdown on weights hinders EWC from reusing those weights later to jointly discover common structures between tasks. Note that the networks do have the capacity to learn all tasks, and our generative replay based algorithms DGR and DGDMN indeed learn all tasks sequentially with the same learner networks.
Further, we observed heavy forgetting on Digits (figure 4) for most baselines, which is expected because all samples in the t-th task have a single label (t), and the t-th task can be learnt on its own by setting the t-th bias of the final softmax layer to be high and the other biases to be low. Such sequential tasks cause networks to forget catastrophically. We observed that NN, DropNN, PPR and EWC learnt only the task being trained on and forgot all previous knowledge immediately. Sometimes, we also observed saturation due to the softmax bias being set very high and then being unable to recover from it. PPR showed severe saturation since its replay prevented it from coming out of the saturation.
The average forgetting on all tasks in {1, . . . , t}, after training on the t-th task (for both Digits and Permnist), is shown in figure 5. For absolute reference, the accuracy of NN trained jointly on all tasks up to the t-th task has also been shown for each t. This also shows that DGR and DGDMN consistently outperform baselines in terms of retained average accuracy. In figure 5b, NN, DropNN, PPR and EWC follow nearly overlapping curves (acc ≈ 1/t) since they are only able to learn one task at a time. Though PPR also involves experience replay, it is not able to preserve its learnt mapping by randomly sampling points from its domain and hence forgets catastrophically. These observations substantiate our claim that a replay mechanism must be generative and model the input distribution accurately. We observed similar results on other datasets (appendix A).
Table 1: Average accuracies (ACC) for all algorithms.

| Algorithm | Digits | Permnist | Shapes | Hindi |
|---|---|---|---|---|
| NN | 0.1 | 0.588 | 0.167 | 0.125 |
| DropNN | 0.1 | 0.59 | 0.167 | 0.125 |
| PPR | 0.1 | 0.574 | 0.167 | 0.134 |
| EWC | 0.1 | 0.758 | 0.167 | 0.125 |
| DGR | 0.596 | 0.861 | 0.661 | 0.731 |
| DGDMN | 0.818 | 0.831 | 0.722 | 0.658 |
Table 2: Backward transfer (BWT) for all algorithms.

| Algorithm | Digits | Permnist | Shapes | Hindi |
|---|---|---|---|---|
| NN | -0.778 | -0.434 | -0.4 | -1.0 |
| DropNN | -1.0 | -0.43 | -0.8 | -1.0 |
| PPR | -0.444 | -0.452 | -0.2 | -0.989 |
| EWC | -1.0 | -0.05 | -1.0 | -1.0 |
| DGR | -0.425 | -0.068 | -0.288 | -0.270 |
| DGDMN | -0.15 | -0.075 | -0.261 | -0.335 |
DGR and DGDMN still retain performance on all tasks of Digits, since they replay generated samples from previous tasks. We show the final average accuracies (ACC) and backward transfer (BWT) between tasks in tables 1 and 2, respectively.
Figure 3: Accuracy curves for Permnist (x-axis: tasks seen, y-axis: classification accuracy on each task); panels: (a) NN, (b) DropNN, (c) PPR, (d) EWC, (e) DGR, (f) DGDMN.
NN, DropNN, PPR and EWC get near-random accuracies on all datasets except Permnist due to catastrophic forgetting. DGDMN and DGR perform similarly and outperform the other baselines on ACC while having the least negative BWT. Since backward transfer is a direct measure of forgetting, this also shows that we effectively mitigate catastrophic forgetting and avoid inter-task interference. We point out that datasets like Digits should be considered important benchmarks for continual learning, since they have low correlation between samples of different tasks and promote overfitting to the new incoming task, thereby causing catastrophic forgetting. Being able to retain performance on such task sequences is a strong indicator of the effectiveness of a continual learning algorithm.
# 4.2. Connections to complementary learning systems and sleep
To differentiate between DGDMN and DGR, we trained both of them on a long sequence of 40 tasks from the TDigits dataset. We limited N_max to 120,000 samples for this task to explore the case where the LTM in DGDMN (the DGM in DGR) cannot regenerate many samples and has to forget some tasks. At least a κ = 0.05 fraction of memory was ensured for new task samples, and consolidation in DGDMN happened after n_STM = 5 tasks.
DGR consolidates its DGM after every task. Since the LTM is a large memory and requires more samples to consolidate, it trains slower. Further, the DGM's self-generated, slightly erroneous samples compound errors quite fast. On the other hand, DGDMN uses small STTMs to learn single tasks faster and with low error. Consequently, the LTM consolidates less often and sees more accurate samples, hence its error accumulates much more slowly. Lastly, DGDMN stays around 90% average accuracy on the most recently observed 10 tasks (figure 6b), whereas DGR propagates errors too fast and eventually fails on this metric as well.
Dual memory architecture and periodic sleep have emerged naturally in humans as a scalable design choice. Though sleeping is a dangerous behavior for any organism due to the risk of being attacked by a predator, it has still survived eons of evolution (Joiner, 2016), and most organisms with even a slightly developed nervous system (centralized or diffuse) still exhibit either sleep or light-resting behavior (Nath et al., 2017). This experiment partly sheds light on the importance of a dual memory architecture intertwined with periodic sleep, without which learning would be highly time consuming and short-lived (as in DGR).
# 5. Analysis and discussion
We next show that DGDMN shares some remarkable characteristics with the human memory and present a discussion of some relevant ideas. Due to space constraints, we have deferred some visualizations of the learnt latent structures to appendix A. The hyperparameters of DGDMN (κ and n_STM) admit intuitive interpretations and can be tuned with simple heuristics (see appendix B).
[Figure 4 plots omitted: panels (a) NN, (b) DropNN, (c) PPR, (d) EWC, (e) DGR, (f) DGDMN; per-task accuracy curves over tasks seen.]
Figure 4: Accuracy curves for Digits (x: tasks seen, y: classification accuracy on task).
[Figure 5 plots omitted: panels (a) Permnist, (b) Digits; average accuracy over tasks seen for NN, DropNN, PPR, EWC, DGR, DGDMN and joint training.]
Figure 5: Forgetting curves (x: tasks seen, y: avg classification accuracy on tasks seen).
Resilience to noise and occlusion: We have used a VAE to be able to reconstruct all samples, which helps to recognize task examples (appendix A) and also makes our model resilient to noise, distortion and occlusion. We tested our LTM model and a NN model by jointly training on uncorrupted Digits data and testing on noisy and occluded images. Figure 7 shows that the LTM is more robust to noise and occlusion due to its denoising reconstructive properties.
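The corruption used in this test can be reproduced with a small harness like the sketch below (our own test code, not the paper's; it assumes image intensities in [0, 1]). It adds Gaussian noise and blacks out a random square patch covering roughly the requested fraction of the image:

```python
import numpy as np

def corrupt(images, noise_std=0.0, occlusion=0.0, rng=None):
    """Apply additive Gaussian noise and/or a random square occlusion to a
    batch of images shaped (N, H, W) with values in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    x = np.clip(images + rng.normal(0.0, noise_std, size=images.shape), 0.0, 1.0)
    if occlusion > 0:
        n, h, w = x.shape[:3]
        side = max(1, int(np.sqrt(occlusion) * h))  # patch side for the occluded fraction
        for i in range(n):
            r, c = rng.integers(0, h - side + 1, size=2)
            x[i, r:r + side, c:c + side] = 0.0
    return x

# Usage sketch: accuracy under increasing corruption levels.
# for std in [0.0, 0.2, 0.4]:
#     acc = (model.predict(corrupt(x_test, noise_std=std)) == y_test).mean()
```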
The choice of underlying generative model: Our architecture is agnostic to the choice of the underlying generative model as long as the generator can generate reliable samples and reconstruct incoming samples accurately. Hence,
apart from VAEs, variants of Generative Adversarial Networks like BiGANs (Donahue et al., 2017), ALI (Dumoulin et al., 2017) and AVB (Mescheder et al., 2017) can be used depending on the modeled domain.
Connections to knowledge distillation: Previous works on (joint) multitask learning have also proposed approaches to learn individual tasks with small networks and then "distilling" them jointly into a larger network (Rusu et al., 2015). Such distillation can sometimes improve performance on individual tasks if they share structure and at other times mitigate inter-task interference due to refinement of learnt functions while distilling (Parisotto et al., 2016). Similarly, due to refinement and compression during the consolidation phase, DGDMN is also able to learn joint task structure effectively while mitigating interference between tasks.
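For reference, the kind of distillation objective referred to here trains a (larger) student network to match the softened outputs of one or more teacher networks. The NumPy sketch below is illustrative only; the temperature value and names are ours, not taken from the cited works:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student's softened predictions against the teacher's
    softened targets, the usual 'distillation' objective for compressing
    task-specific networks into a single larger one."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))
```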
[Figure 6 plots omitted: panels (a), (b), (c); y-axis includes avg task accuracy on the last 10 tasks, x-axis: tasks.]
Figure 6: Accuracy and training time for DGDMN and DGR on TDigits: (a) Accuracy on tasks seen so far, (b) Accuracy on last 10 tasks seen, (c) Training time
[Figure 7 plots omitted: panels (a), (b), (c); LTM reconstructions of noisy/occluded digits and classification accuracy versus Gaussian noise stdev and occlusion factor for DGDMN and NN.]
Figure 7: LTM is robust to noisy and occluded images and exhibits smoother degradation in classification accuracy because of its denoising reconstructive properties: (a) LTM reconstruction from noisy and occluded digits, (b) Classification accuracy with increasing gaussian noise, and (c) Classification accuracy with increasing occlusion factor.
Learning from streaming data: We have presently formulated our setup with task descriptors to compare it with existing approaches in the continual learning literature, but we emphasize that having no dependence on task descriptors is an essential step to learn continually from streaming data. Our approach allows online recognition of task samples via a reconstructive generative model and is applicable in domains with directly streaming data without any task descriptors, unlike most previous approaches which make explicit use of task descriptors (Zenke et al., 2017; Kirkpatrick et al., 2017; Rebuffi et al., 2017; Lopez-Paz et al., 2017) (see appendix A). This would allow DGDMN to be used for learning policies over many tasks via reinforcement learning without explicit replay memories, and we plan to explore this in future work.
Approaches based on synaptic consolidation: Though our architecture draws inspiration from complementary learning systems and experience replay in the human brain, there is also neuroscientific evidence for synaptic consolidation in the human brain as in (Kirkpatrick et al., 2017) and (Zenke et al., 2017). It might be interesting to explore how synaptic consolidation can be incorporated in our dual memory architecture without causing stagnation, and we leave this to future work.
# 6. Conclusion
In this work, we have developed a continual learning architecture to avert catastrophic forgetting. Our dual memory architecture emulates the complementary learning systems in the human brain and maintains a consolidated long-term memory via generative replay of past experiences. We have shown that generative replay performs the best for long-term performance retention and scales well along with a dual memory architecture via our experiments. Moreover, our architecture displays significant parallels with the human memory system and provides useful insights about the connection between sleep and learning in humans.
# References
Cepeda, Nicholas J, Pashler, Harold, Vul, Edward, Wixted, John T, and Rohrer, Doug. Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3):354, 2006.

Kirkpatrick, James, Pascanu, Razvan, Rabinowitz, Neil, Veness, Joel, Desjardins, Guillaume, Rusu, Andrei A, Milan, Kieran, Quan, John, Ramalho, Tiago, Grabska-Barwinska, Agnieszka, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
Donahue, Jeff, Krähenbühl, Philipp, and Darrell, Trevor. Adversarial feature learning. In International Conference on Learning Representations, 2017.
Dumoulin, Vincent, Belghazi, Ishmael, Poole, Ben, Lamb, Alex, Arjovsky, Martin, Mastropietro, Olivier, and Courville, Aaron. Adversarially learned inference. In International Conference on Learning Representations, 2017.
Kortge, Chris A. Episodic memory in connectionist networks. In Proceedings of the 12th Annual Conference of the Cognitive Science Society, volume 764, pp. 771. Erlbaum, 1990.

Kumaran, Dharshan, Hassabis, Demis, and McClelland, James L. What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in Cognitive Sciences, 20(7):512–534, 2016.
Fernando, Chrisantha, Banarse, Dylan, Blundell, Charles, Zwols, Yori, Ha, David, Rusu, Andrei A, Pritzel, Alexander, and Wierstra, Daan. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017.
Lewandowsky, Stephan. Gradual unlearning and catastrophic interference: A comparison of distributed architectures. Relating theory and data: Essays on human memory in honor of Bennet B. Murdock, pp. 445–476, 1991.
French, Robert M. Dynamically constraining connectionist networks to produce distributed, orthogonal representations to reduce catastrophic interference. Network, 1994.

Li, Zhizhong and Hoiem, Derek. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

French, Robert M. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.

Lopez-Paz, David et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6470–6479, 2017.

Goodfellow, Ian J, Mirza, Mehdi, Xiao, Da, Courville, Aaron, and Bengio, Yoshua. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.

Google. The Quick, Draw! dataset. URL: https://github.com/googlecreativelab/quickdraw-dataset, 2017.
Hinton, Geoffrey. Neural networks for machine learning - lecture 6a - overview of mini-batch gradient descent, 2012.
Lucic, Mario, Faulkner, Matthew, Krause, Andreas, and Feldman, Dan. Training mixture models at scale via coresets. arXiv preprint arXiv:1703.08110, 2017.
Maaten, Laurens van der and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.

McClelland, James L, McNaughton, Bruce L, and O'Reilly, Randall C. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419, 1995.

Joiner, William J. Unraveling the evolutionary determinants of sleep. Current Biology, 26(20):R1073–R1087, 2016.

Kaggle. Devanagari character set. URL: https://www.kaggle.com/rishianand/devanagari-character-set, 2017.
Kahana, Michael J and Howard, Marc W. Spacing and lag effects in free recall of pure lists. Psychonomic Bulletin & Review, 12(1):159–164, 2005.
McCloskey, Michael and Cohen, Neal J. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109–165, 1989.

McRae, Ken and Hetherington, Phil A. Catastrophic interference is eliminated in pretrained networks. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, pp. 723–728, 1993.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.

Mescheder, Lars, Nowozin, Sebastian, and Geiger, Andreas. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722, 2017.
Mocanu, Decebal Constantin, Vega, Maria Torres, Eaton, Eric, Stone, Peter, and Liotta, Antonio. Online contrastive divergence with generative replay: Experience replay without storing data. CoRR, abs/1610.05555, 2016.
Nath, Ravi D, Bedbrook, Claire N, Abrams, Michael J, Basinger, Ty, Bois, Justin S, Prober, David A, Sternberg, Paul W, Gradinaru, Viviana, and Goentoro, Lea. The jellyfish Cassiopea exhibits a sleep-like state. Current Biology, 27(19):2984–2990, 2017.
O'Neill, Joseph, Pleydell-Bouverie, Barty, Dupret, David, and Csicsvari, Jozsef. Play it again: reactivation of waking experience and memory. Trends in Neurosciences, 33(5):220–229, 2010.

Parisotto, Emilio, Ba, Jimmy Lei, and Salakhutdinov, Ruslan. Actor-mimic: Deep multitask and transfer reinforcement learning. In International Conference on Learning Representations, 2016.
Rebuffi, Sylvestre-Alvise, Kolesnikov, Alexander, and Lampert, Christoph H. iCaRL: Incremental classifier and representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Robins, Anthony. Sequential learning in neural networks: A review and a discussion of pseudorehearsal based methods. Intelligent Data Analysis, 8(3):301–322, 2004.
Rusu, Andrei A, Colmenarejo, Sergio Gomez, Gulcehre, Caglar, Desjardins, Guillaume, Kirkpatrick, James, Pascanu, Razvan, Mnih, Volodymyr, Kavukcuoglu, Koray, and Hadsell, Raia. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Rusu, Andrei A, Rabinowitz, Neil C, Desjardins, Guillaume, Soyer, Hubert, Kirkpatrick, James, Kavukcuoglu, Koray, Pascanu, Razvan, and Hadsell, Raia. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
Shin, Hanul, Lee, Jung Kwon, Kim, Jaehong, and Kim, Jiwon. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2994–3003, 2017.
Srivastava, Rupesh K, Masci, Jonathan, Kazerounian, Sohrob, Gomez, Faustino, and Schmidhuber, Jürgen. Compete to compute. In Advances in Neural Information Processing Systems, pp. 2310–2318, 2013.
Zenke, Friedemann, Poole, Ben, and Ganguli, Surya. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pp. 3987–3995, 2017.
# 7. Appendix A
# 7.1. Repeated tasks and revision
It is well known in psychology literature that human learning improves via revision (Kahana & Howard, 2005; Cepeda et al., 2006). We show performance of EWC and DGDMN on Permnist, when some tasks are repeated (figure 8). DGR performs very similar to DGDMN, hence we omit it. EWC stagnates and once learning has slowed down on the weights important for Task 1, the weights cannot be changed again, not even for improving Task 1. Further, it did not learn Task 6 the first time and revision does not help either. However, DGDMN learns all tasks up till Task 6 and then improves by revising Task 1 and Task 6 again. We point out that methods involving freezing (or slowdown) of learning often do not learn well via revision since they do not have any means of identifying tasks and unfreezing the previously frozen weights when the task is re-encountered. While many previous works do not investigate revision, it is crucial for learning continuously and should improve performance on tasks. The ability to learn from correlated task samples and revision makes our architecture functionally similar to that of humans.
# 7.2. Experiments on other datasets
In this section, we present more experiments on the Shapes and the Hindi dataset, which contain sequences of tasks with geometric shapes and hindi consonants recognition respectively. We observed similar forgetting patterns as on the Digits dataset in section 4. All baselines exhibited catastrophic forgetting on these sequences of tasks, but DGR and DGDMN were able to learn the task structure sequentially (figures 9, 10). The same is reflected in the average forgetting curves in figure 11.
# 7.3. Jointly vs. sequentially learnt structure
To explore whether learning tasks sequentially results in a similar structure as learning them jointly, we visualized t-SNE (Maaten & Hinton, 2008) embeddings of the latent vectors of the LTM generator (VAE) in DGDMN after training it: (a) jointly over all tasks (Figure 12a), and (b) sequentially over tasks seen one at a time (Figure 12b) on the Digits dataset. To maintain consistency, we used the same random seed in t-SNE for both joint and sequential embeddings.
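A sketch of this visualization procedure is given below, assuming an `encoder` callable that maps images to their VAE latent means (the names and plotting details are ours, not the paper's released code):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_latent_tsne(encoder, images, labels, seed=0):
    """Embed VAE latent means in 2D with t-SNE and color points by digit."""
    z = encoder(images)                                   # (N, latent_dim) latent means
    z2 = TSNE(n_components=2, random_state=seed).fit_transform(z)
    for digit in np.unique(labels):
        m = labels == digit
        plt.scatter(z2[m, 0], z2[m, 1], s=4, label=str(digit))
    plt.legend(markerscale=3)
    plt.show()
```

Fixing `random_state` mirrors the use of the same random seed for both the joint and the sequential embeddings.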
We observe that the LTM's latent space effectively segregates the 10 digits in both cases (joint and sequential). Though the absolute locations of the digit clusters differ in the two plots, the relative locations of digits share some similarity between both plots, i.e. the neighboring digit clusters for each cluster are roughly similar. This may not be sufficient to conclude that the LTM discovers the same latent representation for the underlying shared structure of tasks in these cases, and we leave a more thorough investigation to future work.
# 7.4. Visualizations for the jointly and sequentially learnt LTM
We also show visualizations of digits from the LTM when trained jointly on Digits tasks (Figure 13a) and when trained sequentially (Figure 13b). Though the digits generated from the jointly trained LTM are quite sharp, the same is not true for the sequentially trained LTM. We observe that the sequentially trained LTM produces sharp samples of the recently learnt tasks (digits 6, 7, 8 and 9), but blurred samples of previously learnt tasks, which is due to partial forgetting on these previous tasks.
# 7.5. DGDMN with no task descriptors
As described in section 3.2, DGDMN only uses task descriptors to recognize if a task already exists in an STTM or the LTM so that it can be appropriately allocated to the correct memory. Note that in our architecture this can also be done by using the reconstruction error of the generator on the task samples as a proxy for recognition. Specifically, in this variant DGDMN-recog, tasks arrive sequentially but only (X_t, Y_t) is observed while training and only X_t while testing. A DGM, when tested to recognize task t from samples X_t, reconstructs all samples X_t using the generator G and checks if the recognition loss is less than a certain threshold:
$$\mathrm{recog\_loss}(X_t) \;=\; \frac{1}{N_t} \sum_{i=1}^{N_t} \frac{\mathrm{recons\_loss}(x_t^i)}{\mathrm{intensity}(x_t^i)} \;\le\; \gamma_{dgm}$$
where recons_loss(·) is the reconstruction loss on a sample, intensity(·) describes the strength of the input sample (for images, the sum of pixel intensities) and γ_dgm is a scalar threshold and a hyperparameter which can be tuned separately for the LTM and the STM (same for all STTMs). We kept γ_dgm = 1.55 for both the LTM and all STTMs. In this case the training of the generators also employs a new termination criterion, i.e. the generator of a DGM is trained till recog_loss(·) is below γ_dgm. The rest of the algorithm remains unchanged. We show the accuracy curves and the average forgetting curves for this variant on the Digits dataset in figures 14a and 14b respectively. We observe very little degradation from the original DGDMN which uses task descriptors for recognition. DGDMN-recog achieved ACC = 0.766 and BWT = −0.197 across all tasks, which is similar to that of DGDMN.
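The recognition check described above can be sketched as follows. The squared-error reconstruction term and the small stabilizing constant are our own assumptions, and the exact losses used in the original implementation may differ:

```python
import numpy as np

def recog_loss(x, x_recon):
    """Intensity-normalized reconstruction loss, averaged over a batch of task
    samples, used to decide whether a DGM already models these samples."""
    axes = tuple(range(1, x.ndim))
    recons = np.sum((x - x_recon) ** 2, axis=axes)        # per-sample reconstruction loss
    intensity = np.sum(x, axis=axes) + 1e-8               # sum of pixel intensities
    return np.mean(recons / intensity)

def recognizes(dgm_reconstruct, x, gamma_dgm=1.55):
    """True if the DGM's generator reconstructs the batch below the threshold."""
    return recog_loss(x, dgm_reconstruct(x)) < gamma_dgm
```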
# 8. Appendix B
# 8.1. Dataset preprocessing
All our datasets have images with intensities normalized in the range [0.0, 1.0] and size (28 × 28), except Hindi, which has (32 × 32) size images.
[Figure 8 plots omitted: panels (a), (b); per-task accuracy curves over tasks seen, with Tasks 1 and 6 revised.]
Figure 8: Accuracy curves when tasks are revised: (a) EWC, (b) GEM, and (c) DGDMN.
[Figure 9 plots omitted: panels (a) CNN, (b) DropCNN, (c) PPR, (d) EWC, (e) DGR, (f) DGDMN; per-task accuracy curves over tasks seen.]
Figure 9: Accuracy curves for Shapes (x: tasks seen, y: classification accuracy on task).
Permnist: Our version involved six tasks, each containing a fixed permutation on images sampled from the original MNIST dataset. We sampled 30,000 images from the training set and all the 10,000 test set images for each task. The tasks were as follows: (i) Original MNIST, (ii) 8x8 central patch of each image blackened, (iii) 8x8 central patch of each image whitened, (iv) 8x8 central patch of each image permuted with a fixed random permutation, (v) 12x12 central patch of each image permuted with a fixed random permutation, and (vi) mirror images of MNIST. This way each task is as hard as MNIST and the tasks share some common underlying structure. Digits: We introduce this smaller dataset which contains 10 tasks with the t-th task being classification of digit t from the MNIST dataset. TDigits: We introduced a transformed variant of MNIST
containing all ten digits, their mirror images, their upside down images, and their images when reflected about the main diagonal, making a total of 40 tasks. This dataset poses similar difficulty as the Digits dataset and we use it for experiments involving longer sequences of tasks. Shapes: This dataset was extracted from the Quick, Draw! dataset recently released by Google (2017), which contains 50 million drawings across 345 categories of hand-drawn images. We subsampled 4,500 training images and 500 test images from all geometric shapes in Quick, Draw! (namely circle, hexagon, octagon, square, triangle and zigzag). Hindi: Extracted from the Devanagri dataset (Kaggle, 2017) and contains a sequence of 8 tasks, each involving image classification of a hindi language consonant.
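The TDigits transformations described above are simple array flips and transposes; a minimal sketch for a batch of (N, 28, 28) digit images (our own helper, not the paper's preprocessing script):

```python
import numpy as np

def tdigit_variants(images):
    """Return the four TDigits-style views of a batch of digit images:
    original, mirror, upside-down, and reflection about the main diagonal."""
    return {
        "original": images,
        "mirror": images[:, :, ::-1],                  # left-right flip
        "upside_down": images[:, ::-1, :],             # top-bottom flip
        "diagonal": np.transpose(images, (0, 2, 1)),   # reflect about the main diagonal
    }
```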
[Figure plot residue omitted; panels: (a) CNN, (b) DropCNN, (c) PPR, (d) EWC, (e) DGR, (f) DGDMN; legend: Tasks 1-8] | 1710.10368#51 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 52 | [Figure legend residue omitted: per-task accuracy legends, Tasks 1-8]
Figure 10: Accuracy curves for Hindi (x: tasks seen, y: classification accuracy on task).
# 8.2. Training algorithm and its parameters | 1710.10368#52 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 53 | Figure 10: Accuracy curves for Hindi (x: tasks seen, y: classification accuracy on task).
# 8.2. Training algorithm and its parameters
All models were trained with RMSProp (Hinton, 2012) using learning rate = 0.001, ρ = 0.9, ε = 10^-8 and no decay. We used a batch size of 128 and all classifiers were provided 20 epochs of training when trained jointly, and 6 epochs when trained sequentially over tasks. For generative models (VAEs), we used gradient clipping in RMSProp with clipnorm = 1.0 and clipvalue = 0.5, and they were trained for 25 epochs regardless of the task or dataset.
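A minimal sketch of the optimizer settings described above, assuming a Keras-style API (which the clipnorm/clipvalue naming suggests); the one-layer model below is a placeholder, not the paper's architecture.

```python
# Sketch of the training configuration described above (assumed Keras-style API).
from tensorflow import keras

def make_classifier_optimizer():
    # RMSProp with lr=0.001, rho=0.9, epsilon=1e-8 and no decay.
    return keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-8)

def make_generator_optimizer():
    # Same RMSProp but with gradient clipping for the VAE generators.
    # The text also sets clipvalue=0.5 together with clipnorm=1.0 (older Keras allowed both).
    return keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-8, clipnorm=1.0)

# Placeholder classifier, only to show how the optimizer plugs in.
model = keras.Sequential([keras.Input(shape=(784,)),
                          keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer=make_classifier_optimizer(),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_task, y_task, batch_size=128, epochs=6)  # 6 epochs per task when training sequentially
```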
# 8.3. Neural network architectures
We chose all models by first training them jointly on all tasks in a dataset to ensure that our models had enough capacity to perform reasonably well. But we gave preference to simpler models over very high capacity models.
the cross-entropy objective function. The STTM learners employed in DGDMN were smaller for speed and efficiency. | 1710.10368#53 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 54 | the cross-entropy objective function. The STTM learners employed in DGDMN were smaller for speed and efficiency.
Generative models: The generators for DGR and LTM of DGDMN employed encoders and decoders with two fully connected hidden layers each with ReLU activation for Permnist, Digits and TDigits, and convolutional variants for Shapes and Hindi. The sizes and number of units/kernels in the layers were tuned independently for each dataset with an approximate coarse grid-search. The size of the latent variable z was set to 32 for Digits, 64 for Permnist, 96 for TDigits, 32 for Shapes and 48 for Hindi. The STTM generators in DGDMN were kept smaller for speed and efficiency concerns.
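A minimal sketch of the fully connected VAE generator family described above (two ReLU hidden layers in the encoder and decoder, latent size z_dim, e.g. 32 for Digits). The hidden-layer widths here are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of a fully connected VAE generator (assumed widths; latent size from the text).
import tensorflow as tf
from tensorflow import keras

def build_vae(input_dim=784, hidden=256, z_dim=32):
    # Encoder: x -> (mu, log_var)
    x_in = keras.Input(shape=(input_dim,))
    h = keras.layers.Dense(hidden, activation="relu")(x_in)
    h = keras.layers.Dense(hidden, activation="relu")(h)
    z_mu = keras.layers.Dense(z_dim)(h)
    z_logvar = keras.layers.Dense(z_dim)(h)
    encoder = keras.Model(x_in, [z_mu, z_logvar], name="encoder")

    # Decoder: z -> reconstructed x
    z_in = keras.Input(shape=(z_dim,))
    g = keras.layers.Dense(hidden, activation="relu")(z_in)
    g = keras.layers.Dense(hidden, activation="relu")(g)
    x_out = keras.layers.Dense(input_dim, activation="sigmoid")(g)
    decoder = keras.Model(z_in, x_out, name="decoder")
    return encoder, decoder

def sample_images(decoder, n, z_dim=32):
    # Generative replay: draw z ~ N(0, I) and decode into pseudo-samples.
    z = tf.random.normal((n, z_dim))
    return decoder(z)

encoder, decoder = build_vae()
fake_batch = sample_images(decoder, n=16)  # 16 generated (flattened) images
```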
# 8.4. Hyperparameters of DGDMN | 1710.10368#54 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 55 | # 8.4. Hyperparameters of DGDMN
Classifier Models: Our implementation of NN, DropNN, PPR, EWC, learner for DGR and the learner for LTM in DGDMN used a neural network with three fully-connected layers with the number of units tuned differently according to the dataset (24, 24 units for Digits, 48, 48 for Permnist and 36, 36 for TDigits). DropNN also added two dropout layers, one after each hidden layer with dropout rate = 0.2 each. The classifiers (learners) for Shapes and Hindi datasets had two convolutional layers (12, 20 : 3 × 3 kernels for Shapes and 24, 32 : 3 × 3 kernels for Hindi) each followed by a 2 × 2 max-pooling layer. The last two layers were fully-connected (16, 6 for Shapes and 144, 36 for Hindi). The hidden layers used ReLU activations, the last layer had a softmax activation, and the model was trained to minimize
DGDMN has two new hyperparameters: (i) κ: minimum fraction of Nmax reserved for incoming tasks, and (ii) nSTM: number of STTMs (also sleep/consolidation frequency). Both these have straightforward interpretations and can be set directly without complex hyperparameter searches. | 1710.10368#55 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 56 | κ ensures continual incorporation of new tasks by guaranteeing them a minimum fraction of LTM samples during consolidation. Given that LTM should perform well on last K tasks seen in long task sequence of T tasks, we observed that it is safe to assume that about 50% of the LTM would be crowded by the earlier T − K tasks. The remaining 0.5 fraction should be distributed to the last K tasks. So choosing κ = 0.5/K works well in practice (or as a good start
[Figure 11 plot residue omitted; legend: CNN, DropCNN, PPR, EWC, DGR, DGDMN, Joint]
(a) Shapes (b) Hindi
Figure 11: Forgetting curves on Shapes and Hindi dataset (x: tasks seen, y: avg classification accuracy on tasks seen).
(a) | 1710.10368#56 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 57 | ing point for tuning). We made this choice in section 4.2 with K = 10 and κ = 0.05, and hence plotted the average accuracy over the last 10 tasks as a metric.
nSTM controls the consolidation cycle frequency. Increasing nSTM gives more STTMs, less frequent consolidations and hence a learning speed advantage. But this also means that fewer samples of previous tasks would participate in consolidation (due to maximum capacity Nmax of LTM), and hence more forgetting might occur. This parameter does not affect learning much till the LTM remains unsaturated (i.e. Nmax capacity is unfilled by generated + new samples) and becomes active after that. For long sequences of tasks, we found it best to keep at least 75% of the total samples from previously learnt tasks to have appropriate retention. Hence, nSTM can be set as approximately 0.25/κ in practice (as we did in section 4.2), or as a starting point for tuning.
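The two rules of thumb above can be written out as a small helper; this is a sketch of the stated heuristics, not code from the paper.

```python
# Rule-of-thumb hyperparameter settings for DGDMN described above (a sketch).
def dgdmn_defaults(K):
    """K = number of recent tasks the LTM should retain well."""
    kappa = 0.5 / K              # minimum fraction of N_max reserved for incoming tasks
    n_stm = round(0.25 / kappa)  # consolidate after this many STTMs (~75% old samples kept)
    return kappa, n_stm

print(dgdmn_defaults(K=10))  # -> (0.05, 5), the values used in section 4.2
```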
# 8.5. Algorithm specific hyperparameters | 1710.10368#57 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 58 | # 8.5. Algorithm specific hyperparameters
PPR: We used a maximum memory capacity of about 3-6 times the number of samples in a task for the dataset being learnt on (i.e. 18,000 for Digits, 60,000 for Permnist, 15,000 for Shapes and 5,400 for Hindi). While replaying, apart from the task samples, the remaining memory was filled with random samples and corresponding labels.
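A small sketch of the replay-memory behaviour just described; the mechanics here are an assumption for illustration, not the authors' code: keep the current task's samples and fill the remaining capacity with random inputs paired with random labels.

```python
# Sketch of a PPR-style replay memory (hypothetical helper for illustration).
import numpy as np

def build_ppr_memory(task_x, task_y, capacity, num_classes, rng=np.random.default_rng(0)):
    n_fill = max(capacity - len(task_x), 0)
    rand_x = rng.random((n_fill,) + task_x.shape[1:])    # random "images"
    rand_y = rng.integers(0, num_classes, size=n_fill)   # random labels
    mem_x = np.concatenate([task_x, rand_x], axis=0)
    mem_y = np.concatenate([task_y, rand_y], axis=0)
    return mem_x, mem_y

x, y = np.random.rand(6000, 28, 28), np.random.randint(0, 10, 6000)
mem_x, mem_y = build_ppr_memory(x, y, capacity=18000, num_classes=10)  # e.g. Digits: 18,000
print(mem_x.shape, mem_y.shape)  # (18000, 28, 28) (18000,)
```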
(b)
Figure 12: t-SNE embedding for latent vectors of the VAE generator on Digits dataset when: (a) tasks are learnt jointly, and (b) tasks are learnt sequentially.
experiments. | 1710.10368#58 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 59 | experiments.
DGR and DGDMN: Nmax for the DGM in DGR and for the LTM in DGDMN for Digits, Permnist, Shapes and Hindi was set as the total number of samples in the datasets (summed over all tasks) to ensure that there was enough capacity to regenerate the datasets well. For TDigits, we deliberately restricted memory capacity to see the effects of learning tasks over a long time and we kept Nmax as half the total number of samples. nSTM was kept at 2 for Digits, Permnist and Shapes, 5 for TDigits and 2 for Hindi. κ was set to be small, so that it does not come into play for Digits, Permnist, Shapes and Hindi since we already provided memories with full capacity for all samples. For TDigits, we used κ = 0.05 which would let us incorporate roughly 10 out of the 40 tasks well.
EWC: Most values of the coefficient of the Fisher Information Matrix based regularizer between 1 and 500 worked reasonably well for our datasets. We chose 100 for our
| 1710.10368#59 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10368 | 60 | Deep Generative Dual Memory Network for Continual Learning
[Figure residue omitted: grids of generated digit samples, panels (a)/(b), and per-task accuracy legend for Tasks 1-10] | 1710.10368#60 | Deep Generative Dual Memory Network for Continual Learning | Despite advances in deep learning, neural networks can only learn multiple
tasks when trained on them jointly. When tasks arrive sequentially, they lose
performance on previously learnt tasks. This phenomenon called catastrophic
forgetting is a fundamental challenge to overcome before neural networks can
learn continually from incoming data. In this work, we derive inspiration from
human memory to develop an architecture capable of learning continuously from
sequentially incoming tasks, while averting catastrophic forgetting.
Specifically, our contributions are: (i) a dual memory architecture emulating
the complementary learning systems (hippocampus and the neocortex) in the human
brain, (ii) memory consolidation via generative replay of past experiences,
(iii) demonstrating advantages of generative replay and dual memories via
experiments, and (iv) improved performance retention on challenging tasks even
for low capacity models. Our architecture displays many characteristics of the
mammalian memory and provides insights on the connection between sleep and
learning. | http://arxiv.org/pdf/1710.10368 | Nitin Kamra, Umang Gupta, Yan Liu | cs.LG | null | null | cs.LG | 20171028 | 20180525 | [
{
"id": "1703.08110"
},
{
"id": "1511.06295"
},
{
"id": "1701.08734"
},
{
"id": "1701.04722"
},
{
"id": "1606.04671"
}
] |
1710.10196 | 1 | # Timo Aila NVIDIA
# Samuli Laine NVIDIA
# Jaakko Lehtinen NVIDIA and Aalto University
# ABSTRACT
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset.
# INTRODUCTION | 1710.10196#1 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 1 | # ABSTRACT
Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learn across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the-art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset.
# INTRODUCTION | 1710.10304#1 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 2 | Generative methods that produce novel samples from high-dimensional data distributions, such as images, are finding widespread use, for example in speech synthesis (van den Oord et al., 2016a), image-to-image translation (Zhu et al., 2017; Liu et al., 2017; Wang et al., 2017), and image inpainting (Iizuka et al., 2017). Currently the most prominent approaches are autoregressive models (van den Oord et al., 2016b;c), variational autoencoders (VAE) (Kingma & Welling, 2014), and generative adversarial networks (GAN) (Goodfellow et al., 2014). Currently they all have significant strengths and weaknesses. Autoregressive models – such as PixelCNN – produce sharp images but are slow to evaluate and do not have a latent representation as they directly model the conditional distribution over pixels, potentially limiting their applicability. VAEs are easy to train but tend to produce blurry results due to restrictions in the model, although recent work is improving this (Kingma et al., 2016). GANs produce sharp images, albeit only in fairly small resolutions and with somewhat limited variation, and the training continues to | 1710.10196#2 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 2 | # INTRODUCTION
Contemporary machine learning systems are still far behind humans in their ability to rapidly learn new visual concepts from only a few examples (Lake et al., 2013). This setting, called few-shot learning, has been studied using deep neural networks and many other approaches in the context of discriminative models, for example Vinyals et al. (2016); Santoro et al. (2016). However, comparatively little attention has been devoted to the task of few-shot image density estimation; that is, the problem of learning a model of a probability distribution from a small number of examples. Below we motivate our study of few-shot autoregressive models, their connection to meta-learning, and provide a comparison of multiple approaches to conditioning in neural density models.
WHY AUTOREGRESSIVE MODELS?
Autoregressive neural networks are useful for studying few-shot density estimation for several reasons. They are fast and stable to train, easy to implement, and have tractable likelihoods, allowing us to quantitatively compare a large number of model variants in an objective manner. Therefore we can easily add complexity in orthogonal directions to the generative model itself.
Autoregressive image models factorize the joint distribution into per-pixel factors:
P(x|s; θ) = ∏_{t=1}^{N} P(x_t | x_{<t}, f(s); θ)   (1) | 1710.10304#2 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 3 | this (Kingma et al., 2016). GANs produce sharp images, albeit only in fairly small resolutions and with somewhat limited variation, and the training continues to be unstable despite recent progress (Salimans et al., 2016; Gulrajani et al., 2017; Berthelot et al., 2017; Kodali et al., 2017). Hybrid methods combine various strengths of the three, but so far lag behind GANs in image quality (Makhzani & Frey, 2017; Ulyanov et al., 2017; Dumoulin et al., 2016). | 1710.10196#3 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 3 | P(x|s; θ) = ∏_{t=1}^{N} P(x_t | x_{<t}, f(s); θ)   (1)
where θ are the model parameters, x ∈ R^N are the image pixels, s is a conditioning variable, and f is a function encoding this conditioning variable. For example in text-to-image synthesis, s would be an image caption and f could be a convolutional or recurrent encoder network, as in Reed et al. (2016). In label-conditional image generation, s would be the discrete class label and f could simply convert s to a one-hot encoding possibly followed by an MLP.
A straightforward approach to few-shot density estimation would be to simply treat samples from the target distribution as conditioning variables for the model. That is, let s correspond to a few data examples illustrating a concept. For example, s may consist of four images depicting bears, and the task is then to generate an image x of a bear, or to compute its probability P (x|s; θ).
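A toy illustration of the factorization in Eq. (1): the log-likelihood of an image is the sum of per-pixel conditional log-probabilities given the preceding pixels and an encoding f(s) of the support set. The "model" below is a stand-in linear predictor, not a real PixelCNN; all parameter names are hypothetical.

```python
# Toy per-pixel factorization of log P(x | s), with a stand-in conditional model.
import numpy as np

def log_p_x_given_s(x, s, w_ctx, w_supp, eps=1e-9):
    """x: flattened binary image (N,), s: support images (k, N). Returns log P(x | s)."""
    f_s = s.mean(axis=0)                       # f(s): a trivial support-set encoder
    total = 0.0
    for t in range(len(x)):
        ctx = x[:t].sum()                      # crude summary of pixels x_{<t}
        logit = w_ctx * ctx + w_supp * f_s[t]  # conditional logit for pixel t
        p1 = 1.0 / (1.0 + np.exp(-logit))      # P(x_t = 1 | x_{<t}, f(s))
        p = p1 if x[t] == 1 else 1.0 - p1
        total += np.log(p + eps)
    return total

rng = np.random.default_rng(0)
support = rng.integers(0, 2, size=(4, 16)).astype(float)  # 4 tiny "images" of 16 pixels
target = rng.integers(0, 2, size=16)
print(log_p_x_given_s(target, support, w_ctx=0.01, w_supp=2.0))
```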
A learned conditional density model that conditions on samples from its target distribution is in fact learning a learning algorithm, embedded into the weights of the network. This learning algorithm is executed by a feed-forward pass through the network encoding the target distribution samples.
WHY LEARN TO LEARN DISTRIBUTIONS? | 1710.10304#3 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 4 | Typically, a GAN consists of two networks: generator and discriminator (aka critic). The generator produces a sample, e.g., an image, from a latent code, and the distribution of these images should ideally be indistinguishable from the training distribution. Since it is generally infeasible to engineer a function that tells whether that is the case, a discriminator network is trained to do the assessment, and since networks are differentiable, we also get a gradient we can use to steer both networks to the right direction. Typically, the generator is of main interest – the discriminator is an adaptive loss function that gets discarded once the generator has been trained.
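A minimal sketch of the two-network setup described above, as a standard non-saturating GAN update on toy data; this is not this paper's WGAN-GP or progressive setup, and all network sizes are arbitrary assumptions.

```python
# Minimal generator/discriminator training step (toy, non-saturating GAN loss).
import tensorflow as tf
from tensorflow import keras

G = keras.Sequential([keras.Input(shape=(8,)), keras.layers.Dense(16, activation="relu"),
                      keras.layers.Dense(2)])
D = keras.Sequential([keras.Input(shape=(2,)), keras.layers.Dense(16, activation="relu"),
                      keras.layers.Dense(1)])
g_opt, d_opt = keras.optimizers.Adam(1e-3), keras.optimizers.Adam(1e-3)
bce = keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(real):
    z = tf.random.normal((tf.shape(real)[0], 8))
    with tf.GradientTape() as dt, tf.GradientTape() as gt:
        fake = G(z)
        d_real, d_fake = D(real), D(fake)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)  # generator's gradient flows through D
    d_opt.apply_gradients(zip(dt.gradient(d_loss, D.trainable_variables), D.trainable_variables))
    g_opt.apply_gradients(zip(gt.gradient(g_loss, G.trainable_variables), G.trainable_variables))
    return float(d_loss), float(g_loss)

print(train_step(tf.random.normal((64, 2))))
```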
There are multiple potential problems with this formulation. When we measure the distance between the training distribution and the generated distribution, the gradients can point to more or less random directions if the distributions do not have substantial overlap, i.e., are too easy to tell apart (Arjovsky & Bottou, 2017). Originally, Jensen-Shannon divergence was used as a distance metric (Goodfellow et al., 2014), and recently that formulation has been improved (Hjelm et al., 2017) and a number of more stable alternatives have been proposed, including least squares (Mao et al., 2016b), absolute deviation with margin (Zhao et al., 2017), and Wasserstein distance (Arjovsky et al., 2017; Gulrajani
| 1710.10196#4 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 4 | WHY LEARN TO LEARN DISTRIBUTIONS?
If the number of training samples from a target distribution is tiny, then using standard gradient descent to train a deep network from scratch or even fine-tuning is likely to result in memorization of the samples; there is little reason to expect generalization. Therefore what is needed is a learning algorithm that can be expected to work on tiny training sets. Since designing such an algorithm has thus far proven to be challenging, one could try to learn the algorithm itself. In general this may be impossible, but if there is shared underlying structure among the set of target distributions, this learning algorithm can be learned from experience as we show in this paper.
For our purposes, it is instructive to think of learning to learn as two nested learning problems, where the inner learning problem is less constrained than the outer one. For example, the inner learning problem may be unsupervised while the outer one may be supervised. Similarly, the inner learning problem may involve only a few data points. In this latter case, the aim is to meta-learn a model that when deployed is able to infer, generate or learn rapidly using few data s. | 1710.10304#4 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 5 | 1
et al., 2017). Our contributions are largely orthogonal to this ongoing discussion, and we primarily use the improved Wasserstein loss, but also experiment with least-squares loss.
The generation of high-resolution images is difficult because higher resolution makes it easier to tell the generated images apart from training images (Odena et al., 2017), thus drastically amplifying the gradient problem. Large resolutions also necessitate using smaller minibatches due to memory constraints, further compromising training stability. Our key insight is that we can grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions, as we will discuss in Section 2. | 1710.10196#5 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 5 | A rough analogy can be made to evolution: a slow and expensive meta-learning process, which has resulted in life-forms that at birth already have priors that facilitate rapid learning and inductive leaps. Understanding the exact form of the priors is an active, very challenging, area of research (Spelke & Kinzler, 2007; Smith & Gasser, 2005). From this research perspective, we can think of meta-learning as a potential data-driven alternative to hand engineering priors.
The meta-learning process can be undertaken using large amounts of computation and data. The output is however a model that can learn from few data. This facilitates the deployment of models in resource-constrained computing devices, e.g. mobile phones, to learn from few data. This may prove to be very important for protection of private data s and for personalisation.
FEW-SHOT LEARNING AS INFERENCE OR AS A WEIGHT UPDATE?
A sample-conditional density model Pθ(x|s) treats meta-learning as inference; the conditioning samples s vary but the model parameters θ are fixed. A standard MLP or convolutional network can parameterize the sample encoding (i.e. meta-learning) component, or an attention mechanism can be used, which we will refer to as PixelCNN and Attention PixelCNN, respectively. | 1710.10304#5 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 6 | The GAN formulation does not explicitly require the entire training data distribution to be represented by the resulting generative model. The conventional wisdom has been that there is a tradeoff between image quality and variation, but that view has been recently challenged (Odena et al., 2017). The degree of preserved variation is currently receiving attention and various methods have been suggested for measuring it, including inception score (Salimans et al., 2016), multi-scale structural similarity (MS-SSIM) (Odena et al., 2017; Wang et al., 2003), birthday paradox (Arora & Zhang, 2017), and explicit tests for the number of discrete modes discovered (Metz et al., 2016). We will describe our method for encouraging variation in Section 3, and propose a new metric for evaluating the quality and variation in Section 5.
Section 4.1 discusses a subtle modification to the initialization of networks, leading to a more balanced learning speed for different layers. Furthermore, we observe that mode collapses traditionally plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2). | 1710.10196#6 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 6 | A very different approach to meta-learning is taken by Ravi & Larochelle (2016) and Finn et al. (2017a), who instead learn unconditional models that adapt their weights based on a gradient step computed on the few-shot samples. This same approach can also be taken with PixelCNN: train an unconditional network Pθ′(x) that is implicitly conditioned by a previous gradient ascent step on log Pθ(s); that is, θ′ = θ + α∇θ log Pθ(s). We will refer to this as Meta PixelCNN.
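A toy numeric illustration of the inner adaptation step behind Meta PixelCNN, assuming the gradient-ascent form θ′ = θ + α∇θ log Pθ(s): a one-parameter Gaussian stands in for the PixelCNN, so this is purely illustrative and not the paper's model.

```python
# Toy inner-loop adaptation: one gradient-ascent step on the support set's log-likelihood.
import numpy as np

def gaussian_loglik(x, mu, sigma=1.0):
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)))

def adapt(mu, support, alpha=0.1):
    grad_mu = np.sum(support - mu)   # d/d_mu of the Gaussian log-likelihood (sigma = 1)
    return mu + alpha * grad_mu      # theta' = theta + alpha * grad log P_theta(s)

mu0 = 0.0                            # meta-learned initialization
support = np.array([2.1, 1.9, 2.3])  # few-shot samples from the target concept
mu_adapted = adapt(mu0, support)
print(mu_adapted, gaussian_loglik(np.array([2.0]), mu_adapted))
```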
In Section 2 we connect our work to previous attentive autoregressive models, as well as to work on gradient based meta-learning. In Section 3 we describe Attention PixelCNN and Meta PixelCNN in greater detail. We show how attention can improve performance in the few-shot density estimation problem by enabling the model to easily transmit texture information from the support set onto the target image canvas. In Section 4 we compare several few-shot PixelCNN variants on simple image mirroring, Omniglot and Stanford Online Products. We show that both gradient-based and attention-based few-shot PixelCNN can learn to learn simple distributions, and both achieve state-of-the-art likelihoods on Omniglot.
# 2 RELATED WORK | 1710.10304#6 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 7 | We evaluate our contributions using the CELEBA, LSUN, CIFAR10 datasets. We improve the best published inception score for CIFAR10. Since the datasets commonly used in benchmarking generative methods are limited to a fairly low resolution, we have also created a higher quality version of the CELEBA dataset that allows experimentation with output resolutions up to 1024 × 1024 pixels. This dataset and our full implementation are available at https://github.com/tkarras/progressive_growing_of_gans, trained networks can be found at https://drive.google.com/open?id=0B4qLcYyJmiz0NHFULTdYc05lX0U along with result images, and a supplementary video illustrating the datasets, additional results, and latent space interpolations is at https://youtu.be/G06dEcZ-QTg.
# 2 PROGRESSIVE GROWING OF GANS
Our primary contribution is a training methodology for GANs where we start with low-resolution images, and then progressively increase the resolution by adding layers to the networks as visualized in Figure 1. This incremental nature allows the training to first discover large-scale structure of the image distribution and then shift attention to increasingly finer scale detail, instead of having to learn all scales simultaneously. | 1710.10196#7 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 7 | # 2 RELATED WORK
Learning to learn or meta-learning has been studied in cognitive science and machine learning for decades (Harlow, 1949; Thrun & Pratt, 1998; Hochreiter et al., 2001). In the context of modern deep networks, Andrychowicz et al. (2016) learned a gradient descent optimizer by gradient descent, itself parameterized as a recurrent network. Chen et al. (2017) showed how to learn to learn by gradient descent in the black-box optimization setting.
Ravi & Larochelle (2017) showed the effectiveness of learning an optimizer in the few-shot learning setting. Finn et al. (2017a) advanced a simplified yet effective variation in which the optimizer is not learned but rather fixed as one or a few steps of gradient descent, and the meta-learning problem reduces to learning an initial set of base parameters θ that can be adapted to minimize any task loss Lτ by a single step of gradient descent, i.e. θ′ = θ − α∇Lτ(θ). This approach was further shown to be effective in imitation learning including on real robotic manipulation tasks (Finn et al., 2017b). Shyam et al. (2017) train a neural attentive recurrent comparator function to perform one-shot classification on Omniglot. | 1710.10304#7 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 8 |

We use generator and discriminator networks that are mirror images of each other and always grow in synchrony. All existing layers in both networks remain trainable throughout the training process. When new layers are added to the networks, we fade them in smoothly, as illustrated in Figure 2. This avoids sudden shocks to the already well-trained, smaller-resolution layers. Appendix A describes the structure of the generator and discriminator in detail, along with other training parameters.
We observe that the progressive training has several benefits. Early on, the generation of smaller images is substantially more stable because there is less class information and fewer modes (Odena et al., 2017). By increasing the resolution little by little we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to e.g. 1024^2 images. This approach has conceptual similarity to recent work by Chen & Koltun (2017). In practice it stabilizes the training sufficiently for us to reliably synthesize megapixel-scale images using WGAN-GP loss (Gulrajani et al., 2017) and even LSGAN loss (Mao et al., 2016b).
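The following is a minimal NumPy sketch (not the paper's implementation; function and variable names are illustrative) of the fade-in blending used when a new resolution is introduced: the upsampled output of the previous resolution and the output of the new layers are mixed with a weight alpha that ramps linearly from 0 to 1.

```python
import numpy as np

def upscale_2x(x):
    # Nearest-neighbour upsampling of an NCHW batch, as used when growing the resolution.
    return x.repeat(2, axis=2).repeat(2, axis=3)

def faded_generator_output(low_res_rgb, high_res_rgb, alpha):
    """Blend the old (upsampled) low-resolution RGB output with the new
    high-resolution branch; alpha ramps linearly from 0 to 1 during the transition."""
    return (1.0 - alpha) * upscale_2x(low_res_rgb) + alpha * high_res_rgb

# toy usage: a 16x16 output fading into a 32x32 output
low = np.random.randn(4, 3, 16, 16)
high = np.random.randn(4, 3, 32, 32)
blended = faded_generator_output(low, high, alpha=0.3)
print(blended.shape)  # (4, 3, 32, 32)
```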
Published as a conference paper at ICLR 2018 | 1710.10196#8 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 8 |

Few-shot density estimation has been studied previously using matching networks (Bartunov & Vetrov, 2016) and variational autoencoders (VAEs). Bornschein et al. (2017) apply variational inference to memory addressing, treating the memory address as a latent variable. Rezende et al. (2016) develop a sequential generative model for few-shot learning, generalizing the Deep Recurrent Attention Writer (DRAW) model (Gregor et al., 2015). In this work, our focus is on extending autoregressive models to the few-shot setting, in particular PixelCNN (van den Oord et al., 2016).
Autoregressive (over time) models with attention are well-established in language tasks. Bahdanau et al. (2014) developed an attention-based network for machine translation. This work inspired a wave of recurrent attention models for other applications. Xu et al. (2015) used visual attention to produce higher-quality and more interpretable image captioning systems. This type of model has also been applied in motor control, for the purpose of imitation learning. Duan et al. (2017) learn a policy for robotic block stacking conditioned on a small number of demonstration trajectories. | 1710.10304#8 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10304 | 9 |

Gehring et al. (2017) developed convolutional machine translation models augmented with attention over the input sentence. A nice property of this model is that all attention operations can be batched over time, because one does not need to unroll a recurrent net during training. Our attentive PixelCNN is similar in high-level design, but our data is pixels rather than words, and 2D instead of 1D, and we consider image generation rather than text generation as our task.
# 3 MODEL
# 3.1 FEW-SHOT LEARNING WITH ATTENTION PIXELCNN
In this section we describe the model, which we refer to as Attention PixelCNN. At a high level, it works as follows: at the point of generating every pixel, the network queries a memory. This memory can consist of anything, but in this work it will be a support set of images of a visual concept. In addition to global features derived from these support images, the network has access to textures via support image patches. Figure 2 illustrates the attention mechanism. | 1710.10304#9 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 10 |

[Figure 1 diagram: the generator (G) and discriminator (D) grow from 4x4 to 1024x1024 resolution as training progresses; see caption below.]
Figure 1: Our training starts with both the generator (G) and discriminator (D) having a low spatial resolution of 4×4 pixels. As the training advances, we incrementally add layers to G and D, thus increasing the spatial resolution of the generated images. All existing layers remain trainable throughout the process. Here N × N refers to convolutional layers operating on N × N spatial resolution. This allows stable synthesis in high resolutions and also speeds up training considerably. On the right we show six example images generated using progressive growing at 1024 × 1024.
Another benefit is the reduced training time. With progressively growing GANs most of the iterations are done at lower resolutions, and comparable result quality is often obtained up to 2–6 times faster, depending on the final output resolution. | 1710.10196#10 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 10 |

In previous conditional PixelCNN works, the encoding f(s) was shared across all pixels. However, this can be sub-optimal for several reasons. First, at different points of generating the target image x, different aspects of the support images may become relevant. Second, it can make learning difficult, because the network will need to encode the entire support set of images into a single global conditioning vector, fed to every output pixel. This single vector would need to transmit information across all pairs of salient regions in the supporting images and the target image.
Figure 1: Sampling from Attention PixelCNN. Support images are overlaid in red to indicate the attention weights. The support sets can be viewed as small training sets, illustrating the connection between sample-conditional density estimation and learning to learn distributions.
To overcome this difficulty, we propose to replace the simple encoder function f(s) with a context-sensitive attention mechanism f_t(s, x_<t). It produces an encoding of the context that depends on the image generated up until the current step t. The weights are shared over t. | 1710.10304#10 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 11 |

The idea of growing GANs progressively is related to the work of Wang et al. (2017), who use multiple discriminators that operate on different spatial resolutions. That work in turn is motivated by Durugkar et al. (2016) who use one generator and multiple discriminators concurrently, and Ghosh et al. (2017) who do the opposite with multiple generators and one discriminator. Hierarchical GANs (Denton et al., 2015; Huang et al., 2016; Zhang et al., 2017) define a generator and discriminator for each level of an image pyramid. These methods build on the same observation as our work – that the complex mapping from latents to high-resolution images is easier to learn in steps – but the crucial difference is that we have only a single GAN instead of a hierarchy of them. In contrast to early work on adaptively growing networks, e.g., growing neural gas (Fritzke, 1995) and neuro evolution of augmenting topologies (Stanley & Miikkulainen, 2002) that grow networks greedily, we simply defer the introduction of pre-configured layers. In that sense our approach resembles layer-wise training of autoencoders (Bengio et al., 2007).
# INCREASING VARIATION USING MINIBATCH STANDARD DEVIATION | 1710.10196#11 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 11 |

We will use the following notation. Let the target image be x ∈ R^{H×W×3} and the support set images be s ∈ R^{S×H×W×3}, where S is the number of supports.
To capture texture information, we encode all supporting images with a shallow convolutional network, typically only two layers. Each hidden unit of the resulting feature map will have a small receptive field, e.g. corresponding to a 10 × 10 patch in a support set image. We encode these support images into a set of spatially-indexed key and value vectors.
Figure 2: The PixelCNN attention mechanism.
After encoding the support images in parallel, we reshape the resulting S × K × K × 2P feature maps to squeeze out the spatial dimensions, resulting in an SK² × 2P matrix.
p = f_patch(s) = reshape(CNN(s), [SK² × 2P])   (2) | 1710.10304#11 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 12 | # INCREASING VARIATION USING MINIBATCH STANDARD DEVIATION
GANs have a tendency to capture only a subset of the variation found in training data, and Salimans et al. (2016) suggest "minibatch discrimination" as a solution. They compute feature statistics not only from individual images but also across the minibatch, thus encouraging the minibatches of generated and training images to show similar statistics. This is implemented by adding a minibatch layer towards the end of the discriminator, where the layer learns a large tensor that projects the input activation to an array of statistics. A separate set of statistics is produced for each example in a minibatch and it is concatenated to the layer's output, so that the discriminator can use the statistics internally. We simplify this approach drastically while also improving the variation. | 1710.10196#12 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 12 |

p = f_patch(s) = reshape(CNN(s), [SK² × 2P])   (2)
p_key = p[:, 0:P],   p_value = p[:, P:2P]   (3)
where CNN is a shallow convolutional network. We take the first P channels as the patch key vectors p_key ∈ R^{SK²×P} and the second P channels as the patch value vectors p_value ∈ R^{SK²×P}. Together these form a queryable memory for image generation.
To query this memory, we need to encode both the global context from the support set s as well as the pixels x<t generated so far. We can obtain these features simply by taking any layer of a PixelCNN conditioned on the support set:
q_t = PixelCNN_L(f(s), x_<t),   (4)
where L is the desired layer of hidden unit activations within the PixelCNN network. In practice we use the middle layer.
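A compact NumPy sketch of the attention read built from these keys, values and queries (the scoring and weighted sum are given in equations (5)–(7) below). Random features stand in for the patch encoder and the PixelCNN layer, so this is an illustration rather than the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_read(q_t, p_key, p_value, v):
    """One attention query:
    q_t:     (P,)       query features for the current pixel
    p_key:   (S*K*K, P) patch key vectors
    p_value: (S*K*K, P) patch value vectors
    v:       (P,)       learned scoring vector
    Returns the attention-gated context of shape (P,)."""
    scores = np.tanh(q_t[None, :] + p_key) @ v   # e_tj = v^T tanh(q_t + p_key_j)
    alpha = softmax(scores)                      # normalized matching scores
    return alpha @ p_value                       # sum_j alpha_tj * p_value_j

# toy sizes: S=4 supports, K=6 spatial positions per side, P=32 channels
S, K, P = 4, 6, 32
p = np.random.randn(S * K * K, 2 * P)            # reshaped patch features (Eq. 2)
p_key, p_value = p[:, :P], p[:, P:]              # Eq. (3)
q_t = np.random.randn(P)                         # query from a mid PixelCNN layer (Eq. 4)
v = np.random.randn(P)
context = attention_read(q_t, p_key, p_value, v)
print(context.shape)  # (32,)
```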
To incorporate the patch attention features into the pixel predictions, we build a scoring function using q and p_key. Following the design proposed by Bahdanau et al. (2014), we compute a normalized matching score α_tj between query pixel q_t and supporting patch p_key_j as
e_tj = v^T tanh(q_t + p_key_j)   (5)
α_tj = exp(e_tj) / Σ_{k=1}^{SK²} exp(e_tk)   (6) | 1710.10304#12 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 13 |

Our simplified solution has neither learnable parameters nor new hyperparameters. We first compute the standard deviation for each feature in each spatial location over the minibatch. We then average these estimates over all features and spatial locations to arrive at a single value. We replicate the value and concatenate it to all spatial locations and over the minibatch, yielding one additional (constant) feature map. This layer could be inserted anywhere in the discriminator, but we have found it best to insert it towards the end (see Appendix A.1 for details). We experimented with a richer set of statistics, but were not able to improve the variation further. In parallel work, Lin et al. (2017) provide theoretical insights about the benefits of showing multiple images to the discriminator.
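A minimal NumPy sketch of this simplified minibatch statistic (an illustration of the description above, not the authors' implementation):

```python
import numpy as np

def minibatch_stddev_feature(x, eps=1e-8):
    """x: activations of shape (N, C, H, W).
    Appends one constant feature map containing the average, over features and
    spatial locations, of the per-location standard deviation across the minibatch."""
    std_per_location = np.sqrt(x.var(axis=0) + eps)   # (C, H, W): stddev over the batch
    mean_std = std_per_location.mean()                # a single scalar
    extra = np.full((x.shape[0], 1, x.shape[2], x.shape[3]), mean_std, dtype=x.dtype)
    return np.concatenate([x, extra], axis=1)         # (N, C+1, H, W)

acts = np.random.randn(16, 128, 4, 4)
print(minibatch_stddev_feature(acts).shape)  # (16, 129, 4, 4)
```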
[Figure 2 diagram: fade-in of the new higher-resolution layers during a resolution transition, stages (a)–(c); see the caption in the next chunk.] | 1710.10196#13 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 13 | ) j (5)
# ey = ov" tanh(q, + prâ) ong = exp(erj)/ DES
k=1 exp(eik). (6)
The resulting attention-gated context function can be written as:
3 SK? value Fels, ver) = CN anjpyaâ «)
which can be substituted into the objective in equation 1. context features ft(s, x<t) with global context features f (s) by channel-wise concatenation.
This attention mechanism can also be straightforwardly applied to the multiscale PixelCNN archi- tecture of Reed et al. (2017). In that model, pixel factors P (xt|x<t, ft(s, x<t)) are simply replaced by pixel group factors P (xg|x<g, fg(s, x<g)), where g indexes a set of pixels and < g indicates all pixels in previous pixel groups, including previously-generated lower resolutions.
We ï¬nd that a few simple modiï¬cations to the above design can signiï¬cantly improve performance. First, we can augment the supporting images with a channel encoding relative position within the image, normalized to [â1, 1]. One channel is added for x-position, another for y-position. When
4
Published as a conference paper at ICLR 2018 | 1710.10304#13 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 14 | Figure 2: When doubling the resolution of the generator (G) and discriminator (D) we fade in the new layers smoothly. This example illustrates the transition from 16 à 16 images (a) to 32 à 32 images (c). During the transition (b) we treat the layers that operate on the higher resolution like a residual block, whose weight α increases linearly from 0 to 1. Here 2à and 0.5à refer to doubling and halving the image resolution using nearest neighbor ï¬ltering and average pooling, respectively. The toRGB represents a layer that projects feature vectors to RGB colors and fromRGB does the reverse; both use 1 à 1 convolutions. When training the discriminator, we feed in real images that are downscaled to match the current resolution of the network. During a resolution transition, we interpolate between two resolutions of the real images, similarly to how the generator output combines two resolutions. | 1710.10196#14 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 14 | 4
patch features are extracted, position information is thus encoded, which may help the network assemble the output image. Second, we add a 1-of-K channel for the supporting image label, where K is the number of supporting images. This provides the patch encodings with information about which global context they were extracted from, which may be useful e.g. when assembling patches from multiple views of an object.
3.2 FEW-SHOT LEARNING WITH META PIXELCNN
As an alternative to explicit conditioning with attention, in this section we propose an implicitly-conditioned version using gradient descent. This is an instance of what Finn et al. (2017a) called model-agnostic meta learning, because it works in the same way regardless of the network architecture. The conditioning pathway (i.e. flow of information from supports s to the next pixel x_t) introduces no additional parameters. The objective to minimize is as follows:
L(x, ;0) = âlog P(x; 6"), where 0â = 6 â aV6Linner(s; 9) (8) | 1710.10304#14 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 15 | Alternative solutions to the variation problem include unrolling the discriminator (Metz et al., 2016) to regularize its updates, and a ârepelling regularizerâ (Zhao et al., 2017) that adds a new loss term to the generator, trying to encourage it to orthogonalize the feature vectors in a minibatch. The multiple generators of Ghosh et al. (2017) also serve a similar goal. We acknowledge that these solutions may increase the variation even more than our solution â or possibly be orthogonal to it â but leave a detailed comparison to a later time.
# 4 NORMALIZATION IN GENERATOR AND DISCRIMINATOR
GANs are prone to the escalation of signal magnitudes as a result of unhealthy competition between the two networks. Most if not all earlier solutions discourage this by using a variant of batch nor- malization (Ioffe & Szegedy, 2015; Salimans & Kingma, 2016; Ba et al., 2016) in the generator, and often also in the discriminator. These normalization methods were originally introduced to elimi- nate covariate shift. However, we have not observed that to be an issue in GANs, and thus believe that the actual need in GANs is constraining signal magnitudes and competition. We use a different approach that consists of two ingredients, neither of which include learnable parameters. | 1710.10196#15 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 15 | L(x, ;0) = âlog P(x; 6"), where 0â = 6 â aV6Linner(s; 9) (8)
A natural choice for the inner objective would be L_inner(s; θ) = −log P(s; θ). However, as shown in Finn et al. (2017b) and similar to the setup in Neu & Szepesvári (2012), we actually have considerable flexibility here to make the inner and outer objectives different.
Any learnable function of s and θ could potentially learn to produce gradients that increase log P(x; θ′). In particular, this function does not need to compute log likelihood, and does not even need to respect the causal ordering of pixels implied by the chain rule factorization in equation 1. Effectively, the model can learn to learn by maximum likelihood without likelihoods.
As input features for computing Linner(s, θ), we use the L-th layer of spatial features q = PixelCNNL(s, θ) â RSÃHÃW ÃZ, where S is the number of support images - acting as the batch dimension - and Z is the number of feature channels used in the PixelCNN. Note that this is the same network used to model P (x; θ). | 1710.10304#15 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 16 | 4.1 EQUALIZED LEARNING RATE
We deviate from the current trend of careful weight initialization, and instead use a trivial N(0, 1) initialization and then explicitly scale the weights at runtime. To be precise, we set ŵ_i = w_i/c, where w_i are the weights and c is the per-layer normalization constant from He's initializer (He et al., 2015). The benefit of doing this dynamically instead of during initialization is somewhat subtle, and relates to the scale-invariance in commonly used adaptive stochastic gradient descent methods such as RMSProp (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015). These methods normalize a gradient update by its estimated standard deviation, thus making the update independent of the scale of the parameter. As a result, if some parameters have a larger dynamic range than others, they will take longer to adjust. This is a scenario modern initializers cause, and thus it is possible that a learning rate is both too large and too small at the same time. Our approach ensures that the dynamic range, and thus the learning speed, is the same for all weights. A similar reasoning was independently used by van Laarhoven (2017).
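A small NumPy sketch of the equalized learning rate idea, under the common reading that the runtime scale equals the standard deviation sqrt(2/fan_in) prescribed by He's initializer; the helper names and the exact scaling convention are illustrative assumptions:

```python
import numpy as np

def he_constant(fan_in, gain=np.sqrt(2.0)):
    # Per-layer scaling constant derived from He's initializer (std = sqrt(2 / fan_in)).
    return gain / np.sqrt(fan_in)

def equalized_conv_weight(shape, rng=np.random):
    """Store weights as N(0, 1) and apply the per-layer constant at runtime,
    so every layer sees the same effective dynamic range under Adam/RMSProp."""
    w = rng.standard_normal(shape)         # trivial N(0, 1) initialization (stored weights)
    fan_in = int(np.prod(shape[1:]))       # in_channels * kH * kW for (out, in, kH, kW)
    return w * he_constant(fan_in), w      # (runtime-scaled weights, raw stored weights)

w_hat, w_raw = equalized_conv_weight((256, 128, 3, 3))
print(w_hat.std(), w_raw.std())
```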
4.2 PIXELWISE FEATURE VECTOR NORMALIZATION IN GENERATOR | 1710.10196#16 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 16 |

The features q are fed through a convolutional network g (whose parameters are also included in θ) producing a scalar, which is treated as the learned inner loss L_inner. In practice, we used α = 0.1, and the encoder had three layers of stride-2 convolutions with 3 × 3 kernels, followed by the L2 norm of the final layer features. Since these convolutional weights are part of θ, they are learned jointly with the generative model weights by minimizing equation 8.
Algorithm 1 Meta PixelCNN training 1: θ: Randomly initialized model parameters 2: p(s, x) : Distribution over support sets and target outputs. 3: while not done do {si, xi}M 4: for all si, xi do 5: 6: 7: | 1710.10304#16 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 17 | 4
4.2 PIXELWISE FEATURE VECTOR NORMALIZATION IN GENERATOR
To disallow the scenario where the magnitudes in the generator and discriminator spiral out of control as a result of competition, we normalize the feature vector in each pixel to unit length in the generator after each convolutional layer. We do this using a variant of "local response normalization" (Krizhevsky et al., 2012), configured as b_{x,y} = a_{x,y} / sqrt( (1/N) Σ_{j=0}^{N-1} (a_{x,y}^j)² + ε ), where ε = 10^{-8}, N is the number of feature maps, and a_{x,y} and b_{x,y} are the original and normalized feature vector in pixel (x, y), respectively. We find it surprising that this heavy-handed constraint does not seem to harm the generator in any way, and indeed with most datasets it does not change the results much, but it prevents the escalation of signal magnitudes very effectively when needed.
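A minimal NumPy sketch of this pixelwise feature vector normalization (illustrative, not the authors' code):

```python
import numpy as np

def pixel_norm(x, eps=1e-8):
    """Normalize the feature vector in each pixel to approximately unit length.
    x: (N, C, H, W); divides by the root-mean-square over the channel axis."""
    return x / np.sqrt((x ** 2).mean(axis=1, keepdims=True) + eps)

feats = np.random.randn(8, 512, 16, 16)
out = pixel_norm(feats)
print(np.sqrt((out ** 2).mean(axis=1)).round(3)[0, :2, :2])  # ~1.0 everywhere
```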
# 5 MULTI-SCALE STATISTICAL SIMILARITY FOR ASSESSING GAN RESULTS | 1710.10196#17 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 17 |

1: θ: Randomly initialized model parameters
2: p(s, x): Distribution over support sets and target outputs.
3: while not done do  > Training loop
4:   {s_i, x_i}_{i=1}^{M} ~ p(s, x)  > Sample a batch of M support sets and target outputs
5:   for all s_i, x_i do
6:     q_i = PixelCNN_L(s_i, θ)  > Compute support set embedding as L-th layer features
7:     θ′_i = θ − α ∇_θ g(q_i, θ)  > Adapt θ using L_inner(s_i; θ) = g(q_i, θ)
8:   θ = θ − β ∇_θ Σ_i −log P(x_i; θ′_i)  > Update parameters using maximum likelihood
Algorithm 1 describes the training procedure for Meta PixelCNN. Note that in the outer loop step (line 8), the distribution parametrized by θ′ is not explicitly conditioned on the support set images, but implicitly through the weight adaptation from θ in line 7.
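A schematic Python sketch of one Meta PixelCNN training step in the spirit of Algorithm 1; the gradient functions are stand-ins that an autodiff framework would provide, and in the real model the outer gradient is taken through the inner adaptation step:

```python
import numpy as np

def meta_train_step(theta, batch, inner_loss_grad, nll_grad, alpha=0.1, beta=1e-4):
    """batch: list of (support_set, target) pairs.
    inner_loss_grad(support, theta) -> gradient of the learned inner loss g w.r.t. theta
    nll_grad(target, theta_prime)   -> gradient of -log P(target; theta_prime) w.r.t. theta"""
    outer_grad = np.zeros_like(theta)
    for support, target in batch:
        theta_prime = theta - alpha * inner_loss_grad(support, theta)  # line 7: adapt
        outer_grad += nll_grad(target, theta_prime)                    # line 8: accumulate
    return theta - beta * outer_grad                                   # outer update

# toy run with random stand-in "gradients", just to exercise the control flow
theta = np.zeros(10)
batch = [(None, None)] * 4
new_theta = meta_train_step(
    theta, batch,
    inner_loss_grad=lambda s, th: np.random.randn(*th.shape),
    nll_grad=lambda x, th: np.random.randn(*th.shape),
)
print(new_theta.shape)
```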
# 4 EXPERIMENTS | 1710.10304#17 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 18 | # 5 MULTI-SCALE STATISTICAL SIMILARITY FOR ASSESSING GAN RESULTS
In order to compare the results of one GAN to another, one needs to investigate a large number of images, which can be tedious, difficult, and subjective. Thus it is desirable to rely on automated methods that compute some indicative metric from large image collections. We noticed that existing methods such as MS-SSIM (Odena et al., 2017) find large-scale mode collapses reliably but fail to react to smaller effects such as loss of variation in colors or textures, and they also do not directly assess image quality in terms of similarity to the training set.
We build on the intuition that a successful generator will produce samples whose local image struc- ture is similar to the training set over all scales. We propose to study this by considering the multi- scale statistical similarity between distributions of local image patches drawn from Laplacian pyra- mid (Burt & Adelson, 1987) representations of generated and target images, starting at a low-pass resolution of 16 Ã 16 pixels. As per standard practice, the pyramid progressively doubles until the full resolution is reached, each successive level encoding the difference to an up-sampled version of the previous level. | 1710.10196#18 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 18 | # 4 EXPERIMENTS
In this section we describe experiments on image flipping, Omniglot, and Stanford Online Products. In all experiments, the support set encoder f(s) has the following structure: in parallel over support images, a 5 × 5 conv layer, followed by a sequence of 3 × 3 convolutions and max-pooling until the spatial dimension is 1. Finally, the support image encodings are concatenated and fed through two fully-connected layers to get the support set embedding.
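A hypothetical PyTorch sketch of such a support set encoder f(s), assuming 32×32 supports and illustrative layer widths (any architecture detail beyond the description above is an assumption):

```python
import torch
import torch.nn as nn

class SupportSetEncoder(nn.Module):
    """Stand-in for f(s): a 5x5 conv, then 3x3 conv + max-pool blocks until the
    spatial size is 1, then two fully connected layers over the concatenated codes."""
    def __init__(self, num_supports=4, in_ch=3, width=64, embed_dim=128, img_size=32):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 5, padding=2), nn.ReLU()]
        size = img_size  # assumed to be a power of two
        while size > 1:
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            size //= 2
        self.per_image = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.Linear(num_supports * width, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, s):  # s: (B, S, C, H, W)
        b, n, c, h, w = s.shape
        feats = self.per_image(s.reshape(b * n, c, h, w)).reshape(b, -1)  # concat per-image codes
        return self.head(feats)

enc = SupportSetEncoder()
print(enc(torch.randn(2, 4, 3, 32, 32)).shape)  # torch.Size([2, 128])
```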
IMAGENET FLIPPING
As a diagnostic task, we consider the problem of image ï¬ipping as few-shot learning. The âsupport setâ contains only one image and is simply the horizontally-ï¬ipped target image. A trivial algorithm exists for this problem, which of course is to simply copy pixel values directly from the support to the corresponding target location. We ï¬nd that the Attention PixelCNN did indeed learn to solve the task, however, interestingly, the baseline conditional PixelCNN and Meta PixelCNN did not. | 1710.10304#18 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 19 |

A single Laplacian pyramid level corresponds to a specific spatial frequency band. We randomly sample 16384 images and extract 128 descriptors from each level in the Laplacian pyramid, giving us 2^21 (2.1M) descriptors per level. Each descriptor is a 7 × 7 pixel neighborhood with 3 color channels, denoted by x ∈ R^{7×7×3} = R^{147}. We denote the patches from level l of the training set and generated set as {x_i^l} and {x̃_i^l}, respectively, normalize them w.r.t. the mean and standard deviation of each color channel, and then estimate the statistical similarity by computing their sliced Wasserstein distance SWD({x_i^l}, {x̃_i^l}), an efficiently computable randomized approximation to earthmovers distance, using 512 projections (Rabin et al., 2011).
Intuitively a small Wasserstein distance indicates that the distribution of the patches is similar, meaning that the training images and generator samples appear similar in both appearance and variation at this spatial resolution. In particular, the distance between the patch sets extracted from the lowest-resolution 16 × 16 images indicates similarity in large-scale image structures, while the finest-level patches encode information about pixel-level attributes such as sharpness of edges and noise.
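A compact NumPy sketch of the sliced Wasserstein approximation described here, assuming two equally sized sets of patch descriptors; it is an illustration rather than the authors' exact implementation:

```python
import numpy as np

def sliced_wasserstein(a, b, n_projections=512, rng=np.random):
    """Randomized approximation of the Wasserstein distance between two patch sets.
    a, b: (n_patches, dim) descriptor matrices, e.g. dim = 7*7*3 = 147."""
    dim = a.shape[1]
    dirs = rng.standard_normal((dim, n_projections))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)   # random unit projections
    pa, pb = np.sort(a @ dirs, axis=0), np.sort(b @ dirs, axis=0)
    return np.abs(pa - pb).mean()                         # 1-D Wasserstein, averaged over projections

real = np.random.randn(4096, 147)
fake = np.random.randn(4096, 147) + 0.5
print(sliced_wasserstein(real, fake))
```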
# 6 EXPERIMENTS | 1710.10196#19 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 19 |

We trained the model on ImageNet (Deng et al., 2009) images resized to 48 × 48 for 30K steps using RMSProp with learning rate 1e-4. The network was a 16-layer PixelCNN with 128-dimensional feature maps at each layer, with skip connections to a 256-dimensional penultimate layer before pixel prediction. The baseline PixelCNN is conditioned on the 128-dimensional encoding of the flipped image at each layer; f(s) = f(x′), where x′ is the mirror image of x. The Attention PixelCNN network is exactly the same for the first 8 layers, and the latter 8 layers are conditioned also on attention features f_t(s, x_<t) = f_t(x′, x_<t) as described in section 3.1.
Figure 3: Horizontally ï¬ipping ImageNet images. The network using attention learns to mirror, while the network without attention does not. | 1710.10304#19 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learns across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 20 | # 6 EXPERIMENTS
In this section we discuss a set of experiments that we conducted to evaluate the quality of our results. Please refer to Appendix A for a detailed description of our network structures and training configurations. We also invite the reader to consult the accompanying video (https://youtu.be/G06dEcZ-QTg) for additional result images and latent space interpolations. In this section we will distinguish between the network structure (e.g., convolutional layers, resizing), training configuration (various normalization layers, minibatch-related operations), and training loss (WGAN-GP, LSGAN).
IMPORTANCE OF INDIVIDUAL CONTRIBUTIONS IN TERMS OF STATISTICAL SIMILARITY | 1710.10196#20 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
1710.10304 | 20 | Figure 3: Horizontally flipping ImageNet images. The network using attention learns to mirror, while the network without attention does not.
Figure 3 shows the qualitative results for several validation set images. We observe that the baseline model without attention completely fails to flip the image or even produce a similar image. With attention, the model learns to consistently apply the horizontal flip operation. However, it is not entirely perfect; one can observe slight mistakes on the upper and left borders. This makes sense because in those regions, the model has the least context to predict pixel values. We also ran the experiment on 24 × 24 images; see figure 6 in the appendix. Even in this simplified setting, neither the baseline conditional PixelCNN nor Meta PixelCNN learned to flip the image.
Quantitatively, we also observe a clear difference between the baseline and the attention model. The baseline achieves 2.64 nats/dim on the training set and 2.65 on the validation set. The attention model achieves 0.89 and 0.90 nats/dim, respectively. During sampling, Attention PixelCNN learns a simple copy operation in which the attention head proceeds in right-to-left raster order over the input, while the output is written in left-to-right raster order.
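A minimal NumPy sketch of the flip task and the nats/dim figures quoted above; the function names and array shapes are illustrative assumptions, not taken from the authors' code.

```python
import numpy as np

def make_flip_task(images):
    """Build (support, target) pairs for the horizontal-flip task described
    above: the support image is the original, the target is its mirror.
    `images` is assumed to have shape [N, H, W, C]."""
    support = images
    target = images[:, :, ::-1, :]  # reverse the width axis
    return support, target

def nats_per_dim(total_log_likelihood, image_shape):
    """Convert a per-image log-likelihood (in nats) into the negative
    log-likelihood per dimension reported above (e.g., ~0.89 nats/dim)."""
    return -total_log_likelihood / float(np.prod(image_shape))

# Toy usage with random data standing in for ImageNet crops.
imgs = np.random.rand(2, 48, 48, 3)
support, target = make_flip_task(imgs)
assert np.allclose(support[:, :, ::-1, :], target)
print(nats_per_dim(-6000.0, (48, 48, 3)))  # roughly 0.87 nats/dim
```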
# 4.2 OMNIGLOT | 1710.10304#20 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learn across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the-art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 21 | IMPORTANCE OF INDIVIDUAL CONTRIBUTIONS IN TERMS OF STATISTICAL SIMILARITY
We will first use the sliced Wasserstein distance (SWD) and multi-scale structural similarity (MS-SSIM) (Odena et al., 2017) to evaluate the importance of our individual contributions, and also perceptually validate the metrics themselves. We will do this by building on top of a previous state-of-the-art loss function (WGAN-GP) and training configuration (Gulrajani et al., 2017) in an unsupervised setting using CELEBA (Liu et al., 2015) and LSUN BEDROOM (Yu et al., 2015) datasets in 128^2
| 1710.10196#21 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
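The row above compares training configurations with the sliced Wasserstein distance (SWD). Below is a minimal NumPy sketch of a generic sliced Wasserstein estimator between two equally sized descriptor sets; the paper's Laplacian-pyramid patch extraction and per-resolution evaluation are not reproduced here, and the function name and shapes are assumptions.

```python
import numpy as np

def sliced_wasserstein(a, b, num_projections=128, seed=0):
    """Approximate sliced Wasserstein distance between two point sets
    a and b of identical shape [N, D]: project both onto random unit
    directions, sort the 1-D projections, and average the absolute
    transport costs. Generic estimator, shown only to illustrate the
    metric named above."""
    rng = np.random.default_rng(seed)
    directions = rng.normal(size=(a.shape[1], num_projections))
    directions /= np.linalg.norm(directions, axis=0, keepdims=True)
    proj_a = np.sort(a @ directions, axis=0)
    proj_b = np.sort(b @ directions, axis=0)
    return float(np.mean(np.abs(proj_a - proj_b)))

# Example: descriptor clouds drawn from two slightly different Gaussians.
x = np.random.default_rng(1).normal(0.0, 1.0, size=(1024, 7 * 7 * 3))
y = np.random.default_rng(2).normal(0.1, 1.0, size=(1024, 7 * 7 * 3))
print(sliced_wasserstein(x, y))
```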
1710.10304 | 21 | # 4.2 OMNIGLOT
In this section we benchmark our model on Omniglot (Lake et al., 2013), and analyze the learned behavior of the attention module. We trained the model on 26 × 26 binarized images and a 45 − 5 split into training and testing character alphabets as in Bornschein et al. (2017).
To avoid over-fitting, we used a very small network architecture. It had a total of 12 layers with 24 planes each, with skip connections to a penultimate layer with 32 planes. As before, the baseline model conditioned each pixel prediction on a single global vector computed from the support set. The attention model is the same for the first half (6 layers), and for the second half it also conditions on attention features.
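The architecture described above, written out as a small configuration sketch; the field names are illustrative and do not come from the authors' code.

```python
# Rough shape of the small Omniglot network described above, as a plain
# configuration dictionary (illustrative field names, not the paper's code).
omniglot_pixelcnn_config = {
    "image_size": (26, 26, 1),        # binarized Omniglot inputs
    "num_layers": 12,                 # 12 convolutional layers in total
    "planes_per_layer": 24,           # 24 feature planes each
    "penultimate_planes": 32,         # skip connections gather into 32 planes
    "attention_from_layer": 6,        # second half also conditions on attention
    "baseline_conditioning": "single global vector from the support set",
}
print(omniglot_pixelcnn_config["num_layers"])
```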
The task is set up as follows: the network sees several images of a character from the same alphabet, and then tries to induce a density model of that character. We evaluate the likelihood on a held-out example image of that same character from the same alphabet.
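A minimal sketch of the episode construction just described, assuming the data is organized as a nested mapping from alphabet to character to a list of images; the data structure and function name are assumptions for illustration.

```python
import random

def sample_episode(alphabets, num_support=4, rng=None):
    """Sample one few-shot density-estimation episode as set up above:
    choose an alphabet and a character, take `num_support` images of that
    character as the support set, and hold out one more image of the same
    character as the evaluation target.
    `alphabets` is assumed to look like {alphabet: {character: [images]}}."""
    rng = rng or random.Random(0)
    alphabet = rng.choice(sorted(alphabets))
    character = rng.choice(sorted(alphabets[alphabet]))
    images = list(alphabets[alphabet][character])
    rng.shuffle(images)
    return images[:num_support], images[num_support]

# Toy usage with placeholder "images" (here just string identifiers).
data = {"Futurama": {"char_01": [f"img_{i}" for i in range(20)]}}
support, target = sample_episode(data)
print(support, target)
```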
All PixelCNN variants achieve state-of-the-art likelihood results (see table 1). Attention PixelCNN significantly outperforms the other methods, including PixelCNN without attention, across 1, 2, 4
| 1710.10304#21 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learn across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the-art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |
1710.10196 | 22 | CELEBA LSUN BEDROOM Training configuration (a) Gulrajani et al. (2017) (b) + Progressive growing (c) + Small minibatch (d) + Revised training parameters (e∗) + Minibatch discrimination (e) Minibatch stddev (f) + Equalized learning rate (g) + Pixelwise normalization (h) Converged Sliced Wasserstein distance ×10³ MS-SSIM Sliced Wasserstein distance ×10³ MS-SSIM 16 Avg 32 128 9.28 7.62 12.99 4.62 4.28 3.78 75.42 41.33 41.62 26.57 46.23 8.07 9.20 9.84 10.76 7.04 13.94 4.39 4.42 3.56 4.06 2.96 2.42 64 7.79 2.64 16 Avg 64 8.03 14.48 11.25 11.97 10.51 7.09 7.60 9.64 7.40 6.27 72.73 40.16 42.75 42.46 49.52 6.54 9.63 3.65 7.39 8.43 5.32 11.88 10.29 6.48 9.64 3.27 7.77 2.71 3.61 4.02 6.44 | 1710.10196#22 | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks.
The key idea is to grow both the generator and discriminator progressively:
starting from a low resolution, we add new layers that model increasingly fine
details as training progresses. This both speeds the training up and greatly
stabilizes it, allowing us to produce images of unprecedented quality, e.g.,
CelebA images at 1024^2. We also propose a simple way to increase the variation
in generated images, and achieve a record inception score of 8.80 in
unsupervised CIFAR10. Additionally, we describe several implementation details
that are important for discouraging unhealthy competition between the generator
and discriminator. Finally, we suggest a new metric for evaluating GAN results,
both in terms of image quality and variation. As an additional contribution, we
construct a higher-quality version of the CelebA dataset. | http://arxiv.org/pdf/1710.10196 | Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen | cs.NE, cs.LG, stat.ML | Final ICLR 2018 version | null | cs.NE | 20171027 | 20180226 | [] |
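Configuration (e) in the table above adds a minibatch standard-deviation feature. Below is a minimal NumPy sketch of the basic idea, assuming the simple ungrouped variant (per-feature std over the batch, averaged to one scalar, appended as an extra constant channel); it is an illustration, not the paper's exact implementation.

```python
import numpy as np

def minibatch_stddev(x, eps=1e-8):
    """Append a minibatch standard-deviation channel to x of shape
    [N, H, W, C]: compute the std of every feature over the batch,
    average it into one scalar, and broadcast that scalar as an extra
    constant feature map. Simple ungrouped variant, for illustration."""
    std = np.sqrt(np.var(x, axis=0) + eps)          # [H, W, C]
    scalar = std.mean()                             # one summary statistic
    extra = np.full(x.shape[:3] + (1,), scalar)     # [N, H, W, 1]
    return np.concatenate([x, extra], axis=-1)

x = np.random.rand(8, 4, 4, 16)
print(minibatch_stddev(x).shape)  # (8, 4, 4, 17)
```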
1710.10304 | 22 |
Number of support set examples
Model                        1              2              4              8
Bornschein et al. (2017)     0.128(–)       0.123(–)       0.117(–)       – (–)
Gregor et al. (2016)         0.079(0.063)   0.076(0.060)   0.076(0.060)   0.076(0.057)
Conditional PixelCNN         0.077(0.070)   0.077(0.068)   0.077(0.067)   0.076(0.065)
Attention PixelCNN           0.071(0.066)   0.068(0.064)   0.066(0.062)   0.064(0.060)
Table 1: Omniglot test(train) few-shot density estimation NLL in nats/dim. Bornschein et al. (2017) refers to Variational Memory Addressing and Gregor et al. (2016) to ConvDRAW.
and 8-shot learning. PixelCNN and Attention PixelCNN models are also fast to train: 10K iterations with batch size 32 took under an hour using NVidia Tesla K80 GPUs. | 1710.10304#22 | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet.
However, such models require many thousands of gradient-based weight updates
and unique image examples for training. Ideally, the models would rapidly learn
visual concepts from only a handful of examples, similar to the manner in which
humans learn across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with
autoregressive models to enable effective few-shot density estimation. Our
proposed modifications to PixelCNN result in state-of-the-art few-shot density
estimation on the Omniglot dataset. Furthermore, we visualize the learned
attention policy and find that it learns intuitive algorithms for simple tasks
such as image mirroring on ImageNet and handwriting on Omniglot without
supervision. Finally, we extend the model to natural images and demonstrate
few-shot image generation on the Stanford Online Products dataset. | http://arxiv.org/pdf/1710.10304 | Scott Reed, Yutian Chen, Thomas Paine, Aäron van den Oord, S. M. Ali Eslami, Danilo Rezende, Oriol Vinyals, Nando de Freitas | cs.NE, cs.CV | null | null | cs.NE | 20171027 | 20180228 | [
{
"id": "1705.03122"
},
{
"id": "1709.04905"
},
{
"id": "1703.07326"
}
] |